\section{Introduction}\label{sec:intro}
The standard model (SM) of elementary particles~\cite{glashow, weinberg, salam} provides a successful framework that describes a vast
range of particle processes involving weak, electromagnetic, and strong interactions. It is widely believed, however, that the SM is
incomplete since, \eg, it fails to incorporate the gravitational force, has no dark matter candidate, and has too many parameters
whose values cannot be deduced from the theory. Many models of physics beyond the SM have been proposed to address these
problems. A simple way of extending the SM gauge structure is to include an additional U(1) group, which requires an associated
neutral gauge boson, usually labeled as \zp\,\cite{langacker,leike, cvetic, diener}. Although most studies have assumed
generation-independent gauge couplings for \zp, models in which the \zp couples preferentially to third generation fermions have also
been proposed\,\cite{diener,chivukula,plumacher,malkawi}. The sensitivity of the traditional searches for \zp production using
\EE and \MM final states may be substantially reduced in such non-universal scenarios, motivating the exploration of other allowed
final states. The sequential SM (SSM) includes a neutral gauge boson, \ensuremath{\zp_{\textrm{SSM}}}\xspace, with the same couplings to quarks and leptons as
the SM \Z boson. Although not gauge invariant, this model has been traditionally considered by experiments studying high-mass
resonances. Other models, such as the superstring-inspired $E_{6}$ model, have more complex group structures, $E_{6}\to
SO(10)\times U(1)_{\psi}$, with a corresponding neutral gauge boson denoted as \ensuremath{\zp_{\psi}}\xspace\,\cite{hewett}. In this Letter we present
a search for heavy resonances that decay into pairs of $\tau$ leptons using the \ensuremath{\zp_{\textrm{SSM}}}\xspace and \ensuremath{\zp_{\psi}}\xspace models as benchmarks. A
previous search reported by the CDF collaboration has ruled out a \ensuremath{\zp_{\textrm{SSM}}}\xspace decaying into \ensuremath{\tau^+\tau^-}\xspace with SM couplings with mass below
399\GeV\,\cite{acosta}.
About one-third of the time $\tau$ leptons decay into lighter charged leptons plus neutrinos, whereas the other two-thirds of the
time $\tau$ leptons decay into a hadronic system with one, three, or five charged mesons, which can be accompanied by neutral pions
in addition to the $\tau$ neutrino. We refer to the leptonic and hadronic decay channels as \ensuremath{\tau_{\ell}}\xspace ($\ell=e,\mu$) and \ensuremath{\tau_{\textrm{h}}}\xspace,
respectively. Four \ensuremath{\tau^+\tau^-}\xspace final states, \ensuremath{\tau_{\Pe}\tau_\mu}\xspace, \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace, \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace, and \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace, are studied using a sample of proton-proton collisions
at $\sqrt{s}=7$\TeV recorded by the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC). The data sample
corresponds to an integrated luminosity of $4.94 \pm 0.11\fbinv$.
\section{The CMS Experiment}\label{sec:cms}
The central feature of the CMS apparatus is a superconducting solenoid, of 6~m internal diameter, providing a field of 3.8\,T. Within
the field volume are the silicon pixel and strip tracker, the lead-tungstate (PbWO$_4$) crystal electromagnetic calorimeter (ECAL),
and the brass/scintillator hadron calorimeter (HCAL). Muons are measured with detection planes made using three technologies: drift
tubes, cathode strip chambers, and resistive plate chambers. Extensive forward calorimetry complements the coverage provided by the
barrel and endcap detectors.
CMS uses a right-handed coordinate system, with the origin at the nominal interaction point, the $x$ axis pointing to the
center of the LHC ring, the $y$ axis pointing up (perpendicular to the plane of the LHC ring), and the $z$ axis along the
counterclockwise-beam direction. The polar angle, $\theta$, is measured from the positive $z$ axis and the azimuthal angle,
$\phi$, is measured in the $x$-$y$ plane.
The inner tracker measures charged particles within the pseudorapidity range $|\eta| < 2.5$, where
$\eta = -\ln[\tan(\theta/2)]$; muons are measured within $|\eta|< 2.4$.
A more detailed description of the CMS detector can be found elsewhere~\cite{CMS}.
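The pseudorapidity definition above is straightforward to check numerically. A minimal sketch (illustrative only, not CMS software; function names are ours):

```python
import math

def pseudorapidity(theta):
    """Pseudorapidity eta = -ln[tan(theta/2)], with theta in radians."""
    return -math.log(math.tan(theta / 2.0))

def polar_angle(eta):
    """Inverse mapping: theta = 2 * atan(exp(-eta))."""
    return 2.0 * math.atan(math.exp(-eta))

# theta = pi/2 (perpendicular to the beam) corresponds to eta = 0;
# the tracker edge |eta| = 2.5 corresponds to theta of about 9.4 degrees.
```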
\section{Lepton Reconstruction and Identification}\label{sec:leptonRecoId}
Electrons are reconstructed by combining clusters in the ECAL with tracks in the inner tracker fitted with a Gaussian sum filter
algorithm\,\cite{electronTrigger}. Electron candidates are required to have good energy-momentum and spatial match between the
ECAL cluster and the inner track. In addition, the ratio of the energies measured in HCAL and ECAL must be consistent with an
electron signature.
Muons are reconstructed by matching hits found in the muon detectors to tracks in the inner tracker\,\cite{muonTrigger}. Quality
requirements, based on the minimum number of hits in the silicon, pixel, and muon detectors, are applied to suppress backgrounds
from punch-through and decay-in-flight pions. A requirement on the maximum transverse impact parameter with respect to the
beamspot (2\mm) largely reduces the contamination from cosmic-ray muons.
A particle-flow (PF) technique\,\cite{PFTausReco} is used for the reconstruction of hadronically decaying $\tau$ candidates. In the
PF approach, information from all subdetectors is combined to reconstruct and identify all final-state particles produced in the
collision. The particles are classified as either charged hadrons, neutral hadrons, electrons, muons, or photons. These particles
are used to reconstruct \ensuremath{\tau_{\textrm{h}}}\xspace candidates using the hadron plus strip (HPS) algorithm\,\cite{TauID}, which is designed to optimize
the performance of \ensuremath{\tau_{\textrm{h}}}\xspace identification and reconstruction by considering specific $\tau$-lepton decay modes.
\section{Signal and Background MC Samples}\label{sec:samples} Signal and background Monte Carlo (MC) samples are produced with the
\PYTHIA 6.4.22\,\cite{PYTHIA} and \MADGRAPH\,\cite{Madgraph} generators using the Z2 tune\,\cite{Z2} and the CTEQ6L1 parton
distribution function (PDF) set\,\cite{CTEQL1}. The \TAUOLA package\,\cite{TAUOLA} is used to decay the generated $\tau$ leptons.
All generated objects are input into a detailed {\GEANTfour}\,\cite{Geant} simulation of the CMS detector.
The \ensuremath{\zp_{\textrm{SSM}}}\xspace and \ensuremath{\zp_{\psi}}\xspace signal samples are generated with seven different masses: 350, 500, 750, 1000, 1250, 1500, and
1750\GeV. Their corresponding cross sections are calculated to leading order.
The most important sources of background are the irreducible Drell--Yan process (\ensuremath{\mathrm{Z}\to\tau^+\tau^-}\xspace), production of W bosons in association with
one or more jets (W+jets\xspace), \ttbar, dibosons (WW, WZ), and QCD multijet production. Although the \ensuremath{\mathrm{Z}\to\tau^+\tau^-}\xspace background peaks around
the \Z mass, its tail extends to the region where a high-mass resonance might present itself. The W+jets\xspace events are characterized by
an isolated lepton from the decay of the W boson and an uncorrelated jet misidentified as a light lepton or a \ensuremath{\tau_{\textrm{h}}}\xspace. Background
from \ttbar events is usually accompanied by one or two b jets, in addition to genuine, isolated leptons or \ensuremath{\tau_{\textrm{h}}}\xspace. Background from
diboson events produces both genuine, isolated leptons, when the gauge bosons decay leptonically, and a misidentified \ensuremath{\tau_{\textrm{h}}}\xspace when
they decay hadronically. Finally, QCD multijet events are characterized by non-collimated jets with a high multiplicity of particles, which
can be misidentified as charged light leptons or \ensuremath{\tau_{\textrm{h}}}\xspace candidates.
\section{Event Selection}\label{sec:selections}
A new heavy neutral gauge boson decaying into $\tau$ pairs is characterized by two high-\pt, oppositely charged,
isolated, and almost back-to-back (in the transverse plane) $\tau$ candidates.
CMS uses a two-level trigger system, consisting of the level-one (\Lone) trigger and the high-level trigger (HLT). The events
selected for this analysis are required to have at least two trigger objects: an electron and a muon, a \ensuremath{\tau_{\textrm{h}}}\xspace candidate and a light
charged lepton, or two \ensuremath{\tau_{\textrm{h}}}\xspace candidates for the \ensuremath{\tau_{\Pe}\tau_\mu}\xspace, \ensuremath{\tau_{\ell}\tau_{\textrm{h}}}\xspace, and \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace final states, respectively. Details of the
electron and muon triggers can be found in Refs.~\cite{electronTrigger,muonTrigger}. The \ensuremath{\tau_{\textrm{h}}}\xspace trigger algorithm requires the presence
of jets reconstructed at \Lone using a $3{\times}3$ combination of calorimeter trigger regions. These jets serve as seeds for the
HLT, where a simplified version of the PF algorithm is used to build the HLT \ensuremath{\tau_{\textrm{h}}}\xspace candidate. The HLT \ensuremath{\tau_{\textrm{h}}}\xspace four-momentum is
reconstructed as the sum of the four-momenta of all particles in the jet with \pt above 0.5\GeV in a cone of radius $\DR =
\sqrt{(\Delta\eta)^2 + (\Delta\phi)^2} = 0.2$ around the direction of the leading particle.
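The cone variable $\DR$ combines separations in $\eta$ and $\phi$. A small illustrative helper (assumed names, not the actual HLT code), in which the azimuthal difference must be wrapped into $[-\pi,\pi]$:

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [-pi, pi]."""
    return (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi

def delta_r(eta1, phi1, eta2, phi2):
    """DeltaR = sqrt((delta eta)^2 + (delta phi)^2)."""
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))

# Only particles with DeltaR < 0.2 relative to the leading particle
# (and pt above 0.5 GeV) enter the HLT tau four-momentum sum.
```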
The event selection requirements are optimized, for each individual final state, to maximize the sensitivity reach of the search.
Events are required to have at least two $\tau$ candidates satisfying the \pt, $\eta$, and isolation requirements presented in
Table~\ref{tab:leptonId}. The isolation requirement is the strongest discriminator between genuine $\tau$ candidates and those from
misidentified QCD jets. The leptonic $\tau$ candidates (\ensuremath{\tau_{\Pe}}\xspace, \ensuremath{\tau_\mu}\xspace) are required to pass both track and ECAL isolation requirements.
Track isolation (TrkIso) is defined as the sum of the \pt of the tracks, as measured by the tracking system, within an isolation cone
of radius \DR~centered around the charged light lepton track. Similarly, ECAL isolation (EcalIso) measures the amount of energy
deposited in the ECAL within the isolation cone. In both cases the contribution from the charged light lepton candidate is removed from
the sum. For \ensuremath{\tau_{\textrm{h}}}\xspace candidates, the HPS algorithm provides three isolation definitions. The ``loose'' $\tau$ definition rejects a
$\tau$ candidate if one or more charged hadrons with $\pt \geq 1.0\GeV$ or one or more photons with transverse energy $\et \geq
1.5\GeV$ are found within the isolation cone. The ``medium'' and ``tight'' definitions require no charged hadrons or photons with \pt
greater than 0.8 and 0.5\GeV, respectively, within the isolation cone. Additionally, \ensuremath{\tau_{\textrm{h}}}\xspace candidates are required to fail electron
and muon requirements.
Pairs are formed from oppositely charged $\tau$ candidates with $\DR > 0.7$. In addition, a back-to-back requirement on the $\tau$
pairs is imposed by selecting candidates with $\cos\Delta\phi(\tau_{1},\tau_{2}) < -0.95$, where $\Delta\phi(\tau_{1},\tau_{2})$ is
the difference in the azimuthal angle between the $\tau$ candidates. The presence of neutrinos in the final state precludes a full
reconstruction of the mass of the \ensuremath{\tau^+\tau^-}\xspace system. We use the visible $\tau$-decay products and the missing transverse energy, \MET,
defined as the magnitude of the negative of the vector sum of the transverse momentum of all PF objects in the event\,\cite{CMSMet},
to reconstruct the effective visible mass:
\ifthenelse{\boolean{cms@external}}{
\begin{multline}
M(\tau_{1},\tau_{2},\MET) = \\
\sqrt{(E_{\tau_{1}}+E_{\tau_{2}}+\MET)^{2}-
(\overrightarrow{p_{\tau_{1}}}+\overrightarrow{p_{\tau_{2}}}+\overrightarrow{\MET})^{2}}.
\label{eq:ditaumass}
\end{multline}
}{
\begin{equation}
M(\tau_{1},\tau_{2},\MET) = \sqrt{(E_{\tau_{1}}+E_{\tau_{2}}+\MET)^{2}-
(\overrightarrow{p_{\tau_{1}}}+\overrightarrow{p_{\tau_{2}}}+\overrightarrow{\MET})^{2}}.
\label{eq:ditaumass}
\end{equation}
}
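The effective visible mass above can be sketched numerically as follows (illustrative Python with hypothetical variable names; \MET enters as a transverse vector with zero longitudinal component):

```python
import math

def effective_mass(e1, p1, e2, p2, met):
    """Effective visible mass M(tau1, tau2, MET).

    e1, e2 : visible energies of the two tau candidates (GeV)
    p1, p2 : visible momentum 3-vectors (px, py, pz) in GeV
    met    : missing transverse momentum 2-vector (px, py) in GeV
    """
    met_mag = math.hypot(met[0], met[1])
    e_tot = e1 + e2 + met_mag
    px = p1[0] + p2[0] + met[0]
    py = p1[1] + p2[1] + met[1]
    pz = p1[2] + p2[2]  # MET carries no longitudinal component
    return math.sqrt(e_tot**2 - (px**2 + py**2 + pz**2))
```

For two back-to-back massless visible systems of 100\GeV each and no \MET, this reduces to the ordinary invariant mass, 200\GeV.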
When compared with the mass obtained by using only the visible decay products of the \ensuremath{\tau^+\tau^-}\xspace system, the effective mass provides better
discrimination against backgrounds. The width and central value of the effective visible mass distribution, which is offset from the
true resonance mass, depend on the true mass of the new resonance. The width-to-mass ratio of the effective mass distribution
reconstructed using Eq.~(\ref{eq:ditaumass}) varies from $\sim 25\%$ for a \zp with a generated mass of 350\GeV to $\sim 50\%$ for a
\zp with generated mass of 1750\GeV.
\begin{table*}[htbp]
\begin{center}
\caption{Lepton \pt, $\eta$, and isolation requirements for each \ensuremath{\tau^+\tau^-}\xspace final state.}
\begin{tabular}{|l|l|c|c|c|c|}
\hline\hline
\multicolumn{2}{|c|}{Selection} & \ensuremath{\tau_{\Pe}\tau_\mu}\xspace & \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace \\ \hline
\multirow{5}{*}{\ensuremath{\tau_{\Pe}}\xspace} & Min \pt (\GeV) & 15 & 20 & -- & -- \\
& Max $|\eta|$ & 2.1 & 2.1 & -- & -- \\
& Iso \DR & 0.3 & 0.4 & -- & -- \\
& Max TrkIso (\GeV) & 3.5 & 3.0 & -- & -- \\
& Max EcalIso (\GeV) & 3.0 & 4.5 & -- & -- \\\hline
\multirow{5}{*}{\ensuremath{\tau_\mu}\xspace} & Min \pt (\GeV) & 20 & -- & 20 & -- \\
& Max $|\eta|$ & 2.1 & -- & 2.1 & -- \\
& Iso \DR & 0.5 & -- & 0.5 & -- \\
& Max TrkIso (\GeV) & 3.5 & -- & 1.0 & -- \\
& Max EcalIso (\GeV) & 3.0 & -- & 1.0 & -- \\\hline
\multirow{4}{*}{\ensuremath{\tau_{\textrm{h}}}\xspace} & Min \pt (\GeV) & -- & 20 & 20 & 35 \\
& Max $|\eta|$ & -- & 2.1 & 2.1 & 2.1 \\
& Iso \DR & -- & 0.5 & 0.5 & 0.5 \\
& Iso definition & -- & Medium & Medium & Loose \\
\hline\hline
\end{tabular}\label{tab:leptonId}
\end{center}
\end{table*}
Further selection requirements are applied to suppress background contributions. To discriminate against QCD jets, \MET is
required to be greater than 20\GeV for the \ensuremath{\tau_{\Pe}\tau_\mu}\xspace final state and greater than 30\GeV for the \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace, \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace, and \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace final
states. Furthermore, we consider only one-prong $\ensuremath{\tau_{\textrm{h}}}\xspace$ candidates in the \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace final state.
To reduce the contamination from collisions in which W bosons are produced, events are required to be consistent with the signature
of a particle decaying into two $\tau$ leptons. We define a unit vector ($\hat{\zeta}$) along the bisector of the visible \pt vectors of
the $\tau$ candidates, and two projection variables\,\cite{CDFMSSM}:
\begin{equation}
p_{\zeta}^{vis} = \overrightarrow{p}_{\tau_{1}}^{vis} \cdot \hat{\zeta} + \overrightarrow{p}_{\tau_{2}}^{vis} \cdot \hat{\zeta},
\label{eq:zeta}
\end{equation}
\begin{equation}
p_{\zeta} = p_{\zeta}^{vis} + \overrightarrow{\MET} \cdot \hat{\zeta}.
\label{eq:zetavis}
\end{equation}
We require that $(1.25~\times~p_{\zeta}^{vis}) - p_{\zeta} < 10\GeV$ for the \ensuremath{\tau_{\Pe}\tau_\mu}\xspace final state and
$(0.875~\times~p_{\zeta}^{vis}) - p_{\zeta} < 7\GeV$ for the \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace, \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace, and \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace final states.
Hereafter, we will refer to the inequalities based on the $p_{\zeta}$~and~$p_{\zeta}^{vis}$ variables as the \ensuremath{p_{\hat{\zeta}}}\xspace
requirements.
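The bisector and projection variables above can be sketched as follows (illustrative code with assumed names; the exactly back-to-back configuration, where the bisector is undefined, is excluded by the $\cos\Delta\phi(\tau_{1},\tau_{2}) < -0.95$ requirement):

```python
import math

def zeta_projections(p1_vis, p2_vis, met):
    """Compute (p_zeta_vis, p_zeta) from the visible transverse momentum
    2-vectors of the two tau candidates and the MET 2-vector (GeV)."""
    # Unit bisector of the two visible pt directions.
    n1 = math.hypot(*p1_vis)
    n2 = math.hypot(*p2_vis)
    bx = p1_vis[0] / n1 + p2_vis[0] / n2
    by = p1_vis[1] / n1 + p2_vis[1] / n2
    nb = math.hypot(bx, by)  # nonzero unless exactly back to back
    zx, zy = bx / nb, by / nb
    # Projections of the visible system and of visible + MET onto the bisector.
    p_zeta_vis = (p1_vis[0] + p2_vis[0]) * zx + (p1_vis[1] + p2_vis[1]) * zy
    p_zeta = p_zeta_vis + met[0] * zx + met[1] * zy
    return p_zeta_vis, p_zeta
```

In a genuine di-$\tau$ event the neutrinos follow the visible decay products, so \MET projects positively onto $\hat{\zeta}$ and $p_{\zeta}$ is large; in W+jets\xspace events \MET tends to point away from the bisector, which is what the \ensuremath{p_{\hat{\zeta}}}\xspace requirements exploit.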
Any remaining contribution from \ttbar events is minimized by selecting events where none of the jets has been identified as a b
jet. A jet is identified as a b jet if it has at least two tracks with impact parameter significance, defined as the ratio
between the impact parameter and its estimated uncertainty, greater than 3.3\,\cite{CMS_PAS_BTV_10-001}. In order to further reduce
the \ttbar contribution in the \ensuremath{\tau_{\Pe}\tau_\mu}\xspace final state, an additional requirement is imposed on the difference in the azimuthal angle between the
highest-\pt (leading) lepton ($\ell^{\textrm{lead}}$) and the \MET vector ($\cos\Delta\phi(\ell^{\textrm{lead}},\MET) < -0.6$).
Table~\ref{table:acceptance} summarizes the signal selection efficiency after all requirements have been applied for various
\ensuremath{\zp_{\textrm{SSM}}}\xspace masses. The uncertainties in the selection efficiencies are statistical only. The \ensuremath{\zp_{\psi}}\xspace model has comparable
efficiencies.
\begin{table*}[htbp]
\caption{Signal selection efficiency for \ensuremath{\zp_{\textrm{SSM}}}\xspace decaying into \ensuremath{\tau_{\Pe}\tau_\mu}\xspace, \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace, \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace, and \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace final states.}
\centering{
\begin{tabular}{| l | c | c | c | c |}
\hline\hline
\ensuremath{\zp_{\textrm{SSM}}}\xspace Mass (\GeV) & \ensuremath{\tau_{\Pe}\tau_\mu}\xspace & \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace \\ [0.5ex]\hline
350 & $0.142 \pm 0.007$ & $0.050 \pm 0.004$ & $0.08 \pm 0.01$ & $0.014 \pm 0.001$ \\
500 & $0.199 \pm 0.008$ & $0.064 \pm 0.005$ & $0.13 \pm 0.01$ & $0.022 \pm 0.001$ \\
750 & $0.257 \pm 0.008$ & $0.083 \pm 0.007$ & $0.15 \pm 0.01$ & $0.030 \pm 0.001$ \\
1000 & $0.281 \pm 0.009$ & $0.087 \pm 0.007$ & $0.17 \pm 0.01$ & $0.034 \pm 0.001$ \\
\hline
\hline
\end{tabular}
}
\label{table:acceptance}
\end{table*}
\section{Background Estimation}\label{sec:extraction}
The estimation of the background contributions in the signal region is derived from data wherever possible. The general
strategy is to modify the standard selection requirements to select samples enriched with background events. These control
regions are used to measure the efficiencies for background candidates to pass the signal selection requirements. In cases
where the above approach is not feasible, data-to-MC scale factors, defined as a ratio of efficiencies, are used to correct
the expected contributions obtained from the simulation samples.
The number of QCD events in the signal region for the \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace and \ensuremath{\tau_{\ell}\tau_{\textrm{h}}}\xspace final states is estimated from a sample of
like-sign $\tau\tau$ candidates scaled by the opposite-sign to like-sign ratio ($R_{OS/LS}$) observed in the data. For the \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace
final state, $R_{OS/LS}$ is measured for events with transverse mass $M_{T}(\ensuremath{\tau_{\textrm{h}}}\xspace^{\textrm{lead}},\MET)$ between 15 and 90\GeV,
where $\ensuremath{\tau_{\textrm{h}}}\xspace^{\textrm{lead}}$ is the highest-\pt \ensuremath{\tau_{\textrm{h}}}\xspace candidate and $M_{T}(\ensuremath{\tau_{\textrm{h}}}\xspace^{\textrm{lead}},\MET) =
\sqrt{2\pt^{\ensuremath{\tau_{\textrm{h}}}\xspace^{\textrm{lead}}}\MET(1-\cos\Delta \phi)}$. For the \ensuremath{\tau_{\ell}\tau_{\textrm{h}}}\xspace final states $R_{OS/LS}$ is measured from a
sample in which the muon TrkIso is between 4 and 15\GeV. For the \ensuremath{\tau_{\Pe}\tau_\mu}\xspace final state, the QCD background is estimated using a sample
of non-isolated electrons and muons. The isolation efficiency, needed to
extrapolate into the signal region, is measured from a sample of like-sign candidates.
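The transverse mass entering these control-region definitions can be sketched as (illustrative only):

```python
import math

def transverse_mass(pt_lepton, met, dphi):
    """M_T = sqrt(2 * pt * MET * (1 - cos(dphi))), all momenta in GeV."""
    return math.sqrt(2.0 * pt_lepton * met * (1.0 - math.cos(dphi)))
```

For a lepton and \MET that are back to back with equal magnitude, $M_T = 2\pt$, the Jacobian-peak configuration typical of W decays; collinear configurations give $M_T \approx 0$.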
An enhanced sample of W+jets\xspace events is obtained by removing the $\Delta\phi(\tau_{1},\tau_{2})$ and \ensuremath{p_{\hat{\zeta}}}\xspace requirements
from the standard selections. Further enhancement of W events is obtained by requiring the transverse mass of the lepton
and \MET system to be between 50 and 100\GeV. The number of W+jets\xspace events in the signal region is
estimated from the number of events in the W+jets\xspace enhanced sample passing the $\Delta\phi(\tau_{1},\tau_{2})$ and \ensuremath{p_{\hat{\zeta}}}\xspace
requirements divided by the efficiency of the transverse mass requirement measured from a sample of events passing the
complement of both the $\Delta\phi(\tau_{1},\tau_{2})$ and \ensuremath{p_{\hat{\zeta}}}\xspace requirements.
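Schematically, the W+jets\xspace extrapolation just described amounts to a simple ratio (hypothetical counts and names, not the analysis code):

```python
def wjets_signal_estimate(n_pass, eff_mt):
    """W+jets yield in the signal region: events in the M_T-enhanced sample
    (50-100 GeV window) that pass the dphi and p_zeta requirements, divided
    by the M_T-window efficiency measured where both requirements are inverted."""
    return n_pass / eff_mt

# e.g. 30 passing events with a 60% M_T-window efficiency -> 50 expected events
```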
A high-purity sample of \ttbar events is obtained by requiring the presence of at least one reconstructed b jet with
$\pt > 20$\GeV and removing the $\Delta\phi(\tau_{1},\tau_{2})$ and \ensuremath{p_{\hat{\zeta}}}\xspace requirements. The \ttbar contribution in the
signal region is estimated from the number of events with one or more b jets passing the $\Delta\phi(\tau_{1},\tau_{2})$
and \ensuremath{p_{\hat{\zeta}}}\xspace requirements multiplied by the ratio of events passing the zero b-jet requirement and those events with one
or more b jets. This ratio is measured from a sample with at least two additional jets in the events, which is
dominated by \ttbar events.
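The \ttbar extrapolation follows the same pattern; a sketch with hypothetical counts:

```python
def ttbar_signal_estimate(n_btag_pass, n_zero_b, n_ge1_b):
    """ttbar yield in the (zero-b-jet) signal region: events with >=1 b jet
    passing the dphi and p_zeta requirements, scaled by the 0-b / >=1-b
    ratio measured in the ttbar-dominated (>=2 extra jets) control sample."""
    return n_btag_pass * (n_zero_b / n_ge1_b)
```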
Control samples dominated by \ensuremath{\mathrm{Z}\to \Pep \Pem}\xspace and \ensuremath{\mathrm{Z}\to \mu^+ \mu^-}\xspace background events are obtained by removing the \MET requirement and requiring the
$\tau$ candidates to be compatible with charged light-lepton signatures. The number of events in each control sample is compared with
the expected contributions as determined from the simulation and are found to be in agreement. Since the \ensuremath{\mathrm{Z}\to \Pep \Pem}\xspace and \ensuremath{\mathrm{Z}\to \mu^+ \mu^-}\xspace
backgrounds are well described by the simulation, the estimated background contribution from these sources is taken directly from
the number of simulated events passing the standard selection requirements normalized to the recorded integrated luminosity and
the next-to-next-to-leading-order cross-section value.
The contamination from diboson production is estimated directly from the number of simulated events that pass the analysis
requirements normalized to the integrated luminosity with next-to-leading order (NLO) cross-section values.
Finally, high-purity samples of $\mathrm{Z}\to\ensuremath{\tau_{\ell}\tau_{\textrm{h}}}\xspace$ events in which the data-to-MC scale factor can be evaluated
are obtained by removing the \MET selection and requiring that the \ensuremath{\tau_{\ell}}\xspace transverse momentum be less than 40\GeV.
Contamination from W+jets\xspace events in these samples is reduced after requiring $M_T(\ensuremath{\tau_{\ell}}\xspace,\MET) < 40\GeV$.
For the \ensuremath{\tau_{\Pe}\tau_\mu}\xspace final state the data-to-MC scale factor is evaluated for $M(\tau_e,\tau_\mu,\MET) < 150\GeV$.
Table~\ref{table:expectations} lists the number of estimated background events compared with the total number of observed events in
data for each final state. The statistical and systematic uncertainties are quoted separately. Only the uncertainties on the cross
section and background estimation methods are included in the systematic uncertainties. Other effects, such as the uncertainty of
the luminosity measurement, are not included in Table~\ref{table:expectations}. The \ensuremath{\tau^+\tau^-}\xspace effective visible mass distributions,
$M(\tau_1,\tau_2,\MET)$, for all four final states are shown in Fig.\,\ref{fig:masses}. The largest background sources are from
W+jets\xspace and Drell--Yan production for the \ensuremath{\tau_{\Pe}\tau_\mu}\xspace and \ensuremath{\tau_{\ell}\tau_{\textrm{h}}}\xspace final states, and QCD processes for \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace. We use the background
shapes normalized to the values obtained from the background estimation methods to search for a broad enhancement in the
$M(\tau_{1},\tau_{2},\MET)$ spectrum consistent with the production of a high-mass resonance state. The background shapes are taken
from the MC simulation and parametrized to extrapolate to the high-mass regions.
\begin{table*}[htbp]
\caption{Number of observed events in data and estimated background events for the whole mass range. The first and
second uncertainties are the statistical and systematic, respectively.}
\centering{
\begin{tabular}{| l | c | c | c | c |}\hline\hline
Process & \ensuremath{\tau_{\Pe}\tau_\mu}\xspace & \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace \\ [0.5ex] \hline
\ensuremath{\mathrm{Z}\to\tau^+\tau^-}\xspace & $816 \pm 58 \pm 44$ & $462 \pm 56 \pm 24$ & $804 \pm 53 \pm 44$ & $30.9 \pm 3.6 \pm 4.3$ \\
\ensuremath{\mathrm{Z}\to \mu^+ \mu^-}\xspace & -- & -- & $20.8 \pm 8.3 \pm 1.1$ & -- \\
\ensuremath{\mathrm{Z}\to \Pep \Pem}\xspace & -- & $220 \pm 24 \pm 11$ & -- & $0.66 \pm 0.33 \pm 0.22$ \\
W+jets\xspace & $83 \pm 15 \pm 7$ & $181 \pm 36 \pm 13$ & $459 \pm 26 \pm 29$ & $5.8 \pm 1.7 \pm 1.1$ \\
WW & $55.6 \pm 1.4 \pm 1.9$ & -- & $24.59 \pm 0.80 \pm 0.80$ & -- \\
WZ & $5.60 \pm 0.35 \pm 0.22$ & -- & -- & -- \\
\ttbar & $9.6 \pm 1.2 \pm 0.7$ & $10.8 \pm 2.8 \pm 0.9$& $46.2 \pm 6.9 \pm 3.7$ & $0.00^{+0.76 + 0.15}_{-0.00}$ \\
QCD & $45.1 \pm 3.3 \pm 9.0$ & $185 \pm 31 \pm 19$ & $72 \pm 18 \pm 8$ & $467 \pm 26 \pm 67 $ \\
Total & $1015 \pm 60 \pm 45$ & $1058 \pm 77 \pm 35$ & $1427 \pm 63 \pm 53$ & $504 \pm 26 \pm 67$ \\\hline
Observed & $1044$ & $1043$ & $1422$ & $488$\\
\hline\hline
\end{tabular}
}
\label{table:expectations}
\end{table*}
\begin{table*}[htb]
\caption{Number of observed events in data and estimated background events obtained from the integration of the
$M(\tau_1,\tau_2,\MET)$ distribution for masses above 300\GeV. The statistical and systematic uncertainties have
been added in quadrature.}
\centering{
\begin{tabular}{| l | c | c | c | c |}\hline\hline
Process & \ensuremath{\tau_{\Pe}\tau_\mu}\xspace & \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace \\ [0.5ex] \hline
\ensuremath{\mathrm{Z}\to\tau^+\tau^-}\xspace & $2.76 \pm 0.25$ & $12.5 \pm 1.7$ & $18.0 \pm 1.5$ & $3.41 \pm 0.62$ \\
\ensuremath{\mathrm{Z}\to \mu^+ \mu^-}\xspace & -- & -- & $3.0 \pm 1.2$ & -- \\
\ensuremath{\mathrm{Z}\to \Pep \Pem}\xspace & -- & $6.94 \pm 0.83$ & -- & -- \\
W+jets\xspace & $4.86 \pm 0.97$ & $0.23 \pm 0.05$ & $9.58 \pm 0.81$ & $1.83 \pm 0.69$ \\
WW & $8.53 \pm 0.36$ & -- & $4.01 \pm 0.18$ & -- \\
WZ & $1.34 \pm 0.10$ & -- & -- & -- \\
\ttbar & $1.57 \pm 0.23$ & $0.93 \pm 0.25$ & $6.6 \pm 1.1$ & -- \\
QCD & -- & $0.13 \pm 0.03$ & $0.16 \pm 0.04$ & $18.0 \pm 2.9$ \\
Total & $19.1 \pm 1.1$ & $20.8 \pm 1.9$ & $41.3 \pm 2.4$ & $23.3 \pm 3.0$ \\\hline
Observed & $20$ & $10$ & $32$ & $13$ \\
\hline\hline
\end{tabular}
}
\label{table:expectationsAbove300}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0,width=.48\textwidth]{emuDiTauMETMassVarBins.pdf}
\includegraphics[angle=0,width=.48\textwidth]{etauDiTauMETMassVarBins.pdf}
\includegraphics[angle=0,width=.48\textwidth]{mutauDiTauMETMassVarBins.pdf}
\includegraphics[angle=0,width=.48\textwidth]{hadtauDiTauMETMassVarBins.pdf}
\caption{$M(\tau_{1},\tau_{2},\MET)$ distributions for all four final states: (a) \ensuremath{\tau_{\Pe}\tau_\mu}\xspace, (b) \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace, (c) \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace, and (d) \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace.
The dashed line represents the mass distribution for a $\ensuremath{\zp_{\textrm{SSM}}}\xspace\to\ensuremath{\tau^+\tau^-}\xspace$ with a mass of 750\GeV.}
\label{fig:masses}
\end{center}
\end{figure*}
\section{Systematic Uncertainties}\label{sec:SystematicSources}
The main source of systematic uncertainty is the estimation of the background contributions, which is dominated by the
statistical uncertainty of the data used in the control regions. These uncertainties are in the range of 6 to 14\%. The
contamination from other backgrounds in these control regions has a negligible effect on the systematic uncertainty. The
efficiencies for electron and muon reconstruction and identification are measured with the ``tag-and-probe''
method\,\cite{electronTrigger, muonTrigger} with a resulting uncertainty of 4.5\% for electrons and up to 3\% for muons. The
hadronic $\tau$ trigger efficiency is measured from $\Z\to\ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace$ events selected by single-muon triggers. This leads to a relative
uncertainty of 4.0\% and 6.4\% per $\ensuremath{\tau_{\textrm{h}}}\xspace$ candidate for the \ensuremath{\tau_{\ell}\tau_{\textrm{h}}}\xspace and \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace final states, respectively. Systematic
effects associated with \ensuremath{\tau_{\textrm{h}}}\xspace identification are extracted from a fit to the \ensuremath{\mathrm{Z}\to\tau^+\tau^-}\xspace visible mass distribution,
$M(\tau_1,\tau_2)$. The fit constrains the \Z production cross section to the measured cross section in \ensuremath{\mathrm{Z}\to \Pep \Pem}\xspace and \ensuremath{\mathrm{Z}\to \mu^+ \mu^-}\xspace decay
channels, leading to a relative uncertainty of 6.8\% per $\ensuremath{\tau_{\textrm{h}}}\xspace$ candidate\,\cite{tauIDSyst}. The simulation is used to verify that
\ensuremath{\tau_{\textrm{h}}}\xspace identification efficiency remains constant as a function of \pt up to the high-mass signal region.
Uncertainties that contribute to the $M(\tau_{1},\tau_{2},\MET)$ shape variations include the $\ensuremath{\tau_{\textrm{h}}}\xspace$ (2\%) and charged light-lepton
(1\%) energy scales, and the uncertainty on the \MET scale, which is used for the $M(\tau_{1},\tau_{2},\MET)$ mass calculation. The
\MET scale uncertainties contribute via the jet energy scale (2--5\% depending on $\eta$ and \pt) and unclustered energy scale
(10\%), where unclustered energy is defined as the energy not associated with the reconstructed leptons and jets with
$\pt >10\GeV$. The unclustered energy scale uncertainty has a negligible systematic effect on the signal acceptance and
$M(\tau_{1},\tau_{2},\MET)$ shape. In addition, the limited sizes of the simulated samples in the high-mass regions lead to
systematic uncertainties in the background shape parametrization at high mass. The uncertainty on the probability for a light quark
or gluon jet to be misidentified as a b jet (20\%) also has a negligible effect on the signal acceptance and
$M(\tau_{1},\tau_{2},\MET)$ mass shape.
The uncertainty on signal acceptance due to the PDF set included in the simulated samples is evaluated by comparing CTEQ6.6L,
MRST2006, and NNPDF10 PDF sets\,\cite{CTEQ,mrst2006,nnpdf10} with the default PDF set. The systematic effect due to imprecise
modeling of initial- and final-state radiation is determined by reweighting events to account for effects such as missing $\alpha$
terms in the soft-collinear approach\,\cite{softCollinear} and missing NLO terms in the parton shower
approach\,\cite{partonShower}. Finally, the uncertainty in the luminosity measurement is 2.2\%\,\cite{REFLUMI}.
Table~\ref{table:Systematics} summarizes the sources of systematic uncertainty considered.
\begin{table*}[ht]
\caption{Summary of the sources of systematic uncertainties.}
\centering{
\begin{tabular}{| l | c | c | c | c |}
\hline\hline
Source of uncertainty & \ensuremath{\tau_{\Pe}\tau_\mu}\xspace & \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace & \ensuremath{\tau_{\textrm{h}}\tau_{\textrm{h}}}\xspace \\ [0.5ex] \hline
Background estimation & 7.4\% & 8.0\% & 5.8\% & 14.3\% \\
Muon id & 3.0\% & -- & 1.1\% & -- \\
Electron id & 4.5\% & 4.5\% & -- & -- \\
Tau id & -- & 6.8\% & 6.8\% & 9.6\% \\
Tau energy scale & -- & 2.1\% & 2.1\% & 3.0\% \\
Tau trigger & -- & 4.0\% & 4.0\% & 9.0\% \\
\hline
Jet energy scale & \multicolumn{4}{c|}{4.0\%}\\
Luminosity & \multicolumn{4}{c|}{2.2\%}\\
Parton distribution functions & \multicolumn{4}{c|}{4.0 -- 6.5\%}\\
Initial-state radiation & \multicolumn{4}{c|}{3.1\%} \\
Final-state radiation & \multicolumn{4}{c|}{2.2\% } \\
\hline
\hline
\end{tabular}
}
\label{table:Systematics}
\end{table*}
\section{Results}\label{sec:results}
The observed mass spectra shown in Fig.\,\ref{fig:masses} do not reveal any evidence for \ensuremath{\mathrm{Z}^\prime\to\tau^+\tau^-}\xspace production. A fit of the
expected mass spectrum to the data based on the CL$_\textrm{S}$\xspace criterion\,\cite{Read, Junk} is used to calculate an upper limit on the product
of the resonance cross section and its branching fraction into $\tau$-lepton pairs at 95\% CL as a function of the \zp mass for each
\ensuremath{\tau^+\tau^-}\xspace final state taking into account all systematic uncertainties shown in Table~\ref{table:Systematics}. The final limits are
obtained from the combination of all four final states.
Figure\,\ref{fig:Limits} shows the combined expected and observed limits as well as the theoretical cross section times the
branching fraction to $\tau$-lepton pairs for the SSM (\ensuremath{\zp_{\textrm{SSM}}}\xspace) and E6 (\ensuremath{\zp_{\psi}}\xspace) models as functions of the \zp resonance
mass. The bands on the expected limits represent the one and two standard deviation uncertainties obtained using a large sample of
pseudo-experiments based on the background-only hypothesis. The quoted mass limit corresponds to the point where the observed
upper limit on $\sigma(\textrm{pp}\to\zp)\times \textrm{Br}(\zp\to\ensuremath{\tau^+\tau^-}\xspace)$ crosses the theoretical line. We exclude a \ensuremath{\zp_{\textrm{SSM}}}\xspace
with mass below 1.4\TeV and a \ensuremath{\zp_{\psi}}\xspace with mass below 1.1\TeV. The \ensuremath{\tau_{\Pe}\tau_{\textrm{h}}}\xspace and \ensuremath{\tau_\mu\tau_{\textrm{h}}}\xspace final states contribute the most to the
limits. A downward fluctuation in the number of observed events with respect to the number of expected events leads to a limit that is
about 1.5 standard deviations higher than expected in the region above 600\GeV.
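The mass limit is read off as the crossing point of the observed limit curve and the theoretical cross-section curve. As a minimal illustration (with hypothetical curve values rather than the analysis numbers, and ignoring the full CL$_\textrm{S}$ machinery), the crossing can be found by linear interpolation:

```python
import numpy as np

def exclusion_mass(masses, observed_limit, theory_xsec):
    """Return the mass where the observed upper limit crosses the
    (falling) theoretical cross section, via linear interpolation.
    Masses below the crossing, where theory exceeds the limit, are excluded."""
    diff = np.asarray(theory_xsec, float) - np.asarray(observed_limit, float)
    crossings = np.where(np.diff(np.sign(diff)) < 0)[0]  # + -> - sign change
    if crossings.size == 0:
        return None  # no exclusion
    i = crossings[0]
    # interpolate between the two bracketing mass points
    return masses[i] + (masses[i + 1] - masses[i]) * diff[i] / (diff[i] - diff[i + 1])

# hypothetical curves (pb) on a coarse mass grid (TeV)
masses = [1.0, 1.2, 1.4, 1.6]
theory = [10.0, 5.0, 2.0, 1.0]
observed = [3.0, 3.0, 3.0, 3.0]
print(exclusion_mass(masses, observed_limit=observed, theory_xsec=theory))
```
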
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=\cmsFigWidth]{jointLimitStandAlone.pdf}
\caption{Combined upper limit at the 95\% CL on the product of the cross section and branching fraction into $\tau$-lepton pairs as a
function of the \zp mass. The bands represent the one and two standard deviation uncertainties obtained from the background-only
hypothesis.}
\label{fig:Limits}
\end{center}
\end{figure}
\section{Summary}\label{sec:conclusion}
A search for new heavy \zp bosons decaying into $\tau$-lepton pairs using data corresponding to an integrated luminosity of
$4.94 \pm 0.11\fbinv$ collected by the CMS detector in proton-proton collisions at $\sqrt{s}=7$\TeV was performed. The observed
mass spectrum did not reveal any evidence for \ensuremath{\mathrm{Z}^\prime\to\tau^+\tau^-}\xspace production, and an upper limit, as a function of the \zp mass, on the
product of the resonance cross section and branching fraction into \ensuremath{\tau^+\tau^-}\xspace was calculated. The \ensuremath{\zp_{\textrm{SSM}}}\xspace and \ensuremath{\zp_{\psi}}\xspace resonances
decaying to $\tau$-lepton pairs were excluded for masses below 1.4 and 1.1\TeV, respectively, at 95\% CL. These represent the
most stringent limits on the production of a new heavy resonance decaying into $\tau$-lepton pairs published to date.
\section*{Acknowledgments}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC machine. We
thank the technical and administrative staff at CERN and other CMS institutes, and acknowledge support from: FMSR
(Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC
(China); COLCIENCIAS (Colombia); MSES (Croatia); RPF (Cyprus); MoER, SF0690030s09 and ERDF (Estonia); Academy of
Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NKTH
(Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); NRF and WCU (Korea); LAS (Lithuania);
CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); MSI (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT
(Portugal); JINR (Armenia, Belarus, Georgia, Ukraine, Uzbekistan); MON, RosAtom, RAS and RFBR (Russia); MSTD (Serbia);
MICINN and CPAN (Spain); Swiss Funding Agencies (Switzerland); NSC (Taipei); TUBITAK and TAEK (Turkey); STFC (United
Kingdom); DOE and NSF (USA).
\section{Introduction}\label{sec:introduction}
According to the \emph{Global Cancer Statistics 2020} report \cite{sung2021global}, cancer is one of the leading causes of mortality worldwide.
It is a diversified group of numerous complicated diseases, rather than a single one, marked by uncontrolled cell growth and a propensity to rapidly spread or infiltrate other body parts. Cancer's inherent complexity and heterogeneity have proven to be significant barriers to the development of effective anticancer therapies \cite{basith2017expediting}. Cancer can be treated with conventional clinical methods such as surgery, radiation, and chemotherapy, but these methods have drawbacks that can be painful for patients \cite{kaur2022data}. Although the aforementioned conventional methods deliver positive outcomes, they can also have some substantial adverse effects, including myelosuppression, cardiac toxicity, and gastrointestinal damage \cite{basak2021comparison}.
The discovery of anticancer peptides (ACPs) has transformed the paradigm for treating cancer. The ACPs can interact with the anionic cellular components of cancer cells and selectively target them without harming normal or healthy cells in the body. This remarkable feature of the ACPs is vital for therapeutic strategies. The ACPs are typically composed of $5$ to $50$ amino acids and are often derived from antimicrobial peptides (AMPs), many of which have cationic characteristics. These features have resulted in the development of novel alternative cancer therapies.
The biggest challenge with the ACPs is distinguishing them from other synthetic or natural peptides \cite{yi2019acp}. Researchers employ a variety of approaches to identify the ACPs \cite{tyagi2013silico}. Although the experimental procedures are the gold standard, they are costly and time-consuming, and hence unsuitable for large-scale searches for prospective ACP candidates. As a result, alternative methodologies for identifying ACPs are desired.
Technical advances in artificial intelligence (AI) have established it as a powerful tool for dealing with highly complex problems \cite{atif2022multi}. Many studies have used machine learning models to predict proteins and classify peptide sequences; see, for instance, \cite{khan2018rafp,usman2020afp,park2020e3,al2021ecm,usman2021aop,chen2018ifeature}. Even for the ACPs alone, there are several \emph{in silico} approaches for identifying new ACPs. For instance, Tyagi et al. have proposed a \emph{support vector machine} (SVM)-based classification algorithm in \cite{tyagi2013silico}. Another study, \cite{hajisharifi2014predicting}, employed Chou's \emph{pseudo amino acid composition} to predict the ACPs and tested their mutagenicity using the \emph{Ames test}. Generalized chaos game representation methods \cite{ge2019identifying}, deep learning-based long short-term memory models \cite{yi2019acp}, ensemble learning models \cite{ge2020enacp}, augmentation strategies for improved classification performance \cite{chen2021acp}, and \emph{ETree classifiers} based on \emph{amino acid composition} (AAC) \cite{agrawal2021anticp} are examples of alternative approaches.
Although existing machine learning techniques have some advantages for ACP prediction, there is still room for improvement. For instance, deep learning models provide cutting-edge performance, but their \emph{black-box} nature obscures the classification judgment. A relatively simple model, on the other hand, may not provide adequate classification accuracy. To that end, the \emph{sparse-representation classification} (SRC) method provides a good balance, as constrained optimization is a proven approach for explainable sparse modeling \cite{naseem2017ecmsrc,li2023multi,wright2008robust,hofmann2008kernel}. Under the basic principle of the SRC, a test sample may be reconstructed using a linear combination of dictionary items with sparse weights \cite{usman2022afp,naseem2010sparse,bengio2013representation}.
The SRC is a non-parametric learning approach in which the magnitude of a sparse vector corresponds to the contribution of the dictionary atoms \cite{elad2010sparse,li2013simultaneous}.
Various recovery algorithms can be used to estimate the sparse vector in such ill-posed optimization problems. \emph{Basis pursuit} (BP) \cite{chen1994basis}, \emph{orthogonal matching pursuit} (OMP) \cite{pati1993orthogonal}, and \emph{matching pursuit} (MP) \cite{gharavi1998fast} are some prominent methods for the SRC. The BP employs $l_1$-norm regularization to relax the rigid $l_0$-norm sparsity constraint, allowing gradient estimation from continuous error surfaces \cite{zhang2010sparseness,mandal2016employing}. The BP furnishes the most sparse solution, but its computational cost grows exponentially. The MP is faster than the BP and OMP, although its sparsity is not guaranteed.
Aside from the optimization approach, the efficacy of the \emph{over-complete dictionary} (OCD) is the most important feature for the construction of an SRC model. In this regard, Zhang et al. proposed a kernel SRC \cite{zhang2011kernel}. The kernel mapping converts the nonlinear relationship between different atoms (samples in OCD) to a linear relationship, allowing the classification of even more complex patterns \cite{atif2022multi,khan2017novel,zhang2011kernel}. Furthermore, a \emph{composition of K-spaced amino-acid pairs} (CKSAAP) is employed to capture a diverse range of peptide sequences, yielding a comprehensive feature vector.
Motivated by the success of the SRC and the \emph{kernel trick}, we propose in this work to combine polynomial kernel-based \emph{principal component analysis} (PCA) embedding to reduce the feature space dimensions and \emph{synthetic minority oversampling technique} (SMOTE) using $K$-Means \cite{last2017oversampling} to balance the sample space dimension for the construction of the \emph{kernel SRC} model. Details of the proposed approach, including datasets, feature encoding techniques, and classification methods, are furnished in Section \ref{sec:method}. The experimental analysis and discussion of results are provided in Section \ref{sec:result}. The paper is concluded in Section \ref{sec:conclusion}.
\section{Proposed Approach}\label{sec:method}
In this section, we describe the proposed \emph{kernel sparse representation classification} (KSRC) method, which includes feature encoding, dimensionality reduction for the design of the \emph{OCD matrix} (ODM), and $n$-fold cross-validation for model evaluation. Fig. \ref{fig:overview} shows the complete block diagram describing the overall classification process. Individual steps are described in detail in the following subsections.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{flowchart.png}
\caption{Overview of the proposed ACPs classification strategy.}
\label{fig:overview}
\end{figure}
\subsection{Dataset} \label{sec:dataset}
There are many datasets available, including those in \cite{hajisharifi2014predicting, chen2016iacp, wei2018acpred}. Three benchmark datasets are used in this work to design and evaluate the ACP classification strategy. The first dataset, \emph{ACP344}, was obtained from \cite{hajisharifi2014predicting} and contains $344$ peptide sequences, $138$ of which are ACPs and the remaining $206$ are non-ACP samples. The second dataset, \emph{ACP740}, was obtained from previous studies by Chen et al. \cite{chen2016iacp} and Wei et al. \cite{wei2018acpred}. It contains $740$ peptide sequences, $376$ of which are ACP samples and $364$ are non-ACP samples. A filtered and curated version of the ACP740 dataset can be found in \cite{yi2019acp}. Different classifiers are designed and evaluated for each dataset according to the protocols reported in \cite{chen2021acp}. Specifically, $10$-fold cross-validation is used for ACP344, whereas $5$-fold cross-validation is used for ACP740. For the third dataset, two ACP samples were chosen at random from the ACP740 \cite{yi2019acp} dataset, and different mutations were generated for mutation sensitivity analysis.
It is worth noting that this independent mutant dataset is solely utilized for mutation analysis and is not included in the design of the ODM.
\subsection{Features Encoding}\label{features}
Protein or peptide sequences are often recorded and stored in \emph{FastA} format, with each amino acid represented by an alphabetic symbol; see, for instance, \cite{binz2019proteomics}. These variable-length alphabetic sequences are processed using a variety of sequence encoding techniques, such as the AAC, \emph{di-peptide AAC} (DAAC), etc., to extract numerically meaningful features. The AAC is the most basic feature encoding approach, providing a feature vector containing the frequency count of each amino acid; hence, the overall AAC feature vector length equals the total number of amino acids, i.e., $20$. Similarly, the DAAC gives the frequencies of peptide pairings, with the total length of the feature vector equal to the number of possible combinations of the $20$ amino acids taken in pairs (i.e., $20\times 20 = 400$). The DAAC feature vector containing the frequencies of $0$-spaced amino acid pairs (i.e., the DAAC of amino acid pairs separated by $K = 0$ residues) is given mathematically by
\begin{align*}
{\boldsymbol{\psi}}_0:=
\begin{bmatrix}
\dfrac{\psi_{A A}}{N_0}& \dfrac{\psi_{A C}}{N_0}& \dfrac{\psi_{A D}}{N_0}& \cdots& \dfrac{\psi_{Y Y}}{N_0}
\end{bmatrix}^T\in\mathbb{R}^{400}.
\end{align*}
Here, $\psi_{\rm string}$ is the DAAC descriptor furnishing the frequency of the peptide pairing described by the \textit{string} and $N_k:=L_x-(k+1)$ is the number of local sequence windows defined in terms of the protein sequence length $L_x$ and the number of residues $k$ with $0\leq k\leq K$.
Both the AAC and DAAC are widely used sequence encoding methods and have been successfully used to design classifiers for various protein and peptide sequences \cite{chen2018ifeature}. However, these techniques are limited in their representation, as they do not capture amino acid pairs separated by intervening residues. To improve the pattern capture of the DAAC, a modified version is proposed in \cite{chen2018ifeature} by concatenating the DAAC feature vectors of at most $K$-spaced amino acid pairs. For example, for $K=2$, we need to calculate ${\boldsymbol{\psi}}_k$, for $k=0,1,2$, and the final CKSAAP feature vector, $\Psi_{K}$, will be a concatenated version of ${\boldsymbol{\psi}}_0$, ${\boldsymbol{\psi}}_1$, and ${\boldsymbol{\psi}}_2$. Here,
\begin{align*}
{\boldsymbol{\psi}}_1:=&
\begin{bmatrix}
\dfrac{\psi_{A x A}}{N_1}& \dfrac{\psi_{AxC}}{N_1}& \dfrac{\psi_{AxD}}{N_1}& \cdots& \dfrac{\psi_{YxY}}{N_1}
\end{bmatrix}^T\in\mathbb{R}^{400},
\\
{\boldsymbol{\psi}}_2:=&
\begin{bmatrix}
\dfrac{\psi_{A x x A}}{N_2}& \dfrac{\psi_{A x x C}}{N_2}& \dfrac{\psi_{A x x D}}{N_2}& \cdots& \dfrac{\psi_{Y x x Y}}{N_2}
\end{bmatrix}^T\in\mathbb{R}^{400},
\\
\Psi_{K}:=&
\begin{bmatrix}
{\boldsymbol{\psi}}_0^T& {\boldsymbol{\psi}}_1^T& \cdots& {\boldsymbol{\psi}}_K^T
\end{bmatrix}^T
\in \mathbb{R}^{400(K+1)}.
\end{align*}
Here, $k$ is the gap used for the calculation of the $k$th DAAC feature vector ${\boldsymbol{\psi}}_k$, whereas $K$ is the largest gap for which the CKSAAP feature vector $\Psi_{K}$ is calculated. Fig. \ref{fig_comp} shows an example of the ${\boldsymbol{\psi}}_1$ calculation.
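The descriptor definitions above translate directly into code. The following sketch (with a toy peptide; the alphabet ordering is an illustrative assumption) computes ${\boldsymbol{\psi}}_k$ and the concatenated $\Psi_K$ using the window count $N_k = L_x-(k+1)$:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def kspaced_daac(seq, k):
    """psi_k: frequencies of amino-acid pairs separated by k residues,
    normalized by the number of local windows N_k = L - (k + 1)."""
    pairs = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]
    counts = dict.fromkeys(pairs, 0)
    n_k = len(seq) - (k + 1)
    for i in range(n_k):
        counts[seq[i] + seq[i + k + 1]] += 1
    return [counts[p] / n_k for p in pairs]  # length 400

def cksaap(seq, K):
    """Psi_K: concatenation of psi_0 .. psi_K (length 400 * (K + 1))."""
    return [v for k in range(K + 1) for v in kspaced_daac(seq, k)]

features = cksaap("ACDKACD", K=2)  # toy peptide, 1200-dimensional vector
```

For $K=8$, as used later in this work, the vector has $400\times 9 = 3600$ entries.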
\begin{figure*}[!htb]
\centering
\includegraphics[width=1\textwidth]{cksaap_features.PNG}
\caption{Illustration of the $k$-spaced DAAC descriptor ${\boldsymbol{\psi}}_k$ calculation for $k=1$; adapted from \cite{usman2020afp}.}
\label{fig_comp}
\end{figure*}
\subsection{Dimensionality Reduction using Kernel PCA}
In machine learning, a large amount of data is often considered useful. A \emph{curse of dimensionality} nonetheless arises when there are few measurements or samples but many more attributes (i.e., $A>M$, with $A$ and $M$ being the numbers of attributes and measurements, respectively). In this study, our dataset has a small number of samples (fewer than $1{,}000$), but the CKSAAP description with $K=8$ contains $3{,}600$ attributes. This curse of dimensionality not only makes our classification problem mathematically ill-posed but is also critical for the design of an OCD for sparse representation. To that end, we use principal component analysis to prune out the least informative dimensions, retaining $f<M=N+P$ components from the original feature space of $A=3600$ attributes and allowing us to design an ODM of size $f\times M$. Here, $N$ and $P$ are, respectively, the numbers of negative and positive samples in the dictionary.
We employ linear and non-linear projection methods to provide a comparison of the SRC and KSRC methods. A comparison of the eigenvalue spread is presented in Fig.~\ref{fig:eigenvalue_PCAvsKPCA_ACP740} for the ACP740 dataset. Specifically, for one of the $5$-fold cross-validation splits (i.e., $n=5$), the feature set of $A=3600$ attributes and $M=740\times (n-1)/n = 592$ measurements (samples) is projected into the linear and kernel eigenspaces, and the leading eigenvalues are plotted. It can be observed that the KPCA compresses the feature dimension more effectively; hence, the relative eigenvalues in the KPCA decay more quickly. It is also worth noting that the actual number of samples used for the design of the OCD is limited by the number of training samples in $n$-fold cross-validation. To avoid data leakage, no test sample is used for the estimation of the kernel or for the design of the OCD in our proposed method. First, a PCA projection is learned on the training samples, and the test samples are later projected onto the same space using the already learned projection.
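The train-only fitting described above can be sketched with a polynomial kernel $(\mathbf{x}^{T}\mathbf{y}+1)^{d}$ and standard kernel centering; the hyper-parameters below are illustrative assumptions, not the exact configuration used in this work:

```python
import numpy as np

def poly_kpca_fit(X, degree=3, n_components=5):
    """Fit polynomial-kernel PCA on the training samples only."""
    K = (X @ X.T + 1.0) ** degree                # training kernel matrix
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # leading components
    alphas = vecs[:, idx] / np.sqrt(vals[idx])   # normalized dual coefficients
    return {"X": X, "K": K, "alphas": alphas, "degree": degree,
            "train_scores": Kc @ alphas}

def poly_kpca_transform(model, Xnew):
    """Project unseen samples with the already-learned mapping (no leakage)."""
    Kt = (Xnew @ model["X"].T + 1.0) ** model["degree"]
    K = model["K"]
    # center the test kernel consistently with the training kernel
    Ktc = Kt - Kt.mean(axis=1, keepdims=True) - K.mean(axis=0) + K.mean()
    return Ktc @ model["alphas"]
```

Projecting the training samples through `poly_kpca_transform` reproduces the fitted scores, which is a convenient consistency check on the centering.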
\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{pca_vs_kpca.png}
\caption{Eigenvalues of the top $500$ principal components.}
\label{fig:eigenvalue_PCAvsKPCA_ACP740}
\end{figure}
\subsection{Over-complete Dictionary Matrix for Sparse Representation Classification}
The ODM is the matrix whose columns (atoms) are the feature vectors of the ACP and non-ACP training samples. The ODM is used for the SRC, in which the ACPs and non-ACPs are characterized using class indices $l=1$ and $l=2$, respectively.
In this section, we take $f$, $N$, and $P$ as the number of features, training samples with negative classes, and training samples with positive classes, respectively. If $\mathbf{d}_i^{(l)}\in\mathbb{R}^{f}$ represents the $i$th training sample from $l$th class label then the ODM, $\mathbf{D}\in \mathbb{R}^{f\times(N+P)}$, is formed as
\begin{align*}
\mathbf{D}:=\begin{bmatrix}
\mathbf{d}_1^{(1)} & \mathbf{d}_2^{(1)}& \cdots & \mathbf{d}_N^{(1)} & \mathbf{d}_1^{(2)} & \mathbf{d}_2^{(2)} & \cdots & \mathbf{d}_P^{(2)}\end{bmatrix}.
\end{align*}
A test sample vector $\mathbf{t}\in \mathbb{R}^f$ can be represented as
\begin{align*}
\mathbf{t}=\mathbf{D}{\boldsymbol{\gamma}},
\end{align*}
where the coefficient vector ${\boldsymbol{\gamma}}\in\mathbb{R}^{N+P}$ is defined by
\begin{align*}
{\boldsymbol{\gamma}}:=\begin{bmatrix} \gamma_{1}^{(1)} & \gamma_{2}^{(1)}& \cdots & \gamma_{N}^{(1)}& \gamma_{1}^{(2)}& \gamma_{2}^{(2)}& \cdots &\gamma_{P}^{(2)}\end{bmatrix}^T.
\end{align*}
If the true class label of the test sample $\mathbf{t}$ is $l=1$ then all entries $\gamma_{1}^{(2)}, \gamma_{2}^{(2)}, \cdots, \gamma_{P}^{(2)}$ should be zero. Similarly, if the true class label is $l=2$ then all entries
$\gamma_{1}^{(1)}, \gamma_{2}^{(1)}, \cdots, \gamma_{N}^{(1)}$ should be zero.
According to \emph{sparse reconstruction theory}, if the dictionary $\mathbf{D}$ is given then the sparse vector ${\boldsymbol{\gamma}}$ can be recovered \cite{naseem2008sparse,naseem2010sparse}. In principle, the most sparse ${\boldsymbol{\gamma}}$ can be sought as the solution to the optimization problem
\begin{equation}
\arg \displaystyle{\min_{\boldsymbol{\gamma}}} \left\|{\boldsymbol{\gamma}}\right\|_{0}\ \ \mbox{subject to}\ \ \mathbf{t}=\mathbf{D}{\boldsymbol{\gamma}},
\label{l0}
\end{equation}
where $\left\|\cdot\right\|_{0}$ is the $l_0$-norm that counts the number of non-zero entries in the vector.
The constrained optimization problem \eqref{l0} is non-convex, which makes it difficult to find the optimal vector ${\boldsymbol{\gamma}}$. Several algorithms for recovering the sparse vector ${\boldsymbol{\gamma}}$ by solving a \emph{convex relaxation} of the constrained optimization problem \eqref{l0} have been proposed in the literature. To that end, these algorithms make use of the $l_1$-norm to solve the \emph{relaxed} optimization problem
\begin{equation}
\arg{\min_{\boldsymbol{\gamma}}} \left\|{\boldsymbol{\gamma}}\right\|_{1}\ \ \mbox{subject to}\ \ \mathbf{t}=\mathbf{D}{\boldsymbol{\gamma}}.
\label{equ:l1_prob}
\end{equation}
Some notable techniques for solving the optimization problem \eqref{equ:l1_prob} are the BP \cite{chen1994basis}, OMP \cite{pati1993orthogonal}, and MP \cite{gharavi1998fast}. Among these, the BP is considered the most robust, as it furnishes the sparsest solution, but its computational cost grows exponentially. The OMP provides a reasonable trade-off between sparsity and computational complexity, although its cost is still high. The MP, on the other hand, is faster than both the BP and OMP, although its optimality in terms of sparsity is not guaranteed. In the proposed approach, we use the MP because of its suitability for the task at hand.
It should be noted that ${\boldsymbol{\gamma}}$ is expected to contain high-value entries corresponding to the columns of $\mathbf{D}$ that are relevant to the class label of the probe $\mathbf{t}$. This embedded information about the class label of $\mathbf{t}$ can be used to identify $\mathbf{t}$. Let
\begin{align*}
e_l(\mathbf{t}):=\left\|\mathbf{t}-\mathbf{D}{\boldsymbol{\theta}}_l({\boldsymbol{\gamma}})\right\|_2, \qquad \ l=1,2,
\end{align*}
where the vector ${\boldsymbol{\theta}}_l({\boldsymbol{\gamma}})$ retains the entries of ${\boldsymbol{\gamma}}$ at the locations corresponding to class $l$ and is zero elsewhere. The decision is ruled in favor of the class with the minimum reconstruction error, i.e.,
\begin{align*}
\mbox{class-label}(\mathbf{t})=\arg {\min_l} \left(e_l (\mathbf{t})\right).
\end{align*}
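The residual-based decision rule above, paired with a plain matching-pursuit solver, can be sketched as follows (a simplified stand-in for the exact solver configuration used in this work):

```python
import numpy as np

def matching_pursuit(D, t, n_iter=10):
    """Greedy MP: sparse gamma such that t is approximately D @ gamma."""
    gamma = np.zeros(D.shape[1])
    r = np.array(t, dtype=float)                 # residual
    norms = np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        j = np.argmax(np.abs(D.T @ r) / norms)   # best-correlated atom
        step = (D[:, j] @ r) / norms[j] ** 2
        gamma[j] += step
        r -= step * D[:, j]
    return gamma

def src_classify(D, labels, t, n_iter=10):
    """Rule in favor of the class with minimum reconstruction error e_l(t)."""
    gamma = matching_pursuit(D, t, n_iter)
    labels = np.asarray(labels)
    errors = {}
    for l in np.unique(labels):
        theta_l = np.where(labels == l, gamma, 0.0)  # keep class-l entries only
        errors[l] = np.linalg.norm(t - D @ theta_l)
    return min(errors, key=errors.get)
```

With a toy dictionary whose class-1 atoms lie near one axis and class-2 atoms near the other, a probe close to either axis is assigned to the matching class.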
\subsection{Evaluation Protocol}
The proposed algorithm has been evaluated for \emph{true positive rate} (TPR) or sensitivity ($S_n$), \emph{true negative rate} (TNR) or
specificity ($S_p$), \emph{prediction accuracy} (${\rm Acc}$), \emph{Matthews correlation coefficient} (${\rm MCC}$), \emph{balanced accuracy} (${\rm Bal.\ Acc.}$), \emph{Youden's index} (${\rm YI}$), and \emph{$F1$ score}, defined as
\begin{align*}
&S_n := \frac{TP}{TP+FN},
\\
&S_p := \frac{TN}{TN+FP},
\\
&\text{Acc.} := \frac{TP+TN}{TP+TN+FP+FN},
\\
&{\rm MCC} :=\frac{TP\cdot TN-FP\cdot FN}{\sqrt{\Delta}},
\\
&\text{Bal. Acc.} := \frac{S_n + S_p}{2},
\\
&\text{YI} := S_n + S_p - 1,
\\
&\text{F$1$ Score} := 2\cdot \frac{\text{Precision}\cdot S_n}{\text{Precision} + S_n},
\end{align*}
where $TP$, $FP$, $TN$, and $FN$ indicate the true positive, false positive, true negative, and false negative, respectively. Here,
\begin{align*}
&\text{Precision} := \frac{TP}{TP+FP},
\\
&\Delta := (TP+FP)(TN+FN)(TP+FN)(TN+FP).
\end{align*}
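For concreteness, these definitions map directly onto the four confusion counts; a minimal sketch:

```python
import math

def evaluation_metrics(TP, FP, TN, FN):
    """Compute the evaluation metrics defined above from confusion counts."""
    sn = TP / (TP + FN)                        # sensitivity (TPR)
    sp = TN / (TN + FP)                        # specificity (TNR)
    precision = TP / (TP + FP)
    delta = (TP + FP) * (TN + FN) * (TP + FN) * (TN + FP)
    return {
        "Sn": sn,
        "Sp": sp,
        "Acc": (TP + TN) / (TP + TN + FP + FN),
        "MCC": (TP * TN - FP * FN) / math.sqrt(delta),
        "Bal. Acc": (sn + sp) / 2,
        "YI": sn + sp - 1,
        "F1": 2 * precision * sn / (precision + sn),
    }
```
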
\section{Experimental Results} \label{sec:result}
In this section, we perform different experiments to validate our methodology, supporting the selection of various hyper-parameters, solver approaches, embedding strategies, the number of principal components, etc.
\subsection{Comparison of Dictionary Matrices}
The robustness and effectiveness of the ODM are the most critical elements of a sparse representation classifier. We employ principal component embedding of the CKSAAP features to create a useful dictionary. In particular, the two most frequently used approaches, polynomial-kernel projection and linear projection, are compared. Three comparison criteria are used: 1) the compactness and compressing power of the embedding method, 2) the linear separability of ACPs and non-ACPs in the embedding space, and 3) the classification performance.
As shown in Fig. \ref{fig:eigenvalue_PCAvsKPCA_ACP740}, the kernel PCA requires fewer components to represent the same amount of information as the linear PCA. In Fig. \ref{fig:AUC_acp344_PCA_vs_KPCA}, we examine the \emph{area under the receiver operating characteristic} (AUROC) curve for the classification of the ACPs from the ACP344 dataset to further substantiate this assertion. The linear PCA-based SRC and polynomial kernel PCA-based KSRC were tested using dictionaries consisting of the first $10$ principal components. The findings show that the KSRC achieves better classification with fewer features.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{ACP344_AUC_comparison_PC10_MP.png}
\caption{Comparison of the AUROC curves of the SRC and KSRC on the ACP344 dataset under the same configuration using $10$ principal components.}
\label{fig:AUC_acp344_PCA_vs_KPCA}
\end{figure}
Although compactness is important for sparse representation, the linear separability of the class distributions in the embedding space is also an important condition. To that end, the \emph{$t$-distributed stochastic neighbor embedding} (TSNE) \cite{gisbrecht2012linear} plots of the linear and kernel PCA embeddings of the CKSAAP features of the ACP344 dataset are compared in Fig. \ref{fig:latent_space_acp344_PCA_vs_KPCA}. Again, the kernel PCA demonstrates superior linear separability between the ACP and non-ACP samples.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{ACP344_Latent_space_comparison_PC300_TSNE.png}
\caption{Comparison of the linear-PCA and kernel-PCA embedding of the ACP344 dataset using the TSNE and $300$ principal components.}
\label{fig:latent_space_acp344_PCA_vs_KPCA}
\end{figure}
In Fig. \ref{fig:mutation_acp344_KPCA_TSNE_CKSAAP}, we compare variants of the ACP344 dataset to further validate the robustness of the OCD method. In particular, the TSNE plots of the kernel PCA embedding of the CKSAAP features from the original ACP344 dataset and from mutants of the $138$ ACPs in the ACP344 dataset are compared. The objective of this experiment is to assess the sensitivity of the OCD against random mutation. The results show that the separability of the empirical distributions of the ACPs and non-ACPs decreases with the mutation rate. Precisely, when more amino acids in the ACPs are mutated, the likelihood of their having anticancer capabilities decreases. A number of statistical methods are available to quantify this distribution separability, ranging from the \emph{strictly standardized mean difference} (SSMD) \cite{zhang2007use} to distribution overlap \cite{park2020gssmd}. However, our experiments focus on improving classification performance, which is accordingly the most important aspect of our analysis.
\begin{figure*}[!htb]
\centering
\includegraphics[width=1\textwidth]{mutationACP340_TSNE.png}
\caption{Mutation rate and its effect on the feature space: scatter plot of the two TSNE components of the kernel PCA embedding of the CKSAAP features for the original and mutant ACPs.}
\label{fig:mutation_acp344_KPCA_TSNE_CKSAAP}
\end{figure*}
For the ACP344 dataset on 10-fold cross-validation, the kernel PCA-based KSRC method achieves the maximum mean MCC of $0.8590$ with only $80$ principal components. In contrast, the maximum mean MCC of $0.848$ was achieved in the linear PCA-based SRC with $300$ principal components, which is $1.3\%$ lower than the KSRC performance. In the aforementioned experiments, the MP solver was employed for both the SRC and KSRC methods, while all other settings were unchanged.
\subsection{Comparison of the Optimization Algorithms}
\subsubsection{Classification Performance}
One major challenge in sparse representation classification is to obtain a suitable solution to the optimization problem \eqref{equ:l1_prob}. The efficiency of the solution depends on a variety of factors, ranging from the quality of the ODM to the robustness of the solver. A variety of algorithms are available to deal with ill-posed problems like the one given in \eqref{equ:l1_prob}, the most popular being the BP, OMP, and MP. As previously stated, there is a trade-off between the robustness of these algorithms in providing the most sparse solution and their computational efficiency. In the proposed framework, we have adopted the MP algorithm, which is an efficient yet effective sparse recovery algorithm for the task at hand.
To deal with non-linearity and dimension reduction, the polynomial kernel PCA method \cite{scholkopf1997kernel} is used, while the K-means SMOTE \cite{last2017oversampling} is employed to balance the dictionary. The performance of the proposed method is assessed on the benchmark ACP344 dataset for a varying number of principal components. The findings in Fig.~\ref{fig:mcc_acp344_KPCA} show that the performance of the proposed MP-based KSRC is comparable to that of the state-of-the-art BP-based and OMP-based KSRC methods. Specifically, the proposed MP-based ACP-KSRC achieved a mean $10$-fold MCC of $0.8590$ with only $80$ principal components, while the BP- and OMP-based approaches achieved mean $10$-fold MCCs of $0.8550$ and $0.8419$ with $40$ and $175$ principal components, respectively. This demonstrates the efficient sparse recovery of the BP method, which requires the fewest components. However, due to the nature of our investigation, the sparsity of the solution is not the key aspect; rather, we are concerned with classification performance, which is superior in the case of the MP.
{\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{MCC_comparison_ACP344_KPCA.png}
\caption{Classification performance for different optimization solvers with a varying number of principal components for the ACP344 dataset.}
\label{fig:mcc_acp344_KPCA}
\end{figure}}
\subsubsection{Time Complexity}\label{subsec: Time_Complex}
If we set aside the minor performance gain of the MP and examine the required number of principal components, we can see that the MP utilizes more features, which may increase the computational cost. Therefore, we compare the time complexity of the preceding experiments for a varying number of principal components. In particular, the \emph{box plots} of all three solvers are shown in Fig.~\ref{fig:recon_time_acp344_KPCA} on a semi-log scale, displaying the median and quartile values of the total sample reconstruction time. Interestingly, the time complexity of the MP solver grows linearly with the number of principal components, whereas it grows exponentially for the BP and OMP. This means that even with double the number of principal components, the classification time of the MP is still lower than that of the BP and equivalent to that of the OMP with $40$ principal components. With the best-performing OMP configuration of $175$ principal components, the processing time required by the OMP and BP is roughly ten times that of the MP. All experiments were carried out on a freely available \emph{Colab Notebook} equipped with an \emph{Intel Xeon CPU} @2.20\,GHz and $13$\,GB of RAM.
{\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{Time_comparison_ACP344_KPCA.png}
\caption{Reconstruction time for different configurations and a varying number of principal components for the ACP344 dataset.}
\label{fig:recon_time_acp344_KPCA}
\end{figure}}
\subsection{Comparison with State-of-the-Art ACP Classification Approaches}
In this section, we compare the performance of the proposed ACP-KSRC method with that of the current state-of-the-art ACP classification algorithms on the ACP344 \cite{hajisharifi2014predicting} and ACP740 \cite{yi2019acp} datasets. It should be noted that the proposed ACP-KSRC does not have a training phase, and the training data is only used to construct the dictionary matrix. However, to make a fair comparison, the training and testing samples in all methods were kept consistent as described in previously published research. For instance, the ACP344 dataset is assessed using $10$-fold cross-validation, whereas the ACP740 dataset is evaluated using $5$-fold cross-validation. Other important details about the evaluation of specific datasets are given in the relevant subsections.
\subsubsection{ACP344 Dataset}\label{sec:res_acp344}
In Table \ref{tbl:res_acp344}, we compare the performance statistics of different algorithms on the ACP344 dataset. The number of principal components for the dictionary matrix is set at $f=80$ in this experiment. Since the dataset is unbalanced, the conventional accuracy metric is not a suitable representative of the overall performance. To that end, class-specific evaluation parameters such as the MCC and Youden's index are used to indicate the overall classification ability of the classifier.
Notably, the proposed method yields the best results, demonstrating its ability to effectively differentiate the features of the ACPs. In particular, the ACP-KSRC achieved the highest MCC value of $0.85$, which is $27.06\%$ higher than the ACP-DL, $1.18\%$ higher than the ACP-LDF with the RF and SVM classifiers, $2.35\%$ higher than the ACP-LDF with the \emph{LibD3C} classifier, and $4.71\%$ higher than the SAP with the SVM classifier \cite{ge2020enacp}. This supports the claim that the proposed method can be used to predict new ACPs or ACP-like peptides. Other assessment metrics mirror this efficacy, indicating a clear separation between the ACPs and non-ACPs.
\begin{table}[!htb]
\caption{Performance comparison of the ACP-KSRC with contemporary methods on the ACP344 dataset.}
\label{tbl:res_acp344}
\begin{center}
\resizebox{1\columnwidth}{!}
{\begin{tabular}{lcccccccc}
\hline
{\bf Methods} & {$S_n$} & {$S_p$}& {\bf Acc.} & {\bf Bal. Acc.} & {\bf MCC} &{\bf YI } & {\bf F1-score} &{\bf Classifier} \\
\hline
SAP \cite{ge2020enacp} & 86.23\% & 95.63\% & 91.86\% & 90.93\% & 0.83 & 0.81 & 0.89 & SVM \\ \hline
ACP-LDF \cite{ge2020enacp} & 87.70\% & 96.10\% & 92.73\% & 91.90\% & 0.84 & 0.83 & 0.92 & SVM \\ \hline
ACP-LDF \cite{ge2020enacp} & 85.50\% & 96.10\% & 92.15\% & 91.05\% & 0.83 & 0.82 & 0.92 & LibD3C \\ \hline
ACP-LDF \cite{ge2020enacp} & 86.20\% & 97.10\% & 92.70\% & 91.65\% & 0.84 & 0.83 & 0.92 & RF \\ \hline
ACP-DL \cite{yi2019acp} & 75.82\% & 86.32\% & 82.16\% & 81.07\% & 0.62 & 0.62 & 0.77 & LSTM \\ \hline
ACP-KSRC & 97.07\% & 86.97\% & 93.02\% & 91.89\% & 0.85 & 0.84 & 0.94 & SRC \\ \hline
\end{tabular} }
\end{center}
\end{table}
\subsubsection{ACP740 Dataset}\label{sec:res_acp740}
We compare our results for the ACP740 dataset with those of the ACP-DL \cite{yi2019acp} and ACP-DA \cite{chen2021acp} in Table \ref{tbl:res_acp740}, as these are the only algorithms that used the ACP740 dataset. The proposed method outperforms both algorithms in terms of the class-specific evaluation parameter MCC for $f=100$ principal components. In particular, the ACP-KSRC achieved the highest MCC value of $0.67$, which is $6\%$ and $4.48\%$ higher than the ACP-DL and ACP-DA, respectively. This efficacy is also reflected in other evaluation metrics, indicating the ability of the ACP-KSRC to discriminate between ACPs and non-ACPs. This suggests that the proposed method can be used to predict ACPs or ACP-like peptides.
\begin{table}[!ht]
\caption{Performance comparison of the ACP-KSRC and contemporary methods on the ACP740 dataset.}
\label{tbl:res_acp740}
\begin{center}
\resizebox{1\columnwidth}{!}
{\begin{tabular}{lcccccccc}
\hline
{\bf Methods} & $S_n$ & $S_p$ & {\bf Acc.} & {\bf Bal. Acc.} & {\bf MCC} &{\bf YI } & {\bf F1-score} &{\bf Classifier} \\
\hline
ACP-DL \cite{yi2019acp} & 82.61\% & 80.59\% & 83.48\% & 83.3\% & 0.63 & 0.62 & 0.71& LSTM \\ \hline
ACP-DA \cite{chen2021acp} & 86.98\% & 83.26\% & 82.03\% & 85.12\% & 0.64 & 0.70 & 0.85 & MLP \\ \hline
ACP-KSRC & 86.23\% & 81.62\% & 83.91\% & 83.94\% & 0.67 & 0.67 & 0.84 & SRC \\ \hline
\end{tabular}}
\end{center}
\end{table}
\subsection{Mutation Analysis}
For mutation sensitivity analysis, we have used two ACP samples randomly selected from the ACP740 dataset. Two separate peptides were chosen to have distinct 3D structures, i.e., one had more redundant amino acids than the other. Different mutants of the sequences were constructed for sensitivity analysis. Table \ref{tbl:res_acp740_mutation_10fold} lists the original and mutant sequences.
\begin{table}[!ht]
\caption{Mutation effect and classification-score sensitivity of ACP-KSRC.}
\label{tbl:res_acp740_mutation_10fold}
\begin{center}
\resizebox{1\columnwidth}{!}
{\begin{tabular}{l|c|c}
\hline
{\bf Mutation / (Seq-ID)} & {\bf Sequence} &{\bf Classification-Score} \\
\hline
\hline
Original / (A) & ALSKALSKALSKALSKALSKALSK & $0.781$ \\ \hline
\hline
Point Mutation / (B) & ALSKALSKALS\textbf{E}ALSKALSKALSK & $0.762$ \\ \hline
Loop Mutation / (C) & ALSKALSKALSK\textbf{SQAE}ALSKALSK & $0.759$ \\ \hline
\hline
Original / (D) & ACDCRGDCFCGGGGIVRRADRAAVP & $0.745$ \\ \hline
\hline
Point Mutation / (E) & ACDCRG\textbf{K}CFCGGGGIVRRADRAAVP & $0.705$ \\ \hline
Point Mutation / (F) & ACDCRGDCFCGGGGIVRRA\textbf{K}RAAVP & $0.710$ \\ \hline
Double Point Mutation / (G) & ACDCRG\textbf{K}CFCGGGGIVRRA\textbf{K}RAAVP & $0.650$ \\ \hline
Loop Mutation / (H) & ACDCRGDCFC\textbf{SSSS}IVRRADRAAVP & $0.556$ \\ \hline
\end{tabular} }
\end{center}
\end{table}
For a single point mutation analysis, one amino acid from the middle of the peptide sequence is substituted with an amino acid having the opposite property (e.g., non-polar to polar, positive to negative charge, etc.). Similar criteria are employed for double point mutations, but the process is repeated for two amino acids. In loop mutation, multiple peptide composition pairs are replaced with their opposite counterparts.
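The mutants in the table above are simple string edits on the peptide sequences; a minimal sketch (the helper names are ours, not from any library):

```python
def point_mutation(seq, pos, residue):
    """Substitute the residue at 0-based position pos (e.g., K -> E)."""
    return seq[:pos] + residue + seq[pos + 1:]

def loop_mutation(seq, start, replacement):
    """Replace len(replacement) consecutive residues starting at start."""
    return seq[:start] + replacement + seq[start + len(replacement):]

# Sequence (A) -> (B): the K at position 11 is swapped for the oppositely
# charged E; (A) -> (C): the ALSK block at positions 12-15 becomes SQAE.
seq_a = "ALSKALSKALSKALSKALSKALSK"
seq_b = point_mutation(seq_a, 11, "E")   # ALSKALSKALSEALSKALSKALSK
seq_c = loop_mutation(seq_a, 12, "SQAE") # ALSKALSKALSKSQAEALSKALSK
```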
For all the sequences given in Table~\ref{tbl:res_acp740_mutation_10fold}, the classification score is predicted using the proposed ACP-KSRC method. The prediction scores were generated using the OCD of the ACP740 dataset after $10$-fold cross-validation, and in each fold every sequence from Table~\ref{tbl:res_acp740_mutation_10fold} was used as an independent test sample. Finally, the average prediction score was calculated by taking the mean of the individual fold results. It is worth pointing out that the mutant sequences were not used in the design of the OCD matrix.
The prediction score decreases for both peptides with higher mutation rates, as expected. However, the classification score is more sensitive to mutation for sequence (D) than for sequence (A). To investigate this behavior, \emph{Alphafold2} \cite{jumper2021highly} was used to predict the 3D structures of the sequences in Table~\ref{tbl:res_acp740_mutation_10fold}.
It can be observed in Fig.~\ref{fig:Protein_Structure_AlphaFold} that the alpha-helix structure of sequence (A) is unaffected by the point mutation (B) and the loop mutation (C); however, the flexible structure of sequence (D) exhibits a notable difference even with a single point mutation (E). This 3D structure-based mutation analysis demonstrates that the prediction score of the proposed ACP-KSRC is sensitive to structural variation and, hence, can serve as a valuable tool for large-scale ACP screening.
\begin{figure}[!htb]
\centering
\includegraphics[width=1\textwidth]{Protein_Structure_AlphaFold.png}
\caption{\emph{AlphaFold2} predicted 3D structures of original and mutant peptide sequences. (A) and (D) structure of original sequences in Table~\ref{tbl:res_acp740_mutation_10fold}. (B), (E) and (F) structures after point mutation. (G) structure after double-point-mutation. (C) and (H) structures after loop mutation.}
\label{fig:Protein_Structure_AlphaFold}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
Cancer, being among the most challenging diseases due to its complexity and heterogeneity, requires multifaceted therapeutic approaches. Anticancer peptides (ACPs) provide a promising perspective for cancer treatment, but their large-scale identification and synthesis require reliable prediction methods. In this study, we have provided an ACP classification strategy that combines sparse representation classification with kernel principal component analysis (the proposed ACP-KSRC). Unlike conventional \emph{black-box} methods, the proposed approach relies on the well-understood statistical theory of sparse representation classification. In particular, we have designed over-complete dictionary matrices using the embedding of the composition of the K-spaced amino-acid pairs (CKSAAP). To deal with non-linearity and dimension reduction, the kernel principal component analysis (KPCA) method is used, while the SMOTE oversampling technique is utilized to balance the dictionary. The proposed method is evaluated on two benchmark datasets using well-known statistical parameters and is found to outperform the existing methods. The results indicate the highest sensitivity together with the highest balanced accuracy, which can be useful for understanding structural and chemical properties and for the development of new ACPs.
\section*{Competing interests}
All the authors declare that they have no competing interests.
\section*{Acknowledgment}
The authors would like to thank Dr. Shujaat Khan for his valuable suggestions and for providing a Python implementation of his sparse representation classification toolbox. This work was supported by the Nazarbayev University, Kazakhstan through Faculty Development Competitive Research Grant Program (FDCRGP) under Grant 1022021FD2914.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}
According to the \emph{global cancer statistics 2020} \cite{sung2021global}, cancer is one of the leading causes of mortality worldwide.
It is a diversified group of numerous complicated diseases, rather than a single one, marked by uncontrolled cell growth and a propensity to rapidly spread or infiltrate other body parts. Cancer's inherent complexity and heterogeneity have proven to be significant barriers to the development of effective anticancer therapies \cite{basith2017expediting}. Cancer can be treated with conventional clinical methods such as surgery, radiation, and chemotherapy, but these methods have drawbacks that can be painful for patients \cite{kaur2022data}. Although the aforementioned conventional methods deliver positive outcomes, they can also have some substantial adverse effects, including myelosuppression, cardiac toxicity, and gastrointestinal damage \cite{basak2021comparison}.
The discovery of anticancer peptides (ACPs) has transformed the paradigm for treating cancer. The ACPs can interact with the anionic cellular components of cancer cells and repair them selectively without harming normal or healthy cells in the body. This amazing feature of the ACPs is vital for therapeutic strategies. The ACPs are typically composed of $5$ to $50$ amino acids that are often synthesized using antimicrobial peptides (AMPs), many of which have cationic characteristics. These features have resulted in the development of novel alternative cancer therapies.
The biggest challenge with the ACPs is distinguishing them from other synthetic or natural peptides \cite{yi2019acp}. Researchers employ a variety of approaches to identify the ACPs \cite{tyagi2013silico}. Although the experimental procedures are gold-standard methods, they are costly and time-consuming, and hence, unsuitable for large-scale searches for prospective ACP candidates. As a result, alternative methodologies for identifying ACPs are desired.
Technical advances in artificial intelligence (AI) have substantiated that it is a powerful tool for dealing with incredibly complex situations \cite{atif2022multi}. Many studies have used machine learning models to predict proteins and classify peptide sequences; see, for instance, \cite{khan2018rafp,usman2020afp,park2020e3,al2021ecm,usman2021aop,chen2018ifeature}. Even for the ACPs alone, there are several \emph{in silico} approaches for identifying new ACPs. For instance, Tyagi et al. have proposed a \emph{support vector machine} (SVM)-based classification algorithm in \cite{tyagi2013silico}. Another study, \cite{hajisharifi2014predicting}, employed Chou's \emph{pseudo amino acid composition} to predict the ACPs and tested their mutagenicity using the \emph{Ames test}. Generalized chaos game representation methods \cite{ge2019identifying}, deep learning-based short-term memory models \cite{yi2019acp}, ensemble learning models \cite{ge2020enacp}, augmentation strategies for improved classification performance \cite{chen2021acp}, and \emph{ETree classifiers}-based on \emph{amino acid composition} (AAC) \cite{agrawal2021anticp} are examples of alternative approaches.
Although existing machine learning techniques have some advantages for ACP prediction, there is still room for improvement. For instance, deep learning models provide cutting-edge performance, but their \emph{black-box} nature obscures the classification judgment. A relatively simple model, on the other hand, may not provide adequate classification accuracy. To that end, the \emph{sparse-representation classification} (SRC) method provides a good balance, where constrained optimization is a proven method for explainable sparse modeling \cite{naseem2017ecmsrc,li2023multi,wright2008robust,hofmann2008kernel}. The basic principle of the SRC is that a test sample may be reconstructed using a linear combination of dictionary items with sparse weights \cite{usman2022afp,naseem2010sparse,bengio2013representation}.
The SRC is a non-parametric learning approach in which the magnitude of a sparse vector corresponds to the contribution of the dictionary atoms \cite{elad2010sparse,li2013simultaneous}.
Various sparse solvers can be used to tackle such ill-posed optimization problems. The \emph{basis-pursuit} (BP) \cite{chen1994basis}, \emph{orthogonal-matching-pursuit} (OMP) \cite{pati1993orthogonal}, and \emph{matching-pursuit} (MP) \cite{gharavi1998fast} are some prominent methods for the SRC. These strategies employ $l_1$-norm regularization to relax the rigid $l_0$-norm sparsity constraint, allowing gradient estimation from continuous error surfaces \cite{zhang2010sparseness,mandal2016employing}. The BP furnishes the most sparse solution, but its computational cost grows exponentially. The MP is faster than the BP and OMP, although its sparsity is not guaranteed.
Aside from the optimization approach, the efficacy of the \emph{over-complete dictionary} (OCD) is the most important feature for the construction of an SRC model. In this regard, Zhang et al. proposed a kernel SRC \cite{zhang2011kernel}. The kernel mapping converts the nonlinear relationship between different atoms (samples in OCD) to a linear relationship, allowing the classification of even more complex patterns \cite{atif2022multi,khan2017novel,zhang2011kernel}. Furthermore, a \emph{composition of K-spaced amino-acid pairs} (CKSAAP) is employed to capture a diverse range of peptide sequences, yielding a comprehensive feature vector.
Motivated by the success of the SRC and the \emph{kernel trick}, we propose in this work to combine polynomial kernel-based \emph{principal component analysis} (PCA) embedding to reduce the feature space dimensions and \emph{synthetic minority oversampling technique} (SMOTE) using $K$-Means \cite{last2017oversampling} to balance the sample space dimension for the construction of the \emph{kernel SRC} model. Details of the proposed approach, including datasets, feature encoding techniques, and classification methods, are furnished in Section \ref{sec:method}. The experimental analysis and discussion of results are provided in Section \ref{sec:result}. The paper is concluded in Section \ref{sec:conclusion}.
\section{Proposed Approach}\label{sec:method}
We propose a \emph{kernel sparse representation classification} (KSRC) method in this section, which includes feature encoding, dimension reduction for the design of the \emph{over-complete dictionary matrix} (ODM), and $n$-fold cross-validation for model evaluation. Fig. \ref{fig:overview} shows the complete block diagram describing the overall classification process. Individual steps are described in detail in the following subsections.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{flowchart.png}
\caption{Overview of the proposed ACPs classification strategy.}
\label{fig:overview}
\end{figure}
\subsection{Dataset} \label{sec:dataset}
There are many datasets available, including those in \cite{hajisharifi2014predicting, chen2016iacp, wei2018acpred}. Three benchmark datasets are used in this work to design and evaluate the ACP classification strategy. The first dataset, \emph{ACP344}, was obtained from \cite{hajisharifi2014predicting} and contains $344$ peptide sequences, $138$ of which are ACPs and the remaining $206$ are non-ACP samples. The second dataset, \emph{ACP740}, was obtained from previous studies by Chen et al. \cite{chen2016iacp} and Wei et al. \cite{wei2018acpred}. It contains $740$ peptide sequences, $376$ of which are ACP samples and $364$ are non-ACP samples. A filtered and curated version of the ACP740 dataset can be found in \cite{yi2019acp}. Different classifiers are designed and evaluated for each dataset according to the protocols reported in \cite{chen2021acp}. Specifically, $10$-fold cross-validation is used for ACP344, whereas $5$-fold cross-validation is used for ACP740. In the third dataset, two ACP samples were chosen at random from the ACP740 \cite{yi2019acp} dataset, and different mutations were developed for mutation sensitivity analysis.
It is worth noting that this independent mutant dataset is solely utilized for mutation analysis and is not included in the design of the ODM.
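The fold protocols above can be sketched as a stratified split so that each fold preserves the ACP/non-ACP ratio. The helper below is a minimal illustration (the function name and seed are ours), shown with the ACP344 class sizes:

```python
import numpy as np

def stratified_folds(y, n_splits, seed=0):
    """Return n_splits test-index arrays, each preserving the class ratio of y."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(n_splits)]
    for label in np.unique(y):
        # Shuffle each class separately, then deal its indices across folds.
        idx = rng.permutation(np.flatnonzero(y == label))
        for i, chunk in enumerate(np.array_split(idx, n_splits)):
            folds[i].extend(chunk.tolist())
    return [np.array(sorted(f)) for f in folds]

# ACP344 protocol: 138 ACPs (label 1), 206 non-ACPs (label 0), 10 folds.
y = np.array([1] * 138 + [0] * 206)
folds = stratified_folds(y, 10)
# For each fold, the remaining nine folds form the dictionary (ODM) and the
# held-out fold provides the test samples.
```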
\subsection{Features Encoding}\label{features}
Protein or peptide sequences are often recorded and stored in \emph{FastA} format, with each amino acid represented by an alphabetic symbol; see, for instance, \cite{binz2019proteomics}. These variable-length alphabetic sequences are processed using a variety of sequence encoding techniques, such as AAC, \emph{di-peptide AAC} (DAAC), etc., to extract numerically meaningful features. The AAC is the most basic feature encoding approach, providing a feature vector containing the frequency count of essential amino acids; hence, the overall AAC feature vector length equals the total number of amino acids, i.e., $20$. Similarly, the DAAC is the frequency of peptide pairings, with the total length of the feature vector equal to the number of possible combinations of $20$ amino acid pairs (i.e., $20\times 20 = 400 $). The DAAC feature vector containing the frequencies of $0$-spaced amino acid pairs (i.e., the DAAC of amino acid pairs separated by $K = 0$ residues) is given mathematically by
\begin{align*}
{\boldsymbol{\psi}}_0:=
\begin{bmatrix}
\dfrac{\psi_{A A}}{N_0}& \dfrac{\psi_{A C}}{N_0}& \dfrac{\psi_{A D}}{N_0}& \cdots& \dfrac{\psi_{Y Y}}{N_0}
\end{bmatrix}^T\in\mathbb{R}^{400}.
\end{align*}
Here, $\psi_{\rm string}$ is the DAAC descriptor furnishing the frequency of the peptide pairing described by the \textit{string} and $N_k:=L_x-(k+1)$ is the number of local sequence windows defined in terms of the protein sequence length $L_x$ and the number of residues $k$ with $0\leq k\leq K$.
Both the AAC and DAAC are widely used sequence encoding methods and have been successfully used to design classifiers for various protein and peptide sequences \cite{chen2018ifeature}. However, these techniques are limited in their representation as they do not cover the diverse pairing patterns of the amino acids. To improve the pattern capture of the DAAC, a modified version is proposed in \cite{chen2018ifeature} that concatenates the DAAC feature vectors of at most $K$-spaced amino acid pairs. For example, for $K=2$, we need to calculate ${\boldsymbol{\psi}}_k$ for $k=0,1,2$, and the final CKSAAP feature vector, $\Psi_{K}$, will be a concatenated version of ${\boldsymbol{\psi}}_0$, ${\boldsymbol{\psi}}_1$, and ${\boldsymbol{\psi}}_2$. Here,
\begin{align*}
{\boldsymbol{\psi}}_1:=&
\begin{bmatrix}
\dfrac{\psi_{A x A}}{N_1}& \dfrac{\psi_{AxC}}{N_1}& \dfrac{\psi_{AxD}}{N_1}& \cdots& \dfrac{\psi_{YxY}}{N_1}
\end{bmatrix}^T\in\mathbb{R}^{400},
\\
{\boldsymbol{\psi}}_2:=&
\begin{bmatrix}
\dfrac{\psi_{A x x A}}{N_2}& \dfrac{\psi_{A x x C}}{N_2}& \dfrac{\psi_{A x x D}}{N_2}& \cdots& \dfrac{\psi_{Y x x Y}}{N_2}
\end{bmatrix}^T\in\mathbb{R}^{400},
\\
\Psi_{K}:=&
\begin{bmatrix}
{\boldsymbol{\psi}}_0^T& {\boldsymbol{\psi}}_1^T& \cdots& {\boldsymbol{\psi}}_K^T
\end{bmatrix}^T
\in \mathbb{R}^{400(K+1)}.
\end{align*}
Here, $k$ represents the gap value used for the calculation of the $k$th DAAC feature vector (${\boldsymbol{\psi}}_k$), whereas $K$ is the largest gap for which the CKSAAP feature vector $\Psi_{K}$ is calculated. Fig. \ref{fig_comp} shows an example of the ${\boldsymbol{\psi}}_1$ calculation.
\begin{figure*}[!htb]
\centering
\includegraphics[width=1\textwidth]{cksaap_features.PNG}
\caption{Illustration of the $k$-spaced DAAC descriptor ${\boldsymbol{\psi}}_k$ calculation for $k=1$. Extracted from \cite{usman2020afp}.}
\label{fig_comp}
\end{figure*}
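Following the definitions above, ${\boldsymbol{\psi}}_k$ can be computed by counting residue pairs separated by $k$ positions and normalizing by $N_k=L_x-(k+1)$. The sketch below is our own minimal helper (not the iFeature implementation):

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"                          # 20 residues
PAIRS = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]  # 400 ordered pairs

def cksaap(seq, K):
    """Concatenate psi_k for k = 0..K; each psi_k holds 400 normalised counts."""
    feats = []
    L = len(seq)
    for k in range(K + 1):
        counts = dict.fromkeys(PAIRS, 0)
        n_k = L - (k + 1)                    # number of local windows N_k
        for i in range(n_k):
            pair = seq[i] + seq[i + k + 1]   # residues separated by k positions
            if pair in counts:
                counts[pair] += 1
        feats.extend(counts[p] / n_k for p in PAIRS)
    return feats

# For K = 2 the vector has 400 * 3 = 1200 entries (psi_0, psi_1, psi_2).
vec = cksaap("ALSKALSKALSK", K=2)
```

With $K=8$, as used later in this paper, the same helper produces the $400\times 9 = 3600$-dimensional feature vector.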
\subsection{Dimensionality Reduction using Kernel PCA}
In machine learning, a large amount of data is often considered useful. A \emph{curse of dimensionality} nonetheless arises when there are few measurements or samples but many attributes (i.e., $A>M$, with $A$ and $M$ being the numbers of attributes and measurements, respectively). In this study, our dataset has a small number of samples (less than $1,000$), but the description from the CKSAAP with $K=8$ contains $3,600$ attributes. This curse of dimensionality not only makes our classification problem mathematically ill-posed but also hinders the design of an OCD for sparse representation, which requires $A<M$. To that end, we use principal component analysis to prune the least informative dimensions, retaining $f<M=N+P$ components from the original feature space of $A=3600$ attributes, allowing us to design an ODM of size $f\times M$. Here, $N$ and $P$ are, respectively, the numbers of negative and positive samples in the dictionary.
We employ linear and non-linear projection methods to provide a comparison of the SRC and KSRC methods. A comparison of the eigenvalue spread is presented in Fig.~\ref{fig:eigenvalue_PCAvsKPCA_ACP740} for the ACP740 dataset. Specifically, using the training portion of one out of the $5$ cross-validation folds (i.e., $n=5$), the feature set of $A=3600$ attributes and $M=740\times (n-1)/n = 592$ measurements (samples) is projected into the linear and kernel eigenspaces, and the top $600$ eigenvalues are plotted. It can be observed that the KPCA compresses the feature dimension more effectively; hence, the relative eigenvalues in the KPCA are smaller. It is also worth noting that the actual number of samples used for the design of the OCD is limited to the training samples of the $n$-fold cross-validation. To avoid data leakage, no test sample is used for the estimation of the kernel or for the design of the OCD in our proposed method. First, a PCA projection is learned on the training samples, and the test samples are then projected onto the same space using the already learned projection.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{pca_vs_kpca.png}
\caption{Eigenvalues of the top $500$ principal components.}
\label{fig:eigenvalue_PCAvsKPCA_ACP740}
\end{figure}
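The leakage-free procedure just described can be sketched in a few lines of NumPy: the polynomial kernel matrix is computed on training samples only, centered in feature space, and new samples are projected with the learned dual coefficients. The `degree` and `coef0` values here are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def poly_kpca_fit(X, n_components, degree=3, coef0=1.0):
    """Fit polynomial-kernel PCA on the training rows of X only (no leakage)."""
    K = (X @ X.T + coef0) ** degree
    n = K.shape[0]
    J = np.full((n, n), 1.0 / n)
    Kc = K - J @ K - K @ J + J @ K @ J        # centre the kernel matrix
    w, V = np.linalg.eigh(Kc)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]  # keep the largest components
    alphas = V[:, idx] / np.sqrt(w[idx])      # normalised dual coefficients
    return {"X": X, "K": K, "alphas": alphas, "degree": degree, "coef0": coef0}

def poly_kpca_transform(model, X_new):
    """Project new samples using the projection learned from training data."""
    K_new = (X_new @ model["X"].T + model["coef0"]) ** model["degree"]
    n = model["K"].shape[0]
    J = np.full((n, n), 1.0 / n)
    Jm = np.full((K_new.shape[0], n), 1.0 / n)
    K_new_c = K_new - Jm @ model["K"] - K_new @ J + Jm @ model["K"] @ J
    return K_new_c @ model["alphas"]

# Toy stand-in for CKSAAP features (3600 attributes, few samples).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((30, 3600))
X_test = rng.standard_normal((5, 3600))
model = poly_kpca_fit(X_train, n_components=10)
Z_train = poly_kpca_transform(model, X_train)
Z_test = poly_kpca_transform(model, X_test)
```

Because the kernel is centered on the training set, the projected training samples are zero-mean along each component, and the test samples reuse exactly the same centering statistics.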
\subsection{Over-complete Dictionary Matrix for Sparse Representation Classification}
The ODM represents the matrix consisting of feature vectors of ACPs and non-ACPs and is composed of atoms (i.e., training sample vectors). The ODM is used for the SRC in which all ACPs and non-ACPs are characterized using class indices $l=1$ and $l=2$, respectively.
In this section, we take $f$, $N$, and $P$ as the numbers of features, training samples of the negative class, and training samples of the positive class, respectively. If $\mathbf{d}_i^{(l)}\in\mathbb{R}^{f}$ represents the $i$th training sample from the $l$th class, then the ODM, $\mathbf{D}\in \mathbb{R}^{f\times(N+P)}$, is formed as
\begin{align*}
\mathbf{D}:=\begin{bmatrix}
\mathbf{d}_1^{(1)} & \mathbf{d}_2^{(1)}& \cdots & \mathbf{d}_N^{(1)} & \mathbf{d}_1^{(2)} & \mathbf{d}_2^{(2)} & \cdots & \mathbf{d}_P^{(2)}\end{bmatrix}.
\end{align*}
A test sample vector $\mathbf{t}\in \mathbb{R}^f$ can be represented as
\begin{align*}
\mathbf{t}=\mathbf{D}{\boldsymbol{\gamma}},
\end{align*}
where the coefficient vector ${\boldsymbol{\gamma}}\in\mathbb{R}^{N+P}$ is defined by
\begin{align*}
{\boldsymbol{\gamma}}:=\begin{bmatrix} \gamma_{1}^{(1)} & \gamma_{2}^{(1)}& \cdots & \gamma_{N}^{(1)}& \gamma_{1}^{(2)}& \gamma_{2}^{(2)}& \cdots &\gamma_{P}^{(2)}\end{bmatrix}^T.
\end{align*}
If the true class label of the test sample $\mathbf{t}$ is $l=1$ then all entries $\gamma_{1}^{(2)}, \gamma_{2}^{(2)}, \cdots, \gamma_{P}^{(2)}$ should be zero. Similarly, if the true class label is $l=2$ then all entries
$\gamma_{1}^{(1)}, \gamma_{2}^{(1)}, \cdots, \gamma_{N}^{(1)}$ should be zero.
According to \emph{sparse reconstruction theory}, if the dictionary $\mathbf{D}$ is given then the sparse vector ${\boldsymbol{\gamma}}$ can be recovered \cite{naseem2008sparse,naseem2010sparse}. In principle, the most sparse ${\boldsymbol{\gamma}}$ can be sought as the solution to the optimization problem
\begin{equation}
\arg \displaystyle{\min_{\boldsymbol{\gamma}}} \left\|{\boldsymbol{\gamma}}\right\|_{0}\ \ \mbox{subject to}\ \ \mathbf{t}=\mathbf{D}{\boldsymbol{\gamma}},
\label{l0}
\end{equation}
where $\left\|\cdot\right\|_{0}$ is the $l_0$-norm that counts the number of non-zero entries in the vector.
The constrained optimization problem \eqref{l0} is non-convex, which makes it difficult to find the optimal vector ${\boldsymbol{\gamma}}$. Several algorithms for recovering the sparse vector ${\boldsymbol{\gamma}}$ by solving a \emph{convex relaxation} of the constrained optimization problem \eqref{l0} have been proposed in the literature. To that end, these algorithms make use of the $l_1$-norm to solve the \emph{relaxed} optimization problem
\begin{equation}
\arg{\min_{\boldsymbol{\gamma}}} \left\|{\boldsymbol{\gamma}}\right\|_{1}\ \ \mbox{subject to}\ \ \mathbf{t}=\mathbf{D}{\boldsymbol{\gamma}}.
\label{equ:l1_prob}
\end{equation}
Some notable techniques for solving the optimization problem \eqref{equ:l1_prob} are the BP \cite{chen1994basis}, OMP \cite{pati1993orthogonal}, and MP \cite{gharavi1998fast}. Among these algorithms, the BP is considered the most robust method as it furnishes the most sparse solution, but its computing cost grows exponentially. The OMP technique can provide a reasonable trade-off between sparsity and computational complexity, but the latter is also very high. On the other hand, MP is faster than the BP and OMP techniques, although its optimality in terms of sparsity is not guaranteed. In the proposed approach, we use the MP as the $l_1$-minimization algorithm because of its suitability for the task at hand.
It should be noted that ${\boldsymbol{\gamma}}$ is expected to contain high-value entries corresponding to the columns of $\mathbf{D}$ that are relevant to the class label of the probe $\mathbf{t}$. This embedded information about the class label of $\mathbf{t}$ can be used to identify $\mathbf{t}$. Let
\begin{align*}
e_l(\mathbf{t}):=\left\|\mathbf{t}-\mathbf{D}{\boldsymbol{\theta}}_l({\boldsymbol{\gamma}})\right\|_2, \qquad \ l=1,2,
\end{align*}
where the vector ${\boldsymbol{\theta}}_l({\boldsymbol{\gamma}})$ has all zero entries except at the locations corresponding to class $l$ where the value is one. The decision can be ruled in favor of the class using minimum reconstruction error, i.e.,
\begin{align*}
\mbox{class-label}(\mathbf{t})=\arg {\min_l} \left(e_l (\mathbf{t})\right).
\end{align*}
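The decision rule above can be sketched end-to-end with a greedy matching-pursuit solver. This is a bare-bones illustration of the principle (not the toolbox implementation used in this paper), assuming $l_2$-normalized dictionary columns:

```python
import numpy as np

def matching_pursuit(D, t, n_iter=10):
    """Greedy MP: sparse gamma with t ~ D @ gamma (columns of D unit-norm)."""
    gamma = np.zeros(D.shape[1])
    r = t.astype(float).copy()
    for _ in range(n_iter):
        corr = D.T @ r                 # correlate the residual with all atoms
        j = int(np.argmax(np.abs(corr)))
        gamma[j] += corr[j]            # accumulate the selected coefficient
        r -= corr[j] * D[:, j]         # deflate the residual
    return gamma

def src_classify(D, labels, t, n_iter=10):
    """Class with minimum reconstruction error e_l(t) = ||t - D theta_l(gamma)||."""
    gamma = matching_pursuit(D, t, n_iter)
    best, best_err = None, np.inf
    for l in np.unique(labels):
        err = np.linalg.norm(t - D @ np.where(labels == l, gamma, 0.0))
        if err < best_err:
            best, best_err = l, err
    return best

# Toy dictionary: class-1 atoms cluster along axis 0, class-2 along axis 1.
rng = np.random.default_rng(1)
A1 = rng.standard_normal((5, 4)) * 0.1; A1[0] += 1.0
A2 = rng.standard_normal((5, 4)) * 0.1; A2[1] += 1.0
D = np.hstack([A1, A2]); D /= np.linalg.norm(D, axis=0)
labels = np.array([1] * 4 + [2] * 4)
probe = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # resembles the class-1 atoms
```

Since the probe correlates strongly with the class-1 atoms, the sparse coefficients concentrate on that class and the class-1 reconstruction error is the smaller of the two.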
\subsection{Evaluation Protocol}
The proposed algorithm has been evaluated for \emph{true positive rate} (TPR) or sensitivity ($S_n$), \emph{true negative rate} (TNR) or
specificity ($S_p$), \emph{prediction accuracy} (${\rm Acc}$), \emph{Matthew's correlation coefficient} (${\rm MCC}$), \emph{balanced accuracy} (${\rm Bal. Acc.}$), \emph{Youden's index} (${\rm YI}$), and \emph{$F1$ Score} defined as
\begin{align*}
&S_n := \frac{TP}{TP+FN},
\\
&S_p := \frac{TN}{TN+FP},
\\
&\text{Acc.} := \frac{TP+TN}{TP+TN+FP+FN},
\\
&{\rm MCC} :={\frac{TP\cdot TN-FP\cdot FN}{\sqrt{\Delta}}},
\\
&\text{Bal. Acc.} := \frac{S_n + S_p}{2},
\\
&\text{YI} := S_n + S_p - 1,
\\
&\text{F$1$ Score} := 2\cdot\frac {\text{Precision}\cdot S_n}{\text{Precision} + S_n},
\end{align*}
where $TP$, $FP$, $TN$, and $FN$ indicate the true positive, false positive, true negative, and false negative, respectively. Here,
\begin{align*}
&\text{Precision} := \frac{TP}{TP+FP},
\\
&\Delta := (TP+FP)(TN+FN)(TP+FN)(TN+FP).
\end{align*}
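The definitions above translate directly into code; a small helper (the naming is ours) that returns all seven scores from the confusion-matrix counts:

```python
import math

def scores(tp, fp, tn, fn):
    """Evaluation metrics as defined above, from the confusion-matrix counts."""
    sn = tp / (tp + fn)                      # sensitivity / TPR
    sp = tn / (tn + fp)                      # specificity / TNR
    precision = tp / (tp + fp)
    delta = (tp + fp) * (tn + fn) * (tp + fn) * (tn + fp)
    return {
        "Sn": sn,
        "Sp": sp,
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "Bal. Acc": (sn + sp) / 2,
        "MCC": (tp * tn - fp * fn) / math.sqrt(delta),
        "YI": sn + sp - 1,
        "F1": 2 * precision * sn / (precision + sn),
    }

# Toy counts for illustration only (not results from the paper).
m = scores(tp=9, fp=2, tn=8, fn=1)
```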
\section{Experimental Results} \label{sec:result}
In this section, we perform different experiments to validate our methodology, supporting the selection of various hyper-parameters, solver approaches, embedding strategies, the number of principal components, etc.
\subsection{Comparison of Dictionary Matrices}
The robustness and effectiveness of the ODM are the most critical elements of a sparse representation classifier. We employ principal component embedding of the CKSAAP features to create a useful dictionary. In particular, the two most frequently used approaches, polynomial-kernel projection and linear projection, are compared. Three comparison criteria are used: 1) the compactness and compressing power of the embedding method, 2) the linear separability of ACPs and non-ACPs in the embedding space, and 3) the classification performance.
As shown in Fig. \ref{fig:eigenvalue_PCAvsKPCA_ACP740}, the kernel PCA requires fewer components to represent the same amount of information as the linear PCA. In Fig. \ref{fig:AUC_acp344_PCA_vs_KPCA}, we examine the \emph{area under the receiver operating characteristic} (AUROC) curve for the classification of the ACPs from the ACP344 dataset to further substantiate this assertion. The linear PCA-based SRC and polynomial kernel PCA-based KSRC were tested using dictionaries consisting of the first $10$ principal components. The findings show that the KSRC achieves better classification with fewer features.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{ACP344_AUC_comparison_PC10_MP.png}
\caption{Comparison of the AUROC curves of the SRC and KSRC on the ACP344 dataset for the same configuration using $10$ principal components.}
\label{fig:AUC_acp344_PCA_vs_KPCA}
\end{figure}
Although compactness is important for sparse representation, the linear separability of the class distributions in the embedding space is also an important condition. To that end, the $t$-distributed stochastic neighbor embedding (TSNE) \cite{gisbrecht2012linear} plots of the linear and kernel PCA embeddings of the CKSAAP features of the ACP344 dataset are compared in Fig. \ref{fig:latent_space_acp344_PCA_vs_KPCA}. Again, the kernel PCA demonstrates superior linear separability between the ACP and non-ACP samples.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{ACP344_Latent_space_comparison_PC300_TSNE.png}
\caption{Comparison of the linear-PCA and kernel-PCA embedding of the ACP344 dataset using the TSNE and $300$ principal components.}
\label{fig:latent_space_acp344_PCA_vs_KPCA}
\end{figure}
In Fig. \ref{fig:mutation_acp344_KPCA_TSNE_CKSAAP}, we compare the variants of the ACP344 dataset to further validate the robustness of the OCD method. In particular, the TSNE plots of the kernel PCA embedding of the CKSAAP features from the original ACP344 dataset and mutants of $138$ ACPs from the ACP344 dataset are compared. The objective of this experiment is to assess the sensitivity of the OCD against random mutation. The results show that the separability of the empirical distributions of the ACPs and non-ACPs decreases with the mutation rate. Precisely, as more amino acids in the ACPs are mutated, the likelihood of their retaining anticancer capabilities decreases. A number of statistical methods are available to quantify this distribution separability, ranging from the \emph{strictly standardized mean difference} (SSMD) \cite{zhang2007use} to distribution overlap \cite{park2020gssmd}. However, our experiments focus on improving classification performance, which is accordingly the most important aspect of our analysis.
\begin{figure*}[!htb]
\centering
\includegraphics[width=1\textwidth]{mutationACP340_TSNE.png}
\caption{Mutation rate and its effect on the feature space: scatter plot of the two TSNE components of the kernel PCA embedding of the CKSAAP composition features of the original and mutant ACPs.}
\label{fig:mutation_acp344_KPCA_TSNE_CKSAAP}
\end{figure*}
For the ACP344 dataset on 10-fold cross-validation, the kernel PCA-based KSRC method achieves the maximum mean MCC of $0.8590$ with only $80$ principal components. In contrast, the maximum mean MCC of $0.848$ was achieved in the linear PCA-based SRC with $300$ principal components, which is $1.3\%$ lower than the KSRC performance. In the aforementioned experiments, the MP solver was employed for both the SRC and KSRC methods, while all other settings were unchanged.
\subsection{Comparison of the Optimization Algorithms}
\subsubsection{Classification Performance}
One major challenge in sparse representation classification is to obtain a suitable solution for optimization problem \eqref{equ:l1_prob}. The efficiency of the solution depends on a variety of factors, ranging from the quality of the ODM to the robustness of the solver. There are a variety of algorithms available to deal with ill-posed problems like the one given in \eqref{equ:l1_prob}. The popular $l_1$-minimization algorithms include BP, OMP, and MP. As previously stated, there is a trade-off between the robustness of these algorithms in providing the most sparse solution and their computational efficiency. In the proposed framework, we have adopted the MP algorithm, which is an efficient yet effective $l_1$-minimization algorithm for the task at hand.
To deal with non-linearity and dimension reduction, the polynomial kernel PCA method \cite{scholkopf1997kernel} is used, while K-means SMOTE \cite{last2017oversampling} is employed to balance the dictionary. The performance of the proposed method is assessed on the benchmark ACP344 dataset for a varying number of principal components. The findings in Fig.~\ref{fig:mcc_acp344_KPCA} show that the performance of the proposed MP-based KSRC is comparable to that of the state-of-the-art BP-based and OMP-based KSRC methods. Specifically, the proposed MP-based ACP-KSRC achieved a mean $10$-fold MCC of $0.8590$ with only $80$ principal components, while the BP- and OMP-based approaches achieved mean $10$-fold MCCs of $0.8550$ and $0.8419$ with $40$ and $175$ principal components, respectively. This clearly demonstrates the sparser solution recovery of the BP method, which requires the fewest components. However, due to the nature of our investigation, the sparsity of the solution is not the key aspect; rather, we are concerned with classification performance, which is superior in the case of MP.
{\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{MCC_comparison_ACP344_KPCA.png}
\caption{Classification performance for different optimization solvers with a varying number of principal components for the ACP344 dataset.}
\label{fig:mcc_acp344_KPCA}
\end{figure}}
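The greedy MP solver adopted here can be sketched in a few lines of NumPy. This is an illustrative implementation of classical matching pursuit, not the authors' code; the dictionary, stopping rule, and toy signal are assumptions for the example.

```python
import numpy as np

def matching_pursuit(D, y, n_iter=200, tol=1e-6):
    """Greedy matching pursuit: approximate y as a sparse combination of the
    columns (atoms) of dictionary D. Unlike OMP, the selected coefficients
    are not re-fit by least squares at each step, keeping iterations cheap."""
    D = D / np.linalg.norm(D, axis=0)      # unit-norm atoms
    residual = y.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual              # correlation of residual with atoms
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        coef[k] += corr[k]
        residual -= corr[k] * D[:, k]      # strip its contribution
        if np.linalg.norm(residual) < tol:
            break
    return coef, residual

# Toy usage: approximate a 2-sparse signal over a random overcomplete dictionary
rng = np.random.default_rng(1)
D = rng.normal(size=(30, 60))
x_true = np.zeros(60)
x_true[[5, 17]] = [2.0, -1.5]
y = (D / np.linalg.norm(D, axis=0)) @ x_true
coef, res = matching_pursuit(D, y)
```

In SRC, a test sample takes the role of `y`, the OCD takes the role of `D`, and classification assigns the class whose atoms yield the smallest reconstruction residual.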
\subsubsection{Time Complexity}\label{subsec: Time_Complex}
Setting aside the minor performance gain of MP, an examination across principal components shows that MP utilizes more features, which may increase the computational cost. Therefore, we compare the time complexity of the preceding experiments for a varying number of principal components. In particular, \emph{box plots} for all three solvers are shown in Fig.~\ref{fig:recon_time_acp344_KPCA} on a semi-log scale, displaying the median and quartile values of the total sample reconstruction time. Interestingly, the time complexity of the MP solver grows linearly with the number of principal components, whereas it grows exponentially for BP and OMP. This means that even with double the number of principal components, the classification time of MP is still lower than that of BP and equivalent to that of OMP with $40$ principal components. The best-performing OMP configuration is attained with $175$ principal components, for which the processing time required by OMP and BP is roughly ten times that of MP. All experiments were carried out on a freely available \emph{Colab Notebook} equipped with an \emph{Intel Xeon CPU} @2.20 GHz and $13$ GB of RAM.
{\begin{figure}[!htb]
\centering
\includegraphics[width=0.66\textwidth]{Time_comparison_ACP344_KPCA.png}
\caption{Reconstruction time for different configurations and a varying number of principal components for the ACP344 dataset.}
\label{fig:recon_time_acp344_KPCA}
\end{figure}}
\subsection{Comparison with State-of-the-Art ACP Classification Approaches}
In this section, we compare the performance of the proposed ACP-KSRC method with that of the current state-of-the-art ACP classification algorithms on the ACP344 \cite{hajisharifi2014predicting} and ACP740 \cite{yi2019acp} datasets. It should be noted that the proposed ACP-KSRC does not have a training phase, and the training data is only used to construct the dictionary matrix. However, to make a fair comparison, the training and testing samples in all methods were kept consistent as described in previously published research. For instance, the ACP344 dataset is assessed using $10$-fold cross-validation, whereas the ACP740 dataset is evaluated using $5$-fold cross-validation. Other important details about the evaluation of specific datasets are given in the relevant subsections.
\subsubsection{ACP344 Dataset}\label{sec:res_acp344}
In Table \ref{tbl:res_acp344}, we compare the performance statistics of different algorithms on the ACP344 dataset. The number of principal components for the dictionary matrix is set at $f=80$ in this experiment. Since the dataset is unbalanced, the conventional accuracy metric is not a suitable representative of the overall performance. To that end, class-specific evaluation parameters such as the MCC and Youden's index are used to indicate the overall classification ability of the classifier.
Notably, the proposed method yields the best results, demonstrating its ability to effectively differentiate the features of the ACPs. In particular, the ACP-KSRC achieved the highest MCC value of $0.85$, which is $27.06\%$ higher than the ACP-DL, $1.18\%$ higher than the ACP-LDF with the RF and SVM classifiers, $2.35\%$ higher than the ACP-LDF with the \emph{LibD3C} classifier, and $4.71\%$ higher than the SAP with the SVM classifier \cite{ge2020enacp}. This supports the claim that the proposed method can be used to predict new ACPs or ACP-like peptides. The other assessment metrics mirror this efficacy, indicating a clear separation between the ACPs and non-ACPs.
\begin{table}[!htb]
\caption{Performance comparison of the ACP-KSRC with contemporary methods on ACP344 dataset.}
\label{tbl:res_acp344}
\begin{center}
\resizebox{1\columnwidth}{!}
{\begin{tabular}{lcccccccc}
\hline
{\bf Methods} & {$S_n$} & {$S_p$}& {\bf Acc.} & {\bf Bal. Acc.} & {\bf MCC} &{\bf YI } & {\bf F1-score} &{\bf Classifier} \\
\hline
SAP \cite{ge2020enacp} & 86.23\% & 95.63\% & 91.86\% & 90.93\% & 0.83 & 0.81 & 0.89 & SVM \\ \hline
ACP-LDF \cite{ge2020enacp} & 87.70\% & 96.10\% & 92.73\% & 91.90\% & 0.84 & 0.83 & 0.92 & SVM \\ \hline
ACP-LDF \cite{ge2020enacp} & 85.50\% & 96.10\% & 92.15\% & 91.05\% & 0.83 & 0.82 & 0.92 & LibD3C \\ \hline
ACP-LDF \cite{ge2020enacp} & 86.20\% & 97.10\% & 92.70\% & 91.65\% & 0.84 & 0.83 & 0.92 & RF \\ \hline
ACP-DL \cite{yi2019acp} & 75.82\% & 86.32\% & 82.16\% & 81.07\% & 0.62 & 0.62 & 0.77 & LSTM \\ \hline
ACP-KSRC & 97.07\% & 86.97\% & 93.02\% & 91.89\% & 0.85 & 0.84 & 0.94 & SRC \\ \hline
\end{tabular} }
\end{center}
\end{table}
\subsubsection{ACP740 Dataset}\label{sec:res_acp740}
We compare our results for the ACP740 dataset with the ACP-DL \cite{yi2019acp} and ACP-DA \cite{chen2021acp} in Table \ref{tbl:res_acp740}, as these are the only algorithms that used the ACP740 dataset. The proposed method outperforms both algorithms in terms of the class-specific evaluation parameter MCC for $f=100$ principal components. In particular, the ACP-KSRC achieved the highest MCC value of $0.67$, which is $6\%$ and $4.48\%$ higher than the ACP-DL and ACP-DA, respectively. This efficacy is also reflected in the other evaluation metrics, indicating the ability of the ACP-KSRC to discriminate between ACPs and non-ACPs. This suggests that the proposed method can be used to predict ACPs or ACP-like peptides.
\begin{table}[!ht]
\caption{Performance comparison of ACP-KSRC and contemporary methods on ACP-$740$ dataset.}
\label{tbl:res_acp740}
\begin{center}
\resizebox{1\columnwidth}{!}
{\begin{tabular}{lcccccccc}
\hline
{\bf Methods} & $S_n$ & $S_p$ & {\bf Acc.} & {\bf Bal. Acc.} & {\bf MCC} &{\bf YI } & {\bf F1-score} &{\bf Classifier} \\
\hline
ACP-DL \cite{yi2019acp} & 82.61\% & 80.59\% & 83.48\% & 83.3\% & 0.63 & 0.62 & 0.71& LSTM \\ \hline
ACP-DA \cite{chen2021acp} & 86.98\% & 83.26\% & 82.03\% & 85.12\% & 0.64 & 0.70 & 0.85 & MLP \\ \hline
ACP-KSRC & 86.23\% & 81.62\% & 83.91\% & 83.94\% & 0.67 & 0.67 & 0.84 & SRC \\ \hline
\end{tabular}}
\end{center}
\end{table}
\subsection{Mutation Analysis}
For the mutation sensitivity analysis, we used two ACP samples randomly selected from the ACP740 dataset. The two peptides were chosen to have distinct 3D structures, i.e., one had more repeated amino acids than the other. Different mutants of the sequences were constructed for the sensitivity analysis. Table \ref{tbl:res_acp740_mutation_10fold} lists the original and mutant sequences.
\begin{table}[!ht]
\caption{Mutation effect and classification-score sensitivity of ACP-KSRC.}
\label{tbl:res_acp740_mutation_10fold}
\begin{center}
\resizebox{1\columnwidth}{!}
{\begin{tabular}{l|c|c}
\hline
{\bf Mutation / (Seq-ID)} & {\bf Sequence} &{\bf Classification-Score} \\
\hline
\hline
Original / (A) & ALSKALSKALSKALSKALSKALSK & $0.781$ \\ \hline
\hline
Point Mutation / (B) & ALSKALSKALS\textbf{E}ALSKALSKALSK & $0.762$ \\ \hline
Loop Mutation / (C) & ALSKALSKALSK\textbf{SQAE}ALSKALSK & $0.759$ \\ \hline
\hline
Original / (D) & ACDCRGDCFCGGGGIVRRADRAAVP & $0.745$ \\ \hline
\hline
Point Mutation / (E) & ACDCRG\textbf{K}CFCGGGGIVRRADRAAVP & $0.705$ \\ \hline
Point Mutation / (F) & ACDCRGDCFCGGGGIVRRA\textbf{K}RAAVP & $0.710$ \\ \hline
Double Point Mutation / (G) & ACDCRG\textbf{K}CFCGGGGIVRRA\textbf{K}RAAVP & $0.650$ \\ \hline
Loop Mutation / (H) & ACDCRGDCFC\textbf{SSSS}IVRRADRAAVP & $0.556$ \\ \hline
\end{tabular} }
\end{center}
\end{table}
For the single point mutation analysis, one amino acid from the middle of the peptide sequence is substituted with an amino acid having the opposite property (e.g., non-polar to polar, positive to negative charge, etc.). Similar criteria are employed for double point mutations, but the process is repeated for two amino acids. In a loop mutation, multiple consecutive amino acids are replaced with their opposite counterparts.
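The mutation operations above amount to simple string edits on the single-letter peptide sequences. The following sketch reproduces the (A)$\to$(B) point mutation and the (A)$\to$(C) loop mutation from Table~\ref{tbl:res_acp740_mutation_10fold}; the helper names are ours, for illustration only.

```python
def point_mutation(seq, pos, new_aa):
    """Replace the amino acid at 0-based index pos with new_aa."""
    return seq[:pos] + new_aa + seq[pos + 1:]

def loop_mutation(seq, start, replacement):
    """Replace len(replacement) consecutive residues starting at index start."""
    return seq[:start] + replacement + seq[start + len(replacement):]

# Sequence (A) -> (B): point mutation K -> E at index 11
a = "ALSKALSKALSKALSKALSKALSK"
b = point_mutation(a, 11, "E")        # "ALSKALSKALSEALSKALSKALSK"
c = loop_mutation(a, 12, "SQAE")      # "ALSKALSKALSKSQAEALSKALSK"
```

Both operations preserve the sequence length, so the CKSAAP feature extraction can be applied to the mutants unchanged.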
For all the sequences given in Table~\ref{tbl:res_acp740_mutation_10fold}, the classification score is predicted using the proposed ACP-KSRC method. The prediction scores were generated using the OCD of the ACP740 dataset after $10$-fold cross-validation, and in each fold every sequence from Table~\ref{tbl:res_acp740_mutation_10fold} was used as an independent test sample. Finally, the average prediction score was calculated by taking the mean of the individual fold results. It is worth noting that the mutant sequences were not used in the design of the OCD matrix.
As expected, the prediction score decreases for both peptides with higher mutation rates. However, the classification score is more sensitive to mutation for sequence (D) than for sequence (A). To investigate why, \emph{AlphaFold2} \cite{jumper2021highly} was used to predict the 3D structures of the sequences in Table~\ref{tbl:res_acp740_mutation_10fold}.
It can be observed in Fig.~\ref{fig:Protein_Structure_AlphaFold} that the alpha-helix structure of sequence (A) is unaffected by the point mutation (B) and the loop mutation (C); however, the flexible structure of sequence (D) exhibits a notable difference even with a single point mutation (E). This 3D structure-based mutation analysis demonstrates that the prediction score of the proposed ACP-KSRC is sensitive to structural variation and, hence, can serve as a valuable tool for large-scale ACP screening.
\begin{figure}[!htb]
\centering
\includegraphics[width=1\textwidth]{Protein_Structure_AlphaFold.png}
\caption{\emph{AlphaFold2} predicted 3D structures of original and mutant peptide sequences. (A) and (D) structure of original sequences in Table~\ref{tbl:res_acp740_mutation_10fold}. (B), (E) and (F) structures after point mutation. (G) structure after double-point-mutation. (C) and (H) structures after loop mutation.}
\label{fig:Protein_Structure_AlphaFold}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
Owing to its complexity and heterogeneity, cancer is among the most challenging diseases and requires multifaceted therapeutic approaches. Anticancer peptides (ACPs) provide a promising avenue for cancer treatment, but their large-scale identification and synthesis require reliable prediction methods. In this study, we have provided an ACP classification strategy that combines sparse representation classification with kernel principal component analysis (KSRC). Unlike conventional \emph{black box} methods, the proposed ACP-KSRC approach relies on the well-understood statistical theory of sparse representation classification. In particular, we have designed over-complete dictionary matrices using embeddings of the composition of K-spaced amino-acid pairs (CKSAAP). To deal with non-linearity and dimension reduction, the kernel principal component analysis (KPCA) method is used, while the SMOTE oversampling technique is utilized to balance the dictionary. The proposed method is evaluated on two benchmark datasets using well-known statistical parameters and is found to outperform the existing methods. The results indicate the highest sensitivity together with the highest balanced accuracy, which can be useful for understanding the structural and chemical properties of ACPs and for the development of new ones.
\section*{Competing interests}
All the authors declare that they have no competing interests.
\section*{Acknowledgment}
The authors would like to thank Dr. Shujaat Khan for his valuable suggestions and for providing a Python implementation of his sparse representation classification toolbox. This work was supported by the Nazarbayev University, Kazakhstan through Faculty Development Competitive Research Grant Program (FDCRGP) under Grant 1022021FD2914.
\bibliographystyle{IEEEtran}
\section{Introduction}
Since 2007 we have been engaged in a long-term program using the Carnegie
Astrometric Planet Search Camera (CAPSCam) on the Las Campanas
Observatory 2.5-m du Pont telescope to search for gas giant planets in
orbit around nearby low-mass stars by the astrometric detection method.
There are 21 known G stars within 10 pc of the sun and at least 236 M
dwarfs (Henry et al. 1997), with nearby late M dwarfs continuing to be
discovered (e.g., Reiners \& Basri 2009). These abundant M dwarfs are a
natural choice for astrometric planet searches, because low mass primaries
and close proximity to the sun greatly enhance the detectability of planetary
companions.
M dwarfs have become the favored targets for finding the closest habitable
worlds, those most amenable to follow-up observations. M dwarf exoplanets
are likely to be the primary targets to be searched by JWST, the James
Webb Space Telescope, for atmospheric biosignatures (Deming et al. 2009)
among the transiting super-Earths to be found by TESS, the Transiting
Exoplanet Survey Satellite (Ricker et al. 2014). In fact, M dwarfs are
estimated to harbor about 37\% of the habitable zone
``real estate'' within 10 pc of the Sun (Cantrell et al. 2013), and
estimates of the frequency of habitable Earths around M dwarfs
can be as high as 53\% (Kopparapu 2013). The detection of a
habitable-zone exoEarth orbiting the M6V dwarf Proxima Centauri
(Anglada-Escud\'e et al. 2016), literally the closest star to the sun
at 1.295 pc, has raised the stakes in the race for the first direct
imaging and spectroscopy studies of nearby habitable worlds.
The detection of at first three (Gillon et al. 2016), and then a total of
at least seven exoplanets (Gillon et al. 2017) orbiting the M8V dwarf
TRAPPIST-1 (2M2306-05, 2MASS J23062928-0502285)
has only added fuel to this race to discover and characterize
potentially habitable exoplanets. Transit photometry has revealed
that five of the seven known TRAPPIST-1 planets have radii and masses within
factors of $\sim 1.1$ and $\sim 2$, respectively, of that of the Earth,
and three of these appear to have stellar irradiation levels that might
permit the existence of oceans of surface water, assuming Earth-like atmospheres
(Gillon et al. 2017).
The near-resonant orbital periods of the six innermost TRAPPIST-1
planets suggest their formation at a greater orbital distance followed
by inward coupled migration (Gillon et al. 2017). A number of the first
hot and warm super-Earth exoplanets discovered by Doppler spectroscopy
were found to have at least one longer period gas giant sibling planet,
with several having two (e.g., the M4V dwarf Gl 876 and the G8V star
HD 69830) or even three such siblings (e.g., the G3V star Mu Ara and
the G8V star 55 Cnc). HD 181433 is a K5V star with an inner 7.5 Earth-mass
planet and two outer Jupiter-mass planets, while the G6V HD 47186 has
an inner 22 Earth-mass planet and an outer Saturn-mass planet
(Bouchy et al. 2009). Outer gas giants may thus accompany inner habitable
worlds, a situation analogous to that of our own solar system.
The question then becomes, what other planets might be orbiting
TRAPPIST-1 at greater distances than the seven known planets? The
outermost one, TRAPPIST-1 h, has a semi-major axis of $\sim 0.063$ AU,
leaving a large amount of orbital distance unexplored.
Montet et al. (2014) combined Doppler and direct imaging results to
estimate that about 6.5\% of M dwarfs host one or more gas giants within
20 AU. Gas giants orbiting M dwarfs may represent a challenge to the core
accretion formation mechanism for gas giant planet formation
(e.g., Koshimoto et al. 2014), but not for the competing disk
instability mechanism (e.g., Boss 2006). The VLT FORS2 camera astrometric
survey of Sahlmann et al. (2014) set an upper bound of about 9\% on
the occurrence of gas giants larger than 5 Jupiter masses orbiting
at 0.01 to 0.8 AU around a sample of 20 M8 to L2 dwarfs. The {\it Gaia} space
telescope is presently searching about 1000 M dwarfs for long-period exoplanets
(Sozzetti et al. 2014; Perryman et al. 2014).
We began observing TRAPPIST-1 with CAPSCam
in 2011 as part of a long-term astrometric program to detect gas giant
planets around approximately 100 nearby M, L, and T dwarf stars.
Weinberger et al. (2016) published a trigonometric distance for TRAPPIST-1
of 12.49 pc $\pm 0.2$ pc, slightly larger than the previous trigonometric
distance of 12.1 pc $\pm 0.4$ pc (Costa et al. 2006), but consistent
within the error bars. Reiners and Basri (2009) found a spectrophotometric
distance of 12.1 pc. The CAPSCam distance was based on the first eleven epochs
of observations, spanning the 3 yrs from 2011 to 2014. Here we
re-analyze TRAPPIST-1, using the additional 4 epochs of observations
taken in 2015 and 2016 to refine the trigonometric parallax and proper
motions derived by Weinberger et al. (2016) and to search for any evidence
of long-period gas giant companions to TRAPPIST-1.
\section{Astrometric Camera}
The CAPSCam camera (Boss et al. 2009) employs a Hawaii-2RG HyViSi hybrid
array (2048 x 2048) that allows the definition of an arbitrary Guide Window
(GW), which can be read out (and reset) rapidly, repeatedly, and
independently of the rest of the array, the Full Frame (FF). The GW is
centered on the relatively bright target stars, with multiple short exposures to
avoid saturation. The rest of the array then integrates for prolonged
periods on the background reference grid of fainter stars.
The natural plate scale for CAPSCam on the 2.5-m du Pont is 0.196
arcsec/pixel, a scale that allows us to avoid introducing any
extra optical elements into the system that would produce astrometric
errors. An astrometric quality ($\lambda/30$) dewar filter window with a
red bandpass of 810 nm to 910 nm (similar to SDSS z)
is the only other optical element in
the system besides the du Pont primary and secondary mirrors.
We take multiple exposures with CAPSCam, with small variations (2 arcsec) of
the image position (dithering to four positions), in order to average out uncertainties due to pixel response non-uniformity.
Analysis of the star NLTT 48256 showed that an astrometric accuracy better
than 0.4 milliarcsec (mas) can be obtained over a time scale of several years
with CAPSCam on the du Pont 2.5-m (Boss et al. 2009). Anglada-Escud\'e et al.
(2012) used CAPSCam to place an upper mass limit on the known Doppler
exoplanet GJ 317b (Johnson et al. 2007). Analysis of the M3.5 dwarf
GJ 317 showed that the limiting astrometric accuracy, at least for the
brighter targets in our sample, is about 0.6 mas per epoch in Right Ascension
(R.A.) and about 1 mas per epoch in Declination (Dec.), based on the
results of the preliminary data analysis pipeline available at the time.
With 18 epochs, the overall accuracy in fitting the parallax of
GJ 317 was about 0.15 mas (Anglada-Escud\'e et al. 2012).
For comparison, Very Large Telescope (VLT) astrometry with SPHERE yields
positional errors of about 1 mas (Zurlo et al. 2014). Hence CAPSCam
should be able to either detect or place significant astrometric
constraints on the masses of any long-period gas giants in the TRAPPIST-1
system.
\section{Observations and Data Reduction}
TRAPPIST-1 (RA = 23 06 29.283; DEC = -05 02 28.59 [2000]) was observed
at the 15 epochs listed in Table 1 between 2011 and 2016. TRAPPIST-1 is
bright enough (I= 14.024, J= 11.354) that the
GW mode was used, with either a 10 sec or 15 sec GW exposure (depending
on the seeing conditions: the former for seeing of $\sim 1.0$ arcsec,
and the latter for seeing of $\sim 1.4$ arcsec) and a 90 sec FF exposure.
At each epoch, there were typically 20 frames taken during a 40 minute
time period as TRAPPIST-1 passed through the meridian.
The data were analyzed using the same data pipeline that has been used
for reducing the data for all previously published CAPSCam observations,
the ATPa pipeline developed by one of us (GAE). Details about ATPa may
be found in Boss et al. (2009) and in Anglada-Escud\'e et al. (2012) and
about the parallax, proper motions, and astrometric zero point corrections
in Weinberger et al. (2016).
Weinberger et al. (2016) used the same first eleven epochs of
observations as are used here to determine an absolute parallax
$\pi_{abs} = 80.09 \pm 1.17$ mas and proper motions in R.A. and Dec.,
respectively, of $922.02 \pm 0.61$ mas yr$^{-1}$ and $-461.88 \pm 0.94$
mas yr$^{-1}$. This absolute parallax is based on a relative
parallax of $\pi_{rel} = 79.10 \pm 1.11$ mas and a zero point
correction of $-0.99 \pm 0.36$ mas. Note that the proper motions
are not corrected for any bias in the reference frame, and that
Weinberger et al. (2016) used only a portion of the ATPa data analysis
pipeline and developed new routines to fit for the parallax and proper
motion. When all fifteen epochs listed in Table 1 are included in
the solution, the absolute parallax becomes
$\pi_{abs} = 79.59 \pm 0.78$ mas and the proper motions in R.A. and Dec.,
respectively, become $922.64$ mas yr$^{-1}$ and $-462.88$ mas yr$^{-1}$.
Now the absolute parallax is based on a relative
parallax of $\pi_{rel} = 78.51 \pm 0.75$ mas and a zero point
correction of $-1.08 \pm 0.22$ mas, as described below.
Clearly the revised values for the parallax and proper motion are
consistent with each other and are well within the error bars.
TRAPPIST-1 thus has a revised trigonometric parallax distance of
$12.56 \pm 0.12$ pc.
The correction to absolute parallax was based on a comparison of the photometric distance to five reference stars with good catalog photometry
at V, I, J, H, and Ks and with fit effective temperatures $\ge$ 4000 K.
Our CAPSCam measurements of these stars yield nominal parallaxes, which
in this case are all smaller than their uncertainties. We averaged the
offset between the nominal parallaxes and the photometric parallaxes
obtained by fitting Kurucz stellar atmosphere models to the broad-band
photometry and find a correction to absolute parallax of $1.08 \pm 0.22$
mas. Note that this correction is irrelevant to the determination of the
residuals to the five parameter fit to the astrometry and therefore
does not contribute to the upper limit on any giant planet companion
to the star.
\section{Astrometric Residuals}
Figure 1 displays the CAPSCam TRAPPIST-1 field image that served as
the reference plate for the ATPa astrometric analysis. TRAPPIST-1 itself
lies within the central GW, while the stars labelled with blue numbers
were used as the reference stars for the ATPa analysis. Red vectors
depict the inferred proper motions of the stars.
Table 1 lists the ATPa residuals in R.A. and Dec. for the apparent motion
of TRAPPIST-1 with respect to the background reference stars that remain
after the parallax and proper motion have been removed from the solution.
Table 1 also lists the individual epoch statistical uncertainties,
computed as the root mean square (RMS) of the offsets of the individual
frames that are taken at each epoch. We note that the standard deviation of the
residuals ($\sim$ 1.2 mas in R.A.) is much larger than the typical individual
epoch statistical uncertainty ($\sim$ 0.6 mas), and we attribute the difference
to systematic sources that we are working to resolve.
The ATPa pipeline searches for periodicities in these Table 1 residuals
that could be caused by a companion object on a circular orbit, with a
minimum orbital period of 80 days and maximum orbital period of 4000 days.
Figure 2 shows the resulting power spectrum for a possible long-period
companion to TRAPPIST-1. The power spectrum yields no hints of any
long-period planets, as it is dominated by a large number of short-period
peaks with rather low amplitudes, with little power at periods longer
than about 1500 days.
Figures 3 and 4 plot the ATPa residuals ($\delta_{R.A.}, \delta_{Dec.}$)
from Table 1, allowing a visual search for any suspected periodicity.
These figures confirm what is evident from Figure 2, namely that the
only way to fit the residuals with an astrometric wobble would be to
use an orbit with a period of order a year or less. Given that the
fifteen CAPSCam observations spanning a little over 5 years average out to
only about 3 epochs per year, orbital periods of a year or less are
likely to be spurious. Ongoing work on developing the CAPSCam data pipeline
will allow us to further refine the astrometric analysis.
In Table 1, the mean absolute value of the ATPa residuals in R.A. is
1.17 mas, while that for the residuals in Dec. is 1.46 mas. For the
detection of an astrometric amplitude $A$ (in mas) in the presence of
astrometric uncertainties of $\sigma$ (in mas) with $N_{obs}$ observations,
a conservative estimate of the signal to noise ratio ($SNR$) is given by
$$ SNR = { A \over \sigma } \sqrt{N_{obs}}. $$
\noindent
Taking the larger of the two residuals, namely those in Dec., as
$\sigma = 1.46$ mas, $N_{obs} = 15$, and requiring $SNR = 5$ yields
a conservative estimate of the largest amplitude that could be hidden
in our CAPSCam data of $A = 1.9$ mas.
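The quoted 1.9 mas limit follows directly from inverting the SNR relation above; a small sketch (illustrative only, function name ours):

```python
import math

def max_hidden_amplitude(sigma_mas, n_obs, snr=5.0):
    """Invert SNR = (A / sigma) * sqrt(N_obs) for the amplitude A that would
    just reach the required detection SNR."""
    return snr * sigma_mas / math.sqrt(n_obs)

# Declination residuals: sigma = 1.46 mas, N_obs = 15, required SNR = 5
A = max_hidden_amplitude(1.46, 15)   # ~1.9 mas, as quoted in the text
```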
We then use 1.9 mas as an upper limit on any wobble of TRAPPIST-1, allowing
us to place upper limits on the mass of any long-period gas giant companions,
as a function of their orbital periods. The angular semimajor axis ($\Theta$)
of the displacement of a star about the common center of mass of
a star-planet system is given (e.g., Boss 1996) by
$$ \Theta = { M_p a \over M_s r} = { M_p P^{2/3} \over M_s^{2/3} r}, $$
\noindent
where $\Theta$ is in arcsec, $M_p$ is the planet mass (in solar masses),
$M_s$ is the stellar mass (in solar masses), $M_p$ is assumed to
be negligible in mass compared to $M_s$, $a$ is the orbital semimajor axis
(in AU) and is equal to the radius of an assumed circular orbit, $P$ is
the orbital period (in years), and $r$ is the distance to the
star-planet system (in pc). For example, with only Jupiter orbiting
around the sun, the solar wobble is $\pm$ 1 mas when viewed from 5 pc.
Note that in the case of TRAPPIST-1, where the primary has a mass of
only about 80 $M_{Jup}$ (Gillon et al. 2017), neglecting the planet mass
is not strictly a good approximation when considering possible
planetary masses in the multiple Jupiter-mass range, but this approximation
is adequate for the constraint considered here.
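The wobble formula is easy to evaluate numerically; the sketch below (illustrative, not from the paper) reproduces the Jupiter sanity check from the text and, under the same point-mass approximation cautioned about above, inverts the relation for the planet mass corresponding to a 1.9 mas wobble at the 5 yr span of the observations.

```python
def wobble_mas(m_p_msun, period_yr, m_s_msun, dist_pc):
    """Angular semimajor axis Theta = M_p P^(2/3) / (M_s^(2/3) r); the formula
    gives arcsec, so multiply by 1000 to return mas."""
    theta_arcsec = m_p_msun * period_yr ** (2 / 3) / (m_s_msun ** (2 / 3) * dist_pc)
    return 1000.0 * theta_arcsec

# Sanity check from the text: Jupiter (9.546e-4 Msun, P = 11.86 yr) orbiting
# the Sun, viewed from 5 pc, produces a ~1 mas wobble.
jup = wobble_mas(9.546e-4, 11.86, 1.0, 5.0)

# Illustrative inversion (not a value quoted in the paper): the mass whose
# wobble equals 1.9 mas at P = 5 yr for M_s = 0.08 Msun at r = 12.56 pc.
m_lim_msun = 1.9e-3 * 0.08 ** (2 / 3) * 12.56 / 5 ** (2 / 3)
```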
Figure 5 displays the possible long-period planet masses that appear to
be ruled out by our non-detection at the 1.9 mas level for the TRAPPIST-1
system, compared to the six TRAPPIST-1 planets with known masses, and all the
confirmed exoplanets currently contained in the NASA Exoplanet Archive.
We assumed a mass for TRAPPIST-1 of 0.08 $M_\odot$ (Gillon et al. 2017).
We halt the astrometric constraint at an orbital period of five years,
the length of our CAPSCam observations.
It can be seen from Figure 5 that our astrometric constraint still leaves a
large area of discovery space to be explored for additional planets in
the TRAPPIST-1 system. While the plethora of confirmed exoplanets
shown in Figure 5 is suggestive of the possibility of other planets to be
found around TRAPPIST-1, it should be noted that most of those confirmed planets
orbit stars of earlier type than the M8 dwarf TRAPPIST-1, and planetary
demographics can be expected to depend on stellar masses.
\section{Doppler Constraints}
Tanner et al. (2012) performed Keck II NIRSPEC Doppler spectroscopy of
23 late-M dwarfs, including 2M2306-05 = TRAPPIST-1. Their data consisted
of three epochs, one in 2006 and two more in 2010 taken
on consecutive nights, a sequence chosen to search for both short-period
hot Jupiters and long-period cold Jupiters. They obtained
radial velocities for 2M2306-05 with an
uncertainty of 130 m s$^{-1}$. Following Boss (1996), the radial
velocity amplitude in km s$^{-1}$ for an edge-on planetary orbit is
$$ v_r = 30 { M_p \over P^{1/3} M_s^{2/3}}, $$
\noindent
using the same units as previously. Figure 5 shows the resulting
upper limit on the mass of any undetected companions to TRAPPIST-1,
based on the conservative estimate of a maximum Doppler wobble of
380 m s$^{-1}$. This estimate assumes that $SNR = 5$ (Mayor \& Queloz
1995), $\sigma = 130$ m s$^{-1}$, and $N_{obs} = 3$. It can be
seen that this Doppler upper mass limit, in combination with our
astrometric upper mass limit, rules out a large portion of possible
discovery space, though an equally large and interesting portion
remains to be explored. Orbital stability of the TRAPPIST-1 system
appears to be assured even in the presence of a 15 $M_\oplus$ planet,
provided that it orbits beyond 0.37 AU (Quarles et al. 2017), yielding
another constraint on any undiscovered planets in this system.
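The Doppler limit can be checked numerically the same way as the astrometric one; this sketch (illustrative, function names ours) reproduces the ~380 m s$^{-1}$ maximum wobble and inverts the $v_r$ relation for the corresponding mass limit at $P = 1$ yr.

```python
import math

def rv_amplitude_kms(m_p_msun, period_yr, m_s_msun):
    """Edge-on RV amplitude v_r = 30 M_p / (P^(1/3) M_s^(2/3)), in km/s
    (Boss 1996 units: masses in Msun, P in yr)."""
    return 30.0 * m_p_msun / (period_yr ** (1 / 3) * m_s_msun ** (2 / 3))

def mass_limit_msun(v_max_kms, period_yr, m_s_msun):
    """Invert the relation for the largest planet mass hidden below v_max."""
    return v_max_kms * period_yr ** (1 / 3) * m_s_msun ** (2 / 3) / 30.0

# Maximum Doppler wobble: SNR = 5 with sigma = 130 m/s and N_obs = 3
v_max = 5.0 * 0.130 / math.sqrt(3)           # ~0.38 km/s, as in the text
m_lim = mass_limit_msun(v_max, 1.0, 0.08)    # hidden-mass limit at P = 1 yr
```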
We have also performed a more involved analysis of the companion masses
ruled out by the Tanner et al. (2012) Doppler RV observations. To assess
the detectability of RV companions, we followed the procedure described in
Kohn et al. (2016). We calculated orbits for 482 million putative binary
systems for masses of the secondary of 0.6 to 12.6 $M_{Jup}$ and periods
up to 1550 days. The mass of the primary was assumed to be 0.08 $M_\odot$.
For each period and mass-ratio pair, we calculated the radial velocities that
would be observed given a single epoch precision of 130 m s$^{-1}$
for an ensemble of binaries with orbital elements drawn from the
eccentricity distribution for known planets as described by the Beta distribution
in Kipping et al. (2013), a uniform distribution of $\sin i$ (inclination $i$),
a uniform distribution of time of periastron passage over the period,
and a uniform distribution of the longitude of periastron. We define
a binary to be observable at a given period and mass-ratio if,
in 67\% of the calculated orbits, the RMS of the calculated RV
exceeds 225 m s$^{-1}$. We plot the resulting RV constraint as a blue
curve in Figure 5 as well.
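The ensemble test described above can be sketched as follows. The Kepler-equation solver, the Beta-distribution parameters $(0.867, 3.03)$ often quoted for Kipping et al. (2013), and the dense single-orbit sampling are our assumptions filling in details not fully specified in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def solve_kepler(M, e, tol=1e-10):
    """Eccentric anomaly E from mean anomaly M via Newton iteration."""
    E = M.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_series(t_days, period_d, K, e, omega, t_peri):
    """Keplerian RV curve (m/s) for semi-amplitude K and eccentricity e."""
    M = 2 * np.pi * (((t_days - t_peri) / period_d) % 1.0)
    E = solve_kepler(M, e)
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(nu + omega) + e * np.cos(omega))

def detectable_fraction(period_d, K_edge_on, n_trials=1000, rms_cut=225.0):
    """Fraction of random orbits whose RV RMS exceeds the 225 m/s cut."""
    hits = 0
    for _ in range(n_trials):
        e = rng.beta(0.867, 3.03)          # assumed Kipping et al. (2013) prior
        sini = rng.uniform()               # per the text's stated sin(i) draw
        omega = rng.uniform(0, 2 * np.pi)
        t_peri = rng.uniform(0, period_d)
        t = np.linspace(0, period_d, 200)  # dense sampling of one orbit
        v = rv_series(t, period_d, K_edge_on * sini, e, omega, t_peri)
        if np.sqrt(np.mean(v ** 2)) > rms_cut:
            hits += 1
    return hits / n_trials
```

Declaring a companion "observable" when `detectable_fraction` exceeds 0.67 then reproduces the 67\% criterion used for the blue curve.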
\section{Photometry}
Given that about two transits occur every day in the TRAPPIST-1 system
(Gillon et al. 2017), and that the transits last for at least a half-hour
apiece, transits should be occurring on average for one hour every day.
Each CAPSCam epoch lasted for roughly one hour on target, so 15 epochs
observed meant that CAPSCam had observed TRAPPIST-1 for a total of
about 15 hours, or 0.6 day. Hence there is a reasonable chance that at
least one transit occurred during our astrometric observations.
We thus decided to check to see if any transits were obvious in our
CAPSCam data. The deepest transit depth for a TRAPPIST-1 planet
is about 0.782\% (for TRAPPIST-1 g), while the shallowest depth is
about 0.352\% (for TRAPPIST-1 h), so detecting a transit requires
millimag photometry from the ground. CAPSCam was not designed to be
a photometric camera, and our astrometric observations do not require
photometric conditions; as a consequence, our photometric precision
has never been characterized or optimized. A quick look at a total of 356
CAPSCam images of TRAPPIST-1 found that when the photometric flux
in TRAPPIST-1 was normalized by that of the brighter reference stars
in the $6.63 \times 6.63$ arcmin field of view, variations of order
unity occurred in an irregular manner. Even larger variations were
evident when normalized to fainter reference stars. As a result, we
abandoned our search for transits in the CAPSCam data.
\section{Conclusions}
The TRAPPIST-1 system is a fascinating example of a planetary system
that will continue to be a focus of intensive research in the coming decades.
Our ongoing astrometric observations with CAPSCam have refined
the trigonometric distance to TRAPPIST-1, with the new value being
12.56 pc. In addition, the absence of any clear, long-period signals
in the residuals, once the parallax and proper motions have been
removed, places strong upper limits on the masses of any gas giant
planets orbiting well beyond the seven known transiting planets:
no planets more massive than $\sim 4.6 M_{Jup}$ orbit with a 1 yr period,
and none more massive than $\sim 1.6 M_{Jup}$ orbit with a 5 yr period.
A large region of discovery space intermediate between these long-period
orbits and the short-period orbits of the TRAPPIST-1 planets, however,
remains to be explored by other means.
One of us (TLA) is presently working on further developing ATPa, with the
goal of reducing sources of systematic errors, such as those caused
by differential chromatic refraction, and by distortions of the entire
optical system that might change with time. The latter is being addressed
by taking multiply dithered ($4 \times 4$) CAPSCam images of rich stellar fields
at multiple epochs, and generating a pixel-by-pixel correction function.
We plan to continue our astrometric observations of TRAPPIST-1, and
we look forward to learning what an improved analysis of our data might
reveal about the TRAPPIST-1 planetary system.
\acknowledgments
We thank the David W. Thompson Family Fund for support of the
CAPSCam astrometric planet search program, the Carnegie Observatories
for continued access to the du Pont telescope, and the telescope
operators and technicians at the Las Campanas Observatory for making
these observations possible. We also thank the referee for several
suggestions for improvements. The development of the CAPSCam camera was
supported in part by NSF grant AST-0352912.
\section{Introduction and formalism}
\label{sec:Intro}
Scalable quantum computers are expected to require some form of error correction (EC) to function reliably. Unfortunately, no practical model for a self-correcting quantum memory has been proposed to date, despite considerable effort \cite{BrownMemory16}.
The models that come closest to this goal involve topological protection in the presence of physically imposed symmetries \cite{KitaevWire01,KarzigScalable17}, but even these are not expected to reduce error rates sufficiently for large computations.
Therefore active protocols that require measuring the check operators of an error correcting code are probably necessary to realize scalable quantum computing.
There are three general approaches to fault-tolerant error correction (FTEC) applicable to a wide range of stabilizer codes, due to Shor \cite{Shor96}, Steane \cite{Steane97}, and Knill \cite{KL2005}.
There are also a number of promising code-specific FTEC schemes, most notably the surface code with a minimum weight matching error correction scheme \cite{BK98,DKLP02,FMMC12}.
This approach gives the best fault-tolerant thresholds to date and only requires geometrically local measurements.
A high threshold \cite{Shor96,AB97,Preskill98,KLZ98} implies that relatively imperfect hardware could be used to reliably implement long quantum computations.
Despite this, the hardware and overhead requirements for the surface code are sufficiently demanding that it remains extremely challenging to implement in the lab.
Fortunately, there are reasons to believe that there could be better alternatives to the surface code.
For example, dramatically improved thresholds could be possible using concatenated codes if they enjoyed the same level of optimization as the surface code has in recent years \cite{Poulin06,PhysRevA.83.020302}.
Another enticing alternative is to find and use efficiently-decodable low density parity check (LDPC) codes with high rate \cite{Gallager1960,LDPC13,TZLDPC14} in a low-overhead FTEC protocol \cite{Gottesman13LDPC}.
For these and other reasons, it is important to have general FTEC schemes applicable to a wide range of codes and to develop new schemes.
Shor EC can be applied to any stabilizer code, but typically requires more syndrome measurement repetitions than Steane and Knill EC. Furthermore, all weight-$w$ stabilizer generators are measured sequentially using $w$-qubit verified cat states. On the other hand, Steane EC has higher thresholds than Shor EC and has the advantage that all Clifford gates are applied transversally during the protocol. However, Steane EC is only applicable to CSS \cite{CS96,Steane97} codes and uses a verified logical $\ket{+}$ state encoded in the same code to simultaneously obtain all $X$-type syndromes, using transversal measurement (similarly for $Z$).
Knill EC can also be applied to any stabilizer code but requires two additional ancilla code blocks (encoded in the same code that protects the data) prepared in a logical Bell state. The Bell state teleports the encoded information to one of the ancilla code blocks and the extra information from the transversal Bell measurement gives the error syndrome. Knill EC typically achieves higher thresholds than Shor and Steane EC but often uses more qubits \cite{Knill05,Fern08KnillUpperBound}. It is noteworthy that for large LDPC codes, in which low weight generators are required to be measured fault-tolerantly, Shor EC is much more favourable than Steane or Knill EC. Many improvements in these schemes have been made. For example, in \cite{DA07}, ancilla decoding was introduced to correct errors arising during state preparation in Shor and Steane EC rather than simply rejecting all states which fail the verification procedure.
In this work, we build on a number of recent papers \cite{CR17v1,CR17v2,Yoder2017surfacecodetwist} that demonstrate flag error correction for particular distance-three and error detecting codes and present a general protocol for arbitrary distance codes. Flag error correction uses extra ancilla qubits to detect potentially problematic high weight errors that arise during the measurement of a stabilizer. We provide a set of requirements for a stabilizer code (along with the circuits used to measure the stabilizers) which, if satisfied, can be used for flag error correction. We are primarily concerned with extending the lifetime of encoded information using fault-tolerant error correction and defer the study of implementing gates fault-tolerantly to future work. Our approach can be applied to a broad class of codes (including but not limited to surface codes, color codes and quantum Reed-Muller codes).
Of the three general schemes described above, flag EC has most in common with Shor EC.
Further, flag EC does not require verified state preparation, and for all codes considered to date, requires fewer ancilla qubits. Lastly, we note that in order to satisfy the fault-tolerant error correction definition presented in \cref{subsec:Section0}, our protocol applied to distance-three codes differs from \cite{CR17v1}.
We foresee a number of potential applications of these results.
Firstly we believe it is advantageous to have new EC schemes with different properties that can be used in various settings.
Secondly, flag EC involves small qubit overhead, hence possibly the schemes presented here and in other flag approaches \cite{CR17v1,CR17v2,Yoder2017surfacecodetwist} will find applications in early qubit-limited experiments.
Thirdly, we expect the flag EC protocol presented here could potentially be useful for LDPC codes as described in \cite{Gottesman13LDPC}.
In \cref{subsec:ReviewChaoReichardt,subsec:Distance5protocol} we provide important definitions and introduce flag FTEC for distance-three and -five codes. In \cref{subsec:ApplicationProtocolColorCode} we apply the protocol to two examples: the \codepar{19,1,5} and \codepar{17,1,5} color codes, which importantly have a variety of different weight stabilizers. The general flag FTEC protocol for arbitrary distance codes is given in \cref{subsec:GeneralProtocol}. A proof that the general protocol satisfies the fault-tolerance criteria is given in \cref{app:ProtocolGeneralProof}. In \cref{subsec:Remarks} we provide examples of codes that satisfy the conditions that we required for flag FTEC. Flag circuit constructions for measuring stabilizers of the codes in \cref{subsec:Remarks} are given in \cref{app:GeneralTflaggedCircuitConstruction}. We also provide a candidate circuit construction for measuring arbitrary weight stabilizers in \cref{App:GeneralwFlagCircuitConstruction}. In \cref{sec:CircuitLevelNoiseFTEC}, we analyze numerically a number of flag EC schemes and compare with other FTEC schemes under various types of circuit level noise. We find that flag EC schemes, which have large numbers of idle qubit locations, perform best in error models in which idle qubit errors occur with a lower probability than CNOT errors. The remainder of this section is devoted to FTEC and noise model/simulation methods.
\subsection{Fault-tolerant error correction}
\label{subsec:Section0}
Throughout this paper, we assume a simple depolarizing noise model in which idle qubits fail with probability $\tilde{p}$ and all other circuit operations (gates, preparations and measurements) fail with probability $p$, which recovers standard circuit noise when $\tilde{p}=p$. A detailed description is given in \cref{subsec:NoiseAndNumerics}.
The weight of a Pauli operator $E$, written $\text{wt}(E)$, is the number of qubits on which it has non-trivial support. We first make some definitions:
\begin{definition}{\underline{Weight-$t$ Pauli operators}}
\begin{align}
\mathcal{E}_{t} = \{ E \in \mathcal{P}_{n} | \text{wt}(E) \le t \},
\end{align}
where $\mathcal{P}_{n}$ is the $n$-qubit Pauli group.
\label{Def:EpsilontSet}
\end{definition}
\begin{definition}{\underline{Stabilizer error correction}}
Given a stabilizer group $\mathcal{S} = \langle g_{1}, \cdots, g_{m} \rangle$, we define the syndrome $s(E)$ to be a bit string whose $i$th bit is zero if $g_i$ and $E$ commute, and one otherwise.
Let $E_{\text{min}}(s)$ be a minimal weight correction $E$ where $s(E)=s$.
We say operators $E$ and $E'$ are logically equivalent, written as $E \sim E'$, iff $E' \propto g E$ for some $g \in \mathcal{S}$.
\label{Def:LogEquivDef}
\end{definition}
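The syndrome map $s(E)$ of \cref{Def:LogEquivDef} can be made concrete with the standard binary (symplectic) representation of Pauli strings, where two Paulis commute iff their symplectic inner product vanishes. The following is our own minimal sketch, not tied to any particular code:

```python
import numpy as np

def pauli_to_xz(p):
    """Map a Pauli string such as 'IXZY' to binary X- and Z-support vectors."""
    x = np.array([c in 'XY' for c in p], dtype=int)
    z = np.array([c in 'ZY' for c in p], dtype=int)
    return x, z

def weight(p):
    """wt(E): number of qubits on which the Pauli has non-trivial support."""
    return sum(c != 'I' for c in p)

def commute(p, q):
    """True iff Pauli strings p and q commute (symplectic inner product = 0)."""
    px, pz = pauli_to_xz(p)
    qx, qz = pauli_to_xz(q)
    return (px @ qz + pz @ qx) % 2 == 0

def syndrome(error, generators):
    """Bit string s(E): i-th bit is 0 iff generator g_i commutes with E."""
    return tuple(0 if commute(g, error) else 1 for g in generators)

# Four-qubit example with Z- and X-type parity checks:
gens = ['ZZZZ', 'XXXX']
print(syndrome('IXII', gens))  # X on qubit 2 anticommutes with ZZZZ: (1, 0)
```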
An error correction protocol typically consists of a sequence of basic operations to infer syndrome measurements of a stabilizer code $C$, followed by the application of a Pauli operator (either directly or through Pauli frame tracking \cite{DA07,Barbara15,CIP17}) intended to correct errors in the system.
Roughly speaking, a given protocol is fault-tolerant if for sufficiently weak noise, the effective noise on the logical qubits is even weaker.
More precisely, we say that an error correction protocol is a $t$-FTEC if the following is satisfied:
\begin{definition}{\underline{Fault-tolerant error correction}}
For $t = \lfloor (d-1)/2\rfloor$, an error correction protocol using a distance-$d$ stabilizer code $C$ is $t$-fault-tolerant if the following two conditions are satisfied:
\begin{enumerate}
\item For an input codeword with error of weight $s_{1}$, if $s_{2}$ faults occur during the protocol with $s_{1} + s_{2} \le t$, ideally decoding the output state gives the same codeword as ideally decoding the input state.
\item For $s$ faults during the protocol with $s \le t$, no matter how many errors are present in the input state, the output state differs from a codeword by an error of at most weight $s$.
\end{enumerate}
\label{Def:FaultTolerantDef}
\end{definition}
Here ideally decoding is equivalent to performing fault-free error correction.
By codeword, we mean any state $\ket{\overline{\psi}} \in C$ such that $g\ket{\overline{\psi}} = \ket{\overline{\psi}} \thinspace \forall \thinspace g \in \mathcal{S}$ where $\mathcal{S}$ is the stabilizer group for the code $C$.
Note that for the second criterion in \cref{Def:FaultTolerantDef}, the output and input codewords can differ by a logical operator.
The first criterion in \cref{Def:FaultTolerantDef} ensures that correctable errors don't spread to uncorrectable errors during the error correction protocol. Note however that the first condition alone isn't sufficient. For instance, the trivial protocol where no correction is ever applied at the end of the EC round also satisfies the first condition, but clearly is not fault-tolerant.
The second condition is not always checked for protocols in the literature, but it is important as it ensures that errors do not accumulate uncontrollably in consecutive rounds of error correction (see \cite{AGP06} for a rigorous proof and \cite{CDT09} for an analysis of the role of input errors in an extended rectangle). To give further motivation as to why the second condition is important, consider a scenario with $s$ faults introduced during each round of error correction, and assume that $t/n<s<(2t+1)/3$ for some integer $n$ (see Fig.~\ref{fig:ConditionTwoJustification}). Consider an error correction protocol in which $r$ input errors and $s$ faults in an EC block leads to an output state with at most $r+s$ errors\footnote{This is the case for Shor, Steane and Knill EC with appropriately verified ancilla states. However the surface code does not satisfy this due to hook errors but nonetheless still satisfies condition 1 of \cref{Def:FaultTolerantDef}. }. Clearly condition 1 is satisfied.
With the above considerations, an input state $E_{1}\ket{\bar{\psi}}$ with $\text{wt}(E_{1})\leq s$ is taken to $E_{2}\ket{\bar{\psi}}$, with $\text{wt}(E_{2})\leq 2s$ by one error correction round with $s$ faults.
After the $j$th round, the state will be $E_{j}\ket{\bar{\psi}}$ with the first condition implying $\text{wt}(E_{j})\leq j \cdot s$ provided that $j \leq n$.
However, when $j > n$, the requirement of the first condition is no longer satisfied so we cannot use it to upper bound $\text{wt}(E_{j})$.
Now consider the same scenario but assuming both conditions hold.
The second condition implies that after the first round, the input state $E_{1}\ket{\bar{\psi}}$ becomes $E'_{2}\ket{\bar{\phi}} = E_{2}\ket{\bar{\psi}}$, with $\text{wt}(E_{2}')\leq s$, and where $\ket{\bar{\phi}}$ is a codeword.
Therefore the codewords are related by: $\ket{\bar{\phi}}= (E_{2}^{'\dagger} E_{2}) \ket{\bar{\psi}}$, with logical operator $(E_{2}^{'\dagger} E_{2})$ having weight at most $3s$, since $\text{wt}(E_{2})+\text{wt}(E_{2}') \leq 3s$.
However, the minimum non-trivial logical operator of the code has weight $(2t+1)>3s$, implying that $\ket{\bar{\psi}} = \ket{\bar{\phi}}$, and therefore that $\text{wt}(E_{2}) = \text{wt}(E_{2}') \leq s$.
Hence, for the $j$th round, $\text{wt}(E_{j}) \leq s$ for all $j$, i.e. the distance from the codeword is not increased by consecutive error correction rounds with $s$ faults, provided $s < (2t+1)/3$.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{ConditionTwoJustification.png}
\caption{An example showing the first fault tolerance condition alone in \cref{Def:FaultTolerantDef} is not sufficient to imply a long lifetime.
We represent $s$ faults occurring during a round of error correction with a vertical arrow, and a state a distance $r$ from the desired codeword with a horizontal arrow with $r$ above.
The first condition alone allows errors to build up over time as in the top figure, which would quickly lead to a failure.
However provided $s<(2t+1)/3$, both conditions together ensure that errors in consecutive error correction rounds do not build up, provided each error correction round introduces no more than $s$ faults, which could remain true for a long time.}
\label{fig:ConditionTwoJustification}
\end{figure}
\subsection{Noise model and pseudo-threshold calculations}
\label{subsec:NoiseAndNumerics}
In \cref{sec:CircuitLevelNoiseFTEC}, we perform a full circuit level noise analysis of various error correction protocols. Unless otherwise stated, we use the following depolarizing noise model:
\begin{enumerate}
\item With probability $p$, each two-qubit gate is followed by a two-qubit Pauli error drawn uniformly and independently from $\{I,X,Y,Z\}^{\otimes 2}\setminus \{I\otimes I\}$.
\item With probability $\frac{2p}{3}$, the preparation of the $\ket{0}$ state is replaced by $\ket{1}=X\ket{0}$. Similarly, with probability $\frac{2p}{3}$, the preparation of the $\ket{+}$ state is replaced by $\ket{-}=Z\ket{+}$.
\item With probability $\frac{2p}{3}$, any single qubit measurement has its outcome flipped.
\item Lastly, with probability $\tilde{p}$, each resting qubit location is followed by a Pauli error drawn uniformly and independently from $\{ X,Y,Z \}$.
\end{enumerate}
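The four location types above can be sampled directly. The sketch below is our own illustration of the noise model, not simulation code from this work:

```python
import random

# All 15 non-identity two-qubit Paulis, drawn uniformly at a faulty gate.
TWO_QUBIT_PAULIS = [(a, b) for a in 'IXYZ' for b in 'IXYZ' if (a, b) != ('I', 'I')]

def sample_two_qubit_fault(p):
    """With probability p, return a uniform non-identity two-qubit Pauli."""
    if random.random() < p:
        return random.choice(TWO_QUBIT_PAULIS)
    return None

def sample_prep_flip(p):
    """|0> prep replaced by |1> (or |+> by |->) with probability 2p/3."""
    return random.random() < 2 * p / 3

def sample_meas_flip(p):
    """Single-qubit measurement outcome flipped with probability 2p/3."""
    return random.random() < 2 * p / 3

def sample_idle_fault(p_tilde):
    """Resting qubit suffers X, Y or Z, each with probability p_tilde/3."""
    if random.random() < p_tilde:
        return random.choice('XYZ')
    return None

random.seed(7)
print(sample_two_qubit_fault(1.0))  # at p = 1, always a non-identity pair
```

Running the simulation with $\tilde{p} \in \{p, p/10, p/100\}$ then only requires passing a different `p_tilde` to the idle-location sampler.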
Some error correction schemes that we analyze contain a significant number of idle qubit locations. Consequently, most schemes will be analyzed using three ratios ($\tilde{p} = p$, $\tilde{p} = p/10$ and $\tilde{p} = p/100$) to highlight the impact of idle qubit locations on the logical failure rate.
The two-qubit gates we consider are: CNOT, XNOT$ = H_1(\text{CNOT})H_1$, and CZ$ = H_2(\text{CNOT})H_2$.
Logical failure rates are estimated using an $N$-run Monte Carlo simulation. During a particular run, errors are added at each location following the noise model described above. Once the error locations are fixed, the errors are propagated through a fault-tolerant error correction circuit and a recovery operation is applied. After performing a correction, the output is ideally decoded to verify if a logical fault occurred.
For an error correction protocol implemented using a stabilizer code $C$ and a fixed value of $p$, we define the logical failure rate
\begin{equation}
p_{\text{L}}^{(C)}(p) = \lim_{N \to\infty} \frac{N_{\text{fail}}^{(C)}(p)}{N} ,
\end{equation}
where $N_{\text{fail}}^{(C)}(p)$ is the number of times a logical $X$ \textit{or} logical $Z$ error occurred over the $N$ runs.
In practice we take $N$ sufficiently large to estimate $p_{\text{L}}^{(C)}(p)$, and provide error bars \cite{AliferisCross07,CJL16b}.
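The finite-$N$ estimate of $p_{\text{L}}^{(C)}(p)$ and its error bar can be sketched as below; `run_once` is a hypothetical stand-in for one noisy EC round followed by ideal decoding, and the binomial standard error is one simple choice of error bar (the cited works may use a different prescription):

```python
import math
import random

def estimate_logical_failure_rate(run_once, N=20000, seed=1):
    """Estimate p_L = N_fail / N with a 1-sigma binomial error bar.

    run_once() must return True when a logical X or Z failure occurred
    in a single simulated round (a stand-in for the full EC circuit)."""
    random.seed(seed)
    fails = sum(run_once() for _ in range(N))
    p_hat = fails / N
    err = math.sqrt(p_hat * (1 - p_hat) / N)
    return p_hat, err

# Toy stand-in: a 'code' whose rounds fail 1% of the time.
p_hat, err = estimate_logical_failure_rate(lambda: random.random() < 0.01)
print(p_hat, err)
```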
In this paper we are concerned with evaluating the performance of FTEC protocols (i.e. we do not consider performing logical gates fault-tolerantly). We define the pseudo-threshold of an error correction protocol to be the value of $p$ such that
\begin{align}
\tilde{p}(p) = p^{(C)}_{L}(p).
\label{Def:PseudoThreshDef}
\end{align}
Note that it is important to have $\tilde{p}$ on the left-hand side of \cref{Def:PseudoThreshDef} instead of $p$, since we want an encoded qubit to have a lower logical failure rate than an unencoded idle qubit, which under the above noise model fails with probability $\tilde{p}$.
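Once an estimate of $p_{L}^{(C)}(p)$ is available, the crossing point in \cref{Def:PseudoThreshDef} can be located numerically. The bisection below is our own sketch, under the assumption that $p_{L}^{(C)}(p) - \tilde{p}(p)$ changes sign exactly once in the search interval:

```python
def pseudo_threshold(p_logical, idle_ratio=1.0, lo=1e-5, hi=0.5, iters=60):
    """Bisection for the p solving idle_ratio * p = p_logical(p).

    idle_ratio encodes p_tilde = idle_ratio * p (e.g. 1, 1/10, 1/100).
    Assumes a single sign change of p_logical(p) - p_tilde on [lo, hi]."""
    f = lambda p: p_logical(p) - idle_ratio * p
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy model p_L(p) = c * p^2 (distance-3 scaling): crossing at p = 1/c.
print(pseudo_threshold(lambda p: 100 * p ** 2))  # ~0.01
```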
\section{Flag error correction for small distance codes}
\label{sec:Section1}
In this and the next section, we present a $t$-fault-tolerant flag error correction protocol with distance-$(2t+1)$ codes satisfying a certain condition.
Our approach extends that introduced by Chao and Reichardt \cite{CR17v1} for distance three codes, which we first review using our terminology in \cref{subsec:ReviewChaoReichardt}.
In \cref{subsec:Distance5protocol} we present the protocol for distance five CSS codes which contains most of the main ideas of the general case (which is provided in \cref{app:GeneralFTEC}).
Lastly, in \cref{subsec:ApplicationProtocolColorCode} we provide examples of how the protocol is applied to the \codepar{19,1,5} and \codepar{17,1,5} color codes.
\subsection{Definitions and Flag $1$-FTEC with distance-3 codes}
\label{subsec:ReviewChaoReichardt}
In what follows, we use the term location to refer to a gate, state preparation, measurement or idle qubit where a fault may occur.
Note also that a two-qubit Pauli error $P_{1}\otimes P_{2}$ arising at a two-qubit gate location counts as a single fault.
It is well known that with only a single measurement ancilla, a single fault in a blue CNOT of the stabilizer measurement circuit shown in \cref{fig:StabNonFT} can result in a multi-qubit error on the data block.
This could cause a distance-$3$ code to fail, or more generally could cause a distance-$d$ code to fail due to fewer than $(d-1)/2$ total faults.
We therefore say the blue CNOTs are \textit{bad} according to the following definition:
\begin{definition}{\underline{Bad locations}}
A circuit location in which a single fault can result in a Pauli error $E$ on the data block with $\mathrm{wt}(E) \ge 2$ will be referred to as a bad location.
\label{Def:BadErrorDef}
\end{definition}
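How a single ancilla fault grows into a bad location can be tracked symbolically via Pauli propagation through CNOTs ($X$ on the control copies to the target; $Z$ on the target copies to the control). The sketch below is our own illustration and assumes one common convention, namely that each data qubit in order $q_1$ to $q_4$ controls a CNOT onto the measurement ancilla; the exact gate ordering of the figure is not reproduced:

```python
def cnot(x, z, c, t):
    """Propagate a Pauli with X-support set x and Z-support set z
    through CNOT(control=c, target=t), up to phase."""
    if c in x:
        x = x ^ {t}   # X on control copies to target
    if t in z:
        z = z ^ {c}   # Z on target copies to control
    return x, z

# Measuring ZZZZ: data qubits 1..4 each control a CNOT onto ancilla 'a'.
# Single fault: a Z appears on the ancilla right after the CNOT from qubit 2.
x, z = set(), {'a'}
for q in (3, 4):                       # the two remaining CNOTs
    x, z = cnot(x, z, q, 'a')
data_z = sorted(q for q in z if q != 'a')
print(data_z)  # [3, 4]: one fault became a weight-two Z error on the data
```

A flag qubit coupled to the ancilla between these CNOTs is what detects exactly this kind of spread.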
\begin{figure}
\centering
\begin{subfigure}{0.25\textwidth}
\includegraphics[width=\textwidth]{StabNonFT.png}
\caption{}
\label{fig:StabNonFT}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{Weight4Generator.png}
\caption{}
\label{fig:StabFTwithAncilla}
\end{subfigure}
\begin{subfigure}{0.25\textwidth}
\includegraphics[width=\textwidth]{StabFTwithAncillaV2.png}
\caption{}
\label{fig:StabFTwithAncillaCompact}
\end{subfigure}
\caption{Circuits for measuring the operator $ZZZZ$ (convertible to any Pauli by single-qubit Cliffords). (a) Non-fault-tolerant circuit. A single fault $IZ$ occurring on the third CNOT (from the left) results in the error $IIZZ$ on the data block. (b) Flag version of \cref{fig:StabNonFT}. An ancilla (flag) qubit prepared in $\ket{+}$ and two extra CNOT gates signal when a weight-two data error is caused by a single fault. Subsequent rounds of error correction may identify which error occurred. Consider an $IZ$ error on the second CNOT: in the non-flag circuit this would result in a weight-two error, but here the fault causes the circuit to flag. (c) An alternative flag circuit with lower depth than (b). All bad locations are illustrated in blue.}
\label{fig:ErrorPropSteane}
\end{figure}
As shown in \cref{fig:StabFTwithAncilla}, the circuit can be modified by including an additional ancilla (flag) qubit, and two extra CNOT gates.
This modification leaves the bad locations and the fault-free action of the circuit unchanged.
However, any single fault leading to an error $E$ with $\mathrm{wt}(E) \ge 2$ will also cause the measurement outcome of the flag qubit to flip \cite{CR17v1}. The following definitions will be useful:
\begin{definition}{\underline{Flags and measurements}}
Consider a circuit for measuring a stabilizer generator that includes at least one flag ancilla. The ancilla used to infer the stabilizer outcome is referred to as the \textit{measurement qubit}. We say the circuit has flagged if the eigenvalue of a flag qubit is measured as $-1$. If the eigenvalue of a measurement qubit is measured as $-1$, we will say that the measurement qubit flipped.
\label{Def:GlagMeasureQubitsDef}
\end{definition}
The purpose of flag qubits is to signal when high weight data qubit errors result from few fault locations during a stabilizer measurement. Two key definitions are:
\begin{definition}{\underline{t-flag circuit}}
A circuit\footnote{To avoid confusion between the notation $C(P)$, which represents a circuit, and $C$, which represents a code space, we always include the measured Pauli in parentheses unless it is clear from context.} $C(P)$ which, when fault-free, implements a projective measurement of a weight-$w$ Pauli $P$ without flagging is a $t$-flag circuit if the following holds: For any set of $v$ faults at up to $t$ locations in $C(P)$ resulting in an error $E$ with $\text{min}(\text{wt}(E),\text{wt}(E P)) > v$, the circuit flags.
\label{Def:tFlaggedCircuitDef}
\end{definition}
Note that a $t$-flag circuit for measuring a weight-$t$ stabilizer $P$ is also a $k$-flag circuit for any $k>t$. In \cref{app:GeneralTflaggedCircuitConstruction} we give constructions for some $t$-flag circuits.
\begin{definition}{\underline{Flag error set}}
Let $\mathcal{E}(g_{i})$ be the set of all errors arising from a single fault that causes the circuit $C(g_i)$ to flag.
\label{Def:FlagErrSetDef1}
\end{definition}
Note that the flag error set can contain the identity as well as weight one errors.
Suppose all errors in a flag error set $\mathcal{E}(g)$ for a 1-flag circuit $C(g)$ have distinct syndromes.
As $C(g)$ is a 1-flag circuit, a single fault that leads to an error of weight greater than one will cause the circuit $C(g)$ to flag.
Moreover, when a flag has occurred due to at most one fault, a complete set of fault-free stabilizer measurements will infer the resulting element of the flag error set which has been applied to the data qubits. In fact, one would only require distinct syndromes for errors in the flag error set that are logically inequivalent, as defined in \cref{Def:LogEquivDef}.
As an example, consider the 1-flag circuit in \cref{fig:StabFTwithAncilla}. A single fault at any of the blue CNOT gates can lead to an error $E_{b}$ with $\text{wt}(E_{b}) \le 2$ on the data. The set $\mathcal{E}(Z^{\otimes 4})$ contains all errors $E_{b}$ which resulted from a fault at a blue CNOT gate causing the circuit $C(Z^{\otimes 4})$ of \cref{fig:StabFTwithAncilla} to flag, i.e.,
$\mathcal{E}(g) = \{ I,Z_{q_{3}}Z_{q_{4}},X_{q_{2}}Z_{q_{3}}Z_{q_{4}},Z_{q_{1}}X_{q_{2}},Z_{q_{4}},$ $X_{q_{3}}Z_{q_{4}},Y_{q_{3}}Z_{q_{4}} \}$, where $q_1$ to $q_4$ label the data qubits.
With the above definitions, we can construct a fault-tolerant flag error correction protocol for $d=3$ stabilizer codes satisfying the following condition.
\begin{definition}{\underline{\textbf{Flag $1$-FTEC condition:}}}
Consider a stabilizer code $\mathcal{S} = \langle g_{1},g_{2},\cdots , g_{r} \rangle$ and $1$-flag circuits $\{ C(g_{1}),C(g_{2}), \cdots , C(g_{r}) \}$. For every generator $g_{i}$, all pairs of elements $E,E'\in \mathcal{E}(g_{i})$ satisfy $s(E)\neq s(E')$ or $E \sim E'$.
\label{Def:Flag1FTECcondition}
\end{definition}
In other words, we require that any two errors that arise when a circuit $C(g_{i})$ flags due to a single fault must be either distinguishable or logically equivalent. For the following protocol to satisfy the FTEC conditions in \cref{Def:FaultTolerantDef}, one can assume there is at most one fault. If the Flag $1$-FTEC condition is satisfied, the protocol is implemented as follows:
\vspace{17px}
\fbox{\begin{minipage}{23em}
\textbf{Flag $1$-FTEC Protocol:}
Repeat the syndrome measurement using flag circuits until one of the following is satisfied:
\begin{enumerate}
\item If the syndrome $s$ is repeated twice in a row and there were no flags, apply the correction $E_{\text{min}}(s)$.
\item If there were no flags and the syndromes $s_{1}$ and $s_{2}$ from two consecutive rounds differ, repeat the syndrome measurement using non-flag circuits yielding syndrome $s$. Apply the correction $E_{\text{min}}(s)$.
\item If a circuit $C(g_{i})$ flags, stop and repeat the syndrome measurement using non-flag circuits yielding syndrome $s$. If there is an element $E \in \mathcal{E}(g_{i})$ which satisfies $s(E)=s$, then apply $E$, otherwise apply $E_{\text{min}}(s)$.
\end{enumerate}
\end{minipage}}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{TreeD3.png}
\caption{Tree diagram illustrating the possible paths of the Flag $1$-FTEC Protocol. Numbers enclosed in red circles at the end of the edges indicate which step to implement in the Flag $1$-FTEC Protocol. A dashed line is followed when any of the 1-flag circuits $C(g_{i})$ flags. Solid squares indicate a syndrome measurement using 1-flag circuits whereas rings indicate a decision based on syndrome outcomes. Note that the syndrome measurement is repeated at most three times.}
\label{fig:TreeD3Diag}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{SteaneCodeColor.png}
\caption{}
\label{fig:SteaneColor1}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{SteaneConditionSatisfied.png}
\caption{}
\label{fig:SteaneColor2}
\end{subfigure}
\caption{
(a) A representation of the Steane code where each circle is a qubit, and there is an $X$- and a $Z$-type stabilizer generator for each face.
Stabilizer circuits are obtained from the one in Fig.~\ref{fig:ErrorPropSteane}(a) after rotating the lattice such that the relevant face is on the bottom left.
(b) For $g = Z_{q_{1}}Z_{q_{2}}Z_{q_{3}}Z_{q_{4}}$, the flag error set is $\mathcal{E}(g) = \{ I,Z_{q_{3}}Z_{q_{4}},X_{q_{2}}Z_{q_{3}}Z_{q_{4}},Z_{q_{1}}X_{q_{2}},Z_{q_{4}},$ $X_{q_{3}}Z_{q_{4}},X_{q_{3}}Z_{q_{3}}Z_{q_{4}} \}$ which contains all errors arising from a single fault that causes the stabilizer measurement circuit $C(g)$ to flag.
Since the Steane code is a CSS code, the $X$ component of an error will be corrected independently allowing us to consider the $Z$-part of the flag error set $\mathcal{E}_Z(g)=\{I,Z_{q_1},Z_{q_4},Z_{q_3}Z_{q_4} \}$.
As required, the elements of $\mathcal{E}_Z(g)$ all have distinct syndromes (with satisfied stabilizers represented by a plus).
}
\label{fig:Steane}
\end{figure}
A tree diagram for the Flag $1$-FTEC Protocol is illustrated in \cref{fig:TreeD3Diag}. We now outline the proof that the Flag $1$-FTEC Protocol satisfies the fault-tolerance criteria of \cref{Def:FaultTolerantDef} (a more rigorous proof of the general case is presented in \cref{app:ProtocolGeneralProof}). To show that the Flag $1$-FTEC Protocol satisfies the criteria of \cref{Def:FaultTolerantDef}, we can assume there is at most one fault during the protocol. If a single fault occurs in either the first or second round leading to a flag, repeating the syndrome measurement will correctly diagnose the error. If there are no flags and a fault occurs which causes the syndromes in the first two rounds to change, then the syndrome during the third round will correctly diagnose the error. There could also be a fault during either the first or second round that goes undetected; since there were no flags, however, it cannot spread to an error of weight two. In this case applying a minimum weight correction based on the measured syndrome of the second round will guarantee that the output codeword differs from a valid codeword by an error of weight at most one. Note that the above argument applies irrespective of any errors on the input state, hence the second criterion of \cref{Def:FaultTolerantDef} is satisfied. It is worth pointing out that up to three repetitions are required in order to guarantee that the second criterion of \cref{Def:FaultTolerantDef} is satisfied (unless the code has the property that all states are at most a weight-one error away from a valid codeword, as in \cite{CR17v1}).
The Steane code is an example which satisfies the Flag $1$-FTEC condition with a simple choice of circuits.
To verify this, the representation of the Steane code given in \cref{fig:SteaneColor1} is useful.
There is an $X$- and a $Z$-type stabilizer generator supported on the four qubits of each of the three faces.
First let us specify all six stabilizer measurement circuits.
The circuit that measures $Z_{q_1}Z_{q_2}Z_{q_3} Z_{q_4}$ is specified by taking qubits $q_1$, $q_2$, $q_3$, and $q_4$ to be the four data qubits in descending order in the 1-flag circuit in \cref{fig:StabFTwithAncilla}.
The other two $Z$-stabilizer measurement circuits are obtained by first rotating \cref{fig:SteaneColor1} by $120^{\circ}$ and $240^{\circ}$ and then using \cref{fig:StabFTwithAncilla}.
The $X$-stabilizer circuit for each face is the same as the $Z$-stabilizer circuit for that face, replacing CNOT gates acting on data qubits by XNOT gates. The $Z$ component of the flag error set of the circuit in \cref{fig:StabFTwithAncilla} is $\mathcal{E}_Z(Z_{q_1}Z_{q_2}Z_{q_3}Z_{q_4}) = \{ I,Z_{q_1},Z_{q_4},Z_{q_3}Z_{q_4} \}$.
As can be seen from \cref{fig:SteaneColor1}, each of these has a distinct syndrome, thus the measurement circuit for $Z_{q_1}Z_{q_2}Z_{q_3} Z_{q_4}$ satisfies the flag $1$-FTEC condition, as do the remaining five measurement circuits by symmetry.
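The distinctness of these syndromes can also be checked directly. The short Python check below uses the standard Hamming parity-check matrix for the Steane code's $X$ stabilizers (which diagnose $Z$ errors) and assumes, for illustration, that the face qubits $(q_1,q_2,q_3,q_4)$ are qubits $(4,5,6,7)$, i.e.\ the support of the generator $Z_4Z_5Z_6Z_7$ itself; the labeling is an assumption, not the paper's convention.

```python
# X-stabilizer parity-check matrix of the Steane code (columns = qubits 1..7).
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(z_support):
    """Syndrome of a Z error on the given (1-indexed) qubits."""
    return tuple(sum(row[q - 1] for q in z_support) % 2 for row in H)

# Assumed labeling: face qubits (q1, q2, q3, q4) = qubits (4, 5, 6, 7).
q1, q2, q3, q4 = 4, 5, 6, 7
flag_error_set_Z = [(), (q1,), (q4,), (q3, q4)]  # I, Z_q1, Z_q4, Z_q3 Z_q4

syndromes = [syndrome(e) for e in flag_error_set_Z]
assert len(set(syndromes)) == len(syndromes)  # all four syndromes distinct
```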
\subsection{Flag $2$-FTEC with distance-5 codes}
\label{subsec:Distance5protocol}
Before explicitly describing the conditions and protocol, we discuss some of the complications that arise for codes with $d>3$.
For distance-5 codes, we must ensure that if two faults occur during the error correction protocol, the output state will differ from a codeword by an error of at most weight-two. For instance, if two faults occur in a circuit for measuring a stabilizer of weight greater than four, the resulting error $E$ on the data should satisfy $\text{wt}(E) \le 2$ unless there is a flag. In other words, all stabilizer generators should be measured using 2-flag circuits.
In another case, two faults could occur during the measurement of \textit{different} stabilizer generators $g_{i}$ and $g_{j}$. If faults at two bad locations cause both circuits to flag, and assuming there are no further faults, the measured syndrome will correspond to the product of the errors introduced by each circuit (which could have weight greater than two). Consequently, one should modify \cref{Def:FlagErrSetDef1} of the flag error set to include these types of errors.
One then decodes based on the pair of errors that resulted in the measured syndrome, provided logically inequivalent errors have distinct syndromes.
Before stating the protocol, we extend some definitions from \cref{subsec:ReviewChaoReichardt}.
Consider a stabilizer code $\mathcal{S} = \langle g_{1},g_{2},\cdots , g_{r} \rangle$ and $t$-flag circuits $C(g_{i})$ for measuring the generator $g_{i}$.
\begin{definition}{\underline{Flag error set}}
Let $\mathcal{E}_{m}(g_{i_{1}},\cdots , g_{i_{k}})$ be the set of all errors caused by precisely $m$ faults spread amongst the circuits $C(g_{i_{1}}),C(g_{i_{2}}), \cdots , C(g_{i_{k}})$ which all flagged.
\label{Def:FlagErrSetDef}
\end{definition}
Note that there could be more than one fault in a single circuit $C(g_{i_{k}})$. Examples of flag error sets are given in \cref{tab:PossibleCorrelatedErrors} where only contributions from $Z$ errors are included (since the considered code is a CSS code). We also define a general $t$-fault correction set:
\begin{align}
\tilde{E}_{t}^{m}(g_{i_{1}},\cdots , g_{i_{k}},s) =
\begin{cases}
\{ E \in \mathcal{E}_{m}(g_{i_{1}},\cdots , g_{i_{k}}) \times \mathcal{E}_{t-m} \\
\text{ such that } s(E) = s \} \\
\{ E_{\text{min}}(s) \} \text{ if above set empty. }
\end{cases}
\label{eq:GeneralLookupTable}
\end{align}
By $E \in \mathcal{E}_{m}(g_{i_{1}},\cdots , g_{i_{k}}) \times \mathcal{E}_{t-m}$ we mean that $E$ is a product of an error caused by $m$ faults spread amongst the $k$ flagged circuits with any error of weight $t-m$.
As will be seen below, the correction set forms a critical part of the protocol: it specifies the correction applied based on the measured syndrome and flag outcomes over multiple syndrome measurement rounds. In the case where $k$ $t$-flag circuits flag due to $m$ faults with $k \le m \le t$, the correction applied to the data block will correspond to an element of $\mathcal{E}_{m}(g_{i_{1}},\cdots , g_{i_{k}}) \times \mathcal{E}_{t-m}$ if the measured syndrome corresponds to an element of this set (there could also be $t-m$ faults which did not give rise to a flag). In practice, however, there could be more than $t$ faults, so the measured syndrome may not be consistent with any element of the set $\mathcal{E}_{m}(g_{i_{1}},\cdots , g_{i_{k}}) \times \mathcal{E}_{t-m}$. In this case, and in order for the error correction protocol to satisfy the second criterion of \cref{Def:FaultTolerantDef}, the correction will correspond to $E_{\text{min}}(s)$. All of these features are included in the set $\tilde{E}_{t}^{m}(g_{i_{1}},\cdots , g_{i_{k}},s)$.
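Operationally, the correction set amounts to a syndrome lookup with a minimum-weight fallback. The minimal Python sketch below illustrates this; it is not the implementation used in the numerics. Errors are modeled as frozensets of qubits carrying a $Z$ (so that a product of two self-inverse errors is their symmetric difference), and \texttt{syndrome\_of} and \texttt{e\_min} are externally supplied placeholders.

```python
from itertools import product

def correction_set(flag_errors_m, generic_errors, syndrome_of, e_min, s):
    """Sketch of the t-fault correction set defined above: all products of
    an element of E_m(g_i1, ..., g_ik) with an element of E_{t-m} whose
    combined syndrome equals s, falling back to {E_min(s)} if none match.
    Errors are frozensets of qubits carrying Z; '^' is their product."""
    matches = [e1 ^ e2
               for e1, e2 in product(flag_errors_m, generic_errors)
               if syndrome_of(e1 ^ e2) == s]
    return matches if matches else [e_min(s)]
```

Any element of the returned set may be applied as the correction, since the flag FTEC conditions guarantee that matching elements are logically equivalent.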
\begin{definition}{\underline{\textbf{Flag $2$-FTEC condition:}}}
Consider a stabilizer code $\mathcal{S} = \langle g_{1},g_{2},\cdots , g_{r} \rangle$ and $2$-flag circuits $\{ C(g_{1}),C(g_{2}), \cdots , C(g_{r}) \}$. For any choice of generators $\{ g_{i}, g_{j} \}$:
\begin{enumerate}
\item $E,E' \in \mathcal{E}_{2}(g_{i},g_{j}) \Rightarrow s(E)\neq s(E')$ or $E \sim E'$,
\item $E,E' \in \mathcal{E}_{2}(g_{i})\cup (\mathcal{E}_{1}(g_{i}) \times \mathcal{E}_{1}) \Rightarrow s(E) \neq s(E')$ or $E \sim E'$.
\end{enumerate}
\label{Def:Flag2FTECcondition}
\end{definition}
In order to state the protocol, we define an update rule given a sequence of syndrome measurements using $t$-flag circuits for the counters\footnote{$n_{\text{diff}}$ tracks the minimum number of faults that could have caused the observed syndrome outcomes. For example, if the sequence $s_{1},s_{2},s_{1}$ was measured, $n_{\text{diff}}$ would increase by one since a single measurement fault could give rise to the given sequence (for example, this could be caused by a single CNOT failure which resulted in a data qubit and measurement error). However for the sequence $s_{1},s_{2},s_{1},s_{2}$, $n_{\text{diff}}$ would increase by two.}
$n_{\text{diff}}$ and $n_{\text{same}}$ as follows:
\vspace{10px}
\fbox{\begin{minipage}{23em}
\underline{\textbf{Flag $2$-FTEC protocol -- update rules:}}
Given a sequence of consecutive syndrome measurement outcomes $s_{k}$ and $s_{k+1}$:
\begin{enumerate}
\item If $n_{\text{diff}}$ did not increase in the previous round, and $s_{k}\neq s_{k+1}$, increase $n_{\text{diff}}$ by one.
\item If a flag occurs, reset $n_{\text{same}}$ to zero.
\item If $s_{k} = s_{k+1}$, increase $n_{\text{same}}$ by one.
\end{enumerate}
\end{minipage}}
\vspace{10px}
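The update rules can be stated compactly in code. The sketch below (illustrative Python, not part of the protocol specification) tracks $n_{\text{diff}}$ and $n_{\text{same}}$ over a list of (syndrome, flagged) rounds; following the convention that a flagged round's syndrome is not recorded, comparisons restart after a flag.

```python
def run_update_rules(rounds):
    """Track n_diff and n_same over a list of (syndrome, flagged) rounds.
    A flagged round resets n_same, and its syndrome is not recorded (all
    stabilizers are re-measured), so comparisons restart after a flag."""
    n_diff = n_same = 0
    prev = None                  # last recorded (flagless) syndrome
    diff_increased_last = False
    for syndrome, flagged in rounds:
        if flagged:
            n_same = 0           # rule 2: a flag resets n_same
            prev = None          # flagged round's syndrome not recorded
            continue
        if prev is not None:
            increased = False
            if syndrome != prev and not diff_increased_last:
                n_diff += 1      # rule 1
                increased = True
            if syndrome == prev:
                n_same += 1      # rule 3
            diff_increased_last = increased
        prev = syndrome
    return n_diff, n_same

# The footnote's examples: s1, s2, s1 can arise from one fault, while
# s1, s2, s1, s2 requires two.
assert run_update_rules([('s1', False), ('s2', False), ('s1', False)]) == (1, 0)
assert run_update_rules([('s1', False), ('s2', False),
                         ('s1', False), ('s2', False)]) == (2, 0)
```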
To show that the following protocol satisfies \cref{Def:FaultTolerantDef}, one can assume that at most two faults occur. If the flag $2$-FTEC condition is satisfied, the protocol is implemented as follows:
\vspace{10px}
\fbox{\begin{minipage}{23em}
\underline{\textbf{Flag $2$-FTEC protocol -- corrections:}}
Set $n_{\text{diff}}=0$ and $n_{\text{same}} = 0$.
Repeat the syndrome measurement using flag circuits until one of the following is satisfied:
\begin{enumerate}
\item The same syndrome $s$ is repeated $3-n_{\text{diff}}$ times in a row and there were no flags, apply the correction $E_{\text{min}}(s)$.
\item There were no flags and $n_{\text{diff}}=2$. Repeat the syndrome measurement using non-flag circuits yielding syndrome $s$. Apply the correction $E_{\text{min}}(s)$.
\item Some set of two circuits $C(g_{i})$ and $C(g_{j})$ have flagged. Repeat the syndrome measurement using non-flag circuits yielding syndrome $s$. Apply any correction from the set $\tilde{E}_{2}^{2}(g_{i},g_{j},s)$.
\item Any circuit $C(g_{i})$ has flagged and $n_{\text{diff}}=1$. Repeat the syndrome measurement using non-flag circuits yielding syndrome $s$. Apply any correction from the set $\tilde{E}_{2}^{1}(g_{i},s)$.
\item Any circuit $C(g_{i})$ has flagged and $n_{\text{diff}}=0$ and $n_{\text{same}}=1$. Use the measured syndrome $s$ from the last round. Apply any correction from the set $\tilde{E}_{2}^{1}(g_{i},s)\cup \tilde{E}_{2}^{2}(g_{i},s)$.
\end{enumerate}
\end{minipage}}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{TreeD5.png}
\caption{Tree diagram for the Flag $2$-FTEC protocol. Numbers encircled in red at the end of the edges indicate which step to implement in the Flag $2$-FTEC Protocol. A dashed line is followed when any of the 2-flag circuits $C(g_{i})$ flags. Solid squares indicate a syndrome measurement using 2-flag circuits whereas rings indicate a decision based on syndrome outcomes. Edges with different colors indicate the current value of $n_{\text{diff}}$ in the protocol. Note that the protocol is repeated at most 6 times.}
\label{fig:TreeDiagramD5}
\end{figure}
\vspace{10px}
Note that when computing the update rules, if a flag occurs during the $j$'th round of syndrome measurements, the syndrome is not recorded for that round since all stabilizers must be measured. Thus when computing $n_{\text{diff}}$ and $n_{\text{same}}$ using consecutive syndromes $s_k$ and $s_{k+1}$, we are assuming that no flags occurred during rounds $k$ and $k+1$.
In each case of the protocol, the correction sets correspond to those data errors which could arise from up to two faults which are consistent with the conditions of the case.
As the elements are logically equivalent (by \cref{eq:GeneralLookupTable,Def:Flag2FTECcondition}), which element is applied is unimportant.
The general protocol for codes of arbitrary distance is given in \cref{app:GeneralFTEC}.
\subsection{Examples of flag 2-FTEC applied to $d=5$ codes}
\label{subsec:ApplicationProtocolColorCode}
\begin{figure}
\centering
\begin{subfigure}{0.22\textwidth}
\includegraphics[width=\textwidth]{19qubitColorLattice.png}
\caption{}
\label{fig:19qubitLatticeColor}
\end{subfigure}
\begin{subfigure}{0.25\textwidth}
\includegraphics[width=\textwidth]{17qubitColorLattice.png}
\caption{}
\label{fig:17qubitLatticeColor}
\end{subfigure}
\caption{Graphical representation of (a) the 19-qubit 2D color code and (b) the 17-qubit 2D color code. For each code, the $X$- and $Z$-type stabilizer generators have identical supports, given by the vertices of each plaquette. Both codes have distance 5.}
\label{fig:ColorCodeLattices}
\end{figure}
In this section we give examples of the flag $2$-FTEC protocol applied to the 2-dimensional \codepar{19,1,5} and \codepar{17,1,5} color codes (see \cref{fig:19qubitLatticeColor,fig:17qubitLatticeColor}). We first find 2-flag circuits for all generators (weight-4 and -6 for the 19-qubit code; weight-4 and -8 for the 17-qubit code). We also show that the flag 2-FTEC condition is satisfied for both codes.
\begin{table}[t]
\begin{tabular}{ c|c|c|c}
\multicolumn{2}{c|}{Weight-4 measurement} & \multicolumn{2}{|c}{Weight-6 measurement}\\ \hline
1-fault & 2-faults & 1-fault & 2-faults \\ \hline
$I$,$Z_{1}$ & $I$,$Z_{1}$,$Z_{2}$ & $I$,$Z_{1}$,$Z_{6}$ & $I$,$Z_{1}$,$Z_{2}$\\
$Z_{4}$ & $Z_{3}$,$Z_{4}$&$Z_{1}Z_{2}$ &$Z_{3}$, $Z_{4}$,$Z_{5}$,$Z_{6}$ \\
$Z_{3}Z_{4}$ &$Z_{1}Z_{2}$ & $Z_{5}Z_{6}$ & $Z_{1}Z_{2}$,$Z_{1}Z_{3}$ \\
& $Z_{1}Z_{4}$& $Z_{4}Z_{5}Z_{6}$ & $Z_{1}Z_{4}$,$Z_{1}Z_{5}$ \\
& $Z_{2}Z_{4}$& & $Z_{1}Z_{6}$,$Z_{2}Z_{3}$ \\
& & & $Z_{2}Z_{6}$,$Z_{3}Z_{4}$ \\
& & & $Z_{3}Z_{6}$,$Z_{4}Z_{5}$ \\
& & & $Z_{4}Z_{6}$,$Z_{5}Z_{6}$ \\
& & & $Z_{1}Z_{2}Z_{3}$,$Z_{1}Z_{5}Z_{6}$ \\
& & & $Z_{2}Z_{5}Z_{6}$,$Z_{3}Z_{4}Z_{5}$ \\
& & & $Z_{3}Z_{4}Z_{6}$,$Z_{3}Z_{5}Z_{6}$ \\
& & & $Z_{4}Z_{5}Z_{6}$\\
\end{tabular}
\caption{$Z$ part of the flag error set of \cref{Def:FlagErrSetDef} for flag circuits used to measure the stabilizers $g_{1} = Z_{1}Z_{2}Z_{3}Z_{4}$ and $g_{3} = Z_{1}Z_{2}Z_{3}Z_{4}Z_{5}Z_{6}$ (errors equivalent up to multiplication by the measured stabilizer have been removed).}
\label{tab:PossibleCorrelatedErrors}
\end{table}
For a 2-flag circuit, two faults leading to an error of weight greater than or equal to three (up to multiplication by the stabilizer) must always cause at least one of the flag qubits to flag. As shown in \cref{app:GeneralTflaggedCircuitConstruction}, a 2-flag circuit satisfying these properties can always be constructed using at most four flag qubits. We show 2-flag circuits for measuring weight-six and weight-eight generators in \cref{fig:Flag2CircuitExamples}.
In \cref{subsec:Remarks}, it will be shown that the family of color codes with a hexagonal lattice satisfies a sufficient condition which guarantees that the flag 2-FTEC condition holds. However, there are codes that do not satisfy the sufficient condition but nonetheless satisfy the flag 2-FTEC condition. For the 19-qubit and 17-qubit color codes, we verified that the flag 2-FTEC condition was satisfied by enumerating all errors, as one would have to for a generic code. In particular, in the case where the 2-flag circuits $C(g_{i})$ and $C(g_{j})$ flag, the resulting errors belonging to the set $\mathcal{E}_{2}(g_{i},g_{j})$ must be logically equivalent or have distinct syndromes (which we verified to be true). If a single circuit $C(g_{i})$ flags, there could either have been two faults in the circuit or a single fault along with another error that did not cause a flag. If the same syndrome is measured twice in a row after a flag, then errors in the set $\mathcal{E}_{2}(g_{i})\cup (\mathcal{E}_{1}(g_{i}) \times \mathcal{E}_{1})$ must be logically equivalent or have distinct syndromes (which we verified). If there is a flag but two different syndromes are measured in a row, errors belonging to the set $\mathcal{E}_{1}(g_{i}) \times \mathcal{E}_{1}$ must be logically equivalent or have distinct syndromes (as was already checked). The flag error sets (see \cref{Def:FlagErrSetDef}) for the 19-qubit code can be obtained using the Paulis shown in \cref{tab:PossibleCorrelatedErrors}.
\begin{figure}
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{Weight6Generator.png}
\caption{}
\label{fig:WeightSixGenerators}
\end{subfigure}
\begin{subfigure}{0.37\textwidth}
\includegraphics[width=\textwidth]{Weight8Generator.png}
\caption{}
\label{fig:Weight8Generator}
\end{subfigure}
\caption{
Illustration of 2-flag circuits for measuring (a) $Z^{\otimes 6}$ requiring only two flag qubits and (b) $Z^{\otimes 8}$ requiring only three flag qubits. Flag qubits are prepared in the $\ket{+}$ state, and measurement qubits in the $\ket{0}$ state.
}
\label{fig:Flag2CircuitExamples}
\end{figure}
Given that the flag 2-FTEC condition is satisfied, the flag 2-FTEC protocol can be implemented following the steps of \cref{subsec:Distance5protocol} and the tree diagram illustrated in \cref{fig:TreeDiagramD5}.
\section{Flag error correction protocol for arbitrary distance codes}
\label{app:GeneralFTEC}
In this section we first provide the general flag $t$-FTEC protocol in \cref{subsec:GeneralProtocol}. In \cref{subsec:Remarks} we give a sufficient condition for stabilizer codes that allow us to easily prove that flag FTEC can be applied to a number of infinite code families. We show that the families of surface codes, hexagonal lattice color codes and quantum Reed-Muller codes satisfy the sufficient condition. Lastly, in \cref{app:GeneralTflaggedCircuitConstruction}, we give general $t$-flag circuit constructions which are applicable to the code families described in \cref{subsec:Remarks}.
We assume the reader is familiar with all previous definitions. However, to make this section reasonably self-contained, we repeat some key definitions below.
\begin{definitionBob}{6}{\underline{$t$-flag circuit}}
A circuit $C(P)$ which, when fault-free, implements a projective measurement of a weight-$w$ Pauli $P$ without flagging is a $t$-flag circuit if the following holds: For any set of $v$ faults at up to $t$ locations in $C(P)$ resulting in an error $E$ with $\text{min}(\text{wt}(E),\text{wt}(E P)) > v$, the circuit flags.
\end{definitionBob}
\begin{definitionBob}{9}{\underline{Flag error set}}
Let $\mathcal{E}_{m}(g_{i_{1}},\cdots , g_{i_{k}})$ be the set of all errors caused by precisely $m$ faults spread amongst the circuits $C(g_{i_{1}}),C(g_{i_{2}}), \cdots , C(g_{i_{k}})$ which all flagged.
\end{definitionBob}
We also remind the reader of the correction set
\begin{align}
\tilde{E}_{t}^{m}(g_{i_{1}},\cdots , g_{i_{k}},s) =
\begin{cases}
\{ E \in \mathcal{E}_{m}(g_{i_{1}},\cdots , g_{i_{k}}) \times \mathcal{E}_{t-m} \\
\text{ such that } s(E) = s \} \\
\{ E_{\text{min}}(s) \} \text{ if above set empty. }
\end{cases}
\label{eq:GeneralLookupTableV2}
\end{align}
\subsection{Conditions and protocol}
\label{subsec:GeneralProtocol}
In what follows we generalize the fault-tolerant error correction protocol presented in \cref{subsec:Distance5protocol} to stabilizer codes of arbitrary distance.
\begin{definition}{\underline{\textbf{Flag $t$-FTEC condition:}}}
Consider a stabilizer code $\mathcal{S} = \langle g_{1},g_{2},\cdots , g_{r} \rangle$ and $t$-flag circuits $\{ C(g_{1}),C(g_{2}), \cdots , C(g_{r}) \}$. For any set of $m$ stabilizer generators $\{ g_{i_{1}},\cdots ,g_{i_{m}} \}$ such that $1 \le m \le t$, every pair of elements $E,E' \in \bigcup_{j=0}^{t-m}\mathcal{E}_{t-j}(g_{i_{1}},\cdots ,g_{i_{m}})\times \mathcal{E}_{j}$ either satisfy $s(E)\neq s(E')$ or $E \sim E'$.
\label{Def:FlagtFTECcondition}
\end{definition}
The above conditions ensure that if there are at most $t=\lfloor (d-1)/2 \rfloor$ faults, the protocol described below will satisfy the fault-tolerance conditions of \cref{Def:FaultTolerantDef}.
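For a small code, a condition of this form can be verified directly with GF(2) linear algebra: the syndrome of a Pauli is its commutation vector with the generators, and two errors are logically equivalent if and only if their product lies in the stabilizer group. The illustrative Python checker below (not the verification code used here) enumerates the full stabilizer group, which is feasible only for small codes; the error sets themselves must be produced separately by enumerating faults in the chosen circuits. Paulis are represented as symplectic $(x \mid z)$ rows, with phases ignored.

```python
from itertools import combinations, product

def stabilizer_group(gens):
    """All GF(2) combinations of the symplectic generator rows (x | z)."""
    group = set()
    for bits in product([0, 1], repeat=len(gens)):
        v = [0] * len(gens[0])
        for b, g in zip(bits, gens):
            if b:
                v = [(a + c) % 2 for a, c in zip(v, g)]
        group.add(tuple(v))
    return group

def syndrome(gens, e):
    """Commutation vector of error e with each generator (symplectic form)."""
    n = len(e) // 2
    return tuple((sum(g[i] * e[n + i] for i in range(n))
                  + sum(g[n + i] * e[i] for i in range(n))) % 2
                 for g in gens)

def pairwise_condition(gens, errors):
    """Every pair of errors has distinct syndromes or differs by a stabilizer."""
    group = stabilizer_group(gens)
    for e1, e2 in combinations(errors, 2):
        prod = tuple((a + b) % 2 for a, b in zip(e1, e2))
        if syndrome(gens, e1) == syndrome(gens, e2) and prod not in group:
            return False
    return True
```

For instance, with the six Steane-code generators this check passes for the $Z$-part flag error set $\{I,Z_{q_1},Z_{q_4},Z_{q_3}Z_{q_4}\}$ but fails for the pair $\{I,\bar{Z}\}$, since $\bar{Z}$ has trivial syndrome without being a stabilizer.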
In order to state the protocol, we define an update rule given a sequence of syndrome measurements using $t$-flag circuits for the counters $n_{\text{diff}}$ and $n_{\text{same}}$ as follows (see also \cref{subsec:Distance5protocol} and the associated footnote):
\vspace{10px}
\fbox{\begin{minipage}{23em}
\underline{\textbf{Flag $t$-FTEC protocol -- update rules:}}
Given a sequence of consecutive syndrome measurement outcomes $s_{k}$ and $s_{k+1}$:
\begin{enumerate}
\item If $n_{\text{diff}}$ did not increase in the previous round, and $s_{k}\neq s_{k+1}$, increase $n_{\text{diff}}$ by one.
\item If a flag occurs, reset $n_{\text{same}}$ to zero.
\item If $s_{k} = s_{k+1}$, increase $n_{\text{same}}$ by one.
\end{enumerate}
\end{minipage}}
\vspace{10px}
\fbox{\begin{minipage}{23em}
\underline{\textbf{Flag $t$-FTEC protocol -- corrections:}}
Set $n_{\text{diff}}=0$ and $n_{\text{same}} = 0$.
Repeat the syndrome measurement using flag circuits until one of the following is satisfied:
\begin{enumerate}
\item The same syndrome $s$ is repeated $t-n_{\text{diff}}+1$ times in a row and there are no flags, apply the correction $E_{\text{min}}(s)$.
\item There were no flags and $n_{\text{diff}}=t$. Repeat the syndrome measurement using non-flag circuits yielding the syndrome $s$. Apply the correction $E_{\text{min}}(s)$.
\item Some set of $t$ circuits $\{ C(g_{i_{1}}), \cdots , C(g_{i_{t}}) \}$ have flagged. Repeat the syndrome measurement using non-flag circuits yielding syndrome $s$. Apply any correction from the set $\tilde{E}_{t}^{t}(g_{i_{1}},\cdots , g_{i_{t}},s)$.
\item Some set of $m$ circuits $\{ C(g_{i_{1}}), \cdots , C(g_{i_{m}}) \}$ have flagged with $1 \le m < t$ and $n_{\text{diff}} = t -m $. Repeat the syndrome measurement using non-flag circuits yielding syndrome $s$. Apply any correction from the set $\tilde{E}_{t}^{m}(g_{i_{1}},\cdots , g_{i_{m}},s)$.
\item Some set of $m$ circuits $\{ C(g_{i_{1}}), \cdots , C(g_{i_{m}}) \}$ have flagged with $1 \le m < t$; $n_{\text{diff}} <t-m$ and $n_{\text{same}} = t-m-n_{\text{diff}}+1$. Use the syndrome $s$ obtained during the last round and apply any correction from the set $\bigcup_{j=0}^{t-m-n_{\text{diff}}}\tilde{E}_{t}^{t-j-n_{\text{diff}}}(g_{i_{1}},\cdots ,g_{i_{m}}, s)$.
\end{enumerate}
\end{minipage}}
\vspace{10px}
In each case of the protocol, the correction sets correspond to those data errors which could arise from up to $t$ faults which are consistent with the conditions of the case. As the elements are logically equivalent (by \cref{eq:GeneralLookupTableV2,Def:FlagtFTECcondition}), which element is applied is unimportant.
For the protocol to satisfy the fault-tolerance criteria, the syndrome measurement needs to be repeated a minimum of $t+1$ times. In the scenario with the most syndrome measurement rounds, $t$ identical syndromes are obtained before a fault causes the $(t+1)$th syndrome to change (in which case $n_{\text{diff}}$ increases by one). Afterwards, the same syndrome is measured $t-1$ times in a row until another fault causes the syndrome to change. This continues until all $t$ possible faults have been exhausted. At this stage, $n_{\text{diff}}=t$, so an extra syndrome measurement round is performed using non-flag circuits. Thus the maximum number of syndrome measurement rounds $n_{\text{max}}$ is given by
\begin{align}
n_{\text{max}} = \sum_{j=0}^{t-1}(t-j) + t+1 = \frac{1}{2}(t^{2}+3t+2).
\label{Eq:Nmax}
\end{align}
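This count is easy to verify numerically; for $t=1$ it reproduces the (at most) three rounds of the flag 1-FTEC protocol, and for $t=2$ the at most six rounds of the flag 2-FTEC protocol.

```python
def n_max(t):
    """Maximum number of syndrome measurement rounds for flag t-FTEC:
    runs of t, t-1, ..., 1 identical syndromes, the t rounds whose
    syndrome changed, and a final non-flag round once n_diff = t."""
    return sum(t - j for j in range(t)) + t + 1

# Consistent with the closed form (t^2 + 3t + 2)/2 ...
assert all(n_max(t) == (t * t + 3 * t + 2) // 2 for t in range(1, 20))
# ... and with the special cases: 3 rounds for t = 1, 6 rounds for t = 2.
assert n_max(1) == 3 and n_max(2) == 6
```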
Note that a similar approach by repeating syndrome measurements is used for Shor error correction \cite{AGP06,Gottesman2010}. However, our scheme requires fewer syndrome measurement repetitions than is often described for Shor error correction and does not require the preparation and verification of a $w$-qubit cat state when measuring a stabilizer of weight-$w$. \footnote{One could also define update rules analogous to those for $n_{\text{diff}}$ and $n_{\text{same}}$ when implementing Shor-EC which would only require at most $\frac{1}{2}(t^{2}+3t+2)$ syndrome measurement repetitions as in the flag $t$-FTEC protocol.}
For codes that satisfy the flag $t$-FTEC condition, we also show in \cref{app:StatePrepAndMeasure} how to fault-tolerantly prepare and measure logical states using the flag $t$-FTEC protocol.
\subsection{Sufficient condition and satisfying code families}
\label{subsec:Remarks}
The general flag $t$-FTEC condition can be difficult to verify for a given code since it depends on precisely which $t$-flag circuits are used.
A sufficient (but not necessary) condition that implies the flag $t$-FTEC condition is as follows:
\textbf{Sufficient flag $t$-FTEC condition:}
Given a stabilizer code with distance $d>1$ and $\mathcal{S} = \langle g_{1},g_{2},\cdots , g_{r} \rangle$, we require that for all $v = 0,1,\dots, t$, all choices $Q_{t-v}$ of $2(t-v)$ qubits, and all subsets of $v$ stabilizer generators $\{ g_{i_1},\dots ,g_{i_v} \} \subset \{ g_{1},\cdots , g_{r} \}$, there is no logical operator $l \in N(\mathcal{S}) \setminus \mathcal{S}$ such that
\begin{align}
\text{supp}(l) \subset \text{supp}(g_{i_1}) \cup \cdots \cup \text{supp}(g_{i_v}) \cup Q_{t-v},
\end{align}
where $N(\mathcal{S})$ is the normalizer of the stabilizer group.
If this condition holds, then the flag $t$-FTEC condition is implied for any choice of $t$-flag circuits $\{ C(g_{1}),C(g_{2}), \cdots , C(g_{r}) \}$.
To prove this, we must show that it implies that none of the sets appearing in the $t$-FTEC condition contain elements that differ by a logical operator.
Consider the set $\bigcup_{j=0}^{t-m}\mathcal{E}_{t-j}(g_{i_{1}},\cdots ,g_{i_{m}})\times \mathcal{E}_{j}$ for some set of $m$ stabilizer generators $\{ g_{i_{1}},\cdots ,g_{i_{m}} \}$ with $1 \le m \le t$.
An error $E$ from this set will have support in the union of the support of the $m$ stabilizer generators $\{ g_{i_{1}},\cdots ,g_{i_{m}} \}$, along with up to $t-m$ other single qubits.
Another error $E'$ from this set will have support in the union of support of the same $m$ stabilizer generators $\{ g_{i_{1}},\cdots ,g_{i_{m}} \}$, along with up to $t-m$ other \textit{potentially different} single qubits.
If the sufficient condition holds, then $\text{supp}(E E')$ cannot contain a logical operator.
The sufficient flag $t$-FTEC condition is straightforward to verify for a number of code families with a lot of structure in their stabilizer generators and logical operators.
We briefly provide a few examples.
\textbf{Surface codes flag $t$-FTEC:}
The rotated surface code \cite{KITAEV97Surface,TS14,BK98,PhysRevLett.90.016803} family \codepar{d^2,1,d} for all odd $d=2t+1$ (see \cref{fig:surfacecodeproof}) satisfies the flag $t$-FTEC condition using any 4-flag circuits.
Firstly, by performing an exhaustive search, we verified that the circuit of \cref{fig:StabFTwithAncilla} is a 4-flag circuit.
As a CSS code, we can restrict our attention to purely $X$-type and $Z$-type logical operators.
An $X$ type logical operator must have at least one qubit in each of the $2t+1$ rows of the lattice shown.
However, each stabilizer only contains qubits in two different rows.
Therefore, with $v$ stabilizer generators, at most $2v$ of the rows could have support.
With an additional $2(t-v)$ qubits, at most $2t$ rows can be covered, which is fewer than the number of rows, and therefore no logical $X$ operator is supported on the union of the support of $v$ stabilizers and $2(t-v)$ qubits.
An analogous argument holds for $Z$-type logical operators, therefore the sufficient $t$-FTEC condition is satisfied.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{sufacecodeproof.png}
\caption{
The $d=5$ rotated surface code. Qubits are represented by white circles, and $X$ and $Z$ stabilizer generators are represented by red and green faces.
Any logical $X$ operator acts on at least five qubits, with at least one in each row of the lattice and an even number in any green face.
Since each stabilizer generator has qubits in only two rows, no two stabilizer generators can cover qubits in all five rows, and therefore their union cannot contain an $X$-type logical operator.
The argument is analogous for logical $Z$ operators.
}
\label{fig:surfacecodeproof}
\end{figure}
\textbf{Color codes flag $t$-FTEC:}
Here we show that any distance $d=(2t+1)$ self-dual CSS code with at most weight-6 stabilizer generators satisfies the flag $t$-FTEC condition using any 6-flag circuits (see \cref{fig:6flagCircuit} for an example).
Examples include the hexagonal color code \cite{Bombin06TopQuantDist} family \codepar{(3d^2+1)/4,1,d} (see \cref{fig:19qubitLatticeColor}).
As a self-dual CSS code, $X$ and $Z$ type stabilizer generators are identically supported and we can consider a pure $X$-type logical operator without loss of generality.
Consider an $X$ type logical operator $l$ such that
\begin{align}
\text{supp}(l) \subset \text{supp}(g_{i_1}) \cup \cdots \cup \text{supp}(g_{i_v}) \cup Q_{t-v},
\label{eq:SuppRepeated}
\end{align}
for some set of $v$ stabilizer generators $\{ g_{i_1},\dots ,g_{i_v} \} \subset \{ g_{1},\cdots , g_{r} \}$ along with $2(t-v)$ other qubits $Q_{t-v}$.
Restricted to the support of any of the $v$ stabilizers $g_i$, $l|_{g_i}$ must have weight 0, 2, 4, or 6 (otherwise it would anti-commute with the corresponding $Z$ type stabilizer).
If the restricted weight is 4 or 6, we can produce an equivalent lower weight logical operator $l' = g_i l$, which still satisfies \cref{eq:SuppRepeated}.
Repeating this procedure until the weight of the logical operator can no longer be reduced yields a logical operator $l_{\text{min}}$ which has weight either 0 or 2 when restricted to the support of any of the $v$ stabilizer generators.
The total weight of $l_{\text{min}}$ is then at most $2v+2(t-v) =2t$, which is less than the distance of the code, giving a contradiction which therefore implies that $l$ could not have been a logical operator.
An analogous argument holds for $Z$-type logical operators, therefore the sufficient $t$-FTEC condition is satisfied.
This proof can be easily extended to show that any distance $d=(2t+1)$ self-dual CSS code with at most weight-$2 v$ stabilizer generators for some integer $v$ satisfies the flag $t'$-FTEC condition using any $(v-1)$-flag circuits, where $t'= t/\lfloor v/2\rfloor$.
\textbf{Quantum Reed-Muller codes flag $1$-FTEC:}
The \codepar{n=2^m-1,k=1,d=3} quantum Reed-Muller code family for every integer $m\geq 3$ satisfies the flag 1-FTEC condition using any 1-flag circuits for the standard choice of generators.
We use the following facts about the Quantum Reed-Muller code family (see \cref{app:QRMcodes} and \cite{ADP14} for proofs of these facts): (1) The code is CSS, allowing us to restrict to pure $X$ type and pure $Z$ type logical operators, (2) all pure $X$ or $Z$ type logical operators have odd support, (3) every $X$-type stabilizer generator has the same support as some $Z$-type stabilizer generator, and (4) every $Z$-type stabilizer generator is contained within the support of an $X$ type generator.
We only need to prove the sufficient condition for $v=0,1$ in this case. For $v=0$, no two qubits can support a logical operator, as any logical operator has weight at least three.
For $v=1$, assume the support of an $X$-type stabilizer generator contains a logical operator $l$.
That logical operator $l$ cannot be $Z$ type or it would anti-commute with the $X$-stabilizer due to its odd support.
However, by fact (3), there is a $Z$ type stabilizer with the same support as the $X$ type stabilizer, therefore implying $l$ cannot be $X$ type either.
Therefore, by contradiction we conclude that no logical operator can be contained in the support of an $X$ stabilizer generator.
Since every other stabilizer generator is contained within the support of an $X$-type stabilizer generator, a logical operator cannot be contained in the support of any stabilizer generator.
Note that the Hamming code family has a stabilizer group which is a proper subgroup of that of the quantum Reed-Muller codes described here. The $X$-type generators of each Hamming code are the same as for a quantum Reed-Muller code, and the Hamming codes are self-dual CSS codes. It is clear that the sufficient condition cannot be applied to the Hamming code since it has even-weight $Z$-type logical operators (which are stabilizers for the quantum Reed-Muller code) supported within the support of some stabilizer generators.
\textbf{Codes which satisfy flag $t$-FTEC condition but not the sufficient flag $t$-FTEC condition:}
\begin{figure}
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{CircuitReichardt1.png}
\caption{}
\label{fig:CircuitReichardt1}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{CircuitReichardt2.png}
\caption{}
\label{fig:CircuitReichardt2}
\end{subfigure}
\caption{(a) A 1-flag circuit for measuring the stabilizer $Z_{8}Z_{9}Z_{10}Z_{11}Z_{12}Z_{13}Z_{14}Z_{15}$ of the \codepar{15,7,3} Hamming code. However, a single fault on the fourth or fifth CNOT can lead to the error $Z_{12}Z_{13}Z_{14}Z_{15}$ on the data, which is a logical fault. With the CNOT gates permuted as shown in (b), the \codepar{15,7,3} code satisfies the general flag 1-FTEC condition.}
\end{figure}
Note that there are codes which satisfy the general flag $t$-FTEC condition but not the sufficient condition presented in this section. An example of such a code is the \codepar{5,1,3} code (see \cref{tab:StabilizerGeneratorsLists} for the code's stabilizer generators and logical operators). Another example is the Hamming code family, as explained in the discussion of quantum Reed-Muller codes. For instance, consider the \codepar{15,7,3} Hamming code. Using the 1-flag circuit shown in \cref{fig:CircuitReichardt1}, the \codepar{15,7,3} code does not satisfy the general flag 1-FTEC condition since a single fault can lead to a logical error on the data. As was shown in \cite{CR17v1}, by permuting the CNOT gates to obtain the circuit illustrated in \cref{fig:CircuitReichardt2}, the flag 1-FTEC condition is satisfied.
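The failure of the sufficient condition for the \codepar{5,1,3} code can be exhibited concretely: multiplying $\bar{X}=XXXXX$ by the stabilizer $IXZZX$ yields $XIYYI$, which is supported entirely within the support of the generator $XZZXI$. The brute-force check below (illustrative Python in the symplectic representation, phases ignored) enumerates all stabilizer-equivalent representatives of the logical $X$ and finds one inside a single generator's support.

```python
from itertools import product

def pauli(s):
    """Symplectic (x | z) row of a Pauli string, phases ignored."""
    x = [1 if c in 'XY' else 0 for c in s]
    z = [1 if c in 'ZY' else 0 for c in s]
    return tuple(x + z)

def mul(p, q):
    """Product of two Paulis up to phase = GF(2) sum of symplectic rows."""
    return tuple((a + b) % 2 for a, b in zip(p, q))

def support(p):
    n = len(p) // 2
    return {i for i in range(n) if p[i] or p[n + i]}

gens = [pauli(g) for g in ('XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ')]
logical_x = pauli('XXXXX')

# All stabilizer-equivalent representatives of the logical X.
reps = []
for bits in product([0, 1], repeat=len(gens)):
    r = logical_x
    for b, g in zip(bits, gens):
        if b:
            r = mul(r, g)
    reps.append(r)

# Some representative fits inside the support of the first generator
# (qubits 0-3), so the sufficient condition fails already for v = 1.
inside = [r for r in reps if support(r) <= support(gens[0])]
assert inside
```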
\subsection{Circuits}
\label{app:GeneralTflaggedCircuitConstruction}
\begin{figure}
\centering
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=\textwidth]{6FlagCircuit.png}
\caption{}
\label{fig:6flagCircuit}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=\textwidth]{3flagWeight8.png}
\caption{}
\label{fig:3flagWeight8}
\end{subfigure}
\caption{(a) Illustration of a $w$-flag circuit for measuring the operator $Z^{\otimes w}$ where $w=6$ using the smallest number of flag qubits. (b) Illustration of a 3-flag circuit for measuring $Z^{\otimes 8}$ using the smallest number of flag qubits.}
\label{fig:ExamplesOfLargeFlagCircuits}
\end{figure}
In \cref{subsec:Remarks} we showed that the family of surface codes, color codes with a hexagonal lattice and quantum Reed-Muller codes satisfied a sufficient condition allowing them to be used in the flag $t$-FTEC protocol. Along with the general 1-flag circuit construction of \cref{fig:General1FlagCircuitSecCircuit}, the $6$-flag circuit for measuring $Z^{\otimes 6}$ of \cref{fig:6flagCircuit} can be used as $t$-flag circuits for all of the codes in \cref{subsec:Remarks}. Note that the circuit in \cref{fig:StabFTwithAncilla} (which is a special case of \cref{fig:General1FlagCircuitSecCircuit} when $w=4$) is a 4-flag circuit which is used for measuring $Z^{\otimes 4}$.
Before describing general 1- and 2-flag circuit constructions, we give the following two definitions which we will frequently use: Any CNOT that couples a data qubit to the measurement qubit will be referred to as $\text{CNOT}_{dm}$ and any CNOT coupling a measurement qubit to a flag qubit will be referred to as $\text{CNOT}_{fm}$. In both cases the target qubit will always be the measurement qubit.
\textbf{1- and 2-flag circuits for weight $w$ stabilizer measurements:}
We provide 1- and 2-flag circuit constructions for measuring a weight-$w$ stabilizer.
The 1-flag circuit requires a single flag qubit, and the 2-flag circuit requires at most four flag qubits.
\begin{figure}
\centering
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=\textwidth]{General1FlagCircuitSecCircuit.png}
\caption{}
\label{fig:General1FlagCircuitSecCircuit}
\end{subfigure}
\begin{subfigure}{0.43\textwidth}
\includegraphics[width=\textwidth]{General2FlagCircuitSecCircuit.png}
\caption{}
\label{fig:General2FlagCircuitSecCircuit}
\end{subfigure}
\begin{subfigure}{0.43\textwidth}
\includegraphics[width=\textwidth]{General2FlagCircuitSecCircuitCompressed.png}
\caption{}
\label{fig:General2FlagCircuitSecCircuitCompressed}
\end{subfigure}
\caption{(a) General 1-flag circuit for measuring the stabilizer $Z^{\otimes w}$. (b) Example of a 2-flag circuit for measuring $Z^{\otimes 12}$ using our general 2-flag circuit construction. (c) An equivalent circuit using fewer flag qubits by reusing a measured flag qubit and reinitializing it in the $\ket{+}$ state for use in another pair of $\text{CNOT}_{\text{fm}}$ gates.}
\label{fig:Generall1And2FlagCircuits}
\end{figure}
Without loss of generality, in proving that the circuit constructions described below are 1- and 2-flag circuits, we can assume that all faults occurred on CNOT gates. This is because any set of $v$ faults (including those at idle, preparation or measurement locations) will have the same output Pauli operator and flag measurement results as some set of at most $v$ faults on CNOT gates (since every qubit is involved in at least one CNOT).
As was shown in Ref.~\cite{CR17v1}, \cref{fig:General1FlagCircuitSecCircuit} illustrates a general 1-flag circuit construction for measuring the stabilizer $Z^{\otimes w}$ which requires only two $\text{CNOT}_{\text{fm}}$ gates. To see that the first construction is a 1-flag circuit, note that an $IZ$ error occurring on any CNOT will give rise to a flag unless it occurs on the first or last $\text{CNOT}_{\text{dm}}$ gates or the last $\text{CNOT}_{\text{fm}}$ gate. However, such a fault on any of these three gates can give rise to an error of weight at most one (after multiplying by the stabilizer $Z^{\otimes w}$). One can also verify that if there are no faults, the circuit in \cref{fig:General1FlagCircuitSecCircuit} implements a projective measurement of $Z^{\otimes w}$ without flagging. Following the approach in \cite{LAR11}, one simply needs to check that the circuit preserves the stabilizer group generated by $Z^{\otimes w}$ and $X$ on each ancilla prepared in the $\ket{+}$ state and $Z$ on each ancilla prepared in the $\ket{0}$ state. By using pairs of $\text{CNOT}_{\text{fm}}$ gates, this construction satisfies the requirement.
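The propagation argument above can be checked mechanically. The following is a minimal sketch (our own illustration, not code from Ref.~\cite{CR17v1}); it assumes the gate ordering described above on the measurement qubit and tracks how a single $Z$ fault propagates to the data (via later $\text{CNOT}_{\text{dm}}$ gates, where the data qubits act as controls) and to the flag qubit (via later $\text{CNOT}_{\text{fm}}$ gates):

```python
def one_flag_check(w):
    # gate sequence on the measurement qubit in the general 1-flag circuit:
    # CNOT_dm(1), CNOT_fm, CNOT_dm(2..w-1), CNOT_fm, CNOT_dm(w)
    gates = ["dm", "fm"] + ["dm"] * (w - 2) + ["fm", "dm"]
    for pos in range(len(gates) + 1):            # Z fault placed before gates[pos]
        later = gates[pos:]                      # gates acting after the fault
        weight = later.count("dm")               # Z is copied to each later data control
        weight = min(weight, w - weight)         # reduce modulo the stabilizer Z^{\otimes w}
        flagged = later.count("fm") % 2 == 1     # each later CNOT_fm flips the flag qubit
        if not flagged:
            assert weight <= 1                   # 1-flag property: no flag => weight <= 1
    return True

assert one_flag_check(6)
```

Running this for several even weights confirms that every single $Z$ fault either raises the flag or leaves a data error of weight at most one after stabilizer reduction.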
We now give a general 2-flag circuit construction for measuring $Z^{\otimes w}$ for arbitrary $w$ (see \cref{fig:General2FlagCircuitSecCircuit} for an example). The circuit consists of pairs of $\text{CNOT}_{\text{fm}}$ gates each connected to a different flag qubit prepared in the $\ket{+}$ state and measured in the $X$ basis. The general 2-flag circuit construction involves the following placement of $w/2-1$ pairs of $\text{CNOT}_{\text{fm}}$ gates:
\begin{enumerate}
\item Place a $\text{CNOT}_{\text{fm}}$ pair between the first and second-to-last $\text{CNOT}_{\text{dm}}$ gates.
\item Place a $\text{CNOT}_{\text{fm}}$ pair between the second and last $\text{CNOT}_{\text{dm}}$ gates.
\item After the second $\text{CNOT}_{\text{fm}}$ gate, place the first $\text{CNOT}_{\text{fm}}$ gate of the remaining pairs after every two $\text{CNOT}_{\text{dm}}$ gates. The second $\text{CNOT}_{\text{fm}}$ gate of a pair is placed after every three $\text{CNOT}_{\text{dm}}$ gates.
\end{enumerate}
As shown in \cref{fig:General2FlagCircuitSecCircuitCompressed}, it is possible to reuse some flag qubits to measure multiple pairs of $\text{CNOT}_{\text{fm}}$ gates at the cost of introducing extra time steps into the circuit. For this reason, at most four flag qubits will be needed; however, if $w \le 8$, then $w/2-1$ flag qubits are sufficient.
We now show that the above construction satisfies the requirements of a 2-flag circuit. If one CNOT gate fails, then by an argument analogous to that used for the 1-flag circuit, there will either be a flag or an error of weight at most one on the data. If the first pair of $\text{CNOT}_{\text{fm}}$ gates fails without causing any flag qubit to flag, the resulting error $E_{r}$, after multiplying the data qubits by $Z^{\otimes w}$, satisfies $\text{wt}(E_{r}) \le 2$. For any other pair of $\text{CNOT}_{\text{fm}}$ gates that fails causing an error of weight greater than two on the data, by construction there will always be another $\text{CNOT}_{\text{fm}}$ gate between the two failed gates which propagates a $Z$ error to a flag qubit, causing it to flag. Similarly, if a pair of $\text{CNOT}_{\text{dm}}$ gates fails resulting in a data error $E_{r}$ with $\text{wt}(E_{r}) \ge 2$, by construction an odd number of $Z$ errors will propagate to some flag qubit through the $\text{CNOT}_{\text{fm}}$ gates between the failed $\text{CNOT}_{\text{dm}}$ gates, causing that flag qubit to flag. The same argument applies if a failure occurs between a $\text{CNOT}_{\text{dm}}$ and a $\text{CNOT}_{\text{fm}}$ gate.
Lastly, a proposed general $w$-flag circuit construction for arbitrary $w$ is provided in \cref{App:GeneralwFlagCircuitConstruction}.
\textbf{Use of flag information:}
As seen in \cref{fig:6flagCircuit,fig:3flagWeight8,fig:General2FlagCircuitSecCircuit,fig:General2FlagCircuitSecCircuitCompressed}, in general $t$-flag circuits require more than one flag qubit. Apart from their use in satisfying the $t$-flag circuit properties, the extra flag qubits could be used to reduce the size of the flag error sets (defined in \cref{Def:FlagErrSetDef}) when verifying the flag $t$-FTEC condition of \cref{app:GeneralFTEC}. To do so, we first define $f$, where $f$ is a bit string of length $u$ (here $u$ is the number of flag qubits) with $f_{i} = 1$ if the $i$-th flag qubit flagged and $f_{i} = 0$ otherwise. In this case, the correction set of \cref{eq:GeneralLookupTableV2} can be modified to include flag information as follows:
\begin{align}
&\tilde{E}_{t}^{m}(g_{i_{1}},\cdots , g_{i_{k}},s,f_{i_{1}},\cdots ,f_{i_{k}}) = \nonumber \\
&\begin{cases}
\{ E \in \mathcal{E}_{m}(g_{i_{1}},\cdots , g_{i_{k}},f_{i_{1}},\cdots ,f_{i_{k}}) \times \mathcal{E}_{t-m} \\
\text{ such that } s(E) = s \} \\
\{ E_{\text{min}}(s) \} \text{ if the above set is empty,}
\end{cases}
\label{eq:CorrectionSetWithFlagInfo}
\end{align}
where $\mathcal{E}_{m}(g_{i_{1}},\cdots , g_{i_{k}},f_{i_{1}},\cdots,f_{i_{k}})$ is the new flag error set containing only errors caused by precisely $m$ faults spread amongst the circuits $C(g_{i_{1}}),C(g_{i_{2}}), \cdots , C(g_{i_{k}})$ which each gave rise to the flag outcomes $f_{i_{1}},\cdots,f_{i_{k}}$.
Hence only errors which result from the measured flag outcome would be stored in the correction set. With enough flag qubits, this could potentially broaden the family of codes which satisfy the Flag $t$-FTEC condition.
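A lookup table in the spirit of \cref{eq:CorrectionSetWithFlagInfo} can be sketched as follows. This is a hypothetical illustration (the data structures and the toy code below are ours, not the authors' implementation): the table is keyed by both the syndrome $s$ and the flag outcome $f$, with a fall-back to a minimum-weight correction when no stored error matches.

```python
def build_table(flag_error_sets, syndrome_of):
    """flag_error_sets: maps a flag outcome f (a bit string) to the list of
    data errors that faults producing that flag pattern can leave behind."""
    table = {}
    for f, errors in flag_error_sets.items():
        for err in errors:
            table.setdefault((syndrome_of(err), f), []).append(err)
    return table

def correction_set(table, s, f, min_weight_correction):
    # fall back to a minimum-weight correction if the stored set is empty
    return table.get((s, f)) or [min_weight_correction(s)]

# toy example: 3-qubit repetition code, X errors stored as frozensets of
# qubit indices, Z-type stabilizers Z_1 Z_2 and Z_2 Z_3
syndrome = lambda e: (len(e & {0, 1}) % 2, len(e & {1, 2}) % 2)
flag_sets = {"0": [frozenset({0}), frozenset({1}), frozenset({2})],
             "1": [frozenset({0, 1})]}
table = build_table(flag_sets, syndrome)

# the same syndrome (0, 1) is decoded differently depending on the flag
assert correction_set(table, (0, 1), "1", lambda s: frozenset()) == [frozenset({0, 1})]
assert correction_set(table, (0, 1), "0", lambda s: frozenset()) == [frozenset({2})]
```

The toy example shows the intended benefit: with flag information, a weight-two error and a weight-one error with the same syndrome are no longer confused.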
\section{Circuit level noise analysis}
\label{sec:CircuitLevelNoiseFTEC}
The purpose of this section is to demonstrate the flag 2-FTEC protocol explicitly, and to identify parameter regimes in which the flag FTEC schemes presented both here and in other works offer advantages over existing FTEC schemes.
In \cref{subsec:Numerics19} we analyze the logical failure rates of the \codepar{19,1,5} color code and compute its pseudo-threshold for the three choices of $\tilde{p}$. In \cref{subsec:CompareFlagecschemes} we compare logical failure rates of several fault-tolerant error correction schemes applied to distance-three and distance-five stabilizer codes. The stabilizers for all of the studied codes are given in \cref{tab:StabilizerGeneratorsLists}. Logical failure rates are computed using the full circuit level noise model and simulation methods described in \cref{subsec:NoiseAndNumerics}.
\subsection{Numerical analysis of the \codepar{19,1,5} color code}
\label{subsec:Numerics19}
The full circuit-level noise analysis of the flag 2-FTEC protocol applied to the \codepar{19,1,5} color code was performed using the stabilizer measurement circuits of \cref{fig:StabFTwithAncilla,fig:WeightSixGenerators}.
In the weight-six stabilizer measurement circuit of \cref{fig:WeightSixGenerators}, there are 10 CNOT gates, three measurement and state-preparation locations, and 230 resting qubit locations. When measuring all stabilizer generators using non-flag circuits, there are 42 CNOT and 42 XNOT gates, 18 measurement and state-preparation locations, and 2196 resting qubit locations. Consequently, we expect the error suppression capabilities of the flag EC scheme to depend strongly on the number of idle qubit locations.
\begin{table}[t]
\begin{tabular}{ c|c}
three-qubit flag EC & pseudo-threshold \\ \hline
\codepar{19,1,5} and $\tilde{p}=p$ & $p_{\mathrm{pseudo}} = (1.14 \pm 0.02) \times 10^{-5}$ \\
\codepar{19,1,5} and $\tilde{p}=\frac{p}{10}$ & $p_{\mathrm{pseudo}} = (6.70 \pm 0.07) \times 10^{-5}$ \\
\codepar{19,1,5} and $\tilde{p}=\frac{p}{100}$ & $p_{\mathrm{pseudo}} = (7.74 \pm 0.16) \times 10^{-5}$ \end{tabular}
\caption{Table containing pseudo-threshold values for the flag 2-FTEC protocol applied to the \codepar{19,1,5} color code for $\tilde{p}=p$, $\tilde{p}=p/10$ and $\tilde{p}=p/100$.}
\label{tab:PseudoThresholAllThree1915}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{CBplotsAllThreeCurves.png}
\caption{Logical failure rates of the \codepar{19,1,5} color code after implementing the flag 2-FTEC protocol presented in \cref{subsec:Distance5protocol} for the three noise models described in \cref{subsec:NoiseAndNumerics}. The dashed curves represent the lines $\tilde{p}=p$, $\tilde{p}=p/10$ and $\tilde{p}=p/100$. The crossing point between $\tilde{p}$ and the curve corresponding to $p^{(\codepar{19,1,5})}_{L}(\tilde{p})$ in \cref{Def:PseudoThreshDef} gives the pseudo-threshold.}
\label{fig:PseudoThreshPlots19ColorAllThree}
\end{figure}
Pseudo-thresholds of the \codepar{19,1,5} code were obtained using the methods of \cref{subsec:NoiseAndNumerics}. Recall that for extending the lifetime of a qubit (when idle qubit locations fail with probability $\tilde{p}$), the probability of failure after implementing an FTEC protocol should be smaller than $\tilde{p}$. We calculated the pseudo-threshold using \cref{Def:PseudoThreshDef} for the three cases where idle qubits failed with probability $\tilde{p}=p$, $\tilde{p}=p/10$ and $\tilde{p}=p/100$. The results are shown in \cref{tab:PseudoThresholAllThree1915}.
The logical failure rates for the three noise models are shown in \cref{fig:PseudoThreshPlots19ColorAllThree}. It can be seen that when the probability of error on a resting qubit decreases from $p$ to $p/10$, the pseudo-threshold improves by nearly a factor of six, showing the strong dependence of the scheme on the probability of failure of idle qubits.
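The crossing-point computation behind these pseudo-thresholds can be sketched numerically. The following is a minimal illustration with made-up coefficients (not the paper's fitted curves): it solves $p_{L}(p) = \tilde{p}$ with $\tilde{p} = p/r$ by bisection, as in \cref{Def:PseudoThreshDef}.

```python
def pseudo_threshold(p_logical, r, lo=1e-8, hi=1e-1, iters=60):
    """Solve p_logical(p) = p / r by bisection, assuming f(p) = p_logical(p) - p/r
    changes sign exactly once on [lo, hi]."""
    f = lambda p: p_logical(p) - p / r
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0.0:
            hi = mid          # sign change in the lower half
        else:
            lo = mid          # sign change in the upper half
    return (lo + hi) / 2.0

# toy cubic scaling p_L = c * p^3 (the leading order for a distance-5 scheme),
# with an illustrative coefficient c = 1e8; the crossing with p/r, r = 1,
# then sits at p = 1e-4
p_th = pseudo_threshold(lambda p: 1e8 * p ** 3, r=1)
assert abs(p_th - 1e-4) < 1e-9
```

In practice the fitted $p_{L}(p)$ curve and the value of $r$ ($1$, $10$ or $100$) would come from the simulations described in \cref{subsec:NoiseAndNumerics}.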
\subsection{Comparison of flag 1- and 2-FTEC with other FTEC schemes}
\label{subsec:CompareFlagecschemes}
\begin{figure*}[t!]
\begin{subfigure}{0.33\textwidth}
\begin{center}
\includegraphics[width=\textwidth]{Distance3pPlots.png}
\caption{}
\includegraphics[width=\textwidth]{Distance5PlotCombinationsp.png}
\caption{}
\end{center}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\textwidth]{Distance3p10Plots.png}
\caption{}
\includegraphics[width=\textwidth]{Distance5PlotCombinationsp10.png}
\caption{}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\textwidth]{Distance3p100Plots.png}
\caption{}
\includegraphics[width=\textwidth]{Distance5PlotCombinationsp100.png}
\caption{}
\end{subfigure}\hfill
\caption{Logical failure rates for various fault-tolerant error correction methods applied to the \codepar{5,1,3} code, \codepar{7,1,3} Steane code and the \codepar{19,1,5} color code. The dashed curves correspond to the lines $\tilde{p}=p$, $\tilde{p}=p/10$ and $\tilde{p}=p/100$. In (a), (c) and (e), the flag 1-FTEC protocol is applied to the \codepar{5,1,3} and Steane code and the results are compared with the $d=3$ surface code and Steane error correction applied to the Steane code. In (b), (d) and (f), the flag 2-FTEC protocol is applied to the \codepar{19,1,5} color code, and the results are compared with the $d=5$ surface code and Steane error correction applied to the \codepar{19,1,5} color code.
These numerical results suggest the following choices among the schemes we consider for extending the fidelity of a qubit. (1) If $7 \le n \le 16$, only the 5- and 7-qubit codes with flag 1-FTEC are accessible. However, the performance is much worse than higher-qubit alternatives unless $\tilde{p}/p$ is small. (2) For $17 \le n \le 34$, the $d=3$ surface code seems most promising, unless $\tilde{p}/p$ is small, in which case flag 2-FTEC with the 19-qubit code should be better. (3) For $35 \le n \le 48$, Steane EC applied to distance-three codes is better than all other approaches studied, except for very low $p$, where flag 2-FTEC should be better due to its ability to correct two rather than just one fault. (4) For $n \ge 49$, the $d=5$ surface code is expected to perform better than the other alternatives below pseudo-threshold.}
\label{fig:AllComparisonPlotsCombined}
\end{figure*}
The most promising schemes for testing fault-tolerance in near term quantum devices are those which achieve high pseudo-thresholds while maintaining a low qubit overhead. The flag FTEC protocol presented in this paper uses fewer qubits compared to other well known fault-tolerance schemes but typically has increased circuit depth. In this section we apply the flag FTEC protocol of \cref{subsec:ReviewChaoReichardt,subsec:Distance5protocol} to the \codepar{5,1,3}, \codepar{7,1,3} and \codepar{19,1,5} codes. We compare logical failure rates for three values of $\tilde{p}$ with Steane error correction applied to the \codepar{7,1,3} and \codepar{19,1,5} codes and with the $d=3$ and $d=5$ rotated surface code. More details on Steane error correction and surface codes are provided in \cref{app:SurfaceECSection,app:SteaneECSection}. Note that recent work by Goto has provided optimizations to prepare Steane ancillas \cite{Goto16}. However, our numerical results for Steane-EC were produced using the methods presented in \cref{app:SteaneECSection}.
Results of the logical failure rates for $\tilde{p}=p$, $\tilde{p}=p/10$ and $\tilde{p}=p/100$ are shown in \cref{fig:AllComparisonPlotsCombined}. Various pseudo-thresholds and required time-steps for the considered fault-tolerant error correction methods are given in \cref{tab:PseudoThresholdsAllECschemesD3,tab:PseudoThresholdsAllECschemesD5}.
The circuits for measuring the stabilizers of the 5-qubit code were similar to the ones used in \cref{fig:StabFTwithAncilla} (for an $X$ Pauli replace the CNOT by an XNOT). For flag-FTEC methods, it can be seen that the \codepar{5,1,3} code always achieves lower logical failure rates than the \codepar{7,1,3} code. However, when $\tilde{p}=p$, both the $d=3$ surface code and Steane-EC achieve lower logical failure rates (with Steane-EC achieving the best performance). For $\tilde{p}=p/10$, flag-EC applied to the \codepar{5,1,3} code achieves nearly identical logical failure rates to the $d=3$ surface code. For $\tilde{p}=p/100$, flag 1-FTEC applied to the \codepar{5,1,3} code achieves lower logical failure rates than the $d=3$ surface code but still higher logical failure rates than Steane-EC.
\begin{table*}[!t]
\begin{tabular}{ c|c|c|c|c}
FTEC scheme & Noise model & Number of qubits & Time steps ($T_{\mathrm{time}}$) &Pseudo-threshold \\ \hline
Flag-EC \codepar{5,1,3} & $\tilde{p} = p$ & $7$ & $64 \le T_{\mathrm{time}} \le 88$ & $p_{\mathrm{pseudo}} = (7.09 \pm 0.03) \times 10^{-5}$ \\
Flag-EC \codepar{7,1,3} & & $9$ & $72 \le T_{\mathrm{time}} \le 108$ & $p_{\mathrm{pseudo}} = (3.39 \pm 0.10) \times 10^{-5}$ \\
$d=3$ Surface code & & 17 & $\ge 18$ & $p_{\mathrm{pseudo}} = (3.29 \pm 0.16) \times 10^{-4}$ \\
Steane-EC \codepar{7,1,3} & &$\ge 35$ & 15 & $p_{\mathrm{pseudo}} = (6.29 \pm 0.13) \times 10^{-4}$ \\ \hline
Flag-EC \codepar{5,1,3} & $\tilde{p} = p/10$ & $7$ & $64 \le T_{\mathrm{time}} \le 88$ & $p_{\mathrm{pseudo}} = (1.11 \pm 0.02) \times 10^{-4}$ \\
Flag-EC \codepar{7,1,3} & & $9$ & $72 \le T_{\mathrm{time}} \le 108$ & $p_{\mathrm{pseudo}} = (8.68 \pm 0.15) \times 10^{-5}$ \\
$d=3$ Surface code & & 17 & $\ge 18$ & $p_{\mathrm{pseudo}} = (1.04 \pm 0.02) \times 10^{-4}$ \\
Steane-EC \codepar{7,1,3} & &$\ge 35$ & 15 & $p_{\mathrm{pseudo}} = (3.08 \pm 0.01) \times 10^{-4}$ \\ \hline
Flag-EC \codepar{5,1,3} & $\tilde{p} = p/100$ & $7$ & $64 \le T_{\mathrm{time}} \le 88$ & $p_{\mathrm{pseudo}} = (2.32 \pm 0.03) \times 10^{-5}$ \\
Flag-EC \codepar{7,1,3} & & $9$ & $72 \le T_{\mathrm{time}} \le 108$ & $p_{\mathrm{pseudo}} = (1.41 \pm 0.05) \times 10^{-5}$ \\
$d=3$ Surface code & & 17 & $\ge 18$ & $p_{\mathrm{pseudo}} = (1.37 \pm 0.03) \times 10^{-5}$ \\
Steane-EC \codepar{7,1,3} & &$\ge 35$ & 15 & $p_{\mathrm{pseudo}} = (3.84 \pm 0.01) \times 10^{-5}$ \\
\end{tabular}
\caption{Distance-three pseudo-threshold results for various FTEC protocols and noise models applied to the \codepar{5,1,3}, \codepar{7,1,3} and $d=3$ rotated surface code. We also include the number of time steps required to implement the protocols.}
\label{tab:PseudoThresholdsAllECschemesD3}
\end{table*}
We also note that the pseudo-threshold increases when $\tilde{p}$ goes from $p$ to $p/10$ for both the \codepar{5,1,3} and \codepar{7,1,3} codes when implemented using the flag 1-FTEC protocol. This is primarily due to the large circuit depth in flag-EC protocols, since idle qubit locations significantly outnumber other gate locations. For the surface code, the opposite behaviour is observed. As was shown in \cite{FMMC12}, CNOT gate failures have the largest impact on the pseudo-threshold of the surface code. Thus, when idle qubits have lower failure probability, lower physical error rates will be required in order to achieve better logical failure rates. For instance, if idle qubits never failed, then performing error correction would be guaranteed to \textit{increase} the probability of failure due to the non-zero failure probability of other types of locations (CNOT, measurements and state-preparation). Lastly, the pseudo-threshold for Steane-EC also decreases with lower idle qubit failure rates, but the change in pseudo-threshold is not as large as for the surface code. This is primarily due to the fact that all CNOT gates are applied transversally in Steane-EC, so that the pseudo-threshold is not as sensitive to CNOT errors as that of the surface code. Furthermore, most high-weight errors arising during the state-preparation of the logical ancillas will be detected (see \cref{app:SteaneECSection}). Hence, idle qubit errors play a larger role than in the surface code, but Steane-EC has fewer idle qubit locations compared to flag-EC (see \cref{tab:PseudoThresholdsAllECschemesD3} for the circuit depths of all schemes).
\begin{table*}
\begin{tabular}{ c|c|c|c|c}
FTEC scheme & Noise model & Number of qubits & Time steps ($T_{\mathrm{time}}$) &Pseudo-threshold \\ \hline
Flag-EC \codepar{19,1,5} & $\tilde{p} = p$ & $22$ & $504 \le T_{\mathrm{time}} \le 960$ & $p_{\mathrm{pseudo}} = (1.14 \pm 0.02) \times 10^{-5}$ \\
$d=5$ Surface code & & 49 & $\ge 18$ & $p_{\mathrm{pseudo}} = (9.41 \pm 0.17) \times 10^{-4}$ \\
Steane-EC \codepar{19,1,5} & &$\ge 95$ & 15 & $p_{\mathrm{pseudo}} = (1.18 \pm 0.02) \times 10^{-3}$ \\ \hline
Flag-EC \codepar{19,1,5} & $\tilde{p} = p/10$ & $22$ & $504 \le T_{\mathrm{time}} \le 960$ & $p_{\mathrm{pseudo}} = (6.70 \pm 0.07) \times 10^{-5}$ \\
$d=5$ Surface code & & 49 & $\ge 18$ & $p_{\mathrm{pseudo}} = (7.38 \pm 0.22) \times 10^{-4}$ \\
Steane-EC \codepar{19,1,5} & &$\ge 95$ & 15 & $p_{\mathrm{pseudo}} = (4.42 \pm 0.27) \times 10^{-4}$ \\ \hline
Flag-EC \codepar{19,1,5} & $\tilde{p} = p/100$ & $22$ & $504 \le T_{\mathrm{time}} \le 960$ & $p_{\mathrm{pseudo}} = (7.74 \pm 0.16) \times 10^{-5}$ \\
$d=5$ Surface code & & 49 & $\ge 18$ & $p_{\mathrm{pseudo}} = (2.63 \pm 0.18) \times 10^{-4}$ \\
Steane-EC \codepar{19,1,5} & &$\ge 95$ & 15 & $p_{\mathrm{pseudo}} = (5.60 \pm 0.43) \times 10^{-5}$ \\
\end{tabular}
\caption{Distance-five pseudo-threshold results for various FTEC protocols and noise models applied to the \codepar{19,1,5} color code and $d=5$ rotated surface code. We also include the number of time steps required to implement the protocols.}
\label{tab:PseudoThresholdsAllECschemesD5}
\end{table*}
Although Steane-EC achieves the lowest logical failure rates compared to the other fault-tolerant error correction schemes, it requires a minimum of 35 qubits (more details are provided in \cref{app:SteaneECSection}). In contrast, the $d=3$ surface code requires 17 qubits, and flag 1-FTEC applied to the \codepar{5,1,3} code requires only 7 qubits. Therefore, if the probability of idle qubit errors is much lower than gate, state preparation and measurement errors, flag-FTEC methods could be good candidates for early fault-tolerant experiments.
It is important to keep in mind that for the flag 1-FTEC protocol applied to the distance-three codes considered in this section, the same ancilla qubits are used to measure all stabilizers. A more parallelized version of flag-FTEC applied to the \codepar{7,1,3} code using four ancilla qubits instead of two is considered in \cref{app:CompactRepFlagQubit}.
In computing the number of time steps required by the flag $t$-FTEC protocols, a lower bound is given in the case where there are no flags and the same syndrome is repeated $t+1$ times. In \cref{app:GeneralFTEC} it was shown that the full syndrome measurement for flag-FTEC is repeated at most $\frac{1}{2}(t^{2} + 3t + 2)$ times where $t = \lfloor (d-1)/2 \rfloor$. An upper bound on the total number of required time steps is thus obtained from a worst case scenario where syndrome measurements are repeated $\frac{1}{2}(t^{2} + 3t + 2)$ times.
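These repetition bounds are simple enough to tabulate. A small sketch (our own, using the formulas quoted above) computes the minimum number of rounds ($t+1$, when there are no flags and the syndrome repeats) and the worst-case number of rounds for a distance-$d$ code:

```python
def min_rounds(d):
    # best case: no flags and the same syndrome repeated t + 1 times
    return (d - 1) // 2 + 1

def max_rounds(d):
    # worst case: (t^2 + 3t + 2)/2 repetitions, with t = floor((d - 1)/2)
    t = (d - 1) // 2
    return (t * t + 3 * t + 2) // 2

assert (min_rounds(3), max_rounds(3)) == (2, 3)   # distance-3 codes
assert (min_rounds(5), max_rounds(5)) == (3, 6)   # distance-5 codes
```

Multiplying these round counts by the per-round circuit depth gives bounds of the kind listed in \cref{tab:PseudoThresholdsAllECschemesD3,tab:PseudoThresholdsAllECschemesD5}.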
For distance-five codes, the first thing to notice from \cref{fig:AllComparisonPlotsCombined} is that the slopes of the logical failure rate curves of flag-EC applied to the \codepar{19,1,5} code and the $d=5$ surface code differ from those of Steane-EC applied to the \codepar{19,1,5} code. In particular, $p_{\text{L}} = cp^{3} + \mathcal{O}(p^{4})$ for flag-EC and the surface code whereas $p_{\text{L}} = c_{1}p^{2} + c_{2}p^{3} + \mathcal{O}(p^{4})$ for Steane-EC ($c$, $c_{1}$ and $c_{2}$ are constants that depend on the code and FTEC method). The reason that Steane-EC has non-zero $\mathcal{O}(p^{2})$ contributions to the logical failure rates is that there are instances where errors occurring at two different locations can lead to a logical fault. Consequently, the Steane-EC method that was used is not strictly fault-tolerant according to \cref{Def:FaultTolerantDef}. In \cref{app:SteaneECSection}, more details on the fault-tolerant properties of Steane-EC are provided and a fully fault-tolerant implementation of Steane-EC is analyzed (at the cost of using more qubits).
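The different leading orders can be read off as slopes on a log-log plot. The following sketch uses synthetic data with illustrative coefficients (not fits from the paper) to show how a least-squares fit of $\log p_{\text{L}}$ versus $\log p$ distinguishes $p^{3}$ from $p^{2}$ scaling at small $p$:

```python
import math

def loglog_slope(ps, pLs):
    # least-squares slope of log(p_L) versus log(p)
    xs = [math.log(p) for p in ps]
    ys = [math.log(q) for q in pLs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

ps = [1e-5, 2e-5, 4e-5, 8e-5]
slope_flag = loglog_slope(ps, [1e8 * p ** 3 for p in ps])    # p^3 scaling
slope_steane = loglog_slope(ps, [50.0 * p ** 2 for p in ps])  # p^2 scaling
assert abs(slope_flag - 3) < 1e-6
assert abs(slope_steane - 2) < 1e-6
```

For real data, subleading terms and statistical error bars shift the fitted slope slightly, which is why the asymptotic scaling is only visible well below pseudo-threshold.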
For $d=5$, the surface code achieves significantly lower logical failure rates than all other distance-five schemes but uses 49 qubits instead of the 22 needed for the \codepar{19,1,5} code. Furthermore, due to the differences in the slopes of the flag 2-FTEC protocol compared with Steane-EC applied to the \codepar{19,1,5} code, there is a regime where flag 2-FTEC achieves lower logical failure rates than Steane-EC. For $\tilde{p}=p/100$, it can be seen in \cref{fig:AllComparisonPlotsCombined} that this regime occurs when $p \lesssim 10^{-4}$. We also note that the pseudo-threshold of flag-EC applied to the \codepar{19,1,5} color code increases for all noise models whereas the pseudo-threshold decreases for the other FTEC schemes. Again, this is because flag-EC has a larger circuit depth than the other FTEC methods and is thus more sensitive to idle qubit errors.
Comparing the flag 2-FTEC protocol (applied to the \codepar{19,1,5} color code) to all the $d=3$ schemes that were considered in this section, due to the higher distance of the 19-qubit code, there will always be a parameter regime where the 19-qubit color code achieves lower logical failure rates than both the $d=3$ surface code and Steane-EC applied to the \codepar{7,1,3} code. In the case where $\tilde{p}=p/100$ and with $p \lesssim 1.5 \times10^{-4}$, using flag error correction with only 22 qubits outperforms Steane error correction (which uses a minimum of 35 qubits) and the $d=3$ rotated surface code (which uses 17 qubits).
Note the considerable number of time steps involved in a round of flag-EC, particularly in the $d=5$ case (see \cref{tab:PseudoThresholdsAllECschemesD5}).
For many applications, this is a major drawback, for example for quantum computation when the time of an error correction round dictates the time of a logical gate.
However there are some cases in which having a larger number of time-steps in an EC round while holding the logical error rate fixed is advantageous as it corresponds to a longer physical lifetime of the encoded information.
Such schemes could be useful for example in demonstrating that encoded logical quantum information can be stored for longer time scales in the lab using repeated rounds of FTEC.
\section{Conclusion}
\label{sec:Conclusion}
Building on definitions and a new flag FTEC protocol applied to distance-three and -five codes presented in \cref{sec:Section1}, in \cref{subsec:GeneralProtocol} we presented a general flag FTEC protocol, which we called flag $t$-FTEC, and which is applicable to stabilizer codes of distance $d = 2t+1$ that satisfy the flag $t$-FTEC condition. The protocol makes use of flag ancilla qubits which signal when $v$ faults lead to errors of weight greater than $v$ on the data when performing stabilizer measurements. In \cref{subsec:ApplicationProtocolColorCode,app:GeneralTflaggedCircuitConstruction} we gave explicit circuit constructions, including those needed for distance 3 and 5 codes measuring stabilizers of weight 4, 6 and 8.
In \cref{subsec:Remarks} we gave a sufficient condition for codes to satisfy the requirements for flag $t$-FTEC. Quantum Reed-Muller codes, Surface codes and hexagonal lattice color codes were shown to be families of codes that satisfy the sufficient condition.
The flag $t$-FTEC protocol could be useful for fault-tolerant experiments performed in near term quantum devices since it tends to use fewer qubits than other FTEC schemes such as Steane, Knill and Shor EC. In \cref{subsec:CompareFlagecschemes} we provided numerical evidence that with only 22 qubits, the flag $2$-FTEC protocol applied to the \codepar{19,1,5} color code can achieve lower logical failure rates than other codes using similar numbers of qubits such as the rotated distance-3 surface code and Steane-EC applied to the Steane code.
A clear direction of future work would be to find optimal general constructions of $t$-flag circuits for stabilizers of arbitrary weight that improve upon the general construction given in \cref{App:GeneralwFlagCircuitConstruction}. Of particular interest would be circuits using few flag qubits and CNOT gates while minimizing the probability of false-positives (i.e. when the circuit flags without a high-weight error occurring). Finding other families of stabilizer codes which satisfy the sufficient or more general condition for flag $t$-FTEC would also be of great interest. One could also envisage hybrid schemes combining flag EC with other FTEC approaches.
Another direction of future research would be to find general circuit constructions for simultaneously measuring multiple stabilizers while minimizing the number of required ancilla qubits. Further, we believe performing a rigorous numerical analysis to understand the impact of more compact circuit constructions on the code's threshold is of great interest.
Lastly, the decoding complexity (i.e. generating the flag error set lookup tables) is limited by the decoding complexity of the code. In some cases, for example concatenated codes, it may be possible to exploit some structure to generate the flag error sets more efficiently. In the case of concatenated codes, the decoding complexity would be reduced to the decoding complexity of the codes used at every level. Finding other scalable constructions for efficient decoding schemes using flag error correction remains an open problem.
\section{Acknowledgements}
The authors would like to thank Krysta Svore, Tomas Jochym-O'Connor, Nicolas Delfosse and Jeongwan Haah for useful discussions. We also thank Steve Weiss for providing the necessary computational tools that allowed us to complete our work in a timely manner. C. C. would like to acknowledge the support of QEII-GSST and thank Microsoft and the QuArC group for its hospitality where all of this work was completed.
\newpage
\bibliographystyle{unsrtnat}
\section{Introduction}
The effect of quenched disorder on critical phenomena in spin systems has been the subject of intense study for almost half a century. One of the central models has been the random bond Ising model (RBIM), which serves as a model for certain spin glass materials \cite{Edwards1975, binder1986}, certain localization problems and plateau transitions in the quantum Hall effect \cite{cho1997,merz2002} but also has been shown to be relevant for the analysis of the performance of topological quantum error correcting codes when assuming certain noise models \cite{dennis2002topological, chubb2021statistical}.
The RBIM in flat space has been understood quite comprehensively by now: while weak disorder is irrelevant in the renormalization group sense \cite{dotsenko1983}, increasing the disorder strength lowers the phase transition temperature until the so-called ``Nishimori point'' is reached. Beyond this, the system stays disordered for all temperatures. In more than two dimensions, the system for low temperatures and large disorder is in a spin glass phase, with the Nishimori point being the tricritical point.
The present paper is now concerned with the properties of the RBIM in curved space, which to the best of our knowledge has previously been studied only in the absence of disorder \cite{Rietman1992, krcmar2008, mnasri2015, Jiang2018, breuckmann2020}.
In this limit, the model undergoes a phase transition from a paramagnetic high-temperature to a low-temperature ferromagnetic phase, just as its flat-space counterpart.
The transition is mean-field in nature, but surprisingly it is not located at the fixed-point of the Kramers--Wannier duality, even on self-dual hyperbolic lattices. This observation implies either the existence of a second phase transition, for which no evidence was found numerically, or a violation of self-duality of the Ising model on self-dual hyperbolic lattices.
We note that the existence of a second phase transition for the pure Ising model on the hyperbolic plane with free boundary condition has been proved~\cite{Wu1996, Wu2000,Jiang2018}.
As we show in this work, there is an anomaly in the hyperbolic RBIM.
It turns out that it is not self-dual even on self-dual lattices, but, in the disorder-free limit, is related by the Kramers--Wannier duality to what we call the \emph{dual-RBIM}.
Hence, in this paper we study both the critical properties of the random bond Ising model and its dual in hyperbolic space.
Note that what we call the dual-RBIM is not related to the RBIM by an exact duality in the presence of disorder.
We begin our study of both models by mapping out their phase diagrams using a combination of high-temperature series expansion techniques and Monte-Carlo simulations. We show that the RBIM realizes a paramagnetic, a ferromagnetic and a spin-glass phase with the Nishimori point as the tricritical point. All transitions (with the exception of the multicritical point) are compatible with second-order mean-field behavior. In contrast, the dual-RBIM in the disorder-free limit as well as along the Nishimori line undergoes a strongly first-order transition as evidenced through Metropolis and multicanonical simulations using the Wang--Landau algorithm.
We numerically verify the duality of the two models in the disorder-free case and show that a duality conjectured by Takeda et al. \cite{takeda2005exact} is fulfilled only approximately.
The rest of the paper is organized as follows. In \autoref{sec:intro_rbim} we give necessary notions and definitions; in particular the dual-RBIM is derived in \autoref{sec:intro_duality}. In \autoref{sec:methods} we derive the high-temperature expansion for the RBIM and give details on the Monte-Carlo simulations used. \autoref{sec:rbim} presents the results on the phase diagram and critical properties of the random Bond Ising model and \autoref{sec:dual-rbim} presents the same for the dual model. Finally, we discuss the relevance of our results to the decoding of hyperbolic surface codes in \autoref{sec:qec}. We conclude in \autoref{sec:conclusion}.
\section{The disordered Ising model and its dual in the hyperbolic plane\label{sec:intro_rbim}}
\subsection{Hyperbolic surfaces}\label{sec:intro_hyperbolic}
The hyperbolic plane is a 2D manifold of constant negative curvature.
It can be realized in terms of several models.
Here, we will employ the \emph{Poincar\'e disk model}, which is defined as follows.
Consider a disk in $\mathbb{R}^2$ with unit radius and centered at the origin.
Let $x$ and $y$ denote the standard coordinates of $\mathbb{R}^2$.
Then the hyperbolic plane is given by the set of points
\begin{align}
\mathbb{H}^2 = \{ (x,y)\in \mathbb{R}^2 \mid x^2+y^2 < 1 \}
\end{align}
with metric given by
\begin{align}\label{eq:metric}
ds^2 = \frac{dx^2 + dy^2}{\left( 1-x^2-y^2 \right)^2}
\end{align}
It is immediate from \autoref{eq:metric} that length scales are highly distorted towards the boundary of the disk compared to the euclidean metric, see \autoref{fig:55tessellation}.
Just as regular euclidean space can be tessellated by squares, triangles or hexagons, hyperbolic space can be tessellated by regular polygons as well.
In fact, it turns out that hyperbolic space supports an infinite number of regular tessellations.
We can label regular tessellations by the \emph{Schl\"afli symbol} $\{r,s\}$, where $r$ is the number of sides of the polygonal plaquettes and $s$ is the number of plaquettes meeting at each vertex.
For example, the hexagonal lattice has Schl\"afli symbol $\{6,3\}$.
Its dual lattice can be obtained by reversing the Schl\"afli symbol, yielding the triangular lattice $\{3,6\}$.
These two examples, together with the self-dual square tessellation $\{4,4\}$ are all the possible regular tessellations of the euclidean plane.
The hyperbolic plane supports any regular tessellation $\{r,s\}$ as long as $1/r+1/s < 1/2$.
The $\{5,5\}$ tessellation of the hyperbolic plane in the Poincar\'e disk model is shown in \autoref{fig:55tessellation}.
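As a quick aside, the angle-defect criterion above can be checked mechanically. The following sketch (our own helper, not part of the original analysis) classifies a Schl\"afli symbol as spherical, euclidean or hyperbolic:

```python
from fractions import Fraction

def tessellation_type(r, s):
    """Classify the regular tessellation {r, s} by the angle-defect criterion:
    1/r + 1/s == 1/2 is euclidean, < 1/2 is hyperbolic, > 1/2 is spherical."""
    total = Fraction(1, r) + Fraction(1, s)
    if total == Fraction(1, 2):
        return "euclidean"
    return "hyperbolic" if total < Fraction(1, 2) else "spherical"

# The three euclidean tessellations mentioned in the text:
assert tessellation_type(6, 3) == "euclidean"   # hexagonal
assert tessellation_type(3, 6) == "euclidean"   # triangular
assert tessellation_type(4, 4) == "euclidean"   # square (self-dual)

# The {5,5} tessellation shown in the figure is hyperbolic:
assert tessellation_type(5, 5) == "hyperbolic"
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point ambiguity at the euclidean boundary case.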
\begin{figure}
\centering
\includegraphics[width=0.65\columnwidth]{tesselation.png}
\caption{Poincar\'e disk model of the infinite hyperbolic plane $\mathbb{H}^2$ with the $\{5,5\}$ lattice. All edges have the same length with respect to the hyperbolic metric, see \autoref{eq:metric}.}
\label{fig:55tessellation}
\end{figure}
In order to approximate the infinite hyperbolic plane for numerical analysis, we can consider sequences of finite neighborhoods $B_R$ (discs) of increasing radii $R$.
This is commonly done in the context of statistical mechanics models in euclidean space for performing finite size analysis.
The models differ at the boundaries of the finite regions from the infinite euclidean plane.
However, the effects of this deviation vanish in the thermodynamic limit as $\operatorname{vol}(\partial B_R) / \operatorname{vol}(B_R) \rightarrow 0$ for $R\rightarrow \infty$.
This is not the case in hyperbolic space where $\operatorname{vol}(\partial B_R)$ and $\operatorname{vol}(B_R)$ have the same asymptotic scaling.
This means that taking finite neighborhoods with boundaries can not be used to analyze the behaviour of the infinite model.
We solve this problem by considering families of boundaryless, finite surfaces (supporting the same tessellation) which are indistinguishable from the infinite hyperbolic plane in local regions of increasing size at any point.
Introducing periodic boundary conditions is a much more subtle process in hyperbolic spaces compared to euclidean spaces.
In particular, closed, orientable hyperbolic manifolds have a genus that is proportional to their area.
This is seen most easily by considering the Gau\ss--Bonnet theorem, which states that the geometry (curvature) of a 2D surface is connected to its topology.
More concretely, it states that for any orientable surface~$S$ of genus~$g$ it holds that
\begin{align}\label{eqn:gauss_bonnet}
2-2g = \frac{1}{2\pi} \int_S \kappa\, dA
\end{align}
where on the right hand side we integrate the curvature~$\kappa$ at every point in $S$ over the area of $S$.
If $S$ is euclidean, then the curvature~$\kappa$ is equal to 0 at every point.
From \autoref{eqn:gauss_bonnet} it then immediately follows that all orientable euclidean surfaces are tori ($g=1$).
On the other hand, if $S$ is hyperbolic then $\kappa = -1$ everywhere. Orientable hyperbolic surfaces hence have
\begin{align}\label{eq:area_propto_genus}
\frac{\operatorname{area}(S)}{2\pi} = 2g-2
\end{align}
so that larger surfaces necessarily have a higher genus.
In \autoref{fig:klein_quartic} we show an example of a closed $g=3$ hyperbolic surface, called \emph{Klein quartic}, which supports a $\{7,3\}$ tessellation.
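The Gau\ss--Bonnet bookkeeping can be made concrete with a small sketch (our own hypothetical helper): each hyperbolic $\{r,s\}$ face has area equal to its angle defect, $(r-2)\pi - 2\pi r/s$, and summing over all faces of a closed tessellation recovers the genus via \autoref{eq:area_propto_genus}.

```python
from fractions import Fraction

def genus_from_tessellation(r, s, num_faces):
    """Genus of a closed surface tessellated by num_faces regular r-gons
    meeting s per vertex, via Gauss--Bonnet.

    A hyperbolic {r,s} face has area (in units of pi) equal to its angle
    defect: (r - 2) - 2*r/s.  With kappa = -1, Gauss--Bonnet gives
    area(S)/(2*pi) = 2g - 2, i.e. g = area/(4*pi) + 1.
    """
    face_area_over_pi = Fraction(r - 2) - Fraction(2 * r, s)
    total_over_pi = num_faces * face_area_over_pi
    return total_over_pi / 4 + 1

# Klein quartic: 24 heptagons of the {7,3} tessellation give genus 3.
assert genus_from_tessellation(7, 3, 24) == 3

# Euclidean tessellations have zero angle defect, so every closed
# orientable euclidean surface comes out as a torus (g = 1):
assert genus_from_tessellation(4, 4, 100) == 1
```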
\begin{figure}[h]
\centering
\includegraphics[width=0.2\textwidth]{klein_compact.png}
\hfil
\includegraphics[width=0.2\textwidth]{klein_fd.png}
\caption{A hyperbolic surface of genus 3 tessellated by the $\{7,3\}$-tessellation (left).
If we cut the surface open we obtain a flat piece of hyperbolic space (right).
The plaquettes are colored to guide the eye.}
\label{fig:klein_quartic}
\end{figure}
As it turns out, the subtlety that hyperbolic surfaces are topologically complex becomes important in the Kramers--Wannier duality.
This is because the Kramers--Wannier duality is sensitive to the number of closed loops (cycles) in the lattice and the higher genus of hyperbolic surfaces introduces more such loops, see discussion in \autoref{sec:intro_duality}.
\subsection{Duality in the Hyperbolic Ising model\label{sec:intro_duality}}
We consider the Ising model (for the time being \emph{without} quenched disorder) on a lattice $\mathcal L = (V, E, F)$. We denote by $V$ the set of vertices, by $E$ the set of edges and by $F$ the set of faces of the lattice.
Denoting nearest neighbor bonds between two vertices $i$ and $j$ of the lattice by $\expval{ij}$, the Hamiltonian of the Ising model is then given by
\begin{equation}
\label{eq:pure-Ising}
H = J \sum_{\expval{ij}} \sigma_i \sigma_j,
\end{equation}
where $\sigma\in\{\pm1\}$ are Ising spin variables and we assume $J<0$ for ferromagnetic coupling.
In euclidean space, the Kramers--Wannier duality \cite{kramers1941} relates the high-temperature expansion of the Ising model [\autoref{eq:pure-Ising}] to its low-temperature expansion of the same model on the dual lattice.
In particular, Kramers and Wannier showed an exact relation between the two partition functions
\begin{subequations}
\label{eq:krammers-wannier}
\begin{align}
Z(T) = \Tilde Z(T^*)
\end{align}
where~$Z$ and~$\Tilde Z$ are the partition functions of the Ising model on the lattice and its dual respectively and $T$ and~$T^*$ satisfy
\begin{equation}
\label{eq:krammers-wannier-T}
\sinh(2J/T)\sinh(2J/T^*) = 1.
\end{equation}
\end{subequations}
On a self-dual lattice, $Z=\Tilde Z$ and thus the duality [\autoref{eq:krammers-wannier}] constitutes an exact mapping between the behavior of the system at high and low temperature. In particular, assuming that a single phase transition occurs, this fixes the critical temperature to the fixed-point of \autoref{eq:krammers-wannier-T}
\begin{equation}
\label{eq:krammers-wannier-Tc}
\sinh(2J/T_c)\sinh(2J/T_c) = 1~\Rightarrow~T_c \approx 2.2692J.
\end{equation}
An open question posed by earlier studies \cite{Rietman1992, breuckmann2020} was how \autoref{eq:krammers-wannier-Tc} is violated in hyperbolic space.
That is, if the Kramers--Wannier duality [\autoref{eq:krammers-wannier}] holds also for hyperbolic lattices, one of the following must be true: either all self-dual hyperbolic lattices (that is, tessellations of compact hyperbolic manifolds with Schl\"afli symbol $\{r, s\}$ with $r=s$) have the \emph{same} critical temperature, given by \autoref{eq:krammers-wannier-Tc}, or there exist \emph{two} phase transitions, related by \autoref{eq:krammers-wannier-T}.
In fact, as we will show below, the Ising model on tessellations of compact hyperbolic manifolds is not related by the Kramers--Wannier duality to the same Ising model on the dual lattice. In particular, it is not self-dual, even on self-dual lattices.
\subsubsection{Re-derivation of the Kramers--Wannier duality}
To understand this, let us perform a more careful re-derivation of the Kramers--Wannier duality.
To this end, we first consider the high-temperature expansion of the Ising model on a lattice $\mathcal L = (V, E, F)$.
Let $Z_1$ be the set of subsets $\gamma \subset E$ such that in the subgraph induced by any such $\gamma$ every vertex has even degree.
The subsets $\gamma \in Z_1$ are called \emph{cycles}.
It is well-known that the partition function can be written as a sum over the set $Z_1$ of all cycles of the graph (see e.g. \cite[Chapter~2]{oitmaa2006}):
\begin{subequations}
\label{eq:kw-high-T}
\begin{align}
Z(K) &= \sum_{{\sigma}\in \{\pm 1\}^N} \prod_{(i,j)\in E} \exp(K \sigma_i \sigma_j)\\
&= (\cosh K)^{|E|} \sum_{{\sigma}} \prod_{(i,j)} (1+ \sigma_i \sigma_j \tanh K)\\
&= 2^N (\cosh K)^{|E|} \sum_{\gamma \in Z_1} (\tanh K)^{|\gamma|}
\end{align}
\end{subequations}
where we have defined $K=J/T$ and $|S|$ denotes the size of the set $S$.
Note that the set $Z_1$ of cycles $\gamma$ in \autoref{eq:kw-high-T} includes ones that are contractible as well as ones that are non-contractible. Two examples for such cycles on a surface with genus 3, tessellated by the $\{7, 3\}$ tessellation (cf. \autoref{fig:klein_quartic}), are given in \autoref{fig:klein_quartic_dual}. On the right we show a contractible cycle on the primal lattice (solid lines) in blue. On the left we show, also in blue, a non-contractible cycle on the dual lattice (dashed lines).
\begin{figure}[h]
\centering
\includegraphics[width=0.2\textwidth]{klein_cycle.png}
\hfil
\includegraphics[width=0.2\textwidth]{klein_boundary.png}
\caption{The left shows a cycle on the dual lattice (blue) and the associated cocycle on the primal lattice (red). The right shows a boundary on the primal lattice (blue) and the associated coboundary on the dual lattice (red).}
\label{fig:klein_quartic_dual}
\end{figure}
To establish the duality, we also consider the low-temperature expansion of the Ising model, but on the dual lattice $\mathcal L^* = (V^*,E^*,F^*) = (F,E,V)$. For regular tessellations of hyperbolic surfaces, the dual lattice is just obtained by swapping the first and second entry of its Schl\"afli symbol $\{r, s\}$. This is also indicated in \autoref{fig:klein_quartic_dual}. The primal lattice (solid lines) is the $\{7, 3\}$ tessellation and its dual (dashed lines) is the $\{3, 7\}$ tessellation of the same surface.
The low temperature expansion follows from expressing the partition function in terms of excitations on top of the (ferromagnetic) ground state. These are given by domain walls.
For example, consider starting from an all-ferromagnetic state of the Ising model [\autoref{eq:pure-Ising}] on the (dual) lattice indicated by dashed lines in \autoref{fig:klein_quartic_dual}. The cost of flipping the spin on the central site is given by the size of the domain wall indicated in red on the right of \autoref{fig:klein_quartic_dual}.
Generally, let $B^{1*}$ be the set of all possible domain walls on the dual lattice. We can write
\begin{subequations}
\label{eq:low-T-z}
\begin{align}
\Tilde Z(K) &= \sum_{{\sigma}\in \{\pm 1\}^N} \prod_{(i,j)\in E^*} \exp(K \sigma_i \sigma_j)\\
&= 2 \sum_{\omega^* \in B^{1*}} \exp(K)^{|E^*|-2|\omega^*|}\\
&= 2 \exp(K)^{|E^*|} \sum_{\omega^* \in B^{1*}} \exp(-2 K)^{|\omega^*|}
\end{align}
\end{subequations}
where the second equality directly follows from the definition of $B^{1*}$. In the language of homology, the set $B^{1*}$ is given exactly by the set of \emph{coboundaries} on the \emph{dual} lattice.
The basis of the Kramers--Wannier duality, homologically speaking, is the fact that the set of cycles $Z_1$ is in one-to-one correspondence with the set of \emph{cocycles} $Z^{1*}$ on the dual lattice $\mathcal L^*$. This is also indicated in \autoref{fig:klein_quartic_dual} where we show two examples of the correspondence of cocycles (red) and cycles (blue). The left side shows a non-contractible cocycle on the primal lattice (solid, red) and the corresponding cycle on the dual (blue, dashed). The right side shows a contractible cycle (a \emph{boundary}) on the primal lattice (blue, solid) and the corresponding cocycle (a \emph{coboundary}) on the dual lattice (red, dashed).
Using this equivalence, $Z_1 = Z^{1*}$, as well as \autoref{eq:krammers-wannier-T}, and defining $K^*=J/T^*$, we can then rewrite
\begin{align}
Z(K) &= 2 \exp(K^*)^{|E^*|}\sum_{\gamma^* \in Z^{1*}}\exp(-2K^*)^{|\gamma^*|}.
\label{eq:dual-z}
\end{align}
Above, the right hand side is \emph{almost} the low-temperature expansion of the Ising model on the dual lattice [\autoref{eq:low-T-z}], at temperature $T^*$ [\autoref{eq:krammers-wannier-T}].
The difference between \autoref{eq:dual-z} and \autoref{eq:low-T-z} is that the sum above is over all cocycles $\gamma^* \in Z^{1*}$ whereas the low-temperature expansion is a sum over domain walls $\omega^* \in B^{1*}$, that is coboundaries or ``contractible'' cocycles.
Physically, we can rationalize this difference by looking at the example of a non-contractible cocycle on the left of \autoref{fig:klein_quartic_dual} (red, solid). The corresponding cocycle appears in the high-temperature expansion of the dual lattice (every vertex in it has even degree). However, there is no set of spins on vertices of the primal lattice that we could flip to create a domain wall of that form.
Hence, for Ising models on regular tessellations of closed manifolds, we have established what is the \emph{difference} between their high-temperature expansion [\autoref{eq:kw-high-T}] and their low-temperature expansion at the dual temperature [\autoref{eq:dual-z}]. In the following we will show that (i) for tessellations of closed euclidean surfaces (tori), this difference vanishes in the thermodynamic limit, yielding the Kramers--Wannier duality [\autoref{eq:krammers-wannier}], and (ii) the difference does \emph{not} vanish for tessellations of closed hyperbolic surfaces, leading to a violation of \autoref{eq:krammers-wannier}.
Note that the contribution of any cocycle in \autoref{eq:dual-z} has a weight $\exp(-2K^*)^{|\gamma^*|}$.
For euclidean lattices on an $L\times L$ torus this implies that the contribution of any non-contractible cocycle is at least of order $\order{\exp(-2K^*)^L}$.
Focussing on such minimal-size cocycles, of which there are $\sim L$, the difference between \autoref{eq:dual-z} and the low-temperature expansion of the Ising model vanishes in the thermodynamic limit
\begin{equation}
Z(T) - \Tilde Z(T^*) \sim L \exp(-2K^*L) \xrightarrow[L\to \infty]{} 0.
\end{equation}
This then yields \autoref{eq:krammers-wannier}.
In contrast, in hyperbolic space, the number of minimal, non-contractible cocycles goes as $\sim N$ [see \autoref{eq:area_propto_genus}] while their length grows only logarithmically \cite{macaj2008injectivity,moran1997growth} and so the same difference goes as
\begin{equation}
Z(T) - \Tilde Z(T^*) \sim N^{1-2K^*}
\end{equation}
which does not generally vanish as $N\to\infty$.
\subsubsection{The dual Ising model in hyperbolic space}
In order to obtain a model that does fulfill the Kramers--Wannier duality, we have to define a model where possible domain walls on top of the ferromagnetic ground state include all non-contractible cocycles.
We achieve this by a rather simple trick. Given an Ising model [\autoref{eq:pure-Ising}] on a tessellation of a closed hyperbolic surface~$S$ with $2g$ nonequivalent, non-contractible cocycles~$\ell$, we introduce one additional Ising degree of freedom~$\eta_\ell$ per nonequivalent, non-contractible cocycle.
We then define the ``dual Ising model'' as
\begin{align}\label{eq:H-dual-ising-pure}
H &= J \sum_{\expval{ij}} \left( \prod_{\ell\,|\,\expval{ij} \in \ell} \eta_\ell \right)\,\sigma_i \sigma_j
\end{align}
where $J<0$ as before is chosen to be ferromagnetic and we have chosen one representative per nontrivial cocycle~$\ell$. One example of such a representative on a hyperbolic surface with genus 3, tessellated by the $\{7, 3\}$-tessellation, is shown on the left side of \autoref{fig:klein_quartic_dual}, where it is highlighted in red. The effect of flipping this Ising degree of freedom $\eta_\ell \to -\eta_\ell$ is to reverse the sign of the coupling of each edge that is part of the representative $\ell$. One can think of each variable $\eta_\ell$ as encoding the \emph{boundary} condition in one possible direction, which can either be periodic ($\eta_\ell=1$) or anti-periodic ($\eta_\ell=-1$).
Because of this, the domain walls of the model defined by \autoref{eq:H-dual-ising-pure} include the nontrivial cocycles of the lattice and its partition function is given by \autoref{eq:dual-z}, that is, the dual-RBIM for $p=0$ is indeed the Kramers--Wannier dual of the Ising model.
\subsection{The Random-Bond Ising model}
The random-bond Ising model (RBIM), first introduced by Edwards and Anderson \cite{Edwards1975} to model the interaction of dilute magnetic alloys, serves as a simple model to study critical phenomena in systems with quenched disorder.
The Hamiltonian for the RBIM on a lattice with nearest-neighbor bonds $\expval{ij}$ is
\begin{align}
H = \sum_{\langle i,j \rangle} J_{ij} \sigma_i \sigma_j
\label{eq:H-RBIM}
\end{align}
where $\sigma_i\in \lbrace \pm 1 \rbrace$ are Ising spin variables and $J_{ij}$ are random couplings.
Whenever we refer to the Ising model in ``hyperbolic space'' or on ``hyperbolic lattices'' throughout this work, we refer to a model where spins are located on the vertices of regular tessellations of \emph{compact} hyperbolic manifolds, with Schl\"afli symbol $\{r,s\}$. This emphasis is important, since considering the same model on non-compact hyperbolic manifolds with, for example, open or closed boundary conditions will generally change its properties \cite{Wu1996, Wu2000}.
The couplings are distributed independently and identically. In this paper, we take their individual probability distribution to be the so called
``$\pm J$-distribution''
\begin{align}\label{eq:disorder}
P(J_{ij}) = p\, \delta(J_{ij}-1) + (1-p) \, \delta(J_{ij}+1)
\end{align}
so that each coupling is anti-ferromagnetic $J_{ij} = +1$ with probability $p$ and ferromagnetic $J_{ij} = -1$ with probability $1-p$.
Hence, on the infinite hyperbolic plane, $p$ equals the fraction of anti-ferromagnetic bonds.
The free energy of the model, when considering quenched disorder is then given by
\begin{align}
F &= -\frac{1}{\beta}\left[\log(Z) \right], \\
Z &= \sum_{\{\sigma\}} \exp\left(-\beta \sum_{\langle i,j \rangle} J_{ij} \sigma_i \sigma_j \right),
\end{align}
where brackets $[\dots]$ denote the average over disorder configurations.
For $p=0$, the model reduces to the ferromagnetic Ising model, which we have studied for regular tessellations of compact hyperbolic manifolds in a previous paper \cite{breuckmann2020}. This model as a function of temperature undergoes a phase transition from a high-temperature paramagnetic into a low-temperature ferromagnetic phase. Our study revealed that this transition is mean-field in nature for all investigated tessellations. In the present work, we extend our previous work to the case of finite $0 < p < 1/2$.
We also study the \emph{dual} Ising model \autoref{eq:H-dual-ising-pure} in the presence of quenched disorder. In this case it becomes
\begin{align}\label{eq:H-dual-ising}
H &= \sum_{\expval{ij}} J_{ij} \left( \prod_{\ell\,|\,\expval{ij} \in \ell} \eta_\ell \right)\,\sigma_i \sigma_j.
\end{align}
As before, the $\sigma_j\in\{\pm1\}$ are Ising variables, as are the $\eta_\ell \in \{\pm 1\}$. While the $\sigma_j$ are located on the vertices of the lattice, each $\eta_\ell$ is associated with a nontrivial cocycle $\ell$ (cf. \autoref{sec:intro_duality}). The $J_{ij}$ are random couplings drawn from the $\pm J$ distribution defined in \autoref{eq:disorder}.
The Kramers--Wannier duality [\autoref{eq:krammers-wannier}], as usual, is only valid in the disorder-free case.
However, there is a conjecture by Takeda and Nishimori \cite{takeda2005exact} relating the location of the Nishimori point of the RBIM to the corresponding point in the dual model
\begin{equation}\label{eq:duality-conjecture}
H(p_{\rm N}) + H(p_{\rm N}^*) = 1
\end{equation}
where $H(p) = -p \log_2(p) - (1-p)\log_2(1-p)$ is the binary entropy.
As discussed in \autoref{sec:dual-rbim} we see that the conjecture holds approximately, but not within error bars.
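For reference, \autoref{eq:duality-conjecture} can be evaluated numerically. The bisection helper below is our own sketch (names are ours); the self-dual point of the conjecture, $H(p)=1/2$, lies at $p\approx0.1100$:

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conjectured_dual_p(p):
    """Solve H(p) + H(p*) = 1 for p* in (0, 1/2) by bisection
    (H is monotonically increasing on that interval)."""
    target = 1.0 - binary_entropy(p)
    lo, hi = 1e-12, 0.5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Self-dual point of the conjecture: H(p) = 1/2 at p ≈ 0.110028.
assert abs(conjectured_dual_p(0.110028) - 0.110028) < 1e-4
```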
\subsection{Possible Phases and Order Parameters}
\begin{figure}
\centering
\includegraphics{phases_sketch.pdf}
\caption{Schematic phase diagram of the random bond Ising model and its dual on the hyperbolic plane as a function of temperature $T$ and the fraction of antiferromagnetic bonds~$p$. The high-temperature paramagnetic (PM) phase at low temperatures gives way either to a ferromagnetic (FM) phase or a spin glass (SG) phase at weak and strong disorder, respectively.
For the dual model, we only indicate the schematic boundary of the FM phase.
Note that although the temperatures~$T_c$ and~$T_c^*$ are related by the Kramers--Wannier relation, the dual model of the hyperbolic Ising model is different from the original model even on self-dual lattices (see main text for details).
The phase boundary of the dual model corresponds to the decoding threshold of the hyperbolic surface code under phenomenological noise. The Nishimori line is indicated in dashed-gray. }
\label{fig:phases_sktech}
\end{figure}
At high temperature, both the RBIM and its dual are in the paramagnetic phase. As the temperature is lowered, at low disorder this gives way to a ferromagnetic phase which is continuously connected to that of the pure model at $p=0$. The transition from the paramagnet to the ferromagnet corresponds to an instability of the mean of the magnetization distribution $\rho(m)$. That means while in the paramagnet we have
\begin{equation}
\rho(m) = \delta(m),
\end{equation}
in the ferromagnetic phase
\begin{equation}
\rho(m) = \delta(\abs{m} - M).
\end{equation}
For large disorder, $p \approx 1/2$, random systems can also develop spin glass order at low temperature, which corresponds to an instability in the variance of the magnetization distribution, which is also called the Edwards-Anderson (EA) order parameter
\begin{equation}
q_{\rm EA} = \left[m^2\right],
\end{equation}
where the magnetization vanishes ($[m]=0$).
At intermediate values of disorder, there is in principle also the possibility of a magnetized spin glass phase \cite{thouless1986, Carlson1990}, where the magnetization distribution has both finite width ($q_{\rm EA} \neq 0$) and mean ($[m] \neq 0$).
The schematic phase diagram of the RBIM and its dual on the hyperbolic plane is shown in \autoref{fig:phases_sktech}. Note that for the dual model, we only indicate the phase boundary of the ferromagnetic phase. There could exist a spin-glass phase in principle, but the investigation of that is beyond the scope of this work.
We also indicate the so called \emph{Nishimori line} \cite{Nishimori1981}, which is defined by the condition
\begin{equation}
\exp(2\beta J) = \frac{p}{1-p},
\label{eq:Nishimori}
\end{equation}
that is the (relative) probability of frustrating a bond due to thermal fluctuations is equal to that of flipping its sign due to the quenched disorder.
It is known that the multicritical point in the RBIM lies on the Nishimori line and that the phase boundary of any magnetized phase must be reentrant or vertical, that is, no magnetized phase can exist for $p_{\rm N} < p$ \cite{Nishimori1981}.
As indicated, we expect the ferromagnetic phase of the RBIM to have a larger extent than that of its dual, since the additional cocycle degrees of freedom $\eta_{\ell}$ have a finite contribution to the entropy, which is then strictly greater than that of the RBIM.
\section{Methods\label{sec:methods}}
\subsection{High-Temperature Series Expansion\label{sec:series_expansion}}
Our primary means to map out the phase diagram of the random-bond Ising model in hyperbolic space will be to perform high-temperature series expansions of both the susceptibility
\begin{equation}
\chi = \beta \frac{1}{N} \sum_{i,j} \left[\expval{\sigma_i \sigma_j} - \expval{\sigma_i}\expval{\sigma_j}\right],
\end{equation}
as well as of the Edwards-Anderson (EA) susceptibility
\begin{equation}
\chi_{\rm EA} = \beta \frac{1}{N^2} \sum_{i,j} \left[\expval{\sigma_i \sigma_j}^2 - \expval{\sigma_i}^2\expval{\sigma_j}^2\right].
\end{equation}
Coming from high temperature, if there is a transition to a low-temperature ferromagnetic phase, the susceptibility $\chi$ at the transition should diverge as a power law
\begin{equation}
\chi \sim \frac{1}{(T-T_c)^\gamma}
\end{equation}
while the Edwards-Anderson susceptibility $\chi_{\rm EA}$ can have either a weak singularity or diverge as well \cite{binder1986}. In contrast, if there is a transition into a low-temperature spin-glass phase, the susceptibility $\chi$ will exhibit only a weak singularity (a cusp), while the Edwards-Anderson susceptibility diverges as a power law
\begin{equation}
\chi_{\rm EA} \sim \frac{1}{(T-T_c)^{\gamma'}}.
\end{equation}
\subsubsection{Biconnected graph expansion of inverse susceptibilities}
It turns out that for susceptibilities of the form
\begin{equation}
\chi_{k, l} = \beta \frac{1}{N} \sum_{i,j} \left[\expval{\sigma_i \sigma_j}^k - \expval{\sigma_i}^k\expval{\sigma_j}^k\right]^l,
\end{equation}
it is favourable to perform the high-temperature expansion in the {\em inverse} susceptibility. The reason for this is that it can be shown~\cite{singh87} that the only non-trivial contributions come from \emph{biconnected} graphs, that is, graphs which stay connected if any one of their vertices (together with its incident edges) is removed.
We show the first few graphs that contribute to the susceptibility $\chi=\chi_{1, 1}$ and EA-susceptibility $\chi_{\rm EA}=\chi_{2, 1}$ on the $\{5, 5\}$ lattice in \autoref{fig:55bicon}.
The inverse susceptibility can be expanded in terms of these graphs as a function of both inverse temperature $v=\tanh(\beta J)$ and disorder strength $\mu = 1-2p$. In practice, the variables in the systematic biconnected graph expansion are $w=v^2$ and $\alpha = \mu/v$:
\begin{align}\label{eqn:sus_series}
\tilde{\chi}^{-1}(w, \alpha) = 1\, + \, \sum_{g}\, c(g)\, W(g)
\end{align}
where the sum is over all graphs, $c(g)$ is the coefficient of~$N$ in the number of embeddings of the graph~$g$ into the lattice, and~$W(g)$ for each graph is a function of both~$w$ and~$\alpha$.
Expanding $W$ as a function of inverse temperature $w$, one can show that for each order $n$, the coefficient of $w^n$ is a polynomial in $\alpha$ of order $n$ with integer coefficients.
For example, the inverse susceptibility on the $\{5, 5\}$ lattice is given by
\begin{align}
\chi^{-1}(w, \alpha) = 1 &- 5 \alpha w + 5 \alpha^2 w^2 - 5 \alpha^3 w^3 + 5 \alpha^4 w^4 \nonumber\\
&+ (10 \alpha + 10 \alpha^2 +10 \alpha^3 + 10 \alpha^4 + 5 \alpha^5) w^5 \nonumber\\
&+ \order{w^6}.
\end{align}
Note that for $\alpha = 1$ (that is $v=\mu$), we obtain the series on the Nishimori line up to order $\order{w^n}=\order{v^{2n}}$.
For more details and a derivation of \autoref{eqn:sus_series} see Ref.~\onlinecite{singh87}.
\begin{figure}
{\includegraphics[width=1.2cm]{55_bicon/0.png}}
{\includegraphics[width=1.2cm]{55_bicon/1.png}}
{\includegraphics[width=1.2cm]{55_bicon/2.png}}
{\includegraphics[width=1.2cm]{55_bicon/3.png}}
{\includegraphics[width=1.2cm]{55_bicon/4.png}}\\
{\includegraphics[width=1.2cm]{55_bicon/5.png}}
{\includegraphics[width=1.2cm]{55_bicon/6.png}}
{\includegraphics[width=1.2cm]{55_bicon/7.png}}
{\includegraphics[width=1.2cm]{55_bicon/8.png}}
{\includegraphics[width=1.2cm]{55_bicon/9.png}}\\
{\includegraphics[width=1.2cm]{55_bicon/10.png}}
{\includegraphics[width=1.2cm]{55_bicon/11.png}}
{\includegraphics[width=1.2cm]{55_bicon/12.png}}
{\includegraphics[width=1.2cm]{55_bicon/13.png}}
{\includegraphics[width=1.2cm]{55_bicon/14.png}}\\
\caption{Some small biconnected subgraphs of the $\{5,5\}$-tiling. Removing any single vertex and all its incident edges leaves the graphs connected. Only biconnected graphs contribute to the series expansion.}
\label{fig:55bicon}
\end{figure}
\subsubsection{Analysis of the series}
We analyze the generated series $\tilde{\chi}(w, \alpha)$, usually for fixed $\alpha$ as a function of $w$, using \emph{first-order homogeneous integrated differential approximants (FO-IDAs)}.
One reason to choose FO-IDAs over simpler methods is that they are known to be less biased towards the lower-order coefficients of the expansion~\cite{singh2}.
This is important, as the most relevant contributions on a $\{r, s\}$ tiling come from graphs with at least~$r$ edges.
The analysis using FO-IDAs proceeds as follows:
For fixed disorder strength $\alpha$, we assume that the series $\tilde{\chi}$ is the solution of a first-order differential equation of the form
\begin{equation}\label{eqn:IDAdef}
Q_L(w) \frac{d \tilde{\chi}(w)}{d w} + R_M(w)\, \tilde{\chi}(w) + S_T(w) = 0
\end{equation}
where $Q_L$, $R_M$ and $S_T$ are polynomials of degree $L$, $M$, $T$, respectively.
By equating the series order-by-order with the coefficients of \autoref{eqn:IDAdef} we obtain a linear system of equations in the coefficients of the polynomials $Q_L$, $R_M$ and $S_T$.
It can be shown that for any root $w_c$ of the polynomial~$Q_L$, a solution of \autoref{eqn:IDAdef} has an algebraic singularity of the form $(w-w_c)^{-\gamma}$ \cite{oitmaa2006}.
The exponent of the singularity is given by
\begin{align}\label{eqn:crit_exp}
\gamma = \frac{R_{M}(w_c)}{Q'_L(w_c)} .
\end{align}
Generally, the results for $w_c$ and $\gamma$ will depend on the choice of degrees $L$, $M$ and $T$.
If we have the series up to order $N$ then we can choose all possible values satisfying $L+M+T \leq N-2$.
Following~\cite{singh2} we exclude approximants if
\begin{itemize}
\item a root of $R_M$ lies close to $w_c$, giving rise to a spuriously small estimate of $\gamma$;
\item a complex root of $Q_L$ with absolute value smaller than $w_c$ lies close to the real axis.
\end{itemize}
We observe that the analysis converges well, since the approximants for different choices of $L$, $M$ and $T$ are all close to each other.
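As an illustration, the FO-IDA construction described above can be sketched in a few lines of Python. This is a toy implementation under simplifying assumptions: we normalize $q_0 = 1$, solve the resulting square linear system, and take the smallest positive real root of $Q_L$ as the candidate singularity; the function and variable names are ours, not from any analysis package.

```python
import numpy as np

def fo_ida(c, L, M, T):
    """Fit a first-order integrated differential approximant
    Q_L(w) chi'(w) + R_M(w) chi(w) + S_T(w) = 0 to the series
    chi(w) = sum_n c[n] w^n (c must have more than L+M+T+2 entries)
    and return (w_c, gamma) with gamma = R_M(w_c)/Q_L'(w_c) evaluated
    at the smallest positive real root of Q_L."""
    n_eq = L + M + T + 2                    # match orders w^0 .. w^{n_eq-1}
    A = np.zeros((n_eq, L + M + T + 3))     # unknowns: q_0..q_L, r_0..r_M, s_0..s_T
    for n in range(n_eq):
        for k in range(L + 1):              # Q_L * chi' contributes q_k (n-k+1) c_{n-k+1}
            j = n - k + 1
            if 1 <= j < len(c):
                A[n, k] = j * c[j]
        for k in range(M + 1):              # R_M * chi contributes r_k c_{n-k}
            if 0 <= n - k < len(c):
                A[n, L + 1 + k] = c[n - k]
        if n <= T:                          # S_T contributes s_n directly
            A[n, L + M + 2 + n] = 1.0
    # Normalize q_0 = 1 and solve for the remaining coefficients.
    sol, *_ = np.linalg.lstsq(A[:, 1:], -A[:, 0], rcond=None)
    q = np.concatenate(([1.0], sol[:L]))    # Q_L coefficients, ascending order
    r = sol[L:L + M + 1]                    # R_M coefficients, ascending order
    roots = np.roots(q[::-1])
    w_c = min(x.real for x in roots if abs(x.imag) < 1e-9 and x.real > 0)
    gamma = np.polyval(r[::-1], w_c) / np.polyval(np.polyder(q[::-1]), w_c)
    return w_c, gamma
```

Scanning over admissible degree triples $(L, M, T)$ with $L+M+T \leq N-2$, discarding defective approximants as described above, and collecting the surviving $(w_c, \gamma)$ pairs then gives the central estimates and their spread.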
\subsection{Monte Carlo Simulations}
To corroborate our results from the series expansion and to compute additional observables, we also perform classical Monte-Carlo simulations for some sets of parameters.
To compute the disorder average $\left[\dots\right]$, we perform Monte-Carlo simulations using 1000 disorder realizations $\left\{J_{ij}\right\}$. For each realization, we simulate two independent copies $\{ \sigma_j^{(1)} \}$, $\{ \sigma_j^{(2)} \}$ of the system.
\subsubsection{Equilibration in the (possible) presence of glassiness}
Since it is known that there is no spin glass behavior on the Nishimori line \cite{Nishimori1981}, we expect that a standard local Metropolis--Hastings algorithm is sufficient to equilibrate the system at temperatures $T > 2J/\log[(1-p)/p]$.
When approaching the spin glass phase, the local algorithm suffers from a drastic slowdown. Nevertheless, we are able to study the spin glass transition, since for this purpose we do not need to equilibrate the system deep inside the glassy phase. To make sure that the system is actually equilibrated, we keep track of the autocorrelation time of all relevant observables (computed via a binning analysis~\cite{ALPSCore}), ensure that we equilibrate the system for at least $10$ times the largest autocorrelation time in the system, and take $5000$ independent samples per temperature value for each observable.
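The binning analysis used for this check can be sketched as follows (a minimal stand-alone version for illustration; production runs use the ALPSCore implementation cited above):

```python
import numpy as np

def binning_errors(samples):
    """Standard error of the mean at successive binning levels.
    For correlated data the error grows with the level and plateaus
    once the bin size exceeds the autocorrelation time; comparing the
    plateau to the level-0 error estimates the autocorrelation time."""
    x = np.asarray(samples, dtype=float)
    errors = []
    while x.size >= 4:
        errors.append(x.std(ddof=1) / np.sqrt(x.size))
        x = x[: (x.size // 2) * 2]          # drop a possible odd element
        x = 0.5 * (x[0::2] + x[1::2])       # merge neighboring bins
    return errors
```

A flat sequence of errors indicates effectively independent samples, while a rising sequence signals correlations that must be resolved before error bars are trusted.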
\subsubsection{Finite size scaling}
Due to the absence of a unique linear dimension in the compactifications of the hyperbolic plane, we perform finite size scaling as a function of the number of sites~$N$. This was initially proposed for a fully connected model~\cite{Botet1982} and has been used for hyperbolic lattices with open boundary conditions \cite{Shima2006} as well as in our study of the pure Ising model in the hyperbolic plane \cite{breuckmann2020}.
The main idea is that a quantity $A$, close to criticality, follows a scaling form
\begin{equation}
A \sim \abs{T-T_c}^a \, F\left(N/N_c\right)
\end{equation}
with a correlation number $N_c$.
Assuming that a corresponding system of finite dimension $d = d_c$, where $d_c$ is the upper critical dimension, has the same scaling behavior as its hyperbolic sibling, it follows that
\begin{equation}
N_c \sim \abs{T-T_c}^{-\mu},
\end{equation}
with the critical exponent
\begin{equation}
\mu = \nu_{\rm MF} \, d_c,
\end{equation}
where $\nu_{\rm MF}$ is the mean-field value of the critical exponent of the correlation length $\xi$. For the pure Ising transition, for example, $\nu_{\rm MF} = 1/2$ and $d_c = 4$, so that $\mu = 2$.
\subsubsection{Observables}
To map out the phase diagram and compute critical exponents, we study a number of observables, all of which are related to either the magnetization
\begin{align}
m^{(\alpha)} = \frac{1}{N} \sum_{j} \sigma_j^{(\alpha)} \label{eq:mag}
\end{align}
or the Edwards-Anderson order parameter
\begin{equation}
q = \frac{1}{N} \sum_{j} \sigma_j^{(1)} \sigma_j^{(2)}.
\end{equation}
First, to determine the location of the critical point and the critical exponent of the correlation number $\mu$, we compute the Binder cumulants
\begin{align}
g &= 1 - \frac{\left[\expval{m^4}\right]}{\left[\expval{m^2}^2\right]}, \label{eq:binder}\\
g_{\rm EA} &= 1 - \frac{\left[\expval{q^4}\right]}{\left[\expval{q^2}^2\right]} \label{eq:binderEA}
\end{align}
which, for different system sizes, cross at the transition to a magnetized and a spin glass phase, respectively. The best estimates for the transition temperature $T_c$ and the exponent $\mu$ are obtained by performing a data collapse, using the fact that close to the transition the respective cumulant is given by
\begin{equation}
g = G\left(N^{1/\mu}(T-T_c)\right),
\end{equation}
with some universal scaling function $G$.
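As a concrete illustration of how a Binder cumulant is estimated from simulation data, the following sketch approximates thermal expectation values by sample means over measurements and the disorder average by a mean over realizations (the function and array names are ours):

```python
import numpy as np

def binder_cumulant(m):
    """m: array of shape (n_disorder, n_measurements) holding magnetization
    (or overlap q) measurements.  Thermal averages <.> are taken over axis 1,
    the disorder average [.] over axis 0, with the square of <m^2> placed
    inside the disorder average."""
    m2 = (m ** 2).mean(axis=1)              # <m^2> for each realization
    m4 = (m ** 4).mean(axis=1)              # <m^4> for each realization
    return 1.0 - m4.mean() / (m2 ** 2).mean()
```

For a perfectly ordered sample ($m \equiv 1$) this gives $g = 0$, while uncorrelated Gaussian fluctuations give $g = -2$ with this normalization; the crossing of curves for different $N$ locates the transition.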
For both order parameters, we also compute the corresponding susceptibilities
\begin{align}
\chi &= \beta N\left( \left[\expval{m^2}\right] - \left[\expval{m}^2\right] \right), \label{eq:sus}\\
\chi_{\rm EA} &= \beta N\left( \left[\expval{q^2}\right] - \left[\expval{q}^2\right] \right). \label{eq:susEA}
\end{align}
Again, the best estimates for $T_c$, $\gamma$ and $\mu$ are obtained by performing a data collapse, since close to the transition the susceptibility is given by
\begin{equation}
\chi = N^{\gamma/\mu} S\left(N^{1/\mu}(T-T_c)\right),
\end{equation}
with some universal scaling function $S$.
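The data-collapse procedure used for both the cumulants and the susceptibilities can be sketched as follows: rescale the temperature axis for each system size, merge all curves, and quantify the collapse quality by the scatter around a common master curve; minimizing this residual over $(T_c, \mu)$ yields the quoted estimates. The toy version below uses a simple moving average as the master curve, which is an assumption of ours, not the refined procedure of the paper.

```python
import numpy as np

def collapse_residual(data, Tc, mu, k=5):
    """data: list of (N, T, g) with a temperature array T and a cumulant
    array g for each system size N.  Returns the mean squared deviation of
    the rescaled data g vs. x = N^{1/mu} (T - Tc) from a moving-average
    master curve; a good collapse gives a small residual."""
    x = np.concatenate([N ** (1.0 / mu) * (T - Tc) for N, T, _ in data])
    y = np.concatenate([g for _, _, g in data])
    y = y[np.argsort(x)]                          # order points along x
    master = np.convolve(y, np.ones(k) / k, mode="same")
    return np.mean((y[k:-k] - master[k:-k]) ** 2)  # trim edge artifacts
```

In practice one minimizes this residual over $T_c$ and $\mu$; the same construction with $\chi / N^{\gamma/\mu}$ on the vertical axis also yields $\gamma$.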
\section{Results for the RBIM\label{sec:rbim}}
\subsection{Phase diagram on the \{5, 5\} lattice}
To study general features of the phase diagram of the RBIM in hyperbolic space as well as to assess the reliability of the high-temperature series expansion (HTSE) in the presence of disorder, we first map out the phase diagram of the model on the $\{5, 5\}$ lattice in detail, using both HTSE as well as Monte-Carlo simulations.
\begin{figure}
\centering
\includegraphics{phases_55.pdf}
\caption{Phase diagram on the $\{5, 5\}$ lattice as a function of temperature $T$ and disorder strength $p$. We show both the magnetization $m$ as well as the Edwards-Anderson order parameter $q_{\rm EA}$ (inset) obtained from Monte-Carlo (MC) simulations of a $N=1920$ system. We superimpose this with the phase boundaries obtained from the high-temperature series expansion (HTSE) and MC (see main text for details).}
\label{fig:phases55}
\end{figure}
The phase diagram of the $\{5, 5\}$ lattice obtained from HTSE and MC simulations is shown in \autoref{fig:phases55}. Compared to the RBIM on the Euclidean square ($\{4, 4\}$) lattice, we find a much larger ferromagnetic phase and an extended spin glass phase. In contrast to the Bethe lattice, here we do not find evidence for a magnetized spin glass phase, although our low-temperature data are not good enough to rule out a very small extent.
Turning to our results in more detail, in \autoref{fig:phases55} we show both the magnetization $m$ and the Edwards-Anderson order parameter $q$ (in the inset), as obtained from a MC simulation with system size $N = 1920$. While the magnetization is nonzero only in the ferromagnetic phase, the EA order parameter is nonzero in both the ferromagnet and the spin glass.
We superimpose these plots with the critical points obtained using finite-size scaling of the MC data (open circles) and with the critical lines obtained from HTSE of the (EA-) susceptibility (solid lines). With both methods, we can reliably distinguish the transition from the paramagnet to the ferromagnetic phase from that to the spin glass phase. In the finite-size analysis of the MC data, a transition to the ferromagnetic phase is signaled by a crossing of both the Binder cumulant of the magnetization, $g$ [\autoref{eq:binder}], and the Binder cumulant of the Edwards-Anderson order parameter, $g_{\rm EA}$ [\autoref{eq:binderEA}]. In contrast, at the transition to a spin glass phase, only $g_{\rm EA}$ shows a crossing while $g$ does not, since the magnetization $m$ vanishes in the spin glass. Finite-size scaling along the Nishimori line indicates a transition at $p_{\rm N} = 0.247 \pm 0.02$, and finite-size analysis as a function of temperature at constant disorder shows a transition into a ferromagnet for $p \lessapprox p_{\rm N}$ and a transition into a spin glass for $p \gtrapprox p_{\rm N}$, making the Nishimori point the multicritical point.
This result is corroborated by the HTSE analysis. Here, a transition to the ferromagnet (spin glass) is signaled by a divergence of the ferromagnetic (EA) susceptibility $\chi_{(\rm EA)}$. Note that since the non-divergent susceptibility at both transitions typically also has a weak singularity (a cusp), series analysis normally predicts a divergence for both susceptibilities, but at different critical temperatures. In practice, we distinguish the two transitions by which susceptibility is predicted to diverge at the larger temperature. Along the Nishimori line, that is $\alpha = 1$ in \autoref{eqn:sus_series}, the two susceptibilities are equal and HTSE yields a critical point $w_c = 0.256456 \pm \num{8.6e-6}$, which corresponds to $p_{\rm N} = 0.246793 \pm \num{4.2e-6}$. For $\alpha < 1$ we find a transition to a ferromagnetic phase, while for $\alpha > 1$ we find a transition into a spin glass phase, again suggesting that the Nishimori point is indeed the multicritical point of the model.
\subsection{Phase boundaries for different tilings: coordination vs curvature}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{phases_curvature.pdf}
\caption{Critical temperature $T_c$ obtained from high-temperature expansion, for different tilings of the hyperbolic plane. The inset shows $v_c = \tanh(J/T_c)$ as a function of curvature $\kappa$ for the pure model ($p=0$), along the Nishimori line ($(1-p)/p = e^{-\beta J}$) and for the spin glass boundary ($p=1/2$).}
\label{fig:phases_curv}
\end{figure}
We now use the high-temperature expansion to study how the paramagnet-ferromagnet and paramagnet-spin-glass phase boundaries vary for different tilings $\{r, s\}$. For low disorder, the critical temperature is mostly controlled by the coordination number $s$ and for $p=0$ even agrees quantitatively with that of the Bethe lattice with the same coordination \cite{breuckmann2020}.
Qualitatively, this behaviour can be understood by considering that the transition into the ferromagnet at low disorder is driven by a competition between the entropy of the paramagnet and the internal energy of the ferromagnetic state
\begin{equation}
E_{\rm FM} = \frac{sN}{2}\left[J_{ij}\right],
\end{equation}
which is proportional to the coordination number $s$. This means that with larger~$s$, the ferromagnet becomes more favorable at larger temperatures.
As disorder is increased, however, $[J_{ij}]$ increases (approaching zero from a negative value), and with it the importance of~$s$ as a control parameter for the transition temperature diminishes.
Finally, $[J_{ij}] \to 0$ as $p\to \frac{1}{2}$ and the critical temperature becomes a monotonic function of the curvature $\kappa$, as seen in the inset of \autoref{fig:phases_curv}.
\subsection{Critical Behaviour}
\begin{table}
\centering
\begin{tabular}{c|c|c|c}
& $p=0$ & Nishimori Line & $p=1/2$\\\hline
$\mu$ & 2 & $3.0 \pm 0.1$ & $2.0 \pm 0.1$ \\
$\gamma$ & $1.000001 \pm 0.000005$ & $1.0003 \pm 0.0008$ & -\\
$\gamma_{\rm EA}$ & - & $1.0003 \pm 0.0008$ & $1.0011 \pm 0.0025$ \\
$\beta$ & $0.46 \pm 0.05$ & $1.00 \pm 0.05$ & -
\end{tabular}
\caption{Critical exponents on the $\{5, 5\}$ lattice along different scaling axes. We estimate the correlation volume exponent $\mu$ from finite-size analysis of the Binder parameter $g$. For the susceptibility exponents $\gamma$ and $\gamma_{\rm EA}$, the best estimates are obtained via HTSE analysis.}
\label{tab:exponents}
\end{table}
\begin{figure}
\centering
\includegraphics{collapse_nishimori}
\caption{Finite size scaling collapse of the Binder cumulant~$g$ [\autoref{eq:binder}], the susceptibility [\autoref{eq:sus}], and the magnetization $m$ [\autoref{eq:mag}] of the random bond Ising model on the $\{5, 5\}$ lattice along the Nishimori line [\autoref{eq:Nishimori}].}
\label{fig:collapse-N}
\end{figure}
In \autoref{tab:exponents}, we show our best estimates of the critical exponents for the $\{5, 5\}$ lattice along different scaling axes (with the $p=0$ results taken from Ref.~\onlinecite{breuckmann2020}).
The best results are typically obtained from the HTSE, except for the exponent $\mu$ of the correlation volume, which we compute by finite-size analysis of the Monte-Carlo data.
In all cases where results from both methods are available, they are compatible within errors.
The best finite-size scaling collapse of the Monte Carlo data along the Nishimori line is shown in \autoref{fig:collapse-N}.
The best collapse is obtained for slightly different values of $p_c$ for the susceptibility and the Binder cumulant, which we attribute to finite-size effects.
The results in \autoref{tab:exponents} are all compatible with the mean-field expectation, except for the exponents $\mu = 3$ and $\beta = 1$ observed along the Nishimori line. This is because, as established in \autoref{sec:rbim}, the Nishimori line passes through the multicritical point, which generally shows distinct critical behavior even in (effectively) infinite dimensions.
Note that the exponents are nevertheless consistent with the hyperscaling relation
\begin{equation}
\mu = 2\beta + \gamma.
\end{equation}
Note also that the specific heat does not develop a power-law singularity at any of the transitions considered, and hence we do not report a critical exponent $\alpha$.
\section{Results for the dual-RBIM\label{sec:dual-rbim}}
In this section, we present results of Monte-Carlo simulations of the dual random-bond Ising model (dual-RBIM). We present strong evidence that this model exhibits a strongly first-order transition as a result of its cocycle degrees of freedom, and numerically verify that for $p=0$ the critical temperature of this transition is indeed Kramers--Wannier dual to the critical temperature of the Ising model on the dual lattice.
\subsection{Dual Ising model}
\begin{figure}
\centering
\includegraphics{monte_carlo_dual_p=0}
\caption{Evidence for a strongly first-order phase transition of the pure dual Ising model (that is \autoref{eq:H-dual-ising} with $p=0$) on the $\{5, 5\}$ lattice. We show the vertex magnetization $m = \expval{\sigma_j}$, its Binder cumulant $g$ as well as the loop magnetization $m_{\eta} = \expval{\eta_\ell}$ and its Binder cumulant $g_{\eta}$ as a function of temperature.}
\label{fig:dual-mc}
\end{figure}
In \autoref{fig:dual-mc}, we show results from Monte-Carlo simulations of the dual Ising model, that is, \autoref{eq:H-dual-ising} with $p=0$, on the $\{5, 5\}$ lattice. We show the average vertex magnetization $m = \expval{\sigma_j}$ and \emph{loop} magnetization $m_{\eta} = \expval{\eta_\ell}$, as well as the Binder cumulants $g$ and $g_\eta$ of the vertex and loop magnetization, respectively. The fact that the magnetizations for different system sizes cross at a single point, together with the pronounced dip of the Binder cumulants just before the transition, is strong evidence that both quantities undergo a strongly first-order transition.
Since the $\{5, 5\}$ lattice is self-dual and the Kramers--Wannier duality [\autoref{eq:krammers-wannier-T}] is exact at $p=0$, we expect the transition to occur at a critical temperature dual to the critical point of the Ising model. Substituting $T_c = 3.93$ \cite{breuckmann2020} into \autoref{eq:krammers-wannier-T} yields $T_{\rm c}^* \approx 1.44$, which we indicate in \autoref{fig:dual-mc} by a vertical dashed line and which is in good agreement with the position of the crossings of both Binder cumulants and magnetizations.
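The quoted numerical value can be reproduced assuming the standard form of the Kramers--Wannier relation, $\sinh(2J/T)\,\sinh(2J/T^*) = 1$ (the precise form of \autoref{eq:krammers-wannier-T} is not reproduced here, so this form is an assumption on our part):

```python
import math

def dual_temperature(T, J=1.0):
    """Kramers-Wannier dual temperature from sinh(2J/T) sinh(2J/T*) = 1."""
    return 2.0 * J / math.asinh(1.0 / math.sinh(2.0 * J / T))
```

With this assumption, `dual_temperature(3.93)` evaluates to approximately 1.44, and the map is an involution: applying it twice returns the original temperature.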
\begin{figure}
\centering
\includegraphics{wang_landau_p=0}
\caption{Free energy difference [\autoref{eq:fdiff}] between the Ising model and its dual on the $\{5, 5\}$ lattice.}
\label{fig:dual-wl-pure}
\end{figure}
To corroborate the above findings, we also implement the Wang-Landau algorithm \cite{wang_landau2001, wang_landau2001b, schulz2003, belardinelli2007} and compute the free energy difference between the Ising model and its dual, that is
\begin{equation}
\label{eq:fdiff}
\Delta F(T) = \log[Z_{\rm tot}(T)] - \log[Z_{0}(T)].
\end{equation}
Here, $Z_{\rm tot}$ is the partition function of the dual Ising model, that is, it includes a sum over all cocycle variables~$\eta_\ell$ (hence the subscript `tot'). $Z_0$ is the partition function of the Ising model on the same lattice, that is, we fix $\eta_\ell = 1$ for all $\ell$.
Because of the latter relation between $Z_{\rm tot}$ and $Z_0$, we have $\Delta F \geq 0$ for all~$T$. In the ordered phase of the dual Ising model, the difference vanishes, since the sum over cocycle variables does not contribute. This is shown in \autoref{fig:dual-wl-pure}. The quantity $\Delta F$ has the additional advantage of indicating both phase transitions in a single observable, since the free energy of the Ising model shows a visible kink at $T_{\rm c}$. Both critical temperatures are again indicated in the figure by vertical dashed lines.
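The qualitative behavior of $\Delta F$ can be illustrated with a toy model that we construct solely for this purpose (not the hyperbolic model itself): an Ising ring with a single cocycle-like variable $\eta$ that twists one bond. Since $Z_{\rm tot} = Z(\eta{=}1) + Z(\eta{=}{-1}) \geq Z_0 = Z(\eta{=}1)$, we get $\Delta F \geq 0$; at high temperature both sectors contribute equally and $\Delta F \to \log 2$, while in the ordered regime the twisted sector is suppressed and $\Delta F \to 0$.

```python
import itertools, math

def log_Z(n, beta, eta):
    """Partition function of an Ising ring of n spins (J = 1) in which
    eta = -1 flips the sign of the bond between spins 0 and 1."""
    Z = 0.0
    for s in itertools.product((-1, 1), repeat=n):
        E = -sum((eta if i == 0 else 1.0) * s[i] * s[(i + 1) % n]
                 for i in range(n))
        Z += math.exp(-beta * E)
    return math.log(Z)

def delta_F(n, beta):
    """Free energy difference between the eta-summed and eta-fixed models."""
    z0 = log_Z(n, beta, +1)
    z_tot = math.log(math.exp(z0) + math.exp(log_Z(n, beta, -1)))
    return z_tot - z0
```

Since the ring is one-dimensional, the toy model has no true phase transition; it only demonstrates the two limiting behaviors of $\Delta F$ used in the analysis.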
\subsection{Dual random bond Ising model}
\begin{figure}
\centering
\includegraphics{wang_landau_Nishimori}
\caption{Free energy difference [\autoref{eq:fdiff}] between the random bond Ising model (RBIM) and the dual-RBIM on the $\{5, 5\}$ lattice along the Nishimori line. The shaded region indicates the location of the transition point $p_{\rm N}^* = 0.0228 \pm 0.0010$. The inset shows the best data collapse, assuming the same correlation exponent $\mu$ as in the RBIM.}
\label{fig:dual-wl-Nishimori}
\end{figure}
In the case of the random model, the dual-RBIM is not exactly dual to the RBIM, and hence we have no a-priori guess for the location of the critical point. Additionally, as is already the case in the pure model, the strongly first-order nature of the transition complicates its numerical investigation. We find that single-spin-flip Monte-Carlo is unreliable even for small system sizes. However, the Wang-Landau algorithm still converges, and hence we can infer the location of the critical point from the free energy difference [\autoref{eq:fdiff}], which vanishes with system size for $T < T_c^*$ and diverges with system size for $T > T_c^*$.
In \autoref{fig:dual-wl-Nishimori}, we show $\Delta F$ as a function of disorder strength~$p$ along the Nishimori line [\autoref{eq:Nishimori}]. The data is consistent with a transition at $p_{\rm N}^* = 0.0228 \pm 0.001$, which is indicated in the figure by a shaded area. The inset shows the best data collapse assuming the same correlation exponent $\mu = 3$ as in the RBIM.
Substituting the value of $p_{\rm N} = 0.246793 \pm \num{4.2e-6}$ obtained from the high-temperature expansion of the RBIM (see \autoref{sec:rbim} for details) into the duality relation conjectured by Nishimori (\autoref{eq:duality-conjecture}) and solving for $p_{\rm N}^*$ yields a value of $p_{\rm N}^* = 0.029891 \pm \num{2e-6}$. As observed for the RBIM on a range of Euclidean lattice geometries \cite{takeda2005exact}, this is somewhat close to our numerical result but not compatible within error bars.
\section{Quantum Error Correction\label{sec:qec}}
Quantum error correcting codes are used in quantum computation to reduce the effects of decoherence.
Certain infinite families of codes, together with associated quantum error correction protocols, can be shown to have a \emph{threshold}.
A threshold is a critical value of a noise parameter, below which the error correction protocol succeeds with probability approaching 1 with increasing code sizes.
It was argued in \cite{dennis2002topological,wang2003confinement} that the threshold of the toric code corresponds to the phase transition point along the Nishimori line of the RBIM on the square-grid $\{4,4\}$.
In~\cite{kubica2018three} it was proved that this is indeed the case for quantum codes which encode a finite number of qubits.
In~\cite[Section~IV-C]{chubb2021statistical} it was mentioned that the statistical mechanical models associated to quantum codes which encode an extensive number of qubits may exhibit multiple phase transitions.
This behaviour was studied in \cite{kovalev2018numerical}.
The quantum codes associated to the hyperbolic RBIM are called \emph{hyperbolic surface codes}~\cite{breuckmann2016constructions,breuckmann2017hyperbolic,conrad2018small}.
These codes do encode an extensive number of qubits, so that the proofs of \cite{kubica2018three,chubb2021statistical} do not apply to them.
In \cite{jiang2019duality} the authors consider the hyperbolic RBIM and give a condition sufficient for error correction to be possible, which is equivalent to $\Delta F \to 0$, where $\Delta F$ is the free energy difference of the RBIM and the dual-RBIM, see \autoref{eq:fdiff}.
Hence, the phase transition of what we call the ``dual-RBIM'' along the Nishimori line corresponds exactly to the maximum-likelihood decoding threshold of the hyperbolic surface code under independent bit- and phase-flip noise,
\begin{equation}
p_{\rm th, ML} = p_{\rm N}^* = 0.0228 \pm 0.0010.
\end{equation}
This can be compared to the threshold when using a minimum-weight perfect-matching (MWPM) decoder, which is $p_{\rm th, MWPM} \approx 0.0175$ \cite{breuckmann2017homological}.
Using an optimal decoder rather than MWPM hence increases the threshold by about 30\%.
\section{Conclusion\label{sec:conclusion}}
To summarize, we have presented an in-depth study of the random bond Ising model (RBIM) on the hyperbolic plane, as well as of the model that is its Kramers--Wannier dual in the absence of disorder. Resolving a conundrum raised in earlier work \cite{Rietman1992, breuckmann2020}, we showed that this ``dual-RBIM'' is different from the RBIM even on self-dual lattices, due to the extensive number of nontrivial cocycles of hyperbolic lattices. Combining high-temperature expansion and Monte-Carlo techniques, we mapped out the phase diagrams of both models, establishing the existence of a spin-glass phase with the Nishimori point as the multicritical point. Studying the critical properties of the high-temperature transitions, we showed that, with the exception of the multicritical point, all transitions are mean-field in nature.
We verified the duality of both models explicitly in the disorder-free case and showed that the extended duality conjectured by Takeda, Sasamoto and Nishimori \cite{takeda2005exact} is fulfilled only approximately.
Finally, we commented on the relation of the above findings to the decoding of hyperbolic surface codes and argued that the critical disorder strength along the Nishimori line of what we call the dual-RBIM corresponds to the maximum-likelihood decoding threshold of hyperbolic surface codes under independent bit- and phase-flip noise. This generalizes the statistical mechanics mappings of the decoding of zero-rate quantum codes~\cite{dennis2002topological, chubb2021statistical, kubica2018three} to quantum codes with \emph{finite} rate.
This work opens up multiple interesting areas for future work. For example, a detailed investigation of the nature of the spin-glass phase in hyperbolic space, and in particular of its fate in the dual-RBIM, was beyond the scope of the current paper.
Moreover, a detailed investigation of the phase space structure of the dual model could yield valuable insights into the decoding of finite-rate quantum codes.
\subsection*{Acknowledgements}
We thank Leonid Pryadko, Aleksander Kubica, Sounak Biswas, Rajiv Singh and Roderich Moessner for helpful discussions, and also Philippe Suchsland and Dmitry L. Kovrizhin and Peng Rao for helpful comments on the manuscript.
BP acknowledges support by the Deutsche Forschungsgemeinschaft under grants SFB 1143 (project-id 247310070) and the cluster of excellence ct.qmat (EXC 2147, project-id 390858490).
NPB acknowledges support through the EPSRC Prosperity Partnership in Quantum Software for Simulation and Modelling (EP/S005021/1).
arXiv:0801.4915
\section{Introduction}
\label{sect1}
Recent observations with the spectropolarimeter of the
Solar Optical Telescope (SOT) onboard the Hinode space
observatory \citep{kosugi+al07} indicate that the quiet internetwork
region (the inner regions of supergranular cells of the quiet Sun)
harbors a photospheric magnetic field whose mean flux density of the
horizontal component considerably surpasses that of the vertical
component \citep{lites+al07,orozco+al07,lites+al08}. According to
these papers, the vertical fields are concentrated in the intergranular
lanes, whereas the stronger, horizontal fields occur most commonly at
the edges of the bright granules, aside from the vertical fields.
In a gravitationally stratified atmosphere, vertical magnetic flux
concentrations naturally develop a horizontal component as they expand
with height in a funnel-like manner. Indeed, \cite{rezaei+al07} found
funnel shaped magnetic elements in the internetwork from the same
Hinode data. However, the newly discovered
horizontal fields also occur apart from
vertical flux concentrations and seem to cover a larger
surface fraction than the vertical fields.
Regarding numerical simulations, \citet{ugd+al98} note:
``we find in all simulations also strong horizontal fields above
convective upflows'', and \citet{schaffenberger+al05,schaffenberger+al06}
find frequent horizontal fields in their three-dimensional simulations, which
they describe as ``small-scale canopies''.
Also the 3-D simulations of \citet{abbett07} display
``horizontally directed ribbons of magnetic flux that permeate the model
chromosphere'', not unlike the figures shown by \citet{schaffenberger+al06}.
More recently, \citet{schuessler+voegler08} find in a three-dimensional
surface-dynamo simulation ``a clear dominance of the horizontal field in
the height range where the spectral lines used for the Hinode observations
are formed''.
Here we report on the analysis of existing and new
three-dimensional magnetohydrodynamic computer
simulations of the internetwork magnetic field, aiming at the
following questions: Does a realistic simulation of
the surface layers of the Sun intrinsically produce horizontal
magnetic fields and can their mean flux density indeed surpass
the mean flux density of the vertical field component? What is
the polarimetric signal of this field and how does it compare
to measurements with Hinode?
In the following we explain in Sect.~2 the details of two simulations
and present results
that answer the first two of the above questions in Sect.~3. In Sect.~4,
we synthesize Stokes profiles
and compare them to measurements from Hinode. Conclusions
follow in Sect.~5.
\section{Two simulation runs}
\label{sect2}
We have carried out two runs, run v10 and run h20, which significantly
differ in their initial and boundary conditions for the magnetic field.
This enables us to judge the robustness of our results with respect to
magnetic boundary conditions. Both runs are carried out within a common
three-dimensional computational domain extending from
1400 km below the mean surface of optical depth $\tau_{\rm c } =1$ to
1400~km above it. With this choice we ensure that the top boundary
is located sufficiently high so as not to unduly affect the atmospheric
layers that are the focus of the present investigation, in particular
the formation layers of the spectral lines used in polarimetric
measurements with Hinode. The horizontal dimensions
are $4\,800\;\mbox{km}\times 4\,800\;\mbox{km}$, corresponding to
$6.6\arcsec\times 6.6\arcsec$ on the solar disk.
With $120^3$ grid cells, the spatial
resolution in the horizontal direction is 40~km, while in the vertical
direction it is 20~km throughout the photosphere and chromosphere.
Both runs have periodic lateral boundary conditions,
whereas the bottom boundary is open in the sense that the fluid can
freely flow in and out of the computational domain subject
to vanishing total mass flux. The upper boundary is ``closed'', i.e.,
a reflective boundary is applied to the velocity.
\begin{figure}
\centering
\includegraphics[width=0.33\textwidth]{steiner+al_f1.png}
\caption{Horizontal (solid) and vertical (dashed) absolute field
strength as functions of height for run h20 (heavy curves)
and for run v10 (light curves).
\label{fig1}}
\end{figure}
Run v10 starts with a homogeneous,
vertical, unipolar magnetic field of a strength of 1\,mT superposed
on a previously computed, relaxed model of thermal convection.
After relaxation, fields of mixed polarity occur throughout the photosphere
with an area imbalance of typically 3:1 for fields stronger than 1\,mT.
The magnetic field in run v10 is constrained to have vanishing
horizontal components at the top and bottom boundary but lines of force
can freely move in the horizontal direction. Although this condition is quite
stringent for the magnetic field near the top boundary, it still allows the field
to freely expand with height through the photospheric layers.
The mean vertical net magnetic flux density remains 1\,mT throughout
the simulation. These initial and boundary conditions might
actually be more appropriate for the simulation of network
magnetic fields because of the preference for one polarity and
the vertical direction.
Run h20 starts without a magnetic field but upwellings that enter the
simulation domain across the bottom boundary area carry horizontal
magnetic field of a uniform strength of 2\,mT and of uniform direction
parallel to the $x$-axis with them. Outflowing material carries whatever
magnetic field it happens to have. These boundary conditions are the
same as used by \cite{stein+nordlund06}. They are appropriate when
flux ascends from deeper layers of the convection zone, carried by
convective upflows. Starting from a relaxed model of thermal convection,
magnetic field steadily spreads into the convective layer
of the simulation domain and after 600~s slowly begins to
expand throughout the photosphere, growing in mean absolute
strength. Reflective conditions apply to the field at the top boundary,
resulting in $\mathrm{d}B_{x,y}/\mathrm{d}z = 0$, $B_{z} = 0$.
The magnetic energy in the box
steadily increases because convective plasma motion strengthens
the magnetic field. After a time of about 2.45~h an equilibrium
value of the magnetic energy seems to establish itself when
the mean absolute vertical field strength near the
surface of optical depth
$\tau_{\rm c}\! =\! 1$ is approximately 1\,mT.
Convective plumes occasionally pump magnetic field downward
out of the domain, so that the mean Poynting flux at the lower
boundary is negative, i.e., directed out of the box.
Runs v10 and h20 have been carried out with an extended version of
the computer code
CO$^\mathsf{5}$BOLD\footnote{www.astro.uu.se/\~{}bf/co5bold\_main.html}
that includes magnetic
fields. The code solves the coupled system of the equations
of compressible ideal magnetohydrodynamics in an external gravity
field taking non-local radiative transfer into account. For the present
runs, frequency-independent opacities are used, which are also used
for computing the continuum optical depth $\tau_{\rm c}$.
The multidimensional problem is reduced to a sequence of 1-D sweeps
by dimensional splitting. Each of these 1-D problems is solved with a
Godunov-type finite-volume scheme using an approximate Riemann solver
modified for a realistic equation of state and gravity. Details of
the method can be found in
\citet{schaffenberger+al05,schaffenberger+al06}.
\section{Structure and development of the horizontal magnetic field}
\label{sect3}
Figure~\ref{fig1} shows the horizontally and temporally averaged absolute
vertical and horizontal magnetic field strength as functions of height
for both runs. In run h20, the mean horizontal field strength,
$\langle\sqrt{B_x^2 + B_y^2}\,\rangle$, is larger than the mean strength
of the vertical component, $\langle |B_z|\rangle$, throughout the
photosphere and the lower chromosphere: in run v10 this is the case in
the height range between 250~km and 850~km.
The mean horizontal field strength shows a local maximum close to the
classical temperature minimum at a height of around 500~km, where, in
the case of run h20, it is 5.6 times stronger than the mean vertical
field. The horizontal fields also dominate in
the upper photosphere of run v10, for which one might expect the
initial state and boundary condition to favor the development of vertical
fields rather than horizontal ones. There, the ratio
$\langle B_{\mathrm{hor}}\rangle /\langle B_{\mathrm{ver}}\rangle$
at the location of maximum $\langle B_{\mathrm{hor}}\rangle$ is 2.5.
For the second half of the h20 time series and
in a horizontal section at the height of mean optical depth $\tau_{\rm c} = 1$,
14.2\% of the total area is covered by horizontal fields
stronger than 5\,mT, while this fraction is 5.1\% for vertical
fields surpassing 5\,mT. At a height of 200~km in
the photosphere and a threshold of 2\,mT, the average area fractions
are 25.8\% and 6.2\%, respectively. Thus, fields with a horizontal component
larger than a given limit in strength occupy a significantly larger
surface area than fields with a vertical component exceeding this limit.
This is a second reason (after inherent strength) why the measured
mean flux density of the horizontal field component may exceed
that of the vertical one.
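The area-fraction statistic quoted above amounts to simple thresholding of the field components on a surface of constant optical depth. The following sketch illustrates this; the random field maps are synthetic stand-ins, not the actual run h20 data:

```python
# Illustrative sketch (not the actual run h20 data): area fraction of a
# horizontal section covered by field stronger than a threshold, the statistic
# quoted in the text (14.2% horizontal vs. 5.1% vertical above 5 mT).
import numpy as np

def area_fraction(field_strength, threshold):
    """Fraction of pixels where |field| exceeds the threshold."""
    return float(np.mean(np.abs(field_strength) > threshold))

# Synthetic stand-in field maps in mT on a surface of constant optical depth.
rng = np.random.default_rng(0)
b_hor = rng.normal(0.0, 4.0, size=(256, 256))  # horizontal component
b_ver = rng.normal(0.0, 2.0, size=(256, 256))  # vertical component

f_hor = area_fraction(b_hor, 5.0)
f_ver = area_fraction(b_ver, 5.0)
print(f"horizontal: {f_hor:.1%}, vertical: {f_ver:.1%}")
```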
\begin{figure*}
\centering
\includegraphics[width=0.63\textwidth]{steiner+al_f2.jpg}
\caption{Left: Horizontal field strength on the surface of
continuum optical depth $\tau_{\rm c} = 0.3$. The black
curves refer to contours of 2\,mT vertical field strength, where
solid and dashed contours have opposite polarity. Right: Map
of the continuum intensity at 630~nm.
\label{fig3}}
\end{figure*}
Figure~\ref{fig3} (left) shows for a typical time instant in the second half
of run h20 the horizontal field strength (colors) on the surface of
continuum optical depth $\tau_{\rm c} = 0.3$. Superimposed on the colors
are contours of 2\,mT of the vertical field strength, where solid and dashed
contours indicate opposite polarity.
To the right and the lower right of the image center, $(x,y) = (3.1\,,\,2.5)$ and
$ (3.7\,,\,1.6)$, as well as in the middle close to the front side, $ (2.4\,,\,0.3)$,
we can see a frequently occurring event consisting of a ``ring'' of
horizontal field. Such an event first appears as a patch filled with horizontal
field, as at the left front side, $(x,y) = (0.8\,,\,0.3)$, and subsequently expands to
become a ring. The ring can also be seen in the vertical field
component, where opposite halves of it have opposite polarity as is visible from
the indicated 2\,mT contours. This pattern arises from horizontal magnetic
field that is transported to the surface by vigorous upflows. The field is
anchored in the downdrafts at the edges of a granule or in the
weaker upwellings of a granule interior, where the field is most
concentrated; hence, not only the vertical but also the horizontal
field is strongest there. As can be seen when
comparing to the continuum intensity image, Figure~\ref{fig3} (right),
a ring often does not enclose a full granule but only a part of
it so that part of the vertical magnetic field occurs within the granule.
The horizontal field between the crescents of vertical fields of opposite
polarity covers part of the granule like a cap forming a small-scale canopy
\citep{schaffenberger+al05,schaffenberger+al06}. Events of related
topography were observed by \cite{centeno+al07} and \cite{ishikawa+al08}.
As this horizontal field is pushed into the stable layers of the
upper photosphere by the overshooting convection, it stops rising
for lack of buoyancy; nor are there vigorous downflows
that would pump it back down again.
Hence, convective flow and its overshooting act
to expel magnetic flux from the granule interior to its boundaries,
i.e., not only to the intergranular lanes but also to the upper layers of the
photosphere.
The surface of $\tau_{\rm c} \approx 1$, which separates the convective
regime from the subadiabatically stratified photosphere, also acts as
a separatrix for the vertically directed Poynting flux, $S_z$, where
\begin{equation}
\label{eqn_poynting}
\mbox{\boldmath$S$} = \displaystyle\frac{1}{4\pi}\mbox{\boldmath$(B\times
(v\times B))$} .
\end{equation}
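For completeness, Eq.~(\ref{eqn_poynting}) follows from the electromagnetic Poynting vector together with the ideal-MHD electric field; this standard derivation (in Gaussian units) is added here for the reader's convenience:
\begin{displaymath}
\mbox{\boldmath$S$} = \frac{c}{4\pi}\,\mbox{\boldmath$E\times B$},\qquad
\mbox{\boldmath$E$} = -\frac{1}{c}\,\mbox{\boldmath$v\times B$}
\quad\Longrightarrow\quad
\mbox{\boldmath$S$} = \frac{1}{4\pi}\,\mbox{\boldmath$B\times(v\times B)$}.
\end{displaymath}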
This can be seen in Fig.~\ref{fig4} (top), which displays the horizontally
averaged $S_z$ as a function of height in the atmosphere and time
for the second half of run h20. The conspicuous dark streaks in the
lower part of the diagram mark events of downflow plumes that carry
horizontal magnetic field with them, giving rise to $\langle S_z\rangle < 0$.
The situation differs in the photosphere, where $\langle S_z\rangle$
stays mainly positive (bright), owing to the transport of horizontal fields in
the upward direction. These fields are deposited in, and give rise to, the distinct
layer of enhanced horizontal fields in the upper photosphere, clearly visible
in the middle panel of Fig.~\ref{fig4}. Here again,
the transport of horizontal fields both downwards in the
convection zone and upwards in the photosphere is visible.
The bottom panel of Fig.~\ref{fig4} shows the mean vertical field strength that
monotonically decreases with height at all times.
\section{Comparison with results from the Hinode space observatory}
\label{sect4}
For a reality check we compare the
Zeeman measurements from the Hinode spectropolarimeter with the
synthesized Stokes profiles of both 630~nm \ion{Fe}{1} spectral lines of
the two simulation runs. Profiles were computed with the radiative transfer
code SIR~\citep{sir92, sir_luis} along vertical lines of sight (disk center) with
a spectral sampling of 2\,pm.
We then applied a point spread function (PSF) to these `virtual observations':
the theoretical, diffraction-limited PSF of SOT
as well as two other non-ideal PSFs that take additional stray-light
into account, all evaluated at $\lambda = 630$~nm (see \cite{wedemeyer08}
for details). The following results refer to a PSF obtained by convolution of
the ideal PSF with a Voigt function with $\gamma=5.7\arcsec\!\times 10^{-3}$
and $\sigma = 8\arcsec\!\times 10^{-3}$, derived from eclipse data.
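The PSF construction just described can be sketched numerically. Only the Voigt parameters ($\sigma = 8\times 10^{-3}$ arcsec, $\gamma = 5.7\times 10^{-3}$ arcsec) are taken from the text; the diffraction-limited SOT PSF is replaced by a Gaussian stand-in, which is an assumption for illustration only:

```python
# Sketch of the PSF construction described above. Only the Voigt parameters
# (sigma = 8e-3 arcsec, gamma = 5.7e-3 arcsec) are taken from the text; the
# diffraction-limited SOT PSF is replaced by a Gaussian stand-in for brevity.
import numpy as np
from scipy.special import voigt_profile

x = np.linspace(-0.1, 0.1, 1001)             # angular coordinate in arcsec
dx = x[1] - x[0]

ideal = np.exp(-0.5 * (x / 0.02) ** 2)       # stand-in for the ideal PSF
ideal /= ideal.sum() * dx                    # normalize to unit area

voigt = voigt_profile(x, 8e-3, 5.7e-3)       # Voigt kernel (sigma, gamma)
psf = np.convolve(ideal, voigt, mode="same") * dx
psf /= psf.sum() * dx                        # renormalized non-ideal PSF
```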
\begin{figure*}[t]
\centering
\includegraphics[width=0.77\textwidth]{steiner+al_f3.jpg}
\caption{Vertically directed Poynting flux, $S_z$, horizontal magnetic
flux density, $\langle B_\mathrm{h}\rangle$, and vertical absolute magnetic flux
density, $\langle |B_\mathrm{z}|\rangle$ as functions of height and time from run h20.
All quantities are averages in horizontal planes of the three-dimensional
computational box. The temporal average of $\langle S_{z}\rangle$
reaches a maximum of $7.4\times 10^2$\,Wm$^{-2}$ (at 200~km) and
a minimum of $-5.2\times 10^4$\,Wm$^{-2}$ (at $-800$~km).
\label{fig4}}
\end{figure*}
For a faithful comparison with the results of \cite{lites+al08}, we subject the
synthetic profiles to the same procedure for conversion to apparent flux
density\footnote{Apparent because finite spatial resolution may mask
the true flux density through cancellation of opposite polarization.}
as was done by these authors. Thus, we obtain
calibration curves for the conversion from the wavelength integrated polarization
signals $V_{\mathrm{tot}}$ and $Q_{\mathrm{tot}}$ to the apparent
longitudinal and transversal magnetic flux densities
$|B^{\mathrm{L}}_{\mathrm{app}}|$ and
$B^{\mathrm{T}}_{\mathrm{app}}$, respectively. Here, $Q_{\mathrm{tot}}$
is the resulting $Q$-profile after transformation to the
``preferred-frame azimuth'' in which the +$Q$-direction is parallel to the
projection of the magnetic field vector on the plane of sky, when $U\approx 0$.
Having the calibration curves, we derive spatial and temporal averages
for the transversal and longitudinal apparent magnetic flux densities,
$B^{\mathrm{T}}_{\mathrm{app}}$ and $|B^{\mathrm{L}}_{\mathrm{app}}|$
of respectively 21.5~Mx\,cm$^{-2}$ and 5.0~Mx\,cm$^{-2}$ for the second
half of run h20 and 10.4~Mx\,cm$^{-2}$ and 6.6~Mx\,cm$^{-2}$ for run v10.
Thus, the ratio
$r=\langle B^{\mathrm{T}}_{\mathrm{app}} \rangle/
\langle |B^{\mathrm{L}}_{\mathrm{app}}|\rangle = 4.3$
in case of run h20 and $1.6$ in case of run v10. \cite{lites+al08} obtain
from Hinode SP data
$\langle |B^{\mathrm{T}}_{\mathrm{app}}|\rangle = 55$~Mx\,cm$^{-2}$ and
$\langle |B^{\mathrm{L}}_{\mathrm{app}}|\rangle = 11$~Mx\,cm$^{-2}$ resulting in
$r = 5.0$.
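The quoted ratios follow directly from the averaged apparent flux densities; all input values below are taken from the text:

```python
# Worked check of the ratios quoted above; all input values are from the text.
bt_h20, bl_h20 = 21.5, 5.0   # run h20 apparent flux densities (Mx cm^-2)
bt_v10, bl_v10 = 10.4, 6.6   # run v10
bt_sp,  bl_sp  = 55.0, 11.0  # Hinode SP values of Lites et al.

r_h20 = bt_h20 / bl_h20      # -> 4.3
r_v10 = bt_v10 / bl_v10      # -> 1.58, quoted as 1.6
r_sp  = bt_sp / bl_sp        # -> 5.0
```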
While $\langle B_{\mathrm{hor}}\rangle /\langle B_{\mathrm{ver}}\rangle = 5.6$
and $2.5$ for run h20 and v10, respectively, at the location of maximum
$\langle B_{\mathrm{hor}}\rangle$, the above quoted lower ratios result because
the main contribution to the Stokes signals does not come from this
height but rather from the low photosphere, where the two components
differ less (see Fig.~\ref{fig1}).
At full spatial resolution, i.e., without application of the PSF, we obtain
from the synthesized Stokes data of run h20
$B^{\mathrm{T}} = 24.8$~Mx\,cm$^{-2}$ and
$|B^{\mathrm{L}}| = 8.8$~Mx\,cm$^{-2}$, thus
$r=2.8$.
The ratio $r$ is higher when the PSF is applied because of apparent flux
cancellation within a finite resolution element. The vertical component
is more subject to this effect than the horizontal one because of its smaller
spatial scale and higher intermittency. This indicates that the predominance
of the horizontal component decreases with increasing spatial resolution
and that spatial resolution is a fundamental parameter to take into
account when interpreting measurements of field inclinations
\citep{orozco+al07}. At full spatial resolution of the simulation, the
probability density for the field inclination on the surface
$\tau_{\rm c}=0.01$ is flat (isotropic)
in the range $\pm 50^{\circ}$ from the horizontal direction.
\section{Conclusions}
\label{sect5}
We have carried out two simulations of magnetoconvection
in the surface layers of the quiet internetwork region of the solar atmosphere.
The simulations greatly differ in their initial state and boundary conditions for
the magnetic field, but are otherwise identical; both
faithfully reproduce properties of normal granulation
(since the magnetic field is weak). The top boundary is placed in the middle
chromosphere (at a height of 1400~km) far away from the photospheric
layers.
Both simulations intrinsically produce a horizontal magnetic field throughout
the photosphere and lower chromosphere with a mean field strength that
exceeds the mean strength of the vertical field component at the same height
by up to a factor of 5.6. The strength of the horizontal field component shows
a local maximum close to the classical temperature minimum near 500~km
height (which largely escapes measurements with the
\ion{Fe}{1} 630~nm line pair).
Fields with a horizontal component exceeding a certain limit in strength
occupy a significantly larger
surface area than fields with a vertical component exceeding this limit.
This horizontal field can be considered a
consequence of the flux expulsion process \citep{galloway+weiss81}: in the
same way as magnetic flux is expelled
from the granular interior to the intergranular lanes, it also gets pushed
to the middle and upper photosphere by overshooting convection, where
it tends to form a layer of horizontal field of enhanced flux density,
reaching up into the lower chromosphere.
Below the surface of $\tau_{\rm c} \approx 0.1$,
convective plumes pump horizontal magnetic field in the downward
direction. Hence, this surface acts as a separatrix
for the Poynting flux, which is mainly directed upwards above it
and in the downward direction below it.
The response of this field in linear and circular polarization
of the two neutral iron lines at 630~nm yields a ratio
$\langle B^{\mathrm{T}}_{\mathrm{app}} \rangle/
\langle |B^{\mathrm{L}}_{\mathrm{app}}|\rangle = 4.3 $ in case of
run h20 (which, according to Sect.~2, we deem to better represent
the conditions of internetwork regions). This is close to the measurements of
\cite{lites+al08}, which indicate a factor of 5. Discrepancies may
arise from the stray light produced by the spectrograph and
polarization optics, which was not taken into account in our PSF;
the difference in mean absolute flux density (run h20 has
only about half the measured value; see Sect.~\ref{sect4});
the frequency-independent treatment of radiative transfer;
limited spatial resolution; and also natural fluctuations.
The predominance of the horizontal component may possibly
only exist on a scale comparable to or less than the spatial
resolution of SOT. At full spatial resolution of the simulation we
obtain a ratio of 2.8 instead of 4.3.
\acknowledgements
The authors thank B.W.~Lites for providing the calibration software,
R.~Hammer and M.~Sch\"ussler for detailed comments on a draft
version of this paper, and L.R.~Bellot Rubio for helping to greatly improve it.
This work was supported by the Deutsche Forschungsgemeinschaft
(SCHM 1168/8-1).
% arXiv:1812.08854
\section{Introduction}
\label{section:introduction}
The European Spallation Source (ESS)~\cite{ess} will soon commence operations
as the most powerful neutron source in the world. Highly efficient, position sensing
neutron detectors are crucial to the scientific mission of ESS. The worldwide
shortage of $^{3}$He~\cite{kouzes09,shea10,zeitelhack12} has resulted in
considerable effort being undertaken to develop new neutron-detector technologies.
One such effort is the development of \underline{S}olid-state \underline{N}eutron
\underline{D}etectors (SoNDe)~\cite{sonde,jaksch17,jaksch18} for high-flux
applications, motivated by the desire for two-dimensional position-sensitive
systems for small-angle neutron-scattering
experiments~\cite{heiderich91,kemmerling01,kemmerling04a,kemmerling04b,jaksch14,feoktystov15, ralf97, ralf98, ralf99, ralf02}.
The SoNDe concept features individual neutron-detector modules which may be easily
configured to instrument essentially any experimental phase space. The
specification for the neutron-interaction position-reconstruction accuracy of the SoNDe
technology is 6~mm.
The core components of a SoNDe module are the
neutron-sensitive Li-glass scintillator and the pixelated multi-anode
photomultiplier tube (MAPMT) used to collect the scintillation light. The
response of MAPMTs to scintillation-emulating laser light has been extensively
studied~\cite{korpar00,rielage01,matsumoto04,lang05,abbon08,montgomery12,montgomery13,montgomery15,wang16}.
Similar Li-glass/MAPMT detectors have been tested with thermal
neutrons~\cite{zaiwei12} and a SoNDe detector prototype has been evaluated in a
reactor-based thermal-neutron beam~\cite{jaksch18}. In this paper, we present
results obtained for the response of a SoNDe detector prototype to
$\alpha$-particles from a collimated $^{241}$Am source, scanned across the face
of the scintillator. Our goal was to examine methods to optimize the localization
of the scintillation signal with a view to optimizing the position resolution of
the detector, under constraints imposed by envisioned readout schemes for the
detector.
We were particularly interested in the behavior of the SoNDe detector
prototype at the vertical and horizontal boundaries between the pixels and the
corners where four pixels meet.
\section{Apparatus}
\label{section:apparatus}
\subsection{Collimated $\alpha$-particle source}
\label{subsection:alpha_particle_beam}
Fig.~\ref{figure:figure_01_HolderCollimator} shows a sketch of the assembly
used to produce a beam of $\alpha$-particles. It consisted of a
$^{241}$Am $\alpha$-particle source mounted in a 3D-printed holder/collimator
assembly.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\textwidth]{Figure_01_HolderCollimator.png}
\caption{
$^{241}$Am $\alpha$-particle source mounted in the
holder/collimator assembly used to define a beam of
$\alpha$-particles. The radioactive source is a snug fit in the blue holder/collimator. The assembly
shown has a 1~mm thick face plate and a 1~mm diameter
hole, resulting in the diverging red beam of
$\alpha$-particles.
(For interpretation of the references to color in this
figure caption, the reader is referred to the web version
of this article.)
\label{figure:figure_01_HolderCollimator}
}
\end{center}
\end{figure}
The $\alpha$-particle emission energies of $^{241}$Am are 5.5~MeV ($\sim$85\%)
and 5.4~MeV ($\sim$15\%). The gamma-ray background from the
subsequent decay of excited states of $^{237}$Np has an energy of $\sim$60~keV and
has a negligible effect on the $\alpha$-particle response of the SoNDe detector
prototype. The $\alpha$-particle spectrum from the present source was measured
(Fig.~\ref{figure:figure_02_AlphaCalib}) using a high-resolution passive-implanted
planar silicon (PIPS) detector system in vacuum, where the PIPS detector was
calibrated using a three-actinide calibration source. The average energy of the
$\alpha$-particles emitted by the presently employed source was $\sim$4.5~MeV,
corresponding to an energy loss of $\sim$1~MeV in the source window.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\textwidth]{Figure_02_AlphaCalib.pdf}
\caption{
Uncollimated $\alpha$-particle spectrum emitted by the
$^{241}$Am source employed in the suite of measurements
reported on here (broad blue distribution) together with
a red fitted Gaussian function (indicating peaking at
4.54~MeV) and a triple $\alpha$-particle spectrum emitted
by a very thin-windowed three-actinide calibration source
(sharp black primary peaks at 5.155~MeV, 5.486~MeV, and
5.805~MeV).
(For interpretation of the references to color in this
figure caption, the reader is referred to the web version
of this article.)
\label{figure:figure_02_AlphaCalib}
}
\end{center}
\end{figure}
Holder/collimator assemblies for the $^{241}$Am source were 3D-printed from
polylactic acid using the fused deposition modeling technique. For our
measurements, we used 1~mm thick collimators with either 3~mm (for gain mapping)
or 1~mm (for border scanning) diameter apertures, resulting in uniform 3~mm or
1~mm irradiation spots at the upstream face of the scintillator. The distance from
the $^{241}$Am source to this upstream face was $\sim$6~mm, so that the mean
$\alpha$-particle energy at the surface of the GS20 wafer was
$\sim$4.0~MeV~\cite{alpha_loss}.
\subsection{SoNDe detector prototype}
\label{subsection:detector}
The SoNDe detector prototype investigated in this paper is being developed for
large-area arrays to detect thermal to cold neutrons with energies of $\leq$25~meV.
It consists of a 1~mm thick lithium-silicate scintillating glass wafer coupled to an MAPMT.
\subsubsection{Li-glass scintillator}
\label{subsubsection:Liglass_scintillator}
Cerium-activated lithium-silicate glass scintillator
GS20~\cite{firk61,spowart76,spowart77,fairly78}
purchased from Scintacor~\cite{scintacor} was chosen for this application.
GS20 has been demonstrated to be an excellent scintillator for
the detection of thermal and cold neutrons and arrays of scintillator tiles can be arranged
into large area detector systems~\cite{kemmerling04a,kemmerling04b,heiderich91,kemmerling01,feoktystov15}.
The lithium content is 6.6\% by weight, with a 95\% $^{6}$Li isotopic enhancement, giving a $^{6}$Li
concentration of 1.58~$\times$~$10^{22}$~atoms/cm$^{3}$. $^{6}$Li has a
thermal-neutron capture cross section of $\sim$940~b at 25~meV, so that a 1~mm thick wafer of GS20
detects $\sim$75\% of incident thermal neutrons. The capture process
produces a 2.05~MeV $\alpha$-particle and a 2.73~MeV triton, which have mean ranges in GS20 of
5.3~$\mu$m and 34.7~$\mu$m, respectively~\cite{jamieson15}. Using the present
$\alpha$-particle source, scintillation light was generated overwhelmingly within
$\sim$15~$\mu$m of the upstream face of the scintillating wafer. We note that the
scintillation light yield for 1~MeV protons is $\sim$5 times higher than that for 1~MeV $\alpha$-particles~\cite{dalton87}.
Thus, after thermal-neutron capture, the triton will produce a factor of
$\sim$5 more scintillation light than the $\alpha$-particle.
Tests by van~Eijk~\cite{vanEijk04}
indicate $\sim$6600 scintillation photons per neutron event with a peak at a
wavelength of 390~nm, which corresponds to $\sim$25\% of the anthracene benchmark.
The sensitivity of GS20 to gamma-rays is energy dependent. A threshold cut will eliminate the low-energy gamma-rays, but higher-energy gamma-rays
can produce large pulses if the subsequent electrons (from Compton scattering or pair production) traverse sufficient thickness of GS20.
Our glass wafer was 1~mm thick and 50~mm~$\times$~50~mm in
area. The glass faces, apart from the edges, were polished and the wafer was fitted to the MAPMT window without any optical coupling medium. The index of refraction of GS20 is 1.55 at 395~nm. We have
assumed that the $^6$Li distribution in our scintillating wafer was uniform.
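The quoted $\sim$75\% detection efficiency for a 1~mm wafer can be checked from the numbers given above, assuming simple exponential attenuation of the incident neutron flux through the wafer (the attenuation model is our assumption; the input numbers are from the text):

```python
# Worked check of the quoted ~75% thermal-neutron detection efficiency,
# assuming simple exponential attenuation through the wafer (our assumption;
# the input numbers are from the text).
import math

n_li6 = 1.58e22      # 6Li number density (atoms/cm^3)
sigma = 940.0e-24    # capture cross section at 25 meV (cm^2; 940 b)
thickness = 0.1      # wafer thickness (cm; 1 mm)

efficiency = 1.0 - math.exp(-n_li6 * sigma * thickness)
print(f"capture fraction: {efficiency:.0%}")  # ~77%, consistent with ~75%
```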
\subsubsection{Multi-anode photomultiplier tube}
\label{subsubsection:mapmt}
The Hamamatsu type H12700 MAPMT employed in the SoNDe detector prototype is an
8~$\times$~8 pixel device with outer dimensions 52~mm~$\times$~52~mm and an active
cathode area of 48.5~mm~$\times$~48.5~mm, resulting in a packing density of 87\%.
The bialkali photocathode produces a peak quantum efficiency of $\sim$33\% at
$\sim$380~nm wavelength, which is well matched to the GS20 scintillation.
Compared to its predecessor type H8500 MAPMT, the H12700 MAPMT achieves similar overall
gain, but with 10 as opposed to 12 dynode stages. The H12700 MAPMT employed for the
present tests had a gain of 2.09~$\times$~10$^{6}$ and a dark current of 2.67~nA
at an anode-cathode potential of $-$1000~V.
Each of the 64 $\sim$6~$\times$~6~mm$^2$ pixels in the Hamamatsu 12700 MAPMT has a slightly different gain,
which is measured and documented by the supplier. A typical H12700 MAPMT has a factor of
2 variation in pixel gain (a factor of 3 in the worst case)~\cite{hamabrochure}. The
datasheet provided by Hamamatsu for the H12700 MAPMT used in this study gave a
worst-case anode-to-anode gain difference of a factor of 1.7.
Figure~\ref{figure:figure_XX_MAPMTDetector} shows a photograph of the device
together with a pixel map.
\begin{figure}[H]
\begin{center}
\begin{subfigure}[b]{0.57\textwidth}
\includegraphics[width=1.\textwidth]{Figure_03a_MAPMTDetector.jpg}
\caption{MAPMT with scintillator}
\label{subfig:photograph}
\end{subfigure}
\begin{subfigure}[b]{0.57\textwidth}
\includegraphics[width=1.\textwidth]{Figure_03b_12700_crop.pdf}
\caption{MAPMT pixel map}
\label{subfig:pixelmap}
\end{subfigure}
\caption{
The Hamamatsu 12700 MAPMT. \ref{subfig:photograph}:
Photograph of the MAPMT together with the GS20
scintillator wafer. \ref{subfig:pixelmap}:
Numbering of the 64 MAPMT pixels (front view). Pixel~1
(P1) is located in the top left-hand corner of the
MAPMT looking into it from the front. Sketch from Ref.~\cite{hamabrochure}. The red
boxes indicate the region of irradiation reported on in
detail in this paper.
(For interpretation of the references to color in this
figure caption, the reader is referred to the web version
of this article.)
\label{figure:figure_XX_MAPMTDetector}}
\end{center}
\end{figure}
\section{Measurement}
\label{section:measurement}
The SoNDe detector prototype was irradiated using the collimated beams of
$\alpha$-particles (Sec.~\ref{subsection:alpha_particle_beam}) where the center of
the beam was directed perpendicular to the face of the GS20 wafer. The downstream
face of the source holder/collimator assembly was translated parallel to the
surface of the scintillator wafer on an XY-coordinate scanner, powered by a pair
of Thorlabs NRT150 stepping motors~\cite{thorlabs}. This was programmed to scan a
lattice of irradiation points uniformly distributed across the face of the device.
The entire assembly was located within a light-tight box and the temperature
($\sim$25\degree C), pressure ($\sim$101.3~kPa), and humidity ($\sim$30\%) within
the box were logged at the beginning and end of each measurement position. The
anode signals from each of the pixels in the MAPMT were recorded using standard
NIM and VME electronics. The positive polarity dynode-10 signal was shaped and
inverted using an ORTEC~454 NIM timing filter amplifier producing a negative
polarity signal with risetime $\sim$5~ns, falltime $\sim$20~ns, and amplitude some
tens of mV, which was fed to an ORTEC CF8000 constant-fraction discriminator set
to a threshold of $-$8~mV. The resulting NIM logic pulses with a duration of 150~ns
provided a trigger for the data-acquisition system and a gate for the CAEN V792 VME
charge-to-digital converters (QDCs) used to record the 64 anode pixel charges. A
CAEN V2718 VME-to-PCI optical link bridge was used to sense the presence of a
trigger signal and to connect the VMEbus to a Linux PC-based data-acquisition
system. The digitized signals were recorded on disk and subsequently processed using
ROOT-based software~\cite{root}. Data were recorded for $\sim$120~s at each point on
a scan, so that in total a scan could take several hours.
\section{Results}
\label{section:results}
The gain-calibration datasheet provided by Hamamatsu may be used to correct for
non-uniform response. However, previous
work~\cite{korpar00,rielage01,matsumoto04,lang05,abbon08,montgomery12,montgomery15}
has clearly shown that measured pixel gains depend strongly upon the irradiation
conditions. Since our $\alpha$-particle beam results in very short ($\sim$tens of ns)
pulses of highly localized scintillation light, which are in sharp contrast to the
steady-state irradiation measurement employed by Hamamatsu, we re-measured
the gain-map of our MAPMT {\it in situ} using the equipment described previously.
For each pixel, the 3~mm diameter $\alpha$-particle collimator was centered on the
XY position of the pixel center of the MAPMT photocathode and 2200 $\alpha$-particle
events were recorded. The resulting anode-charge distributions were well fitted with
Gaussian functions (average $\chi^2$ per degree
of freedom of 1.50 with a variance of 0.14) and the means, $\mu$, and
standard deviations, $\sigma$, were recorded. The largest measured $\mu$-value
(corresponding to the pixel with the highest gain) was normalized to 100.
The relative difference between the Hamamatsu gain-map values, $H$, and the $\alpha$-scanned
gain-map values, $\alpha$, was calculated as $\frac{H - \alpha}{H}$ on a pixel-by-pixel basis.
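The normalization and comparison procedure just described can be sketched as follows; the 8$\times$8 gain maps here are random placeholders, not the measured Hamamatsu or $\alpha$-scan values:

```python
# Sketch of the gain-map comparison described above: each map is normalized so
# that its highest-gain pixel reads 100, then (H - alpha)/H is formed per pixel.
# The 8x8 arrays are random placeholders, not the measured gain maps.
import numpy as np

def normalize_to_100(gains):
    """Scale a gain map so the highest-gain pixel is 100."""
    return 100.0 * (gains / gains.max())

rng = np.random.default_rng(1)
hamamatsu = normalize_to_100(rng.uniform(50.0, 100.0, size=(8, 8)))
alpha_scan = normalize_to_100(rng.uniform(50.0, 100.0, size=(8, 8)))

rel_diff = (hamamatsu - alpha_scan) / hamamatsu  # pixel-by-pixel difference
```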
Figure~\ref{figure:gainmaps}
shows the results of our gain-map measurement. The present results show significant differences from the Hamamatsu measurement,
although the general trends in regions of high and low gain agree.
Measurements of a sample of 30 H12700 MAPMTs revealed that the
window face is not flat but is systematically $\sim$80~$\mu$m lower at the center
than at the edges. The non-uniformity of the air gap between the GS20 and MAPMT
window may be a partial cause of the gain discrepancy displayed in
Fig.~\ref{figure:gainmaps}, as may reflections at the edges of the GS20 wafer.
In the following, we use the present $\alpha$-scintillation generated gain-map,
which in principle will embody non-uniform light-collection effects. In addition,
scintillation-light propagation through the SoNDe detector prototype is being
studied in a \geant-based simulation~\cite{agostinelli03,allison06} in order to
better understand these effects.
\begin{figure}[H]
\begin{center}
\begin{subfigure}[b]{0.73\textwidth}
\includegraphics[width=1.\textwidth]{Figure_04a_gaincorrection_difference_2D.pdf}
\caption{Gain differences, areal}
\label{subfig:heatmap}
\end{subfigure}
\begin{subfigure}[b]{0.73\textwidth}
\includegraphics[width=1.\textwidth]{Figure_04b_gaincorrection_difference_1D.pdf}
\caption{Gain differences, projected}
\label{subfig:heatmap_projection}
\end{subfigure}
\caption{
Differences between the $\alpha$-scan gain-map and the
Hamamatsu gain-map normalized to the Hamamatsu gain-map
in percent.
\ref{subfig:heatmap}: 2D representation in which
the top-left corner
corresponds to P1. \ref{subfig:heatmap_projection}:
1D representation of the
same as a function of pixel. Error bars are derived from
fit widths. The values have been joined with a line to
guide the eye. A histogram of the gain differences is
projected in grey on the right vertical axis. Cluster~A of that histogram
corresponds to red pixels in \ref{subfig:heatmap}
while cluster~B corresponds to blue pixels.
(For interpretation of the references to color in this
figure caption, the reader is referred to the web version
of this article.)
}
\label{figure:gainmaps}
\end{center}
\end{figure}
Figure~\ref{figure:mean_yields_with_gain_correction} shows the positions of
the 1~mm collimated $\alpha$-particle source employed for the horizontal,
vertical, and diagonal XY scans. The positions are labeled A--V and color
coded (\ref{subfig:key}). The $\alpha$-particle pulse-height spectra
recorded at each of the scan positions are displayed in
\ref{subfig:horscan}--\ref{subfig:diascan} for pixels P36, P37, P44, and P45
which encompass the scan coordinates. The QDC pulse-height distributions
have been pedestal subtracted and corrected for non-uniform pixel gain
(Fig.~\ref{figure:gainmaps}). It is obvious that the efficiency of scintillation
light collection in a single pixel is strongly dependent on the position of the
$\alpha$-particle interaction. The signal amplitude is maximized when light is
produced at the center of the pixel. This variation in amplitude with position
may be seen more clearly in \ref{subfig:lightshare}, which shows the mean of
the pulse-height distributions as a function of interaction position for the
horizontal scan (\ref{subfig:horscan}). The full curves are splines to guide the eye, while the dashed curves display the predictions of a ray-tracing
simulation of light propagation~\cite{ROS}. The simulation is in good
agreement with the measured data. After gain correction, these distributions
should be symmetric about the pixel boundary locations.
The system was aligned such that position D should have corresponded to the
boundary between P36 and P37. However, the fits to the data suggest that the
scan positions were offset by 0.2~mm to the left (Fig.~\ref{subfig:lightshare}).
Corresponding fits to the vertical-scan data show a 0.4~mm vertical offset.
The sum of the means of the two adjacent pixels scanned is also displayed. This
shows that the amount of light collected by the two pixels over which the scan
is performed is independent of position.
\begin{figure}[H]
\begin{center}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=0.99\textwidth]{Figure_05a1_scan_horisontal_P37.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=0.99\textwidth]{Figure_05b1_scan_vertical_P37.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=0.99\textwidth]{Figure_05c1_scan_diagonal_P37.pdf}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=0.99\textwidth]{Figure_05a2_scan_horisontal_P36.pdf}
\caption{Horizontal scan F $\leftarrow$ A}
\label{subfig:horscan}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=0.99\textwidth]{Figure_05b2_scan_vertical_P45.pdf}
\caption{Vertical scan A $\downarrow$ V}
\label{subfig:verscan}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=0.99\textwidth]{Figure_05c2_scan_diagonal_P44.pdf}
\caption{Diagonal scan O $\swarrow$ A}
\label{subfig:diascan}
\end{subfigure}
\begin{subfigure}[b]{0.43\textwidth}
\hspace*{5mm}
\includegraphics[width=0.75\textwidth]{Figure_05d_all_map.pdf}
\vspace*{6mm}
\caption{Key}
\label{subfig:key}
\end{subfigure}
\begin{subfigure}[b]{0.56\textwidth}
\hspace*{-5mm}
\includegraphics[width=1.0\textwidth]{Figure_05e_light_share_horisontal_corrected.pdf}
\caption{Light sharing horizontal scan F $\leftarrow$ A}
\label{subfig:lightshare}
\end{subfigure}
\caption{
\ref{subfig:horscan}--\ref{subfig:diascan}:
Measured charge distributions for four adjacent pixels
as the $\alpha$-particle beam was translated in 1~mm
horizontal and vertical steps across the pixel
boundaries. (a): horizontal scan from P37 to
P36. (b): vertical scan from P37 to P45.
(c): diagonal scan from P37 to P44.
\ref{subfig:key}: Key. The solid (dashed) black
lines indicate the pixel boundaries (centers).
\ref{subfig:lightshare}: Gain-corrected means of
the collected charge distributions corresponding to
\ref{subfig:horscan}. The curves are splines
drawn to guide the eye. Error bars correspond to
$\sigma/10$.
(For interpretation of the references to color in
this figure caption, the reader is referred to the
web version of this article.)
\label{figure:mean_yields_with_gain_correction}
}
\end{center}
\end{figure}
In general, several pixels adjacent to the target pixel will collect some
scintillation light and in principle, this could be used to better localize the
position of the scintillation as in an Anger camera~\cite{anger58,riedel15}. While possible,
this will not be the standard mode of operation for SoNDe modules running at ESS due to data-volume
limitations. For production running at ESS, MAPMT
pixels will be read and time-stamped on an event-by-event basis as lying either
above or below per-pixel discriminator thresholds.
The multiplicity of pixels with a signal above discriminator threshold (the
hit multiplicity denoted $M$~$=$~1, $M$~$=$~2, etc.) has been investigated as a
function of the scintillation position and also the discrimination level.
Figure~\ref{figure:contour_maps_M_with_cuts} displays regions in the vicinity of
P37 where $M$~$=$~1, 2, and 3 predominate. Hits have been determined according
to the pulse heights (Fig.~\ref{figure:mean_yields_with_gain_correction})
exceeding discrimination levels of 100 (\ref{subfig:100_channels}), 235
(\ref{subfig:235_channels}), and 500 (\ref{subfig:500_channels}) QDC~channels. At the 100-channel
threshold, $M$~$=$~1 events are confined to the center of a pixel. At the edges,
events are predominantly $M$~$=$~2, while in the corners, $M$~$=$~3. To see any considerable
$M$~$=$~4 contributions around the pixel corners, lower thresholds are required
as seen in Fig.~\ref{figure:active_area_per_threshold}. Raising the threshold to
235~channels extinguishes $M$~$=$~2 and $M$~$=$~3 almost completely. Raising even
further to 500~channels serves merely to reduce the number of $M$~$=$~1 events.
The threshold level obviously affects the relative efficiency with which the
SoNDe detector prototype registers $M$~$=$~1, $M$~$=$~2, etc. events, and
Fig.~\ref{figure:contour_maps_M_with_cuts} clearly shows that there is an optimum
threshold value to maximize the number of $M$~$=$~1 events detected and also to
maximize the area of the detector where the $M$~$=$~1 efficiency is high.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\hspace*{10mm}\includegraphics[width=0.7\textwidth]{Figure_06a_colorbars.pdf}
\vspace*{4mm}
\caption{Key}
\label{subfig:multkey}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\textwidth]{Figure_06b_multiplicity_colored_map_thresh_100.pdf}
\vspace*{-8mm}
\caption{Threshold 100 QDC~channels}
\label{subfig:100_channels}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\textwidth]{Figure_06c_multiplicity_colored_map_thresh_235.pdf}
\vspace*{-8mm}
\caption{Threshold 235 QDC~channels}
\label{subfig:235_channels}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\textwidth]{Figure_06d_multiplicity_colored_map_thresh_500.pdf}
\vspace*{-8mm}
\caption{Threshold 500 QDC~channels}
\label{subfig:500_channels}
\end{subfigure}
\caption{
Contour plots of the multiplicity distributions for pixels
lying near P37 for different thresholds as a function of
$\alpha$-particle beam irradiation location. The black lines
denote the pixel boundaries. In all three contour plots, blues
indicate $M$~$=$~1 events, reds indicate $M$~$=$~2 events, and
greens indicate $M$~$=$~3 events. The lighter the shade of the
color, the fewer the counts.
(For interpretation of the references to color in this
figure caption, the reader is referred to the web version
of this article.)
\label{figure:contour_maps_M_with_cuts}
}
\end{figure}
Figure~\ref{figure:active_area_per_threshold} illustrates the trend in $M$ as a
function of QDC threshold cut when a series of 36 $\alpha$-particle beam
measurements were performed in a 6~$\times$~6 grid covering
the face of P37. Variable thresholds have been applied to the data, but in each case,
the same cut has been applied to all pixels. Two ``extreme'' curves are shown in
\ref{subfig:m_extreme}: the $M$~$=$~0 curve and the $M$~$>$~4 curve. The $M$~$=$~0 curve
corresponds to events which do not exceed the applied threshold in any pixel.
$M$~$=$~0 events start to register at a threshold of $\sim$30~channels and rise
steeply after channel~$\sim$200 to $\sim$95\% at channel~600. The $M$~$>$~4 curve
corresponds to events which exceed the applied threshold in four or more pixels.
At a threshold of $\sim$4~channels, $\sim$90\% of events are $M$~$>$~4, falling
essentially to zero at channel~$\sim$50. Four other curves are shown in
\ref{subfig:m1to4}: $M$~$=$~1, $M$~$=$~2, $M$~$=$~3, and $M$~$=$~4, corresponding
to events which exceed the applied threshold in one, two, three, and
four pixels, respectively. Each of these curves demonstrates a clear maximum, so
the analysis procedure may be ``tuned'' to select an event multiplicity by applying
the appropriate threshold. The detection efficiency for $M$~$=$~1 events peaks at
$\sim$75\% at a threshold of 235~channels, where $M$~$=$~2, 3, and 4 have negligible
efficiency as they peak at channels~65, 25, and 18, respectively.
\begin{figure}[H]
\begin{center}
\begin{subfigure}[b]{0.73\textwidth}
\includegraphics[width=1.\textwidth]{Figure_07a_active_area_M0.pdf}
\caption{Extreme multiplicities}
\label{subfig:m_extreme}
\end{subfigure}
\begin{subfigure}[b]{0.73\textwidth}
\includegraphics[width=1.\textwidth]{Figure_07b_active_area_M1to4.pdf}
\caption{Practical multiplicities}
\label{subfig:m1to4}
\end{subfigure}
\caption{Tuning the analysis using a single QDC threshold.
Relationships between the relative number of events and
threshold. Top Panel: $M$~$=$~0 (black dot-dot-dashed line)
and $M$~$>$~4 (light blue dot-dashed line).
Bottom Panel: $M$~$=$~1 (dark blue solid line), $M$~$=$~2
(orange dashed line), $M$~$=$~3 (green dot-dashed line), and
$M$~$=$~4 (red dotted line). The grey vertical lines at
QDC~channel~100, 235, and 500 represent three of the QDC
threshold cuts employed in Fig.~\ref{figure:contour_maps_M_with_cuts}.
Arrows indicate the optimal values for QDC threshold cuts for
tuning the resulting data set for a specific value of $M$.
(For interpretation of the references to color in this
figure caption, the reader is referred to the web version
of this article.)}
\label{figure:active_area_per_threshold}
\end{center}
\end{figure}
\section{Summary and Discussion}
\label{section:summary}
The position-dependent response of a SoNDe detector prototype, which consists
of a 1~mm thick
wafer of GS20 scintillating glass read out by an 8~$\times$~8~pixel type H12700
MAPMT has been measured using a collimated $^{241}$Am source. The spreading of
the scintillation light and the resulting distributions of charge on the MAPMT
anodes were studied as a function of $\alpha$-particle interaction position by
scanning the collimated $\alpha$-particle beam across the face of the MAPMT using
a high precision XY coordinate translator.
Initially, pixel gain non-uniformity across the 64 MAPMT anodes was measured
using the 3~mm collimated source positioned at each pixel center, which produced
uniform illumination of the pixel centers.
The results, which differ from relative gain
data provided by the MAPMT manufacturer at the 10\% level (Fig.~\ref{figure:gainmaps}), were used
to correct all subsequent 1~mm scan data.
Anode charge distributions collected from each MAPMT pixel at each scanned
coordinate show a strong position dependence of the signal amplitude. The single
pixel signal is strongest when the source is located at the pixel
center, and falls away as the pixel boundaries are approached
(Fig.~\ref{figure:mean_yields_with_gain_correction}). At the pixel center, the
signal tends to be concentrated in that pixel, while at pixel boundaries, the
signal is shared between the adjacent pixels.
Rate and data-volume considerations for operation of SoNDe modules at ESS
will require a relatively simple mode of operation for the SoNDe
data-acquisition system. It will not be possible to read out multiple pixels to
construct a weighted-mean interaction position as in an Anger Camera. Instead,
it will be necessary to identify the pixel where the maximum charge occurs and
record only the identity (P1 -- P64) of that pixel.
To this end, we studied the
effect of signal amplitude thresholds on the multiplicity of pixel hits (that
is, the number of signals above threshold) as a function of the $\alpha$-particle
interaction position (Fig.~\ref{figure:contour_maps_M_with_cuts}).
This study showed that there is an optimum discrimination level which
maximizes the number of single-pixel or $M$~$=$~1 hits. Below this level,
multi-pixel hits start to dominate, while above this level, the single-pixel
efficiency drops (Fig.~\ref{figure:active_area_per_threshold}). At the optimum
discrimination level, which under the present operating conditions was
channel~235, $\sim$75\% of the $\alpha$-particle interactions were registered
as single pixel.
Further work pertaining to the characterization of the SoNDe detector prototype
is progressing in parallel to the project reported here. This includes the
development of a simulation within the \geant~framework to complement and
extend the ray-tracing simulation mentioned in this work. This \geant~model
fully simulates the interactions of ionizing radiation in the GS20 and tracks
the produced scintillation photons to the MAPMT cathode. It is being used to
study the effects of optical coupling, surface finish, and partial pixelation
(by machining grooves) of the GS20 wafer on the response of the detector. The
relative response of a SoNDe detector prototype to different incident particle
species is also being studied. Irradiation of a SoNDe detector prototype with
a collimated beam of thermal neutrons has been performed and data analysis is
nearing completion. An extension of these studies, where an optical diffuser is
inserted between the GS20 and the MAPMT, is planned to examine a possible
Anger camera mode of operation. Further position-dependence studies will also
be made using fine needle-like beams of a few MeV protons and deuterons produced
by an accelerator. This will allow determination of the scintillation signal
as a function of interacting particle species and will be complemented by work
with fast-neutron and gamma-ray sources.
\section*{Acknowledgements}
\label{acknowledgements}
We thank Prof. David Sanderson from the Scottish Universities Environmental
Research Centre for providing $\alpha$-spectroscopy facilities for the
calibration of our $^{241}$Am source. We acknowledge the support of
the European Union via the Horizon 2020 Solid-State Neutron Detector Project,
Proposal ID 654124, and the BrightnESS Project, Proposal ID 676548. We also
acknowledge the support of the UK Science and Technology Facilities Council
(Grant nos. STFC 57071/1 and STFC 50727/1) and UK Engineering and Physical
Sciences Research Council Centre for Doctoral Training in Intelligent
Sensing and Measurement (Grant No. EP/L016753/1).
\newpage
\bibliographystyle{elsarticle-num}
\section{Introduction}
Quasar spectra show many rest-frame ultraviolet (UV) absorption lines. According to their distances from the background quasar, absorption lines can be divided into two classes: (1) intrinsic absorption lines, arising from gas inside the quasar system; and (2) intervening absorption lines, arising from gas in or around the quasar host galaxy, from a galaxy cluster around the quasar, or from foreground media with no physical relation to the quasar. The intrinsic absorption lines are usually classified into three categories according to their absorption widths: broad absorption lines (BALs: absorption widths of at least 2000 $\rm km~s^{-1}$; Weymann et al. 1991), detected in about 41 per cent of quasars (Allen et al. 2011); mini-BALs (absorption widths from 500 to 2000 $\rm km~s^{-1}$; Hamann \& Sabra 2004); and narrow absorption lines (NALs: absorption widths of less than 500 $\rm km~s^{-1}$), detected in $\thicksim20-50$ per cent of quasars (e.g. Misawa et al. 2007).
At present, the relationship between BAL, mini-BAL and intrinsic NAL is still unclear. In an evolution model picture (e.g. Farrah et al. 2007), different types of intrinsic absorption lines may represent different evolution stages of the quasar outflows. For example, the BAL might represent a powerful phase of the outflow, while the intrinsic NAL and mini-BAL are the beginning or the ending of the BAL outflow (e.g. Hamann et al. 2008). In an inclination model picture (e.g. Murray et al. 1995; Elvis 2000; Proga, Stone \& Kallman 2000), different types of intrinsic absorption lines may be due to viewing angle effects. For instance, the BAL might represent the main body of the outflow, while the NAL and mini-BAL are the clumpy structures on the edge of the outflow at high latitudes (e.g. Ganguly et al. 2001a).
\begin{figure*}
\includegraphics[width=2\columnwidth]{Figure1.eps}
\caption{Spectra of quasar J0027--0944. The top and middle panels show the spectra observed by SDSS on MJD 52145 and 54825, respectively. The flux density is in units of $\rm 10^{-17} erg~ cm^{-2}~s^{-1}~$\AA$^{-1}$. The blue lines are the corresponding pseudo-continua. The orange lines represent the pseudo-continuum fluxes taking into account the flux uncertainties. In the bottom panel, the black and red lines show the normalized spectra of J0027--0944 from observations on MJD 54825 and 52145, respectively. The red, blue, purple and green lines mark out four identified NAL systems.}
\label{fig.1}
\end{figure*}
BALs exhibit diverse shapes, for example multiple absorption troughs (e.g. Turnshek et al. 1980), detached absorption troughs (e.g. Osmer \& Smith 1977), and classic P-Cygni profiles (e.g. Scargle, Caroff \& Noerdlinger 1970). The cause of this profile diversity is also under debate. A hydrodynamic simulation performed by Pereyra (2014) suggested that discontinuities in the ionization balance of the outflow, caused by X-ray shielding, may result in a profile with multiple absorption troughs. Assuming an X-ray shielded region, Pereyra further found that the diversity of BAL profiles can be explained by a viewing angle effect: from `face-on' to `edge-on', one successively detects multiple absorption troughs, then detached absorption troughs, and then a classic P-Cygni profile. Observationally, Baskin, Laor \& Hamann (2013) found that the velocity of the \ion{C}{iv} BAL profile is controlled by the \ion{He}{ii} emission equivalent width (EW), while its profile depth is controlled by the spectral slope in the 1700--3000 \AA~range. They suggested that the \ion{He}{ii} emission EW and the spectral slope may indicate the ionizing continuum and the viewing angle, respectively.
During a programme to study BAL variation with time (Lu, Lin \& Qin 2017), we happened to discover that a number of BALs are actually blends of NALs. Through a careful visual inspection of about 2000 BAL quasars with multi-epoch observations from SDSS-I/II/III, we confirmed that this type of BAL is not rare. This discovery may offer a new perspective on outflows, for example on the relationship between the different types of absorption lines (BALs, mini-BALs and NALs) and on the cause of the profile diversity of BALs. As a pilot study of this type of BAL, we present in this paper a typical example with two-epoch observations of the quasar SDSS J002710.06-094435.3 (hereafter J0027-0944). As a BAL quasar, J0027-0944 has been included in many systematic studies (Trump et al. 2006; Gibson et al. 2009a; Scaringi et al. 2009; Allen et al. 2011; He et al. 2015, 2017). J0027-0944 has been observed twice by SDSS. The first-epoch SDSS spectrum of J0027-0944 has balnicity indices (BI; defined as ${\rm BI}=\int_{-25000~\rm km~s^{-1}}^{-3000~\rm km~s^{-1}} (1-\frac{f(v)}{0.9})C\,{\rm d}v$\footnotemark; Weymann et al. 1991) of 433.1 and 758.9 $\rm km~s^{-1}$ for the \ion{Si}{iv} and \ion{C}{iv} BALs (taken from Allen et al. 2011), respectively. The \ion{C}{iv} BAL has a P-Cygni shape, while the \ion{Si}{iv} BAL shows multiple absorption troughs. In this paper, we show that the \ion{Si}{iv} and \ion{C}{iv} BALs in J0027-0944 each actually contain at least four NAL systems.
The paper is structured as follows. Section 2 presents the quasar spectra and describes the spectral analysis. Section 3 contains the results and discussions. Section 4 gives the conclusion and future works.
\footnotetext{Where $f (v)$ is the continuum-normalized spectral flux as a function of a velocity $v$ (in $\rm km~s^{-1}$), relative to the quasar rest frame. The dimensionless value $C$ is set to 1 where the normalized flux starts to continuously fall at least 10 per cent below the continuum for at least 2000 $\rm km~s^{-1}$, and is switched to zero everywhere else.}
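The balnicity index defined above can be computed directly from a normalized spectrum. The following sketch implements the Weymann et al. (1991) definition, including the 2000 $\rm km~s^{-1}$ continuity requirement on $C$; the toy spectrum at the end is illustrative, not data from J0027-0944:

```python
import numpy as np

DEPTH = 0.9        # normalized-flux level defining "below the continuum"
C_WIDTH = 2000.0   # km/s of contiguous absorption required before C = 1

def balnicity_index(v, f, vmin=-25000.0, vmax=-3000.0):
    """BI of Weymann et al. (1991): v in km/s (ascending, negative =
    blueshifted), f the continuum-normalized flux on the same grid."""
    bi, run = 0.0, 0.0
    for i in range(1, len(v)):
        dv = v[i] - v[i - 1]
        if not (vmin <= v[i] <= vmax):
            run = 0.0
            continue
        if f[i] < DEPTH:
            run += dv
            if run >= C_WIDTH:            # C switches to 1 after 2000 km/s
                bi += (1.0 - f[i] / DEPTH) * dv
        else:
            run = 0.0                     # continuity broken: C back to 0
    return bi

# Toy spectrum: one flat trough of depth 0.45 between -10000 and -5000 km/s;
# expected BI ~ (1 - 0.45/0.9) * (5000 - 2000) = 1500 km/s.
v = np.arange(-25000.0, 0.0, 10.0)
f = np.ones_like(v)
f[(v >= -10000.0) & (v < -5000.0)] = 0.45
bi = balnicity_index(v, f)
```

Note that a trough narrower than 2000 $\rm km~s^{-1}$ contributes nothing to BI, which is why NALs and mini-BALs have BI $=$ 0 by construction.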
\begin{table*}
\centering
\caption{Measurements of \ion{Si}{iv} and \ion{C}{iv} BALs.}
\label{tab.1}
\begin{tabular}{ccccrccr}
\hline\noalign{\smallskip}
Species & $\rm z_{abs}$ & Velocity &\multicolumn{2}{c}{MJD: 52145} &{ } & \multicolumn{2}{c}{MJD: 54825}\\
\cline{4-5}\cline{7-8}\noalign{\smallskip}
{}&{}& &$\rm EW$ & $\rm FWHM$ & {} &$\rm EW$ & $\rm FWHM$ \\
{}&{}&$({\rm km~s}^{-1})$& (\AA) & (${\rm km~s}^{-1}$) & {}& (\AA) & (${\rm km~s}^{-1}$)\\
\hline\noalign{\smallskip}
\ion{Si}{iv}$\lambda$1393 &\multirow{2}{*}{2.0208}&\multirow{2}{*}{$-6199$}& $1.13\pm0.29$ & 368.89& {}&$0.30\pm0.39$ &301.80\\
\ion{Si}{iv}$\lambda$1402 & {} & {} & $0.66\pm0.25$ & 249.90 & {} &$0.25\pm0.25$ &199.91\\
\ion{Si}{iv}$\lambda$1393 &\multirow{2}{*}{2.0262} &\multirow{2}{*}{$-5660$}&$1.22\pm0.18$ &284.54 & {} & $0.40\pm0.23$ & 251.08\\
\ion{Si}{iv}$\lambda$1402 & {} &&$0.73\pm0.32$ &299.34& {} & $0.33\pm0.28$ & 249.47\\
\ion{Si}{iv}$\lambda$1393 &\multirow{2}{*}{2.0290} &\multirow{2}{*}{$-5383$}&$0.73\pm0.24$ & 250.83& {} &$0.53\pm0.15$ &250.81\\
\ion{Si}{iv}$\lambda$1402& {} &{} &$0.66\pm0.37$ &282.45& {} & $0.49\pm0.18$ &249.20\\
\ion{Si}{iv}$\lambda$1393 &\multirow{2}{*}{2.0369} &\multirow{2}{*}{$-4603$}&$1.25\pm0.21$ & 333.57 & {} & $0.43\pm0.35$ &333.60\\
\ion{Si}{iv}$\lambda$1402& {} &{}&$0.90\pm0.35$ & 314.86& {} &$0.32\pm0.27$ &248.59\\
\ion{C}{iv} BAL &-- &$-6745\thicksim-3793^{\rm a}$&$12.28\pm1.07$ &$2952^{\rm b}$& {} &$9.74\pm0.54$ &$2952^{\rm b}$\\
\ion{Si}{iv} BAL &-- &$-7985\thicksim-3511^{\rm a}$&$7.97\pm1.92$&$4474^{\rm b}$& &$3.66\pm0.87$&$4474^{\rm b}$\\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item$^{\rm a}$Velocity range of the BAL troughs with respect to emission rest frame.
\item$^{\rm b}$Total width calculated from edge-to-edge of the BAL troughs.
\end{tablenotes}
\end{table*}
\section{SPECTROSCOPIC ANALYSIS}
The SDSS uses a 2.5-m Ritchey-Chretien telescope (Gunn et al. 2006) at Apache Point Observatory, New Mexico. SDSS-I/II (the first two periods of the SDSS project) spectra have a spectral resolution of $R \thickapprox 1800-2200$ (e.g. York et al. 2000). J0027--0944 ($z=2.0839$, taken from Hewett \& Wild 2010) was observed by SDSS on MJD 52145 and 54825. These two observations span about 7 yr in the observed frame ($\Delta t_{\rm obs} = 2680$~days), i.e. about 2.4 yr in the quasar rest frame ($\Delta t_{\rm rest} = 869$~days). The median signal-to-noise ratios (S/N) of the MJD 52145 and MJD 54825 spectra of J0027--0944 are 16.38 and 27.46 per pixel, respectively.
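The outflow velocities quoted in Table \ref{tab.1} follow from the absorption and emission redshifts. A minimal sketch, assuming the standard relativistic Doppler relation (this formula is not spelled out in the text, but it reproduces the tabulated values to within the rounding of the redshifts):

```python
# Speed of light in km/s
C_KMS = 299792.458

def outflow_velocity(z_abs, z_em):
    """Velocity of an absorber relative to the quasar rest frame, using the
    standard relativistic Doppler relation (negative = blueshifted outflow)."""
    r = ((1.0 + z_abs) / (1.0 + z_em)) ** 2
    return -C_KMS * (1.0 - r) / (1.0 + r)

# Example: the two highest-velocity Si IV systems of Table 1
v1 = outflow_velocity(2.0208, 2.0839)   # close to -6199 km/s
v2 = outflow_velocity(2.0262, 2.0839)   # close to -5660 km/s
```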
\begin{figure}
\includegraphics[width=\columnwidth]{Figure2.eps}
\caption{Pseudo-continuum normalized spectra of quasar J0027--0944 with a wavelength range from 1355 to 1395 \AA~(quasar frame). The upper and lower spectra are snippets from the spectra observed on MJD 52145 and 54825, respectively. The Gaussian fittings in red, blue, purple and green represent the four identified \ion{Si}{iv} NAL systems. The red, blue, purple and green solid lines mark out four identified NAL systems. The total fit model is also marked out by a brown broken line.}
\label{fig.2}
\end{figure}
We fitted the continua iteratively using cubic spline functions. In order to reduce the effects of absorption troughs and remaining sky pixels, during the fitting we masked out the pixels that deviate by more than $3\sigma$ from the current fit. Our continuum fitting is shown in Fig. \ref{fig.1}. We then measured the absorption lines in the pseudo-continuum normalized spectra.
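The iterative fit-and-mask procedure can be sketched as follows. The knot placement and the toy spectrum are illustrative choices, not the parameters actually used in the analysis:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_pseudo_continuum(wave, flux, knots, n_iter=5, clip=3.0):
    """Iterative cubic-spline continuum fit: at each pass, refit using only
    pixels within `clip` sigma of the current fit, so absorption troughs and
    sky residuals are progressively masked out."""
    mask = np.ones(flux.size, dtype=bool)
    for _ in range(n_iter):
        spl = LSQUnivariateSpline(wave[mask], flux[mask], knots, k=3)
        resid = flux - spl(wave)
        sigma = resid[mask].std()
        mask = np.abs(resid) < clip * sigma
    return spl(wave)

# Toy spectrum: gently sloping continuum + noise + one deep trough.
rng = np.random.default_rng(0)
wave = np.arange(1400.0, 1600.0, 1.0)
true_cont = 10.0 + 0.001 * (wave - 1500.0)
flux = true_cont + rng.normal(0.0, 0.02, wave.size)
flux[95:105] -= 5.0                     # absorption trough, clipped away
cont = fit_pseudo_continuum(wave, flux, knots=[1450.0, 1500.0, 1550.0])
```

After one clipping pass the trough pixels fall outside the $3\sigma$ band and the refit recovers the underlying continuum, which is then interpolated smoothly across the masked trough.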
For the blended NALs within the \ion{Si}{iv} BAL trough, we employed four pairs of Gaussian functions to fit them (Fig. \ref{fig.2}). Limited by the low resolution of the spectra, we cannot measure or discuss the coverage fractions of the NALs; we therefore assumed full coverage during line fitting in this work. For each \ion{Si}{iv} NAL doublet, we adopted the absorption redshift measured from the spectrum observed on MJD 52145 (Table \ref{tab.1}). The rest-frame EW of each NAL was measured from the Gaussian fit. Following the optimal extraction method (see Schneider et al. 1993), the error on the EW was estimated using
\begin{equation}
\sigma_{\rm EW}=\frac{\sqrt{\sum_{i=1}^{N} P^{2}(\lambda_i-\lambda_0)\sigma_{f_i}^{2}\Delta \lambda_{i}^2}}{(1+z_{\rm abs})\times\sum_{i=1}^{N} P^{2}(\lambda_i-\lambda_0)} ,
\label{eq:quadratic 1}
\end{equation}
where $P(\lambda_i-\lambda_0)$ represents the Gaussian line profile centred at $\lambda_0$, $\sigma_{f_i}$ represents the normalized flux uncertainty, and $\Delta \lambda_{i}$ is the pixel scale in units of \AA. The sum was performed over an integer number of pixels covering $\pm3$ characteristic Gaussian widths.
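The EW-error expression above translates directly into code. A sketch, where the wavelength grid and uncertainty values are illustrative only:

```python
import numpy as np

def ew_error(wave, sigma_f, dlam, z_abs, lam0, gwidth):
    """Uncertainty on the rest-frame EW of a Gaussian-fitted NAL, using the
    profile-weighted (optimal-extraction-like) sum of the text."""
    p = np.exp(-0.5 * ((wave - lam0) / gwidth) ** 2)   # P(lambda_i - lambda_0)
    num = np.sqrt(np.sum(p ** 2 * sigma_f ** 2 * dlam ** 2))
    den = (1.0 + z_abs) * np.sum(p ** 2)
    return num / den

# Illustrative numbers: a uniform grid spanning +/- 3 Gaussian widths,
# constant normalized-flux error of 0.05 per pixel.
gwidth = 1.2
wave = np.linspace(4215.0 - 3 * gwidth, 4215.0 + 3 * gwidth, 73)
dlam = np.full(wave.size, wave[1] - wave[0])
sig_ew = ew_error(wave, np.full(wave.size, 0.05), dlam, 2.0208, 4215.0, gwidth)
```

For a uniform grid and constant flux error this reduces to $\sigma_{f}\,\Delta\lambda / [(1+z_{\rm abs})\sqrt{\sum_i P^2}]$, which provides a quick sanity check on the implementation.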
For the \ion{C}{iv} BAL, we could not identify the individual NALs because of their severe blending. We therefore measured the EW for the whole \ion{C}{iv} BAL trough from the normalized spectra using
\begin{equation}
{\rm EW}=\sum_i (1-\frac{F_i}{F_c})\Delta \lambda_{i},
\label{eq:quadratic 2}
\end{equation}
and the uncertainty on the EW is
\begin{equation}
\sigma_{\rm EW}=\sqrt{[\frac{\Delta F_c}{F_c}\sum_i (\frac{\Delta \lambda_{i}F_i}{F_c})]^2+\sum_i (\frac{\Delta \lambda_{i}\Delta F_i}{F_c})^2},
\label{eq:quadratic 3}
\end{equation}
where $F_i$, $\Delta F_i$, $F_c$ and $\Delta F_c$ are the flux in the $i$th bin, the error on $F_i$, the underlying continuum flux, and the uncertainty in the mean continuum flux in the normalization window, respectively (Kaspi et al. 2002). In the normalized spectra, $F_c$= 1. We calculated $\Delta F_c$ using a window of 1485--1515 \AA, which is the closest window to the \ion{C}{iv} BAL trough. We also measured the EW for the whole \ion{Si}{iv} BAL. Measurements of the \ion{C}{iv} and \ion{Si}{iv} BAL are also listed in Table \ref{tab.1}.
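The two expressions above can be implemented directly. A sketch, with toy trough values chosen for illustration:

```python
import numpy as np

def bal_ew(F, dlam, Fc=1.0):
    """EW of a BAL trough from the normalized spectrum (first expression)."""
    return np.sum((1.0 - F / Fc) * dlam)

def bal_ew_error(F, dF, dlam, Fc=1.0, dFc=0.0):
    """EW uncertainty: continuum-placement term plus per-pixel noise term
    (second expression; Kaspi et al. 2002)."""
    term_c = (dFc / Fc) * np.sum(dlam * F / Fc)
    term_n = np.sum((dlam * dF / Fc) ** 2)
    return np.sqrt(term_c ** 2 + term_n)

# Toy trough: 10 pixels of normalized flux 0.5, 1-Angstrom pixels,
# per-pixel flux error 0.01, continuum uncertainty 0.02.
F = np.full(10, 0.5)
ew = bal_ew(F, np.ones(10))                                   # 5.0 Angstrom
err = bal_ew_error(F, np.full(10, 0.01), np.ones(10), dFc=0.02)
```

The continuum term enters coherently (summed before squaring) because a continuum misplacement shifts all pixels together, whereas the per-pixel noise terms add in quadrature.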
\section{Discussion}
\subsection{Origin and variability cause}
Complexes of high-redshift absorbers can usually be explained either as superclustering at high redshift or as gas intrinsic to the quasar (Ganguly, Charlton \& Bond 2001b; Richards et al. 2002). For J0027--0944, the complex absorption lines are undoubtedly produced by intrinsic gas, for a strong reason: they show time variability, and intervening absorption is not expected to vary on such short time-scales (e.g. Hamann et al. 1995). This is why time variability is a powerful tool for identifying quasar intrinsic absorption lines (although intrinsic absorption lines are not always time-variable; e.g. Barlow \& Sargent 1997; Hamann et al. 1997), especially in moderate-resolution spectra, where intrinsic absorption lines can be identified neither through partial coverage nor through photoionization simulations.
Furthermore, coordinated weakening among the different absorption components within the \ion{Si}{iv} BAL trough is detected (see Fig. \ref{fig.2} and Table \ref{tab.1}). The different NAL components within the \ion{C}{iv} BAL trough are severely blended, resulting in a P-Cygni absorption trough. However, the weakening occurs globally across the entire \ion{C}{iv} BAL trough rather than in small segments (see Fig. \ref{fig.1}), which means that the variations of the different NAL components within the \ion{C}{iv} BAL are also coordinated. Moreover, the variations of the \ion{C}{iv} and \ion{Si}{iv} lines arising from the same absorbers are coordinated as well. Such well-coordinated variations can be interpreted as the result of global changes in the ionization state of the absorbing gas (e.g. Hamann et al. 2011; Chen \& Qin 2015; Wang et al. 2015).
Line-locking is usually considered as evidence for radiative acceleration (e.g. Foltz et al. 1987; Bowler et al. 2014). Line-locking requires that our sight lines are roughly parallel to the outflow gas motion. The \ion{Si}{iv} BAL in J0027-0944 does not show a line-locking, which means that our line of sight is less likely to be parallel to the outflow wind.
The absorption depths of \ion{Si}{iv} and \ion{C}{iv} BALs are deeper than the corresponding broad emission lines (Fig. \ref{fig.1}), which suggests that the absorbers cover both the continuum source and broad emission line region (BELR). Thus, their distance from the flux source should be larger than the size of the BELR.
\subsection{Two BAL types}
Some BALs are believed to have intrinsically diffuse and smooth line profiles that cannot be resolved into multiple discrete narrow components (hereafter Type I BALs; e.g. Hamann et al. 1997; Capellupo et al. 2012). However, in J0027-0944, we found another type of BAL, consisting of a number of discrete narrow components (hereafter Type II BALs). In other words, the \ion{Si}{iv} and \ion{C}{iv} BALs in J0027-0944 are the same as intrinsic NALs in terms of their absorption profiles. On the relationship between BALs and NALs, there are two main conjectures: the evolution model and the inclination model (as described in the introduction). The two types of BALs are unlikely to evolve into each other (through, for example, a change of ionization conditions, gas motion, and/or any other mechanism) because their appearances are completely different. Thus, the evolution model is a less likely interpretation of this phenomenon. Compared to the evolution model, the inclination model is favoured, at least to explain the existence of Type II BALs. In the inclination model, Type I BALs are generally considered to form in the main body of the outflow near the plane of the accretion disc, while NALs form along lines of sight that skim the edges of the BAL flow at higher latitudes above the disc (e.g. Ganguly et al. 2001a; Hamann et al. 2012). Type II BALs may then form in the transitional zone of the outflow between Type I BALs and NALs. If this conjecture is true, Type II BALs may have the same origin as mini-BALs (see fig. 4 of Hamann et al. 2012).
\subsection{Profile shapes}
As described in the introduction, BALs exhibit a wide variety of profile shapes. In J0027-0944, although they originate from the same clumpy structures, the \ion{Si}{iv} BAL looks different from the \ion{C}{iv} BAL. First, the \ion{Si}{iv} BAL covers a wider velocity range but has a weaker EW than the \ion{C}{iv} BAL (see Table \ref{tab.1}). Secondly, the \ion{Si}{iv} BAL shows multiple absorption troughs, while the \ion{C}{iv} BAL shows a P-Cygni shape (see Fig. \ref{fig.1}). These differences may arise for the following reasons. On the one hand, owing to the difference in fine structure, the two members of the \ion{Si}{iv} $\lambda\lambda$1393, 1402 doublet are separated more widely than those of the \ion{C}{iv} $\lambda\lambda$1548, 1551 doublet. On the other hand, owing to the difference in oscillator strength, the EW per unit column density of the \ion{Si}{iv} NALs tends to be weaker than that of the \ion{C}{iv} NALs. Thus, although they probably have the same origin, the \ion{Si}{iv} BAL shows multiple absorption features over a wider velocity range, while the \ion{C}{iv} BAL is more severely blended into a classic P-Cygni profile over a narrower velocity range. The different shapes of the \ion{Si}{iv} and \ion{C}{iv} BALs in J0027-0944 can therefore be explained simply by how crowded the NALs are in velocity space, which depends on the fundamental atomic-physics parameters of \ion{Si}{iv} and \ion{C}{iv}. In addition, it is not unusual for the \ion{Si}{iv} and \ion{C}{iv} BALs in the same quasar spectrum to show different appearances, as in J0027-0944.
Previous study also showed that the \ion{Si}{iv} BALs are generally weaker than \ion{C}{iv} BALs (e.g. Capellupo et al. 2012; Filiz Ak et al. 2013), and usually have a wider velocity range than \ion{C}{iv} BALs (Capellupo et al. 2012).
Although both \ion{C}{iv} and \ion{Si}{iv} BALs show a wide variety of profile shapes, the phenomenon shown in J0027-0944 can be easily found in other BAL quasar spectra. We found this phenomenon based on a visual check on a large spectral sample of BAL quasars, during a programme to study the BAL variation (Lu et al. 2017). The proportion of Type II BAL will be given quantitatively in future work.
\subsection{Clumpy structure}
The complex of NALs within the BAL troughs and their coordinated variations in J0027--0944 motivate the idea that these absorptions may arise from clumpy gas clouds with similar locations, kinematics and physical conditions. In fact, clumpy outflow structures have been supported by many works from different perspectives. For example, five high-velocity outflow NALs identified in SDSS J212329.46-005052.9 require five distinct clumpy structures of the outflow with similar physical conditions, characteristic sizes and kinematics (Hamann et al. 2011). To solve the so-called `overionization problem' in quasar and active galactic nucleus outflows, Hamann et al. (2013) suggested that mini-BAL absorbers may consist of a number of small-scale ($d_{\rm cloud}\la10^{-3}-10^{-4}~\rm pc$) but high-density clouds ($n_{\rm e}\ga10^6-10^7~\rm cm^{-3}$). Joshi et al. (2014) invoked a similar picture to explain the strength and velocity variations of a \ion{C}{iv} BAL in the quasars SDSS J085551+375752 and J091127+055054.
How the clumpy structures survive in quasar outflows is still under debate. The classical outflow model (e.g. Murray et al. 1995; Murray \& Chiang 1997) attributes the survival of clumpy structures to a shielding medium located at the base of the outflow. Because the shielding medium blocks most of the quasar's far-UV and X-ray radiation, the clumpy structures can avoid overionization at a much lower gas density. This is supported by the observation that BAL quasars are usually relatively X-ray weak compared to non-BAL quasars (e.g. Green et al. 1995; Brandt et al. 2000). However, observations of NAL or mini-BAL quasars show less X-ray absorption (Misawa et al. 2008; Chartas et al. 2009; Gibson et al. 2009b), even though radiatively accelerated NAL or mini-BAL outflows have high speeds and ionizations similar to those of BAL absorbers (e.g. Hamann et al. 2011). Hamann et al. (2011) argued that high gas densities in small outflow substructures allow the clumpy structures to survive without significant radiative shielding. In quasar J0027-0944, the existence of \ion{Si}{iv} and \ion{C}{iv} absorption means that these clumpy structures avoid overionization, which implies (i) the existence of a shielding medium, according to the classical outflow model, or (ii) that the absorbers are self-shielded. To confirm this further, it would be interesting to check whether there is strong X-ray absorption in J0027-0944. Further study of the X-ray properties of Type II BALs may offer new insight into the survival mechanism of clumpy structures.
An effective way to resolve the clumpy structure of outflows is to use the multiple sightlines provided by gravitationally lensed quasars (e.g. Misawa et al. 2014, 2016). As shown in J0027-0944, for a Type II BAL the decomposition of the BAL into NALs can serve as another way to resolve the clumpy structure of the outflow along the line of sight. Resolving a Type II BAL into NAL components is therefore important: with high-resolution spectra, we can measure the covering factors and column densities of this clumpy structure more accurately than is possible for a whole BAL trough. Based on these physical quantities and using a photoionization model, we can further deduce the size of the absorbing region, the radial distance from the supermassive black hole, the outflow kinetic energy, and the feedback efficiency.
\section{Conclusion}
We have found a \ion{C}{iv} BAL that consists of multiple NAL components in the quasar SDSS J0027-0944. Our main results are as follows.
\begin{enumerate}
\item Each of the \ion{Si}{iv} and \ion{C}{iv} BALs actually consists of at least four blended NAL systems. In the rest-frame time-scale of about 2.4 yr, all these NAL systems show coordinated time variations (weakening), suggesting that they may originate from the same outflow clouds, and that their variations can be interpreted as a result of global changes in the ionization state of the absorbing gases.
\item BALs that consist of a number of NAL components are analogous to intrinsic NALs. The existence of two types of BALs favours the inclination model over the evolution model. This type of BAL, as well as mini-BALs, may form at a position between the NALs and the diffuse-profile BALs.
\item The \ion{Si}{iv} and \ion{C}{iv} BALs in J0027-0944 have the same origin but show different profile shapes. The \ion{Si}{iv} BAL shows multiple absorption troughs over a wider velocity range, while the \ion{C}{iv} BAL shows a P-Cygni profile over a narrower velocity range. These differences could arise from the substantial differences in their fundamental atomic parameters or simply from their physical conditions.
\item A NAL complex, as one form of BAL, indicates the clumpy structure of this type of BAL outflow. Our discovery offers another way to resolve the clumpy structure of the outflow, which is useful for characterizing the clumpy structure of outflows and for testing outflow models when combined with high-resolution spectra of the quasar and photoionization models.
\end{enumerate}
This paper presents the discovery that some BALs actually consist of blended NALs, but the proportion of such BALs is not yet clear. In future work, we will carry out a systematic study of BALs that consist of blended NALs using the large spectroscopic data set of SDSS. In addition, for some typical cases, we will utilize high-resolution spectra and photoionization models to study the outflows in more detail.
\section*{Acknowledgements}
We gratefully thank the anonymous referee for many comments that greatly improved the quality of this article. This work was supported by the National Natural Science Foundation of China (No. 11363001; No. 11763001), and the Guangxi Natural Science Foundation (2015GXNSFBA139004).
Funding for SDSS-III was provided by the Alfred P. Sloan Foundation, the
Participating Institutions, the National Science Foundation, and the US
Department of Energy Office of Science. The SDSS-III web site is
\url{http://www.sdss3.org/}.
SDSS-III is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS-III Collaboration, including the
University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of
Washington, and Yale University.
\section{Introduction}
\label{sec:Introduction}
The upcoming Legacy Survey of Space and Time (LSST) is designed to meet a broad range of science goals with a single 10-year, time-domain survey.
LSST's four key science pillars are: Constraining Dark Energy \& Dark Matter, Taking an Inventory of the Solar System, Exploring the Transient Optical Sky and
Mapping the Milky Way.
With a single-exposure depth of $r{\sim}24.7$, a long temporal baseline, and the ability to cover a third of the sky each night, detecting billions of stars and galaxies and millions of transients and variable objects, this photometric survey will be well equipped for these science cases. In particular, its ability to carry out repeated observations over a wide sky area will deliver unprecedented advances in a diverse set of science cases \citep[][hereafter The LSST Science Book]{Abell2009}.
As members of the LSSTC Transients and Variable Stars (TVS) Classification group, we here take a closer look at how the cadence choice will influence LSST's ability to detect characteristic modulations of amplitude, period and phase of RR Lyrae \citep[the so-called Blazhko effect,][] {Blazhko1907}, which falls into the science goals of mapping the Milky Way and exploring the transient and variable optical sky.
So far, due to the depth and cadence of typical all-sky surveys, it was nearly impossible to study this effect in a large sample. Surveys such as PS1 3$\pi$ \citep{Chambers2016}, with relatively few observations over a moderately long baseline, allow only for fitting the period and phase of RR Lyrae stars while integrating over the complete survey length, thus giving no information on whether the period and/or phase of the light curve might have changed during the survey \citep{Hernitschek2016,Sesar2017a}. On the other hand, surveys specialized in detecting slightly changing light curves through a very finely sampled cadence
\citep[such as TESS, see][]{Ricker2015} usually have a relatively small footprint. LSST's ability to cover a wide area of the sky while making deep, rapid observations, however, will allow for studying variable stars in the Milky Way's old halo in a way that makes population studies possible.
Around 80\% of the survey's observing time will be dedicated to the main or Wide Fast Deep (WFD) survey, covering at least 18,000 deg$^2$. In addition to that, LSST will carry out ``mini-surveys'' (for instance, a dedicated Galactic plane survey), ``deep drilling fields'' (DDFs) and potentially, ``targets of opportunity'' (ToOs).
Covering this vast amount of science goals makes the choice of an observing strategy an important, but also difficult, problem. We emphasize here the community-driven approach for LSST survey strategy optimization, which started with an observing strategy white paper \citep{LSST2017} and led to the LSST Project
Science Team and the LSST Science Advisory Committee releasing a call for community white papers proposing
observing strategies for the LSST WFD survey, as well as the DDFs and mini-surveys \citep{Ivezic2018}. Survey strategies building on those white papers are discussed in \cite{Jones2021}.
Finally, the LSST Survey Cadence Optimization Committee\footnote{\url{https://www.lsst.org/content/charge\%2Dsurvey-cadence-optimization-committee-scoc}} (SCOC) released a series of top-level survey strategy questions\footnote{\url{https://docushare.lsst.org/docushare/dsweb/Get/Document-36755}} which can be analyzed using the simulations of \cite{Jones2021}, leading to ``cadence notes''.
In response to this, we submitted a white paper on a mini-survey of the northern sky to Dec $<$+30 \citep{Capak2018}, as well as a paper on
the impact of LSST template acquisition strategies on early science for transients and variable stars \citep{Hambleton2020}. Recently, we submitted a cadence note\footnote{\url{https://docushare.lsst.org/docushare/dsweb/Get/Document-37673}} which was the precursor of this paper.
In this paper, in order to support decisions on the survey strategy, we develop a quantitative metric, as well as evaluate a number of simulated LSST observing strategies on this metric regarding their ability to detect the characteristic RR Lyrae light-curve modulations of the Blazhko effect.
This paper is part of the ApJS Rubin Cadence Focus Issues, for which an opening paper is available \citep{Bianco2021}.
The paper is structured as follows:
Section \ref{sec:ScienceCases} describes the science cases we investigate. Section \ref{sec:StudyParametersandMethods} gives a brief overview of anticipated observing strategies for LSST, the sets of simulations of different observing strategies, and how we access those data.
In Section \ref{sec:Results}, we present metrics that are capable of assessing the impact of different observing strategies on those science cases, and describe how we analyze the results obtained from running our metric on different cadence simulations. In Section \ref{sec:OptimizationofFilterandCadenceAllocation}, we discuss the results drawn from this analysis and highlight our recommendations for the observing strategy choice regarding our science goals. Finally, in Section \ref{sec:Conclusions}, we summarize our conclusions and recommendations.
\section{Science Cases}
\label{sec:ScienceCases}
Pulsating stars of the RR Lyrae type have been studied for over a century now, as they play an important role in distance estimation as well as in studying the old halo content, such as globular clusters and stellar streams, of our Milky Way. RR Lyrae stars serve as relatively easily detectable standard candles, enabling the calculation of distances from PLZ (period, luminosity, metallicity) relationships.
Depending on their light-curve characteristics, RR Lyrae stars are divided into subclasses. The RRab stars, pulsating in the fundamental radial mode, show high-amplitude, skewed, non-sinusoidal light curves. RRc stars, in contrast, pulsate in the radial first-overtone mode and show nearly sinusoidal light curves with smaller amplitudes than RRab stars. RRd stars pulsate in both modes simultaneously.
The Blazhko effect, first observed by S. Blazhko in 1907 in the star RW Draconis \citep{Blazhko1907}, is a modulation of period, phase and amplitude in RR Lyrae stars. The typical pulsation period of an RR Lyrae star is 0.2 - 1.1 days with a 0.5 - 1 mag amplitude, whereas the modulation due to the Blazhko effect occurs on timescales from weeks to months, with an amplitude a few tenths of a magnitude smaller. The Blazhko effect is observed for about 20 - 30\% of the RRab stars \citep{Szeidl1988, MoskalikPoretti2003} and for less than 5\% of the RRc stars.
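As a concrete, purely illustrative picture of these numbers, the following Python sketch generates a toy light curve whose amplitude and phase are slowly modulated on a Blazhko time-scale; all parameter values are merely representative of the ranges quoted above, not measurements of any particular star:

```python
import numpy as np

def blazhko_light_curve(t, period=0.55, amp=0.8, blazhko_period=40.0,
                        amp_mod=0.2, phase_mod=0.1, mean_mag=18.0):
    """Toy RR Lyrae light curve: a sine pulsation whose amplitude and
    phase are slowly modulated on the Blazhko period (all units: days, mag)."""
    env = np.sin(2.0 * np.pi * t / blazhko_period)  # slow Blazhko envelope
    amplitude = amp * (1.0 + amp_mod * env)         # amplitude modulation
    phase = phase_mod * env                         # phase modulation (in cycles)
    return mean_mag + amplitude * np.sin(2.0 * np.pi * (t / period + phase))

t = np.linspace(0.0, 100.0, 5000)   # 100 days, densely sampled
mag = blazhko_light_curve(t)        # peak-to-peak exceeds the unmodulated 1.6 mag
```

Sampled densely over several Blazhko cycles, the toy curve's full magnitude range exceeds the unmodulated $2\times0.8$ mag, which is exactly the signature a short-baseline metric must be able to pick up.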
The physics behind the Blazhko effect is still under discussion. Among the three primary hypotheses, the first (the resonance model) attributes the modulation to a nonlinear resonance between the fundamental or first-overtone pulsation mode and a higher mode. The second (the magnetic model) attributes the modulation to a magnetic field inclined to the rotational axis of the star, which deforms the main radial mode. The third model assumes cycles occurring in the convection as the cause of the modulations.
The cadence, depth and large footprint of LSST will allow us to address questions such as how frequent the Blazhko effect is, and in addition will allow us to carry out population studies on stars showing these modulation effects. Such studies then can shed more light on the question of underlying physical processes driving the Blazhko effect.
Also, when using RR Lyrae stars as standard candles, determining correct periods is crucial for calculating distances. Better understanding the impact of the Blazhko effect, and taking it into account when calculating distances, will improve our ability to study the halo population of our Milky Way.
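As a reminder of the scales involved, a distance modulus $\mu = m - M$ maps to a distance via $d = 10^{\mu/5 + 1}$ pc. A minimal sketch of this conversion, evaluated at distance moduli of the kind explored with our metric:

```python
def distance_kpc(distance_modulus):
    """Convert a distance modulus mu = m - M into a distance in kpc,
    using d [pc] = 10**(mu/5 + 1)."""
    return 10.0 ** (distance_modulus / 5.0 + 1.0) / 1000.0

# distance moduli of 17, 19 and 21 correspond to roughly 25, 63 and 158 kpc
distances = {mu: distance_kpc(mu) for mu in (17.0, 19.0, 21.0)}
```

A distance modulus above 19 thus already places a star well beyond ${\sim}60$ kpc, i.e. in the outer halo.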
\section{Study Parameters and Methods}
\label{sec:StudyParametersandMethods}
In this section, we give a brief overview of LSST's anticipated observing strategies under discussion for the main WFD survey, as well as the simulations used for evaluating survey strategies within the Metrics Analysis Framework.
\subsection{LSST Observing Strategy}
\label{sec:LSSTObservingStrategy}
Currently under construction at Cerro Pach\'{o}n in northern Chile, the Rubin Observatory will undertake the Legacy Survey of Space and Time (LSST), a 10-year survey expected to start in 2023.
With an 8.4 m (6.7 m effective) diameter primary mirror and a 9.6 deg$^2$ field of view, the telescope will reach a 5$\sigma$ single-exposure depth (over 30 s) in ugrizy of
23.9, 25.0, 24.7, 24.0, 23.3, 22.1 mag. The co-added survey depth will be approximately 26.1, 27.4, 27.5, 26.8, 26.1, 24.9 mag.
An alternative exposure strategy, instead of taking 30 s exposures, is taking two successive exposures of 15 s. The latter will help to
mitigate cosmic-ray and satellite trail artifacts. The decision between these exposure strategies will not be made until the commissioning phase of the LSST survey.
Detailed technical specifications of the Rubin Observatory and LSST can be found in the LSST Science Requirements Document \citep[][hereafter SRD]{Ivezic2013}.
The observing strategy of LSST is impacted by several factors, and its optimization to take into account the survey’s wide range of science drivers is especially challenging.
The LSST SRD lists the following top specifications for the survey:
\begin{itemize}
\item The sky area uniformly covered by the main survey (WFD) will include at least 15,000 deg$^2$, and 18,000 deg$^2$ by design.
\item The sum of the median number of visits in each band across the sky will be at least 750, with a design goal of 825.
\item At least 1,000 deg$^2$ of the sky, with a design goal of 2,000 deg$^2$, will have multiple observations separated by
nearly uniformly sampled time scales ranging from 40 sec to 30 min.
\end{itemize}
The document also emphasizes the influence of observing conditions such as seeing, sky brightness (affected by lunar phase and time of night), transparency, and airmass. The algorithm of the observation scheduler will balance observing goals to maintain a coverage as uniform as possible in time and in location on the sky.
Given the specifications described above, a baseline observing strategy was developed \citep{Jones2021}.
The strategy outlined here is considered the current nominal observing-strategy plan, pending further modifications:
\begin{itemize}
\item Visits are 1 $\times$ 30 s long. (According to the baseline simulation, this results in ${\sim} 2.2 \times 10^6$ visits over the anticipated baseline of 10 years.)
\item To assist with asteroid identification, visits are carried out in pairs. These pairs are scheduled with approximately 22 min separation and are in two filters as follows: $u - g$, $u - r$, $g - r$, $r - i$, $i - z$, $z - y$ or $y - y$.
\item The survey footprint is composed of the main Wide-Fast-Deep survey (WFD, spanning 18,000 deg$^2$ from -62 to +2 degrees excluding the Galactic equator), the North Ecliptic Spur (NES), the Galactic Plane (GP) and South Celestial Pole (SCP). The baseline footprint includes WFD, NES
($griz$ only), GP, and SCP. WFD is ${\sim}82$\% of the total survey time.
\item Deep-drilling fields (DDF) with deeper coverage and a more frequent temporal sampling. The baseline strategy includes five DDFs,
one of them being composed of two pointings covering the Euclid Deep Field-South (EDF-S). DDF make up 5\% of the total survey time.
\item Visits will be distributed between the filters as follows: 6\% in $u$, 9\% in $g$, 22\% in $r$, 22\% in $i$, 20\% in $z$, and 21\% in $y$.
\item As the filter exchange system can hold five of the six filters at a time, the decision as to which five filters are installed is made as follows: If at the start of the night
the moon is 40\% illuminated or more (corresponding to approximately full moon $\pm$ 6 nights), the $z$-band filter is installed; otherwise the $u$-band filter is installed.
\item The camera is rotationally dithered nightly between -80 and 80 degrees.
Rotating the camera will cancel field rotation during an exposure; after that, the camera reverts back to the chosen rotation angle for the next exposure. At the beginning of each night, the rotation is randomly selected.
\item Twilight observations are carried out in $rizy$, and are chosen by an algorithm that maximizes the reward function at each observation.
\item The baseline strategy observes the entire footprint each observing season (``non-rolling cadence''). Under discussion is also a rolling
cadence that would prioritize sections of the footprint at different times (e.g., observing half the footprint for one year
and changing to the other half the next). This would improve the cadence in that section.
\end{itemize}
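Some of the rules above are simple enough to state as code. As one example, the moon-dependent $u$/$z$ filter swap can be sketched as the following toy selection function (a restatement of the stated policy for illustration, not actual scheduler code):

```python
def loaded_filters(moon_illumination):
    """Return the five filters held in the carousel under the moon rule:
    if the moon is >= 40% illuminated at the start of the night, z is
    installed (u removed); otherwise u is installed (z removed)."""
    all_filters = ['u', 'g', 'r', 'i', 'z', 'y']
    removed = 'u' if moon_illumination >= 0.40 else 'z'
    return [f for f in all_filters if f != removed]
```

For example, `loaded_filters(0.9)` drops the $u$ band around full moon, while `loaded_filters(0.1)` drops $z$ in dark time.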
\subsection{Survey and Observing Strategy Simulations}
\label{sec:SurveyandObservingStrategySimulations}
In order to allow for evaluating metrics for potential survey designs, survey simulations have been created.
LSST Operations Simulation \citep[Opsim,][]{Naghib2019, Delgado2016, Delgado2014} generates simulated pointing histories under realistic conditions by using the default scheduler for LSST, the Feature-Based Scheduler \citep[FBS,][]{Naghib2019}. The FBS utilizes a modified Markov Decision Process to decide the next observing direction and filter selection, allowing a flexible
approach to scheduling, including the ability to compute a reward function throughout a night.
New versions of simulated strategies are released regularly with improvements to the scheduler, weather simulation and changes to the baseline strategy. In this paper, we focus on the latest version, LSST OpSim FBS 1.7.
In Table \ref{tab:simulationfamilies}, we give a summary of the different simulations (the so-called survey strategy simulation families) in LSST OpSim FBS 1.7.
A detailed description of LSST OpSim FBS 1.7, including recent survey strategy changes and summary statistics, can be found in the official Rubin Observatory online resources, i.e. the OpSim1.7 Summary Information Document\footnote{\url{https://github.com/lsst-pst/survey_strategy/blob/master/fbs_1.7/SummaryInfo.ipynb}} and the Survey Simulations v1.7 release (January 2021)\footnote{\url{https://community.lsst.org/t/survey\%2Dsimulations-v1-7-release-january-2021/4660}}.
The simulations can be downloaded at \url{http://astro-lsst-01.astro.washington.edu:8081/}.
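Each downloaded simulation is a single SQLite file holding the simulated pointing history, so it can be inspected with standard tools before any metric analysis. The sketch below builds a tiny in-memory table mimicking such a history and counts visits per filter; the table and column names are assumptions for illustration only, since the actual schema differs between OpSim releases:

```python
import sqlite3

# In-memory stand-in for a downloaded OpSim database (schema assumed).
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE observations '
             '("observationStartMJD" REAL, "filter" TEXT, '
             '"fieldRA" REAL, "fieldDec" REAL)')
conn.executemany('INSERT INTO observations VALUES (?, ?, ?, ?)',
                 [(59853.1, 'g', 10.0, -30.0),
                  (59853.2, 'r', 10.0, -30.0),
                  (59854.1, 'g', 12.0, -28.0)])

# Visits per filter -- the kind of quick sanity check one might run
# on a simulation before feeding it to a metric.
counts = dict(conn.execute(
    'SELECT "filter", COUNT(*) FROM observations GROUP BY "filter"'))
```

The same pattern (one SQL query per summary quantity) scales directly to the millions of visits in a full 10-year simulation.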
\clearpage
\begin{longtable*}{p{1.8cm}p{5.5cm}p{8cm}}
\caption{Description of LSST OpSim FBS 1.7 release simulation families, including a brief explanation of the family.}\label{tab:simulationfamilies}\\
\hline
simulation family & simulation runs & Description \\
\hline
baseline & \texttt{baseline\_nexp1\_v1.7\_10yrs.db}, \texttt{baseline\_nexp2\_v1.7\_10yrs.db} & Simulations of the baseline strategy, with 1$\times$30 s and 2$\times$15 s visits, respectively. The baseline survey footprint includes the main Wide-Fast-Deep (WFD) survey, several mini-surveys and the so-called Deep Drilling Fields (DDF).
All other runs in LSST OpSim FBS 1.7 have 2$\times$15 s visits and thus should be compared to \texttt{baseline\_nexp2\_v1.7\_10yrs.db}. \\
footprint\_tune & \texttt{footprint\_\{i\}\_v1.7\_10yrs},\newline \texttt{i = \{0,...,8\}} & Test of varying survey footprints based on increasing the low-extinction area of the WFD; coverage of the bulge and outer Milky Way disk as part of the WFD area is also tested.\\
rolling & \texttt{rolling\_\{i\}scale\{j\}\_nslice\{k\}\_}\newline\texttt{v1.7\_10yrs}, \newline \texttt{i = \{,nm\_\}},\newline \texttt{j = \{0.2, 0.4, 0.6, 0.8, 0.9, 1.0\}}, \texttt{k=\{2,3\}} &
Rolling cadence means that some parts of the sky receive a higher number of visits during an `on' season, followed by a lower number of visits during an `off' season. Tests of different rolling cadence scenarios divide the sky in half (nslice2) or thirds (nslice3), varying the weight of the rolling from 20\% (scale0.2) to 100\% (scale1.0). In addition, the \texttt{nm} rolling simulations are tested, which do not include the daily north/south modulation.\\
ddf\_dithers & \texttt{ddf\_dither\{i\}\_v1.7\_10yrs}, \newline \texttt{i = \{0, 0.05, 0.10, 0.30, 0.70, 1.00, 1.50, 2.00\}} & Test of the size of the translational dither offsets in the DDF, varying from 0 to 2 degrees. Smaller dithers will help the overall depth and uniformity, but larger dithers may be needed for calibration.\\
euclid\_dithers & \texttt{euclid\_dither\{i\}\_v1.7\_10yrs}, \newline \texttt{i = \{1,...,5\}} & Tests of varying translational dither offsets for the Euclid DDF footprint. The simulations vary the spatial dither both towards and perpendicular to the second pointing.\\
pair\_times & \texttt{pair\_times\_\{i\}\_v1.7\_10yrs}, \newline \texttt{i= \{11, 22, 33, 44, 55\} } & Tests of varying pair time from 11 to 55 minutes. In the baseline simulation, observations are typically taken in pairs separated by ${\sim}$22 minutes. \\
twilight\_pairs & \texttt{twi\_pairs\_v1.7\_10yrs, twi\_pairs\_mixed\_v1.7\_10yrs, twi\_pairs\_repeat\_v1.7\_10yrs, twi\_pairs\_mixed\_repeat\_v1.7\_10yrs} & Test of pair twilight visits rather than single visits as in the baseline strategy.
The baseline simulations do not attempt to pair twilight visits. Depending on the simulation, twilight observations are paired in either the same filter or a different filter (mixed). The twilight observations can also be set to attempt to re-observe areas of the sky that have already been observed in the night (repeat). The simulations are as follows: twilight pairs in the same filter, twilight pairs in mixed filters, twilight pairs in the same filter with repeated area, twilight pairs in mixed filters with repeated area\\
twi\_neo & \texttt{twi\_neo\_pattern\{i\}\_v1.7\_10yrs}, \texttt{i=\{1,...,7\}} & Test of the impact of adding a twilight NEO survey which includes a large number of 1 s observations. These visits are acquired in both morning and evening twilight, in sets of triplets separated by about 3 minutes.\\
u\_long & \texttt{u\_long\_ms\_\{i\}\_v1.7\_10yrs}, \newline \texttt{i=\{30, 40, 50, 60\}} & Test of different $u$ band exposure times. Observations in the $u$ filter are taken as single exposures. The number of $u$ band visits is left unchanged, resulting in a shift of visits from other filters to compensate for the increase in time. (Exception: DDF $u$-band observations are the default 2$\times$15s exposures).\\
filter\_cadence & \texttt{cadence\_drive\_gl\{i\}\_\{j\}v1.7\_10yrs}, \texttt{i=\{30,100,200\}}, \texttt{j=\{, gcb\} } & Test of impact of reducing gaps between $g$ band visits over the month. Different limits on how many $g$-band observations are taken. Long gaps in the $g$-band exposures are avoided by fill-in visits. \\
new\_rolling & \texttt{rolling\_nm\_scale0.90\_nslice2\_}\newline\texttt{fpw0.9\_nrw1.0v1.7\_10yrs}, \texttt{rolling\_nm\_scale0.90\_nslice3\_}\newline\texttt{fpw0.9\_nrw1.0v1.7\_10yrs}, \texttt{full\_disk\_v1.7\_10yrs},
\texttt{full\_disk\_scale0.90\_nslice2\_}\newline\texttt{fpw0.9\_nrw1.0v1.7\_10yrs}, \texttt{full\_disk\_scale0.90\_nslice3\_}\newline\texttt{fpw0.9\_nrw1.0v1.7\_10yrs}, \texttt{footprint\_6\_v1.7.1\_10yrs}, \texttt{bulge\_roll\_scale0.90\_nslice2\_}\newline\texttt{fpw0.9\_nrw1.0v1.7\_10yrs}, \texttt{bulge\_roll\_scale0.90\_nslice3\_}\newline\texttt{fpw0.9\_nrw1.0v1.7\_10yrs}, \texttt{six\_stripe\_scale0.90\_nslice6\_}\newline\texttt{fpw0.9\_nrw0.0v1.7\_10yrs} & Tests of survey footprints with more emphasis on the Galactic plane, including rolling cadence in these variations in order to explore the impact on transients in the Galactic plane. The rolling cadence algorithm is different from the one in the above ``rolling'' simulations in the way it suppresses revisits within the same night and thus more effectively decreases the length of inter-night gaps between visits. \\
\hline
\end{longtable*}
\subsection{Metrics Analysis Framework}
\label{sec:MetricsAnalysisFramework}
The LSST Metrics Analysis Framework \citep[MAF,][]{Jones2014} is a Python framework designed to easily evaluate aspects of the simulated survey strategies\footnote{\url{https://sims-maf.lsst.io/}}. MAF computes
simple quantities, such as the total co-added depth or number of visits, but it can also be used to construct more
complex metrics tailored towards specific science cases such as our new metric\footnote{\url{https://github.com/LSST-nonproject/sims_maf_contrib}\newline\url{https://github.com/ninahernitschek/LSST_cadencenote}}.
To facilitate metric analysis of the LSST OpSim outputs, NOIRLab's Community Science \& Data Centre (CSDC) provides the LSST MAF software packages and simulations outputs to their Astro Data Lab science platform.
\section{Results: Survey Strategy Metric and Evaluation}
\label{sec:Results}
In this section, we describe the metric we developed to evaluate the feasibility of recovering light-curve modulation in RR Lyrae stars, as well as the results from evaluating this metric on several OpSim FBS 1.7 databases.
\subsection{The Metric}
\label{sec:TheMetric}
Using the most recent survey simulations (from the LSST OpSim FBS 1.7 release), in this work we develop metrics to assess the effectiveness of different cadences for the LSST survey regarding the detection of amplitude, period, and phase modulation effects (e.g., the Blazhko effect) in RR Lyrae stars. We focus here on the optimization of the WFD survey; for this reason, we do not look at simulations that test different cadences in the DDFs and mini-surveys. All simulations, however, contain DDFs and mini-surveys; see the description of the simulations in Tab. \ref{tab:simulationfamilies}.
To evaluate the feasibility of recovering light-curve modulation in RR Lyrae stars, such as caused by the Blazhko effect, we have developed a metric \texttt{PeriodicStarModulationMetric}.
While our primary purpose is studying the detection of the Blazhko effect, this metric is not solely aimed at that purpose, but at evaluating variable star light curves from short time intervals in general. We evaluate our metric on the OpSim FBS 1.7 databases \texttt{baseline\_nexp2\_v1.7\_10yrs.db} (baseline), \texttt{rolling\_scale0.2\_nslice2\_v1.7\_10yrs.db} (rolling cadence) and \texttt{pair\_times\_55\_v1.7\_10yrs.db} (pair times cadence), see
the OpSim1.7 Summary Information Document and Tab. \ref{tab:simulationfamilies}.
The goal of our metric is to calculate the fraction of recovered RRab and RRc stars as a function of the light-curve length (time interval within the first 2 years of LSST) and the distance modulus. The metric is based on the \texttt{sims\_maf\_contrib} \texttt{PeriodicStarMetric}, which we modified to mimic attempts to identify a change in period, phase or amplitude in RR Lyrae stars.
We have not implemented this modulation in the light curve itself, as the modulation can take very different forms. Instead, we took a more general approach and investigate how well period, phase and amplitude can be identified from a variable star's light curve on rather short baselines (15, 20, 30, 50 days). This approach is also useful for other purposes, e.g. testing whether period, phase and amplitude can be recovered from short baselines at all, without necessarily looking for light-curve modulations.
Like in \texttt{PeriodicStarMetric}, the light curve of an RR Lyrae star, or of a periodic star in general, is approximated as a simple sine wave. (A future version might make use of light-curve templates to generate light curves, see e.g. the RRab and RRc light curves from \cite{Sesar2012}.)
Instead of evaluating the complete light curve at once, we split light curves into time intervals.
We then measure the fraction of light curves whose parameters are recovered correctly (within a given tolerance) from those time intervals.
Two other modifications we introduced for the \texttt{PeriodicStarModulationMetric} are:
In contrast to \texttt{PeriodicStarMetric}, we allow for a random phase offset to mimic observations starting at a random phase.
Also, we vary the periods and amplitudes within $\pm$10 \% to allow for a more realistic sample of variable stars.
The metric is calculated using HEALPix\footnote{\url{http://healpix.sourceforge.net/}} \citep{Gorski2005} maps, with pixel resolution of 7$^{\circ}$.33 (achieved using the HEALPix resolution parameter Nside = 8).
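The quoted pixel size follows directly from the HEALPix scheme, which divides the full sky into $12\,N_{\mathrm{side}}^2$ equal-area pixels; a quick arithmetic check:

```python
import math

def healpix_resolution_deg(nside):
    """Approximate HEALPix pixel size: the square root of the pixel area,
    with the full sky covering about 41253 deg^2 split into 12*nside^2
    equal-area pixels."""
    npix = 12 * nside ** 2
    return math.sqrt(41252.96 / npix)

res = healpix_resolution_deg(8)   # ~7.33 degrees for Nside = 8
```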
To test whether the parameters amplitude, period and phase offset of a simulated light curve can be correctly recovered, we run
a simple \texttt{curve\_fit} (\texttt{scipy.optimize}).
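A simplified stand-in for this fitting step is sketched below; the sampling pattern, noise level, starting values and tolerances are illustrative only and do not reproduce the metric's actual per-pixel cadence:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, period, amp, phase, mean_mag):
    # sine-wave light curve, as assumed by the metric
    return mean_mag + amp * np.sin(2.0 * np.pi * (t / period + phase))

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 200))       # visits within a 30-day interval
true_period, true_amp = 0.55, 0.8              # RRab-like values (illustrative)
mag = model(t, true_period, true_amp, 0.3, 18.0) + rng.normal(0.0, 0.02, t.size)

# Fit starting near the truth (the metric knows its own simulated input).
popt, _ = curve_fit(model, t, mag, p0=[0.55, 0.7, 0.25, 18.0])
period_recovered = abs(popt[0] - true_period) / true_period < 0.01
```

In the metric itself, a light curve counts as recovered when the fitted parameters agree with the simulated input within the chosen tolerance.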
We evaluate our metric for different values of the distance modulus (17.0, 18.0, 19.0, 20.0, 21.0, 22.0) as well as different lengths of time intervals
(15, 20, 30, 50 days) on sin-wave light-curves with amplitudes and periods typical for RRab and RRc stars.
Code and figures relevant for the metric \texttt{PeriodicStarModulationMetric} can be found in our GitHub repository\footnote{\url{https://github.com/ninahernitschek/LSST\_cadencenote}}.
\subsection{Analysis of Metric Results}
\label{sec:AnalysisOfMetricResults}
We evaluate our metric on simulated 2-year light curves for the OpSim FBS 1.7 databases \texttt{baseline\_nexp2\_v1.7\_10yrs.db} (baseline), \texttt{rolling\_scale0.2\_nslice2\_v1.7\_10yrs.db} (rolling cadence) and \texttt{pair\_times\_55\_v1.7\_10yrs.db} (pair times cadence).
Light curves were simulated for both RRab and RRc stars. Plots showing the evaluation of the metric for RRab and RRc stars can be found in Fig.~\ref{fig:RRab} and Fig.~\ref{fig:RRc}, respectively. In both figures, we plot the area over which a given fraction of stars with a given distance modulus (17 to 22) can be identified from a light curve with a given time interval (15 to 50 days).
\begin{figure*}
\includegraphics[width = 1\textwidth]{figures/opsim_PeriodicStarModulationMetric_night_lt_365_2_HEAL_RRab_gal_Histogram_combined.pdf}
\caption{Metric evaluation for RRab light curves. We plot the area (in 1000s of square degrees) for which a given fraction of RRab stars with a given distance modulus (17 to 22) from a light curve with a given time interval (15 to 55 days) can be identified. The metric was evaluated on simulated 2-year light curves for OpSim 1.7 databases \texttt{baseline\_nexp2\_v1.7\_10yrs.db} (baseline), \texttt{rolling\_scale0.2\_nslice2\_v1.7\_10yrs.db} (rolling cadence) and \texttt{pair\_times\_55\_v1.7\_10yrs.db} (pair times cadence).}
\label{fig:RRab}
\end{figure*}
\begin{figure*}
\includegraphics[width = 1\textwidth]{figures/opsim_PeriodicStarModulationMetric_night_lt_365_2_HEAL_RRc_gal_Histogram_combined.pdf}
\caption{Metric evaluation for RRc light curves. We plot the area (in 1000s of square degrees) for which a given fraction of RRc stars with a given distance modulus (17 to 22) from a light curve with a given time interval (15 to 55 days) can be identified. The metric was evaluated on simulated 2-year light curves for OpSim 1.7 databases \texttt{baseline\_nexp2\_v1.7\_10yrs.db} (baseline), \texttt{rolling\_scale0.2\_nslice2\_v1.7\_10yrs.db} (rolling cadence) and \texttt{pair\_times\_55\_v1.7\_10yrs.db} (pair times cadence).}
\label{fig:RRc}
\end{figure*}
\clearpage
\subsubsection{LSST Baseline Survey Results}
\label{sec:LSSTBaselineSurveyResults}
In Fig. \ref{fig:RRab}, we plot the area (in 1000s of square degrees) over which a given fraction of RRab stars can be correctly detected, i.e., the light-curve parameters are retrieved within the allowed tolerances. We do this for time intervals of 15 to 55 days and a distance modulus spanning 17 to 22.
As expected, the area over which a relevant fraction of RRab and RRc stars can be correctly recovered from such short time intervals drops significantly for a distance modulus $>$ 20.
For a distance modulus of up to 19, more than half of the light curves can be fitted correctly over more than half of the survey footprint using time intervals of 30 days.
For a distance modulus of 21, we have to move to a time interval of 50 days to obtain a correct fit for 10\% of the light curves.
Also, as expected, the recovery rate increases with the length of the time interval. We want to highlight that even for a distance modulus of 20, more than 30\% of the light curves can be recovered correctly over half of the survey footprint from a time interval of 50 days. For lower distance moduli, the same recovery rate can be achieved with time intervals of 15 to 20 days.
As the modulation due to the Blazhko effect happens on timescales from weeks to months, using a time interval larger than 20 days is reasonable.
As we are especially interested in RR Lyrae stars in the outer halo, we are dealing with a distance modulus $>$ 19.
We now take a look at the influence of different survey strategy choices.
\subsubsection{Alternative Survey Strategy}
\label{sec:AlternativeSurveyStrategy}
For the rolling cadence (evaluated only on \texttt{rolling\_scale0.2\_nslice2\_v1.7\_10yrs.db}), we find a slightly higher or slightly lower fraction of recovered light curves per area than for the baseline survey, depending on distance modulus and time interval, but always fewer recovered light curves per area than for the pair times cadence (\texttt{pair\_times\_55\_v1.7\_10yrs.db}).
In this comparison, we thus see a slight advantage of the pair times cadence over the other cadences tested.
Varying the pair time changes the overall number of filter changes per night, so longer pair times (55 minutes in our case) result in more visits overall in the survey. In addition, this better matches the typical timescale of RR Lyrae light curves: the standard baseline attempts pairs at 22 minutes, whereas here we have chosen 55 minutes as the timescale over which RR Lyrae stars vary significantly.
The survey strategy of a rolling cadence means that some parts of the sky receive a higher number of visits during an `on' season,
followed by a lower number of visits during an `off' season, while during the first and the last year and a half, the sky is covered uniformly as normal. We have not yet tested the influence of cadence on a full 10-year survey, but our tests so far suggest that a rolling cadence might perform worse.
\section{Discussion: Optimization of Filter and Cadence Allocation}
\label{sec:OptimizationofFilterandCadenceAllocation}
In this section, we discuss our results drawn from the analysis of the metric on the simulated observing strategy, and highlight our recommendations for the observing strategy choice regarding our science goals.
As RR Lyrae stars are fairly blue stars (spectral types A and F), the bluer filters ($u$, $g$) are particularly important for RR Lyrae science, and we thus want to have a high cadence in those filters.
Detecting and characterizing relatively faint RR Lyrae stars in the old Milky Way's halo would in addition benefit from an increased $u$ band exposure time of 1$\times$50 sec.
The relevant simulation runs are in the family `u\_long', which tests different $u$ band exposure times, and `filter\_cadence', which adds $g$ band exposures with the primary goal of improving the discovery of longer-timescale transients such as SNe (see Tab. \ref{tab:simulationfamilies}).
Therefore, we recommend that the number of $u$, $g$ observations is increased in the WFD cadence plan to benefit the transient and variable star science.
In addition, the simulation family `pair\_times' deals with the question of obtaining two visits in a pair in the same vs. in different filters.
Observations in different filters are helpful for classification based on colors, for example to first identify RR Lyrae stars (and other standard candles) when light curves are too sparse to calculate periods. However, our goal of obtaining precise periods, phases and amplitudes benefits from having more observations in the same filter.
The simulation with a visit-pair spacing of 55 minutes (\texttt{pair\_times\_55\_v1.7\_10yrs.db}) improves the recovery rate of RR Lyrae periods, phases and amplitudes from short-time light curves, as there is no significant variability on, e.g., the 20-minute baseline.
More widely spaced visit pairs would thus likely improve the ability to recover those light-curve parameters.
We investigated whether our metric benefits from cadence allocations as present in the simulation family `rolling'.
A rolling cadence means that some parts of the sky receive a higher number of visits during an `on' season, followed by a lower number of visits during an `off' season.
As this scenario provides higher-cadence light curves, the rolling cadence could benefit variable stars investigation, especially in the Galactic Plane. However, for variability analysis of RR Lyrae stars, we are looking for high-latitude objects in the old Milky Way's halo.
Our simulations for the rolling cadence have shown that the results are worse than for the baseline survey strategy.
This also agrees with other, more general analyses of the coverage of one-day timescales (see the Cadence Note by Bellm et al.\footnote{\url{https://docushare.lsst.org/docushare/dsweb/Get/Document-37644/Delta_T_2021.pdf}}): they find that the \texttt{rolling\_scale\*} and \texttt{alt\_roll} simulations have very poor (sub-percent) coverage of one-day timescales, the \texttt{rolling\_nm\_scale1.0\_nslice2} result is close to the \texttt{baseline}, and
\texttt{rolling\_nm\_scale0.90\_nslice3\_fpw0.9\_nrw1.0} approaches the \texttt{pair\_times\_55} simulation in its effective timescale coverage.\\
\section{Conclusions}
\label{sec:Conclusions}
The LSST survey shows great potential for carrying out the science goals of mapping the Milky Way and exploring the transient and variable optical sky.
Here we explored in particular its potential to detect the characteristic modulations of amplitude, period and phase shown by many RR Lyrae stars, the
so-called Blazhko effect.
Systematically studying this effect, which has been known for more than 100 years \citep{Blazhko1907}, has so far been difficult: all-sky surveys such as
PS1 3$\pi$ lack the necessary number of observations within the rather short time frame of this effect, which ranges from weeks to months, whereas surveys with the necessary cadence to clearly show the Blazhko effect, such as TESS, have a relatively small footprint, which makes population studies difficult.
With the upcoming LSST survey, which will cover a wide area of the sky with deep, rapid observations, we see an exciting possibility in carrying out such population studies.
In this paper, we have developed a metric to investigate the impact of observing strategy choices on the detectability of the Blazhko effect.
We compared the metric on simulation runs from the `pair\_times' and `rolling' families with those from the baseline observing strategy.
From our results, our first recommendation for the observing strategy choice regarding our (and likely similar) science goals is to have a higher number of subsequent observations in the same filter with a medium-length spacing, such as 55 minutes. This improves the recovery rate of RR Lyrae periods, phases and amplitudes in contrast to the baseline strategy, as there is no significant variability on, e.g., the 20-minute baseline.\newline
Our second recommendation is to use no rolling cadence observing strategy, as this would only improve the light curve cadence for variable stars in the Galactic Plane, whereas RR Lyrae stars are high-latitude objects in the old Milky Way's halo. This also agrees well with similar investigations from other Cadence Notes submitted.
\begin{acknowledgements}
This paper was created in the nursery of the Rubin LSST Transient and
Variable Star Science Collaboration
\footnote{\url{https://lsst-tvssc.github.io/}}. The authors acknowledge the
support of the Vera C. Rubin Legacy Survey of Space and Time Transient and
Variable Stars Science Collaboration that provided opportunities for
collaboration and exchange of ideas and knowledge. The authors are thankful for
the support provided by the Vera C. Rubin Observatory MAF team in the creation
and implementation of MAFs. The authors acknowledge the support of the LSST
Corporations that enabled the organization of many workshops and hackathons
throughout the cadence optimization process.
This research uses services or data provided by the Astro
Data Lab at NSF’s National Optical-Infrared Astronomy Research Laboratory. NOIRLab is operated by the
Association of Universities for Research in Astronomy
(AURA), Inc. under a cooperative agreement with the
National Science Foundation.
Software: LSST metrics analysis framework \citep[MAF;][]{Jones2014}; Astropy (Astropy Collaboration et al. 2013); JupyterHub\footnote{\url{https://jupyterhub.readthedocs.io/en/stable/index.html}}.
\end{acknowledgements}
\clearpage
\section{Introduction}
Consider a connected undirected graph $G$ on $n$ vertices, where the edges $e$ have positive real lengths $\ell(e)$.
Consider an entity -- let's call it a robot -- that can move at speed $1$ along edges.
There are many different rules one might specify for how the robot chooses which edge to take after reaching a vertex
-- for instance the ``random walk" rule, to choose edge $e$ with probability proportional to $\ell(e)$ or $1/\ell(e)$.
One well-studied aspect of the random walk is the {\em cover time}, the time until every vertex has been visited -- see
Ding, Lee and Peres \cite{cover} for references to
special examples and surprisingly deep connections with other fields.
This article instead concerns what we will call\footnote{Confusingly, this walk was previously called {\em nearest neighbor}, which is inconsistent with the usual terminology that neighbors are linked by a single edge, but justifiable by the artifice of extending the given graph to a complete graph via defining each edge $(v,v^*)$ to have length $d(v,v^*)$. But the phrase {\em nearest neighbor} is used in many other contexts, so the more precise name NUV seems preferable.}
the {\em nearest unvisited vertex} (NUV) walk, defined as follows.
A path of edges has a length, the sum of edge-lengths, and the distance $d(v,v^*)$
between vertices is the length of the shortest path.
For simplicity assume all such distances are distinct, so the shortest path is unique.
Now the NUV walk is the deterministic walk defined in words by
\begin{quote}
after arriving at a vertex, next move at speed $1$ along the path to the closest unvisited vertex
\end{quote}
and continue until every vertex has been visited.\footnote{This {\em walk} convention is consistent with random walk cover times; one could alternatively
use the {\em tour} convention that the walk finally returns to its start, consistent with TSP.}
In symbols, from initial vertex $v_0$
the vertices can be written $v_0,v_1,v_2, \ldots,v_{n-1}$ in order of first visit;
\begin{equation}
v_i = \arg \min_{v \not\in \{v_0,\ldots,v_{i-1}\}} d(v_{i-1},v) , \quad 1 \le i \le n-1
\label{vii}
\end{equation}
and this walk has length $L = L_{NUV} = L_{NUV}(G,v_0) = \sum_{i=1}^{n-1} d(v_{i-1},v_i)$.
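As a concrete sketch, the greedy rule (\ref{vii}) can be implemented directly on a weighted adjacency list, recomputing shortest-path distances from the current vertex at each step (here with Dijkstra's algorithm); the representation and function names are our own illustration:

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source in a graph given as
    adj[v] = list of (neighbor, edge_length) pairs."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for w, length in adj[v]:
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist

def nuv_walk(adj, v0):
    """Order of first visits and total length L_NUV of the
    nearest-unvisited-vertex walk started at v0."""
    visited = [v0]
    length = 0.0
    while len(visited) < len(adj):
        dist = dijkstra(adj, visited[-1])
        nxt = min((v for v in adj if v not in visited), key=dist.__getitem__)
        length += dist[nxt]
        visited.append(nxt)
    return visited, length
```

For example, on the path graph $0$--$1$--$2$ with $\ell(0,1)=1$ and $\ell(1,2)=2$, the walk started at vertex $1$ visits $1,0,2$ with total length $4$: a step of length $1$ to vertex $0$, then the length-$3$ shortest path to vertex $2$.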
There are several types of question one can ask about NUV walks.
\begin{itemize}
\item The order of magnitude of $L$ for a general graph?
\item Sharper estimates of $L$ for specific models of random graphs?
\item Structural properties of the NUV path in different contexts?
\end{itemize}
The first question has been studied in the context of TSP (travelling salesman problem) heuristics and robot motion,
and a 2012 survey of the general area, under the name {\em online graph exploration}, is given in
Megow, Mehlhorn and Schweitzer \cite{megow}.
\subsection{Outline of results}
Our first purpose is to record a formalization (Proposition \ref{P:1}) of the basic general relationship between $L_{NUV}$ and ball-covering.
This is implicit in two now-classical results:
Corollary \ref{C:1}, which compares $L_{NUV}$ to the length $L_{TSP}$ of the shortest path through all $n$ vertices,
and Corollary \ref{C:2}, which upper bounds $L_{NUV}$ for $n$ arbitrary points in the unit square with Euclidean distance.
As shown in section \ref{sec:basic}, each follows easily from our formalization.
Our main purpose is to point out that the relation with ball-covering enables (in some simple probability models)
the order of magnitude of $L$ to be deduced easily from known first passage percolation estimates.
In section \ref{sec:FPP} we study two specific models.
\begin{itemize}
\item
For the $m \times m$ grid with i.i.d. edge-lengths, Corollary \ref{C:grid} shows that $L$ is indeed $O(m^2)$ rather than larger order.
\item
For the complete graph on $n$ vertices, with i.i.d. edge-lengths normalized so that the shortest edge at a vertex is order $1$,
Corollary \ref{C:MF} shows that $L$ is indeed $O(n)$ rather than larger order.
\end{itemize}
In both of those models the (first-order) behavior of first passage percolation is well understood, via the {\em shape theorem} on the two-dimensional grid,
and the Yule process approximation on the complete graph model.
A final purpose is to point out that the second and third questions above have apparently never been studied.
The NUV rule on a deterministic graph is ``fragile" in the sense that small changes in the length of an edge might affect a large proportion of the walk,
but it is possible that introducing random edge-lengths might ``smooth" the typical properties of the walk on a random graph.
We defer further general discussion to section \ref{sec:remarks}.
\section{Basics}
\label{sec:basic}
\subsection{Relation with ball-covering}
A basic mathematical observation is that $L_{NUV}$ is related to ball-covering\footnote{And thereby to {\em metric entropy} -- see section \ref{sec:order}}.
Given $r>0$ define $N(r) = N(G,r)$ to be the minimal size of a set $\SS$ of vertices such that every vertex is within distance $r$ from
some element of $\SS$.
In other words, the union over $s \in \SS$ of the balls of radii $r$ centered at $s$ covers the entire graph.
\begin{Proposition}
\label{P:1}
(i) $N(r) \le 1 + L_{NUV}/r, \ 0 < r < \infty $.\\
(ii) $L_{NUV} \le 2 \int_0^{\Delta/2} N(r) \ dr $ where
$\Delta = \max_{v,w} d(v,w)$ is the diameter of the graph.
\end{Proposition}
{\bf Proof.\ }
Inequality (i) is almost obvious.
As at (\ref{vii}), write the vertices as $v_0,v_1,v_2, \ldots,v_{n-1}$ in order of first visit by the NUV walk, and say $v_i$ has rank $i$.
Write $\zeta(v_i) = \sum_{j=0}^{i-1} d( v_j, v_{j+1})$ for the length of the walk up to $v_i$.
Select vertices $(z(k), 0 \le k \le k^* - 1)$ along the walk by selecting the first vertex at distance $>r$ along the walk after the previously selected vertex.
That is, $z(k) = v_{I(k)}$ where $I(0) = 0$ and for $k \ge 0$
\[ I(k +1) = \min \{i > I(k) : \zeta(v_i) - \zeta(v_{I(k)} ) > r \} \]
until no such $i$ exists.
By construction every vertex is within distance $r$ of some $z$, and the number $k^*$ of selected vertices is at most
$1 + L_{NUV}/r$.
This establishes (i).
For inequality (ii), write
$D(v_i) = d(v_i,v_{i+1})$ for the length of the {\em path} (which may encompass several edges)
from the rank-$i$ vertex to the rank-$(i+1)$ vertex, and $D(v_{n-1}) = 0$.
The argument rests upon the following simple observation, illustrated in Figure \ref{Fig:1}.
Fix a vertex $v^*$ and a real $r > 0$, and consider the set of vertices within distance $r$ from $v^*$:
\[ B(v^*,r) := \{v : d(v,v^*) \le r \} . \]
Consider the vertex $\bar{v}$ of highest NUV-rank within $B(v^*,r)$.
When the NUV walk first visits $v_i \in B(v^*,r)$ with $v_i \neq \bar{v}$,
there is then some first unvisited vertex $\tilde{v}$ on the minimum-length path from $v_i$ to $\bar{v}$, and so
\[ D(v_i) \le d(v_i,\tilde{v}) \le d(v_i,\bar{v}) \le 2r \]
the final inequality using the triangle inequality via $v^*$.
We conclude that
\begin{equation}
\mbox{ $D(v) \le 2r$ for all $v \in B(v^*,r)$ except perhaps one vertex}.
\label{newq}
\end{equation}
Now by considering a set, say $S(r)$, containing $N(r)$ vertices, such that every vertex is within distance $r$ from some element of $S(r)$,
inequality (\ref{newq}) implies
\begin{equation}
\mbox{the number of vertices $w$ with $D(w) > 2r$
is at most $N(r)$. }
\label{eq}
\end{equation}
Because $D(w)$ is bounded by the graph diameter $\Delta$,
for a uniformly random vertex $J$ we have
\begin{eqnarray*}
L_{NUV} &=& n \mathbb{E}[D(J)] \\
&= & n \int_0^\Delta P(D (J)>r)dr \\
&= & \int_0^\Delta \mbox{(number of vertices $w$ with $D(w) > r$)} \ dr \\
&\le& \int_0^\Delta \ N(r/2) dr
\end{eqnarray*}
which is equivalent to (ii).
\ \ \rule{1ex}{1ex}
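As a numerical sanity check (our own construction, not from the literature), both parts of Proposition \ref{P:1} can be verified on a small graph, computing $N(r)$ by brute force over vertex subsets and bounding the integral in (ii) from above by a left-endpoint Riemann sum, which is valid since $N$ is nonincreasing:

```python
import heapq, itertools

# Small weighted graph: a 6-cycle with a chord; lengths chosen so that
# no ties arise along the walk (the proposition assumes distinct distances).
edges = [(0, 1, 1.1), (1, 2, 2.3), (2, 3, 1.7),
         (3, 4, 2.9), (4, 5, 1.3), (0, 5, 3.1), (1, 4, 4.4)]
n = 6
adj = {v: [] for v in range(n)}
for u, v, l in edges:
    adj[u].append((v, l))
    adj[v].append((u, l))

def dijkstra(src):
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        dv, v = heapq.heappop(heap)
        if dv > dist.get(v, float("inf")):
            continue
        for w, l in adj[v]:
            if dv + l < dist.get(w, float("inf")):
                dist[w] = dv + l
                heapq.heappush(heap, (dv + l, w))
    return dist

dist = [dijkstra(v) for v in range(n)]            # all-pairs distances
Delta = max(dist[v][w] for v in range(n) for w in range(n))

# NUV walk from vertex 0, and its length L.
visited, L = [0], 0.0
while len(visited) < n:
    cur = visited[-1]
    nxt = min((v for v in range(n) if v not in visited),
              key=lambda v: dist[cur][v])
    L += dist[cur][nxt]
    visited.append(nxt)

def N(r):
    """Minimal number of radius-r balls covering all vertices (brute force)."""
    for k in range(1, n + 1):
        for S in itertools.combinations(range(n), k):
            if all(min(dist[s][v] for s in S) <= r for v in range(n)):
                return k

# Upper Riemann sum for the integral in part (ii): since N is
# nonincreasing, the left-endpoint sum over-estimates the integral,
# so  L <= 2 * upper  is implied by the proposition.
M = 500
dr = (Delta / 2) / M
upper = sum(N(i * dr) * dr for i in range(M))
```

On this graph the walk visits $0,1,2,3,4,5$ in order, so $L = 1.1+2.3+1.7+2.9+1.3 = 9.3$, and both inequalities of the proposition hold with room to spare.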
\begin{figure}
\setlength{\unitlength}{0.14in}
\begin{picture}(20,20)(-5,-10)
\put(0,0){\circle{17}}
\put(0,0){\circle{1}}
\put(-8.1,4.1){\line(1,-1){1.9}}
\put(-6,2){\line(1,-2){0.9}}
\put(-5,0){\line(-1,-1){1.3}}
\put(-6.5,-1.5){\line(-1,-1.7){1.7}}
\put(-6,2){\circle*{0.39}}
\put(-5,0){\circle*{0.39}}
\put(-6.5,-1.5){\circle*{0.39}}
\put(1.7,8.7){\line(-1,-1){2.4}}
\put(-1,6){\line(0,-1){2.7}}
\put(-1,3){\line(1,-3){0.88}}
\put(0,0){\line(-1,-1){1.8}}
\put(-2,-2){\line(1,-2){0.85}}
\put(-1,-4){\line(-2,-1){1.85}}
\put(-1,-4){\line(1,-3){0.95}}
\put(0,-7){\line(1,-1){1.95}}
\put(-1,6){\circle*{0.39}}
\put(-1,3){\circle*{0.39}}
\put(0,0){\circle*{0.39}}
\put(-2,-2){\circle*{0.39}}
\put(-1,-4){\circle*{0.39}}
\put(-3,-5){\circle*{0.39}}
\put(0,-7){\circle*{0.39}}
\put(9.5,0){\line(-1,1){1.85}}
\put(7.5,2){\line(-1,0){1.2}}
\put(6,2){\line(-3,-2){2.7}}
\put(3,0){\line(1,-1){2.8}}
\put(6,-3){\line(-1,-1){0.8}}
\put(5,-4){\line(-1,2){1.85}}
\put(3,0){\line(-1,0){2.65}}
\put(-1,3){\line(2,1){3.85}}
\put(3,5){\line(1,1){0.8}}
\put(7.5,2){\circle*{0.39}}
\put(6,2){\circle*{0.39}}
\put(3,0){\circle*{0.39}}
\put(6,-3){\circle*{0.39}}
\put(5,-4){\circle*{0.39}}
\put(3,5){\circle*{0.39}}
\put(4,6){\circle*{0.39}}
\put(-1,6){\line(-1,1){2.5}}
\put(-1,6){\line(4,-1){4}}
\put(4,6){\line(1,3){0.6}}
\put(4,6){\line(3,1){1.9}}
\put(-1,3){\line(-5,-1){5}}
\put(-1,3){\line(-4,-3){4}}
\put(-1,3){\line(7,-1){7}}
\put(6,2){\line(1,1){2.0}}
\put(0,0){\line(-3,1){6}}
\put(-2,-2){\line(-9,1){4.5}}
\put(-6.5,-1.5){\line(-2,1){2.4}}
\put(-6.5,-1.5){\line(1,-1){3.5}}
\put(-3,-5){\line(-2,-3){1.9}}
\put(-1,-4){\line(1,1){4}}
\put(5,-4){\line(-7,2){7}}
\put(5,-4){\line(-5,-3){5}}
\put(5,-4){\line(1,-5){0.7}}
\put(6,-3){\line(7,1){2.5}}
\put(25,0){\circle{17}}
\put(25,0){\circle{1}}
\put(16.9,4.1){\vector(1,-1){1.9}}
\put(19,2){\vector(1,-2){0.9}}
\put(20,0){\vector(-1,-1){1.3}}
\put(18.5,-1.5){\vector(-1,-1.7){1.7}}
\put(19,2){\circle*{0.39}}
\put(20,0){\circle*{0.39}}
\put(18.5,-1.5){\circle*{0.39}}
\put(26.7,8.7){\vector(-1,-1){2.4}}
\put(24,6){\vector(0,-1){2.7}}
\put(24,3){\vector(1,-3){0.88}}
\put(25,0){\vector(-1,-1){1.8}}
\put(23,-2){\vector(1,-2){0.85}}
\put(23.9,-3.9){\vector(-2,-1){1.85}}
\put(23,-2){\vector(1,-2){0.85}}
\put(22.1,-5.1){\line(2,1){1.75}}
\put(24,-4){\vector(1,-3){0.95}}
\put(25,-7){\vector(1,-1){1.95}}
\put(24,6){\circle*{0.39}}
\put(24,3){\circle*{0.39}}
\put(25,0){\circle*{0.39}}
\put(23,-2){\circle*{0.39}}
\put(24,-4){\circle*{0.39}}
\put(22,-5){\circle*{0.39}}
\put(25,-7){\circle*{0.39}}
\put(34.5,0){\vector(-1,1){1.85}}
\put(32.5,2){\vector(-1,0){1.2}}
\put(31,2){\vector(-3,-2){2.7}}
\put(28,0){\vector(1,-1){2.8}}
\put(31,-3){\vector(-1,-1){0.8}}
\put(30,-4){\line(-1,2){1.75}}
\put(28,0){\line(-1,0){2.65}}
\put(25.35,0){\line(-1,3){1.0}}
\put(24.35,3){\vector(3.65,2){3.45}}
\put(28,5){\vector(1,1){0.8}}
\put(32.5,2){\circle*{0.39}}
\put(31,2){\circle*{0.39}}
\put(28,0){\circle*{0.39}}
\put(31,-3){\circle*{0.39}}
\put(30,-4){\circle*{0.39}}
\put(28,5){\circle*{0.39}}
\put(29,6){\circle*{0.39}}
\put(16.2,4.3){a}
\put(16.2,-4.9){b}
\put(26.9,8.9){c}
\put(27,-9.4){d}
\put(34.5,-0.5){e}
\put(29.1,-4.2){f}
\put(28.3,4.3){g}
\put(29.3,5.3){h}
\end{picture}
\caption{Illustration of the proof of (\ref{newq}).
The left panel shows the subgraph within a radius-$r$ ball.
The NUV walk must consist of one or several excursions within the ball.
These excursions depend on the configuration outside the ball, and the right side shows one possibility.
The first excursion enters via edge $a$ and exits via edge $b$.
The second excursion enters via edge $c$ and exits via edge $d$, en route backtracking across one edge.
The third excursion enters via edge $e$ and proceeds to vertex $f$; at that time only vertices $g, h$ within the ball are unvisited, and the next step of the walk is a path going via three previously-visited vertices to reach $g$ and then $h$. The next step from $h$, not shown, might be very long, depending on whether nearby vertices outside the ball have all been visited.
Arrowheads indicate the end of a step of the NUV walk, that is the edge by which the vertex is first entered.
}
\label{Fig:1}
\end{figure}
\paragraph{Remarks.}
The simple formulation of Proposition \ref{P:1} is more implicit than explicit in the literature we have found.
Part (i) is a less sharp version of a more complex lemma used in
Rosenkrantz, Stearns and Lewis \cite{rosen} to prove Corollary \ref{C:1} below.
In the context of TSP or robot exploration heuristics, the NUV algorithm is typically (e.g. in
Hurkens and Woeginger \cite{hurkens}
and in
Johnson and Papadimitriou \cite{johnson})
mentioned only briefly before continuing to better algorithms.
From an algorithmic viewpoint, calculating $N(r)$ on a general graph is not simple, so part (ii) of Proposition \ref{P:1} is not so relevant,
but as we see in section \ref{sec:FPP} it is very helpful in providing order-of-magnitude bounds for familiar models of random networks.
\subsection{Two classical results}
Two classical results follow readily from the formulation of Proposition \ref{P:1}.
Write $L_{TSP} = L_{TSP}(G,v_0) $ for
the length of the shortest {\em walk} starting from $v_0$ and visiting every vertex\footnote{The convention that TSP refers to a {\em tour} has the virtue that the length is independent of starting vertex. But the latter is not true for the NUV tour.}.
So $L_{NUV} \ge L_{TSP}$ and it is natural to ask how large the ratio can be.
This was answered in Rosenkrantz et al. \cite{rosen}.
\begin{Corollary}
\label{C:1}
Let $a(n)$ be the maximum, over all connected $n$-vertex graphs with edge lengths and all initial vertices, of the ratio
$L_{NUV}/L_{TSP}$.
Then $a(n) = O(\log n)$.
\end{Corollary}
{\bf Proof.\ }
The argument for Proposition \ref{P:1}(i) is unchanged if we use the TSP path instead of the NUV path,
so in fact gives the stronger result
$N(r) \le 1 + L_{TSP}/r, \ 0 < r < \infty $.
Now apply Proposition \ref{P:1}(ii) and note that $\Delta \le L_{TSP}$, so
\[ L_{NUV} \le 2 \int_0^{L_{TSP}/2} \min(n, 1 + L_{TSP}/r ) \ dr \le 2 L_{TSP} + 2 L_{TSP} \log n \]
the second inequality by splitting the integral at $r = L_{TSP}/n$.
\ \ \rule{1ex}{1ex}
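To make the comparison concrete, here is a small brute-force experiment (our own illustration): on a handful of random Euclidean points, the shortest Hamiltonian path from a fixed start is found by enumerating permutations and compared against the NUV walk; in a complete Euclidean graph each NUV step is simply a hop to the nearest unvisited point.

```python
import itertools, math, random

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(8)]

def nuv_length(pts, start=0):
    """Length of the NUV walk; in a complete Euclidean graph the
    nearest unvisited vertex is the nearest unvisited point."""
    unvisited = set(range(len(pts))) - {start}
    cur, total = start, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[cur], pts[j]))
        total += math.dist(pts[cur], pts[nxt])
        unvisited.discard(nxt)
        cur = nxt
    return total

def tsp_path_length(pts, start=0):
    """Shortest walk from `start` visiting every point (brute force)."""
    rest = [i for i in range(len(pts)) if i != start]
    best = float("inf")
    for perm in itertools.permutations(rest):
        cur, total = start, 0.0
        for j in perm:
            total += math.dist(pts[cur], pts[j])
            cur = j
        best = min(best, total)
    return best

L_nuv = nuv_length(pts)
L_tsp = tsp_path_length(pts)   # always <= L_nuv
```

For $n$ this small the ratio $L_{NUV}/L_{TSP}$ is typically close to $1$; the $\Theta(\log n)$ behavior only emerges from carefully constructed examples such as those cited below.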
There are examples to show that the $O(\log n)$ bound cannot be improved -- see
Johnson and Papadimitriou \cite{johnson},
Hurkens and Woeginger \cite{hurkens},
Hougardy and Wilde \cite{hougardy},
Rosenkrantz et al. \cite{rosen}.
As noted in the elementary expository article Aldous \cite{me-ES},
in constructing such an example the key point is to make the bound in (\ref{newq}) be tight, in the sense
\begin{quote}
for appropriate values of $r$ with $1 \ll L_{TSP}/r \ll n$ there are distinguished vertices separated by distance $r$ along the TSP path such that
the NUV path from one to the next is order $r$.
\end{quote}
Hurkens and Woeginger \cite{hurkens} show that one can make such examples be planar, embedded in the plane with edge-lengths as Euclidean length, and edge-lengths constrained to a neighborhood of $1$.
But such constructions seem very artificial.
Here is the second classical result.
See Steele \cite{steele} for one proof and the early history of this result.
\begin{Corollary}
\label{C:2}
There is a constant $A$ such that,
for the complete graph on $n$ arbitrary points in the unit square, with Euclidean lengths,
\[ L_{NUV} \le A n^{1/2} . \]
\end{Corollary}
Note this implies the well known corresponding result $L_{TSP} \le A n^{1/2}$ .
{\bf Proof.\ }
By ball-covering in the continuum unit square
there is a numerical constant $C$ such that $N(r) \le C/r^2$, and so Proposition \ref{P:1}(ii) gives
\[ L_{NUV} \le 2 \int_0^{\sqrt{1/2}} \min (n, C/r^2) \ dr
\le 4 C^{1/2} n^{1/2} .\]
\ \ \rule{1ex}{1ex}
\subsection{The order of magnitude question}
\label{sec:order}
What is the size of $L_{NUV}$ for a {\em typical} graph?
That is a very vague question, but let us attempt a discussion anyway.
For this informal discussion it is convenient to scale distances so that the typical distance from a vertex to its closest neighbor is order $1$, and therefore $L_{NUV}$
is at least order $n$.
Examples mentioned above show that $L_{NUV}$ can still be as large as order $n \log n$,
but intuition suggests that for natural examples
$L_{NUV}$ is of order $n$ rather than larger order.
For this it is certainly necessary, but not sufficient, that the length $L_{MST}$ of the minimum spanning tree (MST)\footnote{Recall $L_{MST} \le L_{TSP} \le 2 L_{MST}$.}
is $O(n)$.
Proposition \ref{P:1}(ii) provides a quantitative criterion:
it is sufficient that $N(r)/n$ is order $r^{- \alpha}$ for some $\alpha > 1$ over $1 \ll r \ll \Delta$.
Intuitively this corresponds to ``dimension $> 1$", where dimension is measured by metric entropy\footnote{The reader may be more familiar with metric entropy involving {\em small} balls for continuous spaces, but it is equally relevant in our context of large balls, as used for instance in defining fractal dimension of subsets of ${\mathbb{Z}}^d$.},
as illustrated in the examples in section \ref{sec:FPP}.
\subsection{Other questions in the deterministic setting}
It is not clear what other results might hold for general graphs $G$.
One can ask about the variability of $L_{NUV}(G,v)$ as $v$ varies.
Clearly it can be arbitrarily concentrated e.g. on the complete graph with edge-lengths arbitrarily close to $1$.
On the other hand, consider the linear graph $G_n$ on vertices $\{0,1,\ldots,n-1\}$ with slowly decreasing edge-lengths $\ell(i-1,i) = 1 - i/n^2$.
Here there is a factor of $2$ variability in $L_{NUV}(G,v)$ as $v$ varies.
We do not see any easy example with large variability, prompting the following question.
\begin{OP}
\label{OP:1}
Is $ \frac { \max_v L_{NUV}(G,v)}{ \min_v L_{NUV}(G,v)}$
bounded over all finite graphs $G$?
\end{OP}
In this context it is perhaps more natural to extend the NUV walk to a {\em tour} which finally returns to its start.
Note that in the linear graph example above, $|L_{NUV}(G,v) - L_{NUV}(G,v^\prime) |$ is small for adjacent vertices $(v,v^\prime)$,
so one can ask whether there is a general bound for some average of $|L_{NUV}(G,v) - L_{NUV}(G,v^\prime) |$ over nearby vertex-pairs $(v,v^\prime)$.
One can also consider overlap of edges used in walks from different starts.
Note that if two vertices are each other's nearest neighbor then every NUV walk uses their linking edge.
One can ask, for the two walks started at arbitrary different vertices, how small
can be the proportion of time spent on edges used by both walks,
though we hesitate to formulate a conjecture.
\subsection{The three levels of randomness}
\label{sec:3levels}
Introducing randomness leads to different questions.
There are three ways one can introduce randomness.
One can simply randomize the starting vertex.
This suggests the following conjecture, modifying Open Problem \ref{OP:1}.
\begin{Conjecture}
\label{Con:2}
The ratio $\frac{ {\rm s.d.}(L_{NUV}(G,V))}{\mathbb{E} L_{NUV}(G,V)}$,
where the initial vertex $V$ is uniform random,
is bounded over all finite graphs.
\end{Conjecture}
A second level of randomness is to start with a given deterministic $G$ but then consider the random graph $\mathcal G$
in which the edge-lengths $\ell(e)$ are replaced by independent random lengths $\ell^*(e)$
with Exponential(mean $\ell(e)$) distribution.
So here we have a random variable $\mathcal L^*(G) = L_{NUV}(\mathcal G,V)$ where again the initial vertex $V$ is uniform random.
In this model of random graphs $\mathcal G$, results of Aldous \cite{me-FPP} for first passage percolation say that the percolation time is
weakly concentrated\footnote{As in the weak law of large numbers.}
around its mean provided no single edge contributes non-negligibly to the total time.
So one can ask whether a similar result holds for $\mathcal L^*(G)$.
The third level of randomness involves more specific models
of random graphs, which we will consider in the next sections.
\section{Random points in the square}
\label{sec:square}
One very special model of random graph is to take the complete graph on $n$ random (i.i.d.
uniform) points in the unit square,
with Euclidean edge-lengths.
Figure \ref{Fig:800} shows a realization of the corresponding NUV walk with $n = 800$ random points,
and Table \ref{table:3} shows some simulation data for the lengths $L^*_n$ of the NUV walk (see discussion below).
The qualitative behavior seen in simulations corresponds to intuition:
the walk starts by traversing most (but not all) vertices in any small region, passes through different regions as some discrete analog of a space-filling curve,
and near the end has to capture missed patches and the remaining isolated unvisited vertices via longer steps across already-explored regions.
Indeed in Figure \ref{Fig:800} we see that the actual behavior of the walk within a medium-sized ball is like the sketch in
Figure \ref{Fig:1}, with several different excursions.
\begin{table}[h!]
\centering
\begin{tabular}{rrcc}
$n$ & $\mathbb{E} L^*_n$&$n^{-1/2} \mathbb{E} L^*_n$& s.d.($L^*_n$) \\
100 & 9.05 & 0.91 & 0.41 \\
200 & 12.78 & 0.90 & 0.54 \\
400 & 18.06 & 0.90 & 0.54 \\
800 & 25.54& 0.90 & 0.49
\end{tabular}
\caption{Simulation data for lengths $L^*_n$ in the random points in unit square model.
Simulations and data in this model by Yechen Wang.}
\label{table:3}
\end{table}
\begin{figure}
\includegraphics[width=5.0in]{800points.png}
\caption{A NUV walk through 800 random points in the unit square, and histogram of step lengths.}
\label{Fig:800}
\end{figure}
The lack of scaling for the s.d. may seem surprising, but is understandable as follows.
To adhere to our scaling convention
(distance to nearest neighbor is order $1$)
we should take the square to have area $n$ and write $L_n = n^{1/2}L^*_n$ for the length of the NUV walk.
Intuition, thinking of $L_n$ as the sum of $n$ order-$1$ lengths, suggests there are limit constants
\begin{equation}
c := \lim_n n^{-1} L_n = \lim_n n^{-1/2} L^*_n; \quad \sigma := \lim_n n^{-1/2} \mathrm{s.d.} (L_n) = \lim_n \mathrm{s.d.} (L^*_n) .
\label{c3lim}
\end{equation}
Our small-scale simulation data suggests this holds in the present model with $c \approx 0.9$ and $\sigma \approx 0.5$.
How generally this holds is a natural question, and
we defer further discussion to section \ref{sec:remarks}.
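A minimal Monte Carlo version of this experiment (our own sketch; the seed, $n$, and trial count are arbitrary choices) estimates $n^{-1/2}\, \mathbb{E} L^*_n$ directly:

```python
import math, random

def nuv_length(pts):
    """NUV walk length through Euclidean points, started at pts[0];
    each step is a hop to the nearest unvisited point."""
    unvisited = set(range(1, len(pts)))
    cur, total = 0, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[cur], pts[j]))
        total += math.dist(pts[cur], pts[nxt])
        unvisited.remove(nxt)
        cur = nxt
    return total

random.seed(0)
n, trials = 200, 5
vals = []
for _ in range(trials):
    pts = [(random.random(), random.random()) for _ in range(n)]
    vals.append(nuv_length(pts) / math.sqrt(n))

# Estimate of lim n^{-1/2} E[L*_n]; Table 3 suggests a value near 0.9.
c_hat = sum(vals) / trials
```

Since the starting point $pts[0]$ is itself uniform, this matches the model with a uniform random initial vertex.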
Corollary \ref{C:2} implies $\mathbb{E} L_n \le An$, which is all that we know rigorously.
But there are many questions one can ask.
As well as the limits (\ref{c3lim})
one might conjecture there are concentration bounds and a Gaussian limit for $n^{-1/2} (L_n - \mathbb{E} L_n)$.
For TSP length, existence of a limit constant is known via subadditivity arguments
(Steele \cite{steelebook} and Yukich \cite{yukich})
and concentration via now-classical Talagrand arguments, and for MST length the Gaussian limit
is also known by martingale arguments (Kesten and Lee \cite{kestenMST}).
Alas it seems hard to find any rigorous such arguments for the NUV walk.
One might also bear in mind that, for the {\em random walk} cover time problem, the two-dimensional case is the hardest to analyze sharply, so this might also hold for the NUV walk.
In any of our models,
by considering the length as $L_n(G_n,V_n)$ for a uniform random starting vertex $V_n$,
we can consider the variance decomposition
\[
\mathrm{var} L_n = \mathrm{var} \mathbb{E}(L_n \vert G_n) + \mathbb{E} \mathrm{var}(L_n \vert G_n)
\]
where the first term represents the variability due to the random graph
and the second term represents the variability due to the starting vertex.
In simulations of the present model, for $n = 100$ the two terms are roughly equal.
Figure \ref{Fig:3starts} superimposes the NUV walks from three different starts, in a realization of the present model, giving some impression of the extent of overlap.
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{3_starts.png}
\end{center}
\caption{Three different starts for the NUV walk on 100 random points in the square.}
\label{Fig:3starts}
\end{figure}
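The two variance terms can be estimated by nesting simulations: several starting vertices within each of several point configurations. A crude sketch (our own code; the estimator of the first term is biased slightly upwards by within-graph noise when the number of starts is small):

```python
import math
import random
import statistics

def nuv_length_from(points, start):
    """NUV walk length through `points` from a given starting index."""
    unvisited = set(range(len(points))) - {start}
    cur, total = start, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[cur], points[j]))
        total += math.dist(points[cur], points[nxt])
        unvisited.remove(nxt)
        cur = nxt
    return total

def variance_decomposition(n=100, n_graphs=20, n_starts=8, seed=0):
    """Estimate (var E(L|G), E var(L|G)) over random point configurations."""
    rng = random.Random(seed)
    per_graph_means, per_graph_vars = [], []
    for _ in range(n_graphs):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        lengths = [nuv_length_from(pts, rng.randrange(n)) for _ in range(n_starts)]
        per_graph_means.append(statistics.mean(lengths))
        per_graph_vars.append(statistics.pvariance(lengths))
    # variance of per-graph means estimates var E(L|G);
    # mean of per-graph variances estimates E var(L|G)
    return statistics.pvariance(per_graph_means), statistics.mean(per_graph_vars)
```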
\section{Relation with first passage percolation}
\label{sec:FPP}
For graphs with i.i.d. random edge-lengths,
one can seek to find the correct order of magnitude of $L_{NUV}$ by
combining Proposition \ref{P:1}(ii) with known {\em first passage percolation} (FPP) results.
Here is the basic example.
\subsection{The 2-dimensional grid}
Consider the $m \times m$ grid, that is the subgraph of the Euclidean lattice ${\mathbb{Z}}^2$, and assign i.i.d. edge-lengths
$\ell(e) > 0$ to make a random graph $G_m$.
Because the shortest edge-length at a given vertex is $\Omega(1)$, clearly $L_{NUV}$ is $\Omega(m^2)$.
\begin{Corollary}
\label{C:grid}
For the 2-dimensional grid model $G_m$ above, the sequence $(m^{-2} L_{NUV}(G_m), \ m \ge 2)$ is tight.
\end{Corollary}
We conjecture that in fact $m^{-2} L_{NUV}(G_m)$ converges in probability to a constant, but we do not see any simple argument.
Table \ref{table:7} shows simulation data, where $\ell(e)$ has Exponential(1) distribution.
\begin{table}[h!]
\centering
\begin{tabular}{rrrcc}
$n= m^2$ & $\mathbb{E} L(G_m)$&$n^{-1} \mathbb{E} L(G_m)$& s.d.($L(G_m)$) & $n^{-1/2} $ s.d.($L(G_m)$) \\
100 & 66.2 & 0.66 & 7.67 & 0.77 \\
400 & 259 & 0.65 & 14.8 & 0.74 \\
900 & 576 & 0.64 & 17.0 & 0.57
\end{tabular}
\caption{Simulation data for lengths $L(G_m) $ in the grid model.}
\label{table:7}
\end{table}
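On a graph with edge-lengths, each NUV step requires a shortest-path search from the current vertex to the nearest unvisited one. A Dijkstra-based sketch (our own code, not that used for Table \ref{table:7}); the first unvisited vertex popped by Dijkstra is the nearest one, since vertices are popped in nondecreasing distance order:

```python
import heapq
import random

def nuv_walk_length_graph(adj, start=0):
    """NUV walk length on a weighted graph given as an adjacency dict
    {v: [(u, length), ...]}; each step finds the nearest unvisited
    vertex (in graph distance) by Dijkstra from the current vertex."""
    n = len(adj)
    visited = {start}
    cur, total = start, 0.0
    while len(visited) < n:
        dist = {cur: 0.0}
        heap = [(0.0, cur)]
        nxt = None
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist.get(v, float("inf")):
                continue  # stale heap entry
            if v not in visited:
                nxt, total = v, total + d  # first unvisited pop is nearest
                break
            for u, l in adj[v]:
                if d + l < dist.get(u, float("inf")):
                    dist[u] = d + l
                    heapq.heappush(heap, (d + l, u))
        assert nxt is not None, "graph must be connected"
        visited.add(nxt)
        cur = nxt
    return total

def random_grid(m, seed=0):
    """m x m grid graph with i.i.d. Exponential(1) edge-lengths."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(m * m)}
    for i in range(m):
        for j in range(m):
            v = i * m + j
            nbrs = ([v + 1] if j + 1 < m else []) + ([v + m] if i + 1 < m else [])
            for u in nbrs:
                l = rng.expovariate(1.0)
                adj[v].append((u, l))
                adj[u].append((v, l))
    return adj
```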
{\bf Proof.\ }
For a vertex $v$ of $G_m$ write $B(v,r)$ for the random set of vertices $v^\prime$ with $d(v,v^\prime) \le r$,
and write $D(v,r)$ for the non-random set of vertices $v^\prime$ with Euclidean distance $|| v - v^\prime|| \le r$.
Standard results for FPP on ${\mathbb{Z}}^2$ going back to Kesten \cite{kesten}
(see Auffinger, Damron and Hanson \cite{auff} Theorem 3.41 for recent discussion)
imply that there exist constants $c_1, c_2, c_3$ (depending on the distribution of $\ell(e)$) such that
\begin{equation}
\Pr( D(v,r) \not\subseteq B(v,c_1r)) \le c_2 \exp(- c_3r) , \ 0 < r < \infty .
\label{subseteq}
\end{equation}
The remainder of the proof is conceptually straightforward.
Given large $m$ and $r$, there is a set $S(m,r)$ of at most $a_1 m^2/r^2$ vertices of $G_m$ such that
$\cup_{v \in S(m,r)} D(v,r)$ covers $G_m$, and note $D(v,r)$ contains at most $a_2r^2$ vertices; here $a_1$ and $a_2$ are absolute constants.
By Markov's inequality and (\ref{subseteq}) the
probability of the event
\begin{eqnarray}
&&\mbox{the number of $v$ in $S(m,r)$ such that $D(v,r) \not\subseteq B(v,c_1r) $} \nonumber\\
&& \mbox{ exceeds a given $s > 0$} \label{event}
\end{eqnarray}
is at most
$a_1 m^2 r^{-2} c_2 \exp(- c_3r) /s $.
Apply this with $s =m^2 r^{-2} \exp(-c_3 r/2)$.
Now define a vertex-set $S^+(m,r)$ as
\begin{quote}
the union of $S(m,r)$ and all the vertices in all the discs $D(v,r)$ with $v \in S(m,r)$ and
$D(v,r) \not\subseteq B(v,c_1r) $.
\end{quote}
Outside the event (\ref{event}), we have that
$\cup_{v \in S^+(m,r)} D(v,r)$ covers $G_m$, and $S^+(m,r)$ has cardinality at most
\[ n_m(r) := a_1 m^2/r^2 + s a_2r^2 = a_1 m^2/r^2 + a_2 m^2 \exp(-c_3 r/2) . \]
So we have shown
\begin{equation}
\Pr ( N(G_m,r) > n_m(r) )
\le a_1 c_2 \exp(- c_3r/2) .
\label{NGm}
\end{equation}
This holds for fixed $r$, but because $N(G_m,r)$ and $n_m(r)$ are decreasing in $r$ we have the inclusion of events, for $j = 1, 2,\ldots$:
\[ \{ N(G_m,r) > n_m(r-1) \mbox{ for some } j \le r \le j+1 \}
\subseteq
\{ N(G_m,j) > n_m(j) \} .
\]
Applying (\ref{NGm}) and summing over $j$,
\[
\Pr ( N(G_m,r) > n_m(r-1) \mbox{ for some } r > r_0)
\le \Phi(r_0)
\]
where $\Phi$ depends on the distribution of $\ell(e)$ but not on $m$, and
\begin{equation}
\Phi(r_0) \downarrow 0 \mbox{ as } r_0 \to \infty.
\label{phi}
\end{equation}
Noting that $n_m(r)/m^2$ does not depend on $m$ and
\[ \psi(r_0) := \int_{r_0}^\infty n_m(r-1)/m^2 \ dr \to 0 \mbox{ as } r_0 \to \infty \]
and
$N(G_m,r) \le m^2$
we have, for all $r_0 > 0$,
\[
\Pr \left( \int_0^\infty m^{-2} N(G_m,r) \ dr > r_0 + \psi(r_0) \right) \le \Phi(r_0)
\]
which, together with (\ref{phi}) and Proposition \ref{P:1}(ii), implies tightness of the sequence $(m^{-2} L_{NUV}(G_m), \ m \ge 2)$.
\ \ \rule{1ex}{1ex}
The central point is that the argument depends only on some bound like (\ref{subseteq}),
which one expects to hold very generally in FPP-like settings in dimension $> 1$.
For instance FPP on a large family of connected random geometric graphs is studied in Hirsch, Neuh\"{a}user, Gloaguen and Schmidt \cite{hirsch}
and it seems plausible that results from that topic can be used to prove that $L_{NUV}$ is $O(n)$ on such $n$-vertex graphs.
The next example is infinite dimensional, and the bound (\ref{Acn}) below will be the analog of the bound (\ref{subseteq}) above.
\subsection{The mean-field model of distance}
\label{sec:M-F}
Take the complete graph on $n$ vertices and assign to the edges i.i.d. Exponential (mean $n$) random lengths.
This ``mean-field model of distance" $G_n$ turns out to be surprisingly tractable, because
the smallest edge-lengths
$0 < \ell_1 < \ell_2 < \ldots$
at a given vertex are distributed (in the $n \to \infty$ limit)
as the points of a rate-$1$ Poisson point process on $(0,\infty)$, and as regards short edges the graph
is locally tree-like.
A now classical result of Frieze \cite{friezeMST} proves that the length
$L_{MST}^{(n)}$ of the MST in this model satisfies $\mathbb{E} L_{MST}^{(n)} \sim \zeta(3) n$.
A later remarkable result of W\"{a}stlund \cite{wastlund}, formalizing ideas of M\'{e}zard and Parisi \cite{mezard},
shows that the expected length of the TSP path in this model is asymptotically $cn$ for an explicit constant $c = 2.04\ldots$.
Might it be possible to get a similar explicit result for the NUV length?
Corollary \ref{C:MF} below gives the correct order of magnitude by essentially the same method as above for Corollary \ref{C:grid}.
Table \ref{table:5} gives some simulation results.
\begin{table}[h!]
\centering
\begin{tabular}{rrrcc}
$n$ & $\mathbb{E} L_n$&$n^{-1} \mathbb{E} L_n$& s.d.($L_n$) & $n^{-1/2} $ s.d.($L_n$) \\
100 & 209 & 2.09 & 22 & 2.2 \\
400 & 865 & 2.14 & 41 & 2.1 \\
900 & 1954 & 2.17 & 57 & 1.9
\end{tabular}
\caption{Simulation data for lengths $L_n$ in the mean-field model.}
\label{table:5}
\end{table}
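This model is also easy to simulate directly: the complete graph with i.i.d. Exponential (mean $n$) weights, with each NUV step again found by a Dijkstra search. A self-contained sketch (our own code, not that used for Table \ref{table:5}); estimates of $n^{-1}L_n$ from it can be compared with the table:

```python
import heapq
import random

def nuv_length(weights):
    """NUV walk length on the complete graph with symmetric weight matrix
    `weights` (list of lists); nearest unvisited vertex via Dijkstra."""
    n = len(weights)
    visited = [False] * n
    visited[0] = True
    cur, total, steps = 0, 0.0, 0
    while steps < n - 1:
        dist = [float("inf")] * n
        dist[cur] = 0.0
        heap = [(0.0, cur)]
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist[v]:
                continue  # stale entry
            if not visited[v]:
                visited[v] = True   # step to the nearest unvisited vertex
                total += d
                cur = v
                steps += 1
                break               # outer loop re-runs Dijkstra from here
            for u in range(n):
                if u != v and d + weights[v][u] < dist[u]:
                    dist[u] = d + weights[v][u]
                    heapq.heappush(heap, (d + weights[v][u], u))
    return total

def mean_field_weights(n, seed=0):
    """Complete-graph weights: i.i.d. Exponential with mean n."""
    rng = random.Random(seed)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = rng.expovariate(1.0 / n)
    return w
```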
As in the previous models we expect limits of the form
\[ c := \lim_n n^{-1} \mathbb{E} L_n , \quad \sigma := \lim_n n^{-1/2} \mathrm{s.d.}(L_n) \]
and Table \ref{table:5} is loosely consistent with that.
\setlength{\unitlength}{0.7in}
\begin{figure}
\begin{picture}(8,6)(-4,-3)
\put(0,0){\circle{0.21}}
\put(0,0){\circle*{0.07}}
\put(1.14,0){\circle*{0.07}}
\put(1.46,0){\circle*{0.07}}
\put(-1.72,0){\circle*{0.07}}
\put(-2.83,0){\circle*{0.07}}
\put(-2.59,0.24){\circle*{0.07}}
\put(-2.83,0.73){\circle*{0.07}}
\put(-3.31,0.48){\circle*{0.07}}
\put(-3.21,-0.38){\circle*{0.07}}
\put(-2.83,0){\line(1,1){0.24}}
\put(-2.83,0){\line(0,1){0.73}}
\put(-2.83,0){\line(-1,1){0.48}}
\put(-2.83,0){\line(-1,-1){0.38}}
\put(-2.28,-0.56){\circle*{0.07}}
\put(-2.86,-1.14){\circle*{0.07}}
\put(-2.86,-1.24){\circle*{0.07}}
\put(-1.72,0){\line(-1,-1){0.56}}
\put(-2.28,-0.56){\line(-1,-1){0.58}}
\put(-2.86,-1.14){\line(0,-1){0.10}}
\put(-1.72,-1.09){\circle*{0.07}}
\put(-1.72,0){\line(0,-1){1.09}}
\put(2.8,2.8){\circle*{0.07}}
\put(2.7,-2.7){\circle*{0.07}}
\put(0,0){\line(1,0){1.46}}
\put(0,0){\line(-1,0){2.83}}
\put(0,0){\line(1,1){2.8}}
\put(0,0){\line(1,-1){2.7}}
\end{picture}
\caption{Mean-field model: vertices and edges within a ball of radius $4$ in a realization, illustrating the local tree-like property. Edges to vertices outside the ball not shown.}
\label{fig:MF1}
\end{figure}
\begin{figure}
\begin{picture}(8,6)(-4,-4)
\put(2.8,2.8){\circle*{0.07}}
\put(2.85,2.65){3}
\put(3.5,3.5){(2)}
\put(2.85,2.8){\vector(1,1){0.4}}
\put(3.2,3.25){\vector(-1,-1){0.4}}
\put(2.7,-2.7){\circle*{0.07}}
\put(2.75,-2.89){46}
\put(3.4,-2.7){\vector(-1,0){0.62}}
\put(3.45,-2.78){(45)}
\put(2.7,-2.75){\vector(0,-1){0.7}}
\put(2.56,-3.65){(47)}
\put(0.08,0){\vector(1,0){1.0}}
\put(0,0){\circle*{0.07}}
\put(-0.1,-0.2){30}
\put(1.14,0){\circle*{0.07}}
\put(1.04,-0.2){31}
\put(1.12,0){\vector(1,0){0.30}}
\put(1.46,0){\circle*{0.07}}
\put(1.36,-0.2){32}
\put(1.44,0){\vector(1,0){2.6}}
\put(4.1,-0.06){(33)}
\put(-1.72,0){\circle*{0.07}}
\put(-1.61,-0.01){\vector(1,0){1.59}}
\put(-1.81,0.07){24}
\put(-1.72,-1.09){\circle*{0.07}}
\put(-1.67,-1.26){29}
\put(-1.69,-1.04){\vector(0,1){1.00}}
\put(-1.76,-0.04){\vector(-1,-1){0.49}}
\put(-2.32,-0.60){\vector(-1,-1){0.51}}
\put(-2.28,-0.56){\circle*{0.07}}
\put(-2.25,-0.71){25}
\put(-2.86,-1.14){\circle*{0.07}}
\put(-3.13,-1.09){26}
\put(-2.86,-1.24){\circle*{0.07}}
\put(-3.13,-1.39){27}
\put(-2.81,-1.29){\vector(1,-2){0.31}}
\put(-2.15,-1.91){\vector(1,2){0.38}}
\put(-2.55,-2.11){(28)}
\put(-2.83,0){\circle*{0.07}}
\put(-2.80,-0.15){18}
\put(-2.73,-0.01){\vector(1,0){0.92}}
\put(-4.2,0){\vector(1,0){1.28}}
\put(-4.7,-0.06){(17)}
\put(-2.59,0.24){\circle*{0.07}}
\put(-2.54,0.27){19}
\put(-2.77,0.06){\vector(1,1){0.15}}
\put(-2.54,0.21){\vector(-1,-1){0.62}}
\put(-3.21,-0.38){\circle*{0.07}}
\put(-3.51,-0.46){20}
\put(-3.21,-0.32){\vector(1,1){0.31}}
\put(-2.93,0.05){\vector(-1,1){0.38}}
\put(-3.31,0.48){\circle*{0.07}}
\put(-3.58,0.33){21}
\put(-3.41,0.53){\vector(-1,1){0.83}}
\put(-4.43,1.48){(22)}
\put(-4.13,1.36){\vector(1,-1){1.23}}
\put(-2.88,0.1){\vector(0,1){0.58}}
\put(-2.78,0.68){\vector(0,-1){0.58}}
\put(-2.83,0.73){\circle*{0.07}}
\put(-2.93,0.83){23}
\put(-2.86,-1.24){\circle*{0.07}}
\put(-2.86,-1.14){\line(0,-1){0.10}}
\end{picture}
\caption{Mean-field model: in the Figure \ref{fig:MF1} realization, the NUV walk within the ball and entrance-exit edges.
Vertices numbered according to order in an NUV walk started outside the ball, with vertices outside the ball in parentheses.}
\label{fig:MF2}
\end{figure}
\newpage
As in section \ref{sec:square}, by considering the length as $L_n(G_n,V_n)$ for a uniform random starting vertex $V_n$,
we can consider the variance decomposition
\[
\mathrm{var} L_n = \mathrm{var} \mathbb{E}(L_n \vert G_n) + \mathbb{E} \mathrm{var}(L_n \vert G_n)
\]
where the first term represents the variability due to the random graph
and the second term represents the variability due to the starting vertex.
In simulations with $n = 100$ the first variance term is around 30 times larger than the second,
consistent with the general conjectures (section \ref{sec:3levels}) that the initial state $v$ typically has little influence on $L_{NUV}(G,v)$.
We now prove the $O(n)$ upper bound in this model.
\begin{Corollary}
\label{C:MF}
For the mean-field model of distance $G_n$, the sequence $(n^{-1} L_{NUV}(G_n), \ n \ge 2)$ is tight.
\end{Corollary}
To prove this, we first record a simple estimate.
\begin{Lemma}
\label{L:Hs}
Let $Z_p$ have Geometric($p$) distribution.
Let $Z^*_p$ coincide with $Z_p - 1$ outside an event $A$.
Let $H$ be a random subset of $[n] = \{1,2,\ldots,n\}$ distributed uniformly on size $Z^*_p$ subsets of $[n]$.
Then
\[ \Pr(A^c \mbox{ and } H \cap [s] = \emptyset ) \le \frac{p}{1 - e^{-s/n}} .
\]
\end{Lemma}
{\bf Proof.\ }
It is standard (by comparing sampling with and without replacement) that
\[ \Pr(H \cap [s] = \emptyset \vert Z^*_p = i) \le \exp(-si/n) . \]
So
\begin{eqnarray*}
\Pr(A^c \mbox{ and } H \cap [s] = \emptyset )& \le& \sum_{i \ge 0} p (1-p)^i \exp(-si/n)\\
&=& \frac{p}{1 - (1-p)e^{-s/n}}\\
&\le & \frac{p}{1 - e^{-s/n}}.
\end{eqnarray*}
\ \ \rule{1ex}{1ex}
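The sampling comparison at the start of the proof can be checked numerically, since for a uniform size-$i$ subset the probability is exact: $\Pr(H \cap [s] = \emptyset \mid |H| = i) = \binom{n-s}{i}/\binom{n}{i}$. A quick confirmation of the bound (our own code):

```python
from math import comb, exp

def miss_probability(n, s, i):
    """Exact Pr(H intersect [s] is empty) for H a uniform size-i subset
    of {1,...,n}; comb returns 0 when i > n - s."""
    return comb(n - s, i) / comb(n, i)

# check the bound Pr(H intersect [s] empty | |H| = i) <= exp(-s*i/n)
for n, s, i in [(100, 10, 5), (100, 30, 10), (50, 5, 20), (20, 3, 7)]:
    assert miss_probability(n, s, i) <= exp(-s * i / n)
```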
As before, for a vertex $v \in [n] = \{1,2,\ldots,n\}$ write
$B_n(v,r) = \{v^\prime : d(v,v^\prime) \le r \}$ for the ball of radius $r$ in $G_n$.
Conceptually we want to consider balls around $s$ randomly chosen vertices, but by symmetry this
is equivalent to using the first $s$ vertices, which is notationally simpler.
So define the vertex-set
\[ C_n(s,r) = \mbox{complement of } \cup_{i \le s} B_n(i,r) \]
and then by appending to $[s]$ every vertex in $C_n(s,r) $,
\begin{equation}
N(G_n,r) \le s + |C_n(s,r)| , \ 1 \le s \le n .
\label{NGn}
\end{equation}
Recall (see e.g. Pinsky and Karlin \cite{karlin} section 6.1.3) the {\em standard Yule process} $(Y(r), 0 \le r < \infty)$ for which $Y(r)$ has exactly Geometric($e^{-r}$) distribution.
The $n \to \infty$ limit distribution of the process
$( | B_n(v,r)| , 0 \le r < \infty)$ over a fixed $r$-interval is well known to be this standard Yule process.
(This is part of the theory in Aldous and Steele \cite{PWIT} surrounding the
PWIT\footnote{Poisson Weighted Infinite Tree.}.)
Choosing $r_1 = \frac{1}{3} \log n$, so that $\exp(r_1) = n^{1/3}$, it is
not difficult to use the natural coupling of the two processes to quantify this convergence and show
\begin{quote}
the distribution of
$( | B_n(v,r)| , 0 \le r \le r_1)$ agrees with the distribution of $(Y(r), 0 \le r \le r_1)$ outside an event $A_n(v)$ of probability $\delta_n = O(n^{-1/4}) \to 0$
as $n \to \infty$.
\end{quote}
For a vertex $v \in [s+1,n]$, and for $r \le r_1$,
\begin{eqnarray}
\Pr(A^c_n(v) \mbox{ and } v \in C_n(s,r)) &=& \Pr( A^c_n(v) \mbox{ and } B_n(v,r) \cap [s] = \emptyset)\nonumber \\
&\le& \frac{e^{-r}}{1 - e^{-s/(n-1)}} \label{Acn}
\end{eqnarray}
the inequality from Lemma \ref{L:Hs} applied to $[n] \setminus \{v\}$.
Apply this with
\[ s = s_n(r) := - (n-1) \log (1 - e^{-r/2}) \]
which is the solution of $e^{-r/2} = 1 - e^{-s/(n-1)}$, so
\[ \Pr(A^c_n(v) \mbox{ and } v \in C_n(s_n(r),r)) \le e^{-r/2} . \]
Summing over $v$, from (\ref{NGn}) we can write, for $r \le r_1$,
\[ N(G_n,r) \le s_n(r) + X_n + Y_n(r)
\mbox{ where $\mathbb{E} X_n \le n \delta_n$ and
$\mathbb{E} Y_n(r) \le n e^{-r/2}$}.
\]
Applying Markov's inequality separately to the two terms on the right side of the first inequality above,
\[
\Pr( N(G_n,r) > s_n(r) + n \delta^{1/2}_n + n e^{-r/4}) \le \delta^{1/2}_n + e^{-r/4} , \ r \le r_1 .
\]
As in the proof of Corollary \ref{C:grid}
we can use monotonicity to convert this fixed-$r$ bound to a uniform bound over a ``medium" interval $r_0 \le r \le r_1$:
\[
\Pr( N(G_n,r) > s_n(r-1) + n \delta^{1/2}_n + n e^{-(r-1)/4}
\mbox{ for some } r_0 \le r \le \lfloor r_1 \rfloor
) \le \delta^{1/2}_n \log n + 5 e^{-r_0/4} .
\]
Because $s_n(r) \approx n e^{-r/2} $ over the interval of interest,
\[ n^{-1} \int_{r_0}^{r_1} (s_n(r-1) + n \delta^{1/2}_n + n e^{-(r-1)/4}) \ dr
\le K e^{-r_0/4} + \delta_n^{1/2} \log n \]
for some constant $K$,
and so
\[ \Pr \left( n^{-1} \int_{r_0}^{r_1} N(G_n,r) \ dr > Ke^{-r_0/4} + \delta_n^{1/2} \log n \right)
\le \delta^{1/2}_n \log n + 5 e^{-r_0/4} .
\]
For the tail of the integral, the diameter $\Delta$ of $G_n$ is known (Janson \cite{janson123}) to be asymptotically $3 \log n$
and so by monotonicity of $N(r)$
\[n^{-1} \int_{r_1}^{\Delta} N(G_n,r) \ dr
= O( n^{-1} \cdot N(G_n,r_1) \cdot \log n)
\to 0
\mbox{ in probability}.
\]
We will show below that
\begin{equation}
\mathbb{E} N(G_n,r_1) = O(n^{11/12}) .
\label{Nshow}
\end{equation}
Because $ \delta_n^{1/2} \log n \to 0$ and $n^{-1} N(G_n,r) \le 1$ for $r \le r_0$,
these bounds establish tightness of the sequence
\[ n^{-1} \int_{0}^{\Delta/2} N(G_n,r) \ dr, \ \ n \ge 2 \]
which by Proposition \ref{P:1}(ii) implies
the sequence $(n^{-1} L_{NUV}(G_n), \ n \ge 2)$ is tight.
To outline a proof of (\ref{Nshow}), take expectation in (\ref{NGn}) to get
\begin{equation}
\mathbb{E} N(G_n,r_1) \le s + n \Pr(v \in C_n(s,r_1)) , \ 1 \le s \le n
\label{NGn2}
\end{equation}
for a vertex $v \in [s+1,n]$.
We will use this with $s = n^{3/4}$.
Conditional on $|B_n(v,r_1)| = \beta$ we have, in order of magnitude,
\[ \Pr(v \in C_n(s,r_1)) \asymp (1 - \beta/n)^s \asymp \exp(- \beta s/n) .\]
Now the distribution of $\beta$ is asymptotically Exponential with mean $e^{r_1} = n^{1/3}$,
so by integrating over $\beta$ the unconditional probability becomes
\[ \Pr(v \in C_n(s,r_1)) \asymp \frac{n^{-1/3}}{n^{-1/3} + s/n} \asymp n^{-1/12} . \]
Combining with (\ref{NGn2}) gives (\ref{Nshow}).
\section{Final Remarks}
\label{sec:remarks}
\paragraph{Analogy with the MST.}
As an algorithm, the NUV walk is somewhat similar to the greedy (Prim's) algorithm for the MST (minimum spanning tree), in that both grow a connected graph one edge at a time.
Recall that for the MST there is an intrinsic criterion for whether a given edge $e$ is in the MST:
\begin{quote}
$e$ is in the MST if and only if there is no alternative path between the endpoints of $e$, all of whose edges are shorter than $\ell(e)$.
\end{quote}
This enables a martingale proof (Kesten and Lee \cite{kestenMST}) of the central limit theorem for the length $L_{MST}$ within the Euclidean model (complete graph on random points in the square)
which we discussed in section \ref{sec:square}.
There is no such intrinsic criterion for the NUV walk, so
to improve the order-of-magnitude result (Corollary \ref{C:2}) for $L_{NUV}$ in that model
one would need some other kind of control over the geometry of the set of points visited before each step.
Also, as noted in section \ref{sec:M-F}, in the
``mean-field model of distance" the exact asymptotic constants for the lengths of the TSP tour and the MST are known: can they also be calculated for the NUV walk?
\paragraph{Local weak convergence.}
Our results are conceptually merely consequences of Proposition \ref{P:1},
and further progress would require some other technique.
One possible general approach is via local weak convergence
(Aldous and Steele \cite{PWIT},
Benjamini and Schramm \cite{B-S}).
Our three specific models each have local weak convergence limits
(complete graph on a Poisson point process on the infinite plane with Euclidean distance;
i.i.d. edge-lengths on the infinite lattice; the PWIT)
and intuitively the conjectured limits
$\lim_n n^{-1} \mathbb{E} L_n$
are the mean step-lengths in an appropriately defined NUV walk on the limit infinite graph.
Can this intuition be made rigorous?
In fact one expects the limits in our models to be {\em collections} of disjoint doubly-infinite walks which cover the infinite graph.
This relates to a longstanding folklore problem:
for the NUV walk on the complete-graph Poisson point process on the infinite plane, estimate the number of never-visited vertices in the radius-$r$ ball, as $r \to \infty$.
See Bordenave, Foss and Last \cite{bordenave} for discussion.
\paragraph{Restrictions on local behavior of paths.}
For another possible direction of analysis, consider the Figure \ref{Fig:1} sketch of one possible trajectory for the NUV path through a given ball.
In general there will be many possible trajectories, depending on the graph outside the ball, but can one find restrictions on the possibilities,
extending the obvious restriction:
\begin{quote}
if two vertices are each other's nearest neighbor, then every NUV walk, after visiting the first, immediately visits the second.
\end{quote}
Intuitively, for $1 \ll r_1 \ll r_2$, given the subgraph in the ball $B(v^*,r_2)$, in a random graph there will typically be only a few possibilities for the NUV trajectory within $B(v^*,r_1)$.
\paragraph{Variance of $L_{NUV}$?}
A final issue involves the variance of $L_{NUV}$ in random graph models.
We expect order $n$ ``each other's nearest neighbor" pairs, and then the randomness of edge-lengths suggests that the contribution
to variance of $L_{NUV}$ from these edges alone must be at least order $n$ (in our conventional scaling).
However our small-scale simulation results in Tables \ref{table:7} and \ref{table:5} cast some doubt on this conjectured lower bound.
\bigskip
\paragraph{Acknowledgements.} I thank three anonymous referees for helpful comments.
\bigskip
\paragraph{Competing interests.} The author declares none.
\section{Introduction}
The formation of ferromagnetic and ferroelectric domain structures in thin
films is a well-known phenomenon\cite{kittel,kmf}.
Polydomain structures appear in ferroelectric thin films in order to screen the
electric depolarizing field arising
at the interfaces between the surfaces of the thin film and its environment,
such as vacuum or a non-metallic substrate.
The electrostatic description of a ferroelectric thin film in an infinite vacuum
has been studied in detail \cite{springer,vacuum_1}.
The equilibrium domain width $w$ follows Kittel's law versus film thickness $d$,
namely, $w\propto \sqrt{d}$, when $w\ll d$.
Within the same model but making no approximations on the electrostatics
arising from an ideal, regular polydomain structure, for $w\gtrsim d$, $w$ reaches a
minimum and grows again as $d$ decreases, until the monodomain state is
reached \cite{springer,vacuum_1}.
This description of an isolated thin film, however, does not describe the effect
that surrounding materials have on the thin film and hence the domain structure.
It is now possible to fabricate ferromagnetic and ferroelectric samples by growing alternating layers of different thin films, just a few unit cells in thickness, in a periodic array (superlattice)\cite{sl_magnet,sl_bto_sto,sl_bto_sto_2}. Alternating between ferroelectric and paraelectric layers (FE/PE superlattice, see Fig. \ref{fig:superlattice}), a great deal of control over the superlattice's properties can be achieved by changing the relative thicknesses of the layers\cite{dawber_superlattice_1,dawber_superlattice_2,dawber_superlattice_3,dawber_superlattice_4}. This has generated interest in the study of FE/PE superlattices from the theoretical\cite{russian_domains_1,russian_domains_2} and computational\cite{zubko_superlattice_1} perspectives.
The dependence of the domain structure on superlattice geometry cannot be described using the theory of a thin film in an infinite vacuum, however. Some generalizations have appeared in the literature which include the effects of surrounding materials\cite{substrate_domains_1,superlattice_domains_1,superlattice_domains_2,superlattice_domains_3,
superlattice_domains_4,superlattice_domains_5,superlattice_domains_6,russian_domains_1}. For a free-standing thin film on a substrate, it was claimed that the electrostatic description is the same as for a thin film of half the width sandwiched between two paraelectric media \cite{substrate_domains_1}. This has been used to fit measurements of ferroelectric domains \cite{substrate_domains_2,substrate_domains_3}.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{superlattice}
\caption{Geometry of a FE/PE periodic superlattice. The unit cell is indicated by the dashed square. The thicknesses of the layers are indicated on the right, and $W_+$ and $W_-$ are the widths of the positive and negative polarization domains. In the polydomain limit, these widths are equal: $W_+ = W_- = w$.}
\label{fig:superlattice}
\end{figure}
By placing a ferroelectric thin film together with a paraelectric layer between two short circuited capacitor plates, it was found that the domain structure could be controlled by tuning the properties of the paraelectric layer, and the stability of the ferroelectric film could be improved \cite{superlattice_domains_1,superlattice_domains_2,superlattice_domains_3,
superlattice_domains_4,superlattice_domains_5,superlattice_domains_6}. This system is to some extent equivalent to a FE/PE superlattice since the capacitor plates impose periodic electrostatic boundary conditions.
A number of experimental and computational advances have revived interest in this
problem.
Interesting effects can occur at interfaces between different materials
such as the formation of a two-dimensional electron gas (2DEG) at polar-nonpolar
interfaces like \chem{LaAlO_3/SrTiO_3} (LAO/STO)\cite{ohtomo2002,ohtomo2004}.
It is thought that the 2DEG appears to screen the polar discontinuity at the LAO/STO interface\cite{bristowe_electronic_reconstruction},
and similarly, it has recently been proposed as a mechanism to screen the depolarising field at ferroelectric/paraelectric interfaces\cite{pablo_2deg_theory,pablo_2deg_dft}.
This is difficult to observe directly in experiments because the 2DEG competes with domain formation in screening the
depolarizing field, and evidence for 2DEG formation at FE/PE interfaces has only very recently been
found\cite{2deg_2018,2deg_2018_2,2deg_2018_3}.
Since these phenomena are of an electrostatic origin, a clear picture of the electrostatics of ferroelectrics is essential in order to understand them.
Although ferroelectric thin films have been frequently simulated from first
principles in different settings and environments\cite{pablo_2deg_dft,
junquera_critical_thickness,junquera_critical_thickness_2,dft_1,dft_2,dft_3},
ferroelectric domains are quite demanding to simulate from first principles,
as they require much larger supercells.
Recent developments in effective modelling from first-principles calculations
(second-principles methods) make it possible to study very large systems,
including large domain structures in ferroelectric materials\cite{vanderbilt_bto_effective_prl,vanderbilt_bto_effective_prl_2,
vanderbilt_bto_effective_prb,joannopoulos,2nd_principles_theory,2nd_principles_theory_2,bellaiche_1,bellaiche_2,bellaiche_3}
and observe interesting related effects such as negative capacitance\cite{zubko_negative_capacitance} and polar skyrmions\cite{2nd_principles_skyrmion}.
These scientific advances, both experimental and computational, have motivated
us to revisit the electrostatic description of ferroelectric domains.
The continuum electrostatic description of a monodomain ferroelectric thin film is essentially unaffected by a dielectric environment of the film. This is because there is zero field outside the thin film and hence these regions make no contributions to the electrostatic energy. For a polydomain ferroelectric thin film, the domain structure introduces stray electric fields into the regions outside the film (see Fig. \ref{fig:film}). We would expect different behavior if we replaced the vacuum regions with a dielectric medium. Understanding the effect of more general geometries on the electrostatic description of ferroelectric thin films not only gives an insight into how the surrounding dielectric media contribute to the screening of the depolarizing field, but also allows us to understand the behavior of the domain structure of the film in different environments, bringing us closer to a realistic description of a thin film.
This paper is organized as follows: first, we review the continuum model of a ferroelectric thin film in a vacuum with full electrostatics and a domain wall term. We then generalize the theory for three different systems: a thin film on an infinite substrate (overlayer, OL), a thin film sandwiched between two infinite dielectric media (sandwich, SW), and a FE/PE superlattice (SL).
We keep the prevalent nomenclature in the literature of referring to a spacer material such
as STO as paraelectric, but the description will be exclusively that of a dielectric
material with a given isotropic dielectric permittivity.
Some of these systems have appeared in the literature in various contexts and with different levels of detail. Here we present a coherent comparative study. We first compare the different cases in the Kittel limit, $w\ll d$, for which analytic expressions are obtained for $w(d)$. In the
general situation, and in particular when $w \gtrsim d$, the behavior of the domain width must be determined numerically. Previous studies of periodic superlattices have assumed ferroelectric and paraelectric layers of equal width. Here we provide a more general study of domain structures as a function of superlattice geometry, and obtain an interesting critical value for their thickness ratio
which separates the weak and strong coupling regimes in SL structures. We also present a detailed derivation of the electrostatic energies in Appendix A.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{film}
\caption{Geometry of a ferroelectric thin film of thickness $d$ with a $180^{\circ}$ polydomain structure. The red lines represent the electrostatic depolarizing field, which bend around the interfaces and domain walls.}
\label{fig:film}
\end{figure}
\section{Review of model for a film in vacuum}
The fundamental model used in this work is based on
the following free energy (per unit volume) of a ferroelectric thin film in a
vacuum with a $180^{\circ}$ stripe domain structure
\beq{F_tot}
\F = \F_0(P) + \frac{\S}{w} + \F_{\text{dep}}(w,d)
\;,\end{equation}
where $\F_0(P)$, defined as
\beq{eq:ferro}
\F_0(P) = \frac{1}{2{\epsilon}_0\kappa_c}\left( \frac{1}{4}\frac{P^4}{P_S^2}-\frac{1}{2}P^2\right) \, ,
\end{equation}
is the bulk ferroelectric free energy, with spontaneous polarization $P_S$ and dielectric permittivity $\kappa_c$, which describes the curvature about the minima. $\S$ is the energy cost (per unit area) of creating a domain wall, $\F_{\text{dep}}$ is the electrostatic energy associated with the depolarizing field, and $w$ and $d$ are the width of one domain and the thickness of the film, respectively. We assume that the domain walls are infinitely thin, that the polarization is oriented normal to the FE film, and that the polarization is constant throughout the film. In reality, the magnitude of the polarization increases or decreases near the interfaces due to surface effects; this can be treated using a Landau-Ginzburg theory, but will be neglected here.
Since we will be interested in the electrostatic effects due to a finite polarization, we will
consider the polarization to be $P_S$, except for its modification in linear response to
the depolarizing field, implicit in using a dielectric permittivity $\kappa_c$ for the material normal
to the field.
This assumption is equivalent to replacing the form of $\F_0(P)$ in Eq.~\ref{eq:ferro} by
\beq{eq:ferro2}
\F_0(P) = \frac{1}{2{\epsilon}_0\kappa_c} ( P-P_S)^2 \, .
\end{equation}
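Indeed, Eq.~\eqref{eq:ferro2} is just the harmonic expansion of Eq.~\eqref{eq:ferro} about its minimum $P = P_S$, up to the additive constant $\F_0(P_S) = -P_S^2/(8\epsilon_0\kappa_c)$:

```latex
\begin{align*}
\F_0'(P_S) &= \frac{1}{2\epsilon_0\kappa_c}\left(\frac{P_S^3}{P_S^2}-P_S\right) = 0 , \\
\F_0''(P_S) &= \frac{1}{2\epsilon_0\kappa_c}\left(\frac{3P_S^2}{P_S^2}-1\right) = \frac{1}{\epsilon_0\kappa_c} , \\
\F_0(P) &\approx \F_0(P_S) + \frac{1}{2\epsilon_0\kappa_c}\,(P-P_S)^2 .
\end{align*}
```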
The equilibrium domain structure for this system for a given thickness is obtained by minimising the energy: $\partial_{w}\F=0$.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{vacuum_domain}
\caption[region1]{Equilibrium domain width as a function of thickness for an isolated thin film. The red curve shows the numerical solution using the full expression for the electrostatic energy, truncated at $n=100$ terms. The solid black curve is the Kittel curve: $w(d) = \sqrt{l_k d}$. Some of the points $(d_i,w_{eq,i})$ are marked with black dots, which will be referred to in Fig. \ref{fig:vacuum_energy}. The following values of $d$ were used: $d_1 = 2 \ \si{nm}$, $d_2 = 1 \ \si{nm}$, $d_3 = 0.4 \ \si{nm}$, $d_4 = 0.105 \ \si{nm}$, $d_5 = 0.1 \ \si{nm}$, $d_6 = 0.99 \ \si{nm}$, $d_6 = 0.9 \ \si{nm}$. The values of the parameters used are: $P = 0.78$ C/m\textsuperscript2, $\S = 0.13$ J/m\textsuperscript2, $\kappa_a = 185$, $\kappa_c = 34$, $\kappa_s = 300$.}
\label{fig:vacuum_domain}
\end{figure}
In this work we consider an ideal domain structure made by regular straight stripes,
all of them of the same width $w$ (in Appendix A different widths are considered).
For an isolated film, the electrostatic energy for that structure is given by \cite{springer}
\beq{vacuum_full}
\F_{\text{e}} = \frac{8P_S^2}{{\epsilon}_0\pi^3}\frac{w}{d}\sum_{n \ \text{odd}}\frac{1}{n^3}\frac{1}{1+\chi\kappa_c\coth{\left( \frac{n\pi}{2}\chi\frac{d}{w}\right)}}
\end{equation}
where $\kappa_a$, $\kappa_c$ are the dielectric permittivities in the directions parallel and normal to the film and $\chi=\sqrt{\kappa_a/\kappa_c}$ is the anisotropy of the film. In the Kittel limit\cite{kittel,kmf}, $\frac{w}{d}\ll 1$, Eq. \eqref{vacuum_full} reduces to
\beq{F_kittel}
\F^{\text{K}}_{\text{e}} = \frac{P_S^2}{2{\epsilon}_0}\b\frac{w}{d}
\;,\end{equation}
where
\beq{beta}
\b = \frac{14\zeta(3)}{\pi^3}\frac{1}{1+\chi\kappa_c}
\;,\end{equation}
and $\zeta(n)$ is the Riemann zeta function. An analytic expression is obtained for the equilibrium domain width:
\beq{kittel_vacuum}
w(d) = \sqrt{l_kd}
\;,\end{equation}
where
\beq{}
l_k = \frac{2{\epsilon}_0\S}{P_S^2\b}
\end{equation}
is the Kittel length, which defines a characteristic length scale of the system. Eq. \eqref{kittel_vacuum} is known as Kittel's law\cite{kittel}.
Beyond the Kittel regime, we can obtain the equilibrium domain width from
the numerical solution to Eq. \eqref{F_tot} for the full electrostatics expression
in Eq.~\eqref{vacuum_full}.
In Fig. \ref{fig:vacuum_domain}, we plot $w(d)$ both from the Kittel limit and from the numerical solution, truncating Eq. \eqref{vacuum_full} at $n=100$ terms. We use \chem{PbTiO_3} (PTO) and \chem{SrTiO_3} (STO) as examples of ferroelectric and paraelectric materials, respectively, in all of the plots in this paper, using suitable parameters\footnote{$P_S = 0.78$ C/m\textsuperscript{2}, $\S = 0.13$ J/m\textsuperscript{2}, $\chi_{\eta} = 26$, $\kappa_a = 185$, $\kappa_c = 34$, $\kappa_s = 300$.}. From Fig. \ref{fig:vacuum_domain} we see that the domain width follows Kittel's law at large values of $d$ but, as $d$ decreases, $w$ reaches a minimum at $d_{\text{m}}$ and then
diverges at $d_{\infty}$.
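The minimization described above is straightforward to reproduce numerically. The following Python sketch (with the PTO parameters of Fig. \ref{fig:vacuum_domain}; the truncation order, the bracketing interval, and the golden-section minimizer are choices of this sketch, not taken from the text) obtains the equilibrium width by minimizing $\F = \S/w + \F_{\text{e}}(w,d)$ with the truncated series of Eq. \eqref{vacuum_full}:

```python
import math

# Sketch: equilibrium domain width of an isolated PbTiO3 film by direct
# minimization of F(w) = Sigma/w + F_e(w, d), with F_e the truncated
# series of Eq. (vacuum_full). Parameters are those of Fig. 1.
EPS0 = 8.8541878128e-12          # vacuum permittivity, F/m
P_S = 0.78                       # spontaneous polarization, C/m^2
SIGMA = 0.13                     # domain-wall energy, J/m^2
KAPPA_A, KAPPA_C = 185.0, 34.0   # in-plane / out-of-plane permittivities
CHI = math.sqrt(KAPPA_A / KAPPA_C)                   # anisotropy factor
ZETA3 = 1.2020569031595943                           # Riemann zeta(3)
BETA = 14 * ZETA3 / math.pi**3 / (1 + CHI * KAPPA_C)
L_K = 2 * EPS0 * SIGMA / (P_S**2 * BETA)             # Kittel length

def f_elec(w, d, nmax=100):
    """Electrostatic energy density of the striped film, truncated series."""
    s = 0.0
    for n in range(1, nmax + 1, 2):
        coth = 1.0 / math.tanh(0.5 * n * math.pi * CHI * d / w)
        s += 1.0 / (n**3 * (1.0 + CHI * KAPPA_C * coth))
    return 8 * P_S**2 / (EPS0 * math.pi**3) * (w / d) * s

def w_equilibrium(d):
    """Minimize F(w) = SIGMA/w + f_elec(w, d) by golden-section search."""
    F = lambda w: SIGMA / w + f_elec(w, d)
    lo, hi = 1e-12, 1e-4
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(200):
        a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
        if F(a) < F(b):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)

# For a thick film the numerical minimum follows Kittel's law closely
d = 50e-9
print(w_equilibrium(d) / math.sqrt(L_K * d))   # ratio close to 1
```

The same routine, evaluated at decreasing $d$, traces the minimum at $d_{\text{m}}$ and the divergence near $d_{\infty}$ seen in Fig. \ref{fig:vacuum_domain}.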
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{vacuum_energy}
\caption[region1]{Energy as a function of $w$ for various values of $d$. The red curve is the energy cost of creating domain walls. The black curves are the total energies for different values of $d$, and the dashed curves immediately beneath are the respective electrostatic energies at the same thicknesses (truncated at $n=100$ terms). The minimum with respect to $w$ is indicated with a black dot. The inset shows the energy curves near where the equilibrium domain width diverges.}
\label{fig:vacuum_energy}
\end{figure}
We can understand this behavior by studying the shape of the energy curves as a function of $w$ and $d$. In Fig. \ref{fig:vacuum_energy}, we plot the different energies as a function of domain width at different thicknesses. The energy per unit volume associated with creating the domain walls, shown in red, is unaffected by the thickness of the film. The dashed gray lines show the electrostatic energy Eq. \eqref{vacuum_full} at different thicknesses. We can see in each case that for small $w$, they are linear in $w$, following Kittel's law (Eq. \eqref{F_kittel}). As $w$ increases, Kittel's law breaks down, and the curves level off, approaching the monodomain electrostatic energy:
\beq{F_mono}
\F_{\text{mono}} = \frac{P_S^2}{2{\epsilon}_0\kappa_c}
\;.\end{equation}
As $d$ decreases, the saturation of the electrostatic energy is realized earlier, and the minimum in total energy becomes shallower, eventually disappearing, the equilibrium domain width thereby diverging. We can visualize this by looking at the minima of the total energy curves as $d$ is decreased. The minima are marked with black dots on Fig. \ref{fig:vacuum_energy} and are also shown on the plot of $w(d)$ in Fig. \ref{fig:vacuum_domain}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{vacuum_kc}
\caption[region1]{Energy vs $w$ for various multiples of $\kappa_c = 34$. The red curve is the energy cost of creating a domain structure. The black curves are the total energies for different values of $\kappa_c$, and the dashed grey curves immediately beneath are the respective electrostatic energies for the same values of $\kappa_c$ (truncated at $n=100$ terms). All curves are for a thickness of $d = 1 \ \si{nm}$.}
\label{fig:vacuum_kc}
\end{figure}
The described deviation from Kittel's law is sensitive to the system's parameters. In Ref. [\onlinecite{vacuum_1}], an expression for $d_m$ was reported\footnote{The authors in Ref. [\onlinecite{vacuum_1}] do not provide details on how this was obtained.} of the form
\beq{dcrit}
d_{\text{m}} = 5\pi\S{\epsilon}_0\frac{\kappa_c}{\chi}\frac{1}{P_S^2}
\;,\end{equation}
where such dependence is explicit.
In Fig. \ref{fig:vacuum_kc} we show the effect of changing $\kappa_c$. Increasing $\kappa_c$ decreases the curvature of the electrostatic energy and also decreases the monodomain energy (the asymptotic energy for large $w$). By increasing $\kappa_c$ for a fixed value of $d$, the total energy minimum again
becomes shallower and then disappears.
Although analytic solutions for the equilibrium domain width can not be obtained using Eq. \eqref{vacuum_full}, we can obtain approximate solutions. Close to the thickness where the domain width diverges, $d_{\infty}$, we have
\beq{w_asymptote}
\begin{split}
w(d) &\cong \frac{\pi\chi}{2\sqrt{e}} d \exp{\left(\frac{\pi^2}{8}\frac{\kappa_c}{\chi}\b\frac{l_k}{d}\right)}\\
d_{\text{m}} &\cong \frac{\pi^2}{8}\frac{\kappa_c}{\chi}\b l_k = \frac{\pi^2}{4}\S{\epsilon}_0\frac{\kappa_c}{\chi}\frac{1}{P_S^2}
\end{split}
\;.\end{equation}
Details of this approximation are given in Appendix B and in Ref. [\onlinecite{superlattice_domains_1}].
In this approximation $d_{\text{m}}$ has the same dependence on the system's parameters
as Eq. \eqref{dcrit}, but the constant prefactor is different.
We can also obtain an analytic approximation to the domain width at all thicknesses by replacing Eq. \eqref{vacuum_full} with a simpler expression that has the correct behavior in the monodomain and Kittel limits,
\beq{F_approx}
\F_{\text{e}}^* = \frac{P_S^2}{2{\epsilon}_0\kappa_c}\frac{1}{1+\frac{1}{\kappa_c\b}\frac{d}{w}}
\;,\end{equation}
which clearly tends to Eq. \eqref{F_mono} and Eq. \eqref{F_kittel} when $w/d$ is large and small, respectively. Using this, we get
\beq{w_approx}
\begin{split}
w(d) &= \frac{\sqrt{l_k d}}{1-\kappa_c\b\sqrt{\frac{l_k}{d}}}\\
d_{\text{m}} &= 4\kappa_c^2\b^2l_k \approx \frac{112\zeta(3)}{\pi^3}\S{\epsilon}_0\frac{\kappa_c}{\chi}\frac{1}{P_S^2}
\end{split}
\;.\end{equation}
Details of this approximation are given in Appendix C. This approximation is of the same form as Eq. \eqref{dcrit} but again with a different numerical prefactor. Eq. \eqref{w_approx} gives a good approximation to $d_{\text{m}}$, but overestimates the domain width near $d_{\text{m}}$. This is because, while Eq. \eqref{F_approx} has the correct behavior in the monodomain and polydomain limits, it underestimates the curvature in the intermediate region. In spite of this, the approximation predicts the correct dependence on the system's parameters.
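The closed forms in Eq. \eqref{w_approx} are easy to check numerically. The sketch below (Python, with the same PTO parameters as in Fig. \ref{fig:vacuum_domain}) verifies that $d_{\text{m}}$ is indeed the minimum of the approximate $w(d)$ and that the large-$\chi\kappa_c$ simplification agrees to within a couple of percent:

```python
import math

# Sketch: the interpolated domain width of Eq. (w_approx) and its minimum,
# with the PbTiO3 parameters of Fig. 1.
EPS0 = 8.8541878128e-12
P_S, SIGMA = 0.78, 0.13
KAPPA_A, KAPPA_C = 185.0, 34.0
CHI = math.sqrt(KAPPA_A / KAPPA_C)
ZETA3 = 1.2020569031595943
BETA = 14 * ZETA3 / math.pi**3 / (1 + CHI * KAPPA_C)
L_K = 2 * EPS0 * SIGMA / (P_S**2 * BETA)

def w_approx(d):
    """Equilibrium width from the interpolated electrostatic energy F_e*."""
    return math.sqrt(L_K * d) / (1.0 - KAPPA_C * BETA * math.sqrt(L_K / d))

d_inf = (KAPPA_C * BETA)**2 * L_K   # thickness where w diverges
d_m = 4 * d_inf                     # thickness of minimum width

# Large chi*kappa_c simplification of d_m, as quoted in Eq. (w_approx)
d_m_simple = 112 * ZETA3 / math.pi**3 * SIGMA * EPS0 * (KAPPA_C / CHI) / P_S**2

# d_m is a genuine minimum of w_approx, and the simplified form is close
assert w_approx(1.05 * d_m) > w_approx(d_m) < w_approx(0.95 * d_m)
print(d_m, d_m_simple)
```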
Having understood the behavior of the equilibrium domain width with thickness and the system's parameters, we proceed to investigate the effect of changing the surrounding environment of the thin film. In order to investigate the effect of changing the environment, we must obtain more general expressions for the electrostatic energies, similar to Eq. \eqref{vacuum_full}.
\section{Generalized Electrostatics}
The electrostatic energies were obtained for the OL, SW and SL cases. The expressions, including their derivation, are shown in detail in Appendix A.
\subsection{Generalized Kittel Law}
Taking the Kittel limit for the energies in Eqs. \eqref{full_elec} and \eqref{full_sub}, we obtain a generalization of Kittel's law:
\beq{kittel_general}
\begin{split}
w(d) &=\sqrt{l_k(\kappa_s)d}\\
l_k(\kappa_s)& = \frac{2{\epsilon}_0\S}{P_S^2\b(\kappa_s)}
\end{split}
\;.\end{equation}
The generalization is introduced through the factor $\b$:
\beq{beta_general}
\begin{split}
\b_{\text{SW}}(\kappa_s)& = \frac{14\zeta(3)}{\pi^3}\frac{1}{\kappa_s+\chi\kappa_c}\\
\b_{\text{SL}}(\kappa_s,\a) & = \frac{1}{1+\a}\frac{14\zeta(3)}{\pi^3}\frac{1}{\kappa_s+\chi\kappa_c}\\
\b_{\text{OL}}(\kappa_s) & = \frac{7\zeta(3)}{\pi^3}\left( \frac{1+\kappa_s +2\chi\kappa_c}{(1+\chi\kappa_c)(\kappa_s+\chi\kappa_c)}\right)
\end{split}
\;.\end{equation}
The SL case has an additional dependence on $\a \equiv d_{\text{PE}}/d_{\text{FE}}$, the ratio of thicknesses of the paraelectric and ferroelectric layers. However, the energy cost of creating a domain wall is renormalized by the same prefactor, and thus, in the Kittel limit, the ratio $\a$ affects the energy scale but does not influence the behavior of the domains. For each case in Eq. \eqref{beta_general}, Eq. \eqref{beta} is recovered in the limit $\kappa_s\to 1$.\\
The domain widths for the four different systems are plotted in Fig. \ref{fig:kittel}. We can see that including the environment has the effect of shifting the curve upwards, but the square root behavior is unaffected. This makes sense physically: the paraelectric medium contributes to the screening of the depolarizing field. For high dielectric constants, this contribution is large, meaning less screening is required by the domains, so there are fewer domains, and hence the width increases.\\
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{kittel}
\caption{$w(d)$ for a thin film in a vacuum (black), the OL system (red), the SW (blue) and the SL system with $\a=3$ (blue). The solid lines show the analytic solutions from the generalized Kittel's law and the dashed lines show numerical solutions using the full expressions for the depolarizing energies. The SL and SW systems have identical square root curves in the Kittel limit. }
\label{fig:kittel}
\end{figure}
The SL and SW cases have the exact same behavior in the Kittel limit. This is expected: in the Kittel limit, the field in the superlattice loops in the paraelectric layers but does not penetrate through to neighboring ferroelectric layers. In this regime, the coupling between the ferroelectric layers is weak and they behave as if isolated from each other, which is precisely the SW case.
In Ref. [\onlinecite{substrate_domains_1}], it was claimed that there should be a factor of two between the length scales of the OL and SW systems. From Eq. \eqref{beta_general} we have:
\beq{}
\frac{l_{k,\text{OL}}(\kappa_s)}{l_{k,\text{SW}}(\kappa_s)} = \frac{\b_{\text{SW}}(\kappa_s)}{\b_{\text{OL}}(\kappa_s)} = \frac{2\left( 1 +\chi\kappa_c\right)}{1+\kappa_s +2\chi\kappa_c}
\;.\end{equation}
This ratio is not a universal factor of two: it tends to one as $\kappa_s\to 1$, where both systems reduce to the isolated film, and it deviates strongly once $\kappa_s$ is comparable to or larger than $\chi\kappa_c$. For example, for PTO and STO, $\chi\kappa_c \sim 79$ while $\kappa_s = 300$ at room temperature and can be as large as $10^4$ at low temperatures, so the two Kittel lengths differ by significantly more than a factor of two.
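The prefactors in Eq. \eqref{beta_general} can be evaluated directly. The sketch below (Python; PTO film with an STO environment, $\kappa_s = 300$, as elsewhere in this paper) checks that both $\b_{\text{OL}}$ and $\b_{\text{SW}}$ recover the isolated-film $\b$ as $\kappa_s \to 1$, and evaluates the ratio of Kittel lengths:

```python
import math

# Sketch: the generalized Kittel prefactors of Eq. (beta_general) for a
# PbTiO3 film; kappa_s = 300 corresponds to an SrTiO3 environment.
ZETA3 = 1.2020569031595943
KAPPA_A, KAPPA_C = 185.0, 34.0
CHI = math.sqrt(KAPPA_A / KAPPA_C)

def beta_vac():
    return 14 * ZETA3 / math.pi**3 / (1 + CHI * KAPPA_C)

def beta_sw(ks):
    return 14 * ZETA3 / math.pi**3 / (ks + CHI * KAPPA_C)

def beta_ol(ks):
    return (7 * ZETA3 / math.pi**3 * (1 + ks + 2 * CHI * KAPPA_C)
            / ((1 + CHI * KAPPA_C) * (ks + CHI * KAPPA_C)))

# Since l_k is proportional to 1/beta, a more polarizable environment
# (smaller beta) means wider domains: l_vac < l_OL < l_SW for kappa_s > 1
ks = 300.0
print(beta_vac(), beta_ol(ks), beta_sw(ks))

# Ratio of Kittel lengths, l_OL / l_SW = beta_SW / beta_OL
print(beta_sw(ks) / beta_ol(ks))
```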
\subsection{Beyond Kittel: Thin Films}
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{substrate_dielectric}
\caption{Domain widths as a function of thickness for various values of $\kappa_s$ for (a) the OL system and (b) the SW system. Each domain width and film thickness is normalized by the Kittel length for that value of $\kappa_s$. }
\label{fig:ks_sub_dielec}
\end{figure}
Although the square root curve is simply shifted upwards after including the environment, the behavior for thinner films is quite different. In Fig. \ref{fig:kittel} we can see that the thickness at which the domain width diverges is very sensitive to the dielectric environment. In Fig. \ref{fig:ks_sub_dielec}, we plot the domain widths for various values of the dielectric permittivity of the substrate material, $\kappa_s$, for the OL and SW systems, each curve scaled by the relevant Kittel length, $l_k(\kappa_s)$. We see that $d_{\text{m}}$ decreases with increasing $\kappa_s$.
In Fig. \ref{fig:sub_ks_min} we plot $d_{\text{m}}$, scaled by the corresponding Kittel length, as a function of $\kappa_s$ to illustrate this effect.
For the SW system, $d_{\text{m}}$ decreases more dramatically. This is expected, as there is screening on both sides of the thin film in the SW system.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{ks}
\caption{$d_m$ relative to the corresponding Kittel length as a function of dielectric permittivity of the substrate material for the OL (red) and the SW (blue) systems. }
\label{fig:sub_ks_min}
\end{figure}
We can understand the effect of the paraelectric permittivity on $d_{\text{m}}$ by examining the form of the electrostatic energy. For example, for the SW system:
\beq{}
\F_{\text{SW}} = \frac{1}{\kappa_s}\frac{8P_S^2}{{\epsilon}_0\pi^3}\frac{w}{d} \sum_{n\ \text{odd}} \frac{1}{n^3}\frac{1}{1+\chi\frac{\kappa_c}{\kappa_s}\coth{\left( \frac{n \pi}{2}\chi \frac{d}{w}\right)}}
\;.\end{equation}
This is equivalent to the electrostatic energy of the IF system, but with the overall energy and $\kappa_c$ both divided by $\kappa_s$. Since, from Eqs. \eqref{dcrit} and \eqref{w_approx}, $d_{\text{m}} \propto \kappa_c^{3/2}$, it is clear that $d_{\text{m}}$ should decrease with increasing $\kappa_s$.
\begin{figure}[b]
\hspace*{-0.4cm}
\centering
\includegraphics[width=\columnwidth]{superlattice_ks}
\caption{Domain width as a function of thickness for the SL system with (a) $\a=1$, (b) $\a=\sqrt{\kappa_a/\kappa_c}=2.33$, and (c) $\a=100$. Each domain width and film thickness is normalized by the Kittel length for that value of $\kappa_s$. }
\label{fig:superlattice_ks}
\end{figure}
\subsection{Superlattice}
For the SL system with $\a=1$ ($d_{\text{PE}}=d_{\text{FE}}$), we find that $d_{\text{m}}$ actually increases with the permittivity of the paraelectric layers, as shown in Fig. \ref{fig:superlattice_ks}(a),
contrary to what happens for the OL and SW systems.
For small values of $\a$, the periodic boundary conditions of the superlattice make the electrostatic description very different from the OL and SW systems. When the paraelectric layers are thin, the depolarizing field penetrates through them and there is strong coupling between the ferroelectric layers. The superlattice acts as an effectively uniform ferroelectric material. The average polarization decreases with the permittivity of the paraelectric layers, and according to Eq. \eqref{dcrit}, $d_m$ increases.
For large spacings between the ferroelectric layers ($\alpha \gg 1$), the coupling between them becomes weak, the SW system being realized for $\a\to\infty$. This is illustrated in Fig. \ref{fig:superlattice_ks}(c), which is almost identical to Fig. \ref{fig:ks_sub_dielec}(b).
Interestingly, when $\a=\alpha_{\text{c}}\equiv\chi \approx 2.33$, $d_{\text{m}}/l_k(\kappa_s)$ is independent of $\kappa_s$. At this ratio, the dielectric permittivity of the spacer has no influence on the equilibrium domain structure, relative to the length scale given by $l_k(\kappa_s)$. This is shown in Fig. \ref{fig:superlattice_ks}(b). In Fig. \ref{fig:ks_crit_superlattice} we plot $d_{\text{m}}$ as a function of $\kappa_s$ for different values of $\a$. We see that when $\a < \alpha_{\text{c}}$, $d_{\text{m}}$ increases with $\kappa_s$, while it decreases for $\a > \alpha_{\text{c}}$ and remains constant when $\a=\alpha_{\text{c}}$. Thus, $\alpha_{\text{c}}$ represents a natural boundary between the strong and weak coupling regimes of superlattices.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{ks_crit_superlattice}
\caption{Critical thickness of the SL system as a function of $\kappa_s$ for several values of $\a$. Each value of $d$ is scaled by the appropriate Kittel length. }
\label{fig:ks_crit_superlattice}
\end{figure}
The critical ratio $\alpha_{\text{c}}$ can be predicted from both the asymptotic and analytic approximations. Using the analytic approximation to the SL system (see Appendix C), we have
\beq{}
\begin{split}
\frac{d_{\text{m}}}{l_k(\kappa_s)} &= 4(\kappa_c+\a^{-1}\kappa_s)^2\b(\kappa_s)^2\\
& \propto \frac{\kappa_c}{\kappa_a}\frac{(1+\frac{\kappa_s}{\a\kappa_c})^2}{(1+\frac{\kappa_s}{\chi\kappa_c})^2}\frac{1}{P_S^2}
\end{split}
\;.\end{equation}
From this we can see that when $\a=\alpha_{\text{c}}$, the dependence on $\kappa_s$ vanishes.
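The cancellation at $\a = \alpha_{\text{c}}$ can be checked numerically from the quoted approximation (a sketch; PTO parameters as in Fig. \ref{fig:vacuum_domain}):

```python
import math

# Sketch: kappa_s independence of d_m / l_k at the critical thickness ratio
# alpha_c = chi, using the approximate SL expression quoted in the text.
ZETA3 = 1.2020569031595943
KAPPA_A, KAPPA_C = 185.0, 34.0
CHI = math.sqrt(KAPPA_A / KAPPA_C)   # the critical ratio alpha_c

def dm_over_lk(ks, alpha):
    beta = 14 * ZETA3 / math.pi**3 / (ks + CHI * KAPPA_C)
    return 4 * (KAPPA_C + ks / alpha)**2 * beta**2

# At alpha = chi the kappa_s dependence cancels exactly; away from it,
# d_m/l_k grows with kappa_s for alpha < chi and shrinks for alpha > chi
print(dm_over_lk(1.0, CHI), dm_over_lk(1e4, CHI))    # equal values
print(dm_over_lk(1.0, 1.0) < dm_over_lk(300.0, 1.0))     # True
print(dm_over_lk(1.0, 100.0) > dm_over_lk(300.0, 100.0)) # True
```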
\section{Discussion and Conclusion}
We have extended the electrostatic description of an isolated ferroelectric thin film within Kittel's model to thin films surrounded by dielectric media and to FE/PE superlattices. While some of the generalizations have previously appeared in the literature, a detailed comparison had not been done. In doing so, we have understood how the dielectric materials influence the domain structure in the ferroelectric materials, both in the Kittel limit and beyond.
In the Kittel limit, the square root behavior is only affected in scale, the domain width increasing with the dielectric permittivity of the environment, $\kappa_s$. This provides a useful correction to measurements of domain width with film thickness, as Kittel's law for an isolated film typically underestimates domain widths. Beyond the Kittel regime, we have found that increasing $\kappa_s$ decreases $d_{\text{m}}$, that is, the thickness for which the domain width is minimal.
For FE/PE superlattices, we found that $\kappa_s$ can either decrease or increase $d_{\text{m}}$, depending on the ratio of thicknesses, $\a = d_{\text{PE}}/d_{\text{FE}}$. We relate this to the different coupling regimes between the ferroelectric layers, as discussed in Ref. [\onlinecite{russian_domains_1}], for example. When $\a$ is large, the ferroelectric layers are weakly coupled, and the minimum thickness decreases with $\kappa_s$. When $\a$ is small, the ferroelectric layers are strongly coupled, and $d_{\text{m}}$ increases with $\kappa_s$. Remarkably, when $\a$ equals the anisotropy of the ferroelectric layers, $\a = \alpha_{\text{c}} \equiv \chi$, the ratio $d_{\text{m}}/l_k$ is unaffected by $\kappa_s$. $d_{\text{m}}$ itself does change, since the Kittel length depends on $\kappa_s$, but the critical ratio $\alpha_{\text{c}}$ serves as a clear boundary between the strong and weak coupling regimes from an electrostatic viewpoint.
This continuum electrostatic theory makes use of several important approximations. Polarization gradients occur throughout real films, as the polarization increases or decreases close to the interfaces depending on surface or interface effects, but such gradients have been neglected here. Real domains are typically neither straight nor of infinite length, and the domain structure may not be an equilibrium one ($A\neq 0,\pm1$; see Appendix A).
As stated above, one important approximation in the Kittel-like model used here is the description of the polarization in the ferroelectric, assuming a dielectric linear-response modification of the spontaneous polarization $P_S$ (or using Eq.~\ref{eq:ferro2} instead of Eq.~\ref{eq:ferro} as free energy term related to the polarization). Within this approximation, the system approaches a monodomain phase in a thin-limit regime in which the more complete treatment may predict $P=0$. We investigate this possibility by considering a theory with Eq.~\ref{eq:ferro} for the polarization, and Eq. \ref{F_approx} as the model electrostatic energy. We find that the polarization is zero for
small thicknesses until
\beq{}
d_{\text{c}} = 27(\kappa_c\b)^2 l_k
\end{equation}
at which the polarization jumps to $P_S/\sqrt{3}$ [\onlinecite{pablo_2deg_theory}] and quickly saturates to $P_S$.
Conversely, coming from $d>d_{\text{c}}$, the polarization decreases and the domain width increases until, at $d_{\text{c}}$, the ferroelectric material becomes paraelectric.
If $d_{\text{c}} < d_{\infty}$ the theory is unaffected, and the polydomain to monodomain transition would occur before the ferroelectric to paraelectric transition. Otherwise, the ferroelectric film becomes paraelectric without a polydomain to monodomain transition.
For an isolated thin film of PTO, $d_{\text{m}} \sim 0.2l_k$ and $d_{\text{c}}\sim 0.8l_k$, meaning a ferroelectric to paraelectric transition takes place before the polydomain to monodomain transition. However, $d_{\text{c}}$ is also very sensitive to the environment of the film. For a sandwich system with a thin film of PTO between two regions of STO, again $d_{\text{c}} \gg d_{\text{m}}$. For strongly-coupled FE/PE superlattices (small $\a$), however, $d_{\text{m}}$ increases with $\kappa_s$, and we would have $d_{\text{m}} \gg d_{\text{c}}$, and therefore the thin-limit behavior presented above should be observable before the films become paraelectric.
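The ordering of the three characteristic thicknesses of an isolated film can be checked directly from the analytic approximations (a sketch; note that within these formulas $d_{\text{c}}/d_{\text{m}} = 27/4$ regardless of parameters, so the order-of-magnitude estimates quoted above, which need not rely on this approximation, can differ somewhat):

```python
import math

# Sketch: characteristic thicknesses of an isolated PbTiO3 film in units
# of the Kittel length l_k, from the analytic approximations above.
ZETA3 = 1.2020569031595943
KAPPA_A, KAPPA_C = 185.0, 34.0
CHI = math.sqrt(KAPPA_A / KAPPA_C)
BETA = 14 * ZETA3 / math.pi**3 / (1 + CHI * KAPPA_C)
A2 = (KAPPA_C * BETA)**2

d_inf = A2        # domain-width divergence
d_m = 4 * A2      # minimum of w(d)
d_c = 27 * A2     # ferroelectric-to-paraelectric transition

# Ordering: on thinning the film, the ferroelectric-to-paraelectric
# transition is met before the polydomain-to-monodomain divergence
assert d_c > d_m > d_inf
print(d_inf, d_m, d_c)
```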
The comparative study offered in this work, however, gives the
expected behaviour of ferroelectric/dielectric heterostructures within the simplest
Kittel continuum model (continuum electrostatics for a given spontaneous polarization
and dielectric response, plus ideal domain wall formation).
The described behaviors are already quite rich, and we think they represent a
paradigmatic reference as basis for the understanding of more complex effects.
In particular for superlattices, the strong to weak coupling regime separation based on
this simplest model should be a useful guiding concept.
\section*{Acknowledgments}
The authors would like to thank Pablo Aguado-Puente for helpful discussions. DB would like to acknowledge the EPSRC Centre for Doctoral Training in Computational Methods for Materials Science under grant number EP/L015552/1.
\section*{Appendix A: Electrostatics}
\label{appendix:elec}
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{superlattice_unit_cell}
\caption{The geometry of a FE/PE superlattice. Regions I and III each correspond to half of a paraelectric layer, and region II is the ferroelectric layer. The thicknesses of the layers are indicated on the right, and $W_+$ and $W_-$ are the widths of the two domain orientations. The black squares are positive domains, with polarization $+P$, and the white squares are negative domains, with polarization $-P$. The system is periodic in the horizontal and vertical directions, with periods $W=W_+ + W_-$ and $D=d_{\text{FE}} + d_{\text{PE}}$, respectively.}
\label{fig:superlattice_unit_cell}
\end{figure}
Following Ref. [\onlinecite{springer}], we obtained the expressions for the electrostatic energies of the OL, SW and SL systems. We present the derivation for the SL system, but the method also applies to the OL and SW systems, the only difference being that the boundary conditions change from periodic to infinite.
Consider a periodic array of ferroelectric and paraelectric layers as shown in Fig. \ref{fig:superlattice_unit_cell}. The ferroelectric layer has a structure of $180^{\circ}$ stripe domains with polarization $\pm P$ and widths $W_+$, $W_-$. The unit cell of such a system is formed by one positive and one negative polarization domain in the $x$-direction, with period $W=W_+ + W_-$, and one ferroelectric and one paraelectric layer in the $z$-direction, with period $D = d_{\text{FE}}+d_{\text{PE}}$. As mentioned previously, we assume that the domain walls are infinitely thin. Thus, we write the polarization as a Fourier series:
\beq{superlattice_polarisation}
P(x)= AP+ \sum_{n=1}^{\infty}\frac{4P}{n\pi}\sin{\left(\frac{n\pi}{2}(A+1)\right)}\cos{\left( n kx\right)}
\;,\end{equation}
where $A = \frac{W_+ - W_-}{W}$ is the mismatch between the domains and $k = \frac{2\pi}{W}$. We can see that the polarization is split into a monodomain term, the average polarization $AP$, and polydomain terms in the infinite series. The polydomain limit is obtained when $A\to 0$, i.e. the domain widths are equal. The monodomain limit is obtained when $A\to\pm 1$, i.e. one of the domain widths tends to zero. To obtain the electric fields in the SL, we must first determine the electrostatic potentials. They satisfy the following Laplace equations:
\beq{laplace}
\begin{split}
\kappa_{ij}\partial_{i}\partial_{j}\phi_{\text{II}} &= 0 \\
\kappa_s\nabla^2\pI = \kappa_s\nabla^2\phi_{\text{III}} & = 0
\end{split}
\;,\end{equation}
where regions I, II and III are the different parts of the unit cell as shown in Fig. \ref{fig:superlattice_unit_cell}. Since the terms in \eqref{superlattice_polarisation} are linearly independent, we can treat the monodomain and polydomain cases separately. Clearly the potentials must be even and periodic in $x$, so the general solutions to \eqref{laplace} are of the form
\beq{potentials2}\resizebox{0.85\columnwidth}{!}{$
\begin{split}
\pI(x,z) &= c_0^1(z)+\sum_{n=1}^{\infty}\cos{\left( nkx \right)}\left( c_n^1e^{n k z} + d_n^1e^{-n k z}\right)\\
\phi_{\text{II}}(x,z) &= c_0^2(z)+\sum_{n=1}^{\infty}\cos{\left( nkx \right)}\left( c_n^2e^{nk\sqrt{\frac{\kappa_a}{\kappa_c}}z} + d_n^2e^{-nk\sqrt{\frac{\kappa_a}{\kappa_c}}z}\right) \\
\phi_{\text{III}}(x,z) &= c_0^3(z)+\sum_{n=1}^{\infty}\cos{\left( nkx \right)}\left( c_n^3e^{n k z} + d_n^3e^{-n k z}\right)
\end{split}$}
\;.\end{equation}
In order to obtain the potentials, we must use the symmetries and boundary conditions of the system to determine the coefficients:
\beq{}
\begin{split}
\pI (d_{\text{FE}}/2) & = \phi_{\text{II}}(d_{\text{FE}}/2) \\
\phi_{\text{III}} (-d_{\text{FE}}/2) & =\phi_{\text{II}}(-d_{\text{FE}}/2) \\
\pI (D/2) & =\phi_{\text{III}}(-D/2) \\
(\vec{D}_{\text{I}}-\vec{D}_{\text{II}})\cdot\hat{n} & = 0\\
(\vec{D}_{\text{III}}-\vec{D}_{\text{II}})\cdot\hat{n} & = 0\\
\pI(z)& =-\phi_{\text{III}}(-z)
\end{split}
\;.\end{equation}
The first two conditions are obtained by matching the potentials at the interfaces. The third comes from imposing periodic boundary conditions on the unit cell. The fourth and fifth are obtained by matching the normal components of the displacement fields,
\beq{d_fields}
\begin{split}
\vec{D}_{\text{I}}& = {\epsilon}_0\kappa_s\vec{E}_{\text{I}} \\
\vec{D}_{\text{II}}& = {\epsilon}_0\kappa \vec{E}_{\text{II}} + \vec{P} \\
\vec{D}_{\text{III}}& = {\epsilon}_0\kappa_s\vec{E}_{\text{III}}
\end{split}
\;,\end{equation}
at the interfaces, and the final condition is obtained from the symmetry of the system under $z\to -z$.
After some algebra, we find that the potentials are given by
\begin{widetext}
\beq{}
\begin{aligned}
\pI (z) & = -\frac{AP_S}{{\epsilon}_0\left[\frac{\kappa_c}{d_{\text{FE}}}+\frac{\kappa_s}{d_{\text{PE}}}\right]d_{\text{PE}}}(z-D/2) - \sum_{n=1}^{\infty}\alpha_n \beta_n \frac{\cos{\left( n k x\right)}\sinh{\left( nk \left( z-D/2\right)\rb}}{\chi\kappa_c\cosh{\left( nk\chi \frac{d_{\text{FE}}}{2}\right)}+\kappa_s\coth{\left( nk \frac{d_{\text{PE}}}{2}\right)}\sinh{\left( nk\chi \frac{d_{\text{FE}}}{2}\right)}}\\
\phi_{\text{II}} (z)& = \frac{AP_S}{{\epsilon}_0\left[\frac{\kappa_c}{d_{\text{FE}}}+\frac{\kappa_s}{d_{\text{PE}}}\right]d_{\text{FE}}}z + \sum_{n=1}^{\infty}\alpha_n \frac{\cos{\left( n k x\right)}\sinh{\left( nk\chi z\right)}}{\chi\kappa_c\cosh{\left( nk\chi \frac{d_{\text{FE}}}{2}\right)}+\kappa_s\coth{\left( nk \frac{d_{\text{PE}}}{2}\right)}\sinh{\left( nk\chi \frac{d_{\text{FE}}}{2}\right)}}\\
\phi_{\text{III}} (z)& = -\frac{AP_S}{{\epsilon}_0\left[\frac{\kappa_c}{d_{\text{FE}}}+\frac{\kappa_s}{d_{\text{PE}}}\right]d_{\text{PE}}}(z+D/2)- \sum_{n=1}^{\infty}\alpha_n \beta_n\frac{ \cos{\left( n k x\right)}\sinh{\left( nk\left( z+D/2\right)\rb}}{\chi\kappa_c\cosh{\left( nk\chi \frac{d_{\text{FE}}}{2}\right)}+\kappa_s\coth{\left( nk \frac{d_{\text{PE}}}{2}\right)}\sinh{\left( nk\chi \frac{d_{\text{FE}}}{2}\right)}}
\end{aligned}
\;,\end{equation}
\end{widetext}
where
\beq{}
\begin{split}
\alpha_n &= \frac{4P}{{\epsilon}_0 n^2\pi k}\sin{\left( \frac{n\pi}{2}(A+1)\right)} \\
\beta_n &= \frac{\sinh{\left( n k\chi \frac{d_{\text{FE}}}{2}\right)}}{\sinh{\left( n k \frac{d_{\text{PE}}}{2}\right)}}
\end{split}
\;.\end{equation}
The monodomain part of the potential has a zig-zag shape as expected, which is sensitive to the ratio of layer thicknesses and permittivities. The electrostatic energy of the system is obtained from
\beq{}
\F = \frac{1}{2}\int \kappa_{ij}E_iE_j\dd{x}\dd{z}
\;,\end{equation}
where the fields are the gradients of the potentials: $\vec{E}=-\vec{\nabla}\phi$. We integrate over the domain period in the $x$-direction and over both layers in the $z$-direction. Finally, the total electrostatic energy of the system is given by the second line of Eq. \eqref{full_elec}. The first line of Eq. \eqref{full_elec} and Eq. \eqref{full_sub} are the electrostatic energies of the SW and OL cases respectively, obtained using the same method. In all cases the energy is conveniently split into monodomain and polydomain parts. We can see that the monodomain parts for the OL and SW cases are identical to that of a thin film in a vacuum, as expected. We can also see that the polydomain part vanishes when $A\to\pm 1$, and the polydomain energy is obtained when $A\to 0$.
It will be useful for us to work in terms of energy \textit{per unit volume}. For the OL and SW cases, we simply divide by the thickness of the thin film. For the superlattice, however, we must use the total volume of the unit cell. For convenience, we would like to work in terms of the volume of the ferroelectric layer. So we let
\beq{}
\begin{split}
d &= d_{\text{FE}} \\
\alpha &= \frac{d_{\text{PE}}}{d_{\text{FE}}}
\end{split}
\;,\end{equation}
so that
\beq{}
\begin{split}
d_{\text{PE}} &= \alpha d \\
D & = (1+\alpha)d
\end{split}
\;.\end{equation}
The energies in Eq. \eqref{full_elec} give a complete picture of the electrostatics of ferroelectric thin films and superlattices.
\begin{widetext}
\beq{full_elec}
\begin{aligned}
\F_{\text{SW}} &= \frac{P^2}{2{\epsilon}_0\kappa_c}\left( A^2 + \frac{16\kappa_c}{\pi^3}\frac{w}{d}\sum_{n=1}^{\infty}\frac{\sin^2{\left( \frac{n\pi}{2}(A+1)\right)}}{n^3}\frac{1}{\kappa_s+\chi\kappa_c\coth{\left( \frac{n\pi}{2}\chi\frac{d}{w}\right)}}\right)\\
\F_{\text{SL}} &= \frac{1}{(1+\a)}\frac{P^2}{2{\epsilon}_0\kappa_c}\left(\frac{\kappa_c}{\kappa_c + \a^{-1}\kappa_s}A^2 + \frac{16\kappa_c}{\pi^3}\frac{w}{d}\sum_{n=1}^{\infty}\frac{\sin^2{\left(\frac{n\pi}{2}(A+1)\right)}}{n^3}\frac{1}{\chi\kappa_c\coth{\left( \frac{n \pi}{2}\chi \frac{d}{w}\right)}+\kappa_s\coth{\left( \frac{n \pi}{2}\a\frac{d}{w}\right)}}\right)
\end{aligned}
\end{equation}
\end{widetext}
For the substrate case, the energy is given by
\beq{full_sub}\resizebox{0.85\columnwidth}{!}{$
\F_{\text{OL}} = \frac{P^2}{2{\epsilon}_0\kappa_c}\left( A^2 + \frac{8\kappa_c}{\pi^3}\frac{w}{d}\sum_{n=1}^{\infty}\frac{\sin^2{\left( \frac{n\pi}{2}(A+1)\right)}}{n^3}\gamma_n^{-2}\Gamma_n \right)$}
\end{equation}
where
\beq{}\resizebox{0.85\columnwidth}{!}{$
\begin{split}
\gamma_n &= (\chi^2\kappa_c^2+\kappa_s)\sinh{\left( n\pi \chi\frac{d}{w}\right)}+\chi\kappa_c(1+\kappa_s)\cosh{\left( n\pi \chi\frac{d}{w}\right)}\\
\Gamma_n & = (\chi^2\kappa_c^2-\kappa_s)(1+\kappa_s)-4\chi^2\kappa_c^2(1+\kappa_s)\cosh{\left( n\pi \chi\frac{d}{w}\right)} \\
& + (1+\kappa_s)(3\chi^2\kappa_c^2+\kappa_s)\cosh{\left( 2n\pi \chi\frac{d}{w}\right)} \\
& - 4\chi\kappa_c(\chi^2\kappa_c^2+\kappa_s)\sinh{\left( n\pi \chi\frac{d}{w}\right)}\\
& +\chi\kappa_c(1+2\chi^2\kappa_c^2+\kappa_s(4+\kappa_s))\sinh{\left( 2n\pi \chi\frac{d}{w}\right)}
\end{split}$}
\end{equation}
It is important to check that the polydomain part of the energy reproduces the monodomain and Kittel energies in the appropriate limits. Letting $A=0$, we have
\beq{}\resizebox{0.85\columnwidth}{!}{$
\F_{\text{SL}} = \frac{P^2}{2{\epsilon}_0\kappa_c}\left( \frac{16\kappa_c}{\pi^3}\frac{w}{d}\sum_{n \ \text{odd}}\frac{1}{n^3}\frac{1}{\chi\kappa_c\coth{\left( \frac{n \pi}{2}\chi \frac{d}{w}\right)}+\kappa_s\coth{\left( \frac{n \pi}{2}\a\frac{d}{w}\right)}}\right)
$}
\;,\end{equation}
ignoring the prefactor of $(1+\a)^{-1}$. The monodomain limit is realized when $w\to\infty$. Using the expansion $\coth{(a x)} \sim \frac{1}{a x}$ about $x=0$, we get
\beq{}
\begin{split}
\F_{\text{SL}}&\to \frac{P^2}{2{\epsilon}_0(\kappa_c+\a^{-1}\kappa_s)}\frac{8}{\pi^2}\sum_{n \ \text{odd}}\frac{1}{n^2}\\
&=\frac{P^2}{2{\epsilon}_0(\kappa_c+\a^{-1}\kappa_s)}
\end{split}
\;,\end{equation}
since $\sum_{n \ \text{odd}}\frac{1}{n^2} = \frac{\pi^2}{8}$. For the Kittel limit, $\frac{d}{w}\gg 1$. Using $\coth{(x)}\to 1$ for large $x$, we get
\beq{}
\F_{\text{SL}}\to \frac{P^2}{2{\epsilon}_0}\frac{14\zeta(3)}{\pi^3}\frac{1}{\kappa_s+\chi\kappa_c}\frac{w}{d}
\;,\end{equation}
where we used $\sum_{n \ \text{odd}}\frac{1}{n^3} = \frac{7\zeta(3)}{8}$.
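Both limiting forms can be spot-checked numerically. The sketch below evaluates the $A=0$ series for $\F_{\text{SL}}$ (with the $(1+\a)^{-1}$ prefactor dropped) and compares it against the monodomain and Kittel expressions derived above. All parameter values are hypothetical, chosen for illustration only.

```python
import math

# Illustrative (hypothetical) parameter values in reduced units.
P, eps0 = 1.0, 1.0
kc, ks = 50.0, 10.0      # permittivities kappa_c, kappa_s
chi, alpha = 2.0, 3.0    # anisotropy factor chi and thickness ratio a

def F_SL(w, d, nmax=40001):
    """A=0 series for F_SL, with the (1+a)^{-1} prefactor dropped."""
    pref = (P**2 / (2 * eps0 * kc)) * (16 * kc / math.pi**3) * (w / d)
    total = 0.0
    for n in range(1, nmax, 2):  # only odd n contribute when A = 0
        coth_c = 1.0 / math.tanh(0.5 * n * math.pi * chi * d / w)
        coth_s = 1.0 / math.tanh(0.5 * n * math.pi * alpha * d / w)
        total += 1.0 / (n**3 * (chi * kc * coth_c + ks * coth_s))
    return pref * total

zeta3 = 1.2020569031595943  # Riemann zeta(3)

# Monodomain limit (w >> d): F -> P^2 / (2 eps0 (kc + ks/alpha))
F_mono = P**2 / (2 * eps0 * (kc + ks / alpha))
assert abs(F_SL(1e6, 1.0) / F_mono - 1) < 1e-3

# Kittel limit (d >> w): F -> (P^2/2eps0)(14 zeta(3)/pi^3)(w/d)/(ks + chi kc)
w, d = 1.0, 1e3
F_kittel = (P**2 / (2 * eps0)) * (14 * zeta3 / math.pi**3) * (w / d) / (ks + chi * kc)
assert abs(F_SL(w, d) / F_kittel - 1) < 1e-6
```

The tolerances reflect the series truncation and the finite values of $w/d$ used to emulate the two limits.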
\section*{Appendix B: Asymptotic Approximation of the Domain Width in the Ultrathin Limit}
Following the method in Ref. [\onlinecite{superlattice_domains_1}], we obtain an approximation to the equilibrium domain-width behavior in the ultrathin limit. For the IF system, the total energy is approximately
\beq{}
\F \cong \frac{\S}{w} + \frac{8P^2}{{\epsilon}_0\kappa_c\pi^2}\frac{1}{\xi}\sum_{n=0}^{\infty}\frac{1}{(2n+1)^3}\tanh{\left( \frac{(2n+1)}{2}\xi\right)}
\;,\end{equation}
when $\xi = \pi\chi\frac{d}{w} \ll 1$. Using
\beq{}\resizebox{0.85\columnwidth}{!}{$
\tanh{\left( \frac{(2n+1)}{2}\xi\right)} = \int_0^1 \del_{\lambda}\left( \tanh{\left( \frac{(2n+1)}{2}\xi\lambda\right)}\right)\dd{\lambda}
$}
\;,\end{equation}
we get
\beq{}
\begin{split}
\F & \cong \frac{\S}{w} + \frac{4P^2}{{\epsilon}_0\kappa_c\pi^2}\int_0^1\dd \lambda\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}\frac{1}{\cosh^2{\left( \frac{(2n+1)}{2}\xi\lambda\right)}}\\
&\approx \frac{\S}{w} + \frac{16P^2}{{\epsilon}_0\kappa_c\pi^2}\int_0^1\dd \lambda\sum_{n=0}^{\infty}\frac{e^{-(2n+1)\xi\lambda}}{(2n+1)^2}
\end{split}
\;.\end{equation}
From Ref. [\onlinecite{superlattice_domains_1}]:
\beq{}
\int_0^1\dd \lambda\sum_{n=0}^{\infty}\frac{e^{-(2n+1)\xi\lambda}}{(2n+1)^2} = \frac{\pi^2}{8}-\frac{\xi}{4}\ln{\left(\frac{e^p}{\xi}\right)}+\bigO(\xi^3)
\;,\end{equation}
where $p=\frac{1}{2}(3+\ln{(4)})$. Thus, our approximation to the energy becomes
\beq{}
\F \cong \frac{\S}{w}+\frac{P^2}{2{\epsilon}_0\kappa_c}+ \frac{P^2}{2{\epsilon}_0\kappa_c}\lb3-\frac{8}{\pi}\chi\frac{d}{w}\ln{\left(\Lambda\frac{w}{d}\right)}\right)
\;,\end{equation}
where
\beq{}
\Lambda = \frac{e^p}{\pi\chi}
\;.\end{equation}
The first term is the domain-wall energy, the second is the monodomain energy, and the third is an asymptotic correction. Minimizing with respect to $w$, we get
\beq{}
w(d) = \frac{\pi\chi}{2\sqrt{e}}d\exp{\left(\frac{\pi^2}{8}\frac{\kappa_c}{\chi}\b\frac{l_k}{d}\right)}
\;.\end{equation}
The equilibrium domain width is minimized at the thickness
\beq{}
d_{\text{m}} = \frac{\pi^2}{8}\frac{\kappa_c}{\chi}\b l_k
\;.\end{equation}
\section*{Appendix C: Analytic Approximation to the Domain Width}
We can obtain an analytic approximation to the equilibrium domain behavior if we replace the electrostatic energy with a simpler function which reproduces the monodomain and Kittel energies in the appropriate limits. For the IF system, we could use:
\beq{}
\F_{\text{e}}^* = \underbrace{\frac{P^2}{2{\epsilon}_0\kappa_c}}_{\F_{\text{mono}}}\frac{1}{1+\frac{1}{\kappa_c\b}\frac{d}{w}}
\;.\end{equation}
When $w/d$ is very large, the second term in the denominator goes to zero and we get $\F_{\text{e}}^* = \F_{\text{mono}}$. When $w/d$ is very small, the second term in the denominator dominates and we get $\F_{\text{e}}^* = \frac{P^2}{2{\epsilon}_0}\b\frac{w}{d} = \F_{\text{Kittel}}$. This approximation can also be used for the OL and SW systems, since the extension to these systems is simply achieved via $\b \to \b(\kappa_s)$. For the superlattice, the energy in the monodomain limit is different:
\beq{}
\F_{\text{mono,SL}} = \frac{1}{(1+\a)}\frac{P^2}{2{\epsilon}_0(\kappa_c+\a^{-1}\kappa_s)}
\end{equation}
The prefactor $(1+\a)^{-1}$ scales the energy according to the ratio of the layer thicknesses. Since the energy cost of creating a domain structure is scaled uniformly by this prefactor, the equilibrium domain width is unaffected by it, and we can neglect it. The monodomain energy for a SL is then similar to that of a thin film, but with a renormalized permittivity: $\kappa_c\to\kappa_c + \a^{-1}\kappa_s$. When $\a\to\infty$, the thin-film expressions are recovered, so we can work with the SL system and recover the other systems by taking $\a\to\infty$ together with the correct choice of $\b(\kappa_s)$.
The total energy for the SL system is
\beq{F_approx_SL}
\F_{\text{SL}}^* = \frac{\S}{w}+\frac{\F_{\text{mono,SL}}}{1+\frac{x}{w}}
\;,\end{equation}
where
\beq{}
x = \frac{d}{(\kappa_c+\a^{-1}\kappa_s)\b(\kappa_s)}
\;.\end{equation}
Minimizing Eq. \eqref{F_approx_SL}, we get
\beq{w}
w(d) = \frac{\sqrt{l_k(\kappa_s)d}}{1-(\kappa_c+\a^{-1}\kappa_s)\b(\kappa_s)\sqrt{\frac{l_k(\kappa_s)}{d}}}
\;.\end{equation}
Clearly, this expression has square-root behavior for large $d$ (the Kittel regime) and diverges for small $d$ (the monodomain regime). The width diverges at
\beq{}
d_{\infty} = (\kappa_c+\a^{-1}\kappa_s)^2\b(\kappa_s)^2l_k(\kappa_s)
\;,\end{equation}
and has a minimum at
\beq{}
\begin{split}
d_{\text{m}} &= 4(\kappa_c+\a^{-1}\kappa_s)^2\b(\kappa_s)^2l_k(\kappa_s) = 4d_{\infty}\\
& = 8{\epsilon}_0(\kappa_c+\a^{-1}\kappa_s)^2\b(\kappa_s)\S\frac{1}{P^2}
\end{split}
\;.\end{equation}
Interestingly, the relation $d_{\text{m}} = 4d_{\infty}$ is universal and independent of system-specific parameters.
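As a sanity check, the sketch below minimizes $\F_{\text{SL}}^*$ numerically at illustrative, hypothetical parameter values and verifies both the stationarity of Eq.~\eqref{w} and the relation $d_{\text{m}} = 4d_{\infty}$. The identification $l_k = 2{\epsilon}_0\S/(\b P^2)$ is inferred from the two expressions for $d_{\text{m}}$ above.

```python
import math

# Illustrative (hypothetical) values in reduced units.
S = 1.0                 # domain-wall energy density Sigma
P, eps0 = 1.0, 1.0
K = 5.0                 # kappa_c + a^{-1} kappa_s
beta = 0.3              # beta(kappa_s)

l_k = 2 * eps0 * S / (beta * P**2)   # Kittel length, inferred from d_m above
F_mono = P**2 / (2 * eps0 * K)       # monodomain energy, prefactor dropped

def F(w, d):
    """F*_SL of Eq. (F_approx_SL), with the (1+a)^{-1} prefactor dropped."""
    x = d / (K * beta)
    return S / w + F_mono / (1 + x / w)

def w_eq(d):
    """Analytic equilibrium width, Eq. (w)."""
    return math.sqrt(l_k * d) / (1 - K * beta * math.sqrt(l_k / d))

d_inf = K**2 * beta**2 * l_k

# 1) w_eq(d) is a stationary point of F(., d)  (central finite difference)
d = 10 * d_inf
w0 = w_eq(d)
h = 1e-6 * w0
assert abs((F(w0 + h, d) - F(w0 - h, d)) / (2 * h)) < 1e-8

# 2) the equilibrium width is minimized at d_m = 4 d_inf
ds = [d_inf * (1.01 + 0.001 * i) for i in range(10000)]
d_m = min(ds, key=w_eq)
assert abs(d_m - 4 * d_inf) < 0.05 * d_inf
```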
\section{Conclusion and Future Work}
As the use of machine learning becomes more pervasive all over the world, people speaking different languages will come to expect a seamless and customized experience in their own language. Building a language-independent model can accelerate the enablement of machine learning and cognitive solutions in new languages at a large scale. We demonstrate the power of this language-independent modeling approach through a series of experiments on multiple task types, language sets and data resources. Our annotated data for low-resource languages will be made publicly available. We hope that the insights gained from these experiments will help researchers and practitioners develop solutions and tools that enable better scalability, integration and operations in many other languages. In the future, we will continue to explore the effects of different combinations of languages with respect to various end tasks. In addition, we plan to extend our studies to more NLP tasks, and to investigate the feasibility of multi-task learning for building a task- and language-independent framework.
\section{Experiments}
The effectiveness of LIMs can be influenced by at least three factors: task type, language set and data resource. In this section, we empirically investigate the effects of these factors on the performance of LIMs.
\subsection{Factor Characterization}
\paragraph{Task Type} We explore whether LIMs are equally effective across different end tasks. For the scope of this paper, we consider sentence classification and sequence labeling as two of the most popular NLP tasks. In particular, we select and compare two representative tasks: Sentiment Analysis and Named Entity Recognition (NER). Sentiment Analysis represents a typical sentence classification task, while NER is a popular sequence labeling task.
\paragraph{Language Set} While theoretically an LIM can be trained using any language set, and be used to make predictions in any language, multilingual representations may not be equally effective across different languages~\cite{gerz2018relation}. For instance, it has been shown that a multilingual word embedding alignment between English and Chinese is much more difficult to learn than that between English and Spanish~\cite{conneau2017word}. We explore many different languages when training and testing LIMs.
\paragraph{Data Resource} For high-resource languages, the annotated data can be of different sizes; for low-resource languages, large amounts of data do not often exist~\cite{kasai2019low}. We explore the effects of different data sizes when training and testing LIMs.
\subsection{Case Study on Sentiment Analysis} \label{section:model}
We take Sentiment Analysis as a 3-class classification problem: given a sentence $s$ in a target language $T$, which consists of a series of words: $\{w_1, ... , w_m\}$, predict the sentiment polarity $y \in \{positive, neutral, negative\}$.
For this case study, we consider 7 high-resource languages: English, Spanish, Italian, Brazilian Portuguese, Dutch, Japanese and Chinese, covering both western and eastern languages. The high-resource training set consists of 770K data points --- 230K English, and 90K in each of the other 6 languages; the test sets contain both publicly available test data and high-quality in-house test data --- 630K English, 10K Spanish, 57K Japanese, 10K Chinese and 15K French. Meanwhile, we collect 5K data points in each of 5 languages: Danish, Swedish, Norwegian, Russian, and Turkish, which are considered low-resource languages in our experiments. For each low-resource language, we use 4K as the training set and 1K as the test set.
We randomly split off 1/10 of the training set as the development set for model selection and use the rest for model training (i.e., fine-tuning the parameters of Multilingual BERT and the sentence classification layer). Following the original BERT fine-tuning procedure~\cite{devlin2019bert}, we fine-tune multilingual BERT with the following parameter choices: (1) batch size: 16, 32; (2) learning rate: 5e-5, 3e-5, 2e-5; (3) number of epochs: 3, 4. The model with batch size 32, learning rate 2e-5 and 4 epochs was selected as the best model based on its performance on the development set. We denote the LIM for Sentiment Analysis trained with high-resource languages as \textbf{LIM-H}, and the LIM trained with the mix of high-resource and low-resource languages as \textbf{LIM-M}.
\subsubsection{Results on High-Resource Languages}
For high-resource languages, we compare \textbf{LIM-H} with the following methods:
\begin{itemize}
\item \textbf{CNN}~\cite{kim2014convolutional} is a convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We use this method to train monolingual Sentiment Analysis models as a baseline because of its popularity and simple implementation for reproducibility.
\item \textbf{ULMFiT}~\cite{howard2018universal} is a recent generative pretrained language model with task-specific fine-tuning. We follow ULMFiT by adopting discriminative fine-tuning and slanted triangular learning rates to stabilize the fine-tuning process and create monolingual Sentiment Analysis models.
\item \textbf{Monolingual-BERT}. We trained monolingual Sentiment Analysis models by fine-tuning BERT with monolingual datasets for every language, respectively. For example, a \textit{Chinese-only BERT model} refers to the BERT model fine-tuned using Chinese-only annotated data for Sentiment Analysis.
\end{itemize}
In Table~\ref{table:sentiment_1}, we report the accuracy results of Sentiment Analysis on English and Spanish across various models. On English, we obtain a significant boost in performance of 7.4\% over CNN and 3.2\% over ULMFiT. On Spanish, we outperform the previous methods by 4.5\% and 2.3\%, respectively.
\begin{table} [!htb]
\center
\begin{tabular}{l|ccc}
Language & CNN & ULMFiT & LIM-H \\
\hline
English & 72.1 & 76.3& \textbf{79.5} \\
Spanish & 69.4 & 71.6 & \textbf{73.9} \\
\end{tabular}
\caption{Accuracy results of Sentiment Analysis on English and Spanish across various models.} \label{table:sentiment_1}
\end{table}
Furthermore, Table~\ref{table:sentiment_2} shows that our method is able to compete with the monolingual BERT models on Sentiment Analysis. By leveraging data from non-native languages, our LIM outperforms the English-only BERT model by 1.8\% and the Japanese-only BERT model by 0.7\%, but falls behind the Chinese-only BERT model by 1.2\%. It should be noted that the Chinese-only BERT model was specifically pre-trained to account for the unique character tokenization of Chinese. It is therefore still very encouraging to see that our LIM is comparable to a specially customized monolingual BERT model.
\begin{table} [!htb]
\center
\begin{tabular}{l|cc}
Language & Monolingual-BERT & LIM-H \\
\hline
English & 77.7 & \textbf{79.5}\\
Japanese & 78.0 & \textbf{78.7}\\
Chinese & \textbf{74.5} & 73.3
\end{tabular}
\caption{Accuracy results of Sentiment Analysis on English, Japanese and Chinese between monolingual BERT and LIM-H.} \label{table:sentiment_2}
\end{table}
In Table~\ref{table:sentiment_3}, we evaluate the impact of LIM on Sentiment Analysis via zero-shot transfer learning. When we do not include any French annotated data for training, we can still obtain a significant improvement of 5.7\% over the monolingual CNN model trained using French annotated data.
\begin{table} [!htb]
\center
\begin{tabular}{l|cc}
Language & CNN & LIM-H \\
\hline
French & 54.0 & \textbf{59.7}
\end{tabular}
\caption{Accuracy results of Sentiment Analysis on French between CNN and LIM-H. This demonstrates a \textit{zero-shot transfer learning} case for LIM-H as it does not involve any French annotated data when training the model.} \label{table:sentiment_3}
\end{table}
\subsubsection{Results on Low-Resource Languages}
For low-resource languages, we compare both \textbf{LIM-H} and \textbf{LIM-M} in Table~\ref{table:sentiment_4}. LIM-H demonstrates the effects of zero-shot transfer learning on low-resource languages, with an average accuracy of 60\%. Since LIM-H uses no low-resource training data, this shows that an LIM can be used to address the \textit{cold-start} problem, where no initial model is available for a new target low-resource language and building such models from scratch is costly. Furthermore, LIM-M demonstrates how much improvement an LIM can gain by adding only a small amount of data in low-resource languages. In particular, by adding 4K annotated data points in each low-resource language, we obtain an average improvement of 11\%. This largely saves the cost and time of acquiring annotated data for a new target low-resource language by transferring the knowledge learned from the larger amount of annotated data available in high-resource languages.
\begin{table} [!htb]
\center
\begin{tabular}{l|cc}
Language & LIM-H & LIM-M\\
\hline
Danish & 62.5 & \textbf{69.2} \\
Swedish & 56.8 & \textbf{68.6} \\
Norwegian & 62.0 & \textbf{70.3} \\
Russian & 62.1 & \textbf{75.8} \\
Turkish & 56.8 & \textbf{69.1}
\end{tabular}
\caption{Accuracy results of Sentiment Analysis on low-resource languages. We compare the performance of zero-shot transfer learning with LIM-H (without any annotated data from the target languages) and low-resource transfer training with LIM-M (only 4K annotated data points from each target language were used in training).} \label{table:sentiment_4}
\end{table}
\subsection{Case Study on Named Entity Recognition}
Given a sentence $s$ in a target language $T$, which consists of a series of words: $\{w_1, ... , w_m\}$, NER outputs a sequence of labels $\{l_1, ... , l_m\}$ with respect to the named entity types $e \in $~\{\textit{Person, Location, Organization, Date, Time, JobTitle, Duration, Facility, GeographicFeature, Measure, Ordinal, Money}\}. This is much more fine-grained and complex than the traditional CoNLL NER task, which only considers 4 entity types~\cite{TjongKimSang:2002, TjongKimSang:2003}. We follow the \textit{Inside-outside-beginning (IOB2)} tagging format~\cite{ramshaw1999text}: a $B$-prefix means that the tag is the beginning of a chunk, an $I$-prefix indicates that the tag is inside a chunk, and an $O$ tag indicates that a token belongs to no chunk.
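As an illustration of the IOB2 scheme, the following minimal sketch (the sentence and the helper function are hypothetical, not part of our pipeline) converts token-level entity spans into IOB2 tags:

```python
def to_iob2(tokens, spans):
    """Convert (start, end, type) half-open token spans to IOB2 tags.

    Tokens outside any span receive the 'O' tag; the first token of a
    span gets a 'B-' prefix and subsequent tokens get 'I-' prefixes.
    """
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = "B-" + etype
        for i in range(start + 1, end):
            tags[i] = "I-" + etype
    return tags

tokens = ["Marie", "Curie", "visited", "Paris", "in", "1921"]
spans = [(0, 2, "Person"), (3, 4, "Location"), (5, 6, "Date")]
print(to_iob2(tokens, spans))
# → ['B-Person', 'I-Person', 'O', 'B-Location', 'O', 'B-Date']
```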
We build an LIM for NER with annotated data in 3 languages: French, Italian and German. The training set consists of 679K data points (148K in French, 470K in Italian and 61K in German). We randomly split off 1/10 of the training set as the development set for model selection and use the rest for model training (i.e., fine-tuning the parameters of Multilingual BERT and the sequence labeling layer). After fine-tuning with different parameters (described in Section~\ref{section:model}), the model with batch size 32, learning rate 2e-5 and 3 epochs was selected as the best model based on its performance on the development set.
\subsubsection{Compared Methods}
We compare \textbf{LIM} with the following methods:
\begin{itemize}
\item \textbf{BiLSTM+CRF}~\cite{lample2016neural} is a bidirectional LSTM with a sequential conditional random field above it. We use this method to train monolingual NER models as a baseline because it has been effective and widely used on sequence labeling tasks.
\item \textbf{FLAIR}~\cite{akbik2019flair} is one of the latest NLP frameworks that achieved state-of-the-art for sequence labeling tasks. It models words as sequence of characters and leverages contextual string embeddings produced from a trained character language model~\cite{akbik2018contextual}. We adopt the pre-trained multilingual FLAIR embedding to build multilingual NER models using the FLAIR framework.
\end{itemize}
\subsubsection{Results}
We evaluate the models on high quality in-house benchmark datasets for NER in various languages including French (3870 entities), Italian (3776 entities), and German (5023 entities)\footnote{We refer to the number of entities instead of data points as one data point can contain multiple entities.}.
First of all, we report the F-measure results of NER on French, Italian and German in Table~\ref{table:ner_1}. On French, we achieve a significant improvement in performance of 9.9\% over BiLSTM+CRF and 7.1\% over FLAIR. Similarly, on German, we outperform the previous methods by 6.1\% and 2.4\%, respectively. On Italian, our LIM approach is comparable to BiLSTM+CRF and outperforms FLAIR by 3.5\%.
\begin{table} [!htb]
\center
\begin{tabular}{l|ccc}
Language & BiLSTM+CRF & FLAIR & LIM \\
\hline
French & 68.0 & 70.8 & \textbf{77.9} \\
Italian & 71.5 & 68.0 & 71.5 \\
German & 64.5 & 68.2 & \textbf{70.6} \\
\end{tabular}
\caption{F-measure results of NER on French, Italian and German. The BiLSTM-CRF models were trained using monolingual data in each language respectively. The FLAIR and LIM models were trained using the concatenation of French, Italian and German annotated data.} \label{table:ner_1}
\end{table}
Secondly, we evaluate the effects of our LIM approach for zero-shot transfer learning on NER. We trained another FLAIR model and another LIM using only the concatenation of French and Italian annotated data, excluding German annotated data. Table~\ref{table:ner_2} shows that our LIM method is able to retain a performance of 58.6\% while FLAIR drops to 20.3\%. This demonstrates the power of our LIM method in accelerating the development of models for a new language where no annotated data is available.
\begin{table} [!htb]
\center
\begin{tabular}{l|cc}
Language & FLAIR & LIM \\
\hline
German & 20.3 & \textbf{58.6} \\
\end{tabular}
\caption{F-measure results of NER on German (zero-shot transfer learning). The FLAIR and LIM models were trained using the concatenation of French and Italian annotated data, while German annotated data was excluded.} \label{table:ner_2}
\end{table}
\subsection{Discussion}
\paragraph{Task Type} While the results demonstrate the effectiveness of LIMs on the two most representative NLP tasks, we found that LIMs are generally more effective on sentence classification tasks than on sequence labeling tasks, particularly for zero-shot transfer learning. For example, LIM outperforms the corresponding baseline on Sentiment Analysis (Table~\ref{table:sentiment_3}), but falls behind the corresponding baselines on NER (Tables~\ref{table:ner_1} and~\ref{table:ner_2}), when no annotated data from the target language was used in model training.
\paragraph{Language Set} Powered by the multilingual representations learned in pre-trained BERT, LIMs seem more suitable for typologically similar languages. For instance, the LIM-H is not as good as the model trained using Chinese-only BERT on Sentiment Analysis, though the difference is relatively small (Table~\ref{table:sentiment_2}). This is consistent with the findings from multilingual representation learning using word embeddings~\cite{conneau2017word}.
\paragraph{Data Resource} Language-independent models are not only suitable for high-resource languages, but also very effective in low-resource languages. In particular, adding a relatively small amount of low-resource training data can result in a significant improvement of performance (Table~\ref{table:sentiment_4}).
\paragraph{Implications} These insights bring unique value to the development and customization of natural language understanding models and solutions in new languages. First of all, an LIM can be used to solve the cold-start problem, where no initial model is available for a new target language and building such models from scratch is costly. Secondly, it largely saves the cost and time of acquiring annotated data for a new target language by reusing data already annotated in previously supported languages. Thirdly, it simplifies the deployment process of a new model and saves the effort of simultaneously maintaining multiple monolingual models in a production setting.
\section{Introduction}
In today's globalized world, companies need to be able to understand and analyze what is being said out there, about them, their products, services, or their competitors, regardless of the human language used. Many organizations have spent tremendous resources to develop cognitive applications and services for dealing with customers in different countries. For example, cognitive systems may use machine learning techniques to process input messages or statements to determine their meaning and to provide associated confidence scores based on knowledge acquired by the cognitive system. Typically, the use of such cognitive systems requires training individual natural language understanding models in a specific human language. For example, a tone analyzer model can be built to predict tones from English conversations~\cite{liu2018voice}, but such a model would not work effectively with other languages. While translation techniques can be applied to translate data from an existing language to another language, human translation is labor-intensive and time-consuming, and machine translation can be costly and unreliable. As a result, attempts to scale existing applications to multiple human languages have traditionally proven to be difficult, mainly due to the language-dependent nature of the preprocessing and feature engineering techniques employed in traditional approaches~\cite{akkiraju2018characterizing}.
In this work, we empirically investigate the feasibility of multilingual representations to build \textit{language-independent models}, which can be trained with data from multiple \textit{source languages} and then serve multiple \textit{target languages} (target languages can be different from source languages). We explore this question using a unified language model \textit{Multilingual BERT}~\cite{devlin2019bert}, which is pre-trained on the combination of monolingual Wikipedia corpora from 104 languages. Through a series of experiments on multiple task types, language sets and data resources, we contribute empirical findings of how factors affect language-independent models:
\begin{itemize}
\item \textbf{Task Type.} We analyze and compare language-independent models on the two most representative NLP task types: sentence classification and sequence labeling. On both tasks, we show that language-independent models can be comparable to, or even outperform, models trained using monolingual data. Language-independent models are generally more effective on sentence classification.
\item \textbf{Language Set.} Theoretically language-independent models can be trained using any language set, and be used to make predictions in any language. Through training and testing language-independent models with many different languages, we show that they are more suitable for typologically similar languages.
\item \textbf{Data Resource.} We explore the effects of different data sizes when training language-independent models. We demonstrate that language-independent models are not only suitable for high-resource languages, but also very effective in low-resource languages.
\end{itemize}
We derive insights from our experiments to facilitate the development and customization of natural language understanding models and solutions in new languages. First of all, this approach can be used to solve the \textit{cold-start} problem, where no initial model is available for a new target language and building such models from scratch is costly. Secondly, it largely saves the cost and time of acquiring annotated data for a new target language by reusing data already annotated in previously supported languages. Thirdly, it simplifies the deployment process of a new model and saves the effort of simultaneously maintaining multiple monolingual models in a production setting. Our annotated data for low-resource languages will be made publicly available.
\section{Language-Independent Model}
In this section, we describe the motivation of language-independent models, and how to create such models via multilingual representation learning and fine-tuning.
\subsection{One Model, Many Languages}
To scale our efforts to support the diversity of people in the world, it is important to build and customize machine learning models for many different languages in various NLP tasks. For each target language, however, this often requires going through the whole lifecycle of data collection, data cleansing, data annotation, data storage, feature creation and selection, machine learning model training, model validation, benchmarking and deployment of these models as services in production~\cite{akkiraju2018characterizing}. This easily becomes overwhelming as the number of target languages increases. To address this problem, we advocate building one model for all target languages together, which we call a \textit{Language-Independent Model (LIM)}, as the target languages to serve in production do not necessarily depend on which source languages were used in training. Figure~\ref{figure:lim} shows a conceptual example: an LIM can be trained using annotated data from source languages such as English (EN) and French (FR), and then serve target languages including Spanish (ES), Italian (IT) and Japanese (JA), which are different from the source languages. This not only accelerates the enablement of a new language by reusing data already annotated in previously supported languages, but also simplifies the deployment process and saves the effort of maintaining multiple monolingual models in production.
\begin{figure}
\includegraphics[width=\linewidth]{figures/LIM.pdf}
\caption{A conceptual example of a Language-Independent Model (LIM). The target languages to serve in production do not necessarily depend on which source languages were used in training. For instance, an LIM can be trained using annotated data from source languages such as English (EN) and French (FR), and then serve target languages including Spanish (ES), Italian (IT), Japanese (JA) and so on.} \label{figure:lim}
\end{figure}
\subsection{Multilingual Representation Learning with BERT}
The basis for building LIMs lies in learning a representation that can feature multiple languages. Among the recent significant advances in deep contextualized representation learning for natural language understanding, BERT~\cite{devlin2019bert} stands out as its pre-training process naturally supports multilingual representation learning. Specifically, multilingual BERT was pre-trained on the Wikipedia pages (excluding user and talk pages) of 104 languages with a 110K shared WordPiece~\cite{wu2016google} vocabulary. It is a 12-layer, 768-hidden, 12-head transformer model~\cite{vaswani2017attention} with 110M parameters. To alleviate the bias towards high-resource languages such as English, data from high-resource languages were under-sampled and those from low-resource languages were over-sampled. The pre-training of multilingual BERT does not use any marker denoting the input language, and does not rely on parallel corpus to explicitly encourage translation-equivalent pairs to have similar representations.
\subsection{Fine-Tuning Multilingual BERT for End Tasks}
The multilingual representations learned with BERT can be generalized for many natural language understanding tasks such as Sentiment Analysis, Named Entity Recognition, Categorization, and so on (as illustrated in Figure~\ref{figure:bert}). The input representation of multilingual BERT is a sequence of tokens in any language, which may be a single sentence or two sentences packed together. The input representation of each token is constructed as the sum of the corresponding token, segment, and position embeddings. For sentence classification tasks, the first token of each sequence is a special classification embedding ([CLS]) and its final hidden state will be used as the aggregate representation of the whole sequence. For sequence labeling tasks, the final hidden state of each token will encode its contextualized representation with respect to the whole sequence. To fine-tune multilingual BERT, a classification layer is added on top of the final representation layer, and the probabilities of all label classes are computed with a standard softmax. The parameters of multilingual BERT and the classification layer are fine-tuned jointly to maximize the log-probability of the correct label. The labeled data of end tasks are shuffled across different languages when fine-tuning multilingual BERT.
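The classification step described above can be sketched in miniature. The toy snippet below applies a linear classification layer and a standard softmax to a pooled representation; all numbers are illustrative assumptions (the real model applies this to the 768-dimensional final hidden state of the [CLS] token).

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

h = [0.2, -1.0, 0.5, 0.3]                # hypothetical pooled [CLS] representation
W = [[0.1, 0.4, -0.2, 0.0],              # weights for class "positive"
     [0.3, -0.1, 0.2, 0.1],              # weights for class "neutral"
     [-0.2, 0.0, 0.1, -0.3]]             # weights for class "negative"
b = [0.0, 0.1, -0.1]

# Classification layer: logits = W h + b, then softmax over the 3 classes.
logits = [sum(w_i * h_i for w_i, h_i in zip(row, h)) + b_j
          for row, b_j in zip(W, b)]
probs = softmax(logits)
labels = ["positive", "neutral", "negative"]
prediction = labels[max(range(len(probs)), key=probs.__getitem__)]
assert abs(sum(probs) - 1.0) < 1e-12
# prediction == "neutral" for these illustrative numbers
```

During fine-tuning, the cross-entropy of `probs` against the gold label would be back-propagated through both the classification layer and the BERT encoder.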
\begin{figure}
\includegraphics[width=\linewidth]{figures/bert.pdf}
\caption{An illustration of generalized multilingual representation learning for different NLP tasks.} \label{figure:bert}
\end{figure}
\section{Related Work}
Multilingual representation learning has been an active area of research, starting from word embedding alignment, which uses small dictionaries to align word representations from different languages~\cite{mikolov2013exploiting}. Research by~\cite{faruqui2014improving} demonstrated that multilingual representations can be leveraged to improve the quality of monolingual representations. An unsupervised learning method was proposed by~\cite{conneau2017word} to align multilingual word embeddings without parallel data. In addition to word embedding alignment, aligning sentence representations from multiple languages has also been studied in machine translation, for both supervised learning~\cite{johnson2017google, artetxe2018massively} and unsupervised learning~\cite{lample2017unsupervised, artetxe2017unsupervised}. However, most of these approaches focus on pairwise multilingual representation learning. In this work, we empirically investigate the impact of multilingual representations learned from a large number of languages on tasks that involve more languages than a single language pair.
Our work builds on top of recent advances in pre-trained language modeling. ELMo~\cite{peters2018deep} extracts context-sensitive features from a bidirectional LSTM language model and provides additional features for a task-specific architecture. ULMFiT~\cite{howard2018universal} advocates discriminative fine-tuning and slanted triangular learning rates to stabilize the fine-tuning process with respect to end tasks. OpenAI GPT~\cite{radford2018improving} builds on multi-layer transformer~\cite{vaswani2017attention} decoders instead of LSTM to achieve effective transfer while requiring minimal changes to the model architecture. Recently, BERT~\cite{devlin2019bert} uses bidirectional transformer encoders to pre-train a large corpus, and fine-tunes the pre-trained model that requires almost no specific architecture for each end task.
In this work, we leverage the multilingual representations learned from multilingual BERT~\cite{devlin2019bert} to build models that can scale to many languages.
1907.10721
\section{Introduction}
The geometric phase is a phase factor acquired by a quantum system during adiabatic cyclic evolution. In 1984, M. V. Berry systematically discussed the geometric phase in non-degenerate quantum systems, where such a $\textrm{U(1)}$ Abelian geometric phase (Berry phase) appears as a phase factor on the non-degenerate states\cite{berry1984quantal}. F. Wilczek and A. Zee generalized Berry's work to degenerate quantum systems and showed that a non-Abelian geometric phase can be obtained\cite{PhysRevLett.52.2111}. Geometric phases have been studied in a broad range of physical systems and have both fundamental significance and practical applications. In condensed matter physics, the geometric phase and the corresponding gauge structure of the Bloch bands are closely related to the quantum Hall effect\cite{ye1999berry,bruno2004topological,haldane2004berry,zhang2005experimental}. In quantum computing, the non-Abelian geometric phase can be used to construct non-Abelian holonomic gates\cite{pachos1999non}, which are the foundation of robust holonomic quantum computing. There have been many experimental realizations of non-Abelian geometric gates in different quantum systems, such as trapped ions\cite{duan2001geometric,leibfried2003experimental}, superconducting qubits\cite{abdumalikov2013experimental,zhang2015fast,yan2019experimental}, and nitrogen vacancy (NV) centers\cite{zu2014experimental}.
In the study of cold atoms in optical lattices, a geometric phase and the related Berry curvature of the Bloch band have been investigated\cite{atala2013direct,aidelsburger2015measuring,flaschner2016experimental} and are closely related to the study of the topology of the Bloch bands. In continuous quantum gases, the effects caused by the non-Abelian geometric phase have also been studied in a $^{87}\textrm{Rb}$ Bose-Einstein condensate (BEC), where the cyclic evolution of the atomic system was driven by slowly varying microwave and radio-frequency (RF) fields\cite{bharath2018singular,sugawa2018second}. Using a resonant tripod scheme, the non-Abelian adiabatic geometric transformation in the dark state manifold has also been realized in a metastable neon atom system\cite{theuer1999novel} and a cold strontium gas system\cite{leroux2018non}.
To obtain the non-Abelian geometric gauge transformation and non-Abelian gauge potentials, a quantum system with degenerate energy levels is necessary. The degenerate multi-level system in the study of continuous quantum gases is usually introduced by considering a multipod scheme\cite{ruseckas2005non,dalibard2011colloquium} or a system with a special symmetry property\cite{sugawa2018second,campbell2011realistic}. Recent theoretical work on the Floquet analysis of periodically driven systems shows that a periodically driven Hamiltonian can make quantum levels within the same Floquet band degenerate within the adiabatic approximation, so that one can realize non-Abelian geometric phase effects in a periodically driven system\cite{PhysRevA.95.023615,PhysRevA.100.012127}.
In this work we propose a practical experimental protocol for generating an $\textrm{SU}(2)$ non-Abelian geometric gauge transformation by a periodically driven Raman process and for observing the $\textrm{SU}(2)$ geometric phase in a pseudo-spin-1/2 system in the ground state manifold of a non-interacting ultracold Bose gas. Here $\textrm{SU(2)}$ is the group of rotations of a spin-1/2 system, and such a geometric phase results in a spin rotation in our system. Our analysis is based on recent theoretical works\cite{PhysRevA.95.023615,PhysRevA.100.012127}, in which the authors applied Floquet theory to a system consisting of a spin interacting with a periodically driven magnetic field. They showed that when the oscillating magnetic field vector slowly changes direction, a non-Abelian geometric phase appears. We build on this theoretical result by considering a pseudo-spin system interacting with a synthetic magnetic field generated by an optical Raman coupling, and propose an experimental protocol that may be realized practically. Our simulation shows that it is possible to measure the non-Abelian geometric phase using parameter sets that lie within the capabilities of existing cold atom experiments. Furthermore, we show that with our protocol one can observe the non-Abelian geometric phase even in the presence of undesired parameter fluctuations.
Our pseudo-spin-1/2 system consists of two Zeeman sublevels in the hyperfine ground state manifold of an alkali atom. The periodically driven Raman process is realized by simultaneously applying the product of a low frequency and a high frequency periodic signal to the bias magnetic field, the Raman laser intensities, and the relative phase between the Raman lasers. Our simulation of the time-dependent Schr\"{o}dinger equation (TDSE) shows that an $\textrm{SU}(2)$ geometric phase can be observed and that the evolution operators which generate the geometric phase are non-Abelian, i.e. $[U_1,U_2]\ne0$, where $U_1$ and $U_2$ are different unitary transformation operators caused by different geometric phases. Although we use an $^{87}\textrm{Rb}$ system as an example, this protocol can be used in other atomic systems, both bosonic and fermionic, and has the potential to become a robust quantum control method.
\section{Non-Abelian geometric phase in a periodically driven system}
The Floquet analysis of periodically driven quantum systems has been well studied. We consider a system based on the periodically driven systems studied in two recent papers\cite{PhysRevA.95.023615,PhysRevA.100.012127}. Consider a spin-1/2 system whose Hamiltonian takes the form
\begin{equation}
H(t,\omega t+\theta)=\tilde{H}(t)f(\omega t+\theta), \nonumber
\end{equation}
where $\tilde{H}(t)=\tilde{H}[\vec{\lambda}(t)]$ is a Hamiltonian that depends on a set of slowly varying parameters $\vec{\lambda}(t)=\{\lambda_{\mu}\}$ ($\mu=1,2,3,...$), and $f(\omega t+\theta)$ is a periodic function with driving frequency $\omega$ (period $T=2\pi/\omega$) and phase offset $\theta$. For simplicity, we only consider the harmonic driving case, where $f(\omega t+\theta)=\cos(\omega t+\theta)$. The evolution of the state follows the Schr\"{o}dinger equation
\begin{equation}\label{Schrodinger}
i\hbar\partial_{t}|\psi(t)\rangle=H(t,\omega t+\theta)|\psi(t)\rangle.
\end{equation}
We can transform the system into a Floquet basis by introducing a micromotion operator $R(t,\omega t+\theta)=\exp\{iS(\omega t+\theta)\}$, where $S(\omega t+\theta)=\tilde{H}(t)\sin(\omega t+\theta)/\hbar\omega$ is the kick operator. The micromotion operator transforms the system from the physical basis into the Floquet basis.
Furthermore, we let $\tilde{H}(t)$ take the form
\begin{equation}\label{SU2 H form}
\tilde{H}=\hbar\Omega_0\hat{r}(t)\cdot\hat{\sigma},
\end{equation}
where $\hat{\sigma}=(\sigma_x,\sigma_y,\sigma_z)^{T}$ is the vector of Pauli operators, and $\hat{r}(t)=\hat{r}[\vec{\lambda}(t)]$ is a unit vector parameterized by $\vec{\lambda}(t)$. If we set $\Omega_{0}$ as constant, then the Hamiltonian $\tilde{H}$ describes a spin-1/2 system subject to a magnetic field whose direction is slowly changing. The Hamiltonian in the Floquet basis takes the form
\begin{eqnarray}
H_{F}&&=R^{\dagger}(t,\theta')H(t,\theta')R(t,\theta')-i\hbar R^{\dagger}(t,\theta')\partial_{t}R(t,\theta') \nonumber \\
&&=-\dot{\lambda}_{\mu}(i\hbar R^{\dagger}(t,\theta')\partial_{\mu}R(t,\theta')) \\
&&=\dot{\lambda}_{\mu}A_{\mu}(\vec{\lambda},\theta'), \nonumber
\end{eqnarray}
where we defined $\theta'=\omega t+\theta$, $\partial_{\mu}=\partial/\partial\lambda_{\mu}$ for fixed $\theta'$, and $A_{\mu}(\vec{\lambda},\theta')=-i\hbar R^{\dagger}(t,\theta')\partial_{\mu}R(t,\theta')$ is the gauge potential in the parameter space.
According to Floquet theory, the Hamiltonian in the Floquet basis can be written as a Fourier expansion of the form
\begin{equation}
H_{F}=\sum_{l}H_{F}^{(l)}e^{il\theta'}; l=0,\pm 1,\pm 2,..., \nonumber
\end{equation}
where $H_{F}^{(l)}=\frac{1}{2\pi}\int_{0}^{2\pi}H_{F}e^{-il\theta'}d\theta'$ is the $l$th Fourier component of $H_{F}$. If we assume the amplitudes of the matrix elements of these Fourier components are much smaller than the driving frequency, i.e.
\begin{equation}\label{adiabatic condition}
|\langle \alpha|H_{F}^{(l)}|\beta\rangle|\ll\hbar\omega,
\end{equation}
then this adiabatic condition allows us to neglect transitions between different Floquet bands and the evolution of the system can be approximated by the evolution within a single Floquet band\cite{PhysRevA.100.012127}. Furthermore, we can restrict our attention to the zeroth ($l=0$) Floquet band, since the states in the $l\ne0$ Floquet bands will evolve like their corresponding states in the $l=0$ band except for an additional $\textrm{U}(1)$ global phase factor $\phi_{global}=l\omega t$. Thus within the adiabatic approximation, the Hamiltonian in the Floquet basis is well approximated by the zeroth order Fourier component and can be written as
\begin{equation}\label{zeroth order effective Hamiltonian}
H_{F}^{(0)}=\dot{\lambda}_{\mu}A_{\mu}^{(0)},
\end{equation}
where $A_{\mu}^{(0)}$ is the zeroth order gauge potential $A_{\mu}^{(0)}(\vec{\lambda})=\frac{1}{2\pi}\int_{0}^{2\pi}A_{\mu}(\vec{\lambda},\theta')d\theta'$.
The zeroth order component of the Hamiltonian in the Floquet basis does not depend on the fast driving and thus can be regarded as the effective Hamiltonian of the spin-1/2 system in the Floquet basis. It takes the explicit form\cite{PhysRevA.100.012127}
\begin{eqnarray}\label{effective Hamiltonian}
H_{eff}&&=\frac{\hbar(J_0(a)-1)}{2}\vec{r}\times\dot{\vec{r}}\cdot\vec{\sigma} \nonumber \\
&&=\dot{\lambda}_{\mu}\hbar g\epsilon_{ijk}r_i\partial_{\mu}r_j\sigma_k \\
&&=\dot{\lambda}_{\mu}A_{\mu}^{(0)},\nonumber
\end{eqnarray}
where $J_{0}(a)$ is the zeroth order Bessel function of the first kind, $a=|2\Omega_{0}(t)|/\omega$, $\epsilon_{ijk}$ is the Levi-Civita tensor (here $i$,$j$, and $k$ stand for $x$, $y$, and $z$), $g=\frac{1}{2}(J_{0}(a)-1)$, and we have used $\partial_{t}=\dot{\lambda}_{\mu}\partial_{\mu}$ with fixed $\theta'$. Note that the zeroth order $\textrm{SU}(2)$ gauge potential is $A_{\mu}^{(0)}=\hbar g\hat{n}\cdot\hat{\sigma}$, where $A_{\mu}^{(0)k}=\hbar g\epsilon_{ijk}r_i\partial_{\mu}r_j$ and $n_{k}=\epsilon_{ijk}r_{i}\partial_{\mu}r_{j}$.
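As a numerical sanity check on this zeroth order gauge potential, one can average $A_{\mu}(\vec{\lambda},\theta')$ over one fast period and compare the result with $\hbar g\hat{n}\cdot\hat{\sigma}$. The sketch below is an illustrative Python script (not part of the original work): it sets $\hbar=1$, uses the drive parameters quoted later in the simulation section, and compares magnitudes only, since the overall sign of the coefficient depends on the micromotion-operator convention.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import j0

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega0, omega = 2*np.pi*258.3e3, 2*np.pi*500e3   # values from the simulation section
a = 2*Omega0/omega
Theta, Phi = 0.7, 0.3        # an arbitrary point on the parameter sphere

def R(Th, thp):
    """Micromotion operator R = exp{i H~ sin(theta')/omega} with hbar = 1."""
    r = np.array([-np.sin(Th)*np.cos(Phi), -np.sin(Th)*np.sin(Phi), -np.cos(Th)])
    S = (Omega0/omega)*np.sin(thp)*(r[0]*sx + r[1]*sy + r[2]*sz)
    return expm(1j*S)

# A_Theta(theta') = -i R^dag (dR/dTheta), averaged over one fast period
dTh = 1e-6
thps = np.linspace(0, 2*np.pi, 400, endpoint=False)
A0 = sum(-1j*R(Theta, th).conj().T @ (R(Theta + dTh, th) - R(Theta - dTh, th))/(2*dTh)
         for th in thps)/len(thps)

g = 0.5*(j0(a) - 1)
n_sigma = -np.sin(Phi)*sx + np.cos(Phi)*sy   # n-hat . sigma at this (Theta, Phi)
coef = np.real(np.trace(A0 @ n_sigma))/2     # component of A0 along n-hat . sigma
print(abs(coef), abs(g))                     # the two magnitudes agree
```

The residual $A_{\Theta}^{(0)}-c\,\hat{n}\cdot\hat{\sigma}$ also vanishes numerically, confirming that the averaged gauge potential points along $\hat{n}$.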
The state in Floquet space is defined as $|\chi(t)\rangle=R^{\dagger}(t,\theta')|\psi(t)\rangle$, and the time evolution of the system in the Floquet space under adiabatic approximation can be written as
\begin{equation}
|\chi(t)\rangle=U_{eff}(t,t_0)|\chi(t_0)\rangle,
\end{equation}
or in the original basis
\begin{equation}
|\psi(t)\rangle=R(t,\theta')U_{eff}(t,t_0)R^{\dagger}(t_0,\theta'_0)|\psi(t_0)\rangle, \nonumber
\end{equation}
where $U_{eff}=\mathcal{T}\exp\{-\frac{i}{\hbar}\int_{t_0}^{t}H_{eff}dt'\}$ is the time evolution operator under the adiabatic approximation in the Floquet basis, $\mathcal{T}$ stands for time-ordering, and $\theta'_0=\omega t_0+\theta$.
Using Eqn.(\ref{effective Hamiltonian}), we can change the time evolution operator $U_{eff}$ from its time-ordered form into the form of a path-ordered $\textrm{SU}(2)$ unitary transformation operator $U_{eff}=\mathcal{P}\exp\{-\frac{i}{\hbar}\int_{\vec{\lambda}(t_0)}^{\vec{\lambda}(t)}A_{\mu}^{(0)}d\lambda_{\mu}\}$, where $\mathcal{P}$ stands for path-ordering. The form of $U_{eff}$ shows that the evolution of the state only depends on the path that the parameter $\vec{\lambda}$ takes from its initial value $\vec{\lambda}(t_0)$ to its final value $\vec{\lambda}(t)$, and does not depend on its rate of change. If the parameter $\vec{\lambda}$ has cyclic time dependence, then the path-ordered unitary transformation operator only depends on the geometry of the closed loop that the parameter follows. In this case, the path-ordered operator takes the form $U_{c}=\mathcal{P}\exp\{-\frac{i}{\hbar}\oint A_{\mu}d\lambda_{\mu}\}$, and the system gains an $\textrm{SU}(2)$ geometric phase (Wilczek-Zee phase).
Unlike the $\textrm{U}(1)$ Berry phase, which is a commuting phase factor, the non-Abelian geometric phase can cause population transfer between two eigenstates, and two non-Abelian geometric phase factors associated with different closed loops in the parameter space do not necessarily commute. Usually, to observe an adiabatic non-Abelian geometric phase, the system needs to be degenerate so that all states acquire the same dynamical phase; otherwise the dynamical phase makes the geometric phase hard to detect. Therefore many studies of non-Abelian geometric phases are done within a degenerate dark state manifold. However, the physical system defined by $\tilde{H}$ in this work is not required to be degenerate. The periodic driving $f(\omega t+\theta)$ on the Hamiltonian $\tilde{H}$ introduces a Floquet band structure to the system, and the energy levels within the same Floquet band become degenerate under the adiabatic approximation\cite{PhysRevA.100.012127,PhysRevA.95.023615}. Since we set our parameters in the adiabatic regime, the system stays in the same Floquet band, and the cyclic evolution within this band results in a non-Abelian geometric phase.
\section{Periodically driven Hamiltonian of the atomic system}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{levels.jpg}
\end{center}
\caption{(a) Level diagram of the Raman process that we consider. We choose $|1\rangle=|F=1,m_F=-1\rangle$ and $|2\rangle=|F=1,m_F=1\rangle$ in the $5^{2}S_{\frac{1}{2}}, F=1$ ground state manifold as our pseudo-spin-1/2 system. The ground states are coupled by two Raman lasers $a$ and $b$ with $\sigma^{+}$ and $\sigma^{-}$ polarizations, respectively. The excited states in the $5^{2}P_{\frac{1}{2}}, F=1$ and $F=2$ manifolds that couple to the ground states are $|3\rangle=|F=1,m_F=0\rangle$, $|4\rangle=|F=2,m_F=0\rangle$, $|5\rangle=|F=2,m_F=-2\rangle$ and $|6\rangle=|F=2,m_F=2\rangle$. $\Delta_{i}$, $i=1,2,3,4$ are one-photon detunings, and $\delta$ is the two-photon detuning. (b) The experimental setup we consider: two Raman lasers merge to form a single beam before they interact with the atoms.}\label{level diagram}
\end{figure}
In this section we consider strategies for creating the periodic Hamiltonian discussed in the previous section using a Raman process in the ground state manifold of a $^{87}\textrm{Rb}$ atom. The Raman process has a variety of applications in the study of ultracold atoms, including quantum state manipulation, generating artificial gauge potentials and spin-orbit coupling, and creating topological defects\cite{wright2008raman,schultz2014raman,lin2011spin,goldman2014light,schultz2016creating,schultz2016raman}. We consider $|1\rangle=|F=1,m_F=-1\rangle$ and $|2\rangle=|F=1,m_F=1\rangle$ in the $5^{2}S_{\frac{1}{2}}, F=1$ ground state manifold of $^{87}\textrm{Rb}$ to be our pseudo-spin-1/2 system, as shown in Fig.\ref{level diagram}(a). The Raman process is realized by applying two co-propagating circularly polarized lasers to the ultracold atoms, which are subject to a weak bias magnetic field oriented along the beam axis ($z$-axis)\cite{wright2008raman,schultz2014raman,schultz2016creating,schultz2016raman}.
Since the Raman lasers that we consider couple our pseudo-spin-1/2 states in the $5^{2}S_{\frac{1}{2}}, F=1$ manifold to the excited states in the $5^{2}P_{\frac{1}{2}}, F=1$ and $F=2$ hyperfine levels, the level diagram that we use in our calculation is a $W$-type rather than a $\Lambda$-type scheme\cite{wright2008raman}, see Fig.\ref{level diagram}(a).
We start with the dipole interaction Hamiltonian $H_0=H_a-\vec{d}\cdot\vec{E}$, where $H_a$ is the Hamiltonian of the atom in the presence of a bias magnetic field, $\vec{d}$ is the atomic dipole moment, and $\vec{E}$ is the laser electric field, taking the form $\vec{E}=\vec{E}_ae^{-i\omega_a t}+\vec{E}_be^{-i\omega_b t}+\textrm{c.c.}$, where $\omega_a$ and $\omega_b$ are laser frequencies. To solve the problem we can go to a rotating frame defined by the gauge transformation operator $U=\textrm{diag}\{e^{i\alpha_1},e^{i\alpha_2},e^{i\alpha_3},e^{i\alpha_4},e^{i\alpha_5},e^{i\alpha_6}\}$, where we define
\begin{align}
&\alpha_1=(\omega_1-\delta/2)t,&& \alpha_2=(\omega_2+\delta/2)t, \nonumber \\
&\alpha_3=(\omega_3-\Delta_1)t,&& \alpha_4=(\omega_4-\Delta_2)t, \nonumber \\
&\alpha_5=(\omega_5-\Delta_3)t,&& \alpha_6=(\omega_6-\Delta_4)t; \nonumber
\end{align}
$\hbar\omega_i$ ($i=1,2,3,4,5,6$) is the energy of state $|i\rangle$ in the bias magnetic field; and we also define the one-photon detuning $\Delta_i$ ($i=1,2,3,4$) and two-photon detuning $\delta$ as
\begin{eqnarray}
&&2\pi\delta=\omega_a-\omega_b+\omega_1-\omega_2 \nonumber \\
&&2\pi\Delta_1=\omega_3-\frac{\omega_1+\omega_2}{2}-\frac{\omega_a+\omega_b}{2} \nonumber \\
&&2\pi\Delta_2=\omega_4-\frac{\omega_1+\omega_2}{2}-\frac{\omega_a+\omega_b}{2} \\
&&2\pi\Delta_3=\omega_5-\frac{\omega_1+\omega_2}{2}-\frac{\omega_a+\omega_b}{2} \nonumber \\
&&2\pi\Delta_4=\omega_6-\frac{\omega_1+\omega_2}{2}-\frac{\omega_a+\omega_b}{2}. \nonumber
\end{eqnarray}
In the far-detuned regime, where the one-photon detunings are much larger than the decay rate of the excited states, we can adiabatically eliminate the excited states and obtain the effective two-level Hamiltonian\cite{wright2008raman,schultz2016raman,schultz2014raman,schultz2016creating}
\begin{equation}\label{W hamiltonian}
W=-\hbar\left( {\begin{array}{cc}
\xi_{11}+\frac{\delta}{2} & \eta_{12}e^{-i\phi} \\
\eta_{12}e^{i\phi} & \xi_{22}-\frac{\delta}{2}
\end{array} } \right),
\end{equation}
where the matrix elements are defined as
\begin{eqnarray}
\xi_{11}&&=\frac{|\Omega_{a13}|^2}{\Delta_1}+\frac{|\Omega_{a14}|^2}{\Delta_2}+\frac{|\Omega_{b15}|^2}{\Delta_3} \nonumber \\
\xi_{22}&&=\frac{|\Omega_{b23}|^2}{\Delta_1}+\frac{|\Omega_{b24}|^2}{\Delta_2}+\frac{|\Omega_{a26}|^2}{\Delta_4} \nonumber \\
\eta_{12}&&=\frac{|\Omega_{a13}\Omega_{b23}|}{\Delta_1}+\frac{|\Omega_{a14}\Omega_{b24}|}{\Delta_2} \nonumber.
\end{eqnarray}
Here $\Omega_{\rho ij}$ is the Rabi frequency, which takes the form $\Omega_{\rho ij}=-d_{D1}E_{\rho}C_{ij}/\hbar$, with $\rho=a,b$, $i=1,2$, and $j=3,4,5,6$. $d_{D1}$ is the effective dipole moment of the $D_{1}$ transitions, $E_{\rho}$ is the electric field, $C_{ij}$ is the Clebsch-Gordan coefficient between states $|i\rangle$ and $|j\rangle$, and $\phi=\phi_{b}-\phi_{a}$ is the relative phase between the two Raman lasers.
Our goal is to construct the periodically driven Hamiltonian with a high frequency driving signal $H=\tilde{H}f(\omega t+\theta)$, or in the harmonic driving case, $H=\tilde{H}\cos{\omega t}$, where $\theta$ is taken to be zero for simplicity. Notice that we can rewrite the effective two-level Hamiltonian Eqn.(\ref{W hamiltonian}) as
\begin{eqnarray}\label{SU(2) form of W}
W=&&-\frac{\hbar\left[\delta+(\xi_{11}-\xi_{22})\right]}{2}\sigma_z-\hbar\eta_{12}\cos{\phi}\sigma_x-\hbar\eta_{12}\sin{\phi}\sigma_y \nonumber \\
&&-\frac{\hbar(\xi_{11}+\xi_{22})}{2}\mathbb{1},
\end{eqnarray}
where $\mathbb{1}$ is the $2\times 2$ identity matrix. Since we are in the far-detuned regime, the single-photon detunings are much larger than the two-photon detuning, the Rabi frequencies, and the Zeeman splitting between different magnetic sublevels. If we set the Rabi frequencies to satisfy $|\Omega_{a13}|=|\Omega_{b23}|$, $|\Omega_{a14}|=|\Omega_{b24}|$, and $|\Omega_{a26}|=|\Omega_{b15}|$, with a bias magnetic field (e.g. 5G), then $\xi_{11}$ and $\xi_{22}$ will be approximately equal. Therefore we can further reduce the effective two-level Hamiltonian and write it as
\begin{equation}
W\approx-\hbar(\frac{\delta}{2}\sigma_z+\eta_{12}\cos{\phi}\sigma_x+\eta_{12}\sin{\phi}\sigma_y), \nonumber
\end{equation}
where we ignored the last term in Eqn.(\ref{SU(2) form of W}) since it only contributes a $\textrm{U}(1)$ global phase factor during the evolution.
To create the harmonic driving of the Hamiltonian, we can modulate the bias magnetic field and $\eta_{12}$ as $\cos{\omega t}$. In this case, the effective two-level Hamiltonian $W$ can be regarded as the desired driven Hamiltonian $H$, and takes the form
\begin{equation}
H=-\hbar(\frac{\tilde{\delta}}{2}\sigma_z+\tilde{\eta}_{12}\cos{\phi}\sigma_x+\tilde{\eta}_{12}\sin{\phi}\sigma_y)\cos{\omega t}, \nonumber
\end{equation}
where $\tilde{\delta}\cos{\omega t}=\delta$ and $\tilde{\eta}_{12}\cos{\omega t}=\eta_{12}$. Now let $\tilde{\delta}=2\Omega_0\cos{\Theta(t)}$ and $\tilde{\eta}_{12}=\Omega_0\sin{\Theta(t)}$, where $\Theta(t)$ is the slowly varying parameter. This can be achieved by modulating both Raman laser amplitudes as $\sqrt{|\sin{\Theta(t)}\cos{\omega t}|}$ and switching the relative phase $\phi=\phi_b-\phi_a$ between the two lasers from $\phi=\Phi$ to $\phi=\pi+\Phi$ whenever $\sin{\Theta(t)}\cos{\omega t}$ changes sign, where $\Phi$ is a parameter that does not depend on the fast periodic driving. The periodic Hamiltonian can finally be written in the desired form
\begin{equation}\label{effective periodic Hamiltonian}
H(t)=\hbar\Omega_0\hat{r}\cdot\hat{\sigma}\cos{\omega t},
\end{equation}
with $\hat{r}(\vec{\lambda})=(-\sin{\Theta}\cos{\Phi},-\sin{\Theta}\sin{\Phi},-\cos{\Theta})^{T}$. If we fix $\Phi$ and slowly drive $\Theta$ in a cyclic manner, namely $\Theta=\Omega t$, where $\Omega\ll\omega$, we will obtain the desired Hamiltonian (Eqn.(\ref{SU2 H form})) that leads to an $\textrm{SU}(2)$ non-Abelian geometric phase. Note that we take $\vec{\lambda}=\{\Theta,\Phi\}$ as the set of coordinates on a unit 2-sphere with a time dependent polar angle $\Theta=\Omega t$ and a fixed azimuthal angle $\Phi$. This Hamiltonian describes a pseudo-spin-1/2 system in a rotating synthetic magnetic field whose magnitude is modulated.
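The equivalence between the $\sigma$-decomposed form of $W$ above and $\hbar\Omega_0\hat{r}\cdot\hat{\sigma}\cos{\omega t}$ can be verified directly. The following is an illustrative Python check (not part of the original work; $\hbar=1$, and the value of $\Omega_0$ is only an example):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega0 = 2*np.pi*258.3e3                  # example coupling strength, hbar = 1
rng = np.random.default_rng(1)
max_dev = 0.0
for Th, Ph, wt in rng.uniform(0, 2*np.pi, (200, 3)):
    delta_t = 2*Omega0*np.cos(Th)         # tilde-delta = 2 Omega0 cos(Theta)
    eta_t = Omega0*np.sin(Th)             # tilde-eta_12 = Omega0 sin(Theta)
    # W-form of the Hamiltonian (identity part dropped), evaluated at phase wt
    H_W = -(0.5*delta_t*sz + eta_t*np.cos(Ph)*sx + eta_t*np.sin(Ph)*sy)*np.cos(wt)
    # r-hat form of the same Hamiltonian
    r = np.array([-np.sin(Th)*np.cos(Ph), -np.sin(Th)*np.sin(Ph), -np.cos(Th)])
    H_r = Omega0*(r[0]*sx + r[1]*sy + r[2]*sz)*np.cos(wt)
    max_dev = max(max_dev, np.abs(H_W - H_r).max())
print(max_dev)      # zero up to floating-point rounding
```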
\section{A practical experimental protocol and simulation results}
\subsection{The geometric phase}
After realizing the effective two-level periodic Hamiltonian in Eqn.(\ref{effective periodic Hamiltonian}), the dynamics of the system in the rotating frame follow from the Schr\"{o}dinger equation (Eqn.(\ref{Schrodinger})). Using Eqn.(\ref{effective Hamiltonian}), we find the zeroth Fourier component of the effective Hamiltonian in the Floquet basis to be
\begin{equation}
H_{eff}=\hbar\Omega g\hat{n}\cdot\hat{\sigma},
\end{equation}
where $\hat{n}=(-\sin{\Phi},\cos{\Phi},0)^T$, and $g=\frac{1}{2}(J_{0}(a)-1)$ with $a=2\Omega_0/\omega$. We assume $\Theta=\Omega t$ and $\Phi=\textrm{const.}$, so that $\dot{\Theta}=\Omega$ and $\dot{\Phi}=0$. Therefore, the zeroth order $\textrm{SU}(2)$ gauge potential takes the form
\begin{equation}
A_{\Theta}^{(0)}=-\hbar g(\sin{\Phi}\sigma_x-\cos{\Phi}\sigma_y).
\end{equation}
We can write the $\textrm{SU}(2)$ transformation operator as
\begin{equation}\label{evolution operator}
U(t,t_0)=\exp\{ig(\sin{\Phi}\sigma_x-\cos{\Phi}\sigma_y)\left[\Theta(t)-\Theta(t_0)\right]\}.
\end{equation}
For cyclic evolution, we get the geometric phase $\gamma=2m\pi g$, $m=\pm1,\pm2,\pm3,...$, and the $\textrm{SU}(2)$ transformation operator can be written as
\begin{equation}\label{cyclic transformation operator}
U_c=\exp\{-i\gamma\vec{n}\cdot\vec{\sigma}\}=\exp\{i2m\pi g(\sin{\Phi}\sigma_x-\cos{\Phi}\sigma_y)\},
\end{equation}
where $m$ is an integer. In the case we consider, $m$ is negative due to our choice of states.
We use a fourth-order finite-difference method to solve the TDSE (Eqn.(\ref{Schrodinger})). The TDSE describes the dynamics in the rotating frame, but the evolution operator that generates the geometric phase is defined in the Floquet basis. Notice that the micromotion operator that transforms the rotating basis into the Floquet basis takes the form $R=\exp\{i\tilde{H}(t)\sin{\omega t}/\hbar\omega\}$, and it reduces to the identity operator at the end of each cycle of the fast driving, namely when $\sin{\omega T_q}=0$, with $T_q=2q\pi/\omega$ ($q=0,1,2,3,...$). Thus if we prepare the system in an eigenstate of $\tilde{H}(t_0)$ at the initial time $t_0$ and turn on the Raman laser and periodic driving abruptly, the system will start evolving in the Floquet basis. If we measure the system at the end of each fast driving cycle, the rotating basis will already be aligned with the Floquet basis, so we obtain a direct measurement in Floquet space.
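A stripped-down version of such a simulation can be sketched as follows (illustrative Python, not the authors' code; $\hbar=1$, a standard fourth-order Runge-Kutta step in place of the authors' finite-difference scheme, $\Phi=0$, and the parameter values quoted in the simulation results). Starting from $|1\rangle$ and sampling the populations at the end of each fast driving cycle, the adiabatic theory predicts a Rabi-like oscillation of period $T_{geo}\approx80\mu s$, i.e. near-complete transfer to $|2\rangle$ around $40\mu s$ and a revival of $|1\rangle$ around $80\mu s$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega0 = 2*np.pi*258.3e3    # synthetic-field magnitude
Omega = 2*np.pi*50e3        # slow driving frequency
omega = 2*np.pi*500e3       # fast driving frequency
Phi = 0.0

def H(t):
    """H(t) = Omega0 (r-hat . sigma) cos(omega t), hbar = 1."""
    Th = Omega*t
    r = np.array([-np.sin(Th)*np.cos(Phi), -np.sin(Th)*np.sin(Phi), -np.cos(Th)])
    return Omega0*(r[0]*sx + r[1]*sy + r[2]*sz)*np.cos(omega*t)

def rk4_step(psi, t, dt):
    f = lambda tt, y: -1j*(H(tt) @ y)
    k1 = f(t, psi)
    k2 = f(t + dt/2, psi + dt/2*k1)
    k3 = f(t + dt/2, psi + dt/2*k2)
    k4 = f(t + dt, psi + dt*k3)
    return psi + dt/6*(k1 + 2*k2 + 2*k3 + k4)

dt = 2e-9
psi = np.array([1, 0], dtype=complex)   # start in |1>, an eigenstate of H~(0)
pops = {}
t = 0.0
for step in range(40000):               # integrate out to 80 us
    psi = rk4_step(psi, t, dt)
    t += dt
    if (step + 1) % 1000 == 0:          # end of each fast cycle (every 2 us)
        pops[round(t*1e6)] = abs(psi[1])**2
print(pops[40], pops[80])               # ~1 near 40 us, ~0 near 80 us
```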
\subsection{Experimental setup}
To experimentally realize the setup we consider, two co-propagating Raman lasers along the $z$-axis with left and right circular polarizations are needed, see Fig.\ref{level diagram}(b). Unlike the usual Raman process, where the Raman laser intensities are time-independent or have only slow time dependence, here we need to modulate both the laser intensities and the relative phase between the two Raman lasers with a periodic function that is the product of a low frequency and a high frequency periodic signal. Meanwhile, the magnitude of the bias magnetic field generated by a Helmholtz coil also needs to be modulated to provide the periodic two-photon detuning $\delta$. The driving signal of the bias magnetic field is $B=B_0+\Delta B\cos{\Omega t}\cos{\omega t}$, and the Raman laser intensities are proportional to $|\sin{\Omega t}\cos{\omega t}|$, see Fig.\ref{driving profiles}(a,b). The relative phase between the two Raman lasers follows the function $\frac{\pi}{2}[1-\textrm{sgn}(\sin{\Omega t}\cos{\omega t})]$, where $\textrm{sgn}(x)$ is the signum function, as shown in Fig.\ref{driving profiles}(c).
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{driving-signal.jpg}
\end{center}
\caption{From top to bottom: (a) driving signal of the magnetic field, (b) laser intensity, and (c) relative phase between lasers. The driving parameters are $\Omega=2\pi\times50\textrm{kHz}$, $\omega=2\pi\times500\textrm{kHz}$. Here we only show the driving signals for the first $10\mu s$.}\label{driving profiles}
\end{figure}
To realize the desired periodically driven signals of the parameters, we can first use an arbitrary waveform generator (AWG) to generate the modulation signals. Then we can send the signals to acousto-optical modulators (AOMs) to drive the intensity of the Raman lasers and to an electro-optical modulator (EOM) to drive the relative phase between the Raman lasers. The parameters that we choose in our simulation are $\Omega=2\pi\times50\textrm{kHz}$ and $\omega=2\pi\times500\textrm{kHz}$ with the beam waist $w=300\mu m$, which will not push commercial AOMs beyond their limits. Finally, the modulation of the bias magnetic field can be realized by sending the modulation signal from the AWG to an audio power amplifier, and using it to drive the Helmholtz coil that generates the bias magnetic field.
\subsection{Preparation of the states and projection measurements}
Having described a potential experimental setup, we now discuss the preparation of the initial states and how to perform projection measurements of the desired quantum state. Our theoretical framework in this paper is based on a single atom, which assumes the ultracold Bose gas is dilute enough that the interaction between atoms is negligible. In this work, we consider an ultracold dilute $^{87}\textrm{Rb}$ Bose gas and focus on the $5^{2}S_{\frac{1}{2}}, F=1$ ground state manifold.
We produce a Bose-Einstein condensate in the $|1\rangle\equiv|F=1,m_F=-1\rangle$ state and then must transfer the population to the desired initial state. There are many ways to control the system and achieve this state preparation. In our laboratory, we have developed a reliable Raman waveplate method to achieve state rotations on the Bloch sphere, and we can use a Raman waveplate pulse to rotate the states and measure the atomic Stokes parameters\cite{schultz2016creating,hansen2016singular,schultz2014raman}. The waveplate pulse couples states $|1\rangle$ and $|2\rangle$ and rotates the system to the initial state $|\psi(0)\rangle$, which is a superposition of $|1\rangle$ and $|2\rangle$. Other than the Raman waveplate, there are also other ways to transfer the atoms into the desired initial state, such as using a radio-frequency pulse sequence.
We can use a Stern-Gerlach time-of-flight (TOF) imaging method to measure the populations in different states. To obtain phase information from the system, we need to rotate the system into the eigenbases of the $x$ and $y$ axes. This can be achieved by any high-fidelity $\pi/2$ rotation around the $x$ and $y$ axes, e.g. Raman waveplate pulses. Generally, if we ignore the undetectable global phase, the state of the system can be written as $|\psi\rangle=c_1|1\rangle+c_2e^{i\beta}|2\rangle$, where the coefficients $c_1$ and $c_2$ are real, and $\beta$ is the relative phase. The atomic Stokes parameters are defined as
\begin{eqnarray}
&&S_1=2c_1c_2\cos{\beta} \nonumber\\
&&S_2=2c_1c_2\sin{\beta} \\
&&S_3=c_1^2-c_2^2, \nonumber
\end{eqnarray}
and they can be understood as projection measurements on the $x$, $y$ and $z$-axes of the Bloch sphere. The Stern-Gerlach TOF can be regarded as a measurement of $S_3$, while to measure the other two atomic Stokes parameters $S_1$ and $S_2$ we need to apply a $\frac{\pi}{2}$-waveplate pulse to rotate the detection axis to the $x$ and $y$-axes\cite{hansen2016singular}. From the atomic Stokes parameters we are able to extract both the population and phase information of the state.
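Inverting these definitions to recover the state from measured Stokes parameters is straightforward. For concreteness (an illustrative Python snippet, not part of the original work, using the example values quoted in the next subsection):

```python
import numpy as np

S1, S2, S3 = 0.998, 0.06315, -0.0014    # example measured Stokes parameters
c1 = np.sqrt((1 + S3)/2)                # from S3 = c1^2 - c2^2 and c1^2 + c2^2 = 1
c2 = np.sqrt((1 - S3)/2)
beta = np.arctan2(S2, S1)               # from S1 = 2 c1 c2 cos(beta), S2 = 2 c1 c2 sin(beta)
print(c1**2, c2**2, beta)               # 0.4993, 0.5007, ~0.063 rad
```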
The Stern-Gerlach TOF imaging happens after we turn the Raman lasers off, so we are performing the measurement in a Zeeman basis defined solely by the bias magnetic field. However, in our calculations we work in a rotating frame. Thus our final state, which is stationary in the rotating frame, will acquire an extra phase factor between the $|1\rangle$ and $|2\rangle$ states of $\alpha=\alpha_2-\alpha_1=(\omega_a-\omega_b)t$ in the Zeeman basis. Since in our experimental setup, the laser frequencies are fixed and shifted by AOMs, we are able to record the frequency difference between Raman lasers. Therefore, we can calculate the extra phase difference at any time when the Raman lasers are on and eliminate the extra phase factor in data processing.
\subsection{Simulation results}
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{geo-Rabi.jpg}
\end{center}
\caption{Simulation results of an $\textrm{SU}(2)$ transformation, where circles, squares, and triangles are simulation results from solving the TDSE in the rotating frame, and dashed lines are analytical curves calculated in the Floquet basis under the adiabatic approximation. Upper plot: population transfer, where blue and red represent $|1\rangle$ and $|2\rangle$, respectively. Lower plot: evolution of the atomic Stokes parameters, where blue, red, and black represent $S_1$, $S_2$, and $S_3$, respectively. Here we use the parameters: maximum laser powers $P_a=P_b=271.7\mu W$, beam waist $w=300\mu m$, magnitude of the time-varying part of the bias magnetic field $\Delta B=0.368G$, and average bias magnetic field $B_0=5G$, which result in $\Omega_0=2\pi\times258.3\textrm{kHz}$. The driving frequencies are $\Omega=2\pi\times50\textrm{kHz}$ and $\omega=2\pi\times500\textrm{kHz}$. With the above parameters, the geometric phase results in a Rabi-like oscillation with a period $T_{geo}\approx80\mu s$. }\label{geometric phase}
\end{figure}
By numerically solving the TDSE (Eqn.(\ref{Schrodinger})), we obtain the evolution of the system in the rotating frame by extracting the points at the end of each fast driving cycle. We also analytically calculate the evolution of the state subjected to the $\textrm{SU}(2)$ unitary transformation given by Eqn.(\ref{evolution operator}) in the Floquet basis under the adiabatic approximation. As shown in Fig.\ref{geometric phase}, the simulation results match the analytical calculations well. In the upper plot, blue circles and red squares are the simulated populations of states $|1\rangle$ and $|2\rangle$, respectively. The initial state is $|\psi(0)\rangle=|1\rangle$. Dashed lines are results of the analytical calculation for population transfer. In the lower plot, blue circles, red squares, and black triangles are simulation results for the atomic Stokes parameters $S_1$, $S_2$, and $S_3$, respectively. Dashed lines are the analytical predictions. The parameters we use are: maximum laser powers $P_a=P_b=271.7\mu W$, beam waist $w=300\mu m$, magnitude of the time-varying part of the bias magnetic field $\Delta B=0.368G$, and average bias magnetic field $B_0=5G$, which result in $\Omega_0=2\pi\times258.3\textrm{kHz}$. The driving frequencies are $\Omega=2\pi\times50\textrm{kHz}$ and $\omega=2\pi\times500\textrm{kHz}$. With the above parameters, $g=-0.1248$, and the geometric phase produces a Rabi-like oscillation with a period $T_{geo}\approx80\mu s$.
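The quoted values of $g$ and $T_{geo}$ follow directly from the formulas above: each slow cycle $\Theta\to\Theta+2\pi$ rotates the Bloch vector by an angle $4\pi|g|$ about $\hat{n}$, so the population revival time is $T_{geo}=\pi/(|g|\Omega)$. A quick check (illustrative Python, not part of the original work):

```python
import numpy as np
from scipy.special import j0

Omega0 = 2*np.pi*258.3e3        # from the quoted laser powers and bias field
Omega = 2*np.pi*50e3            # slow driving frequency
omega = 2*np.pi*500e3           # fast driving frequency

a = 2*Omega0/omega              # dimensionless drive amplitude
g = 0.5*(j0(a) - 1)             # geometric coupling constant
T_geo = np.pi/(abs(g)*Omega)    # period of the Rabi-like oscillation
print(g, T_geo)                 # g ~ -0.1248, T_geo ~ 80 us
```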
Using the atomic Stokes parameter values at the end of the slow driving cycle, we can calculate the $\textrm{SU}(2)$ transformation operator $U_c$. Take $t=20\mu s$ as an example. The atomic Stokes parameters take values $S_1=0.998$, $S_2=0.06315$, and $S_3=-0.0014$, which give $|c_1|^2=0.4993$, $|c_2|^2=0.5007$, and $\beta=0.063\textrm{rad}$. Therefore, the $\textrm{SU}(2)$ transformation operator at $t=20\mu s$ is calculated to be $U_c=0.7066\mathbb{1}-i(0.0445\sigma_x+0.7062\sigma_y)$, which matches the analytical prediction $U_{c}^{theory}=(\mathbb{1}-i\sigma_y)/\sqrt{2}$ that describes a $\pi/2$ rotation around the $y$-axis on the Bloch sphere.
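The reconstruction of $U_c$ from the measured Stokes values can be sketched as follows. The Stokes conventions ($S_3=|c_1|^2-|c_2|^2$, $S_1+iS_2=2c_1c_2^*$) and the assumed operator form $U_c=\cos\vartheta\,\mathbb{1}-i\sin\vartheta(n_x\sigma_x+n_y\sigma_y)$ are our own illustrative assumptions, chosen so that the numbers quoted in the text are reproduced:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Measured atomic Stokes parameters at t = 20 us (values from the text)
S1, S2, S3 = 0.998, 0.06315, -0.0014

# Assumed conventions: S3 = |c1|^2 - |c2|^2, S1 + i S2 = 2 c1 c2*
p1 = (1 + S3) / 2                    # |c1|^2, should be ~0.4993
p2 = (1 - S3) / 2                    # |c2|^2, should be ~0.5007
beta = np.arctan2(S2, S1)            # relative phase, ~0.063 rad

# Final state, with the global phase fixed so that c1 is real and positive
c1 = np.sqrt(p1)
c2 = np.sqrt(p2) * np.exp(-1j * beta)

# For U_c = cos(v) 1 - i sin(v)(nx sx + ny sy) acting on |1>, the first
# column is (cos v, sin(v)(ny - i nx)); read off the generator components:
a = -np.imag(c2)                     # sin(v) * nx
b = np.real(c2)                      # sin(v) * ny
Uc = c1 * I2 - 1j * (a * sx + b * sy)

# Compare with the ideal pi/2 rotation around y: (1 - i sy)/sqrt(2)
U_ideal = (I2 - 1j * sy) / np.sqrt(2)
overlap = abs(np.trace(U_ideal.conj().T @ Uc)) / 2   # operator fidelity
```

With these values the reconstructed operator is $U_c\approx 0.7066\,\mathbb{1}-i(0.045\sigma_x+0.706\sigma_y)$, in agreement with the numbers quoted above.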
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{non-Abelian.jpg}
\end{center}
\caption{Non-Abelian property of the evolution operators. From top to bottom: evolution of the atomic Stokes parameters $S_1$, $S_2$ and $S_3$, respectively. The evolution operators $U_1$ and $U_2$ are obtained using the same parameters as in Fig.\ref{geometric phase}, with the relative phase between the Raman lasers set to $\Phi_1=0$ and $\Phi_2=\pi/2$, respectively. The duration of each evolution operator is $\tau=20\mu s$. Different colors represent the results for different orderings of the operators, blue: $U_2U_1$ and red: $U_1U_2$. Blue circles and red squares represent simulation results from the TDSE, while dashed lines are theoretical predictions. The different final values of the atomic Stokes parameters show that $[U_1,U_2]\ne0$, which proves that the geometric phase we obtain is non-Abelian.} \label{non-Abelian}
\end{figure}
After solving for the $\textrm{SU}(2)$ transformation operator, the next step is to prove the non-Abelian property of the geometric gauge transformations. We consider two $\textrm{SU}(2)$ transformation operators
\begin{eqnarray}
U_1&&=\exp\{i2\pi g\sigma_y\}\approx\frac{1}{\sqrt{2}}(\mathbb{1}-i\sigma_y), \nonumber\\
U_2&&=\exp\{-i2\pi g\sigma_x\}\approx\frac{1}{\sqrt{2}}(\mathbb{1}+i\sigma_x), \nonumber
\end{eqnarray}
which are constructed by turning on the periodically driven Hamiltonian for $t=20\mu s$ and setting the relative phase parameter to be $\Phi_1=0$ and $\Phi_2=\pi/2$, respectively. All the other parameters are the same as what we used in Fig.\ref{geometric phase}. After constructing the $\textrm{SU}(2)$ transformation operators, we apply them to the initial state $|\psi(0)\rangle=|1\rangle$, one after another in different orders $U_2U_1$ and $U_1U_2$. As shown in Fig.\ref{non-Abelian}, the difference in the atomic Stokes parameters at the final time shows that the $\textrm{SU}(2)$ transformation operators that we construct do not commute, $[U_1,U_2]\ne0$, which verifies the non-Abelian property of the geometric phase.
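The non-commutativity check is straightforward to reproduce numerically. A short sketch, using the Pauli identity $e^{i\varphi\,\hat n\cdot\hat\sigma}=\cos\varphi\,\mathbb{1}+i\sin\varphi\,\hat n\cdot\hat\sigma$ and the value $g=-0.1248$ from the text:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g = -0.1248                      # geometric coupling from the text
theta = 2 * np.pi * g

# exp(i phi n.sigma) = cos(phi) 1 + i sin(phi) n.sigma for a unit vector n
U1 = np.cos(theta) * I2 + 1j * np.sin(theta) * sy   # exp(+i 2 pi g sy)
U2 = np.cos(theta) * I2 - 1j * np.sin(theta) * sx   # exp(-i 2 pi g sx)

# Apply to |1> in both orders
psi0 = np.array([1, 0], dtype=complex)
psi_21 = U2 @ U1 @ psi0
psi_12 = U1 @ U2 @ psi0

comm_norm = np.linalg.norm(U1 @ U2 - U2 @ U1)       # Frobenius norm of [U1,U2]
```

Since $2\pi g\approx-0.784\approx-\pi/4$, the two operators are close to $(\mathbb{1}-i\sigma_y)/\sqrt{2}$ and $(\mathbb{1}+i\sigma_x)/\sqrt{2}$, and the commutator norm evaluates to roughly $1.4$, far from zero.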
The geometric phase that we obtain over one low frequency cycle depends on the parameters we choose, as long as the adiabatic condition Eqn.(\ref{adiabatic condition}) is satisfied. Therefore, we can tune the geometric phase over a broad range of values simply by changing parameters such as the Raman laser intensities, the modulation amplitude of the magnetic field, and the periods of the low and high frequency driving. In both simulation results, we see slight differences between the TDSE solutions and the analytical predictions. These are the joint effect of the non-zero $\frac{1}{2}(\xi_{11}-\xi_{22})$ term in Eqn.(\ref{SU(2) form of W}) and the quadratic Zeeman effect, which was ignored in the construction of the effective two-level Hamiltonian of the Raman process. The non-zero $\xi_{11}-\xi_{22}$ term introduces an additional term proportional to $|\sin{\Theta(t)}\cos{\omega t}|\sigma_z$ into the effective two-level Hamiltonian. The quadratic Zeeman shift introduces an additional contribution to the two-photon detuning that is proportional to $(\cos{\Omega t}\cos{\omega t})^2$, which brings in additional zeroth and second harmonic terms. The zeroth harmonics from both terms can be canceled by shifting the laser frequencies, but the higher harmonic terms add further terms to the effective Hamiltonian and affect the dynamics of the system. However, such effects do not alter our results significantly because we work with a weak bias magnetic field: under the conditions that we consider in our simulations, the amplitude of the additional higher harmonic terms is $2\pi\times1.88\textrm{kHz}$, whereas $\Omega_0=2\pi\times258.3\textrm{kHz}$. Therefore, the amplitudes of both the $\frac{1}{2}(\xi_{11}-\xi_{22})$ term and the quadratic Zeeman shift are much smaller than the amplitude of the two-photon detuning $\delta$, so both terms are negligible.
\subsection{Robustness against parameter fluctuations}
\begin{figure}
\begin{center}
\includegraphics[scale=0.2]{noise.jpg}
\end{center}
\caption{Fidelity of the geometric gauge transformation versus the number of fluctuations. Each point is calculated by averaging five runs with the same number of fluctuations for 26 different initial states; the error bars are the standard deviations of the fidelities calculated from the 130 runs for each number of fluctuations. The ideal $\textrm{SU}(2)$ transformation operator is $U_{c}^{ideal}=(\mathbb{1}-i\sigma_y)/\sqrt{2}$. The standard deviations of the random Gaussian noise for the bias magnetic field and the laser powers are $5\%$ of their amplitudes. For the relative phase, the standard deviation of the Gaussian noise is $0.01\pi$. All fluctuations are distributed evenly over the pulse duration.} \label{noise}
\end{figure}
After showing the non-Abelian property of the geometric phase, it is natural to ask whether such a geometric phase is sufficiently robust against parameter fluctuations to be observable in the laboratory. We introduce random Gaussian noise into the magnitude of the bias magnetic field, the Raman laser powers, and the relative phase between the two Raman lasers. The ideal $\textrm{SU}(2)$ transformation operator takes the form $U_{c}^{ideal}=(\mathbb{1}-i\sigma_y)/\sqrt{2}$ and causes a $\pi/2$ rotation around the $y$ axis. With the parameters considered in Fig.\ref{geometric phase}, the pulse duration is $20\mu s$. We denote the operator with noise as $U_c^{noise}$. The standard deviations of the magnetic field noise and the laser power noise are $5\%$ of their amplitudes, while for the relative phase $\phi$, the standard deviation of the Gaussian noise is $0.01\pi$. We then start from the same initial state $|\psi_0\rangle$, vary the number of fluctuations that are evenly distributed over the $20\mu s$, and calculate the fidelity $f\equiv|\langle \psi_{ideal}|\psi_{noise}\rangle|^2$, where $|\psi_{noise}\rangle=U_c^{noise}|\psi_0\rangle$ and $|\psi_{ideal}\rangle=U_{c}^{ideal}|\psi_0\rangle$. The results are shown in Fig.\ref{noise}, where each point is the average fidelity calculated from 26 different initial states evenly distributed on the Bloch sphere with 5 runs for each state, and the error bars are the standard deviations calculated from the $5\times26=130$ data sets for each number of fluctuations. The average fidelity is always above $0.9$, which shows that the noisy operators remain robust against random fluctuations for the different initial states that we consider. However, the error bars are much smaller for numbers of fluctuations above $200$, which correspond to high frequency fluctuations (above $10\textrm{MHz}$), than for lower frequency fluctuations (below $10\textrm{MHz}$).
This is because low frequency fluctuations at the $5\%$ level can greatly deform the contour that the rotating synthetic magnetic field traces, and therefore have a greater influence on the resulting geometric phase. For higher frequency noise, the average fidelity is above $98\%$ and the error bars become much smaller than for the low frequency points, which indicates that the deformation of the contour in parameter space is averaged out and the geometric phase becomes robust against high frequency random fluctuations.
The fluctuations we consider here are all above $1\textrm{MHz}$. Since our pulse duration is relatively short, lower frequency fluctuations of up to several kilohertz, such as mechanical noise, can be regarded as small-amplitude long-term drifts for our problem, which do not degrade the observation of our effect significantly. The high fidelity shown in Fig.\ref{noise} demonstrates that the non-Abelian geometric phase induced by the periodically driven Raman process is robust enough to be observed, and therefore has the potential to be another method to control the quantum state of a cold atom system.
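The averaging behind Fig.\ref{noise} can be illustrated with a deliberately simplified toy model. Here only the rotation angle of the ideal gate is perturbed segment by segment (rather than running the full noisy TDSE with all three fluctuating parameters), so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot_y(phi):
    """Rotation exp(-i phi sy / 2) about the y axis."""
    return np.cos(phi / 2) * I2 - 1j * np.sin(phi / 2) * sy

def noisy_gate(n_fluct, sigma=0.05):
    """pi/2 y-rotation split into n_fluct segments, each with 5% Gaussian
    amplitude noise -- a toy stand-in for the fluctuating parameters."""
    U = I2
    for _ in range(n_fluct):
        seg = (np.pi / 2) / n_fluct
        U = rot_y(seg * (1 + sigma * rng.normal())) @ U
    return U

def random_state():
    """Random qubit state from a normalized complex Gaussian vector."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

U_ideal = rot_y(np.pi / 2)
fids = []
for _ in range(130):                  # 26 states x 5 runs, as in the text
    psi0 = random_state()
    f = abs(np.vdot(U_ideal @ psi0, noisy_gate(200) @ psi0)) ** 2
    fids.append(f)
mean_fid = np.mean(fids)
```

In this toy model the segment errors average out as the number of fluctuations grows, reproducing the qualitative trend of Fig.\ref{noise}: many fast fluctuations leave the gate fidelity high.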
\section{Conclusion and discussion}
In this paper we have proposed a possible realization of a periodically driven Hamiltonian through periodic driving of a Raman process in the hyperfine ground state manifold of an alkali atom. Our simulation results show that a non-Abelian geometric phase can be observed in such a system. By measuring the atomic Stokes parameters, we are able to reconstruct the $\textrm{SU}(2)$ transformation operator of the cyclic evolution in Floquet space. The non-commuting property of two different $\textrm{SU}(2)$ transformation operators corresponding to different geometric phase factors proves the non-Abelian property of this geometric phase. For simplicity, we set only one of our parameters, $\Theta$, as time dependent. In fact, the other parameter that we consider, $\Phi$, can also be time dependent, as long as the parameters form a closed loop. Building on the general theory of a spin interacting with a periodically driven magnetic field\cite{PhysRevA.100.012127,PhysRevA.95.023615}, our work extends the realization of the non-Abelian geometric phase to a pseudo-spin system with Raman coupling. We also proposed a practical experimental implementation using a dilute ultracold atomic gas interacting with Raman lasers, and we verified that the non-Abelian geometric phase effect can be observed in the laboratory even in the presence of parameter fluctuations.
Due to the oscillating magnetic field, an additional quadratic Zeeman term appears in the Hamiltonian\cite{gan2018oscillating}. However, as we discussed, the effect of the quadratic Zeeman shift can be ignored when the field is sufficiently weak. Another issue is heating. Since we use a Raman process with large single-photon detuning, the dynamics of the system is confined to the ground state manifold, so heating caused by spontaneous emission from the excited states is negligible. Excitation between different Floquet bands is also suppressed if the parameters satisfy the adiabatic condition Eqn.(\ref{adiabatic condition}). In addition, our protocol uses co-propagating Raman lasers, so there is no momentum transfer in the periodically driven Raman process, and the Floquet heating effect discussed in \cite{li2019floquet} is suppressed as well. Finally, since the duration of the evolution is much less than the typical decoherence time of ultracold Bose gases, we can ignore decoherence effects and use pure state descriptions of the system in our calculations. More generally, if the duration of the geometric phase pulse becomes comparable to the decoherence time, one needs to take decoherence into consideration and use density matrix methods instead.
Our simulation results show that by introducing periodically driven interactions, one can promote the Abelian physical system to a non-Abelian one under the adiabatic approximation\cite{PhysRevA.100.012127,PhysRevA.95.023615}, and that the associated geometric phase is sufficiently robust against parameter fluctuations and thus detectable in the laboratory. To obtain the non-Abelian geometric phase and the non-Abelian gauge potential, one needs to work with a system with degenerate quantum levels. As discussed by Novi{\v{c}}enko\cite{PhysRevA.100.012127}, the non-Abelian geometric phase comes from neglecting the transitions between different Floquet bands such that the states in the same Floquet band become degenerate. There are other strategies to get non-Abelian geometric phase effects\cite{leroux2018non,sugawa2018second} that work in the degenerate eigenbasis of a system. Such an eigenbasis consists of superposition states in the Zeeman basis. In contrast, the experimental protocol that we propose realizes a non-Abelian geometric phase effect in a Floquet basis that can be projected to the Zeeman basis, allowing us to measure the system in a more direct way. Although we only considered a $^{87}\textrm{Rb}$ Bose gas in this work, the protocol we propose can be used in other atomic or ionic systems, both Bosonic and Fermionic. Therefore, our protocol has the potential to be a reliable quantum control method in ultracold atom studies.
\section{Acknowledgement}
We thank Maitreyi Jayaseelan and Elisha Haber for discussions. We also thank Gediminas Juzeli\ifmmode \bar{u}\else \={u}\fi{}nas for the useful comments on the manuscript. This work was supported by the National Science Foundation grant number PHY-1708008 and NASA/JPL RSA 1616833.
\section{Introduction}
The geometric phase is a phase factor acquired by a quantum system during adiabatic cyclic evolution. In 1984, M. V. Berry systematically discussed the geometric phase in non-degenerate quantum systems, and such a $\textrm{U(1)}$ Abelian geometric phase (Berry phase) appears as a phase factor on the non-degenerate states\cite{berry1984quantal}. F. Wilczek and A. Zee generalized Berry's work to degenerate quantum systems and showed that a non-Abelian geometric phase can be obtained\cite{PhysRevLett.52.2111}. Geometric phases have been studied in a broad range of physical systems and they connect to both fundamental and practical applications. In condensed matter physics, the geometric phase and the corresponding gauge structure in the Bloch band are closely related to the quantum Hall effect\cite{ye1999berry,bruno2004topological,haldane2004berry,zhang2005experimental}. In quantum computing, the non-Abelian geometric phase can be used to construct non-Abelian holonomic gates\cite{pachos1999non}, which are the foundation of robust holonomic quantum computing. There are many experimental realizations of non-Abelian geometric gates in different quantum systems, such as trapped ions\cite{duan2001geometric,leibfried2003experimental}, superconducting qubits\cite{abdumalikov2013experimental,zhang2015fast,yan2019experimental}, and nitrogen vacancy (NV) centers\cite{zu2014experimental}.
In the study of cold atoms in optical lattices, a geometric phase and the related Berry curvature of the Bloch band have been investigated\cite{atala2013direct,aidelsburger2015measuring,flaschner2016experimental} and are closely related to the study of the topology of the Bloch bands. In continuous quantum gases, the effects caused by the non-Abelian geometric phase have also been studied in a $^{87}\textrm{Rb}$ Bose-Einstein condensate (BEC), where the cyclic evolution of the atomic system was driven by slowly varying microwave and radio-frequency (RF) fields\cite{bharath2018singular,sugawa2018second}. Using a resonant tripod scheme, the non-Abelian adiabatic geometric transformation in the dark state manifold has also been realized in a metastable neon atom system\cite{theuer1999novel} and a cold strontium gas system\cite{leroux2018non}.
To obtain the non-Abelian geometric gauge transformation and non-Abelian gauge potentials, a quantum system with degenerate energy levels is necessary. The degenerate multi-level system in the study of continuous quantum gases is usually introduced by considering a multipod scheme\cite{ruseckas2005non,dalibard2011colloquium} or a system with a special symmetry property\cite{sugawa2018second,campbell2011realistic}. Recent theoretical works on the Floquet analysis of periodically driven systems shows that a periodically driven Hamiltonian can make quantum levels within the same Floquet band degenerate within the adiabatic approximation, and therefore one can realize non-Abelian geometric phase effects from a periodically driven system\cite{PhysRevA.95.023615,PhysRevA.100.012127}.
In this work we propose a practical experimental protocol for generating an $\textrm{SU}(2)$ non-Abelian geometric gauge transformation with a periodically driven Raman process, and for observing the $\textrm{SU}(2)$ geometric phase in a pseudo-spin-1/2 system in the ground state manifold of a non-interacting ultracold Bose gas. Here $\textrm{SU(2)}$ is the group of rotations of a spin-1/2 system, and the geometric phase manifests as a spin rotation in our system. Our analysis is based on recent theoretical works\cite{PhysRevA.95.023615,PhysRevA.100.012127}, in which the authors applied Floquet theory to a system consisting of a spin interacting with a periodically driven magnetic field. They showed that when the oscillating magnetic field vector changes direction slowly, a non-Abelian geometric phase appears. We build on this theoretical result by considering a pseudo-spin system interacting with a synthetic magnetic field generated by optical Raman coupling, and propose an experimental protocol that may be realized practically. Our simulation shows that it is possible to measure the non-Abelian geometric phase using parameter sets that lie within the capabilities of existing cold atom experiments. Furthermore, we show that with our protocol one can observe the non-Abelian geometric phase even in the presence of undesired parameter fluctuations.
Our pseudo-spin-1/2 system consists of two Zeeman sublevels in the hyperfine ground state manifold of an alkali atom. The periodically driven Raman process is realized by applying the product of a low frequency and a high frequency periodic signal simultaneously to the bias magnetic field, Raman laser intensities and relative phase between Raman lasers. Our simulation of the time-dependent Schr\"{o}dinger equation (TDSE) shows that an $\textrm{SU}(2)$ geometric phase can be observed and that the evolution operators which generate the geometric phase are non-Abelian, i.e. $[U_1,U_2]\ne0$, where $U_1$ and $U_2$ are different unitary transformation operators caused by different geometric phases. Although we use an $^{87}\textrm{Rb}$ system as an example, this protocol can be used in other atomic systems, both Bosonic and Fermionic, and has the potential to become a robust quantum control method.
\section{Non-Abelian geometric phase in a periodically driven system}
The Floquet analysis of periodically driven quantum systems has been well studied. We consider a system based on the periodically driven systems studied in two recent papers\cite{PhysRevA.95.023615,PhysRevA.100.012127}. Consider a spin-1/2 system whose Hamiltonian takes the form
\begin{equation}
H(t,\omega t+\theta)=\tilde{H}(t)f(\omega t+\theta), \nonumber
\end{equation}
where $\tilde{H}(t)=\tilde{H}[\vec{\lambda}(t)]$ is a Hamiltonian that depends on a set of slowly varying parameters $\vec{\lambda}(t)=\{\lambda_{\mu}\}$ ($\mu=1,2,3,...$), and $f(\omega t+\theta)$ is a periodic function with driving frequency $\omega$ (period $T=2\pi/\omega$) and phase offset $\theta$. For simplicity, we only consider the harmonic driving case, where $f(\omega t+\theta)=\cos(\omega t+\theta)$. The evolution of the state follows the Schr\"{o}dinger equation
\begin{equation}\label{Schrodinger}
i\hbar\partial_{t}|\psi(t)\rangle=H(t,\omega t+\theta)|\psi(t)\rangle.
\end{equation}
We can transform the system into a Floquet basis by introducing a micromotion operator $R(t,\omega t+\theta)=\exp\{iS(\omega t+\theta)\}$, where $S(\omega t+\theta)=\tilde{H}(t)\sin(\omega t+\theta)/\hbar\omega$ is the kick operator. The micromotion operator transforms the system from the physical basis into the Floquet basis.
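For the spin-1/2 form of $\tilde{H}$ introduced below, the kick operator is proportional to $\hat r\cdot\hat\sigma$, so $R$ has a closed Pauli form. A small sketch (parameter values taken from the simulations in this paper; the field direction $\hat r$ is an illustrative choice) that also checks unitarity and the stroboscopic property $R=\mathbb{1}$ when the sine factor vanishes:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

Omega0 = 2 * np.pi * 258.3e3   # rad/s, from the text
omega = 2 * np.pi * 500e3      # fast driving frequency (rad/s)

def micromotion(t, r, theta=0.0):
    """R = exp(i S) with S = (Omega0/omega) sin(wt + theta) r.sigma, |r| = 1.
    Since (r.sigma)^2 = 1, exp(i a r.sigma) = cos(a) 1 + i sin(a) r.sigma."""
    a = (Omega0 / omega) * np.sin(omega * t + theta)
    n_sigma = r[0] * sx + r[1] * sy + r[2] * sz
    return np.cos(a) * I2 + 1j * np.sin(a) * n_sigma

r = np.array([0.0, 0.0, 1.0])            # field along z, for illustration
R = micromotion(0.3e-6, r)               # some time inside the fast cycle
unitarity = np.linalg.norm(R.conj().T @ R - I2)

# At stroboscopic times t = nT the sine factor vanishes and R reduces to the
# identity, so the Floquet-basis state coincides with the physical state there.
R_strobe = micromotion(2 * np.pi / omega, r)   # t = T
```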
Furthermore, we let $\tilde{H}(t)$ take the form
\begin{equation}\label{SU2 H form}
\tilde{H}=\hbar\Omega_0\hat{r}(t)\cdot\hat{\sigma},
\end{equation}
where $\hat{\sigma}=(\sigma_x,\sigma_y,\sigma_z)^{T}$ is the vector of Pauli operators, and $\hat{r}(t)=\hat{r}[\vec{\lambda}(t)]$ is a unit vector parameterized by $\vec{\lambda}(t)$. If we set $\Omega_{0}$ as constant, then the Hamiltonian $\tilde{H}$ describes a spin-1/2 system subject to a magnetic field whose direction is slowly changing. The Hamiltonian in the Floquet basis takes the form
\begin{eqnarray}
H_{F}&&=R^{\dagger}(t,\theta')H(t,\theta')R(t,\theta')-i\hbar R^{\dagger}(t,\theta')\partial_{t}R(t,\theta') \nonumber \\
&&=-\dot{\lambda}_{\mu}(i\hbar R^{\dagger}(t,\theta')\partial_{\mu}R(t,\theta')) \\
&&=\dot{\lambda}_{\mu}A_{\mu}(\vec{\lambda},\theta'), \nonumber
\end{eqnarray}
where we defined $\theta'=\omega t+\theta$, $\partial_{\mu}=\partial/\partial\lambda_{\mu}$ for fixed $\theta'$, and $A_{\mu}(\vec{\lambda},\theta')=-i\hbar R^{\dagger}(t,\theta')\partial_{\mu}R(t,\theta')$ is the gauge potential in the parameter space.
According to Floquet theory, the Hamiltonian in the Floquet basis can be written as a Fourier expansion of the form
\begin{equation}
H_{F}=\sum_{l}H_{F}^{(l)}e^{il\theta'}; l=0,\pm 1,\pm 2,..., \nonumber
\end{equation}
where $H_{F}^{(l)}=\frac{1}{2\pi}\int_{0}^{2\pi}H_{F}e^{-il\theta'}d\theta'$ is the $l$th Fourier component of $H_{F}$. If we assume the amplitudes of the matrix elements of these Fourier components are much smaller than the driving frequency, i.e.
\begin{equation}\label{adiabatic condition}
|\langle \alpha|H_{F}^{(l)}|\beta\rangle|\ll\hbar\omega,
\end{equation}
then this adiabatic condition allows us to neglect transitions between different Floquet bands and the evolution of the system can be approximated by the evolution within a single Floquet band\cite{PhysRevA.100.012127}. Furthermore, we can restrict our attention to the zeroth ($l=0$) Floquet band, since the states in the $l\ne0$ Floquet bands will evolve like their corresponding states in the $l=0$ band except for an additional $\textrm{U}(1)$ global phase factor $\phi_{global}=l\omega t$. Thus within the adiabatic approximation, the Hamiltonian in the Floquet basis is well approximated by the zeroth order Fourier component and can be written as
\begin{equation}\label{zeroth order effective Hamiltonian}
H_{F}^{(0)}=\dot{\lambda}_{\mu}A_{\mu}^{(0)},
\end{equation}
where $A_{\mu}^{(0)}$ is the zeroth order gauge potential $A_{\mu}^{(0)}(\vec{\lambda})=\frac{1}{2\pi}\int_{0}^{2\pi}A_{\mu}(\vec{\lambda},\theta')d\theta'$.
The zeroth order component of the Hamiltonian in the Floquet basis does not depend on the fast driving and thus can be regarded as the effective Hamiltonian of the spin-1/2 system in the Floquet basis. It takes the explicit form\cite{PhysRevA.100.012127}
\begin{eqnarray}\label{effective Hamiltonian}
H_{eff}&&=\frac{\hbar(J_{0}(a)-1)}{2}\vec{r}\times\dot{\vec{r}}\cdot\vec{\sigma} \nonumber \\
&&=\dot{\lambda}_{\mu}\hbar g\epsilon_{ijk}r_i\partial_{\mu}r_j\sigma_k \\
&&=\dot{\lambda}_{\mu}A_{\mu}^{(0)},\nonumber
\end{eqnarray}
where $J_{0}(a)$ is the zeroth order Bessel function of the first kind, $a=|2\Omega_{0}(t)|/\omega$, $\epsilon_{ijk}$ is the Levi-Civita tensor (here $i$,$j$, and $k$ stand for $x$, $y$, and $z$), $g=\frac{1}{2}(J_{0}(a)-1)$, and we have used $\partial_{t}=\dot{\lambda}_{\mu}\partial_{\mu}$ with fixed $\theta'$. Note that the zeroth order $\textrm{SU}(2)$ gauge potential is $A_{\mu}^{(0)}=\hbar g\hat{n}\cdot\hat{\sigma}$, where $A_{\mu}^{(0)k}=\hbar g\epsilon_{ijk}r_i\partial_{\mu}r_j$ and $n_{k}=\epsilon_{ijk}r_{i}\partial_{\mu}r_{j}$.
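Plugging in the parameters used in the simulations provides a quick consistency check of $g$ and of the quoted oscillation period $T_{geo}\approx80\mu s$ (a full Rabi-like cycle requires a $2\pi$ Bloch-sphere rotation, and one slow cycle rotates by $4\pi|g|$):

```python
import numpy as np

Omega0 = 2 * np.pi * 258.3e3    # synthetic field magnitude (rad/s), from the text
omega = 2 * np.pi * 500e3       # fast driving frequency (rad/s)
Omega = 2 * np.pi * 50e3        # slow driving frequency (rad/s)

a = 2 * Omega0 / omega          # argument of the Bessel function, ~1.033

# J0 from its integral representation, J0(a) = (1/pi) Int_0^pi cos(a sin t) dt,
# evaluated by the midpoint rule (avoids a scipy dependency)
N = 200000
h = np.pi / N
t = (np.arange(N) + 0.5) * h
J0 = np.sum(np.cos(a * np.sin(t))) * h / np.pi

g = (J0 - 1) / 2                # geometric coupling; should give ~ -0.1248

# One slow cycle (T_slow = 20 us) produces U = exp(i 2 pi g n.sigma), i.e. a
# Bloch-sphere rotation by 4 pi |g|, so a full Rabi-like oscillation takes:
T_slow = 2 * np.pi / Omega
T_geo = T_slow * (2 * np.pi) / (4 * np.pi * abs(g))   # ~80 us
```

With these numbers $g\approx-0.1248$ and $T_{geo}\approx80\mu s$, matching the values quoted in the simulation results.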
The state in Floquet space is defined as $|\chi(t)\rangle=R^{\dagger}(t,\theta')|\psi(t)\rangle$, and the time evolution of the system in the Floquet space under adiabatic approximation can be written as
\begin{equation}
|\chi(t)\rangle=U_{eff}(t,t_0)|\chi(t_0)\rangle,
\end{equation}
or in the original basis
\begin{equation}
|\psi(t)\rangle=R(t,\theta')U_{eff}(t,t_0)R^{\dagger}(t_0,\theta'_0)|\psi(t_0)\rangle, \nonumber
\end{equation}
where $U_{eff}=\mathcal{T}\exp\{-\frac{i}{\hbar}\int_{t_0}^{t}H_{eff}dt'\}$ is the time evolution operator under the adiabatic approximation in the Floquet basis, $\mathcal{T}$ stands for time-ordering, and $\theta'_0=\omega t_0+\theta$.
Using Eqn.(\ref{effective Hamiltonian}), we can change the time evolution operator $U_{eff}$ from its time-ordered form into the form of a path-ordered $\textrm{SU}(2)$ unitary transformation operator $U_{eff}=\mathcal{P}\exp\{-\frac{i}{\hbar}\int_{\vec{\lambda}(t_0)}^{\vec{\lambda}(t)}A_{\mu}^{(0)}d\lambda_{\mu}\}$, where $\mathcal{P}$ stands for path-ordering. The form of $U_{eff}$ shows that the evolution of the state depends only on the path that the parameter $\vec{\lambda}$ takes from its initial value $\vec{\lambda}(t_0)$ to its final value $\vec{\lambda}(t)$, and not on its rate of change. If the parameter $\vec{\lambda}$ has cyclic time dependence, then the path-ordered unitary transformation operator depends only on the geometry of the closed loop that the parameter follows. In this case, the path-ordered operator takes the form $U_{c}=\mathcal{P}\exp\{-\frac{i}{\hbar}\oint A_{\mu}^{(0)}d\lambda_{\mu}\}$, and the system gains an $\textrm{SU}(2)$ geometric phase (Wilczek-Zee phase).
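The rate independence of the holonomy can be checked with a discretized path-ordered product. The sketch below assumes the parameterization $\hat r=(\sin\Theta\cos\Phi,\sin\Theta\sin\Phi,\cos\Theta)$ with $\lambda=\Theta$ at fixed $\Phi$, for which $A_{\Theta}^{(0)}=\hbar g(-\sin\Phi\,\sigma_x+\cos\Phi\,\sigma_y)$; the overall sign conventions relative to the operators quoted elsewhere in the text depend on phase conventions:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
g = -0.1248                    # geometric coupling from the text

def step(Phi, dTheta):
    """Segment factor exp(-i A_Theta dTheta / hbar) with hbar = 1, where
    A_Theta = g(-sin(Phi) sx + cos(Phi) sy) is the zeroth-order gauge
    potential for r = (sin T cos Phi, sin T sin Phi, cos T) at fixed Phi."""
    ns = -np.sin(Phi) * sx + np.cos(Phi) * sy     # unit-vector n.sigma
    ang = g * dTheta
    return np.cos(ang) * I2 - 1j * np.sin(ang) * ns

def holonomy(Phi, grid):
    """Path-ordered product over a Theta grid running from 0 to 2 pi."""
    U = I2
    for k in range(len(grid) - 1):
        U = step(Phi, grid[k + 1] - grid[k]) @ U  # later factors on the left
    return U

uniform = np.linspace(0.0, 2 * np.pi, 401)
lopsided = 2 * np.pi * np.linspace(0.0, 1.0, 401) ** 2  # same loop, other "speed"

U_a = holonomy(0.0, uniform)
U_b = holonomy(0.0, lopsided)
rate_indep = np.linalg.norm(U_a - U_b)      # ~0: holonomy depends on path only

# Closed form for this loop: a rotation generated by sy with angle 2 pi g
U_exact = np.cos(2 * np.pi * g) * I2 - 1j * np.sin(2 * np.pi * g) * sy
err = np.linalg.norm(U_a - U_exact)
```

Running the same construction with $\Phi=\pi/2$ instead produces a rotation generated by $\sigma_x$, consistent (up to sign conventions) with the pair of operators used later to demonstrate non-commutativity.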
Unlike the $\textrm{U}(1)$ Berry phase that acts as a commutable phase factor, the non-Abelian geometric phase can cause population transfer between two eigenstates and two non-Abelian geometric phase factors related to different closed loops in the parameter space do not necessarily commute. Usually, to observe the adiabatic non-Abelian geometric phase, the system needs to be degenerate in order that all states acquire the same dynamic phase, which would otherwise make the geometric phase hard to detect. Therefore many studies of non-Abelian geometric phases are done within the degenerate dark state manifold. However, the physical system defined by $\tilde{H}$ in this work is not required to be degenerate. The periodic driving $f(\omega t+\theta)$ on the Hamiltonian $\tilde{H}$ introduces a Floquet band structure to the system, and the energy levels within the same Floquet band become degenerate under the adiabatic approximation\cite{PhysRevA.100.012127,PhysRevA.95.023615}. Since we set our parameters in the adiabatic regime, the system stays in the same Floquet band, and the cyclic evolution in the same Floquet band results in a non-Abelian geometric phase.
\section{Periodically driven Hamiltonian of the atomic system}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{levels.jpg}
\end{center}
\caption{(a) Level diagram of the Raman process that we consider. We choose $|1\rangle=|F=1,m_F=-1\rangle$ and $|2\rangle=|F=1,m_F=1\rangle$ in the $5^{2}S_{\frac{1}{2}}, F=1$ manifold as our pseudo-spin-1/2 system. The ground states are coupled by two Raman lasers $a$ and $b$ with $\sigma^{+}$ and $\sigma^{-}$ polarizations, respectively. The excited states in the $5^{2}P_{\frac{1}{2}}, F=1$ and $F=2$ manifolds that couple to the ground states are $|3\rangle=|F=1,m_F=0\rangle$, $|4\rangle=|F=2,m_F=0\rangle$, $|5\rangle=|F=2,m_F=-2\rangle$ and $|6\rangle=|F=2,m_F=2\rangle$. $\Delta_{i}$, $i=1,2,3,4$, are the single-photon detunings and $\delta$ is the two-photon detuning. (b) The experimental setup we consider: the two Raman lasers are merged into a single beam before they interact with the atoms.}\label{level diagram}
\end{figure}
In this section we consider strategies for creating the periodic Hamiltonian discussed in the previous section using a Raman process in the ground state manifold of a $^{87}\textrm{Rb}$ atom. The Raman process has a variety of applications in the study of ultracold atoms, including quantum state manipulation, generating artificial gauge potentials and spin-orbit coupling, and creating topological defects\cite{wright2008raman,schultz2014raman,lin2011spin,goldman2014light,schultz2016creating,schultz2016raman}. We consider $|1\rangle=|F=1,m_F=-1\rangle$ and $|2\rangle=|F=1,m_F=1\rangle$ in the $5^{2}S_{\frac{1}{2}}, F=1$ ground state manifold of $^{87}\textrm{Rb}$ to be our pseudo-spin-1/2 system, as shown in Fig.\ref{level diagram}(a). The Raman process is realized by applying two co-propagating circularly polarized lasers to the ultracold atoms, which are subject to a weak bias magnetic field oriented along the beam axis ($z$-axis)\cite{wright2008raman,schultz2014raman,schultz2016creating,schultz2016raman}.
Since the Raman lasers that we consider couple our pseudo-spin-1/2 states in the $5^{2}S_{\frac{1}{2}}, F=1$ manifold to the excited states in the $5^{2}P_{\frac{1}{2}}, F=1$ and $F=2$ hyperfine levels, the level diagram that we use in our calculation is actually a $W$-type instead of $\Lambda$-type\cite{wright2008raman}, see Fig.\ref{level diagram}(a).
We start with the dipole interaction Hamiltonian $H_0=H_a-\vec{d}\cdot\vec{E}$, where $H_a$ is the Hamiltonian of the atom in the presence of a bias magnetic field, $\vec{d}$ is the atomic dipole moment, and $\vec{E}$ is the laser electric field, which takes the form $\vec{E}=\vec{E}_ae^{-i\omega_a t}+\vec{E}_be^{-i\omega_b t}+\textrm{c.c.}$, where $\omega_a$ and $\omega_b$ are the laser frequencies. To solve the problem we can go to a rotating frame defined by the gauge transformation operator $U=\textrm{diag}\{e^{i\alpha_1},e^{i\alpha_2},e^{i\alpha_3},e^{i\alpha_4},e^{i\alpha_5},e^{i\alpha_6}\}$, where we define
\begin{align}
&\alpha_1=(\omega_1-\delta/2)t,&& \alpha_2=(\omega_2+\delta/2)t, \nonumber \\
&\alpha_3=(\omega_3-\Delta_1)t,&& \alpha_4=(\omega_4-\Delta_2)t, \nonumber \\
&\alpha_5=(\omega_5-\Delta_3)t,&& \alpha_6=(\omega_6-\Delta_4)t; \nonumber
\end{align}
$\hbar\omega_i$ ($i=1,2,3,4,5,6$) is the energy of state $|i\rangle$ in the bias magnetic field; and we also define the one-photon detuning $\Delta_i$ ($i=1,2,3,4$) and two-photon detuning $\delta$ as
\begin{eqnarray}
&&2\pi\delta=\omega_a-\omega_b+\omega_1-\omega_2 \nonumber \\
&&2\pi\Delta_1=\omega_3-\frac{\omega_1+\omega_2}{2}-\frac{\omega_a+\omega_b}{2} \nonumber \\
&&2\pi\Delta_2=\omega_4-\frac{\omega_1+\omega_2}{2}-\frac{\omega_a+\omega_b}{2} \\
&&2\pi\Delta_3=\omega_5-\frac{\omega_1+\omega_2}{2}-\frac{\omega_a+\omega_b}{2} \nonumber \\
&&2\pi\Delta_4=\omega_6-\frac{\omega_1+\omega_2}{2}-\frac{\omega_a+\omega_b}{2}. \nonumber
\end{eqnarray}
If we are in the far-detuned regime, namely where the one-photon detunings are much larger than the decay rates of the excited states, we can adiabatically eliminate the excited states and obtain the effective two-level Hamiltonian\cite{wright2008raman,schultz2016raman,schultz2014raman,schultz2016creating}
\begin{equation}\label{W hamiltonian}
W=-\hbar\left( {\begin{array}{cc}
\xi_{11}+\frac{\delta}{2} & \eta_{12}e^{-i\phi} \\
\eta_{12}e^{i\phi} & \xi_{22}-\frac{\delta}{2}
\end{array} } \right),
\end{equation}
where the matrix elements are defined as
\begin{eqnarray}
\xi_{11}&&=\frac{|\Omega_{a13}|^2}{\Delta_1}+\frac{|\Omega_{a14}|^2}{\Delta_2}+\frac{|\Omega_{b15}|^2}{\Delta_3} \nonumber \\
\xi_{22}&&=\frac{|\Omega_{b23}|^2}{\Delta_1}+\frac{|\Omega_{b24}|^2}{\Delta_2}+\frac{|\Omega_{a26}|^2}{\Delta_4} \nonumber \\
\eta_{12}&&=\frac{|\Omega_{a13}\Omega_{b23}|}{\Delta_1}+\frac{|\Omega_{a14}\Omega_{b24}|}{\Delta_2} \nonumber.
\end{eqnarray}
Here $\Omega_{\rho ij}$ are the Rabi frequencies, which take the form $\Omega_{\rho ij}=-d_{D1}E_{\rho}C_{ij}/\hbar$, with $\rho=a,b$, $i=1,2$, and $j=3,4,5,6$; $d_{D1}$ is the effective dipole moment of the $D_{1}$ transitions, $E_{\rho}$ is the electric field amplitude, $C_{ij}$ is the Clebsch-Gordan coefficient between states $|i\rangle$ and $|j\rangle$, and $\phi=\phi_{b}-\phi_{a}$ is the relative phase between the two Raman lasers.
Our goal is to construct the periodically driven Hamiltonian with a high frequency driving signal $H=\tilde{H}f(\omega t+\theta)$, or in the harmonic driving case, $H=\tilde{H}\cos{\omega t}$, where $\theta$ is taken to be zero for simplicity. Notice that we can rewrite the effective two-level Hamiltonian Eqn.(\ref{W hamiltonian}) as
\begin{eqnarray}\label{SU(2) form of W}
W=&&-\frac{\hbar\left[\delta+(\xi_{11}-\xi_{22})\right]}{2}\sigma_z-\hbar\eta_{12}\cos{\phi}\sigma_x-\hbar\eta_{12}\sin{\phi}\sigma_y \nonumber \\
&&-\frac{\hbar(\xi_{11}+\xi_{22})}{2}\mathbb{1},
\end{eqnarray}
where $\mathbb{1}$ is the $2\times 2$ identity matrix. Since we are in the far-detuned regime, the single-photon detunings are much larger than the two-photon detuning, the Rabi frequencies, and the Zeeman splitting between different magnetic sublevels. If we set the Rabi frequencies to satisfy $|\Omega_{a13}|=|\Omega_{b23}|$, $|\Omega_{a14}|=|\Omega_{b24}|$, and $|\Omega_{a26}|=|\Omega_{b15}|$, with a bias magnetic field (e.g. 5G), then $\xi_{11}$ and $\xi_{22}$ will be approximately equal. Therefore we can further reduce the effective two-level Hamiltonian and write it as
\begin{equation}
W\approx-\hbar(\frac{\delta}{2}\sigma_z+\eta_{12}\cos{\phi}\sigma_x+\eta_{12}\sin{\phi}\sigma_y), \nonumber
\end{equation}
where we ignored the last term in Eqn.(\ref{SU(2) form of W}) since it only contributes a global $\textrm{U}(1)$ phase factor to the evolution.
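As a quick sanity check, the Pauli decomposition above can be verified numerically. The following sketch (not from the paper; $\xi_{11}$, $\xi_{22}$, $\eta_{12}$, $\delta$, and $\phi$ are arbitrary illustrative values, in units where $\hbar=1$) confirms that Eqn.(\ref{SU(2) form of W}) reproduces the matrix form of Eqn.(\ref{W hamiltonian}) entry by entry.

```python
import math, cmath

def pauli_sum(c0, cx, cy, cz):
    """2x2 matrix c0*I + cx*sigma_x + cy*sigma_y + cz*sigma_z."""
    return [[c0 + cz, cx - 1j * cy],
            [cx + 1j * cy, c0 - cz]]

hbar = 1.0                                                # units with hbar = 1
xi11, xi22, eta12, delta, phi = 0.7, 0.4, 0.3, 0.2, 0.9   # arbitrary test values

# Matrix form of W (Eqn. (W hamiltonian)):
W = [[-hbar * (xi11 + delta / 2), -hbar * eta12 * cmath.exp(-1j * phi)],
     [-hbar * eta12 * cmath.exp(1j * phi), -hbar * (xi22 - delta / 2)]]

# Pauli decomposition (Eqn. (SU(2) form of W)):
W_pauli = pauli_sum(-hbar * (xi11 + xi22) / 2,
                    -hbar * eta12 * math.cos(phi),
                    -hbar * eta12 * math.sin(phi),
                    -hbar * (delta + (xi11 - xi22)) / 2)

err = max(abs(W[i][j] - W_pauli[i][j]) for i in range(2) for j in range(2))
```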
To create the harmonic driving of the Hamiltonian, we can modulate the bias magnetic field and $\eta_{12}$ as $\cos{\omega t}$. In this case, the effective two-level Hamiltonian $W$ can be regarded as the desired driven Hamiltonian $H$, and takes the form
\begin{equation}
H=-\hbar(\frac{\tilde{\delta}}{2}\sigma_z+\tilde{\eta}_{12}\cos{\phi}\sigma_x+\tilde{\eta}_{12}\sin{\phi}\sigma_y)\cos{\omega t}, \nonumber
\end{equation}
where $\tilde{\delta}\cos{\omega t}=\delta$ and $\tilde{\eta}_{12}\cos{\omega t}=\eta_{12}$. Now let $\tilde{\delta}=2\Omega_0\cos{\Theta(t)}$ and $\tilde{\eta}_{12}=\Omega_0\sin{\Theta(t)}$, where $\Theta(t)$ is a slowly varying parameter. This can be achieved by modulating both Raman laser amplitudes as $\sqrt{|\sin{\Theta(t)}\cos{\omega t}|}$ and switching the relative phase $\phi=\phi_b-\phi_a$ between the two lasers from $\phi=\Phi$ to $\phi=\pi+\Phi$ whenever $\sin{\Theta(t)}\cos{\omega t}$ changes sign, where $\Phi$ is a parameter that does not depend on the fast periodic driving. The periodic Hamiltonian can finally be written in the desired form
\begin{equation}\label{effective periodic Hamiltonian}
H(t)=\hbar\Omega_0\hat{r}\cdot\hat{\sigma}\cos{\omega t},
\end{equation}
with $\hat{r}(\vec{\lambda})=(-\sin{\Theta}\cos{\Phi},-\sin{\Theta}\sin{\Phi},-\cos{\Theta})^{T}$. If we fix $\Phi$ and slowly drive $\Theta$ in a cyclic manner, namely $\Theta=\Omega t$, where $\Omega\ll\omega$, we will obtain the desired Hamiltonian (Eqn.(\ref{SU2 H form})) that leads to an $\textrm{SU}(2)$ non-Abelian geometric phase. Note that we take $\vec{\lambda}=\{\Theta,\Phi\}$ as the set of coordinates on a unit 2-sphere with a time dependent polar angle $\Theta=\Omega t$ and a fixed azimuthal angle $\Phi$. This Hamiltonian describes a pseudo-spin-1/2 system in a rotating synthetic magnetic field whose magnitude is modulated.
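For concreteness, the driven Hamiltonian of Eqn.(\ref{effective periodic Hamiltonian}) can be assembled numerically. The stdlib-only Python sketch below (parameter values are those used in the simulations described later; the sample time $t$ is arbitrary) builds $H(t)$ from $\Theta=\Omega t$ and a fixed $\Phi$, and checks that, because $\hat{r}$ is a unit vector, the instantaneous eigenvalues are $\pm\hbar\Omega_0|\cos{\omega t}|$.

```python
import math

hbar = 1.0                        # work in units with hbar = 1
Omega0 = 2 * math.pi * 258.3e3    # rad/s, value used in the simulations below
Omega = 2 * math.pi * 50e3        # slow drive
omega = 2 * math.pi * 500e3       # fast drive
Phi = 0.0

def H(t):
    """H(t) = hbar*Omega0*(r.sigma)*cos(omega*t) with r from Theta = Omega*t."""
    Theta = Omega * t
    rx = -math.sin(Theta) * math.cos(Phi)
    ry = -math.sin(Theta) * math.sin(Phi)
    rz = -math.cos(Theta)
    c = hbar * Omega0 * math.cos(omega * t)
    return [[c * rz, c * (rx - 1j * ry)],
            [c * (rx + 1j * ry), -c * rz]]

def eigvals_2x2_hermitian(M):
    """Eigenvalues of a 2x2 Hermitian matrix from trace and determinant."""
    tr = (M[0][0] + M[1][1]).real
    det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]).real
    s = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return (tr / 2 - s, tr / 2 + s)

t = 1.3e-6                                        # arbitrary sample time
lo, hi = eigvals_2x2_hermitian(H(t))
expected = hbar * Omega0 * abs(math.cos(omega * t))
```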
\section{A practical experimental protocol and simulation results}
\subsection{The geometric phase}
After realizing the effective two-level periodic Hamiltonian in Eqn.(\ref{effective periodic Hamiltonian}), the dynamics of the system in the rotating frame follow from the Schr\"{o}dinger equation (Eqn.(\ref{Schrodinger})). Using Eqn.(\ref{effective Hamiltonian}), we find the zeroth Fourier component of the effective Hamiltonian in the Floquet basis to be
\begin{equation}
H_{eff}=\hbar\Omega g\hat{n}\cdot\hat{\sigma},
\end{equation}
where $\hat{n}=(-\sin{\Phi},\cos{\Phi},0)^T$, and $g=\frac{1}{2}(J_{0}(a)-1)$ with $a=2\Omega_0/\omega$, where $J_0$ is the zeroth-order Bessel function of the first kind. We assume $\Theta=\Omega t$ and $\Phi=\textrm{const.}$, so that $\dot{\Theta}=\Omega$ and $\dot{\Phi}=0$. Therefore, the zeroth-order $\textrm{SU}(2)$ gauge potential takes the form
\begin{equation}
A_{\Theta}^{(0)}=-\hbar g(\sin{\Phi}\sigma_x-\cos{\Phi}\sigma_y).
\end{equation}
We can write the $\textrm{SU}(2)$ transformation operator as
\begin{equation}\label{evolution operator}
U(t,t_0)=\exp\{ig(\sin{\Phi}\sigma_x-\cos{\Phi}\sigma_y)\left[\Theta(t)-\Theta(t_0)\right]\}.
\end{equation}
For cyclic evolution, we get the geometric phase $\gamma=2m\pi g$, $m=\pm1,\pm2,\pm3,...$, and the $\textrm{SU}(2)$ transformation operator can be written as
\begin{equation}\label{cyclic transformation operator}
U_c=\exp\{-i\gamma\vec{n}\cdot\vec{\sigma}\}=\exp\{i2m\pi g(\sin{\Phi}\sigma_x-\cos{\Phi}\sigma_y)\},
\end{equation}
where the integer $m$ takes a negative sign in the case that we consider, due to our choice of states.
We use a fourth-order finite-difference method to solve the time-dependent Schr\"{o}dinger equation (TDSE) (Eqn.(\ref{Schrodinger})). The TDSE describes the dynamics in the rotating frame, but the evolution operator that produces the geometric phase acts in the Floquet basis. Notice that the micromotion operator that transforms the rotating basis to the Floquet basis takes the form $R=\exp\{i\tilde{H}(t)\sin{\omega t}/\hbar\omega\}$, and it reduces to the identity operator at the end of each cycle of fast driving, namely when $\sin{\omega T_q}=0$, with $T_q=2q\pi/\omega$ ($q=0,1,2,3,...$). Thus if we prepare the system in an eigenstate of $\tilde{H}(t_0)$ at the initial time $t_0$ and turn on the Raman lasers and the periodic driving abruptly, the system will start evolving in the Floquet basis. If we measure the system at the end of each fast driving cycle, the rotating basis will already be aligned with the Floquet basis, so we obtain a direct measurement in Floquet space.
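The stroboscopic scheme described above can be sketched in a few lines. The toy Python implementation below (a standard fourth-order Runge-Kutta integrator rather than the paper's finite-difference code; the step size is an illustrative choice) evolves $|1\rangle$, an eigenstate of $\tilde{H}(0)$, under the driven Hamiltonian with the paper's parameters and samples the state at the end of each fast cycle, where the micromotion operator is the identity. After one slow period ($20\,\mu$s, i.e. ten fast cycles) the population should be split roughly equally, consistent with the $\pi/2$ rotation discussed in the simulations below.

```python
import math

hbar = 1.0
Omega0 = 2 * math.pi * 258.3e3   # rad/s
Omega = 2 * math.pi * 50e3       # slow drive
omega = 2 * math.pi * 500e3      # fast drive
Phi = 0.0

def H(t):
    """H(t) = hbar*Omega0*(r.sigma)*cos(omega*t), Eqn. (effective periodic Hamiltonian)."""
    Th = Omega * t
    rx = -math.sin(Th) * math.cos(Phi)
    ry = -math.sin(Th) * math.sin(Phi)
    rz = -math.cos(Th)
    c = hbar * Omega0 * math.cos(omega * t)
    return ((c * rz, c * (rx - 1j * ry)), (c * (rx + 1j * ry), -c * rz))

def deriv(t, psi):
    """Right-hand side of the TDSE: dpsi/dt = -i/hbar * H(t) psi."""
    Ht = H(t)
    return tuple((-1j / hbar) * (Ht[i][0] * psi[0] + Ht[i][1] * psi[1])
                 for i in range(2))

def rk4_step(t, psi, dt):
    k1 = deriv(t, psi)
    k2 = deriv(t + dt / 2, tuple(p + dt / 2 * k for p, k in zip(psi, k1)))
    k3 = deriv(t + dt / 2, tuple(p + dt / 2 * k for p, k in zip(psi, k2)))
    k4 = deriv(t + dt, tuple(p + dt * k for p, k in zip(psi, k3)))
    return tuple(p + dt / 6 * (a + 2 * b + 2 * c + d)
                 for p, a, b, c, d in zip(psi, k1, k2, k3, k4))

psi = (1.0 + 0j, 0j)             # start in |1>, an eigenstate of H~(0)
T_fast = 2 * math.pi / omega     # one fast-driving cycle (2 us)
steps_per_cycle = 200            # illustrative step size
dt = T_fast / steps_per_cycle
t = 0.0
stroboscopic = []
for _ in range(10):              # one slow period = 10 fast cycles = 20 us
    for _ in range(steps_per_cycle):
        psi = rk4_step(t, psi, dt)
        t += dt
    stroboscopic.append(psi)     # rotating basis = Floquet basis at cycle ends

norm = abs(psi[0]) ** 2 + abs(psi[1]) ** 2
pop1 = abs(psi[0]) ** 2          # expected near 0.5 after the pi/2 rotation
```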
\subsection{Experimental setup}
To experimentally realize the setup we consider, two co-propagating Raman lasers along the $z$-axis with left and right circular polarizations are needed, see Fig.\ref{level diagram}(b). Unlike the usual Raman process, where the Raman laser intensities are time-independent or only slowly varying, here we need to modulate both the laser intensities and the relative phase between the two Raman lasers with a periodic function that is the product of a low frequency and a high frequency periodic signal. Meanwhile, the magnitude of the bias magnetic field generated by a Helmholtz coil also needs to be modulated to provide the periodic two-photon detuning $\delta$. The driving signal of the bias magnetic field is $B=B_0+\Delta B\cos{\Omega t}\cos{\omega t}$, and both Raman laser intensities are proportional to $|\sin{\Omega t}\cos{\omega t}|$, see Fig.\ref{driving profiles}(a,b). The relative phase between the two Raman lasers follows the function $\frac{\pi}{2}[1-\textrm{sgn}(\sin{\Omega t}\cos{\omega t})]$, where $\textrm{sgn}(x)$ is the signum function, as shown in Fig.\ref{driving profiles}(c).
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{driving-signal.jpg}
\end{center}
\caption{From top to bottom: (a) driving signal of the magnetic field, (b) laser intensity, and (c) relative phase between lasers. The driving parameters are $\Omega=2\pi\times50\textrm{kHz}$, $\omega=2\pi\times500\textrm{kHz}$. Here we only show the driving signals for the first $10\mu s$.}\label{driving profiles}
\end{figure}
To realize the desired periodically driven signal of the parameters, we can first use an arbitrary waveform generator (AWG) to generate the modulation signals. Then we can send the signals to acousto-optical modulators (AOMs) to drive the intensity of the Raman lasers and to an electro-optical modulator (EOM) to drive the relative phase between the Raman lasers. The parameters that we choose in our simulation are $\Omega=2\pi\times50\textrm{kHz}$ and $\omega=2\pi\times500\textrm{kHz}$ with the beam waist $w=300\mu m$, which will not push commercial AOMs beyond their limits. Finally, the modulation of the bias magnetic field can be realized by sending the modulation signal from the AWG to an audio power amplifier and using it to drive the Helmholtz coil that generates the bias magnetic field.
\subsection{Preparation of the states and projection measurements}
After describing a potential experimental setup, we discuss the preparation of the initial states and how to perform projection measurements on the desired quantum state. Our theoretical framework in this paper is based on a single atom, which assumes the ultracold Bose gas is dilute enough that the interaction between atoms is negligible. In this work, we consider an ultracold dilute $^{87}\textrm{Rb}$ Bose gas and we focus on the $5^{2}S_{\frac{1}{2}}, F=1$ ground state manifold.
We produce a Bose-Einstein condensate in the $|1\rangle\equiv|R=1,m_F=-1\rangle$ state and then must transfer the population to the desired initial state. There are many ways to control the system and achieve this state preparation. In our laboratory, we have developed a reliable Raman waveplate method to achieve state rotations on the Bloch sphere, and we can use a Raman waveplate pulse to rotate the states and measure the atomic Stokes parameters\cite{schultz2016creating,hansen2016singular,schultz2014raman}. The waveplate pulse couples states $|1\rangle$ and $|2\rangle$ and rotates the system to the initial state $|\psi(0)\rangle$, which is a superposition state of $|1\rangle$ and $|2\rangle$. Other than the Raman waveplate, there are also other ways to transfer the atoms into the desired initial state, such as using a radio-frequency pulse sequence.
We can use a Stern-Gerlach time-of-flight (TOF) imaging method to measure the populations in different states. To extract phase information from the system, we need to rotate the system into the eigenbases of the $x$ and $y$ axes. This can be achieved by any high-fidelity $\pi/2$ rotation about the $x$ or $y$ axis, e.g., Raman waveplate pulses. Generally, if we ignore the undetectable global phase, the state of the system can be written as $|\psi\rangle=c_1|1\rangle+c_2e^{i\beta}|2\rangle$, where the coefficients $c_1$ and $c_2$ are real, and $\beta$ is the relative phase. The atomic Stokes parameters are defined as
\begin{eqnarray}
&&S_1=2c_1c_2\cos{\beta} \nonumber\\
&&S_2=2c_1c_2\sin{\beta} \\
&&S_3=c_1^2-c_2^2, \nonumber
\end{eqnarray}
and they can be understood as projection measurements on the $x$, $y$ and $z$-axes of the Bloch sphere. The Stern-Gerlach TOF can be regarded as a measurement of $S_3$, while to measure the other two atomic Stokes parameters $S_1$ and $S_2$ we need to apply a $\frac{\pi}{2}$-waveplate pulse to rotate the detection axis to the $x$ and $y$-axes\cite{hansen2016singular}. From the atomic Stokes parameters we are able to extract both the population and phase information of the state.
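The Stokes parameters defined above follow directly from the complex amplitudes of the state. A minimal Python sketch (the example amplitudes are hypothetical, and the state is assumed normalized):

```python
import math, cmath

def stokes(a1, a2):
    """Atomic Stokes parameters from complex amplitudes of |1> and |2>.
    Writing a1 = c1, a2 = c2*exp(i*beta) up to a global phase:
      S1 = 2*c1*c2*cos(beta), S2 = 2*c1*c2*sin(beta), S3 = c1^2 - c2^2."""
    S1 = 2 * (a1.conjugate() * a2).real
    S2 = 2 * (a1.conjugate() * a2).imag
    S3 = abs(a1) ** 2 - abs(a2) ** 2
    return S1, S2, S3

# Example: equal superposition with relative phase beta = pi/2,
# i.e. a Bloch vector along +y.
a1 = 1 / math.sqrt(2)
a2 = cmath.exp(1j * math.pi / 2) / math.sqrt(2)
S1, S2, S3 = stokes(a1, a2)
```

For any normalized pure state, $S_1^2+S_2^2+S_3^2=1$, i.e. the Bloch vector lies on the unit sphere.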
The Stern-Gerlach TOF imaging happens after we turn the Raman lasers off, so we are performing the measurement in a Zeeman basis defined solely by the bias magnetic field. However, in our calculations we work in a rotating frame. Thus our final state, which is stationary in the rotating frame, will acquire an extra phase factor between the $|1\rangle$ and $|2\rangle$ states of $\alpha=\alpha_2-\alpha_1=(\omega_a-\omega_b)t$ in the Zeeman basis. Since in our experimental setup, the laser frequencies are fixed and shifted by AOMs, we are able to record the frequency difference between Raman lasers. Therefore, we can calculate the extra phase difference at any time when the Raman lasers are on and eliminate the extra phase factor in data processing.
\subsection{Simulation results}
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{geo-Rabi.jpg}
\end{center}
\caption{Simulation results of an $\textrm{SU}(2)$ transformation, where circles, squares, and triangles are simulation results from solving the TDSE in the rotating frame, and dashed lines are analytical curves calculated in the Floquet basis under the adiabatic approximation. Upper plot: population transfer, where blue and red represent $|1\rangle$ and $|2\rangle$, respectively. Lower plot: evolution of the atomic Stokes parameters, where blue, red and black represent $S_1$, $S_2$, and $S_3$, respectively. Here we use the parameters: maximum laser powers $P_a=P_b=271.7\mu W$, beam waist $w=300\mu m$, magnitude of the time-varying part of the bias magnetic field $\Delta B=0.368G$, and bias magnetic field average $B_0=5G$, which results in $\Omega_0=2\pi\times258.3\textrm{kHz}$. The driving frequencies are $\Omega=2\pi\times50\textrm{kHz}$ and $\omega=2\pi\times500\textrm{kHz}$. With the above parameters, the geometric phase results in a Rabi-like oscillation with a period $T_{geo}\approx80\mu s$. }\label{geometric phase}
\end{figure}
By numerically solving the TDSE Eqn.(\ref{Schrodinger}), we obtain the evolution of the system in the rotating frame by extracting the points at the end of each fast driving cycle. We also analytically calculate the evolution of the state subjected to the $\textrm{SU}(2)$ unitary transformation given by Eqn.(\ref{evolution operator}) in the Floquet basis under the adiabatic approximation. As shown in Fig.\ref{geometric phase}, the simulation results match the analytical calculations well. In the upper plot, blue circles and red squares are the simulated populations of states $|1\rangle$ and $|2\rangle$, respectively. The initial state is $|\psi(0)\rangle=|1\rangle$. Dashed lines are results of the analytical calculation for the population transfer. In the lower plot, blue circles, red squares and black triangles are simulation results for the atomic Stokes parameters $S_1$, $S_2$, and $S_3$, respectively. Dashed lines are the analytical predictions. The parameters we use are: maximum laser powers $P_a=P_b=271.7\mu W$, beam waist $w=300\mu m$, magnitude of the time-varying part of the bias magnetic field $\Delta B=0.368G$, and bias magnetic field average $B_0=5G$, which results in $\Omega_0=2\pi\times258.3\textrm{kHz}$. The driving frequencies are $\Omega=2\pi\times50\textrm{kHz}$ and $\omega=2\pi\times500\textrm{kHz}$. With the above parameters, $g=-0.1248$, and the geometric phase produces a Rabi-like oscillation with a period $T_{geo}\approx80\mu s$.
Using the atomic Stokes parameter values at the end of the slow driving cycle, we can calculate the $\textrm{SU}(2)$ transformation operator $U_c$. Take $t=20\mu s$ as an example. The atomic Stokes parameters take values $S_1=0.998$, $S_2=0.06315$, and $S_3=-0.0014$, which give $|c_1|^2=0.4993$, $|c_2|^2=0.5007$, and $\beta=0.063\textrm{rad}$. Therefore, the $\textrm{SU}(2)$ transformation operator at $t=20\mu s$ is calculated to be $U_c=0.7066\mathbb{1}-i(0.0445\sigma_x+0.7062\sigma_y)$, which matches the analytical prediction $U_{c}^{theory}=(\mathbb{1}-i\sigma_y)/\sqrt{2}$ that describes a $\pi/2$ rotation around the $y$-axis on the Bloch sphere.
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{non-Abelian.jpg}
\end{center}
\caption{Non-Abelian property of the evolution operators. From top to bottom: evolution of the atomic Stokes parameters $S_1$, $S_2$ and $S_3$, respectively. The evolution operators $U_1$ and $U_2$ are achieved by using the same parameters as Fig.\ref{geometric phase} and setting the relative phase between the Raman lasers to $\Phi_1=0$ and $\Phi_2=\pi/2$, respectively. The duration of each evolution operator is $\tau=20\mu s$. Different colors represent the results for different orderings of the operators, blue: $U_2U_1$ and red: $U_1U_2$. Blue circles and red squares represent simulation results from the TDSE while dashed lines are theoretical predictions. The different final values of the atomic Stokes parameters show that $[U_1,U_2]\ne0$, which proves that the geometric phase we obtain is non-Abelian.} \label{non-Abelian}
\end{figure}
After solving for the $\textrm{SU}(2)$ transformation operator, the next step is to demonstrate the non-Abelian property of the geometric gauge transformations. We consider two $\textrm{SU}(2)$ transformation operators
\begin{eqnarray}
U_1&&=\exp\{i2\pi g\sigma_y\}\approx\frac{1}{\sqrt{2}}(\mathbb{1}-i\sigma_y), \nonumber\\
U_2&&=\exp\{-i2\pi g\sigma_x\}\approx\frac{1}{\sqrt{2}}(\mathbb{1}+i\sigma_x), \nonumber
\end{eqnarray}
which are constructed by turning on the periodically driven Hamiltonian for $t=20\mu s$ and setting the relative phase parameter to be $\Phi_1=0$ and $\Phi_2=\pi/2$, respectively. All the other parameters are the same as what we used in Fig.\ref{geometric phase}. After constructing the $\textrm{SU}(2)$ transformation operators, we apply them to the initial state $|\psi(0)\rangle=|1\rangle$, one after another in different orders $U_2U_1$ and $U_1U_2$. As shown in Fig.\ref{non-Abelian}, the difference in the atomic Stokes parameters at the final time shows that the $\textrm{SU}(2)$ transformation operators that we construct do not commute, $[U_1,U_2]\ne0$, which verifies the non-Abelian property of the geometric phase.
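The non-commutativity of $U_1$ and $U_2$ can be checked with elementary matrix arithmetic. The Python sketch below uses the approximate closed forms given above, $U_1\approx(\mathbb{1}-i\sigma_y)/\sqrt{2}$ and $U_2\approx(\mathbb{1}+i\sigma_x)/\sqrt{2}$, and compares the final states $U_2U_1|1\rangle$ and $U_1U_2|1\rangle$.

```python
import math

s = 1 / math.sqrt(2)
U1 = [[s, -s], [s, s]]              # (1 - i*sigma_y)/sqrt(2)
U2 = [[s, 1j * s], [1j * s, s]]     # (1 + i*sigma_x)/sqrt(2)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(U, psi):
    return [U[0][0] * psi[0] + U[0][1] * psi[1],
            U[1][0] * psi[0] + U[1][1] * psi[1]]

psi0 = [1, 0]                       # start in |1>
psi_21 = apply(matmul(U2, U1), psi0)   # U1 first, then U2
psi_12 = apply(matmul(U1, U2), psi0)   # U2 first, then U1

# |<psi_21|psi_12>|^2 < 1 confirms the final states differ, i.e. [U1, U2] != 0.
overlap = abs(sum(a.conjugate() * b for a, b in zip(psi_21, psi_12))) ** 2
```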
The geometric phase that we obtain over one low frequency cycle depends on the parameters we choose, as long as the adiabatic condition Eqn.(\ref{adiabatic condition}) is satisfied. Therefore, we can easily change the parameters, such as the Raman laser intensities, the modulation amplitude of the magnetic field, and the periods of both the low and high frequency driving, to tune the geometric phase over a broad range of values. In both simulations, we see slight differences between the TDSE solutions and the analytical predictions. This is the joint effect of the non-zero $\frac{1}{2}(\xi_{11}-\xi_{22})$ term in Eqn.(\ref{SU(2) form of W}) and the quadratic Zeeman effect, which were ignored in the construction of the effective two-level Hamiltonian of the Raman process. The non-zero $\xi_{11}-\xi_{22}$ term introduces an additional term proportional to $|\sin{\Omega t}\cos{\omega t}|\sigma_z$ in the effective two-level Hamiltonian. The quadratic Zeeman shift introduces an additional contribution to the two-photon detuning that is proportional to $(\cos{\Omega t}\cos{\omega t})^2$, which brings in additional zeroth and second harmonic terms. The zeroth harmonic from both terms can be canceled by shifting the laser frequencies, but the higher harmonic terms will introduce additional terms in the effective Hamiltonian and affect the dynamics of the system. However, such effects do not alter our results very much because we work with a weak bias magnetic field; under the conditions that we consider in our simulations, the amplitude of the additional higher harmonic terms is $2\pi\times1.88\textrm{kHz}$, whereas $\Omega_0=2\pi\times258.3\textrm{kHz}$. Therefore, the amplitudes of both the $\frac{1}{2}(\xi_{11}-\xi_{22})$ term and the quadratic Zeeman shift are much smaller than the amplitude of the two-photon detuning $\delta$, so both terms are negligible.
\subsection{Robustness against parameter fluctuations}
\begin{figure}
\begin{center}
\includegraphics[scale=0.2]{noise.jpg}
\end{center}
\caption{Fidelity of the geometric gauge transformation versus the number of fluctuations. Each point is calculated by averaging five runs with the same number of fluctuations for 26 different initial states, and the error bars are the standard deviation of all the fidelities calculated from the 130 runs for the same number of fluctuations. The ideal $\textrm{SU}(2)$ transformation operator is $U_{c}^{ideal}=(\mathbb{1}-i\sigma_y)/\sqrt{2}$. The standard deviations of the random Gaussian noise applied to the bias magnetic field and laser powers are $5\%$ of their amplitudes; for the relative phase, the standard deviation of the Gaussian noise is $0.01\pi$. All fluctuations are distributed evenly over the pulse duration.} \label{noise}
\end{figure}
After showing the non-Abelian property of the geometric phase, it is natural to ask whether such a geometric phase is sufficiently robust against parameter fluctuations to be observed in the laboratory. We introduce random Gaussian noise to the magnitude of the bias magnetic field, the Raman laser powers, and the relative phase between the two Raman lasers. The ideal $\textrm{SU}(2)$ transformation operator takes the form $U_{c}^{ideal}=(\mathbb{1}-i\sigma_y)/\sqrt{2}$ and causes a $\pi/2$ rotation about the $y$-axis. With the parameters we considered in Fig.\ref{geometric phase}, the pulse duration is $20\mu s$. We denote the operator with noise as $U_c^{noise}$. The standard deviations of the magnetic field noise and the laser power noise are $5\%$ of their amplitudes, while for the relative phase $\phi$, the standard deviation of the Gaussian noise is $0.01\pi$. Then we start from the same initial state $|\psi_0\rangle$, vary the number of fluctuations that are evenly distributed over the $20\mu s$, and calculate the fidelity $f\equiv|\langle \psi_{ideal}|\psi_{noise}\rangle|^2$, where $|\psi_{noise}\rangle=U_c^{noise}|\psi_0\rangle$ and $|\psi_{ideal}\rangle=U_{c}^{ideal}|\psi_0\rangle$. The results are shown in Fig.\ref{noise}, where each point is the average fidelity calculated from 26 different initial states evenly distributed on the Bloch sphere with 5 runs for each state, and the error bars are the standard deviations calculated from the $5\times26=130$ data sets for each number of fluctuations. We can see that the average fidelity is always above $0.9$, which shows that the operators we constructed with noise are robust against random fluctuations for the different initial states that we consider. However, we see much smaller error bars for numbers of fluctuations higher than $200$, which correspond to high frequency fluctuations (above $10\textrm{MHz}$), than for lower frequency fluctuations (below $10\textrm{MHz}$). This is because, for low frequency fluctuations, a $5\%$ parameter fluctuation can greatly deform the contour traced by the rotating synthetic magnetic field and therefore has a greater influence on the resulting geometric phase. For higher frequency noise, the averaged fidelity is above $98\%$ and the error bars become much smaller than for the low frequency points, which indicates that the deformation of the contour in parameter space is averaged out and the geometric phase becomes robust against high frequency random fluctuations.
The fluctuations we consider here are all above $1\textrm{MHz}$. Since our pulse duration is relatively short, lower frequency fluctuations of up to several kilohertz, such as mechanical noise, can be regarded as small-amplitude long term drifts for our problem, which do not degrade the observation of our effect significantly. The high fidelity shown in Fig.\ref{noise} demonstrates that the non-Abelian geometric phase induced by the periodically driven Raman process is robust enough to be observed and therefore has the potential to be another method to control the quantum state of a cold atom system.
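A much-simplified caricature of this robustness test can be written in a few lines. The Python sketch below replaces the full noisy TDSE simulation with a chain of small random SU(2) kicks interleaved with slices of the ideal $\pi/2$ rotation; the kick magnitude, kick axes, and segment count are illustrative choices, not the paper's noise model.

```python
import math, random

def rot(axis, theta):
    """SU(2) rotation exp(-i*theta/2 * axis.sigma) for a unit axis (x, y, z)."""
    x, y, z = axis
    c, sn = math.cos(theta / 2), math.sin(theta / 2)
    return [[c - 1j * sn * z, -1j * sn * (x - 1j * y)],
            [-1j * sn * (x + 1j * y), c + 1j * sn * z]]

def matvec(U, psi):
    return [U[0][0] * psi[0] + U[0][1] * psi[1],
            U[1][0] * psi[0] + U[1][1] * psi[1]]

random.seed(0)                  # fixed seed for reproducibility
n_seg = 200                     # number of fluctuation segments (illustrative)
ideal = rot((0, 1, 0), math.pi / 2)        # ideal pi/2 rotation about y

psi_ideal = matvec(ideal, [1, 0])
psi_noisy = [1, 0]
for _ in range(n_seg):
    # each segment: a slice of the ideal rotation plus a small random kick
    psi_noisy = matvec(rot((0, 1, 0), (math.pi / 2) / n_seg), psi_noisy)
    axis = random.choice([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
    psi_noisy = matvec(rot(axis, random.gauss(0, 0.01)), psi_noisy)

fidelity = abs(sum(a.conjugate() * b for a, b in zip(psi_ideal, psi_noisy))) ** 2
```

With many small independent kicks, the accumulated rotation error grows only as the square root of the segment count, which is the qualitative reason high frequency noise averages out in the full simulation.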
\section{Conclusion and discussion}
In this paper we have proposed a possible realization of a periodically driven Hamiltonian through periodically driving a Raman process in the hyperfine ground state manifold of an alkali atom. A non-Abelian geometric phase is observed in our simulation results. By measuring the atomic Stokes parameters, we are able to determine the $\textrm{SU}(2)$ transformation operator of the cyclic evolution in Floquet space. The non-commuting property of two different $\textrm{SU}(2)$ transformation operators subject to different geometric phase factors proves the non-Abelian property of this geometric phase. For simplicity, we set only one of our parameters, $\Theta$, as time dependent. In fact, the other parameter, $\Phi$, can also be time dependent as long as the parameters form a closed loop. Based on the general theory of a spin interacting with a periodically driven magnetic field\cite{PhysRevA.100.012127,PhysRevA.95.023615}, our work extends the realization of the non-Abelian geometric phase to a pseudo-spin system with Raman coupling. We also proposed a practical experimental implementation using a dilute ultracold atomic gas interacting with Raman lasers, and we verified that the non-Abelian geometric phase effect can be observed in the laboratory even in the presence of possible parameter fluctuations.
Due to the oscillating magnetic field, an additional quadratic Zeeman term will appear in the Hamiltonian\cite{gan2018oscillating}. However, as we discussed, the effect caused by the quadratic Zeeman shift can be ignored when the field is sufficiently weak. Another issue is heating. Since we use a Raman process with large single-photon detuning, the dynamics of the system are confined to the ground state manifold, so the heating caused by spontaneous emission from the excited states is negligible. Excitation between different Floquet bands is also suppressed if we let the parameters satisfy the adiabatic condition Eqn.(\ref{adiabatic condition}). In addition, our protocol uses co-propagating Raman lasers, so there is no momentum transfer in the periodically driven Raman process; therefore the Floquet heating effect discussed in \cite{li2019floquet} is suppressed as well. Finally, in our analysis, since the duration of the evolution is much less than the typical decoherence time of ultracold Bose gases, we can ignore decoherence effects and use pure state descriptions of the system in the calculations. For the more general case, if the duration of the geometric phase pulse becomes comparable to the decoherence time, one needs to take decoherence into consideration and use density matrix methods instead.
Our simulation results show that by introducing periodically driven interactions, one can promote an Abelian physical system to a non-Abelian one under the adiabatic approximation\cite{PhysRevA.100.012127,PhysRevA.95.023615}, and that the associated geometric phase is sufficiently robust against parameter fluctuations to be detectable in the laboratory. To obtain a non-Abelian geometric phase and a non-Abelian gauge potential, one needs to work with a system with degenerate quantum levels. As discussed by Novi{\v{c}}enko\cite{PhysRevA.100.012127}, the non-Abelian geometric phase comes from neglecting the transitions between different Floquet bands, such that the states in the same Floquet band become degenerate. There are other strategies to obtain non-Abelian geometric phase effects\cite{leroux2018non,sugawa2018second} that work in the degenerate eigenbasis of a system. Such an eigenbasis consists of superposition states in the Zeeman basis. In contrast, the experimental protocol that we propose realizes a non-Abelian geometric phase effect in a Floquet basis that can be projected onto the Zeeman basis, allowing us to measure the system in a more direct way. Although we only considered a $^{87}\textrm{Rb}$ Bose gas in this work, the protocol we propose can be used in other atomic or ionic systems, both bosonic and fermionic. Therefore, our protocol has the potential to be a reliable quantum control method in ultracold atom studies.
\section{Acknowledgement}
We thank Maitreyi Jayaseelan and Elisha Haber for discussions. We also thank Gediminas Juzeli\ifmmode \bar{u}\else \={u}\fi{}nas for the useful comments on the manuscript. This work was supported by the National Science Foundation grant number PHY-1708008 and NASA/JPL RSA 1616833.
1702.00146
\section{Introduction}
Any generic closed curve in the plane can be transformed into a simple closed curve by a finite sequence of the following local operations:
\begin{itemize}\itemsep0pt
\item \EMPH{$\arc{1}{0}$}: Remove an empty loop.
\item \EMPH{$\arc{2}{0}$}: Separate two subpaths that bound an empty bigon.
\item \EMPH{$\arc{3}{3}$}: Flip an empty triangle by moving one subpath over the opposite intersection point.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{Fig/homotopy-moves-aligned}\\
\caption{Homotopy moves $\arc10$, $\arc20$, and $\arc33$.}
\label{F:homotopy}
\end{figure}
See Figure \ref{F:homotopy}. Each of these operations can be performed by continuously deforming the curve within a small neighborhood of one face; consequently, we call these operations and their inverses \EMPH{homotopy moves}. Our notation is nonstandard but mnemonic; the numbers before and after each arrow indicate the number of local vertices before and after the move. Homotopy moves are “shadows” of the classical Reidemeister moves used to manipulate knot and link diagrams~\cite{ab-tkc-26,r-ebk-27}.
We prove that $\Theta(n^{3/2})$ homotopy moves are sometimes necessary and always sufficient to simplify a closed curve in the plane with $n$ self-crossings. Before describing our results in more detail, we review several previous results.
\subsection{Past Results}
An algorithm to simplify any planar closed curve using at most $O(n^2)$ homotopy moves is implicit in Steinitz's proof that every 3-connected planar graph is the 1-skeleton of a convex polyhedron \cite{s-pr-1916,sr-vtp-34}. Specifically, Steinitz proved that any non-simple closed curve (in fact, any 4-regular plane graph) with no empty loops contains a \emph{bigon} (“Spindel”): a disk bounded by a pair of simple subpaths that cross exactly twice, where the endpoints of the (slightly extended) subpaths lie outside the disk. Steinitz then proved that any \emph{minimal} bigon (“irreduzible Spindel”) can be transformed into an empty bigon using a sequence of $\arc33$ moves, each removing one triangular face from the bigon, as shown in Figure~\ref{F:SR-spindle}. Once the bigon is empty, it can be deleted with a single $\arc20$ move. See Grünbaum~\cite{g-cp-67}, Hass and Scott \cite{hs-scs-94}, Colin de Verdière \etal~\cite{cgv-rep-96}, or Nowik~\cite{n-cpsc-09} for more modern treatments of Steinitz's technique. The $O(n^2)$ upper bound also follows from algorithms for \emph{regular} homotopy, which forbids $\biarc01$ moves, by Francis~\cite{f-frtcs-69}, Vegter~\cite{v-kfdp-89} (for polygonal curves), and Nowik~\cite{n-cpsc-09}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.175]{Fig/SR-irreducible-spindle} \\
\includegraphics[scale=0.1]{Fig/SR-spindle-side-before} \quad
\includegraphics[scale=0.1]{Fig/SR-spindle-side-after} \qquad
\includegraphics[scale=0.12]{Fig/SR-spindle-end-before} \quad
\includegraphics[scale=0.12]{Fig/SR-spindle-end-after}
\caption{Top: A minimal bigon. Bottom: $\arc33$ moves removing triangles from the side or the end of a (shaded) minimal bigon. All figures are from Steinitz and Rademacher \cite{sr-vtp-34}.}
\label{F:SR-spindle}
\end{figure}
The $O(n^2)$ upper bound can also be derived from an algorithm of Feo and Provan~\cite{fp-dtert-93} for reducing a \emph{plane} graph to a single edge by \emph{electrical transformations}: degree-1 reductions, series-parallel reductions, and $\Delta$Y-transformations. (We consider electrical transformations in more detail in Section \ref{S:electric}.) Any curve divides the plane into regions, called its \emph{faces}. The \emph{depth} of a face is its distance to the outer face in the dual graph of the curve. Call a homotopy move \emph{positive} if it decreases the sum of the face depths; in particular, every $\arc10$ and $\arc20$ move is positive. \EDIT{A key technical lemma} of Feo and Provan implies that every non-simple curve in the plane admits a positive homotopy move \cite[Theorem~1]{fp-dtert-93}. Thus, the sum of the face depths is an upper bound on the minimum number of moves required to simplify the curve. Euler's formula implies that every curve with $n$ crossings has $O(n)$ faces, and each of these faces has depth $O(n)$.
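The face-depth potential in this argument is easy to compute. The following Python sketch (illustrative, not from the paper; the small dual graph is a hypothetical example) finds the depth of each face by breadth-first search from the outer face in the dual graph and sums the depths, giving the $O(n^2)$ upper bound on the number of positive homotopy moves.

```python
from collections import deque

def face_depths(dual_adj, outer):
    """BFS distances from the outer face in the dual graph of a curve."""
    depth = {outer: 0}
    queue = deque([outer])
    while queue:
        f = queue.popleft()
        for g in dual_adj[f]:
            if g not in depth:
                depth[g] = depth[f] + 1
                queue.append(g)
    return depth

# Hypothetical dual graph of a curve with three faces: the outer face,
# a face A adjacent to it, and a face B nested inside A.
dual = {"outer": ["A"], "A": ["outer", "B"], "B": ["A"]}
depths = face_depths(dual, "outer")

# Since each positive move decreases this sum and every non-simple curve
# admits a positive move, the sum bounds the number of moves needed.
potential = sum(depths.values())
```

By Euler's formula a curve with $n$ crossings has $O(n)$ faces, each of depth $O(n)$, so this potential is $O(n^2)$.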
Gitler~\cite{g-dtaa-91} conjectured that a variant of Feo and Provan's algorithm that always makes the \emph{deepest} positive move requires only $O(n^{3/2})$ moves. Song~\cite{s-iifpd-01} observed that if Feo and Provan's algorithm always chooses the \emph{shallowest} positive move, it can be forced to make $\Omega(n^2)$ moves even when the input curve can be simplified using only $O(n)$ moves.
Tight bounds are known for two special cases where some homotopy moves are forbidden. First, Nowik~\cite{n-cpsc-09} proved a tight $\Omega(n^2)$ lower bound for regular homotopy. Second, Khovanov~\cite{k-dg-97} defined two curves to be \emph{doodle equivalent} if one can be transformed into the other using $\biarc10$ and $\biarc20$ moves. Khovanov~\cite{k-dg-97} and Ito and Takimura~\cite{it-whkp-13} independently proved that any planar curve can be transformed into its unique equivalent doodle with the smallest number of vertices, using only $\arc10$ and $\arc20$ moves. Thus, two doodle equivalent curves are connected by a sequence of $O(n)$ moves, which is obviously tight.
Looser bounds are also known for the minimum number of Reidemeister moves needed to reduce a diagram of the unknot~\cite{hn-udrqn-10,l-pubrm-15}, to separate the components of a split link~\cite{hhn-unnrm-12}, or to move between two equivalent knot diagrams~\cite{hh-msrmd-11,cl-ubrm-14}.
\subsection{New Results}
In Section \ref{S:lower-bound}, we derive an $\Omega(n^{3/2})$ lower bound using a numerical curve invariant called \emph{defect}, introduced by Arnold~\cite{a-tipcc-94, a-pctip-94} and~Aicardi~\cite{a-tc-94}. Each homotopy move changes the defect of a closed curve by at most~$2$. The lower bound therefore follows from constructions of Hayashi \etal~\cite{hh-msrmd-11,hhsy-musrm-12} and Even-Zohar \etal~\cite{ehln-irkl-14} of closed curves with defect $\Omega(n^{3/2})$. We simplify and generalize their results by computing the defect of the standard planar projection of any $p\times q$ torus knot where either $p\bmod q = 1$ or $q\bmod p = 1$. Our calculations imply that for any integer $p$, reducing the standard projection of the $p\times (p+1)$ torus knot requires at least $\smash{\binom{p+1}{3}} \ge n^{3/2}/6 - O(n)$ homotopy moves. Finally, using winding-number arguments, we prove that in the worst case, simplifying an arrangement of $k$ closed curves requires $\Omega(n^{3/2}+ nk)$ homotopy moves, with an additional $\Omega(k^2)$ term if the target configuration is specified in advance.
In Section \ref{S:electric}, we provide a proof, based on arguments of Truemper~\cite{t-drpg-89} and Noble and Welsh~\cite{nw-kg-00}, that reducing a \emph{unicursal} \EDIT{plane} graph $G$---\EDIT{one} whose medial graph is the image of a single closed curve---using \EDIT{facial} electrical transformations requires at least as many steps as reducing the medial graph of~$G$ to a simple closed curve using homotopy moves. The homotopy lower bound from Section \ref{S:lower-bound} then implies that reducing any $n$-vertex \EDIT{plane} graph with treewidth $\Omega(\sqrt{n})$ requires $\Omega(n^{3/2})$ \EDIT{facial} electrical transformations. This lower bound matches known upper bounds for rectangular and cylindrical grid graphs.
We develop a new algorithm to simplify any closed curve in $O(n^{3/2})$ homotopy moves in Section~\ref{S:upper}. First we describe an algorithm that uses $O(D)$ moves, where $D$ is the sum of the face depths of the input curve. At a high level, our algorithm can be viewed as a variant of Steinitz's algorithm that empties and removes \emph{loops} instead of bigons.
We then extend our algorithm to \emph{tangles}: collections of boundary-to-boundary paths in a closed disk. Our algorithm simplifies a tangle as much as possible in $O(D + ns)$ moves, where $D$ is the sum of the depths of the tangle's faces, $s$ is the number of paths, and~$n$ is the number of intersection points.
Then, we prove that for any curve with maximum face depth $\Omega(\sqrt{n})$, we can find a simple closed curve whose interior tangle has \EDIT{$m$ interior vertices, at most $\sqrt{m}$ paths, and maximum face depth $O(\sqrt{n})$}.
Simplifying this tangle and then recursively simplifying the resulting curve requires a total of $O(n^{3/2})$ moves.
We show that this simplifying sequence of homotopy moves can be computed in $O(1)$ amortized time per move, assuming the curve is presented in an appropriate graph data structure.
We conclude this section by proving that any arrangement of $k$ closed curves can be simplified in $O(n^{3/2} + nk)$ homotopy moves, or in $O(n^{3/2} + nk + k^2)$ homotopy moves if the target configuration is specified in advance, precisely matching our lower bounds for all values of $n$ and $k$.
Finally, in Section \ref{S:genus}, we consider curves on surfaces of higher genus. We prove that $\Omega(n^2)$ homotopy moves are required in the worst case to transform one non-contractible closed curve to another on the torus, and therefore on any orientable surface. Results of Hass and Scott \cite{hs-ics-85} imply that this lower bound is tight if the non-contractible closed curve is homotopic to a simple closed curve.
\subsection{Definitions}
A \EMPH{closed curve} in a surface $M$ is a continuous map $\gamma \colon S^1 \to M$. In this paper, we consider only \emph{generic} closed curves, which are injective except at a finite number of self-intersections, each of which is a transverse double point; closed curves satisfying these conditions are called \emph{immersions} of the circle. A~closed curve is \EMPH{simple} if it is injective. For most of the paper, we consider only closed curves in the plane; we consider more general surfaces in Section \ref{S:genus}.
The image of any non-simple closed curve has a natural structure as a 4-regular plane graph. Thus, we refer to the self-intersection points of a curve as its \EMPH{vertices}, the maximal subpaths between vertices as \EMPH{edges}, and the components of the complement of the curve as its \EMPH{faces}. Two curves $\gamma$ and $\gamma'$ are \emph{isomorphic} if their images \EDIT{define combinatorially equivalent maps}; we will not distinguish between isomorphic curves.
A \emph{corner} of $\gamma$ is the intersection of a face of $\gamma$ and a small neighborhood of a vertex of $\gamma$. A \EMPH{loop} in a closed curve $\gamma$ is a subpath of $\gamma$ that begins and ends at some vertex $x$, intersects itself only at $x$, and encloses exactly one corner at $x$. A \EMPH{bigon} in $\gamma$ consists of two simple interior-disjoint subpaths of $\gamma$ that share both endpoints and enclose exactly one corner at each of those endpoints. A loop or bigon is \EMPH{empty} if its interior does not intersect $\gamma$. Notice that a $\arc10$ move is applied to an empty loop, and a $\arc20$ move is applied to an empty bigon.
We adopt a standard sign convention for vertices first used by Gauss~\cite{g-n1gs-00}. Choose an arbitrary basepoint $\gamma(0)$ and orientation for the curve. \EDIT{For each vertex $x$,} we define $\sgn(x) = +1$ if the first traversal through the vertex crosses the second traversal from right to left, and $\sgn(x) = -1$ otherwise. See Figure~\ref{F:signs}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{Fig/vertex-signs}
\caption{Gauss's sign convention.}
\label{F:signs}
\end{figure}
A \EMPH{homotopy} between two curves $\gamma$ and $\gamma'$ in a surface $M$ is a continuous function $H\colon {S^1 \times [0,1] \to M}$ such that $H(\cdot,0) = \gamma$ and $H(\cdot,1) = \gamma'$. Any homotopy $H$ describes a continuous deformation \EDIT{from} $\gamma$ \EDIT{to}~$\gamma'$, where the second argument of $H$ is “time”. Each homotopy move can be executed by a homotopy. Conversely, Alexander's simplicial approximation theorem \cite{a-cas-26}, together with combinatorial arguments of Alexander and Briggs~\cite{ab-tkc-26} and Reidemeister~\cite{r-ebk-27}, implies that any generic homotopy between two closed curves can be decomposed into a finite sequence of homotopy moves. Two curves are \emph{homotopic}, or in the same \emph{homotopy class}, if there is a homotopy from one to the other. All closed curves in the plane are homotopic.
A \EMPH{multicurve} is an immersion of one or more disjoint circles; in particular, a \EMPH{$k$-curve} is an immersion of $k$ disjoint circles. A multicurve is \emph{simple} if it is injective, or equivalently, if it can be decomposed into pairwise disjoint simple closed curves. The image of any multicurve in the plane is the disjoint union of simple closed curves and 4-regular plane graphs. A \EMPH{component} of a multicurve $\gamma$ is any multicurve whose image is a connected component of the image of $\gamma$. We call the individual closed curves that comprise a multicurve its \EMPH{constituent \EDIT{curves}}; see Figure \ref{F:components}. The definition of homotopy and the decomposition of homotopies into homotopy moves extend naturally to multicurves.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{Fig/multicurve-example}
\caption{A multicurve with two components and three constituent curves, one of which is simple.}
\label{F:components}
\end{figure}
\section{Lower Bounds}
\label{S:lower-bound}
\subsection{Defect}
\label{SS:defect}
To prove our main lower bound, we consider a numerical invariant of closed curves in the plane introduced by Arnold~\cite{a-tipcc-94, a-pctip-94} and~Aicardi~\cite{a-tc-94} called \EMPH{defect}. Polyak~\cite{p-icfgd-98} proved that defect can be computed---or for our purposes, defined---as follows:
\[
\Defect(\gamma) \coloneqq -2 \sum_{x\between y} \sgn(x)\cdot\sgn(y).
\]
Here the sum is taken over all \emph{interleaved} pairs of vertices of $\gamma$: two vertices $x\ne y$ are interleaved, denoted \EMPH{$x\between y$}, if they alternate in cyclic order---$x$, $y$, $x$, $y$---along $\gamma$.
(The factor of $-2$ is a historical artifact, which we retain only to be consistent with Arnold's original definitions~\cite{a-tipcc-94, a-pctip-94}.)
Even though the signs of individual vertices depend on the basepoint and orientation of the curve, the defect of a curve is independent of those choices. Moreover, the defect of any curve is preserved by any homeomorphism from the plane (or the sphere) to itself, including reflection.
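To make Polyak's formula concrete, one can evaluate it from a signed Gauss code: the cyclic sequence of vertex labels visited along the curve (each label appearing exactly twice), together with a sign per vertex. The Python sketch below is our own illustration; it uses the fact that two vertices alternate in cyclic order if and only if exactly one occurrence of one label lies between the two occurrences of the other.

```python
def interleaved(seq, x, y):
    """True if vertices x and y alternate cyclically along the curve."""
    i, j = [k for k, v in enumerate(seq) if v == x]
    # x between y: exactly one of y's two occurrences lies strictly
    # between the two occurrences of x
    return sum(1 for k in range(i + 1, j) if seq[k] == y) == 1

def defect(seq, sign):
    """Polyak's formula: -2 * sum of sgn(x)*sgn(y) over interleaved pairs."""
    verts = sorted(set(seq))
    total = 0
    for a in range(len(verts)):
        for b in range(a + 1, len(verts)):
            x, y = verts[a], verts[b]
            if interleaved(seq, x, y):
                total += sign[x] * sign[y]
    return -2 * total
```

For instance, the code $[1,2,1,2]$ with both signs $+1$ yields defect $-2$, while the non-interleaved code $[1,1,2,2]$ yields $0$.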
Trivially, every simple closed curve has defect zero. Straightforward case analysis~\cite{p-icfgd-98} implies that any single homotopy move changes the defect of a curve by at most $2$; the various cases are listed below and illustrated in Figure \ref{F:defect-change}.
\vspace{3pt}
\begin{itemize}\itemsep0pt
\item A $\arc10$ move leaves the defect unchanged.
\item A $\arc20$ move decreases the defect by $2$ if the two disappearing vertices are interleaved, and leaves the defect unchanged otherwise.
\item A $\arc33$ move increases the defect by $2$ if the three vertices before the move contain an even number of interleaved pairs, and decreases the defect by $2$ otherwise.
\end{itemize}
In light of this case analysis, the following lemma is trivial:
\begin{lemma}
\label{L:defect}
Simplifying any closed curve $\gamma$ in the plane requires at least $\abs{\Defect(\gamma)}/2$ homotopy moves.
\end{lemma}
\unskip
\begin{figure}[htb]
\centering\small
\def\arraystretch{1.25}
\begin{tabular}{cc@{~}cc@{~}c}
\toprule
$1\arcto 0$ & \multicolumn{2}{c}{$2\arcto 0$} & \multicolumn{2}{c}{$3\arcto 3$}
\\ \midrule
\raisebox{-.5\height}{\includegraphics[scale=0.25]{Fig/homotopy-move-1}}
&
\raisebox{-.5\height}{\includegraphics[scale=0.25]{Fig/homotopy-move-2-}}
&
\raisebox{-.5\height}{\includegraphics[scale=0.25]{Fig/homotopy-move-2+}}
&
\raisebox{-.5\height}{\includegraphics[scale=0.25]{Fig/homotopy-move-3+}}
&
\raisebox{-.5\height}{\includegraphics[scale=0.25]{Fig/homotopy-move-3-}}
\\ \midrule
$0$ & $0$ & $-2$ & $+2$ & $+2$
\\ \bottomrule
\end{tabular}
\caption{Changes to defect incurred by homotopy moves. Numbers in each figure indicate how many pairs of vertices are interleaved; dashed lines indicate how the rest of the curve connects.}
\label{F:defect-change}
\end{figure}
\subsection{Flat Torus Knots}
\label{SS:torus-knots}
For any relatively prime positive integers $p$ and $q$, let \EMPH{$T(p,q)$} denote the curve with the following parametrization, where $\theta$ runs from~$0$ to $2\pi$:
\[
T(p,q)(\theta) \coloneqq \left((\cos (q\theta)+2) \cos(p\theta),~ (\cos (q\theta)+2) \sin(p\theta)\right).
\]
The curve $T(p,q)$ winds around the origin $p$ times, oscillates $q$ times between two concentric circles, and crosses itself exactly $(p-1)q$ times. We call these curves \EMPH{flat torus knots}.
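This parametrization translates directly into code; the following Python sketch (with our own sampling choices, for illustration only) evaluates a point of $T(p,q)$.

```python
import math

def flat_torus_knot(p, q):
    """Return the parametrization of the flat torus knot T(p,q)."""
    def point(theta):
        r = math.cos(q * theta) + 2.0   # radius oscillates between 1 and 3
        return (r * math.cos(p * theta), r * math.sin(p * theta))
    return point
```

The curve stays in the annulus $1 \le r \le 3$ and closes up at $\theta = 2\pi$, as one can confirm by sampling.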
\begin{figure}[ht]
\centering
\hfil
\includegraphics[width=1.75in]{Fig/T87}\hfil{}
\includegraphics[width=1.75in]{Fig/T78}\hfil{}
\caption{The flat torus knots $T(8,7)$ and $T(7,8)$.}
\end{figure}
Hayashi \etal~\cite[Proposition~3.1]{hhsy-musrm-12} proved that for any integer~$q$, the flat torus knot $T(q+1,q)$ has defect $-2\binom{q}{3}$. Even-Zohar \etal~\cite{ehln-irkl-14} used a star-polygon representation of the curve $T(p, 2p+1)$ as the basis for a universal model of random knots; in our notation, they proved that $\Defect(T(p, 2p+1)) = 4\binom{p+1}{3}$ for any integer $p$.
In this section we simplify and generalize both of these results to all flat torus knots $T(p,q)$ where either $q\bmod p = 1$ or $p\bmod q = 1$. \EDIT{For purposes of illustration, we cut $T(p,q)$ along a spiral path parallel to a portion of the curve, and then deform the $p$ resulting subpaths, which we call \emph{strands}, into a “flat braid” between two fixed diagonal lines. See Figure~\ref{F:flat-braid}.}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Fig/8,17-cut-braid}
\caption{Transforming $T(8,17)$ into a flat braid.}
\label{F:flat-braid}
\end{figure}
\begin{lemma}
\label{L:braid-wide}
$\Defect(T(p, ap+1)) = 2a \binom{p+1}{3}$ for all integers $a\ge 0$ and $p \ge 1$.
\end{lemma}
\begin{proof}
The curve $T(p, 1)$ can be reduced \EDIT{to a simple closed curve} using only $\arc10$ moves, so its defect is zero. \EDIT{For the rest of the proof, assume $a\ge 1$.}
\EDIT{We define a \emph{stripe} of $T(p,ap+1)$ to be a subpath from some innermost point to the next outermost point, or equivalently, a subpath of any strand from the bottom to the top in the flat braid representation. Each stripe contains exactly $p-1$ crossings. A~\emph{block} of $T(p,ap+1)$ consists of $p(p-1)$ crossings in $p$ consecutive stripes; within any block, each pair of strands intersects exactly twice.} We can reduce $T(p, ap+1)$ to $T(p, ({a-1})p+1)$ by straightening \EDIT{any block} one strand at a time. Straightening the bottom strand of \EDIT{the} block requires the following $\binom{p}{2}$ moves, as shown in Figure \ref{F:braid-wide}.
\begin{itemize}
\item
$\binom{p-1}{2}$ $\arc33$ moves pull the bottom strand downward over one intersection point of every other pair of strands. Just before each $\arc33$ move, exactly one of the three pairs of the three relevant vertices is interleaved, so each move decreases the defect by $2$.
\item
$(p-1)$ $\arc20$ moves eliminate a pair of intersection points between the bottom strand and every other strand. Each of these moves also decreases the defect by $2$.
\end{itemize}
Altogether, straightening one strand decreases the defect by $\smash{2\binom{p}{2}}$. Proceeding similarly with the other strands,
we conclude that $\Defect(T(p, ap+1)) = \Defect(T(p, {(a-1)p+1})) + 2\binom{p+1}{3}$. The lemma follows immediately by induction.
\end{proof}
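The bookkeeping in the last step of this proof is an instance of the hockey-stick identity: straightening the strands of one block one at a time removes $2\binom{p}{2} + 2\binom{p-1}{2} + \cdots + 2\binom{2}{2} = 2\binom{p+1}{3}$ from the defect. A quick Python sanity check (illustration only):

```python
from math import comb

# Straightening the bottom strand of a block with j strands remaining
# decreases the defect by 2*C(j,2); summing over j = p, p-1, ..., 2
# gives 2*C(p+1,3) by the hockey-stick identity.
for p in range(2, 60):
    assert sum(2 * comb(j, 2) for j in range(2, p + 1)) == 2 * comb(p + 1, 3)
```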
\begin{figure}[ht]
\centering
\includegraphics[scale=0.275]{Fig/braid-wide-cmyk}
\caption{Straightening one strand in a block of $T(8, 8a+1)$.}
\label{F:braid-wide}
\end{figure}
\vspace{3pt}
\begin{lemma}
\label{L:braid-deep}
$\Defect(T(aq+1, q)) = -2a \binom{q}{3}$ for all integers $a\ge 0$ and $q \ge 1$.
\end{lemma}
\begin{proof}
The curve $T(1,q)$ is simple, so its defect is trivially zero. For any positive integer $a$, we can transform $T(aq+1, q)$ into $T((a-1)q+1, q)$ by incrementally removing the innermost $q$ \emph{loops}. We can remove the first loop using $\binom{q}{2}$ homotopy moves, as shown in Figure \ref{F:braid-deep}. (The first transition in Figure~\ref{F:braid-deep} just reconnects the top left and top right endpoints of the flat braid.)
\begin{itemize}
\item
$\binom{q-1}{2}$ $\arc33$ moves pull the left side of the loop to the right, over the crossings inside the loop. Just before each $\arc33$ move, the three relevant vertices contain two interleaved pairs, so each move \emph{increases} the defect by $2$.
\item
$(q-1)$ $\arc20$ moves pull the loop over $q-1$ strands. The strands involved in each move are oriented in opposite directions, so these moves leave the defect unchanged.
\item
Finally, we can remove the loop with a single $\arc10$ move, which does not change the defect.
\end{itemize}
Altogether, removing one loop increases the defect by $\smash{2\binom{q-1}{2}}$. Proceeding similarly with the other loops,
we conclude that $\Defect(T(aq+1, q)) = \Defect(T((a-1)q+1, q)) - 2 \binom{q}{3}$. The lemma follows immediately by induction.
\end{proof}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.275]{Fig/braid-deep-simple}
\caption{Removing one loop from the innermost block of $T(7a+1, 7)$.}
\label{F:braid-deep}
\end{figure}
Either of the previous lemmas implies the following lower bound, which is also implicit in the work of Hayashi \etal~\cite{hhsy-musrm-12}.
\begin{theorem}
For every positive integer~$n$, there are closed curves with $n$ vertices whose defects are $n^{3/2}/3 - O(n)$ and $-n^{3/2}/3 + O(n)$, which therefore require at least $n^{3/2}/6 - O(n)$ homotopy moves to reduce to a simple closed curve.
\end{theorem}
\begin{proof}
The lower bound follows from the previous lemmas by setting $a=1$. If $n$ is a perfect square, then the flat torus knot $T(\sqrt{n}+1, \sqrt{n})$ has $n$ vertices and defect $\smash{-2\binom{\sqrt{n}}{3}}$. If $n$ is not a perfect square, we can achieve defect ${-2\binom{\floor{\sqrt{n}}}{3}}$ by applying $\arc01$ moves to the curve $T(\floor{\sqrt{n}}+1, \floor{\sqrt{n}})$. Similarly, we obtain an $n$-vertex curve with defect $\smash{2\binom{\floor{\sqrt{n+1}}+1}{3}}$ by adding loops to the curve $T(\floor{\sqrt{n+1}}, \floor{\sqrt{n+1}}+1)$. Lemma \ref{L:defect} now immediately implies the lower bound on homotopy moves.
\end{proof}
\subsection{Multicurves}
\label{SS:multi-lower}
Our previous results immediately imply that simplifying a multicurve with $n$ vertices requires $\Omega(n^{3/2})$ homotopy moves in the worst case; in this section we derive additional lower bounds in terms of the number of constituent curves. We distinguish between two natural variants of simplification: transforming a multicurve into an \emph{arbitrary} set of disjoint simple closed curves, or into a \emph{particular} set of disjoint simple closed curves.
Both lower bound proofs rely on the classical notion of \EMPH{winding number}. Let $\gamma$ be an arbitrary closed curve in the plane, let $p$ be any point outside the image of $\gamma$, and let~$\rho$ be any ray from $p$ to infinity that intersects~$\gamma$ transversely. The winding number of $\gamma$ around $p$, which we denote \EMPH{$\Wind(\gamma, p)$}, is the number of times~$\gamma$ crosses $\rho$ from right to left, minus the number of times $\gamma$ crosses $\rho$ from left to right. The winding number does not depend on the particular choice of ray $\rho$. All points in the same face of~$\gamma$ have the same winding number. Moreover, if there is a homotopy from one curve $γ$ to another curve $γ’$, \EDIT{where the image of any intermediate curve} does not include $p$, then $\Wind(γ, p) = \Wind(γ', p)$ \cite{h-udtse-35}.
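As an illustration, the winding number of a polygonal approximation of a closed curve around a point can be computed by summing signed turning angles, which agrees with the ray-crossing definition above. A minimal Python sketch:

```python
import math

def winding_number(poly, p):
    """Winding number of the closed polygon `poly` (a list of (x, y)
    vertices) around the point p, by summing signed turn angles."""
    total = 0.0
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i][0] - p[0], poly[i][1] - p[1]
        bx, by = poly[(i + 1) % n][0] - p[0], poly[(i + 1) % n][1] - p[1]
        # signed angle from vector a to vector b
        total += math.atan2(ax * by - ay * bx, ax * bx + ay * by)
    return round(total / (2 * math.pi))
```

A counterclockwise square winds once around any interior point, its reversal winds $-1$ times, and both wind zero times around any exterior point.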
\begin{lemma}
Transforming a $k$-curve with $n$ vertices in the plane into $k$ arbitrary disjoint circles requires $\Omega(nk)$ homotopy moves in the worst case.
\end{lemma}
\begin{proof}
For arbitrary positive integers $n$ and $k$, we construct a multicurve with $k$ disjoint constituent \EDIT{curves}, all but one of which are simple, as follows. The first $k-1$ constituent \EDIT{curves} $\gamma_1, \dots, \gamma_{k-1}$ are disjoint circles inside the open unit disk centered at the origin. (The precise configuration of these circles is unimportant.) The remaining \EDIT{constituent} curve $\gamma_o$ is a spiral winding $n+1$ times around the closed unit disk centered at the origin, plus a line segment connecting the endpoints of the spiral; $\gamma_o$ is the simplest possible curve with winding number $n+1$ around the origin. Let $\gamma$ be the disjoint union of these $k$ curves; we claim that $\Omega(nk)$ homotopy moves are required to simplify $\gamma$. See Figure \ref{F:winding}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Fig/winding-target}
\caption{Simplifying this multicurve requires $\Omega(nk)$ homotopy moves.}
\label{F:winding}
\end{figure}
Consider the faces of the outer curve $\gamma_o$ during any homotopy of $\gamma$. Adjacent faces of $\gamma_o$ have winding numbers that differ by $1$, and the outer face has winding number $0$. Thus, for any non-negative integer~$w$, as long as the maximum absolute winding number $\max_p \Abs{\Wind(\gamma_o,p)}$ is at least $w$, the curve $\gamma_o$ has at least $w+1$ faces (including the outer face) and therefore at least $w-1$ vertices, by Euler's formula. On the other hand, if any curve~$\gamma_i$ intersects a face of $\gamma_o$, no homotopy move can remove that face \EDIT{until the intersection between $\gamma_i$ and $\gamma_o$ is removed}. Thus, before the simplification of $\gamma_o$ is complete, each curve~$\gamma_i$ must intersect only faces with winding number $0$, $1$, or $-1$.
For each index $i$, let $w_i$ denote the maximum absolute winding number of $\gamma_o$ around any point of~$\gamma_i$:
\[
w_i \coloneqq \max_\theta \Abs{\Wind\left(\gamma_o,\strut \gamma_i(\theta)\right)}.
\]
Let $W = \sum_i w_i$. Initially, $W = (k-1)(n+1)$, and when $\gamma_o$ first becomes simple, we must have $W \le k-1$. Each homotopy move changes $W$ by at most $1$; specifically, at most one term $w_i$ changes at all, and that term either increases or decreases by $1$. The $\Omega(nk)$ lower bound now follows immediately.
\end{proof}
\begin{theorem}
\label{Th:multi-lower}
Transforming a $k$-curve with $n$ vertices in the plane into an arbitrary set of $k$ simple closed curves requires $\Omega(n^{3/2} + nk)$ homotopy moves in the worst case.
\end{theorem}
We say that a collection of $k$ disjoint simple closed curves is \EMPH{nested} if some point lies in the interior of every curve, and \EMPH{unnested} if the curves have disjoint interiors.
\begin{lemma}
Transforming $k$ nested circles in the plane into $k$ unnested circles requires $\Omega(k^2)$ homotopy moves.
\end{lemma}
\begin{proof}
Let $\gamma$ and $\gamma'$ be two nested circles, with $\gamma'$ in the interior of~$\gamma$ and with $\gamma$ directed counterclockwise. Suppose we apply an arbitrary homotopy to these two curves. If the curves remain disjoint during the entire homotopy, then $\gamma'$ always lies inside a face of $\gamma$ with winding number~$1$; in short, the two curves remain nested.
Thus, any sequence of homotopy moves that takes $\gamma$ and $\gamma'$ to two non-nested simple closed curves contains at least one $\arc{0}{2}$ move that makes the curves cross (and symmetrically at least one $\arc{2}{0}$ move that makes them disjoint again).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Fig/unnesting}
\caption{Nesting or unnesting $k$ circles requires $\Omega(k^2)$ homotopy moves.}
\end{figure}
Consider a set of $k$ nested circles. Each of the $\smash{\binom{k}{2}}$ pairs of circles requires at least one $\arc{0}{2}$ move and one $\arc{2}{0}$ move to unnest. Because these moves involve distinct pairs of curves, at least $\smash{\binom{k}{2}}$ $\arc{0}{2}$ moves and $\smash{\binom{k}{2}}$ $\arc{2}{0}$ moves, and thus at least $k^2-k$ moves altogether, are required to unnest every pair.
\end{proof}
\begin{theorem}
\label{Th:multi-lower2}
Transforming a $k$-curve with $n$ vertices in the plane into $k$ nested (or unnested) circles requires $\Omega(n^{3/2} + nk + k^2)$ homotopy moves in the worst case.
\end{theorem}
\begin{corollary}
\label{C:multi-lower3}
Transforming one $k$-curve with at most $n$ vertices into another $k$-curve with at most $n$ vertices requires $\Omega(n^{3/2} + nk + k^2)$ homotopy moves in the worst case.
\end{corollary}
Although our lower bound examples consist of disjoint curves, all of these lower bounds apply without modification to \emph{connected} multicurves, because any $k$-curve can be connected with at most $k-1$ $\arc{0}{2}$ moves. On the other hand, any connected $k$-curve has at least $2k-2$ vertices, so the $\Omega(k^2)$ terms in Theorem~\ref{Th:multi-lower2} and Corollary \ref{C:multi-lower3} are redundant.
\section{Electrical Transformations}
\label{S:electric}
Now we consider a related set of local operations on plane graphs, called \EMPH{\EDIT{facial} electrical transformations}, consisting of six operations in three dual pairs, as shown in Figure \ref{F:elec-dual}.
\begin{itemize}\itemsep0pt
\item
\emph{degree-$1$ reduction}: Contract the edge incident to a vertex of degree $1$, or delete the edge incident to a face of degree $1$.
\item
\emph{series-parallel reduction}: Contract either edge incident to a vertex of degree $2$, or delete either edge incident to a face of degree $2$.
\item
\emph{$\Delta Y$ transformation}: Delete a vertex of degree 3 and connect its neighbors with three new edges, or delete the edges bounding a face of degree 3 and join the vertices of that face to a new vertex.
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[width=0.85\textwidth]{Fig/graph-moves-2}
\caption{Facial electrical transformations in a plane graph $G$ and its dual graph $G^*$.}
\label{F:elec-dual}
\end{figure}
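To illustrate how four of the six operations act (the $\Delta Y$ pair, and the planar embedding itself, are deliberately omitted from this sketch), the following Python fragment greedily applies loop deletions, parallel merges, degree-$1$ reductions, and series reductions to an abstract multigraph:

```python
from collections import defaultdict

def reduce_graph(edges):
    """Greedily apply loop deletions, parallel merges, degree-1
    reductions, and series reductions to an abstract multigraph,
    given as a list of (u, v) pairs.  Returns the irreducible rest."""
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        # face of degree 1: delete a self-loop
        for i, (u, v) in enumerate(edges):
            if u == v:
                edges.pop(i)
                changed = True
                break
        if changed:
            continue
        # face of degree 2: merge a pair of parallel edges
        seen = set()
        for i, (u, v) in enumerate(edges):
            key = frozenset((u, v))
            if key in seen:
                edges.pop(i)
                changed = True
                break
            seen.add(key)
        if changed:
            continue
        # vertex of degree 1 or 2: degree-1 or series reduction
        incident = defaultdict(list)
        for i, (u, v) in enumerate(edges):
            incident[u].append(i)
            incident[v].append(i)
        for x, inc in incident.items():
            if len(inc) == 1:
                edges.pop(inc[0])
                changed = True
                break
            if len(inc) == 2:
                i, j = inc
                a = edges[i][0] if edges[i][1] == x else edges[i][1]
                b = edges[j][0] if edges[j][1] == x else edges[j][1]
                for idx in sorted(inc, reverse=True):
                    edges.pop(idx)
                edges.append((a, b))  # series reduction
                changed = True
                break
    return edges
```

A $4$-cycle collapses to a single vertex (an empty edge list), while $K_4$ is left untouched: every vertex of $K_4$ has degree $3$, so reducing it genuinely requires a $\Delta Y$ transformation.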
\EDIT{Electrical transformations are usually defined more generally as a set of operations performed on abstract graphs, which} have been used since the end of the 19th century~\cite{k-etscn-1899,r-md-1904} to analyze resistor networks and other electrical circuits, but have since been applied to a number of other combinatorial problems on planar graphs, including shortest paths and maximum flows~\cite{a-wtns-60}; multicommodity flows~\cite{f-erpns-85}; and counting spanning trees, perfect matchings, and cuts~\cite{cpv-nastc-95}. We refer to our earlier preprint \cite[Section~1.1]{defect} for a more detailed history and an expanded list of applications. \EDIT{However, all the algorithms we describe below reduce any plane graph to a single vertex using only \emph{facial} electrical transformations as defined above.}
In light of these applications, it is natural to ask \emph{how many} \EDIT{facial} electrical transformations are required in the worst case.
\EDIT{The earliest algorithm for reducing a plane graph to a single vertex already follows from Steinitz's bigon-reduction argument, which we described in the introduction~\cite{s-pr-1916,sr-vtp-34}. Steinitz reduced local transformations of \EDIT{plane} \emph{graphs} to local transformations of planar \emph{curves} by defining \emph{medial graphs} (“$\Theta$-Prozess”), which we consider in detail below. Later algorithms were given by Truemper \cite{t-drpg-89}, Feo and Provan \cite{fp-dtert-93}, and others. Both Steinitz’s algorithm and Feo and Provan’s algorithm require at most $O(n^2)$ facial electrical transformations; this is the best upper bound known.}
Even the special case of regular grids is open and interesting. Truemper~\cite{t-drpg-89,t-md-92} describes a method to reduce the $p\times p$ grid in $O(p^3)$ steps. Nakahara and Takahashi~\cite{nt-aafts-96} prove an upper bound of $O(\min\set{pq^2, p^2q})$ for the $p\times q$ cylindrical grid. Because every $n$-vertex \EDIT{plane} graph is a minor of an $O(n)\times O(n)$ grid~\cite{v-ucvc-81,s-mncpe-84}, both of these results imply an $O(n^3)$ upper bound for arbitrary plane graphs; see Lemma \ref{L:smoothing}. Feo and Provan~\cite{fp-dtert-93} claim without proof that Truemper's algorithm actually performs only $O(n^2)$ electrical transformations. On the other hand, the smallest (cylindrical) grid containing every $n$-vertex plane graph as a minor has size $Ω(n) \times Ω(n)$~\cite{v-ucvc-81}. Archdeacon \etal~\cite{acgp-frpwg-00} asked whether the $O(n^{3/2})$ upper bound for square grids can be improved to near-linear:
\begin{quote}\small
It is possible that a careful implementation and analysis of the grid-embedding schemes can lead to an $O(n\sqrt{n})$-time algorithm for the general planar case. It would be interesting to obtain a near-linear algorithm for the grid\dots. However, it may well be that reducing planar grids is $Ω(n\sqrt{n})$.
\end{quote}
\EDIT{Most of these earlier algorithms actually solve a more difficult problem, first considered by Akers~\cite{a-wtns-60} and Lehman~\cite{l-wtpn-63} and later solved by Epifanov \cite{e-rpges-66}, of reducing a planar graph with two special vertices called \emph{terminals} to a single edge between the two terminals. In this context, any electrical transformation that contracts an edge incident to a terminal is forbidden. Unfortunately, not every two-terminal plane graph can be reduced to a single edge using only facial electrical transformations; Figure~\ref{F:bullseye} shows two examples. However, it is sufficient to allow loop reductions, parallel reductions, and $\Delta Y$ transformations to be performed on faces that contain a terminal vertex of degree $1$ (and nothing else) \cite{fp-dtert-93}. It is an open question whether our lower bound still holds if these additional non-facial transformations are allowed.}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Fig/spine8}
\caption{Plane graphs with two terminals that cannot be further reduced using only facial electrical transformations.}
\label{F:bullseye}
\end{figure}
\subsection{Definitions}
The \EMPH{medial graph} of a plane graph $G$, which we denote \EMPH{$G^\times$}, is another plane graph whose vertices correspond to the edges of $G$ and whose edges correspond to incidences (with multiplicity) between vertices of $G$ and faces of~$G$. Two vertices of $G^\times$ are connected by an edge if and only if the corresponding edges in~$G$ are consecutive in cyclic order around some vertex, or equivalently, around some face in~$G$. Every vertex in every medial graph has degree $4$; thus, every medial graph is the image of a multicurve. The medial graphs of any plane graph~$G$ and its dual $G^*$ are identical.
To avoid trivial boundary cases, we define the medial graph of an isolated vertex to be a circle.
\EDIT{Facial} electrical transformations in any plane graph $G$ correspond to local transformations in the medial graph $G^\times$ that are almost identical to homotopy moves. Each degree-$1$ reduction in $G$ corresponds to a $\arc10$ homotopy move in $G^\times$, and each $\Delta$Y transformation in~$G$ corresponds to a $\arc33$ homotopy move in~$G^\times$. A series-parallel reduction in $G$ contracts an empty bigon in $G^\times$ to a single vertex. Extending our earlier notation, we call this transformation a \EMPH{$\arc21$} move. We collectively refer to these transformations and their inverses as \EMPH{medial electrical moves}; see Figure~\ref{F:medial-elec}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{Fig/graph-moves-3}\\
\caption{Medial electrical moves $\arc10$, $\arc21$, and $\arc33$.}
\label{F:medial-elec}
\end{figure}
\EMPH{Smoothing} a \EDIT{multicurve} $γ$ at a vertex $x$ means replacing the intersection of $γ$ with a small neighborhood of $x$ with two disjoint simple paths, so that the result is another \EDIT{multicurve}. (There are two possible smoothings at each vertex; see Figure \ref{F:smoothing}.) A \EMPH{smoothing} of $γ$ is any graph obtained by smoothing zero or more vertices of $γ$, and a \EMPH{proper smoothing} of $γ$ is any smoothing other than $\gamma$ itself. For any plane graph $G$, the (proper) smoothings of the medial graph $G^\times$ are precisely the medial graphs of (proper) minors of $G$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Fig/smoothing}
\caption{Smoothing a vertex.}
\label{F:smoothing}
\end{figure}
\subsection{Electrical to Homotopy}
The main result of this section is that the number of \emph{homotopy} moves required to simplify a closed curve is a lower bound on the number of \emph{medial electrical moves} required to simplify the same closed curve. This result is already implicit in the work of Noble and Welsh~\cite{nw-kg-00}, and most of our proofs closely follow theirs. We include the proofs here to make the inequalities explicit and to keep the paper self-contained.
For any connected multicurve (or 4-regular \EDIT{plane} graph) $\gamma$, let \EMPH{$X(γ)$} denote the minimum number of medial electrical moves required to reduce $γ$ to a simple closed curve, and let \EMPH{$H(γ)$} denote the minimum number of homotopy moves required to reduce $γ$ to an arbitrary collection of disjoint simple closed curves.
The following key lemma follows from a close reading of proofs by Truemper~\cite[Lemma~4]{t-drpg-89} and several others~\cite{g-dtaa-91,nt-aafts-96,acgp-frpwg-00,nw-kg-00} that every minor of a ΔY-reducible graph is also ΔY-reducible. Our proof most closely resembles an argument of Gitler~\cite[Lemma~2.3.3]{g-dtaa-91}, but restated in terms of medial electrical moves to simplify the case analysis.
\begin{lemma}
\label{L:smoothing}
For any connected plane graph $G$, reducing any connected proper minor of $G$ to a single vertex requires strictly fewer \EDIT{facial} electrical transformations than reducing $G$ to a single vertex.
Equivalently, $X(\overline{γ}) < X(γ)$ for every connected proper smoothing $\overline{γ}$ of every connected multicurve $γ$.
\end{lemma}
\begin{proof}
Let $γ$ be a connected multicurve, and let $\overline{γ}$ be a connected proper smoothing of $γ$. If $γ$ is already simple, the lemma is vacuously true. Otherwise, the proof proceeds by induction on $X(γ)$.
We first consider the special case where $\overline{γ}$ is obtained from $γ$ by smoothing a single vertex~$x$. Let $Σ$ be a minimum-length sequence of medial electrical moves that reduces $γ$ \EDIT{to a simple closed curve}, and let~$γ'$ be the result of the first move in $Σ$. We immediately have $X(γ) = X(γ')+1$. There are two nontrivial cases to consider.
First, suppose the move from $γ$ to $γ'$ does not involve the smoothed vertex $x$. Then we can apply the same move to $\overline{γ}$ to obtain a new graph $\overline{γ}'$; the same graph can also be obtained from $γ'$ by smoothing~$x$. We immediately have $X(\overline{γ}) \le X(\overline{γ}') + 1$, and the inductive hypothesis implies $X(\overline{γ}')+1 < X(γ')+1 = X(γ)$.
Now suppose the first move in $Σ$ does involve $x$. In this case, we can apply at most one medial electrical move to $\overline{γ}$ to obtain a (possibly trivial) smoothing $\overline{γ}'$ of $γ'$. There are eight subcases to consider, shown in Figure~\ref{F:smooth-moves}. One subcase for the $\arc10$ move is impossible, because~$\overline{γ}$ is connected. In the remaining $\arc10$ subcase and one $\arc21$ subcase, the curves $\overline{γ}$, $\overline{γ}'$, and $γ'$ are all isomorphic, which implies $X(\overline{γ}) = X(\overline{γ}') = X(γ') = X(γ)-1$. In all remaining subcases, $\overline{γ}'$ is a connected proper smoothing of $γ'$, so the inductive hypothesis implies $X(\overline{γ}) ≤ X(\overline{γ}')+1 < X(γ')+1 = X(γ)$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Fig/smooth-move-2}
\caption{Cases for the proof of Lemma~\ref{L:smoothing}; the circled vertex is $x$.}
\label{F:smooth-moves}
\end{figure}
Finally, we consider the more general case where $\overline{γ}$ is obtained from $γ$ by smoothing more than one vertex. Let $\widetilde{γ}$ be any intermediate curve, obtained from~$γ$ by smoothing just one of the vertices that were smoothed to obtain $\overline{γ}$. As $\overline{γ}$ is a connected smoothing of $\widetilde{γ}$, the curve $\widetilde{γ}$ itself must be connected too.
Our earlier argument implies that $X(\widetilde{γ}) < X(γ)$. Thus, the inductive hypothesis implies $X(\overline{γ}) < X(\widetilde{γ})$, which completes the proof.
\end{proof}
\begin{lemma}
\label{L:monotonicity}
For every connected multicurve $γ$, there is a minimum-length sequence of medial electrical moves that reduces $γ$ \EDIT{to a simple closed curve} and that does not contain $\arc01$ or $\arc12$ moves.
\end{lemma}
\begin{proof}
Our proof follows an argument of Noble and Welsh~\cite[Lemma~3.2]{nw-kg-00}.
Consider a minimum-length sequence of medial electrical moves that reduces an arbitrary connected multicurve $γ$ \EDIT{to a simple closed curve}. For any integer $i ≥ 0$, let $γ_i$ denote the result of the first $i$ moves in this sequence; in particular, $γ_0 = γ$ and $γ_{X(γ)}$ is a \EDIT{simple closed curve}. Minimality of the reduction sequence implies that $X(γ_i) = X(γ)-i$ for all $i$. Suppose some move in the sequence is a $\arc01$ or $\arc12$ move; that is, suppose $γ_i$ has one more vertex than $γ_{i-1}$ for some index $i$. Then $γ_{i-1}$ is a connected proper smoothing of $γ_i$, so Lemma~\ref{L:smoothing} implies that $X(γ_{i-1}) < X(γ_i)$, contradicting the equality $X(γ_{i-1}) = X(γ_i) + 1$.
\end{proof}
\begin{lemma}
\label{L:homotopy}
$X(γ) ≥ H(γ)$ for every closed curve $γ$.
\end{lemma}
\begin{proof}
The proof proceeds by induction on $X(γ)$, following an argument of Noble and Welsh~\cite[Proposition 3.3]{nw-kg-00}.
Let $γ$ be a closed curve. If $X(γ) = 0$, then $γ$ is already simple, so $H(γ) = 0$. Otherwise, let $Σ$ be a minimum-length sequence of medial electrical moves that reduces $γ$ to a \EDIT{simple closed curve}. Lemma~\ref{L:monotonicity} implies that we can assume that the first move in $Σ$ is neither $\arc01$ nor $\arc12$. If the first move is $\arc10$ or $\arc33$, the lemma immediately follows by induction.
The only interesting first move is $\arc21$. Let $γ'$ be the result of this $\arc21$ move, and let $\overline{γ}$ be the result of the corresponding $\arc20$ homotopy move. The minimality of $Σ$ implies that $X(γ) = X(γ')+1$, and we trivially have $H(γ) \le H(\overline{γ}) + 1$. Because $γ$ is a single closed curve, $\overline{γ}$ \EDIT{is also a single curve and is therefore} connected. The curve $\overline{γ}$ is also a proper smoothing of $γ'$, so Lemma~\ref{L:smoothing} implies $X(\overline{γ}) < X(γ') < X(γ)$. Finally, the inductive hypothesis implies that $X(\overline{γ}) \ge H(\overline{γ})$, which completes the proof.
\end{proof}
We call a plane graph $G$ \EMPH{unicursal} if its medial graph $G^\times$ is the image of a single closed curve.
\begin{theorem}
\label{Th:lowerbound}
For every connected \EDIT{plane} graph $G$ and every unicursal minor~$H$ of $G$, reducing $G$ to a single vertex requires at least $\abs{\Defect(H^\times)}/2$ \EDIT{facial} electrical transformations.
\end{theorem}
\begin{proof}
Either $H$ equals $G$, or Lemma~\ref{L:smoothing} implies that reducing a proper minor $H$ of $G$ to a single vertex requires strictly fewer \EDIT{facial} electrical transformations than reducing $G$ to a single vertex. Note that \EDIT{facial} electrical transformations performed on $H$ correspond precisely to medial electrical moves performed on $H^\times$. Now because $γ \coloneqq H^\times$ is unicursal, Lemma~\ref{L:defect} and Lemma~\ref{L:homotopy} imply that $X(γ) ≥ H(γ) ≥ \abs{\Defect(\gamma)}/2$.
\end{proof}
\subsection{Cylindrical Grids}
Finally, we derive explicit lower bounds for the number of \EDIT{facial} electrical transformations required to reduce any cylindrical grid to a single vertex. For any positive integers $p$ and $q$, we define two cylindrical grid graphs; see Figure \ref{F:cylinders}.
\begin{itemize}
\item
\EMPH{$C(p,q)$} is the Cartesian product of a cycle of length $q$ and a path of length $p-1$. If $q$ is odd, then the medial graph of $C(p,q)$ is the flat torus knot $T(2p, q)$.
\item
\EMPH{$C'(p,q)$} is obtained by connecting a new vertex to the vertices of one of the $q$-gonal faces of $C(p,q)$, or equivalently, by contracting one of the $q$-gonal faces of $C(p+1,q)$ to a single vertex. If~$q$ is even, then the medial graph of $C'(p,q)$ is the flat torus knot $T(2p+1, q)$.
\end{itemize}
\unskip
\begin{figure}[ht]
\centering
\hfil
\includegraphics[width=1.75in]{Fig/C47}\hfil
\includegraphics[width=1.75in]{Fig/Cprime38}\hfil{}
\caption{The cylindrical grid graphs $C(4,7)$ and $C'(3,8)$ and (in light gray) their medial graphs $T(8,7)$ and $T(7,8)$.}
\label{F:cylinders}
\end{figure}
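As a concrete illustration (not part of the paper's argument), the grid $C(p,q)$ can be built directly from its definition as a Cartesian product; the vertex and edge counts $pq$ and $pq + q(p-1)$ follow immediately. The adjacency-list encoding below is a hypothetical representation chosen for the sketch.

```python
def cylindrical_grid(p, q):
    """Adjacency lists of C(p,q): the Cartesian product of a cycle of
    length q and a path of length p-1. Vertices are pairs (ring, pos)."""
    adj = {(i, j): [] for i in range(p) for j in range(q)}
    for i in range(p):
        for j in range(q):
            # cycle edge within ring i
            adj[(i, j)].append((i, (j + 1) % q))
            adj[(i, (j + 1) % q)].append((i, j))
            # path edge between consecutive rings
            if i + 1 < p:
                adj[(i, j)].append((i + 1, j))
                adj[(i + 1, j)].append((i, j))
    return adj

adj = cylindrical_grid(4, 7)
assert len(adj) == 4 * 7                                   # pq vertices
assert sum(len(v) for v in adj.values()) == 2 * (28 + 21)  # pq + q(p-1) edges
```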
\begin{corollary}
\label{C:cylindrical-grid}
For all positive integers $p$ and $q$, the cylindrical grid $C(p,q)$ requires $Ω(\min\set{p^2 q, p q^2})$ \EDIT{facial} electrical transformations to reduce to a single vertex.
\end{corollary}
\begin{proof}
First suppose $p \le q$. Because $C(p-1,q)$ is a minor of $C(p,q)$, we can assume without loss of generality that $p$ is even and $p<q$. Let $H$ denote the cylindrical grid $C(p/2, ap+1)$, where $a \coloneqq \floor{(q-1)/p} \ge 1$. $H$~is a minor of $C(p, q)$ (because $ap+1 \le q$), and the medial graph of $H$ is the flat torus knot $T(p, ap+1)$. Lemma~\ref{L:braid-wide} implies
\[
\Defect\!\left(T(p,\strut ap+1)\right) = 2a\binom{p+1}{3} = Ω(ap^3) = Ω(p^2q).
\]
Theorem~\ref{Th:lowerbound} now implies that reducing $C(p,q)$ requires at least $\Omega(p^2 q)$ \EDIT{facial} electrical transformations.
The symmetric case $p > q$ is similar. We can assume without loss of generality that $q$ is odd. Let $H$ denote the cylindrical grid $C'(aq, q)$, where $a \coloneqq \floor{(p-1)/q} \ge 1$. $H$ is a proper minor of $C(p, q)$ (because $aq<p$), and the medial graph of $H$ is the flat torus knot $T(2aq+1, q)$. Lemma~\ref{L:braid-deep} implies
\[
\Abs{ \Defect\!\left(T(2aq+1,\strut q)\right)}
= 4a\binom{q}{3} = Ω(aq^3) = Ω(pq^2).
\]
Theorem~\ref{Th:lowerbound} now implies that reducing $C(p,q)$ requires at least $\Omega(p q^2)$ \EDIT{facial} electrical transformations.
\end{proof}
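The two asymptotic bounds in the proof can be spot-checked numerically (a sanity check, not part of the proof): taking the defect formulas from Lemma~\ref{L:braid-wide} and Lemma~\ref{L:braid-deep} as given, they dominate $p^2q$ and $pq^2$ respectively up to a small constant factor. The constant $1/24$ below is an illustrative choice, not a claim from the text.

```python
from math import comb

# First case of the proof: p even, p < q, and a = floor((q-1)/p).
for p in range(2, 60, 2):
    for q in range(p + 1, 80):
        a = (q - 1) // p
        assert 24 * 2 * a * comb(p + 1, 3) >= p * p * q

# Second case: q odd, q < p, and a = floor((p-1)/q).
for q in range(3, 60, 2):
    for p in range(q + 1, 80):
        a = (p - 1) // q
        assert 24 * 4 * a * comb(q, 3) >= p * q * q
```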
In particular, reducing any $\Theta(\sqrt{n})\times\Theta(\sqrt{n})$ cylindrical grid requires at least $\Omega(n^{3/2})$ \EDIT{facial} electrical transformations. Our lower bound matches an $O(\min\set{pq^2, p^2q})$ upper bound by Nakahara and Takahashi~\cite{nt-aafts-96}. Because every $p\times q$ rectangular grid contains $C(\floor{p/3}, \floor{q/3})$ as a minor, the same $Ω(\min\set{p^2 q, p q^2})$ lower bound applies to rectangular grids. In particular, Truemper's $O(p^3) = O(n^{3/2})$ upper bound for the $p\times p$ square grid~\cite{t-drpg-89} is tight. Finally, because every \EDIT{plane} graph with treewidth~$t$ contains an $Ω(t)\times Ω(t)$ grid minor~\cite{rst-qepg-94}, reducing \emph{any} $n$-vertex \EDIT{plane} graph with treewidth~$t$ requires at least $Ω(t^3 + n)$ \EDIT{facial} electrical transformations.
Like Gitler~\cite{g-dtaa-91}, Feo and Provan~\cite{fp-dtert-93}, and Archdeacon \etal~\cite{acgp-frpwg-00}, we conjecture that any $n$-vertex \EDIT{plane} graph can be reduced \EDIT{to a vertex} using $O(n^{3/2})$ \EDIT{facial} electrical transformations. More ambitiously, we conjecture an upper bound of $O(nt)$ for any $n$-vertex \EDIT{plane} graph with treewidth $t$.
\section{Upper Bound}
\label{S:upper}
For any point $p$, let \EMPH{$\Depth(p, \gamma)$} denote the minimum number of times a path from~$p$ to infinity crosses~$\gamma$. Any two points in the same face of $\gamma$ have the same depth, so each face~$f$ has a well-defined depth, which is its distance to the outer face in the dual graph of~$\gamma$; see Figure~\ref{F:curve-depth-contract}. The depth of the curve, denoted \EMPH{$\Depth(\gamma)$}, is the maximum depth of the faces of~$\gamma$; and the \EMPH{potential $D(\gamma)$} is the sum of the depths of the faces. Euler's formula implies that any 4-regular \EDIT{plane} graph with $n$ vertices has exactly $n+2$ faces; thus, for any curve $\gamma$ with $n$ vertices, we have $n+1 \le D(\gamma) \le (n+1)\cdot \Depth(\gamma)$.
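Computing all face depths amounts to a single breadth-first search in the dual graph. The following minimal sketch assumes a hypothetical adjacency-list encoding `dual_adj` of the dual graph; it is an illustration, not the paper's implementation.

```python
from collections import deque

def face_depths(dual_adj, outer):
    """BFS in the dual graph: the depth of a face is its distance
    to the outer face."""
    depth = {outer: 0}
    queue = deque([outer])
    while queue:
        f = queue.popleft()
        for g in dual_adj[f]:
            if g not in depth:
                depth[g] = depth[f] + 1
                queue.append(g)
    return depth

# Example: a figure-eight curve has n = 1 vertex and n + 2 = 3 faces:
# the outer face o and two inner faces a, b, each sharing an edge with o.
depth = face_depths({'o': ['a', 'b'], 'a': ['o'], 'b': ['o']}, 'o')
potential = sum(depth.values())   # D(gamma) = 2, matching n + 1 <= D(gamma)
```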
\subsection{Contracting Simple Loops}
\begin{lemma}
\label{L:contract}
Every closed curve $\gamma$ in the plane can be simplified using at most $3D(\gamma) - 3$ homotopy moves.
\end{lemma}
\begin{proof}
\EDIT{We prove the statement by induction on the number of vertices in $\gamma$.}
The lemma is trivial if $\gamma$ is already simple, so assume otherwise.
Let $x \coloneqq \gamma(\theta) = \gamma(\theta')$ be the first vertex to be visited twice by $\gamma$ after the (arbitrarily chosen) basepoint $\gamma(0)$.
Let~$\alpha$ denote the subcurve of~$\gamma$ from $\gamma(\theta)$ to $\gamma(\theta')$; our choice of $x$ implies that $\alpha$ is a simple loop. Let $m$ and $s$ denote the number of vertices and maximal subpaths of $\gamma$ in the interior of $\alpha$ respectively.
Finally, let $\gamma'$ denote the closed curve obtained from $\gamma$ by removing $\alpha$. The first stage of our algorithm transforms~$\gamma$ into $\gamma'$ by contracting the loop $\alpha$ via homotopy moves.
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{Fig/shrink-loop}
\caption{Transforming $\gamma$ into $\gamma'$ by contracting a simple loop. Numbers are face depths.}
\label{F:curve-depth-contract}
\end{figure}
We remove the vertices and edges from the interior of $\alpha$ one at a time as follows \EDIT{(see Figure \ref{F:move-strand})}.
If we can perform a $\arc20$ move to remove one edge of $\gamma$ from the interior of $\alpha$ and decrease $s$, we do so. Otherwise, either $\alpha$ is empty, or some vertex of $\gamma$ lies inside $\alpha$. In the latter case, at least one vertex $y$ inside~$\alpha$ has a neighbor that lies on $\alpha$. We move $y$ outside $\alpha$ with a $\arc02$ move (which increases $s$ \EDIT{by $1$}) followed by a $\arc33$ move \EDIT{(which decreases $m$ by $1$)}.
Once $\alpha$ is an empty loop, we remove it with a single $\arc10$ move. Altogether, our algorithm transforms~$\gamma$ into~$\gamma'$ using at most $3m + s + 1$ homotopy moves. Let $M$ denote the actual number of homotopy moves used.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Fig/move-strand-2}
\caption{Moving a loop over an interior empty bigon or an interior vertex. }
\label{F:move-strand}
\end{figure}
Euler's formula implies that $\alpha$ contains exactly \EDIT{$m+s+1$} faces of $\gamma$. The Jordan curve theorem implies that $\Depth(p, \gamma') \le \Depth(p, \gamma)-1$ for any point $p$ inside $\alpha$, and trivially $\Depth(p, \gamma') \le \Depth(p, \gamma)$ for any point $p$ outside $\alpha$. It follows that $D(\gamma') \le D(\gamma) - (\EDIT{m+s+1}) \le D(\gamma) - M/3$, and therefore $M \le 3D(\gamma) - 3 D(\gamma')$. The induction hypothesis implies that we can recursively simplify $\gamma'$ using at most $3D(\gamma') - 3$ moves. The lemma now follows immediately.
\end{proof}
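The charging in the proof above reduces to one inequality: contracting a loop with $m$ interior vertices and $s$ maximal interior subpaths costs at most $3m+s+1$ moves while destroying $m+s+1$ faces. The following trivial numeric restatement (a sanity check, not part of the proof) makes the 3-to-1 charge explicit.

```python
def loop_contraction_budget(m, s):
    """Worst-case homotopy moves to contract a simple loop with m interior
    vertices and s maximal interior subpaths (from the proof above)."""
    return 3 * m + s + 1

def faces_inside(m, s):
    """Number of faces strictly inside the loop, by Euler's formula."""
    return m + s + 1

# The cost is at most three times the potential drop, for every m, s >= 0.
assert all(loop_contraction_budget(m, s) <= 3 * faces_inside(m, s)
           for m in range(100) for s in range(100))
```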
Our upper bound is a factor of 3 larger than Feo and Provan's \cite{fp-dtert-93}; however, our algorithm has the advantage that it extends to \emph{tangles}, as described in the next subsection.
\subsection{Tangles}
\label{SS:tangles}
A \EMPH{tangle} is a collection of boundary-to-boundary paths $\gamma_1, \gamma_2, \dots,\gamma_s$ in a closed topological disk~$\Sigma$, which (self-)intersect only pairwise, transversely, and away from the boundary of~$\Sigma$. This terminology is borrowed from knot theory, where a tangle usually refers to the intersection of a knot or link with a closed 3-dimensional ball \cite{c-eklst-70,cdm-ivki-12}; our tangles are perhaps more properly called \emph{flat tangles}, as they are images of tangles under appropriate projection. (Our tangles are unrelated to the obstructions to small branchwidth introduced by Robertson and Seymour \cite{rs-gm10-91}.) Transforming a curve into a tangle is identical to (an inversion of) the \emph{flarb} operation defined by Allen \etal~\cite{abil-ivd-16}.
We call each individual path $\gamma_i$ a \EMPH{strand} of the tangle. The \EMPH{boundary} of a tangle is the boundary of the disk $\Sigma$ that contains it; we usually denote the boundary by $\sigma$. By the Jordan-Schönflies theorem, we can assume without loss of generality that $\sigma$ is actually a circle. We can obtain a tangle from any closed curve $\gamma$ by considering its restriction to any closed disk whose boundary~$\sigma$ intersects $\gamma$ transversely away from its vertices; we call this restriction the \EMPH{interior tangle} of $\sigma$.
The strands and boundary of any tangle define a plane graph $T$ whose boundary vertices each have degree $3$ and whose interior vertices each have degree $4$. Depths and potential \EDIT{of a tangle} are defined exactly as for closed curves: The depth of any face $f$ of $T$ is its distance to the outer face in the dual graph $T^*$; the depth of the tangle is its maximum face depth; and the potential $D(T)$ of the tangle is the sum of its face depths.
A tangle is \EMPH{tight} if every pair of strands intersects at most once and \EMPH{loose} otherwise. Every loose tangle contains either an empty loop or a (not necessarily empty) bigon. Thus, any tangle with $n$ vertices can be transformed into a tight tangle---or less formally, \emph{tightened}---in $O(n^2)$ homotopy moves using Steinitz's algorithm. On the other hand, there are infinite classes of loose tangles in which no homotopy move decreases the potential, so we cannot directly apply Feo and Provan's algorithm to this setting.
\EDIT{We describe a two-phase algorithm to tighten any tangle. First, we remove any self-intersections in the individual strands, by contracting loops as in the proof of Lemma \ref{L:contract}. Once each strand is simple, we move the strands so that each pair intersects at most once. See Figure \ref{F:tighten-tangle}.}
\begin{figure}[ht]
\centering\includegraphics[width=0.9\linewidth]{Fig/just-tangle}
\caption{Tightening a tangle in two phases: First simplifying the individual strands, then removing excess crossings between pairs of strands.}
\label{F:tighten-tangle}
\end{figure}
\begin{lemma}
\label{L:pretangle}
\EDIT{Every tangle $T$ with $n$ vertices and $s$ strands, such that every strand is simple, can be tightened using at most $3ns$ homotopy moves.}
\end{lemma}
\begin{proof}~
\EDIT{We prove the lemma by induction on $s$. The base case when $s=1$ is trivial, so assume $s\ge 2$.}
Fix an arbitrary reference point on the boundary circle $\sigma$ that is not an endpoint of a strand. For each index $i$, let $\sigma_i$ be the arc of $\sigma$ between the endpoints of $\gamma_i$ that does not contain the reference point. A strand $\gamma_i$ is \emph{extremal} if the corresponding arc $\sigma_i$ does not contain any other arc $\sigma_j$.
Choose an arbitrary extremal strand $\gamma_i$. Let $m_i$ denote the number of tangle vertices in the interior of the disk bounded by $\gamma_i$ and the boundary arc $\sigma_i$; \EDIT{call this disk $\Sigma_i$}. Let $s_i$ denote the number of intersections between $\gamma_i$ and other strands. Finally, let $\gamma'_i$ be a path inside the disk $\Sigma$ that defines the tangle~$T$, with the same endpoints as $\gamma_i$, that intersects each other strand in $T$ at most once, such that the disk bounded by $\sigma_i$ and $\gamma'_i$ has no tangle vertices in its interior.
\EDIT{(See Figure~\ref{F:move-strand-tangle} for an illustration; the red strand in the left tangle is $\gamma_i$, the red strand in the middle tangle is $\gamma'_i$, and the shaded disk is $\Sigma_i$.)}
We can deform $\gamma_i$ into $\gamma'_i$ using essentially the algorithm from Lemma \ref{L:contract}; \EDIT{the disk $\Sigma_i$ is contracted along with $\gamma_i$ in the process}. If \EDIT{$\Sigma_i$} contains an empty bigon \EDIT{with one side in $\gamma_i$}, remove it with a $\arc20$ move \EDIT{(which decreases~$s_i$ by $1$)}. If \EDIT{$\Sigma_i$} has an interior vertex with a neighbor on $\gamma_i$, remove it using at most two homotopy moves (which increases~$s_i$ \EDIT{by $1$ and decreases $m_i$ by $1$}). Altogether, this deformation requires at most $3m_i + s_i \le 3n$ homotopy moves.
\begin{figure}[ht]
\centering\includegraphics[width=0.9\linewidth]{Fig/tangle-one-strand}
\caption{Moving one strand out of the way and shrinking the tangle boundary.}
\label{F:move-strand-tangle}
\end{figure}
After deforming $\gamma_i$ to $\gamma_i'$, we \EDIT{redefine the tangle by ``shrinking'' its boundary curve} slightly to exclude~$\gamma'_i$, without creating or removing any \EDIT{vertices in the tangle or endpoints on the boundary} \EDIT{(see the right of Figure~\ref{F:move-strand-tangle})}. We emphasize that shrinking the boundary does not modify the strands and therefore does not require any homotopy moves. The resulting smaller tangle has exactly $s-1$ strands, each of which is simple. Thus, the induction hypothesis implies that we can recursively tighten this smaller tangle using at most $3n(s-1)$ homotopy moves.
\end{proof}
\begin{corollary}
\label{C:tangle}
Every tangle $T$ with $n$ vertices and $s$ strands can be tightened using at most $3D(T) + 3ns$ homotopy moves.
\end{corollary}
\begin{proof}
\EDIT{As long as $T$ contains at least one non-simple strand, we identify a simple loop $\alpha$ in that strand and remove it as described in the proof of Lemma \ref{L:contract}. Suppose there are $m$ vertices and $t$ maximal subpaths in the interior of $\alpha$, and let $M$ be the number of homotopy moves required to remove $\alpha$. The algorithm in the proof of Lemma \ref{L:contract} implies that $M\le 3m+t+1$, and Euler’s formula implies that $\alpha$ contains $m+t+1 \ge M/3$ faces. Removing $\alpha$ decreases the depth of each of these faces by at least $1$ and therefore decreases the potential of the tangle by at least $M/3$.}
\EDIT{Let $T'$ be the remaining tangle after all such loops are removed. Our potential analysis for a single loop implies inductively that transforming~$T$ into~$T'$ requires at most $3D(T) - 3D(T') \le 3D(T)$ homotopy moves.
Because $T'$ still has $s$ strands and at most $n$ vertices, Lemma \ref{L:pretangle} implies that we can tighten $T'$ with at most $3ns$ additional homotopy moves.}
\end{proof}
\subsection{Main Algorithm}
We call a simple closed curve $\sigma$ \EMPH{useful} for $\gamma$ if $\sigma$ intersects $\gamma$ transversely away from its vertices, and the interior tangle $T$ of $\sigma$ has at least $s^2$ vertices, where $s \coloneqq \abs{\sigma\cap\gamma}/2$ is the number of strands.
Our main algorithm repeatedly finds a useful closed curve whose interior tangle has $O(\sqrt{n})$ depth, and tightens its interior tangle; if there are no useful closed curves, then we fall back to the loop-contraction algorithm of Lemma \ref{L:contract}.
\begin{lemma}
\label{L:useful}
Let $\gamma$ be an arbitrary non-simple closed curve in the plane with $n$ vertices. Either there is a useful simple closed curve for $\gamma$ whose interior tangle has depth $O(\sqrt{n})$, or the depth of $\gamma$ is $O(\sqrt{n})$.
\end{lemma}
\begin{proof}
To simplify notation, let $d \coloneqq \Depth(\gamma)$. For each integer $j$ between $1$ and $d$, let $R_j$ be the set of points $p$ with $\Depth(p, \gamma) \ge d+1-j$,
and let~$\tilde{R}_j$ denote a small open neighborhood of the closure of $R_j \cup \tilde{R}_{j-1}$, where $\tilde{R}_0$ is the empty set. Each region~$\tilde{R}_j$ is the disjoint union of closed disks, whose boundary cycles intersect $\gamma$ transversely away from its vertices, if at all. In particular, $\tilde{R}_d$ is a disk containing the entire curve $\gamma$.
Fix a point $z$ such that $\Depth(z,\gamma) = d$. For each integer $j$, let $\Sigma_j$ be the unique component of $\tilde{R}_j$ that contains $z$, and let $\sigma_j$ be the boundary of $\Sigma_j$. Then~$\sigma_1, \sigma_2, \dots, \sigma_d$ are disjoint, nested, simple closed curves; see Figure \ref{F:curve-levels}. Let $n_j$ be the number of vertices and let $s_j \coloneqq \abs{\gamma \cap \sigma_j}/2$ be the number of strands of the interior tangle of $\sigma_j$. For notational convenience, we define $\Sigma_0 \coloneqq \varnothing$ and thus $n_0 = s_0 = 0$. We ignore the outermost curve $\sigma_d$, because it contains the entire curve $\gamma$. The next outermost curve $\sigma_{d-1}$ contains every vertex of $\gamma$, so $n_{d-1} = n$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Fig/tangle-levels}
\caption{Nested depth cycles around a point of maximum depth.}
\label{F:curve-levels}
\end{figure}
By construction, for each $j$, the interior tangle of $\sigma_j$ has depth $j+1$. Thus, to prove the lemma, it suffices to show that if none of the curves $\sigma_1, \sigma_2, \dots, \sigma_{d-1}$ is useful, then $d = O(\sqrt{n})$.
Fix an index $j$. Each edge of $\gamma$ crosses $\sigma_j$ at most twice. Any edge of $\gamma$ that crosses~$\sigma_j$ has at least one endpoint in the annulus $\Sigma_j \setminus \Sigma_{j-1}$, and any edge that crosses $\sigma_j$ twice has both endpoints in~$\Sigma_j \setminus \Sigma_{j-1}$. Conversely, each vertex in $\Sigma_j$ is incident to at most two edges that cross $\sigma_j$ and no edges that cross $\sigma_{j+1}$. It follows that $\abs{\sigma_j\cap\gamma} \le 2(n_j - n_{j-1})$, and therefore $n_j \geq n_{j-1} + s_j$.
Thus, by induction, we have
\[
n_j \ge \sum_{i=1}^j s_i
\]
for every index $j$.
Now suppose no curve $\sigma_j$ with $1\le j <d$ is useful. Then we must have $s_j^2 > n_j$ and therefore
\[
s_j^2 > \sum_{i=1}^j s_i
\]
for all $1\le j < d$. Trivially, $s_1\ge 1$, because $\gamma$ is non-simple.
A straightforward induction argument implies that $s_j \ge (j+1)/2$ for all $1 \le j < d$. If $d \le 2$, then $d \le 2\sqrt{n}$ holds trivially, because $n \ge 1$; otherwise,
\[
n
~=~ n_{d-1}
~\ge~ \sum_{i = 1}^{d-1} \frac{i+1}{2}
~=~ \frac{d(d+1)-2}{4}
~>~ \frac{d^2}{4}.
\]
We conclude that $d \le 2\sqrt{n}$, which completes the proof.
\end{proof}
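The induction bounding $s_j$ can also be spot-checked numerically (a sanity check, not part of the proof). Choosing each $s_j$ as small as the constraint $s_j^2 > \sum_{i\le j} s_i$ allows yields a pointwise-minimal feasible sequence, since smaller early values only weaken later constraints; verifying $s_j \ge (j+1)/2$ on this greedy sequence therefore verifies it for every feasible sequence.

```python
def minimal_strand_counts(length):
    """Greedily build s_1, s_2, ... with each s_j the smallest positive
    integer satisfying s_j**2 > s_1 + ... + s_j."""
    prefix, seq = 0, []
    for _ in range(length):
        s = 1
        while s * s <= prefix + s:
            s += 1
        seq.append(s)
        prefix += s
    return seq

seq = minimal_strand_counts(500)
# Every entry satisfies s_j >= (j+1)/2, as the induction claims.
assert all(2 * s >= j + 1 for j, s in enumerate(seq, start=1))
```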
\begin{theorem}
\label{Th:upper}
Every closed curve in the plane with $n$ vertices can be simplified in $O(n^{3/2})$ homotopy moves.
\end{theorem}
\begin{proof}
Let $\gamma$ be an arbitrary closed curve in the plane with $n$ vertices. If $\gamma$ has depth $O(\sqrt{n})$, Lemma~\ref{L:contract} and the trivial upper bound $D(\gamma) \le {(n+1)}\cdot \Depth(\gamma)$ imply that we can simplify $\gamma$ in $O(n^{3/2})$ homotopy moves. For purposes of analysis, we charge $O(\sqrt{n})$ of these moves to each vertex of $\gamma$.
Otherwise, let $\sigma$ be an arbitrary useful closed curve chosen according to Lemma \ref{L:useful}. Suppose the interior tangle of $\sigma$ has $m$ vertices, $s$ strands, and depth~$d$. Lemma \ref{L:useful} implies that $d = O(\sqrt{n})$, and the definition of useful implies that $s \le \sqrt{m}$, which is $O(\sqrt{n})$. Thus, by \EDIT{Corollary \ref{C:tangle}}, we can tighten the interior tangle of $\sigma$ in $O(m d + m s) = O(m \sqrt{n})$ moves. This simplification removes at least $m - s^2/2 \ge m/2$ vertices from $\gamma$, as the resulting tight tangle has at most $s^2/2$ vertices. Again, for purposes of analysis, we charge $O(\sqrt{n})$ moves to each deleted vertex. We then recursively simplify the resulting closed curve.
In either case, each vertex of $\gamma$ is charged $O(\sqrt{n})$ moves as it is deleted. Thus, simplification requires at most $O(n^{3/2})$ homotopy moves in total.
\end{proof}
\subsection{Efficient Implementation}
Here we describe how to implement our curve-simplification algorithm to run in $O(n^{3/2})$ time; in fact, our implementation spends only constant amortized time per homotopy move. We assume that the input curve is given in a data structure that allows fast exploration and modification of plane graphs, such as a quad-edge data structure \cite{gs-pmgsc-85} or a doubly-connected edge list \cite{bcko-cgaa-08}. If the curve is presented as a polygon with $m$ edges, an appropriate graph representation can be constructed in $O(m\log m + n)$ time using classical geometric algorithms \cite{cs-arscg-89,m-fppa-90,ce-oails-92}; more recent algorithms can be used for piecewise-algebraic curves \cite{ek-ee2aa-08}.
\begin{theorem}
\label{Th:upper-algo}
Given a closed curve $\gamma$ in the plane with $n$ vertices, we can compute a sequence of $M = O(n^{3/2})$ homotopy moves that simplifies $\gamma$ in $O(M)$ time.
\end{theorem}
\begin{proof}
We begin by labeling each face of \EDIT{$\gamma$} with its depth, using a breadth-first search of the dual graph in $O(n)$ time. Then we construct the depth contours of \EDIT{$\gamma$}—the boundaries of the regions~$\tilde{R}_j$ from the proof of Lemma \ref{L:useful}—and organize them into a \emph{contour tree} in $O(n)$ time by brute force. Another $O(n)$-time breadth-first traversal computes the number of strands and the number of interior vertices of every contour's interior tangle; in particular, we identify which depth contours are useful. To complete the preprocessing phase, we place all the leafmost useful contours into a queue. We can charge the overall $O(n)$ preprocessing time to the $\Omega(n)$ homotopy moves needed to simplify the curve.
As long as the queue of leafmost useful contours is non-empty, we extract one contour $\sigma$ from this queue and simplify its interior tangle $T$ as follows. Suppose $T$ has $m$ interior vertices.
Following the proof of Theorem \ref{Th:upper}, we first simplify every loop in each strand of $T$. We identify loops by traversing the strand from one endpoint to the other, marking the vertices as we go; the first time we visit a vertex that has already been marked, we have found a loop $\alpha$. We can perform each of the homotopy moves required to shrink $\alpha$ in $O(1)$ time, because each such move modifies only a constant-radius neighborhood of a vertex on $\alpha$. After the loop is shrunk, we continue walking along the strand starting at the most recently marked \EDIT{vertex}.
The second phase of the tangle-simplification algorithm proceeds similarly. We walk around the boundary of $T$, marking vertices as we go. As soon as we see the second endpoint of any strand $\gamma_i$, we pause the walk to straighten $\gamma_i$. As before, we can execute each homotopy move used to move $\gamma_i$ to $\gamma'_i$ in $O(1)$ time. We then move the boundary of the tangle over the vertices of $\gamma'_i$, and remove the endpoints of $\gamma'_i$ from the boundary curve, in $O(1)$ time per vertex.
The only portions of the running time that we have not already charged to homotopy moves are the time spent marking the vertices on each strand and the time to update the tangle boundary after moving a strand aside. Altogether, the uncharged time is $O(m)$, which is less than the number of moves used to tighten $T$, because the contour $\sigma$ is useful. Thus, tightening the interior tangle of a useful contour requires $O(1)$ amortized time per homotopy move.
Once the tangle is tight, we must update the queue of useful contours. The original contour $\sigma$ is still a depth contour in the modified curve, and tightening $T$ only changes the depths of faces that intersect~$T$. Thus, we could update the contour tree in $O(m)$ time, which we could charge to the moves used to tighten $T$; but in fact, this update is unnecessary, because no contour in the interior of $\sigma$ is useful. We then walk up the contour tree from $\sigma$, updating the number of interior vertices until we find a useful ancestor contour. The total time spent traversing the contour tree for new useful contours is $O(n)$; we can charge this time to the $\Omega(n)$ moves needed to simplify the curve.
\end{proof}
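The marking step in the proof above is an ordinary first-repeat scan along a strand. A minimal sketch, with strands represented as hypothetical vertex sequences:

```python
def first_loop(strand):
    """Return the index pair (i, j) of the first vertex repeated along
    the strand, delimiting the first loop alpha found while marking
    vertices, or None if the strand is simple."""
    seen = {}
    for j, v in enumerate(strand):
        if v in seen:
            return seen[v], j
        seen[v] = j
    return None

assert first_loop(['a', 'b', 'c', 'b', 'd']) == (1, 3)  # loop from b to b
assert first_loop(['a', 'b', 'c']) is None              # simple strand
```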
\subsection{Multicurves}
Finally, we describe how to extend our $O(n^{3/2})$ upper bound to multicurves. Just as in Section~\ref{SS:multi-lower}, we distinguish between two variants, depending on whether the target of the simplification is an \emph{arbitrary} set of disjoint cycles or a \emph{particular} set of disjoint cycles. In both cases, our upper bounds match the lower bounds proved in Section~\ref{SS:multi-lower}.
First we extend our loop-contraction algorithm from Lemma \ref{L:contract} to the multicurve setting.
\EDIT{Recall that a \emph{component} of a multicurve $\gamma$ is any multicurve whose image is a component of the image of $\gamma$, and the individual closed curves that comprise $\gamma$ are its \emph{constituent curves}.}
The main difficulty is that \EDIT{one} component of the multicurve might lie inside a face of another component, making progress on the larger component impossible. To handle this potential obstacle, we simplify the \emph{innermost} components of the \EDIT{multicurve} first, and we move isolated simple closed curves toward the outer face as quickly as possible. Figure \ref{F:multi-shrink} sketches the basic steps of our algorithm when the input multicurve is connected.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\linewidth]{Fig/multi-shrink-loop}
\caption{Simplifying a connected multicurve: shrink an arbitrary simple loop or cycle, recursively simplify any inner components, translate inner circle clusters to the outer face, and recursively simplify the remaining non-simple components.}
\label{F:multi-shrink}
\end{figure}
\unskip
\begin{lemma}
\label{L:multi-contract}
Every $n$-vertex $k$-curve $\gamma$ in the plane can be transformed into $k$ disjoint simple closed curves using at most $3D(\gamma) + 4nk$ homotopy moves.
\end{lemma}
\begin{proof}
Let $\gamma$ be an arbitrary $k$-curve with $n$ vertices.
If $\gamma$ is connected, we either contract and delete a loop, exactly as in Lemma \ref{L:contract}, or we contract a simple constituent curve to an isolated circle, using essentially the same algorithm. In either case, the number of moves performed is at most $3D(\gamma) - 3D(\gamma')$, where $\gamma'$ is the multicurve after the contraction. The lemma now follows immediately by induction.
We call a component of $\gamma$ an \EMPH{outer component} if it is incident to the unbounded outer face of $\gamma$, and an \EMPH{inner component} otherwise. If $\gamma$ has more than one outer component, we partition $\gamma$ into subcurves, each consisting of one outer component $\gamma\!_o$ and all inner components located inside faces of $\gamma\!_o$, and we recursively simplify each subcurve independently; the lemma follows by induction. If any outer component is simple, we ignore that component and simplify the rest of $\gamma$ recursively; again, the lemma follows by induction.
Thus, we can assume without loss of generality that our multicurve $\gamma$ is disconnected but has only one outer component $\gamma\!_o$, which is non-simple. For each face $f$ of $\gamma\!_o$, let $\gamma\!_f$ denote the union of all components inside $f$. Let $n_f$ and $k_f$ respectively denote the number of vertices and constituent \EDIT{curves} of~$\gamma\!_f$. Similarly, let~$n_o$ and $k_o$ respectively denote the number of vertices and constituent \EDIT{curves} of the outer component $\gamma\!_o$.
We first recursively simplify each subcurve $\gamma\!_f$; let $\kappa_f$ denote the resulting \emph{cluster} of $k_f$ simple closed curves. By the induction hypothesis, this simplification requires at most $3D(\gamma\!_f) + 4 n_f k_f$ homotopy moves. We \emph{translate} each cluster $\kappa_f$ to the outer face of $\gamma\!_o$ by shrinking $\kappa_f$ to a small $\e$-ball and then moving the entire cluster along a shortest path in the dual graph of $\gamma\!_o$. This translation requires at most $4n_o k_f$ homotopy moves; each circle in $\kappa_f$ uses one $\arc{2}{0}$ move and one $\arc{0}{2}$ move to cross any edge of $\gamma\!_o$, and in the worst case, the cluster might cross all $2n_o$ edges of $\gamma\!_o$. After all circle clusters are in the outer face, we recursively simplify $\gamma\!_o$ using at most $3 D(\gamma\!_o) + 4 n_o k_o$ homotopy moves.
The total number of homotopy moves used in this case is
\[
\sum_f 3D(\gamma\!_f) + 3 D(\gamma\!_o)
~+~
\sum_f 4 n_f k_f + \sum_f 4 n_o k_f + 4 n_o k_o.
\]
Each face of $\gamma\!_o$ has the same depth as the corresponding face of $\gamma$, and for each face~$f$ of $\gamma\!_o$, each face of the subcurve $\gamma\!_f$ has lesser depth than the corresponding face of $\gamma$. It follows that
\[
\sum_f D(\gamma\!_f) + D(\gamma\!_o) \le D(\gamma).
\]
Similarly, $\sum_f n_f + n_o = n$ and $\sum_f k_f + k_o = k$. The lemma now follows immediately.
\end{proof}
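The final accounting in this proof can be sanity-checked numerically. The following randomized check (an illustration only, not part of the proof) confirms that whenever $\sum_f n_f + n_o = n$, $\sum_f k_f + k_o = k$, and $\sum_f D(\gamma\!_f) + D(\gamma\!_o) \le D(\gamma)$, the total move count from the displayed sum never exceeds $3D(\gamma) + 4nk$:

```python
import random

# Total moves from the displayed sum in the proof of Lemma multi-contract:
#   sum_f 3 D_f + 3 D_o + sum_f 4 n_f k_f + sum_f 4 n_o k_f + 4 n_o k_o
def total_moves(D_f, D_o, n_f, n_o, k_f, k_o):
    return (3 * sum(D_f) + 3 * D_o
            + 4 * sum(nf * kf for nf, kf in zip(n_f, k_f))
            + 4 * n_o * sum(k_f) + 4 * n_o * k_o)

random.seed(1)
for _ in range(1000):
    m = random.randint(1, 5)                      # faces containing subcurves
    n_f = [random.randint(0, 50) for _ in range(m)]
    k_f = [random.randint(1, 5) for _ in range(m)]
    D_f = [random.randint(0, 100) for _ in range(m)]
    n_o, k_o, D_o = random.randint(1, 50), random.randint(1, 5), random.randint(0, 100)
    n, k = sum(n_f) + n_o, sum(k_f) + k_o
    D = sum(D_f) + D_o                            # tightest case of the depth bound
    assert total_moves(D_f, D_o, n_f, n_o, k_f, k_o) <= 3 * D + 4 * n * k
```

The key step is that each $k_f \le k$, so $\sum_f n_f k_f \le k \sum_f n_f$, while $4 n_o \sum_f k_f + 4 n_o k_o = 4 n_o k$; the two bounds combine to $4nk$.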
To reduce the leading term to $O(n^{3/2})$, we extend the definition of a tangle to the intersection of a multicurve $\gamma$ with a closed disk whose boundary intersects the multicurve transversely away from its vertices, or not at all. Such a tangle can be decomposed into boundary-to-boundary paths, called \emph{open} strands, and closed curves that do not touch the tangle boundary, called \emph{closed} strands. \EDIT{Each closed strand is a constituent curve of $\gamma$.} A tangle is \emph{tight} if every strand is simple, every pair of open strands intersects at most once, and otherwise all strands are disjoint.
\begin{theorem}
\label{Th:multi-upper}
Every $k$-curve in the plane with $n$ vertices can be transformed into a set of $k$ disjoint simple closed curves using $O(n^{3/2} + nk)$ homotopy moves.
\end{theorem}
\begin{proof}
Let $\gamma$ be an arbitrary $k$-curve with $n$ vertices. Following the proof of Lemma \ref{L:multi-contract}, we can assume without loss of generality that $\gamma$ has a single outer component $\gamma\!_o$, which is non-simple.
When $\gamma$ is disconnected, we follow the strategy in the previous proof. Let $\gamma\!_f$ denote the union of all components inside any face $f$ of $\gamma\!_o$. For each face $f$, we recursively simplify $\gamma\!_f$ and translate the resulting cluster of disjoint circles to the outer face; when all faces are empty, we recursively simplify $\gamma\!_o$. The theorem now follows by induction.
When $\gamma$ is non-simple and connected, we follow the useful closed curve strategy from Theorem~\ref{Th:upper}. We define a closed curve $\sigma$ to be useful for $\gamma$ if the interior tangle of $\sigma$ has at least as many vertices as the square of its number of \emph{open} strands; then the proof of Lemma~\ref{L:useful} applies to connected multicurves with no modifications. So let $T$ be a tangle with $m$ vertices, $s \le \sqrt{m}$ open strands, $\ell$ closed strands, and depth $d = O(\sqrt{n})$. We straighten~$T$ in two phases, almost exactly as in \EDIT{Section~\ref{SS:tangles}}, contracting loops and simple closed strands in the first phase, and straightening open strands in the second phase.
In the first phase, contracting one loop or \EDIT{simple} closed \EDIT{strand} uses at most $3D(T) - 3D(T')$ homotopy moves, where $T'$ is the tangle after contraction. After each contraction, if $T'$ is disconnected—in particular, if we just contracted a \EDIT{simple} closed \EDIT{strand}—we simplify and extract any isolated components as follows. Let $T'_o$ denote the component of $T'$ \EDIT{that} includes the boundary cycle, and for each face $f$ of $T'_o$, let $\gamma_f$ denote the union of all components of $T'$ inside $f$. We simplify each multicurve $\gamma_f$ using the algorithm from Lemma~\ref{L:multi-contract}—\emph{not recursively!}—and then translate the resulting cluster of disjoint circles \emph{to the outer face of~$\gamma$}. \EDIT{See Figure \ref{F:tighten-open}.} Altogether, simplifying and translating these subcurves requires at most
\(
3D(T') - 3D(T'') + 4n \sum_f k_f
\)
homotopy moves, where $T''$ is the resulting tangle.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Fig/just-tangle-open}
\caption{Whenever shrinking a loop or simple closed strand disconnects the tangle, simplify each isolated component and translate the resulting cluster of circles to the outer face of the entire multicurve.}
\label{F:tighten-open}
\end{figure}
The total number of moves performed in the first phase is at most $3D(T) + 4m\ell = O(m\sqrt{n} + n\ell)$. The first phase ends when the tangle consists entirely of simple open strands. Thus, the second phase straightens the remaining open strands exactly as in the proof of \EDIT{Lemma \ref{L:pretangle}}; the total number of moves in this phase is $O(ms) = O(m\sqrt{n})$. We charge $O(\sqrt{n})$ moves to each deleted vertex and $O(n)$ moves to each constituent curve that was simplified and translated outward. We then recursively simplify the remaining multicurve, ignoring any outer circle clusters.
Altogether, each vertex of $\gamma$ is charged $O(\sqrt{n})$ moves as it is deleted, and each constituent curve of $\gamma$ is charged $O(n)$ moves as it is translated outward.
\end{proof}
With $O(k^2)$ additional homotopy moves, we can transform the resulting set of $k$ disjoint circles into $k$ nested or unnested circles.
\begin{theorem}
\label{Th:multi-upper2}
Any $k$-curve with $n$ vertices in the plane can be transformed into $k$ nested (or unnested) simple closed curves using $O(n^{3/2} + nk + k^2)$ homotopy moves.
\end{theorem}
\begin{corollary}
\label{C:multi-upper3}
Any $k$-curve with at most $n$ vertices in the plane can be transformed into any other $k$-curve with at most $n$ vertices using $O(n^{3/2} + nk + k^2)$ homotopy moves.
\end{corollary}
Theorems \ref{Th:multi-lower} and \ref{Th:multi-lower2} and Corollary \ref{C:multi-lower3} imply that these upper bounds are tight in the worst case for all possible values of $n$ and $k$. As in the lower bounds, the $O(k^2)$ terms are redundant for connected multicurves.
More careful analysis implies that any $k$-curve with $n$ vertices and depth $d$ can be simplified in $O(n \min\set{d, n^{1/2}} + k\min\set{d, n})$ homotopy moves, or transformed into $k$ unnested circles using $O(n \min\set{d, n^{1/2}} + k \min\set{d, n} + k \min\set{d, k})$ homotopy moves. Moreover, these upper bounds are tight for all possible values of $n$, $k$, and $d$. We leave the details of this extension as an exercise for the reader.
\section{Higher-Genus Surfaces}
\label{S:genus}
Finally, we consider the natural generalization of our problem to closed curves on orientable surfaces of higher genus. Because these surfaces have non-trivial topology, not every closed curve is homotopic to a single point or even to a simple curve. A closed curve is \EMPH{contractible} if it is homotopic to a single point. We call a closed curve \EMPH{tight} if it has the minimum number of self-intersections in its homotopy class.
\subsection{Lower Bounds}
Although defect was originally defined as an invariant of \emph{planar} curves, Polyak's formula $\Defect(\gamma) = -2\sum_{x\between y} \sgn(x)\sgn(y)$ extends naturally to closed curves on any orientable surface; homotopy moves change the resulting invariant exactly as described in Figure \ref{F:defect-change}. Thus, Lemma~\ref{L:defect} immediately generalizes to any orientable surface as follows.
\begin{lemma}
\label{L:defect-surface}
Let $\gamma$ and $\gamma'$ be arbitrary closed curves that are homotopic on an arbitrary orientable surface. Transforming $\gamma$ into $\gamma'$ requires at least $\abs{\Defect(\gamma) - \Defect(\gamma')}/2$ homotopy moves.
\end{lemma}
The following construction implies a quadratic lower bound for tightening noncontractible curves on orientable surfaces with any positive genus.
\begin{lemma}
For any positive integer $n$, there is a closed curve on the torus with $n$ vertices and defect $\Omega(n^2)$ that is homotopic to a simple closed curve but not contractible.
\end{lemma}
\begin{proof}
Without loss of generality, suppose $n$ is a multiple of $8$. The curve $\gamma$ is illustrated on the left in Figure \ref{F:bad-torus}. The torus is represented by a rectangle with opposite edges identified. We label three points $a,b,c$ on the vertical edge of the rectangle and decompose the curve into a red path from $a$ to $b$, a green path from $b$ to $c$, and a blue path from~$c$ to $a$. The red and blue paths each wind vertically around the torus, first $n/8$ times in one direction, and then $n/8$ times in the opposite direction.
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{Fig/bad-torus-plain}
\caption{A curve $\gamma$ on the torus with defect $\Omega(n^2)$ and a simple curve homotopic to $\gamma$.}
\label{F:bad-torus}
\end{figure}
As in previous proofs, we compute the defect of $\gamma$ by describing a sequence of homotopy moves that \EDIT{simplifies} the curve, while carefully tracking the changes in the defect that these moves incur. We can unwind one turn of the red path by performing one $\arc20$ move, followed by $n/8$ $\arc33$ moves, followed by one $\arc20$ move, as illustrated in Figure \ref{F:bad-torus-unwind}. Repeating this sequence of homotopy moves $n/8$ times removes all intersections between the red and green paths, after which a sequence of $n/4$ $\arc20$ moves straightens the blue path, yielding the simple curve shown on the right in Figure \ref{F:bad-torus}. Altogether, we perform $n^2/64 + n/2$ homotopy moves, where each $\arc33$ move increases the defect of the curve by $2$ and each $\arc20$ move decreases the defect of the curve by $2$. We conclude that $\Defect(\gamma) = -n^2/32 + n$.
\end{proof}
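The move and defect counts in this proof can be checked mechanically. The following bookkeeping aid (not part of the argument) tallies the moves described above and verifies both closed-form expressions:

```python
# Count the homotopy moves used to simplify the torus curve, as in the
# proof: each of the n/8 unwinding rounds performs one 2->0 move, n/8
# 3->3 moves, and one 2->0 move; then n/4 final 2->0 moves straighten
# the blue path.
def unwind_cost(n):
    assert n % 8 == 0
    moves_20 = (n // 8) * 2 + n // 4
    moves_33 = (n // 8) * (n // 8)
    moves = moves_20 + moves_33
    # Each 3->3 move increases the defect by 2 and each 2->0 move
    # decreases it by 2; the final simple curve has defect 0, so
    # Defect(gamma) is minus the total change.
    defect = -(2 * moves_33 - 2 * moves_20)
    return moves, defect

for n in (8, 16, 64, 800):
    moves, defect = unwind_cost(n)
    assert moves == n * n // 64 + n // 2       # total moves in the proof
    assert defect == -(n * n) // 32 + n        # Defect(gamma)
```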
\begin{figure}[ht]
\centering\includegraphics[width=0.9\linewidth]{Fig/bad-torus-unwind}
\caption{Unwinding one turn of the red path.}
\label{F:bad-torus-unwind}
\end{figure}
\begin{theorem}
\label{Th:lower-torus}
Tightening a closed curve with $n$ crossings on a torus requires $\Omega(n^2)$ homotopy moves in the worst case, even if the curve is homotopic to a simple curve.
\end{theorem}
\subsection{Upper Bounds}
Hass and Scott proved that any non-simple closed curve on any orientable surface that is homotopic to a simple closed curve contains either a simple (in fact empty) contractible loop or a simple contractible bigon \cite[Theorem 1]{hs-ics-85}. It follows immediately that any such curve can be simplified in $O(n^2)$ moves using Steinitz's algorithm; Theorem \ref{Th:lower-torus} implies that the upper bound is tight for non-contractible curves.
For the most general setting, where the given curve is not necessarily homotopic to a simple closed curve, we are not even aware of a \emph{polynomial} upper bound! Steinitz's algorithm does not work here; there are curves with excess self-intersections but no simple contractible loops or bigons \cite{hs-ics-85}. Hass and Scott \cite{hs-scs-94} and de Graaf and Schrijver \cite{gs-mcmcr-97} independently proved that any closed curve on any surface can be simplified using a \emph{finite} number of homotopy moves that never increase the number of self-intersections. Both proofs use discrete variants of curve-shortening flow; for sufficiently well-behaved curves and surfaces, results of Grayson \cite{g-sec-89} and Angenent \cite{a-pecs2-91} imply a similar result for differential curvature flow. Unfortunately, without further assumptions about the precise geometries of both the curve and the underlying surface, the number of homotopy moves cannot be bounded by any function of the number of crossings; even in the plane, there are closed curves with three crossings for which curve-shortening flow alternates between a $\arc33$ move and its inverse arbitrarily many times. Paterson \cite{p-cails-02} describes a combinatorial algorithm to compute a tightening sequence of homotopy moves without such reversals, but she offers no analysis of her algorithm.
We conjecture that every curve (or even multicurve) on any surface can be simplified using at most $O(n^2)$ homotopy moves.
\paragraph{Acknowledgements.}
We would like to thank Nathan Dunfield, Joel Hass, Bojan Mohar, and Bob Tarjan for encouragement and helpful discussions, and Joe O'Rourke for asking about multiple curves. \EDIT{We would also like to thank the anonymous referees for several suggestions that improved the paper.}
\bibliographystyle{newuser}
\section{Introduction}
\label{sec:introduction}
It is generally understood that the level of ongoing star formation
(SF) activity in a cloud correlates with the reservoir of dense
gas. This concept first became important for extragalactic research
(e.g., \citealt{gao2004:hcn}), and has since been expanded to include
the Milky Way
\citep{wu2010:hcn,lada2010:sf-efficiency,heiderman2010:sf-law,gutermuth2011:sf-law}. Such
work suggests that (\textit{i}) the mass of dense gas and star
formation rate are proportional, and that (\textit{ii}) the
proportionality constant is the same for all clouds near and
far. These are the key results for efforts to understand Milky Way and
extragalactic SF \citep{kennicutt2012:review}.
It is thus interesting to study regions like the Galactic Center (GC)
molecular cloud G0.253+0.016 (or M0.25+0.01:
\citealt{guesten1981:cmz-nh3})---which is more massive and dense than
the Orion~A cloud ($\sim{}2\times{}10^5\,M_{\sun}$ in $2.8~\rm{}pc$
radius for G0.253+0.016; \citealt{lis1994:m0.25};
\citealt{longmore2011:m025}, hereafter L12), but hardly forms any
stars \citep{lis1994:m0.25}. An infrared luminosity of the
entire cloud of $\le{}3\times{}10^5\,L_{\sun}$, and the absence of
embedded compact H\textsc{ii} regions in $8.4~\rm{}GHz$ VLA maps,
imply $\lesssim{}5$ embedded stars earlier than B0
\citep{lis2001:ir-spectra}. Spitzer can provide more stringent limits,
as it can detect SF at luminosities of a few $10^3\,L_{\sun}$ out to
distances $\approx{}7~\rm{}kpc$ (e.g., in Infrared Dark Clouds
[IRDCs]; \citealt{zhang2009:early-phase-fragmentation} and
\citealt{pillai2011:initial-conditions}). However, this analysis is
beyond the scope of the present paper. A faint $\rm{}H_2O$ maser has
been detected in the cloud \citep{lis1994:m0.25}, but no other masers
reside in the area \citep{caswell2010:gal-center-methanol-masers,
caswell2011:atca-h20}. G0.253+0.016 appears to be in a very extreme
physical state, with gas kinetic temperatures $\sim{}80~\rm{}K$
exceeding dust temperatures $\le{}30~\rm{}K$
(\citealt{guesten1981:cmz-nh3, carey1998:irdc-properties,
lis2001:ir-spectra}; L12). G0.253+0.016 forms part of the
$\sim{}100~\rm{}pc$ circumnuclear ring of clouds
\citep{molinari2011:cmz-ring} at $\sim{}8.5~\rm{}kpc$ distance (also
see L12).
Dense GC clouds may play a key role in the mysterious
formation of compact and massive stellar aggregates like the
``Arches'' cluster (\citealt{lis1998:m025}; L12). For all these
reasons, we present the first high--resolution interferometric line
and dust emission data on G0.253+0.016, obtained using CARMA and the
SMA.
\begin{figure*}
\includegraphics[width=\linewidth]{G0253_SMA-combi.eps}
\caption{Spitzer and SMA maps of G0.253+0.016. The \emph{left} panel
presents Spitzer IRAC data. Overlaid are $450~\rm{}\mu{}m$
wavelength intensity contours at 30 and $70~\rm{}Jy\,beam^{-1}$
(SCUBA Legacy Archive;
\citealt{difrancesco2008:scuba_catalogue}). The lower contour is
repeated in all maps shown. The \emph{middle and right panels} present
signal--to--noise maps of the $\rm{}N_2H^+$ (3--2) and
$280~\rm{}GHz$ dust continuum probed by the SMA. The $\rm{}H_2O$
maser reported by \citet{lis1994:m0.25} is marked.\label{fig:sma}}
\end{figure*}
\section{Observations \& Data Reduction}
\label{sec:observations}
The Submillimeter Array (SMA\footnote {The Submillimeter Array is a
joint project between the Smithsonian Astrophysical Observatory and
the Academia Sinica Institute of Astronomy and Astrophysics, and is
funded by the Smithsonian Institution and the Academia Sinica.})
$\rm{}N_2H^+$ (3--2; $\approx{}0.34~\rm{}km\,s^{-1}$ resolution) and
continuum observations near 280~GHz ($4~\rm{}GHz$ total bandwidth) were made
with seven antennas in compact--north configuration in a single track
in June 2009. Eleven positions, separated by less than half the
$42\arcsec$ primary beam, were observed. The 345~GHz receiver was tuned
to the $\rm{}N_2H^+$ line in the LSB spectral band s4, using 256
channels per chunk and 24 chunks per sideband. The data were taken
under good weather conditions at $<1.3~\rm{}mm$ water vapor with
characteristic system temperatures $<180~\rm{}K$.
The Combined Array for Research in Millimeter--wave Astronomy
(CARMA\footnote {Support for CARMA construction was derived from the
Gordon and Betty Moore Foundation, the Kenneth T.\ and Eileen L.\
Norris Foundation, the James S.\ McDonnell Foundation, the
Associates of the California Institute of Technology, the University
of Chicago, the states of California, Illinois, and Maryland, and
the National Science Foundation (NSF). Ongoing CARMA development and
operations are supported by the NSF and the CARMA partner
universities.}) observations were executed in CARMA23 mode in
November 2011 in a combined D and SH configuration. Four USB bands
were used to observe spectral lines ($\rm{}N_2H^+$ [1--0],
$\rm{}HCO^+$ [1--0], SiO [2--1]; $\approx{}0.5~\rm{}km\,s^{-1}$
resolution; $\rm{}HCO^+$ is not analyzed here), and continuum (500~MHz
bandwidth) for calibration purposes. Six positions, spaced by half the
$\approx{}80\arcsec$ primary beam for the 10.4m--telescopes, were
observed. We flagged 3.5m--telescope baselines $<50~\rm{}ns$ to reduce
sidelobes.
Calibration and imaging were done using MIR (an IDL--based SMA package),
MIRIAD, and GILDAS. Flux calibrations using Titan and Uranus for the
SMA, and Neptune for CARMA are expected to be accurate within 20\%.
\begin{figure*}
\centerline{\includegraphics[width=0.8\linewidth]{M0.25_specs_N2H+_horizontal.eps}}
\caption{Analysis of SMA $\rm{}N_2H^+$ (3--2) data. \emph{Panel (a)}
shows three bright reliable example spectra sampling the full range
of observed velocity dispersions. CARMA $\rm{}N_2H^+$ (1--0) spectra
are overlaid for reference. Gaussian fits are summarized by \emph{green
lines and fitted parameters}. These fit results are analyzed in
\emph{panel (b)}: possible and reliable detections are marked by \emph{grey
and yellow circles}, respectively. Model line widths and intensities,
calculated with MOLLIE \citep{keto2010:mollie} using a kinetic
temperature of $20~\rm{}K$, are indicated (using \emph{green and
dashed black lines}, respectively) for a range of intrinsic line
widths, $\Delta{}v_{\rm{}in}$, and densities at $0.1~\rm{}pc$
radius, $n({\rm{}H_2})$.\label{fig:n2h-analysis}}
\end{figure*}
\section{Results}
\label{sec:results}
\subsection{SMA Dust Emission: No Compact
Cores\label{sec:dust-densities}}
Figure~\ref{fig:sma}~(right) presents the $280~\rm{}GHz$ dust emission
data, observed with a beam size of $2\farcs{}6\times{}1\farcs{}8$ (PA
$48\degr{}$). A continuum peak of $90~\rm{}mJy\,beam^{-1}$ is detected
within $0\farcs{}5$ of the aforementioned $\rm{}H_2O$ maser position reported
by \citet{lis1994:m0.25}. The remaining part of the map is free of emission above
the $5\sigma$--noise--level of $30~\rm{}mJy\,beam^{-1}$.
We adopt a dust temperature of $20~\rm{}K$, following Herschel--based
estimates of $20$--$25~\rm{}K$ (L12), and
\citet{ossenkopf1994:opacities} dust opacities scaled down by a factor
1.5 ($0.008~\rm{}cm^2\,g^{-1}$; see
\citealt{kauffmann2010:mass-size-i} and Appendix~A of
\citealt{kauffmann2008:mambo-spitzer}). The $5\sigma$--noise--level
corresponds to an $\rm{}H_2$ column density of
$1.7\times{}10^{23}~\rm{}cm^{-2}$. Towards the $\rm{}H_2O$ maser, the
column density derived from the intensity is
$5.2\times{}10^{23}~\rm{}cm^{-2}$. This yields masses per beam of
$<26\,M_{\sun}$ and $78\,M_{\sun}$, respectively, when integrating the
column densities over the half power beam width (of
$0.046~\rm{}pc$ effective radius).
Note that L12 use \citet{ossenkopf1994:opacities} opacities. For
consistency, we increase their Herschel--based mass measurement
by a factor 1.5.
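The conversion from flux density per beam to column density and mass can be sketched with the standard modified-blackbody relation. The beam ($2\farcs6\times1\farcs8$), opacity ($0.008~\rm cm^2\,g^{-1}$ per gram of gas), dust temperature ($20$~K), and a mean molecular weight of $\mu=2.8$ per $\rm H_2$ follow the text, but the exact normalization involves rounding choices, so this sketch reproduces the quoted numbers only to within roughly 10\%:

```python
import math

# cgs constants
h, k_B, c, m_H = 6.626e-27, 1.381e-16, 2.998e10, 1.673e-24
M_sun, pc = 1.989e33, 3.086e18

def planck(nu_hz, T):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    x = h * nu_hz / (k_B * T)
    return 2 * h * nu_hz**3 / c**2 / math.expm1(x)

def column_density(S_beam_jy, nu_hz=280e9, T=20.0, kappa=0.008,
                   bmaj_as=2.6, bmin_as=1.8, mu=2.8):
    """N(H2) in cm^-2 from a flux density per beam in Jy."""
    arcsec = math.pi / 180 / 3600
    omega = math.pi * bmaj_as * bmin_as * arcsec**2 / (4 * math.log(2))
    I_nu = S_beam_jy * 1e-23 / omega        # erg s^-1 cm^-2 Hz^-1 sr^-1
    return I_nu / (planck(nu_hz, T) * kappa * mu * m_H)

# 5 sigma = 30 mJy/beam -> N(H2) ~ 1.9e23 cm^-2 (quoted: 1.7e23)
N = column_density(0.030)
# mass over the 0.046 pc effective beam radius -> ~28 M_sun (quoted: <26)
mass = N * 2.8 * m_H * math.pi * (0.046 * pc)**2 / M_sun
```

The small differences from the quoted values reflect rounding of the beam solid angle and opacity, not a different method.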
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{G0253_CARMA-combi.eps}
\end{center}
\caption{CARMA maps of G0.253+0.016. \emph{Panel (a)} presents a cloud
segmentation in position--position--velocity space. \emph{Panels (b,
c)} illustrate the complex velocity structure and chemistry (note
discrepancy between SiO and $\rm{}N_2H^+$). \emph{Panel (d)}
presents the same information, collapsed into two velocity
ranges. The dashed line is the lower SCUBA contour from
Fig.~\ref{fig:sma}.\label{fig:carma}}
\end{figure*}
\subsection{SMA $\rm{}N_2H^+$ Data: Gas
Densities $\le{}3\times{}10^5~\rm{}cm^{-3}$\label{sec:n2h+-densities}}
Figure~\ref{fig:sma}~(middle) summarizes the SMA observations of the
$\rm{}N_2H^+$ (3--2) line, observed with a beam size of
$2\farcs{}7\times{}1\farcs{}9$ (PA $46\degr{}$). It presents a
signal--to--noise ratio (SNR) map: at a given location, we divide the
signal in the brightest velocity channel by the standard deviation
obtained from channels known to be free of emission. Manual inspection
reveals emission at velocities from
$-10~{\rm{}to}~+60~\rm{}km\,s^{-1}$, indicating channels free of
emission at velocities $70$--$100~\rm{}km\,s^{-1}$. Peak positions
with an SNR $\ge{}10$ are considered as potentially detected (i.e., 56
positions); detections are deemed reliable for FWHM diameters larger
than two beams, permitting lower threshold peak SNRs $\ge{}8$ (22
positions). Figure~\ref{fig:sma} (middle) illustrates these
positions. These spectra are characterized using one--component
Gaussian fits. This approach ignores $\rm{}N_2H^+$ hyperfine blending,
which however is considered in the modeling below. Example spectra are
shown in Fig.~\ref{fig:n2h-analysis}(a). Manual inspection always
reveals one single significant velocity component per position.
We model the $\rm{}N_2H^+$ observations from
Fig.~\ref{fig:n2h-analysis}(b) using the MOLLIE non--LTE hyperfine
radiative transfer code in the hyperfine statistical equilibrium
approximation \citep{keto2010:mollie}. We adopt a relative
$\rm{}N_2H^+$ abundance of $1.5\times{}10^{-10}$ per $\rm{}H_2$ (e.g.,
\citealt{tafalla2006:internal-structure}). Spherical $\rm{}H_2$
density profiles $n=n_{0.1\rm{}pc}\cdot{}(r/0.1~{\rm{}pc})^{-2}$ are
assumed, with the density
vanishing for radii $\ge{}0.5~\rm{}pc$. The other free
parameters---i.e.\ the non--thermal gas velocity dispersion,
$\sigma_{v,\rm{}in}$, expressed by intrinsic line widths
$\Delta{}v_{\rm{}in}=(8\,\ln[2])^{1/2}\,\sigma_{v,\rm{}in}$; and the
kinetic temperature, $T_{\rm{}kin}$---are assumed to be constant
within the model sphere. We adopt $T_{\rm{}kin}=20~\rm{}K$, based on
the L12 dust temperature, resulting in optically thick (3--2)
lines. The (1--0) emission is not modelled here; it probes a larger
spatial scale that is not the focus of the present Letter. For a given density,
higher abundances or temperatures imply higher intensities.
As shown in Fig.~\ref{fig:n2h-analysis}(b), the brightest
$\rm{}N_2H^+$ (3--2) peaks can be modelled using densities
$n_{0.1\rm{}pc}=(2\pm{}1)\times{}10^5~\rm{}cm^{-3}$. Integration of
the implied density profiles thus yields masses
$(260\pm{}125)~M_{\sun}$ within apertures of $0.1~\rm{}pc$ projected
radius for the most massive structures.
Surprisingly, the continuum--detected $\rm{}H_2O$--maser position is
not detected in $\rm{}N_2H^+$. The gas at the $\rm{}H_2O$--maser position
probably has a lower $\rm{}N_2H^+$ abundance, and is thus not detectable, as
seen in some high--mass SF (HMSF) regions
\citep{fontani2006:pre-stellar-n2h+,
zinchenko2009:hmsf-multiline}. Furthermore, all $\rm{}N_2H^+$ cores
show no significant dust emission. These cores are probably starless
and have $\rm{}N_2H^+$ abundances $\ge{}10^{-9}$, as seen in IRDCs
that resemble G0.253+0.016 in being relatively dense and starless
(\citealt{ragan2006:IRDC-lines}, \citealt{sakai2008:irdc-molecules},
\citealt{vasyunina2011:irdc-chemistry}): for example, using MOLLIE to
model cores with a higher $\rm{}N_2H^+$ abundance of $\ge{}10^{-9}$ and
$\Delta{}v_{\rm{}in}=1~\rm{}km\,s^{-1}$, the predicted $\rm{}N_2H^+$ (3--2)
line intensity is $\ge{}4.6~\rm{}K$, which is above the detection
limit (Fig.~\ref{fig:n2h-analysis}[b]), even when the density is
$n_{0.1\rm{}pc}=10^4~\rm{}cm^{-3}$, which is an order of magnitude
below that derived from the upper limit of the dust continuum flux
($<26\,M_{\sun}$ within $0.046~\rm{}pc$ radius;
Sec.~\ref{sec:dust-densities}).
\subsection{CARMA Line Emission Maps:\\ Many Fragments
with large Velocity Differences}
Figure~\ref{fig:carma}(d) shows maps for $\rm{}N_2H^+$ (1--0) and SiO
(2--1) observed with CARMA. The beam size is
$7\farcs{}1\times{}3\farcs{}5$ (PA $6\degr{}$). We identify cloud
fragments as continuous $\rm{}N_2H^+$ (1--0) emission structures in
position--position--velocity space exceeding an intensity threshold of
$0.35~{\rm{}Jy\,beam^{-1}}=1.97~{\rm{}K}$ (noise is
$0.06~{\rm{}to}~0.12~\rm{}Jy\,beam^{-1}$). These fragments, numbered
1--7, are shown in Fig.~\ref{fig:carma}(a). The threshold was chosen
to yield a simple yet representative decomposition of the cloud
structure. Segmentation was done using 3D~Slicer\footnote{3D~Slicer is
available from \url{http://www.slicer.org}. See
\url{http://am.iic.harvard.edu} on astronomical research with
3D~Slicer.} and CLUMPFIND \citep{williams1994:clumpfind}, followed
by manual removal of artifacts at map boundaries.
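The segmentation step can be illustrated with a minimal threshold-plus-connected-components toy in pure Python. This is only a sketch of the idea, not CLUMPFIND itself (which assigns voxels through a series of contour levels), and the sparse cube format is invented for illustration:

```python
from collections import deque
from itertools import product

def label_fragments(cube, threshold):
    """Label 26-connected above-threshold regions of a sparse PPV cube.

    cube: dict mapping (x, y, v) voxel indices to intensities.
    Returns a dict mapping each above-threshold voxel to a fragment id.
    """
    offsets = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
    bright = {p for p, t in cube.items() if t >= threshold}
    labels, next_id = {}, 1
    for seed in sorted(bright):
        if seed in labels:
            continue
        queue = deque([seed])          # breadth-first flood fill
        labels[seed] = next_id
        while queue:
            x, y, v = queue.popleft()
            for dx, dy, dv in offsets:
                q = (x + dx, y + dy, v + dv)
                if q in bright and q not in labels:
                    labels[q] = next_id
                    queue.append(q)
        next_id += 1
    return labels

# two emission blobs separated in velocity end up as two fragments
cube = {(0, 0, 0): 1.0, (1, 0, 0): 0.9, (0, 0, 5): 0.8, (0, 1, 5): 0.7,
        (0, 0, 2): 0.1}
labels = label_fragments(cube, 0.35)
```

Structures that overlap on the sky but are well separated in velocity, like fragments 4--7 versus 1--3, are kept distinct by the velocity axis of the cube.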
Fragment properties are listed in Table~\ref{tab:features}: from
spectra integrated over each fragment, we calculate
$\langle{}v\rangle{}$ and $\sigma_v$ as the intensity--weighted
velocity mean and standard deviation calculated directly from the
velocities and intensities per channel, $v_i$ and $T(v_i)$. Using
intensity--weighted mean line--of--sight velocities calculated for
every pixel, we also list the standard deviation among line--of--sight
velocities within a given fragment, $\sigma_v^{\rm{}los}$. Velocity
gradients, characterized by $\sigma_v^{\rm{}los}$, dominate the
velocity dispersion, since $\sigma_v\approx{}\sigma_v^{\rm{}los}$. The
effective radius, $R=(A/\pi)^{1/2}$, is calculated from the
CLUMPFIND--derived fragment area within the
$0.35~{\rm{}Jy\,beam^{-1}}$ intensity surface, $A$.
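The moment computation behind $\langle v\rangle$ and $\sigma_v$ can be sketched as follows (a minimal illustration; the toy channel values are invented):

```python
import math

def velocity_moments(v, T):
    """Intensity-weighted mean velocity and velocity dispersion.

    v: channel velocities v_i; T: intensities T(v_i), as in the text.
    """
    w = sum(T)
    mean = sum(ti * vi for vi, ti in zip(v, T)) / w
    var = sum(ti * (vi - mean) ** 2 for vi, ti in zip(v, T)) / w
    return mean, math.sqrt(var)

# symmetric toy spectrum centred on 10 km/s
v = [6.0, 8.0, 10.0, 12.0, 14.0]
T = [1.0, 2.0, 4.0, 2.0, 1.0]
mean, sigma = velocity_moments(v, T)
```

Applying the same routine to the spectrum integrated over a fragment gives $\langle v\rangle$ and $\sigma_v$; applying it pixel by pixel and taking the standard deviation of the resulting means gives $\sigma_v^{\rm los}$.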
Figure \ref{fig:carma}(b) illustrates that the SMA--detected
$\rm{}N_2H^+$ (3--2) cores are associated with the CARMA--detected
$\rm{}N_2H^+$ fragments, as expected for cores embedded in extended
envelopes. Figure \ref{fig:carma}(c) demonstrates that the
$\rm{}N_2H^+$ fragments 4--7 are also detected in SiO.
\section{Analysis}
\label{sec:analysis}
\subsection{Star Formation Law\label{sec:sf-law}}
The most striking feature of G0.253+0.016, noted by all previous
papers on the cloud, is its low star formation rate. Here, we present
the first quantitative comparison to recently proposed ``star
formation laws''.
\citet{lada2010:sf-efficiency} suggest that molecular clouds typically
contain one embedded YSO per $\sim{}5~M_{\sun}$ of gas at $\rm{}H_2$
column densities $\ge{}7\times{}10^{21}~\rm{}cm^{-2}$. Since
G0.253+0.016 contains $2\times{}10^5\,M_{\sun}$ at column densities
$\ge{}4.5\times{}10^{22}~\rm{}cm^{-2}$ (L12, plus correction in
Sec.~\ref{sec:dust-densities}), the cloud should contain
$\sim{}4\times{}10^4$ YSOs.
\citet{lada2010:sf-efficiency} consider, of course, YSOs bright enough
to be detected. We assume that \citeauthor{lada2010:sf-efficiency}
cannot sense YSOs of mass $<0.08\,M_{\sun}$, and detect only 50\% of
stars with mass $0.08~{\rm{}to}~0.5\,M_{\sun}$. For a typical stellar
initial mass function (IMF), such as the $\alpha_3=2.7$ case of
\citet{kroupa2002:imf}, the total number of stars down to
$0.01\,M_{\odot}$ is equal to the \citeauthor{lada2010:sf-efficiency}
YSO count times a factor 2.63.
Considering this IMF, a cluster of $\sim{}4\times{}10^4$ YSOs similar
to the sources considered by \citeauthor{lada2010:sf-efficiency} would
contain stars of mass $\gtrsim{}100\,M_{\sun}$. This contradicts radio
continuum surveys for H\textsc{ii} regions, ruling out stars with mass
$\gtrsim{}16\,M_{\sun}$ in G0.253+0.016
\citep{lis1994:m0.25}. Assuming a maximum stellar mass
$\sim{}16\,M_{\sun}$, the $\alpha_3=2.7$ \citet{kroupa2002:imf} IMF,
and the factor of 2.63 to account for YSOs too faint to be detected
even in nearby clouds, the cloud should contain $\sim{}900$ YSOs of
the sort considered by \citet{lada2010:sf-efficiency}---i.e., a
factor of $\sim{}45$ fewer than the $\sim{}4\times{}10^4$ YSOs predicted
by the \citet{lada2010:sf-efficiency} law. See
\citet{lis2001:ir-spectra} for a similar IMF analysis. The
\citet{lada2010:sf-efficiency} law thus does not provide a universal
description of the SF process, contrary to assumptions by
\citet{lada2012:sf-laws} to explain the extragalactic
\citet{gao2004:hcn} infrared--HCN luminosity correlation.
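The arithmetic behind the factor of $\sim$45 can be laid out explicitly (all numbers are taken from the text above):

```python
# Lada et al. (2010) relation: one embedded YSO per ~5 M_sun of gas above
# the column density threshold.
dense_gas_mass = 2e5                      # M_sun (L12, opacity-corrected)
predicted_ysos = dense_gas_mass / 5.0     # -> ~4e4 YSOs

# IMF-based upper limit from the absence of stars above ~16 M_sun
observed_limit = 900                      # YSOs, truncated-IMF argument

deficit = predicted_ysos / observed_limit # -> factor ~45
```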
\begin{table}
\caption{Fragment Properties\label{tab:features}}
\begin{center}
\begin{tabular}{llllll}
\hline \hline
Fragment & $\langle{}v\rangle{}$ & $\sigma_v$ & $\sigma_v^{\rm{}los}$
& $R$ & $\alpha$ assuming\\
& $\rm{}km\,s^{-1}$ & $\rm{}km\,s^{-1}$ & $\rm{}km\,s^{-1}$ &
pc & $3\times{}10^{23}~\rm{}cm^{-2}$ \\ \hline
1 & $-0.1$ & 6.1 & 4.9 & 0.47 & 4.5\\
2 & 6.4 & 5.4 & 4.6 & 0.70 & 2.3\\
3 & 14.8 & 4.3 & 2.0 & 0.68 & 1.5\\
4 & 32.2 & 5.9 & 5.0 & 0.63 & 3.1\\
5 & 31.5 & 5.2 & 4.8 & 1.06 & 1.4\\
6 & 44.2 & 8.4 & 8.1 & 1.16 & 3.4\\
7 & 42.7 & 13.9 & 13.5 & 1.62 & 6.8\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Kinematics \& Gravitational Binding\label{sec:self-gravity}}
Stability against gravitational collapse can, e.g., be evaluated using
the virial parameter, $\alpha=5R\sigma_v^2/(GM)$ or
\begin{equation}
\alpha=1.2\,\left(\frac{\sigma_v}{\rm{}km\,s^{-1}}\right)^2
\left(\frac{R}{\rm{}pc}\right)
\left(\frac{M}{10^3\,\rm{}M_{\sun}}\right)^{-1} \, ,
\label{eq:stability}
\end{equation}
where $\sigma_v$ is the one--dimensional velocity dispersion and $G$
is the gravitational constant. Depending slightly on the equation of
state, collapse requires $\alpha{}\lesssim{}2$
\citep{bertoldi1992:pr_conf_cores, ebert1955:be-spheres,
bonnor1956:be-spheres}.
L12 derive $M=2.0\times{}10^5\,M_{\sun}$ within $R=2.8~\rm{}pc$ (after
correction in Sec.~\ref{sec:dust-densities}), and a line width
$\Delta{}v=(8\,\ln[2])^{1/2}\,\sigma_v<16~\rm{}km\,s^{-1}$. This
yields $\alpha<0.8$: the cloud should collapse. But L12 exclude a
component at $\approx{}10~\rm{}km\,s^{-1}$ radial velocity, which our maps
show to be part of the cloud morphology (i.e., fragments 1--3;
Fig.~\ref{fig:carma}). Inclusion of the $\approx{}10~\rm{}km\,s^{-1}$
component yields $\Delta{}v=(35\pm{}5)~\rm{}km\,s^{-1}$ (Fig.~4 of
L12), resulting in $\alpha{}=3.8^{+1.2}_{-1.0}$. G0.253+0.016 thus
seems to be unbound. Fast motions $\Delta{}v>16~\rm{}km\,s^{-1}$ are
also suggested by widespread SiO shocks (Sec.~\ref{sec:nature-future}).
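As a cross-check, the two whole-cloud virial parameters quoted above can be reproduced from Eq.~(\ref{eq:stability}); this is a minimal sketch using only the L12 numbers given in the text:

```python
import math

# Sanity check of the whole-cloud virial parameters quoted above, using
# Eq. (stability): alpha = 1.2 (sigma_v/km s^-1)^2 (R/pc) (M/10^3 Msun)^-1.
# All inputs are the L12 values given in the text.

FWHM = math.sqrt(8 * math.log(2))          # Delta_v = sqrt(8 ln 2) sigma_v

def virial_alpha(sigma_v_kms, R_pc, M_1e3Msun):
    """Virial parameter in the numerical form of Eq. (stability)."""
    return 1.2 * sigma_v_kms**2 * R_pc / M_1e3Msun

M_1e3, R = 200.0, 2.8                      # M = 2.0e5 Msun, R = 2.8 pc

# L12 line width Delta_v < 16 km/s -> cloud apparently bound
alpha_L12 = virial_alpha(16.0 / FWHM, R, M_1e3)
# including the ~10 km/s component: Delta_v ~ 35 km/s -> unbound
alpha_full = virial_alpha(35.0 / FWHM, R, M_1e3)
print(round(alpha_L12, 2), round(alpha_full, 1))   # 0.78 3.7
```

This reproduces $\alpha<0.8$ for the narrow line width; the $35~\rm{}km\,s^{-1}$ case gives $\alpha\approx{}3.7$, consistent with the quoted $3.8^{+1.2}_{-1.0}$ given the $\pm{}5~\rm{}km\,s^{-1}$ width uncertainty.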
Still, many interferometer--detected structures seem to be
bound. Using Eq.~(\ref{eq:stability}), Table~\ref{tab:features}
reports $\alpha$ for CARMA--detected cloud fragments, assuming column
densities $\sim{}3\times{}10^{23}~\rm{}cm^{-2}$
(Sec.~\ref{sec:mass-size}). For the SMA--detected $\rm{}N_2H^+$ cores,
we adopt $R=0.1~\rm{}pc$, and the density structure from
Sec.~\ref{sec:n2h+-densities}. To include thermal pressure, we
substitute
$(\sigma_v^2+[0.188~{\rm{}km\,s^{-1}}]^2\cdot{}[T_{\rm{}kin}/10~\rm{}K])^{1/2}$
for $\sigma_v$
in Eq.~(\ref{eq:stability}). For $T_{\rm{}kin}\le{}80~\rm{}K$,
$\Delta{}v\le{}3.0~\rm{}km\,s^{-1}$, and
$n_{0.1\rm{}pc}=10^5~\rm{}cm^{-3}$ (Fig.~\ref{fig:n2h-analysis}),
$\alpha\le{}1.8$ is obtained.
Many $\rm{}N_2H^+$ (3--2) spectra reveal lines consistent with an
intrinsic line width $\lesssim{}0.5~\rm{}km\,s^{-1}$
(Fig.~\ref{fig:n2h-analysis}[b]). Compared with the
$\gtrsim{}10~\rm{}km\,s^{-1}$ lines typically found in single--dish
spectra of the GC region (e.g., \citealt{lis1998:m025}), these are
probably the narrowest lines detected in this region so far.
\subsection{Density Structure\label{sec:mass-size}}
Figure~\ref{fig:mass-size} summarizes the density structure of
G0.253+0.016. From L12, we take a mass of $2\times{}10^5\,M_{\sun}$
within $2.8~\rm{}pc$ radius, and include their peak column density of
$5.3\times{}10^{23}~\rm{}cm^{-2}$ per $36\arcsec$ beam at
$0.7~\rm{}pc$ radius ($1.7\times{}10^4\,M_{\sun}$; data scaled as
explained in Sec.~\ref{sec:dust-densities}). Interferometer--based
assessments ($78\,M_{\sun}$ and $[260\pm{}125]\,M_{\sun}$ at 0.046 and
$0.1~\rm{}pc$ radius, respectively) are from
Secs.~\ref{sec:dust-densities} and \ref{sec:n2h+-densities}. For
dust--based measurements, we adopt an opacity--induced uncertainty by
a factor 2 \citep{kauffmann2008:mambo-spitzer}. Reference data on
non--HMSF clouds are from
\citet{kauffmann2010:mass-size-ii}. Unpublished Bolocam
maps\footnote{We are indebted to D.~Li for providing the data, and
A.~Ginsburg for reducing it.} (adopting $15~\rm{}K$ dust
temperature) and extinction data from
\citet{kainulainen2011:confinement} are used to characterize Orion~A
using methods from \citeauthor{kauffmann2010:mass-size-i}
(\citeyear{kauffmann2010:mass-size-i}, building on
\citealt{rosolowsky2008:dendrograms}). \citet{espinoza2009:arches}
characterize the Arches cluster. An approximate mass--size limit for
HMSF is taken from \citet{kauffmann2010:irdcs},
\begin{equation}
m_{\rm{}lim}(r)=870\,M_{\sun}\,(r/{\rm{}pc})^{1.33}\,{}.
\label{eq:hmsf-limit}
\end{equation}
At $r=2.8~\rm{}pc$, G0.253+0.016 exceeds the mass of equal--sized
structures in Orion~A by a factor $\sim{}25$, and the
\citeauthor{kauffmann2010:irdcs} criterion by a factor 60. The mean
$\rm{}H_2$ volume and column densities are
$M/(\tfrac{4}{3}\,\pi\,R^3)=3.2\times{}10^{4}~\rm{}cm^{-3}$ and
$3.6\times{}10^{23}~\rm{}cm^{-2}$, respectively. But at smaller
spatial scales, G0.253+0.016 falls short of the masses of the Arches
cluster and the most massive structures in Orion~A by factors $\sim{}4$. At
$0.046~\rm{}pc$ radius, the \citeauthor{kauffmann2010:irdcs} criterion
is exceeded by a modest factor $\lesssim{}5$.
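The factor-of-60 excess and the mean densities quoted above can be verified directly; the sketch below assumes a mean mass of $2.8\,m_{\rm{}H}$ per $\rm{}H_2$ molecule (our assumption; the exact conversion used may differ slightly):

```python
import math

# Cross-check of the numbers in this subsection. The conversion assumes a
# mean molecular mass of 2.8 m_H per H2 molecule (our assumption).
M_sun, pc, m_H, mu = 1.989e33, 3.086e18, 1.6726e-24, 2.8   # cgs units

M = 2.0e5 * M_sun          # cloud mass within R
R = 2.8 * pc               # cloud radius

# HMSF limit of Eq. (hmsf-limit) evaluated at r = 2.8 pc
m_lim = 870.0 * 2.8**1.33                  # Msun
excess = 2.0e5 / m_lim                     # factor quoted as ~60

n_H2 = M / (mu * m_H * 4.0 / 3.0 * math.pi * R**3)   # mean H2 volume density
N_H2 = M / (mu * m_H * math.pi * R**2)               # mean H2 column density
print(f"excess ~ {excess:.0f}, n ~ {n_H2:.1e} cm^-3, N ~ {N_H2:.1e} cm^-2")
```

This yields an excess of $\sim{}58$, $n\approx{}3.1\times{}10^4~\rm{}cm^{-3}$, and $N\approx{}3.6\times{}10^{23}~\rm{}cm^{-2}$, matching the quoted values to rounding.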
The interferometer--derived masses are probably underestimated. Note,
e.g., that the peak column densities from SMA and Herschel data are
similar, i.e.\ $5.2\times{}10^{23}~\rm{}cm^{-2}$ vs.\
$5.3\times{}10^{23}~\rm{}cm^{-2}$. This may result from two
factors. First, interferometer--induced spatial filtering may reduce
observed intensities. Second, the dust opacity law might be different
than assumed. None of this affects our conclusion that little dense
gas exists in G0.253+0.016. For example, if masses were higher by a
factor 5, this would imply virial parameters $\alpha\le{}0.1$ for all
SMA--detected $\rm{}N_2H^+$ cores with line widths
$\Delta{}v\sim{}0.5~\rm{}km\,s^{-1}$. Such low values for $\alpha$ are
very unusual (Pillai et al., in prep.), and thus unlikely. This
comparison suggests mass errors smaller than a factor 5.
\begin{figure}
\includegraphics[width=\linewidth]{mass_G0253.eps}
\caption{The density structure of G0.253+0.016 (\emph{red}). Reference
data are obtained using a hierarchical structure decomposition
(``dendrograms'': \citealt{rosolowsky2008:dendrograms}), based on
published structure analysis (\citealt{kauffmann2010:mass-size-i,
kauffmann2010:mass-size-ii}; \emph{grey lines}) and previously
unexplored Orion~A data (\citealt{kainulainen2011:confinement}, and
see Sec.~\ref{sec:mass-size}; \emph{green lines}). For reference,
the \emph{dotted line} highlights an $\rm{}H_2$ column density of
$10^{23}~\rm{}cm^{-2}$. \emph{Gray line and shading} indicate the
\citet{kauffmann2010:irdcs} limit
(Eq.~\ref{eq:hmsf-limit}).\label{fig:mass-size}}
\end{figure}
\subsection{Decay of Gas Motions \& Accretion onto Cores}
\label{sec:decay-motions}
HMSF in G0.253+0.016 is still possible if structures in the cloud grow
more dense over time. Growth is controlled by the flow crossing time
$\ell{}/\sigma_v$ for a spatial scale $\ell$,
\begin{eqnarray}
t_{\rm{}cross}&=&1~{\rm{}Myr}\,\left(\frac{\ell}{\rm{}pc}\right)\,\left(\frac{\sigma_v}{\rm{}km\,s^{-1}}\right)^{-1}\\
&=&2.4~{\rm{}Myr}\,\left(\frac{\ell}{\rm{}pc}\right)\,\left(\frac{\Delta{}v}{\rm{}km\,s^{-1}}\right)^{-1}\,{}.
\label{eq:decay}
\end{eqnarray}
For the entire cloud, using $\ell=2R$ and
$\Delta{}v\approx{}35~\rm{}km\,s^{-1}$,
$t_{\rm{}cross}\approx{}0.4~\rm{}Myr$. Undriven turbulence is expected
to decay as ${\rm{}e}^{-t/t_{\rm{}cross}}$
\citep{maclow2004:review}. Thus, global collapse would take several
crossing times, i.e., a few times $0.4~\rm{}Myr$.
If observed velocity dispersions were reflecting pure inward motions
of speed $\sigma_v/2$, structures of constant radius $R$ could ingest
material from radii $r=R~{\rm{}to}~2R$ within the time
$(2R-R)/(\sigma_v/2)=2R/\sigma_v\equiv{}2\cdot{}t_{\rm{}cross}$. For
the $\rm{}N_2H^+$ fragments listed in Table~\ref{tab:features},
$2R/\sigma_v=0.14~{\rm{}to}~0.40~\rm{}Myr$. Adopting $R=0.1~\rm{}pc$
and $\Delta{}v=0.5~{\rm{}to}~6.0~\rm{}km\,s^{-1}$
(Fig.~\ref{fig:n2h-analysis}),
$2R/\sigma_v=0.08~{\rm{}to}~0.9~\rm{}Myr$ holds for the SMA--detected
structures.
These timescales control the structure evolution. Several
$10^5~\rm{}yr$ must pass before cores as dense as those in
Orion~A can form.
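The timescales above follow directly from Eq.~(\ref{eq:decay}) and the fragment properties of Table~\ref{tab:features}; a short sketch of the arithmetic:

```python
# Crossing and accretion timescales from Eq. (decay):
# t = 1 Myr (l/pc) (sigma_v/km s^-1)^-1 = 2.4 Myr (l/pc) (Delta_v/km s^-1)^-1.
# Fragment radii and dispersions are taken from Table (features).

def t_myr(l_pc, sigma_v_kms):
    return 1.0 * l_pc / sigma_v_kms

# whole cloud: l = 2R = 5.6 pc, Delta_v ~ 35 km/s
t_cloud = 2.4 * 5.6 / 35.0

# accretion time 2R/sigma_v for the CARMA fragments (R in pc, sigma_v in km/s)
fragments = {1: (0.47, 6.1), 2: (0.70, 5.4), 3: (0.68, 4.3), 4: (0.63, 5.9),
             5: (1.06, 5.2), 6: (1.16, 8.4), 7: (1.62, 13.9)}
t_acc = {k: t_myr(2 * R, s) for k, (R, s) in fragments.items()}
print(round(t_cloud, 2),
      round(min(t_acc.values()), 2), round(max(t_acc.values()), 2))
```

This gives $t_{\rm{}cross}\approx{}0.38~\rm{}Myr$ for the cloud and fragment accretion times of $0.15$--$0.41~\rm{}Myr$, matching the quoted $\approx{}0.4$ and $0.14$--$0.40~\rm{}Myr$ to rounding.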
\subsection{Nature and Future of G0.253+0.016\label{sec:nature-future}}
The low SF rate for this compact and massive cloud indicates that
G0.253+0.016 is in an extreme physical state (Sec.~\ref{sec:sf-law}).
\citet{lis1998:m025} and \citet{lis2001:ir-spectra} take the existence
of widespread SiO emission as evidence for an ongoing cloud--cloud
collision. This molecule is believed to trace shocks unambiguously:
Silicon is usually locked up in dust grains, and requires grain--grain
collisions at velocities $\gtrsim{}20~\rm{}km\,s^{-1}$ to be released
\citep{guillet2009:shocks-sio}. Further gas phase reactions yield SiO
in $\lesssim{}10^3~\rm{}yr$. Figure~\ref{fig:carma} shows for the
first time that the SiO distribution is likely too smooth and extended
to result from outflows associated with a population of embedded
stars. Processes on larger spatial scales, such as cloud--cloud
collisions, are a more probable origin. It thus seems plausible that
G0.253+0.016 is a very young cloud that will soon dissipate internal
motions and efficiently form stars in a few $10^5~\rm{}yr$
(Sec.~\ref{sec:decay-motions}).
However, the cloud may not be gravitationally bound and simply
disperse (Sec.~\ref{sec:self-gravity}). Furthermore, G0.253+0.016 is
subject to the disruptive GC environment: as already mentioned by L12,
following the GC orbit proposed by \citet{molinari2011:cmz-ring},
G0.253+0.016 will arrive at the present location of Sgr~B2 in
$\sim{}8.5\times{}10^5~\rm{}yr$. The latter cloud essentially
represents a standing shock, where gas clouds on different GC orbit
families collide (e.g., \citealt{bally2010:bolocam-gc}). Given the
disturbed nature of the Sgr~B2 region, it is not clear whether
G0.253+0.016 will then be disrupted or be pushed into collapse.
\section{Conclusion}
\label{sec:conclusion}
G0.253+0.016 deviates from current ``star formation laws'' (e.g.,
\citealt{lada2010:sf-efficiency}) by a factor $\sim{}45$
(Sec.~\ref{sec:sf-law}). The scarcity of significant dust and
$\rm{}N_2H^+$ cores in our SMA interferometer maps
(Secs.~\ref{sec:dust-densities}--\ref{sec:n2h+-densities}) reveals
that G0.253+0.016 is presently far away from forming high--mass stars
and clusters (Sec.~\ref{sec:mass-size}): considerable evolution for
several $10^5~\rm{}yr$ is needed before such star formation might
occur (Sec.~\ref{sec:decay-motions}). The cloud might thus be very
young and currently forming in a cloud--cloud collision indicated by
SiO shocks (Sec.~\ref{sec:nature-future}). Given the disruptive
dynamics of the Galactic Center region (Sec.~\ref{sec:nature-future}),
and the potentially unbound nature of the cloud
(Sec.~\ref{sec:self-gravity}), it is unclear whether evolution towards
significant star formation will ever happen.
\acknowledgements{We thank S.~Longmore for giving us access to
\citet{longmore2011:m025} in advance of publication, D.~Lis and
K.~Menten for enlightening discussions, and an anonymous referee for
making the paper more readable. JK is grateful to D.~Li and
P.~Goldsmith, his hosts at JPL, for making this work possible. Part
of the research was carried out at the Jet Propulsion Laboratory,
California Institute of Technology, under a contract with the
National Aeronautics and Space Administration. TP acknowledges
support from CARMA, supported by the National Science Foundation
through grant AST~05--40399.}
Facilities: \facility{SMA}, \facility{CARMA}, \facility{Spitzer}
% quant-ph/0412001
\section{Introduction}
\label{sec:Introduction}
The statistics of an arbitrary quantum measurement are described by
a positive operator valued measure, or POVM (Davies~\cite{Davies}, Busch \emph{et
al}~\cite{Busch1}, Peres~\cite{Peres}, Nielsen and Chuang~\cite{Nielsen} and
references cited therein). Suppose the measurement has only a finite number of
distinct outcomes. Then the corresponding POVM assigns to each outcome $i$
the positive operator
$\hat{E}_{i}$ with the property that
$\Tr (\hat{E}_i
\hat{\rho})$ is the probability of obtaining outcome $i$
(where $\hat{\rho}$ is the density operator). Since $\sum_{i} \Tr (\hat{E}_i
\hat{\rho})=1$ for all $\hat{\rho}$ we must have $\sum_{i} \hat{E}_i=1$.
A POVM is said to be \emph{informationally complete} if the probabilities
$\Tr (\hat{E}_i
\hat{\rho})$ uniquely determine the density operator $\hat{\rho}$. The concept
of informational completeness is originally due to
Prugove\v{c}ki~\cite{Prugo} (also see Busch~\cite{Busch2}, Busch \emph{et
al}~\cite{Busch1}, d'Ariano \emph{et al}~\cite{dAriano}, Flammia \emph{et
al}~\cite{Flammia}, Finkelstein~\cite{Finkelstein}, and references cited
therein). It has an obvious relevance to the problem of quantum state
determination. It also plays an important role in
Caves \emph{et al}'s~\cite{Caves1,Caves2,FuchsSasaki,Fuchs} Bayesian approach to
the interpretation of quantum mechanics, and in Hardy's~\cite{Hardy1,Hardy2}
proposed axiomatization.
Suppose the Hilbert space has finite dimension $d$. Then it is easily seen that
an informationally complete POVM must contain at least $d^2$ distinct operators
$\hat{E}_i$. An informationally complete POVM is said to be \emph{symmetric
informationally complete} (or SIC) if it contains exactly this minimal number of
distinct operators and if, in addition,
\begin{enumerate}
\item $\lambda \hat{E}_i$ is a
one dimensional projector for all $i$ and some fixed constant $\lambda$.
\item The overlap $\Tr(\hat{E}_i \hat{E}_j)$ is the same for every pair of
distinct labels $i, j$.
\end{enumerate}
It is straightforward to show that this is equivalent to the requirement that, for
each
$i$,
\begin{equation}
\hat{E}_i = \frac{1}{d} \left| \psi_i \right> \left< \psi_i \right|
\end{equation}
where the $d^2$ vectors $|\psi_i\rangle$ satisfy
\begin{equation}
\left|\langle \psi_i | \psi_j \rangle \right| = \begin{cases}
1 \qquad & i=j\\
\frac{1}{\sqrt{d+1}} \qquad & i\ne j
\end{cases}
\label{eq:OverlapCondition}
\end{equation}
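Eq.~(\ref{eq:OverlapCondition}) can be illustrated numerically in the simplest case $d=2$: the four states whose Bloch vectors form a regular tetrahedron are the standard qubit SIC (a textbook example, not taken from this paper):

```python
import numpy as np

# Numerical illustration of Eq. (OverlapCondition) for d = 2: the four states
# whose Bloch vectors form a regular tetrahedron are the standard qubit SIC.
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

def bloch_to_state(n):
    """Map a unit Bloch vector to the corresponding pure qubit state."""
    theta = np.arccos(n[2])
    phi = np.arctan2(n[1], n[0])
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

states = [bloch_to_state(n) for n in bloch]
overlaps = [abs(np.vdot(states[i], states[j]))
            for i in range(4) for j in range(i + 1, 4)]
print(np.allclose(overlaps, 1 / np.sqrt(3)))   # True: |<psi_i|psi_j>| = 1/sqrt(d+1)
```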
SIC-POVMs were introduced in a dissertation by Zauner~\cite{Zauner}, and in
Renes
\emph{et al}~\cite{Renes}. Wootters~\cite{Wootters1},
Bengtsson and Ericsson~\cite{Bengtsson,BengtssonB}
and Grassl~\cite{Grassl} have made further contributions. There appear to be
some intimate connections with the theory of mutually unbiased
bases~\cite{Wootters1,Wootters2,Wootters3}, finite affine
planes~\cite{Wootters1,Bengtsson,BengtssonB}, and
polytopes~\cite{Bengtsson,BengtssonB}.
If SIC-POVMs existed in every finite dimension (or, failing that, in a
sufficiently large set of finite dimensions) they would constitute a naturally
distinguished class of POVMs which might be expected to have many interesting
applications to quantum tomography, cryptography and information theory
generally. They would also be obvious candidates for the ``fiducial'' or
``standard'' POVMs featuring in the work of Fuchs~\cite{Fuchs} and
Hardy~\cite{Hardy1,Hardy2}.
The question consequently arises: is it in fact true that SIC-POVMs exist in
every finite dimension? The answer to this question is still unknown. Analytic
solutions to Eqs.~(\ref{eq:OverlapCondition})
have been constructed in dimensions
$2,3,4,5,6$ and $8$. Moreover Renes \emph{et al}~\cite{Renes} have constructed
numerical solutions in dimensions $5$ to $45$ (the actual vectors can be
downloaded from their website~\cite{RenesVectors}). So one may plausibly
speculate that SIC-POVMs exist in every finite dimension. But it has not been
proved.
The SIC-POVMs which have so far been
explicitly\footnote{
Renes \emph{et al}~\cite{Renes} mention that they have constructed numerical
solutions
which are covariant under the action of other groups, but they do not give any
details.
}
described in the literature are
all covariant under the action of the generalized Pauli group (or Weyl-Heisenberg
group, as it is often called). It is therefore natural to investigate their
behaviour under the action of the extended Clifford group. The Clifford group
proper is defined to be the normalizer of the generalized Pauli group, considered
as a subgroup of $\U(d)$ (the group consisting of all unitary operators in
dimension $d$). It is relevant to a number of areas of quantum
information theory, and it has been extensively discussed in the
literature~\cite{Gottesman1,Gottesman2,Gottesman3,Dehaene,Hostens,vanDenNest,vanDenNestB}.
Its relevance to the SIC-POVM problem has been stressed by Grassl~\cite{Grassl}.
As Grassl notes, it is related to the Jacobi group~\cite{JacobiRef}, which has
attracted some notice in the pure mathematical literature. We define the
extended Clifford group to be the group which results when the Clifford group
is enlarged, so as to include all
\emph{anti}-unitary operators which normalize the generalized Pauli group. As
we will see, this enlargement is essential if one wants to achieve a full
understanding of the SIC-POVM problem.
In Sections~\ref{sec:GPGroup}--\ref{sec:ExtendedClifford} we give a
self-contained account of the structure of the extended Clifford group. In the
course of this discussion we obtain a number of results concerning the structure
of the Clifford group proper which, to the best of our knowledge, have not
previously appeared in the literature and which may be of some independent
interest.
In Section~\ref{sec:CliffordTrace} we define and establish some of the properties
of a function we call the Clifford trace. We also identify a distinguished class
of order $3$ Clifford unitaries for which the Clifford trace $=-1$. We refer to
these as \emph{canonical} order $3$ unitaries.
In Section~\ref{sec:RBSCVectors} we analyze the vectors constructed numerically
by Renes \emph{et al}~\cite{Renes} (RBSC in the sequel) in dimensions $5$--$45$.
We show that each of them is an eigenvector of a canonical order $3$ Clifford
unitary. This suggests the conjecture that \emph{every} GP fiducial vector
is an eigenvector of a canonical order $3$ unitary. We also show that, with
one exception, the stability group of each RBSC vector is order~$3$ (the
exception being dimension
$7$, where the stability group is order~$6$).
In Section~\ref{sec:Zauner} we show that RBSC's results also support a
strengthened version of a conjecture of Zauner's~\cite{Zauner} (also see
Grassl~\cite{Grassl}).
In Section~\ref{sec:VectorsOrbitsStability} we use RBSC's numerical data,
regarding the total number of fiducial vectors in dimensions $2$--$7$, to give a
complete characterization of the orbits and stability groups in dimensions
$2$--$7$. Our results show that in each of these dimensions \emph{every} fiducial
vector covariant under the action of the generalized Pauli group is an
eigenvector of a canonical order $3$ Clifford unitary. We also identify the
total number of distinct orbits. It was already known~\cite{Renes,Grassl} that
there are infinitely many orbits in dimension~$3$, and one orbit in
dimensions $2$ and $6$. We show that there is, likewise, only one orbit in
dimensions $4$ and $5$, but two distinct orbits in dimension $7$. We also
construct exact expressions for two fiducial vectors in dimension $7$ (one on
each of the two distinct orbits).
RBSC's numerical data may suggest that, after dimension $7$, the stability group
of every fiducial vector has order $3$. In Section~\ref{sec:dimension19} we show
that there is at least one exception to that putative rule by constructing an
exact expression for a fiducial vector in dimension $19$ for which the stability
group has order $\ge 18$.
Our construction of exact solutions in dimensions $7$ and $19$ was
facilitated by the fact that in these dimensions there exist canonical order $3$
unitaries having a particularly simple form. In Section~\ref{sec:diagonalF} we
show that a similar simplification occurs in every dimension $d$ for which (a)
$d$ has at least one prime factor $= 1 \;(\text{mod}\; 3)$, (b) $d$ has no prime
factors $= 2 \;(\text{mod}\; 3)$ and (c) $d$ is not divisible by $9$. In other
words, it happens when $d=7, 13, 19, 21, 31, \dots $.
\section{Fiducial Vectors for the Generalized Pauli Group}
\label{sec:GPGroup}
The SIC-POVMs which have been constructed to date all have a certain group
covariance property. Let $G$ be a finite group having $d^2$ elements,
and suppose we have an injective map $g \to \hat{U}_g $ which associates to each
$g\in G$ a unitary operator $\hat{U}_g$ acting on $d$-dimensional Hilbert space.
Suppose that for all $g$, $g'$
\begin{equation}
\hat{U}_{g} \hat{U}_{g'} = e^{i \xi_{g g'}} \hat{U}_{g g'}
\end{equation}
where $e^{i \xi_{g g'}}$ is a phase (so the map defines a group homomorphism of
$G$ into the quotient group $\U(d)/\Uc(d)$, where
$\Uc(d)$ is the centre of $\U(d)$). Finally (and this, of course, is the
difficult part) suppose we can find a vector $|\psi\rangle\in
\mathbb{C}^d$ such that $\langle\psi|\psi\rangle=1$ and
\begin{equation}
\bigl| \langle \psi | \hat{U}_g | \psi\rangle \bigr| = \frac{1}{\sqrt{d+1}}
\end{equation}
for all $g\neq e$ ($e$ being the identity of $G$). Then
the assignment
\begin{equation}
\hat{E}_{g} = \frac{1}{d} \hat{U}_{g} |\psi\rangle \langle \psi|
\hat{U}_{g}^{\dagger}
\end{equation}
defines a SIC-POVM on $\mathbb{C}^{d}$. The vector $|\psi\rangle$ is said to be a
\emph{fiducial vector}.
To date attention has been largely focussed on the case
$G=(\mathbb{Z}_d)^2$, where $\mathbb{Z}_d$ is the set of integers
$0,1,\dots, d-1$ under addition \emph{modulo} $d$ (although there is numerical
evidence that fiducial vectors exist for other choices of group~\cite{Renes}).
That is also the case on which we will focus here.
To construct a suitable map $(\mathbb{Z}_d)^2 \to \U(d)$, let $|e_0\rangle,
|e_1\rangle, \dots |e_{d-1}\rangle$ be an orthonormal basis for $\mathbb{C}^d$,
and let $\hat{T}$ be the operator defined by
\begin{equation}
\hat{T} |e_r\rangle =\omega^r |e_r\rangle
\label{eq:TDef}
\end{equation}
where $\omega = e^{2 \pi i/d}$. Let $\hat{S}$ be the shift operator
\begin{equation}
\hat{S} |e_r\rangle = \begin{cases}
|e_{r+1}\rangle \qquad & r=0,1,\dots , d-2
\\
|e_0\rangle \qquad & r=d-1
\end{cases}
\label{eq:SDef}
\end{equation}
Then define, for each pair of integers $\mathbf{p}=(p_1,p_2)\in \mathbb{Z}^2$,
\begin{equation}
\hat{D}_{\mathbf{p}} =\tau^{p_1 p_2} \hat{S}^{p_1} \hat{T}^{p_2}
\label{eq:DOpDefinition}
\end{equation}
where $\tau = - e^{\pi i /d}$ (the minus sign means that
$\tau^{d^2}=1$ for all
$d$, thereby simplifying some of the formulae needed in the sequel). We have,
for all $\mathbf{p}, \mathbf{q} \in \mathbb{Z}^2 $,
\begin{align}
\hat{D}_{\mathbf{p}}^\dagger & = \hat{D}_{-\mathbf{p}}
\label{eq:DConjugate}
\\
\hat{D}_{\mathbf{p}} \hat{D}_{\mathbf{q}} & = \tau^{\langle
\mathbf{p},\mathbf{q}\rangle} \hat{D}_{\mathbf{p} + \mathbf{q}}
\label{eq:DCompositionRule}
\intertext{and}
\hat{D}_{\mathbf{p} + d\mathbf{q}} & = \begin{cases}
\hat{D}_{\mathbf{p}} \qquad & \text{if $d$ is odd}
\\
(-1)^{\langle \mathbf{p},\mathbf{q}\rangle} \hat{D}_{\mathbf{p}}
\qquad & \text{if $d$ is even}
\end{cases}
\label{eq:DShiftExpression}
\end{align}
where $\langle
\mathbf{p},\mathbf{q}\rangle$ is the symplectic form
\begin{equation}
\langle
\mathbf{p},\mathbf{q}\rangle=p_2 q_1-p_1 q_2
\end{equation}
Consequently the map $\mathbf{p}
\in (\mathbb{Z}_d)^2 \to \hat{D}_{\mathbf{p}} \in \U(d)$
has
all the required properties. The operators $\hat{D}_{\mathbf{p}}$ are sometimes
called generalized Pauli matrices. So we will say that a vector
$|\psi\rangle\in \mathbb{C}^d$ is a generalized Pauli fiducial vector, or
\emph{GP fiducial vector} for short, if it is a fiducial vector relative to the
action of these operators: \emph{i.e.}~if $\langle \psi |\psi \rangle =1$ and
\begin{equation}
\bigl| \langle \psi | \hat{D}_{\mathbf{p}} |\psi \rangle
\bigr| = \frac{1}{\sqrt{d+1}}
\label{eq:GPfiducial}
\end{equation}
for every $\mathbf{p} \in \mathbb{Z}^2$ with $\mathbf{p} \neq \boldsymbol{0}\;(\text{mod}\;d)$.
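Eq.~(\ref{eq:GPfiducial}) can be checked numerically for $d=2$, with $\hat{S}$, $\hat{T}$ and $\hat{D}_{\mathbf{p}}$ as defined in Eqs.~(\ref{eq:TDef})--(\ref{eq:DOpDefinition}); the fiducial below is a well-known qubit example, chosen by us for illustration rather than derived in the text:

```python
import numpy as np

# Check of Eq. (GPfiducial) for d = 2. The fiducial (a, b e^{i pi/4}) with
# a^2 = (3+sqrt 3)/6, b^2 = (3-sqrt 3)/6 is a standard qubit example.
d = 2
omega, tau = np.exp(2j * np.pi / d), -np.exp(1j * np.pi / d)
S = np.roll(np.eye(d), 1, axis=0)           # S|e_r> = |e_{r+1 mod d}>
T = np.diag(omega ** np.arange(d))          # T|e_r> = omega^r |e_r>

def D(p1, p2):
    """Generalized Pauli operator D_p = tau^{p1 p2} S^{p1} T^{p2}."""
    return tau ** (p1 * p2) * np.linalg.matrix_power(S, p1) @ np.linalg.matrix_power(T, p2)

a, b = np.sqrt((3 + np.sqrt(3)) / 6), np.sqrt((3 - np.sqrt(3)) / 6)
psi = np.array([a, b * np.exp(1j * np.pi / 4)])   # candidate GP fiducial

vals = [abs(np.vdot(psi, D(p1, p2) @ psi))
        for p1 in range(d) for p2 in range(d) if (p1, p2) != (0, 0)]
print(np.allclose(vals, 1 / np.sqrt(d + 1)))   # True
```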
The set of operators $\hat{D}_{\mathbf{p}}$ is not a group. However, it becomes
a group if we allow each $\hat{D}_{\mathbf{p}}$ to be multiplied by an arbitrary
phase. We will refer to the group
$\W(d) = \{e^{i \xi} \hat{D}_{\mathbf{p}}\colon \xi \in \mathbb{R},
\mathbf{p} \in
\mathbb{Z}^2\}$ so obtained as the generalized Pauli
group\footnote{
Also known as the Weyl-Heisenberg group.
Our definition is, perhaps, slightly unconventional. It would be more usual
to define $\W(d) =\{\tau^{n}
\hat{D}_{\mathbf{p}}\colon n
\in
\mathbb{Z},
\mathbf{p} \in
\mathbb{Z}^2\}$---\emph{i.e.}\ the subgroup generated by the operators
$\hat{D}_{\mathbf{p}}$.
}.
We now want to investigate the normalizer of $\W(d)$: \emph{i.e.}\ the
group $\C(d)$ consisting of all unitary operators $\hat{U}\in \U(d)$ with
the property
\begin{equation}
\hat{U} \W(d) \hat{U}^{\dagger} = \W(d)
\end{equation}
The significance of this group for us is that it generates automorphisms of
$\W(d)$ according to the prescription
\begin{equation}
\hat{P} \to \hat{U} \hat{P} \hat{U}^{\dagger}
\end{equation}
Consequently, if $|\psi\rangle$ is a GP fiducial vector, then so is $\hat{U}
|\psi\rangle$ for every
$\hat{U}\in
\C(d)$.
The group $\C(d)$ is known as the Clifford group, and has been
extensively discussed in the
literature~\cite{Gottesman1,Gottesman2,Gottesman3,Dehaene,Hostens,vanDenNest,vanDenNestB}.
Its relevance to the SIC-POVM problem has been stressed by
Grassl~\cite{Grassl}. However, none of these accounts
derive all the results needed for our analysis of the RBSC
vectors. In the interests of
readability we give a unified treatment in the next section.
\section{The Clifford Group: Structure, and Calculation of the Unitaries}
\label{sec:CliffordGroup}
We begin with some definitions. Let
\begin{equation}
\overline{d}=\begin{cases} d \qquad & \text{if $d$ is odd}
\\
2 d \qquad & \text{if $d$ is even}
\end{cases}
\end{equation}
Let $\SL(2, \mathbb{Z}_{\overline{d}})$ be the group consisting of all $2\times 2$
matrices
\begin{equation}
\begin{pmatrix}
\alpha & \beta \\ \gamma & \delta
\end{pmatrix}
\end{equation}
such that $\alpha, \beta, \gamma, \delta \in \mathbb{Z}_{\overline{d}}$ and
$\alpha \delta - \beta \gamma = 1 \; (\text{mod} \; \overline{d})$. Note
that inverses exist in this group because the condition
$\alpha \delta - \beta \gamma = 1 \; (\text{mod} \; \overline{d})$ implies
\begin{equation}
\begin{pmatrix}
\alpha & \beta \\ \gamma & \delta
\end{pmatrix}
\begin{pmatrix}
\delta & -\beta \\ -\gamma & \alpha
\end{pmatrix}
=
\begin{pmatrix}
1 & 0\\ 0 & 1
\end{pmatrix}
\end{equation}
in arithmetic \emph{modulo} $\overline{d}$.
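The adjugate identity above can be spot-checked mechanically; the sample matrix below is an arbitrary choice of ours with determinant $1\;(\text{mod}\;\overline{d})$:

```python
# Spot-check of the adjugate identity above in SL(2, Z_8) (d = 4, dbar = 8).
# The sample matrix [[3, 2], [1, 1]] is an arbitrary choice with det = 1 (mod 8).
dbar = 8
a, b, c, e = 3, 2, 1, 1                       # det = 3*1 - 2*1 = 1 (mod 8)
assert (a * e - b * c) % dbar == 1
# multiply by the adjugate [[e, -b], [-c, a]], reducing mod dbar
prod = [(a * e - b * c) % dbar, (-a * b + b * a) % dbar,
        (c * e - e * c) % dbar, (-c * b + e * a) % dbar]
print(prod)   # [1, 0, 0, 1] -- the identity, so inverses exist in the group
```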
We then have
\begin{lemma}
\label{lem:CliffordStructure1}
For each unitary operator $\hat{U}\in\C(d)$ there exists a matrix $F\in
\SL(2,\mathbb{Z}_{\overline{d}})$ and a vector
$\boldsymbol{\chi}\in (\mathbb{Z}_d)^2$ such that
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}
= \omega^{\langle \boldsymbol{\chi}, F\mathbf{p}\rangle} \hat{D}_{F\mathbf{p}}
\end{equation}
for all $\mathbf{p}\in\mathbb{Z}^2$ (where $\omega = \tau^2=e^{2 \pi i/d}$, as
before).
\label{lem:FChiToAisSurjective}
\end{lemma}
\begin{proof}
If $\hat{U}\in\C(d)$ it is immediate that there exist functions $f$ and
$g$ such that
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}
=e^{ i g(\mathbf{p})} \hat{D}_{f(\mathbf{p})}
\label{eq:Lem1EqA}
\end{equation}
for all $\mathbf{p}\in \mathbb{Z}^2$. It follows from
Eq.~(\ref{eq:DCompositionRule}) that
\begin{equation}
\left( e^{ i g(\mathbf{p})} \hat{D}_{f(\mathbf{p})} \right)
\left( e^{ i g(\mathbf{q})} \hat{D}_{f(\mathbf{q})} \right)
=\tau^{\langle{\mathbf{p},\mathbf{q}\rangle}}
\left(e^{ i g(\mathbf{p}+\mathbf{q})} \hat{D}_{f(\mathbf{p}+\mathbf{q})}
\right)
\end{equation}
for all $\mathbf{p}$, $\mathbf{q}\in\mathbb{Z}^2$. Consequently
\begin{equation}
e^{ i ( g(\mathbf{p})+g(\mathbf{q}))} \tau^{\langle
f(\mathbf{p}),f(\mathbf{q})\rangle} \hat{D}_{f(\mathbf{p})+f(\mathbf{q})}
= e^{ i g(\mathbf{p}+\mathbf{q})} \tau^{\langle{\mathbf{p},\mathbf{q}\rangle}}
\hat{D}_{f(\mathbf{p}+\mathbf{q})}
\label{eq:Lem1EqB}
\end{equation}
which implies $f(\mathbf{p}+\mathbf{q}) = f(\mathbf{p})+f(\mathbf{q})
\;(\text{mod}\; d)$. We may therefore write
\begin{equation}
f(\mathbf{p}) = F' \mathbf{p} + d h(\mathbf{p})
\end{equation}
for some matrix $F'$ and function $h$. Inserting this expression in
Eq.~(\ref{eq:Lem1EqA}) gives, in view of Eq.~(\ref{eq:DShiftExpression}),
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}
= e^{ i g(\mathbf{p})} \hat{D}_{F'\mathbf{p}+ d h(\mathbf{p})}
=\begin{cases}
e^{ i g(\mathbf{p})} \hat{D}_{F'\mathbf{p}} \qquad & \text{$d$ odd}
\\
e^{ i g(\mathbf{p})} (-1)^{\langle\mathbf{p},h(\mathbf{p})\rangle}
\hat{D}_{F'\mathbf{p}} \qquad & \text{$d$ even}
\end{cases}
\end{equation}
With the appropriate definition of $g'$ this means
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}
=e^{ i g'(\mathbf{p})} \hat{D}_{F'\mathbf{p}}
\label{eq:Lem1EqC}
\end{equation}
for all $\mathbf{p}$. Repeating the argument which led to
Eq.~(\ref{eq:Lem1EqB}) we find
\begin{equation}
e^{i g'(\mathbf{p}+\mathbf{q})-g'(\mathbf{p})-g'(\mathbf{q})}
\tau^{\langle \mathbf{p},\mathbf{q}\rangle - \langle
F'\mathbf{p},F'\mathbf{q}\rangle}=1
\label{eq:Lem1EqD}
\end{equation}
Interchanging $\mathbf{p}$ and $\mathbf{q}$ gives
\begin{equation}
e^{i g'(\mathbf{p}+\mathbf{q})-g'(\mathbf{p})-g'(\mathbf{q})}
\tau^{-\langle \mathbf{p},\mathbf{q}\rangle + \langle
F'\mathbf{p},F'\mathbf{q}\rangle}=1
\end{equation}
We consequently require
\begin{equation}
\omega^{\langle \mathbf{p},\mathbf{q}\rangle - \langle
F'\mathbf{p},F'\mathbf{q}\rangle}
= \tau^{2\left( \langle \mathbf{p},\mathbf{q}\rangle - \langle
F'\mathbf{p},F'\mathbf{q}\rangle \right)}
=1
\end{equation}
for all $\mathbf{p}, \mathbf{q}$. It is readily verified that
$\langle
F'\mathbf{p},F'\mathbf{q}\rangle =(\Det F') \langle
\mathbf{p},\mathbf{q}\rangle $. We must therefore have
\begin{equation}
\Det F' = 1 \; (\text{mod} \; d)
\end{equation}
If $d$ is odd, or if $d$ is even and $\Det F' = 1\; (\text{mod} \;
\overline{d}) $, we can find a matrix $F\in \SL(2,\mathbb{Z}_{\overline{d}})$ such
that
$F=F' \; (\text{mod} \;
\overline{d})$. It then follows from Eq.~(\ref{eq:DShiftExpression}) that
$\hat{D}_{F\mathbf{p}}=\hat{D}_{F'\mathbf{p}}$ for all $\mathbf{p}$.
Suppose, on the other hand,
$d$ is even and
$\Det F' \neq 1\; (\text{mod} \; \overline{d}) $. Then
$\Det F' = d+1 \; (\text{mod} \; \overline{d})$. Write
\begin{equation}
F'=
\begin{pmatrix}
\alpha & \beta \\ \gamma & \delta
\end{pmatrix}
\end{equation}
We know $\alpha \delta -\beta \gamma = \Det F'$ is odd. So either $\alpha,
\delta$ are both odd, or else $\beta, \gamma$ are both odd. If $\alpha, \delta$
are both odd let
\begin{equation}
\Delta=
\begin{pmatrix}
1 & 0 \\ 0 & 0
\end{pmatrix}
\end{equation}
while if $\beta, \gamma$ are both odd let
\begin{equation}
\Delta=
\begin{pmatrix}
0 & 1 \\ 0 & 0
\end{pmatrix}
\end{equation}
Then $\Det (F'+d \Delta) = 1\;(\text{mod}\; \overline{d})$. We can therefore
choose a matrix $F\in \SL(2,\mathbb{Z}_{\overline{d}})$ such
that
$F=F'+d \Delta \; (\text{mod} \;
\overline{d})$. Inserting this expression in Eq.~(\ref{eq:Lem1EqC}) we have, in
view of Eq.~(\ref{eq:DShiftExpression}),
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}
=e^{ i g'(\mathbf{p})} \hat{D}_{(F-d \Delta )\mathbf{p}} = e^{ i g'(\mathbf{p})}
(-1)^{\langle F\mathbf{p},\Delta \mathbf{p}\rangle} \hat{D}_{F\mathbf{p}}
\end{equation}
We conclude that there is, in every case, a function $g''$ and a matrix
$F\in \SL(2,\mathbb{Z}_{\overline{d}})$ such that
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}
=
e^{ i g''(\mathbf{p})} \hat{D}_{F\mathbf{p}}
\end{equation}
for all $\mathbf{p}$.
It remains to establish the form of the function $g''$. We note, first of all,
that it follows from Eqs.~(\ref{eq:DOpDefinition})
and~(\ref{eq:DCompositionRule}) that
\begin{equation}
\bigl( \hat{D}_{\mathbf{p}}\bigr)^d=\hat{D}_{d \mathbf{p}} = \tau^{d^2 p_1 p_2}
\hat{S}^{d p_1}
\hat{T}^{d p_2} =1
\label{eq:DpPower}
\end{equation}
for all $\mathbf{p}$ (because $\hat{S}^d=\hat{T}^d = \tau^{d^2} =1$).
Consequently
\begin{equation}
1 = \hat{U} \bigl( \hat{D}_{\mathbf{p}}\bigr)^d \hat{U}^{\dagger}
=\bigl(\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}\bigr)^d
=e^{ i d g''(\mathbf{p})} \bigl( \hat{D}_{F \mathbf{p}}\bigr)^d
= e^{ i d g''(\mathbf{p})}
\end{equation}
for all $\mathbf{p}$. We must therefore have $e^{ i
g''(\mathbf{p})}=\omega^{\tilde{g}(\mathbf{p})}$ for some function $\tilde{g}$
taking values in $\mathbb{Z}_d$. Repeating the argument which led to
Eq.~(\ref{eq:Lem1EqD}) we find
\begin{equation}
\omega^{\tilde{g}(\mathbf{p}+\mathbf{q})-\tilde{g}(\mathbf{p})-\tilde{g}
(\mathbf{q})}
\tau^{\langle \mathbf{p},\mathbf{q}\rangle - \langle
F\mathbf{p},F\mathbf{q}\rangle}=1
\end{equation}
We have $\langle \mathbf{p},\mathbf{q}\rangle - \langle
F\mathbf{p},F\mathbf{q}\rangle = (1-\Det F)\langle \mathbf{p},\mathbf{q}\rangle
=0\; (\text{mod} \; \overline{d})$. Consequently
$\tau^{\langle \mathbf{p},\mathbf{q}\rangle - \langle
F\mathbf{p},F\mathbf{q}\rangle}=1$ (because $\tau^{\overline{d}}=1$) and so
\begin{equation}
\tilde{g}(\mathbf{p}+\mathbf{q})=\tilde{g}(\mathbf{p})+\tilde{g}
(\mathbf{q}) \; (\text{mod}\; d)
\end{equation}
for all $\mathbf{p}, \mathbf{q}$.
This implies $\tilde{g}(\mathbf{p})=\langle \boldsymbol{\chi}',\mathbf{p}\rangle
\; (\text{mod}\; d)$ for all $\mathbf{p}$, for some fixed
$\boldsymbol{\chi}' \in (\mathbb{Z}_d)^2$. Setting
$\boldsymbol{\chi}=F \boldsymbol{\chi}'$, and using the fact that $\langle
F^{-1}\boldsymbol{\chi},\mathbf{p}\rangle=\langle
\boldsymbol{\chi},F\mathbf{p}\rangle\; (\text{mod}\; d)$ we conclude
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}
=
\omega^{\langle
\boldsymbol{\chi},F\mathbf{p}\rangle} \hat{D}_{F\mathbf{p}}
\end{equation}
for all $\mathbf{p}$.
\end{proof}
We now want to prove the converse of Lemma~\ref{lem:FChiToAisSurjective}. That
is, we want to prove that, for each pair $F\in\SL(2,\mathbb{Z}_{\overline{d}})$
and
$\boldsymbol{\chi}\in (\mathbb{Z}_d)^2$ there is a corresponding operator
$\hat{U}\in\C(d)$. We also want to derive an explicit expression for the
operator $\hat{U}$ (this has, in effect, already been done by
Hostens~\emph{et
al}~\cite{Hostens}; however, the formulae we derive are different, and better
adapted to the questions addressed in this paper).
We begin by focussing on a special class of matrices $F$. Let
$[n_1,n_2,\dots,n_r]$ denote the GCD (greatest common divisor) of the integers
$n_1,n_2, \dots, n_r$. We define the class of \emph{prime matrices} to be the
set of all matrices
\begin{equation}
F=
\begin{pmatrix}
\alpha & \beta \\ \gamma & \delta
\end{pmatrix}
\end{equation}
$\in \SL(2, \mathbb{Z}_{\overline{d}})$ such that $\beta$ is non-zero and
$[\beta,
\overline{d}]=1$ (so that $\beta$ has a multiplicative inverse in
$\mathbb{Z}_{\overline{d}}$). We then have
\begin{lemma}
\label{lem:CliffordStructure2}
Let
\begin{equation}
F=
\begin{pmatrix}
\alpha & \beta \\ \gamma & \delta
\end{pmatrix}
\end{equation}
be a prime matrix $\in \SL(2, \mathbb{Z}_{\overline{d}})$. Let
\begin{equation}
\hat{V}_{F} = \frac{1}{\sqrt{d}} \sum_{r,s=0}^{d-1}
\tau^{\beta^{-1} \left(\alpha s^2 - 2 r s+ \delta r^2 \right)} |e_r\rangle
\langle e_s|
\label{eq:VFDef}
\end{equation}
(where
$\beta^{-1}\in \mathbb{Z}_{\overline{d}}$ is such that $\beta^{-1}\beta = 1 \;
(\text{mod}\;
\overline{d})$). Then $\hat{V}_F$ is a unitary operator $\in\C(d)$ such
that
\begin{equation}
\hat{V}_{F}^{\vphantom{\dagger}} \hat{D}_{\mathbf{p}}^{\vphantom{\dagger}}
\hat{V}_{F}^{\dagger} = \hat{D}_{F\mathbf{p}}^{\vphantom{\dagger}}
\end{equation}
for all $\mathbf{p}$.
\end{lemma}
\begin{proof}
Let
\begin{equation}
\hat{S}' = \hat{D}_{(\alpha,\gamma)} \qquad \text{and} \qquad
\hat{T}' = \hat{D}_{(\beta,\delta)}
\end{equation}
and define
\begin{equation}
|f_{0}\rangle = \frac{1}{\sqrt{d}} \sum_{r=0}^{d-1} (\hat{T}')^r |e_0\rangle
\end{equation}
It follows from Eq.~(\ref{eq:DpPower}) that $\bigl( \hat{T}' \bigr)^{d} =
1$.
Consequently
\begin{equation}
\hat{T}'|f_{0}\rangle = |f_{0}\rangle
\end{equation}
It follows from Eq.~(\ref{eq:DCompositionRule}) that
$\hat{T}' \hat{S}'=\omega \hat{S}' \hat{T}'$. So we can obtain a complete set of
eigenvectors by laddering. Specifically, let
\begin{equation}
|f_r\rangle = \bigl(\hat{S}'\bigr)^{r} | f_0\rangle
\end{equation}
for $r=1,\dots, d-1$. Then
\begin{equation}
\hat{T}' |f_r\rangle = \omega^{r} |f_r\rangle
\label{eq:Lem2EqB}
\end{equation}
for all $r$. Since $\bigl( \hat{S}' \bigr)^{d} =
1$ (as follows from Eq.~(\ref{eq:DpPower})) we also have
\begin{equation}
\hat{S}' |f_r\rangle = | f_{r \oplus_d 1}\rangle
\end{equation}
for all $r$ (where $\oplus_d$ signifies addition \emph{modulo} $d$).
We next show that the vectors $|f_r\rangle$ are orthonormal. It follows from
Eqs.~(\ref{eq:TDef}), (\ref{eq:SDef}), (\ref{eq:DOpDefinition})
and~(\ref{eq:DCompositionRule}) that
\begin{equation}
(\hat{T}')^r |e_{0}\rangle
=
\hat{D}_{(r\beta,r\delta)} |e_{0}\rangle
=
\tau^{ \beta \delta r^2} \hat{S}^{\beta r} |e_{0}\rangle
\end{equation}
and consequently
\begin{equation}
|f_0\rangle=\left(\frac{1}{\sqrt{d}} \sum_{r=0}^{d-1} \tau^{ \beta \delta r^2}
\hat{S}^{\beta r}\right) |e_{0}\rangle
=
\left(\frac{1}{\sqrt{d}} \sum_{r=0}^{d-1} \tau^{ \beta^{-1} \delta (\beta r)^2}
\hat{S}^{\beta r}\right) |e_{0}\rangle
\end{equation}
(where we have used the fact that $\tau^{\overline{d}}=1$).
We need to be careful at this point, due to the fact that congruence
\emph{modulo} $d$ need not imply congruence \emph{modulo} $\overline{d}$. Let
$q_r$ be the quotient of $\beta r$ on division by $d$, and let $t_r$ be the
remainder. So $\beta r = q_r d + t_r$ and
\begin{equation}
|f_0\rangle=
\left(\frac{1}{\sqrt{d}} \sum_{r=0}^{d-1} \tau^{ \beta^{-1} \delta (q_r d +
t_r)^2}
\hat{S}^{q_r d + t_r}\right) |e_{0}\rangle
\end{equation}
We have
\begin{align}
\hat{S}^{q_r d + t_r} & = \hat{S}^{t_r}
\intertext{and}
\tau^{ \beta^{-1} \delta (q_r d +
t_r)^2} & =
\tau^{\beta^{-1} \delta (t_r^2 + 2 d q_r t_r + d^2 q_r^2)}
=
\tau^{\beta^{-1} \delta t_r^2}
\end{align}
(because $\tau^{2 d}=\tau^{d^2} =1$). Consequently
\begin{equation}
|f_0\rangle=
\left(\frac{1}{\sqrt{d}} \sum_{r=0}^{d-1} \tau^{ \beta^{-1} \delta
t_r^2}
\hat{S}^{t_r}\right) |e_{0}\rangle
=
\frac{1}{\sqrt{d}}\sum_{r=0}^{d-1} \tau^{ \beta^{-1} \delta
t_r^2}|e_{t_r}\rangle
\end{equation}
The fact that $[\beta,\overline{d}]=1$ implies that $[\beta,d]=1$. It follows
that, as
$r$ runs over the integers $0,1,\dots, d-1$, so does $t_r$ (though not
necessarily in the same order). Consequently
\begin{equation}
|f_0\rangle=
\frac{1}{\sqrt{d}}\sum_{t=0}^{d-1} \tau^{ \beta^{-1} \delta
t^2}|e_{t}\rangle
\label{eq:Lem2EqA}
\end{equation}
It follows that
\begin{equation}
\langle f_r|f_r\rangle = \langle f_0|( \hat{S}')^{-r} (\hat{S}')^{r} |
f_0\rangle = \langle f_{0} | f_{0}\rangle = 1
\end{equation}
The fact that $\langle f_r | f_s \rangle=0$ when $r\neq s$ is an immediate
consequence of the fact that $|f_r\rangle$, $|f_s \rangle$ are eigenvectors of
$\hat{T}'$ corresponding to different eigenvalues. We conclude that
\begin{equation}
\langle f_r | f_s \rangle=\delta_{rs}
\end{equation}
as claimed.
We now want to calculate an explicit formula for $|f_r\rangle$ when $r>0$. It
follows from previous results that
\begin{equation}
|f_r\rangle
=
\hat{D}_{r\alpha,r\gamma} |f_0\rangle
=
\frac{1}{\sqrt{d}}\sum_{t=0}^{d-1} \tau^{ \beta^{-1} \delta
t^2+\alpha \gamma r^2 + 2 \gamma r t} (\hat{S})^{r \alpha} |e_{t}\rangle
\end{equation}
By an argument similar to the one leading to Eq.~(\ref{eq:Lem2EqA}) we deduce
\begin{align}
|f_r\rangle
& =
\frac{1}{\sqrt{d}}\sum_{t=0}^{d-1} \tau^{ \beta^{-1} \delta
(t-\alpha r)^2+\alpha \gamma r^2 + 2 \gamma r (t-\alpha r)}
|e_{t}\rangle
\\
& =
\frac{1}{\sqrt{d}}\sum_{t=0}^{d-1} \tau^{ \beta^{-1} \bigl( \delta t^2 -
2 r t + \alpha r^2 \bigr)}
|e_{t}\rangle
\end{align}
(since $\alpha \delta - \beta \gamma = 1 \;
(\text{mod}\; \overline{d})$). Comparing with Eq.~(\ref{eq:VFDef}) we see that
\begin{equation}
\hat{V}_{F} = \sum_{r=0}^{d-1} |f_r\rangle \langle e_r |
\end{equation}
which shows that $\hat{V}_{F}$ is unitary. Moreover,
\begin{equation}
\hat{V}_{F} \hat{T} \hat{V}_{F}^{\dagger} |f_r\rangle
=
\hat{V}_{F} \hat{T} |e_r\rangle
=\omega^{r} |f_r\rangle
\end{equation}
for all $r$. Comparing with Eq.~(\ref{eq:Lem2EqB}) we deduce $\hat{V}_{F}
\hat{T}
\hat{V}_{F}^{\dagger}=\hat{T}'$. Similarly
$\hat{V}_{F}
\hat{S}
\hat{V}_{F}^{\dagger}=\hat{S}'$. Hence
\begin{align}
\hat{V}_{F}
\hat{D}_{\mathbf{p}}
\hat{V}_{F}^{\dagger}
& =
\tau^{p_1 p_2} \hat{V}_{F}
\hat{S}^{p_1} \hat{T}^{p_2}
\hat{V}_{F}^{\dagger}
\\
& =
\tau^{p_1 p_2} \hat{D}_{\alpha p_1,\gamma p_1} \hat{D}_{\beta p_2,\delta p_2}
\\
& = \tau^{\bigl(1+\beta\gamma-\alpha\delta \bigr)p_1 p_2}
\hat{D}_{F \mathbf{p}}
\\
& =
\hat{D}_{F \mathbf{p}}
\end{align}
for all $\mathbf{p}$.
\end{proof}
To extend this result to the case of an arbitrary matrix $\in$
\SL(2,\mathbb{Z}_{\overline{d}})$ we need the following decomposition lemma,
which states that every non-prime matrix can be written as the product of two
prime matrices:
\begin{lemma}
\label{lem:CliffordStructure3}
Let
\begin{equation}
F=
\begin{pmatrix}
\alpha & \beta \\ \gamma & \delta
\end{pmatrix}
\end{equation}
be a non-prime matrix $\in \SL(2, \mathbb{Z}_{\overline{d}})$. Then there exists
an integer $x$ such that $\delta + x\beta$ is non-zero and $[\delta +
x\beta,\overline{d}]=1$. Let $x$ be any integer having that property, and let
\begin{align}
F_1 & = \begin{pmatrix}
0 & -1 \\ 1 & x
\end{pmatrix}
\\
F_2 & = \begin{pmatrix}
\gamma + x \alpha & \delta + x \beta \\ - \alpha & -\beta
\end{pmatrix}
\end{align}
Then $F_1$, $F_2$ are prime matrices $\in \SL(2, \mathbb{Z}_{\overline{d}})$ such
that
\begin{equation}
F=F_1 F_2
\end{equation}
\end{lemma}
\begin{proof}
Suppose, to begin with, that $\beta, \delta$ are both non-zero. Let
$k=[\beta,\delta]$. We then have
\begin{align}
\beta & = k \beta_0\\
\delta & = k \delta_0
\end{align}
where
$[\beta_0,\delta_0]=1$. We also have $[k,\overline{d}]=1$
(because $\alpha \delta-\beta \gamma = 1\; (\text{mod}\; \overline{d})$). The
fact that $\beta_0$, $\delta_0$ are relatively prime means we can use Dirichlet's
theorem (see, for example, Nathanson~\cite{Nathanson} or Rose~\cite{Rose}) to
deduce that the sequence
\begin{equation}
\delta_0, \; (\delta_0 +\beta_0),\; (\delta_0
+ 2 \beta_0), \dots
\end{equation}
contains infinitely many primes. Consequently, there exists an integer $x$ such
that $\delta_0 + x \beta_0\neq0$ and $[\delta_0 + x
\beta_0, \overline{d}]=1$. The fact that $k\neq 0$ and
$[k,\overline{d}]=1$ then implies that $\delta+ x\beta\neq 0$ and $[\delta +
x\beta,
\overline{d}]=1$. The claim is now immediate.
It remains to consider the case when $\beta, \delta$ are not both non-zero.
If $\delta =0$ the fact that $\Det F = 1 \; (\text{mod}\;\overline{d})$ would
imply that
$\beta
\neq 0$ and
$[\beta,
\overline{d}]=1$---contrary to the assumption that the matrix $F$ is non-prime.
Suppose, on the other hand, that $\beta =0$. Then the fact that
$\Det F = 1 \; (\text{mod}\;\overline{d})$ implies that $\delta\neq 0$ and
$[\delta,
\overline{d}]=1$. So the claim is true for every choice of $x$.
\end{proof}
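Since Lemma~\ref{lem:CliffordStructure3} is entirely constructive, it is easy to verify by machine in particular cases. The following sketch (an illustration only, using pure integer arithmetic; the choices $d=4$ and the matrix $F$, which has $\beta=0$ and so is non-prime, are arbitrary) finds a suitable $x$ and confirms the factorization.

```python
from math import gcd
import numpy as np

d = 4                                    # an arbitrary even dimension
dbar = 2 * d

# An arbitrary non-prime matrix in SL(2, Z_8): beta = 0
alpha, beta, gamma, delta = 1, 0, 1, 1
F = np.array([[alpha, beta], [gamma, delta]])

# Find x with delta + x*beta nonzero and coprime to dbar
x = next(x for x in range(1, dbar + 1)
         if (delta + x * beta) % dbar != 0
         and gcd(delta + x * beta, dbar) == 1)

F1 = np.array([[0, -1], [1, x]])
F2 = np.array([[gamma + x * alpha, delta + x * beta], [-alpha, -beta]])

assert np.array_equal((F1 @ F2) % dbar, F % dbar)     # F = F1 F2 (mod dbar)
for Fi in (F1, F2):
    det = int(Fi[0, 0] * Fi[1, 1] - Fi[0, 1] * Fi[1, 0])
    assert det % dbar == 1                            # det = 1 (mod dbar)
    b = int(Fi[0, 1]) % dbar
    assert b != 0 and gcd(b, dbar) == 1               # Fi is a prime matrix
```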
We can now deduce the following converse of Lemma~\ref{lem:CliffordStructure1}:
\begin{lemma}
\label{lem:CliffordStructure4}
Let $(F,\boldsymbol{\chi})$ be any pair $\in
\SL(2,\mathbb{Z}_{\overline{d}}) \times (\mathbb{Z}_d)^2$. If $F$ is a prime
matrix define
\begin{equation}
\hat{U} = \hat{D}_{\boldsymbol{\chi}} \hat{V}_F
\label{eq:CliffordSurjection1}
\end{equation}
(where $\hat{V}_F$ is the operator defined by Eq.~(\ref{eq:VFDef})).
If $F$ is non-prime choose two prime matrices $F_1, F_2$ such that $F=F_1 F_2$
(the existence of such matrices being guaranteed by
Lemma~\ref{lem:CliffordStructure3}), and define
\begin{equation}
\hat{U} = \hat{D}_{\boldsymbol{\chi}} \hat{V}_{F_1} \hat{V}_{F_2}
\label{eq:CliffordSurjection2}
\end{equation}
(where $\hat{V}_{F_1}, \hat{V}_{F_2}$ are the operators defined by
Eq.~(\ref{eq:VFDef})). Then
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}
= \omega^{\langle \boldsymbol{\chi}, F\mathbf{p}\rangle} \hat{D}_{F\mathbf{p}}
\label{eq:UforGeneralFandChi}
\end{equation}
for all $\mathbf{p}\in\mathbb{Z}^2$.
\end{lemma}
\begin{proof}
The claim is an immediate consequence of Eqs.~(\ref{eq:DConjugate}),
(\ref{eq:DCompositionRule}) and Lemma~\ref{lem:CliffordStructure2}.
\end{proof}
If $\hat{U}$, $\hat{U}'$ differ by a phase, so that $\hat{U}'=e^{i \theta}
\hat{U}$, they have the same action on the generalized Pauli group:
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}}
\hat{U}^{\dagger}=\hat{U}'\hat{D}_{\mathbf{p}} \hat{U}'\mathstrut^{\dagger}
\end{equation}
for all $\mathbf{p}$.
So the object of real interest is not the Clifford group itself, but the group
$\C(d)/\Cc(d)$ which results when the phases are factored out. Here $\Cc(d)$ is
the subgroup consisting of all operators of the form $e^{i \theta}
\hat{I}$, where $\hat{I}$ is the identity operator and $\theta\in \mathbb{R}$.
The elements of
$\C(d)/\Cc(d)$ are often called \emph{Clifford
operations}.
Let $\SL(2, \mathbb{Z}_{\overline{d}}) \ltimes
(\mathbb{Z}_d)^2$ be the semi-direct product of
$\SL(2,\mathbb{Z}_{\overline{d}})$ and $(\mathbb{Z}_d)^2$:
\emph{i.e.}~the group which results when the set
$\SL(2,
\mathbb{Z}_{\overline{d}})
\times (\mathbb{Z}_d)^2$ is equipped with the composition rule
\begin{equation}
(F_1,\boldsymbol{\chi}_1) \circ (F_2,\boldsymbol{\chi}_2)
= (F_1 F_2, \boldsymbol{\chi}_1 + F_1 \boldsymbol{\chi}_2)
\label{eq:SemiDirectCompositionRule}
\end{equation}
Then we have the following structure theorem, which states that $\C(d)/\Cc(d)$ is
naturally isomorphic to $\SL(2, \mathbb{Z}_{\overline{d}}) \ltimes
(\mathbb{Z}_d)^2$ when $d$ is odd, and naturally isomorphic to a quotient group
of $\SL(2, \mathbb{Z}_{\overline{d}}) \ltimes
(\mathbb{Z}_d)^2$
when
$d$ is even:
\begin{theorem}
\label{thm:CliffordStructure}
There exists a unique surjective homomorphism
\begin{equation}
f \colon \SL(2, \mathbb{Z}_{\overline{d}}) \ltimes
(\mathbb{Z}_d)^2 \to \C(d)/\Cc(d)
\end{equation}
with the property $\hat{U}
\hat{D}_{\mathbf{p}}
\hat{U}^{\dagger} =\omega^{\langle\boldsymbol{\chi},F
\mathbf{p}\rangle}\hat{D}_{F\mathbf{p}}$ for each $\hat{U} \in
f(F,\boldsymbol{\chi})$ and all
$\mathbf{p}\in \mathbb{Z}^2$.
If $d$ is odd $f$ is an isomorphism. If $d$ is even the kernel of $f$ is the
subgroup $K_f\subseteq \SL(2, \mathbb{Z}_{\overline{d}}) \ltimes
(\mathbb{Z}_d)^2$ consisting of the $8$ elements of the form
\begin{equation}
\left(\begin{pmatrix}1+ r d & s d \\ t d & 1+r d
\end{pmatrix},
\begin{pmatrix} s d /2 \\ t d/2
\end{pmatrix}
\right)
\label{eq:KernelExpression}
\end{equation}
where $r,s,t = 0$ or $1$.
\end{theorem}
\begin{proof}
An operator $\hat{U}\in \C(d)$ has the property
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger} = \hat{D}_{\mathbf{p}}
\end{equation}
for all $\mathbf{p}$
if and only if it is a multiple of the identity. So it follows from results
already proved that there is exactly one surjective map
\begin{equation}
f \colon \SL(2, \mathbb{Z}_{\overline{d}}) \ltimes
(\mathbb{Z}_d)^2 \to \C(d)/\Cc(d)
\end{equation}
such that $\hat{U}
\hat{D}_{\mathbf{p}}
\hat{U}^{\dagger} =\omega^{\langle\boldsymbol{\chi},F
\mathbf{p}\rangle}\hat{D}_{F\mathbf{p}}$ for each $\hat{U} \in
f(F,\boldsymbol{\chi})$ and all
$\mathbf{p}\in \mathbb{Z}^2$. The fact that $f$ is actually a homomorphism is
then an immediate consequence of the definitions.
Let $K_f$ be the kernel of $f$. Then $(F,\boldsymbol{\chi})\in K_f$ if and
only if
\begin{equation}
\omega^{\langle\boldsymbol{\chi},F\mathbf{p}\rangle} \hat{D}_{F \mathbf{p}} =
\hat{D}_{\mathbf{p}}
\label{eq:KernelDef}
\end{equation}
for all $\mathbf{p}$. For that to be true we must have $F=
1\; (\text{mod}\; d)$. If $d$ is odd this implies
$\hat{D}_{F
\mathbf{p}} = \hat{D}_{\mathbf{p}}$ for all $\mathbf{p}$.
Eq.~(\ref{eq:KernelDef}) then becomes
$\omega^{\langle\boldsymbol{\chi},\mathbf{p}\rangle}=1$ for all $\mathbf{p}$, implying
$\boldsymbol{\chi}=\begin{pmatrix} 0 \\ 0\end{pmatrix}$. So
the kernel is trivial, and
$f$ is an isomorphism as claimed.
Suppose, on the other hand, that $d$ is even. The condition $F=
1\; (\text{mod}\; d)$
then implies that $F=1+ d\Delta$, where $\Delta$ is a matrix of the form
\begin{equation}
\Delta=\begin{pmatrix} r_1 & s \\ t & r_2
\end{pmatrix}
\end{equation}
with $r_1, r_2, s, t= 0$ or $1$. Inserting this expression in
Eq.~(\ref{eq:KernelDef}) we find, in view of
Eqs.~(\ref{eq:DConjugate}--\ref{eq:DShiftExpression}), that
$(F,\boldsymbol{\chi})\in K_f$ if and only if
\begin{equation}
1=\omega^{\langle\boldsymbol{\chi},F\mathbf{p}\rangle} \hat{D}_{F \mathbf{p}}
\hat{D}_{-\mathbf{p}}
= \omega^{\langle\boldsymbol{\chi},\mathbf{p}\rangle}
\tau^{d \langle\mathbf{p},\Delta \mathbf{p}\rangle}
\end{equation}
for all $\mathbf{p}$. After re-arranging the condition becomes
\begin{equation}
\omega^{\chi_2 p_1-\chi_1 p_2} = (-1)^{(r_1-r_2)p_1 p_2 - t p_1^2 + s p_2^2}
=(-1)^{(r_1-r_2)p_1 p_2 + t p_1 - s p_2}
\end{equation}
for all $\mathbf{p}$. This is true if and only if $r_1=r_2$, $\chi_1=sd/2$ and
$\chi_2=td/2$.
\end{proof}
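As a concrete check on the kernel computation, one can verify directly for $d=2$ that each of the eight pairs in Eq.~(\ref{eq:KernelExpression}) satisfies Eq.~(\ref{eq:KernelDef}). The sketch below assumes the conventions $\tau=-e^{i\pi/d}$, $\hat{D}_{\mathbf{p}}=\tau^{p_1p_2}\hat{S}^{p_1}\hat{T}^{p_2}$ and $\langle\mathbf{a},\mathbf{b}\rangle = a_2b_1 - a_1b_2$, which are fixed earlier in the paper, outside this excerpt.

```python
import numpy as np
from itertools import product

# Assumed conventions: tau = -e^{i pi/d}, D_p = tau^{p1 p2} S^{p1} T^{p2},
# and <a, b> = a2 b1 - a1 b2.
d = 2
dbar = 2 * d
tau = -np.exp(1j * np.pi / d)
omega = tau**2

S = np.roll(np.eye(d), 1, axis=0)        # cyclic shift
T = np.diag(omega**np.arange(d))         # phase operator

def D(p1, p2):
    return (tau**(p1 * p2)
            * np.linalg.matrix_power(S, int(p1))
            @ np.linalg.matrix_power(T, int(p2)))

# Each of the 8 kernel pairs (F, chi) should satisfy
# omega^{<chi, Fp>} D_{Fp} = D_p for all p.
for r, s, t in product((0, 1), repeat=3):
    F = np.array([[1 + r * d, s * d], [t * d, 1 + r * d]])
    chi = (s * d // 2, t * d // 2)
    for p1, p2 in product(range(dbar), repeat=2):
        q1, q2 = (F @ np.array([p1, p2])).tolist()
        phase = omega**(chi[1] * q1 - chi[0] * q2)
        assert np.allclose(phase * D(q1, q2), D(p1, p2))
```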
We conclude with a result concerning the order of the group $\C(d)/\Cc(d)$
which will be needed later on. Let $\nu(n,d)$ be the number of distinct ordered
pairs
$(x,y)\in (\mathbb{Z}_d)^2$ such that $x y = n\; (\text{mod}\; d)$. We then have
\begin{lemma}
\label{lem:CliffordStructure5}
The order of the group $\C(d)/\Cc(d)$ is
\begin{equation}
\bigl| \C(d)/\Cc(d)\bigr|
=d^2 \left( \sum_{n=0}^{d-1} \nu(n,d)\nu(n+1,d) \right)
\label{eq:CliffordOrderFormula1}
\end{equation}
If $d$ is a prime number this reduces to
\begin{equation}
\bigl| \C(d)/\Cc(d)\bigr|=d^3 (d^2-1)
\label{eq:CliffordOrderFormula2}
\end{equation}
\end{lemma}
\begin{proof}
We begin by showing that $\C(d)/\Cc (d)$ and $\SL(2, \mathbb{Z}_d) \ltimes
(\mathbb{Z}_d)^2$ have the same cardinality when considered as \emph{sets}. This
is true for all
$d$, notwithstanding the fact that when $d$ is even $\C(d)/\Cc (d)$ and $\SL(2,
\mathbb{Z}_d)
\ltimes (\mathbb{Z}_d)^2$ are not naturally isomorphic as \emph{groups}.
The statement is immediate when $d$ is odd. Suppose, on the other hand, that $d$
is even. Let $g: \SL(2,\mathbb{Z}_{2 d}) \to \SL(2,\mathbb{Z}_d)$ be the natural
homomorphism defined by
\begin{equation}
g\colon \begin{pmatrix} \alpha & \beta \\ \gamma & \delta
\end{pmatrix}
\mapsto
\begin{pmatrix} [\alpha]_d & [\beta]_d \\ [\gamma]_d & [\delta]_d
\end{pmatrix}
\end{equation}
where $[x]_d$ denotes the residue class of $x$ \emph{modulo} $d$. It is easily
seen that $g$ is surjective. In fact, consider arbitrary
\begin{equation}
F = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta
\end{pmatrix} \in \SL(2, \mathbb{Z}_d)
\end{equation}
Then $\alpha \delta - \beta \gamma =1 + n d$ for some integer $n$. If $n$ is
even then $F \in \SL(2, \mathbb{Z}_{\overline{d}})$ and $F=g(F)$. Suppose, on the
other hand, that $n$ is odd. Then either $\alpha$ or $\beta$ is odd. If $\alpha$
is odd $F=g(F')$ where
\begin{equation}
F' = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta + d
\end{pmatrix} \in \SL(2, \mathbb{Z}_{\overline{d}})
\end{equation}
while if $\beta$ is odd $F=g(F'')$ where
\begin{equation}
F'' = \begin{pmatrix} \alpha & \beta \\ \gamma+d & \delta
\end{pmatrix} \in \SL(2, \mathbb{Z}_{\overline{d}})
\end{equation}
Now let
$K_g$ be the kernel of $g$. A matrix $F\in K_g$ if and only if
\begin{equation}
F=\begin{pmatrix} 1+ r_1 d & s d \\ t d & 1+ r_2 d
\end{pmatrix}
\end{equation}
where $r_1, r_2, s, t=0$ or $1$ and $(1+r_1 d)(1+r_2 d) - s t d^2 = 1\;
(\text{mod}\; 2 d)$. We have
\begin{equation}
(1+r_1 d)(1+r_2 d) - s t d^2 = 1 + (r_1+ r_2)d \quad (\text{mod} \; 2 d)
\end{equation}
(bearing in mind that $d$ is even, so $d^2=0\; (\text{mod}\; 2d)$). We
therefore require $r_1=r_2$. It follows that
$K_g$ consists of the $8$ matrices of the form
\begin{equation}
\begin{pmatrix} 1+ r d & s d \\ t d & 1+ r d
\end{pmatrix}
\end{equation}
where $r, s, t= 0$ or $1$. The fact that $g$ is surjective and $|K_g| =8$ implies
$|\SL(2,\mathbb{Z}_{2 d})|=8|\SL(2,\mathbb{Z}_d)|$. In view of
Theorem~\ref{thm:CliffordStructure} this means
\begin{equation}
\bigl| \C(d)/\Cc (d)\bigr|=\frac{1}{8}\bigl|\SL(2,
\mathbb{Z}_{2 d})\bigr| \bigl|(\mathbb{Z}_d)^2 \bigr|
=
\bigl|\SL(2,\mathbb{Z}_{ d})\bigr| \bigl|(\mathbb{Z}_d)^2 \bigr|
=
\bigl|\SL(2,
\mathbb{Z}_d)
\ltimes (\mathbb{Z}_d)^2 \bigr|
\end{equation}
as claimed.
We have shown that $\bigl| \C(d)/\Cc (d)\bigr| =\bigl|\SL(2,
\mathbb{Z}_d)
\ltimes (\mathbb{Z}_d)^2 \bigr| = d^2 \bigl| \SL(2,
\mathbb{Z}_d)\bigr| $ for all $d$, odd or even. It remains to calculate
$\bigl|\SL(2,\mathbb{Z}_d)\bigr|$. For each $n\in \mathbb{Z}_d$ let $M_n
\subseteq \SL(2,\mathbb{Z}_d)$ be the set of matrices
\begin{equation}
\begin{pmatrix} \alpha & \beta \\ \gamma & \delta
\end{pmatrix}
\end{equation}
for which $\alpha \delta= n+1 \; (\text{mod}\; d)$ and $\beta \gamma= n \;
(\text{mod}\; d)$. Clearly
$
\SL(2,\mathbb{Z}_d)=\bigcup_{n=0}^{d-1}
M_n$
and $|M_n| = \nu(n,d) \nu(n+1,d)$. It follows that
\begin{equation}
\bigl|\SL(2,\mathbb{Z}_d)\bigr| = \sum_{n=0}^{d-1}\nu(n,d)
\nu(n+1,d)
\end{equation}
Eq.~(\ref{eq:CliffordOrderFormula1}) is now immediate.
If $d$ is a prime number
\begin{equation}
\nu(n,d) =
\begin{cases}
2 d -1 \qquad &\text{if $n=0$ (mod $d$)} \\
d-1 \qquad &\text{otherwise}
\end{cases}
\end{equation}
implying
\begin{equation}
\sum_{n=0}^{d-1}\nu(n,d)
\nu(n+1,d) = d(d^2-1)
\end{equation}
Eq.~(\ref{eq:CliffordOrderFormula2}) is now immediate.
\end{proof}
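Both counting formulae are easily confirmed by brute-force enumeration in small dimensions. The following sketch is an illustration only; the dimensions tested are arbitrary.

```python
from itertools import product

def nu(n, d):
    """Number of ordered pairs (x, y) in (Z_d)^2 with x y = n (mod d)."""
    return sum(1 for x in range(d) for y in range(d) if (x * y) % d == n % d)

def sl2_order(d):
    """|SL(2, Z_d)| by direct enumeration."""
    return sum(1 for a, b, c, e in product(range(d), repeat=4)
               if (a * e - b * c) % d == 1)

# The partition of SL(2, Z_d) into the sets M_n gives the first formula
for d in (2, 3, 4, 5, 6, 7):
    assert sl2_order(d) == sum(nu(n, d) * nu(n + 1, d) for n in range(d))

# For prime d, the order d^2 |SL(2, Z_d)| reduces to d^3 (d^2 - 1)
for d in (2, 3, 5, 7):
    assert d**2 * sl2_order(d) == d**3 * (d**2 - 1)
```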
\section{The Extended Clifford Group}
\label{sec:ExtendedClifford}
It can be seen from Eqs.~(\ref{eq:TDef}--\ref{eq:DOpDefinition})
and~(\ref{eq:GPfiducial}) that, if
$|\psi\rangle = \sum_{r=0}^{d-1} \psi_r |e_r\rangle$ is a GP fiducial vector,
then so is the vector
$|\psi^{*}\rangle = \sum_{r=0}^{d-1} \psi^{*}_r |e_r\rangle$ obtained by complex
conjugation. So to make the analysis complete we need to consider automorphisms
of $\W(d)$ which are generated by anti-unitary
operators.
An anti-linear operator is a map $\hat{L}\colon \mathbb{C}^d \to \mathbb{C}^d$
with the property
\begin{equation}
\hat{L} \left(\alpha|\phi\rangle + \beta |\psi\rangle \right)
= \alpha^{*} \hat{L} |\phi \rangle + \beta^{*} \hat{L} |\psi \rangle
\end{equation}
for all $|\phi \rangle, |\psi \rangle \in \mathbb{C}^d$ and all $\alpha, \beta
\in
\mathbb{C}$. The adjoint $\hat{L}^{\dagger}$ is defined to be the unique
anti-linear operator with the property
\begin{equation}
\langle \phi|\hat{L}^{\dagger} |\psi\rangle = \langle \psi | \hat{L}
|\phi\rangle
\end{equation}
for all $|\phi \rangle, |\psi \rangle \in \mathbb{C}^d$. An operator $\hat{U}$
is said to be anti-unitary if it is anti-linear and
$\hat{U}^{\dagger} \hat{U}=1$ (or, equivalently, $\hat{U} \hat{U}^{\dagger} =
1$).
We now define the \emph{extended Clifford Group} to be the group $\EC(d)$
consisting of all unitary or anti-unitary operators $\hat{U}$ having the
property
\begin{equation}
\hat{U} \W(d) \hat{U}^{\dagger} = \W(d)
\end{equation}
Let us also define
$\ESL(2,\mathbb{Z}_{\overline{d}})$ to be the group consisting of all $2\times 2$
matrices
\begin{equation}
\begin{pmatrix} \alpha & \beta \\ \gamma & \delta
\end{pmatrix}
\end{equation}
such that $\alpha, \beta, \gamma, \delta \in \mathbb{Z}_{\overline{d}}$ and
$\alpha \delta -\beta \gamma = \pm 1 \;(\text{mod}\; \overline{d})$. In the last
section we showed that there is a natural homomorphism
$f\colon \SL(2,\mathbb{Z}_{\overline{d}})\ltimes (\mathbb{Z}_d)^2 \to
\C(d)/\Cc(d)$. We are going to show that this extends to a natural
homomorphism
$f_{\mathrm{E}}\colon\ESL(2,\mathbb{Z}_{\overline{d}})\ltimes (\mathbb{Z}_d)^2 \to
\EC(d)/\Cc(d)$.
Let $\hat{J}$ be the anti-linear operator which replaces
components in the standard basis with their complex conjugates:
\begin{equation}
\hat{J} \colon \sum_{r=0}^{d-1} \psi_r |e_r\rangle \mapsto
\sum_{r=0}^{d-1} \psi_r^{*} |e_r\rangle
\label{eq:JopDef}
\end{equation}
Clearly $\hat{J}^{\dagger} = \hat{J}$ and $\hat{J}^{\dagger} \hat{J}=\hat{J}^2 =
1$. So $\hat{J}$ is an anti-unitary operator. Furthermore, it follows from
Eqs.~(\ref{eq:TDef}--\ref{eq:DOpDefinition}) that
\begin{equation}
\hat{J} \hat{D}_{\mathbf{p}} \hat{J}^{\dagger} = \hat{D}_{\tilde{J}\mathbf{p}}
\label{eq:JAction}
\end{equation}
for all $\mathbf{p}$, where
\begin{equation}
\tilde{J} = \begin{pmatrix} 1 & 0 \\ 0 & -1
\end{pmatrix}
\end{equation}
So $\hat{J}\in \EC(d)$. Note that $\Det \tilde{J} = -1 \;(\text{mod}\;
\overline{d})$, so $\tilde{J}\in
\ESL(2,\mathbb{Z}_{\overline{d}})$.
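At the matrix level the action of $\hat{J}$ in Eq.~(\ref{eq:JAction}) amounts to entrywise complex conjugation in the standard basis, and the relation can be checked numerically. The sketch below assumes the conventions $\tau=-e^{i\pi/d}$ and $\hat{D}_{\mathbf{p}}=\tau^{p_1p_2}\hat{S}^{p_1}\hat{T}^{p_2}$ (fixed earlier in the paper, outside this excerpt); the dimension $d=3$ is an arbitrary choice.

```python
import numpy as np

# Assumed conventions: tau = -e^{i pi/d}, D_p = tau^{p1 p2} S^{p1} T^{p2}.
d = 3
tau = -np.exp(1j * np.pi / d)
omega = tau**2

S = np.roll(np.eye(d), 1, axis=0)        # cyclic shift
T = np.diag(omega**np.arange(d))         # phase operator

def D(p1, p2):
    # matrix_power accepts negative exponents for invertible matrices
    return (tau**(p1 * p2)
            * np.linalg.matrix_power(S, p1)
            @ np.linalg.matrix_power(T, p2))

# J acts by entrywise complex conjugation, so J D_p J^dagger = conj(D_p);
# the claim is that this equals D_{Jp} with Jp = (p1, -p2)
for p1 in range(d):
    for p2 in range(d):
        assert np.allclose(np.conj(D(p1, p2)), D(p1, -p2))
```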
Now let $\AC(d)$ be the set of anti-unitary operators $\in \EC(d)$ (so $\EC(d)$
is the disjoint union $\EC(d) =\C(d) \cup \AC(d)$). The mapping $\hat{U}
\mapsto
\hat{J} \hat{U}$ defines a bijective correspondence between $\AC(d)$ and
$\C(d)$. We can use this to prove the following extension of
Theorem~\ref{thm:CliffordStructure}:
\begin{theorem}
\label{thm:CliffordStructureE}
There is a unique surjective homomorphism
\begin{equation}
f_{\mathrm{E}} \colon \ESL (2, \mathbb{Z}_{\overline{d}}) \ltimes (\mathbb{Z}_d)^2
\to
\EC(d)/\Cc(d)
\end{equation}
such that, for each $(F,\boldsymbol{\chi}) \in
\ESL (2, \mathbb{Z}_{\overline{d}}) \ltimes (\mathbb{Z}_d)^2$
and $\hat{U} \in
f_{\mathrm{E}} (F,\boldsymbol{\chi})$,
\begin{equation}
\hat{U}
\hat{D}_{\mathbf{p}}
\hat{U}^{\dagger} =\omega^{\langle\boldsymbol{\chi},F
\mathbf{p}\rangle}\hat{D}_{F\mathbf{p}}
\end{equation}
for all
$\mathbf{p}$. $\hat{U}$ is unitary if
$\Det F = 1
\;(\text{mod}\;
\overline{d})$ and anti-unitary if $\Det F = -1 \;(\text{mod}\;
\overline{d})$.
$f_{\mathrm{E}}$ extends the homomorphism $f$ defined in
Theorem~\ref{thm:CliffordStructure}, and has the same kernel. So
$f_{\mathrm{E}}$ is an isomorphism if $d$ is odd, while if $d$ is even its kernel
is the subgroup $K_f$ defined in Theorem~\ref{thm:CliffordStructure}.
\end{theorem}
\begin{proof}
Let $\hat{U}$ be an arbitrary anti-unitary operator $\in \AC(d)$. The fact
that $\hat{J}, \hat{U}$ are both anti-unitary means that $\hat{J}\hat{U}$ is
unitary. So $\hat{J}\hat{U}\in \C(d)$. It then
follows from Theorem~\ref{thm:CliffordStructure} that there exists $(F',
\boldsymbol{\chi}')\in \SL(2,\mathbb{Z}_{\overline{d}})\ltimes (\mathbb{Z}_d)^2$
such that
\begin{equation}
(\hat{J} \hat{U}) \hat{D}_{\mathbf{p}} (\hat{J}\hat{U})^{\dagger}
=\omega^{\langle\boldsymbol{\chi}',F' \mathbf{p}\rangle} \hat{D}_{F'\mathbf{p}}
\end{equation}
for all $\mathbf{p}$. Define $F = \tilde{J} F'$ and $\boldsymbol{\chi}=\tilde{J}
\boldsymbol{\chi}'$. In view of Eq.~(\ref{eq:JAction}), and the fact that
$\hat{J}^2=1$, we deduce
\begin{equation}
\hat{U} \hat{D}_{\mathbf{p}} \hat{U}^{\dagger}
=\hat{J}(\hat{J} \hat{U}) \hat{D}_{\mathbf{p}} (\hat{J}\hat{U})^{\dagger}
\hat{J}^{\dagger}
= \omega^{-\langle\boldsymbol{\chi}',F' \mathbf{p}\rangle} \hat{D}_{\tilde{J}F'\mathbf{p}}
=\omega^{\langle\boldsymbol{\chi},F \mathbf{p}\rangle}
\hat{D}_{F\mathbf{p}}
\end{equation}
for all $\mathbf{p}$ (where we have used the fact that
$\langle\boldsymbol{\xi},\boldsymbol{\eta}\rangle=-
\langle\tilde{J}\boldsymbol{\xi},\tilde{J}\boldsymbol{\eta}\rangle$ for all
$\boldsymbol{\xi}, \boldsymbol{\eta}$). We have
$\Det (F)=(\Det
\tilde{J})(\Det F')=-1$, so $(F,
\boldsymbol{\chi})\in\ESL(2,\mathbb{Z}_{\overline{d}})\ltimes
(\mathbb{Z}_d)^2$.
Reversing the argument we deduce the converse proposition: for each $(F,
\boldsymbol{\chi})\in
\ESL(2,\mathbb{Z}_{\overline{d}})\ltimes (\mathbb{Z}_d)^2$, there exists $\hat{U}
\in \EC(d)$ such that $\hat{U} \hat{D}_{\mathbf{p}}
\hat{U}^{\dagger}=\omega^{\langle\boldsymbol{\chi},F
\mathbf{p}\rangle}\hat{D}_{F\mathbf{p}}$ for all
$\mathbf{p}$. The fact that an operator commutes with
$\hat{D}_{\mathbf{p}}$ for all $\mathbf{p}$ if and only if it is a multiple of
the identity means that $\hat{U}$ is unique up to a phase.
This establishes the existence and uniqueness of the homomorphism
$f_{\mathrm{E}}$. The proof of the remaining statements is straightforward, and
is left to the reader.
\end{proof}
Finally, we have the following result which, together with
Lemma~\ref{lem:CliffordStructure5}, enables us to calculate the order of
$\EC(d)/\Cc(d)$:
\begin{lemma}
\label{lem:OrderECd}
\begin{equation}
\bigl| \EC(d)/\Cc(d) \bigr|=2\bigl| \C(d)/\Cc(d) \bigr|
\end{equation}
for all $d$.
\end{lemma}
\begin{proof}
The map
\begin{equation}
\hat{U} \Cc(d)\mapsto \hat{J} \hat{U} \Cc(d)
\end{equation}
defines a bijective correspondence between
$\AC(d) /\Cc(d)$ and $\C(d)/\Cc(d)$. So the set $\AC(d) /\Cc(d)$ contains the
same number of elements as $\C(d) /\Cc(d)$. The statement is now immediate.
\end{proof}
\section{The Clifford Trace}
\label{sec:CliffordTrace}
We now define the Clifford trace. The significance of this function for us is
that every GP fiducial vector which has been constructed to
date
is an eigenvector of a Clifford unitary having Clifford trace
$=-1$.
Let $[F,
\boldsymbol{\chi}]\in
\EC(d)/
\Cc(d)$ be the image of
$(F,\boldsymbol{\chi})$ under the homomorphism $f_{\mathrm{E}}$ defined in
Theorem~\ref{thm:CliffordStructureE}. We refer to $[F,
\boldsymbol{\chi}]$ as an extended Clifford operation (or Clifford
operation if it $\in \C(d)/\Cc(d)$). The operators $\in [F,
\boldsymbol{\chi}]$ only differ by a phase. It is therefore convenient to adopt
a terminology which blurs the distinction between the operation $[F,
\boldsymbol{\chi}]$ and the operators $\hat{U}\in [F,
\boldsymbol{\chi}]$. In particular, we will adopt the
convention that properties which hold for each
$\hat{U}\in [F,\boldsymbol{\chi}]$ may also be attributed to $[F,
\boldsymbol{\chi}]$. Thus, we
will say that
$[F,
\boldsymbol{\chi}]$ is unitary (respectively anti-unitary) if the operators
$\hat{U}\in [F,
\boldsymbol{\chi}]$ are unitary (respectively anti-unitary). Similarly, we will
say that $|\psi\rangle \in \mathbb{C}^d$ is an eigenvector of $[F,
\boldsymbol{\chi}]$ if it is an eigenvector of the operators
$\hat{U}\in [F,
\boldsymbol{\chi}]$.
It is easily verified that $\Tr (F_1) =
\Tr(F_2)\;(\text{mod}\;d)$ whenever $[F_1,
\boldsymbol{\chi}_1]=[F_2,\boldsymbol{\chi}_2]$ (note that it is not necessarily
true that $\Tr (F_1) =
\Tr(F_2)\;(\text{mod}\;\overline{d})$ if $d$ is even). We therefore obtain a
well-defined function $\EC(d)/\Cc(d) \to \mathbb{Z}_d$ if we assign to each
operation $[F,\boldsymbol{\chi}]$ the value $\Tr(F)\;(\text{mod}\;d)$. We obtain
a function $\EC(d) \to \mathbb{Z}_d$ by assigning to each $\hat{U} \in
[F,\boldsymbol{\chi}]$ the value $\Tr(F)\;(\text{mod}\;d)$. We use the term
``Clifford trace'' to refer to either of these functions.
We now prove the main result of this section, which states that there is a
connection between the order of a Clifford operation and its Clifford trace.
\begin{lemma}
\label{lem:CliffordTrace}
Let $[F,\boldsymbol{\chi}] \in
\C(d)/\Cc(d)$, where $d$ is any dimension $\neq 3$. Then
$[F,\boldsymbol{\chi}]$ is of order
$3$ if $\Tr (F)=-1\;(\text{mod}\;d)$.
Let $[F,\boldsymbol{\chi}] \in
\C(d)/\Cc(d)$, where $d$ is any prime dimension $\neq 3$. Then the stronger
statement is true:
$[F,\boldsymbol{\chi}]$ is of order
$3$ if and only if $\Tr (F)=-1\;(\text{mod}\;d)$.
\end{lemma}
\begin{remark}
The restriction to operations $\in \C(d)/\Cc(d)$ is essential (because if
$[F,\boldsymbol{\chi}]$ is anti-unitary its order must be even).
\end{remark}
\begin{proof}
Let $[F,
\boldsymbol{\chi}]\in \C(d)/\Cc(d)$, and let $\kappa = \Tr (F)$. Then, taking
into account the fact that $\Det (F)=1 \; (\text{mod}\; \overline{d})$, it is
straightforward to show
\begin{align}
F^2 & = \kappa F -1 & & (\text{mod}\; \overline{d})
\label{eq:F2TermsTraceF}
\\
\intertext{implying}
F^3 & = (\kappa^2 -1) F - \kappa & &(\text{mod}\; \overline{d})
\label{eq:F3TermsTraceF}
\\
1+F + F^2 & = (\kappa +1) F & &(\text{mod}\; \overline{d})
\end{align}
Now suppose that $\kappa = -1 \;(\text{mod}\; d)$. Then there are three
possibilities: (a) $d$ is odd; (b) $d$ is even and $\kappa = -1 \;(\text{mod}\;
\overline{d})$; (c) $d$ is even and $\kappa = -1+d \;(\text{mod}\;
\overline{d})$. In case (a) or (b) we have
\begin{align}
F^3 & = 1 & &(\text{mod}\; \overline{d})
\\
1+F + F^2 & = 0 & &(\text{mod}\; \overline{d})
\end{align}
while in case (c) we have $\kappa^2-1 = d^2 - 2 d = 0\;(\text{mod}\;
\overline{d})$, and consequently
\begin{align}
F^3 & = \begin{pmatrix} 1+ d & 0 \\ 0 & 1+ d \end{pmatrix}
& &(\text{mod}\; \overline{d})
\\
1+F + F^2 & = 0 & &(\text{mod}\; d)
\end{align}
Referring to the definition of $K_f$ (see Theorem~\ref{thm:CliffordStructure}) we
deduce that, in every case,
\begin{equation}
(F,\boldsymbol{\chi})^3=(F^3,(1+F+F^2)\boldsymbol{\chi}) \in K_f
\end{equation}
implying that $[F,
\boldsymbol{\chi}]^3=[1,\boldsymbol{0}]$.
It remains to show that neither $[F,
\boldsymbol{\chi}]$ nor $[F,
\boldsymbol{\chi}]^2$ is equal to $[1,\boldsymbol{0}]$. To see that
$[F,
\boldsymbol{\chi}] \neq [1,\boldsymbol{0}]$ observe that the contrary would imply
$-1 = \kappa = \Tr({1}) = 2 \;(\text{mod}\; d)$, which is not possible
given that $d \neq 3$. Similarly,
if $[F,
\boldsymbol{\chi}]^2=[1,\boldsymbol{0}]$ it would follow (taking the trace on
both sides of Eq.~(\ref{eq:F2TermsTraceF})) that $2 = \kappa^2-2=-1
\;(\text{mod}\; d)$, contrary to the assumption that $d\neq 3$. We conclude that
$[F,
\boldsymbol{\chi}]$ is of order $3$, as claimed.
To prove the second part of the lemma suppose that $d$ is a prime number $\neq 3$
and $[F,
\boldsymbol{\chi}]$ is of order $3$. Then $(F^3,(1+F+F^2)\boldsymbol{\chi}) \in
K_f$, implying
$F^3 = 1 \; (\text{mod} \; d)$. In view of Eq.~(\ref{eq:F3TermsTraceF}) this
means
\begin{equation}
(\kappa+1)\left( (\kappa-1) F -1\right) = 0 \qquad (\text{mod}\; d)
\label{eq:F3Equal1Consequence}
\end{equation}
We now proceed by \emph{reductio ad absurdum}. Suppose that $\kappa \neq
-1\;(\text{mod}\; d)$. Then Eq.~(\ref{eq:F3Equal1Consequence}) and the fact that
$d$ is prime implies
\begin{equation}
(\kappa-1) F =1\qquad (\text{mod}\; d)
\label{eq:F3Equal1ConsequenceB}
\end{equation}
Taking the trace on both sides gives $(\kappa+1)(\kappa -2) =
0\;(\text{mod}\; d) $ implying $\kappa = 2\;(\text{mod}\; d) $. Substituting
this value
into Eq.~(\ref{eq:F3Equal1ConsequenceB}) we deduce $F = 1\;(\text{mod}\; d)$,
implying $F^2 = 1\;(\text{mod}\; \overline{d})$ and $F^3 = F\;(\text{mod}\;
\overline{d})$. So
\begin{equation}
(F, 3\boldsymbol{\chi})
=(F,\boldsymbol{\chi})^3 \in K_f
\end{equation}
implying $(F, \boldsymbol{\chi})\in K_f$ (note that $3$ is invertible mod $d$, since $d$ is a prime $\neq 3$). But that would mean
$[F,
\boldsymbol{\chi}]$ is of order $1$, contrary to assumption.
We conclude that $\kappa =-1
\;(\text{mod}\; d)$, as claimed.
\end{proof}
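The matrix-level content of the second half of the lemma is easy to verify by brute force in a small prime dimension. The following sketch (ours, not part of the original calculation) scans all of $SL(2,\mathbf{Z}_7)$ and checks that the order $3$ matrices are exactly those of trace $-1$:

```python
import itertools
import numpy as np

# Brute-force check of the lemma's matrix content for the prime d = 7:
# an F in SL(2, Z_7) satisfies F^3 = 1, F != 1 if and only if Tr F = -1 mod 7.
d = 7
I = np.eye(2, dtype=int)
order3, trace_minus1 = set(), set()
for a, b, c, e in itertools.product(range(d), repeat=4):
    if (a * e - b * c) % d != 1:       # keep only det F = 1 (mod d)
        continue
    F = np.array([[a, b], [c, e]])
    if np.array_equal(np.linalg.matrix_power(F, 3) % d, I) and not np.array_equal(F, I):
        order3.add((a, b, c, e))
    if (a + e) % d == d - 1:
        trace_minus1.add((a, b, c, e))
assert order3 == trace_minus1
```

The "if" direction is just Cayley--Hamilton ($F^2 = \kappa F - 1$ forces $F^3 = 1$ when $\kappa = -1$); the scan confirms the converse as well.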
The result does not hold when $d=3$ because then the identity has Clifford trace
$=-1$. It is, however, easily verified that in dimension $3$ (as in every other
prime dimension) every order $3$ Clifford operation has Clifford trace $=-1$.
If $d$ is not a prime number there may exist order $3$ Clifford operations for
which the Clifford trace $\neq-1$. Consider, for example,
\begin{equation}
[F,\boldsymbol{\chi}]=\left[\begin{pmatrix} 5 & 4 \\ 2 & -3
\end{pmatrix},
\begin{pmatrix} - 4 \\ 5
\end{pmatrix}
\right] \in \C(6)/\Cc(6)
\end{equation}
Then $[F,\boldsymbol{\chi}]$ is of order $3$ yet $\Tr(F) =2 \;
(\text{mod}\; 6)$.
Because these results will play an important role in the following it is
convenient to introduce some terminology. We will say that an operation
$[F,\boldsymbol{\chi}]\in \C(d)/\Cc(d)$ is a \emph{canonical} order $3$ unitary
if
\begin{enumerate}
\item[(a)] $\Tr (F)=-1\;(\text{mod} \; d)$.
\item[(b)] $F$ is not the identity matrix.
\end{enumerate}
Note that the second stipulation is only needed because of the possibility
that $d=3$. If $d\neq 3$ an operation $[F,\boldsymbol{\chi}]\in
\C(d)/\Cc(d)$ is a canonical order $3$ unitary if and only if $\Tr
(F)=-1\;(\text{mod}
\; d)$.
\section{The RBSC Vectors}
\label{sec:RBSCVectors}
For $5\le d \le 45$
RBSC~\cite{Renes,RenesVectors} have constructed GP fiducial vectors numerically.
In this section we examine the behaviour of these vectors under the action of the
extended Clifford group. In particular we show that each of them is an
eigenvector of a canonical order $3$ Clifford unitary. This suggests
\begin{quote}
\textbf{Conjecture A:} GP fiducial vectors exist in every finite dimension.
Furthermore, every such vector is an eigenvector of a
canonical order
$3$ unitary.
\end{quote}
Conjecture A is related to a conjecture of Zauner's. Let
\begin{equation}
[ Z,\boldsymbol{0}]=\left[
\begin{pmatrix} 0 & -1 \\ 1 & -1
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\end{equation}
It will be observed that $[ Z,\boldsymbol{0}]$ is defined, and $\in
\C(d)/\Cc(d)$, for every dimension
$d$, and that it is canonical order
$3$. Zauner~\cite{Zauner} has conjectured
\begin{quote}
\textbf{Conjecture B:} In each dimension $d$ there exists a GP fiducial vector
which is an eigenvector of $[ Z,\boldsymbol{0}]$.
\end{quote}
In Section~\ref{sec:Zauner} we will see that RBSC's numerical data also provides
further support for Conjecture B.
Let $|\psi_d\rangle$ be the RBSC vector in dimension $d$. In
Table~\ref{tbl:RBSC} we list, for each value of $d$, a unitary Clifford operation
$[F_d,
\boldsymbol{\chi}_d]$ having $|\psi_d\rangle$ as one of its eigenvectors. It will
be seen that, in every case, $\Tr(F_d) = -1 \;(\text{mod}\;d)$,
implying that
$[F_d,
\boldsymbol{\chi}_d]$ is canonical order $3$. Clearly,
$|\psi_d\rangle$ is also an eigenvector of $[F_d,
\boldsymbol{\chi}_d]^2$. Moreover, $[F_d,
\boldsymbol{\chi}_d]^2$ also has Clifford trace $=-1$. There are, however, no
other Clifford operations with these properties.
In Table~\ref{tbl:RBSC} we also list $(n_{d1},n_{d2},n_{d3})$,
the dimensions of the three eigenspaces of $[F_d,
\boldsymbol{\chi}_d]$, and $n_d$, the dimension of the
particular eigenspace to which $|\psi_d\rangle$ belongs. It will be seen that,
with one exception, $|\psi_d\rangle$ always belongs to an eigenspace of highest
dimension (the exception being $d=17$, where $|\psi_d\rangle$ belongs to the
eigenspace of
\emph{lowest} dimension).
\begin{table}
\SMALL
\begin{center}
\begin{tabular}{|c c c c c||c c c c c|}
\hline
\parbox{1 pt}{\rule{0 ex}{5 ex}}
$d$ & $F_d$ & $\boldsymbol{\chi}_d$ & $(n_{d1},n_{d2},n_{d3})$ & $n_d$ \hspace{1
ex} &
\hspace{1 ex}
$d$ &
$F_d$ &
$\boldsymbol{\chi}_d$ & $(n_{d1},n_{d2},n_{d3})$ & $n_d$
\\
\hline
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $5$ &
$\begin{pmatrix} -1 & -1 \\ 1 & 0 \end{pmatrix}$ &
$\begin{pmatrix} 2 \\ 2 \end{pmatrix}$ &
$(1, 2, 2)$ & $2$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $26$ &
$\begin{pmatrix} -7 & -9 \\ -1 & 6 \end{pmatrix}$ &
$\begin{pmatrix} -11 \\ 11 \end{pmatrix}$ &
$(8, 9, 9)$ & $9$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $6$ &
$\begin{pmatrix} -2 & 3 \\ -1 & 1 \end{pmatrix}$ &
$\begin{pmatrix} 3 \\ 0 \end{pmatrix}$ &
$(1, 2, 3)$ & $3$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $27$ &
$\begin{pmatrix} -10 & 1 \\ -10 & 9 \end{pmatrix}$ &
$\begin{pmatrix} -3 \\ -12 \end{pmatrix}$ &
$(8, 9, 10)$ & $10$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $7$ &
$\begin{pmatrix} -2 & -2 \\ -2 & 1 \end{pmatrix}$ &
$\begin{pmatrix} 2 \\ 0 \end{pmatrix}$ &
$(2, 2, 3)$ & $3$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $28$ &
$\begin{pmatrix} -3 & 21 \\ 5 & 2 \end{pmatrix}$ &
$\begin{pmatrix} -10 \\ -6 \end{pmatrix}$ &
$(9, 9, 10)$ & $10$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $8$ &
$\begin{pmatrix} -4 & 3 \\ 1 & 3 \end{pmatrix}$ &
$\begin{pmatrix} 3 \\ -1 \end{pmatrix}$ &
$(2, 3, 3)$ & $3$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $29$ &
$\begin{pmatrix} -13 & -6 \\ 2 & 12 \end{pmatrix}$ &
$\begin{pmatrix} -10 \\ 12 \end{pmatrix}$ &
$(9, 10, 10)$ & $10$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $9$ &
$\begin{pmatrix} -3 & 2 \\ 1 & 2 \end{pmatrix}$ &
$\begin{pmatrix} 2 \\ 1 \end{pmatrix}$ &
$(2, 3, 4)$ & $4$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $30$ &
$\begin{pmatrix} -8 & -7 \\ -9 & 7 \end{pmatrix}$ &
$\begin{pmatrix} 11 \\ -3 \end{pmatrix}$ &
$(9, 10, 11)$ & $11$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $10$ &
$\begin{pmatrix} -4 & -7 \\ -1 & 3 \end{pmatrix}$ &
$\begin{pmatrix} -2 \\ 0 \end{pmatrix}$ &
$(3, 3, 4)$ & $4$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $31$ &
$\begin{pmatrix} -9 & -10 \\ -2 & 8 \end{pmatrix}$ &
$\begin{pmatrix} -14 \\ 6 \end{pmatrix}$ &
$(10, 10, 11)$ & $11$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $11$ &
$\begin{pmatrix} -5 & 4 \\ 3 & 4 \end{pmatrix}$ &
$\begin{pmatrix} -5 \\ 0 \end{pmatrix}$ &
$(3, 4, 4)$ & $4$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $32$ &
$\begin{pmatrix} -11 & -31 \\ -15 & 10 \end{pmatrix}$ &
$\begin{pmatrix} 11 \\ -7 \end{pmatrix}$ &
$(10, 11, 11)$ & $11$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $12$ &
$\begin{pmatrix} -4 & 11 \\ 1 & 3 \end{pmatrix}$ &
$\begin{pmatrix} 4 \\ -5 \end{pmatrix}$ &
$(3, 4, 5)$ & $5$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $33$ &
$\begin{pmatrix} -7 & -5 \\ 2 & 6 \end{pmatrix}$ &
$\begin{pmatrix} 8 \\ -5 \end{pmatrix}$ &
$(10, 11, 12)$ & $12$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $13$ &
$\begin{pmatrix} -2 & -2 \\ -5 & 1 \end{pmatrix}$ &
$\begin{pmatrix} 6 \\ 0 \end{pmatrix}$ &
$(4, 4, 5)$ & $5$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $34$ &
$\begin{pmatrix} -12 & 3 \\ 1 & 11 \end{pmatrix}$ &
$\begin{pmatrix} -1 \\ -16 \end{pmatrix}$ &
$(11, 11, 12)$ & $12$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $14$ &
$\begin{pmatrix} -2 & -3 \\ 1 & 1 \end{pmatrix}$ &
$\begin{pmatrix} -5 \\ 1 \end{pmatrix}$ &
$(4, 5, 5)$ & $5$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $35$ & $\begin{pmatrix} -13 & -12 \\ 16 & 12 \end{pmatrix}$ &
$\begin{pmatrix} 11 \\ -12 \end{pmatrix}$ &
$(11, 12, 12)$ & $12$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $15$ &
$\begin{pmatrix} -5 & 1 \\ -6 & 4 \end{pmatrix}$ &
$\begin{pmatrix} -7 \\ -6 \end{pmatrix}$ &
$(4, 5, 6)$ & $6$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $36$ &
$\begin{pmatrix} -8 & 21 \\ -13 & 7 \end{pmatrix}$ &
$\begin{pmatrix} 0 \\ 7 \end{pmatrix}$ &
$(11, 12, 13)$ & $13$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $16$ &
$\begin{pmatrix} -8 & 13 \\ 3 & 7 \end{pmatrix}$ &
$\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ &
$(5, 5, 6)$ & $6$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $37$ &
$\begin{pmatrix} -16 & 1 \\ 18 & 15 \end{pmatrix}$ &
$\begin{pmatrix} -4 \\ 3 \end{pmatrix}$ &
$(12, 12, 13)$ & $13$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $17$ &
$\begin{pmatrix} -5 & -7 \\ 3 & 4 \end{pmatrix}$ &
$\begin{pmatrix} 6 \\ 7 \end{pmatrix}$ &
$(5, 6, 6)$ & $5$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $38$ &
$\begin{pmatrix} -6 & -31 \\ 1 & 5 \end{pmatrix}$ &
$\begin{pmatrix} 12 \\ -10 \end{pmatrix}$ &
$(12, 13, 13)$ & $13$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $18$ &
$\begin{pmatrix} -5 & 5 \\ 3 & 4 \end{pmatrix}$ &
$\begin{pmatrix} 9 \\ 0 \end{pmatrix}$ &
$(5, 6, 7)$ & $7$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $39$ &
$\begin{pmatrix} -17 & -11 \\ 0 & 16 \end{pmatrix}$ &
$\begin{pmatrix} 8 \\ 15 \end{pmatrix}$ &
$(12, 13, 14)$ & $14$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $19$ &
$\begin{pmatrix} -2 & 4 \\ 4 & 1 \end{pmatrix}$ &
$\begin{pmatrix} -7 \\ -4 \end{pmatrix}$ &
$(6, 6, 7)$ & $7$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $40$ &
$\begin{pmatrix} -3 & 19 \\ -13 & 2 \end{pmatrix}$ &
$\begin{pmatrix} -12 \\ -19 \end{pmatrix}$ &
$(13, 13, 14)$ & $14$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $20$ &
$\begin{pmatrix} -2 & -3 \\ 1 & 1 \end{pmatrix}$ &
$\begin{pmatrix} -9 \\ -6 \end{pmatrix}$ &
$(6, 7, 7)$ & $7$ \hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $41$ &
$\begin{pmatrix} -2 & -10 \\ -12 & 1 \end{pmatrix}$ &
$\begin{pmatrix} 19 \\ 13 \end{pmatrix}$ &
$(13, 14, 14)$ & $14$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $21$ &
$\begin{pmatrix} -5 & -6 \\ -7 & 4 \end{pmatrix}$ &
$\begin{pmatrix} -6 \\ 1 \end{pmatrix}$ &
$(6, 7, 8)$ & $8$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $42$ &
$\begin{pmatrix} -15 & 11 \\ 19 & 14 \end{pmatrix}$ &
$\begin{pmatrix} 0 \\ -15 \end{pmatrix}$ &
$(13, 14, 15)$ & $15$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $22$ &
$\begin{pmatrix} -2 & -1 \\ 3 & 1 \end{pmatrix}$ &
$\begin{pmatrix} 8 \\ 2 \end{pmatrix}$ &
$(7, 7, 8)$ & $8$ \hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $43$ &
$\begin{pmatrix} -11 & 1 \\ 18 & 10 \end{pmatrix}$ &
$\begin{pmatrix} -1 \\ 21 \end{pmatrix}$ &
$(14, 14, 15)$ & $15$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $23$ &
$\begin{pmatrix} -11 & -10 \\ -5 & 10 \end{pmatrix}$ &
$\begin{pmatrix} 0 \\ -3 \end{pmatrix}$ &
$(7, 8, 8)$ & $8$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $44$ &
$\begin{pmatrix} -8 & -29 \\ 5 & 7 \end{pmatrix}$ &
$\begin{pmatrix} 16 \\ -5 \end{pmatrix}$ &
$(14, 15, 15)$ & $15$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $24$ &
$\begin{pmatrix} -2 & -3 \\ 1 & 1 \end{pmatrix}$ &
$\begin{pmatrix} 0 \\ -3 \end{pmatrix}$ &
$(7, 8, 9)$ & $9$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $45$ &
$\begin{pmatrix} -20 & -1 \\ 21 & 19 \end{pmatrix}$ &
$\begin{pmatrix} -8 \\ 6 \end{pmatrix}$ &
$(14, 15, 16)$ & $16$
\\ \parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} $25$ &
$\begin{pmatrix} -6 & -1 \\ 6 & 5 \end{pmatrix}$ &
$\begin{pmatrix} -7 \\ 12 \end{pmatrix}$ &
$(8, 8, 9)$ & $9$
\hspace{1 ex} & \hspace{1 ex}
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}} & & & & \\
\hline
\end{tabular}
\normalsize
\vspace{2 ex}
\caption{For each $d$ the RBSC vector $|\psi_d\rangle$ is an eigenvector of the
unitary operation $[F_d, \boldsymbol{\chi}_d]$. Note that in every case $\Tr F_d
=-1$, implying that $[F_d, \boldsymbol{\chi}_d]$ is canonical order $3$.
$(n_{d1},n_{d2},n_{d3})$ are the dimensions of the three eigenspaces of
$[F_d, \boldsymbol{\chi}_d]$, and $n_d$ is the dimension of the eigenspace to
which $|\psi_d\rangle$ belongs. Note that $n_d = \max (n_{d1},n_{d2},n_{d3})$,
with the single exception of
$d=17$.}
\label{tbl:RBSC}
\end{center}
\end{table}
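A few of the table's entries can be spot-checked mechanically. The sketch below verifies, for a sample of dimensions, the trace condition $\Tr F_d = -1 \;(\text{mod}\;d)$ and the determinant condition $\Det F_d = 1\;(\text{mod}\;\overline{d})$ (recall $\overline{d}=d$ for odd $d$ and $\overline{d}=2d$ for even $d$):

```python
import numpy as np

# Spot-check of Table 1: each listed F_d has Tr F_d = -1 mod d and
# det F_d = 1 mod dbar.
entries = {
    5:  [[-1, -1], [1, 0]],
    6:  [[-2, 3], [-1, 1]],
    17: [[-5, -7], [3, 4]],
    26: [[-7, -9], [-1, 6]],
    45: [[-20, -1], [21, 19]],
}
for d, F in entries.items():
    F = np.array(F)
    dbar = d if d % 2 == 1 else 2 * d
    assert np.trace(F) % d == (-1) % d
    assert (F[0, 0] * F[1, 1] - F[0, 1] * F[1, 0]) % dbar == 1
```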
We used a computer algebra package (\emph{Mathematica}) to construct the table.
To illustrate the method employed we give a detailed
description for the case $d=5$. We begin with the observation that, if
$|\psi_5\rangle$ is an eigenvector of $[F,\boldsymbol{\chi}]$, then
\begin{equation}
\langle \psi_5 | \hat{D}_{\mathbf{p}} |\psi_5\rangle
=e^{\frac{2 \pi i}{5}\langle \boldsymbol{\chi},F \mathbf{p}\rangle}
\langle \psi_5 | \hat{D}_{F\mathbf{p}} |\psi_5\rangle
\end{equation}
for all $\mathbf{p}$. So, using the value of $|\psi_5\rangle$ which is available
on RBSC's website~\cite{RenesVectors}, we look for values of
$\mathbf{p}$,
$\mathbf{q}$ such that
\begin{equation}
\frac{5}{2 \pi}\left( \arg\left(\langle \psi_5 | \hat{D}_{\mathbf{p}}
|\psi_5\rangle \right)-\arg\left(\langle \psi_5 | \hat{D}_{\mathbf{q}}
|\psi_5\rangle \right)
\right)
\end{equation}
is an (approximate) integer. We find that if $\mathbf{p} =(1,0) $ this is only
true when $\mathbf{q} = (1,0), (-1,1)$ or $(0,-1)\;(\text{mod}\;5)$, and that if
$\mathbf{p} =(0,1) $ it is only true when
$\mathbf{q} = (0,1), (-1,0)$ or $(1,-1)\;(\text{mod}\;5)$. Taking account of
the requirement $\Det(F)=1 \;(\text{mod}\;5)$ we deduce that the only candidates
are (apart from the identity)
\begin{equation}
[F_5,\boldsymbol{\chi}_5]=
\left[\begin{pmatrix} -1 & -1 \\ 1 & 0
\end{pmatrix},
\begin{pmatrix} 2 \\ 2
\end{pmatrix}\right]
\end{equation}
and its square, $[F_5,\boldsymbol{\chi}_5]^2$. To check that $|\psi_5\rangle$
actually is an eigenvector of $[F_5,\boldsymbol{\chi}_5]$ we observe that $F_5$ is
a prime matrix. So in view of Lemma~\ref{lem:CliffordStructure4} we have the
following explicit formula for the
$\hat{U}\in[F_5,\boldsymbol{\chi}_5]$:
\begin{equation}
\hat{U} =\frac{1}{\sqrt{5}} e^{i \theta} \hat{D}_{(2,2)}
\left(\sum_{r,s=0}^{4} e^{-\frac{4 \pi i}{5}s(s+2 r)} |e_r\rangle \langle e_s |
\right)
\end{equation}
$e^{i \theta}$ being an arbitrary phase. Suppose we choose $\theta=\frac{7 \pi
}{15}$. Then we find $\hat{U}^3=1$ and
\begin{equation}
\bigl\| (\hat{U} -1)|\psi_{5} \rangle \bigr\|^{2}= 0
\end{equation}
to machine precision. This confirms that $|\psi_5\rangle$ is indeed an
eigenvector of
$[F_5,\boldsymbol{\chi}_5]$. To calculate the dimensions of the eigenspaces
define, for $r = 0, \pm 1$ (and with the same choice of $\theta$),
\begin{equation}
\hat{P}_r = \frac{1}{3} \left(1+ e^{-\frac{2 r \pi i}{3}} \hat{U} +
e^{\frac{2 r \pi i}{3}} \hat{U}^2
\right)
\end{equation}
Then $\hat{P}_r$ projects onto the eigenspace of $\hat{U}$ with eigenvalue
$e^{\frac{2 r \pi i}{3}}$. We find
\begin{equation}
\Tr(\hat{P_r}) =
\begin{cases} 1 \qquad & r=1 \\
2 \qquad & r = -1 \;\text{or}\; 0
\end{cases}
\end{equation}
implying that the dimensions of the eigenspaces are $1, 2, 2$, and that
$|\psi_{5}\rangle$ is in one of the eigenspaces with dimension $2$.
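The $d=5$ computation just described is easy to reproduce numerically. The sketch below assumes the paper's conventions ($\omega = e^{2\pi i/d}$, $\tau = -e^{i\pi/d}$, $X|e_r\rangle = |e_{r+1}\rangle$, $Z|e_r\rangle = \omega^r|e_r\rangle$, $\hat{D}_{\mathbf{p}} = \tau^{p_1 p_2}X^{p_1}Z^{p_2}$) and omits the overall phase $e^{i\theta}$, which affects neither unitarity, the proportionality of $\hat{U}^3$ to the identity, nor the eigenspace dimensions:

```python
import numpy as np

# Reconstruction of the d = 5 check. Conventions assumed: omega = e^{2 pi i/5},
# tau = -e^{i pi/5}, X|e_r> = |e_{r+1}>, Z|e_r> = omega^r |e_r>,
# D_p = tau^{p1 p2} X^{p1} Z^{p2}.  The overall phase e^{i theta} is omitted.
d = 5
omega = np.exp(2j * np.pi / d)
tau = -np.exp(1j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)          # shift operator
Z = np.diag(omega ** np.arange(d))         # clock operator
D22 = tau ** 4 * np.linalg.matrix_power(X, 2) @ np.linalg.matrix_power(Z, 2)

# U = (1/sqrt 5) D_{(2,2)} sum_{r,s} e^{-4 pi i s(s+2r)/5} |e_r><e_s|
r, s = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
U = D22 @ np.exp(-4j * np.pi * s * (s + 2 * r) / d) / np.sqrt(d)

assert np.allclose(U @ U.conj().T, np.eye(d))   # U is unitary
U3 = np.linalg.matrix_power(U, 3)
assert np.allclose(U3, U3[0, 0] * np.eye(d))    # U^3 is a phase times identity

# Eigenspace dimensions: cluster the five eigenvalues of U
dims = []
for v in np.linalg.eigvals(U):
    for grp in dims:
        if abs(v - grp[0]) < 1e-8:
            grp.append(v)
            break
    else:
        dims.append([v])
assert sorted(len(g) for g in dims) == [1, 2, 2]
```

The eigenvalue clustering is insensitive to the choice of $\theta$, so no phase fixing is needed to recover the dimensions $(1,2,2)$.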
In dimensions $6$ to $45$ the calculation goes through in essentially the same
way. The calculation is, however, slightly more complicated when $d$ is
even, due to the fact that we must then require $\Det F_d=1\;(\text{mod}\; 2 d)$.
Note, also, that when $d=6, 21, 24, 28$ or $36$ the matrix $F_d$ is non-prime, so
we have to use the decomposition of Lemma~\ref{lem:CliffordStructure3}.
This method also enables us to establish the full stability group of
$|\psi_d\rangle$:
\emph{i.e.}~the set of all operations (unitary or anti-unitary) $\in
\EC(d)/\Cc(d)$ of which $|\psi_d\rangle$ is an eigenvector. It turns out that,
with one exception, the stability group is the order $3$ cyclic subgroup
generated by $[F_d,\boldsymbol{\chi}_{d}]$. The
exception is dimension~$7$, where the stability group is the order~$6$ cyclic
subgroup generated by the anti-unitary operation
\begin{equation}
\left[ A_{\mathstrut 7}, \boldsymbol{\xi}_7
\right]
=
\left[ \begin{pmatrix} 2 & -1 \\ -1 & 0
\end{pmatrix},
\begin{pmatrix} 1\\ 1
\end{pmatrix}
\right]
\end{equation}
Note that $[ A^{\mathstrut}_{ 7}, \boldsymbol{\xi}_7
]^2=[ F^{\mathstrut}_{ 7}, \boldsymbol{\chi}_{7}]$.
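This squaring relation can be checked directly at the symplectic level; the sketch below assumes the composition law $(F,\boldsymbol{\chi})(F',\boldsymbol{\chi}') = (FF', \boldsymbol{\chi}+F\boldsymbol{\chi}')$ for these equivalence classes:

```python
import numpy as np

# [A_7, xi_7]^2 = [F_7, chi_7] at the symplectic level, working mod d = 7
# (composition law (F, chi)(F', chi') = (F F', chi + F chi') assumed).
d = 7
A = np.array([[2, -1], [-1, 0]])
xi = np.array([1, 1])
F7 = np.array([[-2, -2], [-2, 1]])      # from Table 1
chi7 = np.array([2, 0])

assert (A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]) % d == d - 1  # det = -1: anti-unitary
assert np.array_equal((A @ A) % d, F7 % d)
assert np.array_equal((xi + A @ xi) % d, chi7 % d)
```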
\section{Zauner's Conjecture}
\label{sec:Zauner}
In the last section we saw that RBSC's numerical results
support Conjecture A. Their results also support Conjecture B:
\emph{i.e.}\ Zauner's conjecture, that in each dimension $d$ there exists a
GP fiducial vector which is an eigenvector of $[Z,\boldsymbol{0}]$.
In fact, for each $5\le d \le 45$ let $[L_d,\boldsymbol{\eta}_d]$ be the
operation specified in Table~\ref{tbl:Conjugacy}.
\begin{table}
\SMALL
\begin{center}
\begin{tabular}{|c c c ||c c c || c c c|}
\hline
\parbox{1 pt}{\rule{0 ex}{5 ex}}
$d$ & $L_d$ & $\boldsymbol{\eta}_d$ \hspace{1
ex} &
\hspace{1 ex}
$d$ &
$L_d$ &
$\boldsymbol{\eta}_d$ \hspace{1 ex} &
\hspace{1 ex}
$d$ &
$L_d$ &
$\boldsymbol{\eta}_d$ \\
\hline
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$5$ & $\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$
& $\begin{pmatrix} 0 \\ -2 \end{pmatrix}$ &
$19$ & $\begin{pmatrix} 2 & 1 \\ 0 & -9 \end{pmatrix}$
& $\begin{pmatrix} 5 \\ -6 \end{pmatrix}$ &
$33$ & $\begin{pmatrix} 6 & 2 \\ 5 & -15 \end{pmatrix}$
& $\begin{pmatrix} 13 \\ 15 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$6$ & $\begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix}$
& $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ &
$20$ & $\begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}$
& $\begin{pmatrix} 9 \\ -3 \end{pmatrix}$ &
$34$ & $\begin{pmatrix} 0 & 1 \\ -1 & -11 \end{pmatrix}$
& $\begin{pmatrix} 13 \\ 3 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$7$ & $\begin{pmatrix} 2 & 0 \\ -3 & -3 \end{pmatrix}$
& $\begin{pmatrix} 0 \\ 3 \end{pmatrix}$ &
$21$ & $\begin{pmatrix} 2 & 1 \\ -4 & 8 \end{pmatrix}$
& $\begin{pmatrix} -3 \\ -7 \end{pmatrix}$ &
$35$ & $\begin{pmatrix} 14 & 2 \\ 10 & 4 \end{pmatrix}$
& $\begin{pmatrix} 4 \\ 6 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$8$ & $\begin{pmatrix} 0 & 1 \\ -1 & -3 \end{pmatrix}$
& $\begin{pmatrix} -2 \\ 3 \end{pmatrix}$ &
$22$ & $\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}$
& $\begin{pmatrix} 8 \\ 6 \end{pmatrix}$ &
$36$ & $\begin{pmatrix} 17 & 1 \\ 5 & -4 \end{pmatrix}$
& $\begin{pmatrix} -2 \\ -5 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$9$ & $\begin{pmatrix} 2 & 0 \\ -3 & -4 \end{pmatrix}$
& $\begin{pmatrix} 0 \\ -4 \end{pmatrix}$ &
$23$ & $\begin{pmatrix} 0 & 3 \\ -8 & -7 \end{pmatrix}$
& $\begin{pmatrix} -10 \\ -4 \end{pmatrix}$ &
$37$ & $\begin{pmatrix} 6 & 0 \\ -15 & -6 \end{pmatrix}$
& $\begin{pmatrix} -7 \\ -6 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$10$ & $\begin{pmatrix} 3 & 1 \\ -7 & -2 \end{pmatrix}$
& $\begin{pmatrix} 2 \\ 4 \end{pmatrix}$ &
$24$ & $\begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}$
& $\begin{pmatrix} 3 \\ 0 \end{pmatrix}$ &
$38$ & $\begin{pmatrix} 0 & 1 \\ -1 & -5 \end{pmatrix}$
& $\begin{pmatrix} -6 \\ 16 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$11$ & $\begin{pmatrix} 1 & 1 \\ 2 & 3 \end{pmatrix}$
& $\begin{pmatrix} 0 \\ 5 \end{pmatrix}$ &
$25$ & $\begin{pmatrix} 1 & 0 \\ 6 & 1 \end{pmatrix}$
& $\begin{pmatrix} 3 \\ 4 \end{pmatrix}$ &
$39$ & $\begin{pmatrix} 7 & 2 \\ 2 & 6 \end{pmatrix}$
& $\begin{pmatrix} 17 \\ 14 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$12$ & $\begin{pmatrix} 0 & 1 \\ -1 & -3 \end{pmatrix}$
& $\begin{pmatrix} 3 \\ 2 \end{pmatrix}$ &
$26$ & $\begin{pmatrix} 9 & 0 \\ 11 & -23 \end{pmatrix}$
& $\begin{pmatrix} 2 \\ -7 \end{pmatrix}$ &
$40$ & $\begin{pmatrix} 27 & 1 \\ 14 & -35 \end{pmatrix}$
& $\begin{pmatrix} -19 \\ 2 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$13$ & $\begin{pmatrix} 4 & 2 \\ 5 & 6 \end{pmatrix}$
& $\begin{pmatrix} -6 \\ -5 \end{pmatrix}$ &
$27$ & $\begin{pmatrix} 1 & 0 \\ 10 & -1 \end{pmatrix}$
& $\begin{pmatrix} -4 \\ 7 \end{pmatrix}$ &
$41$ & $\begin{pmatrix} 18 & 0 \\ -5 & 16 \end{pmatrix}$
& $\begin{pmatrix} 1 \\ -15 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$14$ & $\begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}$
& $\begin{pmatrix} -4 \\ 3 \end{pmatrix}$ &
$28$ & $\begin{pmatrix} 12 & 1 \\ -25 & 26 \end{pmatrix}$
& $\begin{pmatrix} -6 \\ -8 \end{pmatrix}$ &
$42$ & $\begin{pmatrix} 2 & 1 \\ 11 & -36 \end{pmatrix}$
& $\begin{pmatrix} 8 \\ 7 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$15$ & $\begin{pmatrix} 1 & 0 \\ 5 & -1 \end{pmatrix}$
& $\begin{pmatrix} 0 \\ 7 \end{pmatrix}$ &
$29$ & $\begin{pmatrix} 11 & 0 \\ -2 & 8 \end{pmatrix}$
& $\begin{pmatrix} -4 \\ -2 \end{pmatrix}$ &
$43$ & $\begin{pmatrix} 8 & 1 \\ -16 & -18 \end{pmatrix}$
& $\begin{pmatrix} 14 \\ 16 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$16$ & $\begin{pmatrix} 3 & 1 \\ -11 & -14 \end{pmatrix}$
& $\begin{pmatrix} 5 \\ 8 \end{pmatrix}$ &
$30$ & $\begin{pmatrix} 10 & 1 \\ 29 & 3 \end{pmatrix}$
& $\begin{pmatrix} 12 \\ 1 \end{pmatrix}$ &
$44$ & $\begin{pmatrix} 7 & 1 \\ -37 & 20 \end{pmatrix}$
& $\begin{pmatrix} 6 \\ 19 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$17$ & $\begin{pmatrix} 1 & 1 \\ 2 & 3 \end{pmatrix}$
& $\begin{pmatrix} 8 \\ -4 \end{pmatrix}$ &
$31$ & $\begin{pmatrix} 11 & 0 \\ 6 & -14 \end{pmatrix}$
& $\begin{pmatrix} -5 \\ 4 \end{pmatrix}$ &
$45$ & $\begin{pmatrix} 1 & 0 \\ 20 & 1 \end{pmatrix}$
& $\begin{pmatrix} 14 \\ -6 \end{pmatrix}$ \\
\parbox{0.11 pt}{\rule{0 ex}{7.5 ex}}
$18$ & $\begin{pmatrix} 2 & 1 \\ 7 & -14 \end{pmatrix}$
& $\begin{pmatrix} -3 \\ 3 \end{pmatrix}$ &
$32$ & $\begin{pmatrix} 27 & 1 \\ -8 & -5 \end{pmatrix}$
& $\begin{pmatrix} 13 \\ -15 \end{pmatrix}$ &
& & \\
\hline
\end{tabular}
\normalsize
\vspace{2 ex}
\caption{For each $5\le d\le 45$ the operation $[L_d,\boldsymbol{\eta}_d]$ conjugates the canonical order $3$ unitary $[F_d,\boldsymbol{\chi}_d]$ of Table~\ref{tbl:RBSC} into $[Z,\boldsymbol{0}]$.}
\label{tbl:Conjugacy}
\end{center}
\end{table}
It is easily verified that
\begin{equation}
[L_d,\boldsymbol{\eta}_d] [F_d,\boldsymbol{\chi}_d]
[L_d,\boldsymbol{\eta}_d]^{-1}
= [Z,\boldsymbol{0}]
\end{equation}
This means that if $\hat{U} \in
[L_d,\boldsymbol{\eta}_d]$, and if $|\psi_d\rangle$ is the RBSC vector in
dimension $d$, then $\hat{U}|\psi_d\rangle$ is a GP fiducial vector which
is an eigenvector of
$[Z,\boldsymbol{0}]$. Conjecture B is thus confirmed numerically for
every dimension $\le 45$.
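The conjugacy relation can be spot-checked at the symplectic level for the odd dimensions $5$ and $7$ (a sketch, assuming the composition law $(F,\boldsymbol{\chi})(F',\boldsymbol{\chi}')=(FF',\boldsymbol{\chi}+F\boldsymbol{\chi}')$; for even $d$ the check would need the mod $2d$ refinements):

```python
import numpy as np

# Spot-check [L_d, eta_d][F_d, chi_d][L_d, eta_d]^{-1} = [Z, 0] for d = 5, 7.
# Matrix part: L F = Z L (mod d); vector part: eta + L chi - Z eta = 0 (mod d).
Zmat = np.array([[0, -1], [1, -1]])
data = {  # d: (L_d, eta_d) from Table 2 and (F_d, chi_d) from Table 1
    5: ([[1, 0], [1, 1]], [0, -2], [[-1, -1], [1, 0]], [2, 2]),
    7: ([[2, 0], [-3, -3]], [0, 3], [[-2, -2], [-2, 1]], [2, 0]),
}
for d, (L, eta, F, chi) in data.items():
    L, eta, F, chi = (np.array(x) for x in (L, eta, F, chi))
    assert np.array_equal((L @ F) % d, (Zmat @ L) % d)
    assert np.array_equal((eta + L @ chi - Zmat @ eta) % d, np.zeros(2, dtype=int))
```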
This suggests
\begin{quote}
\textbf{Conjecture C:} GP fiducial vectors exist in every finite dimension.
Furthermore, every such vector is an eigenvector of a canonical order $3$
unitary which is conjugate to
$[Z,\boldsymbol{0}]$.
\end{quote}
Conjecture C is clearly stronger than Conjecture B. It also implies Conjecture
A.
An operation conjugate to $[Z,\boldsymbol{0}]$ is automatically a canonical
order
$3$ unitary. It would be interesting to know whether the converse is also
true:
\emph{i.e.}\ whether every canonical order
$3$ unitary is conjugate to $[Z,\boldsymbol{0}]$. If that
were not the case Conjecture C would be strictly stronger than Conjecture A.
\section{Dimensions $2$ to $7$: Vectors, Orbits and Stability Groups}
\label{sec:VectorsOrbitsStability}
In dimensions $2$--$7$ RBSC made a numerical search, in an attempt to find the
total number of GP fiducial vectors. On the assumption that their search
was exhaustive
we use their data
to calculate, for dimensions
$2$--$7$, the number of distinct orbits under the action of the extended Clifford
group. We also calculate the order of the
stability group corresponding to each orbit. Our results are tabulated in
Table~\ref{tbl:Orbits}. They confirm that in dimensions $2$--$7$
\emph{every} GP fiducial vector is an eigenvector of a canonical order $3$
Clifford unitary (in agreement with Conjecture A). We incidentally give
exact expressions for two of the GP fiducial vectors in dimension $7$ (one
on each of the two distinct orbits).
The calculations on which these statements are based are somewhat lengthy, and
there is not the space to reproduce them here. We therefore confine ourselves
to summarizing the end results, which it is straightforward (albeit tedious) to
confirm with the help of (for example)
\emph{Mathematica}.
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{| c | c| c| c |}
\hline
& \multicolumn{2}{c |}{Stability Group} & \\
\cline{2-3}
dimension & type & order & number of orbits \\
\hline
$2$ & non-Abelian & $6$ & $1$ \\
\hline
& non-Abelian & $6$ & $\infty$ \\
\cline{2-4}
3 & non-Abelian & $12$ & $1$ \\
\cline{2-4}
& non-Abelian & $48$ & $1$ \\
\hline
$4$ & cyclic & $6$ & $1$ \\
\hline
$5$ & cyclic & $3$ & $1$ \\
\hline
$6$ & cyclic & $3$ & $1$ \\
\hline
& cyclic & $3$ & $1$ \\
\cline{2-4}
\raisebox{2 ex}[0 cm][0 cm]{$7$ } & cyclic & $6$ & $1$ \\
\hline
\end{tabular}
\end{center}
\vspace{ 0.1 in}
\caption{Stability groups in dimensions $2$--$7$. In every case the
stability group includes an order
$3$ cyclic subgroup generated by a unitary operation having Clifford trace $=-1$.}
\label{tbl:Orbits}
\end{table}
\subsection*{Dimension 2} Exact solutions in dimension $2$ have been obtained
by Zauner~\cite{Zauner} and RBSC~\cite{Renes}. In dimension
$2$ the GP fiducial vectors all lie on a single orbit of the extended Clifford group.
Consider the GP fiducial vector
\begin{equation}
|\psi_{2}\rangle =
\sqrt{(3+\sqrt{3})/6} \; |e_0\rangle + e^{\frac{i
\pi}{4}}\sqrt{(3-\sqrt{3})/6} \; |e_1\rangle
\end{equation}
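That $|\psi_2\rangle$ really is a GP fiducial can be confirmed numerically: the moduli $|\langle \psi_2|\hat{D}_{\mathbf{p}}|\psi_2\rangle|$ must equal $1/\sqrt{d+1}$ for all $\mathbf{p}\neq 0\;(\text{mod}\;d)$, and these moduli are insensitive to the phase convention chosen for $\hat{D}_{\mathbf{p}}$:

```python
import numpy as np

# GP fiducial test in d = 2: every nonzero p gives |<psi|D_p|psi>|^2 = 1/3.
# Phases of D_p drop out of the moduli, so we simply use X^{p1} Z^{p2}.
d = 2
a = np.sqrt((3 + np.sqrt(3)) / 6)
b = np.exp(1j * np.pi / 4) * np.sqrt((3 - np.sqrt(3)) / 6)
psi = np.array([a, b])
assert np.isclose(np.vdot(psi, psi).real, 1)    # normalized

X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
for p1 in range(d):
    for p2 in range(d):
        if p1 == p2 == 0:
            continue
        Dp = np.linalg.matrix_power(X, p1) @ np.linalg.matrix_power(Z, p2)
        assert np.isclose(abs(np.vdot(psi, Dp @ psi)) ** 2, 1 / (d + 1))
```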
The stability group of $|\psi_2\rangle$ is the order~$6$, non-Abelian subgroup of
$\EC(2)/\Cc(2)$ generated by the unitary operation
\begin{equation}
[F_2,\boldsymbol{0}]
=
\left[
\begin{pmatrix} 0 & 1 \\ -1 & -1
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\end{equation}
and the three anti-unitary operations
{
\allowdisplaybreaks
\begin{align}
[A_2,\boldsymbol{0}]
& =
\left[
\begin{pmatrix} 0 & 1 \\ 1 & 0
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\\
[B_2,\boldsymbol{0}]
& =
\left[
\begin{pmatrix} -1 & -1 \\ 0 & 1
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\\
[C_2,\boldsymbol{0}]
& =
\left[
\begin{pmatrix} 1 & 0 \\ -1 & -1
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\end{align}
Note that $[F_2,\boldsymbol{0}]$ is
canonical order $3$. It follows from Lemmas~\ref{lem:CliffordStructure5}
and~\ref{lem:OrderECd} that $\left| \EC(2)/\Cc(2)\right| = 48$. So the orbit
consists of $48 \div 6=8$ fiducial vectors (identifying vectors which only differ
by a phase), constituting $2$ distinct SIC-POVMs (as described by RBSC).
}
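One can also confirm numerically that $|\psi_2\rangle$ is an eigenvector of a unitary in the class $[F_2,\boldsymbol{0}]$. The sketch below reconstructs such a unitary from the explicit formula of Lemma~\ref{lem:CliffordStructure4} with $\tau = -e^{i\pi/2} = -i$ (our reconstruction, so treat the matrix as an assumption):

```python
import numpy as np

# A unitary in the class [F_2, 0], with F_2 = [[0, 1], [-1, -1]]:
# U = (1/sqrt 2) sum_{r,s} tau^{-2 r s - r^2} |e_r><e_s|, tau = -i.
tau = -1j
U = np.array([[1, 1],
              [tau ** (-1), tau ** (-3)]]) / np.sqrt(2)
assert np.allclose(U @ U.conj().T, np.eye(2))   # unitary

a = np.sqrt((3 + np.sqrt(3)) / 6)
b = np.exp(1j * np.pi / 4) * np.sqrt((3 - np.sqrt(3)) / 6)
psi = np.array([a, b])

# |psi_2> is an eigenvector of U, and U^3 is a phase times the identity
v = U @ psi
lam = v[0] / psi[0]
assert np.allclose(v, lam * psi) and np.isclose(abs(lam), 1)
U3 = np.linalg.matrix_power(U, 3)
assert np.allclose(U3, U3[0, 0] * np.eye(2))
```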
\subsection*{Dimension 3} Exact solutions in dimension~$3$ have been obtained by
Zauner~\cite{Zauner} and RBSC~\cite{Renes}. We saw in
Section~\ref{sec:CliffordTrace} that dimension~$3$ is unusual in that it is the
only dimension for which the identity operator has Clifford trace
$=-1$. It seems to be unusual in another respect also: for it is the only case
presently known where the GP fiducial vectors constitute infinitely many
distinct orbits of the extended Clifford group.
Consider the one parameter family of GP fiducial vectors
\begin{equation}
|\psi_{3}(t)\rangle = \frac{1}{\sqrt{2}} \left( e^{- i t} |e_1\rangle
-e^{i t} |e_2\rangle
\right)
\end{equation}
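The fiducial property of the whole family is easy to verify numerically: for every $t$, $|\langle \psi_3(t)|\hat{D}_{\mathbf{p}}|\psi_3(t)\rangle|^2 = \tfrac{1}{4}$ whenever $\mathbf{p}\neq 0\;(\text{mod}\;3)$. A sketch (phases of $\hat{D}_{\mathbf{p}}$ drop out of the moduli):

```python
import numpy as np

# |psi_3(t)> is a GP fiducial for every t: all nonzero displacement overlaps
# have squared modulus 1/(d+1) = 1/4.
d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))

for t in (0.0, np.pi / 6, 0.1234):       # the two end points and a generic value
    psi = np.array([0, np.exp(-1j * t), -np.exp(1j * t)]) / np.sqrt(2)
    for p1 in range(d):
        for p2 in range(d):
            if p1 == p2 == 0:
                continue
            Dp = np.linalg.matrix_power(X, p1) @ np.linalg.matrix_power(Z, p2)
            assert np.isclose(abs(np.vdot(psi, Dp @ psi)) ** 2, 1 / (d + 1))
```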
The complete set of GP fiducial vectors is obtained by acting on the vectors
$|\psi_{3}(t)\rangle$ with elements of $\EC(3)$.
Let $\hat{T}$ and $\hat{J}$ be the operators defined by Eqs.~(\ref{eq:TDef})
and~(\ref{eq:JopDef}) respectively. Then
\begin{equation}
\hat{T} |\psi_{3}(t)\rangle =-|\psi_{3}({t+\tfrac{
\pi}{3}})\rangle
\qquad
\text{and}
\qquad
\hat{J} |\psi_{3}(t)\rangle = |\psi_{3}(-t)\rangle
\end{equation}
So $|\psi_{3}(t)\rangle$ and $|\psi_{3}(t')\rangle$ are on the same orbit if
$t'=\frac{ n \pi}{3} \pm t$ for some integer $n$. At the cost of rather more
computational effort one can show that this condition is not only sufficient
but also necessary for $|\psi_{3}(t)\rangle$ and $|\psi_{3}(t')\rangle$ to be on
the same orbit. So for each distinct orbit there is exactly one value of $t\in
[0,\frac{ \pi}{6}]$ such that $|\psi_{3}(t)\rangle$ is on the orbit.
There are three kinds of orbit: a set of infinitely many generic orbits
corresponding to values of $t$ in the interior of the interval
$[0,\frac{\pi}{6}]$, and two exceptional orbits corresponding to the two end
points $t=0$ and $\frac{\pi}{6}$.
The stability group of the exceptional vector $|\psi_{3}(0)\rangle$ consists of
all
$48$ operations of the form $[F,\boldsymbol{0}]$, where $F$ is any element of
$\ESL(2,\mathbf{Z}_3)$. The orbit thus consists of $432\div 48=9$ fiducial
vectors, constituting a single SIC-POVM.
The stability group of the exceptional vector $|\psi_{3}(\frac{\pi}{6})\rangle$ is
the order~$12$ non-Abelian subgroup of $\EC(3)/ \Cc(3)$ generated by the unitary
operation
\begin{equation}
[F_3,\boldsymbol{\chi}_3]=
\left[ \begin{pmatrix} -1 & 0 \\ -1 & -1
\end{pmatrix},
\begin{pmatrix} 0 \\ 1
\end{pmatrix}
\right]
\end{equation}
and the anti-unitary operation
\begin{equation}
[A_3,\boldsymbol{\chi}_3]=
\left[ \begin{pmatrix} 1 & 0 \\ 0 & -1
\end{pmatrix},
\begin{pmatrix} 0 \\ 1
\end{pmatrix}
\right]
\end{equation}
Note that
\begin{equation}
[F_3,\boldsymbol{\chi}_3]^2=
\left[ \begin{pmatrix} 1 & 0 \\ -1 & 1
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\end{equation}
is canonical order $3$. The orbit thus consists of
$432\div 12=36$ fiducial
vectors, constituting $4$ distinct SIC-POVMs.
The stability group of a generic vector $|\psi_{3}(t)\rangle$ with
$0<t<\frac{\pi}{6}$ is the order~$6$ non-Abelian subgroup generated by the
unitary operation
\begin{equation}
[F_3,\boldsymbol{\chi}_3]^2=
\left[ \begin{pmatrix} 1 & 0 \\ -1 & 1
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\end{equation}
and the anti-unitary operation
\begin{equation}
[F_3,\boldsymbol{\chi}_3] \circ [A_3,\boldsymbol{\chi}_3]
=
\left[
\begin{pmatrix} -1 & 0 \\ -1 & 1
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\end{equation}
The orbit thus consists of
$432\div 6=72$ fiducial
vectors, constituting $8$ distinct SIC-POVMs.
\subsection*{Dimension 4} The vector
\begin{multline}
|\psi_4\rangle =
\sqrt{\frac{5-\sqrt{5}}{40}} \biggl(
2 \cos\frac{\pi}{8} |e_0\rangle
+ i \Bigl( e^{-\frac{i \pi}{8}} + \bigl(2+\sqrt{5}\bigr)^{\frac{1}{2}}
e^{\frac{i
\pi}{8}}\Bigr) |e_1\rangle \biggr.
\\
\biggl.
+ 2 i \sin \frac{\pi}{8} |e_2\rangle
+ i \Bigl( e^{-\frac{i \pi}{8}} - \bigl(2+\sqrt{5}\bigr)^{\frac{1}{2}}
e^{\frac{i
\pi}{8}}\Bigr) |e_3\rangle
\biggr)
\end{multline}
is a GP fiducial vector in dimension~$4$, as discovered by Zauner~\cite{Zauner}
and RBSC~\cite{Renes}\footnote{
In Zauner's notation $|\psi_4\rangle$ is the vector
\begin{equation}
e^{-\frac{i \pi}{8}} \left( X \psi_{1 a} + \rho^3 Y \psi_{1 b}\right)
\end{equation}
In RBSC's notation it is the
vector
\begin{equation}
r_0 |e_0\rangle + r_{+} e^{i \theta_{+}} |e_1\rangle + r_{1} e^{i
\theta_{1}} |e_2\rangle + r_{-} e^{i \theta_{-}} |e_3 \rangle
\end{equation}
for the case
$n= j= m=1$ and $ k=0$ (note, however, that there is a typographical error in
RBSC~\cite{Renes}: their expression for $r_0$ should read
$r_0 =
\sqrt{\left(1-1/\sqrt{5}\right)}/\left(2 \sqrt{2-\sqrt{2}}\right)$).
}.
The stability group of $|\psi_4\rangle$ is the order~$6$ cyclic subgroup of
$\EC(4)/\Cc(4)$ generated by the anti-unitary operation
\begin{equation}
[A_4,\boldsymbol{\chi}_4]
=
\left[
\begin{pmatrix} -1 & 1 \\ -1 & 2
\end{pmatrix},
\begin{pmatrix} 2 \\ 0
\end{pmatrix}
\right]
\end{equation}
Note that
\begin{equation}
[A_4,\boldsymbol{\chi}_4] ^2
=
\left[
\begin{pmatrix} 0 & 1 \\ -1 & 3
\end{pmatrix},
\begin{pmatrix} 0 \\ 2
\end{pmatrix}
\right]
=
\left[
\begin{pmatrix} 0 & 1 \\ -1 & -1
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\end{equation}
is canonical order $3$ (where we used
Eq.~(\ref{eq:KernelExpression}) to obtain the last expression on the right hand
side).
It follows from Lemmas~\ref{lem:CliffordStructure5} and~\ref{lem:OrderECd} that
the group $\EC(4)/\Cc(4)$ is of order $1536$. So the orbit generated by
$|\psi_4\rangle$ contains $1536 \div 6= 256$ fiducial vectors, constituting $256
\div 16=16$ SIC-POVMs. It was shown by RBSC that there are only $16$
SIC-POVMs in dimension $4$. We conclude that the fiducial vectors all
lie on a single orbit of the extended Clifford group.
\subsection*{Dimension 5} Let $|\psi_5\rangle$ be RBSC's numerical vector in
dimension~$5$. We noted in the last section that the stability group of
$|\psi_5\rangle$ is of order $3$. It follows from
Lemmas~\ref{lem:CliffordStructure5} and~\ref{lem:OrderECd} that
the group $\EC(5)/\Cc(5)$ is of order $6000$. So the orbit generated by
$|\psi_5\rangle$ contains $6000\div 3 = 2000$ fiducial vectors, constituting
$2000\div 25=80$ SIC-POVMs. It was shown by RBSC that there are only $80$
SIC-POVMs in dimension~$5$. We conclude that the fiducial vectors all lie on a
single orbit of the extended Clifford group.
Note that Zauner's analytic solution in dimension $5$ (on p.~63 of his
thesis~\cite{Zauner}) can be used to give exact expressions for each of the
vectors on the orbit.
\subsection*{Dimension 6} Let $|\psi_6\rangle$ be RBSC's numerical vector in
dimension~$6$. We noted in the last section that the stability group of
$|\psi_6\rangle$ is of order $3$. It follows from
Lemmas~\ref{lem:CliffordStructure5} and~\ref{lem:OrderECd} that
the group $\EC(6)/\Cc(6)$ is of order $10368$. So the orbit generated by
$|\psi_6\rangle$ contains $10368\div 3 = 3456$ fiducial vectors, constituting
$3456\div 36=96$ SIC-POVMs. It was shown by RBSC that there are only $96$
SIC-POVMs in dimension~$6$. We conclude that the fiducial vectors all lie on a
single orbit of the extended Clifford group (in agreement with
Grassl's~\cite{Grassl} analysis, based on his exact solution in dimension $6$).
Note that Grassl's~\cite{Grassl} analytic solution can be used to give exact
expressions for each of the vectors on the orbit.
\subsection*{Dimension 7} Let $|\psi_7\rangle$ be RBSC's numerical vector in
dimension~$7$. We noted in the last section that the stability group of
$|\psi_7\rangle$ is of order $6$. It follows from
Lemmas~\ref{lem:CliffordStructure5} and~\ref{lem:OrderECd} that
the group $\EC(7)/\Cc(7)$ is of order $32928$. So the orbit generated by
$|\psi_7\rangle$ contains $32928\div 6 = 5488$ fiducial vectors, constituting
$5488\div 49=112$ SIC-POVMs. However, it was shown by RBSC that there are $336$
SIC-POVMs in dimension $7$. We conclude that there must be at least one other
orbit.
The search for the additional orbit or orbits is
facilitated by the fact that in dimension
$7$ there exists a canonical order $3$ Clifford unitary for which the $F$ matrix
is diagonal: namely
\begin{equation}
[F'_{7},\boldsymbol{0}]
=
\left[\begin{pmatrix} - 3 & 0 \\ 0 & 2
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\end{equation}
The fact that $F'_{7}$ is diagonal means that the $\hat{U}\in
[F'_{7},\boldsymbol{0}]$ are permutation matrices up to an overall phase. Specifically
\begin{equation}
\hat{U} = e^{i \theta} \sum_{r=0}^{6} |e_{4 r}\rangle \langle e_r |
\end{equation}
for every $\hat{U}\in
[F'_{7},\boldsymbol{0}]$
(where $e^{i \theta}$ is an arbitrary phase, and where we have used the
decomposition described in Lemma~\ref{lem:CliffordStructure3}). This
considerably simplifies the calculations.
We will also have occasion to consider the anti-unitary operation
\begin{equation}
[A'_{7},\boldsymbol{0}]
=
\left[\begin{pmatrix} -2 & 0 \\ 0 & -3
\end{pmatrix},
\begin{pmatrix} 0 \\ 0
\end{pmatrix}
\right]
\end{equation}
which is a square root of $[F'_{7},\boldsymbol{0}]$.
We look for eigenvectors of $[F'_{7},\boldsymbol{0}]$.
Let
\begin{equation}
l_r = \begin{cases}
1 \qquad & \text{if $r=1,2$ or $4$} \\
-1 & \text{if $r=3,5$ or $6$}
\end{cases}
\end{equation}
Also let
\begin{equation}
a_0 = \frac{1}{2} \left(\sqrt{\frac{1}{4-\sqrt{2}}} +
i \sqrt{\frac{4-\sqrt{2}}{2}}
\right) \qquad a_1 = \frac{1}{4} \sqrt{\frac{8-5\sqrt{2}}{7}}
\qquad a_2 = 2^{-\frac{7}{4}}
\end{equation}
and
\begin{equation}
b_0 = \sqrt{\frac{2+3\sqrt{2}}{14}} \qquad
b_1=\sqrt{\frac{4-\sqrt{2}}{28}} \qquad
\theta = \cos^{-1} \left(-\frac{\sqrt{\sqrt{2}+1}}{2} \right)
\end{equation}
Then define
\begin{align}
|\psi'_7\rangle & = a_0 |e_0\rangle -\sum_{r=1}^{6} \left(a_1 + l_r a_2\right)
|e_r\rangle
\label{eq:Dim7AnalyticVectorA}
\\
|\psi''_7\rangle & = b_0 |e_0\rangle + \sum_{r=1}^{6} b_1 e^{i l_r \theta}
|e_r\rangle
\label{eq:Dim7AnalyticVectorB}
\end{align}
It is readily confirmed that $|\psi'_7\rangle$ and $|\psi''_7\rangle$ are both GP
fiducial vectors. The stability group of $|\psi'_7\rangle$ is the order $3$
subgroup generated by $[F'_{7},\boldsymbol{0}]$, while the stability group of
$|\psi''_7\rangle$ is the order $6$ subgroup generated by
$[A'_{7},\boldsymbol{0}]$. Since the stability groups are non-isomorphic the
orbits generated by $|\psi'_7\rangle$ and $|\psi''_7\rangle$ are disjoint. The
orbit generated by
$|\psi'_7\rangle$ contains $32928\div 3 = 10976$ fiducial vectors, constituting
$10976\div 49=224$ SIC-POVMs. The
orbit generated by
$|\psi''_7\rangle$ contains $5488$ fiducial vectors, constituting a further
$112$ SIC-POVMs. This accounts for all $336$ of the SIC-POVMs identified by
RBSC. We conclude that there are no other orbits, apart from these two.
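The statement that $|\psi'_7\rangle$ is a GP fiducial vector can also be checked numerically. The following sketch (our addition, not part of the original argument) builds $|\psi'_7\rangle$ from Eq.~(\ref{eq:Dim7AnalyticVectorA}) and verifies the defining SIC property $|\langle\psi|X^jZ^k|\psi\rangle|^2=1/(d+1)$ for all $(j,k)\neq(0,0)$, using the standard generators $X|e_r\rangle=|e_{r+1}\rangle$, $Z|e_r\rangle=\omega^r|e_r\rangle$; the overlap magnitudes are independent of the phase convention chosen for the displacement operators.

```python
import math, cmath

d = 7
s2 = math.sqrt(2)
a0 = 0.5 * (math.sqrt(1 / (4 - s2)) + 1j * math.sqrt((4 - s2) / 2))
a1 = 0.25 * math.sqrt((8 - 5 * s2) / 7)
a2 = 2 ** (-7 / 4)
l = {r: (1 if r in (1, 2, 4) else -1) for r in range(1, 7)}  # l_r = (r/7)

# |psi'_7> = a0|e_0> - sum_{r=1}^{6} (a1 + l_r a2)|e_r>
psi = [a0] + [-(a1 + l[r] * a2) for r in range(1, 7)]

w = cmath.exp(2j * math.pi / d)

def overlap(j, k):
    # <psi| X^j Z^k |psi>, where (X^j Z^k psi)[n] = w^{k(n-j)} psi[(n-j) mod d]
    return sum(psi[n].conjugate() * w ** (k * ((n - j) % d)) * psi[(n - j) % d]
               for n in range(d))

# SIC property: |<psi| X^j Z^k |psi>|^2 = 1/(d+1) for all (j, k) != (0, 0)
devs = [abs(abs(overlap(j, k)) ** 2 - 1 / (d + 1))
        for j in range(d) for k in range(d) if (j, k) != (0, 0)]
```

The same check applied to $|\psi''_7\rangle$ confirms the second orbit.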
For the sake of completeness let us note that
\begin{equation}
|\psi_7\rangle = \hat{U} |\psi''_7\rangle
\end{equation}
where $|\psi_7\rangle$ is RBSC's numerical vector and $\hat{U}$ is a unitary
operator
\begin{equation}
\hat{U} \in
\left[ \begin{pmatrix} 1 & 1 \\ -3 & -2
\end{pmatrix},
\begin{pmatrix} 0 \\1
\end{pmatrix}
\right]
\end{equation}
Finally, let us remark that $l_r$ is the Legendre symbol (see, \emph{e.g.},
Nathanson~\cite{Nathanson} or Rose~\cite{Rose})
\begin{equation}
l_r=\genfrac{(}{)}{}{}{r}{7}
\end{equation}
It has the important property that $l_{rs}=l_r l_s$ for all $r,s\in \mathbb{Z}$.
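As a small illustration (ours, not the paper's), the values of $l_r$ and their multiplicativity can be confirmed with Euler's criterion: $r^{(7-1)/2} \bmod 7$ equals $1$ for quadratic residues, $6$ ($=-1$) for non-residues, and $0$ when $7$ divides $r$.

```python
# Legendre symbol (r/7) via Euler's criterion
def l(r):
    return {0: 0, 1: 1, 6: -1}[pow(r, 3, 7)]

print([l(r) for r in range(1, 7)])  # [1, 1, -1, 1, -1, -1]
```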
\section{A Fiducial Vector in Dimension 19}
\label{sec:dimension19}
In Section~\ref{sec:RBSCVectors} we saw that, except in dimension~$7$, each of
RBSC's numerical solutions has stability group of order~$3$. This might
encourage one to speculate that when
$d>7$ the stability group is \emph{always} of order~$3$. In this section we
show that there is at least one exception to that
putative rule, by constructing a GP fiducial vector in dimension~$19$ for which
the stability group has order $\ge 18$.
The vector we construct is an eigenvector of the order $18$ anti-unitary operation
\begin{equation}
[A'_{19},\boldsymbol{0}]
=
\left[
\begin{pmatrix} -9 & 0 \\ 0 & -2
\end{pmatrix},
\begin{pmatrix} 0\\ 0
\end{pmatrix}
\right]
\in
\EC(19)/\Cc(19)
\end{equation}
Note that
\begin{equation}
[F'_{19},\boldsymbol{0}]
=
[A'_{19},\boldsymbol{0}]^6
=\left[
\begin{pmatrix} -8 & 0 \\ 0 & 7
\end{pmatrix},
\begin{pmatrix} 0\\ 0
\end{pmatrix}
\right]
\end{equation}
is canonical order~$3$.
The construction is similar to our
construction of the vector
$|\psi''_7\rangle$ in the last section. Let
$l'_r$ be the Legendre symbol
\begin{equation}
l'_r = \genfrac{(}{)}{}{}{r}{19}
= \begin{cases} 1 \qquad & \text{if $r=1,4,5,6,7,9,11,16$ or $17$} \\
-1\qquad & \text{if $r=2,3,8,10,12,13,14,15$ or $18$}
\end{cases}
\end{equation}
and let
\begin{equation}
b'_0 = \sqrt{\frac{5 + 9\sqrt{5}}{95} } \qquad
b'_1= \sqrt{\frac{10-\sqrt{5}}{190}} \qquad
\theta'= \cos^{-1} \left(\sqrt{\frac{\sqrt{5}-1}{8}} \right)
\end{equation}
Then define
\begin{equation}
|\psi'_{19}\rangle = b'_0 |e_0\rangle
+ \sum_{r=1}^{18} b'_1 e^{ i l'_r \theta'} |e_r\rangle
\label{eq:Dim19AnalyticVector}
\end{equation}
It is readily confirmed that $|\psi'_{19}\rangle$ is a GP fiducial vector, and an
eigenvector of $[A'_{19},\boldsymbol{0}]$.
Observe that the orbit generated by $|\psi'_{19}\rangle$ is disjoint from the
orbit generated by RBSC's numerical vector $|\psi_{19}\rangle$ (because the
stability groups are non-isomorphic). It follows that there are at least two
distinct orbits in dimension
$19$.
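As in dimension $7$, the fiducial property of $|\psi'_{19}\rangle$ is amenable to a direct numerical check. The sketch below (our addition) recomputes the Legendre symbols $l'_r$ by Euler's criterion, builds $|\psi'_{19}\rangle$ from Eq.~(\ref{eq:Dim19AnalyticVector}), and verifies $|\langle\psi|X^jZ^k|\psi\rangle|^2=1/20$ for all $(j,k)\neq(0,0)$.

```python
import math, cmath

d = 19
s5 = math.sqrt(5)
b0 = math.sqrt((5 + 9 * s5) / 95)
b1 = math.sqrt((10 - s5) / 190)
theta = math.acos(math.sqrt((s5 - 1) / 8))

# Legendre symbol (r/19) via Euler's criterion: r^9 mod 19 is 1 or 18 (= -1)
l = {r: (1 if pow(r, (d - 1) // 2, d) == 1 else -1) for r in range(1, d)}

# |psi'_19> = b'_0 |e_0> + sum_{r=1}^{18} b'_1 exp(i l'_r theta') |e_r>
psi = [b0] + [b1 * cmath.exp(1j * l[r] * theta) for r in range(1, d)]

w = cmath.exp(2j * math.pi / d)

def overlap(j, k):
    # <psi| X^j Z^k |psi>, with X|e_n> = |e_{n+1}>, Z|e_n> = w^n |e_n>
    return sum(psi[n].conjugate() * w ** (k * ((n - j) % d)) * psi[(n - j) % d]
               for n in range(d))

devs = [abs(abs(overlap(j, k)) ** 2 - 1 / (d + 1))
        for j in range(d) for k in range(d) if (j, k) != (0, 0)]
```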
\section{Diagonalizing the $F$ matrix}
\label{sec:diagonalF}
Our construction of the exact solutions $|\psi'_7\rangle$, $ | \psi''_7\rangle$
and
$|\psi'_{19}\rangle$ in Eqs.~(\ref{eq:Dim7AnalyticVectorA}),
(\ref{eq:Dim7AnalyticVectorB}) and~(\ref{eq:Dim19AnalyticVector}) was facilitated
by the fact that in dimensions $7$ and $19$ there exist canonical order $3$
unitaries for which the corresponding $F$ matrix is diagonal. It is natural to
ask in what other dimensions that is true. The theorem proved below
answers that question.
We will need the following lemma:
\begin{lemma}
\label{lem:DiagonalF}
Let $p$ be a prime number $=1\;(\text{mod}\; 3) $, and let $n$ be any
integer $\ge 1$. Then there exists an integer $\alpha$ such that
\begin{equation}
\alpha^2 + \alpha + 1 = 0 \quad (\text{mod}\; p^n)
\end{equation}
\end{lemma}
\begin{proof}
The proof relies heavily on the theory of primitive roots, as described in (for
example) Chapter 3 of Nathanson~\cite{Nathanson} or Chapter~5 of
Rose~\cite{Rose}. Let $\phi$ be Euler's phi, or totient function (so for every
integer $x\ge 1$, $\phi(x)$ is the number of integers $y$ in the range $1\le y<
x$ which are relatively prime to $x$). Then there exists a single positive
integer
$g$ such that for every integer $m\ge 1$ the multiplicative order of $g$,
considered as an element of $\mathbb{Z}_{p^m}$, is $\phi(p^m)=(p-1)p^{m-1}$ (see,
for example, Nathanson~\cite{Nathanson}, p.~93, or Rose~\cite{Rose}, p.~91). The
fact that $p=1\;(\text{mod}\; 3)$ means $p= 3 k + 1$ for some integer
$k\ge 1$. Define
\begin{equation}
\alpha = g^{k p^{n-1} }
\end{equation}
It is then immediate that
\begin{equation}
\alpha^3 = g^{\phi(p^n)} = 1 \quad (\text{mod}\; p^n)
\label{eq:Lem8AlphaCubed}
\end{equation}
It is also true that $\alpha-1$ is relatively prime to $p$. For suppose that
were not the case. It would then follow from the definition of $\alpha$, and
the fact that $g$ is a primitive root \emph{modulo} $p$, that
\begin{equation}
k p^{n-1} = l (p-1) = 3 k l
\end{equation}
for some integer $l\ge 1$. That, however, is impossible since $p$ is not a
multiple of $3$.
The fact that $\alpha - 1$ is relatively prime to $p$ means that there exists an
integer $\beta$ such that
\begin{equation}
\beta (\alpha - 1) = 1 \quad (\text{mod}\; p^n)
\label{eq:Lem8AlphaMinusOne}
\end{equation}
It now follows from Eqs.~(\ref{eq:Lem8AlphaCubed})
and~(\ref{eq:Lem8AlphaMinusOne}) that
\begin{equation}
\alpha^2 + \alpha + 1 = \beta (\alpha^3 -1 ) = 0 \quad (\text{mod}\; p^n)
\end{equation}
\end{proof}
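The construction in the proof is effective and easy to test. The sketch below (our addition; the brute-force primitive-root search is for illustration only) computes $\alpha=g^{kp^{n-1}}$ exactly as in the proof and checks $\alpha^2+\alpha+1=0\ (\text{mod}\ p^n)$ for a few sample pairs with $p=1\ (\text{mod}\ 3)$.

```python
import math

def prime_factors(x):
    # set of prime divisors of x, by trial division
    fs, q = set(), 2
    while q * q <= x:
        while x % q == 0:
            fs.add(q)
            x //= q
        q += 1
    if x > 1:
        fs.add(x)
    return fs

def alpha(p, n):
    """alpha = g^{k p^{n-1}} mod p^n, where g is a primitive root mod p^n
    and p = 3k + 1, following the proof of the lemma."""
    m, phi = p ** n, (p - 1) * p ** (n - 1)
    g = next(g for g in range(2, m)
             if math.gcd(g, m) == 1
             and all(pow(g, phi // q, m) != 1 for q in prime_factors(phi)))
    return pow(g, ((p - 1) // 3) * p ** (n - 1), m)

for p, n in [(7, 1), (13, 1), (19, 2), (31, 1)]:
    a, m = alpha(p, n), p ** n
    assert (a * a + a + 1) % m == 0 and pow(a, 3, m) == 1 and a != 1

print(alpha(7, 2))  # 30, and indeed 30^2 + 30 + 1 = 931 = 19 * 49
```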
We are now in a position to prove our main result:
\begin{theorem}
There exists a canonical order $3$ unitary $[F,\boldsymbol{\chi}] \in
\C(d)/\Cc(d)$ for which the matrix $F$ is diagonal if and only if each of the
following is true
\begin{enumerate}
\item $d$ has at least one prime divisor $=1\; (\text{mod} \; 3)$.
\item $d$ has no prime divisors $=2\; (\text{mod}\; 3)$.
\item $d$ is not divisible by $9$.
\end{enumerate}
\end{theorem}
\begin{remark}
So there exist canonical order $3$ unitaries $[F,\boldsymbol{\chi}]$ for which
$F$ is diagonal in dimension $7, 13, 19, 21, 31, 37, 39, 43, 49, \dots$
\end{remark}
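The three conditions of the theorem are straightforward to enumerate. The following sketch (our addition) reproduces the list of dimensions in the remark, and cross-checks the conditions against a brute-force search for a root of $\alpha^2+\alpha+1=0\ (\text{mod}\ d)$ with $\alpha\neq 1$.

```python
def prime_factors(x):
    # set of prime divisors of x, by trial division
    fs, q = set(), 2
    while q * q <= x:
        while x % q == 0:
            fs.add(q)
            x //= q
        q += 1
    if x > 1:
        fs.add(x)
    return fs

def diagonal_F_exists(d):
    ps = prime_factors(d)
    return (any(p % 3 == 1 for p in ps)      # condition 1
            and all(p % 3 != 2 for p in ps)  # condition 2
            and d % 9 != 0)                  # condition 3

dims = [d for d in range(2, 50) if diagonal_F_exists(d)]
print(dims)  # [7, 13, 19, 21, 31, 37, 39, 43, 49]
```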
\begin{proof}
We begin by proving sufficiency. Suppose that conditions (1), (2) and (3) are all
true. Then we have, for some $t\ge 1$,
\begin{equation}
d= 3_{\vphantom{1}}^{n_0} p_1^{n_1} \dots p_t^{n_t}
\end{equation}
where the $p_i$ are distinct prime numbers $=1\;(\text{mod}\; 3)$, where the
integer $n_0 =0$ or $1$, and where the integers $n_1, \dots ,
n_t$ are all
$\ge 1$. It follows from Lemma~\ref{lem:DiagonalF} that there exist integers
$\alpha_1, \dots , \alpha_t$ such that
\begin{equation}
\alpha_i^2 + \alpha^{\vphantom{2}}_i + 1 = 0 \quad (\text{mod}\; p_i^{n_i})
\end{equation}
for $i=1, \dots, t$. We then use the Chinese remainder theorem (see, for
example, Nathanson~\cite{Nathanson} or Rose~\cite{Rose}) to deduce that there
exists a single integer $\alpha$ such that
\begin{align}
\alpha & = 1 \quad (\text{mod}\; 3)
\\
\intertext{and}
\alpha & = \alpha_i \quad (\text{mod}\; p_i^{n_i})
\end{align}
for $i=1, \dots , t$. We have
\begin{align}
\alpha^2 + \alpha + 1 & = 0 \quad (\text{mod}\; 3)
\\
\intertext{and}
\alpha^2 + \alpha + 1 & = 0 \quad (\text{mod}\; p_i^{n_i})
\end{align}
for $i=1, \dots , t$. Consequently
\begin{equation}
\alpha^2 + \alpha + 1 = 0 \quad (\text{mod}\; d)
\end{equation}
It follows that the matrix
\begin{equation}
F = \begin{pmatrix} \alpha & 0 \\ 0 & -\alpha - 1
\end{pmatrix}
\end{equation}
$\in \SL(2, \mathbb{Z}_{\overline{d}})$
(bearing in mind that $d$ is odd). Moreover, $\Tr(F) = -1 \; (\text{mod} \; d)$.
Since $d \neq 3$ we conclude that $[F, \boldsymbol{\chi}]$ is a canonical order
$3$ unitary for all $\boldsymbol{\chi} \in (\mathbb{Z}_d)^2$. This proves
sufficiency.
To prove necessity suppose
\begin{equation}
F
=
\begin{pmatrix} \alpha & 0 \\ 0 & \delta
\end{pmatrix}
\in
\SL(2,\mathbb{Z}_{\overline{d}} )
\end{equation}
is such that $[F, \boldsymbol{\chi}]$ is canonical order $3$ for some
$\boldsymbol{\chi} \in (\mathbb{Z}_d)^2$. Then $\alpha +
\delta = -1\; (\text{mod} \; d)$, implying
\begin{align}
\alpha^2 + \alpha + 1 & = 0 \quad (\text{mod} \; d)
\label{eq:alphaQuadraticCondition}
\\
\alpha^3 & = 1 \quad (\text{mod} \; d)
\label{eq:alphaCubedCondition}
\end{align}
(in view of the fact that $\alpha \delta = 1 \; (\text{mod} \; \overline{d})$).
To show that $d$ has no prime divisors $=2\; (\text{mod}\; 3)$ assume the
contrary. It would
then follow from Eqs.~(\ref{eq:alphaQuadraticCondition})
and~(\ref{eq:alphaCubedCondition}) that
\begin{align}
\alpha^2 + \alpha + 1 & = 0 \quad (\text{mod} \; p)
\label{eq:alphaQuadraticConditionB}
\\
\alpha^3 & = 1 \quad (\text{mod} \; p)
\label{eq:alphaCubedConditionB}
\end{align}
for some prime number $p=2\; (\text{mod}\; 3)$. Let $r$ be a
primitive root of
$p$ and let
$k\in \mathbb{Z}$ be such that
$0\le k < p-1$ and $\alpha = r^k\;(\text{mod} \; p)$ (see, for example,
Nathanson~\cite{Nathanson} or Rose~\cite{Rose}). Then
Eq.~(\ref{eq:alphaCubedConditionB}) implies
$r^{3 k}=1\;(\text{mod} \; p)$ which, in view of the fact that $r$ is a primitive
root, means $3 k = l( p-1)$ for some $l\in \mathbb{Z}$. The fact that $0\le k <
p-1$ implies $ 0 \le l < 3 $. Taking into account the fact that $p-1$ is not
divisible by $3$ (because $p=2\; (\text{mod}\; 3)$) we deduce that $l=0$.
But then $k= 0$, implying $\alpha=1 \;(\text{mod} \; p)$. In view of
Eq.~(\ref{eq:alphaQuadraticConditionB}) this means $3=0 \;(\text{mod}\; p)$:
which is a contradiction.
To prove that $d$ is not divisible by $9$ we again proceed by \emph{reductio ad
absurdum}. Suppose that $d$ were divisible by $9$. It would then follow
from Eq.~(\ref{eq:alphaQuadraticCondition}) that
\begin{equation}
\alpha^2 + \alpha + 1 = 0 \quad (\text{mod} \; 9)
\label{eq:alphaQuadraticConditionC}
\end{equation}
However, it is easily verified (by explicit enumeration) that this equation has
no solutions.
Finally, suppose that $d$ had no prime divisors $=1 \;(\text{mod}\; 3)$. In view
of the results just proved it would follow that $d=3$. But if $d=3$,
Eq.~(\ref{eq:alphaQuadraticCondition}) implies $\alpha = 1\; (\text{mod} \; 3)$.
Taking into account the requirement $\alpha \delta =\det F = 1 \; (\text{mod} \;
3)$ this means
$\delta = 1 \; (\text{mod} \; 3)$. But then $F$ is the identity matrix,
which contradicts the assumption that $[F,\boldsymbol{\chi}]$ is a canonical
order
$3$ unitary. We conclude that
$d$ must have at least one prime divisor $=1 \;(\text{mod}\; 3)$.
\end{proof}
\section{Conclusion}
RBSC conclude their paper by saying ``a rigorous proof of existence of SIC-POVMs
in all finite dimensions seems tantalizingly close, yet remains somehow
distant''. That well expresses our own perception of the matter. While working
on this problem we have several times had the sense that the crucial discovery
lay just round the corner, only to find that our hopes were illusory. We make
our results public in the hope that they may, nevertheless, contain a few clues,
which will help to take us further forward.
In particular it seems to us that significant progress would be made if it could
be established whether it is in fact true that every GP fiducial vector is an
eigenvector of a canonical order $3$ unitary. Also, if that is the case, one
would like to know exactly
\emph{why} it is the case.
\subsubsection*{Acknowledgements} I am grateful to Chris Fuchs for exciting my
interest in this problem. I am also grateful to him and to Paul Busch, Barry
Rhule and R\"{u}diger Schack for numerous inspiring discussions about POVMs in
particular, and the mysteries of quantum mechanics in general. Finally, I am
grateful to Markus Grassl, Gerhard Zauner and Alexander Vlasov for some
very helpful comments regarding the first version of this paper.
% arXiv:1303.3993
\section{Introduction}\label{intro}
In \cite{k} J. Kinnunen proved the boundedness of the Hardy-Littlewood maximal operator given by
\[Mf(x):=\sup_{r>0} \dashint_{B(x,r)}|f(y)|dy\]
on the Sobolev space $W^{1,p}(\mathbb{R}^n)$ for $1<p \leq \infty.$ Since $Mf$ is never integrable for non-trivial functions this cannot be extended to $p=1.$ However one can ask whether the operator $f \mapsto \nabla Mf$ is bounded from $W^{1,1}(\mathbb{R}^n)$ to $L^1(\mathbb{R}^n)$. This question, asked by Haj\l asz and Onninen in \cite{ho}, was answered positively for $n=1$, in the easier case of the non-centered maximal function by Tanaka, and in the centered case recently by Kurka; see \cite{t,ku}. Indeed, the result of Tanaka was strengthened by J.M. Aldaz and J. P\'erez L\'azaro in \cite{apl2} to show
\begin{equation}\label{e1.1}
\var \widetilde{M}f\leq \var f
\end{equation}
where $\widetilde{M}f$ is the non-centered maximal function, whereas Kurka derived his answer to the question from the analogous result for the centered one:
\begin{equation}\label{e1.2}
\var Mf\leq C \var f.
\end{equation}
Consider the discrete Hardy-Littlewood maximal function
\[Mf(n)=\sup_{r\in \mathbb{Z}^+}\frac{1}{2r+1}\sum_{k=-r}^r |f(n+k)|\]
where $f:\mathbb{Z}\to \mathbb{R}$ and $\mathbb{Z}^+$ denotes the non-negative integers. One can similarly define the non-centered version:
\[\widetilde{M}f(n)=\sup_{r,s\in \mathbb{Z}^+}\frac{1}{r+s+1}\sum_{k=-r}^s |f(n+k)|.\]
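To build intuition (this computational aside is ours, not the paper's), both discrete maximal functions can be evaluated exactly for a finitely supported $f$: once the averaging window covers the whole support, enlarging it only decreases the average, and both maximal functions decrease monotonically to $0$ along the two tails, so the total variation is a finite window sum plus the two boundary values. For $f$ with $f(0)=1$ and $f(n)=0$ otherwise, one gets $Mf(n)=1/(2|n|+1)$, $\widetilde{M}f(n)=1/(|n|+1)$, and both variations telescope to $\var f=2$.

```python
f = {0: 1.0}  # f(0) = 1 and f(n) = 0 otherwise

def Mf(n, R=25):
    # centered discrete maximal function; radii up to R suffice here, since
    # once the window covers the support, enlarging it only lowers the average
    return max(sum(f.get(n + j, 0.0) for j in range(-r, r + 1)) / (2 * r + 1)
               for r in range(R + 1))

def Mtf(n, R=25):
    # non-centered discrete maximal function
    return max(sum(f.get(n + j, 0.0) for j in range(-r, s + 1)) / (r + s + 1)
               for r in range(R + 1) for s in range(R + 1))

def total_var(g, K=25):
    # beyond [-K, K] the maximal function decreases monotonically to 0,
    # so each tail contributes exactly its boundary value to the variation
    vals = [g(n) for n in range(-K, K + 1)]
    return (sum(abs(vals[i + 1] - vals[i]) for i in range(len(vals) - 1))
            + vals[0] + vals[-1])

print(total_var(Mf), total_var(Mtf))  # both are 2 = var f, up to rounding
```

This is consistent with \eqref{e1.1} and with the conjectured constant $C=1$ in \eqref{e1.2} for this particular $f$.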
Although the result of Kinnunen does not meaningfully extend to this setting, the analogue of \eqref{e1.1} was shown by Bober, Carneiro, Hughes and Pierce in \cite{bchp}. In this paper we will extend \eqref{e1.2} to the discrete setting. More precisely, define
\[\var f:= \sum_{k\in \mathbb{Z}}|f(k+1)-f(k)|\]
for a function $f:\mathbb{Z}\to \mathbb{R}$. Then we prove the following.
\begin{theorem}
Let $f:\mathbb{Z}\to \mathbb{R}$ be a function of bounded variation. Then
\[\var Mf\leq C \var f.\]
\end{theorem}
It is conjectured in \cite{bchp} that $C=1$, but as in Kurka's work we are not able to obtain this constant.
We will adapt the ideas developed by Kurka in \cite{ku} to the discrete setting to obtain our result.
The rest of the paper is organized as follows. In the next section we give the definitions necessary for classifying local extrema, and state a lemma that handles the variation arising from one class of local extrema at a time. Using these we explain the main ideas underlying the proof, and then prove this lemma. The last three sections deal with the issue of putting all the classes together.
\section{Preliminaries}
Before going to our definitions we first note that it suffices to prove our theorem for non-negative functions. So let $f\geq 0$ be a function of bounded variation defined on the integers.
\begin{definition}
{\rm\textbf{I.}} A peak is a system of three integers $p<r<q$ satisfying $Mf(p)<Mf(r)$ and $Mf(q)<Mf(r)$.\\
{\rm\textbf{II.}} We define the variation of a peak $\mathbbm{p}=\{p<r<q\}$ by
\[\var \mathbbm{p}=2Mf(r)-Mf(p)-Mf(q). \]
{\rm\textbf{III.}} We define the variation of a system $\mathbbm{P}$ of peaks by
\[\var \mathbbm{P}=\sum_{\mathbbm{p}\in \mathbbm{P}} \var \mathbbm{p}.\]
{\rm\textbf{IV.}} We call a peak $\mathbbm{p}$ essential if
\[\max_{p<k<q} f(k)\leq Mf(r)-\frac{1}{4}\var \mathbbm{p}.\]
{\rm\textbf{V.}} We define the averaging operator of radius $k$, for a non-negative integer $k$, by
\[A_kf(n)=\frac{1}{2k+1}\sum_{j=-k}^kf(n+j)\]
{\rm\textbf{VI.}} We define the radius of an essential peak as
\[\omega(r):=\max\{\omega>0: A_{\omega}f(r)=Mf(r)\}\]
\end{definition}
Clearly the last part of the definition needs further elaboration. We need to know that the set under consideration is not empty and that it contains finitely many elements. These as well as a further property of $\omega(r)$ shall be dealt with below, but first we introduce some further notation. For $x,y \in \mathbb{Z}$ the notation $[x,y]$ will stand for integers $n$ satisfying $x \leq n \leq y$, and we will call $[x,y]$ an interval. By length of an interval $[x,y]$ we will mean the quantity $y-x.$ We define average of a function $f$ on an interval $[x,y]$ by
\[A_{x,y}f=\frac{1}{y-x+1}\sum_{k=x}^yf(k).\]
Now we state and prove the lemma clarifying the last part of Definition 1.
\begin{lemma}
Let $\mathbbm{p}=\{p<r<q\}$ be an essential peak. Then $\omega(r)$ is well defined and satisfies
\[r-\omega(r)<p<q<r+\omega(r).\]
\end{lemma}
\begin{proof}
First let us see that our set is nonempty. Since $\mathbbm{p}$ is an essential peak we have
$f(r)<Mf(r)$. Thus for our set to be empty we must have
$A_{\omega}f(r)<Mf(r)$ for every $\omega\geq0.$
But then, by the definition of the maximal function, there must be a strictly increasing sequence $\{\omega_j\}_{j\in \mathbb{N}}$ of natural numbers such that
\[\lim_{j\rightarrow \infty}A_{\omega_j}f(r)=Mf(r).\]
Since $f$ is bounded (being of bounded variation) and the averaging windows centered at $p$ and at $r$ differ in at most $2(r-p)$ of their $2\omega_j+1$ terms, we have $\lim_{j\rightarrow \infty}A_{\omega_j}f(p)=\lim_{j\rightarrow \infty}A_{\omega_j}f(r)$, which yields the contradiction
\[Mf(p)\geq \lim_{j\rightarrow \infty}A_{\omega_j}f(p)=Mf(r)>Mf(p).\]
The same argument also shows that our set cannot contain infinitely many elements, hence $\omega(r)$ is well defined.
Now note that $f(p)\leq Mf(p) <Mf(r)$ and $f(q) \leq Mf(q)<Mf(r)$, thus $p \leq r-\omega(r)<r+\omega(r)\leq q$ would imply $A_{\omega(r)}f(r)<Mf(r)$; hence at least one of $r-\omega(r)<p$, $q<r+\omega(r)$ is true. We assume the first is true and the second false; the converse case can be dealt with similarly. We have $A_{p+\omega(r)-r}f(p) \leq Mf(p)<Mf(r)$, which means
$ A_{2p+\omega(r)-r+1,r+\omega(r)}f \geq Mf(r).$
But $p<2p+\omega(r)-r+1<r+\omega(r)\leq q$ means
$ A_{2p+\omega(r)-r+1,r+\omega(r)}f< Mf(r).$
Hence we are done.
\end{proof}
The following is the lemma that handles variation arising from a specific class of local extrema. As shall be explained, it plays a fundamental role in our proof.
\begin{lemma}
Let $[x,y]$ be an interval of length $L$ with $L$ an even integer. Let $\mathbbm{p}_i=\{p_i<r_i<q_i\}$ be a system of essential peaks satisfying
\[x \leq r_1<q_1 \leq p_2<r_2 <q_2 \leq \ldots \leq p_{m-1} < r_{m-1}<q_{m-1} \leq p_m< r_m \leq y\]
and $32L<\omega(r_i)\leq 64L$ for $1\leq i \leq m.$ Then there exist $s<u<v<t$ such that
\[x-64L \leq s,\ \ t \leq y+64L, \ \ u-s \geq 4L, \ \ v-u=L, \ \ t-v \geq 4L\]
\[\min\{f(s),f(t)\}- A_{u,v}f \geq \frac{1}{12} \sum_{i=1}^m \var \mathbbm{p}_i.\]
\end{lemma}
This lemma says that if, in a system of essential peaks, all the peaks lie in an interval of length comparable to all of their radii, then the variation of the system can be bounded using values of the function at nearby points. This immediately implies that systems located in sufficiently distant intervals can easily be put together. Hence even if we do not require the peaks to lie in an interval of a certain length, the system can be broken into subsystems using a finite covering of the integers by equally spaced intervals, and then dominated by the variation of the function. As we will see in Section 3, it is very easy to estimate the variation of non-essential peaks, so proving this lemma reduces the problem to taking care of systems with essential peaks of different length scales.
\begin{proof}
We shall decompose the system into three parts. If we take the first and the last peaks out, there remains a system which lies entirely in the interval. Thus it suffices to prove the lemma for single peaks and for systems lying in the interval, with the constant $1/4$ instead of $1/12$ on the right hand side. We first prove the lemma for a single peak, so let $\mathbbm{p}=\{p<r<q\}$ denote our system. Our first step is to find $s,t$ satisfying
\[f(s)\geq Mf(r), \ \ \ \ x-64L \leq s \leq 2q-(r+\omega(r))\]
\[f(t)\geq Mf(r), \ \ \ \ 2p-(r-\omega(r)) \leq t \leq y+64L\]
We shall utilize the same ideas as in Lemma 1, and since the same procedure deals with both, we shall find an $s$ only.
\[A_{\omega(r)}f(r)=Mf(r), \ \ \ A_{r+\omega(r)-q}f(q)\leq Mf(q) < Mf(r)\]
thus
\[A_{r-\omega(r),2q-(r+\omega(r))-1}f\geq Mf(r)\]
Since $x-64L \leq r-\omega(r)$, there must be an $s$ with the desired properties.
To locate suitable $u,v$ we shall consider two subcases:\\
\textbf{I.} If $q-p < 12L$ then
\[s < 2q-(r+ \omega(r))< 2p +24L-r-32L < p-8L.\]
Similarly
\[t > 2p-(r-\omega(r)) >2q-24L-r+\omega(r) > q+8L. \]
So we set $u=p-L/2, \ v=p+L/2$ if $Mf(p) \leq Mf(q)$, and $u=q-L/2,\ v=q+L/2$ otherwise. This choice clearly satisfies distance requirements, and
\[\min\{f(s), f(t)\}-A_{u,v}f \geq Mf(r)-\min\{Mf(p), Mf(q)\}\geq \frac{1}{2}\var \mathbbm{p} \]
\textbf{II.} Let $q-p \geq 12L$. Since $\mathbbm{p}$ is an essential peak
\[f(s), f(t) \geq Mf(r)> \max_{p<k<q}f(k).\]
Thus $s\leq p<q \leq t.$ Choose
\[u=\left\lfloor \frac{p+q}{2} \right\rfloor-L/2, \ \ v=\left\lfloor \frac{p+q}{2} \right\rfloor+L/2.\]
Then
\[u-s\geq \frac{p+q}{2}-1-L/2-p \geq \frac{q-p}{2}-L/2-1 \geq 5L-1 \geq 4L\]
\[t-v \geq q-\frac{p+q}{2}-L/2 \geq 5L\]
and
\[\min\{f(s), f(t)\}-A_{u,v}f \geq Mf(r)- \max_{p<k<q}{f(k)} \geq \frac{1}{4}\var \mathbbm{p}.\]
Now assume that our peaks are entirely contained in $[x,y]$, so $x\leq p_1, q_{m} \leq y.$ We will work with a modification of our system: set
\begin{equation*}
\begin{aligned}
e_i&=p_i, \ \ \ \ \ \ i=1 \ \text{or} \ Mf(p_i)\leq Mf(q_{i-1})\\
e_i&=q_{i-1}, \ \ \ i=m+1 \ \text{or} \ Mf(p_i)> Mf(q_{i-1})
\end{aligned}
\end{equation*}
and
\[\widetilde{\mathbbm{p}_i}=\{e_i<r_i<e_{i+1}\}, \ \ 1 \leq i \leq m. \]
We will show the existence of $s_i, t_i$ for $1\leq i \leq m$ that satisfy
\[f(s_i) \geq Mf(e_{i+1})+ \frac{Mf(r_i)-Mf(e_{i+1})}{e_{i+1}-r_i}\cdot \omega(r_i), \ \ x-64L \leq s_i \leq x-30L,\]
\[f(t_i) \geq Mf(e_{i})+ \frac{Mf(r_i)-Mf(e_{i})}{r_i-e_i}\cdot \omega(r_i), \ \ y+30L \leq t_i \leq y+64L.\]
We will find $s_i$; the $t_i$ are found similarly. Writing the averages as sums,
\[\sum_{j=-\omega(r_i)}^{\omega(r_i)}f(r_i+j)=(2\omega(r_i)+1)Mf(r_i),\]
\[\sum_{j=-(r_i+\omega(r_i)-e_{i+1})}^{r_i+\omega(r_i)-e_{i+1}}f(e_{i+1}+j)\leq \big(2(r_i+\omega(r_i)-e_{i+1})+1\big)Mf(e_{i+1}).\]
Subtracting, the remaining $2(e_{i+1}-r_i)$ terms satisfy
\[\sum_{k=r_i-\omega(r_i)}^{2e_{i+1}-r_i-\omega(r_i)-1}f(k)\geq (2\omega(r_i)+1)\big(Mf(r_i)-Mf(e_{i+1})\big)+2(e_{i+1}-r_i)Mf(e_{i+1}),\]
so that
\[A_{r_i-\omega(r_i),\,2e_{i+1}-r_i-\omega(r_i)-1}f\geq Mf(e_{i+1})+\frac{Mf(r_i)-Mf(e_{i+1})}{e_{i+1}-r_i}\cdot \omega(r_i).\]
Since $x-64L \leq r_i-\omega(r_i)$ and $2e_{i+1}-r_i-\omega(r_i)-1 \leq 2y-x-32L =x-30L$,
there exists an $s_i$ with the asserted properties.
To locate $u, v$ we consider two cases.\\
\textbf{I.} Let
\[|Mf(e_{m+1})-Mf(e_1)|>\frac{1}{2}\sum_{i=1}^{m}\var\widetilde{\mathbbm{p_i}}.\]
Let us also assume that $Mf(e_{m+1})>Mf(e_1)$; the other case is similar. Since $A_{u,v}f\leq Mf(e_1)$ for $u=e_1-L/2,v=e_1+L/2$, if we can show that $f(s_i),f(t_j) \geq Mf(e_{m+1})$ for some $i,j$, then choosing $s=s_i,t=t_j$ with these $u,v$ will do. From our choice of $s_m$ we have $f(s_m)\geq Mf(e_{m+1})$ and
\begin{equation*}\begin{aligned}
f(t_m)\geq Mf(e_m)+\frac{Mf(r_m)-Mf(e_{m})}{r_m-e_m}\cdot (r_m-e_m)&=Mf(r_m)\\
& \geq Mf(e_{m+1})
\end{aligned}
\end{equation*}
\textbf{II.} Let
\[|Mf(e_{m+1})-Mf(e_1)| \leq \frac{1}{2}\sum_{i=1}^{m}\var\widetilde{\mathbbm{p_i}}.\]
We know that
\[Mf(e_{m+1})-Mf(e_{1}) = \sum _{i=1}^{m} \big( Mf(r_{i})-Mf(e_{i}) \big) - \big( Mf(r_{i})-Mf(e_{i+1}) \big) \]
and that
\[ \sum _{i=1}^{m} \var \widetilde{\mathbbm{p}}_{i} = \sum _{i=1}^{m} \big( Mf(r_{i})-Mf(e_{i}) \big) + \big( Mf(r_{i})-Mf(e_{i+1}) \big). \]
Thus we have
\[\sum _{i=1}^{m} \big(Mf(r_{i})-Mf(e_{i})\big), \ \sum _{i=1}^{m} \big(Mf(r_{i})-Mf(e_{i+1})\big) \geq \frac{1}{4}\sum _{i=1}^{m} \var \widetilde{\mathbbm{p}}_{i}.\]
We choose $i_0,j_0$ to be the indices that maximize the expressions
\[\frac{Mf(r_{i})-Mf(e_{i+1})}{e_{i+1}-r_{i}}, \ \ \frac{Mf(r_{j})-Mf(e_{j})}{r_{j}-e_{j}}. \]
Then we have
\begin{equation*}
\begin{aligned}
f(s_{i_0})-Mf(e_{i_0+1})&\geq \frac{Mf(r_{i_0})-Mf(e_{i_0+1})}{e_{i_0+1}-r_{i_0}}\cdot \omega(r_{i_0}) \\ &\geq \frac{Mf(r_{i_0})-Mf(e_{i_0+1})}{e_{i_0+1}-r_{i_0}}\cdot 32L \\ &\geq \frac{Mf(r_{i_0})-Mf(e_{i_0+1})}{e_{i_0+1}-r_{i_0}}\cdot32 \sum _{i=1}^{m} \big(e_{i+1}-r_{i}\big) \\
&\geq 32\sum _{i=1}^{m} \frac{Mf(r_{i})-Mf(e_{i+1})}{e_{i+1}-r_{i}} \cdot (e_{i+1}-r_{i}) \\
& = 32\sum _{i=1}^{m} \big(Mf(r_{i})-Mf(e_{i+1})\big) \\
& \geq 8 \sum _{i=1}^{m} \var \widetilde{\mathbbm{p}}_{i},
\end{aligned}
\end{equation*}
where the third inequality uses $\sum_{i=1}^{m}(e_{i+1}-r_i)\leq L$ (the intervals $(r_i,e_{i+1})$ being disjoint subintervals of $[x,y]$) and the fourth uses the maximality of $i_0$.
The same process applies to $f(t_{j_0})-Mf(e_{j_0})$. So set $s=s_{i_0},t=t_{j_0},u=e_{j_0}-L/2,v=e_{j_0}+L/2$ and note that
\[|Mf(e_{j_0})-Mf(e_{i_0+1})| \leq \sum _{i=1}^{m} \var \widetilde{\mathbbm{p}}_{i}.\]
Hence this choice satisfies the desired properties.
\end{proof}
\section{Bounding systems containing different scales}
We first fix a system
\[ a_1<b_1<a_2<b_2<\ldots<a_{\sigma}<b_{\sigma}<a_{\sigma+1}\]
satisfying $Mf(a_i)<Mf(b_i)$ and $Mf(a_{i+1})<Mf(b_i)$ for $1 \leq i\leq \sigma.$ We will use $\mathbbm{P}$ to denote the collection of all peaks $\mathbbm{p}_i=\{a_i<b_i<{a_{i+1}}\}$
arising from this system. The letter $\mathbbm{E}$ will stand for those $\mathbbm{p}_i$ that are essential. We further partition the essential peaks as follows: for $n >5, \ k \in \mathbb{Z}$ we define
\[\mathbbm{E}^n_k=\{\mathbbm{p}_i \in \mathbbm{E}:2^{n-1}<\omega(b_i)\leq2^n, \ k2^{n-5}<b_i \leq (k+1)2^{n-5}\},\]
and we let $\mathbbm{E}'$ denote all essential peaks not belonging to one of the above collections.
We will first bound the variation of the non-essential peaks, and then describe how to handle $\mathbbm{E}'$. After these two relatively easy tasks we will set ourselves to bounding the variation of the remaining peaks.
\begin{lemma}
We have the inequality
\[\var ({\mathbbm{P}\setminus \mathbbm{E}}) \leq 2 \var f.\]
\end{lemma}
\begin{proof} Since each $\mathbbm{p}_i \in \mathbbm{P}\setminus \mathbbm{E} $ is a non-essential peak, we have a point $x_i \in [a_i+1, a_{i+1}-1]$ satisfying
\[f(x_i) \geq Mf(b_i) -\frac{1}{4} \var \mathbbm{p}_i. \]
Then we have
\begin{equation*}
\begin{aligned}
|f(x_i)-f(a_i)|+ |f(x_{i})-f(a_{i+1})| & \geq 2f(x_i)-f(a_i)-f(a_{i+1}) \\
& \geq 2(Mf(b_i) -\frac{1}{4} \var \mathbbm{p}_i)- Mf(a_i)-Mf(a_{i+1})\\
&= \frac{1}{2}\var \mathbbm{p}_i.
\end{aligned}
\end{equation*}
Summing over $i$, and noting that the intervals $[a_i,a_{i+1}]$ have disjoint interiors, gives the assertion.
\end{proof}
\begin{lemma}
We have
\[\var \mathbbm{E}' \leq 1200 \cdot \var f.\]
\end{lemma}
\begin{proof}
We partition the integers into the subsets $\mathbb{Z}_l=300\mathbb{Z}+l$ for $ 0\leq l <300,$
and similarly partition $\mathbbm{E}'$ into
\[\mathbbm{E}'_l=\{ \mathbbm{p}_i=\{a_i<b_i<a_{i+1}\}:\mathbbm{p}_i \in \mathbbm{E}', \ b_i \in \mathbb{Z}_l\}.\]
We apply to $\mathbbm{p}_i$ the same procedure as in the proof of Lemma 2 for a single peak to find $s_i<u_i<t_i$ satisfying $b_i-32 \leq s_i, t_i \leq b_i+32$ and
\[\min\{f(s_i),f(t_i)\}-f(u_i)\geq \frac{1}{4}\var\mathbbm{p}_i.\]
Using these points we have
\begin{equation*}
\begin{aligned}
\var \mathbbm{E}'_l = \sum_{\mathbbm{p}_i\in\mathbbm{E}'_l} \var \mathbbm{p}_i \leq 4\cdot \sum_{\{i:\mathbbm{p}_i\in\mathbbm{E}'_l\}} \big(|f(s_i)-f(u_i)|+|f(t_i)-f(u_i)|\big) \leq 4\cdot \var f
\end{aligned}
\end{equation*}
since the peaks in $\mathbbm{E}'_l$ are sufficiently distant.
Thus
\[\var \mathbbm{E}' = \sum_l \var \mathbbm{E}'_l \leq 300\cdot4\cdot\var f=1200 \cdot \var f.\]
\end{proof}
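The counting in the last lemma relies on the fact that the tops $b_i$ in a fixed residue class modulo $300$ are at least $300$ apart, so the windows $[b_i-32,\,b_i+32]$ produced by Lemma 2 are pairwise disjoint. A minimal numeric sketch of this disjointness (in Python; the values of the $b_i$ are hypothetical, chosen only for illustration):

```python
# Tops b_i in a fixed residue class mod 300 are >= 300 apart, so the
# windows [b_i - 32, b_i + 32] used in the proof cannot overlap.
# The values below are hypothetical, chosen only for illustration.
def windows_disjoint(tops, radius=32):
    """Check that the intervals [b - radius, b + radius] are pairwise disjoint."""
    tops = sorted(tops)
    return all(b2 - b1 > 2 * radius for b1, b2 in zip(tops, tops[1:]))

tops = [7, 307, 907, 1507]                  # all congruent to 7 mod 300
assert all(b % 300 == 7 for b in tops)
assert windows_disjoint(tops)               # gaps >= 300 > 2 * 32
```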
To handle the remaining peaks we need to classify further. The next lemma will serve to this purpose.
\begin{lemma}
Let $\mathbbm{E}^n_k$ be non-empty for some $n\geq 6, k\in \mathbbm{Z}$. Then one of the following is true:\\
{\rm\textbf{A.}} There exists
$s<\alpha<\beta<\gamma< \delta<t$ satisfying
\[(k-64)2^{n-5} \leq s, \ \ \ t \leq(k+65)2^{n-5},\]
\[ \alpha-s \geq 2^{n-5},\ \ \ \beta-\alpha \geq 2^{n-5}, \ \ \ \gamma-\beta \geq2^{n-4}, \ \ \ \delta-\gamma\geq 2^{n-5}, \ \ \ t-\delta \geq 2^{n-5}, \]
\[ \min\{f(s),f(t)\}-\max \{A_{\alpha,\beta}f, A_{\gamma,\delta}f\} \geq \frac{1}{24}\var \mathbbm{E}^n_k. \]
{\rm \textbf{B.}} There exists $\alpha<\beta<u<v<\gamma<\delta$ satisfying
\[(k-64)2^n \leq \alpha, \ \ \ \delta \leq(k+65)2^n,\]
\[\beta-\alpha \geq 2^{n-5}, \ \ \ u-\beta\geq 2^{n-5}, \ \ \ v-u\geq2^{n-5}, \ \ \ \gamma-v\geq 2^{n-5}, \ \ \ \delta-\gamma\geq 2^{n-5} \]
\[\min\{A_{\alpha,\beta}f, A_{\gamma,\delta}f \}-A_{u,v}f \geq \frac{1}{24}\var\mathbbm{E}^n_k.\]
\end{lemma}
\begin{proof}
We have by Lemma 2 points $s<u<v<t$ for the peaks of $\mathbbm{E}^n_k$ and the interval $[k2^{n-5},(k+1)2^{n-5}].$ We then define
\[\alpha=u-3\cdot 2^{n-5}, \ \ \ \beta=u-2\cdot 2^{n-5}, \ \ \ \gamma=v+2\cdot 2^{n-5}, \ \ \ \delta= v+3\cdot 2^{n-5}.\]
If the inequality
\[\min\{A_{\alpha,\beta}f, A_{\gamma,\delta}f \} \geq \frac{1}{2}\min\{f(s),f(t)\}+\frac{1}{2}A_{u,v}f\]
is satisfied then we just need to subtract $A_{u,v}f$ from both sides and use Lemma 2
to see that \textbf{B} is satisfied. Assume it does not hold. We first assume $A_{\alpha,\beta}f=\min\{A_{\alpha,\beta}f, A_{\gamma,\delta}f \}$. In this case
\[\max\{A_{\alpha,\beta}f,A_{u,v}f\}+\frac{1}{2}\min\{f(s),f(t)\}\leq \min\{f(s),f(t)\}+\frac{1}{2}A_{u,v}f. \] Applying Lemma 2 from here yields the desired inequality if we keep $\alpha, \beta$ the same, and set $\gamma=u,\delta=v$. For the case $A_{\gamma,\delta}f=\min\{A_{\alpha,\beta}f, A_{\gamma,\delta}f \}$ all we need is to keep $\gamma,\delta$ the same and set $\alpha=u,\beta=v.$
\end{proof}
Thus we define $\mathcal{A}$ to be the union of the $\mathbbm{E}_k^n$ satisfying \textbf{A}, and $\mathcal{B}$ as the union of those satisfying \textbf{B}. We further define $\mathcal{A}_K^n$ to be the union of the $\mathbbm{E}_k^n$ in $\mathcal{A}$ for which $k\equiv K \pmod{300}$, and $\mathcal{B}_K^n$ is defined analogously. Notice that since we took a finite number of peaks, there exists $n_A$ representing the largest $n$ for which $\mathcal{A}_K^n$ is non-empty for at least one $K$. Similarly we have an $n_B.$
In the next two sections we shall deal respectively with variation arising from peaks of $\mathcal{A}$ and $\mathcal{B}$.
\section{The Variation of peaks of $\mathcal{A}$ }
The following is the main proposition we want to prove in this section.
\begin{proposition}
Let $0\leq N \leq 11$, $0\leq K\leq 299$, and let $L_N$ denote $2^{N-6}$ if $N\geq 6$ and $2^{N+6}$ if $N\leq5.$ There exists a system
\[x_1<u_1<v_1<x_2<u_2<v_2<x_3<\ldots<x_m<u_m<v_m<x_{m+1}\]
with properties
\[u_i-x_i\geq L_{N}, \ \ v_i-u_i\geq L_N, \ \ x_{i+1}-v_{i}\geq L_N, \ \ 1 \leq i\leq m, \]
\[\sum_{i=1}^m\big(f(x_i)+f(x_{i+1})-2A_{u_i,v_i}f\big)\geq \frac{1}{60}\sum_{n\equiv N \pmod{12}}\var{\mathcal{A}_{K}^n}.\]
\end{proposition}
We shall prove this inductively. Let $n_N\equiv N \pmod{12}$ denote the maximum integer $n$ for which $\mathcal{A}_{K}^n$ is non-empty. We clearly have a system as described above that bounds the variation of $\mathcal{A}_{K}^{n_N}$ with $2^{n_N-5}$ instead of $L_N$ (this is true only if $n_N>N$ of course, but if $n_N=N$ we directly obtain the desired system using Lemma 2). Now assume we have a system that bounds the sum of variations coming from all classes $\mathcal{A}_K^n$ for $n>n_0$, where $\mathcal{A}_K^{n_0}$ is non-empty, and that has $2^{n_0+12}$ instead of $L_N$. If we can modify this system so that it bounds all classes for $n\geq n_0$ with $2^{n_0}$ replacing $L_N$, a finite iteration would give our proposition.
Thus we assume there exists a system
\begin{equation}\label{eqs}
x_1<u_1<v_1<x_2<u_2<v_2<x_3<\ldots<x_{m_0}<u_{m_0}<v_{m_0}<x_{m_0+1}\end{equation}
that satisfies conditions given by the inductive hypothesis above.
The class $\mathcal{A}_K^{n_0}$ is a union of a finite number of systems of peaks $\mathbbm{E}_k^{n_0}$; we will describe how to incorporate these into the existing system. Pick one such $\mathbbm{E}_k^{n_0}$ and consider $s<\alpha<\beta<\gamma<\delta<t$
coming from the alternative \textbf{A} of Lemma 5 for it. We will modify \eqref{eqs} according to its relation with the interval $[s,t]$.
\textbf{I.} First assume that for all $1\leq i\leq m_0$ we have $\text{dist}([u_i,v_i],[s,t])\geq2^{n-5}.$ In this case one of the intervals
\[(-\infty,u_1],\ [v_1,u_2],\ \ldots,\ [v_{m_0-1},u_{m_0}],\ [v_{m_0},\infty)\]
must contain $[s,t]$. This interval also contains a unique $x_i$, $1\leq i \leq m_0+1$, which must satisfy either $\text{dist}(x_i,\beta)\geq \text{dist}(x_i,\gamma)$ or $\text{dist}(x_i,\beta)< \text{dist}(x_i,\gamma)$. If the first happens we take $s,\alpha, \beta$, otherwise we take $\gamma,\delta,t$, and add them to our system. The new system is easily seen to
satisfy desired properties.
\textbf{II.} There exists an $i$ with $\text{dist}([u_i,v_i],[s,t])<2^{n-5}$. We first note that this $i$ is unique. Observe that either $(k-150)2^{n-5}\in [u_i,v_i]$ or $(k+150)2^{n-5}\in [u_i,v_i]$; we will assume the first, as the second is handled similarly. Let $g\equiv h\equiv k \pmod{300}$ be such that
$(g-155)2^{n-5} \leq x' <(g+145)2^{n-5}$ and $(h-155)2^{n-5} \leq y' <(h+145)2^{n-5}.$ Notice that these conditions determine $g,h$ uniquely. Using these we partition our interval into
\[[u_i,(g+150)2^{n-5}-1], \ [(g+150)2^{n-5},(g+450)2^{n-5}-1], \ldots, [(h-150)2^{n-5},v_i].\]
One of these subintervals contains $(k-150)2^{n-5}$, which will be denoted by $I$, and the average of $f$ on one of these subintervals is less than or equal to the average over $[u_i,v_i]$; we will call this subinterval $[u_i',v_i'].$ If $I$ is not the same as $[u_i',v_i']$, then this latter interval is distant enough from $[s,t]$, and replacing $[u_i,v_i]$ by $[u_i',v_i']$ and choosing appropriate ones out of $\{s,\alpha,\beta,\gamma,\delta,t\}$
will do.
If they are the same then we have to consider two different cases. Either
there exists $[c,d]\subset[u_i',v_i']$ with $d-c\geq 2^{n-5}$ such
that
\[A_{c,d}f \leq A_{u_i,v_i}f-\frac{1}{120}\var \mathbbm{E}_k^{n_0},\]
or we have a system
$c<d<y<c'<d'$ with $[c,d']\subset[u_i',u_i'+300\cdot2^{n-5}]$ and
\[d-c\geq 2^{n-5}, \ \ \ y-d \geq 2^{n-5}, \ \ \ c'-y\geq 2^{n-5}, \ \ \ d'-c'\geq 2^{n-5}, \]
such that
\[A_{c,d}f +
A_{c',d'}f-f(y) \leq A_{u_i,v_i}f-\frac{1}{120}\var \mathbbm{E}_k^{n_0}.\]
In both cases what to do is clear: in the first case $[u_i,v_i]$ is replaced by $[c,d]$, while in the second we replace $[u_i,v_i]$ by the two intervals $[c,d],[c',d']$ and the point $y$ between them. But we must show that one of these alternatives holds. We set
\[w_i'=u_i'+\left\lceil \frac{v_i'-u_i'}{5}\right\rceil \]
and observe that both $[u_i',w_i'], \ [w_i'+1,v_i']$ are longer than $2^{n-5}$. We have either
\begin{equation}\label{eqd} A_{w_i'+1,v_i'}f\leq A_{u_i',v_i'}f-\frac{1}{120}\var\mathbbm{E}_k^{n_0} \ \ \text{or} \ \ A_{u_i',w_i'}f \leq A_{u_i',v_i'}f+\frac{4}{120}\var\mathbbm{E}_k^{n_0},
\end{equation}
and if the first holds we just set $c=w_i'+1,d=v_i'$ to obtain the first alternative, whereas if the second holds we let $c=u_i',d=w_i',y=s,c'=\alpha,d'=\beta.$ That $y-d\geq2^{n-5}$ follows from the definitions of $u_i',w_i'.$
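The dichotomy \eqref{eqd} is simply the statement that $A_{u_i',v_i'}f$ is a weighted average of $A_{u_i',w_i'}f$ and $A_{w_i'+1,v_i'}f$, with the weight of the left piece at least $1/5$ by the choice of $w_i'$. A quick numerical check of this averaging argument (Python; the sequence and the stand-in for $\var\mathbbm{E}_k^{n_0}$ are hypothetical):

```python
import math

# A(u',v') is a weighted average of A(u',w') and A(w'+1,v'), where w'
# sits at roughly the 1/5 point; hence if the right average is not
# small (first alternative fails), the left one cannot be too large.
def avg(f, a, b):
    """Average of f over the integer interval [a, b]."""
    return sum(f[a:b + 1]) / (b - a + 1)

f = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0, 3.0]  # hypothetical
u, v = 0, len(f) - 1
w = u + math.ceil((v - u) / 5)          # split point, as in the text
V = 1.0                                 # stand-in for var E_k^n
A = avg(f, u, v)
first = avg(f, w + 1, v) <= A - V / 120
second = avg(f, u, w) <= A + 4 * V / 120
assert first or second                  # one alternative always holds
```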
We have thus incorporated the first $\mathbbm{E}^{n_0}_k$ into the system. For the rest we apply a similar procedure, but we also have to deal with previously made changes, which shorten the distance between successive points from $2^{n+7}$ to $2^{n-5}$. Let us incorporate a second system $\mathbbm{E}^{n_0}_l$. Let our modified system be
\begin{equation}\label{eqs2}
x_1<u_1<v_1<x_2<u_2<v_2<x_3<\ldots<x_{m_{0,1}}<u_{m_{0,1}}<v_{m_{0,1}}<x_{m_{0,1}+1}
\end{equation}
and consider $s'<\alpha'<\beta'<\gamma'<\delta'<t'$ coming from Lemma 5 for $\mathbbm{E}^{n_0}_l$. We again have the same two alternatives, which this time we will call \textbf{I'}, \textbf{II'}, and if \textbf{I'} is the case, exactly the same ideas suffice. If on the other hand $\text{dist}([u_i,v_i],[s',t'])\leq 2^{n-5}$ holds for some $1\leq i\leq m_{0,1}$, then some additional consideration is needed. First we need to see that this $[u_i,v_i]$ is unique. Since $k\neq l$ we have $\text{dist}([s',t'],[(k-150)2^{n-5},(k+150)2^{n-5}])\geq 2^{n-5}$, so such a $[u_i,v_i]$ can be either an unmodified interval, or only the first of the three types of intervals arising from \textbf{II}. If $[u_i,v_i]$ is close to an unmodified interval, it is sufficiently distant from all other unmodified intervals and from the intervals arising from $\textbf{II}.$
Similarly, being close to an interval arising from $\textbf{II}$ guarantees distance from all unmodified intervals. Thus $[u_i,v_i]$ is unique. After this, the methods described in \textbf{II} handle both cases. Clearly these considerations suffice to add the remaining systems, and after a finite number of steps we will have $\mathcal{A}^{n_0}_K$ incorporated.
Thus the proof of our proposition is complete. From this proposition we easily deduce that \[ \var \mathcal{A} \leq 120\cdot2^{12}\cdot 300 \cdot \var f.\]
\section{The variation of peaks of $\mathcal{B}$}
The arguments of this section are largely analogous to those of section 4. We state the main proposition of this section.
\begin{proposition}
Let $0\leq N \leq 11$, $0\leq K\leq 299$, and let $L_N$ denote $2^{N-6}$ if $N\geq 6$ and $2^{N+6}$ if $N\leq5.$ There exists a system
\[x_1<y_1<u_1<v_1<x_2<y_2<u_2<v_2<\ldots<u_m<v_m<x_{m+1}<y_{m+1}\]
with properties
\[y_i-x_i\geq L_N, \ \ 1 \leq i\leq m+1, \]
\[u_i-y_i\geq L_{N}, \ \ v_i-u_i\geq L_N, \ \ x_{i+1}-v_{i}\geq L_N \ \ 1 \leq i\leq m, \]
\[\sum_{i=1}^m\big(A_{x_i,y_i}f+A_{x_{i+1},y_{i+1}}f-2A_{u_i,v_i}f\big)\geq \frac{1}{60}\sum_{n\equiv N \pmod{12}}\var{\mathcal{B}_{K}^n}.\]
\end{proposition}
We shall again utilize induction. Assume we have a system
\[x_1<y_1<u_1<v_1<x_2<y_2<u_2<v_2<\ldots<u_{m_0}<v_{m_0}<x_{m_0+1}<y_{m_0+1}\]
that bounds the variation of all classes $\mathcal{B}_K^n$ for $n>n_0$ where $\mathcal{B}_K^{n_0}$ is non-empty, and that has $2^{n_0+12}$ instead of $L_N$.
Let $\mathbbm{E}_k^{n_0}$ be one of the subsystems comprising $\mathcal{B}_K^{n_0}$, and
consider $\alpha<\beta<u<v<\gamma<\delta$
coming from the alternative \textbf{B} of Lemma 5 for it. We again will investigate the relation of our system with the interval $[\alpha,\delta]$, this time however, we will have three cases.
\textbf{I.} First assume that for all $1\leq i\leq m_0$ we have $\text{dist}([u_i,v_i],[\alpha,\delta])\geq2^{n-5}$, and for all $1\leq i\leq m_0+1$ we have $\text{dist}([x_i,y_i],[\alpha,\delta])\geq2^{n-5}$. This case is easy: we just choose two appropriate ones out of the three intervals $[\alpha,\beta],[u,v],[\gamma,\delta]$, and incorporate them into our system.
\textbf{II.} There exists an $i$, $1\leq i\leq m_0$, such that $\text{dist}([u_i,v_i],[\alpha,\delta])< 2^{n-5}.$ Clearly this $i$ is unique; moreover $[\alpha,\delta]$ is distant from the $[x_i,y_i]$ type intervals. This case will be dealt with in the same way as case \textbf{II} of section 4. We divide $[u_i,v_i]$ into subintervals and pick $I$, $[u_i',v_i']$ exactly in the same way. The case when they are not the same is easy and handled as before, whereas if they are the same either
there exists $[c,d]\subset[u_i',v_i']$ with $d-c\geq 2^{n-5}$ such
that
\begin{equation}\label{a2}A_{c,d}f \leq A_{u_i,v_i}f-\frac{1}{120}\var \mathbbm{E}_k^{n_0},
\end{equation}
or we have a system
$c<d<x<y<c'<d'$ with $[c,d']\subset[u_i',u_i'+300\cdot2^{n-5}]$ and
\[d-c\geq 2^{n-5}, \ \ \ x-d \geq 2^{n-5}, \ \ \ y-x\geq 2^{n-5},\ \ \ c'-y\geq 2^{n-5}, \ \ \ d'-c'\geq 2^{n-5}, \]
such that
\begin{equation}\label{b2}A_{c,d}f +
A_{c',d'}f-A_{x,y}f \leq A_{u_i,v_i}f-\frac{1}{120}\var \mathbbm{E}_k^{n_0}.
\end{equation}
In each case what to do is clear; we will show that one of these holds. Defining $w'_i$ as before we have the dichotomy given in \eqref{eqd}. If the first alternative of this dichotomy holds we set $c=w_i'+1,d=v_i'$ and get \eqref{a2}, while if the second holds we set
\begin{equation}\label{choice}c=u_i', \ \ d=w_i', \ \ x=u,\ \ y=v, \ \ c'=\gamma, \ \ d'=\delta
\end{equation}
and obtain \eqref{b2}.
\textbf{III.} There exists an $i$, $1\leq i\leq m_0+1$, such that $\text{dist}([x_i,y_i],[\alpha,\delta])< 2^{n-5}.$ This case is similar to the one above; the only essential difference will be changes in the signs of averages over intervals. As above this $i$ is unique, and further $[\alpha,\delta]$ is distant from the $[u_i,v_i]$ type intervals. We subdivide $[x_i,y_i]$ the way we did $[u_i,v_i]$ above, and choose $I$. This time, however, $[x_i',y_i']$ will be the subinterval on which the average is not smaller than the average over $[x_i,y_i].$ If these are not the same, replacing $[x_i,y_i]$ with $[x_i',y_i']$ will suffice. If they are the same we either have $[c,d]\subset[x_i',y_i']$ with $d-c\geq 2^{n-5}$ such
that
\[A_{c,d}f \geq A_{x_i,y_i}f+\frac{1}{120}\var \mathbbm{E}_k^{n_0},\]
or we have a system
$c<d<\mu<\nu<c'<d'$ with $[c,d']\subset[x_i',x_i'+300\cdot2^{n-5}]$ and
\[d-c\geq 2^{n-5}, \ \ \ \mu-d \geq 2^{n-5}, \ \ \ \nu-\mu \geq 2^{n-5},\ \ \ c'-\nu \geq 2^{n-5}, \ \ \ d'-c'\geq 2^{n-5}, \]
such that
\[A_{c,d}f +
A_{c',d'}f-A_{\mu,\nu}f \geq A_{x_i,y_i}f+\frac{1}{120}\var \mathbbm{E}_k^{n_0},\]
\[A_{c,d}f,\ A_{c',d'}f\geq
A_{\mu,\nu}f.\]
This last additional property handles problems arising when $i=1$ and $i=m_0+1$. As before, in the first case $[c,d]$ replaces $[x_i,y_i]$, while in the second $[c,d],[\mu,\nu],[c',d']$ do.
Defining
\[z_i'=x_i'+\left\lceil \frac{y_i'-x_i'}{5}\right\rceil \]
we have the dichotomy
\[A_{z_i'+1,y_i'}f\geq A_{x_i',y_i'}f+\frac{1}{120}\var\mathbbm{E}_k^{n_0} \ \ \text{or} \ \ A_{x_i',z_i'}f \geq A_{x_i',y_i'}f-\frac{4}{120}\var\mathbbm{E}_k^{n_0}. \]
If the first is the case we just set $c=z_i'+1$, $d=y_i'$ to obtain the first alternative; if the second holds we set $\mu=u,\ \ \nu=v,\ \ c'=\gamma, \ \ d'=\delta,$ and
\[c=\alpha, \ \ d=\beta \ \ \text{if} \ \ A_{\alpha,\beta}f\geq A_{x'_i,y_i'}f, \]
\[c=x_i',\ d=y_i'\ \ \text{if} \ \ A_{x'_i,y_i'}f >A_{\alpha,\beta}f.\]
Here using the interval on which the average is greater guarantees the last additional property.
We thus incorporated $\mathbbm{E}_k^{n_0}$ into our system. To incorporate the rest we have to deal with previously made changes. Let us incorporate a second system $\mathbbm{E}^{n_0}_l$. Let our modified system be
\begin{equation}\label{eqs3}
x_1<y_1<u_1<v_1<x_2<y_2<u_2<v_2<\ldots<u_{m_{0,1}}<v_{m_{0,1}}<x_{m_{0,1}+1}<y_{m_{0,1}+1}
\end{equation}
and $\alpha'<\beta'<u'<v'<\gamma'<\delta'$ coming from Lemma 5 for $\mathbbm{E}^{n_0}_l$. We have the same three alternatives, which we will call \textbf{I'}, \textbf{II'}, \textbf{III'}, and if \textbf{I'} is the case, exactly the same ideas suffice. If \textbf{II'} is the case, that is if $[\alpha,\delta]$ is close to $[u_i,v_i]$ for some $1\leq i \leq m_0$, then this is either an unmodified interval, or emerges as the first of the three types of intervals arising from \textbf{II}. In either case $i$ should be unique by the same considerations as in section 4, and the methods explained in \textbf{II} handle this case. If \textbf{III'} holds then by the same arguments $[\alpha,\delta]$ is close to $[x_i,y_i]$ for a unique $1\leq i\leq m_{0}+1$, and this $[x_i,y_i]$ is either unmodified, or the result of a modification through the first of the three methods described in \textbf{III}. In either case the methods of \textbf{III} deal with this case. Clearly these considerations suffice to add the remaining systems, and after a finite number of steps we will have $\mathcal{B}^{n_0}_K$ added.
This completes the proof of our proposition from which we easily obtain
\[\var \mathcal{B} \leq 120\cdot2^{12}\cdot 300 \cdot \var f.\]
\section{Proof of Theorem 1}
We now use the results proved in sections 3, 4, and 5 to prove our theorem. We have
\[\var Mf = \sum_{k=-\infty}^{\infty}|Mf(k+1)-Mf(k)| = \sup_{m,n:\ m\leq n} \sum_{k=m}^n |Mf(k+1)-Mf(k)|. \]
For each pair $m,n$ with $m\leq n$, dispensing with redundant elements, the interval $[m,n+1]$ gives a system
\[ b_0\leq a_1<b_1<a_2<b_2<\ldots<a_{\sigma}<b_{\sigma}<a_{\sigma+1}\leq b_{\sigma+1}\]
with $Mf(a_i)<Mf(b_i), \ Mf(a_{i+1})<Mf(b_i)$ for $1\leq i \leq\sigma $, and $Mf(a_1)\leq Mf(b_0),\ \ Mf(a_{\sigma+1})\leq Mf(b_{\sigma+1})$, so that
\begin{align*}\sum_{k=m}^n |Mf(k+1)-Mf(k)|&= \sum_{i=1}^{\sigma}\big(2Mf(b_i)-Mf(a_{i+1})-Mf(a_i)\big)\\ &+Mf(b_0)-Mf(a_1)+Mf(a_{\sigma+1})-Mf(b_{\sigma+1}).\end{align*}
We apply Lemma 3, Proposition 1, Proposition 2 to obtain
\[\sum_{i=1}^{\sigma}\big(2Mf(b_i)-Mf(a_{i+1})-Mf(a_i)\big)\leq (2\cdot120\cdot2^{12}\cdot300+2)\cdot \var f.\]
On the other hand
\begin{align*}Mf(b_0)-Mf(a_1)+Mf(a_{\sigma+1})-Mf(b_{\sigma+1})&\leq 2\sup_{k\in\mathbbm{Z}} Mf(k)-2\inf_{k\in\mathbbm{Z}}Mf(k)\\ &\leq 2\sup_{k\in\mathbbm{Z}} f(k)-2\inf_{k\in\mathbbm{Z}}f(k)\\ &\leq 2\var f.
\end{align*}
So finally, taking the supremum on the left, we have
\[\var Mf \leq (2\cdot120\cdot2^{12}\cdot300+4)\cdot \var f.\]
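As a sanity check, the inequality with the constant $2\cdot120\cdot2^{12}\cdot300+4$ can be tested numerically on finitely supported sequences. The sketch below (Python) uses the uncentered discrete Hardy-Littlewood maximal operator as a stand-in for $M$ and a hypothetical input sequence; it illustrates the statement and is not part of the proof.

```python
# Mf(i) = max over intervals [a, b] containing i of the average of |f|.
# For a non-negative f supported inside the array, extending an interval
# past the support only adds zeros and lowers the average, so it is
# enough to consider intervals inside the array.
def maximal(f):
    n = len(f)
    Mf = [0.0] * n
    for i in range(n):
        best = 0.0
        for a in range(i + 1):
            s = 0.0
            for b in range(a, n):
                s += abs(f[b])
                if b >= i:                  # [a, b] contains i
                    best = max(best, s / (b - a + 1))
        Mf[i] = best
    return Mf

def var(g):
    """Total variation of a finite sequence."""
    return sum(abs(y - x) for x, y in zip(g, g[1:]))

C = 2 * 120 * 2**12 * 300 + 4               # the constant of the theorem
f = [0, 0, 3, 1, 4, 1, 5, 9, 2, 6, 0, 0]    # hypothetical sequence
assert var(maximal(f)) <= C * var(f)
```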
\section{Introduction}
Copper compounds have been extensively studied as spin-$\frac12$ quantum
magnets, material prototypes of quantum spin models. While local properties of
these compounds are usually similar and involve nearly isotropic Heisenberg
spins, the variability of the magnetic behavior stems from the unique
structural diversity. Depending on the particular arrangement of the magnetic
Cu$^{2+}$ atoms and their ligands in the crystal structure, different spin
lattices can be formed. Presently, experimental examples for many of the simple
lattice geometries, including the uniform
chain,\cite{stone2003,HC_KCuF3_INS_5_LL} square
lattice,\cite{ronnow1999,*ronnow2001,tsyrulin2009} Shastry-Sutherland lattice
of orthogonal spin dimers,\cite{miyahara2003,takigawa2010} and kagom\'e
lattice,\cite{mendels2010} are available and actively studied. Some of the
copper compounds feature more complex spin
lattices\cite{ruegg2007,mentre2009,*tsirlin2010,janson2012} that have not been
anticipated in theoretical studies, yet trigger the theoretical
research\cite{laflorencie2011,lavarelo2011} once relevant material prototypes
are available.
Owing to the competition between ferromagnetic (FM) and antiferromagnetic (AFM) contributions
to the exchange couplings, compounds of particular interest are those with M-X-M bridging angles
close to 90$^{\circ}$, with M being a transition metal and X being a ligand. Such geometries
are realized in the quasi-1D cuprates featuring chains of edge-sharing CuO$_4$ plaquettes,
which represent a simple example of low-dimensional spin-1/2 magnetic materials. Independent
of the sign of the nearest-neighbor (NN) coupling $J_1$, its competition with
the sizeable AFM next-nearest-neighbor (NNN) coupling $J_2$ leads to magnetic frustration.
Depending on the ratio $J_2/J_1$, such compounds exhibit exotic magnetic behavior like helical
order,\cite{FHC_LiCuVO4_ENS_spin_supercurrents} spin-Peierls transition\cite{FHC_CuGeO3_spin-Peierls}
or quantum critical behavior.\cite{FHC_Li2ZrCuO4} The difficulties in the
microscopic description of such compounds originate from
ambiguities\footnote{The same value of specific physical properties such as propagation vectors
or spin gaps can be realized in different parts of the phase diagram.
To illustrate this effect, we consider an AFM $J_1$
and $J_2$\,=\,0. In this case, the model is reduced to a uniform
Heisenberg chain. In the other limit, $J_1$\,=\,0 with an AFM $J_2$, the
system reduces to two decoupled uniform Heisenberg chains. Although the
$J_2/J_1$ ratios are very different for these two cases, the physics is
the same.} in the experimental estimates of the ratio $J_2/J_1$, leading to controversial
modeling of the magnetic structure.\cite{FHC_LiCu2O2_chiT_CpT_NS_wrong_model_paper,
*FHC_LiCu2O2_chiT_CpT_NS_wrong_model_comment, *FHC_LiCu2O2_chiT_CpT_NS_wrong_model_reply}
Thus, the combination of different sets of experimental data with a careful theoretical analysis
of the individual exchange pathways is of crucial importance for obtaining a precise microscopic magnetic model.
However, the search for new quantum magnets, as well as the work on existing materials,
require not only the ability to estimate the couplings but also a solid understanding of
the nexus between crystallographic features of
the material and ensuing magnetic couplings. The Goodenough-Kanamori-Anderson
(GKA)\cite{GKA_1,*GKA_2,*GKA_3} rules are a generic and well-established
paradigm that prescribes FM couplings for bridging angles
close to 90$^{\circ}$ and AFM couplings otherwise, where the bridging angle refers to the
M--X--M pathway. In Cu$^{2+}$ oxides, generally the
GKA rules successfully explain the crossover between the FM and AFM
interactions for Cu--O--Cu angles close to 90$^{\circ}$. The boundary between
the FM and AFM regimes is usually within the range of $95-98^{\circ}$,\cite{braden}
but may be considerably altered by side groups and distortions.\cite{gk, ruiz97_2, leb2}
In addition to Cu$^{2+}$ oxides, the systems of interest include copper
halides,\cite{coldea1997,*coldea2001,ruegg2003}
carbodiimides,\cite{zorko2011,*tsirlin2012} and other compound families.
Although microscopic arguments behind the GKA rules should be also applicable
to these non-oxide materials, the critical angles separating the FM and AFM
regimes, as well as the role of the ligand in general, are still little
explored. Moreover, the low number of experimentally studied compounds impedes
the kind of comprehensive experimental analysis that is available for oxides.
More and more, density functional theory (DFT) electronic structure calculations complement
experimental studies and deliver accurate estimates of magnetic couplings.\cite{leb_a2cup2o7,dioptase,azurite,licu2o2_whangbo,cav4o9_pick,WF_Ku_2002}
They are especially well suited for the study of magnetostructural correlations, as both real and fictitious
crystal structures can be considered in a calculation. However, in a periodic
structure the effect of a single geometrical parameter is often difficult to
elucidate, because different geometrical features are intertwined and evolve
simultaneously upon the variation of an atomic position. Geometrical effects on the local magnetic coupling
are better discerned in cluster models that represent a small group of
magnetic atoms and, ideally, a single exchange pathway. Additional advantages of
cluster models, owing to their low number of correlated atoms, are lower computational
costs and, most importantly, their potential for the application of parameter-free
wavefunction-based computational methods, i.e.\ \emph{ab initio}
calculations in the strict sense. By contrast, presently available band-structure methods for calculating
strongly correlated compounds rely on empirical parameters and corrections whose
choice is in general not unambiguous.\cite{dcc1,dcc2}
There have been several attempts to describe the local properties of
solids with clusters, especially in combination with \emph{ab initio} quantum-chemical
methods.\cite{munoz2002,munoz2005,caballol2010,hozoi_2011,paulus_2008} However, the construction of clusters is far from being trivial.
On one side, to make the calculations computationally feasible, the number of quantum mechanically treated atoms has to be kept as small as possible.
On the other side, accurate results require that these atoms experience the ``true'' crystal potential.
Usually, this is achieved by embedding the cluster into a cloud of
point charges\cite{paulus_2012,hozoi_2011} and so-called total ion potentials.\cite{TIP_winter89,graaf1}
But even for involved embeddings it was demonstrated that the choice of the cluster
may have significant effects on the results of the calculations and, thus, size convergence has to be checked thoroughly.\cite{graaf1}
Here, we study the effect of geometrical parameters on the magnetic exchange in
Cu$^{2+}$ halides. The family of halogen atoms spans a wide range of
electronegativities, from the ultimately electronegative fluorine, forming
strongly ionic Cu--F bonds, to chlorine and bromine that produce largely
covalent compounds with Cu$^{2+}$.\cite{sawatzky1981} Presently, we do not
consider iodine because no Cu$^{2+}$ iodides have been reported. In our
modeling, we use the simplest possible periodic crystal structure of a CuX$_2$
chain that enables the variation of the Cu--X--Cu bridging angle in a broad
range. We further perform a comparative analysis for clusters and additionally
consider the problem of long-range couplings. The evaluation of such couplings
requires larger clusters, thus posing a difficulty for the cluster approach.
The observed trends for the magnetic exchange as a function of the bridging
angle are analyzed from the microscopic viewpoint, and reveal the crucial role
of covalency that underlies salient differences between the ionic Cu$^{2+}$
fluorides and largely covalent chlorides and bromides.
On the experimental side, the compounds and crystal structures under
consideration are relevant to the CuCl$_2$ and CuBr$_2$ materials that show
interesting examples of frustrated Heisenberg
chains.\cite{banks2009,FHC_CuCl2_DFT_chiT_simul_TMRG} At low temperatures,
these halides form helical magnetic structures and demonstrate improper
ferroelectricity along with the strong magnetoelectric
coupling.\cite{seki2010,zhao2012}
The paper is organized as follows. In section II, the applied theoretical
methods are presented. In the third section, the crystal structures of the
CuX$_2$ compounds are described and compared. In section IV, the results of
periodic and cluster calculations are discussed and compared. Finally, the
discussion, summary, and a short outlook are given in section V.
\section{Methods}
The electronic structures of clusters and periodic systems were calculated with
the full-potential local-orbital code \textsc{fplo9.00-34}.\cite{fplo} For the
scalar-relativistic calculations within the local density approximation (LDA), the Perdew-Wang
parameterization\cite{pw92} of the exchange-correlation potential was used together
with a well converged mesh of up to 12$\times$12$\times$12 k-points for the periodic models.
The effects of strong electronic correlations were considered by mapping the
LDA bands onto an effective tight-binding (TB) model. The transfer integrals
$t_i$ of the TB-model are evaluated as nondiagonal elements between Wannier
functions (WFs). For the clusters, the transfer integral corresponds to half of the
energy difference of the magnetic orbitals.\cite{hth75} These transfer integrals $t_i$ are further introduced into the
half-filled single-band Hubbard model $\hat{H}=\hat{H}_{TB}+U_{\text{eff}}\sum_{i}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}$ that is eventually reduced to the
Heisenberg model for low-energy excitations,
\begin{equation}
\hat{H}=\sum_{\left\langle ij\right\rangle}J_{ij}\hat{S}_{i}\cdot\hat{S}_{j}.
\end{equation}
The reduction is well-justified in
the strongly correlated limit $t_i\ll U_{\text{eff}}$, where $U_{\text{eff}}$ is the
effective on-site Coulomb repulsion, which exceeds $t_i$ by at least an order
of magnitude (see Table~\ref{T_tJ}). This procedure yields AFM contributions to the exchange evaluated
as $J_i^{\text{AFM}}=4t_i^2/U_{\text{eff}}$.
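The mapping above can be illustrated with a short numerical sketch (Python). The transfer integral and $U_{\text{eff}}$ below are hypothetical placeholders, not values from this work:

```python
# AFM contribution to the exchange from second-order perturbation theory
# in the half-filled one-band Hubbard model: J_AFM = 4 t^2 / U_eff.
def j_afm(t_eV, u_eff_eV):
    """Return 4 t^2 / U_eff (in eV) for a transfer integral t (in eV)."""
    return 4.0 * t_eV**2 / u_eff_eV

t = 0.10          # hypothetical transfer integral (eV)
U = 4.5           # hypothetical effective on-site repulsion (eV)
assert t < U / 10                 # strongly correlated limit, t << U_eff
J = j_afm(t, U)   # about 8.9 meV
```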
Alternatively, the full exchange couplings $J_i$, comprising FM and AFM contributions, can be derived from total energies
of collinear magnetic arrangements evaluated in spin-polarized supercell calculations\footnote{A unit cell quadrupled along the $b$ axis with P2/m symmetry and five Cu-sites defines the supercell. Three different arrangements of the spins localized on the Cu-sites were sufficient to calculate all presented $J_i$: two with the spin on one Cu-site flipped and one with the spins on two Cu-sites flipped (see supplemental material\cite{supp}). This defines a system of linear equations of the type $E_s=\epsilon_0+a_s\cdot J_1+b_s\cdot J_2$ which can easily be solved. $E_s$ is the total energy of spin arrangement $s$, $\epsilon_0$ is a constant and $a_s$ and $b_s$ describe how often a certain coupling is effectively contained in the supercell.}
within the mean-field density functional theory (DFT)+$U$ formalism.
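The footnote's mapping of total energies onto Heisenberg couplings amounts to solving a small linear system $E_s=\epsilon_0+a_s\cdot J_1+b_s\cdot J_2$. A sketch in Python follows; the coefficients $a_s,b_s$ and the energies are hypothetical, not the supercell data of this work:

```python
# Solve E_s = eps0 + a_s*J1 + b_s*J2 for (eps0, J1, J2) from three
# collinear spin arrangements, by Gauss-Jordan elimination.
def solve3(M, rhs):
    """Solve a 3x3 linear system M x = rhs."""
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                fac = A[r][col] / A[col][col]
                A[r] = [x - fac * y for x, y in zip(A[r], A[col])]
    return [A[r][3] / A[r][r] for r in range(3)]

# Rows: (1, a_s, b_s) for each spin arrangement s (hypothetical counts).
M = [[1.0, 4.0, 4.0],      # all spins parallel
     [1.0, 0.0, 2.0],      # one spin flipped
     [1.0, 2.0, 0.0]]      # two spins flipped
E = [-10.0, -10.3, -10.5]  # hypothetical total energies (eV)
eps0, J1, J2 = solve3(M, E)
```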
We use a local spin-density approximation (LSDA)+$U$ scheme in combination with a unit cell quadrupled along the $b$ axis and a $k$-mesh of 64 points.
The on-site repulsion and exchange amount to $U_d$\,=\,7$\pm$0.5\,eV and $J_d$\,=\,1\,eV,
respectively. The same $U_d$ value is chosen for all CuX$_2$ (X\,=\,F, Cl, Br)\ compounds to facilitate a
comparison of the magnetic behavior. In section~\ref{sec:luvsb3}, however, it will be
shown that $U_d$ has in fact no qualitative effect on the magnetic couplings of the CuX$_2$ (X\,=\,F, Cl, Br)\ compounds.
We applied the around mean field (AMF) as well as the fully localized limit (FLL)
double counting corrections, where both types were found to supply similar results. Thus, following the earlier studies of Cu$^{2+}$
compounds,\cite{FHC_CuCl2_DFT_chiT_simul_TMRG,leb_a2cup2o7,dioptase} the presented results are obtained within the AMF scheme.
For the clusters we used, in addition to the LSDA+$U$ method, the B3LYP hybrid
functional\cite{b3lyp} with a 6-311G basis
set. The B3LYP calculations were performed within the \textsc{gaussian09} code.\cite{g09} The free
parameter $\alpha$, indicating the admixture of exact exchange, was varied
in the range between 0.15 and 0.25 to investigate its influence on the calculated exchange couplings.
\section{Crystal structures}
The copper CuX$_2$ dihalides feature isolated chains of edge-sharing CuX$_4$
plaquettes.\cite{supp} The chains of this type are the central building block of many well-studied
cuprates such as CuGeO$_3$ (Ref.~\onlinecite{FHC_CuGeO3_spin-Peierls}),
Li$_2$ZrCuO$_4$ (Ref.~\onlinecite{FHC_Li2ZrCuO4}), and Li$_2$CuO$_2$. In contrast to these oxides, the
CuX$_2$ halides are charge neutral, which makes them especially well suited for the
modeling within the cluster approach.
CuBr$_2$\ crystallizes in the monoclinic space group $C2/m$ with
$a$\,=\,14.728\,\r{A}, $b$\,=\,5.698\,\r{A}, $c$\,=\,8.067\,\r{A}, and
$\beta$\,=\,115.15$^{\circ}$ at room temperature.\cite{oeckler} The planar
chains of edge-sharing CuBr$_{4}$ plaquettes run along the $b$-axis
(Fig.~\ref{F-str}). The Cu--Br--Cu bridging angle $\theta$ amounts to
92.0$^{\circ}$, the Cu--Br distance is 2.41\,\r{A}, while the distances between
the neighboring chains amount to $d_{\parallel}$\,=\,3.82\,\r{A} and
$d_{\perp}$\,=\,3.15\,\r{A} in the direction parallel to $c$ and perpendicular
to the plaquette plane, respectively.
CuCl$_2$ is isostructural to CuBr$_2$\ with the Cu--Cl distance of 2.26\,\r{A} and
$\angle$(Cu--Cl--Cu)\,=\,93.6$^{\circ}$.\cite{cucl2_str} The interchain
separations amount to $d_{\parallel}$\,=\,3.73\,\r{A} and
$d_{\perp}$\,=\,2.96\,\r{A} along the $c$ and $a$ directions, respectively.
CuF$_2$\ features a two-dimensional distorted version of the rutile structure,
with corner-sharing CuF$_4$ plaquettes forming a buckled square
lattice.\cite{cuf2_str} This atomic arrangement is very different from the
chain structures of CuCl$_2$\ and CuBr$_2$. For the sake of comparison with other
Cu$^{2+}$ halides, we constructed a fictitious one-dimensional structure of
CuF$_2$. The Cu--F distance of 1.91\,\r{A} was chosen to match the respective
average bond length in the real CuF$_2$\ compound. The corresponding bridging angle,
yielding a minimum in total energy, was determined to be 102$^{\circ}$.\footnote{The
bridging angle minimizing the total energy for the given Cu--F bond distance was
estimated from a series of LDA calculations for bridging angles between 70$^{\circ}$
and 120$^{\circ}$.} Although this crystal structure remains hypothetical, it is
likely metastable and could be formed in CuF$_2$ under a strong tensile strain on
an appropriate substrate.
\begin{figure}[tbp]
\includegraphics[width=8.6cm]{figure1}
\caption{\label{F-str}(Color online) Edge-sharing CuX$_4$-plaquettes forming the magnetic chains
in the CuX$_2$ compounds. The chains, running along [010], are flat and lie in the $ab$ plane. The stacking of the planes is accompanied by a shift to match the
monoclinic angle.\cite{supp} The arrows indicate the nearest-neighbor and
next-nearest-neighbor interaction pathways, and $\theta$ denotes the Cu-X-Cu bridging
angle.}
\end{figure}
\section{Results}
\subsection{Band structure calculations}
First, we consider magnetic couplings in the experimental crystal structures of
CuCl$_2$ and CuBr$_2$, as well as in the relaxed structure of chain-like
CuF$_2$. The DFT calculations of the band structure and the density of states
(DOS) of CuX$_2$ (X\,=\,F, Cl, Br)\ compounds within the LDA yield a valence band width of
6--8\,eV,\cite{supp} in agreement with the experimental photoelectron
spectra.\cite{sawatzky1981} The valence band complex becomes slightly narrower
upon an increase in the ligand size, because the lower electronegativity of Cl
and Br brings the respective $p$ states closer to the Cu $3d$ states, thus
enhancing the hybridization and reducing the energy separation between the Cu
and ligand orbitals. All the band structures feature a separated band crossing
the Fermi level (Fig.~\ref{bands}). In the local-orbital representation
visualized by WFs (Fig.~\ref{J_analys}), this band is formed by the antibonding
$\sigma^{*}$ combination of Cu $3d_{x^2-y^2}$ and X $p$ orbitals.\footnote{The
orbitals are denoted with respect to a local coordinate system, where for each
plaquette one of the Cu--X bonds and the direction perpendicular to the
plaquette are chosen as $x$ and $z$-axes, respectively.} The isolated
half-filled band suffices for describing the magnetic properties and the
low-lying magnetic excitations via the transfer integrals $t_i$ which are subsequently
introduced into a Hubbard model. Ligand valence $p$-orbital contributions to the
magnetic orbital, denoted as $\beta$ in Table~\ref{T_tJ}, illustrate the increase in the
metal--ligand hybridization from F to Br.
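As an illustrative sketch (not the authors' actual WF fit), the isolated half-filled band of such a chain can be modeled by the dispersion $\varepsilon(k)=2t_1\cos(kb)+2t_2\cos(2kb)$, keeping only the NN and NNN transfers and neglecting interchain hopping:

```python
import numpy as np

def tb_dispersion(k_b, t1, t2):
    """One-band tight-binding dispersion (eV) of an isolated chain with
    nearest- and next-nearest-neighbor transfer integrals t1 and t2.
    k_b is the dimensionless product of the wave vector and the chain
    period b; interchain and longer-range transfers are neglected."""
    return 2 * t1 * np.cos(k_b) + 2 * t2 * np.cos(2 * k_b)

# Transfer integrals of CuBr2 from Table I, converted from meV to eV
t1, t2 = 0.047, 0.136

k = np.linspace(0.0, np.pi, 201)          # Gamma to the zone boundary
eps = tb_dispersion(k, t1, t2)
bandwidth = eps.max() - eps.min()
print(f"one-band bandwidth along the chain: {bandwidth:.2f} eV")
```

Because $t_2\gg t_1$, the band minimum lies away from both $\Gamma$ and the zone boundary, a fingerprint of the frustrated chain.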
The dispersion calculated with the WF-based one-band TB model for CuBr$_2$ is also shown
in Fig.~\ref{bands}, and the leading transfer integrals together with the AFM
contributions $J_i^{\text{AFM}}$ are given in Table~\ref{T_tJ}. The evaluation of
$J_i^{\text{AFM}}$ requires the value of $U_{\text{eff}}$, which is not known precisely. Here,
we estimate $U_{\text{eff}}$ by comparing the transfer integral $t_2$ obtained from
the TB analysis with the exchange coupling $J_2$ from the LSDA+$U$
calculations. While short-range couplings may involve large FM contributions,
the long-range coupling $J_2$ should be primarily AFM. Therefore,
$J_2^{\text{AFM}}=J_2$ to a good approximation, and $U_{\text{eff}}=4t_2^2/J_2$. In this way,
we find $U_{\text{eff}}=6$~eV for X = F, 4~eV for Cl, and 3~eV for Br. The reduction
in $U_{\text{eff}}$ reflects the general trend of the enhanced Cu--X hybridization
and covalency, because the $U_{\text{eff}}$ value pertains to the screened Coulomb
repulsion in the mixed Cu--X band. The enhanced hybridization leads to
stronger screening, a larger spatial extension of the relevant orbitals and, thus, to lower $U_{\text{eff}}$ values.
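The estimation procedure described above reduces to two one-line formulas. A minimal sketch with rounded input values from Table~\ref{T_tJ} (by construction, reinserting $U_{\text{eff}}$ reproduces $J_2$ exactly):

```python
def u_eff(t2_mev, j2_mev):
    """Effective on-site repulsion (eV) from equating 4*t2^2/U_eff with
    J2, assuming the long-range coupling J2 is essentially pure AFM."""
    return 4 * t2_mev**2 / j2_mev / 1000.0   # meV -> eV

def j_afm(t_mev, u_eff_ev):
    """AFM contribution to an exchange coupling (meV): J_AFM = 4t^2/U_eff."""
    return 4 * t_mev**2 / (u_eff_ev * 1000.0)

# CuBr2 inputs from Table I: t1, t2 (meV) and the LSDA+U value of J2 (meV)
t1, t2, j2 = 47.0, 136.0, 22.2

u = u_eff(t2, j2)
print(f"U_eff  = {u:.1f} eV")               # ~3 eV, cf. Table I
print(f"J1_AFM = {j_afm(t1, u):.1f} meV")   # small: J1 is dominated by FM terms
```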
\begin{figure}[tbp]
\includegraphics[width=8.6cm]{figure2}
\caption{\label{bands}(Color online) Comparison of the calculated LDA
band structure of CuBr$_2$\ and the band derived from a fit using an effective
one-band tight-binding model based on Cu-centered Wannier functions (WF TB).
The right plot shows the total density of states (DOS) together with the
partial DOS of Cu(3$d$) and Br(4$p$) states. The Fermi level is at zero energy.
Notation of $k$-points: $\Gamma=(000)$, X$=(\frac{\pi}{a}00)$,
S$=(\frac{\pi}{a}\frac{\pi}{b}0)$, Y$=(0\frac{\pi}{b}0)$,
YZ$=(0\frac{\pi}{b}\frac{\pi}{c})$,
XYZ$=(\frac{\pi}{a}\frac{\pi}{b}\frac{\pi}{c})$, Z$=(00\frac{\pi}{c})$.}
\end{figure}
\begin{table*}[tbp]
\begin{ruledtabular}
\caption{\label{T_tJ}\label{T_lsdau} Results for the experimental (X = Cl, Br)
and hypothetical (X = F) structures of CuX$_2$: the bridging angle $\theta$, the
ligand contribution to the magnetic orbital $\beta$, transfer integrals $t_i$, AFM
contributions to the exchange $J_i^{\text{AFM}}=4t_i^2/U_{\text{eff}}$, total exchange
integrals $J_i$ from LSDA+$U$ calculations with $U_d=7\pm 0.5$~eV, and the
effective on-site Coulomb repulsion $U_{\text{eff}}$ obtained by equating
$4t_2^2/U_{\text{eff}}$ with $J_2$ (see text for details).
}
\begin{tabular}{c c c c c r r r r r}
 & $\theta$ & $\beta$ & $t_1$ & $t_2$ & $J_1^{\text{AFM}}$ & $J_2^{\text{AFM}}$ & $J_1$ & $J_2$ & $U_{\text{eff}}$ \\
& (deg) & & (meV) & (meV) & (meV) & (meV) & (meV) & (meV) & (eV) \\\hline
CuBr$_2$ & 92 & 0.28 & 47 & 136 & 2.5 & 21.0 & $-8.8\pm 0.4$ & $22.2\pm 3.4$ & 3.0 \\
CuCl$_2$ & 93.6 & 0.26 & 34 & 117 & 1.1 & 13.7 & $-12.9\pm 0.9$ & $13.4\pm 2.2$ & 4.0 \\
CuF$_2$ & 102 & 0.15 & 132 & 50 & 11.6 & 1.6 & $5.4\pm 0.9$ & $1.2\pm 0.2$ & 6.0 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
The estimates in Table~\ref{T_tJ} reveal two major differences between the
ionic CuF$_2$ and more covalent CuCl$_2$ and CuBr$_2$ compounds. First, the
nearest-neighbor (NN) coupling $J_1$ is AFM in the fluoride, while FM in the
chloride and bromide. Second, the AFM next-nearest-neighbor (NNN) coupling
$J_2$ is enhanced upon increasing the covalency of the Cu--X bonds. In CuF$_2$, this
coupling is weak ($J_2\ll J_1$), whereas in the chloride and bromide $J_2\geq
|J_1|$. The NNN coupling is amplified by the larger ligand size
and the increased covalency. This coupling involves the long-range Cu--X--X--Cu
pathway and requires a strong overlap between the ligand orbitals, which is
possible for X = Cl and especially Br, while remaining weak for the smaller
fluoride anion. The changes in the NN coupling seem to be well described by
the GKA rules. Considering the trends for copper oxides,\cite{braden} one
expects FM $J_1$ for $\theta$ close to $90^{\circ}$, as in CuCl$_2$ and
CuBr$_2$, and AFM $J_1$ for $\theta>98^{\circ}$, as in the chain-like structure
of CuF$_2$. Nevertheless, the covalency is also crucial for the sign of
$J_1$, as shown by the magnetostructural correlations presented below
(Sec.~\ref{sec:bridging}).
Finally, we briefly compare our DFT-based estimates of $J_i$ with the
experiment. Because the chain-like polymorph of CuF$_2$ has not been prepared
experimentally, no comparison can be performed. The microscopic analysis of
CuCl$_2$ presented in Ref.~\onlinecite{FHC_CuCl2_DFT_chiT_simul_TMRG} shows
reasonable agreement between the experimental ($J_1=-7.8$~meV, $J_2=11.6$~meV) and
calculated ($J_1=-12.9\pm 0.9$~meV, $J_2=13.4\pm 2.2$~meV) values. The same is true
for CuBr$_2$, where we evaluated the intrachain couplings as $J_1=-8.8\pm 0.4$~meV,
$J_2=22.2\pm 3.4$~meV, which compare well with the recently published experimental data
$J_1=-11.0\pm 1.6$~meV, $J_2=31.0$~meV.\cite{cubr2_2012} Moreover, our calculated couplings
deviate from experiment significantly less than the computational results of Ref.~\onlinecite{cubr2_2012}.
Puzzled by the origin of the discrepancy between our values for $J_1$ and $J_2$ and the published
calculational results for CuBr$_2$,\cite{cubr2_2012} we repeated the DFT+$U$ calculations for CuBr$_2$\ as well as
CuCl$_2$\ with the code \textsc{vasp}\cite{vasp1,*vasp2} and the same computational parameters as used in
Ref.~\onlinecite{cubr2_2012}. For the parameters $U_d$ and $J_d$, we adopted 8\,eV and 1\,eV, respectively, corresponding to the effective $U\!=\!U_d\!-\!J_d\!=\!7$\,eV of Ref.~\onlinecite{cubr2_2012}.
For the GGA+$U$ calculations, we used again a unit cell quadrupled along the $b$
axis and the $k$-mesh of 64 points. The resulting $J_1$ and
$J_2$ values generally agree with the published values,\cite{banks2009,cubr2_2012} except for $J_1$ in CuBr$_2$, for
which we obtain only half of the value provided in Ref.~\onlinecite{cubr2_2012}. The agreement with the
experimental data can be improved by increasing the $U_d$ value. In particular, $U_d\!=\!12$\,eV
yields $J_1\!=\!-95$\,K and $J_2$\,=\,113\,K for CuCl$_2$ and $J_1\!=\!-124$\,K and $J_2$\,=\,357\,K
for CuBr$_2$, very close to the experimental estimates.\cite{FHC_CuCl2_DFT_chiT_simul_TMRG,cubr2_2012} This $U_d$ value is significantly higher than the $U_d=7$\,eV we used in our \textsc{fplo9.00-34} calculations.\footnote{A $U_d$ value of 7\,eV has been found to give good agreement with experimental data for several Cu$^{2+}$ compounds, see, e.g., Refs.~\onlinecite{CaYCuO, dioptase, linaritePRB}.} There are basically two reasons for this large difference. First, \textsc{fplo9.00-34} and \textsc{vasp} implement different basis sets, local orbitals and projector augmented waves,\cite{paw1,*paw2} respectively, which crucially affects the local quantity $U_d$. Second, we used an around-mean-field double-counting correction (DCC), whereas the fully-localized-limit DCC, which is always used in \textsc{vasp}, requires larger $U_d$ values.\cite{tsirlin2010_cucl}
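For ease of comparison, the GGA+$U$ couplings quoted above in temperature units can be converted to the meV scale used elsewhere in this paper via the Boltzmann constant (a bookkeeping sketch, not part of the original calculations):

```python
K_B_MEV_PER_K = 8.617333e-2   # Boltzmann constant in meV/K

def kelvin_to_mev(j_kelvin):
    """Convert an exchange coupling from temperature units (K) to meV."""
    return j_kelvin * K_B_MEV_PER_K

# U_d = 12 eV GGA+U estimates quoted above
couplings_K = {"CuCl2 J1": -95, "CuCl2 J2": 113,
               "CuBr2 J1": -124, "CuBr2 J2": 357}
for label, j_k in couplings_K.items():
    print(f"{label}: {j_k:5d} K = {kelvin_to_mev(j_k):6.1f} meV")
```

For instance, $-124$\,K corresponds to about $-10.7$\,meV, close to the experimental $J_1=-11.0$\,meV for CuBr$_2$.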
\subsection{Variation of the bridging angle}
\label{sec:bridging}
To establish magnetostructural correlations in CuX$_2$ halides, we
systematically vary the bridging angle $\theta$ and evaluate the NN coupling
$J_1$. Since the Cu--Cu distance and two Cu--X distances form a triangle with
$\theta$ being one of its angles, the change in $\theta$ alters either the
Cu--Cu distance, or the Cu--X distance, or both. We compared different flavors
of varying $\theta$:\footnote{We adopted a fictitious structure, where the edge-sharing chains are simply
stacked, leading to a rectangular unit cell. This has the advantage of a much
simpler construction of the different structures. For CuBr$_2$, the small
tilting between the chains is also neglected, and the Cu--Br distance is
slightly increased to 2.45\,\r{A}, enabling us to span a broader range of
bridging angles without artifacts from unphysically small Br--Br
distances.} i) the Cu--Cu distance is varied, while the X position is
subsequently optimized to yield the equilibrium Cu--X distance and $\theta$;
ii) the Cu-Cu distance is fixed, while the Cu--X distance is varied; and iii)
the Cu--X distance is fixed, while the Cu--Cu distance is varied. For all three
cases, we evaluated $J_1$ as a function of the Cu--X--Cu angle. Taking CuCl$_2$\
as an example, Fig.~\ref{J_ang_o} shows that, despite minor numerical differences,
all three procedures agree well with each other. Additionally, we studied the influence of
$U_d$ by varying it in the wide range of 4--9\,eV. This causes a shift of the
curves along the vertical axis, but the qualitative behavior of $J_1$ versus
the Cu-X-Cu angle is retained.\cite{supp}
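The triangle relation invoked above can be written down explicitly: the two equal Cu--X bonds and the Cu--Cu separation satisfy $d_{\mathrm{Cu-Cu}}=2\,d_{\mathrm{Cu-X}}\sin(\theta/2)$. An illustrative sketch:

```python
import math

def cu_cu_distance(d_cux, theta_deg):
    """Cu-Cu separation (in Angstrom) of the isosceles Cu-X-Cu triangle
    formed by two equal Cu-X bonds of length d_cux and bridging angle theta."""
    return 2 * d_cux * math.sin(math.radians(theta_deg) / 2)

def bridging_angle(d_cux, d_cucu):
    """Inverse relation: bridging angle (deg) from the two distances."""
    return 2 * math.degrees(math.asin(d_cucu / (2 * d_cux)))

# CuBr2 geometry quoted in Sec. II: d(Cu-Br) = 2.41 A, theta = 92.0 deg
d = cu_cu_distance(2.41, 92.0)
print(f"d(Cu-Cu) = {d:.2f} A")
```

Fixing any two of the three quantities determines the third, which is why the flavors i)--iii) of varying $\theta$ are not independent.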
Remarkably, $J_1$ reaches its minimum absolute value at around
$\theta=100^{\circ}$ and becomes strongly FM at large bridging angles
(Fig.~\ref{J_ang_o}). This result is robust with respect to the particular
procedure of varying $\theta$. To better understand the microscopic origin of
this peculiar behavior, we performed similar calculations for CuF$_2$ and
CuBr$_2$. As different procedures of varying $\theta$ arrive at similar
results, we fixed the Cu--X distance for each ligand and achieved different
$\theta$ values by adjusting only the Cu--Cu distance.
\begin{figure}[tbp]
\includegraphics[width=8.6cm]{figure3}
\caption{\label{J_ang_o}(Color online) $J_1$ of CuCl$_2$\ as a function of the
bridging angles where different structural parameters are fixed: i) the Cu-Cu distance is varied, while the X position is
subsequently optimized to yield the equilibrium Cu--X distance and $\theta$;
ii) the Cu-Cu distance is fixed, while the Cu--X distance is varied; and iii)
the Cu--X distance is fixed, while the Cu--Cu distance is varied. The
dashed vertical line indicates the experimental bridging angle.}
\end{figure}
Similar to our results for the fixed geometries (Table~\ref{T_tJ}),
magnetostructural correlations for $J_1$ (Fig.~\ref{J_analys}) reveal a large
difference between the ionic CuF$_2$ and covalent CuCl$_2$ and CuBr$_2$. In
CuF$_2$, $J_1$ follows the anticipated behavior with the FM-to-AFM crossover at
$\theta\simeq 100^{\circ}$. However, the covalent compounds always show FM
$J_1$, with a maximum (i.e., the minimum in the absolute value) at
$\theta\!=\!100\!-\!105^{\circ}$ and the enhanced FM character at even larger bridging angles.
This trend persists up to at least $\theta=120^{\circ}$ (Fig.~\ref{J_ang_o}).
The effect of strongly FM $J_1$ in CuCl$_2$ and CuBr$_2$ can be explained by
considering individual contributions to the exchange. The AFM contribution
$J_1^{\text{AFM}}$ arises from the electron hopping between the Cu sites. The hopping
probability measured by the transfer integral $t_1$ critically depends on the
Cu--X--Cu bridging angle. In a simple ionic picture, the transfer is maximal at
$\theta=180^{\circ}$ (singly bridged) and approaches zero at $\theta=90^{\circ}$, thus providing the
microscopic reasoning behind the GKA rules. This anticipated trend is indeed
shown by CuF$_2$, where $J_1^{\text{AFM}}=4t_1^2/U_{\text{eff}}$ increases above
$\theta=90^{\circ}$ and underlies the increase in $J_1$. However, the covalent
CuX$_2$ halides show qualitatively different behavior with the very low (and
decreasing) $t_1$ and $J_1^{\text{AFM}}$ up to at least $\theta=110^{\circ}$. This
result implies that the large contribution of the ligand states in a covalent
compound also has a strong influence on the Cu--X--Cu hopping process and alters the
anticipated trend for the AFM exchange.
The FM contribution $J_1^{\text{FM}}$ can be evaluated as \mbox{$J_1-J_1^{\text{AFM}}$}, where we
use $J_1$ from the LSDA+$U$ calculation and $J_1^{\text{AFM}}=4t_1^2/U_{\text{eff}}$ from
the TB analysis. Microscopically, $J_1^{\text{FM}}$ originates from the Hund's
coupling on the ligand site\cite{beta4} and/or from the FM coupling between the
Cu $3d$ and ligand $p$ states.\cite{FHC_LiCuVO4_MH_DFT_DMRG,*kuzian2012}
Regarding the former mechanism,\cite{beta4} a simple model expression reads as
$J_1^{\text{FM}}=-\beta^4J_H$, where $\beta$ is the ligand's contribution to the
Cu-centered magnetic orbital, and $J_H$ is the (effective) Hund's coupling on the ligand. Even
though this expression is derived for $\theta=90^{\circ}$, our data obtained
for different $\theta$ values are well understood in terms of the variable
$\beta$ (see bottom panels of Fig.~\ref{J_analys}). The increase in the
bridging angle leads to larger $\beta$, thus enhancing $J_1^{\text{FM}}$. Since
$\beta$ enters $J_1^{\text{FM}}$ as $\beta^4$, its effect should be dominant over any
other contributions, such as slight variations of $J_H$. The increase in $\beta$ also explains the
increasing FM contribution at low $\theta$ (Fig.~\ref{J_analys}).
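The $\beta^4$ scaling can be quantified with the $\beta$ values of Table~\ref{T_tJ}; the effective ligand $J_H$ below is a hypothetical placeholder chosen only to illustrate the trend, not a fitted quantity:

```python
def j1_fm_model(beta, j_h_ev):
    """Model FM contribution (meV) from Hund's coupling on the ligand,
    J1_FM = -beta^4 * J_H (expression derived for theta = 90 deg)."""
    return -beta**4 * j_h_ev * 1000.0   # eV -> meV

J_H = 1.5   # eV; hypothetical effective ligand Hund's coupling
for x, beta in [("F", 0.15), ("Cl", 0.26), ("Br", 0.28)]:
    print(f"Cu{x}2: beta^4 = {beta**4:.5f},"
          f"  J1_FM ~ {j1_fm_model(beta, J_H):6.1f} meV")
```

Even for an identical $J_H$, going from F to Br enhances the FM term by more than an order of magnitude, in line with the trend discussed above.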
In contrast to the covalent chloride and bromide, the ionic CuF$_2$ shows only
a minor FM contribution owing to the very low $\beta$. We also tried to
artificially enhance $\beta$ by reducing the Cu--F bonding distance down to
1.60\,\r{A}. For bridging angles larger than 100$^{\circ}$, the AFM coupling becomes
twice as large as for the Cu--F distance of 1.91\,\r{A}, and for angles smaller than
80$^{\circ}$ the model compound also becomes AFM. The FM coupling strength around
90$^{\circ}$ is almost unaffected. This indicates the robust ionic
nature of Cu--F bonds. The reduction in the Cu--F distance increases the
electron transfer without changing the hybridization, hence $J_1^{\text{AFM}}$ is
increased, while $J_1^{\text{FM}}$ remains weak.
\begin{figure*}[tbp]
\includegraphics[width=17.1cm]{figure4}
\caption{\label{J_analys}(Color online) Magnetostructural correlations for the
CuX$_2$ halides (the Cu--X distance is fixed, the Cu--Cu distance is variable).
The upper panels show the total exchange $J_1$ (LSDA+$U$, $U_d=7$~eV) along
with $J_1^{\text{AFM}}=4t_1^2/U_{\text{eff}}$ and $J_1^{\text{FM}}=J_1-J_1^{\text{AFM}}$. The bottom
panels show $\beta^4$, where $\beta$ is the ligand's contribution to the
Cu-based magnetic orbital. The WFs for the experimental (relaxed) geometries are shown as
insets.}
\end{figure*}
\subsection{Cluster models}
In a periodic calculation, the variation of structural parameters, such as bond
lengths and angles, is generally challenging: the high symmetry couples the
structural parameters to each other. As a result, changing a single parameter
is often impossible without affecting the other parameters. The cluster models
are more flexible and may allow for an independent variation of individual bond
lengths and angles. This property renders the clusters as an excellent
playground to study the magnetostructural correlations.
Before discussing the intrachain couplings using a combination of periodic and
cluster models, we first want to demonstrate how cluster models for the three
Cu dihalide compounds are constructed. Since the chains are spatially
well-separated from each other, we can consider segments of a chain, with the
terminal Li atoms keeping the electroneutrality (Fig.~\ref{clusters}). No
additional point charges are required, so that the clusters are kept as simple
as possible.
\begin{figure}[tbp]
\includegraphics[width=8.6cm]{figure5}
\caption{\label{clusters} Three examples of model clusters: Cu$_3$ trimer
cluster as the minimal cluster for the evaluation of $J_1$ and $J_2$ (A); the
pentamer cluster for calculating $J_2$, with only two Cu$^{2+}$ and three
substituted non-magnetic ions (B); and the tetramer cluster for calculating
$J_1$ with two magnetic Cu and two nonmagnetic $M_S$ centers (C).}
\end{figure}
First, the effect of the chain length on $J_1$, $J_2$ and the ratio $-J_2/J_1$
is investigated (Fig.~\ref{F-jN}). For all three compounds, small clusters,
such as dimers or trimers, are insufficient for describing the magnetic
properties. The convergence with respect to the cluster size is different for
different compounds (e.g., the ionic CuF$_2$\ demonstrates the slowest
size convergence). To ensure a meaningful comparison with the periodic model
or the experimental data, the convergence with respect to the cluster size has
to be carefully checked.
\begin{figure}[tbp]
\includegraphics[width=8.6cm]{figure6}
\caption{\label{F-jN} (Color online) $J_1(N)/J_1(N\!=\!8)$, $J_2(N)/J_2(N\!=\!8)$, and
$\alpha(N)/\alpha(N\!=\!8)$, where $\alpha=-J_2/J_1$, as a function of the chain length $N$. The
bridging angle is fixed to the experimental (CuCl$_2$ and CuBr$_2$) and
optimized (CuF$_2$) values, respectively. For $J_2$ and $-J_2/J_1$, the
minimal number of Cu-centers amounts to three. The exchange couplings are
calculated with the LSDA+$U$ method with $U_d$\,=\,7\,eV.}
\end{figure}
On the other hand, a large number of correlated centers requires a large number
of spin configurations to estimate exchange couplings. While larger clusters
are still feasible for DFT, they may pose a problem for advanced \textit{ab
initio} quantum-chemical methods. Therefore, we attempted to reduce the number
of correlated Cu$^{2+}$ ions by substituting them by formally nonmagnetic
Mg$^{2+}$ and Zn$^{2+}$ ions (Fig.~\ref{clusters}). Even with this minimum
number of correlated centers, deviations of less than 10\% from the size-converged
Cu$_8$ octamer cluster are obtained for the Cu--Br (Fig.~\ref{F-jN_sub}) and
also for the Cu--Cl clusters. In the case of Cu--F, where convergence is reached
only at larger cluster sizes, at least four correlated centers are required to
reduce the deviations to that level.
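The mapping of cluster total energies onto $J_1$ and $J_2$ can be sketched as a least-squares fit over collinear (Ising-projected) spin configurations. The snippet below uses a hypothetical open Cu$_4$ chain and synthetic energies generated from known couplings, standing in for actual LSDA+$U$ total energies:

```python
import itertools
import numpy as np

def design_row(spins):
    """Coefficients [1, sum s_i*s_{i+1}, sum s_i*s_{i+2}] of the
    Ising-projected Heisenberg energy of an open four-site chain."""
    nn = sum(spins[i] * spins[i + 1] for i in range(3))
    nnn = sum(spins[i] * spins[i + 2] for i in range(2))
    return [1.0, nn, nnn]

# All collinear configurations of four S = 1/2 centers
configs = list(itertools.product([0.5, -0.5], repeat=4))
A = np.array([design_row(c) for c in configs])

# Synthetic "total energies" from known couplings (meV, CuBr2-like)
true = np.array([0.0, -8.8, 22.2])   # E0, J1, J2
E = A @ true

# Least-squares mapping recovers the couplings from the energies
fit, *_ = np.linalg.lstsq(A, E, rcond=None)
print("recovered (E0, J1, J2) =", np.round(fit, 6))
```

In practice far fewer than all $2^N$ configurations are needed, and the overdetermined fit also exposes possible inconsistencies of the Heisenberg mapping.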
\begin{figure}[tbp]
\includegraphics[width=8.6cm]{figure7}
\caption{\label{F-jN_sub}(Color online) $J_1$ and $J_2$ calculated with Cu--Br
clusters containing two correlated Cu and $N_S$
uncorrelated Mg$^{2+}$ or Zn$^{2+}$ centers. The bridging angle is fixed to the
experimental value. The resulting exchange integrals are normalized to that for the
Cu$_8$-octamer cluster. For the calculations, the LSDA+$U$ method is used with $U_d$\,=\,7\,eV.}
\end{figure}
Concerning size convergence and substitutions, results similar to those for the
exchange couplings are obtained for the NN and NNN transfer integrals $t_1$ and
$t_2$ calculated within the LDA. These results show that the simple clusters suffice for
describing the intrachain physics of these compounds and that the problem of
appropriately embedding the clusters may be at least partially bypassed by
increasing the cluster size and substituting part of the correlated centers
with weakly correlated ions.
\subsection{Cluster versus periodic models}
In the following, both cluster and periodic models will be used for calculating
$J_2$ and the $-J_2/J_1$ ratio, as well as the transfer integrals $t_i$ of the
Cu dihalides. The comparison of periodic and cluster models over a broad range
of bridging angles allows us to exclude an accidental agreement between the two
models, which could be realized in a specific geometry by appropriately choosing
the chain length, substitutions, and the termination of the cluster. For a cluster
tuned in such a way, however, the agreement with the periodic model would be lost
upon varying the geometrical parameters.
\begin{figure}[tbp]
\includegraphics[width=8.6cm]{figure8}
\caption{\label{F-Jvar}(Color online) CuBr$_2$: exchange integrals $J_1$ and
$J_2$ as a function of the bridging angle. A periodic
as well as two different cluster models (Cu$_4$ and Cu$_8$) were used. The
inset shows the ratio $-J_2/J_1$. The dashed vertical line indicates the
experimental bridging angle of 92$^{\circ}$. For the calculations, the LSDA+$U$ method is used with $U_d$\,=\,7\,eV.}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=8.6cm]{figure9}
\caption{\label{F-varcl}(Color online) Exchange integrals $J_1$
and the ratio $-J_2/J_1$ of CuCl$_2$\ as a function of the bridging angle
calculated with a periodic and two cluster models. The dashed vertical line
indicates the experimental bridging angle of 93.6$^{\circ}$. For the CuF$_2$\
data, see supplementary information. For the calculations, the LSDA+$U$ method is used with $U_d$\,=\,7\,eV.}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=8.6cm]{figure10}
\caption{\label{F-t1}(Color online) CuBr$_2$: the nearest-neighbor transfer
integral $t_1$ as a function of the bridging angle calculated with a periodic as well as two cluster models (Cu$_4$ and Cu$_8$).}
\end{figure}
The exchange integrals as well as the $-J_2/J_1$ ratio versus the bridging
angles are depicted in Figs.~\ref{F-Jvar} and~\ref{F-varcl} for CuBr$_2$
and CuCl$_2$, respectively. A comparison of the nearest-neighbor transfer
integral $t_1$ of CuBr$_2$, calculated with cluster and periodic models, is
shown in Fig.~\ref{F-t1}. The clusters can reproduce the results of band
structure calculations over the whole range of bridging angles, thus
justifying the construction of the clusters. In the $-J_2/J_1$ ratio,
which governs the magnetic ground state, the deviations between the
cluster and periodic models are compensated to a large degree.
In CuF$_2$, the deviations between $J_1$ and $J_2$ obtained in the cluster and
periodic models, respectively, are also compensated in the ratio $-J_2/J_1$,
except for the smallest bridging angles.\cite{supp} The singularity in
$-J_2/J_1$ at about 100$^{\circ}$ arises from the crossover between the FM and
the AFM $J_1$.
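For context: in the classical limit, the $J_1$--$J_2$ chain with FM $J_1$ develops a spiral ground state once $J_2>|J_1|/4$, with pitch angle $\cos\varphi=-J_1/(4J_2)$. This standard model result (not a prediction made in this paper) can be evaluated with the couplings of Table~\ref{T_tJ}:

```python
import math

def pitch_angle(j1, j2):
    """Classical pitch angle (deg) of the spiral ground state of the
    J1-J2 chain, H = sum_i [J1 S_i.S_{i+1} + J2 S_i.S_{i+2}], with
    FM J1 < 0 and AFM J2 > 0; collinear FM order if J2 <= |J1|/4."""
    if j2 <= abs(j1) / 4:
        return 0.0
    return math.degrees(math.acos(-j1 / (4 * j2)))

# LSDA+U intrachain couplings (meV) from Table I
print(f"CuCl2: {pitch_angle(-12.9, 13.4):.1f} deg per Cu-Cu step")
print(f"CuBr2: {pitch_angle(-8.8, 22.2):.1f} deg per Cu-Cu step")
```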
These results show that well-controlled cluster models are capable of
describing local properties of ionic as well as strongly covalent solids,
and that the good agreement with band structure calculations is neither
accidental nor artificial. Finally, the results demonstrate that superexchange and
magnetic coupling in insulators are relatively short-range effects even for
strongly covalent compounds.
\subsection{LSDA+$U$ vs. hybrid functionals}
\label{sec:luvsb3}
A common problem of DFT-based approaches applied to strongly correlated
electrons is the ambiguous choice of empirical parameters and corrections that are required to mimic
many-body effects, e.g., in the mean-field DFT+$U$ approach. Hybrid functionals
represent an alternative, although still empirical, way of simulating the effect
of strong electron correlations within DFT. In this way, the non-local exact exchange is mixed
with the local LDA or GGA exchange, while the mixing parameter $\alpha$ is
typically the only free parameter.
In contrast to DFT+$U$, hybrid functionals
are more robust with respect to the adjustable parameters, and the constant
value of $\alpha=0.20$ or $\alpha=0.25$ can be used in a rather general fashion. Additionally, the
exact-exchange correction is generally applied to all orbitals, whereas in DFT+$U$
the correction is applied only to a selected set of orbitals assumed to be
strongly correlated.
In this study, we apply the B3LYP functional
on dimer models and vary $\alpha$ between 0.15
and 0.25 ($\alpha=0.20$ corresponds to the standard B3LYP functional as
implemented in \textsc{gaussian}).
Although we pointed out that dimer models are too small for calculating $J_1$ in quantitative agreement with the periodic model,
they are well suited for comparing the different DFT methods and parameter sets.\footnote{For larger clusters of
CuBr$_2$\ the expectation values \mbox{$\langle S^2 \rangle$} of the broken symmetry (BS) states (which are described by single Slater determinants) calculated
with the hybrid functional tend to deviate from the theoretical values. The deviations
($<10$\%), depending on the bridging angle and $\alpha$, slightly shift the BS states and thus
affect the exchange couplings. This impedes a fair comparison of the different
methods and parameter choices, which is exactly our goal here.}
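The dimer exchange couplings discussed here follow from a broken-symmetry energy mapping; where $\langle S^2\rangle$ deviates from its ideal values, a Yamaguchi-type spin projection can be applied. A sketch with placeholder energies, assuming the sign convention $H=J\,\mathbf{S}_1\!\cdot\!\mathbf{S}_2$ ($J<0$ FM) used throughout this paper:

```python
def j_spin_projected(e_hs, e_bs, s2_hs, s2_bs):
    """Spin-projected exchange coupling of a spin dimer with
    H = J S1.S2 (J > 0 AFM) from the high-spin (HS) and broken-
    symmetry (BS) determinant energies and their <S^2> values."""
    return 2 * (e_hs - e_bs) / (s2_hs - s2_bs)

# Placeholder energies in meV (not computed values); ideal <S^2> for
# two S = 1/2 centers: 2.0 (HS triplet) and 1.0 (BS determinant)
e_hs, e_bs = 0.0, -4.4
print(f"ideal <S^2>:        J = {j_spin_projected(e_hs, e_bs, 2.0, 1.0):.2f} meV")
# A ~10% spin contamination of the BS state visibly shifts J:
print(f"contaminated <S^2>: J = {j_spin_projected(e_hs, e_bs, 2.0, 1.1):.2f} meV")
```

This illustrates why the $\langle S^2\rangle$ deviations mentioned in the footnote above shift the resulting exchange couplings.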
Despite substantially different treatment of many-body effects in DFT+$U$ and hybrid functionals,
the resulting exchange integrals of all three CuX$_2$ compounds are quite similar
(Fig.~\ref{F-b3lyp}). Thus, the B3LYP calculations confirm the LSDA+$U$ results, justify the
choice of the free parameters in the latter approach and demonstrate that the unusual
FM $J_1$ coupling of CuCl$_2$\ and CuBr$_2$\ is not an artifact of a certain method. Despite
the fact that B3LYP was originally constructed to reproduce thermochemical
data for small molecules, it provides meaningful results for strongly
correlated systems such as CuX$_2$, in line with the earlier
studies.\cite{ruiz97,munoz2002,b3lyp_QC1} Moreover, the calculated exchange
integrals are robust with respect to $\alpha$, varying only weakly within the
studied range of this parameter.
\begin{figure*}[tbp]
\includegraphics[width=17.1cm]{figure11}
\caption{\label{F-b3lyp}(Color online) The exchange integral $J_1$ of
the CuX$_2$ (X\,=\,F, Cl, Br)\ compounds as a function of the bridging angle. The calculations are
done for a dimer model with LSDA+$U$ and $U_d=7\pm 1$~eV (grey area),
and with the B3LYP functional ($\alpha=0.15-0.25$).}
\end{figure*}
\section{Discussion and Summary}
\label{summ}
Our study of magnetostructural correlations in the CuX$_2$ halides reveals the
crucial role of the ligand in magnetic exchange. Its effect is two-fold: First,
the larger size of Cl and Br is responsible for the enhanced NNN coupling $J_2$
that is assisted by the sizable overlap of ligand $p$ orbitals along the
Cu--X--X--Cu pathway. Second, the covalent nature of the Cu--Cl and Cu--Br
bonds underlies the large ligand contribution to the magnetic orbitals and,
consequently, the strong FM nearest-neighbor (NN) coupling $J_1$ in the broad range of bridging
angles which could be ascribed to Hund's exchange on the ligand site. The tendency
of covalent Cu$^{2+}$ halides to exhibit FM exchange along the
Cu--X--Cu pathways can be illustrated but also challenged by several experimental
observations. It should be emphasized that ferromagnetic NN coupling requires not
only sizeable ferromagnetic contributions but also small transfer integrals as were
found for CuCl$_2$\ and CuBr$_2$. Otherwise, the AFM contributions will outweigh the FM
terms even for covalent compounds.
Experimental data for Cu$^{2+}$ chlorides and bromides indeed show the robust FM NN coupling for
the bridging angles below $90^{\circ}$. While the $\theta<90^{\circ}$ regime is
not typical for the ionic oxides and fluorides, it is abundant in covalent
systems and observed, e.g., in Cu-based FM spin
chains.\cite{FM_wl,*devries1987} The FM nature of the NN coupling at
$\theta\!=\!90\!-\!95^{\circ}$ is evidenced by CuCl$_2$ and CuBr$_2$
themselves.\cite{banks2009,FHC_CuCl2_DFT_chiT_simul_TMRG,zhao2012,cubr2_2012} However,
larger $\theta$ values are less common and require geometries other than the
edge-sharing CuX$_2$ chains considered in the present study.
Angles of $\theta>95^{\circ}$ are found only in edge-sharing dimers and
corner-sharing chains. Moreover, the respective experimental situation is
rather inconsistent.
rather incoherent. In (CuBr)LaNb$_2$O$_7$ and (CuCl)LaTa$_2$O$_7$, the
corner-sharing geometry with $\theta>100^{\circ}$ indeed leads to the FM
exchange, although with a tendency towards AFM exchange at $\theta\geq
108-109^{\circ}$ (Refs.~\onlinecite{cubr-nb,cucl-ta}). By contrast, the
Cu$_2$Cl$_6$ dimers may reveal the AFM exchange even at $\theta\simeq
95.5^{\circ}$, as in LiCuCl$_3\cdot 2$H$_2$O (Ref.~\onlinecite{abrahams1963})
or TlCuCl$_3$ and KCuCl$_3$ (Ref.~\onlinecite{shiramura1997}), where the latter
exhibit transfer integrals 3.5 times larger than that in CuCl$_2$. On the
other hand, similar Cu$_2$Cl$_6$ dimers with the same bridging angle of
$\theta\simeq 95.5^{\circ}$ in the spin-ladder compound IPA-CuCl$_3$ feature the
sizable FM intradimer coupling.\cite{masuda2006}
These experimental examples show that the bridging angle $\theta$
may not be the single geometrical parameter determining the Cu--X--Cu
superexchange. Details of the atomic arrangement are important even for
Cu$^{2+}$ oxides,\cite{ruiz97,*ruiz97_2} whereas in more covalent systems
this effect is likely exaggerated because interactions involve specific
orbitals, so that each bond determines the orientation of other bonds around
the same atom.
We have pointed out that such magnetostructural correlations, essential for understanding the
magnetic behavior and for the search for new materials, can conveniently be
investigated with cluster models. In particular, for intricate crystal structures, clusters
enable studying the effect of each structural parameter separately, whereas in periodic models
only a set of parameters can be modified at once.
On a more general side, our results identify the Cu--X--Cu pathways as the
leading mechanism of the short-range exchange in Cu$^{2+}$ halides. The fact
that the magnetostructural correlations weakly depend on the procedure of
varying $\theta$ (Fig.~\ref{J_ang_o}) entails the minor role of direct Cu--Cu
interactions, because the coupling always evolves in a similar fashion, no
matter whether the Cu--Cu distance is fixed or varied. Therefore, the nature of
the ligand is of crucial importance, and affects the Cu--X--Cu hopping along with the
FM contribution, presumably related to the Hund's coupling on the ligand
site.\cite{beta4} In ionic systems, the nearest-neighbor
hopping increases with the bridging angle and dominates over the small FM contributions, thus
leading to the conventional GKA behavior. However, the GKA behavior may be strongly altered in covalent
compounds, as shown by our study and previously argued in model studies on the effect of side
groups and distortions.\cite{gk,leb2,ruiz97}
From the computational perspective,
magnetic modeling of chlorides and bromides is generally challenging.
Although these compounds are still deep in the insulating regime, far from the
Mott transition ($t_i\ll U_{\text{eff}}$, see Table~\ref{T_tJ}), the
sizable hybridization of ligand states with correlated Cu $3d$ orbitals
challenges the DFT+$U$ approach, with correlation effects restricted to the $d$ states. The microscopic
evaluation of magnetic couplings in Cu$^{2+}$ chlorides and bromides indeed
leads to large uncertainties.\cite{cucl-ta,Cs2CuX4_DFT} Hybrid functionals, on the other hand, tend to overestimate magnetic exchange couplings\cite{munoz2002} and provide a working, but empirical solution to the problem of strongly correlated electronic systems. This calls for the development and application of alternative
techniques, as for instance \textit{ab
initio} quantum-chemical calculations, appropriately accounting for strong
electron correlations. Since the wavefunction-based quantum-chemical calculations are
presently restricted to finite systems, they require the construction of
appropriate clusters. This task has been successfully accomplished in our work.
We have demonstrated that relatively small clusters with a low number of
correlated centers are capable of reproducing the results obtained for periodic
systems, and provide adequate estimates of the magnetic exchange even for the
long-range Cu--X--X--Cu interactions.
In summary, we have studied magnetostructural correlations in the family of
CuX$_2$ halides with X = F, Cl, and Br. Our results show substantial
differences between the ionic CuF$_2$ and largely covalent CuCl$_2$ and
CuBr$_2$. The fluoride compound behaves similarly to Cu$^{2+}$ oxides, and shows
weak FM exchange at the bridging angles close to $\theta=90^{\circ}$ along
with the AFM exchange at $\theta\geq 100^{\circ}$. Going from F to Cl and Br
leads to two major changes: i) the larger size of the ligand
amplifies the AFM next-nearest-neighbor coupling $J_2$; ii) the increased
covalency of the Cu--X bonds results in the strong mixing between the Cu $3d$
and ligand $p$ states, and enhances the FM contribution to the short-range nearest-neighbor
coupling $J_1$.
We have constructed cluster models which, first, supplied an excellent description
of the local properties of the solids. Second, they turned out to be a highly valuable
tool for investigating magnetostructural correlations; e.g., they could be instrumental
in the microscopic analysis of the covalent Cu$^{2+}$ chlorides and bromides, whose
interesting magnetism is still barely explored.
Finally, they appear to be a viable approach to parameter-free quantum-chemical
calculations for strongly correlated solids.
\section{Acknowledgements}
We acknowledge valuable discussions with O. K. Anderson and P. Blaha. S. L.
acknowledges the funding from the Austrian Fonds zur F\"orderung
der wissenschaftlichen Forschung (FWF). A.T. was partly supported by the
Mobilitas grant of the ESF.
\bibliographystyle{apsrev4-1}
|
2109.08822
|
\section{Supplemental Material}
\section{Appendix}
\subsection{Constraints on ULMBs coupled to B and L}
\begin{figure}[H]
\centering \includegraphics[width=\columnwidth]{BlimitsSimple.png}
\caption{Plot equivalent to Fig. \ref{fig:B-Llimits} for B-coupled dark matter ULMBs.}
\label{fig:XYZlimits}
\end{figure}
We assumed above that the ULMBs are coupled to B-L. Our constraints on ULMBs coupled to B are shown in Fig. \ref{fig:XYZlimits}. Because $\Delta_{\rm B} \ll \Delta_{\rm B-L}$, our constraints on ULMBs coupled to L are similar to those shown in Fig. \ref{fig:B-Llimits}. In a narrow band of masses around our most sensitive mass $m_{\rm DM}=8\times 10^{-18}$ eV/$c^2$ there is some improvement on the E\"ot-Wash EP B limits. MICROSCOPE's result constrains this parameter space.\\
\newpage
\subsection{Analysis systematics}
We performed a variety of tests to verify the accuracy of our analysis. First, we predicted the angle $\theta$ in response to a torque signal along $\hat{\bm{X}}$ and added it to the angle data. We ran the entire analysis chain to obtain a resulting amplitude. As shown in Fig. \ref{fig:FullInjection}, it was consistent with the injected torque amplitude. Fig. \ref{fig:XYZInjection} shows the response to an injected signal near the resonant frequency of the balance, where the frequency-dependent correction outlined in Equation \eqref{torque_cor} is largest. The most prominent peaks are consistent with the injected signal and demonstrate that the direction of a ULMB signal can be determined. The less prominent peaks occur because of covariance, for example, between $K_{\rm Xp}(t)e^{i\omega_{\rm DM}t}$ and $K_{\rm Yp}(t)e^{i(\omega_{\rm DM}\pm 2\omega_{\oplus})t}$ and $K_{\rm Zp}(t)e^{i(\omega_{\rm DM}\pm\omega_{\oplus})t}$, where $\omega_{\oplus}=7.3\times 10^{-5}$ rad/s is the sidereal frequency. Since $K_{\rm Zp}(t)$ is approximately constant, the less prominent peaks are spaced by $\omega_{\oplus}$. Previous results with a rotating torsion balance \cite{SPIN} mitigated this by taking data with multiple orientations of the dipole. Fig. \ref{fig:CovarDaily} shows that the covariance between the science and the daily instrumental basis functions, which could have introduced errors, is negligible.
\begin{figure}[H]
\centering \includegraphics[width=\columnwidth]{InjectionFull.png}
\caption{A plot of X fit amplitudes resulting from an injected X signal with $g_{\rm B-L}(\hbar c)^{-1/2} = 10^{-23}$ (red line). The results are consistent with the expected amplitude indicating that the instrumental drift parameters did not appreciably bias our result.}
\label{fig:FullInjection}
\end{figure}
\begin{figure*}[!h]
\centering \includegraphics[width=\textwidth]{Injections.png}
\caption{$a_{\rm X}$, $a_{\rm Y}$, and $a_{\rm Z}$ amplitudes resulting from injected X, Y, and Z signals with $g_{\rm B-L}(\hbar c)^{-1/2} = 10^{-24}$ and mass $8 \times 10^{-18}$ eV/$c^2$ ($f_{\mathrm{DM}}$ near the resonant frequency 1.934 mHz), where the frequency-dependent correction of Equation \eqref{torque_cor} is largest. Although X, Y, and Z signals produced peaks in all three amplitudes, the most prominent was always associated with the injected signal. The agreement between the input signal and the output peak amplitudes indicates no bias from the frequency-dependent correction or instrumental parameters.}
\label{fig:XYZInjection}
\end{figure*}
\begin{figure*}[!h]
\centering \includegraphics[width=\textwidth]{NormCovarianceX.png}
\caption{Pearson correlation coefficients $\rho_{i,j}=\mathrm{cov}(i,j)/(\sigma(i)\sigma(j))$ for the X quadrature basis functions and the 12 hour (Left) and 24 hour (Right) instrumental basis functions summed over all 3,334,034 data points at each mass. The plot for Y quadrature amplitudes is essentially the same. Z is less interesting because it has no sidereal modulation. The yellow region shows the masses included in the analysis. Starting at $0.4\times 10^{-18}$ eV/$c^2$ the correlation is small enough to obtain accurate fits (at worst $\pm$5\%), as evidenced by the injection tests (Figs. \ref{fig:FullInjection} and \ref{fig:XYZInjection}).}
\label{fig:CovarDaily}
\end{figure*}
\section{Introduction}
Despite an abundance of indirect astrophysical evidence for the existence of cold dark matter \cite{be:10}, the nature of this phenomenon remains a mystery. Many direct detection searches have assumed that the dark matter consists of supersymmetry-inspired fermions called WIMPs. However, despite considerable effort, no evidence has been found for supersymmetry \cite{supersymmetry} nor for WIMPs \cite{wimps}. Focus has now shifted to dark matter consisting of ultra low-mass bosons (ULMBs). It has been shown that scalar, pseudovector or vector bosons with a wide range of masses between 10$^{-22}$ and 100~eV/$c^2$ are viable dark matter candidates \cite{Hui}.
For dark matter masses $m_{\rm DM} \le 1$~eV/$c^2$, the dark matter number density is large enough to make the dark matter behave as a classical field. Dark matter particles bound in our galaxy must be highly non-relativistic with a local velocity distribution peaked at $v_{\rm DM} \approx 10^{-3}c$ \cite{Bovy_2012}. The field would oscillate at a frequency
\begin{equation}
f_{\rm DM}=\frac{m_{\rm DM} c^2}{h} [1+(v_{\rm DM}/c)^2/2],\label{eq:1}
\end{equation}
where $c$ is the speed of light and $h$ is Planck's constant. This corresponds to a central Compton frequency $f_{\rm DM}=m_{\rm DM} c^2/h$ with a fractional spread $\delta f_{\rm DM}/f_{\rm DM}=(v_{\rm DM}/c)^2/2 \approx 10^{-6}$. This field should be coherent for times less than the phase coherence time
\begin{equation}
t_{\rm c}=1/\delta f_{\rm DM}~,\label{eq:2}
\end{equation}
{\em i.e.} about $10^6$ oscillation periods.
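As a quick numerical check (an illustrative sketch, not the authors' code), Equations \eqref{eq:1} and \eqref{eq:2} can be evaluated at the search's most sensitive mass; the physical constants and the assumed $v_{\rm DM}/c = 10^{-3}$ are standard values.

```python
import math

H = 6.62607015e-34    # Planck's constant, J s
EV = 1.602176634e-19  # J per eV
V_OVER_C = 1e-3       # assumed galactic virial velocity, v_DM / c

def compton_frequency(m_dm_ev):
    """Central oscillation frequency f_DM = m_DM c^2 / h, in Hz (Eq. 1)."""
    return m_dm_ev * EV / H

def coherence_time(m_dm_ev):
    """Phase coherence time t_c = 1 / delta_f, with delta_f / f = (v/c)^2 / 2 (Eq. 2)."""
    return 2.0 / (compton_frequency(m_dm_ev) * V_OVER_C**2)

m = 8e-18                        # eV/c^2, the search's most sensitive mass
f = compton_frequency(m)         # ~1.93 mHz, near the balance resonance
periods = coherence_time(m) * f  # ~2e6 coherent oscillation periods
```

For this mass the Compton frequency lands almost exactly on the 1.934 mHz resonance quoted later in the text.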
Many searches for these dark-matter waves have been recently reported. Ultra-stable clocks \cite{dilaton, atomicclocksdilaton} and gravitational-wave detectors \cite{GWdilaton} constrain the couplings of scalar dilaton ULMBs with masses between $10^{-21}$ and $10^{-5}$ eV/$c^2$. A wide variety of experimental approaches constrain the couplings of pseudo-scalar axion or axion-like ULMBs. Spin-precession experiments \cite{nEDM, SPIN, Smorra2019} constrain the coupling of pseudoscalar axion-like ULMBs with masses between $10^{-23}$ and $10^{-18}$ eV/$c^2$. Axion-to-photon conversion in ultra-sensitive electromagnetic cavities immersed in strong magnetic fields \cite{Sikivie} are searching for the Peccei-Quinn axion \cite{peccei} in the mass range $10^{-6}$ eV/$c^2$ $\lesssim m_{DM}\lesssim 10^{-3}$ eV/$c^2$ \cite{HAYSTAC2021, CAST}. The ADMX collaboration \cite{ADMX2021} has definitively excluded such dark-matter axions with masses between 2.7 $\mu$eV/c$^2$ and 4.2 $\mu$eV/c$^2$.
Fewer searches for vector ULMBs have been reported, and these focus on dark photons \cite{ADMXhiddenphoton, superQubit}.
Here we present limits on vector ULMBs coupled to B-L (B and L are baryon number and lepton number, respectively); this coupling is particularly interesting because B-L is conserved in many unified theories. A coherent vector boson wave couples to laboratory objects like a time-varying "B-L electric" field $\tilde{\bm{E}}$ interacting with "B-L charges" $\tilde{q}$ on the objects with a coupling constant $\tilde{g}$.
If the dark matter consists predominantly of such bosons, $\tilde{E} \approx \sqrt{\rho_{DM}\hbar c} = 7.7\times 10^{3}$ eV/m \cite{gr:13} where $\rho_{DM} \approx 0.3$ GeV/cm$^3$ is the local dark-matter density. The direction of $\tilde{\bm{E}}$ is expected to be random but steady for times less than $t_{\rm c}$ \cite{gr:13,nelson}. The force on a "charge" $\tilde{q}$ is
\begin{equation}
\bm{F}_{DM} = \tilde{q}\tilde{g}(\hbar c)^{-1/2} \tilde{\bm{E}}.
\end{equation}
The B-L "charge" of a test-body consisting of electrically neutral atoms is $(q_{\rm B-L}/\mu)(m_T/u) = (N/\mu)(m_T/u)$, where $N$ is the number of neutrons, $m_T$ is the test-body mass, and $\mu$ is atomic mass in atomic mass units (u). We detect these forces using a torsion pendulum containing a "charge" dipole. The differential acceleration (force/mass) of the 2 "charges",
\begin{equation}
\Delta a_{B-L} = g_{B-L} \sqrt{\rho_{DM}}\, \Delta_{B-L}/u~,
\end{equation}
applies a torque on the pendulum that is our ULMB signal
($\Delta_{B-L} = (N_1/\mu_1)-(N_2/\mu_2)$ is the difference in the charge-to-mass ratios of the two atomic species in the dipole).
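For the Be--Al dipole used below, the charge-to-mass difference can be reproduced directly. This sketch uses standard isotope data for $^9$Be and $^{27}$Al (an assumption on my part, not values taken from the paper):

```python
def b_minus_l_charge_per_u(neutrons, atomic_mass_u):
    """B-L 'charge' per atomic mass unit of a neutral atom: N / mu."""
    return neutrons / atomic_mass_u

# Standard isotope data (assumed): 9-Be has N = 5, mu = 9.0122 u;
# 27-Al has N = 14, mu = 26.9815 u.
delta_b_minus_l = (b_minus_l_charge_per_u(5, 9.0122)
                   - b_minus_l_charge_per_u(14, 26.9815))
```

This reproduces the value $\Delta_{\rm B-L} = 0.0359$ quoted in the Experimental Setup section.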
\begin{figure}[h!]
\centering \includegraphics[width=0.5\textwidth]{PaperDiagramNew.png}
\caption{Perspective and top views of our apparatus. The insert shows the 8 test-mass pendulum \cite{CQG2012} (red and blue indicate the composition dipole of four Be and four Al test masses) that hangs within a mu-metal shield in the vacuum chamber. A motor-driven roller chain is on the left side of the torsion balance. A cord attached to the roller chain moves the two 486 g brass cylinders in opposite directions, effectively rotating a quadrupole field with respect to the balance. The object protruding on the right is the autocollimator that measures the pendulum twist. The N and W vectors indicate the orientation of the instrument and define the lab coordinate system: $\hat{\bm{x}}$ points North, $\hat{\bm{y}}$ points West, and $\hat{\bm{z}}$ is local vertical.}
\label{Apparatus}
\end{figure}
\section{Experimental Setup}
We searched for the frequency-dependent signatures of B-L coupled ULMBs using a stationary torsion-balance with a pendulum consisting of a Be-Al composition dipole for which $\Delta_{B-L} = 0.0359$. Fig. \ref{Apparatus} shows an overview of our apparatus. The pendulum was suspended by a fused-silica fiber manufactured in our lab. The observed resonant frequency $f_0 = (2\pi)^{-1}\sqrt{\kappa/I} = 1.934$ mHz and the rotational inertia $I = 3.78\times 10^{-5}$ kg~m$^2$ (obtained from a detailed model of the pendulum) yielded a torsion constant of $\kappa = 5.58$ nNm/rad. We observed a quality factor $Q =$ 460,000 corresponding to a thermal noise \cite{saulson},
\begin{equation}
\tau(f) = \sqrt{\frac{2k_B T\kappa}{\pi Q f}},
\end{equation}
which has a value at 1 mHz of $1.8\times 10^{-16}$ Nm/$\sqrt{\rm Hz}$. The corresponding value of our tungsten fiber ($\kappa = 2.4$ nJ/rad and $Q = 6700$ \citep{CQG2012}) is $9.6\times 10^{-16}$ Nm/$\sqrt{\rm Hz}$. Furthermore, the low-frequency drift and thermal-susceptibility of silica is smaller by at least an order of magnitude (Fig.~\ref{fig:fiber}). The pendulum sits within a vacuum chamber and is surrounded by a mu-metal shield. We measured the pendulum angle, $\theta$, using a multi-slit autocollimator \cite{autocollimator}.
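The quoted numbers can be cross-checked with a short sketch (not the authors' code): the torsion constant follows from $f_0 = (2\pi)^{-1}\sqrt{\kappa/I}$, and the thermal noise from the expression above. Room temperature $T = 293$ K is an assumption, as the operating temperature is not stated in the text.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 293.0           # assumed room temperature, K (not stated in the text)
I = 3.78e-5         # rotational inertia, kg m^2
F0 = 1.934e-3       # observed resonant frequency, Hz

# Torsion constant from f0 = (2 pi)^{-1} sqrt(kappa / I)
kappa_silica = I * (2 * math.pi * F0) ** 2   # ~5.58e-9 N m / rad

def thermal_noise(f, kappa, q, temp=T):
    """Thermal torque noise amplitude spectral density, N m / sqrt(Hz)."""
    return math.sqrt(2 * K_B * temp * kappa / (math.pi * q * f))

silica = thermal_noise(1e-3, kappa_silica, 460_000)  # ~1.8e-16 N m/sqrt(Hz)
tungsten = thermal_noise(1e-3, 2.4e-9, 6_700)        # ~9.6e-16 N m/sqrt(Hz)
```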
We kept $\theta$ within the autocollimator's linear range using a novel gravitational damper, consisting of an adjustable mass quadrupole that applied a controlled torque on the pendulum. The performance of the damper is displayed in Fig. \ref{fig:EPBUDMTimeSeries}.
\begin{figure}[h!]
\centering \includegraphics[width=\columnwidth]{driftPlot.png}
\caption{Comparison of fused-silica (orange) and tungsten (blue) fibers. The lower drifts in fused silica ($<1$~$\mu$rad/day) are much smaller than tungsten ($>10$~$\mu$rad/day). The rapid oscillations are the resonant free oscillations of the torsion oscillator; these are driven by thermal effects and environmental vibrations. The higher Q of fused silica is obvious.}
\label{fig:fiber}
\end{figure}
\section{Data Analysis}
Our data set consisted of 91.4 days of angle measurements taken at a sampling interval $\Delta = 1.00212$ s over a span of $S = 114$ days. We searched for signals from dark matter with 491,019 different possible masses between $0.4\times 10^{-18}$ and $206.8 \times 10^{-18}$ eV/$c^2$ ($f_{\rm DM}$ from 0.1 mHz to 50 mHz) in steps of $4 \times 10^{-22}$ eV/$c^2$ ($1/S = 101.6$ nHz). At each assumed value of $m_{\rm DM}$ we obtained the corresponding torque on the pendulum
\begin{align}
\tau(t) =& I\ddot{\theta}(t)+\kappa\left(1+\frac{i}{Q}\right)\theta(t) \nonumber \\
\approx& I\ddot{\theta}(t)+\frac{\kappa}{2\pi f Q}\dot{\theta}(t)+\kappa\theta(t).
\label{torque_eqn}
\end{align}
The first derivative term gives the exact response for a single-frequency signal. Hence, we used numerical derivatives \cite{derivatives} to estimate the torque as
\begin{align}
\tau_j =& \frac{I}{\Delta^2}\left(-\frac{\theta_{j-2}}{12}+\frac{4\theta_{j-1}}{3}-\frac{5\theta_j}{2}+\frac{4\theta_{j+1}}{3}-\frac{\theta_{j+2}}{12}\right) \nonumber \\
&+\frac{\kappa}{2\pi f_{DM} Q\Delta}\left(\frac{\theta_{j-2}}{12}-\frac{2\theta_{j-1}}{3}+\frac{2\theta_{j+1}}{3}-\frac{\theta_{j+2}}{12}\right) \nonumber \\
&+\kappa\theta_j \label{torque_cor},
\end{align}
where $\theta_j$ and $\tau_j$ are the angle and torque at the j$^{\mathrm{th}}$ measurement.
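The estimator of Equation \eqref{torque_cor} is straightforward to implement. The sketch below is illustrative only (interior samples, no FIR filtering); both finite-difference stencils are exact for low-order polynomials, which provides a simple correctness check.

```python
import math

def torque_estimate(theta, j, dt, inertia, kappa, q, f_dm):
    """Five-point finite-difference torque estimate at interior sample j.

    Implements the second-derivative (inertia), first-derivative (damping),
    and spring terms of the discretized torque expression."""
    d2 = (-theta[j-2]/12 + 4*theta[j-1]/3 - 5*theta[j]/2
          + 4*theta[j+1]/3 - theta[j+2]/12) / dt**2
    d1 = (theta[j-2]/12 - 2*theta[j-1]/3
          + 2*theta[j+1]/3 - theta[j+2]/12) / dt
    return inertia*d2 + kappa/(2*math.pi*f_dm*q)*d1 + kappa*theta[j]
```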
\begin{figure}[h!]
\centering \includegraphics[width=\columnwidth]{run0160ShowOff.png} \includegraphics[width=\columnwidth]{run0160GravitationalDamper.png} \includegraphics[width=\columnwidth]{run0160TorqueEarthquake.png} \caption{\textbf{Top:} Two weeks of data from our ULMB campaign. Red bars indicate times when the gravitational damper was active, while yellow bars indicate seismic events. These data were excluded. The gravitational damper allowed us to maintain a small free-oscillation amplitude. \textbf{Middle:} Gravitational damper performance on day 11 in top plot. The damper efficiently removed residual oscillation. \textbf{Bottom:} Example of filtered torque data dropped to exclude an earthquake (on day 1 where there is a noticeable spike in the angle timeseries).} \label{fig:EPBUDMTimeSeries}
\end{figure}
We applied a lowpass FIR filter \cite{hamming} to avoid high-frequency spectral leakage dominated by autocollimator noise. We divided the torque data into sidereal-day length cuts and removed data points when the gravitational damper was active or during noticeable glitches in the data. We adjusted the starting time of each cut so that fewer than 10\% of the points were removed and no damping event occurred during the cut. These data quality measures, which gave us a total of 40 cuts, ensured acceptable covariance between the basis functions (see Appendix). From here the analysis was similar to that introduced in Ref.~\cite{SPIN} and summarized below.
The dark matter torque on our dipole is $\bm{\tau}_{\rm DM}=\Delta q_{tot} g_{\rm B-L}(\hbar c)^{-1/2}(\bm{d}\times\tilde{\bm{E}}_{lab})/2$ where $\Delta q_{tot}$ is the difference of the total B-L charge on the dipole and $\bm{d} = 3.77\,(\cos{\gamma_d}\,\hat{\bm{x}}-\sin{\gamma_d}\,\hat{\bm{y}})$ cm is the vector from one element of the composition dipole to the other with azimuthal angle $\gamma_d=(-80.7\pm 5)^{\circ}$. But our instrument is only sensitive to the vertical component, so that
\begin{equation}
\tau_{\rm DM} = \frac{m_d g_{\rm B-L}\Delta_{B-L}}{2u\sqrt{\hbar c}}\bm{p}\cdot\tilde{\bm{E}}_{lab}\ .
\end{equation}
Here $m_d=19.4$ g is the mass of four of our test bodies and $\bm{p}=\hat{\bm{z}}\times\bm{d}$.
We constrained ULMB waves polarized along arbitrary directions in geocentric celestial coordinates $\hat{\bm{X}}$, $\hat{\bm{Y}}$, $\hat{\bm{Z}}$ (the origin is the center of the Earth, $\hat{\bm{X}}$ points towards the sun at the vernal equinox, $\hat{\bm{Z}}$ points North and passes through the center of the Earth, and $\hat{\bm{Y}} = \hat{\bm{Z}}\times\hat{\bm{X}}$). The anticipated signals are coherent for $\approx 10^6$ oscillations (see Equation \eqref{eq:2}), which for the highest fitted frequency of $50$ mHz corresponds to 231 days, about twice the span of our data.
Following the procedure introduced in Ref.~\citep{SPIN} we generated 6 science basis functions that evaluated $\bm{p}\cdot\tilde{\bm{E}}_{\rm lab}$ for unit-strength fields along the $\hat{\bm{X}}$, $\hat{\bm{Y}}$, and $\hat{\bm{Z}}$ directions ($K_{\rm Xp}$, $K_{\rm Yp}$, and $K_{\rm Zp}$) with each having a cosine and sine phase. We fit our torque data with
\begin{align}
\tau_j =& K_{\mathrm{Xp}}(t_j)[a_{\mathrm{Xcos}}\cos{(\omega_{\rm DM}t_j)}+a_{\mathrm{Xsin}}\sin{(\omega_{\rm DM}t_j)}] \nonumber \\
&+ \left(\mathrm{similar\, terms\, for\, Y\, and\, Z}\right) \nonumber \\
&+ \left(\mathrm{instrumental\, parameters}\right) \label{torque_basis}
\end{align}
where $\omega_{\rm DM} = m_{\rm DM}c^2/\hbar$. Note that this expression is perfectly general as any phase is a simple linear combination of sine and cosine amplitudes. We computed the components of $\hat{\bm{X}}$, $\hat{\bm{Y}}$, and $\hat{\bm{Z}}$ parallel to $\hat{\bm{p}}$ using the \texttt{AstroPy} library \cite{astropy} to find the altitude and azimuth ($\alpha$ and $\gamma$) of these unit vectors in the lab frame. For example, the projection of $\hat{\bm{X}}$ onto $\hat{\bm{p}}$ is
\begin{equation}
K_{\mathrm{Xp}}(t_j) = -\cos\alpha_{\mathrm{X}}^j\sin(\gamma_{\mathrm{X}}^j-\gamma_{\mathrm{d}})\ .
\end{equation}
Here $\alpha_{\mathrm{X}}^j$ and $\gamma_{\mathrm{X}}^j$ are the altitude and azimuth of $\hat{X}$ for the j$^{\rm th}$ measurement. We included 6 additional instrumental parameters to account for the behavior of the equilibrium angle: 2 for offset and linear drift ($a_0$ and $a_1$) and 4 for daily variations with 24 and 12 hour periods ($a_{\rm cos24}$, $a_{\rm sin24}$, $a_{\rm cos12}$, and $a_{\rm sin12}$). These allowed for daily temperature and tilt variations. The analysis in the Appendix shows that for Compton frequencies of interest (greater than 10x the sidereal frequency) there is negligible covariance between the science and instrumental basis functions.
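As a toy illustration of the quadrature fit (not the authors' pipeline: here the basis function $K_{\rm Xp}$ is taken constant and the cosine/sine bases treated as orthogonal, whereas the actual analysis performs a simultaneous linear least-squares fit over all science and instrumental parameters):

```python
import math

def fit_quadratures(tau, times, omega):
    """Project torque samples onto cos/sin at angular frequency omega.

    Valid when the bases are (nearly) orthogonal over the data span."""
    n = len(tau)
    a_cos = 2.0/n * sum(t * math.cos(omega*x) for t, x in zip(tau, times))
    a_sin = 2.0/n * sum(t * math.sin(omega*x) for t, x in zip(tau, times))
    return a_cos, a_sin

# Inject a known signal at the resonant frequency and recover it.
omega = 2 * math.pi * 1.934e-3                 # rad/s
times = [i * 1.00212 for i in range(86164)]    # about one day of samples
tau = [3e-18*math.cos(omega*x) + 1e-18*math.sin(omega*x) for x in times]
a_cos, a_sin = fit_quadratures(tau, times, omega)   # ~(3e-18, 1e-18)
```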
\begin{figure}[h!]
\centering \includegraphics[width=\columnwidth]{HistogramUncertaintyEstimationX.png} \includegraphics[width=\columnwidth]{Xrayleigh.png}
\caption{\textbf{Upper 2 panels}: histograms of $\overline{a_{\mathrm{Xcos}}}/\delta a_{\mathrm{Xcos}}$ and $\overline{a_{\mathrm{Xsin}}}/\delta a_{\mathrm{Xsin}}$ for all
491,019 masses. Results are characterized by zero-mean Gaussians with $\sigma = 1.025(1)$ and $1.022(1)$ (shown by the red curves). Similar results are observed for Y and Z. \textbf{Lower panel}: histogram of corresponding $a_{\mathrm{X}}/\delta a_{\mathrm{X}}$ values. As expected these follow a Rayleigh distribution with scale parameter $\sigma = 1.023(1)$ (shown by red curve). The results for Y and Z had scale parameters $1.023(1)$ and $1.021(1)$, respectively.}
\label{fig:GaussRayleighStats}
\end{figure}
For every assumed mass, we extracted the science parameters from linear least-squares fits of the 40 cuts. For each parameter we used the average and standard deviation of the 40 values as the amplitude and uncertainty ($\overline{a_{\mathrm{Xcos}}}\pm\delta a_{\mathrm{Xcos}}$, etc.). Fig. \ref{fig:GaussRayleighStats} shows that these are consistent with zero-mean Gaussians with identical widths. We then marginalized over the uninteresting phases of the $\tilde{\bm{E}}$ fields by combining the quadrature amplitudes to obtain total amplitudes, $a_{\mathrm{X}} = \sqrt{\abs{\overline{a_{\mathrm{Xcos}}}}^2 + \abs{\overline{a_{\mathrm{Xsin}}}}^2}$, etc., and total uncertainties, $\delta a_{\rm X} = (\delta a_{\mathrm{Xcos}} + \delta a_{\mathrm{Xsin}})/2$, etc. As expected these are consistent with Rayleigh distributions.
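The Rayleigh behavior of the combined amplitudes is easy to reproduce with a toy Monte Carlo (a pure-Python sketch of the signal-free case, not the analysis code): two zero-mean Gaussian quadratures of equal width $\sigma$ combine to a total amplitude whose 95th percentile is $\sigma\sqrt{-2\ln 0.05}\approx 2.45\,\sigma$.

```python
import math, random

random.seed(1)  # deterministic toy draw
SIGMA = 1.0
totals = sorted(math.hypot(random.gauss(0, SIGMA), random.gauss(0, SIGMA))
                for _ in range(200_000))

q95_mc = totals[int(0.95 * len(totals))]        # empirical 95th percentile
q95 = SIGMA * math.sqrt(-2.0 * math.log(0.05))  # analytic Rayleigh quantile
```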
\begin{figure}[h!]
\centering \includegraphics[width=\columnwidth]{B-LlimitsSimple.png}
\caption{95\% confidence upper limits on B-L coupled dark matter ULMBs. The exclusion regions are indicated in yellow. The black lines, which consist of 491,019 95\% confidence upper limits evenly spaced in frequency, show the limits on the amplitudes of fields along X, Y, and Z. (For reference, $g_{\rm B-L} /\sqrt{(\hbar c)} = 10^{-25}$ corresponds to a torque $9.73\times 10^{-19}$ N m). The horizontal blue lines are the upper limit from the E\"ot-Wash EP test Ref.~\cite{CQG2012}. The green lines are the upper limits from the initial MICROSCOPE result \cite{MICROSCOPE, MICROSCOPE2}.}
\label{fig:B-Llimits}
\end{figure}
\section{Results}
We calculated confidence limits, shown in Fig. \ref{fig:B-Llimits}, on waves propagating along $\hat{\bm{X}}$, $\hat{\bm{Y}}$, and $\hat{\bm{Z}}$ separately using the Rice distribution -- the generalization of the conventional mean$+2\sigma$ 95\% C.L. upper limit on normally distributed results to the Rayleigh-distributed total amplitudes appropriate for our signal search (computed with the percent-point function of the Rice distribution in \texttt{SciPy} \cite{scipy}). Note that below the free resonance of the torsion oscillator, the limits scale as $1/\sqrt{f}$ (behavior characteristic of, but not limited to, internal damping). Above the resonance, where the oscillator response drops as $1/f^2$, the limits scale as $f$ (evidence that, in our frequency range, the autocollimator noise amplitude is proportional to $1/f$). Because our results were taken for spans $S<t_{\rm c}$, they do not necessarily reflect the long-term average strengths of $\tilde{\bm{E}}$, which would be revealed by data taken with spans $S>t_{\rm c}$. Corrections for this effect have been discussed for scalar bosons \cite{scalarstochastic, scalargradstatistics, scalarwavedetection} and axions \cite{axionstatistics}. However, to our knowledge, corrections for the $\tilde{\bm{E}}$ of a vector boson are not available (note that chaotic light is not a sufficient analog, as photons have only 2 polarization states while vector ULMBs have 3). Furthermore, Ref.~\cite{scalarstochastic} assumes an unrealistic isotropic, isothermal, fully-virialized halo model. Recent analyses of Gaia-2 data \cite{DMhurricane, Fattahi, FIRElight} suggest that 10\%-50\% of the dark matter in our galaxy is in unequilibrated debris streams from recent mergers with satellite galaxies. As a result, we cite an experimental result that can easily be corrected later.
A viable dark matter candidate cannot consist of ULMBs with masses and couplings already excluded by conventional EP tests, which provide strong constraints on the couplings of ULMBs over a large range of masses $m_{\rm DM} < \hbar c/R_{\rm Earth}=3.1\times 10^{-14}$ eV/$c^2$. If the dark matter predominantly consists of B-L coupled ULMBs, the constraints set by this work improve on MICROSCOPE's initial EP limits \cite{MICROSCOPE, MICROSCOPE2} in the mass range between $2 \times 10^{-18}$ eV/$c^2$ and $20 \times 10^{-18}$ eV/$c^2$, and improve on E\"ot-Wash EP limits \cite{CQG2012} in a slightly wider band. Conventional EP tests provide strong constraints on dark matter candidates, but as pointed out by Ref.~\cite{gr:13}, the signal in those tests is $\propto g_{\rm B-L}^2$, whereas in this search the signal is linear in $g_{\rm B-L}$. Hence, an $N$-fold improvement in the sensitivity of this search would require an $N^2$-fold improvement in conventional EP results to achieve an equal improvement in the constraints.
\section{Conclusion}
We described a torsion-balance search for a dark matter signal from B-L coupled ULMB candidates. In future experiments we expect improved results using a longer and thinner fused-silica torsion fiber and a larger B-L dipole moment. Upgrades to the autocollimator and a better seismic environment will eliminate the need for a gravitational damper and allow for longer cuts, which would improve on the uncertainties of the fit parameters. Active tilt feedback could address the need to account for daily drifts allowing us to extend constraints to lower masses.
We hope to improve our sensitivity by a factor of five or more.
We thank W.\,A.~Terrano for encouraging us to do this experiment. J.\,G.~Lee helped with our analysis methods. We thank Peter Graham and Will Terrano for clarifying conversations. This work was supported in part by National Science Foundation Grants PHY-1912514, PHY-2011520, and PHY-2012350.
\bibliographystyle{unsrtnat}
|
2008.09631
|
\section{Introduction}
The virtual braid group is obtained from the classical braid group by adding generators, called virtual crossings, and relations. It is easy to see that two braid words which contain only classical generators and represent the same element of the classical braid group also represent the same element of the virtual braid group on the same number of strands. The more complicated question is to show the converse: that two braid words over classical generators which represent the same element of the virtual braid group also represent the same element of the classical braid group. This claim can be proved through the representation of braids in the automorphism group of free groups, as first noted by Kamada in \cite{Kam2004}, where he relies on the surjection from the virtual braid group to the titular braid-permutation group of \cite{FRR1997}. That technique, while beautiful, is far from elementary. This paper proposes a solution which gives an explicit map, the Gaussian projection, from a sequence of virtual moves between classical braid diagrams to a sequence of classical moves between classical braid diagrams. \\
This paper is structured as follows: Section \ref{groups} defines the classical and virtual braid groups, Section \ref{alex} defines almost classical braids and gives the conditions for almost classical braids to be classical braids, Section \ref{parity} constructs the Gaussian projection, and shows that it maps virtual braid diagrams to almost classical braid diagrams, and Section \ref{mainthm} provides a proof that classical braid groups inject in virtual braid groups.
\section{Virtual braid groups and diagrams} \label{groups}
This section contains a review of the definitions of the classical and virtual braid groups. In order to use the tool of Gaussian projection, explored in the next section, it is important to understand braids as diagrams in the plane, a translation to this system from the algebraic interpretation is provided below.
\subsection{Classical braid groups}
Let $n\in \mathbb{N}$. The \emph{classical braid group} on $n$ strands, denoted $B_n$, is the group of words over the elements $\sigma_1, \sigma_2, \ldots , \sigma_{n-1}$ and their formal inverses, subject to the relations:
$$ \mathbf{(U1)} \ \ \forall \ i \textrm{ and } j, \textrm{ if } \ |i-j|>1 \textrm{ then } \sigma_i\sigma_j=\sigma_j\sigma_i, $$
$$ \mathbf{(U2)} \ \ \sigma_i \sigma_i^{-1}= \sigma_i^{-1} \sigma_i=1,$$
$$ \mathbf{(U3)} \ \ \sigma_i\sigma_{i-1}\sigma_i= \sigma_{i-1}\sigma_i\sigma_{i-1},$$
where $i$ and $j$ are in $\{1, 2, \ldots, n-1\}$. Astute readers will notice that U2 follows from the definition of $B_n$ as a group, and its inclusion as a relation is solely for the purpose of introducing the notation.
An element of $B_n$ is called a \emph{braid}, and a finite presentation of this element is called a \emph{braid word}. Geometrically, braid words can be seen as diagrams by assigning to each generating element a crossing as done in Figure \ref{classcross} and stacking them from the bottom to the top as the word is read from left to right.
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{classcross.eps_tex}
\caption{The realization of the classical braid group generators as braid diagrams. } \label{classcross}
\end{figure}
The relations on braid words are translated into braid diagram moves by realizing the equivalent words as diagrams. Modifying a braid diagram by replacing a part of it by an equivalent local depiction is called a \emph{classical braid move}; in this paper these moves are classified as being of type U2 or U3. See Figure \ref{braidmoves} for examples of a move of type U2 and a move of type U3. Moves of type U1 are ignored, as they can be considered the result of isotopies of the region in which the braid diagram is drawn. In addition to the move given by the U3 relation, all moves obtained from it by mirror symmetry of the diagrams (followed by switching the orientation of the strands to be upwards if needed) are also said to be of that same type.
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{braidmoves.eps_tex}
\caption{Two of the classical braid moves.} \label{braidmoves}
\end{figure}
\subsection{Virtual braid groups}
The \emph{virtual braid group} on $n$ strands, denoted $vB_n$, is a generalization of the classical braid group first mentioned in \cite{Kau1999} as a topic for further research. The group consists of the words over the elements $\sigma_1, \sigma_2, \ldots , \sigma_{n-1}, \tau_1, \tau_2, \ldots , \tau_{n-1}$, subject to the relations U1, U2, and U3 above along with the following:
$$\mathbf{(V1)} \ \ \forall \ i \textrm{ and } j, \textrm{ if } \ |i-j|>1 \textrm{ then } \tau_i\tau_j=\tau_j\tau_i,$$
$$\mathbf{(V2)} \ \ \tau_i \tau_i=1,$$
$$\mathbf{(V3)} \ \ \tau_i\tau_{i-1}\tau_i= \tau_{i-1}\tau_i\tau_{i-1},$$
$$\mathbf{(V4)}\ \ \tau_i\tau_{i-1}\sigma_i= \sigma_{i-1}\tau_i\tau_{i-1},$$
$$\mathbf{(V5)} \ \ \forall \ j, \ |i-j|>1,\tau_i\sigma_j=\sigma_j\tau_i,$$
where $i$ and $j$ are in $\{1, 2, \ldots, n-1\}$.
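As a consistency check (an illustrative sketch, not part of the paper), the relations above can be verified in the symmetric-group quotient: sending both $\sigma_i$ and $\tau_i$ to the transposition swapping strands $i$ and $i+1$ respects the relations, although this representation forgets all over/under-crossing information and is therefore far from faithful.

```python
def transposition(i, n):
    """Permutation of {0, ..., n-1} swapping i and i+1, as a tuple."""
    p = list(range(n))
    p[i], p[i+1] = p[i+1], p[i]
    return tuple(p)

def compose(p, q):
    """Composition of permutations: (p o q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

n = 4
s = [transposition(i, n) for i in range(n - 1)]  # images of sigma_i and tau_i
identity = tuple(range(n))
```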
An element of $vB_n$ is called a \emph{virtual braid}, and like the classical braids, it can be represented by diagrams, by adding the representation of the virtual crossings in Figure \ref{virtcross}.
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{virtcross.eps_tex}
\caption{The realization of the virtual generators as braid diagrams.} \label{virtcross}
\end{figure}
The relations on virtual braid words which involve virtual crossings (V1, V2, V3, V4, and V5) can be generalized and unified into an equivalence of virtual braid diagrams called the \emph{virtual detour move} in \cite{Kau1999}. These moves are all invisible to the classical crossings of the diagram and, for reasons that are beyond the scope of this paper, the classical crossings encode all of the information of a virtual braid. Therefore, the impact of virtual detour moves on the constructions in the sections below is trivial, and the denomination ``virtual detour move'' will be used to shorten arguments. For reference, the moves in Figure \ref{detour} are the virtual detour moves corresponding to the relations V2 and V4, respectively.
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{detour.eps_tex}
\caption{Two of the moves on braid diagram involving virtual crossings. } \label{detour}
\end{figure}
\subsection{Inclusion maps}
It is clear that, for each $n=2, 3, \ldots$, there is an inclusion map $v$ from the set of classical braid diagrams on $n$ strands to the set of virtual braid diagrams on $n$ strands, defined by sending each generator to the generator of the same name.
This paper aims to show, in Section \ref{mainthm}, that the maps $\bar v: B_n\to vB_n$, induced by $v$, are injective for all $n\in \mathbb{N}$.
\section{Alexander numberings} \label{alex}
For a classical oriented link diagram, the \emph{Alexander numbering} of the complement of the diagram is a signed measure of how far a component is from the region at infinity. The concept has been adapted to other settings by first pushing the numbering from the complement of the diagram onto the diagram itself, which allowed it to be extended to virtual oriented link diagrams. This section defines the virtual braid diagrams which admit Alexander numberings to be \emph{almost classical} and proves some of their properties.
\subsection{General definition}
Let $\beta$ be a virtual or classical braid diagram. The \emph{strands} of $\beta$ are the oriented curves running from the bottom to the top of the braid. The boundary points of these curves are called the \emph{endpoints} of the braid. The \emph{arcs} of $\beta$ are the connected components of the strands with all the classical crossings removed. Each arc connects either two classical crossings, two endpoints, or an endpoint and a classical crossing, and may contain virtual crossings.
An \emph{integer numbering} of a classical braid diagram is an assignment of an integer to each arc of the diagram such that it satisfies, at each crossing, the relation depicted in Figure \ref{crossnum}. If, moreover, $\lambda=\mu-1$ at each crossing of the diagram, the numbering is called an \emph{Alexander numbering}. It is sometimes convenient to view the local condition of the Alexander numbering as saying that the arcs are numbered such that the operation seen in Figure \ref{smooth}, called the \emph{oriented smoothing}, always merges arcs with equal numbers.
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{crossnum.eps_tex}
\caption{Numbering around a crossing.} \label{crossnum}
\end{figure}
\begin{prop} \label{classicalnum}
Any classical braid diagram admits an Alexander numbering with the boundary conditions that the first arc of the $i$th strand be numbered $i$, and that the $i$th strand from the left at the top of the braid have its last arc also numbered $i$. (See Figure \ref{boundnum} for a diagram with the boundary conditions on the bottom of the strands.)
\end{prop}
\begin{proof}
Let $\beta$ be a classical braid diagram. Its first crossing from the bottom is some $\sigma_\lambda^{\pm 1}$, and with the assigned boundary condition, its arcs are numbered as seen on the appropriate crossing of Figure \ref{smooth}. Smoothing this crossing yields a diagram with one fewer crossing which again satisfies the boundary condition on the bottom, so by induction on the number of crossings, the whole diagram is numberable.
\end{proof}
In fact, this argument also shows that if an integer numbering satisfies the local Alexander numbering conditions together with the boundary condition on only one end of the braid (either the top or the bottom), then the boundary condition on the other end holds automatically.
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{boundnum.eps_tex}
\caption{Boundary conditions on the Alexander numbering for a braid diagram. } \label{boundnum}
\end{figure}
\subsection{Numbering virtual braid diagrams}
It is now possible to define an integer numbering of a virtual braid diagram: an assignment of an integer to each arc of the diagram, where each arc is delimited by classical crossings and by endpoints, and the numbering increases or decreases at each classical crossing as seen in Figure \ref{crossnum}. Again, an integer numbering is called an Alexander numbering if and only if it satisfies the boundary conditions in Figure \ref{boundnum} and the local Alexander numbering condition as seen in Figure \ref{smooth} for each classical crossing.
It is easy to find virtual braid words for which the corresponding braid diagram has no Alexander numbering. One such example is seen in Figure \ref{example1}, which depicts a diagram for the word $\tau_1\sigma_1\tau_1\sigma_1\in vB_2$. The first classical crossing in the braid satisfies the Alexander numbering condition if and only if $i=j-1$, while the second classical crossing requires that $j+1=i-2$. It is impossible for both of those equations to hold simultaneously. This gap between virtual braid diagrams and virtual braid diagrams which admit Alexander numberings justifies distinguishing between integer and Alexander numberings in the previous subsection.
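The propagation of the forced integer numbering can be made explicit on braid words. The sketch below is our own encoding; the specific $\pm 1$ and parity conventions are assumptions standing in for Figures \ref{crossnum} and \ref{smooth}, chosen so that classical diagrams are always Alexander numberable:

```python
def integer_numbering(n, word):
    """Propagate the bottom boundary numbering (position i carries number i)
    through a braid word read bottom to top.  word is a list of pairs
    (g, k): g == 's' for a classical crossing sigma_k^(+-1), g == 't' for
    a virtual crossing tau_k.
    Assumed convention (in place of the paper's figures): at a classical
    crossing the strand moving right gains 1 and the strand moving left
    loses 1; the crossing satisfies the Alexander condition iff the left
    incoming number is one less than the right incoming number.
    Returns the numbers at the top positions and the list of conditions
    (True iff satisfied) at the classical crossings."""
    nums = list(range(1, n + 1))
    parities = []
    for g, k in word:
        a, b = nums[k - 1], nums[k]
        if g == 't':
            # virtual crossing: numbers travel with the strands, unchanged
            nums[k - 1], nums[k] = b, a
        else:
            parities.append(a == b - 1)
            nums[k - 1], nums[k] = b - 1, a + 1  # numbers change by +-1
    return nums, parities

# A classical word: every crossing satisfies the condition and the top
# boundary condition holds, so the numbering is an Alexander numbering.
top, ok = integer_numbering(3, [('s', 1), ('s', 2), ('s', 1)])
assert top == [1, 2, 3] and all(ok)

# The word of Figure \ref{example1}: under this convention both classical
# crossings violate the condition, so no Alexander numbering exists.
top, ok = integer_numbering(2, [('t', 1), ('s', 1), ('t', 1), ('s', 1)])
assert not any(ok)
```

Under the paper's actual figure conventions the specific linear conditions differ, but the conclusion is the same: the two classical crossings of $\tau_1\sigma_1\tau_1\sigma_1$ impose incompatible conditions.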
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{example1.eps_tex}
\caption{A braid which cannot admit an Alexander numbering. } \label{example1}
\end{figure}
Since all classical braid diagrams have Alexander numberings, we say the following:
\begin{definition}
A virtual braid diagram is \emph{almost classical} if it can be given an integer numbering that satisfies the restriction of Figure \ref{crossnum} with $\lambda=\mu-1$ at each classical crossing, together with the boundary condition of Figure \ref{boundnum}.
A virtual braid is almost classical if it is representable by an almost classical virtual braid diagram.
\end{definition}
\subsection{Almost classical braids}
It is inconvenient that the concatenation of two almost classical virtual braid diagrams may itself not be almost classical. For example, the diagram representing the virtual braid word $\tau_1\sigma_1\in vB_2$ decomposes into two almost classical braid diagrams, with one (respectively virtual and classical) crossing each. A solution would be to require the stronger boundary conditions, where the numbers of the arcs at the top of the braid are also consecutively increasing integers, starting with $1$ on the left. This solution would make those particularly well-numbered virtual braids into a subgroup of the virtual braid group, and they would satisfy the following lemma:
\begin{lemma} \label{acisu}
Let $\beta$ be a virtual braid diagram on $n$ strands. If $\beta$ is almost classical and the top endpoints of $\beta$ are numbered $1$ through $n$ consecutively from left to right, then all the virtual crossings of $\beta$ can be removed by a finite sequence of virtual detour moves.
\end{lemma}
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{smooth.eps_tex}
\caption{The Alexander numbering of crossings and their smoothing.} \label{smooth}
\end{figure}
Lemma \ref{acisu} can be restated to say that almost classical virtual braids with a numbering that satisfies the boundary condition on both the top and the bottom are in the image of $\bar v: B_n \to vB_n$. This justifies retaining the laxer boundary condition, despite the inconvenience that almost classical braids on $n$ strands do not form a subgroup of $vB_n$.
\begin{proof}
If $\beta$ has no classical crossings, the only restriction on the Alexander numbering comes from the endpoints of each strand. The definition dictates that the $i$th strand of the diagram be numbered $i$ at the top and the bottom, for each $i=1,2, \ldots, n$. Since $\beta$ contains no classical crossings, each strand consists of a single arc, and the numbering shows that the underlying permutation is trivial. Therefore, such a $\beta$ is equivalent to the identity braid.
Now, assume that the lemma holds for any braid diagram with $k$ classical crossings for some natural number $k$. Let $\beta$ be an almost classical braid diagram with $k+1$ classical crossings. Then, the diagram $\beta_0$ obtained by smoothing any of the classical crossings of $\beta$ as in Figure \ref{smooth} is also almost classical and has $k$ classical crossings. By our assumption, $\beta_0$ is related to a classical braid diagram by a finite sequence of virtual detour moves, and by the definition of those moves, the same sequence takes $\beta$ to a classical braid diagram.
\end{proof}
\section{Parity projection} \label{parity}
This section defines a map from virtual braids to almost classical braids, which will be used to prove the main theorem of this paper.
\subsection{Gaussian parity}
In general, a virtual braid diagram is not Alexander numberable. Its arcs can still be numbered by using the rule that the $i$th strand starts with number $i$, and that the numbering changes by $+1$ or $-1$ at each classical crossing as depicted in Figure \ref{crossnum}. The result is again called an integer numbering of the diagram. For some crossings, the numbering will still respect the Alexander numbering rule that the incoming and outgoing arcs on each side of the crossing bear the same number; in general, it will not. Following the convention established in \cite{IMN2013}, we say that a crossing in a braid is \emph{even} if and only if the integer numbering of the arcs respects the Alexander condition around that crossing. Otherwise, the crossing is called \emph{odd}. The property of a crossing being odd or even is its \emph{parity}.
For example, consider the two crossings in Figure \ref{proofrm2}, considered to be portions of the same, larger braid diagram. Both crossings are even whenever $\lambda= \mu-1$, and both are odd otherwise. Moreover, observe that the integer numbering of the rest of the diagram is unchanged by the local move.
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{proofrm2.eps_tex}
\caption{The Alexander numbering around a canceling pair of classical crossings in a braid diagram. } \label{proofrm2}
\end{figure}
In Figure \ref{proofrm3}, the crossings on the right and the left version can be identified by the numbering of the strands at the bottom of the picture. The $\gamma/\lambda$ crossing is even if and only if $\gamma= \lambda-1$ (on the left picture, the condition is equivalent, and reads $\gamma+1= \lambda$). Similarly, the parity of the $\gamma/\mu$ and the $\lambda/\mu$ crossings is unchanged by the move, and by looking at the labels at the top of the picture, it still holds that the integer numbering of the rest of the picture is unchanged by the local move. The more interesting property that can be computed in this situation is that if any two of the crossings pictured are even, so is the third one. That is, if $\gamma= \lambda-1$ and $\gamma+ 1= \mu-1$, then the $\lambda/\mu$ crossing is also even. Similarly for the other two pairs of crossings.
\begin{figure}[htbp]
\centering
\def \svgwidth{\textwidth} \input{proofrm3.eps_tex}
\caption{The Alexander numbering around three crossings before and after a move on a braid diagram. } \label{proofrm3}
\end{figure}
\subsection{Projection map}
Define the \emph{Gaussian projection} of a virtual braid diagram $\beta$ to be $\phi(\beta)$, the braid diagram obtained by turning all the odd crossings of $\beta$ into virtual crossings, computing the parity of the remaining classical crossings with the new arcs, turning any new odd crossings virtual, and repeating this process until the resulting braid diagram is almost classical. The process terminates, since the number of classical crossings strictly decreases at each step.
\begin{lemma} \label{lem2}
Let $\beta$ and $\beta'$ be equivalent virtual braid diagrams. Then, $\phi(\beta)$ and $\phi(\beta')$ are equivalent almost classical braid diagrams.
\end{lemma}
\begin{proof}
First note that the assertion that $\phi(\beta)$ and $\phi(\beta')$ are almost classical braid diagrams follows directly from the definition of the map $\phi$.
Now, if $\beta'$ is obtained from $\beta$ by a virtual detour move, there is a one-to-one correspondence between the classical crossings of each diagram, and the parity of the crossings is not impacted by virtual detour moves. Thus, $\phi(\beta')$ is obtained from $\phi(\beta)$ by a virtual detour move.
Now, assume that $\beta$ and $\beta'$ differ by only one classical braid move. Consider first a move of type U2, where $\beta$ is the diagram with more crossings. Those two crossings have the same parity, so they are either both preserved by $\phi$, or both made virtual by $\phi$. In either case, the diagram $\phi(\beta')$ can be obtained from $\phi(\beta)$. An example of this situation is shown in Figure \ref{mainfig}.
Now, if $\beta'$ is obtained by applying a move of type U3 to $\beta$, there are three possible situations: the crossings involved in the move can be all odd, all even, or exactly one of the three can be even. Again, the numbers of odd and even crossings in $\beta$ and $\beta'$ are equal, and, should exactly one of them be even, it is the crossing between the corresponding strands in each diagram. Then, $\phi(\beta')$ can be obtained from $\phi(\beta)$ by applying either a V3, a U3, or a virtual detour move. See Figure \ref{mainfig} for an example.
It suffices to apply the arguments above to each move in a sequence to obtain the general statement.\end{proof}
In other terms, Lemma \ref{lem2} states that the map of braid diagrams, $\phi$, factors to a map of virtual braids, $\bar \phi: vB_n \to vB_n$, and that a braid is almost classical if and only if it is in the image of $\bar \phi$. Notice that almost classical braids are a proper subset of $vB_n$ for each $n$, but that they do not form subgroups.
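The projection $\phi$ can also be computed directly on braid words. The following self-contained sketch is our own encoding, and the $\pm 1$ and parity conventions are assumptions standing in for the paper's figures; it turns odd classical crossings virtual and iterates until only even crossings remain:

```python
def crossing_parities(n, word):
    """Parity (True = even) of each classical crossing ('s', k) in a braid
    word read bottom to top, with bottom position i numbered i.  Assumed
    convention: numbers travel unchanged through virtual crossings
    ('t', k); at a classical crossing the strand moving right gains 1,
    the one moving left loses 1, and the crossing is even iff the left
    incoming number is one less than the right incoming number."""
    nums = list(range(1, n + 1))
    parities = []
    for g, k in word:
        a, b = nums[k - 1], nums[k]
        if g == 't':
            nums[k - 1], nums[k] = b, a
        else:
            parities.append(a == b - 1)
            nums[k - 1], nums[k] = b - 1, a + 1
    return parities

def gaussian_projection(n, word):
    """Turn every odd classical crossing virtual, recompute parities with
    the new arcs, and repeat until all remaining classical crossings are
    even.  The loop terminates: the number of classical letters strictly
    decreases whenever anything changes."""
    w = list(word)
    while True:
        par = iter(crossing_parities(n, w))
        new = [('t', k) if g == 's' and not next(par) else (g, k)
               for g, k in w]
        if new == w:
            return w
        w = new

# A classical word is fixed by the projection ...
assert gaussian_projection(3, [('s', 1), ('s', 2)]) == [('s', 1), ('s', 2)]
# ... while tau_1 sigma_1 tau_1 sigma_1 projects to the all-virtual word,
# an almost classical diagram with no classical crossings at all.
assert gaussian_projection(2, [('t', 1), ('s', 1), ('t', 1), ('s', 1)]) \
       == [('t', 1)] * 4
```

This word-level computation ignores crossing signs, which is harmless here since, under the assumed convention, parity does not depend on the sign of a classical crossing.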
\begin{figure}[htbp] \centering
\centering
\def \svgwidth{\textwidth} \input{mainfig.eps_tex}
\caption{The result of the Gaussian projection map on some braid moves.} \label{mainfig}
\end{figure}
\section{Main theorem} \label{mainthm}
\begin{thm} For each $n=2, 3, \ldots$, the map $\bar v: B_n \to vB_n$ is injective.
\end{thm}
\begin{proof}
Fix $n$ to be an integer greater than 1, and let $\beta$ and $\beta'$ be classical braid diagrams representing the same element in $vB_n$. To prove that $\bar v$ is injective, it suffices to show that $\beta$ and $\beta'$ represent the same element in the classical braid group $B_n$.
Let $\{\beta=\beta_0, \beta_1, \ldots , \beta_k=\beta'\}$ be a sequence of virtual braid diagrams such that, for each $i=1,\ldots, k$, $\beta_{i-1}$ and $\beta_i$ differ by exactly one classical braid move and possibly a virtual detour move (defined in the discussion of Figure \ref{detour}). Since $\beta$ and $\beta'$ are classical, all of their crossings are even, so $\phi(\beta_0)=\beta$ and $\phi(\beta_k)=\beta'$. Then, the sequence $\{\phi(\beta_0)= \beta, \phi(\beta_1), \ldots , \phi(\beta_k)=\beta'\}$ consists of almost classical braid diagrams, and applying Lemma \ref{lem2}, $\phi(\beta_i)$ is obtained from $\phi(\beta_{i-1})$ by virtual detour moves and at most one classical braid move.
Since $\beta$ is classical, the numbering at the top of $\phi(\beta_1)$ is consecutively increasing from 1. Then, by Lemma \ref{acisu}, the virtual crossings of $\phi(\beta_1)$ can be removed by virtual detour moves to obtain a classical braid diagram. This argument can be applied to each projected diagram in turn, yielding an explicit sequence of classical braid diagram moves relating $\beta$ and $\beta'$.
\end{proof}
A part of the proof technique in action is illustrated by a rather artificial example in Figure \ref{example}. Say that the diagram on the left, corresponding to $\tau_1\sigma_2\sigma_2^{-1}\tau_1\in vB_3$, appears at some point in a sequence of virtual braid diagrams. Then, the projection maps it to the almost classical diagram of the trivial braid in $vB_3$ seen on the right of the figure.
\begin{figure}[htbp] \centering
\centering
\def \svgwidth{\textwidth} \input{example.eps_tex}
\caption{Example of a trivial braid projecting to a trivial almost classical braid.} \label{example}
\end{figure}
\newpage
\section{Introduction}
Higher-form global symmetries \cite{Gaiotto:2014kfa} of theories play an important role in characterizing refined properties, such as the spectrum of line- and higher-dimensional defect operators. In the simplest instance they correspond to the center symmetries of Yang-Mills theories, under which the Wilson lines are charged.
In higher dimensions, in particular 5d and 6d, much recent progress has been made in uncovering properties of superconformal field theories (SCFTs) and related theories, such as little string theories (LSTs). SCFTs in 5d and 6d are intrinsically strongly coupled, and have an IR description in terms of an effective theory on the Coulomb branch and tensor branch, respectively. One of the questions that we will address in this paper is how to determine the higher-form symmetries of the quantum theories from the effective description. The key subtlety here is the existence of instanton particles or strings, which can be charged under the one-form symmetry, and can thereby break the symmetry.
This will be complemented with the analysis in geometry, using either the description in terms of collapsible surfaces or a description in terms of generalized toric diagrams (i.e., 5-brane webs). Much progress has been made on mapping out the theories in 6d, including a putative classification of SCFTs \cite{Heckman:2013pva, Heckman:2015bfa, Bhardwaj:2015xxa, Bhardwaj:2019hhd} and LSTs \cite{Bhardwaj:2015oru, Bhardwaj:2019hhd} from F-theory on elliptic Calabi-Yau three-folds -- for a review of the 6d classification, see \cite{Heckman:2018jxk}. In 5d recent progress has been made in mapping out and furthering the classification of SCFTs using the M-theory realization on canonical singularities \cite{Hayashi:2014kca, DelZotto:2017pti, Jefferson:2017ahm, Closset:2018bjz, Jefferson:2018irk, Apruzzi:2018nre, Bhardwaj:2018yhy, Bhardwaj:2018vuu, Apruzzi:2019vpe, Apruzzi:2019opn, Apruzzi:2019enx, Bhardwaj:2019jtr, Apruzzi:2019kgb, Bhardwaj:2019fzv, Bhardwaj:2019xeg, Eckhard:2020jyr, Bhardwaj:2020kim, Closset:2020scj}.
Higher-form symmetries in 6d and 5d SCFTs are highly constrained by the superconformal algebra.
As is shown in \cite{Cordova:2016emh} (and related upcoming work by the same authors), there cannot be any continuous 1-form symmetry in such theories. Indeed, we will see that the 1-form symmetries of 5d and 6d SCFTs are discrete.
From a geometric engineering point of view, higher-form symmetries were discussed using the M-theory realization of 5d SCFTs on Calabi-Yau threefolds, as well as other M-theory geometric engineering constructions such as $G_2$-holonomy compactifications to 4d, in \cite{Morrison:2020ool, Albertini:2020mdx, Eckhard:2020jyr}. Related works in Type IIB, for 4d SCFTs, in particular Argyres-Douglas theories, appeared in \cite{Garcia-Etxebarria:2019cnb, Closset:2020scj, DelZotto:2020esg}. In 6d the defect group was analyzed in \cite{DelZotto:2015isa} and the 1-form symmetries in 6d SCFTs were discussed from a geometric construction in \cite{Morrison:2020ool}. The global form of the flavor symmetry in 6d was discussed in \cite{Dierigl:2020myk}, using the torsional part of the Mordell-Weil group of elliptic fibrations in F-theory.
In this paper the main new insight is two-fold: we determine how to compute the higher-form symmetry from the effective description in the IR, taking into account non-perturbative instanton effects. We observe that in many cases these non-perturbative effects are correlated with the existence of half-hypers in the theory, i.e. if the half-hypers are completed into full hypers, the non-perturbative effects disappear. The other aspect of this paper is the generalization to arbitrary 6d and 5d theories. This includes 6d SCFTs, LSTs, and the frozen phases of F-theory \cite{Tachikawa:2015wka, Bhardwaj:2018jgp}.
6d theories are closely connected with 5d theories by circle-reduction, with potentially added holonomies (in flavor symmetries), or twists. We track the higher-form symmetries through this dimensional reduction and match them with the ones computed in 5d. This provides another confirmation for the approach we propose, and confirms the geometric analysis.
In 5d a complementary approach uses the 5-brane webs, which engineer a class of 5d SCFTs. These are dual to so-called generalized toric polygons (or dot diagrams) \cite{Benini:2009gi}. We provide a prescription generalizing the analysis for toric models for computing the 1-form symmetry for generalized toric polygons, and underpin this with a discussion of the Wilson lines in the 5-brane web.
The plan of the paper is as follows: in section \ref{6} we discuss the 6d case, starting with the 2-form symmetry in 6d SCFTs and LSTs, followed by their 1-form symmetry. In section \ref{5} the 5d theories are discussed, both in terms of their relation to 6d theories, and the analysis on the Coulomb branch. We furthermore provide an analysis of the 5d theories that have a description as brane-webs, or generalized toric diagrams.
\section{Higher-form symmetries of $6d$ SCFTs and LSTs}\label{6}
This section is devoted to the study of higher-form symmetries in supersymmetric $6d$ theories. There are two known kinds of UV complete theories in six dimensions which do not include dynamical gravity. The first are supersymmetric conformal field theories (SCFTs), and the second are supersymmetric little string theories (LSTs).
We would like to argue that it is sufficient for us to focus on a class of $6d$ theories\footnote{From this point onward, a ``$6d$ theory'' will refer to either a $6d$ SCFT or a $6d$ LST.} which admit only two different kinds of higher-form symmetry groups, namely a discrete 1-form symmetry group $\mathcal{O}$ and a discrete 2-form symmetry group $\mathcal{T}$. One can obtain theories outside this class by performing various kinds of discrete gaugings. For example, one can gauge a subgroup $\mathcal{O}'$ of the 1-form symmetry $\mathcal{O}$ to obtain a $6d$ theory with a discrete 3-form symmetry group. One can also stack the $6d$ theory with an SPT phase carrying 1-form symmetry $\mathcal{O}'$ before gauging the diagonal $\mathcal{O}'$ symmetry, thus producing more $6d$ theories which have 3-form symmetries. It might also be possible to obtain $6d$ theories carrying 4-form symmetry by gauging discrete subgroups of the 0-form symmetry group of the above special class of $6d$ theories, possibly in combination with an SPT phase. At the time of writing of this paper, there is no known $6d$ theory that cannot be produced as a discrete gauging of the above class of $6d$ theories. For any such discrete gauging, the spectrum of higher-form symmetries (along with possible higher-group structures) and their anomalies can be deduced from the knowledge of the spectrum of higher-form symmetries and anomalies of the above special class of $6d$ theories.
Moreover, at the time of writing of this paper, all the known $6d$ theories in the above class admit a geometric construction in F-theory\footnote{These constructions can be divided into two types. The first kind of constructions are referred to as being in the ``unfrozen phase'' of F-theory and do not involve O7$^+$ planes. The second type of constructions are referred to as being in the ``frozen phase'' of F-theory \cite{Bhardwaj:2018jgp,Tachikawa:2015wka} and involve O7$^+$ planes. See \cite{Heckman:2015bfa,Bhardwaj:2015oru} for the classification of theories of the first type and \cite{Bhardwaj:2019hhd} for the classification of theories of the second type.}. In this paper, we thus focus only on the above set of ``F-theoretic'' $6d$ theories and provide methods to determine their 1-form and 2-form symmetry groups.
Our analysis will involve passing on to a generic point on the tensor branch of vacua\footnote{Note that every known F-theoretic $6d$ theory admits a tensor branch of vacua.} of the $6d$ theory. We will assume that the full higher-form symmetry of the $6d$ theory is visible at such a point on the tensor branch, if we also take into account massive BPS excitations in the theory on the tensor branch. We will be presenting our analysis in field-theory terms without referring to the technicalities of F-theory construction. An advantage of this approach is that it allows us to treat the $6d$ theories arising from both the unfrozen and the frozen phases of F-theory on an equal footing.
At a generic point on the tensor branch, an F-theoretic $6d$ SCFT or LST flows to a $6d$ $\mathcal{N}=(1,0)$ gauge theory (carrying a semi-simple gauge algebra) along with a set of free tensor multiplets\footnote{For an SCFT all these tensor multiplets are dynamical, while for an LST one of the tensor multiplets is a non-dynamical background tensor multiplet.} in the IR. Moreover, the theory on the tensor branch carries massive BPS string excitations in one-to-one correspondence with a special basis for these tensor multiplets. These strings are charged under the 2-form gauge fields living in the tensor multiplets. Their charges are captured by a symmetric positive semi-definite integer matrix $\Omega^{ij}$ (which is the matrix participating in the Green-Schwarz mechanism of gauge anomaly cancellation) with non-positive off-diagonal entries, where $i$ labels different elements in the above-mentioned special basis for the tensor multiplets. This matrix $\Omega^{ij}$ is positive definite for a $6d$ SCFT, and it is a positive semi-definite matrix of corank 1 for an irreducible\footnote{We call an LST irreducible if it cannot be written as a stack product of other LSTs.} LST. The rank of $\Omega^{ij}$ will be denoted by $r$ in what follows, and it is also known as the rank of the $6d$ SCFT or LST to which $\Omega^{ij}$ is associated.
A subset of the above mentioned BPS strings arise as the BPS instanton strings for the simple factors in the low-energy gauge algebra. Thus, each simple factor of the gauge algebra is associated to some $i$ and we refer to the corresponding simple factor of gauge algebra as $\mathfrak{g}_i$.
We can thus denote a $6d$ SCFT or LST by displaying the above discussed data in a graphical notation of the following form:
\begin{equation}\label{TBD}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$\Omega^{ii}$};
\node (v4) at (-0.5,1) {$\mathfrak{g}_i$};
\begin{scope}[shift={(2.8,0.05)}]
\node at (-0.5,0.9) {$\mathfrak{g}_j$};
\node (v2) at (-0.5,0.45) {$\Omega^{jj}$};
\end{scope}
\node (v3) at (0.9,0.5) {\tiny{$-\Omega^{ij}$}};
\draw (v1)--(v3);
\draw (v2)--(v3);
\begin{scope}[shift={(-2.3,0.05)}]
\node (v2_1) at (-0.5,0.45) {$\Omega^{kk}$};
\end{scope}
\begin{scope}[shift={(0,1.95)}]
\node (v2_2) at (-0.5,0.45) {$\Omega^{ll}$};
\node at (-0.5,0.9) {$\mathfrak{g}_l$};
\end{scope}
\draw (v2_2) edge (v4);
\draw (v2_1) edge (v1);
\end{tikzpicture} \,,
\end{equation}
where there is a node for each $i$. Each node is labeled by $\Omega^{ii}$ and the associated gauge algebra $\mathfrak{g}_i$. We leave $\mathfrak{g}_i$ empty for a node $i$ if the BPS string corresponding to that node is not an instanton string of any gauge algebra. The node labeled as $k$ in the above graph is such an example. Two nodes $i$ and $j$ are connected by an edge if the off-diagonal entry $\Omega^{ij}\neq0$. If furthermore $-\Omega^{ij}>1$, then we insert a label at the middle of the edge indicating this number $-\Omega^{ij}$. If $-\Omega^{ij}=1$, then no such label is inserted. The edge between $i$ and $l$ in the above graph is such an example. See \cite{Bhardwaj:2019fzv} for more details on this notation in the context of $6d$ SCFTs.
\subsection{2-form symmetry}\label{6T}
\subsubsection{2-form symmetry of $6d$ SCFTs}
If we forget about the BPS strings for a moment, then there is a $U(1)$ 2-form symmetry associated to each tensor multiplet $i$ under which the ``Wilson surface'' for the 2-form gauge field living within the tensor multiplet $i$ has charge 1. Thus, we obtain a potential $U(1)^r$ 2-form symmetry\footnote{It should be noted that this $U(1)^r$ 2-form symmetry is spontaneously broken along the tensor branch. This is akin to the spontaneous breaking of $U(1)^r$ electric 1-form symmetry in an abelian gauge theory \cite{Gaiotto:2014kfa}. Since the 2-form symmetry $\mathcal{T}$ of $6d$ SCFTs and LSTs embeds into this $U(1)^r$ 2-form symmetry, $\mathcal{T}$ is always spontaneously broken along the tensor branch as well. We expect $\mathcal{T}$ to be spontaneously broken at the conformal point of a $6d$ SCFT too.}. When the BPS strings are included, the 2-form symmetry is reduced to the subgroup of $U(1)^r$ under which the BPS strings are uncharged.
The 2-form symmetry in the presence of the charged strings is then found by computing the Smith normal form $T^{ij}$ of the matrix $\Omega^{ij}$, which, due to the positive definiteness of $\Omega^{ij}$, is a diagonal matrix with the diagonal entries being positive integers. Let $n_i$ be the $i$-th diagonal entry of $T^{ij}$. Then the 2-form symmetry group $\mathcal{T}$ can be written as
\begin{equation}\label{2g}
\mathcal{T}=\prod_{i=1}^r~{\mathbb Z}_{n_i}\,,
\end{equation}
i.e. a product of ${\mathbb Z}_{n_i}$ for all $i$, where ${\mathbb Z}_1$ denotes the trivial group.
The appearance of the Smith normal form is easy to understand from the point of view of Pontryagin dual of the 2-form symmetry group. Before accounting for the charged strings, the dual is the lattice ${\mathbb Z}^r$ which captures the possible charges of surface defects and dynamical strings under the 2-form gauge fields. The matrix $\Omega^{ij}$ defines a sublattice $[\Omega^{ij}]\cdot {\mathbb Z}^r$ inside the lattice ${\mathbb Z}^r$ which is spanned by vectors
\begin{equation}
v^i:=\sum _j\Omega^{ij}u_j \,,
\end{equation}
where $u_i$ is the standard basis of ${\mathbb Z}^r$. This sublattice captures the charges of the dynamical strings. The charges under $\mathcal{T}$ are then captured by the quotient lattice
\begin{equation}
\frac{{\mathbb Z}^r}{[\Omega^{ij}]\cdot {\mathbb Z}^{r}}\,,
\end{equation}
whose Pontryagin dual is $\mathcal{T}$. After changing the basis inside ${\mathbb Z}^r$ and $[\Omega^{ij}]\cdot {\mathbb Z}^r$, we can write the above quotient lattice as
\begin{equation}
\frac{{\mathbb Z}^r}{[T^{ij}]\cdot {\mathbb Z}^{r}}=\bigoplus_{i=1}^r~\frac{{\mathbb Z}}{n_i{\mathbb Z}}\,.
\end{equation}
The Pontryagin dual of each subfactor is isomorphic to itself since $n_i>0$, and hence we find that the 2-form symmetry group $\mathcal{T}$ is as shown in (\ref{2g}).
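The computation of $\mathcal{T}$ from $\Omega^{ij}$ is easy to automate. Below is a minimal hand-rolled Smith normal form routine over the integers (our own illustration, not taken from the paper); applied to the Cartan matrices of $A_2$ and $D_4$, it reproduces the expected centers ${\mathbb Z}_3$ and ${\mathbb Z}_2\times{\mathbb Z}_2$:

```python
from math import gcd

def smith_diagonal(M):
    """Diagonal entries d1 | d2 | ... of the Smith normal form of an
    integer matrix M (a list of rows), computed with elementary row and
    column operations over the integers."""
    A = [list(row) for row in M]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # choose a smallest-magnitude nonzero pivot in the submatrix
        piv = min(((i, j) for i in range(t, m) for j in range(t, n)
                   if A[i][j] != 0),
                  key=lambda p: abs(A[p[0]][p[1]]), default=None)
        if piv is None:
            break                      # remaining submatrix is zero
        i, j = piv
        A[t], A[i] = A[i], A[t]
        for r in range(m):
            A[r][t], A[r][j] = A[r][j], A[r][t]
        done = False
        while not done:                # Euclidean clearing of row/column t
            done = True
            for i in range(t + 1, m):
                q = A[i][t] // A[t][t]
                for c in range(t, n):
                    A[i][c] -= q * A[t][c]
                if A[i][t] != 0:       # nonzero remainder: swap and redo
                    A[t], A[i] = A[i], A[t]
                    done = False
            for j in range(t + 1, n):
                q = A[t][j] // A[t][t]
                for r in range(t, m):
                    A[r][j] -= q * A[r][t]
                if A[t][j] != 0:
                    for r in range(m):
                        A[r][t], A[r][j] = A[r][j], A[r][t]
                    done = False
        t += 1
    d = [A[k][k] for k in range(min(m, n))]
    for _ in d:                        # enforce d1 | d2 | ...
        for k in range(len(d) - 1):
            if d[k + 1]:
                g = gcd(d[k], d[k + 1])
                if g:
                    d[k], d[k + 1] = g, d[k] * d[k + 1] // g
    return [abs(x) for x in d]

# Cartan matrix of A_2: quotient Z_3, the center of SU(3)
assert smith_diagonal([[2, -1], [-1, 2]]) == [1, 3]
# Cartan matrix of D_4: quotient Z_2 x Z_2, the center of Spin(8)
assert smith_diagonal([[2, -1, 0, 0], [-1, 2, -1, -1],
                       [0, -1, 2, 0], [0, -1, 0, 2]]) == [1, 1, 2, 2]
```

For a singular $\Omega^{ij}$ the routine records each free ${\mathbb Z}$ factor of the quotient as a trailing zero diagonal entry, which is convenient for the corank-1 matrices appearing below.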
\subsubsection{2-form symmetry of $6d$ LSTs}
The structure of $6d$ LSTs is similar to that of $6d$ SCFTs, the crucial difference being that the matrix $\Omega^{ij}$ is only positive semi-definite for $6d$ LSTs. Naively, one might expect that the 2-form symmetry group for an LST would be captured by the quotient lattice
\begin{equation}
\frac{{\mathbb Z}^{r+1}}{[\Omega^{ij}]\cdot {\mathbb Z}^{r+1}}\,,
\end{equation}
where the total number of nodes $i$ is $r+1$ as $\Omega^{ij}$ has rank $r$ and corank 1 for an irreducible LST. The fact that the corank of $\Omega^{ij}$ is 1 implies that the above quotient lattice contains one factor of ${\mathbb Z}$ along with a torsion part. That is, the above quotient lattice takes the following form
\begin{equation}
\bigoplus_{i=1}^r~\frac{{\mathbb Z}}{n_i{\mathbb Z}}\oplus{\mathbb Z}\,.
\end{equation}
Taking its Pontryagin dual, the above naive expectation would lead us to believe that the 2-form symmetry group for an LST takes the form
\begin{equation}\label{6L}
\prod_{i=1}^r~{\mathbb Z}_{n_i}\times U(1)\,.
\end{equation}
However, we must take into account the fact that one of the tensor multiplets, out of the $r+1$ tensor multiplets associated to the nodes $i$, is non-dynamical. This tensor multiplet therefore does not give rise to a $U(1)$ 2-form symmetry, and we should remove the $U(1)$ factor from (\ref{6L}) that the naive calculation above incorrectly included. Thus, the 2-form symmetry of a little string theory is
\begin{equation}
\mathcal{T}=\prod_{i=1}^r~{\mathbb Z}_{n_i}\,.
\end{equation}
\subsubsection{Examples}
\ubf{Example 1}: Consider the case of $\mathcal{N}=(2,0)$ SCFTs. These can be described in terms of a simply laced simple Lie algebra $\mathfrak{g}$. The matrix $\Omega^{ij}$ is the Cartan matrix of $\mathfrak{g}$. Then, $\mathcal{T}$ simply coincides with the center of $\mathfrak{g}$.
Similarly, $\mathcal{N}=(2,0)$ LSTs are also described in terms of a simply laced simple Lie algebra $\mathfrak{g}$ but the associated matrix $\Omega^{ij}$ is the Cartan matrix of $\mathfrak{g}^{(1)}$, which is the untwisted affine Lie algebra associated to $\mathfrak{g}$. Again, $\mathcal{T}$ coincides with the center of $\mathfrak{g}$.
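As a sanity check of this example, the cokernels of a few Cartan matrices can be computed directly, using the same gcd-of-minors characterization of the Smith normal form as above (all names in this Python sketch are ours):

```python
from math import gcd
from itertools import combinations

def det(m):
    """Integer determinant via Laplace expansion (adequate for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def invariant_factors(m):
    """Smith normal form diagonal of an integer matrix (0 entries are Z summands)."""
    n = len(m)
    d = [1]
    for k in range(1, n + 1):
        g = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                g = gcd(g, abs(det([[m[i][j] for j in cols] for i in rows])))
        d.append(g)
    return [d[k] // d[k - 1] if d[k - 1] else 0 for k in range(1, n + 1)]

def torsion(factors):
    """Finite part of the cokernel: drop n_k = 0 (Z summands) and n_k = 1."""
    return [n for n in factors if n > 1]

# Cartan matrix of su(2) = A_1 and of its untwisted affine version A_1^(1)
A1 = [[2]]
A1_affine = [[2, -2], [-2, 2]]
# Cartan matrix of so(8) = D_4
D4 = [[2, -1, 0, 0], [-1, 2, -1, -1], [0, -1, 2, 0], [0, -1, 0, 2]]
```

The torsion parts $[2]$, $[2]$ and $[2,2]$ reproduce the centers ${\mathbb Z}_2$ of $\mathfrak{su}(2)$ and ${\mathbb Z}_2\times{\mathbb Z}_2$ of $\mathfrak{so}(8)$, with the affine case carrying an extra ${\mathbb Z}$ summand (a zero invariant factor), in line with the discussion of LSTs above.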
\vspace{8pt}
\ni\ubf{Example 2}: Consider the following $6d$ SCFT arising in the \emph{frozen} phase of F-theory
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(n)$};
\begin{scope}[shift={(2.2,0.05)}]
\node (v2) at (-0.5,0.45) {2};
\end{scope}
\node (v3) at (0.6,0.5) {\tiny{2}};
\draw (v1)--(v3);
\draw (v2)--(v3);
\node (v4) at (1.7,1) {$\mathfrak{su}(n-8)$};
\end{tikzpicture}
\end{equation}
Its associated tensor branch gauge theory contains the gauge algebra $\mathfrak{so}(n)\oplus\mathfrak{su}(n-8)$, with the matter content being a hyper in the bifundamental representation plus $n-16$ hypers in the fundamental representation of $\mathfrak{su}(n-8)$. The matrix $\Omega^{ij}$ for this theory is
\begin{equation}
\begin{pmatrix}
4&-2\\
-2&2
\end{pmatrix} \,.
\end{equation}
The Smith normal form of the above matrix is
\begin{equation}
\begin{pmatrix}
2&0\\
0&2
\end{pmatrix}\,,
\end{equation}
and thus $\mathcal{T}={\mathbb Z}_2\times{\mathbb Z}_2$.
\vspace{8pt}
\ni\ubf{Example 3}: Consider the LST
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(2n+8)$};
\begin{scope}[shift={(2.2,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\node (v3) at (0.6,0.5) {\tiny{2}};
\draw (v1)--(v3);
\draw (v2)--(v3);
\node (v4) at (1.7,1) {$\sp(n)$};
\end{tikzpicture}
\end{equation}
whose tensor branch gauge theory contains a \emph{full} hypermultiplet in the bifundamental. The 2-form symmetry group can be computed to be
\begin{equation}
\mathcal{T}={\mathbb Z}_1 \,.
\end{equation}
One can obtain a $6d$ SCFT from an LST by deleting a node. Note that a $6d$ SCFT obtained this way need not have the same 2-form symmetry group as that of the $6d$ LST. For example, deleting the $\sp(n)$ node in the above LST, we obtain the $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(2n+8)$};
\end{tikzpicture}
\end{equation}
for which $\mathcal{T}={\mathbb Z}_4$.
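Both results in this example follow from the same minor-gcd computation. Reading off $\Omega$ from the quivers (diagonal entries $4$ and $1$, off-diagonal entry $-2$ from the edge label), a short Python check (names are ours):

```python
from math import gcd
from itertools import combinations

def det(m):
    """Integer determinant via Laplace expansion (adequate for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def invariant_factors(m):
    """Smith normal form diagonal of an integer matrix (0 entries are Z summands)."""
    n = len(m)
    d = [1]
    for k in range(1, n + 1):
        g = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                g = gcd(g, abs(det([[m[i][j] for j in cols] for i in rows])))
        d.append(g)
    return [d[k] // d[k - 1] if d[k - 1] else 0 for k in range(1, n + 1)]

Omega_LST = [[4, -2], [-2, 1]]   # so(2n+8) -- sp(n) LST; det = 0 as required
Omega_SCFT = [[4]]               # after deleting the sp(n) node
```

The torsion part of the LST cokernel is trivial, giving $\mathcal{T}={\mathbb Z}_1$, while the SCFT gives $\mathcal{T}={\mathbb Z}_4$.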
\subsubsection{Relative nature of $6d$ SCFTs and LSTs}
General $6d$ SCFTs and LSTs are relative theories, which means that they are more properly thought of as theories living on the boundaries of certain $7d$ topological quantum field theories (TQFTs). For $6d$ SCFTs having a construction in the unfrozen phase of F-theory, it is well-known that the associated $7d$ TQFT is captured by the 2-form symmetry group (also known as the \emph{defect group} \cite{DelZotto:2015isa}) $\mathcal{T}$ of the $6d$ SCFT.
This should admit a straightforward generalization to $6d$ SCFTs constructed in the frozen phase of F-theory and LSTs, for which the recipe to compute $\mathcal{T}$ has been provided above.
\subsection{1-form symmetry of $6d$ SCFTs and LSTs}\label{6O}
If we forget about the hypermultiplet matter content of the $\mathcal{N}=(1,0)$ low-energy gauge theory and the dynamical BPS strings, then the 1-form symmetry is the product of the center\footnote{More precisely, we are working with a form of the theory where the gauge groups $G_i$ realizing all the gauge algebras $\mathfrak{g}_i$ are simply connected. Other forms of the theory having non-simply-connected gauge groups can be obtained from this form of the theory by gauging the 1-form symmetries, if any. Throughout this paper, we will abuse the language and refer to the center $Z(G)$ of the simply connected group $G$ of a simple algebra $\mathfrak{g}$ as the ``center of the simple algebra $\mathfrak{g}$''.} $\Gamma_i$ of each simple factor $\mathfrak{g}_i$ of the tensor branch gauge algebra\footnote{The $\mathcal{N}=(1,0)$ low-energy non-abelian gauge theory is free in the extreme IR, and hence described by a collection of free vector multiplets there. The 1-form symmetry associated to these free vector multiplets is spontaneously broken. Since, as we will see, the 1-form symmetry $\mathcal{O}$ of the $6d$ SCFT or LST is a subgroup of the $\prod_i\Gamma_i$ 1-form symmetry, which in turn embeds into the 1-form symmetry of the free vector multiplets in the IR, $\mathcal{O}$ is spontaneously broken along the tensor branch. We also expect $\mathcal{O}$ to be spontaneously broken at the conformal point of a $6d$ SCFT.}. Including the hypermultiplets and BPS strings, the 1-form symmetry $\mathcal{O}$ of the theory becomes the subgroup of $\prod_i\Gamma_i$ under which all hypermultiplets and BPS strings are uncharged.
The charges of (full or half) hypermultiplets under $\prod_i\Gamma_i$ are determined by the representation $R$ of $\mathfrak{g}=\oplus_i\mathfrak{g}_i$ formed by these hypermultiplets. We will describe a way to compute the charge of an arbitrary representation $R$ under $\prod_i\Gamma_i$ in Section \ref{5GG}. The charges of representations relevant in the context of $6d$ SCFTs and LSTs are displayed in Table \ref{table}, and the charges for arbitrary reps are provided in equations (\ref{ch1}) and (\ref{ch2}).
\begin{table}[h]
\begin{center}
\begin{tabular}{ | l | c | c | c | }
\hline
Gauge algebra & Center & Representations & Charge \\ \hline\hline
$\mathfrak{su}(n)$ & ${\mathbb Z}_n$ & \parbox[t]{0.35cm}{$\mathsf{F}$\\$\L^2$\\$\L^3$\\$\S^2$} & \parbox[t]{4.1cm}{$1~(\text{mod}~n)$\\$2~(\text{mod}~n)$\\$3~(\text{mod}~n)$\\$2~(\text{mod}~n)$} \\ \hline
$\mathfrak{so}(2n+1)$ & ${\mathbb Z}_2$ & \parbox[t]{0.35cm}{$\mathsf{F}$\\$\S$} & \parbox[t]{4.1cm}{$0~(\text{mod}~2)$\\$1~(\text{mod}~2)$} \\ \hline
$\sp(n)$ & ${\mathbb Z}_2$ & \parbox[t]{0.35cm}{$\mathsf{F}$\\$\L^2$\\$\L^3$} & \parbox[t]{4.1cm}{$1~(\text{mod}~2)$\\$0~(\text{mod}~2)$\\$1~(\text{mod}~2)$} \\ \hline
$\mathfrak{so}(4n+2)$ & ${\mathbb Z}_4$ & \parbox[t]{0.35cm}{$\mathsf{F}$\\$\S$\\${\mathbb C}$} & \parbox[t]{4.1cm}{$2~(\text{mod}~4)$\\$1~(\text{mod}~4)$\\$3~(\text{mod}~4)$} \\ \hline
$\mathfrak{so}(4n)$ & ${\mathbb Z}_2\times{\mathbb Z}_2$ & \parbox[t]{0.35cm}{$\mathsf{F}$\\$\S$\\${\mathbb C}$} & \parbox[t]{4.1cm}{$\left(1~(\text{mod}~2),1~(\text{mod}~2)\right)$\\$\left(1~(\text{mod}~2),0~(\text{mod}~2)\right)$\\$\left(0~(\text{mod}~2),1~(\text{mod}~2)\right)$} \\ \hline
$\mathfrak{e}_6$ & ${\mathbb Z}_3$ & \parbox[t]{0.35cm}{$\mathsf{F}$} & \parbox[t]{4.1cm}{$1~(\text{mod}~3)$} \\ \hline
$\mathfrak{e}_7$ & ${\mathbb Z}_2$ & \parbox[t]{0.35cm}{$\mathsf{F}$} & \parbox[t]{4.1cm}{$1~(\text{mod}~2)$} \\ \hline
$\mathfrak{e}_8$ & ${\mathbb Z}_1$ & \parbox[t]{0.35cm}{$\mathsf{F}$} & \parbox[t]{4.1cm}{$0~(\text{mod}~1)$} \\ \hline
$\mathfrak{f}_4$ & ${\mathbb Z}_1$ & \parbox[t]{0.35cm}{$\mathsf{F}$} & \parbox[t]{4.1cm}{$0~(\text{mod}~1)$} \\ \hline
$\mathfrak{g}_2$ & ${\mathbb Z}_1$ & \parbox[t]{0.35cm}{$\mathsf{F}$} & \parbox[t]{4.1cm}{$0~(\text{mod}~1)$} \\ \hline
\end{tabular}
\end{center}
\caption{Centers of various gauge algebras and charges of some of the representations under the center of the gauge algebra. The adjoint representation $\mathsf{A}$ is not mentioned in the table above since it always has charge 0 under the corresponding center. $\mathsf{F}$ denotes the fundamental representation for $\mathfrak{su}(n),\sp(n)$; the vector representation for $\mathfrak{so}(n)$; and the irreducible representations of dimensions $\mathbf{7},\mathbf{26},\mathbf{27},\mathbf{56}$ for $\mathfrak{g}_2,\mathfrak{f}_4,\mathfrak{e}_6,\mathfrak{e}_7$ respectively. We often refer to $\mathsf{F}$ as the ``fundamental representation'' of the corresponding algebra. $\Lambda^2$ and $\Lambda^3$ denote the irreducible two- and three-index antisymmetric representations for $\mathfrak{su}(n)$ and $\sp(n)$. $\S^2$ denotes the two-index symmetric irrep for $\mathfrak{su}(n)$. $\S$ and ${\mathbb C}$ denote the irreducible spinor reps of different chirality for $\mathfrak{so}(2n)$; and $\S$ denotes the irreducible spinor rep for $\mathfrak{so}(2n+1)$. The charges for arbitrary irreps are provided in equations (\ref{ch1}) and (\ref{ch2}).}
\label{table}
\end{table}
As far as the charges of BPS strings are concerned, it is often the case that the charges of BPS strings under $\prod_i\Gamma_i$ are already accounted for by the charges of hypermultiplets under $\prod_i\Gamma_i$. However, in some cases, BPS strings lead to independent contributions not accounted for by the hypermultiplets. The hallmark of these cases is that either they involve tensor multiplets that are not paired to a gauge algebra, or the matter content is such that we have a half-hyper in some irreducible representation of $\mathfrak{g}=\oplus_i\mathfrak{g}_i$, or the ${\mathbb Z}_2$ valued theta angle of a $\mathfrak{g}_i=\sp(n)$ is relevant. More exhaustively, these cases are listed below:
\begin{enumerate}
\item Consider a node $i$ with $\Omega^{ii}=1$ and $\mathfrak{g}_i$ trivial. Then, look at the set\footnote{This set is trivial if there is a node $j$ with $\Omega^{ij}<-1$. See the discussion later in this subsection accounting for the possibility of such nodes.} of nodes $j$ such that $\Omega^{ij}=-1$ and $\mathfrak{g}_j$ is non-trivial. It is well-known that the sum $\oplus_j\mathfrak{g}_j$ of these $\mathfrak{g}_j$ is a subalgebra of $\mathfrak{e}_8$. Correspondingly, the adjoint representation of $\mathfrak{e}_8$ decomposes as some representation $\mathcal{R}$ of $\oplus_j\mathfrak{g}_j$. Then, the charge of the BPS string corresponding to node $i$ is captured by the charge of $\mathcal{R}$ under $\prod_j\Gamma_j$.\\
Schematically the graph near the node $i$ takes the following form
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$1$};
\begin{scope}[shift={(2,0.05)}]
\node at (-0.5,0.9) {$\mathfrak{g}_j$};
\node (v2) at (-0.5,0.45) {$\Omega^{jj}$};
\end{scope}
\begin{scope}[shift={(-2,0.05)}]
\node (v2_1) at (-0.5,0.45) {$\Omega^{kk}$};
\node at (-0.5,0.9) {$\mathfrak{g}_k$};
\end{scope}
\begin{scope}[shift={(0,1.95)}]
\node (v2_2) at (-0.5,0.45) {$\Omega^{ll}$};
\node at (-0.5,0.9) {$\mathfrak{g}_l$};
\end{scope}
\draw (v2_2) edge (v1);
\draw (v2_1) edge (v1);
\draw (v1) edge (v2);
\end{tikzpicture}
\end{equation}
\item Consider a situation where we have two nodes $i$ and $j$ such that $\mathfrak{g}_i=\sp(n)$ and $\mathfrak{g}_j=\mathfrak{so}(m)$ for $n>0$ and $m\neq8$, and $\Omega^{ij}=-1$. The matter content between $\sp(n)$ and $\mathfrak{so}(m)$ is a half-hyper in a mixed representation of $\sp(n)\oplus\mathfrak{so}(m)$, namely the bifundamental representation. In this case, we need to account for the charge of the BPS instanton strings of $\sp(n)$ under the center $\Gamma_j$ of $\mathfrak{so}(m)$. We can take this string to carry the same charge under $\Gamma_j$ as the irreducible spinor representation $\S$ of $\mathfrak{so}(m)$.\\
Schematically the graph near the nodes $i$ and $j$ takes the following form
\begin{equation}\label{sf}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$1$};
\node (v4) at (-0.5,1) {$\sp(n)$};
\begin{scope}[shift={(2,0.05)}]
\node at (-0.5,0.9) {$\mathfrak{so}(m)$};
\node (v2) at (-0.5,0.45) {$\Omega^{jj}$};
\end{scope}
\begin{scope}[shift={(4.2,0.05)}]
\node (v2_1) at (-0.5,0.45) {$\Omega^{kk}$};
\end{scope}
\begin{scope}[shift={(-2.5,0.05)}]
\node (v2_2) at (-0.5,0.45) {$\Omega^{ll}$};
\node at (-0.5,0.9) {$\mathfrak{g}_l$};
\end{scope}
\draw (v2_2) edge (v1);
\draw (v2_1) edge (v2);
\begin{scope}[shift={(0,-1.7)}]
\node (v3) at (-0.5,0.9) {$\mathfrak{g}_m$};
\node (v2_3) at (-0.5,0.45) {$\Omega^{mm}$};
\end{scope}
\draw (v1) edge (v2);
\draw (v1) edge (v3);
\end{tikzpicture}
\end{equation}
\item Now, consider a situation where we have two nodes $i$ and $j$ such that $\mathfrak{g}_i=\sp(n)$ and $\mathfrak{g}_j=\mathfrak{so}(8)$ for $n>0$, and $\Omega^{ij}=-1$. In this case, the matter content between $\sp(n)$ and $\mathfrak{so}(8)$ is a half-hyper in a mixed representation of $\sp(n)\oplus\mathfrak{so}(8)$. The mixed representation takes the form $\mathsf{F}\otimes\mathcal{R}$ where $\mathsf{F}$ is the fundamental representation of $\sp(n)$ and $\mathcal{R}$ is one of the following 3 representations of $\mathfrak{so}(8)$: vector $\mathsf{F}$, spinor $\S$, or cospinor ${\mathbb C}$. If $\mathcal{R}=\mathsf{F}$, then the charge of BPS instanton string for $\sp(n)$ under $\Gamma_j$ can be taken to be the same as that of the representation $\S$ of $\mathfrak{so}(8)$. If $\mathcal{R}=\S$, then the charge of BPS instanton string for $\sp(n)$ under $\Gamma_j$ can be taken to be the same as that of the representation ${\mathbb C}$ of $\mathfrak{so}(8)$. If $\mathcal{R}={\mathbb C}$, then the charge of BPS instanton string for $\sp(n)$ under $\Gamma_j$ can be taken to be the same as that of the representation $\mathsf{F}$ of $\mathfrak{so}(8)$.\\
The schematic form of the graph near nodes $i$ and $j$ is displayed in (\ref{sf}) where $m=8$.
\item Consider a situation where we have two nodes $i$ and $j$ such that $\Omega^{ii}=1$, $\Omega^{jj}=2$, $\Omega^{ij}=-1$, $\mathfrak{g}_i=\sp(n)$ and $\mathfrak{g}_j=\mathfrak{su}(2n+8)$. The matter content between $\sp(n)$ and $\mathfrak{su}(2n+8)$ is a hyper in the bifundamental. In this case, the $6d$ $\sp(n)$ gauge algebra requires the input of a discrete theta angle $\theta$ which takes values $0,\pi$. For $\theta=\pi$, we need to account for the charge of the BPS instanton string for $\mathfrak{g}_i=\sp(n)$ under its own center $\Gamma_i={\mathbb Z}_2$, and the charge is $1$.\\
The graph near the nodes $i$ and $j$ takes the following schematic form
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$2$};
\node (v4) at (-0.5,1) {$\mathfrak{su}(2n+8)$};
\begin{scope}[shift={(2.8,0.05)}]
\node at (-0.5,0.9) {$\mathfrak{g}_k$};
\node (v2) at (-0.5,0.45) {$\Omega^{kk}$};
\end{scope}
\node (v3) at (0.9,0.5) {\tiny{$-\Omega^{jk}$}};
\draw (v1)--(v3);
\draw (v2)--(v3);
\begin{scope}[shift={(-2.5,0.05)}]
\node (v2_1) at (-0.5,0.45) {$1$};
\end{scope}
\begin{scope}[shift={(0,1.95)}]
\node (v2_2) at (-0.5,0.45) {$\Omega^{ll}$};
\node at (-0.5,0.9) {$\mathfrak{g}_l$};
\end{scope}
\draw (v2_2) edge (v4);
\draw (v2_1) edge (v1);
\node at (-3,1) {$\sp(n)_\pi$};
\end{tikzpicture}
\end{equation}
where we have displayed the theta angle for $\sp(n)$ which is relevant since all the $2n+8$ fundamental hypers of $\sp(n)$ are gauged by an $\mathfrak{su}$ gauge algebra.
\end{enumerate}
The fact that BPS strings carry non-trivial charges under $\mathfrak{g}_i$ (and hence $\Gamma_i$) in the first three of the above four cases is a known fact in the literature. On the other hand, the fact that the above four cases are the \emph{only cases} where one needs to account for the charges of BPS strings under $\Gamma_i$ requires a justification, which we will provide in Section \ref{5KG}.
In any case, let us address a few pressing questions that might arise upon a reading of the above list:
\begin{enumerate}
\item First, it is possible, in the context of $6d$ SCFTs and LSTs, to have two nodes $i$ and $j$ with $\Omega^{ii}=1$, $\mathfrak{g}_i$ trivial, $\Omega^{ij}<-1$ and $\mathfrak{g}_j$ non-trivial. In this case, the BPS string associated to $i$ will be charged under $\mathfrak{g}_j$, so why is this possibility not accounted for in the above list? It turns out that in this case, the charge of the BPS string under $\Gamma_j$ is trivial. To see this, notice that the only theory where this situation occurs is the following $6d$ LST
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(8)$};
\begin{scope}[shift={(2.2,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\node (v3) at (0.6,0.5) {\tiny{2}};
\draw (v1)--(v3);
\draw (v2)--(v3);
\end{tikzpicture} \,,
\end{equation}
for which $\mathfrak{so}(8)$ is embedded into $\mathfrak{e}_8$ with embedding index 2. Thus, the BPS string corresponding to the right node is charged as
\begin{equation}
(\S\otimes\S)\oplus({\mathbb C}\otimes{\mathbb C})
\end{equation}
under $\mathfrak{so}(8)$ which has trivial charge under the ${\mathbb Z}_2\times{\mathbb Z}_2$ center of $\mathfrak{so}(8)$.
\item Second, what about the cases where we have a node $i$ with $\Omega^{ii}=2$ and $\mathfrak{g}_i$ trivial? In this case, the set of nodes $j$ such that $\Omega^{ij}<0$ and $\mathfrak{g}_j$ non-trivial is either empty, or consists of a single node (which we label by $j$) with $\Omega^{ij}=-1$ and $\mathfrak{g}_j=\mathfrak{su}(2)$. Moreover, the $\mathfrak{su}(2)$ gauge algebra on node $j$ must carry a positive number of full hypers in the fundamental of $\mathfrak{su}(2)$, out of which one half-hyper must be trapped by the node $i$, i.e. the half-hyper cannot be gauged by some other gauge algebra $\mathfrak{g}_k$. This half-hyper completely destroys the center of $\mathfrak{su}(2)$, and hence one does not need to account for the contribution from the BPS string associated to node $i$.
\item Third, in the above list the only possibilities that arise have a half-hyper charged in a mixed representation of \emph{two} simple gauge algebras. What about the possibility of having a half-hyper charged in a mixed representation of \emph{more than two} simple gauge algebras? In the context of $6d$ SCFTs and LSTs, this possibility is only realized in the $6d$ LST with the associated $6d$ gauge theory carrying $\mathfrak{su}(2)^3$ gauge algebra along with a half-hyper in trifundamental plus two extra full hypers in fundamental representation of each $\mathfrak{su}(2)$. In this case, the extra full hypers break the center of each of the three $\mathfrak{su}(2)$s and hence one does not need to consider the contributions of BPS strings.
\item Fourth, what about the cases where we have a half-hyper charged under a \emph{single} gauge algebra only? In all of these cases, it turns out that the hypermultiplet content of any $6d$ SCFT or LST already captures the contribution of the BPS strings. For example, consider a node $i$ with $\Omega^{ii}=3$ and $\mathfrak{g}_i=\mathfrak{so}(12)$. Any $6d$ theory containing this node contains a half-hyper charged as $\S$ of $\mathfrak{so}(12)$ and 5 hypers charged as $\mathsf{F}$. Since the half-hyper in $\S$ cannot be gauged by any other gauge algebra $\mathfrak{g}_j$ for a $6d$ SCFT or LST, the ${\mathbb Z}_2^2$ center of $\mathfrak{so}(12)$ is broken down to the ${\mathbb Z}_2$ subgroup under which the $\mathsf{F}$ and ${\mathbb C}$ reps of $\mathfrak{so}(12)$ have charge 1. It turns out that there is no way to gauge the 5 hypers in $\mathsf{F}$ and to simultaneously complete the node $i$ into a $6d$ SCFT or LST such that the above ${\mathbb Z}_2$ subgroup of the center of $\mathfrak{so}(12)$ would survive as a 1-form symmetry. Thus, the center of $\mathfrak{so}(12)$ is already completely broken by the hypermultiplet content, and we never reach the point where we need to discuss the charge of the BPS string associated to $i$ under $\Gamma_i$.
\end{enumerate}
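The triviality claimed in the first point above rests on additivity of center charges under tensor products: the charge of $R_1\otimes R_2$ is the sum of the charges of $R_1$ and $R_2$, taken modulo the order of the center. A small Python check for the $\mathfrak{so}(8)$ case, with charge assignments taken from Table \ref{table} (the variable names are ours):

```python
# Z2 x Z2 center charges of so(8) irreps: vector, spinor, cospinor
F, S, C = (1, 1), (1, 0), (0, 1)

def tensor_charge(*reps):
    """Center charge of a tensor product: componentwise sum of charges, mod 2."""
    return tuple(sum(q) % 2 for q in zip(*reps))

# the BPS string above is charged as (S x S) + (C x C) under so(8):
# both summands are neutral under the Z2 x Z2 center
```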
\subsubsection{Examples}\label{1ex}
\ni\ubf{Example 1}: Consider the $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(4n)$};
\end{tikzpicture}
\end{equation}
where $n\ge2$. The center of $\mathfrak{so}(4n)$ is ${\mathbb Z}_2\times{\mathbb Z}_2$ under which fundamental, spinor and cospinor representations have charges $(1,1)$, $(1,0)$ and $(0,1)$ respectively. The above $6d$ SCFT contains $4n-8$ hypers in fundamental representation. For $n=2$, there are no hypers and we find
\begin{equation}
\mathcal{O}={\mathbb Z}_2\times{\mathbb Z}_2 \,.
\end{equation}
For $n>2$, the fundamental hypers are uncharged under only a diagonal combination of the two ${\mathbb Z}_2$s and thus
\begin{equation}
\mathcal{O}={\mathbb Z}_2 \,.
\end{equation}
For the $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(4n+2)$};
\end{tikzpicture}
\end{equation}
the center is ${\mathbb Z}_4$, under which the fundamental has charge $2$ and the spinor/cospinor have charges $\pm1$. The $6d$ SCFT contains $4n-6$ fundamental hypers, and $n\ge2$ is required for the theory to exist. The presence of fundamental hypers implies that the 1-form symmetry for this theory is
\begin{equation}
\mathcal{O}={\mathbb Z}_2 \,.
\end{equation}
In all of the cases considered in this example, there is no extra breaking induced by the instanton string.
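The kernels appearing in this example are small enough to enumerate by brute force. A Python sketch (our own bookkeeping, with charge vectors taken from Table \ref{table}) that lists the elements of $\prod_i{\mathbb Z}_{n_i}$ under which all given matter charges are neutral:

```python
from itertools import product
from fractions import Fraction

def unbroken_subgroup(orders, charges):
    """Elements a of Z_{n_1} x ... x Z_{n_k} for which every charge vector q
    is neutral, i.e. sum_i a_i q_i / n_i = 0 (mod 1)."""
    return [a for a in product(*(range(n) for n in orders))
            if all(sum(Fraction(ai * qi, ni)
                       for ai, qi, ni in zip(a, q, orders)) % 1 == 0
                   for q in charges)]

# so(4n), n > 2: center Z2 x Z2, fundamental hypers of charge (1, 1)
survivors = unbroken_subgroup([2, 2], [(1, 1)])      # the diagonal Z2
# so(8) on a -4 curve: no hypers, so the full Z2 x Z2 survives
survivors_so8 = unbroken_subgroup([2, 2], [])
# so(4n+2): center Z4, fundamental hypers of charge 2
survivors_odd = unbroken_subgroup([4], [(2,)])       # {0, 2}, i.e. Z2
```

The three outputs reproduce $\mathcal{O}={\mathbb Z}_2$, ${\mathbb Z}_2\times{\mathbb Z}_2$ and ${\mathbb Z}_2$ respectively, matching the discussion above.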
\vspace{8pt}
\ni\ubf{Example 2}: Consider the $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(2n)$};
\begin{scope}[shift={(2.2,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\node (v4) at (1.7,1) {$\sp(2n-8)$};
\draw (v1) edge (v2);
\end{tikzpicture}
\end{equation}
Consider first the $n>4$ case, for which we have a half-hyper in the bifundamental and $3n-8$ full hypers in the fundamental of $\sp(2n-8)$. The presence of fundamentals of $\sp(2n-8)$ breaks the ${\mathbb Z}_2$ center 1-form symmetry associated to $\sp(2n-8)$ down to ${\mathbb Z}_1$. The presence of the bifundamental breaks the center 1-form symmetry associated to $\mathfrak{so}(2n)$ down to the ${\mathbb Z}_2$ subgroup under which the fundamental representation is uncharged.
However, this is not the end of the story, as the BPS instanton string associated to $\sp(2n-8)$ has non-trivial charge under the above ${\mathbb Z}_2$ subgroup of the center of $\mathfrak{so}(2n)$. Thus, we find that the 1-form symmetry for the above $6d$ SCFT is trivial. That is,
\begin{equation}
\mathcal{O}={\mathbb Z}_1 \,.
\end{equation}
For $n=4$, $\sp(2n-8)=\sp(0)$ means that there is no gauge algebra associated to the right node, and we can write the quiver as
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(8)$};
\begin{scope}[shift={(2,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\draw (v1) edge (v2);
\end{tikzpicture}
\end{equation}
The potential 1-form symmetry is ${\mathbb Z}_2\times{\mathbb Z}_2$ coming from the center of $\mathfrak{so}(8)$. There are no hypermultiplets, but we again have to account for the BPS string associated to the right node. This string is charged as the adjoint of the total flavor symmetry $\mathfrak{e}_8$ associated to the right node. The $\mathfrak{so}(8)$ gauge algebra embeds into $\mathfrak{e}_8$ such that the adjoint of $\mathfrak{e}_8$ decomposes into a representation of $\mathfrak{so}(8)$ which contains both the spinor and cospinor representations. Thus, both the ${\mathbb Z}_2$s are broken by this BPS string and we again obtain
\begin{equation}
\mathcal{O}={\mathbb Z}_1 \,.
\end{equation}
Consider also the $6d$ LST
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(2n)$};
\begin{scope}[shift={(2.2,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\node (v3) at (0.6,0.5) {\tiny{2}};
\draw (v1)--(v3);
\draw (v2)--(v3);
\node at (1.7,1) {$\sp(n-8)$};
\end{tikzpicture}
\end{equation}
whose matter content is a full hyper rather than a half-hyper in the bifundamental of the two algebras. According to our general discussion above, due to the presence of a full hyper, we do not need to consider the contribution of BPS instanton strings. Any element of the center $\Gamma_{\mathfrak{so}}$ of $\mathfrak{so}(2n)$ that acts non-trivially on the representation $\mathsf{F}$ of $\mathfrak{so}(2n)$ can be combined with the generator of the center ${\mathbb Z}_2$ of $\sp(n-8)$ to produce an element of the 1-form symmetry group of the above theory. Thus, we find that
\begin{equation}
\mathcal{O}\simeq\Gamma_{\mathfrak{so}} \,.
\end{equation}
\vspace{8pt}
\ni\ubf{Example 3}: Consider the $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(2n)$};
\begin{scope}[shift={(2.2,0.05)}]
\node (v2) at (-0.5,0.45) {2};
\end{scope}
\node (v3) at (0.6,0.5) {\tiny{2}};
\draw (v1)--(v3);
\draw (v2)--(v3);
\node (v4) at (1.7,1) {$\mathfrak{su}(2n-8)$};
\end{tikzpicture} \,
\end{equation}
where $n\ge8$. The theory contains a bifundamental hyper plus $2n-16$ fundamental hypers for $\mathfrak{su}(2n-8)$. Let us first consider the case $n>8$. Then, the $2n-16$ fundamental hypers of $\mathfrak{su}(2n-8)$ completely destroy the center ${\mathbb Z}_{2n-8}$ 1-form symmetry associated to $\mathfrak{su}(2n-8)$. As above, the bifundamental hyper leaves only a ${\mathbb Z}_2$ 1-form symmetry out of the center 1-form symmetry associated to $\mathfrak{so}(2n)$. The BPS strings do not contribute to any additional breaking of the potential 1-form symmetry since the theory does not contain any half-hypers in mixed representation of $\mathfrak{so}(2n)\oplus\mathfrak{su}(2n-8)$. Thus, the 1-form symmetry is
\begin{equation}
\mathcal{O}={\mathbb Z}_2 \,,
\end{equation}
for $n>8$.
Now consider the case $n=8$. We can combine the order two element in the center ${\mathbb Z}_8$ associated to $\mathfrak{su}(2n-8)=\mathfrak{su}(8)$ with the generators of the two ${\mathbb Z}_2$s in the center of $\mathfrak{so}(2n)=\mathfrak{so}(16)$ to obtain two ${\mathbb Z}_2$ symmetries under which the hypermultiplet content is uncharged. Due to the same reason as for the case $n>8$, the BPS strings do not further reduce the 1-form symmetry in the $n=8$ case as well. Thus, the above $6d$ SCFT for $n=8$ has 1-form symmetry
\begin{equation}
\mathcal{O}={\mathbb Z}_2\times{\mathbb Z}_2 \,.
\end{equation}
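For $n=8$ this counting can be verified by brute force. The potential center is ${\mathbb Z}_2\times{\mathbb Z}_2\times{\mathbb Z}_8$ (this ordering of factors is our convention), and the bifundamental carries charge vector $(1,1,1)$, since $\mathsf{F}$ of $\mathfrak{so}(16)$ has charge $(1,1)$ and $\mathsf{F}$ of $\mathfrak{su}(8)$ has charge $1$. A Python sketch of the enumeration:

```python
from itertools import product
from fractions import Fraction

def unbroken_subgroup(orders, charges):
    """Elements of Z_{n_1} x ... x Z_{n_k} neutral on all charge vectors,
    i.e. sum_i a_i q_i / n_i = 0 (mod 1) for every charge vector q."""
    return [a for a in product(*(range(n) for n in orders))
            if all(sum(Fraction(ai * qi, ni)
                       for ai, qi, ni in zip(a, q, orders)) % 1 == 0
                   for q in charges)]

# centers: Z2 x Z2 from so(16), Z8 from su(8); bifundamental charge (1, 1, 1)
survivors = unbroken_subgroup([2, 2, 8], [(1, 1, 1)])
# 4 surviving elements, each of order at most 2, so the group is Z2 x Z2
```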
\vspace{8pt}
\ni\ubf{Example 4}: Consider the $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(2n+8)$};
\begin{scope}[shift={(2,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\draw (v1) edge (v2);
\node (v1_1) at (3.5,0.5) {$4$};
\draw (v2) edge (v1_1);
\node (v4_1) at (3.5,1) {$\mathfrak{so}(2n+8)$};
\node (v4_1) at (1.5,1) {$\sp(n)$};
\end{tikzpicture}
\end{equation}
which makes sense for $n\ge0$. Consider first the case of $n>0$. Then, the hypermultiplet content of the theory is
\begin{equation}
\frac{1}{2}(\mathsf{F},\mathsf{F},1)\oplus\frac{1}{2}(1,\mathsf{F},\mathsf{F})\oplus n(\mathsf{F},1,1)\oplus n(1,1,\mathsf{F}) \,,
\end{equation}
where $\mathsf{F}$ denotes the fundamental representation. This breaks the ${\mathbb Z}_2$ center of $\sp(n)$, but leaves a ${\mathbb Z}_2$ element inside the center of each $\mathfrak{so}(2n+8)$ unbroken. The unbroken ${\mathbb Z}_2$ inside $\mathfrak{so}(2n+8)$ acts non-trivially on the spinor and cospinor representations but acts trivially on the fundamental representation. The BPS instanton string for $\sp(n)$ has charge $(1,1)$ under the unbroken ${\mathbb Z}_2^2$ potential 1-form symmetry coming from the two $\mathfrak{so}(2n+8)$ gauge algebras. Thus we see that only a diagonal combination of the two surviving ${\mathbb Z}_2$s associated to the two $\mathfrak{so}(2n+8)$s survives. That is, the 1-form symmetry for $n>0$ is
\begin{equation}
\mathcal{O}={\mathbb Z}_2 \,.
\end{equation}
Notice that if one of the two $\mathfrak{so}(2n+8)$s were not gauged, then we would have obtained a trivial 1-form symmetry, as discussed in an example above.
Now consider the case of $n=0$ for which we can write the quiver as
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(8)$};
\begin{scope}[shift={(1.5,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\draw (v1) edge (v2);
\node (v1_1) at (2.5,0.5) {$4$};
\draw (v2) edge (v1_1);
\node (v4_1) at (2.5,1) {$\mathfrak{so}(8)$};
\end{tikzpicture}
\end{equation}
This theory contains no charged hypermultiplets. But the BPS string associated to the middle node is charged in the adjoint representation of its $\mathfrak{e}_8$ flavor symmetry, which decomposes under the two $\mathfrak{so}(8)$s as
\begin{equation}
(\mathsf{A},1)\oplus(1,\mathsf{A})\oplus(\mathsf{F},\mathsf{F})\oplus(\S,\S)\oplus({\mathbb C},{\mathbb C}) \,,
\end{equation}
where $\mathsf{A}$ denotes the adjoint representation. Thus we see that the BPS string is left invariant by a diagonal combination of the centers of the two $\mathfrak{so}(8)$s. Hence, the 1-form symmetry is
\begin{equation}
\mathcal{O}={\mathbb Z}_2\times{\mathbb Z}_2 \,.
\end{equation}
This result can be extended to the $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$4$};
\node (v4) at (-0.5,1) {$\mathfrak{so}(8)$};
\begin{scope}[shift={(1.5,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\draw (v1) edge (v2);
\node (v1_1) at (2.5,0.5) {$4$};
\draw (v2) edge (v1_1);
\node (v4_1) at (2.5,1) {$\mathfrak{so}(8)$};
\begin{scope}[shift={(4.5,0.05)}]
\node (v2_1) at (-0.5,0.45) {1};
\end{scope}
\node (v1_1_1) at (7,0.5) {$4$};
\node (v4_1_1) at (7,1) {$\mathfrak{so}(8)$};
\node (v1_1_2) at (5.5,0.5) {$\cdots$};
\draw (v1_1) edge (v2_1);
\draw (v2_1) edge (v1_1_2);
\draw (v1_1_2) edge (v1_1_1);
\end{tikzpicture}
\end{equation}
for which only a diagonal combination of the centers of all the $\mathfrak{so}(8)$s survives, thus leading to
\begin{equation}
\mathcal{O}={\mathbb Z}_2\times{\mathbb Z}_2 \,.
\end{equation}
\vspace{8pt}
\ni\ubf{Example 5}: Consider the $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$6$};
\node (v4) at (-0.5,1) {$\mathfrak{e}_6$};
\begin{scope}[shift={(1.5,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\draw (v1) edge (v2);
\node (v1_1) at (2.5,0.5) {$3$};
\draw (v2) edge (v1_1);
\node (v4_1) at (2.5,1) {$\mathfrak{su}(3)$};
\end{tikzpicture}
\end{equation}
which carries no charged hypers and for which the BPS string associated to the middle node is charged under $\mathfrak{e}_6\oplus\mathfrak{su}(3)$ as
\begin{equation}
(\mathsf{A},1)\oplus(1,\mathsf{A})\oplus(\mathsf{F},\bar{\mathsf{F}})\oplus(\bar{\mathsf{F}},\mathsf{F}) \,,
\end{equation}
where $\mathsf{F}=\mathbf{27}$ for $\mathfrak{e}_6$. This is left invariant by a diagonal ${\mathbb Z}_3$ combination of the ${\mathbb Z}_3$ centers associated to $\mathfrak{e}_6$ and $\mathfrak{su}(3)$, thus leading to the final result
\begin{equation}
\mathcal{O}={\mathbb Z}_3 \,.
\end{equation}
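The surviving diagonal ${\mathbb Z}_3$ can be checked by the same enumeration as in the earlier examples: under ${\mathbb Z}_3\times{\mathbb Z}_3$ (the centers of $\mathfrak{e}_6$ and $\mathfrak{su}(3)$), the summands $(\mathsf{F},\bar{\mathsf{F}})$ and $(\bar{\mathsf{F}},\mathsf{F})$ carry charge vectors $(1,2)$ and $(2,1)$, while the adjoints are neutral. A Python sketch (our own bookkeeping):

```python
from itertools import product
from fractions import Fraction

def unbroken_subgroup(orders, charges):
    """Elements of Z_{n_1} x ... x Z_{n_k} neutral on all charge vectors."""
    return [a for a in product(*(range(n) for n in orders))
            if all(sum(Fraction(ai * qi, ni)
                       for ai, qi, ni in zip(a, q, orders)) % 1 == 0
                   for q in charges)]

# centers Z3 x Z3; string charges (F, Fbar) -> (1, 2) and (Fbar, F) -> (2, 1)
survivors = unbroken_subgroup([3, 3], [(1, 2), (2, 1)])
# only the diagonal Z3 survives
```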
This result can be extended to the $6d$ SCFTs
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$3$};
\node (v4) at (-0.5,1) {$\mathfrak{su}(3)$};
\begin{scope}[shift={(1.5,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\draw (v1) edge (v2);
\node (v1_1) at (2.5,0.5) {$6$};
\draw (v2) edge (v1_1);
\node (v4_1) at (2.5,1) {$\mathfrak{e}_6$};
\begin{scope}[shift={(4.5,0.05)}]
\node (v2_1) at (-0.5,0.45) {1};
\end{scope}
\node (v1_1_1) at (7,0.5) {$3$};
\node (v4_1_1) at (7,1) {$\mathfrak{su}(3)$};
\node (v1_1_2) at (5.5,0.5) {$\cdots$};
\draw (v1_1) edge (v2_1);
\draw (v2_1) edge (v1_1_2);
\draw (v1_1_2) edge (v1_1_1);
\begin{scope}[shift={(-1.5,0.05)}]
\node (v2_2) at (-0.5,0.45) {1};
\end{scope}
\node (v1_1_3) at (-3.5,0.5) {$6$};
\node (v4_1_2) at (-3.5,1) {$\mathfrak{e}_6$};
\draw (v1_1_3) edge (v2_2);
\draw (v2_2) edge (v1);
\end{tikzpicture}
\end{equation}
and
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$3$};
\node (v4) at (-0.5,1) {$\mathfrak{su}(3)$};
\begin{scope}[shift={(1.5,0.05)}]
\node (v2) at (-0.5,0.45) {1};
\end{scope}
\draw (v1) edge (v2);
\node (v1_1) at (2.5,0.5) {$6$};
\draw (v2) edge (v1_1);
\node (v4_1) at (2.5,1) {$\mathfrak{e}_6$};
\begin{scope}[shift={(4.5,0.05)}]
\node (v2_1) at (-0.5,0.45) {1};
\end{scope}
\node (v1_1_1) at (7,0.5) {$6$};
\node (v4_1_1) at (7,1) {$\mathfrak{e}_6$};
\node (v1_1_2) at (5.5,0.5) {$\cdots$};
\draw (v1_1) edge (v2_1);
\draw (v2_1) edge (v1_1_2);
\draw (v1_1_2) edge (v1_1_1);
\begin{scope}[shift={(-1.5,0.05)}]
\node (v2_2) at (-0.5,0.45) {1};
\end{scope}
\node (v1_1_3) at (-3.5,0.5) {$6$};
\node (v4_1_2) at (-3.5,1) {$\mathfrak{e}_6$};
\draw (v1_1_3) edge (v2_2);
\draw (v2_2) edge (v1);
\end{tikzpicture}
\end{equation}
for which again only a diagonal ${\mathbb Z}_3$ combination of all the centers survives, leading to
\begin{equation}
\mathcal{O}={\mathbb Z}_3 \,.
\end{equation}
\vspace{8pt}
\ni\ubf{Example 6}: Consider the following LST arising in the frozen phase of F-theory
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-1,0.5) {$1$};
\node (v4) at (-1,1) {$\sp(n)_\pi$};
\begin{scope}[shift={(2,0.05)}]
\node (v2) at (-0.5,0.45) {2};
\end{scope}
\draw (v1) edge (v2);
\node (v1_1) at (4.5,0.5) {$4$};
\node (v4_1) at (4.5,1) {$\mathfrak{so}(2n+16)$};
\node (v4) at (1.5,1) {$\mathfrak{su}(2n+8)$};
\node (v3) at (3,0.5) {\tiny{2}};
\draw (v2) edge (v3);
\draw (v3) edge (v1_1);
\end{tikzpicture} \,,
\end{equation}
for $n>0$, where the theta angle for $\sp(n)$ is relevant since all of the $2n+8$ fundamental hypers of $\sp(n)$ have been gauged by the $\mathfrak{su}(2n+8)$ gauge algebra, and we have chosen this theta angle to be $\pi$. The hypermultiplet content forms a representation
\begin{equation}
(\mathsf{F},\mathsf{F},1)\oplus(1,\mathsf{F},\mathsf{F}) \,,
\end{equation}
of $\sp(n)\oplus\mathfrak{su}(2n+8)\oplus\mathfrak{so}(2n+16)$. The potential center 1-form symmetry is $\Gamma:={\mathbb Z}_2\times{\mathbb Z}_{2n+8}\times\Gamma_{\mathfrak{so}}$, where the ${\mathbb Z}_2$ factor is the center of $\sp(n)$, the ${\mathbb Z}_{2n+8}$ factor is the center of $\mathfrak{su}(2n+8)$, and $\Gamma_{\mathfrak{so}}$ is the center of $\mathfrak{so}(2n+16)$, with $\Gamma_{\mathfrak{so}}={\mathbb Z}_4$ if $n$ is odd and $\Gamma_{\mathfrak{so}}={\mathbb Z}_2^2$ if $n$ is even. This potential 1-form symmetry is broken by the above hyper content to a subgroup $\tilde\Gamma$ of $\Gamma$. It turns out that $\tilde\Gamma$ is isomorphic to $\Gamma_{\mathfrak{so}}$, with the generators of $\tilde\Gamma$ obtained by combining the generators of the $\Gamma_{\mathfrak{so}}$ factor of $\Gamma$ with the order-2 element of the ${\mathbb Z}_{2n+8}$ factor and the generator of the ${\mathbb Z}_2$ factor of $\Gamma$.
However, the BPS string associated to the $\sp(n)$ node has charge 1 under the ${\mathbb Z}_2$ factor of $\Gamma$ since the theta angle for $\sp(n)$ is $\pi$, and hence the $\tilde\Gamma$ potential 1-form symmetry is completely broken since all the generators of $\tilde\Gamma$ involve the generator of the ${\mathbb Z}_2$ factor of $\Gamma$. We find that the above LST has
\begin{equation}
\mathcal{O}={\mathbb Z}_1 \,.
\end{equation}
\section{1-form symmetry of $5d$ $\mathcal{N}=1$ theories}\label{5}
In this section, our aim is to study higher-form symmetries of $5d$ $\mathcal{N}=1$ theories. More precisely, we aim to study mass-deformations of $5d$ SCFTs and circle compactifications of $6d$ SCFTs and LSTs.
Just as in the previous section, we would like to argue that it is sufficient for us to focus on a class of $5d$ theories which admit only one kind of higher-form symmetry, namely 1-form symmetry\footnote{Just like the 1-form and 2-form symmetries of $6d$ theories, these 1-form symmetries of $5d$ theories will also be spontaneously broken in all the kinds of vacua we discuss below.}. The argument is again that all known $5d$ theories arise by discrete gaugings of theories in this class\footnote{See however \cite{Closset:2020scj} for some proposed counter-examples. In these cases, there are 3-form symmetries whose interpretation remains to be fully understood in terms of the classification of $5d$ SCFTs.}. Moreover, all the known $5d$ theories in this class admit a geometric construction in M-theory, which we will use to study them. The geometric constructions that we consider require extra discrete data, which we fix by demanding that all the non-compact complex curves can be wrapped by M2-branes. This severely limits the non-compact complex surfaces that can be wrapped by M5-branes; see \cite{Morrison:2020ool, Albertini:2020mdx} for more discussion of this discrete data. It is this choice of discrete data that gives rise to the class of $5d$ theories that we will be studying.
\subsection{1-form symmetry from the Coulomb branch}
At a generic point on its Coulomb branch, a $5d$ $\mathcal{N}=1$ theory flows to a $5d$ $\mathcal{N}=1$ abelian gauge theory with gauge group $U(1)^r$, where $r$ is often called the rank of the original $5d$ $\mathcal{N}=1$ theory. We can choose a basis for $U(1)^r$ such that the $U(1)^r$ charges of the line defects and dynamical particles in the theory lie in a lattice generated by primitive Wilson lines $W_i$ having charge $+1$ under the $U(1)_i$ gauge group and charge $0$ under the $U(1)_j$ gauge groups for $j\neq i$.
Each $U(1)_i$ gauge group gives rise to a potential $U(1)$ 1-form symmetry, and we can identify the actual 1-form symmetry group $\mathcal{O}$ of the $5d$ $\mathcal{N}=1$ theory as the elements of these potential $U(1)$ 1-form symmetries under which all the BPS (and massless) particles are uncharged.
\subsubsection{1-form symmetry from M-theory geometry}\label{5G}
The above discussed procedure of determining the 1-form symmetry of a $5d$ $\mathcal{N}=1$ theory from its Coulomb branch is easy to implement if the theory admits a geometric construction in M-theory. In such a construction, the Coulomb branch of the $5d$ $\mathcal{N}=1$ theory is constructed by compactifying M-theory on a non-compact Calabi-Yau threefold (CY3).
The CY3 contains a collection of irreducible compact K\"ahler surfaces $S_i$. Decomposing the M-theory 3-form gauge field in terms of a basis of 2-forms associated to the $S_i$ leads to a collection of 1-forms $A_i$, which are identified as the gauge fields for the gauge groups $U(1)_i$. The CY3 also contains compact holomorphic curves which lead to dynamical BPS particles via compactification of M2-branes on these curves. The charge of a particle arising from a curve $C$ under $U(1)_i$ is given by the intersection number $C\cdot S_i$.
Typically, the surfaces $S_i$ can be identified as blowups of Hirzebruch surfaces or blowups of $\P^2$. Moreover, the CY3 can often be presented in a form such that each curve $C$ can be written as a linear combination of compact curves living inside $S_i$. The intersection number $C\cdot S_i$ can then be traced to intersection theory of Hirzebruch surfaces and $\P^2$.
To do this, let $\alpha$ parametrize different intersections between $S_i$ and $S_j$ for $i\neq j$. Then the locus of $\alpha^{\text{th}}$ intersection can be identified as a compact curve $C_{ij}^{(\alpha)}$ living in $S_i$ and a compact curve $C_{ji}^{(\alpha)}$ living in $S_j$. In other words, we say that the $\alpha^{\text{th}}$ intersection between $S_i$ and $S_j$ is produced by identifying the curve $C_{ij}^{(\alpha)}$ living in $S_i$ with the curve $C_{ji}^{(\alpha)}$ living in $S_j$. We refer to $C_{ij}^{(\alpha)}$ and $C_{ji}^{(\alpha)}$ as the gluing curves corresponding to this intersection. Moreover, let us define the \emph{total} gluing curves for the intersections of $S_i$ and $S_j$ as $C_{ij}:=\sum_\alpha C_{ij}^{(\alpha)}$ and $C_{ji}:=\sum_\alpha C_{ji}^{(\alpha)}$.
Similarly, different self-intersections of a surface $S_i$ can be obtained by gluing $C_i^{(\alpha)}$ with $D_i^{(\alpha)}$ where $C_i^{(\alpha)}$ and $D_i^{(\alpha)}$ are curves living in $S_i$. In this case, we identify the total \emph{self-gluing} curve as $C_i:=\sum_\alpha C_i^{(\alpha)}+\sum_\alpha D_i^{(\alpha)}$.
If a compact curve $C$ lives in $S_i$ then its intersection number with $S_j$ for $j\neq i$ can be written as
\begin{equation}
C\cdot S_j=(C\cdot C_{ij})_{S_i} \,,
\end{equation}
where the subscript $S_i$ on the brackets indicates that the intersection number is computed inside the surface $S_i$, without regard for the details of the rest of the CY3. On the other hand, the intersection number of $C$ with $S_i$ can be written as
\begin{equation}
C\cdot S_i=(C\cdot K_i)_{S_i}+(C\cdot C_i)_{S_i}=2g(C)-2-(C\cdot C)_{S_i}+(C\cdot C_i)_{S_i} \,,
\end{equation}
where $K_i$ is the canonical divisor of $S_i$ and we have used the adjunction formula (applied to the surface $S_i$) to write its intersection with $C$ in terms of the self-intersection of $C$ (inside $S_i$) and the genus $g(C)$ of $C$.
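As a quick consistency check of this formula, take $C$ to be a fiber $f$ of a Hirzebruch surface $S_i$ with no self-gluing ($C_i=0$): since $g(f)=0$ and $(f\cdot f)_{S_i}=0$, we get
\begin{equation}
f\cdot S_i = 2g(f)-2-(f\cdot f)_{S_i} = -2 \,,
\end{equation}
so $-f\cdot S_i=2$, matching the diagonal entry of the Cartan matrix that appears in (\ref{Cartan}) below, as expected for the W-boson obtained by wrapping an M2-brane on $f$.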
The upshot of the above discussion is that we can reduce the calculation of $U(1)_i$ charges of various dynamical particles in the $5d$ $\mathcal{N}=1$ theory to the calculation of some intersection numbers inside the surfaces $S_i$, where an intersection number \emph{inside} $S_i$ can be computed without regard for the details of the rest of the CY3. Now we only need to discuss the intersection theory of curves inside a fixed surface $S_i$.
As we remarked above, each $S_i$ is either a blowup of a Hirzebruch surface or a blowup of $\P^2$. The second homology of a blowup of a Hirzebruch surface can be described in terms of the curve classes $e$, $f$ and $x_i$, where $e$ is the homology class of the total transform (under all blowups) of the base $\P^1$ of the Hirzebruch surface, $f$ is the homology class of the total transform (under all blowups) of a fiber $\P^1$ of the Hirzebruch surface, and $x_i$ is the homology class of the total transform (under subsequent\footnote{For our convenience, when we consider concrete geometries below, we will \emph{not} adopt the order that blowup $j$ is performed after blowup $i$ if $j>i$.} blowups $j>i$) of the exceptional $\P^1$ introduced by the $i^{\text{th}}$ blowup.
Similarly, the second homology of a blowup of $\P^2$ can be described in terms of the curve classes $l$ and $x_i$, where $l$ is the homology class of the total transform (under all blowups) of a $\P^1$ inside $\P^2$, and $x_i$ is the homology class of the total transform (under subsequent blowups $j>i$) of the exceptional $\P^1$ introduced by the $i^{\text{th}}$ blowup.
The intersection numbers between these curves in the case of a Hirzebruch surface ${\mathbb F}_n$ of degree $n$ are
\begin{align}
e\cdot e &= -n\\
f\cdot f &= 0\\
x_i\cdot x_j &= -\delta_{ij}\\
e\cdot f &= +1\\
x_i\cdot e &= 0\\
x_i\cdot f &= 0 \,.
\end{align}
We will also use the $h$ curve which is defined as
\begin{equation}
h:=e+nf \,.
\end{equation}
On the other hand, the intersection numbers in the case of $\P^2$ are
\begin{align}
l\cdot l &= +1\\
x_i\cdot x_j &= -\delta_{ij}\\
x_i\cdot l &= 0 \,.
\end{align}
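These intersection rules are mechanical enough to encode in a few lines. The following sketch (with the illustrative name \texttt{hirzebruch\_pairing}) pairs curve classes on a blowup of ${\mathbb F}_n$ written in the basis $\{e,f,x_i\}$, and reproduces $h\cdot h=n$, $h\cdot f=1$ and $h\cdot e=0$ for $h=e+nf$:

```python
def hirzebruch_pairing(n, C1, C2):
    """Intersection number of two curve classes on a blowup of F_n.
    Curve classes are dicts over the basis {'e', 'f', 'x1', 'x2', ...}."""
    total = 0
    for a, ca in C1.items():
        for b, cb in C2.items():
            if a == 'e' and b == 'e':
                p = -n                  # e.e = -n
            elif {a, b} == {'e', 'f'}:
                p = 1                   # e.f = +1
            elif a == b and a.startswith('x'):
                p = -1                  # x_i.x_i = -1
            else:
                p = 0                   # f.f = 0, x_i.x_j = 0 (i != j), x_i.e = x_i.f = 0
            total += ca * cb * p
    return total

h = {'e': 1, 'f': 3}                       # h := e + n f on F_3
print(hirzebruch_pairing(3, h, h))         # h.h = n = 3
print(hirzebruch_pairing(3, h, {'f': 1}))  # h.f = 1
print(hirzebruch_pairing(3, h, {'e': 1}))  # h.e = 0
```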
Using the above information, we can determine the $U(1)_i$ charges of any dynamical particle on the Coulomb branch of the $5d$ $\mathcal{N}=1$ theory $\mathfrak{T}$ under consideration. Similar to the case in Section \ref{6T}, the 1-form symmetry group $\mathcal{O}$ for $\mathfrak{T}$ can be computed from the point of view of its Pontryagin dual. For this purpose, let ${\mathbb Z}^r$ be the lattice of possible $U(1)_i$ charges. Then, let $\mathcal{C}$ be a set of curves defined as follows:
For each $S_i$, which is a blowup of a Hirzebruch surface, we add the curves $e,f,x_i$ into $\mathcal{C}$, and for each $S_i$, which is a blowup of $\P^2$, we add the curves $l,x_i$ into $\mathcal{C}$.\\
Let $\alpha$ parametrize different elements of $\mathcal{C}$. Then, the $U(1)_i$ charges of elements of $\mathcal{C}$ define the \emph{charge matrix} $Q^{\alpha i}$, which can be used to describe $\mathcal{O}$ as the Pontryagin dual of the quotient lattice\footnote{This result was first derived in \cite{Morrison:2020ool}.}
\begin{equation}
\frac{{\mathbb Z}^r}{[Q^{\alpha i}]\cdot {\mathbb Z}^{r}}=\bigoplus_{i=1}^r~\frac{{\mathbb Z}}{n_i{\mathbb Z}} \,,
\end{equation}
where $n_i:=\tilde Q^{ii}$ and $\tilde Q^{\alpha i}$ is the Smith normal form of $Q^{\alpha i}$.
If the $5d$ $\mathcal{N}=1$ theory is a $5d$ SCFT or a compactification of a $6d$ SCFT (twisted or untwisted) on a circle of finite non-zero radius, then each $n_i>0$, and we can write the Pontryagin dual as
\begin{equation}
\mathcal{O}=\prod_{i=1}^r~{\mathbb Z}_{n_i} \,,
\end{equation}
with ${\mathbb Z}_1$ being the trivial group.
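The computation of the $n_i$ from the charge matrix is a finite-linear-algebra exercise, which we sketch in code below (function names are illustrative). The elementary divisors $n_i$ are computed here from gcds of $k\times k$ minors rather than by row reduction; the rank-1 test case is local $\P^2$, whose generating curve $l$ has $l\cdot S=2g(l)-2-(l\cdot l)_S=-3$, reproducing the known result $\mathcal{O}={\mathbb Z}_3$:

```python
from itertools import combinations
from math import gcd
from functools import reduce

def idet(m):
    # integer determinant via Laplace expansion (fine for small minors)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * idet([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def one_form_symmetry(Q, r):
    """Orders n_i of the cyclic factors of O, i.e. the elementary divisors
    of the charge matrix Q (rows = curves in C, columns = the r U(1)'s)."""
    d_prev, ns = 1, []
    for k in range(1, r + 1):
        minors = [abs(idet([[Q[i][j] for j in cols] for i in rows]))
                  for rows in combinations(range(len(Q)), k)
                  for cols in combinations(range(r), k)]
        d_k = reduce(gcd, minors, 0)    # k-th determinantal divisor
        ns.append(d_k // d_prev if d_prev else 0)
        d_prev = d_k
    return ns

# Local P^2 (rank 1): the curve l has charge l.S = -3, so O = Z_3.
print(one_form_symmetry([[-3]], 1))  # [3]
```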
\subsection{1-form symmetry of $5d$ $\mathcal{N}=1$ non-abelian gauge theories}\label{5N}
As in Section \ref{6O}, the 1-form symmetry of a non-abelian $5d$ $\mathcal{N}=1$ gauge theory with gauge algebra $\mathfrak{g}=\oplus_i \mathfrak{g}_i$ (where $\mathfrak{g}_i$ are simple) can be described as a subgroup $\mathcal{O}$ of $\prod_i\Gamma_i$ where $\Gamma_i$ is the center of $\mathfrak{g}_i$. One necessary condition on $\mathcal{O}$ is that its elements should leave all the (full or half) hypermultiplets invariant. As in Section \ref{6O}, we also need to include the instantonic excitations. In that section, the effect of these excitations was captured by requiring that the fundamental BPS instanton strings be uncharged under elements of $\mathcal{O}$. In the case of $5d$ $\mathcal{N}=1$ theories, the effect of instantonic excitations is captured by requiring that BPS instanton particles are left invariant by elements of $\mathcal{O}$.
Some examples of instantonic contributions to (the breaking of) 1-form symmetry in $5d$ theories were already studied in \cite{Morrison:2020ool}. Two such examples are obtained by considering a pure $5d$ $\mathcal{N}=1$ gauge theory with a \emph{simple} gauge algebra $\mathfrak{g}=\mathfrak{su}(n),\sp(n)$. As discussed in the above reference, for a pure $\mathfrak{su}(n)$ theory with Chern-Simons (CS) level $k$, the instantonic contributions are captured by accounting for an instanton particle of charge $k~(\text{mod}~n)$ under the center ${\mathbb Z}_n$ of $\mathfrak{su}(n)$; and for a pure $\sp(n)$ theory with theta angle $\theta=m\pi~(\text{mod}~2\pi)$, the instantonic contributions are captured by accounting for an instanton particle of charge $m~(\text{mod}~2)$ under the center ${\mathbb Z}_2$ of $\sp(n)$.
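These two pure-gauge-theory statements can be packaged as a short sketch (illustrative function names): with no hypers, the W-bosons leave the full center intact, and only the instanton particle charge breaks it further.

```python
from math import gcd

def pure_su_1form(n, k):
    """Order of the 1-form symmetry of pure su(n) at CS level k: the
    instanton particle has charge k (mod n) under the center Z_n."""
    return gcd(n, k % n)

def pure_sp_1form(m):
    """Order of the 1-form symmetry of pure sp(n) at theta = m*pi: the
    instanton particle has charge m (mod 2) under the center Z_2."""
    return 1 if m % 2 else 2

print(pure_su_1form(4, 2))  # 2: O = Z_2
print(pure_su_1form(5, 3))  # 1: O is trivial
print(pure_sp_1form(1))     # 1: theta = pi breaks Z_2 completely
```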
In this subsection, we will discuss other examples where instantonic contributions are relevant to the discussion of 1-form symmetry of $5d$ gauge theories. To this end, we will employ the M-theory construction of these $5d$ gauge theories.
\subsubsection{1-form symmetry of non-abelian gauge theories from geometry}\label{5GG}
In Section \ref{5G}, we discussed geometric constructions of Coulomb branches of $5d$ $\mathcal{N}=1$ theories. At special loci in the Coulomb branch, the low-energy theory enhances from an abelian gauge theory to a non-abelian gauge theory such that in the vicinity of such a locus we can regard the abelian gauge theory as arising on the Coulomb branch of the non-abelian gauge theory.
Let us consider a locus where a non-abelian gauge theory with a semi-simple gauge algebra $\mathfrak{g}$ arises. In the vicinity of this locus, the M-theory geometry can be represented in the following special form (see \cite{Bhardwaj:2019ngx,Bhardwaj:2020gyu} for more details):\\
We can represent each surface $S_i$ as a blowup of a Hirzebruch surface such that the \emph{intersection matrix} $M_{ij}$ defined by
\begin{equation}\label{Cartan}
M_{ij}:=-f_i\cdot S_j \,,
\end{equation}
(where $f_i$ denotes (the homology class of) a fiber $\P^1$ of Hirzebruch surface $S_i$) can be identified as the Cartan matrix of $\mathfrak{g}$.\\
The hypermultiplet content of the non-abelian gauge theory is encoded in the blowups and gluing curves. The details of this encoding can be found in \cite{Bhardwaj:2019ngx,Bhardwaj:2020gyu}. Here we will only need to consider special cases of the general case analyzed there.
(\ref{Cartan}) establishes a one-to-one correspondence between the nodes of the Dynkin diagram of $\mathfrak{g}$ and the surfaces $S_i$. Let the semi-simple gauge algebra $\mathfrak{g}$ decompose into simple factors as $\mathfrak{g}=\oplus_\mu\mathfrak{g}_\mu$, and let $S_i^\mu$ be the surfaces corresponding to $\mathfrak{g}_\mu$.
(\ref{Cartan}) implies that the total gluing curve $C_{ij}$ for $i\neq j$ can be written as
\begin{equation}
C_{ij}=-M_{ij}e_i+\beta_{ij}f_i+\sum_m\gamma_{ijm}x_{im}
\end{equation}
for some undetermined coefficients $\beta_{ij}$ and $\gamma_{ijm}$, where $x_{im}$ are the blowups living in the Hirzebruch surface $S_i$. Using the above form for $C_{ij}$ and the structure of the Cartan matrix $M_{ij}$, we can find a (non-unique) surface $\tilde S^\mu$ among the surfaces $S_i^\mu$ such that we can write
\begin{equation}\label{inst}
e_i^\mu\sim n_i^\mu\tilde e^\mu+\cdots \,,
\end{equation}
where the $\sim$ sign denotes that the curves on the two sides are the same in the homology of the full threefold; $\tilde e^\mu$ is the $e$ curve of the surface $\tilde S^\mu$; $n_i^\mu$ are strictly positive integers; and the omitted terms denoted by dots include contributions only from fibers and blowups living inside the surfaces $S_i^\mu$ for various $i$. An explicit choice of $\tilde e^\mu$ for the various simple Lie algebras will be provided later in this subsection. The result (\ref{inst}) will be very helpful in determining the contribution of instantons to the 1-form symmetry, but let us set it aside for now and turn to the realization of the center symmetry in terms of the surfaces $S_i$.
For each $\mu$ we have surfaces $S_i^\mu$ for $i=1,\cdots,r_\mu$ where $r_\mu$ is the rank of $\mathfrak{g}_\mu$. Consider the lattice $\Lambda_S^\mu\simeq{\mathbb Z}^{r_\mu}$ spanned by $S_i^\mu$ and the lattice $\Lambda_f^\mu\simeq{\mathbb Z}^{r_\mu}$ spanned by $f_i^\mu$. We claim that we can change basis inside $\Lambda_S^\mu$ from $S_i^\mu$ to $S_a^\mu$ (which are some linear combinations of $S_i^\mu$) with $a=1,\cdots,r_\mu$, and the basis inside $\Lambda_f^\mu$ from $f_i^\mu$ to $f_a^\mu$ (which are some linear combinations of $f_i^\mu$) with $a=1,\cdots,r_\mu$, such that
\begin{align}
-f_a^\mu\cdot S_b^\mu&=\delta_{ab}\\
-f_c^\mu\cdot S_b^\mu&=0\\
-f_a^\mu\cdot S_c^\mu&=0
\end{align}
for $a,b>1$ and $c=1$ if $\mathfrak{g}_\mu\neq\mathfrak{so}(4n)$; and $a,b>2$ and $c=1,2$ if $\mathfrak{g}_\mu=\mathfrak{so}(4n)$ for some $n$. Furthermore,
\begin{equation}
-f_1^\mu\cdot S_1^\mu=N_\mu
\end{equation}
for $\mathfrak{g}_\mu\neq\mathfrak{so}(4n)$ where ${\mathbb Z}_{N_\mu}$ is the center of $\mathfrak{g}_\mu$, and
\begin{equation}
-f_a^\mu\cdot S_b^\mu=2\delta_{ab}
\end{equation}
for $\mathfrak{g}_\mu=\mathfrak{so}(4n)$ where $a,b\in\{1,2\}$. More importantly, these results imply that if $\mathfrak{g}_\mu\neq\mathfrak{so}(4n)$, then
\begin{equation}\label{C1}
-f_i^\mu\cdot S_{a=1}^\mu=k_i^\mu N_\mu
\end{equation}
for some integers $k_i^\mu$ having gcd 1. Similarly, if $\mathfrak{g}_\mu=\mathfrak{so}(4n)$, then
\begin{equation}\label{C2}
-f_i^\mu\cdot S_{a}^\mu=2k_{ia}^\mu
\end{equation}
for $a=1,2$ and some integers $k_{i1}^\mu$ having gcd 1 and some integers $k_{i2}^\mu$ having gcd 1.
The upshot of the above analysis is that we have changed the basis of potential 1-form symmetries from $U(1)^\mu_i$ to $U(1)^\mu_a$ such that the W-bosons $f_i^\mu$ break $U(1)^\mu_a$ down to the center $\Gamma_\mu$ of $\mathfrak{g}_\mu$. For $\mathfrak{g}_\mu\neq\mathfrak{so}(4n)$, the \emph{center 1-form symmetry} arises from the $U(1)_{a=1}^\mu$ associated to the surface $S_{a=1}^\mu$. For $\mathfrak{g}_\mu=\mathfrak{so}(4n)$, the \emph{center 1-form symmetry} has two factors which arise from the $U(1)_{a=1}^\mu$ and $U(1)_{a=2}^\mu$ associated to the surfaces $S_{a=1}^\mu$ and $S_{a=2}^\mu$. (\ref{C1}) and (\ref{C2}) simply state that the W-bosons have a charge
\begin{equation}
0~(\text{mod}~n)
\end{equation}
under $U(1)_a^\mu$ where $n$ is the order of the \emph{center symmetry} associated to $U(1)_a^\mu$.
Let us now provide an explicit identification of surfaces $S_{a=1}^\mu$ for various possible simple Lie algebras $\mathfrak{g}_\mu\neq\mathfrak{so}(4n)$ and an explicit identification of surfaces $S_{a=1,2}^\mu$ for $\mathfrak{g}_\mu=\mathfrak{so}(4n)$. As we have discussed above, these surfaces generate the center 1-form symmetries associated to $\mathfrak{g}_\mu$. We leave an explicit identification of $f_b^\mu$ and $S_a^\mu$ for other values of $a$ to the reader.
\begin{itemize}
\item For $\mathfrak{g}_\mu=\mathfrak{su}(n)$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(0,0) node (v1) {} circle [radius=.1]
(1,0) node (v3) {} circle [radius=.1]
(2,0) node (v4) {} circle [radius=.1]
(4,0) node (v5) {} circle [radius=.1];
\node (v2) at (3,0) {$\cdots$};
\draw (v1) edge (v3);
\draw (v3) edge (v4);
\draw (v4) edge (v2);
\draw (v2) edge (v5);
\node at (0,-0.3) {{$1$}};
\node at (1,-0.3) {{$2$}};
\node at (2,-0.3) {{$3$}};
\node at (4,-0.3) {{$n-1$}};
\end{tikzpicture}
\end{equation}
Then, we can take
\begin{equation}
S_{a=1}^\mu=\sum_{i=1}^{n-1}iS^\mu_i \,.
\end{equation}
Only the fiber $f_{i=n-1}^\mu$ has a non-zero charge under the $U(1)$ generated by the above surface. This fiber has charge $n$, thus reducing the $U(1)$ generated by $S^\mu_{a=1}$ to ${\mathbb Z}_n$, which can be identified as the center of $\mathfrak{su}(n)$.\\
We can choose $\tilde e^\mu=e_{i=1}^\mu$.
\item For $\mathfrak{g}_\mu=\mathfrak{so}(2n+1)$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(0,0) node (v1) {} circle [radius=.1]
(1,0) node (v3) {} circle [radius=.1]
(2,0) node (v4) {} circle [radius=.1]
(4,0) node (v5) {} circle [radius=.1]
(5,0) node (v6) {} circle [radius=.1];
\node (v2) at (3,0) {$\cdots$};
\draw (v1) edge (v3);
\draw (v3) edge (v4);
\draw (v4) edge (v2);
\draw (v2) edge (v5);
\node at (0,-0.3) {{$1$}};
\node at (1,-0.3) {{$2$}};
\node at (2,-0.3) {{$3$}};
\node at (4,-0.3) {{$n-1$}};
\node at (5,-0.3) {{$n$}};
\begin{scope}[shift={(-1,0)}]
\draw (5.15,0.025) -- (5.825,0.025) (5.825,-0.025) -- (5.15,-0.025);
\draw (5.5,0.1) -- (5.6,0) -- (5.5,-0.1);
\end{scope}
\end{tikzpicture}
\end{equation}
Then, we can take
\begin{equation}
S_{a=1}^\mu=S^\mu_{i=n} \,.
\end{equation}
The non-trivial charges under this surface are provided by the fibers $f_{i=n-1}^\mu$ and $f_{i=n}^\mu$, both of which have charge $\pm 2$, thus reducing the $U(1)$ generated by $S^\mu_{a=1}$ to ${\mathbb Z}_2$, which can be identified as the center of $\mathfrak{so}(2n+1)$.\\
We can choose $\tilde e^\mu=e_{i=1}^\mu$.
\item For $\mathfrak{g}_\mu=\sp(n)$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(0,0) node (v1) {} circle [radius=.1]
(1,0) node (v3) {} circle [radius=.1]
(2,0) node (v4) {} circle [radius=.1]
(4,0) node (v5) {} circle [radius=.1]
(5,0) node (v6) {} circle [radius=.1];
\node (v2) at (3,0) {$\cdots$};
\draw (v1) edge (v3);
\draw (v3) edge (v4);
\draw (v4) edge (v2);
\draw (v2) edge (v5);
\node at (0,-0.3) {{$1$}};
\node at (1,-0.3) {{$2$}};
\node at (2,-0.3) {{$3$}};
\node at (4,-0.3) {{$n-1$}};
\node at (5,-0.3) {{$n$}};
\begin{scope}[shift={(-1,0)}]
\draw (5.15,0.025) -- (5.825,0.025) (5.825,-0.025) -- (5.15,-0.025);
\draw (5.6,0.1) -- (5.5,0) -- (5.6,-0.1);
\end{scope}
\end{tikzpicture}
\end{equation}
Then, we can take
\begin{equation}
S_{a=1}^\mu=\sum_{i=1}^{n}\frac{1-(-1)^i}{2}S^\mu_i \,.
\end{equation}
Each fiber $f_{i}^\mu$ has charge $\pm 2$ under this surface, thus reducing the $U(1)$ generated by $S^\mu_{a=1}$ to ${\mathbb Z}_2$, which can be identified as the center of $\sp(n)$.\\
We can choose $\tilde e^\mu=e_{i=n}^\mu$.
\item For $\mathfrak{g}_\mu=\mathfrak{so}(4n+2)$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(-0.4,0) node (v1) {} circle [radius=.1]
(0.8,0) node (v3) {} circle [radius=.1]
(2,0) node (v4) {} circle [radius=.1]
(4,0) node (v5) {} circle [radius=.1]
(5,0) node (v6) {} circle [radius=.1]
(0.8,1.2) node (v7) {} circle [radius=.1];
\node (v2) at (3,0) {$\cdots$};
\draw (v1) edge (v3);
\draw (v3) edge (v4);
\draw (v4) edge (v2);
\draw (v2) edge (v5);
\node at (-0.4,-0.3) {{$2n+1$}};
\node at (0.8,-0.3) {{$2n-1$}};
\node at (2,-0.3) {{$2n-2$}};
\node at (4,-0.3) {{$2$}};
\node at (5,-0.3) {{$1$}};
\draw (v5) edge (v6);
\draw (v7) edge (v3);
\node at (0.8,1.5) {{$2n$}};
\end{tikzpicture}
\end{equation}
Then, we can take
\begin{equation}
S_{a=1}^\mu=3S^\mu_{i=2n+1}+S^\mu_{i=2n}+\sum_{i=1}^{2n-1}\left(1-(-1)^i\right)S^\mu_i \,.
\end{equation}
Each fiber $f_{i}^\mu$ has charge $\pm 4$ under this surface except for $f^\mu_{i=2n}$ which has 0 charge. Thus, the $U(1)$ generated by $S^\mu_{a=1}$ is reduced to ${\mathbb Z}_4$, which can be identified as the center of $\mathfrak{so}(4n+2)$.\\
We can choose $\tilde e^\mu=e_{i=1}^\mu$.
\item For $\mathfrak{g}_\mu=\mathfrak{so}(4n)$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(-0.4,0) node (v1) {} circle [radius=.1]
(0.8,0) node (v3) {} circle [radius=.1]
(2,0) node (v4) {} circle [radius=.1]
(4,0) node (v5) {} circle [radius=.1]
(5,0) node (v6) {} circle [radius=.1]
(0.8,1.2) node (v7) {} circle [radius=.1];
\node (v2) at (3,0) {$\cdots$};
\draw (v1) edge (v3);
\draw (v3) edge (v4);
\draw (v4) edge (v2);
\draw (v2) edge (v5);
\node at (-0.4,-0.3) {{$2n-1$}};
\node at (0.8,-0.3) {{$2n-2$}};
\node at (2,-0.3) {{$2n-3$}};
\node at (4,-0.3) {{$2$}};
\node at (5,-0.3) {{$1$}};
\draw (v5) edge (v6);
\draw (v7) edge (v3);
\node at (0.8,1.5) {{$2n$}};
\end{tikzpicture}
\end{equation}
Then, we can take
\begin{align}
S_{a=1}^\mu&=\sum_{i=1}^{2n-1}\frac{1-(-1)^i}2S^\mu_i\\
S_{a=2}^\mu&=S^\mu_{i=2n}+\sum_{i=1}^{2n-2}\frac{1-(-1)^i}2S^\mu_i \,.
\end{align}
Each fiber $f_{i}^\mu$ has charge $\pm 2$ under $S_{a=1}^\mu$ except for $f^\mu_{i=2n}$ which has 0 charge. Similarly, each fiber $f_{i}^\mu$ has charge $\pm 2$ under $S_{a=2}^\mu$ except for $f^\mu_{i=2n-1}$ which has 0 charge. Thus, the $U(1)\times U(1)$ generated by $S^\mu_{a=1}$ and $S^\mu_{a=2}$ is reduced to ${\mathbb Z}_2\times{\mathbb Z}_2$, which can be identified as the center of $\mathfrak{so}(4n)$.\\
We can choose $\tilde e^\mu=e_{i=1}^\mu$.
\item For $\mathfrak{g}_\mu=\mathfrak{e}_6$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(1,0) node (v1) {} circle [radius=.1]
(2,0) node (v3) {} circle [radius=.1]
(3,0) node (v4) {} circle [radius=.1]
(4,0) node (v5) {} circle [radius=.1]
(5,0) node (v6) {} circle [radius=.1]
(3,1) node (v7) {} circle [radius=.1];
\node at (1,-0.3) {{$5$}};
\node at (2,-0.3) {{$4$}};
\node at (3,-0.3) {{$3$}};
\node at (4,-0.3) {{$2$}};
\node at (5,-0.3) {{$1$}};
\draw (v5) edge (v6);
\node at (3,1.3) {{$6$}};
\draw (v4) edge (v5);
\draw (v3) edge (v4);
\draw (v1) edge (v3);
\draw (v7) edge (v4);
\end{tikzpicture}
\end{equation}
Then, we can take
\begin{equation}
S_{a=1}^\mu=\sum_{i=1}^{5}iS^\mu_i \,.
\end{equation}
Only the fibers $f_{i=5}^\mu$ and $f_{i=6}^\mu$ have non-trivial charges under this surface, which are $6$ and $3$ respectively. Thus, the $U(1)$ generated by $S^\mu_{a=1}$ is reduced to ${\mathbb Z}_3$, which can be identified as the center of $\mathfrak{e}_6$.\\
We can choose $\tilde e^\mu=e_{i=1}^\mu$.
\item For $\mathfrak{g}_\mu=\mathfrak{e}_7$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(1,0) node (v1) {} circle [radius=.1]
(2,0) node (v3) {} circle [radius=.1]
(3,0) node (v4) {} circle [radius=.1]
(4,0) node (v5) {} circle [radius=.1]
(5,0) node (v6) {} circle [radius=.1]
(6,0) node (v8) {} circle [radius=.1]
(3,1) node (v7) {} circle [radius=.1];
\node at (1,-0.3) {{$6$}};
\node at (2,-0.3) {{$5$}};
\node at (3,-0.3) {{$4$}};
\node at (4,-0.3) {{$3$}};
\node at (5,-0.3) {{$2$}};
\node at (6,-0.3) {{$1$}};
\draw (v5) edge (v6);
\node at (3,1.3) {{$7$}};
\draw (v4) edge (v5);
\draw (v3) edge (v4);
\draw (v1) edge (v3);
\draw (v7) edge (v4);
\draw (v6) edge (v8);
\end{tikzpicture}
\end{equation}
Then, we can take
\begin{equation}
S_{a=1}^\mu=S_{i=1}^\mu+S_{i=3}^\mu+S_{i=7}^\mu \,.
\end{equation}
Each fiber $f_{i}^\mu$ has charge $\pm 2$ under this surface except for $f^\mu_{i=5}$ and $f^\mu_{i=6}$, both of which have 0 charge. Thus, the $U(1)$ generated by $S^\mu_{a=1}$ is reduced to ${\mathbb Z}_2$, which can be identified as the center of $\mathfrak{e}_7$.\\
We can choose $\tilde e^\mu=e_{i=1}^\mu$.
\item For $\mathfrak{g}_\mu=\mathfrak{e}_8$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(1,0) node (v1) {} circle [radius=.1]
(2,0) node (v3) {} circle [radius=.1]
(3,0) node (v4) {} circle [radius=.1]
(4,0) node (v5) {} circle [radius=.1]
(5,0) node (v6) {} circle [radius=.1]
(6,0) node (v8) {} circle [radius=.1]
(7,0) node (v9) {} circle [radius=.1]
(3,1) node (v7) {} circle [radius=.1];
\node at (1,-0.3) {{$7$}};
\node at (2,-0.3) {{$6$}};
\node at (3,-0.3) {{$5$}};
\node at (4,-0.3) {{$4$}};
\node at (5,-0.3) {{$3$}};
\node at (6,-0.3) {{$2$}};
\node at (7,-0.3) {{$1$}};
\draw (v5) edge (v6);
\node at (3,1.3) {{$8$}};
\draw (v4) edge (v5);
\draw (v3) edge (v4);
\draw (v1) edge (v3);
\draw (v7) edge (v4);
\draw (v6) edge (v8);
\draw (v8) edge (v9);
\end{tikzpicture}
\end{equation}
There is no linear combination of $S_i^\mu$ under which $f_j^\mu$ have charges with gcd bigger than 1, which is consistent with the fact that the center of $\mathfrak{e}_8$ is trivial.\\
We can choose $\tilde e^\mu=e_{i=1}^\mu$.
\item For $\mathfrak{g}_\mu=\mathfrak{f}_4$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(6,0) node (v3) {} circle [radius=.1]
(3,0) node (v4) {} circle [radius=.1]
(4,0) node (v5) {} circle [radius=.1]
(5,0) node (v6) {} circle [radius=.1];
\node at (6,-0.3) {{$4$}};
\node at (3,-0.3) {{$1$}};
\node at (4,-0.3) {{$2$}};
\node at (5,-0.3) {{$3$}};
\begin{scope}[shift={(-1,0)}]
\draw (5.15,0.025) -- (5.825,0.025) (5.825,-0.025) -- (5.15,-0.025);
\draw (5.6,0.1) -- (5.5,0) -- (5.6,-0.1);
\end{scope}
\draw (v4) edge (v5);
\draw (v6) edge (v3);
\end{tikzpicture}
\end{equation}
There is no linear combination of $S_i^\mu$ under which $f_j^\mu$ have charges with gcd bigger than 1, which is consistent with the fact that the center of $\mathfrak{f}_4$ is trivial.\\
We can choose $\tilde e^\mu=e_{i=4}^\mu$.
\item For $\mathfrak{g}_\mu=\mathfrak{g}_2$, label the nodes in the Dynkin diagram as
\begin{equation}
\begin{tikzpicture}[scale=1.5]
\draw[fill=black]
(4,0) node (v5) {} circle [radius=.1]
(5,0) node (v6) {} circle [radius=.1];
\node at (4,-0.3) {{$1$}};
\node at (5,-0.3) {{$2$}};
\begin{scope}[shift={(-1,0)}]
\draw (5.15,0.025) -- (5.825,0.025) (5.825,-0.025) -- (5.15,-0.025);
\draw (5.15,0) -- (5.825,0);
\draw (5.6,0.1) -- (5.5,0) -- (5.6,-0.1);
\end{scope}
\end{tikzpicture}
\end{equation}
There is no linear combination of $S_i^\mu$ under which $f_j^\mu$ have charges with gcd bigger than 1, which is consistent with the fact that the center of $\mathfrak{g}_2$ is trivial.\\
We can choose $\tilde e^\mu=e_{i=2}^\mu$.
\end{itemize}
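The fiber charges quoted in the items above follow mechanically from (\ref{Cartan}): the charge of $f_j^\mu$ under $S=\sum_i p_i^\mu S_i^\mu$ is $\sum_i p_i^\mu M_{ji}$. The sketch below (illustrative names, node labellings as in the diagrams above) verifies the $\mathfrak{su}(n)$ and $\mathfrak{e}_6$ cases:

```python
from math import gcd
from functools import reduce

def fiber_center_charges(M, p):
    """Charge of each fiber f_j under S = sum_i p_i S_i: sum_i p_i M[j][i]."""
    return [sum(p[i] * M[j][i] for i in range(len(p))) for j in range(len(M))]

def cartan_su(n):
    # Cartan matrix of su(n), i.e. of type A_{n-1}
    r = n - 1
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0
             for j in range(r)] for i in range(r)]

# su(5) with p_i = i: only f_{n-1} is charged, with charge n = 5
print(fiber_center_charges(cartan_su(5), [1, 2, 3, 4]))  # [0, 0, 0, 5]

# e6 with node 6 attached to node 3, and p = (1, 2, 3, 4, 5, 0)
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (3, 6)]
M_e6 = [[2 if i == j else
         -1 if (i + 1, j + 1) in edges or (j + 1, i + 1) in edges else 0
         for j in range(6)] for i in range(6)]
charges = fiber_center_charges(M_e6, [1, 2, 3, 4, 5, 0])
print(charges)                         # [0, 0, 0, 0, 6, -3]
print(reduce(gcd, map(abs, charges)))  # 3: the Z_3 center of e6
```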
Now that we have identified the centers $\Gamma_\mu$ of $\mathfrak{g}_\mu$ in terms of surfaces, it is straightforward to compute the charges of other particles under $\Gamma_\mu$. Let us first consider the effect of a (full or half) hyper charged in an irreducible representation $R$ of the gauge algebra $\mathfrak{g}=\oplus_\mu\mathfrak{g}_\mu$. The highest weight of $R$ is given by some non-negative integers $n^\mu_i$ for various $i$ and $\mu$. Then, the geometry for the gauge theory must contain a curve $C$ which satisfies
\begin{equation}
-C\cdot S^\mu_i = n^\mu_i \,.
\end{equation}
Moreover, the other curves associated to this hyper can be obtained from $C$ by subtracting $f^\nu_j$ for various $j$ and $\nu$ from it. Since the $f^\nu_j$ do not screen the potential 1-form symmetry given by the center $\Gamma=\prod_\mu\Gamma_\mu$, the screening due to the hyper is completely captured by the charge of the curve $C$ under $\Gamma$, which can be readily computed using the data provided so far. For $\mathfrak{g}_\mu\neq\mathfrak{so}(4n)$, we have a single surface responsible for generating the center, which can be written as
\begin{equation}
S_{a=1}^\mu=\sum_{i=1}^{r_\mu}p_i^\mu S_i^\mu \,,
\end{equation}
from which we find that the charge of the hyper under $\Gamma_\mu$ is
\begin{equation}\label{ch1}
\sum_{i=1}^{r_\mu}p_i^\mu n^\mu_i~(\text{mod}~N_\mu) \,,
\end{equation}
where $N_\mu$ is the order of $\Gamma_\mu$. On the other hand, for $\mathfrak{g}_\mu=\mathfrak{so}(4n)$, the charges under $\Gamma_\mu={\mathbb Z}_2^2$ are given by
\begin{equation}\label{ch2}
\left(~\sum_{i=1}^{2n-1}\frac{1-(-1)^i}2n^\mu_i~(\text{mod}~2),~n^\mu_{i=2n}+\sum_{i=1}^{2n-2}\frac{1-(-1)^i}2n^\mu_i~(\text{mod}~2)\right) \,.
\end{equation}
Thus, we have computed the charge of an arbitrary irreducible representation $R$ under the center $\Gamma$ of a semi-simple Lie algebra $\mathfrak{g}$. One can use the results presented here to verify the charges tabulated in Table \ref{table}.
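As a simple illustration of (\ref{ch1}), take $\mathfrak{g}_\mu=\mathfrak{su}(n)$, for which $p_i^\mu=i$ (this is the combination used in the computations below) and $N_\mu=n$. A hyper in the fundamental representation has highest weight $n_i^\mu=\delta_{i,1}$, giving charge
\begin{equation}
\sum_{i=1}^{n-1}i\,\delta_{i,1}=1~(\text{mod}~n) \,,
\end{equation}
while a hyper in $\Lambda^2$ has $n_i^\mu=\delta_{i,2}$ and charge $2~(\text{mod}~n)$, reproducing the familiar charges of these representations under the ${\mathbb Z}_n$ center.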
At this point, we have incorporated the effect of the fibers and blowups living in all the Hirzebruch surfaces $S_i$. The fibers were responsible for breaking the potential 1-form symmetry down to the center, and the blowups encode the reduction of the center 1-form symmetry induced by the hypermultiplets. The only contributions left to be taken into account now come from the $e$ curves of $S_i$. These contributions can be drastically simplified, since we only need to take into account a single $e$ curve for each $\mu$. This follows from the result (\ref{inst}), which states that the contribution of every $e_i^\mu$ for a fixed $\mu$ is accounted for by $\tilde e^\mu$ up to contributions coming from fibers and blowups, and we have already accounted for the latter. So the relevant instanton contribution is captured by the charges of $\tilde e^\mu$ under the center $\Gamma_\nu$
\begin{equation}
-\tilde e^\mu\cdot S_a^\nu \,,
\end{equation}
where $a=1$ for $\mathfrak{g}_\nu\neq\mathfrak{so}(4n)$ and $a=1,2$ for $\mathfrak{g}_\nu=\mathfrak{so}(4n)$.
Let us see how these instanton contributions affect gauge theories carrying a simple gauge algebra only. Consider first pure gauge theories for which geometries were provided in \cite{Bhardwaj:2019ngx}. For a pure $\mathfrak{su}(n)$ theory with CS level $k$ such that $0\le k<n-2$, the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-4.9,-0.5) {$\mathbf{1}_{n-2-k}$};
\node (v2) at (-3.1,-0.5) {$\mathbf{2}_{n-4-k}$};
\node (v3) at (-1.3,-0.5) {$\mathbf{3}_{n-6-k}$};
\node at (-4.4,-0.4) {\scriptsize{$e$}};
\node at (-3.6,-0.4) {\scriptsize{$h$}};
\draw (v1) edge (v2);
\draw (v2) edge (v3);
\begin{scope}[shift={(1.8,0)}]
\node at (-4.4,-0.4) {\scriptsize{$e$}};
\node at (-3.6,-0.4) {\scriptsize{$h$}};
\end{scope}
\node (v4) at (-0.2,-0.5) {$\cdots$};
\draw (v3) edge (v4);
\node (v5) at (1.2,-0.5) {$\mathbf{(n-1)}_{2-n-k}$};
\draw (v4) edge (v5);
\begin{scope}[shift={(3.6,0)}]
\node at (-4.4,-0.4) {\scriptsize{$e$}};
\node at (-3.2,-0.4) {\scriptsize{$h$}};
\end{scope}
\end{tikzpicture}
\end{equation}
where $\mathbf{i}_n$ is a notation for a Hirzebruch surface $S_i={\mathbb F}_n$ without any blowups. An edge between two surfaces denotes an intersection between the two surfaces. The labels on each end of the edge denote the gluing curves inside the two surfaces being identified to construct the intersection. We can compute
\begin{align}
-\tilde e\cdot \left(\sum_{i=1}^{n-1}iS_i\right)=-e_1\cdot S_1-2e_1\cdot S_2&=(k+4-n)+(2n-4-2k)\\
&=n-k=-k~(\text{mod}~n) \,,
\end{align}
which reproduces the contribution from the instanton proposed in \cite{Morrison:2020ool}. Similarly, for $k=n-2+2m$ with $m\ge0$ the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-4.9,-0.5) {$\mathbf{1}_{0}$};
\node (v2) at (-3.1,-0.5) {$\mathbf{2}_{4-n+k}$};
\node (v3) at (-2,-0.5) {$\cdots$};
\node at (-4.5,-0.4) {\scriptsize{$e$+$mf$}};
\node at (-3.6,-0.4) {\scriptsize{$e$}};
\draw (v1) edge (v2);
\draw (v2) edge (v3);
\begin{scope}[shift={(1.8,0)}]
\node at (-4.4,-0.4) {\scriptsize{$h$}};
\node at (-3.2,-0.4) {\scriptsize{$e$}};
\end{scope}
\node (v4) at (-0.7,-0.5) {$\mathbf{(n-2)}_{n-4+k}$};
\draw (v3) edge (v4);
\node (v5) at (1.3,-0.5) {$\mathbf{(n-1)}_{n-2+k}$};
\draw (v4) edge (v5);
\begin{scope}[shift={(3.6,0)}]
\node at (-3.6,-0.4) {\scriptsize{$h$}};
\node at (-3,-0.4) {\scriptsize{$e$}};
\end{scope}
\end{tikzpicture}
\end{equation}
from which we compute
\begin{align}
-\tilde e\cdot \left(\sum_{i=1}^{n-1}iS_i\right)=-e_1\cdot S_1-2e_1\cdot S_2&=2-2m\\
&=-k~(\text{mod}~n) \,.
\end{align}
For $k=n-1+2m$ with $m\ge0$ the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-4.9,-0.5) {$\mathbf{1}_{1}$};
\node (v2) at (-3.1,-0.5) {$\mathbf{2}_{4-n+k}$};
\node (v3) at (-2,-0.5) {$\cdots$};
\node at (-4.5,-0.4) {\scriptsize{$h$+$mf$}};
\node at (-3.6,-0.4) {\scriptsize{$e$}};
\draw (v1) edge (v2);
\draw (v2) edge (v3);
\begin{scope}[shift={(1.8,0)}]
\node at (-4.4,-0.4) {\scriptsize{$h$}};
\node at (-3.2,-0.4) {\scriptsize{$e$}};
\end{scope}
\node (v4) at (-0.7,-0.5) {$\mathbf{(n-2)}_{n-4+k}$};
\draw (v3) edge (v4);
\node (v5) at (1.3,-0.5) {$\mathbf{(n-1)}_{n-2+k}$};
\draw (v4) edge (v5);
\begin{scope}[shift={(3.6,0)}]
\node at (-3.6,-0.4) {\scriptsize{$h$}};
\node at (-3,-0.4) {\scriptsize{$e$}};
\end{scope}
\end{tikzpicture}
\end{equation}
from which we compute
\begin{align}
-\tilde e\cdot \left(\sum_{i=1}^{n-1}iS_i\right)=-e_1\cdot S_1-2e_1\cdot S_2&=1-2m\\
&=-k~(\text{mod}~n) \,.
\end{align}
Thus we find that for pure $\mathfrak{su}(n)$ with CS level $k$, the instanton contributions can be accounted for by considering an instanton of charge $-k~(\text{mod}~n)$ under the center ${\mathbb Z}_n$.
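To make the consequence explicit: the ${\mathbb Z}_n$ center is then screened by this instanton charge, so pure $\mathfrak{su}(n)$ at CS level $k$ retains the 1-form symmetry
\begin{equation}
\mathcal{O}={\mathbb Z}_{\gcd(n,k)} \,,
\end{equation}
in agreement with \cite{Morrison:2020ool}. For example, pure $\mathfrak{su}(4)$ at $k=2$ retains $\mathcal{O}={\mathbb Z}_2$, while any $k$ coprime to $n$ leaves no 1-form symmetry.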
For pure $\mathfrak{so}(2n+1)$, the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-2.9,-0.5) {$\mathbf{1}_{2n-5}$};
\node (v3) at (-1.3,-0.5) {$\mathbf{2}_{2n-7}$};
\node at (-2.5,-0.4) {\scriptsize{$e$}};
\node at (-1.7,-0.4) {\scriptsize{$h$}};
\node (v4) at (-0.3,-0.5) {$\cdots$};
\draw (v3) edge (v4);
\node (v5) at (0.9,-0.5) {$\mathbf{(n-2)}_{1}$};
\draw (v4) edge (v5);
\begin{scope}[shift={(3.6,0)}]
\node at (-4.5,-0.4) {\scriptsize{$e$}};
\node at (-3.2,-0.4) {\scriptsize{$h$}};
\end{scope}
\draw (v1) edge (v3);
\node (v2) at (2.4,-0.5) {$\mathbf{(n-1)}_1$};
\draw (v5) edge (v2);
\node at (1.4,-0.4) {\scriptsize{$e$}};
\node at (1.9,-0.4) {\scriptsize{$e$}};
\node (v6) at (3.7,-0.5) {$\mathbf{n}_6$};
\draw (v2) edge (v6);
\node at (2.9,-0.4) {\scriptsize{$2h$}};
\node at (3.4,-0.4) {\scriptsize{$e$}};
\end{tikzpicture}
\end{equation}
from which we compute
\begin{equation}
-\tilde e\cdot S_n=-e_1\cdot S_n=0 \,.
\end{equation}
Thus the instanton associated to $\mathfrak{so}(2n+1)$ is not charged under its center.
For pure $\sp(n)$ with $\theta=n\pi~(\text{mod}~2\pi)$, the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-2.9,-0.5) {$\mathbf{1}_{2n+2}$};
\node (v3) at (-1.3,-0.5) {$\mathbf{2}_{2n}$};
\node at (-2.5,-0.4) {\scriptsize{$e$}};
\node at (-1.7,-0.4) {\scriptsize{$h$}};
\node (v4) at (-0.3,-0.5) {$\cdots$};
\draw (v3) edge (v4);
\node (v5) at (0.9,-0.5) {$\mathbf{(n-2)}_{8}$};
\draw (v4) edge (v5);
\begin{scope}[shift={(3.6,0)}]
\node at (-4.5,-0.4) {\scriptsize{$e$}};
\node at (-3.2,-0.4) {\scriptsize{$h$}};
\end{scope}
\draw (v1) edge (v3);
\node (v2) at (2.4,-0.5) {$\mathbf{(n-1)}_6$};
\draw (v5) edge (v2);
\node at (1.4,-0.4) {\scriptsize{$e$}};
\node at (1.9,-0.4) {\scriptsize{$h$}};
\node (v6) at (3.7,-0.5) {$\mathbf{n}_1$};
\draw (v2) edge (v6);
\node at (3.4,-0.4) {\scriptsize{$2h$}};
\node at (2.9,-0.4) {\scriptsize{$e$}};
\end{tikzpicture}
\end{equation}
from which we compute
\begin{equation}
-\tilde e\cdot \left(\sum_{i=1}^{n}\frac{1-(-1)^i}{2}S_i\right)=-e_n\cdot \left(\frac{1-(-1)^n}{2}S_n\right)=\frac{1-(-1)^n}{2} \,,
\end{equation}
which is only non-trivial for $n=2m+1$.
Similarly, for pure $\sp(n)$ with $\theta=(n+1)\pi~(\text{mod}~2\pi)$, the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-2.9,-0.5) {$\mathbf{1}_{2n+2}$};
\node (v3) at (-1.3,-0.5) {$\mathbf{2}_{2n}$};
\node at (-2.5,-0.4) {\scriptsize{$e$}};
\node at (-1.7,-0.4) {\scriptsize{$h$}};
\node (v4) at (-0.3,-0.5) {$\cdots$};
\draw (v3) edge (v4);
\node (v5) at (0.9,-0.5) {$\mathbf{(n-2)}_{8}$};
\draw (v4) edge (v5);
\begin{scope}[shift={(3.6,0)}]
\node at (-4.5,-0.4) {\scriptsize{$e$}};
\node at (-3.2,-0.4) {\scriptsize{$h$}};
\end{scope}
\draw (v1) edge (v3);
\node (v2) at (2.4,-0.5) {$\mathbf{(n-1)}_6$};
\draw (v5) edge (v2);
\node at (1.4,-0.4) {\scriptsize{$e$}};
\node at (1.9,-0.4) {\scriptsize{$h$}};
\node (v6) at (3.9,-0.5) {$\mathbf{n}_0$};
\draw (v2) edge (v6);
\node at (3.5,-0.4) {\scriptsize{$2e$+$f$}};
\node at (2.9,-0.4) {\scriptsize{$e$}};
\end{tikzpicture}
\end{equation}
from which we compute
\begin{equation}
-\tilde e\cdot \left(\sum_{i=1}^{n}\frac{1-(-1)^i}{2}S_i\right)=-\frac{1-(-1)^{n-1}}{2}~(\text{mod}~2) \,,
\end{equation}
which is only non-trivial for $n=2m$. Thus, combining both cases, we find that the instanton has a non-trivial contribution only for $\sp(n)$ with $\theta=\pi$, for which it contributes with charge $1$ under the center ${\mathbb Z}_2$ associated to $\sp(n)$. This agrees with the proposal of \cite{Morrison:2020ool}.
For pure $\mathfrak{so}(4n+2)$ the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-2.9,-0.5) {$\mathbf{1}_{4n-4}$};
\node (v3) at (-1.3,-0.5) {$\mathbf{2}_{4n-6}$};
\node at (-2.5,-0.4) {\scriptsize{$e$}};
\node at (-1.7,-0.4) {\scriptsize{$h$}};
\node (v4) at (-0.2,-0.5) {$\cdots$};
\draw (v3) edge (v4);
\node (v5) at (0.9,-0.5) {$\mathbf{(2n-2)}_{2}$};
\draw (v4) edge (v5);
\begin{scope}[shift={(3.6,0)}]
\node at (-4.5,-0.4) {\scriptsize{$e$}};
\node at (-3.3,-0.4) {\scriptsize{$h$}};
\end{scope}
\draw (v1) edge (v3);
\node (v2) at (2.6,-0.5) {$\mathbf{(2n-1)}_0$};
\draw (v5) edge (v2);
\node at (1.5,-0.4) {\scriptsize{$e$}};
\node at (2,-0.4) {\scriptsize{$e$}};
\node (v6) at (4.1,-0.5) {$\mathbf{2n}_2$};
\draw (v2) edge (v6);
\node at (3.2,-0.4) {\scriptsize{$e$}};
\node at (3.7,-0.4) {\scriptsize{$e$}};
\node (v7) at (2.6,0.5) {$\mathbf{(2n+1)}_2$};
\draw (v7) edge (v2);
\node at (2.5,-0.2) {\scriptsize{$e$}};
\node at (2.5,0.2) {\scriptsize{$e$}};
\end{tikzpicture}
\end{equation}
for which we compute
\begin{equation}
-\tilde e\cdot \left(3S_{i=2n+1}+S_{i=2n}+\sum_{i=1}^{2n-1}\left(1-(-1)^i\right)S_i\right)=-2e_1\cdot S_1=12-8n=0~(\text{mod}~4) \,.
\end{equation}
Thus the instanton associated to $\mathfrak{so}(4n+2)$ is not charged under its ${\mathbb Z}_4$ center.
For pure $\mathfrak{so}(4n)$ the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-2.9,-0.5) {$\mathbf{1}_{4n-6}$};
\node (v3) at (-1.3,-0.5) {$\mathbf{2}_{4n-8}$};
\node at (-2.5,-0.4) {\scriptsize{$e$}};
\node at (-1.7,-0.4) {\scriptsize{$h$}};
\node (v4) at (-0.2,-0.5) {$\cdots$};
\draw (v3) edge (v4);
\node (v5) at (0.9,-0.5) {$\mathbf{(2n-3)}_{2}$};
\draw (v4) edge (v5);
\begin{scope}[shift={(3.6,0)}]
\node at (-4.5,-0.4) {\scriptsize{$e$}};
\node at (-3.3,-0.4) {\scriptsize{$h$}};
\end{scope}
\draw (v1) edge (v3);
\node (v2) at (2.6,-0.5) {$\mathbf{(2n-2)}_0$};
\draw (v5) edge (v2);
\node at (1.5,-0.4) {\scriptsize{$e$}};
\node at (2,-0.4) {\scriptsize{$e$}};
\node (v6) at (4.1,-0.5) {$\mathbf{2n}_2$};
\draw (v2) edge (v6);
\node at (3.2,-0.4) {\scriptsize{$e$}};
\node at (3.7,-0.4) {\scriptsize{$e$}};
\node (v7) at (2.6,0.5) {$\mathbf{(2n-1)}_2$};
\draw (v7) edge (v2);
\node at (2.5,-0.2) {\scriptsize{$e$}};
\node at (2.5,0.2) {\scriptsize{$e$}};
\end{tikzpicture}
\end{equation}
for which we compute
\begin{equation}
-\tilde e\cdot \left(\sum_{i=1}^{2n-1}\frac{1-(-1)^i}2S_i\right)=-e_1\cdot S_1=8-4n=0~(\text{mod}~2) \,,
\end{equation}
and
\begin{equation}
-\tilde e\cdot \left(S_{i=2n}+\sum_{i=1}^{2n-2}\frac{1-(-1)^i}2S_i\right)=-e_1\cdot S_1=8-4n=0~(\text{mod}~2) \,.
\end{equation}
Thus the instanton associated to $\mathfrak{so}(4n)$ is not charged under its ${\mathbb Z}_2^2$ center.
For pure $\mathfrak{e}_6$ the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node at (0.6,-0.4) {\scriptsize{$h$}};
\node at (-0.1,-0.4) {\scriptsize{$e$}};
\node (v4) at (-0.4,-0.5) {$\mathbf{1}_4$};
\node (v5) at (0.9,-0.5) {$\mathbf{2}_{2}$};
\draw (v4) edge (v5);
\node (v2) at (2.2,-0.5) {$\mathbf{3}_0$};
\draw (v5) edge (v2);
\node at (1.2,-0.4) {\scriptsize{$e$}};
\node at (1.9,-0.4) {\scriptsize{$e$}};
\node (v6) at (3.5,-0.5) {$\mathbf{4}_2$};
\draw (v2) edge (v6);
\node at (2.5,-0.4) {\scriptsize{$e$}};
\node at (3.2,-0.4) {\scriptsize{$e$}};
\node (v7) at (2.2,0.5) {$\mathbf{6}_2$};
\draw (v7) edge (v2);
\node at (2.1,-0.2) {\scriptsize{$e$}};
\node at (2.1,0.2) {\scriptsize{$e$}};
\node (v8) at (4.8,-0.5) {$\mathbf{5}_4$};
\draw (v6) edge (v8);
\node at (3.8,-0.4) {\scriptsize{$h$}};
\node at (4.5,-0.4) {\scriptsize{$e$}};
\end{tikzpicture}
\end{equation}
for which we compute
\begin{equation}
-\tilde e\cdot \left(\sum_{i=1}^{5}iS_i\right)=-e_1\cdot S_1-2e_1\cdot S_2=-2+8=0~(\text{mod}~3) \,.
\end{equation}
Thus the instanton associated to $\mathfrak{e}_6$ is not charged under its ${\mathbb Z}_3$ center.
For pure $\mathfrak{e}_7$ the geometry is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node at (0.6,-0.4) {\scriptsize{$h$}};
\node at (-0.1,-0.4) {\scriptsize{$e$}};
\node (v4) at (-0.4,-0.5) {$\mathbf{2}_4$};
\node (v5) at (0.9,-0.5) {$\mathbf{3}_{2}$};
\draw (v4) edge (v5);
\node (v2) at (2.2,-0.5) {$\mathbf{4}_0$};
\draw (v5) edge (v2);
\node at (1.2,-0.4) {\scriptsize{$e$}};
\node at (1.9,-0.4) {\scriptsize{$e$}};
\node (v6) at (3.5,-0.5) {$\mathbf{5}_2$};
\draw (v2) edge (v6);
\node at (2.5,-0.4) {\scriptsize{$e$}};
\node at (3.2,-0.4) {\scriptsize{$e$}};
\node (v7) at (2.2,0.5) {$\mathbf{7}_2$};
\draw (v7) edge (v2);
\node at (2.1,-0.2) {\scriptsize{$e$}};
\node at (2.1,0.2) {\scriptsize{$e$}};
\node (v8) at (4.8,-0.5) {$\mathbf{6}_4$};
\draw (v6) edge (v8);
\node at (3.8,-0.4) {\scriptsize{$h$}};
\node at (4.5,-0.4) {\scriptsize{$e$}};
\node (v1) at (-1.7,-0.5) {$\mathbf{1}_6$};
\draw (v1) edge (v4);
\node at (-0.7,-0.4) {\scriptsize{$h$}};
\node at (-1.4,-0.4) {\scriptsize{$e$}};
\end{tikzpicture}
\end{equation}
for which we compute
\begin{equation}
-\tilde e\cdot \left(S_1+S_3+S_7\right)=-e_1\cdot S_1=-4=0~(\text{mod}~2) \,.
\end{equation}
Thus the instanton associated to $\mathfrak{e}_7$ is not charged under its ${\mathbb Z}_2$ center.
Thus, for pure gauge theories we find that only for the case of $\mathfrak{su}(n)$ with CS level $k$ and $\sp(n)$ with $\theta=\pi$ do we have to include contributions from instanton particles. Let us consider adding matter in the form of full hypermultiplets in some representation $R$ of $\mathfrak{g}$. If $\mathfrak{g}\neq\mathfrak{su}(n),\sp(n)$ then the geometry for the theory can be represented as the geometry for the pure theory plus some blowups on top of the surfaces $S_i$ which are possibly glued to each other in some way \cite{Bhardwaj:2019ngx}. This means that the intersections of $\tilde e$ curve with the surfaces remain the same as in the pure case. That is, for $\mathfrak{g}\neq\mathfrak{su}(n),\sp(n)$ we do not need to consider the instanton contributions.
For $\mathfrak{g}=\mathfrak{su}(n)$ with CS level $k$, the addition of a full hyper in a representation $R$ of $\mathfrak{su}(n)$ shifts the CS level\footnote{Our convention for CS level differs from the convention used in \cite{Morrison:2020ool}. In our convention, the CS level is defined by the tree-level contribution to the prepotential of the theory.} by $\pm\frac{A(R)}{2}$, where $A(R)$ is the \emph{anomaly coefficient} associated to $R$ (see \cite{Bhardwaj:2019ngx}). Then, for an $\mathfrak{su}(n)$ theory with CS level $k$ and full hypers forming a (in general reducible) rep $R$, the instanton contributions can be accounted for by including an instanton particle of charge
\begin{equation}\label{kfh}
-k+\frac{A(R)}2~(\text{mod}~n)
\end{equation}
under the center ${\mathbb Z}_n$.
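As an illustration of (\ref{kfh}), recall the standard value $A(\Lambda^2)=n-4$ for $\mathfrak{su}(n)$ (quoted here for the example). For $\mathfrak{su}(4)$ at CS level $k$ with full hypers in $\Lambda^2$, each hyper shifts the instanton charge by $A(\Lambda^2)/2=0$, so the instanton carries charge
\begin{equation}
-k~(\text{mod}~4) \,,
\end{equation}
while the $\Lambda^2$ matter screens charge $2$ under the ${\mathbb Z}_4$ center. The surviving 1-form symmetry is therefore ${\mathbb Z}_2$ for even $k$ and trivial for odd $k$.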
For $\mathfrak{g}=\sp(n)$, one can either add hypers such that the theta angle becomes irrelevant, or add hypers such that it remains relevant. If the theta angle becomes irrelevant, there are no instanton contributions to account for. If the theta angle remains relevant, then for $\theta=\pi$ we need to account for an instanton particle with charge $1~(\text{mod}~2)$ under the center ${\mathbb Z}_2$ of $\sp(n)$.
The above discussion wraps up the story of relevant instanton contributions for gauge theories with a simple gauge algebra and matter in full hypers only. Interesting new phenomena arise if we add matter in half-hypers of the simple gauge algebra. Unlike the case of full hypers discussed above, it is not possible to write a geometry carrying half-hypers in terms of the geometry for the pure theory plus some blowups (that are possibly glued with each other). Thus, it is possible for the instanton contributions to be different from the instanton contributions for the pure gauge theory. As an illustrative example, consider adding a half-hyper in $\S$ to a pure $\mathfrak{so}(12)$ gauge theory. Since the instanton contribution to the pure $\mathfrak{so}(12)$ gauge theory is trivial, we might naively conclude that we only need to include the effect of matter in the spinor rep $\S$, thus arriving at the conclusion that the $5d$ gauge theory $\mathfrak{so}(12)+\frac{1}{2}\S$ has
\begin{equation}
\mathcal{O}={\mathbb Z}_2 \,.
\end{equation}
However, let us take a look at the geometry corresponding to this gauge theory which can be written as \cite{Bhardwaj:2020gyu}
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-5.3,-0.5) {$\mathbf{3}_{2}$};
\node (v2) at (-3.3,-0.5) {$\mathbf{2}_{4}$};
\node (v3) at (-9.5,-0.5) {$\mathbf{5}_{2}$};
\node (v0) at (-7.6,-0.5) {$\mathbf{4}^1_{0}$};
\draw (v1) edge (v2);
\node at (-5.6,-0.4) {\scriptsize{$e$}};
\node at (-3,-0.8) {\scriptsize{$h$+$f$}};
\node at (-7.9,-0.4) {\scriptsize{$e$}};
\node at (-9.1,-0.4) {\scriptsize{$e$}};
\draw (v0) edge (v1);
\node at (-7.2,-0.4) {\scriptsize{$e$}};
\node at (-5,-0.4) {\scriptsize{$h$}};
\node (v4) at (-3.3,-1.8) {$\mathbf{1}^{2}_{8}$};
\draw (v0) edge (v3);
\draw (v2) edge (v4);
\node at (-3.6,-0.4) {\scriptsize{$e$}};
\node at (-3.1,-1.5) {\scriptsize{$e$}};
\node at (-6.5,-0.7) {\scriptsize{$f$-$x$}};
\node at (-4.7,-0.7) {\scriptsize{$f$}};
\node at (-4.4,-1.7) {\scriptsize{$x$-$y$}};
\node at (-9.05,-0.75) {\scriptsize{$f$}};
\node at (-3.7,-1.3) {\scriptsize{$f$-$x$-$y$}};
\node at (-4.3,-1.4) {\scriptsize{$y$}};
\draw (v3) edge (v4);
\draw (v0) edge (v4);
\draw (v1) edge (v4);
\node (v5) at (-7.6,0.8) {$\mathbf{6}_{1}$};
\draw (v5) edge (v0);
\node at (-7.4,-0.2) {\scriptsize{$e$-$x$}};
\node at (-7.4,0.5) {\scriptsize{$e$}};
\end{tikzpicture}
\end{equation}
where the notation $\mathbf{i}_n^b$ denotes a surface obtained by blowing up a Hirzebruch surface $S_i={\mathbb F}_n$ at $b$ points. Thus the Hirzebruch surface $S_4$ is blown up at one point, and the Hirzebruch surface $S_1$ is blown up at two points, where the exceptional curves associated to the two blowups are denoted as $x$ and $y$. Computing the contribution of the instanton, we find
\begin{align}
&\left(-\tilde e\cdot \left(\sum_{i=1}^{2n-1}\frac{1-(-1)^i}2S_i\right),-\tilde e\cdot \left(S_{i=2n}+\sum_{i=1}^{2n-2}\frac{1-(-1)^i}2S_i\right)\right)\\
=&\left(-e_1\cdot (S_1+S_3),-e_1\cdot(S_1+S_3)\right)\\
=&\left(-7,-7\right)\\
=&\left(1~(\text{mod}~2),1~(\text{mod}~2)\right) \,.
\end{align}
Thus we find that the instanton contribution, combined with the contribution from the spinor matter, completely destroys the potential center 1-form symmetry of $\mathfrak{so}(12)$, and the correct 1-form symmetry for $\mathfrak{so}(12)+\frac{1}{2}\S$ is
\begin{equation}
\mathcal{O}={\mathbb Z}_1 \,.
\end{equation}
Generalizing this, we see that for $\mathfrak{so}(12)+n\S$ we have
\begin{equation}
\mathcal{O}={\mathbb Z}_2 \,,
\end{equation}
but for $\mathfrak{so}(12)+\left(n+\frac{1}{2}\right)\S$ we have
\begin{equation}
\mathcal{O}={\mathbb Z}_1 \,.
\end{equation}
A similar phenomenon occurs when we consider adding a half-hyper in $\L^3$ to an $\mathfrak{su}(6)$ gauge theory. The geometries for this case were also discussed in \cite{Bhardwaj:2020gyu}. For CS level $k=\frac{1}{2}-l$ with $1\le l\le 7$, the geometry can be written as
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-5.3,-0.5) {$\mathbf{3}_{l}$};
\node (v2) at (-3.3,-0.5) {$\mathbf{4}_{l-4}$};
\node (v3) at (-9.5,-0.5) {$\mathbf{1}_{4+l}$};
\node (v0) at (-7.6,-0.5) {$\mathbf{2}^1_{2+l}$};
\draw (v1) edge (v2);
\node at (-5,-0.4) {\scriptsize{$e$}};
\node at (-3.8,-0.4) {\scriptsize{$h$+$f$}};
\node at (-8,-0.4) {\scriptsize{$h$}};
\node at (-9.1,-0.4) {\scriptsize{$e$}};
\draw (v0) edge (v1);
\node at (-7.2,-0.4) {\scriptsize{$e$}};
\node at (-5.6,-0.4) {\scriptsize{$h$}};
\node (v4) at (-3.3,-1.8) {$\mathbf{5}^{2}_{l-6}$};
\draw (v0) edge (v3);
\draw (v2) edge (v4);
\node at (-3.1,-0.8) {\scriptsize{$e$}};
\node at (-3.1,-1.5) {\scriptsize{$h$}};
\node at (-6.5,-0.7) {\scriptsize{$f$-$x$}};
\node at (-4.7,-0.7) {\scriptsize{$f$}};
\node at (-4.4,-1.7) {\scriptsize{$x$-$y$}};
\node at (-9.1,-0.7) {\scriptsize{$f$}};
\node at (-3.7,-1.3) {\scriptsize{$f$-$x$-$y$}};
\node at (-4.3,-1.4) {\scriptsize{$y$}};
\draw (v3) edge (v4);
\draw (v0) edge (v4);
\draw (v1) edge (v4);
\end{tikzpicture}
\end{equation}
Hence the instanton contribution turns out to be
\begin{equation}\label{khh}
-\tilde e\cdot \left(\sum_{i=1}^{5}iS_i\right)=-\tilde e\cdot (S_1+2S_2+5S_5)=(-2-l)+2(4+l)-5=-k+\frac32~(\text{mod}~3) \,,
\end{equation}
where we are considering the contribution modulo 3 since the $\L^3$ matter already breaks the ${\mathbb Z}_6$ center down to a potential ${\mathbb Z}_3$ 1-form symmetry only. We obtain the same instanton contribution for other values of CS level as well, as the reader can check using the geometries presented in \cite{Bhardwaj:2020gyu}. The contribution (\ref{khh}) in the half-hyper case should be contrasted with the contribution (\ref{kfh}) in the full hyper case.
The above comments associated to matter in full vs half-hypermultiplets extend to the case of a semi-simple gauge algebra $\mathfrak{g}=\oplus_\mu\mathfrak{g}_\mu$. First of all, for the pure gauge theory based on $\mathfrak{g}$, the instanton $\tilde e^\mu$ has charge $0$ under $\Gamma_\nu$ for $\nu\neq\mu$, and has non-trivial charge under $\Gamma_\mu$ only if $\mathfrak{g}_\mu=\mathfrak{su}(n)$ or $\sp(n)_\pi$. Now, whenever there is a half-hyper charged in a mixed rep of $\mathfrak{g}_{\mu_1}\oplus\mathfrak{g}_{\mu_2}\oplus\cdots\oplus\mathfrak{g}_{\mu_l}\subseteq\mathfrak{g}$ (for $l\ge1$), there is at least one $\mu\in\{\mu_1,\mu_2,\cdots,\mu_l\}$ such that the instanton $\tilde e^\mu$ has a charge under $\Gamma_{\mu_1}\times\Gamma_{\mu_2}\times\cdots\times\Gamma_{\mu_l}$ that is different from its charge under $\Gamma_{\mu_1}\times\Gamma_{\mu_2}\times\cdots\times\Gamma_{\mu_l}$ in the pure gauge theory based on $\mathfrak{g}$. The full hypers can again be ignored when accounting for instanton contributions.
For example, consider an $\mathfrak{so}(8)\oplus\mathfrak{su}(2)$ gauge theory with a \emph{half-hyper} in the bifundamental representation. Including the data of only the gauge algebras and the hypermultiplet matter content, we would expect the 1-form symmetry to be
\begin{equation}\label{5QO}
\mathcal{O}={\mathbb Z}_2\times{\mathbb Z}_2 \,,
\end{equation}
but the geometry implies a ${\mathbb Z}_1$ 1-form symmetry as we will see below. The geometry for this theory can be written as
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v2) at (2.2,-0.5) {$\mathbf{4}_0$};
\node (v6) at (3.7,-0.5) {$\mathbf{1}_2$};
\draw (v2) edge (v6);
\node at (2.5,-0.4) {\scriptsize{$e$}};
\node at (3.4,-0.4) {\scriptsize{$e$}};
\node (v1) at (0.7,-0.5) {$\mathbf{3}_2$};
\node (v3) at (2.2,0.6) {$\mathbf{2}_2$};
\draw (v1) edge (v2);
\draw (v3) edge (v2);
\node at (1,-0.4) {\scriptsize{$e$}};
\node at (1.9,-0.4) {\scriptsize{$e$}};
\node at (2.1,-0.2) {\scriptsize{$e$}};
\node at (2.1,0.3) {\scriptsize{$e$}};
\node (v4) at (5.8,-0.5) {$\mathbf{5}^4_0$};
\draw (v6) edge (v4);
\node at (4,-0.4) {\scriptsize{$f$}};
\node at (4.8,-0.4) {\scriptsize{$x_3$-$x_4$}};
\draw (v3) edge (v4);
\node[rotate=-18] at (5.3427,-0.2747) {\scriptsize{$x_1$-$x_2$}};
\node at (2.5726,0.6013) {\scriptsize{$f$}};
\draw (v2) .. controls (2.2,-1) and (5.4,-1) .. (v4);
\draw (v1) .. controls (0.8,-1.2) and (5.8,-1.2) .. (v4);
\node[rotate=10] at (5.2148,-0.7235) {\scriptsize{$x_2$-$x_3$}};
\node at (2.6489,-0.7239) {\scriptsize{$f$}};
\node at (0.6209,-0.7732) {\scriptsize{$f$}};
\node at (5.95,-0.9) {\scriptsize{$f$-$x_1$-$x_2$}};
\end{tikzpicture}
\end{equation}
From this geometry we see that the BPS instanton $e_1$ associated to $\mathfrak{so}(8)$ has charge 1 under the center ${\mathbb Z}_2$ symmetry associated to $\mathfrak{su}(2)$ (which is generated by $S_5$), and the BPS instanton $e_5$ associated to $\mathfrak{su}(2)$ has charge $(1,0)$ under the center ${\mathbb Z}_2\times{\mathbb Z}_2$ symmetry associated to $\mathfrak{so}(8)$ (which are generated respectively by $S_1+S_3$ and $S_1+S_2$). Out of the ${\mathbb Z}_2^3$ center symmetry, the blowups $x_i$ preserve a ${\mathbb Z}_2$ symmetry associated to $S_2+S_3$ and a ${\mathbb Z}_2$ symmetry associated to $S_1+S_2+S_5$. This is the ${\mathbb Z}_2\times{\mathbb Z}_2$ 1-form symmetry expected to be preserved from the field theoretic analysis. Now we also need to consider the instantons: $e_4$ is charged as $(0,1)$ and $e_5$ is charged as $(1,0)$ under the ${\mathbb Z}_2^2$ symmetry preserved by the blowups. Thus, after including the contribution of instantons, we find that the 1-form symmetry for the $\mathfrak{so}(8)\oplus\mathfrak{su}(2)$ theory with a half-bifundamental is
\begin{equation}\label{5QHO}
\mathcal{O}={\mathbb Z}_1
\end{equation}
contrary to the expected answer (\ref{5QO}). The reader can also verify the answer (\ref{5QHO}) by directly computing the Smith normal form of the charge matrix $Q^{\alpha i}$ associated to the above geometry.
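As a toy illustration of this last method, consider a hypothetical charge matrix (chosen only to exhibit the mechanics, not taken from the geometry above)
\begin{equation}
Q=\begin{pmatrix}1&0\\1&2\end{pmatrix} \,,
\end{equation}
whose Smith normal form is $\text{diag}(1,2)$; the corresponding 1-form symmetry would then be ${\mathbb Z}_1\times{\mathbb Z}_2={\mathbb Z}_2$, one factor ${\mathbb Z}_{d}$ for each invariant factor $d$. The analogous computation for the full charge matrix of the geometry above reproduces (\ref{5QHO}).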
\subsection{1-form symmetry of $5d$ KK theories}
In this paper, we will use the term ``$5d$ KK theories'' to refer to $5d$ theories obtained by compactifying a $6d$ SCFT or LST on a circle of finite non-zero radius. The terminology stresses the fact that these $5d$ theories are different from standard $5d$ quantum field theories because they contain the KK modes arising from the circle compactification.
Upon compactification of a $6d$ theory on a circle, we can turn on Wilson lines in the flavor symmetry group of the $6d$ theory. For the continuous part of the flavor symmetry\footnote{Throughout this paper, we use the terms ``flavor symmetry'' and ``0-form symmetry'' interchangeably.} group, these Wilson lines become the mass parameters of the $5d$ KK theory. For the discrete part of the flavor symmetry group, the Wilson lines are discrete and hence parametrize different $5d$ KK theories. Two discrete Wilson lines related by a discrete background gauge transformation (valued in the discrete \emph{global} symmetry group) are equivalent on a circle, and hence lead to the same $5d$ KK theory. We refer to non-trivial discrete Wilson lines up to discrete background gauge transformations as \emph{twists}.
\subsubsection{Untwisted case}\label{5KKU}
Let us first consider the untwisted circle compactification of a $6d$ theory. The 1-form symmetry $\mathcal{O}_{6d}$ of the $6d$ theory is generated by topological operators of codimension 2. Upon compactifying the $6d$ theory on a circle, we can either wrap these operators along the circle or insert them at a point on the circle. Wrapping these operators along the circle gives rise to 1-form symmetries in the $5d$ theory, while inserting the operators at a point gives rise to 0-form symmetries in the $5d$ theory. The $5d$ theory contains both the 1-form and 0-form symmetries descending from 1-form symmetry of the $6d$ theory.
Similarly, the 2-form symmetry $\mathcal{T}_{6d}$ of the $6d$ theory is generated by topological operators of codimension 3. Wrapping these operators along the circle gives rise to a 2-form symmetry in the $5d$ theory, while inserting them at a point gives rise to a 1-form symmetry of the $5d$ theory. However, unlike the case of $\mathcal{O}_{6d}$ discussed above, the $5d$ theory cannot simultaneously have both the 1-form and 2-form symmetries originating from the 2-form symmetry of the $6d$ theory.
This is due to the fact that the 2-form symmetry of the $6d$ theory is, in a sense, ``self-dual''. That is, the $6d$ theory does not admit backgrounds for the 2-form symmetry which correspond to insertion of codimension 3 topological operators along intersecting 3-cycles. Thus, we need to choose whether we wish to keep inside the $5d$ theory the 1-form symmetry arising from the 2-form symmetry of the $6d$ theory, or the 2-form symmetry arising from the 2-form symmetry of the $6d$ theory. If we choose to keep the 1-form symmetry, then we can gauge this 1-form symmetry in the resulting $5d$ theory to obtain the $5d$ theory where we would have chosen to keep the 2-form symmetry instead, and vice-versa. In this paper, we always choose to keep the 1-form symmetry.
In conclusion, a $5d$ KK theory arising via an untwisted compactification of a $6d$ theory has 1-form symmetry group
\begin{equation}\label{5KU}
\mathcal{O}_{5d}=\mathcal{O}_{6d}\times\mathcal{T}_{6d} \,.
\end{equation}
\subsubsection{Twisted case}\label{5KKT}
Discrete 0-form symmetries are generated by topological operators of codimension 1. So, we can think of a twisted KK theory as being produced by inserting, at a point of the circle, the codimension 1 topological operator associated to a discrete 0-form symmetry implementing the twist. The insertion of this topological operator results in a reduction in the 1-form symmetry of the $5d$ KK theory associated to a twisted compactification as compared to the 1-form symmetry of the $5d$ KK theory associated to the untwisted compactification of the same $6d$ theory. The reason for this reduction is that the topological operators corresponding to 0-form symmetry may act on the topological operators corresponding to the 1-form or 2-form symmetries in the $6d$ theory.
As we have discussed above, a subset of the 1-form symmetries of the $5d$ KK theory arise by wrapping the topological operators corresponding to 1-form symmetry of the $6d$ theory along the circle. In the case of a non-trivial twist, say corresponding to a discrete 0-form symmetry element $g$, we are only allowed to wrap topological operators corresponding to 1-form symmetries that are left invariant by $g$. This is because, if a topological operator corresponding to a 1-form symmetry is charged under $g$, then traversing around the circle changes the type of the topological operator as it crosses the insertion of topological operator corresponding to $g$, and hence it cannot close back to itself. The surviving 1-form symmetries form a group $\text{ker}_g(\mathcal{O}_{6d})$, that is the kernel of the action of $g$ on $\mathcal{O}_{6d}$.
On the other hand, another subset of the 1-form symmetries of the $5d$ KK theory arises by inserting the topological operators corresponding to the 2-form symmetry of the $6d$ theory at a point on the circle. Suppose we have inserted a topological operator corresponding to a 2-form symmetry element $h$. Moving this operator around the circle, we obtain the topological operator corresponding to the 2-form symmetry element $g\cdot h$, that is the 2-form symmetry element obtained by applying the action of $g$. Thus, as elements of the 2-form symmetry group of the $5d$ KK theory, $h$ and $g\cdot h$ are identified. More generally, since $\mathcal{T}_{6d}$ is abelian, an element $h_1h_2$ of $\mathcal{T}_{6d}$ is identified with the elements $g(h_1)h_2$ and $h_1g(h_2)$. This identification gives rise to an equivalence relation $\sim_g$ on $\mathcal{T}_{6d}$. This means that the 1-form symmetry group of the $5d$ KK theory arising from the 2-form symmetry group of the $6d$ theory is the quotient $\mathcal{T}_{6d}/\sim_g$.
In total, we can write the 1-form symmetry group of the $5d$ KK theory obtained by $g$-twist of a $6d$ theory as
\begin{equation}\label{5KT}
\mathcal{O}_{5d}=\frac{\mathcal{T}_{6d}}{\sim_g}\times \text{ker}_g(\mathcal{O}_{6d}) \,.
\end{equation}
Let us discuss the structure of (\ref{5KT}) in more detail for different kinds of twists of $6d$ theories. So far these twists have been studied only in the context of $6d$ SCFTs \cite{Bhardwaj:2019fzv,Bhardwaj:2020kim}, but a similar structure is expected to extend to the case of $6d$ LSTs. From the study of twists of $6d$ SCFTs, we expect three different kinds of twists for $6d$ theories:
\begin{enumerate}
\item The first kind originates from the outer-automorphisms of the gauge algebras appearing on the tensor branch of the $6d$ theory.
\item The second kind originates from a permutation symmetry of the tensor multiplets arising on the tensor branch of the $6d$ theory.
\item The third kind arises for some $6d$ theories whose tensor branch theory carries an $O(2n)$ flavor symmetry. Since $O(2n)$ has two connected components, holonomies valued in the component not connected to the identity give rise to a twisted $5d$ KK theory.
\end{enumerate}
Combining the twists mentioned above, one can write a general $5d$ KK theory using the following graphical notation mimicking the graphical notation used for $6d$ theories:
\begin{equation}\label{GD}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$\Omega^{ii}$};
\node (v4) at (-0.5,1) {$\mathfrak{g}^{(q_i)}_i$};
\begin{scope}[shift={(2.8,0.05)}]
\node at (-0.5,0.95) {$\mathfrak{g}^{(q_j)}_j$};
\node (v2) at (-0.5,0.45) {$\Omega^{jj}$};
\end{scope}
\node (v3) at (0.9,0.5) {\tiny{$-\Omega^{ij}$}};
\draw (v1)--(v3);
\draw [<-](v2)--(v3);
\begin{scope}[shift={(-2.3,0.05)}]
\node (v2_1) at (-0.5,0.45) {$\Omega^{kk}$};
\end{scope}
\begin{scope}[shift={(0,1.95)}]
\node (v2_2) at (-0.5,0.45) {$\Omega^{ll}$};
\node at (-0.5,0.95) {$\mathfrak{g}^{(q_l)}_l$};
\end{scope}
\draw (v2_2) edge (v4);
\draw (v2_1) edge (v1);
\node (v5) at (-2.8,2.4) {$\left[{\mathbb Z}_2^{(2)}\right]$};
\draw (v5) edge (v2_2);
\end{tikzpicture}
\end{equation}
where each node $i$ carries a twisted or untwisted affine Lie algebra $\mathfrak{g}^{(q_i)}_i$. This algebra may be trivial for some of the nodes, as is the case for the node $k$ in the above graph. The graph also involves the data of a \emph{non-symmetric} positive-definite integer matrix $\Omega^{ij}$ with non-positive off-diagonal entries. If $\Omega^{ij}=\Omega^{ji}$ for some specific $j\neq i$, then the nodes $j$ and $i$ are connected by $-\Omega^{ij}$ undirected edges, as in the case of $6d$ theories. We can also have directed edges, which arise for example when $\Omega^{ji}=-1$ and $\Omega^{ij}<-1$. In that case we join the nodes $i$ and $j$ by a directed edge pointing from $i$ to $j$ and insert a label in the middle of the edge capturing the value of $-\Omega^{ij}$. The edge between nodes $i$ and $j$ in the above graph is such an example. In addition, some nodes may be attached to a $\left[{\mathbb Z}_2^{(2)}\right]$, which is shorthand for the fact that these nodes have an $O(2n)$ flavor symmetry and we have turned on holonomies in the component not connected to the identity. In the above graph, node $l$ is an example of such a node.
The corresponding $6d$ theory can be obtained from the graph for the $5d$ KK theory by ``unfolding'' it and removing the superscript labels $q_i$ and nodes $\left[{\mathbb Z}_2^{(2)}\right]$. For example, the $6d$ theory associated to the $5d$ KK theory shown in the above graph for $-\Omega^{ij}=2$ takes the following form
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.5) {$\Omega^{ii}$};
\node (v4) at (-0.5,1) {$\mathfrak{g}_i$};
\begin{scope}[shift={(2,0.05)}]
\node at (-0.5,0.9) {$\mathfrak{g}_j$};
\node (v2) at (-0.5,0.45) {$\Omega^{jj}$};
\end{scope}
\begin{scope}[shift={(-2.3,0.05)}]
\node (v2_1) at (-0.5,0.45) {$\Omega^{kk}$};
\end{scope}
\begin{scope}[shift={(0,1.95)}]
\node (v2_2) at (-0.5,0.45) {$\Omega^{ll}$};
\node at (-0.5,0.9) {$\mathfrak{g}_l$};
\end{scope}
\draw (v2_2) edge (v4);
\draw (v2_1) edge (v1);
\begin{scope}[shift={(0,-1.75)}]
\node (v3) at (-0.5,0.9) {$\mathfrak{g}_m$};
\node (v2_3) at (-0.5,0.45) {$\Omega^{mm}$};
\end{scope}
\draw (v1) edge (v2);
\draw (v1) edge (v3);
\end{tikzpicture}
\end{equation}
with $\mathfrak{g}_m=\mathfrak{g}_j$ and $\Omega^{mm}=\Omega^{jj}$. The twist converting the above $6d$ theory into the above $5d$ KK theory contains outer-automorphisms of $\mathfrak{g}_i$ and $\mathfrak{g}_l$ of orders $q_i$ and $q_l$ respectively. This includes the possibility of no outer-automorphism twist for $\mathfrak{g}_i$ (or $\mathfrak{g}_l$), which is associated to $q_i=1$ and corresponds to the untwisted affine Lie algebra $\mathfrak{g}_i^{(1)}$, defined for any $\mathfrak{g}_i$. The twist also contains a permutation exchanging the tensor multiplets $m$ and $j$, which identifies $\mathfrak{g}_j$ and $\mathfrak{g}_m$; it is also possible to have an outer-automorphism of order $q_j$ of the algebra $\mathfrak{g}_j$ after accounting for the identification. This identification of $j$ and $m$ induces a ``folding'' of the graph, which is represented by a directed edge from $i$ to $j$ in the graph for the $5d$ KK theory. The label $-\Omega^{ij}=2$ in the middle of the directed edge tells us that the folding has been obtained by identifying 2 different nodes. Similarly, if we were to identify 3 nodes of a $6d$ SCFT, the $5d$ KK theory would contain a directed edge with a label 3 placed in the middle. As discussed above, the twist also includes turning on holonomies in the component of the $O(2n)$ flavor symmetry associated to node $l$ that is not connected to the identity.
\subsubsection{Geometric analysis}\label{5KG}
We now turn to the determination of the 1-form symmetry group of a $5d$ KK theory by using its M-theory geometric construction. Such geometric constructions have been extensively studied in \cite{Bhardwaj:2019fzv,Bhardwaj:2020kim,Bhardwaj:2018vuu,Bhardwaj:2018yhy,DelZotto:2017pti,Jefferson:2018irk, Apruzzi:2019opn, Apruzzi:2019vpe, Apruzzi:2019syw, Eckhard:2020jyr}. The M-theory geometric construction for a $5d$ KK theory can be easily described in terms of its graphical data of the form (\ref{GD}). For every node $i$, we have a collection of irreducible Hirzebruch surfaces (carrying some blowups) $S_{a,i}$ in the geometry. Let us first consider the nodes $i$ for which $\mathfrak{g}_i$ is non-trivial. The number of surfaces for each $i$ equals $r_i+1$, where $r_i$ is the rank of the gauge algebra $\mathfrak{h}_i$ left invariant by the outer-automorphism $\mathcal{O}^{(q_i)}$ acting\footnote{For any $\mathcal{O}^{(1)}$ we can choose the trivial automorphism which does not act on the gauge algebra and hence $\mathfrak{h}_i=\mathfrak{g}_i$, which makes sense since $\mathcal{O}^{(1)}$ means that we do not involve any outer-automorphism twist. In this paper, we choose outer-automorphisms $\mathcal{O}^{(q_i)}$ for $q_i>1$ such that the invariant gauge algebras are as follows. $\mathcal{O}^{(2)}$ acting on $\mathfrak{su}(n)$ leaves $\sp(n)$ invariant, $\mathcal{O}^{(2)}$ acting on $\mathfrak{so}(2n)$ leaves $\mathfrak{so}(2n-1)$ invariant, $\mathcal{O}^{(2)}$ acting on $\mathfrak{e}_6$ leaves $\mathfrak{f}_4$ invariant, and $\mathcal{O}^{(3)}$ acting on $\mathfrak{so}(8)$ leaves $\mathfrak{g}_2$ invariant.} on $\mathfrak{g}_i$. Let $f_{a,i}$ denote the fibers of these Hirzebruch surfaces. Then, the intersection numbers
\begin{equation}
M_{ab,i}:=-f_{a,i}\cdot S_{b,i}
\end{equation}
form the Cartan matrix of the affine Lie algebra $\mathfrak{g}_i^{(q_i)}$ (see \cite{Bhardwaj:2019fzv} for more details). We let $S_{0,i}$ be the surface corresponding to the affine node of the Dynkin diagram of $\mathfrak{g}_i^{(q_i)}$ such that $M_{ab,i}$ for $a,b\neq0$ form the Cartan matrix of $\mathfrak{h}_i$.
Now let us consider the nodes $i$ for which $\mathfrak{g}_i$ is trivial. For these nodes, there is only a single corresponding surface $S_{0,i}$ which can only be one of the following three types: ${\mathbb F}_1^8$; ${\mathbb F}_0^2$ with $e-x_1$ glued to $e-x_2$; or ${\mathbb F}_1^2$ with the two blowups glued. For ${\mathbb F}_1^8$ we define $f_{0,i}$ to be $2e+3f-\sum x_i$. For ${\mathbb F}_0^2$ with $e-x_1$ glued to $e-x_2$, we define $f_{0,i}$ to be $f$. For ${\mathbb F}_1^2$ with glued blowups, we define $f_{0,i}$ to be $2e+3f-2\sum x_i$.
For any two nodes $i\neq j$, we have
\begin{equation}\label{off}
-f_{a,i}\cdot S_{b,j}=0 \,.
\end{equation}
To the nodes of the Dynkin diagram of an affine Lie algebra, we can associate Coxeter labels, which are minimal positive integers that form a row null vector for the Cartan matrix of the affine Lie algebra. Similarly, we can associate dual Coxeter labels, which are minimal positive integers that form a column null vector for the Cartan matrix of the affine Lie algebra. Let us denote the Coxeter and dual Coxeter labels for $\mathfrak{g}_i^{(q_i)}$ by $d_{a,i}$ and $d^\vee_{a,i}$ respectively. For $\mathfrak{g}_i$ trivial, we let $d_{0,i}=d^\vee_{0,i}=1$. Then, to each $i$, we can assign a linear combination $S_i$ of surfaces $S_{a,i}$
\begin{equation}
S_i:=\sum_a d^\vee_{a,i}S_{a,i} \,,
\end{equation}
which has the special properties that
\begin{equation}\label{f}
f_{a,i}\cdot S_i =0 \,,
\end{equation}
and
\begin{equation}\label{x}
x\cdot S_i=0 \,,
\end{equation}
for any blowup $x$ living in any of the surfaces $S_{b,j}$. Note that we can use (\ref{off}) to write (\ref{f}) in the following more general form
\begin{equation}\label{f'}
f_{b,j}\cdot S_i=0 \,,
\end{equation}
for arbitrary $b,i,j$. The equations (\ref{x}) and (\ref{f'}) imply that the surfaces $S_i$ are ``null'' in the sense that all the fibers and blowups of all the Hirzebruch surfaces have no intersection with $S_i$. Note that the $e$ curves of the Hirzebruch surfaces can still intersect the null surfaces $S_i$, so they are not \emph{strictly null}.
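Since the Coxeter and dual Coxeter labels defined above are simply the minimal positive integer row and column null vectors of the affine Cartan matrix, they can be found mechanically. The following Python sketch is purely illustrative; the $A_2^{(2)}$ Cartan matrix below is written with the affine node first, matching the convention $S_i=S_0+2S_1$ used later in the examples.

```python
from itertools import product

def minimal_null_labels(A, row=True, bound=6):
    """Minimal positive integer null vector of an affine Cartan matrix A:
    row=True gives the Coxeter labels d (d.A = 0), row=False the dual
    Coxeter labels d_vee (A.d_vee = 0). Brute force over small entries."""
    n = len(A)
    for total in range(n, n * bound + 1):  # smallest entry-sum first
        for d in product(range(1, bound + 1), repeat=n):
            if sum(d) != total:
                continue
            if row:
                ok = all(sum(d[i] * A[i][j] for i in range(n)) == 0
                         for j in range(n))
            else:
                ok = all(sum(A[i][j] * d[j] for j in range(n)) == 0
                         for i in range(n))
            if ok:
                return d
    return None

# Untwisted affine su(3), i.e. A_2^(1):
A2_1 = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
# Twisted affine su(3), i.e. A_2^(2), with the affine node first:
A2_2 = [[2, -1], [-4, 2]]
print(minimal_null_labels(A2_1))              # (1, 1, 1)
print(minimal_null_labels(A2_2))              # (2, 1): here d_0 = 2 > 1
print(minimal_null_labels(A2_2, row=False))   # (1, 2): d_vee_0 = 1
```

Note that $d_0=2$ for $A_2^{(2)}$: this is an instance of the phenomenon, discussed below, that twisted affine algebras can have $d_{0,i}>1$.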
In the last subsection, for a collection of Hirzebruch surfaces with intersection matrix describing a simple Lie algebra $\mathfrak{g}$, we associated the $e$ curve of a particular Hirzebruch surface to $\mathfrak{g}$. This curve was denoted $\tilde e$, and it is supposed to capture the contributions of BPS instantons of $\mathfrak{g}$ to the breaking of 1-form symmetry. We use this fact to assign a curve $\tilde e_i$ to each $i$ as follows. For nodes $i$ with $\mathfrak{g}_i$ non-trivial, the surfaces $S_{a,i}$ for $a\neq 0$ and fixed $i$ form a collection of surfaces with intersection matrix describing the simple Lie algebra $\mathfrak{h}_i$, and we denote the $\tilde e$ curve associated to $\mathfrak{h}_i$ as $\tilde e_i$. For nodes $i$ with $\mathfrak{g}_i$ trivial, we let $\tilde e_i$ be the $e$ curve of the single surface $S_{0,i}$ in each of the three possibilities discussed above. Then, it turns out that
\begin{equation}\label{5KB}
-S_i\cdot \tilde e_j=\Omega^{ij} \,,
\end{equation}
where $\Omega^{ij}$ is the matrix associated to the $5d$ KK theory as discussed above.
Now we can describe how the 1-form symmetry (\ref{5KT}) of the $5d$ KK theory is encoded in this geometry. First, we can change the basis of surfaces for each $i$ from $S_{a,i}$ to $S_i,S_{a\neq0,i}$, which is an acceptable change of basis since $d^\vee_{0,i}=1$ for any $\mathfrak{g}^{(q_i)}_i$. Then, we claim that the $\frac{\mathcal{T}_{6d}}{\sim_g}$ part of (\ref{5KT}) is encoded in the surfaces $S_i$. Indeed, the $S_i$ give rise to the $\u(1)$ gauge algebras descending from the KK reduction of $6d$ tensor multiplets. One can view the curves $\tilde e_i$ as BPS particles arising by wrapping (on the compactification circle) the BPS string corresponding to node $i$ in the $6d$ theory. From the facts recounted above about intersections of $S_i$ with various curves, we see that it is only the $\tilde e_i$, i.e.\ the wrapped BPS strings, that screen the potential $U(1)^s$ 1-form symmetry generated by the surfaces $S_i$; this makes sense since the $\frac{\mathcal{T}_{6d}}{\sim_g}$ part of (\ref{5KT}) captures the data of the 2-form symmetry of the $6d$ theory. According to (\ref{5KB}), we find that
\begin{equation}\label{5K2}
\frac{\mathcal{T}_{6d}}{\sim_g}=\text{Tors}\left(\frac{{\mathbb Z}^s}{[\Omega^{ij}]\cdot {\mathbb Z}^{s}}\right) \,,
\end{equation}
where Tors denotes the torsional part of the quotient lattice. The appearance of Tors is relevant only if the $5d$ KK theory arises via a compactification of a $6d$ LST, in which case the $\u(1)$ generated by a particular linear combination of the $S_i$ is non-dynamical and its contribution should be modded out, just as in the computation of the 2-form symmetry of $6d$ LSTs discussed earlier in this paper. As in that case, the contribution from this non-dynamical $\u(1)$ gives rise to a free part in the quotient lattice, and hence we retain only the torsional part of the quotient lattice. If we specialize (\ref{5K2}) to the case of a $5d$ KK theory arising via an \emph{untwisted} compactification of a $6d$ theory, we obtain
\begin{equation}
\mathcal{T}_{6d}=\text{Tors}\left(\frac{{\mathbb Z}^s}{[\Omega^{ij}]\cdot {\mathbb Z}^{s}}\right) \,,
\end{equation}
where $s$ now counts the number of nodes in the graph associated to the $6d$ theory itself and $\Omega^{ij}$ is the matrix associated to the $6d$ theory that we discussed in Section \ref{6}. The above equation simply recovers the result of Section \ref{6T}.
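Concretely, the right-hand side of (\ref{5K2}) is a Smith normal form computation. As a minimal sketch (not part of the original analysis), the invariant factors of an integer matrix can be extracted from its determinantal divisors $d_k=\gcd(k\times k\text{ minors})$; zero invariant factors correspond to the free part that is dropped by Tors:

```python
from itertools import combinations
from math import gcd
from functools import reduce

def invariant_factors(M):
    """Nonzero invariant factors of the Smith normal form of the integer
    matrix M, via determinantal divisors d_k = gcd of all k x k minors."""
    def det(rows, cols):
        # Laplace expansion along the first row; fine for small matrices.
        if len(rows) == 1:
            return M[rows[0]][cols[0]]
        return sum((-1) ** j * M[rows[0]][cols[j]]
                   * det(rows[1:], cols[:j] + cols[j + 1:])
                   for j in range(len(cols)))
    factors, d_prev = [], 1
    for k in range(1, min(len(M), len(M[0])) + 1):
        d_k = reduce(gcd, (abs(det(list(r), list(c)))
                           for r in combinations(range(len(M)), k)
                           for c in combinations(range(len(M[0])), k)), 0)
        if d_k == 0:          # rank reached: the rest is the free part
            break
        factors.append(d_k // d_prev)
        d_prev = d_k
    return factors

print(invariant_factors([[3]]))               # [3]: a Z_3 factor
print(invariant_factors([[2, -1], [-2, 2]]))  # [1, 2]: a Z_2 factor
print(invariant_factors([[2, -2], [-2, 2]]))  # [2]: Tors = Z_2, free part dropped
```

The last, rank-deficient matrix illustrates the LST situation: the zero invariant factor is the free direction coming from the non-dynamical $\u(1)$, which Tors removes.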
The part $\text{ker}_g(\mathcal{O}_{6d})$ of (\ref{5KT}) is encoded in the surfaces $S_{a\neq0,i}$. The fibers and blowups living in these surfaces give rise to a $5d$ non-abelian gauge theory $\mathfrak{T}$ with gauge algebra $\oplus_i\mathfrak{h}_i$, where the sum over $i$ is only taken over nodes with non-trivial $\mathfrak{g}_i$. Additional matter content for this $5d$ non-abelian gauge theory $\mathfrak{T}$ arises from blowups living in the surfaces $S_{0,i}$ for the nodes $i$ with $\mathfrak{g}_i$ non-trivial. As we have discussed in great detail in Section \ref{5GG}, the analysis of 1-form symmetries associated to the surfaces giving rise to $\oplus_i\mathfrak{h}_i$ can be reduced to some linear combinations of surfaces for each $i$ which capture the center symmetry $\Gamma_i$ of $\mathfrak{h}_i$. Potentially these surfaces give rise to a $\Gamma:=\prod_i\Gamma_i$ 1-form symmetry, which is broken according to the matter content for $\mathfrak{T}$ descending from the $5d$ KK theory. As discussed in Section \ref{5GG}, further breaking of $\Gamma$ is induced by instantons $\tilde e_i$ for each $\mathfrak{h}_i$. These curves capture precisely the BPS instanton strings associated to $\mathfrak{g}_i$ in the $6d$ theory, as we discussed above. Following the discussion of Section \ref{5GG}, one can easily determine the charges of $\tilde e_i$ under $\Gamma$. Moreover, one also needs to account for the charges under $\Gamma$ of the $\tilde e_i$ associated to nodes with trivial $\mathfrak{g}_i$, which can be easily determined from the data of the geometry of the $5d$ KK theory. These contributions to the breaking of potential 1-form symmetry are interpreted as contributions from non-gauge-theoretic BPS strings of the $6d$ theory.
The above contributions are the end of the story if the $5d$ KK theory under consideration arises from an untwisted compactification. However, in the case of a twisted compactification, one needs to consider another contribution in some cases. This contribution arises from the charge of $f_{0,i}$ under $\Gamma_i$. The reason it is unimportant in untwisted cases is that the genus-one fiber
\begin{equation}
f_i:=\sum_a d_{a,i} f_{a,i}
\end{equation}
has the property that
\begin{equation}
f_i\cdot S_{a,i}=0
\end{equation}
for all $a$. Since $f_{a,i}$ for $a\neq 0$ have zero charge under $\Gamma_i$, $f_{0,i}$ must have zero charge under $\Gamma_i$ as long as $d_{0,i}=1$. The latter condition is only true if the affine gauge algebra $\mathfrak{g}_i^{(q_i)}$ for node $i$ is untwisted, i.e. $q_i=1$. When non-trivial twist is involved, it can happen that $d_{0,i}>1$, in which case we need to include the charge of $f_{0,i}$ under $\Gamma_i$ separately into consideration. Note that we do not need to consider the charge of $f_{0,i}$ under $\Gamma_j$ for $j\neq i$ due to the fact (\ref{off}).
Thus, in conclusion, the $\text{ker}_g(\mathcal{O}_{6d})$ part of (\ref{5KT}) comprises those elements of $\Gamma$ that leave invariant the matter content charged under $\mathfrak{h}$ as well as the extra BPS particles $\tilde e_i,f_{0,i}$.
Specializing the above discussion to the case of a $5d$ KK theory arising from an untwisted compactification of a $6d$ theory provides us with a method for computing the 1-form symmetry group $\mathcal{O}_{6d}$ of the $6d$ theory itself. In this case, the $5d$ gauge theory $\mathfrak{T}$ is identified with the $6d$ gauge theory arising on the tensor branch of the $6d$ theory. The curves $\tilde e_i$ are in one-to-one correspondence with the BPS strings of the $6d$ theory. If $\mathfrak{g}_i$ is non-trivial, then the associated $\tilde e_i$ corresponds to the BPS instanton string for $\mathfrak{g}_i$. If $\mathfrak{g}_i$ is trivial, then the associated $\tilde e_i$ corresponds to the non-gauge-theoretic BPS string associated to the node $i$. The charges of $\tilde e_i$ under the center $\Gamma$ of $\mathfrak{h}$ are identified with the charges of the BPS strings of the $6d$ theory under the center $\Gamma$ of the $6d$ gauge algebra $\mathfrak{g}$. Moreover, according to the discussion of Section \ref{5GG}, we need to consider contributions of the charges of $\tilde e_i$ for non-trivial $\mathfrak{g}_i$ only if there are half-hypers involved or if $\mathfrak{g}_i=\mathfrak{su}(n),\sp(n)$. In fact, we do not even need to consider the case of $\mathfrak{g}_i=\mathfrak{su}(n)$, since the contribution of the instanton string in this case is always accounted for by the hypermultiplet spectrum. This can be easily checked for all the possible $\mathfrak{g}_i=\mathfrak{su}(n)$ that can arise in the context of $6d$ SCFTs and LSTs by taking into account (\ref{kfh}) and (\ref{khh}), along with the fact that the CS level for a $5d$ $\mathfrak{su}(n)$ descending from a $6d$ $\mathfrak{su}(n)$ via an untwisted compactification is always 0.\\
For example, consider the case of $\mathfrak{g}_i=\mathfrak{su}(n)$ and $\Omega^{ii}=1$ such that the matter content charged under $\mathfrak{su}(n)$ is $\L^2+(n+8)\mathsf{F}$. Then (\ref{kfh}) implies that the instanton string has charge $2~(\text{mod}~n)$ under the center ${\mathbb Z}_n$ of $\mathfrak{su}(n)$. But since we already have a hyper in $\L^2$, as long as this hyper is not gauged by some other gauge algebra $\mathfrak{g}_j$, it breaks the ${\mathbb Z}_n$ center down to ${\mathbb Z}_2$, and thus the charge of the instanton string is irrelevant. Moreover, remaining in the realm of $6d$ SCFTs and LSTs, it is not possible to gauge the $\L^2$ in such a way that we would be forced to account for the charge of the instanton string.\\
Thus, the only situations where the contribution of a BPS string associated to node $i$ of a $6d$ theory is relevant are as follows:
\begin{enumerate}
\item There is a half-hyper transforming in a mixed representation $\mathfrak{g}_{\mu_1}\oplus\mathfrak{g}_{\mu_2}\oplus\cdots\oplus\mathfrak{g}_{\mu_l}\subseteq\mathfrak{g}$ (for $l\ge1$) where $\mu_1=i$.
\item $\mathfrak{g}_i$ is trivial.
\item $\mathfrak{g}_i=\sp(n)$ with $\theta=\pi$.
\end{enumerate}
This justifies the claims of Section \ref{6O}.
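The center-breaking arithmetic underlying the above claims is simple: a state of charge $q$ under a ${\mathbb Z}_n$ center leaves the subgroup ${\mathbb Z}_{\gcd(n,q)}$ unbroken, so the surviving subgroup is obtained by a gcd over all relevant charges. A small Python sketch, with the $\mathfrak{su}(4)$ charges taken from the example above:

```python
from math import gcd
from functools import reduce

def unbroken_center(n, charges):
    """Order of the subgroup of the center Z_n left unbroken by states
    whose center charges (mod n) are listed in `charges`."""
    return reduce(gcd, charges, n)

# su(4) with a hyper in Lambda^2 (center charge 2): Z_4 broken to Z_2
print(unbroken_center(4, [2]))
# The instanton string, also of charge 2 (mod 4), breaks nothing further:
print(unbroken_center(4, [2, 2]))
```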
\subsubsection{Examples}
In this subsection, we discuss examples of $5d$ KK theories arising via non-trivial twisted compactifications of $6d$ SCFTs, and discuss their 1-form symmetry using the geometric methods discussed above. We do not pursue $5d$ KK theories arising via untwisted compactifications as the computation in that case reduces to the computations performed in Section \ref{1ex}.
\ni\ubf{Example 1}: Consider the $5d$ KK theory
\begin{equation}
\begin{tikzpicture}
\node (v2) at (-0.5,0.45) {$3$};
\node at (-0.45,0.95) {$\mathfrak{su}(3)^{(2)}$};
\end{tikzpicture} \,,
\end{equation}
which is obtained by performing an outer-automorphism twist on the $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v2) at (-0.5,0.45) {$3$};
\node at (-0.45,0.9) {$\mathfrak{su}(3)$};
\end{tikzpicture} \,.
\end{equation}
The 2-form ${\mathbb Z}_3$ symmetry of the $6d$ SCFT is left unaffected by the twist, and hence we expect to obtain a ${\mathbb Z}_3$ factor in the 1-form symmetry of the $5d$ KK theory. On the other hand, the outer automorphism twist acts on the 1-form ${\mathbb Z}_3$ symmetry of the $6d$ SCFT by complex conjugation, and hence we expect no contribution to the 1-form symmetry of the $5d$ KK theory from the 1-form symmetry of the $6d$ SCFT. In total, we expect that the $5d$ KK theory has
\begin{equation}\label{er1}
\mathcal{O}_{5d}={\mathbb Z}_3 \,.
\end{equation}
Let us verify these expectations geometrically. The geometry for the $5d$ KK theory is
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v1) at (-3.05,-0.5) {$\mathbf{0}_{10}$};
\node (v2) at (-1.4,-0.5) {$\mathbf{1}_0$};
\draw (v1) edge (v2);
\node at (-2.75,-0.4) {\scriptsize{$e$}};
\node at (-1.8,-0.4) {\scriptsize{$4e$+$f$}};
\end{tikzpicture}
\end{equation}
We claimed above that the contribution to the 1-form symmetry of $5d$ KK theory from the 2-form symmetry of the $6d$ SCFT can be computed by finding the Smith normal form for $\Omega^{ij}$ associated to the $5d$ KK theory where $\Omega^{ij}$ can be computed geometrically via (\ref{5KB}). For the above geometry there is a single index $i$, and we have
\begin{equation}
S_i=S_0+2S_1
\end{equation}
and
\begin{equation}
\tilde e_i=e_1 \,.
\end{equation}
We can compute
\begin{equation}
\Omega^{ii}=-S_i\cdot \tilde e_i=-(S_0+2S_1)\cdot e_1=-(4e_1+f_1)\cdot e_1-2K_1\cdot e_1=-1+4=3
\end{equation}
which is indeed precisely what we expect. Hence we find that the 2-form symmetry of the $6d$ theory indeed contributes a ${\mathbb Z}_3$ factor to the 1-form symmetry of the $5d$ theory.
To compute the contribution to the 1-form symmetry of $5d$ KK theory from the 1-form symmetry of the $6d$ SCFT, we need to first delete the surface $S_0$ leaving us with the geometry
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v2) at (-1.4,-0.5) {$\mathbf{1}_0$};
\end{tikzpicture}
\end{equation}
which gives rise to a $5d$ non-abelian gauge theory $\mathfrak{T}=\mathfrak{su}(2)$ without any matter. Note that there is no extra matter content coming from $S_0$ since $S_0$ contains no blowups. The potential center 1-form symmetry associated to $\mathfrak{T}$ is ${\mathbb Z}_2$ spanned by the surface $S_1$. Under this, we see that $\tilde e_i$ has charge
\begin{equation}
-e_1\cdot S_1=-e_1\cdot K_1=2=0~(\text{mod}~2)
\end{equation}
and $f_0$ has charge
\begin{equation}
-f_0\cdot S_1=-f_0\cdot e_0=-1=1~(\text{mod}~2)
\end{equation}
implying that the ${\mathbb Z}_2$ center is broken, and thus there is no contribution to the 1-form symmetry of $5d$ KK theory from the 1-form symmetry of the $6d$ SCFT, confirming the expected result (\ref{er1}).
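The intersection numbers used in Example 1 follow from the standard Hirzebruch-surface rules ($e\cdot e=-n$, $e\cdot f=1$, $f\cdot f=0$, $K=-2e-(n+2)f$ in ${\mathbb F}_n$) together with $S\cdot C=K_S\cdot C$ for a curve $C$ inside a surface $S$ embedded in the Calabi-Yau. The arithmetic can be checked with a few lines of Python (a sketch; curve classes are written as integer pairs $(a,b)=a\,e+b\,f$):

```python
def dot(c1, c2, n):
    """Intersection of curve classes c = (a, b) = a*e + b*f in F_n."""
    (a1, b1), (a2, b2) = c1, c2
    return -n * a1 * a2 + a1 * b2 + b1 * a2

e, f = (1, 0), (0, 1)
K_F0 = (-2, -2)   # canonical class of F_0 (the surface S_1)

# Omega^{ii} = -(S_0 + 2 S_1) . e_1, with S_0 meeting S_1 along 4e+f:
Omega = -(dot((4, 1), e, 0) + 2 * dot(K_F0, e, 0))
print(Omega)                    # 3, as in the text

# Charges under the potential Z_2 center spanned by S_1:
print(-dot(e, K_F0, 0) % 2)     # tilde e_i = e_1: charge 0 (mod 2)
print(-dot(f, e, 10) % 2)       # f_0 in S_0 = F_10, meeting S_1 along e_0: charge 1
```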
\vspace{8pt}
\ni\ubf{Example 2}: Consider the $5d$ KK theory
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.4) {2};
\begin{scope}[shift={(1.7,-0.05)}]
\node (v2) at (-0.5,0.45) {$2$};
\end{scope}
\node (v6) at (0.4,0.4) {\tiny{2}};
\draw [<-] (v1) edge (v6);
\draw (v6) edge (v2);
\end{tikzpicture}
\end{equation}
which carries no non-trivial gauge algebra. This KK theory is obtained by applying a permutation twist to the following $6d$ SCFT
\begin{equation}
\begin{tikzpicture}
\node (v1) at (-0.5,0.4) {2};
\begin{scope}[shift={(1.7,-0.05)}]
\node (v2) at (-0.5,0.45) {$2$};
\end{scope}
\begin{scope}[shift={(3.4,-0.05)}]
\node (v3) at (-0.5,0.45) {$2$};
\end{scope}
\draw (v1) edge (v2);
\draw (v2) edge (v3);
\end{tikzpicture}
\end{equation}
which is the $A_3$ $\mathcal{N}=(2,0)$ theory. As such, it has a ${\mathbb Z}_4$ 2-form symmetry, which is acted upon by the permutation twist. The ${\mathbb Z}_4$ can be identified with the center of $A_3$, and the permutation can be identified with the outer-automorphism of the $A_3$ Lie algebra, which acts by complex conjugation on the center ${\mathbb Z}_4$ when ${\mathbb Z}_4$ is viewed as a subgroup of $U(1)$. The complex conjugation leaves only the ${\mathbb Z}_2$ subgroup of ${\mathbb Z}_4$ invariant, and hence we expect the $5d$ KK theory to acquire a ${\mathbb Z}_2$ 1-form symmetry factor descending from the ${\mathbb Z}_4$ 2-form symmetry of the $6d$ SCFT. On the other hand, the $6d$ theory has no 1-form symmetry, and hence we expect the full 1-form symmetry of the $5d$ KK theory to be
\begin{equation}\label{er2}
\mathcal{O}_{5d}={\mathbb Z}_2 \,.
\end{equation}
Let us verify this geometrically. The geometry for the $5d$ KK theory can be written as
\begin{equation}
\begin{tikzpicture} [scale=1.9]
\node (v8) at (2.7,-2) {$\mathbf{1}^{1+1}_{0}$};
\node (v7_1) at (4.4,-2) {$\mathbf{2}^{1+1}_{0}$};
\node at (3.95,-1.9) {\scriptsize{$f$-$y,y$}};
\node at (3.2,-1.9) {\scriptsize{$2f$-$x,x$}};
\node (v3_1) at (3.6,-2) {\scriptsize{2}};
\draw (v8) edge (v3_1);
\draw (v3_1) edge (v7_1);
\draw (v8) .. controls (2.1,-2.7) and (3.3,-2.7) .. (v8);
\node at (2.3,-2.3) {\scriptsize{$e$-$x$}};
\node at (3.1,-2.3) {\scriptsize{$e$-$y$}};
\draw (v7_1) .. controls (3.8,-2.7) and (5,-2.7) .. (v7_1);
\node at (4,-2.3) {\scriptsize{$e$-$x$}};
\node at (4.8,-2.3) {\scriptsize{$e$-$y$}};
\end{tikzpicture}
\end{equation}
where we label the two nodes by $i$ and $j$. We have $S_i=S_{0,i}=S_1$ and $S_j=S_{0,j}=S_2$. Moreover, $\tilde e_i= e_1$ and $\tilde e_j=e_2$. We can compute the matrix $-S_i\cdot\tilde e_j$ to be
\begin{equation}
\begin{pmatrix}
2&-1\\
-2&2
\end{pmatrix} \,,
\end{equation}
which is indeed the matrix associated to the graph of the $5d$ KK theory. Computing its Smith normal form indeed reveals a ${\mathbb Z}_2$ contribution to the 1-form symmetry of the $5d$ KK theory. On the other hand, since both surfaces are affine surfaces, deleting them leads to a trivial theory with no center, and hence there is no other contribution to the 1-form symmetry of the $5d$ KK theory; we have thus recovered the expected result (\ref{er2}).
\subsection{Brane-web and GTP Analysis}
A subclass of 5d SCFTs has a description in terms of 5-brane webs \cite{Aharony:1997bh}, or dually in terms of generalized toric polygons (GTPs, also called dot diagrams) \cite{Benini:2009gi}. We now discuss how the 1-form symmetry is encoded in this formulation of the theories, in particular how the IR gauge theory description, upon inclusion of the instanton particles, gives rise to the correct UV higher-form symmetry.
For models that are toric, it was argued in \cite{Morrison:2020ool} that the 1-form symmetry of a 5d SCFT realized in terms of a toric
fan $\{\mathbf{v}_i\}$, $i=1, \cdots, f+3$, where $f$ is the rank of the flavor group and $\mathbf{v}_i = (v_i^1 , v_i^2, 1)\in \mathbb{Z}^3$, is given by
\begin{equation}\label{ToricO}
\mathcal{O} = \mathbb{Z}_{a_1} \oplus \mathbb{Z}_{a_2}\oplus \mathbb{Z}_{a_3}\,,
\end{equation}
with
\begin{equation}
\text{diag} (a_1, a_2, a_3) = \text{SNF} (\mathbf{v}_1 \cdots \mathbf{v}_{f+3}) \,,
\end{equation}
where $\text{SNF}$ is the Smith normal form, applied to the matrix of vectors in the fan.
This is entirely independent of the resolution data and therefore computes the 1-form symmetry of the SCFT.
In the dual web, this corresponds to taking the SNF for the $(p,q)$-charges of the external 5-branes
\begin{equation}\label{WebOne}
\text{diag} (n_1, n_2, n_3) = \text{SNF}\left( \begin{matrix} p_1 & q_1 \cr\vdots &\vdots \cr p_{f+3} & q_{f+3} \end{matrix}\right) \,.
\end{equation}
When an IR gauge theory description exists, the naive expectation from the gauge theory may be that the 1-form symmetry is larger than that of the SCFT. However, as we have argued, the instanton particles can be charged under the 1-form symmetry and thereby correct the classical expectation. The resulting 1-form symmetry is then always in agreement with that of the SCFT.
We exemplify this in the case of pure $SU(N)_k$. Field-theoretically we know that the 1-form symmetry is
\begin{equation}
\mathcal{O}= {\mathbb Z}_{\text{gcd}(N, k)} \,.
\end{equation}
For pure $SU(N)_0$ the toric diagram is (shown here for $N=4$)
\begin{equation}
\begin{tikzpicture}[x=.5cm,y=.5cm]
\draw[step=.5cm,gray,very thin] (0,0) grid (2,4);
\draw[ligne] (0,0)--(1,0)--(2,4)--(1,4)-- (0,0);
\draw[ligne] (1,0)--(1,1) -- (1,2)--(1,3)-- (1,4) ;
\node[bd] at (0,0) {};
\node[bd] at (1,0) {};
\node[bd] at (2,4) {};
\node[bd] at (1,4) {};
\node[bd] at (1,1) {};
\node[bd] at (1,2) {};
\node[bd] at (1,3) {};
\node[bd] at (1,4) {};
\end{tikzpicture} \,,
\end{equation}
One can compute using the above prescription that the 1-form symmetry associated to the above toric diagram is ${\mathbb Z}_4$. On the other hand, consider pure $SU(N)_1$ for which the toric diagram is (shown here for $N=4$):
\begin{equation}
\begin{tikzpicture}[x=.5cm,y=.5cm]
\draw[step=.5cm,gray,very thin] (0,0) grid (2,4);
\draw[ligne] (0,0)--(1,0)--(2,3)--(1,4)-- (0,0);
\draw[ligne] (1,0)--(1,1) -- (1,2)--(1,3)-- (1,4) ;
\node[bd] at (0,0) {};
\node[bd] at (1,0) {};
\node[bd] at (2,3) {};
\node[bd] at (1,4) {};
\node[bd] at (1,1) {};
\node[bd] at (1,2) {};
\node[bd] at (1,3) {};
\node[bd] at (1,4) {};
\end{tikzpicture} \,,
\end{equation}
Computing the 1-form symmetry using the above prescription, we find that $\mathcal{O}={\mathbb Z}_1$, i.e.\ trivial. If we delete either the right-most or the left-most black dot, then computing the SNF leads to ${\mathbb Z}_4$. This implies that the left-most and right-most black dots capture the instanton contribution. Indeed, this fact was already observed in \cite{Closset:2018bjz}; see also related observations in \cite{Albertini:2020mdx}.
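The SNF prescription (\ref{ToricO}) is easy to automate. The following Python sketch is an illustration only: the boundary lattice points are read off from the two toric diagrams above, and the invariant factors are computed from determinantal divisors $d_k=\gcd(k\times k\text{ minors})$.

```python
from itertools import combinations
from math import gcd
from functools import reduce

def one_form_orders(points):
    """Invariant factors of the SNF of the matrix whose rows are (x, y, 1)
    for each boundary lattice point (x, y) of the toric diagram."""
    M = [(x, y, 1) for (x, y) in points]

    def minor(rows, cols):
        sub = [[M[r][c] for c in cols] for r in rows]
        if len(sub) == 1:
            return sub[0][0]
        if len(sub) == 2:
            return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]
        return (sub[0][0] * (sub[1][1] * sub[2][2] - sub[1][2] * sub[2][1])
                - sub[0][1] * (sub[1][0] * sub[2][2] - sub[1][2] * sub[2][0])
                + sub[0][2] * (sub[1][0] * sub[2][1] - sub[1][1] * sub[2][0]))

    orders, d_prev = [], 1
    for k in (1, 2, 3):
        d_k = reduce(gcd, (abs(minor(r, c))
                           for r in combinations(range(len(M)), k)
                           for c in combinations(range(3), k)), 0)
        if d_k == 0:
            break
        orders.append(d_k // d_prev)
        d_prev = d_k
    return orders

# Boundary lattice points of the two diagrams above (N = 4):
print(one_form_orders([(0, 0), (1, 0), (2, 4), (1, 4)]))  # SU(4)_0: [1, 1, 4]
print(one_form_orders([(0, 0), (1, 0), (2, 3), (1, 4)]))  # SU(4)_1: [1, 1, 1]
```

This reproduces $\mathcal{O}={\mathbb Z}_4=\mathbb{Z}_{\gcd(4,0)}$ for pure $SU(4)_0$ and the trivial answer $\mathbb{Z}_{\gcd(4,1)}$ for pure $SU(4)_1$.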
Here we conjecture a generalization to the non-toric case of generalized toric polygons (GTPs). Consider a GTP, comprised of black and white vertices, and bring it into a convex form (see \cite{vanBeest:2020kou}). The 1-form symmetry is computed in the same way as in (\ref{ToricO}), except that we include all vertices that lie on the polygon -- i.e. all white dots get converted into black dots.
The conjecture is that the resulting toric polygon has the same 1-form symmetry as the diagram with white dots.
Consider e.g. $\mathfrak{su}(4)_0 +\L^2$, whose GTP is the left diagram
\begin{equation}
\begin{tikzpicture}[x=.5cm,y=.5cm]
\draw[step=.5cm,gray,very thin] (0,-2) grid (3,2);
\draw[ligne] (0,0)--(1,-1)--(2,-2)--(3,-1)-- (2,2)-- (1,1) --(0,0);
\node[bd] at (0,0) {};
\node[wd] at (1,-1) {};
\node[bd] at (2,-2) {};
\node[bd] at (3,-1) {};
\node[bd] at (2,2) {};
\node[bd] at (1,1) {};
\end{tikzpicture} \qquad \qquad
\begin{tikzpicture}[x=.5cm,y=.5cm]
\draw[step=.5cm,gray,very thin] (0,-2) grid (3,2);
\draw[ligne] (0,0)--(1,-1)--(2,-2)--(3,-1)-- (2,2)-- (1,1) --(0,0);
\node[bd] at (0,0) {};
\node[bd] at (1,-1) {};
\node[bd] at (2,-2) {};
\node[bd] at (3,-1) {};
\node[bd] at (2,2) {};
\node[bd] at (1,1) {};
\end{tikzpicture} \,,
\end{equation}
Computing the 1-form symmetry from the right diagram results in
\begin{equation}
\mathcal{O} = \mathbb{Z}_2 \,.
\end{equation}
The right hand GTP describes an $\mathfrak{su}(2)_0\oplus\mathfrak{su}(4)_0$ gauge theory with a bifundamental hypermultiplet, which indeed has the same 1-form symmetry.
Similarly for $\mathfrak{su}(6)_0 + \mathbf{AS}$, which has GTP given by the left diagram of
\begin{equation}
\begin{tikzpicture}[x=.5cm,y=.5cm]
\draw[step=.5cm,gray,very thin] (0,-3) grid (4,3);
\draw[ligne] (0,0)--(1,-1)--(2,-2)--(3,-3)--(4,-1)--(3,3)-- (2,2)--(1,1)-- (0,0);
\node[bd] at (0,0) {};
\node[wd] at (1,-1) {};
\node[wd] at (2,-2) {};
\node[wd] at (3,-3) {};
\node[bd] at (4,-1) {};
\node[bd] at (3,3) {};
\node[wd] at (2,2) {};
\node[bd] at (1,1) {};
\end{tikzpicture} \qquad \qquad
\begin{tikzpicture}[x=.5cm,y=.5cm]
\draw[step=.5cm,gray,very thin] (0,-3) grid (4,3);
\draw[ligne] (0,0)--(1,-1)--(2,-2)--(3,-3)--(4,-1)--(3,3)-- (2,2)--(1,1)-- (0,0);
\node[bd] at (0,0) {};
\node[bd] at (1,-1) {};
\node[bd] at (2,-2) {};
\node[bd] at (3,-3) {};
\node[bd] at (4,-1) {};
\node[bd] at (3,3) {};
\node[bd] at (2,2) {};
\node[bd] at (1,1) {};
\end{tikzpicture} \,.
\end{equation}
The right hand diagram is $\mathfrak{su}(2)_0-\mathfrak{su}(4)_0-\mathfrak{su}(6)_0$. Indeed both theories have
$\mathcal{O}=\mathbb{Z}_2$ 1-form symmetry.
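As a sanity check on these gauge-theory identifications, one can enumerate the center elements left unbroken by the bifundamental matter; the surviving subgroup is the electric 1-form symmetry. The sketch below is a minimal illustration: the sign convention chosen for the bifundamental center charge is an assumption, but the counting here is unaffected by it.

```python
from itertools import product

def unbroken_center(ranks, bifund_edges):
    """Count elements of the center Z_{N1} x Z_{N2} x ... acting
    trivially on all bifundamental hypermultiplets.

    ranks: list of N for each su(N) gauge factor.
    bifund_edges: list of (i, j) pairs, one per bifundamental
    connecting the i-th and j-th gauge factors.
    """
    count = 0
    for elem in product(*(range(n) for n in ranks)):
        # A bifundamental of su(N_i) x su(N_j) is invariant iff
        # a_i / N_i + a_j / N_j is an integer (the center phases cancel).
        if all((elem[i] * ranks[j] + elem[j] * ranks[i]) % (ranks[i] * ranks[j]) == 0
               for i, j in bifund_edges):
            count += 1
    return count

# su(2) x su(4) with one bifundamental: Z_2 survives.
print(unbroken_center([2, 4], [(0, 1)]))             # 2
# su(2) - su(4) - su(6) linear quiver: again Z_2.
print(unbroken_center([2, 4, 6], [(0, 1), (1, 2)]))  # 2
```

For $\mathfrak{su}(2)\oplus\mathfrak{su}(4)$ the surviving elements are $(0,0)$ and $(1,2)$, i.e. a diagonal $\mathbb{Z}_2$, matching the $\mathcal{O}=\mathbb{Z}_2$ found above.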
\begin{figure}
\centering
\includegraphics[width=12cm]{Junctions.pdf}
\caption{On the left hand side is shown a general 5-brane web (blue) indicating the external $(p,q)$ 5-branes, ending on 7-branes (cyan). From these emanate $(p,q)$-strings (green) that end on D3-branes (yellow). Given a pair of external 5-branes, the strings can only form a junction if they satisfy (\ref{junctioncond}), as shown on the right hand side. The resulting string can be moved into the brane-web by moving the D3-brane inside the web, and becomes a local operator. This is the screening of the Wilson lines by local operators, realized in the brane-web.\label{fig:Junct}}
\end{figure}
This observation about filling in the white dots can be understood by considering the Wilson lines in the $(p,q)$-web, which correspond to $(p,q)$-strings that stretch to infinity (or end on D3-branes at finite distance) \cite{Assel:2018rcw, Uhlemann:2020bek}. A pair of strings ending on 7-branes of charges $(p_1, q_1)$ and $(p_2, q_2)$ can form a single string junction if
\begin{equation}
\det \left(\begin{matrix} p_1 &q_1\\ p_2 & q_2 \end{matrix}\right) = \pm 1 \,.
\end{equation}
Consider a brane web with external 5-branes emanating from it, and pick two of these, of types $(p_1, q_1)$ and $(p_2, q_2)$, which each end at finite distance on a 7-brane of the same $(p,q)$-type. From these 7-branes we can have $(p, q)$-strings emanating, which correspond to the Wilson lines (we can end these on D3-branes). Let
\begin{equation}
\left| \det \left(\begin{matrix} p_1 &q_1\\ p_2 & q_2 \end{matrix}\right)\right| = n_{1,2} \,.
\end{equation}
Then these strings can form a junction satisfying \cite{Bergman:1998ej}
\begin{equation}\label{junctioncond}
{n_{1,2} (p_1, q_1)} + (p_2, q_2) \qquad \rightarrow \qquad \left((p_2 , q_2) + n_{1,2} (p_1, q_1)\right) \,.
\end{equation}
These junction strings can end on D3-branes and can be moved back into the web. This is the analog of the screening of Wilson lines by local operators and is illustrated in figure \ref{fig:Junct}. For a given 5-brane web, each external 5-brane gives rise to a Wilson line, in the fashion described above. Considering the possible junctions pairwise determines which Wilson lines are screened. Taking the gcd over these determinants computes the overall screening by all possible string junctions in the web.
This of course is precisely encoded in the expression (\ref{WebOne}) and the resulting 1-form symmetry.
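As a minimal sketch, the pairwise-gcd prescription just described can be transcribed directly. The charges below are taken, up to an overall $SL(2,\mathbb{Z})$ transformation (which leaves all determinants invariant), from the boundary edges of the right-hand GTP of the $\mathfrak{su}(2)_0\oplus\mathfrak{su}(4)_0$ example; this illustrates the bookkeeping only, not the full content of (\ref{WebOne}).

```python
from math import gcd
from itertools import combinations

def screening_order(legs):
    """gcd of |det| over all pairs of external 5-brane charges (p, q).

    Each pair with nonzero determinant can form a string junction that
    screens Wilson lines; the gcd gives the order n of the surviving
    1-form symmetry group Z_n. Parallel legs have vanishing determinant
    and are automatically ignored, since gcd(n, 0) = n.
    """
    n = 0
    for (p1, q1), (p2, q2) in combinations(legs, 2):
        n = gcd(n, abs(p1 * q2 - p2 * q1))
    return n

# External legs of the su(2)+su(4) bifundamental web, one per boundary
# edge segment of the polygon, read counter-clockwise:
legs = [(1, -1), (1, -1), (1, 1), (-1, 3), (-1, -1), (-1, -1)]
print(screening_order(legs))  # 2, i.e. a Z_2 1-form symmetry
```

The nonzero pairwise determinants here are $2,2,2,4,4$ (with repeats), whose gcd is $2$, matching the $\mathbb{Z}_2$ found above.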
From this perspective it is also clear why, in a GTP with white dots, the 1-form symmetry is computed from the GTP obtained by filling in all white dots, i.e. converting them to black dots. A white dot corresponds to two 5-branes ending on the same 7-brane, whereas a black dot along an edge corresponds to two parallel 5-branes ending on one 7-brane each.
Were we not to include the white dots, we would not account for all possible strings (Wilson lines), since a Wilson line can end on either of the two 5-branes that end on the same 7-brane. The 7-branes themselves are not essential in this, as we can send them to infinity.
\section*{Acknowledgements}
We thank Fabio Apruzzi, Pietro Benetti Genolini, Antoine Bourget, Cyril Closset, Yi-Nan Wang, Gabi Zafrir and in particular Julius Eckhard for discussions.
The work of LB is supported by NSF grant PHY-1719924.
The work of SSN is
supported by the ERC Consolidator Grant number 682608 ``Higgs bundles: Supersymmetric Gauge Theories and Geometry (HIGGSBNDL)''. SSN acknowledges support also from the Simons Foundation.
\bibliographystyle{ytphys}
\section{Introduction}
Due to the prevalence of online social networking sites, social networks are a central topic in network science. Nowadays, we have a good understanding of network structures, and attention has shifted towards their prediction, influence, and control. Full control of social networks is very hard to achieve due to their varying structures, dynamics, and the complexities of human behaviour. This study looks into how driver nodes, which enable control of complex networks, can be used in the context of influence spread in social networks. We use driver nodes at both the global and the community level to `divide and conquer' the time-consuming problem of driver node identification.
Until recently, we did not know if and how the structure of social networks correlated with the number of driver nodes required to control the network~\cite{sadaf2021insight}. As driver nodes play a key role in achieving control of a complex network, identifying them and studying their correlation with network structure measures can bring valuable insights, such as which network structures are easier to control, and how we can alter the structure in our favour to achieve maximum control over the network. Our previous work~\cite{sadaf2021insight} determines the relationship between some global network structure measures and the number of driver nodes. That study builds an understanding of how the global network profiles of synthetic (random, small-world, scale-free) and real social networks influence the number of driver nodes needed for control. It focuses on global structural measures such as network density and the important role density can play in determining the size of a suitable set of driver nodes. The results show that as density increases in random, small-world and scale-free networks, the number of driver nodes tends to decrease.
In this work we explore the potential that exploiting local structures (in this study we focus on communities) can offer in developing control of, and influencing, the network. Finding communities in a social network is itself a difficult task due to both dynamic and combinatorial factors~\cite{sathiyakumari2016community}.
This study explores the possibility of using community structure in social networks to reduce the cost of identifying driver nodes, and whether this remains a feasible approach for network control and influence spread methods.
Our main research questions for this work are stated as follows:
\begin{enumerate}
\item How can we rank driver nodes within communities to identify an optimal subset of driver nodes for use as seed nodes?
\item How quickly does influence spread from seed nodes chosen using driver node selection methods at the community level?
\item Does the percentage of influenced nodes increase or decrease when using driver node based seed selection methods in communities, as compared to driver node based seed selection methods in the network as a whole, for both synthetic and real data?
\item How does the network structure (of synthetic or real networks) impact the percentage of nodes influenced with each method?
\end{enumerate}
This paper contains the following sections: Section~\ref{background} describes related work and the main research challenge that is the focus of this study.
Section~\ref{methodology} describes the research methodology in detail, and Section~\ref{resultsandanalysis} presents the results and analysis of the experiments performed. Finally, the conclusions drawn from the experiments and future work are discussed in Section~\ref{conclusion&futurework}.
\section{Related Work}\label{background}
The Influence Maximisation problem aims at discovering an influential set of nodes that can influence the highest number of nodes in a social network in the shortest possible time. Such a set of nodes can be used to propagate influence in terms of social media news, advertising, etc. Several algorithms have been proposed to solve the influence maximisation problem by identifying a set of nodes that is highly influential compared to other nodes, for example Basic Greedy~\cite{kempe2003maximizing}, CELF~\cite{leskovec2005graphs}, CELF++~\cite{goyal2011celf++}, Static Greedy~\cite{cheng2013staticgreedy}, Nguyen's Method~\cite{nguyen2013budgeted}, Borgs et al.'s Method~\cite{borgs2014maximizing}, SKIM~\cite{cohen2014sketch}, TIM+~\cite{tang2014influence}, IMM~\cite{tang2015influence}, Stop and Stare~\cite{nguyen2016stop}, Zhu et al.'s Method~\cite{zhu2017emotional} and BCT~\cite{nguyen2017billion}. Many of these algorithms have high run times when identifying a set of nodes to diffuse influence through a social network; there is therefore a need to explore whether other types of nodes can achieve high influence~\cite{kazemzadeh2022influence}.
The problem of influence maximisation is closely related to the spreading of information on networks. The two most common network-based models are the Independent Cascade model~\cite{kempe2003maximizing} and Threshold models~\cite{granovetter1978threshold}.
In one previously proposed framework, the possible seed set is identified by analysing the properties of the community structures in the network. The CIM algorithm (Community-Based Influence Maximisation) utilises hierarchical clustering to detect communities in the network, uses the information about community structures to identify candidate seed nodes, and finally selects the seed set from these candidates~\cite{chen2014cim}.
From previous work such as~\cite{chen2014cim} and~\cite{kazemzadeh2022influence}, we can see that detecting communities and then selecting seed nodes from those communities can be an effective strategy to maximise influence.
Our previous study~\cite{sadaf2021insight} achieved the following main results, which are the basis for the new experiments in the current research work.
\begin{itemize}
\item Correlation between network density and number of driver nodes: network densities and the numbers of driver nodes were plotted against each other to observe how the number of driver nodes increases or decreases with the density of the network.
\item Structural measures and density of driver nodes: a comparison of structural measures (Betweenness Centrality, Closeness Centrality, numbers of nodes and edges, Eigenvector Centrality and Clustering Coefficient) with the density of driver nodes, defined as the total number of driver nodes divided by the total number of nodes in the network.
\end{itemize}
In our proposed methods, we utilise driver nodes within the communities of networks for influence spread using the Linear Threshold Model. To make the driver nodes more influential, we propose different ranking mechanisms and observe the number of nodes influenced after a certain time with a certain percentage of seed nodes, in synthetic as well as real networks. The details of the network datasets are presented in later sections.
We explain our method to select seed nodes from the communities in the next section.
\section{Methodology}\label{methodology}
This work springs from the question of whether network control methods, in particular driver node selection, can be used to improve seed selection in influence models.
This prompts two possible approaches: (i) using driver nodes selected from the network as a whole, and (ii) using driver nodes selected at the community level as seeds.
For all experiments, we used the Linear Threshold Model to model influence propagation, with a fixed threshold of 0.5 for the network diffusion model. It has previously been observed that a threshold value of at least 0.4 accelerates influence propagation~\cite{chen2014cim}.
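A minimal sketch of this diffusion model with a uniform threshold (the example graph and seed set are hypothetical; we use the non-strict variant of the activation rule, i.e. a node activates once the active fraction of its neighbourhood reaches its threshold):

```python
def linear_threshold(adj, seeds, threshold=0.5, max_iter=20):
    """Linear Threshold Model with a uniform threshold.

    adj: dict node -> set of neighbours (undirected graph).
    A node becomes active once the fraction of its active neighbours
    reaches its threshold; the process stops when no node activates
    in the current step (or after max_iter iterations).
    """
    active = set(seeds)
    for _ in range(max_iter):
        newly = {u for u in adj if u not in active
                 and adj[u]  # isolated nodes never activate
                 and len(adj[u] & active) / len(adj[u]) >= threshold}
        if not newly:
            break
        active |= newly
    return active

# A toy 5-node graph: a triangle 1-2-3 attached to a path 3-4-5.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
print(sorted(linear_threshold(adj, seeds={1})))  # [1, 2, 3, 4, 5]
```

Seeding node 1 activates node 2 (half of its neighbourhood is active), then node 3, and the activation travels down the path, illustrating the stepwise spread measured in our experiments.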
\subsection{Datasets Description}
To enable comprehensive and robust testing of the proposed approaches, both generated and real-world social networks have been used. Following is a brief description of networks used in the experiments.
\begin{enumerate}
\item Generated Networks: we generated random, small-world and scale-free networks with sizes of 100, 200, 300, 400 and 500 nodes. For each network size, we generated networks of increasing density, up to the maximum density of 1. A total of 720 networks were generated~\cite{sadaf2021insight}.
\item Social Networks: we use 22 real-world social networks of varying size, the number of nodes and number of edges are presented in Table~\ref{table:Gainoverothermethodssocialnetworks}. The networks are available for download at SNAP\footnote{http://snap.stanford.edu/}.
\end{enumerate}
\subsection{Influence spread using global driver nodes as seeds}
The first experiment focuses on the seed selection process from the global perspective. Driver nodes are selected from the network as a whole, ranked, and finally used as seeds in the influence process. The approach described below was proposed in~\cite{sadaf2022bridge}. As it outperforms other state-of-the-art ranking methods, it serves in this study as a benchmark against which to compare global- and local-level seed selection methods. The steps are as follows:
\begin{enumerate}
\item The Minimum Dominating Set (MDS) method~\cite{nacher2012dominating} has been used to identify the driver nodes in the networks; more detail on this process can be found in~\cite{sadaf2021insight}. The MDS is found using a greedy algorithm. At the start, the dominating set is empty. In each iteration of the algorithm, a vertex is added to the set such that it covers the maximum number of previously uncovered vertices. If more than one vertex fulfils this criterion, a vertex is chosen randomly among the set of nominated vertices~\cite{sanchis2002experimental}.
\item We ranked the nodes using different ranking mechanisms. The goal was to obtain an efficient set of seed nodes that reaches maximum or full influence more quickly.
The ranking mechanisms used are: Random, Degree Centrality, Closeness Centrality, Betweenness Centrality, Kempe Ranking, and Degree-Closeness-Betweenness. We tested various seed set sizes: 1\%, 10\%, 20\%, 30\%, 40\% and 50\% of all detected driver nodes ranked by these methods. In each of the methods, the driver nodes are ranked based on the following measures:
\begin{itemize}
\item In Random (Driver Random -- DR) we ranked the driver nodes randomly.
\item In Degree based seed selection (Driver Degree -- DD) we ranked the driver nodes based on their degree in descending order.
\item For Closeness Centrality based seed selection method (Driver Closeness -- DC), we ranked the nodes on the basis of their closeness centrality in descending order.
\item For Betweenness Centrality based seed selection method (Driver Betweenness -- DB), we ranked the nodes on the basis of their betweenness centrality in descending order.
\item For Degree-Closeness-Betweenness method (Driver Degree Closeness Betweenness -- DDCB), we ranked (in descending order) the driver nodes on the basis of the average of degree, closeness and betweenness centralities of each driver nodes.
\item For Kempe ranking (Driver Kempe -- DK), we start by spreading influence using all the driver nodes as seed nodes. We then calculate the total number of nodes influenced by each driver node in this seed set, and rank the driver nodes in descending order. After ranking, we select the percentage of nodes required for the seed set.
\item Linear Threshold Model (LTM) has been implemented for influence spread process. In LTM the idea is that a node becomes active if a sufficient part of its neighbourhood is active. Each node $u$ has a threshold $t \in[0, 1]$. The threshold represents the fraction of neighbours of $u$ that must be active in order for $u$ to become active. At the beginning of the process, a small percentage of nodes (seeds) is set as active in order to start the process. In the next steps a node becomes active if the fraction of its active neighbours is greater than its threshold, and the whole process stops when no node is activated in the current step~\cite{d2016influence}.
\end{itemize}
\end{enumerate}
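The greedy Minimum Dominating Set step above can be sketched as follows (a simplified sketch: ties are broken deterministically by node order here, rather than randomly as in~\cite{sanchis2002experimental}):

```python
def greedy_dominating_set(adj):
    """Greedy Minimum Dominating Set approximation.

    adj: dict node -> set of neighbours (undirected graph).
    Repeatedly add the vertex covering the most still-uncovered
    vertices (itself plus its neighbours); the resulting set serves
    as the set of driver nodes.
    """
    uncovered = set(adj)
    dominating = set()
    while uncovered:
        # coverage of v = how many uncovered vertices v would dominate
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dominating.add(best)
        uncovered -= {best} | adj[best]
    return dominating

# Star graph: the hub dominates everything, so one driver node suffices.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(greedy_dominating_set(star))  # {0}
```

On a star graph the hub alone dominates the network, which matches the intuition that denser, more centralised structures need fewer driver nodes.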
\subsection{Influence spread using local driver nodes as seeds}
The second experiment employs a new strategy: first identify communities in the network, and then identify driver nodes on a per-community basis.
Once driver nodes for each community are identified, they are then ranked using the same ranking mechanisms as in the first experiment, with seed sets chosen to cover all communities (detailed below).
In detail, the approach is as follows:
\begin{enumerate}
\item Firstly, communities are identified in the network. This was done using the Girvan-Newman algorithm~\cite{girvan2002community}, which detects communities by progressively removing edges from the original graph in decreasing order of edge betweenness centrality.
\item Within each community, candidate driver nodes were identified using the Minimum Dominating Set~\cite{nacher2012dominating} approach as used with the whole network.
The correlation between community densities and the number of driver nodes is found by obtaining the densities of the communities and identifying the number of driver nodes in those communities with the MDS method. We also obtain the difference (Diff.) between the total number of driver nodes identified in the overall network (NDN) and the number of driver nodes found in the communities of that network (NDNC). The Diff. indicates the significance of identifying driver nodes within communities, i.e. of following a divide and conquer approach.
\item To rank the nodes, we introduce a multi-round selection process. This process ranks driver nodes within each community according to the ranking criterion, then selects one node per community per round, in the order given by the ranking, until the target percentage is reached. This is perhaps best explained by the following example, illustrated in Figure~\ref{fig:ExampleSelectionofSeedSetfromCommunities}. Consider a network with 1,000 nodes and 6 communities. Select a ranking method, in this case node degree, and choose a target percentage of nodes to use as seed nodes, 1\% in the example. To choose 10 nodes from the driver nodes detected in the communities, we first select 6 nodes -- the highest degree node from each community, marked in yellow in the figure. In the second round, we can select at most 4 nodes to reach the target of 10: from each community we take the node with the second-highest degree, rank these candidates by degree, and keep the 4 with the highest degree. The same later-round ranking mechanism, i.e. the highest node degree, is used for all community based driver node seed selection methods; only the original within-community ranking differs between techniques, as explained previously.
\item Influence spread in the overall network using driver based seed selection methods follows a series of steps: identification of driver nodes in the network; ranking of the driver nodes based on Random, Node Degree, Closeness Centrality, Betweenness Centrality, Kempe Ranking, or the combined Degree-Closeness-Betweenness centralities; and selection of the seed set as a percentage of the ranked nodes. We run the LTM for different seed set sizes, namely 1\%, 10\%, 20\%, 30\%, 40\% and 50\%.
\item Influence spread through driver nodes in the communities of networks is done by identifying driver nodes per community. A challenge, however, is obtaining a final seed set that has representation from all communities of the network. For this purpose, we devised a ranking approach that ensures that at least one driver node is selected from each community, so that the nodes in every community can also be part of the influence process. This unified ranking approach is applied on top of each of the driver based seed selection methods.
\end{enumerate}
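The round-based selection in step 3 can be sketched as follows (a simplified sketch: `ranked_communities` holds each community's driver nodes already ranked by the method's own criterion, `score` is the common later-round measure -- node degree in our experiments -- and the node names and degrees below are hypothetical):

```python
def select_seeds(ranked_communities, score, k):
    """Pick k seeds from per-community ranked driver-node lists.

    Round 0 takes the top node of every community, guaranteeing that
    each community is represented; later rounds pool the next candidate
    of each community and keep the highest-scoring ones until k seeds
    have been selected.
    """
    seeds = []
    depth = 0
    while len(seeds) < k:
        # candidates at this depth, one per community that still has nodes
        cands = [c[depth] for c in ranked_communities if depth < len(c)]
        if not cands:
            break  # fewer than k driver nodes available in total
        if depth == 0:
            take = cands  # one node from every community first
        else:
            take = sorted(cands, key=score, reverse=True)
        seeds.extend(take[: k - len(seeds)])
        depth += 1
    return seeds

# 3 hypothetical communities with pre-ranked driver nodes;
# score = degree lookup.
degree = {'a': 9, 'b': 7, 'c': 6, 'd': 5, 'e': 8, 'f': 4}
comms = [['a', 'd'], ['b', 'e'], ['c', 'f']]
print(select_seeds(comms, degree.get, k=4))  # ['a', 'b', 'c', 'e']
```

With $k=4$, the first round takes the top node of each community (`a`, `b`, `c`), and the second round fills the remaining slot with the highest-degree second-ranked node (`e`).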
\begin{figure}
\centering
\includegraphics[height=7cm]{Images/9-SNMAM1.png}
\caption{An example showing the process for selecting seed nodes set from the driver nodes identified in network communities}
\label{fig:ExampleSelectionofSeedSetfromCommunities}
\end{figure}
\section{Results and Analysis}\label{resultsandanalysis}
\begin{figure}[htb]
\centering
\includegraphics[height=8cm]{Images/9-SNMAM3.png}
\caption{Number of Nodes Influenced in Random, Small-World and Scale-Free Networks: when the number of nodes (N) is 100 and the number of edges (E) is 800 (Figures a, b and c); when N is 300 and E is 12800 (Figures d, e and f); when N is 500 and E is 72000 (Figures g, h and i). A Comparison of all methods for 20 iterations when the seed size is 1\% is presented.}
\label{fig:RSWSF-20Iterations100-300-500-Nodes}
\end{figure}
\sloppypar
Six novel network-level seed selection methods (Driver-Random (DR), Driver-Degree (DD), Driver-Closeness (DC), Driver-Betweenness (DB), Driver-Kempe (DK) and Driver-Degree-Closeness-Betweenness (DDCB)) were previously proposed and tested on synthetic and real-world networks in~\cite{sadaf2022bridge}, where the results show that they outperform their non-driver based counterparts. In this study, we use those methods, but instead of selecting driver nodes from the global network, we propose a local approach where driver nodes are identified within the networks' communities. We name the new methods by adding C (for community) to the previously proposed names (i.e., DRC - Driver-Random-Community, DDC - Driver-Degree-Community, DCC - Driver-Closeness-Community, DBC - Driver-Betweenness-Community, DKC - Driver-Kempe-Community and DDCBC - Driver-Degree-Closeness-Betweenness-Community). Below, we compare community based driver seed selection methods to network based driver seed selection methods.
\subsection{Results From Generated Networks}
This section covers the results and analysis of the experiments performed on generated networks.
\subsubsection{What is the speed and reach of the influence spread?}
First, we compare the percentage of nodes influenced for global-level driver based seed selection methods and local-level (community) driver based seed selection methods. We perform the analysis iteration by iteration to see which seed selection methods achieve the highest coverage the fastest.
In Figure~\ref{fig:RSWSF-20Iterations100-300-500-Nodes}, we can see trend-lines for all the seed selection methods (when the seed set size is 1\% of all the driver nodes) for random, small-world and scale-free networks. The DDCBC method outperforms the other methods in almost all experimented cases. We can see a `head-start' in the trend-line of DDCBC (shown in black) for all networks when the number of nodes is 100 and the number of edges is 800. This means that within only a few iterations, DDCBC influences more nodes than the other seed selection methods.
\begin{figure}[htb]
\centering
\includegraphics{Images/9-SNMAM2.png}
\caption{Average Number of Nodes in Communities of Random, Small-World and Scale-Free Networks versus number of communities in those networks. Legend shows the Number of Nodes in communities of generated networks i.e. Random (R), Small-World (SW) and Scale-Free (SF).}
\label{fig:AvgNodesCommunities-RSWSF}
\end{figure}
Results in Figure~\ref{fig:RSWSF-20Iterations100-300-500-Nodes} show that when the network is of small size and its density is approximately equal to $0.6$, the influence spreads faster when using driver-community based seed selection methods than when the global-level driver based methods are employed.
For the networks of smaller density (i.e. $0.4$), where the number of nodes is 300 and the number of edges is 2,800, the difference between the global-level driver based methods and the community-level driver based methods is not as big, but we do see a gap between the DDCBC method and the other methods. This tells us that, so far, the DDCBC ranking of driver nodes in communities works better than plainly using the driver nodes of communities as seed nodes.
Although the comparison is done with a very small seed set (1\% of all driver nodes), DDCBC still achieves more influence earlier in the spreading process than the global-level driver based methods. This also gives insight into larger networks, their structures and densities, and how these are connected to spreading influence. The spread is faster when the density is higher than $0.5$, as in the case of the network with 500 nodes and 72,000 edges presented in Figure~\ref{fig:RSWSF-20Iterations100-300-500-Nodes}. In those cases, the driver-community based methods DRC, DDC, DBC, DKC and DDCBC outperform their counterpart methods DR, DD, DB, DK and DDCB.
Based upon these observations, we conclude that regardless of the network type, as long as the density is higher than $0.5$, the network responds better to the community-based seed selection methods and the spread is faster. Also, regardless of the network density, the community-based method DDCBC outperforms all other methods, see Figure~\ref{fig:RSWSF-20Iterations100-300-500-Nodes}(a-f). This holds for all other settings as well, i.e. for the different numbers of edges in the networks with 100, 200, 300, 400 and 500 nodes.
\subsubsection{How much advantage do community-level driver based seed selection methods give?}
Given a number of iterations $n$ and a method $X$, let $N^{infl}_{n}(X)$ denote the number of nodes influenced using the method $X$ after $n$ iterations. The Percentage Gain of method $A$ over method $B$ after $n$ iterations is then given by:
\begin{equation}\label{eqn:percent_gain}
\frac{N^{infl}_{n}(A) - N^{infl}_{n}(B)}{N} \times 100
\end{equation}
where $N$ is the number of nodes in the network.
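As a sketch, the percentage gain of Equation~(\ref{eqn:percent_gain}) in code (the example numbers are hypothetical):

```python
def percentage_gain(influenced_a, influenced_b, n_nodes):
    """Percentage gain of method A over method B after a fixed number
    of iterations, normalised by the network size N."""
    return (influenced_a - influenced_b) / n_nodes * 100

# E.g. method A influencing 96 nodes vs method B influencing 72
# in a 1200-node network:
print(percentage_gain(96, 72, 1200))  # 2.0
```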
Table~\ref{table:Gainoverothermethodsgeneratednetworks} shows the percentage gain of the DDCBC method over the global-level driver based methods. We report only the driver based methods (i.e. DR, DB, DC, DD, DK and DDCB), since the gain over them is higher than over the other driver-community based methods (i.e. DRC, DBC, DCC, DDC and DKC), and since they are the baseline for this study. The percentage gain is calculated from the maximum number of nodes influenced after $20$ iterations with a seed size of 1\%.
From Table~\ref{table:Gainoverothermethodsgeneratednetworks} we can see that the maximum gain occurs when the average density of the communities of the network is greater than 0.5. When the density reaches 1, all methods perform very similarly, as spread in a fully connected network behaves in much the same way regardless of the applied seed selection method. This highlights our previous point that network density plays an important part in how effectively a network responds to the influence spread process. The highest gain for the DDCBC method is observed in random networks, but DDCBC outperforms all global-level driver based methods in all networks, except those with densities equal or very close to 1.
From Figure~\ref{fig:AvgNodesCommunities-RSWSF}, we can see the average number of nodes in communities versus the total number of communities in random, small-world and scale-free networks. The denser the network, the fewer communities it has, and those communities are themselves denser. Hence, due to the increase in community density, we see the higher percentage gain of the DDCBC method: with fewer, denser communities, the number of nodes influenced by DDCBC increases, and its lead over the other methods grows.
\begin{table}[ht]
\centering
\caption{\label{table:Gainoverothermethodsgeneratednetworks}A percentage gain table shows the percentage gain of DDCBC method over other seed selection methods in influencing the nodes in Random, Small-World and Scale-Free networks when the seed set size is 1\% after 20 iterations. $N$ is number of nodes, $E$ is number of edges, $C$ is number of communities and $CD$ is average community density.}
\begin{tabular}{|r|r|r|r|llllll|llllll|llllll|}
\hline
& \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CD} & \multicolumn{6}{c|}{Random Networks} & \multicolumn{6}{c|}{Small-World Networks} & \multicolumn{6}{c|}{Scale-Free Networks} \\ \cline{4-22}
\multirow{-2}{*}{N} & \multicolumn{1}{c|}{\multirow{-2}{*}{E}} & \multicolumn{1}{c|}{\multirow{-2}{*}{C}} & Avg ± SD & \multicolumn{1}{l|}{DR} & \multicolumn{1}{l|}{DB} & \multicolumn{1}{l|}{DC} & \multicolumn{1}{l|}{DD} & \multicolumn{1}{l|}{DK} & DDCB & \multicolumn{1}{l|}{DR} & \multicolumn{1}{l|}{DB} & \multicolumn{1}{l|}{DC} & \multicolumn{1}{l|}{DD} & \multicolumn{1}{l|}{DK} & DDCB & \multicolumn{1}{l|}{DR} & \multicolumn{1}{l|}{DB} & \multicolumn{1}{l|}{DC} & \multicolumn{1}{l|}{DD} & \multicolumn{1}{l|}{DK} & DDCB \\ \hline
& 800 & 6 & 0.16±0.01 & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} \\ \cline{2-22}
& 1600 & 5 & 0.3±0.03 & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} \\ \cline{2-22}
& 2400 & 4 & 0.44±0.06 & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} \\ \cline{2-22}
& 3200 & 3 & 0.58±0.12 & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} \\ \cline{2-22}
& 4000 & 2 & 0.73±0.14 & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} \\ \cline{2-22}
& 4800 & 1 & 0.88±0.15 & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} \\ \cline{2-22}
\multirow{-7}{*}{100} & 4950 & 1 & 0.96±0.07 & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} \\ \hline
& 2400 & 5 & 0.12±0.01 & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} \\ \cline{2-22}
& 4800 & 4 & 0.23±0.02 & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} \\ \cline{2-22}
& 7200 & 4 & 0.36±0.01 & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} \\ \cline{2-22}
& 9600 & 4 & 0.48±0.02 & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} \\ \cline{2-22}
& 12000 & 3 & 0.56±0.07 & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} \\ \cline{2-22}
& 14400 & 2 & 0.67±0.09 & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} \\ \cline{2-22}
& 16800 & 1 & 0.78±0.11 & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} \\ \cline{2-22}
& 19200 & 1 & 0.90±0.10 & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} \\ \cline{2-22}
\multirow{-9}{*}{200} & 19900 & 1 & 0.97±0.06 & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} \\ \hline
& 12800 & 5 & 0.31±0.03 & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} \\ \cline{2-22}
& 19200 & 5 & 0.41±0.03 & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} \\ \cline{2-22}
& 22400 & 4 & 0.46±0.06 & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} \\ \cline{2-22}
& 25600 & 4 & 0.53±0.08 & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} \\ \cline{2-22}
& 28800 & 3 & 0.58±0.10 & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} \\ \cline{2-22}
& 32000 & 2 & 0.63±0.17 & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} \\ \cline{2-22}
& 35200 & 1 & 0.69±0.16 & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} \\ \cline{2-22}
& 38400 & 1 & 0.76±0.17 & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} \\ \cline{2-22}
& 41600 & 1 & 0.83±0.17 & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} \\ \cline{2-22}
\multirow{-10}{*}{300} & 44850 & 1 & 0.91±0.15 & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} \\ \hline
& 40000 & 4 & 0.43±0.12 & \multicolumn{1}{r|}{\cellcolor[HTML]{70C386}23} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{76C68B}22} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} \\ \cline{2-22}
& 44000 & 4 & 0.48±0.12 & \multicolumn{1}{r|}{\cellcolor[HTML]{63BE7B}25} & \multicolumn{1}{r|}{\cellcolor[HTML]{76C68B}22} & \multicolumn{1}{r|}{\cellcolor[HTML]{76C68B}22} & \multicolumn{1}{r|}{\cellcolor[HTML]{76C68B}22} & \multicolumn{1}{r|}{\cellcolor[HTML]{76C68B}22} & \multicolumn{1}{r|}{\cellcolor[HTML]{76C68B}22} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} \\ \cline{2-22}
& 48000 & 4 & 0.53±0.12 & \multicolumn{1}{r|}{\cellcolor[HTML]{63BE7B}25} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} \\ \cline{2-22}
& 52000 & 4 & 0.58±0.12 & \multicolumn{1}{r|}{\cellcolor[HTML]{76C68B}22} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} \\ \cline{2-22}
& 60000 & 3 & 0.67±0.14 & \multicolumn{1}{r|}{\cellcolor[HTML]{8ED0A0}18} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} \\ \cline{2-22}
& 64000 & 2 & 0.76±0.07 & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} \\ \cline{2-22}
& 68000 & 1 & 0.83±0.03 & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} \\ \cline{2-22}
& 72000 & 1 & 0.88±0.03 & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} \\ \cline{2-22}
& 76000 & 1 & 0.93±0.03 & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} \\ \cline{2-22}
\multirow{-10}{*}{400} & 98000 & 1 & 0.98±0.03 & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} \\ \hline
& 72000 & 4 & 0.52±0.10 & \multicolumn{1}{r|}{\cellcolor[HTML]{70C386}23} & \multicolumn{1}{r|}{\cellcolor[HTML]{A1D7B0}15} & \multicolumn{1}{r|}{\cellcolor[HTML]{A1D7B0}15} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BD5AB}16} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BD5AB}16} & \multicolumn{1}{r|}{\cellcolor[HTML]{A1D7B0}15} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} \\ \cline{2-22}
& 76800 & 3 & 0.56±0.10 & \multicolumn{1}{r|}{\cellcolor[HTML]{7CC891}21} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BD5AB}16} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BD5AB}16} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BD5AB}16} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BD5AB}16} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BD5AB}16} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9E1C5}11} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} \\ \cline{2-22}
& 81600 & 4 & 0.6±0.09 & \multicolumn{1}{r|}{\cellcolor[HTML]{88CD9B}19} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{A7DAB6}14} & \multicolumn{1}{r|}{\cellcolor[HTML]{A7DAB6}14} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{D8EEE0}6} \\ \cline{2-22}
& 86400 & 3 & 0.69±0.01 & \multicolumn{1}{r|}{\cellcolor[HTML]{88CD9B}19} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADDCBB}13} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} \\ \cline{2-22}
& 91200 & 3 & 0.73±0.01 & \multicolumn{1}{r|}{\cellcolor[HTML]{A1D7B0}15} & \multicolumn{1}{r|}{\cellcolor[HTML]{A7DAB6}14} & \multicolumn{1}{r|}{\cellcolor[HTML]{A7DAB6}14} & \multicolumn{1}{r|}{\cellcolor[HTML]{A7DAB6}14} & \multicolumn{1}{r|}{\cellcolor[HTML]{A7DAB6}14} & \multicolumn{1}{r|}{\cellcolor[HTML]{A7DAB6}14} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} \\ \cline{2-22}
& 96000 & 3 & 0.76±0.01 & \multicolumn{1}{r|}{\cellcolor[HTML]{B3DFC0}12} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFE4CB}10} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} \\ \cline{2-22}
& 100800 & 1 & 0.81±0.01 & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E6D0}9} & \multicolumn{1}{r|}{\cellcolor[HTML]{CCE9D5}8} & \multicolumn{1}{r|}{\cellcolor[HTML]{D2EBDB}7} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} \\ \cline{2-22}
& 105200 & 1 & 0.84±0 & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{DEF0E5}5} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F0F8F5}2} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} \\ \cline{2-22}
& 110000 & 2 & 0.88±0 & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{E4F3EA}4} & \multicolumn{1}{r|}{\cellcolor[HTML]{EAF5F0}3} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{F6FAFA}1} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} \\ \cline{2-22}
\multirow{-10}{*}{500} & 124750 & 2 & 0.97±0.06 & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} & \multicolumn{1}{r|}{\cellcolor[HTML]{FCFCFF}0} \\ \hline
\end{tabular}
\end{table}
\begin{table}[ht]
\centering
\caption{\label{table:Gainoverothermethodssocialnetworks}Percentage gain of the DDCBC method over the other seed selection methods in influencing the nodes of the social networks. Average community densities of the networks are as follows: FB (0.06±0.02), ZKC (0.32±0.4), Twitter (0.00029±0.05), Diggs (0.00008±0.007), Youtube (0.000012±0.04), Ego (0.00034±0.05), LC (0.007±0.032), LF (0.0073±0.09), PF (0.015±0.54), MFb (0.001±0.43), DHR (0.00085±0.21), DRO (0.0005±0.4), DHU (0.0004±0.63), MG (0.0011±0.03), L (0.0019±0.54), FbAR (0.0014±0.03), FbA (0.0015±0.09), FbG (0.0075±0.05), FbN (0.0013±0.003), FbP (0.0049±0.003), FbPF (0.004±0.032) and FbT (0.0051±0.05)}
\begin{tabular}{|r|r|r|c|ccccccccccc|}
\hline \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & \multicolumn{11}{c|}{Seed Selection Methods (20\% of all nodes)} \\ \cline{5-15}
\multicolumn{1}{|c|}{\multirow{-2}{*}{N}} & \multicolumn{1}{c|}{\multirow{-2}{*}{E}} & \multicolumn{1}{c|}{\multirow{-2}{*}{C}} & \multirow{-2}{*}{Networks} & \multicolumn{1}{c|}{DR} & \multicolumn{1}{c|}{DD} & \multicolumn{1}{c|}{DC} & \multicolumn{1}{c|}{DB} & \multicolumn{1}{c|}{DDCB} & \multicolumn{1}{c|}{DK} & \multicolumn{1}{c|}{DRC} & \multicolumn{1}{c|}{DDC} & \multicolumn{1}{c|}{DCC} & \multicolumn{1}{c|}{DBC} & DKC \\ \hline
4039 & 88234 & 180 & FB & \multicolumn{1}{r|}{\cellcolor[HTML]{93BC77}28.68} & \multicolumn{1}{r|}{\cellcolor[HTML]{9FC784}25.03} & \multicolumn{1}{r|}{\cellcolor[HTML]{A0C784}24.94} & \multicolumn{1}{r|}{\cellcolor[HTML]{9CC481}25.94} & \multicolumn{1}{r|}{\cellcolor[HTML]{9FC783}25.15} & \multicolumn{1}{r|}{\cellcolor[HTML]{A1C985}24.59} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.59} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.59} & \multicolumn{1}{r|}{\cellcolor[HTML]{A9D08E}22.19} & \multicolumn{1}{r|}{\cellcolor[HTML]{A9D08E}22.28} & \multicolumn{1}{r|}{\cellcolor[HTML]{ABD190}21.14}
\\ \hline
34 & 78 & 2 & ZKC & \multicolumn{1}{r|}{\cellcolor[HTML]{B7D8A0}12.18} & \multicolumn{1}{r|}{\cellcolor[HTML]{C2DEAF}4.00} & \multicolumn{1}{r|}{\cellcolor[HTML]{C4DFB1}2.82} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E0B3}2.09} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E0B3}1.95} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E0B3}1.73} & \multicolumn{1}{r|}{\cellcolor[HTML]{C6E0B4}1.18} & \multicolumn{1}{r|}{\cellcolor[HTML]{C6E0B4}1.18} & \multicolumn{1}{r|}{\cellcolor[HTML]{C6E0B4}1.27} & \multicolumn{1}{r|}{\cellcolor[HTML]{C6E0B4}1.00} & \multicolumn{1}{r|}{\cellcolor[HTML]{C6E0B4}1}
\\ \hline
23371 & 32832 & 350 & Twitter & \multicolumn{1}{r|}{\cellcolor[HTML]{74A057}37.81} & \multicolumn{1}{r|}{\cellcolor[HTML]{96BF7A}27.83} & \multicolumn{1}{r|}{\cellcolor[HTML]{99C27E}26.80} & \multicolumn{1}{r|}{\cellcolor[HTML]{99C27E}26.78} & \multicolumn{1}{r|}{\cellcolor[HTML]{ACD292}20.16} & \multicolumn{1}{r|}{\cellcolor[HTML]{99C27E}26.77} & \multicolumn{1}{r|}{\cellcolor[HTML]{A3CB88}23.81} & \multicolumn{1}{r|}{\cellcolor[HTML]{A3CB88}23.80} & \multicolumn{1}{r|}{\cellcolor[HTML]{A4CB88}23.74} & \multicolumn{1}{r|}{\cellcolor[HTML]{A6CD8B}23.06} & \multicolumn{1}{r|}{\cellcolor[HTML]{ABD190}21.22}
\\ \hline
1924000 & 3298475 & 156432 & Diggs & \multicolumn{1}{r|}{\cellcolor[HTML]{659146}42.49} & \multicolumn{1}{r|}{\cellcolor[HTML]{709C53}39.05} & \multicolumn{1}{r|}{\cellcolor[HTML]{78A35B}36.76} & \multicolumn{1}{r|}{\cellcolor[HTML]{79A45C}36.47} & \multicolumn{1}{r|}{\cellcolor[HTML]{729E55}38.37} & \multicolumn{1}{r|}{\cellcolor[HTML]{709B52}39.21} & \multicolumn{1}{r|}{\cellcolor[HTML]{ACD292}20.11} & \multicolumn{1}{r|}{\cellcolor[HTML]{AED394}18.89} & \multicolumn{1}{r|}{\cellcolor[HTML]{AFD496}17.67} & \multicolumn{1}{r|}{\cellcolor[HTML]{B1D598}16.53} & \multicolumn{1}{r|}{\cellcolor[HTML]{ACD292}19.85}
\\ \hline
1134891 & 2987625 & 54983 & Youtube & \multicolumn{1}{r|}{\cellcolor[HTML]{669348}42.00} & \multicolumn{1}{r|}{\cellcolor[HTML]{749F56}38.02} & \multicolumn{1}{r|}{\cellcolor[HTML]{7DA860}35.12} & \multicolumn{1}{r|}{\cellcolor[HTML]{85AF69}32.79} & \multicolumn{1}{r|}{\cellcolor[HTML]{86B069}32.59} & \multicolumn{1}{r|}{\cellcolor[HTML]{81AC65}33.92} & \multicolumn{1}{r|}{\cellcolor[HTML]{C3DFB0}3.51} & \multicolumn{1}{r|}{\cellcolor[HTML]{C4DFB1}2.71} & \multicolumn{1}{r|}{\cellcolor[HTML]{C5E0B3}1.91} & \multicolumn{1}{r|}{\cellcolor[HTML]{C6E0B4}1.11} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFDCAB}6.45}
\\ \hline
23629 & 39195 & 75 & Ego & \multicolumn{1}{r|}{\cellcolor[HTML]{A0C885}24.83} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3D69B}15.34} & \multicolumn{1}{r|}{\cellcolor[HTML]{B4D69C}14.33} & \multicolumn{1}{r|}{\cellcolor[HTML]{B4D69C}14.33} & \multicolumn{1}{r|}{\cellcolor[HTML]{B0D497}17.15} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.81} & \multicolumn{1}{r|}{\cellcolor[HTML]{BBDAA5}9.64} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9D9A3}10.62} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9D9A2}11.14} & \multicolumn{1}{r|}{\cellcolor[HTML]{BBDAA6}9.05} & \multicolumn{1}{r|}{\cellcolor[HTML]{BCDAA6}8.89}
\\ \hline
4658 & 33116 & 517 & LC & \multicolumn{1}{r|}{\cellcolor[HTML]{82AC65}33.84} & \multicolumn{1}{r|}{\cellcolor[HTML]{9AC27E}26.62} & \multicolumn{1}{r|}{\cellcolor[HTML]{9DC582}25.61} & \multicolumn{1}{r|}{\cellcolor[HTML]{9DC582}25.61} & \multicolumn{1}{r|}{\cellcolor[HTML]{9EC682}25.52} & \multicolumn{1}{r|}{\cellcolor[HTML]{89B26C}31.81} & \multicolumn{1}{r|}{\cellcolor[HTML]{A8CF8D}22.40} & \multicolumn{1}{r|}{\cellcolor[HTML]{A5CD8A}23.23} & \multicolumn{1}{r|}{\cellcolor[HTML]{A3CA88}23.98} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.65} & \multicolumn{1}{r|}{\cellcolor[HTML]{A9D08E}22.06}
\\ \hline
874 & 1309 & 97 & LF & \multicolumn{1}{r|}{\cellcolor[HTML]{ADD393}19.29} & \multicolumn{1}{r|}{\cellcolor[HTML]{B9D9A3}10.62} & \multicolumn{1}{r|}{\cellcolor[HTML]{BBDAA5}9.56} & \multicolumn{1}{r|}{\cellcolor[HTML]{BBDAA5}9.34} & \multicolumn{1}{r|}{\cellcolor[HTML]{BBDAA6}9.25} & \multicolumn{1}{r|}{\cellcolor[HTML]{BBDAA5}9.33} & \multicolumn{1}{r|}{\cellcolor[HTML]{BCDBA7}8.38} & \multicolumn{1}{r|}{\cellcolor[HTML]{BBDAA5}9.35} & \multicolumn{1}{r|}{\cellcolor[HTML]{BAD9A4}10.20} & \multicolumn{1}{r|}{\cellcolor[HTML]{BDDBA8}7.86} & \multicolumn{1}{r|}{\cellcolor[HTML]{BBDAA6}9.11}
\\ \hline
1858 & 12534 & 206 & PF & \multicolumn{1}{r|}{\cellcolor[HTML]{B9D9A3}10.62} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFDCAA}6.66} & \multicolumn{1}{r|}{\cellcolor[HTML]{C0DDAC}5.43} & \multicolumn{1}{r|}{\cellcolor[HTML]{C1DDAD}5.21} & \multicolumn{1}{r|}{\cellcolor[HTML]{C1DDAD}5.13} & \multicolumn{1}{r|}{\cellcolor[HTML]{C1DDAD}5.25} & \multicolumn{1}{r|}{\cellcolor[HTML]{C4DFB1}2.94} & \multicolumn{1}{r|}{\cellcolor[HTML]{C3DEAF}3.78} & \multicolumn{1}{r|}{\cellcolor[HTML]{C1DEAE}4.64} & \multicolumn{1}{r|}{\cellcolor[HTML]{C4DFB2}2.60} & \multicolumn{1}{r|}{\cellcolor[HTML]{C4DFB1}2.71}
\\ \hline
22470 & 171002 & 2643 & MFb & \multicolumn{1}{r|}{\cellcolor[HTML]{9EC682}25.44} & \multicolumn{1}{r|}{\cellcolor[HTML]{A9D08E}22.16} & \multicolumn{1}{r|}{\cellcolor[HTML]{ABD190}21.11} & \multicolumn{1}{r|}{\cellcolor[HTML]{ABD190}21.11} & \multicolumn{1}{r|}{\cellcolor[HTML]{ABD190}21.10} & \multicolumn{1}{r|}{\cellcolor[HTML]{ABD190}21.11} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3D69B}15.07} & \multicolumn{1}{r|}{\cellcolor[HTML]{B2D59A}15.80} & \multicolumn{1}{r|}{\cellcolor[HTML]{A7CE8C}22.70} & \multicolumn{1}{r|}{\cellcolor[HTML]{ACD291}20.43} & \multicolumn{1}{r|}{\cellcolor[HTML]{B1D498}16.8}
\\ \hline
54574 & 498202 & 6420 & DHR & \multicolumn{1}{r|}{\cellcolor[HTML]{6E9A50}39.77} & \multicolumn{1}{r|}{\cellcolor[HTML]{7CA75F}35.42} & \multicolumn{1}{r|}{\cellcolor[HTML]{84AE67}33.21} & \multicolumn{1}{r|}{\cellcolor[HTML]{88B26B}32.00} & \multicolumn{1}{r|}{\cellcolor[HTML]{88B26C}31.90} & \multicolumn{1}{r|}{\cellcolor[HTML]{81AB64}34.2} & \multicolumn{1}{r|}{\cellcolor[HTML]{BFDCAA}6.78} & \multicolumn{1}{r|}{\cellcolor[HTML]{BEDCA9}7.26} & \multicolumn{1}{r|}{\cellcolor[HTML]{BDDBA8}7.73} & \multicolumn{1}{r|}{\cellcolor[HTML]{C1DDAD}5.21} & \multicolumn{1}{r|}{\cellcolor[HTML]{C0DDAB}6.01}
\\ \hline
41774 & 125826 & 4914 & DRO & \multicolumn{1}{r|}{\cellcolor[HTML]{659147}42.43} & \multicolumn{1}{r|}{\cellcolor[HTML]{7BA65E}35.74} & \multicolumn{1}{r|}{\cellcolor[HTML]{79A45C}36.42} & \multicolumn{1}{r|}{\cellcolor[HTML]{80AB64}34.22} & \multicolumn{1}{r|}{\cellcolor[HTML]{81AB64}34.12} & \multicolumn{1}{r|}{\cellcolor[HTML]{80AA63}34.45} & \multicolumn{1}{r|}{\cellcolor[HTML]{B5D79E}13.50} & \multicolumn{1}{r|}{\cellcolor[HTML]{B5D79E}13.40} & \multicolumn{1}{r|}{\cellcolor[HTML]{B6D79F}13.13} & \multicolumn{1}{r|}{\cellcolor[HTML]{B6D79F}12.94} & \multicolumn{1}{r|}{\cellcolor[HTML]{81AB64}34.18}
\\ \hline
47539 & 222887 & 5592 & DHU & \multicolumn{1}{r|}{\cellcolor[HTML]{5B883C}45.40} & \multicolumn{1}{r|}{\cellcolor[HTML]{7BA65E}35.77} & \multicolumn{1}{r|}{\cellcolor[HTML]{7FAA62}34.52} & \multicolumn{1}{r|}{\cellcolor[HTML]{80AA63}34.33} & \multicolumn{1}{r|}{\cellcolor[HTML]{81AB64}34.13} & \multicolumn{1}{r|}{\cellcolor[HTML]{719D53}38.85} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BC37F}26.35} & \multicolumn{1}{r|}{\cellcolor[HTML]{99C17D}27.02} & \multicolumn{1}{r|}{\cellcolor[HTML]{9DC581}25.84} & \multicolumn{1}{r|}{\cellcolor[HTML]{9DC582}25.61} & \multicolumn{1}{r|}{\cellcolor[HTML]{9EC683}25.33}
\\ \hline
37700 & 289003 & 4435 & MG & \multicolumn{1}{r|}{\cellcolor[HTML]{8DB670}30.54} & \multicolumn{1}{r|}{\cellcolor[HTML]{97C07B}27.43} & \multicolumn{1}{r|}{\cellcolor[HTML]{9CC480}26.07} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BC380}26.25} & \multicolumn{1}{r|}{\cellcolor[HTML]{9CC480}26.07} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BC37F}26.34} & \multicolumn{1}{r|}{\cellcolor[HTML]{B2D599}16.14} & \multicolumn{1}{r|}{\cellcolor[HTML]{B2D59A}15.49} & \multicolumn{1}{r|}{\cellcolor[HTML]{B2D599}16.05} & \multicolumn{1}{r|}{\cellcolor[HTML]{BAD9A4}10.35} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3D69B}14.86}
\\ \hline
7624 & 27806 & 759 & L & \multicolumn{1}{r|}{\cellcolor[HTML]{9AC27F}26.55} & \multicolumn{1}{r|}{\cellcolor[HTML]{9FC683}25.25} & \multicolumn{1}{r|}{\cellcolor[HTML]{A3CA87}24.04} & \multicolumn{1}{r|}{\cellcolor[HTML]{A3CB88}23.82} & \multicolumn{1}{r|}{\cellcolor[HTML]{A3CB88}23.79} & \multicolumn{1}{r|}{\cellcolor[HTML]{A3CB88}23.81} & \multicolumn{1}{r|}{\cellcolor[HTML]{AFD395}18.34} & \multicolumn{1}{r|}{\cellcolor[HTML]{AFD396}18.11} & \multicolumn{1}{r|}{\cellcolor[HTML]{AFD496}17.75} & \multicolumn{1}{r|}{\cellcolor[HTML]{AFD496}17.71} & \multicolumn{1}{r|}{\cellcolor[HTML]{AFD496}17.70}
\\ \hline
50516 & 819306 & 5943 & FbAR & \multicolumn{1}{r|}{\cellcolor[HTML]{6D994F}39.97} & \multicolumn{1}{r|}{\cellcolor[HTML]{87B06A}32.40} & \multicolumn{1}{r|}{\cellcolor[HTML]{8BB46E}31.18} & \multicolumn{1}{r|}{\cellcolor[HTML]{8BB56F}30.95} & \multicolumn{1}{r|}{\cellcolor[HTML]{8BB56F}30.93} & \multicolumn{1}{r|}{\cellcolor[HTML]{8AB46E}31.30} & \multicolumn{1}{r|}{\cellcolor[HTML]{91BA74}29.43} & \multicolumn{1}{r|}{\cellcolor[HTML]{92BA75}29.14} & \multicolumn{1}{r|}{\cellcolor[HTML]{8CB56F}30.85} & \multicolumn{1}{r|}{\cellcolor[HTML]{93BC77}28.56} & \multicolumn{1}{r|}{\cellcolor[HTML]{91BA75}29.28}
\\ \hline
13867 & 86858 & 1383 & FbA & \multicolumn{1}{r|}{\cellcolor[HTML]{548235}47.29} & \multicolumn{1}{r|}{\cellcolor[HTML]{86B06A}32.45} & \multicolumn{1}{r|}{\cellcolor[HTML]{8BB56F}31.05} & \multicolumn{1}{r|}{\cellcolor[HTML]{8DB670}30.55} & \multicolumn{1}{r|}{\cellcolor[HTML]{6A964C}40.87} & \multicolumn{1}{r|}{\cellcolor[HTML]{59873A}45.89} & \multicolumn{1}{r|}{\cellcolor[HTML]{87B16A}32.28} & \multicolumn{1}{r|}{\cellcolor[HTML]{88B26C}31.83} & \multicolumn{1}{r|}{\cellcolor[HTML]{84AE67}33.28} & \multicolumn{1}{r|}{\cellcolor[HTML]{8DB671}30.46} & \multicolumn{1}{r|}{\cellcolor[HTML]{88B26B}32.01}
\\ \hline
7058 & 89455 & 784 & FbG & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.95} & \multicolumn{1}{r|}{\cellcolor[HTML]{ACD292}20.22} & \multicolumn{1}{r|}{\cellcolor[HTML]{AED394}18.93} & \multicolumn{1}{r|}{\cellcolor[HTML]{AED394}18.71} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADD394}19.18} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADD394}19.20} & \multicolumn{1}{r|}{\cellcolor[HTML]{B5D79D}13.97} & \multicolumn{1}{r|}{\cellcolor[HTML]{B5D79D}13.75} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3D69A}15.39} & \multicolumn{1}{r|}{\cellcolor[HTML]{B6D79F}13.13} & \multicolumn{1}{r|}{\cellcolor[HTML]{B5D79E}13.68}
\\ \hline
27918 & 206259 & 3284 & FbN & \multicolumn{1}{r|}{\cellcolor[HTML]{82AC65}33.82} & \multicolumn{1}{r|}{\cellcolor[HTML]{A6CD8B}23.03} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}22.00} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.95} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.96} & \multicolumn{1}{r|}{\cellcolor[HTML]{A9D08E}22.01} & \multicolumn{1}{r|}{\cellcolor[HTML]{B6D79F}12.85} & \multicolumn{1}{r|}{\cellcolor[HTML]{B6D89F}12.64} & \multicolumn{1}{r|}{\cellcolor[HTML]{B7D8A0}12.18} & \multicolumn{1}{r|}{\cellcolor[HTML]{B7D8A0}12.10} & \multicolumn{1}{r|}{\cellcolor[HTML]{B7D8A0}12.40}
\\ \hline
5909 & 41729 & 562 & FbP & \multicolumn{1}{r|}{\cellcolor[HTML]{89B36C}31.73} & \multicolumn{1}{r|}{\cellcolor[HTML]{A6CE8B}22.90} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.76} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD190}21.40} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.87} & \multicolumn{1}{r|}{\cellcolor[HTML]{AAD18F}21.89} & \multicolumn{1}{r|}{\cellcolor[HTML]{B2D59A}15.89} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3D59A}15.47} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3D69B}15.15} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3D69B}14.90} & \multicolumn{1}{r|}{\cellcolor[HTML]{B3D69B}15.31}
\\ \hline
11566 & 67114 & 1051 & FbPF & \multicolumn{1}{r|}{\cellcolor[HTML]{6E9A51}39.61} & \multicolumn{1}{r|}{\cellcolor[HTML]{87B16B}32.21} & \multicolumn{1}{r|}{\cellcolor[HTML]{8CB56F}30.85} & \multicolumn{1}{r|}{\cellcolor[HTML]{8DB670}30.57} & \multicolumn{1}{r|}{\cellcolor[HTML]{8DB771}30.39} & \multicolumn{1}{r|}{\cellcolor[HTML]{8DB671}30.48} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BC37F}26.30} & \multicolumn{1}{r|}{\cellcolor[HTML]{9CC480}26.12} & \multicolumn{1}{r|}{\cellcolor[HTML]{9DC581}25.85} & \multicolumn{1}{r|}{\cellcolor[HTML]{9FC783}25.21} & \multicolumn{1}{r|}{\cellcolor[HTML]{9BC480}26.21}
\\ \hline
3893 & 17262 & 387 & FbT & \multicolumn{1}{r|}{\cellcolor[HTML]{9CC481}25.93} & \multicolumn{1}{r|}{\cellcolor[HTML]{A7CE8C}22.84} & \multicolumn{1}{r|}{\cellcolor[HTML]{A0C885}24.70} & \multicolumn{1}{r|}{\cellcolor[HTML]{A2C986}24.29} & \multicolumn{1}{r|}{\cellcolor[HTML]{AFD496}17.73} & \multicolumn{1}{r|}{\cellcolor[HTML]{AFD496}17.77} & \multicolumn{1}{r|}{\cellcolor[HTML]{ADD393}19.37} & \multicolumn{1}{r|}{\cellcolor[HTML]{AED395}18.46} & \multicolumn{1}{r|}{\cellcolor[HTML]{B5D79E}13.71} & \multicolumn{1}{r|}{\cellcolor[HTML]{B5D79E}13.36} & \multicolumn{1}{r|}{\cellcolor[HTML]{B0D496}17.63}
\\ \hline
\end{tabular}
\end{table}
\subsection{Results From Social Networks}
The observation that real-world social networks tend to contain dense communities suggests that community-based driver node selection should have a significant advantage over global selection. This relationship with density is also apparent in the generated networks. To verify whether this intuition is correct, we conduct an analysis similar to that performed on the generated networks. First, we analyse the percentage of nodes influenced by each method over 100 iterations with a seed set size of 20\% of the driver nodes. We ran the experiments for seed set sizes of 1\%, 10\%, 20\%, 30\%, 40\% and 50\%. We show the comparison for the 20\% seed size, as it is the smallest seed set that reaches maximum influence in at most 100 iterations. We note, however, that there are also improvements at smaller seed set sizes.
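As an illustration of how the per-iteration influence percentages reported in the figures are obtained, the following is a minimal sketch of a threshold-style diffusion. The exact diffusion model is defined earlier in the paper; the adjacency-list representation and the 0.5 activation threshold used here are illustrative assumptions.

```python
def linear_threshold_spread(adj, seeds, iterations=100, threshold=0.5):
    """Iterate a threshold-style diffusion: a node activates once the
    fraction of its active neighbours reaches `threshold`.  Returns the
    percentage of nodes influenced after each iteration."""
    active = set(seeds)
    history = []
    for _ in range(iterations):
        newly = {v for v in adj
                 if v not in active and adj[v]
                 and sum(u in active for u in adj[v]) / len(adj[v]) >= threshold}
        active |= newly
        history.append(100.0 * len(active) / len(adj))
        if not newly:          # diffusion has converged
            break
    return history
```

On a four-node path seeded at one end, for example, the sketch reaches 100\% of the nodes in three iterations.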
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{Images/9-SNMAM4.png}
\caption{Percentage of nodes influenced in the FB, Z, LC, LF, PF, FbG, FbP, FbPF and FbT networks: a comparison of all methods over 100 iterations.}
\label{fig:FB-ZKC-LC-LF-PF-FbG-FbP-FbPF-FbT}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{Images/9-SNMAM5.png}
\caption{Percentage of nodes influenced in the MFb, DHR, DRO, DHU, MG, L, FbAR and FbA networks: a comparison of all methods over 100 iterations.}
\label{fig:MFb-DHR-DRO-DHU-MG-L-FbAR-FbA}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{Images/9-SNMAM6.png}
\caption{Percentage of nodes influenced in the Youtube, Twitter, Diggs and Ego networks: a comparison of all methods over 100 iterations.}
\label{fig:Youtube-Twitter-Diggs-Ego}
\end{figure}
\subsubsection{What is the speed and reach of the influence spread?}
Figures~\ref{fig:FB-ZKC-LC-LF-PF-FbG-FbP-FbPF-FbT},~\ref{fig:MFb-DHR-DRO-DHU-MG-L-FbAR-FbA} and~\ref{fig:Youtube-Twitter-Diggs-Ego} compare the global-level driver based seed selection methods with the community-level driver based seed selection methods. We grouped the networks by size and density to analyse the results effectively.
From Figure~\ref{fig:FB-ZKC-LC-LF-PF-FbG-FbP-FbPF-FbT}, we see the networks with higher densities: FB (0.01), Z (0.13), LC (0.003), LF (0.003), PF (0.007), FbG (0.003), FbP (0.002), FbPF (0.001) and FbT (0.002). Overall, in these networks there is less difference between the percentages of nodes influenced after 100 iterations, which indicates that when network densities are higher, the seed selection methods are more likely to spread influence quickly. The FB network has a density of 0.01, greater than that of every other network in this group except Z, which has the highest density (0.13). Comparing the plots, we also see that the DDCBC method performs markedly better than the rest of the methods in most of the networks.
From Figure~\ref{fig:MFb-DHR-DRO-DHU-MG-L-FbAR-FbA}, we see the networks with densities ranging from 0.0001 to 0.0009: MFb (0.0006), DHR (0.0003), DRO (0.0001), DHU (0.0001), MG (0.0004), L (0.0009), FbAR (0.0006) and FbA (0.0009). In these lower-density networks, the gain of the driver-community based methods over the driver based methods is more prominent. This means that the density of the network plays an important role in determining the total number of nodes influenced.
From Figure~\ref{fig:Youtube-Twitter-Diggs-Ego}, we see the networks with the lowest densities, ranging from 0.000002 to 0.00014: Youtube (0.000004), Twitter (0.00012), Diggs (0.000002) and Ego (0.00014). In these networks, we see a large gap between the DDCBC method and the rest of the methods. This shows that even in the lowest-density networks, locally constructed communities tend to be denser than the network as a whole, as can be seen from Table~\ref{table:Gainoverothermethodssocialnetworks}. The average community density of Youtube was calculated to be 0.000012±0.04; compared with the overall network density of 0.000004, the communities are notably denser. That is why, even in these networks, the driver-community based methods, and especially the DDCBC method, outperform the driver based methods.
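The network densities quoted above can be reproduced from the node counts (N) and edge counts (E) in Table~\ref{table:Gainoverothermethodssocialnetworks}, assuming the standard density of an undirected network, $2E/\bigl(N(N-1)\bigr)$:

```python
def network_density(n_nodes, n_edges):
    """Fraction of the n(n-1)/2 possible undirected edges present."""
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

# N and E taken from the results table:
print(round(network_density(4039, 88234), 2))    # FB      -> 0.01
print(round(network_density(23371, 32832), 5))   # Twitter -> 0.00012
```

The same formula applied within each detected community yields the average community densities listed in the table caption.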
\subsubsection{How much advantage do community-level driver based seed selection methods give?}
From Table~\ref{table:Gainoverothermethodssocialnetworks}, we see the percentage gain that DDCBC has over the other seed selection methods in terms of the number of nodes influenced after 100 iterations with a seed size of 20\%. DDCBC outperforms all of the methods, but the gain over the global-level driver based methods is larger than over the community-level driver based methods. We see this difference mainly because the driver nodes are selected locally and only then ranked; community creation also plays an important role, as the communities are denser than the overall network. The biggest gain, 45.89\%, is achieved by the DDCBC method over the DK method in the FbA network, while the lowest gains are observed in the ZKC network. The reason for the lower gain is that ZKC has the highest network density and the smallest size; in denser networks, we tend to see less gain from the DDCBC method. This again indicates that locally identified communities have denser structures than the overall network, which is why the community-driver based methods combined with the DCB ranking work better than the rest of the methods.
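The gain figures discussed above compare the percentage of nodes influenced by DDCBC with that of each competing method. Since several gain definitions are possible, the percentage-point difference used below is an assumption, for illustration only:

```python
def percentage_point_gain(ddcbc_pct, other_pct):
    """Percentage-point gain of the DDCBC method over another method;
    both arguments are percentages of nodes influenced after the same
    number of iterations."""
    return round(ddcbc_pct - other_pct, 2)

# Hypothetical outcome pair, for illustration only:
print(percentage_point_gain(97.4, 51.51))  # -> 45.89
```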
\section{Conclusion and Future Work}\label{conclusionandfuturework}
This research proposes bringing together methods from the control and influence fields. We explored a research direction at the intersection of both fields, one that addresses research questions from both domains.
We proposed, implemented and compared a set of novel seed selection methods against traditional seed selection methods from the influence domain and driver-based seed selection methods from the area where influence meets control.
In this work, we introduced new seed selection methods that utilise driver nodes in the communities of the networks. The new methods outperformed the existing ones, which opens up a new avenue in the existing research on control methods in complex networks. Our community-driver based methods show that maximum influence can be achieved in fewer iterations and with a comparatively smaller seed set. Moreover, when driver nodes are ranked by a centrality measure combining degree, betweenness and closeness, the selected seed nodes perform much better than when they are ranked by individual centrality measures.
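The combined ranking mentioned above can be sketched in a few lines. The exact DCB combination formula is defined earlier in the paper, so the equal-weight sum of the three normalised centralities used here is an illustrative assumption (pure Python, practical only for small connected graphs):

```python
from collections import deque

def combined_centrality(adj):
    """Sum of normalised degree, closeness and betweenness centrality
    for each node of a connected undirected graph given as
    {node: [neighbours]} (n >= 3 assumed)."""
    n = len(adj)
    deg = {v: len(adj[v]) / (n - 1) for v in adj}
    clo = {}
    bet = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s: distances, shortest-path counts and predecessors.
        dist, sigma = {s: 0}, {v: 0 for v in adj}
        sigma[s] = 1
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:   # w lies one step further out
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        clo[s] = (n - 1) / sum(dist.values())
        # Brandes-style back-propagation of pair dependencies.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bet[w] += delta[w]
    for v in adj:
        # Each unordered pair is counted twice over all sources, so
        # dividing by (n-1)(n-2) normalises betweenness to [0, 1].
        bet[v] /= (n - 1) * (n - 2)
    return {v: deg[v] + clo[v] + bet[v] for v in adj}
```

Driver nodes within each community would then be ranked by their combined score and the top-ranked ones chosen as seeds.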
Work remains to be done on the ranking of driver nodes using other algorithms, e.g., PageRank~\cite{brin1998anatomy}, LeaderRank~\cite{lu2011leaders}, ClusterRank~\cite{chen2013identifying} and K-Shell Decomposition~\cite{liu2015improving}. New methods such as Preferential Matching~\cite{zhang2014structural} can also be used to identify driver nodes and improve the efficiency of the seed selection process.
Another avenue for exploration is the effects of differing influence models, such as the Independent Cascade Model~\cite{duan2009informational}.
\section*{Acknowledgement}
This work was supported in part by the Polish National Science Centre, under Grant no. 2016/21/D/ST6/02408, and in part by the Australian Research Council, Dynamics and Control of Complex Social Networks, under Grant DP190101087.
\bibliographystyle{9-SNMAM}
\section{Fixed-Period Problems: The Sublinear Case}
With this chapter, the preliminaries are over, and we begin the search
for periodic solutions to Hamiltonian systems. All this will be done in
the convex case; that is, we shall study the boundary-value problem
\begin{eqnarray*}
\dot{x}&=&JH' (t,x)\\
x(0) &=& x(T)
\end{eqnarray*}
with $H(t,\cdot)$ a convex function of $x$, going to $+\infty$ when
$\left\|x\right\| \to \infty$.
\subsection{Autonomous Systems}
In this section, we will consider the case when the Hamiltonian $H(x)$
is autonomous. For the sake of simplicity, we shall also assume that it
is $C^{1}$.
We shall first consider the question of nontriviality, within the
general framework of
$\left(A_{\infty},B_{\infty}\right)$-subquadratic Hamiltonians. In
the second subsection, we shall look into the special case when $H$ is
$\left(0,b_{\infty}\right)$-subquadratic,
and we shall try to derive additional information.
\subsubsection{The General Case: Nontriviality.}
We assume that $H$ is
$\left(A_{\infty},B_{\infty}\right)$-sub\-qua\-dra\-tic at infinity,
for some constant symmetric matrices $A_{\infty}$ and $B_{\infty}$,
with $B_{\infty}-A_{\infty}$ positive definite. Set:
\begin{eqnarray}
\gamma :&=&{\rm smallest\ eigenvalue\ of}\ \ B_{\infty} - A_{\infty} \\
\lambda : &=& {\rm largest\ negative\ eigenvalue\ of}\ \
J \frac{d}{dt} +A_{\infty}\ .
\end{eqnarray}
Theorem~\ref{ghou:pre} tells us that if $\lambda +\gamma < 0$, the
boundary-value problem:
\begin{equation}
\begin{array}{rcl}
\dot{x}&=&JH' (x)\\
x(0)&=&x (T)
\end{array}
\end{equation}
has at least one solution
$\overline{x}$, which is found by minimizing the dual
action functional:
\begin{equation}
\psi (u) = \int_{o}^{T} \left[\frac{1}{2}
\left(\Lambda_{o}^{-1} u,u\right) + N^{\ast} (-u)\right] dt
\end{equation}
on the range of $\Lambda$, which is a subspace $R (\Lambda) \subset L^{2}$
with finite codimension. Here
\begin{equation}
N(x) := H(x) - \frac{1}{2} \left(A_{\infty} x,x\right)
\end{equation}
is a convex function, and
\begin{equation}
N(x) \le \frac{1}{2}
\left(\left(B_{\infty} - A_{\infty}\right) x,x\right)
+ c\ \ \ \forall x\ .
\end{equation}
\begin{proposition}
Assume $H'(0)=0$ and $ H(0)=0$. Set:
\begin{equation}
\delta := \liminf_{x\to 0} 2 N (x) \left\|x\right\|^{-2}\ .
\label{eq:one}
\end{equation}
If $\gamma < - \lambda < \delta$,
the solution $\overline{x}$ is non-zero:
\begin{equation}
\overline{x} (t) \ne 0\ \ \ \forall t\ .
\end{equation}
\end{proposition}
\begin{proof}
Condition (\ref{eq:one}) means that, for every
$\delta ' > \delta$, there is some $\varepsilon > 0$ such that
\begin{equation}
\left\|x\right\| \le \varepsilon \Rightarrow N (x) \le
\frac{\delta '}{2} \left\|x\right\|^{2}\ .
\end{equation}
It is an exercise in convex analysis, into which we shall not go, to
show that this implies that there is an $\eta > 0$ such that
\begin{equation}
\left\|y\right\| \le \eta
\Rightarrow N^{\ast} (y) \le \frac{1}{2\delta '}
\left\|y\right\|^{2}\ .
\label{eq:two}
\end{equation}
Since $u_{1}$, an eigenfunction of $\Lambda$ associated with the eigenvalue $\lambda$, is a smooth function, we will have
$\left\|hu_{1}\right\|_\infty \le \eta$
for $h$ small enough, and inequality (\ref{eq:two}) will hold,
yielding thereby:
\begin{equation}
\psi (hu_{1}) \le \frac{h^{2}}{2}
\frac{1}{\lambda} \left\|u_{1} \right\|_{2}^{2} + \frac{h^{2}}{2}
\frac{1}{\delta '} \left\|u_{1}\right\|^{2}\ .
\end{equation}
If we choose $\delta '$ close enough to $\delta$, the quantity
$\left(\frac{1}{\lambda} + \frac{1}{\delta '}\right)$
will be negative, and we end up with
\begin{equation}
\psi (hu_{1}) < 0\ \ \ \ \ {\rm for}\ \ h\ne 0\ \ {\rm small}\ .
\end{equation}
On the other hand, we check directly that $\psi (0) = 0$. This shows
that 0 cannot be a minimizer of $\psi$, not even a local one.
So $\overline{u} \ne 0$, and hence
$\overline{x} = \Lambda_{o}^{-1} \overline{u} \ne \Lambda_{o}^{-1} (0) = 0$. \qed
\end{proof}
\begin{corollary}
Assume $H$ is $C^{2}$ and
$\left(a_{\infty},b_{\infty}\right)$-subquadratic at infinity. Let
$\xi_{1},\allowbreak\dots,\allowbreak\xi_{N}$ be the
equilibria, that is, the solutions of $H' (\xi ) = 0$.
Denote by $\omega_{k}$
the smallest eigenvalue of $H'' \left(\xi_{k}\right)$, and set:
\begin{equation}
\omega : = {\rm Min\,} \left\{\omega_{1},\dots,\omega_{N}\right\}\ .
\end{equation}
If:
\begin{equation}
\frac{T}{2\pi} b_{\infty} <
- E \left[- \frac{T}{2\pi}a_{\infty}\right] <
\frac{T}{2\pi}\omega
\label{eq:three}
\end{equation}
then minimization of $\psi$ yields a non-constant $T$-periodic solution
$\overline{x}$.
\end{corollary}
We recall once more that by the integer part $E [\alpha ]$ of
$\alpha \in \bbbr$, we mean the $a\in \bbbz$
such that $a< \alpha \le a+1$. For instance,
if we take $a_{\infty} = 0$, Corollary 2 tells
us that $\overline{x}$ exists and is
non-constant provided that:
\begin{equation}
\frac{T}{2\pi} b_{\infty} < 1 < \frac{T}{2\pi}\omega
\end{equation}
or
\begin{equation}
T\in \left(\frac{2\pi}{\omega},\frac{2\pi}{b_{\infty}}\right)\ .
\label{eq:four}
\end{equation}
\begin{proof}
The spectrum of $\Lambda$ is $\frac{2\pi}{T} \bbbz +a_{\infty}$. The
largest negative eigenvalue $\lambda$ is given by
$\frac{2\pi}{T}k_{o} +a_{\infty}$,
where
\begin{equation}
\frac{2\pi}{T}k_{o} + a_{\infty} < 0
\le \frac{2\pi}{T} (k_{o} +1) + a_{\infty}\ .
\end{equation}
Hence:
\begin{equation}
k_{o} = E \left[- \frac{T}{2\pi} a_{\infty}\right] \ .
\end{equation}
The condition $\gamma < -\lambda < \delta$ now becomes:
\begin{equation}
b_{\infty} - a_{\infty} <
- \frac{2\pi}{T} k_{o} -a_{\infty} < \omega -a_{\infty}
\end{equation}
which is precisely condition (\ref{eq:three}).\qed
\end{proof}
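As a concrete illustration of condition (\ref{eq:four}), with sample values of our own choosing rather than ones from the text: if $b_{\infty} = 1$ and $\omega = 2$, then
\[
T\in \left(\frac{2\pi}{\omega},\frac{2\pi}{b_{\infty}}\right)
= \left(\pi , 2\pi\right)\ ,
\]
so every period $T$ strictly between $\pi$ and $2\pi$ yields a non-constant $T$-periodic solution $\overline{x}$.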
\begin{lemma}
Assume that $H$ is $C^{2}$ on $\bbbr^{2n} \setminus \{ 0\}$ and
that $H'' (x)$ is non-de\-gen\-er\-ate for any $x\ne 0$. Then any local
minimizer $\widetilde{x}$ of $\psi$ has minimal period $T$.
\end{lemma}
\begin{proof}
We know that $\widetilde{x}$, or
$\widetilde{x} + \xi$ for some constant $\xi
\in \bbbr^{2n}$, is a $T$-periodic solution of the Hamiltonian system:
\begin{equation}
\dot{x} = JH' (x)\ .
\end{equation}
There is no loss of generality in taking $\xi = 0$. So
$\psi (x) \ge \psi (\widetilde{x} )$
for all $x$ in some neighbourhood of $\widetilde{x}$ in
$W^{1,2} \left(\bbbr / T\bbbz ; \bbbr^{2n}\right)$;
in other words, the index of $\widetilde{x}$ as a local minimizer of
$\psi$ is zero. But this index is precisely the index
$i_{T} (\widetilde{x} )$ of the $T$-periodic
solution $\widetilde{x}$ over the interval
$(0,T)$, as defined in Sect.~2.6. So
\begin{equation}
i_{T} (\widetilde{x} ) = 0\ .
\label{eq:five}
\end{equation}
Now if $\widetilde{x}$ has a lower period, $T/k$ say,
we would have, by Corollary 31:
\begin{equation}
i_{T} (\widetilde{x} ) =
i_{kT/k}(\widetilde{x} ) \ge
ki_{T/k} (\widetilde{x} ) + k-1 \ge k-1 \ge 1\ .
\end{equation}
This would contradict (\ref{eq:five}), and thus cannot happen.\qed
\end{proof}
\paragraph{Notes and Comments.}
The results in this section are a
refined version of \cite{smit:wat};
the minimality result of Proposition
14 was the first of its kind.
To understand the nontriviality conditions, such as the one in formula
(\ref{eq:four}), one may think of a one-parameter family
$x_{T}$, $T\in \left(2\pi\omega^{-1}, 2\pi b_{\infty}^{-1}\right)$
of periodic solutions, $x_{T} (0) = x_{T} (T)$,
with $x_{T}$ going away to infinity when $T\to 2\pi \omega^{-1}$,
which is the period of the linearized system at 0.
\begin{theorem} [Ghoussoub-Preiss]\label{ghou:pre}
Assume $H(t,x)$ is
$(0,\varepsilon )$-subquadratic at
infinity for all $\varepsilon > 0$, and $T$-periodic in $t$:
\begin{equation}
H (t,\cdot )\ \ \ \ \ {\rm is\ convex}\ \ \forall t
\end{equation}
\begin{equation}
H (\cdot ,x)\ \ \ \ \ {\rm is}\ \ T{\rm -periodic}\ \ \forall x
\end{equation}
\begin{equation}
H (t,x)\ge n\left(\left\|x\right\|\right)\ \ \ \ \
{\rm with}\ \ n (s)s^{-1}\to \infty\ \ {\rm as}\ \ s\to \infty
\end{equation}
\begin{equation}
\forall \varepsilon > 0\ ,\ \ \ \exists c\ :\
H(t,x) \le \frac{\varepsilon}{2}\left\|x\right\|^{2} + c\ .
\end{equation}
Assume also that $H$ is $C^{2}$, and $H'' (t,x)$ is positive definite
everywhere. Then there is a sequence $x_{k}$, $k\in \bbbn$, of
$kT$-periodic solutions of the system
\begin{equation}
\dot{x} = JH' (t,x)
\end{equation}
such that, for every $k\in \bbbn$, there is some $p_{o}\in\bbbn$ with:
\begin{equation}
p\ge p_{o}\Rightarrow x_{pk} \ne x_{k}\ .
\end{equation}
\qed
\end{theorem}
\begin{example} [{{\rm External forcing}}]
Consider the system:
\begin{equation}
\dot{x} = JH' (x) + f(t)
\end{equation}
where the Hamiltonian $H$ is
$\left(0,b_{\infty}\right)$-subquadratic, and the
forcing term is a distribution on the circle:
\begin{equation}
f = \frac{d}{dt} F + f_{o}\ \ \ \ \
{\rm with}\ \ F\in L^{2} \left(\bbbr / T\bbbz; \bbbr^{2n}\right)\ ,
\end{equation}
where $f_{o} : = T^{-1}\int_{o}^{T} f (t) dt$. For instance,
\begin{equation}
f (t) = \sum_{k\in \bbbn} \delta_{k} \xi\ ,
\end{equation}
where $\delta_{k}$ is the Dirac mass at $t= kT$ and
$\xi \in \bbbr^{2n}$ is a
constant, fits the prescription. This means that the system
$\dot{x} = JH' (x)$ is being excited by a
series of identical shocks at interval $T$.
\end{example}
\begin{definition}
Let $A_{\infty} (t)$ and $B_{\infty} (t)$ be symmetric
operators in $\bbbr^{2n}$, depending continuously on
$t\in [0,T]$, such that
$A_{\infty} (t) \le B_{\infty} (t)$ for all $t$.
A Borelian function
$H: [0,T]\times \bbbr^{2n} \to \bbbr$
is called
$\left(A_{\infty} ,B_{\infty}\right)$-{\it subquadratic at infinity}
if there exists a function $N(t,x)$ such that:
\begin{equation}
H (t,x) = \frac{1}{2} \left(A_{\infty} (t) x,x\right) + N(t,x)
\end{equation}
\begin{equation}
\forall t\ ,\ \ \ N(t,x)\ \ \ \ \
{\rm is\ convex\ with\ respect\ to}\ \ x
\end{equation}
\begin{equation}
N(t,x) \ge n\left(\left\|x\right\|\right)\ \ \ \ \
{\rm with}\ \ n(s)s^{-1}\to +\infty\ \ {\rm as}\ \ s\to +\infty
\end{equation}
\begin{equation}
\exists c\in \bbbr\ :\ \ \ H (t,x) \le
\frac{1}{2} \left(B_{\infty} (t) x,x\right) + c\ \ \ \forall x\ .
\end{equation}
If $A_{\infty} (t) = a_{\infty} I$ and
$B_{\infty} (t) = b_{\infty} I$, with
$a_{\infty} \le b_{\infty} \in \bbbr$,
we shall say that $H$ is
$\left(a_{\infty},b_{\infty}\right)$-subquadratic
at infinity. As an example, the function
$\left\|x\right\|^{\alpha}$, with
$1\le \alpha < 2$, is $(0,\varepsilon )$-subquadratic at infinity
for every $\varepsilon > 0$. Similarly, the Hamiltonian
\begin{equation}
H (t,x) = \frac{1}{2} k \left\|x\right\|^{2} +\left\|x\right\|^{\alpha}
\end{equation}
is $(k,k+\varepsilon )$-subquadratic for every $\varepsilon > 0$.
Note that, if $k<0$, it is not convex.
\end{definition}
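A routine verification, which we spell out for the reader's convenience: the function $\left\|x\right\|^{\alpha}$ with $1\le \alpha < 2$ satisfies the upper bound in the definition because, for every $\varepsilon > 0$,
\[
\left\|x\right\|^{\alpha} \le \frac{\varepsilon}{2}\left\|x\right\|^{2}
+ c_{\varepsilon}\ ,
\qquad
c_{\varepsilon} := \sup_{s\ge 0}\left(s^{\alpha}
- \frac{\varepsilon}{2}\,s^{2}\right)\ ,
\]
and the supremum is finite because $s^{\alpha} - \frac{\varepsilon}{2}s^{2} \to -\infty$ as $s\to\infty$ whenever $\alpha < 2$.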
\paragraph{Notes and Comments.}
The first results on subharmonics were
obtained by Rabinowitz in \cite{fo:kes:nic:tue}, who showed the existence of
infinitely many subharmonics both in the subquadratic and superquadratic
case, with suitable growth conditions on $H'$. Again the duality
approach enabled Clarke and Ekeland in \cite{may:ehr:stein} to treat the
same problem in the convex-subquadratic case, with growth conditions on
$H$ only.
Recently, Michalek and Tarantello (see \cite{fost:kes} and \cite{czaj:fitz})
have obtained lower bounds on the number of subharmonics of period $kT$,
based on symmetry considerations and on pinching estimates, as in
Sect.~5.2 of this article.
\section{White Paper Information}
Correspondence can be directed to Keaton Bell: \href{mailto:bell@mps.mpg.de}{bell@mps.mpg.de}
\begin{enumerate}
\item {\bf Science Category:} Exploring the Transient Optical Sky
\item {\bf Survey Type Category:} Deep Drilling fields, mini survey
\item {\bf Observing Strategy Category:} a specific observing strategy to enable specific time domain science,
that is relatively agnostic to where the telescope is pointed
\end{enumerate}
\clearpage
\section{Scientific Motivation}
In its main Wide-Fast-Deep survey, LSST will achieve a thorough census of the variable sky on timescales exceeding the minimum typical revisit time of $\sim1$\,hr. The frequency of revisits in the main survey is limited to achieve LSST's broad science goals. However, LSST will observe each of a few ``Deep Drilling Fields'' (DDFs) $\approx 5$ times more than the typical 825 visits for the 10-year main survey. In addition to achieving higher co-added survey depths, the additional observations will better sample time domain variability in these specific fields. While some of the DDF locations and filter distributions have been discussed or selected, the detailed drilling cadences have not been established. We propose that acquiring the additional observations of the DDFs in continuous, multi-hour sequences will extend the completeness of LSST's survey of the transient and optical sky down to $\sim$minute timescales while decreasing observational overhead of the DDF mini surveys through reduced slewing.
There are many interesting astrophysical processes that are known to cause photometric variations on $\sim$minute timescales. For example, some classes of pulsating stars, including pulsating white dwarfs \citep[e.g.,][]{Mukadam2006}, pulsating hot subdwarfs \citep[sdBs;][]{Kilkenny2007}, blue large-amplitude pulsators \citep[BLAPs;][]{Pietrukowicz2017}, and rapidly oscillating Ap stars \citep{Kurtz1990}, have typical periods of $\sim10$\,minutes.
These variations will be detectable given LSST's photometric precision, but severe undersampling of the Wide-Fast-Deep survey will hamper accurate period determinations that would be useful for asteroseismology or other in-depth analyses of individual objects. Unless revisits occur more than twice per intrinsic period, these signals will be obscured by the effects of aliasing.
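As an illustrative sketch (the numbers here are hypothetical, not drawn from any survey simulation), the mapping from a true short period to a spurious aliased period under sparse sampling can be computed directly:

```python
# Minimal sketch: how undersampling aliases a short-period signal.
# All numbers below are illustrative, not LSST simulation outputs.

def alias_frequency(f_true_hz, sample_interval_s):
    """Apparent frequency of a sinusoid sampled every sample_interval_s.

    The alias falls in [0, f_nyq], where f_nyq = 1 / (2 * sample_interval_s).
    """
    f_sample = 1.0 / sample_interval_s
    n = round(f_true_hz / f_sample)       # nearest multiple of f_sample
    return abs(f_true_hz - n * f_sample)

# An 11-minute pulsator observed once per hour:
f_alias = alias_frequency(1.0 / 660.0, 3600.0)
print(1.0 / f_alias)   # apparent period: 7920 s, i.e. about 2.2 hours
```

When revisits occur more than twice per intrinsic period, `alias_frequency` returns the true frequency unchanged and no such confusion arises.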
Many transients also have much shorter durations than typical LSST revisits, and these will only be observed at most once per event by the main survey. For example, most flares from M dwarf stars have durations shorter than an hour \citep[e.g.,][]{Moffett1974}, and deep transits of planetary debris or even intact planets around white dwarfs will only last a couple of minutes \citep{Cortes2018,Lund2018}. The continuous cadence proposed here will enable the shapes of such events in the light curves to be recorded and studied in detail. The examples that are recorded at high cadence in the DDFs will serve as guides for interpreting similar transients of objects that are only detected in sparse, individual visits by the main survey.
With the DDF fields receiving $\approx5\times$ more visits than the main survey, the DDF mini-surveys will be sensitive to lower-amplitude photometric variability. With $N$ total visits per field, the noise level in a periodogram and the uncertainty on frequency, amplitude, and phase from least-squares fits to the time series all scale as $\propto 1/\sqrt{N}$ \citep[e.g.,][]{Montgomery1999}. These will all be better by more than a factor of 2 for the DDFs compared to the main survey, making these mini-surveys ``deep'' not only in co-added depth, but also variability amplitude.
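The quoted factor-of-two improvement follows from the $1/\sqrt{N}$ scaling and the visit counts above; our arithmetic, as a quick check:

```python
import math

# Visit counts from the text: DDFs receive ~5x the 825 main-survey visits.
main_survey_visits = 825
ddf_visits = 5 * main_survey_visits

# Periodogram noise and frequency/amplitude/phase uncertainties scale as
# 1/sqrt(N), so the relative improvement for the DDFs is sqrt(N_ddf/N_main):
improvement = math.sqrt(ddf_visits / main_survey_visits)
print(improvement)   # sqrt(5) ~ 2.24, better than a factor of 2
```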
While we advocate for continuous visits lasting at least 2\,hr to fill the gap below the typical main-survey revisit timescale, longer visits are preferred to improve frequency resolution of the data. In practice, for individual observations spaced by $\Delta t$ acquired continuously for sequences of duration $T$, the effective limiting upper Nyquist frequency of observations is $1/(2\Delta t)$, with an effective frequency resolution of $1/T$. This slight simplification ignores the longer total baseline of observations, and while this entire $\approx10$-year baseline technically defines the frequency resolution, in reality the $1/T$ resolution only becomes filled by inaccurate aliases that reach the higher resolution. Variable sources that have multiple periodicities separated by $<1/T$ will be effectively indistinguishable from a single periodicity, and so exacting time domain science---such as probing the interiors of pulsating stars with asteroseismology---will benefit from longer individual visits.
To obtain a high Nyquist frequency, across which higher-frequency intrinsic signals are reflected as inaccurate aliases that are easy to misinterpret, the individual 15-second exposures should be saved and processed for the high-cadence survey. This will yield a Nyquist limit of 0.03125\,Hz, assuming two exposures take 32 seconds to obtain and read out.
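The Nyquist arithmetic above is simple to reproduce (a minimal sketch; the 16\,s spacing assumes one 15\,s exposure plus 1\,s of readout, as in the text):

```python
def nyquist_frequency_hz(sample_spacing_s):
    """Highest unaliased frequency for evenly spaced samples."""
    return 1.0 / (2.0 * sample_spacing_s)

# Saving each individual 15 s exposure (one sample every 16 s):
print(nyquist_frequency_hz(16.0))   # 0.03125 Hz, as quoted in the text

# Saving only the 2x15 s coadds (one sample every 32 s):
print(nyquist_frequency_hz(32.0))   # 0.015625 Hz -- half the limit
```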
Even when limited to the DDFs, LSST will provide a deeper survey of many short-period variables and transients than previous surveys of this type. Shallower examples that have been motivated by similar science and that demonstrate the value of such observations include the OmegaWhite survey on the VLT \citep{Macfarlane2015,Toma2016} and the ``Fast and Furious'' minisurvey covering the Galactic plane with the Zwicky Transient Facility \citep[ZTF;][]{ZTF}. These other efforts have particularly focused on discovering ultracompact binaries, the precise period determination of which will be complicated by aliasing from cycle-count-ambiguities if sparsely sampled. While many of the DDFs will target extragalactic fields and our rapid variability science is primarily motivated by stellar astrophysics, the wide field of view of the LSST images ensures that many stars will be resolved in each of these fields, especially faint white dwarfs that can show many types of $\sim$minute timescale variations. If this proposed cadence is in conflict with specific science criteria of some of the DDFs, application to any subset of the DDFs remains extremely valuable, with preference given to those containing the greatest number of resolved stars. At the least, we urge that a subset of the additional observations of each DDF be obtained in at least one continuous run of 2--4 hours---even these more limited high-cadence observations will allow us to disentangle some of the severe observational frequency aliases that will result from the otherwise sparse sampling.
Although the various Galactic Plane mini surveys being proposed (see white papers by, e.g., Lund et al.; Street et al.) request fewer observations than the DDFs, the concentration of stellar variables in these fields make them particularly valuable for rapid, continuous monitoring (e.g., the ZTF ``Fast and Furious'' mini survey targets the Galactic Plane specifically). While this white paper proposes a specific cadence rather than survey fields, we submit that continuous acquisition of the observations requested by Galactic Plane mini survey proposals should also be considered.
\clearpage
\section{Technical Description}
\subsection{High-level description}
If DDFs receive five times as many exposures as the main survey, the total time exposing each field is $\approx34.4$\,hr. The science requirements for a co-added survey depth in these fields are directly satisfied by the total number of observations, assuming the same average single-visit depth (same average seeing/brightness conditions). In order to obtain an interpretable record of short-timescale photometric variability in these fields, these observations would ideally be acquired in continuous sequences lasting 2--4 hours each. This strategy yields a high Nyquist frequency with minimal aliasing. Saving and processing the data from each of the 15-second exposures will produce a Nyquist limit twice that attained if the images are only stored from $2\times15$-second pairs.
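The $\approx34.4$\,hr figure is straightforward bookkeeping (our arithmetic, using the visit counts quoted above):

```python
# Total open-shutter time per DDF, assuming 5 x 825 standard 2x15 s visits.
visits = 5 * 825
exposure_per_visit_s = 2 * 15
total_hours = visits * exposure_per_visit_s / 3600.0
print(total_hours)   # 34.375 -> the ~34.4 hr quoted in the text
```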
\subsection{Footprint -- pointings, regions and/or constraints}
We propose only a modification of the cadence for the DDFs while allowing the selection of those pointings to be decided by the nearly independent science considerations of which fields benefit most from higher co-added survey depth. Our proposed drilling cadence could be limited to single extended 2--4 hour runs for DDFs where obtaining all observations in a continuous mode would conflict with other cadence requirements that were integral to the selection of these fields---this would still enable much high-speed variability science in these fields, though with less sensitivity to low-amplitude variations in fainter stars. Our request for high-cadence observations is motivated primarily for sampling stellar variability; if only a subset of DDFs can be observed at continuous cadence, preference should be given to Galactic and LMC/SMC fields, though even extragalactic fields will include a valuable number of foreground stars. Adopting a continuous cadence for the Galactic Plane mini surveys under consideration would also yield a tremendous record of rapid stellar variability in the Galaxy.
\subsection{Image quality}
Suitable image quality to achieve the desired DDF co-added depths will be sufficient for the proposed cadence.
\subsection{Individual image depth and/or sky brightness}
The sources brighter than the limiting magnitudes of the individual images will be the targets that can be directly analyzed in the high-cadence observations. It will be especially informative to compare short-timescale variability in different bandpasses. To maximize the number of sources with high-speed coverage in most filters, observations in bandpasses with generally lower total throughput should be preferred in dark sky conditions. We note that our requested readout of the 15-second exposures will not be individually as deep as the $2\times15$\,s coadds, but they can be combined later.
\subsection{Co-added image depth and/or total number of visits}
We anticipate that the co-added depth will be constrained in the selection of DDFs. A greater number of total observations decreases the noise floor of periodograms, enabling lower-amplitude variability to be detected.
\subsection{Number of visits within a night}
For 2--4 hours of continuous coverage, each sequence within a given night will consist of roughly 450--900 15-second exposures (assuming 1-second overhead for each). This sequence length provides decent frequency resolution ($\approx$70--140\,$\mu$Hz), but runs as short as 1 hour would provide coverage of the frequency range below the typical Wide-Fast-Deep survey revisit timescales. Longer individual runs are preferred to yield better frequency resolution for studying multi-periodic variables and more precise measurements of $\lesssim1$-hour periods.
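The per-sequence numbers above can be reproduced directly (our arithmetic, assuming 15\,s exposures with 1\,s overhead each):

```python
cadence_s = 15.0 + 1.0   # one 15 s exposure plus ~1 s overhead

for hours in (2, 4):
    duration_s = hours * 3600.0
    n_exposures = duration_s / cadence_s
    resolution_uhz = 1e6 / duration_s   # effective delta_f = 1/T, in microhertz
    print(hours, n_exposures, resolution_uhz)
# 2 hr -> 450 exposures, ~139 muHz; 4 hr -> 900 exposures, ~69 muHz
```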
\subsection{Distribution of visits over time}
The total time spent exposing and reading out images for each DDF can be divided into some number of multi-hour sequences. During a visit, the exposures should be made continuously back-to-back, but we do not define a preferred timescale for spacing the revisits, and these will perhaps be different for each field depending on the reason for its selection. To reliably obtain the desired sequence lengths, these should be initiated only when the weather forecasts predict multiple hours of observable conditions.
\subsection{Filter choice}
Depending on the science motivation for selecting each field, the distributions of filters may be different. Changing filters occasionally during a sequence of exposures would not significantly decrease the Nyquist frequency and should be done as required by the mini survey design. \citet{VanderPlas2015} present analysis tools for accounting for observations made in multiple bandpasses when calculating a periodogram. However, longer segments of observations in individual filters are preferred, and with a limited number of allowed filter changes over the 10-year survey, limiting filter changes here may create a bit more flexibility for additional filter changes in other surveys.
\subsection{Exposure constraints}
Access to the individual 15-second exposures will maximize the effective Nyquist frequency below which intrinsic frequencies of variability can be accurately recovered.
\subsection{Other constraints}
No other constraints.
\subsection{Estimated time requirement}
The time required to observe each DDF will mostly be determined by the number of observations required to reach the target co-added depths. However, this observing strategy will save time on overheads by requiring fewer slews to the DDFs. If each field is monitored for a total of $\approx39$\,hr in $10\times\approx4$-hour sequences, the slew overhead will only be $10\times120 = 1200$\,sec, making LSST more efficient overall (compared to 99000\,sec of slewing to each DDF that might otherwise be observed by individual $2\times15$ second visits).
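The overhead comparison can be reproduced as follows (our arithmetic; the 120\,s slew/settle time per pointing is the figure used in the text):

```python
slew_s = 120   # assumed slew/settle time per pointing, as in the text

# Ten ~4-hour continuous sequences per DDF require only ten slews:
continuous_overhead_s = 10 * slew_s
# The quoted 99000 s corresponds to one slew for each of 825 visits:
per_visit_overhead_s = 825 * slew_s
print(continuous_overhead_s, per_visit_overhead_s)   # 1200 99000
```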
\begin{table}[ht]
\centering
\begin{tabular}{l|l}
\toprule
Properties & Importance \\
\midrule
Image quality & 2 \\
Sky brightness & 2 \\
Individual image depth & 2 \\
Co-added image depth & 2 \\
Number of exposures in a visit & 1 \\
Number of visits (in a night) & 1 \\
Total number of visits & 1 \\
Time between visits (in a night) & 1 \\
Time between visits (between nights) & 3 \\
Long-term gaps between visits & 3 \\
Other & 3 \\
\bottomrule
\end{tabular}
\caption{{\bf Constraint Rankings:} Summary of the relative importance of various survey strategy constraints. Ranked from 1=very important, 2=somewhat important, 3=not important.}
\label{tab:obs_constraints}
\end{table}
\subsection{Technical trades}
To aid in attempts to combine this proposed survey modification with others, the following questions are addressed:
\begin{enumerate}
{\it \item What is the effect of a trade-off between your requested survey footprint (area) and requested co-added depth or number of visits?}
The total number of visits per DDF may be decreased for an increased number of DDFs. Many hours of continuous coverage, broken up into a few 2--4 hour segments, would be sufficient for studying $\lesssim1$-hour timescale variability. More fields with fewer overall exposures would be preferred for increasing the number of targets with high-speed coverage.
{\it \item If not requesting a specific timing of visits, what is the effect of a trade-off between the uniformity of observations and the frequency of observations in time? e.g. a `rolling cadence' increases the frequency of visits during a short time period at the cost of fewer visits the rest of the time, making the overall sampling less uniform.}
We are requesting a specific timing of visits, with uniform-cadence, high-frequency observations.
{\it \item What is the effect of a trade-off on the exposure time and number of visits (e.g. increasing the individual image depth but decreasing the overall number of visits)?}
Increased exposure times decrease the Nyquist frequency and increase the risk that aliasing obscures the accurate timescales of variability. However, increasing the individual image depth increases the number of stars with high-cadence coverage. For the science cases identified, total exposures longer than a minute especially risk complicating their interpretation by introducing Nyquist aliases.
{\it \item What is the effect of a trade-off between uniformity in number of visits and co-added depth? Is there any benefit to real-time exposure time optimization to obtain nearly constant single-visit limiting depth?}
Uniformity in exposure time is important so that variability amplitudes are not smoothed by different amounts in different images, complicating interpretation.
{\it \item Are there any other potential trade-offs to consider when attempting to balance this proposal with others which may have similar but slightly different requests?}
The 2--4\,hour durations of individual runs are ideal. Runs of at least 1 hour would cover timescales below the main Wide-Fast-Deep survey. If DDFs are proposed to be observed with conflicting, less rapid cadences, we suggest that at least 2--4\,hours of the $\approx 39$ hours of observations of each field should be obtained in a continuous mode. We anticipate in particular that high-cadence observations will yield fewer detections of long-timescale transients such as supernovae, which may be part of the scientific motivation for selecting certain DDFs.
\end{enumerate}
\section{Performance Evaluation}
It is fairly straightforward to evaluate the performance of realizations of this DDF cadence for recording short-period variability. We wish for the visits to achieve a high Nyquist frequency, $f_{\rm Nyq} = 1/(2\Delta t)$ where $\Delta t$ is the time between exposures, and also for the sequence durations to provide high effective frequency resolution, $\delta f= 1/T$, where $T$ is the total duration of each visit. We will develop a MAF metric that can quantify comparatively how well different simulated survey strategies probe the short-period variability regime in the DDFs. The product of the Nyquist frequency, the inverse of the effective frequency resolution, and signal-to-noise in the periodogram is a useful value to maximize $\propto T\sqrt{N} /\Delta t$, where $N$ is the number of observations obtained in all of the continuous observing sequences. The scientific yield for rapid stellar variables will also be proportional to the number of stars in each field with magnitudes below the typical single-visit depths, which could also factor into the decision of what percentage of the exposures of each DDF field (and the Galactic Plane mini survey) to obtain in a continuous mode.
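A minimal sketch of such a figure of merit (illustrative only; the function name and comparison numbers are ours, not an existing MAF metric):

```python
import math

def cadence_merit(seq_duration_s, sample_spacing_s, n_obs):
    """Figure of merit ~ T * sqrt(N) / dt (larger is better).

    Rewards long sequences (fine 1/T frequency resolution), tight
    spacing (high Nyquist frequency), and many samples (periodogram S/N).
    """
    return seq_duration_s * math.sqrt(n_obs) / sample_spacing_s

# Hypothetical comparison: ten 4-hr sequences of individual 16 s
# exposures versus twenty 2-hr sequences saving only 32 s coadds.
merit_a = cadence_merit(4 * 3600, 16, 10 * (4 * 3600 // 16))
merit_b = cadence_merit(2 * 3600, 32, 20 * (2 * 3600 // 32))
print(merit_a > merit_b)   # True
```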
\section{Special Data Processing}
Saving and processing the individual 15-second exposures is important for attaining a high Nyquist frequency, though this strategy will still yield highly valuable records of rapid photometric variables and transients if only the $2\times15$-second pairs can be processed.
\section{Acknowledgements}
This work was developed within the Transients and Variable Stars Science Collaboration (TVS) and the authors acknowledge the support of TVS in the preparation of this paper.
\section{References}
\begin{itemize}
\bibitem[Bellm(2014)]{ZTF} Bellm, E.\ 2014, The Third Hot-wiring the Transient Universe Workshop, 27
\bibitem[Cortes \& Kipping(2018)]{Cortes2018} Cortes, J., \& Kipping, D.~M.\ 2018, arXiv:1810.00776
\bibitem[Kilkenny(2007)]{Kilkenny2007} Kilkenny, D.\ 2007, Communications in Asteroseismology, 150, 234
\bibitem[Kurtz(1990)]{Kurtz1990} Kurtz, D.~W.\ 1990, ARA\&A, 28, 607
\bibitem[Lund et al.(2018)]{Lund2018} Lund, M.~B., Pepper, J.~A., Shporer, A., \& Stassun, K.~G.\ 2018, arXiv:1809.10900
\bibitem[Macfarlane et al.(2015)]{Macfarlane2015} Macfarlane, S.~A., Toma, R., Ramsay, G., et al.\ 2015, MNRAS, 454, 507
\bibitem[Moffett(1974)]{Moffett1974} Moffett, T.~J.\ 1974, ApJS, 29, 1
\bibitem[Montgomery \& Odonoghue(1999)]{Montgomery1999} Montgomery, M.~H., \& Odonoghue, D.\ 1999, Delta Scuti Star Newsletter, 13, 28
\bibitem[Mukadam et al.(2006)]{Mukadam2006} Mukadam, A.~S., Montgomery, M.~H., Winget, D.~E., Kepler, S.~O., \& Clemens, J.~C.\ 2006, ApJ, 640, 956
\bibitem[Pietrukowicz et al.(2017)]{Pietrukowicz2017} Pietrukowicz, P., Dziembowski, W.~A., Latour, M., et al.\ 2017, Nature Astronomy, 1, 0166
\bibitem[Toma et al.(2016)]{Toma2016} Toma, R., Ramsay, G., Macfarlane, S., et al.\ 2016, MNRAS, 463, 1099
\bibitem[VanderPlas \& Ivezi{\'c}(2015)]{VanderPlas2015} VanderPlas, J.~T., \& Ivezi{\'c}, {\v Z}.\ 2015, ApJ, 812, 18
\end{itemize}
\end{document}
\section{Introduction} \label{Sec:Introduction}
Numerical methods for the compressible Euler equations typically employ upwind biasing of the spatial derivatives. Two common techniques for achieving this
are approximate Riemann solvers \cite{roe1981} and flux vector splitting \cite{steger1981}. In addition to
accounting for the direction of propagation of waves, these techniques often (though not always) introduce ``natural dissipation'' that stabilises the numerical method. However, in the
presence of stationary or moving shocks, this dissipation leads to smearing of shocks, post-shock oscillations, and other errors in the numerical solution.
Post-shock oscillations and the mass conservation (or momentum spike) error have been reported in a number of papers in the literature.
For slowly moving shocks, Roberts \cite{roberts1990} investigated the source of the error and evaluated the performance of the Roe and Osher fluxes with respect to these errors.
Jin et al. \cite{jin1996} explained the source of the post-shock oscillation errors from the perspective of smearing due to numerical viscosity. Lin \cite{lin1995} proposed
a modification to
the Roe flux to suppress the post-shock oscillations and also reported that the error in the solution depends on the direction of motion of the shock.
Arora et al. \cite{arora1997} explained the cause of the post-shock oscillations, remarked that this problem may be unavoidable for shock-capturing schemes without
a significant increase in computational effort, and suggested some ways to overcome these problems.
Xu \cite{xu1999} asked the question ``Does a perfect Riemann solver exist?''. Xu analysed the dissipation mechanism
in the Godunov scheme, which consists of a `gas evolution stage' for the numerical fluxes across a cell interface and a `projection stage' for the reconstruction of a constant
state inside each cell. Xu remarked that the numerical dissipation is provided solely by the projection stage, and that to compute numerical solutions with discontinuities,
explicit dissipation must be added in the gas evolution stage. We refer to \cite{stiriba2003, johnsen2013, kitamura2019} for recent analyses of the post-shock
oscillations problem and other `shock anomalies'.
The momentum spike, or mass flux error, in steady-state numerical solutions with shocks is another such shock anomaly. Barth \cite{barth1989} reported that momentum spikes with
errors as high as 40\% can occur in numerical solutions.
When the flux splitting or approximate Riemann solver used in the numerical method smears the shock, an error in the mass conservation equation
is generally introduced. Jin et al. \cite{jin1996} show that this is similar to adding dissipation to the mass conservation equation.
In this paper we study the post-shock oscillation error and the mass conservation error. We define invariants across a moving shock, modelled using the one-dimensional Euler
equations, and quantify the error in the numerical solution based on the invariant associated with mass conservation.
We compare the performance of different numerical flux functions, namely, the Roe flux \cite{roe1981}, the Roe flux with the Harten-Hyman 2 fix \cite[p. 266]{harten1983},
Osher's flux with P-ordering \cite[Section 12.3.1, p. 393]{toro2013}, Osher's flux with O-ordering \cite[Section 12.3.2, p. 397]{toro2013}, the AUSM\textsuperscript{+}-up
flux \cite{liou2006} and the global Lax-Friedrichs flux.
We compare the errors in
the numerical solutions obtained using these numerical flux functions and numerical methods with different formal orders of accuracy. We show that no single
flux function consistently performs better than the others across different shock speeds, shock movement directions and orders of accuracy of the numerical methods.
Later, for steady-state solutions with shocks, we demonstrate
the error in mass conservation (the `mass conservation error' or `mass flux error') with the help of test problems whose solutions contain shocks,
for the one-dimensional Euler equations, quasi-one-dimensional Euler equations and two-dimensional Euler equations. We show how this mass flux error
varies with different parameters, such as the formal order of accuracy of the numerical method.
It is known that for the one-dimensional Euler equations, less dissipative fluxes like the Roe flux can capture a normal shock without smearing or mass
conservation error.
This changes with the introduction of viscous fluxes. Even at large Reynolds numbers, where the viscous fluxes are comparatively small in magnitude, the shock gets dissipated, and this,
coupled with the Roe flux, leads to significant errors in mass conservation. We demonstrate this for the one-dimensional viscous fluid flow equations (Newtonian, Navier-Stokes)
for the normal shock problem. We show that one way to mitigate this error is to use a mesh fine enough to resolve the shock.
We indicate that an efficient way of resolving the flow near a shock is to use multiple overset meshes, with which shocks can be captured with
less error, and demonstrate this using the problem of flow through a variable area duct. We also show the connection between the mass conservation error and carbuncle formation, using the problem of flow over a cylinder.
We show a way to cure the carbuncle by refining the mesh near the shock using multiple overset meshes and present preliminary results.
In this paper, we use three numerical methods, namely, the finite volume method, the Shu-Osher conservative finite difference scheme and the Discontinuous Galerkin method.
For the high-order finite
volume and finite difference methods, we use high-order WENO \cite{jiang1996} or linear reconstruction.
The mass conservation error and post-shock oscillations appear to be essentially independent of the underlying numerical method. A comparison of the slight differences among the solutions obtained using these methods is presented wherever it is of interest.
For the discretisation of time derivatives, we use the TVD-RK3 method.
The rest of the paper is organised as follows. In section \ref{Sec:NumMethod}, we give a brief description of the finite volume method, the Shu-Osher
conservative finite difference method, high-order WENO and linear reconstructions, the Discontinuous Galerkin method with the simple WENO limiter, the TVD-RK3 method,
and the different numerical fluxes and flux splittings. We also give labels for the different numerical methods used. In section \ref{Sec:Verification}, the implementations
of the WENO and DG schemes are verified
using test problems of the Burgers equation with a source term and the isentropic Euler vortex. In section \ref{Sec:postShockOscillations} we compare the performance of different
numerical flux functions and numerical methods of different orders of accuracy for moving shock problems. We show that there are certain problem parameters for which
the Roe flux produces less error than the Osher flux. We underscore the importance of characteristic-wise reconstruction by giving examples of problems for which
component-wise reconstruction leads to `NaN's in the computations.
In section \ref{Sec:massFluxErr} we show how the mass flux error varies in numerical solutions with stationary shocks,
for different problems and different numerical methods. We explain the cause of the mass flux error, indicate a technique to mitigate it and demonstrate the
technique on two problems. In section \ref{Sec:refinementAndOverset}, we show that refining near the shock using multiple overset meshes mitigates the mass flux error, and demonstrate this using two levels of
overset meshes (one mesh, overset with a finer mesh, which is in turn overset with a still finer mesh) for the problem of flow through a variable area duct with a normal
shock. We also show the link between the mass conservation error and carbuncle formation, and that the carbuncle can be cured by refining near the shock using an overset mesh with two levels of refinement.
We end the paper with concluding remarks in section \ref{Sec:Conclusions}.
\section{ Numerical methods }\label{Sec:NumMethod}
In this section we describe the numerical methods used, namely, the finite volume method, the Shu-Osher conservative finite difference method with Weighted Essentially Non-oscillatory
(WENO) reconstruction, and the Discontinuous Galerkin method. The methods used for time discretisation, namely the Total Variation Diminishing three-stage Runge-Kutta
(TVD-RK3) and Butcher's six-stage Runge-Kutta time discretisations, are also described, along with the flux splittings and approximate Riemann solvers
used. We start with the description of the finite volume scheme.
\subsection{Finite Volume method}
Consider a hyperbolic conservation law of the form
\begin{equation}
\label{eq:hypConsLaw}
\frac{\partial }{\partial t}Q(x,t) + \frac{\partial}{\partial x}E(Q(x, t)) = 0
\end{equation}
The physical domain is divided into `n' cells, with the $i^{th}$ cell having size $\Delta x$. Integrating equation (\ref{eq:hypConsLaw}) over the $i^{th}$ cell with
boundaries $[x_{i-\frac{1}{2}}, x_{i+\frac{1}{2}}]$ between times $t_n$ and $t_{n+1}$ (separated by $\Delta t$), we get
\begin{equation}
\label{eq:FVEqn}
(\bar{Q}_i^{n+1} - \bar{Q}_i^{n})\Delta x + \int\limits_{t_n}^{t_{n+1}}\left(E(x_{i+\frac{1}{2}}, t) - E(x_{i-\frac{1}{2}}, t)\right)dt = 0,
\end{equation}
where $\bar{Q}^n_i$ is the cell average of Q over the $i^{th}$ cell at time level $t_n$. The time integral of the flux ($E$) is approximated using a numerical flux
function ($\hat{E}$)
\begin{equation}
\label{eq:FluxFunction}
\int\limits_{t_n}^{t_{n+1}}E(x_{i+\frac{1}{2}}, t)dt = \hat{E}(Ql^{n}_{i+\frac{1}{2}}, Qr^{n}_{i+\frac{1}{2}})\Delta t,
\end{equation}
where $Ql^{n}_{i+\frac{1}{2}}, Qr^{n}_{i+\frac{1}{2}}$ are left and right biased approximations to $Q(x_{i+\frac{1}{2}}, t_n)$. These approximations will be obtained using
different high-order reconstruction procedures that will be described later. The flux function can be based on the approximate Riemann solver of Roe, the AUSM splitting, or
others that will be described later. Next we briefly describe the Shu-Osher conservative finite difference method.
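As an illustration of equations (\ref{eq:FVEqn}) and (\ref{eq:FluxFunction}), the following sketch advances cell averages of the inviscid Burgers equation by one forward-Euler step with first-order reconstruction and the Rusanov (local Lax-Friedrichs) flux on a periodic domain; the computations in this paper instead use the high-order reconstructions, fluxes and TVD-RK3 time stepping described below (the function name and setup are illustrative only):

```python
import numpy as np

def fv_step_burgers(qbar, dx, dt):
    """One forward-Euler finite-volume step for q_t + (q^2/2)_x = 0."""
    f = lambda q: 0.5 * q * q
    # first-order reconstruction at interface i+1/2 (periodic domain)
    ql = qbar                 # Ql_{i+1/2} = qbar_i
    qr = np.roll(qbar, -1)    # Qr_{i+1/2} = qbar_{i+1}
    alpha = np.maximum(np.abs(ql), np.abs(qr))               # local wave speed
    ehat = 0.5 * (f(ql) + f(qr)) - 0.5 * alpha * (qr - ql)   # Rusanov flux
    # update of the cell averages, cf. the integral form of the equation
    return qbar - dt / dx * (ehat - np.roll(ehat, 1))
```

Because the interface flux is single-valued, the update is conservative: on a periodic domain the sum of the cell averages is preserved exactly.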
\subsection{Shu-Osher Conservative finite difference scheme}\label{Sec:shuOsherConsFinDiffSch}
Let the computational domain consist of grid points uniformly spaced in the physical domain, with grid point spacing equal to $\Delta x$. A function $h(x, t)$ is defined
such that the sliding average of $h(x, t)$ over a length $\Delta x$ is equal to $E(x, t)$, that is,
\begin{equation}
\label{eq:ImpFn}
\frac{1}{\Delta x}\int\limits_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}}h(x+y, t)dy = E(x, t)
\end{equation}
Taking the partial derivative of equation~(\ref{eq:ImpFn}) with respect to $x$, we get
\begin{equation}
\label{eq:derivInTermsOfImpFn}
\frac{\partial E}{\partial x}\bigg|_{x=x_o} = \frac{h(x_o+\frac{\Delta x}{2}, t) - h(x_o-\frac{\Delta x}{2}, t)}{\Delta x}
\end{equation}
We refer to Merriman \cite{merriman2003} for a detailed explanation and analysis of the Shu-Osher conservative finite difference scheme.
Using the method of lines and equations~(\ref{eq:hypConsLaw}), and (\ref{eq:derivInTermsOfImpFn}), a semi-discrete form of equation~(\ref{eq:hypConsLaw}) is obtained at
$x=x_o, t=t_o$,
which is
\begin{equation}
\label{eq:discreteHypConsLaw}
\frac{\partial Q}{\partial t}\bigg|_{x=x_0, t=t_o} + \frac{h(x_o+\frac{\Delta x}{2}, t_o) - h(x_o-\frac{\Delta x}{2}, t_o)}{\Delta x} = 0
\end{equation}
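The identity (\ref{eq:derivInTermsOfImpFn}) is exact, not approximate. The following numerical check (illustrative only, not part of the scheme) verifies this for $E(x)=\sin(x)$, for which $h(x) = \sin(x)\,\Delta x/(2\sin(\Delta x/2))$ has sliding average $\sin(x)$:

```python
import math

# For E(x) = sin(x), the sliding average of sin over a window dx is
# sin(x) * 2*sin(dx/2)/dx, so h below has sliding average exactly sin(x).
dx = 0.3
h = lambda x: math.sin(x) * dx / (2.0 * math.sin(dx / 2.0))

x0 = 0.7
lhs = (h(x0 + dx / 2.0) - h(x0 - dx / 2.0)) / dx
# the divided difference of h reproduces dE/dx = cos(x0) exactly
assert abs(lhs - math.cos(x0)) < 1e-13
```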
\subsection{Upwinding and Flux-Splitting}\label{Sec:Upwinding}
To account for propagation along the characteristic directions,
upwind biasing of spatial derivatives is needed. This can be achieved by using flux splitting and appropriate biasing of approximations involving the split fluxes.
\begin{equation}
\label{eq:FSplitting}
E^{\pm} = \frac{1}{2}\left(E(Q) \pm \hat{A} Q\right).
\end{equation}
Equation (\ref{eq:FSplitting}) gives the expression for the split fluxes.
Different choices of $\hat{A}$ lead to different flux splittings, which will be described in detail later.
The semi-discrete form of the hyperbolic conservation law incorporating flux
splitting becomes
\begin{equation}
\label{eq:discreteHypConsLawFS}
\frac{\partial Q}{\partial t}\bigg|_{x=x_0, t=t_o} + \frac{h^{+}(x_o+\frac{\Delta x}{2}, t_o) - h^{+}(x_o-\frac{\Delta x}{2}, t_o)}{\Delta x} + \frac{h^{-}(x_o+\frac{\Delta x}{2}, t_o) - h^{-}(x_o-\frac{\Delta x}{2}, t_o)}{\Delta x}= 0,
\end{equation}
where
\begin{equation}
\label{eq:ImpFnFS}
\frac{1}{\Delta x}\int\limits_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}}h^{\pm}(x+y, t)dy = E^{\pm}(x, t).
\end{equation}
A high-order linear reconstruction or WENO reconstruction procedure is used to obtain approximations to $h^{+}$ and $h^{-}$ using left and right biased stencils,
respectively. It is described next.
\subsection{Linear reconstruction and WENO reconstruction procedures}\label{Sec:WENOPROC}
Using high-order reconstruction (linear reconstruction) to approximate $h^{+}$ and $h^{-}$ in the presence of discontinuities will lead to oscillations in the
solution. To avoid this, we use WENO reconstruction wherever necessary.
WENO reconstruction was introduced by Liu, Osher and Chan in 1994 \cite{liu1994}.
Jiang et al. \cite{jiang1996} gave a framework to build high-order (of order $2r - 1$ for $r=2, 3, \ldots$) WENO schemes. Modifications to these schemes were
proposed \cite{borges2008, castro2011, wu2016} to avoid loss of accuracy near critical points. Two such schemes are WENO-Z (or ZWENO), proposed by Borges et al.
\cite{borges2008, castro2011}, and WENO-NP3, proposed by Wu et al. \cite{wu2016}.
Equation~(\ref{eq:discreteHypConsLawFS}) is used to advance from time $t_n$ to $t_{n+1}$. At grid point
$x_i$, approximations $\hat{h}^{\pm}_{i+\frac{1}{2}}$ and $\hat{h}^{\pm}_{i-\frac{1}{2}}$ (subscript $n$, indicating time level, is dropped for brevity) to
$h^{\pm}(x_{i+\frac{1}{2}}, t_n)$ and $h^{\pm}(x_{i-\frac{1}{2}}, t_n)$, respectively, are needed. These, for a $2r-1$ reconstruction are given by the following equations:
\begin{align}
\label{eq:fluxReconsApprox}
\hat{h}^{\pm}_{i+\frac{1}{2}} &= \sum\limits_{j=1}^r \omega^{\pm}_j H^{\pm}_j,~
\omega^{\pm}_j = \frac{\tilde{\omega}^{\pm}_j}{\bar{\omega}^{\pm}},~
\bar{\omega}^{\pm} = \sum\limits_{j=1}^r \tilde{\omega}^{\pm}_j\\
\label{eq:omegaWenoZ}
\tilde{\omega}^{\pm}_j &= \gamma^{\pm}_j\bigg(1+\bigg(\frac{\tau^{\pm}}{\beta^{\pm}_j + \epsilon}\bigg)^p\bigg),\text{ for ZWENO (ZW) reconstruction \cite{borges2008}, and}\\
\label{eq:linHiOrd}
\tilde{\omega}^{\pm}_j &= \gamma^{\pm}_j\text{ for linear reconstruction (LR)}.
\end{align}
where $\tau^{\pm}$ are high order smoothness indicators \cite{borges2008}, $\beta^{\pm}_j$ are the Jiang-Shu smoothness indicators \cite{jiang1996}, $\gamma^{\pm}_j$ are linear
weights, $\epsilon = 10^{-14}$, $p = r-1$ (unless specified otherwise).
$\gamma^{\pm}_j \text{ and } H^{\pm}_j$ for a third order reconstruction
are given by:
\begin{align}
\label{eq:gammasThirdOrd}
\gamma^{+}_1 = \frac{1}{3},~ \gamma^{+}_2 = \frac{2}{3}, &~
\gamma^{-}_1 = \frac{2}{3},~ \gamma^{-}_2 = \frac{1}{3}.\\
H^{+}_1 = \frac{3E^{+}_i - E^{+}_{i-1}}{2},~ H^{+}_2 = \frac{E^{+}_i + E^{+}_{i+1}}{2},&~
H^{-}_1 = \frac{E^{-}_i + E^{-}_{i+1}}{2},~ H^{-}_2 = \frac{3E^{-}_{i+1} - E^{-}_{i+2}}{2}.
\end{align}
Formulae for $\beta^{\pm}_j$ for the WENO-NP3 or ZWENO3 ($r=2$) reconstruction, which uses a stencil of 3 points,
are given below.
\begingroup
\allowdisplaybreaks
\begin{align}
\beta^{+}_1 = (E^+_{i-1} - E^+_i)^2, \beta^{+}_2 = (E^+_{i+1} - E^+_i)^2,&
\beta^{-}_1 = (E^-_{i+1} - E^-_i)^2, \beta^{-}_2 = (E^-_{i+1} - E^-_{i+2})^2,\\
\dot{\beta}^{+} = \frac{1}{4}(E^+_{i-1} - E^+_{i+1})^2 + &\frac{13}{12}(E^+_{i-1} - 2E^+_{i} + E^+_{i+1})^2,\\
\dot{\beta}^{-} = \frac{1}{4}(E^-_{i} - E^-_{i+2})^2 + &\frac{13}{12}(E^-_{i} - 2E^-_{i+1} + E^-_{i+2})^2,\\
\tau^{\pm} = \tau^{\pm}_{NP} = \Bigg| \dot{\beta}^{\pm} - \frac{\beta^{\pm}_1 + \beta^{\pm}_2}{2}\Bigg|^{1.5},~
E^{\pm}_k &= E^{\pm}(x_k, t_n)\text{, for } k=i-1, i, i+1, i+2.
\end{align}
\endgroup
We refer to \cite{borges2008, castro2011} for WENO reconstruction for $r=3$ and $r=4$.
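The third-order ($r=2$) case above fits in a few lines. The following illustrative implementation of $\hat{h}^{+}_{i+\frac{1}{2}}$ (function name is ours) follows equations (\ref{eq:fluxReconsApprox})--(\ref{eq:omegaWenoZ}) and the smoothness indicators above, with $p=r-1=1$:

```python
def zweno3_plus(e_m1, e_0, e_p1, eps=1e-14):
    """Left-biased WENO-NP3 reconstruction of h^+ at x_{i+1/2} (r = 2).

    e_m1, e_0, e_p1 are the split fluxes E^+ at points i-1, i, i+1.
    """
    g1, g2 = 1.0 / 3.0, 2.0 / 3.0        # linear weights gamma^+_1, gamma^+_2
    h1 = 0.5 * (3.0 * e_0 - e_m1)        # candidate from stencil {i-1, i}
    h2 = 0.5 * (e_0 + e_p1)              # candidate from stencil {i, i+1}
    # Jiang-Shu smoothness indicators and the NP high-order indicator
    b1 = (e_m1 - e_0) ** 2
    b2 = (e_p1 - e_0) ** 2
    bdot = 0.25 * (e_m1 - e_p1) ** 2 \
        + (13.0 / 12.0) * (e_m1 - 2.0 * e_0 + e_p1) ** 2
    tau = abs(bdot - 0.5 * (b1 + b2)) ** 1.5
    # nonlinear weights, p = r - 1 = 1
    w1t = g1 * (1.0 + tau / (b1 + eps))
    w2t = g2 * (1.0 + tau / (b2 + eps))
    w1, w2 = w1t / (w1t + w2t), w2t / (w1t + w2t)
    return w1 * h1 + w2 * h2
```

For smooth (here, linear) data $\tau$ vanishes and the weights reduce to the linear weights, while near a jump the weight of the non-smooth stencil collapses, which is the behaviour the indicators are designed to produce.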
\subsection{Component-wise and characteristic-wise reconstruction}
For systems of conservation laws, like the compressible Euler equations, the reconstruction described above can be done either component-wise or characteristic-wise
\cite{zhang2011, qiu2002}. For characteristic-wise reconstruction, the cell averages of the vector of conserved variables $Q$ or the split fluxes $E^{\pm}$ are
transformed into the local characteristic coordinates. The reconstruction is performed on the quantities in the characteristic coordinates, and the reconstructed
quantities are then transformed back and used. The transformation to characteristic coordinates and back is based on the left and right eigenvectors of the
flux Jacobian $A(Q)$, evaluated at the $Q$ of
the grid point or cell immediately to the left of the point at which the reconstruction is sought, similar to the method labelled U1ZWENO in \cite{zhang2011}.
\subsection{Flux splitting}\label{Sec:FSplittingExp}
Equation (\ref{eq:FSplitting}) gives the split fluxes ($E^{\pm}$) in terms of the flux function ($E$), $Q$ and a parameter $\hat{A}$.
While calculating approximations to $h^{+}$ and $h^{-}$ at $x_{i+\frac{1}{2}}$, there are different choices for $\hat{A}$.
Choosing $\hat{A} = \alpha$, where $\alpha = \max_{Q}(|\vec{V}| + a)$ ($a$ is the speed of sound and the maximum is taken over all grid points),
leads to the Lax-Friedrichs flux splitting.
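As a sketch of this splitting for the scalar Burgers flux $E(q) = q^2/2$ (an illustrative example; the paper applies the splitting to the Euler system):

```python
import numpy as np

def lf_split(q, alpha):
    """Global Lax-Friedrichs splitting E^{+-} = (E(q) +- alpha*q)/2
    for the Burgers flux E(q) = q^2/2, with alpha = max|q| over the grid."""
    e = 0.5 * q * q
    return 0.5 * (e + alpha * q), 0.5 * (e - alpha * q)

q = np.linspace(-1.0, 1.0, 5)
alpha = np.max(np.abs(q))
ep, em = lf_split(q, alpha)
# consistency: the split fluxes sum back to the full flux
assert np.allclose(ep + em, 0.5 * q * q)
```

With this choice of $\alpha$, $dE^{+}/dq = (q+\alpha)/2 \ge 0$ and $dE^{-}/dq \le 0$ on the grid, so each split flux carries waves of one sign only, which is what justifies the one-sided biasing.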
For a less dissipative splitting we choose $\hat{A}$ based on the Roe flux \cite{roe1981}. Let $Q_L$ and $Q_R$ be left-biased and right-biased approximations to
$Q(x_{i+\frac{1}{2}}, t_n)$, obtained using WENO interpolation of the same formal order of accuracy ($2r-1$). We refer to \cite{shu2009} for details of WENO interpolation.
Let $\tilde{Q}$ be the Roe-average state obtained using $Q_L$ and $Q_R$ (see equations 5.41, 5.48, and 5.51 in \cite{laney1998}) and let $A(Q)$ ($=\partial E/\partial Q$)
be the flux Jacobian.
We then choose $\hat{A} = |A(\tilde{Q})|$. We label this `Roe flux splitting'
and remark that this flux splitting is less dissipative than the Lax-Friedrichs flux splitting.
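The Roe-average state used here follows the standard density-weighted formulas (cf. the cited references); the following is a minimal sketch for the one-dimensional Euler variables, with the eigendecomposition needed for $|A(\tilde{Q})|$ omitted:

```python
import math

def roe_average(rho_l, u_l, h_l, rho_r, u_r, h_r):
    """Standard Roe (sqrt-density weighted) average of left/right states.

    rho is density, u velocity and h the specific total enthalpy; these
    determine the averaged Jacobian A(Q~) used in the Roe flux splitting.
    """
    sl, sr = math.sqrt(rho_l), math.sqrt(rho_r)
    u = (sl * u_l + sr * u_r) / (sl + sr)    # velocity average
    h = (sl * h_l + sr * h_r) / (sl + sr)    # enthalpy average
    rho = sl * sr                            # density average
    return rho, u, h
```

A basic sanity property is that equal left and right states are reproduced exactly, so the splitting reduces to the exact Jacobian in smooth regions.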
\subsection{Numerical flux functions}
Equation (\ref{eq:FluxFunction}) gives the integral of the flux over time in terms of the reconstructed values of $Q$ and a numerical flux function $\hat{E}(l, r)$. Of the
different numerical flux functions available, we present a comparison of results obtained using the Roe flux \cite{roe1981}, the Roe flux with the Harten-Hyman 2 fix
\cite[p. 266]{harten1983}, Osher's flux with P-ordering \cite[Section 12.3.1, p. 393]{toro2013}, Osher's flux with O-ordering \cite[Section 12.3.2, p. 397]{toro2013},
the AUSM\textsuperscript{+}-up flux \cite{liou2006} and the global Lax-Friedrichs flux.
\subsection{System of equations with viscous fluxes and two-dimensional equations}\label{Sec:extnToViscEqns}
The previous sections described the procedure for spatial discretisation of a hyperbolic conservation law in one space dimension. For equations with viscous fluxes that contain
second derivatives, of the form
\begin{equation}
\frac{\partial }{\partial t}Q(x, t) + \frac{\partial}{\partial x}E(Q(x, t)) + \frac{\partial}{\partial x}\Bigg(\mu(x, t) \frac{\partial}{\partial x} \bigg(E_v\big(Q(x, t)\big)\bigg)\Bigg) = 0,
\end{equation} a procedure similar to the one described in sections \ref{Sec:shuOsherConsFinDiffSch} - \ref{Sec:WENOPROC} can be applied twice to obtain the second derivatives.
The procedure is first applied to calculate the terms
$\mu(x, t)\partial/\partial x (E_v(Q(x, t)))$. Then the same procedure is applied again to $\mu(x, t)\partial/\partial x (E_v(Q(x, t)))$ to obtain the second derivative.
Biasing of the approximations involving viscous fluxes is not necessary; therefore, $\hat{A} = 0$ (in equation \ref{eq:FSplitting}) is used for the flux splitting. Also, linear
reconstruction (equation \ref{eq:linHiOrd}, LR) is used for calculating the viscous fluxes.
For equations in two space dimensions such as,
\begin{equation}
\frac{\partial }{\partial t}Q(x, y, t) + \frac{\partial}{\partial x}E(Q(x, y, t)) + \frac{\partial}{\partial y}F(Q(x, y, t)) = 0,
\end{equation}
the same procedure can be used for discretising the $x$ and $y$ derivatives separately.
The resulting semi-discrete form is integrated in time using TVD-RK3 or the Butcher's RK5 method, explained in section \ref{Sec:RKTimeInteg}.
Next we describe the Discontinuous Galerkin Method.
\subsection{Formulation of Discontinuous Galerkin Method}\label{sec:DGMformulation}
The original Discontinuous Galerkin (DG) finite element method was introduced by Reed and Hill \cite{rh} for solving the neutron transport equation which is a linear hyperbolic equation. It was later developed for solving time dependent nonlinear hyperbolic conservation laws as the Runge-Kutta Discontinuous Galerkin (RKDG) method by Cockburn et al. in a series of papers \cite{cs1}, \cite{cs2}, \cite{chs} and \cite{cs3}. The history and development of the DG method is given in the survey paper \cite{cks}.
We now look at solving \eqref{eq:hypConsLaw} using the Discontinuous Galerkin method. We divide the solution domain into $K$ non-overlapping elements, the $k^{th}$ of which has domain $\mathbf{I}^{k}=[x_{l}^{k},x_{r}^{k}]$. We approximate the local solution as a polynomial of order $N=N_{p}-1$, where $N_{p}$ is the number of degrees of freedom of the approximation. This is termed the $\mathbf{P}^{N}$-based Discontinuous Galerkin method. The approximation is given as:
\begin{equation}\label{modalForm}
Q_{h}^{k}(x,t) = \sum_{n=0}^{N} \hat{Q}^{k}_{n}(t)\psi_{n}^{k}(x) \qquad \forall x\in\mathbf{I}^{k}
\end{equation}
Here, $Q_{h}^{k}(x,t)$ is the approximate local polynomial solution, $\psi_{n}^{k}(x)$ is the local polynomial basis of approximation and $\hat{Q}^{k}_{n}(t)$ are the degrees of freedom.
Similarly, we will also approximate the flux $E(Q)$ in the solution domain as given below:
\begin{equation}\label{fluxApprox}
E_{h}^{k}(Q_{h}^{k}) = \sum_{n=0}^{N} \hat{E}^{k}_{n}(t)\psi_{n}^{k}(x) \qquad \forall x\in\mathbf{I}^{k}
\end{equation}
\noindent We have used the orthonormalized Legendre polynomials, as done by Hesthaven et al.~\cite{hestha1}. The following affine mapping is employed.
\begin{equation}\label{affineMap}
x(r) = x_{l}^{k} + \frac{1+r}{2}h^{k}, \qquad h^{k} = x_{r}^{k} - x_{l}^{k} \qquad \forall r\in\mathbf{I}=[-1,1]
\end{equation}
\noindent The corresponding recurrence formula for the required orthonormalized Legendre polynomials is given by:
\begin{equation}\label{recFormula}
r\tilde{P}_{n}(r) = a_{n}\tilde{P}_{n-1}(r) + a_{n+1}\tilde{P}_{n+1}(r),\qquad a_{n} = \sqrt{\frac{n^{2}}{(2n+1)(2n-1)}}
\end{equation}
\noindent with
\begin{displaymath}
\tilde{P}_{0}(r) = \frac{1}{\sqrt{2}},\qquad \tilde{P}_{1}(r) = \sqrt{\frac{3}{2}}r
\end{displaymath}
\noindent Now the local polynomial basis is given as:
\begin{equation}\label{polyBasis}
\psi_{n}^{k}(r) = \tilde{P}_{n-1}(r)
\end{equation}
\noindent The degrees of freedom $\hat{Q}^{k}_{n}$ can be advanced in time by the following scheme obtained from the weak form of the governing equation:
\begin{equation}\label{weakFormScheme}
\frac{d}{dt}\hat{Q}_{h}^{k} = (\mathbf{M}^{k})^{-1}(\mathbf{S}^{k})^{T} \hat{E}_{h}^{k}({Q}_{h}^{k}) - (\mathbf{M}^{k})^{-1} (E^{*}|_{r_{N_{p}}}e_{N_{p}} - E^{*}|_{r_{1}}e_{1})
\end{equation}
\noindent Here, $e_{i}$ is a vector of length $N_{p}$ which has zero entries everywhere except at the $i$th location, and $\mathbf{M}^{k}$ is the local mass matrix which is given as:
\begin{equation}\label{massMatrix}
\mathbf{M}^{k} = \left[M_{ij}^{k}\right] = \left[\int_{x_{l}^{k}}^{x_{r}^{k}} \psi_{i}^{k}(x) \psi_{j}^{k}(x) \text{dx}\right]
\end{equation}
\noindent and $\mathbf{S}^{k}$ is the local stiffness matrix which is given by:
\begin{equation}\label{stiffnessMatrix}
\mathbf{S}^{k} = \left[S_{ij}^{k}\right] = \left[\int_{x_{l}^{k}}^{x_{r}^{k}} \psi_{i}^{k}(x) \frac{d\psi_{j}^{k}(x)}{dx} \text{dx}\right]
\end{equation}
\noindent Also, $E^{*}$ is the monotone numerical flux at the interface which is calculated using an exact or approximate Riemann solver. A study of performance of various numerical fluxes for discontinuous Galerkin method has been done in \cite{qks}.
\\
\\
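Because the basis is orthonormal on the reference element, under the affine mapping \eqref{affineMap} the local mass matrix \eqref{massMatrix} reduces to $(h^{k}/2)\,\mathbf{I}$. The following sketch (illustrative code, not from the paper) verifies this by Gauss-Legendre quadrature:

```python
import numpy as np

def orthonormal_legendre(n, r):
    """Orthonormalized Legendre polynomial P~_n(r) = sqrt((2n+1)/2) P_n(r)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt((2.0 * n + 1.0) / 2.0) * np.polynomial.legendre.legval(r, c)

def local_mass_matrix(n_dof, h):
    """Local mass matrix M^k on an element of width h, by quadrature."""
    # n_dof+1 Gauss points integrate products of the basis exactly
    r, w = np.polynomial.legendre.leggauss(n_dof + 1)
    psi = np.array([orthonormal_legendre(n, r) for n in range(n_dof)])
    # dx = (h/2) dr under the affine map x(r)
    return (h / 2.0) * (psi * w) @ psi.T

M = local_mass_matrix(4, 0.1)
```

This diagonal structure is one practical payoff of the orthonormal basis: $(\mathbf{M}^{k})^{-1}$ in the semi-discrete scheme is trivial to apply.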
\noindent Now, the semi-discrete scheme given in \eqref{weakFormScheme} is discretized in time by using the TVD Runge-Kutta time discretization introduced in \cite{shu}. We have used a third order TVD Runge-Kutta time discretization for all our calculations. For equations with viscous fluxes of the form
\begin{equation}\label{eqnDGViscous}
\frac{\partial }{\partial t}Q(x, t) + \frac{\partial}{\partial x}\left(E(Q(x, t))-E_{v}(Q(x, t),\nabla Q(x,t))\right) = 0,
\end{equation}
\noindent we use the local DG (LDG) method as given by Cockburn and Shu in \cite{cs5}. We will solve \eqref{eqnDGViscous} along with
\begin{equation}\label{gradientEqnDG}
U(x,t) - \nabla Q(x,t) = 0
\end{equation}
\noindent Here $U(x,t)$ can be approximated locally as
\begin{equation}\label{modalFormGradient}
U_{h}^{k}(x,t) = \sum_{n=0}^{N} \hat{U}^{k}_{n}(t)\psi_{n}^{k}(x) \qquad \forall x\in\mathbf{I}^{k}
\end{equation}
\noindent Using this, we can obtain the weak form of \eqref{gradientEqnDG} as
\begin{equation}
\mathbf{M}^{k}\hat{U}_{m}^{k} = \int_{\mathbf{I}^{k}} Q_{h}(x,t).\nabla\psi_{m}^{k}(x) dx - \int_{\partial \mathbf{I}^{k}} Q_{h}(x,t)\psi_{m}^{k}(x).\hat{n} ds
\end{equation}
\noindent Each of the above integrals is evaluated using an appropriate quadrature rule. The value $Q_{h}(x,t).\hat{n}$ is part of a surface integral and is taken to be $Q_{h}^{+}.\hat{n}$, where $+$ indicates the discontinuous value outside the element. The value of $\psi_{m}^{k}(x)$ in the surface integral is taken from inside the element. This way, once we obtain $U_{h}(x,t) = \nabla Q(x,t)$, we can find all terms in $E_{v}(Q(x, t),\nabla Q(x,t))=E_{v}(Q(x, t),U(x,t))$. Then \eqref{eqnDGViscous} is written in the weak form similar to \eqref{weakFormScheme} and we can solve the whole system of equations. The numerical flux for $E$, labelled $E^{*}$ in the weak form, is obtained using an exact or approximate Riemann solver. The numerical flux for $E_{v}$, labelled $E_{v}^{*}$, is taken to be $E_{v}^{-}$, where $-$ represents the discontinuous value of the solution inside the element. We again use a third order TVD Runge-Kutta time discretization for the solution of the system in time. This completes the LDG formulation.
\\
\\
\noindent Solutions obtained with Discontinuous Galerkin method develop spurious oscillations near discontinuities and a non linear limiter is used to control such oscillations. The common methodology for limiting in Discontinuous Galerkin method is as given below in two steps:
\\
\noindent \textbf{1)} Identify the cells which need to be limited. They are often called troubled cells. \\
\noindent \textbf{2)} Replace the solution polynomial in the troubled cell with a new polynomial that is less oscillatory but with the same cell average and order of accuracy.
\\
\\
\noindent For the first step, we have used the KXRCF troubled cell indicator for all the calculations done in this paper, as it is rated highly by Qiu and Shu in \cite{qs2} on the basis of its performance in detecting discontinuities in various test problems. The second step is where the limiting is done. We have used the so-called simple WENO limiter developed by Zhong and Shu \cite{zs} for all the calculations done in this paper.
\subsection{Labelling the numerical methods}
We denote component-wise reconstruction methods using the weights given by equations (\ref{eq:linHiOrd}) and (\ref{eq:omegaWenoZ}) as LR and
ZW, respectively, and the corresponding characteristic-wise reconstructions as LCDLR and
LCDZW. The number
following these labels indicates the formal order of accuracy of the reconstruction.
We use the prefixes FD and FV to denote the conservative finite difference and finite volume methods, respectively.
Similarly, we denote the Discontinuous Galerkin methods with the label DG $P^n$,
where $n$ indicates the degree of the basis polynomial.
To indicate the flux splitting or flux function used, we add one of the
following suffixes:
\begin{itemize}
\item `-ROE' for the Roe flux,
\item `-ROEHH2' for the Roe flux with the Harten-Hyman 2 entropy fix,
\item `-LF' for the Lax-Friedrichs flux function,
\item `-AUSM' for the AUSM\textsuperscript{+}-up flux,
\item `-OshP' for Osher's flux with P-ordering,
\item `-OshO' for Osher's flux with O-ordering,
\item `-C' to indicate that a central scheme was used.
\end{itemize}
Therefore, FDLR7-C indicates the finite difference method with linear reconstruction of formal order of accuracy 7 and $\hat{A}=0$.
FVLCDZW3-ROE indicates that the finite volume method with characteristic-wise WENO-NP3 reconstruction and the Roe flux is
used, while FDZW5-LF indicates that the finite difference method with component-wise ZWENO5 reconstruction and Lax-Friedrichs flux splitting is used.
\subsection{Runge Kutta time discretisation}\label{Sec:RKTimeInteg}
Consider the equation
\begin{equation}
\label{eq:timInt}
\frac{d}{dt} Q = L(Q).
\end{equation}
The simple forward Euler time discretisation between two time levels $t_n$ and $t_{n+1}$ separated by $\Delta t$ is given by
\begin{equation}
\label{eq:fowEulDisc}
Q^{n+1} = Q^{n} + \Delta t L(Q^{n}).
\end{equation}
A three stage third order TVD (Total Variation Diminishing) or SSP (Strong Stability Preserving) \cite{gottlieb2001} Runge-Kutta discretisation is given by
\begin{align}
Q^{(1)} = Q^{n} + \Delta t L(Q^{n}),~~&
Q^{(2)} = \frac{3}{4} Q^n + \frac{1}{4}Q^{(1)} + \frac{1}{4}\Delta t L(Q^{(1)}),\\
Q^{n+1} = \frac{1}{3} Q^n &+ \frac{2}{3}Q^{(2)} + \frac{2}{3}\Delta t L(Q^{(2)}).
\end{align}
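One step of the standard three-stage TVD/SSP Runge-Kutta scheme can be sketched as follows (illustrative code; here applied to $dQ/dt=-Q$, for which the error after integrating to $t=1$ is small, consistent with third-order accuracy):

```python
import math

def tvd_rk3_step(q, dt, L):
    """One step of the three-stage TVD/SSP-RK3 scheme."""
    q1 = q + dt * L(q)
    q2 = 0.75 * q + 0.25 * q1 + 0.25 * dt * L(q1)
    return q / 3.0 + (2.0 / 3.0) * q2 + (2.0 / 3.0) * dt * L(q2)

# e.g. dq/dt = -q with q(0) = 1; exact solution is exp(-t)
L = lambda q: -q
q, dt = 1.0, 0.1
for _ in range(10):
    q = tvd_rk3_step(q, dt, L)
err = abs(q - math.exp(-1.0))
```

Each stage is a convex combination of forward-Euler steps, which is what gives the scheme its strong stability (TVD) property.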
The Runge-Kutta discretisation described above is used to advance in time from $t_n$ to $t_{n+1}$.
\subsection{Refinement near shock using overset mesh}\label{Sec:OversetDesc}
To improve solution accuracy near stationary shocks, we use a finer overset mesh component. Next, we briefly describe the procedure employed for obtaining numerical solutions using an overset mesh.
\subsubsection{Conservative coupling procedure for the finite difference scheme}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.65\textwidth]{images/overset.pdf}
\caption{ Overset mesh with coarse (black, square grid points) and fine components (red, circular grid points), with left ($i+\frac{1}{2}$) and right ($k+\frac{1}{2}$) coupling interfaces shown.}
\label{fig:overset}
\end{center}
\end{figure}
As shown in figure \ref{fig:overset}, for the finite difference method, we use an overset mesh with coarse (black, square grid points) and fine components (red, circular
grid points). The grid point spacing (GPS or $\delta x$) of the finer mesh component is chosen so as to have coupling interfaces like $i+\frac{1}{2}$, $j+\frac{3}{2}$
and overlapping mesh points, like $i$, $j$, $i+1$, $j+3$. On the coarse mesh component, grid points from $i+1$ to $k$ are fringe points and the remaining grid points are
discretisation points, except for the ghost points used for boundary condition application. On the finer mesh component, grid points $j+2$ to $l+1$ are the
discretisation points and the remaining are fringe points. The state at the fringe points of the coarse mesh (like $i+1$) is copied directly from the corresponding
overlapping grid points (like $j+3$) in the fine mesh. The state at the fringe points of the fine mesh (like $j$, $j+1$) is obtained using an interpolation polynomial
based on the data from the nearest neighbouring grid points in the coarse mesh component. The degree of the polynomial used for interpolation is equal to the formal order of accuracy (F.O.A) of the finite
difference scheme used.
To have a conservative coupling between the coarse and the finer mesh components, a unique numerical flux $\hat{h}^{\pm}$ (see equation \ref{eq:fluxReconsApprox})
must be used at the left ($i+\frac{1}{2}$, $j+\frac{3}{2}$) and right ($k+\frac{1}{2}$, $l+\frac{3}{2}$) coupling interfaces \cite{cheng2013, cheng2016}. This numerical
flux can be calculated using either the coarse mesh component, the fine mesh component, or any convex combination of the two. Using the numerical flux from the fine mesh component at the
left coupling interface and the numerical flux from the coarse mesh component at the right coupling interface was found to give good results.
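The fringe-point interpolation described above can be sketched as follows (a minimal NumPy sketch; the stencil selection by nearest neighbours is an illustrative assumption, with the polynomial degree matched to the formal order of accuracy of the scheme):

```python
import numpy as np

def fringe_interpolate(x_coarse, q_coarse, x_fringe, degree):
    """Interpolate coarse-mesh data to a fine-mesh fringe point.

    Uses the `degree + 1` coarse grid points nearest to `x_fringe`
    to build an interpolating polynomial of the given degree.
    """
    # pick the degree+1 nearest coarse grid points
    idx = np.argsort(np.abs(x_coarse - x_fringe))[: degree + 1]
    xs, qs = x_coarse[idx], q_coarse[idx]
    # Lagrange interpolation evaluated at the fringe point
    result = 0.0
    for j in range(len(xs)):
        lj = 1.0
        for m in range(len(xs)):
            if m != j:
                lj *= (x_fringe - xs[m]) / (xs[j] - xs[m])
        result += qs[j] * lj
    return result
```

A degree-$n$ interpolant reproduces polynomials up to degree $n$ exactly, which is the property the coupling relies on.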
\subsubsection{Overset mesh method for the DG scheme}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.35\textwidth]{OversetDG.pdf}
\caption{ Overset mesh with coarse (black) and fine components (red).}
\label{fig:oversetDG}
\end{center}
\end{figure}
A typical one-dimensional overset mesh for the DGM is shown in Figure \ref{fig:oversetDG}. Here, element $i$ in the coarse mesh (black) and element $k$ in the fine mesh (red) overlap. While advancing the solution in time on the coarse mesh, we find the solution at $i+1/2$ in the fine mesh (by locating it appropriately in the local coordinate system of the fine mesh) and apply it as the boundary condition for calculating the numerical flux at $i+1/2$. This procedure follows Galbraith et al.~\cite{gbot}, who describe it for two-dimensional meshes. Similarly, for advancing the solution on the fine mesh, we apply the coarse mesh solution at $k-1/2$ as the boundary condition, which is again used to calculate the numerical flux at $k-1/2$. This procedure gives good results for overset meshes with the DGM.
\section{Verification}\label{Sec:Verification}
We verify the implementation of the WENO and DG schemes using the Burgers equation with a source term and the isentropic Euler vortex problem \cite{spiegel2015}.
\subsection{Burgers equation with source term}
We solve
\begin{equation}
\frac{\partial}{\partial t} u(x, t) + \frac{\partial}{\partial x}\bigg(\frac{1}{2}u^2(x, t)\bigg) = -8x^7u(x, t), \quad 0\leq x \leq 1,
\end{equation}
with the boundary conditions $u(0, t) = 2.0$, $u(1, t) = 1.0$ and the initial condition $u(x, 0) = 2.0-x$ to steady state. The steady state solution is $u_s(x) = 2.0 - x^8$.
The flux function is $E(x, t) = u^2(x, t)/2$, and the flux splitting is $E^{+} = u^2(x, t)/2$ and $E^{-} = 0$ (LB or left biased splitting). The FDZW7-LB
and DG $P^7$-LB schemes with TVD-RK3 time discretisation were used to obtain the numerical solutions for different grid point spacings (GPS = $\Delta x$) or different
cell sizes. Table \ref{tab:burgersL1Error} lists the $L_1$ errors and the observed orders of accuracy.
\begin{table}[!htbp]
\begin{center}
\caption{$L_1$ errors of numerical solutions obtained using FDZW7-LB and DG schemes for different GPS/cell size and observed order of accuracy.}
\label{tab:burgersL1Error}
\begin{tabular}{|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.14\textwidth}|>{\centering\arraybackslash}m{0.08\textwidth}|>{\centering\arraybackslash}m{0.14\textwidth}|>{\centering\arraybackslash}m{0.08\textwidth}|}
\hline
\multirow{2}{5em}{GPS/Cell size} & \multicolumn{2}{c|}{FDZW7}&\multicolumn{2}{c|}{DG $P^7$}\\\cline{2-5}
&$L_1\text{ error}$ $\times 10^{-11}$ & order & $L_{1}\text{ error}$ $\times 10^{-11}$ & order \\ \hline
1/25 & 120870.8 & - & 23426.1 & - \\ \hline
1/50 & 851.6 & 7.14 & 162.35 & 7.17 \\ \hline
1/75 & 49.2 & 7.03 & 9.2153 & 7.075 \\ \hline
1/100 & 7.0 & 6.77 & 1.2834 & 6.853 \\ \hline
\end{tabular}
\end{center}
\end{table}
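The observed orders in the table follow from the $L_1$ errors on successive meshes in the usual way; a small sketch (assuming errors $E_1$, $E_2$ on grids of spacing $h_1$, $h_2$):

```python
import math

def observed_order(e1, e2, h1, h2):
    """Observed order of accuracy from errors on two grids:
    p = log(E1/E2) / log(h1/h2)."""
    return math.log(e1 / e2) / math.log(h1 / h2)
```

For instance, the first two FDZW7 rows of the table give an observed order close to 7, consistent with the formal order of the scheme.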
\subsection{Isentropic Euler Vortex Problem \cite{spiegel2015}}
We solve the two-dimensional Euler equations, which are
\begin{equation}
\label{eq:Euler2dDiff}
\frac{\partial Q}{\partial t} + \frac{\partial E}{\partial x} + \frac{\partial F}{\partial y} = 0
\end{equation}
where
\begin{equation}
\label{eq:Euler2dDiffTerms}
~Q = \begin{bmatrix}\rho \\ \rho u \\ \rho v \\ \rho e_t\end{bmatrix}, ~E = \begin{bmatrix}\rho u \\ \rho u^2 + p \\ \rho uv \\ (\rho e_t + p)u\end{bmatrix},
~F = \begin{bmatrix}\rho v \\ \rho vu \\ \rho v^2 +p \\ (\rho e_t + p)v\end{bmatrix},
~ e_t = \frac{p}{\rho(\gamma -1)} + \frac{1}{2}\left(u^2 + v^2 \right)
\end{equation}
The initial conditions consist of an isentropic vortex perturbation added to a uniform flow in the positive $x$ direction and are given by
\begin{align}
u(x, y, 0) &= u_0 - 5 e^{(1-r^{2})} \frac{y-y_{0}}{2\pi},\\
v(x, y, 0) &= 5 e^{(1-r^{2})} \frac{x-x_{0}}{2\pi},\\
\rho(x, y, 0) &= \left(1 - \left(\frac{\gamma - 1}{16\gamma \pi^{2}}\right) 25 e^{2(1-r^{2})}\right)^{\frac{1}{\gamma-1}},
\end{align}
with $p(x, y, 0) = (\rho(x, y, 0))^{\gamma}$ and $r=\sqrt{(x-x_{0})^{2}+(y-y_{0})^{2}}$. The parameter values chosen are $x_{0}=7.0$, $y_{0}=0$, $u_0 = 1.0$
and $\gamma = 1.4$. The computational domain is a square of dimensions $14~\text{units} \times 14~\text{units}$ with $0 \leq x \leq 14$ and $-7 \leq y \leq 7$. Periodic
boundary conditions are applied along the $x$ and $y$ directions. The FDZW5-LF and DG $P^4$-LF methods with Lax-Friedrichs flux splitting are used to obtain the numerical solution
at $t = 14.0$ units (one time period).
For time discretisation, Butcher's six-stage, fifth-order RK scheme with time step ${\Delta t= 0.07 \times \Delta x}$ was used with the FDZW5-LF scheme. The TVD-RK3
scheme with $\Delta t = 0.1 \times (\Delta x)^{5/3}$ was used for the DG $P^4$ scheme. This problem was run for meshes with grid point
spacings ($GPS=\Delta x = \Delta y$) of $1/25, 1/50, 1/75, 1/100, 1/150, 1/175, 1/200, \text{ and } 1/225$. The $L_1$
errors for meshes with different GPS and the observed orders of accuracy are given in table~\ref{tab:eulerIsenVortErrs}.
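For reference, the initial condition above can be sketched as follows (a minimal Python sketch with the vortex strength 5 and the parameters as stated; far from the vortex centre the state tends to the free stream $(u, v, \rho, p) = (u_0, 0, 1, 1)$):

```python
import math

GAMMA, U0, X0, Y0 = 1.4, 1.0, 7.0, 0.0

def vortex_state(x, y):
    """Initial (u, v, rho, p) of the isentropic Euler vortex at (x, y)."""
    r2 = (x - X0) ** 2 + (y - Y0) ** 2
    f = 5.0 * math.exp(1.0 - r2) / (2.0 * math.pi)
    u = U0 - f * (y - Y0)
    v = f * (x - X0)
    rho = (1.0 - (GAMMA - 1.0) / (16.0 * GAMMA * math.pi ** 2)
           * 25.0 * math.exp(2.0 * (1.0 - r2))) ** (1.0 / (GAMMA - 1.0))
    p = rho ** GAMMA                     # isentropic relation
    return u, v, rho, p
```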
\begin{table}[!htbp]
\begin{center}
\caption{$L_1$ errors of total energy density ($\rho e_t$) obtained using FDZW5-LF and DG schemes for different GPS/cell size and observed order of accuracy.}
\label{tab:eulerIsenVortErrs}
\begin{tabular}{|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.14\textwidth}|>{\centering\arraybackslash}m{0.08\textwidth}|>{\centering\arraybackslash}m{0.14\textwidth}|>{\centering\arraybackslash}m{0.08\textwidth}|}
\hline
\multirow{2}{5em}{GPS/Cell size} & \multicolumn{2}{c|}{FDZW5-LF}&\multicolumn{2}{c|}{DG $P^4$-LF}\\\cline{2-5}
&$L_1\text{ error}$ $\times 10^{-11}$ & order & $L_{1}\text{ error}$ $\times 10^{-11}$ & order \\ \hline
1/25 & 235635.2 & - & 15415.2 & - \\ \hline
1/50 & 6746.7 & 5.13 & 421.64 & 5.19 \\ \hline
1/75 & 876.3 & 5.03 & 52.345 & 5.145 \\ \hline
1/100 & 203.2 & 5.08 & 12.346 & 5.02 \\ \hline
1/150 & 26.7 & 5.00 & 1.6432 & 4.97 \\ \hline
1/200 & 6.4 & 4.96 & 0.4124 & 4.805 \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Post shock oscillations and mass conservation error}\label{Sec:postShockOscillations}
When there are discontinuities in the solution, the shock capturing methods described above can exhibit issues such as mass conservation errors, post shock oscillations,
and convergence stalling. In this paper, we focus on post shock oscillations and the mass conservation error. To demonstrate these issues, we
use the problem of a moving normal shock modelled using the one-dimensional Euler equations.
\subsection{One-dimensional Euler equations}
Consider the system of equations
\begin{equation}
\label{eq:oneDimEulerEqn}
\frac{\partial}{\partial t}Q(x, t) + \frac{\partial}{\partial x}E(x, t) = 0,
\end{equation}
where $Q = [\rho, \rho u , \rho e_t]^T$, $E(x, t) = [\rho u, \rho u^2 + p, (\rho e_t + p)u]^T$, with $\gamma = 1.4$.
We solve equations (\ref{eq:oneDimEulerEqn}), for $0 \leq x \leq L$ with initial conditions
\begin{equation}
\label{eq:oneDNormShk}
Q(x, 0) = \begin{cases}
Q_{BS} & x<x_S \\
Q_{AS} & x\geq x_S\\
\end{cases},
\end{equation}
where $(\rho_{BS}, u_{BS}, p_{BS}) = (\gamma, M+u_S, 1.0)$,
$$(\rho_{AS}, u_{AS}, p_{AS}) = \left(\frac{(\gamma+1)M^2 \rho_{BS}}{(\gamma-1)M^2 + 2}, \frac{\rho_{BS}(u_{BS}-u_S)}{\rho_{AS}}+u_S, \frac{p_{BS}(2\gamma M^2 - (\gamma -1))}{\gamma+1}
\right),$$
with supersonic inflow conditions at $x=0$ and subsonic outflow conditions with a back pressure ($p_{back}$) equal to $p_{AS}$ at $x=L$ as boundary conditions.
These conditions correspond to a normal shock moving with a velocity of $u_S$. We choose a mesh with cell size ($= \Delta x$) of $1/100$, with the number of cells equal to
$L/\Delta x$.
Next, we demonstrate the post shock oscillations and the mass conservation error using first order numerical solutions, for different numerical flux functions and
values of the parameters $M$, $u_S$, and $x_S$.
\subsection{Error in numerical solutions obtained using first order schemes}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpM0_04DensityPSOsc.imgtex}}
\caption{Post Shock Oscillations}
\label{fig:StM4SpM0_04PSOsc}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpM0_04DensityMFE.imgtex}}
\caption{Mass flux near the shock}
\label{fig:StM4SpM0_04MFE}
\end{subfigure}
\caption{Post shock oscillations and mass conservation error in numerical solutions (F.O.A = 1) for $M=4$, $u_s = -0.04$ using FVLR1 (finite volume, linear reconstruction,
first order)}
\label{fig:StM4SpM0_04PSOAndMFE}
\end{figure}
For demonstration, we choose the parameter values $M = 4$, $u_S = -0.04$, $L = 10$ and $x_S = 5$. Figure \ref{fig:StM4SpM0_04PSOsc}
shows plots of $\rho$ vs $x$ at $t = 1.26$ units for numerical solutions obtained with the FVLR1 scheme
using the AUSM\textsuperscript{+}-up, Roe, Osher's P-Ordering and the global Lax-Friedrichs flux functions,
which exhibit the post shock oscillations. Figure \ref{fig:StM4SpM0_04MFE}
shows plots of $\rho u$ vs $x$ at $t = 1$ unit, obtained
using different numerical flux functions. These figures show the momentum density or mass flux spike (a non-monotonic variation in the mass flux). Both the post shock
oscillations and the mass flux spike are artefacts of the numerical solutions.
We seek to quantify these errors.
We know that across the moving shock ($0 \leq x \leq L$), the quantities $CAS_1, CAS_2, CAS_3$ defined by
\begin{equation}
\label{eq:movShkInvariants}
CAS_1 = \rho(u - u_S),\quad CAS_2 = \rho(u-u_S)^2 + p,\quad CAS_3 = (u-u_S)\left(\frac{\gamma p}{\gamma-1} + \frac{1}{2}\rho(u-u_S)^2\right),
\end{equation}
are constant.
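The pre- and post-shock states and the constancy of the shock-frame fluxes can be checked numerically; a small sketch using the Rankine-Hugoniot relations above, for $M=4$, $u_S=-0.04$, $\gamma=1.4$:

```python
GAMMA = 1.4

def shock_states(M, uS):
    """Pre- and post-shock (rho, u, p) for a shock moving at speed uS."""
    rho1, u1, p1 = GAMMA, M + uS, 1.0
    rho2 = (GAMMA + 1.0) * M**2 * rho1 / ((GAMMA - 1.0) * M**2 + 2.0)
    u2 = rho1 * (u1 - uS) / rho2 + uS
    p2 = p1 * (2.0 * GAMMA * M**2 - (GAMMA - 1.0)) / (GAMMA + 1.0)
    return (rho1, u1, p1), (rho2, u2, p2)

def shock_frame_fluxes(state, uS):
    """Mass, momentum, and energy fluxes in the frame of the shock."""
    rho, u, p = state
    w = u - uS
    return (rho * w,
            rho * w**2 + p,
            w * (GAMMA * p / (GAMMA - 1.0) + 0.5 * rho * w**2))
```

All three fluxes agree on the two sides of the shock, which is the constancy the error measure below is built on.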
Based on the invariant $CAS_1$, we define the total mass conservation error percentage (CEP) in the numerical solution at a time $t_n$ as
\begin{equation}
\label{eq:totalMCEDef}
\text{CEP}_n = \int\limits_{x=0}^{x=L}\frac{\rho(x, t_n)(u(x, t_n)-u_S) - \rho_{BS}(u_{BS}-u_S)}{\rho_{BS}(u_{BS}-u_S)}dx \times 100.
\end{equation}
Equation (\ref{eq:totalMCEDef}) can be written in terms of cell averages as
\begin{equation}
\label{eq:totalMCEDisc}
\text{CEP}(t_n) = \text{CEP}_n = \sum\limits_{i=0}^{\text{num. cells}}\frac{\overline{\rho u}_i^n - \bar{\rho}_i^nu_S - \rho_{BS}(u_{BS}-u_S)}{\rho_{BS}(u_{BS}-u_S)}\Delta x \times 100.
\end{equation}
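The cell-averaged form can be computed directly from the solution arrays (a minimal NumPy sketch; the reference value is taken as the shock-frame pre-shock mass flux $\rho_{BS}(u_{BS}-u_S)$):

```python
import numpy as np

def cep(rho_bar, rho_u_bar, uS, rho_bs, u_bs, dx):
    """Total mass conservation error percentage from cell averages."""
    ref = rho_bs * (u_bs - uS)              # shock-frame mass flux, pre-shock
    err = (rho_u_bar - rho_bar * uS - ref) / ref
    return float(np.sum(err) * dx * 100.0)
```

For the exact piecewise-constant shock solution, the integrand vanishes on both sides of the shock and CEP is zero; any non-zero value is purely numerical error.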
\begin{figure}[!htbp]
\centering
\scalebox{.9}{\input{images/StM4SpM0_04TotalMFE.imgtex}}
\caption{Total mass conservation error percentage (CEP) for $M=4$, $u_s = -0.04$ using FVLR1}
\label{fig:StM4SpM0_04CEP}
\end{figure}
\begin{figure}[!htbp]
\centering
\scalebox{.9}{\input{images/StM4SpM0_4TotalMFE.imgtex}}
\caption{Total mass conservation error percentage (CEP) for $M=4$, $u_s = -0.4$ using FVLR1}
\label{fig:StM4SpM0_4CEP}
\end{figure}
Figure \ref{fig:StM4SpM0_04CEP}
shows plots of CEP$(t_n)$ vs $t_n$ ($n$ = time step number) for numerical solutions obtained using different numerical flux functions. We can see that the solution obtained
using Osher's P-Ordering flux has less error than that
of the Roe flux, which is consistent with what is reported in the literature \cite{roberts1990}. This, however, changes with $u_s$: for $u_s = -0.4$, shown in
figure \ref{fig:StM4SpM0_4CEP}, the order of error is reversed, with the least error obtained using the AUSM\textsuperscript{+}-up flux, followed by the Roe flux
and Osher's P-Ordering flux. In both cases ($u_s = -0.04$ and $u_s = -0.4$), using the Lax-Friedrichs flux leads to the highest error.
Next, the errors in numerical solutions obtained using high-order schemes for the same problem are discussed.
\subsection{Error in numerical solutions obtained using high order schemes}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpM0_04DensityPSOscLaxFrCmp.imgtex}}
\caption{Lax-Friedrichs}
\label{fig:StM4SpM0_04PSOscLaxFr}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpM0_04DensityPSOscAUSMPlusCmp.imgtex}}
\caption{AUSM\textsuperscript{+}-up}
\label{fig:StM4SpM0_04PSOscAUSMPl}
\end{subfigure}
\caption{Comparison of post shock oscillations obtained using schemes with different F.O.A for AUSM\textsuperscript{+}-up splitting and the Lax-Friedrichs flux
for $M=4$, $u_s = -0.04$ using FVZW and FVLR}
\label{fig:StM4SpM0_04PSOscCmp}
\end{figure}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpM0_04TotalMFEW3.imgtex}}
\caption{FVZW3, $u_s = -0.04$}
\label{fig:StM4SpM0_04TotalMFEW3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpM0_04TotalMFEW5.imgtex}}
\caption{FVZW5, $u_s = -0.04$}
\label{fig:StM4SpM0_04TotalMFEW5}
\end{subfigure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpM0_4TotalMFEW3.imgtex}}
\caption{FVZW3, $u_s = -0.4$}
\label{fig:StM4SpM0_4TotalMFEW3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpM0_4TotalMFEW5.imgtex}}
\caption{FVZW5, $u_s = -0.4$}
\label{fig:StM4SpM0_4TotalMFEW5}
\end{subfigure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpMPl0_4TotalMFEW3.imgtex}}
\caption{FVZW3, $u_s = 0.4$}
\label{fig:StM4SpMPl0_4TotalMFEW3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpMPl0_4TotalMFEW5.imgtex}}
\caption{FVZW5, $u_s = 0.4$}
\label{fig:StM4SpMPl0_4TotalMFEW5}
\end{subfigure}
\caption{Comparison of CEP in numerical solutions obtained using FVZW3, FVZW5 methods with different numerical flux functions, for $u_s = -0.4, -0.04$ and $0.4$ }
\label{fig:StM4TotalMFEW3W5Cmp}
\end{figure}
Figure \ref{fig:StM4SpM0_04PSOscCmp}
shows plots of density ($\rho$) vs $x$ at $t=1.26$, obtained using the FVLR1, FVZW3, and FVZW5 schemes with the Lax-Friedrichs and AUSM\textsuperscript{+}-up
fluxes, for $u_s = -0.04$. As the formal order of accuracy of the scheme increases, both the amplitude and the wave number of the oscillations increase.
A similar trend is observed for the Roe flux and Osher's P-Ordering flux. Figure \ref{fig:StM4TotalMFEW3W5Cmp} shows plots of CEP for different
values of $u_s$ and different numerical flux functions, for the FVZW3 and FVZW5 methods.
From the plots in figures \ref{fig:StM4SpM0_04CEP},
\ref{fig:StM4SpM0_4CEP}, and
\ref{fig:StM4TotalMFEW3W5Cmp},
it appears that no single numerical flux function consistently leads to the least error. For instance, in the case
of $u_s=0.4$, for the FVZW5 method, the Lax-Friedrichs flux function leads to an error (see figure \ref{fig:StM4SpMPl0_4TotalMFEW5}) less than that due to the
AUSM\textsuperscript{+}-up flux and the Roe flux with the Harten-Hyman 2 \cite[p. 266]{harten1983} entropy fix.
Additionally, for $u_s = 0.4$, the Roe and Osher's P-Ordering fluxes fail to produce numerical solutions when the FVZW3 or FVZW5 methods are used, as `NaN's are
produced in the course of the computations. Using the Harten-Hyman 2 \cite[p. 266]{harten1983} entropy fix for the Roe flux rectified the problem and a numerical solution
was obtained, but with an error higher than that of the Lax-Friedrichs and AUSM\textsuperscript{+}-up fluxes (see Figures \ref{fig:StM4SpMPl0_4TotalMFEW3},
\ref{fig:StM4SpMPl0_4TotalMFEW5}).
The problem of encountering `NaN's when using the Roe and Osher's P-Ordering fluxes can be tackled by performing a characteristic-wise reconstruction.
\subsection{The importance of characteristic decomposition}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpMPl0_4TotalMFELCDW3.imgtex}}
\caption{LCDZW3}
\label{fig:StM4SpMPl0_4TotalMFELCDW3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{images/StM4SpMPl0_4TotalMFELCDW5.imgtex}}
\caption{LCDZW5}
\label{fig:StM4SpMPl0_4TotalMFELCDW5}
\end{subfigure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{StM4SpMPl0_4TotalMFEW3DG.imgtex}}
\caption{DG - P2, $u_s = 0.4$}
\label{fig:StM4SpMPl0_4TotalMFEW3DG}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.65}{\input{StM4SpMPl0_4TotalMFEW5DG.imgtex}}
\caption{DG - P4, $u_s = 0.4$}
\label{fig:StM4SpMPl0_4TotalMFEW5DG}
\end{subfigure}
\caption{Comparison of CEP in numerical solutions obtained using LCDZW3, LCDZW5, DG $P^2$, DG $P^4$ and different numerical flux functions, for $u_s = 0.4$.
}
\label{fig:StM4SpMPl0_4TotalMFELCDW}
\end{figure}
As mentioned before, performing a characteristic-wise reconstruction \cite{qiu2002, peng2019, zhang2011} produces better and less oscillatory results than
component-wise reconstruction. An extreme example illustrating this point is the problem discussed above, for $u_s = 0.4$. Performing a component-wise ZWENO3 or
ZWENO5 reconstruction and using the Roe or Osher's P-Ordering flux leads to `NaN's in the computations. This can be rectified by performing a characteristic-wise
reconstruction. The characteristic-wise reconstruction also reduces the error or `CEP', as can be seen in figures \ref{fig:StM4SpMPl0_4TotalMFEW3},
\ref{fig:StM4SpMPl0_4TotalMFEW5} and \ref{fig:StM4SpMPl0_4TotalMFELCDW}. Errors similar to those of FDLCDZW are obtained using DG with the simple WENO limiter (with
characteristic-wise limiting), as shown in figures \ref{fig:StM4SpMPl0_4TotalMFEW3DG} and \ref{fig:StM4SpMPl0_4TotalMFEW5DG}.
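To make the idea concrete, a characteristic-wise reconstruction projects the stencil of conserved variables onto the characteristic fields of a local linearisation, reconstructs each field separately, and projects back. A minimal sketch for the one-dimensional Euler equations follows (using the right eigenvector matrix of the flux Jacobian at a chosen average state and its numerical inverse; the choice of average state is an assumption of this sketch — in practice a Roe average is typically used):

```python
import numpy as np

GAMMA = 1.4

def euler_eigenvectors(rho, u, p):
    """Right eigenvector matrix R of the 1D Euler flux Jacobian;
    columns correspond to the waves u - a, u, u + a."""
    a = np.sqrt(GAMMA * p / rho)
    H = a**2 / (GAMMA - 1.0) + 0.5 * u**2   # total enthalpy
    return np.array([
        [1.0,       1.0,        1.0],
        [u - a,     u,          u + a],
        [H - u * a, 0.5 * u**2, H + u * a],
    ])

def to_characteristic(Q_stencil, R):
    """Project rows of conserved-variable states onto characteristic fields."""
    L = np.linalg.inv(R)     # left eigenvectors (rows of L)
    return Q_stencil @ L.T   # each row Q_i becomes W_i = L @ Q_i

def from_characteristic(W, R):
    """Transform reconstructed characteristic values back to conserved form."""
    return W @ R.T
```

The component-wise variant simply skips the two projections, which is cheaper but, as seen above, can fail near strong shocks.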
In summary, the relative performance of the numerical flux functions appears problem dependent, and when using high order reconstruction, performing a characteristic-wise
reconstruction is critical to obtain results at all. As the formal order of accuracy increases, results produced using the Lax-Friedrichs flux seem to become better than those obtained using high resolution fluxes, as is
evident from figure \ref{fig:StM4SpMPl0_4TotalMFELCDW}. Next, we study the mass conservation error in steady state
numerical solutions having shocks.
\section{Mass conservation error in numerical solutions with stationary shocks}\label{Sec:massFluxErr}
We solve the Euler equations (\ref{eq:oneDimEulerEqn}) in $0 \leq x \leq 1$ with the initial conditions given in equation (\ref{eq:oneDNormShk}), with $u_s = 0$.
We obtain the numerical solutions using TVD-RK3 time discretisation with the FDLCD and DG methods. Evaluating the spatial derivative
($\partial E/\partial x$) using the Roe flux leads to a zero value for the spatial derivative, whereas the Lax-Friedrichs flux leads to a non-zero value.
This also leads to an error in
the mass flux ($\rho u$) near the shock. Across a normal shock, $\rho u$ should be constant, whereas in the numerical solution obtained using Lax-Friedrichs splitting
$\rho u$ is not constant. We define the percentage error of a conserved variable ($q$) at grid point $i$ and time $t_n$ as
\begin{equation}
\label{eq:percentageError}
PE(q_i^n) = \frac{|q(0, 0) - q_i^n|}{q(0, 0)} \times 100.
\end{equation}
Here $q_i^n$ is the value of the conserved variable $q$ at grid point $x_i$ at time $t_n$, and $q(0, 0)$ is the value of $q(x,t)$ at $x=0$, $t=0$. For a steady state solution of the one-dimensional Euler equations, $q$ can be one of $\rho u$, $(\rho u^2 + p)$ and
$(\rho e_t + p)u$, as these quantities are constant in space, whereas in the numerical
solution they
are not.
Figure \ref{fig:massErrorDiffFOA} shows plots of $PE(\rho u_i^n)$ vs $x_i$ at $t_n=100.0$ (mass flux error percentages) for schemes with different formal orders of accuracy.
For both the FD and DG methods, the first order schemes produce the maximum mass flux error, spread across a larger length of the domain when compared to the third and fifth
order schemes. Between the FDLCDZW and DG methods, the spread of the mass flux error is larger for the FDLCDZW schemes than for the DG schemes, but the
maximum mass flux error is slightly lower for the FDLCDZW schemes (as is evident from table \ref{tab:maxMassFluxErrors}).
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{images/massFluxErr/mach2MassErrWENODiffFOA.imgtex}}
\caption{FDLCDZW5,3 and FDLR1 with LF splitting}
\label{fig:massFluxE}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{mach2MassErrDGDiffFOA.imgtex}}
\caption{DG - $P^4, P^2, P^0$ with LF splitting}
\label{fig:massFluxErrM2DGP4}
\end{subfigure}
\caption{Mass flux error percentages across a Mach 2 shock at $t=100.0$, for schemes with different formal orders of accuracy.}
\label{fig:massErrorDiffFOA}
\end{figure}
\begin{table}[!htbp]
\begin{center}
\caption{Maximum mass flux errors across different shocks at $t=100.0$, for different schemes}
\label{tab:maxMassFluxErrors}
\begin{tabular}{|>{\centering\arraybackslash}m{0.08\textwidth}|>{\centering\arraybackslash}m{0.12\textwidth}|>{\centering\arraybackslash}m{0.15\textwidth}|>{\centering\arraybackslash}m{0.15\textwidth}|>{\centering\arraybackslash}m{0.08\textwidth}|>{\centering\arraybackslash}m{0.08\textwidth}|>{\centering\arraybackslash}m{0.08\textwidth}|}
\hline
\multirow{3}{5em}{Mach Number} & \multicolumn{6}{c|}{Maximum mass flux error percentage}\\\cline{2-7}
&\multirow{2}{5em}{FDLR1-LF} & \multicolumn{2}{c|}{FDLCDZW-LF with FOA} & \multicolumn{3}{c|}{DG-LF}\\\cline{3-7}
& & 3 & 5 & $P^0$ & $P^2$ & $P^4$ \\ \hline
2.0 & 14.2 & 10.8 & 9.9 & 14.1 & 12.5 & 12.5\\ \hline
2.4 & 20.0 & 15.3 & 14.1 & 19.8 & 16.2 & 16.2 \\ \hline
2.8 & 24.7 & 18.9 & 17.8 & 24.2 & 19.4 & 19.3 \\ \hline
3.0 & 26.6 & 20.5 & 19.4 & 26.0 & 21.3 & 21.3 \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Quasi-One-dimensional Euler equations}\label{Sec:quasiOneD}
The quasi-one-dimensional Euler equations are given by:
\begin{equation}
\label{eq:quasiOneDimEulerEqn}
\frac{\partial}{\partial t}Q(x, t) + \frac{\partial}{\partial x}E(x, t) = -\frac{A^{'}(x)}{A(x)}S,
\end{equation}
where $Q = [\rho, \rho u , \rho e_t]^T$, $E(x, t) = [\rho u, \rho u^2 + p, (\rho e_t + p)u]^T$, $S = [\rho u, \rho u^2, (\rho e_t + p)u]^T$, $A(x)$ is the area of
cross-section, and $\gamma = 1.4$.
We solve the system of equations (\ref{eq:quasiOneDimEulerEqn}) in the domain $0 \leq x \leq 1$, for $A(0) = 1.0$, $A^{'}(x) = 1.0$. $x=0$ is a supersonic inflow with
$(\rho, u, p) = (\gamma, M, 1.0)$, and $x=1$ is a subsonic outflow with a back pressure $p_{back}$. The back pressure $p_{back}$ is set such that there is a shock
at $x=0.5$. The initial conditions correspond to $(\rho, u, p) = (\gamma, 0, 1)$. We obtain numerical solutions for $M=2, 2.4, 2.8, 3.0$ using the FDLCDZW5-LF and DG-$P^4$-LF
methods. For the
quasi-one-dimensional Euler equations, the quantity $A\rho u$ is conserved. However, in the numerical solution it is not, due to the use of
flux splitting or a numerical flux function. For the numerical solution of the one-dimensional Euler equations, using the Roe flux does not lead to any mass flux error, but
it does for the quasi-one-dimensional Euler equations.
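In the semi-discretisation, the geometric source term $-\frac{A'(x)}{A(x)}S$ is evaluated pointwise from the local state; a small sketch (assuming the linear area variation $A(x) = 1 + x$ implied by $A(0)=1.0$, $A'(x)=1.0$):

```python
GAMMA = 1.4

def quasi_1d_source(x, rho, u, p):
    """Geometric source term -(A'(x)/A(x)) * S, assuming A(x) = 1 + x."""
    A, dA = 1.0 + x, 1.0
    e_t = p / (rho * (GAMMA - 1.0)) + 0.5 * u**2
    S = (rho * u, rho * u**2, (rho * e_t + p) * u)
    return tuple(-(dA / A) * s for s in S)
```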
Figure \ref{fig:quasi1dM2MassFluxErr} has plots of mass flux error for different schemes. Table \ref{tab:quasi1dmaxMassFluxErrors} has the maximum mass flux errors.
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{images/massFluxErr/quasi1dM2ZWenoMassFluxErr.imgtex}}
\caption{FDLCDZW5}
\label{fiig:quasi1dM2ZWenoMassFluxErr}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{quasi1dM2DGMassFluxErr.imgtex}}
\caption{DG - $P^4$}
\label{fig:quasi1dM2DGP4MassFluxErr}
\end{subfigure}
\caption{PE($A(x)\rho u_i$) vs $x$, at $t=100.0$, for a quasi-one-dimensional flow with a shock at $x=0.5$ and an inflow Mach number of $2.0$}
\label{fig:quasi1dM2MassFluxErr}
\end{figure}
\begin{table}[!htbp]
\begin{center}
\caption{Maximum mass flux errors at $t=100.0$, for different inflow Mach numbers and different schemes for the quasi-one-dimensional Euler equations, with a shock at $x=0.5$}
\label{tab:quasi1dmaxMassFluxErrors}
\begin{tabular}{|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|}
\hline
\multirow{3}{5em}{Mach Number} & \multicolumn{4}{c|}{Maximum mass flux error percentage}\\\cline{2-5}
&\multicolumn{2}{c|}{FDZW5} &\multicolumn{2}{c|}{DG $P^{4}$}\\\cline{2-5}
& LCD LF& ROE & LF & ROE \\ \hline
2.0 & 12.6 & 16.1 & 16.3 & 16.1\\ \hline
2.4 & 15.0 & 20.8 & 20.4 & 20.2 \\ \hline
2.8 & 17.3 & 24.5 & 25.2 & 24.5 \\ \hline
3.0 & 18.3 & 26.1 & 27.3 & 26.8 \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Two-dimensional Euler equations}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.65\textwidth]{images/135DegOblShkGeo.pdf}
\caption{$135^\circ$ oblique shock with periodic (along $135^\circ$ lines) boundary conditions.}
\label{fig:135DegOblShkGeo}
\end{center}
\end{figure}
Next, we solve the two-dimensional Euler equations (\ref{eq:Euler2dDiff}) using FDLCDZW5, FDZW5 and DG-$P^4$ with the Lax-Friedrichs and Roe fluxes.
The computational domain
and the corresponding boundary conditions are shown in figure \ref{fig:135DegOblShkGeo}. The portion ABCGH of the domain is initialised with the `pre-shock conditions',
which are $(\rho, u, v, p) = (\gamma, M, 0, 1.0)$. The portion CDEFG of the domain is initialised with the post-shock conditions.
Let $v_n, v_t$ be the components of velocity normal and parallel to the shock, respectively (see figure \ref{fig:135DegOblShkGeo}). Across a stationary oblique shock,
$\rho v_n$ should be constant, but in numerical solutions obtained using the Lax-Friedrichs and Roe fluxes it is not.
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{images/massFluxErr/135M2OblShkU1ZWeno5MassFluxErr.imgtex}}
\caption{ZW5 with LF, LCD LF and ROE}
\label{fiig:135M2OblShkU1ZWeno5MassFluxErr}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{135M2OblShkDGP4MassFluxErr.imgtex}}
\caption{DG - $P^4$ with Lax-Freidrichs flux}
\label{fig:135M2OblShkDGP4LFMassFluxErr}
\end{subfigure}
\caption{Mass flux error percentage vs $\xi$ across a Mach 2.0 oblique shock, for different schemes and numerical fluxes.}
\label{fig:135M2OblShkLF}
\end{figure}
\begin{table}[!htbp]
\begin{center}
\caption{Maximum mass flux errors across different $135^\circ$ oblique shocks for different schemes}
\label{tab:2dOblShkmaxMassFluxErrors}
\begin{tabular}{|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|}
\hline
\multirow{3}{5em}{Mach Number} & \multicolumn{5}{c|}{Maximum mass flux error percentage}\\\cline{2-6}
& \multicolumn{3}{c|}{FDZW5} & \multicolumn{2}{c|}{DG $P^{4}$}\\\cline{2-6}
& LF & LCD LF & ROE & LF & ROE \\ \hline
2.0 & 3.5 & 3.1 & 2.9 & 2.3 & 2.03 \\ \hline
2.4 & 7.7 & 5.0 & 5.8 & 5.2 & 5.44 \\ \hline
2.8 & 7.0 & 8.3 & 3.4 & 6.85 & 5.92 \\ \hline
3.0 & 6.1 & 7.6 & 5.7 & 7.4 & 6.1 \\ \hline
\end{tabular}
\end{center}
\end{table}
Taking point A as $(0,0)$ (see figure \ref{fig:135DegOblShkGeo}), we plot the mass flux error vs a variable $\xi$ measured along the line given by $y=x-0.5$,
starting at the point $(0.5, 0)$, corresponding to $\xi=0$, and ending at $(1.5, 1)$, corresponding to $\xi=1$.
Figure \ref{fig:135M2OblShkLF} shows plots of the mass flux error percentage vs $\xi$. Table \ref{tab:2dOblShkmaxMassFluxErrors} lists the maximum mass flux error percentages
along the same line ($y=x-0.5$) for different schemes and different Mach numbers.
\subsection{Cause of the mass flux error}
The cause of the mass flux error appears to be the flux splitting and the approximate Riemann solver used for the FDZW and DG schemes, respectively. When a shock is captured
with dissipation, dissipation is also introduced in the mass conservation equation (as suggested by Jin et al.~\cite{jin1996}), and this leads to the mass flux error.
This mass flux error, as shown previously, is high near the shock and propagates into the region downstream of the shock.
If the flux splitting or the approximate Riemann solver used captures the shock without any dissipation, the mass flux error is absent. This is apparent in
the case of using the Roe flux for the normal shock problem for the one-dimensional Euler equations.
\subsubsection{Alignment of shock to cell faces and Rotated Riemann flux}
For the two-dimensional Euler equations, the Roe flux also causes a mass flux error because the 135\textdegree{} shock is not aligned with either the $x$ or the $y$
direction. To rectify this in the case of the DG method, a mesh made of right-angled triangles can be chosen such that the oblique shock is aligned with the hypotenuse faces
of the triangles. Since the shock is aligned with the faces of the triangles, the Roe flux is applied along a direction normal to the shock and, therefore,
the shock is sustained without any dissipation or mass flux error.
In the case of the FDZW schemes, a similar solution of applying the Roe flux in a direction normal to the shock can be used.
Let $Q_l$ and $Q_r$ be the left and right biased WENO interpolations at $(x_{i+\frac{1}{2}}, y_j)$. A rotated Riemann based flux splitting can be used with the
shock orientation being given by
\begin{multline}
\label{eq:rotationCosineSine}
\big(\cos(\theta), \sin(\theta)\big) = \begin{cases}
\bigg( \frac{|u_l - u_r|}{|\vec{V}_{diff}|}, \frac{|v_l - v_r|}{|\vec{V}_{diff}|} \bigg), &\text{ if }|\vec{V}_{diff}|>10^{-2} \\
(1, 0), &\text{ otherwise}.\\
\end{cases},\\
\text{where }\vec{V}_{diff} = ( |u_l - u_r|, |v_l - v_r| ),
\end{multline}
where $\theta$ is the angle made by the shock with the $y$-axis.
This ensures that the oblique shock is captured without mass flux error or dissipation.
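The orientation selection in equation (\ref{eq:rotationCosineSine}) can be sketched concretely as follows. This is a minimal illustration, not the paper's implementation: the function name and interface are our own, while the velocity-jump construction and the $10^{-2}$ threshold follow the equation above.

```python
import math

def rotation_direction(u_l, u_r, v_l, v_r, threshold=1e-2):
    """Return (cos(theta), sin(theta)) for a rotated-Riemann flux splitting:
    the shock normal is estimated from the absolute jump in the velocity
    components across the interface, with a fallback to the x direction
    when the jump is too small to define an orientation reliably."""
    du, dv = abs(u_l - u_r), abs(v_l - v_r)
    mag = math.hypot(du, dv)          # |V_diff|
    if mag > threshold:
        return du / mag, dv / mag     # unit vector along the velocity jump
    return 1.0, 0.0                   # default: flux applied along x
```

For a jump with $|u_l - u_r| = |v_l - v_r|$, as across the 135\textdegree{} shock, the sketch returns $(\cos\theta,\sin\theta) = (1/\sqrt{2},\,1/\sqrt{2})$, i.e.\ $\theta = 45^\circ$ from the $y$-axis, as expected.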
Capturing curved or oblique shocks or shock reflections without dissipation is a much more challenging problem, since aligning the mesh faces with the
shock becomes an issue for the DG method. For the FDZW method, the analogous requirement that the shock pass through a grid point makes it equally challenging.
Additionally, for the FDZW method, using the rotated-Riemann-based flux splitting can be problematic because the procedure
given for calculating $\cos(\theta)$ and $\sin(\theta)$ in equation (\ref{eq:rotationCosineSine}) is known to lead to convergence stalling \cite{levy1993}.
Next, we examine how the introduction of viscous fluxes changes the mass flux error, using the one-dimensional compressible viscous
fluid flow equations
(a Newtonian fluid under Stokes' hypothesis, with the viscosity and heat conduction coefficients modelled using the Sutherland formulae).
\subsection{One-dimensional Compressible Viscous fluid flow equations (Navier-Stokes)}\label{Sec:compViscFluFlow}
The non-dimensional (scaled) viscous fluid flow equations for a Newtonian fluid in one space dimension are given by
\begin{align}
\label{eq:viscFluidFlowEqns}
\frac{\partial}{\partial t^*} & Q^*(x^*, t^*) + \frac{\partial}{\partial x^*} \bigg( E^*(x^*, t^*) - E^*_v(x^*, t^*) \bigg) = 0,\text{ where,}\\
\label{eq:viscFluidFlowEqnsTerms1}
Q^* = &\begin{bmatrix} \rho^* \\ \rho^* u^* \\ \rho^* e^*_t \end{bmatrix},
E^* = \begin{bmatrix} \rho^* u^* \\ \rho^* u^{*2} + p^* \\ (\rho^* e^*_t + p^*)u^* \end{bmatrix},
e^{*}_t = \frac{p^{*}}{\rho^{*}(\gamma -1)} + \frac{1}{2}u^{*2},\\
\label{eq:viscFluidFlowEqnsTerms2}
E^*_v = &\bigg[ 0~~~~ \left(\frac{\partial u^*}{\partial x^*}\frac{\lambda+2\mu}{\rho_0 U_0 L}\right)~~~~
\left(u^*\frac{\partial u^*}{\partial x^*}\frac{\lambda+2\mu}{\rho_0 U_0 L} + \frac{\kappa}{R\rho_0U_0L}\frac{\partial T^*}{\partial x^*}\right) \bigg]^T
\end{align}
The scaling used is:
\begin{multline}
\label{eq:viscFluidFlowEqnsTerms3}
p = p^*\rho_0~ U_0^2,~ e_t = e_t^* U_0^2, x = x^* L,~ u = u^* U_0,~ t = t^*\frac{L}{U_0},~ \rho = \rho^*~ \rho_0,
a = a^* U_0,\\\text{ and } T = T^*\frac{U_0^2}{R},
\end{multline}
where $R = 287.4~\mathrm{J/(kg\,K)}$ and $\gamma = 1.4$. Stokes' hypothesis, $3\lambda + 2\mu = 0$, is assumed. The Sutherland model is used for the coefficients of
viscosity and heat conduction,
given by
\begin{multline}
\label{eq:sutherlandViscModel}
\mu = C_1\frac{T^{\frac{3}{2}}}{T+C_2},~\kappa = C_3\frac{T^{\frac{3}{2}}}{T+C_4},\text{ where }C_1 = 1.458\times10^{-6}~\mathrm{\frac{kg}{m\,s\,\sqrt{K}}},~C_2 = 110.4~\mathrm{K},\\
C_3 = 2.495\times10^{-3}~\mathrm{\frac{kg\,m}{s^3 K^{\frac{3}{2}}}},\text{ and }C_4 = 194~\mathrm{K}.
\end{multline}
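The Sutherland coefficients in equation (\ref{eq:sutherlandViscModel}) are easy to evaluate directly; the following sketch (our own helper names, using the constants quoted above) confirms the familiar value of the viscosity of air near $0^\circ$C.

```python
# Sutherland-law constants as quoted in eq. (sutherlandViscModel)
C1, C2 = 1.458e-6, 110.4   # viscosity: kg/(m s sqrt(K)) and K
C3, C4 = 2.495e-3, 194.0   # heat conduction: kg m / (s^3 K^{3/2}) and K

def sutherland_mu(T):
    """Dynamic viscosity mu(T) from the Sutherland formula (T in kelvin)."""
    return C1 * T**1.5 / (T + C2)

def sutherland_kappa(T):
    """Heat-conduction coefficient kappa(T) from the Sutherland formula."""
    return C3 * T**1.5 / (T + C4)
```

At $T = 273.15~\mathrm{K}$ this gives $\mu \approx 1.72\times10^{-5}~\mathrm{kg/(m\,s)}$, and both coefficients increase monotonically with temperature, as the formulae require.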
The values of the scaling parameters are $\rho_0 = 1.204~\mathrm{kg/m^3}$ and $U_0 = 343.249~\mathrm{m/s}$.
The system of equations (\ref{eq:viscFluidFlowEqns}) is solved in the domain $0 \leq x^{*} \leq 1$ with initial conditions
\begin{equation}
\label{eq:oneDNormShkVisc}
Q^*(x^*, 0) = \begin{cases}
Q^*_{BS} & x^*<0.5 \\
Q^*_{AS} & x^*\geq 0.5\\
\end{cases},
\end{equation}
where
\begin{equation}
\label{eq:1dNormShkViscInitConds}
\begin{bmatrix}\rho^*_{BS} \\ u^*_{BS} \\ p^*_{BS}\end{bmatrix} = \begin{bmatrix}\gamma \\ M \\ 1.0\end{bmatrix},
\begin{bmatrix}\rho^*_{AS} \\ u^*_{AS} \\ p^*_{AS}\end{bmatrix} =
\begin{bmatrix}\frac{(\gamma+1)M^2 \rho^*_{BS}}{(\gamma-1)M^2 + 2}
\\ \frac{\rho^*_{BS}u^*_{BS}}{\rho^*_{AS}} \\
\frac{p^*_{BS}(2\gamma M^2 - (\gamma -1))}{\gamma+1}
\end{bmatrix},
\end{equation}
with supersonic inflow conditions at $x^* = 0.0$ and subsonic outflow conditions with a back pressure of $p^*_{AS}$ at $x^* = 1.0$. We chose a mesh with a GPS ($= \Delta x^*$)
of $1/100$ and use the numerical methods described in section \ref{Sec:NumMethod} to obtain the numerical solutions. As mentioned in section \ref{Sec:extnToViscEqns},
we use FDLR5 (equation \ref{eq:linHiOrd}, $\hat{A} = 0$) to calculate the viscous fluxes and their derivatives for the FD schemes. To calculate the inviscid fluxes and their
derivatives,
we use FDZW5-LF, FDLCDZW5-LF, or FDZW5-ROE. We also obtain numerical solutions using the LDG method.
We obtain numerical solutions for inflow conditions corresponding to $M = 2.0, 2.4, 2.8, 3.0$ and for values of the parameter $L$ equal to $1.0$, $10^{-4}$, and $10^{-6}$.
For the viscous fluid flow equations, using the Roe flux also leads to a mass flux error. Table \ref{tab:maxMassFluxErrorsVisc} lists the maximum mass flux error percentages for
the different schemes for $L=1$ and $10^{-4}$. The maximum mass flux errors for $L=10^{-6}$ using FDZW5-LF, FDLCDZW5-LF, FDZW5-ROE, and LDG-P4 with the Lax-Friedrichs or Roe
flux are of the order of $10^{-3}$.
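The pre- and post-shock states in equation (\ref{eq:1dNormShkViscInitConds}) are the standard normal-shock (Rankine--Hugoniot) relations in the scaling above. A minimal sketch (the function name is our own) evaluates them for a given inflow Mach number and checks mass-flux consistency across the jump.

```python
def normal_shock_states(M, gamma=1.4):
    """Return (rho, u, p) before (BS) and after (AS) a normal shock in the
    scaling of eq. (1dNormShkViscInitConds): rho_BS = gamma, u_BS = M,
    p_BS = 1, with the post-shock state from the Rankine-Hugoniot jumps."""
    rho_bs, u_bs, p_bs = gamma, M, 1.0
    rho_as = (gamma + 1.0) * M**2 * rho_bs / ((gamma - 1.0) * M**2 + 2.0)
    u_as = rho_bs * u_bs / rho_as                        # mass conservation
    p_as = p_bs * (2.0 * gamma * M**2 - (gamma - 1.0)) / (gamma + 1.0)
    return (rho_bs, u_bs, p_bs), (rho_as, u_as, p_as)
```

For $M = 2$ this gives a density ratio of $8/3$ and $p^*_{AS} = 4.5$; the exact post-shock mass flux $\rho^*_{AS} u^*_{AS}$ equals the pre-shock value by construction, which is precisely the invariant whose numerical violation is tabulated in Table \ref{tab:maxMassFluxErrorsVisc}.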
\begin{table}[!htbp]
\begin{center}
\caption{Maximum mass flux errors across different shocks at $t=100.0$, for different schemes}
\label{tab:maxMassFluxErrorsVisc}
\begin{tabular}{|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|>{\centering\arraybackslash}m{0.1\textwidth}|}
\hline
\multirow{3}{5em}{Mach Number} & \multicolumn{5}{c|}{Maximum mass flux error percentage for $L=1$}\\\cline{2-6}
&\multicolumn{3}{c|}{FDZW5} &\multicolumn{2}{c|}{LDG-$P^4$}\\\cline{2-6}
& LF & LCD LF & ROE & LF & ROE\\ \hline
2.0 & 13.4 & 9.7 & 2.2 & 9.9 & 2.03\\ \hline
2.4 & 18.6 & 13.8 & 9.1 & 14.1 & 8.7 \\ \hline
2.8 & 22.7 & 17.1 & 16.0 & 17.5 & 15.8 \\ \hline
3.0 & 24.4 & 18.5 & 19.1 & 18.2 & 19.3 \\ \hline
\multicolumn{6}{|c|}{Maximum mass flux error percentage for $L=10^{-4}$}\\\cline{1-6}
2.0 & 13.2 & 9.2 & 10.4 & 9.7 & 10.3\\ \hline
2.4 & 18.6 & 13.12 & 14.4 & 13.4 & 13.9 \\ \hline
2.8 & 21.9 & 16.4 & 23.2 & 16.7 & 16.2 \\ \hline
3.0 & 23.6 & 17.8 & 24.4 & 18.3 & 18.1 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{images/massFluxErr/Mach2ViscROEMassFluxErr.imgtex}}
\caption{FDZW5-ROE and FDLR5-C methods}
\label{fig:massFluxErrM2ZW5ROEVisc}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{Mach2ViscROEMassFluxErrDG.imgtex}}
\caption{LDG - $P^4$ with ROE flux}
\label{fig:massFluxErrM2DGP4ROEVisc}
\end{subfigure}
\caption{Mass flux error percentage vs $x^*$ at $t=100.0$ across a Mach 2.0 shock using the Roe flux for the WENO and DG schemes, for different values of $L$.}
\label{fig:massFluxErrM2Visc}
\end{figure}
For $L=10^{-6}$ and $\Delta x^* = 1/100$, the shock is sufficiently resolved, and it is therefore not necessary to use WENO reconstruction or a limiter for the LDG method.
The upwind biasing of the inviscid fluxes is also unnecessary, so one can calculate the inviscid flux derivatives with $\hat{A}=0$ (see equation
\ref{eq:FSplitting}). Therefore, we use the FDLR5-C and DG $P^4$-C methods. Figure \ref{fig:massFluxErrM2Visc} shows plots of mass
flux error percentage vs $x^*$ for the different schemes.
Clearly, the mass flux error is smallest for the numerical solutions obtained using the FDLR5-C or DG $P^{4}$-C linear schemes.
\section{Refinement near the shock using overset meshes}\label{Sec:refinementAndOverset}
Of course, using a fine mesh corresponding to $\Delta x^{*} = 10^{-8}$ may not be possible for problems of general interest. One solution in such cases is to reduce the
mass conservation error by refining the mesh
near the shock using a series of overset meshes. We demonstrate this by applying it to numerical solutions of the quasi-one-dimensional Euler
equations and the two-dimensional Euler equations.
\subsection{Quasi-One-dimensional Euler Equations}\label{Sec:quasiOneDOverset}
We solve the same problem described in section \ref{Sec:quasiOneD} for $M=3.0$ using the FDLCDZW5-LF and DG $P^4$-LF methods with TVD-RK3 time stepping and three mesh
configurations. The first configuration (Config1) is a mesh in the domain $0 \leq x \leq 1$ with a GPS ($\Delta x$) of $1/200$. The second configuration (Config2) adds a mesh in
the domain $0.48 \leq x \leq 0.525$, with a GPS of $1/2200$, overset on Config1. The third configuration (Config3) adds a mesh in the domain
$0.499\overline{54} \leq x \leq 0.50\overline{36}$, with a GPS of $1/24200$, overset on Config2. A solution is first obtained using Config1. The initial
conditions for Config2 are the
numerical solution obtained using Config1, and those for Config3 are the numerical solution obtained using Config2 (see section \ref{Sec:OversetDesc} for more details).
As mentioned in section \ref{Sec:quasiOneD}, $\rho A u$ should be constant along the domain, but it is not in the numerical solution. Figures \ref{fig:PEADUVsx3Configs}
and \ref{fig:PEADUVsx3ConfigsDG} show plots of PE($A\rho u(x_i)$) vs $x_i$ for the three mesh configurations obtained using the FDLCDZW5-LF and DG $P^4$-LF methods.
Figures \ref{fig:PVsX3Configs} and \ref{fig:PVsX3ConfigsDG} show the corresponding plots of pressure vs $x$.
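The diagnostic PE($A\rho u(x_i)$) plotted in the following figures is the percentage deviation of the discrete mass flux from its constant exact value. A minimal sketch of this diagnostic (assuming, for illustration, that the inflow value is taken as the reference; the sample array below is hypothetical):

```python
def percentage_error(q, q_ref):
    """Percentage error of each sample in q relative to the reference value
    q_ref (here, the inflow mass flux A*rho*u, which should be constant
    along the duct for the quasi-one-dimensional Euler equations)."""
    return [100.0 * abs(qi - q_ref) / abs(q_ref) for qi in q]

# Hypothetical discrete mass-flux samples: exact value 1.0, with a small
# post-shock drift of the kind seen in the PE plots.
mass_flux = [1.0, 1.0, 0.999, 1.002]
pe = percentage_error(mass_flux, mass_flux[0])
```

A post-shock error of $10^{-1}$ percent in this measure, as on Config1, corresponds to a mass-flux defect of one part in a thousand downstream of the shock.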
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{images/overSetWENO/PEADUVsx3Configs.imgtex}}
\caption{PE($A\rho u(x_i)$) vs $x_i$}
\label{fig:PEADUVsx3Configs}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{images/overSetWENO/PVsX3Configsi.imgtex}}
\caption{pressure vs $x$ in the three meshes obtained using Config3 }
\label{fig:PVsX3Configs}
\end{subfigure}
\caption{Plots of PE($A\rho u(x_i)$) vs $x_i$ and pressure vs $x$ for meshes Config1, Config2, Config3, obtained using FDLCDZW5-LF. }
\label{fig:oversetWENOPlots}
\end{figure}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{PEADUVsx3ConfigsDG.imgtex}}
\caption{PE($A\rho u(x_i)$) vs $x_i$}
\label{fig:PEADUVsx3ConfigsDG}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\scalebox{.6}{\input{PVsX3ConfigsiDG.imgtex}}
\caption{pressure vs $x$ in the three meshes obtained using Config3 }
\label{fig:PVsX3ConfigsDG}
\end{subfigure}
\caption{Plots of PE($A\rho u(x_i)$) vs $x_i$ and pressure vs $x$ for meshes Config1, Config2, Config3, obtained using DG $P^4$-LF. }
\label{fig:oversetDGPlots}
\end{figure}
As can be seen in figures \ref{fig:oversetWENOPlots} and \ref{fig:oversetDGPlots}, in going from Config1 to Config3
the post-shock mass flux error percentage drops from approximately $10^{-1}$ to $10^{-3}$. The error that propagates into the region downstream of the shock is therefore
reduced by refining the mesh near the shock. The solutions obtained using the WENO and DG methods are also almost identical.
\subsection{Curing the carbuncle: Initial results}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{images/carbuncleStructuredMesh.png}
\caption{Structured mesh}
\label{fig:structuredMeshFlowOverCyl}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{images/Mach3CarbuncleStrMsh.png}
\caption{Colour plot of density showing Carbuncle}
\label{fig:densityPlotWithCarbuncle}
\end{subfigure}
\caption{Mach 3.0 flow over a circular cylinder}
\label{fig:mach3FlowOverCylStrMshCarb}
\end{figure}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{CircCylRoeFluxwCarbuncle.png}
\caption{Density Solution with Carbuncle, without refinement}
\label{fig:RoeFluxWCarbuncle}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{CircCylRoeFluxwoCarbuncle.png}
\caption{Density solution without carbuncle obtained using two levels of overset mesh}
\label{fig:RoeFluxWoCarbuncle}
\end{subfigure}
\caption{Density plots for flow over a circular cylinder using Roe Flux, obtained using DG $P^2$}
\label{fig:RoeFlux}
\end{figure}
It is well known that a carbuncle (as shown in Figure \ref{fig:densityPlotWithCarbuncle}) forms when the Roe flux and a structured mesh (as shown in Figure \ref{fig:structuredMeshFlowOverCyl}) are used to compute flow over a circular cylinder.
An often-proposed solution to this problem is to use dissipative
flux functions in the direction normal to the shock \cite{nishikawa2008}. The reasons for the formation of the carbuncle have been studied, and a possible connection between the numerical mass flux and carbuncle formation has been reported in the literature \cite{liou2000}.
One way of reducing the error in the
numerical mass flux, as demonstrated above, is to use multiple overset meshes. This kind of mesh refinement near the shock (whilst using the
Roe flux) appears to cure the carbuncle problem. We report here
preliminary results obtained using such a multiple overset mesh configuration (two levels for the present case, similar to the one described in section
\ref{Sec:quasiOneDOverset}) in the shock region for the problem of Mach 3.0
flow over a circular cylinder. We observe that the carbuncle disappears when two levels of overset mesh are used. The solutions with and without the carbuncle,
obtained using meshes without and with two levels of overset meshes respectively,
are shown in Figure \ref{fig:RoeFlux}. More details of this methodology will be given in a subsequent paper focused
on accurate shock capturing using high-order methods.
\begin{figure}[!htbp]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{images/carbuncleAvoidingMesh1.png}
\caption{Carbuncle Avoiding Structured mesh: Perturbing bottom row of cells}
\label{fig:perturbedMesh}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{images/Mach3NoCarbuncleStrMshXYChange1.png}
\caption{Density solution without carbuncle obtained using mesh similar to the one on the left}
\label{fig:perturbedMeshMach3Soln}
\end{subfigure}
\caption{Avoiding the carbuncle by perturbing the bottom row of cells}
\label{fig:perturbedMeshAndSoln}
\end{figure}
Interestingly, another way to cure or avoid the carbuncle is to perturb the mesh so that the faces of the two bottom-most rows of cells are not parallel to the $x$ and $y$ directions, as shown in Figure \ref{fig:perturbedMesh}. Using this mesh with the Roe flux does not produce
the carbuncle, as shown in Figure \ref{fig:perturbedMeshMach3Soln}.
In this mesh, the bottom, ``normal portion'' of the shock (the portion that is almost a normal shock), which forms and travels upstream, is not aligned with the cell faces, and hence even the Roe flux introduces dissipation, or mass
conservation error. In contrast, for a mesh similar to the one shown in Figure \ref{fig:structuredMeshFlowOverCyl}, the ``normal portion'' of the shock is almost aligned with the cell faces, so there is essentially no mass conservation error in this
region,
while near the ``oblique portion'' of the shock (the region excluding the ``normal portion'') there is considerable mass conservation error. These results demonstrate the link between the mass conservation error and the carbuncle:
essentially zero mass conservation error near the ``normal portion'' of the shock combined with considerable error near the ``oblique portion'' could be the reason for the
formation of the carbuncle. Accordingly, the carbuncle can be cured or avoided
either by reducing the mass conservation error near the ``oblique portion'' of the shock, using an overset mesh with two levels of refinement, or by introducing mass conservation error near the ``normal portion'' of the shock by skewing the mesh.
\section{Conclusion}\label{Sec:Conclusions}
For the moving shock problem, we compared the performance of different numerical fluxes used in combination with different numerical methods. For the first-order methods,
we showed that for certain problem parameters the Roe flux performs better than the Osher flux, in contrast to the problems generally reported and cited in the literature
\cite{roberts1990}. We underscored the importance of characteristic-wise reconstruction for high-order methods
by giving an example in which component-wise reconstruction instead leads to NaNs in the computation.
Using the normal shock test problems for the one-dimensional and quasi-one-dimensional Euler equations and the oblique shock test problem for the two-dimensional Euler equations,
we showed that the mass flux error arises from the use of dissipative flux splittings for conservative finite difference WENO schemes and of dissipative
flux functions (approximate Riemann solvers) for the discontinuous Galerkin method. We showed that the mass flux error varies with the Mach number
ahead of the shock and with the formal order of accuracy of the scheme. We showed that using the Roe flux also leads to significant mass conservation error when solving
the quasi-one-dimensional Euler equations, the one-dimensional viscous fluid flow equations, and the
two-dimensional Euler equations.
For the two-dimensional Euler equations, for the simple problem of the 135\textdegree{} oblique shock, we gave techniques to avoid the mass flux error:
choosing a mesh with the shock aligned to cell faces for the DG method, and using a flux splitting based on rotated Riemann solvers for the FDLCDZW scheme.
However, extending these techniques to more complex flows with curved shocks or shock reflections is not straightforward.
We showed that, without upwind biasing, a shock can be captured using a high-order linear reconstruction for the conservative finite difference scheme and a central
flux function for the DG method, provided a mesh of sufficient resolution is used.
We applied the technique of using multiple levels of overset meshes to resolve the flow near the shock for a quasi-one-dimensional flow problem,
and showed that such meshes mitigate the mass flux error.
We also showed the connection between the mass conservation error and the formation of the carbuncle, and presented preliminary results on two ways of curing it:
reducing the mass conservation error near the ``oblique portion'' of the shock by using an overset mesh with two levels of refinement, and
introducing mass conservation error near the ``normal portion'' of the shock by skewing the mesh there.
Finally, the mass conservation error, or post-shock oscillation error, appears essentially independent of whether the WENO or the DG method is used.
\section{Introduction}
\label{sec:intro}
Over the past 20 years the number of exoplanet detections has soared, most notably due to contributions from the \textit{Kepler} space telescope (Kepler herein). As of November 2016, Kepler has detected 3414 confirmed planets, with 575 existing in multi-planet systems (exoplanet.eu; \cite{2011A&A...532A..79S}). Planet multiplicity provides information on the underlying architecture of planetary systems, such as the expected orbital spacing, mutual inclinations and size distributions. For the multi-planet systems observed by Kepler, super-Earth/mini-Neptune type objects on tightly packed orbits inside of $\sim$200 days are common (\cite{2011ApJS..197....8L}; \cite{2014ApJ...784...44L}; \cite{2016ApJ...822...86M}). Moreover, such systems are observed to have small inclination dispersions of $\lesssim$5$^\circ$ (\cite{2011ApJS..197....8L}; \cite{2012ApJ...761...92F}; \cite{2012A&A...541A.139F}; \cite{2012AJ....143...94T}; \cite{2013A&A...551A..90M}; \cite{2014ApJ...790..146F}).
How representative Kepler multi-planet systems are of a common underlying planetary architecture however is impeded by Kepler preferentially detecting objects which orbit closest to the host star. To generalise Kepler systems to an underlying population, it is therefore necessary to account for the inherent probability that transiting systems are observed.
Taking into account such probabilities, there appears to be an over-abundance of planetary systems with a single transiting planet (\cite{2011ApJS..197....8L}; \cite{2011ApJ...742...38Y}; \cite{2012ApJ...758...39J}; \cite{2016ApJ...816...66B}). This is commonly referred to as the ``Kepler Dichotomy''.
It is currently not known what causes this excess. Statistical and \textit{Spitzer} confirmation studies all suggest that the false positive rate for single transiting objects with R$_\mathrm{p}$ $< $4R$_\oplus$ is low at $\lesssim$15\% (\cite{2011ApJ...738..170M}; \cite{2013ApJ...766...81F}; \cite{2014AJ....147..119C}; \cite{2015ApJ...804...59D}). Perhaps then, there are populations of inherently single planet systems in addition to multi-planet systems which are closely packed and have small inclination dispersions. However there may also be a population of multi-planet systems where the mutual inclination dispersion is large, such that only a single planet is observed to transit.
The presence of an outer planetary companion may drive this potential large spread in mutual inclinations. Recent N-body simulations show that the presence of a wide orbit planet in multi-planet systems can decrease the number of inner planets that are observed to transit, either through dynamical instability or inclination excitation (\cite{2016arXiv160908058M}; \cite{2017MNRAS.tmp..186H}). Beyond a few au, planetary transit probabilities drop to negligible values. It is possible therefore that additional wide orbit planets could indeed exist in multi-planet systems observed by Kepler. Giant planets at a few au have been detected around stars in the general stellar population by a number of radial velocity (RV) surveys (\cite{2013A&A...551A..90M}; \cite{2016ApJ...817..104R}; \cite{2016ApJ...819...28W}; \cite{2016ApJ...821...89B}), with suggested occurrence rates ranging from $\sim10-50\%$ (\cite{2008PASP..120..531C}; \cite{2011arXiv1109.2497M}; \cite{2016ApJ...821...89B}). Moreover, indirect evidence of undetected giant planets has also been suggested through apsidal alignment of inner RV detected planets (\cite{2014Sci...346..212D}). As RV studies are largely insensitive to planetary inclinations, it is possible that such wide orbit planets could be on mutually inclined orbits, which may arise from a warp in the disc (\cite{2010A&A...511A..77F}) or due to an excitation by a stellar flyby (\cite{2004AJ....128..869Z}; \cite{2011MNRAS.411..859M}).
Calculating transit probabilities of multi-planet systems is complex, often requiring computationally exhaustive numerical methods such as Monte Carlo techniques (e.g. \cite{2011ApJS..197....8L}; \cite{2012ApJ...758...39J}; \cite{2016MNRAS.455.2980B}; \cite{2016arXiv160908058M}; \cite{2017MNRAS.tmp..186H}). However analytical methods can offer a significantly more efficient route for this calculation and allows for coupling with other fundamental analytical theory, such as for the expected dynamical evolution of the system from inter-planet interactions. Despite this however, analytical investigations into the transit probabilities of multi-planet systems for this purpose are relatively sparse (e.g. \cite{2010arXiv1006.3727R}; \cite{2016ApJ...821...47B}). Recently \cite{2016ApJ...821...47B} showed how differential geometry techniques can be used to calculate multi-planet transit probabilities by mapping transits onto a celestial sphere. In this paper we perform a similar analysis, however we focus on regions where pairs of planets can be observed to transit. We also give an explicit analytical form using simple vector relations to describe the boundaries of such transit regions.
The multi-planet systems observed by Kepler appear to be mostly stable on long timescales (\cite{2011ApJS..197....8L}; \cite{2015ApJ...807...44P}). Dynamical interactions with a potential outer planet on an inclined orbit would therefore be expected to occur on secular timescales. Recent analytical work by \cite{2017AJ....153...42L} suggests that such interactions can lead to large mutual inclinations in an inner planetary system, assuming that the direction of the angular momentum vector of the outer planet is fixed. We build on this work by deriving analytical relations for the mutual inclination that can be induced in an inner planetary system by a general planetary companion. We then simplify this result specifically for when the companion is on a wide orbit. Combining this result with our robust analytical treatment of transit probabilities, we can then derive a simple relation describing how the presence of an outer planetary companion affects the transit probability of an inner system due to long term interactions.
We also complement recent N-body simulations of Kepler-like systems interacting with an inclined outer planetary companion shown in \cite{2016arXiv160908058M} and \cite{2017MNRAS.tmp..186H} by using our robust treatment of transit probabilities to consider whether an outer planet with a range of masses, semi-major axes and inclinations can reduce an underlying population of Kepler double transiting systems enough to recover the observed number of single transiting systems through long term interactions only. We also investigate whether the presence of specific wide orbit planets in multi-planet systems preferentially predicts single transiting planets with a given distribution of radii and semi-major axes.
In \S\ref{sec:semianal} we overview our semi-analytical method for calculating the transit probability of two mutually inclined planets. In \S\ref{sec:secint} we derive a simplified form to describe the evolution of the mutual inclination between two planets due to the presence of an outer planetary companion. We show how this mutual inclination affects the transit probability of the two inner planets in \S\ref{sec:combtrans}. In \S\ref{sec:realsys} we apply this work to Kepler-56, Kepler-68, HD 106315 and Kepler-48 to place constraints on the inclination of the outer planets in these systems. In \S\ref{sec:Kepdich} we investigate whether a wide orbit planet in Kepler systems can decrease the number of observed two planet transiting systems enough to recover the observed abundances of single transiting systems. We finally discuss this work in \S\ref{sec:discussion} and conclude in \S\ref{sec:conc}.
\section{Semi-analytical Transit Probability}
\label{sec:semianal}
A planet on a circular orbit with a semi-major axis $a$ and radius $R_\mathrm{p}$ subtends a band of shadow across the celestial sphere due to its orbital motion. We refer to this band of shadow as the \textit{transit region} (\cite{2010arXiv1006.3727R}; \cite{2016ApJ...821...47B}). The probability that an observer will view an individual transit event of this planet, assuming that the system is viewed for long enough, is equal to the number of viewing vectors that intersect the transit region, divided by the total number of possible viewing vectors. Perhaps more intuitively, this is equivalent to the surface area of the transit region divided by the total surface area of the celestial sphere.
To calculate the area of a transit region on the celestial sphere first consider that the area of a given surface element ($S$) on a unit sphere is equal to
\begin{equation}
S = \int^{\theta_0}_0\int^{\phi_0}_0 \sin\theta' d\theta' d\phi' = \left[1 - \cos\theta'\right]^{\theta_0}_0\left[\phi'\right]^{\phi_0}_0,
\label{eq:surfel}
\end{equation}
where $\theta'$ is the polar angle and $\phi'$ is the azimuthal angle. A given area on the celestial sphere can therefore be represented on a 2d plane of $1 - \cos\theta'$ vs. $\phi'$, which run from 0 $\rightarrow$ 2 and 0 $\rightarrow 2\pi$ respectively, such that the 2d plane has a total surface area of $4\pi$. Below we show how the boundaries of a given transit region traverse this 2d plane. This allows the area contained within these boundaries, and therefore the associated transit probability, to be calculated.
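The practical value of the $(1-\cos\theta',\,\phi')$ plane is that spherical areas become simple rectangles: the patch $\theta' \in [0, \pi/3]$, $\phi' \in [0, \pi/2]$, for example, has area $(1-\cos\frac{\pi}{3})\cdot\frac{\pi}{2} = \pi/4$. A short numerical sketch (our own check, using midpoint quadrature of eq.~\ref{eq:surfel}) confirms the closed form:

```python
import math

def patch_area(theta0, phi0, n=100_000):
    """Area of the spherical patch 0 <= theta' <= theta0, 0 <= phi' <= phi0
    on a unit sphere, via midpoint quadrature of sin(theta') dtheta' dphi'."""
    h = theta0 / n
    integral = sum(math.sin((k + 0.5) * h) for k in range(n)) * h
    return integral * phi0

# Closed form from the (1 - cos(theta'), phi') representation.
closed_form = (1.0 - math.cos(math.pi / 3)) * (math.pi / 2)  # = pi/4
```

The same quadrature over the full ranges $\theta' \in [0,\pi]$, $\phi' \in [0,2\pi]$ recovers the total sphere area $4\pi$, consistent with the 2d plane having total area $4\pi$.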
\subsection{Single Planet Case}
\begin{figure}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = \linewidth]{coordinates_3.png}
\caption{The coordinate system used to show how a transit region traverses the surface of a celestial sphere. The dashed line represents an orbital plane inclined to a fixed reference plane by $\Delta i$. The direction $\hat{\mathrm{\mathbf{n}}}$ is normal to the orbital plane. The directions $\hat{\mathrm{\mathbf{r}}}$, $\hat{\mathrm{\mathbf{r}}}_1$ and $\hat{\mathrm{\mathbf{r}}}_2$ trace the central, lower and upper boundaries of a transit region respectively.}
\label{fig:dobcoord}
\end{figure}
\begin{figure}
\includegraphics[trim={0cm 1cm 0cm 0cm}, width = \linewidth]{single_transit_outline_v2_output.png}
\caption{The surface of a celestial sphere represented on a 2d plane. The dotted lines represent the centre of a transit region for a planet inclined to a fixed reference plane by $\Delta i$. The solid lines refer to the boundaries of such transit regions for when $R_\star/a = 0.25$. The area within these transit regions are identical, giving an identical single transit probability equal to 0.25.}
\label{fig:outline}
\end{figure}
Consider some fixed reference plane where [$\hat{\mathrm{\mathbf{X}}}, \hat{\mathrm{\mathbf{Y}}}$] define a pair of orthogonal directions in this plane, and $\hat{\mathrm{\mathbf{Z}}}$ defines a direction orthogonal to this plane as shown in Figure \ref{fig:dobcoord}. The fixed reference frame in Figure \ref{fig:dobcoord} is assumed to be centred on a host star with radius $R_\star$. The line of sight of an observer is considered to be randomly oriented over the surface of a celestial sphere with respect to this fixed reference plane. Now consider that the orbital plane of a planet with a semi-major axis $a$ and radius $R_\mathrm{p}$, is inclined to the fixed reference plane by $\Delta i$, with the intersection between the two planes occurring along the $\hat{\mathrm{\mathbf{X}}}$ direction. The direction of the normal of the orbital plane is given by $\hat{\mathrm{\mathbf{n}}}$. The position of a planet in the orbital plane is defined by the direction $\hat{\mathrm{\mathbf{r}}}$ which makes the angles $\theta$ and $\phi$ with the $\hat{\mathrm{\mathbf{Z}}}$ and $\hat{\mathrm{\mathbf{X}}}$ directions respectively. Hence $\hat{\mathrm{\mathbf{r}}}$ traces the centre of the transit region with respect to the fixed reference plane. As $\hat{\mathrm{\mathbf{n}}} \cdot \hat{\mathrm{\mathbf{r}}} = 0$, where $\hat{\mathrm{\mathbf{n}}} = [0, -\sin\Delta i, \cos\Delta i]$ and $\hat{\mathrm{\mathbf{r}}} = [\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta]$, it follows that
\begin{equation}
- \sin\Delta i\sin\theta\sin\phi + \cos\Delta i\cos\theta = 0.
\label{eq:cent}
\end{equation}
Hence eq. (\ref{eq:cent}) defines how the centre of a transit region inclined to a fixed reference plane by $\Delta i$ traverses the celestial sphere. This is shown by the dashed lines in Figure \ref{fig:outline} for different values of $\Delta i$, where the surface area of the celestial sphere is shown on the 2d plane defined by eq. (\ref{eq:surfel}). We note that in the special case where $\Delta i$ = 90$^\circ$, $\phi$ can only take values of 0 or $\pi$.
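Eq. (\ref{eq:cent}) can also be solved explicitly for the polar angle of the band centre, $\theta = \operatorname{atan2}(\cos\Delta i,\ \sin\Delta i \sin\phi)$. A quick sketch (our own rearrangement, not from the text) confirms that the residual of eq. (\ref{eq:cent}) vanishes along this curve:

```python
import math

def centre_theta(delta_i, phi):
    """Polar angle theta of the transit-region centre at azimuth phi,
    solving -sin(di) sin(theta) sin(phi) + cos(di) cos(theta) = 0."""
    return math.atan2(math.cos(delta_i), math.sin(delta_i) * math.sin(phi))

def residual(delta_i, theta, phi):
    """Left-hand side of eq. (cent); zero on the band centre."""
    return (-math.sin(delta_i) * math.sin(theta) * math.sin(phi)
            + math.cos(delta_i) * math.cos(theta))
```

As a sanity check, for $\Delta i = 30^\circ$ the centre curve reaches its minimum polar angle $\theta = 90^\circ - \Delta i = 60^\circ$ at $\phi = \pi/2$, and crosses the reference plane ($\theta = 90^\circ$) at $\phi = 0$.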
Similarly, the directions that define the boundaries of the transit region are given by $\hat{\mathrm{\mathbf{r}}}_1$ and $\hat{\mathrm{\mathbf{r}}}_2$, which make the angles $\theta_1, \theta_2$ and $\phi_1, \phi_2$ with the $\hat{\mathrm{\mathbf{Z}}}$ and $\hat{\mathrm{\mathbf{X}}}$ directions respectively, as shown in Figure \ref{fig:dobcoord}. The boundaries of the transit region subtend an angle $\pm \theta_\mathrm{sub}$ from the orbital plane, where $\sin\theta_\mathrm{sub} = R_\star/a$, assuming $R_\star \gg R_\mathrm{p}$ (\cite{1984Icar...58..121B}). As $\hat{\mathrm{\mathbf{r}}}_1 = [\sin\theta_1\cos\phi_1, \sin\theta_1\sin\phi_1, \cos\theta_1]$, $\hat{\mathrm{\mathbf{r}}}_2 = [\sin\theta_2\cos\phi_2, \sin\theta_2\sin\phi_2, \cos\theta_2]$, $\hat{\mathrm{\mathbf{n}}} \cdot \hat{\mathrm{\mathbf{r}}}_1 = R_\star/a$ and $\hat{\mathrm{\mathbf{n}}} \cdot \hat{\mathrm{\mathbf{r}}}_2 = -R_\star/a$, it follows that
\begin{equation}
-\sin\Delta i\sin\theta_{1}\sin\phi_1 + \cos\Delta i\cos\theta_{1} = R_\star/a,
\label{eq:bound1}
\end{equation}
\vspace{-0.5cm}
\begin{equation}
-\sin\Delta i\sin\theta_{2}\sin\phi_2 + \cos\Delta i\cos\theta_{2} = -R_\star/a.
\label{eq:bound2}
\end{equation}
Hence eq. (\ref{eq:bound1}) and eq. (\ref{eq:bound2}) describe how the lower and upper boundaries of the transit region for a planet inclined to a fixed reference plane by $\Delta i$ traverse a celestial sphere. The solid lines in Figure \ref{fig:outline} show these boundaries for different values of $\Delta i$, where $R_\star/a$ = 0.25. This value of $R_\star/a$ might be considered to be unrealistically large and is used for demonstration purposes only. In Appendix \ref{sec:transeq} we further discuss how the values of ($\theta_1, \phi_1$) and ($\theta_2, \phi_2$) in eq. (\ref{eq:bound1}) and eq. (\ref{eq:bound2}) respectively would be expected to change as $\Delta i$ is increased from $\Delta i = 0\rightarrow90^\circ$.
An integration between the upper and lower boundaries of a transit region divided by the total surface area of the celestial sphere gives the associated \textit{single transit probability} of the planet ($R_\star/a$, \cite{1984Icar...58..121B}). All of the transit regions shown in Figure \ref{fig:outline} for different $\Delta i$ therefore contain identical areas and hence have identical single transit probabilities equal to 0.25. We note that if the planet has a non-negligible radius then the single transit probability becomes $(R_\star \pm R_\mathrm{p})/a$ for grazing and full transits respectively. Throughout this work however we assume that $R_\mathrm{p} \ll R_\star$.
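This geometry is straightforward to verify numerically. The following Python sketch (illustrative only; the sampling scheme and function name are ours, not part of this work) draws line-of-sight directions uniformly over the celestial sphere and recovers the single transit probability $R_\star/a$:

```python
import random

def single_transit_probability_mc(r_star_over_a, n_samples=200_000, seed=1):
    """Monte Carlo estimate of the single transit probability.

    A transit is seen when the line of sight lies within the band
    |n_hat . o_hat| <= R_star/a about the orbital plane.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # For a direction drawn uniformly on the sphere, its dot product
        # with the (fixed) orbit normal is uniform on [-1, 1].
        dot = rng.uniform(-1.0, 1.0)
        if abs(dot) <= r_star_over_a:
            hits += 1
    return hits / n_samples

# The band between latitudes +/-theta_sub covers a fraction
# sin(theta_sub) = R_star/a of the sphere.
print(single_transit_probability_mc(0.25))
```

Because the dot product of a uniformly drawn direction with any fixed axis is uniform on $[-1, 1]$, the expected fraction is exactly $R_\star/a$ = 0.25 here, independent of $\Delta i$.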
\subsection{Two Planet Case}
\label{subsec:twoplanetcase2}
Consider now a system containing two planets, both of which are on circular orbits with semi-major axes and radii of $a_1$, $a_2$ and $R_\mathrm{{p_1}}, R_\mathrm{{p_2}}$ respectively, where $a_1 < a_2$ and the orbital planes are mutually inclined by $\Delta i$ (we give an exact definition for mutual inclination in \S\ref{sec:secint}). The probability that a randomly oriented observer will view both planets to transit (assuming the system is observed for long enough) is equal to the overlap area between the transit regions of both planets, divided by the total area of the celestial sphere. We refer to this probability as the \textit{double transit probability}.
Therefore, using eq. (\ref{eq:bound1}) and eq. (\ref{eq:bound2}) to find where the boundaries of the transit regions of each planet intersect, an outline of the overlap between the transit regions can be determined. The area of this overlap can subsequently be calculated by an appropriate integration, which when divided by 4$\pi$ gives the double transit probability. How the double transit probability changes as a function of $\Delta i$ is shown by the blue line in Figure \ref{fig:pi}, for when $R_\star/a_1 = 0.2$ and $R_\star/a_2 = 0.1$. We note that this result is unchanged regardless of the choice of reference plane and the orientation of the orbital planes of both planets with respect to this reference plane (see \cite{2010arXiv1006.3727R} for a further discussion). That is, the double transit probability depends on the mutual inclination between the two planets only (in addition to the physical size of the respective transit regions).
Depending on the value of $\Delta i$, the double transit probability ($P$ herein) can be split into three regimes (also discussed in \cite{2010arXiv1006.3727R}; \cite{2016ApJ...821...47B}).
(1) For low values of $\Delta i$, the transit region of the outer planet is enclosed within that of the inner planet. The double transit probability is therefore equal to $R_\star/a_2$.
(2) $\Delta i$ is large enough that the transit region of one planet is no longer fully enclosed inside the other; however, there is still partial overlap for all azimuthal angles on the celestial sphere. The transition to this regime occurs for a value of $\Delta i = I_1$, which causes $\theta_1$ in eq. (\ref{eq:bound1}) for both planets to be equal at $\phi_1 = \pi/2$. Evaluating eq. (\ref{eq:bound1}) at this point gives
\begin{equation}
\sin I_1 = -\kappa_2(1 - \kappa_1^2)^{1/2} + \kappa_1(1 - \kappa_2^2)^{1/2},
\label{eq:I1}
\end{equation}
where $\kappa_1 = R_\star/a_1$ and $\kappa_2 = R_\star/a_2$ for simplicity. We note that an exact analytical expression for the overlap area of the two transit regions is difficult to derive in this regime, and the area is commonly calculated by Monte Carlo techniques (e.g. \cite{2010arXiv1006.3727R}; \cite{2012ApJ...758...39J}; \cite{2016MNRAS.455.2980B}; \cite{2016arXiv160908058M}; \cite{2017MNRAS.tmp..186H}).
(3) For large $\Delta i$, the transit regions only overlap at the intersection of the two orbital planes. The transition to this regime occurs when $\Delta i = I_2$, where $\theta_1$ for the inner planet is equal to $\theta_2$ for the outer planet at $\phi_1 = \phi_2 = \pi/2$. Evaluating eq. (\ref{eq:bound1}) and (\ref{eq:bound2}) here gives
\begin{equation}
\sin I_2 = \kappa_2(1 - \kappa_1^2)^{1/2} + \kappa_1(1 - \kappa_2^2)^{1/2}.
\label{eq:I2}
\end{equation}
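Eq. (\ref{eq:I1}) and eq. (\ref{eq:I2}) are the sine addition formulae for $\arcsin\kappa_1 \mp \arcsin\kappa_2$, so the regime boundaries can be evaluated directly. A minimal Python sketch (the function name is ours) for the demonstration values $\kappa_1 = 0.2$, $\kappa_2 = 0.1$ used in Figure \ref{fig:pi}:

```python
import math

def regime_boundaries(kappa1, kappa2):
    """Return (I1, I2) in radians from eq. (I1) and eq. (I2)."""
    c1 = math.sqrt(1.0 - kappa1 ** 2)
    c2 = math.sqrt(1.0 - kappa2 ** 2)
    # sin I1 = sin(asin k1 - asin k2); sin I2 = sin(asin k1 + asin k2)
    i1 = math.asin(kappa1 * c2 - kappa2 * c1)
    i2 = math.asin(kappa1 * c2 + kappa2 * c1)
    return i1, i2

i1, i2 = regime_boundaries(0.2, 0.1)
print(math.degrees(i1), math.degrees(i2))  # about 5.8 and 17.3 degrees
```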
The values of $I_1$ and $I_2$ are shown by the green and red lines respectively in Figure \ref{fig:pi}. If it is assumed that the transit region overlap in regime 3 can be represented as a 2d parallelogram, \cite{2010arXiv1006.3727R} showed the double transit probability can be approximated by\footnote{\label{note1}For greater accuracy, we include a 2/$\pi$ factor here that is not included in \cite{2010arXiv1006.3727R}.}
\begin{equation}
P = \frac{2R_\star^2}{\pi a_1a_2\sin\Delta i}.
\label{eq:doubleprob}
\end{equation}
For large $\Delta i$ therefore, the double transit probability predicted by eq. (\ref{eq:doubleprob}) tends to a value of 2$R_\star^2/\pi a_1a_2$. We show eq. (\ref{eq:doubleprob}) as the black dashed line in Figure \ref{fig:pi}. We note that in \cite{2010arXiv1006.3727R} it was assumed that the double transit probability transitions straight from regime (1) to (3) at $\Delta i = \arcsin\left(\frac{2}{\pi}\cdot\mathrm{min}(R_\star/a_1,R_\star/a_2)\right)$\footref{note1}.
For $\Delta i > I_2$ our method predicts a double transit probability that agrees well with the analytical estimate from \cite{2010arXiv1006.3727R}. However there is a clear discrepancy for $I_1 < \Delta i < I_2$, where there is partial overlap between the transit regions at all azimuthal angles. This highlights the need for semi-analytical methods like the one suggested here, rather than purely analytical relations, to calculate double transit probabilities robustly at all values of $\Delta i$. We note that our method also agrees well with the Monte Carlo treatment of double transit probabilities shown in \cite{2010arXiv1006.3727R}.
Calculating transit probabilities using the method outlined here is significantly more computationally efficient than equivalent Monte Carlo methods, as it is only necessary to solve combinations of eq. (\ref{eq:bound1}) and (\ref{eq:bound2}) for different planets to find where transit regions overlap. From integrating around this overlap, the associated double transit probability is also exact and not subject to Monte Carlo noise effects from under-sampling the total number of line of sight vectors.
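As a check on both approaches, a brute-force Monte Carlo estimate of the double transit probability can be written in a few lines. The sketch below is ours and mirrors the Monte Carlo treatment of \cite{2010arXiv1006.3727R} rather than our semi-analytical method; it counts uniformly drawn line-of-sight directions that fall inside both transit bands:

```python
import math
import random

def double_transit_probability_mc(kappa1, kappa2, delta_i,
                                  n_samples=400_000, seed=2):
    """Monte Carlo double transit probability for two circular orbits
    whose planes are mutually inclined by delta_i (radians)."""
    rng = random.Random(seed)
    n2y, n2z = math.sin(delta_i), math.cos(delta_i)  # outer orbit normal
    hits = 0
    for _ in range(n_samples):
        # Uniform direction on the sphere: z uniform, phi uniform.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        y = math.sqrt(1.0 - z * z) * math.sin(phi)
        # Inner orbit normal along z: transit if |z| <= kappa1.
        if abs(z) <= kappa1 and abs(n2y * y + n2z * z) <= kappa2:
            hits += 1
    return hits / n_samples

# Regime 1: outer transit region enclosed by the inner one, P = kappa2.
print(double_transit_probability_mc(0.2, 0.1, math.radians(2.0)))
# Regime 3: compare with the estimate 2*k1*k2/(pi*sin(delta_i)).
print(double_transit_probability_mc(0.2, 0.1, math.radians(30.0)))
```

This reproduces the limiting behaviour of Figure \ref{fig:pi}, at the cost of sampling noise and run time that the semi-analytical method avoids.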
\begin{figure}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = \linewidth]{mult_transit_area_compare_output.png}
\caption{The double transit probability as a function of mutual inclination between two planets from our method (blue line) for when $R_\star/a_1 = 0.2$ and $R_\star/a_2 = 0.1$. The dashed black line represents the associated analytical estimate given by eq. (\ref{eq:doubleprob}). The green and red lines represent the inclinations at which the double transit probability transitions from regime 1 to 2 and from regime 2 to 3 respectively, with the regimes being defined in \S\ref{subsec:twoplanetcase2}.}
\label{fig:pi}
\end{figure}
\section{Secular Interactions}
\label{sec:secint}
\subsection{N planet system}
\label{subsec:nplanetsys}
Consider a system of $N$ secularly interacting planets in which planet $j$ has a semi-major axis $a_j$ and mass $m_j$. The inclination and longitude of ascending node of planet $j$ are given by $I_j$ and $\Omega_j$ respectively, and can be combined into the associated complex inclination $y_j = I_je^{i\Omega_j}$. Writing the complex inclinations of all the planets as the vector $\bm{y}$ = [$y_1, y_2, ..., y_N$], the evolution of complex inclinations in the low inclination and eccentricity limit is given by Laplace-Lagrange theory in the form
\begin{equation}
\dot{\bm{y}} = i\mathbf{{B}}\bm{y},
\label{eq:Laplace}
\end{equation}
(\cite{1999ssd..book.....M}) where $\mathbf{B}$ is a matrix with elements given by
\begin{equation}
\begin{split}
& B_{jk} = \frac{1}{4}n_j\left(\frac{m_k}{M_\star + m_j}\right)\alpha_{jk}\tilde{\alpha}_{jk}b^{(1)}_{3/2}(\alpha_{jk}) \hspace{1cm}(j\neq k) \\
& B_{jj} = -\sum^{N}_{k=1, j\neq k}B_{jk},
\end{split}
\label{eq:matrix}
\end{equation}
$j$ and $k$ are integers associated with each planet, $M_\star$ and $m_j$ are the masses of the star and planet $j$, $n_j$ is the mean motion of planet $j$ where $n_j^2a_j^3 = G(M_\star + m_j)$, $\alpha_{jk} = \tilde{\alpha}_{jk}$ = $a_j/a_k$ for $a_j < a_k$ and $\alpha_{jk} = a_k/a_j$ and $\tilde{\alpha}_{jk} = 1$ otherwise, and $b^{(1)}_{3/2}(\alpha_{jk})$ corresponds to a Laplace coefficient given by
\begin{equation}
b^{(\nu)}_s(\alpha) = \frac{1}{\pi}\int_{0}^{2\pi}\frac{\cos(\nu x)dx}{(1 - 2\alpha\cos(x) + \alpha^2)^s} \hspace{1cm} \alpha < 1.
\end{equation}
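The Laplace coefficient is easily evaluated numerically; since the integrand is smooth and periodic, the trapezoidal rule over one full period converges rapidly. A short Python sketch (ours, for illustration):

```python
import math

def laplace_coefficient(s, nu, alpha, n_grid=20_000):
    """b_s^(nu)(alpha) via the trapezoidal rule over one period.

    For a smooth periodic integrand, equally weighted samples over
    [0, 2*pi) are exactly the trapezoidal rule and converge quickly.
    """
    total = 0.0
    dx = 2.0 * math.pi / n_grid
    for i in range(n_grid):
        x = i * dx
        total += math.cos(nu * x) / (1.0 - 2.0 * alpha * math.cos(x)
                                     + alpha ** 2) ** s
    return total * dx / math.pi

# For alpha << 1, b_{3/2}^(1)(alpha) ~ 3*alpha, the limit used later
# in the wide-orbit approximation.
print(laplace_coefficient(1.5, 1, 0.05))
```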
Eq. (\ref{eq:Laplace}) can be solved to show that the evolution of $\bm{y}$ is given by a superposition of eigenmodes associated with each eigenfrequency $f_k$ of the matrix $\mathbf{B}$
\begin{equation}
y_j(t) = \sum^{N}_{k=1}\mathbf{\mathit{I}}_{jk}e^{i(f_kt + \gamma_k)},
\label{eq:secsol}
\end{equation}
where $I_{jk}$ are the components of the eigenvectors of $\mathbf{B}$ scaled to initial boundary conditions and $\gamma_k$ is an initial phase term. If it is assumed that all objects are spherically symmetric, additional terms in the diagonal elements of $\mathbf{B}$ in eq. (\ref{eq:matrix}) (e.g. stellar oblateness) need not be included. A choice of reference frame for the inclination also becomes arbitrary, leading to one of the eigenfrequencies equalling zero (cf. \cite{1999ssd..book.....M}). It is only meaningful therefore to describe a \textit{mutual inclination} between pairs of planets, with the invariable plane commonly being chosen as a reference plane. The invariable plane is defined as being perpendicular to the total angular momentum vector of a system. The mutual inclination is then the angle between individual angular momentum vectors of a pair of planets. The inclination solution described by eq. (\ref{eq:secsol}) also becomes simplified when the invariable plane is taken as a reference plane, as the eigenvector associated with the zero-value eigenfrequency is also equal to zero.
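The zero eigenfrequency can be made explicit in the two-planet case, where each row of $\mathbf{B}$ sums to zero and hence $\det\mathbf{B} = 0$. The Python sketch below is ours; the unit system ($G = M_\star = 1$) and the planet parameters are assumed purely for illustration. It builds $\mathbf{B}$ from eq. (\ref{eq:matrix}) and confirms that one eigenfrequency vanishes:

```python
import math

def laplace_coefficient(s, nu, alpha, n_grid=20_000):
    dx = 2.0 * math.pi / n_grid
    return sum(math.cos(nu * i * dx)
               / (1.0 - 2.0 * alpha * math.cos(i * dx) + alpha ** 2) ** s
               for i in range(n_grid)) * dx / math.pi

# Illustrative two-planet system in units with G = M_star = 1.
m1, m2 = 3e-5, 3e-5                  # planet masses in stellar masses
a1, a2 = 0.4, 1.0                    # semi-major axes
n1 = math.sqrt((1.0 + m1) / a1 ** 3)  # mean motions, n^2 a^3 = G(M_star + m)
n2 = math.sqrt((1.0 + m2) / a2 ** 3)
alpha = a1 / a2
b = laplace_coefficient(1.5, 1, alpha)

# Off-diagonal elements from eq. (matrix); for the inner planet
# alpha = alpha_tilde = a1/a2, for the outer planet alpha_tilde = 1.
B12 = 0.25 * n1 * (m2 / (1.0 + m1)) * alpha * alpha * b
B21 = 0.25 * n2 * (m1 / (1.0 + m2)) * alpha * 1.0 * b
B = [[-B12, B12], [B21, -B21]]       # rows sum to zero

# Eigenvalues of a 2x2 matrix from its trace and determinant.
tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
f1, f2 = 0.5 * (tr + disc), 0.5 * (tr - disc)
print(f1, f2)  # one eigenfrequency vanishes (up to rounding)
```

The determinant vanishes identically because each row of $\mathbf{B}$ sums to zero, so $f = 0$ is always an eigenfrequency; the second eigenfrequency equals the trace, $-(B_{12} + B_{21})$.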
\subsection{Two planet system with an inclined companion}
\label{subsec:twoplansec}
\begin{figure*}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{a_M.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{simpa_M.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{M_i.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{simpM_i.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{a_i.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{simpa_i.png}
\caption{The maximum mutual inclination, $\mathrm{max}|\Delta i_{12}|$, between two planets on circular, initially coplanar orbits with semi-major axes of 0.2 and 0.5au respectively and masses of 10M$_\oplus$, from the secular interaction with an outer third planet. The value of $\mathrm{max}|\Delta i_{12}|$ calculated by the full Laplace-Lagrange solution from eq. (\ref{eq:mutinc}) is given by the colour scale on the left panels. The right panel colour scales give $\mathrm{max}|\Delta i_{12}|$ calculated by the simplified Laplace-Lagrange solution for when $a_3 \gg a_1,a_2$, given by eq. (\ref{eq:mutinc_simp}) and eq. (\ref{eq:Ksimp}). For the top panels $\Delta i = 10^\circ$, for the middle panels $m_3 = 1$M$_\mathrm{J}$ and for the bottom panels $a_3 = 2$au. It is important to note that the assumptions of Laplace-Lagrange theory break down when $\Delta i \gg 20^\circ$. Larger inclinations are only included in this Figure to aid comparison between $\mathrm{max}|\Delta i_{12}|$ predicted by the full and simplified Laplace-Lagrange theory solutions.}
\label{fig:M_i}
\end{figure*}
Consider the same general two planet system from \S\ref{subsec:twoplanetcase2}. Assume that the two planets are initially coplanar. Consider now a third planet on an external circular orbit, with a mass and semi-major axis of $m_3$ and $a_3$ respectively such that $a_3 > a_2$. The orbital plane of this external planet is initially mutually inclined to the inner planets by $\Delta i$. We assume that each of the planets interact through secular interactions only and that inclinations and eccentricities remain small, allowing for application of Laplace - Lagrange theory. Assuming that the invariable plane is taken as a fixed reference plane, the initial inclination of the third planet $i_3$ is given by
\[ i_3 = \arctan\left[\frac{(L_1 + L_2)\sin\Delta i}{L_3 + (L_1 + L_2)\cos\Delta i}\right]\]
where $L_j = m_ja_j^{1/2}$ and is proportional to the angular momentum in the low eccentricity limit. The initial inclination of the inner planets with respect to the invariable plane is therefore $i_1 = \Delta i - i_3$.
From eq. (\ref{eq:secsol}) the complex inclination of each of the inner two planets with respect to the invariable plane evolves in the form of
\begin{equation}
\begin{split}
& y_1 = I_{11}e^{i(f_1t + \gamma_1)} + I_{12}e^{i(f_2t + \gamma_2)}\\
& y_2 = I_{21}e^{i(f_1t + \gamma_1)} + I_{22}e^{i(f_2t + \gamma_2)},
\label{eq:individualinclination}
\end{split}
\end{equation}
where $y_1$ and $y_2$ are the complex inclinations of the innermost and second innermost planet respectively. The evolution of the mutual inclination between the inner pair of planets is hence given by
\begin{equation}
y_1 - y_2 = (I_{11} - I_{21})e^{i(f_1t + \gamma_1)} + (I_{12} - I_{22})e^{i(f_2t + \gamma_2)}.
\label{eq:mutincevofull}
\end{equation}
The $t = 0$ boundary conditions give $\gamma_1 = \pi$ and $\gamma_2 = 0$. Also as $y_1$($t$ = 0) = $y_2$($t$ = 0) = $i_1$, it follows from eq. (\ref{eq:individualinclination}) that $I_{11} - I_{21}$ = $I_{12} - I_{22}$. The evolution of the mutual inclination from eq. (\ref{eq:mutincevofull}) is therefore equivalent to
\begin{equation}
y_1 - y_2 = (I_{12} - I_{22})\left(e^{i(f_1t + \pi)} + e^{if_2t}\right).
\label{eq:complexi}
\end{equation}
Hence the evolution of the instantaneous mutual inclination between the inner pair of planets, $\Delta i_{12} = |y_1 - y_2|$, can be calculated if the first and second elements of the eigenvector associated with the $f_2$ eigenfrequency are known.
In Appendix \ref{sec:fullsec} we fully solve eq. (\ref{eq:Laplace}) to give $I_{12}$ and $I_{22}$ in terms of physical variables. Here we simply say that
\begin{equation}
y_1 - y_2 = \Delta i K\left[e^{i(f_1t + \pi)} + e^{if_2t}\right],
\label{eq:mutinc}
\end{equation}
where $K$ is dependent on the masses and semi-major axes of the three planets, shown explicitly in Appendix \ref{sec:fullsec}. We note that the maximum value of $K \approx 1$, implying that the maximum value of the mutual inclination between the inner pair of planets from eq. (\ref{eq:complexi}) is twice the initial mutual inclination with the external third planet i.e. max|$\Delta i_{12}$| = 2$\Delta i$. For given values of masses and semi-major axes of the inner pair of planets therefore, the evolution of the mutual inclination between them is dependent on three quantities, $a_3$, $m_3$ and $\Delta i$.
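The factor $|e^{i(f_1t + \pi)} + e^{if_2t}| = 2|\cos[((f_1 - f_2)t + \pi)/2]|$ reaches 2 once per relative-phase period, which a direct scan confirms. A Python sketch (ours; the frequencies and $K$ are illustrative placeholders, not fitted values):

```python
import cmath
import math

def mutual_inclination(t, delta_i, K, f1, f2):
    """|y1 - y2| from eq. (mutinc):
    Delta_i * K * |exp(i(f1 t + pi)) + exp(i f2 t)|."""
    return abs(delta_i * K * (cmath.exp(1j * (f1 * t + math.pi))
                              + cmath.exp(1j * f2 * t)))

# Illustrative (assumed) values; frequencies in arbitrary inverse-time units.
delta_i, K = math.radians(5.0), 0.6
f1, f2 = -2.0e-4, -0.5e-4

# Scan one full relative-phase period: the two phasors realign once
# (f1 - f2) t has advanced by 2*pi, so max|y1 - y2| = 2*K*delta_i.
period = 2.0 * math.pi / abs(f1 - f2)
peak = max(mutual_inclination(i * period / 100_000, delta_i, K, f1, f2)
           for i in range(100_001))
print(math.degrees(peak))  # approaches 2*K*delta_i = 6 degrees
```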
The left panels of Figure \ref{fig:M_i} show how max|$\Delta i_{12}$| changes as a function of different combinations of $a_3$, $m_3$ and $\Delta i$ in eq. (\ref{eq:mutinc}) for an example system where $a_1$, $a_2$ = 0.2, 0.5au and $m_1$, $m_2$ = 10M$_\oplus$ respectively. We note that the assumptions of Laplace-Lagrange theory are expected to break down when $\Delta i \gg 20^\circ$. Larger inclinations are included for demonstration purposes only. It is evident that as the third planet tends to a limit where it is on a wide orbit, with a low mass and low initial mutual inclination, the maximum mutual inclination between the inner pair of planets becomes small as one might expect.
\subsection{Companion wide orbit approximation}
\label{subsec:inclinedcompanion}
In \S\ref{sec:combtrans} we look to investigate how the evolving mutual inclination between the inner pair of planets affects the associated double transit probability, for the specific case where the external third planet is assumed to be on a wide orbit. For $a_3 \gg a_1,a_2$, certain $\mathbf{B}$ matrix elements from eq. (\ref{eq:matrix}), and combinations thereof, become small and we find that eq. (\ref{eq:mutinc}) can be simplified to
\begin{equation}
y_1 - y_2 \approx \Delta i K_\mathrm{simp}\left[e^{i(f_1t + \pi)} + e^{if_2t}\right],
\label{eq:mutinc_simp}
\end{equation}
where
\begin{equation}
K_{\mathrm{simp}} = \frac{3 m_3 a_2^{7/2}}{m_2 a_1^{1/2}a_3^3}\frac{1}{b^1_{3/2}\left(\frac{a_1}{a_2}\right)\left(1 + (L_1/L_2)\right)}.
\label{eq:Ksimp}
\end{equation}
Here it is assumed that as $a_3 \gg a_1,a_2$, certain Laplace coefficients from the $\mathbf{B}$ matrix elements can be simplified, specifically $b^1_{3/2}(\alpha) \approx 3\alpha$ for $\alpha \ll 1$ (\cite{1999ssd..book.....M}). Similar simplifications can be made to each of the eigenfrequencies, for which
\begin{equation}
\begin{split}
& f_1 \approx -\frac{\pi m_2 a_1^{1/2}}{2 M_\star^{1/2}a_2^2}b^1_{3/2}\left(\frac{a_1}{a_2}\right)\left(1 + L_1/L_2\right),\\
& f_2 \approx -\frac{3\pi m_3 a_2^{3/2}}{2 M_\star^{1/2} a_3^3}\frac{1}{1 + L_1/L_2}.
\end{split}
\end{equation}
As eq. (\ref{eq:mutinc}) shows that the maximum value of the mutual inclination between the inner pair of planets cannot be larger than twice the initial mutual inclination with the wide orbit planet (max$|\Delta i_{12}| \ngtr 2\Delta i$) we assume that the maximum value of the mutual inclination between the inner two planets predicted by eq. (\ref{eq:mutinc_simp}) is
\begin{equation}
\begin{split}
\mathrm{max}|\Delta i_{12}| &\approx 2\Delta iK_{\mathrm{simp}} \hspace{1.5cm}\quad\text{for }K_{\mathrm{simp}} < 1,\\
&\approx 2\Delta i \hspace{2.46cm}\quad\text{otherwise. }
\end{split}
\label{eq:simp_plot}
\end{equation}
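Eq. (\ref{eq:Ksimp}) and eq. (\ref{eq:simp_plot}) are simple to evaluate numerically. The Python sketch below is ours; expressing masses in solar units is an assumed convention and the mass constants are approximate. It computes max$|\Delta i_{12}|$ for the example system of Figure \ref{fig:M_i}:

```python
import math

M_EARTH, M_JUP = 3.003e-6, 9.546e-4  # approximate masses in solar masses

def laplace_coefficient(s, nu, alpha, n_grid=20_000):
    dx = 2.0 * math.pi / n_grid
    return sum(math.cos(nu * i * dx)
               / (1.0 - 2.0 * alpha * math.cos(i * dx) + alpha ** 2) ** s
               for i in range(n_grid)) * dx / math.pi

def max_mutual_inclination(delta_i, m1, m2, a1, a2, m3, a3):
    """max|Delta i_12| from eq. (simp_plot) with K_simp from eq. (Ksimp).

    Masses in consistent units (here solar masses), lengths in au,
    delta_i in radians; K_simp is dimensionless.
    """
    L1_over_L2 = (m1 * math.sqrt(a1)) / (m2 * math.sqrt(a2))
    b = laplace_coefficient(1.5, 1, a1 / a2)
    K_simp = (3.0 * m3 * a2 ** 3.5) / (m2 * math.sqrt(a1) * a3 ** 3) \
             / (b * (1.0 + L1_over_L2))
    # max|Delta i_12| cannot exceed 2*delta_i, hence the clamp on K_simp.
    return 2.0 * delta_i * min(K_simp, 1.0)

# Example system: a1, a2 = 0.2, 0.5 au; m1 = m2 = 10 Earth masses;
# perturber with m3 = 1 M_J at a3 = 2 au, inclined by 10 degrees.
di = math.radians(10.0)
print(math.degrees(max_mutual_inclination(di, 10 * M_EARTH, 10 * M_EARTH,
                                          0.2, 0.5, 1.0 * M_JUP, 2.0)))
```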
\begin{figure*}
\centering
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.415\linewidth]{prob_time_output_inc.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.425\linewidth]{prob_time_output_prob.png}
\caption{(\textit{left}): The evolution of the mutual inclination of the two inner planets considered in Figure \ref{fig:M_i} due to secular interactions with a third planet with $a_3$ = 2au, $m_3$ = 1M$_\mathrm{J}$ and $\Delta i$ = 5$^\circ$. (\textit{right}): The associated evolution of the double transit probability.}
\label{fig:prob_time}
\end{figure*}
The right panels of Figure \ref{fig:M_i} show max|$\Delta i_{12}$| predicted by eq. (\ref{eq:simp_plot}) and eq. (\ref{eq:Ksimp}) using the same planet parameters as shown in the left panels. We find that when $a_3$ $\gtrsim$1.25au, the simplified form for max|$\Delta i_{12}$| from eq. (\ref{eq:simp_plot}) and eq. (\ref{eq:Ksimp}) agrees with the full Laplace - Lagrange solution to within $\sim25\%$ for all values of $m_3$ and $\Delta i$. For $a_3 \sim$1au, the simplified form of max|$\Delta i_{12}$| begins to break down and eq. (\ref{eq:simp_plot}) can underestimate max|$\Delta i_{12}$| from the full Laplace - Lagrange solution by up to a factor of 2.
This estimate is similar to the result derived by \cite{2017AJ....153...42L}, who assumed that the angular momentum vector direction of the outer inclined planet is fixed in time. They find that the maximum mutual inclination that can be induced in an inner pair of planets depends on the strength of the coupling between them (parametrized by $\epsilon_{12}$ in their eq. 12). Assuming inclinations are small, we find eq. (\ref{eq:simp_plot}) agrees with the equivalent prediction of max|$\Delta i_{12}$| from \cite{2017AJ....153...42L} if $K_\mathrm{simp} = \epsilon_{12}$. Indeed, $K_\mathrm{simp}$ and $\epsilon_{12}$ are almost identical despite the different derivation techniques (e.g. we derive the full Laplace-Lagrange solution and then simplify it assuming $a_3 \gg a_1, a_2$), except that $K_\mathrm{simp}$ contains an additional factor of $a_1^{1/2}a_2^{3/2}$ whereas $\epsilon_{12}$ contains a factor of ($a_2^2 - a_1^2$). By considering different combinations of $a_1$ and $a_2$ and comparing to the value of max|$\Delta i_{12}$| given by the full solution in Appendix \ref{sec:fullsec}, we find that neither eq. (\ref{eq:simp_plot}) and (\ref{eq:Ksimp}) nor the equivalent equation from \cite{2017AJ....153...42L} is favoured as a more accurate approximation, since which is closer to the full solution depends on the exact parameters.
\section{Combining Transit Probabilities with Secular Theory}
\label{sec:combtrans}
Considering two inner, initially coplanar planets and an outer inclined planetary companion, we combine the analysis of transit probabilities from \S\ref{sec:semianal} with secular interactions from \S\ref{sec:secint} in two main ways. First in \S\ref{subsec:meanprob}, we assume that the outer planet is not necessarily on a wide orbit. The evolution of the mutual inclination between the inner planets is therefore assumed to be given by the full Laplace-Lagrange solution derived in eq. (\ref{eq:mutinc}). The double transit probability of the inner two planets during this evolution is then calculated through the method outlined in \S\ref{sec:semianal}. This provides the most accurate prediction for how the double transit probability of two inner planets evolves (in the low inclination limit) considering a given outer planetary companion. We make use of this method for a detailed discussion of how an outer planet affects an inner population of Kepler systems in \S\ref{sec:Kepdich}.
Second, in \S\ref{subsec:meanprobsimp} we assume that the outer planetary companion is on a significantly wide orbit. The evolution of the mutual inclination between the inner two planets is therefore given by eq. (\ref{eq:mutinc_simp}) and eq. (\ref{eq:Ksimp}). Here we look to give a simple analytical form to describe the double transit probability of two inner planets, due to secular interactions with a given outer planetary companion. We make use therefore of simple analytical relations such as eq. (\ref{eq:doubleprob}) to describe double transit probabilities. Comparing with the work in \S\ref{subsec:meanprob} allows for the accuracy of these approximations to be judged. We demonstrate in \S\ref{sec:realsys} how simple constraints can be placed on the inclination of an outer companion in specific systems using this method.
\subsection{Two planet system with an inclined companion}
\label{subsec:meanprob}
From Figure \ref{fig:pi} it is clear that if the amplitude of the mutual inclination between the inner two planets is large, then the associated double transit probability, $P$, will only be at a maximum value for a small proportion of the secular evolution. The presence of an outer inclined planet may therefore result in a significant reduction in the mean double transit probability $\langle P \rangle$ on long timescales. Figure \ref{fig:prob_time} shows how both the mutual inclination and the double transit probability evolve with time for two inner planets from Figure \ref{fig:M_i}, which are perturbed by an outer planetary companion with a semi-major axis, mass and inclination of $a_3$ = 2au, $m_3$ = 1M$_\mathrm{J}$ and $\Delta i$ = 5$^\circ$ respectively. Indeed, $P$ is only at a maximum value for a small proportion of the secular evolution, leading to a significant reduction in $\langle P \rangle$ compared with the case where the outer planet is absent.
\begin{figure*}
\centering
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.44\linewidth]{prob_aM_output.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.44\linewidth]{prob_aM_simp_output.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.44\linewidth]{prob_ai_output.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.44\linewidth]{prob_ai_simp_output.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.44\linewidth]{prob_Mi_output.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.44\linewidth]{prob_Mi_simp_output.png}
\caption{The mean double transit probability of two planets $\langle P \rangle$ from Figure \ref{fig:M_i}, which are being secularly perturbed by a third planet on a mutually inclined orbit according to the full Laplace-Lagrange solution (left panels) and the simplified Laplace-Lagrange solution for when the third planet is assumed to be on a wide orbit (right panels). The black lines show the boundary where the maximum mutual inclination between the inner planets exceeds $I_1$ from eq. (\ref{eq:I1}) and $\langle P \rangle$ is assumed to be significantly reduced. The black lines on the respective left and right panels are identical and included to aid comparison. As noted in Figure \ref{fig:M_i}, Laplace-Lagrange theory is expected to break down for $\Delta i \gg 20^\circ$. Larger inclinations are included here for demonstration purposes only.}
\label{fig:Pa_M}
\end{figure*}
Furthermore, the left panels of Figure \ref{fig:Pa_M} show how $\langle P \rangle$ changes due to perturbations from an outer planet with the same range of parameters considered in Figure \ref{fig:M_i}. As one may expect, through comparing the left panels of Figures \ref{fig:M_i} and \ref{fig:Pa_M}, an outer planet which induces a large value of max|$\Delta i_{12}$| also causes a significant reduction in the mean double transit probability of the inner two planets and vice versa for small values of max|$\Delta i_{12}$|.
The left panels of Figure \ref{fig:Pa_M} also suggest a clear boundary of $a_3$, $m_3$ and $\Delta i$, above which the outer planet causes $\langle P \rangle$ to be significantly reduced and below which $\langle P \rangle$ is unchanged. From Figure \ref{fig:pi}, the double transit probability of the two inner planets can be considered to be significantly reduced when $\Delta i_{12} > I_1$, where $I_1$ is given by eq. (\ref{eq:I1}). We assume therefore that the boundary where $\langle P \rangle$ is significantly reduced occurs when max|$\Delta i_{12}$| $\approx$ $I_1$. The values of $a_3$, $m_3$ and $\Delta i$ which give this boundary are shown by the black lines in the left panels of Figure \ref{fig:Pa_M}.
\subsection{Companion wide orbit approximation}
\label{subsec:meanprobsimp}
Considering the simplified evolution of the mutual inclination from eq. (\ref{eq:mutinc_simp}) and (\ref{eq:Ksimp}) for when $a_3 \gg a_1,a_2$, here we estimate the value of the mean double transit probability itself. We assume that $\langle P \rangle$ is dominated by the maximum or minimum value of the double transit probability, $P_{\mathrm{max}}$ and $P_{\mathrm{min}}$ respectively, depending on whether max|$\Delta i_{12}$| is greater than $I_1$. We assume that $I_1 \approx R_\star/a_1 - R_\star/a_2$ from eq. (\ref{eq:I1}) for $R_\star/a_1$, $R_\star/a_2$ $\ll$ 1. From Figure \ref{fig:pi}, $P_{\mathrm{max}} = R_\star/a_2$; however, $P_{\mathrm{min}}$ is more difficult to estimate as no exact analytical expression exists. We therefore assume $P_{\mathrm{min}}$ can be given by the estimate from \cite{2010arXiv1006.3727R} shown by eq. (\ref{eq:doubleprob}). We note that this approximation for $P_{\mathrm{min}}$ would be expected to break down if max|$\Delta i_{12}$| predicts partial overlap between the transit regions of the inner planets for all azimuthal angles (see Figure \ref{fig:pi}).
Assuming that the masses and semi-major axes of all the planets are known, in addition to the inclination of the outer planet and that max|$\Delta i_{12}$| is given by the simplified Laplace - Lagrange solution from eq. (\ref{eq:simp_plot}), $\langle P \rangle$ can be estimated by
\begin{equation}
\begin{split}
\langle P \rangle & \approx R_\star/a_2\hspace{2.2cm}\quad\text{for }\mathrm{max}|\Delta i_{12}| < R_\star/a_1 - R_\star/a_2\\
&\approx \frac{2R_\star^2}{\pi a_1a_2\sin(\mathrm{max}|\Delta i_{12}|)}\hspace{.2cm}\quad\text{otherwise.}
\end{split}
\label{eq:meanpsimp}
\end{equation}
The right panels of Figure \ref{fig:Pa_M} show the value of $\langle P \rangle$ predicted by eq. (\ref{eq:meanpsimp}), using the same planet parameters as those in the left panel. The black lines are identical to those in the left panels of Figure \ref{fig:Pa_M} and are included to aid comparison between both sides of the Figure.
The above assumptions bias the double transit probability toward spending a greater proportion of the secular evolution at $P_{\rm{min}}$. As such, eq. (\ref{eq:meanpsimp}) can underpredict $\langle P \rangle$ by a factor of up to 4 when comparing the left and right panels of Figure \ref{fig:Pa_M}. We suggest therefore that eq. (\ref{eq:meanpsimp}) should be used as a first order approximation of $\langle P \rangle$ only.
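Eq. (\ref{eq:meanpsimp}) reduces to a short function. The sketch below is ours; the solar value of $R_\star$ in au is an assumed input, and the example semi-major axes match the inner pair used throughout:

```python
import math

def mean_double_transit_probability(r_star, a1, a2, max_di):
    """<P> from eq. (meanpsimp): R_star/a2 while the inner pair stays
    nearly coplanar (max|Delta i_12| < R_star/a1 - R_star/a2), otherwise
    the parallelogram estimate evaluated at the maximum mutual
    inclination. Lengths in the same units; max_di in radians.
    """
    if max_di < r_star / a1 - r_star / a2:
        return r_star / a2
    return 2.0 * r_star ** 2 / (math.pi * a1 * a2 * math.sin(max_di))

R_SUN_AU = 0.00465  # assumed solar stellar radius in au
print(mean_double_transit_probability(R_SUN_AU, 0.2, 0.5, math.radians(0.1)))
print(mean_double_transit_probability(R_SUN_AU, 0.2, 0.5, math.radians(10.0)))
```

The first call falls in the coplanar branch and returns $R_\star/a_2$; the second, with a substantial maximum mutual inclination, returns a value over an order of magnitude smaller.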
\section{Application to specific systems}
\label{sec:realsys}
Here we consider real systems observed to have both transiting planets and an additional outer, non-transiting planet. Due to the inherent faintness of Kepler stars, follow up observations to detect non-transiting planets, namely by RV studies, are challenging. Thus the number of systems observed with such architectures is relatively small. We consider three of these systems: Kepler-56, Kepler-68 and Kepler-48, in addition to HD 106315. As RV surveys are largely insensitive to planetary inclinations, we apply eq. (\ref{eq:meanpsimp}) with eq. (\ref{eq:Ksimp}) to place constraints on the inclination of the non-transiting planets in these systems.
Assume that, as the transiting planets are indeed transiting, the mean double transit probability is at a maximum. Rearranging eq. (\ref{eq:meanpsimp}) one finds
\begin{equation}
\begin{split}
\Delta i_{\mathrm{crit}} &\approx \frac{R_\star/a_1 - R_\star/a_2}{2K_{\mathrm{simp}}}\hspace{2.2cm} \quad\text{for } K_{\mathrm{simp}} < 1\\
&\approx \frac{R_\star/a_1 - R_\star/a_2}{2}\hspace{2.2cm}\quad\text{otherwise, }
\end{split}
\label{eq:critinc}
\end{equation}
where $\Delta i_{\mathrm{crit}}$ is the inclination of the non-transiting planet required to significantly reduce the mean probability that the inner planets are observed to transit due to secular interactions. We note that eq. (\ref{eq:critinc}) assumes that the transiting planets are initially coplanar. However if these planets were initially mutually inclined by a small amount, a smaller secular perturbation from the outer planet would be required to significantly reduce the mean probability that the inner planets are observed to transit. In this case, $\Delta i_\mathrm{crit}$ from eq. (\ref{eq:critinc}) would be reduced.
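Eq. (\ref{eq:critinc}) similarly reduces to a short function; for a strongly coupled inner pair ($K_{\mathrm{simp}} \ll 1$) it returns the unphysically large values found for the systems below. A Python sketch (ours; the example parameters are illustrative rather than those of a specific system):

```python
import math

def critical_inclination(r_star, a1, a2, K_simp):
    """Delta i_crit from eq. (critinc), in radians: the outer-companion
    inclination needed to significantly reduce the mean double transit
    probability of the inner pair."""
    base = (r_star / a1 - r_star / a2) / 2.0
    return base / K_simp if K_simp < 1.0 else base

R_SUN_AU = 0.00465  # assumed solar-type stellar radius in au

# A strongly coupled inner pair (very small K_simp) yields an
# unphysically large critical inclination, i.e. no constraint.
print(math.degrees(critical_inclination(R_SUN_AU, 0.1, 0.15, 1e-3)))
```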
\subsection{Kepler-56}
Kepler-56 is a red giant star with a mass and radius of M$_\star$ = 1.32 $\pm$ 0.13 M$_\odot$ and R$_\star$ = 4.23 $\pm$ 0.15R$_\odot$ respectively (\cite{2013ApJ...767..127H}), which is observed to host three planets. Interestingly, Kepler-56 represents one of the few red giant stars observed to host a planetary system (\cite{2014A&A...562A.109L}; \cite{2015A&A...573L...5C}; \cite{2015ApJ...803...49Q}; \cite{2016arXiv160701755P}). The two inner planets (b, c) are observed to transit with periods of 10.5 and 21.4 days respectively (\cite{2011ApJ...728..117B}; \cite{2013MNRAS.428.1077S}; \cite{2013ApJ...767..127H}; \cite{2014ApJ...787...80H}; \cite{2016ApJS..225....9H}; \cite{2016ApJ...822...86M}) and have masses of 22.1$^{+3.9}_{-3.6}$M$_\oplus$ and 181$^{+21}_{-19}$M$_\oplus$ respectively (\cite{2013ApJ...767..127H}). Keck/HIRES and HARPS-North observations have revealed a non-transiting giant planet (d) with a period of 1002$\pm$5 days and minimum mass of 5.62$\pm$0.38M$_\mathrm{J}$ (\cite{2013ApJ...767..127H}; \cite{2016AJ....152..165O}). An interesting quirk of this system is that the transiting planets, while being roughly coplanar, are misaligned to the stellar spin axis by $\sim$40$^\circ$ (\cite{2013ApJ...767..127H}). It is unclear if this large obliquity is caused by long term dynamical interactions with a highly inclined companion, such as Kepler-56d, or from the star being inherently tilted to the disk from which the planets formed (\cite{2014ApJ...794..131L}).
Applying eq. (\ref{eq:critinc}), we find that $\Delta i_\mathrm{crit} = 704^\circ$. This unphysically large value means that, regardless of how Kepler-56d is inclined in this system, the mean double transit probability of the inner two transiting planets cannot be significantly reduced. That is, we suggest that the transiting planets in Kepler-56 are not strongly affected by the secular perturbations of Kepler-56d, regardless of its mutual inclination. This is similar to the result of \cite{2017AJ....153...42L}, who also find that the inner planets are strongly coupled against external secular interactions. We therefore cannot place any constraint on the inclination of Kepler-56d using this method. We note however that this does not preclude the possibility that the 40$^\circ$ misalignment from the stellar spin axis comes from an inclined outer companion, since both inner planets could be tilted together without becoming significantly mutually inclined.
\subsection{Kepler-68}
Kepler-68 is a roughly solar type star with a mass and radius of 1.08$\pm$0.05M$_\odot$ and 1.24$\pm$0.02R$_\odot$ respectively (\cite{2013ApJ...766...40G}; \cite{2014ApJS..210...20M}). It hosts two transiting planets (b, c) with periods of 5.4 and 9.6 days respectively (\cite{2013ApJ...766...40G}; \cite{2014ApJS..210...20M}; \cite{2015ApJ...808..126V}; \cite{2016ApJS..225....9H}; \cite{2016ApJ...822...86M}) and fitted masses of 5.97$\pm$1.70 and 2.18$\pm$3.5M$_\oplus$ respectively (\cite{2014ApJS..210...20M}). Keck/HIRES RV follow up of this system detected a non-transiting planet (d) with a period of 625$\pm$16 days with a fitted mass of 267$\pm$16M$_\oplus$ (\cite{2014ApJS..210...20M}).
Applying eq. (\ref{eq:critinc}) we find $\Delta i_\mathrm{crit} = 244^\circ$. As for Kepler-56, therefore, regardless of the mutual inclination of Kepler-68d, the mean double transit probability of the inner two transiting planets cannot be significantly reduced by secular perturbations. We therefore cannot place a constraint on the inclination of Kepler-68d using this method. We note that Kepler-68d can indeed have a large inclination without affecting the overall stability of the system according to a suite of N-body simulations, which suggest that Kepler-68d is inclined by $\Delta i < 85^\circ$ (\cite{2015ApJ...814L...9K}).
\subsection{HD 106315}
HD 106315 is a bright F dwarf star at a distance $d = 107.3 \pm 3.9$pc (\cite{2016A&A...595A...2G}) with mass and radius of 1.07$\pm$0.03M$_\odot$ and 1.18$\pm$0.11R$_\odot$ respectively (\cite{2012ApJ...761....6M}; \cite{2015PhDT........82P}; \cite{2017arXiv170103811C}). Recent \textit{K2} observations detect two transiting planets (b, c) with periods of 9.55 and 21.06 days respectively and radii of 2.23$^{+0.30}_{-0.25}$ and 3.95$^{+0.42}_{-0.39}R_\oplus$ respectively (\cite{2017arXiv170103811C}; \cite{2017arXiv170103807R}). Mass-radius relationships suggest these planets have masses of 8 and 20M$_\oplus$ respectively (\cite{2016ApJ...819...83W}; \cite{2016ApJ...825...19W}; \cite{2017arXiv170103811C}). Further Keck/HIRES RV observations also indicate the presence of a third outer companion planet (d) with a period of $P_\mathrm{d} \gtrsim 80$ days, which has a mass of $m_\mathrm{d} \gtrsim$1M$_\mathrm{J}$ (\cite{2017arXiv170103811C}). As the exact period of this outer planet is unknown we consider two possibilities where the outer planet has a period of $P_\mathrm{d} = 80$ days and $P_\mathrm{d} = 365$ days respectively. Assuming $P_\mathrm{d} = 80$ days implies a mass of $m_\mathrm{d} = 1$M$_\mathrm{J}$ (\cite{2009ApJ...700..302W}; \cite{2017arXiv170103811C}). Applying eq. (\ref{eq:critinc}) with this outer planet gives $i_\mathrm{crit} = 1.1^\circ$. This suggests that if the outer planet had a period of $P_\mathrm{d} = 80$ days, it must have an inclination of $\Delta i \lesssim 1.1^\circ$, otherwise the mean probability of observing the inner two planets to transit would be significantly reduced due to the secular interaction. Conversely, if the outer planet is assumed to be further out with $P_\mathrm{d} = 365$ days, implying a mass of $\sim$7M$_\mathrm{J}$, eq. (\ref{eq:critinc}) suggests that $i_\mathrm{crit} = 2.4^\circ$. 
That is, if the outer planet has a period of $P_\mathrm{d} = 365$ days, it must have an inclination of $\Delta i \lesssim 2.4^\circ$, otherwise the secular interaction would significantly reduce the mean probability that the inner planets are observed to transit.
The mutual inclination of the outer planet might also be constrained through astrometric observations of HD 106315 with ESA's \textit{Gaia} mission (\cite{2001A&A...369..339P}; \cite{2008A&A...482..699C}; \cite{2014MNRAS.437..497S}; \cite{2014ApJ...797...14P}; \cite{2015MNRAS.447..287S}). The astrometric displacement of the host star due to the presence of a planet is defined by
\begin{equation}
\alpha = \left(\frac{m_\mathrm{p}}{M_\star}\right)\left(\frac{a_\mathrm{p}}{1\mathrm{au}}\right)\left(\frac{d}{1\mathrm{pc}}\right)^{-1} \mathrm{arcsec},
\label{eq:alpha}
\end{equation}
with the astrometric signal-to-noise equal to $S/N = \alpha\sqrt{N_\mathrm{obs}}/\sigma$, where $N_\mathrm{obs}$ is the scheduled number of astrometric measurements ($N_\mathrm{obs}$ = 36 for HD 106315\footnote{http://gaia.esac.esa.int/gost/}) with typical uncertainties of $\sigma = 40\mathrm{\mu as}$ (\cite{2012Ap&SS.341...31D}). If $S/N > 20$, the orbital inclination can be constrained to a precision of $< 10^\circ$ (\cite{2015MNRAS.447..287S}). We find that, for the example periods and masses of HD 106315d considered above, $S/N < 10$. We therefore expect that the inclination of HD 106315d in these examples cannot be constrained using \textit{Gaia} astrometry. However, if HD 106315d is outside of $\sim$1.3au (implying a mass of $\gtrsim12\mathrm{M}_\mathrm{J}$), eq. (\ref{eq:alpha}) suggests that $S/N > 20$, such that the inclination of HD 106315d should be constrainable with \textit{Gaia} astrometry. Further RV follow-up of this system will allow for greater constraints to be placed on the mass and the orbit of HD 106315d, which in turn will allow for greater constraints to be placed on the inclination, either through potential astrometry measurements or through our model represented by eq. (\ref{eq:critinc}).
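As an illustration, the two cases quoted above for HD 106315d follow directly from eq. (\ref{eq:alpha}) and the signal-to-noise expression. The short Python sketch below reproduces them; the function names and the Jupiter-to-solar mass conversion are our own choices, not from any particular package.

```python
# Hedged sketch of the astrometric S/N estimate; function names and the
# Jupiter-to-solar mass conversion constant are ours.
import math

M_JUP_IN_MSUN = 9.543e-4  # Jupiter mass in solar masses

def astrometric_alpha(m_p_mjup, m_star_msun, a_p_au, d_pc):
    """Astrometric displacement of the host star in arcsec."""
    return (m_p_mjup * M_JUP_IN_MSUN / m_star_msun) * a_p_au / d_pc

def astrometric_snr(alpha_arcsec, n_obs=36, sigma_muas=40.0):
    """S/N = alpha * sqrt(N_obs) / sigma, with sigma in micro-arcsec."""
    return alpha_arcsec * 1e6 * math.sqrt(n_obs) / sigma_muas

# HD 106315d with P_d = 80 d (a ~ 0.37 au, m ~ 1 M_J): S/N stays below 10,
snr_close = astrometric_snr(astrometric_alpha(1.0, 1.07, 0.37, 107.3))
# whereas at ~1.3 au with ~12 M_J the signal approaches S/N ~ 20.
snr_far = astrometric_snr(astrometric_alpha(12.0, 1.07, 1.3, 107.3))
```

The default $N_\mathrm{obs}=36$ and $\sigma=40\,\mu$as match the values quoted in the text for HD 106315.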
\subsection{Systems with three transiting planets and a wide orbit companion}
Here we generalise the effect a wide-orbit planet has on the transit probabilities of three inner transiting planets. Consider Kepler-48 as an example of such a system. Kepler-48 has a mass and radius of M$_\star$ = 0.88$\pm$0.06M$_\odot$ and R$_\star$ = 0.89$\pm$0.05R$_\odot$ respectively. It hosts three transiting planets (b, c, d) with periods of 4.78, 9.67 and 42.9 days and fitted masses of 3.94$\pm$2.10, 14.61$\pm$2.30 and 7.93$\pm$4.6M$_\oplus$ respectively (\cite{2013MNRAS.428.1077S}; \cite{2014ApJS..210...20M}; \cite{2014ApJ...787...80H}; \cite{2016ApJS..225....9H}; \cite{2016ApJ...822...86M}). Keck/HIRES RV analysis also detects a non-transiting planet (e) with a period and fitted mass of 982$\pm$8 days and 657$\pm$25M$_\oplus$ respectively (\cite{2014ApJS..210...20M}).
\begin{figure}
\centering
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.95\linewidth]{fourplanet_plotter.png}
\caption{The mutual inclination between the respective planets in Kepler-48, when the non-transiting planet, Kepler-48e, is initially mutually inclined by $\Delta i=10^\circ$. The black dashed line shows the evolution of the mutual inclination between the inner two transiting planets and the outer transiting planet, for when the inner two planets are treated as a single body with the same total orbital angular momentum.}
\label{fig:Kepler48}
\end{figure}
Returning to the derivation of the secular interaction in \S\ref{sec:secint}, the initial inclination of the non-transiting planet, $i_e$, with respect to the invariable plane can be generalised to
\begin{equation}
i_\mathrm{e} = \arctan\left(\frac{\sin\Delta i\left(\sum\limits_{n=1}^{3}L_n\right)}{L_\mathrm{e} + \cos\Delta i\left(\sum\limits_{n=1}^{3}L_n\right)}\right),
\end{equation}
where $L_\mathrm{e} = m_\mathrm{e}a_\mathrm{e}^{1/2}$ is proportional to the angular momentum of Kepler-48e in the low eccentricity limit, and $L_n = m_na_n^{1/2}$ for each of Kepler-48b, c, and d. The initial inclination of the transiting planets with respect to the invariable plane is therefore equal to $\Delta i - i_\mathrm{e}$.
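The generalised invariable-plane inclination above is straightforward to evaluate numerically. The sketch below does so for nominal Kepler-48 parameters (masses in Earth masses; semi-major axes in au converted from the quoted periods); the helper names are ours.

```python
# Sketch of the generalised invariable-plane inclination above, using
# L = m * a**0.5 as the low-eccentricity angular-momentum proxy.
import math

def angmom(m, a):
    """Angular-momentum proxy L = m * a**0.5 (any consistent units)."""
    return m * math.sqrt(a)

def outer_inclination(delta_i, L_outer, L_inner):
    """Inclination of the outer planet to the invariable plane (radians),
    for an initial mutual inclination delta_i (radians) to the coplanar
    inner planets, whose angular-momentum proxies are listed in L_inner."""
    L_in = sum(L_inner)
    return math.atan2(math.sin(delta_i) * L_in,
                      L_outer + math.cos(delta_i) * L_in)

# Nominal Kepler-48 values (Earth masses; au from the quoted periods):
L_e = angmom(657.0, 1.85)
L_bcd = [angmom(3.94, 0.053), angmom(14.61, 0.085), angmom(7.93, 0.23)]
i_e = outer_inclination(math.radians(10.0), L_e, L_bcd)
# The transiting planets then start at (delta_i - i_e) from that plane;
# here i_e is tiny because Kepler-48e dominates the angular momentum.
```

Because Kepler-48e carries nearly all of the angular momentum, the invariable plane almost coincides with its orbital plane.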
As the strength of the secular interaction between planets largely depends on their separation (e.g. eq. (\ref{eq:simp_plot})), we expect Kepler-48d to be affected most by perturbations from the non-transiting planet. We demonstrate this in Figure \ref{fig:Kepler48}, which shows how the mutual inclination between each of the transiting planets evolves assuming Laplace-Lagrange theory (eq. (\ref{eq:secsol})) and that Kepler-48e is initially mutually inclined by $\Delta i = 10^\circ$. The red line shows the mutual inclination between Kepler-48b and c ($\Delta i_\mathrm{bc}$), the blue between b and d ($\Delta i_\mathrm{bd}$), and the green between c and d ($\Delta i_\mathrm{cd}$). The mutual inclination between Kepler-48b and c is largely unchanged and they remain roughly coplanar. Conversely, the mutual inclinations between b and d, and between c and d, are significant and roughly equal throughout the secular evolution. For Kepler-48, therefore, it can be assumed that the inner two transiting planets are largely unaffected by the secular perturbations of Kepler-48e, but both can become significantly mutually inclined to the outer transiting planet.
As such, we treat Kepler-48b and c as a single body whose angular momentum is the sum of those of Kepler-48b and c, reducing the system to a total of three planets. With this approximation, the evolution of the mutual inclination between Kepler-48b and c with d ($\Delta i_\mathrm{bc,d}$) is shown by the dashed black line in Figure \ref{fig:Kepler48}. Treating Kepler-48b and c as a single body in this way gives a good approximation to the evolution of their mutual inclination with d.
The initial mutual inclination of Kepler-48e which causes a significant reduction in the mean probability of the inner planets transiting, $\Delta i_\mathrm{crit}$, can therefore be approximated by eq. (\ref{eq:critinc}), where the value of $K_\mathrm{simp}$ becomes
\begin{equation}
K_{\mathrm{simp}} = \frac{3 m_\mathrm{e} a_\mathrm{d}^{7/2}}{m_\mathrm{d} a_\mathrm{bc}^{1/2}a_\mathrm{e}^3}\frac{1}{b^1_{3/2}\left(\frac{a_\mathrm{bc}}{a_\mathrm{d}}\right)\left(1 + (L_\mathrm{bc}/L_\mathrm{d})\right)},
\end{equation}
with the subscripts referring to the respective planets, and the subscript 'bc' to the single effective planet with the same total angular momentum as Kepler-48b and c.
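This expression is simple to evaluate once the Laplace coefficient $b^1_{3/2}$ is available. A minimal sketch, assuming only the standard integral definition of the Laplace coefficient (the quadrature scheme and function names are ours):

```python
# Sketch: K_simp via direct quadrature of the Laplace coefficient
# b^j_s(alpha) = (1/pi) int_0^{2pi} cos(j psi) dpsi /
#                (1 - 2 alpha cos psi + alpha^2)^s
import math

def laplace_coeff(s, j, alpha, n=20000):
    """Laplace coefficient b^j_s(alpha) for 0 < alpha < 1, via the
    trapezoid rule on a uniform periodic grid (spectrally accurate)."""
    total = 0.0
    for k in range(n):
        psi = 2.0 * math.pi * k / n
        total += math.cos(j * psi) / (1.0 - 2.0 * alpha * math.cos(psi)
                                      + alpha * alpha) ** s
    return 2.0 * total / n        # = (1/pi) * (2 pi / n) * sum

def k_simp(m_e, a_e, m_d, a_d, a_bc, L_bc, L_d):
    """K_simp of the equation above, with 'bc' the merged inner pair."""
    b = laplace_coeff(1.5, 1, a_bc / a_d)
    return (3.0 * m_e * a_d ** 3.5 / (m_d * math.sqrt(a_bc) * a_e ** 3)
            / (b * (1.0 + L_bc / L_d)))
```

For small $\alpha$, $b^1_{3/2}(\alpha) \approx 3\alpha$, which provides a quick sanity check on the quadrature.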
We find that $\Delta i_\mathrm{crit} = 3.7^\circ$. This suggests that the inclination of Kepler-48e must be $\Delta i \lesssim 3.7^\circ$, otherwise the secular interaction would cause a significant reduction in the mean probability that all three inner planets are observed to transit. Under the simpler assumption that $\max|\Delta i_\mathrm{bc,d}| \lesssim R_\star/a_\mathrm{d}$, \cite{2017AJ....153...42L} also find that the inclination of Kepler-48e, considering secular interactions only, must be small, with $\Delta i \lesssim 2.3^\circ$.
\begin{figure*}
\centering
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.49\linewidth]{single_pop.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.49\linewidth]{doub_pop.png}
\caption{The smoothed distribution of the radii and the semi-major axes of planets observed by Kepler to be in systems with a single transiting planet (\textit{left}) and in systems with two transiting planets (\textit{right}). Pixel sizes are log($a$) = 0.15 by log($R_p$) = 0.1.}
\label{fig:obsa_rpdist}
\end{figure*}
\section{Application to the Kepler Dichotomy}
\label{sec:Kepdich}
As discussed in \S\ref{sec:intro}, Kepler has observed an excess of single transiting systems which cannot be explained by geometric effects alone, commonly referred to as the Kepler dichotomy (\cite{2011ApJS..197....8L}; \cite{2011ApJ...742...38Y}; \cite{2012ApJ...758...39J}; \cite{2016ApJ...816...66B}). This may suggest that there is a population of inherently single transiting systems in addition to a population of multi-planet systems with small inclination dispersions. However there may also be a population of multi-planet systems where the mutual inclination dispersion is large, increasing the probability that only a single planet is observed to transit. Here we investigate whether both these types of multi-planet systems can significantly contribute to the abundance of systems observed by Kepler to have one and two transiting planets respectively.
The Kepler systems we consider are discussed in \S\ref{sec:sample}. A method for debiasing Kepler systems to a general population of planetary systems is described in \S\ref{sec:debiasing}. We consider the scenario where planets share some inherently fixed mutual inclination in \S\ref{subsec:inher}, before considering when this mutual inclination is evolving due to the presence of an outer inclined planetary companion in \S\ref{subsec:KOIwide}. We note from the outset that we do not consider Kepler systems observed to have more than two planets. Instead we look to explore what effects an outer planet might have on observables of a subset of Kepler like systems, rather than observables of the whole Kepler population. We discuss this assumption further in \S\ref{subsec:assump}.
\subsection{Kepler Candidate Sample}
\label{sec:sample}
We select planet candidates from the cumulative Kepler objects of interest (KOI) table from the NASA exoplanet archive\footnote{exoplanetarchive.ipac.caltech.edu}, accessed on 13/09/16. The vast majority ($\sim$97\%) of the KOIs that survive the cuts detailed below and make it into our final sample are listed as being taken from the most recent Q1-17 DR24 data release. This data release is of particular note as it incorporates an automated processing of all KOIs (\cite{2016ApJS..224...12C}).
Out of the initial 8826 KOIs we consider those which orbit solar-type stars, with surface temperatures and surface gravities in the ranges 4200K $< T < $ 7000K and 4.0 $<$ log($g$) $< 4.9$ respectively. This reduces the total number of KOIs to 7446. We also find that the total number of unique Kepler stars within this range (discussed in \S\ref{sec:discussion}) is 164966, from the 'Kepler Stellar data' table. We next remove false positives, which refer to KOI light curves that are indicative of an eclipsing binary, have significant contamination from a background eclipsing binary, show significant stellar variability which mimics a planetary transit, or where instrument artefacts have produced a transit-like signal (see \cite{2014AJ....147..119C}; \cite{2014ApJ...784...45R}; \cite{2015arXiv150400707R}; \cite{2015ApJS..217...18S}; \cite{2016ApJS..224...12C}). This reduces our sample of KOIs (candidates herein) to 4072 objects. We subsequently remove objects too large to be planetary, with radii $>$22.4$R_\oplus$ (\cite{2011ApJ...728..117B}), leaving 3757 objects, after which we remove candidates with an SNR $<10$, reducing the possibility that a transit signal is caused by systematic background noise (\cite{2016ApJ...822...86M}), leaving 3327 objects. Finally we remove candidates listed as not having a satisfactory fit to the transit signal (\cite{2014ApJ...784...45R}; \cite{2015arXiv150400707R}). This gives our final sample of 3255 objects. We note that our choice of cuts means that KOI systems can become reduced in multiplicity. Our final sample includes systems which contain 1-6 candidates, with $N_{i}$ = (1, 2, 3, 4, 5, 6) = (1951, 341, 117, 43, 15, 4), i.e. 1951 systems with a single candidate, 341 systems with two candidates, etc. Herein, we consider the 1951 systems observed by Kepler to have a single transiting planet and the 341 systems observed to have two.
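The cuts above amount to a simple row filter over the KOI table. The toy sketch below applies them to dictionary records with hypothetical field names; the actual archive columns are named differently, so this is illustrative only.

```python
# Toy version of the candidate selection above; the field names are
# hypothetical, not the real NASA exoplanet archive column names.
def passes_cuts(koi):
    """Solar-type host, not a false positive, planet-like radius, SNR >= 10."""
    return (4200.0 < koi["teff"] < 7000.0
            and 4.0 < koi["logg"] < 4.9
            and not koi["false_positive"]
            and koi["radius_earth"] <= 22.4
            and koi["snr"] >= 10.0)

sample = [
    {"teff": 5700.0, "logg": 4.4, "false_positive": False,
     "radius_earth": 2.1, "snr": 25.0},   # survives all cuts
    {"teff": 5700.0, "logg": 4.4, "false_positive": True,
     "radius_earth": 2.1, "snr": 25.0},   # false positive: removed
    {"teff": 3800.0, "logg": 4.6, "false_positive": False,
     "radius_earth": 1.2, "snr": 40.0},   # host too cool: removed
]
survivors = [koi for koi in sample if passes_cuts(koi)]
```

Surviving candidates would then be grouped by host star to obtain the per-multiplicity counts $N_i$.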
The smoothed distribution of the semi-major axis and planetary radii for the single and double planet transiting systems are shown in Figure \ref{fig:obsa_rpdist}. Comparing the left and right panels of Figure \ref{fig:obsa_rpdist}, there are types of planets which are only present in single transiting systems. We briefly discuss these differences here for future reference. Large planets with short periods i.e. Hot Jupiters, are not present in Kepler systems with two transiting planets. Indeed, investigations into the formation processes of Hot Jupiters predict a lack of close companions (\cite{2009ApJ...693.1084W}; \cite{2012PNAS..109.7982S}; \cite{2015ApJ...808...14M}; \cite{2016arXiv160908110H}, see WASP-47 for an exception, \cite{2015ApJ...812L..18B}; \cite{2016A&A...595L...5A}). Long period planets are also more abundant in the population of single transiting systems. This may not necessarily indicate that long period planets inherently favour being in single transiting systems, but instead they might be the inner planet of a higher multiplicity system where the outer planets are on too long a period to produce a significant transit signal.
\begin{figure*}
\centering
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{mutinc_0_0.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{mutinc_4_4.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{mutinc_3_6.png}
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = 0.45\linewidth]{singmodpop_bestfitchi.png}
\caption{The distribution of the radii and semi-major axis of single transiting planets observed from the model population with: (\textit{top}) no third planet. (\textit{middle}) A third planet with $m_3$ = 1M$_\mathrm{J}$, $a_3$ = 1.9au and $\Delta i$ = 10$^\circ$. The total number of single transiting planets predicted by the model population is equal to that observed by Kepler. The colour scale for this panel is saturated for ease of comparison. (\textit{bottom}) A third planet with $m_3$ = 24M$_\oplus$, $a_3$ = 1.07au and $\Delta i$ = 10$^\circ$. We find the 1564 single transiting planets predicted here are a best fit to those observed by Kepler (left panel of Figure \ref{fig:obsa_rpdist}). The contours show the distribution of single transiting planets from the Kepler population. Pixel sizes are log($a$) = 0.15 by log($R_p$) = 0.1.}
\label{fig:moda_rpdist_nothird}
\end{figure*}
Finally, there appears to be an overabundance, in the population of single transiting systems, of planets with $R_\mathrm{p} \lesssim 2R_\oplus$ at periods $P < 10$ days ($\lesssim 0.03$au) (see \cite{2011ApJS..197....8L}; \cite{2012ApJ...758...39J}; \cite{2016PNAS..11312023S}; \cite{2016arXiv161009390L}). The formation processes which lead to these types of planets are unclear. It is also unknown if these objects are inherently rocky planets, or are the cores of Neptune sized planets whose envelopes have been irradiated (\cite{2015ApJ...807...45D}; \cite{2015ApJ...801...41R}; \cite{2016arXiv161009390L}). If these outlying systems are largely ignored, the question remains of whether the remaining planets in single transiting systems are part of the same underlying distribution of higher order planetary systems; i.e. could these single transiting systems contain similar planets which are not observed to transit?
For our dynamical analysis it is not the radii of these planets that are of relevance, but rather their masses. We estimate the masses of planets according to the following mass-radius relations. For radii less than 1.5$R_\oplus$ we use the rocky planet mass-radius relation from \cite{2014ApJ...783L...6W}, where density ($\rho_\mathrm{p}$) is related to radius ($R_\mathrm{p}$) through $\rho_\mathrm{p} = 2.43 + 3.39(R_\mathrm{p}/R_\oplus)$gcm$^{-3}$. For radii 1.5 $\leq$ $R_\mathrm{p}/R_\oplus$ $\leq$ 4, we use the deterministic version of the probabilistic mass-radius relation for sub-Neptune objects from \cite{2016ApJ...825...19W}, where mass ($M_\mathrm{p}$) is given by $M_\mathrm{p}/M_\oplus = 2.7(R_\mathrm{p}/R_\oplus)^{1.3}$. Once radii become $R_\mathrm{p}\gtrsim$4$R_\oplus$, deterministic mass-radius relations become uncertain due to the onset of planetary contraction under self-gravity (see \cite{2017ApJ...834...17C}). From the mass-radius relations detailed in \cite{2017ApJ...834...17C}, we find that their 'Neptunian worlds' deterministic relation of $M_{\mathrm{p}}/M_\oplus = (1.23R_\mathrm{p}/R_\oplus)^{1.7}$ gives the most sensible masses for all planets with $R_\mathrm{p} > 4R_\oplus$.
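These piecewise relations can be written down directly. The sketch below converts the rocky-branch density to a mass in Earth units; the constants and function name are our own choices.

```python
# Piecewise mass-radius conversion described above (Weiss & Marcy 2014
# rocky branch; Wolfgang et al. 2016 sub-Neptunes; Chen & Kipping 2017
# 'Neptunian worlds' branch).
import math

R_EARTH_CM = 6.371e8    # Earth radius in cm
M_EARTH_G = 5.972e27    # Earth mass in g

def mass_from_radius(r_earth):
    """Planet mass (Earth masses) from radius (Earth radii)."""
    if r_earth < 1.5:
        rho = 2.43 + 3.39 * r_earth                        # g cm^-3
        volume = (4.0 / 3.0) * math.pi * (r_earth * R_EARTH_CM) ** 3
        return rho * volume / M_EARTH_G                    # rocky branch
    if r_earth <= 4.0:
        return 2.7 * r_earth ** 1.3                        # sub-Neptunes
    return (1.23 * r_earth) ** 1.7                         # Neptunian worlds
```

The branches do not join exactly at the boundaries; for this kind of statistical mass assignment such small discontinuities are unimportant.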
\subsection{De-biasing the Kepler population}
\label{sec:debiasing}
As previously alluded to, Kepler only observes planetary systems that have their orbital planes aligned with our line of sight. It is therefore sensible to suggest that there is a much larger, underlying population of planetary systems within which only some are observed to transit. We refer to this underlying population of planetary systems as the \textit{model population}. Conversely, we refer to the population of planetary systems actually observed by Kepler as the \textit{Kepler population}. We assume that Kepler systems are representative of planetary systems in the model population once geometrical biases have been taken into account.
To construct an underlying model population, our primary goal is for it to reproduce the number and planet parameter distribution seen in the Kepler population for systems with two transiting planets (Figure \ref{fig:obsa_rpdist}, right). To achieve this we first assume that all stars have either two or zero planets. Any system which hosts two planets is assumed to be identical to one of the 341 double transiting systems observed by Kepler. We assume the abundance of a specific Kepler-like system in the model population is equal to the inverse of the mean double transit probability calculated by the method outlined in \S\ref{sec:semianal}. Systems with inherently low mean double transit probabilities are therefore probabilistically assumed to be more numerous in the model population. By construction, therefore, each unique system in the model population would be expected to be observed with both planets transiting exactly once, and so the model population predicts the correct distribution shown in the right panel of Figure \ref{fig:obsa_rpdist}. We note that a model population generated in this way is similar to the method described in \cite{2012ApJ...758...39J}, albeit with their work predicting the correct number and planet parameter distribution seen in the Kepler population for systems with three transiting planets.
The sum of the inverse mean double transit probabilities of all 341 double transiting systems gives the total number of planetary systems in the model population. If we assume that all of the two planet systems are coplanar, we find the model population includes 16517 systems (the remaining 148449 stars observed by Kepler are assumed to have no planets).
Each system in the model population can be observed to have a single transiting planet, depending on the viewing angle. The sum of the mean single transit probabilities for each of the 16517 systems in the coplanar model population gives the total number of single transiting planets, $N_\mathrm{sing}$, that would be expected to be observed. Here the mean single transit probability for a given system is equal to $R_\star/a_1 - R_\star/a_2$, where $a_1$, $a_2$ are the semi-major axes of each planet when $a_2 > a_1$ and $R_\star$ is the radius of the host star. We find $N_\mathrm{sing}$ = 589, which clearly underestimates the 1951 single transiting systems in the observed Kepler population, by a factor of $\sim3$. This is the Kepler dichotomy discussed in \S\ref{sec:intro}. We show the smoothed distribution of the semi-major axes and planet radii for these 589 predicted single transiting planets in the top left panel of Figure \ref{fig:moda_rpdist_nothird}, which when compared with the left panel of Figure \ref{fig:obsa_rpdist} clearly shows an under-prediction of the single transiting planets observed by Kepler.
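In the coplanar limit this de-biasing step reduces to a few lines, since the mean double transit probability is simply $R_\star/a_2$ and the single-only probability is $R_\star/a_1 - R_\star/a_2$. A toy sketch with our own function name:

```python
# Toy coplanar de-biasing: weight each observed double by the inverse of
# its mean double-transit probability (R_star/a_2 when coplanar), then
# count the singles those underlying systems would contribute.
def debias_coplanar(systems):
    """systems: iterable of (r_star, a1, a2) in consistent units, a2 > a1.
    Returns (n_model, n_sing): the implied number of underlying systems
    and the expected number of observed single-transit detections."""
    n_model = 0.0
    n_sing = 0.0
    for r_star, a1, a2 in systems:
        p_double = r_star / a2                  # coplanar: set by the outer
        p_single = r_star / a1 - r_star / a2    # only the inner transits
        weight = 1.0 / p_double                 # abundance in the model pop.
        n_model += weight
        n_sing += weight * p_single
    return n_model, n_sing
```

Each observed double thus contributes $a_2/a_1 - 1$ expected singles, so the predicted ratio of singles to doubles depends only on the period ratios of the observed pairs.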
\begin{figure}
\centering
\includegraphics[width=\linewidth]{inhermutinc_plot_combined_output.png}
\caption{(\textit{top}) The expected number of single transiting planets observed from a model population generated from Kepler systems with two planets that are mutually inclined by $\Delta i_{12}$. The number of double transiting systems predicted by the model population is constant with 341 systems. (\textit{bottom}) The associated modified $\chi^2$ comparing types of single transiting planets predicted by the model population with the Kepler population. The minimum modified $\chi^2$ value corresponds to $\Delta i_{12} = 3.6^\circ$.}
\label{fig:chis}
\end{figure}
\subsection{Inherently inclined multi-planet systems}
\label{subsec:inher}
From transit duration variation (TDV) studies, the mutual inclinations of planets in multi-transiting systems are inferred to be small, at $\lesssim2-3^\circ$ (\cite{2012ApJ...761...92F}; \cite{2014ApJ...790..146F}). We note that this range of mutual inclination also best fits the distribution of impact parameters in the Kepler population. Perhaps then, if two planets are assumed to be inherently mutually inclined by a small amount, this may account for the abundance of single transiting planets in the Kepler population. Consider a fixed mutual inclination $\Delta i_{12}$ between the two planets in each of the 341 double transiting systems. The mean single transit probability for each planet in a given system, $P_{\mathrm{sing,1}}$ and $P_{\mathrm{sing,2}}$ respectively, where $P_{\mathrm{sing,1}} > P_{\mathrm{sing,2}}$, is now given by
\begin{equation}
\begin{split}
P_{\mathrm{sing,1}} = \frac{R_{\star}}{a_{1}} - P \\
P_{\mathrm{sing,2}} = \frac{R_{\star}}{a_{2}} - P,
\end{split}
\label{eq:Psing}
\end{equation}
where $P$ is the mean double transit probability and $P_\mathrm{sing,1} + P_\mathrm{sing,2}$ is the total mean single transit probability for this system. As $\Delta i_{12}$ increases, the mean double transit probability decreases (Figure \ref{fig:pi}). Therefore for a fixed population of double transiting systems considered here, the expected abundance of single transiting systems increases. Figure \ref{fig:chis} shows how $N_\mathrm{sing}$ increases with $\Delta i_{12}$ for when the number of double transiting systems is kept constant at 341 systems. If $\Delta i_{12} = 4.4^\circ$, we find $N_\mathrm{sing} = 1951$, i.e. the number of single transiting planets expected to be observed from the model population is equal to the number in the observed Kepler population. This suggests that mutual inclinations in Kepler systems observed with two planets must be less than 4.4$^\circ$, or the number of single planet systems observed by Kepler would be too large relative to the number of doubles.
We show the distribution of the semi-major axes and radii of the expected single transiting planets for $\Delta i_{12} = 4.4^\circ$ in the top right panel of Figure \ref{fig:moda_rpdist_nothird}. Comparing with the left panel of Figure \ref{fig:obsa_rpdist}, there is an overabundance of predicted single transiting planets with radii of $\sim2.5$R$_\oplus$ and semi-major axes of $\sim$0.15au. This is due to the model population compensating for not being able to reproduce all types of single transiting planets in the Kepler population (e.g. Hot Jupiters, discussed in \S\ref{sec:sample}). Herein, therefore, when discussing how well a model population predicts the Kepler population of single transiting planets, we refer to how well the \textit{types} of planets from each population compare, rather than the total number. That is, we look to find which value of $\Delta i_{12}$ causes the associated version of the top right panel of Figure \ref{fig:moda_rpdist_nothird} to be most like the left panel of Figure \ref{fig:obsa_rpdist}.
We judge the success of this comparison using a modified $\chi^2$ minimisation test, in which we simply sum the square of the difference between the number of singles with a given radius and semi-major axis expected from the model population and that of the observed Kepler population. Varying $\Delta i_{12}$, we therefore look to identify a minimum in the modified $\chi^2$ space, without caring about the modified $\chi^2$ value itself. We show this in Figure \ref{fig:chis}, with the modified $\chi^2$ minimum occurring for $\Delta i_{12} = 3.6^\circ$. The distribution of the single transiting planets expected from the model population for this mutual inclination is shown in the bottom left panel of Figure \ref{fig:moda_rpdist_nothird}. Comparing with the left panel of Figure \ref{fig:obsa_rpdist}, these single transiting planets share a stronger agreement with those in the Kepler population than for the model with $\Delta i_{12} = 4.4^\circ$, which predicted $N_\mathrm{sing} = 1951$ (top right panel of Figure \ref{fig:moda_rpdist_nothird}). We note that the total number of single transiting planets expected from the model population for $\Delta i_{12} = 3.6^\circ$ is 1504. We assume therefore that the remaining 1951-1504 = 447 single transiting planets in the Kepler population not fit by this model population are \textit{inherently single planet systems}.
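The modified $\chi^2$ statistic is nothing more than a sum of squared per-pixel differences between the two binned $(\log a, \log R_\mathrm{p})$ distributions. A minimal sketch with plain nested lists (there is no per-pixel variance weighting, which is why we call it 'modified'):

```python
def modified_chi2(model_hist, kepler_hist):
    """Sum of squared differences between two equal-shape 2D count grids
    (nested lists). Only relative values across a parameter scan matter,
    so no normalisation by per-pixel variance is applied."""
    return sum((m - k) ** 2
               for row_m, row_k in zip(model_hist, kepler_hist)
               for m, k in zip(row_m, row_k))
```

Scanning $\Delta i_{12}$ (or the outer-planet parameters in the next subsection) and keeping the minimum of this quantity identifies the best-fitting model population.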
Despite the model population for $\Delta i_{12} = 3.6^\circ$ giving the lowest modified $\chi^2$ value, this mutual inclination is perhaps larger than that suggested by TDV studies. We note however that mutual inclination estimates from TDV studies consider a range of planet multiplicities. For example \cite{2012ApJ...761...92F} consider a model population of planetary systems with 1-7+ planets and predict that $\sim$50\% of observed planetary systems should contain a single planet, with the remaining systems containing multiple planets with mutual inclinations of $\lesssim3^\circ$. In order to properly predict the inherent mutual inclination in the multi-planet systems considered in this work therefore, it would be necessary to simultaneously model the TDV data directly. We consider such an analysis as part of future work. Instead in \S\ref{subsec:KOIwide} we consider the possibility that Kepler planets form coplanar, but end up mutually inclined due to perturbations from an outer planetary companion on an inclined orbit. This may provide another way to predict the correct abundance of single transiting systems observed by Kepler, and also result in a low mutual inclination for those systems with two transiting planets.
\subsection{Including an inclined planetary companion}
\label{subsec:KOIwide}
We now consider the effects of a hypothetical outer planet in each of the systems in the model population. We first amend the assumption from \S\ref{sec:debiasing} and assume that all stars host either three or zero planets. Any system which hosts three planets is assumed to be identical to one of the 341 double transiting systems from the Kepler population, plus an additional outer planet. The outer planet is assumed to have the same mass and semi-major axis in all systems and is initially inclined to the inner planets, which start coplanar, causing the mutual inclination between the inner planets to evolve according to eq. (\ref{eq:mutinc}). We assume that the outer planet satisfies the Hill stability criterion of $\Delta = 2\sqrt{3}$ (\cite{1999MNRAS.304..793C}) with the outer of the inner two planets for all 341 considered systems, where $\Delta = (a_{3} - a_{2})/R_{H}$ and
\[
R_{H}= \left(\frac{m_{2} + m_3}{3M_{\star}}\right)^{1/3}\left(\frac{a_{2} + a_3}{2}\right),
\]
where $M_{\star}$ is the stellar mass. If this criterion is not satisfied, we move the outer planet for this specific system until it is. For example, when the outer planet is assumed to have a semi-major axis and mass of 1au and 1$\mathrm{M}_\oplus$ respectively, we find 6 of the 341 systems do not satisfy this stability criterion and the outer planet needs to be moved to a mean semi-major axis of 1.2au. When the outer planet has a semi-major axis and mass of 1au and 10$\mathrm{M}_\mathrm{J}$ respectively we find 22 of the 341 systems do not satisfy the stability criterion and the outer planet needs to be moved to a mean semi-major axis of 1.4au.
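The stability adjustment described above can be sketched as an iterative outward nudge; the 1\% step size and function names are our own choices.

```python
# Sketch of the Hill-separation adjustment above: push the outer planet
# outward until Delta = (a3 - a2) / R_H >= 2 * sqrt(3).
import math

DELTA_CRIT = 2.0 * math.sqrt(3.0)

def mutual_hill_radius(m2, m3, a2, a3, m_star):
    """R_H with planet masses in the same units as the stellar mass."""
    return ((m2 + m3) / (3.0 * m_star)) ** (1.0 / 3.0) * (a2 + a3) / 2.0

def stable_a3(m2, m3, a2, a3, m_star):
    """Smallest a3 >= the input a3 satisfying Delta >= 2*sqrt(3),
    found by repeatedly moving the outer planet out by 1%."""
    while (a3 - a2) < DELTA_CRIT * mutual_hill_radius(m2, m3, a2, a3, m_star):
        a3 *= 1.01
    return a3
```

For example, a 1M$_\mathrm{J}$ outer planet at 1au outside a 10M$_\oplus$ planet at 0.2au around a solar-mass star already satisfies the criterion and is left in place, whereas the same planet starting at 0.25au must be moved outward.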
Each one of the 341 systems is again replicated enough times in the model population to be expected to be observed exactly once. That is, the inverse of the mean double transit probability of the inner two planets gives the abundance of each of the 341 systems in the model population. The associated mean single transit probabilities for each of the inner two planets are of the same form as eq. (\ref{eq:Psing}). Summing the mean single transit probabilities over every system in the model population therefore again gives the number of single transiting planets expected to be observed from a model population that also reproduces the number of double transiting systems.
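The replication and counting step above can be sketched as follows. This is a schematic of the bookkeeping only: the actual mean transit probabilities come from the semi-analytic method of the paper (eq. Psing), and the probabilities in the example are invented.

```python
# Sketch of the debiasing step: each observed double-transit system is
# weighted by the inverse of its mean double transit probability, so that
# each is expected to be observed exactly once; summing the weighted mean
# single transit probabilities then gives the expected number of single
# transiting planets from the same model population.
def expected_singles(systems):
    """systems: list of dicts with mean double transit probability 'p_doub'
    and a pair of mean single transit probabilities 'p_sing', each the
    probability of observing only that planet in transit."""
    total = 0.0
    for s in systems:
        weight = 1.0 / s["p_doub"]   # replicas so each double is seen once
        total += weight * sum(s["p_sing"])
    return total
```

For instance, a system with a mean double transit probability of 1\% and mean single transit probabilities of 2\% and 3\% would contribute 100 replicas and hence 5 expected single transiting planets.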
Similarly to the modelling approach in \S\ref{subsec:inher}, we look to identify which mass ($m_3$), semi-major axis ($a_3$) and initial inclination ($\Delta i$) of the outer planet cause the types of single transiting systems expected from the associated model population to be most like those in the observed Kepler population. For a given combination of $a_3$, $m_3$ and $\Delta i$ we therefore calculate the modified $\chi^2$ value described in \S\ref{subsec:inher}. We show these modified $\chi^2$ values in Figure \ref{fig:chispace} for an outer planet with $\Delta i$ = 10$^\circ$ (top panel), $m_3$ = 1M$_\mathrm{J}$ (middle panel) and $a_3$ = 2au (bottom panel). Inclinations of $\Delta i \gg 20^\circ$ where eq. (\ref{eq:mutinc}) is expected to break down are included for completeness.
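The grid scan over outer planet parameters can be sketched as below. The `modified_chi2` argument is a placeholder for the statistic defined in the earlier section (comparing the predicted and observed single-transit radius and semi-major axis distributions); the grids and the quadratic test function are illustrative assumptions.

```python
# Sketch of the (a3, m3, Delta i) parameter scan: evaluate the modified
# chi^2 for every grid combination and keep the minimum. `modified_chi2`
# is a stand-in for the full model-population comparison of the paper.
import itertools

def scan(a3_grid, m3_grid, di_grid, modified_chi2):
    best, best_params = float("inf"), None
    for a3, m3, di in itertools.product(a3_grid, m3_grid, di_grid):
        chi2 = modified_chi2(a3, m3, di)
        if chi2 < best:
            best, best_params = chi2, (a3, m3, di)
    return best_params, best
```

Each panel of Figure \ref{fig:chispace} corresponds to one 2D slice of such a scan, with the third parameter held fixed.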
From the top panel in Figure \ref{fig:chispace}, it is clear that there is a ``valley'' of semi-major axes and masses of the outer planet which cause a significantly lower modified $\chi^2$ value. Such an additional planet can therefore be assumed to predict single transiting systems whose radii and semi-major axes better fit those in the Kepler population. However there is also a distinct minimum in the modified $\chi^2$ space when the outer planet has a semi-major axis of $\sim$1au for a mass of $\sim$30M$_\oplus$. Similarly, in the other panels of Figure \ref{fig:chispace} there appear to be distinct minima. For the middle panel this occurs for an outer planet (of $m_3$ = 1M$_\mathrm{J}$) with a semi-major axis of 1.38au, initially inclined to the inner planets by $\Delta i = 5.7^\circ$. Finally for the bottom panel, the minimum occurs for a mass of $\sim$6M$_\mathrm{J}$ and inclination of 6$^\circ$ (where $a_3 = 2$au). Generally, we find the distribution of single transiting planets expected from the model population is more representative of those in the Kepler population for $3^\circ \lesssim \Delta i \lesssim 10^\circ$.
The bottom right panel of Figure \ref{fig:moda_rpdist_nothird} gives the distribution of single transiting planets expected from the model population when the outer planet exists in a minimum of the modified $\chi^2$ space with $a_3$ = 1.07au, $m_3$ = 24M$_\oplus$ and $\Delta i$=10$^\circ$ (white circle in the top panel of Figure \ref{fig:chispace}). We note that the total number of single transiting planets expected from this model population is $1564$. The outer planet parameters which predict $N_\mathrm{sing} = 1564$ are shown by the white lines in Figure \ref{fig:chispace}. These lines highlight that while many outer planet parameters can predict $N_\mathrm{sing}=1564$, some predict single transiting planets which are more representative of those in the Kepler population. We note that $N_\mathrm{sing}$ predicted by the same range of outer planet parameters from Figure \ref{fig:chispace} is shown in Appendix \ref{sec:Ntot}.
\begin{figure}
\centering
\includegraphics[trim={0cm 0.5cm 0cm 0cm}, width = 0.9\linewidth]{prob_aM_rad_a_dist_chi_plotter.png}
\includegraphics[trim={0cm 0.5cm 0cm 0cm}, width = 0.9\linewidth]{prob_ai_rad_a_dist_chi_plotter.png}
\includegraphics[trim={0cm 0.8cm 0cm 0cm}, width = 0.9\linewidth]{prob_Mi_rad_a_dist_chi_plotter.png}
\caption{Modified $\chi^2$ value comparing the types of single transiting planets predicted by the model with the Kepler population. For the top panel $\Delta i$ = 10$^\circ$, for the middle panel $m_3$ = 1$\mathrm{M_J}$ and for the bottom panel $a_3$ = 2au. Laplace-Lagrange theory is expected to break down for $\Delta i \gg 20^\circ$. The red dashed line refers to a rough RV detection threshold. The white line shows where the model population predicts $N_\mathrm{sing}=1564$. The white triangle and circle give the third planet parameters used to produce the middle and bottom panels of Figure \ref{fig:moda_rpdist_nothird} respectively.}
\label{fig:chispace}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\subsection{Combining inherently mutually inclined and outer planet populations}
\label{subsec:hybrid}
In reality it is likely that the total number of single planet transiting systems observed by Kepler ($N_\mathrm{sing, Kep} = 1951$) is contributed to by different populations of planetary systems. These may include a number of inherently single planet systems ($N_\mathrm{sing, inh}$) in addition to a number of single transiting planets observed from a population of two planet systems which have a fixed mutual inclination of $\Delta i_{12}$ ($N_\mathrm{sing, \Delta i_{12}}$). They may also include a number of single transiting planets which are observed from a population of initially coplanar two planet systems interacting with an inclined planetary companion ($N_\mathrm{sing, planet}$). Hence in general, it can be considered that
\begin{equation}
N_\mathrm{sing, Kep} = N_\mathrm{sing, inh} + N_{\mathrm{sing},\Delta i_{12}} + N_\mathrm{sing, planet}.
\label{eq:combNsing}
\end{equation}
Here we make the assumption that the total number of double transiting systems observed by Kepler ($N_\mathrm{doub, Kep} = 341$) is made up of a fraction $f$ that are two planet systems with an inherent mutual inclination and a fraction $(1-f)$ that are two planet systems with an inclined outer companion. We can thus rewrite eq. (\ref{eq:combNsing}) as
\begin{equation}
\begin{split}
N_\mathrm{sing, Kep} = N_\mathrm{sing, inh} + f(N_\mathrm{sing, N doub=341})_{\Delta i_{12}} \\+ (1-f)(N_\mathrm{sing, N doub=341})_\mathrm{planet},
\label{eq:combN}
\end{split}
\end{equation}
where ($N_\mathrm{sing,Ndoub=341}$)$_{\Delta i_{12}}$ is the number of singles that would have been produced from the population of two planet systems with a fixed mutual inclination of $\Delta i_{12}$, had it been numerous enough to reproduce the 341 double transiting Kepler systems (which is shown in Figure \ref{fig:chis} as a function of $\Delta i_{12}$). Conversely ($N_\mathrm{sing,Ndoub=341}$)$_\mathrm{planet}$ is the number of singles that would have been produced from the population of two planet systems which are perturbed by an outer companion, had it been numerous enough to reproduce the 341 double transiting systems. We estimate the number of inherently single planet systems to be $N_\mathrm{sing,inh} $ = 447 from \S\ref{subsec:inher}. We note that $N_\mathrm{sing,inh}$ will change for different values of $\Delta i_{12}$, however for simplicity we keep it constant at 447.
For the assumed $N_\mathrm{sing,inh}$ and an assumed fixed mutual inclination for the fraction $f$ of the double transiting systems that are inherently inclined, eq. (\ref{eq:combN}) means that the number of single transiting systems observed by Kepler can be reproduced by a specific combination of the fraction $(1-f)$ of double transiting systems that have an outer planet and the properties of those planetary systems, which determine the ratio of single to double transiting systems from this population (i.e. ($N_\mathrm{sing,Ndoub=341}$)$_\mathrm{planet}$). This combination is plotted in Figure \ref{fig:hybrid}, which can be read alongside Figure \ref{fig:Nsing} to determine the outer planet parameters required to reproduce the required ($N_\mathrm{sing, N doub=341}$)$_\mathrm{planet}$. For example, for $f=0.2$ and $\Delta i_{12} = 2^\circ$, ($N_\mathrm{sing, N doub=341}$)$_\mathrm{planet}$ = 1676 from Figure \ref{fig:hybrid}, which from Figure \ref{fig:Nsing} would be reproduced by an outer planet with $a_3 = 2$au, $m_3 = 132$M$_\oplus$ and $\Delta i$ = 10$^\circ$. For $f=0.5$, ($N_\mathrm{sing, N doub=341}$)$_\mathrm{planet}$ is increased to 2192, requiring the mass of this outer planet to be increased to $m_3 = 955$M$_\oplus$ (for $a_3 = 2$au and $\Delta i$ = 10$^\circ$). The outer planet parameters required to produce ($N_\mathrm{sing, N doub=341}$)$_\mathrm{planet}$ are therefore extremely sensitive to the value of $f$. However, increasing the value of $\Delta i_{12}$ for a given value of $f$ increases the value of ($N_\mathrm{sing,Ndoub=341}$)$_{\Delta i_{12}}$ and hence decreases ($N_\mathrm{sing, N doub=341}$)$_\mathrm{planet}$, as seen in Figure \ref{fig:hybrid}, requiring an outer planet which is a weaker perturber of the inner planets.
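Rearranging eq. (\ref{eq:combN}) for the number of singles that the outer-planet population must supply gives a simple relation, sketched below. The Kepler totals are taken from the text; the ($N_\mathrm{sing,Ndoub=341}$)$_{\Delta i_{12}}$ values passed in below are illustrative stand-ins for numbers read off Figure \ref{fig:chis}.

```python
# Rearranging eq. (combN): the singles that must come from the outer-planet
# population, given the fraction f of inherently inclined doubles and the
# singles those systems would supply. Constants are from the text; the
# n_sing_di12 arguments used in examples are illustrative only.
N_SING_KEP = 1951   # single transiting systems observed by Kepler
N_SING_INH = 447    # estimated inherently single planet systems

def required_n_planet(f, n_sing_di12):
    """(N_sing, Ndoub=341)_planet required to close eq. (combN)."""
    return (N_SING_KEP - N_SING_INH - f * n_sing_di12) / (1.0 - f)
```

For $f=0$ this reduces to $1951-447=1504$ singles from the outer-planet population alone, and increasing the singles supplied by the inherently inclined population decreases the requirement on the outer-planet population, consistent with the trend described above.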
\begin{figure}
\centering
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = \linewidth]{hybrid_pop_frac_output.png}
\caption{The number of single transiting planets that must be predicted by a population of two planet systems with an outer planetary companion, assuming that a fraction ($1-f$) of observed Kepler systems host such systems. The remaining fraction of observed Kepler systems is assumed to be two planet systems inherently mutually inclined by $\Delta i_{12}$.}
\label{fig:hybrid}
\end{figure}
It should be noted that $f$ and $1-f$ are not equivalent to the underlying fraction of stars that host a two planet system with a fixed mutual inclination, or a two planet system with an outer companion respectively. However if $f$ is known, such fractions for the underlying population of stars can be estimated through occurrence rate calculations. We discuss such calculations of occurrence rates in \S\ref{subsec:occurr}, however it is first necessary to estimate a value for $f$, which we discuss below.
\subsection{Comparing inherently mutually inclined and outer planet populations}
\label{subsec:disting}
From \S\ref{subsec:inher}, a sole population of two planet systems which are inherently mutually inclined by $\Delta i = 3.6^\circ$ (i.e. when $f = 1$) can reproduce a population of single and double transiting systems representative of those observed by Kepler (Figure \ref{fig:moda_rpdist_nothird}). However, from \S\ref{subsec:KOIwide} a sole population of two planet systems with an outer planet (i.e. $f = 0$) can also reproduce a population of single and double transiting systems representative of those observed by Kepler (Figure \ref{fig:chispace}). Here we look to differentiate between these two models by considering the predicted distribution of mutual inclinations that would be observed in the two planet populations for each model. We note that combining these two models in the way described in \S\ref{subsec:hybrid} (i.e. when $0<f<1$) would then give some intermediate distribution of mutual inclinations within the overall two planet population.
For the model in which the two planets have an inherent mutual inclination of $\Delta i = 3.6^\circ$, that distribution is narrowly peaked at 3.6$^\circ$ (see Figure \ref{fig:Nexp}). In contrast, for the model in which two planets are perturbed by an inclined outer planet, the distribution of mutual inclinations is biased toward coplanar systems. This is because, while the outer planet induces a significant mutual inclination between the inner planets, as required to reproduce the correct ratio of single to double transiting systems, the inclination is not always large (see Figure \ref{fig:prob_time}) and the probability of witnessing a double transit system is much higher when their mutual inclination is low. Consider an outer companion with $m_3 = 24$M$_\oplus$, $a_3 = 1.07$au and $\Delta i = 10^\circ$, which was in a minimum of the modified $\chi^2$ space (white circle, Figure \ref{fig:chispace} top). Weighting the secularly evolving mutual inclinations between the inner two planets in the 341 considered systems by the associated double transit probability gives the predicted distribution of mutual inclinations which are most likely to be observed. This distribution is shown by the black line in Figure \ref{fig:Nexp}. It is clear that the most likely observed mutual inclination is when the inner two planets are coplanar. Moreover the number of systems expected to be observed with mutual inclinations beyond 0.5$^\circ$ drops to negligible values.
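The probability weighting described above can be sketched as follows. The inclination time series and the transit probability model below are synthetic stand-ins for the secular evolution and the semi-analytic double transit probabilities of the paper; only the weighting logic is meant to be illustrative.

```python
# Sketch of the weighting behind the predicted observed distribution of
# mutual inclinations: the secularly evolving mutual inclination, sampled
# uniformly in time, is weighted by the double transit probability at that
# inclination. The toy probability below strongly favours coplanarity.
import numpy as np

def observed_inclination_hist(di_samples, p_doub_samples, bins):
    """Normalised histogram of mutual inclinations weighted by the
    double transit probability."""
    hist, edges = np.histogram(di_samples, bins=bins,
                               weights=np.asarray(p_doub_samples, float))
    return hist / hist.sum(), edges

# Synthetic example: inclination oscillating between 0 and 2*Delta i for
# Delta i = 5 deg, with a transit probability peaked at low inclination.
t = np.linspace(0.0, 2.0 * np.pi, 2000)
di = 10.0 * (1.0 - np.cos(t)) / 2.0          # degrees, oscillates 0..10
p_doub = np.exp(-(di / 0.5) ** 2)            # toy probability model
hist, edges = observed_inclination_hist(di, p_doub,
                                        bins=np.linspace(0.0, 10.0, 21))
```

Even though the system spends time at large mutual inclination, the weighted distribution is dominated by the near-coplanar phases, reproducing the qualitative behaviour of the black line in Figure \ref{fig:Nexp}.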
\begin{figure}
\centering
\includegraphics[trim={0cm 0cm 0cm 0cm}, width = \linewidth, height = 7.43cm]{mut_inc_dist_N.png}
\caption{Predicted distribution of mutual inclinations between the two planets in the observed Kepler double transit population for different model populations that both produce the correct number of double and single transiting systems. The grey line refers to the model where the two planets are inherently inclined by $\Delta i_{12} = 4.4^\circ$. The black line refers to the model where the two planets are secularly perturbed by an outer companion with $m_3$ = 1M$_\mathrm{J}$, $\Delta i = 10^\circ$ and $a_3$=1.9au.}
\label{fig:Nexp}
\end{figure}
From transit duration variation studies, the distribution of mutual inclinations between planets in multi-planet Kepler systems is peaked at $\sim 2^\circ$ (\cite{2012ApJ...761...92F}; \cite{2014ApJ...790..146F}), noting however that these works consider different planet populations to those considered here, as discussed in \S\ref{subsec:inher}. Combining the two above models to produce a similar distribution in mutual inclinations may therefore allow for $f$ to be determined. As part of future work we look to combine the two models in this way to predict a value of $f$, as well as to model the TDVs of the planetary systems considered in this work directly to predict the distribution of inherent mutual inclinations. For example, if a fraction of two planet systems observed by Kepler are considered to have a fixed mutual inclination of $\Delta i_{12} = 4^\circ$, then in order to reproduce a distribution of mutual inclinations that peaks at $\sim$2$^\circ$ from modelling of TDVs, it might be expected that $f \sim 0.5$.
An additional method to estimate $f$ might be to consider whether the hypothetical outer planets considered in this work would have been detectable by other means. It is expected that RV studies would be most sensitive to such outer planetary companions. On Figure \ref{fig:chispace} we plot a rough constraint from RV studies, shown by the red dashed lines, assuming a detection threshold of $\sim$2m/s. Outside of 5au we assume RV studies are not sensitive to planets due to their long periods. Planets above or to the left of these lines would therefore be detectable with this level of RV precision. This detection threshold suggests that a wide orbit planet located in the minimum of the modified $\chi^2$ values in Figure \ref{fig:chispace} (white circle) should be just detectable by RV studies. This assumes however that all Kepler systems with two planets host this outer companion, i.e. $f=0$. From Figure \ref{fig:hybrid}, and as highlighted in \S\ref{subsec:hybrid}, if $f> 0$ a planet with a larger mass, shorter period or larger inclination is required to reproduce the total number of single transiting systems observed by Kepler. Such outer planets should be readily detectable by RV surveys. For example, for the values of $f=0.2$ and $f=0.5$ for $\Delta i_{12} = 2^\circ$ considered in \S\ref{subsec:hybrid}, both of the outer planets in these cases would be expected to be detectable by RV surveys. Due to the inherent faintness of Kepler stars, few have been extensively studied for wide orbit planets. We suggest therefore that detailed follow-up RV studies of Kepler systems would allow for $f$ to be constrained. Generally, for example, a low yield of outer planets in RV studies would suggest that $f$ is high, and vice versa.
\subsection{Occurrence Rates}
\label{subsec:occurr}
Similar to that discussed specifically for Kepler systems in \S\ref{subsec:hybrid}, consider that the underlying population of planetary systems contains three possible types of planetary systems. These include inherently single planet systems, two planet systems which have a fixed mutual inclination of $\Delta i_{12}$ and two planet systems which are being perturbed by an inclined outer planet. In \S\ref{subsec:hybrid} it was shown that combining these systems with a free parameter $f$, which describes the fraction of the observed double transiting population that are two planet systems with a fixed mutual inclination, recovers the total number of single and double transiting systems observed by Kepler.
However this value of $f$ is not the same as the fraction of the underlying population of stars that have two planets that are inherently mutually inclined. Here we define the occurrence rate of a given population to be the fraction of stars which would be expected to host such systems. Occurrence rates in this work can be estimated by taking the ratio of the number of systems in a given model population ($N_\mathrm{mod}$) to the total number of stars observed by Kepler ($N_\mathrm{Kep}$). The individual occurrence rate for the inherently single planet systems is therefore given by ($N_\mathrm{mod}/N_\mathrm{Kep}$)$_\mathrm{inh}$, for the two planet systems with the fixed mutual inclination of $\Delta i_{12}$ by ($N_\mathrm{mod}/N_\mathrm{Kep}$)$_{\Delta i_{12}}$ and for the two planet systems being perturbed by an inclined outer planet by ($N_\mathrm{mod}/N_\mathrm{Kep}$)$_\mathrm{planet}$. For example, the population of two planet systems which were inherently mutually inclined by 3.6$^\circ$ (the $f=1$ case), i.e. that which predicted a population of single transiting planets representative of those observed by Kepler (\S\ref{subsec:inher}), contained 43807 systems in the model population. From \S\ref{sec:sample} the total number of Kepler stars was 164966. Therefore the occurrence rate for this type of system is ($N_\mathrm{mod}/N_\mathrm{Kep}$)$_{\Delta i_{12}}$ = 27\%. Conversely, the population of two planet systems which were perturbed by an outer companion with $m_3=24$M$_\oplus$, $a_3=1.07$au and $\Delta i = 10^\circ$ (white circle, Figure \ref{fig:chispace} top), i.e. the $f=0$ case, contained 42733 systems in the associated model population. Therefore the associated occurrence rate of this type of system is ($N_\mathrm{mod}/N_\mathrm{Kep}$)$_\mathrm{planet}$ = 26\%.
The calculation of the occurrence rate for the population of inherently single planet systems is slightly different to that described above. From \S\ref{subsec:inher}, assume that there are 447 inherently single planet systems (noting that this is subject to the value of $\Delta i_{12}$). The distribution of the semi-major axes of these 447 planets is equal to the difference between the distributions of semi-major axes for the single transiting systems observed by Kepler and those predicted by the population of two planet systems with a fixed mutual inclination of $\Delta i_{12} = 3.6^\circ$, i.e. the difference between the left panel of Figure \ref{fig:obsa_rpdist} and the bottom left panel of Figure \ref{fig:moda_rpdist_nothird}. The number of inherently single planet systems in a model population is then the sum of the inverse of the single transit probabilities ($R_\star/a$) of all these 447 planets. We find this model population contains 15852 systems, predicting an occurrence rate of inherently single planet systems of 9.6\%. This is large compared with the occurrence rate of Hot Jupiters ($\sim1-2\%$ e.g. \cite{2005PThPS.158...24M}; \cite{2008PASP..120..531C}; \cite{2011arXiv1109.2497M}; \cite{2012ApJ...753..160W}; \cite{2016A&A...587A..64S}). We therefore expect that our population of inherently single planet systems is dominated by a different population, such as those described in \S\ref{sec:sample} which are poorly constrained.
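The occurrence-rate arithmetic above is a simple ratio of model-population counts to the total number of Kepler stars; the counts and the star total are those quoted in the text.

```python
# Occurrence rates quoted above: model-population sizes divided by the
# total number of Kepler stars (values from the text).
N_KEP_STARS = 164966

occurrence = {
    "fixed_mutual_inclination": 43807 / N_KEP_STARS,  # ~27% (f = 1 case)
    "outer_planet":             42733 / N_KEP_STARS,  # ~26% (f = 0 case)
    "inherent_singles":         15852 / N_KEP_STARS,  # ~9.6%
}
```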
In a similar way to that described for eq. (\ref{eq:combN}), the total occurrence rate of assumed planetary systems in the underlying population of planetary systems can be estimated to be
\begin{equation}
\left(\frac{N_\mathrm{mod}}{N_\mathrm{Kep}}\right)_\mathrm{tot} = \left(\frac{N_\mathrm{mod}}{N_\mathrm{Kep}}\right)_\mathrm{inh} + f\left(\frac{N_\mathrm{mod}}{N_\mathrm{Kep}}\right)_{\Delta i_{12}} + (1-f)\left(\frac{N_\mathrm{mod}}{N_\mathrm{Kep}}\right)_\mathrm{planet}.
\end{equation}
Consider the example combination of systems from \S\ref{subsec:hybrid} for when $f=0.2$, $\Delta i_{12} = 2^\circ$ and the outer planet parameters are $a_3 = 2$au, $m_3 = 132$M$_\oplus$ and $\Delta i = 10^\circ$. Here $f$($N_\mathrm{mod}/N_\mathrm{Kep}$)$_{\Delta i_{12}}$ $\sim$ 3\% and ($1-f$)($N_\mathrm{mod}/N_\mathrm{Kep}$)$_\mathrm{planet} \sim$21\%. We note that $f$($N_\mathrm{mod}/N_\mathrm{Kep}$)$_{\Delta i_{12}}$/($1-f$)($N_\mathrm{mod}/N_\mathrm{Kep}$)$_\mathrm{planet}$ = 3/21 = 14\%. This highlights that the occurrence rate of stars which host two planet systems with an inherent mutual inclination is similar to, but not the same as, the parameter $f$.
Combining with the occurrence rate of inherently single planet systems estimated above, the total occurrence rate of planetary systems becomes 34\%. This is similar to occurrence rates of $\sim25\%-30\%$ for Kepler like planets derived from injection and recovery analysis of planet candidates from the Kepler pipeline (\cite{2013PNAS..11019273P}; \cite{2015ApJ...810...95C}).
Estimates of occurrence rates for planets similar to the outer planets considered in this work exist from RV studies. \cite{2008PASP..120..531C} suggest an occurrence rate of 7.0 $\pm$ 1.4\% for planets with masses and semi-major axes of $m_\mathrm{p}$ = 1-10M$_\mathrm{J}$ and $\sim$1-5au respectively. Extrapolating this occurrence rate also predicts that 17-20\% of stars have gas giants within 20au. Similarly \cite{2011arXiv1109.2497M} suggest an occurrence rate of 13.9 $\pm$ 1.7\% for planets with masses and periods of $m_\mathrm{p} > 50$M$_\oplus$ and $P < 10$yrs respectively. More recently \cite{2016ApJ...821...89B} suggest that for systems with 1 or 2 RV planets, the occurrence rate of an additional companion with a mass and semi-major axis of 1-20M$_\mathrm{J}$ and 5-20au respectively is as high as 52 $\pm$ 5\%. The above example occurrence rate for the systems with an outer planet, i.e. ($1-f$)($N_\mathrm{mod}/N_\mathrm{Kep}$)$_\mathrm{planet} \sim$21\%, is therefore not contradicted by these studies. However, this example assumed an estimated value of $f$. In addition to the methods described in \S\ref{subsec:disting}, observationally estimated occurrence rates for outer planets may also be able to constrain the value of $f$. For example, if it is assumed that the occurrence rate of the types of outer planets considered in this work is 13.9\% (\cite{2011arXiv1109.2497M}), then it can be estimated that ($1-f$)($N_\mathrm{mod}/N_\mathrm{Kep}$)$_\mathrm{planet} \sim 13.9\%$. As ($N_\mathrm{mod}/N_\mathrm{Kep}$)$_\mathrm{planet} \ngtr$ 1 (i.e. it is unphysical that there are more stars in the model population than the number actually observed by Kepler), this results in an upper limit of $f\leq0.86$.
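The upper limit on $f$ quoted above follows from one line of arithmetic, sketched below with the 13.9\% occurrence rate from the text.

```python
# Upper limit on f: if (1-f)*(N_mod/N_Kep)_planet matches an observed
# occurrence rate, and (N_mod/N_Kep)_planet cannot exceed 1, then
# 1 - f >= observed_rate, i.e. f <= 1 - observed_rate.
def f_upper_limit(observed_rate, max_rate=1.0):
    return 1.0 - observed_rate / max_rate

f_max = f_upper_limit(0.139)   # 0.861, quoted as f <= 0.86 in the text
```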
We suggest therefore that combining this method of placing constraints on $f$ with those described in \S\ref{subsec:disting} might provide a strong constraint on the percentage of planetary systems which may share a fixed mutual inclination compared with systems that may host an outer inclined planet.
\subsection{Comparing with similar works}
Whether an outer planet can reduce the multiplicity of expected transiting planets in an inner planetary system in the context of N-body simulations has recently been investigated by \cite{2017MNRAS.tmp..186H}. A notable example they include is the effect of a companion with a mass of 1M$_\mathrm{J}$ at 1au, which is inclined to an inner population of planetary systems with a variety of multiplicities by 10$^\circ$. They find the ratio of the total number of double to single transiting systems that Kepler would be expected to observe is 0.184 (i.e. $\sim$5 times more expected single than double transiting systems). We find that an identical outer planetary companion in our work gives a ratio of 0.14. We suggest this difference is caused by the population of inner planetary systems used. \cite{2017MNRAS.tmp..186H} incorporate 50 model inner planetary systems with a range of multiplicities (the vast majority contained 3-6 planets at the end of their simulations), rather than the two planet Kepler systems considered in this work. Higher multiplicities increase the number of competing secular modes in the system, which can stabilise inner planets against the secular perturbations of an outer companion (e.g. \cite{2016MNRAS.457..465R}). Such an example was shown in \S\ref{sec:realsys} for application to Kepler-48. Perhaps then, mutual inclinations are more easily induced between the inner planets in this work, increasing the predicted number of single transiting planets that Kepler would be expected to observe, relative to a fixed population of planetary systems.
Moreover, compared with N-body simulations, our work does not allow for dynamical instability. If inclinations are large then they couple with eccentricity (\cite{1999ssd..book.....M}), potentially causing orbital crossings between neighbouring planets, leading to dynamical instabilities on short, non-secular timescales. Indeed \cite{2017MNRAS.tmp..186H} find, for the above mentioned outer planetary companion, that roughly half of the 50 systems they consider lose at least one planet. Moreover \cite{2015ApJ...807...44P} suggest that the abundance of single and double transiting systems might be the remains of higher order planetary systems that were once tightly packed and have since undergone dynamical instability. A detailed discussion of how dynamical stability would be expected to affect our results is difficult. Our choice that all planets must be initially Hill stable is by no means a robust constraint on the long term stability of all the planetary systems we consider during the secular interaction.
The effects of dynamical instability in tightly packed planet systems interacting with a wide orbit companion planet were also shown by \cite{2015ApJ...808...14M}. They find that an outer giant planet undergoing Kozai-Lidov interactions with a stellar binary (\cite{1962AJ.....67R.579K}; \cite{1962P&SS....9..719L}) can have an eccentricity which takes its orbit within the inner planets, leading to a significant reduction in planet multiplicity. Moreover, more recent work in \cite{2016arXiv160908058M} suggests that these same interactions can cause $\sim$50\% of Kepler like systems to lose a planet, either through collisions or ejections. If inclination is not completely decoupled from eccentricity, then these works suggest that dynamical instability plays a significant role in sculpting an inner planetary system.
\subsection{Metallicity Distribution}
The fraction of stars with gas giants increases with higher metal content (e.g. \cite{1996astro.ph..9148G}; \cite{2016ApJ...831...64T}). However it is unclear if this relation extends to smaller planets with $R_p \lesssim 4 R_\oplus$ (\cite{2011arXiv1109.2497M}; \cite{2016ApJ...832..196Z}). If single transiting planets are in systems which contain an outer giant companion similar to that considered in this work then the transiting planet should follow a similar metallicity relation as the giant planet. If there is an inherent population of single planet systems with $R_p \lesssim 4 R_\oplus$, in addition to a population of inherently mutually inclined double transiting systems, then these systems will follow a different metallicity relation. Therefore the population of single and double transiting systems observed by Kepler may contain a mixture of metallicity relations. If a distinction can be made between these different relations then this may place constraints on the presence of additional planets in Kepler systems with a single transiting planet.
\subsection{Assumptions of this work}
\label{subsec:assump}
Throughout this work we have considered how mutual inclinations evolve between two planets due to secular interactions with an outer planet. As stated above, increasing the multiplicity of planetary systems complicates the evolution of mutual inclinations. For application to the Kepler dichotomy, including higher multiplicity systems may cause proportionally fewer to be observed as single transiting systems. We look to investigate this as part of future work. Moreover, higher multiplicity systems also allow for investigation into whether the presence of an outer planetary companion can explain the number of higher order systems observed by Kepler. This is of particular interest as \cite{2012ApJ...758...39J} find that a model population which predicts the number of systems observed by Kepler with three transiting planets (with small inherent mutual inclinations and no outer companion) cannot simultaneously predict the number of systems with one and two transiting planets observed by Kepler.
We have also assumed that the inner transiting planets interacting with an outer companion were initially coplanar. However these transiting planets would most likely also have a small inherent mutual inclination (e.g. \cite{2012ApJ...761...92F}; \cite{2014ApJ...790..146F}) which in turn may affect the mean double transit probability.
\section{Summary and Conclusions}
\label{sec:conc}
In summary, during the first part of this work we developed a semi-analytical method for the calculation of transit probabilities by considering the area a transiting planet subtends on a celestial sphere (\S\ref{sec:semianal}). Applying this method to a general two planet system, we showed how the probability that both planets are observed to transit changes as they become mutually inclined.
In \S\ref{sec:secint} we discussed how the mutual inclination between two initially coplanar planets evolves due to secular interactions with an external mutually inclined planetary companion. We derived the full solution describing this evolution assuming that the mutual inclination remains small, before simplifying it under the assumption that the external planet is on a wide orbit. We found that the maximum mutual inclination between the inner two planets is at most approximately equal to twice the initial mutual inclination with the external planet. Below this limit, the maximum mutual inclination between the inner two planets scales with the mass, semi-major axis and inclination of the external planet as $\propto \Delta i \, m_3/a_3^3$.
How the secular interaction causes the double transit probability of the inner two planets to evolve was shown in \S\ref{sec:combtrans}. Assuming that this double transit probability is significantly reduced when the maximum mutual inclination exceeds $\approx (R_\star/a_1) + (R_\star/a_2)$, we derived an expression for the mean of the double transit probability for a given external planetary companion. This expression was applied to Kepler-56, Kepler-68, HD 106315 and Kepler-48 to place constraints on the inclination of the outer RV detected planets in these systems in \S\ref{sec:realsys}. We found that the inner two transiting planets in Kepler-56 and Kepler-68 are not significantly secularly perturbed by the outer planets, regardless of their inclination. For HD 106315 we find that an outer planet inferred from recent RV analysis can cause a significant perturbation to the mutual inclination of the two internal transiting planets. Moreover we find that if the outer planet is present within $\sim$1au, its inclination must be no more than $2.4^\circ$, otherwise the probability of observing both the inner planets to transit is significantly reduced. We also found that the RV detected planet in Kepler-48 needs to be inclined with respect to the inner planets by $\lesssim3.7^\circ$, otherwise the probability that all the inner planets are observed to transit is significantly reduced. We conclude therefore that the expression for the mean transit probability between inner planets from eqs. (\ref{eq:meanpsimp}) and (\ref{eq:Ksimp}) can be used to place significant constraints on the inclinations of RV detected planets whose host systems also contain transiting planets.
We further applied our method of calculating transit probabilities to the Kepler population in \S\ref{sec:Kepdich}. We found that relative to a fixed population of transiting systems with two planets on initially coplanar orbits, the expected number of single transiting systems can be significantly increased both by inherently inclining the two planets and by introducing an outer planetary companion. We found that an inherent mutual inclination of $\Delta i_{12} = 3.6^\circ$ predicts a population of single transiting planets most representative of those in the Kepler population. Moreover, we found that outer planets initially inclined by $\sim3-10^\circ$ to the inner planets also predict a representative population of single transiting systems. These outer planets should be detectable by RV studies.
However, the planetary systems observed by Kepler likely comprise a combination of inherently single planet systems, two planet systems with some fixed mutual inclination, and two planet systems interacting with an inclined outer planet. For two planet systems perturbed by an outer planet, the distribution of mutual inclinations between the inner planets is biased toward coplanar systems, because the probability of observing both inner planets is higher when they are coplanar than when their mutual inclination is large. We suggest that combining populations of inherently mutually inclined two planet systems with two planet systems interacting with an outer planet may be able to reproduce the observed distribution of mutual inclinations between Kepler planets, and in doing so may provide constraints on the presence of outer planets in the Kepler population. We suggest also that detailed RV follow-up of Kepler systems will provide more direct constraints on the presence of such outer planets. A similar dichotomy should appear in the number of transiting systems observed by the upcoming \textit{TESS} mission (\cite{2014SPIE.9143E..20R}); for these systems, however, astrometry and RV techniques will be able to verify the presence and influence of outer planets.
\section*{Acknowledgements}
We thank Simon Gibbons and Grant Kennedy for useful conversations regarding this work. MJR acknowledges support of an STFC studentship and MCW acknowledges the support from the European Union through grant number 279973. We also thank the reviewer for comments which were a great help in improving this paper. This research has also made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
\section{Introduction}
Powerful relativistic jets are often observed in various astrophysical phenomena spanning a very wide range of space-time scales, from pulsars to active galactic nuclei (AGN).
The mechanism powering such energetic phenomena --even though not yet fully understood-- is believed to reside in the magnetospheres around compact (spinning) objects
like neutron stars or black holes, where intense magnetic fields, accompanied by charged particles (plasma), tap rotational energy from the central object
and induce strong Poynting fluxes \cite{Lynden1969, goldreich, Blandford}.
While pulsars admit a classical electrodynamical interpretation in terms of \textit{unipolar induction} \cite{goldreich},
the AGN scenario is more subtle, since it involves the question of how to extract energy from a black hole.
The general picture for AGNs was developed in the seminal work of Blandford \& Znajek \cite{Blandford},
providing a plausible astrophysical mechanism to extract BH energy electromagnetically, in the form of a generalized Penrose process \cite{lasota2014}.
They demonstrated the existence of stationary electromagnetic fields that possess outgoing Poynting fluxes,
enabled by the presence of a rarefied plasma around a (slowly) spinning BH with a magnetized accretion disk.
This effect is now broadly known as the Blandford-Znajek (BZ) mechanism.
Force-Free Electrodynamics (FFE) describes a particular regime of magnetically dominated plasmas
that plays a key role in the physics of both neutron star and black hole magnetospheres.
In this regime, regarded as the vanishing particle inertia limit of ideal relativistic magnetohydrodynamics (MHD) \cite{komissarov2002},
the electromagnetic field obeys a modified --nonlinear-- version of the Maxwell equations,
while the plasma only accommodates so as to locally cancel out the Lorentz force.
Blandford \& Znajek first argued that, under typical astrophysical conditions,
the magnetosphere around a spinning black hole would be composed of a tenuous plasma arising from a self-regulatory particle production cascade,
so that the dynamics there would be effectively force-free.
This has later been supported by full MHD numerical simulations (e.g.~\cite{mckinney2004, komissarov2004, mckinney2005, komissarov2005, tchekhovskoy2008, tchekhovskoy2010, tchekhovskoy2011, mckinney2012}),
which suggest that the plasma density is very low away from the disk, and especially in the funnel region over the jet.
And although a ``real relativistic jet'' is expected to deviate significantly from the force-free regime at large distances,
it was argued that the jet power would be determined entirely by the initial force-free zone \cite{tchekhovskoy2010}.
Most MHD simulations also agreed upon the presence of a (self-consistently generated) large-scale magnetic field threading the black hole.
While the strength and topology of this field are not completely clear (since the field is highly dynamical and depends on the details of the accretion flow),
it has been suggested \cite{mckinney2005} that near the rotating BH the dominant field geometry would be given by a vertical contribution.
Such magnetic field around the central region is believed to play two important roles: (i) to power the jet by extracting BH rotational energy through the BZ mechanism;
and (ii) to confine the jet.
Considerable analytical and numerical efforts have concentrated on the study of black hole magnetospheres under the force-free approximation
(see e.g. \cite{Macdonald1984, komissarov2001, komissarov2002, Komissarov2004b, komissarov2007, Palenzuela2010Mag, alic2012, ruiz2012, shapiro2013, Gralla2013, Gralla2014, Yang, Pfeiffer2015, gralla2015}).
Numerical simulations started to look at more realistic astrophysical scenarios beyond the monopole, split-monopole and paraboloidal field configurations of the original BZ work.
In the AGN context, the black hole magnetosphere was usually regarded as embedded in an (asymptotically) uniform vertical magnetic field,
supported by the currents generated in a distant accretion disk.
This became known as the \textit{magnetospheric Wald problem}, in allusion to an exact static electro-vacuum solution constructed by Wald \cite{Wald}.
It was then noticed that --as for any dipolar type configuration-- the numerical solutions develop an equatorial current sheet within the ergosphere,
in which the force-free approach breaks down and some sort of electrical resistivity must take over.
It has been suggested that such a current sheet might play a crucial role in determining the magnetosphere structure.
Thus, this plasma-filled version of the Wald problem was signaled as \textit{``an ultimate Rosetta Stone for the research in black hole electrodynamics''} \cite{Komissarov2004b}.
In spite of the fact that the force-free equations have been around for several years, the mathematical details regarding
their initial and boundary value formulation were not fully developed until recently.
Furthermore, it has been shown that a direct formulation of force-free electrodynamics renders a weakly hyperbolic
set of evolution equations \cite{Pfeiffer}, and hence, an ill-posed problem.
In that article (and in a subsequent work \cite{Pfeiffer2015}), the authors showed that the system can be rendered not only strongly hyperbolic but symmetric hyperbolic,
by presenting suitable reformulations of the theory in a particular $3+1$ decomposition.
Following a different approach, we have constructed in \cite{FFE} a covariant hyperbolization for the FFE system
relying on Geroch's generalized symmetric hyperbolic formalism \cite{Geroch}.
Since our construction involves an explicit symmetrizer, it has allowed us to find an appropriate set of evolution equations.
Moreover, we were able to covariantly extend the system outside the constraint submanifold (namely, the surface of all tensor fields $F_{ab}$ satisfying $F_{ab}\,{}^{*}F^{ab} = 0$).
The extension was built in a way that guarantees the system remains well-posed even outside of the constraint surface,
an essential property for a numerical implementation, since one knows it is not feasible to enforce the constraint condition exactly.
In the present work, we use --for the first time-- the set of equations derived in \cite{FFE}, to numerically evolve the FFE system.
To implement the problem numerically, we shall consider a computational domain with $S^2 \times \mathbb{R}^{+}$ topology;
that is, a region
foliated by successive concentric spherical layers.
This allows us to excise the spacetime singularity from the domain and, on the other hand, facilitates the implementation of boundary conditions.
To smoothly cover a region with this topology one is forced to employ multiple coordinate patches, and thus, multiple grids.
The challenging aspect of doing so is to ensure a suitable transfer of information among the different grids involved.
To that end, we adopt the \textit{multi-block approach} \cite{Carpenter1994, Carpenter1999, Carpenter2001} (see also \cite{Leco_1} for the present implementation on curved backgrounds),
which enables the construction of stable finite-difference schemes for computational domains with several grids that only abut\footnote{
Only points at boundaries are common to different grids.}.
The method relies on the use of finite difference operators satisfying \textit{summation by parts} and \textit{penalty techniques} to transfer information between the grids.
It essentially consists of the addition of penalty terms to the evolution equations at the interfaces among different patches;
these terms are constructed using the characteristic information of the particular evolution system and guarantee:
(i) a consistent transfer of information between the different grids, and (ii) the derivation of energy estimates at the semi-discrete level used to ensure numerical stability.
We shall picture our computational domain as immersed in an ambient electromagnetic configuration, which in our case is the uniform magnetic field of the magnetospheric Wald problem.
Then, we use the \textit{penalty technique}, combined with this fixed ``exterior solution'', to set the incoming (characteristic) physical modes at the outer numerical edge.
This way, and together with a method to handle the magnetic divergence-free constraint at the boundary (adapted from \cite{Mari}),
we achieve stable and constraint-preserving boundary conditions for the astrophysical scenario of interest.
In contrast, the usual treatment one finds in the literature consists of placing the outer numerical edge far from the central region
(typically at a radius $\sim 100 M$, or even further away) and setting maximally dissipative conditions there.
That is, no-incoming modes from the external surface: nothing comes in and any physical signal reaching the boundary would leave the numerical domain.
Therefore, the jet solutions obtained under this setup were usually referred to as \textit{quasi-stationary} configurations
(see e.g. \cite{Palenzuela2010Mag,Palenzuela2011}), reflecting the fact that the interior jet solution is not in ``equilibrium'' with the outer boundary.
The main result of this article may be stated as follows: within our numerical approach, involving the evolution of a new set of FFE equations
and a novel treatment for the boundary conditions, we have obtained truly stationary jet solutions.
These solutions are consistent with those found in previous works for similar astrophysical settings (like e.g. \cite{Komissarov2004b, Palenzuela2010Mag, Yang}).
Both the aligned and misaligned cases exhibit a collimated Poynting flux along the direction of the asymptotic magnetic field
and energy extraction through the BZ mechanism;
a current sheet develops within the ergosphere at the plane normal to the flux,
and is found to supply significant amounts of EM energy to the total emitted power;
the dependence of the net electromagnetic flux on black hole spin and inclination angle is analyzed as well.
We emphasize that our numerical solutions are stationary since, even when locating the outer numerical edge close to the central region,
the dynamical fields --after the initial transient-- always ``equilibrate'' with the boundary and remain stable for long times.
This paper is organized as follows: We begin in Sec.~II by introducing our numerical implementation,
including a brief description of the \textit{multi-block approach} and a detailed discussion of the boundary conditions adopted.
In Sec.~III we present the numerical results on the magnetospheric Wald problem, considering both the aligned and misaligned cases.
We analyze the influence the numerical boundary (condition and location) has on the results.
Conclusions and some perspectives for future projects are presented in Sec.~IV.
Appendix A reviews the set of FFE evolution equations (taken from \cite{FFE}) implemented here.
Useful complementary material regarding conservation laws and the BZ mechanism is provided in Appendix B,
while Appendix C presents further tests of the code, including convergence studies and an analysis of the behavior of the constraints.
\section{Numerical Implementation}
We evolve the equations of force-free electrodynamics obtained in \cite{FFE},
which we have included again here in an appendix (see equations \eqref{dt_phi},\eqref{dt_E}+\eqref{damp},\eqref{dt_B})
in order to keep this article self-contained.
Our numerical implementation is based on the \textit{multi-block approach} \cite{Leco_1, Carpenter1994, Carpenter1999, Carpenter2001}
and on a computational infrastructure developed in \cite{Leco_1}, where a particular multiple patch structure has been equipped with the Kerr metric in appropriate coordinates.
In this section, we briefly summarize this approach and provide further details on the treatment given to boundary conditions.
We then discuss the initial and boundary data chosen; and finally, we describe how we deal with the current sheet that develops within the ergoregion. \\
\subsection{General Scheme}
We consider a numerical domain consisting of several grids which just abut (i.e. do not overlap),
commonly referred to as \textit{multi-block approach}.
The equations are discretized at each individual subdomain by using difference operators constructed to satisfy summation by
parts\footnote{A property representing the discrete analogue of integration by parts at the continuous level.} (SBP).
In particular, we employ difference operators which are third-order accurate at the boundaries and sixth-order on the interior.
\textit{Penalty terms} \cite{Carpenter1994, Carpenter1999, Carpenter2001} are added to the evolution equations at boundary points.
These terms penalize possible mismatches between the different values the characteristic fields take at the interfaces,
providing a consistent way of communicating information between the different blocks:
essentially, the outgoing characteristic modes of one grid are matched onto the ingoing modes of its neighboring grids.
At each subdomain, it is possible to find a semi-discrete energy defined by both a symmetrizer of the system at the continuum and a discrete scalar product (with respect to which SBP holds).
The summation by parts property of the operators allows one to obtain an energy estimate, up to outer boundary and interface terms left after SBP.
The penalties are constructed so that they make a contribution to the energy estimate which cancels inconvenient interface terms, thus providing an energy estimate which covers the whole integration region across grids.
Such semi-discrete energy estimates --provided an appropriate time integrator is chosen-- guarantee the stability of the numerical scheme \cite{Leco_2}.
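To make the SBP property concrete, the following sketch builds the simplest diagonal-norm SBP first-derivative operator (second-order interior with first-order boundary closures, rather than the 3rd/6th-order operators used here) and verifies the discrete integration-by-parts identity:

```python
import numpy as np

def sbp_21(N, h):
    """Minimal diagonal-norm SBP pair (D, H): second-order centered interior,
    first-order one-sided closures (an illustrative low-order stand-in for the
    higher-order operators employed in the paper)."""
    D = np.zeros((N, N))
    for i in range(1, N - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    H = h * np.diag([0.5] + [1.0] * (N - 2) + [0.5])
    return D, H

N = 21
x = np.linspace(0.0, 1.0, N)
D, H = sbp_21(N, x[1] - x[0])

# SBP property: Q + Q^T = diag(-1, 0, ..., 0, 1), the discrete analogue of
# integration by parts: u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0
Q = H @ D
E = Q + Q.T
```

The boundary terms surviving in $Q + Q^T$ are exactly those the penalty terms are designed to cancel at grid interfaces, which is what makes the semi-discrete energy estimate close across blocks.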
A classical fourth order Runge-Kutta method is used for time integration in our code.
Each subdomain is handed to a separate processor, while the information required for the interfaces treatment is communicated among them by the \textit{message passing interface} (MPI) system.
The computation for each grid may also be parallelized by means of OpenMP.
We incorporate numerical dissipation into the code through the use of adapted Kreiss-Oliger operators \cite{Tiglio2007}.
We handle the output of our numerical simulations with \textit{VisIt} \cite{VisIt},
a visualization tool used to make most of the plots in this article.
\subsection{Multiple Patch Structure}\label{sec:grids}
We want to represent a foliation of the Kerr spacetime on a computational domain with the topology $S^2 \times \mathbb{R}^{+}$.
A relatively simple multi-block structure for $S^2$ is given by the so called \textit{cubed sphere coordinates},
represented by a set of six patches (as illustrated on Figure \ref{fig:cubed}).
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.26]{grids.pdf}
\caption{\textit{Cubed sphere coordinates:}
six Cartesian patches to cover the sphere, with points only overlapping at boundaries.}
\label{fig:cubed}
\end{center}
\end{figure}
The idea is to write the background metric in Kerr-Schild form and use the \textit{cubed sphere coordinates} for the angular directions.
This scheme was already considered in \cite{Leco_1}, where the explicit definitions for this set of coordinates are provided.
In this way, the three-dimensional space can be thought of as being foliated by successive layers of concentric spherical surfaces.
Our computational domain is thus a spherical shell, like the one depicted on Fig.~\ref{fig:mesh}.
\begin{figure
\begin{center}
\includegraphics[scale=0.2]{mesh_half.jpeg}
\caption{Half of the computational domain for a typical resolution (grid-points are at the intersections of the black lines).
Three grids are shown, of the six that make up a spherical shell covering the range from $r=1.35M$ to $r=11.35 M$.
Further spherical shells (like this one) can be added to cover larger radial distances.
The semitransparent (red) surface represents the event horizon of a Kerr black hole with $a=0.9$.}
\label{fig:mesh}
\end{center}
\end{figure}
The inner edge is located at some radius $r_{in}$, always inside the black hole horizon (for any choice of spin parameter);
hence, no boundary condition is needed there, as it constitutes a purely outflow surface.
The outer boundary, on the other hand, extends to a radius, $r_{out}$, that will range from $10 M$ to $100 M$ (depending on the case under consideration).
Notice that $r$ here (and hereafter) denotes the radius in the isotropic coordinates of the Kerr-Schild foliation, rather than the usual Kerr-Schild or Boyer-Lindquist radial coordinates.
The metric in Kerr-Schild form can be written as $g_{ab} = \eta_{ab} + H \ell_{a} \ell_{b}$,
where $\eta_{ab}$ is the flat metric and $\ell_a$ is a certain null co-vector (with respect to both the flat and the full metric).
For visual representation, throughout this article, we will present our results in the Cartesian coordinates $\{x,y,z\}$
associated with the flat part of the metric\footnote{Sometimes referred to as the Kerr-Schild ``Cartesian'' coordinates, or the Kerr-Schild frame (see e.g.~\cite{Visser2007}).}. %
That is why the event horizon, the red semitransparent surface in Fig.~\ref{fig:mesh}, does not look spherical.
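As a sanity check on this form of the metric, one can verify numerically that $\ell_a$ is null with respect to the flat metric. The explicit Kerr-Schild expressions below are the standard textbook ones, quoted here for illustration rather than taken from this paper:

```python
import numpy as np

def kerr_schild(xp, M=1.0, a=0.9):
    """Kerr metric in Kerr-Schild 'Cartesian' form, g_ab = eta_ab + H l_a l_b.
    Returns H and the co-vector l_a = (l_t, l_x, l_y, l_z); these are the
    standard expressions, quoted for illustration."""
    X, Y, Z = xp
    rho2 = X * X + Y * Y + Z * Z
    # Kerr-Schild radius r solves r^4 - (rho^2 - a^2) r^2 - a^2 Z^2 = 0
    r2 = 0.5 * ((rho2 - a * a) + np.sqrt((rho2 - a * a) ** 2 + 4.0 * a * a * Z * Z))
    r = np.sqrt(r2)
    H = 2.0 * M * r ** 3 / (r ** 4 + a * a * Z * Z)
    l = np.array([1.0,
                  (r * X + a * Y) / (r2 + a * a),
                  (r * Y - a * X) / (r2 + a * a),
                  Z / r])
    return H, l

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(0)
max_null = 0.0
for _ in range(100):
    p = rng.uniform(1.0, 5.0, 3)     # sample points safely away from r -> 0
    H, l = kerr_schild(p)
    max_null = max(max_null, abs(l @ eta @ l))   # l_a is null w.r.t. eta (and g)
```

Since $\ell_a$ is null for $\eta_{ab}$, it is automatically null for the full metric $g_{ab}$ as well, which is what makes the Kerr-Schild split so convenient for horizon-penetrating coordinates.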
The typical resolution employed in the simulations consists of $41\times41$ points in the angular directions at each grid\footnote{
Giving a total of around $9600$ points for each of the spherical layers that foliate space.},
and $201$ points to cover a radial distance of $10 M$.
Fig.~\ref{fig:mesh} illustrates this typical resolution to provide the reader with an intuitive feel for the grid-point density.
In some cases we have doubled the resolution (i.e. $81\times81\times401$ for each grid), as in Figs.~\ref{fig:PF_slice_aligned} and \ref{fig:PF_slice_misaligned};
and some calculations were done using a coarser resolution (half of the typical one) as those shown in Figs.~\ref{fig:PF_spin_dep} and \ref{fig:PF_ang_dep}.
\subsection{Boundary Conditions}
We must prescribe boundary conditions (only) on the external surface of our numerical domain, i.e. the sphere of radius $r_{out}$.
That is, we need to set --at each point over this surface-- the incoming characteristic modes associated with our particular evolution system.
There are in general two kinds of conditions: those corresponding to the physical scenario one wishes to represent, and those related to the preservation of the constraints.
We shall rely on the penalty method to enforce the physical modes, while we will use alternative methods to ensure no constraint violations arise from the boundary.
For the evolution equations we want to implement, there are four physical modes
(the fast magneto-sonic and Alfv\'en waves) and three (unphysical) constraint modes.
Since the theory is nonlinear, these different subspaces might degenerate at some points during the evolution,
and thus, a careful analysis for the characteristic structure of the system is needed.
This was already done by the authors in Appendix A of reference \cite{FFE}, where all the possible degeneracy cases were examined in detail.
With this information at hand, we are now in a position to appropriately modify the evolution equations at the outer boundary points.
\subsubsection{Physical condition}
As previously discussed, the idea here is to use the \textit{penalty terms}, which may be written as:
\begin{equation}
\partial_t U^{\alpha} \rightarrow \partial_t U^{\alpha} + \sum_{i} \left( \frac{\lambda_i}{\sigma_{oo} \Delta x } \right) P^{\alpha}_{(i) \beta} \left( U^{\beta}_{ext} - U^{\beta} \right) \label{penalty}
\end{equation}\\
with $U^{\alpha} := \{ \phi , E^i , B^i \} $ being the set of dynamical fields.
The corrective terms are given by a sum over all the incoming physical modes
(i.e. physical characteristic subspaces ``$i$'' with positive eigenvalues, $\lambda_i > 0$), where $P^{\alpha}_{(i) \beta}$ is the projection onto the $i$-subspace,
and $U^{\alpha}_{ext}$ represents the ``exterior'' solution we want to impose\footnote{This idea of analytically fixing the incoming characteristic modes (``exterior solution'')
was also used in \cite{Pfeiffer2015} to study the stability of the exact \textit{null$^{+}$} FFE solutions on a Schwarzschild background.}.
The factors $\sigma_{oo}$ and $\Delta x$ in the expression reflect the dependence on the discrete inner product used and on the grid resolution, respectively.
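A scalar analogue of this penalty enforcement can be sketched for 1D advection, where the inflow value is supplied by a prescribed ``exterior solution'' exactly as in the scheme above (the low-order SBP operator and the penalty strength $\tau = c$ are illustrative choices, not those of the actual code):

```python
import numpy as np

# 1D advection u_t + c u_x = 0 on [0, 2*pi]: the inflow value is supplied by a
# prescribed "exterior solution" through an SBP-SAT penalty, a scalar analogue
# of the penalty term in the text. SBP(2,1) operator; penalty strength tau = c.
c = 1.0
N = 201
x = np.linspace(0.0, 2.0 * np.pi, N)
h = x[1] - x[0]

D = np.zeros((N, N))
for i in range(1, N - 1):
    D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h      # one-sided boundary closures
D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
h00 = 0.5 * h                             # first entry of the SBP norm H

def exterior(xx, t):                      # "exterior" solution to be imposed
    return np.sin(xx - c * t)

def rhs(u, t):
    du = -c * (D @ u)
    du[0] += (c / h00) * (exterior(x[0], t) - u[0])  # penalty at the inflow
    return du

u = exterior(x, 0.0)
t, dt = 0.0, 0.2 * h
while t < 2.0 - 1e-12:                    # classical RK4 time stepping
    k1 = rhs(u, t)
    k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(u + dt * k3, t + dt)
    u += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    t += dt
err = np.max(np.abs(u - exterior(x, t)))  # interior stays locked to the exterior
```

The penalty does not set the boundary value by hand; it drives the incoming mode toward the exterior data at a rate $\sim c/(\sigma_{oo}\Delta x)$, which is what preserves the energy estimate.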
\subsubsection{Constraints}
To restrict the entrance of possible violations of the divergence-free constraint ($\nabla \cdot \vec{B} = 0$),
we adapt to our problem a method proposed in \cite{Mari}.
The idea is to study the subsidiary system describing the dynamics of the constraint, and then impose the no-incoming condition on these modes.
This subsidiary system is obtained from the definitions $D:=\mathcal{D}_j B^j$ and $\delta_i := \partial_i \phi$ as,
\begin{eqnarray}
&& \partial_t D = \beta^k \partial_k D + D~\mathcal{D}_j \beta^j - \alpha~\mathcal{D}_j \delta^j - \delta^k \partial_k \alpha + \mathcal{D}_j Z^j \nonumber\\
&& \partial_t \delta^i = \partial^i (\beta^k \delta_k) - \alpha (\partial^i D + \kappa \delta^i ) - (D + \kappa \phi ) \partial^i \alpha - \partial^i W \nonumber
\end{eqnarray}
where $ Z^i \equiv \frac{\alpha}{\tilde{F}} \left[ \hat{\epsilon}^{ijk} r_j \tilde{B}_k + \frac{\tilde{E}^i}{\tilde{B}^2}\tilde{S}^k r_k \right] $ and $W \equiv \frac{\alpha}{\tilde{F}} \tilde{E}^k r_k$.
Its characteristic problem --after some manipulations-- reduces to,
\begin{eqnarray}
&& (\lambda - \beta_m ) \hat{D} = -\alpha \hat{\delta}_m \nonumber\\
&& (\lambda - \beta_m ) \hat{\delta}^i = -\alpha \hat{D} m^i \nonumber
\end{eqnarray}
where $m^i$ is the unit normal to the boundary, and we have denoted contractions by, $\beta_m \equiv \beta_i m^i $ and $\hat{\delta}_m \equiv \hat{\delta}_i m^i $.
Hence, the no-incoming condition reads:
\begin{equation}
\frac{1}{2} (\delta_m - D) = 0 \label{no-incoming}
\end{equation}
Solving for $D$ and $\delta_m$ from the general system and imposing the condition \eqref{no-incoming},
we finally get the corrective terms at boundary points:
\begin{eqnarray}
\partial_t \phi &\rightarrow& \partial_t \phi + \frac{1}{2} \alpha \left[ \mathcal{D}_j B^j - m^j \partial_j \phi \right] \label{normal_phi}\\
\partial_t B^i &\rightarrow& \partial_t B^i - \frac{1}{2} \alpha m^i \left[ \mathcal{D}_j B^j - m^j \partial_j \phi \right] \label{normal_B}
\end{eqnarray}\\
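In code, these corrective terms amount to a few lines per boundary point. The sketch below (illustrative, with made-up field values and a simplified data layout) also checks that the combination $\partial_t\phi + m_i\,\partial_t B^i$, carried by the outgoing mode, is left untouched:

```python
import numpy as np

def constraint_correction(dtphi, dtB, alpha, m, divB, grad_phi):
    """Corrective terms of Eqs. (normal_phi)-(normal_B) at an outer-boundary
    point: m is the outward unit normal, divB = D_j B^j, grad_phi the spatial
    gradient of phi (illustrative sketch, not the code's actual data layout)."""
    C = divB - m @ grad_phi            # incoming constraint-mode combination
    return dtphi + 0.5 * alpha * C, dtB - 0.5 * alpha * m * C

# Sanity check with made-up numbers: only the incoming combination is altered;
# the outgoing combination dtphi + m.dtB is left untouched.
m = np.array([1.0, 0.0, 0.0])
dtphi, dtB = 0.3, np.array([0.1, -0.2, 0.05])
dtphi2, dtB2 = constraint_correction(dtphi, dtB, alpha=0.8, m=m,
                                     divB=0.07,
                                     grad_phi=np.array([0.02, 0.0, 0.0]))
```

Because the correction is proportional to the single combination $\mathcal{D}_j B^j - m^j\partial_j\phi$, it vanishes identically whenever the constraints are already satisfied at the boundary.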
Regarding the algebraic constraint, $G=0$, we observe that the evolution equations naturally give $\partial_t G \sim 0$
at the boundary, whenever the value of the constraint remains small (i.e. $G\sim 0$) in the interior.
Thus, provided we manage to keep this constraint under control through the evolution, there would be no need to further modify
the equations at the outer boundary.
We monitor the behavior of both constraints
during our simulations to ensure there are no violations entering from the boundary and,
moreover, that no significant deviations develop within the whole numerical domain.
To see these results we refer the reader to Section \ref{sec:constraints_preservation}.
\subsection{Initial/Boundary Data}\label{sec:data}
We consider a spinning black hole surrounded by a magnetized accretion disk.
Assuming the disk to be sufficiently distant, the magnetic field configuration it gives rise to would look essentially
uniform within our computational domain.
The direction of this magnetic field is perpendicular to the disk, and is not necessarily aligned with the symmetry axis of the spacetime.
We shall then picture our black hole as initially immersed in a uniform magnetic field, aligned or misaligned with respect to the rotational axis.
In isotropic Kerr-Schild coordinates, this field reads
\begin{equation}
B^{x} = \frac{B_o}{\sqrt{h}} \sin{(\alpha_o)} \text{,} \quad B^{y} = 0 \text{,} \quad B^{z} = \frac{B_o}{\sqrt{h}} \cos{(\alpha_o)}
\label{initial}
\end{equation}
The interior solution will be modified during the evolution due to the presence of the plasma around the BH horizon,
while the exterior region should remain dominated by the uniform magnetic field configuration.
Thus, we will set this configuration as the ``exterior'' solution ($U^{\alpha}_{ext}$) in equation \eqref{penalty}, as discussed above.
We want to choose astrophysically relevant values
for a scenario with a super-massive black hole of mass $M \sim 10^{6-10} M_{\odot}$
and a magnetic field of around $B \sim 10^{1-4}$~G \cite{blandford1992}.
In particular, we adopt a black hole mass $M = 10^{8} M_{\odot}$ and a magnetic field strength $B_o = 10^{4}$~G for later comparison with \cite{Palenzuela2010Mag}.
In the geometrized units of the code (where the mass of the black hole has been set to unity),
and according to the Lorentz-Heaviside units employed for the evolution equations, the magnetic field strength must then be $B_o [1/M] = 1.2 \times 10^{-8}$. \\
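This number can be reproduced with a short unit-conversion sketch (cgs constants rounded; the $1/\sqrt{4\pi}$ factor implements the Gaussian-to-Lorentz-Heaviside rescaling of the fields):

```python
import numpy as np

# Convert B = 10^4 G around an M = 10^8 Msun black hole to code units
# (geometrized, M = 1, Lorentz-Heaviside fields). cgs constants, rounded.
G_cgs = 6.674e-8      # cm^3 g^-1 s^-2
c_cgs = 2.998e10      # cm s^-1
Msun_g = 1.989e33     # g

M_cm = G_cgs * (1.0e8 * Msun_g) / c_cgs ** 2     # BH mass in cm
B_gauss = 1.0e4
B_geom = B_gauss * np.sqrt(G_cgs) / c_cgs ** 2   # geometrized field [cm^-1]
B_code = B_geom * M_cm / np.sqrt(4.0 * np.pi)    # in units 1/M, Lorentz-Heaviside
```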
\subsection{Current Sheet Treatment} \label{sec:current_sheet}
It is well known that current sheets may develop in black hole magnetospheres.
In particular, dipolar magnetospheric configurations lead to the formation of a strong current sheet at the equatorial plane.
In these regions, the force-free condition $B^2 - E^2 > 0 $ is no longer satisfied and the theory breaks down, both physically and mathematically.
The perfect conductivity approximation fails and a model of electrical resistivity would be required.
Komissarov \cite{Komissarov2004b} analyzed a model of radiative resistivity based on the inverse Compton scattering of background photons
and concluded that the cross-field conductivity inside the current sheet has to be governed by a self-regulatory mechanism ensuring marginal
screening of the electric field. Thus, the electromagnetic field at the current sheet is expected to satisfy,
\begin{equation}
B^2 - E^2 \approx 0 \nonumber
\end{equation}
A simple way of implementing this resistivity numerically (even though not very appealing from the mathematical point of view)
is by reducing the electric field whenever it gets too close in magnitude to the magnetic field.
This is applied at each iteration of the Runge-Kutta time step.
In this way, one effectively dissipates the electric field at the current sheet and drives the electromagnetic field into a state very close to $B^2 - E^2 = 0$, as physically expected.
We do this by following a prescription similar to the one employed in \cite{Palenzuela2010Mag}, but ``cutting'' the field in a slightly smoother manner:
\begin{equation}
E^i \rightarrow f\left( \frac{|E|}{|B|}\right) E^i \nonumber
\end{equation}
where $f(x)$ is a smooth piecewise function, which equals one for $x \leq 1-2\varepsilon$; is given by a fifth-degree polynomial on the interval $ 1-2\varepsilon < x < 1-\varepsilon$;
and is $\frac{1}{x}$ for larger values of $x$. Here $\varepsilon$ is a small parameter, generally set to $\varepsilon=0.05$ in the code.
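One possible concrete construction of such an $f(x)$ is sketched below. The $C^2$ matching conditions at the two joints are our assumption; the text only specifies a fifth-degree polynomial on the middle interval (a quintic has exactly the six coefficients needed to match value, first, and second derivative at both ends):

```python
import numpy as np

eps = 0.05
x0, x1 = 1.0 - 2.0 * eps, 1.0 - eps

# Quintic bridge between the outer branches f=1 (left) and f=1/x (right);
# six coefficients fixed by matching f, f', f'' at both joints (C^2 matching
# is our assumption, not stated in the text).
rows, vals = [], []
for xm, derivs in [(x0, (1.0, 0.0, 0.0)),
                   (x1, (1.0 / x1, -1.0 / x1 ** 2, 2.0 / x1 ** 3))]:
    rows.append([xm ** k for k in range(6)])
    rows.append([k * xm ** (k - 1) if k else 0.0 for k in range(6)])
    rows.append([k * (k - 1) * xm ** (k - 2) if k > 1 else 0.0 for k in range(6)])
    vals.extend(derivs)
coef = np.linalg.solve(np.array(rows), np.array(vals))

def f_cut(x):
    x = np.asarray(x, dtype=float)
    poly = sum(ck * x ** k for k, ck in enumerate(coef))
    return np.where(x <= x0, 1.0,
                    np.where(x >= x1, 1.0 / np.maximum(x, x1), poly))

# E^i -> f(|E|/|B|) E^i: for |E|/|B| >= 1 - eps the rescaled field obeys
# |E'| = |B|, i.e. the marginal-screening state B^2 - E^2 ~ 0
```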
\section{Numerical Results}\label{sec:results}
As discussed in Section \ref{sec:data}, we chose initial/boundary data to study the \textit{magnetospheric Wald problem},
in which a uniform magnetic field dominates the far-field region.
Our numerical solutions go through an initial dynamical transient in which the magnetic field lines twist around and an electric field is induced;
afterwards, all the simulations reach a steady state.
The time to reach such final configurations depends on the location of the outer numerical boundary, with which they equilibrate.
The late-time solutions attained are truly stationary and exhibit collimated flows of electromagnetic energy, as seen
in the right panels of figures \ref{fig:PF_slice_aligned} and \ref{fig:PF_slice_misaligned}.
The net flux over any spherical surface, $\Phi(R)$, is always positive, meaning energy is being extracted from the black hole.
This energy is carried to the asymptotic region in the form of a collimated Poynting flux: the jet.
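The leading-order spin dependence of this extraction can be sketched from the horizon angular velocity, $\Omega_H = a/(2Mr_+)$, since the BZ power scales as $P \propto \Phi^2\,\Omega_H^2$ at lowest order; the snippet below is an illustrative estimate with a fixed flux prefactor, not the flux measured in our simulations:

```python
import numpy as np

def omega_H(a, M=1.0):
    """Horizon angular velocity Omega_H = a / (2 M r_+), with
    r_+ = M + sqrt(M^2 - a^2) (geometrized units, spin a in [0, M])."""
    rp = M + np.sqrt(M * M - a * a)
    return a / (2.0 * M * rp)

# Leading-order BZ scaling P ~ k * Phi^2 * Omega_H^2, with k and Phi held
# fixed: relative extracted power for a few spins, normalized to a = 0.9
spins = np.array([0.3, 0.6, 0.9])
P_rel = omega_H(spins) ** 2 / omega_H(0.9) ** 2
```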
\onecolumngrid
\begin{figure}[t!]
\begin{center}
\begin{minipage}{8cm}
\subfigure{\includegraphics[scale=0.21]{aligned_3D.jpeg}}
\end{minipage}
\begin{minipage}{8cm}
\subfigure{\includegraphics[scale=0.26]{aligned_2D.jpeg}}
\end{minipage}
\caption{Aligned case: late time numerical solution ($t=120M$) for a black hole with $a=0.9$.
\textbf{Left:} Representative streamlines of the magnetic field (black thick lines) and the electric field (blue lines).
The solid black surface is the BH horizon, whereas the semitransparent (red) one represents the ergosphere.
\textbf{Right:} The radial Poynting flux density is shown in color scale at the $x-z$ plane.
Thick solid and dotted black lines represent the black hole horizon and ergosphere, respectively.
}
\label{fig:PF_slice_aligned}
\end{center}
\end{figure}
\twocolumngrid
We find no significant differences in the late-time numerical solutions for initial/boundary data corresponding to the uniform magnetic field
used here (also used in \cite{Palenzuela2010Mag}) and for the Wald configuration with zero electric field (as considered in e.g. \cite{Komissarov2004b}).
The reason can be easily understood by noticing that the magnetic fields of these two configurations only (slightly) differ near the black hole horizon;
hence, the boundary data --which is ultimately what determines the final state-- is essentially the same in both cases.
We shall consider first the case where the asymptotic magnetic field is aligned with the symmetry-axis of the black hole.
This will serve to explore some of the known features of these jet solutions: the operation of the Blandford-Znajek mechanism,
the dependence of this mechanism on black hole spin, the presence of an electronic circuit, and the development of an equatorial current sheet and its effects on the total emitted power.
We will also use this scenario to analyze the influence of the conditions and location adopted for the outer numerical boundary.
Later, we shall abandon axial symmetry to study the more general case in which the asymptotic field is not aligned with the rotation axis of the black hole.
In this way, we will exploit the full potential of our three-dimensional code and confirm some known --though scarcer-- results for this setting.
\subsection{Aligned Case}
In Fig.~\ref{fig:PF_slice_aligned} we have considered a representative (high resolution) numerical solution
for the aligned case, with black hole spin $a=0.9$.
The general structure of the electric and magnetic fields is depicted in the left panel,
where it can be seen that all the magnetic field lines which penetrate the ergosphere acquire a toroidal component.
The electric field is predominantly toroidal everywhere, with an induced $z$-component along the jet.
The right panel of Fig.~\ref{fig:PF_slice_aligned} shows the radial\footnote{Here,
we refer again to a radius $r$ in the isotropic coordinates of the Kerr-Schild foliation, rather than the usual radial coordinate.
We remark, however, that both densities should look very similar. }
EM energy flux density, $p^r$ (see \eqref{flux-density}), on the $x-z$ plane.
It illustrates the highly collimated Poynting flux generated.
A first question one may ask is whether the energy extraction taking place in these configurations is a Penrose-like process or not.
In the aligned case, we know the late-time solutions are stationary and axi-symmetric.
Thus, we just need to check whether our solutions satisfy the Blandford-Znajek condition \cite{Blandford, lasota2014},
\begin{equation}
0 < \Omega_F < \Omega_H \label{BZ-condition}
\end{equation}
within the ergoregion, where
\begin{equation}
\Omega_F := \frac{F_{t \theta}}{F_{\theta \varphi}} \label{frec-rot}
\end{equation}
is the \textit{rotation frequency of the electromagnetic field}, which captures the notion of ``angular velocity of the magnetic field lines''
for the stationary and axi-symmetric case (see e.g. \cite{Blandford, Komissarov2004b, Palenzuela2010Mag});
and $\Omega_H := \frac{a}{2M r_H}$ is the frame-dragging orbital frequency at the black hole horizon.
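For concreteness, both frequencies and the inequality \eqref{BZ-condition} are straightforward to evaluate from the Kerr parameters. A minimal sketch in geometrized units ($G=c=1$), using the standard horizon radius $r_H = M + \sqrt{M^2 - a^2}$ (function names are ours, not the code's):

```python
import math

def horizon_radius(M, a):
    """Outer Kerr horizon radius, r_H = M + sqrt(M^2 - a^2)."""
    return M + math.sqrt(M**2 - a**2)

def omega_H(M, a):
    """Frame-dragging frequency at the horizon, Omega_H = a / (2 M r_H)."""
    return a / (2.0 * M * horizon_radius(M, a))

def satisfies_BZ(omega_F, M, a):
    """Blandford-Znajek condition: 0 < Omega_F < Omega_H."""
    return 0.0 < omega_F < omega_H(M, a)
```

For the $a=0.9$ run shown here, $\Omega_H \approx 0.313/M$, so the jet values $\Omega_F \sim 0.5\,\Omega_H$ sit comfortably inside the allowed interval.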
\begin{figure}[h!]
\begin{center}
\begin{minipage}{3.1cm}
\subfigure{\includegraphics[scale=0.11]{AV_contour_half.jpeg}}
\end{minipage}
\begin{minipage}{5.3cm}
\subfigure{\includegraphics[scale=0.155]{ang_vel_rho.jpeg}}
\end{minipage}
\caption{Angular frequency of magnetic field lines at late time ($t=120M$) solution, for a spinning black hole with $a=0.9$.
\textbf{Left:} Contour lines of $\Omega_F / \Omega_H $ between $0.05$ and $0.45$ (near the axis) at the $x-z$ plane.
Thick solid and dotted lines represent the black hole horizon and ergosphere, respectively.
\textbf{Right:} Profile of $\Omega_F / \Omega_H $ across the jet: the distribution as a function of the cylindrical distance from the axis, $\rho$, at $z=4M$. }
\label{fig:BZ}
\end{center}
\end{figure}
In Fig.~\ref{fig:BZ}, we plot the quotient $\Omega_F / \Omega_H $ for a representative late-time configuration,
which confirms that the BZ condition is indeed satisfied.
The left panel presents contour lines of this quantity on the $x-z$ plane, showing that the magnetic field lines crossing the ergoregion acquire angular velocity and fulfill condition \eqref{BZ-condition}.
Moreover, it can be seen that the values attained in the jet region, $\Omega_F \sim 0.5 ~ \Omega_H$, correspond to a maximum of the power expression, i.e. \eqref{BZ-cond}.
The right panel of Fig.~\ref{fig:BZ}, on the other hand, displays the distribution of $\Omega_F / \Omega_H $ at $z=4M$.
It provides a representative profile (at any height $z$) of the angular velocity distribution of the magnetic field lines across the jet.
\subsubsection{Dependence on black hole spin} \label{sec:spin_dep}
We now consider the dependence of the net energy flux emerging from the black hole on its spin parameter.
It has been argued that, to leading order, the total electromagnetic energy flux behaves as $\Phi \propto \Omega_{H}^2$ \cite{tchekhovskoy2010, Palenzuela2010Mag}.
In Fig.~\ref{fig:PF_spin_dep}, we plot $\Phi$ for different values of the spin parameter, together with the curve $\Omega_{H}^{2} (a)$.
It can be seen from the figure that the curve fits the numerical values very well.
We also note that these results are in good agreement with those found in fig.~4 of \cite{Palenzuela2010Mag}.
The only significant difference we report is a factor of (almost) two in the emitted power:
our net energy flux is roughly twice the one obtained there\footnote{
Recall we chose the strength of the asymptotic magnetic field and the black hole mass for comparison with this reference.}.
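The one-parameter fit shown in Fig.~\ref{fig:PF_spin_dep} is easy to reproduce. A sketch with NumPy (the flux values below are illustrative, not the simulation data):

```python
import numpy as np

def omega_H(a, M=1.0):
    """Horizon frequency Omega_H = a / (2 M r_H), r_H = M + sqrt(M^2 - a^2)."""
    r_H = M + np.sqrt(M**2 - a**2)
    return a / (2.0 * M * r_H)

def fit_flux_vs_spin(spins, fluxes):
    """Least-squares fit of Phi = k * Omega_H(a)^2 through the origin."""
    x = omega_H(np.asarray(spins, float)) ** 2
    y = np.asarray(fluxes, float)
    return float(x @ y / (x @ x))
```

The quality of the fit is then judged by comparing $k\,\Omega_H^2(a)$ against the measured fluxes at each spin.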
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.32]{a_dep.eps}
\caption{Dependence on black hole spin of the total EM energy flux (in the aligned case).
A black hole of mass $M = 10^{8} M_{\odot}$ and a magnetic field of strength $B_o = 10^{4}$~G are considered, for comparison with \cite{Palenzuela2010Mag}.
The dots correspond to the net flux (integrated at $r=2.5M$) in our numerical solutions,
while the dashed (red) curve is a fit of the form $\Phi \propto \Omega_{H}^2$.}
\label{fig:PF_spin_dep}
\end{center}
\end{figure}
\subsubsection{Electronic circuit}
It is interesting to note that one can recover part of the information regarding the plasma through Maxwell's equations, $j^a \equiv \nabla_b F^{ab}$.
In Fig.~\ref{fig:circuit}, we display the electric charge density on the $x-z$ plane, along with the induced electric currents found in our late-time numerical solutions.
As seen in the image, the currents flow along the symmetry axis into the black hole at the poles and then back out (in the opposite sense) within a cylindrical shell
starting near the intersection of the ergosphere and the equatorial plane.
Such an induced electronic circuit in the magnetospheric plasma is consistent with known qualitative and numerical studies
(see e.g. \cite{Komissarov2004b, Palenzuela2010Mag});
it constitutes the critical difference with the electro-vacuum case (i.e. no plasma), allowing for the Poynting flux and energy extraction from the black hole.
The ripples observed in the charge density distribution in Fig.~\ref{fig:circuit} are due to the numerical prescription we use to handle the current sheet,
which generates numerical disturbances that then propagate around and dissipate.
Since computing the charge density involves spatial derivatives, it is particularly sensitive to this numerical noise.
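As a flat-space illustration of this reconstruction (the actual computation evaluates the covariant divergence $j^a = \nabla_b F^{ab}$ on the Kerr background; this sketch only shows the special-relativistic analogue $\rho = \nabla\cdot\vec{E}$ on a Cartesian grid, with array names of our choosing):

```python
import numpy as np

def charge_density_flat(Ex, Ey, Ez, dx):
    """Flat-space charge density rho = div(E), via centered finite differences
    (units with 4*pi absorbed).  In the curved case one would instead use the
    covariant divergence, which involves the metric determinant."""
    dEx = np.gradient(Ex, dx, axis=0)
    dEy = np.gradient(Ey, dx, axis=1)
    dEz = np.gradient(Ez, dx, axis=2)
    return dEx + dEy + dEz
```

Being built from derivatives of the evolved fields, such a quantity inherits (and amplifies) any grid-scale noise, which is why the ripples mentioned above show up most clearly in the charge density.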
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.27]{charge_current.jpeg}
\caption{Electronic circuit and induced charge distribution for a late time solution ($t=120M$) of a black hole with $a=0.9$.
Charge density (color scale) and poloidal electric currents (arrows) are displayed on the $x-z$ plane.
Solid/dashed lines represent the black hole horizon and ergosphere, respectively. }
\label{fig:circuit}
\end{center}
\end{figure}
\subsubsection{Current sheet}
Since all magnetic field lines penetrating the ergosphere are ``forced to co-rotate'' with the black hole,
a discontinuity in the toroidal component of the magnetic field is generated along the equatorial plane (within the ergosphere) and a current sheet develops.
In Fig.~\ref{fig:C3}, we display contour lines of $\frac{B^2 - E^2}{B^2}$ on the $x-z$ plane,
ranging from around $0.1$ in the inner region (close to the equatorial plane) up to $0.9$.
This distribution of $\frac{B^2 - E^2}{B^2}$ illustrates the structure of the current sheet, since a violation of the magnetic-domination condition
is taken as evidence that the plasma would have a non-negligible back-reaction on the electric field.
In this region, we know our numerical implementation is effectively dissipating the electric field (Section \ref{sec:current_sheet}).
We have attempted to restrict the dissipative mechanism so that it operates only inside the black hole horizon\footnote{
By gradually reducing it after the initial transient.}, but failed.
This fact relates to the observation made in \cite{Komissarov2004b} that the ``gravitationally induced'' electric field cannot be completely screened inside the ergosphere.
It thus seems there is no way to prevent a strong current sheet from developing inside the ergoregion,
and the equilibrium reached by the late-time solutions will have this mechanism actively operating there.
Komissarov has suggested that anisotropic resistivity plays a key role in shaping the resulting magnetic field structure
(at least within the ergosphere), when comparing the FFE solutions from \cite{Komissarov2004b}
with the ideal MHD solutions found in \cite{komissarov2005}.
A similar observation was made in Ref.~\cite{Yang}, where the authors developed a family of analytic jet-like solutions and found that the numerical evolution tends to a unique steady configuration (independently of the initial data).
They suggest (as observed previously in \cite{gruzinov2006,ruiz2012} in the context of neutron stars) that it might be the equatorial current sheet
that determines the final state out of the whole family of possible solutions.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.25]{C3_contour_half.jpeg}
\caption{Contour lines of $\frac{B^2 - E^2}{B^2}$ for a representative late time solution ($a=0.9$) on the $x-z$ plane.
It illustrates the structure of the current sheet that develops.
Solid and dashed black lines represent the BH horizon and ergosphere, respectively.}
\label{fig:C3}
\end{center}
\end{figure}
The fact that the numerical resistivity is operating at the current sheet in our late-time configurations
means the electromagnetic stress-energy tensor is no longer conserved there;
thus, a source (or sink) of energy is expected to contribute to the total emitted flux (see expression \eqref{eq:stokes}).
As first pointed out in Ref.~\cite{Komissarov2004b}, the current sheet indeed supplies both energy and angular momentum to the force-free magnetosphere.
In Fig.~\ref{fig:inyection}, we plot the net Poynting flux through spheres of constant $r$, as a function of $r$.
The figure illustrates how the flux increases considerably between the horizon and the ergosphere.
The increment may be interpreted partly as negative energy falling into the black hole and partly as the energy of the electric field being dissipated at the current sheet;
that energy is also negative, so its dissipation actually increases the energy as seen from infinity.
We thus find the perplexing situation where dissipation acts as a positive source of energy.
The curves displayed correspond to different values of the parameter controlling the mechanism used to handle the current sheet, as described in Section \ref{sec:current_sheet}.
Essentially, the greater the value of the parameter $\varepsilon$, the more the electric field gets trimmed at the sheet and, hence, the larger the amount of energy dissipated.
This is reflected in the plot, which shows larger increments of the emitted power for larger values of $\varepsilon$.
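The trimming alluded to here resembles the generic FFE ``cleaning'' step used in many force-free codes: wherever magnetic domination fails, the electric field is rescaled down. A hedged sketch of that generic step (the actual $\varepsilon$-controlled prescription of Section \ref{sec:current_sheet} may differ in detail):

```python
import numpy as np

def trim_electric_field(E, B):
    """Enforce magnetic domination: wherever E^2 > B^2, rescale E so |E| = |B|.
    E and B have shape (3, ...): three field components on a grid."""
    E2 = np.sum(E * E, axis=0)
    B2 = np.sum(B * B, axis=0)
    scale = np.where(E2 > B2, np.sqrt(B2 / np.where(E2 > 0.0, E2, 1.0)), 1.0)
    return E * scale
```

Removing electric-field energy in this way inside the ergosphere is precisely what shows up as the ``injection'' discussed above, since that field energy carries negative energy-at-infinity.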
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.32]{E_injection.pdf}
\caption{Net energy flux as a function of Cartesian radius: energy is ``injected'' by dissipation at the current sheet.
The flux is integrated on spherical layers at different radii, for representative numerical configurations (with $a=0.9$). The curves
correspond to different values of the parameter $\varepsilon$, which controls the dissipative prescription (see Section \ref{sec:current_sheet}).\\
$r_H$ and $r_E$ (the intersections of the horizon and ergosphere with the equatorial plane) mark the region where energy is not conserved.
}
\label{fig:inyection}
\end{center}
\end{figure}
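The net flux $\Phi(r)$ plotted in Fig.~\ref{fig:inyection} is just an angular quadrature of the flux density over spheres. A minimal sketch (here \texttt{pr\_func} is a hypothetical callable returning $p^r$ at given $(R,\theta,\varphi)$, standing in for interpolated grid data):

```python
import numpy as np

def net_flux(pr_func, R, ntheta=64, nphi=128):
    """Net EM energy flux through the sphere of radius R:
    Phi(R) = integral of p^r * R^2 sin(theta) dtheta dphi,
    approximated by a midpoint-rule quadrature in both angles."""
    dth = np.pi / ntheta
    dph = 2.0 * np.pi / nphi
    th = (np.arange(ntheta) + 0.5) * dth
    ph = (np.arange(nphi) + 0.5) * dph
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    dA = R**2 * np.sin(TH) * dth * dph
    return float(np.sum(pr_func(R, TH, PH) * dA))
```

Evaluating this on a sequence of radii between $r_H$ and the outer boundary produces curves like those in the figure; a radially increasing $\Phi(r)$ flags the region where energy is not conserved.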
\subsubsection{Dependence on boundary condition/location}
A natural question arises regarding our outer boundary treatment: what influence does the location of the numerical
boundary have on our late-time solutions? To address this, we consider in Fig.~\ref{fig:location} the evolution of the electromagnetic energy
enclosed in a common region of space (specifically $r\in[1.35M, 11.65M]$), and the evolution of the net energy flux through a fixed spherical surface at $r=3M$,
for simulations with their numerical boundaries placed at different radii.
Another run using \textit{maximally dissipative} boundary conditions (and with the outer boundary placed very far away, $r_{out}\sim 225M$)
was considered as well, for comparison.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.32]{BC_E.pdf}
\includegraphics[scale=0.32]{BC_PF.pdf}
\caption{ Numerical solutions with the outer boundary located at different radii,
compared with a solution obtained under \textit{maximally dissipative} boundary conditions.
\textbf{Top:} Evolution of the EM energy, integrated up to $r\sim 12M$.
\textbf{Bottom:} Net Poynting flux through the spherical layer at $r=3M$, as a function of time. }
\label{fig:location}
\end{center}
\end{figure}
Within our approach for the outer BC, we see that the energy (top panel) reaches equilibrium in a time that depends on the location of the outer boundary:
when it is placed closer to the black hole, the solution equilibrates faster. The value attained also depends on the boundary location, being larger for smaller domains.
In the case of the maximally dissipative BC, by contrast, the energy keeps slowly dropping towards the end of the simulation at $t=200M$.
For the Poynting flux (bottom panel), we notice
that the net flux achieved under our BC, at different radial locations, seems to equilibrate and approach a single value near the end
(still slightly larger for smaller numerical domains). We further notice that the flux takes longer to equilibrate.
In the case of the outgoing boundary condition, we see the flux slowly increasing after the first transient
but then gradually decreasing at the final stage.
Its value is, however, always lower than in the other numerical solutions.
This observation might help explain the difference in the emitted power we found in Sec.~\ref{sec:spin_dep}.
We have noticed that when imposing the outgoing boundary conditions through penalty terms, an initial perturbation is
generated at the external surface, which propagates (at the speed of light) into the bulk.
When this perturbation reaches the central region, the solution is spoiled, and hence the time we can run the simulation is limited.
The reason such a perturbation occurs is that the penalties try to enforce the no-incoming condition from the very beginning,
while the incoming modes associated with the initial data employed are nonzero.
\subsection{Misaligned Case}
We are interested in studying cases in which the exterior magnetic field (generated by the accretion disk)
is not aligned with the black hole rotational axis.
We choose the $x-z$ plane for this displacement and denote by $\alpha_o$ the angle between these two directions (see expr. \eqref{initial}).
A representative (high resolution) late-time configuration, corresponding to a spin parameter $a=0.9$ and an inclination angle $\alpha_o = 15$\textdegree,
is displayed in Fig.~\ref{fig:PF_slice_misaligned}.
The electric and magnetic fields attained (left panel) are very similar to those of the aligned case, but now tilted.
It is also apparent from the picture that the distribution of magnetic field lines is no longer axi-symmetric.
Again, a collimated Poynting flux is observed in the final stationary solution (see right panel of Fig.~\ref{fig:PF_slice_misaligned}).
It can be seen that the jet follows the direction of the asymptotic (or exterior) magnetic field.
\begin{figure*}[t]
\begin{center}
\begin{minipage}{8cm}
\subfigure{\includegraphics[scale=0.21]{misaligned_3D.jpeg}}
\end{minipage}
\begin{minipage}{8cm}
\subfigure{\includegraphics[scale=0.26]{misaligned_2D.jpeg}}
\end{minipage}
\caption{Misaligned case ($\alpha_o = 15$\textdegree): late time numerical solution ($t=120M$) for a black hole with $a=0.9$.
\textbf{Left:} Representative streamlines of the magnetic field (black thick lines) and the electric field (blue lines).
The solid black surface is the BH horizon, whereas the semitransparent (red) one represents the ergosphere.
\textbf{Right:} The radial Poynting flux density is shown in color scale at the $x-z$ plane.
Thick solid and dotted black lines represent the black hole horizon and ergosphere, respectively.
}
\label{fig:PF_slice_misaligned}
\end{center}
\end{figure*}
\subsubsection{Dependence on inclination angle}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.32]{inclination_dep.pdf}
\caption{Dependence of the electromagnetic energy flux on the orientation of the asymptotic magnetic field.
Net Poynting flux is plotted as a function of the inclination angle $\alpha_o$.
The dots correspond to the numerical values obtained, while the dashed (red) curve is a fit of the form $\Phi \propto 1+ \cos^{2}(\alpha_o)$.}
\label{fig:PF_ang_dep}
\end{center}
\end{figure}
We now consider the dependence of the jet power on the inclination angle.
To that end, we vary the angle $\alpha_o$ from zero (aligned case) to $\frac{\pi}{2}$ (orthogonal case).
In Fig.~\ref{fig:PF_ang_dep}, this dependence is illustrated for a particular value of the spin parameter ($a=0.7$)
and contrasted with the expected $\Phi \propto 1+ \cos^{2}(\alpha_o)$ behavior,
which is consistent with that found for analytic non-axi-symmetric jet solutions in \cite{gralla2015}.
Again, our results compare well with those obtained in \cite{Palenzuela2010Mag} (figure 4)
and, as observed there, we see that even in the extreme situation where the two directions are orthogonal
there is still a positive net electromagnetic energy flux.
Moreover, the jet power decreases only to a fraction of its value in the aligned case.
In contrast, we do not find the small bump around $\alpha_o \approx 15$\textdegree{} reported in \cite{Palenzuela2010Mag}.
We do find, however, a similar departure from the trend starting at around $\alpha_o \approx 30$\textdegree.
As described in \cite{Palenzuela2010Mag}, we also find that the transversal structure of the magnetic field is given by
a toroidal field with (counter-)clockwise rotation in the (anti-)aligned scenario, while in the orthogonal case ($\alpha_o = \pi/2$)
the system instead generates two counter-rotating toroidal fields offset by a distance
of about the black hole diameter (see figure 6 of Ref.~\cite{Palenzuela2010Mag}).
\section{Conclusions and Perspectives}
In this article, we have introduced a new finite-difference code to perform time-dependent
and fully 3D numerical simulations of force-free electrodynamics on a Kerr background.
The code evolves, for the first time, the set of FFE equations developed in \cite{FFE}, which has improved properties in terms of well-posedness:
it is a symmetric hyperbolic system and remains so even when $\vec{E}\cdot \vec{B} \neq 0$,
while the more traditional FFE evolution equations have been shown to be only weakly hyperbolic \cite{Pfeiffer,Pfeiffer2015, FFE}.
The second and more important feature of the code consists in the implementation of stable and constraint-preserving boundary conditions,
which allow us to represent the numerical domain as embedded in a particular ambient configuration of the electromagnetic field.
Our treatment of the outer boundary relies on the penalty technique (already built in for the multiple-patch structure of the code),
which uses the characteristic information of the particular evolution system.
We chose a uniform magnetic field, as the surrounding environment, to set the incoming physical modes at the numerical boundary.
For the constraint modes, we adopt an alternative method that prevents undesired constraint violations from arising.
The inner numerical boundary, on the other hand, is always placed inside the black hole horizon and, therefore,
no boundary condition has to be prescribed there.
We have found stationary jet solutions that reproduce many of the central known results of the field.
Namely: all magnetic field lines crossing the ergosphere acquire a toroidal component (or angular velocity);
a collimated Poynting flux is generated, together with an electronic circuit in the surrounding plasma;
and energy is extracted from the black hole through the Blandford-Znajek mechanism.
We find a dependence of the net energy flux on the black hole spin and on the inclination angle
(between the asymptotic magnetic field direction and the rotation axis)
which is consistent with previous studies.
Our solutions are not just steady states, but truly stationary configurations,
in the sense that they achieve equilibrium --through the boundary conditions-- with the chosen ambient electromagnetic field.
We have further analyzed the influence of the condition and location adopted for the outer boundary on our numerical results,
finding that the total emitted power might be larger compared to analogous simulations that use maximally dissipative boundary conditions.
The location of the outer boundary, on the other hand, does not seem to significantly affect the late-time configurations, including their luminosities.
We remark that, besides the Blandford-Znajek extraction mechanism acting in our late-time solutions,
there is energy injection at the equatorial current sheet \cite{Komissarov2004b}, where negative EM energy-at-infinity is being dissipated.
Moreover, we find that the amount of energy incorporated into the stationary emitted flux is significant and depends on the method
used to control the current sheet (see Sec. \ref{sec:current_sheet}).
The more energy dissipated at the sheet, the larger the flux, which can reach almost twice the value observed at the black hole horizon.
Even though the process is a priori numerical (introduced to avoid the breakdown of the force-free approximation),
it has been argued that when inertial effects are taken into account, the electromagnetic field is expected to transfer energy to the fluid
so as to ensure a state of marginal screening of the electric field (i.e. $B^2 - E^2 \approx 0$), effectively dissipating EM energy.
Thus, in principle, the effect may be genuinely physical.
However, the details of this process are still unclear, and a finite-resistivity model
going beyond the force-free and ideal MHD regimes seems to be the essential step towards determining a more precise jet structure and emitted power.
In Appendix C, we have considered the monopole and split-monopole solutions as standard tests of our code, including a convergence analysis.
We have also monitored the dynamical behavior of the constraints, ensuring that no excessive deviations develop or enter the domain.
Our implementation of boundary conditions allows us to place the outer numerical boundary relatively close to the black hole horizon ($r_{out}\sim 10M$),
where the force-free approximation is believed to be valid.
In this paper, we considered the surrounding environment to be a uniform magnetic field (with zero electric field),
corresponding to the magnetospheric Wald problem we wanted to study.
However, we emphasize that this ambient configuration can easily be replaced in the code to represent other physical scenarios,
even cases with time-dependent ambient fields.
One possibility would be to refine the adopted ambient field to account for more particular (or realistic) accretion disk properties.
Nevertheless, we believe the vertical field contribution near the BH horizon should always be dominant, and thus we would not expect the results to differ much from those obtained here.
A second --and more interesting-- possibility is to consider the physical scenario of a black hole in translational motion through a magnetized plasma.
It was first proposed in \cite{palenzuela2010dual,Luis2011} that even a non-spinning black hole moving relative to a plasma
with an asymptotically stationary electromagnetic field topology can produce jets.
The problem has also been approached analytically (see e.g. \cite{morozova2014,penna2015}).
The idea is that the kinetic energy is now what powers the jet, instead of the rotational energy of the black hole as in the usual BZ mechanism.
By appropriately boosting the uniform magnetic field configuration we use in this paper as initial/boundary data, it is possible to implement the problem numerically within our scheme.
The advantage of our numerical approach is that we can reach truly stationary solutions and, since we are in the frame of the black hole, we can --in principle-- probe
the whole range of possible boost velocities.
We are already exploring this scenario and hope to present the results in a forthcoming paper.
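For reference, boosting the uniform-field data amounts to the standard special-relativistic transformation of $(\vec{E},\vec{B})$. A sketch (in units with $c=1$; this illustrates the transformation itself, not the code's actual implementation):

```python
import numpy as np

def boost_em_field(E, B, v):
    """Lorentz boost of electromagnetic fields with velocity v (c = 1):
      E' = gamma (E + v x B) - (gamma - 1) (E . vhat) vhat
      B' = gamma (B - v x E) - (gamma - 1) (B . vhat) vhat
    """
    E, B, v = (np.asarray(x, float) for x in (E, B, v))
    v2 = float(v @ v)
    if v2 == 0.0:
        return E, B
    gamma = 1.0 / np.sqrt(1.0 - v2)
    vhat = v / np.sqrt(v2)
    Ep = gamma * (E + np.cross(v, B)) - (gamma - 1.0) * (E @ vhat) * vhat
    Bp = gamma * (B - np.cross(v, E)) - (gamma - 1.0) * (B @ vhat) * vhat
    return Ep, Bp
```

A quick sanity check is that the Lorentz invariants $B^2 - E^2$ and $\vec{E}\cdot\vec{B}$ are preserved by the transformation.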
Finally, it would also be interesting to explore how the different physical modes behave during the initial dynamical transient and how they settle down towards the final configuration,
in both the magnetospheric Wald problem and the boosted black hole scenario.
For this purpose, we can rely on the projections (already built into the code) onto the different characteristic subspaces of the system.
\section{Acknowledgments}
We would like to thank Luis Lehner for several very helpful discussions and orientations throughout the realization of this work.
We are also grateful to Miguel Meguevand and Gabriela Vila for many interesting comments and suggestions, especially at F.C.'s thesis defense.
We acknowledge financial support from CONICET, SeCyT-UNC and MinCyT-Argentina.
This work used computational resources from CCAD Universidad Nacional de Córdoba (http://ccad.unc.edu.ar/), in particular
the Mendieta Cluster, which is also part of SNCAD – MinCyT-Argentina.
\section{Introduction}
\label{sec_intro}
The goal of supervised learning is to estimate a label $y$ by learning an estimator $\hat{y}(X)$ as a function of the associated features $X$.
Arguably, an estimator with better predictive power is preferred, and a standard supervised learning algorithm learns $\hat{y}(X)$ from existing data.
However, when applied to human-related decision-making, such as employment, college admission, and credit, an estimator optimizing its predictive power can learn biases present in the existing data.
To address this issue, fairness-aware machine learning proposes methodologies that yield predictors which not only have good predictive power but also comply with some notion of non-discrimination.
Let $s$ be the (categorical) sensitive attribute among $X$ that represents the applicants' identity (e.g., gender or race).
Group-level fairness concerns the inequality among groups with different $s$.
A naive approach, which we call \textit{color-blind} \cite{CL93}, is to remove $s$ from $X$ in predicting $\hat{y}$: although such an approach avoids direct discrimination through $s$, the correlation between $s$ and the other attributes in $X$ causes indirect discrimination, which is referred to as disparate impact.
Another widely studied notion of fairness (e.g., \cite{DBLP:conf/pkdd/KamishimaAAS12,DBLP:conf/cikm/RistanoskiLB13,DBLP:conf/icml/ZemelWSPD13,DBLP:conf/pkdd/FukuchiSK13}) is demographic parity (DP). DP requires the independence of $\hat{y}$ from $s$. For instance, a university admission process complies with DP if each group has equal access to the university. Demographic parity is also justified in the legal context of the labor market: the U.S. Equal Employment Opportunity Commission \cite{eeoc} clarified the so-called 80\%-rule, which prohibits employment decisions with non-negligible inequality.
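For binary predictions, the DP and 80\%-rule checks reduce to comparing positive-prediction rates across groups. A minimal sketch (function names are ours):

```python
import numpy as np

def disparate_impact_ratio(y_hat, s):
    """Ratio of positive-prediction rates: min group rate / max group rate.
    A ratio of 1 means demographic parity holds exactly."""
    y_hat = np.asarray(y_hat, float)
    s = np.asarray(s)
    rates = [y_hat[s == g].mean() for g in np.unique(s)]
    return min(rates) / max(rates)

def passes_80_rule(y_hat, s):
    """EEOC-style 80%-rule: no group's rate below 80% of the highest rate."""
    return disparate_impact_ratio(y_hat, s) >= 0.8
```

For example, acceptance rates of 0.75 and 0.5 in two groups give a ratio of $2/3$, failing the 80\%-rule.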
In spite of such legal background, some concerns about DP have been raised. Hardt et al.\,\cite{DBLP:conf/nips/HardtPNS16} argued that DP is incompatible with the perfect classifier $\hat{y} = y$, and thus is not appropriate when the true label $y$ is reliable. To address this issue, they proposed an alternative notion of fairness called equalized odds (EO), which requires the independence of $\hat{y}$ from $s$ conditioned on $y$ and thus allows $\hat{y} = y$. Note that essentially the same notion is also proposed in Zafar et al.\,\cite{DBLP:conf/www/ZafarVGG17}, and the notion of counterfactual fairness \cite{KusnerLRS17} is similar to EO under a specific causal modeling. Note that DP and EO are mutually incompatible \cite{KleinbergMR17}.
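Correspondingly, a violation of EO for binary $\hat{y}$, $y$, and $s$ can be quantified as the largest gap in conditional acceptance rates (i.e. the larger of the TPR and FPR differences). A minimal sketch (our naming; note that a perfect classifier has zero gap, illustrating EO's compatibility with $\hat{y}=y$):

```python
import numpy as np

def equalized_odds_gap(y, y_hat, s):
    """Max over y in {0,1} of |P(y_hat=1 | y, s=0) - P(y_hat=1 | y, s=1)|,
    for binary sensitive attribute s. Zero gap = equalized odds holds."""
    y, y_hat, s = (np.asarray(a) for a in (y, y_hat, s))
    gaps = []
    for yv in (0, 1):
        rates = [y_hat[(y == yv) & (s == g)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)
```

By contrast, the DP check from above compares unconditional rates, which is why the two criteria can disagree on the same predictor.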
Despite massive interest in fairness in machine learning, only a few studies have considered the social impact that a policy based on a proposed notion of fairness produces. The result of a policy is far from straightforward: in some cases, the introduction of a naive notion of fairness can be harmful.
For example, consider the case of a university admission policy. If the admission office discriminates against blacks by believing they are less likely to perform well academically, and lowers the admission standard for them to propel affirmative action, blacks may be discouraged from investing in their education because they pass the admission regardless of their effort. As a result, blacks may end up being less proficient, and the negative stereotype ``self-perpetuates''.
Indeed, the self-fulfillment of stereotypes is an empirically documented phenomenon in some fields \cite{glover2017discrimination}.
The difficulty of analyzing this phenomenon lies in the interaction between the policy-maker and the applicants: when a policy changes, the applicants also change their behavior due to the modified incentives.
This lack of interest in the social outcome, in turn, results in the absence of a unified measure to compare different fairness criteria.
In this regard, economic theory offers useful tools. In particular, the literature in labor economics has a long history of analyzing the welfare implications of policy changes. That is, economists investigate how the players' welfare, or the aggregate level of their utility, changes when a policy is imposed.
By combining the theoretical framework developed in labor economics with the ``oblivious'' post-processing non-discriminatory machine learning of \cite{DBLP:conf/nips/HardtPNS16}, we propose a framework for comparing different fairness notions in view of the incentives.
We demonstrate such a comparison between several policies induced by well-known fairness criteria: color-blind (CB), demographic parity (DP), and equalized odds (EO). As a result, we show that while CB and DP sometimes disproportionately discourage unfavored groups from investing in the improvement of their value, EO incentivizes the two groups equally.
Importantly, our framework is not just theoretical but applicable in practice, and it enables assessment of fairness notions based on the actual situation.
To demonstrate this point, we compare the fairness policies using a real-world dataset. We show that (i) unlike CB and DP, EO is disparity-free; moreover, (ii) all of CB, DP, and EO tend to reduce social welfare compared to the case of no fairness intervention. Among them, EO yielded the lowest social welfare: one can view this as the cost of removing disparity.
\subsection{Related work}
A long line of works on discrimination and affirmative action policy exists in the literature of labor economics(\cite{arrow1998has};\cite{holzer2000assessing};see Fang and Moro\,\cite{fm2011} for a survey of recent theoretical frameworks). Coate and Loury \cite{CL93} considered a simple model where an employer infers applicants' productivity based on one-dimensional signal, which contains information about their invested effort in skill. This nominal paper argues that even under the affirmative action policy to enforce the employer to set the same rate of hiring to all the groups, there still exist equilibria where one group is negatively stereotyped, and consequently, discouraged from investing in skills.
The problem with those analyses in economics is that their settings are abstract and simplified, so they do not admit real-world applications with actual datasets. For instance, based on their simple model, Coate and Loury \cite{CL93} state that ``The simplest intervention would insist that employers make color-blind assignments'' and that it would ensure fairness as well as equal incentives across groups. However, it is commonly perceived in machine learning that a color-blind policy does not ensure fairness due to disparate impact \cite{sweeney2013,misc:219,pmlr-v81-buolamwini18a}.
Due to this lack of consideration of the learning-from-data process and related issues, the frameworks proposed in economics are not designed for real-world application. This paper modifies their models to be applicable to machine learning problems.
More importantly, their main interest lies in affirmative action:
While affirmative action that imposes a restriction on the outcome such as the ratio of admitted students (which is similar to demographic parity) is arguably important, modern machine learning algorithms propose various methodologies to ensure the fairness at the prediction level, not the outcome level.
A few papers in machine learning have considered a game-theoretic view of decision-making processes, enabling a comparison of fairness criteria. The closest papers to ours are \cite{HuC18,delayedarxiv}. Hu and Chen\,\cite{HuC18} considered a two-stage process where each stage dealt with group-level and individual-level fairness, respectively, whereas we focus on comparing several notions of group-level fairness.
Liu et al.\,\cite{delayedarxiv} compared several notions of fairness, including demographic parity and equalized opportunity, in terms of their long-term improvements, and characterized the conditions under which each of these fairness-related constraints works. Unlike ours, the analysis in Liu et al.\,\cite{delayedarxiv} assumes the availability of the function that determines how the delayed impact of the prediction arises. Identification of such a function requires counterfactual experiments or model-dependent analyses.
Moreover, they evaluate the fairness criteria by the disparity between groups, without analyzing the social welfare.
By assuming a model with a micro-foundation of the players' decision-making, we are able to compare the welfare implications of different fairness criteria.
\section{Model}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/timeline.pdf}
\caption{Sequence of timings.}
\label{fig_timeline}
\end{figure}%
We consider a game between a continuum of applicants and a single firm.
The game models application processes, such as university admissions, job applications, and credit card applications.
A firm has a dataset on the performance of past applicants and uses it to estimate the performance of future applicants. For ease of discussion, we assume that there exist two groups:
Each applicant is assigned a sensitive attribute $s \in \{0,1\}$. Let $\lambda_1$ be the fraction of the applicants with $s=1$, and $\lambda_0 = 1- \lambda_1$. Each applicant has an option to exert effort; before deciding whether or not to exert the effort, the applicant is given a cost $c \in [\underline{c},\bar{c}]$ of doing so. Let $e \in \{q,u\}$ be the variable that indicates the effort of an applicant. The applicant's feature $X \in \mathcal{X}$ is drawn from a distribution that depends on $e$ and $s$.
The effort strongly affects the performance of the applicant, and thus the firm would like to accept all the applicants with $e=q$ (whom we call the qualified applicants) and to dismiss the applicants with $e=u$ (the unqualified applicants). If a qualified applicant is accepted, the firm earns revenue $v_q>0$. If an unqualified applicant is accepted, the firm incurs a loss $v_u>0$ (i.e., revenue $-v_u<0$). All the applicants prefer to be accepted; let $\omega$ be the reward an applicant receives upon acceptance. The firm uses a pre-trained classifier that estimates the effort $e$ of the applicant from the sensitive attribute $s$ and the non-sensitive attributes $X$.
Following \cite{DBLP:conf/nips/HardtPNS16}, we assume that the classifier is a function $\theta: \mathcal{X} \rightarrow \mathbb{R}$, where $\theta(X) \in \mathbb{R}$ indicates how likely the applicant is to be qualified.
Let $f_{e,s}(\theta)$ and $F_{e,s}(\theta)$ be the density and distribution of $\theta = \theta(X)$ given $e$ and $s$.
Let $G_s(c)$ be the distribution of the cost $c$ given $s$.
For ease of discussion, we assume $G_s(c)$ to be the uniform distribution over $[\underline{c}, \bar{c}]$.
Figure \ref{fig_timeline} displays the timing of the interaction between the applicants and the firm.
We pose the following assumption on the signal $\theta$ of the classifier.
\begin{assumption}{\rm Monotone Likelihood Ratio Property (MLRP): }
$\frac{f_{q,s}(\theta) }{f_{u,s}(\theta)}$ is strictly increasing in $\theta$ for $s=\{0,1\}$.
\label{asm_mlrp}
\end{assumption}%
Namely, Assumption \ref{asm_mlrp} states that an applicant with a larger $\theta$ is more likely to be qualified.
In the sequel, we discuss rational behavior of the firm (Section \ref{subsec_firm}) and the applicants (Section \ref{subsec_applicant}).
\subsection{Firm's behavior}
\label{subsec_firm}
The MLRP (Assumption \ref{asm_mlrp}) motivates the firm to set a threshold on $\theta$ for the hiring decision.
A rational firm, without a fairness-related restriction, optimizes its revenue, and the optimal threshold on $\theta$ depends on the firm's belief about the fraction of qualified applicants:
Let $\pi_s$ be the fraction of the qualified applicants given $s$.
When the firm observes $(\theta,s)$, the probability of this applicant being qualified is
\begin{align*}
\mathbb{P}(e=q|\theta,s) = \frac{\pi_s f_{q,s}(\theta)}{\pi_s f_{q,s}(\theta)+(1-\pi_s) f_{u,s}(\theta)}
\end{align*}
The firm accepts this applicant iff $\mathbb{P}(e=q|\theta,s)v_q-(1-\mathbb{P}(e=q|\theta,s))v_u\geq 0 $.
Given the MLRP assumption, this is equivalent to setting a threshold $\tilde{\theta}_s$ such that
\begin{equation}
\frac{v_q}{v_u} = \frac{1-\pi_s}{\pi_s} \frac{f_{u,s}(\tilde{\theta}_s)}{f_{q,s}(\tilde{\theta}_s)}
\label{eq_rfirm}
\end{equation}
Letting $r = v_q/v_u$ and $\phi_s(\theta) = f_{u,s}(\theta)/f_{q,s}(\theta)$, \eqref{eq_rfirm} is equivalent to
\begin{equation}
\pi_s = \frac{\phi_s(\tilde{\theta}_s)}{r + \phi_s(\tilde{\theta}_s)},
\label{eq_rfirmtwo}
\end{equation}
and the applicants with $\theta >\tilde{\theta}_s$ are accepted.
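As a concrete illustration of the firm's best response, Equation \eqref{eq_rfirmtwo} can be inverted in closed form when the signals are Gaussian with common variance. The Gaussian parametrization and all numbers below are our own illustrative assumptions (the model itself does not fix a signal family):

```python
import math

def firm_threshold(pi, r, mu_q=1.0, mu_u=0.0, sigma=1.0):
    """Firm's best-response threshold for Gaussian signals
    f_q = N(mu_q, sigma^2), f_u = N(mu_u, sigma^2) (illustrative assumption).

    The likelihood ratio is
        phi(theta) = f_u(theta) / f_q(theta)
                   = exp((mu_q**2 - mu_u**2 - 2*theta*(mu_q - mu_u)) / (2*sigma**2)),
    and the firm's optimality condition pi = phi / (r + phi) gives
    phi(theta~) = r * pi / (1 - pi), which we invert for theta~.
    """
    phi = r * pi / (1.0 - pi)
    return (mu_q + mu_u) / 2.0 - sigma ** 2 * math.log(phi) / (mu_q - mu_u)
```

A larger belief $\pi_s$ lowers the threshold, as the FR curve requires; at $\pi = 0.5$ and $r = 1$ the threshold sits exactly between the two means.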
\subsection{Applicants' behavior}
\label{subsec_applicant}
Let $\tilde{c}_s(\theta) = \omega [ F_{u,s}(\theta) - F_{q,s}(\theta) ]$
be the expected increase in reward from exerting effort. Given the firm's threshold $\tilde{\theta}_s$, $\tilde{c}_s(\tilde{\theta}_s)$ is the incentive of the applicant to exert effort.
A rational applicant invests in skills iff his or her cost $c$ is smaller than $\tilde{c}_s(\tilde{\theta}_s)$, which implies
\begin{equation}
\label{eq_applicants}
\pi_s = G(\tilde{c}_s(\tilde{\theta}_s)) := \max\left(0, \min\left(1, \frac{\tilde{c}_s(\tilde{\theta}_s) - \underline{c}}{\bar{c}-\underline{c}}\right)\right).
\end{equation}
\subsection{Laissez-faire Equilibria}
\label{subsec_lf}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/EEWWcurve.pdf}
\caption{Illustration of the equilibrium parameters $(\tilde{\theta}_s, \pi_s)$. Assumption \ref{asm_mlrp} implies that the FR curve is strictly decreasing, and the AR curve is unimodal. The value $\hat{\theta}$ satisfying $(\mathrm{d} G(\tilde{c}(\theta)))/(\mathrm{d} \theta) = 0$ is the mode of the AR curve.}
\label{fig_eewwcurve}
\end{figure}%
Section \ref{subsec_firm} (resp. \ref{subsec_applicant}) introduced the best response of the firm (resp. the applicants) to its belief about the action of the applicants (resp. the firm).
When no fairness-related constraint is posed, a firm that fully exploits $s$ (that we call ``Laissez-faire'', LF) will set different threshold $\tilde{\theta}_s$ for each $s$.
If the fraction of qualified applicants and the hiring threshold are exactly the rates postulated by the beliefs, then the players on both sides cannot increase their revenue by deviating from their current actions; namely, in equilibrium $\pi_s = G(\tilde{c}_s(\tilde{\theta}_s))$ holds:
\begin{definition}{\rm (Laissez-Faire Equilibrium \cite{CL93})}
An equilibrium is a quadruple $(\tilde{\theta}_0, \tilde{\theta}_1, \pi_0, \pi_1)$ satisfying Equality \eqref{eq_rfirmtwo} for $s=0,1$ and $\pi_0 = G(\tilde{c}_0(\tilde{\theta}_0)), \pi_1 = G(\tilde{c}_1(\tilde{\theta}_1))$.
\end{definition}
Figure \ref{fig_eewwcurve} illustrates the beliefs $\pi_s$ at equilibria, which are the intersections of the following two curves: (i) the Firm-Response (FR) curve $\{(\tilde{\theta}_s,\pi_s): \pi_s = \frac{\phi_s(\tilde{\theta}_s)}{r + \phi_s(\tilde{\theta}_s)} \}$, which indicates the threshold that maximizes the firm's revenue, and (ii) the Applicant-Response (AR) curve $\{(\tilde{\theta}_s,\pi_s): \pi_s = G(\tilde{c}_s(\tilde{\theta}_s))\}$, which indicates the incentive of the applicants.
The following proposition holds:
\begin{proposition}{\rm (Existence of multiple equilibria, Proposition 1 in Coate and Loury \cite{CL93})}
For each $s$, there exist two or more intersections of the FR and AR curves if and only if there exists $\tilde{\theta}_s$ such that $ G(\tilde{c}_s(\tilde{\theta}_s)) > \frac{\phi_s(\tilde{\theta}_s)}{r + \phi_s(\tilde{\theta}_s)}$.
\end{proposition}
The proof directly follows from the monotonicity of the FR curve and the unimodality of the AR curve.
As discussed by Coate and Loury \cite{CL93}, the existence of multiple intersections implies the existence of asymmetric equilibria where $\pi_0 < \pi_1$, even when the signal is unbiased (i.e., $F_{e, s=0}(\theta) = F_{e,s=1}(\theta)$). Such an asymmetric equilibrium discourages the unfavored group $s=0$, as the higher threshold $\tilde{\theta}_0 > \tilde{\theta}_1$ implies a reduced incentive for that group.
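The multiplicity of equilibria can be explored numerically by scanning $\theta$ and looking for sign changes in the gap between the AR and FR curves. The sketch below assumes Gaussian signals and a uniform cost distribution; all parameter values are our own illustrative assumptions:

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def find_equilibria(r, omega, c_lo, c_hi, mu_q=1.0, mu_u=0.0, sigma=1.0, n=20000):
    """Return (theta, pi) points where the FR and AR curves cross, assuming
    f_q = N(mu_q, sigma^2), f_u = N(mu_u, sigma^2) and a uniform cost
    distribution on [c_lo, c_hi] (illustrative assumptions)."""
    crossings, prev_gap = [], None
    for i in range(n + 1):
        theta = -5.0 + 10.0 * i / n
        # FR curve: the belief pi that makes theta the firm's optimal threshold.
        phi = math.exp((mu_q**2 - mu_u**2 - 2.0 * theta * (mu_q - mu_u))
                       / (2.0 * sigma**2))
        pi_fr = phi / (r + phi)
        # AR curve: fraction of applicants whose cost is below the incentive.
        c_tilde = omega * (norm_cdf(theta, mu_u, sigma) - norm_cdf(theta, mu_q, sigma))
        pi_ar = min(1.0, max(0.0, (c_tilde - c_lo) / (c_hi - c_lo)))
        gap = pi_ar - pi_fr
        if prev_gap is not None and prev_gap * gap < 0.0:  # sign change => crossing
            crossings.append((theta, 0.5 * (pi_fr + pi_ar)))
        prev_gap = gap
    return crossings
```

With, e.g., $r=1$, $\omega=3$, and costs uniform on $[0.1, 1.0]$, the scan returns multiple crossings, in line with the proposition: near its mode the AR curve rises above the strictly decreasing FR curve.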
\subsection{Social Welfare}
\label{subsec_sw}
In accordance with Sections \ref{subsec_firm} and \ref{subsec_applicant}, we define the social welfare as follows:
The firm's welfare is
\begin{align*}
\mathrm{FW}_s = \mathrm{FW}_s(\theta, \pi) = \left( \pi_s (1-F_{q,s}(\theta)) v_q - (1-\pi_s) (1-F_{u,s}(\theta)) v_u \right),
\end{align*}
whereas the applicants' welfare is
\begin{align*}
\mathrm{AW}_s = \mathrm{AW}_s (\theta, \pi) = \omega \Bigl( \pi \left( 1-F_{q,s}(\theta) \right) + (1 - \pi) \left( 1-F_{u,s}(\theta) \right) \Bigr) - \frac{1}{\bar{c}-\underline{c}} \int_{\underline{c}}^{(1 - \pi) \underline{c} + \pi \bar{c}} c \,\mathrm{d} c,
\end{align*}
where the last term is the aggregate investment cost paid by the fraction $\pi$ of applicants with the lowest costs.
The social welfare is the sum of these two quantities, aggregated over the groups: let $\mathrm{SW}_s = \mathrm{FW}_s + \mathrm{AW}_s$. The quantity $\mathrm{SW} = \sum_{s} \lambda_s \mathrm{SW}_s(\tilde{\theta}_s, \pi_s)$ is the social welfare per applicant.
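These welfare quantities are straightforward to evaluate numerically for a parametric example. The sketch below again assumes Gaussian signals and a uniform cost distribution (our own illustrative assumptions); we read the cost term as the aggregate investment cost of the invested applicants, subtracted from the expected reward and normalized by the cost range:

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def firm_welfare(theta, pi, v_q, v_u, mu_q=1.0, mu_u=0.0, sigma=1.0):
    # FW_s = pi * (1 - F_q(theta)) * v_q - (1 - pi) * (1 - F_u(theta)) * v_u
    return (pi * (1.0 - norm_cdf(theta, mu_q, sigma)) * v_q
            - (1.0 - pi) * (1.0 - norm_cdf(theta, mu_u, sigma)) * v_u)

def applicant_welfare(theta, pi, omega, c_lo, c_hi, mu_q=1.0, mu_u=0.0, sigma=1.0):
    # Expected acceptance reward of qualified and unqualified applicants ...
    reward = omega * (pi * (1.0 - norm_cdf(theta, mu_q, sigma))
                      + (1.0 - pi) * (1.0 - norm_cdf(theta, mu_u, sigma)))
    # ... minus the aggregate cost paid by the pi cheapest applicants
    # (cost uniform on [c_lo, c_hi]; subtracted and normalized by the cost
    # range, reading the integral as an expectation over applicants).
    upper = (1.0 - pi) * c_lo + pi * c_hi
    cost = (upper ** 2 - c_lo ** 2) / (2.0 * (c_hi - c_lo))
    return reward - cost
```

The firm's welfare is strictly increasing in $\pi$ for any fixed threshold, a fact used in the proof of the theorem below.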
\begin{theorem}{\rm (Equilibrium of the maximum social welfare)}
Fix $s \in \{0,1\}$. For group $s$, let there be two equilibria $(\tilde{\theta}_s^{(1)}, \pi_s^{(1)})$, $(\tilde{\theta}_s^{(2)},\pi_s^{(2)})$ such that $\pi_s^{(1)} > \pi_s^{(2)}$. Let $\mathrm{SW}_s^{(1)}, \mathrm{SW}_s^{(2)}$ be the corresponding social welfare. Then, $\mathrm{SW}_s^{(1)} > \mathrm{SW}_s^{(2)}$.
\label{thm_sworder}
\end{theorem}
\begin{proof}
Note that the fact that $(\tilde{\theta}_s^{(i)},\pi_s^{(i)})$, $i \in \{1,2\}$, are equilibria implies that
\begin{equation}
\mathrm{SW}_s^{(i)} := \mathrm{FW}_s(\tilde{\theta}_s^{(i)},\pi_s^{(i)}) + \mathrm{AW}_s(\tilde{\theta}_s^{(i)},\pi_s^{(i)}) = \max_{\theta} \mathrm{FW}_s(\theta, \pi_s^{(i)}) + \max_\pi \mathrm{AW}_s(\tilde{\theta}_s^{(i)}, \pi).
\label{ineq_sw_sup}
\end{equation}
and thus
\begin{multline}
\mathrm{SW}_s^{(1)} - \mathrm{SW}_s^{(2)} \ge \min_\theta \left( (\mathrm{FW}_s(\theta, \pi_s^{(1)}) - \mathrm{FW}_s(\theta, \pi_s^{(2)})) \right) \\
+ \max_\pi \mathrm{AW}_s(\tilde{\theta}_s^{(1)}, \pi) - \max_\pi \mathrm{AW}_s(\tilde{\theta}_s^{(2)}, \pi).
\end{multline}
The term $\min_\theta \left( (\mathrm{FW}_s(\theta, \pi_s^{(1)}) - \mathrm{FW}_s(\theta, \pi_s^{(2)})) \right)$ is positive because $\mathrm{FW}_s(\theta, \pi)$ is strictly increasing in $\pi$.
On the other hand, the monotonicity of the FR curve and $\pi_s^{(1)} > \pi_s^{(2)}$ imply $\tilde{\theta}_s^{(1)} < \tilde{\theta}_s^{(2)}$. The second term $\max_\pi \mathrm{AW}_s(\tilde{\theta}_s^{(1)}, \pi) - \max_\pi \mathrm{AW}_s(\tilde{\theta}_s^{(2)}, \pi)$ is non-negative because $\max_\pi \mathrm{AW}_s(\theta, \pi)$, viewed as a function of $\theta$, is decreasing: it is an integral over applicants, each of whom takes the better of two options, (i) paying the cost $c$ to receive expected reward $\omega (1 - F_{q,s}(\theta))$ or (ii) receiving expected reward $\omega (1 - F_{u,s}(\theta))$, and both options yield a reward that is decreasing in $\theta$.
\end{proof}
Theorem \ref{thm_sworder} states that the equilibria are ordered by $\pi$. This matches our intuition about application processes: the more effort the applicants exert, the more applicants the firm accepts, and the better the equilibrium is.
\section{Fairness Criteria and Their Results}
\label{sec_fairpolicy}
Section \ref{subsec_lf} shows that the lack of a fairness constraint discourages the individuals of the unfavored group under an asymmetric equilibrium. A natural question is whether we can impose some non-discriminatory constraint on the firm's decision-making to remove such asymmetric equilibria.
This section compares several constraints that are discussed in the literature.
The first constraint adopts the same threshold for the two groups:
\begin{definition}{\rm (Color blind (CB) policy)}
The firm's decision is said to be color-blind iff
$\tilde{\theta}_0 = \tilde{\theta}_1$.
The equilibria under CB are characterized by the set of quadruples $(\tilde{\theta}_0, \tilde{\theta}_1, \pi_0, \pi_1)$ that satisfy the following constraints: (i) Equality \eqref{eq_applicants} holds for $s=0,1$. (ii) Moreover, letting $\tilde{\theta} := \tilde{\theta}_0 = \tilde{\theta}_1$, the following holds:
\[
(\lambda_0 \pi_0 + \lambda_1 \pi_1) = \frac{\phi(\tilde{\theta})}{r + \phi(\tilde{\theta})},
\]
where $\phi(\theta) = \frac{\lambda_0 f_{u,s=0}(\theta) + \lambda_1 f_{u,s=1}(\theta)}{\lambda_0 f_{q,s=0}(\theta) + \lambda_1 f_{q,s=1}(\theta)} $.
\end{definition}
In other words, under CB the firm optimizes a single $\tilde{\theta}$ over a single population that mixes the two groups $s=0,1$.
Contrary to the argument of Coate and Loury \cite{CL93} (as discussed in Section \ref{sec_intro}),
CB potentially yields unfair treatment of the two groups when $F_{e,s}(\theta)$ varies largely between the groups $s=0,1$:
\begin{proposition}
There exists an equilibrium with $\pi_0 \ne \pi_1$ under CB.
\label{prop_cbdisp}
\end{proposition}
In the following, we show examples of the disparity in Proposition \ref{prop_cbdisp}.
Let $\mathcal{N}(\mu, \sigma^2)$ be a normal distribution with mean $\mu$ and variance $\sigma^2$, and let $\mathbb{I}(A)$ be $1$ if $A$ holds and $0$ otherwise.
\begin{example}{\rm (Insufficient identification)}
Let $d=1$ and
\begin{align}
X_{s=0} &\sim \mathcal{N}(\mathbb{I}(e=q), 1)\nonumber\\
X_{s=1} &\sim \mathcal{N}(\mathbb{I}(e=q)+10, 1)
\end{align}
and $\lambda_0 \approx 1$. Since the classifier cannot use $s$ explicitly, it uses the only available dimension: $\theta = X$.
Assume that $v_u$, $v_q$, and $\omega$ are such that there exist two or more equilibria for group $s=0$, as shown in Figure \ref{fig_eewwcurve}. Recall that the equilibria under CB are determined by the interaction between the firm and the mixture of the two groups $s=0,1$.
As the population share of $s=1$ approaches $0$, one can show that the $\tilde{\theta}$ of any equilibrium comes arbitrarily close to that of one of the equilibria of the majority $s=0$. Such a threshold retains some capability of identifying whether a person in $s=0$ has $e=u$ or $e=q$, and thus $\tilde{\theta}$ is not far from $0.5$.
In this case, most people of $s=1$ are accepted regardless of their effort (which discourages them from investing), and thus $\pi_1$ is close to $0$ whereas $\pi_0$ is not.
\end{example}
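The discouragement in the example above can be checked directly: with $\theta = X$, the incentive $\tilde{c}_s(\theta) = \omega(F_{u,s}(\theta) - F_{q,s}(\theta))$ of group $s=1$ is essentially zero at any threshold that separates the two effort levels of the majority. A minimal sketch (taking $\omega = 1$ for illustration):

```python
import math

def norm_cdf(x, mu, sigma=1.0):
    """Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def incentive(theta, mu_u, mu_q, omega=1.0):
    """c~_s(theta) = omega * (F_{u,s}(theta) - F_{q,s}(theta)) for Gaussian signals."""
    return omega * (norm_cdf(theta, mu_u) - norm_cdf(theta, mu_q))

# Group s=0: X ~ N(I(e=q), 1); group s=1: X ~ N(I(e=q) + 10, 1), as in the example.
c0 = incentive(0.5, mu_u=0.0, mu_q=1.0)    # majority: sizable incentive near theta = 0.5
c1 = incentive(0.5, mu_u=10.0, mu_q=11.0)  # minority: accepted regardless, so ~0
```

At $\theta = 0.5$ the majority's incentive is roughly $0.38\omega$, while the minority's is numerically zero, so only the majority is incentivized to invest.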
Another example is the case where the predictive power of $\theta$ differs largely between the two groups:
\begin{example}{\rm (Signals of different accuracy)}
Let $X \in \mathbb{R}^2$ and let $\textbf{b}_0, \textbf{b}_1$ be orthogonal basis vectors of $\mathbb{R}^2$:
\begin{align}
X|s=0 &\sim \mathcal{N}(\mathbb{I}(e=q)-0.5, 1)\ \textbf{b}_0 \nonumber\\
X|s=1 &\sim \mathcal{N}(\mathbb{I}(e=q)-0.5, 10^2)\ \textbf{b}_1.
\end{align}
In this case, a linear classifier can use a linear combination of the two basis vectors to create a signal $\theta$: the first (resp. second) basis vector identifies the effort of people in $s=0$ (resp. $s=1$). For any threshold value of $\theta$, such a signal induces very different incentives $\tilde{c}_s(\theta)$ between the groups $s=0,1$. Due to the noisy signal, $\theta$ carries very little information on whether a person of $s=1$ exerts effort. When an equilibrium exists, very few applicants of $s=1$ exert effort, whereas a certain portion of $s=0$ is incentivized to do so.
\end{example}%
The implication of the examples above is as follows: when the signal $\theta$ treats the two groups differently, as is shown in the case of credit risk prediction \cite{DBLP:conf/nips/HardtPNS16} (Figure 4 therein), the accuracy of the classifier can vary with $s$, which makes a naive application of CB fail.
We next consider the constraint of the demographic parity, which is arguably the most common notion of fairness in the context of fairness-aware machine learning.
\begin{definition}{\rm (demographic parity, DP)}
The firm's decision is said to satisfy demographic parity iff
$\mathbb{P}[\theta > \tilde{\theta}_0| s= 0] = \mathbb{P}[\theta > \tilde{\theta}_1| s = 1]$. The equilibria under DP are characterized by the set of quadruples $(\tilde{\theta}_0, \tilde{\theta}_1, \pi_0, \pi_1)$ that satisfy the following constraints: (i) Equality \eqref{eq_applicants} holds for $s=0,1$. (ii) Moreover, the following holds:
\begin{align*}
(\tilde{\theta}_0, \tilde{\theta}_1) \in \mathop{\mathrm{arg\,max}}_{(\theta_0, \theta_1)}\ & \sum_{s} \lambda_s \mathrm{FW}_s(\theta_s, \pi_s), \nonumber\\
\textrm{s.t.}\ &\pi_0 (1-F_{q,s=0}(\theta_0)) + (1- \pi_0) (1-F_{u,s=0}(\theta_0)) \nonumber\\ & =
\pi_1 (1-F_{q,s=1}(\theta_1)) + (1- \pi_1) (1-F_{u,s=1}(\theta_1)).
\end{align*}
\end{definition}
In other words, DP equalizes the acceptance rates of the two groups $s=0,1$.
However, as discussed in Coate and Loury \cite{CL93}, such a constraint does not remove disparity:
\begin{proposition}
There exists an equilibrium with $\pi_0 \ne \pi_1$ under the demographic parity.
\label{prop_ineqaa}
\end{proposition}
A formal construction of an explicit example is given in Coate and Loury \cite{CL93} (Section B therein). Although their example uses a discrete $\theta$, it is not difficult to confirm empirically that a standard classifier can yield equilibria with $\pi_0 \ne \pi_1$, as we show in Section \ref{sec_simulation}.
In a word, an asymmetric equilibrium exists when (i) the fraction of the minority $\lambda_1$ is small and (ii) the classifier is very accurate (i.e., $F_{u,s}(\theta)-F_{q,s}(\theta)$ is large).
In such a case, the firm ``patronizes'' the minority into not exerting effort (i.e., small $\pi_1$) because admitting a small fraction of unqualified minority applicants is cheaper than dismissing many qualified majority applicants. This equilibrium discourages minorities, as they have little motivation to invest in themselves when they know they are accepted regardless of their effort.
Recent work \cite{DBLP:conf/nips/HardtPNS16,DBLP:conf/www/ZafarVGG17} proposed alternative criteria of fairness called equalized opportunity and equalized odds.
Let $\mathrm{FP}_s(\tilde{\theta}) = \mathbb{P}[\theta > \tilde{\theta}_s | s, e = u ]$ and $\mathrm{TP}_s(\tilde{\theta}) = \mathbb{P}[\theta > \tilde{\theta}_s | s, e = q ]$
be the false positive (FP) and the true positive (TP) rate of the classifier, respectively.
The equalized odds criterion requires $\theta$ to have the same Receiver Operating Characteristic (ROC) curve (i.e., a curve comprised of (FP, TP)) for both groups.
When the data is biased, $\theta$ does not satisfy the equalized odds criterion \cite{sweeney2013,misc:219,pmlr-v81-buolamwini18a}. In our simulation in Section \ref{sec_simulation}, the classifier trained with a U.S. national survey dataset is biased towards the majority (Figure \ref{fig_wwee} (a)).
To address this issue, Hardt et al.\,\cite{DBLP:conf/nips/HardtPNS16} proposed a post-processing that derives another classifier $\theta'$ from the original signal $\theta$.
The following theorem states the feasible region of FP and TP rates of the derived predictor.
\begin{theorem}{\rm (feasible region of a derived predictor \cite{DBLP:conf/nips/HardtPNS16})}
Consider the two-dimensional convex region spanned by the (FP, TP) curve of $\theta$ and the line segment from $(0,0)$ to $(1,1)$. The (FP, TP) of any derived predictor $\theta'$ lies in this convex region.
\end{theorem}
In other words, any derived predictor $\theta'$ is feasible as long as its ROC curve lies under the ROC curve of $\theta$. The EO policy is formalized as follows:
\begin{definition}{\rm (Equalized odds)}
The firm's policy is said to be odds-equalized when a (derived) predictor $\theta'$ satisfies
$\mathrm{FP}_{s=0}(\theta') = \mathrm{FP}_{s=1}(\theta')$ and $\mathrm{TP}_{s=0}(\theta') = \mathrm{TP}_{s=1}(\theta')$,
and the assignment based on the derived signal $\theta'$ is color-blind.
\label{eodds}
\end{definition}
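As a rough numerical companion to the feasibility theorem above, the following sketch checks whether a target (FP, TP) pair is achievable by a derived predictor, assuming (as in our figures) that the ROC curve lies above the diagonal, so that the spanned region is $\{(\mathrm{FP}, \mathrm{TP}) : \mathrm{FP} \le \mathrm{TP} \le \mathrm{ROC}(\mathrm{FP})\}$. The ROC representation and all values are illustrative assumptions:

```python
def roc_value(roc_points, fp):
    """Piecewise-linear interpolation of an ROC curve given as sorted (FP, TP) pairs."""
    pts = sorted(roc_points)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= fp <= x1:
            t = 0.0 if x1 == x0 else (fp - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("fp lies outside the range of the ROC curve")

def feasible(roc_points, fp, tp):
    """True iff (fp, tp) is achievable by a derived predictor, assuming the ROC
    curve is above the diagonal, so the region spanned by the curve and the
    segment (0,0)-(1,1) is {(fp, tp): fp <= tp <= ROC(fp)}."""
    return fp <= tp <= roc_value(roc_points, fp)
```

For EO, one would pick a common (FP, TP) pair feasible for both groups, e.g., on or below the lower of the two ROC curves.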
The following theorem states that EO does not generate disparity: there exists no asymmetric equilibrium under a derived predictor satisfying EO.
\begin{theorem}
For any equilibrium under EO, $\pi_0 = \pi_1$ holds.
\label{thm_eodds}
\end{theorem}
\begin{proof}
Let $\tilde{\theta}'$ be the threshold at an equilibrium.
From EO, $\tilde{c}_s(\tilde{\theta}') = \omega (F_{u,s}(\tilde{\theta}') - F_{q,s}(\tilde{\theta}'))$ is identical for the two groups $s=0,1$, and thus $\pi_s = G(\tilde{c}_s(\tilde{\theta}'))$ is also identical.
\end{proof}
Note that Hardt et al.\,\cite{DBLP:conf/nips/HardtPNS16} also proposed a policy called equalized opportunity that only requires the equality of TP.
By definition, any predictor satisfying equalized odds satisfies equalized opportunity, but not vice versa. Unlike equalized odds, equalized opportunity can result in $\pi_0 \ne \pi_1$.
\section{Simulation}
\label{sec_simulation}
\begin{figure}[t]
\begin{center}
\subfloat[The ROC curve]{
\includegraphics[width=0.3\textwidth]{figures/fptp.pdf}
}
\subfloat[$s=0$]{
\includegraphics[scale=0.4]{figures/wwee1.pdf}
}
\subfloat[$s=1$]{
\includegraphics[scale=0.4]{figures/wwee2.pdf}
}
\end{center}
\caption{(a) The ROC curves of a predictor trained with the NLSY dataset. Details of the dataset and the settings are described in Section \ref{sec_simulation}. One can confirm that the convexity of the ROC curve is equivalent to the MLRP. In the figure, the two ROC curves are fairly close to convex. (b)(c) The FR and AR curves estimated from the NLSY dataset.}
\vspace{-1.5em}
\label{fig_wwee}
\end{figure}%
To assess the social welfare and disparity at the equilibria of the LF, CB, DP, and EO policies, we conducted numerical simulations.
\begin{table}[b!]
\caption{Results of the policies. The social welfare (SW), the welfare of the firm (FW), and disparity $|\pi_0 - \pi_1|$ of the best equilibrium are shown. We set $\lambda_0 = 2028/(2028+782)$.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
policy & LF & CB & DP & EO \\\hline\hline
Disparity & $9.8$\% & $8.8$\% & $12.8$\% & $0$\% \\\hline
SW & $2338.6$ & $2304.7$ & $2411.2$ & $1412.2$ \\\hline
FW & $1614.3$ & $1596.9$ & $1564.8$ & $1004.3$ \\\hline
\end{tabular}
\end{center}
\label{tbl_results}
\end{table}%
\textbf{Dataset and Settings:} We used the National Longitudinal Survey of Youth 1997 (NLSY97) dataset, retrieved from https://www.bls.gov/nls/, a survey by the U.S. Bureau of Labor Statistics intended to gather information on the labor market activities and other life events of several groups. We model a virtual company's hiring decision, assuming that the company does not have access to the applicants' academic scores.
We set $y$ to be whether each person's GPA is $\ge 3.0$ or not.
The sensitive attribute $s$ is the race of the person ($s=0$: white, $s=1$: black or African American); there are in total 2,028 (resp. 782) people with $s=0$ (resp. $s=1$). We set $X$ to be demographic features comprising their school records, attitude towards life (voluntary and anti-moral activities of themselves and their peers), and geographical information during 1997 (corresponding to their late teens).
The reward $v_q$ (resp. $v_u$) is chosen to be $53097 - 46640$ (resp. $46640 - 40604$) dollars, the gap between the average 2015 income (corresponding to their early thirties) of the people with GPA $\ge 3.0$ (resp. $< 3.0$) and that of all people: if the job market is perfectly competitive, the wage equals the productivity of the workers a company hires, and hiring a worker yields a reward equal to the gap between his or her productivity and the average wage. The reward $\omega$ is chosen to be $46640 - 40604$ dollars, which models the gap between the firm's salary and the minimum wage obtainable with minimal effort. The cost distribution is chosen to be uniform from $0$ to $\max_{s,\theta} \omega (F_{u,s}(\theta) - F_{q,s}(\theta))$, as applicants with a cost above this value never exert effort. Note that our results are not very sensitive to these settings as long as multiple equilibria exist.
We used the RidgeCV classifier of the scikit-learn library \cite{scikit-learn} to produce $\theta$. Two thirds of the people are used to train the classifier, and the results below are computed on the remaining third.
\textbf{Results:}
From the ROC curves shown in Figure \ref{fig_wwee} (a), one can see that the accuracy of the classifier differs between the two groups: the GPA of the majority $s=0$ is more predictable than that of the minority. This may stem from the fact that a classifier minimizes the cumulative empirical loss and, as a result, tends to fit the majority.
Figure \ref{fig_wwee} (b)(c) shows the best responses of the applicants and the firm under LF. Generally, the equilibrium values of $\pi_0$ are larger than those of $\pi_1$. As a result, the social welfare per person in $s=0$ is usually larger than that in $s=1$. Note that, in estimating the FR curve, we applied some averaging to make $f_{e,s}(\theta)$ stable.
Based on $F_{e,s}(\theta)$ and $f_{e,s}(\theta)$ in Figure \ref{fig_wwee}, we ran simulations to measure the social welfare (SW) and the disparity $|\pi_0 - \pi_1|$ (Table \ref{tbl_results}). To find equilibria, we discretized $\theta$ and sought the points where the best-response curves intersect.
One can see that (i) the results of CB and DP are more or less the same as those of LF: they did not remove disparity, and DP even increased it. Unlike these policies, EO is disparity-free.
(ii) EO, the only policy that does not yield disparity, results in the smallest SW. This is not very surprising: EO reduces the predictive power of the classifier for $s=0$ to match that for $s=1$, which we may consider the price of achieving incentive-level equality.
Somewhat surprisingly, DP slightly increases SW, which we discuss in Appendix \ref{app_swdec}.
\section{Conclusion}
We have studied a game between a continuum of applicants and a firm, which models human-related decision-making such as university admissions, hiring, and credit risk assessment. Our framework, which ties together two lines of work in theoretical labor economics and machine learning, provides a method to compare existing (or future) non-discriminatory policies in terms of their social welfare and disparity. The framework is rather simple, and many extensions can be considered (e.g., making the investment $e$ a continuous variable).
Although we showed that EO is the only policy considered that does not yield disparity, it tends to reduce social welfare. Interesting directions for future work include proposing a policy that balances social welfare and disparity: a policy with minimal or no loss of social welfare and small disparity is desirable. Another line of future work lies in evaluating policies in online settings, such as multi-armed bandits \cite{DBLP:conf/sigecom/KannanKMPRVW17}.
\clearpage
\section*{Acknowledgement}
The authors gratefully thank Hiromi Arai and Kazuto Fukuchi for useful discussions and insightful comments.
\bibliographystyle{unsrt}
\section{Introduction}
\par
Progress in neutrino oscillation experiments has gradually confirmed that neutrinos are massive and oscillate \cite{PDG}. However, the theoretical understanding of the origin of the mixing pattern and the smallness of neutrino masses has not yet been settled. Many suggestions on possible models for neutrino mixing and masses have been made. For example, the T2K data \cite{T2K} on
$ \sin^2 2 \theta_{13} > 0$ have motivated models based on discrete flavor groups and corrections to the original tri-bimaximal
mixing \cite{ALT}. The MiniBooNE antineutrino data \cite{MIN} have renewed interest in sterile neutrinos \cite{GIU}, and
extra Higgs doublets can also be a source of new neutrino properties \cite{JAP}.
\par
Neutrino masses and oscillations seem to require new physical scales that are not present in the standard model (SM). There are at
least three new scales involved: the neutrino mass scale, the lepton number violation scale and the parity breaking scale. All these
scales enter in one of the most appealing extensions of the SM, the left-right symmetric models \cite{JCP}. These
models start from the simple gauge structure of $SU(2)_{L}\otimes SU(2)_{R}\otimes U(1)$ and can, hopefully,
be tested at the LHC energies. Parity can be broken at the $SU(2)_R$ scale. But it can also be broken by a neutral singlet sector, as in the $D$-parity mechanism developed by Chang, Mohapatra and Parida \cite{CMP}. Small neutrino masses can be generated by the seesaw mechanism.
In this case, lepton number violation is introduced by Majorana terms at very high (GUT) energies. An alternative is the inverse seesaw mechanism \cite{MMV}.
In the original version of the inverse seesaw mechanism, a new left-handed neutrino singlet is introduced. If one imposes lepton
number conservation in this sector, there are no Majorana mass terms. A new right-handed neutral fermion singlet is also present
and it is allowed to violate lepton number at a very small scale. This small scale is responsible for the small neutrino mass.
In this scenario no ultrahigh breaking scale is introduced.
From another point of view, mirror models have recently \cite{MART} been studied and it was shown that three additional mirror families are consistent with the standard model if one additional inert Higgs doublet is included.
This paper is organized as follows: in Section II we summarize the scalar content of the model. In Section III we present the
fundamental fermionic representation of the model. In Section IV we discuss the gauge interactions and identify the neutrino
fields and new $Z^{\prime}$ interactions. In Section V we present the model predictions for the LHC energies. Finally we
summarize the model and its phenomenological consequences in Section VI.
\section{The Higgs scalars}\setcounter{equation}{0}
The fundamental scalar representation in our mirror model contains the following Higgs scalars: two doublets $\Phi_L$ and $\Phi_R$, which develop the vacuum expectation values $v_L$ and $v_R$ respectively,
\begin{eqnarray}
\begin{array}{cc}
\Phi_L=\left(\begin{array}{c}
\phi_{_L}^+\\
\\
\phi_{_L}^0\end{array}\right),&
\Phi_R=\left(\begin{array}{c}
\phi_{_R}^+\\
\\
\phi_{_R}^0\end{array}\right),
\end{array}
\end{eqnarray}
\noindent
where
$$\Phi_{_R} \buildrel {\rm D} \over \longleftrightarrow \Phi_{_L} $$
\noindent with transformation properties under $SU(2)_L\times SU(2)_R \times U(1)_Y$ given by $(1/2,0,1)_{\phi_L},\; (0,1/2,1)_{\phi_R}$.
The singlet fields of the model are $S_M$, which develops a v.e.v. at a very small scale and couples through Majorana mass terms,
and $M_{N_L}$, $M_{N_R}$, which couple to lepton-number-conserving (Dirac) terms at the TeV scale.
For the lepton number violating singlet we impose the symmetry,
$$S_M \buildrel {\rm D} \over \longleftrightarrow S_M $$
and for the lepton number conserving singlets,
$$M_{N_L} \buildrel {\rm D} \over \longleftrightarrow -M_{N_R}. $$
\par
These scalar fields will develop vacuum expectation values according to
$$\{\phi_L,\phi_R,S_M,M_{N_L},M_{N_R}\} \buildrel {\rm v.e.v.} \over \longrightarrow \{v_L,v_R,s,v_{M_L},v_{M_R}\}. $$
\par
The motivation behind these symmetries is to generate a simple spectrum for the neutrino sector (see Section III). The $\phi_L$ field develops its v.e.v. at the SM Higgs scale, $v_L = v_{\, \text{Fermi}}$. The new $v_R$ scale can be searched
for at LHC energies in the $1-10$ TeV range. The bound from neutrinoless double beta decay will imply (see Section IV) that $v_{M_L} > 1$ TeV and $v_{M_R} > 10^5$ TeV. The $S_M$ singlet field breaks lepton number at a small scale $s \simeq 1$ eV and gives small neutrino masses.
\par
The most general scalar potential invariant under the preceding symmetries has more than twenty new parameters. We can obtain the constraint equations and stability conditions from a simpler form, still consistent with the
stated symmetries:
\begin{widetext}
\begin{eqnarray}
V &=& -\mu_1^2 S_M^2 -\mu_2^2( M_{N_L}^2 +M_{N_R}^2) -\mu_3^2 (\phi_L^2 + \phi_R^2)+\mu_{4}^2M_{N_L}M_{N_R} +
\mu_{5} ( M_{N_L} -M_{N_R})(\phi_L^2 - \phi_R^2) \nonumber\\
&+& \lambda_1S_M^4 +\lambda_2( M_{N_L}^4 +M_{N_R}^4)+ \lambda_3 M_{N_L}^2 M_{N_R}^2 + \lambda_4(\phi_L^4 + \phi_R^4) + \lambda_5\phi_L^2 \phi_R^2 .
\end{eqnarray}
\end{widetext}
The $\phi_{L,R}$ doublets give masses to the gauge bosons of $SU(2)_{L,R}$, respectively. Five neutral scalar Higgs fields remain in the model. It is straightforward, although lengthy, to find the constraint equations and the Hessian matrix that guarantee the minimum conditions; they are given explicitly in the Appendix.
The approximate eigenvalues of the squared-mass matrix are given by
\begin{eqnarray}
&& M_1^2 = 8 \lambda_1 s^2 \nonumber\\
&& M_2^2 \simeq 4\left[ \lambda_4 (v_R^2+v_L^2) - \sqrt{ \lambda_4^2 (v_R^2-v_L^2)^2 + \lambda_5 ^2 \, v_L^2 v_R^2 }\right] \nonumber\\
&& M_3^2\simeq 4\left[ \lambda_4 (v_R^2+v_L^2) + \sqrt{ \lambda_4^2 (v_R^2-v_L^2)^2 + \lambda_5^2 \, v_L^2 v_R^2 }\right] \nonumber\\
&& M_4^2\simeq \left[ \lambda_3 + 2 \lambda_2 - \vert \lambda_3-6 \lambda_2 \vert \right] v_{M_R}^2\nonumber\\
&& M_5^2\simeq \left[ \lambda_3 + 2 \lambda_2 + \vert \lambda_3-6 \lambda_2 \vert \right] v_{M_R}^2
\end{eqnarray}
The most prominent feature of these expressions is the prediction for the squared mass that corresponds to the standard model
Higgs. This result shows that the Higgs sector of mirror models differs clearly from that of the standard model.
The recent LHC searches for the standard model Higgs have detected no positive signal, and the accumulating data
constrain ever larger regions of Higgs mass values \cite{FERM}. According to the recent data from the ATLAS \cite{ATLAS} and CMS \cite{CMS}
collaborations, there remains a first open window for the Higgs mass in the $116-145$ GeV region. This limits the free parameters
of our invariant potential to the following region in parameter space:
\begin{eqnarray}
&& 0 < \lambda_1 < 1 \nonumber\\
&& 0 < \lambda_2 < 1/2 \nonumber\\
&& 0 < \lambda_3 < 1 \nonumber\\
&& 0.05 < \lambda_4 < 0.56 \nonumber\\
&& -1 < \lambda_5 < 1
\end{eqnarray}
The ATLAS and CMS collaborations also exclude at $95\%$ C.L. the existence of a Higgs boson over most of the mass region from $145$ to $466$
GeV. A second open mass window, $M_{\text{Higgs}} > 466$ GeV, would imply the same ranges for the $\lambda_i$, except that $\lambda_4 > 0.8$.
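The expressions for $M_2^2$ and $M_3^2$ above can be checked numerically. Assuming the $(\phi_L,\phi_R)$ sub-block of the Hessian takes the standard two-field quartic form (an assumption here, since the full constraint equations and Hessian are relegated to the Appendix), its exact eigenvalues reproduce the quoted closed-form expressions. A minimal sketch with illustrative parameter values:

```python
import numpy as np

# Illustrative parameter choices (not fits): lambda_4, lambda_5 inside the
# quoted window, with lambda_5 < 2*lambda_4 so the light eigenvalue is positive;
# v_L at the Fermi scale and v_R at a few TeV (all in GeV).
lam4, lam5 = 0.1, 0.1
vL, vR = 174.0, 3000.0

# Doublet sub-block of the scalar squared-mass matrix, assuming the standard
# two-field quartic form (the full Hessian is given in the Appendix).
H = np.array([[8 * lam4 * vL**2,     4 * lam5 * vL * vR],
              [4 * lam5 * vL * vR,   8 * lam4 * vR**2]])

eig = np.sort(np.linalg.eigvalsh(H))  # exact eigenvalues, ascending

# Closed-form eigenvalues quoted in the text.
root = np.sqrt(lam4**2 * (vR**2 - vL**2)**2 + lam5**2 * vL**2 * vR**2)
M2sq = 4 * (lam4 * (vR**2 + vL**2) - root)
M3sq = 4 * (lam4 * (vR**2 + vL**2) + root)

print(eig, M2sq, M3sq)
```

With these inputs the light eigenvalue is positive and of order $v_L$, i.e.\ a Fermi-scale state, as the text requires.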
\section{Neutrinos in the mirror model}
The fundamental fermion representation for the first lepton family and its transformation under the discrete parity
symmetry ($D$ parity) in the mirror model is given by
\begin{equation}
\left(
\begin{array}{c}
\nu \\
e \\
\end{array}\right)_L,\
\nu_{_R},\
e_{_R} \quad \buildrel {\rm D} \over \longleftrightarrow \quad
\left(
\begin{array}{c}
N \\
E \\
\end{array}\right)_R,\
N_{_L},\
E_{_L},
\end{equation}\
\noindent
where the doublets transform under $SU(2)_L\times SU(2)_R \times U(1)_Y$ as $(1/2,0,-1)_{L},\; (0,1/2,-1)_{R}$.
In order to discuss the mass for the neutral fermion fields we start by considering the following Majorana fields coming from the
fundamental mirror representation:
\begin{eqnarray}
\Psi_{\nu} \equiv \nu_{_L} + \nu_{_L}^c \qquad \Psi_{_N} \equiv N_{_L} + N_{_L}^c \nonumber\\
\omega_{\nu} \equiv \nu_{_R} + \nu_{_R}^c \qquad \omega_{_N} \equiv N_{_R} + N_{_R}^c.
\end{eqnarray}
Let us discuss the mass Lagrangian by showing explicitly the physical content of each term.
The mirror mass Lagrangian coupled with the Higgs doublets is given by
\begin{eqnarray}
\mathcal{L}_{M}^{(\text{mirror})} &=& \frac{1}{2} v_L [ { \bar\nu_{_L} \nu_{_R} + \bar\nu_{_R} \nu_{_L} + \bar{\nu^c_{_L}} \nu^c_{_R} +
\bar{\nu^c_{_R}} \nu^c_{_L}} ] \nonumber\\
&+& \frac{1}{2} v_R [\bar N_{_L} N_{_R} + \bar N_{_R} N_{_L} + \bar {N^c_{_L}} N^c_{_R} +
\bar{N^c_{_R}} N^c_{_L} ]. \nonumber\\
\end{eqnarray}
In this expression we have no Majorana mass terms that violate lepton number. In the Majorana field basis we have
\begin{eqnarray}
\mathcal{L}_{M}^{(\text{mirror})} &=& \frac{1}{2} v_{_L}[ \bar\Psi_{\nu} \,\omega_{\nu} + \bar\omega_{\nu} \,\Psi_{\nu}]
+
\frac{1}{2}v_R [ \bar\Psi_{_N} \,\omega_{_N} + \bar \omega_{_N} \,\Psi_{_N}].\nonumber\\
\end{eqnarray}
As required by the inverse seesaw mechanism, we must introduce a new neutral fermionic singlet (called ``$P$''). As we are considering a
parity-conserving model, both left- and right-handed components of this field must be present. We have a new mass Lagrangian term given by:
\begin{eqnarray}
\mathcal {L}_{M}^{P} &=& \frac{s}{2} [ \bar P_{_L} P^c_{_L} + \bar P^c_{_L} P_{_L} + \bar P_{_R} P^c_{_R} + \bar P^c_{_R} P_{_R} ] \nonumber\\
&+& v_{M_L} [\bar\nu_{_R}P_{_L} + \bar P_{_L} \nu_{_R} ]
- v_{ M_R} [\bar N_{_L} P_{_R} + \bar P_{_R} N_{_L}] \nonumber\\
&+& \frac{s}{2} [ \bar P_{_L} N^c_{_L} + \bar N^c_{_L} P_{_L} + \bar P^c_{_L} N_{_L} + \bar N_{_L} P^c_{_L} ] \nonumber\\
&+& \frac{s}{2} [ \bar P_{_R} \nu^c_{_R} + \bar \nu^c_{_R} P_{_R} + \bar P^c_{_R} \nu_{_R} + \bar \nu_{_R} P^c_{_R} ].
\end{eqnarray}
We now have two new Majorana fields,
\begin{eqnarray}
\epsilon \equiv P_{_L} + P^c_{_L} \qquad \sigma \equiv P_{_R} + P_{_R}^c,
\end{eqnarray}
and these terms give a new contribution to the mass Lagrangian,
\begin{eqnarray}
\mathcal {L}_{M}^{P} &=& \frac{s}{2} [ \bar\epsilon \,\, \sigma + \bar \sigma \,\, \epsilon
+ \bar \epsilon \,\, \epsilon + \bar \sigma \,\, \sigma ] \nonumber\\
&+& v_{M_L} [\bar \omega_{\nu} \,\, \epsilon + \bar \epsilon \,\, \omega_{\nu} ]
- v_{M_R} [ \bar \Psi_{_N} \,\, \sigma + \bar \sigma \,\, \Psi_{_N} ] \nonumber\\
&+& \frac{s}{2} [ \bar \epsilon \,\, \Psi_{_N} + \bar \Psi_{_N} \,\, \epsilon +
\bar \sigma \, \omega_{\nu} + \bar \omega_{\nu} \,\, \sigma ].
\end{eqnarray}
Returning now to the Majorana basis, the full mass Lagrangian can be written as
\begin{widetext}
\begin{eqnarray}
\mathcal{L}_{\text{mass}}
&=& \left( \bar{\Psi_{\nu}} \,\, \bar \omega_{\nu} \,\, \bar \epsilon \,\, \bar \omega_{_N} \,\, \bar \Psi_{_N} \,\,
\bar \sigma \right)
\left(\begin{array}{cccccc}
0 & v_{_L}/2 & 0 & 0 & 0 & 0 \\
\\
v_{_L}/2 & 0 & v_{M_L} & 0 & 0 & s/2 \\
\\
0 & v_{ M_L} & s/2 & 0 & s/2 & 0 \\
\\
0 & 0 & 0 & 0 & v_{_R}/2 & 0 \\
\\
0 & 0 & s/2 & v_{_R}/2 & 0 & -v_{M_R} \\
\\
0 & s/2 & 0 & 0 & -v_{M_R} & s/2
\end{array} \right)
\left(\begin{array}{c}
\Psi_{\nu}\\
\\
\omega_{\nu}\\
\\
\epsilon\\
\\
\omega_{_N}\\
\\
\Psi_{_N} \\
\\
\sigma
\end{array} \right).\nonumber
\end{eqnarray}
\end{widetext}
This last matrix consists of two blocks of the inverse seesaw form,
\begin{eqnarray}
\mathcal{M}_{\rm block}
&=&
\left(\begin{array}{ccc}
0 & v & 0 \\
v & 0 & M \\
0 & M & s
\end{array} \right),\nonumber
\end{eqnarray}
with $(v, M) \to (v_{_L}/2,\, v_{M_L})$ for the light block and $(v, M) \to (v_{_R}/2,\, -v_{M_R})$ for the mirror block (and $s \to s/2$).
As $s$ is responsible for the very small neutrino masses, it must have a very small value. Then the general mass
matrix, to first order, has two independent inverse seesaw blocks.
Diagonalizing the mass matrix to first order yields the mass eigenvalues and eigenvectors.
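The inverse seesaw structure can be checked directly on a single generic block. For entries $(v, M, s)$ as displayed above, the eigenvalues are approximately $\pm\sqrt{v^2+M^2}$ together with a light state $\simeq (v/R)^2\, s$, which is the pattern quoted below for $m_3$ and $m_6$. A numerical sketch (the values of $v$, $M$, $s$ are illustrative, not fits; $s$ is taken larger than its physical eV value only for numerical convenience):

```python
import numpy as np

# Generic inverse seesaw block: a small lepton-number-violating scale s
# gives one light eigenvalue suppressed by (v/R)^2.
v, M, s = 174.0, 2000.0, 1e-3   # GeV; illustrative Fermi- and TeV-scale inputs

block = np.array([[0.0, v,   0.0],
                  [v,   0.0, M  ],
                  [0.0, M,   s  ]])

eig = np.linalg.eigvalsh(block)   # sorted ascending: ~(-R, m_light, +R)
R = np.hypot(v, M)                # R = sqrt(v^2 + M^2)
m_light = (v / R)**2 * s          # expected light eigenvalue, s_L^2 * s

print(eig, R, m_light)
```

The light eigenvalue tracks $(v/R)^2 s$ to high accuracy, while the heavy pair sits at $\pm R$ up to $\mathcal{O}(s)$ shifts.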
Introducing the notation
\begin{eqnarray}
&& R_{_L} \equiv \sqrt{v_{_L}^2+v_{M_L}^2}, \quad
R_{_R} \equiv \sqrt{v_{_R}^2+v_{M_R}^2}, \nonumber\\
&&
s_{_L} \equiv \frac{v_{_L}}{R_{_L}}, \quad c_{_L} \equiv \frac{v_{M_L}}{R_{_L}}, \nonumber\\
&&
s_{_R} \equiv \frac{v_{_R}}{R_{_R}}, \quad c_{_R} \equiv \frac{v_{M_R}}{R_{_R} },
\end{eqnarray}
\begin{widetext}
\begin{eqnarray}
m_1 \simeq R_{_L} \quad \qquad m_2 \simeq - R_{_L} \quad \qquad m_3 \simeq s^2_{_L}\,\, s
\end{eqnarray}
\begin{equation}
\Psi_1= \left(
\begin{array}{c}
\frac{{\sqrt 2}}{2} {s_{_L}}\\
0 \\
\frac{\sqrt{2}}{2}\\
0 \\
-\frac{{\sqrt 2}}{2} {c_{_L}} \\
0
\end{array}\right)\nonumber\\
\qquad
\Psi_2= \left(
\begin{array}{c}
\frac{{\sqrt 2}}{2} {s_{_L}}\\
0 \\
-\frac{\sqrt{2}}{2}\\
0 \\
-\frac{{\sqrt 2}}{2} {c_{_L}} \\
0
\end{array}\right)
\qquad
\Psi_3= \left(
\begin{array}{c}
c_{_L} \\
0 \\
-s_{_L} c_{_L} \frac{s}{R_{_L}}\\
0 \\
-s_{_L} \\
0
\end{array}\right)
\end{equation}
\begin{eqnarray}
m_4 \simeq R_{_R} \qquad \qquad m_5 \simeq - R_{_R} \qquad \qquad m_6 \simeq s^2_{_R}\,\, s
\end{eqnarray}
\begin{equation}
\Psi_4= \left(
\begin{array}{c}
0 \\
\frac{{\sqrt 2}}{2} {s_{_R}}\\
0\\
\frac{\sqrt{2}}{2}\\
0 \\
-\frac{{\sqrt 2}}{2} {c_{_R}}
\end{array}\right)\nonumber\\
\qquad
\Psi_5= \left(
\begin{array}{c}
0 \\
\frac{{\sqrt 2}}{2} {s_{_R}}\\
0\\
-\frac{\sqrt{2}}{2}\\
0 \\
-\frac{{\sqrt 2}}{2} {c_{_R}}
\end{array}\right)
\qquad
\Psi_6= \left(
\begin{array}{c}
0\\
-c_{_R} \\
0 \\
-s_{_R} c_{_R} \frac{s}{R_{_R}}\\
0 \\
-s_{_R}
\end{array}\right)
\end{equation}
The diagonalization matrix can be written as:
\begin{eqnarray}
U
&=&
\left(\begin{array}{cccccc}
\frac{\sqrt{2}}{2} s_{_L} & \frac{\sqrt{2}}{2} s_{_L} & - c_{_L} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{\sqrt{2}}{2} s_{_R} & \frac{\sqrt{2}}{2} s_{_R} & -c_{_R} \\
\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} & - s_{_L}c_{_L} \frac{s}{R_{_L}} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} & s_{_R}c_{_R} \frac{s}{R_{_R}} \\
- \frac{\sqrt{2}}{2}c_{_L} & - \frac{\sqrt{2}}{2}c_{_L} & s_{_L} & 0 & 0 & 0 \\
0 & 0 & 0 & - \frac{\sqrt{2}}{2}c_{_R} & - \frac{\sqrt{2}}{2}c_{_R} & -s_{_R}
\end{array} \right).\nonumber
\end{eqnarray}
\end{widetext}
\section{The gauge interactions}
In order to identify the neutrino states we must look at the neutral current interactions.
The general gauge structure of our model was developed in Ref. \cite{DEALM}. From equation (18) of that reference, the neutral gauge bosons $Z$
and $Z^{\prime}$ interact only with $\nu_L$ and $N_R$; all other neutrino states are neutral singlets and have no gauge interactions.
Neglecting $\omega^2$ terms, in the Majorana basis the neutral current Lagrangian is given by equation (19) of Ref. \cite{DEALM},
\begin{widetext}
\begin{eqnarray}
\mathcal {L}_{NC} &=& \frac{-g_{_L}}{2 \cos\theta_{W}}[ \bar \Psi_{\nu} \frac{\gamma^{\mu}(1- \gamma^5)}{2}
\Psi_{\nu} ]Z_{\mu}
- \frac{g_{_L}}{2} \tan\theta_W \tan\beta [ \bar \Psi_{\nu} \frac{ \gamma^{\mu}(1- \gamma^5)}{2}
\Psi_{\nu} + \frac{1}{\sin^2\beta} \bar \omega_{_N} \frac{\gamma^{\mu}(1+ \gamma^5)}{2} \omega_{_N}
] Z^{\prime} _{\mu}.
\end{eqnarray}
\end{widetext}
As the $\Psi_{\nu}$ field is given by
$$ \Psi_{\nu} = \frac{\sqrt 2}{2} [ s_{_L} \Psi_1 + \Psi_3 - c_{_L} \Psi_5 ],$$
\noindent
the relevant combination for the $Z$ interaction comes from
\begin{eqnarray*}
\bar \Psi_{\nu} \Psi_{\nu} &\rightarrow & \frac{1}{2} [ s^2_{_L} \bar \Psi_1 \Psi_1 +
\bar \Psi_3 \Psi_3 + c^2_{_L} \bar \Psi_5 \Psi_5 \nonumber\\
&+& s_{_L} \bar \Psi_1 \Psi_3 - s_{_L} c_{_L} \bar \Psi_1 \Psi_5 -
c_{_L}\bar \Psi_3 \Psi_5 ].
\end{eqnarray*}
Hence, the full $Z$ coupling is carried by the light $\Psi_3$ state, which is to be identified with the SM neutrino.
There is no (light-light) $\Psi_3-\Psi_6$ mixing, and the $Z$ width is the same as in the SM. As $\Psi_5$ is the heaviest
state, we will have the leading terms:
$$\bar \Psi_{\nu} \Psi_{\nu} \simeq \frac{1}{2} [ \bar \Psi_3 \Psi_3 + s_{_L} \bar \Psi_1 \Psi_3 ].$$
The $Z^{\prime}$ interaction involves the $\omega_{_N}$ state as
$$\omega_{_N} = \frac{\sqrt 2}{2} [ \Psi_1 s_{_L} - \Psi_3 - c_{_L} \Psi_5 ],$$
\noindent and we have
\begin{eqnarray*}
\bar \omega_{_N} \omega_{_N} &\rightarrow & \frac{1}{2} [ s^2_{_L} \bar \Psi_1 \Psi_1 + \bar \Psi_3 \Psi_3 + c^2_{_L}\bar \Psi_5 \Psi_5 \nonumber\\
&-& s_{_L} \bar \Psi_1 \Psi_3 - s_{_L}c_{_L}\bar \Psi_1 \Psi_5 + c_{_L} \bar \Psi_3 \Psi_5 ].
\end{eqnarray*}
The leading terms are
$$\bar \omega_{_N} \omega_{_N} \simeq \frac{1}{2} [ \bar \Psi_3 \Psi_3 -s_{_L} \bar \Psi_1 \Psi_3 ].$$
Thus the new $Z^{\prime}$ will decay into the light state, $Z^{\prime} \longrightarrow \bar \nu_3 \nu_3$, but with a
coupling much larger than in the SM case. The $Z^{\prime} \bar \nu_3 \nu_3$ vertex is given by
$$Z^{\prime}\bar \nu_3 \nu_3 \simeq \frac{g_L}{2} \tan\theta_W \tan\beta (1 + \frac{1}{\sin^2 \beta}).$$
\noindent
We also have the interaction vertex
$$Z^{\prime} \bar \nu_1 \nu_3 \simeq \frac{g_L}{2} \tan\theta_W \tan\beta (1 - \frac{1}{\sin^2 \beta}), $$
and this term can also be quite large.
The charged current interaction is given by
\begin{eqnarray}
{\cal L}_{e\, N\, W} &=& \frac{g}{2 \sqrt 2} \bar e_{_L} \gamma^{\mu} \nu_{_L} W_{\mu} \nonumber\\
&=& \frac{g}{2 \sqrt 2} \bar e \gamma^{\mu} [s_{_L} \Psi_1 + \Psi_3 + c_{_L} \Psi_5 ] W_{\mu},
\end{eqnarray}
where $\Psi_3$ is the SM neutrino state.
From neutrinoless double $\beta$ decay ($0\nu\beta\beta$) we have the experimental bound \cite{RODE}
$$\frac{\sin^2\theta_{e\, N\, W}}{M_{_N}} < 5 \times 10^{-8} \, {\rm GeV}^{-1}.$$
\par
For the first heavy neutrino ($\Psi_1$) we obtain the bound
$$\frac{v_L^2}{v^2_L+v_{M_L}^2} \frac{1}{\sqrt{v^2_L +v_{ M_L}^2}}\simeq \frac{v^2_L}{v_{M_L}^3} < 5 \times 10^{-8}\, {\rm GeV}^{-1},$$
\noindent
which implies $v_{M_L} > 1 - 10$ TeV. This uncertainty comes from the absorption of coupling constants into the definition of
our $v_i$: letting the corresponding couplings vary in the range $g_i \simeq 0.1 - 1$ yields the preceding result.
For the second heavy neutrino ($\Psi_5$) we have
$$\frac{v_{M_L}^2}{v^2_L+v_{M_L}^2} \frac{1}{\sqrt{v^2_R + v^2_{M_{R}}}}\simeq \frac{1}{v_{M_{R}}} < 5 \times 10^{-8} \, {\rm GeV}^{-1},$$
so that $ v_{M_R} \gtrsim 10^5 $ TeV.
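These two bounds follow from simple arithmetic; a sketch (taking $v_L = 174$ GeV as an illustrative Fermi-scale value, before the $g_i \simeq 0.1 - 1$ coupling factors are varied):

```python
# Arithmetic behind the neutrinoless double beta decay bounds quoted above.
# Assumption: v_L at the Fermi scale; absorbed couplings are O(0.1 - 1).
bound = 5e-8          # GeV^-1, experimental bound on sin^2(theta_eNW) / M_N
vL = 174.0            # GeV, illustrative choice

# First heavy neutrino:  v_L^2 / v_ML^3 < bound  =>  v_ML > (v_L^2 / bound)^(1/3)
v_ML_min = (vL**2 / bound) ** (1.0 / 3.0)

# Second heavy neutrino:  1 / v_MR < bound  =>  v_MR > 1 / bound
v_MR_min = 1.0 / bound

print(v_ML_min / 1e3, "TeV,", v_MR_min / 1e3, "TeV")
```

Saturating the bound gives $v_{M_L} \approx 8.5$ TeV and $v_{M_R} \gtrsim 2\times 10^4$ TeV; allowing the absorbed couplings to vary by an order of magnitude spreads these to the $1-10$ TeV and $\sim 10^5$ TeV figures quoted in the text.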
It is a remarkable result that from neutrino bounds we have recovered the Peccei-Quinn scale related to the strong CP problem \cite{CAR}.
With the identification $\Psi_3 \longrightarrow \Psi_{\nu_e}$ and $\Psi_1 \longrightarrow \Psi_N$, the leading new $Z^\prime$ interaction with neutrinos is
\begin{eqnarray}
\mathcal {L}_{NC} &=& -\frac{g_L \tan\theta_{W} \tan\beta}{4} \{ \bar\Psi_{\nu_e} \gamma^{\mu} [g_A -g_V \gamma^5] \,
\Psi_{\nu_e} \nonumber\\
&+& s_L \bar\Psi_{\nu_e} \gamma^{\mu} [g_A -g_V \gamma^5] \Psi_N + h.c.\} \,Z^{\prime}_{\mu}
\end{eqnarray}
\noindent
where
$$g_V = 1 - \frac{1}{\sin^2\beta} \qquad {\rm and} \qquad g_A = 1 + \frac{1}{\sin^2\beta}.$$
\smallskip
From the preceding relations, the $Z^{\prime} {\bar N} N $ vertex is suppressed by a factor of $s_L^2$.
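For orientation, the couplings $g_V$ and $g_A$ can be evaluated for a sample mixing angle; the value of $\sin^2\beta$ below is purely illustrative (in the model it is fixed by the gauge structure of Ref. \cite{DEALM}):

```python
def gv_ga(sin2_beta):
    """Couplings from the text: g_V = 1 - 1/sin^2(beta), g_A = 1 + 1/sin^2(beta)."""
    return 1.0 - 1.0 / sin2_beta, 1.0 + 1.0 / sin2_beta

# Illustrative (assumed) value sin^2(beta) = 0.5 gives O(1) couplings:
gv, ga = gv_ga(0.5)
print(gv, ga)  # -1.0 3.0
```

The $1/\sin^2\beta$ term dominates for small mixing, which is what makes the invisible $Z^{\prime}$ couplings large compared with the SM case.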
\section {Results}
In this section we present the main phenomenological consequences of our model for the LHC. Although many extended models
predict a new $Z^{\prime}$, it is a very distinctive property of mirror models that the invisible $Z^{\prime}$ branching ratio
is very large. In Table I we show the branching ratios for $M_{Z^{\prime}} = 1.5$ TeV with $\Gamma_{Z^\prime} \simeq 25$ GeV,
considering $v_{M_L} = 1 - 10$ TeV. The heavy neutrino channels depend strongly on the choice of $v_{M_L}$.
The clearest signal for a new $Z^{\prime}$ will be the leptonic channel $p + p \rightarrow l^{+} + l^{-}+ X$. The recent LHC searches for
this process have detected no evidence of a new $Z^{\prime}$ boson. For instance, the ATLAS Collaboration \cite{ATL}, with a luminosity
of around $1$ fb$^{-1}$, sets a lower bound $M_{Z^{\prime}} > 1.83$ TeV on the mass of a new sequential heavy $Z^{\prime}$. Using the package
CompHEP \cite{COMP} with CTEQL1 parton distribution functions, we have estimated the corresponding bound on the mass of the
mirror $Z^{\prime}$ boson. Applying a set of cuts on the final leptons, namely $\vert\eta\vert \leq 2.5$,
$ M_{Z^{\prime}} - 5 \, \Gamma_{Z^{\prime}} < M_{l^- l^+} < M_{Z^{\prime}} + 5 \, \Gamma_{Z^{\prime}}$, and a transverse energy cut
$E_T > 25$ GeV, we display in Figure 1a the total cross section and number of events
for $\sqrt s = 7$ TeV and an integrated luminosity of $10$ fb$^{-1}$. The negative
result of the ATLAS search therefore leads to a bound $M_{Z^{\prime}} > 1.5$ TeV on the $Z^{\prime}$ mass in our model. The forthcoming
luminosity of $10$ fb$^{-1}$ will allow the search for this $Z^{\prime}$ to be extended up to $M_{Z^{\prime}} = 2.0$ TeV.
In Figure 1b we show our results for a center-of-mass energy of $14$ TeV and an integrated luminosity of $100$ fb$^{-1}$; in
this case the search can be extended up to $M_{Z^{\prime}}$ around $4.0$ TeV.
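The selection described above can be summarized as a simple acceptance function; this is only a sketch of the cuts quoted in the text, not of the full CompHEP analysis:

```python
# Selection sketch for the dilepton search, using the cuts quoted in the text:
# |eta| <= 2.5, E_T > 25 GeV, and the pair mass within 5 widths of M_Z'.
def in_search_window(m_ll, eta1, eta2, et1, et2,
                     m_zprime=1500.0, gamma_zprime=25.0):  # GeV
    """Return True if a lepton pair passes the quoted cuts."""
    if abs(eta1) > 2.5 or abs(eta2) > 2.5:
        return False
    if et1 <= 25.0 or et2 <= 25.0:
        return False
    lo = m_zprime - 5.0 * gamma_zprime
    hi = m_zprime + 5.0 * gamma_zprime
    return lo < m_ll < hi

# For M_Z' = 1.5 TeV and Gamma = 25 GeV the window is 1375 - 1625 GeV.
print(in_search_window(1500.0, 0.5, -1.0, 200.0, 180.0))
```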
\begin{figure}
\begin{center}
\includegraphics[width=.45\textwidth]{Figure1.eps}
\includegraphics[width=.45\textwidth]{Figure2.eps}
\caption{Total cross section and number of events versus $M_{Z^{\prime}}$ in the process $p + p \rightarrow l^{+}+ l^{-}+ X$
at $\sqrt s= 7$ TeV for ${\cal L}= 10$ fb$^{-1}$ (top) and at $\sqrt s= 14$ TeV for ${\cal L}= 100$ fb$^{-1}$ (bottom).}
\end{center}
\end{figure}
Another important test of the model is the prediction of heavy Majorana neutrinos with masses up to the TeV region \cite{CHART}. From Table 1 we
see that the dominant heavy Majorana neutrino production channel in our model is $Z^{\prime} \rightarrow N +\bar \nu$. This result is to be
compared with a very similar model \cite{DEALM}, where the neutrino masses come from a double seesaw mechanism. In that case the dominant heavy Majorana production proceeds through heavy pair production and consequent same-sign dilepton production. In Figure 2 we show the total cross section for the process $p + p \rightarrow N +\bar \nu + X$ for the planned LHC energies and luminosities. The final state will be seen as
$p + p \rightarrow {\rm invisible} + l^{\pm}+ W^{\mp}+ X$, with the $l^{\pm} W^{\mp}$ invariant mass peaked at the
heavy neutrino mass.
For heavy Majorana neutrinos with masses near $100$ GeV the dominant production
mechanism is through the SM $W$ and $Z$ interactions. However, this mechanism is kinematically restricted to masses below
$200$ GeV \cite{ALM}. For higher masses the dominant mechanism is via $Z^{\prime}$ exchange. From Figure 2 we can estimate
the heavy neutrino mass dependence at $\sqrt s = 7$ TeV for production via a $Z^{\prime}$ with mass equal to
$2.0$ TeV. The $\sqrt s = 14$ TeV scenario allows us to estimate the $M_N$ behavior from $500$ GeV to
$2$ TeV, with $Z^{\prime}$ masses varying from $1.5$ TeV to $3.0$ TeV.
\par
\begin{figure}
\begin{center}
\includegraphics[width=.45\textwidth]{Figure3.eps}
\includegraphics[width=.45\textwidth]{Figure4.eps}
\caption{Total cross section and number of events versus $M_N$ in the process $p + p \rightarrow N + \bar \nu_i+ X$
at $\sqrt s= 7$ TeV for ${\cal L}= 10$ fb$^{-1}$ (top) and at $\sqrt s= 14$ TeV for ${\cal L}= 100$ fb$^{-1}$ (bottom).}
\end{center}
\end{figure}
\section{Conclusions}
In this paper we have presented a mirror model that restores parity at high energies. Neutrino masses are generated by
the inverse seesaw mechanism. Besides new mirror fermions, the model also predicts new gauge vector bosons. Our choice of a scalar sector with
Higgs doublets and singlets and no Higgs bidoublets means that the new charged vector bosons do not couple to
ordinary matter at tree level. This is a significant difference between the present model and other left-right models
with new $\nu_R$ neutrinos in new $SU(2)_R$ doublets. But mixing in the neutral vector boson sector is present,
and the first important phenomenological consequence of the model is a new neutral current. As the new v.e.v. $v_R$ is not known,
we cannot determine the new $Z^{\prime}$ mass exactly, but the LHC can test the hypothesis that $v_R$ is of the order of a few TeV.
The mixing of the new $Z^{\prime}$ with the other neutral gauge bosons can be calculated \cite{PON}, and we can determine both the main
decay channels and the production rates for this new $Z^{\prime}$.
Heavy Majorana neutrino production can be used as a test of the basic neutrino mass generation mechanism:
in the double seesaw mechanism the dominant channel is $Z^{\prime} \rightarrow N + N$, with consequent same-sign
dilepton production, whereas for the inverse seesaw mechanism the dominant channel is $Z^{\prime} \rightarrow N +\bar \nu$.
The other important prediction of our mirror model comes from the fact that we have fixed the symmetry breaking scales from the neutrino
sector alone; the Higgs spectrum can therefore be confronted with the recent LHC bounds. The two main open windows for the SM Higgs mass, in the
$116-145$ GeV region and above $466$ GeV, can be accommodated by natural choices of the scalar potential couplings in the range $(-1,1)$.
\begin{widetext}
\section{#1}\setcounter{equation}{0}}
\def\ft#1#2{{\textstyle{{\scriptstyle #1}\over {\scriptstyle #2}}}}
\def\fft#1#2{{#1 \over #2}}
\def\mathcal{A}{\mathcal{A}}
\def\mathcal{B}{\mathcal{B}}
\def\mathcal{C}{\mathcal{C}}
\def\mathcal{D}{\mathcal{D}}
\def\mathcal{E}{\mathcal{E}}
\def\mathcal{F}{\mathcal{F}}
\def\mathcal{G}{\mathcal{G}}
\def\mathcal{H}{\mathcal{H}}
\def\mathcal{I}{\mathcal{I}}
\def\mathcal{J}{\mathcal{J}}
\def\mathcal{K}{\mathcal{K}}
\def\mathcal{L}{\mathcal{L}}
\def\mathcal{M}{\mathcal{M}}
\def\mathcal{N}{\mathcal{N}}
\def\mathcal{O}{\mathcal{O}}
\def\mathcal{P}{\mathcal{P}}
\def\mathcal{Q}{\mathcal{Q}}
\def\mathcal{R}{\mathcal{R}}
\def\mathcal{S}{\mathcal{S}}
\def\mathcal{T}{\mathcal{T}}
\def\mathcal{U}{\mathcal{U}}
\def\mathcal{V}{\mathcal{V}}
\def\mathcal{W}{\mathcal{W}}
\def\mathcal{X}{\mathcal{X}}
\def\mathcal{Y}{\mathcal{Y}}
\def\mathcal{Z}{\mathcal{Z}}
\def\Q#1#2{\frac{\partial #1}{\partial #2}}
\def\delta{\partial}
\def\QS#1#2{\frac{\partial^S #1}{\partial #2}}
\def\varQ#1#2{\frac{\delta #1}{\delta #2}}
\def\varQL#1#2{\frac{\delta^L #1}{\delta #2}}
\def\norm#1{\|#1\|}
\def\epsilon{\epsilon}
\def\varepsilon{\varepsilon}
\def\sqrt{-g}{\sqrt{-g}}
\def\hspace{0.06em}\text{d}^n \hspace{-0.06em} x{\hspace{0.06em}\text{d}^n \hspace{-0.06em} x}
\def\text{d}_V{\text{d}_V}
\def\text{d}\hspace{-0.06em}x{\text{d}\hspace{-0.06em}x}
\def\text{d}_V\hspace{-0.06em}\phi{\text{d}_V\hspace{-0.06em}\phi}
\def\text{d}_V\hspace{-0.06em}\xi{\text{d}_V\hspace{-0.06em}\xi}
\def\frac{1}{2}{\frac{1}{2}}
\def\nonumber{\nonumber}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\hbox{ 1\hskip -3pt l}{\hbox{ 1\hskip -3pt l}}
\def {\mathbb{R}}{{\mathbb{R}}}
\def {\mathbb{C}}{{\mathbb{C}}}
\def {\mathbb{N}}{{\mathbb{N}}}
\def\sst#1{{\scriptscriptstyle #1}}
\def{\it i.e.~}{{\it i.e.~}}
\def\mathfrak{\mathfrak}
\def\alpha{\alpha}
\def\beta{\beta}
\def\gamma{\gamma}
\def\delta{\delta}
\def\epsilon{\epsilon}
\def\eta{\eta}
\def\kappa{\kappa}
\def\lambda{\lambda}
\def\Lambda{\Lambda}
\def\Gamma{\Gamma}
\def\Delta{\Delta}
\def\mu{\mu}
\def\nu{\nu}
\def\rho{\rho}
\def\omega{\omega}
\def\Omega{\Omega}
\def\sigma{\sigma}
\def\Sigma{\Sigma}
\def\tau{\tau}
\def\theta{\theta}
\def\omega{\omega}
\def\pi{\pi}
\def\varepsilon{\varepsilon}
\def\xi{\xi}
\def\zeta{\zeta}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\newcommand{\rom}[1]{\mathrm{#1}}
\def{\cal A}{{\cal A}}
\def{\cal G}{{\cal G}}
\def\mathfrak{so}{\mathfrak{so}}
\def\mathfrak{su}{\mathfrak{su}}
\def\mathfrak{sp}{\mathfrak{sp}}
\def\mathfrak{sl}{\mathfrak{sl}}
\def\mathfrak{gl}{\mathfrak{gl}}
\def\mathfrak{sq}{\mathfrak{sq}}
\def\mathcal{A}{\mathcal{A}}
\def\mathcal{B}{\mathcal{B}}
\def\mathcal{C}{\mathcal{C}}
\def\mathcal{D}{\mathcal{D}}
\def\mathcal{E}{\mathcal{E}}
\def\mathcal{F}{\mathcal{F}}
\def\mathcal{G}{\mathcal{G}}
\def\mathcal{H}{\mathcal{H}}
\def\mathcal{I}{\mathcal{I}}
\def\mathcal{J}{\mathcal{J}}
\def\mathcal{K}{\mathcal{K}}
\def\mathcal{L}{\mathcal{L}}
\def\mathcal{M}{\mathcal{M}}
\def\mathcal{N}{\mathcal{N}}
\def\mathcal{O}{\mathcal{O}}
\def\mathcal{P}{\mathcal{P}}
\def\mathcal{Q}{\mathcal{Q}}
\def\mathcal{R}{\mathcal{R}}
\def\mathcal{S}{\mathcal{S}}
\def\mathcal{T}{\mathcal{T}}
\def\mathcal{U}{\mathcal{U}}
\def\mathcal{V}{\mathcal{V}}
\def\mathcal{W}{\mathcal{W}}
\def\mathcal{X}{\mathcal{X}}
\def\mathcal{Y}{\mathcal{Y}}
\def\mathcal{Z}{\mathcal{Z}}
\def\frac{\frac}
\def\epsilon{\epsilon}
\def\mathfrak{\mathfrak}
\defB_2{B_2}
\def\omega_3{\omega_3}
\def\chi_1{\chi_1}
\defH_2{H_2}
\def\cF_{(2)}^2{\mathcal{F}_{(2)}^2}
\def\cF_{(1)}{\mathcal{F}_{(1)}}
\def\chi_2{\chi_2}
\def\chi_3{\chi_3}
\defF_{(1)}^1{F_{(1)}^1}
\defF_{(1)}^2{F_{(1)}^2}
\defB_1{B_1}
\defF_{(2)}{F_{(2)}}
\def\nonumber{\nonumber}
\def\frac{1}{2}{\frac{1}{2}}
\def\nonumber{\nonumber}
\def\chi_{s\ell(3)}{\chi_{s\ell(3)}}
\def\Psi_\rom{Schw}{\Psi_\rom{Schw}}
\def\mathrm{curl\,}{\mathrm{curl\,}}
\newcommand{\gsim}{\mathrel{\raisebox{-.6ex}{$\stackrel{\textstyle>}{\sim}$}}}
\newcommand{\lsim}{\mathrel{\raisebox{-.6ex}{$\stackrel{\textstyle<}{\sim}$}}}
\newcommand*\widefbox[1]{\fbox{\hspace{1em}#1\hspace{1em}}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\begin{document}
\pagestyle{myheadings}
\markboth{\textsc{\small }}{%
\textsc{\small Supertranslations and Holographic Stress Tensor}}
\addtolength{\headsep}{4pt}
\begin{flushright}
\texttt{AEI-2011-100}\\
\end{flushright}
\vspace{1cm}
\begin{centering}
\vspace{0cm}
\textbf{\Large{Supertranslations and Holographic Stress Tensor}}
\vspace{0.8cm}
{\large Amitabh Virmani}
\vspace{0.5cm}
\begin{minipage}{.9\textwidth}\small \begin{center}
Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Institut) \\
Am M\"uhlenberg 1,
D-14476 Golm, Germany \\
{\tt virmani@aei.mpg.de}
\\ $ \, $ \\
\end{center}
\end{minipage}
\end{centering}
\vspace{1cm}
\begin{abstract}
It is well known in the context of four dimensional asymptotically flat
spacetimes that the leading order boundary metric must be conformal to the unit de
Sitter metric when hyperbolic cutoffs are used. This
situation is very different from asymptotically AdS settings, where one is
allowed to choose an arbitrary boundary metric. The closest one can come to changing the boundary metric in the asymptotically
flat context,
while maintaining the group of asymptotic symmetries to be Poincar\'e,
is to change the so-called `supertranslation frame' $\omega$.
The most studied choice corresponds to taking $\omega = 0$.
In this paper we study consequences of making alternative choices. We perform
this analysis in the covariant phase space approach as well as in the
holographic renormalization approach. We show that all choices for $\omega$ are
allowed in the sense that the covariant phase space is well defined
irrespective of how we choose to fix supertranslations. The on-shell action and
the leading order boundary stress tensor are insensitive to the
supertranslation frame. The next to leading order boundary stress tensor
depends on the supertranslation frame but only in a way that the transformation
of angular momentum under translations continues to hold as in special
relativity.
\end{abstract}
\vfill
\thispagestyle{empty} \newpage
\tableofcontents
\newpage
\setcounter{equation}{0}
\section{Introduction}
The development of gauge/gravity dualities has revolutionized string theoretic
investigations of quantum gravity. These dualities relate string theories in
higher dimensions to certain quantum theories in lower dimensions on fixed
metric backgrounds. Consequently, they provide us with a framework where one
can address deep puzzles of quantum gravity concerning black holes, singularities and the
like, by performing calculations in lower dimensional non-gravitating settings.
The best understood case occurs in AdS/CFT \cite{M}, where string theory on AdS
space is dual to a certain gauge theory on the boundary of AdS space.
Similar dualities are not known for spacetimes of the most physical interest,
such as cosmologies or flat Minkowski space. It is clearly of interest to
investigate if holographic dualities exist in these settings.
A well-appreciated point is that although the
best understood cases of these dualities involve AdS spaces and local CFTs, neither AdS nor CFTs are
fundamental to these dualities. It is often speculated that every possible
boundary condition that defines a bulk string theory is dual to a
non-gravitational theory through the AdS/CFT correspondence. In the
case of AdS spaces, the correspondence arises from the way in which the AdS boundary parametrizes the space of
possible boundary conditions on bulk fields \cite{Wi, GKP}. In fact, the
richness of AdS/CFT comes precisely from the fact that in asymptotically AdS settings one
is allowed to choose a variety of boundary conditions. In particular, one is
allowed to choose an arbitrary boundary metric \cite{Sk1, BK1, BK2, Sk2, Sk3}.
The corresponding situation in the asymptotically flat setting is very
different. This is one of the reasons why the subject of holographic duality
for flat spacetime has resisted development over all these years. For
example, for four dimensional asymptotically flat spacetimes the leading
order boundary metric must be conformal to the unit de Sitter metric when hyperbolic cutoffs are used to define asymptotically flat
spacetimes \cite{BS}. The structure of the asymptotic equations is such
that there is absolutely no freedom in the choice of the leading order boundary metric.
One is led to ask how much freedom one has at the next to leading order. Even
there the freedom is quite limited. As we discuss in detail in this paper, the
closest one can come to changing the boundary metric in the asymptotically flat context, while
maintaining asymptotic symmetries to be Poincar\'e,
is to change the so-called `supertranslation frame' $\omega$. The information about
the supertranslation frame enters the asymptotic expansion in a very specific
way. In this paper we study what it precisely means to change the
supertranslation frame, and the consequences this brings to the construction of
the boundary stress tensor.
We perform this analysis in the covariant phase space approach of \cite{ABR,
LW, W, IW, WZ} as well as in the holographic renormalization approach of Mann
and Marolf \cite{MM, MMV, MMMV}. The key results of the present paper are as
follows. First, we show that all choices for the supertranslation frame are allowed; more precisely, the covariant phase
space is well defined irrespective of how we choose to fix supertranslations.
Second, we show that the on-shell action and the leading order boundary stress
tensor are insensitive to the supertranslation frame. Third, we show that the
next to leading order boundary stress tensor depends on the supertranslation frame, but only in such a way
that the transformation of angular momentum under translations continues to hold as
in special relativity.
We will elaborate on these points momentarily; for now, let us step back a bit
and recall certain basic facts about supertranslations. It turns out that the
issue of supertranslations is closely related to the general notion of angular momentum. Recall
that in special relativity the notion of angular momentum
is origin dependent. This origin dependence arises because there is a
four-parameter family of Lorentz subgroups in the Poincar\'e group. None of
these subgroups is preferred over any other, so the origin dependence is
inevitable. The structure of the Lie algebra of the Poincar\'e group then tells
us the transformation property of the angular momentum under the action of
translations. The resulting notion matches with our intuitive understanding of
angular momentum.
For asymptotically flat spacetimes at spatial infinity, the asymptotic
symmetries form an infinite dimensional group---the so-called spatial
infinity (SPI) group \cite{Ashtekar:1978zz, Ashtekar:1991vb}. The SPI group is
similar to the Poincar\'e group, except that the four translations are replaced
by an infinite number of angle dependent translations---the so called
supertranslations. The group structure of the SPI group is that of a
semi-direct product of the supertranslation group with the Lorentz group. The
supertranslation group is the infinite-dimensional additive group of smooth
functions on the unit hyperboloid. The semi-direct product structure is as
follows: if $(\alpha, \xi^a)$ and $(\beta, \eta^a)$ are two elements of the Lie algebra of
the SPI group, with $\alpha$ and $\beta$ arbitrary smooth functions on the
hyperboloid and $\xi^a$ and $\eta^a$ exact Killing vectors of the unit
hyperboloid, then the SPI Lie bracket is \cite{Ashtekar:1978zz, Ashtekar:1991vb}
\begin{equation}
[(\alpha, \xi^a),(\beta, \eta^a)] = (\pounds_\xi \beta - \pounds_\eta \alpha,
[\xi, \eta]^a).
\end{equation}
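The bracket above is the standard semi-direct product bracket of vector fields acting on functions, and it satisfies the Jacobi identity for arbitrary smooth functions and vector fields; the exact Killing vectors of the hyperboloid form a closed subalgebra. As an independent illustration (not part of the analysis in the text), the following sympy sketch verifies the Jacobi identity for a few sample elements; the specific functions and vector components are arbitrary choices made only for the check.

```python
import sympy as sp

# Toy realization of the SPI Lie algebra: an element is a pair (alpha, xi)
# of a smooth function alpha and a vector field xi on a 3-manifold.
t, th, ph = sp.symbols('tau theta phi')
X = [t, th, ph]

def vec_bracket(xi, eta):
    # Lie bracket of vector fields: [xi,eta]^a = xi^b d_b eta^a - eta^b d_b xi^a
    return [sum(xi[b]*sp.diff(eta[a], X[b]) - eta[b]*sp.diff(xi[a], X[b])
                for b in range(3)) for a in range(3)]

def spi_bracket(A, B):
    # [(alpha,xi),(beta,eta)] = (Lie_xi beta - Lie_eta alpha, [xi,eta])
    (alpha, xi), (beta, eta) = A, B
    scalar = sum(xi[a]*sp.diff(beta, X[a]) - eta[a]*sp.diff(alpha, X[a])
                 for a in range(3))
    return (sp.expand(scalar), vec_bracket(xi, eta))

def add(P, Q):
    return (P[0] + Q[0], [P[1][a] + Q[1][a] for a in range(3)])

# Sample elements; the functions and vector components are illustrative only.
A = (sp.sinh(t)*sp.cos(th), [sp.sin(ph), sp.cos(th), t])
B = (sp.cos(ph),            [th, sp.sinh(t), sp.sin(th)])
C = (t*th,                  [sp.Integer(1), ph, sp.cos(t)])

# Jacobi identity: [[A,B],C] + [[B,C],A] + [[C,A],B] = 0
jac = add(add(spi_bracket(spi_bracket(A, B), C),
              spi_bracket(spi_bracket(B, C), A)),
          spi_bracket(spi_bracket(C, A), B))
print(sp.simplify(jac[0]), [sp.simplify(v) for v in jac[1]])  # 0 [0, 0, 0]
```
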
It follows that the SPI group admits an infinite class of Lorentz
subgroups. None of these subgroups is preferred over the
others.
Therefore, a naive approach to defining angular momentum suffers from the so-called supertranslation
ambiguities: origin dependence of angular momentum where the origin lies in an
infinite dimensional space. We clearly need an additional structure at spatial
infinity that can reduce the SPI group to the Poincar\'e group. These points
were very well emphasized in \cite{Ashtekar:1978zz,AM2}. The main
purpose of this paper is to systematically study the freedom we have in the Poincar\'e reduction of the SPI
group.
The plan of the rest of the paper is as follows. In section
\ref{sec:review} we begin with various definitions and provide a brief review of
the counterterm construction of \cite{MM}. In this section we also review the
boundary conditions of Ashtekar, Bombelli, and Reula (ABR) \cite{ABR}, and
present our supertranslated generalization of the ABR boundary conditions. The
main point of this section is to show that the covariant phase space is well
defined for our supertranslated boundary conditions.
In section \ref{sec:expansion} we perform systematic expansion of the equations
of motion and discuss Beig's integrability conditions \cite{B, CDV}. We find
that despite the fact that we have an arbitrary function $\omega$ in
our asymptotic expansion, the integrability conditions do not change. As
in \cite{B, CDV}, the integrability conditions require Lorentz charges
constructed using
\begin{equation}
\mathrm{curl\,} [ 4
\epsilon_{cd(a} \sigma^c \sigma^{d}_{b)}]
\end{equation}
to be zero. In this paper we choose, following ABR \cite{ABR}, the mass
aspect $\sigma$ to be a symmetric function on the hyperboloid. As a result, the
integrability conditions are automatically satisfied \cite{MMMV, CDV}. In this
section we also perform a systematic expansion of the
Mann-Marolf counterterm.
Section \ref{sec:BST} is devoted to the study of the renormalized on-shell
action and the expansion of the boundary stress tensor. We show that the on-shell action
and the leading order boundary stress tensor are insensitive to the
supertranslation frame. In section \ref{sec:BSTprop} properties of the boundary
stress tensor are studied in further detail. Our boundary
stress tensor satisfies all the expected properties: (i) it is conserved \`a la
Brown-York \cite{BY}, (ii) it reduces to the previous expression of \cite{MMMV}
when either (a) $\omega =0$ or (b) when $\omega$ represents a
translation---i.e.,
when it is not a non-trivial supertranslation, and (iii) the next to leading
order boundary stress tensor transforms under translations in an expected way.
Finally, in section \ref{sec:conclusions} we end with our conclusions and
possible future directions. Certain technical and computational details are
relegated to two appendices. In appendix \ref{app:EOM} asymptotic expansion of
the equations of motion is presented. In appendix \ref{app:BST} certain details
on the asymptotic expansion of the boundary stress tensor are presented.
As a last comment in this section, we wish to emphasize an
important point. In the study of asymptotic structure of spacetimes,
the notions one introduces and the boundary conditions one chooses are to some
extent arbitrary; their justification lies in the perspective they bring.
There are already a variety of methods known to analyze asymptotic flatness and
construct conserved quantities \cite{WZ, Ashtekar:1978zz, ADM1, RT, Geroch,
AshRev, AD, BB, Sorkin}. These different approaches offer different
perspectives.
All these methods ultimately lead to similar or equivalent results.
For example,
the framework presented in \cite{Ashtekar:1978zz} defines unambiguously a
useful notion of angular momentum at spatial infinity, and allows us to relate
such a construction to the analogous conserved quantities at null infinity.
The boundary conditions of \cite{Ashtekar:1978zz} are further strengthened in
\cite{ABR} by demanding mass aspect $\sigma$ to be symmetric. These strengthened
boundary conditions offer a new perspective:
they lead to a well defined covariant phase space. The study presented below should
be taken in this spirit. Our motivation is a combination of the following ideas: (i) we wish to have an unambiguous and useful notion
of angular momentum, (ii) we wish to have a well defined phase space precisely
in the sense of \cite{ABR}, and finally (iii) drawing motivation from AdS/CFT
we wish to explore systematically the freedom we have in choosing the boundary
conditions while maintaining (i) and (ii). The perspective our study brings is
that it illustrates the fact that there is not a unique boundary condition, but
rather a class of boundary conditions that all lead to a well defined notion of
asymptotic flatness. Various generalizations and variations on the study
presented below are possible. This study is a continuation of \cite{MM, MMV,
MMMV, CDV, V} and is largely motivated by comments in
\cite{MM, Ma, MV}. For an alternative
point of view on some of these ideas see \cite{CD}. Other studies of
supertranslations include a series of papers at null infinity by Barnich et
al.\ \cite{Barnich:2010eb}.
\setcounter{equation}{0}
\section{Asymptotic Flatness, Actions, Supertranslations}
\label{sec:review}
In this section we first provide relevant definitions and a brief review of previous work. We then spell out our boundary conditions.
\subsection{Asymptotic Flatness and the Mann-Marolf Action}
As in previous work \cite{MM, MMV, MMMV, CDV, V}, we introduce our notion of
asymptotic flatness based on the work of Beig and Schmidt \cite{BS, B}. The
key advantage of using Beig-Schmidt expansion is that all results can be readily
translated to the geometrical language of Ashtekar and Hansen
\cite{Ashtekar:1978zz, AshRev} or that of Ashtekar and Romano \cite{Ashtekar:1991vb}.
Beig-Schmidt expansion for asymptotically flat spacetimes near spatial infinity
takes the form
\begin{equation}
\label{metric1}
ds^2 = \left( 1 + \frac{\sigma}{\rho} \right)^2 d \rho^2 + \rho^2\left( h^{0}_{ab} +
\frac{h^{1}_{ab}}{\rho} + \frac{h^{2}_{ab}}{\rho^2} + {\mathcal O}(\rho^{-3})
\right) dx^a dx^b,
\end{equation}
where $h^{0}_{ab}dx^a dx^b$ is the metric on the unit three-dimensional de Sitter
space $dS_3$, or equivalently on the unit three-dimensional hyperboloid
\cite{BS},
\begin{equation}
h^{0}_{ab}dx^a dx^b = -d\tau^2 + \cosh^2\tau (d\theta^2 + \sin^2\theta d\phi^2).\label{h0}
\end{equation}
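The metric \eqref{h0} is maximally symmetric, with Ricci tensor $2 h^{0}_{ab}$ and scalar curvature $6$, a standard fact that underlies the asymptotic expansions below. As an independent consistency check (not part of the derivation in the text), this can be confirmed with a short sympy computation:

```python
import sympy as sp

t, th, ph = sp.symbols('tau theta phi')
x = [t, th, ph]
# Unit hyperboloid (dS_3) metric h^0_ab of eq. (h0)
h = sp.diag(-1, sp.cosh(t)**2, sp.cosh(t)**2*sp.sin(th)**2)
hinv = h.inv()

# Christoffel symbols Gamma^a_{bc} of h^0
G = [[[sp.simplify(sum(hinv[a, d]*(sp.diff(h[d, b], x[c])
                                   + sp.diff(h[d, c], x[b])
                                   - sp.diff(h[b, c], x[d]))
                       for d in range(3))/2)
       for c in range(3)] for b in range(3)] for a in range(3)]

# Ricci tensor R_bc = d_a G^a_bc - d_c G^a_ba + G^a_ad G^d_bc - G^a_cd G^d_ba
def ricci(b, c):
    return sp.simplify(sum(sp.diff(G[a][b][c], x[a]) - sp.diff(G[a][b][a], x[c])
                           + sum(G[a][a][d]*G[d][b][c] - G[a][c][d]*G[d][b][a]
                                 for d in range(3))
                           for a in range(3)))

R = sp.Matrix(3, 3, lambda b, c: ricci(b, c))
print((R - 2*h).applyfunc(sp.simplify))  # zero matrix: R_ab = 2 h^0_ab
print(sp.simplify((hinv*R).trace()))     # scalar curvature R = 6
```
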
We use $\mathcal{D}_a$ to denote the unique torsion-free covariant derivative
compatible with the metric $h^0_{ab}$ on the unit hyperboloid. The radial
coordinate $\rho$ is associated to some asymptotically Minkowski coordinates
$x^\mu$ via $\rho^2 = \eta_{\mu \nu} x^\mu x^\nu$. The fields $\sigma$,
$h^{1}_{ab},h^{2}_{ab}$, etc.~are assumed to be smooth functions on the unit
hyperboloid. We use $h_{ab}$ to denote the complete
induced metric on a constant $\rho$ slice (for some large $\rho$) and use $D$
to denote the unique torsion-free covariant derivative compatible with $h_{ab}$.
Further boundary conditions will be specified below.
Next, we recall the action principle of \cite{MM}. There it was shown that
a good variational principle for asymptotically flat configurations defined by
the expansion \eqref{metric1} is given by the action
\begin{equation}
\label{action} S = \frac{1}{16\pi G} \int_{\cal M} d^4x \sqrt{-g}\,R +
\frac{1}{8\pi G} \int_{\partial {\cal M}} d^3x\sqrt{-h} \,(K - \hat K).
\end{equation}
The counterterm in the action \eqref{action} is $\hat K :=
h^{ab} \hat K_{ab}$, where $\hat K_{ab}$ is defined implicitly via the Gauss-Codacci-like
equation
\begin{equation}
\label{defhatK}
{\cal R}_{ab} = \hat K_{ab} \hat K - \hat K_a{}^{c} \hat K_{cb}.
\end{equation}
Here ${\cal R}_{ab}$ is the Ricci tensor of the boundary metric $h_{ab}$. For details we refer the
reader to \cite{MM, MMV, MMMV, V}.
\subsection{Supertranslations}
In the introduction we already mentioned certain basic facts about
supertranslations. The best way to further understand the precise nature of
supertranslations is to work out the set of diffeomorphisms that preserve
the form of appropriately defined asymptotically flat metrics. Since results
obtained in the Beig-Schmidt coordinates \eqref{metric1} can be readily
translated to the geometrical languages, it is most natural to work out these
diffeomorphisms in this gauge. Supertranslations in the Beig-Schmidt gauge are
interpreted as different conformal completions in the SPI framework
\cite{Ashtekar:1978zz} or as different hyperboloid completions in the Ashtekar-Romano
framework \cite{Ashtekar:1991vb}. The problem of finding these
diffeomorphisms has been analyzed by several authors \cite{BS, Ashtekar:1978zz,
Ashtekar:1991vb, B}; for reviews see \cite{CDV, AshNew}. The
upshot of this analysis is that in an asymptotically Cartesian coordinate
system with $\rho^2 = \eta_{\mu \nu} x^{\mu} x^{\nu}$, all diffeomorphisms of
the form
\begin{equation}
\bar{x}^\mu = L_{\nu}^{\mu}x^\nu + T^\mu + S^\mu(x^a) + o(\rho^0) \label{diffeos}
\end{equation}
preserve the form of the metric \eqref{metric1}. The transformations generated
by the constants $L_{\nu}^{\mu}$ and $T^\mu$ constitute the Poincar\'e group.
The transformations generated by angle dependent translations $S^\mu(x^a)$ are
the so-called supertranslations. In fact, they are all spi-supertranslations.
In the Beig-Schmidt expansion, the asymptotic spi-supertranslation Killing
vector $\xi^\mu_{\omega}$ is related to an arbitrary function $\omega$ on the
hyperboloid as
\begin{equation}
\xi^{\rho}_\omega = \omega(x) + \mathcal{O}(\rho^{-1}), \qquad
\xi^{a}_\omega = \frac{1}{\rho} \omega^a(x) + \mathcal{O}(\rho^{-2}),
\end{equation}
where $\omega^a = \mathcal{D}^a \omega$ and where $x$ denotes collectively the coordinates
on the hyperboloid. As emphasized in the introduction, we need an additional structure at
spatial infinity that can reduce the SPI group to the Poincar\'e group, as
otherwise the notion of angular momentum is not the familiar one. Furthermore, since
supertranslations depend arbitrarily on the hyperboloid coordinates, in particular
on the time coordinate $\tau$, even if one attempts to define conserved
charges for them, the associated charges will in general not be conserved. Not
surprisingly, a large body of work on asymptotic flatness at spatial infinity
has taken the point of view of strengthening the boundary conditions. With the
strengthened boundary conditions the freedom of performing
supertranslations is eliminated. This is achieved in \cite{ABR,
Ashtekar:1978zz, Ashtekar:1991vb, B, AshNew} by demanding the leading order
asymptotic Weyl curvature to be purely electric. We will continue to demand this condition.
Even so, some freedom is still left, and it is this freedom that we wish to draw the
reader's attention to.
Let us look at the next to leading order asymptotic equations
of motion. It turns out to be convenient to work with the variable
\begin{equation}
k_{ab} = h^{1}_{ab} +2 \sigma h^{0}_{ab}\label{def:kab}.
\end{equation}
Equations of motion at first order now take the form \cite{BS}
\begin{eqnarray}
\square \sigma + 3 \sigma &=& 0, \label{EOMs} \\
\mathcal{D}^b k_{ab} - \mathcal{D}_a k &=& 0, \label{kabEOM1}\\
(\square -3)k_{ab}
+k h^{0}_{ab} -\mathcal{D}_a \mathcal{D}_b k &=& 0 \label{kabEOM2},
\end{eqnarray}
where $\square = \mathcal{D}^a \mathcal{D}_a$ and $k = k_{a}{}^a$.
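For orientation, equation \eqref{EOMs} admits simple explicit solutions: the restrictions to the unit hyperboloid of the Cartesian functions $x^\mu/\rho$, e.g.\ $\sinh\tau = x^0/\rho$. The following sympy sketch (an independent check, not part of the text's analysis) verifies that $\sigma = \sinh\tau$ satisfies $\Box\sigma + 3\sigma = 0$:

```python
import sympy as sp

t, th, ph = sp.symbols('tau theta phi')
x = [t, th, ph]
# Unit hyperboloid metric, its inverse, and the volume factor sqrt(-det h^0)
h = sp.diag(-1, sp.cosh(t)**2, sp.cosh(t)**2*sp.sin(th)**2)
hinv = h.inv()
sqrth = sp.cosh(t)**2*sp.sin(th)

def box(f):
    # Scalar Laplacian: box f = (1/sqrt h) d_a ( sqrt h h^{ab} d_b f )
    return sp.simplify(sum(sp.diff(sqrth*hinv[a, b]*sp.diff(f, x[b]), x[a])
                           for a in range(3) for b in range(3))/sqrth)

# sinh(tau) is the restriction of x^0/rho to the hyperboloid
sigma = sp.sinh(t)
print(sp.simplify(box(sigma) + 3*sigma))  # 0
```
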
The important point to note here is
that equations for the fields $\sigma$ and $k_{ab}$ are decoupled.
By introducing the leading
order electric and magnetic parts of the Weyl tensor,
\begin{equation}
E^{1}_{ab} = -\mathcal{D}_a \mathcal{D}_b \sigma - h^{0}_{ab} \sigma \, , \qquad \qquad
B^{1}_{ab} = \frac{1}{2} \: \mathrm{curl\,} k_{ab} = \frac{1}{2}\epsilon_a^{\;\; c d} \mathcal{D}_c k_{d b} \,
\label{dBk},
\end{equation}
one can rewrite
these equations in a more enlightening form. See, for example, reference
\cite{V} for a detailed discussion on this. Our boundary conditions require
\begin{equation}
\fbox{$\displaystyle
B^{1}_{ab} = 0.$}
\end{equation}
This implies that $k_{ab}$ must be of the form
\begin{equation}
k_{ab} =2 \mathcal{D}_a \mathcal{D}_b \omega + 2h^{0}_{ab} \omega, \label{kabmain}
\end{equation}
for some arbitrary $\omega$.
This is because
the combination $\mathcal{D}_a \mathcal{D}_b \omega + h^{0}_{ab} \omega$ has vanishing curl,
and hence it does not contribute to the magnetic part of the Weyl tensor. For
the form \eqref{kabmain} of $k_{ab}$, the equations of motion \eqref{kabEOM1} and
\eqref{kabEOM2} are also automatically satisfied. Now recall that this freedom
in the choice of $k_{ab}$ corresponds exactly to performing
supertranslations in the space of Beig-Schmidt configurations \eqref{metric1}. By choosing a particular
representative for the inverse of the curl operator, i.e., a particular $\omega$ in \eqref{kabmain},
the freedom of performing supertranslations is eliminated.
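The vanishing-curl property of the combination $\mathcal{D}_a \mathcal{D}_b \omega + h^0_{ab}\omega$ can also be checked by brute force. The sympy sketch below (an independent verification; the test function $\omega = \cosh\tau\cos\theta$ is an arbitrary choice) computes $B^1_{ab} = \frac{1}{2}\,\mathrm{curl}\,k_{ab}$ for $k_{ab}$ of the form \eqref{kabmain} and confirms that it vanishes identically:

```python
import sympy as sp

t, th, ph = sp.symbols('tau theta phi')
x = [t, th, ph]
h = sp.diag(-1, sp.cosh(t)**2, sp.cosh(t)**2*sp.sin(th)**2)
hinv = h.inv()
sq = sp.cosh(t)**2*sp.sin(th)  # sqrt(-det h^0)

# Christoffel symbols of the unit hyperboloid
G = [[[sp.simplify(sum(hinv[a, d]*(sp.diff(h[d, b], x[c]) + sp.diff(h[d, c], x[b])
                                   - sp.diff(h[b, c], x[d])) for d in range(3))/2)
       for c in range(3)] for b in range(3)] for a in range(3)]

w = sp.cosh(t)*sp.cos(th)  # test supertranslation function (arbitrary choice)
dw = [sp.diff(w, xi) for xi in x]
# Hessian D_a D_b omega
DDw = [[sp.diff(dw[b], x[a]) - sum(G[c][a][b]*dw[c] for c in range(3))
        for b in range(3)] for a in range(3)]
# k_ab = 2 D_a D_b omega + 2 h^0_ab omega, eq. (kabmain)
k = [[2*DDw[a][b] + 2*h[a, b]*w for b in range(3)] for a in range(3)]

def Dk(c, d, b):
    # Covariant derivative D_c k_{db}
    return sp.diff(k[d][b], x[c]) - sum(G[e][c][d]*k[e][b] + G[e][c][b]*k[d][e]
                                        for e in range(3))

def eps_up(a, c, d):
    # epsilon_a^{cd} = sqrt(-det h) [a e f] h^{ec} h^{fd}
    return sum(sq*sp.LeviCivita(a, e, f)*hinv[e, c]*hinv[f, d]
               for e in range(3) for f in range(3))

# B^1_ab = (1/2) epsilon_a^{cd} D_c k_{db}: must vanish for k_ab of this form
B = [[sp.simplify(sp.expand(sum(eps_up(a, c, d)*Dk(c, d, b)
                                for c in range(3) for d in range(3))/2))
      for b in range(3)] for a in range(3)]
print(B)
```
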
Once such a choice is made, the function $\omega$ is fixed. It is best to regard it as a fixed background structure. This background structure is
precisely what we mean by the phrase `supertranslation frame.' The most studied choice corresponds to taking $\omega = 0$ \cite{ ABR,
MMV, MMMV, Ashtekar:1978zz, Ashtekar:1991vb, B}. We show in the rest of this
section that other choices of $\omega$ are equally allowed. In this paper we
wish to explore precisely the physics of making such a choice. To set the stage
for this discussion, we first need to look at the boundary conditions of
Ashtekar-Bombelli-Reula \cite{ABR}. These boundary
conditions were in turn motivated by \cite{Ashtekar:1978zz}. We will comment on
the motivation of references \cite{ABR, Ashtekar:1978zz} for choosing $\omega =0$
in section \ref{sec:conclusions}.
\subsubsection{Ashtekar-Bombelli-Reula Boundary Conditions}
For gravitational theories it is a well known fact that the boundary conditions
play a crucial role in the description of the phase space. For such
considerations it is often convenient to work in the covariant phase space
formalism \cite{ABR, LW, W, IW, WZ}. The key quantity to consider in this
approach is the symplectic current vector $w^\mu$. The symplectic current vector
depends on the background metric $g$ and on perturbations around the background
metric $\delta_1 g$ and $\delta_2 g$. It is skew symmetric
in the pair ($\delta_1 g$, $\delta_2 g$), and for the case of general relativity
it takes the form
\begin{equation}
w^\mu = P^{\mu \nu \alpha \beta \gamma \delta} (\delta_2 g_{\nu \alpha} \nabla_\beta \delta_1 g_{\gamma
\delta}- \delta_1 g_{\nu \alpha} \nabla_\beta \delta_2 g_{\gamma
\delta}),
\end{equation}
where
\begin{equation}
P^{\mu \nu \alpha \beta \gamma \delta} =
g^{\mu \gamma} g^{\delta \nu} g^{\alpha \beta}
- \frac{1}{2}g^{\mu \beta} g^{\nu \gamma} g^{\delta \alpha}
- \frac{1}{2}g^{\mu \nu} g^{\alpha \beta} g^{\gamma \delta}
- \frac{1}{2}g^{\nu \alpha} g^{\mu \gamma} g^{\delta \beta}
+ \frac{1}{2}g^{\nu \alpha} g^{\mu \beta} g^{\gamma \delta}.
\end{equation}
Using the Ansatz
\begin{equation}
ds^2 = g_{\mu \nu} dx^\mu dx^\nu = N^2 d \rho^2 + h_{ab} dx^a dx^b,
\label{ansatz1}
\end{equation}
and
\begin{equation}
\delta ds^2 = \delta g_{\mu \nu} dx^\mu dx^\nu = 2 N \delta N d \rho^2 +
\delta h_{ab} dx^a dx^b,
\label{ansatz2}
\end{equation}
we obtain the 3+1 split of the symplectic current vector. The radial component
reads
\begin{eqnarray}
w^\rho &=& \frac{1}{4N^3} h^{ab}h^{cd} \Bigg{\{} \Big{[} N h^{ef} \delta_1
h_{ab} \delta_2 h_{ce} \partial_\rho h_{df}
+ 2 \delta_2 h_{ac}(\delta_1 N
\partial_\rho h_{bd} - N \partial_\rho \delta_1 h_{bd}) \nonumber \\
&&- 2 \delta_2 h_{ab}(\delta_1 N \partial_\rho h_{cd} - N \partial_\rho
\delta_1 h_{cd}) \Big{]} - (1 \leftrightarrow 2) \Bigg{\}}, \label{wrho}
\end{eqnarray}
whereas the angular components read
\begin{eqnarray}
w^f &=& \frac{1}{2N} h^{fa} h^{bc} \Bigg{\{}
2 \delta_2 N D_a \delta_1 h_{bc} +
2 \delta_2 h_{bc} D_a \delta_1 N
- 2 \delta_2 N D_c \delta_1 h_{ab}
- 2 \delta_2 h_{ab} D_c \delta_1 N \nonumber \\ & &
+ h^{de} \Big{[}
N \delta_2 h_{bc} D_a \delta_1 h_{de}
- N \delta_2 h_{bd} D_a \delta_1 h_{ce}
- \delta_1 h_{ab} \delta_2 h_{de} D_c N
- N \delta_2 h_{ab} D_c \delta_1 h_{de} \nonumber \\ & &
+ 2 N \delta_2 h_{bd} D_e \delta_1 h_{ac}
- N \delta_2 h_{bc} D_e \delta_1 h_{ad}
\Big{]} - (1 \leftrightarrow 2) \Bigg{\}}.
\label{wf}
\end{eqnarray}
For the Ansatz \eqref{ansatz1} and \eqref{ansatz2},
equations \eqref{wrho} and \eqref{wf} are general expressions for the
radial and the tangential components of the symplectic current $w^\mu$. In
arriving at these expressions no reference to any boundary conditions has been
made.
Although these expressions look somewhat unwieldy, from the computational point of
view they are the easiest to work with.
The integral of the Hodge dual of the symplectic current vector over a Cauchy
slice $\Sigma$ defines the symplectic structure. One must choose boundary conditions
to ensure that the symplectic structure is finite and conserved. When
$\delta_1 g $ and $\delta_2 g$ satisfy linearized equations of motion, it
follows from a standard argument that $\nabla_\mu w^{\mu} =0$, where
$\nabla_\mu$ is the covariant derivative compatible with the bulk metric $g$.
Therefore, the two requirements---finiteness and conservation of the symplectic
structure---reduce to respectively
\begin{equation}
\frac{1}{16\pi G}\int_\Sigma \star_4 w^\mu < \infty \qquad \mbox{and} \qquad
\frac{1}{16\pi G}\int_{\Sigma_{12}} \star_4 w^\mu = 0.
\end{equation}
Here, $\star_4$ denotes the four-dimensional Hodge star, and surface
$\Sigma_{12}$ is defined as follows. Let $\Sigma_1$ and $\Sigma_2$ be two Cauchy
surfaces ending at spatial infinity. These surfaces enclose a spacetime volume
bounded by $\Sigma_1$ and $\Sigma_2$ and a portion of the boundary.
$\Sigma_{12}$ denotes that portion of the boundary. With our notion of
asymptotic flatness these requirements are translated into, respectively,
\begin{equation}
\frac{1}{16\pi G}\lim_{\rho \to \infty}\int_\Sigma \sqrt{-g} w^{\tau} d \rho d
\theta d \phi < \infty \qquad \mbox{and} \qquad \frac{1}{16\pi G} \lim_{\rho
\to \infty} \int_{\Sigma_{12}} \sqrt{-g} w^\rho d\tau d\theta d\phi = 0,
\end{equation}
where we have taken Cauchy surfaces $\Sigma_{1,2}$ to asymptote to constant
$\tau$ surfaces in the hyperboloid. Ashtekar, Bombelli, and Reula showed that
with the boundary conditions
\begin{eqnarray}
\quad h^1_{ab} &=&- 2 \sigma h^0_{ab},\quad
\label{ABR1}\\
\quad \delta h^1_{ab} &=&- 2 \delta \sigma h^0_{ab},\quad
\label{ABR2}
\\
\quad \sigma(\tau, \theta, \phi)&=& \sigma(-\tau, \pi - \theta, \phi + \pi),\quad
\label{ABR3}
\label{sigmasymm}
\end{eqnarray}
both the above requirements are satisfied. In particular, for the integral
\begin{equation}
\frac{1}{16\pi G}\lim_{\rho \to \infty}\int_\Sigma \sqrt{-g} w^{\tau} d \rho d
\theta d \phi
= \frac{1}{4\pi G}\lim_{\rho \to \infty}\int_\Sigma\sqrt{-h^0}
\frac{1}{\rho} \left(\delta_1 \sigma \mathcal{D}^\tau \delta_2 \sigma - \delta_2 \sigma \mathcal{D}^\tau
\delta_1 \sigma \right)d\rho d \theta d
\phi,
\end{equation}
one finds that the potentially divergent term on the right hand side vanishes
upon using boundary condition \eqref{sigmasymm}. In the boundary conditions
\eqref{ABR1}--\eqref{ABR3} the choice $\omega = 0$ has been
made. Since a particular choice has been made, supertranslations do not
act on the phase space.
\subsubsection{Supertranslated Boundary Conditions}
The boundary conditions we work with in this paper are as follows
\begin{empheq}[box=\widefbox]{align}
\quad h^1_{ab} &=- 2 \sigma h^0_{ab} + 2 \mathcal{D}_a \mathcal{D}_b \omega + 2 h^0_{ab} \omega,\quad
\label{sABR1} \\
\quad \delta h^1_{ab} &=- 2 \delta \sigma h^0_{ab},\quad \label{sABR2} \\
\quad \sigma(\tau, \theta, \phi)&= \sigma(-\tau, \pi - \theta, \phi + \pi).\quad
\label{sABR3}
\end{empheq}
In the boundary conditions
\eqref{sABR1}--\eqref{sABR3} the choice $\omega \neq 0$ has been
made. Once again since a particular choice for $\omega$ has been made,
supertranslations \emph{do not} act on the phase space.
Nothing changes in the calculation of the symplectic structure when working with
these boundary conditions. The symplectic structure is still finite and
conserved as is the case with the ABR boundary conditions. It is best to regard
$\omega$ as a fixed background structure. We refer to boundary conditions \eqref{sABR1}--\eqref{sABR3} as
the supertranslated ABR boundary conditions.
At this point we wish to point out that the possibility of such a generalization
was already anticipated in the work of Mann and Marolf
\cite{MM}, though the precise boundary conditions
\eqref{sABR1}--\eqref{sABR3}
were not stated there.
\setcounter{equation}{0}
\section{Asymptotic Expansions}
\label{sec:expansion}
Having specified our boundary conditions we now wish to study the consequences
on the asymptotic equations of motion and on the construction of the boundary
stress tensor at the next-to-next-to leading order. It is necessary to work at
this order to get a handle on the construction of Lorentz charges. We will concentrate mostly on the
physics, and will not go into much calculational detail. Since we carry out
asymptotic expansions at second order for arbitrary $\omega$, the manipulations
involved are in fact quite intricate and tedious.
\subsection{Second Order Equations of Motion}
The radial 3+1 split of the bulk Einstein equations gives the following equations
for the Ansatz \eqref{ansatz1} \cite{B}
\begin{eqnarray}
h^{ab} \partial_\rho K_{ab} - N K_{ab}K^{ab} + D^2 N &=& 0, \\
D_b K^b{}_{a} - D_a K &=& 0, \\
\mathcal{R}_{ab}- N^{-1} \partial_\rho K_{ab} - N^{-1} D_a D_b N - K K_{ab} +
2 K_{a}{}^{c} K_{cb} &=& 0.
\end{eqnarray}
Here $K_{ab}$ denotes the extrinsic curvature of the constant $\rho$
hypersurface. We carry out the expansion of these equations systematically in
appendix \ref{app:EOM}. The final outcome of this analysis is the second
order equations of motion. These equations take the form
\begin{eqnarray}
h^2 &=& 12 \sigma^2 + \sigma_a \sigma^a + 3 \omega^2 + 2 \omega \Box \omega +
\omega_{ab}\omega^{ab} - 9 \omega \sigma - \sigma \Box \omega + \sigma_a \omega^a + \sigma_a
\Box \omega^a \nonumber \\
& & + 2 \sigma_{ab}\omega^{ab},\label{traceh2}\\
\mathcal{D}^b h^2_{ab} &=& 16 \sigma \sigma_a + 2 \sigma_{ab}\sigma^b + 2 \omega \omega_a + 2 \omega
\Box \omega_a + 2 \omega^b \omega_{ab} + \omega_{ab}\Box \omega^b +
\omega_{abc}\omega^{bc} \nonumber \\
& & - \sigma \omega_a - 3 \omega \sigma_a + \sigma_a \Box \omega - \sigma \Box \omega_a + 3
\sigma_{ab} \omega^b - \omega_{ab} \sigma^b + \sigma_{ab}\Box \omega^b \nonumber + \sigma^b \Box \omega_{ab} \\
&&
+ 2 \sigma_{abc}\omega^{bc} + 2 \omega_{abc} \sigma^{bc}, \label{divh2}\\
(\Box - 2 )h^2_{ab} &=& 6(\sigma_c \sigma^c - 3 \sigma^2)h^0_{ab} + 8 \sigma_a \sigma_b + 14 \sigma
\sigma_{ab} + 2 \sigma_{ac}\sigma^{c}{}_{b} + 2 \sigma_{abc}\sigma^c\nonumber \\
& & + 2 (\omega \Box \omega - \omega^2 + \omega_c \omega^c )h^0_{ab}
- 4 \omega \omega_{ab} + 2 \omega_{ab}\Box \omega + 2 \omega \Box \omega_{ab}
\nonumber \\ &&
+ 4 \omega_{abc}\omega^c
-2 \omega_{cb}\omega^{c}{}_{a} + 2 \omega_{a}{}^{cd} \omega_{bcd} + 2 \omega_{c(a} \Box \omega_{b)}{}^c
\nonumber \\
& & + (14 \omega \sigma - 4 \sigma \Box \omega - 4 \sigma_c \omega^c + 2 \sigma^c \Box \omega_c + 4
\sigma_{cd}\omega^{cd})h^0_{ab} + 17 \sigma \omega_{ab} - \omega \sigma_{ab} \nonumber \\
& &
- \sigma_{ab}\Box \omega - \sigma \Box \omega_{ab} + 5 \sigma_{abc}\omega^c - 5 \sigma^c \omega_{abc} + \sigma_{abc}
\Box \omega^c + \sigma^c \Box \omega_{abc} \nonumber \\
& & + 2 \sigma_{abcd}\omega^{cd} + 2 \omega_{abcd}\sigma^{cd} + 2 \sigma_{c(a} \omega_{b)}{}^c + 2
\sigma_{c(a} \Box \omega_{b)}{}^c + 4 \sigma_{(a}{}^{cd} \omega_{b)cd}.
\label{boxh2}
\end{eqnarray}
In writing these equations we use the following compact
notation,
\begin{eqnarray}
\omega_{abcd} &=& \mathcal{D}_d \mathcal{D}_c \mathcal{D}_b \mathcal{D}_a \omega, \\
\Box \omega_{abc} &=& (\mathcal{D}^e \mathcal{D}_e) \omega_{abc} = \mathcal{D}^e \mathcal{D}_e \mathcal{D}_c \mathcal{D}_b \mathcal{D}_a \omega,
\end{eqnarray}
etc.~and similarly for $\sigma$.
In the special case when $(\Box + 3) \omega = 0$ these equations can
also be extracted from \cite{CD}.
\subsection{Integrability Conditions}
The second order equations of motion \eqref{traceh2}--\eqref{boxh2} are in fact
quite complicated. It might seem difficult to rewrite these equations in a form that can be used to perform
an integrability analysis following Beig \cite{B}. Remarkably enough, this is
not the case. These equations have a somewhat magical structure: all the $(\sigma,\omega)$
terms and all $(\omega,\omega)$ terms on the right hand side of equation
\eqref{boxh2} can be repackaged as $(\Box -2)$ acting on the following tensor
\begin{eqnarray}
\chi_{ab}& = & - \sigma \omega_{ab} - \omega \sigma_{ab} - 4 h^0_{ab} \sigma \omega
+ 2 h^0_{ab} \sigma_c \omega^c + \sigma_{abc}\omega^c + \omega_{abc}\sigma^c + 2
\sigma_{c(a}\omega_{b)}{}^{c} \nonumber \\
& &
+ 2 \omega \omega_{ab} + \omega_a{}^c
\omega_{bc} + h^0_{ab} \omega^2. \label{chiab}
\end{eqnarray}
As a result \eqref{boxh2} can be written as
\begin{eqnarray}
(\Box -2)(h^2_{ab} - \chi_{ab}) = 6(\sigma_c \sigma^c - 3 \sigma^2)h^0_{ab} + 8 \sigma_a \sigma_b + 14 \sigma
\sigma_{ab} + 2 \sigma_{ac}\sigma^{c}{}_{b} + 2 \sigma_{abc}\sigma^c\nonumber.
\end{eqnarray}
The usefulness of the tensor
$\chi_{ab}$ goes well beyond that. Equation \eqref{traceh2} can be rewritten
as
\begin{equation}
h^2 - \chi = 12 \sigma^2 + \sigma_a \sigma^a,
\end{equation}
and similarly the divergence equation \eqref{divh2} is rewritten as
\begin{equation}
\mathcal{D}^b(h^2_{ab} - \chi_{ab})= 16 \sigma \sigma_a + 2 \sigma_{ab}\sigma^b.
\end{equation}
Written in this form the second order equations are much more manageable. Now,
following the discussion in \cite{CDV} we define a tensor $V_{ab}$ as
\begin{eqnarray}
V_{ab}= -h^2_{ab} + \chi_{ab} + 6 \sigma^2 h^0_{ab} + 2 \sigma_{ab} \sigma - 2 \sigma_a \sigma_b +
\sigma^c \sigma_c h^0_{ab}.
\end{eqnarray}
In terms of $V_{ab}$ the equations of motion take the form
\begin{empheq}[box=\widefbox]{align}
V_{a}^{a} &=0,\\
\mathcal{D}^aV_{ab} &=0, \\
(\Box-2)V_{ab}&= \mathrm{curl\,} [ 4 \epsilon_{cd(a} \sigma^c \sigma^{d}_{b)}],
\end{empheq}
where as in \eqref{dBk} $\mathrm{curl\,}$ of a tensor $T_{ab}$ is defined as
\begin{equation}
\mathrm{curl\,} T_{ab} = \epsilon_{a}{}^{cd} \mathcal{D}_c T_{db}.
\end{equation}
For further properties of the $\mathrm{curl\,}$ operator and of the tensor
structure $\epsilon_{cd(a} \sigma^c \sigma^{d}_{b)}$ we refer the reader to \cite{CDV}.
Since $V_{ab}$ is symmetric, traceless, and divergence free, discussion of
the integrability conditions of \cite{CDV} applies as is. We find
that despite the fact that we have an arbitrary function $\omega$ in our
asymptotic expansion the integrability conditions do not change.
The integrability conditions require Lorentz charges constructed using
\begin{equation}
\mathrm{curl\,} [ 4
\epsilon_{cd(a} \sigma^c \sigma^{d}_{b)}]
\end{equation}
to be zero. In this paper, we have chosen
the mass aspect $\sigma$ to be a symmetric function on the hyperboloid. As a
result, the integrability conditions are automatically satisfied
\cite{MMMV, CDV}. The outcome of this is that the tensor $V_{ab}$ can be
readily used to construct well defined and conserved Lorentz charges\footnote{One comment regarding the tensor $\chi_{ab}$ is in order here: the form of $\chi_{ab}$ \eqref{chiab} can also be worked out by calculating the
non-linear action of supertranslation $\omega$ on $h^2_{ab}$ starting with the ABR
boundary conditions, in particular using equation \eqref{ABR1}. This calculation in a
somewhat different context was first performed in an unpublished work in
collaboration with Geoffrey Compere and Francois Dehouck. For the special case
when $\mathcal{D}^a k_{ab} = k_{a}^{a} = 0$, i.e., $(\Box + 3)\omega = 0$, such an
expression can also be extracted from \cite{CD}. I thank Geoffrey Compere and
Francois Dehouck for their permission to use material from this joint
unpublished work.}.
\subsection{Expansion of Counterterm}
\label{sec:expansionCounterterm}
Having analysed the second order equations of motion and the integrability
conditions, we now turn to the expansion of the Mann-Marolf counterterm. Recall that the counterterm $\hat K$
is defined implicitly via the Gauss-Codacci like equation \eqref{defhatK}. It
is convenient to introduce $\hat p_{ab} = \frac{1}{\rho} \hat K_{ab}$.
Expanding
$\hat p_{ab}$ as
\begin{equation}
\hat p_{ab} = h^0_{ab} + \frac{1}{\rho} \hat p^1_{ab} +
\frac{1}{\rho^2} \hat p^2_{ab} + {\cal O} \left(
\frac{1}{\rho^3}\right), \label{expansionp}
\end{equation}
we can invert the relation \eqref{defhatK} and express $\hat p^1_{ab}, \hat
p^2_{ab}$ in terms of the expansion of the Ricci tensor on the hyperboloid.
This computation was first done in \cite{MMV} for the ABR boundary conditions. We refer the reader to the
appendix B of \cite{MMV} for details.
By a direct calculation we find upon using equations of motion obtained above
\begin{eqnarray}
\hat p^1_{ab} = \sigma_{ab} - \sigma h^0_{ab} + \omega_{ab} + h^0_{ab} \omega.
\end{eqnarray}
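As a quick consistency check, tracing this expression with $h^{0\,ab}$ and using the first order equation \eqref{EOMs} gives
\begin{equation}
h^{0\,ab}\,\hat p^{1}_{ab} = \Box \sigma - 3 \sigma + \Box \omega + 3 \omega = -6\sigma + \Box\omega + 3\omega,
\end{equation}
in agreement with the expression for $\hat p^1$ quoted at the end of this subsection.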
A similar calculation for $\hat p^2_{ab}$ gives
\begin{eqnarray}
\hat p^{2}_{ab} &=& h^{2}_{ab} - \left( \frac{5}{4}\sigma^2 + \sigma_c \sigma^c +
\frac{1}{4} \sigma^{cd}\sigma_{cd}\right) h^0_{ab} +
2 \sigma_a \sigma_b + \sigma \sigma_{ab} + \sigma_a{}^c \sigma_{cb}
- h^0_{ab} \omega^2 \nonumber \\ && - 2 \omega \omega_{ab} - \omega_{a}{}^{c} \omega_{cb}
+ \left(3 \sigma \omega
+ \frac{3}{2} \sigma \Box \omega - \sigma^c \omega_c + \frac{1}{2}
\sigma_{cd}\omega^{cd} \right) h^0_{ab}
+ \omega \sigma_{ab}
\nonumber \\ &&
+ \sigma_{ab} \Box \omega
- \omega_{abc}\sigma^c - 2 \sigma_{c(a}\omega_{b)}{}^{c}.
\label{phat2:1}
\end{eqnarray}
The traces of $\hat p^{1}_{ab}$ and $\hat p^{2}_{ab}$ simplify to
\begin{eqnarray}
\hat p^1 &:=& h^{0 ab}\hat p^{1}_{ab} = - 6 \sigma + \Box \omega + 3 \omega,\\
\hat p^2 &:=& h^{0 ab}\hat p^{2}_{ab} = \frac{21}{4}\sigma^2 + \frac{1}{4} \sigma_{cd}
\sigma^{cd} - 3 \omega \sigma + \frac{1}{2} \sigma \Box \omega + \frac{3}{2} \sigma_{cd}
\omega^{cd}.
\label{tracephat2}
\end{eqnarray}
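For instance, the first of these traces can be checked directly. With the shorthand (used throughout) that repeated subscripts denote covariant derivatives on the hyperboloid, $\sigma_{ab} = \mathcal{D}_a \mathcal{D}_b \sigma$ and $\omega_{ab} = \mathcal{D}_a \mathcal{D}_b \omega$, contracting $\hat p^1_{ab}$ with $h^{0\, ab}$ and using the first order equation of motion $(\Box + 3)\sigma = 0$ gives
\begin{equation}
h^{0\, ab} \hat p^1_{ab} = \Box \sigma - 3 \sigma + \Box \omega + 3 \omega = - 6 \sigma + \Box \omega + 3 \omega.
\end{equation}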
These equations are important for the considerations of the next section.
\setcounter{equation}{0}
\section{Supertranslations and Boundary Stress Tensor}
\label{sec:BST}
In this section we study the on-shell value of the action and its first
variations. We also compute the next to leading order expression for the
boundary stress tensor. We follow the corresponding discussion in \cite{MM,
MMMV, V}. The new element in the following discussion is our boundary conditions
\eqref{sABR1}--\eqref{sABR3}.
\subsection{First Variations}
Let us consider the first variations of the Mann-Marolf action over
configurations satisfying our boundary conditions \eqref{sABR1}--\eqref{sABR3}
and evaluate it on-shell.
This set-up was already considered in \cite{MM} so we shall be brief. The first
variation of the Mann-Marolf action is
\cite{MM, MMMV, V}
\begin{equation}
(16 \pi G) \delta S_{\rom{total}} = \int_{\partial \mathcal{M}} \sqrt{-h} d^3 x(\pi^{ab} - \hat \pi^{ab} + \Delta^{ab}) \delta h_{ab},
\end{equation}
where $\pi^{ab} = K h^{ab}- K^{ab}$, $\hat \pi^{ab} = \hat K
h^{ab}- \hat K^{ab}$, and $\Delta^{ab}$ is
\begin{equation}
\Delta^{ab} = \hat K^{ab} - 2 \tilde L^{cd} (\hat K_{cd} \hat K^{ab} - \hat
K^{a}_{c} \hat K_{d}^{b}) + D^2 \tilde L^{ab} + h^{ab}D_c D_d \tilde L^{cd} - 2
D_{d}D^{(a}\tilde L^{b)d},
\label{Delta}
\end{equation}
with $L_{ab}{}^{cd}$ and $\tilde L^{ab}$ given by \cite{MM, MMMV, Ross}
\begin{equation}
L_{ab}{}^{cd} = h^{cd}\hat K_{ab} + \delta_{(a}^c\delta_{b)}^d \hat K -
\delta_{(a}^c\hat K_{b)}^d- \delta_{(a}^d\hat K_{b)}^c, \qquad \quad \tilde
L^{ab} := h^{cd}(L^{-1})_{cd}{}^{ab}.
\end{equation}
Using the asymptotic expansions of the previous section, it follows that
\begin{equation}
(\pi^{ab} - \hat \pi^{ab} + \Delta^{ab}) = \frac{1}{\rho^{4}}\left(\sigma^{ab} + \sigma
h^{0ab}\right) + \mathcal{O}\left(\frac{1}{\rho^{5}}\right).
\label{inter}
\end{equation}
Now, using our boundary condition \eqref{sABR2} we see that in
the $\rho \to \infty $ limit
\begin{equation}
(16 \pi G) \delta S_{\rom{total}} = \int_{dS_3} \sqrt{-h^{0}} d^3 x\left(
\sigma^{ab} + h^{0\:ab}\sigma\right)\left(- 2 \delta \sigma h^0_{ab}\right).
\end{equation}
The equation of motion for $\sigma$, namely $(\Box + 3)\sigma = 0$, now immediately tells us that the
first variation of the action vanishes identically
\begin{equation}
\delta S_{\rom{total}} = 0.
\end{equation}
Thus, action \eqref{action} provides a good
variational principle for our notion of asymptotic flatness\footnote{Alternatively, using
$\delta h_{ab} = \rho \delta h^{1}_{ab} + \ldots = - 2 \rho \delta \sigma
h^{0}_{ab} + 2 \rho \mathcal{D}_a \mathcal{D}_b \delta \omega + 2 \rho \delta \omega h^0_{ab}+ \ldots$
and $\sqrt{-h} = \rho^{3}\sqrt{-h^{0}} + \ldots $ it follows that in the $\rho
\to \infty $ limit
\begin{equation}
(16 \pi G) \delta S_{\rom{total}} = \int_{dS_3} \sqrt{-h^{0}} d^3 x\left(
\sigma^{ab} + h^{0\:ab}\sigma\right)\left(- 2 \delta \sigma h^0_{ab} + 2 \mathcal{D}_a \mathcal{D}_b
\delta \omega + 2 \delta \omega h^0_{ab}\right).
\end{equation}
Using the equation of motion
for $\sigma$, this equation further simplifies to
\begin{equation}
(16 \pi G) \delta S_{\rom{total}} = \int_{dS_3} \sqrt{-h^{0}} d^3 x\left(
\sigma^{ab} + h^{0\:ab}\sigma\right)\left( 2 \mathcal{D}_a \mathcal{D}_b
\delta \omega\right).
\end{equation}
Performing integration by parts and using equation of motion for $\sigma$ one more
time, we see that the first variation of the action vanishes identically
\begin{equation}
\delta S_{\rom{total}} = 0.
\end{equation}
In particular, supertranslations need not be fixed! Asymptotically
flat metrics related to each other via arbitrary supertranslations can be
consistently considered in the Mann-Marolf variational principle. See also
\cite{CD}. However, these are not the boundary conditions we use in this paper,
for reasons emphasized in the introduction.
\label{footnote:supertranslation}}.
\subsection{On-shell Action}
\label{onshell}
We now calculate the on-shell value of the action. Given our results above,
this calculation is rather straightforward. Since our
spacetimes are Ricci flat the bulk term in \eqref{action} vanishes on-shell.
Therefore,
\begin{equation}
S_{\rom{on-shell}} = \frac{1}{8 \pi G} \int_{\partial \mathcal{M}}
d^3x\sqrt{-h} (K - \hat K).
\end{equation}
Now, using the expansions of $\hat K_{ab}$ and $K_{ab}$ obtained above (section
\ref{sec:expansionCounterterm} and Appendix \ref{app:EOM} respectively) we
have
\begin{equation}
S_{\rom{on-shell}} = \frac{1}{32
\pi G} \int_{dS_3}d^3x\sqrt{-h^{0}}\left[3 \sigma^2 - \sigma_{ab}\sigma^{ab} + 2 \sigma \Box \omega + 2
\sigma_{ab}\omega^{ab} \right].
\label{onshell}
\end{equation}
All divergent terms have cancelled, and the on-shell action is finite. Performing integrations
by parts and using the equation of motion for $\sigma$, we observe that the on-shell
action vanishes
\begin{equation}
S_{\rom{on-shell}} = 0.
\end{equation}
In particular, the on-shell value \emph{does not} depend on the supertranslation
frame $\omega$.
An interpretation of this result is as follows \cite{V}. We showed above that
$\delta S = 0$ on all variations satisfying our boundary conditions. It follows
that $S_{\rom{on-shell}}$ must be constant as we move along any smooth
path in our phase space. Furthermore, we expect all
configurations to be smoothly connected to Minkowski space. For Minkowski space
$S_{\rom{on-shell}}$ is identically zero. Therefore, it follows that
$S_{\rom{on-shell}}$ is identically zero on any asymptotically flat solution satisfying our boundary condition. For more
comments on this point see \cite{V} and also footnote
\ref{footnote:supertranslation}.
\subsection{Boundary Stress Tensor}
From the first variation of the action, the boundary stress tensor can also be
computed. It admits an expansion in the inverse powers of $\rho$. The leading
order and the next to leading order terms in the expansion are relevant for the
construction of translations and Lorentz charges respectively \cite{MM, MMV}.
After a long and tedious computation we find these expressions to be
\begin{eqnarray}
T_{ab} = -\frac{1}{8 \pi G} \left( T^{1}_{ab} + \frac{1}{\rho}T^{2}_{ab}
+ \ldots \right) \label{Texpansion}
\end{eqnarray}
where
\begin{eqnarray}
T^1_{ab} = \sigma_{ab} + h^0_{ab} \sigma \label{T1}
\end{eqnarray}
and
\begin{eqnarray}
T^2_{ab} &=& h^2_{ab} + 2 \sigma_a \sigma_b + \frac{49}{4}\sigma \sigma_{ab} + 4 \sigma_{abc}\sigma^c +
7 \sigma_{a}{}^{c}\sigma_{bc} - \frac{3}{4} \sigma_{abcd}\sigma^{cd} +
\frac{9}{4}\sigma_{acd}\sigma_{b}{}^{cd} \nonumber \\
& & + \left[
\frac{35}{4}\sigma^2 + 3 \sigma_c \sigma^c - \frac{13}{4} \sigma_{cd}\sigma^{cd} -
\frac{3}{4}\sigma_{cde}\sigma^{cde} \right] h^0_{ab} + \frac{1}{2} \omega_a \omega_b - 2 \omega
\omega_{ab} + \omega_{ab} \Box \omega \nonumber \\
& & + \omega_{(a}\Box \omega_{b)} + \omega_{abc}\omega^{c} - 4 \omega_{ac}\omega_b{}^{c} +
\frac{1}{2}\Box \omega_a \Box \omega_b - \frac{1}{2} \Box \omega \Box \omega_{ab} - 2
\omega_{abc}\Box \omega^{c} \nonumber \\
& &+ 2 \omega_{c(a}\Box \omega_{b)}{}^{c} - \omega_{ab} \Box \Box \omega -
\frac{1}{2}\omega_{abcd}\omega^{cd} + \frac{3}{2} \omega_{acd}\omega_{b}{}^{cd} +
\left[-\omega^2 + \frac{1}{2} \omega_c \omega^c - 2 \omega_c \Box \omega^{c}
\right. \nonumber \\
&& \left.
+ \frac{1}{2} \Box
\omega^{c} \Box \omega_{c}- \frac{1}{2}\omega_{cd} \Box \omega^{cd} + \frac{1}{2} \Box \omega \Box
\Box \omega - \frac{1}{2}\omega_{cde}\omega^{cde} \right] h^0_{ab} + \sigma \omega_{ab} + \omega
\sigma_{ab} \nonumber \\
& & + \frac{7}{4} \sigma_{ab} \Box \omega - \frac{9}{4} \sigma \Box \omega_{ab} -
\frac{3}{2}\sigma_{abc}\omega^{c} - \frac{11}{2} \omega_{abc} \sigma^c - 2 \sigma_{c(a}
\omega^{c}{}_{b)} + 3 \Box \omega^c \sigma_{abc} - 3 \sigma^{c}{}_{(a} \Box \omega_{b)c} \nonumber \\
& & + \frac{3}{2} \sigma_{ab} \Box \Box \omega + \frac{3}{4} \sigma_{abcd}\omega^{cd}+
\frac{3}{4} \omega_{abcd}\sigma^{cd} - \frac{9}{2} \sigma_{(a}{}^{cd} \omega_{b)cd} + \left[
4 \omega \sigma + \frac{17}{4} \sigma \Box \omega - \frac{11}{2} \sigma_c \omega^{c}
\right.
\nonumber \\
& &
\left.
+ \frac{9}{2} \sigma^c
\Box \omega_c + \frac{9}{4} \sigma \Box \Box \omega + \frac{13}{4} \sigma_{cd}\omega^{cd} +
\frac{3}{4} \sigma_{cd}\Box \omega^{cd} + \frac{3}{2} \sigma_{cde}\omega^{cde} \right]
h^0_{ab}.
\label{mainT}
\end{eqnarray}
Equation \eqref{mainT} is one of the main results of this paper. Certain
calculational details on how we obtained this expression can be found in
appendix \ref{app:BST}.
\setcounter{equation}{0}
\section{Properties of Boundary Stress Tensor}
\label{sec:BSTprop}
In this section we explore properties of our boundary stress tensor
\eqref{Texpansion}--\eqref{mainT}.
\subsection{Boundary Stress Tensor is Conserved a la Brown-York}
\label{sec:BSTprop1}
The above stress tensor can be shown to be conserved
\begin{equation}
D^{b}T_{ab} = 0. \label{conserve}
\end{equation}
However, care must be exercised in interpreting this result. The
derivative $D^{a}$ in \eqref{conserve} is the torsion-free covariant derivative
compatible with the \emph{full metric on the hyperboloid $h_{ab}$.} When
expanded in powers of $\rho$ this equation reads at leading order
\begin{equation}
\mathcal{D}^{b}T^{1}_{ab} = 0,
\end{equation}
and at the next to leading order
\begin{equation}
\mathcal{D}^{b}T^2_{ab} - \sigma \sigma_a - \sigma_{ab} \sigma^b - 2 \sigma \omega_a - 2 \sigma \Box \omega_a - 2
\sigma_{ab}\omega^b - 2 \omega_{ab}\sigma^b - \sigma_{ab} \Box \omega^b - 2 \sigma_{abc}\omega^{bc} -
\omega_{abc}\sigma^{bc}=0. \label{conserve1}
\end{equation}
An important question to ask at this point is whether or not the
above expression can be written as the total divergence of a symmetric tensor $\tilde T_{ab}$. For
$\omega$ independent terms this is indeed the case \cite{MMV}
\begin{equation}
\tilde T_{ab} = T^2_{ab} - \sigma T^1_{ab}. \label{eq:Td}
\end{equation}
When $\omega$ dependent terms are included, with our preliminary investigations we
were unable to write \eqref{conserve1} as a total derivative of a symmetric
tensor. This is not necessarily an obstacle for the construction of conserved
charges. We already know from our study of the integrability conditions of the
second order equations of motion that a conserved tensor constructed using
$h^2_{ab}$---namely $V_{ab}$---exists and can be used to construct
conserved Lorentz charges.
We expect such a tensor to play an important role in the covariant phase space
construction of charges. Given the analysis of \cite{CD} and our considerations of the
covariant phase space above, it is fairly clear that such a construction goes
through without surprises. It can be interesting to fill in all details. We
will not pursue this direction here. On the other hand, construction of
conserved Lorentz charges using the boundary stress tensor approach is more
interesting and perhaps more difficult; we explore certain aspects of this in the
rest of the paper.
Reference \cite{MM} presented a general construction of boundary stress tensor charges starting with equation \eqref{conserve}.
There, an expression for the conserved charge associated with an asymptotic Killing vector
$\xi_\rho^a$ is given in terms of the variation of the renormalized action
\begin{equation}
Q[\xi] = - \Delta_{f, \xi} S_\rom{renorm} = - \lim_{\rho \to \infty}
\frac{1}{2} \int_{\partial \mathcal{M}_{\rho}} \sqrt{-h} T^{ab} \Delta_{f, \xi_\rho}
h_{ab} d^3x,
\label{charge}
\end{equation}
where
\begin{equation}
\Delta_{f,\xi} h_{ab} = (\pounds_{f\xi} g)_{ab} - f (\pounds_{\xi} g)_{ab},
\label{Deltafxi}
\end{equation}
and where $f$ is a smooth function that takes the value $f=0$ at the past
boundary of $\partial \mathcal{M}_{\rho}$ and the value $f=1$ at the future boundary.
The right hand side of \eqref{Deltafxi} denotes quantities evaluated in
the bulk $\mathcal{M}_{\rho}$ and then pulled back to the boundary $\partial
\mathcal{M}_{\rho}$. Using general arguments it has been shown in \cite{HIM} that
this charge is also the generator of the asymptotic symmetry $\xi_\rho^a$.
Upon performing integrations by parts, equation \eqref{charge} can be converted
into an integral over a co-dimension two surface $\mathcal{C}_\rho$---a cut of the boundary
$\partial \mathcal{M}_\rho$
\begin{equation}
Q[\xi] = \lim_{\rho \to \infty}
\int_{\mathcal{C}_{\rho}} \sqrt{-h_{C_\rho}} T_{ab} \xi_\rho^a n_\rho^{b} d^2x.
\label{charge2}
\end{equation}
At this stage the above expression for conserved charges is somewhat formal. All
quantities that enter into this expression admit expansions in inverse powers of
$\rho$. For analysing Lorentz charges, the second order expansion of various
quantities is required, which makes the analysis quite intricate.
Nevertheless, we expect that our boundary stress tensor can be used to construct
conserved charges. The precise details as to how this construction proceeds are
not investigated at this stage. We will return to this problem in future work.
It is worthwhile to point out that for the case $\omega = 0$ the
corresponding construction was carried out in \cite{MMV}, where the divergence-free
nature of the tensor $\tilde T_{ab}$ \eqref{eq:Td} was also observed.
Although it is fairly non-trivial to carry out explicit
construction of conserved charges
for $\omega \neq 0$ in all detail, it is rather straightforward to study
transformation properties of Lorentz charges under translations from \eqref{charge2}.
We present this study in section \ref{sec:transform}. For now let us explore
some further properties of our stress tensor.
\subsection{Special Cases}
\label{sec:BSTprop2}
In this subsection we look at various special cases where our stress tensor
simplifies. In all cases it satisfies the expected properties. This
study allows us to probe the structure of our stress tensor.
\subsubsection{$\omega = 0$}
When we choose $\omega = 0$, the boundary stress tensor reduces to a previously computed
expression \cite{MMMV}
\begin{eqnarray}
T_{ab} = -\frac{1}{8 \pi G} \left( T^{1}_{ab} + \frac{1}{\rho}T^{2}_{ab}
+ \ldots \right),
\end{eqnarray}
where
\begin{eqnarray}
T^1_{ab} = \sigma_{ab} + h^0_{ab} \sigma,
\end{eqnarray}
and
\begin{eqnarray}
T^2_{ab} &=& h^2_{ab} + 2 \sigma_a \sigma_b + \frac{49}{4}\sigma \sigma_{ab} + 4 \sigma_{abc}\sigma^c +
7 \sigma_{a}{}^{c}\sigma_{bc} - \frac{3}{4} \sigma_{abcd}\sigma^{cd} +
\frac{9}{4}\sigma_{acd}\sigma_{b}{}^{cd} \nonumber \\
& & + \left[
\frac{35}{4}\sigma^2 + 3 \sigma_c \sigma^c - \frac{13}{4} \sigma_{cd}\sigma^{cd} -
\frac{3}{4}\sigma_{cde}\sigma^{cde} \right] h^0_{ab}.
\label{simpleT}
\end{eqnarray}
Properties of this expression are already well studied in the
literature \cite{MMMV, CDV}.
\subsubsection{$\omega_{ab} + h^0_{ab}\omega = 0$}
When $\omega_{ab} + h^0_{ab}\omega = 0$, i.e., when $\omega$ is a translation,
$k_{ab}$ \eqref{kabmain} vanishes identically. In this case the asymptotic
metric expansion also reduces to the previously studied case of \cite{MMV, MMMV,
CDV}.
Therefore, we expect again the boundary stress tensor to reduce to \eqref{simpleT}. It can be
verified by a direct calculation that this is indeed the case. Remarkable cancellations happen when $\omega_{ab} +
h^0_{ab}\omega = 0$ is substituted in \eqref{mainT}. All $\omega$ dependent
terms reduce to zero, giving us \eqref{simpleT} as the final expression. This
provides a highly non-trivial test of our computations.
\subsubsection{$\sigma = 0$}
Another non-trivial case is when the mass aspect is
set to zero. In this case all conserved charges corresponding to translations
vanish identically. Perhaps Minkowski space is the only solution with this
property. In this section we wish to understand
properties of the Lorentz charges when $\sigma =0$. When $\sigma$ is set to zero,
$h^2_{ab}$ is solved from equations \eqref{traceh2}--\eqref{boxh2} to read\footnote{With the
most natural choice $V_{ab} =0$. A choice is necessary because the
corresponding equations are hyperbolic.}
\begin{equation}
h^2_{ab} = 2 \omega \omega_{ab} + \omega_{a}{}^{c}\omega_{bc} + h^0_{ab} \omega^2.
\label{eqsubs}
\end{equation}
Below we substitute this expression
of $h^2_{ab}$ in the stress tensor. The resulting stress tensor is that
of Minkowski space in a general supertranslation gauge. The fact that the following calculation is non-trivial
and yields a non-zero answer is somewhat analogous to the holographic conformal
anomaly.
To analyse the structure of the simplified stress tensor, we first need to
recall a few useful results concerning symmetric divergence free tensors from
\cite{MMMV, B, Geroch}. A tensor $\theta_{ab}$ is said to admit a scalar
potential $\alpha$ if \begin{equation} \label{T2ScalarPot} \theta_{ab}[\alpha] = \mathcal{D}_a \mathcal{D}_b \alpha - h_{ab}^{0} \mathcal{D}^2 \alpha -
2 \alpha h_{ab}^{0}.
\end{equation}
The tensor $\theta_{ab} [\alpha]$ is conserved, i.e., $\mathcal{D}^a
\theta_{ab}[\alpha]=0$.
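As an illustrative sanity check (not part of the paper's computation), this conservation property can be verified symbolically in explicit global coordinates on the unit hyperboloid, where $h^0 = -d\tau^2 + \cosh^2\tau\,(d\theta^2 + \sin^2\theta\, d\phi^2)$; the sample potential $\alpha$ below is an arbitrary illustrative choice. A sympy sketch:

```python
import sympy as sp

tau, th, ph = sp.symbols('tau theta phi')
x = [tau, th, ph]
n = 3

# Unit-hyperboloid (dS3) metric h^0 in global coordinates
h = sp.diag(-1, sp.cosh(tau)**2, sp.cosh(tau)**2 * sp.sin(th)**2)
hinv = h.inv()

# Christoffel symbols Gamma^c_{ab} of h^0
Gam = [[[sp.simplify(sum(hinv[c, d] * (sp.diff(h[d, a], x[b])
         + sp.diff(h[d, b], x[a]) - sp.diff(h[a, b], x[d]))
         for d in range(n)) / 2)
         for b in range(n)] for a in range(n)] for c in range(n)]

alpha = sp.sinh(tau)**2 * sp.cos(th)   # arbitrary sample potential

# Covariant Hessian D_a D_b alpha and Box alpha
H = sp.Matrix(n, n, lambda a, b: sp.diff(alpha, x[a], x[b])
              - sum(Gam[c][a][b] * sp.diff(alpha, x[c]) for c in range(n)))
box = sum(hinv[a, b] * H[a, b] for a in range(n) for b in range(n))

# theta_{ab}[alpha] = D_a D_b alpha - h^0_{ab} D^2 alpha - 2 alpha h^0_{ab}
theta = H - h * box - 2 * alpha * h

# Covariant divergence D^a theta_{ab} = h^{ac} D_a theta_{cb}
div = [sum(hinv[a, c] * (sp.diff(theta[c, b], x[a])
           - sum(Gam[d][a][c] * theta[d, b] + Gam[d][a][b] * theta[c, d]
                 for d in range(n)))
           for a in range(n) for c in range(n))
       for b in range(n)]

assert [sp.simplify(e) for e in div] == [0, 0, 0]
```

In particular, if $\alpha$ is taken to be a translation function, $\theta_{ab}[\alpha]$ vanishes identically, so translations contribute trivially here.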
Moreover, if $\xi^{a}$ is a Killing vector of $h_{ab}^{0}$ then the current
$\theta_{ab}[\alpha] \xi^{b}$ can be expressed as the divergence of an anti-symmetric tensor \begin{equation} \label{T2pot}
\theta_{ab}[\alpha] \xi^{b} = \mathcal{D}^b\left( 2 \xi_{[b} \mathcal{D}_{a]}\alpha + \alpha
\mathcal{D}_{[b} \xi_{a]}\right).
\end{equation}
As a result the currents of the form $\theta_{ab}[\alpha] \xi^{b}$ do not
contribute to the conserved charge associated with $\xi^{a}$. Similarly, a
tensor $t_{ab}$ is said to admit a symmetric, transverse tensor potential
$\gamma_{ab}$ with $\mathcal{D}^{a} \gamma_{ab} = 0$ if
\begin{equation} \label{TensorPotential}
t_{ab}[\gamma_{ab}] = \mathcal{D}^2 \gamma_{ab} + 2 \mathcal{R}^{0}_{acbd}\gamma^{cd}
\quad \mbox{where} \quad\mathcal{R}^{0}_{acbd} = h^0_{ab} h^0_{cd} - h^0_{cb}
h^0_{ad}.
\end{equation}
The tensor
$t_{ab}[\gamma_{ab}]$ is conserved, and for $\xi^{a}$ a Killing vector of
$h^{0}_{ab}$ the current $t_{ab}[\gamma_{ab}]\xi^{b}$ is the divergence of an
anti-symmetric tensor
\begin{equation}
t_{ab}[\gamma_{ab}] \xi^{b} = 2\mathcal{D}^{b} \left( \xi^{c}\mathcal{D}_{[b} \gamma_{a]c} +
\gamma_{c[b}\mathcal{D}_{a]} \xi^{c}\right).
\end{equation}
Hence, currents of this form also do not contribute to the conserved charges.
Our strategy is to write the simplified expression for the stress tensor after
setting $\sigma = 0$ and $h^2_{ab}$ from \eqref{eqsubs} in terms of a scalar and
a tensor potential.
The simplified boundary stress tensor is $ T^1_{ab}\big{|}_{\sigma = 0} =0$, and
\begin{eqnarray}
& & T^2_{ab}\big{|}_{\sigma = 0} = \frac{1}{2} \omega_a \omega_b - 2 \omega
\omega_{ab} + \omega_{ab} \Box \omega
+ \omega_{(a}\Box \omega_{b)} + \omega_{abc}\omega^{c} - 4 \omega_{ac}\omega_b{}^{c}
+\frac{1}{2}\Box \omega_a \Box \omega_b
\nonumber \\ & &
- \frac{1}{2} \Box \omega \Box \omega_{ab} - 2
\omega_{abc}\Box \omega^{c}
+ 2 \omega_{c(a}\Box \omega_{b)}{}^{c} - \omega_{ab} \Box \Box \omega
-\frac{1}{2}\omega_{abcd}\omega^{cd}
+ \frac{3}{2} \omega_{acd}\omega_{b}{}^{cd}
\label{simpTomega} \\ & &
+h^0_{ab}\left[ \frac{1}{2} \omega_c \omega^c -\omega^2 - 2 \omega_c \Box \omega^{c}
+ \frac{1}{2} \Box \omega^{c} \Box \omega_{c}
- \frac{1}{2}\omega_{cd} \Box \omega^{cd}
+\frac{1}{2} \Box \omega \Box \Box \omega
- \frac{1}{2}\omega_{cde}\omega^{cde} \right].
\nonumber
\end{eqnarray}
Expression \eqref{simpTomega} can be rewritten as
\begin{eqnarray}
T^2_{ab}\big{|}_{\sigma = 0} =
2 \theta^{(2)}_{ab}
- \frac{1}{2} t^{(4)}_{ab}
+ \frac{1}{2} \theta^{(4)}_{(1)ab}
+ \frac{1}{4} s^{(4)}_{ab}
+ \frac{1}{4} \theta^{(6)}_{(1)ab}
+ \frac{1}{4} \theta^{(6)}_{(2)ab}
- \frac{1}{4} t^{(6)}_{(1)ab}
- \frac{1}{8} t^{(6)}_{(2)ab},
\label{simpTomega2}
\end{eqnarray}
where
\begin{equation}
\begin{array}{ll}
\theta^{(2)}_{ab} = \theta_{ab} \left[ \frac{1}{2} \omega^2 \right],\qquad
\qquad &
t_{ab}^{(4)} = t_{ab} \left[ \theta^{(2)}_{ab} \right], \\
\theta^{(4)}_{(1)ab} = \theta_{ab} \left[ \omega \Box \omega \right], \qquad
\qquad&
\theta^{(4)}_{(2)ab} = \theta_{ab} \left[ \omega_c \omega^c \right], \\
\theta^{(6)}_{(1)ab} = \theta_{ab} \left[ \omega_c \Box \omega^c \right], \qquad
\qquad&
\theta^{(6)}_{(2)ab} = \theta_{ab} \left[ \Box \omega
\Box \omega \right],\\
t^{(6)}_{(1)ab} = t_{ab} \left[ s^{(4)}_{ab} \right],
\qquad \qquad &
t^{(6)}_{(2)ab} =
t_{ab} \left[ \theta^{(4)}_{(2)ab} \right], \\
\end{array}
\end{equation}
and finally
\begin{eqnarray}
s^{(4)}_{ab} = 2 h^{0}_{ab} \omega_{cd}\omega^{cd} - 2 h^0_{ab} \Box \omega\Box
\omega - 4 \omega_{ac}\omega_b{}^{c} + 4 h^0_{ab} \omega_c \omega^c + 4 \omega_{ab} \Box \omega - 4
\omega_a \omega_b.
\end{eqnarray}
The superscripts, e.g., $^{(6)}$ in $t^{(6)}_{(1)ab}$, denote the maximum
number of derivatives appearing in the corresponding expressions. The
subscripts, e.g., $_{(1)}$ in $t^{(6)}_{(1)ab}$, are just labels. We immediately see that with the
possible exception of $s^{(4)}_{ab}$, terms in \eqref{simpTomega2} cannot
contribute to the conserved Lorentz charges. As far as we have explored, we find
that the tensor $s^{(4)}_{ab}$ can possibly contribute to the Lorentz charges.
However, this is not a problem. The contribution due to $s^{(4)}_{ab}$ is simply a c-number
due to our boundary conditions;
it only depends on the background structure $\omega$ and is completely
independent of dynamical fields. Hence, it is a constant over our phase space.
The presence of such a term is consistent with the general analysis of \cite{HIM}.
\subsection{Transformation of Lorentz Charges under Translations}
\label{sec:transform}
Having analysed properties of the stress tensor in special cases in the
previous subsection, now let us study the transformation of Lorentz
charges under translations. The idea behind this computation is as follows. As
mentioned in section \ref{sec:BSTprop1} a general (perhaps somewhat formal) expression for Lorentz charges
can be written as
\begin{equation}
Q[\xi] = \lim_{\rho \to \infty}
\int_{\mathcal{C}_{\rho}} \sqrt{-h_{C_\rho}} T_{ab} \xi_\rho^a n_{\rho}^{b} d^2x.
\end{equation}
The most important quantity in this expression is the boundary
stress tensor $T_{ab}$, which has an expansion in inverse powers of $\rho$.
To investigate the transformation properties of Lorentz charges under translations, we
need to look at how $T_{ab}$ changes under translations. On the unit
hyperboloid, translations are represented by four functions satisfying
\begin{equation}
\mathcal{D}_a \mathcal{D}_b \chi + h^0_{ab} \chi =0.
\end{equation}
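Concretely (an illustrative check, with all conventions spelled out here rather than taken from the paper): in global coordinates on the unit hyperboloid, $h^0 = -d\tau^2 + \cosh^2\tau\,(d\theta^2 + \sin^2\theta\,d\phi^2)$, the four translation functions are the restrictions of the Minkowski Cartesian coordinates to the hyperboloid $-t^2 + x^2 + y^2 + z^2 = 1$. A short sympy sketch verifies that each satisfies $\mathcal{D}_a \mathcal{D}_b \chi + h^0_{ab}\chi = 0$:

```python
import sympy as sp

tau, th, ph = sp.symbols('tau theta phi')
x = [tau, th, ph]
n = 3

# Unit-hyperboloid (dS3) metric: h^0 = -dtau^2 + cosh^2(tau) dOmega_2^2
h = sp.diag(-1, sp.cosh(tau)**2, sp.cosh(tau)**2 * sp.sin(th)**2)
hinv = h.inv()

# Christoffel symbols Gamma^c_{ab} of h^0
Gam = [[[sp.simplify(sum(hinv[c, d] * (sp.diff(h[d, a], x[b])
         + sp.diff(h[d, b], x[a]) - sp.diff(h[a, b], x[d]))
         for d in range(n)) / 2)
         for b in range(n)] for a in range(n)] for c in range(n)]

def covariant_hessian(f):
    """D_a D_b f for a scalar f on the hyperboloid."""
    return sp.Matrix(n, n, lambda a, b: sp.diff(f, x[a], x[b])
                     - sum(Gam[c][a][b] * sp.diff(f, x[c]) for c in range(n)))

# Restrictions of t, x, y, z to the hyperboloid -t^2 + x^2 + y^2 + z^2 = 1
translations = [sp.sinh(tau),
                sp.cosh(tau) * sp.sin(th) * sp.cos(ph),
                sp.cosh(tau) * sp.sin(th) * sp.sin(ph),
                sp.cosh(tau) * sp.cos(th)]

for chi in translations:
    # D_a D_b chi + h^0_{ab} chi should vanish identically
    eq = covariant_hessian(chi) + h * chi
    assert all(sp.simplify(e) == 0 for e in eq)
```

Only the span of these four functions solves the translation equation; a generic supertranslation $\omega$ does not.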
Under translations by an amount $\chi$, the function
$\omega$ changes as $\omega \rightarrow \omega + \chi.$ We wish to know how the expansion of
the boundary stress tensor changes, i.e., we want to know the expansion of
$\Delta_\chi T_{ab}$.
Since we are considering a difference between two stress tensors for a fixed value of $\sigma$,
many terms immediately cancel out. In particular, in
$\Delta_\chi T_{ab}$ the leading term in the expansion starts at order
$\rho^{-1}$. Due to this fact, the calculation of $\Delta_\chi Q[\xi]$ is a
relatively straightforward exercise, in contrast to that of $Q[\xi]$. We find
\begin{eqnarray}
\Delta_\chi T_{ab} &=& -\frac{1}{8 \pi G \rho} \left( - 3 \sigma \chi h^0_{ab} - 3 \chi
\sigma_{ab} + h^0_{ab} \sigma_c \chi^c + \sigma_{abc} \chi^c\right) + \ldots \\
&=& -\frac{1}{8 \pi G \rho} \mathcal{D}_c \left[(\sigma_{ab} + \sigma h^0_{ab})\chi^c\right] + \ldots~.
\end{eqnarray}
Substituting this expression in the definition of Lorentz charges to calculate
the $\Delta_\chi Q[\xi]$, we see that
\begin{eqnarray}
\Delta_\chi Q[\xi] &=& \lim_{\rho \to \infty}
\int_{\mathcal{C}_{\rho}} \sqrt{-h_{C_\rho}} \Delta_\chi T_{ab} \xi_\rho^a n_\rho^{b}
d^2x
\\
& =&
- \frac{1}{8 \pi G}\int_{\mathcal{C}} \sqrt{-h^0_{C}} \mathcal{D}_c \left[(\sigma_{ab} + \sigma h^0_{ab})\chi^c\right]
\xi^a n_{(0)}^{b} d^2x\\
& =&
\frac{1}{8 \pi G}\int_{\mathcal{C}} \sqrt{-h^0_{C}} \mathcal{D}_c \left[E^1_{ab}\chi^c\right]
\xi^a n_{(0)}^{b} d^2x.
\end{eqnarray}
Here $\mathcal{C}$ denotes a cut of the unit hyperboloid, $\xi^a$ an exact Killing
vector of the unit hyperboloid, and $n_{(0)}^{b}$ the unit normal to the cut
$\mathcal{C}$.
This last expression is precisely the expected transformation property of the
Lorentz charges under translations \cite{Ashtekar:1978zz, B, AshRev}. Note that
obtaining this result is highly non-trivial: in the
expansion of $\Delta_\chi T_{ab}$ all terms linear in $\omega$ cancel out. Once
again, these remarkable cancellations provide a highly non-trivial test of our
computations.
\setcounter{equation}{0}
\section{Conclusions and Future Directions}
\label{sec:conclusions}
Let us summarize what we have achieved in this paper. First and foremost,
we have systematically studied the closest one can come to changing the boundary metric in the
asymptotically flat context, while maintaining the group of asymptotic
symmetries to be Poincar\'e. The result of this analysis is that we can choose
the supertranslation frame as we like.
We studied consequences of making choices $\omega \neq 0$. We performed
this analysis in the covariant phase space approach as well as in the holographic renormalization approach.
We showed that the covariant phase space is well
defined irrespective of how we choose to fix supertranslations. Furthermore,
we showed that the on-shell action and the leading order boundary stress tensor
are insensitive to the supertranslation frame. The most significant result of
this paper is the construction of the boundary stress tensor at second order. We
carried out this construction in detail, and studied its conservation
properties.
We also observed that although the next to leading order boundary stress tensor depends
on the supertranslation frame, the dependence is of a very special type. It
is such that the transformation of angular momentum under translations continues
to hold as in special relativity.
Let us now comment on the
motivation Ashtekar and Hansen \cite{Ashtekar:1978zz} had for choosing $\omega
=0$. There it was observed that when $\omega \neq
0$, the second order magnetic part of the Weyl tensor fails to be conserved
with respect to the derivative operator compatible with the unit hyperboloid
metric. This is indeed an obstacle if one insists on using the second
order magnetic part of the Weyl tensor to construct Lorentz charges. However,
this obstacle is only an illusion: above we constructed a symmetric and
divergence free tensor $V_{ab}$ using second order fields. Taking the curl of
$V_{ab}$ one obtains a new symmetric and divergence free tensor $W_{ab}$
\cite{CDV}. The tensor $W_{ab}$ is the natural quantity to use instead of the
second order magnetic part of the Weyl tensor to construct Lorentz charges
following Ashtekar-Hansen when $\omega \neq 0$.
A
natural extension of our work is to calculate the conserved Lorentz charges
\eqref{charge} using our boundary stress tensor
with our supertranslated boundary conditions. Given the general analysis of
\cite{MM, Sorkin, HIM}, we expect such a construction to go through; however,
the precise details as to how it proceeds are not investigated at this stage.
The reason this computation is non-trivial is because the expression
\eqref{charge2} is somewhat formal. All quantities that enter into this
expression admit expansions in inverse powers of $\rho$. This makes the analysis of
Lorentz charges from the holographic point of view significantly more
complicated. We will return to this problem
elsewhere. In this regard, the precise significance of equation
\eqref{conserve1} is also not clear at this stage.
Although boundary stress tensor methods are most well studied for
asymptotically AdS and related settings, the success of these and related
methods in other contexts \cite{MV, MM2, Wiseman, Ross3, Ross2, BdBH, MM1}
motivates further study in the asymptotically flat context. Our work here attempted to bridge this divide
by extending our previous work \cite{MMV, MMMV, CDV, V}. We also
highlighted certain similarities and differences with the asymptotically AdS
setting. Further exploration in this direction should provide additional
insights into the still elusive nature of holography for flat space
\cite{Ma, Barnich:2010eb, deBoer:2003vf, Arcioni:2003xx, Alvarez:2004hga,
Barbon:2008ut, Li:2010dr, Compere:2011dx}.
\subsection*{Acknowledgements}
I thank Glenn Barnich, Geoffrey Compere, Francois Dehouck for
discussions and Donald Marolf and Simon Ross for encouragement. Several of the
calculations presented in this paper are performed using \textit{xAct}
\cite{JMM}, a suite of free packages for doing tensor algebra in \textit{Mathematica}.
These packages are developed by Jos\'{e} M. Mart\'{\i}n-Garc\'{\i}a and collaborators. I am grateful to
Leo Stein and the \textit{xAct} Internet community for getting me started. I am
particularly grateful to Teake Nutma for his help and patient
explanations on \textit{xAct},
and
for sharing his ADM splitting code. I also thank Geoffrey Compere for
his careful reading of an earlier draft of the manuscript and for providing
positive feedback.
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{Open_v2.pdf}
\caption{Example interactions for open challenges on generating referring expressions.
\label{fig:overview}}
\end{figure}
In a human-robot collaborative task, it is critical that verbal communication between a robot and a human is effective. For instance, when a human assembles furniture, and a robot helps to find the correct pieces, the robot should direct its human partner and describe the target objects effectively. Expressions used for describing objects in terms of their distinguishing features are called referring expressions, and referring expression generation is defined as \textit{``choosing the words and phrases to express domain objects''} \cite{foster2019natural}.
Generating appropriate referring expressions has the potential of significantly improving human-robot collaboration. It is one of the most studied areas in natural language generation for social robotics because the problem contains a relatively straightforward input and output \cite{foster2019natural}. Earlier studies on referring expression generation were based primarily on rule-based templates or algorithms \cite{williams2017referring,williamsreferring,kunze2017spatial,zender2009situated}, and recent studies have addressed this problem using learning-based methods \cite{dougan2019learning,magassouba2019multimodal,tanaka2019generating}.
Although referring expression generation has been extensively studied in HRI, there are still open challenges for more efficient generation mechanisms applicable to different tasks. In this paper, after reviewing the state of the art research in this area, we summarize the main open challenges and further research directions (see Figure \ref{fig:overview}), which are as follows:
\begin{itemize}
\item Leveraging contextual information to generate referring expressions that facilitate communication
\item Taking the perspectives of users into account while generating referring expressions, for effective collaboration
\item Autonomously handling misinterpretations so that tasks can be completed accurately in dynamic environments
\end{itemize}
\section{Open Challenges}
\subsection{Using Contextual Information}
When a robot describes an object to its human partner, it needs to consider the social context (e.g., the person's age and knowledge level about the object) and spatial context (e.g., whether it is likely to find the target object in the existing place or the robot needs to direct the user to another place), and adapt itself accordingly.
Understanding social context is essential to generate comprehensible referring expressions. For instance, if a robot describes a rarely known object to a child, it can be more efficient to use color or shape information of the object instead of solely using the object's name.
To facilitate finding the described object, interpreting spatial context is crucial. For example, if a robot describes an object which is more likely to belong to a kitchen,
it can direct the user to the correct place with its referring expressions, e.g., ``the object A next to the object B \textbf{in the kitchen}''.
Otherwise, users can waste time by looking for objects in the wrong places, which can affect the user's perception of the interaction and the robot. To address these challenges, recent studies on context modeling in robots have employed deep learning \cite{dougan2018cinet,dougan2018deep,8460828,bozcan2019cosmo}. In these studies, co-occurrences of the objects \cite{dougan2018deep}, spatial relations between them \cite{8460828}, and their affordances \cite{bozcan2019cosmo} have been utilized.
Adapting referring expressions to context has also been studied \cite{viethen2010speaker,garoufi2014generation,krahmer2002efficient,foster2014task}. In a promising referring expression study, Viethen and Dale \cite{viethen2010speaker} showed that human behavior in selecting the content of referring expressions is mostly speaker-dependent. They stated that this speaker-dependency might be correlated with the ``age, gender, and social or cultural background'' of the users, and that this is open for further research. Some studies have focused on context-dependent selection of distinguishing features of objects \cite{viethen2010speaker,garoufi2014generation,krahmer2002efficient} rather than using a fixed preference ordering over features \cite{dale1995computational}. Further, Foster et al. \cite{foster2014task} studied the context-sensitive generation of referring expressions in human-robot joint construction tasks. Their context-sensitive expressions include ``... this red cube and screw it ...'', where ``this red cube'' and ``it'' are called context-sensitive references. Even though these works represent important improvements in the context-dependency of referring expressions, considering all aspects of social and spatial context remains an open problem.
\subsection{Perspective-Taking}
Perspective-taking has been a research topic in a variety of fields (e.g., psychology, cognitive science, robotics, and computer vision). In robotics, it has been studied mainly for the case where the robot comprehends referring expressions \cite{berlin2006perspective,wiltshire2013towards,pandey2010mightability}. Many of these studies focused on perspective-taking when users employ spatial relations \cite{fong2005peer,kennedy2007spatial,sisbot2010synthesizing,trafton2005enabling,fong2006human}. For generating referring expressions, Magassouba et al. \cite{magassouba2019multimodal} addressed this problem by generating perspective-free expressions, i.e., expressions that do not depend on a particular viewpoint.
To generate unambiguous referring expressions, a robot needs to be explicit about the perspective from which it refers to the objects, especially when using spatial relations between them. This can be achieved, for example, by clearly stating ``the object A that \textbf{I} see to the right of the object B'' or ``the object A that \textbf{you} see to the left of the object B'' so as not to cause any misunderstanding during the interaction.
While the robot is describing objects, taking the perspectives of users and evaluating its impact had remained unexplored until very recently. In our recent work \cite{dougan2020impact}, we made a first attempt to address this problem: we observed a scene from different perspectives and generated an expression from the perspective closest to that of the user. Further, we evaluated the impact of perspective-taking on different aspects of the interaction (e.g., task efficiency, and perception of the task and the robot). Through a user study, we showed that when the objects are spatially described from the users' perspectives, participants take less time to find the referred objects, find the correct objects more often, and consider the task easier.
Although our recent work demonstrates the significance of perspective-taking for effective collaboration, the method we proposed depended on the views of a scene from different perspectives. For a more general solution, i.e., taking different perspectives from a single view,
3D or 6D pose information of the objects can be helpful. However, existing off-the-shelf 3D or 6D object pose predictors are still generally limited to predicting a few types of objects (mostly vehicles, pedestrians, and trees). Therefore, they are not sufficient for use in real-world settings. Further, taking different perspectives in 2D is still an open topic in computer vision. Even though there are novel view synthesis methods \cite{liu2018geometry,sun2018multi,flynn2019deepview} that address this problem, existing solutions are still too immature for use in real-world HRI.
Perspective-taking is also crucial when objects are occluded from the views of users. When a robot observes that a target object is occluded from the perspective of a user, it should inform them that they need to change their viewpoint to see the target object and to complete the task accurately. To address this problem, 3D cameras can be used, and occlusions can be estimated from the depth information of the objects. Further, if the problem is to be solved in 2D, studies on single-image depth estimation \cite{lee2018single,NIPS2016_6510,liu2015deep} can be employed to predict the depth of objects.
Although perspective-taking has been addressed for comprehending referring expressions, and recently for generating them, taking a user's perspective from a single view and informing the user about occlusions while describing objects are still open for further research.
\subsection{Handling of Misinterpretations in an Autonomous Manner}
When a robot describes an object to its partner, it needs to cope with misinterpretations of its expressions and clarify them in an autonomous manner. In other words, when environments are highly ambiguous, users might misinterpret the expression of a robot (e.g., they might head towards the wrong objects) or ask for clarification. In these cases, the robot should be able to clarify its expressions in an autonomous manner to accurately describe the target object in dynamically changing real-world environments.
To describe objects autonomously (i.e., generating referring expressions directly from scenes without requiring any prior knowledge about the environment, existing objects, or their configurations), recent studies in robotics have relied on deep learning \cite{dougan2019learning,magassouba2019multimodal,tanaka2019generating}.
In our recent work \cite{dougan2019learning}, we have proposed a method to generate spatial referring expressions in a natural and unambiguous manner in real-world environments.
In order to handle misinterpretations while a robot comprehends referring expressions,
Shridhar and Hsu \cite{shridhar2018interactive,shridhar2020ingress} have proposed a grounding-by-generation method, using the generation part of their model to ask disambiguating questions during comprehension. Moreover, to handle misinterpretations while a robot refers to objects, Wallbridge et al. \cite{wallbridge2019generating} have proposed a dynamic method with three different dynamic description categories (i.e., negate, elaborate, positive) to provide the user with further clarification.
Even though these systems have made some advances to cope with misinterpretations,
they are either mainly focused on comprehension and a specific task (i.e., pick and place) \cite{shridhar2018interactive,shridhar2020ingress} or limited by the number of clarification categories \cite{wallbridge2019generating}. In order to handle misinterpretations while describing objects, existing autonomous referring expression generation methods need to be extended with detecting misinterpretations and providing more flexible clarifications. For this purpose, works on explainability (i.e., building more transparent and understandable models in their prediction-making process \cite{BarredoArrieta2020}), visual question answering (i.e., generating an answer for a given image and a question \cite{antol2015vqa}), or visual dialog (i.e., generating an answer for a given image and a history of a dialog \cite{Das_2017_CVPR}) can be avenues worth exploring.
\section{Conclusion}
In this paper, we have focused on open challenges while generating referring expressions. We have suggested that utilizing contextual information and adapting expressions concerning context might contribute to more effective language-based interactions between robots and people. Further, we have stated that being explicit about from which perspective the expression is generated and taking the user perspective might be helpful for an unambiguous and efficient interaction. Finally, we have claimed that the handling of misinterpretations in an autonomous manner
is necessary for successfully completing the task in dynamically changing environments.
\bibliographystyle{ACM-Reference-Format}
\section{}
\begin{acknowledgments}
We thank E. Iwaniczko, Q. Wang, and R. S. Crandall and the National Renewable Energy Laboratory for preparation of the $a$-Si:H films; G. Hohensee and D. G. Cahill for sound velocity and thermal conductivity measurements; and A. Fefferman for helpful discussions. The UCB portion of this work was supported by the NSF grants DMR-0907724, 1508828 and 1809498. Work performed at NRL was supported by the Office of Naval Research.
\end{acknowledgments}
\section{Introduction}\label{introduction}
Word embeddings, learned from massive unstructured text data, are widely-adopted building blocks for Natural Language Processing (NLP).
By representing each word as a fixed-length vector, these embeddings can group semantically similar words, while implicitly encoding rich linguistic regularities and patterns \citep{bengio2003neural, mikolov2013distributed, pennington2014glove}.
Leveraging the word-embedding construct, many deep architectures have been proposed to model the \emph{compositionality} in variable-length text sequences.
These methods range from simple operations like addition \citep{mitchell2010composition, iyyer2015deep}, to more sophisticated compositional functions such as Recurrent Neural Networks (RNNs) \citep{tai2015improved, sutskever2014sequence}, Convolutional Neural Networks (CNNs) \citep{kalchbrenner2014convolutional, kim2014convolutional, Zhang2017AdversarialFM} and Recursive Neural Networks \citep{socher2011parsing}.
Models with more expressive compositional functions, \emph{e.g.}, RNNs or CNNs, have demonstrated impressive results; however, they are typically computationally expensive, due to the need to estimate hundreds of thousands, if not millions, of parameters \citep{parikh2016decomposable}.
In contrast, models with simple compositional functions often compute a sentence or document embedding by simply adding, or averaging, over the word embedding of each sequence element obtained via, \emph{e.g.}, \emph{word2vec} \citep{mikolov2013distributed}, or \emph{GloVe} \citep{pennington2014glove}.
Generally, such a Simple Word-Embedding-based Model (SWEM) does not explicitly account for spatial, word-order information within a text sequence.
However, they possess the desirable property of having significantly fewer parameters, enjoying much faster training, relative to RNN- or CNN-based models.
Hence, there is a computation-\emph{vs.}-expressiveness tradeoff regarding how to model the compositionality of a text sequence.
In this paper, we conduct an extensive experimental investigation to understand when, and why, simple pooling strategies, operated over word embeddings alone, already carry sufficient information for natural language understanding.
To account for the distinct nature of various NLP tasks that may require different semantic features, we compare SWEM-based models with existing recurrent and convolutional networks in a point-by-point manner.
Specifically, we consider 17 datasets, including three distinct NLP tasks: \emph{document classification} (Yahoo news, Yelp reviews, \emph{etc}.), \emph{natural language sequence matching} (SNLI, WikiQA, \emph{etc}.) and \emph{(short) sentence classification/tagging} (Stanford sentiment treebank, TREC, \emph{etc}.). Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered.
In order to validate our experimental findings, we conduct additional investigations to understand to what extent \emph{the word-order information} is utilized/required to make predictions on different tasks. We observe that in text representation tasks, many words (\emph{e.g.}, stop words, or words that are not related to sentiment or topic) do not meaningfully contribute to the final predictions (\emph{e.g.}, sentiment label). Based upon this understanding, we propose to leverage a \emph{max-pooling} operation directly over the word embedding matrix of a given sequence, to select its most \emph{salient} features.
This strategy is demonstrated to extract complementary features relative to the standard averaging operation, while resulting in a more interpretable model.
Inspired by a case study on sentiment analysis tasks, we further propose a \emph{hierarchical pooling} strategy to abstract and preserve the spatial information in the final representations.
This strategy is demonstrated to exhibit comparable empirical results to LSTM and CNN on tasks that are sensitive to word-order features, while maintaining the favorable properties of not having compositional parameters, thus fast training.
Our work presents a simple yet strong baseline for text representation learning that is widely ignored in benchmarks, and highlights the general computation-\emph{vs.}-expressiveness tradeoff associated with appropriately selecting compositional functions for distinct NLP problems.
Furthermore, we quantitatively show that word-embedding-based text classification tasks can have a similar level of difficulty regardless of the employed models, using subspace training~\cite{li_id_2018_ICLR} to constrain the trainable parameters. Thus, according to Occam's razor, simple models are preferred.
\section{Related Work}\label{related_work}
A fundamental goal in NLP is to develop expressive, yet computationally efficient compositional functions that can capture the linguistic structure of natural language sequences.
Recently, several studies have suggested that on certain NLP applications, much simpler word-embedding-based architectures exhibit comparable or even superior performance, compared with more-sophisticated models using recurrence or convolutions \citep{parikh2016decomposable, vaswani2017attention}.
Although complex compositional functions are avoided in these models, additional modules, such as attention layers, are employed on top of the word embedding layer.
As a result, the specific role that the word embedding plays in these models is not emphasized (or explicit), which distracts from understanding how important the word embeddings alone are to the observed superior performance.
Moreover, several recent studies have shown empirically that the advantages of distinct compositional functions are highly dependent on the specific task \citep{mitchell2010composition, iyyer2015deep, zhang2015fixed, wieting2015towards, arora2016simple}.
Therefore, it is of interest to study the practical value of the additional expressiveness, on a wide variety of NLP problems.
SWEMs bear close resemblance to the Deep Averaging Network (DAN) \citep{iyyer2015deep} and fastText \citep{joulin2016bag}, which show that average pooling achieves promising results on certain NLP tasks.
However, there exist several key differences that make our work unique. First, we explore a series of pooling operations, rather than only average-pooling.
Specifically, a \emph{hierarchical} pooling operation is introduced to incorporate spatial information, which demonstrates superior results on sentiment analysis, relative to average pooling.
Second, our work not only explores when simple pooling operations are enough, but also investigates the underlying reasons, \emph{i.e.}, what semantic features are required for distinct NLP problems.
Third, DAN and fastText only focused on one or two problems at a time, thus a comprehensive study regarding the effectiveness of various compositional functions on distinct NLP tasks, \emph{e.g.}, categorizing short sentence/long documents, matching natural language sentences, has heretofore been absent.
In response, our work seeks to perform a comprehensive comparison with respect to simple-\emph{vs.}-complex compositional functions, across a wide range of NLP problems, and reveals some general rules for rationally selecting models to tackle different tasks.
\section{Models \& training}\label{model}
\vspace{-1mm}
Consider a text sequence represented as $X$ (either a sentence or a document), composed of a sequence of words: $\{w_1, w_2, ...., w_L\}$, where $L$ is the number of tokens, \emph{i.e.}, the sentence/document length.
Let $\{v_1, v_2, ...., v_L\}$ denote the respective word embeddings for each token, where $v_l\in\mathbb{R}^K$.
The compositional function, $X \to z$, aims to combine word embeddings into a fixed-length sentence/document representation $z$.
These representations are then used to make predictions about sequence $X$.
Below, we describe different types of functions considered in this work.
\subsection{Recurrent Sequence Encoder}\label{rnn}
\vspace{-1mm}
A widely adopted compositional function is defined in a recurrent manner: the model successively takes word vector $v_t$ at position $t$, along with the hidden unit $h_{t-1}$ from the last position $t-1$, to update the current hidden unit via $h_t = f(v_t, h_{t-1})$, where $f(\cdot)$ is the transition function.
To address the issue of learning long-term dependencies, $f(\cdot)$ is often defined as Long Short-Term Memory (LSTM) \citep{hochreiter1997long}, which employs \emph{gates} to control the flow of information abstracted from a sequence.
We omit the details of the LSTM and refer the interested readers to the work by \citet{graves2013hybrid} for further explanation.
Intuitively, the LSTM encodes a text sequence considering its word-order information, but yields additional compositional parameters that must be learned.
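As a concrete illustration, the following is a minimal NumPy sketch of such a recurrent encoder. For brevity it uses a plain $\tanh$ transition in place of the gated LSTM update described above; the weight matrices stand in for the compositional parameters that must be learned.

```python
import numpy as np

def recurrent_encode(embeddings, W_in, W_rec, b):
    """Encode a sequence of word vectors with the recurrence
    h_t = tanh(W_in v_t + W_rec h_{t-1} + b).
    A simplified stand-in for the gated LSTM transition."""
    d = W_rec.shape[0]
    h = np.zeros(d)
    for v in embeddings:          # one sequential step per token: O(L) sequential ops
        h = np.tanh(W_in @ v + W_rec @ h + b)
    return h                      # final hidden state as the sequence representation

# toy example: L=5 tokens, K=4 embedding dim, d=3 hidden units
rng = np.random.default_rng(0)
seq = rng.normal(size=(5, 4))
h = recurrent_encode(seq, rng.normal(size=(3, 4)), rng.normal(size=(3, 3)), np.zeros(3))
print(h.shape)  # (3,)
```

Note that each step depends on the previous hidden state, which is the source of the $\mathcal{O}(L)$ sequential operations discussed later.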
\subsection{Convolutional Sequence Encoder}\label{cnn}
The Convolutional Neural Network (CNN) architecture \citep{kim2014convolutional, collobert2011natural, gan2017learning, zhang2017deconvolutional, shen2017deconvolutional} is another strategy extensively employed as the compositional function to encode text sequences.
The convolution operation considers windows of $n$ consecutive words within the sequence, where a set of filters (to be learned) are applied to these word windows to generate corresponding \emph{feature maps}.
Subsequently, an aggregation operation (such as max-pooling) is used on top of the feature maps to abstract the most salient semantic features, resulting in the final representation.
For most experiments, we consider a single-layer CNN text model.
However, Deep CNN text models have also been developed \citep{conneau2016very}, and are considered in a few of our experiments.
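The window-then-pool computation can be sketched as follows in NumPy; the ReLU nonlinearity is an assumption for illustration, as the activation is not specified above.

```python
import numpy as np

def cnn_encode(embeddings, filters):
    """Single-layer CNN text encoder: apply each filter to every window of
    n consecutive word vectors, then max-pool over positions.
    embeddings: (L, K); filters: (num_filters, n, K)."""
    num_filters, n, K = filters.shape
    L = embeddings.shape[0]
    feature_maps = np.empty((num_filters, L - n + 1))
    for i in range(L - n + 1):
        window = embeddings[i:i + n]                        # (n, K) word window
        feature_maps[:, i] = np.maximum(
            (filters * window).sum(axis=(1, 2)), 0.0)       # ReLU activation (assumed)
    return feature_maps.max(axis=1)   # max-pooling over positions -> (num_filters,)

rng = np.random.default_rng(1)
feats = cnn_encode(rng.normal(size=(6, 4)), rng.normal(size=(8, 3, 4)))
print(feats.shape)  # (8,)
```

Unlike the recurrent encoder, each window is processed independently, so the positions can be computed in parallel.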
\subsection{Simple Word-Embedding Model (SWEM)}\label{swem}
\vspace{-1mm}
To investigate the raw modeling capacity of word embeddings, we consider a class of models with no additional compositional parameters to encode natural language sequences, termed SWEMs.
Among them, the simplest strategy is to compute the element-wise average over word vectors for a given sequence \cite{wieting2015towards, adi2016fine}:
\vspace{-2mm}
\begin{align}\label{eq:ave}
z = \frac{1}{L} \sum_{i=1}^{L} v_i \,.
\end{align}
The model in \eqref{eq:ave} can be seen as an average pooling operation, which takes the mean over each of the $K$ dimensions for all word embeddings, resulting in a representation $z$ with the same dimension as the embedding itself, termed here SWEM-\emph{aver}.
Intuitively, $z$ takes the information of every sequence element into account via the addition operation.
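The operation in \eqref{eq:ave} is a one-liner; a minimal NumPy sketch:

```python
import numpy as np

def swem_aver(embeddings):
    """SWEM-aver: element-wise average over the L word vectors, Eq. (1).
    embeddings: (L, K) -> returns (K,)."""
    return embeddings.mean(axis=0)

emb = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [5.0, 6.0]])      # L=3 tokens, K=2 embedding dim
print(swem_aver(emb))             # [3. 4.]
```

There are no parameters to learn: the representation is determined entirely by the word embeddings themselves.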
\paragraph{Max Pooling}
Motivated by the observation that, in general, only a small number of key words contribute to final predictions, we propose another SWEM variant, that extracts the most salient features from every word-embedding dimension, by taking the maximum value along each dimension of the word vectors.
This strategy is similar to the max-over-time pooling operation in convolutional neural networks \citep{collobert2011natural}:
\begin{align}\label{eq:max}
z =\textbf{\normalfont{Max-pooling}}(v_1, v_2, ..., v_L) \,.
\end{align}
We denote this model variant as SWEM-\emph{max}.
Here the $j$-th component of $z$ is the maximum element in the set $\{v_{1j},\dots,v_{Lj}\}$, where $v_{1j}$ is, for example, the $j$-th component of $v_1$. With this pooling operation, those words that are unimportant or unrelated to the corresponding tasks will be ignored in the encoding process (as the components of the embedding vectors will have small amplitude), unlike SWEM-\emph{aver} where every word contributes equally to the representation.
Considering that SWEM-\emph{aver} and SWEM-\emph{max} are complementary, in the sense of accounting for different types of information from text sequences, we also propose a third SWEM variant, where the two abstracted features are concatenated together to form the sentence embeddings, denoted here as SWEM-\emph{concat}.
For all SWEM variants, there are no additional compositional parameters to be learned.
As a result, the models only exploit intrinsic word embedding information for predictions.
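The max and concat variants can be sketched in the same style as SWEM-\emph{aver}:

```python
import numpy as np

def swem_max(embeddings):
    """SWEM-max: maximum along each embedding dimension, Eq. (2)."""
    return embeddings.max(axis=0)             # (L, K) -> (K,)

def swem_concat(embeddings):
    """SWEM-concat: concatenation of average- and max-pooled features."""
    return np.concatenate([embeddings.mean(axis=0),
                           embeddings.max(axis=0)])   # -> (2K,)

emb = np.array([[1.0, 4.0],
                [3.0, 2.0]])      # L=2 tokens, K=2 embedding dim
print(swem_max(emb))              # [3. 4.]
print(swem_concat(emb))           # [2. 3. 3. 4.]
```

Note that each component of the max-pooled vector comes from a single word, so words whose embedding components have small amplitude simply never win the maximum and are ignored, as described above.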
\paragraph{Hierarchical Pooling}\label{swem_lg}
Both SWEM-\emph{aver} and SWEM-\emph{max} do not take word-order or spatial information into consideration, which could be useful for certain NLP applications.
So motivated, we further propose a \emph{hierarchical} pooling layer.
Let $v_{i:i+n-1}$ refer to the \emph{local} window consisting of $n$ consecutive words, $v_i, v_{i+1}, \ldots, v_{i+n-1}$.
First, an average-pooling is performed on each local window, $v_{i:i+n-1}$.
The extracted features from all windows are further down-sampled with a \emph{global} max-pooling operation on top of the representations for every window.
We call this approach SWEM-\emph{hier} due to its layered pooling.
This strategy preserves the local spatial information of a text sequence in the sense that it keeps track of how the sentence/document is constructed from individual word windows, \emph{i.e.}, $n$-grams.
This formulation is related to bag-of-$n$-grams method \cite{zhang2015character}.
However, SWEM-\emph{hier} learns fixed-length representations for the $n$-grams that appear in the corpus, rather than just capturing their occurrences via count features, which may be advantageous for prediction purposes.
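The two pooling stages above, local averaging followed by global max-pooling, can be sketched as:

```python
import numpy as np

def swem_hier(embeddings, n):
    """SWEM-hier: average-pool each window of n consecutive word vectors,
    then max-pool over the resulting window representations."""
    L, K = embeddings.shape
    windows = np.stack([embeddings[i:i + n].mean(axis=0)
                        for i in range(L - n + 1)])   # (L-n+1, K) local averages
    return windows.max(axis=0)                        # global max-pooling -> (K,)

emb = np.array([[0.0, 6.0],
                [2.0, 0.0],
                [4.0, 0.0]])      # L=3 tokens, K=2 embedding dim
print(swem_hier(emb, n=2))        # window means [1,3] and [3,0] -> [3. 3.]
```

Each window mean plays the role of an $n$-gram representation, and the max-pooling selects the most salient window per dimension.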
\begin{table}[t!]
\def1.2{1.0}
\resizebox{\columnwidth}{!}{%
\begin{tabular} {c||c|c|c}
\toprule[1.2pt]
\textbf{Model} & Parameters & Complexity & Sequential Ops \\
\hline
CNN & $n\cdot K \cdot d$ & $\mathcal{O}(n\cdot L\cdot K \cdot d)$ & $\mathcal{O}(1)$ \\
LSTM & $4 \cdot d\cdot (K+d)$ & $\mathcal{O}(L\cdot d^2 + L \cdot K \cdot d)$ & $\mathcal{O}(L)$ \\
SWEM & 0 & $\mathcal{O}(L\cdot K)$ & $\mathcal{O}(1)$ \\
\bottomrule[1.2pt]
\end{tabular}
}
\caption{Comparisons of CNN, LSTM and SWEM architectures. Columns correspond to the number of \emph{compositional} parameters, computational complexity and sequential operations, respectively.}
\label{tab:comparison}
\vspace{-1mm}
\end{table}
\begin{table*}[t!]
\centering
\def1.2{1.0}
\begin{small}
\begin{tabular}{c||c|c|c|c|c}
\toprule[1.2pt]
\textbf{Model} & \textbf{Yahoo! Ans.} & \textbf{AG News} & \textbf{Yelp P.} & \textbf{Yelp F.} & \textbf{DBpedia} \\
\hline
Bag-of-means$^{\ast}$ & 60.55 & 83.09 & 87.33 & 53.54 & 90.45 \\
Small word CNN$^{\ast}$ & 69.98 & 89.13 & 94.46 & 58.59 & 98.15 \\
Large word CNN$^{\ast}$ & 70.94 & 91.45 & 95.11 & 59.48 & 98.28 \\
LSTM$^{\ast}$ & 70.84 & 86.06 & 94.74 & 58.17 & 98.55 \\
Deep CNN (29 layer)$^{\dagger}$ & 73.43 & 91.27 & \bf{95.72} &\bf{64.26} & \bf{98.71} \\
fastText $^{\ddagger}$ & 72.0 & 91.5 & 93.8 & 60.4 & 98.1 \\
fastText (bigram)$^{\ddagger}$ & 72.3 & 92.5 & 95.7 & 63.9 & 98.6 \\
\hline
SWEM-\emph{aver} & 73.14 & 91.71 & 93.59 & 60.66 & 98.42 \\
SWEM-\emph{max} & 72.66 & 91.79 & 93.25 & 59.63 & 98.24 \\
SWEM-\emph{concat} & \bf{73.53} & \bf{92.66} & 93.76 & 61.11 & \bf{98.57} \\
\hline
SWEM-\emph{hier} & 73.48 & 92.48 & \bf{95.81} & \bf{63.79} & 98.54 \\
\bottomrule[1.2pt]
\end{tabular}
\end{small}
\vspace{-2mm}
\caption{Test accuracy on (long) document classification tasks, in percentage. Results marked with $\ast$ are reported in \citet{zhang2015character}, with $\dagger$ are reported in \citet{conneau2016very}, and with $\ddagger$ are reported in \citet{joulin2016bag}.}
\label{tab:document}
\vspace{0mm}
\end{table*}
\subsection{Parameters \& Computation Comparison}\label{compare}
\vspace{-1mm}
We compare CNN, LSTM and SWEM wrt their parameters and computational speed.
$K$ denotes the dimension of word embeddings, as above.
For the CNN, we use $n$ to denote the filter width (assumed constant for all filters, for simplicity of analysis, but in practice variable $n$ is commonly used).
We define $d$ as the dimension of the final sequence representation.
Specifically, $d$ represents the dimension of hidden units or the number of filters in LSTM or CNN, respectively.
We first examine the number of \emph{compositional parameters} for each model.
As shown in Table~\ref{tab:comparison}, both the CNN and LSTM have a large number of parameters, to model the semantic compositionality of text sequences, whereas SWEM has no such parameters.
Similar to \citet{vaswani2017attention}, we then consider the computational complexity and the minimum number of sequential operations required for each model.
SWEM tends to be more efficient than CNN and LSTM in terms of computation complexity.
For example, considering the case where $K=d$, SWEM is faster than CNN or LSTM by a factor of $nd$ or $d$, respectively.
Further, the computations in SWEM are highly parallelizable, unlike LSTM that requires $\mathcal{O}(L)$ sequential steps.
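The compositional parameter counts from Table~\ref{tab:comparison} can be evaluated directly (biases are omitted, as in the table):

```python
def compositional_params(K, d, n):
    """Compositional parameter counts per Table 2.
    K: word-embedding dimension, d: hidden units / number of filters,
    n: CNN filter width (assumed constant across filters)."""
    return {
        "CNN":  n * K * d,         # one n x K filter per output feature
        "LSTM": 4 * d * (K + d),   # four gates, each acting on [v_t; h_{t-1}]
        "SWEM": 0,                 # pooling introduces no learnable parameters
    }

# illustrative setting K = d = 300, n = 5 (hypothetical values)
print(compositional_params(K=300, d=300, n=5))
# {'CNN': 450000, 'LSTM': 720000, 'SWEM': 0}
```

Under the $K=d$ setting discussed above, the CNN and LSTM counts scale as $nKd$ and $4d(K+d) = 8d^2$, respectively, while SWEM contributes nothing beyond the embeddings.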
\section{Experiments}\label{experiments}
\vspace{-1mm}
We evaluate different compositional functions on a wide variety of supervised tasks, including document categorization, text sequence matching (given a sentence pair, $X_1$, $X_2$, predict their relationship, $y$) as well as (short) sentence classification.
We experiment on 17 datasets concerning natural language understanding, with corresponding data statistics summarized in the Supplementary Material.
We use GloVe word embeddings with $K=300$ \citep{pennington2014glove} as initialization for all our models.
Out-Of-Vocabulary (OOV) words are initialized from a uniform distribution with range $[-0.01, 0.01]$.
The GloVe embeddings are employed in two ways to learn refined word embeddings: ($i$) directly updating each word embedding during training; and ($ii$) training a 300-dimensional Multilayer Perceptron (MLP) layer with ReLU activation, with GloVe embeddings as input to the MLP and with output defining the refined word embeddings.
The latter approach corresponds to learning an MLP model that adapts GloVe embeddings to the dataset and task of interest.
The advantages of these two methods differ from dataset to dataset.
We choose the better strategy based on their corresponding performances on the validation set.
The final classifier is implemented as an MLP layer with dimension selected from the set $[100, 300, 500, 1000]$, followed by a sigmoid or softmax function, depending on the specific task.
Adam \citep{kingma2014adam} is used to optimize all models, with learning rate selected from the set $[1 \times 10^{-3}, 3 \times 10^{-4}, 2 \times 10^{-4}, 1 \times 10^{-5}]$ (with cross-validation used to select the appropriate parameter for a given dataset and task).
Dropout regularization \citep{srivastava2014dropout} is employed on the word embedding layer and final MLP layer, with dropout rate selected from the set $[0.2, 0.5, 0.7]$.
The batch size is selected from $[2, 8, 32, 128, 512]$.
\begin{table*}[t!]
\centering
\def1.2{1.1}
\begin{small}
\begin{tabular}{c|c|c|c|c|c|c}
\toprule[1.2pt]
\textbf{Politics} & \textbf{Science} & \textbf{Computer} & \textbf{Sports} & \textbf{Chemistry} & \textbf{Finance} & \textbf{Geoscience} \\
\hline
philipdru & coulomb & system32 & billups & sio2 (SiO$_2$) & proprietorship & fossil \\
justices & differentiable & cobol & midfield & nonmetal & ameritrade & zoos \\
impeached & paranormal & agp & sportblogs & pka & retailing & farming \\
impeachment & converge & dhcp & mickelson & chemistry & mlm & volcanic \\
neocons & antimatter & win98 & juventus & quarks & budgeting & ecosystem \\
\bottomrule[1.2pt]
\end{tabular}
\caption{Top five words with the largest values in a given word-embedding dimension (each column corresponds to a dimension). The first row shows the (manually assigned) topic for words in each column.}
\label{tab:similar}
\end{small}
\vspace{-3mm}
\end{table*}
\subsection{Document Categorization}
We begin with the task of categorizing documents (with approximately 100 words per document, on average).
We follow the data split in \citet{zhang2015character} for comparability.
These datasets can be generally categorized into three types: \emph{topic categorization} (represented by Yahoo! Answer and AG news), \emph{sentiment analysis} (represented by Yelp Polarity and Yelp Full) and \emph{ontology classification} (represented by DBpedia).
Results are shown in Table~\ref{tab:document}.
Surprisingly, on topic prediction tasks, our SWEM model exhibits stronger performance than both LSTM and CNN compositional architectures, by leveraging both the average- and max-pooling features from word embeddings.
Specifically, our SWEM-\emph{concat} model even outperforms a 29-layer deep CNN model \citep{conneau2016very}, when predicting topics.
On the ontology classification problem (DBpedia dataset), we observe the same trend, that SWEM exhibits comparable or even superior results, relative to CNN or LSTM models.
Since there are no compositional parameters in SWEM, our models have an order of magnitude fewer parameters (excluding embeddings) than LSTM or CNN, and are considerably more computationally efficient.
As illustrated in Table~\ref{tab:yahoo}, SWEM-\emph{concat} achieves better results on Yahoo! Answer than CNN/LSTM, with only 61K parameters (one-tenth the number of LSTM parameters, or one-third the number of CNN parameters), while taking a fraction of the training time relative to the CNN or LSTM.
\begin{table}[h!]
\centering
\def1.2{1.0}
\begin{small}
\begin{tabular} {c||c|c}
\toprule[1.2pt]
\textbf{Model} & Parameters & Speed \\%& Acc. \\
\hline
CNN & 541K & 171s \\
LSTM & 1.8M & 598s \\
SWEM & \textbf{61K} & \textbf{63s} \\%& \textbf{73.53} \\
\bottomrule[1.2pt]
\end{tabular}
\end{small}
\caption{Speed \& Parameters on Yahoo! Answer dataset.}
\label{tab:yahoo}
\vspace{-5mm}
\end{table}
Interestingly, for the sentiment analysis tasks, both CNN and LSTM compositional functions perform better than SWEM, suggesting that word-order information may be required for analyzing sentiment orientations.
This finding is consistent with \citet{pang2002thumbs}, where they hypothesize that the positional information of a word in text sequences may be beneficial to predict sentiment.
This is intuitively reasonable since, for instance, the phrase ``not really good'' and ``really not good'' convey different levels of negative sentiment, while being different only by their word orderings.
Contrary to SWEM, CNN and LSTM models can both capture this type of information via convolutional filters or recurrent transition functions.
However, as suggested above, such word-order patterns may be much less useful for predicting the topic of a document.
This may be attributed to the fact that word embeddings alone already provide sufficient topic information of a document, at least when the text sequences considered are relatively long.
\subsubsection{Interpreting model predictions}
Although the proposed SWEM-\emph{max} variant generally performs slightly worse than SWEM-\emph{aver}, it extracts features complementary to those of SWEM-\emph{aver}, and hence in most cases SWEM-\emph{concat} exhibits the best performance among all SWEM variants.
More importantly, we found that the word embeddings learned from SWEM-\emph{max} tend to be sparse.
We trained our SWEM-\emph{max} model on the Yahoo dataset (with randomly initialized word embeddings). With the learned embeddings, we plot the values of each word-embedding dimension for the entire vocabulary.
As shown in Figure~\ref{fig:sparsity}, most of the values are highly concentrated around zero, indicating that the word embeddings learned are very sparse. On the contrary, the GloVe word embeddings, for the same vocabulary, are considerably denser than the embeddings learned from SWEM-\emph{max}.
This suggests that the model may only depend on a few key words, among the entire vocabulary, for predictions (since most words do not contribute to the max-pooling operation in SWEM-\emph{max}). Through the embedding, the model learns the important words for a given task (those words with non-zero embedding components). \par
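The sparsity contrast between the learned SWEM-\emph{max} embeddings and GloVe can be quantified as the fraction of near-zero embedding components. The sketch below uses synthetic matrices as stand-ins for the learned and GloVe embeddings; the threshold and matrix sizes are assumptions for illustration only:

```python
import numpy as np

def sparsity_fraction(emb_matrix, tol=1e-3):
    """Fraction of embedding components that are (numerically) zero."""
    return float(np.mean(np.abs(emb_matrix) < tol))

rng = np.random.default_rng(0)
# stand-in for SWEM-max embeddings: ~95% of entries exactly zero
learned = rng.normal(size=(1000, 50)) * (rng.random((1000, 50)) < 0.05)
# stand-in for dense pre-trained GloVe vectors
dense = rng.normal(size=(1000, 50))
print(sparsity_fraction(learned) > sparsity_fraction(dense))  # True
```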
\begin{figure}[h!]
\centering
\def1.2{1.0}
\vspace{-2mm}
\includegraphics[scale=0.3]{figure/hist.pdf}
\vspace{-2mm}
\captionof{figure}{Histograms for learned word embeddings (randomly initialized) of SWEM-\emph{max} and GloVe embeddings for the same vocabulary, trained on the Yahoo! Answer dataset.}
\label{fig:sparsity}
\vspace{-2mm}
\end{figure}
\begin{table*}[t!]
\vspace{0mm}
\centering
\def1.2{1.0}
\begin{small}
\begin{tabular}{c||c|cc|cc|c|cc}
\toprule[1.2pt]
& &\multicolumn{2}{c|}{\textbf{MultiNLI}} & & & \\
\textbf{Model} & \textbf{SNLI} & \textbf{Matched} & \textbf{Mismatched} & \multicolumn{2}{c|}{\textbf{WikiQA}} & \textbf{Quora} & \multicolumn{2}{c}{\textbf{MSRP}} \\
\hline
& \emph{Acc.} &\emph{Acc.} & \emph{Acc.} & \emph{MAP} & \emph{MRR} & \emph{Acc.} & \emph{Acc.} & \emph{F1} \\
\hline
CNN & 82.1 & 65.0 & 65.3 & 0.6752 & 0.6890 & 79.60 & 69.9 & 80.9 \\
LSTM & 80.6 & 66.9$^{\ast}$ & 66.9$^{\ast}$ & \bf{0.6820} & \bf{0.6988} & 82.58 & 70.6 & 80.5 \\
\hline
SWEM-\emph{aver} & 82.3 & 66.5 & 66.2 & \bf{0.6808} & \bf{0.6922} & 82.68 & 71.0 & 81.1 \\
SWEM-\emph{max} & \bf{83.8} & \textbf{68.2} & \textbf{67.7} & 0.6613 & 0.6717 & 82.20 & 70.6 & 80.8 \\
SWEM-\emph{concat} & 83.3 & 67.9 & 67.6 & 0.6788 & 0.6908 & \bf{83.03} & \bf{71.5} & \bf{81.3} \\
\bottomrule[1.2pt]
\end{tabular}
\caption{Performance of different models on matching natural language sentences. Results with $^{\ast}$ are for Bidirectional LSTM, reported in \citet{williams2017broad}. Our reported results on MultiNLI are trained only on the MultiNLI training set (without training data from SNLI). For the MSRP dataset, we follow the setup in \citet{hu2014convolutional} and do not use any additional features.}
\label{tab:matching}
\end{small}
\vspace{-3mm}
\end{table*}
In this regard, the nature of the max-pooling process gives rise to a more interpretable model.
For a document, only the word with the largest value in each embedding dimension is employed for the final representation.
Thus, we suspect that semantically similar words may have large values in some shared dimensions.
So motivated, after training the SWEM-\emph{max} model on the Yahoo dataset, we selected five words with the largest values, among the entire vocabulary, for each word embedding dimension (these words are selected preferentially in the corresponding dimension, by the max operation).
As shown in Table~\ref{tab:similar}, the words chosen with respect to each embedding dimension are indeed highly relevant and correspond to a common topic (the topics are inferred from the words).
For example, the words in the first column of Table~\ref{tab:similar} are all political terms, which could be assigned to the \emph{Politics \& Government} topic.
Note that our model can even learn locally interpretable structure that is not explicitly indicated by the label information.
For instance, all words in the fifth column are \emph{Chemistry}-related.
However, there is no chemistry label in the dataset; these words would instead fall under the \emph{Science} topic.
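The per-dimension word selection described above amounts to an argsort over the vocabulary for each embedding column. A minimal sketch, with a toy vocabulary and hand-crafted embeddings standing in for the learned Yahoo embeddings:

```python
import numpy as np

def top_words_per_dimension(emb, vocab, k=5):
    """For each embedding dimension, list the k words with the largest value,
    i.e., the words that 'win' the max-pooling in that dimension."""
    order = np.argsort(-emb, axis=0)[:k]  # (k, d) indices of largest values
    return [[vocab[i] for i in order[:, d]] for d in range(emb.shape[1])]

vocab = ["senate", "election", "atom", "molecule", "guitar"]
emb = np.array([[0.9, 0.0],
                [0.8, 0.1],
                [0.0, 0.9],
                [0.1, 0.8],
                [0.2, 0.2]])
print(top_words_per_dimension(emb, vocab, k=2))
# [['senate', 'election'], ['atom', 'molecule']]
```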
\subsection{Text Sequence Matching}
To gain a deeper understanding regarding the modeling capacity of word embeddings, we further investigate the problem of sentence matching, including natural language inference, answer sentence selection and paraphrase identification.
The corresponding performance metrics are shown in Table~\ref{tab:matching}.
Surprisingly, on most of the datasets considered (except WikiQA), SWEM demonstrates the best results, compared with models employing CNN or LSTM encoders.
Notably, on the SNLI dataset, we observe that SWEM-\emph{max} performs best among all SWEM variants, consistent with the findings in \citet{nie2017shortcut, conneau2017supervised} that \emph{max-pooling} over BiLSTM hidden units outperforms average pooling on SNLI.
As a result, with only 120K parameters, our SWEM-\emph{max} achieves a test accuracy of 83.8\%, which is very competitive among state-of-the-art sentence encoding-based models (in terms of both performance and number of parameters)\footnote{See leaderboard at \url{https://nlp.stanford.edu/projects/snli/} for details.}.
The strong results of the SWEM approach on these tasks may stem from the fact that when matching natural language sentences, it is sufficient in most cases to simply model the word-level alignments between two sequences \citep{parikh2016decomposable}.
From this perspective, word-order information becomes much less useful for predicting the relationship between sentences.
Moreover, given its simpler architecture, SWEM may be much easier to optimize than LSTM- or CNN-based models, and thus give rise to better empirical results.
\subsubsection{Importance of word-order information}\label{important}
One possible disadvantage of SWEM is that it ignores the word-order information within a text sequence, which could be potentially captured by CNN- or LSTM-based models.
However, we empirically found that, except for sentiment analysis, SWEM exhibits performance similar or even superior to that of the CNN or LSTM on a variety of tasks.
In this regard, one natural question would be: how important are word-order features for these tasks?
To this end, we randomly shuffle the words for every sentence in the training set, while keeping the original word order for samples in the test set.
The motivation here is to remove the word-order features from the training set and examine how sensitive the performance on different tasks is to word-order information.
We use the LSTM as the model for this purpose, since it can capture word-order information from the original training set.
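The shuffling ablation can be sketched as follows; this is a minimal illustration, as the actual preprocessing pipeline is not detailed here:

```python
import random

def shuffle_words(sentences, seed=0):
    """Return a copy of the corpus with the words of each sentence permuted.

    Used to ablate word-order information from the *training* set only;
    the test set keeps its original word order.
    """
    rng = random.Random(seed)
    shuffled = []
    for sent in sentences:
        words = sent.split()
        rng.shuffle(words)
        shuffled.append(" ".join(words))
    return shuffled

train = ["really not good", "the food was great"]
print(shuffle_words(train))
```

Note that the bag of words (and hence everything SWEM sees) is unchanged; only the order is destroyed.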
\begin{table} [h!]
\centering
\vspace{-2mm}
\def1.2{1.2}
\begin{small}
\begin{tabular}{c||c|c|c}
\toprule[1.2pt]
\textbf{Datasets} & \textbf{Yahoo} & \textbf{Yelp P.} & \textbf{SNLI} \\
\hline
\textbf{Original} & 72.78 & 95.11 & 78.02 \\
\textbf{Shuffled} & 72.89 & 93.49 & 77.68 \\
\bottomrule[1.2pt]
\end{tabular}
\end{small}
\caption{Test accuracy for LSTM model trained on original/shuffled training set.}
\label{tab:order}
\vspace{-2mm}
\end{table}
The results on three distinct tasks are shown in Table~\ref{tab:order}.
Somewhat surprisingly, for the Yahoo and SNLI datasets, the LSTM model trained on the shuffled training set shows accuracy comparable to that trained on the original dataset, indicating that word-order information does not contribute significantly to these two problems, \emph{i.e.}, topic categorization and textual entailment.
However, on the Yelp polarity dataset, the results drop noticeably, further suggesting that word-order does matter for sentiment analysis (as indicated above from a different perspective).
Notably, the performance of LSTM on the Yelp dataset with a shuffled training set is very close to our results with SWEM, indicating that the main difference between LSTM and SWEM may be due to the ability of the former to capture word-order features. Both observations are consistent with our experimental results in the previous section.
\begin{table}[t!]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l p{2.6in}}
\toprule[1.2pt]
\textbf{Negative}: & Friendly staff and nice selection of vegetarian options. Food \textbf{\textcolor{blue}{is just okay}}, \textbf{\textcolor{blue}{not great}}. \textbf{\textcolor{blue}{Makes me wonder why everyone likes}} food fight so much. \\
\hline
\hline
\textbf{Positive}: & The store is small, but it carries specialties that are difficult to find in Pittsburgh. I \textbf{\textcolor{blue}{was particularly excited}} to find middle eastern chili sauce and chocolate covered turkish delights.\\
\bottomrule[1.2pt]
\end{tabular}
}
\caption{Test samples from Yelp Polarity dataset for which LSTM gives wrong predictions with shuffled training data, but predicts correctly with the original training set.}
\label{tab:order_matter}
\vspace{-4mm}
\end{table}
\paragraph{Case Study}
To understand what type of sentences are sensitive to word-order information, we further show those samples that are wrongly predicted because of the shuffling of training data in Table~\ref{tab:order_matter}.
Taking the first sentence as an example, several words in the review are generally positive, \emph{i.e.} \emph{friendly}, \emph{nice}, \emph{okay}, \emph{great} and \emph{likes}.
However, the most vital features for predicting the sentiment of this sentence may be phrases such as \emph{`is just okay'}, \emph{`not great'} or \emph{`makes me wonder why everyone likes'}, which cannot be captured without considering word-order features.
It is worth noting that the cues for prediction in this case are actually $n$-gram phrases from the input document.
\subsection{SWEM-\emph{hier} for sentiment analysis} \label{LG}
As demonstrated in Section~\ref{important}, word-order information plays a vital role for sentiment analysis tasks.
However, according to the case study above, the most important features for sentiment prediction may be some key $n$-gram phrase/words from the input document.
We hypothesize that incorporating information about the local word-order, \emph{i.e.}, $n$-gram features, is likely to
largely mitigate the limitations of the above three SWEM variants.
Inspired by this observation, we propose another simple pooling operation, termed hierarchical pooling (SWEM-\emph{hier}), as detailed in Section~\ref{swem_lg}.
We evaluate this method on the two document-level sentiment analysis tasks and the results are shown in the last row of Table~\ref{tab:document}.
SWEM-\emph{hier} greatly outperforms the other three SWEM variants, and the corresponding accuracies are comparable to the results of CNN or LSTM (Table~\ref{tab:document}).
This indicates that the proposed hierarchical pooling operation manages to abstract spatial (word-order) information from the input sequence, which is beneficial for performance in sentiment analysis tasks.
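Hierarchical pooling (averaging over each local window of consecutive words, then max-pooling over the window averages) can be sketched as follows; the window size and inputs are illustrative:

```python
import numpy as np

def swem_hier(embeddings, window=5):
    """Hierarchical pooling: average each local window of `window`
    consecutive word embeddings, then max-pool over the window averages.

    The local average retains n-gram-like (word-order) information that
    plain average/max pooling over the whole sequence discards.
    """
    L, d = embeddings.shape
    if L <= window:
        return embeddings.mean(axis=0)
    # all contiguous windows, shape (L - window + 1, window, d)
    windows = np.stack([embeddings[i:i + window]
                        for i in range(L - window + 1)])
    return windows.mean(axis=1).max(axis=0)

doc = np.arange(24, dtype=float).reshape(8, 3)  # 8 words, 3-dim embeddings
print(swem_hier(doc, window=3).shape)  # (3,)
```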
\begin{table*}[!ht]
\vspace{-2mm}
\centering
\def1.2{1.0}
\begin{small}
\begin{tabular}{c||c|c|c|c|c}
\toprule[1.2pt]
\textbf{Model} & \textbf{MR} & \textbf{SST-1} & \textbf{SST-2} & \textbf{Subj} & \textbf{TREC} \\
\hline
RAE \cite{socher2011semi} & 77.7 & 43.2 & 82.4 & -- & -- \\
MV-RNN \cite{socher2012semantic} & 79.0 & 44.4& 82.9 & -- & -- \\
LSTM \cite{tai2015improved} & -- & 46.4 & 84.9 & -- & -- \\
RNN \cite{zhao2015self} & 77.2 & -- & -- & \bf{93.7} & 90.2 \\
Constituency Tree-LSTM \cite{tai2015improved} & - & \bf{51.0} & 88.0 & - & - \\
Dynamic CNN \cite{kalchbrenner2014convolutional} & -- & 48.5 & 86.8 & -- & 93.0 \\
CNN \cite{kim2014convolutional} & \bf{81.5} & 48.0 & \bf{88.1} & 93.4 & \bf{93.6} \\
DAN-ROOT \cite{iyyer2015deep} & - & 46.9 & 85.7 & - & - \\
\hline
SWEM-\emph{aver} & 77.6 &45.2 & 83.9 & 92.5 & \bf{92.2} \\
SWEM-\emph{max} & 76.9 &44.1 & 83.6 & 91.2 & 89.0 \\
SWEM-\emph{concat} & \bf{78.2} &\bf{46.1} & \bf{84.3} & \bf{93.0} & 91.8 \\
\bottomrule[1.2pt]
\end{tabular}
\end{small}
\vspace{-2mm}
\caption{Test accuracies with different compositional functions on (short) sentence classifications.}
\label{tab:sentence}
\vspace{-2mm}
\end{table*}
\subsection{Short Sentence Processing} \label{short}
We now consider sentence-classification tasks, where each sentence contains approximately 20 words on average.
We experiment on three sentiment classification datasets, \emph{i.e.}, MR, SST-1, SST-2, as well as subjectivity classification (Subj) and question classification (TREC).
The corresponding results are shown in Table~\ref{tab:sentence}.
Compared with CNN/LSTM compositional functions, SWEM yields inferior accuracies on sentiment analysis datasets, consistent with our observation in the case of document categorization.
However, SWEM exhibits comparable performance on the other two tasks, again with far fewer parameters and faster training.
Further, we investigate two sequence tagging tasks: the standard CoNLL2000 chunking and CoNLL2003 NER datasets.
Results are shown in the Supplementary Material, where LSTM and CNN again perform better than SWEMs.
Generally, SWEM is less effective at extracting representations from \emph{short} sentences than from \emph{long} documents.
This may be due to the fact that for a shorter text sequence, word-order features tend to be more important since the semantic information provided by word embeddings alone is relatively limited.
Moreover, we note that the results on these relatively small datasets are highly sensitive to model regularization techniques, due to overfitting.
In this regard, one interesting future direction may be to develop specific regularization strategies for the SWEM framework, to make it work better on small sentence-classification datasets.
\section{Discussion}
\subsection{Comparison via subspace training}
We use {\it subspace training}~\cite{li_id_2018_ICLR} to measure model complexity in text classification problems. It constrains the optimization of the trainable parameters to a subspace of low dimension $d$; the intrinsic dimension $d_{\rm int}$ is defined as the minimum $d$ that yields a good solution. Two models are studied: the SWEM-\emph{max} variant, and a CNN model consisting of a convolutional layer followed by an FC layer.
We consider two settings:
(1) The word embeddings are randomly initialized, and optimized jointly with the model parameters. We show the performance of direct and subspace training on the AG News dataset in Figure~\ref{fig:subspace}(a,b). The two models trained via the direct method share almost identical performance on training and testing. Subspace training yields accuracy similar to direct training even for very small $d$, including the case where the model parameters are not trained at all ($d=0$). This is because the word embeddings have full degrees of freedom to adjust toward good solutions, regardless of the employed model. SWEM appears to have an easier loss landscape than CNN, within which the word embeddings can find good solutions. By Occam's razor, simple models are preferred, all else being equal.
(2) The pre-trained GloVe embeddings are frozen, and only the model parameters are optimized. The results on the test sets of AG News and Yelp P. are shown in Figure~\ref{fig:subspace}(c,d), respectively. SWEM shows significantly higher accuracy than CNN over a wide range of low subspace dimensions, indicating that SWEM is more parameter-efficient in reaching a decent solution. In Figure~\ref{fig:subspace}(c), if we set the performance threshold at 80\% test accuracy, SWEM exhibits a lower $d_{\rm int}$ than CNN on the AG News dataset. However, in Figure~\ref{fig:subspace}(d), CNN can leverage more trainable parameters to achieve higher accuracy when $d$ is large.
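The idea behind subspace training can be sketched on a toy quadratic objective: all $D$ model parameters are expressed through a fixed random projection of $d$ trainable coordinates, so only $d$ numbers are ever optimized. This is an illustration of the construction of \cite{li_id_2018_ICLR}, with toy dimensions and loss, not the experimental setup used above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Subspace training: theta = theta_0 + P @ z, with P a frozen random
# D x d projection and z the only trainable vector.
D, d = 10_000, 20
theta0 = rng.normal(size=D)                  # random initialization
P = rng.normal(size=(D, d)) / np.sqrt(D)     # frozen random basis
z = np.zeros(d)

def theta(z):
    return theta0 + P @ z

# gradient descent on a toy quadratic loss, restricted to the subspace
target = rng.normal(size=D)
for _ in range(200):
    grad_theta = theta(z) - target           # d(loss)/d(theta)
    z = z - 0.5 * (P.T @ grad_theta)         # chain rule projects the gradient
print(theta(z).shape)                        # full D-dim parameters, d trained
```

Sweeping $d$ and recording the smallest value that reaches a target accuracy gives the intrinsic dimension $d_{\rm int}$.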
\begin{figure}[t!] \centering
\vspace{-0mm}
\begin{tabular}{c c}
\hspace{-6mm}
\includegraphics[width=3.80cm]{figure/training_agnew.pdf}
&
\hspace{-6mm}
\includegraphics[width=3.80cm]{figure/testing_agnew.pdf} \\
\hspace{-4mm}
(a) Training on AG News \hspace{-0mm} & \hspace{-5mm}
(b) Testing on AG News\\
\hspace{-6mm}
\includegraphics[width=3.80cm]{figure/testing_agnews_we.pdf}
&
\hspace{-6mm}
\includegraphics[width=3.80cm]{figure/testing_yelp_we.pdf} \\
\hspace{-4mm}
(c) Testing on AG News \hspace{-0mm} & \hspace{-5mm}
(d) Testing on Yelp P.
\end{tabular}
\vspace{-3mm}
\caption{Performance of subspace training. Word embeddings are optimized in (a)(b), and frozen in (c)(d).}
\vspace{-6mm}
\label{fig:subspace}
\end{figure}
\subsection{Linear classifiers}
\vspace{-1mm}
To further investigate the quality of the representations learned by SWEMs, we employ a linear classifier on top of the representations for prediction, instead of the non-linear MLP layer used in the previous section. Utilizing a linear classifier leads to only a very small performance drop on both Yahoo! Ans. (from $73.53\%$ to $73.18\%$) and Yelp P. (from $93.76\%$ to $93.66\%$). This observation highlights that SWEMs are able to extract robust and informative sentence representations despite their simplicity.
\subsection{Extension to other languages}
\vspace{-1mm}
We have also tried our SWEM-\emph{concat} and SWEM-\emph{hier} models on the Sogou news corpus (with the same experimental setup as \cite{zhang2015character}), a \emph{Chinese} dataset represented in Pinyin (a phonetic romanization of Chinese). SWEM-\emph{concat} yields an accuracy of $91.3\%$, while SWEM-\emph{hier} (with a local window size of 5) obtains an accuracy of $96.2\%$ on the test set. Notably, the performance of SWEM-\emph{hier} is comparable to the best accuracies of CNN ($95.6\%$) and LSTM ($95.2\%$), as reported in \cite{zhang2015character}. This indicates that hierarchical pooling is more suitable than average/max pooling for Chinese text classification, by taking spatial information into account. It also suggests that Chinese is more sensitive to local word-order features than English.
\vspace{-1mm}
\section{Conclusions}
\vspace{-2mm}
We have performed a comparative study between SWEM (with parameter-free pooling operations) and CNN- or LSTM-based models, to represent text sequences on 17 NLP datasets.
We further validated our experimental findings through additional exploration, and revealed some general rules for rationally selecting compositional functions for distinct problems.
Our findings regarding when (and why) simple pooling operations are enough for text sequence representations are summarized as follows: \nocite{shen2017adaptive} \par
\smallskip
\noindent$\bullet$ Simple pooling operations are surprisingly effective at representing longer documents (with hundreds of words), while recurrent/convolutional compositional functions are most effective when constructing representations for short sentences.\par
\smallskip
\noindent$\bullet$ Sentiment analysis tasks are more sensitive to word-order features than topic categorization tasks. However, a simple \emph{hierarchical pooling layer} proposed here achieves comparable results to LSTM/CNN on sentiment analysis tasks. \par
\smallskip
\noindent$\bullet$ To match natural language sentences, \emph{e.g.}, textual entailment, answer sentence selection, \emph{etc.}, simple pooling operations already exhibit similar or even superior results, compared to CNN and LSTM.\par
\smallskip
\noindent$\bullet$ In SWEM with max-pooling operation, each \emph{individual dimension} of the word embeddings contains interpretable semantic patterns, and groups together words with a common theme or \emph{topic}.\par
\smallskip
2003.08227
\section*{Abstract}
{\bf
Recent transport experiments in spatially modulated quasi-1D structures created on top of LaAlO$_3$/SrTiO$_3$ interfaces have revealed some interesting features, including phenomena conspicuously absent without the modulation. In this work, we focus on two of these remarkable features and provide theoretical analysis allowing their interpretation. The first one is the appearance of two-terminal conductance plateaus at rational fractions of $e^2/h$. We explain how this phenomenon, previously believed to be possible only in systems with strong repulsive interactions, can be stabilized in a system with attraction in the presence of the modulation. Using our theoretical framework we find the plateau amplitude and shape, and characterize the correlated phase which develops in the system due to the partial gap, namely a Luttinger liquid of electronic trions.
The second observation is a sharp conductance dip below a conductance of $1\times e^2/h$, which changes its value over a wide range when tuning the system. We theorize that it is due to resonant backscattering caused by a periodic spin-orbit field. The behavior of this dip can be reliably accounted for by considering the finite length of the electronic waveguides, as well as the interactions therein. The phenomena discussed in this work exemplify the intricate interplay of strong interactions and spatial modulations, and reveal the potential for novel strongly correlated phases of matter in systems which prominently feature both.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{\label{sec:intro}Introduction\protect}
Low dimensional electronic systems with strong interactions present unique opportunities for the implementation and study of highly correlated quantum matter. Prominent examples are the fractional quantum Hall effect \cite{FQHexperiment,FQHlaughlin,FQHHaldane}, high-$T_c$ superconductors \cite{hightcRMP}, non-Fermi liquids \cite{LLHaldane,nonFLog,nonFLsyk}, quantum spin liquids \cite{RVBqsl,KITAEqsl}, and the correlated states in single and multi wall carbon nanotubes \cite{EggerCnt, EggerMultiwallCNT, CNTWigner, CNTWignerSHahal}.
The two dimensional interface between the polar insulator LaAlO$_3$ and the non-polar SrTiO$_3$ features some interesting strong correlation effects, including metal-insulator transitions \cite{LASTOmetalinsulator} and tunable superconductivity \cite{LASTOsuperconduct}. Recent advances in atomic force microscopy (AFM) and electron lithography have enabled confinement of these surface electrons to quasi-1D waveguides \cite{lastoTech}. Electric conductance experiments in these heterostructures have revealed ballistic transport, and apparent strong attractive interactions between the charge carriers which lead to formation of composite few-electron particles \cite{LASTOstraight,JL_Pascal}.
Recent experiments in these quasi-1D structures have explored the introduction of an additional feature, namely spatially periodic modulation of the waveguide. This may be done in a ``vertical'' way, i.e., by modulating the voltage applied by the AFM tip during the patterning of the wire, which creates an effective Kronig–Penney landscape for the electrons \cite{JL_95_conductance_vertical}. Alternatively, one may consider ``lateral'' modulation, where a serpentine shape is etched at a constant AFM potential \cite{JL_dip_lateral}. The experimental results suggest that these kinds of modulation may lead to a much richer phase diagram of the electron waveguide.
Transport measurements in the vertical case have revealed a regime in which a plateau in the two-terminal conductance appears at rational fractions of the quantum of conductance \cite{JL_95_conductance_vertical}. Such phenomena have previously been observed in 1D constrictions \cite{PepperFractional}, yet it was commonly believed that strong repulsion is needed to stabilize the fractional phase \cite{fractionalGSYO}. In Sec. \ref{sec:fractionalcond} we address this discrepancy by extending the theoretical model presented in Ref. \cite{fractionalGSYO} to cases where the interaction has a dominant spatially periodic component. We explain the possible origin of such a form of interaction, and demonstrate that in such scenarios it is possible to measure fractional conductance in the presence of strong \textit{attractive} interactions, provided two electronic modes have fillings approximately commensurate with one another. The presented theory is thus consistent with the total absence of such a fractional feature in the straight wires fabricated with the same technique, as we conjecture that the periodic interaction originates in the modulated patterning. Our analysis allows us to reliably recreate the plateau behavior observed in experiments, and to understand the intriguing many-body correlated phase which develops ``on the plateau''. We find that another notable effect observed in the vertically modulated waveguides, namely an enhanced electron-pairing regime, may also be accounted for by the spatially periodic attraction.
In the laterally modulated waveguides we address a different anomaly in the transport data, namely the emergence of a conductance dip. Interestingly, this dip seems to develop and deepen continuously with change of experimental parameters, i.e., gate voltage and magnetic field \cite{JL_dip_lateral}.
In contrast to the conductance plateau in the vertical case, the conductance changes over a much wider range. Moreover, this feature is adjacent to a conductance plateau of $e^2/h$. These differences indicate that a different mechanism is at work here.
In Sec. \ref{sec:periodicpotential} we theorize that this feature may be accounted for by a modulation-induced spin-orbit interaction, leading to an effective periodic potential felt by the electrons. Matching the modulation wave vector to the electron Fermi momenta leads to a resonance condition for suppressed conductivity.
We show that the interplay of strong interactions in the waveguide and the short length of the conductor results in the sensitivity of the conductance dip shape and location to external parameters. Moreover, we explain how this feature can be used as a powerful probe on the strength of the interactions in the system.
\section{\label{sec:fractionalcond} Vertical modulation: fractional conductance plateau and 1-2-trion phase}
As will be later shown in this Section, some of the more remarkable results in the vertically modulated waveguides can be accounted for by considering spatially modulated electron-electron interactions. It was recently established \cite{JL_LAOSTO_PRX} that the strength of interactions, as well as their sign (repulsion or attraction) may be tuned in exactly such one-dimensional LaAlO$_3$/SrTiO$_3$ waveguides, by variation of the electronic density. This variation supposedly induces a kind of Lifshitz transition \cite{JL_LAOSTO_PRX, laosto_Lifshitz_altman_ruhman}, modifying the orbital nature of the charge carriers, along with the effective interactions.
We conjecture that the same sort of mechanism may be at play when the ``depth'' of the waveguide is modulated by the varying AFM potential.
As the strength and sign of interactions oscillate along the wire, the interaction becomes peaked in Fourier space, along the corresponding modulation wavevector. The consequences and significance of this modulation will become apparent in the following discussion.
\subsection{\label{sec:fractionalmodel} Model}
Our theoretical framework consists of a 1D system hosting two modes of spinless fermions; see Fig.~\ref{fig:dispfig}a.
These modes represent the two lowest-lying electronic modes of the waveguide at a given magnetic field, as the magnetic field significantly modifies the non-interacting band structure, through both Zeeman and orbital effects \cite{LASTOstraight}.
The spin label of these two modes and their spatial distribution in the cross-section of the waveguide is immaterial for the purposes of this work, as long as these are two distinct modes. We consider the Hamiltonian (setting $\hbar=1$)
\begin{align}
H & =\int dx\Psi_{i}^{\dagger}\left(x\right)\left(-\frac{1}{2m_{i}}\partial_{x}^{2}-\mu_{i}\right)\Psi_{i}\left(x\right)\nonumber\\
& +\int dx\int dy\,\rho_{i}\left(y\right)\mathbf{U}^{ij}\left(\left|x-y\right|\right)\rho_{j}\left(x\right), \label{eq:ModelQuadratic}
\end{align}
where $ \Psi_{i}\left(x\right)$ annihilates a fermion of mode $i$ at position $x$, $m_i$ and $\mu_i$ are the mass and chemical potential of the $i$-th mode, $\rho_i\equiv\Psi_{i}^\dagger\Psi_{i}$, $\mathbf{U}$ is an interaction matrix, and summation over repeated indices is implicit.
Of particular interest will be the case where $\mathbf{U}$ has a contribution which is \textit{periodically modulated in space}. This will manifest in the Fourier transform of the interaction $\mathbf{U}^{ij}_q=\int dx e^{iqx} \mathbf{U}^{ij}\left(x\right)$, which will become peaked around a specific $q^*$ corresponding to the modulation wavevector. Notice that the form of interaction in Eq.~\eqref{eq:ModelQuadratic} is written in a momentum conserving manner.
The low-energy physics of the model in Eq.~\eqref{eq:ModelQuadratic} may be described by linearizing the fermionic spectra near their respective Fermi momenta $k_{i,F}$, and writing the Hamiltonian in terms of right- and left-moving modes. The Hamiltonian comprises two contributions. The ``free'' part describes the linearly dispersing chiral movers,
\begin{equation}
{\cal H}_{0}=iv_{i}\left(\psi_{i,R}^{\dagger}\partial_{x}\psi_{i,R}-\psi_{i,L}^{\dagger}\partial_{x}\psi_{i,L}\right), \label{eq:linearizedH0}
\end{equation}
where $\psi_{i,R/L}$ annihilates a right/left moving fermion of mode $i$, and $v_i$ are the Fermi velocities. The electron-electron interactions are described by
\begin{align}
{\cal H}_{{\rm int}} & =g_{i}\rho_{i,R}\rho_{i,L}+g_{\perp}\left(\rho_{1,R}\rho_{2,L}+\rho_{2,R}\rho_{1,L}\right)\nonumber\\
& +g_{\rm bs}\left(\psi_{1,R}^{\dagger}\psi_{2,L}^{\dagger}\psi_{2,R}\psi_{1,L}+{\rm h.c.}\right), \label{eq:linearizedHint}
\end{align}
where $\rho_{i,r}\equiv\psi_{i,r}^\dagger \psi_{i,r}$ ($r=R,L$), and we have omitted the so-called ``$g_4$ interactions'' of the form $\rho_r^2$, which only renormalize the velocities later on. The different coupling coefficients may be extracted from the different momentum components of the interaction matrix,
\begin{equation}
g_i = \mathbf{U}^{ii}_{0}-\mathbf{U}^{ii}_{2k_{i,F}}, \,\,\,\,g_\perp = \mathbf{U}^{12}_{0},\,\,\,\,g_{\rm bs}= \mathbf{U}^{12}_{2k_{1,F}}.\label{eq:gUmatrixconnection}
\end{equation}
The backscattering interaction $g_{\rm bs}$ conserves momentum (and is thus relevant at low energies) only when the Fermi momenta of the two waveguide modes are nearly identical, i.e., $k_{1,F}\approx k_{2,F}$.
We now consider a scenario where the Fermi momentum of one mode is nearly an integer multiple of the other,
\begin{equation}
k_{1,F}=nk_{2,F}, \label{eq:conditionCommensurate}
\end{equation}
facilitating a higher order backscattering term,
\begin{equation}
{\cal H}_\lambda=\lambda\psi_{1,R}^{\dagger}\psi_{1,L}\left(\psi_{2,L}^{\dagger}\psi_{2,R}\right)^{n}+{\rm h.c.}\,,
\end{equation}
\textit{which conserves momentum} and is therefore potentially relevant. For the sake of clarity, we will focus our discussion on the case $n=2$ (illustrated in Fig. \ref{fig:dispfig}b), which is the simplest possible scenario. The arguments we present here may be generalized to higher $n$ in a straightforward manner \cite{fractionalGSYO}.
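The momentum balance of this process can be made explicit: ${\cal H}_\lambda$ converts one right-moving mode-1 particle into a left mover (momentum transfer $-2k_{1,F}$) and $n$ left-moving mode-2 particles into right movers (momentum transfer $+2nk_{2,F}$), so that the net momentum transfer
\begin{equation*}
\Delta p=-2k_{1,F}+2nk_{2,F}=0
\end{equation*}
vanishes exactly when the commensurability condition \eqref{eq:conditionCommensurate} is satisfied.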
\begin{figure}\begin{center}
\includegraphics[scale=0.5]{dispersionfigHorizontal.png}
\end{center}
\caption{\label{fig:dispfig}
Schematic of the system we analyze: (a) Two non-interacting leads are attached to the strongly interacting two-mode (purple and yellow) electronic waveguide (center). The smooth broadening of the waveguide occurs on a length scale much larger than the Fermi wavelength, preventing interface reflections. (b) Qualitative dispersion of the two electronic modes in the waveguide. The dotted red line marks the Fermi level, $b_1$ and $b_2$ denote the band bottoms, and the Fermi momenta $\pm k_{1,F}, \pm k_{2,F}$ are indicated. The figure illustrates the multi-particle backscattering process we focus on: a right-moving mode-1 particle (purple) backscatters off two left-moving mode-2 particles (yellow). This process and its conjugate conserve momentum if $k_{1,F}=2k_{2,F}$. This condition is satisfied when the Fermi level is at the critical chemical potential $\mu_c$.
}
\end{figure}
The interacting model we have presented here, captured by the effective Hamiltonian density
\begin{equation}
{\cal H}={\cal H}_0+{\cal H}_{\rm int}+{\cal H}_\lambda,\label{eq:fullHamiltoniandensity}
\end{equation}
can best be analyzed in the framework of abelian bosonization \cite{giamarchi2004quantum,Voit_1996,LLHaldane}. This is done by expressing the chiral fermionic operators in terms of new bosonic variables,
\begin{equation}
\psi_{i,r}\sim\frac{\eta_{i,r}}{\sqrt{2\pi\alpha}}{\rm \exp}\left[i\theta_{i}-ir\left(\phi_{i}+k_{i,F}x\right)\right],\label{eq:bosonIdentity}
\end{equation}
with $r=\pm$ corresponding to $R,L$, $\alpha$ is the short-distance cutoff of our continuum model, $\eta_{i,r}$ are Klein factors ensuring fermionic commutation relations such that $\left\{\eta_\mu,\eta_\nu\right\}=2\delta_{\mu\nu}$, and the bosonic fields obey the algebra $\left[\phi_i\left(x\right),\partial_x \theta_j\left(x'\right)\right]=i\pi\delta\left(x-x'\right)\delta_{i,j}$. Before writing down our bosonized model, we perform one final step: a canonical transformation on the bosons implementing a change of basis,
\begin{equation}
\phi_{g}=\frac{\phi_{1}-2\phi_{2}}{\sqrt{5}},\,\,\,\,\,\,\phi_{f}=\frac{2\phi_{1}+\phi_{2}}{\sqrt{5}},\label{eq:bosonTransform}
\end{equation}
with the same transformation for the $\theta$ operators. In this new basis, the bosonized Hamiltonian density may be written in the Luttinger liquid form,
\begin{align}
{\cal H} & =\sum_{j=f,g}\frac{u_{j}}{2\pi}\left[\frac{1}{K_{j}}\left(\partial_{x}\phi_{j}\right)^{2}+K_{j}\left(\partial_{x}\theta_{j}\right)^{2}\right]\nonumber\\
& +\frac{1}{2\pi}\left(V_{\phi}\partial_{x}\phi_{f}\partial_{x}\phi_{g}+V_{\theta}\partial_{x}\theta_{f}\partial_{x}\theta_{g}\right)\nonumber\\
& +\frac{\lambda}{4\left(\pi\alpha\right)^{3}}\cos\left(2\sqrt{5}\phi_{g}\right).\label{eq:bosonizedHamiltonianTotal}
\end{align}
The explicit general form of the parameters in the Hamiltonian \eqref{eq:bosonizedHamiltonianTotal} in terms of the various interaction strengths is given in Appendix \ref{app:hParameters}. Notice that we have omitted the $g_{\rm bs}$ interaction term, as it violates momentum conservation in the vicinity of the commensurability condition Eq. \eqref{eq:conditionCommensurate}.
\subsection{Fractional conductance in the strong backscattering limit}\label{sec:fracconductancdetails}
The physics we are interested in concerns the fate of the $\phi_g$ cosine term. Specifically, we begin by considering the limit $\lambda\to\infty$. We will now show that this enables us to relate the experimental signature of a fractional two-terminal conductance plateau to a gap opening in the $g$ sector.
For the sake of completeness, we briefly give here the derivation for the fractional conductance, along the lines described in Ref. \cite{fractionalGSYO} and its Supplementary Materials. We adiabatically attach non-interacting leads to the electronic waveguide (see Fig. \ref{fig:dispfig}a), and consider the scattering problem of incoming and outgoing currents in both modes. These currents are related by
\begin{equation}
\begin{pmatrix}O_{R}\\
O_{L}
\end{pmatrix}=\begin{pmatrix}\mathcal{T} & 1-\mathcal{T}\\
1-\mathcal{T} & \mathcal{T}
\end{pmatrix}\begin{pmatrix}I_{R}\\
I_{L}
\end{pmatrix},\label{eq:Tmatrix}
\end{equation}
where $O_{R,L}$ and $I_{R,L}$ are chiral outgoing and incoming current vectors of length $N$, the number of modes in the waveguide, and $\mathcal{T}$ is an $N\times N$ matrix. In the two-mode case discussed here, $N=2$ \footnote{The generalization to arbitrary $N$, as well as to an arbitrary backscattering process, is straightforward, and appears in Ref. \cite{fractionalGSYO}.}. In terms of the $\phi_{1,2}$ bosonic variables, the elements of these current vectors are
\begin{equation}
I_{R,i} = \frac{e}{2\pi}\partial_t\frac{\theta_i-\phi_i}{\sqrt{2}}|_{x=\frac{L}{2}},\,\,\,\, I_{L,i} = \frac{e}{2\pi}\partial_t\frac{\theta_i+\phi_i}{\sqrt{2}}|_{x=-\frac{L}{2}},
\end{equation}
\begin{equation}
O_{R,i} = \frac{e}{2\pi}\partial_t\frac{\theta_i-\phi_i}{\sqrt{2}}|_{x=-\frac{L}{2}},\,\,\,\, O_{L,i} = \frac{e}{2\pi}\partial_t\frac{\theta_i+\phi_i}{\sqrt{2}}|_{x=\frac{L}{2}}.
\end{equation}
In the asymptotic limit we are considering, $\lambda\to\infty$, $\phi_g$ is pinned throughout the system, and we have the boundary condition $\partial_t \phi_1 -2\partial_t \phi_2 = 0$. Taken at opposite ends of the system, this boundary condition is equivalent to
\begin{equation}
\mathbf{n}_g^T\mathcal{T}=0,\label{eq:ng}
\end{equation}
with $\mathbf{n}_g=\frac{1}{\sqrt{5}}\left(1,-2\right)^T$, defined in accordance with Eq. \eqref{eq:bosonTransform}. The unobstructed propagation of the $\phi_f$ mode through the system leads to the boundary conditions $2O_{R/L,1}+O_{R/L,2}=2I_{R/L,1}+I_{R/L,2}$, or equivalently,
\begin{equation}
\mathbf{n}_f^T\mathcal{T}=\mathbf{n}_f^T, \label{eq:nf}
\end{equation}
with $\mathbf{n}_f=\frac{1}{\sqrt{5}}\left(2,1\right)^T$, which together with $\mathbf{n}_g$ forms an orthonormal set.
The solution to Eqs. \eqref{eq:ng},\eqref{eq:nf} can be readily found to be $\mathcal{T}=1-\mathbf{n}_g\mathbf{n}_g^T$.
The total current flowing through the system in both modes may be expressed as $J=\left(1,1\right)\cdot\left(I_R-O_L\right)$. Assuming incoming right movers emanate from a reservoir at potential $V$ and the left movers from a reservoir with zero potential, we set $I_R=\frac{e^2}{h}V\left(1,1\right)^T$, and $I_L=\left(0,0\right)^T$.
The two-terminal conductance of the waveguide can then be extracted,
\begin{equation}
\frac{G}{e^2/h}=\left(1,1\right)\mathcal{T}\left(1,1\right)^T=\frac{9}{5}.\label{eq:asymptotic95}
\end{equation}
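The algebra leading to Eq.~\eqref{eq:asymptotic95} is simple enough to verify numerically. Below is a minimal NumPy sketch (variable names are ours) that constructs $\mathcal{T}=1-\mathbf{n}_g\mathbf{n}_g^T$ and evaluates the two-terminal conductance for both the $n=2$ and $n=3$ commensurabilities:

```python
import numpy as np

# Pinned combination (gapped sector) for n = 2: phi_1 - 2 phi_2, cf. Eq. (bosonTransform)
n_g = np.array([1.0, -2.0]) / np.sqrt(5.0)

# T = 1 - n_g n_g^T solves n_g^T T = 0 and n_f^T T = n_f^T
T2 = np.eye(2) - np.outer(n_g, n_g)
ones = np.ones(2)
G2 = ones @ T2 @ ones  # conductance in units of e^2/h
print(G2)  # 9/5, up to floating-point rounding

# n = 3 commensurability: the pinned combination is phi_1 - 3 phi_2
n_g3 = np.array([1.0, -3.0]) / np.sqrt(10.0)
T3 = np.eye(2) - np.outer(n_g3, n_g3)
G3 = ones @ T3 @ ones
print(G3)  # 8/5, up to floating-point rounding
```

The same three lines reproduce the $G=\frac{8}{5}\frac{e^2}{h}$ prediction for $n=3$ quoted below.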
A robust conductance plateau as a function of external gate voltage at $\sim 9/5$ was experimentally observed in Ref. \cite{JL_95_conductance_vertical} for a wide range of magnetic fields ($3$-$7$ T). This remarkable agreement between theory and the experimental data strengthens our assertion that the observed plateaus originate in high-order backscattering interactions, enabled by approximately commensurate fillings of the two modes.
We further note that for the $n=3$ case, the conductance is predicted to be $G=\frac{8}{5}\frac{e^2}{h}$. A plateau at this value of conductance, albeit much fainter than the $\frac{9}{5}$ one, is also present in Ref. \cite{JL_95_conductance_vertical} at magnetic fields $\sim 9$ T.
The appearance of fractional conductance plateaus at a certain range of magnetic fields, as well as the possible plateau ``evolution'' with magnetic field, are both consistent with the well-understood role of the field in determining the band structure ~\cite{LASTOstraight}. The magnetic field shifts the low-lying modes in energy, such that the commensurability condition Eq.~\eqref{eq:conditionCommensurate} is made possible at a certain gate voltage. The magnetic field thus plays the role of a control parameter enabling and disabling particular many-body scattering processes.
Experimental verification of the backscattering-induced fractional conductance is possible by measuring the tunneling shot-noise ``on the plateau'' \cite{SelaShotNoise,fractionalGSYO}. We find that for the $n=2$ case one expects a Fano factor $e^*/e=3/5$, whereas in the $n=3$ case one should find $e^*/e=2/5$ \cite{fractionalGSYO}.
In the next section, we will explain how the varying magnetic field affects which momentum-conserving backscattering interaction becomes relevant, and consequently which plateaus emerge.
\subsection{Spatially modulated interactions}\label{sec:spatialmodulations}
As was previously discussed in Ref. \cite{fractionalGSYO}, the relevance of the $\lambda$ perturbation, and hence the formation of the $\phi_g$ gap and fractional plateau, hinge on very strong \textit{repulsive} interactions between the 1D electrons. However, there is strong evidence for attractive interactions in the electron waveguide devices patterned on the LaAlO$_3$/SrTiO$_3$ interface \cite{LASTOstraight,JL_Pascal}. The key to understanding this apparent discrepancy may lie in another intriguing observation, namely that fractional conductance features were altogether absent from such straight non-modulated waveguides \cite{LASTOstraight}. This hints at the possibility that the periodic modulation helps facilitate the formation of a partial gap.
To gain a qualitative understanding of how $\phi_g$ can become gapped even in the presence of attractive interactions,
we examine the renormalization group (RG) flow of the Hamiltonian Eq. \eqref{eq:bosonizedHamiltonianTotal}.
At each step of the RG, short-distance (or high-momentum ``fast'') degrees of freedom are integrated out, and the short-distance cutoff is rescaled $\alpha\to\alpha\left(1+d\ell\right)$. This generates new terms in the Hamiltonian, leading to a modification of the coupling constants~\cite{cardy_1996}.
Treating the $\lambda,V_\phi,V_\theta$ terms as perturbations of the Luttinger liquid Hamiltonian, the second-order RG equations are given by
\begin{subequations}\label{eq:totalRG}
\begin{equation}
\frac{d}{d\ell}\tilde{\lambda}=\left(2-5K_g\right)\tilde{\lambda},\label{eq:RG}
\end{equation}
\begin{equation}
\frac{d}{d\ell}K_{g}^{-1}=\frac{5}{4}\tilde{\lambda}^2,\label{eq:RG_Kg}
\end{equation}\end{subequations}
with $\ell$ the RG flow parameter, and the dimensionless coupling constant $\tilde{\lambda}\equiv\lambda/\left(2\pi^2\alpha u_g\right)$. We note that $V_{\phi/\theta}$ modify the scaling dimension of $\tilde{\lambda}$ only to second order in $V_{\phi,\theta}/u_{f,g}$, and thus introduce third-order (and higher) perturbative corrections to Eq. \eqref{eq:RG}.
It is clearly evident from Eq. \eqref{eq:totalRG} that $K_g$ is the crucial parameter determining the fate of the $\phi_g$ sector. Neglecting its flow (which is justified if the bare $\tilde{\lambda}$ is sufficiently small), we find that the condition for a gap to open in this sector is $K_g<K_{g,\rm{c}}=\frac{2}{5}$.
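The flow equations \eqref{eq:totalRG} can be integrated numerically to illustrate this criterion. The following sketch (a simple Euler integration; initial values are ours, chosen for illustration) shows $\tilde{\lambda}$ growing to strong coupling for $K_g<2/5$ and decaying for $K_g>2/5$:

```python
def rg_flow(lam0, Kg0, dl=1e-3, lmax=20.0):
    """Euler integration of Eqs. (RG) and (RG_Kg):
    d(lam)/dl = (2 - 5 K_g) lam,  d(K_g^{-1})/dl = (5/4) lam^2.
    Stops once lam reaches O(1), signalling strong coupling (gap opening)."""
    lam, inv_Kg, l = lam0, 1.0 / Kg0, 0.0
    while l < lmax and abs(lam) < 1.0:
        Kg = 1.0 / inv_Kg
        lam += dl * (2.0 - 5.0 * Kg) * lam
        inv_Kg += dl * 1.25 * lam ** 2
        l += dl
    return lam, 1.0 / inv_Kg

# K_g < 2/5: the cosine is relevant and flows to strong coupling
lam_rel, Kg_rel = rg_flow(lam0=0.05, Kg0=0.30)
# K_g > 2/5: the cosine is irrelevant and decays
lam_irr, _ = rg_flow(lam0=0.05, Kg0=0.60)
print(lam_rel, lam_irr)
```

Note that in the relevant case the feedback of Eq.~\eqref{eq:RG_Kg} only accelerates the flow, since it reduces $K_g$ further.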
Let us now consider the consequence of a peaked $\mathbf{U}_q$, specifically around $q^*=2k_{2,F}$, such that the dominant coupling coefficient is $\mathbf{U}_{2k_{2,F}}^{22}$.
For simplicity, we assume all other intra-mode interaction matrix elements are of comparable strength, $\mathbf{U}_{0}^{11}\approx\mathbf{U}_{0}^{22}\approx\mathbf{U}_{2k_{1,F}}^{11}\equiv U$.
Under these assumptions,
\begin{equation}
K_{g}=\sqrt{\frac{\frac{5}{2}\pi v+\mathbf{U}_{2k_{2,F}}^{22}+\mathbf{U}_{0}^{12}-U}{\frac{5}{2}\pi v-\mathbf{U}_{2k_{2,F}}^{22}-\mathbf{U}_{0}^{12}+U}}.\label{eq:KgSimplified}
\end{equation}
(See Appendix \ref{app:hParameters} for the full general expression.)
Keeping in mind that the interactions are attractive, hence the couplings are all \textit{negative}, one finds that with strong interactions, sufficiently dominant $\mathbf{U}_{2k_{2,F}}^{22}$ indeed greatly diminishes the value of $K_g$ and may possibly bring it below the critical $K_{g,{\rm c}}$. This is a central observation of this work, which identifies the modulated interaction as an important ingredient in the fractional plateau puzzle.
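To make this observation concrete, here is a small numerical check of Eq.~\eqref{eq:KgSimplified} (coupling values are illustrative, in units where $v=1$): a uniform weak attraction leaves $K_g$ close to unity, while a strongly peaked $\mathbf{U}_{2k_{2,F}}^{22}$ pushes it below $K_{g,\rm{c}}=2/5$:

```python
import math

def K_g(U22_2k2F, U12_0, U, v=1.0):
    """Eq. (KgSimplified); attractive couplings are negative."""
    num = 2.5 * math.pi * v + U22_2k2F + U12_0 - U
    den = 2.5 * math.pi * v - U22_2k2F - U12_0 + U
    return math.sqrt(num / den)

# Uniform weak attraction: K_g stays close to 1, no partial gap
K_uniform = K_g(U22_2k2F=-1.0, U12_0=-1.0, U=-1.0)
# Interaction sharply peaked at q* = 2 k_{2,F}: K_g drops below K_{g,c} = 2/5
K_peaked = K_g(U22_2k2F=-6.0, U12_0=-1.0, U=-1.0)
print(K_uniform, K_peaked)
```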
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{LeadsPlateausCleanwithlowerscale.png}
\end{center}
\caption{\label{fig:leadsplateaus}
Two-terminal conductance in the presence of a gap in the $\phi_g$ sector, calculated from Eq.~\eqref{eq:condformula} and Eq.~\eqref{eq:Tvertical}. The different traces correspond to an increasing gap $\Delta$ (from left to right), from $2\,\mu$eV to $20\,\mu$eV, in $1\,\mu$eV steps. Consecutive traces are shifted horizontally by $8.5\,\mu$eV for clarity. Notice that as the gap increases, the plateau approaches its asymptotic value (marked by the dashed line). We use the parameters $b_1=10\,\mu$eV, $b_2=14\,\mu$eV, $\mu_c=26\,\mu$eV, and $T=25$ mK for all traces.
}
\end{figure}
\subsection{
Relation to experimental results}\label{sec:experimentalconsequence}
In the vicinity of $K_g^*=\frac{1}{5}$, which corresponds to strong interactions in the waveguide, we may supplement our asymptotic calculation Eq. \eqref{eq:asymptotic95} by an \textit{exact} re-fermionization solution, extensively described in Ref. \cite{fractionalGSYO}. The point $K_g=K_g^*$ represents a generalization of the exactly solvable Luther-Emery point \cite{LutherEmery} of attractive spin-degenerate electrons in a quantum wire.
We emphasize that although a Luttinger parameter much smaller than 1 usually corresponds to strong repulsive interaction, a modulated interaction may indeed lead to $K_g\ll 1$ for an \textit{attractive} interaction as well. (See Eq.~\eqref{eq:KgSimplified}.)
In the limit where the length of the waveguide $L$ is sufficiently long, $L\gg\frac{u_g}{\Delta}$ with $\Delta$ the gap opened in the $\phi_g$ sector, one recovers the finite temperature linear conductance
\begin{equation}
G\left(\mu,T\right)=\frac{e^2}{h}\int d\epsilon \frac{{\cal T}\left(\epsilon\right)}{4T\cosh^2\left(\frac{\epsilon-\mu}{2T}\right)},\label{eq:condformula}
\end{equation}
with $\mu$ a global chemical potential controlled by a gate voltage, $T$ the temperature, and ${\cal T}\left(\epsilon\right)$ the transmission function, which depends on the energies $b_{1,2}$ of the two band bottoms (see Fig.~\ref{fig:dispfig}) and on the critical commensurate chemical potential $\mu_c$ (see Fig.~\ref{fig:dispfig}b),
\begin{equation}\label{eq:Tvertical}
{\cal{T}}\left(\epsilon\right)=\begin{cases}
0, & \epsilon<b_1\\
1, & b_1\leq\epsilon<b_2\\
2, & b_2\leq\epsilon<\mu_c-\frac{\Delta}{2}\\
\frac{9}{5}, &\mu_c-\frac{\Delta}{2}\leq\epsilon<\mu_c+\frac{\Delta}{2}\\
2, &\mu_c+\frac{\Delta}{2}\leq\epsilon
\end{cases}.
\end{equation}
In Fig. \ref{fig:leadsplateaus} we show an example of the predicted conductance for different sizes of $\Delta$. A striking resemblance to the data shown in Fig. 3E of Ref. \cite{JL_95_conductance_vertical} is evident. Even a ``pre-plateau'' conductance peak feature that was seen in experiments is reproduced: it is attributed to a region with transmission of $2$ preceding the fractional regime, which at finite temperature does not allow the conductance to reach all the way up to its integer value. We note that the sometimes-missing plateaus at integer values of $1$ and $2$ in the experiment can be accounted for by an interplay between the size of the gap, the temperature, and the inter-band separations. (In Fig. \ref{fig:leadsplateaus}, for example, the plateau at $1$, which is outside the plotted range, is smeared out for our choice of parameters.)
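The traces in Fig.~\ref{fig:leadsplateaus} follow from a straightforward numerical evaluation of Eq.~\eqref{eq:condformula} with the transmission \eqref{eq:Tvertical}. A sketch with the parameters used in the figure (the integration grid and cutoff are ours):

```python
import numpy as np

# Parameters of Fig. (leadsplateaus): energies in micro-eV, T = 25 mK ~ 2.15 micro-eV
b1, b2, mu_c, Delta, T = 10.0, 14.0, 26.0, 10.0, 2.15

def transmission(eps):
    """Piecewise transmission of Eq. (Tvertical)."""
    if eps < b1:
        return 0.0
    if eps < b2:
        return 1.0
    if eps < mu_c - Delta / 2:
        return 2.0
    if eps < mu_c + Delta / 2:
        return 9.0 / 5.0
    return 2.0

def conductance(mu, n=8001, width=40.0):
    """Eq. (condformula): thermally smeared transmission, in units of e^2/h."""
    eps = np.linspace(mu - width, mu + width, n)
    kern = 1.0 / (4.0 * T * np.cosh((eps - mu) / (2.0 * T)) ** 2)
    tv = np.array([transmission(e) for e in eps])
    return float(np.sum(tv * kern) * (eps[1] - eps[0]))

G_plateau = conductance(mu_c)        # near 9/5, thermally smeared toward 2
G_above = conductance(mu_c + 20.0)   # recovers the integer value 2
print(G_plateau, G_above)
```

At this temperature the plateau value sits slightly above $9/5$, illustrating the finite-temperature smearing discussed above.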
We may gain further qualitative insight by examining the gap $\Delta$ itself.
Its size may be approximated by integrating Eq. \eqref{eq:RG} up to $\tilde{\lambda}\sim{\cal O}\left(1\right)$,
\begin{equation}
\Delta\approx W \left(\frac{\lambda}{W}\right)^{\frac{1}{2-5K_g}},\label{eq:GapSize}
\end{equation}
with $W = u_g /\alpha_0$ a typical bandwidth parameter ($\alpha_0$ is the bare short-distance cutoff).
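A quick illustration of Eq.~\eqref{eq:GapSize} (the bare coupling value is ours, in units of $W$):

```python
def gap(lam, Kg, W=1.0):
    """Eq. (GapSize): Delta ~ W (lam/W)**(1/(2 - 5 Kg)), for Kg < 2/5."""
    return W * (lam / W) ** (1.0 / (2.0 - 5.0 * Kg))

# Bare coupling lam = 0.05 W; the gap grows steeply as K_g decreases
gaps = [gap(0.05, Kg) for Kg in (0.38, 0.30, 0.20)]
print(gaps)
```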
As expected, $\Delta$ becomes larger as $K_g$ is reduced. According to our theory, the value of $q^*$ around which the interaction is peaked remains constant in the experiment. However, as the external magnetic field is modified, the energy dispersions of the two populated modes change, and the value of $k_{2,F}$ at the gate voltage corresponding to the commensurate condition \eqref{eq:conditionCommensurate} depends on the magnetic field. As $2k_{2,F}$ drifts further away from $q^*$, $U^{22}_{2k_{2,F}}$ becomes less dominant, and $K_g$ grows. Thus, the mechanism we present to account for the fractional plateaus in the system is entirely consistent with $\Delta$ depending on the magnetic field, in remarkable agreement with the experimental variations. The same reasoning accounts for the appearance of a $9/5$ plateau in one regime, and of an $8/5$ plateau in another, corresponding to $n=2$ and $n=3$, respectively.
We will now argue that our analysis suggests that the fractional features in the system we study, with modulated attractive interactions, are more robust as compared to the uniform repulsion case. This happens because scattering from impurities is less relevant in the former case, and sometimes becomes irrelevant in the RG sense.
Consider the situation where $\phi_g$ is gapped and the only gapless sector is $\phi_f$. Then, the relevance of any impurity scatterer term will depend only on $K_f$ (times a numerical factor of ${\cal O}\left(1\right)$ determined by how the impurity impacts the two original modes). Employing an additional simplification $\mathbf{U}_{0}^{12}\approx U$ we may write
\begin{equation}
K_{f}=\sqrt{\frac{2\pi v+\frac{1}{5}\mathbf{U}_{2k_{2,F}}^{22}-U}{2\pi v-\frac{1}{5}\mathbf{U}_{2k_{2,F}}^{22}+U}},
\end{equation} such that even for relatively dominant $\mathbf{U}_{2k_{2,F}}^{22}$ one would still expect $K_f$ to be controlled by the interaction $U$ and thus be significantly larger than in the repulsive case (remember $U<0$). Large $K_f$ causes the impurities to be less relevant \cite{KaneImpurity,KaneFisher}, and the viability of the fractional conductance plateau with a value close to its asymptotic rational value rises substantially, even in a non-ideal ``dirty'' system.
We note that the relevance of impurities in the modulated system, and hence the size of $K_f$, can be experimentally probed by introducing imperfections to the quasi-1D waveguide in its writing process. If the effect of these impurities on the conductance increases steeply with temperature, one may infer that $K_f$ is much larger than 1.
We note here that for the same reasons a larger $K_f$ is expected to render proximity-induced superconductivity in the residual sector much more relevant. This may possibly enable the stabilization of fractional Majorana zero modes at the edges of the waveguide \cite{HelicalSelaOreg} at more accessible temperature and proximity strength regimes.
\begin{figure}
\begin{center}
\includegraphics[scale=0.31]{heuristicfig.png}
\end{center}
\caption{\label{fig:strongcouplingheuristic}
Strong coupling limit of the two commensurate waveguide modes, cf.~Fig.~\ref{fig:dispfig}. Attractive interactions with wave vector $2k_{2,F}$ (solid black line at the bottom), corresponding to the less populated mode (mode 2), tend to induce a charge density wave commensurate with that wave vector in each mode. The inter-mode attraction (wiggly green lines) then ``locks'' the phases of the two density waves together. This locking corresponds to pinning $\phi_g$, whereas a composite 1-2-trion $\phi_f$ can propagate along the waveguide.
}
\end{figure}
\subsection{Strong coupling -- the 1-2-Trion phase}
It is worth pointing out that the expressions we have found for the Luttinger parameters [see Eq.~\eqref{eq:KgSimplified}], and their dependence on the interaction matrix elements are correct only for weak coupling, i.e., when the typical bandwidth of the two modes $W$ is sufficiently larger than the size of the elements comprising the interaction matrix $\mathbf{U}$.
Furthermore, our RG arguments regarding the relevance of the multi-particle backscattering term were also perturbative. For very strong interactions, the functional dependence of, e.g., the size of the gap (or its existence), may vary.
We claim, however, that qualitatively one should reach the same conclusions in the strong interaction limit.
To understand why, consider the limit of negligible electron hopping and only interactions of the kind we have discussed (see Fig.~\ref{fig:strongcouplingheuristic}). If the intra-mode attractive interaction has a dominant component modulated with a spatial frequency matching the density of the less populated mode $\mathbf{U}_{2k_{2,F}}^{22}$, the most energetically favored state would be a charge density wave with the corresponding wave vector, which will maximize the attraction. Then, the subdominant inter-mode attraction will tend to ``glue'' a mode-2 particle to two mode-1 particles. This corresponds to the free $\phi_f$ mode left in the waveguide in the language of our previous discussion. According to Eq.~\eqref{eq:bosonTransform} it is composed of two bosonic modes from mode-1 and one from mode-2. Notice that in the expression for $K_g$, Eq.~\eqref{eq:KgSimplified}, from which we determine the fate of $\phi_g$, the weak coupling dependence on the coupling constants $\mathbf{U}_{2k_{2,F}}^{22},\mathbf{U}_{0}^{12}$ reflects in essence the strong coupling heuristic description, as large attractive (with negative amplitudes) interactions tend to make $K_g$ small and pin $\phi_g$. We note that the above argument is not specific to $n=2$, and generally holds, with proper modifications, for arbitrary $n$.
We now comment on the state of the electrons in the waveguide ``on the plateau'', i.e., deep in the $\phi_g$ gapped phase. Let us consider the operator
\begin{align}
\Psi_{\text{1-2-}{\rm trion}}\left(x\right) & \equiv\Psi_{1}\left(x\right)\Psi_{1}\left(x+\alpha\right)\Psi_{2}\left(x\right)\nonumber \\
& \propto e^{-i\sqrt{5}\theta_{f}}\cos\left(\frac{\phi_{f}}{\sqrt{5}}+k_{2,F}x\right),\label{triondefinition}
\end{align}
which creates a three-particle fermionic excitation, as in Fig.~\ref{fig:strongcouplingheuristic}.
The offset by $\alpha$ in the second annihilation operator is crucial in order to create a local pair due to the fermionic nature of $\Psi_1$.
This is the lowest order operator one can construct that does not contain the dual variable $\theta_g$, which strongly oscillates in the fractional phase leading to exponentially decaying correlation functions of all operators containing it.
We may then examine trion-density-density and trion-pair correlations,
\begin{equation}
\left\langle \rho_{\text{1-2-}{\rm trion}}\left(x\right)\rho_{\text{1-2-}{\rm trion}}\left(0\right)\right\rangle \propto\cos\left(2k_{2,F}x\right)x^{-\frac{2K_{f}}{5}},
\end{equation}
\begin{equation}
\left\langle \Delta_{\text{1-2-}{\rm trion}}^\dagger\left(x\right)\Delta_{\text{1-2-}{\rm trion}}\left(0\right)\right\rangle \propto x^{-\frac{10}{K_{f}}},
\end{equation}\sloppy
respectively, with $\rho_{\text{1-2-}{\rm trion}}\left(x\right)=\Psi_{\text{1-2-}{\rm trion}}^{\dagger}\left(x\right)\Psi_{\text{1-2-}{\rm trion}}\left(x\right)$, and $\Delta_{\text{1-2-}{\rm trion}}\left(x\right)=\Psi_{\text{1-2-}{\rm trion}}\left(x\right)\Psi_{\text{1-2-}{\rm trion}}\left(x+\alpha\right)$. The value of $K_f$ determines which of these two will be the dominant order in the gapped system: for $K_f<5$ charge density wave order of composite trions will dominate, whereas trion pairing will have the leading susceptibility if $K_f>5$.
\subsection{Enhanced pairing} \label{sec:enhanced_pairing}
Another remarkable phenomenon observed in the vertically modulated waveguides is an extended regime where electrons form bound pairs~\cite{JL_95_conductance_vertical}. We now demonstrate that this experimental signature is entirely consistent with the conjectured periodic attractive interaction.
We examine the Hamiltonian density ${\cal H}_0+{\cal H}_{\rm int}$ [Eqs.~\eqref{eq:linearizedH0} and \eqref{eq:linearizedHint}] around the commensurability point $k_{1,F}=k_{2,F}\equiv k_F$. At this commensurability, $g_{\rm bs}$ is a relevant perturbation, whereas ${\cal H}_\lambda$ is not (and is therefore omitted).
For simplicity, we assume the differences between the various intra- and inter-mode interactions are negligible, i.e., $\mathbf{U}^{11}_{0}\approx\mathbf{U}^{22}_{0}\approx\mathbf{U}^{12}_{0}$, and $\mathbf{U}^{11}_{2k_{F}}\approx\mathbf{U}^{22}_{2k_{F}}\approx\mathbf{U}^{12}_{2k_{F}}\equiv{U}_{2k_F}$. Simplifying further, $v_1\approx v_2 \equiv v$, we find that the bosonized Hamiltonian density can be written as
\begin{align}
{\cal H}_{\rm pair} & =\sum_{\eta=+,-}\frac{u_{\eta}}{2\pi}\left[\frac{1}{K_{\eta}}\left(\partial_{x}\phi_{\eta}\right)^{2}+K_{\eta}\left(\partial_{x}\theta_{\eta}\right)^{2}\right]\nonumber\\
& +\frac{U_{2k_F}}{2\left(\pi\alpha\right)^{2}}\cos\left(\sqrt{8}\phi_{-}\right),\label{eq:bosonizedPairingHamiltonianTotal}
\end{align}
with $\phi_{\pm}=\frac{\phi_1\pm\phi_2}{\sqrt{2}}$, and an identical transformation is applied for $\theta_\pm$. (The parameters $u_+$, $u_-$, and $K_+$ are not important for our discussion and are given in Appendix~\ref{app:hParameters}.) Crucially, we find \cite{giamarchi2004quantum}
\begin{equation}
K_-=\sqrt{\frac{1+U_{2k_F}/\left(2\pi v\right)}{1-U_{2k_F}/\left(2\pi v\right)}},\label{eq:K_minus_expression}
\end{equation}
which is smaller than 1 for attractive interactions, making the cosine term relevant in the RG sense.
When this term flows to strong coupling, $\phi_-$ gets pinned, and only pairs with one electron from each mode (corresponding to the $\phi_+$ channel) remain gapless.
Integrating the RG flow up to strong coupling, we can estimate the pairing gap [similarly to Eq.~\eqref{eq:GapSize}],
\begin{equation}
\Delta_{\rm pair}\approx W \left(\frac{U_{2k_F}}{W}\right)^{\frac{1}{2-2K_-}}.\label{eq:GapPairing}
\end{equation}
We now consider the impact of modulated attractions, such that $\mathbf{U}_q$ peaks near $q=2k_F$. Clearly, this leads to a significant enhancement of $\left|U_{2k_F}\right|$ as compared to the more generic short-range or power-law decaying interactions. This in turn increases the size of the gap $\Delta_{\rm pair}$, as it makes $K_-$ smaller and the ratio $U_{2k_F}/W$ larger.
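This trend can be read off directly from Eqs.~\eqref{eq:K_minus_expression} and \eqref{eq:GapPairing}; a short sketch (coupling values illustrative, in units where $v=W=1$):

```python
import math

def K_minus(U2kF, v=1.0):
    """Eq. (K_minus_expression); U2kF < 0 (attraction) gives K_- < 1."""
    x = U2kF / (2.0 * math.pi * v)
    return math.sqrt((1.0 + x) / (1.0 - x))

def pairing_gap(U2kF, W=1.0):
    """Eq. (GapPairing): Delta_pair ~ W (|U_{2kF}|/W)**(1/(2 - 2 K_-))."""
    Km = K_minus(U2kF)
    return W * (abs(U2kF) / W) ** (1.0 / (2.0 - 2.0 * Km))

# Peaking U_q at q = 2 k_F enhances |U_{2kF}|: K_- shrinks and the gap grows
Us = (-0.2, -0.5, -0.9)
Ks = [K_minus(U) for U in Us]
gaps = [pairing_gap(U) for U in Us]
print(Ks)
print(gaps)
```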
An enhancement of the pairing region as compared to the non-modulated case (cf. Ref. \cite{LASTOstraight}) can thus be attributed to the modulated attractive interaction.
This conjectured form of interaction can thus account for \textit{both} of the most prominent features observed in the vertically modulated waveguides.
\section{\label{sec:periodicpotential} Lateral modulation: gap opening and reduction of conductance}
The effect of lateral modulation of the electron waveguide may be captured by an alternating electric field in the lateral direction with wave vector $Q$, $\mathbf{E}=E\cos\left(Qx\right)\hat{y}$, felt by the electrons having momenta $\mathbf{k}=k\hat{x}$. An effective modulated Rashba spin-orbit field $\boldsymbol{\alpha}$ in the out-of-plane direction is thus expected, as $\boldsymbol{\alpha}\propto\mathbf{k}\times\mathbf{E}=kE\cos\left(Qx\right)\hat{z}$. In this Section, we explore the consequences of a modulated spin-orbit interaction in the high (out-of-plane) magnetic field regime.
Focusing on the lowest-lying spinful mode in the waveguide, we describe it by a Hamiltonian $H=\int dx\Psi^{\dagger}\left[{\cal H}_{0}+{\cal H}_{Q}\right]\Psi$, where $\Psi=\left(\psi_{\uparrow},\psi_{\downarrow}\right)^{T}$ is a spinor of electron annihilation operators, and ${\cal H}_{0}$ describes the system without modulation,
\begin{equation}
{\cal H}_{0}=-\frac{\partial_{x}^{2}}{2m}-\epsilon_0+V_{Z}\sigma_{z}-\alpha_{0}i\partial_{x}\sigma_{z},\label{eq:lateralH0}
\end{equation}
where $m$ is the electron mass, $\epsilon_0$ the Fermi energy in the absence of Zeeman splitting and spin-orbit coupling, $V_{Z}$ the Zeeman energy, $\alpha_{0}$ the non-modulated component of the spin-orbit interaction, and $\sigma_z$ a Pauli operator. The modulated spin-orbit interaction is described by
\begin{equation}
{\cal H}_{Q}=\alpha_{Q}\left\{ -i\partial_{x},\cos\left(Qx\right)\right\} \sigma_{z},\label{eq:lateralHQ}
\end{equation}
with $\alpha_{Q}$ the strength of the modulated spin-orbit coupling, and the anti-commutator ensures the hermiticity of the Hamiltonian.
Such a form of spin-orbit interaction was considered in Ref. \cite{modulatedRashbaTheory}, where a metal-insulator transition was studied.
Considering here the regime $V_{Z}\gg\epsilon_0+\frac{m\alpha_{0}^{2}}{2}$ (taking $V_{Z}$ positive without loss of generality), we may limit our discussion to the low-energy $\sigma_{z}=-1$ sector, as depicted in Fig. \ref{fig:PeriodicCondDips}a. Linearizing the spectrum of ${\cal H}_{0}$ around $k=\pm k_{F}+k_{{\rm SO}}$, with $k_{F}=\sqrt{2m\left(\epsilon_0+V_{Z}\right)+m^{2}\alpha_{0}^{2}}$, and $k_{{\rm SO}}=m\alpha_{0}$, we expand
\begin{equation}
\psi_{\downarrow}\approx e^{i\left(k_{F}+k_{{\rm SO}}\right)x}\psi_{R}+e^{-i\left(k_{F}-k_{{\rm SO}}\right)x}\psi_{L},\label{eq:expandpsidown}
\end{equation}
with $\psi_{L/R}$ being chiral fermionic operators. The total Hamiltonian density projected to the $\sigma_{z}=-1$ sector may then be expressed
as \begin{align}
{\cal H} & =iv\left(\psi_{R}^{\dagger}\partial_{x}\psi_{R}-\psi_{L}^{\dagger}\partial_{x}\psi_{L}\right)\nonumber \\
& +\alpha_{Q}k_{{\rm SO}}\left(\psi_{R}^{\dagger}\psi_{L}e^{-i\left(2k_{F}-Q\right)x}+{\rm h.c.}\right),\label{eq:Hprojected}
\end{align}
where we have omitted rapidly oscillating terms, and $v=k_{F}/m$. It is apparent from Eq. (\ref{eq:Hprojected}) that in our parameter regime of interest the system is equivalent to one of spinless fermions subjected to a spatially periodic potential. This periodic potential is due to the ``dc'' and periodic components of the spin-orbit interaction conspiring together.
Performing a unitary transformation $\psi_{R/L}\to\psi_{R/L}{\rm exp}\left(\pm i\frac{2k_{F}-Q}{2}x\right)$, Eq. \eqref{eq:Hprojected} becomes
\begin{align}
\tilde{\cal H} & =iv\left(\psi_{R}^{\dagger}\partial_{x}\psi_{R}-\psi_{L}^{\dagger}\partial_{x}\psi_{L}\right)-\tilde{\mu}\left(\psi_{R}^{\dagger}\psi_{R}+\psi_{L}^{\dagger}\psi_{L}\right)\nonumber \\
& +\Delta_{Q}\left(\psi_{L}^{\dagger}\psi_{R}+\psi_{R}^{\dagger}\psi_{L}\right),\label{eq:periodicHamiltonianTransformed}
\end{align}
where $\tilde{\mu}=\frac{v}{2}\left(2k_{F}-Q\right)$, and $\Delta_Q=\alpha_Q k_{\rm SO}$.
\subsection{\label{sec:periodicConduct} Conductance}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{Periodicdipssqueezed.png}
\end{center}
\caption{\label{fig:PeriodicCondDips}
(a)
Top: Schematic dispersion for ${\cal H}_0$, Eq.~\eqref{eq:lateralH0}, with uniform spin orbit coupling and Zeeman field. Different colors represent opposite out-of-plane spin projections, and the vertical offset is due to an out of plane Zeeman magnetic field term.
Bottom: Zoom-in on the dashed square of the top panel presenting a schematic dispersion of a 1D single-spin fermion without (dashed line) and with (solid line) a periodic potential, originating in the spatially modulated spin-orbit interaction, Eq.~\eqref{eq:lateralHQ}. When the lateral modulation wave vector $Q$ matches $2k_F$, one has $\tilde{\mu}=0$ in Eq.~\eqref{eq:periodicHamiltonianTransformed}, and the Fermi level (dashed black line) lies within the induced gap.
(b) Two-terminal conductance of a mode subjected to a spatially periodic potential. We use Eq.~\eqref{eq:condformula} and Eqs.~\eqref{eq:TLperiodic}--\eqref{eq:transmissionlateral} to calculate the conductance. The different traces correspond to different values of $T_L=v/L$ (from left to right) between $30$ $\mu$eV and $10$ $\mu$eV in steps of $1$ $\mu$eV. (We expect that experimentally the application of magnetic field will affect the velocity of the mode, $v$, in the wire as discussed in Ref.~\cite{JL_dip_lateral}.) For clarity, we mark the direction of increasing $T_L$ by an arrow, and consecutive traces are shifted horizontally by $14$ $\mu$eV. We use the parameters $T=25$ mK, $\Delta_Q=10 k_B T$, and $\mu_Q=12$~$\mu$eV for all traces.
(c) Similar to (b), but now varying the temperature between $2$--$6$ $\mu$eV in steps of $0.2$ $\mu$eV, and keeping $T_L=20\;\mu$eV constant. For clarity, we mark the direction of increasing $T$, and consecutive traces are shifted horizontally by $14$~$\mu$eV. Here we use $\Delta_Q=20$~$\mu$eV and $\mu_Q=12$~$\mu$eV for all traces.
}
\end{figure}
When the spin-orbit spatial frequency $Q$ exactly matches $2k_F$, so that $\tilde \mu =0$, the impact of $\Delta_Q$ is maximal, as illustrated in Fig.~\ref{fig:PeriodicCondDips}a. The transmission coefficient for a (non-interacting) system of length $L$ at the center of the gap is given by (see Appendix \ref{app:periodicscattering}):
\begin{equation}
{\cal T}_{L}=\frac{1}{\cosh^{2}\left(\frac{\Delta_Q}{v}L\right)},\label{eq:TLperiodic}
\end{equation}
and thus we can use Landauer's two-terminal conductance formula in Eq.~\eqref{eq:condformula} to calculate the conductance with the approximated transmission function
\begin{equation}\label{eq:transmissionlateral}
{\cal{T}}\left(\epsilon\right)=\begin{cases}
0, & \epsilon<0\\
1, &0\leq\epsilon<\mu_Q-\Delta_Q\\
{\cal T}_L, & \mu_Q-\Delta_Q\leq\epsilon<\mu_Q+\Delta_Q\\
1, &\mu_Q+\Delta_Q\leq\epsilon
\end{cases},
\end{equation}
with $\mu_Q=\frac{Qv}{2}$.
[Notice that the chemical potential used in Eq.~\eqref{eq:condformula} is defined with respect to the bottom of the linearized band in Eq.~\eqref{eq:periodicHamiltonianTransformed}. The Fermi level in this model thus coincides with $\mu=vk_F$, and when it is equal to $\mu_Q$ one finds $\tilde{\mu}=0$, and the backscattering is resonant.]
In Eq.~\eqref{eq:transmissionlateral} we have simplified the transmission function, such that the transmission within the gap is approximately constant, and outside the gap it is unity. This simplified form is sufficient to capture the experimental features. The accurate transmission coefficient of the effective scattering problem may be found in Appendix~\ref{app:periodicscattering}.
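The simplified transmission above, combined with the thermally broadened Landauer integral $G=\frac{e^2}{h}\int d\epsilon\,{\cal T}(\epsilon)\left(-\partial_\epsilon f\right)$ of Eq.~\eqref{eq:condformula}, is straightforward to evaluate numerically. The Python sketch below illustrates this under that assumption (the function names and grid parameters are ours, not taken from the references); all energies are in the same units, e.g.\ $\mu$eV.

```python
import numpy as np

def transmission(eps, mu_Q, delta_Q, T_L):
    """Piecewise transmission of Eq. (transmissionlateral): suppressed to
    T_L inside the gap window, zero below the band bottom, unity elsewhere."""
    t_gap = 1.0 / np.cosh(delta_Q / T_L) ** 2          # Eq. (TLperiodic)
    t = np.ones_like(eps)
    in_gap = (eps >= mu_Q - delta_Q) & (eps < mu_Q + delta_Q)
    t[in_gap] = t_gap
    t[eps < 0.0] = 0.0                                  # band bottom takes precedence
    return t

def conductance(mu, temp, mu_Q, delta_Q, T_L):
    """Two-terminal conductance in units of e^2/h via the thermally
    broadened Landauer integral at chemical potential mu."""
    eps = np.linspace(mu - 50.0 * temp, mu + 50.0 * temp, 20001)
    minus_df = 0.25 / temp / np.cosh((eps - mu) / (2.0 * temp)) ** 2
    deps = eps[1] - eps[0]
    return float(np.sum(transmission(eps, mu_Q, delta_Q, T_L) * minus_df) * deps)
```

With the parameters of panel (b) ($T=2.15$~$\mu$eV, $\Delta_Q=21.5$~$\mu$eV, $\mu_Q=12$~$\mu$eV), sweeping $\mu$ produces a dip whose depth approaches ${\cal T}_L$ as $T_L$ decreases.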
Calculating the conductance with a temperature of $25$~mK~$\approx 2.15$~$\mu$eV, comparable to the experimental temperature, $\Delta_Q$ ten times larger, and $T_L\equiv v/L$ varying between roughly $15$ and $5$ times $T$, we obtain the conductance depicted in Fig.~\ref{fig:PeriodicCondDips}b. A ``shoulder'' in the conductance around $0.6 e^2/h$ for large $T_L$ develops into a pronounced dip for small $T_L$. Similar behavior is observed in the experiment (see Fig. 2 of Ref. \cite{JL_dip_lateral}) when the out-of-plane magnetic field is varied.
We expect that the magnetic field will affect the velocity of the modes in the wire as can be ascertained from our expression for $k_F$, and hence $T_L$ is expected to vary, as we plot in Fig.~\ref{fig:PeriodicCondDips}b.
\subsection{\label{sec:periodicplusInteractions} Interactions}
The discussion of the lateral modulation so far does not include a key ingredient of the experimental system, strong electron-electron interactions. We shall account for this by using, similar to the vertical case, the bosonization technique. At $\tilde{\mu}=0$, the bosonized Hamiltonian of the system in question, connected to infinite non-interacting leads at both its ends, may be written in the form
\begin{align}
{\cal H}=\frac{\tilde{v}}{2\pi}\left[ \frac{1}{K\left(x\right)}\left(\partial_x\phi\right)^2+K\left(x\right)\left(\partial_x\theta\right)^2\right]\nonumber\\
+w\left(x\right)\frac{\Delta_Q}{2\pi\alpha}\cos\left(2\phi\right),
\end{align}
where $w\left(x\right)$ is unity within the region $0<x<L$ and zero outside of it, and $K\left(x\right)=K$ within that same region and $K\left(x\right)=1$ in the leads. The effects of the interaction are captured by the modification $v\to\tilde{v}$, and by $K$, and for simplicity we neglect the effect of different Fermi velocities in different regions of the system.
In contrast to our discussion in Sec.~\ref{sec:fractionalcond}, in the laterally modulated case we do not assume a modulated interaction, and thus
$K>1$ corresponds to attractive interactions and $K<1$ to repulsive interactions, as usual.
The lowest order RG equation for the flow of $\Delta_Q$ is
\begin{equation}
\frac{d}{d\ell}\Delta_Q=\left(2-K\right)\Delta_Q,\label{eq:RGperiodic}
\end{equation}
showing that the periodic perturbation may be relevant even for moderately strong attractive interactions, as long as $K<2$. We consider the regime in which $T\ll T_L \apprle \Delta_Q$, such that the RG flow is cut off by the length scale of the system. Thus, at energy scales below $T_L$ we are left with the Hamiltonian of a simple backscattering impurity center embedded in an interaction-free Luttinger liquid (the leads connected to the system). The strength of this effective impurity relative to $\Delta_Q$ depends on the nature of the interactions in the system. Since the scaling of the gap in such a regime goes as $\Delta_Q=\Delta_Q^0\left(\frac{L}{\alpha_0}\right)^{1-K}$, with $\Delta_Q^0$ the bare gap value, the transmission coefficient from Eq.~\eqref{eq:TLperiodic} may be replaced by the approximation (valid in the vicinity of $K\approx 1$)
\begin{equation}
{\cal T}_{L}^*=\frac{1}{\cosh^{2}\left[\frac{\Delta_Q^0}{W}\left(\frac{L}{\alpha_0}\right)^{2-K}\right]},\label{eq:Treansmissionperiodicinteracting}
\end{equation}
and once again $W=\tilde{v}/\alpha_0$ is the bandwidth parameter. By measuring the conductance of otherwise identical systems of varying length, Eq.~\eqref{eq:Treansmissionperiodicinteracting} provides a probe of the interaction strength in the modulated waveguide, as well as of its nature (attractive or repulsive).
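As a sketch of how such a length-dependence probe could be analyzed (the parameter values below are arbitrary, chosen only for illustration), one can integrate the flow of Eq.~\eqref{eq:RGperiodic} at constant $K$ up to the cutoff scale $\ell^*=\ln\left(L/\alpha_0\right)$, recovering the power law that enters Eq.~\eqref{eq:Treansmissionperiodicinteracting}, and then evaluate the resulting transmission:

```python
import math

def flow_gap(delta0, K, ell_star, n=100000):
    """Euler-integrate d(Delta_Q)/d(ell) = (2-K) Delta_Q up to
    ell* = ln(L/alpha0); the exact solution is delta0 * exp((2-K) ell*)."""
    d, dl = delta0, ell_star / n
    for _ in range(n):
        d += (2.0 - K) * d * dl
    return d

def t_star(L_over_a0, K, delta0, W):
    """Length-renormalized transmission, Eq. (Treansmissionperiodicinteracting)."""
    return 1.0 / math.cosh((delta0 / W) * L_over_a0 ** (2.0 - K)) ** 2
```

For $K<2$ the transmission decays with length, and at fixed length it is larger for attractive ($K>1$) than for repulsive ($K<1$) interactions.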
We finally comment on the effect of temperature in such a regime. As long as the system remains in the regime $T<T_L$, a change of temperature will have a negligible impact on the transmission coefficient, as the model still reduces to an effective impurity backscattering center problem in a non-interacting system (i.e., the infinite leads). Thus, the \textit{value} of the conductance dip around $\tilde{\mu}=0$ should not change with temperature, yet its shape would be blurred (and eventually vanish) when the system is heated up. This is precisely the trend observed in Fig. 3 of Ref. \cite{JL_dip_lateral}, and is recreated with sensible parameters in our Fig.~\ref{fig:PeriodicCondDips}c. This is strong evidence supporting our conclusions regarding the parameter regime, as well as the origin of the finite transmission plateau at low densities.
\section{\label{sec:conclusions} Conclusions}
Electron waveguides created at LaAlO$_3$/SrTiO$_3$ interfaces have proven in recent years to be new and exciting platforms for studying highly correlated electron physics. The experiments addressed in this work, Refs. \cite{JL_95_conductance_vertical,JL_dip_lateral}, explored the effect of waveguide modulation on electron transport. Two novel features were found for the two different kinds of modulation.
For the vertical case, where the ``writing'' potential oscillated along the wire, plateaus in the two-terminal conductance as a function of gate voltage appeared at fractional values of the quantum of conductance. The appearance of these plateaus depended on the magnetic field as well as the fillings of the modes. With lateral modulation, creating a serpentine-like trajectory for the electrons, an intriguing conductance dip emerged in the supposedly singly-occupied-mode regime. This dip appeared to vary its value, and to some extent its shape, when the external magnetic field was swept.
In this manuscript, we have presented theoretical frameworks which can account for these unusual transport phenomena. We have argued that the experimental data for the vertically modulated waveguides is consistent with the existence of two strongly interacting electronic modes, whose filling is commensurate with one another.
An asymptotic theoretical analysis of the conductance for a 2:1 filling ratio
yields a plateau with conductance of $9/5\,e^2/h$ for a certain range of magnetic field. Similarly, a 3:1 ratio yields a conductance of $8/5\,e^2/h$.
Remarkably, these two filling scenarios were previously predicted to be the most susceptible to the opening of a partial gap in the system, and thus to the stabilization of fractional conductance signatures \cite{fractionalGSYO}. The shapes of the conductance plateaus were calculated using a re-fermionization technique in a strongly coupled regime (akin to a generalized Luther-Emery point \cite{LutherEmery}), and were found to bear qualitative resemblance to the reported experiments.
In the current work we have further argued that the spatial modulation is indispensable to the stabilization of the high-order backscattering gap in the presence of attractive interactions. We conjecture that the main role of the modulation is in making the interaction itself oscillate and peak at a specific wavevector $q^*$, through the mechanism discussed in Refs. \cite{laosto_Lifshitz_altman_ruhman,JL_LAOSTO_PRX}. We have shown here that such an interaction indeed supports the formation of a gap leading to fractional plateaus, both in the weak-coupling RG sense and in the strong coupling picture. We have demonstrated that the second remarkable feature observed in these waveguides, an enhancement of the electron pairing, may also be explained by the same modulated interaction. This lends credence to our claim that vertical modulation of the electronic waveguide can lead to a periodic interaction felt by the electrons.
The appearance of two-terminal conductance plateaus at rational fractions of the quantum of conductance ${e^2}/{h}$ with the introduction of periodic modulation to the system has profound implications. We have demonstrated that in such a scenario, contrary to previous studies concerning this fractional phenomenon, the partial gap due to strong interactions may be stabilized by electron-electron \textit{attraction}. This suggests that the fractional conductance anomaly is perhaps more ubiquitous than it is currently believed to be, and may be realized at certain parameter regimes in other experimental platforms.
We speculate that the attractive nature of the interactions is responsible for the relative robustness of the plateaus as compared to, e.g., the plateaus observed in Ref. \cite{PepperFractional}, where the experiments were performed in GaAs based split-gate quantum wires with repulsive interactions. The attraction would generically make the residual impurity backscattering, which is expected to deteriorate the transport in the repulsive scenario, less relevant. Thus, one would expect the values of the conductance plateaus to be much closer to their asymptotic values calculated by the method of Sec. \ref{sec:fracconductancdetails}.
As mentioned earlier, the conductance dip at certain fillings of the laterally modulated waveguide is qualitatively distinct from the plateau observed with vertical modulation, suggesting the two have different origins.
We attribute it to an effective periodic potential felt by the propagating electrons, presumably originating in a modulation-induced spatially-periodic spin-orbit interaction. When this spin-orbit potential provides the correct momentum for a single-particle backscattering event, i.e., when $Q=2k_F$, the electronic mode develops a gap. For long enough waveguides, this would lead to a total suppression of the two-terminal conductance at low temperatures. However, as we explain, the experimental data suggests that the finite conductance found on this resonance is due to the finite length of the system. The observed results are consistent with the energy scale $T_L$ (which is inversely proportional to the waveguide length) and the gap energy being of comparable sizes, while the temperature is much smaller than both.
Interactions play an important role in the lateral modulation as well. They tend to renormalize the size of the gap and make it larger for repulsive interactions and moderately attractive ones, or diminish it in the case of strong enough attraction. The continuous shift of the conductance dip value as the magnetic field is swept in the experiment with lateral modulation can thus be attributed also to a change of the effective interactions, which by affecting the renormalized Fermi velocity or the gap, alter the ratio $\Delta_Q/T_L$ (defined in Sec. \ref{sec:periodicConduct} and Fig.~\ref{fig:PeriodicCondDips}). Furthermore, assuming the relevant experimental regime corresponds to the gap renormalization being cut off by the finite system length $L$, varying $L$ while measuring the change in the conductance would allow one to ascertain the strength of the interaction and possibly verify its attractive nature.
As we conjecture at the beginning of Sec.~\ref{sec:periodicpotential}, the transport may indicate the presence of modulated spin-orbit interactions. While these experiments~\cite{JL_dip_lateral} were conducted at high field, the modulated spin-orbit coupling may lead to interesting phenomena in the absence of a magnetic field. For example, tuning the modulation wavelength and shape may enable designing high quality spin transistors with no ferromagnetic reservoirs \cite{spinTransistor}.
\section*{Acknowledgements}
We thank Jeremy Levy for fruitful discussions of his experimental data.
\paragraph{Funding information}
This work was partially supported by the European Union’s Horizon 2020 research and innovation programme (Grant Agreement LEGOTOP No. 788715), the DFG (CRC/Transregio 183, EI 519/7-1), and the Israel Science Foundation (ISF) and by a grant from the Binational Science Foundation (BSF) and the National Science Foundation (NSF).
\begin{appendix}
\section{Bosonized Hamiltonian parameters}\label{app:hParameters}
In Sec.~\ref{sec:fractionalcond} we discuss the role of vertical modulation and use a bosonized formulation of the model, see Eq.~\eqref{eq:bosonizedHamiltonianTotal}.
For the sake of completeness, we present here the general form of the parameters used in it, in terms of the fermionic velocities and interactions appearing in Eqs.~\eqref{eq:linearizedH0}$-$\eqref{eq:linearizedHint}. Assuming for simplicity $v_1=v_2\equiv v$, we find
\begin{equation}
u_{g}=v\sqrt{1-\left(\frac{g_{1}+4g_{2}-4g_{\perp}}{10\pi v}\right)^{2}},\,\,\,\,u_{f}=v\sqrt{1-\left(\frac{4g_{1}+g_{2}+4g_{\perp}}{10\pi v}\right)^{2}},
\end{equation}
\begin{equation}
K_{g}=\sqrt{\frac{10\pi v-g_{1}-4g_{2}+4g_{\perp}}{10\pi v+g_{1}+4g_{2}-4g_{\perp}}},\,\,\,\,K_{f}=\sqrt{\frac{10\pi v-4g_{1}-g_{2}-4g_{\perp}}{10\pi v+4g_{1}+g_{2}+4g_{\perp}}},
\end{equation}
\begin{equation}
V_{\phi/\theta}=\mp\frac{2}{5\pi}\left(g_{2}-g_{1}+\frac{3}{2}g_{\perp}\right). \label{eq:appaVphitheta}
\end{equation}
The connection between the coupling constants and the interaction matrix $\mathbf{U}$ is given in Eq. \eqref{eq:gUmatrixconnection}. In the main text we use the simplification $\mathbf{U}_{0}^{11}\approx\mathbf{U}_{0}^{22}\approx\mathbf{U}_{2k_{1,F}}^{11}\equiv U$ for $K_g$ in Eq.~\eqref{eq:KgSimplified}, and the additional assumption $g_\perp \approx U$ when discussing the role of $K_f$ in Sec.~\ref{sec:experimentalconsequence}. We note that under the same assumptions, one finds the expression $V_{\phi/\theta}=\mp\frac{1}{\pi}\left(U-\frac{2}{5}\mathbf{U}_{2k_{2,F}}^{22}\right)$. As discussed in Sec.~\ref{sec:spatialmodulations}, these cross interactions affect the scaling dimension of the $\phi_g$ sector, yet in a quantitatively modest manner.
In Sec.~\ref{sec:enhanced_pairing} we discuss a modified Hamiltonian, valid around $k_{1,F}=k_{2,F}$, see Eq.~\eqref{eq:bosonizedPairingHamiltonianTotal}. The expressions for the parameters that go into it may be expressed as
\begin{equation}
u_{+}=v\sqrt{1-\left(\frac{U_{2k_F}-2\mathbf{U}_0}{2\pi v}\right)^2},\,\,\,\,u_{-}=v\sqrt{1-\left(\frac{U_{2k_F}}{2\pi v}\right)^2},
\end{equation}
\begin{equation}
K_+=\sqrt{\frac{1+U_{2k_F}/\left(2\pi v\right)-2\mathbf{U}_{0}/\left(2\pi v\right)}{1-U_{2k_F}/\left(2\pi v\right)+2\mathbf{U}_{0}/\left(2\pi v\right)}},\,\,\,\,K_-=\sqrt{\frac{1+U_{2k_F}/\left(2\pi v\right)}{1-U_{2k_F}/\left(2\pi v\right)}},
\end{equation}
where we have used the simplifications $v_1\approx v_2 \equiv v$, $\mathbf{U}^{11}_{0}\approx\mathbf{U}^{22}_{0}\approx\mathbf{U}^{12}_{0}\equiv\mathbf{U}_{0}$, and $\mathbf{U}^{11}_{2k_{F}}\approx\mathbf{U}^{22}_{2k_{F}}\approx\mathbf{U}^{12}_{2k_{F}}\equiv{U}_{2k_F}$.
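As a quick consistency check on these expressions (a sketch; the helper names and sample couplings are ours), each pair $\left(u_\pm,K_\pm\right)$ above obeys the standard Luttinger-liquid relation $uK=v\left(1+g/2\pi v\right)$ with the appropriate coupling $g$:

```python
import math

def luttinger_params_minus(x, v=1.0):
    """(u_-, K_-) from the text, with x = U_{2k_F} / (2 pi v)."""
    return v * math.sqrt(1.0 - x * x), math.sqrt((1.0 + x) / (1.0 - x))

def luttinger_params_plus(x, y, v=1.0):
    """(u_+, K_+) from the text, with x = U_{2k_F} / (2 pi v)
    and y = 2 U_0 / (2 pi v)."""
    return (v * math.sqrt(1.0 - (x - y) ** 2),
            math.sqrt((1.0 + x - y) / (1.0 - x + y)))
```

In particular, both pairs reduce to $u=v$, $K=1$ when the couplings vanish.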
\section{Solving the scattering problem} \label{app:periodicscattering}
Here we solve the scattering problem we discuss in Sec.~\ref{sec:periodicpotential}. Let us rewrite Eq.~\eqref{eq:periodicHamiltonianTransformed} in a more convenient form,
\begin{equation}
H=\int dx\, \Psi^\dagger{\cal H}_{\rm scat}\Psi,\; {\rm with} \;\; {\cal H}_{\rm scat}=iv\partial_x\sigma_z-\tilde{\mu}+\Delta_Q\sigma_x.
\end{equation}
Here $\sigma_i$ are Pauli matrices and $\Psi=\left(\psi_R,\,\psi_L\right)^T$. We solve the Schr\"odinger equation ${\cal H}_{\rm scat}\Psi=E\Psi$ using the ansatz $\Psi\left(x\right)={\rm exp}\left[Fx/v\right]\Psi\left(0\right)$, which is justified for a translation-invariant Hamiltonian. One readily finds:
\begin{equation}
F=\Delta_Q\sigma_{y}-i\left(E+\tilde{\mu}\right)\sigma_{z}.
\end{equation}
To solve the scattering problem we set the boundary conditions $\Psi\left(0\right)=\left(1,\,r\right)^T$, $\Psi\left(L\right)=\left(t,\,0\right)^T$, and find the transmission coefficient as ${\cal T}=\left|t\right|^2$. Overall, using $T_L\equiv v/L$, we find
\begin{equation}
t=\left({\cosh\sqrt{\left(\frac{\Delta_{Q}}{T_{L}}\right)^{2}-\left(\frac{E+\tilde{\mu}}{T_{L}}\right)^{2}}+i\frac{E+\tilde{\mu}}{\sqrt{\left(\Delta_{Q}\right)^{2}-\left(E+\tilde{\mu}\right)^{2}}}\sinh\sqrt{\left(\frac{\Delta_{Q}}{T_{L}}\right)^{2}-\left(\frac{E+\tilde{\mu}}{T_{L}}\right)^{2}}}\right)^{-1},
\end{equation}
from which we recover Eq. \eqref{eq:TLperiodic} for $E=\tilde{\mu}=0$.
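Since $F^2=\left[\Delta_Q^2-\left(E+\tilde{\mu}\right)^2\right]\mathbb{1}$, the matrix exponential ${\rm exp}\left[FL/v\right]$ has a simple closed form, which makes the result easy to verify numerically. The following Python sketch (with names and sample values of our choosing) builds the transfer matrix and extracts $t$ and $r$ from the stated boundary conditions:

```python
import numpy as np

SY = np.array([[0.0, -1.0j], [1.0j, 0.0]])
SZ = np.array([[1.0, 0.0], [0.0, -1.0]])

def amplitudes(E, mu_t, delta, v, L):
    """Transmission/reflection amplitudes for the gapped linearized mode.
    Uses exp(F L / v) with F = delta*sigma_y - i(E+mu_t)*sigma_z; since
    F^2 = (delta^2 - (E+mu_t)^2) * 1, exp(F x) = cosh + sinh * F / lam."""
    eps = E + mu_t
    lam = np.sqrt(complex(delta ** 2 - eps ** 2))   # imaginary outside the gap
    theta = lam * L / v
    F = delta * SY - 1.0j * eps * SZ
    M = np.cosh(theta) * np.eye(2) + (np.sinh(theta) / lam) * F
    # Psi(L) = M Psi(0), with Psi(0) = (1, r)^T and Psi(L) = (t, 0)^T:
    r = -M[1, 0] / M[1, 1]
    t = M[0, 0] + M[0, 1] * r
    return t, r
```

At the gap center ($E=\tilde{\mu}=0$) this reproduces ${\cal T}_L$ of Eq.~\eqref{eq:TLperiodic}, and current conservation, $\left|t\right|^2+\left|r\right|^2=1$, holds at all energies.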
\end{appendix}
2103.00461
\section{Introduction}
Consider the damped biharmonic plate equation in three dimensions
\begin{align}\label{eqn}
\Delta^2u(x, k) - k^2 u(x, k) - {\rm i}k\sigma u(x, k) = f(x), \quad x\in\mathbb R^3,
\end{align}
where $k>0$ is the wavenumber, $\sigma>0$ is the damping coefficient, and $f\in L^2(\mathbb R^3)$ is assumed to be a real-valued function with compact support contained in $B_R=\{x\in\mathbb R^3: |x|<R\}$, where $R>0$ is a constant. Let $\partial B_R$ be the boundary of $B_R$. Since the problem is formulated in an open domain, the Sommerfeld radiation condition is usually imposed on $u$ and $\Delta u$ to ensure the well-posedness of the problem \cite{TS}. This paper is concerned with the inverse source problem of determining $f$ from the boundary measurements
\begin{align*
u(x, k), \, \nabla u(x, k), \, \Delta u(x, k), \, \nabla \Delta u(x, k), \quad x\in\partial B_R
\end{align*}
corresponding to the wavenumber $k$ given in a finite interval.
In general, there is no uniqueness for the inverse source problems of wave equations at a fixed frequency \cite{BLT, LYZ}. Computationally, a more serious issue is the lack of stability: a small variation of the data might lead to a huge error in the reconstruction. Hence it is crucial to examine the stability of inverse source problems. In \cite{BLT}, the authors initiated the study of the inverse source problem for the Helmholtz equation by using multi-frequency data. Since then, inverse source problems with multiple frequency data have become an active research topic as a way to overcome the non-uniqueness issue and enhance stability. Increasing stability was investigated for the inverse source problems of various wave equations, including the acoustic, elastic, and electromagnetic wave equations \cite{BLZ, CIL, EI-18, EI-20, LY, LZZ} and the Helmholtz equation with attenuation \cite{IL}. On the other hand, boundary value problems for higher-order elliptic operators have generated sustained interest in the mathematics community \cite{GGS}. The biharmonic operator, which is encountered, for example, in models originating from elasticity, appears as a natural candidate for such a study \cite{RR, MMM}. Compared with equations involving second order differential operators, model equations with biharmonic operators are much less studied in the inverse problems community. We refer to \cite{AP, Iw, KLU, LKU, TS} and the references cited therein for the recovery of lower-order coefficients by using either the far-field pattern or the Dirichlet-to-Neumann map on the boundary. In a recent paper \cite{LYZ}, the authors demonstrated increasing stability for the inverse source problem of the biharmonic operator with a zeroth order perturbation by using multi-frequency near-field data.
The main ingredient of the analysis relies on the study of an eigenvalue problem for the biharmonic operator with the hinged boundary conditions. But the method is not applicable directly to handle the biharmonic operator with a damping coefficient.
Motivated by \cite{CIL, IL}, we use the Fourier transform in time to reduce the inverse source problem to the identification of the initial data for the initial value problem of the damped biharmonic plate wave equation by lateral Cauchy data. A Carleman estimate is utilized to obtain an exact observability bound for the source function in the framework of the initial value problem for the corresponding wave equation, which connects the scattering data and the unknown source function through the inverse Fourier transform. An appropriate rate of time decay for the damped plate wave equation is proved in order to justify the Fourier transform. Then, applying the results in \cite{LYZ} on the resolvent of the biharmonic operator, we obtain a resonance-free region of the data with respect to the complex wavenumber and a bound for the analytic continuation from the given data to higher wavenumber data. By studying how the analytic continuation and the exact observability bound for the damped plate wave equation depend on the damping coefficient, we show the exponential dependence of the increasing stability on the damping constant. The stability estimate consists of the Lipschitz type of data discrepancy and the high wavenumber tail of the source function. The latter decreases as the wavenumber of the data increases, which implies that the inverse problem is more stable when higher wavenumber data is used. But the stability deteriorates as the damping constant becomes larger. It should be pointed out that, due to the presence of the damping coefficient, we cannot obtain a sectorial resonance-free region for the data as in \cite{CIL, LY}. Instead, we choose a rectangular resonance-free region as in \cite{LZZ}, which leads to a double logarithmic type of high wavenumber tail in the estimate.
This paper is organized as follows. In section \ref{resolvent analysis}, the direct source problem is discussed; the resolvent is introduced for the elliptic operator, and its resonance-free region and upper bound are obtained. Section \ref{inverse problem} is devoted to the stability analysis of the inverse source problem by using multi-frequency data. In appendix \ref{exact}, we use the Carleman estimate to derive an exact observability bound with exponential dependence on the damping coefficient. In appendix \ref{time decay}, we prove an appropriate rate of time decay for the damped plate wave equation to justify the Fourier transform.
\section{The direct source problem}\label{resolvent analysis}
In this section, we discuss the solution of the direct source problem and study the resolvent of the biharmonic operator with a damping coefficient.
\begin{theorem}
Let $f\in L^2(\mathbb R^3)$ with a compact support. Then there exists a unique Schwartz distributional solution $u$ to \eqref{eqn} for every $k>0$. Moreover, the solution satisfies
\[
|u(x, k)|\leq C(k, f)e^{-c(k, \sigma)|x|}
\]
as $|x|\to\infty,$ where $C(k, f)$ and $c(k, \sigma)$ are positive constants depending on $k, f$ and $k, \sigma$, respectively.
\end{theorem}
\begin{proof}
Taking the Fourier transform of $u(x, k)$ formally with respect to the spatial variable $x$, we define
\begin{align*}
u^*(x, k) = \int_{\mathbb R^3} e^{{\rm i}x\cdot\xi} \frac{1}{|\xi|^4 - k^2 - {\rm i}k\sigma} \hat{f}(\xi) {\rm d}\xi, \quad x\in\mathbb R^3,
\end{align*}
where
\[
\hat{f}(\xi) = \frac{1}{(2\pi)^3} \int_{\mathbb R^3} f(x) e^{-{\rm i}x\cdot\xi} {\rm d}x.
\]
It follows from the Plancherel theorem that for each $k>0$ we have that $u^*(\cdot, k)\in H^4(\mathbb R^3)$ and satisfies the equation \eqref{eqn} in the sense of Schwartz distribution.
Denote
\[
G(x, k) = \int_{\mathbb R^3} e^{{\rm i}x\cdot\xi} \frac{1}{|\xi|^4 - k^2 - {\rm i}k\sigma}{\rm d}\xi.
\]
By a direct calculation we can write $u^*(x, k)$ as
\begin{align}\label{u^*}
u^*(x, k) = (G*f)(x) = \frac{1}{2\kappa^2}\int_{\mathbb R^3} \Big(\frac{e^{{\rm i}\kappa |x - y|}}{4\pi|x - y|} - \frac{e^{-\kappa |x - y|}}{4\pi|x - y|}\Big) f(y){\rm d}y,
\end{align}
where $\kappa = (k^2 + {\rm i}k\sigma)^{\frac{1}{4}}$ such that $\Re \kappa>0$ and $\Im \kappa>0$. Since $f$ has a compact support, we obtain from \eqref{u^*} that the solution $u^*(x, k)$ satisfies the estimate
\[
|u^*(x, k)|\leq C(k, f)e^{-c(k, \sigma)|x|}
\]
as $|x|\to\infty$, where $C(k, f)$ and $c(k, \sigma)$ are positive constants depending on $k, f$ and $k, \sigma$, respectively. By direct calculations, we may also show that $\nabla u^*$ and $\Delta u^*$ have similar exponential decay estimates.
Next we show uniqueness. Let $\tilde{u}^*(x, k)$ be another Schwartz distributional solution to \eqref{eqn}. Clearly we have
\begin{align*}
(\Delta^2 - k^2 - {\rm i}k\sigma) (u^* - \tilde{u}^*) = 0.
\end{align*}
Taking the Fourier transform on both sides of the above equation yields
\[
(|\xi|^4 - k^2 - {\rm i}k\sigma) ( \widehat{u^* - \tilde{u}^*}) (\xi) = 0.
\]
Notice that for $k>0$ we have $|\xi|^4 - k^2 - {\rm i}k\sigma\neq 0$ for all $\xi\in\mathbb R^3$. Taking the generalized inverse Fourier transform gives $u^* - \tilde{u}^* = 0$, which proves the uniqueness.
\end{proof}
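The kernel splitting in \eqref{u^*} rests on the partial-fraction identity $\frac{1}{|\xi|^4-\kappa^4}=\frac{1}{2\kappa^2}\left(\frac{1}{|\xi|^2-\kappa^2}-\frac{1}{|\xi|^2+\kappa^2}\right)$, combined with the known fundamental solutions of the Helmholtz and modified Helmholtz operators. A quick numerical sanity check of the identity (a sketch, with arbitrary sample values of $k$ and $\sigma$):

```python
def kernel_lhs(xi2, k, sigma):
    """Fourier multiplier 1/(|xi|^4 - k^2 - i k sigma), with xi2 = |xi|^2."""
    return 1.0 / (xi2 ** 2 - k ** 2 - 1j * k * sigma)

def kernel_rhs(xi2, k, sigma):
    """Partial-fraction form underlying Eq. (u^*)."""
    kappa = (k ** 2 + 1j * k * sigma) ** 0.25   # principal root: Re > 0, Im > 0
    return (1.0 / (2.0 * kappa ** 2)) * (1.0 / (xi2 - kappa ** 2)
                                         - 1.0 / (xi2 + kappa ** 2))
```

Each summand on the right is then inverted using the familiar kernels $e^{{\rm i}\kappa|x|}/4\pi|x|$ and $e^{-\kappa|x|}/4\pi|x|$.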
To study the resolvent we let
\[
u^*(x, \kappa) := u(x, k), \quad \kappa = (k^2 + {\rm i}k\sigma)^{\frac{1}{4}},
\]
where $\Re \kappa>0$ and $\Im\kappa>0$. By \eqref{eqn}, $u^*$ satisfies
\begin{align*}
\Delta^2 u^* - \kappa^4 u^*= f.
\end{align*}
Denote by $\mathcal{R} = \{z\in\mathbb C: \Re z\in(\delta, +\infty),\, \Im z\in(-d, d)\}$ the infinite rectangular slab, where $\delta$ is any positive constant and $d\ll 1$. For $k\in\mathcal{R}$, denote the resolvent
\[
R(k) := (\Delta^2 - k^2 - {\rm i}k\sigma)^{-1}.
\]
Then we have $R(\kappa) = (\Delta^2 - \kappa^4)^{-1}$. Hereafter, the notation $a\lesssim b$ stands for $a\leq Cb,$ where $C>0$ is a generic constant which may change step by step in the proofs.
\begin{lemma}\label{direct}
For each $k\in\mathcal{R}$ and $\rho\in C_0^\infty(B_R)$ the resolvent operator $R(k)$ is analytic and has the following estimate:
\begin{align*
\|\rho R(k) \rho\|_{L^2(B_R)\rightarrow H^j(B_R)} \lesssim |k|^{\frac{j}{2}} e^{2R(\sigma + 1)|k|^{\frac{1}{2}}}, \quad j = 0, 1, 2, 3, 4.
\end{align*}
\end{lemma}
\begin{proof}
It is clear that for a sufficiently small $d$, the set $\{(k^2 + {\rm i}k\sigma)^{\frac{1}{4}}: k\in\mathcal{R}\}$ belongs to the first quadrant. Consequently, $(k^2 + {\rm i}k\sigma)^{\frac{1}{4}}$ is analytic with respect to $k\in\mathcal{R}$. By \cite[Theorem 2.1]{LYZ}, the resolvent $R(\kappa)$ is analytic in $\mathbb{C}\backslash\{0\}$ and the following estimate holds:
\begin{align}\label{free}
\|\rho R(\kappa) \rho\|_{L^2(B_R)\rightarrow H^j(B_R)}\lesssim |\kappa|^{-2}\langle\kappa\rangle^j (e^{2R (\Im\kappa)_-} + e^{2R (\Re\kappa)_-}), \quad j=0, 1, 2, 3, 4,
\end{align}
where $x_{-}:=\max\{-x,0\}$ and $\langle\kappa\rangle = (1 + |\kappa|^2)^{1/2}$. On the other hand, letting $k = k_1 + {\rm i}k_2$, we have from a direct calculation that
\[
k^2 + {\rm i}k\sigma = k_1^2 - k_2^2 - k_2\sigma + (2k_1k_2 + k_1\sigma){\rm i}.
\]
It is easy to see that if $d$ is sufficiently small, so that $|k_2|$ is sufficiently small, then $|k^2 + {\rm i}k\sigma|$ has a positive lower bound for $k\in\mathcal{R}$, and hence $|\kappa|>c$ for some positive constant $c$. The proof is completed by replacing $\kappa$ with $(k^2 + {\rm i}k\sigma)^{\frac{1}{4}}$ in \eqref{free}.
\end{proof}
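The first step of the proof, that $\kappa=(k^2+{\rm i}k\sigma)^{1/4}$ stays in the open first quadrant for $k\in\mathcal{R}$ when $d$ is small, is easy to check numerically. The sketch below uses sample values $\sigma=1$, $\delta=0.5$, $d=0.05$ (our choices, for illustration only):

```python
def kappa_of(k, sigma):
    """Principal fourth root of k^2 + i*k*sigma; k may be complex."""
    return (k * k + 1j * sigma * k) ** 0.25

def in_first_quadrant(z):
    """True when Re z > 0 and Im z > 0."""
    return z.real > 0.0 and z.imag > 0.0
```

Indeed, writing $k=k_1+{\rm i}k_2$ with $k_1>\delta$ and $|k_2|<d$, one has $\Re(k^2+{\rm i}k\sigma)=k_1^2-k_2^2-k_2\sigma>0$ and $\Im(k^2+{\rm i}k\sigma)=k_1(\sigma+2k_2)>0$, so the principal fourth root has argument in $(0,\pi/8)$.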
\section{The inverse source problem}\label{inverse problem}
In this section, we address the inverse source problem of the damped biharmonic plate equation and present an increasing stability estimate by using multi-frequency scattering data.
Denote
\begin{align*}
\|u(x, k)\|^2_{\partial B_R}&:= \int_{\partial B_R}\Big( ( k^4 + k^2)|u(x, k)|^2 + k^2|\nabla u(x, k)|^2 \\
&\quad+ (k^2 + 1)|\Delta u(x, k)|^2 + |\nabla\Delta u(x, k)|^2\Big) {\rm d}s(x).
\end{align*}
The following lemma provides a relation between the unknown source function and the boundary measurements. Hereafter, by Remark \ref{rd}, we assume that $f\in H^n(B_R)$ where $n\geq 4.$
\begin{lemma}\label{control}
Let $u$ be the solution to the direct scattering problem \eqref{eqn}. Then
\begin{align*}
\|f\|_{L^2(B_R)}^2&\lesssim 2e^{C\sigma^2} \int_{0}^{+\infty} \|u(x, k)\|^2_{\partial B_R} {\rm d}k.
\end{align*}
\end{lemma}
\begin{proof}
Consider the initial value problem for the damped biharmonic plate wave equation
\begin{align}\label{Kirchhoff}
\begin{cases}
\partial_t^2 U(x, t) + \Delta^2 U(x, t) + \sigma\partial_t U(x, t) = 0, &\quad (x, t)\in B_R\times (0, +\infty),\\
U(x, 0) = 0, \quad \partial_tU(x, 0) = f(x), &\quad x\in B_R.
\end{cases}
\end{align}
We define $U(x, t) = 0$ when $t<0$ and denote $U_T(x, t) = U(x, t)\chi_{[0, T]}(t)$ and
\[
\widehat{U_T} (x, k) = \int_0^T U(x, t) e^{{\rm i}kt} {\rm d}t.
\]
By the decay estimate \eqref{de} we have that $U(x, t)\in L_t^2(0, +\infty)$ and $\lim_{T\rightarrow\infty}U_T(x, t) = U(x, t)$ in $L^2_t(\mathbb R)$ uniformly for all $x\in\mathbb R^3$. It follows from the Plancherel Theorem that $\widehat{U_T}$ also converges in $L^2_k(\mathbb R)$ to a function $u_*(x, k)\in L^2_k(\mathbb R)$ uniformly for all $x\in\mathbb R^3$, which implies that $u_*(x, k)$ is the Fourier transform of $U(x, t)$.
Denote by $\langle \cdot, \cdot \rangle$ and $\mathcal{S}$ the usual scalar inner product of $L^2(\mathbb R^3)$ and the space of Schwartz functions, respectively. We take $u_*(x, k)$ as a Schwartz distribution such that $u_*(x, k)(\varphi) = \langle u_*, \varphi \rangle$ for each $\varphi\in\mathcal{S}$. In what follows, we show that $u_* (x, k)$ satisfies the equation \eqref{eqn} in the sense of Schwartz distribution.
First we multiply both sides of the wave equation \eqref{Kirchhoff} by a Schwartz function $\varphi$ and take integration over $\mathbb R^3$. Using the wave equation \eqref{Kirchhoff} and the integration by parts with respect to the $t$ variable over $[0, T]$ for $T>0$, we obtain
\begin{align}\label{identity}
0 &= \int_0^{T} \langle \partial_t^2 U + \Delta^2 U + \sigma\partial_t U, \varphi \rangle e^{{\rm i}kt} {\rm d}t\notag\\
&= e^{{\rm i}kT}\langle \partial_t U(x, T), \varphi \rangle - {\rm i}k e^{{\rm i}kT}\langle U(x, T), \varphi \rangle + \sigma e^{{\rm i}kT}\langle U(x, T),\varphi \rangle\notag\\
&\quad - \langle \partial_t U(x, 0), \varphi \rangle +\Big\langle\int_0^T (\Delta^2 U - k^2 U - {\rm i}k\sigma U) e^{{\rm i}kt} {\rm d}t, \varphi \Big\rangle.
\end{align}
It follows from the decay estimate \eqref{de} that $|\partial_t U(x, t)|, |U(x, t)|\lesssim {(1 + t)^{-\frac{3}{4}}}$ uniformly for all $x\in\mathbb R^3$, which gives
\[
\lim_{T\rightarrow \infty} e^{{\rm i}kT}\langle \partial_t U(x, T), \varphi \rangle = \lim_{T\rightarrow \infty} {\rm i}k e^{{\rm i}kT}\langle U(x, T), \varphi \rangle = \lim_{T\rightarrow \infty} \sigma e^{{\rm i}kT}\langle U(x, T),\varphi \rangle= 0.
\]
On the other hand, we have from the integration by parts that
\begin{align}\label{identity_2}
&\Big\langle\int_0^T (\Delta^2 U - k^2 U - {\rm i}k\sigma U) e^{{\rm i}kt} {\rm d}t, \varphi \Big\rangle\notag\\
&= \Big\langle\int_0^T U {\rm d}t, \Delta^2\varphi\Big\rangle + \Big\langle\int_0^T (- k^2 U - {\rm i}k\sigma U) e^{{\rm i}kt} {\rm d}t, \varphi \Big\rangle.
\end{align}
Since $\lim_{T\rightarrow+\infty} \widehat{U_{T}} (x, k) = u_*(x, k)$ in $L^2_k(\mathbb R)$ uniformly for $x\in\mathbb R^3$, we can choose a positive sequence $\{T_n\}_{n=1}^\infty$ such that
$\lim_{n\rightarrow\infty} T_n= +\infty$ and
$\lim_{n\rightarrow\infty} \widehat{U_{T_n}} (x, k) = u_*(x, k)$ pointwise for a.e. $k\in\mathbb R$ and uniformly for all $x\in\mathbb R^3$. Define a sequence of Schwartz distributions $\{\mathcal{D}_n\}_{n=1}^\infty\subset\mathcal{S}^\prime$ as follows:
\[
\mathcal{D}_n(\varphi) := \langle \widehat{U_{T_n}}, \varphi \rangle, \quad \varphi\in\mathcal{S}.
\]
Since $\lim_{n\rightarrow\infty} \widehat{U_{T_n}} (x, k) = u_*(x, k)$ for a.e. $k\in\mathbb R$ and uniformly for all $x\in\mathbb R^3$, we have
\[
\lim_{n\rightarrow\infty} \mathcal{D}_n(\varphi) = \langle u_*, \varphi \rangle.
\]
Consequently, replacing $T$ by $T_n$ in \eqref{identity_2} and letting $n\rightarrow\infty$, we get
\begin{align*}
&\lim_{n\rightarrow\infty} \Big(\big\langle\int_0^{T_n} U e^{{\rm i}kt} {\rm d}t, \Delta^2\varphi\big\rangle + \big\langle\int_0^{T_n} (- k^2 U - {\rm i}k\sigma U) e^{{\rm i}kt} {\rm d}t, \varphi \big\rangle\Big)\\
&= u_*(\Delta^2\varphi) - k^2 u_*(\varphi) - {\rm i}k\sigma u_*(\varphi)\\
&= (\Delta^2 - k^2 - {\rm i}k\sigma)u_*(\varphi),
\end{align*}
which further implies by \eqref{identity} that
\[
(\Delta^2 - k^2 - {\rm i}k\sigma)u_*(\varphi) = \langle f, \varphi\rangle
\]
for every $\varphi\in\mathcal{S}$. Then $u_* (x, k)$ is a solution to the equation \eqref{eqn} in the sense of Schwartz distributions. Furthermore, it follows from the uniqueness of the direct problem that $u_* (x, k) = u(x, k)$, which shows that $u(x, k)$ is the Fourier transform of $U(x, t)$.
By Theorem \ref{thm-decay}, we have the estimates
\begin{align*}
|\partial_t^2 U|, \,\, |\partial_t U|, \,\, |\partial_t\nabla U|, \,\, |\partial_t\Delta U|, \,\, |\Delta U|, \,\, |\nabla\Delta U| \lesssim (1 + t)^{-\frac{3}{4}}.
\end{align*}
Moreover, they are continuous and belong to $L^2_t(\mathbb R)$ uniformly for all $x\in\mathbb R^3$. Similarly, we may show that
\begin{align*}
&\widehat{\partial_t^2 U} = -k^2 u, \,\, \widehat{\partial_t U} = {\rm i}k u, \,\, \widehat{\partial_t\nabla U} = {\rm i}k\nabla u,\\
&\widehat{\partial_t \Delta U} = {\rm i}k\Delta u, \,\, \widehat{\Delta U} = \Delta u, \,\, \widehat{\nabla\Delta U} = \nabla\Delta u.
\end{align*}
It follows from Plancherel's theorem that
\begin{align}\label{identities}
&\int_0^{+\infty} \Big(|\partial_t^2 U|^2 + |\partial_t U|^2 + |\partial_t\nabla U|^2 + |\partial_t\Delta U|^2 + |\Delta U|^2 + |\nabla\Delta U|^2\Big){\rm d}t\notag\\
&= \int_{-\infty}^{+\infty} \Big(|k^2 u|^2 + |k u|^2 + |k\nabla u|^2 + |k \Delta u|^2 + |\Delta u|^2 + |\nabla\Delta u|^2 \Big){\rm d}k.
\end{align}
By \eqref{identities} and the exact observability bound \eqref{bc}, we obtain
\begin{align*}
\|f\|^2_{L^2(B_R)}&\lesssim e^{C\sigma^2}\int_{-\infty}^{+\infty} \int_{\partial B_R} \Big(( k^4 + k^2)|u(x, k)|^2 + k^2|\nabla u(x, k)|^2 \\
&\quad+ (k^2 + 1)|\Delta u(x, k)|^2 + |\nabla\Delta u(x, k)|^2 \Big){\rm d}s(x){\rm d}k\\
&\lesssim e^{C\sigma^2} \int_{-\infty}^\infty \|u(x, k)\|^2_{\partial B_R} {\rm d}k.
\end{align*}
Since $f(x)$ is real-valued, we have $ \overline{u(x, k)} = u(x, -k)$ for $k\in\mathbb R$ and then
\[
\int_{-\infty}^\infty \|u(x, k)\|^2_{\partial B_R} {\rm d}k = 2\int_{0}^\infty \|u(x, k)\|^2_{\partial B_R} {\rm d}k,
\]
which completes the proof.
\end{proof}
Let $\delta$ be a positive constant and define
\begin{align*}
I(k) &= \int_\delta^k \|u(x, \omega)\|^2_{\partial B_R} {\rm d}\omega.
\end{align*}
The following lemma gives a link between the values of an analytic function for small and large arguments (cf. \cite[Lemma A.1]{LZZ}).
\begin{lemma}\label{ac}
Let $p(z)$ be analytic in the infinite rectangular slab
\[
R = \{z = z_1 + {\rm i}z_2 \in \mathbb C: z_1\in (\delta, +\infty),\, z_2\in (-d, d) \},
\]
where $\delta$ is a positive constant, and continuous in $\overline{R}$ satisfying
\begin{align*}
\begin{cases}
|p(z)|\leq \epsilon_1, &\quad z\in (\delta, K],\\
|p(z)|\leq M, &\quad z\in R,
\end{cases}
\end{align*}
where $\delta, K, \epsilon_1$ and $M$ are positive constants. Then there exists a function $\mu(z)$ with $z\in (K, +\infty)$ satisfying
\begin{equation*}
\mu(z) \geq \frac{64ad}{3\pi^2(a^2 + 4d^2)} e^{\frac{\pi}{2d}(\frac{a}{2} - z)},
\end{equation*}
where $a = K - \delta$, such that
\begin{align*}
|p(z)|\leq M\epsilon_1^{\mu(z)}\quad \forall\, z\in (K, +\infty).
\end{align*}
\end{lemma}
\begin{lemma}\label{ac_1}
Let $f$ be a real-valued function and $\|f\|_{L^2(B_R)}\leq Q$.
Then there exist positive constants $d$ and $\delta, K$ satisfying
$0< \delta<K$, which do not depend on $f$ and $Q$, such that
\[
|I(k)| \lesssim
Q^2e^{4R(\sigma + 2)k}\epsilon_1^{2\mu(k)} \quad \forall\,
k\in (K, +\infty)
\]
and
\begin{align*}
\epsilon_1^2 = \int_\delta^{K} \|u(x, k)\|^2_{\partial B_R}{\rm d}k,\quad \mu(k) \geq \frac{64ad}{3\pi^2(a^2 + 4d^2)} e^{\frac{\pi}{2d}(\frac{a}{2} -
k)},
\end{align*}
where $a = K - \delta.$
\end{lemma}
\begin{proof}
Let
\begin{align*}
I_1(k) &= \int_\delta^k \int_{\partial B_R}\Big( ( \omega^4 + \omega^2)u(x, \omega)u(x, -\omega) + \omega^2\nabla u(x, \omega) \cdot \nabla u(x, -\omega) \\
&\quad+ (\omega^2 + 1) \Delta u(x, \omega)\Delta u(x, -\omega) + \nabla\Delta u(x, \omega)\cdot \nabla\Delta u(x, -\omega)\Big){\rm d}s(x){\rm d}\omega,
\end{align*}
where $k\in\mathcal{R}$. Following arguments similar to those in the proof of Lemma \ref{direct}, we may show that $R(-k)$ is also analytic for $k\in\mathcal{R}$, and hence so is $I_1(k)$. Since $f$ is real-valued, we have $ \overline{u(x, k)} = u(x, -k)$ for $k\in\mathbb R$, which gives
\[
I_1(k) = I(k), \quad k>0.
\]
It follows from Lemma \ref{direct} that
\begin{align*}
|I_1(k)|\lesssim Q^2e^{C\sigma^2}e^{4R(\sigma+1)|k|}, \quad k\in\mathcal{R},
\end{align*}
which gives
\begin{align*}
e^{-4R(\sigma+2)|k|}|I_1(k)|\lesssim Q^2 e^{C\sigma^2}, \quad k\in\mathcal{R}.
\end{align*}
An application of Lemma \ref{ac} leads to
\[
\big| e^{-4R(\sigma+2)|k|} I(k)\big| \lesssim
Q^2\epsilon_1^{2\mu(k)} \quad \forall\, k\in (K, +\infty),
\]
where
\[
\mu(k) \geq \frac{64ad}{3\pi^2(a^2 + 4d^2)} e^{\frac{\pi}{2d}(\frac{a}{2} - k)},
\]
which completes the proof.
\end{proof}
Here we state a simple uniqueness result for the inverse source problem.
\begin{theorem}
Let $f\in L^2(B_R)$ and $I\subset\mathbb R^+$ be an open interval. Then the source function $f$ can be uniquely determined by the multi-frequency Cauchy data $\{u(x, k), \nabla u(x, k), \Delta u(x, k), \nabla \Delta u(x, k): x\in\partial B_R, k\in I\}$.
\end{theorem}
\begin{proof}
Let $u(x, k) = \nabla u(x, k) =\Delta u(x, k) = \nabla\Delta u(x, k) = 0$ for all $x\in\partial B_R$ and $k\in I$. It suffices to prove that $f (x) = 0$. By Lemma \ref{direct}, $u(x, k)$ is analytic in the infinite slab $\mathcal{R}$ for any $\delta>0$, which implies that $u(x, k) = \Delta u(x, k) = 0$ for all $k\in\mathbb R^+$. We conclude from Lemma \ref{control} that $f = 0.$
\end{proof}
The following result concerns the estimate of $u(x, k)$ for high wavenumbers.
\begin{lemma}\label{tail}
Let $f\in H^{n}(B_R)$ and $\|f\|_{H^{n}(B_R)}\leq Q$. Then the following estimate holds:
\begin{align*}
&\int_s^\infty \|u(x, k)\|^2_{\partial B_R}{\rm d}k \lesssim \frac{1}{s^{n-3}}\|f\|^2_{H^n(B_R)}.
\end{align*}
\end{lemma}
\begin{proof}
Recall the identity
\begin{align}\label{tail_s1}
\int_s^\infty \|u(x, k)\|^2_{\partial B_R}{\rm d}k &= \int_s^\infty \int_{\partial B_R} \Big(( k^4 + k^2)|u(x, k)|^2 + k^2|\nabla u(x, k)|^2 \notag\\
&\quad + (k^2 + 1)|\Delta u(x, k)|^2 + |\nabla\Delta u(x, k)|^2 \Big){\rm d}s(x){\rm d}k.
\end{align}
Using the decomposition
\[
R(\kappa) = (\Delta^2 - \kappa^4)^{-1} = \frac{1}{2\kappa^2} \big[(-\Delta - \kappa^2)^{-1} - (-\Delta + \kappa^2)^{-1}\big],
\]
we obtain
\begin{align*}
u(x) = \int_{B_R}\frac{1}{2\kappa^2} \Big(\frac{e^{{\rm i}\kappa |x - y|}}{4\pi|x - y|} - \frac{e^{-\kappa |x - y|}}{4\pi|x - y|}\Big) f(y){\rm d}y,\quad x\in\partial B_R.
\end{align*}
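On the Fourier side, this decomposition is simply the partial-fraction identity $\frac{1}{|\xi|^4 - \kappa^4} = \frac{1}{2\kappa^2}\big(\frac{1}{|\xi|^2 - \kappa^2} - \frac{1}{|\xi|^2 + \kappa^2}\big)$. A quick numerical sanity check of this identity (illustrative only; the grid and the value of $\kappa$ are arbitrary choices):

```python
import numpy as np

# Check the partial-fraction identity behind R(kappa) = (Delta^2 - kappa^4)^{-1}:
# 1/(|xi|^4 - kappa^4) = (1/(2 kappa^2)) * (1/(|xi|^2 - kappa^2) - 1/(|xi|^2 + kappa^2)).
kappa = 1.7
xi2 = np.linspace(0.1, 10.0, 500)           # sample values of |xi|^2
xi2 = xi2[np.abs(xi2 - kappa**2) > 0.2]     # stay away from the pole |xi|^2 = kappa^2
lhs = 1.0 / (xi2**2 - kappa**4)
rhs = (1.0 / (2.0 * kappa**2)) * (1.0 / (xi2 - kappa**2) - 1.0 / (xi2 + kappa**2))
assert np.allclose(lhs, rhs)
```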
For instance, we consider one of the integrals on the right-hand side of \eqref{tail_s1}
\begin{align*}
J :&= \int_s^\infty \int_{\partial B_R} k^4 |u(x, k)|^2 {\rm d}s(x){\rm d}k\\
&= \int_s^\infty \int_{\partial B_R} k^4 \Big|\int_{B_R}\frac{1}{2\kappa^2} \Big(\frac{e^{{\rm i}\kappa |x - y|}}{4\pi|x - y|} - \frac{e^{-\kappa |x - y|}}{4\pi|x - y|}\Big) f(y){\rm d}y \Big|^2 {\rm d}s(x){\rm d}k.
\end{align*}
Using the spherical coordinates $r = |x - y|$ with origin at $y$, we have
\begin{align*}
J = \frac{1}{8\pi} \int_s^\infty \int_{\partial B_R} k^2 \Big|\int_0^{2\pi} {\rm d}\theta \int_0^\pi \sin\varphi {\rm d}\varphi \int_0^\infty
(e^{{\rm i}\kappa r} - e^{-\kappa r}) fr {\rm d}r \Big|^2{\rm d}s(x){\rm d}k.
\end{align*}
By integration by parts and noting that $x\in\partial B_R$ and $\text{supp}\,f \subset B_{\hat{R}} \subset B_R$ for some $\hat{R}<R$, we obtain
\begin{align*}
J = \frac{1}{4\pi} \int_s^\infty \int_{\partial B_R} k^2 \Big|\int_0^{2\pi} {\rm d}\theta \int_0^\pi \sin\varphi {\rm d}\varphi \int_{R - \hat{R}}^{2R}
\Big(\frac{e^{{\rm i}\kappa r}}{({\rm i}\kappa)^n} - \frac{e^{-\kappa r}}{(-\kappa)^n} \Big) \frac{\partial^n (fr)}{\partial r^n} {\rm d}r \Big|^2{\rm d}s(x){\rm d}k.
\end{align*}
Since $x\in\partial B_R$ and $|\kappa| \geq k^{1/2}$ for $k>0$, we get from direct calculations that
\begin{align*}
J\lesssim \|f\|^2_{H^n(B_R)}\int_s^\infty k^{2-n} {\rm d}k \lesssim \frac{1}{s^{n-3}}\|f\|^2_{H^n(B_R)}.
\end{align*}
The other integrals on the right-hand side of \eqref{tail_s1} can be estimated similarly. The details are omitted for brevity.
\end{proof}
Define a real-valued function space
\[
\mathcal C_Q = \{f \in H^{n}(B_R): n\geq 4, \|f\|_{H^{n}(B_R)}\leq Q, ~ \text{supp}
f\subset B_{\hat{R}}\subset B_R, ~ f: B_R \rightarrow \mathbb R \},
\]
where $\hat{R}<R$. Now we are in the position to present the main result of this paper.
\begin{theorem}
Let $u(x,\kappa)$ be the solution of the scattering problem \eqref{eqn} corresponding to the source $f\in \mathcal C_Q$.
Then for $\epsilon$ sufficiently small, the following estimate holds:
\begin{align}\label{stability}
\|f\|_{L^2( B_R)}^2 \lesssim e^{C\sigma^2}\Big(\epsilon^2 +
\frac{Q^2}{K^{\frac{1}{2}(n-3)}(\ln|\ln\epsilon|)^{\frac{1}{2}(n-3)}}\Big),
\end{align}
where
\begin{align*}
\epsilon^2 := \int_0^K \|u(x, k)\|^2_{\partial B_R} {\rm d}k = \int_0^\delta \|u(x, k)\|^2_{\partial B_R} {\rm d}k + \epsilon_1^2.
\end{align*}
\end{theorem}
\begin{proof}
We can assume that $\epsilon \leq e^{-1}$, otherwise the estimate is obvious.
First, we link the data $I(k)$ for large wavenumbers $k\leq L$ with the given data $\epsilon_1$ at small wavenumbers by using the analytic continuation in Lemma \ref{ac_1}, where $L$ is some large positive integer to be determined later. It follows from Lemma \ref{ac_1} that
\begin{align*}
I(k) & \lesssim Q^2e^{c|\kappa|} \epsilon_1^{\mu(\kappa)}\\
& \lesssim Q^2{\rm exp}\{c\kappa - \frac{c_2a}{a^2 + c_3}e^{c_1(\frac{a}{2} - \kappa)}
|{\ln}\epsilon_1|\}\\
& \lesssim Q^2{\rm exp} \{ - \frac{c_2a}{a^2 + c_3}e^{c_1(\frac{a}{2} - \kappa)}|{\ln}\epsilon_1| (1 - \frac{c_4\kappa(a^2 + c_3)}{a} e^{c_1(\kappa - \frac{a}{2})}|{\ln}\epsilon_1|^{-1})\}\\
& \lesssim Q^2{\rm exp} \{ - \frac{c_2a}{a^2 + c_3}e^{c_1(\frac{a}{2} - L)}|{\ln}\epsilon_1| (1 - \frac{c_4L(a^2 + c_3)}{a} e^{c_1(L - \frac{a}{2})}|{\ln}\epsilon_1|^{-1})\}\\
&\lesssim Q^2{\rm exp} \{ - b_0e^{-c_1L}|{\ln}\epsilon_1| (1 - b_1L e^{c_1L }|{\ln}\epsilon_1|^{-1})\},
\end{align*}
where $c$, $c_i\,(i=1,\dots,4)$, $b_0$ and $b_1$ are positive constants. Let
\begin{align*}
L =
\begin{cases}
\left[\frac{1}{2c_1}\ln|\ln \epsilon_1|\right], &\quad k\leq \frac{1}{2c_1} \ln|\ln\epsilon_1|,\\
k, &\quad k> \frac{1}{2c_1}\ln|\ln\epsilon_1|.
\end{cases}
\end{align*}
If $K\leq \frac{1}{2c_1}\ln|\ln\epsilon_1|$, we obtain for sufficiently small $\epsilon_1$ that
\begin{align*}
I(k)&\lesssim Q^2{\rm exp} \{ - b_0e^{-c_1L}|{\ln}\epsilon_1| (1 - b_1L e^{c_1L }|{\ln}\epsilon_1|^{-1})\}\\
& \lesssim Q^2\exp\{-\frac{1}{2}b_0e^{-c_1L}|\ln \epsilon_1|\}.
\end{align*}
Noting $e^{-x}\leq \frac{(2n+3)!}{x^{2n+3}}$ for $x>0$, we have
\begin{align*}
I(L) \lesssim Q^2
e^{(2n+3)c_1L}|\ln\epsilon_1|^{-(2n+3)}.
\end{align*}
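The elementary bound $e^{-x}\le \frac{(2n+3)!}{x^{2n+3}}$ used above is just the single-term Taylor estimate $e^{x}\ge \frac{x^{2n+3}}{(2n+3)!}$ for $x>0$. A small numerical illustration (the sample values of $n$ and $x$ are arbitrary):

```python
import math

# Verify e^{-x} <= (2n+3)!/x^{2n+3} for x > 0, which follows from the
# one-term Taylor bound e^x >= x^m / m! with m = 2n + 3.
for n in range(1, 6):
    m = 2 * n + 3
    for x in (0.5, 1.0, 5.0, 20.0, 100.0):
        assert math.exp(-x) <= math.factorial(m) / x**m
```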
Taking $L=\frac{1}{2c_1}\ln|\ln\epsilon_1|$ and combining the above estimates with Lemma \ref{control}
and Lemma \ref{tail}, we get
\begin{align*}
\|f\|^2_{L^2(B_R)}&\lesssim e^{C\sigma^2}\Big(\epsilon^2 + I(L)
+ \int_L^\infty \|u(x, k)\|^2_{\partial B_R}{\rm d}k\Big)\\
&\lesssim e^{C\sigma^2}\Big(\epsilon^2 + Q^2e^{(2n+3)c_1L}|\ln\epsilon_1|^{-(2n+3)}+ \frac{Q^2}{L^{n-3}}\Big)\\
&\lesssim e^{C\sigma^2}\Big(\epsilon^2 + Q^2\left(|\ln\epsilon_1|^{\frac{2n+3}{2}}|\ln\epsilon_1|^{-(2n+3)}+(\ln|\ln\epsilon_1|)^{3-n}\right)\Big)\\
&\lesssim e^{C\sigma^2}\Big(\epsilon^2 + Q^2\left(|\ln\epsilon_1|^{-\frac{2n+3}{2}}+(\ln|\ln\epsilon_1|)^{3-n}\right)\Big)\\
&\lesssim e^{C\sigma^2}\Big(\epsilon^2 + Q^2(\ln|\ln\epsilon_1|)^{3-n}\Big)\\
&\lesssim e^{C\sigma^2}\Big(\epsilon^2 +
\frac{Q^2}{K^{\frac{1}{2}(n-3)}(\ln|\ln\epsilon_1|)^{\frac{1}{2}(n-3)}}\Big)\\
&\lesssim e^{C\sigma^2}\Big(\epsilon^2 +
\frac{Q^2}{K^{\frac{1}{2}(n-3)}(\ln|\ln\epsilon|)^{\frac{1}{2}(n-3)}}\Big),
\end{align*}
where we have used $|\ln\epsilon_1|^{1/2}\geq \ln|\ln\epsilon_1|$ for
sufficiently small $\epsilon_1$ and $\ln|\ln\epsilon_1| \geq \ln|\ln\epsilon|$.
If $K > \frac{1}{2c_1}\ln|\ln\epsilon_1|$, we have from Lemma \ref{tail} that
\begin{align*}
\|f\|_{L^2( B_R)}^2 &\lesssim e^{C\sigma^2}\Big(\epsilon^2 + \int_{K}^\infty \|u(x, k)\|^2_{\partial B_R}{\rm d}k\Big)\\
&\lesssim e^{C\sigma^2}\Big(\epsilon^2 +
\frac{Q^2}{K^{n-3}}\Big)\\
&\lesssim e^{C\sigma^2}\Big(\epsilon^2 +
\frac{Q^2}{K^{\frac{1}{2}(n-3)}(\ln|\ln\epsilon|)^{\frac{1}{2}(n-3)}}\Big),
\end{align*}
which completes the proof.
\end{proof}
It can be observed that, for a fixed damping coefficient $\sigma$, the stability estimate \eqref{stability} consists of two parts: the data discrepancy and the high-frequency tail. The former is of Lipschitz type, while the latter decreases as $K$ increases, so that the problem enjoys an almost Lipschitz stability. However, the stability deteriorates exponentially as the damping coefficient $\sigma$ increases.
\begin{appendix}
\section{An exact observability bound}\label{exact}
Consider the initial value problem for the damped biharmonic plate wave equation
\begin{align}\label{pe}
\begin{cases}
\partial_t^2 U(x, t) + \Delta^2 U(x, t) + \sigma\partial_t U(x, t) = 0, &\quad (x, t)\in B_R\times (0, +\infty),\\
U(x, 0) = 0, \quad \partial_tU(x, 0) = f(x), &\quad x\in B_R.
\end{cases}
\end{align}
The following theorem presents an exact observability bound for the above equation. The proof closely follows that in \cite[Theorem 3.1]{IL}.
\begin{theorem}\label{control_1}
Let the observation time $4(2R + 1)< T < 5(2R + 1)$. Then there exists a constant $C$ depending on the domain $B_R$ such that
\begin{align}\label{bc}
\|f\|^2_{L^2(B_R)} &\leq Ce^{C\sigma^2} \big( \|\partial^2_t U\|^2_{L^2(\partial B_R\times (0, T))}+ \|\partial_t U\|^2_{L^2(\partial B_R\times (0, T))} + \|\partial_t \nabla U\|^2_{L^2(\partial B_R\times (0, T))} \notag\\
&\quad + \|\partial_t \Delta U\|^2_{L^2(\partial B_R\times (0, T))} + \|\Delta U\|^2_{L^2(\partial B_R\times (0, T))} + \|\nabla\Delta U\|^2_{L^2(\partial B_R\times (0, T))}\big).
\end{align}
\end{theorem}
Before showing the proof, we introduce the energies
\begin{align*}
E(t) &= \frac{1}{2}\int_{\Omega} \big(|\partial_t U(x, t)|^2 + |\Delta U(x, t)|^2 + |U(x, t)|^2\big) {\rm d}x,\\
E_0(t) &= \frac{1}{2}\int_{\Omega} \big(|\partial_t U(x, t)|^2 + |\Delta U(x, t)|^2 \big) {\rm d}x,
\end{align*}
and denote
\begin{align*}
F^2 &= \int_{\partial \Omega\times (t_1, t_2)}\big( |\partial^2_t U(x, t)|^2 + |\partial_t U(x, t)|^2 + |\partial_t \nabla U(x, t)|^2 \\
&\quad + |\partial_t\Delta U(x, t)|^2 + |\Delta U(x, t)|^2 + |\nabla \Delta U(x, t)|^2 \big){\rm d}s(x){\rm d}t.
\end{align*}
\begin{lemma}
Let $U$ be a solution of the damped biharmonic plate wave equation \eqref{pe} with the initial value $f\in H^1(B_R)$, ${\rm supp}\,f\subset B_R$. Let $0\leq t_1<t_2\leq T$ and $1\leq 2\sigma$. Then the following estimates hold:
\begin{align}\label{energy_1}
E(t_2) &\leq e^{4(t_2 - t_1)^2}(2E(t_1) + F^2), \\
\label{energy_2}
E(t_1)&\leq e^{(2\sigma + 4(t_2 - t_1))(t_2 - t_1)}(E(t_2) + F^2).
\end{align}
\end{lemma}
\begin{proof}
Multiplying both sides of \eqref{pe} by $(\partial_t U) e^{\theta t}$ and integrating over $\Omega\times (t_1, t_2)$ gives
\[
\int_{\Omega\times (t_1, t_2)} \Big(\frac{1}{2}\partial_t (\partial_t U)^2 + \Delta^2U\partial_t U + \sigma(\partial_t U)^2\Big)e^{\theta t}\,{\rm d}x\,{\rm d}t = 0.
\]
Integrating the term $\Delta^2U\partial_t U$ by parts over $\Omega$ and noting $\Delta U \partial_t(\Delta U) = \frac{1}{2}\partial_t|\Delta U|^2$, we obtain
\begin{align*}
&\int_{t_1}^{t_2} (\partial_t E_0(t))e^{\theta t}{\rm d}t + \int_{\Omega\times(t_1, t_2)} \sigma(\partial_t U)^2e^{\theta t}{\rm d}x{\rm d}t\\
&\quad + \int_{\partial\Omega\times(t_1, t_2)} (\partial_\nu(\Delta U)\partial_tU - \Delta U \partial_t(\partial_\nu U))e^{\theta t}{\rm d}s(x){\rm d}t = 0.
\end{align*}
Hence,
\begin{align*}
E_0(t_2)e^{\theta t_2} - E_0(t_1)e^{\theta t_1} &= \int_{\Omega\times(t_1, t_2)} \Big(\frac{\theta}{2} ((\partial_t U)^2 + |\Delta U|^2) - \sigma(\partial_t U)^2\Big)e^{\theta t}{\rm d}x{\rm d}t\\
&\quad - \int_{\partial\Omega\times(t_1, t_2)} (\partial_\nu(\Delta U)\partial_tU - \Delta U \partial_t(\partial_\nu U))e^{\theta t}{\rm d}s(x){\rm d}t.
\end{align*}
Letting $\theta = 0$, using Schwarz's inequality, and noting $\sigma>0$, we get
\begin{align*}
E_0(t_2)&\leq E_0(t_1) + \int_{\Omega\times(t_1, t_2)} (-\sigma)(\partial_t U)^2{\rm d}x{\rm d}t\\
&\quad + \frac{1}{2}\int_{\partial\Omega\times(t_1, t_2)} \Big((\partial_t U)^2 + (\partial_t(\partial_\nu U))^2\Big){\rm d}s(x){\rm d}t\\
&\quad + \frac{1}{2}\int_{\partial\Omega\times(t_1, t_2)} \Big((\Delta U)^2 + (\partial_\nu(\Delta U))^2\Big){\rm d}s(x){\rm d}t\\
&\leq E_0(t_1) + F^2.
\end{align*}
Similarly, letting $\theta = 2\sigma$, we derive
\begin{align*}
E_0(t_1)e^{2\sigma t_1}&\leq E_0(t_2)e^{2\sigma t_2} + \int_{\Omega\times(t_1, t_2)}(-\sigma) (\Delta U)^2{\rm d}x{\rm d}t\\
&\quad + \frac{1}{2}\int_{\partial\Omega\times(t_1, t_2)} \Big((\partial_t U)^2 + (\partial_t(\partial_\nu U))^2\Big)e^{2\sigma t}{\rm d}s(x){\rm d}t\\
&\quad + \frac{1}{2}\int_{\partial\Omega\times(t_1, t_2)} \Big((\Delta U)^2 + (\partial_\nu(\Delta U))^2\Big)e^{2\sigma t}{\rm d}s(x){\rm d}t\\
&\leq E_0(t_2)e^{2\sigma t_2} + \frac{1}{2}\int_{\partial\Omega\times(t_1, t_2)} \Big((\partial_t U)^2 + (\partial_t(\partial_\nu U))^2\Big)e^{2\sigma t}{\rm d}s(x){\rm d}t\\
&\quad + \frac{1}{2}\int_{\partial\Omega\times(t_1, t_2)} \Big((\Delta U)^2 + (\partial_\nu(\Delta U))^2\Big)e^{2\sigma t}{\rm d}s(x){\rm d}t,
\end{align*}
which gives
\begin{align*}
E_0(t_1) \leq e^{2\sigma(t_2 - t_1)} (E_0(t_2) + F^2).
\end{align*}
The proof is completed by following similar arguments as those in \cite[Lemma 3.2]{IL}.
\end{proof}
Now we return to the proof of Theorem \ref{control_1}.
\begin{proof}[Proof of Theorem \ref{control_1}]
Let $\varphi(x, t) = |x - a|^2 - \theta^2(t - \frac{T}{2})^2$, where ${\rm dist} (a, \Omega) = 1, \theta = \frac{1}{2}$.
Using the Carleman-type estimate in \cite{Yuan}, we obtain
\begin{align}\label{carleman}
&\tau^6 \int_Q |U|^2e^{2\tau\varphi}{\rm d}x{\rm d}t +
\tau^3 \int_Q |\partial_tU|^2e^{2\tau\varphi}{\rm d}x{\rm d}t + \tau \int_Q |\Delta U|^2e^{2\tau\varphi}{\rm d}x{\rm d}t\notag\\
&\lesssim\int_Q ((\partial_t^2 + \Delta^2)U)^2e^{2\tau\varphi}{\rm d}x{\rm d}t\notag\\
&\quad + \int_{\partial Q}\tau^6(|\partial_\nu \Delta U|^2 + |\partial_t \Delta U|^2 + |\partial_t^2 U|^2)e^{2\tau\varphi}{\rm d}s(x){\rm d}t.
\end{align}
It is easy to see that $1 - \theta^2\varepsilon_0^2\leq\varphi$ on $\Omega\times \{|t - \frac{T}{2}|<\varepsilon_0\}$ for some positive $\varepsilon_0<1$. Then we have from \eqref{energy_2} that
\begin{align}\label{lhs}
&\tau^6 \int_Q |U|^2e^{2\tau\varphi}{\rm d}x{\rm d}t +
\tau^3 \int_{Q} |\partial_tU|^2e^{2\tau\varphi}{\rm d}x{\rm d}t + \tau \int_Q |\Delta U|^2e^{2\tau\varphi}{\rm d}x{\rm d}t \notag\\
&\geq \tau^6 \int_{\Omega\times(\frac{T}{2} - \varepsilon_0, \frac{T}{2} + \varepsilon_0)} |U|^2e^{2\tau(1 - \theta^2\varepsilon_0^2)}{\rm d}x{\rm d}t +
\tau^3 \int_{\Omega\times(\frac{T}{2} - \varepsilon_0, \frac{T}{2} + \varepsilon_0)} |\partial_tU|^2e^{2\tau(1 - \theta^2\varepsilon_0^2)}{\rm d}x{\rm d}t \notag\\
&\quad + \tau \int_{\Omega\times(\frac{T}{2} - \varepsilon_0, \frac{T}{2} + \varepsilon_0)} |\Delta U|^2e^{2\tau(1 - \theta^2\varepsilon_0^2)}{\rm d}x{\rm d}t \notag\\
&\geq \tau e^{2\tau(1 - \theta^2\varepsilon_0^2)} \int_{\Omega\times(\frac{T}{2} - \varepsilon_0, \frac{T}{2} + \varepsilon_0)} E(t) {\rm d}t \notag\\
&\geq \tau e^{2\tau(1 - \theta^2\varepsilon_0^2)}\varepsilon_0 (2e^{-(2\sigma + 4T)T} E(0) - F^2).
\end{align}
Moreover, it follows from \eqref{energy_1} and $\varphi\leq (2R + 1)^2 - \theta^2T^2/4$ on $\Omega\times (0, T)$ that
\begin{align*}
&\tau^6 \int_Q |U|^2e^{2\tau\varphi}{\rm d}x{\rm d}t +
\tau^3 \int_Q |\partial_tU|^2e^{2\tau\varphi}{\rm d}x{\rm d}t + \tau \int_Q |\Delta U|^2e^{2\tau\varphi}{\rm d}x{\rm d}t\\
&\leq \tau^6 e^{2\tau((2R + 1)^2 - \theta^2T^2/4)}(E(0) + E(T))\\
&\leq \tau^6 e^{2\tau((2R + 1)^2 - \theta^2T^2/4)} ((e^{4T^2} + 1)E(0) + e^{4T^2}F^2).
\end{align*}
By \eqref{carleman} and \eqref{lhs}, we obtain
\begin{align}\label{combine}
&\tau e^{2\tau(1 - \theta^2\varepsilon_0^2)}\varepsilon_0 e^{-(2\sigma + 1 + 4T)T} E(0)\notag\\
&\quad +\tau^6 \int_Q |U|^2e^{2\tau\varphi}{\rm d}x{\rm d}t +
\tau^3 \int_Q |\partial_tU|^2e^{2\tau\varphi}{\rm d}x{\rm d}t + \tau \int_Q |\Delta U|^2e^{2\tau\varphi}{\rm d}x{\rm d}t\notag\\
&\leq \Big(\sigma^2\int_Q |\partial_t U|^2e^{2\tau\varphi}{\rm d}x{\rm d}t + \int_{\partial Q}\tau^6(|\partial_\nu \Delta U|^2 + |\partial_t \Delta U|^2 + |\partial_t^2 U|^2)e^{2\tau\varphi}{\rm d}s(x){\rm d}t\notag\\
&\quad + (\tau e^{2\tau(1 - \theta^2\varepsilon_0^2)} + \tau^6 e^{2\tau((2R + 1)^2 - \theta^2T^2/4)}e^{4T^2}) F^2 + \tau^6 e^{2\tau((2R + 1)^2 - \theta^2T^2/4)}e^{4T^2} E(0)\Big).
\end{align}
Choosing $\tau$ sufficiently large, we may remove the first integral on the right hand side of \eqref{combine}. We also choose $T^2 = 4\frac{(2R+1)^2}{\theta^2} + 4\varepsilon_0^2$ and $\tau = (2\sigma + 8T)T + \ln(2(\varepsilon_0)^{-1}C) + C\sigma^2$. Noting $\tau^5e^{-\tau}\leq 5!$, we have
\begin{align*}
\tau^5 e^{2\tau((2R + 1)^2 - \theta^2T^2/4 - 1 + \theta^2\varepsilon_0^2) + (2\sigma + 8T)T} &= \tau^5 e^{-2\tau + (2\sigma + 8T)T}\\
&\leq 5! e^{-\tau + (2\sigma + 8T)T}\leq\frac{\varepsilon_0}{2C}.
\end{align*}
In addition, since $T\leq 5(2R + 1)$, it follows that
\begin{align*}
\tau^5 e^{2\tau((2R + 1)^2 - 1 + \theta^2\varepsilon_0^2 + (2\sigma + 4T)T)} \leq \tau^5 e^{2((2\sigma + 8T)T + C\sigma^2 + C)(2R + 1)^2 + (2\sigma + 4T)T}\leq Ce^{C\sigma^2}.
\end{align*}
Using the above inequality, the bound $\varphi<(2R + 1)^2$ on $Q$, and dividing both sides of \eqref{combine} by the factor multiplying $E(0)$ on the left-hand side, we obtain
\[
E(0)\leq Ce^{C\sigma^2} F^2.
\]
Since $f$ is supported in $\Omega$, there holds $\|U\|_{L^2(\partial B_R\times (0, T))}\leq C\|\partial_t U\|_{L^2(\partial B_R\times (0, T))}$, which completes the proof.
\end{proof}
\section{A decay estimate}\label{time decay}
We prove a decay estimate for the solution of the initial value problem of the damped plate wave equation
\begin{align}\label{Kirchhoff_2}
\begin{cases}
\partial_t^2 U(x, t) + \Delta^2 U(x, t) + \sigma\partial_t U(x, t) = 0, &\quad (x, t)\in \mathbb R^3\times (0, +\infty),\\
U(x, 0) = 0, \quad \partial_tU(x, 0) = f(x), &\quad x\in\mathbb R^3,
\end{cases}
\end{align}
where $f(x)\in L^1(\mathbb R^3)\cap H^s(\mathbb R^3)$. By the Fourier transform, the solution $U(x, t)$ of \eqref{Kirchhoff_2} is given by
\begin{align*}
U(x, t) = \mathcal{F}^{-1} (m_\sigma (t, \xi) \hat{f}(\xi))(x),
\end{align*}
where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform,
\begin{align*}
m_\sigma(t, \xi) = \frac{e^{-\frac{\sigma}{2}t}}{\sqrt{\sigma^2 - 4|\xi|^4}} \Big(e^{\frac{1}{2}t \sqrt{\sigma^2 - 4|\xi|^4}} - e^{-\frac{1}{2}t \sqrt{\sigma^2 - 4|\xi|^4}}\Big),
\end{align*}
and $\hat{f}(\xi)$ is the Fourier transform of $f$, i.e.,
\[
\hat{f}(\xi) = \frac{1}{(2\pi)^3} \int_{\mathbb R^3} e^{-{\rm i}x\cdot\xi} f(x) {\rm d}x.
\]
Let $\sqrt{\sigma^2 - 4|\xi|^4} = {\rm i} \sqrt{4|\xi|^4 - \sigma^2}$ when $|\xi|^4>\frac{\sigma^2}{4}$. Then we have
\begin{align*}
m_\sigma(t, \xi) =
\begin{cases}
e^{-\frac{\sigma}{2}t} \frac{\sinh (\frac{t}{2}\sqrt{\sigma^2 - 4|\xi|^4 })}{\sqrt{\sigma^2 - 4|\xi|^4}},&\quad |\xi|^4<\frac{\sigma^2}{4},\\
e^{-\frac{\sigma}{2}t} \frac{\sin (\frac{t}{2}\sqrt{4|\xi|^4 - \sigma^2})}{\sqrt{4|\xi|^4 - \sigma^2}},&\quad |\xi|^4>\frac{\sigma^2}{4}.
\end{cases}
\end{align*}
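The two branches of $m_\sigma(t, \xi)$ match continuously at the crossover $|\xi|^4 = \sigma^2/4$, where both reduce to the limiting value $\frac{t}{2}e^{-\sigma t/2}$, and on the low-frequency branch the multiplier is maximal at $\xi = 0$ since $\sinh(x)/x$ is increasing. A numerical sketch of these two observations (the values of $\sigma$ and $t$ are arbitrary):

```python
import numpy as np

# Evaluate m_sigma(t, xi) as a function of xi4 = |xi|^4 on both branches.
sigma, t = 1.0, 3.0

def m(xi4):
    d = sigma**2 - 4.0 * xi4
    if d > 0:                                   # low-frequency ("parabolic") branch
        return np.exp(-sigma * t / 2) * np.sinh(0.5 * t * np.sqrt(d)) / np.sqrt(d)
    if d < 0:                                   # high-frequency ("dispersive") branch
        return np.exp(-sigma * t / 2) * np.sin(0.5 * t * np.sqrt(-d)) / np.sqrt(-d)
    return 0.5 * t * np.exp(-sigma * t / 2)     # limiting value at the crossover

cross = sigma**2 / 4.0
# Continuity across the crossover |xi|^4 = sigma^2/4:
assert abs(m(cross - 1e-10) - m(cross)) < 1e-6
assert abs(m(cross + 1e-10) - m(cross)) < 1e-6
# The multiplier is bounded by its value at xi = 0:
m0 = m(0.0)
assert all(abs(m(x)) <= m0 + 1e-12 for x in np.linspace(0.0, 10.0, 400))
```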
It is clear from the representation of $m_\sigma(t, \xi)$ that the solution $U(x, t)$ depends on both the low and high frequencies of $\xi$. In fact, the solution $U(x, t)$ behaves like a ``parabolic type'' flow $e^{-t\Delta^2} f$ for the low-frequency part, while for the high-frequency part it behaves like a ``dispersive type'' wave $e^{{\rm i}t\Delta^2} f$.
\begin{theorem}\label{thm-decay}
Let $U(x, t)$ be the solution of \eqref{Kirchhoff_2}. Then $U(x, t)$ satisfies the decay estimate
\begin{align}\label{de}
\sup_{x\in\mathbb R^3} |\partial^\alpha_x\partial_t^j U(x, t)| \lesssim (1 + t)^{-\frac{3 + |\alpha|}{4}} \|f\|_{L^1(\mathbb R^3)} + e^{-ct} \|f\|_{H^s(\mathbb R^3)},
\end{align}
where $j\in\mathbb N$, $\alpha$ is a multi-index in $\mathbb N^3$ such that $\partial_x^\alpha = \partial_{x_1}^{\alpha_1}\,\partial_{x_2}^{\alpha_2}\,\partial_{x_3}^{\alpha_3}$, $s>2j + |\alpha| - \frac{1}{2}$, and $c>0$ is some positive constant. In particular, for $|\alpha| = s = 0$, the following estimate holds:
\begin{align}\label{FD}
\sup_{x\in\mathbb R^3} |U(x, t)|\lesssim (1 + t)^{-\frac{3}{4}} (\|f\|_{L^1(\mathbb R^3)} + \|f\|_{L^2(\mathbb R^3)}).
\end{align}
\end{theorem}
\begin{remark}\label{plancherel}
The estimate \eqref{FD} provides a time decay of the order $O((1 + t)^{-\frac{3}{4}})$ for $U(x, t)$ uniformly for all $x\in\mathbb R^3$, which gives
\begin{align*}
\sup_{x\in\mathbb R^3} \int_0^\infty |U(x, t)|^2 {\rm d}t \lesssim \int_0^\infty (1 + t)^{-3/2}{\rm d}t < +\infty.
\end{align*}
Hence, let $U(x, t) = 0$ when $t<0$, then $U(x, t)$ has a Fourier transform $\hat{U}(x, k) \in L^2(\mathbb R)$ for each $x\in\mathbb R^3$. Moreover, the following Plancherel equality holds:
\begin{align*}
\int_0^{+\infty} |U(x, t)|^2{\rm d}t = \int_{-\infty}^{+\infty} |\hat{U}(x, k)|^2 {\rm d}k.
\end{align*}
\end{remark}
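The Plancherel equality in the remark can be illustrated numerically; with the unitary convention $\hat U(x, k) = (2\pi)^{-1/2}\int U(x, t)e^{{\rm i}kt}{\rm d}t$, the time-domain and frequency-domain energies coincide. A discrete sketch, using the integrable sample signal $e^{-t}$ as a stand-in for the $(1+t)^{-3/4}$-decaying solution (the grid parameters are arbitrary):

```python
import numpy as np

# Extend the signal by zero for t < 0 and compare its energy in time and frequency.
dt, N = 1e-3, 2**16
t = dt * np.arange(N)
U = np.exp(-t)                                     # sample signal; U(t) = 0 for t < 0
U_hat = dt / np.sqrt(2.0 * np.pi) * np.fft.fft(U)  # unitary-convention transform
dk = 2.0 * np.pi / (N * dt)                        # frequency grid spacing

energy_t = np.sum(np.abs(U)**2) * dt
energy_k = np.sum(np.abs(U_hat)**2) * dk
assert abs(energy_t - energy_k) < 1e-9             # discrete Plancherel equality
assert abs(energy_t - 0.5) < 1e-3                  # int_0^infty e^{-2t} dt = 1/2
```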
\begin{remark}\label{rd}
To study the inverse source problem, it suffices to assume that $f\in H^4(\mathbb R^3)$. In this case, it follows from the above theorem that both $\partial_t^2U(x, t)$ and $\Delta^2 U(x, t)$ are continuous functions. Moreover, we have from \eqref{de} that the following estimate holds:
\begin{align*}
\sup_{x\in\mathbb R^3} |\partial_t^j U(x, t)|& \lesssim (1 + t)^{-\frac{3}{4}} \|f\|_{L^1(\mathbb R^3)} + e^{-ct} \|f\|_{H^s(\mathbb R^3)},\quad j=1, 2,\\
\sup_{x\in\mathbb R^3} |\partial^\alpha_xU(x, t)|& \lesssim (1 + t)^{-\frac{3 + |\alpha|}{4}} \|f\|_{L^1(\mathbb R^3)} + e^{-ct} \|f\|_{H^s(\mathbb R^3)},\quad |\alpha|\leq 4.
\end{align*}
\end{remark}
\begin{proof}
Without loss of generality, we may assume that $\sigma = 1$, and then
\begin{align*}
m_\sigma(t, \xi) = \frac{e^{-\frac{1}{2}t}}{\sqrt{1 - 4|\xi|^4}} \Big(e^{\frac{1}{2}t \sqrt{1 - 4|\xi|^4}} - e^{-\frac{1}{2}t \sqrt{1 - 4|\xi|^4}}\Big).
\end{align*}
First we prove \eqref{de} for $j = 0$. Choose $\chi\in C_0^\infty(\mathbb R^3)$ such that ${\rm supp}\chi\subset B(0, \frac{1}{2})$ and $\chi(\xi) = 1$ for $|\xi|\leq\frac{1}{4}$. Let
\begin{align*}
U(x, t) &= \mathcal{F}^{-1} (m(t, \xi)\chi(\xi)\hat{f}) + \mathcal{F}^{-1}(m(t, \xi)(1 - \chi(\xi))\hat{f})\\
&:= U_1(x, t) + U_2(x, t).
\end{align*}
For $U_1(x, t)$, since $\sqrt{1 - 4|\xi|^4}\leq 1 - 2|\xi|^4$ when $0\leq |\xi| \leq \frac{1}{2}$, we have for $|\xi|\leq\frac{1}{2}$ that
\[
m(t, \xi) = \frac{1}{\sqrt{1 - 4|\xi|^4}} e^{-\frac{t}{2}(1 \pm \sqrt{1 - 4|\xi|^4})}\leq 2e^{-t|\xi|^4}, \quad t\geq 0.
\]
For each $x\in\mathbb R^3$ we have
\[
\partial^\alpha U_1(x, t) = \int_{\mathbb R^3} e^{{\rm i}x\cdot\xi} ({\rm i}\xi)^\alpha m(t, \xi) \chi(\xi) \hat{f}(\xi) {\rm d}\xi,
\]
which gives
\begin{align*}
\sup_{x\in\mathbb R^3} |\partial_x^\alpha U_1(x, t)| \leq \int_{|\xi|\leq\frac{1}{2}} |\xi|^\alpha e^{-t|\xi|^4} |\hat{f}(\xi)|{\rm d}\xi\lesssim \|\hat{f}\|_{L^\infty(\mathbb R^3)} \int_{|\xi|\leq\frac{1}{2}} |\xi|^\alpha e^{-t|\xi|^4}{\rm d}\xi.
\end{align*}
Since
\begin{align*}
\int_{|\xi|\leq\frac{1}{2}} |\xi|^\alpha e^{-t|\xi|^4}{\rm d}\xi \leq
\begin{cases}
C, &\quad 0\leq t \leq 1,\\
t^{-\frac{3 + |\alpha|}{4}}, &\quad t\geq 1,
\end{cases}
\end{align*}
and $\|\hat{f}\|_{L^\infty(\mathbb R^3)} \leq \|f\|_{L^1(\mathbb R^3)}$, we obtain
\begin{align}\label{U1}
\sup_{x\in\mathbb R^3}|\partial_x^\alpha U_1(x, t)| \lesssim (1 + t)^{-\frac{3 + |\alpha|}{4}} \|f\|_{L^1(\mathbb R^3)} \quad \forall\, \alpha\in\mathbb N^3.
\end{align}
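The bound $\int_{|\xi|\leq\frac{1}{2}} |\xi|^{|\alpha|} e^{-t|\xi|^4}{\rm d}\xi \lesssim t^{-\frac{3 + |\alpha|}{4}}$ follows from the substitution $\xi = t^{-1/4}\eta$. A numerical check of this scaling in the radial variable (the choice $|\alpha| = 2$ and the grid are arbitrary):

```python
import numpy as np

# Radially reduced integral over R^3: int_0^{1/2} 4*pi*r^(a+2)*exp(-t r^4) dr.
def integral(t, a, n=200_000):
    r = np.linspace(1e-8, 0.5, n)
    dr = r[1] - r[0]
    return 4.0 * np.pi * np.sum(r**(a + 2) * np.exp(-t * r**4)) * dr

a = 2  # plays the role of |alpha|
scaled = [integral(t, a) * t**((3 + a) / 4) for t in (1e3, 1e4, 1e5)]
# After rescaling by t^{(3+a)/4}, the values are approximately constant:
assert max(scaled) / min(scaled) < 1.05
```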
To estimate $U_2(x, t)$, noting
\[
(1 - \Delta)^{\frac{p}{2}} U_2(x, t) = \int_{\mathbb R^3} e^{{\rm i}x\cdot\xi} (1 + |\xi|^2)^{\frac{p}{2}} m(t, \xi) (1 - \chi(\xi)) \hat{f}(\xi) {\rm d}\xi,
\]
we have from Plancherel's theorem that
\begin{align}\label{PU2}
\int_{\mathbb R^3} |(1 - \Delta)^{\frac{p}{2}} U_2(x, t)|^2 {\rm d}x = \int_{\mathbb R^3} (1 + |\xi|^2)^p |m(t, \xi) (1 - \chi(\xi)) \hat{f}(\xi)|^2 {\rm d}\xi.
\end{align}
It holds that
\begin{align*}
|m(t, \xi)|\leq
\begin{cases}
te^{-\frac{t}{2}(1 - \sqrt{1 - 4|\xi|^4})} \Big|\frac{1 - e^{-t\sqrt{1 - 4|\xi|^4}}}{t\sqrt{1 - 4|\xi|^4}}\Big|\lesssim e^{-\frac{t}{8}}, &\quad \frac{1}{2}<|\xi|\leq \frac{\sqrt{2}}{2},\\
\frac{1}{2}e^{-\frac{t}{2}} \frac{\sin \frac{t}{2}\sqrt{4|\xi|^4 - 1}}{\frac{t}{2}\sqrt{4|\xi|^4 - 1}}\lesssim e^{-\frac{t}{8}}, &\quad \frac{\sqrt{2}}{2}<|\xi|\leq 1,\\
\frac{e^{-\frac{t}{2}}}{\sqrt{4|\xi|^4 - 1}} |\sin \frac{t}{2}\sqrt{4|\xi|^4 - 1}|\leq \frac{e^{-\frac{t}{2}}}{\sqrt{4|\xi|^4 - 1}}, &\quad |\xi|>1.
\end{cases}
\end{align*}
Hence, when $|\xi|\geq\frac{1}{2}$ we have
\[
|(1 + |\xi|^2)m(t, \xi)| \lesssim e^{-\frac{t}{8}}.
\]
It follows from \eqref{PU2} that
\begin{align*}
\|U_2(x, t)\|^2_{H^p(\mathbb R^3)} &\leq \int_{|\xi|\geq \frac{1}{2}} | (1 + |\xi|^2)^{\frac{p}{2}} m(t, \xi) \hat{f}(\xi) |^2 {\rm d}\xi\\
&\leq e^{-\frac{t}{4}} \int_{\mathbb R^3} | (1 + |\xi|^2)^{-1 + \frac{p}{2}} \hat{f}(\xi) |^2 {\rm d}\xi = e^{-\frac{t}{4}} \|f\|^2_{H^{p-2}(\mathbb R^3)}.
\end{align*}
On the other hand, by Sobolev's theorem, we have for $p>\frac{3}{2}$ that
\[
\sup_{x\in\mathbb R^3} |U_2(x, t)| \leq \|U_2(\cdot, t)\|_{H^{p}(\mathbb R^3)} \lesssim e^{-\frac{t}{8}}\|f\|_{H^{p-2}(\mathbb R^3)}.
\]
More generally, for any $\alpha\in\mathbb N^3$ it holds that
\[
(1 - \Delta)^{\frac{p}{2}}\partial_x^\alpha U_2(x, t) = \mathcal{F}^{-1} ((1 + |\xi|^2)^{\frac{p}{2}} m(t, \xi) (1 - \chi(\xi))\widehat{\partial^\alpha f}),
\]
which leads to
\begin{align}\label{U2}
\sup_{x\in\mathbb R^3} |\partial_x^\alpha U_2(x, t)| \lesssim e^{-\frac{t}{8}} \|\partial^\alpha f\|_{H^{p-2}(\mathbb R^3)}\lesssim e^{-\frac{t}{8}} \|f\|_{H^{s}(\mathbb R^3)}.
\end{align}
Here $s = p - 2 + |\alpha|>|\alpha| - \frac{1}{2}$ by choosing $p > \frac{3}{2}$. Combining the estimate \eqref{U1} with \eqref{U2} yields \eqref{de} for $j = 0.$
Next we consider the general case with $j\neq 0$. Noting
\[
\partial_t^j U(x, t) = \int_{\mathbb R^3} e^{{\rm i}x\cdot\xi} \partial_t^j m(t, \xi) \hat{f}(\xi) {\rm d}\xi,
\]
we obtain from direct calculations that
\begin{align*}
\partial_t^j m(t, \xi) &= \partial_t^j \Big(\frac{e^{-\frac{1}{2}t}}{\sqrt{1 - 4|\xi|^4}} \Big(e^{\frac{1}{2}t \sqrt{1 - 4|\xi|^4}} - e^{-\frac{1}{2}t \sqrt{1 - 4|\xi|^4}}\Big)\Big)\\
&= \sum_{l=0}^j 2^{-j} (\sqrt{1 - 4|\xi|^4})^{l-1} e^{-\frac{t}{2}} \Big(e^{\frac{1}{2}t \sqrt{1 - 4|\xi|^4}} + (-1)^{l+1}e^{-\frac{1}{2}t \sqrt{1 - 4|\xi|^4}}\Big)\\
&:= \sum_{l=0}^j m_l(t, \xi).
\end{align*}
Hence we can write $\partial_t^j U(x, t)$ as
\begin{align}\label{sw}
\partial_t^j U(x, t) = \sum_{l=0}^j \int_{\mathbb R^3} e^{{\rm i}x\cdot\xi} m_l(t, \xi) \hat{f}(\xi){\rm d}\xi:= \sum_{l=0}^j W_l(x, t).
\end{align}
For each $0\leq l \leq j, \, j\neq 0$, using arguments similar to those for the case $j = 0$, we obtain
\begin{align}\label{w}
\sup_{x\in\mathbb R^3} |\partial_x^\alpha W_l(x, t)| \leq (1 + t)^{-\frac{3 + |\alpha|}{4}}\|f\|_{L^1(\mathbb R^3)} + e^{-\frac{t}{8}}\|f\|_{H^{s}(\mathbb R^3)}
\end{align}
for $s>2l + |\alpha| - \frac{1}{2}$. Combining \eqref{sw} and \eqref{w}, we obtain the general estimate \eqref{de}.
\end{proof}
\begin{remark}
For the damped biharmonic plate wave equation, besides the decay estimate \eqref{de}, we can deduce other decay estimates of the $L^p$-$L^q$ type and time-space estimates by a more sophisticated analysis of the Fourier multiplier $m(t, \xi)$. For example, it can be proved that
\[
\|U(x, t)\|_{L^q(\mathbb R^3)} \lesssim (1 + t)^{-\frac{3}{4}(\frac{1}{p} - \frac{1}{q})}\|f\|_{L^p(\mathbb R^3)} + e^{-ct}\|f\|_{W^{q, s}(\mathbb R^3)},
\]
where $1<p\leq q<+\infty$ and $s\geq 3 (\frac{1}{q} - \frac{1}{2}) - 2$. We hope to present the proofs of these $L^p$-$L^q$ estimates and their applications elsewhere.
\end{remark}
\end{appendix}
\section*{Acknowledgement}
We would like to thank Prof. Masahiro Yamamoto for providing the reference \cite{Yuan} on Carleman estimates of the Kirchhoff plate equation. The research of PL is supported in part by the NSF grant DMS-1912704. The research of XY is supported in part by NSFC (No. 11771165). The research of YZ is supported in part by NSFC (No. 12001222).
\section{Introduction}
\IEEEPARstart{C}{ompared} to 2D photometric images, 3D data in the form of mesh surfaces provides more information and is invariant to illumination, out-of-plane rotations and color variations. Further, it provides geometric cues, which enable better separation of the object of interest from its background. Although 3D data is more promising and information-rich, the focus of previous research on representing it has been to carefully design hand-crafted methods of feature description. While automatically learned feature representations in terms of activations of a trained deep neural network have shown their superiority on a number of tasks using 2D RGB images, learning generic shape representations from 3D data is still in its infancy.
Among the different application contexts where shape representations have consolidated their relevance, face analysis is indubitably one of the topics of more active 3D research. On the one hand, this is motivated by the fact that 3D face data can complement 2D images to enhance face analysis in difficult conditions as in the case of facial expressions, occlusions, pose and illumination changes, or in specific tasks where 2D information alone is not sufficient as for face spoofing. On the other hand, 3D face research is boosted by the availability of an increasing number of devices that can be easily used to acquire the face in 3D either at high-resolution, using 3D static/dynamic scanners, or at low-resolution, with 3D cameras like Kinect.
A specific face analysis task, which is attracting increasing interest, is that of recognizing facial expressions from 3D static or dynamic data.
In fact, facial expressions are one of the most important ways of person-to-person non-verbal communication, by which humans convey, either deliberately or in an unconscious way, their emotional state.
The way humans perform and perceive expressions has been studied for a long time, with the seminal works by Ekman et al.~\cite{ekman:1992} showing that human facial expressions can be categorized into six \textit{prototypical} classes, namely, \textit{angry}, \textit{disgust}, \textit{fear}, \textit{happy}, \textit{sad}, and \textit{surprise}, that are invariant across different cultures and ethnic groups.
Later studies have also shown that it is possible to think of expressions as the facial deformations induced by the movement of one or more muscles; such atomic deformations have been classified according to a Facial Action Coding System (FACS)~\cite{ekman:1978}, where a code is used to identify an Action Unit (AU) corresponding to the effect of individual or groups of muscles.
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{Fig_methodOverview/texture_map_and_process_1.png} \\
\small{(a) \hspace{0.5\linewidth} (b)}
\caption{(a) Computation of a new texture mapping transformation $\mathcal{T}$ between the down-sampled mesh model of the face and its texture image, both derived from the original face model.
(b) A variety of geometric descriptors are computed on the down-sampled 3D face mesh surface. These are mapped to the texture 2D image using the inverse texture mapping function $\mathcal{T}^{-1}$. From the constructed images, dubbed \textit{geometry-augmented images} (GAIs), we derive combinations of image triplets arranged into three-channel images. These latter images, dubbed \textit{fused geometry-augmented images} (FGAIs) are used with different CNN models, thus learning highly discriminative feature representations.}
\label{fig:blocdiagram}
\end{figure*}
In this paper, we propose an original approach to extend the application of deep learning solutions to 3D data given in the form of triangular meshes. This is achieved by building on the idea of establishing a mapping between the 3D mesh domain and the 2D image domain. Existing mapping solutions directly generate 2D photometric images by flattening the 3D model to the image plane, or representing it by different 2D views. But, in doing so, descriptors are computed on the 2D generated images, thus losing most of the real geometry of the 3D shape.
Different from previous works, we propose to directly capture the geometric information from the 3D mesh surface in terms of a set of local geometric descriptors. The extracted geometric information is then fused in our proposed geometrically augmented 2D images, which can efficiently be used in conjunction with state-of-the-art CNN models.
\NW{Moreover, our method makes it possible to compute the geometric descriptors on a down-sampled version\footnote{We will use the terms down-sampled and compressed interchangeably.} of the mesh-model allowing a considerable gain in efficiency without compromising the performance.}
Compared with existing methods, the proposed approach faithfully preserves the intrinsic geometric structure in terms of local descriptors, is computationally efficient, and does not require memory intensive tensor representations.
As shown in the block diagram of Fig.~\ref{fig:blocdiagram}, our proposed framework jointly exploits the 3D shape and 2D texture information. First, we perform a down-sampling on the facial surface derived from the full 3D face model. Subsequently, we compute the new texture mapping transformation $\mathcal{T}$ between the texture image and the compressed mesh. Afterward, we extract local shape descriptors in terms of curvatures, shape index and local depth on the 3D shape (details are given in Sect.~\ref{sect:Local-Geometric}). A novel scheme is then proposed to map the extracted geometric descriptors onto 2D textured images, using the inverse of the texture mapping transform $\mathcal{T}$ (see Sect.~\ref{sect:3D-to-2D}). The mapping preserves the shape information, while compactly encoding the geometric description in 2D. We dubbed these images \textit{geometry-augmented images} (GAIs).
It is relevant to remark here that, in the proposed mapping, we assume the existence of a 2D texture image, which is in correspondence with the triangulated mesh via standard texture-mapping. In this respect, our solution can also be regarded as a multi-modal 2D+3D solution, where the 2D texture data, at the same time, is required to enable the mapping, and also constitutes an additional feature that can be early fused with the 3D data in a straightforward way.
\NW{The GAIs are then combinatorially fused to generate multiple three-channel images, which are used to learn highly discriminative feature representations. We dubbed these images \textit{fused geometry-augmented images} (FGAIs).
The effectiveness of the proposed learned feature representation scheme is demonstrated through extensive experiments on the Bosphorus dataset, for the tasks of facial expression classification and AU detection, and on the BU-4DFE dataset, for the tasks of static and dynamic facial expression recognition.} In summary, the original contributions of this work are:\\
1-We propose a new scheme, which maps the geometric information from compressed 3D meshes onto 2D textured images. The proposed scheme provides us with a mechanism to simultaneously represent geometric attributes from the mesh-model alongside with the texture information. The proposed mapped geometric information can be employed in conjunction with a Convolutional Neural Network (CNN) for feature learning from multi-modal (2D and 3D) data.\\
2-We provide a highly discriminative representation of 3D data and texture data in terms of activations of a trained CNN model. Compared to existing learned feature representation schemes for 3D data, the proposed method is both memory and computation efficient as it does not resort to expensive tensor-based or multi-view inputs.\\
3-The proposed scheme allows us to intrinsically fuse texture and shape information at data-level by mapping 3D geometric information in terms of local descriptors onto 2D textured images. Compared with other score or decision level fusion schemes, the proposed approach jointly learns to fuse 2D and 3D information at the data level. Such low-level data fusion has been shown to be more effective compared with the high-level fusion of scores or decisions~\cite{ross2003}.\\
4-We propose a novel geometric descriptor, called \textit{local depth}, which effectively encodes the depth value of a point on the 2D manifold, within a local neighborhood (details are given in Sect.~\ref{sect:Local-Geometric}).
To the best of our knowledge, we are the first to propose such 2D and 3D information fusion for textured-geometric data analysis.
The approach by Li et al.~\cite{lin2017} is the closest one to our proposed solution. However, our method presents three fundamental differences with respect to their work. First, Li et al.~\cite{lin2017} separately encoded texture and geometric information, and dedicated a sub-network to each descriptor. The related output features go into a subsequent feature-level fusion network. In contrast, in our approach, the texture and the geometric information are fused at the data level by mapping the geometric descriptors onto texture images, then rendering multiple three-channel images, which are fed as input to the CNN model. Second, the geometry maps in Li et al.~\cite{lin2017} are obtained by computing the geometric attributes on the face mesh model, then displaying and saving them as 2D images. In our method, we rather establish a correspondence between geometric descriptors computed on 3D triangular facets\footnote{The term facet will be used to refer to a triangular face of the mesh.} of the mesh and pixels in the texture image. Specifically, in our case, geometric attributes are computed on the mesh at each facet and then mapped to their corresponding pixels in the texture image using the newly proposed scheme. This yields a sort of multi-spectral image, where each pixel is an aggregation of texture and geometric information in the form of local descriptors. Such aggregation allows us to encode facial shape and texture in a different multi-channel image representation, hence offering a new data augmentation mechanism. \NW{Third, our method is computationally more efficient as we compute the geometric attributes on a compressed facial mesh surface.}
\NW{A recent work close to ours, which is worth mentioning, is the conformal mapping method proposed by Kittler et al.~\cite{kittler2016}. Like our proposed solution, this method maps 3D information to 2D face images to adapt the data representation to CNN processing. However, there are three fundamental differences between their approach and ours. First, in~\cite{kittler2016}, the mapping is performed from a single generic 3D morphable face model (3DMM) to the 2D face image, whereas in our solution we map each actual face model of a subject to its corresponding 2D face image. Second, the geometric information mapped from the 3DMM to the 2D image is given by the point coordinates, which are just an estimate of the actual values, obtained from the 3DMM by landmark-based model fitting. In our work, instead, we map the actual point coordinates, in addition to other varieties of shape descriptors, from each face model to its corresponding image. Third, regarding the scope, the work of Kittler et al.~\cite{kittler2016} deals with 2D face recognition, while our work addresses facial expression and AU recognition using 3D face images.
Finally, we also mention the works of Zhu et al.~\cite{zhu2016} and Sinha et al.~\cite{sinha:2016}. In~\cite{zhu2016}, similarly to~\cite{kittler2016}, a 3DMM is fit to a 2D image, but for face image frontalization purposes. The face alignment method employs a cascaded CNN to handle the self-occlusions. In~\cite{sinha:2016}, a spherical parameterization of the mesh manifold is performed, yielding a two-dimensional structure that can be processed by conventional CNNs.}
The remainder of the paper is organized as follows. Existing methods for learning representations from 3D shape data and facial image analysis are discussed in Sect.~\ref{sect:Related-work}. Then, in Sect.~\ref{sect:3D-Learned-Feature}, we provide a description of our proposed scheme, followed by extensive experiments in Sect.~\ref{sect:Experimental-Results} to empirically validate the efficacy of the proposed method. Discussion and conclusions are given in Sect.~\ref{sect:conclusions}.
\section{Related work}\label{sect:Related-work}
In the following, we summarize the works in the literature that are most closely related to our proposed method. In particular, we first review methods that have been used to represent the shape of 3D model surfaces (Sect.~\ref{sect:3d-shape-representation});
then, we report on the approaches that addressed facial expression recognition and AU detection from 3D static and dynamic data (Sect.~\ref{sect:3d-expression-recognition}).
\subsection{3D Shape Representation}\label{sect:3d-shape-representation}
Most of the research on 3D object classification uses carefully hand-crafted descriptors.
Guo et al.~\cite{guo2016comprehensive} presented a comprehensive survey and performance comparison of such descriptors, whereby the authors categorized these traditional descriptors into two groups:
\emph{(i)} descriptors based upon histograms of spatial distributions (\emph{e.g.}, point-clouds), and \emph{(ii)} descriptors based upon histograms of local geometric characteristics (\emph{e.g.}, surface normals and curvatures).
In contrast to learned feature representations, a significant limitation of the traditional hand-crafted descriptors is their lack of generalization across different domains and tasks. These hand-crafted descriptors have been recently outperformed by features learned from raw data using deep CNN.
Feature learning capabilities of CNNs from RGB images have been demonstrated in a number of challenging Computer Vision tasks such as object classification and detection, scene recognition, texture classification and image segmentation~\cite{krizhevsky2012imagenet,redmon2016you}. Features learned on a generic large scale dataset such as ImageNet have been shown to generalize well across other tasks, \emph{e.g.}, fine-grained classification and scene understanding~\cite{sharif2014cnn}. Due to their excellent generalization capabilities and impressive performance gain, learned feature representations are believed to be superior compared with traditional hand-crafted features. Despite their success on 2D images, research on learning features from 3D geometric data is still in its infancy compared with its 2D counterpart. Below, we provide an overview of existing 3D deep learning methods.
In one of the earliest works, CNNs and recursive neural networks (RNNs) have been jointly trained on RGB-D data~\cite{socher2012convolutional}.
Gupta et al.~\cite{gupta2014learning} learned geocentric embedding, which encodes height above ground and angle with gravity for depth images. Eitel et al.~\cite{eitel2015multimodal} first separated the color (RGB) and depth (D) information through a CNN followed by late-fusion for RGB-D object detection. Compared with RGB-D data, 3D data in the form of mesh model provides complete and more structured shape information. New methods have been developed to represent and learn features from such data. These methods are discussed below.
\NW{Approaches for learning features from 3D data can be divided into two categories: \textit{volumetric} approaches and \textit{manifold} approaches. The first category treats 3D volumetric data and encompasses basically two paradigms: volumetric CNNs and multi-view CNNs. Volumetric CNNs process 3D data in its raw format (\emph{e.g.}, a volumetric tensor of binary/real-valued voxels~\cite{wang2015}). Unlike 2D images, where each pixel carries meaningful information, only the voxels corresponding to the object surface and boundaries are helpful. Volumetric representation based CNNs are therefore memory intensive and inefficient. Recent works addressed this problem by proposing architectures operating on a cloud of points, while respecting the permutation invariance of points in the input~\cite{qi2017}.
The multi-view CNN paradigm~\cite{crqi2016} extends 2D CNNs to 3D data by synthetically rendering multiple 2D images across different viewpoints of a given 3D point-cloud. These multiple images are then fed as inputs to CNNs, followed by a fusion scheme to get a single-entity representation of the 3D shape. These multi-view representations have shown superior performance compared with volumetric approaches. However, a limitation of the multi-view scheme is that 3D geometric information is not fully preserved in rendering images from 3D data.}
\NW{Manifold approaches operate on mesh surfaces, which serve as a natural parametrization to 3D shapes; but learning using CNNs is a challenging task in such modality. Current paradigms to tackle this challenge either adapt the convolutional filters to mesh surfaces or learn spectral descriptors defined by the Laplace-Beltrami operator. Boscaini et al.~\cite{boscaini:2015} proposed a generalization of CNNs to non-Euclidean domains for the analysis of deformable shapes based on localized frequency analysis. Masci et al.~\cite{masci:2015} extended the CNN paradigm to non-Euclidean manifolds by using a local geodesic system of polar coordinates to extract ``patches'' on which geodesic convolution can be computed. Seong et al.~\cite{seong:2017} introduced a geometric CNN (gCNN) that deals with data representation over a mesh surface and renders pattern recognition in a multi-shell mesh structure. Wang et al.~\cite{wang:2017b} built a hash table to quickly construct the local neighborhood volume of eight sibling octants that allow an efficient computation of the 3D convolutions of these octants in parallel.}
\subsection{Facial Expression Recognition}\label{sect:3d-expression-recognition}
Most of the work on facial expression recognition and AU detection has been done using 2D data. A survey of these works appeared in~\cite{Caleanu2013}. Here, we review the methods developed for 3D data only. We can broadly categorize 3D facial analysis methods into two groups: \textit{feature-based} and \textit{model-based}.
Feature-based methods extract geometric descriptors either holistically or locally from the 3D facial scans. For example, Zhao et al.~\cite{Zhao2010} detected facial landmarks on a given 3D face. Local geometric and texture features were then extracted around the detected landmarks, and used to represent the 3D facial scan. Similarly, Maalej et al.~\cite{Maalej2011} represented a facial scan in terms of local surface patches extracted around $70$ facial landmarks. Geodesic distance on the Riemannian geometry is utilized as a metric to compare the extracted patches. A number of other works represented 3D scans using either local or holistic geometric descriptors. Examples include distances between 3D facial landmarks~\cite{Tang2008} and distances between locally extracted surface patches~\cite{LI2015}.
Model-based approaches first establish a dense point-to-point correspondence between a query mesh and a generic expression deformable mesh using rigid and non-rigid transformation techniques. The transformation parameters are then used as representations of the query mesh for classification. Non-rigid facial deformations are characterized by a bilinear deformable model in~\cite{Mpiperis2008}. The shape of a scan with facial expression is decomposed into neutral and expression parts in~\cite{Gong2009}. The expression part of the decomposed scan is then employed for encoding the facial scan. Some works combine the strengths of both feature-based and model-based techniques. For example, Zhen et al.~\cite{Zhen2016} first segment the face into multiple regions based upon muscular movements. Geometric descriptors are then extracted from these regions, followed by a fusion scheme that optimally weights and combines decisions from different regions.
\NW{More recently, following the release of the BU-4DFE database~\cite{yin2008}, several works have addressed dynamic facial expression recognition.
Sun and Yin~\cite{yin2008} proposed a deformable heuristic model representing both static and dynamic information with spatiotemporal descriptors, followed by a 2D hidden Markov Model (HMM) for video expression classification. Fang et al.~\cite{Fang2011-ICCVW} used the Mesh-HOG for matching faces and a dynamic Local Binary Patterns (LBP) descriptor as an additional feature to capture temporal indices; an SVM was used for the classification. In~\cite{SANDBACH2012-IVC}, Sandbach et al. extracted 3D motion primitives, namely, Free-Form Deformations (FFD), between neighboring 3D frames, and adopted a GentleBoost classifier and HMMs for recognizing the expression. Reale et al.~\cite{real2013} proposed a dynamic regional joint histogram of curvature and polar angles; LBP features were then extracted and used in a nearest-neighbor classifier. Berretti et al.~\cite{Berretti2013} modeled facial deformation with mutual distances between facial landmarks, and used two HMMs for the expression classification. Ben Amor et al.~\cite{Amor2014} presented collections of radial curves to describe the face, from which they derived quantified motion cues across successive 3D frames, using Riemannian geometry tools. LDA and HMM were subsequently used to classify expressions. As in~\cite{Berretti2013}, Xu et al.~\cite{Xue2015} detected facial landmarks to construct localized temporal depth maps from which 3D-DCT features were derived to form dynamic face signatures, later fed into a nearest-neighbor classifier. A similar region-wise description paradigm was proposed by Zhen et al.~\cite{Zhen2016}, where a variety of geometric descriptors were computed over weighted muscular areas, and used with an SVM and a muscular movement model (MMM) for the recognition. The same classifiers have been used in a subsequent work~\cite{Zhen2018}, but adopting a spatial deformation encoded with Dense Scalar Fields, then using temporal filtering to amplify facial dynamics.
Yao et al.~\cite{Yao2018} proposed a textural and geometric diffusion facial representation using a scattering operator. Multiple kernel learning (MKL) was used to combine the contributions of different channels.
Another category of methods adopted a random forest classifier~\cite{meguid2018,Dapogny2018}. Meguid et al.~\cite{meguid2018} used PCA features, derived from the dataset, and a collection of binary random forests coupled with an SVM classifier.
Dapogny et al.~\cite{Dapogny2018} used a mixture of geometric and HOG features with a Greedy Neural Forest Classifier.}
\section{3D Learned Feature Representation}\label{sect:3D-Learned-Feature}
\SB{Our proposed 3D learned feature representation consists of several steps, as illustrated in Fig.~\ref{fig:blocdiagram}. In the following, we detail each stage: first, four local geometric descriptors are computed on the triangular mesh manifold, as explained in Sect.~\ref{sect:Local-Geometric}; then, such descriptors are projected to the 2D domain using an original solution that combines mesh resampling and inverse texture mapping (see Sect.~\ref{sect:3D-to-2D}). Triplets of GAIs originated from the 3D-to-2D descriptor mapping, and also from the gray-level texture image associated to the mesh, are then fused into the FGAIs and used as input to a DCNN architecture (Sect.~\ref{sect:Mapped-Image}); the output activations of the DCNN model generate a highly discriminative feature representation, which is used in the subsequent classification stage (Sect.~\ref{sect:Classification})}.
\subsection{Local Geometric Descriptors}\label{sect:Local-Geometric}
Given a triangular mesh manifold obtained from a 3D point-cloud, we compute four local geometric descriptors, namely, \textit{mean curvature} (\textit{H}), \textit{Gaussian curvature} (\textit{K}), \textit{shape index} (\textit{SI}), and the \textit{local-depth} (\textit{LD}).
These local descriptors can be computed efficiently and complement each other by encoding different shape information. In addition to these, we also consider the \textit{gray-level} (\textit{GL}) of the original 2D texture image associated to the mesh.
For completeness, we provide below a brief explanation of the geometric descriptors.
The Gaussian curvature ($K$), the mean curvature ($H$), and the shape index ($SI$) are computed using the following equations~\cite{topon2006}:
\begin{eqnarray}
K = \lambda_1 \lambda_2 , \quad
H = \frac{\lambda_1 + \lambda_2}{2} , \quad
SI = \frac{1}{2} - \frac{1}{\pi} \arctan \Big( \frac{\lambda_1 + \lambda_2}{\lambda_1 - \lambda_2} \Big) \; .
\end{eqnarray}
\noindent
Here $\lambda_1$ and $\lambda_2$ are the principal curvatures determined by the roots of the following quadratic equation, for the unknown $k$~\cite{topon2006}:
\begin{equation}
\begin{vmatrix}
L-kE & M-kF \\
M-kF & N-kG
\end{vmatrix}
= 0 \; ,
\end{equation}
\noindent
where $(E,F,G)$ and $(L,M,N)$ are, respectively, the coefficients of the first and the second fundamental form, computed at the given point $(x,y,z)$ on the surface. These coefficients are defined by:
\begin{eqnarray}
E = 1 + \Big(\frac{\partial z}{\partial x}\Big)^{2}, \quad
F = \frac{\partial z}{\partial x} \frac{\partial z}{\partial y} , \quad
G = 1 + \Big(\frac{\partial z}{\partial y}\Big)^{2}, \\
L = \frac{\partial^2 z}{\partial x^2}\Big/(EG-F^2)^{\frac{1}{2}}, \quad
M = \frac{\partial^2 z}{\partial x \partial y}\Big/(EG-F^2)^{\frac{1}{2}}, \quad
N = \frac{\partial^2 z}{\partial y^2}\Big/(EG-F^2)^{\frac{1}{2}}
\end{eqnarray}
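To make the computation concrete, the following is a minimal Python sketch (not the authors' implementation) that evaluates the fundamental-form coefficients of a Monge patch $z = f(x,y)$ from supplied partial derivatives, solves the quadratic for the principal curvatures, and returns $K$, $H$, and $SI$; it assumes the standard normalization of the second fundamental form by $\sqrt{EG - F^2}$ and the convention $\lambda_1 \geq \lambda_2$ in the shape index:

```python
import math

def fundamental_forms(zx, zy, zxx, zxy, zyy):
    """Coefficients of the first and second fundamental forms of z = f(x, y)."""
    E = 1.0 + zx**2
    F = zx * zy
    G = 1.0 + zy**2
    w = math.sqrt(E * G - F**2)  # sqrt(1 + zx^2 + zy^2)
    L = zxx / w
    M = zxy / w
    N = zyy / w
    return E, F, G, L, M, N

def curvature_descriptors(zx, zy, zxx, zxy, zyy):
    """Gaussian curvature K, mean curvature H, and shape index SI at a point."""
    E, F, G, L, M, N = fundamental_forms(zx, zy, zxx, zxy, zyy)
    # Principal curvatures are the roots k of
    # (EG - F^2) k^2 - (EN + GL - 2FM) k + (LN - M^2) = 0.
    a = E * G - F**2
    b = E * N + G * L - 2.0 * F * M
    c = L * N - M**2
    disc = math.sqrt(max(b**2 - 4.0 * a * c, 0.0))
    l1 = (b + disc) / (2.0 * a)  # lambda_1 >= lambda_2
    l2 = (b - disc) / (2.0 * a)
    K = l1 * l2
    H = 0.5 * (l1 + l2)
    # atan2 handles the umbilic case lambda_1 == lambda_2 gracefully.
    SI = 0.5 - math.atan2(l1 + l2, l1 - l2) / math.pi
    return K, H, SI
```

For instance, on the paraboloid $z = x^2 + y^2/2$ at the origin the principal curvatures are $2$ and $1$, so $K = 2$ and $H = 1.5$.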
The \textit{local-depth} (LD) is our newly proposed descriptor, which represents the depth value of a point in a local reference system attached to its neighboring points on the manifold. \NW{We define the neighborhood as the set of points encompassed by the geodesic disc of radius $3e$, where $e$ is the average edge length of the triangles of the mesh surface.}
The following steps are performed to compute the local depth at a given point on the mesh manifold. First, the local canonical reference system is computed at the point neighborhood (Fig.~\ref{fig:localdepth}). This reference is defined by the center of mass of the points within that neighborhood and the three eigenvectors of the Principal Component Analysis (PCA) of their covariance matrix. The local depth is the $z$ coordinate in the local reference, obtained by computing the algebraic distance between the point and the plane $(x,y)$ \SB{spanned by the first two principal components}, \NW{and having as normal the least principal component.}
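These steps can be sketched as follows (a simplified illustration, not the authors' code: the geodesic-disc neighborhood is assumed to be precomputed, and the sign of the PCA normal is left arbitrary):

```python
import numpy as np

def local_depth(point, neighbors):
    """Signed distance from `point` to the PCA plane of its neighborhood.

    `neighbors` is an (n, 3) array of mesh points inside the geodesic disc
    around `point` (assumed precomputed here).
    """
    center = neighbors.mean(axis=0)
    # Eigenvectors of the covariance matrix define the local canonical frame;
    # the eigenvector of the smallest eigenvalue is the local plane normal.
    cov = np.cov((neighbors - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    # Local depth = algebraic distance to the (x, y) plane of the local frame.
    return float(np.dot(point - center, normal))
```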
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{Fig_localdepth/localdepth_v2_crop.png}
\caption{The local depth is the algebraic distance (red segment in the figure) from a point on the surface to the main plane $(x,y)$ of the local reference spanning a local neighborhood (the sectional brown curve).}
\label{fig:localdepth}
\end{figure}
\subsection{3D to 2D Mapping}\label{sect:3D-to-2D}
The core procedure of the proposed method is constituted by the mapping that we establish between the 3D and the 2D domain. This process takes as input a 3D triangular mesh with its 2D texture image, and produces as output a set of 2D images representing the different surface descriptors computed on the mesh surface (\emph{i.e.}, GAIs).
Given a triangular mesh originated from a face scan, its vertices can be regarded as a cloud of scattered points. These points are initially projected onto the plane spanned by the main orientation of the face. This yields the depth function $z=f(x,y)$ defined on the scattered set of points. The function is then interpolated and re-computed over a regular grid constructed by a uniform down-sampling of order $k$, \NW{where $k$ defines the sub-sampling step.}
The 2D Delaunay triangulation computed over the achieved regular points produces a set of uniform triangular facets. We complete them with the interpolated $z$ coordinates to obtain a 3D down-sampled regular mesh. This is illustrated in the middle face portion at the bottom of Fig.~\ref{fig:blocdiagram}~(a). In this procedure, we also compute, for each vertex in the new re-sampled mesh, its nearest neighbor in the original mesh.
\NW{A sub-sampling of step $k$ produces approximately a compression ratio (the original data over the compressed data) of $k$ for both the facets and the vertices.}
According to the initial hypothesis, the original 3D mesh has an associated texture image. The mapping between the mesh and the texture image is established by the usual texture mapping approach, where the vertices of each triangle (\emph{i.e.}, also called facet in the following) on the mesh are mapped to three points (\emph{i.e.}, pixels) in the image. It is evident that the projection and re-sampling step, as illustrated above, break this correspondence, which thus needs to be re-established. Therefore, in the next stage, we reconstruct a new texture mapping between the 2D face image and the newly re-sampled mesh vertices. For each vertex in the re-sampled mesh, we use its nearest neighbor in the original mesh, computed in the previous stage, to obtain the corresponding pixel in the original image via the texture mapping information in the original face scan. This newly obtained texture mapping transformation ($\mathcal{T}$ in Fig.~\ref{fig:blocdiagram}) between the original image and the new re-sampled facial mesh allows us to map descriptors computed on the facial surface, at any given mesh resolution, to a 2D \textit{geometry-augmented image} (GAI), which encodes the surface descriptor as pixel values in a 2D image structure.
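The re-established correspondence can be sketched with a brute-force nearest-neighbor lookup (a minimal sketch with hypothetical helper names; in practice a k-d tree would replace the linear scan):

```python
import numpy as np

def remap_texture_coordinates(resampled_vertices, original_vertices, original_uv):
    """Rebuild the texture mapping T for a re-sampled mesh.

    Each re-sampled vertex inherits the (u, v) texture coordinates of its
    nearest neighbor among the original mesh vertices.
    """
    uv = np.empty((len(resampled_vertices), 2))
    for i, v in enumerate(resampled_vertices):
        # Squared Euclidean distances to every original vertex.
        d2 = np.sum((original_vertices - v) ** 2, axis=1)
        uv[i] = original_uv[np.argmin(d2)]
    return uv
```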
\NW{We note here that, in order to keep a consistent correspondence between the texture information and the geometric descriptors, we down-sample the original texture image to bring it to the same resolution as its GAI counterpart. We do this as follows: we take the vertices of each facet in the down-sampled facial mesh (Fig.~\ref{fig:subsamplegray}~(a)), and we compute, using $\mathcal{T}^{-1}$, their corresponding pixels in the original image (Fig.~\ref{fig:subsamplegray}~(b)). These three pixels form a triangle in the original image; we assign to all pixels that are confined in that triangle the average of their gray-level values (Fig.~\ref{fig:subsamplegray}~(c)).}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth, height=3cm]{Fig_txm/map_texture_resampled_mesh1.jpg}
\footnotesize{ (a) \hspace{2.3cm} (b) \hspace{2.3cm} (c)}
\caption{(a) A triangular facet in a down-sampled face mesh; (b) The texture mapping associates the vertices of the facet in (a) to three pixels in the texture image, thus forming a triangle; (c) All the pixels confined in the triangle in (b) are assigned their mean gray level (this results in a quite evident effect of triangular tessellation on the texture image). Ultimately, the texture in (c) constitutes the image counterpart of the mesh down-sampling.}
\label{fig:subsamplegray}
\end{figure}
Figure~\ref{fig:hrlr} depicts the GAIs obtained by mapping the face descriptors computed on the original mesh (top row), and on its down-sampled counterpart, at a compression ratio of \NW{3.5}.
As the facial expressions affect the face shape at a macro-level, we argue that the down-sampling, while significantly reducing the computational complexity, should not severely compromise the performance. This hypothesis is confirmed in the experiments of Sect.~\ref{sect:Experimental-Results}.
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth, height=3cm]{Fig_HrLr/high_low_descr1.jpg} \\
\footnotesize{ K \hspace{1.2cm} {H} \hspace{1.1cm} {LD} \hspace{1.1cm} {GL} \hspace{1.2cm} {SI}}
\caption{GAIs obtained by 2D mapping of the descriptors computed on the original mesh (top row), and its down-sampled version (bottom row). From left-to-right: Gaussian curvature (K), mean curvature (H), local-depth (LD), gray level (GL), and shape index (SI). \NW{The compressed mesh is obtained with a sub-sampling step of 3.5. For this example, the number of facets/vertices in the original and compressed mesh are (9558/4873) and (2799/1459), respectively.}}
\label{fig:hrlr}
\end{figure}
\subsection{Fused Geometry Augmented Images}\label{sect:Mapped-Image}
After mapping the extracted local geometric descriptors onto the 2D textured images, we can encode the shape information in a compact 2D matrix. Since we extract four local geometric descriptors, the shape information is represented by their four corresponding 2D GAIs, to which we add the gray-level image.
We propose to fuse the shape information in terms of multiple descriptors by combinatorially selecting three descriptors at once. This results in ten three-channel images ($\binom{5}{3} = 10$) that we call \textit{fused geometry augmented images} (FGAIs).
Each FGAI realizes a sort of early fusion between the descriptors. For example, an FGAI can be a combination of three GAIs obtained, respectively, from Gaussian curvature (K), shape-index (SI), and gray-level (GL); we indicate this specific combination as K-SI-GL. \NW{While we have no evidence about the plausibility of this hypothesis, we assume the permutations of this combination across the three channels to be equivalent (\emph{i.e.}, K-SI-GL is equivalent to GL-SI-K, SI-K-GL, SI-GL-K, K-GL-SI, GL-K-SI). Also, though we did not experiment with this in the present work, we believe the channel permutations could be exploited as a data augmentation technique.}
Following this early feature fusion scheme, the FGAIs for a sample face are visualized in Fig.~\ref{fig:fused}.
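The combinatorial construction of the ten FGAIs can be sketched as follows (the random arrays are stand-ins for real GAIs, and one fixed channel order per triple reflects the permutation-equivalence assumption above):

```python
import itertools

import numpy as np

descriptors = ["K", "H", "GL", "LD", "SI"]
# One GAI per descriptor; tiny random stand-ins for illustration.
rng = np.random.default_rng(0)
gais = {d: rng.random((240, 240)) for d in descriptors}

# C(5,3) = 10 unordered triples, each stacked into a three-channel image.
fgais = {
    "-".join(triple): np.stack([gais[d] for d in triple], axis=-1)
    for triple in itertools.combinations(descriptors, 3)
}
print(len(fgais))  # 10
```

Each resulting array has the shape of a standard three-channel image, which is what allows FGAIs to be fed directly to networks pre-trained on RGB data.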
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{Figures/sample_fuesd_descriptors_2.JPG}
\caption{The ten three-channel FGAIs generated for a sample face by combinatorially selecting the mapped geometric descriptors. Top row: K-GL-LD, K-H-GL, H-GL-LD, K-LD-SI, K-GL-SI. Bottom row: H-LD-SI, H-GL-SI, K-H-SI, GL-LD-SI, K-H-LD.}
\label{fig:fused}
\end{figure}
\NW{If we consider the GAIs as our input data, the proposed FGAIs constitute a data-fusion scheme, as opposed to feature-level schemes, where the fusion operates on the features derived from the input images. There, one can envisage dedicating a CNN acting as feature extractor to each GAI type, with the derived features fed into a fusion layer whose output goes to a fully connected layer followed by an SVM or softmax classifier. This feature-level fusion architecture was adopted by Li et al.~\cite{lin2017}. We believe that our data-fusion approach is more advantageous for three main reasons: \emph{(i)} low-level fusion performs better than higher-level counterparts when it comes to biometry applications~\cite{ross2003}; \emph{(ii)} our data fusion allows us to utilize powerful pre-trained architectures in a transfer-learning mode, thus avoiding the time-demanding training of CNNs from scratch while gaining from the effectiveness and generality of learned features~\cite{sharif2014cnn}; \emph{(iii)} the proposed fusion scheme naturally brings in the effect of data augmentation, which has proved effective in numerous deep learning tasks, especially where limited training data are available.}
Based on the above, we have opted to employ the FGAIs in a transfer-learning approach using two standard architectures, AlexNet and Vgg-vd16. We first explore these architectures in a reuse mode rather than a fine-tuning mode; that is, we employ features extracted from these architectures as the input to the subsequent classification task. This option entirely avoids the training that fine-tuning would require.
\textbf{AlexNet} -- This network~\cite{krizhevsky2012imagenet} contains five convolution layers interleaved with sub-sampling layers and followed by Rectified Linear Units (ReLU). The network has three fully connected layers. AlexNet provided a breakthrough on the ImageNet challenge, and serves as a solid baseline. We used AlexNet as a feature representation network by extracting features from two convolution layers (conv4 and conv5), one pooling layer (Pool5), and two fully connected layers (FC6 and FC7).
\textbf{Vgg-vd16} -- Compared with AlexNet, Vgg-vd16~\cite{Parkhi:2015} has far more trainable layers (16, the last three of which are fully connected). The large depth of the network was made possible by the smaller filter sizes ($3\times3$). Vgg-vd16 achieved a significant performance gain on the ImageNet dataset, and has shown its effectiveness on a number of transfer learning tasks. We utilized three convolution layers (conv$5_1$, conv$5_2$ and conv$5_3$) and two fully connected layers (FC6 and FC7).
A performance comparison of AlexNet and Vgg-vd16 is presented in our experimental evaluations (see Sect.~\ref{sect:Experimental-Results}). For both of them, the pre-trained weights have been used to extract features from our GAIs and FGAIs.
\subsection{Classification}\label{sect:Classification}
For each facial expression (FE) / action unit (AU) class, we learn a discriminative model. For this purpose, we train a simple one-vs.-rest binary SVM classifier. Specifically, to learn the model parameters of one FE/AU, we consider feature encoding for one FE/AU as the positive class, whereas the encodings of the remaining FEs/AUs are considered as the negative class. A binary SVM is then trained to learn the hyperplane which optimally discriminates the two classes:
\begin{equation}
\label{l2l2}
\min_\mathbf{w} \; \; \frac{1}{2}\mathbf{w}^T\mathbf{w} + C \sum_t \left ( \max\left ( 0,1-{\ell_t}\mathbf{w}^T\mathbf{x}_t \right ) \right )^2 \; ,
\end{equation}
\noindent
where $\ell_t \in \{1,-1\}$, $\mathbf{x}_t$ represents the feature vector, $\mathbf{w}$ is the vector defining the parameters of the separating hyperplane, and $C$ is the penalty parameter, which controls the trade-off between maximizing the margin and minimizing the training error.
Following this procedure, we learn a set of model parameters $\mathbf{w}_i$, $i = 1,\dots,m$, where $m$ equals $6$ and $24$ for the FEs and AUs, respectively.
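The squared-hinge objective of Eq.~\eqref{l2l2} corresponds to scikit-learn's \texttt{LinearSVC} with its default loss; a minimal one-vs.-rest sketch on hypothetical toy data (the class count and features are illustrative stand-ins for real FE/AU encodings):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
m = 3                                   # e.g. 3 toy expression classes
X = rng.normal(size=(90, 8))
y = np.repeat(np.arange(m), 30)
X += y[:, None]                         # shift class means apart

# One binary L2-regularized, squared-hinge SVM per class:
# class i is the positive class, the remaining classes form the negative one.
svms = []
for i in range(m):
    clf = LinearSVC(C=1.0, loss="squared_hinge")
    clf.fit(X, (y == i).astype(int))
    svms.append(clf)

# Predict by the largest decision value w_i^T x.
scores = np.stack([clf.decision_function(X) for clf in svms], axis=1)
pred = scores.argmax(axis=1)
print((pred == y).mean())
```

Each trained `clf` holds one $\mathbf{w}_i$; at test time the class with the highest decision value wins.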
\section{Experimental Results}\label{sect:Experimental-Results}
The effectiveness of the proposed approach is demonstrated through extensive experiments in two tasks: facial expression recognition and action unit (AU) detection. We demonstrate that our proposed features are quite generic and can work simultaneously for these tasks. Experiments have been conducted with two publicly available datasets, namely, the Bosphorus 3D face database~\cite{Savran2008} and the Binghamton University 4D Facial Expression database (BU-4DFE)~\cite{yin2008}.
\subsection{Bosphorus dataset}
\NW{The Bosphorus database~\cite{Savran2008} contains $4,666$ 3D face scans belonging to $105$ subjects. Each scan is labeled with the locations of facial landmarks and the presence of facial AUs and emotions (one of the six discrete facial expressions). The database is challenging due to the presence of large-scale self-occlusions and head rotations. Further, the subjects in the database exhibit diversity in terms of gender, ethnicity and age.
\NW{For facial expression recognition, $845$ scans of $65$ subjects exhibit one of six discrete expressions: \textit{anger}, \textit{happiness}, \textit{sadness}, \textit{surprise}, \textit{disgust} and \textit{fear}. For AU detection, a total of $3,838$ scans across the 105 subjects show 24 AUs.}
Samples from the Bosphorus dataset are depicted in Fig.~\ref{fig:bos-sample}.}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth, height=2.5cm]{Figures/bosphorus_samples.png}
\caption{Samples from the Bosphorus dataset. Top row: the six facial expressions, respectively, \emph{anger}, \emph{disgust}, \emph{fear}, \emph{happiness}, \emph{sadness}, \emph{surprise}.
Bottom row: six AUs out of the 24: \emph{right lip corner puller}, \emph{lip stretcher}, \emph{outer brow raiser}, \emph{eyes closed}, \emph{lip suck}, and \emph{cheek puff}.}
\label{fig:bos-sample}
\end{figure}
\NW{For facial expression recognition, we evaluate the performance in terms of accuracy, \textit{i.e.}, the ratio of correctly classified samples.
For AU detection, the area under the ROC curve (AuC) is used as a metric. Specifically, for each of the 24 AUs, we consider a one-vs.-rest classification approach and compute AuC. Other performance metrics such as Recall, Precision, F1-score, and Sensitivity could have been used for evaluation purposes as well. The choice of these two performance metrics is motivated by the need to make our results directly comparable with the reported state-of-the-art methods in the literature.}
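As a minimal sketch of this per-AU evaluation (the toy labels and scores are hypothetical stand-ins for real classifier outputs, and only 4 of the 24 AUs are simulated):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical setup: binary presence labels and classifier decision
# scores for a handful of scans, for each of a few AUs.
n_aus, n_scans = 4, 50
labels = rng.integers(0, 2, size=(n_scans, n_aus))
# Scores that lean towards the true label, plus noise.
scores = labels + rng.normal(scale=0.5, size=labels.shape)

# One-vs.-rest AuC per AU, then the mean value reported in the text.
aucs = [roc_auc_score(labels[:, j], scores[:, j]) for j in range(n_aus)]
print(np.mean(aucs))
```

The mean over all AUs is the single number used below when comparing against the state of the art.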
\subsubsection{Ablative Analysis}\label{sec:ablative}
We conducted an ablative analysis experimentation that aims to investigate the performance with respect to: \emph{(i)} the discrimination capacity of the features corresponding to the different FGAIs across the different network layers; \emph{(ii)} the CNN layers employed as output features; \emph{(iii)} down-sampled data versus original data; \emph{(iv)} comparison between AlexNet and Vgg-vd16; and \emph{(v)} the early fusion scheme.
To compare the discrimination capacity of the features corresponding to the different FGAIs in the network architecture (point \emph{(i)} above), we used a Fisher's linear discriminant-like criterion. Given a feature vector $\mathcal{V}$ of length $n_\mathcal{V}$ derived from a layer $L$ in the trained CNN, for a given FGAI, we define the discrimination power of the feature $\xi^{L}_r$, $r = 1:n_\mathcal{V}$, as:
\begin{align}
\label{equ:descrit}
\mathcal{J}(\xi^{L}_r) & = \sum_{i=1}^{N_E} \sum_{j=i+1}^{N_E} \frac{1}{2}(\mu_i(\xi^{L}_r) - \mu_j(\xi^{L}_r))^2( \frac{1}{\sigma_{i}(\xi^{L}_r)^2} + \frac{1}{\sigma_{j}(\xi^{L}_r)^2} ) \\ \nonumber
& +\frac{1}{2}( \frac{\sigma_{i}(\xi^{L}_r)^2}{\sigma_{j}(\xi^{L}_r)^2} + \frac{\sigma_{j}(\xi^{L}_r)^2}{\sigma_{i}(\xi^{L}_r)^2} - 2) \; ,
\end{align}
\noindent
where $N_E$ is the number of facial expressions (\textit{i.e.}, 6), and $(\mu_i,\mu_j)$ and $(\sigma_i,\sigma_j)$ are the means and standard deviations of the feature $\xi^{L}_r$ computed for the pair of facial expression classes $i$ and $j$, respectively.
The larger the criterion $\mathcal{J}$, the higher the discrimination capacity of the feature $\xi^{L}_r$. For a given layer in the network and for each facial expression class, we compute all the output feature vector samples corresponding to a specific FGAI. Then, for each element (\textit{e.g.}, feature $\xi^{L}_r$) in the vector, we compute the criterion $\mathcal{J}$ as described in Eq.~\eqref{equ:descrit}.
The criteria computed for each feature are then ranked and displayed together for the different layers so that they can be visually compared. Note that for AlexNet, a number of elements in the feature vectors are expected to have a zero value because of the large sparsity of the weights in the network layers~\cite{lorieul2016}. Therefore, we detect and remove these zero-valued features from each feature vector before computing the criterion $\mathcal{J}$.
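For concreteness, the criterion of Eq.~\eqref{equ:descrit} can be sketched as follows (the synthetic per-class samples are illustrative; a discriminative feature with well-separated class means should score far higher than an uninformative one):

```python
import numpy as np

def criterion_J(feature_by_class):
    """Discrimination power of one feature: sum over all class pairs of a
    Fisher-like mean-separation term plus a variance-mismatch term."""
    mu = [np.mean(s) for s in feature_by_class]
    var = [np.var(s) for s in feature_by_class]
    J, n = 0.0, len(feature_by_class)
    for i in range(n):
        for j in range(i + 1, n):
            J += 0.5 * (mu[i] - mu[j]) ** 2 * (1 / var[i] + 1 / var[j])
            J += 0.5 * (var[i] / var[j] + var[j] / var[i] - 2)
    return J

rng = np.random.default_rng(0)
# 6 expression classes: separated class means vs. identical ones.
good = [rng.normal(loc=k, scale=1.0, size=200) for k in range(6)]
poor = [rng.normal(loc=0.0, scale=1.0, size=200) for _ in range(6)]
print(criterion_J(good) > criterion_J(poor))  # -> True
```

Ranking these per-feature scores, layer by layer, yields the plots discussed next.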
\begin{figure}[t]
\centering
\begin{minipage}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_conv4_Bosph.png}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_conv5_Bosph.png}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_pool5_Bosph.png}
\end{minipage}
\begin{minipage}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_fc6_Bosph.png}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_fc7_Bosph.png}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_alex_Bosph.png}
\end{minipage}
\caption{Bosphorus dataset: Ranked $\mathcal{J}$ of Eq.~\eqref{equ:descrit} plotted for the top $500$ features for each FGAI in the AlexNet layers.
\SB{Looking at the plots top-to-bottom and left-to-right, the first five compare the ten FGAIs at the Conv4, Conv5, Pool5, FC6 and FC7 network layers; the last plot compares the different layers to each other in the same plot for all the FGAIs.}}
\label{fig:descri_power_alexnet}
\end{figure}
Figure~\ref{fig:descri_power_alexnet} depicts the criterion $\mathcal{J}$ for the top $500$ features corresponding to the ten FGAIs across the AlexNet layers.
The first five plots (top-to-bottom and left-to-right) show that the FGAI for the combination H-GL-SI appears to have the largest discriminative capacity, clearly above the others, particularly at Conv4 and Conv5.
\SB{The other plots show some disparity among the other FGAIs, thus not allowing an assessment as conclusive as for H-GL-SI.
In the last plot of the figure (third row, second column)}, we report a test variant aiming to assess the discrimination power of the features layer-wise (\emph{i.e.}, for each of Conv4, Conv5, Pool5, FC6 and FC7). In particular, we computed the criterion $\mathcal{J}$ for each feature in these layers, considering all the FGAI outputs when computing the means and standard deviations in Eq.~\eqref{equ:descrit}.
Results clearly show that the criterion $\mathcal{J}$ remains markedly higher in the Conv4, Conv5 and Pool5 layers than in FC6 and FC7 across all the features, thus reflecting a higher discrimination capacity of the former layers.
\SB{The $\mathcal{J}$ test above accounts for the general discriminative power of the different FGAIs at different layers; to further investigate the effectiveness of the features,} we conducted a series of tests assessing the performance of each FGAI for facial expression recognition across the different network layers of AlexNet and Vgg-vd16. We considered the original version of the face scans (\emph{i.e.}, without any down-sampling). We adopted standard 10-fold cross validation over 60 randomly selected subjects, partitioning the subjects into 10 folds and deriving, in each round, the testing set from one fold and the training set from the other 9 folds.
Table~\ref{tab:combBos} reports the classification rates obtained for AlexNet~(a) and Vgg-vd16~(b). Looking at the first and second scores in each column, marked in bold and blue, respectively, we notice that K-GL-SI and H-GL-SI form the best combinations for AlexNet; the pair GL-SI is present in both of them. For Vgg-vd16, H-GL-SI and H-LD-SI appear to be the best combinations.
We also notice the low rates obtained with FC6 and FC7 in both networks, confirming the discriminative power analysis reported earlier in Fig.~\ref{fig:descri_power_alexnet}.
Referring to the overall findings in the discrimination analysis and the facial expressions, we selected the H-GL-SI as the best FGAI candidate for the rest of the experimentation.
\begin{table}[t]
\caption{Bosphorus dataset: Classification rate using the ten different FGAIs for AlexNet~(a) and Vgg-vd16~(b)}
\label{tab:combBos}
\begin{minipage}[h]{0.05\linewidth}
(A)
\end{minipage}
\begin{minipage}[h]{0.43\linewidth}
\begin{tabular}{lccccc}
\toprule
{FGAI} & {Conv4} & {Conv5} & {Pool5} & {FC6} & {FC7} \\
\midrule
K-H-GL & {89.48} & {89.0} & {89.56} & {44.44} & {42.22} \\
K-H-LD & {91.67} & {85.67} & {90.67} & {42.78} & {45.67} \\
K-H-SI & {95.7} & {90.11} & {91.22} & {41.78} & {25.56} \\
K-GL-LD & {94.6} & {92.33} & {92.53} & {47.78} & \bc{{50.0}} \\
K-GL-SI & \bc{97.53} & \textbf{{96.78}} & \bc{97.33} & {52.33} & {37.78} \\
K-LD-SI & {96.44} & {90.67} & {96.78} & {51.78} & {30.0} \\
H-GL-LD & {94.97} & {93.44} & \textbf{{97.89}} & {47.89} & \textbf{{50.0}} \\
H-GL-SI & \textbf{98.27} & \bc{94.56} & {96.22} & \bc{53.44} & {46.22} \\
H-LD-SI & {96.8} & {89.56} & {96.78} & \textbf{{54.56}} & {41.22} \\
GL-LD-SI & {96.07} & {89.56} & {90.67} & {47.33} & {22.22} \\
\midrule
\textbf{Mean} & 95.153 & 91.168 & 93.965 & 48.411 & 39.089 \\
\bottomrule
\end{tabular}
\end{minipage}
\vspace{0.2cm}
\begin{minipage}[h]{0.05\linewidth}
(B)
\end{minipage}
\begin{minipage}[h]{0.43\linewidth}
\begin{tabular}{lccccc}
\toprule
{FGAI} & {Conv$5_1$} & {Conv$5_2$} & {Conv$5_3$} & {FC6} & {FC7} \\
\midrule
K-H-GL & {94.59} & {89.2} & {88.44} & {64.37} & {56.22} \\
K-H-LD & {92.89} & {88.31} & {85.67} & {\textbf{66.03}} & \bc{63.44} \\
K-H-SI & {93.44} & {89.4} & {87.89} & {55.04} & {57.33} \\
K-GL-LD & {95.44} & {93.33} & {89.56} & {61.27} & {62.89} \\
K-GL-SI & {97.44} & {95.53} & {\textbf{94.0}} & {62.85} & {61.78} \\
K-LD-SI & {97.54} & {96.10} & {89.56} & {61.38} & {60.11} \\
H-GL-LD & {97.89} & \bc{93.7} & {91.78} & {60.54} & {\textbf{67.33}} \\
H-GL-SI & \textbf{98.16} & {92.5} & {91.22} & {\bc{65.78}} & {57.89} \\
H-LD-SI & \bc{98.03} & {\textbf{96.12}} & \bc{93.44} & {58.85} & {59.56} \\
GL-LD-SI & {96.78} & {89.76} & {85.67} & {51.38} & {55.11} \\
\midrule
\textbf{Mean} & 96.22 & 92.395 & 89.723 & 60.549 & 60.166 \\
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
For the aspects \emph{(ii)}, \emph{(iii)}, and \emph{(iv)}, we conducted experiments on AU classification with AlexNet and Vgg-vd16. We used features computed with the FGAI (H-GL-SI) from both the original and the down-sampled data, in a $5$-fold cross validation. Results of the analysis are summarized in Fig.~\ref{fig:layers}~(a)-(b), showing the average AuC for AU detection.
For AlexNet, we notice that for the original data, the highest score is obtained with the max pooling layer Pool5 ($99.79\%$), followed by the convolutional layers Conv4 ($99.71\%$) and Conv5 ($98.6\%$); then, we observe a drop in the recognition rate in the fully connected layers, particularly noticeable for FC7 ($90.7\%$). This decrease, in concordance with the discriminative analysis of Fig.~\ref{fig:descri_power_alexnet}, can be explained by the fact that the deeper, final layers of the model are more ImageNet class-specific.
A similar behavior is observed for Vgg-vd16, which shows its best recognition rate at the Conv$5_1$ layer ($99.3\%$).
With respect to the proposed down-sampling scheme of the data, we first notice the same moderate decrease pattern in both AlexNet and Vgg-vd16. Second, in AlexNet, the drop in the recognition rate is more noticeable in the fully connected layers, particularly for FC7 (10\%), while it is quite minor in the others (less than 2\%). The same holds for Vgg-vd16 in the first four layers, while the last layer (FC7) shows a relatively larger decrease of $5\%$. These observations led us to employ the down-sampled 3D face scans, thus allowing a significant reduction of the computational time. \NW{Tested on an Intel i7-5500, 2.4 GHz, 16GB RAM, 64-bit machine, we found that computation of the GAIs runs 11 times faster on the compressed data than on its original counterpart. This demonstrates the significant gain in efficiency that our down-sampling scheme affords.}
An overall comparison of the classification rate across the different layers in the two architectures seems to be in favor of AlexNet, noticeably at the non-fully connected layers.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{Figures/AU_OvsC_Alex.png}
\includegraphics[width=0.48\linewidth]{Figures/AU_OvsC_Vgg.png}
\caption{Bosphorus dataset, AU detection: AuC values comparison for features extracted at different layers of AlexNet (left), and Vgg-vd16 (right). Comparison between the results obtained with the original (blue bar) and compressed, \emph{i.e.}, down-sampled (yellow bar) data is also shown.}
\label{fig:layers}
\end{figure}
Finally, we investigated the extent to which the proposed early-fused representation impacts classification accuracy (aspect \emph{(v)} above). To this end, we conducted a series of experiments to show that the performance improvement of our method emanates from the proposed fusion scheme rather than from the use of the pre-trained AlexNet and Vgg-vd16 networks. We investigated this aspect with experiments on facial expression classification.
\SB{Each single GAI, derived from the compressed face model and corresponding to one of the K, H, GL, LD, and SI descriptors, is replicated over the three channels of the network input. The features extracted from the networks are then used as the SVM training and testing data.}
\NW{From this initial set of GAIs, we also generated an augmented set using horizontal flips, rotations, and additive white Gaussian noise. We did this to enlarge the number of GAI samples per expression, and thus compensate for any potential SVM over-fitting effect that might penalize the single-GAI performance in favor of the FGAIs.} Afterwards, we performed the classification tests in $10$-fold cross-validation for each GAI.
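A sketch of this augmentation step follows (the rotation angle and noise level are illustrative choices, not the exact parameters used in our experiments):

```python
import numpy as np
from scipy.ndimage import rotate

def augment(gai, rng):
    """Augmentations used to enlarge the single-GAI training set:
    horizontal flip, small in-plane rotation, additive Gaussian noise."""
    flipped = gai[:, ::-1]
    rotated = rotate(gai, angle=10, reshape=False, mode="nearest")
    noisy = gai + rng.normal(scale=0.01, size=gai.shape)
    return [flipped, rotated, noisy]

rng = np.random.default_rng(0)
gai = rng.random((240, 240))
augmented = augment(gai, rng)
print(len(augmented))  # 3 extra samples per GAI
```

Each GAI thus yields several extra training samples, leveling the amount of data seen by the single-GAI and FGAI pipelines.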
Table~\ref{tab:bosphorus-features} reports the best classification rate obtained with a single descriptor in each layer for AlexNet and Vgg-vd16, together with the classification rate obtained with the FGAI (H-GL-SI). As can be clearly noticed, the significant performance gap brings evidence that learning from each descriptor individually is less effective than learning from the data fused with our proposed fusion scheme.
\begin{table}[ht]
\caption{Bosphorus dataset: Classification rates obtained with single GAI versus the FGAI:H-GL-SI. }
\label{tab:bosphorus-features}
\centering
\small
\begin{tabular}{lcccccc}
\toprule
{AlexNet} & { K} & { H} & { GL} & { LD} & { SI} & {H-GL-SI} \\
\midrule
\textbf{Conv4} & 75.1 & 81.2 & 72.3 & 82.1 &88.5 & \textbf{98.27} \\
Conv5 & 71.5 & 76.67 & 70.4 & { 79.44} & {82.79} & {\textbf{94.56}} \\
pool5 & 73.1 & {84.12} & 72.2 & 81.88 & 78.85 & {\textbf{96.78}} \\
FC6 & 45.1 & {52.2} & 49.8 & 52.7 & 53.9 & {\textbf{53.56}} \\
FC7 & 39.2 & 40.16 & 40.02 & {41.48} & 41.77 & {\textbf{46.22}} \\
\bottomrule
\\
{Vgg-vd16} \\
\midrule
\textbf{Conv$5_1$} & 80.22 & {83.48} & 78.33 & 81.17 & 80.54 & \textbf{98.16} \\
Conv$5_2$ & 70.56 & 71.22 & {74.85} & 72.58 & 70.93 & {\textbf{92.5}} \\
Conv$5_3$ & 73.35 & 70.49 & 72.11 & 69.98 & 70.04 & {\textbf{91.22}} \\
FC6 & 52.48 & 54.33 & 50.62 & {58.95} & 56.42 & {\textbf{65.78}} \\
FC7 & 39.51 & {44.08} & 40.3 & 43.5 & 42.71 & {\textbf{57.89}} \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth,height=5cm]{Figures/Bosph_AUC_compare.png}
\caption{Bosphorus dataset: AU detection measured as AuC for each of the 24 AUs. \SB{Results for 12 AUs are reported in the upper plot, and for the remaining 12 in the lower one (please note that AUs are indicated with the abbreviations used in the Bosphorus dataset, where LF and UF stand, respectively, for lower- and upper-face, while R and L indicate the right and left parts of the face).}
Comparison with the following state-of-the-art methods: 3DLBPs~\cite{Huang06}, LNBPs~\cite{Sandbach12}, LABPs~\cite{Sandbach2012}, LDPQs~\cite{Ville2008}, LAPQs~\cite{Ville2008}, LDGBPs~\cite{Zhang05}, LAGBPs~\cite{Zhang05}, LDMBPs~\cite{Yang10}, LAMBPs~\cite{Yang10}.}
\label{fig:au-results}
\end{figure*}
\subsubsection{Action Unit Classification}\label{sect:result}
\SB{In this part, we assessed the extent to which our method can correctly classify each of the 24 AUs.}
We compared our results with a number of existing methods, adopting the same protocol \NW{(\emph{i.e.}, $5$-fold cross validation, $3,838$ AU scans collected from $105$ subjects)}.
The state-of-the-art methods we compared to include the 3D Local Binary Patterns (3DLBPs)~\cite{Huang06}, Local Azimuthal Binary Patterns (LABPs)~\cite{Sandbach2012}, Local Depth Phase Quantisers (LDPQs)~\cite{Ville2008}, Local Azimuthal Phase Quantisers (LAPQs)~\cite{Ville2008}, Local Depth Gabor Binary Patterns (LDGBPs)~\cite{Zhang05}, Local Azimuthal Gabor Binary Patterns (LAGBPs)~\cite{Zhang05}, Local Depth Monogenic Binary Patterns (LDMBPs), and Local Azimuthal Monogenic Binary Patterns (LAMBPs)~\cite{Yang10}.
Figure~\ref{fig:au-results} shows the AuC results for each AU individually, obtained with our proposed method, using the combination (H-GL-SI), together with the state-of-the-art methods listed above.
\SB{Computing the average AuC over all the AUs,} it can be observed that our proposed feature representation scheme achieves the highest score of $99.79\%$, outperforming the current state-of-the-art AuC of $97.2\%$, scored by the Local Depth Gabor Binary Patterns (LDGBPs) feature proposed in~\cite{Sandbach2012}.
\begin{table}[htb]
\caption{Bosphorus dataset: Classification rate obtained with the FGAI:H-GL-SI compared with the state-of-the-art methods.}
\label{tab:bosph:FE}
\center
\begin{tabular}{lc}
\toprule
{Method} & {Accuracy} \\
\hline
MS-LNPs~\cite{li2012a} & {75.83} \\
GSR~\cite{Yang15} & {77.50} \\
iPar–CLR~\cite{LI2015} & {79.72} \\
DF-CNN svm~\cite{lin2017} & {80.28} \\
Original-AlexNet & \textbf{{98.27}} \\
Compressed-AlexNet & \textbf{{93.29}} \\
Original-Vgg-vd16 & \textbf{{98.16}} \\
Compressed-Vgg-vd16 & \textbf{{92.38}} \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Facial Expression Classification}
\NW{For facial expression recognition, we adopted the same experimental protocol reported in the recent state-of-the-art methods~\cite{li2012a,Yang15,LI2015,lin2017} ($10$-fold cross validation, expression scans collected from $60$ subjects randomly selected from the $65$ individuals).
Results obtained with the (H-GL-SI) combination are reported in Table~\ref{tab:bosph:FE} for both original and compressed face scans.
Remarkably, the proposed scheme, for both original and compressed face scans, outperforms the current state-of-the-art solutions by a significant margin of $18\%/13\%$ and $17\%/12\%$ for the learned features of AlexNet and Vgg-vd16, respectively.}
\subsection{BU-4DFE}
\NW{The BU-4DFE dataset~\cite{yin2008} contains $101$ subjects, divided into $43$ males and $58$ females. Each subject is captured in six different expressions: \textit{Anger}, \textit{Disgust}, \textit{Fear}, \textit{Happy}, \textit{Sad} and \textit{Surprise}. Each video contains a sequence of 3D face scans captured at a rate of $25$~fps over a duration of $4$ seconds. The expression dynamics follow the sequence \textit{neutral}, \textit{onset}, \textit{apex}, \textit{offset}, \textit{neutral} (see Fig.~\ref{fig:bu-sample}.a for an example). All the experiments reported in this section have been conducted with the down-sampled version of the BU-4DFE scans (\emph{i.e.}, compression ratio equal to 3.5)}.
Figure~\ref{fig:bu-sample}.b depicts samples of the GAI for one subject.
\begin{figure}[b]
\centering
\begin{minipage}[c]{0.1\linewidth}
\footnotesize{(a)}
\end{minipage}
\begin{minipage}[c]{0.87\linewidth}
\includegraphics[width=\linewidth, height=1cm]{Figures/bu4d_samples.jpg}
\end{minipage}
\begin{minipage}[c]{0.1\linewidth}
\footnotesize{(b)}
\end{minipage}
\begin{minipage}[c]{0.87\linewidth}
\includegraphics[width=\linewidth, height=1.1cm]{Figures/BU_sample1.png}
\end{minipage}
\caption{BU-4DFE dataset: (a) Sample 3D frames selected from the \textit{neutral}, \textit{onset}, \textit{apex}, \textit{offset}, and \textit{neutral} parts of a sequence labeled as ``Happy''; (b) GAIs extracted from sample frames of one subject with reduced resolution to $240 \times 240$. From left: Gaussian curvature (K), mean curvature (H), local depth (LD), gray level (GL) and shape index (SI).}
\label{fig:bu-sample}
\end{figure}
\subsubsection{Ablative Analysis}
We conducted the same experiments described for the Bosphorus dataset (see Sect.~\ref{sec:ablative}) investigating the performance of the different FGAIs, the network layers, and the impact of the early fusion.
Figure~\ref{fig:descri_power_alexnet_bu4d} depicts the criterion $\mathcal{J}$ for the top $500$ features corresponding to the ten FGAIs across the AlexNet layers. Here we found that the FGAI with the H-LD-SI combination exhibits the highest discrimination power. Layer-wise, as with the Bosphorus dataset, Conv4, Conv5 and Pool5 show better discrimination capacity than FC6 and FC7.
\begin{figure}[htb]
\centering
\begin{minipage}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_conv4_bu4d.png}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_conv5_bu4d.png}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_pool5_bu4d.png}
\end{minipage}
\begin{minipage}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_fc6_bu4d.png}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_FGAIS_alex_fc7_bu4d.png}
\includegraphics[width=\linewidth]{Fig_J_FGAI/fig_J_alex_bu4d.png}
\end{minipage}
\caption{BU-4DFE dataset: Ranked $\mathcal{J}$ of Eq.~\eqref{equ:descrit} plotted for the top $500$ features for each FGAI in the AlexNet layers.}
\label{fig:descri_power_alexnet_bu4d}
\end{figure}
\NW{Results of the experiment assessing the performance of the FGAIs individually for facial expression classification with AlexNet are reported in Table~\ref{tab:combBU}-(A). Here, we considered the six expressions plus the neutral expression, whose data can be derived from the video sequences, and proceeded with a 5-fold cross validation.
We can observe that all the FGAIs produce similarly high classification rates at Conv4; the same can be said for Pool5 and Conv5, though with slightly lower performance. The classification rate drops successively at the FC6 and FC7 layers.
Overall, the FGAI with the H-LD-SI combination achieved the best performance across the different layers, which concords with the discrimination analysis previously reported in Fig.~\ref{fig:descri_power_alexnet_bu4d}.
The classification rates obtained with Vgg-vd16 (Table~\ref{tab:combBU}-(B)) indicate that the FGAI H-LD-SI has the best performance, confirming the results of the discrimination analysis of Fig.~\ref{fig:descri_power_alexnet_bu4d}. Based on these findings, we considered H-LD-SI as the best FGAI, and utilized it in the following experiments.}
\begin{table}[htb]
\caption{BU-4DFE dataset: Classification rate using the ten different FGAIs for AlexNet~(a) and Vgg-vd16~(b).}\label{tab:combBU}
\begin{minipage}[h]{0.05\linewidth}
(A)
\end{minipage}
\begin{minipage}[h]{0.43\linewidth}
\begin{tabular}{lccccc}
\toprule
{FGAI} & {Conv4} & {Conv5} & {Pool5} & {FC6} & {FC7} \\
\midrule
K-H-GL & {89.98} &{\textbf{87.84}} & {{89.84}} & {71.2} & {\textbf{54.05}} \\
K-H-LD & {89.95} & {85.02} & \bc{89.95} & {67.02} & {30.16} \\
K-H-SI & {88.27} & {81.66} & {88.02} & {62.34} & {18.48} \\
K-GL-LD & {\bc{90.09}} & {86.84} & {89.8} & {70.77} & {40.98} \\
K-GL-SI & {88.34} & {85.88} & {87.84} & {69.66} & {27.84} \\
K-LD-SI & {86.88} & {83.63} & {86.84} & {66.77} & {25.66} \\
H-GL-LD & \bc{90.09} & \bc{87.77} & {89.73} & {{71.84}} & {42.23} \\
H-GL-SI & {88.48} & {86.27} & {87.7} & \bc{71.34} & {27.05} \\
H-LD-SI & \textbf{91.34} & {86.8} & \textbf{90.02} & \textbf{72.3} & \bc{48.09} \\
GL-LD-SI & {88.56} & {82.48} & {86.48} & {59.52} & {24.2} \\
\midrule
\textbf{Mean} & 90.03 & 87.34 & 89.89 & 71.27& 33.87 \\
\bottomrule
\end{tabular}
\end{minipage}
\vspace{0.2cm}
\begin{minipage}[h]{0.05\linewidth}
(B)
\end{minipage}
\begin{minipage}[h]{0.43\linewidth}
\begin{tabular}{lccccc}
\toprule
{FGAI} & {Conv$5_1$} & {Conv$5_2$} & {Conv$5_3$} & {FC6} & {FC7} \\
\midrule
K-H-GL & {89.04} & {86.83} & \textbf{89.01} & {70.26} & {{33.41}} \\
K-H-LD & {89.01} & {84.08} & {88.09} & {66.08} & {29.22} \\
K-H-SI & {87.33} & {80.72} & {87.08} & {61.4} & {17.54} \\
K-GL-LD & {\bc{89.15}} & {85.9} & {88.86} & {69.83} & {40.04} \\
K-GL-SI & {87.4} & {84.94} & {86.9} & {68.72} & {26.9} \\
K-LD-SI & {85.94} & {82.69} & {85.9} & {65.83} & {24.72} \\
H-GL-LD & \bc{89.15} & \bc{86.9} & {88.79} & {70.36} & \bc{41.29} \\
H-GL-SI & {87.54} & {85.33} & {86.76} & \bc{70.4} & \textbf{51.11} \\
H-LD-SI & \textbf{89.81} & \textbf{87.86} & \bc{88.9} & \textbf{70.9} & {39.15} \\
GL-LD-SI & {87.4} & {81.54} & {85.54} & {58.58} & {23.26} \\
\midrule
\textbf{Mean} & 89.15 & 86.83 & 87.58 & 68.11 & 32.66\\
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
\NW{The comparison between the single GAIs and the FGAI with the H-LD-SI combination across the different layers of the AlexNet and Vgg-vd16 networks is reported in Table~\ref{tab:result2-BU4D}. These findings corroborate the results obtained on the Bosphorus dataset regarding the superiority of our fusion scheme, and further confirm the higher discrimination power of the convolutional and pooling layers compared to the fully connected ones.}
\begin{table}[t]
\caption{BU-4DFE dataset: Classification rates obtained with a single GAI versus FGAI:H-LD-SI for AlexNet and Vgg-vd16.}
\label{tab:result2-BU4D}
\center
\begin{tabular}{lcccccc}
\toprule
{AlexNet} & { K} & { H} & { GL} & { LD} & { SI} & H-LD-SI \\
\midrule
\textbf{Conv4} & 63.4 & 72.1 & 61.9 & 75.9 & 77.33 & \textbf{91.34} \\
Conv5 & 61.1 & 65.3 & {60.21} & 72.25 & 70.19 & {86.80} \\
Pool5 & 62.7 & 70.90 & 61.63 & 74.32 & {73.43} & {90.02} \\
FC6 & 48.12 & 56.81 & 50.38 & 51.40 & {52.01} & {72.30} \\
FC7 & 40.71 & 42.44 & 39.89 & 40.92 & {38.01} & {48.09} \\
\bottomrule
\\
{Vgg-vd16} \\
\midrule
\textbf{Conv5\_1} & 63.24 & 68.93 & 64.14 & {67.55} & 70.02 & \textbf{89.81} \\
Conv5\_2 & 62.87 & 66.34 & 61.98 & 66.17 & {68.44} & {87.86} \\
Conv5\_3 & 63.09 & 67.81 & 64.07 & 67.25 & {69.32} & {88.9} \\
FC6 & 51.04 & 54.11 & 53.38 & 56.39 & {53.01} & {70.90} \\
FC7 & 28.03 & 30.15 & 29.73 & 30.33 & {31.01} & {39.15} \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Facial Expression Classification}
\NW{On BU-4DFE, we evaluated our proposed method in static and dynamic modes in order to compare it with different state-of-the-art solutions. In the static mode, the expression is recognized in each frame, whereas in the dynamic mode the decision on the expression is made upon examining a sequence of frames. For a fair comparison, we summarize the experimental protocols of representative state-of-the-art methods. In this regard, we note that most of the methods used $60$ subjects in a dynamic mode~\cite{yin2008,SANDBACH2012-IVC,Berretti2013,Amor2014,Xue2015,Zhen2016-ICPR,Yao2018,Zhen2018}, while a few considered all the subjects~\cite{Fang2011-ICCVW,real2013,meguid2018,Dapogny2018}.
For the first group, the experimental protocols show some differences; for example, most of the methods adopted $10$-fold cross-validation, apart from~\cite{SANDBACH2012-IVC}, where $6$-fold cross-validation was used. Some methods considered the whole video sequence~\cite{Berretti2013,Amor2014,Xue2015,Zhen2016-ICPR,Zhen2018}, while others used frames sampled with a sliding window~\cite{yin2008,real2013}.}
\NW{Methods in the second group adopted a dynamic mode~\cite{Fang2011-ICCVW,real2013} or a static mode~\cite{meguid2018,Dapogny2018}. In~\cite{Fang2011-ICCVW}, $10$-fold cross-validation was used on the full sequence, whereas in~\cite{real2013} a $15$-frame sequence starting from the first onset frame was considered, and the cross-validation scheme was not reported. In~\cite{Dapogny2018}, a $5$-fold cross-validation protocol was considered with seven classes rather than six (neutral plus six other expressions). The samples in the classes were obtained by selecting $8{,}219$ frames from all the videos, but the way in which the frames were selected was not reported.}
\NW{In our static mode experimentation, we considered seven classes, \emph{i.e.}, the neutral expression plus the six other expressions. Note that even though the neutral frames are supposed to appear at the beginning and the end of each video sequence, following the order neutral-onset-apex-offset-neutral, we found that this temporal pattern is violated in several samples; we therefore selected the neutral frames manually. For the other expressions, we selected $10$ frames around the $80^{th}$ frame, estimated to be the apex frame, and performed a quality control check to ensure the validity of the selected frames. Classification was performed using $5$-fold cross-validation, which makes our setting comparable to~\cite{Dapogny2018} and~\cite{meguid2018}.
Table~\ref{tab:stat_expression-comparison} reports the results obtained with our method together with competitive state-of-the-art methods, namely the method of Meguid et al.~\cite{meguid2018} and two methods of Dapogny et al.~\cite{Dapogny2018}. The first (method-1) is their reported baseline, which employs geometric and HOG descriptors with a standard random forest classifier; the second (method-2) employs geometric descriptors and CNN features with a neural forest classifier}. We notice that our proposed method outperforms the best state-of-the-art solution by a large margin of 17\% and 15\% for the AlexNet and Vgg-vd16 versions, respectively.
\begin{table}[htb]
\caption{BU-4DFE dataset: Classification rate obtained with the FGAI:H-LD-SI compared with state-of-the-art methods.}
\label{tab:stat_expression-comparison}
\center
\begin{tabular}{lc}
\toprule
{Method} & {Accuracy} \\
\hline
Abd El Meguid et al.~\cite{meguid2018} & {73.10} \\
Dapogny et al.~\cite{Dapogny2018}-1 & {72.80} \\
Dapogny et al.~\cite{Dapogny2018}-2 & {74.00} \\
compressed-AlexNet & \textbf{{91.34}} \\
compressed-Vgg-vd16 & \textbf{{89.81}} \\
\bottomrule
\end{tabular}
\end{table}
\NW{For the dynamic mode experimentation, we considered $60$ subjects and $10$-fold cross-validation; the decision on the expression of a given frame $i$ is made using a sliding temporal window of size $6$. The classification is performed by computing the histogram of the expressions in the moving window and selecting the most plausible expression by majority voting. This setting makes our protocol close to its counterparts in~\cite{Berretti2013,Amor2014,Xue2015,Zhen2016-ICPR}. A major difference remains, however: unlike these methods, our approach does not employ proper temporal features (derived from a frame sequence), even though the majority voting acts on chronologically ordered frames. Thus our method does not fully profit from the dynamic aspect of the data, which puts it at a disadvantage with respect to the other dynamic methods.
Table~\ref{tab:bu-4dfe} reports the previous methods, their experimental protocols, and their results compared to those obtained in this paper. Our method ranks second, with a classification rate of $95\%$, slightly below the key-frame-based method of Zhen et al.~\cite{Zhen2018} ($95.13\%$). Compared with methods adopting similar protocols, our approach achieved the best performance.}
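The windowed majority vote described above can be sketched in a few lines of Python; this is an illustrative fragment with hypothetical per-frame labels, not the code used in our experiments:

```python
from collections import Counter

def windowed_vote(frame_labels, win=6):
    """Assign each frame the majority label among the `win` most recent
    per-frame predictions (chronologically ordered)."""
    out = []
    for i in range(len(frame_labels)):
        window = frame_labels[max(0, i - win + 1): i + 1]
        # Counter.most_common(1) returns the most frequent label;
        # ties resolve by first occurrence in the window.
        out.append(Counter(window).most_common(1)[0][0])
    return out

# Hypothetical noisy per-frame predictions for a 'happy' sequence.
frames = ["neutral", "happy", "sad", "happy", "happy", "happy", "happy"]
print(windowed_vote(frames))
```

The vote smooths isolated misclassifications (the spurious "sad" frame above) without using any genuinely temporal feature, which is precisely the limitation noted in the text.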
\begin{table}[ht]
\caption{BU-4DFE dataset: Comparison with state-of-the-art solutions in dynamic mode. The ``Protocol'' column reports the experimental setting with the following symbols: number of expressions (\#E), number of subjects (\#S), number of folds (\#-CV), and frames used (Full-seq., Key-frames, Win=\#).}
\label{tab:bu-4dfe}
\begin{tabular}{l|l|c}
\toprule
\multicolumn{1}{l|}{\textbf{Method}} & \multicolumn{1}{l|}{\textbf{Protocol}} & \multicolumn{1}{l}{\textbf{Acc.} (\%)} \\ \midrule
Sun et al.~\cite{yin2008}, 2008 & --, 6E, 60S, 10-CV, Win=6 & 90.44 \\ \hline
Sandbach et al.~\cite{SANDBACH2012-IVC}, 2012 & D, 6E, 60S, 6-CV, Win=4 & 64.60 \\ \hline
Fang et al.~\cite{Fang2011-ICCVW}, 2012 & D, --, 100S, 10-CV, -- & 74.63 \\ \hline
Reale et al.~\cite{real2013}, 2013 & D, 6E, 100S, -, Win=15 & 76.90 \\ \midrule
Berretti et al.~\cite{Berretti2013}, 2013 & D, --, 60S, 10-CV, Full-seq. & 79.40 \\ \hline
Ben Amor et al.~\cite{Amor2014}, 2014 & D, --, 60S, 10-CV, Full-seq. & 93.21 \\ \hline
Xue et al.~\cite{Xue2015}, 2015 & D, --, 60S, 10-CV, Full-seq. & 78.80 \\ \hline
Zhen et al.~\cite{Zhen2016-ICPR}, 2016 & D, --, 60S, 10-CV, Full-seq. & 94.18 \\ \hline
Ours & D, --, 60S, 10-CV, Full-seq. & \bc{95.00} \\
\midrule
Yao et al.~\cite{Yao2018}, 2018 & D, --, 60S, 10-CV, Key-frames & 87.61 \\ \hline
Zhen et al.~\cite{Zhen2018}, 2017 & D, --, 60S, 10-CV, Key-frames & \bf{95.13} \\
\bottomrule
\end{tabular}
\end{table}
\section{Discussion and Conclusions}\label{sect:conclusions}
\NW{In this paper, we presented a novel paradigm for learning effective feature representations from 2D texture and 3D geometric data. We proposed an original scheme for mapping local geometric descriptors, computed on the 3D mesh surface, onto their corresponding textured images. The resulting geometrically augmented images (GAIs) are then combinatorially selected to generate multiple three-channel images, forming fused geometrically augmented images (FGAIs). This newly proposed representation is employed with the standard AlexNet and Vgg-vd16 CNNs in a transfer learning mode to extract discriminative features. In addition, our mesh data down-sampling scheme achieves significant computational efficiency, with a gain of about an order of magnitude, without excessively compromising the performance of the extracted features.}
\NW{The extensive experimentation on AU and facial expression recognition conducted on two public datasets evidenced the high discriminative capacity of the features learned from the FGAIs, and the competitive performance of their counterparts derived from down-sampled data. It also characterized the behavior of the extracted features across the different layers of the CNN models.
We also conducted a comprehensive comparison with state-of-the-art facial expression methods in static and dynamic settings.
In the static mode, our method achieved a significant boost in performance, exceeding $18\%$ and $17\%$ on the Bosphorus and the BU-4DFE datasets, respectively.
In the dynamic mode, and despite being devoid of temporal features, our method reached a slightly lower performance than the best state-of-the-art method, yet showed competitive or better performance compared with methods adopting similar experimental protocols.}
\NW{We believe that several aspects of our method contribute to the significant boost in performance. First, the fusion of 2D and 3D information, which yields a discriminative capacity for the face largely above that of its individual 2D and 3D counterparts~\cite{soltana2010}.
Second, the early-level fusion in our method, whereby 3D geometric and 2D photometric descriptors are fused at the pixel level. This is in line with previous findings confirming that, among the four levels of fusion, namely \textit{data}, \textit{feature}, \textit{score}, and \textit{decision}, low-level fusion (data and feature) performs better than its higher-level counterparts (score and decision)~\cite{ross2003}.
Third, by performing this early fusion within a CNN, we implicitly fuse learned face representations emanating from texture and shape information. This can be viewed as a continuation of the fusion paradigm proposed by Jung et al.~\cite{jung2015} in their 2D facial expression recognition work, where they reported that combining geometric and deep learned texture representations significantly improves performance. Our method pushes this paradigm further by fusing both learned shape and texture information.}
\NW{As future work, several directions can be explored. The first is to investigate the effect of changing the permutations of the three GAIs on the performance, and to check whether a specific FGAI permutation performs better than others. It would also be tempting to go beyond the three-GAI fusion constraint imposed by the architecture of the input layer in AlexNet and Vgg-vd16; for example, one could fuse four or five GAIs at the input level. This, however, requires designing and training a new architecture, which would be time and resource demanding, and does not align with our advocacy of utilizing pre-trained architectures and profiting from their potential.}
\NW{The discrimination capacity results of the features obtained at the different layers suggest using the criterion of Eq.~\eqref{equ:descrit} as a feature selection tool, which at the same time allows dimensionality reduction. Even though the features taken at the layers are not necessarily expected to have a Gaussian distribution, bearing in mind the non-linear activation functions across the networks and the output sparsity in the layers~\cite{lorieul2016}, we believe it would be worth investigating this criterion as a dimensionality reduction tool.
Finally, we plan to enlarge the scope of our framework and extend it to other categories of mesh surfaces for medical and remote sensing applications.}
\bibliographystyle{IEEEtran}
\section*{\large{Supplemental Material}}
\section{Optimality of the conventional strategy}
In this section, we show that the strategy presented in the example paragraph of the main text is optimal among conventional strategies of quantum metrology \cite{sm2006Giovannetti10401}.
Let
\begin{eqnarray}\label{sm-channel}
\Lambda_\theta(t):=e^{\mathcal{L}_\theta t}
\end{eqnarray}
be the quantum channel associated with the generalized amplitude damping process. To estimate the parameter $\theta$ characterizing this channel, a conventional strategy \cite{sm2006Giovannetti10401} is to send $N$ probes through $N$ parallel channels $\Lambda_\theta(t)$, measure them at the output, and use an inference rule $\hat{\theta}(\bm{x})$ to extract the value of $\theta$ from the measurement result $\bm{x}$ (see Fig. \ref{sm-fig1}).
\begin{figure}[htbp]
\includegraphics[width=0.4\textwidth]{sm-fig1}
\caption{General scheme for conventional quantum metrology. $N$ probes, prepared in an initial state, are sent through $N$ parallel channels $\Lambda_\theta(t)$. A measurement is performed on the final state, from which the parameter $\theta$ is estimated via an inference rule $\hat{\theta}$.}
\label{sm-fig1}
\end{figure}
Therefore, a general scheme of conventional quantum metrology consists of three ingredients: an input state of $N$ probes, a measurement at the output, and an inference rule. In the following, we do not impose any restriction on these ingredients. That is, the input state can be an arbitrary, possibly highly entangled, state; the measurement is a general, not necessarily local, POVM; and the inference rule can be biased or unbiased. Additionally, $t$ appearing in Eq.~(\ref{sm-channel}) can take any non-negative value; it need not fulfill the condition assumed in the main text, namely that the evolution time $t$ be much longer than the relaxation time of the channel.
A well-known measure quantifying the deviation of the estimator $\hat{\theta}(\bm{x})$ from the true value $\theta$ reads
\cite{sm1994Braunstein3439}
\begin{eqnarray}
\Delta\theta(\bm{x}):=\frac{\hat{\theta}(\bm{x})}{\abs{d\expt{\hat{\theta}}/d\theta}}
-\theta.
\end{eqnarray}
Here, $\expt{\hat{\theta}}$ denotes the statistical average of $\hat{\theta}(\bm{x})$ over potential outcomes $\bm{x}$. Accordingly, the estimation error can be defined as \cite{sm1994Braunstein3439}
\begin{eqnarray}
\delta\theta:=\sqrt{\expt{\Delta\theta^2}},
\end{eqnarray}
i.e., the square root of the statistical average of $\Delta\theta^2(\bm{x})$ over potential outcomes $\bm{x}$. In particular, if $\hat{\theta}(\bm{x})$ is unbiased, i.e., $\expt{\hat{\theta}}=\theta$, we have
\begin{eqnarray}
\Delta\theta(\bm{x})=\hat{\theta}(\bm{x})-\expt{\hat{\theta}},
\end{eqnarray}
indicating that $\delta\theta$ is simply the standard deviation,
\begin{eqnarray}\label{sec1:df-error}
\delta\theta=\sqrt{\textrm{Var}(\hat{\theta})},
\end{eqnarray}
for an unbiased estimator $\hat{\theta}(\bm{x})$. This fact has been used in the main text.
Using the quantum Cram\'{e}r-Rao inequality \cite{sm1994Braunstein3439}, we have that the estimation error $\delta\theta$ is lower bounded by
\begin{eqnarray}\label{Q-CR}
\delta\theta\geq\frac{1}{\sqrt{F\left[\Lambda_\theta^{\otimes N}(t)[\rho(0)]\right]}}.
\end{eqnarray}
Here, $F$ is the quantum Fisher information (QFI) and $\Lambda_\theta^{\otimes N}(t)[\rho(0)]$ is the output state of the $N$ probes, where $\rho(0)$ denotes the input state. To evaluate the quantity $F\left[\Lambda_\theta^{\otimes N}(t)[\rho(0)]\right]$, we
introduce two amplitude damping channels
\cite{sm2010Nielsen}, $\Lambda_i(t)$, $i=0,1$, transforming the Bloch vector as
\begin{eqnarray}
\Lambda_0(t):\ \ (r_x,r_y,r_z)\rightarrow(e^{-\frac{t}{2}}r_x,
e^{-\frac{t}{2}}r_y,1-e^{-t}+e^{-t}r_z),
\end{eqnarray}
and
\begin{eqnarray}
\Lambda_1(t):\ \ (r_x,r_y,r_z)\rightarrow(e^{-\frac{t}{2}}r_x,
e^{-\frac{t}{2}}r_y,e^{-t}-1+e^{-t}r_z),
\end{eqnarray}
respectively.
Noting that the effect of $\Lambda_\theta(t)$ is
\begin{eqnarray}
(r_x,r_y,r_z)\rightarrow
\left(e^{-\frac{t}{2}}r_x,
e^{-\frac{t}{2}}r_y,(2\theta-1)(1-e^{-t})+e^{-t}r_z\right),
\end{eqnarray}
we have
\begin{eqnarray}\label{eq:decom}
\Lambda_\theta(t)=\theta\Lambda_0(t)+(1-\theta)\Lambda_1(t).
\end{eqnarray}
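As an aside, Eq.~(\ref{eq:decom}) can be checked numerically on the Bloch-vector actions given above; the following Python sketch (an illustration under the stated maps, not part of the derivation) verifies the convex decomposition for arbitrary parameters:

```python
import numpy as np

t, theta = 0.7, 0.3
s, d = np.exp(-t / 2), np.exp(-t)  # transverse and longitudinal decay factors

def lam0(r):  # amplitude damping toward the north pole of the Bloch sphere
    return np.array([s * r[0], s * r[1], 1 - d + d * r[2]])

def lam1(r):  # amplitude damping toward the south pole
    return np.array([s * r[0], s * r[1], d - 1 + d * r[2]])

def lam_theta(r):  # generalized amplitude damping channel Lambda_theta(t)
    return np.array([s * r[0], s * r[1], (2 * theta - 1) * (1 - d) + d * r[2]])

r = np.array([0.2, -0.5, 0.4])  # an arbitrary Bloch vector
mix = theta * lam0(r) + (1 - theta) * lam1(r)
print(np.allclose(mix, lam_theta(r)))  # True: the mixture reproduces Lambda_theta
```

The transverse components agree trivially, while the longitudinal component gives $\theta(1-e^{-t})-(1-\theta)(1-e^{-t})+e^{-t}r_z=(2\theta-1)(1-e^{-t})+e^{-t}r_z$, as claimed.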
Equation (\ref{eq:decom}) enables us to rewrite $\Lambda_\theta(t)$ as a $\theta$-independent quantum channel acting on a larger input space,
\begin{eqnarray}\label{eq:big-map}
\Lambda_\theta(t)[\rho]=\Phi(t)[\rho\otimes\rho_\theta].
\end{eqnarray}
Here, $\rho_\theta=\textrm{diag}(\theta,1-\theta)$ is the steady state, and $\Phi(t)$ is defined as
\begin{eqnarray}
\Phi(t)[\rho\otimes\sigma]:=\sum_{i=0,1}\Lambda_i(t)\otimes\mathcal{E}_i
[\rho\otimes\sigma]
=\sum_{i=0,1}\Lambda_i(t)[\rho]\otimes\mathcal{E}_i
[\sigma],
\end{eqnarray}
where $\mathcal{E}_i[\sigma]:=\bra{i}\sigma\ket{i}$, $i=0,1$.
Using Eq.~(\ref{eq:big-map}), we have
\begin{eqnarray}
F\left[\Lambda_\theta^{\otimes N}(t)[\rho(0)]\right]=
F\left[\Phi^{\otimes N}(t)[\rho(0)\otimes\rho_\theta^{\otimes N}]\right]
\leq F\left[\rho(0)\otimes\rho_\theta^{\otimes N}\right]
=F\left[\rho_\theta^{\otimes N}\right],
\end{eqnarray}
where we have used the monotonicity of the QFI under parameter-independent quantum channels \cite{sm1994Braunstein3439}. Noting that $F\left[\rho_\theta^{\otimes N}\right]=NF\left[\rho_\theta\right]$ and $F[\rho_\theta]=\frac{1}{\theta(1-\theta)}$, we further have
\begin{eqnarray}\label{eq:bound-F}
F\left[\Lambda_\theta^{\otimes N}(t)[\rho(0)]\right]\leq\frac{N}{\theta(1-\theta)}.
\end{eqnarray}
Substituting Eq.~(\ref{eq:bound-F}) into Eq.~(\ref{Q-CR}) yields
\begin{eqnarray}
\delta\theta\geq\sqrt{\frac{\theta(1-\theta)}{N}},
\end{eqnarray}
indicating that the strategy presented in the main text is optimal among conventional strategies of quantum metrology.
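As an illustration (not part of the derivation), the bound is saturated by a simple classical strategy: measure each of the $N$ probes in the steady state $\rho_\theta=\textrm{diag}(\theta,1-\theta)$ in the computational basis, so the outcomes are Bernoulli($\theta$), and estimate $\theta$ by the sample mean. A Monte Carlo sketch in Python:

```python
import random, math

random.seed(0)
theta, N, trials = 0.3, 1000, 2000

# Each trial: measure N probes; outcome 1 occurs with probability theta.
# Estimator: the sample mean, which is unbiased.
estimates = []
for _ in range(trials):
    hits = sum(1 for _ in range(N) if random.random() < theta)
    estimates.append(hits / N)

mean = sum(estimates) / trials
std = math.sqrt(sum((e - mean) ** 2 for e in estimates) / trials)
bound = math.sqrt(theta * (1 - theta) / N)  # quantum Cramér–Rao bound
print(mean, std, bound)  # std matches the bound up to sampling noise
```

The empirical standard deviation of the estimator agrees with $\sqrt{\theta(1-\theta)/N}$, confirming that no conventional strategy can do better than this simple one.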
\section{Proof of formula (8)}
In this section, we present a proof of formula (8) in the main text. Note that $\hat{\theta}(q)=f^{-1}(q/N)$ is an unbiased estimator.
By definition,
\begin{eqnarray}
\delta\theta=\sqrt{\textrm{Var}(\hat{\theta})}.
\end{eqnarray}
Using error propagation theory, we have
\begin{eqnarray}\label{delta}
\delta\theta=\sqrt{\textrm{Var}(q)}/\left(N\frac{\partial f}{\partial\theta}\right).
\end{eqnarray}
Here,
\begin{eqnarray}\label{sec2:var-q}
\textrm{Var}(q)
=\int dq\left(q-N\expt{A}_\theta\right)^2\textrm{Pr}(q),
\end{eqnarray}
where
\begin{eqnarray}\label{sec2:prob}
\textrm{Pr}(q):=\bra{q}\mathrm{tr}_\mathscr{S}\mathcal{E}_T^N
(\rho_\theta\otimes\ket{\phi}\bra{\phi})\ket{q}
\end{eqnarray}
denotes the probability distribution of the pointer reading $q$.
Noting that $\ket{\phi}=\int dp \phi(p)\ket{p}$ and $\mathcal{E}_T^N
(\rho_\theta\otimes\ket{p}\bra{p^\prime})=\left(e^{\mathcal{L}_{p,p^\prime}NT}\rho_\theta
\right)\otimes\ket{p}\bra{p^\prime}$, we can rewrite Eq.~(\ref{sec2:prob}) as
\begin{eqnarray}\label{sec2:prob-new}
\textrm{Pr}(q)=
\frac{1}{2\pi}\iint
dpdp^\prime\phi(p)\phi^*(p^\prime)
e^{i(p-p^\prime)q}\mathrm{tr}_\mathscr{S}(e^{\mathcal{L}_{p,p^\prime}NT}\rho_\theta).
\end{eqnarray}
To compute $\textrm{Pr}(q)$, we need to find an expression for the term $\mathrm{tr}_\mathscr{S}(e^{\mathcal{L}_{p,p^\prime}NT}\rho_\theta)$ appearing in Eq.~(\ref{sec2:prob-new}). Noting that
\begin{eqnarray}
\mathcal{L}_{p,p^\prime}=\mathcal{L}_\theta+T^{-1}\mathcal{K},
\end{eqnarray}
with
\begin{eqnarray}
\mathcal{K}\rho:=-i\left(pA\rho-p^\prime\rho A\right),
\end{eqnarray}
we can interpret $\mathcal{L}_{p,p^\prime}$ as the sum of the ``unperturbed term'' $\mathcal{L}_\theta$ and the perturbation term $\mathcal{K}$. Since $\rho_\theta$ is an eigenstate of the unperturbed term $\mathcal{L}_\theta$, i.e., $\mathcal{L}_\theta\rho_\theta=0$, it must be an approximate eigenstate of $\mathcal{L}_{p,p^\prime}$. Perturbation theory \cite{sm1995Kato} tells us that the difference between such an approximate eigenstate and the associated true eigenstate of $\mathcal{L}_{p,p^\prime}$ is of the order $O(1/T)$. Using this fact as well as noting that $\norm{e^{\mathcal{L}_{p,p^\prime}NT}}=O(1)$, we have
\begin{eqnarray}\label{sec2:Lpp}
\mathrm{tr}_\mathscr{S}(e^{\mathcal{L}_{p,p^\prime}NT}\rho_\theta)=\mathrm{tr}_\mathscr{S} (e^{\lambda NT}\rho_\theta)=e^{\lambda NT},
\end{eqnarray}
up to a term of order $O(1/T)$, which is negligible as $T\gg 1$.
Here, $\lambda$ denotes the corresponding eigenvalue of $\mathcal{L}_{p,p^\prime}$, which can be expressed as a series
\begin{eqnarray}\label{sec2:eigv}
\lambda=\lambda^{(0)}+T^{-1}\lambda^{(1)}+T^{-2}\lambda^{(2)}+\cdots,
\end{eqnarray}
where $\lambda^{(n)}$ denotes its $n$-th order perturbation.
Substituting Eq.~(\ref{sec2:eigv}) into Eq.~(\ref{sec2:Lpp}) gives
\begin{eqnarray}\label{sec2:Lpp2}
\mathrm{tr}_\mathscr{S}(e^{\mathcal{L}_{p,p^\prime}NT}\rho_\theta)=e^{\lambda^{(0)}NT+\lambda^{(1)}N+
\lambda^{(2)}N/T}.
\end{eqnarray}
Here, we have ignored terms of the order $O(N/T^2)$, which are negligible because of $N\leq N_\textrm{max}:=O(T)$, a condition that has been assumed in the main text.
According to perturbation theory \cite{sm1995Kato}, we have
\begin{eqnarray}\label{sec2:0th}
\lambda^{(0)}=0,
\end{eqnarray}
\begin{eqnarray}\label{sec2:1st}
\lambda^{(1)}=\mathrm{tr}_\mathscr{S}(\mathcal{K}\rho_\theta)
=-i(p-p^\prime)\expt{A}_\theta,
\end{eqnarray}
and
\begin{eqnarray}\label{sec2:2nd}
\lambda^{(2)}=-\mathrm{tr}_\mathscr{S}\left[\mathcal{K}\mathcal{S_\theta}\mathcal{K}(\rho_\theta)\right]
=p(p-p^\prime)\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]-p^\prime(p-p^\prime)
\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(\rho_\theta A)],
\label{secondorder}
\end{eqnarray}
with $\mathcal{S}_\theta$ being the pseudoinverse of $\mathcal{L}_\theta$, as defined in the main text. Here, we have used the formulae dealing with perturbations of eigenvalues of linear operators (see page 79 of Ref.~\cite{sm1995Kato}). Note that the original formulae in Ref.~\cite{sm1995Kato} are expressed in terms of linear operators, but here we have reformulated them in terms of superoperators for serving our purpose.
Noting that $\mathcal{S}_\theta$ is a Hermitian map, i.e.,
\begin{eqnarray}
\mathcal{S}_\theta(X)^\dagger=\mathcal{S}_\theta(X^\dagger),
\end{eqnarray}
we have
\begin{eqnarray}\label{sec2:complex-relation}
\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]=\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(\rho_\theta A)]^*.
\end{eqnarray}
Using Eq.~(\ref{sec2:complex-relation}), we can rewrite Eq.~(\ref{sec2:2nd}) as
\begin{eqnarray}\label{sec2:2nd-new}
\lambda^{(2)}=(p-p^\prime)^2\textrm{Re}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]\right)
+
i(p-p^\prime)(p+p^\prime)\textrm{Im}\left(\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right).
\end{eqnarray}
Inserting Eqs.~(\ref{sec2:0th}), (\ref{sec2:1st}), and (\ref{sec2:2nd-new}) into Eq.~(\ref{sec2:Lpp2}), we arrive at the desired expression
\begin{eqnarray}\label{sec2:Lpp-final}
\mathrm{tr}_\mathscr{S}(e^{\mathcal{L}_{p,p^\prime}NT}\rho_\theta)=
e^{-i(p-p^\prime)N\expt{A}_\theta +\frac{N}{T}(p-p^\prime)^2\textrm{Re}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]\right)+
\frac{iN}{T}(p-p^\prime)(p+p^\prime)\textrm{Im}\left(
\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right)}.
\end{eqnarray}
As assumed in the main text, the coordinate representation of $\ket{\phi}$ is a Gaussian with standard deviation $\sigma$. Hence,
\begin{eqnarray}
\phi(p)=\frac{1}{(2\pi\sigma^{\prime2})^{1/4}}e^{-\frac{p^2}{4\sigma^{\prime 2}}},
\end{eqnarray}
where
\begin{eqnarray}
\sigma^\prime=\frac{1}{2\sigma}.
\end{eqnarray}
Using the above expression of $\phi(p)$ and substituting Eq.~(\ref{sec2:Lpp-final}) into Eq.~(\ref{sec2:prob-new}),
we have
\begin{eqnarray}\label{sec2:Pr}
\textrm{Pr}(q)=\frac{1}{2\pi\sqrt{2\pi\sigma^{\prime 2}}}
\iint dpdp^\prime e^{-\frac{p^2+p^{\prime 2}}{4\sigma^{\prime 2}}} e^{i(p-p^\prime)(q-N\expt{A}_\theta)+
\frac{N}{T}(p-p^\prime)^2\textrm{Re}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]\right)+
\frac{iN}{T}(p-p^\prime)(p+p^\prime)\textrm{Im}
\left(\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right)}.\nonumber\\
\end{eqnarray}
Inserting Eq.~(\ref{sec2:Pr}) into Eq.~(\ref{sec2:var-q}) and simplifying the resultant equation by defining new variables
\begin{eqnarray}
x:&=&p-p^\prime,\nonumber\\
y:&=&p+p^\prime,
\end{eqnarray}
we obtain
\begin{eqnarray}\label{sec2:vq-new}
\textrm{Var}(q)=\frac{1}{4\pi\sqrt{2\pi\sigma^{\prime 2}}}\int dq
(q-N\expt{A}_\theta)^2\int dx
e^{-\frac{x^2}{8\sigma^{\prime 2}}}
e^{ix(q-N\expt{A}_\theta)}
e^{\frac{N}{T}x^2\textrm{Re}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]\right)}
\int dy e^{-\frac{y^2}{8\sigma^{\prime 2}}}
e^{\frac{iN}{T}xy\textrm{Im}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]\right)}.\nonumber\\
\end{eqnarray}
Here, the fact $dpdp^\prime=\frac{1}{2}dxdy$ has been used.
Expanding terms $e^{\frac{N}{T}x^2\textrm{Re}\left(\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right)}$ and $e^{\frac{iN}{T}xy\textrm{Im}\left(\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right)}$ appearing in Eq.~(\ref{sec2:vq-new}) as power series
\begin{eqnarray}
e^{\frac{N}{T}x^2\textrm{Re}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]\right)}
&=&\sum_{m=0}^\infty\frac{\left[\frac{N}{T}\textrm{Re}
\left(\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right)
\right]^m}{m!}x^{2m},\nonumber\\
e^{\frac{iN}{T}xy\textrm{Im}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]\right)}
&=&\sum_{n=0}^\infty \frac{\left[\frac{iN}{T}\textrm{Im}
\left(\mathrm{tr}_\textrm{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right)\right]^n}{n!}
x^ny^n,
\end{eqnarray}
we have
\begin{eqnarray}\label{vq-new}
\textrm{Var}(q)=\sum_{m=0}^\infty\sum_{n=0}^\infty
\frac{\left[\frac{N}{T}\textrm{Re}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]\right)
\right]^m}{m!}
\frac{\left[\frac{iN}{T}\textrm{Im}\left(\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right)
\right]^n}{n!}
F(m,n),
\end{eqnarray}
where
\begin{eqnarray}
F(m,n):=\frac{1}{4\pi\sqrt{2\pi\sigma^{\prime 2}}}\int dq
(q-N\expt{A}_\theta)^2\int dx
e^{-\frac{x^2}{8\sigma^{\prime 2}}}
e^{ix(q-N\expt{A}_\theta)}x^{2m+n}\int dy
e^{-\frac{y^2}{8\sigma^{\prime 2}}}y^{n}.
\end{eqnarray}
Since
\begin{eqnarray}
\int dy e^{-\frac{y^2}{8\sigma^{\prime 2}}} y^n=0,\quad n\in\textrm{odd},
\end{eqnarray}
it follows that
\begin{eqnarray}
F(m,n)=0, \quad n\in\textrm{odd}.
\end{eqnarray}
Further, since
\begin{eqnarray}
\int dq(q-N\expt{A}_\theta)^2\int dx e^{-\frac{x^2}{8\sigma^{\prime 2}}} e^{ix(q-N\expt{A}_\theta)} x^{2l}=0, \quad l\geq 2,
\end{eqnarray}
it follows that
\begin{eqnarray}
F(m,n)=0, \quad 2m+n=4,6,8,10,\cdots.
\end{eqnarray}
So, the non-vanishing terms are $F(0,0)$, $F(1,0)$, and $F(0,2)$, given by
\begin{eqnarray}\label{sec2:F}
F(0,0)=\sigma^2,\quad
F(1,0)=-2, \quad\textrm{and}\quad
F(0,2)=-2/\sigma^2,
\end{eqnarray}
respectively. Inserting Eq.~(\ref{sec2:F}) into Eq.~(\ref{vq-new}) yields
\begin{eqnarray}\label{sec2:vq-f}
\textrm{Var}(q)=\sigma^2-\frac{2N\textrm{Re}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]
\right)}{T}
+\left[\frac{N\textrm{Im}\left(\mathrm{tr}_\mathscr{S}
[A\mathcal{S}_\theta(A\rho_\theta)]\right)}{T\sigma }\right]^2.
\end{eqnarray}
Finally, substituting Eq.~(\ref{sec2:vq-f}) into Eq.~(\ref{delta}), we obtain
\begin{eqnarray}\label{sec2:formula}
\delta\theta=\sqrt{\sigma^2-\frac{2N\textrm{Re}\left(
\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right)}{T}
+\left[\frac{N\textrm{Im}\left(\mathrm{tr}_\mathscr{S}[A\mathcal{S}_\theta(A\rho_\theta)]\right)}{T\sigma }\right]^2}/\left(N\frac{\partial f}{\partial\theta}\right).
\end{eqnarray}
This completes the proof.
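As a numerical sanity check on Eq.~(\ref{sec2:F}) (an illustration, not part of the proof), the three non-vanishing moments can be evaluated by direct quadrature of the defining integrals, here with $\sigma=1/2$, hence $\sigma^\prime=1/(2\sigma)=1$:

```python
import numpy as np

# NumPy 2.x renamed np.trapz to np.trapezoid; support both.
trap = getattr(np, "trapezoid", None) or np.trapz

sigma = 0.5
sp = 1.0 / (2.0 * sigma)          # sigma' = 1/(2 sigma)
C = 1.0 / (4 * np.pi * np.sqrt(2 * np.pi * sp**2))
x = np.linspace(-20, 20, 2001)
u = np.linspace(-10, 10, 1201)    # u stands for q - N<A>_theta

# Analytic y-integrals: int e^{-y^2/(8 sp^2)} y^n dy for n = 0, 2.
y_int = {0: np.sqrt(8 * np.pi * sp**2),
         2: 0.5 * np.sqrt(np.pi) * (8 * sp**2) ** 1.5}

def F(m, n):
    # Inner x-integral for each u; the sine part cancels for even powers.
    inner = trap(np.exp(-x**2 / (8 * sp**2)) * x ** (2 * m + n)
                 * np.cos(np.outer(u, x)), x, axis=1)
    return C * y_int[n] * trap(u**2 * inner, u)

print(F(0, 0), F(1, 0), F(0, 2))  # approx sigma^2 = 0.25, -2, -2/sigma^2 = -8
```

The quadrature reproduces $F(0,0)=\sigma^2$, $F(1,0)=-2$, and $F(0,2)=-2/\sigma^2$ to high accuracy, consistent with Eq.~(\ref{sec2:F}).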
\section{The estimation error in the multi-parameter case}
In this section, we address the estimation error of our approach in the multi-parameter case. To quantify the estimation error in this case, we adopt the following measure,
\begin{eqnarray}\label{sec3:df-error}
\delta\bm{\theta}:=\sqrt{\int d\bm{q} \norm{\hat{\bm{\theta}}(\bm{q})-\bm{\theta}}^2\textrm{Pr}(\bm{q})}.
\end{eqnarray}
Here, $d\bm{q}:=dq_1\cdots dq_M$, $\norm{\hat{\bm{\theta}}(\bm{q})-\bm{\theta}}^2:=
\sum_i\left(\hat{\theta}_i(\bm{q})-\theta_i\right)^2$, and $\textrm{Pr}(\bm{q}):=\textrm{Pr}(q_1)\cdots\textrm{Pr}(q_M)$, where $\textrm{Pr}(q_i)$ denotes the probability distribution of the pointer reading
$q_i$. The above measure is a natural generalization of the measure defined in Ref.~\cite{sm1994Braunstein3439}, as Eq.~(\ref{sec3:df-error}) reduces to Eq.~(\ref{sec1:df-error}) in the single-parameter case.
Using Taylor-series expansion of $\bm{f}^{-1}(\bm{q}/N)$ and noting that $\bm{\theta}=\bm{f}^{-1}(\expt{A_1}_\theta,\cdots,\expt{A_M}_\theta)$, we have
\begin{eqnarray}\label{sec3:ts}
\norm{\hat{\bm{\theta}}(\bm{q})-\bm{\theta}}^2=
\norm{\bm{J}_{\bm{f}^{-1}}\left(\bm{q}/N-(\expt{A_1}_\theta,\cdots,\expt{A_M}_\theta)
\right)^T}^2
=\frac{1}{N^2}\sum_{i}\left[\sum_j(\bm{J}_{\bm{f}^{-1}})_{ij}(q_j-N\expt{A_j}_\theta)\right]^2.
\end{eqnarray}
Here, $\bm{J}_{\bm{f}^{-1}}$ is the Jacobian matrix associated with $\bm{f}^{-1}$, with $(\bm{J}_{\bm{f}^{-1}})_{ij}$ denoting its $ij$-th element. Equation (\ref{sec3:ts}) holds for $\bm{q}/N$ close to $(\expt{A_1}_\theta,\cdots,\expt{A_M}_\theta)$. Noting that $\textrm{Pr}(\bm{q})$ decreases exponentially to zero as $\bm{q}/N$ moves away from $(\expt{A_1}_\theta,\cdots,\expt{A_M}_\theta)$, we can substitute Eq.~(\ref{sec3:ts}) into Eq.~(\ref{sec3:df-error}) and obtain
\begin{eqnarray}\label{sec3:st1}
\delta\bm{\theta}&=&\frac{1}{N}\sqrt{\int d\bm{q} \sum_i\left[\sum_j(\bm{J}_{\bm{f}^{-1}})_{ij}(q_j-N\expt{A_j}_\theta)\right]^2
\textrm{Pr}(\bm{q})}\nonumber\\
&=&
\frac{1}{N}\sqrt{\sum_{ijk}(\bm{J}_{\bm{f}^{-1}})_{ij}(\bm{J}_{\bm{f}^{-1}})_{ik}\iint dq_jdq_k(q_j-N\expt{A_j}_\theta)
(q_k-N\expt{A_k}_\theta)
\textrm{Pr}(q_j)\textrm{Pr}(q_k)}.
\end{eqnarray}
Simplifying Eq.~(\ref{sec3:st1}) by noting that
\begin{eqnarray}
\int dq_j (q_j-N\expt{A_j}_\theta)\textrm{Pr}(q_j)=0,
\end{eqnarray}
we have
\begin{eqnarray}\label{sec3:st3}
\delta\bm{\theta}=\frac{1}{N}\sqrt{\sum_{ij}(\bm{J}_{\bm{f}^{-1}})_{ij}^2
\textrm{Var}(q_j)}.
\end{eqnarray}
Substituting Eq.~(\ref{sec2:vq-f}) into Eq.~(\ref{sec3:st3}), we reach the formula describing the error $\delta\bm{\theta}$,
\begin{eqnarray}\label{sec3:formula}
\delta\bm{\theta}=\frac{1}{N}\sqrt{\sum_{ij}(\bm{J}_{\bm{f}^{-1}})_{ij}^2
\left[\sigma^2-\frac{2N\textrm{Re}\left(\mathrm{tr}_\mathscr{S}
[A_j\mathcal{S}_\theta(A_j\rho_\theta)]
\right)}{T}
+\frac{N^2\textrm{Im}\left(\mathrm{tr}_\mathscr{S}
[A_j\mathcal{S}_\theta(A_j\rho_\theta)]\right)^2}{T^2\sigma^2}\right]}.
\end{eqnarray}
As can be seen from this formula, as long as $N\leq N_\textrm{max}:=O(T)$, $\delta\bm{\theta}\thicksim 1/N$, that is, our approach gives the Heisenberg
scaling of precision in the multi-parameter case as well. On the other hand, noting that $\bm{J}_{\bm{f}^{-1}}=1/\frac{\partial f}{\partial\theta}$ in the single-parameter case, we deduce that formula (\ref{sec3:formula}) reduces to formula (\ref{sec2:formula}).
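The error-propagation step of Eqs.~(\ref{sec3:st1})--(\ref{sec3:st3}) can be verified numerically. The sketch below uses a hypothetical two-parameter model with a \emph{linear} map $\bm{f}$ (our own illustrative choice, so the Taylor step is exact) and Gaussian pointer statistics, and compares the closed-form error against a Monte Carlo average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear map f: theta -> (<A_1>_theta, <A_2>_theta).
# Linearity makes the Taylor step of Eq. (sec3:ts) exact, so the
# Monte Carlo error should match the closed-form expression.
F = np.array([[1.0, 0.5],
              [0.2, 1.0]])
J_inv = np.linalg.inv(F)            # Jacobian of f^{-1}

theta = np.array([0.3, 0.7])        # true parameters
N = 1000                            # number of probes
var_q = np.array([4.0, 9.0])        # Var(q_j) of the pointer readings

# Eq. (sec3:st3): delta_theta = (1/N) sqrt(sum_ij (J_f^-1)_ij^2 Var(q_j))
delta_formula = np.sqrt((J_inv**2 @ var_q).sum()) / N

# Monte Carlo: q_j ~ Normal(N <A_j>_theta, Var(q_j)), invert, RMS error
mean_q = N * (F @ theta)
q = rng.normal(mean_q, np.sqrt(var_q), size=(200_000, 2))
theta_hat = (q / N) @ J_inv.T       # f^{-1}(q/N) for each sample
delta_mc = np.sqrt(np.mean(np.sum((theta_hat - theta)**2, axis=1)))

print(delta_formula, delta_mc)      # agree to Monte Carlo accuracy
```

Doubling $N$ halves `delta_formula`, illustrating the $\delta\bm{\theta}\thicksim1/N$ scaling within the regime $N\leq N_\textrm{max}$.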
|
1503.07171
|
\section{Introduction}
Next generation ground-based gravitational wave (GW) detectors such as
aLIGO~\citep{LIGO}
are expected to reach
design sensitivity within the next few years. Among their most
promising sources are mergers of compact objects (COs), including black
hole--neutron star (BH--NS) binaries. BH--NS mergers are also proposed
short-hard gamma-ray burst (sGRB) engines
(e.g.~\citet{Meszaros:2006rc}), and may power other electromagnetic
(EM) transients either
preceding~\citep{Hansen:2000am,McWilliams:2011zi,Paschalidis:2013jsa}
or following~\citep{2012ApJ...746...48M} the merger. These EM
counterparts to GWs could be observed by current and future
wide-field telescopes such as PTF~\citep{2009PASP..121.1334R} and
LSST~\citep{2012arXiv1211.0310L}.
Extracting maximum information from such ``multimessenger''
observations requires careful modeling of BH--NS mergers.
Several studies of quasicircular BH--NS inspirals using numerical
relativity simulations have been performed, see,
e.g.,~\citet{cabllmn10,Etienne:2012te, Kyutoku:2013wxa,
Tanaka:2013ixa,Foucart:2014nda}. While
quasicircular binaries may dominate the global rates of BH--NS
encounters in the Universe, recent
calculations~\citep{Kocsis:2011jy,lee2010,Samsing:2013kua} suggest
that in dense stellar regions, such as galactic nuclei and globular
clusters (GCs), CO binaries can form through dynamical capture and
merge with non-negligible eccentricities.
Compared to quasi-circular inspirals, these emit more GW energy in the high
luminosity, strong-field regime of general relativity, and small
changes in the energy of the binary at each pericenter passage can lead
to relatively large changes in the time between GW bursts---the
leading order GW observable. Hence these systems could be excellent
laboratories to test gravity and measure the internal structure of a NS
(insofar as this affects the energy of the orbit, e.g. tidal excitation of
f-modes).
Rates of these events are highly uncertain, but have been estimated to be up to
$\sim100\ {\rm yr}^{-1}\ {\rm Gpc}^{-3}$. To realize this rich potential
to learn about the Universe from eccentric mergers, what matters is not
their rate relative to quasi-circular inspirals, but only that
eccentric mergers occur frequently enough that some events could be
observable by aLIGO within its lifetime. However, new detection pipelines would
be needed that are better adapted to the repeated-burst nature of eccentric GW
mergers for aLIGO to efficiently detect them~\citep{Tai:2014bfa}. For more
discussion of rates, distinguishing features, and detection issues for eccentric
encounters, see~\citet{2013PhRvD..87d3004E} and references therein.
Motivated by the above, \citet{ebhns_letter,bhns_astro_paper}
performed fully general-relativistic hydrodynamical (GR-HD)
simulations of dynamical-capture BH--NS mergers. These studies explored
the effects of impact parameter, BH spin, and NS equation of state (EOS)
on GW emission and post-merger BH disk and ejecta masses.
Here, we expand upon this work by including the effects of NS spin.
To date the only simulations including spinning NSs focused on quasicircular NS--NS mergers,
e.g.~\citet{Tichy:2011gw,Bernuzzi:2013rza}, demonstrating that even
moderate spins can affect the dynamics.
NS spin has two main effects: (1) it modifies the star's structure,
making it less gravitationally bound; (2) it changes the orbital
dynamics, e.g., by shifting the effective innermost stable orbit
(ISO). This can impact not only the GWs from CO mergers,
but also the amount of matter forming the BH accretion disk that
putatively powers a sGRB, and the amount of unbound
matter that powers other EM transients, such as kilonovae. There
may also be effects on pre-merger EM signals since the NS spin
determines the light-cylinder radius, and hence the orbital separation
at which unipolar induction turns on.
Spin effects on the NS structure cannot be neglected if the NS spin
period $P$ is $\mathcal{O}({\rm ms})$. Furthermore, for comparable
mass BH--NSs near the tidal disruption radius, NS spin effects on the
orbit will be non-negligible when $P$ is similar to the BH--NS
encounter timescale~\citep{Tichy:2011gw}. For example, a BH--NS
eccentric encounter with mass ratio $q=M_{\rm BH}/M_{\rm NS}=4$ (as
studied here) near a periapse of $r_p=10M$ has an interaction
timescale of $t_{\rm int}\simeq(r_p^3/M)^{1/2}\sim1.0(M_{\rm
NS}/1.4M_\odot)\rm ms$ ($M$ is the system's total mass,
and we use geometric units with $G=c=1$ throughout).
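The quoted interaction timescale follows from restoring $G$ and $c$ in $t_{\rm int}\simeq(r_p^3/M)^{1/2}$; a quick check with the values from the text ($q=4$, $M_{\rm NS}=1.4M_\odot$, $r_p=10M$):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

q = 4.0
M_NS = 1.4 * M_sun
M = (1.0 + q) * M_NS               # total binary mass for q = 4
M_geom_s = G * M / c**3            # M in seconds (geometric units, G = c = 1)

rp_over_M = 10.0
t_int = rp_over_M**1.5 * M_geom_s  # (r_p^3 / M)^{1/2}, restored to seconds

print(t_int * 1e3, "ms")           # ~1.1 ms, consistent with the quoted ~1.0 ms
```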
NSs in field BH--NS binaries may not commonly have $P=\mathcal{O}({\rm ms})$ near
merger. However, there are two reasons to think that the opposite may
hold for dynamical capture BH--NS mergers occurring in GCs: the pulsar
spin period distribution in Galactic GCs peaks in the milliseconds, and
millisecond pulsars (MSPs) have longer inferred magnetic dipole
spin-down timescales.
Of the 144 currently known pulsars in Galactic GCs, $\sim83\%$ have
periods less than 10 ms, $\sim55\%$ less than 5 ms, and $\sim12\%$ have
periods less than 2.5 ms~\footnote{\url{www.naic.edu/~pfreire/GCpsr.html}\label{Footnote1}}.
This set includes PSR-J1748-2446ad---the fastest-spinning pulsar
known, with $P=1.396$ ms~\citep{Hessels:2006ze}.
The theoretical explanation for this skew toward short periods is that
GCs favor the formation of low-mass X-ray binaries
(LMXB)~\citep{Verbunt1987}, which are thought to
spin up the NS to ms periods through mass and angular momentum
transfer~\citep{Alparetal1982,Radhakrishnan1982}.
Assuming that pulsar spin-down is predominantly due to magnetic dipole
emission, both the magnetic field strength ($B$) and spin-down timescale
($t_{\rm sd}$) can be computed from observations of $P$ and its time
derivative $\dot{P}$. For $B$ the relation is~\citep{Bhattacharya19911}
\begin{eqnarray}
B\sim 1.6\times10^{8}{\rm \ G}\left[\frac{P}{2.5{\rm \ ms}}\frac{\dot P}{10^{-20}}\right]^{1/2},\nonumber
\end{eqnarray}
which for known GC MSPs with $\dot{P}>0$ gives typical values
$B\sim10^{8}\mbox{--}10^{9}$\,G. Observations of X-ray oscillations of
accretion-powered MSPs in LMXBs, and pulsar recycling theory, imply
$B$ in the range $3\times10^7\mbox{--}3\times10^8$\,G~\citep{LambYu2005}. For
$t_{\rm sd}$ the expression is~\citep{ZhangMezaros2001}
\begin{eqnarray}
t_{\rm sd}&\sim& 4 {\rm \ Gyr}\ \frac{I}{10^{45}\ {\rm g\ cm}^2}\left(\frac{B}{3\times
10^8\ {\rm G}}\right)^{-2} \nonumber \\
&&\times\left(\frac{P}{2.5 \rm \ ms}\right)^{2}\left(\frac{R_{\rm NS}}{10 \rm \
km}\right)^{-6},\nonumber
\end{eqnarray}
where the NS moment of inertia is $I$.
Even neglecting the possibility of magnetic field decay, a pulsar
with a magnetic field of $3\times10^8$\,G and initial $P=2.5$\,ms
will take roughly a Hubble time for $P$ to double.
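The two estimates above are straightforward to evaluate; a minimal sketch (function names are ours, formulas as given in the text):

```python
import math

def B_gauss(P_ms, Pdot):
    # dipole field estimate from the first equation above, in gauss
    return 1.6e8 * math.sqrt((P_ms / 2.5) * (Pdot / 1e-20))

def t_sd_gyr(P_ms, B, I=1e45, R_km=10.0):
    # spin-down timescale from the second equation above, in Gyr
    return (4.0 * (I / 1e45) * (B / 3e8)**-2
            * (P_ms / 2.5)**2 * (R_km / 10.0)**-6)

# a typical GC millisecond pulsar
print(B_gauss(2.5, 1e-20))   # 1.6e8 G
print(t_sd_gyr(2.5, 3e8))    # 4.0 Gyr
```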
Given the long spin-down timescale of MSPs and the results of~\citet{lee2010},
which suggest that in GCs there could be 40 BH--NS
collisions per Gyr per Milky Way-equivalent galaxy, it is at least
conceivable that some of these eccentric BH--NS collisions take place
with millisecond NSs.
With this motivation, here we focus on eccentric BH--NS mergers (with initial
conditions corresponding to a marginally unbound
Newtonian orbit) and explore NS spin effects.
We show that even moderate spins
can significantly impact the outcome, both in terms of the GWs,
and amounts of tidally stripped bound and unbound matter. The
remainder of the paper is as follows: in
Sec.~\ref{numerical_approach} we describe our initial data and
numerical methods. In Sec.~\ref{results_and_discussion} we present
our simulation results and discuss the impact of NS spin on
gravitational and EM signatures. We summarize in
Sec.~\ref{conclusions} and discuss future work.
\section{Numerical approach}
\label{numerical_approach}
We perform GR-HD simulations of BHs merging with rotating NSs using
the code of~\cite{code_paper}. The field equations are solved in the
generalized-harmonic formulation, using finite differences, while the
hydrodynamics are evolved using the same high-resolution
shock-capturing techniques as in~\cite{bhns_astro_paper}.
To construct initial data, we solve the constraint equations using the
code of~\cite{idsolve_paper}, specifying the free-data as a
superposition of a non-spinning BH with an equilibrium,
uniformly rotating NS, which we generate using the code
of~\cite{1994ApJ...424..823C,1994ApJ...422..227C}. For the NS EOS, we
adopt the HB piece-wise polytrope from~\cite{read}, and include a
thermal component $P_{\rm th}=0.5\epsilon_{\rm th}\rho$ allowing for
shock heating.
Fixing the NS gravitational mass to $1.35M_\odot$, we consider NSs
with dimensionless spins $a_{\rm NS}=J/M_{\rm NS}^2=0$, 0.1, 0.2, 0.3,
0.4, 0.5, and 0.756, having corresponding compactions (mass-to-equatorial-radius ratio)
$C=0.172$, 0.171,
0.169, 0.166, 0.161, 0.154, and $0.12$. The ranges of $T/|W|$
(kinetic-to-gravitational-potential-energy ratio) and $P$ in our spinning
NS models are $[0.003,0.12]$ and $[5.25,1.00]$ ms. The
fastest spinning NS considered has a polar-to-equatorial-radius ratio
of $r_{po}/r_{eq}=0.55$, near the mass-shedding limit of
$r_{po}/r_{eq}=0.543$.
We also vary
$r_p/M\in[5,8]$.
We consider systems with $q=4$. The two COs are initially placed at a
separation of $d=50M$ ($\sim500$km), with positions and velocities
corresponding to a marginally unbound Newtonian orbit labeled by
$r_p$. For this initial study, we only consider cases where the spin
is aligned or anti-aligned with the orbital angular momentum (the
latter indicated by $a_{\rm NS}<0$).
The simulations utilize seven levels of adaptive mesh refinement
that are dynamically adjusted based on the estimated truncation error
of the metric. Most simulations are performed using a base-level
resolution with $193^3$ points, and finest-level resolution with
approximately $75$ ($130$) points covering the (non-spinning) NS (BH)
diameter, respectively. For $r_p/M=6$, $a_{\rm NS}=0.756$ we also
perform simulations at $2/3$ and $4/3\times$ the resolution, to
establish convergence and estimate truncation error. In
Fig.~\ref{conv_plot} we demonstrate the convergence in the GW emission.
\begin{figure}
\begin{center} \hspace{-0.5cm}
\includegraphics[height=2.5in,clip=true,draft=false]{rp6_a76_psi4_conv.eps}
\caption{ Convergence of the GW emission for the $r_p/M=6$, $a_{\rm NS}=0.756$ case.
The top panel shows the real part of the $\ell=m=2$ mode of the Newman--Penrose scalar
$\Psi_4$ at three resolutions, and the bottom panel the differences in
this quantity with resolution, scaled assuming second-order
convergence.} \label{conv_plot}
\end{center}
\end{figure}
\section{Results and discussion}
\label{results_and_discussion}
\subsection{Simple estimates}
\label{simple_estimates}
During eccentric encounters between NSs and BHs, for NS tidal
disruption to form a substantial disk and unbind a non-negligible
amount of matter, it must occur outside the ISO radius ($r_{\rm
ISO}=4M_{\rm BH}$ for a marginally unbound test particle about a
non-spinning BH).
In addition to shifting the effective ISO, spin makes the NS less self-bound,
and thus alters the tidal disruption radius.
Equating the sum of the tidal and centrifugal accelerations to
the gravitational acceleration on the NS surface yields
\begin{equation}
\label{rtidal}
\frac{r_{t}}{M_{\rm BH}}\sim q^{-2/3}C^{-1}\left(\frac{1}{1-(a_{\rm NS}/a_{\rm ms})^2}\right)^{1/3},
\end{equation}
where we replaced the NS angular frequency with $\Omega=J/I=a_{\rm
NS}M_{\rm NS}^2/I$, and let $I=2fM_{\rm NS}R_{\rm NS}^2/5$, with $f$
an order-unity constant that depends on the NS structure. Here,
$a_{\rm ms}=(4f^2/25 C)^{1/2}\simeq 0.8(f/0.7)(C/0.12)^{-1/2}$ is the
mass-shedding limit spin parameter.
Equation~\eqref{rtidal} shows that the closer $a_{\rm NS}$ is
to $a_{\rm ms}$, the larger the tidal disruption radius. It also
suggests that for sufficiently fast rotators, the tidal disruption
radius can be outside the ISO even for large $q$, something which is
not true for non-spinning NSs unless the BH has near-extremal spin.
Additionally, as the prograde NS spin increases, the effective ISO
decreases. Therefore, we expect more massive disks and more unbound
material following tidal disruption outside the ISO with increasing
$a_{\rm NS}$.
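A rough evaluation of Eq.~\eqref{rtidal} for the $q=4$ compactions used in this work, taking $f=0.7$ as in the estimate of $a_{\rm ms}$, illustrates both trends: the estimated tidal disruption radius grows with spin and, near mass shedding, moves outside the non-spinning ISO at $4M_{\rm BH}$.

```python
# Evaluate Eq. (rtidal) for q = 4 and representative (C, a_NS) pairs
# from the sequence of NS models above, assuming f = 0.7.
f = 0.7
q = 4.0

def a_ms(C):
    # mass-shedding spin parameter, (4 f^2 / 25 C)^{1/2}
    return (4.0 * f**2 / (25.0 * C))**0.5

def r_t_over_MBH(C, a_NS):
    # tidal disruption radius in units of M_BH, Eq. (rtidal)
    return q**(-2.0/3.0) / C * (1.0 / (1.0 - (a_NS / a_ms(C))**2))**(1.0/3.0)

# r_t grows from ~2.3 M_BH (non-spinning) to ~6.6 M_BH near mass shedding
for C, a in [(0.172, 0.0), (0.166, 0.3), (0.154, 0.5), (0.12, 0.756)]:
    print(C, a, round(r_t_over_MBH(C, a), 2))
```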
\subsection{Dynamics and Gravitational Waves\label{Sec:GWs}} For the cases
considered here, those with $r_p\leq6.5$ merge on the initial
encounter, while those with $r_p\geq7.5$ go back out on an elliptic
orbit after fly-by. Figure~\ref{GWs_plot} plots the dominant
contribution to the GW signal for $r_p/M=6$, 7, and 8; see also Table~\ref{bhrns_table}.
\begin{figure}
\begin{center}
\includegraphics[height=2.5in,clip=true,draft=false]{bh_rns_rp6_psi4.eps}
\includegraphics[height=2.5in,clip=true,draft=false]{bh_rns_rp7_psi4.eps}
\includegraphics[height=2.5in,clip=true,draft=false]{bh_rns_rp8_psi4.eps}
\caption{Plots of the real part of the $\ell=m=2$ mode of the
Newman--Penrose scalar $\Psi_4$. The top and middle panels show GWs
from simulations with $r_p/M=6$ and 7, respectively. The
bottom panel plots the GWs for $r_p/M=8$ fly-by cases.}
\label{GWs_plot}
\end{center}
\end{figure}
Near the critical $r_p$ --- below (above) which a merger (fly-by)
occurs --- there are large differences in the dynamics that have a
noticeable impact on the GW signal and tidally stripped matter. This
is evident here with the $r_p/M=7$ case, where there is either partial
tidal disruption followed by a merger on a second encounter (for
$a_{\rm NS}=0.4$ and 0.5), or complete tidal disruption/merger on the
first encounter (for the other spins). This is illustrated in
Fig.~\ref{density_snapshots} for $a_{\rm NS}=0.2,0.5$, as well as the
bottom-right panel of Fig.~\ref{matter_unbound_plot} where it can be seen
that the NS for $a_{\rm NS}=0.4$ (0.5) loses $\sim10\%$ ($\sim20\%$)
of its mass in the initial encounter.
Though it is difficult to disentangle the nonlinear dynamics occurring
here, we suggest the following explanation for this non-monotonic
behavior as a function of $a_{\rm NS}$ for $r_p/M=7$. The $a_{\rm
NS}=0.756$ NS is the least bound and for this case complete tidal
disruption occurs on the initial encounter. As the NS is tidally
stretched, GW emission effectively shuts off (middle panel
Fig.~\ref{GWs_plot}), and matter begins to accrete onto the BH (bottom
panel of Fig.~\ref{matter_unbound_plot}). For the next two lower spins,
the material is more tightly bound, and only partial disruption
occurs, with some of the material promptly accreting onto the
BH. However, as the NS spin decreases, the effective ISO radius
increases, and for $a_{\rm NS}\lesssim0.3$ the core of the NS crosses
the ISO, resulting in immediate merger. The $a_{\rm NS}=0.4$ and 0.5
cases lose enough orbital energy and angular momentum during the first
encounter that they merge on the second.
Note that the simple estimate of Equation~(\ref{rtidal}) implies that
since at $r_p/M=7$ the $a_{\rm NS}=0.756$ case is completely
disrupted, while some of the lower spin cases are partially disrupted,
$r_p/M=8,a_{\rm NS}=0.756$ should also be disrupted, given how close
this spin is to the break up value. That this does {\em not} happen
shows that this crude estimate significantly underestimates the
self-binding of high spin stars.
\begin{figure*}
\begin{center}
\includegraphics[height = 1.275in]{vertical_scale.eps}
\put(1,86){$10^{15}$ g cm$^{-3}$}
\put(1,1){$10^{9}$}
\hspace{0.8 in}
\includegraphics[trim =5.5cm 4.61489cm 5.5cm 4.61489cm,height=1.275in,clip=true,draft=false]{t810ms_rp7a5.eps}
\includegraphics[trim =5.5cm 4.61489cm 5.5cm 4.61489cm,height=1.275in,clip=true,draft=false]{t938ms_rp7a5.eps}
\includegraphics[trim =5.5cm 4.61489cm 5.5cm 4.61489cm,height=1.275in,clip=true,draft=false]{t1416ms_rp7a5.eps}
\includegraphics[trim =5.5cm 4.61489cm 5.5cm 4.61489cm,height=1.275in,clip=true,draft=false]{t1658ms_rp7a5.eps}
\includegraphics[height = 1.275in]{vertical_scale.eps}
\put(1,86){$10^{15}$ g cm$^{-3}$}
\put(1,1){$10^{9}$}
\hspace{0.8 in}
\includegraphics[trim =5.5cm 4.61489cm 5.5cm 4.61489cm,height=1.275in,clip=true,draft=false]{t810ms_rp7a2.eps}
\includegraphics[trim =5.5cm 4.61489cm 5.5cm 4.61489cm,height=1.275in,clip=true,draft=false]{t938ms_rp7a2.eps}
\includegraphics[trim =5.5cm 4.61489cm 5.5cm 4.61489cm,height=1.275in,clip=true,draft=false]{t1416ms_rp7a2.eps}
\includegraphics[trim =5.5cm 4.61489cm 5.5cm 4.61489cm,height=1.275in,clip=true,draft=false]{t1658ms_rp7a2.eps}
\caption{Equatorial density snapshots. Top row ($r_p/M=7,\ a_{\rm
NS}=0.5$) from left to right: the NS survives the first encounter
(first and second panels), it is completely tidally disrupted during
the second encounter (third panel), the bulk of the matter outside
the BH is unbound (fourth panel). Bottom row ($r_p/M=7,\ a_{\rm
NS}=0.2$) from left to right: the NS is tidally disrupted during
the first encounter (first panel), a tidal tail forms ejecting some
matter to infinity (second panel), an accretion disk develops
outside the BH (third and fourth panels). The scale can be inferred from the
size of the BH ($R_{\rm BH}\sim16$ km).} \label{density_snapshots}
\end{center}
\end{figure*}
\subsection{Post-merger Matter Distribution}
In Table~\ref{bhrns_table} we list the amount of bound and unbound mass exterior
to the BH shortly following
merger. For $r_p/M=5,6$ the bound mass
is only a weak function of $a_{\rm NS}$, with the notable
exception of $a_{\rm NS}=0.756, r_p/M=6$. By contrast, near the
critical impact parameter ($r_p/M=7$)
there is over an order of magnitude variation
in bound material as a function of NS spin.
The amount of bound rest-mass that forms a BH accretion disk is
$0.01\mbox{--}0.15M_\odot$ in our set. If these disks power sGRBs on
timescales of $\sim0.2$s, the accretion rates will be
$\sim0.05\mbox{--}0.75M_\odot\ \rm s^{-1}$, i.e., consistent with
magnetohydrodynamic BH--NS
studies~\citep{Paschalidis:2014qra}. Assuming a $1\%$ conversion
efficiency of accretion power to jet luminosity, these accretion rates
imply luminosities of $10^{51}\mbox{--}10^{52}\rm erg\ s^{-1}$---consistent with
characteristic sGRB luminosities.
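The accretion-rate and luminosity numbers quoted above follow from simple arithmetic; a sketch using the stated $\sim0.2$\,s sGRB timescale and $1\%$ accretion-to-jet efficiency:

```python
c = 2.998e10          # speed of light, cm/s
M_sun = 1.989e33      # solar mass, g
eff = 0.01            # accretion-power-to-jet-luminosity efficiency (from the text)
t_sGRB = 0.2          # assumed sGRB duration, s (from the text)

results = []
for M_disk in (0.01, 0.15):            # range of bound disk masses, M_sun
    Mdot = M_disk / t_sGRB             # accretion rate, M_sun / s
    L_jet = eff * Mdot * M_sun * c**2  # implied jet luminosity, erg / s
    results.append((Mdot, L_jet))
    print(M_disk, Mdot, L_jet)
```

The two endpoints give $\dot{M}\approx0.05$ and $0.75\,M_\odot\ {\rm s}^{-1}$ and luminosities of order $10^{51}$ and $10^{52}\,{\rm erg\ s^{-1}}$, as stated.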
The top two panels in Fig.~\ref{matter_unbound_plot} show plots of the
asymptotic velocity distribution of the unbound matter for
$r_p/M=6,7$. As anticipated (see Sec.~\ref{simple_estimates}), the
general trend is that increasing $a_{\rm NS}$ increases both the
amount and average asymptotic velocity of unbound material. This is
also seen in Table~\ref{bhrns_table} where we list
these quantities for all cases.
The bottom-left panel of Fig.~\ref{matter_unbound_plot} also demonstrates
that including spin increases the amount of unbound material by an
order of magnitude or more for the cases considered.
\begin{figure*}
\begin{center}
\includegraphics[height=2.5in,clip=true,draft=false]{bh_rns_velocity_rp6.eps}
\includegraphics[height=2.5in,clip=true,draft=false]{bh_rns_velocity_rp7.eps}
\includegraphics[height=2.5in,clip=true,draft=false]{mub_ans.eps}
\includegraphics[height=2.5in,clip=true,draft=false]{bh_rns_rp7_m0.eps}
\caption{ Top: distribution of the asymptotic velocity of unbound
rest-mass, binned in increments of $0.05c$, and computed $\approx10$
ms post-merger for $r_p/M=6$ (left) and $r_p/M=7$ (right) and
various spins. Bottom: total unbound mass as a function of NS spin (left)
and rest-mass outside the BH versus time for $r_p/M=7$ and various spins
(right). Note that for the lowest point in the bottom-left panel, no unbound matter
was found in the simulation (indicated by an arrow).
} \label{matter_unbound_plot}
\end{center}
\end{figure*}
\begin{table*}
\begin{center}
{\scriptsize
\begin{tabular}{l l l l l l l l l l l l}
\hline
$r_p$ &
$a_{\rm NS}$ &
$\frac{J_{\rm ADM}}{M^2}$\tablenotemark{a} &
$\frac{E_{\rm GW}}{M}\times100 $ \tablenotemark{b} &
$ \frac{J_{\rm GW}}{M^2}\times100$ \tablenotemark{c} &
$M_{0,b}$ \tablenotemark{d} &
$M_{0,u}$ \tablenotemark{e} &
$\langle v_{\infty}\rangle$ \tablenotemark{f} &
$E_{{\rm kin},51}$ \tablenotemark{g} &
$L_{41}$ \tablenotemark{h} &
$F_{\nu}$ \tablenotemark{i} &
$a_{\rm BH}$ \tablenotemark{j}
\\
\hline
5.0 & 0.00 & 0.52 & 0.71 & 4.33 & 1.11 & 0.00 & 0.0 & 0.0 & 0.0 & 0.0 & 0.50 \\
5.0 & 0.20 & 0.52 & 0.73 & 4.51 & 0.96 & 0.44 & 0.20 & 0.09 & 1.1 & 0.02 & 0.51 \\
5.0 & 0.40 & 0.53 & 0.81 & 4.95 & 0.94 & 0.64 & 0.19 & 0.3 & 1.3 & 0.05 & 0.51 \\
5.0 & 0.75 & 0.55 & 0.81 & 4.97 & 1.12 & 1.51 & 0.42 & 3.0 & 2.9 & 4.5 & 0.51 \\
6.0 & -0.40 & 0.55 & 0.92 & 5.66 & 1.01 & 0.50 & 0.22 & 0.3 & 1.2 & 0.08 & 0.51 \\
6.0 & 0.00 & 0.56 & 1.13 & 6.86 & 1.15 & 0.02 & 0.18 & 0.007 & 0.2 & 0.001 & 0.52 \\
6.0 & 0.10 & 0.57 & 1.16 & 7.47 & 1.07 & 0.30 & 0.19 & 0.1 & 0.9 & 0.02 & 0.53 \\
6.0 & 0.20 & 0.57 & 1.21 & 7.30 & 1.10 & 0.21 & 0.20 & 0.1 & 0.7 & 0.02 & 0.54\\
6.0 & 0.40 & 0.58 & 1.31 & 8.03 & 0.91 & 1.99 & 0.32 & 2.3 & 2.9 & 1.6 & 0.53\\
6.0 & 0.75 & 0.60 & 1.04(1.23)\tablenotemark{k}& 6.85(7.92)& 2.40(2.23)&14.39(14.01) & 0.346(0.350) & 19.9 & 8.2 & 17.9 & 0.491 (0.492) \\
6.5 & 0.40 & 0.60 & 1.51 & 9.91 & 4.39 & 9.08 & 0.28 & 8.0 & 5.8 & 3.8 & 0.50 \\
7.0 & -0.40& 0.63 & 1.51 & 9.38 & 1.17 & 0.35 & 0.23 & 0.2 & 1.0 & 0.06 & 0.53\\
7.0 & 0.00 & 0.61 & 1.72 & 11.65 & 4.31 & 3.53 & 0.21 & 1.7 & 3.1 & 0.4 & 0.51\\
7.0 & 0.20 & 0.62 & 1.68 & 12.13 & 8.95 & 11.73 & 0.26 & 8.6 & 6.3 & 3.4 & 0.47 \\
7.0 & 0.30 & 0.62 & 1.52 & 12.75 & 15.34 & 18.98 & 0.28 & 17.3 & 8.5 & 8.8 & 0.40 \\
7.0 & 0.40 & 0.63 & 2.12 & 18.27 & 0.94 & 5.10 & 0.28 & 4.6 & 4.4 & 2.3 & 0.45\\
7.0 & 0.50 & 0.63 & 1.65 & 15.39 & 2.06 & 16.80 & 0.33 & 20.4 & 8.6 & 16.1 & 0.50\\
7.0 & 0.75 & 0.64 & 0.70 & 6.95 & 12.43 & 30.98 & 0.32 & 36.6 & 11.4 & 25.3 & 0.37\\
\hline
\end{tabular}
\caption{Summary of simulations followed through merger. \label{bhrns_table}} \begin{justify}
For $r_p/M\geq7.5$ only the first fly-by encounter was modeled, hence
no information related to disrupted material is available. The
energy and angular momentum emitted in GWs for these cases drops
with increasing $r_p$ after the first encounter, as expected. To
within the estimated $20\%$ truncation error inferred from the
$r_p=6$ case resolution study, we see no variation with
spin. However, even a small variation in the energy
emission at fly-by could result in a significant change in the time
to the subsequent close encounter in a highly eccentric binary. Thus,
higher resolution studies would be needed to ascertain the effect of
spin on the GW signal for $r_p/M\geq7.5$.\end{justify}
\tablenotetext{a}{ADM angular momentum.}
\tablenotetext{b}{Total energy emitted in GWs through the $r=100 M$ surface.}
\tablenotetext{c}{Total angular momentum emitted in GWs.}
\tablenotetext{d}{Bound rest mass outside the BH $\sim 10$ ms post-merger in percent of $M_{\odot}$. }
\tablenotetext{e}{Unbound rest mass in percent of $M_{\odot}$. }
\tablenotetext{f}{Rest-mass averaged asymptotic velocity of unbound material.}
\tablenotetext{g}{Kinetic energy of ejecta in units of $10^{51}$ erg.}
\tablenotetext{h}{Kilonova bolometric luminosity in units of $10^{41}\rm erg\ s^{-1}$ using Eq.~\eqref{Lkilonovae}.}
\tablenotetext{i}{Specific brightness from ejecta interaction with ISM in units of mJy using Eq.~\eqref{Fnu}.
}
\tablenotetext{j}{Remnant BH dimensionless spin.}
\tablenotetext{k}{Values in parentheses are Richardson-extrapolated values
using all three resolutions.}
}
\end{center}
\end{table*}
The increase in total rest-mass $M_{\rm 0,u}$ and velocity $v$ of the unbound
material with increasing $a_{\rm NS}$ can strongly impact potential kilonova
signatures from such mergers. These arise when neutron-rich ejecta produce
heavy elements through the r-process that then undergo fission, emitting
photons~\citep{Li:1998bw,2005astro.ph.10256K}.
Recently~\cite{2013ApJ...775...18B} have shown that the
opacities in r-process ejecta will likely be dominated by
lanthanides, giving rise times of
\[t_{\rm peak}\approx0.25 (M_{\rm
0,u}/10^{-2}\ M_{\odot})^{1/2}(v/0.3c)^{-1/2} \ \mbox{ d}\]
with peak luminosities of
\begin{equation}L\approx 2\times 10^{41}\left(\frac{M_{\rm
0,u}}{10^{-2}\ M_{\odot}}\right)^{1/2}\left(\frac{v}{0.3c}\right)^{1/2}\mbox{ erg
s$^{-1}$}\label{Lkilonovae}\end{equation}
for typical values found here. In some cases, opacities an order of
magnitude lower than those used above may be
justified~\citep{2015MNRAS.446.1115M}. Using Eq.~\eqref{Lkilonovae}
we estimate the luminosity from potential kilonovae in
Table~\ref{bhrns_table}. For $r_p/M=5$, NS spin can make the
difference in whether there is a kilonova at all; for $r_p/M=6$ and
$r_p/M=7$, spin affects $L$ by an order of
magnitude in our set. For $r_p/M=6$ even a moderate $a_{\rm NS}=0.1$
increases $L$ by a factor of 4 compared to $a_{\rm
NS}=0$. \citet{2013ApJ...775...18B} predict that a kilonova
luminosity of $\sim10^{41} \rm erg\ s^{-1}$ corresponds to an r-band
magnitude of 23.5 mag at 200 Mpc (near the edge of the aLIGO volume),
above the planned LSST survey sensitivity of 24.5 mag.
Thus, differences in luminosity by factors of a few could be discernible.
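As a concrete check, evaluating the rise-time and peak-luminosity scalings above for the $r_p/M=6$, $a_{\rm NS}=0.4$ case from Table~\ref{bhrns_table} ($M_{0,u}\approx0.02M_\odot$, $\langle v_\infty\rangle\approx0.32c$) reproduces the tabulated $L_{41}=2.9$:

```python
def t_peak_days(M_ej, v):
    # kilonova rise time (days); M_ej in M_sun, v in units of c
    return 0.25 * (M_ej / 1e-2)**0.5 * (v / 0.3)**-0.5

def L_kilonova(M_ej, v):
    # peak bolometric luminosity, Eq. (Lkilonovae), erg/s
    return 2e41 * (M_ej / 1e-2)**0.5 * (v / 0.3)**0.5

# r_p/M = 6, a_NS = 0.4 case from the table: M_0,u = 1.99% of M_sun, <v> = 0.32c
print(t_peak_days(0.0199, 0.32), L_kilonova(0.0199, 0.32) / 1e41)
```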
Ejecta will also sweep the interstellar medium (ISM) producing radio
waves. These will peak on timescales of weeks with
brightness~\citep{2011Natur.478...82N}
\begin{eqnarray}
F(\nu_{\rm obs}) &\approx& 0.6(E_{\rm kin}/10^{51}\mbox{
erg})(n_0/0.1{\rm \ cm}^{-3})^{7/8} \label{Fnu} \\
&& (v/0.3c)^{11/4}(\nu_{\rm obs}/{\rm GHz})^{-3/4}(d/100{\rm \ Mpc})^{-2}\mbox{ mJy}\nonumber
\end{eqnarray}
for an observation frequency $\nu_{\rm obs}$ at a distance $d$, and
using $n_0\sim0.1$ cm$^{-3}$ as the density for GC
cores~\citep{2013MNRAS.430.2585R}. Estimating the kinetic energy and
the mass-averaged velocity in the ejecta, we show $F(\nu_{\rm
obs})$ via Eq.~\eqref{Fnu} in Table~\ref{bhrns_table}. For $r_p/M=6$
($r_p/M=7$) $F(\nu_{\rm obs})$ varies by 3 (2)
orders of magnitude over our set of spins.
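Equation~\eqref{Fnu} with $n_0=0.1\ {\rm cm}^{-3}$ and, as appears to be assumed for the tabulated values, $\nu_{\rm obs}=1$ GHz and $d=100$ Mpc, reproduces the table entries; e.g., for the $r_p/M=6$, $a_{\rm NS}=0.4$ case ($E_{\rm kin}=2.3\times10^{51}$ erg, $\langle v_\infty\rangle\approx0.32c$):

```python
def F_mJy(E_kin_51, v, n0=0.1, nu_GHz=1.0, d_Mpc=100.0):
    # peak radio brightness from ejecta--ISM interaction, Eq. (Fnu), mJy
    return (0.6 * E_kin_51 * (n0 / 0.1)**(7.0/8.0) * (v / 0.3)**(11.0/4.0)
            * nu_GHz**(-3.0/4.0) * (d_Mpc / 100.0)**-2)

# r_p/M = 6, a_NS = 0.4 case: E_kin = 2.3e51 erg, <v> = 0.32c -> ~1.6 mJy
print(F_mJy(2.3, 0.32))
```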
Finally, it has been suggested that ejecta from mergers involving NSs
may make a non-negligible contribution to the overall abundance of
r-process elements ~\citep{1974ApJ...192L.145L,Rosswog:1998gc}.
In particular,
dynamical-capture binaries, which can form and merge on shorter
timescales, may be favored over field binaries in explaining
abundances in carbon-enhanced metal-poor
stars~\citep{2014arXiv1410.3467R}. The average galactic production of
these elements is estimated to be $\sim10^{-6}M_{\odot}$
yr$^{-1}$~\citep{2000ApJ...534L..67Q}. Making the limiting assumption
that all r-process material comes from extreme BH--NS merger cases
like $r_p/M=7$, $a_{\rm NS}=0.756$, with $M_{0,\rm
u}\approx0.3M_{\odot}$, caps the rate of these extreme events at $3\times10^{-6}$ yr$^{-1}$
per galaxy (similar to predicted rates for primordial BH--NS
mergers~\citep{2010CQGra..27q3001A}).
\section{Conclusions}
\label{conclusions}
We have demonstrated using GR-HD simulations of dynamical capture
BH--NS mergers that even moderate values of NS spin can significantly
increase the mean velocity and amount of unbound material (to as much as
$0.3M_{\odot}$ for extreme spins). This could lead to
significantly brighter transients, including kilonovae a factor of a
few brighter, and radio wave emission from interaction with the ISM an
order of magnitude or more brighter. For comparison, simulations of
quasicircular BH--NS mergers with nonspinning NSs typically find ejecta
velocities $\sim0.2\mbox{--}0.3c$, comparable to, though somewhat smaller
than, those found here, but only find similar amounts of ejected material for
cases with smaller mass-ratios and/or high BH
spin~\citep{2015arXiv150205402K}. We also find that the NS spin can
alter the amount of bound matter that, following tidal disruption,
remains to form an accretion disk that may power a sGRB. Depending on
the impact parameter and NS spin, these mergers can produce accretion
disks of up to a tenth of a solar mass.
We find that near the critical impact parameter the NS spin influences
the orbital dynamics to a sufficient extent to affect whether a merger
or fly-by occurs, with a corresponding large effect on the GW
emission. At first glance this variability might seem exceedingly
rare, requiring a finely tuned impact parameter. However, this
sensitivity to binary parameters arises primarily because
the pericenter approaches the region of unstable orbits, which
exists for all eccentricities, not merely the initially hyperbolic
case considered here; one can therefore speculate that the last few encounters
of any binary retaining non-negligible orbital eccentricity will be
subject to this sensitivity. Likewise, the variability associated
with EM counterparts could also be present for a larger range of
initial impact parameters. Future simulations of multi-burst events
will be needed to address this speculation. At the other end of the
spectrum, some fraction of dynamical-capture binaries that form at
larger initial separations will circularize prior to merger due to GW
emission; the results found here thus also motivate the study of
quasicircular mergers involving millisecond NSs.
We have shown it is important to include spin to understand the full
range of possible EM and GW outcomes in eccentric mergers.
However, whether it will be possible to perform parameter estimation from a
putative multimessenger event is a different question. Certainly in a single
burst event the
degeneracies will be too strong to, for example,
identify NS spin as the sole reason for an unusually bright
counterpart. Multi-burst events can in principle lift much of the
degeneracy, as information in the timing of the bursts could
significantly narrow the parameters of the progenitor binary. The
range of viable NS EOSs, NS spin directions, and BH spins needs
to be simulated, both to determine how these parameters affect the
observable outcomes, and how they add to or lift degeneracies. GW detection
rates and parameter estimation also need to be investigated within a
realistic data analysis framework including detector noise. All of
these problems we leave for future studies. We also plan to study the
effect of spin in dynamical-capture NS--NS mergers.
\acknowledgments
We are grateful to Stuart Shapiro for access to the equilibrium
rotating NS code. This work was supported by NSF grant PHY-1305682 and
the Simons Foundation. Computational resources were provided by
XSEDE/TACC under grant TG-PHY100053 and the Orbital cluster at
Princeton University.
\bibliographystyle{hapj}
|
quant-ph/9708052
|
\section{Introduction}
In spite of the linearity of the Schr\"odinger and Liouville-von
Neumann equations, nonlinearly evolving states are
encountered in quantum mechanics quite often. Typically this is
a result of approximations used in a description of collective
phenomena (cf. nonlinear interferences in a Bose-Einstein
condensate \cite{BEC1}),
but no proof has been given so far that it is not the
{\it linearity\/} that is a result of some approximation.
This obvious observation motivated many authors to either look
for nonlinear extensions of quantum mechanics, or to try to find
an argument against a nonlinear evolution at a fundamental
level.
A privileged role in both cases was played by a separability
condition. Apparently the first use of the condition can be
found in \cite{BBM}. The authors required that a nonlinearity
must allow two separated, noninteracting and uncorrelated
subsystems to evolve independently of each other, and this (plus
some additional assumptions) led them to the nonlinear term $\ln
\big(|\psi(x)|\big) \psi(x)$. A similar argument (with
different additional assumptions) led Haag and Bannier \cite{HB}
and Weinberg \cite{W} to the 1-homogeneity condition (i.e. $\psi(x)$
and $\lambda\psi(x)$ should satisfy the same equation). An
important element of these works was the assumption that the
systems are uncorrelated, that is, their wave function is a
product one. Equations that have such a locality property may be
called {\it weakly separable\/}~\cite{weak-strong}.
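The separability requirement behind the Bia{\l}ynicki-Birula--Mycielski choice can be illustrated numerically. The sketch below (a toy NumPy discretization, an illustration rather than part of the original argument) checks that the logarithm of the probability density splits additively on product states $\Psi(x,y)=\psi_1(x)\psi_2(y)$, which is exactly the property that lets two uncorrelated subsystems evolve independently under the logarithmic nonlinearity.

```python
import numpy as np

# Toy 1-D illustration: the logarithmic nonlinearity splits additively
# on product states, so each factor evolves under its own potential.
x = np.linspace(-1, 1, 64)
psi1 = np.exp(-x**2) * np.exp(1j * x)        # toy wave function of system I
psi2 = np.exp(-2 * x**2) * np.exp(-1j * x)   # toy wave function of system II

# Product state Psi(x, y) = psi1(x) psi2(y) on the grid
Psi = np.outer(psi1, psi2)

lhs = np.log(np.abs(Psi) ** 2)               # ln |Psi(x, y)|^2
rhs = (np.log(np.abs(psi1) ** 2)[:, None]
       + np.log(np.abs(psi2) ** 2)[None, :])  # ln|psi1|^2 + ln|psi2|^2
print(np.allclose(lhs, rhs))  # True: additive on product states
```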
It was
shown later by Gisin and others \cite{W,G1,G2,MCfpl}
that a weakly separable equation
may still violate causality if an initial state of the composite
system is entangled. An analysis of separability conditions,
slightly different in spirit, was
given in \cite{GS}, where it was argued
that an extension from 1 to $N$ particles may
lead to new effects that become visible for $N>N_0$, where $N_0$
is a parameter characterizing a given hierarchy of theories.
It was originally conjectured that these properties
refute any deterministic (i.e.
non-stochastic) nonlinear generalization of quantum mechanics.
That this is not the case was shown for Weinberg-type quantum
mechanics by Polchinski \cite{P} (for pure states) and Jordan
\cite{J} (for density matrices). These results were subsequently
generalized to a more general class of theories (Lie-Nambu
dynamics) by myself in \cite{MCpla}. Therefore, contrary to a
rather common belief, there exists a
class of nonlinear generalizations of quantum mechanics that
does not lead to the locality problems for general (pure-entangled
and general-mixed) states. Such theories can be called {\it
strongly separable\/}. The strongly separable theories
considered so far involve equations which are Hamiltonian, which
means
there exists a Hamiltonian {\it function\/} that generates the
dynamics. Actually, the solution of the problem proposed by
Polchinski in \cite{P} was based on an appropriate choice of
this function. In addition, the Polchinski approach was
formulated within a finite-dimensional framework.
Still, there exists an interesting class of {\it non-Hamiltonian\/}
and infinite-dimensional nonlinear equations. For example, it is
known that all Doebner-Goldin equations that are Hamiltonian are
linearizable \cite{linearizable} and therefore their strong separability
may not be very interesting. Quite recently the problem of
strong separability of non-Hamiltonian Doebner-Goldin equations
was addressed in \cite{LN}.
Using the typical extension of the dynamics from 1 to $N$
particles the authors showed that at $t=0$ the time derivatives
(up to the 3rd order)
of a 1-particle probability density in position space do not depend on
the potential applied to the other particle if the equation is
Galilean-covariant. This {\it suggests\/} that
Galilean covariance may lead to strong locality in the
nonrelativistic domain. The
result agrees with computer algebraic tests undertaken by Werner
\cite{Werner} who chose oscillator-type potentials and wave
functions satisfying certain Gaussian Ansatz. The approach
chosen in those works is essentially a third order perturbation
theory applied to diagonal elements of reduced density
matrices.
A technical detail that did not allow these authors to find a
general non-perturbative solution was that they investigated a dynamics of
diagonal elements of a
non-pure reduced density matrix (typical of entangled states)
but the formalism they used was devised
for state vectors and, hence, they had no control over the
behavior of non-pure states and off-diagonal elements
of density matrices~\cite{dm}.
In this paper I will give a general and model independent
solution of the problem. I
will generalize to non-Hamiltonian systems the technique that
proved useful in the context of density matrix equations,
namely the triple-bracket formalism \cite{MCpla,diss}. I will
then show how to
extend a 1-particle dynamics to $N$-particle systems in a strongly
separable way for a large class of both Hamiltonian and
non-Hamiltonian Schr\"odinger non-dissipative equations.
As opposed to basically all previous papers dealing with pure
state nonlinear dynamics (the exception is \cite{P}, but it
deals with finite dimensional systems) I will not {\it guess\/}
the ``obvious" form of the extension but will {\it derive\/} it.
This somewhat
long path beginning with pure states, then extension via
mixtures, and again back to pure states, will be rewarded because the
$N$-particle form we will get will not be the one one might expect.
Its particularly interesting feature is the fact that the
equations are {\it integro-differential\/}. This feature is
implicitly present also in the Polchinski-Jordan formalism but
is hidden behind the finite dimensional convention where the
integrals do not explicitly show up.
The approach is applicable to {\it all\/} non-dissipative
Doebner-Goldin equations \cite{DGpra,DG},
the equations discussed in \cite{BBM,HB,W,Twarock},
as well as a large class of nonlinear
Schr\"odinger equations that were not discussed so far.
I will simultaneously prove a much stronger result.
By the very construction the formalism is applicable to those
Liouville-von Neumann nonlinear equations that reduce to the
corresponding nonlinear Schr\"odinger dynamics on pure states. All these
equations will be shown to be not only {\it strongly separable\/} but
also {\it completely separable\/}. By the latter I mean a
strongly separable dynamics which additionally satisfies the
self-consistency condition
\begin{eqnarray}
{\rm Tr\,}_2\circ \phi^t_{1+2}=\phi^t_{1}\circ {\rm Tr\,}_2,\label{self}
\end{eqnarray}
where $\phi^t_{1+2}$ and $\phi^t_{1}$ denote the dynamics of the
composite system and the subsystem, respectively, and the
partial trace ${\rm Tr\,}_2$ is
a map that reduces the dynamics from the large system to the
subsystem. Condition (\ref{self}), which is independent of strong
separability, was recently pointed out as an
important ingredient of nonlinear dynamics that may be regarded as
completely positive \cite{MCprl}. We will also see that not only
the probability densities in position space but also the
off-diagonal elements of reduced density matrices are
independent of the details of interactions in the remote systems.
\section{Example: Haag-Bannier equation and its
almost-Lie-Poisson form}
The general convention I will use was elaborated in detail in
\cite{MCpla,MCMK} but to make this work self-contained let us
first explain the general scheme on a concrete example, the
Haag-Bannier equation \cite{HB}. This equation is of the
Doebner-Goldin type, that is, it contains a 1-homogeneous nonlinear
term with derivatives, but is simpler:
\begin{eqnarray}
i\hbar\partial_t\psi(a) &=&
\Bigl(
-\frac{\hbar^2}{2m}\Delta_a + V(a)
\Bigr)\psi(a)
+
\vec A(a)
\frac{\bar \psi(a)\vec \nabla_a \psi(a) -\psi(a)\vec \nabla_a
\bar \psi(a)}{2i|\psi(a)|^2}
\psi(a).
\end{eqnarray}
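The nonlinear factor in the Haag-Bannier term is the local velocity field $v(a)=(\bar\psi\vec\nabla\psi-\psi\vec\nabla\bar\psi)/(2i|\psi|^2)$, which is invariant under $\psi\to\lambda\psi$; multiplied by $\psi(a)$ it therefore yields a 1-homogeneous term, as stated above. A minimal finite-difference sketch (the 1-D grid and toy wave function are assumptions for illustration only):

```python
import numpy as np

# Velocity field v = (psibar psi' - psi psibar') / (2i |psi|^2) of the
# Haag-Bannier term. It is 0-homogeneous in psi, so v(x)*psi(x) is
# 1-homogeneous: psi and lam*psi satisfy the same equation.
x = np.linspace(-3, 3, 401)
dx = x[1] - x[0]
psi = np.exp(-x**2 + 0.7j * x)   # toy 1-D wave function (an assumption)

def velocity(p):
    dp = np.gradient(p, dx)      # finite-difference derivative
    return (np.conj(p) * dp - p * np.conj(dp)) / (2j * np.abs(p) ** 2)

lam = 2.3 - 1.1j                 # arbitrary nonzero complex rescaling
print(np.allclose(velocity(psi), velocity(lam * psi)))  # True
```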
Its Liouville-von Neumann counterpart is
\begin{eqnarray}
i\hbar\partial_t \rho(a,a')&=&
\Bigl(
-\frac{\hbar^2}{2m}\Delta_{a} + V(a)
\Bigr)\rho(a,a')
+
\vec A(a)
\frac{\int dy\,\delta(a-y)\vec \nabla_{a}[ \rho(a,y)
- \rho(y,a)]}
{2i\rho(a,a)}
\rho(a,a')\nonumber\\
&\phantom{=}&-
\Bigl(
-\frac{\hbar^2}{2m}\Delta_{a'} + V({a'})
\Bigr)\rho(a,a')
-
\vec A(a')
\frac{\int dy\,\delta(a'-y)\vec \nabla_{a'}[ \rho(a',y)
- \rho(y,a')]}
{2i\rho(a',a')}
\rho(a,a'),\label{LvN}
\end{eqnarray}
where $\rho(a,a')=\overline{\rho(a',a)}$, and $f(a)=\rho(a,a)\geq
0$ is
$d^3a$-integrable~\cite{what} together with all its natural
powers $f(a)^n$. For
$\rho(a,a')=\psi(a)\overline{\psi(a')}$ (\ref{LvN}) reduces to
the Schr\"odinger-Haag-Bannier dynamics.
Using the composite index convention described in
\cite{MCpla} we can write the equation in a form which is
compact and simplifies general calculations.
Denote $\rho_a=\rho(a,a')$ and
\begin{eqnarray}
H^a(\rho)&=&
H(a',a)=
K(a',a) + V(a)\delta(a-a')
+
\vec A(a)
\frac{\int dy\,\delta(a-y)\vec \nabla_{a}[ \rho(a,y)
- \rho(y,a)]}
{2i\rho(a,a)} \delta(a-a').
\end{eqnarray}
The kinetic kernel satisfies
\begin{eqnarray}
\int dy\,K(a,y)\rho(y,a')=-\frac{\hbar^2}{2m}\Delta_{a}\rho(a,a'),
\end{eqnarray}
and $K(a,b)=\overline{K(b,a)}$.
All the integrals are in $\bbox R^3$, i.e. $da=d^3a$, etc., and
the ``summation convention" is applied at the composite index
level (two repeated indices are integrated).
There is no conflict of notation here because the
composite indices are always in their upper or lower positions
whereas the 3-dimensional coordinates $a$ are arguments of
functions or distributions. (Notice that the composite indices
correspond always to pairs of primed and unprimed 3-dimensional
coordinates.)
The indices are raised by the metric $g^{ab}$
working as follows
\begin{eqnarray}
g^{ab}\rho_b
&=&
\int db db'\delta(a-b')\delta(b-a')\rho(b,b')=\rho(a',a)=\rho^a.
\nonumber
\end{eqnarray}
So if $\rho_a=\rho(a,a')$ then $\rho^a=\rho(a',a)$.
To lower an index one uses obvious inverse formulas \cite{MCpla}.
The Liouville-von Neumann equation can be written in a
triple-bracket-type form \cite{MCpla,Mor,MCijtp}
\begin{eqnarray}
i\hbar\partial_t\rho_f &=&
\int db db' dc dc'
\Big(
\underbrace{
\delta(f-b')\delta(b-c')\delta(c-f')
-
\delta(f-c')\delta(b-f')\delta(c-b')
}_{\Omega_{fbc}}\Big)
\underbrace{H(b',b)}_{H^b} \underbrace{\rho(c',c)}_{\rho^c}\\
&=&
\Omega_{fbc}H^b\rho^c=
\Omega_{abc}
\frac{\delta \rho_f}{\delta \rho_a}H^b\rho^c.\label{triple}
\end{eqnarray}
Let us note that if
\begin{eqnarray}
H^b=\frac{\delta H}{\delta \rho_b}\label{H}
\end{eqnarray}
then (\ref{triple}) describes a Lie-Poisson dynamics of a density
matrix \cite{Bona}. If no Hamiltonian function $H$ satisfying
(\ref{H}) exists, the dynamics will be called
almost-Lie-Poisson. The generic case discussed in this Letter is
almost-Lie-Poisson. The Weinberg-type and mean-field dynamics
discussed in \cite{J,Bona} are Lie-Poisson.
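For a linear Hamiltonian kernel the triple-bracket contraction $\Omega_{fbc}H^b\rho^c$ is just the commutator $[H,\rho]$: the first product of deltas in $\Omega_{fbc}$ produces $(H\rho)(f,f')$ and the second $(\rho H)(f,f')$. A discretized check, with the composite-index integrals replaced by sums over a finite grid (a toy verification, not part of the formalism):

```python
import numpy as np

# Discretized check that Omega_{fbc} H^b rho^c = [H, rho]:
# Omega[f,F,b,B,c,C] = d(f,B) d(b,C) d(c,F) - d(f,C) d(b,F) d(c,B),
# with uppercase letters standing for primed coordinates.
n = 6
rng = np.random.default_rng(4)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = A + A.conj().T                       # Hermitian kernel H(a, a')
R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = R + R.conj().T                     # Hermitian rho(a, a')

I = np.eye(n)
Omega = (np.einsum('fB,bC,cF->fFbBcC', I, I, I)
         - np.einsum('fC,bF,cB->fFbBcC', I, I, I))

# H^b = H(b', b) and rho^c = rho(c', c): contract over (b, b'), (c, c')
lhs = np.einsum('fFbBcC,Bb,Cc->fF', Omega, H, rho)
print(np.allclose(lhs, H @ rho - rho @ H))  # True
```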
The discussion presented here applies to a general
almost-Lie-Poisson dynamics where the structure constants have
the following general form \cite{MCpla}
\begin{eqnarray}
\Omega{_{abc}}&=&
I_{\alpha\beta'}
I_{\beta\gamma'}
I_{\gamma\alpha'}
-
I_{\alpha\gamma'}
I_{\beta\alpha'}
I_{\gamma\beta'}\label{O_}\\
\Omega{^{abc}}&=&
-\omega^{\alpha\beta'}
\omega^{\beta\gamma'}
\omega^{\gamma\alpha'}
+
\omega^{\alpha\gamma'}
\omega^{\beta\alpha'}
\omega^{\gamma\beta'}\label{O^}
\end{eqnarray}
where $\omega^{\alpha\alpha'}=\omega^a$ and $I_{\alpha\alpha'}=I_a$ are,
respectively, the symplectic form and the Poisson tensor
corresponding to the pure state equation. Here
$\omega^a=\delta(a-a')$, $I_a=\delta(a-a')$ but (\ref{O_}) and
(\ref{O^}) are valid also for other Hilbert spaces and equations
(cf. \cite{MCpla,MCMK}).
\section{2-particle extension}
Consider now a 2-particle system described by the density matrix
which, depending on whether the state is pure or general, is in
either of the two forms
\begin{eqnarray}
\rho_a=\rho_{a_1a_2}&=&\rho(a_1,a_2,a_1',a_2')
\label{12a}\\
&{\stackrel{\rm pure}{=}}&
\Psi(a_1,a_2)\overline{\Psi(a_1',a_2')}.
\label{12b}
\end{eqnarray}
The reduced density matrices are
\begin{eqnarray}
\rho^{I}_{a_1}&=&\int dy\,\rho(a_1,y,a_1',y)=
\omega^{a_2}\rho_{a_1a_2}
\\
\rho^{II}_{a_2}&=&\int dy\,\rho(y,a_2,y,a_2')=
\omega^{a_1}\rho_{a_1a_2}.
\end{eqnarray}
The 2-particle Liouville-von Neumann-Haag-Bannier equation is
given by (\ref{triple}) but with all indices doubled like in
(\ref{12a}), (\ref{12b})
with the 2-particle structure constants given explicitly by
\begin{eqnarray}
\Omega_{abc}^{(2)}
&=&
\delta(a_1-b_1')\delta(b_1-c_1')\delta(c_1-a_1')\nonumber\\
&\phantom{=}&\times
\delta(a_2-b_2')\delta(b_2-c_2')\delta(c_2-a_2')\nonumber\\
&-&
\delta(a_1-c_1')\delta(b_1-a_1')\delta(c_1-b_1')\nonumber\\
&\phantom{=}&\times
\delta(a_2-c_2')\delta(b_2-a_2')\delta(c_2-b_2').
\end{eqnarray}
The crucial element of the whole construction is the
nonlinear extension of the 1-particle Hamiltonian operator
kernel. We {\it define\/} it as
\begin{eqnarray}
H^b(\rho)
&=&
H_{I}^{d_1}(\rho^{I})\frac{\delta\rho^{I}_{d_1}}
{\delta\rho_{b}}
+
H_{II}^{d_2}(\rho^{II})\frac{\delta\rho^{II}_{d_2}}
{\delta\rho_{b}}\label{H1+H2}\\
&=&
H_{I}^{b_1}(\rho^{I})\omega^{b_2}
+
\omega^{b_1}
H_{II}^{b_2}(\rho^{II}),
\end{eqnarray}
with $b=b_1b_2$.
(\ref{H1+H2}) is a nonlinear functional generalization of the
well known linear recipe $H_{1+2}=H_1\otimes \bbox 1 +
\bbox 1\otimes H_2$ and reduces to the formulas arising from the
triple-bracket formalism if Hamiltonian functions exist.
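The linear recipe that (\ref{H1+H2}) generalizes can be checked in finite dimensions: for a product state the expectation of $H_1\otimes\bbox 1+\bbox 1\otimes H_2$ is the sum of the subsystem energies. A small NumPy sketch (random Hermitian matrices stand in for the Hamiltonian kernels):

```python
import numpy as np

# Kronecker-sum form of the non-interacting composite Hamiltonian:
# H12 = H1 (x) 1 + 1 (x) H2, evaluated on a product state.
n = 4
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H1, H2 = A + A.conj().T, B + B.conj().T  # Hermitian subsystem Hamiltonians

H12 = np.kron(H1, np.eye(n)) + np.kron(np.eye(n), H2)

psi1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi1 /= np.linalg.norm(psi1)
psi2 /= np.linalg.norm(psi2)
Psi = np.kron(psi1, psi2)                # normalized product state

e12 = np.vdot(Psi, H12 @ Psi).real       # composite energy
e1 = np.vdot(psi1, H1 @ psi1).real
e2 = np.vdot(psi2, H2 @ psi2).real
print(np.isclose(e12, e1 + e2))  # True: energies add on product states
```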
The whole 2-particle equation can be written as
follows
\begin{eqnarray}
i\hbar\partial_t\rho_f
&=&
\Omega_{abc}^{(2)}
\frac{\delta \rho_f}{\delta \rho_a}
\Bigg(
H_{I}^{d_1}(\rho^{I})\frac{\delta\rho^{I}_{d_1}}
{\delta\rho_{b}}
+
H_{II}^{d_2}(\rho^{II})\frac{\delta\rho^{II}_{d_2}}
{\delta\rho_{b}}\Bigg)\rho^c\label{eq}
\end{eqnarray}
where $\rho_f=\rho_{f_1f_2}$ is the 2-particle density matrix.
Now comes the important general result. Let us perform a partial
trace, i.e. contract both sides of (\ref{eq}) with
$\omega^{f_2}$. Since $\rho^{I}_{f_1}=\omega^{f_2}\rho_{f_1f_2}=
I_{f_2}\rho{_{f_1}}^{f_2}$
and $\omega^{f_2}=\delta(f_2-f_2')$ is $t$- and
$\rho$-independent
\begin{eqnarray}
&{}&i\hbar\partial_t\rho^{I}_{f_1}\nonumber\\
&{}&\phantom{=} =
\Omega_{abc}^{(2)}
\frac{\delta \rho^{I}_{f_1}}{\delta \rho_a}
\Bigg(
H_{I}^{d_1}(\rho^{I})\frac{\delta\rho^{I}_{d_1}}
{\delta\rho_{b}}
+
H_{II}^{d_2}(\rho^{II})\frac{\delta\rho^{II}_{d_2}}
{\delta\rho_{b}}\Bigg)\rho^c\label{1}\\
&{}&\phantom{=} =
\Omega_{abc}^{(2)}
\frac{\delta \rho^{I}_{f_1}}{\delta \rho_a}
H_{I}^{d_1}(\rho^{I})\frac{\delta\rho^{I}_{d_1}}
{\delta\rho_{b}}\rho^c\label{2}\\
&{}&\phantom{=} =
\Omega_{f_1d_1c_1}^{(1)}
H_{I}^{d_1}(\rho^{I})
\rho^{I\,c_1},\label{4}
\end{eqnarray}
where the transition from (\ref{1}) to (\ref{2}) is a
consequence of
\begin{eqnarray}
\Omega{^{(N)}}{_{abc}}
\frac{\delta \rho{^I}_{d}}{\delta \rho_{a}}
\frac{\delta \rho{^{II}}_{e}}{\delta \rho_{b}}=0\label{lem1}
\end{eqnarray}
holding for all $N$-particle structure constants and reduced
density matrices of non-overlapping subsystems (for the proof of
(\ref{lem1}) see
Lemma~1 in \cite{MCpla}).
Let me summarize what has happened so far: Using a correct
extension of a 1-particle nonlinear Hamiltonian operator and the
general property of triple brackets we have reduced a 2-particle
equation for a 2-particle density matrix to a 1-particle
equation which involves {\it only\/} the quantities which are
intrinsic to this subsystem. All elements depending on the other
subsystem have simply vanished. We have obtained this by
performing only one operation --- the partial trace over the
``external" subsystem.
Therefore the reduced density matrix in $I$ does not depend on
details of interaction in the separated system $II$ and the
dynamics is strongly separable. Actually,
we have simultaneously obtained more. Indeed, we do not assume
that we take the partial trace at ``$t=0$". Therefore we can
trace out the external system at any time and the reduced
dynamics is indistinguishable from a dynamics defined entirely in terms
of the subsystem and starting from the initial condition
$\rho_1(0)=\rho^I(0)={\rm Tr\,}_2\rho(0)$ where $\rho(0)$ is the
initial condition for the large system. It proves that the
dynamics so constructed is {\it completely separable\/}.
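The self-consistency condition (\ref{self}) can be made concrete in the linear, non-interacting special case, where $\phi^t_{1+2}(\rho)=(U_1\otimes U_2)\rho(U_1\otimes U_2)^\dagger$. Tracing out system 2 then commutes with the evolution for an arbitrary (entangled, mixed) initial state, and the result is independent of $U_2$. A finite-dimensional sketch (random unitaries and a random mixed state, for illustration only):

```python
import numpy as np

# Check Tr_2 o phi_{1+2} = phi_1 o Tr_2 for product unitary dynamics.
n = 3
rng = np.random.default_rng(3)

def random_unitary(n):
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(z)               # QR gives a Haar-ish unitary
    return q

U1, U2 = random_unitary(n), random_unitary(n)
M = (rng.standard_normal((n * n, n * n))
     + 1j * rng.standard_normal((n * n, n * n)))
rho = M @ M.conj().T
rho /= np.trace(rho)                     # generic (entangled) mixed state

def ptrace2(r):                          # Tr_2 on an (n*n) x (n*n) matrix
    return np.einsum('ayby->ab', r.reshape(n, n, n, n))

U = np.kron(U1, U2)
lhs = ptrace2(U @ rho @ U.conj().T)      # trace out after evolving the whole
rhs = U1 @ ptrace2(rho) @ U1.conj().T    # evolve the reduction alone
print(np.allclose(lhs, rhs))  # True: independent of U2
```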
The result we have obtained is completely general and works for
all Schr\"odinger equations whose nonlinear Hamiltonian operator
kernels can be written in terms of 1-particle density matrices.
Putting it differently, the construction works correctly
if a
given Schr\"odinger equation allows for an extension to an
almost-Lie-Poisson Liouville-von Neumann equation.
The example of the Haag-Bannier equation served only as a means of
focusing our attention and making the discussion less abstract.
So how does the 2-particle equation look explicitly? Beginning
again with the general formula, we immediately obtain its
model-independent form just by performing abstract
operations on the composite indices. We get
\begin{eqnarray}
i\hbar\partial_t\rho_{a}
&=&
\Omega_{abc}^{(2)}
\Bigg(
H_{I}^{b_1}(\rho^{I})\omega^{b_2}
+
\omega^{b_1}H_{II}^{b_2}(\rho^{II})\Bigg)\rho^c\nonumber\\
&=&
\Omega_{a_1b_1c_1}^{(1)}
H_{I}^{b_1}(\rho^{I})\rho^{c_1}{_{a_2}}
+
\Omega_{a_2b_2c_2}^{(1)}
H_{II}^{b_2}(\rho^{II})\rho{_{a_1}}^{c_2}.\nonumber
\end{eqnarray}
Returning for the sake of completeness to the Haag-Bannier case
we can write it as
\begin{eqnarray}
{}&{}&i\hbar\partial_t \rho(a_1,a_2,a_1',a_2')\nonumber\\
&{}&=
\Bigg[
\Bigl(
-\frac{\hbar^2}{2m}\Big(\Delta_{a_1}
+ \Delta_{a_2} -\Delta_{a_1'}- \Delta_{a_2'}\Big)
+ V_1(a_1) +V_2(a_2) - V_1(a_1') - V_2(a_2')
\nonumber\\
&{}&
+\vec A_1(a_1)
\frac{\int dy_1\,\delta(a_1-y_1)\vec \nabla_{a_1}
[\int dz\, \rho(a_1,z,y_1,z)
- \int dz\,\rho(y_1,z,a_1,z)]}
{2i\int dz\,\rho(a_1,z,a_1,z)}
\nonumber\\
&{}&
+
\vec A_2(a_2)
\frac{\int dy_2\,\delta(a_2-y_2)\vec \nabla_{a_2}
[\int dz\, \rho(z,a_2,z,y_2)
- \int dz\,\rho(z,y_2,z,a_2)]}
{2i\int dz\,\rho(z,a_2,z,a_2)}
\nonumber\\
&{}&-
\vec A_1(a_1')
\frac{\int dy_1\,\delta(a_1'-y_1)\vec \nabla_{a_1'}
[\int dz\, \rho(a_1',z,y_1,z)
- \int dz\,\rho(y_1,z,a_1',z)]}
{2i\int dz\,\rho(a_1',z,a_1',z)}
\nonumber\\
&{}&-
\vec A_2(a_2')
\frac{\int dy_2\,\delta(a_2'-y_2)\vec \nabla_{a_2'}
[\int dz\, \rho(z,a_2',z,y_2)
- \int dz\,\rho(z,y_2,z,a_2')]}
{2i\int dz\,\rho(z,a_2',z,a_2')}
\Bigg]
\rho(a_1,a_2,a_1',a_2').
\end{eqnarray}
Its {\it pure state\/} 2-particle counterpart is
\begin{eqnarray}
i\hbar\partial_t \Psi(a_1,a_2)
&=&
\Bigg[
-\frac{\hbar^2}{2m}\big(\Delta_{a_1}+\Delta_{a_2}\big)
+ V_1(a_1) + V_2(a_2)\nonumber\\
&\phantom{=}&
+
\vec A_1(a_1)
\frac{\int dz\big[\overline{\Psi(a_1,z)} \vec \nabla_{a_1}\Psi(a_1,z)
- \Psi(a_1,z)\vec \nabla_{a_1}\overline{\Psi(a_1,z)}\big]}
{2i\int dz|\Psi(a_1,z)|^2}
\nonumber\\
&\phantom{=}&+
\vec A_2(a_2)
\frac{
\int dz\, \big[\overline{\Psi(z,a_2)}\nabla_{a_2}\Psi(z,a_2)
- \Psi(z,a_2)\nabla_{a_2}\overline{\Psi(z,a_2)}\big]}
{2i\int dz |\Psi(z,a_2)|^2}
\Bigg]
\Psi(a_1,a_2).
\end{eqnarray}
This equation has several interesting features. The Hamiltonian
operator obviously
reduces to the sum of ordinary 1-particle terms, involving no
integrals, if $\Psi$ is a
product state. What makes it unusual is the presence of
the integrals. Typically it is said that such equations
should not be taken into account because they are {\it
nonlocal\/}. They indeed appear nonlocal but a closer look shows
that it is in fact just the opposite: The currents and probability
densities are the local 1-particle ones. Therefore it is the lack of
{\it appropriate\/} integrals that makes typical 2-particle
equations nonlocal.
The equation considered in \cite{LN} involved no such integrals
and this led to difficulties.
An extension of the above results from 2 to $N$ particles is
immediate, so explicit formulas will not be given.
\section{Further examples}
Let me now list some of the nonlinear Hamiltonian operators
that have been considered in the
literature and which admit the completely separable extension to
an arbitrary number of particles in arbitrary entangled and mixed states.
The general rule is that an extension to $N$ particles will be
given in one of these forms, but with $\rho(x,x')$ replaced by a reduced
1-particle density matrix. If the $N$-particle state is
pure then the reduced density matrix is a functional of the pure
state which involves $N-1$ integrations. This matrix is
subsequently put into a suitable place in the Schr\"odinger
equation. Similarly it can be put into a Liouville-von Neumann
equation if the state is more general.
\medskip
\noindent
a) {\it ``Nonlinear Schr\"odinger"\/}
\begin{eqnarray}
|\psi(x)|^2\to \rho(x,x)\nonumber
\end{eqnarray}
b) {\it Bia{\l}ynicki-Birula--Mycielski\/}
\begin{eqnarray}
\ln\big(|\psi(x)|^2\big)\to \ln\rho(x,x)\nonumber
\end{eqnarray}
Obviously in the same way one can treat any equation with
nonlinearities given by some function $F(|\psi(x)|)$~\cite{problem}.
\noindent
c) {\it Doebner--Goldin\/}
\begin{eqnarray}
R_1:&{}&
\frac{1}{2i}
\frac{
\bar\psi(x)\Delta_x \psi(x)
-
\psi(x)\Delta_x \bar\psi(x)}
{|\psi(x)|^2}\nonumber\\
&{}&\to
\frac{1}{2i}
\frac{\int dz\delta(x-z)
\Delta_x \big[\rho(x,z)
-\rho(z,x)\big]}
{\rho(x,x)},\nonumber\\
R_2:&{}&
\frac{\Delta_x|\psi(x)|^2}{|\psi(x)|^2}
\to
\frac{\Delta_x\rho(x,x)}{\rho(x,x)}\nonumber\\
R_3:&{}&
\frac{1}{(2i)^2}
\frac{
\bigl[
\bar \psi(x)\vec \nabla_x \psi(x)
-
\psi(x)\vec \nabla_x \bar \psi(x) \bigr]^2}
{|\psi(x)|^4}\nonumber\\
&{}&
\to
\frac{1}{(2i)^2}
\frac{\big(
\int dz\delta(x-z)
\vec \nabla_x \big[\rho(x,z)
-\rho(z,x)\big]\big)^2}
{\rho(x,x)^2}\nonumber\\
R_4:&{}&
\frac{1}{2i}
\frac{
\bigl[
\bar \psi(x)\vec \nabla_x \psi(x)
-
\psi(x)\vec \nabla_x\bar \psi(x)
\bigr]\cdot\vec \nabla_x |\psi(x)|^2}
{|\psi(x)|^4}\nonumber\\
&{}&\to
\frac{1}{2i}
\frac{\int dz\delta(x-z)
\vec \nabla_x \big[\rho(x,z)
-\rho(z,x)\big]\cdot \vec \nabla_x\rho(x,x)}
{\rho(x,x)^2},\nonumber\\
R_5:&{}&
\frac{
\bigl[
\vec \nabla_x |\psi(x)|^2\bigr]^2}
{|\psi(x)|^4}
\to
\frac{
\bigl[
\vec \nabla_x \rho(x,x)\bigr]^2}
{\rho(x,x)^2}\nonumber
\end{eqnarray}
d) {\it Twarock\/} on $S^1$\cite{Twarock}
\begin{eqnarray}
{}&{}&
\frac{\psi(x)'' \overline{\psi(x)'}
-
\overline{\psi(x)''}\psi(x)'}
{\psi(x) \overline{\psi(x)'}
-
\overline{\psi(x)} \psi(x)'}\nonumber\\
&{}&\to
\frac{
[\int dy\delta(x-y)\partial_x^2\rho(x,y)]
[\int dz\delta(x-z)\partial_x^2\rho(z,x)] - c.c.}
{\rho(x,x)\int dy\delta(x-y)\partial_x\rho(x,y) - c.c.}\nonumber
\end{eqnarray}
e) $(n,n)$-{\it homogeneous nonlinearities\/}. Denote by
$D$ a differential operator involving
arbitrary mixed partial derivatives up to order $k$.
Consider a real function
$F(\psi)=F\big(D\psi(x)\big)$,
$(n,n)$-homogeneous, i.e. satisfying $F(\lambda\psi)=\lambda^n
\bar \lambda^n F(\psi)$. We first write
\begin{eqnarray}
F\big(D\psi(x)\big)=
\frac{F\big(\overline{\psi(x)}D\psi(x)\big)}
{|\psi(x)|^{2n}}\nonumber
\end{eqnarray}
and then apply the tricks used for the Haag--Bannier,
Doebner--Goldin and Twarock
terms. Obviously any reasonable function of such
$(n,n)$-homogeneous expressions with different $n$'s is
acceptable as well.
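The $(n,n)$-homogeneity property $F(\lambda\psi)=\lambda^n\bar\lambda^n F(\psi)=|\lambda|^{2n}F(\psi)$ is easy to verify numerically for a concrete functional built from derivatives. The sketch below uses a toy $(2,2)$-homogeneous example of my own choosing, $F(\psi)=|\psi|^2\,{\rm Re}\big(\bar\psi\,\psi''\big)$ (an illustrative assumption, not one of the equations discussed above):

```python
import numpy as np

# Check F(lam*psi) = |lam|^(2n) F(psi) for a toy (2,2)-homogeneous
# functional involving second derivatives (finite differences).
x = np.linspace(-3, 3, 401)
dx = x[1] - x[0]
psi = np.exp(-x**2) * np.cos(2 * x)      # toy wave function

def F(p):
    d2 = np.gradient(np.gradient(p, dx), dx)   # second derivative
    return np.abs(p) ** 2 * (np.conj(p) * d2).real

lam = 1.5 - 0.4j
n_hom = 2                                # F is (2, 2)-homogeneous
print(np.allclose(F(lam * psi), np.abs(lam) ** (2 * n_hom) * F(psi)))  # True
```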
\section{Extension in stages}
The formalism proposed above allows one to extend a 1-particle
dynamics to $N$ particles. We will now show that it allows for a
more general kind of extension: From 1 {\it system\/} to $N$ systems.
The procedure is self-consistent in the sense that one can first
produce several composite systems from the 1-particle ones, and then
combine them into a single overall composite system.
Alternatively, one can produce the final system without the
intermediate stages, directly by
extension from single particles.
Consider the $N$-particle extension
\begin{eqnarray}
H_{A}^b(\rho)
&=&
H_{A}^{b_1\dots b_N}(\rho)\nonumber\\
&=&
H_{(1)}^{d_1}(\rho^{(1)})\frac{\delta\rho^{(1)}_{d_1}}
{\delta\rho_{b_1\dots b_N}}
+
H_{(2)}^{d_2}(\rho^{(2)})\frac{\delta\rho^{(2)}_{d_2}}
{\delta\rho_{b_1\dots b_N}}
+\dots
+
H_{(N)}^{d_N}(\rho^{(N)})\frac{\delta\rho^{(N)}_{d_N}}
{\delta\rho_{b_1\dots b_N}}\\
&=&
H_{(1)}^{b_1}(\rho^{(1)})
\omega^{b_2}\dots \omega^{b_N}
+
\omega^{b_1}H_{(2)}^{b_2}(\rho^{(2)})
\omega^{b_3}\dots \omega^{b_N}
+\dots
+
\omega^{b_1}\dots \omega^{b_{N-1}}
H_{(N)}^{b_N}(\rho^{(N)}).
\end{eqnarray}
The Hamiltonian operator (kernel) $H_{A}^b(\rho)$ describes a
composite system, labelled $A$, which consists of $N$ particles
that do not interact with one another. Consider now another
system, labelled $B$, consisting of $M$ particles. Its
Hamiltonian operator is
\begin{eqnarray}
H_{B}^b(\rho)
&=&
H_{B}^{b_{N+1}\dots b_{N+M}}(\rho)\nonumber\\
&=&
H_{(N+1)}^{d_{N+1}}(\rho^{(N+1)})\frac{\delta\rho^{(N+1)}_{d_{N+1}}}
{\delta\rho_{b_{N+1}\dots b_{N+M}}}
+
H_{(N+2)}^{d_{N+2}}(\rho^{(N+2)})\frac{\delta\rho^{(N+2)}_{d_{N+2}}}
{\delta\rho_{b_{N+1}\dots b_{N+M}}}
+\dots
+
H_{(N+M)}^{d_{N+M}}(\rho^{(N+M)})\frac{\delta\rho^{(N+M)}_{d_{N+M}}}
{\delta\rho_{b_{N+1}\dots b_{N+M}}}\\
&=&
H_{(N+1)}^{b_{N+1}}(\rho^{(N+1)})
\omega^{b_{N+2}}\dots \omega^{b_{N+M}}
+
\omega^{b_{N+1}}
H_{(N+2)}^{b_{N+2}}(\rho^{(N+2)})
\omega^{b_{N+3}}\dots \omega^{b_{N+M}}
\nonumber\\
&\phantom =&
+\dots
+
\omega^{b_{N+1}}\dots \omega^{b_{N+M-1}}
H_{(N+M)}^{b_{N+M}}(\rho^{(N+M)})
\end{eqnarray}
The $(N+M)$-particle extension can be obtained in two stages
\begin{eqnarray}
H_{A+B}^b(\rho)
&=&
H_{A+B}^{b_{1}\dots b_{N+M}}(\rho)\nonumber\\
&=&
H_{A}^{d_{1}\dots d_{N}}
(\rho^A)
\frac{\delta\rho^{A}_{d_{1}\dots d_{N}}}
{\delta\rho_{b_{1}\dots b_{N+M}}}
+
H_{B}^{d_{N+1}\dots d_{N+M}}(\rho^{B})
\frac{\delta\rho^{B}_{d_{N+1}\dots d_{N+M}}}
{\delta\rho_{b_{1}\dots b_{N+M}}}\nonumber\\
&=&
H_{A}^{b_{1}\dots b_{N}}
(\rho^A)
\omega^{b_{N+1}}\dots \omega^{b_{N+M}}
+
\omega^{b_{1}}\dots \omega^{b_{N}}
H_{B}^{b_{N+1}\dots b_{N+M}}(\rho^{B})\nonumber\\
&=&
\Big(
H_{(1)}^{b_1}(\rho^{(1)})
\omega^{b_2}\dots \omega^{b_N}
+\dots
+
\omega^{b_1}\dots \omega^{b_{N-1}}
H_{(N)}^{b_N}(\rho^{(N)})
\Big)
\omega^{b_{N+1}}\dots \omega^{b_{N+M}}\nonumber\\
&\phantom =&
+
\omega^{b_{1}}\dots \omega^{b_{N}}
\Big(
H_{(N+1)}^{b_{N+1}}(\rho^{(N+1)})
\omega^{b_{N+2}}\dots \omega^{b_{N+M}}
+\dots
+
\omega^{b_{N+1}}\dots \omega^{b_{N+M-1}}
H_{(N+M)}^{b_{N+M}}(\rho^{(N+M)})
\Big)
\end{eqnarray}
which could also be derived directly from the $(N+M)$-particle
extension of the 1-particle dynamics.
\section{Is nonlinear quantum mechanics nonlocal?}
N.~Gisin, in his now-classic paper \cite{G1}, argued
that {\it any\/} nonlinear and non-stochastic dynamics
necessarily leads to faster-than-light communication via
EPR-type correlations.
The argument is based (implicitly) on the additional assumption
that a reduced density matrix (of, say, Alice) evolves by means
of an independent
dynamics of its pure-state components.
(Let us note that this was essentially the main question discussed by
Haag and Bannier \cite{HB} in the context of the nonlinear, convex scheme
proposed by Mielnik \cite{Mielnik}.)
This viewpoint is
suggested by the fact that what one starts with is a nonlinear
(Schr\"odinger-type) dynamics of pure states. But in the EPR
case the pure state is a 2-particle one and one has to start
with a 2-particle nonlinear Schr\"odinger equation. Assume this
equation is given (e.g. our Haag-Bannier equation) and its
2-particle pure-state solution $|\psi\rangle$ has been found. To
have the Gisin effect one has to find at least one observable of
the form
\begin{eqnarray}
\langle\hat A\rangle=
\langle\psi|\hat A\otimes 1|\psi\rangle={\rm
Tr}_A\rho_A\hat A\label{alice}
\end{eqnarray}
describing the
Alice subsystem, which depends on a parameter controlled by Bob.
Here
$\rho_A={\rm Tr}_B|\psi\rangle\langle\psi|$ is the reduced
density matrix of the Alice subsystem.
Assume that the only actions Bob can undertake are reducible to the
modifications of the parameters of the Hamiltonian corresponding
to ``his" particle. (In the case contemplated by Gisin Bob would
rotate his Stern-Gerlach device; this is equivalent to modifying
the magnetic term in the corresponding interaction Hamiltonian.)
(\ref{alice}) shows that Bob can influence $\langle\hat
A\rangle$ if and only if the reduced density matrix $\rho_A$
depends on the parameter he controls. This is true for the
Weinberg theory \cite{MCfpl} but will {\it never\/} happen for
{\it any\/} equation we have discussed. It follows that the
Gisin effect applies to a limited class of theories.
The additional assumption that leads to the
Gisin phenomenon is the following: Each time Bob makes a measurement, he
chooses an initial condition for the reduced density matrix of
Alice. This explicitly involves the {\it ordinary\/}
reduction of the wave packet postulate. One can invent other
versions of the postulate, for example, a composition of a
nonlinear map $N$ with a projector $P$: $N^{-1}\circ P\circ N$,
as proposed by L\"ucke in his analysis of nonlinear gauge
transformations \cite{Lucke}.
Alternatively, as in the Weinberg theory, one can get
the Gisin effect {\it without\/} the explicit use of the
projection postulate: The nonlinear generator of
evolution must be basis dependent. If this is the case, the
2-particle solution is parametrized by the choice of the basis
in the Hilbert space. Assuming that Bob can change this basis we
allow him to change globally the form of the solution.
But the rotation of the Stern-Gerlach device (modification of
the interaction term) is not yet a change
of basis in this sense. In the Weinberg theory the generator was
interpreted as an average energy. A change of basis was
equivalent to a different way of measuring the average.
For this reason it was justified to say \cite{P} that
the effect was implied by the projection postulate although
explicit calculations were not referring to any projections
\cite{MCfpl}
(similarly to the linear case, the projection is not included
in a non-stochastic Schr\"odinger dynamics).
We can conclude that the Gisin-type reasoning has to be regarded
as ``unphysical", exactly in the same sense as the
Einstein-Podolsky-Rosen argument was unphysical according to
Bohr.
The version of nonlinear theory we have discussed
seems to be free of physical inconsistencies
and is not more nonlocal than the linear one.
\bigskip
The idea of using the Haag-Bannier equation as a laboratory for
testing the concepts of separability was suggested to me by
G.~A.~Goldin who also informed me about the results of his
joint work with G.~Svetlichny.
I am indebted to G.~A.~Goldin,
H.-D.~Doebner, W.~L\"ucke, P.~Nattermann, and J.~Hennig for
valuable suggestions and critical comments, and to W. Puszkarz
who explained to me how to include into the framework
the equations of Kostin \cite{Kostin} and Staruszkiewicz
\cite{Star,Pusz}. The work is a part
of the Polish-Flemish project 007 and was done during my stay at
the Arnold Sommerfeld Institute in Clausthal. I gratefully
acknowledge support from the DAAD.
\section{Introduction}
The electrocardiogram (ECG) is a recording of the electrical activity of the heart, obtained with the help of electrodes located on the human body. It is one of the most important methods for the diagnosis of heart diseases. The ECG is usually interpreted by a doctor; recently, however, automatic ECG analysis has attracted great interest.
The ECG analysis includes detection of QRS complexes, P and T waves, followed by an analysis of their shapes, amplitudes, relative positions, etc. (see Fig.\,\ref{fig:delineationDoc}). The detection of onsets and offsets of QRS complexes and P and T waves is also called segmentation or delineation of the ECG signal.
\begin{figure}
\includegraphics[width=\linewidth]{ecg_delineation_doc.png}
\caption{An example of medical segmentation. Yellow color corresponds to P waves, red to QRS complexes, green to T waves.
The symbol $\rhd$ means the onset of a wave, $\circ$ means the wave peak, $\lhd$ corresponds to the offset of a wave.}
\label{fig:delineationDoc}
\end{figure}
Accurate automatic ECG segmentation is a difficult problem for several reasons.
For example, the P wave has a small amplitude and can be difficult to identify due to interference arising from the movement of electrodes, muscle noise, etc. P and T waves can be biphasic, which makes it difficult to accurately determine their onsets and offsets. Some cardiac cycles may not contain all standard segments; for example, the P wave may be missing.
Among the methods of automatic ECG segmentation, methods using wavelet transforms have proven to be the best
\cite{Bote2017,Kalyakulina2018,Li1995,DiMarco2011,Martinez2010,Rincon2011}.
In \cite{Sereda2018}, a neural network approach for ECG segmentation is proposed.
The segmentation quality turned out to be close to the quality obtained by state-of-the-art algorithms based on wavelet transform, but still, as a rule, lower.
In this paper, we suggest using a UNet-like neural network \cite{UNet2015}.
As a result, using the neural network approach, it is possible to achieve and even exceed the quality of segmentation obtained by other algorithms.
In terms of quality, the proposed approach is superior to analogues.
In particular, $F$1-measures for detection of onsets and offsets
of P and T waves and for QRS-complex are at least $97.8$\%, $99.5$\%, and $99.9$\%, respectively.
In addition, the proposed segmentation method differs from its analogues in speed, in the small number of parameters of the neural network, and in good generalization: it is adaptive to different sampling rates and generalizes to various types of ECG monitors.
The main differences between the proposed approach and that of \cite{Sereda2018} are as follows:
\begin{itemize}
\item in \cite{Sereda2018}, an ensemble of $12$ convolutional neural networks is used;
here we use a single fully convolutional neural network with skip connections;
\item in contrast to the present work, \cite{Sereda2018} does not use postprocessing;
\item in \cite{Sereda2018}, preprocessing is used to remove the isoline drift; we process signals as is;
in Section \ref{sec:Examples}, we will see that the quality of ECG segmentation is high even in the case of the isoline drift.
\end{itemize}
\section{Algorithm}
\subsection{Preprocessing}
The neural network described below was trained on a dataset of ECG signals with a sampling frequency of 500 Hz and a duration of 10 s
(see Section \ref{section:LUDB}).
In order to use this network for signals of a different frequency or/and a different duration, we propose the following preprocessing.
Let the frequency of an input signal $x = (x_1, x_2, \dots, x_n)$ be $\nu$, and the network is trained on signals with the frequency $\mu$. Then $T=n/\nu$ is the signal duration.
Convert the input signal as follows.
\begin{enumerate}
\item[1.] Form an array of time samples $t = (t_1, t_2, \dots, t_n)$,
where $t_i = \dfrac{(2i-1)T}{2n}$ are the midpoints of the time intervals formed by dividing the segment $[0, T]$ into $n$
equal parts $(i=1,2,\dots,n)$.
\item[2.] On the set of points $\{(t_1, x_1), (t_2, x_2), \dots, (t_n, x_n) \}$, construct the cubic spline \cite{Spline}.
\item[3.] Form the array of new time samples $t'=(t_1', t_2', \dots, t_m')$, where
$$
m=\ceil[\big]{\mu T}, \qquad t_i'=\frac{(2i-1)T}{2m}.
$$
\item[4.] Using the cubic spline, find the signal values at $t'$. The resulting array will be the input to the neural network.
\end{enumerate}
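The resampling steps above can be sketched in Python. This is a minimal illustration, not the authors' code; for brevity, the cubic spline of step 2 is replaced by linear interpolation, and the function name \texttt{resample} is ours:

```python
import bisect
import math

def resample(x, nu, mu):
    """Resample signal x recorded at frequency nu (Hz) to frequency mu (Hz),
    following steps 1-4 above; linear interpolation stands in for the
    cubic spline of step 2."""
    n = len(x)
    T = n / nu                                        # signal duration, s
    # step 1: midpoints of n equal subintervals of [0, T]
    t = [(2 * i - 1) * T / (2 * n) for i in range(1, n + 1)]
    # step 3: new time grid with m = ceil(mu * T) midpoints
    m = math.ceil(mu * T)
    t_new = [(2 * i - 1) * T / (2 * m) for i in range(1, m + 1)]
    # step 4: evaluate the interpolant on the new grid
    out = []
    for tp in t_new:
        if tp <= t[0]:
            out.append(x[0])
        elif tp >= t[-1]:
            out.append(x[-1])
        else:
            j = bisect.bisect_right(t, tp) - 1        # interval [t[j], t[j+1]]
            w = (tp - t[j]) / (t[j + 1] - t[j])
            out.append((1 - w) * x[j] + w * x[j + 1])
    return out
```

Note that the output length depends only on the duration $T$ and the target frequency $\mu$, not on the input frequency $\nu$, which is what makes the network applicable to signals from different monitors.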
\subsection{The neural network architecture}
The architecture of the neural network (see Fig.\,\ref{fig:modelPic}) is similar to the UNet architecture \cite{UNet2015}.
The input of the neural network is a vector of length $l$, where $l$ is the length of the ECG signal received from one lead.
Each lead is fed to the input of the neural network separately.
\begin{figure}
\includegraphics[width=\linewidth]{model.eps}
\caption{Neural network architecture}
\label{fig:modelPic}
\end{figure}
The output size is $(4, l)$.
Each column of the output matrix contains $4$ scores that characterize the degree of confidence of the neural network that the current signal sample belongs to the P, QRS, or T segment, or to none of the above.
The proposed neural network includes the following layers:
\begin{itemize}
\item[(i)] $4$ blocks, each of which includes two convolutional layers with batch normalization and the ReLU activation function;
these blocks are connected sequentially with MaxPooling layers;
\item[(ii)] the output from the previous layer is fed, through a MaxPooling layer, to the input of another block containing two
convolutional layers with batch normalization and the ReLU activation function;
\item[(iii)] the output from the previous layer, passed through the deconvolution and zero padding layers, is concatenated with the output
of the last block of (i) and is fed to the input of a block that includes two convolutional layers, each with
batch normalization and the ReLU activation function;
\item[(iv)] the output from the previous layer, passed through the deconvolution and zero padding layers, is fed sequentially to the input of
another $4$ blocks, each containing two convolutional layers with batch normalization and the ReLU activation function;
each time the output is concatenated with the output of the corresponding block of (i), in reverse order;
\item[(v)] the output from the previous layer is fed to the input of another convolutional layer.
\end{itemize}
All convolutional layers have the following characteristics: $\text{\it kernel-size}=9$, $\text{\it padding}=4$.
All deconvolution layers have $\text{\it kernel-size}=8$, $\text{\it stride}=2$, $\text{\it padding}=3$.
For the last convolutional layer $\text{\it kernel-size}=1$.
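These parameter choices are not arbitrary: with stride 1, a kernel size of 9 with padding 4 preserves the sequence length, while a deconvolution with kernel size 8, stride 2, and padding 3 exactly doubles it. This can be checked with the standard output-length formulas for 1d (transposed) convolutions:

```python
def conv1d_len(l, kernel_size, stride=1, padding=0):
    # output length of a 1d convolution
    return (l + 2 * padding - kernel_size) // stride + 1

def deconv1d_len(l, kernel_size, stride=1, padding=0):
    # output length of a 1d transposed convolution (deconvolution)
    return (l - 1) * stride - 2 * padding + kernel_size

assert conv1d_len(5000, kernel_size=9, padding=4) == 5000            # preserved
assert deconv1d_len(2500, kernel_size=8, stride=2, padding=3) == 5000  # doubled
```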
The main differences between the proposed network and UNet are as follows:
\begin{itemize}
\item we use 1d convolutions instead of 2d convolutions;
\item we use a different number of channels and different parameters in the convolutions;
\item we use copy $+$ zero pad layers instead of copy $+$ crop layers;
as a result, in the proposed method the dimension of the output is the same as that of the input;
in contrast, at the output of the UNet network, one obtains a segmentation of only a part of the image.
\end{itemize}
\subsection{Postprocessing}
The output of the neural network is the matrix of size $(4, l)$, where $l$ is the input signal length.
Applying the argmax function to the columns of the matrix,
we obtain a vector of length $l$.
An array of waves is then formed by finding all continuous segments with the same label.
For processing multi-lead ECGs (a typical number of leads is $12$), we propose to process each lead independently
and then average the resulting scores.
As we will see in Section \ref{sec:Experiments}, such an analysis improves the quality of the prediction.
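The per-sample argmax and wave extraction can be sketched as follows (the class indices $0=$ background, $1=$ P, $2=$ QRS, $3=$ T and the function names are our assumptions):

```python
def column_argmax(scores):
    """scores: list of l columns, each with 4 class scores.
    Returns the per-sample label vector of length l."""
    return [max(range(4), key=lambda c: col[c]) for col in scores]

def extract_waves(labels):
    """Find all maximal runs of equal non-background labels.
    Returns (label, start, end) triples with end exclusive."""
    waves = []
    start = 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            if labels[start] != 0:   # skip background runs
                waves.append((labels[start], start, i))
            start = i
    return waves
```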
\section{Experimental results}
\subsection{LUDB dataset} \label{section:LUDB}
The neural network was trained and the experiments were conducted on the extended LUDB dataset \cite{LUDB2018}.
The dataset consists of $455$ $12$-lead ECG records of $10$ seconds duration, recorded with a sampling rate of $500$ Hz.
For the comparison of algorithms, the dataset was divided into training and test sets,
where the test set consists of $200$ ECG signals borrowed from the original LUDB dataset.
Since the proposed neural network processes the leads independently,
$255 \times 12 = 3060$ signals of length $500 \times 10 = 5000$ were used for training.
To prevent overfitting, augmentation of data was performed:
at each batch iteration, a random continuous ECG fragment of $4$ seconds was fed to the input of the neural network.
The LUDB dataset has the following feature: the first and last one (sometimes two) cardiac cycles are not annotated,
while the first and last annotated segments are always QRS complexes
(see the example in Fig.~\ref{fig:delineationDoc}).
To implement a correct comparison with the reference segmentation, the following modifications were made in the algorithm:
\begin{itemize}
\item during augmentation, the first and last $2$ seconds were not taken,
i.\,e. subsequences of $4$ seconds length were chosen starting between the $2$-nd and the $4$-th second
(and thus ending between the $6$-th and the $8$-th second);
\item in order to avoid a large number of false positives, the first and the last cardiac cycles were removed
during the validation of the algorithm.
\end{itemize}
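The augmentation with the 2-second guard intervals can be sketched as follows (a hypothetical helper of ours, assuming the 500 Hz / 10 s signals described above):

```python
import random

def random_crop(signal, fs=500, crop_s=4, margin_s=2):
    """Return a random crop_s-second fragment whose start time lies in
    [margin_s, T - crop_s - margin_s], so the unannotated first and last
    margin_s seconds of the record are never used."""
    T = len(signal) / fs
    start = random.uniform(margin_s, T - crop_s - margin_s)
    i = int(start * fs)
    return signal[i:i + int(crop_s * fs)]
```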
\subsection{Comparison of the algorithms} \label{sec:Experiments}
Table \ref{table:Compare} contains the results of the experiment and their comparison with
one of the best wavelet-based segmentation algorithms \cite{Kalyakulina2018}
and with the neural network segmentation algorithm of \cite{Sereda2018}.
The last line shows the characteristics of our algorithm when the leads are analysed independently,
for a test set consisting of $200 \times 12 = 2400$ ECG signals.
The quality of the algorithms is determined using the following procedure.
According to the recommendations of the Association for the Advancement of Medical Instrumentation \cite{AAMI1999}, an onset or an offset is considered to be detected correctly if its deviation from the doctor's annotation does not exceed in absolute value the {\it tolerance} of 150 ms.
If an algorithm correctly detects a significant point (an onset or an offset of one of the P, QRS, T segments),
then a true positive (TP) is counted and the time deviation (error) of the automatically determined point from the manually marked point is measured.
If there is no corresponding significant point in the test sample within $\pm \text{\it tolerance}$ of the detected significant point, then a type I error is counted (false positive, FP).
If the algorithm does not detect a significant point, then a type II error is counted (false negative, FN).
Following \cite{Bote2017,DiMarco2011,Martinez2010,Rincon2011},
we measure the following quality metrics:
\begin{itemize}
\item the mean error $m$;
\item the standard deviation $\sigma$ of the error;
\item the sensitivity, or recall, $\text{\it Se} = \text{\rm TP}/(\text{\rm TP} + \text{\rm FN})$;
\item the positive predictive value, or precision, $\text{\it PPV} = \text{\rm TP}/(\text{\rm TP} + \text{\rm FP})$.
\end{itemize}
Here TP, FP, and FN denote the total numbers of correct detections, type I errors, and
type II errors, respectively.
We also give the value of
\begin{itemize}
\item the $F1$-measure: $F1 = 2\,\dfrac{\text{\it Se}\cdot\text{\it PPV}}{\text{\it Se}+\text{\it PPV}}$.
\end{itemize}
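For reference, the metrics above reduce to a few lines (an illustrative helper of ours, not the evaluation code of the paper):

```python
def quality_metrics(tp, fp, fn):
    """Sensitivity (recall), positive predictive value (precision),
    and their harmonic mean F1, from the error counts defined above."""
    se = tp / (tp + fn)           # Se = TP / (TP + FN)
    ppv = tp / (tp + fp)          # PPV = TP / (TP + FP)
    f1 = 2 * se * ppv / (se + ppv)
    return se, ppv, f1
```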
\begin{table}
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|p{2.8cm}|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{ } & P onset & P offset & QRS onset & QRS offset & T onset & T offset \\
\hline
\begin{tabular}{l} Kalyakulina \\ \textit{et al.} \cite{Kalyakulina2018} \end{tabular}
& \begin{tabular}{c}{\it Se} (\%) \\ {\it PPV} (\%) \\ {\it F}1 (\%) \\ $m\pm\sigma(ms)$ \end{tabular}
& \begin{tabular}{c}$98.46$ \\ $96.41$ \\ $97.42$ \\ $-2.7 \pm10.2$ \end{tabular}
& \begin{tabular}{c}$98.46$ \\ $96.41$ \\ $97.42$ \\ $0.4\pm11.4$ \end{tabular}
& \begin{tabular}{c}$99.61$ \\ $99.87$ \\ $99.74$ \\ $-8.1\pm7.7$ \end{tabular}
& \begin{tabular}{c}$99.61$ \\ $99.87$ \\ $99.74$ \\ $3.8\pm8.8$ \end{tabular}
& \begin{tabular}{c}$-$ \end{tabular}
& \begin{tabular}{c}$98.03$ \\ $98.84$ \\ $98.43$ \\ $5.7\pm 15.5$ \end{tabular} \\
\hline
\begin{tabular}{l} ~ \\ Sereda \textit{et al.} \cite{Sereda2018} \\ ~ \end{tabular}
& \begin{tabular}{c}{\it Se} (\%) \\ {\it PPV} (\%) \\ {\it F}1 (\%) \\ $m\pm\sigma(ms)$ \end{tabular}
& \begin{tabular}{c}$95.20$ \\ $82.66$ \\ $88.49$ \\ $2.7\pm21.9$ \end{tabular}
& \begin{tabular}{c}$95.39$ \\ $82.59$ \\ $88.53$ \\ $-7.4\pm28.6$ \end{tabular}
& \begin{tabular}{c}$99.51$ \\ $98.17$ \\ $98.84$ \\ $2.6\pm12.4$ \end{tabular}
& \begin{tabular}{c}$99.50$ \\ $97.96$ \\ $98.72$ \\ $-1.7\pm14.1$ \end{tabular}
& \begin{tabular}{c}$97.95$ \\ $94.81$ \\ $96.35$ \\ $8.4\pm28.2$ \end{tabular}
& \begin{tabular}{c}$97.56$ \\ $94.96$ \\ $96.24$ \\ $-3.1\pm28.2$ \end{tabular} \\
\hline
\begin{tabular}{l} ~ \\ This work \\ ~ \end{tabular}
& \begin{tabular}{c}{\it Se} (\%) \\ {\it PPV} (\%) \\ {\it F}1 (\%) \\ $m\pm\sigma(ms)$ \end{tabular}
& \begin{tabular}{c}$98.05$ \\ $97.73$ \\ $\bf 97.89$ \\ $-0.6\pm17.5$ \end{tabular}
& \begin{tabular}{c}$98.01$ \\ $97.69$ \\ $\bf 97.85$ \\ $-2.4\pm18.4$ \end{tabular}
& \begin{tabular}{c}$100.00$ \\ $99.93$ \\ $99.97$ \\ $1.5\pm11.1$ \end{tabular}
& \begin{tabular}{c}$100.00$ \\ $99.93$ \\ $99.97$ \\ $2.0\pm10.6$ \end{tabular}
& \begin{tabular}{c}$99.68$ \\ $99.37$ \\ $\bf 99.52$ \\ $2.9\pm23.7$ \end{tabular}
& \begin{tabular}{c}$99.77$ \\ $99.46$ \\ $\bf 99.61$ \\ $-2.4\pm30.4$ \end{tabular} \\
\hline
\begin{tabular}{l} This work \\ (only lead II) \end{tabular}
& \begin{tabular}{c}{\it Se} (\%) \\ {\it PPV} (\%) \\ {\it F}1 (\%) \\ $m\pm\sigma(ms)$ \end{tabular}
& \begin{tabular}{c}$98.61$ \\ $95.61$ \\ $97.09$ \\ $-4.1\pm20.4$ \end{tabular}
& \begin{tabular}{c}$98.59$ \\ $95.59$ \\ $97.07$ \\ $3.7\pm19.6$ \end{tabular}
& \begin{tabular}{c}$99.99$ \\ $99.99$ \\ $\bf 99.99$ \\ $1.8\pm13.0$ \end{tabular}
& \begin{tabular}{c}$99.99$ \\ $99.99$ \\ $\bf 99.99$ \\ $-0.2\pm11.4$ \end{tabular}
& \begin{tabular}{c}$99.32$ \\ $99.02$ \\ $99.17$ \\ $-3.6\pm28.0$ \end{tabular}
& \begin{tabular}{c}$99.40$ \\ $99.10$ \\ $99.25 $ \\ $-4.1\pm35.3$ \end{tabular} \\
\hline
\begin{tabular}{l} This work \\ (each lead is \\ used separately) \end{tabular}
& \begin{tabular}{c}{\it Se} (\%) \\ {\it PPV} (\%) \\ {\it F}1 (\%) \\ $m\pm\sigma(ms)$ \end{tabular}
& \begin{tabular}{c}$97.38$ \\ $95.53$ \\ $96.47$ \\ $0.9\pm14.1$ \end{tabular}
& \begin{tabular}{c}$97.36$ \\ $95.52$ \\ $96.43$ \\ $-3.5\pm15.7$ \end{tabular}
& \begin{tabular}{c}$99.96$ \\ $99.84$ \\ $99.90$ \\ $2.1\pm9.8$ \end{tabular}
& \begin{tabular}{c}$99.96$ \\ $99.84$ \\ $99.90$ \\ $1.6\pm9.8$ \end{tabular}
& \begin{tabular}{c}$99.43$ \\ $98.88$ \\ $99.15$ \\ $1.3\pm 20.9$ \end{tabular}
& \begin{tabular}{c}$99.48$ \\ $98.94$ \\ $99.21$ \\ $-0.3\pm22.9$ \end{tabular} \\
\hline
\end{tabular}}
\end{center}
\caption{The comparison of ECG segmentation algorithms}
\label{table:Compare}
\end{table}
Analyzing the results, we can draw the following conclusions:
\begin{itemize}
\item the indicators {\it Se} and {\it PPV} of the proposed algorithm are the highest, or nearly the highest, for all types of ECG segments;
\item averaging the answer over all $12$ leads helps to detect the complexes better: it improves both {\it Se} and {\it PPV};
however, the localization of the onsets and the offsets worsens, as indicated by the growth of $\sigma$ in all indicators;
\item to detect the QRS complexes, it is enough to use lead II only, since it gives the highest quality of their detection;
such an approach reduces the running time of the algorithm by a factor of $12$, since the other leads are not passed through the neural network;
\item the best $\sigma$ values are given by the algorithm \cite{Kalyakulina2018};
\item the results of the proposed approach surpass the other neural network approach \cite{Sereda2018} in all indicators.
\end{itemize}
\subsection{Examples of the resulting segmentations} \label{sec:Examples}
Examples of segmentations obtained by the proposed algorithm are shown in Fig. \ref{fig:lNoise}--\ref{fig:interpolate}.
The experiments show that the proposed algorithm confidently copes with noise of different frequencies.
An example with low frequency noise (breathing) is shown in Fig. \ref{fig:lNoise}.
An example with high frequency noise is presented in Fig. \ref{fig:hNoise}.
An example of the segmentation of an ECG with a pathology (ventricular extrasystole) is shown in Fig. \ref{fig:extra}.
An example of segmentation of an ECG obtained from another type of ECG monitor is shown in Fig. \ref{fig:anotherBase}. It is characterized by high T waves and a strong degree of smoothness.
Figure \ref{fig:interpolate} presents an example of segmentation of an ECG with the frequency of 50 Hz, reduced using a cubic spline to the frequency of 500 Hz.
\begin{figure}
\includegraphics[width=\linewidth]{ln.png}
\caption{An example of low frequency noise ECG segmentation (breathing)}
\label{fig:lNoise}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{hn.png}
\caption{An example of high frequency noise ECG segmentation}
\label{fig:hNoise}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{extra.png}
\caption{An example of ECG segmentation with pathology (ventricular extrasystole)}
\label{fig:extra}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{another_base.png}
\caption{An example of segmentation of an ECG obtained from another type of ECG monitor. It is characterized by high T waves and a strong degree of smoothness.}
\label{fig:anotherBase}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{interpolate.png}
\caption{An example of segmentation of an ECG with the frequency of 50 Hz, reduced using a cubic spline to the frequency of 500 Hz}
\label{fig:interpolate}
\end{figure}
\section{Conclusion and future work}
The paper describes an algorithm based on a UNet-like neural network, which is capable of quickly and efficiently segmenting the ECG.
Our method uses a small number of parameters and generalizes well:
in particular, it is adaptive to different sampling rates and generalizes to various types of ECG monitors.
The proposed approach is superior to other state-of-the-art segmentation methods in terms of quality.
$F$1-measures for detection of onsets and offsets
of P and T waves and for QRS-complexes are at least $97.8$\%, $99.5$\%, and $99.9$\%, respectively.
In the future, this can be used for diagnostic purposes. Using segmentation, one can compute useful signal characteristics or feed the neural network output directly into a new network for automated diagnostics, with the hope of improving the quality of classification.
In addition, one can try to improve the algorithm itself. In particular, the loss function used in the proposed neural network probably does not fully reflect the quality of segmentation. For example, it does not take into account some features of the ECG (e.\,g. two adjacent QRS complexes cannot be too close to, or too far from, each other).
\paragraph{Acknowledgement.}
The authors are grateful to the referee for valuable suggestions and comments.
The work is supported by the Ministry of Education and Science of Russian Federation (project 14.Y26.31.0022).
\section{Introduction}
The discoveries of new extra-solar planets continue to be exciting
and have revealed many new insights into the formation and evolution of
planetary systems.
Though the majority of them were detected by the Doppler shift method
(Marcy \& Butler 1998),
other methods such as transits, direct imaging, etc. have also made
impressive contributions.
Because the orbital configurations of extra-solar planetary systems
are generally quite different from that of our Solar system, many investigations
of their dynamical properties and evolution have been carried out
(see Jiang \& Ip 2001, Ji et al. 2002,
Jiang et al. 2003, Jiang \& Yeh 2004a,
Jiang \& Yeh 2004b, Jiang \& Yeh 2007,
Mordasini et al. 2009).
In addition, continued observational efforts keep producing
fruitful new results. For example,
the Kepler space telescope has discovered many multiple planetary systems
through the transit method (Lissauer et al. 2011).
Maciejewski et al. (2010) and Jiang et al. (2013)
discovered possible transit timing variations (TTVs) which could
imply the existence of additional bodies in these planetary systems.
Lee et al. (2014) also found planetary companions around evolved stars
through the method of radial velocities by Doppler shift.
Among these,
the extreme systems with very short orbital periods
have raised particularly many interesting questions,
such as where they could have formed,
how they would have migrated to their current positions,
and how stable their current orbits are.
Their physical properties have also been investigated seriously and with
great effort. For example, the WASP-12 planetary system,
discovered by Hebb et al. (2009), is one of the well-known
extreme systems that has attracted much attention.
The planet was argued to be losing mass by overflowing its Roche lobe.
As planetary gas falls towards the host star through the
first Lagrange point, it is likely to form an accretion disk
(Li et al. 2010).
This might lead to a transfer of metals and thus enhance the stellar
metallicity.
Maciejewski et al. (2011) employed a high-precision photometric
monitoring to study this system and greatly improved the determination
of WASP-12b planetary properties.
On the other hand, the WASP-43 planetary system, first discovered by
Hellier et al. (2011), is another case, with
an even smaller orbit.
The planet moves around a low-mass K star with
an orbital period of only about 0.8 days.
With a mass of 1.8 Jupiter masses,
it is one of the most massive exoplanets with such
an extremely short orbital period.
The existence of WASP-43 system has therefore triggered
the study of thermal radiation from exoplanets.
For example, Wang et al. (2013) confirmed the thermal emission
from the planet WASP-43b.
Chen et al. (2014) observed one transit and one occultation
event in many bands simultaneously. They detected
the day-side thermal emission in the $K$-band.
Moreover, Kreidberg et al. (2014)
determined the water abundance in the atmosphere of WASP-43b
based on the observations through Hubble Space Telescope.
As discussed in Jiang et al. (2003),
a system with a close-in planet would experience an orbital decay through
star-planet tidal interactions.
Indeed, through XMM-Newton observations,
Czesla et al. (2013) reported an X-ray detection and
claimed that WASP-43 is an active K star, which could be related to
tidal interactions.
In order to obtain more precise measurements of the
characteristics of this system, Gillon et al. (2012)
performed an intense photometric monitoring by ground-based
telescopes. The physical parameters have been measured
with much higher precision. Employing their data, the
atmosphere of WASP-43b was modeled. However,
they concluded that their transit data presented
no sign of transit timing variations.
Later, through a timing analysis on the transits of
WASP-43b, Blecic et al. (2014)
proposed an orbital period decreasing rate
about 0.095 second per year. With the data from
Gran Telescopio Canarias (GTC),
Murgas et al. (2014) also claimed an orbital decay
with period decreasing rate about 0.15 second per year
and suggested that a further timing analysis over future years
would be important.
Motivated by the above interesting results,
we employ two telescopes to monitor the WASP-43b transit events
and obtain eight new transit light curves.
Combining our own data
with available published
photometric transit data of WASP-43b, we investigate
the possible timing variations or orbital decay here.
Since these data cover more than 1800 epochs of the orbital evolution,
our results shall serve as the most updated reference for this
system. Our observational data are described in Section 2,
the analysis of light curves is in Section 3, the results of
transit timing variations are presented in Section 4,
and finally the concluding remarks
are provided in Section 5.
\section{Observational Data}
\subsection{Observations and Data Reduction}
In this project,
two telescopes were employed to observe
the transits of
WASP-43b. One is the 1.25-meter telescope (AZT-11)
at the Crimean Astrophysical Observatory (CrAO) in Nauchny, Crimea,
and another is the 60-inch telescope (P60) at Palomar Observatory
in California, USA.
We successfully performed one complete transit observation
with AZT-11 in 2012 and seven with P60 in 2014 and 2015.
A summary of the above observations is presented in Table 1.
After some standard procedures such as flat-field corrections etc.,
we first use the IRAF task, $daofind$, to pick bright
stars and then the task, $phot$, to measure these stars' fluxes
in each image. The light curves of these bright stars
are thus determined (Jiang et al. 2013, Sun et al. 2015).
In order to decide which stars could serve
as comparison stars, we first choose those with higher
brightness consistency as candidates, i.e. candidate stars.
To this end, we calculate the
Pearson correlation coefficient between
any two of these light curves and use 0.9 as the criterion.
The candidate stars are a set of stars in which
the correlation coefficient between any pair of their light curves
must exceed 0.9; this ensures brightness
consistency and that none of the candidate stars
are variable objects.
Finally, the flux of WASP-43 is divided by every possible combination
of the fluxes of the candidate stars.
For example, when there are three candidate stars,
the flux of WASP-43 is divided by the sum of all three candidate stars'
fluxes, by the sum of each possible pair,
and by the flux of each individual candidate star.
Each of these operations yields one calibrated light curve,
and the one with the smallest out-of-transit root-mean-square
deviation becomes the light curve of WASP-43.
Note that the out-of-transit root-mean-square deviation
is determined after the normalization process,
which would be described in Section 2.3 later.
The comparison stars are those involved in the determination of
the light curve of WASP-43.
The numbers of bright stars, candidate stars, and comparison stars
are listed in Table 2.
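The candidate-star selection can be sketched as follows. The paper specifies only the pairwise criterion (Pearson correlation above 0.9); the greedy search below is one possible way to build such a set and is our assumption:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length light curves."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def candidate_stars(curves, threshold=0.9):
    """Greedily keep stars whose light curves correlate above the
    threshold with every star already kept."""
    kept = []
    for i, c in enumerate(curves):
        if all(pearson(c, curves[j]) > threshold for j in kept):
            kept.append(i)
    return kept
```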
\subsection{Other Observational Data from Literature}
In addition to our own light curves, it will be very helpful
to employ those publicly available transit data
in previous work.
With both our own and other transit light curves,
we could therefore cover a large number of
epochs for the investigation on possible transit timing variations.
We review all WASP-43b papers and find that
there are five papers in which the electronic files
of transit light curves are provided.
Gillon et al. (2012) employed the 60cm telescope, TRAPPIST
(TRAnsiting Planets and PlanetesImals Small Telescope),
in the Astrodon $I+z$ filter to obtain 20 light curves
and the 1.2m Euler Swiss telescope in
the Gunn-$r'$ filter to obtain three light curves.
Two of the above light curves are actually
for the same transit event. That is, Epoch 38
is observed by both telescopes. Note that the epochs are given
an identification number following the convention that the transit observed
in Hellier et al. (2011) is Epoch 0.
Chen et al. (2014) observed the transit event of Epoch 499 with the
GROND instrument mounted on 2.2m MPG/ESO telescope
in seven bands: Sloan $g', r', i', z'$ and NIR $J, H, K$.
We take the light curve in $J$ band,
because only $J, H$, and $K$ bands have the information
of seeing and the wavelength of $J$ band is the closest to $R$ band,
the one we used for our own observations.
Maciejewski et al. (2013) provided the light curves of Epoch 543
and Epoch 1032.
Murgas et al. (2014) used GTC (Gran Telescopio Canarias)
instrument OSIRIS to obtain long-slit spectra. We choose the
white light-curve data to do the analysis in this paper.
In addition, there are
seven light curves available from Ricci et al. (2015).
Therefore, we take 23 light curves from Gillon et al. (2012).
In addition, we get one light curve from Chen et al. (2014),
two light curves from Maciejewski et al. (2013),
one from Murgas et al. (2014), and seven from
Ricci et al. (2015).
In total, we have 34 light curves
taken from published papers.
We do not simply use the mid-transit times written in these papers,
but re-analyze all the photometric data with the same procedure
and software to perform
parameter fitting in a consistent way.
Because all these data go
through the same transit modeling and fitting procedure
as our own data,
it can ensure that the results are reliable.
\subsection{The Normalization and Time Stamp of Light Curves}
For all the previously mentioned light curves,
including eight from our work and 34 from the
published papers, we further consider the airmass and
seeing effects here.
Following the procedure of Murgas et al. (2014),
a 3rd-degree polynomial is used to model the airmass effect,
and a linear function is employed to model
the seeing effect.
The original light curve, $F_{o}(t)$,
can be expressed as:
\begin{equation}
F_{o}(t) = F(t) \mathcal{P}(t) \mathcal{Q}(s),
\end{equation}
where $F(t)$ is the corrected light curve,
$\mathcal{P}(t) = a_0 + a_1 t + a_2 t^2+ a_3 t^3$,
and $\mathcal{Q}(s) = 1 + c_0 s$,
where $s$ is the seeing of each image.
(Note that, in Maciejewski et al. 2013 and Ricci et al. 2015,
the seeing is not known and no seeing correction can be done.
We thus set $\mathcal{Q}(s)=1$ for light curves from these two papers.)
We numerically search for the best values of the five parameters
$a_0, a_1, a_2, a_3, c_0$ that make the out-of-transit part
of $F(t)$ close to unity with the smallest standard deviation,
and thus normalize all the light curves.
$F(t)$ will simply be called the observational
light curve and will be used in all further analysis
in the rest of this paper.
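Given fitted coefficients, the correction above amounts to dividing out the trend; a minimal sketch (the function name is ours, and in practice $a_0,\dots,a_3,c_0$ come from the numerical search just described):

```python
def detrend(flux, t, s, a, c0):
    """Divide the raw light curve F_o by P(t) * Q(s), where
    P(t) = a0 + a1*t + a2*t^2 + a3*t^3 and Q(s) = 1 + c0*s."""
    a0, a1, a2, a3 = a
    trend = [(a0 + a1 * ti + a2 * ti ** 2 + a3 * ti ** 3) * (1.0 + c0 * si)
             for ti, si in zip(t, s)]
    return [f / m for f, m in zip(flux, trend)]
```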
On the other hand, the time stamp we use is the Barycentric Julian Date
in the Barycentric Dynamical Time ($BJD_{TDB}$).
We compute the UT time of mid exposure from the recorded header
and convert the time stamp to
$BJD_{TDB}$ as in Eastman et al. (2010).
\section{The Analysis of Light Curves}
The Transit Analysis Package (TAP) presented by
Gazak et al. (2012) is used to obtain transit models and
corresponding parameters from all the above 42 light curves.
TAP employs the light-curve models of Mandel \& Agol (2002),
the wavelet-based likelihood function developed by Carter \& Winn (2009),
and Markov Chain Monte Carlo (MCMC) technique to
determine the parameters.
All 42 light curves are loaded into TAP
and analyzed simultaneously.
For each light curve, five MCMC chains of length 500,000 are computed.
To start an MCMC chain in TAP, we need to set the initial values
of the following parameters:
orbital period $P$, orbital inclination $i$, semi-major axis $a$
(in the unit of stellar radius $R_{\ast}$), the planet's radius $R_{\rm p}$
(in the unit of stellar radius),
the mid-transit time $T_{\rm m}$, the linear limb darkening
coefficient $u_1$, the quadratic limb darkening
coefficient $u_2$, orbital eccentricity $e$ and
the longitude of periastron $\omega$. Once the initial
values are set, one could choose any one of the above to be:
(1) completely fixed (2)
completely free to vary or (3) varying following a Gaussian function,
i.e., Gaussian prior, during the MCMC chain in TAP.
Moreover, any of the above parameters which is not completely fixed
can be linked among different light curves.
The orbital period is treated as
a fixed parameter, $P = 0.81347753$ days, taken from Table 5 of
Gillon et al. (2012). The initial values of the inclination $i$,
semi-major axis, and planet radius are all from Gillon et al. (2012),
i.e. $i=82.33^{\circ}$, $a/R_{\ast}=4.918$, and $R_{\rm p}/R_{\ast}=0.15945$.
They are completely free to vary and linked among all light curves.
We leave the mid-transit times
$T_{\rm m}$ completely free during the TAP runs;
they are linked only among light curves of the same transit event.
Two light curves from Gillon et al. (2012) are for
the same transit event, i.e. epoch 38,
and another two from Ricci et al. (2015) are for epoch 1469.
A Gaussian prior centered on the values
of quadratic limb darkening coefficients
with certain $\sigma$ are set for TAP runs.
The quadratic limb darkening coefficients and $\sigma$
for $I+z$ and Gunn-$r'$ filters
are set as the values in Gillon et al. (2012),
and the one for white light curve follows the values used
in Murgas et al. (2014).
For the $i, I, J, R$, and $V$ filters,
we linearly interpolate the tables of
Claret (2000, 2004) to the values of the effective temperature
$T_{\rm eff}$ = 4400 K, $\log g$ = 4.5 (${\rm cm/s^{2}}$),
metallicity [Fe/H] = 0, and micro-turbulent velocity
$V_{\rm t}$ = 0.5 ${\rm km/s}$ (Hellier et al. 2011).
In order to consider the possible small differences mentioned in
Southworth (2008), a Gaussian prior centered
on the theoretical values with $\sigma$ = 0.05 is set
for our limb darkening coefficients $u_1$ and $u_2$ during
TAP runs.
The details of parameter setting for TAP runs are listed in
Table 3 and Table 4.
There are five chains in each of our TAP runs, and all of the chains
are combined into the final results. The 15.9, 50.0, and
84.1 percentile levels are recorded. The 50.0 percentile, i.e. the median,
is used as the best value, and the other two percentile levels
give the error bar.
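The reported best value and error bar can be reproduced from the merged chains with a simple percentile summary (a sketch of ours; a linear-interpolation percentile convention is assumed):

```python
def chain_summary(samples):
    """Median as the best value; the 15.9 and 84.1 percentiles give the
    lower and upper error bars (roughly +/- 1 sigma for a Gaussian)."""
    s = sorted(samples)
    def pct(p):
        # linear interpolation between closest ranks
        k = (len(s) - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, len(s) - 1)
        return s[f] + (s[c] - s[f]) * (k - f)
    best = pct(50.0)
    return best, best - pct(15.9), pct(84.1) - best
```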
The mid-transit time for the corresponding epoch
of each transit event is obtained.
In order to examine whether there are any outliers, these mid-transit times are
fitted by a linear function. It is found that the mid-transit time of
epoch 1469 has the largest deviation, more than 3$\sigma$ away
from the linear function. We thus remove the two light curves of
epoch 1469 from our data set and re-run TAP with the same procedure.
We finally obtain the mid-transit time for the corresponding epoch
of each transit event, as those presented in Table 5.
They will be used to establish a new ephemeris and study
the transit timing variations in next section.
The resulting inclination,
semi-major axis, and planetary radius are listed in Table 6.
These values are consistent with those published in previous work.
For example, our results agree with those of Gillon et al. (2012)
and Ricci et al. (2015) within the error bars,
and our error bars are smaller than theirs.
This shows that our analysis, with more light curves, gives
stronger observational constraints.
Moreover, the observed light curves and
best-fitting models of our own data are presented in Figure 1,
where the points are observational data and the solid curves are
the best-fitting models.
These eight new light curves
are available in machine-readable form in
the electronic version of Table 7.
\section{Transit Timing Variations}
\subsection{A New Ephemeris}
When all mid-transit times of 39 epochs in Table 5
are considered, we obtain a new ephemeris
by minimizing $\chi^2$ through
fitting a linear function as
\begin{equation}
T^C_{\rm m} (E) = T_0 + P E,
\end{equation}
where $T_0$ is a reference time, $E$ is an epoch
(the transit observed by Hellier et al. 2011 defines
epoch $E=0$, and the epochs of other transits are counted accordingly),
$P$ is the orbital period, and $T^C_{\rm m}(E)$
is the calculated mid-transit time at a given epoch $E$.
We find that
$T_{0} = 2455528.86860518\pm 0.00003632$ ($BJD_{TDB}$),
$P = 0.81347392\pm 0.00000004$ (day).
The corresponding $\chi^2$ = 266.2076; with 37 degrees of freedom,
the reduced chi-square is
$\chi^2_{red}(37)$ = 7.1948.
Using this new ephemeris,
the $O-C$ diagram is presented as the data points in Figure 2.
The large value of reduced $\chi^2$ of the linear fitting presented here
indicates that a certain level of TTVs does exist.
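The weighted linear fit can be sketched as follows (synthetic timings rather than the Table 5 data; `numpy.polyfit` with weights $1/\sigma$ minimizes exactly the $\chi^2$ of Eq. (1)):

```python
import numpy as np

# Hypothetical mid-transit times (BJD_TDB) and uncertainties; illustrative only.
E = np.array([0.0, 100.0, 500.0, 1000.0, 1500.0])
T = 2455528.8686 + 0.8134739 * E + np.array([0.0, 3e-4, -2e-4, 4e-4, -1e-4])
sig = np.full_like(T, 2e-4)

# Weighted linear fit T^C_m(E) = T0 + P*E; weights 1/sigma minimize chi^2.
P, T0 = np.polyfit(E, T, 1, w=1.0 / sig)
resid = T - (T0 + P * E)
chi2 = float(np.sum((resid / sig) ** 2))
chi2_red = chi2 / (len(E) - 2)  # dof = N data points - 2 fitted parameters
```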
\subsection{A Model of Orbital Decay}
Through transit timing analysis, Blecic et al. (2014)
and Murgas et al. (2014) proposed a possible orbital decay for
the planet WASP-43b. However, their transit data extended only to about
epoch 1000. It is therefore interesting to examine
whether our newly observed transits follow
the proposed trend of orbital decay.
Assume the orbital period is $P_q$, and let the predicted
mid-transit time at epoch $E$ be $T_{S}(E)$.
For convenience, the mid-transit time of epoch 0, $T_S(0)$, is set to zero,
so the mid-transit time of epoch 1
is $T_S(1)=P_q$, and the elapsed time is
$\delta t_1= P_q$. If there is a small
period change $\delta P$ from time $t= T_S(1)$ to
$t=T_S(2)$, the elapsed time is $\delta t_2= P_q + \delta P$.
Suppose there is a further period change $\delta P$
from time $t= T_S(2)$ to $t= T_S(3)$,
so that the elapsed time is $\delta t_3= P_q + 2\delta P$.
Following this continuous period decay, we
have $\delta t_i= P_q + (i-1)\delta P$,
where $i=1, 2, \ldots, E$.
Summing all the above $\delta t_i$, we obtain
$T_{S}(E) = E P_q + [E(E-1)/2]\delta P $.
Therefore,
as in Blecic et al. (2014), a model of orbital decay can be obtained
by minimizing $\chi^2$ through
fitting the function
\begin{equation}
T_{S}(E) = T_{q0} + P_q E + \delta P \frac{E(E-1)}{2}
\end{equation}
where $T_{q0}$ is a reference time, $E$ is an epoch,
$P_{q}$ is the orbital period, and $\delta P$
is the period change between successive mid-transit times,
starting from $t=T_S(1)$.
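Since Eq. (2) is linear in $(T_{q0}, P_q, \delta P)$, the fit reduces to ordinary linear least squares. A minimal sketch on synthetic data (the parameter values below are illustrative, with the reference time offset subtracted for numerical conditioning):

```python
import numpy as np

def decay_model(E, Tq0, Pq, dP):
    """Mid-transit time (days) with a constant per-orbit period change dP."""
    return Tq0 + Pq * E + dP * E * (E - 1) / 2.0

# Synthetic timings from illustrative parameters (times offset by a constant).
E = np.arange(0, 60, 3, dtype=float)
T = decay_model(E, 0.8685, 0.8134745, -1e-6)

# The model is linear in (Tq0, Pq, dP): solve by least squares.
A = np.column_stack([np.ones_like(E), E, E * (E - 1) / 2.0])
Tq0, Pq, dP = np.linalg.lstsq(A, T, rcond=None)[0]
```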
When only the data of earlier work
with transits before epoch 1100, i.e.
Gillon et al. (2012), Chen et al. (2014),
Maciejewski et al. (2013), and Murgas et al. (2014),
are considered, we have
$T_{q0} = 2455528.86809115\pm 0.00006471$,
$P_q = 0.81347925\pm 0.00000055$,
$\delta P = -1.03181346\times 10^{-8}\pm 0.10711789\times 10^{-8}$.
The corresponding $\chi^2$= 131.7672, and
$\chi^2_{red}(24)= 5.4903$.
Using the above best-fitted parameters for
$T_{S}(E)$ and the new ephemeris for $T^C_{\rm m}(E)$,
the difference $T_{S}(E)- T^C_{\rm m}(E)$
as a function of epoch $E$ is plotted as the dashed curve
in the $O-C$ diagram, together with
data points as shown in Figure 2.
Both $P_q$ and $\delta P$ are in days,
and we obtain
$dP/dt=\delta P/P_q$ = $-0.40027520\pm 0.04155436 $ sec/year.
This result is consistent
with the orbital decay rates reported in previous works.
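The quoted rate follows from a simple unit conversion of the fitted per-orbit period change:

```python
# Fitted per-orbit period change and period from the early-epoch fit (days).
delta_P = -1.03181346e-8  # days per orbit
P_q = 0.81347925          # days
SEC_PER_YEAR = 365.25 * 86400.0

# dP/dt = delta_P / P_q is dimensionless; scale to seconds per year.
dPdt = delta_P / P_q * SEC_PER_YEAR  # ~ -0.4003 sec/year
```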
When all the data in Table 5 are considered, we obtain
$T_{q0} = 2455528.86851783\pm 0.00004318$,
$P_q=0.81347448\pm 0.00000016$,
$\delta P = -7.45173434\times 10^{-10}\pm 1.98109164\times 10^{-10}$.
The corresponding $\chi^2_{red}(36)=7.0057$.
The larger value of reduced $\chi^2$ is due to the
larger number of data points in this case.
Using the above best-fitted parameters for
$T_{S}(E)$ and the new ephemeris for $T^C_{\rm m}(E)$,
the difference $T_{S}(E)- T^C_{\rm m}(E)$
as a function of epoch $E$ is plotted as the solid curve
in the $O-C$ diagram, together with
data points as shown in Figure 2.
Comparing the solid curve with the dashed curve in Figure 2,
it is clear that the
data points around epochs 1500 and 1900 do not follow the
dashed curve. That is, the newly obtained transits do not
follow the transit timings predicted by the previous models.
On the other hand, for the solid curve in Figure 2, the overall
orbital decay rate is
$dP/dt=\delta P/P_q$ = $-0.02890795\pm 0.00772547$ sec/year, which is
an order of magnitude smaller than the values in previous work.
Therefore, with our newly observed transits,
we obtain a very different orbital decay rate.
These results indicate that,
if there is any orbital decay, the decay rate must be much smaller
than the values proposed in previous works.
This slower orbital decay rate leads to a new estimate of
the stellar tidal dissipation factor $Q_{\ast}$.
Following the equation in Blecic et al. (2014),
we obtain $Q_{\ast}$ of order $10^5$,
which lies within the normally assumed theoretical range
of $10^5$ to $10^{10}$.
\subsection{The Frequency Analysis}
To search for possible periodicities of the transit timing
variations in the timing residuals, the
Lomb-Scargle normalized periodogram (Press \& Rybicki 1989)
is used. Figure 3 shows the resulting spectral
power as a function of frequency.
The false-alarm probability of the highest peak
is 0.20, far above the usual
thresholds of 0.05 or 0.01 for a significant frequency.
Therefore, our results show no evidence
for periodic TTVs.
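Such a periodogram search can be sketched as follows (synthetic noise residuals, not our timing data; `scipy.signal.lombscargle` with a rough Press \& Rybicki-style false-alarm estimate, where the number of independent frequencies $M$ is an assumption):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 2000.0, 39))  # hypothetical epochs (uneven)
r = rng.normal(0.0, 1.0, t.size)           # hypothetical O-C residuals (noise)

freqs = np.linspace(0.01, 3.0, 2000)       # angular frequencies to scan
r0 = r - r.mean()
power = lombscargle(t, r0, freqs)
z = power / r0.var()                       # approximate classical normalization

# False-alarm probability of the highest peak, assuming ~M independent
# frequencies (Press & Rybicki 1989); M here is a rough guess.
M = 1000
fap = 1.0 - (1.0 - np.exp(-z.max())) ** M
```

A small `fap` (below 0.05 or 0.01) would indicate a significant periodicity; pure noise, as here, typically yields a large value.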
\section{Concluding Remarks}
Employing telescopes at two observatories,
we monitored the transits of the exoplanet WASP-43b
and obtained eight new transit light curves.
Together with the light curves from published papers,
they were all analyzed through the same procedure.
The transit timings were obtained,
and a new ephemeris was established.
The newly determined inclination $i=82.149^{+0.084}_{-0.086}$,
semi-major axis $a/R_{\ast}=4.837^{+0.021}_{-0.022}$,
and planet's radius $R_{\rm p}/R_{\ast}=0.15929^{+0.00045}_{-0.00045}$
are all consistent with previous work.
Our results reconfirm that a certain level of
TTVs does exist, as previously claimed
by Blecic et al. (2014) and Murgas et al. (2014).
However, the results here show that the transit timings of the new data
do not follow the fast orbital decay suggested
in Blecic et al. (2014) and Murgas et al. (2014).
Our results lead to an orbital decay rate
$dP/dt$ = $ -0.02890795\pm 0.00772547$ sec/year, which is
an order of magnitude smaller than the previous values.
This slower rate corresponds to a larger
stellar tidal dissipation factor $Q_{\ast}$,
still within the normally assumed theoretical range.
On the other hand,
the false-alarm probabilities in the frequency analysis indicate
that these TTVs are unlikely to be periodic.
The TTVs we present here could be signals of a slow orbital decay.
We conclude that, in order to further investigate and understand
this interesting system, both realistic theoretical modeling
and many more high-precision observations
are needed in the future.
\section*{Acknowledgment}
We thank the anonymous referee for valuable suggestions
that greatly improved this paper. We also thank
M. Gillon, J. Z. Gazak, G. Maciejewski, and C.-C. Ngeow
for helpful communications.
This work is supported in part
by the Ministry of Science and Technology, Taiwan,
under MOST 103-2112-M-007-020-MY3 and
NSC 100-2112-M-007-003-MY3.
1511.00652
\section{Introduction}\label{sec:intro}
Linear theories of discrete complex analysis look back on a long and varied history. We refer here to the survey of Smirnov \cite{Sm10S}. Already Kirchhoff's circuit laws describe a discrete harmonicity condition for the potential function whose gradient describes the current flowing through the electric network. Discrete harmonic functions on the square lattice were studied by a number of authors in the 1920s, including Courant, Friedrichs, and Lewy \cite{CoFrLe28}. Two different notions for discrete holomorphicity on the square lattice were suggested by Isaacs \cite{Is41}. Dynnikov and Novikov studied a notion equivalent to one of them on triangular lattices in \cite{DN03}; the other was reintroduced by Lelong-Ferrand \cite{Fe44,Fe55} and Duffin \cite{Du56}. Duffin also extended the theory to rhombic lattices \cite{Du68}. Mercat \cite{Me01}, Kenyon \cite{Ke02}, and Chelkak and Smirnov \cite{ChSm11} resumed the investigation of discrete complex analysis on rhombic lattices or, equivalently, isoradial graphs.
Some two-dimensional discrete models in statistical physics exhibit conformally invariant properties in the thermodynamical limit. Such conformally invariant properties were established by Smirnov for site percolation on a triangular grid \cite{Sm01} and for the random cluster model \cite{Sm10}, by Chelkak and Smirnov for the Ising model \cite{ChSm12}, and by Kenyon for the dimer model on a square grid (domino tiling) \cite{Ke00}. In all cases, linear theories of discrete analytic functions on regular grids were important. The motivation for linear theories of discrete Riemann surfaces also comes from statistical physics, in particular, the Ising model. Mercat defined a discrete Dirac operator and discrete spin structures on quadrilateral decompositions and he identified criticality in the Ising model with rhombic quad-graphs \cite{Me01}. In \cite{Ci12}, Cimasoni discussed discrete Dirac operators and discrete spin structures on an arbitrary weighted graph isoradially embedded in a flat surface with conical singularities and how they can be used to give an explicit formula of the partition function of the dimer model on that graph. Also, he studied discrete spinors and their connection to s-holomorphic functions \cite{Ci15} that played an instrumental role in the universality results of Chelkak and Smirnov \cite{ChSm12}.
Important non-linear discrete theories of complex analysis involve circle packings or, more generally, circle patterns. Stephenson explains the links between circle packings and Riemann surfaces in \cite{Ste05}. Rodin and Sullivan proved that the Riemann mapping of a complex domain to the unit disk can be approximated by circle packings \cite{RSul87}. A similar result for isoradial circle patterns, even with irregular combinatorics, is due to B\"ucking \cite{Bue08}. In \cite{BoMeSu05} it was shown that discrete holomorphic functions describe infinitesimal deformations of circle patterns.
Mercat extended the linear theory from domains in the complex plane to discrete Riemann surfaces \cite{Me01b,Me01,Me07,Me08}. There, the discrete complex structure on a bipartite cellular decomposition of the surface into quadrilaterals is given by complex numbers $\rho_Q$ with positive real part. More precisely, the discrete complex structure defines discrete holomorphic functions by demanding that \[f(w_+)-f(w_-)=i\rho_Q\left(f(b_+)-f(b_-)\right)\] holds on any quadrilateral $Q$ with black vertices $b_-,b_+$ and white vertices $w_-,w_+$. Mercat focused on discrete complex structures given by real numbers in \cite{Me01b,Me01,Me07} and sketched some notions for complex $\rho_Q$ in \cite{Me08}. He introduced discrete period matrices \cite{Me01b, Me07}; their convergence to their continuous counterparts was shown in \cite{BoSk12}, where a discrete Riemann-Roch theorem was also provided. Graph-theoretic analogues of the classical Riemann-Roch theorem and Abel-Jacobi theory were given by Baker and Norine \cite{BaNo07}.
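As a quick numerical illustration of this condition (hypothetical vertex positions; for an embedded quadrilateral one may take $\rho_Q=-i(w_+-w_-)/(b_+-b_-)$, which makes the identity function discrete holomorphic by construction):

```python
# Vertices of one embedded quadrilateral Q as complex numbers:
# black b_-, b_+ and white w_-, w_+ (arbitrary example positions).
b_m, w_m, b_p, w_p = 0 + 0j, 1.2 - 0.3j, 1.5 + 1.1j, -0.2 + 0.9j

rho = -1j * (w_p - w_m) / (b_p - b_m)  # discrete complex structure on Q
assert rho.real > 0                    # required of a discrete complex structure

f = lambda z: z                        # the identity function on the vertices
lhs = f(w_p) - f(w_m)
rhs = 1j * rho * (f(b_p) - f(b_m))
assert abs(lhs - rhs) < 1e-12          # Mercat's discrete holomorphicity condition
```

Any affine function $f(z)=az+b$ passes the same check, mirroring the classical fact that affine functions are holomorphic.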
A different linear theory for discrete complex analysis on triangulated surfaces using holomorphic cochains was introduced by Wilson \cite{Wi08}. Convergence of period matrices in that discretization to their smooth counterparts was also shown. A nonlinear theory of discrete conformality that discretizes the notion of conformally equivalent metrics was developed in \cite{BoPSp10}.
In \cite{BoG15}, a medial graph approach to discrete complex analysis on planar quad-graphs was suggested. Many results such as discrete Cauchy's integral formulae relied on discrete Stokes' Theorem~\ref{th:stokes} and Theorem~\ref{th:derivation} stating that the discrete exterior derivative is a derivation of the discrete wedge-product. These theorems turn out to be quite useful also in the current setting of discrete Riemann surfaces.
Our treatment of discrete differential forms on bipartite quad-decompositions of Riemann surfaces is close to what Mercat proposed in \cite{Me01b,Me01,Me07,Me08}. However, our version of discrete exterior calculus is based on the medial graph representation and is slightly more general. The goal of this paper is to present a comprehensive theory of discrete Riemann surfaces with complex weights $\rho_Q$ including discrete coverings, discrete exterior calculus, and discrete Abelian differentials. It includes several new notions and results including branched coverings of discrete Riemann surfaces, the discrete Riemann-Hurwitz Formula~\ref{th:Riemann_Hurwitz}, double poles of discrete one-forms and double values of discrete meromorphic functions that enter the discrete Riemann-Roch Theorem~\ref{th:Riemann_Roch}, and a discrete Abel-Jacobi map whose components are discrete holomorphic by Proposition~\ref{prop:Abel_holomorphic}.
The precise definition of a discrete complex structure will be given in Section~\ref{sec:basic}. Note that not all discrete Riemann surfaces can be realized as piecewise planar quad-surfaces, but these given by real weights $\rho_Q$ can, see Theorem~\ref{th:realization}. In Section~\ref{sec:holomap}, an idea how branch points of higher order can be modeled on discrete Riemann surfaces and a discretization of the Riemann-Hurwitz formula are given.
Since the notion of discrete holomorphic mappings developed in Section~\ref{sec:holomap} is too rigid to go further, we concentrate on discrete meromorphic functions and discrete one-forms. First, we briefly comment on how the version of discrete exterior calculus developed in \cite{BoG15} generalizes to discrete Riemann surfaces in Section~\ref{sec:differentials}. The results of \cite{BoG15} that are relevant for the sequel are only stated; their proofs carry over verbatim from \cite{BoG15}. Sometimes, in addition to a discrete complex structure, we require local charts around the vertices of the quad-decomposition. Their existence is ensured by Proposition~\ref{prop:charts}. However, the definitions actually do not depend on the choice of charts.
In Section~\ref{sec:periods}, periods of discrete differentials are introduced and the discrete Riemann Bilinear Identity~\ref{th:RBI} is proven more or less in the same way as in the classical theory. Then, discrete harmonic differentials are studied in Section~\ref{sec:harmonic_holomorphic}. In Section~\ref{sec:Abelian_theory}, we recover the discrete period matrices of Mercat \cite{Me01b, Me07} and the discrete Abelian differentials of the first and the third kind of \cite{BoSk12} in the general setup of complex weights. Furthermore, discrete Abelian differentials of the second kind are defined. This leads to a slightly more general version of the discrete Riemann-Roch Theorem~\ref{th:Riemann_Roch}. Finally, discrete Abel-Jacobi maps and analogies to the classical theory are discussed in Section~\ref{sec:Abel}.
\section{Basic definitions}\label{sec:basic}
The aim of this section is to introduce discrete Riemann surfaces in Section~\ref{sec:setup}, giving piecewise planar quad-surfaces as an example in Section~\ref{sec:polyhedral}. There, we also discuss the question whether conversely discrete Riemann surfaces can be realized as piecewise planar quad-surfaces. The basic definitions are very similar to the notions in \cite{BoG15}, such as the medial graph introduced in Section~\ref{sec:medial}.
\subsection{Discrete Riemann surfaces}\label{sec:setup}
\begin{definition}
Let $\Sigma$ be a connected oriented surface without boundary, for short \textit{surface}. A \textit{bipartite quad-decomposition} $\Lambda$ \textit{of} $\Sigma$ is a strongly regular and locally finite cellular decomposition of $\Sigma$ such that all its 2-cells are quadrilaterals and its 1-skeleton is bipartite. Strong regularity requires that two different faces are either disjoint or share only one vertex or share only one edge; local finiteness requires that a compact subset of $\Sigma$ contains only finitely many quadrilaterals. If $\Sigma=\mC$ and $\Lambda$ is embedded in the complex plane such that all edges are straight line segments, then $\Lambda$ is called a \textit{planar quad-graph}.
Let $V(\Lambda)$ denote the set of 0-cells (\textit{vertices}), $E(\Lambda)$ the set of 1-cells (\textit{edges}), and $F(\Lambda)$ the set of 2-cells (\textit{faces} or \textit{quadrilaterals}) of $\Lambda$.
\end{definition}
In what follows, let $\Lambda$ be a bipartite quad-decomposition of the surface $\Sigma$.
\begin{definition}
We fix one decomposition of $V(\Lambda)$ into two independent sets and refer to the vertices of this decomposition as \textit{black} and \textit{white} vertices, respectively. Let $\Gamma$ be the graph defined on the black vertices where $vv'$ is an edge of $\Gamma$ if and only if its two black endpoints are vertices of a single face of $\Lambda$. Its dual graph $\Gamma^*$ is defined as the corresponding graph on white vertices.
\end{definition}
\begin{remark}
The assumption of strong regularity guarantees that any edge of $\Gamma$ or $\Gamma^*$ is the diagonal of exactly one quadrilateral of $\Lambda$.
\end{remark}
\begin{definition}
$\Diamond:=\Lambda^*$ is the dual graph of $\Lambda$.
\end{definition}
\begin{definition}
If a vertex $v \in V(\Lambda)$ is a vertex of a quadrilateral $Q\in F(\Lambda)\cong V(\Diamond)$, then we write $Q \sim v$ or $v \sim Q$ and say that $v$ and $Q$ are \textit{incident} to each other.
\end{definition}
All faces of $\Lambda$ inherit an orientation from $\Sigma$. We may assume that the orientation on $\Sigma$ is chosen in such a way that the image of any orientation-preserving embedding of a two-cell $Q \in F(\Lambda)$ into the complex plane is positively oriented.
\begin{definition}
Let $Q \in F(\Lambda)$ with vertices $b_-,w_-,b_+,w_+$ in counterclockwise order, where $b_\pm \in V(\Gamma)$ and $w_\pm \in V(\Gamma^*)$. An orientation-preserving embedding $z_Q$ of $Q$ to a rectilinear quadrilateral in $\mC$ without self-intersections such that the image points of $b_-,w_-,b_+,w_+$ are vertices of the quadrilateral is called a \textit{chart} of $Q$. Two such charts are called \textit{compatible} if the oriented diagonals of the image quadrilaterals are in the same complex ratio \[\rho_Q:= -i \frac{w_+-w_-}{b_+-b_-}.\] Moreover, let $\varphi_Q:=\arccos\left(\re\left(i\rho_Q/|\rho_Q|\right)\right)$ be the angle under which the diagonal lines of $Q$ intersect.
\end{definition}
Note that $0<\varphi_Q<\pi$. Figure~\ref{fig:quadgraph} shows part of a planar quad-graph together with the notations we use for a single face $Q$ and the \textit{star of a vertex} $v$, i.e., the set of all faces incident to $v$.
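For concreteness, both quantities can be computed directly from a chart (arbitrary example points in the required counterclockwise combinatorial position):

```python
import math

# Images of b_-, w_-, b_+, w_+ under a chart z_Q (arbitrary example).
b_m, w_m, b_p, w_p = -1 + 0j, 0.8 - 0.9j, 1.4 + 0.2j, -0.1 + 1.0j

rho = -1j * (w_p - w_m) / (b_p - b_m)        # complex ratio of oriented diagonals
phi = math.acos((1j * rho / abs(rho)).real)  # intersection angle of the diagonals

assert rho.real > 0 and 0 < phi < math.pi
```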
\begin{figure}[htbp]
\begin{center}
\beginpgfgraphicnamed{quad}
\begin{tikzpicture}
[white/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},
gray/.style={circle,draw=black,fill=gray,thin,inner sep=0pt,minimum size=1.2mm}]
\node[white] (w1) [label=left:$v'_{s-1}$]
at (-0.9,-0.9) {};
\node[white] (w2) [label=below:$v'_s$]
at (1,-1) {};
\node[white] (w3) [label=below:$v'_k$]
at (1,0) {};
\node[white] (w4) [label=left:$v'_1$]
at (1,1) {};
\node[white] (w5) [label=left:$v'_2$]
at (-1,1) {};
\node[white] (w6) [label=above:$w_+$]
at (-4,0) {};
\node[white] (w7)
at (1,-2) {};
\node[white] (w8)
at (3,0) {};
\node[white] (w9)
at (1,2) {};
\node[white] (w10) [label=below:$w_-$]
at (-6,-2) {};
\node[white] (w11)
at (-2,-2) {};
\node[black] (b1) [label=above:$v$]
at (0,0) {};
\node[black] (b2) [label=right:$v_1$]
at (2,1) {};
\node[black] (b3)
at (2,-1) {};
\node[black] (b4) [label=below:$v_s$]
at (0,-2) {};
\node[black] (b5) [label=below:$b_+$]
at (-4,-2) {};
\node[black] (b6) [label=above:$v_2$]
at (0,2) {};
\node[black] (b7) [label=above :$b_-$]
at (-6,0) {};
\node[black] (b8)
at (-2,0) {};
\draw[color=white] (w1) --node[midway,color=black] {$Q_s$} (w2);
\draw (b2) -- (w9) -- (b6) -- (w5) -- (b8);
\draw (b4) -- (w7) -- (b3) -- (w8) -- (b2);
\draw (b8) -- (w1) -- (b4) -- (w2) -- (b3) -- (w3) -- (b2) -- (w4) -- (b6);
\draw (b1) -- (w1);
\draw (b1) -- (w2);
\draw (b1) -- (w3);
\draw (b1) -- (w4);
\draw (b1) -- (w5);
\draw (b8) -- (w11) -- (b4);
\draw (w6) -- (b5) -- (w10) -- (b7) -- (w6);
\draw (b5) -- (w11);
\draw (w6) -- (b8);
\draw[dashed] (b5) -- (b7);
\draw[dashed] (w10) -- (w6);
\node[gray] (z) [label=below:$Q$]
at (-5,-1) {};
\draw (-4.45,-1.55) arc (-45:43:0.8cm);
\coordinate[label=right:$\varphi_Q$] (phi) at (-4.9,-1);
\end{tikzpicture}
\endpgfgraphicnamed
\caption{Bipartite quad-decomposition with notations}
\label{fig:quadgraph}
\end{center}
\end{figure}
In addition, $\Diamond_0$ always denotes a connected subgraph of $\Diamond$ and $V(\Diamond_0)\subseteq V(\Diamond)$ the corresponding subset of faces of $\Lambda$. Through our identification $V(\Diamond)\cong F(\Lambda)$, we can call the elements of $V(\Diamond)$ quadrilaterals and identify them with the corresponding faces of $\Lambda$.
In particular, an equivalence class of charts $z_Q$ of a single quadrilateral $Q$ is uniquely characterized by the complex number $\rho_Q$ with a positive real part. An assignment of positive real numbers $\rho_Q$ to all faces $Q$ of $\Lambda$ was the definition of a discrete complex structure Mercat used in \cite{Me01}. In his subsequent work \cite{Me08}, he proposed a generalization to complex $\rho_Q$ with positive real part. Mercat's notion of a discrete Riemann surface is equivalent to the definition we give:
\begin{definition}
A \textit{discrete Riemann surface} is a triple $(\Sigma, \Lambda, z)$ of a bipartite quad-decomposition $\Lambda$ of a surface $\Sigma$ together with an \textit{atlas} $z$, i.e., a collection of charts $z_Q$ of all faces $Q \in F(\Lambda)$ that are compatible to each other. An assignment of complex numbers $\rho_Q$ with positive real part to the faces $Q$ of the quad-decomposition is said to be a \textit{discrete complex structure}.
$(\Sigma, \Lambda, z)$ is said to be \textit{compact} if the surface $\Sigma$ is compact.
\end{definition}
Note that real $\rho_Q$ correspond to quadrilaterals $Q$ whose diagonals are orthogonal to each other. They arise naturally if one considers a Delaunay triangulation of a polyhedral surface $\Sigma$ and places the vertices of the dual at the circumcenters of the triangles. Discrete Riemann surfaces based on this structure were investigated in \cite{BoSk12}. There, the above definition of a discrete Riemann surface was suggested as a generalization.
\begin{remark}
Compared to the classical theory, charts around vertices of $\Lambda$ are missing so far and were not considered by previous authors. In order to obtain definitions that can be immediately motivated from the classical theory, we will introduce such charts in our setting. However, we do not include them in the definition of a discrete Riemann surface. As it turns out, appropriate charts around vertices always exist and, apart from discrete derivatives of functions on $V(\Diamond)$, none of our notions depend on these charts.
\end{remark}
\begin{definition}
Let $v \in V(\Lambda)$. An orientation-preserving embedding $z_v$ of the star of $v$ to the star of a vertex of a planar quad-graph $\Lambda'$ that maps vertices of $\Lambda$ to vertices of $\Lambda'$ is said to be a \textit{chart} as well. $z_v$ is said to be \textit{compatible} with the discrete complex structure of the discrete Riemann surface $(\Sigma,\Lambda,z)$ if for any quadrilateral $Q\sim v$ the induced chart of $z_v$ on $Q$ is compatible to $z_Q$.
\end{definition}
\begin{remark}
When we later speak about particular charts $z_v$, we always refer to charts compatible with the discrete complex structure.
\end{remark}
\begin{proposition}\label{prop:charts}
Let $\Lambda$ be a bipartite quad-decomposition of a Riemann surface $\Sigma$, and let the numbers $\rho_Q$, $Q\in V(\Diamond)$, define a discrete complex structure. Then, there exists an atlas $z$ such that the image quadrilaterals of charts $z_Q$ are parallelograms with the oriented ratio of diagonals equal to $i\rho_Q$ and such that for any $v \in V(\Lambda)$ there exists a chart $z_v$ compatible with the discrete complex structure.
\end{proposition}
\begin{proof}
The construction of the charts $z_Q$ is simple: In the complex plane, the quadrilateral with black vertices $\pm 1$ and white vertices $\pm i\rho_Q$ is a parallelogram with the desired oriented ratio of diagonals.
In contrast, the construction of charts $z_v$ is more delicate. See Figure~\ref{fig:construction} for a visualization. Let us consider the star of a vertex $v\in V(\Gamma)$. If $v$ is white, then just replace $\rho_Q$ by $1/\rho_Q$ and $\varphi_Q$ by $\pi-\varphi_Q$ in the following. Let $Q_1,Q_2,\ldots,Q_k$ be the quadrilaterals incident to $v$.
We choose $0<\theta<\pi$ in such a way that $\theta<\varphi_{Q_1}<\pi-\theta$. Let $\alpha_1:= \pi-\theta$, and define $\alpha_s:= (\pi+\theta)/(k-1)$ for the other $s$. Then, all $\alpha_s$ sum up to $2\pi$.
First, we construct the images of $Q_s$, $s\neq 1$, starting with an auxiliary construction. As in Figure~\ref{fig:quadgraph}, let $v$, $v'_{s-1}$, $v_s$, $v'_{s}$ be the vertices of $Q_s$ in counterclockwise order. Then, we map $v'_{s-1}$ to $-1$ and $v'_{s}$ to $1$. All points $x$ that enclose an oriented angle $\alpha_s>0$ with $\pm 1$ lie on a circular arc above the real axis. Since the real part of $\rho_{Q_s}$ is positive, the ray $ti\rho_{Q_s}$, $t>0$, intersects this arc in exactly one point. If we choose the intersection point $x_v$ as the image of $v$ and $x_{v_s}:=x_v-2i\rho_{Q_s}$ as the image of $v_s$, then we get a quadrilateral in $\mC$ that has the desired oriented ratio of diagonals $i\rho_{Q_s}$. The quadrilateral is convex if and only if $x_v-2i\rho_{Q_s}$ has nonpositive imaginary part.
Now, we translate all the image quadrilaterals such that $v$ is always mapped to zero. By construction, the image of $Q_s$ is contained in a cone of angle $\alpha_s$. Thus, we can rotate and scale the images of $Q_s$, $s\neq 1$, in such a way that they do not overlap and that the images of edges $vv'_{s}$ coincide. Since all $\alpha_s$ sum up to $2\pi$, there is still a cone of angle $\alpha_1=\pi-\theta$ empty.
Let us identify the vertices $v$, $v'_k$, and $v'_1$ with their corresponding images and choose $q$ on the line segment $v'_kv'_1$. If $q$ approaches the vertex $v'_k$, then $\angle vqv'_k \to \pi - \angle v'_1v'_kv$, and if $q$ approaches $v'_1$, then $\angle vqv'_k \to \angle vv'_1v'_k$. Since \[\angle vv'_1v'_k<\theta<\varphi_{Q_1}<\pi-\theta<\pi - \angle v'_1v'_kv,\] there is a point $q$ on the line segment such that $\angle vqv'_k = \varphi_{Q_1}$. If we take the image of $v_1$ on the ray $tq$, $t>0$, such that its distance to the origin is $|v'_k-v'_1|/|\rho_{Q_1}|$, then we obtain a quadrilateral with the oriented ratio of diagonals $i\rho_{Q_1}$.
\end{proof}
\begin{figure}[htbp]
\centering
\subfloat[Image of $Q_s$, $s\neq 1$]{
\beginpgfgraphicnamed{construction1}
\begin{tikzpicture}[white/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},
gray/.style={circle,draw=black,fill=gray,thin,inner sep=0pt,minimum size=1.2mm},scale=0.7]
\clip(-3.5,-1.5) rectangle (3.5,5.5);
\draw [domain=0.0:17, dash pattern=on 5pt off 5pt] plot(\x,{(-0.0--3.66*\x)/1.0});
\tkzDefPoint(-3,0){w1}\tkzDefPoint(1,3.66){b1}\tkzDefPoint(3,0){w2}
\tkzCircumCenter(w1,b1,w2)
\tkzGetPoint{O}
\tkzDrawArc(O,w2)(w1)
\node[white] (w1) [label=below:$-1$] at (-3,0) {};
\node[white] (w2) [label=below:$1$] at (3,0) {};
\node[gray] (x) [label=below:$0$] at (0,0) {};
\node[black] (b1) [label=above left:$x_v$] at (1,3.66) {};
\node[black] (b2) [label=above left:$x_{v_s}$] at (0.2686098530106421,0.9831120620189502) {};
\draw [dash pattern=on 5pt off 5pt] (x) -- (w2);
\draw (w1)--(b1)--(w2)--(b2)--(w1);
\draw (2,5) node {$i\rho_{Q_s}$};
\draw (0.41,3.12) arc (222.46:298.65:0.8cm);
\coordinate[label=right:$\alpha_{s}$] (alpha) at (0.4,3.2);
\end{tikzpicture}
\endpgfgraphicnamed}
\qquad
\subfloat[Construction of the image of $Q_1$]{
\beginpgfgraphicnamed{construction2}
\begin{tikzpicture}
[white/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},
gray/.style={circle,draw=black,fill=gray,thin,inner sep=0pt,minimum size=1.2mm},scale=0.75]
\clip(2,-4) rectangle (10.8,2.3);
\node[black] (b1) [label=right:$v_1$] at (9.82,-1.2) {};
\node[black] (b2) [label=above:$v$] at (5.56,-1.2) {};
\node[black] (b3) at (4.6,-0.8) {};
\node[black] (b4) at (5.7,-3.38) {};
\node[black] (b5) at (4.6,-3.59) {};
\node[black] (b6) at (5.2,1.3) {};
\node[white] (w1) [label=above:$v'_1$] at (6.8,1.5) {};
\node[white] (w2) [label=below:$v'_{k}$] at (6.8,-3) {};
\node[white] (w3) at (3.64,0.54) {};
\node[white] (w4) at (5,-3.02) {};
\node[white] (w5) at (4.31,-1.61) {};
\node[gray] (q) [label=above right:$q$] at (6.8,-1.2) {};
\coordinate[label=right:$Q_1$] (q1) at (8.8,0);
\draw [dash pattern=on 5pt off 5pt] (b2)-- (b1);
\draw [dash pattern=on 5pt off 5pt] (w1)-- (w2);
\draw [dotted] (w2)-- (b1)--(w1);
\draw (b2) -- (w1);
\draw (b2) -- (w2);
\draw (b2) -- (w3);
\draw (b2) -- (w4);
\draw (b2) -- (w5);
\draw (w1) -- (b6) -- (w3);
\draw (w3) -- (b3) -- (w5);
\draw (w5) -- (b5) -- (w4);
\draw (w4) -- (b4) -- (w2);
\end{tikzpicture}
\endpgfgraphicnamed}
\caption[]{Visualization of the proof of Proposition~\ref{prop:charts}}
\label{fig:construction}
\end{figure}
\begin{remark}
Note that dependent on the discrete Riemann surface it could be impossible to find charts around vertex stars whose images consist of convex quadrilaterals only. Indeed, the interior angle at a black vertex $v$ of a convex quadrilateral with purely imaginary oriented ratio of diagonals $i\rho$ has to be at least $\arctan(|\rho|)=\pi/2-\arccot(|\rho|)$. In particular, the interior angles at $v$ of five or more incident convex quadrilaterals $Q_s$ such that $\rho_{Q_s}>\pi$ sum up to more than $2\pi$.
\end{remark}
\subsection{Piecewise planar quad-surfaces and discrete Riemann surfaces}\label{sec:polyhedral}
A polyhedral surface $\Sigma$ without boundary consists of Euclidean polygons glued together along common edges. Clearly, there are a lot of possibilities to make it a discrete Riemann surface. An essentially unique way to make a closed polyhedral surface a discrete Riemann surface is the following (see for example \cite{BoSk12}): The vertices of the (essentially unique) Delaunay triangulation are the black vertices and the circumcenters of the triangles are the white vertices (Figure~\ref{fig:DelaunayVoronoi}). The corresponding quadrilaterals possess isometric embeddings into the complex plane and form together a discrete Riemann surface. Note that all quadrilaterals are kites, corresponding to a discrete complex structure with real numbers $\rho_Q$ that are given by the so-called \textit{cotangent weights} \cite{PP93}. The corresponding cellular decomposition is called \textit{Delaunay-Voronoi quadrangulation}.
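A small numerical check of this relation, assuming the standard cotangent-weight convention $\rho = (\cot\alpha+\cot\beta)/2$ for the two angles opposite the shared Delaunay edge (the points below are an arbitrary planar example):

```python
import numpy as np

def circumcenter(a, b, c):
    """Circumcenter of triangle (a, b, c), points as complex numbers."""
    ax, ay, bx, by, cx, cy = a.real, a.imag, b.real, b.imag, c.real, c.imag
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return complex(ux, uy)

def angle(p, q, r):
    """Interior angle at q in triangle (p, q, r)."""
    u, v = p - q, r - q
    return np.arccos((u.real * v.real + u.imag * v.imag) / (abs(u) * abs(v)))

# Edge b1-b2 shared by Delaunay triangles (b1, b2, p) and (b2, b1, q).
b1, b2, p, q = 0 + 0j, 2 + 0j, 1 + 1.5j, 1 - 1.2j
w1, w2 = circumcenter(b1, b2, p), circumcenter(b2, b1, q)

# Kite diagonal ratio (white diagonal over black diagonal) ...
rho_from_kite = abs(w1 - w2) / abs(b1 - b2)
# ... equals the cotangent weight of the two angles opposite the edge.
alpha, beta = angle(b1, p, b2), angle(b1, q, b2)
rho_cotan = 0.5 * (1.0 / np.tan(alpha) + 1.0 / np.tan(beta))
```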
\begin{figure}[htbp]
\begin{center}
\beginpgfgraphicnamed{DelaunayVoronoi}
\begin{tikzpicture}
[white/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},
gray/.style={circle,draw=black,fill=gray,thin,inner sep=0pt,minimum size=1.2mm},scale=0.7]
\clip(-4.1,-3.5) rectangle (4.1,3.5);
\node[black] (b1)
at (-2,3.464) {};
\node[black] (b2)
at (2,3.464) {};
\node[black] (b3)
at (-4,0) {};
\node[black] (b4)
at (0,0) {};
\node[black] (b5)
at (4,0) {};
\node[black] (b6)
at (-2,-3.464) {};
\node[black] (b7)
at (2,-3.464) {};
\tkzSetUpPoint[fill=white,size=3mm]
\tkzCircumCenter(b1,b2,b4) \tkzGetPoint{w2}
\tkzCircumCenter(b1,b3,b4) \tkzGetPoint{w1}
\tkzCircumCenter(b5,b2,b4) \tkzGetPoint{w3}
\tkzCircumCenter(b3,b6,b4) \tkzGetPoint{w4}
\tkzCircumCenter(b6,b7,b4) \tkzGetPoint{w5}
\tkzCircumCenter(b7,b5,b4) \tkzGetPoint{w6}
\draw [dash pattern=on 5pt off 5pt] (b3)--(b1)-- (b2)-- (b5)-- (b4)-- (b3)-- (b6)-- (b7)-- (b4)--(b1);
\draw [dash pattern=on 5pt off 5pt] (b7)--(b5);
\draw [dash pattern=on 5pt off 5pt] (b2)--(b4);
\draw (w1)--(b1);
\draw (w1)--(b3);
\draw (w1)--(b4);
\draw (w2)--(b1);
\draw (w2)--(b2);
\draw (w2)--(b4);
\draw (w3)--(b5);
\draw (w3)--(b2);
\draw (w3)--(b4);
\draw (w4)--(b3);
\draw (w4)--(b6);
\draw (w4)--(b4);
\draw (w5)--(b6);
\draw (w5)--(b7);
\draw (w5)--(b4);
\draw (w6)--(b7);
\draw (w6)--(b5);
\draw (w6)--(b4);
\tkzDrawPoints(w1,w2,w3,w4,w5,w6)
\end{tikzpicture}
\endpgfgraphicnamed
\caption{Delaunay-Voronoi quadrangulation corresponding to a Delaunay triangulation}
\label{fig:DelaunayVoronoi}
\end{center}
\end{figure}
Let us suppose that the polyhedral surface $\Sigma$ is a piecewise planar quad-surface. Then, $\Sigma$ becomes a discrete Riemann surface in a canonical way. In the classical theory, any polyhedral surface possesses a canonical complex structure and any compact Riemann surface can be recovered from some polyhedral surface \cite{Bost92}. In the discrete setting, the situation is different.
\begin{theorem}\label{th:realization}
Let $(\Sigma,\Lambda,z)$ be a compact discrete Riemann surface.
\begin{enumerate}
\item If all numbers $\rho_Q$ of the discrete complex structure are real, then there exists a polyhedral surface consisting of rhombi such that its induced discrete complex structure is the one of $(\Sigma,\Lambda,z)$.
\item If all but one of the numbers $\rho_Q$ of the discrete complex structure are real and the remaining one is not, then there exists no piecewise planar quad-surface with the combinatorics of $\Lambda$ such that its induced discrete complex structure coincides with the one of $(\Sigma,\Lambda,z)$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) The diagonals of a rhombus intersect orthogonally. Clearly, the oriented ratio of diagonals of a rhombus $Q$ is $i\rho_Q=i\tan\left(\alpha/2\right)$, where $\alpha$ denotes the interior angle at a black vertex. Choosing $\alpha=2\arctan(\rho_Q)$ gives a rhombus with the desired oriented ratio of diagonals. If all the side lengths of the rhombi are one, then we can glue them together to obtain the desired closed polyhedral surface.
(ii) For a chart $z_Q$ of $Q\in V(\Diamond)$, consider the image $z_Q(Q)$. We denote the lengths of its edges by $a,b,c,d$ in counterclockwise order, starting with an edge going from a black to a white vertex, and the lengths of the line segments connecting the vertices with the intersection of the diagonal lines by $e_1,e_2,f_1,f_2$ as in Figure~\ref{fig:cosine}.
\begin{figure}[htbp]
\centering
\beginpgfgraphicnamed{cosine}
\begin{tikzpicture}
[white/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},
gray/.style={circle,draw=black,fill=gray,thin,inner sep=0pt,minimum size=1.2mm},scale=4]
\draw [shift={(1.02,0.5)}] (0,0) -- (-0.13:0.12) arc (-0.13:133.79:0.12) -- cycle;
\node[white] (w1) [label=below:$z_Q(w_-)$] at (1.44,0.07) {};
\node[white] (w2) [label=above:$z_Q(w_+)$] at (0.62,0.93) {};
\node[black] (b1) [label=left:$z_Q(b_-)$] at (0,0.5) {};
\node[black] (b2) [label=right:$z_Q(b_+)$] at (1.7,0.5) {};
\draw (b1)--node[midway,above] {$d$} (w2);
\draw (w2)--node[midway,above] {$c$} (b2);
\draw (b2)--node[midway,below] {$b$} (w1);
\draw (w1)--node[midway,below] {$a$} (b1);
\node[gray] (q) at (1.02,0.5) {};
\draw [dash pattern=on 5pt off 5pt] (w2)--node[midway,left] {$f_2$}(q)--node[midway,left] {$f_1$} (w1);
\draw [dash pattern=on 5pt off 5pt] (b1)--node[midway,below] {$e_1$}(q)--node[midway,below] {$e_2$} (b2);
\coordinate[label=right:$\varphi_Q$] (phi) at (0.95,0.55);
\end{tikzpicture}
\endpgfgraphicnamed
\caption{Illustration of the formula $a^2-b^2+c^2-d^2=-2\cos(\varphi_Q)ef$}
\label{fig:cosine}
\end{figure}
The law of cosines implies \begin{align*}
a^2&=e_1^2+f_1^2-2e_1f_1\cos(\varphi_Q),\\
b^2&=e_2^2+f_1^2+2e_2f_1\cos(\varphi_Q),\\
c^2&=e_2^2+f_2^2-2e_2f_2\cos(\varphi_Q),\\
d^2&=e_1^2+f_2^2+2e_1f_2\cos(\varphi_Q).
\end{align*}
Taking the alternating sum, we get \[a^2-b^2+c^2-d^2=-2\cos(\varphi_Q)(e_1f_1+e_2f_1+e_2f_2+e_1f_2)=-2\cos(\varphi_Q)ef,\] where $e:=e_1+e_2$ and $f:=f_1+f_2$ are the lengths of the two diagonals; note that the sum in parentheses factors as $(e_1+e_2)(f_1+f_2)=ef$. In particular, $\varphi_Q=\pi/2$ if and only if $a^2-b^2+c^2-d^2=0$.
Suppose there is a piecewise planar quad-surface with the combinatorics of $\Lambda$ such that its induced discrete complex structure is the one of $(\Sigma,\Lambda,z)$. Let us orient all edges from their white to their black endpoint. For each quadrilateral $Q$, we consider the alternating sum of its squared edge lengths, where the sign in front of an edge that is oriented in counterclockwise direction around $Q$ is positive and negative otherwise. If we add these sums up over all $Q\in V(\Diamond)$, then each squared edge length appears twice with different signs, so the total is zero. On the other hand, $\rho_Q$ is real for all but one $Q$, so by the formula above all summands vanish except the one coming from the remaining quadrilateral, which is nonzero; hence the total is nonzero, a contradiction. Thus, there cannot exist such a piecewise planar quad-surface.
\end{proof}
\subsection{Medial graph} \label{sec:medial}
\begin{definition}
The \textit{medial graph} $X$ of the bipartite quad-decomposition $\Lambda$ of the surface $\Sigma$ is defined as the following cellular decomposition of $\Sigma$. Its vertex set is given by all the midpoints of edges of $\Lambda$, and two vertices $x,x'$ are adjacent if and only if the corresponding edges belong to the same face $Q$ of $\Lambda$ and have a vertex $v\in V(\Lambda)$ in common. We denote this edge (or 1-cell) by $[Q,v]$. A \textit{face} (or 2-cell) $F_v$ of $X$ corresponding to $v\in V (\Lambda)$ shall have the edges of $\Lambda$ incident to $v$ as vertices, and a \textit{face} (or 2-cell) $F_Q$ of $X$ corresponding to $Q\in F(\Lambda)\cong V(\Diamond)$ shall have the four edges of $\Lambda$ belonging to $Q$ as vertices. In Figure~\ref{fig:medial}, the vertices of the medial graph are colored gray. In this sense, the set $F(X)$ of \textit{faces} of $X$ is defined and in bijection with $V(\Lambda)\cup V(\Diamond)$.
\end{definition}
\begin{figure}[htbp]
\centering
\subfloat[Varignon parallelogram inside $Q$]{
\beginpgfgraphicnamed{medial1}
\begin{tikzpicture}[white/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},
gray/.style={circle,draw=black,fill=gray,thin,inner sep=0pt,minimum size=1.2mm},scale=0.6]
\clip(-1.7,-6.5) rectangle (7.4,-0.2);
\draw (-0.6,-4.16)-- (2.02,-1.28);
\draw (2.02,-1.28)-- (6.4,-4.16);
\draw (6.4,-4.16)-- (2.94,-5.52);
\draw (2.94,-5.52)-- (-0.6,-4.16);
\draw [color=gray] (1.17,-4.84)-- (0.71,-2.72);
\draw [color=gray] (0.71,-2.72)-- (4.21,-2.72);
\draw [color=gray] (4.21,-2.72)-- (4.67,-4.84);
\draw [color=gray] (4.67,-4.84)-- (1.17,-4.84);
\node[white] (w1) at (2.94,-5.52) {};
\node[white] (w2) [label=above:$v$] at (2.02,-1.28) {};
\node[black] (b1) [label=left:$v'_{-}$] at (-0.6,-4.16) {};
\node[black] (b2) [label=right:$v'_{+}$] at (6.4,-4.16) {};
\draw (2.9,-4.16) node {$Q$};
\node[gray] (m1) at (0.71,-2.72) {};
\node[gray] (m2) at (4.21,-2.72) {};
\node[gray] (m3) at (4.67,-4.84) {};
\node[gray] (m4) at (1.17,-4.84) {};
\draw (2.25,-2.3) node {$[Q,v]$};
\end{tikzpicture}
\endpgfgraphicnamed}
\qquad
\subfloat[Medial graph in the star of a vertex]{
\beginpgfgraphicnamed{medial2}
\begin{tikzpicture}
[white/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},
gray/.style={circle,draw=black,fill=gray,thin,inner sep=0pt,minimum size=1.2mm},scale=0.6]
\clip(2,-4) rectangle (10.8,2);
\draw (2.4,-1.03)-- (3.64,0.54);
\draw (3.64,0.54)-- (6.18,1.3);
\draw (3.64,0.54)-- (5.56,-1.12);
\draw (5.56,-1.12)-- (4.31,-1.61);
\draw (4.31,-1.61)-- (2.4,-1.03);
\draw (4.31,-1.61)-- (4.6,-3.59);
\draw (9.82,-0.09)-- (7.83,0.18);
\draw (9.82,-0.09)-- (8.3,-1.39);
\draw (7.83,0.18)-- (5.56,-1.12);
\draw (8.3,-1.39)-- (5.56,-1.12);
\draw (7.83,0.18)-- (6.18,1.3);
\draw (5.56,-1.12)-- (6.51,-3.02);
\draw (6.51,-3.02)-- (4.6,-3.59);
\draw (6.51,-3.02)-- (8.48,-3.38);
\draw (8.48,-3.38)-- (8.3,-1.39);
\draw [color=gray] (4.94,-1.36)-- (4.6,-0.29);
\draw [color=gray] (4.6,-0.29)-- (6.7,-0.47);
\draw [color=gray] (6.7,-0.47)-- (6.93,-1.25);
\draw [color=gray] (6.93,-1.25)-- (6.04,-2.07);
\draw [color=gray] (6.04,-2.07)-- (4.94,-1.36);
\node[white] (w1) at (9.82,-0.09) {};
\node[white] (w2) at (5.56,-1.12) {};
\node[white] (w3) at (2.4,-1.03) {};
\node[white] (w4) at (8.48,-3.38) {};
\node[white] (w5) at (4.6,-3.59) {};
\node[white] (w6) at (6.18,1.3) {};
\node[black] (b1) at (7.83,0.18) {};
\node[black] (b2) at (8.3,-1.39) {};
\node[black] (b3) at (3.64,0.54) {};
\node[black] (b4) at (6.51,-3.02) {};
\node[black] (b5) at (4.31,-1.61) {};
\draw [color=gray] (7.495,-3.2) --(8.39,-2.385)--(6.93,-1.25)--(9.06,-0.74)--(8.825,0.045)--(6.7,-0.47)--(7.005,0.74) --(4.91,0.92)--(4.6,-0.29)--(3.02,-0.245)--(3.355,-1.32)--(4.94,-1.36)--(4.455,-2.6)--(5.555,-3.305)--(6.04,-2.07)--(7.495,-3.2);
\node[gray] (m1) at (6.93,-1.25) {};
\node[gray] (m2) at (6.7,-0.47) {};
\node[gray] (m3) at (4.6,-0.29) {};
\node[gray] (m4) at (4.94,-1.36) {};
\node[gray] (m5) at (6.04,-2.07) {};
\node[gray] (m6) at (7.495,-3.2) {};
\node[gray] (m7) at (8.39,-2.385) {};
\node[gray] (m8) at (9.06,-0.74) {};
\node[gray] (m9) at (8.825,0.045) {};
\node[gray] (m10) at (7.005,0.74) {};
\node[gray] (m11) at (4.91,0.92) {};
\node[gray] (m12) at (3.02,-0.245) {};
\node[gray] (m13) at (3.355,-1.32) {};
\node[gray] (m14) at (4.455,-2.6) {};
\node[gray] (m15) at (5.555,-3.305) {};
\end{tikzpicture}
\endpgfgraphicnamed}
\caption[]{Medial graph $X$ and notation of its edges}
\label{fig:medial}
\end{figure}
A priori, $X$ is just a combinatorial datum, giving a cellular decomposition of $\Sigma$ with induced orientation. But the charts $z_v$ and $z_Q$ induce geometric realizations in the complex plane of the faces $F_v$ and $F_Q$ corresponding to $v\in V(\Lambda)$ and $Q\in V(\Diamond)$, respectively. For this, we identify vertices of $X$ with the midpoints of the images of the corresponding edges and map the edges of $X$ to straight line segments. $z_Q$ always induces an orientation-preserving embedding; $z_v$ does so if it maps the quadrilaterals of the star of $v$ to quadrilaterals whose interior angles at $z_v(v)$ are less than $\pi$. Due to Varignon's theorem, $z_Q(F_Q)$ is a parallelogram, even if $z_Q(Q)$ is not. Also, the image of the oriented edge $e=[Q,v]$ of $X$ connecting the edges $vv'_-$ and $vv'_+$ is just half the image of the diagonal: $2z_Q(e)=z_Q(v'_+)-z_Q(v'_-)$. In this sense, $e$ is \textit{parallel} to the edge $v'_-v'_+$ of $\Gamma$ or $\Gamma^*$.
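The Varignon property used above can be checked by a direct computation in the chart. If $b_-,w_-,b_+,w_+$ denote the vertices of $Q$ in counterclockwise order, then the vertices of $z_Q(F_Q)$ are the four midpoints \[\tfrac{1}{2}\left(z_Q(b_-)+z_Q(w_-)\right),\ \tfrac{1}{2}\left(z_Q(w_-)+z_Q(b_+)\right),\ \tfrac{1}{2}\left(z_Q(b_+)+z_Q(w_+)\right),\ \tfrac{1}{2}\left(z_Q(w_+)+z_Q(b_-)\right),\] and the differences of consecutive midpoints are $\pm\left(z_Q(b_+)-z_Q(b_-)\right)/2$ and $\pm\left(z_Q(w_+)-z_Q(w_-)\right)/2$: opposite sides of $z_Q(F_Q)$ are parallel, each being half of one of the diagonals of $z_Q(Q)$.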
\begin{definition}
We call an edge of $X$ \textit{black} or \textit{white} if it is parallel to an edge of $\Gamma$ or $\Gamma^*$, respectively.
\end{definition}
\begin{remark}
Even if $z_v$ does not induce an orientation-preserving embedding of $F_v$, we still obtain a rectilinear polygon $z_v(F_v)$ by the construction described above. In particular, the algebraic area of $z_v(F_v)$ is defined, where the orientation of $z_v(F_v)$ is inherited from the orientation of the star of $v$ on $\Sigma$.
\end{remark}
\begin{definition}
For a connected subgraph $\Diamond_0 \subseteq \Diamond$, we denote by $\Lambda_0$ the subgraph of $\Lambda$ whose vertices and edges are exactly the vertices and edges of the quadrilaterals in $V(\Diamond_0)$. An \textit{interior} vertex $v\in V(\Lambda_0)$ is a vertex such that all incident faces in $\Lambda$ belong to $V(\Diamond_0)$. All other vertices of $\Lambda_0$ are said to be \textit{boundary vertices}. Let $\Gamma_0$ and $\Gamma_0^*$ denote the subgraphs of $\Gamma$ and of $\Gamma^*$ whose edges are exactly the diagonals of quadrilaterals in $V(\Diamond_0)$.
$\Diamond_0\subseteq\Diamond$ is said to \textit{form a simply-connected closed region} if the union of all quadrilaterals in $V(\Diamond_0)$ forms a simply-connected closed region in $\Sigma$.
Furthermore, we denote by $X_0 \subseteq X$ the connected subgraph of $X$ consisting of all edges $[Q,v]$ where $Q\in V(\Diamond_0)$ and $v$ is a vertex of $Q$. For a finite collection $F$ of faces of $X_0$, $\partial F$ denotes the union of all counterclockwise oriented boundaries of faces in $F$, where oriented edges in opposite directions cancel each other out.
\end{definition}
\section{Discrete holomorphic mappings}\label{sec:holomap}
Throughout this section, let $(\Sigma, \Lambda, z)$ and $(\Sigma', \Lambda', z')$ be discrete Riemann surfaces.
\subsection{Discrete holomorphicity}\label{sec:Cauchy_Riemann}
The following notion of discrete holomorphic functions is essentially due to Mercat \cite{Me01,Me07,Me08}.
\begin{definition}
Let $f:V(\Lambda_0) \to \mC$. $f$ is said to be \textit{discrete holomorphic} if the \textit{discrete Cauchy-Riemann equation} \[\frac{f(b_+)-f(b_-)}{z_Q(b_+) - z_Q(b_-)}=\frac{f(w_+)-f(w_-)}{z_Q(w_+) - z_Q(w_-)}\] is satisfied for all quadrilaterals $Q \in V(\Diamond_0)$ with vertices $b_-,w_-,b_+,w_+$ in counterclockwise order, starting with a black vertex. $f$ is \textit{discrete antiholomorphic} if $\bar{f}$ is discrete holomorphic.
\end{definition}
Note that the discrete Cauchy-Riemann equation in the chart $z_Q$ is equivalent to the corresponding equation in a compatible chart $z'_Q$, i.e., it depends on the discrete complex structure only.
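As a basic example, the chart itself, restricted to the vertices of $Q$, is discrete holomorphic: for $f=z_Q$, both sides of the discrete Cauchy-Riemann equation equal \[\frac{z_Q(b_+)-z_Q(b_-)}{z_Q(b_+)-z_Q(b_-)}=1=\frac{z_Q(w_+)-z_Q(w_-)}{z_Q(w_+)-z_Q(w_-)}.\]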
\begin{definition}
A mapping $f:V(\Lambda) \to V(\Lambda')$ is said to be \textit{discrete holomorphic} if the following conditions are satisfied:
\begin{enumerate}
\item $f(V(\Gamma))\subseteq V(\Gamma')$ and $f(V(\Gamma^*))\subseteq{V({\Gamma'}^*)}$;
\item for any quadrilateral $Q \in F(\Lambda)$, there exists a face $Q' \in F(\Lambda')$ such that $f(v)\sim Q'$ for all $v\sim Q$;
\item for any quadrilateral $Q \in F(\Lambda)$, the function $z'_{Q'} \circ f: V(Q)\to \mC$ is discrete holomorphic.
\end{enumerate}
\end{definition}
The first condition asserts that $f$ respects the bipartite structures of the quad-decompositions. The second one discretizes continuity and guarantees that the third holomorphicity condition makes sense.
\begin{remark}
Note that a discrete holomorphic mapping $f$ may be \textit{biconstant} (constant on black and constant on white vertices) on some quadrilaterals, but not on all of them, whereas in the smooth case, any holomorphic mapping that is locally constant somewhere is constant on connected components. We resolve this contradiction by interpreting quadrilaterals where $f$ is biconstant as branch points.
\end{remark}
\subsection{Simple properties and branch points}\label{sec:branch}
The following lemma discretizes the classical fact that nonconstant holomorphic mappings are open.
\begin{lemma} \label{lem:open_map}
Let $f:V(\Lambda) \to V(\Lambda')$ be a discrete holomorphic mapping. Then, for any $v \in V(\Lambda)$ there exists a nonnegative integer $k$ such that the image of the star of $v$ goes $k$ times along the star of $f(v)$ (preserving the orientation).
\end{lemma}
\begin{proof}
By definition of discrete holomorphicity, the image of the star of $v$ is contained in the star of $f(v)$ and the orientation is preserved. If $f$ is biconstant around the star of $v$, then the statement is true with $k=0$. So assume that $f$ is not biconstant there. Then, at least one quadrilateral in the star is mapped to a complete quadrilateral in the star of $f(v)$. The next quadrilateral is either mapped to an edge if $f$ is biconstant at this quadrilateral or to the neighboring quadrilateral. Since this has to close up in the end, the image goes $k>0$ times along the star of $f(v)$.
\end{proof}
\begin{definition}
If the number $k$ in the lemma above is zero, then we say that $v$ is a \textit{vanishing point}. Otherwise, $v$ is a \textit{regular point}. If $k>1$, then we say that $v$ is a \textit{branch point} of multiplicity $k$. In any case, we define $b_f(v)=k-1$ as the \textit{branch number} of $f$ at $v$.
If $f$ is biconstant at $Q\in F(\Lambda)$, then we say that $Q$ is a \textit{branch point} of multiplicity two with branch number $b_f(Q)=1$. Otherwise, $Q$ is not a branch point and $b_f(Q)=0$.
\end{definition}
\begin{figure}[htbp]
\centering
\subfloat{
\includegraphics[scale=0.4]{Cover1}}
\qquad
\subfloat{
\includegraphics[scale=0.4]{Cover2}}
\caption[]{Two-sheeted cover of a cube by a surface of genus three}
\label{fig:cover}
\end{figure}
\begin{example}
Figure~\ref{fig:cover} shows a two-sheeted covering of an elementary cube by a surface of genus three that is composed of 8 vertices, 24 edges, and 12 faces. For this, points $X_i$ and $X_j$, $X \in \{A,B,C,D,E,F,G,H\}$ and $i,j \in \{1,2,3,4\}$, are identified. The mapping $f$ between these surfaces maps a point $X_i$ to the corresponding point $X$ on the cube.
The bipartite quad-decomposition of the surface of genus three is not strongly regular, but a uniform decomposition of each square into nine smaller squares gives us a strongly regular quad-decomposition. This makes both surfaces discrete Riemann surfaces in a canonical way, and $f$ is discrete holomorphic. Each of the eight vertices of the surface of genus three is a branch point of multiplicity two.
\end{example}
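The genus of the covering surface can also be read off from its Euler characteristic, as a quick consistency check: \[\chi=|V|-|E|+|F|=8-24+12=-4=2-2g,\] so indeed $g=3$.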
\begin{remark}
Note that even if $f$ is not globally biconstant, it may have vanishing points. The reason for saying that quadrilaterals where $f$ is biconstant are branch points of multiplicity two is that if we go along the vertices $b_-,w_-,b_+,w_+$ of $Q$, then its images are $f(b_-),f(w_-),f(b_-),f(w_-)$ (Figure~\ref{fig:merging}). However, in combination with vanishing points, this definition of branching might be misleading. It is more appropriate to consider a finite subgraph $\Diamond_0\subseteq\Diamond$ that forms a simply-connected closed region consisting of $F$ quadrilaterals, $I$ interior points $V_{\textnormal{int}}$ (all of them vanishing points), and $B=2(F-I+1)$ boundary points (all of them regular points) as one single branch point of multiplicity $F-I+1$. Indeed, black and white points alternate at the boundary and they are always mapped to the same black or white image point, respectively. In terms of branch numbers this interpretation is fine since \[F-I=\sum_{Q \in V(\Diamond_0)} b_f(Q) + \sum_{v \in V_{\textnormal{int}}} b_f(v).\]
\begin{figure}[htbp]
\centering
\beginpgfgraphicnamed{merging}
\begin{tikzpicture}
[white/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},
gray/.style={circle,draw=black,fill=gray,thin,inner sep=0pt,minimum size=1.2mm},scale=0.6]
\clip(1.8,-4) rectangle (10.8,2);
\draw (2.4,-1.03)-- (3.64,0.54);
\draw (3.64,0.54)-- (6.18,1.3);
\draw (3.64,0.54)-- (5.56,-1.12);
\draw (5.56,-1.12)-- (4.31,-1.61);
\draw (4.31,-1.61)-- (2.4,-1.03);
\draw (4.31,-1.61)-- (4.6,-3.59);
\draw (9.82,-0.09)-- (7.83,0.18);
\draw (9.82,-0.09)-- (8.3,-1.39);
\draw (7.83,0.18)-- (5.56,-1.12);
\draw (8.3,-1.39)-- (5.56,-1.12);
\draw (7.83,0.18)-- (6.18,1.3);
\draw (5.56,-1.12)-- (6.51,-3.02);
\draw (6.51,-3.02)-- (4.6,-3.59);
\draw (6.51,-3.02)-- (8.48,-3.38);
\draw (8.48,-3.38)-- (8.3,-1.39);
\node[white] (w1) [label=right:$w$] at (9.82,-0.09) {};
\node[white] (w2) [label=above:$w$] at (5.56,-1.12) {};
\node[white] (w3) [label=below:$w$] at (2.4,-1.03) {};
\node[white] (w4) [label=right:$w$] at (8.48,-3.38) {};
\node[white] (w5) [label=left:$w$] at (4.6,-3.59) {};
\node[white] (w6) [label=above:$w$] at (6.18,1.3) {};
\node[black] (b1) [label=above:$b$] at (7.83,0.18) {};
\node[black] (b2) [label=right:$b$] at (8.3,-1.39) {};
\node[black] (b3) [label=above:$b$] at (3.64,0.54) {};
\node[black] (b4) [label=below:$b$] at (6.51,-3.02) {};
\node[black] (b5) [label=below left:$b$] at (4.31,-1.61) {};
\end{tikzpicture}
\endpgfgraphicnamed
\caption{A branch point of multiplicity 5=5-1+1 (labels indicate image points)}
\label{fig:merging}
\end{figure}
\end{remark}
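For the configuration of Figure~\ref{fig:merging}, this bookkeeping reads as follows: $F=5$ quadrilaterals (all of them biconstant, so $b_f(Q)=1$ each), $I=1$ interior vanishing point (with branch number $-1$), and $B=2(5-1+1)=10$ boundary points. Indeed, \[F-I=5-1=4=\sum_{Q \in V(\Diamond_0)} b_f(Q) + \sum_{v \in V_{\textnormal{int}}} b_f(v)=5-1,\] consistent with one branch point of multiplicity $F-I+1=5$.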
\begin{corollary}\label{cor:surjective}
Let $f:V(\Lambda) \to V(\Lambda')$ be discrete holomorphic and not biconstant. Then, $f$ is surjective. If in addition $\Sigma$ is compact, then $\Sigma'$ is compact as well.
\end{corollary}
\begin{proof}
Assume that $f$ is not surjective. Then, there is a $v'\in V(\Lambda')$ not contained in the image. Say $v'$ is black. Take $v_0' \in f(V(\Gamma))$ combinatorially closest to $v'$. Since all black neighbors of a black vanishing point of $f$ have the same image and $f$ is not biconstant, there is a regular point $v_0$ in the preimage of $v_0'$. By Lemma~\ref{lem:open_map}, the image of the star of $v_0$ equals the star of $v_0'$. Thus, there is an image point combinatorially nearer to $v'$ than $v_0'$, a contradiction.
If $\Sigma$ is compact, then $\Lambda$ is finite. So $\Lambda'$ is finite as well and $\Sigma'$ is compact.
\end{proof}
\begin{corollary}\label{cor:Liouville}
Let $\Sigma$ be compact and $\Sigma'$ be homeomorphic to a plane. Then, any discrete holomor\-phic mapping $f:V(\Lambda) \to V(\Lambda')$ is biconstant.
\end{corollary}
Note that we will prove the more general discretization of Liouville's theorem that any complex valued discrete holomorphic function $f:V(\Lambda) \to \mC$ on a compact discrete Riemann surface is biconstant later in Theorem~\ref{th:Liouville}.
\begin{theorem}\label{th:degree}
Let $f:V(\Lambda) \to V(\Lambda')$ be a discrete holomorphic mapping. Then, there exists a number $N \in \mZ_{\geq 0} \cup \left\{ \infty \right\}$ such that for all $v' \in V(\Lambda')$: \[N=\sum_{v\in f^{-1}(v')}\left(b_f(v)+1\right).\] Furthermore, for any $Q' \in F(\Lambda')$, $N$ equals the number of $Q \in F(\Lambda)$ such that $f$ maps the vertices of $Q$ bijectively to the vertices of $Q'$.
\end{theorem}
\begin{proof}
If $f$ is biconstant, then all $b_f(v)+1$ are zero and $N=0$ fulfills the requirements.
Assume now that $f$ is not biconstant. By Corollary~\ref{cor:surjective}, $f$ is surjective. Let $Q' \in F(\Lambda')$ and let $v'$ be a vertex of $Q'$. We want to count the number $N$ of $Q \in F(\Lambda)$ such that $f$ maps the vertices of $Q$ bijectively to the vertices of $Q'$. Let $v \in f^{-1}(v')$. By Lemma~\ref{lem:open_map}, exactly $b_f(v)+1$ quadrilaterals incident to $v$ are mapped bijectively to $Q'$. Conversely, any $Q \in F(\Lambda)$ such that $f$ maps the vertices of $Q$ bijectively to the vertices of $Q'$ has exactly one vertex in the preimage $f^{-1}(v')$. Therefore, \[N=\sum_{v\in f^{-1}(v')}\left(b_f(v)+1\right).\] The same formula holds true if we replace $Q'$ by another face incident to $v'$, or $v'$ by another vertex incident to $Q'$. Thus, $N$ depends neither on the choice of the face $Q'$ nor on that of the incident vertex $v'$.
\end{proof}
\begin{definition}
If $N>0$, then $f$ is called an \textit{$N$-sheeted discrete holomorphic covering}.
\end{definition}
\begin{remark}
If $\Sigma$ is compact, then $N<\infty$. The characterization of $N$ as the number of preimage quadrilaterals nicely explains why $N$ is called the number of sheets of $f$. However, a quadrilateral of $\Lambda$ corresponds to one of the $N$ sheets (and not to just two single points) only if $f$ is not biconstant there.
\end{remark}
Finally, we state and prove a \textit{discrete Riemann-Hurwitz formula}.
\begin{theorem}\label{th:Riemann_Hurwitz}
Let $\Sigma$ be compact and $f:V(\Lambda) \to V(\Lambda')$ be an $N$-sheeted discrete holomorphic covering of the compact discrete Riemann surface $\Sigma'$ of genus $g'$. Then, the genus $g$ of $\Sigma$ is equal to \[g=N(g'-1)+1+\frac{b}{2},\] where $b$ is the total branching number of $f$: \[b=\sum_{v \in V(\Lambda)} b_f(v) + \sum_{Q \in V(\Diamond)} b_f(Q).\]
\end{theorem}
\begin{proof}
Since we consider quad-decompositions, the number of edges of $\Lambda$ equals twice the number of faces. Thus, the Euler characteristic $2-2g$ of $\Sigma$ is given by $|V(\Lambda)|-|V(\Diamond)|$. By Theorem~\ref{th:degree}, \[|V(\Lambda)|=N|V(\Lambda')|-\sum_{v \in V(\Lambda)} b_f(v).\] If we count the number of faces of $\Lambda$, then we have $N|V(\Diamond')|$ quadrilaterals that are mapped to a complete quadrilateral of $\Lambda'$ by Theorem~\ref{th:degree} and $\sum_{Q \in V(\Diamond)} b_f(Q)$ faces are mapped to an edge of $\Lambda'$. Hence, \[|V(\Diamond)|=N|V(\Diamond')|+\sum_{Q \in V(\Diamond)} b_f(Q).\]
\begin{align*}2-2g=|V(\Lambda)|-|V(\Diamond)|&=N|V(\Lambda')|-\sum_{v \in V(\Lambda)} b_f(v)-N|V(\Diamond')|-\sum_{Q \in V(\Diamond)} b_f(Q)=N(2-2g')-b\end{align*} now implies the final result.
\end{proof}
\begin{example}
In the example depicted in Figure~\ref{fig:cover}, $g=3$, $g'=0$, $N=2$, and $b=8$. \[3=2\cdot(0-1)+1+\frac{8}{2}\] then demonstrates the validity of the discrete Riemann-Hurwitz formula.
\end{example}
\section{Discrete exterior calculus}\label{sec:differentials}
In this section, we consider a discrete Riemann surface $(\Sigma, \Lambda, z)$ and adapt the fundamental notions and properties of discrete complex analysis discussed in \cite{BoG15} to discrete Riemann surfaces. All omitted proofs can be literally translated from \cite{BoG15} to the more general setting of discrete Riemann surfaces.
Note that our treatment of discrete exterior calculus is similar to Mercat's approach in \cite{Me01,Me07,Me08}. However, in Section~\ref{sec:differential_forms}
we suggest a different notation of multiplication of functions with discrete one-forms, leading to a discrete exterior derivative that is defined on a larger class of discrete one-forms in Section~\ref{sec:derivative}. It coincides with Mercat's discrete exterior derivative in the case of discrete one-forms of type $\Diamond$ that he considers. In contrast, our definitions mimic the coordinate representation of the smooth theory. Still, our definitions of a discrete wedge product in Section~\ref{sec:wedge} and a discrete Hodge star in Section~\ref{sec:hodge} are equivalent to Mercat's in \cite{Me08}.
\subsection{Discrete differential forms} \label{sec:differential_forms}
The most important functions are those of the form $f:V(\Lambda)\to\mC$, but in local charts, complex functions defined on subsets of $V(\Diamond)$, such as $\partial_\Lambda f$, occur as well.
\begin{definition}
A \textit{discrete one-form} or \textit{discrete differential} $\omega$ is a complex function on the oriented edges of the medial graph $X_0$ such that $\omega(-e)=-\omega(e)$ for any oriented edge $e$ of $X_0$. Here, $-e$ denotes the edge $e$ with opposite orientation.
The evaluation of $\omega$ at an oriented edge $e$ of $X_0$ is denoted by $\int_e \omega$. For a directed path $P$ in $X_0$ consisting of oriented edges $e_1,e_2,\ldots,e_n$, the \textit{discrete integral} along $P$ is defined as $\int_P \omega=\sum_{k=1}^n \int_{e_k} \omega$. For closed paths $P$, we write $\oint_P \omega$ instead.
\end{definition}
\begin{remark}
If we speak about discrete one-forms or discrete differentials and do not specify their domain, then we will always assume that they are defined on oriented edges of the whole medial graph $X$.
\end{remark}
Of particular interest are discrete one-forms that actually come from discrete one-forms on $\Gamma$ and $\Gamma^*$.
\begin{definition}
A discrete one-form $\omega$ defined on the oriented edges of $X_0$ is of \textit{type} $\Diamond$ if for any quadrilateral $Q \in V(\Diamond_0)$ and its incident black (or white) vertices $v,v'$ the equality $\omega([Q,v])=-\omega([Q,v'])$ holds. The latter two edges inherit their orientation from $\partial F_Q$.
\end{definition}
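For example, the restriction of $dz_Q$ to the edges of the face $F_Q$ is of type $\Diamond$: with $b_\pm,w_\pm$ the vertices of $Q$ as in Section~\ref{sec:Cauchy_Riemann} and orientations inherited from $\partial F_Q$, the Varignon parallelogram property gives \[\int_{[Q,b_-]}dz_Q=\frac{z_Q(w_-)-z_Q(w_+)}{2}=-\int_{[Q,b_+]}dz_Q,\] and similarly for the two edges at the white vertices.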
\begin{definition}
A \textit{discrete two-form} $\Omega$ is a complex function on $F(X_0)$.
The evaluation of $\Omega$ at a face $F$ of $X_0$ is denoted by $\iint_F \Omega$. If $S$ is a set of faces $F_1,\ldots, F_n$ of $X_0$, then $\iint_S \Omega=\sum_{k=1}^n \iint_{F_k} \Omega$ defines the \textit{discrete integral} of $\Omega$ over $S$.
$\Omega$ is of \textit{type} $\Lambda$ if $\Omega$ vanishes on all faces of $X_0$ corresponding to $V(\Diamond_0)$ and of \textit{type} $\Diamond$ if $\Omega$ vanishes on all faces of $X_0$ corresponding to $V(\Lambda_0)$.
\end{definition}
\begin{remark}
Discrete two-forms of type $\Lambda$ or type $\Diamond$ correspond to functions on $V(\Lambda_0)$ or $V(\Diamond_0)$ by the discrete Hodge star that will be defined later in Section~\ref{sec:hodge}.
\end{remark}
\begin{definition}
For brevity, let $z$ be either a chart $z_Q$ of a quadrilateral $Q \in V(\Diamond)$ or a chart $z_v$ of the star of a vertex $v \in V(\Lambda)$. On its domain, the discrete one-forms $dz$ and $d\bar{z}$ are defined in such a way that $\int_e dz=z(e)$ and $\int_e d\bar{z}=\overline{z(e)}$ hold for any oriented edge $e$ of $X$. The discrete two-forms $\Omega_\Lambda^z$ and $\Omega_\Diamond^z$ are zero on faces of $X$ corresponding to vertices of $\Diamond$ or $\Lambda$, respectively, and defined by \[\iint_F \Omega_\Lambda^z=-4i\textnormal{area}(z(F)) \textnormal{ and } \iint_F \Omega_\Diamond^z=-4i\textnormal{area}(z(F))\] on faces $F$ corresponding to vertices of $\Lambda$ or $\Diamond$, respectively. Here, $\textnormal{area}(z(F))$ denotes the algebraic area of the polygon $z_v(F_v)$ or the Euclidean area of the parallelogram $z(F)$, respectively.
\end{definition}
\begin{remark}
Our main objects either live on the quad-decomposition $\Lambda$ or on its dual $\Diamond$. Thus, we have to deal with two different cellular decompositions at the same time. The medial graph has the crucial property that its faces split into two sets, in bijection with the cells of $\Lambda=\Diamond^*$ and of $\Diamond=\Lambda^*$, respectively. Furthermore, the Euclidean area of the Varignon parallelogram inside a quadrilateral $z(Q)$ is just half of its area. In an abstract sense, a corresponding statement is true for the cells of $X$ corresponding to vertices of $\Lambda$ and the faces of $\Diamond$. This statement can be made precise in the setting of planar parallelogram-graphs, see \cite{BoG15}. For this reason, the additional factor of two is necessary to make $\Omega_\Lambda^z$ and $\Omega_\Diamond^z$ the straightforward discretizations of $dz \wedge d\bar{z}$. As it turns out in Section~\ref{sec:wedge}, $\Omega_\Diamond^z$ is indeed the discrete wedge product of $dz$ and $d\bar{z}$.
\end{remark}
\begin{definition}
Let $f:V(\Lambda_0)\to\mC$, $h:V(\Diamond_0)\to\mC$, $\omega$ a discrete one-form defined on the oriented edges of $X_0$, and $\Omega_1,\Omega_2$ discrete two-forms defined on $F(X_0)$ that are of type $\Lambda$ and $\Diamond$, respectively. For any oriented edge $e=[Q,v]$ and any faces $F_v, F_Q$ of $X_0$ corresponding to $v\in V(\Lambda_0)$ or $Q \in V(\Diamond_0)$, we define the products $f\omega$, $h\omega$, $f\Omega_1$, and $h\Omega_2$ by
\begin{align*}
\int_{e}f\omega:&=f(v)\int_{e}\omega \ \quad \textnormal{ and } \quad \iint_{F_v} f\Omega_1:=f(v)\iint_{F_v}\Omega_1, \quad \iint_{F_Q} f\Omega_1:=0;\\
\int_{e}h\omega:&=h(Q)\int_{e}\omega \quad \textnormal{ and } \quad \iint_{F_v} h\Omega_2:=0, \qquad \qquad \qquad \; \! \iint_{F_Q} h\Omega_2:=h(Q)\iint_{F_Q}\Omega_2.
\end{align*}
\end{definition}
\begin{remark}
A discrete one-form of type $\Diamond$ can be locally represented as $pdz_Q+qd\bar{z}_Q$ on all edges of a face of $X$ corresponding to $Q \in V(\Diamond)$, where $p,q \in \mC$. Similarly, we could define discrete one-forms of type $\Lambda$. However, this notion would depend on the chart and would not be well-defined on a discrete Riemann surface.
\end{remark}
\subsection{Discrete derivatives and Stokes' theorem}\label{sec:derivative}
\begin{definition}
Let $Q \in V(\Diamond)\cong F(\Lambda)$ and $f$ be a complex function on the vertices of $Q$. In addition, let $v\in V(\Lambda)$ and $h$ be a complex function defined on all quadrilaterals $Q_s \sim v$. Let $F_Q$ and $F_v$ be the faces of $X$ corresponding to $Q$ and $v$ with counterclockwise orientations of their boundaries. Then, the \textit{discrete derivatives} $\partial_\Lambda f$, $\bar{\partial}_\Lambda f$ in the chart $z_Q$ and $\partial_\Diamond h$, $\bar{\partial}_\Diamond h$ in the chart $z_v$ are defined by
\begin{align*}
\partial_\Lambda f(Q)&:=\frac{1}{\iint_{F_Q} \Omega_\Diamond^{z_Q}}\oint_{\partial F_Q} f d\bar{z}_Q, \qquad \bar{\partial}_\Lambda f (Q):=\frac{-1}{\iint_{F_Q} \Omega_\Diamond^{z_Q}}\oint_{\partial F_Q} f dz_Q;\\
\partial_\Diamond h(v)&:=\frac{1}{\iint_{F_v} \Omega_\Lambda^{z_v}}\oint_{\partial F_v} h d\bar{z}_v, \qquad \quad \bar{\partial}_\Diamond h(v):=\frac{-1}{\iint_{F_v} \Omega_\Lambda^{z_v}}\oint_{\partial F_v} h dz_v.
\end{align*}
$h$ is said to be \textit{discrete holomorphic} in the chart $z_v$ if $\bar{\partial}_\Diamond h (v)=0$.
\end{definition}
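For a single quadrilateral in the identity chart, the discrete derivatives reduce to explicit contour sums over the medial face. The following sketch (sample vertices and all helper names are ours) evaluates them and confirms that $f(v)=v$ has $\partial_\Lambda f = 1$ and $\bar\partial_\Lambda f = 0$.

```python
# One quadrilateral Q with black vertices b-, b+ and white vertices w-, w+,
# listed counterclockwise, in the identity chart z_Q.
bm, wm, bp, wp = 0 + 0j, 3 + 0j, 4 + 3j, 1 + 2j

def derivatives(f):
    """Return (d_Lambda f(Q), dbar_Lambda f(Q)) for values f on the vertices.

    The boundary of the medial face F_Q runs through the edge midpoints of Q,
    so the edge of X passing a vertex has dz-integral equal to half of the
    opposite diagonal; Omega_Diamond integrates over F_Q to -4i times the
    area of the Varignon parallelogram.
    """
    u, s = bp - bm, wp - wm                        # black and white diagonal
    om_Q = -4j * 0.25 * (u.conjugate() * s).imag   # iint of Omega_Diamond over F_Q
    oint_dz = 0.5 * ((f[bp] - f[bm]) * s - (f[wp] - f[wm]) * u)
    oint_dzbar = 0.5 * ((f[bp] - f[bm]) * s.conjugate()
                        - (f[wp] - f[wm]) * u.conjugate())
    return oint_dzbar / om_Q, -oint_dz / om_Q

# f(v) = v has derivative 1 and dbar-derivative 0; note that dbar f(Q) = 0 is
# exactly the difference-quotient equation
# (f(b+) - f(b-))/(b+ - b-) = (f(w+) - f(w-))/(w+ - w-).
d, db = derivatives({v: v for v in (bm, wm, bp, wp)})
assert abs(d - 1) < 1e-12 and abs(db) < 1e-12
```

By conjugation symmetry, $f(v)=\bar v$ gives $\partial_\Lambda f = 0$ and $\bar\partial_\Lambda f = 1$ in the same chart.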
As in the classical theory, the discrete derivatives depend on the chosen chart. We do not indicate these dependencies in the notation, but the chart in use will always be clear from the context.
\begin{remark}
Whereas discrete holomorphicity for functions $f:V(\Lambda) \to \mC$ is well-defined and equivalent to $\bar{\partial}_\Lambda f (Q)=0$ in any chart $z_Q$ (see \cite{BoG15}), discrete holomorphicity of functions on $V(\Diamond)$ is not consistently defined by the discrete complex structure. Indeed, if $\rho_Q=1$ for all faces $Q$ incident to $v \in V(\Lambda)$, then any cyclic polygon with the correct number of vertices can be the image of the vertices adjacent to $v$ under a chart $z_v$ compatible with the discrete complex structure, but the equation $\bar{\partial}_\Diamond h (v)=0$ depends on the choice of the cyclic polygon.
\end{remark}
\begin{definition}
Let $f:V(\Lambda_0) \to \mC$ and $h:V(\Diamond_0) \to \mC$. We define the \textit{discrete exterior derivatives} $df$ and $dh$ on the edges of $X_0$ in a chart $z$ as follows:
\begin{align*}
df:=\partial_\Lambda f dz+\bar{\partial}_\Lambda f d\bar{z}, \quad dh:=\partial_\Diamond h dz+\bar{\partial}_\Diamond h d\bar{z}.
\end{align*}
Let $\omega$ be a discrete one-form defined on all boundary edges of a face $F_v$ of the medial graph $X$ corresponding to $v\in V(\Lambda)$ or on all four boundary edges of a face $F_Q$ of $X$ corresponding to $Q\in F(\Lambda)$. In a chart $z$ around $F_v$ or $F_Q$, respectively, we write $\omega=p dz+ q d\bar{z}$ with functions $p,q$ defined on faces incident to $v$ or vertices incident to $Q$, respectively. The \textit{discrete exterior derivative} $d\omega$ is given by
\begin{align*}
d\omega|_{F_v}&:=\left(\partial_\Diamond q - \bar{\partial}_\Diamond p\right) \Omega_\Lambda^z,\\
d\omega|_{F_Q}&:=\left(\partial_\Lambda q - \bar{\partial}_\Lambda p\right) \Omega_\Diamond^z.
\end{align*}
\end{definition}
The representation $\omega=p dz+ q d\bar{z}$ used above (with $p,q$ defined on edges of $X$) may be nonunique. However, by the following \textit{discrete Stokes' theorem}, $d\omega$ is well-defined and does not depend on the chosen chart.
\begin{theorem}\label{th:stokes}
Let $f:V(\Lambda_0) \to \mC$ and $\omega$ be a discrete one-form defined on oriented edges of $X_0$. Then, for any directed edge $e$ of $X_0$ starting in the midpoint of the edge $vv'_-$ and ending in the midpoint of the edge $vv'_+$ of $\Lambda_0$ and for any finite collection of faces $F$ of $X_0$ with counterclockwise oriented boundary $\partial F$ we have:
\begin{align*}
\int_e df&=\frac{f(v'_+)-f(v'_-)}{2}=\frac{f(v)+f(v'_+)}{2}-\frac{f(v)+f(v'_-)}{2};\\
\iint_F d\omega&=\oint_{\partial F} \omega.
\end{align*}
\end{theorem}
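The first identity can be verified directly on one medial face. In the sketch below (sample vertices and arbitrary values of $f$ are ours), the edge of $X$ passing the white vertex $w_-$ joins the midpoints of $b_-w_-$ and $w_-b_+$, so $z(e)=(b_+-b_-)/2$ and the theorem predicts $\int_e df=(f(b_+)-f(b_-))/2$.

```python
# One face F_Q of the medial graph over the quadrilateral (b-, w-, b+, w+)
# in the identity chart.
bm, wm, bp, wp = 0 + 0j, 3 + 0j, 4 + 3j, 1 + 2j
u, s = bp - bm, wp - wm                      # black and white diagonal
om_Q = -1j * (u.conjugate() * s).imag        # iint of Omega_Diamond over F_Q

f = {bm: 1.0 + 2.0j, wm: -0.5j, bp: 2.0 - 1.0j, wp: 0.25 + 0.75j}
dfb, dfw = f[bp] - f[bm], f[wp] - f[wm]
dL = 0.5 * (dfb * s.conjugate() - dfw * u.conjugate()) / om_Q   # d_Lambda f
dbL = -0.5 * (dfb * s - dfw * u) / om_Q                         # dbar_Lambda f

def int_df(ze):
    """Integral of df = dL dz + dbL dzbar over a medial edge e with z(e) = ze."""
    return dL * ze + dbL * ze.conjugate()

assert abs(int_df(u / 2) - dfb / 2) < 1e-12   # edge passing w-
assert abs(int_df(s / 2) - dfw / 2) < 1e-12   # edge passing b+
# Around the whole boundary of F_Q the four contributions telescope to zero,
# in accordance with ddf = 0:
assert abs(int_df(u / 2) + int_df(s / 2) + int_df(-u / 2) + int_df(-s / 2)) < 1e-12
```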
\begin{definition}
Let $\Diamond_0 \subseteq \Diamond$ form a simply-connected closed region. A discrete one-form $\omega$ defined on oriented edges of $X_0$ is said to be \textit{closed} if $d\omega\equiv 0$.
\end{definition}
\begin{proposition}\label{prop:dd0}
Let $f:V(\Lambda) \to \mC$. Then, $ddf=0$.
\end{proposition}
\begin{corollary}\label{cor:commutativity}
Let $f$ be a function defined on the vertices of all quadrilaterals incident to $v \in V(\Lambda)$. Then, $\partial_\Diamond\bar{\partial}_\Lambda f(v)=\bar{\partial}_\Diamond\partial_\Lambda f(v)$ in a chart $z_v$ of the star of $v$. In particular, $\partial_\Lambda f$ is discrete holomorphic in $z_v$ if $f$ is discrete holomorphic.
\end{corollary}
\begin{corollary}\label{cor:f_holomorphic}
Let $f:V(\Lambda) \to \mC$. Then, $f$ is discrete holomorphic at all faces incident to $v\in V(\Lambda)$ if and only if in a chart $z_v$ around $v$, $df=p dz_v$ for some function $p$ defined on the faces incident to $v$. In this case, $p$ is discrete holomorphic in $z_v$.
\end{corollary}
\begin{definition}
A discrete differential $\omega$ of type $\Diamond$ is \textit{discrete holomorphic} if $d\omega=0$ and if in any chart $z_Q$ of a quadrilateral $Q\in V(\Diamond)$, $\omega=p dz_Q$. $\omega$ is \textit{discrete antiholomorphic} if $\bar{\omega}$ is discrete holomorphic.
\end{definition}
\begin{remark}
It suffices to check this condition for just one chart of $Q$, as follows from Lemma~\ref{lem:Hodge_projection} below. In particular, discrete holomorphicity of discrete one-forms depends on the discrete complex structure only. If $\omega$ is discrete holomorphic, then we can write $\omega=p dz_v$ in a chart $z_v$ around $v \in V(\Lambda)$, where $p$ is a function defined on the faces incident to $v$. In this case, $p$ is discrete holomorphic in $z_v$. Conversely, the closedness condition can be replaced by requiring that $p$ is discrete holomorphic.
\end{remark}
\begin{proposition}\label{prop:primitive2}
Let $\Diamond_0 \subseteq \Diamond$ form a simply-connected closed region and let $\omega$ be a closed discrete differential of type $\Diamond$ defined on oriented edges of $X_0$. Then, there is a function $f:=\int \omega :V(\Lambda_0)\to\mC$ such that $\omega=df$. $f$ is unique up to two additive constants on $\Gamma_0$ and $\Gamma^*_0$. If $\omega$ is discrete holomorphic, then $f$ is as well.
\end{proposition}
\subsection{Discrete wedge product}\label{sec:wedge}
Let $\omega$ be a discrete one-form of type $\Diamond$. Then, for any chart $z_Q$ of a quadrilateral $Q\in V(\Diamond)$ there is a unique representation $\omega|_{\partial F_Q}=p dz_Q+q d\bar{z}_Q$ with complex numbers $p$ and $q$. To calculate them, one can first construct a function $f$ on the vertices of $Q$ such that $\omega|_{\partial F_Q}=df$ and then take $p=\partial_\Lambda f$ and $q=\bar{\partial}_\Lambda f$, see \cite{BoG15}.
\begin{definition}
Let $\omega,\omega'$ be two discrete one-forms of type $\Diamond$ defined on the oriented edges of $X_0$. Then, the \textit{discrete wedge product} $\omega\wedge\omega'$ is defined as the discrete two-form of type $\Diamond$ that equals \[\left(pq'-qp'\right)\Omega_\Diamond^{z_Q}\] on a face $F_Q$ corresponding to $Q\in V(\Diamond)$. Here, $z_Q$ is a chart of $Q$ and $\omega|_{\partial F_Q}=p dz_Q+ q d\bar{z}_Q$ and $\omega'|_{\partial F_Q}=p' dz_Q+ q' d\bar{z}_Q$.
\end{definition}
The following proposition connects our definition of a discrete wedge product with Mercat's in \cite{Me01,Me07,Me08} and also shows that the discrete wedge product does not depend on the choice of the chart.
\begin{proposition}\label{prop:wedge_Mercat}
Let $F_Q$ be the face of $X$ corresponding to $Q\in V(\Diamond)$, let $z_Q$ be a chart, and let $e,e^*$ be the oriented edges of $X$ parallel to the black and white diagonal of $Q$, respectively, such that $\im \left(z_Q(e^*)/z_Q(e)\right)>0$. Then, \[\iint_{F_Q} \omega\wedge\omega' = 2\int_e \omega \int_{e^*} \omega'- 2\int_{e^*} \omega \int_e \omega'.\]
\end{proposition}
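Both the chart-independence and Mercat's formula can be confirmed numerically on one face. The sketch below uses sample vertices and coefficients of our choosing; `integ` is a helper, not notation from the text.

```python
# Wedge product versus Mercat's formula on one quadrilateral face
# (identity chart).
bm, wm, bp, wp = 0 + 0j, 3 + 0j, 4 + 3j, 1 + 2j
u, s = bp - bm, wp - wm
om_Q = -1j * (u.conjugate() * s).imag          # iint of Omega_Diamond over F_Q

p, q = 1.2 - 0.3j, 0.7 + 0.5j                  # omega  = p  dz + q  dzbar
pp, qp = -0.4 + 1.0j, 0.6 - 0.2j               # omega' = p' dz + q' dzbar

def integ(a, b, ze):
    """Integral of a dz + b dzbar over a medial edge e with z(e) = ze."""
    return a * ze + b * ze.conjugate()

# e and e* are parallel to the black and white half-diagonals:
ze, zes = u / 2, s / 2
assert (zes / ze).imag > 0                     # orientation assumption of the text

lhs = (p * qp - q * pp) * om_Q                 # definition of the wedge product
rhs = 2 * integ(p, q, ze) * integ(pp, qp, zes) \
    - 2 * integ(p, q, zes) * integ(pp, qp, ze)  # Mercat's formula
assert abs(lhs - rhs) < 1e-12
```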
Finally, the discrete exterior derivative is a derivation for the wedge product:
\begin{theorem}\label{th:derivation}
Let $f:V(\Lambda_0) \to \mC$ and $\omega$ be a discrete one-form of type $\Diamond$ defined on the oriented edges of $X_0$. Then, the following identity holds on $F(X_0)$: \[d(f\omega)=df\wedge\omega+fd\omega.\]
\end{theorem}
\subsection{Discrete Hodge star and discrete Laplacian}\label{sec:hodge}
\begin{definition}
Let $\Omega_{\Sigma}$ be a fixed nowhere vanishing discrete two-form, $f:F(\Lambda_0)\to\mC$, $h:V(\Diamond_0)\to\mC$, $\omega$ a discrete one-form of type $\Diamond$ defined on oriented edges of $X_0$, and $\Omega$ a discrete two-form either of type $\Lambda$ or $\Diamond$. In a chart $z_Q$ of $Q\in V(\Diamond)$, we write $\omega|_{\partial F_Q}=p dz_Q +qd\bar{z}_Q$. Then, the \textit{discrete Hodge star} is defined by \begin{align*} \star f:= f \Omega_\Sigma; \quad \star h:= h \Omega_\Sigma; \quad \star \omega|_{\partial F_Q}:=-ip dz_Q+iq d\bar{z}_Q;\quad \star \Omega&:=\frac{\Omega}{\Omega_\Sigma}.\end{align*}
\end{definition}
\begin{remark}
In the planar case, the choice of $\Omega_{\Sigma}=i/2 \Omega_\Diamond^z$ on faces of $X$ corresponding to faces of the quad-graph and $\Omega_{\Sigma}=i/2 \Omega_\Lambda^z$ on faces corresponding to vertices is the most natural one. Throughout the remainder of this chapter, $\Omega_{\Sigma}$ is a fixed positive real two-form on $(\Sigma,\Lambda,z)$.
In the classical setup, there is a canonical nonvanishing two-form coming from a complete Riemannian metric of constant curvature. An interesting question is whether there exists some canonical two-form for discrete Riemann surfaces as we defined them. Note that the nonlinear theory developed in \cite{BoPSp10} contains a uniformization of discrete Riemann surfaces and discrete metrics with constant curvature.
\end{remark}
\begin{proposition}\label{prop:hodge_Mercat}
Let $Q\in V(\Diamond)$ with chart $z_Q$, and let $e,e^*$ be oriented edges of $X$ parallel to the black and white diagonal of $Q$, respectively, such that $\im \left(z_Q(e^*)/z_Q(e)\right)>0$. If $\omega$ is a discrete one-form of type $\Diamond$ defined on the oriented edges of the boundary of the face of $X$ corresponding to $Q$, then
\begin{align*}
\int_e \star\omega&=\cot\left(\varphi_Q\right) \int_e \omega-\frac{|z_Q(e)|}{|z_Q(e^*)| \sin\left(\varphi_Q\right)}\int_{e^*}\omega,\\
\int_{e^*} \star\omega&=\frac{|z_Q(e^*)|}{|z_Q(e)| \sin\left(\varphi_Q\right)} \int_e \omega-\cot\left(\varphi_Q\right)\int_{e^*}\omega.
\end{align*}
\end{proposition}
Proposition~\ref{prop:hodge_Mercat} shows not only that our definition of a discrete Hodge star on discrete one-forms does not depend on the chosen chart, but also that it coincides with Mercat's definition given in \cite{Me08}.
Clearly, $\star^2=-\textnormal{Id}$ on discrete differentials of type $\Diamond$ and $\star^2=\textnormal{Id}$ on complex functions and discrete two-forms. The next lemma shows that discrete holomorphic differentials are well-defined.
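Both Mercat's formulas and the identity $\star^2=-\textnormal{Id}$ on one-forms of type $\Diamond$ can be checked on a sample face. In the sketch below, the vertices, coefficients, and the angle $\varphi_Q := \arg(z_Q(e^*)/z_Q(e))$ between the diagonals are our illustrative choices.

```python
import math, cmath

# The discrete Hodge star of omega = p dz + q dzbar on one quadrilateral
# face, in the identity chart.
bm, wm, bp, wp = 0 + 0j, 3 + 0j, 4 + 3j, 1 + 2j
u, s = bp - bm, wp - wm
ze, zes = u / 2, s / 2                          # medial edges e, e*
phi = cmath.phase(zes / ze)                     # angle between the diagonals
r = abs(zes) / abs(ze)
p, q = 1.2 - 0.3j, 0.7 + 0.5j

def integ(a, b, zedge):
    return a * zedge + b * zedge.conjugate()

# In the chart, star omega = -i p dz + i q dzbar; evaluate it on e and e*:
star_e = integ(-1j * p, 1j * q, ze)
star_es = integ(-1j * p, 1j * q, zes)

# Mercat's formulas from Proposition hodge_Mercat:
cot = math.cos(phi) / math.sin(phi)
assert abs(star_e - (cot * integ(p, q, ze)
                     - integ(p, q, zes) / (r * math.sin(phi)))) < 1e-12
assert abs(star_es - (r / math.sin(phi) * integ(p, q, ze)
                      - cot * integ(p, q, zes))) < 1e-12

# In the basis (int_e, int_e*), the star acts by a 2x2 matrix that squares
# to -Id, confirming star^2 = -Id on one-forms of type Diamond:
M = [[cot, -1 / (r * math.sin(phi))],
     [r / math.sin(phi), -cot]]
M2 = [[sum(M[i][k] * M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert max(abs(M2[i][j] - (-1 if i == j else 0))
           for i in range(2) for j in range(2)) < 1e-12
```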
\begin{lemma}\label{lem:Hodge_projection}
Let $Q \in V(\Diamond)$ and $F_Q$ be the face of $X$ corresponding to $Q$. A discrete differential $\omega$ of type $\Diamond$ defined on the oriented edges of $F_Q$ is of the form $\omega=p dz_Q$ (or $\omega=q d\bar{z}_Q$) in any chart $z_Q$ of $Q\in V(\Diamond)$ if and only if $\star\omega=-i\omega$ (or $\star\omega=i\omega$).
\end{lemma}
\begin{proof}
Let us take a (unique) representation $\omega=p dz_Q+ q d\bar{z}_Q$ in a coordinate chart $z_Q$ of $Q\in V(\Diamond)$. By definition, $\star\omega=-i\omega$ is equivalent to $q=0$. Analogously, $\star\omega=i\omega$ is equivalent to $p=0$.
\end{proof}
\begin{definition}
If $\omega$ and $\omega'$ are both discrete differentials of type $\Diamond$ defined on oriented edges of $X$, then we define their \textit{discrete scalar product} \[\langle \omega, \omega' \rangle:=\iint_{F(X)} \omega \wedge \star\bar{\omega}',\] whenever the right hand side converges absolutely. In a similar way, the discrete scalar product between two discrete two-forms or two complex functions on $V(\Lambda)$ is defined.
\end{definition}
A calculation in local coordinates shows that $\langle \cdot,\cdot \rangle$ is indeed a Hermitian scalar product.
\begin{definition}
$L_2(\Sigma,\Lambda,z)$ is the Hilbert space of \textit{square integrable} discrete differentials with respect to $\langle \cdot,\cdot \rangle$.
\end{definition}
\begin{proposition}\label{prop:adjoint2}
$\delta:=-\star d \star$ is the \textit{formal adjoint} of the discrete exterior derivative $d$:
Let $f:V(\Lambda)\to\mC$, let $\omega$ be a discrete one-form of type $\Diamond$, and let $\Omega:F(X)\to\mC$ be a discrete two-form of type $\Lambda$. If all of them are compactly supported, then \[\langle df, \omega \rangle =\langle f, \delta \omega \rangle \textnormal{ and }\langle d\omega,\Omega\rangle= \langle \omega, \delta \Omega\rangle.\]
\end{proposition}
\begin{definition}
The \textit{discrete Laplacian} on functions $f:V(\Lambda)\to\mC$, discrete one-forms of type $\Diamond$, or discrete two-forms on $F(X)$ of type $\Lambda$ is defined as the linear operator \[\triangle:=-\delta d-d\delta=\star d \star d +d \star d \star.\]
$f:V(\Lambda)\to\mC$ is said to be \textit{discrete harmonic} at $v\in V(\Lambda)$ if $\triangle f(v)=0$.
\end{definition}
\begin{remark}
Note that straight from the definition and Corollary~\ref{cor:commutativity}, it follows for $f:V(\Lambda)\to\mC$ that $\triangle f (v)$ is proportional to $4\partial_\Diamond\bar{\partial}_\Lambda f(v)=4\bar{\partial}_\Diamond\partial_\Lambda f(v)$ in the chart $z_v$ around $v \in V(\Lambda)$. In particular, discrete harmonicity of functions does not depend on the choice of $\Omega_\Sigma$, and discrete holomorphic functions are discrete harmonic.
\end{remark}
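This factorization of the discrete Laplacian can be made concrete on the star of a vertex in the unit square lattice. The sketch below evaluates the combination $4\partial_\Diamond\bar\partial_\Lambda f(v)$, to which $\triangle f(v)$ is proportional; the lattice, the vertex labels, and the helper names are our illustrative choices. For $f(v)=|v|^2$ it reproduces the classical value $\Delta|z|^2=4$, and linear functions come out discrete harmonic.

```python
# The star of v = 0 in the unit square lattice: four unit squares around the
# origin, each listed counterclockwise as (v, w-, b+, w+), identity chart z_v.
quads = [(0, 1, 1 + 1j, 1j),
         (0, 1j, -1 + 1j, -1),
         (0, -1, -1 - 1j, -1j),
         (0, -1j, 1 - 1j, 1)]

def dbar_Lambda(f, quad):
    """dbar-derivative of f on one quadrilateral, identity chart."""
    bm, wm, bp, wp = quad
    u, s = bp - bm, wp - wm
    om_Q = -1j * (u.conjugate() * s).imag        # iint of Omega_Diamond over F_Q
    return -0.5 * ((f(bp) - f(bm)) * s - (f(wp) - f(wm)) * u) / om_Q

def d_Diamond_at_0(h):
    """d-derivative at v = 0 of a function h on the four incident quads."""
    mids = [0.5, 0.5j, -0.5, -0.5j]              # vertices of the medial face F_v
    area = 0.5 * sum((mids[k].conjugate() * mids[(k + 1) % 4]).imag
                     for k in range(4))
    om_v = -4j * area                            # iint of Omega_Lambda over F_v
    # the medial edge inside the quad (v, w-, b+, w+) has z(e) = (w+ - w-)/2
    oint = sum(h(Q) * ((Q[3] - Q[1]) / 2).conjugate() for Q in quads)
    return oint / om_v

f = lambda v: abs(v) ** 2                        # classically, Laplacian of |z|^2 is 4
lap = 4 * d_Diamond_at_0(lambda Q: dbar_Lambda(f, Q))
assert abs(lap - 4) < 1e-12

g = lambda v: (2 - 1j) * v + 0.5j * v.conjugate() + 1   # linear, hence harmonic
assert abs(4 * d_Diamond_at_0(lambda Q: dbar_Lambda(g, Q))) < 1e-12
```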
\begin{lemma}\label{lem:Dirichlet_boundary2}
Let $(\Sigma,\Lambda,z)$ be a compact discrete Riemann surface. Then, the discrete Dirichlet energy functional $E_\Diamond$ defined by $E_{\Diamond}(f):=\langle df,df \rangle$ for functions $f:V(\Lambda) \to \mR$ is a convex nonnegative quadratic functional in the vector space of real functions on $V(\Lambda)$. Furthermore, \[-\frac{\partial E_{\Diamond}}{\partial f(v)}(f)=2\triangle f(v)\iint_{F_v} \Omega_{\Sigma} \] for any $v \in V(\Lambda)$. In particular, extremal points of this functional are functions that are discrete harmonic everywhere.
\end{lemma}
We conclude this section by stating and proving \textit{discrete Liouville's theorem}.
\begin{theorem}\label{th:Liouville}
Let $(\Sigma,\Lambda,z)$ be a compact discrete Riemann surface. Then, any discrete harmonic function $f:V(\Lambda)\to\mC$ is biconstant. In particular, any complex valued discrete holomorphic function is biconstant.
\end{theorem}
\begin{proof}
Since $\delta$ is the formal adjoint of $d$ by Proposition~\ref{prop:adjoint2},
\[\langle df,df \rangle=\langle f, \delta d f \rangle=-\langle f, \triangle f \rangle=0,\] using that $\triangle f=-\delta df$ for functions $f$. Now, $\langle df,df \rangle \geq 0$ and equality holds only if $df=0$, i.e., if $f$ is biconstant.
\end{proof}
\section{Periods of discrete differentials}\label{sec:periods}
In this section, we define the (discrete) periods of a closed discrete differential of type $\Diamond$ on a compact discrete Riemann surface $(\Sigma, \Lambda, z)$ of genus $g$ in Section~\ref{sec:cover} and state and prove a discrete Riemann bilinear identity in Section~\ref{sec:RBI}. Although we aim at being as close as possible to the smooth case in our presentation, the bipartite structure of $\Lambda$ prevents us from doing so. We struggle with the same problem of white and black periods as Mercat did for discrete Riemann surfaces whose discrete complex structure is described by real numbers $\rho_Q$ in \cite{Me07}. The reason for this is that a discrete differential of type $\Diamond$ corresponds to a pair of discrete differentials on each of $\Gamma$ and $\Gamma^*$.
Mercat constructed out of a canonical homology basis on $\Lambda$ certain canonical homology bases on $\Gamma$ and $\Gamma^*$. By solving a discrete Neumann problem, he then proved the existence of dual cohomology bases on $\Gamma$ and $\Gamma^*$. The discrete Riemann bilinear identity for the elements of the bases (and by linearity for general closed discrete differentials) was a direct consequence of the construction.
In contrast, the proof given in \cite{BoSk12} followed the ideas of the smooth case, but the relation to discrete wedge products was not that immediate. We will give a full proof of the general discrete Riemann bilinear identity that follows the lines of the proof of the classical Riemann bilinear identity, using almost the same notation. The main difference from \cite{BoSk12} is that we use a different refinement of the cellular decomposition in order to profit from a cellular decomposition of the canonical polygon with $4g$ vertices. The appearance of black and white periods indicates the analogy to Mercat's approach in \cite{Me07}.
\subsection{Universal cover and periods}\label{sec:cover}
Let $p: \tilde{\Sigma} \to \Sigma$ denote the universal covering of the compact surface $\Sigma$. $p$ gives rise to a bipartite quad-decomposition $\tilde{\Lambda}$ with medial graph $\tilde{X}$ and a covering $p: \tilde{\Lambda} \to \Lambda$. Now, $(\tilde{\Sigma}, \tilde{\Lambda}, z \circ p)$ is a discrete Riemann surface as well and $p: V(\tilde{\Lambda})\to V(\Lambda)$ is a discrete holomorphic mapping.
We fix a base vertex $\tilde{v}_0 \in V(\tilde{\Lambda})$. Let $\alpha_1, \ldots, \alpha_g, \beta_1, \ldots, \beta_g$ be smooth loops on $\Sigma$ with base point $v_0:=p(\tilde{v}_0)$ such that these loops cut out a fundamental $4g$-gon $F_g$. It is well known that such loops exist; they appear along the boundary of $F_g$ in the order $\alpha_k, \beta_k, \alpha_k^{-1}, \beta_k^{-1}$, with $k$ running from $1$ to $g$. Their homology classes $a_1, \ldots, a_g, b_1, \ldots, b_g$ form a canonical homology basis of $H_1(\Sigma,\mZ)$.
Clearly, there are homotopies between $\alpha_1, \ldots, \alpha_g, \beta_1, \ldots, \beta_g$ and closed paths $\alpha'_1, \ldots, \alpha'_g, \beta'_1, \ldots, \beta'_g$ on $X$, all of the latter having the same fixed base point $x_0\in V(X)$.
\begin{definition}
Let $P$ be an oriented cycle on $X$. $P$ induces closed paths on $\Gamma$ and $\Gamma^*$ that we denote by $B(P)$ and $W(P)$ in the following way: For an oriented edge $[Q,v]$ of $P$, we add the black (or white) vertex $v$ to $B(P)$ (or $W(P)$) and the corresponding white (or black) diagonal of $Q \in F(\Lambda)$ to $W(P)$ (or $B(P)$), see Figure~\ref{fig:contours2}. The orientation of the diagonal is induced by the orientation of $[Q,v]$. Clearly, $B(P)$ and $W(P)$ are cycles on $\Gamma$ and $\Gamma^*$ that are homotopic to $P$. We denote the one-chains on $X$ consisting of all the black or white edges corresponding to $B(P)$ and $W(P)$ by $BP$ and $WP$, respectively.
\end{definition}
\begin{figure}[htbp]
\begin{center}
\beginpgfgraphicnamed{medial}
\begin{tikzpicture}
[white/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
gray/.style={circle,draw=black,fill=gray,thin,inner sep=0pt,minimum size=1.2mm},scale=1.0]
\node[white] (w1)
at (-2,-2) {};
\node[white] (w2)
at (0,-2) {};
\node[white] (w3)
at (2,-2) {};
\node[white] (w4)
at (-1,-1) {};
\node[white] (w5)
at (1,-1) {};
\node[white] (w6)
at (-2,0) {};
\node[white] (w7)
at (0,0) {};
\node[white] (w8)
at (2,0) {};
\node[white] (w9)
at (-1,1) {};
\node[white] (w10)
at (1,1) {};
\node[white] (w11)
at (-2,2) {};
\node[white] (w12)
at (0,2) {};
\node[white] (w13)
at (2,2) {};
\node[black] (b1)
at (-1,-2) {};
\node[black] (b2)
at (1,-2) {};
\node[black] (b3)
at (-2,-1) {};
\node[black] (b4)
at (0,-1) {};
\node[black] (b5)
at (2,-1) {};
\node[black] (b6)
at (-1,0) {};
\node[black] (b7)
at (1,0) {};
\node[black] (b8)
at (-2,1) {};
\node[black] (b9)
at (0,1) {};
\node[black] (b10)
at (2,1) {};
\node[black] (b11)
at (-1,2) {};
\node[black] (b12)
at (1,2) {};
\node[gray] (m1)
at (0,-1.5) {};
\node[gray] (m2)
at (0.5,-1) {};
\node[gray] (m3)
at (1,-0.5) {};
\node[gray] (m4)
at (1.5,0) {};
\node[gray] (m5)
at (1,0.5) {};
\node[gray] (m6)
at (0.5,1) {};
\node[gray] (m7)
at (0,1.5) {};
\node[gray] (m8)
at (-0.5,1) {};
\node[gray] (m9)
at (-1,0.5) {};
\node[gray] (m10)
at (-1.5,0) {};
\node[gray] (m11)
at (-1,-0.5) {};
\node[gray] (m12)
at (-0.5,-1) {};
\draw[dashed] (w1) -- (b1) -- (w2) -- (b2) -- (w3);
\draw[dashed] (b3) -- (w4) -- (b4) -- (w5) -- (b5);
\draw[dashed] (w6) -- (b6) -- (w7) -- (b7) -- (w8);
\draw[dashed] (b8) -- (w9) -- (b9) -- (w10) -- (b10);
\draw[dashed] (w11) -- (b11) -- (w12) -- (b12) -- (w13);
\draw[dashed] (w1) -- (b3) -- (w6) -- (b8) -- (w11);
\draw[dashed] (b1) -- (w4) -- (b6) -- (w9) -- (b11);
\draw[dashed] (w2) -- (b4) -- (w7) -- (b9) -- (w12);
\draw[dashed] (b2) -- (w5) -- (b7) -- (w10) -- (b12);
\draw[dashed] (w3) -- (b5) -- (w8) -- (b10) -- (w13);
\draw (w2) -- (w5) -- (w8) -- (w10) -- (w12) -- (w9) -- (w6) -- (w4) -- (w2);
\draw (b4) -- (b7) -- (b9) -- (b6) -- (b4);
\draw[color=gray] (m1) -- (m2) -- (m3) -- (m4) -- (m5) -- (m6) -- (m7) -- (m8) -- (m9) -- (m10) -- (m11) -- (m12) -- (m1);
\coordinate[label=center:$W(P)$] (z1) at (-0.2,-0.3) {};
\coordinate[label=center:$P$] (z2) at (0.5,-1.25) {};
\coordinate[label=center:$B(P)$] (z3) at (-0.85,-1.7) {};
\end{tikzpicture}
\endpgfgraphicnamed
\caption{Cycles $P$ on $X$, $B(P)$ on $\Gamma$, and $W(P)$ on $\Gamma^*$}
\label{fig:contours2}
\end{center}
\end{figure}
\begin{definition}
Let $\omega$ be a closed discrete differential of type $\Diamond$. For $1 \leq k \leq g$, we define its $a_k$\textit{-periods} $A_k:=\oint_{\alpha'_k} \omega$ and $b_k$\textit{-periods} $B_k:=\oint_{\beta'_k} \omega$ and its
\begin{align*}
\textit{black } a_k\textit{-periods } A^B_k&:=2\int_{B\alpha'_k} \omega \; \textnormal{ and } \; \textit{black } b_k\textit{-periods } B^B_k:=2\int_{B\beta'_k} \omega\\
\textnormal{and its }\textit{white } a_k\textit{-periods } A^W_k&:=2\int_{W\alpha'_k} \omega \textnormal{ and } \textit{white } b_k\textit{-periods } B^W_k:=2\int_{W\beta'_k} \omega.
\end{align*}
\end{definition}
\begin{remark}
The reason for the factor of two is that to compute the black or white periods, we actually integrate $\omega$ on $\Gamma$ or $\Gamma^*$ and not on the medial graph $X$. Clearly, $2A_k=A_k^B+A_k^W$ and $2B_k=B_k^B+B_k^W$.
\end{remark}
\begin{lemma}
The periods of the closed discrete differential $\omega$ of type $\Diamond$ depend only on the homology classes $a_k$ and $b_k$, i.e., if $\alpha''_k, \beta''_k$, $1 \leq k \leq g$, are loops on $X$ that are in the homology classes $a_k$ and $b_k$, respectively, then
\begin{align*}
\int_{a_k} \omega &:= A_k=\oint_{\alpha''_k} \omega, \int_{Ba_k} \omega :=A_k^B=2\int_{B\alpha''_k} \omega, \int_{Wa_k} \omega :=A_k^W=2\int_{W\alpha''_k} \omega;\\
\int_{b_k} \omega &:= B_k=\oint_{\beta''_k} \omega, \int_{Bb_k} \omega :=B_k^B=2\int_{B\beta''_k} \omega, \int_{Wb_k} \omega :=B_k^W=2\int_{W\beta''_k} \omega.
\end{align*}
\end{lemma}
\begin{proof}
That $a$- and $b$-periods of a closed discrete one-form depend on the homology class only follows from discrete Stokes' Theorem~\ref{th:stokes}. For the other four cases, we use that $\omega$ induces discrete differentials on $\Gamma$ and $\Gamma^*$ in the obvious way since it is of type $\Diamond$. These differentials are closed in the sense that the integral along the black (or white) cycle around any white (or black) vertex of $\Lambda$ vanishes. Since the paths $B\alpha'_k$ and $B\alpha''_k$ on $\Gamma$ are both in the homology class $a_k$, $\int_{B\alpha'_k} \omega=\int_{B\alpha''_k} \omega$. The same reasoning applies for the other cases.
\end{proof}
\subsection{Discrete Riemann bilinear identity}\label{sec:RBI}
Again, let $\alpha_1, \ldots, \alpha_g, \beta_1, \ldots, \beta_g$ be smooth loops on the compact surface $\Sigma$ with base point $v_0 \in V(\Lambda)$ such that these loops cut out a fundamental $4g$-gon $F_g$. For the following two definitions, we follow \cite{BoSk12}, but we give a different proof of the discrete Riemann bilinear identity.
\begin{definition}
For a loop $\alpha$ on $\Sigma$, let $d_{\alpha}$ denote the induced deck transformations on $\tilde{\Sigma}$, $\tilde{\Lambda}$, and $\tilde{X}$.
\end{definition}
\begin{definition}
$f:V(\tilde{\Lambda})\to \mC$ is \textit{multi-valued} with black periods $A_1^B, A_2^B, \ldots, A_g^B$, $B_1^B, B_2^B, \ldots, B_g^B \in \mC$ and white periods $A_1^W, A_2^W, \ldots, A_g^W$, $B_1^W, B_2^W, \ldots, B_g^W \in \mC$ if
\begin{align*}
f(d_{\alpha_k}b)=f(b)+A_k^B, \quad f(d_{\alpha_k}w)=f(w)+A_k^W,\quad f(d_{\beta_k}b)=f(b)+B_k^B,\quad f(d_{\beta_k}w)=f(w)+B_k^W
\end{align*}
for any $1\leq k \leq g$, each black vertex $b\in V(\tilde{\Gamma})$, and each white vertex $w \in V(\tilde{\Gamma}^*)$.
\end{definition}
\begin{lemma}\label{lem:multivalued}
Let $f:V(\tilde{\Lambda})\to \mC$ be multi-valued. Then, $df$ defines a closed discrete one-form of type $\Diamond$ on the oriented edges of $X$ and $df$ has the same black and white periods as $f$. Conversely, if $\omega$ is a closed discrete differential of type $\Diamond$, then there is a multi-valued function $f:V(\tilde{\Lambda})\to \mC$ such that $df$ projects to $\omega$. If $\omega$ is discrete holomorphic, then $f$ is as well.
\end{lemma}
\begin{proof}
Let $\tilde{e}$ be an oriented edge of $\tilde{X}$. Discrete Stokes' Theorem~\ref{th:stokes} implies that $df(d_\alpha(\tilde{e}))=df(\tilde{e})$ for any loop $\alpha$ on $\Sigma$. In particular, $df$ is well-defined on the oriented edges of $X$. Closedness follows from $ddf=0$ by Proposition~\ref{prop:dd0}. Clearly, black and white periods of $f$ and $df$ are the same by definition of these periods.
Let $\tilde{\omega}$ be the lift of $\omega$ to $\tilde{X}$. Since the universal cover is simply-connected, it follows from Proposition~\ref{prop:primitive2} that there exists a discrete primitive $f:=\int \tilde{\omega} :V(\tilde{\Lambda})\to\mC$ that is discrete holomorphic if $\omega$ is.
\end{proof}
\begin{remark}
As a consequence, white and black periods of a closed discrete one-form of type $\Diamond$ are not determined by its periods.
\end{remark}
We are now ready to prove the following \textit{discrete Riemann bilinear identity}.
\begin{theorem}\label{th:RBI}
Let $\omega$ and $\omega'$ be closed discrete differentials of type $\Diamond$. Let their black and white periods be given by $A_k^B, B_k^B, A_k^W, B_k^W$ and ${A'_k}^B, {B'_k}^B, {A'_k}^W, {B'_k}^W$, respectively, for $k=1,\ldots,g.$ Then, \[\iint_{F(X)} \omega \wedge \omega'=\frac{1}{2}\sum_{k=1}^g \left(A_k^B {B'_k}^W-B_k^B {A'_k}^W\right)+\frac{1}{2}\sum_{k=1}^g \left(A_k^W {B'_k}^B-B_k^W {A'_k}^B\right).\]
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:multivalued}, there is a multi-valued function $f:V(\tilde{\Lambda})\to \mC$ such that $df=\omega$ with the same black and white periods as $\omega$. Let $F_v$ and $F_Q$ be faces of $X$ corresponding to $v\in V(\Lambda)$ and $Q\in V(\Diamond)\cong F(\Lambda)$. Consider any lifts of the star of $v$ and of $Q$ to $\tilde{\Lambda}$, and denote by $\tilde{F}_v$ and $\tilde{F}_Q$ the corresponding lifts of $F_v$ and $F_Q$ to $F(\tilde{X})$. Lifting $\omega$ and $\omega'$ to $\tilde{X}$ and using that $\omega'$ is closed, Theorem~\ref{th:derivation} gives $\omega \wedge \omega'= d(f\omega')$. So by discrete Stokes' Theorem~\ref{th:stokes}, \[\iint_{F} \omega \wedge \omega'=\oint_{\partial \tilde{F}} f\omega',\] where $F$ is either $F_v$ or $F_Q$. Note that the right hand side is independent of the chosen lift $\tilde{F}$ because $\omega'$ is closed. It follows that the statement above remains true when we integrate over $F(X)$ and the counterclockwise oriented boundary of any collection $\tilde{F}(X)$ consisting of one lift of each face of $X$ to $\tilde{X}$.
It remains to compute $\int_{\partial \tilde{F}(X)} f\omega'$. If $g=0$, then $\tilde{\Sigma}=\Sigma$ and $f$ is a complex function on $V(\Lambda)$. Furthermore, the boundary of $\tilde{F}(X)=F(X)$ is empty, so $\iint_{F(X)} \omega \wedge \omega'=0$ as claimed. In what follows, let $g>0$.
By definition, if $e=[\tilde{Q},\tilde{v}]$ is an edge of $\tilde{X}$ ($F(\tilde{\Lambda})\ni\tilde{Q}\sim \tilde{v} \in V(\tilde{\Lambda})$), then $\int_e f\omega'=f(\tilde{v})\int_e \omega'$. So we may consider $f$ as a function on $E(\tilde{X})$ defined by $f([\tilde{Q},\tilde{v}]):=f(\tilde{v})$. Then, $f:E(\tilde{X})\to\mC$ fulfills for any $k$:
\begin{align*}
f(d_{\alpha_k}[\tilde{Q},\tilde{v}])=f([\tilde{Q},\tilde{v}])+A_k^B &\textnormal{ and } f(d_{\beta_k}[\tilde{Q},\tilde{v}])=f([\tilde{Q},\tilde{v}])+B_k^B \textnormal{ if } \tilde{v}\in V(\tilde{\Gamma}),\\
f(d_{\alpha_k}[\tilde{Q},\tilde{v}])=f([\tilde{Q},\tilde{v}])+A_k^W &\textnormal{ and } f(d_{\beta_k}[\tilde{Q},\tilde{v}])=f([\tilde{Q},\tilde{v}])+B_k^W \textnormal{ if } \tilde{v}\in V(\tilde{\Gamma}^*).
\end{align*}
In this sense, $f$ is multi-valued on $E(\tilde{X})$ with black (white) periods defined on white (black) edges.
Since $f$ and $\omega'$ are now determined by topological data, we may forget the discrete complex structure of $\tilde{\Sigma}$ and consider $\omega'$ and $f\omega'$ as functions on the oriented edges. Their evaluation on an edge $e$ will still be denoted by $\int_e$. Let $\tilde{\Sigma}'$ be the polyhedral surface obtained from $\tilde{X}$ by requiring that all faces are regular polygons of side length one; $\Sigma'$ is constructed from $X$ in the same way. Now, $p$ induces a covering $p:\tilde{\Sigma}'\to \Sigma'$ in a natural way by requiring that $p$ restricted to each face is an isometry.
The homeomorphic images of the paths $\alpha_k,\beta_k$ are loops on $\Sigma'$ with the base point being somewhere inside the face $F_{v_0}$. Let us choose piecewise smooth paths on $\Sigma'$ with base point being the center of $F_{v_0}$ homotopic to the previous loops such that the new paths (that will be denoted the same) still cut out a fundamental $4g$-gon.
For $v \in V(\Lambda)$, subdivide the regular polygon corresponding to $F_v$ into smaller polygonal cells by straight lines, and use the same subdivision on all its lifts. All new edges get the same color as the original edges of $F_v$ had, i.e., the opposite color to the one of $v$. We extend $f$ to the new edges by the value $f(v)$. Obviously, the new function is still multi-valued with the same periods. We define the one-form $\omega'$ on the new edges consecutively by inserting straight lines. Each time an existing oriented edge $e$ is subdivided into two equally oriented parts $e'$ and $e''$, we define $\int_{e'}\omega'=\int_{e''}\omega':=\int_{e}\omega'/2$. On segments of the inserted line, we define $\omega'$ by the condition that it should remain closed. Defining a black (or white) $c$-period of $\omega'$ on the subdivided cellular decomposition as twice the discrete integral over all black (or white) edges of a closed path with homology $c$, we see that the black and white $a$- and $b$-periods of $\omega'$ are the same as before.
Now, let $F_Q$ be the square corresponding to $Q \in F(\Lambda)$. We consider a subdivision of $F_Q$ (and all its lifts) into smaller polygonal cells induced by straight lines parallel to the edges of the square, requiring in addition that all subdivision points on the edges of $F_Q$ coming from the previous subdivisions of $F_v$, $v\sim Q$, are part of it. A new edge is black (or white) if it is parallel to an original black (or white) edge of $X$. Any new edge $e'$ is of length $0<l\leq 1$ and $e'$ is parallel to an edge $e$ of $F_Q$. Since $\omega'$ is of type $\Diamond$, it coincides on parallel edges, so we can define $\int_{e'}\omega':=l\int_{e}\omega'$. By construction, the new discrete one-form $\omega'$ is closed, and its black and white periods do not change. $f$ is extended in such a way that if the new edge $e'$ is parallel to the edges $[Q,v]$ and $[Q,v']$, having distance $0\leq l\leq 1$ to $[Q,v]$ and distance $1-l$ to $[Q,v']$, then $f(e'):=(1-l)f([Q,v])+lf([Q,v'])$. $f$ is still multi-valued with the same periods.
If the subdivisions of faces $F_v$ and $F_Q$ are fine enough, then we find cycles homotopic to $\alpha_k,\beta_k$ on the edges of the resulting cellular decomposition $X'$ on $\Sigma$ in such a way that they still cut out a fundamental polygon with $4g$ vertices. Let us denote these loops by $\alpha_k,\beta_k$ as well.
By construction, $\oint_{\partial F} f\omega'$ equals the sum of all discrete contour integrals of $f\omega'$ around faces of the subdivision of the face $F$ of $X$. It follows that $\int_{\partial \tilde{F}(X)} f\omega'=\int_{\partial \tilde{F}(X')} f\omega'$ for any collection $\tilde{F}(X')$ of lifts of faces of $X'$, using that $\omega'$ is closed. Let us choose $\tilde{F}(X')$ in such a way that it builds a fundamental $4g$-gon whose boundary consists of lifts $\tilde{\alpha}_k,\tilde{\beta}_k$ of $\alpha_k,\beta_k$ and lifts $\tilde{\alpha}_k^{-1},\tilde{\beta}_k^{-1}$ of its reverses. Since interior edges of the polygon are traversed twice in both directions, they do not contribute to the discrete integral and we get \[\iint_{F(X)} \omega \wedge \omega'=\int_{\partial \tilde{F}(X')} f\omega'=\sum_{k=1}^g \left( \int_{\tilde{\alpha}_k} f\omega' +\int_{\tilde{\alpha}_k^{-1}} f\omega'\right)+\sum_{k=1}^g \left( \int_{\tilde{\beta}_k} f\omega' +\int_{\tilde{\beta}_k^{-1}} f\omega'\right).\]
Let $e$ be an edge of $\tilde{\alpha}_k$ and $e'$ the corresponding edge of $\tilde{\alpha}_k^{-1}$. Then, $d_{\beta_k}e=-e'$. Hence, $\omega'$ has opposite signs on $e$ and $e'$, and $f$ differs by $B_k^W$ on black edges and by $B_k^B$ on white edges. Therefore, \begin{align*} \int_{\tilde{\alpha}_k} f\omega' +\int_{\tilde{\alpha}_k^{-1}} f\omega'&=\int_{B\tilde{\alpha}_k} \left(f\omega' - (f+B_k^W)\omega'\right)+\int_{W\tilde{\alpha}_k} \left(f\omega' - (f+B_k^B)\omega'\right)\\&=-\frac{1}{2}B_k^W {A'_k}^B-\frac{1}{2}B_k^B {A'_k}^W. \end{align*}
If $e$ is an edge of $\tilde{\beta}_k$ and $e'$ the corresponding edge of $\tilde{\beta}_k^{-1}$, then $d_{\alpha_k^{-1}}e=-e'$. Thus, \[\int_{\tilde{\beta}_k} f\omega' +\int_{\tilde{\beta}_k^{-1}} f\omega'=\frac{1}{2}A_k^W {B'_k}^B+\frac{1}{2}A_k^B {B'_k}^W.\] Inserting the last two equations into the previous one gives the desired result.
\end{proof}
\begin{remark}
Note that as in the classical case, the formula is true for any canonical homology basis $\{a_1,\ldots,a_g,b_1,\ldots,b_g\}$, not necessarily the one we started with. The proof is essentially the same as in the smooth theory, see \cite{Gue14}.
\end{remark}
\begin{corollary}\label{cor:RBI2}
Let $\omega$ and $\omega'$ be closed discrete differentials of type $\Diamond$. Let their periods be given by $A_k, B_k$ and ${A'_k}, {B'_k}$, respectively, and assume that the black $a$-periods of $\omega,\omega'$ coincide with the corresponding white $a$-periods. Then, \[\iint_{F(X)} \omega \wedge \omega'=\sum_{k=1}^g \left(A_k {B'_k}-B_k {A'_k}\right).\]
\end{corollary}
\section{Discrete harmonic and discrete holomorphic differentials}\label{sec:harmonic_holomorphic}
Throughout this section, which investigates discrete harmonic and discrete holomorphic differentials, let $(\Sigma,\Lambda,z)$ be a discrete Riemann surface. In Section~\ref{sec:Hodge_decomposition}, we state the discrete Hodge decomposition. Afterwards, we restrict to compact $\Sigma$: we compute the dimension of the space of discrete holomorphic differentials in Section~\ref{sec:harm_holo}, and we introduce discrete period matrices in Section~\ref{sec:period_matrices}. Throughout Sections~\ref{sec:harm_holo} and~\ref{sec:period_matrices}, we thus assume that $\Sigma$ is compact and of genus $g$, and we let $\{a_1,\ldots,a_g,b_1,\ldots,b_g\}$ be a canonical basis of $H_1(\Sigma,\mZ)$.
\subsection{Discrete Hodge decomposition}\label{sec:Hodge_decomposition}
\begin{definition}
A discrete differential $\omega$ of type $\Diamond$ is \textit{discrete harmonic} if it is closed and \textit{co-closed}, i.e., $d\omega=0$ and $d\star \omega=0$ (or, equivalently, $\delta \omega=0$).
\end{definition}
\begin{lemma}\label{lem:harmonic_forms}
Let $\omega$ be a discrete differential of type $\Diamond$.
\begin{enumerate}
\item $\omega$ is discrete harmonic if and only if for any $\Diamond_0\subseteq\Diamond$ forming a simply-connected closed region, there exists a discrete harmonic function $f:V(\Lambda_0)\to\mC$ such that $\omega=df$.
\item Let $\Sigma$ be compact. Then, $\omega$ is discrete harmonic if and only if $\triangle \omega=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Suppose that $\omega$ is discrete harmonic. Then, it is closed, and since $\Diamond_0$ forms a simply-connected closed region, Proposition~\ref{prop:primitive2} gives the existence of $f:V(\Lambda_0)\to\mC$ such that $\omega=df$ on oriented edges of $X_0$. Now, $\triangle f=\delta d f=\delta \omega =0$, so $f$ is discrete harmonic. Conversely, if $\omega=df$ locally, then $d\omega=ddf=0$ by Proposition~\ref{prop:dd0} (which also holds locally, see \cite{BoG15}) and $\delta \omega = \delta df=\triangle f=0$ by definition.
(ii) If $\omega$ is discrete harmonic, then $d\omega=\delta\omega=0$ implies $\triangle \omega=0$. Conversely, let $\triangle \omega=0$. Using that $\delta$ is the formal adjoint of $d$ on compact discrete Riemann surfaces by Proposition~\ref{prop:adjoint2}, \[0=\langle \triangle \omega, \omega \rangle= \langle d\omega, d\omega \rangle + \langle \delta\omega, \delta \omega \rangle.\] Both summands on the right-hand side are nonnegative, so they have to vanish. Hence, $d\omega=\delta \omega=0$, i.e., $\omega$ is closed and co-closed.
\end{proof}
The proof of the following \textit{discrete Hodge decomposition} follows the lines of the proof in the smooth theory given in the book \cite{FaKra80} of Farkas and Kra.
\begin{theorem}
Let $E,E^*$ denote the sets of \textit{exact} and \textit{co-exact} square integrable discrete differentials of type $\Diamond$, i.e., $E$ and $E^*$ consist of all $\omega=df$ and $\omega=\star df$, respectively, where $f:V(\Lambda) \to \mC$ and $\langle \omega,\omega\rangle <\infty$. Let $H$ be the set of square integrable discrete harmonic differentials. Then, we have an orthogonal decomposition $L_2(\Sigma,\Lambda,z)=E\oplus E^*\oplus H$.
\end{theorem}
\begin{proof}
Clearly, $E$ and $E^*$ are the closures of all exact and co-exact square integrable discrete differentials of type $\Diamond$ of compact support. Let $E^\perp$ and ${E^*}^\perp$ denote the orthogonal complements of $E$ and $E^*$ in $L_2(\Sigma,\Lambda,z)$. Then, $\omega \in E^\perp$ if and only if $\langle \omega, df\rangle=0$ for all $f:V(\Lambda)\to\mC$ of compact support. To compute the scalar product, we may restrict $\omega$ to a finite neighborhood of the support of $f$, so Proposition~\ref{prop:adjoint2} implies $0=\langle \omega,df\rangle=\langle \delta \omega,f\rangle$. It follows that $\delta \omega=0$. Thus, $E^\perp$ consists of all co-closed discrete differentials of type $\Diamond$. Similarly, ${E^*}^\perp$ is the space of all closed discrete differentials of type $\Diamond$. By Proposition~\ref{prop:dd0}, any (co-)exact discrete differential of type $\Diamond$ is (co-)closed, so we get an orthogonal decomposition $L_2(\Sigma,\Lambda,z)=E\oplus E^*\oplus H$, $H=E^\perp \cap {E^*}^\perp$ being the set of all discrete harmonic differentials.
\end{proof}
\subsection{Existence of certain discrete differentials}\label{sec:harm_holo}
First, we want to show that for any set of black and white periods there is a discrete harmonic differential with these periods. In \cite{Me07}, Mercat proved this statement by referring to a (discrete) Neumann problem. The proof given in \cite{BoSk12} used the finite-dimensional Fredholm alternative. Here, we give a proof based on the (discrete) Dirichlet energy.
\begin{theorem}\label{th:harmonic_existence}
Let $A_k^B,B_k^B,A_k^W, B_k^W$, $1\leq k \leq g$, be $4g$ given complex numbers. Then, there exists a unique discrete harmonic differential $\omega$ with these black and white periods.
\end{theorem}
\begin{proof}
Since periods are linear in the discrete differentials, it suffices to prove the statement for real periods. Let us consider the vector space of all multi-valued functions $f:V(\tilde{\Lambda}) \to \mR$ having the given black and white periods. For such a function $f$, $df$ is well-defined on $X$, as is the discrete Dirichlet energy $E_\Diamond(f)=\langle df, df\rangle$. By Lemma~\ref{lem:Dirichlet_boundary2}, the critical points of this functional are discrete harmonic functions, noting that $\triangle f$ is a function on $V(\Lambda)$. Since the discrete Dirichlet energy is convex, quadratic, and nonnegative, a minimum $f:V(\tilde{\Lambda}) \to \mR$ has to exist. By Lemma~\ref{lem:harmonic_forms}~(i), $\omega:=df$ is discrete harmonic and has the required periods by Lemma~\ref{lem:multivalued}.
Suppose that $\omega$ and $\omega'$ are two discrete harmonic differentials with the same black and white periods. Since $\omega-\omega'$ is closed, there is a multi-valued function $f:V(\tilde{\Lambda}) \to \mC$ such that $\omega-\omega'=df$ by Lemma~\ref{lem:multivalued}. But the black and white periods of $f$ vanish, so $f$ is well-defined on $V(\Lambda)$ and discrete harmonic by Lemma~\ref{lem:harmonic_forms}~(i). By the discrete Liouville Theorem~\ref{th:Liouville}, $\omega-\omega'=df=0$.
\end{proof}
\begin{lemma}\label{lem:holo_harm}
Let $\omega$ be a discrete differential of type $\Diamond$.
\begin{enumerate}
\item $\omega$ is discrete harmonic if and only if it can be decomposed as $\omega=\omega_1+\bar{\omega}_2$, where $\omega_1,\omega_2$ are discrete holomorphic differentials.
\item $\omega$ is discrete holomorphic if and only if it can be decomposed as $\omega=\alpha+i\star\alpha$, where $\alpha$ is a discrete harmonic differential.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Suppose that $\omega=\omega_1+\bar{\omega}_2$, where $\omega_1,\omega_2$ are discrete holomorphic. Then, $\omega$ is closed since $\omega_1,\omega_2$ are, and it is co-closed since $d\star\omega_k=-id\omega_k=0$ by Lemma~\ref{lem:Hodge_projection}. Thus, $\omega$ is discrete harmonic.
Conversely, let $\omega$ be discrete harmonic. Then, we can write $\omega=p dz_v + q d\bar{z}_v$ in a chart $z_v$ around $v\in V(\Lambda)$, where $p,q$ are complex functions on the faces incident to $v$. Define $\omega_1:=p dz_v$ and $\omega_2:=\bar{q} dz_v$ in the chart $z_v$. By Lemma~\ref{lem:Hodge_projection}, $\omega_1,\omega_2$ are well defined on the whole discrete Riemann surface as the projections of $\omega$ onto the $\pm i$-eigenspaces of $\star$.
Since $\omega$ is closed, $0=d\omega|_{F_v}=\left(\partial_{\Diamond} q (v) - \bar{\partial}_{\Diamond} p(v)\right)\Omega^{z_v}_\Lambda$, so $\partial_{\Diamond} q (v) = \bar{\partial}_{\Diamond} p (v)$. Similarly, $d\star \omega|_{F_v}=0$ implies $\partial_{\Diamond} q (v) = -\bar{\partial}_{\Diamond} p (v)$. Thus, $\bar{\partial}_{\Diamond} p(v)=0=\partial_{\Diamond} q(v)$, i.e., $p,\bar{q}$ are discrete holomorphic in $v$. It follows that $\omega_1,\omega_2$ are discrete holomorphic.
(ii) Suppose that $\omega=\alpha+i\star\alpha$. Then, $d\omega=0$ because $\alpha$ is closed and co-closed. In addition, we have $\star \omega=\star\alpha-i\alpha=-i\omega$. By Lemma~\ref{lem:Hodge_projection}, $\omega$ is discrete holomorphic. Conversely, for discrete harmonic $\omega$ we define $\alpha:=(\omega+\bar{\omega})/2$ that is discrete harmonic by (i) and that satisfies $\omega=\alpha+i\star\alpha$ by construction.
\end{proof}
\begin{corollary}\label{cor:dimension}
The complex vector space $\mathcal{H}$ of discrete holomorphic differentials has dimension $2g$.
\end{corollary}
\begin{proof}
Using that \[\langle\omega_1,\bar{\omega}_2\rangle=\iint_{F(X)}\omega_1\wedge \star\omega_2=-i\iint_{F(X)}\omega_1\wedge \omega_2=0\] for discrete holomorphic differentials $\omega_1,\omega_2$ (by looking at the coordinate representations, $\omega_1 \wedge \omega_2=0$), Lemma~\ref{lem:holo_harm} implies that the space of discrete harmonic differentials $H$ is a direct orthogonal sum of the spaces $\mathcal{H}$ and $\bar{\mathcal{H}}$ of discrete holomorphic and discrete antiholomorphic one-forms. Due to Theorem~\ref{th:harmonic_existence}, $\textnormal{dim } H=4g$. Since $\mathcal{H}$ and $\bar{\mathcal{H}}$ are isomorphic, $\textnormal{dim } \mathcal{H}=2g$.
\end{proof}
\begin{remark}
As for the space of discrete harmonic differentials, the dimension of $\mathcal{H}$ is twice that of its classical counterpart, due to the splitting of periods into black and white periods.
\end{remark}
\begin{lemma}\label{lem:holomorphic_periods}
Let $\omega\neq 0$ be a discrete holomorphic differential whose black and white periods are given by $A_k^B,B_k^B$ and $A_k^W,B_k^W$, $1\leq k \leq g$. Then, \[\im\left(\sum_{k=1}^g \left(A_k^B \bar{B}_k^W+A_k^W \bar{B}_k^B \right)\right) <0.\]
\end{lemma}
\begin{proof}
Since $\omega$ is discrete holomorphic, $\omega$ and $\bar{\omega}$ are closed. Thus, we can apply the discrete Riemann Bilinear Identity~\ref{th:RBI} to them: \begin{align*}\iint_{F(X)} \omega \wedge \bar{\omega}= \frac{1}{2}\sum_{k=1}^g \left(A_k^B \bar{B}_k^W-B_k^B {\bar{A}}_k^W\right)+\frac{1}{2}\sum_{k=1}^g \left(A_k^W {\bar{B}}_k^B-B_k^W {\bar{A}}_k^B\right)=\sum_{k=1}^g i\im \left(A_k^B \bar{B}_k^W+A_k^W \bar{B}_k^B \right).\end{align*}
On the other hand, $\omega \wedge \bar{\omega}$ vanishes on faces $F_v$ of $X$ corresponding to vertices $v\in V(\Lambda)$ and in a chart $z_Q$ of $Q\in F(\Lambda)$, $\omega \wedge \bar{\omega}=|p|^2 \Omega^{z_Q}_\Diamond$ if $\omega|_{\partial F_Q}=p dz_Q$. Since $\omega \neq 0$, $p \neq 0$ for at least one $Q$ and \[\im\left(\sum_{k=1}^g \left(A_k^B \bar{B}_k^W+A_k^W \bar{B}_k^B \right)\right)=\im\left(\iint_{F(X)} \omega \wedge \bar{\omega}\right)< 0. \qedhere\]
\end{proof}
\begin{corollary}\label{cor:periods_vanish}
Let $\omega$ be a discrete holomorphic differential.
\begin{enumerate}
\item If all black and white $a$-periods of $\omega$ vanish, then $\omega=0$.
\item If all black and white periods of $\omega$ are real, then $\omega=0$.
\end{enumerate}
\end{corollary}
\begin{proof}
If all black and white $a$-periods of $\omega$ vanish, then each summand $A_k^B \bar{B}_k^W+A_k^W \bar{B}_k^B$ is zero; if all black and white periods of $\omega$ are real, then each summand is real. In both cases, \[\im\left(\sum_{k=1}^g \left(A_k^B \bar{B}_k^W+A_k^W \bar{B}_k^B \right)\right) =0.\] Hence, $\omega=0$ by Lemma~\ref{lem:holomorphic_periods}.
\end{proof}
\begin{theorem}\label{th:holomorphic_existence}
Let $(\Sigma,\Lambda,z)$ be a compact discrete Riemann surface of genus $g$.
\begin{enumerate}
\item For any $2g$ complex numbers $A_k^B,A_k^W$, $1\leq k\leq g$, there exists exactly one discrete holomorphic differential $\omega$ with these black and white $a$-periods.
\item For any $4g$ real numbers $\re\left(A_k^B\right),\re\left(B_k^B\right),\re\left(A_k^W\right),\re\left(B_k^W\right)$, there exists exactly one discrete holomorphic differential $\omega$ such that its black and white periods have these real parts.
\end{enumerate}
\end{theorem}
\begin{proof}
Let us consider the complex-linear map $P_1:\mathcal{H}\to\mC^{2g}$ that assigns to each discrete holomorphic differential its black and white $a$-periods and the real-linear map $P_2:\mathcal{H}\to\mR^{4g}$ that assigns to each discrete holomorphic differential the real parts of its black and white periods. By Corollary~\ref{cor:periods_vanish}, $P_1$ and $P_2$ are injective. By Corollary~\ref{cor:dimension}, $\mathcal{H}$ has complex dimension $2g$ and hence real dimension $4g$, so $P_1$ and $P_2$ have to be surjective.
\end{proof}
\subsection{Discrete period matrices}\label{sec:period_matrices}
Discrete period matrices in the special case of real weights $\rho_Q$ were already studied by Mercat in \cite{Me01b, Me07}. In \cite{BoSk12}, a proof of convergence of discrete period matrices to their continuous counterparts was given and the case of complex weights was sketched.
By Theorem~\ref{th:holomorphic_existence}, there exists exactly one discrete holomorphic differential with prescribed black and white $a$-periods. Having a limit of finer and finer quadrangulations of a Riemann surface in mind, it is natural to demand that black and white $a$-periods coincide.
\begin{definition}
The unique set of $g$ discrete holomorphic differentials $\omega_k$ that satisfies for all $1\leq j,k \leq g$ the equation $2\int_{Ba_j}\omega_k=2\int_{Wa_j}\omega_k=\delta_{jk}$ is called \textit{canonical}. The $(g\times g)$-matrix $\left(\Pi_{jk}\right)_{j,k=1}^g$ with entries $\Pi_{jk}:=\int_{b_j}\omega_k$ is the \textit{discrete period matrix} of the discrete Riemann surface $(\Sigma,\Lambda,z)$.
\end{definition}
The definition of the discrete period matrix as the arithmetic mean of black and white periods was already given in \cite{BoSk12}, adapting Mercat's definition in \cite{Me01b,Me07}. In our notation, with discrete differentials defined on the medial graph, it becomes clear why this is a natural choice. Still, it is reasonable to consider black and white periods separately in order to encode all possible information. We end up with the same matrices that Mercat defined in \cite{Me01b,Me07}.
\begin{definition}
Let $\omega_k^B$, $1\leq k \leq g$, be the unique discrete holomorphic differential with black $a_j$-period $\delta_{jk}$ and vanishing white $a$-periods. Furthermore, let $\omega_k^W$, $1\leq k \leq g$, be the unique discrete holomorphic differential with white $a_j$-period $\delta_{jk}$ and vanishing black $a$-periods. The basis of these $2g$ discrete differentials is called the \textit{canonical basis (of discrete holomorphic differentials)}.
We define the $(g\times g)$-matrices $\Pi^{B,B},\Pi^{W,B},\Pi^{B,W},\Pi^{W,W}$ with entries \begin{align*}\Pi^{B,B}_{jk}:=2\int_{Bb_j}\omega^B_k, \quad \Pi^{W,B}_{jk}:=2\int_{Wb_j}\omega^B_k,\quad \Pi^{B,W}_{jk}:=2\int_{Bb_j}\omega^W_k, \quad \Pi^{W,W}_{jk}:=2\int_{Wb_j}\omega^W_k.\end{align*}
The \textit{complete discrete period matrix} is the $(2g\times 2g)$-matrix defined by \[\tilde{\Pi}:=\left( \begin{matrix} \Pi^{B,W} & \Pi^{B,B}\\ \Pi^{W,W} & \Pi^{W,B}\end{matrix}\right).\]
\end{definition}
\begin{remark}
Note that $\omega_k=\omega_k^W+\omega_k^B$ implies that $\Pi=(\Pi^{B,W} + \Pi^{B,B}+ \Pi^{W,W} + \Pi^{W,B})/2$.
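Indeed, $\omega_k^W+\omega_k^B$ has black and white $a_j$-periods $\delta_{jk}$, so $\omega_k=\omega_k^W+\omega_k^B$ by uniqueness of the canonical set. Writing out the $b_j$-period, \[\Pi_{jk}=\int_{Bb_j}\left(\omega_k^W+\omega_k^B\right)+\int_{Wb_j}\left(\omega_k^W+\omega_k^B\right)=\frac{1}{2}\left(\Pi^{B,W}_{jk}+\Pi^{B,B}_{jk}+\Pi^{W,W}_{jk}+\Pi^{W,B}_{jk}\right).\]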
\end{remark}
\begin{example}
In the example of a bipartitely quadrangulated flat torus $\Sigma=\mC/(\mZ+\mZ\tau)$ of modulus $\tau \in \mC$ with $\im \tau >0$, the classical period of the Riemann surface $\Sigma$ is $\tau$. In the discrete setup, $dz$ is globally defined and discrete holomorphic. It follows that the discrete period $\Pi$ equals the $b$-period of $dz$, which is $\tau$. Thus, the discrete and the smooth period coincide in this case.
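In fact, the black and white $c$-periods of $dz$ coincide along every cycle $c$ on $X$: each black edge of $c$ integrates $dz$ to half of a black diagonal of $\Lambda$, and these half-diagonals telescope along the closed cycle to half of its total displacement; the same holds for white edges. Hence, \[2\int_{Bc} dz=2\int_{Wc} dz=\oint_c dz,\] so $dz$ is the canonical discrete holomorphic differential for the standard homology basis.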
\end{example}
\begin{remark}
Although the black and white $a$-periods of the canonical set of discrete holomorphic differentials coincide by definition, the black and white $b$-periods need not coincide in general. A counterexample was given in \cite{BoSk12}: the bipartite quad-decomposition of a torus induced by the triangulation obtained by identifying opposite sides of the base of the lateral surface of a regular square pyramid, together with its dual.
\end{remark}
\begin{theorem}\label{th:period_pmatrix}
Both the discrete period matrix $\Pi$ and the complete discrete period matrix $\tilde{\Pi}$ are symmetric, and their imaginary parts are positive definite.
\end{theorem}
\begin{proof}
Let $\{\omega_1,\ldots,\omega_g\}$ be the canonical set of discrete holomorphic differentials used to compute $\Pi$. By looking at the coordinate representations, $\omega_j \wedge \omega_k=0$ for all $j,k$. Inserting this into the discrete Riemann Bilinear Identity~\ref{th:RBI}, the periods of $\omega:=\omega_j$ and $\omega':=\omega_k$ satisfy \begin{align*}0&=\sum_{l=1}^g \left(A_l^B {B'_l}^W-B_l^B {A'_l}^W\right)+\sum_{l=1}^g \left(A_l^W {B'_l}^B-B_l^W {A'_l}^B\right)={B'_j}^W-B_k^B+{B'_j}^B-B_k^W=2\Pi_{jk}-2\Pi_{kj}.\end{align*}
Applying the same arguments to discrete differentials of the canonical basis $\{\omega_1^W,\ldots,\omega_g^W,\omega_1^B,\ldots,\omega_g^B\}$, \[(\Pi^{B,W})^T=\Pi^{B,W} \textnormal{ and } (\Pi^{W,B})^T=\Pi^{W,B}\] if we apply the discrete Riemann Bilinear Identity~\ref{th:RBI} to all pairs $\omega_j^W,\omega_k^W$ and $\omega_j^B,\omega_k^B$, respectively. Considering pairs $\omega_j^W,\omega_k^B$ yields $(\Pi^{B,B})^T=\Pi^{W,W}.$ Thus, $\Pi$ and $\tilde{\Pi}$ are symmetric.
Let $\alpha=(\alpha_1,\ldots,\alpha_g)^T$ be a nonzero real column vector. Applying Lemma~\ref{lem:holomorphic_periods} to the discrete holomorphic differential $\omega:=\sum_{k=1}^g \alpha_k \omega_k$ with black and white $a_k$-period $\alpha_k$ yields \[0>\im\left(\sum_{k=1}^g \left(\alpha_k \sum_{j=1}^g\alpha_j 2\overline{\Pi}_{kj}\right)\right)=-2\im\left(\alpha^T \Pi \alpha\right).\] Hence, $\im (\Pi)$ is positive definite. Similarly, $\im (\tilde{\Pi})$ is positive definite.
\end{proof}
Since the black and white $b$-periods of a discrete holomorphic differential do not have to coincide even if its black and white $a$-periods do, the discrete period matrix does not transform under a change of canonical homology basis as in the classical theory; the complete discrete period matrix, however, does.
\begin{proposition}\label{prop:transformation}
The complete discrete period matrices $\tilde{\Pi}$ and $\tilde{\Pi}'$ corresponding to the canonical homology bases $\left\{a,b\right\}$ and $\left\{a',b'\right\}$, respectively, are related by \[\tilde{\Pi}'=\left(\tilde{C}+\tilde{D}\tilde{\Pi}\right)\left(\tilde{A}+\tilde{B}\tilde{\Pi}\right)^{-1}.\] Here, the two canonical bases are related by $\left( \begin{smallmatrix} a' \\ b'\end{smallmatrix}\right)=\left( \begin{smallmatrix} A & B\\ C & D\end{smallmatrix}\right) \left( \begin{smallmatrix} a \\ b\end{smallmatrix}\right)$ and the $(2g\times 2g)$-matrices $\tilde{A},\tilde{B},\tilde{C},\tilde{D}$ are given by $\tilde{A}:=\left( \begin{smallmatrix} A & 0\\ 0 & A\end{smallmatrix}\right),\tilde{B}:=\left( \begin{smallmatrix} B & 0\\ 0 & B\end{smallmatrix}\right),\tilde{C}:=\left( \begin{smallmatrix} C & 0\\ 0 & C\end{smallmatrix}\right),\tilde{D}:=\left( \begin{smallmatrix} D & 0\\ 0 & D\end{smallmatrix}\right).$
\end{proposition}
\begin{proof}
Let $\omega = (\omega_1^W,\ldots,\omega_g^W,\omega_1^B,\ldots,\omega_g^B)$ be the canonical basis of discrete holomorphic differentials corresponding to $(a,b)$. Labeling the columns of the matrices by discrete differentials and their rows by first all white and then all black cycles, we get \[\int_{Wa',Ba'} \omega=\tilde{A}+\tilde{B}\tilde{\Pi}, \quad \int_{Wb',Bb'} \omega=\tilde{C}+\tilde{D}\tilde{\Pi}.\]
Thus, the canonical basis $\omega'$ corresponding to $(a',b')$ is given by $\omega'=\omega\left(\tilde{A}+\tilde{B}\tilde{\Pi}\right)^{-1}$ and \[\tilde{\Pi}'=\int_{Wb',Bb'} \omega'=\int_{Wb',Bb'} \omega \left(\tilde{A}+\tilde{B}\tilde{\Pi}\right)^{-1} =\left(\tilde{C}+\tilde{D}\tilde{\Pi}\right)\left(\tilde{A}+\tilde{B}\tilde{\Pi}\right)^{-1}.\qedhere\]
\end{proof}
\section{Discrete theory of Abelian differentials}\label{sec:Abelian_theory}
After introducing discrete Abelian differentials in Section~\ref{sec:Abelian_differentials} and discussing several of their properties, the aim of Section~\ref{sec:RR} is to state and prove the discrete Riemann-Roch Theorem~\ref{th:Riemann_Roch}. We conclude this section by discussing discrete Abel-Jacobi maps in Section~\ref{sec:Abel}. Throughout this section, we consider a compact discrete Riemann surface $(\Sigma,\Lambda,z)$ of genus $g$. Let $\{a_1,\ldots,a_g,b_1,\ldots,b_g\}$ be a canonical basis of its homology, $\{\omega_1, \ldots, \omega_g\}$ the canonical set, and $\{\omega_1^W,\ldots,\omega_g^W,\omega_1^B,\ldots,\omega_g^B\}$ the canonical basis of discrete holomorphic differentials.
\subsection{Discrete Abelian differentials}\label{sec:Abelian_differentials}
\begin{definition}
A discrete differential $\omega$ of type $\Diamond$ is said to be a \textit{discrete Abelian differential}. For a vertex $v\in V(\Lambda)$ and its corresponding face $F_v \in F(X)$, the \textit{residue} of $\omega$ at $v$ is defined as \[\textnormal{res}_v (\omega) := \frac{1}{2\pi i} \oint_{\partial F_v} \omega.\]
\end{definition}
\begin{remark}
By definition, the discrete integral of a discrete differential of type $\Diamond$ around a face $F_Q$ corresponding to $Q \in V(\Diamond)$ is always zero. For this reason, a residue at faces $Q \in V(\Diamond)$ is not defined.
\end{remark}
\begin{proposition}\label{prop:residue}
Discrete residue theorem: Let $\omega$ be a discrete Abelian differential. Then, the sum of all residues of $\omega$ at black vertices vanishes as well as the sum of all residues of $\omega$ at white vertices: \[\sum_{b \in V(\Gamma)} \textnormal{res}_b (\omega)=0=\sum_{w \in V(\Gamma^*)} \textnormal{res}_w (\omega).\]
\end{proposition}
\begin{proof}
Since $\omega$ is of type $\Diamond$, $\int_{[Q,b_-]} \omega=-\int_{[Q,b_+]} \omega$ if $b_-,b_+$ are two black vertices incident to a quadrilateral $Q \in F(\Lambda)$ and $[Q,b_-]$ and $[Q,b_+]$ are oriented in such a way that they go clockwise around $F_Q$. Equivalently, they are oriented in such a way that they go counterclockwise around $F_{b_-}$ and $F_{b_+}$, respectively. It follows that the sum of all residues of $\omega$ at black vertices can be arranged in pairwise canceling contributions. Thus, the sum is zero. Similarly, $\sum_{w \in V(\Gamma^*)} \textnormal{res}_w (\omega)=0$.
\end{proof}
\begin{definition}
Let $\omega$ be a discrete Abelian differential, $v\in V(\Lambda)$, $Q \in V(\Diamond)$, and $F_Q$ the face of $X$ corresponding to $Q$. If $\omega$ has a nonzero residue at $v$, then $v$ is a \textit{simple pole} of $\omega$. If $z_Q$ is a chart of $Q$ and $\omega|_{\partial F_Q}$ is not of the form $p dz_Q$, $p\in \mC$, then $Q$ is a \textit{double pole} of $\omega$. If $\omega|_{\partial F_Q}=0$, then $Q$ is a \textit{zero} of $\omega$.
\end{definition}
\begin{remark}
To say that quadrilaterals $Q$ where $\omega \neq pdz_Q$ are double poles of $\omega$ is well motivated. In \cite{BoG15}, the existence of functions $K_Q$ on $V(\Lambda)$ that are discrete holomorphic at all but one fixed face $Q \in V(\Diamond)$ was shown. These functions appeared in the discrete Cauchy integral formulae and model $z^{-1}$, including its asymptotics. Similarly, $\partial_\Lambda K_Q$ models $-z^{-2}$. Now, $dK_Q$ should behave like $-z^{-2} dz$, modeling a double pole at $Q$. By construction, $dK_Q$ is a discrete Abelian differential that is of the form $pdz_{Q'}$ in any chart $z_{Q'}$ around a face $Q' \neq Q$. But in a chart $z_Q$, $dK_Q=pdz_Q+qd\bar{z}_Q$ with $q\neq 0$.
\end{remark}
\begin{definition}
Let $\omega$ be a discrete Abelian differential. If $\omega$ is discrete holomorphic, then we say that $\omega$ is a \textit{discrete Abelian differential of the first kind}. If $\omega$ is not discrete holomorphic, but all its residues vanish, then it is a \textit{discrete Abelian differential of the second kind}. A discrete Abelian differential whose residues do not vanish identically is said to be a \textit{discrete Abelian differential of the third kind}.
\end{definition}
As in the classical setup, there exist discrete Abelian differentials with certain prescribed poles and residues that can be normalized such that their $a$-periods vanish. In the case of a Delaunay-Voronoi quadrangulation, the existence of corresponding normalized discrete Abelian integrals of the second kind and discrete Abelian differentials of the third kind was shown in \cite{BoSk12}. Our proofs will be similar, but in addition, we obtain the existence of certain discrete Abelian differentials of the second kind as a corollary. The computation of the $b$-periods of the normalized discrete Abelian differentials of the third kind is also new.
\begin{proposition}\label{prop:existence_third}
Let $v,v' \in V(\Gamma)$ or $v,v' \in V(\Gamma^*)$. Then, there exists a discrete Abelian differential of the third kind $\omega$ whose only poles are at $v$ and $v'$ and whose residues are $\textnormal{res}_v(\omega)=-\textnormal{res}_{v'}(\omega)=1$. Any two such discrete differentials differ just by a discrete holomorphic differential.
\end{proposition}
\begin{proof}
Clearly, the difference of two discrete Abelian differentials of the third kind with equal residues and no double poles has no poles at all, so it is discrete holomorphic.
Let $V$ be the vector space of all discrete Abelian differentials that have no double poles. For any $Q \in V(\Diamond) \cong F(\Lambda)$, we choose one chart $z_Q$. By definition, each $\omega \in V$ is of the form $pdz_Q$ at $Q$. Conversely, any function $p:V(\Diamond)\to\mC$ defines a discrete Abelian differential without double poles by setting $\omega|_{\partial F_Q}:=p(Q)dz_Q$. Thus, the complex dimension of $V$ equals $|F(\Lambda)|$.
Now, let $W$ be the image in $\mC^{|V(\Lambda)|}$ of the linear map $\textnormal{res}$ that assigns to each $\omega \in V$ all its residues at vertices of $\Lambda$. By Proposition~\ref{prop:residue}, the residues at black vertices sum up to zero, as do the residues at white vertices. Thus, the complex dimension of $W$ is at most $|V(\Lambda)|-2$. Since $\Lambda$ is a quad-decomposition, $|V(\Lambda)|-2=|F(\Lambda)|-2g$. Therefore, the dimension of $W$ is at most $|F(\Lambda)|-2g$.
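The count $|V(\Lambda)|-2=|F(\Lambda)|-2g$ is a consequence of the Euler formula: every face of the quad-decomposition is a quadrilateral and every edge belongs to exactly two faces, so $|E(\Lambda)|=2|F(\Lambda)|$ and \[|V(\Lambda)|-|F(\Lambda)|=|V(\Lambda)|-|E(\Lambda)|+|F(\Lambda)|=2-2g.\]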
On the other hand, the dimension of $W$ equals $|F(\Lambda)|$ minus the dimension of the kernel of the map $\textnormal{res}$. But if $\omega \in V$ has vanishing residues, then it is discrete holomorphic. Due to Corollary~\ref{cor:dimension}, the space of discrete holomorphic differentials is $2g$-dimensional. For this reason, $\textnormal{dim }W=|F(\Lambda)|-2g=|V(\Lambda)|-2$. In particular, we can find a discrete Abelian differential without double poles for any prescribed residues that sum up to zero at all black and at all white vertices.
\end{proof}
\begin{corollary}\label{cor:existence_second}
Given a quadrilateral $Q \in F(\Lambda)$ and a chart $z_Q$, there exists a unique discrete Abelian differential of the second kind that is of the form \[pdz_Q-\frac{\pi}{2\textnormal{area}(z_Q(F_Q))} d\bar{z}_Q\] in the chart $z_Q$, that has no other poles, and whose black and white $a$-periods vanish. This discrete differential is denoted by $\omega_Q$. Here, $\textnormal{area}(z_Q(F_Q))$ denotes the Euclidean area of the parallelogram $z_Q(F_Q)$.
\end{corollary}
\begin{proof}
Consider the discrete Abelian differential of the third kind $\omega$ that is given by the local representation $-\pi/\left(2\textnormal{area}(z_Q(F_Q))\right)d\bar{z}_Q$ at the four edges of $F_Q$ and zero everywhere else. Its only poles other than $Q$ are at the four vertices incident to $Q$, and since $\omega$ is of type $\Diamond$, residues at opposite vertices are equal up to sign. Using Proposition~\ref{prop:existence_third} twice, we can find a discrete Abelian differential $\omega'$ that has no double poles and whose residues equal the residues of $\omega$. To get vanishing black and white $a$-periods, Theorem~\ref{th:holomorphic_existence} allows us to add a suitable discrete holomorphic differential $\omega''$ such that $\omega_Q:=\omega-\omega'+\omega''$ has the required properties. Since the difference of two such discrete differentials has vanishing black and white $a$-periods, uniqueness follows from Corollary~\ref{cor:periods_vanish}.
\end{proof}
\begin{remark}
As in the classical case, $\omega_Q$ depends on the choice of the chart $z_Q$. In our setting, the coefficient of $d\bar{z}_Q$ of $\omega_Q$ equals $-\bar{\partial}_\Lambda K_Q(Q) = -\pi /\left(2 \textnormal{area}(z_Q(F_Q))\right)$.
\end{remark}
\begin{lemma}\label{lem:property_second}
Let $Q\neq Q' \in F(\Lambda)$ and let $\omega_Q,\omega_{Q'}$ be the discrete Abelian differentials of the second kind corresponding to the charts $z_Q,z_{Q'}$. Define complex numbers $\alpha, \beta$ in such a way that $\omega_Q=\alpha dz_{Q'}$ on the four edges of $F_{Q'}$ and $\omega_{Q'}=\beta dz_{Q}$ on the four edges of $F_{Q}$. Then, $\alpha=\beta$.
\end{lemma}
\begin{proof}
By definition, $\omega_Q$ and $\omega_{Q'}$ are closed discrete differentials whose black and white $a$-periods vanish. So by the discrete Riemann Bilinear Identity~\ref{th:RBI}, $\iint_{F(X)} \omega_Q \wedge \omega_{Q'}=0$. Since $\omega_Q$ and $\omega_{Q'}$ have no pole at a face of $X$ corresponding to a quadrilateral $Q''\neq Q,Q'$, $\left(\omega_Q \wedge \omega_{Q'}\right)|_{F_{Q''}}=0$. Hence,
\begin{align*}
0&=\iint_{F(X)}\omega_Q \wedge \omega_{Q'}=-\iint_{F_Q}\omega_{Q'} \wedge \omega_Q+\iint_{F_{Q'}}\omega_Q \wedge \omega_{Q'}\\
&=\frac{\beta\pi}{2\textnormal{area}(z_Q(F_Q))}\iint_{F_Q}dz_Q\wedge d\bar{z}_Q-\frac{\alpha\pi}{2\textnormal{area}(z_{Q'}(F_{Q'}))}\iint_{F_{Q'}}dz_{Q'}\wedge d\bar{z}_{Q'}=-2\pi i (\beta-\alpha). \qedhere\end{align*}
\end{proof}
\begin{proposition}\label{prop:periods_second}
Let $Q \in F(\Lambda)$ and let $\omega_Q$ be the discrete Abelian differential of the second kind corresponding to the chart $z_Q$. Suppose that $\omega_k|_{\partial F_Q}=\alpha_k dz_Q$ for $k=1,\ldots,g$. Then, $\int_{b_k} \omega_Q=2\pi i \alpha_k$.
\end{proposition}
\begin{proof}
In a chart $z_{Q'}$ of a face $Q' \neq Q$, $\omega_k$ and $\omega_Q$ are both of the form $pdz_{Q'}$, so $\omega_k \wedge \omega_Q$ vanishes at $F_{Q'}$. It follows from the discrete Riemann Bilinear Identity~\ref{th:RBI} applied to $\omega_k$ and $\omega_Q$ that \[\int_{b_k} \omega_Q=\int_{W b_k} \omega_Q+\int_{B b_k} \omega_Q=\iint_{F(X)} \omega_k \wedge \omega_Q=\frac{-\alpha_k\pi}{2\textnormal{area}(z_Q(F_Q))}\iint_{F_Q} dz_Q \wedge d\bar{z}_Q=2\pi i \alpha_k\] since black and white $a$-periods of $\omega_Q$ vanish.
\end{proof}
Since discrete Abelian differentials of the third kind have residues, periods are not well-defined. However, periods of the discrete Abelian differentials constructed in Proposition~\ref{prop:existence_third} are defined modulo $2\pi i$. To normalize them, we think of $a_k,b_k$ as given closed curves $\alpha'_k,\beta'_k$ on $X$.
\begin{definition}
Let $\alpha'_k,\beta'_k$, $1\leq k \leq g$, be cycles on $X$ in the homotopy classes $a_k,b_k$. Let $v,v' \in V(\Gamma)$ or $v,v' \in V(\Gamma^*)$. Then, $\omega_{vv'}$ denotes the unique discrete Abelian differential whose integrals along $\alpha'_k,\beta'_k$ are zero, whose nonzero residues are given by $\textnormal{res}_\omega(v)=-\textnormal{res}_\omega(v')=1$, and that has no further poles.
\end{definition}
\begin{definition}
Let $R$ be an oriented path on $\Gamma$ or $\Gamma^*$ and $\omega$ a discrete Abelian differential. To each oriented edge in $R$ we choose one of the corresponding parallel edges of $X$ and orient it the same. By $R_X$, we denote the resulting one-chain on $X$. Then, $\int_R \omega:=2\int_{R_X}\omega$.
\end{definition}
\begin{proposition}\label{prop:periods_third}
Let $v,v' \in V(\Gamma)$ or $v,v' \in V(\Gamma^*)$. Suppose that the cycles $\alpha'_k,\beta'_k$ on $X$ are homotopic to closed paths $\alpha_k,\beta_k$ on $\Sigma$ cutting out a fundamental polygon with $4g$ vertices on the surface $\Sigma \backslash \{v,v'\}$. In addition, let $R$ be an oriented path on $\Gamma$ or $\Gamma^*$ from $v'$ to $v$ that does not intersect any of the curves $\alpha_1,\ldots,\alpha_g,\beta_1,\ldots,\beta_g$. Then, \[\int_{b_k}\omega_{vv'}=2\pi i \int_R \omega_k.\]
\end{proposition}
\begin{proof}
On the one hand, $\omega_k \wedge \omega_{vv'}=0$ since both discrete Abelian differentials are of the form $pdz_Q$ in any chart $z_Q$. On the other hand, we can find a discrete holomorphic multi-valued function $f:V(\tilde{\Lambda})\to \mC$ such that $df=\omega_k$ by Lemma~\ref{lem:multivalued}. Since $d$ is a derivation by Theorem~\ref{th:derivation}, \[0=\omega_k \wedge \omega_{vv'}= d(f\omega_{vv'})-fd\omega_{vv'}\] is true if $\omega_k,\omega_{vv'}$ are lifted to $\tilde{X}$. Now, choose a collection $\tilde{F}(X)$ of lifts of all faces of $X$ to $\tilde{X}$ such that the corresponding lifts $\tilde{v}$ and $\tilde{v}'$ of $v$ and $v'$ are connected by a lift of $R$ in $\tilde{\Gamma}$ or $\tilde{\Gamma}^*$. It is not necessary that all faces of $\tilde{X}$ intersecting the lift of $R$ are contained in $\tilde{F}(X)$. Due to discrete Stokes' Theorem~\ref{th:stokes}, $d\omega_{vv'}=0$ on all lifts of faces $F_Q$ corresponding to $Q\in V(\Diamond)$ or faces $F_{v''}$ corresponding to a vertex $v''\neq v,v'$. Using $\textnormal{res}_\omega(v)=-\textnormal{res}_\omega(v')=1$, discrete Stokes' Theorem~\ref{th:stokes} gives \[\int_{\partial \tilde{F}(X)} f\omega_{vv'}=\iint_{\tilde{F}(X)} d(f\omega_{vv'})=\iint_{\tilde{F}(X)} fd\omega_{vv'}=2\pi i\left(f(\tilde{v})-f(\tilde{v}')\right)=2\pi i\int_{R} \omega_k.\]
The left hand side can be calculated in exactly the same way as in the proof of the discrete Riemann Bilinear Identity~\ref{th:RBI}. The only essential difference is that when we extend $\omega_{vv'}$ to a subdivision of lifts $\tilde{F}_{v}$ or $\tilde{F}_{v'}$, the extended one-form shall have zero residues at all new faces but one containing $v$ or $v'$, where it should remain $1$ or $-1$. As a result, we obtain $\int_{b_k}\omega_{vv'}$, observing that almost all black and white $a$-periods of $f$ and $\omega_{vv'}$ vanish.
\end{proof}
\begin{remark}
Results analogous to Propositions~\ref{prop:periods_second} and~\ref{prop:periods_third} are true for the black and white $b$-periods of $\omega_Q$ and $\omega_{vv'}$, replacing $\omega_k$ by $\omega_k^B$ or $\omega_k^W$.
\end{remark}
\begin{proposition}\label{prop:basis}
Let a chart $z_Q$ to each $Q \in F(\Lambda)$ be given. Fix $b \in V(\Gamma)$ and $w \in V(\Gamma^*)$. Then, the normalized discrete Abelian differentials of the first kind $\omega_k^B$ and $\omega_k^W$, $k=1,\ldots,g$, of the second kind $\omega_Q$, $Q \in F(\Lambda)$, and of the third kind $\omega_{bb'}$ and $\omega_{ww'}$, $b'\neq b$ being black and $w'\neq w$ being white vertices, form a basis of the space of discrete Abelian differentials.
\end{proposition}
\begin{proof}
Linear independence is clear. Given any discrete Abelian differential, we can first use the $\omega_Q$ to eliminate all double poles. For the resulting discrete Abelian differential we can find linear combinations of $\omega_{bb'}$ and $\omega_{ww'}$ that have the same residues at black and white vertices, respectively. We end up with a discrete holomorphic differential that can be represented by a linear combination of the $2g$ discrete differentials $\omega_k^B$ and $\omega_k^W$.
\end{proof}
\subsection{Divisors and the discrete Riemann-Roch Theorem}\label{sec:RR}
We generalize the notion of divisors on a compact discrete Riemann surface $(\Sigma,\Lambda,z)$ of genus $g$ and the discrete Riemann-Roch theorem given in \cite{BoSk12} to general quad-decompositions. In addition, we define double poles of discrete Abelian differentials and double values of functions $f:V(\Lambda)\to\mC$.
\begin{definition}
A \textit{divisor} $D$ is a formal linear combination \[D=\sum_{j=1}^M m_j v_j + \sum_{k=1}^N n_k Q_k,\] where $m_j\in \left\{-1,0,1\right\}$, $v_j \in V(\Lambda)$, $n_k \in \left\{-2,-1,0,1,2\right\}$, and $Q_k \in V(\Diamond)$.
$D$ is \textit{admissible} if, more restrictively, $m_j\in \left\{-1,0\right\}$ and $n_k \in \left\{-2,0,1\right\}$. Its \textit{degree} is defined as \[\deg D:=\sum_{j=1}^M m_j + \sum_{k=1}^N \textnormal{sign}(n_k).\] We write $D\geq D'$ if the formal sum $D-D'$ is a divisor whose coefficients are all nonnegative.
\end{definition}
\begin{remark}
Note that double points just count once in the degree. The reason is that these points correspond to double values and not to double zeroes of a discrete meromorphic function. Concerning discrete Abelian differentials, a double pole does not include a simple pole and therefore counts once.
\end{remark}
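To illustrate the degree convention, consider the following small example (ours, not taken from \cite{BoSk12}). Let $v_1,v_2 \in V(\Lambda)$ and $Q_1,Q_2 \in V(\Diamond)$, and take

```latex
% A divisor with a double value at Q_1 and a double pole at Q_2:
\[
  D = v_1 - v_2 + 2Q_1 - 2Q_2 .
\]
% Each m_j enters the degree with its sign, whereas each n_k contributes
% only sign(n_k); hence the double point 2Q_1 and the double pole -2Q_2
% count once each:
\[
  \deg D = 1 - 1 + \textnormal{sign}(2) + \textnormal{sign}(-2)
         = 1 - 1 + 1 - 1 = 0 .
\]
```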
As noted in \cite{BoSk12}, divisors on a discrete Riemann surface do not form an Abelian group. One reason is that the pointwise product of discrete holomorphic functions need not be discrete holomorphic itself; another is the asymmetry of the point spaces: whereas discrete meromorphic functions will be defined on $V(\Lambda)$, discrete Abelian differentials are essentially defined by complex functions on $V(\Diamond)$, provided that a chart for each quadrilateral is fixed.
\begin{definition}
Let $f:V(\Lambda) \to \mC$, $v \in V(\Lambda)$, and $Q \in F(\Lambda)\cong V(\Diamond)$. $f$ is called \textit{discrete meromorphic}.
\begin{itemize}
\item $f$ has a \textit{zero} at $v$ if $f(v)=0$.
\item $f$ has a \textit{simple pole} at $Q$ if $df$ has a double pole at $Q$.
\item $f$ has a \textit{double value} at $Q$ if $df|_{\partial F_Q}=0$.
\end{itemize}
If $f$ has zeroes $v_1,\ldots,v_M \in V(\Lambda)$, double values $Q_1,\ldots,Q_N \in V(\Diamond)$, and poles $Q'_1,\ldots,Q'_{N'} \in V(\Diamond)$, then its \textit{divisor} is defined as \[(f):=\sum_{j=1}^M v_j + \sum_{k=1}^N 2Q_k - \sum_{k'=1}^{N'} Q'_{k'}.\]
\end{definition}
\begin{remark}
Note that in the smooth setting, a double value of a smooth function $f$ is a point where $f-c$ has a double zero for some constant $c$. In the discrete setup, a double value at a quadrilateral $Q$ implies that the values of the discrete function $f$ at both black vertices coincide as well as at the two white vertices of $Q$. In this sense, double values are separated from the points where the function is evaluated.
\end{remark}
\begin{definition}
Let $\omega$ be a discrete Abelian differential. If $\omega$ has zeroes $Q_1,\ldots,Q_N \in V(\Diamond)$, double poles at $Q'_1,\ldots,Q'_{N'} \in V(\Diamond)$, and simple poles at $v_1,\ldots,v_M \in V(\Lambda)$, then its \textit{divisor} is defined as \[(\omega):=\sum_{k=1}^N Q_k - \sum_{k'=1}^{N'} 2Q'_{k'}-\sum_{j=1}^M v_j.\]
\end{definition}
\begin{remark}
In the linear theory of discrete Riemann surfaces, the (pointwise) product of discrete holomorphic functions is not a discrete holomorphic function in general. This is also the reason why we cannot give a local definition of poles and zeroes of higher order. However, in Section~\ref{sec:branch} we merged several branch points to define one branch point of higher order. In a slightly different way, we can consider a finite subgraph $\Diamond_0\subseteq\Diamond$ that forms a simply-connected closed region consisting of $F$ quadrilaterals, where each quadrilateral is a double value of the discrete meromorphic function $f$, as one multiple value of order $F+1$. Then, $f$ takes the same value at all black vertices of $\Diamond_0$ and at all white vertices of $\Diamond_0$. If $\Diamond_0$ contains no interior vertex, both the numbers of black and of white vertices equal $F+1$, and if in addition $f$ equals zero at each black vertex, then we can interpret the $F+1$ black vertices of $\Diamond_0$ as a zero of order $F+1$.
In a similar way, double poles of discrete Abelian differentials can be merged into a pole of higher order. Unfortunately, we do not see a way to define higher order poles of discrete meromorphic functions or multiple zeroes of discrete Abelian differentials.
\end{remark}
\begin{definition}
Let $D$ be a divisor. By $L(D)$ we denote the complex vector space of discrete meromorphic functions $f$ that vanish identically or whose divisor satisfies $(f)\geq D$. Similarly, $H(D)$ denotes the complex vector space of discrete Abelian differentials $\omega$ such that $\omega \equiv 0$ or $(\omega)\geq D$. The dimensions of these spaces are denoted by $l(D)$ and $i(D)$, respectively.
\end{definition}
We are now able to formulate and prove the following \textit{discrete Riemann-Roch theorem}.
\begin{theorem}\label{th:Riemann_Roch}
If $D$ is an admissible divisor on a compact discrete Riemann surface of genus $g$, then \[l(-D)=\deg D-2g+2+i(D).\]
\end{theorem}
\begin{proof}
We write $D=D_0-D_{\infty}$, where $D_0,D_{\infty}\geq 0$. Since $D$ is admissible, $D_0$ is a sum of elements of $V(\Diamond)$, all coefficients being one. Let $V_0$ denote the set of $Q\in V(\Diamond)$ such that $D_0\geq Q$.
For each $Q \in V(\Diamond)$, we fix a chart $z_Q$. As in Proposition~\ref{prop:basis}, we denote the normalized Abelian differentials of the first kind by $\omega_k^B$ and $\omega_k^W$, $k=1,\ldots,g$, those of the second kind by $\omega_Q$, $Q \in V(\Diamond)$, and those of the third kind by $\omega_{bb'}$ and $\omega_{ww'}$, $b'\neq b$ being black and $w'\neq w$ being white vertices, $b \in V(\Gamma)$ and $w \in V(\Gamma^*)$ fixed. Now, we investigate the image $H$ of the discrete exterior derivative $d$ on functions in $L(-D)$. $H$ consists of discrete Abelian differentials and only biconstant functions are in the kernel.
Let $f\in L(-D)$. Then, $df$ is a discrete Abelian differential that might have double poles at the points of $D_0$. In addition, all the residues and periods of $df$ vanish. So since the discrete Abelian differentials above form a basis by Proposition~\ref{prop:basis}, \[df=\sum_{Q\in V_0} f_Q \omega_Q\] for some complex numbers $f_Q$. Now, all black and white $b$-periods of $df$ vanish. Using Proposition~\ref{prop:periods_second} and the remark at the end of Section~\ref{sec:Abelian_differentials} on the black and white $b$-periods of $\omega_Q$, \[\sum_{Q\in V_0} f_Q\alpha_k^B(Q)=0=\sum_{Q\in V_0} f_Q\alpha_k^W(Q),\] where $\omega_k^B|_{\partial F_Q}=\alpha_k^B(Q) dz_Q$ and $\omega_k^W|_{\partial F_Q}=\alpha_k^W(Q) dz_Q$ for $k=1,\ldots,g$.
In the chart $z_{Q'}$ of a face $Q'\neq Q$, $\omega_{Q}$ can be written as $\beta_{Q}(Q')dz_{Q'}$. So if $D_{\infty}\geq 2P$, $P\in V(\Diamond)$, then $f$ has a double value at $P$ and $df|_{\partial F_P}=0$ for the corresponding face $F_P$ of $X$. Due to Lemma~\ref{lem:property_second}, \[0=\sum_{Q\in V_0} f_Q\beta_Q(P)=\sum_{Q\in V_0} f_Q\beta_P(Q).\]
Suppose that $D_{\infty}\geq v+v'$, where $v,v' \in V(\Gamma)$ or $v,v' \in V(\Gamma^*)$. By definition, $f$ has zeroes at $v,v'$, so $f(v)=f(v')$. The last equality remains true when a biconstant function is added. This yields an additional restriction to $H$. Now, using discrete Stokes' Theorem~\ref{th:stokes}, $d\omega_{vv'}$ equals $2\pi i$ when integrated over $F_v$, $-2\pi i$ when integrated over $F_{v'}$, and zero around all other vertices. Also, it follows that $\iint_{F(X)} d\left(f\omega_{vv'}\right)=0$. Writing $\omega_{vv'}|_{\partial{F_Q}}=\gamma_{vv'}(Q) dz_Q$, we observe that for $Q\in V(\Diamond)$ such that $D_0\geq Q$,
\[\left(df\wedge\omega_{vv'}\right)|_{F_Q}=f_Q\gamma_{vv'}(Q)\frac{\pi}{2\textnormal{area}(z_Q(F_Q))}dz_Q\wedge d\bar{z}_Q,\] and $df\wedge\omega_{vv'}=0$ everywhere else. Using $d\left(f\omega_{vv'}\right)=fd\omega_{vv'}+df\wedge\omega_{vv'}$ by Theorem~\ref{th:derivation}, we obtain \[0=2\pi i\left( f(v)-f(v')\right)=\iint_{F(X)} fd\omega_{vv'}=\iint_{F(X)} d\left(f\omega_{vv'}\right)-\iint_{F(X)} df\wedge\omega_{vv'}=2\pi i\sum_{Q\in V_0}f_Q\gamma_{vv'}(Q).\]
In the case that there are more than two black (or white) vertices $v$ that satisfy $D_{\infty}\geq v$, we fix one such black (or white) vertex as $b$ (or $w$). Denote by $B_0$ and $W_0$ the sets of these black and white vertices. Then, $f$ is constant on $B_0$ and $W_0$ if and only if for any $b' \in B_0$, $b'\neq b$, and $w'\in W_0$, $w'\neq w$: \[\sum_{Q\in V_0}f_Q\gamma_{bb'}(Q)=0=\sum_{Q\in V_0}f_Q\gamma_{ww'}(Q).\]
Consider the matrix $M$ whose $k$-th column is the column vector $\left(\alpha_k^B(Q)\right)_{Q \in V_0}$, whose $(g+k)$-th column is the column vector $\left(\alpha_k^W(Q)\right)_{Q \in V_0}$, and whose next columns are the column vectors $\left(\beta_P (Q)\right)_{Q \in V_0}$, $P\in V(\Diamond)$ such that $D_\infty\geq 2P$. In the case that $|B_0|\geq 2$, we add the column vectors $\left(\gamma_{bb'} (Q)\right)_{Q \in V_0}$, $b'\in B_0$ different from $b$, to $M$, and if $|W_0|\geq 2$, then we additionally add the column vectors $\left(\gamma_{ww'} (Q)\right)_{Q \in V_0}$, $w'\in W_0$ different from $w$. By our consideration above, $\sum_{Q\in V_0} f_Q \omega_Q$ is in $H$ only if the column vector $(f_Q)_{Q \in V_0}$ is in the kernel of $M^T$. Conversely, for any element $(f_Q)_{Q \in V_0}$ of the kernel, the corresponding differential $\sum_{Q\in V_0} f_Q \omega_Q$ is a closed discrete differential with vanishing periods, so it can be integrated to a discrete meromorphic function $f$ using Lemma~\ref{lem:multivalued}. $f$ can have poles only at $Q\in V_0$, it is biconstant on $B_0 \cup W_0$, and it has double values at all $P \in V(\Diamond)$ such that $D_{\infty} \geq 2P$. Hence, $H$ is isomorphic to the kernel of $M^T$. We obtain \[\textnormal{dim } H=\textnormal{dim } \textnormal{ker } M^T=|V_0|-\textnormal{rank } M^T=\deg D_0-\textnormal{rank } M.\]
Let us first suppose that $B_0$ and $W_0$ are both nonempty. This means that at least one zero of $f$ at a black and one at a white vertex is fixed. Therefore, $d:L(-D)\to H$ has a trivial kernel and $l(-D)=\dim H$. In addition, $\textnormal{rank } M=2g+\deg D_{\infty}-2-\textnormal{dim }\textnormal{ker } M.$ But with complex numbers $\lambda_j$, the kernel of $M$ consists of discrete Abelian differentials \[\omega=\sum_{k=1}^g \left(\lambda_k \omega_k^B+\lambda_{k+g} \omega_k^W\right)+\sum_{P: D_\infty\geq 2P} \lambda_P \omega_P+\sum_{b'\in B_0, b'\neq b}\lambda_{b'}\omega_{bb'}+\sum_{w'\in W_0, w'\neq w}\lambda_{w'}\omega_{ww'}\] such that $\omega|_{\partial {F_Q}}=0$ for any $Q \in V_0$, so the kernel is exactly $H(D)$. It follows that \begin{align*} l(-D)=\textnormal{dim } H&=\deg D_0-\textnormal{rank } M\\&=\deg D_0-2g-\deg D_{\infty}+2+\textnormal{dim }\textnormal{ker } M=\deg D-2g+2+i(D).\end{align*}
If $B_0$ or $W_0$ is empty, then we can add an additive constant to all values of $f$ at black or white vertices, respectively, still getting an element of $L(-D)$. Thus, the kernel of $d:L(-D)\to H$ is one- or even two-dimensional, when both $B_0$ and $W_0$ are empty. But $\textnormal{rank } M$ is now $2g+\deg D_{\infty}-x-\textnormal{dim }\textnormal{ker } M$ with $x=1$ or $x=0$. Again, we get $l(-D)=\deg D-2g+2+i(D).$
\end{proof}
\begin{remark}
The difference between the classical and the discrete Riemann-Roch theorem is explained by the fact that $\Lambda$ is bipartite: The space of constant functions is no longer one- but two-dimensional; instead of just $g$ $a$-periods of Abelian differentials we have $2g$, namely black and white.
Furthermore, note that the interpretation of several neighboring double values (or poles) as a multiple value (or pole) of higher order is compatible with the discrete Riemann-Roch theorem.
\end{remark}
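As a quick consistency check (our own remark, not part of the original argument), evaluate the theorem at the trivial divisor $D=0$: the space $L(0)$ consists of the discrete holomorphic functions, which on a compact discrete Riemann surface are exactly the biconstant functions, so $l(0)=2$; and $H(0)$ is the space of discrete holomorphic differentials, spanned by the $2g$ differentials $\omega_k^B,\omega_k^W$ of Proposition~\ref{prop:basis}, so $i(0)=2g$. The formula then reads

```latex
% Discrete Riemann-Roch at D = 0: both sides equal 2 for every genus g.
\[
  l(0) \;=\; 2 \;=\; 0 - 2g + 2 + 2g \;=\; \deg 0 - 2g + 2 + i(0).
\]
```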
Let us just state the following corollary for a quadrangulated flat torus that was already mentioned in \cite{BoSk12}. The proof is a consequence of the discrete Riemann-Roch Theorem~\ref{th:Riemann_Roch} and the fact that $dz$ is a discrete holomorphic differential on the torus. For the second part, one uses the decomposition of a function into real and imaginary part. For details, see the thesis \cite{Gue14}.
\begin{corollary}\label{cor:torus_poles}
Let $\Sigma=\mC/(\mZ+\mZ\tau)$ be bipartitely quadrangulated and $\im \tau >0$.
\begin{enumerate}
\item There exists no discrete meromorphic function with exactly one simple pole.
\item Suppose that in addition the diagonals of all quadrilaterals are orthogonal to each other. Then, there exists a discrete meromorphic function with exactly two simple poles at $Q,Q' \in V(\Diamond)$ if and only if the black diagonals of $Q,Q'$ are parallel to each other.
\end{enumerate}
\end{corollary}
The first part of Corollary~\ref{cor:torus_poles} does not remain true if we consider general discrete Riemann surfaces:
\begin{proposition}\label{prop:counterexample_cauchykernel}
For any $g\geq 0$, there exists a compact discrete Riemann surface of genus $g$ such that there exists a discrete meromorphic function $f:V(\Lambda)\to\mC$ that has exactly one simple pole.
\end{proposition}
\begin{proof}
We start with any compact discrete Riemann surface $(\Sigma',\Lambda',z')$ of genus $g$ and pick one quadrilateral $Q'\in V(\Diamond')$. Now, $Q'$ is combinatorially replaced by the five quadrilaterals of Figure~\ref{fig:stamp}. We define the discrete complex structure of the central quadrilateral $Q$ by the complex number $\varrho_1$ and the discrete complex structure of the four neighboring quadrilaterals $Q_k$ by $\varrho_2$, $\re (\varrho_k)>0$. Clearly, this construction yields a new compact discrete Riemann surface $(\Sigma,\Lambda,z)$ of genus $g$.
\begin{figure}[htbp]
\begin{center}
\beginpgfgraphicnamed{spider4}
\begin{tikzpicture}
[white/.style={circle,draw=black,fill=white,thin,inner sep=0pt,minimum size=1.2mm},
black/.style={circle,draw=black,fill=black,thin,inner sep=0pt,minimum size=1.2mm},scale=0.7]
\foreach \x in {-2,-1}
{
\ifthenelse{\isodd{\x}}{\node[black] (p_\x_1) at (\x,\x) {}; \node[white] (p_\x_2) at (\x,-\x) {}; }{\node[white] (p_\x_1) at (\x,\x) {}; \node[black] (p_\x_2) at (\x,-\x) {};}
\draw (p_\x_1) -- (p_\x_2);
}
\foreach \x in {1,2}
{
\ifthenelse{\isodd{\x}}{\node[black] (p_\x_1) at (\x,\x) {}; \node[white] (p_\x_2) at (\x,-\x) {}; }{\node[white] (p_\x_1) at (\x,\x) {}; \node[black] (p_\x_2) at (\x,-\x) {};}
\draw (p_\x_1) -- (p_\x_2);
}
\foreach \x in {-2,-1,1,2}
{\pgfmathparse{int(multiply(\x,-1))};
\draw (p_\x_1) -- (p_\pgfmathresult_2);
}
\foreach \x in {1}
{\pgfmathparse{int(add(-\x,-1))};
\draw (p_-\x_1) -- (p_\pgfmathresult_1);
\draw (p_-\x_2) -- (p_\pgfmathresult_2);
\pgfmathparse{int(add(\x,1))};
\draw (p_\x_1) -- (p_\pgfmathresult_1);
\draw (p_\x_2) -- (p_\pgfmathresult_2);
}
\coordinate[label=center:$Q$] (phi0) at (0,0);
\coordinate[label=center:$Q_3$] (phi1) at (0,1.5);
\coordinate[label=center:$Q_2$] (phi2) at (1.5,0);
\coordinate[label=center:$Q_1$] (phi3) at (0,-1.5);
\coordinate[label=center:$Q_4$] (phi4) at (-1.5,0);
\coordinate[label=center:$w_-$] (phi5) at (0.63,-0.7);
\coordinate[label=center:$w_+$] (phi6) at (-0.61,0.64);
\coordinate[label=center:$b_-$] (phi7) at (-0.56,-0.64);
\coordinate[label=center:$b_+$] (phi8) at (0.7,0.7);
\end{tikzpicture}
\endpgfgraphicnamed
\caption{Replacement of chosen quadrilateral by five new quadrilaterals}
\label{fig:stamp}
\end{center}
\end{figure}
For a complex number $x\neq0$, consider the function $f:V(\Lambda)\to\mathds{C}$ that fulfills $f(b_-)=x=-f(b_+)$, $f(w_+)=i\varrho_2 x=-f(w_-)$, and $f(v)=0$ for all other vertices. Then, $f$ is a discrete meromorphic function that has exactly one simple pole, namely at $Q$.
\end{proof}
\subsection{Discrete Abel-Jacobi maps}\label{sec:Abel}
Due to the fact that black and white periods of discrete holomorphic one-forms do not have to coincide, we cannot define a discrete Abel-Jacobi map on all of $V(\Lambda)$ and $V(\Diamond)$. However, by either restricting to black vertices (and faces) or white vertices (and faces) or considering the universal covering of the compact discrete Riemann surface $(\Sigma,\Lambda,z)$ of genus $g$, we get reasonable discretizations of the Abel-Jacobi map.
\begin{definition}
Let $\omega$ denote the column vector with entries $\omega_k$, $\{\omega_1,\ldots,\omega_g\}$ being the canonical set of discrete holomorphic differentials. The $g\times g$-matrices $\Pi^B$ and $\Pi^W$ with entries $\Pi^B_{jk}:=2\int_{Bb_j}\omega_k$ and $\Pi^W_{jk}:=2\int_{Wb_j}\omega_k$ are called the \textit{black} and \textit{white period matrix}, respectively. Let $L$ denote the lattice $L:=\left\{Im+\Pi n | m,n \in \mathds{Z}^g\right\}$, where $I$ is the $(g\times g)$-identity matrix. Similarly, the lattices $L^B$ and $L^W$ with $\Pi^B$ and $\Pi^W$ instead of $\Pi$ are defined. Then, the complex tori $\mathcal{J}:=\mathds{C}^g/L$, $\mathcal{J}^B:=\mathds{C}^g/L^B$, and $\mathcal{J}^W:=\mathds{C}^g/L^W$ are the \textit{discrete}, the \textit{black}, and the \textit{white Jacobian variety}, respectively.
\end{definition}
\begin{remark}
In the notation of Section~\ref{sec:period_matrices}, $\Pi^B=\Pi^{B,W}+\Pi^{B,B}$ and $\Pi^W=\Pi^{W,W}+\Pi^{W,B}$.
\end{remark}
\begin{definition}
Let $\tilde{Q},{\tilde{Q}'}\in F(\tilde{\Lambda})$, $v\in V(\tilde{\Gamma})$, $v' \in V(\tilde{\Gamma}^*)$. Let $R$ be an oriented path on $\tilde{\Gamma}$ connecting a black vertex $b\sim\tilde{Q}$ with $v$, and let $d$ be an edge of $\tilde{X}$ parallel to the black diagonal of $\tilde{Q}$ oriented toward $b$. Lifting the discrete differentials of $\omega$ to the universal covering $(\tilde{\Sigma},\tilde{\Lambda},z \circ p)$, \[\tilde{\mathcal{A}}_{\tilde{Q}}(v):=\tilde{\mathcal{A}}^B_{\tilde{Q}}(v):=\int_{\tilde{Q}}^v \omega:=\int_d \omega + \int_{R} \omega.\] Similarly, we define $\tilde{\mathcal{A}}_{\tilde{Q}}(v'):=\tilde{\mathcal{A}}^W_{\tilde{Q}}(v'):=\int_{\tilde{Q}}^{v'} \omega$ by replacing the graph $\tilde{\Gamma}$ by $\tilde{\Gamma}^*$. Furthermore, we define $\tilde{\mathcal{A}}^B_{\tilde{Q}}({\tilde{Q}'}):=\tilde{\mathcal{A}}^B_{\tilde{Q}}(b)-\tilde{\mathcal{A}}^B_{\tilde{Q}'}(b)$ and $\tilde{\mathcal{A}}^W_{\tilde{Q}}({\tilde{Q}'}):=\tilde{\mathcal{A}}^W_{\tilde{Q}}(w)-\tilde{\mathcal{A}}^W_{\tilde{Q}'}(w)$ for a white vertex $w$ incident to $\tilde{Q}$.
\end{definition}
\begin{remark}
Since all discrete differentials $\omega_k$ are closed, the above definitions do not depend on the choice of paths. Furthermore, $\tilde{\mathcal{A}}^B_{\tilde{Q}}: V(\tilde{\Gamma}) \cup F(\tilde{\Lambda}) \to \mathds{C}^g$ and $\tilde{\mathcal{A}}^W_{\tilde{Q}}: V(\tilde{\Gamma}^*) \cup F(\tilde{\Lambda}) \to \mathds{C}^g$ actually project to well-defined maps $\mathcal{A}^B_Q : V(\Gamma) \cup V(\Diamond) \to \mathcal{J}^B$ and $\mathcal{A}^W_Q : V(\Gamma^*) \cup V(\Diamond) \to \mathcal{J}^W$ for $Q:=p(\tilde{Q})$. These \textit{black} and \textit{white Abel-Jacobi maps} discretize the Abel-Jacobi map at least for divisors that do not include white or black vertices, respectively. Clearly, they do not depend on the base point $Q$ for divisors of degree 0.
\end{remark}
$\tilde{Q}$ can be connected with another $\tilde{Q}'\in F(\tilde{\Lambda})$ in a more symmetric way that does not depend on a choice of either black or white, using the medial graph.
\begin{definition}
Let $\tilde{Q},{\tilde{Q}'}\in F(\tilde{\Lambda})$. Let $x$ be a vertex of the face $F_{\tilde{Q}}\in F(\tilde{X})$ corresponding to $\tilde{Q}$ and $e,e'$ the two oriented edges of $F_{\tilde{Q}}$ pointing to $x$. We lift the discrete differentials of $\omega$ to the universal covering $(\tilde{\Sigma},\tilde{\Lambda},z \circ p)$. Defining $\int_{\tilde{Q}}^x \omega:=\int_e \omega/2 + \int_{e'}\omega/2$ and similarly $\int_{{\tilde{Q}'}}^{x'} \omega$ for a vertex $x'$ of $F_{\tilde{Q}'}$, \[\tilde{\mathcal{A}}_{\tilde{Q}}({\tilde{Q}'}):=\int_{\tilde{Q}}^{{\tilde{Q}'}} \omega:=\int_{\tilde{Q}}^x \omega+\int_x^{x'} \omega-\int_{{\tilde{Q}'}}^{x'} \omega.\]
\end{definition}
\begin{remark}
$\tilde{\mathcal{A}}_{\tilde{Q}}({\tilde{Q}'})$ is well-defined and does not depend on $x,x'$. In Figure~\ref{fig:contours2}, we described how a closed path on the medial graph induces closed paths on the black and white subgraph. Similarly, a ``path'' connecting $\tilde{Q}$ with ${\tilde{Q}'}$ as above induces two other paths connecting both faces, a black path just using edges of $\tilde{\Gamma}$ and a white path just using edges of $\tilde{\Gamma}^*$ (and half of a diagonal each). This construction shows that $2\tilde{\mathcal{A}}_{\tilde{Q}}({\tilde{Q}'})=\tilde{\mathcal{A}}^B_{\tilde{Q}}({\tilde{Q}'})+\tilde{\mathcal{A}}^W_{\tilde{Q}}({\tilde{Q}'})$.
Thus, $\tilde{\mathcal{A}}_{\tilde{Q}}$ defines a \textit{discrete Abel-Jacobi map} on the divisors of the universal covering $(\tilde{\Sigma},\tilde{\Lambda},z \circ p)$ and it does not depend on the choice of base point $\tilde{Q}$ for divisors of degree 0 that contain as many black as white vertices (counted with sign).
\end{remark}
\begin{proposition}\label{prop:Abel_holomorphic}
$\tilde{\mathcal{A}}_{\tilde{Q}}|_{V(\tilde{\Lambda})}$ is discrete holomorphic in each component.
\end{proposition}
\begin{proof}
Let $\tilde{Q}' \in F(\tilde{\Lambda})$ and $z_{Q'}$ be a chart of $Q'=p(\tilde{Q}')$. Then, $\omega_k|_{\partial F_{Q'}}= p_kdz_{Q'}$ for some complex numbers $p_k$. If $\tilde{b}_-,\tilde{w}_-,\tilde{b}_+,\tilde{w}_+$ denote the vertices of $\tilde{Q}'$ in counterclockwise order, starting with a black vertex, then \begin{align*}\left(\tilde{\mathcal{A}}_{\tilde{Q}}\left(\tilde{b}_+\right)-\tilde{\mathcal{A}}_{\tilde{Q}}\left(\tilde{b}_-\right)\right)_k&=p_k\left(z_{Q'}\left(p\left(\tilde{b}_+\right)\right)-z_{Q'}\left(p\left(\tilde{b}_-\right)\right)\right),\\ \left(\tilde{\mathcal{A}}_{\tilde{Q}}\left(\tilde{w}_+\right)-\tilde{\mathcal{A}}_{\tilde{Q}}\left(\tilde{w}_-\right)\right)_k&=p_k\left(z_{Q'}\left(p\left(\tilde{w}_+\right)\right)-z_{Q'}\left(p\left(\tilde{w}_-\right)\right)\right).\end{align*} Thus, the discrete Cauchy-Riemann equation is fulfilled.
\end{proof}
\begin{remark}
In such a chart $z_{\tilde{Q}'}=z_{Q'} \circ p$, $\left(\partial_\Lambda \tilde{\mathcal{A}}_{\tilde{Q}}\right)\left(\tilde{Q}'\right)=p,$ exactly as in the smooth case. In particular, the discrete Abel-Jacobi map is an injection unless there is $Q \in V(\Diamond)$ such that all discrete holomorphic differentials vanish at $Q$. By the discrete Riemann-Roch Theorem~\ref{th:Riemann_Roch}, this would imply that there exists a discrete meromorphic function with exactly one simple pole at $Q$. In contrast to the classical theory, this could happen for any genus $g$ due to Proposition~\ref{prop:counterexample_cauchykernel}.
\end{remark}
\section*{Acknowledgment}
The authors would like to thank the anonymous reviewer for his comments and suggestions, in particular for his idea of Figure~\ref{fig:cover}.
The first author was partially supported by the DFG Collaborative Research Center TRR 109, ``Discretization in Geometry and Dynamics''. The research of the second author was supported by the Deutsche Telekom Stiftung. Some parts of this paper were written at the Institut des Hautes \'Etudes Scientifiques in Bures-sur-Yvette, the Isaac Newton Institute for Mathematical Sciences in Cambridge, and the Erwin Schr\"odinger International Institute for Mathematical Physics in Vienna. The second author thanks the European Post-Doctoral Institute for Mathematical Sciences for the opportunity to stay at these institutes. The stay at the Isaac Newton Institute for Mathematical Sciences was funded through an Engineering and Physical Sciences Research Council Visiting Fellowship, Grant EP/K032208/1.
\bibliographystyle{plain}
\section{Introduction}
Our cosmological model is built upon the assumption that the universe is homogeneous and isotropic on large scales. The geometry of such a universe is described by the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric,
\begin{equation}
ds^2 = -c^2 dt^2 + a^2(t) \left[ \frac{dr^2}{1-k_0r^2}+r^2(d\theta^2+\sin^2\theta\, d\phi^2)\right] ,
\end{equation}
where $k_0 \in \{-1,0,1\}$ specifies the open ($k_0=-1$), flat ($k_0=0$) and closed ($k_0=1$) universes respectively, and the dynamics of our universe are governed by Einstein's field equations $G_{\mu\nu}=\frac{8\pi G}{c^4} T_{\mu\nu}$, where $G_{\mu\nu}$ encodes the geometry of the space-time and $T_{\mu\nu}$ is the energy-momentum tensor describing the matter distribution in that space-time. The energy-momentum tensor is built up from the contributions of a large number of different matter fields, so even if one knew the precise structure of the contribution of every such field and the equations of motion governing it, an exact description of the energy-momentum tensor would be complicated. Einstein's equations can, in principle, be used to predict the occurrence of different past and future singularities in the universe, but rather than locating the singularities exactly, it is much easier to work with some physically reasonable inequalities on the energy-momentum tensor. Two such notable energy conditions are the weak energy condition \footnote{The energy-momentum tensor at each $p\in M$ obeys the inequality $T_{ab}W^{a}W^{b}\geq 0$ for any timelike vector $W\in T_p$. With $G=c=1$, for a matter distribution given by $T^{\mu}{}_{\nu}=\textnormal{diag}(-\rho,p,p,p)$, the condition simply becomes $p+\rho\geq0$. The critical case $p+\rho=0$ is called the phantom barrier.} and the strong energy condition \footnote{$3p+\rho\geq0$; the critical case $\rho+3p=0$ is popularly known as the quintessence barrier.}. Since $1995$, two collaborative distant-supernova search teams have observed that distant SNeIa supernovae appear fainter than expected at their measured redshifts. These data point to a late-time cosmic acceleration.
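The link between these energy conditions and cosmic acceleration can be made explicit through the acceleration equation for the FLRW metric (a textbook consequence of Einstein's equations, quoted here for orientation):

```latex
% Acceleration (second Friedmann) equation in units with c = 1:
\[
  \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p).
\]
% Accelerated expansion, \ddot{a} > 0, therefore requires rho + 3p < 0,
% i.e. a violation of the strong energy condition (crossing the
% quintessence barrier mentioned in the footnote above).
```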
To account for the repulsive negative pressure responsible for this phenomenon, a large part of the cosmology/astrophysics community hypothesised that a homogeneous fluid/energy component permeates the universe and drives the accelerated expansion. This component was popularly named dark energy, or quintessence. Among the many candidate models of dark energy (DE hereafter), redshift parametrizations of the equation of state (EoS hereafter) parameter are popular.
Two conventional families \cite{ref18} of redshift parametrizations of the EoS are \\
(i) Family I : $\omega(z) = \omega_0 + \omega_1 \frac{z}{(1+z)^n} $ and \\
(ii) Family II: $\omega(z) = \omega_0 + \omega_1 \frac{z}{1+z^n} $, \\
where $z$ is the redshift, $\omega_0$ and $\omega_1$ are two free parameters, and $n$ is a natural number. Some particular `$n$'-cases of families I and II are widely studied in the literature and are known as:\\
\large{\textbf{(i) Linear Parametrization}} (for $n=0$ in family I): EoS is
$\omega(z)=\omega_0+\omega_1 z$ \cite{ref4}.\\
\large{\textbf{(ii) CPL Parametrization}} (after Chevallier, Polarski and Linder; $n=1$ for families I and II): EoS is $\omega(z)=\omega_0+\omega_1(\frac{z}{1+z})$ \cite{ref1}. \\
\large{\textbf{(iii) JBP Parametrization}} (after Jassal, Bagla and Padmanabhan; for $n=2$ in family I): EoS is $\omega(z)=\omega_0+\frac{\omega_1 z}{(1+z)^2}$ \cite{ref6}.\\
\large{\textbf{(iv) Log or Efstathiou Parametrization}}: EoS is $\omega(z) = \omega_0 +\omega_1 \ln(1+z)$ \cite{ref7}, which is valid for $z<4$.\\
\large{\textbf{(v) ASSS Parametrization}} (after Alam, Sahni, Saini and Starobinsky \cite{ref8, ref9}): EoS is \\$\omega(z)=\left\lbrace -1+\frac{(1+z)}{3}\frac{A_1+2A_2(1+z)}{A_0+2A_1(1+z)+A_2(1+z)^2}\right\rbrace $. \\
\large{\textbf{(vi) Upadhye Ishak Steinhardt Parametrization}}: EoS is $\omega(z)= \omega_0+\omega_1 z$ if $z<1$ and $\omega(z)= \omega_0+\omega_1$ if $z\geq 1$ \cite{ref5}. \\
\large{\textbf{(vii) Hannestad M{\"o}rtsell Parametrization}}: EoS is $\omega(z)=\omega_0\omega_1\frac{a^p+a^{p}_{s}}{\omega_1 a^p+\omega_0 a^{p}_{s}}=\frac{1+\left(\frac{1+z}{1+z_s} \right)^p }{\omega_0^{-1}+\omega_{1}^{-1}\left(\frac{1+z}{1+z_s} \right)^p}$ \cite{ref13}. \\
\large{\textbf{(viii) Lee Parametrization}}: EoS is $\omega(z)=\omega_r\frac{\omega_0\exp(px)+\exp(px_c)}{\exp(px)+\exp(px_c)}$ \cite{ref14}.\\
\large{\textbf{(ix) Barboza Alcaniz Parametrization:}}
\\The BA \cite{ref19} EoS is
\begin{equation}
\omega(z)=\omega_0+\omega_{1}\frac{z(1+z)}{1+z^{2}},
\end{equation}
$\omega_{0}$ is the EoS at the present time $z=0$ and $\omega_{1} = \frac{d\omega}{dz}\big|_{z=0}$. Together they measure the time dependence of the DE EoS.
For this parametrization, the bounds in $\omega_{0}-\omega_{1}$ plane are given as:-\\
\textit{For quintessence:} \\ $-1\leq\omega_{0}-0.21\omega_{1}$ and $\omega_{0}+1.21\omega_{1}\leq 1$, in case of $\omega_{1}>0$; \\and $-1\leq\omega_{0}+1.21\omega_{1}$ and $\omega_{0}-0.21\omega_{1}\leq 1$, in case of $\omega_{1}<0$ \cite{ref19}
\\ \textit{For phantom:} \\$\omega_1<-\frac{(1+\omega_0)}{1.21}$ (when $\omega_1>0$) \\and $\omega_1>\frac{(1+\omega_0)}{0.21}$ (when $\omega_1<0$) \cite{ref19} \\
\large{\textbf{(x) Feng Shen Li Li Parametrization:}}
\\To avoid the divergence of the CPL model (as $z\rightarrow-1$), Feng, Shen, Li and Li \cite{ref15} suggested the following relations:
\begin{equation}
\left. \begin{aligned}
\text{FSLL I}:~~ \omega(z)&=\omega_0+\omega_1\frac{z}{1+z^2}\\
\text{FSLL II}:~~ \omega(z)&=\omega_0+\omega_1\frac{z^2}{1+z^2}
\end{aligned}\right\rbrace
\end{equation}
Here, $\omega_0=\omega(0)$ and $\omega_1=\frac{d\omega}{dz}\big|_{z=0}$. In the first case, $\omega(\infty)=\omega_0$ and the EoS reduces to $\omega(z)\approx\omega_0+\omega_1 z$ at low $z$. For the second one, $\omega(\infty)=\omega_0+\omega_1$ and it yields $\omega(z)\approx\omega_0+\omega_1 z^2$ at low redshifts.\\
\large{\textbf{(xi) Polynomial Parametrization:}}
\\Sendra and Lazkoz proposed a polynomial parametrization as an expansion in powers of $(1+z)$, given as follows \cite{ref16,ref17}:
\begin{equation}
\omega(z)=-1+c_1\left(\frac{1+2z}{1+z} \right)+c_2\left(\frac{1+2z}{1+z} \right)^2
\end{equation}
Here, $c_1=(16\omega_0-9\omega_{0.5}+7)/4$ and $c_2=-3\omega_0+(9\omega_{0.5}-3)/4$; the values of the EoS are $\omega_0$ and $\omega_{0.5}$ at $z=0$ and $z=0.5$ respectively.
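A quick numerical check (a Python sketch; the squared second term is our reading of the Sendra-Lazkoz form) confirms that these coefficients reproduce $\omega_0$ at $z=0$ and $\omega_{0.5}$ at $z=0.5$:

```python
def w_poly(z, w0, w05):
    """Polynomial EoS w(z) = -1 + c1*x + c2*x**2 with x = (1+2z)/(1+z)."""
    c1 = (16 * w0 - 9 * w05 + 7) / 4.0
    c2 = -3 * w0 + (9 * w05 - 3) / 4.0
    x = (1 + 2 * z) / (1 + z)
    return -1 + c1 * x + c2 * x ** 2

w0, w05 = -0.9, -0.8          # illustrative values
assert abs(w_poly(0.0, w0, w05) - w0) < 1e-12
assert abs(w_poly(0.5, w0, w05) - w05) < 1e-12
```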
The \textbf{linear} model diverges for large $z$ ($z\gg0$), while the \textbf{CPL} model diverges as $z\to-1$. Here we take some newer parametrizations to form a proper idea of the fate of our universe and to speculate about future cosmic singularities.
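The limiting behaviours of these forms are easy to check numerically. The sketch below (Python, with illustrative values of $\omega_0$, $\omega_1$) evaluates the linear, CPL, FSLL I and FSLL II forms at a large redshift:

```python
# Large-z behaviour of a few EoS parametrizations (illustrative w0, w1).
w0, w1 = -1.0, 0.5
z = 1e6

linear = w0 + w1 * z                      # diverges for large z
cpl    = w0 + w1 * z / (1 + z)            # tends to w0 + w1
fsll1  = w0 + w1 * z / (1 + z ** 2)       # tends to w0
fsll2  = w0 + w1 * z ** 2 / (1 + z ** 2)  # tends to w0 + w1

assert linear > 1e5                       # unbounded growth
assert abs(cpl - (w0 + w1)) < 1e-5
assert abs(fsll1 - w0) < 1e-5
assert abs(fsll2 - (w0 + w1)) < 1e-5
```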
Einstein's field equations for the metric given by equation ($1$) are written in the form \cite{ref10}:
\begin{equation}
\left. \begin{aligned}
\\ \frac{2\ddot{a}}{a}+\frac{\dot{a}^2+k_0}{a^2}=-\kappa p+\Lambda
\\ 3\Big(\frac{\dot{a}^2+k_0}{a^2}\Big)=\kappa \rho+\Lambda
\end{aligned}\right\rbrace
\end{equation} with $\kappa=8\pi$.
The energy conservation equation is:
\begin{equation}
\dot{\rho}+3\frac{\dot{a}}{a}(p+\rho)=0.
\end{equation}
From the field equations $(5)$ we easily get
\begin{equation}
\ddot{a}=-\frac{\kappa}{6}(\rho+3p)a+a\frac{\Lambda}{3}.
\end{equation}
This is the Raychaudhuri equation \cite{ref11,ref12}; the cosmic acceleration is governed by the terms on its right hand side. From the second field equation we also obtain
\begin{equation}
-\frac{k_0}{2}=\frac{1}{2}\dot{a}^2-\Big(\frac{4\pi}{3}\rho+\frac{\Lambda}{6}\Big)a^2
\end{equation}
Now, to find the general solution of the system, a function $M(\rho)$ is defined with the help of equations $(1)$, $(5)$, $(6)$ and $(8)$ as:
\begin{equation}
M(\rho) = \exp \left[ \int \frac{d\rho}{p+\rho}\right] > 0.
\end{equation}
We consider the pressure $p$ as a function of density $\rho$ and obtain
\begin{equation}
\frac{dM(\rho)}{d\rho}=\frac{M(\rho)}{p+\rho} > 0.
\end{equation}
We now write the conservation equation $(6)$ as:
\begin{equation}
\frac{d}{dt}[\ln M(\rho)+\ln a^3]=0\Rightarrow M(\rho)a^3=m_0.
\end{equation}
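As a numerical illustration (a sketch assuming a constant EoS $p=\omega\rho$, for which equation $(9)$ gives $M(\rho)=\rho^{1/(1+\omega)}$), one can integrate the conservation equation in $a$ and verify that $M(\rho)a^3$ stays constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

w, rho0 = -0.7, 1.0   # constant EoS and initial density (illustrative)

# Conservation: rho' + 3(a'/a)(p + rho) = 0  =>  d rho/d a = -3(1+w) rho / a
sol = solve_ivp(lambda a, rho: -3 * (1 + w) * rho / a,
                (1.0, 5.0), [rho0], dense_output=True,
                rtol=1e-10, atol=1e-12)

a = np.linspace(1.0, 5.0, 50)
rho = sol.sol(a)[0]
M = rho ** (1.0 / (1.0 + w))   # M(rho) for p = w*rho
invariant = M * a ** 3         # should equal m0 at every a
assert np.allclose(invariant, invariant[0], rtol=1e-6)
```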
In this letter, we calculate the relations between $a(t)$ and $t$ for the BA, FSLL I, FSLL II and polynomial parametrizations one by one in the following parts. We then interpret these relations graphically. Lastly we briefly discuss the results achieved and draw a conclusion.
We use the EoS of the BA parametrization in the expression relating $M$ and the density, i.e., in equation $(9)$, and get the relation between $M$, $\rho$ and $z$ as
\begin{equation}
m_0a^{-3}=M=\rho^{\frac{1}{1+\omega_0+\omega_1\frac{z(1+z)}{1+z^{2}}}}\Rightarrow \rho=(m_0a^{-3})^ {\left\lbrace {1+\omega_0+\omega_{1} \frac{z(1+z)}{1+z^{2}}} \right\rbrace }
\end{equation}
Again, using equation $(8)$ we get
\begin{eqnarray}
\dot{a}^2=2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace 1+\omega_0+\omega_{1}\frac{z(1+z)}{1+z^{2}}\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \nonumber \\\Rightarrow \left(\frac{da}{dt}\right)=\left[2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace 1+\omega_0+\omega_{1}\frac{z(1+z)}{1+z^{2}}\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \right]^{\frac{1}{2}}
\end{eqnarray}
Expressing $t$ as an integral over $a(t)$ we obtain
\begin{eqnarray}
\int _{t_0}^{t} dt=\int _{a(t_0)}^{a} \left[2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace 1+\omega_0+\omega_{1}\frac{z(1+z)}{1+z^{2}}\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \right]^{-\frac{1}{2}}da \label{eq1}
\end{eqnarray}
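Equation (\ref{eq1}) can be evaluated by straightforward quadrature. A minimal Python sketch (with $1+z=1/a$, i.e. $a_0=1$, and illustrative values of $m_0$, $\Lambda$, $k_0$, $\omega_0$, $\omega_1$):

```python
import numpy as np
from scipy.integrate import quad

m0, Lam, k0 = 1.0, 0.1, 0.0   # illustrative choices
w0, w1 = -1.0, 0.1

def adot(a):
    """da/dt from eq. (13) for the BA EoS, with z = 1/a - 1."""
    z = 1.0 / a - 1.0
    expo = 1 + w0 + w1 * z * (1 + z) / (1 + z ** 2)
    rho = (m0 * a ** -3) ** expo
    return np.sqrt(2 * (4 * np.pi / 3 * rho + Lam / 6) * a ** 2 - k0)

def t_of_a(a, a_start=1.0):
    val, _ = quad(lambda x: 1.0 / adot(x), a_start, a)
    return val

assert abs(t_of_a(1.0)) < 1e-12
assert t_of_a(2.0) > 0.0      # expansion: larger a is reached at later t
```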
To obtain an analytic solution of equation (\ref{eq1}), we put $\omega_0=-1$, $\omega_1=0$, $m_0=1$ and get:\\For $k_0=0$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}}\ln a
\end{equation}
\\For $k_0=1$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}}\ln \left( 2a\sqrt{8\pi-1}+2\sqrt{a^2(8\pi-1)-3}\right)
\end{equation}
\\For $k_0=-1$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}} \left(\sinh^{-1}\left[\frac{a\sqrt{8\pi-1}}{\sqrt{3}} \right] \right)
\end{equation}
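The prefactor $\sqrt{3/(8\pi-1)}$ corresponds to $\frac{8\pi}{3}+\frac{\Lambda}{3}=\frac{8\pi-1}{3}$, i.e. an implicit choice $\Lambda=-1$ (our reading, an assumption). Under that assumption the flat-universe solution can be checked numerically by inverting it to $a(t)=e^{ct}$ and differentiating:

```python
import numpy as np

# With w0 = -1, w1 = 0, m0 = 1 the density is constant, so for k0 = 0
# eq. (13) reads adot**2 = (8*pi/3 + Lam/3) * a**2; Lam = -1 is assumed
# to match the printed prefactor sqrt(3/(8*pi - 1)).
Lam = -1.0
c = np.sqrt(8 * np.pi / 3 + Lam / 3)   # so t(a) = (1/c) * ln(a)

t = np.linspace(0.0, 2.0, 201)
a = np.exp(c * t)                      # inverse of t(a)
adot = np.gradient(a, t)

# Interior points: adot**2 should match c**2 * a**2 up to finite differences
assert np.allclose(adot[1:-1] ** 2, (c ** 2 * a ** 2)[1:-1], rtol=1e-2)
```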
Solving equation (\ref{eq1}) numerically, we plot graphs of $t$ vs $a(t)$ for $k_0=1,0,-1$. For $k_{0}=1$, we get graph $1(a)$. Here we discuss the quintessence and phantom era cases for different values of $k_0$ and $\omega$.
\begin{figure}[ht!]
\begin{center}
$~~~~Fig.1(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.1(b)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.1(c)$
\includegraphics[height=5cm, width=5.6cm]{P3.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{P1.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{P5.eps}
\end{center}
Figures $1(a)$ to $1(c)$ are $t$ vs. $a(t)$ plots for the BA parametrization for $k_0=1$, $k_0=0$ and $k_0=-1$ respectively. For Fig.$1(a)$: solid line stands for $\omega_0=-1$, $\omega_1=0.1$; dotted line represents $\omega_0=-0.80423$, $\omega_1=1.40845$ and the dot-dashed one stands for $\omega_0=1.006$ and $\omega_1=-0.41493775933$. For Fig.$1(b)$: solid line stands for $\omega_0=-1$, $\omega_1=0.1$; dotted line represents $\omega_0=-0.80423$, $\omega_1=1.40845$ and the dot-dashed one stands for $\omega_{0}=0.9170224481,\omega_{1}=-0.3149677893$. For Fig.$1(c)$: solid line stands for $\omega_0=-1$, $\omega_1=0.1$ and the dotted line represents $\omega_0=-0.80423$, $\omega_1=1.40845$.
\end{figure}
The solid line shows that for negative time we may have a negative (but increasing) $a(t)$, i.e., if we choose $\omega_0=-1$ and $\omega_1=0.1$, a deceleration may have occurred in the past. For $\omega_0=-0.80423$ and $\omega_1=1.40845$ (dotted curve) we see that $a(t)$ increases as $t$ increases. However, when $\omega_0=1.006$ and $\omega_1=-0.41493775933$ (dot-dashed graph), $a(t)$ becomes asymptotic to a finite value as $t$ increases. This third case does not allow any future cosmological singularity. However, in the closed universe the $\omega_0=1.9170224481$ case does not give any physical value for $t>0$.
We can suppose that $a(t)$ blows up for increasing $t$. The same pattern is followed in the flat universe case.
For the open universe (Fig. $1(c)$), the graph in which $a(t)$ becomes asymptotic to a finite value for increasing $t$ is absent. This signifies that the open universe cannot avoid possessing a future singularity.
\begin{figure}[ht!]
\begin{center}
$~~~~Fig.2(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.2(b)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.2(c)$
\includegraphics[height=5cm, width=5.6cm]{P4.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{P2.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{P6.eps}
\end{center}
Figures $2(a)$ to $2(c)$ are $t$ vs. $a(t)$ plots for the BA parametrization for $k_0=1$, $k_0=0$ and $k_0=-1$ respectively. For Fig.$2(a)$: dashed line represents $\omega_0=1.8$, $\omega_1=-1.3105$ and the dot-dashed one stands for $\omega_0=1.1$ and $\omega_1=1.9$. For Fig.$2(b)$: dashed line represents $\omega_0=1.1$, $\omega_1=-1.895$ and the dot-dashed one stands for $\omega_{0}=1.1,\omega_{1}=1.4$. For Fig.$2(c)$: dashed line stands for $\omega_0=1.8$, $\omega_1=0.799$; dotted line represents $\omega_0=1.1$, $\omega_1=0.9$.
\end{figure}
In Fig.$2(a)$ the dashed ($\omega_0=1.8$, $\omega_1=-1.3105$) and dot-dashed ($\omega_0=1.1$, $\omega_1=1.9$) lines are both steeply increasing, sensitive curves: they become unphysical if we make even a tiny change in the values of $\omega_0$ or $\omega_1$, down to the fifth or sixth decimal place. The dot-dashed line behaves quite curiously. Keeping $\omega_0$ fixed, if we put $\omega_1=1.8502$ the graph is still physical, but the range of $t$ suddenly rises up to $2.5\times10^{27}$. It becomes unphysical at $\omega_1=1.850235$ and remains so up to $\omega_1=1.89899$.
In Fig.$2(b)$ the dashed line ($\omega_1=-1.895$) and the dot-dashed line ($\omega_1=1.4$), with $\omega_0=1.1$ kept fixed, are steeply increasing. The former strictly increases with $t$ after $a(t)=5.6$ and becomes unphysical if we change the value of $\omega_1$ even in the fifth decimal place. The latter increases for $a(t)>0.35$ and becomes unphysical if we change the value of $\omega_1$ from $1.4$ to $1.402$.
In Fig.$2(c)$ the dashed ($\omega_0=1.8$, $\omega_1=0.799$) and dotted ($\omega_0=1.1$, $\omega_1=0.9$) lines are so sensitive that they become unphysical if even a small change in the values of $\omega_0$ or $\omega_1$ is made.
Surprisingly, if we take values of the Barboza Alcaniz parameters that signify the phantom era, we observe that $a(t)$ converges to a finite value for increasing $t$ in the closed universe. The flat and open universe cases do not support a constant (or asymptotic to a finite value) $a(t)$ for infinite $t$, but in these two cases $a(t)$ still does not diverge to an infinite value for increasing $t$. So it is clear that the Barboza Alcaniz parametrization does not support an infinite $a(t)$ even when the parameters signify the phantom era.
After describing the nature of the expanding universe with dark energy of the BA type, we now follow the same procedure for the \textbf{FSLL I} parametrization. Using the FSLL I EoS in equation $(9)$, we get
\begin{equation}
m_0a^{-3}=M=\rho^{\frac{1}{1+\omega_0+\omega_1\frac{z}{1+z^{2}}}}\Rightarrow \rho=(m_0a^{-3})^{(1+\omega_0+\omega_{1}\frac{z}{1+z^{2}})}
\end{equation}
Again, using equation $(8)$ we get
\begin{eqnarray}
\dot{a}^2=2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace 1+\omega_0+\omega_{1}\frac{z}{1+z^{2}}\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \nonumber \\\Rightarrow \left(\frac{da}{dt}\right)=\left[2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace 1+\omega_0+\omega_{1}\frac{z}{1+z^{2}}\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \right]^{\frac{1}{2}} .
\end{eqnarray}
Here, integrating, we express $t$ in terms of $a(t)$:
\begin{eqnarray}
t-t_0=\int_{a(0)}^{a} \left[2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace 1+\omega_0+\omega_{1}\frac{z}{1+z^{2}}\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \right]^{-\frac{1}{2}}da \label{eq2}
\end{eqnarray}
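Since only the EoS changes from one parametrization to the next, the quadrature can be written once with $\omega(z)$ as a callable. A sketch (illustrative $m_0$, $\Lambda$, $k_0$ and EoS parameters; the FSLL forms are those of equation $(3)$):

```python
import numpy as np
from scipy.integrate import quad

m0, Lam, k0 = 1.0, 0.1, 0.0   # illustrative choices

def t_of_a(a, w_of_z, a_start=1.0):
    """t(a) with the EoS supplied as a callable w(z); z = 1/a - 1."""
    def integrand(x):
        z = 1.0 / x - 1.0
        rho = (m0 * x ** -3) ** (1.0 + w_of_z(z))
        return 1.0 / np.sqrt(2 * (4 * np.pi / 3 * rho + Lam / 6) * x ** 2 - k0)
    val, _ = quad(integrand, a_start, a)
    return val

fsll1 = lambda z: -1.0 + 0.2 * z / (1 + z ** 2)        # FSLL I
fsll2 = lambda z: -1.0 + 0.2 * z ** 2 / (1 + z ** 2)   # FSLL II
assert t_of_a(2.0, fsll1) > 0.0
assert t_of_a(2.0, fsll2) > 0.0
```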
To obtain an analytic solution of equation (\ref{eq2}), we put $\omega_0=-1$, $\omega_1=0$, $m_0=1$ and get:\\For $k_0=0$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}}\ln a
\end{equation}
\\For $k_0=1$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}}\ln \left( 2a\sqrt{8\pi-1}+2\sqrt{a^2(8\pi-1)-3}\right)
\end{equation}
\\For $k_0=-1$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}} \left(\sinh^{-1}\left[\frac{a\sqrt{8\pi-1}}{\sqrt{3}} \right] \right)
\end{equation}
Solving equation (\ref{eq2}) numerically we plot graphs of $t$ vs $a(t)$ for $k_0=1, 0, -1$. For $k_{0}=1$, we get graph 3(a). Here we compare the quintessence and phantom era cases with the BA parametrization for different values of $k_0$ and $\omega$.
\begin{figure}[ht!]
\begin{center}
$~~~~Fig.3(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.3(b)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~3(c)$
\includegraphics[height=5cm, width=5.6cm]{c1p9.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{cp7.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{c-1p10.eps}
\end{center}
Figures $3(a)$ to $3(c)$ are $t$ vs. $a(t)$ plots of the FSLL I parametrization for the quintessence era for $k_0=1$, $k_0=0$ and $k_0=-1$ respectively. For Fig.$3(a)$: solid line stands for $\omega_0=-1$, $\omega_1=0.2$; dotted line represents $\omega_0=-1.30423$, $\omega_1=1.40845$; dashed line represents $\omega_0=-1.40423$, $\omega_1=1.40845$; the thicker line represents $\omega_0=-0.80423$, $\omega_1=1.40845$ and the dot-dashed one stands for $\omega_0=1.006$, $\omega_1=-0.34493775933$. For Fig.$3(b)$: solid line stands for $\omega_0=-1.2$, $\omega_1=0.11$; dotted line represents $\omega_0=-0.70423$, $\omega_1=1.40845$ and the dot-dashed one stands for $\omega_{0}=1.1170224481,\omega_{1}=-0.3142677893$. For Fig.$3(c)$: solid line stands for $\omega_0=-1$, $\omega_1=0.9$; dotted line represents $\omega_0=-0.80423$, $\omega_1=0.80845$ and the dot-dashed one represents $\omega_0=0.9170224481$, $\omega_1=-0.6149677893$.
\end{figure}
In Fig. $3(a)$, the solid, dashed and dotted lines show that for negative time we get an increasing negative $a(t)$, which represents deceleration in the past. The dot-dashed one gives no cosmological singularity in the future, as here $a(t)$ becomes asymptotic to a finite value for increasing $t$.
In Figs. $3(b)$ and $3(c)$, if we study the increment of $a(t)$ with respect to $t$, we note that the dotted and solid lines diverge while the dot-dashed one converges. In both graphs the dot-dashed line becomes asymptotic to a finite value for increasing $t$. The patterns of the flat and open universes are almost the same.
Here, in the quintessence era, it has been noticed that in each case of closed, flat and open universe ($k_0=1, 0, -1$), the dot-dashed line always converges with the increment of $t$ while all other curves diverge. Surprisingly, the dot-dashed lines in each graph have been plotted for ($\omega_0>0, \omega_1<0$), which is near the phantom era, while all other lines have been drawn for ($\omega_0<0, \omega_1>0$).
\begin{figure}[ht!]
\begin{center}
$~~~~Fig.4(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.4(b)$
\includegraphics[height=5cm, width=5.6cm]{cp8.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{c-1p11.eps}
\end{center}
Figures $4(a)$ and $4(b)$ are $t$ vs. $a(t)$ plots of the FSLL I parametrization for the phantom era for $k_0=0$ and $k_0=-1$ respectively. For Fig.$4(a)$: dashed line represents $\omega_0=1.7$, $\omega_1=-1.895$; dotted line stands for $\omega_0=0.7$, $\omega_1=-1.895$ and the dot-dashed one stands for $\omega_0=1.006$ and $\omega_1=-0.41493775933$. For Fig. $4(b)$: dashed line represents $\omega_0=1.8$, $\omega_1=-0.699$ and the dot-dashed one stands for $\omega_{0}=1.1,\omega_{1}=0.6$.
\end{figure}
In Fig. $4(a)$ the dotted, dashed and dot-dashed lines all converge with the increment of $a(t)$. For the same $\omega_1$ ($-1.895$) we get the dotted ($\omega_0=0.7$) and dashed ($\omega_0=1.7$) lines. For the dotted one, $a(t)$ converges and becomes asymptotic to a finite value with the increment of $t$, while both the dashed and dot-dashed ($\omega_0=1.006$, $\omega_1=-0.41493775933$) lines are absent for increasing $t$.
In Fig. $4(b)$ ($k_0=-1$, i.e. the open universe) both the dashed and dot-dashed lines are clearly convergent. The convergence of the dot-dashed line is more pronounced than that of the dashed one. The data of the dot-dashed one ($\omega_0=1.1, \omega_1=0.6$) lie very close to the quintessence barrier, while those of the dashed one ($\omega_0=1.8, \omega_1=-0.699$) lie in the phantom era.
Therefore, the FSLL I parametrization behaves in a way that contradicts the big-rip scenario: although $a(t)$ should increase with time in our cosmically accelerating universe (especially in the phantom era), FSLL I does not let it increase; rather, this parametrization keeps $a(t)$ finite in the future.
Now we study the nature of the expansion of the universe with the FSLL II type of DE model, again using the EoS of this parametrization. From equation $(9)$ we obtain
\begin{equation}
m_0a^{-3}=M=\rho^{\frac{1}{1+\omega_0+\omega_1\frac{z^2}{1+z^{2}}}}\Rightarrow \rho=(m_0a^{-3})^{\left(1+\omega_0+\omega_{1}\frac{z^2}{1+z^{2}}\right)}
\end{equation}
Again, using equation $(8)$ we get
\begin{eqnarray}
\dot{a}^2=2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace 1+\omega_0+\omega_{1}\frac{z^2}{1+z^{2}}\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \nonumber \\\Rightarrow \left(\frac{da}{dt}\right)=\left[2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace 1+\omega_0+\omega_{1}\frac{z^2}{1+z^{2}}\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \right]^{\frac{1}{2}}
\end{eqnarray}
After integrating, we express $t$ in terms of $a(t)$:
\begin{eqnarray}
t-t_0=\int_{a(0)}^{a} \left[2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace 1+\omega_0+\omega_{1}\frac{z^2}{1+z^{2}}\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \right]^{-\frac{1}{2}}da \label{eq3}
\end{eqnarray}
To obtain an analytic solution of equation (\ref{eq3}), we put $\omega_0=-1$, $\omega_1=0$, $m_0=1$ and get:\\For $k_0=0$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}}\ln a
\end{equation}
\\For $k_0=1$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}}\ln \left( 2a\sqrt{8\pi-1}+2\sqrt{a^2(8\pi-1)-3}\right)
\end{equation}
\\For $k_0=-1$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}} \left(\sinh^{-1}\left[\frac{a\sqrt{8\pi-1}}{\sqrt{3}} \right] \right)
\end{equation}
Solving equation (\ref{eq3}) numerically we plot graphs of $t$ vs $a(t)$ for $k_0=1, 0, -1$. For $k_{0}=1$, we get graph $5(a)$. Here we compare the quintessence and phantom era cases with the Barboza Alcaniz parametrization for different values of $k_0$ and $\omega$.
\begin{figure}[ht!]
\begin{center}
$~~~~Fig.5(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.5(b)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~5(c)$
\includegraphics[height=5cm, width=5.6cm]{c1p14.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{cp12.eps}~~~\includegraphics[height=5cm, width=5.6cm]{c-1p16.eps}
\end{center}
Figures $5(a)$ to $5(c)$ are $t$ vs. $a(t)$ plots of the FSLL II parametrization for the quintessence era for $k_0=1$, $k_0=0$ and $k_0=-1$ respectively. For Fig.$5(a)$: solid line stands for $\omega_0=-0.9$, $\omega_1=0.7$; dotted line represents $\omega_0=-0.60423$, $\omega_1=1.40845$; the dot-dashed line stands for $\omega_0=1.006$ and $\omega_1=-0.41493775933$; dashed line represents $\omega_0=-0.2$, $\omega_1=0.7$ and the thicker one represents $\omega_0=-0.6$, $\omega_1=0.7$. For Fig.$5(b)$: solid line stands for $\omega_0=-1$, $\omega_1=0.1$ and the dotted line represents $\omega_0=-1.50423$, $\omega_1=1.40845$. For Fig.$5(c)$: the solid line stands for $\omega_0=-1.4$, $\omega_1=0.1$; dotted line represents $\omega_0=-0.30423$, $\omega_1=1.40845$; the dashed one stands for $\omega_{0}=-1.20423,\omega_{1}=1.40845$ and the dot-dashed one represents $\omega_{0}=0.5170224481,\omega_{1}=-0.3149677893$.
\end{figure}
The graphs of the FSLL II parametrization change rapidly for the closed universe ($k_0=1$). Here we get totally different graphs for the same $\omega_1$ ($0.7$) upon making a small change in $\omega_0$ (the solid, dashed and thicker curves). The solid line is purely divergent and the thicker line slightly divergent, but the dashed line converges with increasing $t$. The dotted line ($\omega_0=-0.60423$, $\omega_1=1.40845$) gives no proper conclusion, and for the dot-dashed line ($\omega_0=1.006$, $\omega_1=-0.41493775933$) the asymptotic approach to a finite value for increasing $t$ is absent.
For the flat universe ($k_0=0$), the graphs become divergent with the increment of $a(t)$. The range of another graph suddenly rises so high ($10^{30}$) that no comparison can be made, and we have omitted it from the figure.
In the case of the open universe, we see that the dotted line ($\omega_0=-0.30423$, $\omega_1=1.40845$) is purely convergent, the dashed line ($\omega_{0}=-1.20423,\omega_{1}=1.40845$) gives no conclusion, and the solid line ($\omega_0=-1.4$, $\omega_1=0.1$) leads to divergence.
\begin{figure}[ht!]
\begin{center}
$~~~~~~~~~~Fig.6(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.6(b)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.6(c)$
\includegraphics[height=5cm, width=5.6cm]{c1p15.eps}~~~~~~~\includegraphics[height=5cm, width=5.6cm]{cp13.eps}~~~~~~~~\includegraphics[height=5cm, width=5.6cm]{c-1p17.eps}
\end{center}
Figures $6(a)$ to $6(c)$ are $t$ vs. $a(t)$ plots of the FSLL II parametrization for the phantom era for $k_0=1$, $k_0=0$ and $k_0=-1$ respectively. For Fig.$6(a)$: dashed line represents $\omega_0=0.5$, $\omega_1=-0.799$ and the dot-dashed one stands for $\omega_0=1.2$ and $\omega_1=0.9$. For Fig.$6(b)$: dashed line represents $\omega_0=0.8$, $\omega_1=-1.895$ and the dot-dashed one stands for $\omega_{0}=1.1,\omega_{1}=0.6$. For Fig.$6(c)$: dashed line represents $\omega_0=0.5$, $\omega_1=-0.799$ and the dot-dashed one represents $\omega_0=1.1$, $\omega_1=0.9$.
\end{figure}
In the phantom era, for the closed universe ($k_0=1$) the dashed ($\omega_0=0.5$, $\omega_1=-0.799$) and dot-dashed ($\omega_0=1.2$, $\omega_1=0.9$) lines become asymptotic to a finite value for highly increasing $t$ (in the range of $10^{30}$).
In Fig. $6(b)$ we notice that the dot-dashed line ($\omega_{0}=1.1,\omega_{1}=0.6$) is quite similar to the dashed one ($\omega_0=0.8$, $\omega_1=-1.895$) and coincides with it after a small increment of $a(t)$, although the corresponding values of $\omega_1$ for the two curves are quite different.
In the case of the open universe, we note that both the dashed ($\omega_0=0.5$, $\omega_1=-0.799$) and dot-dashed ($\omega_0=1.1$, $\omega_1=0.9$) lines diverge at high $t$. The dot-dashed one becomes parallel to the $a(t)$ axis for increasing $t$.
We observe from the graphs for $k_0= 1, 0, -1$ that $a(t)$ does not diverge to an infinite value for increasing $t$ in the quintessence era, but in the phantom era the curves are divergent. The rate of divergence is higher for the flat universe than for the closed one, and for the open universe the behaviour is totally divergent.
Lastly, we use a DE model of the polynomial parametrization type, which is a bit different from the other parametrizations (BA, FSLL I and FSLL II) discussed earlier in this letter. Using the EoS of this redshift parametrization in equation $(9)$, we obtain
\begin{equation}
m_0a^{-3}=M=\rho^{\frac{1}{(1+\omega_0)\left(\frac{1+2z}{1+z}\right)}}\Rightarrow \rho=(m_0a^{-3})^{(1+\omega_0)\left(\frac{1+2z}{1+z}\right)}
\end{equation}
Note that this parametrization differs from the previous cases in that it depends only upon the value of $\omega_0$ (the $\omega_1$-dependence drops out). Again, using equation $(8)$ we obtain
\begin{eqnarray}
\dot{a}^2=2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace (1+\omega_0)(\frac{1+2z}{1+z}) \right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \nonumber \\\Rightarrow \left(\frac{da}{dt}\right)=\left[2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace (1+\omega_0)(\frac{1+2z}{1+z})\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \right]^{\frac{1}{2}}
\end{eqnarray}
Here, integrating, we express $t$ in terms of $a(t)$:
\begin{eqnarray}
t-t_0=\int_{a(0)}^{a} \left[2\left\lbrace \frac{4\pi}{3}(m_0a^{-3})^{\left\lbrace (1+\omega_0)(\frac{1+2z}{1+z})\right\rbrace }+\frac{\Lambda}{6}\right\rbrace a^2-k_0 \right]^{-\frac{1}{2}}da \label{eq4}
\end{eqnarray}
To obtain an analytic solution of equation (\ref{eq4}), we put $\omega_0=-1$, $\omega_1=0$, $m_0=1$ and get:\\For $k_0=0$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}}\ln a
\end{equation}
\\For $k_0=1$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}}\ln \left( 2a\sqrt{8\pi-1}+2\sqrt{a^2(8\pi-1)-3}\right)
\end{equation}
\\For $k_0=-1$
\begin{equation}
t(a)= \sqrt{\frac{3}{8\pi-1}} \left(\sinh^{-1}\left[\frac{a\sqrt{8\pi-1}}{\sqrt{3}} \right] \right)
\end{equation}
Solving equation (\ref{eq4}) numerically we plot graphs of $t$ vs $a(t)$ for $k_0=0,1,-1$. For $k_{0}=1$, we get graph $7(a)$. The quintessence and phantom era cases for different values of $k_0$ and $\omega$ are discussed here.
\begin{figure}[ht!]
\begin{center}
$~~~~Fig.7(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.7(b)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~7(c)$
\includegraphics[height=5cm, width=5.6cm]{pl19.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{pl18.eps}~~~~~\includegraphics[height=5cm, width=5.6cm]{pl21.eps}
\end{center}
Figures $7(a)$ to $7(c)$ are $t$ vs. $a(t)$ plots of polynomial parametrization for quintessence era for $k_0=1$, $k_0=0$ and $k_0=-1$ respectively. For Fig.$7(a)$: the solid line stands for $\omega_0=-1.3$; dotted line represents $\omega_0=-0.80423$ and the dot-dashed one stands for $\omega_0=1.006$. For Fig.$7(b)$: solid line stands for $\omega_0=-1.3$; dotted line represents $\omega_0=-0.80423$ and the dot-dashed one stands for $\omega_{0}=0.6170224481$. For Fig.$7(c)$: solid line stands for $\omega_0=-1$ and the dotted line represents $\omega_0=-0.80423$.
\end{figure}
In the closed universe (Fig. $7(a)$) the graphs show that for negative time we get an increasing negative $a(t)$, which represents deceleration in the past. Here the solid ($\omega_0=-1.3$), dotted ($\omega_0=-0.80423$) and dot-dashed ($\omega_0=1.006$) lines all coincide just below the $a(t)$ axis near $t=0.05$.
For the flat universe ($k_0=0$), $a(t)$ grows with the increment of $t$. Here the solid line ($\omega_0=-1.3$) eventually becomes asymptotic to a finite value of $a(t)$, while the dotted ($\omega_0=-0.80423$) and dot-dashed ($\omega_{0}=0.6170224481$) lines converge with increasing $t$. The solid line does not allow any future cosmological singularity.
In Fig. $7(c)$ we observe that the polynomial parametrization gives quite similar curves for the open universe ($k_0=-1$). Here the solid ($\omega_0=-1$) and dotted ($\omega_0=-0.80423$) lines lie in the negative region. The lines coincide near the $a(t)$ axis, and from their behaviour we infer that deceleration occurred in the past.
\begin{figure}[ht!]
\begin{center}
$~~~~~~~~~~~~~~~~~~~~~~~~Fig.8~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$\\
\includegraphics[height=5cm, width=5.6cm]{pl20.eps}~~~~~~~~~~~~~~~~~~~~
\end{center}
Figure $8$ is a $t$ vs. $a(t)$ plot of polynomial parametrization for phantom era for $k_0=1$ only. Here, the dashed line represents $\omega_0=1.7$ and the dot-dashed one stands for $\omega_0=1.2$ .
\end{figure}
In Fig. $8$, both the dashed ($\omega_0=1.7$) and the dot-dashed ($\omega_0=1.2$) lines give increasing graphs in the negative region (it seems that for negative time we get a negative but increasing $a(t)$), i.e. a deceleration may have occurred in the past.
In the polynomial parametrization, we have found a graph only for $k_0=1$; those for $k_0=0$ and $k_0=-1$ are either unphysical, coincide with the above graph, or have ranges that suddenly rise so high that no meaningful comparison can be made. In this parametrization the $\omega_1$-dependence drops out, so the curves depend only upon the value of $\omega_0$. It should be pointed out that, for the chosen values of $\omega_0$, $a(t)$ converges to a finite value for increasing $t$. Unlike the other parametrizations, here the graphs often coincide near the $a(t)$ axis for small $t$.
Finally, we briefly discuss the results found in this letter. We have studied the evolution of the scale factor $a(t)$ with time $t$, taking the equations of state of various redshift parametrizations and carrying out a comparative study of some important ones. First, we considered the FLRW metric and, using the mainstream families of redshift parametrization, stated the EoS of the different parametrizations. We used some well-known equations, such as Einstein's field equations, the energy conservation equation and the Raychaudhuri equation, and carried out some preliminary calculations. A function $M(\rho)$ was introduced; we inserted the EoS of the BA, FSLL I, FSLL II and polynomial parametrizations into it, solved the resulting differential equations for time ($t$) versus scale factor ($a(t)$) numerically, and plotted graphs for various values of $\omega_0$, $\omega_1$ for closed, flat and open universes ($k_0=1,0,-1$). These simple-looking graphs have a strong bearing on the hypothesis of the expansion of the universe. If we consider the increment of $a(t)$ with respect to $t$, the graphs show that for the closed universe ($k_0=1$) $a(t)$ becomes parallel to the $t$ axis after a certain range of values of $t$ and $a(t)$. For the flat universe ($k_0=0$) the behaviour is quite similar. For the open universe ($k_0=-1$), $a(t)$ becomes parallel after a small increment of $t$, i.e. we get a finite $a(t)$. Here we obtain a new result from the BA parametrization: where $a(t)$ should be very large and the universe should accelerate abruptly and burst apart (a Big Rip), BA states that in the phantom era $a(t)$ becomes finite for some particular $t$.
In brief, in this hypothetical cosmological scenario the Big Rip concerns the ultimate fate of the universe, in which the matter of the universe, and even space-time itself, is progressively torn apart by the expansion. A universe dominated by phantom energy is an accelerating universe, expanding at an ever-increasing rate. When the size of the observable universe becomes smaller than any particular structure, no interaction through any of the fundamental forces (gravitational, electromagnetic, strong and weak) can occur between the most remote parts of the structure. When these interactions become impossible, the structure is ripped apart.
For the FSLL I and polynomial parametrizations the same pattern is followed in all three cases of closed, flat and open universes ($k_0=1, 0, -1$): these parametrizations do not allow an infinite $a(t)$ even when the parameters signify the phantom era. The FSLL II parametrization, however, does lend some support to the Big Rip hypothesis. \\
\textbf{Acknowledgement}
This research is supported by the project grant of Government of West Bengal, Department
of Higher Education, Science and Technology and Biotechnology (File no:- ST /P/S\&T /16G − 19/2017). RB thanks IUCAA, Pune for Visiting Associateship. RB dedicates this article to his PhD supervisor Prof. Subenoy Chakraborty, Department of Mathematics, Jadavpur University, Kolkata-32, India, as a tribute on his $60^{th}$ birth year.
0812.4659
\section{Introduction} \label{sec:intro}
Although supersymmetric quantum mechanics (SUSY QM) was originally introduced by Witten \cite{Witten:1981} as a toy model for studying patterns of supersymmetry breaking, it was soon recognized that SUSY QM is interesting in its own right; for example, it provides a systematic description for categorizing analytically solvable potentials using the so-called shape invariance (see \cite{Cooper:1994} for a review).
Schr\"odinger equations with shape invariant potentials can be solved algebraically with the aid of supersymmetry.
SUSY QM also appears in various contexts of physics; it is related to soliton physics \cite{Wang:1990,Grant:1993,Rodrigues:1998,GomesLima:2002,Dias:2002,deLima:2003} including inverse scattering problems \cite{Sukumar:1985,Sparenberg:1997,Baye:2004}, two-dimensional quantum field theories \cite{Feinberg:2003,Seeger:1998}, supersymmetric lattice models leaving time-direction continuous \cite{Bergner:2007}, integrable models such as the Calogero model and its application to black hole physics \cite{Calogero:1971,Sutherland:1972,Moser:1975,Gibbons:1998,Meljanac:2006}, and quantum mechanics with point singularities \cite{point_singularity1, point_singularity2}.
Recently it was shown that in higher dimensional gauge theories with extra compact dimensions there always exists an $\mathscr{N}=2$ quantum mechanical supersymmetry (QM SUSY) in the 4d spectrum; the Kaluza-Klein mass eigenvalue problems are equivalent to energy eigenvalue problems in $\mathscr{N} = 2$ SUSY QM \cite{Lim:2005}.
The $\mathscr{N}=2$ QM SUSY can be regarded as a remnant of the higher-dimensional gauge invariance, and plays an essential role in generating an infinite tower of massive spin-1 particles.
In Ref.\cite{hierarchy}, it was pointed out that a hierarchical mass spectrum can naturally arise in the context of a higher dimensional gauge theory with a warped metric and give a solution to the gauge hierarchy problem, in which the $\mathscr{N}=2$ QM SUSY turns out to play a crucial role.
Since the extra dimension is compactified, the corresponding supersymmetric quantum mechanical systems are of course constrained to bounded domains.
There, boundary conditions are very important not only in the infrared regime but also in the ultraviolet regime, and play an essential role in determining the 4d particle spectrum, especially for the low-lying levels and massless modes.
When the compactified dimension does not respect the translational invariance due to the presence of extended defects (branes or boundaries), boundary effects also play a significant role in the ultraviolet regime as boundary localized divergent terms \cite{Georgi:2000}.
Such localized ultraviolet divergences must be renormalized by field theory operators on the boundary and give rise to nontrivial renormalization group flows for brane localized theory \cite{Goldberger:2001,Milton:2001}.
Since any such gauge invariant field theory possesses the $\mathscr{N}=2$ QM SUSY, the boundary conditions and the $\mathscr{N}=2$ QM SUSY must be compatible with each other.
In this paper we will address this issue from the supersymmetric quantum mechanics point of view:
we analyze the possible boundary conditions in one-dimensional $\mathscr{N}=2$ SUSY QM on a bounded domain $(0,L)$.
The analysis developed in \cite{Lim:2005} was extended to 5d gravity \cite{Lim:2007}.
In 5d gravity it was shown that \textit{two} $\mathscr{N}=2$ SUSYs are hidden in the 4d spectrum.
The two $\mathscr{N}=2$ SUSYs can be regarded as a remnant of higher-dimensional general coordinate invariance, and are needed in order for the ``Higgs'' mechanism to generate massive spin-2 particles; one of the two quantum mechanical SUSYs ensures the degeneracy between spin-2 and spin-1 excitations and the other between spin-1 and spin-0 excitations.
A crucial ingredient of this coexistence of two quantum mechanical SUSYs is the refactorization of Hamiltonians (Laplace operators).
In view of these facts it would be natural to guess that in a higher-dimensional spin-$N$ field theory there would exist $N$ $\mathscr{N}=2$ SUSYs in the 4d mass spectrum.
In this paper we will also investigate whether it is possible to construct such a hierarchy of $N$ SUSYs without conflicting with the boundary conditions.
The rest of this paper is organized as follows.
In Section \ref{sec:BC} we analyze the possible boundary conditions in $\mathscr{N}=2$ SUSY QM on a bounded domain $(0,L)$.
We show that the allowed boundary conditions in $\mathscr{N}=2$ SUSY QM are limited to the so-called scale-independent subfamily of the $U(2)$ family of boundary conditions \cite{Cheon:2000}.
In Section \ref{sec:refactorization} we construct a hierarchy of $N$ SUSYs by solving the refactorization condition.
The results coincide with the so-called isospectral deformations of the Hamiltonian \cite{Abraham:1980,Baye:1987,Amado:1988}.
In Section \ref{sec:hierarchy} we analyze the allowed boundary conditions of a quantum mechanical system with $N$ SUSYs on an interval and on a circle separately and present a systematic prescription to construct a hierarchy
of isospectral Hamiltonians.
Section \ref{sec:conclusion} is devoted to conclusions and discussions.
\section{Boundary conditions in $\mathscr{N}=2$ SUSY QM} \label{sec:BC}
Hermiticity of the Hamiltonian is a basic principle of quantum theory; it guarantees the unitarity of the S-matrix, or the conservation of probability, in the whole quantum system.
In one-dimensional non-supersymmetric quantum mechanics it is known that the most general boundary conditions consistent with the hermiticity of Hamiltonian are characterized by a $2\times2$ unitary matrix $U$ \cite{Cheon:2000}.
In one-dimensional $\mathscr{N}=2$ SUSY QM, however, supersymmetry imposes more severe constraints on the parameter space of this $U(2)$ family of boundary conditions.
As we will show below the possible boundary conditions consistent with $\mathscr{N}=2$ supersymmetry are limited to the so-called scale-independent subfamily of the $U(2)$ family of boundary conditions.
To begin with let us consider $\mathscr{N}=2$ SUSY QM on a finite domain $(0, L) \in \mathbb{R}$, whose Hamiltonians are given by\footnote
{%
$\mathscr{N} = 2$ supersymmetry will be transparent by introducing the following $2\times 2$ matrix operators
\begin{align}
\mathscr{H}
= \begin{bmatrix}
H_{0} & 0 \\
0 & H_{1}
\end{bmatrix}, \quad
(-1)^{F}
= \begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}, \quad
\mathscr{Q}_{1}
= \begin{bmatrix}
0 & Q_{0}^{\dagger} \\
Q_{0} & 0
\end{bmatrix}, \quad
\mathscr{Q}_{2}
= i(-1)^{F}\mathscr{Q}_{1}, \nonumber
\end{align}
which satisfy the standard $\mathscr{N} = 2$ supersymmetry algebra
\begin{align}
\{\mathscr{Q}_{i}, \mathscr{Q}_{j}\}
= 2\delta_{ij}\mathscr{H}, \quad
[\mathscr{Q}_{i}, \mathscr{H}]
= 0, \quad
[(-1)^{F}, \mathscr{H}]
=0, \quad
\{(-1)^{F}, \mathscr{Q}_{i}\}
=0, \quad
i,j
= 1,2. \nonumber
\end{align}
}
\begin{subequations}
\begin{align}
H_{0}
&= Q_{0}^{\dagger}Q_{0}, \label{eq:2.2a}\\
H_{1}
&= Q_{0}Q_{0}^{\dagger}. \label{eq:2.2b}
\end{align}
\end{subequations}
The supercharge $Q_{0}$ and its adjoint $Q_{0}^{\dagger}$ are given by
\begin{subequations}
\begin{align}
Q_{0}
&= \frac{\mathrm{d}}{\mathrm{d}x} + W_{0}^{\prime}(x), \label{eq:2.3a}\\
Q_{0}^{\dagger}
&= - \frac{\mathrm{d}}{\mathrm{d}x} + W_{0}^{\prime}(x), \label{eq:2.3b}
\end{align}
\end{subequations}
where $W_{0}$ is a superpotential (or prepotential), which must be a real function in order to guarantee the hermiticity of the Hamiltonians, and prime ($\prime$) indicates the derivative with respect to $x$.
In terms of the zero-mode function $\phi_{0}^{(0)}$ satisfying the equation $Q_{0}\phi_{0}^{(0)} = 0$, the superpotential $W_{0}$ can be written as
\begin{align}
W_{0}(x) = -\ln\phi_{0}^{(0)}(x). \label{eq:2.4}
\end{align}
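The relation \eqref{eq:2.4} simply states that the zero mode is $\phi_{0}^{(0)} = \mathrm{e}^{-W_{0}}$. This can be cross-checked with a one-line symbolic computation; the sketch below (illustrative, not part of the paper, with $W_{0}$ left as a generic function) verifies that $Q_{0}\phi_{0}^{(0)}$ vanishes identically:

```python
import sympy as sp

x = sp.symbols('x', real=True)
W0 = sp.Function('W_0')(x)              # generic superpotential
phi0_zero = sp.exp(-W0)                 # candidate zero mode from eq. (2.4)

# Q_0 phi_0^{(0)} = (d/dx + W_0') e^{-W_0} vanishes identically
residual = sp.simplify(sp.diff(phi0_zero, x) + sp.diff(W0, x) * phi0_zero)
print(residual)   # -> 0
```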
Supersymmetric relations are
\begin{subequations}
\begin{align}
Q_{0}\phi_{0}
&= \sqrt{E}\phi_{1}, \label{eq:2.5a}\\
Q_{0}^{\dagger}\phi_{1}
&= \sqrt{E}\phi_{0}, \label{eq:2.5b}
\end{align}
\end{subequations}
where $\phi_{0}$ and $\phi_{1}$ are eigenfunctions of $H_{0}$ and $H_{1}$, respectively, with the common energy $E$.
In this paper we will concentrate on a finite superpotential on the whole domain.
In other words, we require that $\phi_{0}^{(0)}$ has no zero point (or no node).
Next we will focus on the hermiticity of $H_{0}$ and then derive the allowed boundary conditions for $\phi_{0}$ and $\phi_{1}$ using the supersymmetric relations \eqref{eq:2.5a} and \eqref{eq:2.5b}.
In physical language, the hermiticity of the Hamiltonian $H_{0}$ indicates the conservation of probability in the whole system $j_{0}(0) = j_{0}(L)$, where the probability current density $j_{0}$ is defined by $j_{0} = -i((\phi_{0}^{\ast})^{\prime}\phi_{0} - \phi_{0}^{\ast}\phi_{0}^{\prime})$.
For the following discussion it is more convenient to rewrite the probability current density as
\begin{align}
j_{0}(x)
&= -i\bigl[(Q_{0}\phi_{0})^{\ast}(x)\phi_{0}(x) - \phi_{0}^{\ast}(x)(Q_{0}\phi_{0})(x)\bigr], \label{eq:2.6}
\end{align}
which follows from the real-valued superpotential.
There are two physically distinct cases:
\begin{enumerate}
\item Case $j_{0}(0) = 0 = j_{0}(L)$.\\
In this case the probability current density $j_{0}$ does not flow outside the domain and the probability is locally conserved.
Hence the two ends of the domain $x = 0$ and $L$ are physically disconnected and we will refer to this case as an interval case.
\item Case $j_{0}(0) = j_{0}(L) (\neq 0)$.\\
In this case $j_{0}$ flows outside the domain but the probability is globally conserved as an entire system, which implies that the two ends of the domain are physically connected.
Hence we will refer to this case as a circle case.
Although in this case the end points $x = 0$ and $L$ are physically identified, there is no need for the superpotential $W_{0}$ to be a periodic function; when the superpotential does not have a periodicity of $L$, there just arises some kind of singularity at the junction point $x=0$, which can be characterized by the boundary conditions just as in the point interactions \cite{Cheon:2000}.
\end{enumerate}
In the following subsections we will study these two cases separately.
\subsection{Interval case: $j_{0}(0) = 0 = j_{0}(L)$} \label{subsec:interval}
We first investigate the condition $j_{0}(0) = 0 = j_{0}(L)$.
Note that the condition $j_{0}(x_{i}) = 0$ ($i = 1, 2$; $x_{1} = 0, x_{2} = L$) can be written as follows:
\begin{align}
\bigl|\phi_{0}(x_{i}) - iL_{0}(Q_{0}\phi_{0})(x_{i})\bigr|^{2}
&= \bigl|\phi_{0}(x_{i}) + iL_{0}(Q_{0}\phi_{0})(x_{i})\bigr|^{2}, \label{eq:2.1.1}
\end{align}
where $L_{0}$ is an arbitrary real constant of mass dimension $-1$, which is just introduced to adjust the mass dimension of the equation.
As we will see below $L_{0}$ is not a parameter characterizing the boundary conditions.
The above equation implies that the two complex numbers $\phi_{0}(x_{i}) - iL_{0}(Q_{0}\phi_{0})(x_{i})$ and $\phi_{0}(x_{i}) + iL_{0}(Q_{0}\phi_{0})(x_{i})$ are different from each other at most only in a phase factor.
Thus we can write
\begin{align}
\phi_{0}(x_{i}) - iL_{0}(Q_{0}\phi_{0})(x_{i})
&= {\rm e}^{i\theta_{i}}\bigl(\phi_{0}(x_{i}) + iL_{0}(Q_{0}\phi_{0})(x_{i})\bigr), \label{eq:2.1.2}
\end{align}
where $0\leq\theta_{i}<2\pi$, $i=1,2$.
When one considers a non-supersymmetric quantum mechanics, this is the end of the story by just replacing the supercharge $Q_{0}$ to the ordinary derivative $\mathrm{d}/\mathrm{d}x$, and the resulting boundary conditions are parameterized by the group $U(1)\times U(1)$, whose parameter space is a 2-torus $S^{1}\times S^{1}\simeq T^{2}$ \cite{Cheon:2000}.
However, supersymmetry severely restricts the allowed parameter space.
Using the supersymmetric relations \eqref{eq:2.5a} and \eqref{eq:2.5b} we find
\begin{subequations}
\begin{align}
\sin\left(\frac{\theta_{i}}{2}\right)\phi_{0}(x_{i})
+ L_{0}\cos\left(\frac{\theta_{i}}{2}\right)(Q_{0}\phi_{0})(x_{i})
&= 0, \label{eq:2.1.3a}\\
\sin\left(\frac{\theta_{i}}{2}\right)(Q_{0}^{\dagger}\phi_{1})(x_{i})
+ EL_{0}\cos\left(\frac{\theta_{i}}{2}\right)\phi_{1}(x_{i})
&= 0. \label{eq:2.1.3b}
\end{align}
\end{subequations}
Since the boundary conditions should not depend on the eigenvalue $E$ (otherwise the superposition of the quantum states becomes meaningless),
the parameters $\theta_{i}$ ($i=1,2$) must be $0$ or $\pi$.
Thus in $\mathscr{N}=2$ SUSY QM on an interval the boundary conditions compatible with the supersymmetry are characterized by the discrete group $\mathbb{Z}_{2}\times \mathbb{Z}_{2}\subset U(1)\times U(1)$, which just consists of four 0-dimensional points $\{\mathrm{e}^{i0}, \mathrm{e}^{i\pi}\}\times\{\mathrm{e}^{i0}, \mathrm{e}^{i\pi}\} = \{1, -1\}\times \{1, -1\}$.
This result is consistent with the previous analyses of SUSY QM
with point singularities \cite{point_singularity1, point_singularity2}.
Now it is clear that the allowed boundary conditions can be categorized into the following $2\times2 = 4$ types:
\begin{subequations}
\begin{alignat}{2}
(\theta_{1}, \theta_{2}) &= (0, 0) & : &\quad
\begin{cases}
(Q_{0}\phi_{0})(0) = 0 = (Q_{0}\phi_{0})(L), \\
\phi_{1}(0) = 0 = \phi_{1}(L);
\end{cases} \\
(\theta_{1}, \theta_{2}) &= (\pi, \pi)~& :& \quad
\begin{cases}
\phi_{0}(0) = 0 = \phi_{0}(L), \\
(Q_{0}^{\dagger}\phi_{1})(0) = 0 = (Q_{0}^{\dagger}\phi_{1})(L);
\end{cases} \\
(\theta_{1}, \theta_{2}) &= (0, \pi) & : &\quad
\begin{cases}
(Q_{0}\phi_{0})(0) = 0 = \phi_{0}(L), \\
\phi_{1}(0) = 0 = (Q_{0}^{\dagger}\phi_{1})(L);
\end{cases} \\
(\theta_{1}, \theta_{2}) &= (\pi, 0) & : &\quad
\begin{cases}
\phi_{0}(0) = 0 = (Q_{0}\phi_{0})(L), \\
(Q_{0}^{\dagger}\phi_{1})(0) = 0 = \phi_{1}(L).
\end{cases}
\end{alignat}
\end{subequations}
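As a concrete illustration of the type $(0,0)$ boundary conditions, take the trivial superpotential $W_{0}=0$, for which $H_{0}=H_{1}=-\mathrm{d}^{2}/\mathrm{d}x^{2}$ and the conditions reduce to Neumann conditions for $\phi_{0}$ and Dirichlet conditions for $\phi_{1}$. The following numerical sketch (not from the paper; the grid and $L$ are illustrative choices) confirms that the two spectra coincide except for the Neumann zero mode, as supersymmetry dictates:

```python
import numpy as np

# W_0 = 0: H_0 = H_1 = -d^2/dx^2 on (0, L); type (0,0) boundary conditions
# reduce to Neumann for phi_0 and Dirichlet for phi_1.
L, n = np.pi, 1500
h = L / n

# Dirichlet discretization on the interior points x_1, ..., x_{n-1}
H_dir = (np.diag(2.0 * np.ones(n - 1)) + np.diag(-np.ones(n - 2), 1)
         + np.diag(-np.ones(n - 2), -1)) / h**2

# Neumann discretization on x_0, ..., x_n (symmetric closure at the ends)
H_neu = (np.diag(2.0 * np.ones(n + 1)) + np.diag(-np.ones(n), 1)
         + np.diag(-np.ones(n), -1)) / h**2
H_neu[0, 0] = H_neu[-1, -1] = 1.0 / h**2

E_dir = np.sort(np.linalg.eigvalsh(H_dir))[:4]
E_neu = np.sort(np.linalg.eigvalsh(H_neu))[:5]
print(E_neu[0])     # the Neumann zero mode, ~ 0
print(E_dir[:3])    # ~ [1, 4, 9] for L = pi, degenerate with E_neu[1:4]
```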
\subsection{Circle case: $j_{0}(0) = j_{0}(L) (\neq 0)$} \label{subsec:circle}
Next we investigate the condition $j_{0}(0) = j_{0}(L) (\neq 0)$.
This condition can be written into the following form
\begin{align}
\left|
\Phi_{\phi_{0}}
-iL_{0}\sigma_{3}
\Phi_{Q_{0}\phi_{0}}
\right|^{2}
&=
\left|
\Phi_{\phi_{0}}
+iL_{0}\sigma_{3}
\Phi_{Q_{0}\phi_{0}}
\right|^{2}, \label{eq:2.2.1}
\end{align}
where for any function $f(x)$ the two-component boundary value vector $\Phi_{f}$ is defined as
\begin{align}
\Phi_{f}
:= \begin{bmatrix}
f(0) \\
f(L)
\end{bmatrix}. \label{eq:2.2.2}
\end{align}
$\sigma_{3}$ is the third Pauli matrix: $\sigma_{3} = \mathrm{diag}(1, -1)$.
This equation shows that the squared length of the two-dimensional complex column vector
$\Phi_{\phi_{0}} -iL_{0}\sigma_{3}\Phi_{Q_{0}\phi_{0}}$
is equal to that of
$\Phi_{\phi_{0}} +iL_{0}\sigma_{3}\Phi_{Q_{0}\phi_{0}}$,
which implies that these two vectors must be related by a two-dimensional unitary transformation.
Thus we can write
\begin{align}
\Phi_{\phi_{0}} -iL_{0}\sigma_{3}\Phi_{Q_{0}\phi_{0}}
&=
U\left(\Phi_{\phi_{0}} +iL_{0}\sigma_{3}\Phi_{Q_{0}\phi_{0}}\right), \label{eq:2.2.3}
\end{align}
where $U$ is an arbitrary $2\times2$ unitary matrix.
In one-dimensional non-supersymmetric quantum mechanics it is known that the most general boundary conditions are characterized by this $U(2)$ family \cite{Cheon:2000}.
In the following we shall determine the possible form of this unitary matrix compatible with supersymmetry and find the allowed subspace of the $U(2)$ family.
To this end we first apply the supersymmetric relations to the condition \eqref{eq:2.2.3}.
Using the supersymmetric relations \eqref{eq:2.5a} and \eqref{eq:2.5b} we find
\begin{subequations}
\begin{align}
(\mbox{1}\hspace{-0.25em}\mbox{l}-U)\Phi_{\phi_{0}} -iL_{0}(\mbox{1}\hspace{-0.25em}\mbox{l}+U)\sigma_{3}\Phi_{Q_{0}\phi_{0}}
&= \vec{0}, \label{eq:2.2.4a}\\
(\mbox{1}\hspace{-0.25em}\mbox{l}-U)\Phi_{Q_{0}^{\dagger}\phi_{1}} -iEL_{0}(\mbox{1}\hspace{-0.25em}\mbox{l}+U)\sigma_{3}\Phi_{\phi_{1}}
&= \vec{0}. \label{eq:2.2.4b}
\end{align}
\end{subequations}
Again, since the boundary conditions should not depend on the eigenvalue $E$, the eigenvalues of the matrix $U$ must be $1$ or $-1$, which is equivalent to the condition $U^{2} = \mbox{1}\hspace{-0.25em}\mbox{l}$.
Notice that any unitary matrix satisfying $U^{2} = \mbox{1}\hspace{-0.25em}\mbox{l}$ can be spectrally decomposed using the projection operators $P_{+} = \frac{1}{2}(\mbox{1}\hspace{-0.25em}\mbox{l} + U)$ and $P_{-} = \frac{1}{2}(\mbox{1}\hspace{-0.25em}\mbox{l} - U)$, which satisfy $P_{+} + P_{-} = \mbox{1}\hspace{-0.25em}\mbox{l}$, $(P_{\pm})^{2} = P_{\pm}$ and $P_{\pm}P_{\mp} = 0$.
Multiplying the above boundary conditions by these projection operators, we find that they boil down to the following four independent conditions:
\begin{subequations}
\begin{align}
(\mbox{1}\hspace{-0.25em}\mbox{l}-U)\Phi_{\phi_{0}}
&= \vec{0}, \label{eq:2.2.5a}\\
(\mbox{1}\hspace{-0.25em}\mbox{l}+U)\sigma_{3}\Phi_{Q_{0}\phi_{0}}
&= \vec{0}, \label{eq:2.2.5b}\\
(\mbox{1}\hspace{-0.25em}\mbox{l}-U)\Phi_{Q_{0}^{\dagger}\phi_{1}}
&= \vec{0}, \label{eq:2.2.5c}\\
(\mbox{1}\hspace{-0.25em}\mbox{l}+U)\sigma_{3}\Phi_{\phi_{1}}
&= \vec{0}. \label{eq:2.2.5d}
\end{align}
\end{subequations}
Note that when $U=\mbox{1}\hspace{-0.25em}\mbox{l}$ ($U=-\mbox{1}\hspace{-0.25em}\mbox{l}$) these boundary conditions reduce to type ($0,0$) (type ($\pi, \pi$)) boundary conditions in the interval case and lead to $j_{0}(0) = 0 = j_{0}(L)$.
Thus in this circle case these two ``points'' $U = \mbox{1}\hspace{-0.25em}\mbox{l}$ and $-\mbox{1}\hspace{-0.25em}\mbox{l}$ have to be removed from the parameter space, from which we conclude that the two eigenvalues of $U$ must be $1$ and $-1$.
Such a unitary matrix can be written as follows:
\begin{align}
U = \vec{e}\cdot\vec{\sigma}, \label{eq:2.2.6}
\end{align}
where ${\vec \sigma}$ are the Pauli matrices and ${\vec e}$ is a unit vector, which can be parameterized as
\begin{align}
\vec{e} = (\cos\theta\sin\phi, \sin\theta\sin\phi, \cos\phi), \quad
0\leq\theta<2\pi, \quad
0\leq\phi\leq\pi. \label{eq:2.2.7}
\end{align}
Notice that when $\phi = 0$ ($\phi = \pi$), that is, $U = \sigma_{3}$ ($U = -\sigma_{3}$), the boundary conditions become type ($0,\pi$) (type ($\pi, 0$)) boundary conditions in the interval case and again lead to $j_{0}(0) = 0 = j_{0}(L)$.
Thus in the circle case these two ``points'' $U = \sigma_{3}$ and $-\sigma_{3}$, which correspond to the north pole $\phi = 0$ and the south pole $\phi = \pi$ of $S^{2}$, respectively, must be removed from the parameter space $S^{2}$.
The resulting parameter space is thus isomorphic to a non-compact two-dimensional cylinder.
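The stated properties of $U = \vec{e}\cdot\vec{\sigma}$ are easy to cross-check numerically. The sketch below (illustrative, not part of the paper; the angles $\theta$ and $\phi$ are an arbitrary sample point away from the poles) verifies that $U$ is unitary and Hermitian, squares to the identity, has eigenvalues $\{1,-1\}$, and yields idempotent, mutually orthogonal projectors $P_{\pm}$:

```python
import numpy as np

# sample point on the parameter space (theta, phi arbitrary, away from the poles)
theta, phi = 1.1, 0.7
e = np.array([np.cos(theta) * np.sin(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(phi)])
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
U = sum(ei * si for ei, si in zip(e, sigma))            # U = e . sigma
I2 = np.eye(2)

unitary = np.allclose(U.conj().T @ U, I2)               # U is in U(2)
involutive = np.allclose(U @ U, I2)                     # U^2 = identity
eigvals = np.linalg.eigvalsh(U)                         # U is also Hermitian here
Pp, Pm = (I2 + U) / 2, (I2 - U) / 2
idempotent = (np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)
              and np.allclose(Pp @ Pm, np.zeros((2, 2))))
print(unitary, involutive, eigvals, idempotent)
```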
In summary the boundary conditions compatible with $\mathscr{N}=2$ supersymmetry have a two-parameter family, which can be written as
\begin{subequations}
\begin{align}
\begin{bmatrix}
\phi_{0}(L) \\
(Q_{0}\phi_{0})(L)
\end{bmatrix}
&= {\rm e}^{i\theta}
\begin{bmatrix}
\tan(\phi/2) & 0 \\
0 & \cot(\phi/2)
\end{bmatrix}
\begin{bmatrix}
\phi_{0}(0) \\
(Q_{0}\phi_{0})(0)
\end{bmatrix}, \label{eq:2.2.8a}\\
\begin{bmatrix}
\phi_{1}(L) \\
(Q_{0}^{\dagger}\phi_{1})(L)
\end{bmatrix}
&= {\rm e}^{i\theta}
\begin{bmatrix}
\cot(\phi/2) & 0 \\
0 & \tan(\phi/2)
\end{bmatrix}
\begin{bmatrix}
\phi_{1}(0) \\
(Q_{0}^{\dagger}\phi_{1})(0)
\end{bmatrix}, \label{eq:2.2.8b}
\end{align}
\end{subequations}
where $0\leq\theta<2\pi$ and $0<\phi<\pi$.
In practical calculations it is convenient to introduce a real parameter $\eta$ defined as
\begin{align}
\mathrm{e}^{\eta}
:= \tan\left(\frac{\phi}{2}\right), \quad
-\infty<\eta<\infty.
\end{align}
Before closing this section, we should comment on the physical meanings of the two parameters $\theta$ and $\eta$.
As is well known, $\theta$ corresponds to the magnetic flux penetrating through the circle.
On the other hand, as shown in \cite{Griffiths:1993}, boundary conditions with nonzero $\eta$ correspond to the presence of a $\delta^{\prime}$-singularity at the junction point $x=0$.
\section{Refactorization of Hamiltonians} \label{sec:refactorization}
As already mentioned in Section \ref{sec:intro}, quantum mechanical supersymmetry plays an essential role to generate massive Kaluza-Klein particles in higher-dimensional field theory.
It has been shown that in 5d gravity \textit{two} $\mathscr{N}=2$ quantum mechanical SUSYs are needed in order
for the ``Higgs'' mechanism to generate massive spin-2 particles \cite{Lim:2007}.
A crucial ingredient of this coexistence of two quantum mechanical SUSYs is the refactorization of Hamiltonians.
Thus it would be natural to guess that in a higher-dimensional spin-$N$ field theory there would exist a hierarchy of $N$ SUSYs in the 4d mass spectrum, whose typical structure must be as follows:
\begin{center}
\begin{tabular}{cccc}
$H_{0}$ &$= Q_{0}^{\dagger}Q_{0}$ & & \\
$H_{1}$ &$= Q_{0}Q_{0}^{\dagger}$ &$= Q_{1}^{\dagger}Q_{1} + c_{1}$ & \\
$H_{2}$ & &$= Q_{1}Q_{1}^{\dagger} + c_{1}$ &$= Q_{2}^{\dagger}Q_{2} + c_{1} + c_{2}$ \\
$H_{3}$ & & &$= Q_{2}Q_{2}^{\dagger} + c_{1} + c_{2}$ \\
$\vdots$ & & &$\vdots$
\end{tabular}
\end{center}
where the $n$-th supercharge and its adjoint are assumed to be of the form
\begin{subequations}
\begin{align}
Q_{n}
&= {\rm e}^{-W_{n}(x)}\frac{{\rm d}}{{\rm d}x}{\rm e}^{+W_{n}(x)}
= \frac{\rm d}{{\rm d}x} + W_{n}^{'}(x), \label{eq:3.1a}\\
Q_{n}^{\dagger}
&= -{\rm e}^{+W_{n}(x)}\frac{{\rm d}}{{\rm d}x}{\rm e}^{-W_{n}(x)}
= - \frac{\rm d}{{\rm d}x} + W_{n}^{'}(x), \label{eq:3.1b}
\end{align}
\end{subequations}
and $c_{n}$ is a real constant.
In the context of higher-dimensional field theory, $W_{n}$ and $c_{n}$ would correspond to the warp factor and the cosmological constant on 3-branes, respectively.
In this section we solve the refactorization condition of Hamiltonians
in the case of $c_{n}=0$ and construct a hierarchy of supersymmetry.
\subsection{Refactorization of Hamiltonians}
Although in this paper we will focus on the case that all the constant shifts $c_{n}$ are zero, it may be instructive to keep $c_{n}$ nonzero in order to distinguish our refactorization method from the conventional one, which is used to solve the Schr\"odinger equation by the method of shape invariance.
The refactorization condition for the $n$-th Hamiltonian $Q_{n-1}Q_{n-1}^{\dagger} = Q_{n}^{\dagger}Q_{n} + c_{n}$ can be written into the following form
\begin{align}
(W_{n-1}^{\prime})^{2} + W_{n-1}^{\prime\prime}
= (W_{n}^{\prime})^{2} - W_{n}^{\prime\prime} + c_{n}. \label{eq:3.2}
\end{align}
This is a recursion relation known as the ladder equation in the context of parasupersymmetric or higher-derivative supersymmetric quantum mechanics \cite{Rubakov:1988,Andrianov1:1991,Andrianov2:1991,Andrianov:1993,Andrianov:1994,Andrianov:1995,FernandezC:1996}.
Our task is to solve the equation \eqref{eq:3.2} with respect to $W_{n}$ and to recursively define the $n$-th superpotential.
The nonlinear differential equation \eqref{eq:3.2} is the Riccati equation in terms of $W_{n}$ so that it can be linearized as follows:
\begin{align}
Q_{n-1}Q_{n-1}^{\dagger}\mathrm{e}^{-W_{n}} = c_{n}\mathrm{e}^{-W_{n}}, \label{eq:3.3}
\end{align}
or equivalently
\begin{align}
H_{n}\mathrm{e}^{-W_{n}} = \left(\sum_{i=1}^{n}c_{i}\right)\mathrm{e}^{-W_{n}}. \label{eq:3.4}
\end{align}
This is nothing but the Schr\"odinger equation for the $n$-th Hamiltonian.
Noting that the spectrum of the $n$-th Hamiltonian is bounded from below by the constant $\sum_{i=1}^{n}c_{i}$, we see that Eq.\eqref{eq:3.4} is the Schr\"odinger equation for the ground state.
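The equivalence between the Riccati form \eqref{eq:3.2} and the linearized form \eqref{eq:3.3} can be checked symbolically. The sympy sketch below (an illustrative cross-check with generic superpotentials, not part of the paper) applies $Q_{n-1}Q_{n-1}^{\dagger}$ to $\mathrm{e}^{-W_{n}}$ and recovers exactly the refactorization condition:

```python
import sympy as sp

x, c = sp.symbols('x c_n', real=True)
Wm = sp.Function('W_{n-1}')(x)   # generic W_{n-1}
Wn = sp.Function('W_n')(x)       # generic W_n

f = sp.exp(-Wn)
Qdag_f = -sp.diff(f, x) + sp.diff(Wm, x) * f            # Q_{n-1}^dagger e^{-W_n}
QQdag_f = sp.diff(Qdag_f, x) + sp.diff(Wm, x) * Qdag_f  # Q_{n-1} Q_{n-1}^dagger e^{-W_n}

lhs = sp.simplify(QQdag_f / f)   # eq. (3.3) demands this to equal c_n
# Riccati condition (3.2), brought to the form (...) = 0
riccati = (sp.diff(Wm, x)**2 + sp.diff(Wm, x, 2)
           - sp.diff(Wn, x)**2 + sp.diff(Wn, x, 2) - c)
check = sp.simplify((lhs - c) - riccati)
print(check)   # -> 0
```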
When $c_{n}=0$ it is easy to solve the equation \eqref{eq:3.3} with the result
\begin{align}
W_{n}
= - W_{n-1}
- \ln\left\{\alpha_{n-1} + \beta_{n-1}\int_{x_{0}}^{x}\!\!\!
{\rm d}y~{\rm e}^{-2W_{n-1}(y)}\right\}, \label{eq:3.5}
\end{align}
where $\alpha_{n}$ and $\beta_{n}$ are integration constants.
$x_{0}$ is an arbitrary point placed on the interval $(0,L)$.
Since in this paper we concentrate on finite superpotentials even at the boundaries, it is convenient to choose $x_{0}$ as $x_{0} = 0$ and $\beta_{n-1}$ as $\beta_{n-1} = \left[\int_{0}^{L}\!\!\mathrm{d}y\exp(-2W_{n-1})\right]^{-1}$.
We note that a constant shift of the superpotentials has no effect on the Hamiltonians.
With these choices, the parameter $\alpha_{n-1}$ is limited to the ranges $\alpha_{n-1}<-1$ and $0<\alpha_{n-1}$ for the well-definedness of $W_{n}^{\prime}$.
Thus, once given a quantum mechanical system, we can always
construct an infinite hierarchy of Hamiltonians.
Notice that the result \eqref{eq:3.5} coincides with the so-called isospectral deformations of the Hamiltonian \cite{Abraham:1980,Baye:1987,Amado:1988}.
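One can also verify directly that the solution \eqref{eq:3.5} satisfies the refactorization condition \eqref{eq:3.2} with $c_{n}=0$. The sympy sketch below does this for the concrete sample choice $W_{n-1}=\lambda x$ (an illustrative superpotential, not from the paper), with $\alpha$ and $\beta$ kept symbolic:

```python
import sympy as sp

x, y, alpha, beta, lam = sp.symbols('x y alpha beta lambda',
                                    real=True, positive=True)

Wm = lam * x                                            # sample W_{n-1}
intW = sp.integrate(sp.exp(-2 * lam * y), (y, 0, x))    # int_0^x e^{-2 W_{n-1}}
Wn = -Wm - sp.log(alpha + beta * intW)                  # eq. (3.5) with x_0 = 0

# refactorization condition (3.2) with c_n = 0
lhsR = sp.diff(Wm, x)**2 + sp.diff(Wm, x, 2)
rhsR = sp.diff(Wn, x)**2 - sp.diff(Wn, x, 2)
check = sp.simplify(lhsR - rhsR)
print(check)   # -> 0
```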
\subsection{Three-term recurrence relation for nonzero-modes}
Let $\phi_{n}^{(l)}$ be the energy eigenfunction of the $l$-th excited state for the $n$-th Hamiltonian.
Then, we have the three-term recurrence relation for quantum mechanical systems with $N$ SUSYs:
\begin{align}
\phi_{n+2}^{(l)}
= -\phi_{n}^{(l)}
+ \frac{1}{\sqrt{E_{l}}}(W_{n}^{'} + W_{n+1}^{'})\phi_{n+1}^{(l)},
\label{eq:3.8}
\end{align}
which follows from the SUSY relations $\sqrt{E_{l}}\phi_{n+2}^{(l)}
=Q_{n+1}\phi_{n+1}^{(l)},\ \sqrt{E_{l}}\phi_{n}^{(l)}
=Q_{n}^{\dagger}\phi_{n+1}^{(l)}$ and the identity $Q_{n+1}
= -Q_{n}^{\dagger} + W_{n}^{'} + W_{n+1}^{'}$.
Notice that when $\beta_{n} = 0$, $\phi_{n+2}^{(l)}$ just reduces to the energy eigenfunction $\phi_{n}^{(l)}$ up to an overall sign.
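The recurrence \eqref{eq:3.8} can be verified symbolically. The sketch below (an illustrative cross-check, not part of the paper, with $\phi_{n+1}$ and the superpotentials left as generic functions) builds $\phi_{n}^{(l)}$ and $\phi_{n+2}^{(l)}$ from the SUSY relations and confirms that the three-term relation holds identically:

```python
import sympy as sp

x, E = sp.symbols('x E', positive=True)
Wn = sp.Function('W_n')(x)
Wn1 = sp.Function('W_{n+1}')(x)
phi = sp.Function('phi_{n+1}')(x)        # generic eigenfunction of H_{n+1}

# supercharges of the form (3.1): Q = d/dx + W', Q^dagger = -d/dx + W'
phi_n = (-sp.diff(phi, x) + sp.diff(Wn, x) * phi) / sp.sqrt(E)    # Q_n^dag phi / sqrt(E)
phi_n2 = (sp.diff(phi, x) + sp.diff(Wn1, x) * phi) / sp.sqrt(E)   # Q_{n+1} phi / sqrt(E)

# three-term recurrence (3.8)
residual = sp.simplify(
    phi_n2 - (-phi_n + (sp.diff(Wn, x) + sp.diff(Wn1, x)) * phi / sp.sqrt(E)))
print(residual)   # -> 0
```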
\subsection{Zero-mode}
Next we will show that the zero-mode functions $\phi_{n}^{(0)}$ for $0<n<N$ cannot exist in general in a quantum mechanical system with $N$ SUSYs.
To this end, suppose that we have constructed a set of $N+1$ isospectral Hamiltonians using the refactorization method.
Since the $n$-th Hamiltonian $H_{n}$ can be written in two ways as $H_{n} = Q_{n-1}Q_{n-1}^{\dagger} = Q_{n}^{\dagger}Q_{n}$, $\phi_{n}^{(0)}(x)$ with $n=1,\cdots,N-1$ has to satisfy the equations
\begin{align}
Q_{n-1}^{\dagger}\phi_{n}^{(0)} = 0 = Q_{n}\phi_{n}^{(0)}, \label{eq:3.9}
\end{align}
or equivalently
\begin{align}
\left(
\frac{\mathrm{d}}{\mathrm{d}x} - W_{n-1}^{\prime}
\right)\phi_{n}^{(0)}
= 0
= \left(
\frac{\mathrm{d}}{\mathrm{d}x} - W_{n-1}^{\prime}
- \frac{\beta_{n-1} {\rm e}^{-2W_{n-1}}}
{\alpha_{n-1} + \beta_{n-1}
\int_{x_{0}}^{x}\!\!\mathrm{d}y~\mathrm{e}^{-2W_{n-1}}}
\right)
\phi_{n}^{(0)}. \label{eq:3.10}
\end{align}
Obviously, there is no nontrivial solution to these two different equations except for the case $\beta_{n-1}=0$. When $\beta_{n-1} = 0$, however, the $(n+1)$-th Hamiltonian $H_{n+1} = Q_{n+1}^{\dagger}Q_{n+1}$ becomes identical to the $(n-1)$-th Hamiltonian, which is of no interest to us.
Therefore there is no nontrivial solution to \eqref{eq:3.9}.
We thus conclude that the zero-mode solutions consistent with $N$ SUSYs can exist {\em at most} only for the case $n=0$ and $N$.
The ground-state energy eigenfunction for $H_{N}$ is obtained by solving the equation $Q_{N-1}^{\dagger}\phi_{N}^{(0)} = 0$, which can be easily integrated with the result
\begin{align}
\phi_{N}^{(0)}(x)
= C{\rm e}^{+W_{N-1}(x)}, \label{eq:3.11}
\end{align}
where $C$ is the normalization constant.
If $\phi_{N}^{(0)}$ turns out to be non-normalizable or not to obey the boundary conditions, only a single zero-mode $\phi_{0}^{(0)}$ exists.
Typical spectrum of a quantum mechanical system with $N$ SUSYs is shown in Figure \ref{fig:level_N}.
\begin{figure*}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=.9]{figure1a.eps}
& \includegraphics[scale=.9]{figure1b.eps}\\
(a) & (b)
\end{tabular}
\caption{(a) Typical spectrum of a quantum system constructed by the conventional refactorization method with $W_{n} = -\ln\phi_{n}^{(0)}$.
(b) Typical spectrum of a quantum system with $N$ SUSYs.}
\label{fig:level_N}
\end{center}
\end{figure*}
\section{Hierarchy of QM SUSYs} \label{sec:hierarchy}
In the previous section, we have not discussed boundary conditions
compatible with $N$ SUSYs.
In this section we will investigate whether it is possible to construct a hierarchical SUSY without conflicting with the hermiticity of each Hamiltonian.
In the subsequent subsections we will study this hierarchical SUSY on an interval and on a circle separately.
\subsection{Hierarchy on an interval} \label{subsec:hierarchy_interval}
Let us first study a hierarchical SUSY on an interval.
As a first step let us consider the boundary conditions consistent with 2 SUSYs.
Inserting the supersymmetric relations $Q_{1}\phi_{1} = \sqrt{E}\phi_{2}$ and $Q_{1}^{\dagger}\phi_{2} = \sqrt{E}\phi_{1}$ into the equation \eqref{eq:2.1.3a} we have
\begin{subequations}
\begin{alignat}{2}
\label{eq:28a}
\phi_{0}:~&
0
&=& \sin\left(\frac{\theta_{i}}{2}\right)\phi_{0}(x_{i})
+ L_{0}\cos\left(\frac{\theta_{i}}{2}\right)(Q_{0}\phi_{0})(x_{i}), \\
\label{eq:28b}
\phi_{1}:~&
0
&=& \sin\left(\frac{\theta_{i}}{2}\right)(Q_{0}^{\dagger}\phi_{1})(x_{i})
+ EL_{0}\cos\left(\frac{\theta_{i}}{2}\right)\phi_{1}(x_{i}), \\
\label{eq:28c}
\phi_{2}:~&
0
&=& \sin\left(\frac{\theta_{i}}{2}\right)(W_{0}^{\prime}+W_{1}^{\prime})(x_{i})(Q_{1}^{\dagger}\phi_{2})(x_{i}) \nonumber\\
&&& + E\left\{
-\sin\left(\frac{\theta_{i}}{2}\right)\phi_{2}(x_{i})
+ L_{0}\cos\left(\frac{\theta_{i}}{2}\right)(Q_{1}^{\dagger}\phi_{2})(x_{i})
\right\},
\end{alignat}
\end{subequations}
where the third equation follows from Eq.\eqref{eq:28b} with the identity $Q_{0}^{\dagger} = -Q_{1}+W_{0}^{\prime}+W_{1}^{\prime}$.
Now it is obvious that there are no possible boundary conditions independent of $E$ except for the choice $\theta_{i} = 0$.
Thus the boundary conditions consistent with 2 SUSYs are uniquely determined as follows:
\begin{subequations}
\begin{align}
\label{eq:29a}
(Q_{0}\phi_{0})(x_{i})
&= 0, \\
\label{eq:29b}
\phi_{1}(x_{i})
&= 0, \\
\label{eq:29c}
(Q_{1}^{\dagger}\phi_{2})(x_{i})
&= 0.
\end{align}
\end{subequations}
It is easy to show that there are no possible boundary conditions consistent with a hierarchy of $N$ SUSYs for $N\geq3$.
Thus, we conclude that, at most, three successive quantum mechanical systems on an interval can be supersymmetric in a hierarchy of QM SUSYs.
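As a consistency check of the boundary conditions \eqref{eq:29a}--\eqref{eq:29c}, take $W_{0}=0$ on $(0,L)$, so that $H_{1}=-\mathrm{d}^{2}/\mathrm{d}x^{2}$ with Dirichlet conditions and, up to a constant, $W_{1}=-\ln(\alpha_{0}+x/L)$ from Eq.~\eqref{eq:3.5}. The numerical sketch below (illustrative choices of $\alpha_{0}$, the level $l$ and the grid; not from the paper) builds $\phi_{2}=Q_{1}\phi_{1}/\sqrt{E}$ and verifies that it is an eigenfunction of $H_{2}=-\mathrm{d}^{2}/\mathrm{d}x^{2}+W_{1}^{\prime\prime}+(W_{1}^{\prime})^{2}$ with the same eigenvalue:

```python
import numpy as np

# W_0 = 0 on (0, L): H_1 = -d^2/dx^2 with Dirichlet conditions,
# W_1 = -ln(alpha0 + x/L), phi_2 = Q_1 phi_1 / sqrt(E).
# alpha0, l and the grid size are illustrative choices.
L, alpha0, l = np.pi, 1.5, 2
E = (l * np.pi / L) ** 2
x = np.linspace(0.0, L, 4001)
h = x[1] - x[0]

W1p = -1.0 / (alpha0 * L + x)                   # W_1'(x)
phi1 = np.sin(l * np.pi * x / L)                # Dirichlet eigenfunction of H_1
dphi1 = (l * np.pi / L) * np.cos(l * np.pi * x / L)
phi2 = (dphi1 + W1p * phi1) / np.sqrt(E)        # phi_2 = (d/dx + W_1') phi_1 / sqrt(E)

# H_2 = -d^2/dx^2 + W_1'' + (W_1')^2 = -d^2/dx^2 + 2/(alpha0*L + x)^2
V2 = 2.0 / (alpha0 * L + x) ** 2
H2phi2 = -(phi2[2:] - 2.0 * phi2[1:-1] + phi2[:-2]) / h**2 + V2[1:-1] * phi2[1:-1]
err = np.max(np.abs(H2phi2 - E * phi2[1:-1]))
print(err)   # small discretization error: phi_2 is isospectral with phi_1
```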
\subsection{Hierarchy on a circle} \label{subsec:hierarchy_circle}
Let us next study a hierarchical SUSY on a circle.
As mentioned before, in this paper we focus on superpotentials that are finite on the whole domain.
When $W_{0}$ is finite, the finite $(n+1)$-th superpotential $W_{n+1}$ is recursively defined as
\begin{align}
W_{n+1}(x)
&= -W_{n}(x)
-\ln\left[\alpha_{n} + \beta_{n}\int_{0}^{x}\!\!\!\mathrm{d}y~\mathrm{e}^{-2W_{n}(y)}\right],
\quad\text{for}\quad
n=0,1,2,\cdots, \label{eq:4.2.1}
\end{align}
with
\begin{align}
\alpha_{n}<-1\ \ \textrm{or}\ \ 0<\alpha_{n}, \quad
\beta_{n}
= \left[\int_{0}^{L}\!\!\!\mathrm{d}x~\mathrm{e}^{-2W_{n}(x)}\right]^{-1}. \label{eq:4.2.2}
\end{align}
Since the hierarchy of $N$ SUSYs is just the assembly of $\mathscr{N}=2$ SUSYs, the boundary conditions in the $H_{n}$--$H_{n+1}$ sector have to be of the form
\begin{subequations}
\begin{align}
\begin{bmatrix}
\phi_{n}(L) \\
(Q_{n}\phi_{n})(L)
\end{bmatrix}
&= {\rm e}^{i\theta_{n}}
\begin{bmatrix}
\mathrm{e}^{\eta_{n}} & 0 \\
0 & \mathrm{e}^{-\eta_{n}}
\end{bmatrix}
\begin{bmatrix}
\phi_{n}(0) \\
(Q_{n}\phi_{n})(0)
\end{bmatrix}, \label{eq:4.2.3a}\\
\begin{bmatrix}
\phi_{n+1}(L) \\
(Q_{n}^{\dagger}\phi_{n+1})(L)
\end{bmatrix}
&= {\rm e}^{i\theta_{n}}
\begin{bmatrix}
\mathrm{e}^{-\eta_{n}} & 0 \\
0 & \mathrm{e}^{\eta_{n}}
\end{bmatrix}
\begin{bmatrix}
\phi_{n+1}(0) \\
(Q_{n}^{\dagger}\phi_{n+1})(0)
\end{bmatrix}, \label{eq:4.2.3b}
\end{align}
\end{subequations}
with
\begin{align}
0\leq\theta_{n}<2\pi \quad\text{and}\quad -\infty<\eta_{n}<\infty. \label{eq:4.2.4}
\end{align}
For concreteness, let us first consider 2 SUSYs in the $H_{0}$--$H_{1}$--$H_{2}$ sector.
The point is whether there exists a well-defined parameter region consistent with the two different boundary conditions imposed on the wavefunction $\phi_{1}(x)$ of the middle Hamiltonian system $H_{1}$:
\begin{subequations}
\begin{align}
\phi_{1}(L)
&= \mathrm{e}^{i\theta_{0}-\eta_{0}}\phi_{1}(0), \label{eq:4.2.5a}\\
(Q_{0}^{\dagger}\phi_{1})(L)
&= \mathrm{e}^{i\theta_{0}+\eta_{0}}(Q_{0}^{\dagger}\phi_{1})(0), \label{eq:4.2.5b}
\end{align}
\end{subequations}
which come from Eq.\eqref{eq:4.2.3b} for $n=0$, and
\begin{subequations}
\begin{align}
\phi_{1}(L)
&= \mathrm{e}^{i\theta_{1}+\eta_{1}}\phi_{1}(0), \label{eq:4.2.6a}\\
(Q_{1}\phi_{1})(L)
&= \mathrm{e}^{i\theta_{1}-\eta_{1}}(Q_{1}\phi_{1})(0), \label{eq:4.2.6b}
\end{align}
\end{subequations}
which come from Eq.\eqref{eq:4.2.3a} for $n=1$.
First, it is obvious that the parameters $\theta_{1}$ and $\eta_{1}$ have to be equal to $\theta_{0}$ and $-\eta_{0}$, respectively:
\begin{align}
\theta_{1} = \theta_{0}, \quad
\eta_{1} = -\eta_{0}. \label{eq:4.2.7}
\end{align}
Next, by adding Eqs.\eqref{eq:4.2.5b} and \eqref{eq:4.2.6b} we obtain
\begin{align}
\left(W_{0}^{\prime}(L) + W_{1}^{\prime}(L)\right)\phi_{1}(L)
&= \mathrm{e}^{i\theta_{0} + \eta_{0}}
\left(W_{0}^{\prime}(0) + W_{1}^{\prime}(0)\right)\phi_{1}(0), \label{eq:4.2.8}
\end{align}
from which we find
\begin{align}
\mathrm{e}^{2\eta_{0}}
&= \frac{W_{0}^{\prime}(L) + W_{1}^{\prime}(L)}{W_{0}^{\prime}(0) + W_{1}^{\prime}(0)} \nonumber\\
&= \frac{\alpha_{0}}{1 + \alpha_{0}}
\exp\left(-2\int_{0}^{L}\!\!\!\mathrm{d}x~W_{0}^{\prime}(x)\right), \label{eq:4.2.9}
\end{align}
where the last equality follows from Eq.\eqref{eq:4.2.1}.
Thus in order to implement the two boundary conditions the isospectral parameter $\alpha_{0}$ has to be tuned as
\begin{align}
{\alpha_{0}}^{-1}
&= \exp\left[-2\left(\eta_{0} + \int_{0}^{L}\!\!\!\mathrm{d}x~W_{0}^{\prime}(x)\right)\right] - 1. \label{eq:4.2.10}
\end{align}
Notice that once the parameters $\eta_{1}$ and $\alpha_{0}$ are tuned as in Eqs.\eqref{eq:4.2.7} and \eqref{eq:4.2.10}, the following identity holds:
\begin{align}
\eta_{1} + \int_{0}^{L}\!\!\!\mathrm{d}x~W_{1}^{\prime}(x)
&= \eta_{0} + \int_{0}^{L}\!\!\!\mathrm{d}x~W_{0}^{\prime}(x). \label{eq:4.2.11}
\end{align}
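The tuning \eqref{eq:4.2.10} and the invariant \eqref{eq:4.2.11} can be verified numerically. The sketch below (ours; the superpotential $W_{0}(x)=\sin(2\pi x/L)+0.3x/L$ and the value $\eta_{0}=-0.5$ are arbitrary illustrative choices) tunes $\alpha_{0}$ as in Eq.\eqref{eq:4.2.10}, builds $W_{1}$ from Eq.\eqref{eq:4.2.1}, and confirms Eq.\eqref{eq:4.2.11}:

```python
import numpy as np

# Numerical check (ours) of the tuning (4.2.10) and the invariant (4.2.11).
# The superpotential and eta_0 below are arbitrary illustrative choices.
L, eta0 = 1.0, -0.5
x = np.linspace(0.0, L, 40001)
W0 = np.sin(2.0 * np.pi * x / L) + 0.3 * x / L
dW0 = W0[-1] - W0[0]                    # = int_0^L dx W_0'(x)

# Eq. (4.2.10): tune alpha_0; here eta_0 + dW0 < 0, so alpha_0 > 0 (allowed)
alpha0 = 1.0 / (np.exp(-2.0 * (eta0 + dW0)) - 1.0)

# build W_1 from Eqs. (4.2.1)-(4.2.2)
w = np.exp(-2.0 * W0)
cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
beta0 = 1.0 / cum[-1]
W1 = -W0 - np.log(alpha0 + beta0 * cum)

eta1 = -eta0                            # Eq. (4.2.7)
dW1 = W1[-1] - W1[0]
# invariant (4.2.11): eta_1 + int W_1' equals eta_0 + int W_0'
print(abs((eta1 + dW1) - (eta0 + dW0)) < 1e-10)      # True
```

The identity holds to machine precision because $\beta_{0}\int_{0}^{L}\mathrm{e}^{-2W_{0}}=1$ exactly by construction, independently of the discretization error.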
The above procedure can be easily continued to arbitrary $n$.
The resulting boundary conditions are as follows:
\begin{subequations}
\begin{align}
\phi_{n}(L)
&= \mathrm{e}^{i\theta_{0} \pm \eta_{0}}\phi_{n}(0), \label{eq:4.2.12a}\\
(Q_{n}\phi_{n})(L)
&= \mathrm{e}^{i\theta_{0} \mp \eta_{0}}(Q_{n}\phi_{n})(0), \label{eq:4.2.12b}
\end{align}
\end{subequations}
where the $+$ ($-$) sign applies for $n=0,2,4,\cdots$ ($n=1,3,5,\cdots$).
The isospectral parameters are tuned as
\begin{align}
{\alpha_{n}}^{-1}
= \exp\left[-2\left(\eta_{0} + \int_{0}^{L}\!\!\!\mathrm{d}x~W_{0}^{\prime}(x)\right)\right] - 1,
\quad
n=0,1,2,\cdots, \label{eq:4.2.13}
\end{align}
where $\alpha_{n}$ always lies in the allowed region $\alpha_{n}<-1$ or $\alpha_{n}>0$ (see Fig.~\ref{fig:2}), as it should.
We thus conclude that starting from any quantum mechanical system on a circle we can systematically construct an infinite hierarchy of QM SUSYs.
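A quick numerical check (ours) that Eq.\eqref{eq:4.2.13} always lands in the allowed region of Fig.~\ref{fig:2}: writing $z=\exp[-2(\eta_{0}+\int_{0}^{L}\mathrm{d}x\,W_{0}^{\prime})]$, the map $\alpha^{-1}=z-1$ sends $0<z<1$ into $\alpha<-1$ and $z>1$ into $\alpha>0$:

```python
# Quick check (ours) of the allowed region in Fig. 2: with
# z = exp[-2(eta_0 + int_0^L dx W_0')], Eq. (4.2.13) gives alpha^{-1} = z - 1,
# which maps 0 < z < 1 to alpha < -1 and z > 1 to alpha > 0.
zs = [0.1, 0.5, 0.9, 1.1, 2.0, 10.0]
alphas = [1.0 / (z - 1.0) for z in zs]
print(all(a < -1.0 for z, a in zip(zs, alphas) if z < 1.0))   # True
print(all(a > 0.0 for z, a in zip(zs, alphas) if z > 1.0))    # True
```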
We should emphasize the difference between the hierarchy on an interval and on a circle.
In the hierarchy on an interval, at most, three successive quantum mechanical systems can be supersymmetric with the
unique boundary conditions \eqref{eq:29a}$-$\eqref{eq:29c}.
On the other hand, in the hierarchy on a circle, we can obtain an infinite tower of quantum mechanical systems whose successive two systems form an $\mathscr{N} =2$ SUSY with the boundary conditions \eqref{eq:4.2.12a} and \eqref{eq:4.2.12b}, which are specified by two parameters $\theta_{0}, \eta_{0}$.
\begin{figure}[t]
\begin{center}
\includegraphics{figure2.eps}
\caption{Allowed region of the isospectral parameter $\alpha_{0}$ as a function of $z = \exp\left[-2\left(\eta_{0} + \int_{0}^{L}\!\mathrm{d}x~W_{0}^{\prime}(x)\right)\right]$, whose range is $0<z<\infty$.}
\label{fig:2}
\end{center}
\end{figure}
\section{Conclusions and discussions} \label{sec:conclusion}
In this paper we have clarified the possible boundary conditions in $\mathscr{N}=2$ supersymmetric quantum mechanics on a finite domain $(0,L)$ without conflicting with the conservation of probability current.
Allowed boundary conditions in $\mathscr{N}=2$ supersymmetric quantum mechanics are limited to the so-called scale-independent subfamily of the $U(2)$ family of boundary conditions.
We also studied the hierarchy of $N$ SUSYs and showed that on an interval it is not possible to go beyond 2 SUSYs.
On the other hand, on a circle it is possible to construct an infinite hierarchy of supersymmetries by tuning the isospectral parameters $\alpha_{n}$.
Let us close with some remarks.
\begin{enumerate}
\item \textbf{\mathversion{bold}Loop effects of $\eta$}.
We showed that in $\mathscr{N}=2$ supersymmetric quantum mechanics on a circle it is possible to introduce two parameters $\theta$ and $\eta$ into the boundary conditions.
As mentioned in Section \ref{sec:BC}, $\theta$ corresponds to the magnetic flux penetrating the circle and nonzero $\eta$ corresponds to the presence of a $\delta^{\prime}$-singularity at the junction point $x=0$.
In higher-dimensional gauge theory compactified on a circle it is widely known that twisted boundary conditions give rise to gauge symmetry/supersymmetry breaking, known as the Hosotani/Scherk--Schwarz mechanism.
However, the effect of the presence of $\eta$ is not yet fully understood.
It would be interesting to investigate the loop effects of the parameter $\eta$ in five-dimensional gauge theory with a single extra dimension compactified on a circle.
We will address this issue elsewhere.
\item \textbf{Integrable models}.
In contrast to the shape-invariance method, the techniques developed in this paper cannot be used to solve the Schr\"odinger equation.
However, once a solvable model is given, it is possible to generate an infinite tower of isospectral solvable models with nontrivial potential energy terms.
\item \textbf{\mathversion{bold}Spin-$N$ field theory}.
In this paper we formulated a systematic procedure for constructing the hierarchy of $N$ SUSYs and showed that on an interval it is not possible to go beyond 2 SUSYs.
Since such a hierarchy seems to be a necessary condition for generating massive Kaluza--Klein particles, one might expect that some kind of no-go theorem for the ``Higgs'' mechanism for spin-$N$ ($N\geq3$) particles can be proved in the context of five-dimensional field theory with a single extra dimension compactified on an interval.
However, this remains an open question.
\item \textbf{\mathversion{bold}Relax to $\mathcal{PT}$-symmetry}.
Recently, a considerable number of studies have been made on non-hermitian $\mathcal{PT}$-symmetric quantum mechanics (for a recent review, see \cite{Bender:2007}).
It is known that the conventional hermiticity condition on the Hamiltonian is a sufficient condition for a real and bounded-below spectrum and can be replaced by the weaker condition of $\mathcal{PT}$-symmetry of the Hamiltonian.
In this paper we imposed hermiticity of the Hamiltonian; however, it would be interesting to relax the hermiticity condition to a $\mathcal{PT}$-symmetric one.
It is not yet clear to the authors, though, how to incorporate $\mathcal{PT}$-symmetry into the boundary conditions.
\end{enumerate}
\section*{Acknowledgment}
One of the authors (SO) would like to thank S. Odake for very useful discussions and his hospitality.
This work is supported in part by a Grant-in-Aid for Scientific
Research (No.~18540275)
from the Japanese Ministry of Education, Science, Sports and Culture.
\addcontentsline{toc}{section}{\textsf{References}}
\section{Introduction}
The discovery of the Higgs boson in 2012 at the LHC has attested to the success
of the standard model (SM) in describing the observed fermions and their
interactions. However, there exist many theoretical issues and open questions
that have no satisfactory answer. In particular, the observed flavour
pattern lacks a definitive explanation: the quark Yukawa coupling
matrices $Y_u$ and $Y_d$, which in the SM reproduce the six quark masses,
three mixing angles and a complex phase accounting for CP violation
phenomena, are general complex matrices, not constrained by any gauge
symmetry.
Experimentally the flavour puzzle is very intricate. First, there is the
quark mass hierarchy in both sectors. Secondly, the mixing in the SM,
encoded in the Cabibbo-Kobayashi-Maskawa (CKM) unitary matrix, turns out to
be close to the identity matrix. If one also takes the lepton sector into
account, the hierarchy there is even more puzzling~\cite%
{Emmanuel-Costa:2015tca}. On the other hand, in the SM there is in general
no connection between the quark mass hierarchy and the CKM mixing pattern.
In fact, if one considers the Extreme Chiral Limit, where the quark masses
of the first two generations are set to zero, the mixing does not
necessarily vanish~\cite{Botella:2016krk}, and one concludes that the CKM
matrix~$V$ being close to the identity matrix has nothing to do with the
fact that the quark masses are hierarchical. Indeed, in order to have $%
V\approx \mathbf{1}$, one must have a definite alignment of the quark mass
matrices in the flavour space, and to explain this alignment, a flavour
symmetry or some other mechanism is required~\cite{Botella:2016krk}.
Among many attempts made in the literature to address the flavour puzzle,
extensions of the SM with new Higgs doublets are particularly motivating.
This is due to the fact that the number of Higgs doublets is not constrained by
the SM symmetry. Moreover, the addition of scalar doublets gives rise to new
Yukawa interactions and as a result provides a richer framework for
approaching the theory of flavour. On the other hand, any new extension of
the Higgs sector must be very much constrained, since it naturally leads to
flavour changing neutral currents (FCNC). At tree level, in the SM, all the flavour
changing transitions are mediated through charged weak currents and the
flavour mixing is controlled by the CKM matrix~\cite%
{Cabibbo:1963yz,Kobayashi:1973fv}. If new Higgs doublets are added, one
expects large FCNC effects already present at tree level. Such effects have
not been experimentally observed and they constrain severely any model with
extra Higgs doublets, unless a flavour symmetry suppresses or avoids large
FCNC~\cite{Branco:2011iw}.
Minimal flavour violating models~\cite%
{Joshipura:1990pi,Antaramian:1992ya,Hall:1993ca,Mantilla2017, Buras:2001,
Dambrosio:2002} are examples of a multi-Higgs extension where FCNC are
present at tree-level but their contributions to FCNC phenomena involve only
off-diagonal elements of the CKM matrix or their products. The first
consistent models of this kind were proposed by Branco, Grimus and Lavoura
(BGL)~\cite{Branco:1996bq}, and consisted of the SM with two Higgs doublets
together with the requirement of an additional discrete symmetry. BGL models
are compatible with lower neutral Higgs masses and FCNC's occur at tree
level, with the new interactions entirely determined in terms of the CKM
matrix elements.
The goal of this paper is to generalize the previous BGL models and to
search systematically for patterns where a discrete flavour symmetry
naturally leads to the alignment in flavour space of both quark
sectors. Although the quark mass hierarchy does not arise from the symmetry,
the combined effect of both is such that the CKM matrix is close to the
identity and has the correct overall phenomenological features, determined
by the quark mass hierarchy~\cite{Branco:2011aa}. To do this we extend the
SM with two extra Higgs doublets, to a total of three Higgs doublets $\phi
_{a}$. We choose a discrete symmetry in order to avoid the Goldstone bosons
that would appear in the context of any global continuous symmetry once
spontaneous electroweak symmetry breaking occurs. For the sake of simplicity, we
restrict our search to the family group $Z_{N}$, and demand that the
resulting up-quark mass matrix $M_{u}$ is diagonal. This is to say that, due
to the expected strong up-quark mass hierarchy, we only consider those cases
where the contribution of the up-quark mass matrix to quark mixing is
negligible.
If one assumes that all Higgs doublets acquire vacuum expectation values
with the same order of magnitude, then each Higgs doublet must couple to the
fermions with different strengths. Possibly one could obtain similar results
assuming that the vacuum expectation values (VEVs) of the Higgs have a
definite hierarchy instead of the couplings, but this is not considered
here. Combining this assumption with the symmetry, we obtain the correct
ordered hierarchical pattern if the coupling with $\phi _{3}$ gives the
strength of the third generation, the coupling with $\phi _{2}$ gives the
strength of the second generation and the coupling with $\phi _{1}$ gives
the strength of the first generation. Therefore, from our point of view, the
three Higgs doublets are necessary to ensure that there exist three
different coupling strengths, one for each generation, guaranteeing
simultaneously a hierarchical mass spectrum and a CKM matrix with the
correct overall phenomenological features, e.g. $\left\vert V_{cb}\right\vert
^{2}+\left\vert V_{ub}\right\vert ^{2}=O(m_{s}/m_{b})^{2}$, denoted
here by $V\approx \mathbf{1}$.
Indeed, our approach falls within the class of BGL models, such that the FCNC
flavour structure is entirely determined by the CKM matrix. Through the symmetry, the
suppression of the most dangerous FCNCs, by combinations of the CKM matrix
elements and light quark masses, is entirely natural.
The paper is organised as follows. In the next section, we present our model
and classify the patterns allowed by the discrete symmetry in combination with our assumptions.
In Sec. \ref{sec:num}, we give a brief numerical analysis of
the phenomenological output of our solutions. In Sec. \ref{sec:fcnc}, we
examine the suppression of scalar mediated FCNC in our framework for each
pattern. Finally, in Sec. \ref{sec:conc}, we present our conclusions.
\section{The Model}
\label{sec:model}
We extend the Higgs sector of the SM with two extra scalar doublets,
yielding a total of three scalar doublets, $\phi _{1}$, $\phi _{2}$, $%
\phi _{3}$. As mentioned in the introduction, the main idea behind
having three Higgs doublets is to implement a discrete flavour symmetry
that leads to the alignment in flavour space of the quark sectors. The
quark mass hierarchy does not arise from the symmetry, but the combined
effect of both is such that the CKM matrix is close to the identity and
has the correct overall phenomenological features, determined
by the quark mass hierarchy.
Let us start by considering the most general quark Yukawa coupling
Lagrangian invariant under the symmetries of our setup,
\begin{equation}
-\mathcal{L}_{\text{Y}}=(\Omega _{a})_{ij}\,\overline{Q}_{Li}\ \widetilde{%
\phi }_{a}\ u_{R_{j}}+(\Gamma _{a})_{ij}\,\overline{Q}_{Li}\ \phi _{a}\
d_{R_{j}}+h.c., \label{eq:lag}
\end{equation}%
with the Higgs labeling $a=1,2,3$, while $i,j$ are the usual flavour
indices identifying the generations of fermions. In the above Lagrangian,
one has three Yukawa coupling matrices $\Omega _{1}$, $\Omega _{2}$, $\Omega
_{3}$ for the up-quark sector and three Yukawa coupling matrices $\Gamma
_{1}$, $\Gamma _{2}$, $\Gamma _{3}$ for the down sector, corresponding to
each of the Higgs doublets $\phi _{1}$, $\phi _{2}$, $\phi _{3}$. Assuming
that only the neutral components of the three Higgs doublets acquire vacuum
expectation values (VEVs), the quark mass matrices $M_{u}$ and $M_{d}$ are
then generated as
\begin{subequations}
\label{eq:mass}
\begin{align}
M_{u}& =\Omega _{1}\left\langle \phi _{1}\right\rangle \,^{\ast }+\,\Omega
_{2}\left\langle \phi _{2}\right\rangle \,^{\ast }+\,\Omega
_{3}\,\left\langle \phi _{3}\right\rangle ^{\ast }, \label{eq:massup} \\
M_{d}& =\Gamma _{1}\left\langle \phi _{1}\right\rangle \,+\,\Gamma
_{2}\left\langle \phi _{2}\right\rangle \,+\,\Gamma _{3}\left\langle \phi
_{3}\right\rangle ,
\end{align}%
where VEVs $\langle \phi _{i}\rangle $ are parametrised as
\end{subequations}
\begin{equation}
\left\langle \phi _{1}\right\rangle =\frac{v_{1}}{\sqrt{2}},\quad
\left\langle \phi _{2}\right\rangle =\frac{v_{2}e^{i\alpha _{2}}}{\sqrt{2}}%
,\quad \left\langle \phi _{3}\right\rangle =\frac{v_{3}e^{i\alpha _{3}}}{%
\sqrt{2}},
\end{equation}%
with $v_{1}$, $v_{2}$ and $v_{3}$ being the VEV moduli and $\alpha _{2}$, $%
\alpha _{3}$ their complex phases. We have chosen the VEV of $\phi _{1}$ to
be real and positive, since this is always possible through a proper gauge
transformation. As stated, we assume that the VEV moduli $v_{i}$ are of
the same order of magnitude, i.e.,
\begin{equation}
v_{1}\sim v_{2}\sim v_{3}. \label{vs}
\end{equation}
Each of the $\phi _{a}$ couples to the quarks with couplings $(\Omega
_{a})_{ij}$, $(\Gamma _{a})_{ij}$, which we take to be of the same order of
magnitude, unless some element vanishes by imposition of the flavour
symmetry. In this sense, each $\phi _{a}$ and $(\Omega _{a},\Gamma _{a})$
generates its own respective generation: i.e., our model is such that,
by imposition of the flavour symmetry, $\phi _{3}$, $\Omega _{3}$, $\Gamma
_{3}$ generate $m_{t}$ and $m_{b}$, respectively; $\phi _{2}$, $\Omega
_{2}$, $\Gamma _{2}$ generate $m_{c}$ and $m_{s}$; and $\phi _{1}$,
$\Omega _{1}$, $\Gamma _{1}$ generate $m_{u}$ and $m_{d}$.
Generically, we have
\begin{subequations}
\label{eq:hierarchy}
\begin{align}
v_{1}\left\vert (\Omega _{1})_{ij}\right\vert & \sim m_{u},\;v_{2}\left\vert
(\Omega _{2})_{ij}\right\vert \sim m_{c},\;v_{3}\left\vert (\Omega
_{3})_{ij}\right\vert \sim m_{t}, \\
v_{1}\left\vert (\Gamma _{1})_{ij}\right\vert & \sim m_{d},\;v_{2}\left\vert
(\Gamma _{2})_{ij}\right\vert \sim m_{s},\;v_{3}\left\vert (\Gamma
_{3})_{ij}\right\vert \sim m_{b},
\end{align}%
which together with Eq.~\eqref{vs} implies a definite hierarchy among the
non-vanishing Yukawa coupling matrix elements:
\end{subequations}
\begin{subequations}
\label{eq:hier}
\begin{align}
\left\vert (\Omega _{1})_{ij}\right\vert & \ll \left\vert (\Omega
_{2})_{ij}\right\vert \ll \left\vert (\Omega _{3})_{ij}\right\vert ,
\label{eq:hierup} \\[2mm]
\left\vert (\Gamma _{1})_{ij}\right\vert & <\left\vert (\Gamma
_{2})_{ij}\right\vert \ll \left\vert (\Gamma _{3})_{ij}\right\vert .
\end{align}
Next, we focus on the required textures for the Yukawa coupling matrices $%
\Omega _{a}$ and $\Gamma _{a}$ that naturally lead to a hierarchical quark
mass spectrum and at the same time to a realistic CKM mixing matrix. These
textures must be reproduced by our choice of the flavour symmetry. As
mentioned in the introduction, we search for quark mass patterns where the
mass matrix $M_{u}$ is diagonal. Therefore, one derives from Eqs.~%
\eqref{eq:massup}, \eqref{eq:hierup} the following textures for $\Omega _{a}$
\end{subequations}
\begin{equation}
\Omega _{1}=%
\begin{pmatrix}
\mathsf{x} & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
,\,\Omega _{2}=%
\begin{pmatrix}
0 & 0 & 0 \\
0 & \mathsf{x} & 0 \\
0 & 0 & 0%
\end{pmatrix}%
,\,\Omega _{3}=%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & \mathsf{x}%
\end{pmatrix}%
. \label{eq:textureOs}
\end{equation}%
The entry $\mathsf{x}$ denotes a non-zero element. In this case, the up-quark
masses are given by $m_{u}=v_{1}\left\vert (\Omega _{1})_{11}\right\vert $, $%
m_{c}=v_{2}\left\vert (\Omega _{2})_{22}\right\vert $ and $%
m_{t}=v_{3}\left\vert (\Omega _{3})_{33}\right\vert $.
Generically, the down-quark Yukawa coupling matrices must have the following
indicative textures:
\begin{equation}
\Gamma _{1}=%
\begin{pmatrix}
\boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}}
\\
\mathsf{x} & \mathsf{x} & \mathsf{x} \\
\mathsf{x} & \mathsf{x} & \mathsf{x}%
\end{pmatrix}%
,\,\Gamma _{2}=%
\begin{pmatrix}
0 & 0 & 0 \\
\boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}}
\\
\mathsf{x} & \mathsf{x} & \mathsf{x}%
\end{pmatrix}%
,\,\Gamma _{3}=%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
\boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}}%
\end{pmatrix}%
. \label{eq:textureGs}
\end{equation}%
We mark rows with bold $\boldsymbol{\mathsf{x}}$ in order to indicate
that at least one of the matrix elements within that row
must be nonvanishing. Rows denoted with $\mathsf{x}$ may be set to zero
without modifying the mass matrix hierarchy. These textures ensure not only
that the mass spectrum hierarchy is respected, but also the alignment in
flavour space of both quark sectors \cite{Branco:2011aa}
and a CKM matrix $V\approx \mathbf{1}$. For instance, a non-vanishing, and
not comparatively very small, $(1,3)$ entry in $\Gamma _{2}$
would not necessarily spoil the scale of $m_{s}$, but it would
dramatically change the predictions for the CKM mixing matrix.
In order to force the Yukawa coupling matrices $\Omega _{a}$ and $\Gamma
_{a} $ to have the indicative forms outlined in Eqs.~\eqref{eq:textureOs}
and~\eqref{eq:textureGs}, we introduce a global flavour symmetry. Since any
global continuous symmetry leads to the presence of massless Goldstone
bosons after the spontaneous electroweak breaking, one should instead
consider a discrete symmetry. Among many possible discrete symmetry
constructions, we restrict our searches to the case of cycle groups $Z_{N}$.
Thus, we demand that any quark or boson multiplet $\chi $ transforms
according to $Z_{N}$ as
\begin{equation}
\chi \rightarrow \chi ^{\prime }=e^{i\,\mathcal{Q}(\chi )\,\frac{2\pi }{N}%
}\chi ,
\end{equation}%
where $\mathcal{Q}(\chi )\in \{0,1,\dots ,N-1\}$ is the $Z_{N}$-charge
attributed to the multiplet $\chi $.
We have chosen the up-quark mass matrix $M_{u}$ to be diagonal. This
restricts the flavour symmetry $Z_{N}$. We have found that, in order to
ensure that all Higgs doublet charges are different, and to
have appropriate charges for fields ${Q_{L}}_{i}$ and ${u_{R}}_{i}$, we must
have $N\geq 7$. We simplify our analysis by fixing $N=7$ and choose:
\begin{subequations}
\label{eq:fix}
\begin{align}
\mathcal{Q}({Q_{L}}_{i})& =(0,1,-2), \\
\mathcal{Q}({u_{R}}_{i})& =(0,2,-4),
\end{align}%
In addition, we may also fix
\end{subequations}
\begin{equation}
\mathcal{Q}({Q_{L}}_{i})=\mathcal{Q}(\phi _{i}) \label{eq:fix1}
\end{equation}%
It turns out that these choices do not restrict the results, i.e. the
possible textures that one can have for the $\Gamma _{i}$ matrices. Other
choices would only imply that we reshuffle the charges of the multiplets.
With the purpose of enumerating the different possible textures for the $%
\Gamma _{i}$ matrices implementable in $Z_{7}$, we write down the charges of
the trilinears $\mathcal{Q}({\overline{Q}_{L}}_{i}\phi _{a}{d_{R}}_{j})$
corresponding to each $\phi _{a}$ as
\begin{subequations}
\begin{equation}
\mathcal{Q}({\overline{Q}_{L}}_{i}\phi _{1}{d_{R}}_{j})=%
\begin{pmatrix}
d_{1} & d_{2} & d_{3} \\
d_{1}-1 & d_{2}-1 & d_{3}-1 \\
d_{1}+2 & d_{2}+2 & d_{3}+2%
\end{pmatrix}%
,
\end{equation}%
\begin{equation}
\mathcal{Q}({\overline{Q}_{L}}_{i}\phi _{2}{d_{R}}_{j})=%
\begin{pmatrix}
d_{1}+1 & d_{2}+1 & d_{3}+1 \\
d_{1} & d_{2} & d_{3} \\
d_{1}+3 & d_{2}+3 & d_{3}+3%
\end{pmatrix}%
,
\end{equation}%
\begin{equation}
\mathcal{Q}({\overline{Q}_{L}}_{i}\phi _{3}{d_{R}}_{j})=%
\begin{pmatrix}
d_{1}-2 & d_{2}-2 & d_{3}-2 \\
d_{1}-3 & d_{2}-3 & d_{3}-3 \\
d_{1} & d_{2} & d_{3}%
\end{pmatrix}%
,
\end{equation}%
where $d_{i}\equiv \mathcal{Q}({d_{R}}_{i})$. One can check that, in order
to have viable solutions, one must vary the values of $d_{i}\in
\{0,1,-2,-3\} $.
We summarise in Table \ref{tab:downTextures} all the allowed textures for
the $\Gamma _{a}$ matrices and the resulting $M_{d}$ mass matrix texture,
excluding all cases which are irrelevant, e.g. matrices that have too many
texture zeros and are singular, or matrices that do not accommodate CP
violation. It must be stressed that these are the textures obtained from the
different charge configurations that one can possibly choose. However, if
one assumes a definite charge configuration, then the entire texture, i.e. $M_{d}$
and $M_{u}$, and the respective phenomenology are fixed. As stated, the list
of textures in Table~\ref{tab:downTextures} remains unchanged even if one
chooses a set other than in Eqs.~\eqref{eq:fix}, \eqref{eq:fix1}. We note
that all patterns presented here are of the Minimal Flavour
Violation (MFV) type \cite%
{Joshipura:1990pi,Antaramian:1992ya,Hall:1993ca,Mantilla2017, Buras:2001,
Dambrosio:2002}.
Pattern~I in the table was already considered in Ref.~\cite{Botella:2009pq}
in the context of $Z_{8}$. We discard Patterns~IV, VII and X because,
contrary to our starting point, at least one of the three non-zero couplings
with $\phi _{1}$ would turn out to be of the same order as the larger coupling
with $\phi _{2}$ in order to meet the phenomenological requirements of the
CKM matrix.
Notice also that the structure of the other $M_{d}$'s cannot be trivially
obtained, e.g. from Pattern I, by a transformation of the right-handed down
quark fields.
Our symmetry model may be extended to the charged leptons and neutrinos,
e.g. in the context of the type-I see-saw. Choosing for the lepton doublets ${L%
}_{i}$ the charges $\mathcal{Q}({L}_{i})=(0,-1,2)$, opposite to those of the Higgs
doublets in Eq.~\eqref{eq:fix1}, and e.g. the charges $\mathcal{Q}({e_{R}%
}_{i})=(0,-2,4)$ for the right-handed fields ${e_{R}}_{i}$, we force the
charged lepton mass matrix to be diagonal. Then, choosing for the right-handed
neutrinos ${\nu _{R}}_{i}$ the charges $\mathcal{Q}({\nu _{R}}_{i})=(0,0,0)$,
we obtain for the neutrino Dirac mass matrix a pattern similar to Pattern I.
Of course, in this case, the heavy right-handed neutrino Majorana mass
matrix is totally arbitrary. In other cases, i.e. for other patterns and
charges, in particular for the right-handed neutrinos, we could introduce
scalar singlets with suitable charges, which would then lead to definite
heavy right-handed neutrino Majorana mass matrices.
Next, we address an important issue of the model, namely, whether accidental
$U(1)$ symmetries may appear in the Yukawa sector or in the potential. One
may wonder whether a continuous accidental $U(1)$ symmetry could arise, once
the $Z_{7}$ is imposed at the Lagrangian level in Eq.~\eqref{eq:lag}. This
is indeed the case, i.e., for all realizations of $Z_{7}$, one has the
appearance of a global $U(1)_{X}$. However, any consistent global $U(1)_{X}$
must obey the anomaly-free conditions of global symmetries~\cite%
{Babu:1989ex}, which read, for the anomalies $SU(3)^{2}\times U(1)_{X}$, $%
SU(2)^{2}\times U(1)_{X}$ and $U(1)_{Y}^{2}\times U(1)_{X}$,
\end{subequations}
\begin{subequations}
\begin{equation}
A_{3}\equiv \frac{1}{2}\sum_{i=1}^{3}\biggl(2X({Q_{L}}_{i})-X({u_{R}}_{i})-X(%
{d_{R}}_{i})\biggr)=0, \label{eq:A3}
\end{equation}%
\begin{equation}
A_{2}\equiv \frac{1}{2}\sum_{i=1}^{3}\biggl(3X({Q_{L}}_{i})+X({\ell _{L}}%
_{i})\biggr)=0, \label{eq:A2}
\end{equation}%
\begin{equation}
A_{1}\equiv \frac{1}{6}\sum_{i=1}^{3}\biggl(X({Q_{L}}_{i})+3X({\ell _{L}}%
_{i})-8X({u_{R}}_{i})-2X({d_{R}}_{i})-6X({e_{R}}_{i})\biggr)=0,
\end{equation}%
where $X(\chi )$ is the $U(1)_{X}$ charge of the fermion multiplet $\chi $.
We have properly shifted the $Z_{7}$-charges in Eq.~\eqref{eq:fix} and in
Table \ref{tab:downTextures} so that $X(\chi )=\mathcal{Q}(\chi )$, apart from
an overall $U(1)_{X}$ convention. In general, to test these conditions, one
needs to specify the transformation laws for all fermionic fields. Looking
at Table~\ref{tab:downTextures}, we derive that all the cases, except the first,
corresponding to $d_{i}=(0,0,0)$, violate the condition given in Eq.~%
\eqref{eq:A3}, which depends only on coloured fermion multiplets. In the case $%
d_{i}=(0,0,0)$, if one assigns the charged-lepton charges as $X({\ell _{L}}%
_{i})=X({Q_{L}}_{i})$, one concludes that the condition given in Eq.~%
\eqref{eq:A2} is violated. One then concludes that the global $U(1)_{X}$
symmetry is anomalous and therefore only the discrete symmetry $Z_{7}$
persists.
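The anomaly bookkeeping can be made explicit. The sketch below (ours) evaluates $A_{3}$ and $A_{2}$ with exact rationals for the charge assignments discussed above:

```python
from fractions import Fraction as F

# Sketch (ours): evaluate the anomaly coefficients A_3 and A_2 with exact
# rationals for the charge assignments used in the text.
X_Q = [0, 1, -2]            # X(Q_Li)
X_u = [0, 2, -4]            # X(u_Ri)

def A3(X_d):                # SU(3)^2 x U(1)_X, Eq. (A3)
    return F(1, 2) * sum(2 * q - u - d for q, u, d in zip(X_Q, X_u, X_d))

def A2(X_l):                # SU(2)^2 x U(1)_X, Eq. (A2)
    return F(1, 2) * sum(3 * q + l for q, l in zip(X_Q, X_l))

print(A3([0, 0, 1]))        # -1/2 != 0: the d_i of Pattern II violates A_3
print(A3([0, 0, 0]))        # 0: the d_i = (0,0,0) case passes A_3 ...
print(A2(X_Q))              # -2 != 0: ... but violates A_2 when X(l_L) = X(Q_L)
```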
We also comment on the scalar potential of our model. The most general
scalar potential with three scalars invariant under $Z_{7}$ reads as
\end{subequations}
\begin{equation}
V(\phi )=\sum_{i}\left[ -\mu _{i}^{2}\phi _{i}^{\dagger }\phi _{i}+\lambda
_{i}(\phi _{i}^{\dagger }\phi _{i})^{2}\right] +\sum_{i<j}\left[ +C_{i}(\phi
_{i}^{\dagger }\phi _{i})(\phi _{j}^{\dagger }\phi _{j})+\,\bar{C}%
_{i}\left\vert \phi _{i}^{\dagger }\phi _{j}\right\vert ^{2}\right] ,
\label{eq:pot}
\end{equation}%
where the constants $\mu _{i}^{2}$, $\lambda _{i}$, $C_{i}$ and $\bar{C}_{i}$
are taken real for $i,j=1,2,3$. Analysing the potential above, one sees that
it gives rise to the accidental global continuous symmetry $\phi
_{i}\rightarrow e^{i\alpha _{i}}\phi _{i}$, for arbitrary $\alpha _{i}$,
which upon spontaneous symmetry breaking leads to a massless neutral scalar,
at tree level. Introducing soft-breaking terms like $m_{ij}^{2}\phi
_{i}^{\dagger }\,\phi _{j}\,+\text{H.c.}$ cures this problem. Another
possibility, without spoiling the $Z_{7}$ symmetry, is to add new scalar
singlets, so that the coefficients $m_{ij}^{2}$ are effectively generated
once the scalar singlets acquire VEVs.
\begin{table}[]
\caption{Viable configurations for the right-handed
down quarks ${d_{R}}_{i}$ and their corresponding $\Gamma _{1}$, $\Gamma _{2}$%
, $\Gamma_{3}$ and $M_{d}$ matrices. It is understood that, for each pattern
and coupling, the parameters expressed here by the same symbol are in fact
different, but of the same order of magnitude (or possibly smaller).
E.g. in Pattern I, coupling $\Gamma_1$, the three entries $\protect\delta$, $\protect%
\delta$, $\protect\delta$ stand for $\protect\delta_1$, $\protect\delta_2$,
$\protect\delta_3$. The same applies to the $\protect\varepsilon$'s and the $c$%
's. For Patterns IV, VII, and X, which will be excluded, one of the
couplings in $\Gamma _{1}$ turns out to be much larger.}
\label{tab:downTextures}\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Pattern & $\mathcal{Q}({d_R}_i)$ & $\Gamma_1$ & $\Gamma_2$ & $\Gamma_3$ & $%
M_d$ \\ \hline
I & $(0, 0, 0)$ & $%
\begin{pmatrix}
\delta & \delta & \delta \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & \varepsilon & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & c & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & \delta & \delta \\
\varepsilon & \varepsilon & \varepsilon \\
c & c & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
II & $(0, 0, 1)$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
0 & 0 & \delta \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & \varepsilon & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & c & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
\varepsilon & \varepsilon & \delta \\
c & c & 0%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
III & $(0, 0, -3)$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & \varepsilon & 0 \\
0 & 0 & \varepsilon%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & c & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
\varepsilon & \varepsilon & 0 \\
c & c & \varepsilon%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
IV & $(0, 0, -2)$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
0 & 0 & 0 \\
0 & 0 & \varepsilon%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & \varepsilon & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & c & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
\varepsilon & \varepsilon & 0 \\
c & c & \varepsilon%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
V & $(0, 1, 0)$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
0 & \delta & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & 0 & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & 0 & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
\varepsilon & \delta & \varepsilon \\
c & 0 & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VI & $(0, -3, 0)$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & 0 & \varepsilon \\
0 & \varepsilon & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & 0 & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
\varepsilon & 0 & \varepsilon \\
c & \varepsilon & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VII & $(0, -2, 0)$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
0 & 0 & 0 \\
0 & \varepsilon & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & 0 & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & 0 & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
\varepsilon & 0 & \varepsilon \\
c & \varepsilon & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VIII & $(1, 0, 0)$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
\delta & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & \varepsilon & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & c & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
\delta & \varepsilon & \varepsilon \\
0 & c & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
IX & $(-3, 0, 0)$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & \varepsilon & \varepsilon \\
\varepsilon & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & c & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
0 & \varepsilon & \varepsilon \\
\varepsilon & c & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
X & $(-2, 0, 0)$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
0 & 0 & 0 \\
\varepsilon & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & \varepsilon & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & c & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
0 & \varepsilon & \varepsilon \\
\varepsilon & c & c%
\end{pmatrix}%
$\rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\ \hline
\end{tabular}%
\end{table}
\newpage
\section{Numerical analysis}
\label{sec:num}
In this section, we give the phenomenological predictions obtained by the
patterns listed in Table~\ref{tab:downTextures}. Note that, although these
patterns arise directly from the chosen discrete charge configuration of
the quark fields, one may further perform a residual flavour transformation
of the right-handed down-quark fields, resulting in an extra zero entry in $%
M_{d}$. Taking this into account, all the parameters in each pattern may be
uniquely expressed in terms of the down-quark masses and the CKM matrix
elements $V_{ij}$. This follows directly from the diagonalization equation of $M_{d}:$
\begin{equation}
V\ ^{\dagger }M_{d}\ W=diag(m_{d},m_{s},m_{b})\quad \Longrightarrow \quad
M_{d}=V\ diag(m_{d},m_{s},m_{b})\ W^{\dagger } \label{diag}
\end{equation}%
with $V$ being the CKM mixing matrix, since $M_{u}$ is diagonal. Because of
the zero entries in $M_{d}$, it is easy to extract the right-handed
diagonalization matrix $W$ entirely in terms of the down-quark masses and
the $V_{ij}$; thus, modulo the residual transformation of the right-handed
down-quark fields, all parameters are fixed. More precisely, all matrix
elements of $V$ are written in
terms of the real Wolfenstein parameters $\lambda $, $A$, $\overline{\rho }$ and
$\overline{\eta }$, defined in terms of rephasing-invariant quantities as
\begin{subequations}
\begin{equation}
\lambda \equiv \frac{|V_{us}|}{\sqrt{|V_{us}|^{2}+|V_{ud}|^{2}}},\qquad
A\equiv \frac{1}{\lambda }\left\vert \frac{V_{cb}}{V_{us}}\right\vert \,,
\end{equation}%
\begin{equation}
\overline{\rho }+i\,\overline{\eta }\equiv -\frac{V_{ud}^{\phantom{\ast}%
}V_{ub}^{\ast }}{V_{cd}^{\phantom{\ast}}V_{cb}^{\ast }}
\end{equation}%
while the entries of $diag(m_{d},m_{s},m_{b})$ in Eq.~\eqref{diag} satisfy
\end{subequations}
\begin{equation}
\begin{array}{l}
\sqrt{\frac{m_{d}}{m_{s}}}=\sqrt{{\frac{k_{d}}{k_s}}}\ \lambda \\
\\
\frac{m_{s}}{m_{b}}=k_{s}\ \lambda ^{2}%
\end{array}%
\quad \Longrightarrow \quad
\begin{array}{l}
m_{d}=k_{d}\ \lambda ^{4}m_{b} \\
\\
m_{s}=k_{s}\ \lambda ^{2}m_{b}%
\end{array}
\label{ks}
\end{equation}%
where, phenomenologically, $k_{d}$ and $k_{s}$ are factors of order one.
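As a quick numerical sketch (not part of the original analysis), one can extract the order-one factors $k_{d}$ and $k_{s}$ of Eq.~\eqref{ks} from the running masses quoted later in the text; the input values below are assumptions of this illustration.

```python
import math

# Illustrative check of Eq. (ks): extract the order-one factors k_d, k_s
# from assumed running masses at M_Z (values quoted later in the text).
lam = 0.2255                          # Wolfenstein lambda
m_d, m_s, m_b = 0.0027, 0.055, 2.86   # GeV

k_s = m_s / (lam**2 * m_b)            # from m_s = k_s lambda^2 m_b
k_d = m_d / (lam**4 * m_b)            # from m_d = k_d lambda^4 m_b

# consistency with sqrt(m_d/m_s) = sqrt(k_d/k_s) * lambda
lhs = math.sqrt(m_d / m_s)
rhs = math.sqrt(k_d / k_s) * lam
```

With these inputs both factors come out well within an order of magnitude of unity, as claimed.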
Writing $W^{\dagger }$ in Eq.~\eqref{diag} as $W^{\dagger
}=(v_{1},v_{2},v_{3})$, with the $v_{i}$ vectors formed by the $i$-th column
of $W^{\dagger }$, we find e.g. for pattern II,
\begin{equation}
v_{3}=\frac{1}{n_{3}}\left(
\begin{array}{r}
\frac{m_{d}}{m_{b}}V_{11} \\
\frac{m_{s}}{m_{b}}V_{12} \\
V_{13}%
\end{array}%
\right) \times \left(
\begin{array}{r}
\frac{m_{d}}{m_{b}}V_{31} \\
\frac{m_{s}}{m_{b}}V_{32} \\
V_{33}%
\end{array}%
\right) \label{v3}
\end{equation}%
where $n_{3}$ is the norm of the cross product
of the two vectors. Taking into account the extra freedom of transformation
of the right-handed fields, we may choose $M_{31}^{d}=0$, corresponding to $%
c_{1}=0$ in Table~\ref{tab:downTextures}, and we conclude that%
\begin{equation}
v_{1}=\frac{1}{n_{1}}\left(
\begin{array}{r}
\frac{m_{d}}{m_{b}}V_{31} \\
\frac{m_{s}}{m_{b}}V_{32} \\
V_{33}%
\end{array}%
\right) \times v_{3}^{\ast } \label{v1}
\end{equation}%
Obviously, then $v_{2}=\frac{1}{n_{2}}v_{1}^{\ast }\times v_{3}^{\ast }$.
This process is replicated for all patterns. Thus, $V$ and $W$ are entirely
expressed in terms of the Wolfenstein parameters and the factors $k_{d}$ and $k_{s}$ of Eq.~%
\eqref{ks}. These two matrices will later be used to compute the patterns of
the FCNC's in Table \ref{tb:FCNCpatterns}. Indeed, in this way we find, e.g.
for pattern II, in leading order,
\begin{equation}
M_{d}=m_{b}\ \left(
\begin{array}{ccc}
-k_{d}\ \lambda ^{3} & \left( \overline{\rho }-i\,\overline{\eta }\right) \
A\ \lambda ^{3} & 0 \\
-k_{d}\ \lambda ^{2} & A\ \lambda ^{2} & -k_{s}\ \lambda ^{3} \\
0 & 1 & 0%
\end{array}%
\right) \label{mdl}
\end{equation}%
which corresponds to the expected power series, where the couplings in $%
\Gamma _{1}$ to the first Higgs $\phi _{1}$ are comparatively smaller than
the couplings in $\Gamma _{2}$, and these in turn smaller than the couplings in $%
\Gamma _{3}$. Similar results are obtained for all patterns in Table~\ref%
{tab:downTextures}, except for patterns IV, VII and X, where e.g. for
pattern IV, we find that the coupling in $(\Gamma _{1})_{33}$ is
proportional to$\ \lambda $, which is too large and contradicts our initial
assumption that all couplings in $\Gamma _{1}$ to the first Higgs $\phi _{1}$
must be smaller than the couplings in $\Gamma _{2}$ to the second Higgs $%
\phi _{2}$. Therefore, we exclude Patterns IV, VII and X.
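The cross-product construction of Eqs.~\eqref{v3} and \eqref{v1} can be checked numerically. The sketch below (not the authors' code; the leading-order Wolfenstein matrix and the mass values are illustrative assumptions) builds $W^{\dagger}=(v_{1},v_{2},v_{3})$ for pattern II and verifies that $M_{d}=V\,diag(m_{d},m_{s},m_{b})\,W^{\dagger}$ has the texture zeros of Eq.~\eqref{mdl}.

```python
import numpy as np

# Numerical sketch of the cross-product construction of W for pattern II.
# Inputs (leading-order Wolfenstein CKM, running masses) are illustrative.
lam, A, rho, eta = 0.2255, 0.818, 0.124, 0.354
V = np.array([
    [1 - lam**2/2,                  lam,           A*lam**3*(rho - 1j*eta)],
    [-lam,                          1 - lam**2/2,  A*lam**2],
    [A*lam**3*(1 - rho - 1j*eta),  -A*lam**2,      1.0],
])
m_d, m_s, m_b = 0.0027, 0.055, 2.86   # GeV
D = np.diag([m_d, m_s, m_b])

# vectors entering Eq. (v3): rescaled first and third rows of V
a = np.array([m_d/m_b*V[0, 0], m_s/m_b*V[0, 1], V[0, 2]])
b = np.array([m_d/m_b*V[2, 0], m_s/m_b*V[2, 1], V[2, 2]])

v3 = np.cross(a, b);                 v3 /= np.linalg.norm(v3)
v1 = np.cross(b, np.conj(v3));       v1 /= np.linalg.norm(v1)   # choice c_1 = 0
v2 = np.cross(np.conj(v1), np.conj(v3)); v2 /= np.linalg.norm(v2)

W_dag = np.column_stack([v1, v2, v3])
Md = V @ D @ W_dag                   # should show zeros at (1,3), (3,1), (3,3)
```

The bilinear orthogonality of the cross product enforces the zeros exactly, and the three columns are Hermitian-orthonormal, so $W$ is unitary by construction.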
We give in Table~\ref{Yukawa_example} a numerical example of a Yukawa
coupling configuration for each pattern. We use the following quark running
masses at the electroweak scale $M_{Z}$:
\begin{subequations}
\begin{align}
m_{u}& =1.3_{-0.2}^{+0.4}\,\text{MeV},\quad m_{d}=2.7\pm 0.3\,\text{MeV}%
,\quad m_{s}=55_{-3}^{+5}\,\text{MeV}, \\
m_{c}& =0.63\pm 0.03\,\text{GeV},\quad m_{b}=2.86_{-0.04}^{+0.05}\,\text{GeV}%
,\quad m_{t}=172.6\pm 1.5\,\text{GeV}.
\end{align}%
which were obtained from a renormalisation-group evolution at
four-loop level \cite{1674-1137-38-9-090001}. Taking into account all
experimental constraints \cite{Charles:2015gya}, this implies:
\end{subequations}
\begin{subequations}
\begin{align}
\lambda & =0.2255\pm 0.0006,\qquad A=0.818\pm 0.015, \\
\overline{\rho }& =0.124\pm 0.024,\qquad \overline{\eta }=0.354\pm 0.015.
\end{align}
\begin{table}[]
\caption{A numerical example of a Yukawa coupling configuration for each
pattern that gives the correct hierarchy among the quark masses and mixing.}
\label{Yukawa_example}
\par
\begin{center}
\setlength{\tabcolsep}{0.5pc}
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|}
\hline
Pattern & $v_1Y_1$ & $v_2Y_2$ & $v_3Y_3$ & $M_d$ \\
\hline
I &
$\begin{pmatrix}
0.00277 & 0.0124 & 0.0101\,e^{1.907 \,i} \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0.0537 & 0.119 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2.86
\end{pmatrix}$
&
$\begin{pmatrix}
0.00277 & 0.00124 & 0.0101\,e^{1.907 \,i} \\
0 & 0.0537 & 0.119 \\
0 & 0 & 2.86
\end{pmatrix}$
\\
\hline
II &
$\begin{pmatrix}
0.0123 & 0.0101\,e^{-1.235 \,i} & 0 \\
0 & 0 & 0.012 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0.0524 & 0.119 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 2.86 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0.0123 & 0.0101\,e^{-1.235 \,i} & 0 \\
0.0524 & 0.119 & 0.012 \\
0 & 2.86 & 0
\end{pmatrix}$
\\
\hline
III &
$\begin{pmatrix}
0.0127 & 0.0102\, e^{-1.253 \,i}& 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0.0523 & 0.120 & 0 \\
0 & 0 & 0.295
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 2.844 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0.0127 & 0.0102\, e^{-1.253 \,i} & 0 \\
0.0523 & 0.120 & 0 \\
0 & 2.844 & 0.295
\end{pmatrix}$
\\
\hline
V &
$\begin{pmatrix}
0.0127 & 0 & 0.0101\,e^{-1.234 \,i}\\
0 & 0.0117 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0.0524 & 0 & 0.112 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2.86
\end{pmatrix}$
&
$\begin{pmatrix}
0.0127 & 0 &0.0101\,e^{-1.234 \,i} \\
0.0524 & 0.0117& 0.112 \\
0 & 0 & 2.86
\end{pmatrix}$
\\
\hline
VI &
$\begin{pmatrix}
0.0127 & 0 & 0.0102\,e^{-1.253 \,i} \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0.0523 & 0 & 0.120 \\
0 & 0.295 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2.844
\end{pmatrix}$
&
$\begin{pmatrix}
0.0127 & 0 & 0.0102\,e^{-1.253 \,i} \\
0.0523 & 0 & 0.120 \\
0 & 0.295 & 2.844
\end{pmatrix}$
\\
\hline
VIII &
$\begin{pmatrix}
0 &0.0127 & 0.0102\,e^{1.907 \,i} \\
0.0117 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0.0524 & 0.119 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2.86
\end{pmatrix}$
&
$\begin{pmatrix}
0 &0.0127 & 0.0102\,e^{1.907 \,i} \\
0.0117 & 0.0524 & 0.119 \\
0 & 0 & 2.86
\end{pmatrix}$
\\
\hline
IX &
$\begin{pmatrix}
0 & 0.0127 & 0.0101\,e^{-1.253 \,i} \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0.0523& 0.120 \\
0.295 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 &2.844
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0.0127 & 0.0101\,e^{-1.253 \,i} \\
0 & 0.0523& 0.120 \\
0.295& 0 &2.844
\end{pmatrix}$
\\
\hline
\end{tabular}}
\end{center}
\end{table}
\section{Predictions of flavour changing neutral currents}\label{sec:fcnc}
In the SM, flavour changing neutral currents (FCNC) are forbidden at tree
level, both in the gauge and the Higgs sectors. However, by extending the SM
field content, one obtains Higgs Flavour Violating Neutral Couplings \cite%
{Branco:2011iw}. In terms of the quark mass eigenstates, the Yukawa couplings
to the Higgs neutral fields are:
\end{subequations}
\begin{equation}
\begin{aligned} -\mathcal{L}_{\text{Neutral Yukawa}}= &\frac{H_0}{v}\left(
\overline{d_L}\, D_d \, d_R + \overline{u_L}\, D_u \, u_R \right) +
\frac{1}{v'} \overline{d_L} \, N^d_{1}\, \left( R_1 + i\, I_1 \right) \, d_R
\\ & + \frac{1}{v'} \overline{u_L} \, N^u_{1} \, \left( R_1 - i\, I_1
\right) \, u_R + \frac{1}{v''} \overline{d_L} \, N^d_{2}\, \left( R_2 + i\,
I_2 \right) \, d_R \\ &+ \frac{1}{v''} \overline{u_L} \, N^u_{2} \, \left(
R_2 - i\, I_2 \right) \, u_R + h.c. \end{aligned}
\end{equation}%
where the $N_{i}^{u,d}$ are the matrices which give the strength and the
flavour structure of the FCNC,
\begin{subequations}
\begin{align} \label{eq:FCNC}
& N_{1}^{d}=\frac{1}{\sqrt{2}}V^{\dagger }\,\left( v_{2}\Gamma
_{1}-v_{1}e^{i\,\alpha _{2}}\Gamma _{2}\right) \,W, \\
& N_{2}^{d}=\frac{1}{\sqrt{2}}V^{\dagger }\left( v_{1}\Gamma
_{1}+v_{2}e^{i\,\alpha _{2}}\Gamma _{2}-\frac{v_{1}^{2}+v_{2}^{2}}{v_{3}}%
e^{i\,\alpha _{3}}\Gamma _{3}\right) \,W, \\
& N_{1}^{u}=\frac{1}{\sqrt{2}}\left( v_{2}\Omega _{1}-v_{1}e^{-i\,\alpha
_{2}}\Omega _{2}\right) , \\
& N_{2}^{u}=\frac{1}{\sqrt{2}}\left( v_{1}\Omega _{1}+v_{2}e^{-i\,\alpha
_{2}}\Omega _{2}-\frac{v_{1}^{2}+v_{2}^{2}}{v_{3}}e^{-i\,\alpha _{3}}\Omega
_{3}\right) .
\end{align}%
Since in our case the $N_{i}^{u}$ are diagonal, there are no flavour
violating terms in the up-sector. Therefore, the analysis of the FCNC
reduces to the down-quark sector. One can use the equations of the mass
matrices presented in Eq.~\eqref{eq:mass} to simplify the Higgs mediated
FCNC matrices for the down-sector:
\end{subequations}
\begin{subequations}
\label{eq:simplefcnc}
\begin{align}
N_{1}^{d}& =\frac{v_{2}}{v_{1}}D_{d}-\frac{v_{2}}{\sqrt{2}}\left( \frac{v_{2}%
}{v_{1}}+\frac{v_{1}}{v_{2}}\right) e^{i\alpha _{2}}\,V^{\dagger }\,\Gamma
_{2}\,W-\frac{v_{2}\,v_{3}}{v_{1}\sqrt{2}}e^{i\alpha _{3}}V^{\dagger
}\,\,\Gamma _{3}\,W \\[2mm]
N_{2}^{d}& =D_{d}-\frac{v^{2}}{v_{3}\sqrt{2}}e^{i\alpha _{3}}\,V^{\dagger
}\,\Gamma _{3}\,W
\end{align}
In order to satisfy experimental constraints arising from $K^{0}-\overline{%
K^{0}}$, $B^{0}-\overline{B^{0}}$ and $D^{0}-\overline{D^{0}}$, the
off-diagonal elements of the Yukawa interactions $N_{1}^{d}$ and $N_{2}^{d}$
must be highly suppressed \cite{Botella:2014ska,AndreasCrivellin2013}%
. For each of our 10 solutions in Table~\ref{tab:downTextures}, we summarize
in Table~\ref{tb:FCNCpatterns} the corresponding FCNC patterns,
for $v_{1}=v_{2}=v_{3}$ and $\alpha _{2}=\alpha _{3}=0$. These patterns are
of the BGL type, since in Eq.~\eqref{eq:simplefcnc} all matrices can be
expressed in terms of the CKM mixing matrix elements and the down quark
masses. As explained, to obtain these patterns, we express the CKM matrix $V$
and the matrix $W$ in terms of Wolfenstein parameters.
\begin{table}[]
\caption{For all allowed patterns, we find that the matrices $N^d_1-D_d$ and
$N^d_2$ are proportional to the following patterns, where $\protect\lambda$
is the Cabibbo parameter.}{\label{tb:FCNCpatterns}}
{\ } \setlength{\tabcolsep}{14pt} \centering
\begin{tabular}{|c|c|c|}
\hline
Pattern & $(N^d_1-D_d)\sim$ & $N^d_{2}\sim$ \\ \hline
I & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^5 & \lambda^2 & \lambda^2 \\
\lambda^7 & \lambda^4 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^7 & \lambda^3 \\
\lambda^9 & \lambda^2 & \lambda^2 \\
\lambda^7 & \lambda^4 & 1%
\end{pmatrix}
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
II & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^7 & \lambda^3 \\
\lambda^9 & \lambda^2 & \lambda^2 \\
\lambda^7 & \lambda^4 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
III & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
IV & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$\rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
V & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^7 & \lambda^3 \\
\lambda^7 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VI & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VII & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VIII & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^7 & \lambda^3 \\
\lambda^7 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
IX & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
X & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$\rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\ \hline
\end{tabular}%
\end{table}
The tree level Higgs mediated $\Delta S=2$ amplitude must be suppressed.
This may always be achieved by choosing the masses of the flavour-violating
neutral Higgs scalars sufficiently heavy. However, from the
experimental point of view, it would be interesting to have these masses as
low as possible. Therefore, we also estimate the lower bound on these
masses by considering the contribution to $B^{0}-\overline{B^{0}}$ mixing.
We choose this mixing since, for our patterns, the $(3,1)$ entry of the
matrix $N_{1}^{d}$ is the least suppressed in certain cases and would require
very heavy flavour-violating neutral Higgses. The relevant quantity is the
off-diagonal matrix element $M_{12}$, which connects the B meson with the
corresponding antimeson. This matrix element receives
contributions \cite{Botella:2014ska} both from the SM box diagram and from a
tree-level diagram involving the FCNC:
\end{subequations}
\begin{equation}
M_{12}=M_{12}^{SM}+M_{12}^{NP},
\end{equation}%
where the New Physics (NP) short-distance tree-level contribution to the
meson-antimeson mixing is:
\begin{equation}
\begin{aligned}
M_{12}^{NP}= \sum_{i=1}^{2} \frac{f_B^2 \, m_B}{96\, v^2 m^2_{R_i}}
&\left\{ \left( 1+ \left( \frac{m_B}{m_d+m_b} \right)^{2} \right)\left(a^R_i\right)_{12}
- \left( 1+ 11 \left( \frac{m_B}{m_d+m_b} \right)^{2} \right)\left(b^R_i\right)_{12} \right\} \\
+\sum_{i=1}^{2} \frac{f_B^2 \, m_B}{96\, v^2 m^2_{I_i}}
&\left\{ \left( 1+ \left( \frac{m_B}{m_d+m_b} \right)^{2} \right)\left(a^I_i\right)_{12}
- \left( 1+ 11 \left( \frac{m_B}{m_d+m_b} \right)^{2} \right)\left(b^I_i\right)_{12} \right\}
\end{aligned}
\end{equation}%
with $v^{2}=v_{1}^{2}+v_{2}^{2}+v_{3}^{2}$ and
\begin{equation}
\begin{array}{l}
\left( a_{i}^{R}\right) _{12}=\left[ \left( N_{i}^{d}\right) _{31}^{\ast
}+\left( N_{i}^{d}\right) _{13}\right] ^{2} \\
\left( a_{i}^{I}\right) _{12}=-\left[ \left( N_{i}^{d}\right) _{31}^{\ast
}-\left( N_{i}^{d}\right) _{13}\right] ^{2}%
\end{array}%
~,\qquad
\begin{array}{l}
\left( b_{i}^{R}\right) _{12}=\left[ \left( N_{i}^{d}\right) _{31}^{\ast
}-\left( N_{i}^{d}\right) _{13}\right] ^{2} \\
\left( b_{i}^{I}\right) _{12}=-\left[ \left( N_{i}^{d}\right) _{31}^{\ast
}+\left( N_{i}^{d}\right) _{13}\right] ^{2}%
\end{array}%
~,\qquad i=1,2 \label{ab}
\end{equation}%
In order to obtain a conservative estimate, we have expanded the
original expression in \cite{Botella:2014ska} and, for the three-Higgs case,
included all neutral Higgs mass eigenstates.
Adopting as input values the PDG experimental determinations of $f_{B}$, $%
m_{B}$ and $\Delta \,m_{B}$ and considering a common VEV for all Higgs
doublets, we impose the inequality $M_{12}^{NP}<\Delta m_{B}$. The following
plots show an estimate of the lower bound for the flavour-violating Higgs
masses for two different patterns. We plot two masses chosen from the set $%
\left( m_{1}^{R},m_{2}^{R},m_{1}^{I},m_{2}^{I}\right) $, while the other two
are varied over a wide range. In Fig.~\ref{fig:test} we illustrate these lower
bounds for Pattern III, where the constraint is driven by the $(3,1)$ entry of
the $N_{1}^{d}$ matrix, which is suppressed only by a factor of $\lambda $. For
Pattern VIII, in Fig.~\ref{fig:test1} we find the flavour-violating neutral
Higgs bosons to be much lighter and possibly accessible at the LHC.
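A rough order-of-magnitude sketch of these bounds can be obtained from $M_{12}^{NP}<\Delta m_{B}$ keeping a single neutral state and only the dominant chirality factor. All numerical inputs below ($f_{B}$, $\Delta m_{B}$, the common VEV, and the estimates $(N^{d})_{31}\sim\lambda\, m_{b}$ for Pattern III versus $\lambda^{5} m_{b}$ for Pattern VIII) are illustrative assumptions, not the authors' exact computation.

```python
import math

# Order-of-magnitude lower bound on a flavour-violating neutral-Higgs mass
# from M_12^NP < Delta m_B.  Single neutral state, dominant chirality factor
# only; all inputs are illustrative assumptions.
lam = 0.2255
f_B, m_B = 0.19, 5.28          # GeV
m_d, m_b = 0.0027, 2.86        # GeV
delta_mB = 3.33e-13            # GeV
v = 246.0                      # GeV

chirality = 1.0 + 11.0 * (m_B / (m_d + m_b))**2

def higgs_mass_bound(n31):
    # m_H solving f_B^2 m_B / (96 v^2 m_H^2) * chirality * n31^2 = delta_mB
    return math.sqrt(f_B**2 * m_B * chirality * n31**2
                     / (96.0 * v**2 * delta_mB))

bound_III  = higgs_mass_bound(lam * m_b)      # (N_1^d)_{31} ~ lambda m_b
bound_VIII = higgs_mass_bound(lam**5 * m_b)   # (N_1^d)_{31} ~ lambda^5 m_b
```

With these inputs the Pattern III bound lands at the TeV scale, while the $\lambda^{4}$ extra suppression of Pattern VIII pushes the bound far below it, consistent with the qualitative statement above.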
\section{Conclusions}
\label{sec:conc}
We have presented a model based on the SM with three Higgs doublets and an
additional discrete flavour symmetry. We have shown that there exist
discrete flavour symmetry configurations which lead to the alignment of the
quark sectors. By allowing each scalar field to couple to each quark
generation with a distinctive scale, one obtains the quark mass hierarchy;
although this hierarchy does not arise from the symmetry, the combined
effect of both is that the CKM matrix is close to the identity and has the
correct overall phenomenological features. In this context, we have obtained
7 solutions fulfilling these requirements, with the additional constraint
that the up-quark mass matrix be diagonal and real.
We have also verified whether accidental $U(1)$ symmetries may appear in the
Yukawa sector or in the potential, in particular whether a continuous
accidental $U(1)$ symmetry could arise once the $Z_{7}$ is imposed at the
Lagrangian level. This was indeed the case; however, we have shown that the
anomaly-free conditions of global symmetries are violated. Thus, the global $%
U(1)_{X}$ symmetry is anomalous, and therefore only the discrete symmetry $%
Z_{7}$ persists.
Since new Higgs doublets are added in this model, one expects large FCNC
effects already at tree level. However, such effects have not been
observed experimentally. We show that, for certain of our specific
implementations of the flavour symmetry, it is possible to suppress the FCNC
effects while keeping the flavour-violating neutral Higgs bosons
light enough to be detectable at the LHC. Indeed, in this respect our model
is a generalization of the BGL models to the 3HDM, since the FCNC flavour
structure is entirely determined by the CKM matrix.
\begin{figure}[h!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.\linewidth]{P3-a.png}
\caption{Estimate of the lower bound for the flavour-violating Higgs masses for $R_1$ and $I_1$.}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.\linewidth]{P3-b.png}
\caption{Estimate of the lower bound for the flavour-violating Higgs masses for $R_2$ and $I_2$.}
\label{fig:sub2}
\end{subfigure}
\caption{Lower bound for the flavour-violating Higgs masses for Pattern III. }
\label{fig:test}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.\linewidth]{P8-a-good.png}
\caption{Estimate of the lower bound for the flavour-violating Higgs masses for $R_1$ and $I_1$.}
\label{fig:sub3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.\linewidth]{P8-b-good.png}
\caption{Estimate of the lower bound for the flavour-violating Higgs masses for $R_2$ and $I_2$.}
\label{fig:sub4}
\end{subfigure}
\caption{Lower bound for the flavour-violating Higgs masses for Pattern VIII. }
\label{fig:test1}
\end{figure}
\acknowledgments
This work is partially supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a
Tecnologia (FCT, Portugal) through the projects CERN/FP/123580/2011,
PTDC/FIS-NUC/0548/2012, EXPL/FIS-NUC/0460/2013, and CFTP-FCT Unit 777
(PEst-OE/FIS/UI0777/2013) which are partially funded through POCTI (FEDER),
COMPETE, QREN and EU. The work of D.E.C. is also supported by Associa\c c\~
ao do Instituto Superior T\'ecnico para a Investiga\c c\~ao e
Desenvolvimento (IST-ID). N.R.A is supported by European Union Horizon 2020
research and innovation programme under the Marie Sklodowska-Curie grant
agreement No 674896. N.R.A is grateful to CFTP for the hospitality during
his stay in Lisbon.
\bibliographystyle{ieeetr}
\section{Introduction}\label{sec:intro}
Mean Field Games (MFG, for short) is a very recent mathematical theory
modelling the macroscopic behaviour of a large population of indistinguishable agents
who wish to minimize a cost depending on the distribution of all the agents in a noisy environment.
It was proposed independently in~2006 by Lasry and Lions (\cite{ll})
and Huang, Caines and Malham\`e (\cite{hcm}), and it has a number of potential applications,
from economics and finance (growth theory, environmental policy,
formation of volatility in financial markets), to engineering and models of social systems,
such as crowd motion and traffic control. For the development of the theory
and several applications, we refer to the monographs~\cite{bf},
\cite{gomesbook}, and to the references therein.
Up to now, the noisy environment in which the game takes place has usually been
modelled by a standard diffusion.
Our aim is to consider a more general framework for the disturbances, and
in particular we take into account processes driven by pure jump {L}\'evy processes.
This generalization is interesting for applications to financial models, where
jump processes are widely used to model sudden crises and crashes in the markets (see e.g. the monograph \cite{rama}
for a detailed description of motivations for
the use of processes with jumps in financial models and examples of applications of {L}\'evy processes in risk management).
More precisely, albeit still heuristically, MFG are noncooperative differential games, with a continuum of players, each of whom
controls his own trajectory in the state space, which in our case is the $N$-dimensional torus.
The trajectory of each player is affected by a $2s$-stable L\'evy noise: it is defined by a stochastic differential equation
\[dX_t=v_t dt+ dZ_t,\]
where $v_t$ is the control and $Z_t$ is an $N$-dimensional, $2s$-stable pure jump {L}\'evy process, with associated {L}\'evy measure
(which describes the distribution of jumps of the process) given by
$\nu(dx)=\frac{1}{|x|^{N+2s}}dx$ (see~\cite{a}).
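For intuition, such controlled trajectories can be simulated. The sketch below (an illustration, not part of the paper: an Euler scheme with a constant control and the standard Chambers-Mallows-Stuck sampler for a symmetric $\alpha$-stable law with $\alpha=2s$) generates one sample path of $dX_t=v_t\,dt+dZ_t$.

```python
import numpy as np

# Illustrative Euler simulation of dX_t = v_t dt + dZ_t, with Z_t a
# symmetric 2s-stable pure jump Levy process.  The increments use the
# Chambers-Mallows-Stuck construction; all parameters are arbitrary choices.
rng = np.random.default_rng(0)

def stable_increments(alpha, size):
    # symmetric alpha-stable variates (skewness beta = 0, alpha != 1)
    V = rng.uniform(-np.pi/2, np.pi/2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V)**(1/alpha)
            * (np.cos((1 - alpha) * V) / W)**((1 - alpha)/alpha))

s, dt, n = 0.75, 1e-3, 10_000
dZ = dt**(1/(2*s)) * stable_increments(2*s, n)   # self-similar time scaling
v = -0.1                                         # constant control, for illustration
X = np.cumsum(v*dt + dZ)                         # Euler trajectory of X_t
```

The heavy tails of the increments are what distinguish this driving noise from a Brownian one: individual steps of arbitrarily large size (jumps) occur with non-negligible probability.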
Each player wants to minimize the long time average cost
\[\liminf_{T\to+\infty}\, \mathbf{E} \left[ \frac{1}{T}\, \int_0^T L(v_s) +f(X_s, m(X_s))ds\right],\]
where $m(x)$ denotes the density distribution of the population at point $x$, $L(q)$ is a superlinear convex
function and $f$ is a cost function taking into account
the position of each player and the density of the whole population.
We look for a stable configuration, that is a Nash equilibrium:
a configuration where, taking into account the choices of the others, no player would spontaneously decide to change his own choice.
In an equilibrium regime, the corresponding density of the average player is stable as time goes to $+\infty$, and coincides with the population density $m$.
{F}rom a PDE point of view,
this equilibrium configuration is characterized by a system of a fractional Hamilton-Jacobi
equation with Hamiltonian $H$ given by the Legendre transform of $L$, coupled with a fractional stationary
Fokker-Planck equation
describing the long-time distribution of all agents, moving according to the control which minimizes the long time average cost
(see \cite{ll}, \cite{gomesbook}).
\smallskip
We recall that MFG with jumps have been very recently considered in the literature by using a completely different approach based on
probabilistic techniques in \cite{kol}, where the theory of non-linear Markovian propagators is used, and in \cite{ca}, where the players control the intensity of jumps.
\medskip
In this paper we start with the analysis of stationary fractional
mean field game systems, in the periodic setting,
with fractional exponent greater than~$\frac{1}{2}$.
We restrict to this regime since the fractional Laplacian
operator with drift exhibits different properties
depending on whether the fractional exponent is greater
or smaller than~$\frac{1}{2}$.
In the case $s>\frac{1}{2}$, the diffusion component dominates, and so
the drift can be treated as a lower-order term. Moreover, the kernel of the linear operator defined
by the fractional Laplacian with drift can be estimated in terms of the fractional heat kernel (see \cite{bj}).
We provide in this paper an accurate analysis of
steady state solutions to the fractional Fokker-Planck
equations in the periodic setting, with bounded drift and
fractional exponent~$s$ greater than~$\frac{1}{2}$, see
Section~\ref{sectionfp}.
On the other hand, we discuss in Section~\ref{submezzo}
some examples in the case of fractional Laplacian operator
with fractional exponent $s$ smaller than~$\frac{1}{2}$ and bounded drift,
which suggest that the study
of fractional MFG in the range $s<\frac{1}{2}$
presents structural differences with respect to the
range~$s>\frac{1}{2}$.
\smallskip
We consider the following ergodic fractional MFG on the~$N$-dimensional torus $Q:=\mathbb{R}^N/\mathbb{Z}^N$. The goal is to
find a constant~$\lambda\in\mathbb{R}$ for which there exists a pair $(u,m)$ solving
\begin{equation}\label{mfg2}\begin{cases}
(- \Delta)^s u+ H(\nabla u)+\lambda =f(x,m),\\
(-\Delta )^s m-\mathrm{div}(m \nabla H(\nabla u) )=0, \\
\int_{Q} m\, dx=1. \end{cases}
\end{equation}
Here we consider the fractional
Laplacian~$(-\Delta)^s=(-\Delta_Q)^s$ defined on the torus~$Q$
with fractional parameter~$s\in\left(\frac{1}{2},1\right)$.
This operator can be defined directly by the multiple Fourier series
\[(-\Delta_Q)^s u(x):= \sum_{k\in \mathbb{Z}^N} |k|^{2s} c_k(u) e^{ik\cdot x}\] where $c_k$ are the Fourier coefficients of $u:Q\to \mathbb{R}$ (see \cite{rs}).
We identify functions defined on~$Q$ with their
periodic extensions to~$\mathbb{R}^N$,
and it is possible to show that for such functions~$u$,
the periodic distribution~$(-\Delta_Q)^s u(x)$
coincides with the distributional fractional Laplacian
on~$\mathbb{R}^N$ of~$u$ (see \cite[Theorem A]{rs2}).
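The Fourier-multiplier definition above lends itself to a direct spectral evaluation. The sketch below (a one-dimensional illustration with arbitrary grid size, exponent and test mode) verifies the eigenfunction relation $(-\Delta_Q)^s \cos(kx) = |k|^{2s}\cos(kx)$ on the torus.

```python
import numpy as np

# Spectral evaluation of (-Delta_Q)^s on the 1-torus: multiply the Fourier
# coefficients c_k(u) by |k|^{2s}.  Grid size, s, and the test mode are
# illustrative choices.
def frac_laplacian_torus(u, s):
    k = np.fft.fftfreq(u.size) * u.size          # integer wave numbers
    return np.real(np.fft.ifft(np.abs(k)**(2*s) * np.fft.fft(u)))

n, s = 64, 0.75
x = 2*np.pi*np.arange(n)/n
u = np.cos(3*x)
Lu = frac_laplacian_torus(u, s)                  # expect 3^{2s} cos(3x)
```

Since $\cos(3x)$ is a single Fourier mode, the computation is exact up to round-off, and the same multiplier approach works dimension by dimension on the $N$-torus.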
We shall assume that $H:\mathbb{R}^N\to\mathbb{R}$ is locally Lipschitz continuous and strictly convex, and that
there exist some $C_H > 0$, $K>0$ and $\gamma > 1$
such that, for all $p\in\mathbb{R}^N$,
\begin{equation}\begin{split}\label{Hass}
&C_H |p|^{\gamma} - C_H^{-1} \le H(p)
\le C_H^{-1} (|p|^{\gamma} + 1), \\
&\nabla H(p)\cdot p-H(p)\geq C_H
|p|^\gamma -K
\quad {\mbox{ and }} \quad |\nabla H(p)|\leq C_H|p|^{\gamma-1}.
\end{split}\end{equation}
As for the function~$f$, we consider both the case of local and
the case of nonlocal coupling.
We will give more precise
assumptions\footnote{
With a slight abuse of notation,
we write~$f[m]$ when we mean the
action of the functional~$f$ on a function~$m$,
and~$f(\cdot,m)$ when we mean the
map~$x\mapsto f(x,m(x))$.
The two cases are structurally different, since~$f[m]$
takes into account a ``nonlocal setting", in which,
for instance, $f[m]$
can be the convolution of~$f$ with a kernel
(in particular, $f[m](x)$
does not depend only on~$x$ and on~$m(x)$,
but rather on~$x$ and on all the
values that~$m$ may attain).
A more precise setting
is discussed in Section~\ref{sectionreg}.} in what follows about this.
Moreover, following \cite{trbook},
for~$p>1$ and~$\sigma\geq 0$,
we define the Bessel potential space $H^\sigma_p(Q)$ as
\begin{equation}\label{bessel}
H^\sigma_p(Q) := \Big\{
u\in L^p(Q):\ (I-\Delta)^\frac\sigma2 u\in L^p(Q)
\Big\}\qquad{\mbox{with }}\;
\|u\|_{H^\sigma_p(Q)} := \|(I-\Delta)^\frac\sigma2 u\|_{L^p(Q)}\,.
\end{equation}
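For $p=2$, the norm in \eqref{bessel} reduces, via Parseval's identity, to a Fourier multiplier sum, which can be checked numerically. The following sketch is our own illustration, in dimension one and with the $2\pi$-periodic convention; the test function and the value of $\sigma$ are arbitrary choices.

```python
import numpy as np

def bessel_norm_p2(u, sigma):
    # H^sigma_2 norm of a sampled 2*pi-periodic function, computed
    # spectrally: ||(I - Delta)^{sigma/2} u||_{L^2}^2
    #           = 2*pi * sum_k (1 + |k|^2)^sigma |c_k(u)|^2.
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)
    c = np.fft.fft(u) / n  # Fourier coefficients c_k(u)
    return np.sqrt(2 * np.pi * np.sum((1 + k ** 2) ** sigma * np.abs(c) ** 2))

n, sigma = 256, 1.3
x = 2 * np.pi * np.arange(n) / n
u = np.cos(5 * x)  # c_5 = c_{-5} = 1/2
assert np.isclose(bessel_norm_p2(u, sigma), np.sqrt(np.pi * 26 ** sigma))
```

For $\sigma=0$ this is just the $L^2(Q)$ norm, consistently with \eqref{bessel}.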
In this setting, we say that a classical solution to the system \eqref{mfg2} is a
triple~$(u, \lambda, m)\in C^{2s+\theta}(Q)\times
\mathbb{R}\times H^{2s-1}_p(Q)$, for all
$\theta<2s-1$ and for all $p>1$.
\smallskip
Our main result is the following, and it is proved in
Theorems~\ref{solmfgreg}, \ref{solmfgbdd} and~\ref{solmfg}.
\begin{theorem} \label{TH:MAIN}
Let $s\in\left(\frac{1}{2},1\right)$. Then \eqref{mfg2} admits a classical solution in the following cases.
\begin{enumerate}
\item $\gamma>1$ and~$f$ maps
continuously~$C^{\alpha}(Q) $, for some~$\alpha < 2s-1$,
into a bounded subset of $W^{1,\infty}(Q)$.
\item $1<\gamma\leq 2s$ and $f:Q\times [0, +\infty)\to \mathbb{R}$
is continuous and bounded.
\item $ 1< \gamma< \frac{N}{N-2s+1}$ for $N>1$,
$1<\gamma\leq 2s$ for $N=1$, and $f:Q\times [0, +\infty)\to \mathbb{R}$
is locally Lipschitz continuous and satisfies
\begin{equation}\label{gr1}
-Cm^{q-1}-K\leq f(x, m) \leq C m^{q-1}+K,\end{equation}
for some~$C$, $K>0$ and
\begin{equation}\label{gr2}
1<q<1+\frac{(2s-1)}{N}\frac{\gamma}{\gamma-1}.\end{equation}
\end{enumerate}
\end{theorem}
We now discuss in more detail the results in Theorem~\ref{TH:MAIN}.
In case~$(1)$, in which the coupling $f$
is a smoothing potential, we obtain existence of
solutions to the MFG system
by taking advantage of a classical approach given in~\cite{ll},
based on the Schauder Fixed Point Theorem.
To get the existence result in this case,
we use some estimates on the solutions to stationary
Fokker-Planck equations
obtained in Section~\ref{sectionfp}
and a-priori gradient estimates on
solutions of fractional coercive Hamilton-Jacobi equations,
inspired by the
Bernstein method in~\cite{blt}.
As for the case of the
local coupling, we use a different approach.
First of all, in order to get a-priori gradient estimates on
solutions of fractional coercive
Hamilton-Jacobi equations, we
can no longer use
the Bernstein method, since the function~$x\mapsto f(x,m(x))$
is not, in general, Lipschitz continuous.
So, we use the so-called Ishii-Lions method
(see~\cite{bcci1}) to obtain gradient estimates on
solutions of fractional coercive Hamilton-Jacobi equations.
This method requires, in particular,
that~$\gamma\leq 2s$, where~$\gamma$ is
the growth of the Hamiltonian given in~\eqref{Hass}
and~$s$ is the fractional exponent of the Laplacian.
The gradient estimates in this case
depend only on the $L^\infty$ norm of the solutions
and of~$f$.
In case $(2)$, this result allows us to conclude the proof of Theorem~\ref{TH:MAIN},
by first regularizing the potential and then passing to the limit.
In case $(3)$, in which the local coupling term is unbounded, we use the variational approach,
which goes back to the seminal work \cite{ll} (see also \cite{cgpt}, \cite{c16}):
the MFG system is obtained (at least formally) as the optimality condition of an appropriate
optimal control problem on the fractional Fokker-Planck equation.
A first difficulty is that the function $f(x, \cdot)$ can be unbounded
both from below and from above, so that, in general, the
energy associated to the MFG system is not even bounded.
The condition on the growth of $f$ with respect to $m$,
given in~\eqref{gr1} and~\eqref{gr2},
is necessary to get boundedness of the energy associated to the system and then to obtain existence of minimizers by direct methods.
Note that our assumption allows us to treat both the case
in which the coupling is an increasing function of $m$,
that is a congestion game, in which players aim
to avoid regions where the population
has a high density, and the opposite case in which the
coupling is a decreasing function in $m$,
modelling a game in which every player is attracted by
regions where the density of population is high.
Finally we point out that the condition on the growth of the
Hamiltonian in~$(3)$ of Theorem~\ref{TH:MAIN}, that is
\[1< \gamma< \frac{N}{N-2s+1}, \]
is just a technical condition, which can be removed once a-priori gradient
estimates on the solutions of fractional coercive Hamilton-Jacobi equations
depending only on the $L^\infty$ norm of the potential term $f$ and not on the $L^\infty$ norm of the solutions $u$ are available. In the case of the classical Laplacian such a result has been obtained
by an improved Bernstein method, based also on
Ishii-Lions type arguments, in \cite{clp}.
We believe that such an approach can be adapted to
the fractional case, and this will be the
topic of future research.
\smallskip
The paper is organized as follows. In Section~\ref{sectionfp}
we provide some results on a-priori estimates,
existence and uniqueness of solutions to stationary fractional
Fokker-Planck equations in the periodic setting.
These results should be classical and well known;
nevertheless, due to the lack of
precise references in the literature, we also provide a
sketch of the proofs.
In Section~\ref{sectionHJ}, we recall the existing results
about a-priori gradient bounds for solutions to fractional
Hamilton-Jacobi equations with coercive Hamiltonians and
on the solvability of ergodic problems in this setting.
Section~\ref{sectionreg} is devoted to the analysis of MFG
systems in the case of regularizing nonlocal coupling.
Section~\ref{sectionbounded}
contains the existence result for MFG systems with local
bounded coupling. In Section~\ref{sectionunbounded},
we consider fractional MFG systems with local unbounded coupling.
Finally,
Section~\ref{sectionimprovement} contains
the improvement of regularity of solutions of the MFG system
in the case in which the coefficients are more regular, and
the uniqueness result for increasing coupling terms.
\section{Steady state solutions to fractional Fokker-Planck equations}\label{sectionfp}
We provide here some results on existence, uniqueness and
regularity of steady state solutions to fractional Fokker-Planck
equations in the periodic setting.
First of all, we recall some simple results about Bessel potential spaces.
We recall that (see \cite{kim})
the norm~$\|\cdot\|_{H^\sigma_p(Q)}$ defined in~\eqref{bessel}
is equivalent to the norm
$$
\|u\| = \|u\|_{L^p(Q)}+\|(-\Delta)^\frac\sigma2 u\|_{L^p(Q)}\,.
$$
Observe that the space $H^\sigma_2 (Q)$ coincides with $W^{\sigma,2}(Q)$.
Moreover we have the following embedding results.
\begin{lemma}\label{lemmazero}
For every $\sigma\geq 0$, $p>1$
and $\varepsilon>0$, we get
\[ H^{\sigma+\varepsilon}_p(Q)\subseteq W^{\sigma,p}(Q)
\subseteq H^{\sigma-\varepsilon}_p(Q),\]
with continuous embeddings.
Moreover
\begin{equation}\label{eqmp}
W^{m,p}(Q)= H^m_p(Q)\qquad \textrm{if $m\in \mathbb N$}.
\end{equation}
\end{lemma}
\begin{proof} The proof of this result
is given in~\cite[Theorem~3.2]{lm}, for~$Q=\mathbb{R}^N$.
Then in~\cite[Section~4]{lm}, the result is extended to~$
Q=\Omega$ with~$\Omega$ bounded open set with regular
boundary, since it is proved (see~\cite[Proposition~4.1]{lm})
that~$H^\sigma_p(\Omega)$
coincides with the set of restrictions to~$\Omega$
of functions in~$H^\sigma_p(\mathbb{R}^N)$.
The same argument (in fact, a simpler one) also gives
the result in the periodic case.
\end{proof}
\begin{lemma}\label{lemmauno}
Let $w\in H^\sigma_p(Q; \mathbb{R}^N)$, with $\sigma\ge 0$.
Then there exists a unique solution
$m\in H^{2s-1+\sigma}_p(Q)$ to the problem
\begin{equation}\label{eqm}
(-\Delta)^{s}m = \mathrm{div}(w),\qquad {\mbox{with }}\; \int_Q m \, dx=1.
\end{equation}
Moreover there exists $C>0$, depending on $p$, such that
\begin{equation}\label{esw} \|m\|_{H^{2s-1+\sigma}_p(Q)}\leq C \|w\|_{H^{\sigma}_p(Q)}.\end{equation}
\end{lemma}
\begin{proof}
We first show that the following auxiliary problem
\begin{equation}\label{eqmu}
-\Delta u = \mathrm{div}(w), \qquad {\mbox{with }}\; \int_Q u\, dx=0,
\end{equation}
admits a unique solution $u\in H^{1+\sigma}_p(Q)$.
Assume first that $w$ is smooth, let $u$ be the unique smooth solution to \eqref{eqmu},
and let $v\in C^\infty(Q)$ be a test function. Multiplying \eqref{eqmu}
by $(-\Delta)^\frac{\sigma}{2}v$ and integrating by parts, we get
\begin{eqnarray*}
&& \int_Q u\, (-\Delta)^{1+\frac\sigma 2}v\,dx =
- \int_Q (-\Delta)^\frac{\sigma}{2}w \cdot\nabla v\,dx
\\&&\qquad\quad \le \|w\|_{H^\sigma_p(Q)}\,\|\nabla v\|_{L^{p'}(Q)}\le
C \|w\|_{H^\sigma_p(Q)}\,\|(-\Delta)^\frac{1}{2} v\|_{L^{p'}(Q)},
\end{eqnarray*}
where the last inequality follows from \eqref{eqmp} with $m=1$.
Here, $p'=\frac{p}{p-1}$ is the conjugate exponent of~$p$.
As a consequence, by taking $\psi := (-\Delta)^\frac{1}{2} v$,
which is an arbitrary test function with zero average, we get
\[
\int_Q u\, (-\Delta)^{\frac{1+\sigma}{2}}\psi\,dx \le
C \|w\|_{H^\sigma_p(Q)}\,\|\psi\|_{L^{p'}(Q)}, \qquad\forall\psi\in C^\infty(Q),
\]
which implies that \begin{equation}\label{es1} \|u\|_{H^{1+\sigma}_p(Q)}\le C \|w\|_{H^\sigma_p(Q)}.\end{equation}
The result in the general case then follows by approximating $w$ with smooth vector fields.
\smallskip
Letting now $m := 1+ (-\Delta)^{1-s}u\in H^{2s-1+\sigma}_p(Q)$,
so that $(-\Delta)^s m = -\Delta u$,
we have that
$$\int_Q m\, dx = 1$$ and $m$ is the (unique) solution to \eqref{eqm}. Finally, recalling the definition of $m$, we get that
\begin{equation}\begin{split}\label{jegberbger}
&\|m\|_{H^{2s-1+\sigma}_p(Q)}=\| (I-\Delta)^{s-\frac{1}{2}+
\frac{\sigma}{2}}m\|_{L^p(Q)} \\ &\qquad \leq \|
(I-\Delta)^{ \frac{1+\sigma}{2}}u\|_{L^p(Q)}
+ \|(I-\Delta)^{s-\frac{1}{2}+\frac{\sigma}{2}}u\|_{L^p(Q)}
=\|u\|_{H_p^{ \sigma+1}(Q)}
+ \|u\|_{H_p^{\sigma-1+2s}(Q)} .\end{split}\end{equation}
Notice now that~$2s-1\in (0,1)$,
and therefore~$\sigma-1+2s\leq \sigma +1$.
Hence, \eqref{jegberbger}, together with~\eqref{es1}, gives~\eqref{esw},
thanks to Lemma~\ref{lemmazero}.
\end{proof}
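On the Fourier side, the construction in the proof of Lemma~\ref{lemmauno} is completely explicit: for $k\neq 0$ one takes $c_k(m)= i k\cdot c_k(w)/|k|^{2s}$, while the zero mode is fixed by the normalization $\int_Q m\,dx=1$. The following one-dimensional sketch (our own illustration, with the $2\pi$-periodic convention; the data $w$ and the exponent $s$ are arbitrary choices) implements this and verifies the equation a posteriori.

```python
import numpy as np

def solve_div_equation(w, s):
    # Solve (-Delta)^s m = div(w) = w' on the 2*pi-periodic line,
    # normalized so that the mean of m equals 1.
    n = w.size
    k = np.fft.fftfreq(n, d=1.0 / n)
    wh = np.fft.fft(w)
    mh = np.zeros_like(wh)
    nz = k != 0
    mh[nz] = 1j * k[nz] * wh[nz] / np.abs(k[nz]) ** (2 * s)
    mh[0] = n  # zero mode: mean(m) = 1
    return np.fft.ifft(mh).real

n, s = 128, 0.75
x = 2 * np.pi * np.arange(n) / n
w = np.sin(x) + 0.3 * np.cos(2 * x)
m = solve_div_equation(w, s)

# A posteriori check: (-Delta)^s m agrees with w', and m has mean 1.
k = np.fft.fftfreq(n, d=1.0 / n)
lap_m = np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(m)).real
w_prime = np.fft.ifft(1j * k * np.fft.fft(w)).real
assert np.allclose(lap_m, w_prime) and np.isclose(m.mean(), 1.0)
```

The estimate \eqref{esw} is also visible here: each nonzero Fourier mode of $m$ is the corresponding mode of $w$ multiplied by $ik/|k|^{2s}$, whose modulus $|k|^{1-2s}$ corresponds exactly to a gain of $2s-1$ derivatives.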
\begin{lemma}\label{lemmadue}
Let~$r>1$ and~$m\in L^1(Q)$ be such that $\int_Q m =1$ and
\begin{equation}\label{senzaw}
\int_Q m (-\Delta)^s \phi \le C \|\nabla\phi\|_{L^{r'}(Q)},
\qquad \forall \phi\in C^1_{\rm per}(\mathbb{R}^N),
\end{equation}
with~$r'=\frac{r}{r-1}$, for some $C>0$.
Then $(-\Delta)^{s-\frac{1}{2}}m\in L^r(Q)$ and
\begin{equation}\label{eqstima}
\|(-\Delta)^{s-\frac{1}{2}}m\|_{L^r(Q)}\le C.
\end{equation}
\end{lemma}
\begin{proof}
By \cite{trbook} (recall also~\eqref{eqmp}),
we know that $W^{1,r'}(Q)$ is isomorphic to $H^1_{r'}(Q)$,
so that in particular there exists a constant $C=C_{r'}>0$ such that
\begin{equation}\label{jgjbgbg}
\|\nabla \phi\|_{L^{r'}(Q)}\leq C\|(-\Delta)^{\frac{1}{2}}\phi\|_{L^{r'}(Q)}.
\end{equation}
Let $m_\varepsilon:=m\star\chi_\varepsilon$, where $\chi_\varepsilon$ is a standard mollifier.
Then \eqref{senzaw}, applied to the mollified test functions~$\phi\star\chi_\varepsilon$
(whose gradients satisfy~$\|\nabla(\phi\star\chi_\varepsilon)\|_{L^{r'}(Q)}\leq\|\nabla\phi\|_{L^{r'}(Q)}$), gives
\[
\int_Q m_\varepsilon (-\Delta)^s \phi \le C \|\nabla\phi\|_{L^{r'}(Q)}, \qquad \forall \phi\in C^1_{\rm per}(\mathbb{R}^N).
\]
Therefore, integrating by parts and recalling~\eqref{jgjbgbg}, we obtain
\[
\int_Q (-\Delta)^{s-\frac{1}{2}}m_\varepsilon (-\Delta)^{\frac{1}{2}} \phi\, dx=
\int_Q m_\varepsilon (-\Delta)^s \phi\, dx
\le C \|\nabla\phi\|_{L^{r'}(Q)}\leq
C\| (-\Delta)^{\frac{1}{2}} \phi\|_{L^{r'}(Q)},
\]
from which we obtain the desired inequality~\eqref{eqstima} for $m_\varepsilon$
and then for $m$, letting $\varepsilon \to 0$.
\end{proof}
Finally we consider steady state solutions to the periodic fractional
Fokker-Planck equation.
\begin{proposition}\label{lemmaunoemezzo}
Let $b\in L^\infty(Q;\mathbb{R}^N)$. Then, there exists a unique solution
$m\in H^{2s-1}_p(Q)$, for all~$p > 1$, to the problem
\begin{equation}\label{eqkolmo}
(-\Delta)^{s}m + \mathrm{div}(bm) = 0,
\end{equation}
with $\int_Q m \, dx=1$, and
\[
\|m\|_{H^{2s-1}_p(Q)} \le C,
\]
where $C>0$ depends only on $N$, $p$ and $\|b\|_{L^\infty(Q;\mathbb{R}^N)}$.
In particular, we have that~$m\in C^\theta(Q)$,
for every~$\theta\in (0, 2s-1)$.
Furthermore, we get that there exists a constant~$
C=C(s,N,b)>0$ such that $$0<C\leq m(x)\leq C^{-1}, \quad
{\mbox{ for any }}x\in Q.$$
\end{proposition}
\begin{proof}
Assume $b$ to be smooth; the general case will follow by an approximation argument.
\medskip
\noindent {\bf Step 1: Existence and uniqueness of a solution.}
The existence result follows by the Fredholm alternative.
More precisely, for $\Lambda$ large enough, by the Lax-Milgram Theorem, the equation
\[
(-\Delta)^{s} v - b \cdot \nabla v + \Lambda v = \psi
\]
has a unique solution $v \in H^{s}_2(Q)$,
for any fixed $\psi \in L^2(Q)$.
Therefore, the mapping~$\mathcal{G}_\Lambda$,
defined by~$v = \mathcal{G}_\Lambda \psi$,
is a compact mapping of~$L^2(Q)$ into itself.
Now, equation \eqref{eqkolmo} may be rewritten as
\begin{equation}\label{Gstar}
(I - \Lambda \mathcal{G}_\Lambda^*) m = 0.
\end{equation}
By the Fredholm alternative, the number of linearly independent solutions of \eqref{Gstar} is the same as that of the adjoint problem, that is
\[
(I - \Lambda \mathcal{G}_\Lambda) v = 0,
\]
that corresponds to
\begin{equation}\label{adjo}
(-\Delta)^{s} v - b \cdot \nabla v = 0.
\end{equation}
Any $v\in H^{s}_2(Q)$ solving \eqref{adjo} is in $C^{2s}(Q)$
(due to~\cite[Lemma~2.2]{pp}), and
then it must be constant by the Strong Maximum Principle (see~\cite{cio}).
We conclude that there exists~$m$
solving~\eqref{eqkolmo} (in the distributional sense),
and such~$m \in L^2(Q)$ is unique up to a multiplicative constant.
\medskip
\noindent {\bf Step 2: Positivity.}
Fix a nonnegative periodic Borel initial datum $z_0$, and consider the following Cauchy problem,
\begin{equation}\label{cauchypb}
\begin{cases}
\partial_t z+ (-\Delta)^{s} z - b \cdot \nabla z = 0 & \text{in $\mathbb{R}^N \times (0,\infty)$}, \\
z(\cdot, 0) = z_0(\cdot).
\end{cases}
\end{equation}
We recall the estimates on the heat kernel of the fractional Laplacian perturbed by a gradient operator obtained
in~\cite[Theorem~2]{bj}: for every~$t_0>0$,
there exists a constant~$C>0$,
depending on $t_0$, $s$, $b$ and~$N$,
such that
\begin{equation}\label{bogdan} Cp(t,x,y)\leq p'(t,x,y)\leq
C^{-1} p(t,x,y),\quad {\mbox{for any }} x,y\in \mathbb{R}^N
{\mbox{ and }} t\in (0, t_0),\end{equation}
where $p(t,x,y)$ is the fractional heat kernel and~$p'(t,x,y)$
is the kernel associated to the operator~$\partial_t + (-\Delta)^{s}
- b \cdot \nabla $.
Now we fix $x_0\in Q$
and we take a mollifying sequence
\begin{equation}\label{convmes} z_{0,n}\rightharpoonup \delta_{x_0}
\end{equation} in the sense of measures.
Let $z_n$ be the solution to \eqref{cauchypb} with initial datum $z_{0,n}$.
Then, by the lower bound in \eqref{bogdan}, the solution $z_n$ satisfies
\begin{equation*} z_n(x,1)\geq \tilde C \int_Q z_{0,n}(x)\,dx=\tilde C,
\end{equation*}
where $\tilde C>0$ is a constant depending on $b$, $s$ and~$N$.
In particular, by the comparison principle,
\[z_n(x,t)\geq \tilde C \quad {\mbox{ for any }} t\geq 1.\]
By \cite[Theorem 2]{bcci}, $z_n(\cdot, t) - \Lambda_n t
- \bar{z}_n(\cdot)$ converges uniformly to zero as $t \to +\infty$,
where the couple~$(\Lambda_n, \bar{z}_n)$
solves the stationary problem
\begin{equation}\label{ergod}
(-\Delta)^{s} \bar z_n - b \cdot \nabla \bar z_n = \Lambda_n \quad \text{in $Q$}.
\end{equation}
Note that $(\Lambda_n, \bar{z}_n)$ solving \eqref{ergod}
must satisfy $\Lambda_n = 0$, so that~$\bar{z}_n$ is
identically constant on~$Q$; hence~$z_n(\cdot, t) \to \bar{z}_n\geq
\tilde C $ uniformly on~$Q$ as~$t \to +\infty$.
By multiplying the equation in \eqref{cauchypb} by $m$, the equation
in~\eqref{eqkolmo} by $z_n$, and integrating by parts on~$Q$,
we obtain that, for all $t > 1$,
\[
\int_Q \partial_t z_n(x,t) m(x) dx = 0,
\]
so
\begin{equation}\label{convmes2}
\int_Q z_{0,n}(x) m(x) dx = \int_Q z_n(x,t) m(x) dx \to \bar{z}_n\geq \tilde C >0
\end{equation}
as $t \to +\infty$, since~$\int_Q m \, dx=1$.
Now we send $n\to +\infty$ in~\eqref{convmes2} and we get,
recalling \eqref{convmes},
\[m(x_0)\geq \tilde C.\]
Since this is true for every $x_0\in Q$,
we get that there exists a constant~$C=C(s,N,b)>0$
such that~$0<C\leq m(x) $.
\medskip
\noindent {\bf Step 3: Boundedness and regularity.}
The same argument as in Step~2, using the bound from above
in~\eqref{bogdan} (instead of the bound from below),
gives that there exists a constant~$C'=C'(s,N,b)>0$
such that $ m(x)\leq C'$.
Since $b m \in L^\infty(Q)$, by Lemma \ref{lemmauno} we have
that~$m \in H^{2s-1}_p(Q)$, for all $p > 1$.
In particular, we have that~$m\in C^\theta (Q)$, for every~$\theta\in (0,2s-1)$.
\medskip
\end{proof}
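A numerical counterpart of Proposition~\ref{lemmaunoemezzo} (again a sketch of our own, not used in the proofs): discretizing the operator $m\mapsto(-\Delta)^s m+\mathrm{div}(bm)$ spectrally on a one-dimensional $2\pi$-periodic grid, the invariant density can be computed as the kernel of the resulting matrix, and the strict positivity asserted in the proposition can be observed. The drift $b$, the exponent $s$ and the grid size are arbitrary choices for this illustration.

```python
import numpy as np

def fp_matrix(b, s):
    # Matrix of m -> (-Delta)^s m + div(b m) on a 2*pi-periodic grid,
    # assembled by applying the spectral operator to the basis vectors.
    n = b.size
    k = np.fft.fftfreq(n, d=1.0 / n)
    def apply(m):
        frac = np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(m)).real
        drift = np.fft.ifft(1j * k * np.fft.fft(b * m)).real
        return frac + drift
    return np.column_stack([apply(e) for e in np.eye(n)])

n, s = 64, 0.75
x = 2 * np.pi * np.arange(n) / n
b = np.cos(x)  # a smooth bounded drift
A = fp_matrix(b, s)

# The steady state spans the (numerical) kernel of A.
vals, vecs = np.linalg.eig(A)
m = vecs[:, np.argmin(np.abs(vals))].real
m /= m.mean()  # normalization corresponding to int_Q m dx = 1
assert m.min() > 0          # strict positivity, as in the proposition
assert np.linalg.norm(A @ m) < 1e-6
```

Note that the constant function lies exactly in the kernel of the discrete adjoint $v\mapsto(-\Delta)^s v-b\cdot\nabla v$, mirroring the Fredholm argument of Step~1.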
\subsection{The case $s<\frac{1}{2}$} \label{submezzo}
Note that in the case $s<\frac{1}{2}$ the solutions
to
$$(-\Delta)^s m=\mathrm{div} w,$$
for $w\in H^{\sigma}_p$, have to be understood in a suitable
weak sense. In particular, if~$\sigma<1-2s$,
the solution $m$ is a distribution.
Moreover, the kernel associated with the operator $(-\Delta)^s +b(x)\cdot \nabla$ is not bounded from below
by the fractional heat kernel, and it does not produce strictly
positive solutions. These phenomena will be discussed in detail in the following remarks.
This suggests that the study of fractional Mean Field Games in the range $s<\frac{1}{2}$
presents structural differences with respect to the range $s>\frac{1}{2}$.
\begin{remark}\label{RK1}
Concerning the optimality of
the regularity results in Proposition~\ref{lemmaunoemezzo},
we point out that
the solution~$m$ may vanish at a point
and is not better than~$C^{2s}$ for~$s\in(0,\,1/2)$.
To see a one-dimensional example in~$\mathbb{R}$,
we take~$N=1$, $s\in(0,\,1/2)$ and
$$ b(x):=-\frac{1}{m(x)} \int_0^x (-\Delta)^s m(y)\,dy,$$
where~$m(x):=|x|^\theta$, with~$\theta\in(2s,1)$.
Using the substitution~$z=y/x$, we see that
\begin{equation}\label{SCAL} \int_{\mathbb{R}} \frac{|x+y|^\theta+|x-y|^\theta-2|x|^\theta}{|y|^{1+2s}}\,dy=
|x|^{\theta-2s}
\int_{\mathbb{R}} \frac{|1+z|^\theta+|1-z|^\theta-2|z|^\theta}{|z|^{1+2s}}\,dz\end{equation}
and so~$(-\Delta)^s m(x)=-c|x|^{\theta-2s}$, for some~$c>0$.
This setting gives that, for small~$|x|$,
$$ |b(x)|\le \frac{C}{|x|^\theta} \int_0^{|x|} |y|^{\theta-2s}\,dy
\le C |x|^{1-2s},$$
up to renaming~$C>0$.
This gives that~$b$ is locally bounded (and H\"older continuous
with exponent~$1-2s$).
Moreover, we have that
$$ {\rm div}(bm)= (bm)' =-\left(
\int_0^x (-\Delta)^s m(y)\,dy\right)'=
-(-\Delta)^s m,$$
hence the equation is satisfied.
\end{remark}
\begin{remark}
The example of Remark~\ref{RK1} can also be used to
show that the positivity results of~\cite{bj} do not hold in general for~$s\in(0,\,1/2)$.
For instance,
we take~$N=1$, $s\in(0,\,1/2)$,
$v(x):=|x|^\theta$, with~$\theta\in(2s,1)$.
{F}rom~\eqref{SCAL}, we know that~$(-\Delta)^s v(x)=-c|x|^{\theta-2s}$, for some~$c>0$.
So we define~$ b(x):= -\frac{c}\theta\,|x|^{-2s}x$
and we notice that~$b$ is locally bounded (and H\"older continuous
with exponent~$1-2s$) and
$$ (-\Delta)^{s} v - b \cdot \nabla v =
-c|x|^{\theta-2s} +\left(\frac{c}\theta\,|x|^{-2s}x\right)\cdot(\theta |x|^{\theta-2}x)
=
0,$$
hence~\eqref{adjo} is satisfied.
Since~$v\ge0$ but~$v(0)=0$, this example shows that the strong maximum principle is
violated in this case. Note that~$v$ solves the
equation a.e., but it is not a viscosity (sub)solution
of the equation at~$x=0$. Indeed,
by the strong maximum principle proved
in Lemma~4.4 in~\cite{blt}, the unique viscosity solutions
to~$(-\Delta)^{s} v - b \cdot \nabla v=0$ are constants.
Moreover, $v$ is also a (stationary) solution of the heat flow
associated to~\eqref{adjo}, corresponding to an initial datum which is nonnegative
and that does not become strictly positive as time flows (this lack of positivity gain in time
can be seen as a counterpart when~$s\in(0,\,1/2)$ to the positivity of the heat kernel
established in~\cite{bj}).
\end{remark}
\section{Fractional Hamilton-Jacobi equations with coercive Hamiltonian}\label{sectionHJ}
We collect some results on the
Lipschitz continuity of viscosity solutions to Hamilton-Jacobi equations and on the solution to the ergodic problem.
There are different kinds of results, depending on whether or not
the Hamiltonian term is dominant with respect to the fractional Laplacian term.
More precisely, the growth condition~$\gamma\leq 2s$ allows the use of the so-called Ishii-Lions method, which in particular provides a-priori estimates on the gradient of the solution
depending only on the~$L^\infty$ norm of the
potential term, whereas in the case~$\gamma>2s$
the Bernstein method is used, providing a-priori
estimates on the gradient of the solution
depending only on the Lipschitz norm of the
potential term.
We consider the following Hamilton-Jacobi equation
\begin{equation}\label{eqHJ}
(-\Delta)^s u+ H(\nabla u)+\lambda= f(x), \qquad x\in \mathbb{R}^N.
\end{equation}
We assume that $f\in C(\mathbb{R}^N)$, and that~$f$ is $\mathbb{Z}^N$-periodic.
\begin{theorem}\label{ergodic}
Let $s>\frac{1}{2}$ and $\gamma\leq 2s$.
Then the following statements hold.
\begin{enumerate}
\item If $u$ is a continuous periodic solution to \eqref{eqHJ}, then
there exists a constant $K>0$, depending on~$\|u\|_{L^\infty(Q)}$, $\|f\|_{L^\infty(Q)}$
and~$|\lambda|$, such that
\[\|\nabla u\|_{L^\infty(Q)}\leq K.\]
Moreover, there exists a constant $C>0$,
depending only on the period of $u$ and
$\|f\|_{L^\infty(Q)}$, such that $\|u\|_{L^\infty(Q)}\leq C$.
\item
There exists a unique constant $\lambda\in\mathbb{R}$
such that~\eqref{eqHJ} has a periodic solution $u\in W^{1, \infty}(\mathbb{R}^N)$ and
\begin{equation}\label{defbwb}
\lambda= \sup \{c\in\mathbb{R}\; {\mbox{ s.t. }}\; \exists u\in W^{1, \infty}(\mathbb{R}^N)
{\mbox{ s.t. }} (-\Delta)^s u+ H(\nabla u)+ c\leq f(x)\}.\end{equation}
Moreover, $u$ is the unique Lipschitz viscosity solution to \eqref{eqHJ}
up to addition of constants.
\end{enumerate}
Finally, if $f\in C^{\theta}(\mathbb{R}^N)$, for some $\theta\in (0,1]$,
then~$u\in C^{2s+\alpha}(Q)\cap H^{2s}_p(Q)$,
for every $\alpha<\theta$ and every~$p>1$.
\end{theorem}
\begin{proof}
The a-priori estimate on the gradient is proved in~\cite[Theorem 2]{bcci1}.
The a-priori estimate on the $L^\infty$ norm of $u$ can be obtained as in \cite[Proposition~1]{bcci1} or \cite[Lemma~4.2]{blt}.
The existence of $\lambda$ and of a
unique (up to constants) viscosity solution to~\eqref{eqHJ}
is given in~\cite[Theorem~1]{bcci}.
Formula~\eqref{defbwb} can be proved by a standard argument,
using the Strong Maximum Principle, which holds for operators
as~$(-\Delta)^s +b\cdot \nabla$, with~$s>\frac{1}{2}$
and~$b$ continuous (see~\cite{cio}).
Finally, if $f$ is H\"older continuous, applying~\cite{cs},
we get that $u\in C^{1+\alpha}(Q)$ for any $\alpha<2s-1$, and finally,
by the bootstrap argument in~\cite[Theorem~6]{bfv},
we obtain the desired regularity.
Also, since $(-\Delta)^s u\in L^\infty(Q)$,
then by~\cite[Theorem 2.1]{kim} it follows
that~$u\in H^{2s}_p(Q)$ for every $p>1$.
\end{proof}
\begin{theorem}\label{ergodic2}
Let $\gamma>1$ and assume that $f\in W^{1, \infty}(\mathbb{R}^N) $.
Then the following statements hold.
\begin{enumerate}
\item If $u$ is a continuous solution to \eqref{eqHJ},
then there exists a constant $K>0$, depending
on~$\|f\|_{L^\infty(Q)}$, $\|\nabla f\|_{L^\infty(Q;\mathbb{R}^N)}$
and~$|\lambda |$, such that
\[\|\nabla u\|_{L^\infty(Q)}\leq K.\]
\item
There exists a unique constant $\lambda\in\mathbb{R}$
such that~\eqref{eqHJ} has a periodic solution $u\in W^{1, \infty}(\mathbb{R}^N)$ and
\[\lambda= \sup \{c\in\mathbb{R}\; {\mbox{ s.t. }}\; \exists u\in W^{1, \infty}(\mathbb{R}^N)
{\mbox{ s.t. }} (-\Delta)^s u+ H(\nabla u)+ c\leq f(x)\}.\]
Moreover $u$ is the unique Lipschitz viscosity solution to \eqref{eqHJ}
up to addition of constants.
\end{enumerate}
Finally, $u\in C^{2s+\alpha}(Q)\cap H^{2s}_p(Q)$, for every~$\alpha<1$
and every~$p>1$.
\end{theorem}
\begin{proof} The a-priori estimate on the gradient is proved in~\cite[Theorem 3.1]{blt}.
The existence of $\lambda$ and of a
unique (up to constants) viscosity solution to~\eqref{eqHJ}
is given in~\cite[Proposition 4.1]{blt}.
For the rest, we proceed analogously to the proof of Theorem \ref{ergodic}.
\end{proof}
\begin{remark}[Case $s\leq \frac{1}{2}$] \upshape
We note that Theorem~\ref{ergodic2} holds for
every~$s\in (0,1)$.
As for Theorem~\ref{ergodic},
it can be proved that,
if $\gamma<2s<1$, solutions to~\eqref{eqHJ}
are actually H\"older continuous, with
H\"older exponent striclty less
than~$\frac{2s-\gamma}{1-\gamma}$
(see Remark~1 in~\cite{bcci}).
In this case, the uniqueness of the ergodic
constant~$\lambda$ remains true,
but it is not clear anymore that the solutions of the ergodic problem are unique up to an additive constant.
\end{remark}
\section{Regularizing coupling}\label{sectionreg}
The aim of this section is to prove existence of solutions for~\eqref{mfg2}
in the case~$(1)$ of Theorem~\ref{TH:MAIN}.
For this, we consider the system
\begin{equation}\label{mfg_reg}\begin{cases}
(- \Delta)^s u+ H(\nabla u)+\lambda = f[m](x),\\
(-\Delta )^s m-\mathrm{div}(m \nabla H(\nabla u) )=0, \\ \int_{Q} m\, dx=1, \end{cases}
\end{equation}
where $f : C^{\alpha}(Q) \to W^{1,\infty}(Q)$, with~$\alpha < 2s-1$,
is a regularizing functional. Let
\begin{equation}\label{Xdef}
X := \left\{ m \in C^{\alpha}(Q) : m \ge 0, \, \int_{Q} m\, dx=1\right\}.
\end{equation}
We suppose that
\begin{equation}\label{assFnonlocal}
\text{$f$ maps continuously $X$ into a bounded set of $W^{1,\infty}(Q)$}.
\end{equation}
A typical example of $f$ satisfying \eqref{assFnonlocal} is~$f[m](x) := g(x,K \star m (x))$, where $K : \mathbb{R}^N \to \mathbb{R}$ is a Lipschitz kernel and $g:\mathbb{R}^N\times \mathbb{R}\to\mathbb{R}$ is a Lipschitz function, which is $\mathbb{Z}^N$-periodic in $x$.
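To illustrate assumption \eqref{assFnonlocal} on the typical example above, one can check numerically that $f[m]=g(\cdot,K\star m)$ stays in a bounded set of $W^{1,\infty}(Q)$ even for very rough densities $m$. All the concrete choices below ($K$, $g$, the piecewise-constant random density) are ours and serve only as a sketch, in dimension one with the $2\pi$-periodic convention.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = 2 * np.pi * np.arange(n) / n
dx = 2 * np.pi / n

# A rough (discontinuous) probability density with mean 1
m = 1 + 0.9 * np.sign(rng.standard_normal(n))
m /= m.mean()

K = 1 + np.cos(x)                        # Lipschitz kernel
g = lambda x, t: np.sin(x) + np.tanh(t)  # Lipschitz coupling

conv = dx * np.fft.ifft(np.fft.fft(K) * np.fft.fft(m)).real  # K * m
f_m = g(x, conv)

# f[m] is Lipschitz, with a constant independent of the roughness of m:
# |(f[m])'| <= Lip(g) * (1 + ||K'||_inf * ||m||_{L^1}).
grad = (np.roll(f_m, -1) - f_m) / dx
lip_K = np.max(np.abs(np.gradient(K, dx)))
assert np.max(np.abs(grad)) <= 1 + lip_K * m.sum() * dx + 0.1
```

The point, as in the text, is that the convolution transfers all derivatives onto the smooth kernel $K$, so the Lipschitz bound on $f[m]$ only sees the $L^1$ norm of $m$.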
\begin{theorem}\label{solmfgreg}
Assume that \eqref{assFnonlocal} holds. Then, there exists a classical solution $(u, \lambda, m)$
to the mean field game system \eqref{mfg_reg}.
\end{theorem}
\begin{proof} The statement follows by
the Schauder Fixed Point Theorem in~$X$
(we will follow the lines of~\cite[Section~3]{c14}).
More precisely, we construct a compact map~$T: X\to X$,
with~$T(m)=\mu$, as follows.
For any $m \in X$, we consider the problem
\begin{equation}\label{Fdef}(- \Delta)^s u+ H(\nabla u)+\lambda =f[m](x).\end{equation}
By Theorem~\ref{ergodic2}, since $f[m]$ is a Lipschitz function,
we get that there exists a unique solution~$(u, \lambda)\in
W^{1,\infty}(Q)\times \mathbb{R}$.
This implies, in particular, that $(-\Delta)^s u\in L^\infty(Q)$,
so $u\in H^{2s}_p(Q)$ for all $p>1$, thanks to~\cite{kim}.
Hence, by a bootstrap argument we get that $u\in C^{2s+\theta}(Q)$,
for all $\theta<1$.
Now, we observe that $\|\nabla H(\nabla u)\|_{L^\infty(Q)}\leq C$,
with a constant~$C>0$ independent of $m$, in virtue of Theorem~\ref{ergodic2}.
Let $\mu $ be the solution to
\begin{equation}\label{eggregre}
\begin{cases} (-\Delta )^s \mu-\mathrm{div}(\mu \nabla H(\nabla u) )=0, \\ \int_{Q} \mu\,
dx=1.\end{cases}
\end{equation}
By Proposition~\ref{lemmaunoemezzo}, there exists a unique solution $\mu$
to~\eqref{eggregre}, and $\mu\in H^{2s-1}_p(Q)$ for all $p>1$,
with $$\|\mu\|_{H^{2s-1}_p(Q)}\leq C\|\nabla H(\nabla u)\|_{L^\infty(Q)}.$$
Now, by Sobolev embedding, $\|\mu\|_{C^{\beta}(Q)}$ is bounded, for some $\alpha < \beta < 2s-1$.
So, $T: m \mapsto \mu$ is a compact mapping of $X$ into itself.
Therefore, we only need to show that~$T$ is also continuous
to conclude the existence of a fixed point,
that in turn provides the existence of a solution to~\eqref{mfg_reg}.
This follows by stability of the equation in~\eqref{Fdef}.
Indeed, for a given $f[m]$, the couple $(u,\lambda)$ solving~\eqref{Fdef} is unique, if we impose for example $u(0) = 0$
(see Theorem~\ref{ergodic2}).
\end{proof}
\section{Local bounded coupling}
\label{sectionbounded}
Here we prove Theorem~\ref{TH:MAIN} under the assumptions
of case~$(2)$. For this, we now specify the setting in which we work.
We assume that $f:\mathbb{R}^N\times [0, +\infty)\to\mathbb{R} $
is a continuous function, $\mathbb{Z}^N$-periodic
in $x$, that is $f(x+z, m)= f(x,m)$ for all $z\in \mathbb{Z}^N$, all~$x\in\mathbb{R}^N$
and all~$m\in [0, +\infty) $.
Moreover, we assume that there exists $K>0$ such that
\begin{equation}\label{assFlocal1} |f(x,m)|\leq K \quad
\forall m\geq 0.
\end{equation}
We also suppose that
\begin{equation}\label{assFlocal1BIS}
1<\gamma\leq 2s.\end{equation}
In this framework,
we get the following existence result, based on a
regularization argument and on the existence result given
in Theorem~\ref{solmfgreg}.
\begin{theorem}\label{solmfgbdd}
Under assumptions~\eqref{assFlocal1} and~\eqref{assFlocal1BIS},
there exists a classical solution $(u,\lambda, m)$
to the mean field game system~\eqref{mfg2}.
\end{theorem}
\begin{proof}
We consider the following regularization of the system \eqref{mfg2}:
\begin{equation}\label{mfgeps}\begin{cases}
(- \Delta)^s u + H(\nabla u )+\lambda =f_\varepsilon[m](x),\\
(-\Delta )^s m -\mathrm{div}(m \nabla H(\nabla u ) )=0, \\ \int_{Q} m \, dx=1, \end{cases}
\end{equation}
where
\[f_\varepsilon[m](x)= f(x, m \star\chi_\varepsilon)\star \chi_\varepsilon (x)
=\int_Q \chi_\varepsilon(x-y) f\left (y, \int_Q m (z)
\chi_\varepsilon(y-z)dz\right)dy\]
and~$\chi_\varepsilon$, for~$\varepsilon>0$,
is a sequence of standard mollifiers.
Note that $f_\varepsilon$ satisfies assumption~\eqref{assFnonlocal}
(see e.g.~\cite[Example 5]{c14}), and
therefore, for every $\varepsilon>0$,
there exists a classical solution~$(u_\varepsilon, \lambda_\varepsilon, m_\varepsilon)$
to~\eqref{mfgeps}, thanks to Theorem~\ref{solmfgreg}.
Now, let $\overline x_\varepsilon$ and $\underline x_\varepsilon$ be
such that $u_\varepsilon(\overline x_\varepsilon)=\max u_\varepsilon$
and~$u_\varepsilon(\underline x_\varepsilon)=\min u_\varepsilon$.
Evaluating the Hamilton-Jacobi equations at these points,
we get
$$f_\varepsilon[m_\varepsilon](\underline x_\varepsilon)-C_H^{-1}\leq \lambda_\varepsilon
\leq f_\varepsilon[m_\varepsilon](\overline x_\varepsilon)+C_H^{-1},$$
where~$C_H$ is the constant given in~\eqref{Hass}.
This and~\eqref{assFlocal1} imply that~$|\lambda_\varepsilon|\leq \tilde C$,
for some~$\tilde C>0$.
So, up to passing to a subsequence,
we can assume that~$\lambda_\varepsilon\to \lambda$, as~$\varepsilon\to 0$.
Again by assumption \eqref{assFlocal1},
using Theorem \ref{ergodic}, we get that there
exists~$C>0$, independent of $\varepsilon$, such that
\begin{equation}\label{ehwfhfv}
\|\nabla u_\varepsilon\|_{L^\infty(Q)}\leq C.\end{equation}
Hence, since $u_\varepsilon$ solves~\eqref{mfgeps},
we get that~$(-\Delta)^s u_\varepsilon$ is uniformly bounded
in~$L^\infty(Q)$, and so $u_\varepsilon\in H^{2s}_p(Q)$,
for every~$p>1$, and~$\|u_\varepsilon\|_{H^{2s}_p(Q)}\leq C$,
with $C>0$ independent of $\varepsilon$.
Therefore, by Sobolev embedding, we obtain
also that the sequence~$u_\varepsilon$ is equibounded
in~$C^{1+\alpha}(Q)$, for some~$\alpha\in (0,1)$.
Moreover, the estimate in~\eqref{ehwfhfv} and~\eqref{Hass}
imply that
$$\|\nabla H(\nabla u_\varepsilon)\|_{L^\infty(Q)}\leq C,$$
for some constant $C>0$. Then, we are in a position to apply
Proposition~\ref{lemmaunoemezzo}, and conclude that,
for all~$p>1$ and~$\alpha\in (0, 2s-1)$,
there exist constants $C$, $C'>0$, depending on~$K$
and on~$p$ and~$\alpha$, respectively, such that
\begin{equation}\label{stimeun2}
\|m_\varepsilon\|_{H^{2s-1}_p(Q)}\leq C\qquad {\mbox{ and }}\qquad
\|m_\varepsilon\|_{C^{\alpha}(Q)}\leq C'.
\end{equation}
This implies that, up to subsequences,
$m_\varepsilon\to m$ in $H^{2s-1}_p(Q)$, as~$\varepsilon\to 0$,
for all~$p>1$ (and also uniformly in $C^{\alpha}(Q)$ for
every~$\alpha<2s-1$, thanks to Sobolev embeddings).
Therefore~$f_\varepsilon[m_\varepsilon](x)$ is equibounded
in~$C^{\alpha}(Q)$, for every~$\alpha<2s-1$.
Since $u_\varepsilon$ solves~\eqref{mfgeps},
this, in turn, gives that~$u_\varepsilon$ are equibounded
in~$C^{2s+\alpha}(Q)$, for some~$\alpha\in(0,1)$.
Therefore, by the Ascoli-Arzel\`a Theorem,
we can extract a converging subsequence~$u_\varepsilon\to u$
in~$C^{2s}(Q)$, as~$\varepsilon\to0$.
Note that the convergences obtained are sufficiently strong to
pass to the limit in the equations, and so we conclude
that~$(u, \lambda, m)$ is
a classical solution to \eqref{mfg2}.
This completes the proof of Theorem~\ref{solmfgbdd}.
\end{proof}
\section{Local unbounded coupling} \label{sectionunbounded}
As in Section~\ref{sectionbounded},
we consider the case in which the coupling~$f$ is local,
that is, we suppose that~$f:\mathbb{R}^N\times [0, +\infty)\to\mathbb{R} $
is locally Lipschitz continuous in both variables,
and~$\mathbb{Z}^N$-periodic
in $x$, namely $f(x+z, m)= f(x,m)$ for all $z\in \mathbb{Z}^N$,
all~$x\in\mathbb{R}^N$
and all~$m\in [0, +\infty) $.
In contrast to Section~\ref{sectionbounded}, here~$f$
can be unbounded, both from above and from below.
Nevertheless, we have to restrict the growth
exponent~$\gamma$ of the Hamiltonian~$H$.
In particular, we assume
that there exist $C>0$ and $K>0$ such that
\begin{equation}\begin{split} \label{assFlocal}
&-Cm^{q-1}-K\leq f(x, m) \leq C m^{q-1}+K, \quad
\text{ with } 1<q<1+\frac{(2s-1)}{N}\frac{\gamma}{\gamma-1}, \\
{\mbox{and }}& \left\{\begin{matrix}
1<\gamma< \frac{N}{N-2s+1} &\text{ if $N>1$},\\
1<\gamma\leq 2s &\text{ if $N=1$}. \end{matrix}\right.
\end{split}\end{equation}
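For illustration (this specific choice is ours and is not part of the assumptions), one admissible set of parameters in~\eqref{assFlocal} can be checked as follows:

```latex
% Illustrative check of assumption \eqref{assFlocal}: take N=2 and s=9/10,
% so that 2s-1=4/5 and
\[
\frac{N}{N-2s+1}=\frac{2}{2-\frac95+1}=\frac{2}{6/5}=\frac{5}{3},
\]
% hence gamma=3/2 is admissible; then gamma'=gamma/(gamma-1)=3 and
\[
1+\frac{2s-1}{N}\,\frac{\gamma}{\gamma-1}
=1+\frac{4/5}{2}\cdot 3=\frac{11}{5},
\]
% so any coupling with -Cm^{q-1}-K <= f(x,m) <= Cm^{q-1}+K and
% 1<q<11/5, for instance f(x,m)=m (that is, q=2), fits the assumption.
```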
Note that if $N>1$ then~$ \frac{N}{N-2s+1}<2s$,
since~$N\geq 2>2s$.
We also remark that the bound on~$\gamma$ in the case~$N>1$
is just a technical assumption, which could be removed if the a priori bounds on the gradients
of solutions to fractional Hamilton-Jacobi equations with coercive Hamiltonian stated in Theorem \ref{ergodic} could be shown to be independent of
the $L^\infty$ norm of the solutions.
To provide existence of a solution to~\eqref{mfg2} in this setting,
we follow the variational approach, see~\cites{ll, cgpt, c16, bc}.
More precisely, we associate to the mean field game system
an energy whose minimizers will be used to construct solutions to~\eqref{mfg2}.
\subsection{The energy associated to the system}
We denote by~$\tilde L$ the Legendre transform of $H$, i.e.
\[
\tilde L(\mathcal{q}) := \sup_{\mathcal{p} \in \mathbb{R}^N} [\mathcal{p} \cdot \mathcal{q} - H(\mathcal{p})], \qquad{\mbox{ for any }} \mathcal{q}
\in \mathbb{R}^N.
\]
Observe that, by~\eqref{Hass}, there exists $C_L>0$ such that
\begin{equation}\label{Lass}
C_L |\mathcal{q}|^{\gamma'} - C_L^{-1} \le \tilde L(\mathcal{q}) \le C_L^{-1} (|\mathcal{q}|^{\gamma'} + 1)\,,
\quad {\mbox{ for any }} \mathcal{q} \in \mathbb{R}^N,
\end{equation}
where $\gamma'=\frac{\gamma}{\gamma-1}$ is the conjugate exponent of $\gamma$.
Note that, by assumption \eqref{assFlocal},
\begin{equation}\label{assgamma} \gamma'>\frac{N}{2s-1}.\end{equation}
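The implication leading to~\eqref{assgamma} can be checked directly:

```latex
% If N>1, assumption \eqref{assFlocal} gives
\[
\gamma-1<\frac{N}{N-2s+1}-1=\frac{2s-1}{N-2s+1},
\qquad{\mbox{hence}}\qquad
\gamma'=1+\frac{1}{\gamma-1}>1+\frac{N-2s+1}{2s-1}=\frac{N}{2s-1}.
\]
% If N=1, then gamma<=2s, so gamma-1<=2s-1 and, since gamma>1,
\[
\gamma'=\frac{\gamma}{\gamma-1}>\frac{1}{\gamma-1}\geq\frac{1}{2s-1}=\frac{N}{2s-1}.
\]
```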
We let \begin{equation}\begin{split}
\label{kcalconstraint}
\mathcal{K}: =\,&\left\{ (m,w) \in L^1(Q)\cap W^{1,\gamma'} (Q)
\times L^{\gamma'}(Q)\; {\mbox{ s.t.}} \right. \\ &\quad
\int_{Q} m (-\Delta)^s \varphi \, dx = \int_{Q} w \cdot \nabla \varphi \, dx
\quad \forall \varphi \in C_0^\infty(Q), \\ &\quad \left.
\int_{Q} m \, dx = 1, \quad \text{$m \ge 0$ a.e.} \right\}.
\end{split}\end{equation}
We associate to the mean field game \eqref{mfg2} the following energy \begin{equation}\label{energia}
\mathcal{E}(m, w) := \begin{cases}
\displaystyle \int_{Q} m L\left(-\frac{w}{m}\right)+F(x,m) \, dx & \text{ if $(m,w) \in \mathcal{K}$}, \\
+\infty & \text{otherwise},
\end{cases}
\end{equation}
where
\begin{equation}\label{dati}\begin{split}&
L\left(-\frac{w}{m}\right):=\begin{cases} \tilde L\left(-\frac{w}{m}\right) & {\mbox{ if }}m>0,\\
0 & {\mbox{ if }}m=0, w=0,\\ +\infty & \text{otherwise} \end{cases} \\ {\mbox{and}}
\qquad&
F(x,m):=\begin{cases} \displaystyle
\int_0^m f(x,n)\,dn& {\mbox{ if }}m\geq 0,\\ +\infty& {\mbox{ if }}
m<0.\end{cases}\end{split}\end{equation}
Note that, since $\tilde L$ is the Legendre transform of $H$, we have that, for all $m\geq 0$,
\begin{equation}\label{leg}
mH(p)=\sup_{w} \left[-p\cdot w-m L\left(-\frac{w}{m}\right)\right].
\end{equation}
Moreover, recalling \eqref{Lass}, we get that \begin{equation}\label{ell} C_L \frac{|w|^{\gamma'}}{m^{\gamma'-1}} -C_L^{-1} m
\leq m L\left(-\frac{w}{m}\right)\leq C_L^{-1} \frac{|w|^{\gamma'}}{m^{\gamma'-1}} +C_L^{-1} m. \end{equation}
Now, we provide a-priori estimates for couples $(m,w)\in\mathcal{K}$ with finite energy.
\begin{proposition} \label{pstime}
Assume that $(m,w)\in \mathcal{K}$ is such that
there exists $K>0$ with
\[E:=\int_{Q} \frac{|w|^{\gamma'}}{m^{\gamma'-1}} dx\leq K.\]
Then, there exist $\delta>0$ and $C>0$ such that
\begin{equation}\label{rstima}
\|m\|_{L^q(Q)}^{q(1+\delta)} \leq C\int_{Q} \frac{|w|^{\gamma'}}{m^{\gamma'-1}} dx\leq CK
\end{equation} where $q$ is as in \eqref{assFlocal}.
Moreover, for every~$\alpha\in \left(0, 2s-1-
\frac{N}{\gamma'}\right)$, there exists a constant $C>0$,
depending on $\alpha$, such that
\begin{equation}\label{mstima}
\|m\|_{C^{\alpha}(Q)} \leq C\int_{Q} \frac{|w|^{\gamma'}}{m^{\gamma'-1}} dx\leq CK.
\end{equation}
\end{proposition}
\begin{proof}
Note that, since $m\in W^{1,\gamma'}(Q)$ and $\gamma'$
satisfies~\eqref{assgamma}, we have that $m\in L^p(Q)$,
for every~$p\in (1, +\infty]$. Moreover, by Sobolev embeddings,
we have that~$m\in C^{\alpha}(Q)$, for
every~$\alpha\in \left(0, 2s-1-\frac{N}{\gamma'}\right)$.
Now, let $p>1$ and define $r_p$ as follows
\begin{equation}\label{r} \frac{1}{r_p}=\frac{1}{\gamma'}+\left(1-\frac{1}{\gamma'}\right)\frac{1}{p}.\end{equation}
In this way, we see that~$r_p< \min\{p, \gamma'\}$.
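This observation follows by writing~\eqref{r} as a weighted average: since $\frac{1}{\gamma}+\frac{1}{\gamma'}=1$ and $p>1$,

```latex
\[
\frac{1}{r_p}=\frac{1}{\gamma'}\cdot 1+\frac{1}{\gamma}\cdot\frac{1}{p}
>\frac{1}{\gamma'}
\qquad{\mbox{and}}\qquad
\frac{1}{r_p}>\frac{1}{\gamma'}\cdot \frac{1}{p}+\frac{1}{\gamma}\cdot\frac{1}{p}=\frac{1}{p},
\]
% which gives r_p < gamma' and r_p < p, that is, r_p < min{p, gamma'}.
```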
By~\eqref{kcalconstraint}, we have that
\begin{eqnarray*}
&&\int_Q m(-\Delta)^s \phi\, dx= \int_Q w\cdot \nabla \phi\, dx \leq
\int_Q \left(\frac{|w|^{\gamma'}}{m^{\gamma'-1}}\right)^{\frac{1}{\gamma'}}
m^{\frac{1}{\gamma}}|\nabla \phi|\,dx \\
&&\qquad \leq
\left(\int_Q \frac{|w|^{\gamma'}}{m^{\gamma'-1}} \,dx \right)^{\frac{1}{\gamma}}
\|m\|_{L^p(Q)}^{\frac{1}{\gamma}}\|\nabla \phi\|_{L^{r'_p}(Q)} \leq E^{\frac{1}{\gamma'}} \|m\|_{L^p(Q)}^{\frac{1}{\gamma}}\|\nabla \phi\|_{L^{r'_p}(Q)},
\end{eqnarray*}
for any~$\phi\in C^\infty_0(Q)$.
Here above we used the notation~$r_p'=\frac{r_p}{r_p-1}$.
Therefore, by Lemma \ref{lemmadue} we get that
\begin{equation}\label{fbbbrb}
\|(-\Delta)^{s-\frac{1}{2}}m\|_{L^{r_p}(Q)}\le C
E^{\frac{1}{\gamma'}} \|m\|_{L^p(Q)}^{\frac{1}{\gamma}},
\end{equation}
for some~$C>0$.
Moreover, by interpolation, we get that
\begin{equation}\label{fbbbrb2}
\|m\|_{L^{r_p}(Q)} \leq \|m\|_{L^p(Q)}^{\frac{1}{\gamma}}
\|m\|_{L^1(Q)}^{\frac{1}{\gamma'}}=
\|m\|_{L^p(Q)}^{\frac{1}{\gamma}}.\end{equation}
{F}rom~\eqref{fbbbrb} and~\eqref{fbbbrb2}, we conclude that
\begin{equation}\label{starstar}
\|m\|_{H^{2s-1}_{r_p}(Q)}\le C (E^{\frac{1}{\gamma'}} +1)\|m\|_{L^p(Q)}^{\frac{1}{\gamma}}.
\end{equation}
Now, we prove~\eqref{rstima}. For this, let $r=r_q$,
that is, in \eqref{r} we choose $p=q$, where $q$ is as in~\eqref{assFlocal}.
Let $r^\star$ be such that
\begin{eqnarray*}
&& \frac{1}{r^\star}=\frac{1}{r}-\frac{2s-1}{N} \qquad {\mbox{if }}
r< \frac{N}{2s-1},\\
{\mbox{and }} && r^\star=+\infty \qquad {\mbox{if }}
r\geq \frac{N}{2s-1}.\end{eqnarray*}
Notice that by \eqref{assFlocal}, it is easy to see that
\begin{equation}\label{qrstar}
q< r^\star.\end{equation}
Therefore, by Sobolev embedding, there exists~$C>0$
such that
$$\|m\|_{H^{2s-1}_r(Q)}\geq C\|m\|_{L^q(Q)},$$
and so, substituting in \eqref{starstar} we get that
\[\|m\|_{L^q(Q)} \le C (E +1)\qquad\text{ and }\qquad
\|m\|_{H^{2s-1}_{r}(Q)}\leq C(E+1).\]
Note that, in virtue of~\eqref{qrstar},
by interpolation and using~\eqref{starstar}, we get
\[\|m\|_{L^q(Q)}\leq\|m\|_{L^1(Q)}^{1-\theta}\|m\|_{L^{r^\star}(Q)}^\theta\leq\|m\|_{L^1(Q)}^{1-\theta}\|m\|_{H^{2s-1}_r(Q)}^\theta\leq C(1+E^{\frac{\theta}{\gamma'}})\|m\|_{L^q(Q)}^{\frac{\theta}{\gamma}}, \]
where $\theta$ is such that
\[\frac{1}{q}=1-\theta+\frac{\theta}{r^\star}.\]
It is easy to check that
\begin{equation}\label{teta} \frac{1}{\theta} = 1-\frac{1}{\gamma'} +\frac{2s-1}{N} \frac{q}{q-1}.\end{equation}
We then obtain that
\begin{equation}\label{jgerjgerbj}
\|m\|_{L^q(Q)}^{\left(1-\frac{\theta}{\gamma}\right)
\frac{\gamma'}{\theta}} \leq C(1+E). \end{equation}
Using \eqref{teta} we check that
\[ \left(1-\frac{\theta}{\gamma}\right)\frac{\gamma'}{\theta}=\gamma' \frac{2s-1}{N}\frac{q}{q-1}= (1+\delta)q,\]
where
\[\delta=\frac{1}{q-1}\left( \frac{\gamma'
(2s-1)+N}{N}-q\right)>0\]
thanks to~\eqref{assFlocal}. This and~\eqref{jgerjgerbj}
imply~\eqref{rstima}, as desired.
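For the reader's convenience, the computations behind~\eqref{teta} and the exponent identity above run as follows (in the case~$r<\frac{N}{2s-1}$):

```latex
% From 1/q = 1-\theta+\theta/r^\star we get \theta(1-1/r^\star)=1-1/q, so
\[
\frac{1}{\theta}=\frac{q}{q-1}\left(1-\frac{1}{r^\star}\right)
=\frac{q}{q-1}\left(1-\frac{1}{\gamma'}-\frac{1}{\gamma q}+\frac{2s-1}{N}\right)
=\frac{1}{\gamma}+\frac{2s-1}{N}\,\frac{q}{q-1},
\]
% where we used 1/r^\star = 1/r-(2s-1)/N, 1/r = 1/\gamma'+1/(\gamma q)
% (this is \eqref{r} with p=q) and 1-1/\gamma'=1/\gamma. Consequently,
\[
\left(1-\frac{\theta}{\gamma}\right)\frac{\gamma'}{\theta}
=\gamma'\left(\frac{1}{\theta}-\frac{1}{\gamma}\right)
=\gamma'\,\frac{2s-1}{N}\,\frac{q}{q-1},
\]
% and the condition delta>0 is precisely the upper bound on q
% in assumption \eqref{assFlocal}.
```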
\smallskip
We prove now \eqref{mstima}.
In virtue of~\eqref{assgamma},
we can choose~$p$ in~\eqref{r} sufficiently large such
that
$$\frac{N}{2s-1}<r_p<\gamma'.$$
So, from~\eqref{starstar}, using Sobolev embeddings and
reasoning as above, we obtain that
\begin{equation}\label{okpuygm}
\|m\|_{L^p(Q)} \le C (E +1)\qquad\text{ and }\qquad
\|m\|_{H^{2s-1}_{r_p}(Q)}\leq C(E+1).\end{equation}
{F}rom the second estimate in~\eqref{okpuygm} and
Sobolev embeddings, we obtain~\eqref{mstima}.
This completes the proof of Proposition~\ref{pstime}.
\end{proof}
Using the previous estimates in Proposition~\ref{pstime},
we deduce the existence of a minimizer of the energy in
the class~$\mathcal{K}$ introduced in~\eqref{kcalconstraint}.
\begin{theorem} \label{exthm}
There exists $(m,w)\in \mathcal{K}$ such that
$$ \mathcal{E}(m,w)=\min_{(m,w)\in \mathcal{K}}\mathcal{E}\,.$$
\end{theorem}
\begin{proof}
First of all observe that, by Proposition \ref{pstime}
and~\eqref{ell}, there exists $C>0$ such that, for every $(m,w)\in \mathcal{K}$,
\[\mathcal{E}(m,w)\geq C\|m\|_{L^q(Q)}^{(1+\delta)q} -C+\int_Q F(x,m)dx\,.\]
{F}rom this, recalling assumption \eqref{assFlocal} and the definition of $F$ in \eqref{dati},
we conclude that there exists a constant $K$, depending on $q$, such that
\[\mathcal{E}(m,w)\geq C\|m\|_{L^q(Q)}^{(1+\delta)q} -C' \|m\|_{L^q(Q)}^{q}-C'\geq K.\]
Let $e:=\inf_{(m,w)\in\mathcal{K}} \mathcal{E}(m,w)$.
We fix a minimizing sequence $(m_n,w_n)$.
Therefore $\mathcal{E}(m_n,w_n)\leq e+1$, for every $n$ sufficiently large.
Note that by our definition of $L$ this implies that~$w_n=0$
where~$m_n=0$.
Therefore, again by assumption \eqref{assFlocal}, \eqref{ell} and Proposition \ref{pstime}, we get
\begin{eqnarray*}
\int_Q \frac{|w_n|^{\gamma'}}{m_n^{\gamma'-1}}
\,dx&\leq & C_L^{-1}\left(e+1-\int_Q F(x,m_n)\,dx\right)\\&\leq &
C_L^{-1}\left(e+1+C+C\|m_n\|^q_{L^q(Q)}\right)
\\ &\leq &C_L^{-1}\left(e+1+C'+ K \left(\int_Q
\frac{|w_n|^{\gamma'}}{m_n^{\gamma'-1}} \,dx+1
\right)^{\frac{1}{1+\delta}}\right).\end{eqnarray*}
This implies in particular that~$
\left(\int_Q \frac{|w_n|^{\gamma'}}{m_n^{\gamma'-1}}
\,dx\right)$ is equibounded in $n$.
By \eqref{mstima}, this implies that~$\|m_n\|_{C^{\alpha}(Q)}
\leq C$, for some $\alpha\in(0,1)$.
Therefore, up to a subsequence,
\[m_n\to m\quad \text{ uniformly in $Q$, as $n\to+\infty$.}
\]
Therefore, we have that~$m_n\to m$ in $L^1(Q)$
and $\int_Q mdx=1$.
In particular, we see that~$0\leq m_n\leq C$, for every $n$,
and then
\[\int_Q |w_n|^{\gamma'}dx\leq C^{\gamma'-1} \int_Q \frac{|w_n|^{\gamma'}}{m_n^{\gamma'-1}} \,dx. \]
This implies that $w_n$ is equibounded in $L^{\gamma'}(Q)$
and so, up to a subsequence,
\[w_n\to w\quad \text{weakly in $L^{\gamma'}(Q)$, as $n\to+\infty$}.\]
Note that the convergences are strong enough to assure
that~$( m, w)\in\mathcal{K}$.
Then, the desired result follows from
the lower semicontinuity of the kinetic part of the functional
and by the strong convergence in $L^q(Q)$ of $m_n$.
\end{proof}
\subsection{Existence of solutions to the mean field game system}
In order to construct a solution to the mean field game \eqref{mfg2}, we associate
to the energy in~\eqref{energia} a dual problem, using standard arguments in convex analysis.
First of all, following \cite{bc}, we pass to a convex problem.
Given a minimizer $(\bar m, \bar w)$ as obtained in Theorem \ref{exthm},
we introduce the following functional
\begin{equation}\label{convex}
J(m,w):= \int_{Q} m L\left(-\frac{w}{m}\right)+f(x,\bar m) m \,dx.
\end{equation}
We claim that for $(m,w)\in\mathcal{K}$ we have that
\[\int_{Q} m L\left(-\frac{w}{m}\right)dx-\int_{Q} \bar m L\left(-\frac{\bar w}{\bar m}\right)dx\geq -\int_Q f(x,\bar m) (m-\bar m)\, dx.\]
This can be proved as in \cite[Proposition 3.1]{bc}, using the convexity of $L$ and the regularity of $F$.
The idea is to consider, for every $\lambda\in (0,1)$,
$m_\lambda:=\lambda m +(1-\lambda)\bar m$ and the same definition
for $w_\lambda$, and to observe that by minimality
\[\int_{Q} m_\lambda L\left(-\frac{w_\lambda}{m_\lambda}\right)dx-\int_{Q}
\bar m L\left(-\frac{\bar w}{\bar m}\right)dx\geq -\int_Q \left[F(x, m_\lambda)-F(x,\bar m)\right] dx\,.\]
Then, using the
convexity to estimate the left hand side and the
regularity of $F$ on the right hand side, and finally sending $\lambda\to 0$,
we get that \[\min_{(m,w)\in\mathcal{K}} J(m,w)=J(\bar m, \bar w).\]
\smallskip
Now we complete the proof of Theorem~\ref{TH:MAIN}, by showing
the last point~$(3)$.
\begin{theorem}\label{solmfg}
Let $(\bar m, \bar w)$ be a minimizer of $J$ as given by Theorem \ref{exthm}.
Then, $\bar m\in H^{2s-1}_p(Q)$, for all~$p>1$,
and there exist~$\lambda\in \mathbb{R}$ and~$ u \in C^{2s+\alpha}(Q)$,
such that $( u, \lambda, \bar m)$
is a classical solution to the mean field game \eqref{mfg2}.
Finally $\bar w=-\bar m \nabla H(\nabla u)$.
\end{theorem}
\begin{proof}
The functional in~\eqref{convex} is convex, so we can introduce the associated dual problem, following standard arguments in convex analysis.
Recall that $\bar m\in C^{\alpha}(Q)$,
for any $\alpha\in \left(0, 2s-1-\frac{N}{\gamma'}\right)$,
thanks to Proposition~\ref{pstime}.
Now, we consider the following functional
\[\mathcal{A}(m,w,u, c):= \int_{Q} \left[ m L\left(-\frac{w}{m}\right)+f(x,\bar m) m -m(-\Delta)^{s} u+\nabla u \cdot w-c m \right]dx +c . \]
It is easy to observe that
\begin{equation}\label{m1} J(\bar m,\bar w)=
\inf_{ \{ (m,w)\in (L^1(Q)\cap C^{\alpha}(Q))\times L^1(Q),
\ \int_Q m=1, m\geq 0\}}\sup_{(u,c)\in C^{2s }(Q)\times \mathbb{R}}
\mathcal{A}(m,w,u, c),\end{equation}
so the infimum is actually a minimum.
Note that $\mathcal{A}(\cdot, \cdot, u,c)$ is convex and
weak$*$ lower semicontinuous and $\mathcal{A}(m,w, \cdot, \cdot)$ is linear (so in particular concave).
So we can use the min-max Theorem, see~\cite{et},
and interchange minimum and supremum, that is
\begin{equation}\begin{split}\label{m2}
&\min_{\{(m,w)\in L^1\cap C^{\alpha}\times L^1,
\ \int m=1, m\geq 0\} }\sup_{(u,c)\in C^{2s}\times \mathbb{R}}
\mathcal{A}(m,w,u,c)\\
=&\,\sup_{(u,c)\in C^{2s}\times \mathbb{R}} \min_{\{(m,w)\in
L^1\cap C^{\alpha}\times L^1, \ \int m=1, m\geq 0\} }
\mathcal{A}(m,w,u, c).\end{split}\end{equation}
Finally, thanks to Rockafellar's Interchange Theorem \cite{r} between infimum and integral
(based on measurable selection arguments
and the lower semicontinuity of the functional)
we get, using the fact that $H$ is the Legendre transform of $L$, that
\begin{equation}\begin{split}\label{m3}
&\min_{\{(m,w)\in L^1\cap C^{\alpha}\times L^1, \ \int m=1, m\geq 0\} }
\mathcal{A}(m,w,u, c) \\
=\,&\int_{Q}\min_{m\geq 0,w} \left[ m L\left(-\frac{w}{m}\right)+f(x,\bar m) m -m(-\Delta)^{s} u+\nabla u \cdot w-c m \right]dx+c\\
=\,&\int_Q \min_{m\geq 0} m\left[-H(\nabla u) -(-\Delta)^{s} u+f(x,\bar m)-c\right]dx+c.
\end{split}\end{equation}
Note that \[ \min_{m\geq 0} m\left[
-(-\Delta)^{s} u-H(\nabla u)+f(x,\bar m)-c\right]=
\begin{cases} 0, &{\mbox{ if }} -(-\Delta)^{s} u -H(\nabla u) +f(x,\bar m)-c\geq 0,\\
-\infty, & {\mbox{ if }} - (-\Delta)^{s} u-H(\nabla u) +f(x,\bar m)-c< 0.\end{cases}\]
Therefore, from \eqref{m1}, \eqref{m2} and~\eqref{m3} we get that
\begin{equation}\begin{split}
\label{m4} J(\bar m,\bar w)=&\,\sup_{(u,c)
\in C^{2s}\times \mathbb{R}}\int_Q \min_{m\geq 0}
m\left[-H(\nabla u) - (-\Delta)^{s} u+f(x,\bar m)-c\right]dx+c\\
=&\,\sup\left\{c\in \mathbb{R} \ | \exists u\in C^{2s}, \text{ s.t. }
(-\Delta)^{s} u+H(\nabla u) +c \leq f(x,\bar m)\right\}.
\end{split}\end{equation}
By Theorem \ref{ergodic}, this supremum is actually a maximum:
there exist $\lambda\in\mathbb{R}$ and a periodic function~$u\in C^{2s+\alpha}(Q)\cap H^{2s}_p(Q)$,
for every $\alpha< 2s-1-\frac{N}{\gamma'}$ and every $p>1$,
which is unique up to additive constants and solves
\begin{equation}\label{hj}
(-\Delta)^{s} u+H(\nabla u) +\lambda =f(x,\bar m).\end{equation}
So, equality \eqref{m1} reads \[\lambda= J(\bar m,\bar w)=\int_{Q} \bar m \left[ L\left(-\frac{\bar w}{\bar m}\right)+f(x,\bar m)\right]dx.\]
Therefore, recalling that $\int_Q\bar m=1$ and using both \eqref{hj} and \eqref{eqm}
with test function $u$, we obtain that
\begin{eqnarray*}
0&=&\int_{Q} \bar m \left[ L\left(-\frac{\bar w}{\bar m}\right)+f(x,\bar m)-\lambda\right]dx\\
&=&
\int_{Q} \bar m \left[ L\left(-\frac{\bar w}{\bar m}\right)+(-\Delta)^{s} u+H(\nabla u)
\right]dx\\ &=&\int_{Q} \bar m \left[ L\left(-\frac{\bar w}{\bar m}\right)+ \nabla u\cdot
\frac{\bar w}{\bar m}+H(\nabla u) \right]dx. \end{eqnarray*}
Using the fact that $H$ is the Legendre transform of $L$ and \eqref{leg}, we thus conclude
that \[\frac{\bar w}{\bar m}=-\nabla H(\nabla u),\]
where $\bar m\neq 0$.
Moreover, by the definition of $L$, we get that $\bar w=0$ where $\bar m=0$.
In particular, recalling \eqref{eqm}, we find that $\bar m$ is a solution of
\[ (-\Delta)^{s}m - \mathrm{div}( m \nabla H(\nabla u))=0, \qquad{\mbox{ with }}
\quad \int_Q m=1.\]
Since $\bar m\nabla H(\nabla u)\in L^\infty(Q)$, by Lemmata~\ref{lemmauno}
and~\ref{lemmadue}, we get that $\bar m\in H^{2s-1}_p(Q)$ for every $p>1$.
This implies that $(u,\lambda,\bar m)$ is a classical solution to \eqref{mfg2}.
The proof of Theorem~\ref{solmfg} is thus complete.
\end{proof}
\section{Further properties: improved regularity and uniqueness}\label{sectionimprovement}
If we assume some more regularity on $f$ and $H$, we can obtain more regular solutions.
\begin{proposition}
Assume that $H\in C^{1+k}(\mathbb{R}^N)$, for some $k\geq 1$, and,
in the local case (under assumptions \eqref{assFlocal1} or
\eqref{assFlocal}), that~$f\in C^k(\mathbb{R}^N\times \mathbb{R})$
or, in the nonlocal case (under assumption \eqref{assFnonlocal}), that~$f$ maps~$X$ (as defined in \eqref{Xdef}) continuously
into a bounded subset of $C^{k}(Q)$.
Then, the system in~\eqref{mfg2} admits a classical solution $(u,\lambda, m)\in C^{k}(Q) \times
\mathbb{R}\times C^{k-1}(Q)$.
\end{proposition}
\begin{proof} By Theorems \ref{solmfgreg}, \ref{solmfgbdd} and \ref{solmfg}, we have a solution $(u,\lambda, m)\in
(C^{2s+\alpha}(Q)\cap H^{2s}_p(Q))\times \mathbb{R}\times H^{2s-1}_p(Q)$.
Using the regularity of $m$ and $\nabla H(\nabla u)$ and the fact that both are in $L^\infty$,
we get that $m\nabla H(\nabla u)\in H^{2s-1}_p(Q)$ for all $p>1$.
This implies by Lemma~\ref{lemmauno} that $m\in H^{4s-2}_p(Q)$ for every $p>1$.
By the regularity of $f$, if $k\geq 4s-2$, also $f(\cdot,m)\in H^{4s-2}_p(Q)$.
Therefore, using the Hamilton-Jacobi equation, we get that
$$(-\Delta)^s u\in H^{2s-1}_p(Q)$$
for every $p>1$, which gives that~$u \in H^{4s-1}_p(Q)$.
Reasoning as above we obtain that $m\nabla H(\nabla u)\in H^{4s-2}_p(Q)$ for all $p>1$,
and then by Lemma~\ref{lemmauno} we conclude that $m\in H^{6s-3}_p(Q)$ for every $p>1$.
We iterate the argument until we arrive at $m\in C^{M(2s-1)}(Q) $,
where $M:=\left[\frac{k}{2s-1}\right]$ (that is,
$M$ is the integer part of $\frac{k}{2s-1}$). In particular $m\in C^{k-1}(Q)$.
So we conclude that $u\in C^{2s+M(2s-1)}(Q)$.
\end{proof}
It is well known that, under a monotonicity condition on the function $f$,
solutions to the mean field game system are unique.
\begin{theorem}\label{un} Assume that
in the local case (under assumptions \eqref{assFlocal1} or \eqref{assFlocal}) the map~$m\mapsto f(x,m)$ is increasing for
all $x\in Q$ or in the nonlocal case (under assumption \eqref{assFnonlocal})
\[\int_Q (f[m_1](x)-f[m_2](x))(m_1-m_2) dx >0, \qquad {\mbox{for any }} m_1,m_2 \in X.\]
Then, the system in~\eqref{mfg2} admits a unique classical solution
$(u, \lambda, m)$, where $u$ is defined up to addition of constants.
\end{theorem}
\begin{proof}
The argument is standard, see \cite{ll}, and the adaptation to the fractional case is straightforward.
\end{proof}
\section{Introduction}
It is commonly agreed that even massive resources for language \& vision~\cite{imagenet,chen2015microsoft,visualgenome} will never fully cover the huge range of objects to be found ``in the wild''.
This motivates research in zero-shot learning~\cite{lampert2009,socher2013zero,anne2016deep},
which aims at predicting correct labels or names for objects of novel categories, typically via external lexical knowledge such as, e.g., word embeddings.
\begin{figure}
{\footnotesize
\setlength{\tabcolsep}{6pt}
\begin{tabular}{p{3.5cm}p{3.5cm}}
\raisebox{-\totalheight}{\includegraphics[width = 1\linewidth]{COCO_train2014_000000020362.jpg}} &
\raisebox{-\totalheight}{\includegraphics[width = 1\linewidth]{COCO_train2014_000000367788.jpg}}\\
\textbf{refexp:} right thingy & \textbf{refexp:} left blue\\
\end{tabular}
}
\vspace{-0.3cm}
\caption{RefCOCO expressions referring to difficult/unknown objects}
\label{fig:im}
\vspace{-0.3cm}
\end{figure}
More generally, however, uncertain knowledge of the world that surrounds us, including novel objects, is not only a machine learning challenge: it is simply a very common aspect of human communication, as speakers rarely have perfect representations of their environment.
Precisely the richness of verbal interaction allows us to communicate these uncertainties and to collaborate towards communicative success~\cite{clarkwilkes:ref}.
Figure~\ref{fig:im} illustrates this general point with two examples from the RefCOCO corpus~\cite{Yu2016}, providing descriptions of visual objects from an interactive reference game.
Here, the use of the unspecific \textit{thingy} and the omission of a noun in \textit{left blue} can be seen as pragmatically plausible strategies that avoid confusing the listener with potentially inaccurate names for difficult-to-name objects.
While there is a long line of classic and recent research on pragmatically informative object descriptions in reference games~\cite{mao15,yu2017joint,cohn2018, dale:1995,frank2012predicting}, conversational strategies for dealing with uncertainties like novel categories are largely understudied in computational pragmatics, though see, e.g., work by~\citet{fang2014collaborative}.
In this paper, we frame zero-shot learning as a challenge for pragmatic modeling and explore \textit{zero-shot reference games},
where a speaker needs to describe a novel-category object in an image to an addressee who may or may not know the category.
In contrast to standard reference games, this game explicitly targets a situation where relatively common words like object names are likely to be less accurate than other words, e.g.\ attributes.
We hypothesize that Bayesian reasoning in the style of Rational Speech Acts, RSA \cite{frank2012predicting}, can extend a neural generation model trained to refer to objects of known categories, towards zero-shot learning.
We implement a Bayesian decoder reasoning about categorical uncertainty and show that, solely as a result of pragmatic decoding, our model produces fewer misleading object names when it is uncertain about the category (just as the speakers did in Figure \ref{fig:im}). Furthermore, we show that this strategy often improves the reference resolution accuracy of an automatic listener.
\section{Background}
\label{sec:related}
We investigate referring expression generation (REG henceforth), where the goal is to compute an utterance $u$ that identifies a target referent $r$ among other referents $R$ in a visual scene.
Research on REG has a long tradition in natural language generation \cite{krahmer:2012}, and has recently been re-discovered in the area of Language \& Vision \cite{mao15,Yu2016,zarriess2018decoding}.
These latter models for REG essentially implement variants of a standard neural image captioning architecture \cite{vinyals:show}, combining a CNN and an LSTM to generate an utterance directly from objects marked via bounding boxes in real-world images.
Our approach combines such a neural REG model with a reasoning component that is inspired by theory-driven Bayesian pragmatics
and RSA \cite{frank2012predicting}. We will briefly sketch this approach here.
The starting point in RSA is a model of a ``literal speaker'', $S_0(u|r)$, which generates utterances $u$ for the target $r$.
The ``pragmatic listener'' $L_0$ then assigns probabilities to all referents $R$ based on the model $S_0$:
\begin{equation}
L_0(r|u) \propto \frac{S_0(u|r)*P(r)}{\sum_{r_i \in R}S_0(u|r_i)*P(r_i)}
\label{eq:l0}
\end{equation}
In turn, the ``pragmatic speaker'' $S_1$ reasons about which utterance is more discriminative and will be resolved to the target by the pragmatic listener:
\begin{equation}
S_1(u|r) \propto \frac{L_0(r|u)*P(u)}{\sum_{u_i \in U}L_0(r|u_i)*P(u_i)}
\label{eq:1}
\end{equation}
\noindent
($S_0$ and $L_0$ are components of the recursive reasoning of $S_1$ and not in fact separate agents.)
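The recursion in Equations~\eqref{eq:l0} and~\eqref{eq:1} can be made concrete with a minimal, self-contained sketch; the probability tables and referent/utterance names below are toy values of our own, not the paper's neural models, and both priors are taken uniform:

```python
# Toy sketch of the RSA recursion in Equations (1) and (2).
# S0 is an assumed literal-speaker table: S0[r][u] = P(u | referent r).

def normalize(scores):
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def pragmatic_listener(S0, u):
    # L0(r|u) ∝ S0(u|r) * P(r), Equation (1), with uniform P(r)
    return normalize({r: probs[u] for r, probs in S0.items()})

def pragmatic_speaker(S0, r):
    # S1(u|r) ∝ L0(r|u) * P(u), Equation (2), with uniform P(u)
    utterances = next(iter(S0.values())).keys()
    return normalize({u: pragmatic_listener(S0, u)[r] for u in utterances})

# Two referents, two utterances: "blue" fits both, "cup" only fits r1.
S0 = {
    "r1": {"blue": 0.5, "cup": 0.5},
    "r2": {"blue": 1.0, "cup": 0.0},
}

# The pragmatic speaker prefers the discriminative word "cup" for r1.
print(pragmatic_speaker(S0, "r1"))
```

The literal speaker is indifferent between the two words for \texttt{r1}, but the pragmatic speaker shifts mass to the word that the listener resolves unambiguously.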
There has been some previous work on leveraging RSA-like reasoning for neural language generation.
For instance, \citet{cohn2018} implement the literal speaker as a neural captioning model trained on non-discriminative image descriptions.
On top of this neural semantics, they build a pragmatic speaker that produces more discriminative captions, applying equation \ref{eq:1} at each step of the inference process.
They evaluate their model in a reference game where an automatic listener (trained on a different portion of the image data) is used to test whether the generated caption singles out the target image among a range of distractor images. A range of related articles have extended neural captioning models with decoding procedures geared towards vocabulary expansion \cite{anderson-guided,nocaps} or contextually discriminative scene descriptions \cite{andreas:2016,vedantam2017context}.
Previous work on REG commonly looks at visual scenes with multiple referents of identical or similar categories. Here, speakers typically produce expressions composed of a head noun, which names the category of the target, and a set of attributes, which distinguish the target from distractor referents of the same category \cite{krahmer:2012}.
Our work adds an additional dimension of uncertainty to this picture, namely a setting where the category of the target itself might not be known to the model and, hence, cannot be named with reasonable accuracy.
In this setting, we expect that a literal speaker (e.g.\ a neural REG model trained for a restricted set of object categories) generates misleading references, e.g.\ containing incorrect head nouns, as it has no means of ``knowing'' which words risk being inaccurate for referring to novel objects.
The following Section \ref{sec:model} describes how we modify the RSA approach for reasoning in such a zero-shot reference game.
\section{Model}
\label{sec:model}
Inspired by the approach in Section \ref{sec:related}, we model our pragmatic zero-shot speaker as a neural generator (the literal speaker) that is decoded via a pragmatic listener.
In contrast to the listener in Equation (1), however, our listener possesses an additional latent variable $C$, which reflects its beliefs about the target's category.
This hidden belief distribution will, in turn, allow the pragmatic speaker to reason about how accurate the words produced by the literal speaker might be.
Our Bayesian listener will assign a probability $P(r|u)$ to a referent $r$ conditioned on the utterance $u$ by the (literal) speaker.
To do that, it needs to calculate $P(u|r)$, as in Equation \ref{eq:l0}.
While previous work on RSA typically equates $P(u|r)$ with $S_0(u|r)$, we are going to modify the way this probability is calculated.
Thus, we assume that our listener has hidden beliefs about the category of the referent, that we can marginalize over as follows:
\begin{equation}
\begin{split}
P(u|r) & = \sum_{c_i \in C} P(u,c_i|r) = \sum_{c_i \in C} \frac{P(u,c_i,r)}{P(r)} \\
& = \sum_{c_i \in C} \frac{P(r)*P(c_i|r)*P(u|c_i,r)}{P(r)} \\
& \propto \sum_{c_i \in C} P(c_i|r)*P(u|c_i)
\end{split}
\end{equation}
As a simplification, we condition $u$ only on $c_i$, i.e.\ we replace $P(u|c_i,r)$ with $P(u|c_i)$.
This allows us to estimate $P(u|c_i)$ directly via maximum likelihood on the training data, i.e.\ in terms of word probabilities conditioned on categories (observed in training).
The pragmatic listener is defined as follows:
\begin{equation}
\begin{split}
L_0(r|u) & = \frac{P(u|r) * P(r)}{P(u)}\\
& \propto \sum_{c_i \in C} P(c_i|r)*P(u|c_i)
\end{split}
\label{eq:lcat}
\end{equation}
For instance, let's consider a game with 3 categories and two words, the less specific \textit{left} with $P(u|c_i) = \frac{1}{2}$ for all $c_i \in C$ and the more specific \textit{bus} with $P(u|c_1) = \frac{9}{10},P(u|c_2) = \frac{1}{10},P(u|c_3) = \frac{1}{10}$. When
the listener is uncertain and predicts $P(c_i|r) = \frac{1}{3}$ for all $c_i \in C$,
this yields $L_0(r|\text{left}) = 0.5$ and $L_0(r|\text{bus}) = 0.36$, meaning that the less specific \textit{left} will be more likely resolved to the target $r$. Vice versa, when the listener is more certain, e.g.\ $P(c_1|r) = \frac{9}{10}, P(c_2|r) = \frac{1}{10},P(c_3|r) = \frac{1}{10}$, more specific words will be preferred: $L_0(r|\text{bus}) = 0.83$ and $L_0(r|\text{left}) = 0.55$.
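The marginalization behind this example can be reproduced in a few lines; the scores below are computed up to the proportionality constant dropped under $\propto$ in Equation~\eqref{eq:lcat}, so they match the reported values only approximately, and we therefore check the qualitative ordering:

```python
# Sketch of the category-marginalized listener score in Equation (4):
# score(u, r) = sum_c P(c|r) * P(u|c), i.e. L0(r|u) up to normalization.

def listener_score(p_c_given_r, p_u_given_c):
    return sum(p_c * p_u for p_c, p_u in zip(p_c_given_r, p_u_given_c))

p_left = [0.5, 0.5, 0.5]      # unspecific word: flat across categories
p_bus = [0.9, 0.1, 0.1]       # category-specific word

uncertain = [1 / 3, 1 / 3, 1 / 3]  # no idea about the category
certain = [0.9, 0.1, 0.1]          # fairly sure it is category 1

# uncertain: "left" outscores "bus"; certain: "bus" outscores "left"
print(listener_score(uncertain, p_left), listener_score(uncertain, p_bus))
print(listener_score(certain, p_left), listener_score(certain, p_bus))
```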
The definition of the pragmatic speaker is straightforward:
\begin{equation}
S_1(u|r) = S_0(u|r)*L_0(r|u)^{\alpha}
\end{equation}
Intuitively, $S_1$ guides its potentially over-optimistic language model ($S_0$) to be more cautious in producing category-specific words, e.g.\ nouns.
The idea is that the degree to which a word is category-specific and, hence, risky in a zero-shot reference game can be determined from descriptions of objects of \textit{known} categories and is expressed in $P(u|c)$.
For \textit{unknown} categories, the pragmatic speaker can deliberately avoid these category-specific words and resort to describing other visual properties like colour or location.\footnote{We leave it for future work to combine this approach with a listener reasoning about distractor objects in the scene (as in Equation 1).}
Similar to \citet{cohn2018}, we use incremental, word-level inference to decode the pragmatic speaker model in a greedy fashion:
\begin{equation}
S_1^t(w|r,u_{t-1}) = S_0^t(w|r,u_{t-1})*L_0(r|w)^{\alpha+\beta}
\end{equation}
At each time step, we generate the most likely word determined via $S_0$ and $L_0$.
The parameters $\alpha$ and $\beta$ will determine the balance between the literal speaker and the listener. While $\alpha$ is simply a constant (set to 2, in our case), $\beta$ is zero as long as $w$ does not occur in $u_{t-1}$ and increases when it does occur in $u_{t-1}$ (it is then set to 2).
This ensures that there is a dynamic tradeoff between the speaker and the listener, i.e.\ for words that occur in the previously generated utterance prefix, the language model probabilities ($S_0$) will have comparatively more weight than for new words.
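The greedy word-level inference of Equation~(6) can be sketched as follows; the probability tables are invented for illustration (in the paper, $S_0$ is the neural generator and $L_0$ the category-marginalized listener), and the decoder generates a fixed number of words for simplicity:

```python
# Toy sketch of the greedy decoding in Equation (6):
# S1^t(w|r, u_{t-1}) = S0^t(w|r, u_{t-1}) * L0(r|w)^(alpha+beta).

VOCAB = ["left", "blue", "bus"]
ALPHA = 2  # constant listener weight, as in the paper

def s0_prob(w, referent, prefix):
    # stand-in language model; repeated words are strongly damped,
    # as an LSTM rarely emits the same word twice
    base = {"left": 0.3, "blue": 0.2, "bus": 0.5}
    return base[w] * (0.1 if w in prefix else 1.0)

def decode(l0_score, referent="r1", length=2):
    prefix = []
    for _ in range(length):
        scores = {}
        for w in VOCAB:
            beta = 2 if w in prefix else 0  # beta kicks in for repeated words
            scores[w] = s0_prob(w, referent, prefix) * l0_score[w] ** (ALPHA + beta)
        prefix.append(max(scores, key=scores.get))
    return prefix

uncertain = {"left": 0.5, "blue": 0.5, "bus": 0.1}  # naming is risky
certain = {"left": 0.5, "blue": 0.5, "bus": 0.83}   # naming is safe

print(decode(uncertain))  # avoids the noun: ['left', 'blue']
print(decode(certain))    # leads with the noun: ['bus', 'left']
```

With an uncertain listener the decoder produces exactly the noun-less \textit{left blue} pattern of Figure~\ref{fig:im}, while a confident listener lets the language model's preferred noun through.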
\section{Exp. 1: Referring without naming?}
\label{subsec:eval1}
Section \ref{sec:model} has introduced a model for referring expression generation (REG) in a zero-shot reference game.
This model, and its pragmatic decoding component in particular, is designed to avoid words that are specific to categories when there is uncertainty about the category of a target object, in favour of words that are not category-specific, such as colour or location attributes.
In the following evaluation, we will test how this reasoning component actually affects the referring behaviour of the pragmatic speaker as compared to the literal speaker, which we implement as a neural supervised REG model along the lines of previous work \cite{mao15,Yu2016}.
As object names typically express category-specific information in referring expressions, we focus the comparison on the nouns generated in the systems' output.
\subsection{Training}
\paragraph{Data} We conduct experiments on RefCOCO \cite{Yu2016} referring expressions to objects in MSCOCO \cite{mscoco} images.
As is commonly done in zero-shot learning, we manually select a range of different categories as targets for our zero-shot game, cf.\ \cite{anne2016deep}.
Out of the 90 categories in MSCOCO, we select 6 medium-frequent categories (\textit{cat, horse, cup, bottle, bus, train}) that are similar to those in \cite{anne2016deep}.
For each category, we divide the training set of RefCOCO into a new train-test split such that all images with an instance of the target zero-shot category are moved to the test set.
\paragraph{Generation Model ($S_0$)} We implement a standard CNN-LSTM model for REG, trained on pairs of image regions and referring expressions. The architecture follows the baseline version of \cite{Yu2016}. We crop images to the target region, and obtain the fc features from VGG \cite{vgg}. We set the word embedding layer size to 512, and the hidden state to 1024. We optimized with ADAM, set the batch size to 32 and the learning rate to 0.0004. The number of training epochs is 5 (verified on the RefCOCO validation set).
\paragraph{Uncertainty Estimation} Similar to previous work in zero-shot learning, we factor out the problem of automatically determining the model's certainty with respect to an object's category, cf. \cite{lampert2009,socher2013zero}: for computing $L_0(r|u)$, we set $P(c_i|r)$ to be a uniform distribution over categories, meaning that the model is maximally uncertain about the referent's category. We leave exploration of a more realistic uncertainty or novelty prediction to future work.
\subsection{Evaluation}
\paragraph{Measures} We test to what extent our model produces incorrect names for novel objects.
First, for each zero-shot category, we define a set of distractor nouns (\textbf{distr-noun}), which correspond to the names of the remaining categories in MSCOCO.
Any choice of noun from that set would be wrong, as the categories are pairwise disjoint; the exploration of other nouns (e.g. \textit{thingy, animal}) is left for future work.
In Table \ref{tab:names}, ``\% distr-noun'' refers to how many expressions generated for an instance of a zero-shot category contain such an incorrect distractor noun.
Second, we count how many generated expressions do not contain any noun (\textbf{no-noun}) at all,
according to the NLTK POS tagger.
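Both measures are straightforward to compute; in the sketch below the tagger is passed in as a function (NLTK's \texttt{pos\_tag} in our experiments), and the tiny stand-in tagger and category names are purely illustrative:

```python
def noun_metrics(expressions, distractor_nouns, tagger):
    """Return (%distr-noun, %no-noun) over generated expressions.
    distractor_nouns: names of the remaining MSCOCO categories.
    tagger: maps a token list to (token, POS) pairs, e.g. nltk.pos_tag."""
    distr = no_noun = 0
    for expr in expressions:
        tokens = expr.split()
        if distractor_nouns & set(tokens):
            distr += 1  # expression contains an incorrect category name
        if not any(tag.startswith("NN") for _, tag in tagger(tokens)):
            no_noun += 1  # expression contains no noun at all
    n = len(expressions)
    return distr / n, no_noun / n

# Stand-in tagger for illustration (replace with nltk.pos_tag in practice).
toy_tagger = lambda toks: [(w, "NN" if w in {"person", "dog", "horse"} else "JJ")
                           for w in toks]
print(noun_metrics(["left person", "left black"], {"person", "dog"}, toy_tagger))
```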
\paragraph{Results}
\begin{table}
\centering
\small
\begin{tabular}{llccc}
\toprule
\multicolumn{2}{c}{Model} & \% distr-noun & \% no-noun\\
\midrule
\textit{cat} & $S_0$ & 0.606 & 0.107\\
& $S_1$ & \bf 0.484 & \bf 0.193\\
\midrule
\textit{horse} & $S_0$ & 0.683 & 0.085\\
& $S_1$ & \bf 0.572 &\bf 0.30\\
\midrule
\textit{cup} & $S_0$ & 0.627 & 0.079\\
& $S_1$ & \bf 0.332 & \bf 0.172\\
\midrule
\textit{bottle} & $S_0$ & 0.398 & 0.275\\
& $S_1$ & \bf 0.166 & \bf 0.562\\
\midrule
\textit{bus} & $S_0$ & 0.743 & 0.066\\
& $S_1$ & \bf 0.612 & \bf 0.247\\
\midrule
\textit{train} & $S_0$ & 0.759 & 0.166\\
& $S_1$ & \bf 0.558 & \bf 0.37\\
\bottomrule
\end{tabular}
\caption{Names and nouns contained in generation output for two speakers ($S_0, S_1$)}
\label{tab:names}
\end{table}
Table \ref{tab:names} shows that the proportion of output expressions containing a distractor noun \textit{decreases} markedly from $S_0$ to $S_1$, whereas the proportion of expressions without any noun \textit{increases} markedly from $S_0$ to $S_1$.
First of all, this suggests that our baseline model $S_0$ in many cases does not know what it does not know, i.e.\ it is not aware that it encounters a novel category and frequently generates names of known categories encountered during training.
However, even in this simple model, we find a certain portion of output expressions that do not contain any name (e.g.\ 27\% for \textit{bottle}, but only 6\% for \textit{bus}).
The results also confirm our hypothesis that the pragmatic speaker $S_1$ avoids producing ``risky'' or specific words that are likely to be wrong for uncertain or unknown categories.
It is worth stressing here that this behaviour results entirely from the Bayesian reasoning that $S_1$ uses in decoding; the model does not have explicit knowledge of linguistic categories like nouns, names or other taxonomic knowledge.
\section{Exp. 2: Communicative success}
\label{subsec:eval2}
\begin{figure}
{\footnotesize
\setlength{\tabcolsep}{6pt}
\begin{tabular}{p{4cm}p{3.5cm}}
\raisebox{-\totalheight}{\includegraphics[width = 1\linewidth]{COCO_train2014_000000317210.jpg}} & \vspace{0.2cm} \shortstack{Target (unknown cat):\\ left horse\\ \vspace{0.2cm} \\\textbf{$S_0$:} left person \xmark \\ \\
\textbf{$S_1$:} left black \cmark} \\
\end{tabular}
}
\vspace{-0.3cm}
\caption{Qualitative Example}
\label{fig:im2}
\vspace{-0.3cm}
\end{figure}
The experiment in Section \ref{subsec:eval1} found that the pragmatic speaker uses less category-specific vocabulary than a literal speaker when referring to objects of novel categories.
Now, we need to establish whether the resulting utterances still achieve communicative success in the zero-shot reference game, despite using less specific vocabulary (as shown above).
We test this automatically using a model of a ``competent'' listener that knows the respective object categories. This is meant to approximate a conversation between a system and a human who has more elaborate knowledge of the world than the system.
\begin{table}
\begin{tabular}{lc}
\toprule
Zero-shot category & Similar category\\
\midrule
\textit{cat} & dog, cow \\
\textit{horse} & dog, cow \\
\textit{cup} & bowl, bottle, wine glass\\
\textit{bottle} & vase, wine glass\\
\textit{bus} & car, train, truck \\
\textit{train} & car, bus, truck\\
\bottomrule
\end{tabular}
\caption{Target and distractor categories used for testing in Exp. 2}
\label{tab:evalcat}
\end{table}
\paragraph{The evaluation listener}
One pitfall of using a trained listener model (instead of a human) for task-oriented evaluation is that this model might simply make the same mistakes as the speaker model, as it is trained on similar data.
To avoid this circularity, \citet{cohn2018} train their listener on a different subset of the image data.
Rather than training on \textit{different} data, we opt for training the listener on \textit{better} data, as we want it to be as strict and human-like as possible.
For instance, we do not want our listener model to resolve an expression like \textit{the brown cat} to a \textit{dog}.
We train $S_{eval}$ as a neural speaker on the entire training set and give $L_{eval}$ access to ground-truth object categories.
The ground-truth category $c_r$ of a referent $r$ is used to calculate $P(n_u|c_r)$ where $n_u$ is the object name contained in the utterance $u$. $P(n_u|c_r)$ is estimated on the entire training set.
\begin{equation}
L_{eval}(r|u,c_r) = S_{eval}(u|r) * P(n_u|c_r)
\end{equation}
$P(n_u|c_r)$ will be close to zero if the utterance contains a rare or wrong name for the category $c_r$, and $L_{eval}$ will then assign a very low probability to this referent.
We apply this listener to all referents in the scene and take the argmax.
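A minimal sketch of this resolution step, with $S_{eval}(u|r)$ and $P(n_u|c)$ assumed to be precomputed and passed in as dictionaries (the numbers are hypothetical):

```python
def l_eval_argmax(name_in_u, candidates, s_eval, p_name_given_cat):
    """Evaluation listener: resolve to the referent maximizing
    S_eval(u|r) * P(n_u | c_r) over all referents in the scene.
    candidates: list of (referent_id, ground_truth_category)."""
    def score(ref):
        r, c = ref
        return s_eval[r] * p_name_given_cat.get((name_in_u, c), 0.0)
    return max(candidates, key=score)[0]

# "the brown cat" should not resolve to a dog, even if S_eval likes it:
cands = [("r1", "cat"), ("r2", "dog")]
s_eval = {"r1": 0.4, "r2": 0.6}
p_name = {("cat", "cat"): 0.8, ("cat", "dog"): 0.01}
print(l_eval_argmax("cat", cands, s_eval, p_name))  # resolves to "r1"
```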
\paragraph{Test set}
The set \textbf{TS-image} pairs each target with other (annotated!) objects in the same image, a typical set-up for reference resolution.
As many images in RefCOCO only have distractors of the \textit{same} category as the target (which is not ideal for our purposes), we randomly sample an additional test set called \textbf{TS-distractors}, pairing zero-shot targets with 4 distractors of a similar category, which we defined manually, shown in Table \ref{tab:evalcat}.
This is slightly artificial, as objects are taken out of their coherent spatial context, but it helps us determine whether our model can successfully refer in a context with similar, but not identical, categories.
\paragraph{Results}
\begin{table}
\centering
\small
\begin{tabular}{llcc}
\toprule
\multicolumn{2}{c}{Model} & TS-image & TS-distractors\\
\midrule
\multirow{ 2}{*}{\textit{cat}} & $S_0$ & 0.516 & 0.343\\
& $S_1$ & \bf 0.603 & \bf 0.386\\
\midrule
\multirow{ 2}{*}{\textit{horse}} & $S_0$ & \bf 0.644 & 0.096\\
& $S_1$ & 0.589 & \bf 0.150\\
\midrule
\multirow{ 2}{*}{\textit{cup}} & $S_0$ & \bf 0.721 & 0.483\\
& $S_1$ & 0.674 & \bf 0.540\\
\midrule
\multirow{ 2}{*}{\textit{bottle}} & $S_0$ & 0.502 & 0.275\\
& $S_1$ & \bf 0.517 & \bf 0.306\\
\midrule
\multirow{ 2}{*}{\textit{bus}} & $S_0$ & \bf 0.789 & \bf 0.405\\
& $S_1$ & 0.759 & 0.361\\
\midrule
\multirow{ 2}{*}{\textit{train}} & $S_0$ & 0.658 & 0.202\\
& $S_1$ & \bf 0.667 & \bf 0.305\\
\bottomrule
\end{tabular}
\caption{Reference resolution accuracies obtained from listener $L_{eval}$ on expressions by $S_0, S_1$}
\label{tab:listener}
\vspace{-0.5cm}
\end{table}
As shown in Table \ref{tab:listener}, the $S_1$ model improves the resolution accuracy for all categories on TS-distractors, except for \textit{bus}.
On TS-image, resolution accuracies are generally much higher and the comparison between $S_0$ and $S_1$ gives mixed results.
We take this as positive evidence that $S_1$ improves communicative success in a relevant number of cases, but it also indicates that combining this model with the more standard RSA approach could be promising. Figure \ref{fig:im2} shows a qualitative example for $S_1$ being more successful than $S_0$.
\section{Conclusion}
We have presented a pragmatic approach to modeling zero-shot reference games, showing that Bayesian reasoning inspired by RSA can help decode a neural generator that refers to novel objects. The decoder is based on a pragmatic listener that has hidden beliefs about a referent's category, which leads the pragmatic speaker to use fewer nouns when it is uncertain about this category.
While some aspects of the experimental setting are, admittedly, simplified (e.g. compilation of an artificial test set, uncertainty estimation), we believe that this is an encouraging result for scaling models in computational pragmatics to real-world conversation and its complexities.
1906.05294
\section{Majorana Assisted Scattering Calculation}
The purpose of this part of the supplement is to derive Eqs.~\ref{eq:Jresults} and \ref{eq:Gresult} of the main text. This is a ``standard'' but tedious calculation using the bosonized description of the edge [S1]. While some of the formulae appear messy, the procedure is actually quite straightforward. We include quite a bit of detail for added clarity.
\label{sec:MASC}
\subsection{Tunneling Formalism}
We begin by deriving a general formula for tunneling between two systems $R$ and $L$ with corresponding Hamiltonians $H_R$ and $H_L$. The full Hamiltonian is of the form
$$
H = H_L + H_R + \hat T + \hat T^\dagger
$$
where the tunneling term $\hat T$ describes the tunneling of a single electron from left to right.
We will assume that $\hat T$ is a weak perturbation, so that
both the left and right halves can be described with density matrices $\rho_L$ and $\rho_R$ and the state of the full system is a simple product $\rho = \rho_L \otimes \rho_R$.
Similarly the unperturbed eigenstates of the system can be described as simple direct products
$
|a,b\rangle = | a_L \rangle \otimes |b_R \rangle = |a_L b_R\rangle
$
with corresponding eigenenergies
$
E_{a,b} = E_{a_L} + E_{b_R}
$
where
$H_L |a_L \rangle = E_{a_L} |a_L \rangle$ and $H_R |b_R \rangle = E_{b_R} |b_R \rangle$.
Setting $\hbar=1$ throughout, the tunneling rate from Fermi's golden rule is given by
$$
\Gamma= {2 \pi} \sum_{i,f}
|\langle f | \hat T | i \rangle|^2 \delta(E_i - E_f) P(i)
$$
with $|i\rangle$ the initial and $|f\rangle$ the final state (of the entire system), where $P(i)$ is the probability of the initial state occurring. If there is a voltage difference between the two sides, we can simply add it to the argument of the delta function.
The net electrical current flowing from the left to the right can then be written as
\begin{eqnarray*}
J^e &=& {(-2 \pi e)}\sum_{i,f}
|\langle f_L f_R | \hat T | i_L i_R \rangle|^2 \delta(E_{iL} + E_{iR} - E_{fL} - E_{fR} + e V) P(|i_L i_R\rangle) \\
&-& {(-2 \pi e) } \sum_{i,f}
|\langle f_L f_R | \hat T^\dagger | i_L i_R \rangle|^2 \delta(E_{iL} + E_{iR} - E_{fL} - E_{fR} - e V) P(|i_L i_R\rangle)
\end{eqnarray*}
The energy current, on the other hand, is
\begin{eqnarray*}
J^E &=& {2 \pi} \sum_{i,f} (E_{iL} - E_{fL})
|\langle f_L f_R | \hat T | i_L i_R \rangle|^2 \delta(E_{iL} + E_{iR} - E_{fL} - E_{fR} + e V) P(|i_L i_R\rangle) \\
&+& {2 \pi}\sum_{i,f} (E_{iL} - E_{fL})
|\langle f_L f_R | \hat T^\dagger | i_L i_R \rangle|^2 \delta(E_{iL} + E_{iR} - E_{fL} - E_{fR} - eV) P(|i_L i_R\rangle)
\end{eqnarray*}
Writing the delta function as an integration over energy, we get
\begin{eqnarray*}
J^e &=& {-e} \int dt \sum_{i,f} P(|i_L i_R\rangle) e^{it (E_{iL} + E_{iR} - E_{fL} - E_{fR})}
\left[ e^{it eV} |\langle f_L f_R |\hat T | i_L i_R \rangle|^2 - e^{-i t e V} |\langle f_L f_R | \hat T^\dagger | i_L i_R \rangle|^2
\right]
\\
J^E &=& \int dt \sum_{i,f} (E_{iL} - E_{fL}) P(|i_L i_R\rangle) e^{it (E_{iL} + E_{iR} - E_{fL} - E_{fR})}
\left[ e^{it eV} |\langle f_L f_R |\hat T | i_L i_R \rangle|^2+ e^{-i t e V} |\langle f_L f_R | \hat T^\dagger | i_L i_R \rangle|^2
\right]
\end{eqnarray*}
This can be rewritten using time dependent operators as
\begin{eqnarray*}
J^e &=& {-e} \int dt \sum_{i,f} P(|i_L i_R\rangle)
\left[ e^{it eV} \langle i_L i_R | T^\dagger(t) | f_L f_R \rangle \langle f_L f_R |\hat T(0) | i_L i_R \rangle - e^{-i t e V} \langle i_L i_R | T(t) | f_L f_R \rangle \langle f_L f_R |\hat T^\dagger(0) | i_L i_R \rangle \right] \\
&=& -e \int dt \sum_{i} P(|i_L i_R\rangle) \left[ e^{it eV} \langle i_L i_R | \hat T^\dagger(t) \hat T(0) | i_L i_R \rangle - e^{-i t e V} \langle i_L i_R |
\hat T(t) \hat T^\dagger(0) | i_L i_R \rangle \right]
\end{eqnarray*}
and
\begin{eqnarray*}
J^E &=& \int dt \sum_{i,f} (E_{iL} - E_{fL}) P(|i_L i_R\rangle)
\left[ e^{it eV} \langle i_L i_R | T^\dagger(t) | f_L f_R \rangle \langle f_L f_R |\hat T(0) | i_L i_R \rangle + e^{-i t e V} \langle i_L i_R | \hat T(t) | f_L f_R \rangle \langle f_L f_R |\hat T^\dagger(0) | i_L i_R \rangle \right] \\
&=& \int dt \sum_{i,f} P(|i_L i_R\rangle)
\left[ e^{it eV} \langle i_L i_R | T^\dagger(t) | f_L f_R \rangle \langle f_L f_R |[\hat T(0),H_L] | i_L i_R \rangle + e^{-i t e V} \langle i_L i_R | \hat T(t) | f_L f_R \rangle \langle f_L f_R | [\hat T^\dagger(0),H_L] | i_L i_R \rangle \right] \\
&=& \int dt \sum_{i} P(|i_L i_R\rangle) \left[ e^{it eV} \langle i_L i_R | T^\dagger(t) [\hat T(0),H_L] | i_L i_R \rangle + e^{-i t e V} \langle i_L i_R | \hat T(t) [\hat T^\dagger(0),H_L] | i_L i_R \rangle \right]
\end{eqnarray*}
The form of the tunneling is given by
\begin{equation}
\label{eq:Tform}
\hat T = g \int dx \, \psi_{eR}^\dagger(x) \psi_{eL}(x) e^{i p x}
\end{equation}
where $\psi_{eL}^\dagger$ and $\psi_{eR}^\dagger$ are electron creation operators on the left and right side respectively, $p$ is the wavevector mismatch associated with the tunneling, and $g$ is a coupling constant. This constant $g$ differs from the coupling constant $\alpha$ used in the main text by a dimensionful cutoff length scale, described below. We then obtain
\begin{eqnarray*}
J^e &=& -e |g|^2 \int dt \int dx \int dx' \sum_{i} P(|i_L i_R\rangle) \left[ e^{it eV + i p (x - x')} \langle i_L i_R | \psi^\dagger_L(x',t) \psi_R(x',t) \psi^\dagger_R(x,0) \psi_L(x,0) | i_L i_R \rangle \right.
\\&& ~~~~~~~~~~~~~~~ -\left. e^{-i t e V - i p (x - x')} \langle i_L i_R | \psi^\dagger_R(x',t) \psi_L(x',t) \psi^\dagger_L(x,0) \psi_R(x,0) | i_L i_R \rangle \right] \\
&=& -e|g|^2 \int dt \int dx \int dx' \left[ e^{it eV + i p (x - x')} G_<^L(t,x',x) G_>^R(t,x',x) - e^{-i t e V - i p (x - x')} G_<^R(t,x',x) G_>^L(t,x',x) \right]
\end{eqnarray*}
where we have defined Green's functions
\begin{eqnarray*}
G_<(t,x',x) &=& \langle \psi^\dagger(x',t) \psi(x,0)\rangle
\\ G_>(t,x',x) &=& \langle \psi(x',t) \psi^\dagger(x,0)\rangle
\end{eqnarray*}
where the expectation includes an expectation over the initial state. That is, we really mean
$$
G_<(t,x',x) = {\rm Tr}[\rho \, \psi^\dagger(x',t) \psi(x,0) ]
$$
with $\rho$ the density matrix.
For the energy current we will need the following identity (which uses the fact that the Green's function depends only on the difference of the two times)
\begin{eqnarray*}
\frac{d}{dt}G_<(t,x',x) &=& \frac{d}{dt}\langle \psi^\dagger(x',t) \psi(x,0)\rangle = \frac{d}{dt}\langle \psi^\dagger(x',0) \psi(x,-t)\rangle
\\
&=& -i \langle \psi^\dagger(x',0) [H,\psi(x,-t)]\rangle = i \langle \psi^\dagger(x',t) [\psi(x,0),H]\rangle
\end{eqnarray*}
and similarly
\begin{eqnarray*}
\frac{d}{dt}G_>(t,x',x) &=& \frac{d}{dt}\langle \psi(x',t) \psi^\dagger(x,0)\rangle = \frac{d}{dt}\langle \psi(x',0) \psi^\dagger(x,-t)\rangle
\\
&=& -i \langle \psi(x',0) [H,\psi^\dagger(x,-t)]\rangle = i \langle \psi(x',t) [\psi^\dagger(x,0),H]\rangle
\end{eqnarray*}
Using this identity, similar manipulations give
\begin{eqnarray*}
J^E &=& |g|^2 \int dt \int dx \int dx' \sum_{i} P(|i_L i_R\rangle) \left[ e^{it eV + i p (x - x')} \langle i_L i_R | \psi^\dagger_L(x',t) \psi_R(x',t) \psi^\dagger_R(x,0) [\psi_L(x,0),H_L] | i_L i_R \rangle \right.
\\&& ~~~~~~~~~~~~~~~ +\left. e^{-i t e V - i p (x - x')} \langle i_L i_R | \psi^\dagger_R(x',t) \psi_L(x',t) [\psi^\dagger_L(x,0),H_L] \psi_R(x,0) | i_L i_R \rangle \right] \\
&=& -|g|^2 \int dt \int dx \int dx' \left[ e^{it eV + i p (x - x')} G_>^R(t,x',x) \frac{i d}{dt} G_<^L(t,x',x) + e^{-i t e V - i p (x - x')} G_<^R(t,x',x) \frac{i d}{dt} G_>^L(t,x',x) \right]
\end{eqnarray*}
Let us define the Fourier transform conventions
\begin{eqnarray*} G_<(E,x',x) &=& \int dt \, e^{-i t E} \langle \psi^\dagger(x',t) \psi(x,0)\rangle = \int dt \ e^{-i t E} G_<(t,x',x)
\\ G_>(E,x',x) &=& \int dt \, e^{i t E} \langle \psi(x',t) \psi^\dagger(x,0)\rangle = \int dt \ e^{i t E} G_>(t,x',x)
\end{eqnarray*}
implying the inverses
\begin{eqnarray*} G_<(t,x',x) &=& \frac{1}{2\pi}\int dE \ e^{i t E} G_<(E,x',x)
\\ G_>(t,x',x) &=& \frac{1}{2\pi} \int dE \, e^{-i t E} G_>(E,x',x)_.
\end{eqnarray*}
such that
\begin{eqnarray*} \frac{i d}{dt} G_<(t,x',x) &=& \frac{1}{2\pi}\int dE (-E) \ e^{i t E} G_<(E,x',x)
\\ \frac{i d}{dt} G_>(t,x',x) &=& \frac{1}{2\pi} \int dE \, (E) e^{-i t E} G_>(E,x',x)_.
\end{eqnarray*}
We then have
\begin{eqnarray} \label{eq:Jeres1}
J^e &=& \frac{-e |g|^2}{(2 \pi)^2} \int dt \int dx \int dx' \int dE \int dE' \\ \nonumber
& & \left[ e^{it eV + i p (x - x') + i E t - i E' t} G_<^L(E,x',x) G_>^R(E',x',x) - e^{-i t e V - i p (x - x') + i E t - i E' t} G_<^R(E,x',x) G_>^L(E',x',x) \right] \\ \nonumber
&=& \frac{-e |g|^2}{(2 \pi)} \int dx \int dx' \int dE \left[ e^{i p (x - x')} G_<^L(E,x',x) G_>^R(E+eV,x',x) - e^{-i p (x - x')} G_<^R(E,x',x) G_>^L(E-eV,x',x) \right] \\ \nonumber
&=& \frac{-e |g|^2}{(2 \pi)} \int dx \int dx' \int dE \left[ e^{i p (x - x')} G_<^L(E,x',x) G_>^R(E+eV,x',x) - e^{-i p (x - x')} G_<^R(E+e V,x',x) G_>^L(E,x',x) \right]
\end{eqnarray}
Similarly for the thermal current we obtain
\begin{eqnarray}
J^E &=& \label{eq:JEres1}
\frac{-|g|^2}{(2 \pi)^2} \int dt \int dx \int dx' \int dE \int dE' \\ \nonumber
& & \left[ e^{it eV + i p (x - x') + i E t - i E' t} (-E) G_<^L(E,x',x) G_>^R(E',x',x) + e^{-i t e V - i p (x - x') + i E t - i E' t} G_<^R(E,x',x) (E) G_>^L(E',x',x) \right]
\\ \nonumber
&=& \frac{|g|^2}{(2 \pi)} \int dx \int dx' \int dE \left[ e^{i p (x - x')} E G_<^L(E,x',x) G_>^R(E+eV,x',x) - (E-eV) e^{-i p (x - x')} G_<^R(E,x',x) G_>^L(E-eV,x',x) \right] \\ \nonumber
&=& \frac{|g|^2}{(2 \pi)} \int dx \int dx' \int dE E \left[ e^{i p (x - x')} G_<^L(E,x',x) G_>^R(E+eV,x',x) - e^{-i p (x - x')} G_<^R(E+eV,x',x) G_>^L(E,x',x) \right]_.
\end{eqnarray}
\subsection{Necessary Green's Functions}
\subsubsection{Edge Green's Functions}
We are concerned with the tunneling from an integer edge to the fractional edges in the AntiPfaffian. We consider the integer edge to be the $R$ system in the above equations and the fractional edges (the Bose plus Majorana edges) to be the $L$ system. The Green's functions for an integer edge (using our conventions) are simple to calculate; we obtain
\begin{eqnarray*}
G_<(E,x',x) &=& v^{-1} e^{i (E/v) (x - x')} n_F(E)
\\G_>(E,x',x) &=& v^{-1} e^{i (E/v) (x' - x)} n_F(-E)
\end{eqnarray*}
where $v$ is the edge velocity. The derivation of these results is not hard and is presented in section \ref{sub:integeredge} below.
\subsubsection{Factoring the AntiPfaffian Fractional Edge}
\label{sub:factor}
More interesting is to determine the $L$ Green's functions in Eqs.~\ref{eq:Jeres1} and \ref{eq:JEres1} above, describing the electron when it tunnels into the fractional edge.
The fractional part of the AntiPfaffian edge contains an upstream Bose $b$ and an upstream Majorana mode $\xi$. These have the commutation relations
\begin{eqnarray*}
\{ \xi(x), \xi(x') \} &=& \delta(x -x') \\
\,
[ b(x), b^\dagger(x') ] &=& \delta(x-x')
\end{eqnarray*}
The electron operator along this fractional edge is a combination of the Bose and Majorana operators [S2]:
$$
\psi(x) = \sqrt{\ell_c} \,\, \xi(x) b(x)
$$
Here $\ell_c$ is a cutoff length scale. For a typical one-dimensional system we use a cutoff $2 \pi/q_{max}$, where $\hbar v q_{max} = \Delta$, with $v$ the mode velocity and $\Delta$ the excitation gap energy. If we think of the system as being on a lattice, we should probably instead choose $q_{max}= \pi/\ell$ to match the relationship between the unit cell and the Brillouin zone boundary. In the present case, however, the Bose and Majorana velocities can be quite different, so we will choose the geometric mean
\begin{equation}
\label{eq:cutoff}
\ell_c = \frac{\pi\sqrt{v_b v_m} }{\Delta}
\end{equation}
The Green's function for the electron operator on the fractional edge can then be written as the product of Bose and Majorana correlators, which factorize
\begin{eqnarray} \label{eq:Gfactorize}
G_<(t,x',x) &=& \langle \psi^\dagger(x',t) \psi(x,0)\rangle = \ell_c \langle b^\dagger(x',t) b(x,0) \rangle \langle \xi(x',t) \xi(x,0) \rangle = \ell_c G^b_<(t,x',x) G^\xi(t,x',x)
\\ G_>(t,x',x) &=& \langle \psi(x',t) \psi^\dagger(x,0)\rangle = \ell_c \langle b(x',t) b^\dagger(x,0) \rangle \langle \xi(x',t) \xi(x,0) \rangle = \ell_c G^b_>(t,x',x) G^\xi(t,x',x)
\end{eqnarray}
where here we have defined
\begin{eqnarray*}
G^b_<(t,x',x) &=& \langle b^\dagger(x',t) b(x,0) \rangle
\\ G^b_>(t,x',x) &=& \langle b(x',t) b^\dagger(x,0) \rangle
\end{eqnarray*}
and
$$
G^\xi(t,x',x) = \langle \xi(x',t) \xi(x,0) \rangle
$$
Note that, since for Majorana fields $\xi = \xi^\dagger$, there is no $<$ or $>$ index on this Green's function.
Defining the usual Fourier transforms
\begin{eqnarray*}
G^b_<(E,x',x) &=& \int dt \ e^{-i t E} G^b_<(t,x',x)
\\ G^b_>(E,x',x) &=& \int dt \ e^{i t E} G^b_>(t,x',x)
\end{eqnarray*}
and for the Majorana field
\begin{eqnarray} \nonumber
G^\xi_<(E,x',x) &=& \int dt \ e^{-i t E} G^\xi(t,x',x) \\
G^\xi_>(E,x',x) &=& \int dt \ e^{i t E} G^\xi(t,x',x) = G^\xi_<(-E,x',x) \label{eq:Gmid}
\end{eqnarray}
Using the factorization from Eq.~\ref{eq:Gfactorize} for the electron field Green's function, we obtain the convolutions
\begin{eqnarray} \label{eq:Gconv}
G_<(E,x',x) &=& \frac{\ell_c}{2 \pi} \int dE' \,\, G_<^b(E-E',x',x) G_<^\xi(E',x',x) \\
G_>(E,x',x) &=& \frac{\ell_c}{2 \pi} \int dE' \,\, G_>^b(E-E',x',x) G_>^\xi(E',x',x) \nonumber
\end{eqnarray}
Note that the meaning of this expression is simply that the energy is divided between the Bose mode and the Majorana mode in all possible ways.
When we consider the tunneling between edge modes, we will want to use the Green's function for a free boson mode appropriate for the AntiPfaffian edge, but the Majorana Green's function will be calculated in the presence of the tunneling impurity.
The Bose mode is equivalent to a $\nu=1/2$ bosonic Laughlin edge theory. The Green's function is the boson-boson correlator, and is given by
\begin{eqnarray*}
G_<^b(E,x,x') &=& \frac{E \tilde \ell_c}{2 \pi v_b^2}
n_B(E) e^{i (E/v_b) (x - x')} \\
G_>^b(E,x,x') &=& \frac{-E \tilde \ell_c}{2 \pi v_b^2} n_B(-E) e^{-i (E/v_b) (x - x') }
\end{eqnarray*}
with $n_B$ the Bose function, $v_b$ the Bose mode velocity, and $\tilde \ell_c = \pi v_b/\Delta$ a cutoff length scale. These results are again fairly standard, but are derived in section \ref{sub:boseedge} for completeness.
Finally we turn to the Majorana Green's function. In the absence of the impurity we have
\begin{eqnarray*}
G_<^{\xi,0}(E,x,x') &=& v_m^{-1}
n_F(E) e^{i (E/v_m) (x - x')} \\
G_>^{\xi,0}(E,x,x') &=& v_m^{-1} n_F(-E) e^{-i (E/v_m) (x - x') }
\end{eqnarray*}
where here $v_m$ is the Majorana mode velocity. These results are also standard, but are derived in section \ref{sub:majoranaedgenoimpurties} for completeness. Note here we have inserted a superscript 0 to indicate that this is the Green's function in the absence of the impurity.
We next consider plugging these Green's functions into Eq.~\ref{eq:Gconv} to obtain the electron Green's function along the fractional edge; we then use this in Eqs.~\ref{eq:Jeres1} and \ref{eq:JEres1} along with the Green's function of the integer edge. So long as $p \gg T/v$ and $p \gg eV/v$, there will be no way to have a momentum-conserving scattering, and both the electrical and thermal conductance between the two edges will be zero, exactly as we expect.
Now let us consider the effect of the tunneling impurity. As mentioned in the main text, the effect of the impurity is to incur a phase shift in the Majorana wave as it scatters past the impurity. The phase shift is given by (a simple scattering problem, see
section \ref{sec:phaseshift})
$$
e^{i \varphi(E)} = \frac{E + i E_0}{E - i E_0}
$$
where $E_0 = \lambda^2/v$ is the coupling energy (called $E_{coupling}$ in the main text). When evaluating the Green's function $G^\xi(E,x',x)$, this phase shift will have no effect if both $x$ and $x'$ are on the same side of the impurity. However, if $x$ and $x'$ are on different sides of the impurity, then the Green's function picks up this additional phase. Thus assuming the impurity is at position $x=0$ we can write
$$
G_<^\xi(E,x,x') = G_<^{\xi 0}(E,x,x') F(E,x,x')
$$
where $G^{\xi 0}_<$ is the unperturbed Green's function and
$$
F(E,x,x')= \left\{ \begin{array}{ll} 1 & x,x' < 0 \\ 1 & x,x' > 0 \\ (E + i E_0)/(E - i E_0) & x > 0 > x' \\ (E - i E_0)/(E + i E_0) & x < 0 < x' \end{array} \right.
$$
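It may be worth noting (a simple check, not essential to the argument) that $F$ is a pure phase: for $E>0$, writing $E \pm i E_0 = \sqrt{E^2+E_0^2}\, e^{\pm i \arctan(E_0/E)}$ gives

```latex
F(E,x,x') = e^{\pm i \varphi(E)}, \qquad \varphi(E) = 2 \arctan(E_0/E) \quad (E > 0),
```

with the $+$ sign for $x > 0 > x'$. The impurity thus imparts a full $\pi$ phase shift at energies $E \ll E_0$ and becomes transparent for $E \gg E_0$.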
Let us make the assumption that there is no scattering without the impurity, as discussed above. (This is not strictly true at nonzero temperature: even with a very large momentum mismatch there is some tiny probability that a highly excited state can scatter, but this probability is exponentially small, so we may ignore it.) It is then useful to write
\begin{eqnarray*}
\delta G_<^{\xi}(E,x,x') &=& G_<^\xi(E,x,x') - G_<^{\xi 0}(E,x,x') \\
&=& G_<^{\xi 0}(E,x,x') \,\, \delta F(E,x,x') \\
&=& v_m^{-1} e^{i (E/v_m) (x - x')} n_F(E) \delta F(E,x,x')
\end{eqnarray*}
where
$$
\delta F(E,x,x')= \left\{ \begin{array}{ll} 0 & x,x' < 0 \\ 0 & x,x' > 0 \\ 2 i E_0/(E - i E_0) & x > 0 > x' \\ - 2i E_0/(E + i E_0) & x < 0 < x' \end{array} \right._.
$$
We may correspondingly write the electron's Green's function
$$
\delta G_<(E,x,x') = G_<(E,x,x') -G^{0}_<(E,x,x')
$$
where again the superscript $0$ indicates no impurity. Using the factorization of the Green's function in Eq.~\ref{eq:Gconv} we obtain
\begin{eqnarray*}
\delta G_<(E,x,x') &=& \frac{\ell_c}{2 \pi} \int dE' G^b_<(E - E',x,x') \delta G_<^\xi(E',x,x')
\\
&=&
\,\,\frac{\ell_c \tilde \ell_c}{(2 \pi)^2 v_m v_b^2} \int dE' e^{i [(E-E')/v_b + E'/v_m] (x - x')} (E-E') n_B^b(E-E') n_F^\xi(E') \, \delta F(E',x,x')
\end{eqnarray*}
with $\ell_c$ and $\tilde \ell_c$ the cutoff length scales. Note that we have labeled the Fermi and Bose functions with superscripts $\xi$ and $b$ to make clear that they correspond to the two different edges, which in general need not be at the same temperature. To obtain $\delta G_>$ we can simply use Eq.~\ref{eq:Gmid}, obtaining
\begin{eqnarray*}
\delta G_>(E,x,x') &=& \frac{\ell_c}{2 \pi} \int dE' G^b_>(E - E',x,x') \delta G_>^\xi(E',x,x')
\\
\delta G_>(E,x,x') &=&
\,\,\frac{\ell_c \tilde \ell_c}{(2 \pi)^2 v_m v_b^2 } \int dE' e^{-i [(E-E')/v_b + E'/v_m] (x' - x)} (E'-E) n_B^b(E'-E) n_F^\xi(-E') \, \delta F(-E',x,x')
\end{eqnarray*}
\subsection{Calculating Response}
From Eqs.~\ref{eq:Jeres1} and \ref{eq:JEres1} we can write the general expression
\begin{eqnarray*}
J^\alpha
&=& \frac{|g|^2}{(2 \pi)} \int dx \int dx' \int dE X^\alpha \left[ e^{i p (x - x')} G_<^L(E,x',x) G_>^R(E+eV,x',x) - e^{-i p (x - x')} G_>^L(E,x',x) G_<^R(E+eV,x',x)\right]_.
\end{eqnarray*}
where $\alpha=e$ or $E$ (for charge current or energy current), with $X^e = -e$ and $X^E = E$.
Let us take the $R$-system to be the integer edge and the $L$-system to be the combined fractional edges. Again assuming that there is no transport in the absence of the impurity we can then write
\begin{eqnarray*}
J^\alpha
&=& \frac{|g|^2}{(2 \pi)} \int dx dx' dE \, X^\alpha \left[ e^{i p (x' - x)} \delta G_<^L(E,x,x') G_>^R(E+eV,x,x') - e^{-i p (x' - x)} \delta G_>^L(E,x,x') G_<^R(E+eV,x,x')\right] \\
&=& \frac{|g|^2 \ell_c}{(2 \pi)^2} \int dx dx' dE dE' \, X^\alpha \left[ e^{i p (x' - x)} G_<^b(E-E',x,x') \delta G^\xi_<(E',x,x') G_>^i(E+eV,x,x') - \right. \\ & & ~~~~~~~~~~~~~~~~~~~~~~~~
\left. e^{-i p (x' - x)} G_>^b(E-E',x,x') \delta G^\xi_>(E',x,x')G_<^i(E+eV,x,x') \right]
\end{eqnarray*}
where $G^i$ denotes the ``integer'' edge. Note that here we can also choose $X=E'$ to determine the thermal current flowing into the Majorana edge mode only. Plugging the above results for the Green's functions into this expression, we obtain
\begin{eqnarray} \label{eq:thiseqJ}
J^\alpha
&=& \frac{|g|^2 \ell_c \tilde \ell_c}{(2 \pi)^3 v v_b^2 v_m} \int dx dx' dE dE' \, X^\alpha(E,E') (E-E') \\ \nonumber & & \left[ e^{i(-p + (E-E')/v_b + E'/v_m + (E + e V)/v) (x - x')} n_B^b(E-E')n_F^\xi(E') \delta F(E',x,x') n_F^i(-E - eV) + \right. \\ \nonumber & & ~~~~~~
\left. e^{-i(-p + (E-E')/v_b + E'/v_m + (E + e V)/v) (x - x')} n_B^b(E'-E)n_F^\xi(-E') \delta F(-E',x,x') n_F^i(E + eV) \right]
\end{eqnarray}
Note that the exponent $i (E+ eV)(x-x')$ of the integer mode has the same sign as $i(E-E')(x-x')$; this is because the integer and fractional modes are oppositely directed. Again, $X^\alpha$ can equal $-e$ or $E$ for the electrical or thermal current leaving the integer mode, or $E'$ for the thermal current into the Majorana mode.
The integrals over $x,x'$ are now simple via
\begin{eqnarray*}
&& \int dx \int dx' \delta F(E',x,x') e^{i A (x-x')} =
\\ &=& \frac{2 i E_0}{E'- i E_0} \int_0^\infty dx \int_{-\infty}^0 dx' e^{i A (x -x')} - \frac{2 i E_0}{E'+ i E_0} \int_0^\infty dx' \int_{-\infty}^0 dx e^{i A (x -x')} \\
&=& \frac{2 i E_0}{E'- i E_0} \frac{-1}{(A + i 0^+)^2} - \frac{2 i E_0}{E'+ i E_0}\frac{-1}{(A - i 0^+)^2} = \frac{4 E_0^2}{E'^2 + E_0^2}\frac{1}{A^2}
\end{eqnarray*}
where we have ignored the $0^+$ pieces. This is valid assuming that $A$ is never close to zero. Indeed, we are assuming that, in the exponents in Eq.~\ref{eq:thiseqJ}, the momentum mismatch $p$ is much larger than $E/v$ for any of the energies and velocities, so there is never scattering in the absence of disorder. As a result we can replace the constant exponent $A$ by $p$ in all occurrences, obtaining
\begin{eqnarray} \label{eq:thiseqJ0}
J^\alpha
&=& \frac{4 |g|^2 \ell_c \tilde \ell_c E_0^2}{(2 \pi)^3 v v_b^2 v_m p^2} \int dE dE' \, \frac{X^\alpha(E,E') (E-E')}{E'^2 + E_0^2} \\ \nonumber & & \left[ n_B^b(E-E')n_F^\xi(E') n_F^i(-E - eV) + n_B^b(E'-E)n_F^\xi(-E') n_F^i(E + eV) \right]
\end{eqnarray}
As we would hope, if all three modes (Bose, Majorana, integer) are at the same temperature and the voltage is zero, then the expression in brackets in the second line of Eq.~\ref{eq:thiseqJ0} is exactly zero. Generally, though, we should allow for the possibility of three different temperatures in the Bose ($b$), Majorana ($\xi$), and integer ($i$) modes.
Let us assume the voltage and the temperature differences between the modes are small; we can then expand the brackets to obtain
\begin{eqnarray} \label{eq:thiseqJ3}
J^\alpha
&=& \frac{2 |g|^2 \ell_c \tilde \ell_c E_0^2}{(2 \pi)^3 v v_b^2 v_m p^2} \int dE dE' \, \frac{X^\alpha(E,E') (E-E')}{E'^2 + E_0^2} \left[\frac{ E (\beta^i - \beta^b) + E' (\beta^b- \beta^\xi ) + \beta e V}{\sinh(\beta E) - \sinh(\beta E') + \sinh(\beta(E - E')) } \right]
\end{eqnarray}
From the symmetry of the integrand under $E \rightarrow -E$ and $E' \rightarrow -E'$ it is easy to see that we obtain a heat current only for a temperature difference and an electrical current only for a voltage difference. This agrees with the intuition that there should be no thermoelectric effect for dispersionless edges\cite{KaFi96}.
For both the electric and thermal current from the integer into the Bose mode, we can assume $E_0 \ll T$ in which case
$$
\frac{1}{E'^2 + E_0^2} \approx \pi \delta(E') E_0^{-1}
$$
and we obtain
\begin{eqnarray}
J^\alpha
&=& \frac{|g|^2 \ell_c \tilde \ell_c E_0}{(2 \pi)^2 v v_b^2 v_m p^2} \int dE \, X^\alpha \left[\frac{ E( E (\beta^i - \beta^b) + \beta e V)}{2 \sinh(\beta E)} \right]
\end{eqnarray}
with $X^\alpha=-e$ or $E$ for the electrical or energy current. This yields
\begin{eqnarray}
J^e
&=& \frac{e^2 \pi^2 | g|^2 \ell_c \tilde \ell_c E_0 T}{4 (2 \pi)^2 v v_b^2 v_m p^2} V =
\frac{e^2 | g|^2 \ell_c \tilde \ell_c E_0 T}{16 v v_b^2 v_m p^2} V
\\
J^E
&=& \frac{\pi^4 | g|^2 \ell_c \tilde \ell_c E_0 T^2}{8 (2 \pi)^2 v v_b^2 v_m p^2} (\Delta T) = \frac{\pi^2 | g|^2 \ell_c \tilde \ell_c E_0 T^2}{32 v v_b^2 v_m p^2} (\Delta T)
\end{eqnarray}
where we have used $\int_{-\infty}^{\infty} dx\, x/\sinh(x) = \pi^2/2$ and $\int_{-\infty}^{\infty} dx\, x^3/\sinh(x) = \pi^4/4$. Here $\Delta T$ is the temperature difference between the integer and Bose modes. Note that in the main text we use the standard definition of conductance in terms of $G_0$ and thermal conductance in terms of $K_0$, which have factors of $h$ rather than $\hbar$.
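As a quick numerical sanity check of the two integrals just used (an illustrative Python sketch, not part of the derivation; the quoted values are for integration over the whole real line):

```python
import numpy as np
from scipy.integrate import quad

# Verify int dx x/sinh(x) = pi^2/2 and int dx x^3/sinh(x) = pi^4/4
# (integrals over the whole real line; the integrands are even).
i1 = 2 * quad(lambda x: x / np.sinh(x), 1e-12, np.inf)[0]
i3 = 2 * quad(lambda x: x**3 / np.sinh(x), 1e-12, np.inf)[0]
print(i1, np.pi**2 / 2)  # ~4.9348 both
print(i3, np.pi**4 / 4)  # ~24.352 both
```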
The calculation of the thermal current into the Majorana mode is more challenging. Here we use $X=E'$ in Eq.~\ref{eq:thiseqJ}, and we are concerned only with the response to the temperature differences. Here the limit $E_0 \rightarrow 0$ is nonsingular. Taking this limit we have
\begin{eqnarray} \label{eq:thiseqJ4}
J^{E'}
&=& \frac{2 |g|^2 \ell_c \tilde \ell_c E_0^2}{(2 \pi)^3 v v_b^2 v_m p^2} \int dE dE' \, \frac{(E-E')}{E'} \left[\frac{ E [ (\beta^i - \beta^\xi) - (\beta^b - \beta^\xi)] + E' (\beta^b- \beta^\xi )}{\sinh(\beta E) - \sinh(\beta E') + \sinh(\beta(E - E')) } \right]
\end{eqnarray}
The integrals over $E$ and $E'$ can be performed (see section~\ref{sec:nastyintegral}) to obtain
$$
J^{E'}
= \frac{|g|^2 \ell_c \tilde \ell_c E_0^2 T}{9 \pi v v_b^2 v_m p^2} \,
\, [ (T^i - T^\xi) + 2 (T^b - T^\xi) ].
$$
We choose the coupling constant $g$ (an interaction energy scale associated with scattering) to be the gap energy $\Delta$. As discussed below in sections \ref{sub:boseedge} and \ref{sub:factor} we have chosen
$$
\ell_c \tilde \ell_c = \frac{\pi^2 v_b \sqrt{v_m v_b}} {\Delta^2}
$$
so that the coupling constant in the main text is given by
$$
|\alpha|^2 = |g|^2 \ell_c \tilde \ell_c = \pi^2 v_b \sqrt{v_m v_b}
$$
\section{Some further details}
\subsection{Details of Integrals}
\label{sec:nastyintegral}
There are two integrals we would like to evaluate
$$
I_n= \int dx dx' \left( \frac{x }{x'} \right)^n \frac{(x-x')}{\sinh x - \sinh x' + \sinh (x - x')}
$$
for $n=0,1$.
We will do the $x$ integral first. Shift variables to $y=x-x'$ and rewrite the $\sinh$'s as exponentials. This brings the required integral to the form
$$
I_n = \int dx' \frac{2}{1 + e^{x'}} \frac{1}{(x')^n }\int dy \frac{y (x' +y)^n }{(e^y + e^{-x'})(1 - e^{-y})}
$$
The integrals over $y$ can be performed using 3.419.2 and 3.419.3 of Ref. [S3]
$$
\int dy \frac{y^{1+n}}{(e^{-x'} + e^y)(1 - e^{-y})} = \frac{1}{(2 + n)} \frac{[\pi^2 +{x'}^2](-x')^n}{e^{-x'} + 1}
$$
for $n=0,1$. We thus obtain
$$
I_n = \frac{1}{1 + 2 n} \int dx' \frac{1}{1 + e^{x'}} \left[\frac{\pi^2 + {x'}^2}{e^{-x'} +1} \right]
$$
Noting that
$$
\frac{1}{(1+e^{x})(1 + e^{-x})} = -\frac{d}{dx}\frac{1}{1+e^{x}}
$$
the integral is not difficult giving
$$
I_n = \frac{1}{1 + 2n} \frac{4 \pi^2}{3}
$$
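This value can be confirmed numerically (a quick Python check of the reduced one-dimensional integral; note that $1/[(1+e^{x})(1+e^{-x})] = 1/[4\cosh^2(x/2)]$):

```python
import numpy as np
from scipy.integrate import quad

# Check int dx (pi^2 + x^2) / ((1+e^x)(1+e^{-x})) = 4 pi^2 / 3, so that
# I_n = (4 pi^2 / 3) / (1 + 2n).  The cosh form avoids overflow at large |x|;
# the integrand is negligible beyond |x| ~ 100.
val = quad(lambda x: (np.pi**2 + x**2) / (4 * np.cosh(x / 2)**2), -100, 100)[0]
print(val, 4 * np.pi**2 / 3)  # ~13.1595 both
```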
\subsection{Some Useful Identities for $G_<$ and $G_>$}
Here are some general relationships that the Green's functions must obey. Note that these identities do not rely on translational invariance, nor do they require the operator $\psi$ to be a fermion creation operator.
$$
G_<(t,x,x') = \langle \psi^\dagger(t,x) \psi(0,x') \rangle = \langle e^{i t H} \psi^\dagger(x) e^{-itH} \psi(x') \rangle
$$
so
$$
G_<(t,x,x')^* = \langle \psi^\dagger(x') e^{i t H} \psi(x) e^{-itH} \rangle
= \langle \psi^\dagger(0,x') \psi(t,x) \rangle = G_<(-t,x',x)$$
Now consider
$$
G_<(E,x,x') = \int dt e^{-i t E} G_<(t,x,x')
$$
This gives us
\begin{equation}
G_<(E,x,x')^* = \int dt e^{i t E} G_<(-t,x',x) = \int dt e^{-i t E} G_<(t,x',x) = G_<(E,x',x) \label{eq:identity1}
\end{equation}
Note this equation is true for Fermi, Bose, and Majorana correlators.
Assuming thermal equilibrium, we can derive a further relationship between $G_<$ and $G_>$. \begin{eqnarray*}
G_<(t,x,x') &=& \langle \psi^\dagger(x,t) \psi(x',0) \rangle \\
&=& (1/Z) \sum_{nm} \langle n |\psi^\dagger(x,t) |m \rangle \langle m | \psi(x',0) | n \rangle e^{-\beta E_n} \\
&=& (1/Z) \sum_{nm} \langle n |\psi^\dagger(x) |m \rangle \langle m | \psi(x') | n \rangle e^{it (E_n - E_m)} e^{-\beta E_n}
\end{eqnarray*}
with $Z=\sum_n e^{-\beta E_n}$ the partition function.
Fourier transforming we get
\begin{eqnarray*}
G_<(E,x,x') &=&
(1/Z) \sum_{nm} \langle n |\psi^\dagger(x) |m \rangle \langle m | \psi(x') | n \rangle \delta(E - E_n + E_m) e^{-\beta E_n}
\end{eqnarray*}
Similarly let us calculate
\begin{eqnarray*}
G_>(t,x',x) &=& \langle \psi(x',t) \psi^\dagger(x,0) \rangle \\
&=& (1/Z) \sum_{nm} \langle m |\psi(x',t) |n \rangle \langle n | \psi^\dagger(x,0) | m \rangle e^{-\beta E_m} \\
&=& (1/Z) \sum_{nm} \langle m |\psi(x') |n \rangle \langle n | \psi^\dagger(x) | m \rangle e^{it (E_m - E_n)} e^{-\beta E_m}
\end{eqnarray*}
Fourier transforming (note the opposite transform convention)
\begin{eqnarray}
G_>(E,x',x) &=& \nonumber
(1/Z) \sum_{nm} \langle n |\psi^\dagger(x) |m \rangle \langle m | \psi(x') | n \rangle \delta(E - E_n + E_m) e^{-\beta E_m} \\
&=& G_<(E,x,x') e^{\beta E} \label{eq:Gid1}
\end{eqnarray}
Note that inserting this identity into the expressions of Eqs.~\ref{eq:Jeres1} and \ref{eq:JEres1} shows that there is no net electric or thermal current if there is no voltage difference and no temperature difference between the two systems.
\subsection{Integer Edge}
\label{sub:integeredge}
As a warm-up let us calculate the edge Green's function for an integer edge. We have Dirac fermions with anticommutation relations
$$
\{ \psi(x), \psi^\dagger(x') \} = \delta(x -x')
$$
Assuming a system size of $L$, we have $k$ quantized as $k=2 \pi n/L$. We then have the Fourier transform
$$
\psi_k = \frac{1}{\sqrt{L}} \int dx e^{i k x} \psi(x)
$$
and in reverse
$$
\psi(x) = \frac{1}{\sqrt{L}} \sum_k e^{-i k x} \psi_k
$$
The anticommutation relations are then
$$
\{ \psi_k, \psi^\dagger_{k'} \} = \frac{1}{L} \int dx \int dx' e^{i k x - i k' x'} \{ \psi(x), \psi^\dagger(x') \} = \frac{1}{L} \int dx \, e^{i (k -k') x} = \delta_{k,k'}
$$
We calculate
\begin{eqnarray*}
G_<(t,x',x) &=& \langle \psi^\dagger(x',t) \psi(x,0)\rangle = \frac{1}{L} \sum_{k,k'} e^{-i k x + i k' x'} \langle \psi^\dagger_{k'}(t) \psi_k(0) \rangle = \frac{1}{L} \sum_{k} e^{i k (x' - x)} \langle \psi^\dagger_{k}(t) \psi_k(0) \rangle
\\ G_>(t,x',x) &=& \langle \psi(x',t) \psi^\dagger(x,0)\rangle = \frac{1}{L} \sum_{k,k'} e^{-i k' x' + i k x} \langle \psi_{k'}(t) \psi^\dagger_k(0) \rangle = \frac{1}{L} \sum_{k} e^{i k (x - x')} \langle \psi_{k}(t) \psi^\dagger_k(0) \rangle
\end{eqnarray*}
Assuming the system is thermal with zero chemical potential, we then have
\begin{eqnarray*}
G_<(t,x',x) &=& \frac{1}{L} \sum_k e^{i k (x' - x) + i E_k t} n_F(E_k)
\\ G_>(t,x',x) &=& \frac{1}{L} \sum_{k} e^{i k (x - x') - i E_k t} n_F(-E_k)
\end{eqnarray*}
with $n_F$ the Fermi function. We will now specialize to a linear edge dispersion
$
E_k = v k
$.
Fourier transforming then gives
\begin{eqnarray*}
G_<(E,x',x) &=& \int dt \ e^{-i t E} G_<(t,x',x) = \frac{1}{L} \sum_k e^{i k (x' - x)} n_F(E) 2 \pi \delta(E - v k) \\ &=& \int dk \, e^{i k (x' - x)} n_F(E) \delta(E - vk) = v^{-1} e^{i (E/v) (x' - x)} n_F(E)
\\G_>(E,x',x) &=& \int dt \ e^{i t E} G_>(t,x',x)= \frac{1}{L} \sum_{k} e^{i k (x - x')} n_F(-E) \delta(E - vk)
\\ &=& \int dk \, e^{i k (x - x')} n_F(-E) \delta(E - vk) = v^{-1} e^{i (E/v) (x - x')} n_F(-E)
\end{eqnarray*}
Note that these expressions correctly satisfy Eq.~\ref{eq:Gid1}.
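This can be checked explicitly (an illustrative numerical sketch with arbitrary parameter values):

```python
import numpy as np

# Check Eq. (Gid1), G_>(E, x', x) = G_<(E, x, x') e^{beta E}, for the free
# chiral-fermion expressions above; it reduces to n_F(-E) = n_F(E) e^{beta E}.
def n_F(E, beta):
    return 1.0 / (np.exp(beta * E) + 1.0)

beta, v, E, x, xp = 0.7, 1.3, 0.9, 0.25, -0.6
G_less = np.exp(1j * (E / v) * (x - xp)) * n_F(E, beta) / v    # G_<(E, x, x')
G_great = np.exp(1j * (E / v) * (x - xp)) * n_F(-E, beta) / v  # G_>(E, x', x)
print(np.allclose(G_great, G_less * np.exp(beta * E)))  # True
```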
The integer Green's functions that we have calculated will constitute the $R$ side of the tunneling system in Eqs.~\ref{eq:Jeres1} and \ref{eq:JEres1} above.
\subsection{Majorana Edge Without impurity}
\label{sub:majoranaedgenoimpurties}
We start with the Majorana operators
$$
\{\xi(x) ,\xi(x') \} = \delta(x-x')
$$
Assume the system is of size $L$, and $k$ is quantized as $k = 2 \pi n/L$. Fourier transforming we get
$$
\xi_k = \frac{1}{\sqrt{L}} \int dx {e^{i k x}} \xi(x)
$$
and in reverse
$$
\xi(x) = \frac{1}{\sqrt{L}}\sum_k e^{-i k x} \xi_k
$$
so that
\begin{eqnarray*}
\{\xi_k, \xi_{k'}\} &=& \frac{1}{L} \int dx \int dx' e^{i k x + i k' x'} \{\xi(x) ,\xi(x') \}
\\
&=&
\frac{1}{L} \int dx \int dx' e^{i k x + i k' x'} \delta(x - x') \\
&=&
\frac{1}{L} \int dx e^{i (k +k') x} = \delta_{k+k',0}
\end{eqnarray*}
We can thus think of $\xi_k$ with $k > 0$ as Dirac fermion creation operators with the corresponding $\xi_{-k}$ being the annihilation operators. The vacuum is the absence of any fermions (or equivalently the negative $k$ states are filled).
In the absence of a localized Majorana, the correlator is
\begin{eqnarray}
G^\xi(t,x',x)&=&\langle \xi(x',t) \xi(x,0) \rangle = \frac{1}{L}\sum_{k,k'} e^{-i k x - i k' x'} \langle \xi_{k'}(t) \xi_{k}(0) \rangle \label{xx:corr} \\
&=& \frac{1}{L}\sum_{k} e^{i k (x' - x)} \langle \xi_{-k}(t) \xi_{k}(0) \rangle = \frac{1}{L}\sum_{k} e^{i k (x' - x) - i k v_m t} \langle \xi_{-k} \xi_{k} \rangle \nonumber
\\ &=& \frac{1}{L}\sum_{k} e^{i k (x' - x) - i k v_m t} \, (1 - n_F(v_m k)) = \frac{1}{L}\sum_{k} e^{i k (x' - x) - i k v_m t} \, n_F(-v_m k) \nonumber \\
&=& \frac{1}{L}\sum_{k} e^{-i k (x' - x) + i k v_m t} \, n_F(v_m k)
\end{eqnarray}
where $v_m$ is the Majorana mode velocity. Fourier transforming we obtain
\begin{eqnarray*}
G^\xi_<(E,x',x) &=& \frac{1}{L}\sum_{k} e^{-i k (x' - x)} \, n_F(v_m k) \int dt e^{-i t E + i k v_m t} \\ &=& v^{-1}_m n_F(E) e^{-i (E/v_m) (x' - x)}
\end{eqnarray*}
and correspondingly
\begin{eqnarray*}
G^\xi_>(E,x',x) &=& G^\xi_<(-E,x',x) = v_m^{-1}n_F(-E) e^{i (E/v_m) (x'-x)}
\end{eqnarray*}
\subsection{Bose Edge}
\label{sub:boseedge}
The $\nu=1/2$ Bose mode can be viewed as two separate Majorana modes [S2]. The boson operator for an edge with velocity $v$ is a product of the two Majoranas having the same velocity $v$. We thus write
$$
b(x) = \sqrt{\tilde \ell_c}\,\, \xi_1(x) \xi_2(x)
$$
with $\tilde \ell_c$ a cutoff length scale. As in the discussion above near Eq.~\ref{eq:cutoff}, we should choose this to be
$$
\tilde \ell_c = \frac{\pi v_b}{\Delta}
$$
with $v_b$ the Bose mode velocity and $\Delta$ the gap.
We thus have
$$
G^b_<(t,x',x) = \tilde \ell_c \langle \xi_2(x',t) \xi_1(x',t) \xi_1(x,0) \xi_2(x,0) \rangle = \tilde \ell_c [G^\xi(t,x',x)]^2
$$
Fourier transforming we have
\begin{eqnarray*}
G^b_<(E,x',x) &=& \frac{\tilde \ell_c}{2 \pi} \int dE' \, G^\xi_<(E-E',x',x) G^\xi_<(E',x',x) \\
&=& \frac{\tilde \ell_c}{2 \pi v^2} e^{i (E/v)(x-x')} \int dE' n_F(E - E') n_F(E')
\end{eqnarray*}
The final integral can be performed by elementary methods to give $ E n_B(E)$ with $n_B$ the Bose function. Thus we have
$$
G^b_<(E,x',x) = \frac{\tilde \ell_c}{2 \pi v^2} e^{i (E/v)(x-x')} \, E \, n_B(E)
$$
and correspondingly we have
$$
G^b_>(E,x',x) = \frac{\tilde \ell_c}{2 \pi v^2} e^{i (E/v)(x'-x)} \,(-E) \, n_B(-E)
$$
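The convolution identity $\int dE'\, n_F(E-E')\, n_F(E') = E\, n_B(E)$ used above is easily verified numerically (an illustrative sketch; `expit` is an overflow-safe logistic function, so $n_F(E) = \mathrm{expit}(-\beta E)$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit  # expit(-b*E) = 1/(e^{b E}+1), the Fermi function

beta, E = 1.0, 1.3
val = quad(lambda Ep: expit(-beta * (E - Ep)) * expit(-beta * Ep),
           -np.inf, np.inf)[0]
print(val, E / (np.exp(beta * E) - 1.0))  # ~0.4870 both
```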
\section{Majorana Edge Plus Majorana Impurity Scattering Problem}
\label{sec:phaseshift}
The scattering phase shift problem of a Majorana edge tunnel coupled to a Majorana impurity has been addressed a number of times previously (See Refs.~\onlinecite{Fendley,Roising,Bishara,Rosenow1,Rosenow2} of the main text). For completeness we give the key steps of the derivation here (in a slightly different language from that of the references).
We begin with a Hamiltonian density for the Chiral Majorana edge $\xi(x)$ coupled to a trapped Majorana $\gamma$ at position zero
$$
H = \int dx \,\, \left[ i (v/2) \, \xi(x) \partial_x \xi(x) + i \lambda \xi(x) \gamma \delta(x) \right]
$$
with $v$ the Majorana velocity and $\lambda$ the coupling strength. Here $\gamma$ is a Majorana so $\gamma^2=1$ and $\{ \gamma, \xi(x)\} = 0$. Note we also have $\{ \xi(x), \xi(x') \} = \delta(x - x')$.
The equations of motion are given by commutations $\partial_t \gamma = i [ H, \gamma]$ and $\partial_t \xi(x) = i [H, \xi(x)]$ which yields
$$
\partial_t \xi(x) = v \partial_x \xi(x) + \lambda \gamma \delta(x)
$$
Note that away from $x=0$ this gives the wave equation with velocity $v$. At $x=0$, keeping singular parts of this equation we get
$$
\lambda \gamma = -v \left[ \xi(0^+) - \xi(0^-) \right]
$$
And our second equation of motion is
$$
\partial_t \gamma = - \lambda \left[ \xi(0^+) + \xi(0^-) \right]
$$
Replacing $\partial_t$ by $-i \omega$ and solving we get
$$
\xi(0^+) = \left[ \frac{\omega + i \lambda^2/v}{\omega - i \lambda^2/v} \right] \xi(0^-)
$$
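Thus the trapped Majorana produces a pure (unimodular) phase factor, with phase shift $2\arctan(\lambda^2/v\omega)$ approaching a sign flip at low frequency. A small numerical illustration (parameter values arbitrary):

```python
import numpy as np

# S(w) = (w + i G)/(w - i G) with G = lambda^2 / v: full transmission with
# phase shift 2 arctan(G/w), tending to pi (a sign flip) as w -> 0.
lam, v = 0.8, 1.3
G = lam**2 / v
w = np.linspace(0.01, 10.0, 500)
S = (w + 1j * G) / (w - 1j * G)
unitary = np.allclose(np.abs(S), 1.0)
phase_ok = np.allclose(np.angle(S), 2 * np.arctan2(G, w))
print(unitary, phase_ok)  # True True
```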
\subsection{Bound on Fourier Component of Scattering}
\label{sub:suppedge}
As mentioned in the main text, the tunneling amplitude from a Majorana impurity to the edge should decay exponentially with some decay length $\zeta$. While no calculations have been made of such couplings, we can use numerical estimates of the decay of the energy splitting $E \sim E_{gap} e^{-R/\tilde \zeta}$ between two quasiholes separated by a distance $R$ \cite{baraban}, which give $\tilde \zeta = 2.3 \ell_B$. Since the energy splitting between two putatively degenerate quasihole states is linear in the matrix element, whereas here the coupling enters as $\lambda^2/v$ with $\lambda$ the matrix element, we instead obtain $e^{-2 R/\tilde \zeta}$, i.e., a decay length $\zeta = \tilde \zeta/2 \approx 1.15 \ell_B$.
The prefactor $\lambda$ has dimensions ${\rm{Energy}} \times \sqrt{{\rm length}}$, so its natural estimate should then be
$$
\lambda \approx E_{gap} \sqrt{\ell_B} \, e^{-R/\tilde \zeta}
$$
Thus we obtain a coupling energy
$$
E_{coupling} = \frac{\lambda^2}{v_m} = \frac{E_{gap}^2 \ell_B}{v_m} e^{-R/\zeta} \approx {\rm 1K}\, e^{-R/\zeta}
$$
where we have used $v_m \approx 10^5 {\rm cm}/{\rm sec}$ and $E_{gap} \approx 1$K and $\ell_B = 16$nm. (We need not be too precise about the prefactor since everything here is dominated by the exponent). To obtain $E_{coupling} \approx 4$mK, we then have
$$
R \approx 5.5 \zeta \approx 6.3 \ell_B
$$
As noted in the main text the smearing of the coupling along the edge should be over a length scale on order $w \approx \sqrt{R \zeta}$ which is then
$$
w \approx 3 \ell_B
$$
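The arithmetic behind these estimates (a trivial check of the quoted numbers):

```python
import numpy as np

# E_coupling ~ 1 K * exp(-R/zeta) = 4 mK, with zeta ~ 1.15 l_B; the smearing
# along the edge is w ~ sqrt(R * zeta).  Lengths below are in units of l_B.
zeta = 1.15
R_over_zeta = np.log(1.0 / 4e-3)   # ~5.5
R = R_over_zeta * zeta             # ~6.3 l_B
w = np.sqrt(R * zeta)              # ~2.7 l_B, rounded to ~3 l_B in the text
print(R_over_zeta, R, w)
```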
\section{Edge Equilibration}
\label{sec:edgeequil}
\subsection{Charge Equilibration}
The tunneling current leaving the integer edge at a single impurity is given by
\begin{equation}
\delta j_1 \ = \ - G \, (\mu_1 - \mu_B)
\end{equation}
Denoting the density of impurities by $n_{\rm imp}$ and considering a piece of the edge with length $\Delta x$, we find for the tunneling current
\begin{equation}
\Delta j_1 \ = \ - n_{\rm imp}\, \Delta x \, G\, (\mu_1 - \mu_B)
\end{equation}
Expressing the energy density of edge mode $i$ as ${1 \over 2 \kappa_i} \rho_i^2 - \mu_i \rho_i$, we find the relation $\rho_i = \kappa_i \mu_i$.
Since the current density is given by $j_i = v_i \rho_i$, and since $\kappa_i = {\nu_i \over 2 \pi v_i }$, we find that
\begin{equation}
\mu_i=\pm {2 \pi \over \nu_i} j_i \ \ .
\end{equation}
Here, the $+$ sign applies for the integer mode, and the $-$ sign for the Bose mode. We define $1/\ell_0 = 2 \pi n_{\rm imp} G$. Note that in the main text we write $G$ in terms of $h$ rather than $\hbar$, absorbing the $2 \pi$. We then obtain the following differential equation for the spatial change of the chemical potential of the integer edge mode
\begin{equation}
\partial_x \mu_1 \ = \ - {1 \over \ell_0} \, (\mu_1 - \mu_B) \ .
\end{equation}
For the change of the current of the Bose mode, one needs to take into account that the sign of tunneling current is opposite,
that the direction of the current is opposite to that of the integer mode, and that the filling fraction is $\nu_B = 1/2$. In total, one finds
\begin{equation}
\partial_x \mu_B \ = \ - {2 \over \ell_0} (\mu_1 - \mu_B)
\end{equation}
Taking the difference between the differential equations for integer and Bose mode, we finally obtain
\begin{equation}
\partial_x (\mu_1 - \mu_B) \ = \ {1 \over \ell_0} (\mu_1 - \mu_B) \ \ .
\end{equation}
Introducing the abbreviation $\Delta \mu = \mu_1 - \mu_B$, we can express the solution as
\begin{equation}
\Delta \mu(x) \ = \ \Delta \mu(L)\, e^{(x - L)/\ell_0} \ .
\end{equation}
In addition, the total current $j_1 + j_B$ is conserved, which implies for the chemical potentials
\begin{equation}
\mu_1 - {1 \over 2} \mu_B \ \equiv \ \mu_{\rm tot} \ .
\end{equation}
We now can express the chemical potentials of integer and Bose edge mode in terms of $\Delta \mu$ and $\mu_{\rm tot}$ as
\begin{equation}
\mu_1 \ = 2 \mu_{\rm tot} - \Delta \mu \ , \ \ \ \ \mu_B \ = \ 2(\mu_{\rm tot} - \Delta \mu) \ \ .
\end{equation}
We want to impose boundary conditions that the integer mode is injected into the edge at position $x=0$ with chemical potential $\mu_m$, and that the Bose
mode is injected into the edge with zero chemical potential at spatial position $x=L$. We then find the solutions
\begin{equation}
\mu_1(x) \ = \ \mu_m \left( 1 \ - \ {1 \over 2} e^{(x - L)/\ell_0} \right) \ , \ \ \ \ \mu_B(x) \ = \ \mu_m \left( 1 \ - \ e^{(x - L)/\ell_0} \right) \ .
\end{equation}
valid up to corrections of order $e^{-L/\ell_0}$, showing that charge equilibrates over the length $\ell_0$.
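These profiles can be checked directly (an illustrative numerical sketch with arbitrary values of $\ell_0$, $L$, $\mu_m$):

```python
import numpy as np

# Verify that the quoted profiles solve
#   d(mu_1)/dx = -(mu_1 - mu_B)/l0,   d(mu_B)/dx = -2 (mu_1 - mu_B)/l0,
# with mu_B(L) = 0 exactly and mu_1(0) = mu_m up to O(e^{-L/l0}).
l0, L, mu_m = 1.0, 10.0, 1.0
x = np.linspace(0.0, L, 20001)
mu1 = mu_m * (1 - 0.5 * np.exp((x - L) / l0))
muB = mu_m * (1 - np.exp((x - L) / l0))
d_mu1 = np.gradient(mu1, x, edge_order=2)
d_muB = np.gradient(muB, x, edge_order=2)
print(np.allclose(d_mu1, -(mu1 - muB) / l0, atol=1e-6))      # True
print(np.allclose(d_muB, -2 * (mu1 - muB) / l0, atol=1e-6))  # True
print(muB[-1], mu1[0] - mu_m)  # 0 and ~ -e^{-L/l0}/2
```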
\subsection{Thermal Edge Conductance}
\label{sub:thermaledge}
\subsubsection{Two mode model}
Here we assume the heat transferred between the Majorana mode and any other mode is negligible ($E_{coupling}$ very small, so $K^{im}$ and $K^{bm}$ are effectively zero), so we can perform the thermal transport calculation by considering only the integer and Bose modes. Here the energy density per unit length of a chiral mode is $(\pi^2/6) T^2/(2 \pi \hbar v)$ (setting $k_B=1$). We write the thermal transport equations in terms of the thermal current density as
$$
\partial_x J_i^Q = n_{imp} K^{ib} (T_i - T_b)
$$
for example, where $J_i^Q = (\pi^2/3)\, T\, T_i/(2 \pi \hbar)$ with $T_i = T + \mbox{small}$.
We thus have the transport equations
\begin{eqnarray}
\partial_x T_i &=& -\tilde K (T_i - T_b) \label{eq:thisoneTi} \\
\partial_x T_b &=& -\tilde K (T_i - T_b)
\end{eqnarray}
where
$$
\tilde K = n_{imp} K^{ib}/K_0
$$
We then have $\partial_x (T_i - T_b) = 0$ so $T_i - T_b$ is a constant and $\partial_x T_i$ and $\partial_x T_b$ are both constants. We can thus write
\begin{eqnarray}
T_i &=& T_i^0 - \beta x \label{eq:Ti0b} \\
T_b &=& T_b^0 + (L - x) \beta
\end{eqnarray}
where $T_i^0$ is the value of $T_i$ at $x=0$ and $T_b^0$ is the value of $T_b$ at $x=L$, i.e., these are the temperatures in the reservoirs. We thus have
$$
T_i - T_b = T_i^0 - T_b^0 - L \beta
$$
Plugging the form of $T_i$ from Eq.~\ref{eq:Ti0b} into Eq.~\ref{eq:thisoneTi} we obtain
$$
\beta = \tilde K (T_i^0 - T_b^0 - L \beta )
$$
which we solve to get
$$
\beta = \frac{\tilde K (T^0_i - T^0_b)}{ 1 + \tilde K L}
$$
The total heat current (for one edge only) is then
$$
J = K_0 (T_i - T_b) = \frac{K_0 }{1 + \tilde K L} (T_i^0 - T_b^0)
$$
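A direct check of this two-mode result (an illustrative sketch; parameter values arbitrary):

```python
import numpy as np

# Verify that the piecewise-linear profiles solve
#   dT_i/dx = dT_b/dx = -Ktil (T_i - T_b)
# with T_i(0) = Ti0, T_b(L) = Tb0, and that the (constant) heat current is
#   J / K_0 = (Ti0 - Tb0) / (1 + Ktil * L).
Ktil, L, Ti0, Tb0 = 0.7, 5.0, 1.0, 0.0
slope = Ktil * (Ti0 - Tb0) / (1 + Ktil * L)   # the constant beta
x = np.linspace(0.0, L, 101)
Ti = Ti0 - slope * x
Tb = Tb0 + (L - x) * slope
ode_ok = np.allclose(-slope, -Ktil * (Ti - Tb))  # both ODEs reduce to this
J_over_K0 = Ti[0] - Tb[0]                        # T_i - T_b is constant
print(ode_ok, np.isclose(J_over_K0, (Ti0 - Tb0) / (1 + Ktil * L)))  # True True
```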
\subsubsection{Three mode model}
We will now assume that the thermal conductance between the Majorana mode and the integer and Bose mode is small, but not negligible (i.e., $K^{im}$ and $K^{bm}$ are small but not zero). This is a bit more complicated and we only sketch the solution. Here we write an equation for all three edges
$$
c^{\alpha} K_0 \partial_x T^\alpha =
\partial_x J^\alpha = -n_{imp} \sum_\beta K^{\alpha \beta} T_\beta
$$
where $\alpha,\beta$ are $i$,$b$ or $m$ and $c^\alpha$ is the signed central charge of the three edges $(-1,1,1/2)$ respectively. Here we take the diagonal components to be
$$
K^{\alpha \alpha} = -\sum_{\beta \neq \alpha} K^{\alpha \beta}
$$
so that the full $K$ matrix is taken to be (with rows and columns in the order $i$, $b$, $m$)
$$
K = \left(
\begin{array}{ccc}
-\epsilon-1 & 1 & \epsilon \\
1 & -2 \epsilon-1 & 2 \epsilon \\
\epsilon & 2 \epsilon & -3 \epsilon \\
\end{array}
\right) K^{ib}
$$
which gives us
$$
M^{\alpha
\beta}= n_{imp}K_0^{-1} (c^\alpha)^{-1} K^{\alpha \beta} = n_{imp} K^{ib} K_0^{-1} \tilde M
$$
with $$\tilde M =
\left(
\begin{array}{ccc}
\epsilon+1 & -1 & -\epsilon \\
1 & -2 \epsilon-1 & 2 \epsilon \\
2 \epsilon & 4 \epsilon & -6 \epsilon \\
\end{array}
\right)
$$
which gives us the equation
\begin{equation}
\label{eq:maineq}
\partial_x T^\alpha =
-\tilde M^{\alpha \beta} T^{\beta}
\end{equation}
where $x$ is now measured in units of the Bose relaxation length $\ell^b_q = K_0/(K^{ib} n_{imp})$.
We thus solve for the eigenvalues $\lambda_j$ and eigenvectors $t_j^\alpha$ of the matrix $-\tilde M^{\alpha \beta}$. The general solution will be
$$
T_\alpha(x) = \sum_j a_j t_j^\alpha e^{\lambda_j x}
$$
We set the initial conditions of the system to be
\begin{eqnarray*}
T_i(0) &=& T_0 \\
T_b(L) &=& T_1 \\
T_m(L) &=& T_1
\end{eqnarray*}
and solve for the coefficients $a_j$. The total thermal current (which can be calculated at any position) is $J=\sum_\alpha c^\alpha K_0 T^\alpha$. While the general expressions are rather messy, they can be obtained analytically with Mathematica (the precise expressions are not enlightening).
We generally obtain an edge conductance (adding the two additional integer modes, and accounting for heat flowing on both sides of the sample) given by
$$
K/K_0 = 2.5 + \frac{2}{1 + A T } + {\cal O}(\epsilon)
$$
where $\epsilon = (32/(9 \pi^3)) (E_{coupling}/T)$ is generally small. We will derive this next.
\subsubsection{Analytic Derivation for small $\epsilon$}
If we assume that $K^{im}$ and $K^{bm}$ are small we can expand to linear order in these small parameters and obtain analytically simple results. This is justified by the fact that we have been working to linear order in $E_{coupling}/T$.
We obtain an edge conductance (adding the two additional integer modes, and accounting for heat flowing on both sides of the sample) given by
$$
K/K_0 = 2.5 + \frac{2}{1 + A T } - \epsilon \, C(A T)
$$
where $A = L/(\ell^b_q T)$ and
$$
C(x) = x \,\,\, \frac{2 + 2x + x^2}{ (1 + x)^2} $$
and $\epsilon = (32/(9 \pi^3)) (E_{coupling}/T)$. Since we will only be concerned with cases where $x=A T > 1$ we can approximate
$$
C(x) \approx x
$$
which we use within the main text.
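As a check, the full three-mode boundary-value problem can be solved numerically with a matrix exponential and compared with this expansion. Per side (with $T_i(0)=0$, $T_b(L)=T_m(L)=1$, and $x$ in units of the relaxation length), the heat current is $J/K_0 = T_b(0) + T_m(0)/2$, and the quoted total corresponds to $1.5$ plus twice this one-side value. The following sketch confirms $J/K_0 \approx 1/2 + 1/(1+L) - (\epsilon/2)\, C(L)$ at small $\epsilon$:

```python
import numpy as np
from scipy.linalg import expm

# Solve dT/dx = -Mtil T with T = (T_i, T_b, T_m), boundary conditions
# T_i(0) = 0, T_b(L) = T_m(L) = 1, and compare the one-side heat current
# J/K0 = T_b(0) + T_m(0)/2 with the small-eps expansion.
eps, L = 1e-4, 5.0
Mtil = np.array([[eps + 1.0, -1.0, -eps],
                 [1.0, -2.0 * eps - 1.0, 2.0 * eps],
                 [2.0 * eps, 4.0 * eps, -6.0 * eps]])
U = expm(-Mtil * L)                # propagator: T(L) = U @ T(0)
A = np.array([[U[1, 1], U[1, 2]],  # rows b, m acting on (T_b(0), T_m(0))
              [U[2, 1], U[2, 2]]])
Tb0, Tm0 = np.linalg.solve(A, [1.0, 1.0])
J = Tb0 + 0.5 * Tm0
C = L * (2 + 2 * L + L**2) / (1 + L)**2
J_pert = 0.5 + 1.0 / (1 + L) - 0.5 * eps * C
print(J, J_pert)  # agree to O(eps^2)
```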
We now turn to deriving this result. Our approach will be to first solve the problem in the limit $\epsilon=0$ and then treat $\epsilon$ as a perturbation. Since we are solving a linear system of equations that is invariant under shifting all temperatures by a constant, $T \rightarrow T + \mbox{const}$, for simplicity we can set $T_0=0$ and $T_1=1$; this lets us avoid carrying around the average temperature.
In the $\epsilon=0$ limit (as we have calculated before) we have
\begin{eqnarray*}
T_i(x) &=& \frac{x}{1 + L} \\
T_b(x) &=& \frac{1 + x}{1 + L} \\
T_m &=& 1
\end{eqnarray*}
where here we measure both $x$ and $L$ in units of the Bose relaxation length $\ell^b_q$ (we will continue to do so for simplicity of notation).
We then have our differential equation for $T_m$ (The third line of the matrix Eq.~\ref{eq:maineq})
\begin{equation}
\partial_x T_m = -2 \epsilon (T_i + 2 T_b - 3 T_m)
\label{eq:Tmdiff}
\end{equation}
Plugging in the $\epsilon=0$ results for the variables on the right hand side and integrating we get
\begin{equation}
\label{eq:Tmres}
T_m(x) = 1 + \epsilon \frac{-2 L - 3 L^2 + 2 x + 6 L x - 3 x^2}{1 + L}
\end{equation}
Note that this correctly gives $T_m = T_1=1$ at $x=L$.
We still need to find $T_b$ and $T_i$. Let us define
\begin{eqnarray*}
T_+ &=& T_i + T_b \\
T_- &=& T_i - T_b
\end{eqnarray*}
The first two lines of the matrix Eq.~\ref{eq:maineq} can be subtracted to give
$$
\partial_x T_- = -\epsilon (T_i + 2 T_b - 3 T_m)
$$
comparing this to Eq.~\ref{eq:Tmdiff} we realize that we have
$$
T_- = T_m/2 + C_1
$$
with $C_1$ some constant. Note that the heat current at $x=0$ is precisely $-K_0 C_1 = K_0 (T_b(0) + T_m(0)/2)$ since $T_i(x=0)=0$.
The equation for $T_+$ is given by adding the first two lines of the matrix Eq.~\ref{eq:maineq}
\begin{eqnarray*}
\partial_x T_+ &=& -(2 + \epsilon) T_i + (2 + 2 \epsilon) T_b - \epsilon T_m \\
&=& -2 T_- - \epsilon (T_i - 2 T_b + T_m)
\end{eqnarray*}
In the final $\epsilon$ term we can use the unperturbed values of $T_b$ and $T_m$, and for $T_-$ we can use $T_m/2 +C_1$ yielding
$$
T_+ = -x + \epsilon \frac{2 x + 2 L x + 6 L^2 x - x^2 - 6 L x^2 + 2 x^3}{2(1+L)} - 2 C_1 x + C_2
$$
We then have to impose the boundary conditions. First, we have $T_i(0) = 0$ giving
\begin{eqnarray*}
0 &=& T_-(0) + T_+(0) = T_m(0)/2 + C_1 + C_2 \\
0 &=& C_2 + C_1 + \epsilon \frac{-2 L - 3 L^2}{2(1 + L)} + \frac{1}{2}
\end{eqnarray*}
where in going to the second line we have used Eq.~\ref{eq:Tmres}.
Secondly, we impose $T_b(L) = 1$, via
$$
2 = T_+(L) - T_-(L) = -L + \epsilon \frac{L^2 + 2 L + 2 L^3}{2 (1 + L)} - 2 C_1 L + C_2 - (1/2 + C_1)
$$
Subtracting these two equations from each other removes $C_2$ giving
$$
2 = -2 C_1 (L+1) + \left( -L + \epsilon \frac{4 L + 4 L^2 + 2 L^3 }{2(1 + L)}\right) - 1
$$
which we can then solve for $C_1$ giving
$$
C_1 = -\frac{3 + L}{2 (1 + L)} + \epsilon \frac{2 L + 2 L^2 + L^3}{2 (1 + L)^2}
$$
Multiplied by $-K_0$ this gives the thermal current (matching the expression for the heat current at $x=0$ given above); we then multiply by two to count both sides. This matches the result quoted above.
\vspace*{10pt}
\hrule
\vspace*{10pt}
[S1] See for example, C.L. Kane and M.P.A. Fisher in Perspectives on Quantum Hall Effects, edited by S. Das Sarma and A. Pinczuk (Wiley, New York, 1997).
[S2] M. Levin, B. I. Halperin, and B. Rosenow, Phys. Rev. Lett. 99, 236806 (2007); S.-S. Lee, S. Ryu, C. Nayak, and M. P. A. Fisher, Phys.
Rev. Lett. 99, 236807 (2007).
[S3] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 5th ed. (Academic Press, 1994).
\end{document}
2211.07014
\section{Introduction}\label{Sec:1}
Breathers are coherent nonlinear pulsating wave structures living on an unstable background, whose theoretical description represents a generalization of soliton theory \cite{NovikovBook1984,AblowitzBook1981,akhmediev1997nonlinear}. The interest in studying breathers is both theoretical and practical. On the one hand, these nonlinear wave groups can be described by exactly solvable, i.e., integrable models, for example by the one-dimensional focusing nonlinear Schrödinger equation (NLSE) \cite{zakharov1972exact,kuznetsov1977solitons,kawata1978inverse,ma1979perturbed,NovikovBook1984}. As such, the class of breather solutions describes an essential part of the dynamics of the integrable system. On the other hand, the breather model is applicable in a wide range of physical systems as diverse as light in optical fibers, fluids, plasma, and Bose-Einstein condensates \cite{kivshar2003optical,OsborneBook2010,maimistov2013nonlinear}. Many experiments have confirmed the existence of breathers in nature, encouraging theoreticians to predict novel scenarios of breather propagation and interactions.
The scalar NLSE breathers have been the focus of studies for the past decades, revealing such fundamental building blocks of breather dynamics as the Kuznetsov, Akhmediev, Peregrine, and Tajiri-Watanabe solutions \cite{kuznetsov1977solitons,akhmediev1985generation,peregrine1983water,tajiri1998breather}; as well as superregular and ghost interaction patterns \cite{frisquet2013collision,gelash2014superregular,kibler2015superregular,xu2019breather}, and breather wave molecules \cite{xu2019breather}. All these scenarios of nonlinear wavefield evolution have been confirmed experimentally with optical, hydrodynamical, and plasma setups \cite{kibler2010peregrine,kibler2012observation,frisquet2013collision,xu2019breather,xu2020ghost,chabchoub2014hydrodynamics,chabchoub2011rogue,chabchoub2019drifting,bailung2011observation}. In addition, breathers play an essential role in the formation of rational rogue waves \cite{akhmediev2009rogue,akhmediev2009excite}, in modulation instability (MI) development \cite{akhmediev1985generation,zakharov2013nonlinear}, and in the dynamics and statistics of complex nonlinear random wave states \cite{narhi2016real,soto2016integrable,osborne2019breather,roberti2021numerical}.
In this work we consider the vector two-component extension of the NLSE -- the Manakov system \cite{manakov1974theory}. In the presence of constant background fields having amplitudes $A_{1,2}=\mathrm{const}$, $A = \sqrt{A_1^2+A_2^2}$, the Manakov system can be written as follows,
\begin{eqnarray}
\label{VNLSE}
i \psi_{1t}+\frac{1}{2}\psi_{1xx}+(|\psi_1|^{2}+|\psi_2|^{2}-A^2)\psi_1=0,
\\\nonumber
i \psi_{2t}+\frac{1}{2}\psi_{2xx}+(|\psi_1|^{2}+|\psi_2|^{2}-A^2)\psi_2=0.
\end{eqnarray}
where $t$ is time, $x$ is the spatial coordinate, and $\psi_{1,2}$ are the components of a two-component complex wave field. The presence of the constant background, often referred to as a condensate, implies the following boundary conditions: $|\psi_{1,2}|^2 \to |A_{1,2}|^2$ at $x \to \pm \infty$.
In the case of small-amplitude condensate perturbations, the linear analysis of the system (\ref{VNLSE}) reveals two branches of the dispersion law $\omega_{\mathrm{I}}(k)$ and $\omega_{\mathrm{II}}(k)$, see Appendix section \ref{Sec:Appendix:1} for the derivation details,
\begin{equation}
\label{dispersion_laws}
\omega_{\mathrm{I}}(k) = \pm k\sqrt{k^2/4 - A^2}, \qquad \omega_{\mathrm{II}}(k) = \pm k^2/2.
\end{equation}
The first branch $\omega_{\mathrm{I}}(k)$ is the same as in the scalar NLSE with a one-component condensate of amplitude $A$, and leads to the long-wave MI in the spectral region $k\in (-2A,2A)$. The second one, $\omega_{\mathrm{II}}(k)$, is the same as in the scalar NLSE on zero background, and corresponds to stable small-amplitude linear waves.
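As a small numerical illustration of the unstable branch (standard scalar-NLSE MI facts, stated here for orientation): for $|k|<2A$ the frequency $\omega_{\mathrm{I}}$ is imaginary, with growth rate $|k|\sqrt{A^2-k^2/4}$, maximal at $k=\sqrt{2}A$ where it equals $A^2$.

```python
import numpy as np

# Growth rate of modulation instability from the branch w_I(k):
# Im w_I = |k| sqrt(A^2 - k^2/4) for |k| < 2A, maximal at k = sqrt(2) A
# where it equals A^2.
A = 1.5
k = np.linspace(1e-6, 2 * A - 1e-6, 200001)
growth = k * np.sqrt(A**2 - k**2 / 4)
k_max = k[np.argmax(growth)]
print(k_max, np.sqrt(2) * A)   # ~2.1213 both
print(growth.max(), A**2)      # ~2.25 both
```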
The system (\ref{VNLSE}), first considered by S.V. Manakov in \cite{manakov1974theory}, is now widely used in nonlinear optics as a model of optical pulse propagation in a birefringent optical fiber \cite{agrawal2000nonlinear,maimistov2013nonlinear}. The two components $\psi_{1}$ and $\psi_{2}$ describe different light polarizations, whose nonlinear interactions produce a broad family of complex nonlinear phenomena. Remarkably, the Manakov system, like its scalar counterpart the NLSE, belongs to the class of equations integrable using the Inverse Scattering Transform (IST) technique \cite{manakov1974theory}. The IST allows finding exact multi-soliton solutions and an asymptotic description of an arbitrary pulse evolution \cite{AblowitzBook1981,NovikovBook1984}. The key role in the IST construction for the Manakov system is played by an auxiliary linear system of $3\times 3$ matrix wave functions (Jost functions), see e.g. \cite{maimistov2013nonlinear}. In the case of zero background, each complex eigenvalue of the auxiliary system corresponds to a vector (polarized) soliton in the wavefield. S.V. Manakov presented the first study of vector soliton dynamics and demonstrated that the solitons can change polarization as a result of mutual collisions \cite{manakov1974theory}. By now, such solitons have been studied in detail; see for example \cite{ablowitz2004soliton} and the monograph \cite{maimistov2013nonlinear}.
In the presence of a condensate, vector solitons transform into vector breathers, characterized by discrete eigenvalues of the same $3\times 3$ auxiliary system. Vector breathers fit the recent trend in nonlinear studies toward systems of increasing complexity. The first results on vector generalizations of the Kuznetsov, Akhmediev, and Peregrine breathers have been presented within the past decade, see \cite{priya2013akhmediev,chen2015vector} and the recent work \cite{che2022nondegenerate}. Vector rogue waves have been studied theoretically, numerically and experimentally in \cite{baronio2012solutions,baronio2014vector,frisquet2015polarization,manvcic2018statistics,degasperis2019rogue}. In addition, the work \cite{kraus2015focusing} provided a detailed study of the initial value problem for the system (\ref{VNLSE}) based on the IST Riemann approach and suggested a general classification of the types of vector breathers based on the analytical properties of the wave field Jost functions. According to this classification, the vector breathers of the Manakov system represent three fundamental types -- type I, II, and III. Type I is a direct analog of the scalar NLSE breathers, while types II and III exhibit fundamentally different dynamics specific to the vector case. Recently we have found that vector breathers can participate in a resonance interaction, see our Letter \cite{raskovalov2022resonant}. The resonance represents a three-breather process, i.e., a fusion of two breathers of types I and II into one breather of type III, which we denote schematically as $\mathrm{I} + \mathrm{II} \rightarrow \mathrm{III}$; or the opposite process, the decay $\mathrm{III} \rightarrow \mathrm{I} + \mathrm{II}$.
In this work, we study interactions of the vector breathers, including the resonance situations. With the vector variant of the dressing method scheme, see \cite{raskovalov2022resonant}, we obtain a general multi-breather solution of Eq.~(\ref{VNLSE}). First, we analyze single breathers and show that the type I breathers correspond to the first branch $\omega_{\mathrm{I}}(k)$ of the dispersion laws (\ref{dispersion_laws}), while types II and III are linked to the second branch $\omega_{\mathrm{II}}(k)$. This correspondence holds for the decaying tails of the breathers. Then we study the resonance breather interactions and two-breather solutions. We derive asymptotic expressions for the position and phase shifts acquired by breathers after mutual collisions. These formulas allow us to describe the asymptotic states of multi-breather ensembles and reveal the mathematical nature of the resonance interactions. More precisely, we find that, similar to the three-wave system case \cite{ZakharovManakov1976theory}, the resonance interaction of vector breathers can be explained as a limiting case of an infinite space shift acquired by one of the breathers. Finally, we demonstrate that only type $\mathrm{I}$ breathers participate in the development of MI from small-amplitude perturbations within the superregular scenario, see \cite{zakharov2013nonlinear,gelash2014superregular}, while the breathers of types $\mathrm{II}$ and $\mathrm{III}$, belonging to the stable branch of the dispersion law, are not involved in this process.
The paper is organized as follows. In the next section \ref{Sec:2} we construct the general scheme of the vector dressing method and find the $N$-breather solution of the Manakov system in the presence of the condensate. In section \ref{Sec:3} we analyze the single-breather solutions of the Manakov system and their relations to the branches of the dispersion law. Then in section \ref{Sec:4} we consider the case of resonant interactions. In section \ref{Sec:5} we analyze the general two-breather solution of the Manakov system (\ref{VNLSE}), which describes elastic collisions of breathers, and conclude the section by finding the shifts of the positions and phases acquired by the breathers after their interaction. Finally, in section \ref{Sec:6} we investigate important particular cases of the vector two-breather solution. The last section \ref{Sec:7} presents discussions and conclusions. The appendix section \ref{Sec:Appendix} provides additional computational details, a complete table of the position and phase shift expressions, and additional illustrations.
\section{Dressing method for vector breathers}\label{Sec:2}
In this section we build a dressing method scheme for constructing multi-breather solutions of the Manakov system. Previously, in the Letter \cite{raskovalov2022resonant}, we presented a shorter version of this scheme limited to single-eigenvalue solutions. Note that the dressing method, also known as the Darboux dressing scheme, is a popular tool for constructing exact solutions of integrable nonlinear PDEs and has many variations, see \cite{NovikovBook1984,zakharov1978relativistically,matveev1991darboux,akhmediev1991extremely}. We use a vector analog of the scalar dressing scheme for the NLSE developed in \cite{gelash2014superregular}.
The dressing method starts from introducing the auxiliary linear system for the $3\times 3$ matrix wave function $\mathbf{\Phi}$ depending on $x$, $t$ and the complex spectral parameter $\lambda$:
\begin{eqnarray}
&&\mathbf{\Phi}_x=\mathbf{U}\mathbf{\Phi}, \label{lax system 1}
\\
&&\mathbf{\Phi}_t = \mathbf{V}\mathbf{\Phi} = -(\lambda\mathbf{U}+i\mathbf{W}/2)\mathbf{\Phi}. \label{lax system 2}
\end{eqnarray}
Here $\mathbf{U}$ and $\mathbf{W}$ are the following $3\times 3$ matrices:
\begin{eqnarray}
\label{U and W def}
&&\mathbf{U}=
\left(
\begin{array}{ccc}
-i\lambda & \psi_1 & \psi_2 \\
-\psi^*_1 & i\lambda & 0 \\
-\psi^*_2 & 0 & i\lambda \\
\end{array}
\right)
,
\\\nonumber
&&\mathbf{W}=
\\\nonumber
&&\left(
\begin{array}{ccc}
|\psi_1|^{2}+|\psi_2|^{2}-A^{2} & \psi_{1x} & \psi_{2x} \\
\psi^*_{1x} & -|\psi_1|^{2}+A^{2} & -\psi^*_1 \psi_2 \\
\psi^*_{2x} & -\psi_1 \psi^*_2 & -|\psi_2|^{2}+A^{2} \\
\end{array}
\right).
\end{eqnarray}
The Manakov system (\ref{VNLSE}) represents the compatibility condition of the equations (\ref{lax system 1}) and (\ref{lax system 2}) written as,
$$
\mathbf{\Phi}_{xt} = \mathbf{\Phi}_{tx}.
$$
From Eqs.~(\ref{lax system 1}) and (\ref{lax system 2}) we find the following auxiliary equations for $\mathbf{\Phi}^{-1}$ and $\mathbf{\Phi}^\dag$:
\begin{eqnarray}
&&\mathbf{\Phi}^{-1}_x = - \mathbf{\Phi}^{-1} \mathbf{U}, \qquad \mathbf{\Phi}^{-1}_t = - \mathbf{\Phi}^{-1} \mathbf{V}, \label{rs1}\\
&&\mathbf{\Phi}^\dag_x = \mathbf{\Phi}^\dag \mathbf{U}^\dag, \qquad\qquad \mathbf{\Phi}^\dag_t = \mathbf{\Phi}^\dag \mathbf{V}^\dag . \label{s2}
\end{eqnarray}
Here the sign $\dag$ denotes Hermitian conjugation. Comparing formulas (\ref{rs1}) and (\ref{s2}), and using the symmetry properties $\mathbf{U}^\dag (\lambda^*)=-\mathbf{U}(\lambda)$, $\mathbf{V}^\dag (\lambda^*)=-\mathbf{V}(\lambda)$, we find that $\mathbf{\Phi}$ satisfies the following reduction:
\begin{equation}
\mathbf{\Phi}^\dag (\lambda^*) = \mathbf{\Phi}^{-1}(\lambda). \label{rr}
\end{equation}
At the first step of the dressing procedure we find the solution $\mathbf{\Phi}_\mathrm{c}$ of the system (\ref{lax system 1}), (\ref{lax system 2}) for the condensate background $\psi_{1,2}=A_{1,2}$:
\begin{equation}
\mathbf{\Phi}_\mathrm{c} (x, t, \lambda)= [(1+r^2) e^{-\varphi_0}]^{-1/3}\left(\begin{array}{ccc}
0, & e^{\varphi}, & -\mathrm{i}\,r e^{-\varphi}\\
-\frac{A_2}{A}\, e^{-\varphi_{0}}, & -\frac{A_1}{A}\,\mathrm{i}\, r e^{\varphi}, & \frac{A_1}{A}\,e^{-\varphi}\\
\frac{A_1}{A}\,e^{-\varphi_{0}},& -\frac{A_2}{A}\,\mathrm{i}\, r e^{\varphi}, & \frac{A_2}{A}\,e^{-\varphi}
\end{array}\right) ,
\label{Psio}
\end{equation}
where,
\begin{equation}
\label{zeta_def}
r = A/(\lambda+\zeta), \qquad \zeta =\sqrt{\lambda^2+A^2},
\end{equation}
and the functions $\varphi_{0}$ and $\varphi$ are,
\begin{equation}
\label{q_phases}
\varphi_{0} = -\mathrm{i}\,\lambda\,x + \frac{\mathrm{i}}{2}\,\bigl (\lambda^2 + \zeta^2 \bigr ) \, t,
\qquad
\varphi = -\mathrm{i} \zeta\, x + \mathrm{i}\,\lambda\, \zeta\, t.
\end{equation}
We imply that the function $\zeta(\lambda)$ has the branch cut on the interval $[-iA,iA]$, which differs from the automatic choice $\{ [-i\infty,-iA]\cup[iA,\infty i] \}$ implied in software packages such as {\it Wolfram Mathematica}. As we will see later, the choice of the branch cut is essential for constructing breather solutions.
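As an illustration (a hypothetical helper, not code from the paper), the required branch choice can be realized by writing $\zeta = \lambda\sqrt{1 + A^2/\lambda^2}$ for $\lambda\ne 0$, which moves the principal-branch cut of the square root onto the segment $[-iA,iA]$ and gives $\zeta(\lambda)\to\lambda$ at large $|\lambda|$:

```python
import cmath

A = 1.0

def zeta_principal(lam):
    # naive definition: inherits the principal square-root cut, which for
    # sqrt(lam^2 + A^2) lies on (-i*inf, -iA] U [iA, i*inf) -- the
    # "automatic" choice of standard software
    return cmath.sqrt(lam**2 + A**2)

def zeta_cut(lam):
    # same function, but with the branch cut moved to the segment [-iA, iA]
    # (valid for lam != 0); zeta_cut(lam) ~ lam at large |lam|
    return lam * cmath.sqrt(1 + (A / lam)**2)

# continuity across the imaginary axis at lam ~ 2i (above the point iA):
left, right = -1e-6 + 2j, 1e-6 + 2j
print(abs(zeta_cut(left) - zeta_cut(right)))            # ~ 0: continuous
print(abs(zeta_principal(left) - zeta_principal(right)))  # ~ 2*sqrt(3): jumps
```

Conversely, `zeta_cut` jumps across the segment $[-iA,iA]$ (e.g. near $\lambda = 0.5i$), exactly as required.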
The solution $\mathbf{\Phi}_\mathrm{c} (x, t, \lambda)$ satisfies the auxiliary system:
\begin{equation}
\mathbf{\Phi}_{\mathrm{c}\,x}=\mathbf{U}_\mathrm{c} \mathbf{\Phi}_\mathrm{c}, \qquad \mathbf{\Phi}_{\mathrm{c}\,t} = \mathbf{V}_\mathrm{c} \mathbf{\Phi}_\mathrm{c}, \label{la}
\end{equation}
where $\mathbf{V}_\mathrm{c} = -(\lambda\,\mathbf{U}_\mathrm{c} + \mathrm{i} \mathbf{W}_\mathrm{c})$, and
\begin{eqnarray}
\nonumber
&&\mathbf{U}_\mathrm{c}=
\left(
\begin{array}{ccc}
-\mathrm{i}\lambda & A_1 & A_2 \\
-A_1 & \mathrm{i}\lambda & 0 \\
-A_2 & 0 & \mathrm{i}\lambda \\
\end{array}
\right) ,
\\
\nonumber
&&\mathbf{W}_\mathrm{c}=
\frac12\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & A_2^{2} & -A_1 A_2 \\
0 & -A_1 A_2 & A_1^{2} \\
\end{array}
\right) .
\end{eqnarray}
In accordance with \cite{gelash2014superregular}, we introduce the dressing function $\boldsymbol{\chi}$ as,
\begin{equation}
\boldsymbol{\chi} = \mathbf{\Phi} \,\mathbf{\Phi}_\mathrm{c}^{-1}. \label{od}
\end{equation}
The dressing function satisfies the asymptotic condition:
\begin{equation}
\boldsymbol{\chi}(\lambda) \to \mathbf{E} + \frac{\mathbf{N}}{\lambda}+O(\lambda^{-2}), \qquad \textrm{at} \quad |\lambda| \to \infty , \label{usl}
\end{equation}
where $\mathbf{N}$ is a constant matrix and $\mathbf{E}$ is the unit matrix.
From Eq.~(\ref{rr}) we also find that the function $\boldsymbol{\chi} (x, t, \lambda)$ satisfies the following reduction:
\begin{equation}
\boldsymbol{\chi}^\dag (\lambda^*) = \boldsymbol{\chi}^{-1}(\lambda). \label{rrr}
\end{equation}
Then, using Eqs.~(\ref{lax system 1}), (\ref{lax system 2}), (\ref{rs1}), and (\ref{s2}) we obtain the system of linear equations for the inverse dressing function:
\begin{eqnarray}
\boldsymbol{\chi}^{-1}_x = -\boldsymbol{\chi}^{-1} \mathbf{U} + \mathbf{U}_c \boldsymbol{\chi}^{-1}, \label{rs2}\\
\boldsymbol{\chi}^{-1}_t = -\boldsymbol{\chi}^{-1} \mathbf{V} + \mathbf{V}_c \boldsymbol{\chi}^{-1}. \nonumber
\end{eqnarray}
Now, choosing the matrix $\boldsymbol{\chi}$ such that the matrices $\mathbf{U}$ and $\mathbf{V}$ are regular in the $\lambda$-plane, we obtain a new solution of Eqs.~(\ref{lax system 1}),~(\ref{lax system 2}). Substituting the expansion (\ref{usl}) into equation (\ref{rs2}), we arrive at the final formulas for the components $\psi_{1,2}$:
\begin{equation}
\psi_1 = A_1 + 2 \,\mathrm{i} \,N_{12}, \qquad \psi_2 = A_2 + 2 \,\mathrm{i}\, N_{13} . \label{res}
\end{equation}
First we suppose that the function $\boldsymbol{\chi}$ has only one pole at $\lambda = \lambda_1$, so that it can be written as,
\begin{equation}
\boldsymbol{\chi} = \mathbf{E} + \frac{\mathbf{N}}{\lambda-\lambda_1}. \label{E}
\end{equation}
The constant $\lambda_1$ represents a discrete eigenvalue of the system (\ref{lax system 1},\ref{lax system 2}), which means that the corresponding wave function is bounded from both sides in space, see e.g., \cite{kraus2015focusing}. Then from (\ref{rrr}) we obtain that:
\begin{equation}
\boldsymbol{\chi}^{-1} = \mathbf{E} + \frac{\mathbf{N}^\dag}{\lambda-\lambda_1^*}. \label{E1}
\end{equation}
From the condition $\boldsymbol{\chi} \boldsymbol{\chi}^{-1} = \mathbf{E}$ we find:
\begin{equation}
\boldsymbol{\chi}(\lambda_1^*)\,\mathbf{N}^\dag = 0. \label{ze}
\end{equation}
From this it follows that the matrices $\mathbf{N}$, $\mathbf{N}^\dag$ are degenerate and can be expressed via three-component vectors $\mathbf{p}$ and $\mathbf{q}$ as follows:
\[
\mathbf{N}_{\alpha \beta} = p_\alpha q_\beta, \qquad \mathbf{N}^\dag_{\alpha \beta} = q^*_\alpha p^*_\beta, \qquad \alpha, \beta = 1,2,3.
\]
To eliminate an extra pole at the point $\lambda=\lambda_1^*$ in the expression (\ref{rs2}), we impose on the vector $\mathbf{q}(x, t)$ the condition:
\[
\partial_x \mathbf{q}^* - \mathbf{U}_c (x, t, \lambda_1^*) \mathbf{q}^* = 0 .
\]
From this, we find the vector $\mathbf{q}$ in the form,
\begin{equation}
\label{q_general_form}
\mathbf{q}(x, t) = \mathbf{\Phi}_c^*(x, t, \lambda_1^*) \mathbf{C},
\end{equation}
where the vector of integration constants,
\begin{equation}
\label{C_general_form}
\mathbf{C}=(C_{0},C_{1},C_{2})^{\mathrm{T}},
\end{equation}
is an arbitrary three-component complex vector, with the superscript $\mathrm{T}$ denoting transposition.
Finally, from (\ref{ze}) we obtain the vector $\mathbf{p}$ in the form,
\[
\mathbf{p} = \frac{\mathbf{q^*}}{|\mathbf{q}|^2} (\lambda_1 - \lambda_1^*) .
\]
Thereby, the one-pole function $\boldsymbol{\chi}(x, t, \lambda)$ from Eq.~(\ref{E}) is completely defined. Now, using the formula (\ref{res}), we obtain the components $\psi_{1,2}$ of the single-eigenvalue solution of the Manakov system (\ref{VNLSE}) in the presence of the condensate background:
\begin{eqnarray}
\label{solution}
\psi_{1} = A_{1} +\frac{2\,\mathrm{i}\,(\lambda_1 - \lambda_1^*)\, q_{1}^* q_{2}}{|\mathbf{q}|^2},
\\\nonumber
\psi_{2} = A_{2} +\frac{2\,\mathrm{i}\,(\lambda_1 - \lambda_1^*)\, q_{1}^* q_{3}}{|\mathbf{q}|^2} .
\end{eqnarray}
Following the analogy with \cite{gelash2014superregular}, we obtain that if the dressing matrix $\boldsymbol{\chi}(x, t, \lambda)$ has $N$ poles, $\lambda=\lambda_j$, $j=1,\ldots N$, then the corresponding $N$-eigenvalue solution of the Manakov system can be found by means of Cramer's rule as,
\begin{eqnarray}
\psi_1 = A_1+2 \mathrm{\widetilde{M}}_{12}/\mathrm{M},
\nonumber\\
\psi_2 = A_2+2\mathrm{\widetilde{M}}_{13}/\mathrm{M}.
\label{N-solitonic solution}
\end{eqnarray}
Here $\mathrm{\widetilde{M}}_{\alpha\beta}$ and $\mathrm{M}$ are the following determinants:
\begin{equation}
\mathrm{\widetilde{M}}_{\alpha\beta}=
\left|\begin{array}{cc}
0 & \begin{array}{ccc}
q_{1,\beta} & \cdots & q_{N,\beta}
\end{array}
\\
\begin{array}{c}
q^*_{1,\alpha} \\
\vdots \\
q^*_{N,\alpha}
\end{array}
& \begin{array}{c}
\mathbf{M}^{T}
\end{array}
\end{array}\right|;
\label{M1}
\quad\quad
\mathrm{M}=\mathrm{det}(\mathbf{M});
\quad\quad
\mathbf{M}_{nm}=\frac{\mathrm{i}(\mathbf{q}_{n}\cdot \mathbf{q}^*_{m})}{\lambda_{n}-\lambda^*_m},
\end{equation}
with $\alpha,\beta =1,2,3$ and $n,m =1,2,\ldots,N$. Note that it is sufficient to consider the poles of the dressing function located only in the upper half of the $\lambda$-plane, i.e.
\begin{equation}
\label{upper_plane}
\mathrm{Im}[\lambda_j] > 0, \qquad j=1,\ldots N,
\end{equation}
since the choices $\mathrm{Im}[\lambda_j]<0$ lead to the same class of multi-breather solutions. Here and below, $\mathrm{Re}$ and $\mathrm{Im}$ denote the real and imaginary parts of a complex number. Recall that the eigenvalue set $\{\lambda_j\}$ represents the discrete spectrum of the system (\ref{lax system 1},\ref{lax system 2}). Meanwhile the real $\lambda$-axis belongs to the continuous spectrum of the system (\ref{lax system 1},\ref{lax system 2}), which we do not consider here.
From Eq.~(\ref{q_general_form}) we find the vectors $\mathbf{q}_n$ as,
\begin{eqnarray}
\mathbf{q}_n = \left(\begin{array}{ccc}
0, & e^{-\varphi_n}, & \mathrm{i}\,r_n e^{\varphi_n}\\
-\frac{A_2}{A} e^{\varphi_{0n}}, & \frac{A_1}{A}\,\mathrm{i}\, r_n e^{-\varphi_n}, & \frac{A_1}{A}\,e^{\varphi_n}\\
\frac{A_1}{A}\,e^{\varphi_{0n}},& \frac{A_2}{A}\,\mathrm{i}\, r_n e^{-\varphi_n}, & \frac{A_2}{A}\,e^{\varphi_n}
\end{array}\right) \left(\begin{array}{c}
C_{n0} \\ C_{n1} \\ C_{n2}
\end{array}\right),
\end{eqnarray}
or, in component-wise form,
\begin{eqnarray}
q_{n1}&=& e^{-\varphi_n} C_{n1} + \mathrm{i}\, r_n e^{\varphi_n}\, C_{n2},
\nonumber\\
q_{n2}&=&\frac{1}{A}\,\big[-A_2 e^{\varphi_{0n}} C_{n0} + A_1 \big(\mathrm{i} \,r_n e^{-\varphi_n} C_{n1} + e^{\varphi_n} C_{n2} \big)\big],
\nonumber\\
q_{n3}&=&\frac{1}{A}\big[ A_1 e^{\varphi_{0n}} C_{n0} + A_2 \big(\mathrm{i} \,r_n e^{-\varphi_n} C_{n1} + e^{\varphi_n} C_{n2} \big)\big].
\label{q vectors(lambda)}
\end{eqnarray}
Here the functions $\varphi_{0n}$ and $\varphi_n$ are the following, see Eq.~(\ref{q_phases}),
\begin{eqnarray}
\label{phi_0_and_phi}
&&\varphi_{0n} = -\mathrm{i}\,\lambda_n\,x + \frac{\mathrm{i}}{2}\,\bigl (\lambda_n^2 + \zeta_n^2 \bigr ) \, t = u_{0n} - iv_{0n},
\\\nonumber
&&\varphi_n = -\mathrm{i} \zeta_n\, x + \mathrm{i}\,\lambda_n\, \zeta_n\, t = u_n - iv_n.
\end{eqnarray}
The functions $u_n$, $v_n$, $u_{0n}$, $v_{0n}$ denote the real and imaginary parts of the functions $\varphi_{0n}$ and $\varphi_{n}$ as,
\begin{eqnarray}
\label{uvu0v0}
&&u_{0n} = \mathrm{Im}[\lambda_n] x - \frac{1}{2}\mathrm{Im}[\lambda_n^2+\zeta_n^2]t,
\\\nonumber
&&v_{0n} = \mathrm{Re}[\lambda_n] x - \frac{1}{2}\mathrm{Re}[\lambda_n^2+\zeta_n^2]t,
\\\nonumber
&&u_n = \mathrm{Im}[\zeta_n] x - \mathrm{Im}[\lambda_n\zeta_n]t,
\\\nonumber
&&v_n = \mathrm{Re}[\zeta_n] x - \mathrm{Re}[\lambda_n\zeta_n]t.
\end{eqnarray}
In further calculations we will use the squared modulus of $\mathbf{q}_n$, which can be written as,
\begin{eqnarray}
|\mathbf{q}_n|^2 = |e^{-\varphi_n} C_{n1} + \mathrm{i}\, r_n e^{\varphi_n}\, C_{n2}|^2 +|\mathrm{i}\, r_n e^{-\varphi_n} C_{n1} + e^{\varphi_n} C_{n2}|^2 + |e^{\varphi_{0n}} C_{n0}|^2 .
\end{eqnarray}
Note that the transformation,
\begin{equation}
\label{Cn_transform}
\mathbf{q}_n \rightarrow \kappa \mathbf{q}_n,
\end{equation}
where $\kappa$ is an arbitrary complex constant, does not change the solution (\ref{solution}). The latter means that an arbitrary choice of the vector $\mathbf{C}$ corresponds to four real-valued solution parameters. Nontrivial solutions of the Manakov system appear when the vector $\mathbf{C}$ has at least two nonzero components. We discuss the physical meaning of the breather parameters and different choices of the vector $\mathbf{C}$ in the following paragraphs.
\section{Vector breathers of types I, II and III}\label{Sec:3}
In this section, we describe the elementary building blocks of the vector breather dynamics -- the single breathers of the fundamental types I, II, and III. This classification, based on the analytical properties of the wavefield Jost functions, was proposed in the work \cite{kraus2015focusing}. Type I coincides with the breather solutions of the scalar NLSE, while types II and III exhibit fundamentally different nonlinear wave dynamics specific to the vector (polarized) case. Previously, in \cite{raskovalov2022resonant}, we established that in the language of the dressing method, see also Sec.~\ref{Sec:2}, the types I, II, and III correspond to setting to zero one of the components of the vector $\mathbf{C}$ in turn, see Eq.~(\ref{C_general_form}). Here, following \cite{raskovalov2022resonant}, we present an analytical description for all three types of vector breathers and consider important particular cases which were not addressed in \cite{raskovalov2022resonant}. Also, we emphasize that type II and type III solutions can be transformed into each other by changing the Riemann surface sheets of the spectral parameter plane. Finally, we demonstrate that type I corresponds to the first branch of the dispersion law $\omega_{\mathrm{I}}(k)$, while types II and III correspond to the second branch $\omega_{\mathrm{II}}(k)$.
To avoid the sign issues of the square root function $\zeta(\lambda)$ and also to simplify computations, we use the following parametrizations for the spectral parameter and the associated functions,
\begin{eqnarray}
\label{uniformization}
\lambda &=& A\,\sinh(\xi+\mathrm{i}\,\alpha),\\\nonumber
\zeta &=& A\,\cosh(\xi+\mathrm{i}\,\alpha),\\\nonumber
r &=& e^{-\xi - \mathrm{i} \alpha}.
\end{eqnarray}
This transformation of the two-sheeted Riemann surface of $\zeta(\lambda)$ into a one-sheeted plane with coordinates $(\xi,\alpha)$ is called uniformization and is often used in breather studies, see e.g. \cite{gelash2014superregular,kraus2015focusing}. According to (\ref{upper_plane}) we consider only the regions $\xi\in (0,\infty)$ and $\alpha\in (0,\pi)$ for the breather parameters.
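The parametrization can be verified directly: $\cosh^2 z - \sinh^2 z = 1$ gives $\zeta^2 = \lambda^2 + A^2$, and $\lambda + \zeta = A\,e^{\xi+\mathrm{i}\alpha}$ gives $r = A/(\lambda+\zeta) = e^{-\xi-\mathrm{i}\alpha}$. A short check at an arbitrary sample point (our own script):

```python
import cmath

A = 1.0
xi, alpha = 0.7, 1.1  # arbitrary point with xi > 0, 0 < alpha < pi

lam = A * cmath.sinh(xi + 1j * alpha)
zeta = A * cmath.cosh(xi + 1j * alpha)
r = cmath.exp(-(xi + 1j * alpha))

# cosh^2 - sinh^2 = 1  =>  zeta^2 = lam^2 + A^2 on the chosen sheet
print(abs(zeta**2 - (lam**2 + A**2)))  # ~ 0
# lam + zeta = A*exp(xi + i*alpha)  =>  r = A/(lam + zeta)
print(abs(r - A / (lam + zeta)))       # ~ 0
# xi > 0, 0 < alpha < pi covers the upper half-plane Im[lambda] > 0
print(lam.imag > 0)                    # True
```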
We use the general single-eigenvalue solution (\ref{solution}) and start with the case $C_{0} = 0$ and $C_{1,2}\ne 0$, corresponding to the breather of type I. When dealing with single breathers we omit the subscripts $n$ in the formulas of the previous Sec.~\ref{Sec:2}. Substituting $C_0=0$ into (\ref{solution}) we find that the breather represents a simple vector generalization of the solution of the scalar NLSE, in which the two components of the Manakov system do not interact, satisfying the relation,
\begin{equation}
\label{typeI_relation}
\psi_{2} = (A_2/A_1)\psi_{1}.
\end{equation}
The latter means that each wavefield component represents a well-known breather solution of the scalar NLSE, see, e.g., \cite{pelinovsky2008book}. When the scalar NLSE for the one-component wavefield $\psi$ is written in the form,
\begin{eqnarray}
i \psi_{t}+\frac{1}{2}\psi_{xx}+(|\psi|^{2}-A_0^2)\psi=0,
\label{NLSE}
\end{eqnarray}
the vector breathers of type I can be obtained from the known breather solutions of the Eq.~(\ref{NLSE}) using the following transformation,
\begin{eqnarray}
\label{typeI_transformation}
\psi_1(x,t) = \frac{A_1}{A_0}\psi\left(\frac{A^2}{A^2_0} t,\frac{A}{A_0} x\right),
\\
\psi_2(x,t) = \frac{A_2}{A_0}\psi\left(\frac{A^2}{A^2_0} t,\frac{A}{A_0} x\right).
\end{eqnarray}
From Eq.~(\ref{Cn_transform}) we see that the vector $\mathbf{C}$ has only one independent complex parameter, which allows us to parametrize its components as follows:
\begin{eqnarray}
\label{C_I_param}
C_{0} = 0, \quad C_{1} = C_{2}^{-1} = e^{\mathrm{Im}[\zeta]\delta + i\theta/2},
\end{eqnarray}
where $\delta$ and $\theta$ are real-valued parameters controlling the space position of the breather and its phase. From the general solution (\ref{solution}), the real and imaginary wave field components for the type I breather can be written as,
\begin{eqnarray}
\label{s1}
&&\mathrm{Re} \,\psi_{1,2}^\mathrm{I} = A_{1,2} -
\\\nonumber
&&\frac{2 A_{1,2} \sin \alpha\, \cosh \xi\,[\cos(2 v_{\mathrm{I}})\cosh{\xi}+\cosh(2 u_{\mathrm{I}}) \sin \alpha]}{\cosh \xi\,\cosh(2 u_{\mathrm{I}})+\sin \alpha\, \cos(2v_{\mathrm{I}})},
\\\nonumber
&&\mathrm{Im} \,\psi_{1,2}^\mathrm{I} =
\\\nonumber
&&\frac{2 A_{1,2}\,\sin\alpha\cosh{\xi}\,[\sinh{(2u_{\mathrm{I}})}\cos\alpha + \sin{(2v_{\mathrm{I}})}\sinh{\xi}]}{\cosh \xi\,\cosh(2 u_{\mathrm{I}})+\sin \alpha\, \cos(2 v_{\mathrm{I}})},
\end{eqnarray}
where
\begin{eqnarray}
2u_{\mathrm{I}} = l_{\mathrm{I}}^{-1}(x -V_{\mathrm{I}} t - \delta), \qquad 2v_{\mathrm{I}}= k_{\mathrm{I}} x - \omega_{\mathrm{I}} t + \theta,
\end{eqnarray}
are expressed via the breather characteristic length $l_{\mathrm{I}}$, group velocity $V_{\mathrm{I}}$, characteristic wave vector $k_{\mathrm{I}}$, and characteristic frequency $\omega_{\mathrm{I}}$:
\begin{eqnarray}
\nonumber
&&l_{\mathrm{I}} = (2 \mathrm{Im}[\zeta])^{-1} = (2 A \sin{\alpha}\sinh{\xi})^{-1},
\\\nonumber
&&V_{\mathrm{I}} = \frac{\mathrm{Im}[\lambda\zeta]}{\mathrm{Im}[\zeta]} = \frac{A\cos{\alpha}\cosh{2\xi}}{\sinh{\xi}},
\\\nonumber
&&k_{\mathrm{I}} = 2\mathrm{Re}[\zeta] = 2A\cos{\alpha}\cosh{\xi},
\\
&&\omega_{\mathrm{I}} = 2\mathrm{Re}[\lambda\zeta] = A^2\cos{2\alpha}\sinh{2\xi}.
\label{characteristic_values1}
\end{eqnarray}
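The equality between the $\lambda$-representations and the $(\xi,\alpha)$ closed forms in (\ref{characteristic_values1}) can be checked numerically (our own script; the sample values $A=1$, $\xi=1/4$, $\alpha=\pi/5$ are an assumed choice):

```python
import cmath, math

A = 1.0
xi, alpha = 0.25, math.pi / 5   # sample point
lam = A * cmath.sinh(xi + 1j * alpha)
zeta = A * cmath.cosh(xi + 1j * alpha)

# lambda-representation of the type I characteristics
l_I = 1 / (2 * zeta.imag)
V_I = (lam * zeta).imag / zeta.imag
k_I = 2 * zeta.real
w_I = 2 * (lam * zeta).real

# closed (xi, alpha) forms
l_I_closed = 1 / (2 * A * math.sin(alpha) * math.sinh(xi))
V_I_closed = A * math.cos(alpha) * math.cosh(2 * xi) / math.sinh(xi)
k_I_closed = 2 * A * math.cos(alpha) * math.cosh(xi)
w_I_closed = A**2 * math.cos(2 * alpha) * math.sinh(2 * xi)

print(abs(l_I - l_I_closed), abs(V_I - V_I_closed),
      abs(k_I - k_I_closed), abs(w_I - w_I_closed))  # all ~ 0
```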
The type I breather has the following spatial asymptotics,
\begin{eqnarray}
\label{asymptotics_I}
\psi_{1,2}^\mathrm{I} &\to& A_{1,2}\, e^{\pm 2\, \mathrm{i}\, \alpha}; \quad\quad\quad\quad x \to \pm\infty,
\end{eqnarray}
so that the total phase shift of the background field caused by the presence of the breather is $4 \alpha$.
Fig.~\ref{fig_01} demonstrates the general case, in which the breather is localized in space and moves with a nonzero group velocity. In the figure we indicate the characteristics $l_{\mathrm{I}}$ and $k_{\mathrm{I}}$, and the asymptotic values (\ref{asymptotics_I}). We choose the following set of breather parameters,
\begin{eqnarray}
\label{parameters}
&&A_1 = 1, \quad A_2=1;
\\\nonumber
&&\alpha=\pi/5, \quad \xi=1/4, \quad \theta = 0, \quad \delta = 0,
\end{eqnarray}
which we also use later to show examples of type II and type III breathers. The choice of $\alpha$ and $\xi$ in (\ref{parameters}) corresponds to a moving, spatially localized breather, see (\ref{characteristic_values1}).
\begin{figure}
\centering
\includegraphics[width=0.3\linewidth]{fig1_1a.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_1b.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_1c.pdf}
\caption{
Vector breather of type I, see Eq.~(\ref{s1}), which can be obtained from the scalar breather solution using the transformation (\ref{typeI_transformation}). The solution parameters are defined in (\ref{parameters}). (a) $|\psi_1|$ at $t=0$ with indicated asymptotic values, see Eq.~(\ref{asymptotics_I}), and characteristic size and wavelength computed according to Eq.~(\ref{characteristic_values1}). (b) $\mathrm{Arg}[\psi_1]$ at $t=0$ with indicated asymptotic values, where $\mathrm{Arg}$ means complex phase. (c) spatio-temporal evolution of $|\psi_1|$. The wave field component $\psi_2$ is not shown since it coincides with the first one after rescaling the amplitude, see Eq.~(\ref{typeI_relation}). Here and in the following figures the function $\mathrm{Arg}$ is defined on the cyclic interval $[-\pi,\pi )$, so that the function value can exhibit a jump as in panel (b).
}
\label{fig_01}
\end{figure}
When $C_{1} =0$ we obtain another nontrivial solution of the Manakov system, which we call the type II breather, again referring to the classification from \cite{kraus2015focusing}.
Writing the components of the vector $\mathbf{C}$ as follows,
\begin{eqnarray}
\label{C_II_param}
C_{0} = e^{-\mathrm{Im}[\lambda]\delta - i\theta /2}, \quad C_{1} = 0, \quad C_{2} = e^{-\mathrm{Im}[\zeta]\delta + i\theta/2},
\end{eqnarray}
from (\ref{solution}) we obtain:
\begin{eqnarray}
\psi_{1}^{\mathrm{II}} = A_{1} +\frac{4\,\mathrm{i}\,e^{\mathrm{i} \alpha} \sin \alpha\,\cosh \xi (A_1 - A_2 e^{u_{\mathrm{II}} - iv_{\mathrm{II}}})}{e^{2u_{\mathrm{II}} + \xi}+2\, \cosh \xi}, \nonumber\\
\psi_{2}^{\mathrm{II}} = A_{2} +\frac{4\,\mathrm{i}\,e^{\mathrm{i} \alpha} \sin \alpha\,\cosh \xi (A_2 + A_1 e^{u_{\mathrm{II}} - iv_{\mathrm{II}}})}{e^{2u_{\mathrm{II}} + \xi}+2\, \cosh \xi},
\label{sol2}
\end{eqnarray}
where
\begin{eqnarray}
\label{uv_II}
2u_{\mathrm{II}} = l_{\mathrm{II}}^{-1}(x -V_{\mathrm{II}} t - \delta), \qquad v_{\mathrm{II}}= k_{\mathrm{II}} x - \omega_{\mathrm{II}} t + \theta,
\end{eqnarray}
are expressed via the physical characteristics of the type II breather,
\begin{eqnarray}
\label{characteristic_values2}
&&l_{\mathrm{II}} = (2 (\mathrm{Im}[\lambda] - \mathrm{Im}[\zeta]))^{-1} = (2 A e^{-\xi} \sin{\alpha})^{-1},
\\\nonumber
&&V_{\mathrm{II}} = \frac{\mathrm{Im}[\lambda^2+\zeta^2]/2 - \mathrm{Im}[\lambda\zeta]}{\mathrm{Im}[\lambda] - \mathrm{Im}[\zeta]} = -A\cos{\alpha}e^{-\xi},
\\\nonumber
&&k_{\mathrm{II}} = \mathrm{Re}[\lambda] - \mathrm{Re}[\zeta] = -A\cos\alpha\,e^{-\xi},
\\\nonumber
&&\omega_{\mathrm{II}} = \frac{1}{2}\mathrm{Re}[\lambda^2+\zeta^2] - \mathrm{Re}[\lambda\zeta] = \frac{A^2}{2}e^{-2\xi}\cos 2\alpha.
\end{eqnarray}
\begin{figure}[!t]
\centering
\includegraphics[width=0.3\linewidth]{fig1_2a.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_2b.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_2c.pdf}\\
\includegraphics[width=0.3\linewidth]{fig1_2d.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_2e.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_2f.pdf}
\caption{
Vector breather of type II. The solution parameters are defined in (\ref{parameters}). (a1,a2) $|\psi_{1,2}|$ at $t=0$ with indicated asymptotic values, see Eq.~(\ref{asymptotics_II}), and characteristic size and wavelength computed according to Eq.~(\ref{characteristic_values2}). (b1,b2) $\mathrm{Arg}[\psi_{1,2}]$ at $t=0$ with indicated asymptotic values. (c1,c2) spatio-temporal evolution of $|\psi_{1,2}|$.}
\label{fig_02}
\end{figure}
The type II breather has the following asymptotics:
\begin{eqnarray}
\label{asymptotics_II}
&&\psi_{1,2}^{\mathrm{II}} \to A_{1,2} e^{2\,\mathrm{i}\,\alpha}; \qquad\qquad x \to -\infty
\\\nonumber
&&\psi_{1,2}^{\mathrm{II}} \to A_{1,2}; \qquad\qquad x \to +\infty.
\end{eqnarray}
Fig.~\ref{fig_02} shows an example of a localized type II breather with parameters (\ref{parameters}), which moves with a nonzero group velocity.
Finally, for $C_{2} =0$ we obtain the type III breather. Writing the components of $\mathbf{C}$ in the form:
\begin{eqnarray}
\label{C_III_param}
&&C_{0} = e^{-\mathrm{Im}[\lambda]\delta - i\theta /2},
\\\nonumber
&&\mathrm{i}rC_{1} = e^{\mathrm{Im}[\zeta]\delta + i\theta/2}, \quad C_{2} = 0,
\end{eqnarray}
from (\ref{solution}) we obtain:
\begin{eqnarray}
\psi_{1}^{\mathrm{III}} = A_1 - 4\mathrm{i}\sin\alpha e^{-\mathrm{i}\alpha} \cosh\xi
\frac{A_1 - A_2 e^{u_{\mathrm{III}} - \mathrm{i}v_{\mathrm{III}} }}{e^{2u_{\mathrm{III}}-\xi} + 2\cosh\xi},
\\
\psi_{2}^{\mathrm{III}} = A_2 - 4\mathrm{i}\sin\alpha e^{-\mathrm{i}\alpha} \cosh\xi
\frac{A_2 + A_1 e^{u_{\mathrm{III}} - \mathrm{i}v_{\mathrm{III}} }}{e^{2u_{\mathrm{III}}-\xi} + 2\cosh\xi},
\label{sol3}
\end{eqnarray}
where
\begin{eqnarray}
u_{\mathrm{III}} = \frac{l_{\mathrm{III}}^{-1}(x -V_{\mathrm{III}} t - \delta)}{2}, \,\, v_{\mathrm{III}}= k_{\mathrm{III}} x - \omega_{\mathrm{III}} t + \theta .
\end{eqnarray}
The physical characteristics of the type III breather are the following:
\begin{eqnarray}
\label{characteristic_values3}
&&l_{\mathrm{III}} = (2 (\mathrm{Im}[\lambda] + \mathrm{Im}[\zeta]))^{-1} = (2 A e^{\xi} \sin{\alpha})^{-1},
\\\nonumber
&&V_{\mathrm{III}} = \frac{\mathrm{Im}[\lambda^2+\zeta^2]/2 + \mathrm{Im}[\lambda\zeta]}{\mathrm{Im}[\lambda] + \mathrm{Im}[\zeta]} = Ae^{\xi}\cos{\alpha},
\\\nonumber
&&k_{\mathrm{III}} = \mathrm{Re}[\lambda] + \mathrm{Re}[\zeta] = Ae^{\xi}\cos\alpha,
\\\nonumber
&&\omega_{\mathrm{III}} = \frac{1}{2}\mathrm{Re}[\lambda^2+\zeta^2] + \mathrm{Re}[\lambda\zeta] = \frac{A^2}{2}e^{2\xi}\cos 2\alpha.
\end{eqnarray}
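Both sets of characteristics, (\ref{characteristic_values2}) and (\ref{characteristic_values3}), are compactly encoded by the combinations $\lambda - \zeta = -A\,e^{-(\xi+\mathrm{i}\alpha)}$ and $\lambda + \zeta = A\,e^{\xi+\mathrm{i}\alpha}$, which play the role of complex wavenumbers with complex frequency $(\lambda \mp \zeta)^2/2$, in line with the second dispersion branch $\omega_{\mathrm{II}}(k)=k^2/2$. A numerical sketch of this bookkeeping (our own script, sample parameter values assumed):

```python
import cmath, math

A = 1.0
xi, alpha = 0.25, math.pi / 5   # sample point
lam = A * cmath.sinh(xi + 1j * alpha)
zeta = A * cmath.cosh(xi + 1j * alpha)

# type II: K = lam - zeta = -A*exp(-(xi + i*alpha)) packs
# k_II = Re[K], 1/(2 l_II) = Im[K], omega_II = Re[K^2]/2
K2 = lam - zeta
print(K2, -A * cmath.exp(-(xi + 1j * alpha)))  # equal
print((K2**2 / 2).real,
      0.5 * A**2 * math.exp(-2 * xi) * math.cos(2 * alpha))  # omega_II

# type III: K = lam + zeta = A*exp(xi + i*alpha) plays the same role
K3 = lam + zeta
print(K3, A * cmath.exp(xi + 1j * alpha))      # equal
print((K3**2 / 2).real,
      0.5 * A**2 * math.exp(2 * xi) * math.cos(2 * alpha))   # omega_III
```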
The type III breather has the following asymptotics:
\begin{eqnarray}
\label{asymptotics_III}
\psi_{1,2}^{\mathrm{III}} \to A_{1,2} e^{-2\,\mathrm{i}\,\alpha}; \qquad x \to -\infty
\\\nonumber
\psi_{1,2}^{\mathrm{III}} \to A_{1,2}; \qquad x \to +\infty.
\end{eqnarray}
Fig.~\ref{fig_03} shows an example of the type III breather, moving with a nonzero group velocity.
The solutions of types II and III have a similar structure, but differ in asymptotic behaviour and characteristic parameters. For the same eigenvalue these breathers always propagate in opposite directions, and the type II breather has larger size and characteristic wavelength according to the inequalities $|k_{\mathrm{II}}|<|k_{\mathrm{III}}|$ and $|l_{\mathrm{II}}|>|l_{\mathrm{III}}|$, see Eqs.~(\ref{characteristic_values2}) and (\ref{characteristic_values3}). The following change of the spectral parameter,
\begin{equation}
\label{II_to_III_transform}
\xi \rightarrow - \xi, \qquad \alpha \rightarrow \pi-\alpha,
\end{equation}
transforms the type II solution (\ref{sol2}) into the type III solution (\ref{sol3}). In terms of the spectral variable $\lambda$, the transformation (\ref{II_to_III_transform}) means that we change the Riemann sheets of the function $\zeta(\lambda)$. This situation is not typical in the IST theory, where usually different Riemann sheets correspond to the same class of solutions. For example, in the scalar NLSE model, the jump to another Riemann sheet only changes the breather phase, leaving the solution the same. One can check that the type I solution (\ref{s1}) is invariant under the transformation (\ref{II_to_III_transform}), when the additional replacement $\theta \rightarrow - \theta$ is applied.
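The effect of (\ref{II_to_III_transform}) on the spectral data can be checked directly: under $\xi \rightarrow -\xi$, $\alpha \rightarrow \pi - \alpha$ the eigenvalue $\lambda = A\sinh(\xi+\mathrm{i}\alpha)$ is unchanged, while $\zeta = A\cosh(\xi+\mathrm{i}\alpha)$ flips sign, i.e. the point moves to the other sheet of the Riemann surface of $\zeta(\lambda)$. A two-line numerical check (our own script):

```python
import cmath

xi, alpha = 0.25, cmath.pi / 5
z = xi + 1j * alpha
zt = -xi + 1j * (cmath.pi - alpha)   # transformed point: note zt = i*pi - z

# sinh(i*pi - z) = sinh(z): the eigenvalue lambda is invariant ...
print(abs(cmath.sinh(zt) - cmath.sinh(z)))   # ~ 0
# ... while cosh(i*pi - z) = -cosh(z): zeta changes sign (other Riemann sheet)
print(abs(cmath.cosh(zt) + cmath.cosh(z)))   # ~ 0
```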
\begin{figure}[!t]
\centering
\includegraphics[width=0.3\linewidth]{fig1_3a.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_3b.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_3c.pdf}\\
\includegraphics[width=0.3\linewidth]{fig1_3d.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_3e.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_3f.pdf}
\caption{
Vector breather of type III. The solution parameters are defined in (\ref{parameters}). (a1,a2) $|\psi_{1,2}|$ at $t=0$ with indicated asymptotic values, see Eq.~(\ref{asymptotics_III}), and the characteristic size and wavelength computed according to Eq.~(\ref{characteristic_values3}). (b1,b2) $\mathrm{Arg}[\psi_{1,2}]$ at $t=0$ with indicated asymptotic values. (c1,c2) Spatio-temporal evolution of $|\psi_{1,2}|$.}
\label{fig_03}
\end{figure}
Moving, spatially localized breathers, such as the one shown in Fig.~\ref{fig_01}, are often called Tajiri–Watanabe breathers, see \cite{tajiri1998breather}, or general breathers. The eigenvalue of the general breather belongs to the broad region,
\begin{equation}
\label{General_par_set}
\{\mathrm{Re}[\lambda] \ne 0,\quad\mathrm{Im}[\lambda] > 0\},
\quad \{\alpha\ne\pi/2, \quad \xi>0\},
\end{equation}
so that $l_{\mathrm{I}}$ and $V_{\mathrm{I}}$ are finite and nonzero. Similar to type I, the general breathers of types II and III are localized and move on the condensate background. At the same time, the structure of the type II and III solutions fundamentally differs from that of type I and cannot be retrieved by a solution transformation similar to Eq.~(\ref{typeI_transformation}). One can say that the general breathers of types II and III represent a nontrivial vector counterpart of the scalar NLSE breather.
\begin{figure}[!t]
\centering
\includegraphics[width=0.31\linewidth]{fig1_4a.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_4b.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_4c.pdf}
\caption{
Vector breathers of type I, see Eq.~(\ref{s1}), which can be obtained from solutions of the scalar NLSE using the transformation (\ref{typeI_transformation}). (a) Kuznetsov solution, (b) Peregrine solution, (c) Akhmediev solution.}
\label{fig_04}
\end{figure}
In addition, the theory of the scalar NLSE distinguishes three important particular cases: (a) the Kuznetsov breather, (b) the Peregrine breather, and (c) the Akhmediev breather, all previously studied in detail; see, e.g., the monographs \cite{pelinovsky2008book,akhmediev1997nonlinear}. They correspond to the following choices of the spectral parameter,
\begin{eqnarray}
\label{Kuznetsov_par_set}
&&\text{(a)}\quad \{\mathrm{Re}[\lambda] = 0,\quad\mathrm{Im}[\lambda] > A\},
\quad \{\alpha=\pi/2, \quad \xi>0\},
\\
\label{Peregrine_par_set}
&&\text{(b)}\quad \{\mathrm{Re}[\lambda] = 0,\quad\mathrm{Im}[\lambda] = A\},
\quad \{\alpha=\pi/2, \quad \xi=0\},
\\
\label{Akhmediev_par_set}
&&\text{(c)}\quad \{\mathrm{Re}[\lambda] = 0,\quad\mathrm{Im}[\lambda] < A\},
\quad \{\alpha\ne\pi/2, \quad \xi=0\}.
\end{eqnarray}
Fig.~\ref{fig_04} briefly recalls the key properties of these nonlinear structures. The Kuznetsov breather is a standing one-humped wave group oscillating on the condensate background with a finite time period $T=4\pi/(A^2\sinh 2\xi)$, see Fig.~\ref{fig_04}(a). The Peregrine breather is a degenerate limit of the Kuznetsov solution, appearing in Eq.~(\ref{s1}) at the spectral parameter (\ref{Peregrine_par_set}). It can be found by resolving an uncertainty of the type $0/0$ in solution (\ref{s1}), which leads to the following rational solution,
\begin{equation}
\label{Peregrine}
\psi_{1,2} = -A_{1,2} + 4A_{1,2} \frac{1-2iA^2 t}{1+4A^2x^2+4A^4 t^2}.
\end{equation}
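At the origin, formula (\ref{Peregrine}) gives
\begin{equation*}
\psi_{1,2}(0,0) = -A_{1,2} + 4A_{1,2} = 3A_{1,2},
\end{equation*}
i.e. a threefold amplification of the condensate amplitude, which is the well-known signature of the Peregrine rogue wave, while far from this point the wavefield returns to the phase-shifted condensate value $-A_{1,2}$.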
The Peregrine breather (\ref{Peregrine}) emerges from a small-amplitude, spatially localized condensate perturbation and then disappears, see Fig.~\ref{fig_04}(b), which makes this solution a popular elementary model of rogue wave formation \cite{pelinovsky2008book,akhmediev2009extreme,shrira2010makes,OsborneBook2010}. Finally, the Akhmediev breather is a periodic solution with spatial period $L=2\pi/(A\sin\alpha)$, which, similar to the Peregrine breather, emerges only once in time, see Fig.~\ref{fig_04}(c). The Akhmediev breather describes an important scenario of the MI development of a periodically perturbed condensate \cite{pelinovsky2008book,akhmediev2009extreme,OsborneBook2010}.
Similar to what is typically done in the linear theory of polarized light, see, e.g., \cite{gordon2000pmd}, the wavefield components can be considered as a vector $(\psi_1, \psi_2)^\mathrm{T}$, which can be rotated by a matrix $\mathbf{T}$, providing the same solution of the Manakov system written in a new basis. In particular, one can switch between solutions of the Manakov system $(\psi_1, \psi_2)^\mathrm{T}$ and $(\widetilde{\psi}_1, \widetilde{\psi}_2)^\mathrm{T}$ having the asymptotics,
\begin{eqnarray}
\label{rotation_basis}
\psi_{1,2}\to
\left(
\begin{array}{cc}
A_1 e^{i\phi^{\pm}} \\
A_2 e^{i\phi^{\pm}} \\
\end{array}
\right),
\qquad
\widetilde{\psi}_{1,2}\to
\left(
\begin{array}{cc}
e^{i\phi^{\pm}} \\
0 \\
\end{array}
\right);
\qquad
x\to\pm \infty,
\end{eqnarray}
using the rotation matrix $\mathbf{T}$ and its inverse counterpart $\mathbf{T}^{-1}$ as follows:
\begin{eqnarray}
\label{rotation_T}
\left(
\begin{array}{cc}
\psi_1 \\
\psi_2 \\
\end{array}
\right)
= \mathbf{T}
\left(
\begin{array}{cc}
\widetilde{\psi}_1 \\
\widetilde{\psi}_2 \\
\end{array}
\right),
\quad
\left(
\begin{array}{cc}
\widetilde{\psi}_1 \\
\widetilde{\psi}_2 \\
\end{array}
\right)
= \mathbf{T}^{-1}
\left(
\begin{array}{cc}
\psi_1 \\
\psi_2 \\
\end{array}
\right);
\qquad
\mathbf{T}=
\left(
\begin{array}{cc}
A_1 & -A_2 \\
A_2 & A_1 \\
\end{array}
\right),
\quad
\mathbf{T}^{-1}=
\frac{1}{A^2}
\left(
\begin{array}{cc}
A_1 & A_2 \\
-A_2 & A_1 \\
\end{array}
\right) .
\end{eqnarray}
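One can check directly that the matrix $\mathbf{T}^{-1}$ maps the condensate asymptotics (\ref{rotation_basis}) onto the one-component form. Using $A^2 = A_1^2 + A_2^2$,
\begin{equation*}
\mathbf{T}^{-1}
\left(
\begin{array}{c}
A_1 e^{i\phi^{\pm}} \\
A_2 e^{i\phi^{\pm}} \\
\end{array}
\right)
=
\frac{1}{A^2}
\left(
\begin{array}{c}
(A_1^2 + A_2^2)\, e^{i\phi^{\pm}} \\
(-A_2 A_1 + A_1 A_2)\, e^{i\phi^{\pm}} \\
\end{array}
\right)
=
\left(
\begin{array}{c}
e^{i\phi^{\pm}} \\
0 \\
\end{array}
\right),
\end{equation*}
so that the second component of the transformed condensate vanishes identically.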
Our solutions for the breathers of types $\mathrm{I}$, $\mathrm{II}$ and $\mathrm{III}$ with asymptotics $A_{1,2} e^{i\phi^{\pm}}$, defined by Eqs.~(\ref{asymptotics_I}), (\ref{asymptotics_II}) and (\ref{asymptotics_III}), can be transformed using the inverse matrix $\mathbf{T}^{-1}$ into solutions having zero condensate level in the second component, see Eqs.~(\ref{rotation_basis}) and (\ref{rotation_T}). Most interestingly, for type $\mathrm{I}$ breathers the second component in the new basis is exactly cancelled, i.e. $\widetilde{\psi}^{\mathrm{I}}_2(x,t)\equiv 0$, due to the symmetry (\ref{typeI_relation}). In other words, type $\mathrm{I}$ solutions are flat in the sense of polarization, while type $\mathrm{II}$ and type $\mathrm{III}$ breathers always have both wavefield components different from zero. In particular, the work \cite{kraus2015focusing} uses the polarization basis corresponding to the case $(\widetilde{\psi}_1, \widetilde{\psi}_2)^\mathrm{T}$, see the illustrations for the second wavefield component in \cite{kraus2015focusing}. One can check that the solutions (\ref{s1}), (\ref{sol2}) and (\ref{sol3}) boil down to those presented in \cite{kraus2015focusing} after the transformation with the matrix $\mathbf{T}^{-1}$.
Mathematically, the diversity of scalar breathers emerges due to the nontrivial geometry of the spectral parameter plane produced by the branch cut of the function $\zeta(\lambda)$. Indeed, the Kuznetsov, Akhmediev and Peregrine breathers correspond to eigenvalue locations on the imaginary $\lambda$-axis, respectively, outside the branch cut, inside it, and precisely at the branch point of the function $\zeta(\lambda)$, see Eqs.~(\ref{Kuznetsov_par_set}-\ref{Akhmediev_par_set}). Meanwhile, the general Tajiri–Watanabe breather emerges when the eigenvalue is located off the imaginary $\lambda$-axis. As we have already discussed, the general type II and type III breathers exhibit behaviour that is, in principle, similar to the scalar case (in the sense that they are localized moving pulsating breathers). The choices of the eigenvalue locations (\ref{Kuznetsov_par_set}-\ref{Akhmediev_par_set}) lead to fundamentally different wavefield dynamics, which was recently studied in \cite{che2022nondegenerate}. We illustrate all three cases in Fig.~\ref{fig_05} (type II) and Fig.~\ref{fig_06} (type III). For the spectral parameter choice (\ref{Kuznetsov_par_set}), the type II and III breathers represent a dark-bright standing wave group oscillating on the condensate background, see Fig.~\ref{fig_05}(a) and Fig.~\ref{fig_06}(a). Unlike their scalar counterpart, these breathers change the condensate phase by $\pi$, according to the asymptotics (\ref{asymptotics_II}) and (\ref{asymptotics_III}). Meanwhile, for the set of parameters (\ref{Peregrine_par_set}), no degeneration of the solutions (\ref{sol2}) and (\ref{sol3}) occurs. The wavefield dynamics remains of the general type II and III form, see Fig.~\ref{fig_05}(b) and Fig.~\ref{fig_06}(b), meaning that there are no nontrivial vector analogs of the rational rogue waves.
Finally, when the spectral parameter belongs to the set (\ref{Akhmediev_par_set}), the type II and III solutions are moving localized breathers, see Fig.~\ref{fig_05}(c) and Fig.~\ref{fig_06}(c). Accordingly, these solutions are a particular case of the general type II and III breathers, and there are no nontrivial vector analogs of the periodic Akhmediev breather dynamics. In addition, we note that in the case of Akhmediev-type eigenvalues (\ref{Akhmediev_par_set}), the transformation (\ref{II_to_III_transform}) boils down to a change of the parameter $\alpha$ only, leaving the eigenvalue on the branch cut in the upper half of the $\lambda$-plane. In other words, for the eigenvalues (\ref{Akhmediev_par_set}), the type II and type III solutions merge into one class.
\begin{figure}[!t]
\centering
\includegraphics[width=0.3\linewidth]{fig1_5a.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_5b.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_5c.pdf}\\
\includegraphics[width=0.3\linewidth]{fig1_5d.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_5e.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_5f.pdf}
\caption{
Vector breathers of type II with spectral parameters defined by Eqs.~(\ref{Kuznetsov_par_set}-\ref{Akhmediev_par_set}). (a) Spectral parameter belongs to the Kuznetsov region, see Eq.~(\ref{Kuznetsov_par_set}). The corresponding solution is a standing dark-bright oscillating breather. (b) Spectral parameter belongs to the Peregrine region, see Eq.~(\ref{Peregrine_par_set}). The corresponding solution is a particular case of the type II solution (no degeneration of the solution occurs). (c) Spectral parameter belongs to the Akhmediev region, see Eq.~(\ref{Akhmediev_par_set}). The corresponding solution is a particular case of the general type II dynamics (no spatially periodic dynamics occurs).}
\label{fig_05}
\end{figure}
Each breather type corresponds to a certain branch of the dispersion law (\ref{dispersion_laws}). To establish this connection, we consider the breather tails as condensate perturbations and study them asymptotically. The role of the small parameter is played by the value $e^{-L}$, where $L$ is the characteristic distance from the breather center to the point where we study the breather tail. The latter can be done in the case of a finite characteristic size, while for the periodic Akhmediev breather one can similarly consider the asymptotics at large times. We choose $L\gg 1$, so that we are far away from the breather center, and perform an asymptotic expansion of the solutions (\ref{s1}), (\ref{sol2}) and (\ref{sol3}); see Appendix~\ref{Sec:Appendix:1} for the computational details. For type I solution (\ref{s1}), the expansion consists of the main (zero-order) asymptotics (\ref{asymptotics_I}) plus a linear combination of first-order terms of the form $p e^{2\varphi}$ and $\tilde{p} e^{2\varphi^*}$, where $p$ and $\tilde{p}$ are constants, while the functions $\varphi$ are defined by (\ref{phi_0_and_phi}). We write these exponents in the form $e^{ikx + i\omega t}$. Considering, for example, $e^{2\varphi}$, we obtain,
\begin{equation}
\label{tails1}
e^{2\varphi} = e^{ikx+i\omega t}, \qquad k = -2\zeta, \qquad \omega=2\lambda\zeta.
\end{equation}
Now, using $\zeta(\lambda) = \sqrt{\lambda^2+A^2}$, see Eq.~(\ref{zeta_def}), we find $\omega(k) = \pm ik\sqrt{A^2-k^2/4}$. Thereby the breather tails obey the first branch of the dispersion law $\omega_{\mathrm{I}}(k)$ with complex $k$ and $\omega$. The complex value of $k$ in (\ref{tails1}) implies exponential decay of the breather tail. The same result is obtained for the terms $\tilde{p} e^{2\varphi^*}$.
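In more detail, the relations in (\ref{tails1}) give $\zeta = -k/2$ and hence $\lambda^2 = \zeta^2 - A^2 = k^2/4 - A^2$, i.e. $\lambda = \pm\,\mathrm{i}\sqrt{A^2-k^2/4}$, so that
\begin{equation*}
\omega = 2\lambda\zeta = \mp\,\mathrm{i}\,k\sqrt{A^2-\frac{k^2}{4}},
\end{equation*}
which is the first branch of the dispersion law continued to complex values of $k$.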
For breather types II and III, a similar analysis gives as the zero-order terms the asymptotic values (\ref{asymptotics_II}) and (\ref{asymptotics_III}), plus first-order terms of the form $p e^{\varphi_0-\varphi}$ and $\tilde{p} e^{\varphi^*_0-\varphi^*}$ (type II) or $p e^{-\varphi_0-\varphi}$ and $\tilde{p} e^{-\varphi^*_0-\varphi^*}$ (type III). For all the listed exponents one obtains the second branch of the dispersion law $\omega_{\mathrm{II}}(k)$. For instance, in the case of $e^{\varphi_0-\varphi}$, we obtain,
\begin{equation}
\label{tails2}
e^{\varphi_0-\varphi} = e^{ikx+i\omega t}, \qquad k = \zeta - \lambda, \qquad \omega=-\frac12(\zeta - \lambda)^2,
\end{equation}
from which we immediately retrieve the second branch $\omega(k) = -k^2/2$. Again, as in the type I case, the complex value of $k$ implies exponential decay of the breather tails.
\begin{figure}[!t]
\centering
\includegraphics[width=0.3\linewidth]{fig1_6a.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_6b.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_6c.pdf}\\
\includegraphics[width=0.3\linewidth]{fig1_6d.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_6e.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_6f.pdf}
\caption{
Vector breathers of type III with spectral parameters defined by Eqs.~(\ref{Kuznetsov_par_set}-\ref{Akhmediev_par_set}). (a) Spectral parameter belongs to the Kuznetsov region, see Eq.~(\ref{Kuznetsov_par_set}). The corresponding solution is a standing dark-bright oscillating breather. (b) Spectral parameter belongs to the Peregrine region, see Eq.~(\ref{Peregrine_par_set}). The corresponding solution is a particular case of the type III solution (no degeneration of the solution occurs). (c) Spectral parameter belongs to the Akhmediev region, see Eq.~(\ref{Akhmediev_par_set}). The corresponding solution is a particular case of the type III Tajiri–Watanabe dynamics (no spatially periodic dynamics occurs).}
\label{fig_06}
\end{figure}
\section{Resonance interactions of breathers}\label{Sec:4}
In this section, we study resonant interactions of the vector breathers, i.e., a fusion of two breathers into one or a decay of one breather into two, such that the characteristic wave vectors and frequencies of the breathers satisfy resonance conditions. The phenomenon of inelastic mutual transformations of coherent structures has been known for integrable systems since the work \cite{ZakharovManakov1976theory} devoted to solitons in the three-wave model, see also \cite{kaup1976three}. Other examples include two-dimensional field theory \cite{zakharov1978example}, chiral field models \cite{orlov1984Nsoliton}, and the relativistic $O(1,1)$ sine-Gordon model \cite{barashenkov1988integrable,barashenkov1988exactly,barashenkov1993unified}. In the case of zero background, such nontrivial interactions are possible for solitons in three- (or more) component systems, such as the mentioned three-wave model. However, in the presence of a nontrivial background, the constraint on the number of components can be relaxed, and nontrivial interactions can be observed already in two-component systems \cite{barashenkov1988integrable,barashenkov1988exactly,barashenkov1993unified}. Mathematically speaking, the nontrivial interactions are possible when the IST auxiliary problem admits different types of eigenvalues, which can be merged into one point without degeneration of the solution, see \cite{ZakharovManakov1976theory}. Recently we observed resonance interactions of vector breathers in the Manakov system (\ref{VNLSE}), see our Letter \cite{raskovalov2022resonant} and the recent paper \cite{raskovalov2022resonanse}. The resonance represents a three-breather process of fusion or decay, where each of the three participating breathers has a different type: I, II, or III. Here we present the theory of these nontrivial interactions in more detail than in \cite{raskovalov2022resonant,raskovalov2022resonanse}.
The resonant interaction of three vector breathers is described by the one-pole solution (\ref{solution}), when all the integration constants $C_0$, $C_1$ and $C_2$ are nonzero. First we consider the situation when the eigenvalue is of general type, see Eq.~(\ref{General_par_set}), and after that we switch to the particular choices (\ref{Kuznetsov_par_set}-\ref{Akhmediev_par_set}). In the general case, the solution has the following asymptotics:
\begin{eqnarray}
\label{asymptotics_resonance}
\psi_{1,2} &\to& A_{1,2} e^{-2\,\mathrm{i}\,\alpha}; \quad\quad\quad\quad x \to -\infty,
\\\nonumber
\psi_{1,2} &\to& A_{1,2}; \quad\quad\quad\quad\quad\quad\,\,\, x \to +\infty,
\end{eqnarray}
which on one side coincides with the type III asymptotics (\ref{asymptotics_III}), and on the other side can be obtained by combining the type I and II asymptotics, see Eqs.~(\ref{asymptotics_I}) and (\ref{asymptotics_II}). For definiteness we consider the case $\pi/2>\alpha>0$. In order to investigate the asymptotic states of the solution with nonzero integration constants, we move at $t\rightarrow -\infty$ to the reference frames,
\begin{eqnarray}
\label{ref1}
u = \mathrm{const},
\\
\label{ref2}
u_0 - u = \mathrm{const},
\end{eqnarray}
while at $t\rightarrow \infty$ we move to the reference frame,
\begin{eqnarray}
\label{ref3}
u_0 + u = \mathrm{const}.
\end{eqnarray}
Recall that $u$ and $u_0$ are defined in (\ref{uvu0v0}). The conditions (\ref{ref1}), (\ref{ref2}) and (\ref{ref3}) then imply that in the expressions (\ref{q vectors(lambda)}) one has $e^{\varphi_0}\rightarrow 0$, $e^{-\varphi}\rightarrow 0$ and $e^{\varphi}\rightarrow 0$, respectively, and in each of the reference frames one obtains exactly one of the single-breather solutions (\ref{s1}), (\ref{sol2}) or (\ref{sol3}). The latter means that the asymptotic states of the resonance process represent single breathers of types I, II and III with the following integration constants $\mathbf{C}$: I) $\{0,C_1,C_2\}$; II) $\{C_0,0,C_2\}$; III) $\{C_0,C_1,0\}$. The interaction itself represents a fusion of breathers I and II into breather III, which we denote as $\mathrm{I}+\mathrm{II}\rightarrow\mathrm{III}$. We show the full resonance process and the single-breather approximation in Fig.~\ref{fig_07}. Note that for $\pi>\alpha>\pi/2$ the resonance represents the opposite process, the decay of breather III into breathers I and II, i.e., $\mathrm{III}\rightarrow \mathrm{I} + \mathrm{II}$.
From the expressions (\ref{characteristic_values1}), (\ref{characteristic_values2}) and (\ref{characteristic_values3}) describing the breather characteristics, we find that the resonance process satisfies the standard resonance conditions,
%
\begin{eqnarray}
\label{resonance1}
k_{\mathrm{I}} + k_{\mathrm{II}} &=& k_{\mathrm{III}},
\\
\omega_{\mathrm{I}} + \omega_{\mathrm{II}} &=& \omega_{\mathrm{III}}.
\label{resonance2}
\end{eqnarray}
Note that the resonance conditions (\ref{resonance1}) and (\ref{resonance2}) cannot be derived from the dispersion laws (\ref{dispersion_laws}). Indeed, the characteristic breather wave vectors and frequencies follow from the fully nonlinear solutions (\ref{s1}),~(\ref{sol2}),~(\ref{sol3}), while the dispersion laws describe only the breather tails.
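At the same time, the conditions are straightforward to verify at the level of the characteristic values themselves. In terms of the spectral parameters, the characteristic values (\ref{characteristic_values1})-(\ref{characteristic_values3}) read $k_{\mathrm{I}} = 2\,\mathrm{Re}[\zeta]$, $\omega_{\mathrm{I}} = 2\,\mathrm{Re}[\lambda\zeta]$ and $k_{\mathrm{II,III}} = \mathrm{Re}[\lambda \mp \zeta]$, $\omega_{\mathrm{II,III}} = \frac{1}{2}\mathrm{Re}[(\lambda \mp \zeta)^2]$ (cf. Eq.~(\ref{characteristic_values3})), so that
\begin{eqnarray*}
k_{\mathrm{I}} + k_{\mathrm{II}} &=& 2\,\mathrm{Re}[\zeta] + \mathrm{Re}[\lambda - \zeta] = \mathrm{Re}[\lambda + \zeta] = k_{\mathrm{III}},
\\
\omega_{\mathrm{I}} + \omega_{\mathrm{II}} &=& 2\,\mathrm{Re}[\lambda\zeta] + \frac{1}{2}\mathrm{Re}[(\lambda - \zeta)^2] = \frac{1}{2}\mathrm{Re}[(\lambda + \zeta)^2] = \omega_{\mathrm{III}}.
\end{eqnarray*}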
The resonance is always represented by either the process $\mathrm{I}+\mathrm{II}\rightarrow\mathrm{III}$ or the process $\mathrm{III}\rightarrow \mathrm{I} + \mathrm{II}$. Other configurations, such as a fusion of two breathers into breather I, cannot exist, as can be seen from the structure of the solution asymptotics, since the resonance asymptotics (\ref{asymptotics_resonance}) coincides with the asymptotics (\ref{asymptotics_III}). In addition, a process such as $\mathrm{I}\rightarrow \mathrm{II} + \mathrm{III}$ is prohibited by the resonance conditions (\ref{resonance1}) and (\ref{resonance2}). Indeed, let us consider the region of the spectral parameter where $\pi/4>\alpha>0$. Then, according to Eqs.~(\ref{characteristic_values1}), (\ref{characteristic_values2}), (\ref{characteristic_values3}), we find that $k_\mathrm{I}>0$, $k_\mathrm{II}<0$, and $k_\mathrm{III}>0$. In addition, for the whole range of the spectral parameter, $k_\mathrm{I}>k_\mathrm{III}$, so that $k_\mathrm{I}$ cannot be represented as a sum of $k_\mathrm{II}$ and $k_\mathrm{III}$.
\begin{figure}[!t]
\centering
\includegraphics[width=0.3\linewidth]{fig1_7a.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_7b.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_7c.pdf}\\
\includegraphics[width=0.3\linewidth]{fig1_7d.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_7e.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_7f.pdf}
\caption{
Resonance interaction $\mathrm{I}+\mathrm{II} \rightarrow \mathrm{III}$ of vector breathers described by the single-pole solution (\ref{solution}) with $\mathbf{C} = \{1,1,1\}$. Blue lines in (a1,a2) and (b1,b2) show $|\psi_{1,2}|$ before ($t=-7.0$) and after ($t=7.0$) the resonance interaction. The dotted green and red lines show local approximations of the breathers of types I, II and III (see the corresponding notations in the figures) by solution (\ref{solution}) with: I) $\mathbf{C} = \{0,1,1\}$, green line in (a1,a2); II) $\mathbf{C} = \{1,0,1\}$, red line in (a1,a2); III) $\mathbf{C} = \{1,1,0\}$, red line in (b1,b2). Panels (c1,c2) show the spatio-temporal evolution of $|\psi_{1,2}|$ for the whole resonance interaction and its asymptotic state.}
\label{fig_07}
\end{figure}
To conclude this section, we consider the particular cases of the resonance interactions corresponding to the eigenvalue choices (\ref{Kuznetsov_par_set}-\ref{Akhmediev_par_set}). For an eigenvalue of the Kuznetsov type (\ref{Kuznetsov_par_set}), we observe a standing wave group exhibiting complex oscillations, see Fig.~\ref{fig_08}(a). In the limit (\ref{Peregrine_par_set}), solution (\ref{solution}) with nonzero $C_0$, $C_1$ and $C_2$ degenerates, leading, after resolving the uncertainty of the type $0/0$, to the following semirational formula,
\begin{eqnarray}
&&\psi_1 = A_1 +\frac{4 \,[1-2 A (x -x_1)-2\, \mathrm{i}\, A^2 t]}{(e^{2 A (x-x_0)}+2+8 A^2 (x-x_1)^2+8 A^4 t^2)} \times\nonumber\\
&&\qquad\qquad\times(A_2 e^{A (x-x_0) - \mathrm{i} A^2 (t-t_0)/2}+A_1 [1+2 A (x -x_1)-2\, \mathrm{i}\, A^2 t]),\nonumber\\
&&\psi_2 = A_2 +\frac{4\, [1-2 A (x -x_1)-2\, \mathrm{i}\, A^2 t]}{(e^{2 A (x-x_0)}+2+8 A^2 (x-x_1)^2+8 A^4 t^2)}\times\nonumber \\
&&\qquad\qquad\times(-A_1 e^{A (x-x_0) - \mathrm{i} A^2 (t-t_0)/2}+A_2 [1+2 A (x -x_1)-2\, \mathrm{i}\, A^2 t]),\label{Peregrine_resonance}
\end{eqnarray}
where $x_0$, $x_1$ and $t_0$ are real-valued parameters. The semirational solution (\ref{Peregrine_resonance}) represents a localized wave group decaying with time as $t^{-2}$. At certain coordinates it exhibits a Peregrine-type bump coexisting with the rest of the solution, see Fig.~\ref{fig_08}(b), which was previously studied in \cite{baronio2012solutions} in the context of vector rogue wave formation. Finally, for the Akhmediev-type eigenvalues (\ref{Akhmediev_par_set}), we observe a moving type II breather which at some point decays into an Akhmediev-type I wave excitation in one half of space, plus a type III breather moving in the other half of space, see Fig.~\ref{fig_08}(c). As we noted in the previous section, for the eigenvalues (\ref{Akhmediev_par_set}) the classes of type II and III solutions merge into one, which explains why the moving breathers before and after the resonance interaction in Fig.~\ref{fig_08}(c) are similar. Note that Fig.~\ref{fig_08} demonstrates a particular case of the general scenario shown in Fig.~\ref{fig_07}(c) when one uses the eigenvalues (\ref{Akhmediev_par_set}) for each of the three asymptotic breather states.
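The condensate asymptotics of the semirational solution (\ref{Peregrine_resonance}) can be verified directly. As $x\to+\infty$ the exponential $e^{2A(x-x_0)}$ dominates the denominator, so that $\psi_{1,2}\to A_{1,2}$, while as $x\to-\infty$ the rational terms dominate and the correction tends to $-2A_{1,2}$, giving
\begin{equation*}
\psi_{1,2} \to A_{1,2} - 2A_{1,2} = -A_{1,2}, \qquad x\to-\infty,
\end{equation*}
in agreement with the resonance asymptotics (\ref{asymptotics_resonance}) at $\alpha = \pi/2$.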
\begin{figure}[!t]
\centering
\includegraphics[width=0.3\linewidth]{fig1_8a.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_8b.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_8c.pdf}\\
\includegraphics[width=0.3\linewidth]{fig1_8d.pdf}\,\,\,\,\,
\includegraphics[width=0.3\linewidth]{fig1_8e.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig1_8f.pdf}
\caption{
Resonance interaction of vector breathers with spectral parameters defined by Eqs.~(\ref{Kuznetsov_par_set}-\ref{Akhmediev_par_set}). (a) Spectral parameter belongs to the Kuznetsov region, see Eq.~(\ref{Kuznetsov_par_set}). (b) Spectral parameter belongs to the Peregrine region, see Eq.~(\ref{Peregrine_par_set}). (c) Spectral parameter belongs to the Akhmediev region, see Eq.~(\ref{Akhmediev_par_set}).}
\label{fig_08}
\end{figure}
\section{Elastic collisions of breathers}\label{Sec:5}
The two-eigenvalue solution of the model (\ref{VNLSE}), see Eq.~(\ref{N-solitonic solution}) with $N=2$, has the following general form,
\begin{eqnarray}
\psi_1 = A_1+2 \mathrm{\widetilde{M}}_{12}/\mathrm{M},
\nonumber\\
\psi_2 = A_2+2\mathrm{\widetilde{M}}_{13}/\mathrm{M},
\label{two}
\end{eqnarray}
where
\begin{eqnarray}
&&\tilde{M}_{12} =\mathrm{i} \left[-m_2 q_{12} q_{11}^* |\mathbf{q}_2|^2 +n_2 q_{12} q_{21}^* (\mathbf{q}_2, \mathbf{q}^*_1)+n_1 q_{22} q_{11}^* (\mathbf{q}_1, \mathbf{q}^*_2)-m_1 q_{22} q_{21}^* |\mathbf{q}_1|^2 \right],\nonumber\\
&&\tilde{M}_{13} =\mathrm{i} \left[-m_2 q_{13} q_{11}^* |\mathbf{q}_2|^2 +n_2 q_{13} q_{21}^* (\mathbf{q}_2, \mathbf{q}^*_1)+n_1 q_{23} q_{11}^* (\mathbf{q}_1, \mathbf{q}^*_2)-m_1 q_{23} q_{21}^* |\mathbf{q}_1|^2 \right],\nonumber\\
&& M = -[m\,|\mathbf{q}_1|^2 |\mathbf{q}_2|^2- n\, (\mathbf{q}_1, \mathbf{q}_2^*) (\mathbf{q}_2, \mathbf{q}_1^*)].
\end{eqnarray}
Here the coefficients $m_{1,2} \equiv 1/(\lambda_{1,2}-\lambda_{1,2}^*)$, $n_{1,2} \equiv 1/(\lambda_{1,2}-\lambda_{2,1}^*)$, $m=m_1 m_2$, $n = n_1 n_2 \leq 0$. The vectors $\mathbf{q}_{1,2}$ are defined in (\ref{q vectors(lambda)}). In the parametrization (\ref{uniformization}), the coefficients can be written as,
\begin{eqnarray*}
&&m_i = \frac{2 |r_i|^2}{A (r_i^*-r_i)(1+|r_i|^2)},\qquad n_i = \frac{2 r_i r_j^*}{A (1+r_i r_j^*)(r_j^*-r_i)},\qquad i=1,2,\,\; j=3-i;\nonumber\\
&&m=\frac{4 |r_1 r_2|^2}{A^2 (1+|r_1|^2)(1+|r_2|^2)(r_1-r_1^*)(r_2-r_2^*)} =-[4 A^2 \sin \alpha_1 \sin \alpha_2\, \mathrm{cosh} \xi_1 \mathrm{cosh} \xi_2]^{-1},\nonumber\\
&&n = -\frac{4 |r_1 r_2|^2}{A^2 |1+r_1 r_2^*|^2 |r_1-r_2^*|^2} . \label{isp}
\end{eqnarray*}
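As a consistency check of these expressions, writing $r_i = e^{\xi_i + \mathrm{i}\,\alpha_i}$, so that $r_i^* - r_i = -2\,\mathrm{i}\,e^{\xi_i}\sin\alpha_i$ and $1+|r_i|^2 = 1+e^{2\xi_i}$, the formula for $m_i$ reduces to
\begin{equation*}
m_i = \frac{2\,e^{2\xi_i}}{A\,(-2\,\mathrm{i}\,e^{\xi_i}\sin\alpha_i)(1+e^{2\xi_i})} = \frac{\mathrm{i}}{2A\sin\alpha_i\cosh\xi_i},
\end{equation*}
from which the displayed form of $m = m_1 m_2$ follows immediately.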
\begin{figure}[!t]
\centering
\includegraphics[width=0.31\linewidth]{fig2_1a.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig2_1b.pdf}\,\,\,\,\,
\includegraphics[width=0.32\linewidth]{fig2_1c.pdf}
\caption{
Elastic collision $\mathrm{I}+\mathrm{I} \rightarrow \mathrm{I} + \mathrm{I}$ of vector breathers with spectral parameters defined by Eq.~(\ref{parameters_2B}). (a,b) $|\psi_1|$ and $\mathrm{Arg}[\psi_{1}]$, where $\mathrm{Arg}$ denotes the complex phase, after the breather collision at $t=4.5$. (c) Spatio-temporal plot of the wave field evolution. The dotted green and red lines in (a,b) show a local approximation of the breathers after the collision by the single-breather solutions from the asymptotics (\ref{2B_asymptotic_I+I}). The thin black dashed line in (a) shows how the first breather would have looked if it had been traveling alone. Both wave field components are shown on the same plot since they coincide after rescaling of the amplitude, see Eq.~(\ref{typeI_relation}).}
\label{fig_09}
\end{figure}
The solution (\ref{two}) describes a wide family of vector breather interactions. It is characterized by two eigenvalues and six integration constants $C_{ij}$, $i=1,2$, $j=1,2,3$. Depending on their choice, the solution (\ref{two}) represents either two elastically colliding breathers, a breather plus a fusion/decay resonance wave pattern, or a combination of two resonance wave patterns. Taking into account the three fundamental types of breathers, we obtain more than ten scenarios of breather interactions described via Eq.~(\ref{two}). Here we do not consider all of them, but instead focus on fundamental aspects of vector breather interactions, such as collision wavefield profiles and the asymptotic states of the breathers at large times. For the latter, we compute exact formulas describing the shifts of position and phase acquired by the breathers after their collision.
We start with elastic collisions, i.e., we set to zero one of the components in each of the vectors $\mathbf{C}_1$ and $\mathbf{C}_2$. For each breather, we choose the parametrization of its integration constants according to the breather type. We begin with the case of a collision of type I breathers, i.e., $\mathrm{I}+\mathrm{I} \rightarrow \mathrm{I} + \mathrm{I}$. First, we choose the parametrization of the vectors $\mathbf{C}_1$ and $\mathbf{C}_2$ according to Eq.~(\ref{C_I_param}),
\begin{equation}
C_{i,0} = 0, \qquad C_{i,1} = C_{i,2}^{-1} = e^{\mathrm{Im}[\zeta_i]\delta_i + i\theta_i/2}, \quad i=1,2.
\end{equation}
Each of the two breathers changes the phase of the condensate according to (\ref{asymptotics_I}), so that the asymptotics of the I+I solution reads,
\begin{eqnarray}
\label{asymptotic_I+I}
\psi_{1,2}^\mathrm{I+I} &\to& A_{1,2}\, e^{\pm 2\, \mathrm{i}\, (\alpha_1 + \alpha_2)}; \quad\quad\quad\quad x \to \pm\infty.
\end{eqnarray}
The asymptotic states of the scalar two-breather NLSE solution, which is linked to the vector case through the transformation (\ref{typeI_transformation}), were found in \cite{gelash2018formation}; see also \cite{gelash2022breather} for additional details. Here we re-obtain this result. We consider the two-breather solution in the reference frame moving with the group velocity $V_i$ of the breather $i$ ($i=1,2$), which collides with the breather $j$ ($j=2,1$). We then analyze solution (\ref{two}) at large times, see the computational details in \cite{gelash2022breather} and in Appendix~\ref{Sec:Appendix:3}, and find the asymptotic state for each of the breathers. The full asymptotic state of the solution (\ref{two}) at $t \rightarrow \pm\infty$ represents single breathers with shifted position $\delta$ and phase $\theta$ parameters, as well as a shifted overall phase,
\begin{eqnarray}
\label{2B_asymptotic_I+I}
\psi_{1,2}^{\mathrm{I+I}}(\lambda_1,\delta_1,\theta_1;\lambda_2,\delta_2,\theta_2) \rightarrow
\begin{cases}
e^{\mp 2\,\mathrm{i}\,s_1 \alpha_2} \psi_{1,2}^\mathrm{I}(\lambda_1,\delta_1 + \delta^{\pm}_{0,1},\theta_1 + \theta^{\pm}_{0,1}), \quad\text{at}\quad x\sim V_{\mathrm{I_1}}t, \\\\
e^{\mp 2\,\mathrm{i}\,s_2 \alpha_1} \psi_{1,2}^\mathrm{I}(\lambda_2,\delta_2 + \delta^{\pm}_{0,2},\theta_2 + \theta^{\pm}_{0,2}), \quad\text{at}\quad x\sim V_{\mathrm{I_2}}t,
\end{cases}
\end{eqnarray}
where the sign $s_i = \pm 1$, the position shift $\delta^{\pm}_{0,i}$, and the phase shift $\theta^{\pm}_{0,i}$ are defined at $t \rightarrow \pm\infty$ by the following expressions,
\begin{eqnarray}
&& s_i \equiv \mathrm{sign} (V_j-V_i),
\\\nonumber
&&\delta^{\pm}_{0,i} \equiv \mp s_i \frac{l_\mathrm{I}(\lambda_i)}{2}\,\log \left|\frac{(r_i-r_j^*)(1+r_i r_j)}{(r_i-r_j)(1+r_i r_j^*)} \right|^2,
\\\nonumber
&&\theta^{\pm}_{0,i} \equiv \mp s_i\,\mathrm{Arg} \left[\frac{(r_i^*-r_j^*)(1+r_i r_j)}{(r_i^*-r_j)(1+r_i r_j^*) \sin \alpha_j}\right] .
\label{2B_asymptotic_coeff_I+I}
\end{eqnarray}
\begin{figure}[!t]
\centering
\includegraphics[width=0.31\linewidth]{fig2_2a.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig2_2b.pdf}\,\,\,\,\,
\includegraphics[width=0.32\linewidth]{fig2_2c.pdf}\\
\includegraphics[width=0.31\linewidth]{fig2_2d.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig2_2e.pdf}\,\,\,\,\,
\includegraphics[width=0.32\linewidth]{fig2_2f.pdf}\\
\caption{
Elastic collision $\mathrm{II}+\mathrm{II} \rightarrow \mathrm{II} + \mathrm{II}$ of vector breathers with spectral parameters defined by Eq.~(\ref{parameters_2B}). (a,b) $|\psi_1|$ and $\mathrm{Arg}[\psi_{1}]$, where $\mathrm{Arg}$ denotes the complex phase, after the breather collision at $t=25.0$. (c) Spatio-temporal plot of the wave field evolution. The dotted green and red lines in (a,b) show a local approximation of the breathers after the collision by single-breather solutions from the asymptotic (\ref{2B_asymptotic_general}). The thin black dashed line in (a) shows how the second breather would have appeared had it been traveling alone.}
\label{fig_10}
\end{figure}
Fig.~\ref{fig_09} shows an example of an elastic collision of two type-$\mathrm{I}$ breathers and also illustrates the asymptotic formula (\ref{2B_asymptotic_I+I}). One can see the change of the breathers' phases and positions by comparing the final wavefield state with the situation when one of the breathers travels alone, i.e., without collision. For this and some of the subsequent examples of elastic two-breather interactions, we use the following set of parameters,
\begin{eqnarray}
\label{parameters_2B}
&&A_1 = 1, \quad A_2=1;
\\\nonumber
&&\alpha_1=\pi/5, \quad \xi_1=1/4, \quad \theta_1 = 0, \quad \delta_1 = 0,
\\\nonumber
&&\alpha_2=4\pi/5, \quad \xi_2=1/2, \quad \theta_2 = 0, \quad \delta_2 = 0.
\end{eqnarray}
In the general case there are six possible combinations of elastic two-breather interactions $\mathrm{B_i}+\mathrm{\widetilde{B}}_j \rightarrow \mathrm{B_i} + \mathrm{\widetilde{B}}_j$. Here $\mathrm{B}$ and $\mathrm{\widetilde{B}}$ stand for one of the three breather types, while the subscripts $i=1,2$ and $j=2,1$ indicate the breather index number. Note that the indexes are not related to the breather type and can be chosen freely, i.e., they indicate which breather we call the first and which the second. The asymptotic state of this interaction is the following generalization of Eq.~(\ref{2B_asymptotic_I+I}),
\begin{eqnarray}
\label{2B_asymptotic_general}
\psi_{1,2}^{\mathrm{B_i+\widetilde{B}_j}}(\lambda_i,\delta_i,\theta_i;\lambda_j,\delta_j,\theta_j) \rightarrow
\begin{cases}
e^{2\,\mathrm{i}\beta^{\pm}_i} \psi_{1,2}^\mathrm{B}(\lambda_i,\delta_i + \delta^{\pm}_{0,i},\theta_i + \theta^{\pm}_{0,i}), \quad\text{at}\quad x\sim V_{\mathrm{B_i}}t, \\\\
e^{2\,\mathrm{i}\beta^{\pm}_j} \psi_{1,2}^\mathrm{\widetilde{B}}(\lambda_j,\delta_j + \delta^{\pm}_{0,j},\theta_j + \theta^{\pm}_{0,j}), \quad\,\text{at}\quad x\sim V_{\mathrm{\widetilde{B}_j}}t.
\end{cases}
\end{eqnarray}
where the shifts of the positions $\delta$, the phases $\theta$, and the overall phases are defined as follows,
\begin{eqnarray}
\{\delta_{0,i}^{-}, \delta_{0,i}^{+}, \theta_{0,i}^{-}, \theta_{0,i}^{+}, \beta^{-}_i, \beta^{+}_i\} =
\begin{cases}
\{a^{\mathrm{B_i,\widetilde{B}_j}}_i, b^{\mathrm{B_i,\widetilde{B}_j}}_i, c^{\mathrm{B_i,\widetilde{B}_j}}_i, d^{\mathrm{B_i,\widetilde{B}_j}}_i, e^{\mathrm{B_i,\widetilde{B}_j}}_i, f^{\mathrm{B_i,\widetilde{B}_j}}_i\}, & \text{at}\ s_i=1 \\\\
\{b^{\mathrm{B_i,\widetilde{B}_j}}_i, a^{\mathrm{B_i,\widetilde{B}_j}}_i, d^{\mathrm{B_i,\widetilde{B}_j}}_i, c^{\mathrm{B_i,\widetilde{B}_j}}_i, f^{\mathrm{B_i,\widetilde{B}_j}}_i, e^{\mathrm{B_i,\widetilde{B}_j}}_i\}, & \text{at}\ s_i=-1.
\end{cases}
\label{2B_asymptotic_coeff_general}
\end{eqnarray}
As before, in Eqs.~(\ref{2B_asymptotic_general}) and (\ref{2B_asymptotic_coeff_general}) the subscript indexes can be chosen freely to distinguish the breathers, i.e., $i=1,2$ and $j=2,1$, while $\mathrm{B}$ and $\widetilde{\mathrm{B}}$ indicate the breather type. For the coefficients $a$, $b$, etc. from (\ref{2B_asymptotic_coeff_general}), the lower index and the first upper index refer to the breather whose shift is presented, while the second upper index refers to the breather with which it interacts. The lower index only labels the studied breather, i.e., either the first or the second one, whereas the upper indexes also carry the types of the interacting breathers.
For example, $b_i^{\mathrm{B_i,\widetilde{B}_j}}$ at $s_i=1$ ($a_i^{\mathrm{B_i,\widetilde{B}_j}}$ at $s_i=-1$) represents the correction $\delta_{0,i}^{+}$ to the position of the $i$-th breather of type $\mathrm{B}$ at large time after the collision with the breather of type $\mathrm{\widetilde{B}}$.
Similarly to the formulas (\ref{2B_asymptotic_coeff_I+I}), we find the rest of the coefficients in (\ref{2B_asymptotic_coeff_general}) by asymptotic analysis of the solution (\ref{two}) at large times; see details in appendix Sec. \ref{Sec:Appendix:3}. We summarize these results in Table~\ref{table1}, presented in appendix Sec. \ref{Sec:Appendix:B2}. In addition, this table provides asymptotic wavefield values at $x\rightarrow \pm \infty$ for each of the two-breather configurations. Consider a concrete example of how to use the notations (\ref{2B_asymptotic_coeff_general}). Suppose the process is $\mathrm{I}+\mathrm{II} \rightarrow \mathrm{I} + \mathrm{II}$ and we need to know the asymptotic shifts of positions at large positive time. We say, for instance, that $\mathrm{B_1} = \mathrm{I}$ and $\mathrm{\widetilde{B}_2} = \mathrm{II}$. For the first breather we find $\delta_{0,1}^{+}=b_1^{\mathrm{I, II}}$ if $s_1=1$ and $\delta_{0,1}^{+}=a_1^{\mathrm{I, II}}$ if $s_1=-1$, while for the second one $\delta^{+}_{0,2}=b_2^{\mathrm{II, I}}$ if $s_2=-s_1=1$ and $\delta^{+}_{0,2}=a_2^{\mathrm{II, I}}$ if $s_2=-1$. Then we go to Table \ref{table1} and find the corresponding values of the coefficients $b$ or $a$ in its fourth row.
By analogy with Fig.~\ref{fig_09}, Fig.~\ref{fig_10} shows an example of an elastic collision of two equal-type breathers, $\mathrm{II}+\mathrm{II} \rightarrow \mathrm{II} + \mathrm{II}$, while Fig.~\ref{fig_11} shows a mixed case, $\mathrm{I}+\mathrm{III} \rightarrow \mathrm{I} + \mathrm{III}$. More examples are presented in appendix Sec. \ref{Sec:Appendix:B1}. The asymptotic result (\ref{2B_asymptotic_general}) is illustrated by approximating the two breathers after their mutual collision by single breathers with appropriately shifted phases and positions. Note that, in addition to (\ref{parameters_2B}), we also use the following set of breather parameters for the illustrations,
\begin{eqnarray}
\label{parameters_2B_new}
&&A_1 = 1, \quad A_2=1;
\\\nonumber
&&\alpha_1=\pi/4, \quad \xi_1=1/4, \quad \theta_1 = 0, \quad \delta_1 = 0,
\\\nonumber
&&\alpha_2=\pi/5, \quad \xi_2=1/2, \quad \theta_2 = 0, \quad \delta_2 = 0.
\end{eqnarray}
\begin{figure}[!t]
\centering
\includegraphics[width=0.31\linewidth]{figApp_1a.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{figApp_1b.pdf}\,\,\,\,\,
\includegraphics[width=0.32\linewidth]{figApp_1c.pdf}\\
\includegraphics[width=0.31\linewidth]{figApp_1d.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{figApp_1e.pdf}\,\,\,\,\,
\includegraphics[width=0.32\linewidth]{figApp_1f.pdf}\\
\caption{
Elastic collision $\mathrm{I}+\mathrm{III} \rightarrow \mathrm{I} + \mathrm{III}$ of vector breathers with spectral parameters defined by (\ref{parameters_2B_new}). (a,b) $|\psi_1|$ and $\mathrm{Arg}[\psi_{1}]$, where $\mathrm{Arg}$ denotes the complex phase, after the breather collision at $t=12.0$. (c) Spatio-temporal plot of the wave field evolution. The dotted green and red lines in (a,b) show a local approximation of the breathers after the collision by single-breather solutions from the asymptotic (\ref{2B_asymptotic_general}). The thin black dashed line in (a) shows how the breathers would have appeared had they been traveling alone.}
\label{fig_11}
\end{figure}
While the formulas (\ref{2B_asymptotic_general}) and (\ref{2B_asymptotic_coeff_general}) describe the asymptotic state of the two-breather interaction mathematically, the physically meaningful quantities are the total position and phase shifts $\Delta\delta^{\mathrm{B_i,\widetilde{B}_j}}_i$ and $\Delta\theta^{\mathrm{B_i,\widetilde{B}_j}}_i$ acquired by the breather $\mathrm{B_i}$ as a result of the collision with the breather $\mathrm{\widetilde{B}_j}$,
\begin{eqnarray}
\label{total_shifts1}
\Delta\delta^{\mathrm{B_i,\widetilde{B}_j}}_i = \delta_{0,i}^{+} - \delta_{0,i}^{-},
\\\label{total_shifts2}
\Delta\theta^{\mathrm{B_i,\widetilde{B}_j}}_i = \theta_{0,i}^{+} - \theta_{0,i}^{-}.
\end{eqnarray}
The total shift values (\ref{total_shifts1}) and (\ref{total_shifts2}) can be found using the coefficients listed in Table \ref{table1} of appendix Sec. \ref{Sec:Appendix:B2}; they form a broad family of expressions. Here we focus on particular features of the space shifts that are responsible for qualitative changes of the breather collision behaviour.
\begin{table}[!t]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|c|}
\hline
Process & Total space shift \\[0.5ex]
\hline
$\mathrm{I}_i+\mathrm{I}_j \rightarrow \mathrm{I}_i + \mathrm{I}_j$
&
$\begin{aligned}
\Delta\delta^{\mathrm{I_i,I_j}}_i = -s_i l_\mathrm{I}(\lambda_i)\,\log \left|\frac{(r_i-r_j^*)(1+r_i r_j)}{(r_i-r_j)(1+r_i r_j^*)} \right|^2
\end{aligned}$
\\
\hline
$\mathrm{II}_i+\mathrm{II}_j \rightarrow \mathrm{II}_i + \mathrm{II}_j$
&
$\begin{aligned}
\Delta\delta^{\mathrm{II_i,II_j}}_i = s_i l_\mathrm{II}(\lambda_i) \,
\left( \log \left[1 - \frac{n |1+r_i^* r_j |^2}{m (1+|r_i|^2) (1+|r_j|^2)}\right]
+ \log \left[1 - \frac{n}{m}\right] \right)
\end{aligned}$
\\
\hline
$\mathrm{III}_i+\mathrm{III}_j \rightarrow \mathrm{III}_i + \mathrm{III}_j$
&
$\begin{aligned}
\Delta\delta^{\mathrm{III_i,III_j}}_i = s_il_\mathrm{III}(\lambda_i) \,
\left( \log \left[1 - \frac{n |1+r_i^* r_j |^2}{m (1+|r_i|^2) (1+|r_j|^2)}\right]
+ \log \left[1 - \frac{n}{m}\right] \right)
\end{aligned}$
\\
\hline
$\mathrm{I}_i+\mathrm{II}_j \rightarrow \mathrm{I}_i + \mathrm{II}_j$
&
$\begin{aligned}
\Delta\delta^{\mathrm{I_i,II_j}}_i = s_i \frac{l_\mathrm{I}(\lambda_i)}{2}\,\log \left|\frac{(r_i-r_j^*)(1+r_i r_j)}{(r_i-r_j)(1+r_i r_j^*)} \right|^2
\\
\Delta\delta^{\mathrm{II_j,I_i}}_j = s_j l_\mathrm{II}(\lambda_j)\,\log \left|\frac{(r_j-r_i^*)(1+r_j r_i)}{(r_j-r_i)(1+r_j r_i^*)} \right|^2
\end{aligned}$
\\
\hline
$\mathrm{I}_i+\mathrm{III}_j \rightarrow \mathrm{I}_i + \mathrm{III}_j$
&
$\begin{aligned}
\Delta\delta^{\mathrm{I_i,III_j}}_i = -s_i \frac{l_\mathrm{I}(\lambda_i)}{2}\,\log \left|\frac{(r_i-r_j^*)(1+r_i r_j)}{(r_i-r_j)(1+r_i r_j^*)} \right|^2
\\
\Delta\delta^{\mathrm{III_j,I_i}}_j = -s_j l_\mathrm{III}(\lambda_j) \,\log \left|\frac{(r_j-r_i^*)(1+r_j r_i)}{(r_j-r_i)(1+r_j r_i^*)} \right|^2
\end{aligned}$
\\
\hline
$\mathrm{II}_i+\mathrm{III}_j \rightarrow \mathrm{II}_i + \mathrm{III}_j$
&
$\begin{aligned}
\Delta\delta^{\mathrm{II_i,III_j}}_i = -s_i l_\mathrm{II}(\lambda_i)\,
\left( \log \left[1 - \frac{n |r_i - r^*_j |^2}{m (1+|r_i|^2) (1+|r_j|^2)}\right]
+ \log \left[1 - \frac{n}{m}\right] \right)
\\
\Delta\delta^{\mathrm{III_j,II_i}}_j = s_j l_\mathrm{III}(\lambda_j) \,
\left( \log \left[1 - \frac{n |r_j - r^*_i |^2}{m (1+|r_j|^2) (1+|r_i|^2)}\right]
+ \log \left[1 - \frac{n}{m}\right] \right)
\end{aligned}$
\\
\hline
\end{tabular}
\caption{Values of the total space shifts, see Eq.~(\ref{total_shifts1}), for elastic collisions of vector breathers. The left column indicates the collision process type in the form $\mathrm{B_i}+\mathrm{\widetilde{B}_j} \rightarrow \mathrm{B_i} + \mathrm{\widetilde{B}_j}$, where $i=1\,\text{or}\,2$ while $j = 2\,\text{or}\,1$, respectively. The right column presents the corresponding values of $\Delta\delta^{\mathrm{B_i,\widetilde{B}_j}}_i$ and $\Delta\delta^{\mathrm{\widetilde{B}_j,B_i}}_j$ for each of the two breathers participating in the interaction. When $\mathrm{B}$ has the same type as $\mathrm{\widetilde{B}}$, we give only the $\Delta\delta^{\mathrm{B_i,\widetilde{B}_j}}_i$ value. The meaning of the indexes is explained in the main text. For example, $\Delta\delta^{\mathrm{I_i,III_j}}_i$ describes the total space shift acquired by the $i$-th breather of type $\mathrm{I}$ as a result of a collision with the breather of type $\mathrm{III}$.}
\label{tablemain}
\end{table}
In Table \ref{tablemain} we summarize the total spatial shifts in all possible breather interaction scenarios. As was shown in \cite{gelash2022breather}, for scalar NLSE breathers participating in head-on collisions the value of the spatial shift in the direction of the breather propagation can be positive, negative, or even zero depending on the breather parameters, which means that a breather can move forward or backward relative to its initial trajectory or remain on it. In the vector case we have a similar situation for those interactions in which type $\mathrm{I}$ breathers participate. More precisely, the sign of the spatial shift depends on which breather is faster, which is controlled by the value of $s_i$. However, the fastest breather does not necessarily move forward, and in addition, in some cases the sign of $\Delta\delta^{\mathrm{B_i,\widetilde{B}_j}}_i$ can be switched by changing the breather parameters while keeping $s_i$ the same. Note that this situation is unusual, taking into account that for NLSE solitons the faster one always moves forward with respect to its propagation direction, while the other moves backwards \cite{NovikovBook1984}.
To illustrate the behaviour of the vector breathers described above, we consider the following particular choice of parameters,
\begin{eqnarray}
\label{parameters_log_shifts}
\alpha_1= \alpha, \quad \alpha_2=\pi - \alpha, \quad \xi_1 = \xi_2 = \xi,
\end{eqnarray}
and assume that the first breather is always of type $\mathrm{I}$, i.e., $\mathrm{B_1}=\mathrm{I}_1$. When $\mathrm{B_2}=\mathrm{I}_2$ or $\mathrm{B_2}=\mathrm{III}_2$, the breathers collide in a head-on manner, while for $\mathrm{B_2}=\mathrm{II}_2$ the collision is overtaking. In addition, the absolute value of the type $\mathrm{I}$ breather velocity is always larger than those of the breathers $\mathrm{II}$ and $\mathrm{III}$, see Eqs.~(\ref{characteristic_values1}), (\ref{characteristic_values2}) and (\ref{characteristic_values3}). The latter means that the sign of $s_i$ does not change when varying $\alpha$ or $\xi$. Meanwhile, the sign of the total spatial shift can be changed by varying $\xi$ at a fixed value of $\alpha$. Indeed, the logarithm in the corresponding shift expressions, see Table \ref{tablemain}, can be simplified under the constraint (\ref{parameters_log_shifts}) as follows,
\begin{eqnarray}
\label{log}
\log \left|\frac{(r_1-r_2^*)(1+r_1 r_2)}{(r_1-r_2)(1+r_1 r_2^*)} \right|^2 =
\log \left(\frac{\sinh^2{\xi}}
{\cos^2{\alpha}(\cos^2{\alpha}\sinh^2{\xi} + \sin^2{\alpha}\cosh^2{\xi})} \right).
\end{eqnarray}
Now one sees that for a fixed $\alpha\ne\pi/2$ the argument of the logarithm (\ref{log}) changes from zero to $\cos^{-2}{\alpha}$ as $\xi$ changes from zero to infinity. At the point $\xi_0$ satisfying the transcendental condition $\sinh^2{\xi_0} = \cos^2{\alpha}(\cos^2{\alpha}\sinh^2{\xi_0} + \sin^2{\alpha}\cosh^2{\xi_0})$, the shift changes sign, as we illustrate in Fig.~\ref{fig_11add}(a). In addition, Fig.~\ref{fig_11add}(b,c) shows examples of the overtaking collision $\mathrm{I}+\mathrm{II} \rightarrow \mathrm{I} + \mathrm{II}$, where $\mathrm{I}$ overtakes $\mathrm{II}$, for $\xi<\xi_0$ (b) and $\xi>\xi_0$ (c). One can see that in case (b) the first breather $\mathrm{I}$ shifts forward along its trajectory, while the second breather $\mathrm{II}$ shifts backwards. In case (c) the geometry of the collision is the same, i.e., the first breather, being faster, overtakes the second one; however, the signs of the shifts are opposite. The latter is clearly visible for the breather $\mathrm{II}$ but less pronounced for the breather $\mathrm{I}$ due to the relatively small absolute value of its shift, see also Fig.~\ref{fig_11add}(a).
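The condition for $\xi_0$ can in fact be resolved in closed form: collecting the $\sinh^2{\xi_0}$ terms and using $1-\cos^4{\alpha} = \sin^2{\alpha}\,(1+\cos^2{\alpha})$, it reduces to
\begin{equation}
\tanh^2{\xi_0} = \frac{\cos^2{\alpha}}{1+\cos^2{\alpha}},
\end{equation}
so that, e.g., for $\alpha = 6\pi/16$ one obtains $\xi_0 \approx 0.37$.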
In addition, we note that the logarithmic expression entering the total space shift formulas of the processes $\mathrm{II}+\mathrm{II} \rightarrow \mathrm{II} + \mathrm{II}$ and $\mathrm{III}+\mathrm{III} \rightarrow \mathrm{III} + \mathrm{III}$ is always negative, i.e.,
\begin{eqnarray}
\label{log_signs}
\log \left[1 - \frac{n |1+r_i^* r_j |^2}{m (1+|r_i|^2) (1+|r_j|^2)}\right]
+ \log \left[1 - \frac{n}{m}\right] <0.
\end{eqnarray}
The inequality (\ref{log_signs}) means that the sign of the total shifts $\Delta\delta^{\mathrm{II_i,II_j}}_i$ and $\Delta\delta^{\mathrm{III_i,III_j}}_i$ is determined only by the sign of $s_i$. The latter is different for the processes $\mathrm{II}+\mathrm{II} \rightarrow \mathrm{II} + \mathrm{II}$ and $\mathrm{III}+\mathrm{III} \rightarrow \mathrm{III} + \mathrm{III}$, since the signs of $V_{\mathrm{II}}$ and $V_{\mathrm{III}}$ are opposite, see Eqs.~(\ref{characteristic_values2}) and (\ref{characteristic_values3}). Thus, for the same set of eigenvalues, the signs of $\Delta\delta^{\mathrm{II_i,II_j}}_i$ and $\Delta\delta^{\mathrm{III_i,III_j}}_i$ are also opposite.
\begin{figure}[!h]
\centering
\includegraphics[width=0.31\linewidth]{fig3_0a.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig3_0b.pdf}\,\,\,\,\,
\includegraphics[width=0.32\linewidth]{fig3_0c.pdf}
\caption{Illustration of the total space shift behaviour in the collisions $\mathrm{I_1}+\mathrm{B_2} \rightarrow \mathrm{I_1} + \mathrm{B_2}$, where $\mathrm{B_2}$ is a breather of type $\mathrm{I}$, $\mathrm{II}$ or $\mathrm{III}$. (a) Total space shift for the breathers with parameters defined by (\ref{parameters_log_shifts}) as a function of $\xi$ at fixed $\alpha_1 = 6\pi/16$. The black line corresponds to the case $\mathrm{B_2} = \mathrm{I_2}$, the blue lines to the process $\mathrm{B_2} = \mathrm{II_2}$ (solid blue line for the breather $\mathrm{I}_1$ and dashed one for the breather $\mathrm{II}_2$), and the red lines to the case $\mathrm{B_2} = \mathrm{III_2}$ (solid red line for the breather $\mathrm{I}_1$ and dashed one for the breather $\mathrm{III}_2$). The shift curves change sign at the point $\xi_0$, where the argument of the logarithm (\ref{log}) equals unity. (b,c) Spatio-temporal plots for collisions $\mathrm{I}_1+\mathrm{II}_2$ in the cases $\alpha_1=9\pi/16$, $\xi=0.05<\xi_0$ (b), and $\xi=0.8>\xi_0$ (c).
}
\label{fig_11add}
\end{figure}
\section{Particular cases of vector two-breather solution}\label{Sec:6}
The resonant fusion and decay of breathers presented in Sec. \ref{Sec:4} is based on the single-eigenvalue solution with nonzero integration constants $C_0$, $C_1$ and $C_2$. However, the same expression can also be obtained from the two-eigenvalue solution (\ref{two}) in the case of merging eigenvalues, i.e., when $\lambda_1 = \lambda_2 = \lambda$. More specifically, one must choose the integration constants in (\ref{two}) so that each breather has a different type, either I, II, or III. In other words, each of the vectors $\mathbf{C}_1$ and $\mathbf{C}_2$ has one zero among its components, and the positions of these zeros are different. For example, consider the case I+II, i.e., $C_{0,1} = 0$ and $C_{1,2} = 0$. We substitute $\lambda_1 = \lambda_2 = \lambda$ in (\ref{two}), so that $r_1 = r_2 = r$. In addition, we change $\alpha\rightarrow-\alpha$ and end up with the resonant solution described in Sec. \ref{Sec:4}, characterized by the following three nonzero integration constants,
\begin{eqnarray}
\label{Resonance_param2}
C_0 = \left [ C_{1,1}C_{2,2}(1+r^2)/C_{0,2} \right ]^*, \qquad
C_1 = C^*_{2,1}, \qquad
C_2 = -C^*_{1,1}.
\end{eqnarray}
Zakharov and Manakov proposed the interpretation of the resonant interaction as a result of merging eigenvalues in \cite{ZakharovManakov1976theory}. They found that when $\lambda_1 \rightarrow \lambda_2$, the solitons of the three-wave system acquire infinite space shifts due to the collision. The same situation takes place for vector breathers. Indeed, assuming $\lambda_1 \rightarrow \lambda_2$ in (\ref{total_shifts1}), see also Table \ref{tablemain}, for the processes $\mathrm{I}+\mathrm{II} \rightarrow \mathrm{I} + \mathrm{II}$ or $\mathrm{I}+\mathrm{III} \rightarrow \mathrm{I} + \mathrm{III}$ we obtain an infinite value of the space shift. To get a feeling for this limit, we plot the spatio-temporal diagrams of the breather collisions corresponding to a set of small differences $\varepsilon$ between the breather eigenvalues, defined as,
\begin{equation}
\label{epsilon_resonance}
\alpha_1=\alpha_2 = \alpha, \quad \xi_1 = \xi, \quad \xi_2 = \xi_1 + \varepsilon,
\end{equation}
see Fig.~\ref{fig_12}. At small $\varepsilon$, the point where the breathers collide transforms into a growing straight junction, which begins at the point where the breathers merge and later ends at the point where they separate. The length of the junction is of the order of the total shift $\Delta\delta^{\mathrm{I,II}}$, whose dependence on $\varepsilon$ in the case of the eigenvalues (\ref{epsilon_resonance}) is defined by the following logarithm, see Table \ref{tablemain},
\begin{equation}
\ln \left|\frac{(r_i-r_j^*)(1+r_i r_j)}{(r_i-r_j)(1+r_i r_j^*)} \right|^2 = \ln (1+ X),\quad X=\frac{4 \,a\, b \sin^2 \alpha\, [(a^2-1)(b^2-1)+4 \,a\, b \cos^2 \alpha]}{(a-b)^2 (1+a\, b)^2},
\end{equation}
where $a \equiv e^{-\xi_i}$, $b \equiv e^{-\xi_j}$. For small $\varepsilon$ one can see that $(a-b)\sim\varepsilon$ and thus $\Delta\delta^{\mathrm{I,II}}\sim \ln(1/\varepsilon)$; i.e., the junction grows logarithmically with decreasing $\varepsilon$. The logarithmic behaviour of the junction length can also be seen in Fig.~\ref{fig_12}, where we plot three collision portraits (a), (b) and (c) with $\varepsilon = 10^{-2}$, $10^{-4}$, and $10^{-6}$, respectively. The beginning of the junction remains at the same position on the spatio-temporal diagram, while its end goes to infinity. The junction itself becomes a breather of a type different from the types of the colliding breathers. Finally, at $\varepsilon=0$, i.e., when the eigenvalues merge precisely, the two-breather solution transforms into the resonance solution. As we noted above, a similar transformation of the two-soliton collision into the resonance pattern was described in \cite{ZakharovManakov1976theory} for the three-wave system.
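The logarithmic scaling of the junction length can be made explicit. With $b = a\,e^{-\varepsilon}$, one has $a-b = a(1-e^{-\varepsilon}) = a\varepsilon + O(\varepsilon^2)$, so that the leading behaviour of $X$ at small $\varepsilon$ is
\begin{equation}
X = \frac{4 \sin^2 \alpha \left[(a^2-1)^2 + 4 a^2 \cos^2 \alpha\right]}{\varepsilon^2 (1+a^2)^2} + O(\varepsilon^{-1}),
\qquad
\ln (1+X) = 2\ln\frac{1}{\varepsilon} + O(1),
\end{equation}
which confirms that the junction length diverges as $\ln(1/\varepsilon)$ when the eigenvalues merge.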
\begin{figure}[!t]
\centering
\includegraphics[width=0.31\linewidth]{fig3_1a.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig3_1b.pdf}\,\,\,\,\,
\includegraphics[width=0.32\linewidth]{fig3_1c.pdf}
\caption{Elastic collision $\mathrm{I}+\mathrm{II} \rightarrow \mathrm{I} + \mathrm{II}$ of vector breathers with close eigenvalues chosen according to Eq.~(\ref{epsilon_resonance}), and (a) $\varepsilon = 10^{-2}$, (b) $\varepsilon = 10^{-4}$, (c) $\varepsilon = 10^{-6}$. The spectral parameters $\xi$ and $\alpha$ are taken from Eq.~(\ref{parameters}). The characters $\mathrm{I}$, $\mathrm{II}$, and $\mathrm{III}$ in panel (c) indicate the two colliding breathers and the collision junction. The latter transforms into the type $\mathrm{III}$ breather asymptotically at $\varepsilon = 0$; see also the inset in panel (c) showing a comparison of the junction and the type $\mathrm{III}$ breather at $t=5$.
}
\label{fig_12}
\end{figure}
Now let us consider the particular case of vector breathers which emerges when the breathers are placed close to each other and their group velocities coincide. Such a nonlinear wave complex, also called a breather molecule, has been studied theoretically in the scalar case, see \cite{belanger1996bright,gelash2014superregular,li2018soliton,xu2019breather}, and recently reproduced experimentally in a nearly conservative optical fiber system \cite{xu2019breather}. Here we briefly consider its vector generalization. The group velocity matching condition for the two breathers reads,
\begin{equation}
\label{bound_state}
V_{\mathrm{B_1}} = V_{\mathrm{B_2}}.
\end{equation}
For example, in the case $\mathrm{II}+\mathrm{II}$ and fixed parameters $\xi_1$, $\xi_2$, $\alpha_1$, the condition (\ref{bound_state}) results in $\alpha_2 = \arccos{\left( e^{-\xi_1+\xi_2}\cos{\alpha_1}\right)}$. Fig.~\ref{fig_13} shows typical two-breather molecules $\mathrm{I}+\mathrm{I}$, $\mathrm{II}+\mathrm{II}$ and $\mathrm{III}+\mathrm{III}$. In the general case, the breather molecule exhibits quasi-periodic oscillations because the breathers' individual oscillation frequencies are not commensurate, see Fig.~\ref{fig_13}. As was shown in \cite{xu2019breather}, the commensurability condition for the breather oscillation frequencies in the scalar case leads to a high-order polynomial equation. When this equation has solutions within the region of validity of the breather parameters, the corresponding breather molecule is periodic, which was observed experimentally \cite{xu2019breather}. We leave the question of the construction of periodic vector breather molecules to further studies.
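As a numerical illustration of the matching condition in the $\mathrm{II}+\mathrm{II}$ case: for the parameters of Fig.~\ref{fig_13}(b), $\xi_1=1/2$, $\xi_2=1/4$, $\alpha_1=\pi/5$, one finds
\begin{equation}
\alpha_2 = \arccos{\left(e^{\xi_2-\xi_1}\cos{\alpha_1}\right)} = \arccos{\left(e^{-1/4}\cos{\frac{\pi}{5}}\right)} \approx 0.89,
\end{equation}
which is the value quoted in the caption of Fig.~\ref{fig_13}.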
\begin{figure}[!t]
\centering
\includegraphics[width=0.31\linewidth]{fig3_2a.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig3_2b.pdf}\,\,\,\,\,
\includegraphics[width=0.32\linewidth]{fig3_2c.pdf}
\caption{Quasi-periodic vector breather molecules for the cases (a) $\mathrm{I}+\mathrm{I}$, $\xi_1=1/2$, $\xi_2=1/4$, $\alpha_1=\pi/5$, $\alpha_2=0.92$. (b) $\mathrm{II}+\mathrm{II}$, $\xi_1=1/2$, $\xi_2=1/4$, $\alpha_1=\pi/5$, $\alpha_2=0.89$, (c) $\mathrm{III}+\mathrm{III}$, $\xi_1=1/4$, $\xi_2=1/2$, $\alpha_1=\pi/5$, $\alpha_2=0.89$. All position and phase parameters of the breathers are zero; the values of $\alpha_2$ have been computed from the condition (\ref{bound_state}).}
\label{fig_13}
\end{figure}
Finally, we discuss one more important case -- the so-called superregular scenario of breather interactions \cite{zakharov2013nonlinear,gelash2014superregular,gelash2018formation}. In a superregular interaction, a pair of scalar NLSE breathers nearly annihilates into a small-amplitude localized condensate perturbation as a result of their collision. The reverse process -- the emergence of breathers -- is also possible, i.e., the formation of breathers due to the perturbation growth and evolution. The latter makes superregular breathers important exactly solvable scenarios of the nonlinear stage of modulation instability development, see also \cite{kibler2015superregular,wabnitz2017book,conforti2018auto}. It is natural to ask whether nontrivial generalizations of the scalar superregular breathers exist in the vector case. The recent study \cite{tian2021superregular} did not find nontrivial vector analogs of the superregular breathers emerging from small-amplitude condensate perturbations. Here we present our analysis of the question.
We use the mathematical interpretation of the breather annihilation provided in \cite{gelash2014superregular}, according to which the folding of two breathers into one small localized condensate perturbation emerges due to the exact cancellation of the numerator of the two-breather solution in the case when,
\begin{equation}
\label{eugenvalue_cancellation}
\xi_1 = \xi_2 = 0, \qquad \alpha_1 = -\alpha_2 = \alpha,
\end{equation}
so that the NLSE solution represents a pure unperturbed condensate. Then, in the case $\xi_1 = \xi_2 = \epsilon\ll 1$, the solution at the moment of collision has to be a small localized condensate perturbation, because the breathers, having opposite group velocities, collide in a head-on manner, so that no other continuous limit to the condensate solution at $\epsilon\to 0$ is possible; see details in \cite{gelash2014superregular}.
In the vector case the first numerator $\tilde{M}_{12}$ of the two-breather solution (\ref{two}) under the constraint (\ref{eugenvalue_cancellation}) simplifies as follows (for the second numerator $\tilde{M}_{13}$ the derivations are analogous),
\begin{equation}
\label{M_cancellation}
\tilde{M}_{12} = i\tilde{m} (q_{12}q_{23} - q_{13}q_{22})(q^*_{11}q^*_{23} - q^*_{13}q^*_{21}),
\end{equation}
where $\tilde{m} = m_1 = m_2$. One can see that the numerator (\ref{M_cancellation}) is exactly cancelled at any $x$ and $t$ when $q_{12} = hq_{13}$, $q_{22} = hq_{23}$, where $h$ is an arbitrary constant. The latter happens only when $C_{0,1} = 0$, $C_{0,2} = 0$, so that $h=A_2/A_1$, i.e., when both breathers are of type $\mathrm{I}$ and the vector two-breather solution is the trivial generalization of the scalar one, see transformation (\ref{typeI_transformation}).
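The cancellation can be verified directly: substituting $q_{12} = hq_{13}$ and $q_{22} = hq_{23}$ into the first factor of (\ref{M_cancellation}) yields
\begin{equation}
q_{12}q_{23} - q_{13}q_{22} = h\,q_{13}q_{23} - h\,q_{13}q_{23} = 0,
\end{equation}
so that $\tilde{M}_{12}$ vanishes identically in $x$ and $t$.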
In the case when one or both breathers are of type $\mathrm{II}$ or $\mathrm{III}$, the numerator (\ref{M_cancellation}), together with the denominator $M$, see (\ref{two}), can be canceled only at specific points in $x$, whose locations depend on the integration vectors. In this case, the vector two-breather solution transforms into a degenerate one (one needs to resolve the indeterminate form $0/0$); see examples of degenerate scalar breathers in \cite{kedziora2012second,gelash2014superregular}. Here we do not study the degenerate limit and instead focus on the behavior of the vector two-breather solution at $\xi_1 = \xi_2 = \epsilon\ll 1$. Fig.~\ref{fig_15} shows two-breather solutions of different types at $t=0$, $\alpha=\pi/3$ and different values of $\epsilon$. In addition, we choose the breather phases and positions as $\theta_{1,2}=\pi/2$ and $\delta_{1,2}=0$, which corresponds to the most efficient folding of the breathers in the scalar case, see details in \cite{gelash2014superregular}. One can see the trivial vector analog ($\mathrm{I}+\mathrm{I}$) of the superregular folding in Fig.~\ref{fig_15}(a), which shows that the amplitude of the condensate perturbation (produced by the breather collision) decreases with decreasing $\epsilon$. At the same time, in the cases of the collisions $\mathrm{II}+\mathrm{II}$ and $\mathrm{III}+\mathrm{III}$ shown in Fig.~\ref{fig_15}(b,c), the amplitude of the wavefield remains large even at very small $\epsilon$. The latter means that, instead of the superregular folding, the vector breathers at $\epsilon\to 0$ tend to a degenerate limit, as discussed above. We conclude that the vector breathers of types $\mathrm{II}$ and $\mathrm{III}$ do not participate in the modulation instability development from small-amplitude perturbations, which is consistent with \cite{tian2021superregular} and also with the correspondence of these breather types to the stable branch of the dispersion law $\omega_{\mathrm{II}}$, see Sec. \ref{Sec:3}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.31\linewidth]{fig3_3a.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig3_3b.pdf}\,\,\,\,\,
\includegraphics[width=0.31\linewidth]{fig3_3c.pdf}
\caption{Collision profiles of vector two-breather solutions of types (a) $\mathrm{I}+\mathrm{I}$, (b) $\mathrm{II}+\mathrm{II}$, (c) $\mathrm{III}+\mathrm{III}$ at $t=0$ and parameters $\xi_1 = \xi_2 = \epsilon$, $\alpha_1 = -\alpha_2 = \pi/3$, $\theta_{1,2}=\pi/2$, $\delta_{1,2}=0$. The value of parameter $\epsilon$ is $0.4$ (blue curves), $0.2$ (red curves) and $0.1$ (green curves). Panel (a) shows a trivial vector analog of the superregular folding of a pair of scalar NLSE breathers into small-amplitude condensate perturbations. Panels (b) and (c) illustrate that there is no such folding for the breathers of type $\mathrm{II}$ and $\mathrm{III}$. Only the first wavefield component is shown; the behavior of $\psi_2$ is analogous.}
\label{fig_15}
\end{figure}
\section{Conclusions and discussions}\label{Sec:7}
In this work we have studied theoretically the vector breathers and their interactions in the framework of the two-component nonlinear Schrödinger equation -- the Manakov system. Our model assumes a focusing-type nonlinearity in both system components, see Eq.~(\ref{VNLSE}), and the presence of a nonzero constant background; see also \cite{prinari2006inverse} for the defocusing case and gray vector solitons. As a starting point we take the vector variant of the dressing method \cite{raskovalov2022resonant}, the studies of the three fundamental breather types I, II, III \cite{kraus2015focusing}, and of the resonant vector breather interactions \cite{raskovalov2022resonant}. We then reveal the connection between the breather type and the branches of the dispersion law, analyse important particular cases of the breather solutions, and describe the asymptotic state of the breather interactions by computing the spatial and phase shifts acquired by the breathers as a result of collisions. The three types of breathers generate a family of nine different shift expressions, which we summarize in Eq.~(\ref{2B_asymptotic_coeff_general}) and the corresponding Table \ref{table1}. We find that the spatial shifts of the vector breathers can change sign with the spectral parameters without a change in the sign of the difference between the breather velocities. Finally, the obtained shift expressions allowed us to interpret the resonant fusion and decay of breathers as a limiting case of an infinite space shift when the breather eigenvalues merge. In the future, the shift expressions can be used to build a spectral theory of vector breather gases, similar to the recent scalar case studies, see \cite{el2020spectral}.
The breathers of types $\mathrm{II}$ and $\mathrm{III}$ exhibit fundamentally different wavefield dynamics when compared to the breathers of type $\mathrm{I}$. The latter, being a trivial vector generalization of the scalar NLSE breathers, see transformation (\ref{typeI_transformation}), describe particular scenarios of the modulation instability and the formation of rogue waves. In contrast, the breathers of types $\mathrm{II}$ and $\mathrm{III}$, belonging to the stable branch of the dispersion law, do not participate in the development of modulation instability from small-amplitude perturbations. Indeed, for the Akhmediev-type eigenvalues, see Eq.~(\ref{Akhmediev_par_set}), as well as for superregular-type eigenvalues, see Sec. \ref{Sec:6}, the breathers of types $\mathrm{II}$ and $\mathrm{III}$ exhibit localized condensate pulsations which are never small. On the other hand, the breathers of types $\mathrm{II}$ and $\mathrm{III}$ represent an important class of localized pulsating exact solutions and, together with the type $\mathrm{I}$ breathers, are responsible for inelastic resonance interactions, see Sec. \ref{Sec:4}.
A fundamental question that needs further study is the eigenvalue portrait of localized small-amplitude arbitrary-shaped perturbations of the vector condensate. As was shown in \cite{conforti2018auto} for the scalar case, superregular eigenvalue pairs can be embedded into small-amplitude arbitrary-shaped condensate perturbations under certain conditions. Meanwhile, our present study shows that the type $\mathrm{II}$ and $\mathrm{III}$ breathers cannot be folded in such a way, at least within the standard superregular scenario. All this leads us to the conjecture that only type $\mathrm{I}$ superregular breathers exist in the locally perturbed vector condensate. This conjecture can be tested in the future using numerical computation of the eigenvalue spectrum of the auxiliary system (\ref{lax system 1}). Moreover, we expect that the evolution of vector arbitrary-shaped perturbations is driven by the interaction between type I breathers and the unstable continuous spectrum; see the works \cite{biondini2016universal,conforti2018auto,biondini2021long} explaining how this happens in the scalar case.
We believe that our study will also benefit the rapidly developing area of the statistical description of nonlinear waves in integrable systems -- the so-called integrable turbulence, see \cite{zakharov2009turbulence,pelinovsky2013two,randoux2014intermittency,agafontsev2015integrable,soto2016integrable,Gelash2018,gelash2019bound}. The first results on random polarized nonlinear waves have recently been obtained in \cite{manvcic2018statistics}, and we think that our analysis of the vector breather interactions will provide new insights into this complex subject. Meanwhile, experimental observation of various aspects of integrable scalar NLSE dynamics and statistics has been successfully performed in many different works, see e.g. \cite{kibler2010peregrine,bailung2011observation,chabchoub2011rogue,kibler2012observation,frisquet2013collision,chabchoub2014hydrodynamics,chabchoub2019drifting,pierangeli2018observation,xu2019breather,Kraych2019Statistical,xu2020ghost}. In addition, the development of vector modulation instability and vector dark rogue waves has been studied experimentally in a Manakov fiber system \cite{frisquet2015polarization,baronio2018observation}. At the same time, the experimental observation of vector breathers remains a challenging task for further studies; see also \cite{baronio2018observation}, where the conditions for experimental observation of vector breathers have been discussed.
\begin{acknowledgments}
The main part of the work was supported by the Russian Science Foundation (Grant No. 19-72-30028). The work of A.G. on Section \ref{Sec:6} and Appendix Section \ref{Sec:Appendix:2} was supported by RFBR Grant No. 19-31-60028. The work of A.R. on Appendix Sections \ref{Sec:Appendix:1} and \ref{Sec:Appendix:B1} was performed in the framework of the state assignment of the Russian Ministry of Science and Education ``Quantum'' No. AAAA-A18-118020190095-4. The authors thank participants of Prof. V.E. Zakharov’s seminar ``Nonlinear Waves'' and, especially, Prof. E.A. Kuznetsov for fruitful discussions.
\end{acknowledgments}
|
2211.07180
|
\section*{Results}
As a preliminary step, let us characterize the WTN for the years 2010 and 2019 using the Louvain method for cluster detection \cite{blondel08,dugue15}.
The results presented in Fig.~\ref{fig1} show
the existence of 3 main clusters formed around the USA, China and Germany.
We observe that from 2010 to 2019 the size of the cluster
around China grows significantly. Indeed, from 2010 to 2019, the US-cluster loses
almost all of the South American countries to the benefit of the CN-cluster, and a dominant part of Africa enters the CN-cluster.
Meanwhile, the cluster formed around Germany
remains practically unchanged including the
EU countries, the countries of the former Soviet Union, and most of the Maghreb. Examples of WTN clustering
for other years of the considered decade are shown in Fig.~S2 of the Supplementary Information.
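The clustering step can be sketched with the NetworkX implementation of the Louvain method; the country codes and trade volumes below are illustrative placeholders, not the actual UN Comtrade data analyzed here:

```python
import networkx as nx

# Toy weighted trade graph; country codes and volumes are illustrative
# placeholders, not the actual UN Comtrade data.
G = nx.Graph()
G.add_weighted_edges_from([
    ("USA", "MEX", 600), ("USA", "CAN", 700), ("MEX", "CAN", 40),
    ("CHN", "KOR", 300), ("CHN", "JPN", 350), ("KOR", "JPN", 80),
    ("USA", "CHN", 5),  # weak inter-cluster link
])

# Louvain community detection (modularity maximization) on the weighted graph
communities = nx.community.louvain_communities(G, weight="weight", seed=0)
```

On this toy graph the method recovers the two trade blocks built around the USA and China; a computation of this kind on the full WTN underlies the clusters discussed above.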
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{fig2.pdf}
\end{center}
\vglue -0.3cm
\caption{\label{fig2}Final fraction $f_f$ of countries preferring to trade in USD as a function of the initial fraction $f_i$ for the years 2010 (left panel) and 2019 (right panel). Left and right panels: for any initial fraction $f_i$, the Monte Carlo procedure converges toward one of two final fractions $f_f$. These two final fractions are $f_{f_1} = 0.16$ and $f_{f_2}=0.66$ in 2010 (left panel)
and $f_{f_1}=0.14$ and $f_{f_2}=0.44$ in 2019 (right panel). The color of a point ($f_i$,$f_f$) indicates the portion $\rho_{f_f}(f_i)$ of the $N_r=10^4$ initial configurations, with a corresponding initial fraction $f_i$, which attain the final state with the corresponding final fraction $f_f$. The color ranges from black for $\rho_{f_f}(f_i)=0$ (all the countries preferring to trade in CNY rather than in USD) to bright yellow for $\rho_{f_f}(f_i)=1$ (all the countries preferring to trade in USD rather than in CNY). Middle panel: portion $\rho_{f_f}(f_i)$ of the $N_r=10^4$ initial configurations, with initial fraction $f_i$ of countries preferring to trade in USD, which attain the final state with the final fraction $f_f$. The red (blue) curve and the up (down) triangles correspond to the lowest (highest) value $f_{f_1}$ ($f_{f_2}$) of the two final fractions $f_f$. The full (empty) symbols correspond to the year 2019 (2010).
}
\end{figure}
However, this preliminary cluster analysis of the WTN, based on the maximization of the modularity \cite{blondel08,dugue15}, does not determine the trade preference of the countries either for USD or CNY.
The Monte Carlo procedure, described in the previous section, allows us to obtain the final fraction $f_f$ of world countries which prefer to trade in USD and, conversely, the final fraction $1-f_f$ of world countries which prefer to trade in CNY. Fig.~\ref{fig2} shows the final fraction $f_f$ as a function of the fraction $f_i$ of countries which initially prefer to trade in USD. For each value of $f_i$, we randomly picked $N_r=10^4$ different spin configurations.
We observe that for any initial fraction $f_i$ in the interval $[0,1]$, only two final fractions $f_{f_1}$ and $f_{f_2}$ can be reached. However, the probability that a given spin configuration reaches one or the other final fraction depends on the initial distribution of the TCPs over the countries. Let us take $f_{f_1}<f_{f_2}$.
Quite naturally, the higher (lower) the initial fraction $f_i$, the higher the probability to obtain the highest (lowest) final value $f_{f_2}$ ($f_{f_1}$).
The middle panel of Fig.~\ref{fig2} gives, for the years 2010 and 2019, the probabilities $\rho_{f_{f_1}}(f_i)$ and $\rho_{f_{f_2}}(f_i)$ to obtain the final fraction $f_{f_1}$ and $f_{f_2}$ as a function of the initial fraction $f_i$.
In 2010, see left panel of Fig.~\ref{fig2}, each of the final fractions corresponded to a majority of countries with either a USD preference ($f_{f_2}=0.66$) or a CNY preference ($f_{f_1}=0.16$). This is no longer the case in 2019, see right panel of Fig.~\ref{fig2}, for which the two final fractions $f_{f_1}=0.14$ and $f_{f_2}=0.44$ are below $0.5$ and both give a CNY preference for the majority of the world countries. In one decade, according to the sole structure of the WTN, we pass from a bipolar USD-CNY trade currency preference to a global domination of the CNY.
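As an illustration only, the relaxation toward a final fraction $f_f$ can be sketched as a simplified deterministic variant of this procedure (the actual model uses the trade probabilities $P_c$, $P^*_c$ and a stochastic Monte Carlo update; the country names and trade volumes below are hypothetical):

```python
def relax_tcp(trade, anchors, init_pref, max_sweeps=100):
    """Toy relaxation of trade-currency preferences (illustrative sketch).

    Spins are +1 (prefer USD) or -1 (prefer CNY); `anchors` pins the USA
    at +1 and China at -1; trade[c][n] is the volume exchanged between
    countries c and n. Each country repeatedly adopts the currency
    favored by its trade partners, weighted by the exchanged volumes.
    """
    spin = dict(init_pref)
    spin.update(anchors)
    floating = [c for c in trade if c not in anchors]
    for _ in range(max_sweeps):
        changed = False
        for c in floating:
            field = sum(w * spin[n] for n, w in trade[c].items())
            new = 1 if field > 0 else -1
            if new != spin[c]:
                spin[c], changed = new, True
        if not changed:  # fixed point reached
            break
    # final fraction f_f of countries preferring USD
    return sum(s == 1 for s in spin.values()) / len(spin)

# Two-country toy example: A trades mostly with the USA, B mostly with China
trade = {"A": {"USA": 10, "CHN": 1}, "B": {"USA": 1, "CHN": 10}}
f_f = relax_tcp(trade, anchors={"USA": 1, "CHN": -1},
                init_pref={"A": -1, "B": 1})
```

In this toy example each country simply aligns with its dominant trade partner; the bistability discussed above arises on the full WTN, where the swing countries interact mostly among themselves.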
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{fig3.pdf}
\end{center}
\vglue -0.3cm
\caption{\label{fig3}World distribution of the trade currency preference probability for the years 2010 and 2019. Each panel corresponds to a given year and a given fraction $f_i$ of countries preferring to initially trade in USD. Left (right) column panels correspond to the year 2010 (2019). The top, middle, and bottom rows correspond to $f_i=0.1$, $0.5$, and $0.9$, respectively. Each country is characterized by a probability $P_\$$ to obtain a USD trade preference at the end of the Monte Carlo procedure. The CNY trade preference probability of a country is then $P_\yen=1-P_\$$. The color ranges from red for $P_\$=0$ and $P_\yen=1$ to blue for $P_\$=1$ and $P_\yen=0$. The average TCP probability $P_\$$ has been computed from $N_r=10^4$ random initial TCP distributions.
}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{fig4.pdf}
\end{center}
\vglue -0.3cm
\caption{\label{fig4}
World distribution of countries belonging to the USD group
(blue, hard preference to trade in USD), the CNY group (red, hard preference to trade in CNY)
and the swing group (green, the TCP can change between USD and CNY depending on the initial conditions). The world maps are shown for the years 2010 (left panel) and 2019 (right panel).
}
\end{figure}
Let us define the TCP probability $P_\$(c)$ that country $c$ obtains a USD preference at the end of the Monte Carlo procedure. The probability that country $c$ obtains a CNY preference is then $P_\yen(c)=1-P_\$(c)$. The probability $P_\$$ is obtained by applying the Monte Carlo procedure to the $N_r=10^4$ initial TCP distributions. Fig.~\ref{fig3} shows the world distribution of the TCP probability. The left panels of Fig.~\ref{fig3} illustrate the bipolar USD-CNY trade currency preference described above, which exists in 2010: for high (low) $f_i$ most of the countries finally prefer USD (CNY). In 2019 (Fig.~\ref{fig3}, right panels), the CNY dominance is clearly observed. Indeed, even for high $f_i$, most of the countries finally prefer CNY over USD.
The reason for the bistability of the final outcomes $f_f$ (see $f_{f_1}$ and $f_{f_2}$ in Fig.~\ref{fig2})
can be understood from the analysis of the distribution of the USD or CNY trade preference over the
countries. In fact, there are two groups of countries
which keep, for any initial fraction $f_i$, a hard preference to trade in USD
(the USD group) or in CNY (the CNY group). Otherwise stated, a country of the CNY (USD) group, independently of its initial TCP and of the initial TCPs of the other countries, always ends up in the CNY (USD) group.
A third group of swing states (the swing group) may change their TCP. Remarkably,
depending on the initial configuration of countries which prefer to trade in USD or in CNY, the countries belonging to this swing group collectively adopt at equilibrium either the USD or the CNY as trade currency. These swing states are collectively responsible for the final outcome: if they adopt USD (CNY) at equilibrium, the final fraction $f_f$ is the highest (lowest) of the two possible fractions, i.e., $f_{f_2}$ ($f_{f_1}$).
These three groups are shown on the world maps displayed in Fig.~\ref{fig4} for the years 2010 and 2019
(the world maps for the other years of the past decade are shown in Fig.~S3 of the Supplementary Information).
We clearly see that there is a drastic change from 2010 to 2019:
a large number of countries passed from the swing group to the CNY group, which has grown considerably in size during the last decade. Indeed, the former Soviet Union countries and
almost all of South America and Africa belong in 2019 to the CNY group.
By contrast, from 2010 to 2019, the size of the USD group shrank only slightly, losing a few countries: Venezuela, Nigeria, Chad, South Sudan, Equatorial Guinea, Afghanistan and the Federated States of Micronesia switched to the CNY group, and Suriname and Israel to the swing group.
In 2019, the swing group is mainly composed of
EU countries, the UK and some Mediterranean countries (Turkey, Egypt, Morocco, Algeria, Tunisia and Israel).
The lists of the countries belonging to the USD, CNY and swing groups in 2010 and in 2019 are given in Tables~S1 to S6 of the Supplementary Information.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{fig5.pdf}
\end{center}
\vglue -0.3cm
\caption{\label{fig5}Time evolution of the sizes of the trade currency preference groups. The 3 bands correspond to the USD group (blue), the CNY group (red), and the swing group (green). The width of a band corresponds to the size of the corresponding group expressed as:
(left panel) the ratio of the number of countries belonging to the group to the total number of countries;
(right panel) the ratio of the total trade volume exchanged by the countries of the group to the total volume exchanged by all the world countries. The trade volumes are expressed in USD of the concerned year.}
\end{figure}
The time evolution over the last decade of the size of the three TCP groups is shown in Fig.~\ref{fig5}. The left panel of Fig.~\ref{fig5} displays the fraction of countries belonging to each group and the right panel displays the fraction of the total volume of import and export exchanged by each group. From Fig.~\ref{fig5} left panel, we observe that the CNY group (red band) steadily grows along the decade from a fraction of 34\% of the countries in 2010 to 60\% in 2020. This growth of the CNY group is mainly compensated by the depletion of the swing group (green band) whose size drops from 50\% of the countries in 2010 to 27\% in 2020. Meanwhile, the size of the USD group (blue band) slightly decreased from 16\% of the countries in 2010 to 13\% in 2020.
The trends are the same but less pronounced for the fraction of the trade volume exchanged by the different groups (see Fig.~\ref{fig5} right panel). The fraction of the trade volume exchanged by the CNY (swing) group increases (decreases) from 34\% (49\%) in 2010 to 43\% (40\%) in 2020. Meanwhile, the fraction of the trade volume exchanged by the USD group remained nearly constant during the decade (17\% for both 2010 and 2020). Consequently, the countries switching during the last decade from the swing group to the CNY group represent 23\% of the world countries but only 9\% of the world trade volume. The CNY club grew, but its new entrants are somewhat less important in terms of exchanged trade volume.
Let us note that we obtain practically the same results if, instead of keeping China always trading in CNY and the USA always trading in USD, we keep China and the other BRICS countries (Brazil, Russia, India, South Africa) always trading in CNY and the USA and other Anglo-Saxon countries (Canada, the UK, Australia, New Zealand) always trading in USD; see e.g., Figs.~S4 and S5 in the Supplementary Information, which are quite similar to Figs.~\ref{fig2} and \ref{fig4}. This result confirms the dominance of China and the USA in the world trade network.
We have also considered replacing the import trade probabilities $P_c$ and the export trade probabilities $P^*_c$ in
(\ref{eq1}) by the PageRank and CheiRank probabilities
obtained from the Google matrix of the WTN. These probabilities, which measure the capability of a country to import or export products throughout the WTN, were used to analyze international trade \cite{wtn1,wtn3}. Such a replacement of the probabilities, i.e., of the centrality measures of the WTN,
leads again to practically the same results (see e.g., Figs.~S6 and S7 in the Supplementary Information, which are similar to Figs.~\ref{fig2} and \ref{fig4}).
\section*{Conclusion}
The question addressed in the current work is the following one: assuming that only the WTN structure matters, what would be the trade currency preference of each country? Our analysis, performed by superimposing an Ising spin network on the WTN, clearly shows that the main part of the world would nowadays prefer to trade in CNY, while in 2010 it would have preferred to trade in USD.
We observe two final equilibrium states. In 2010, one of them corresponds to a USD preference and the other one to a CNY preference, whilst in 2019 both final states correspond to a CNY preference. Nowadays, according to the WTN structure, for any initial distribution of countries preferring to trade in USD or in CNY, the final state always favors a world which preferentially trades in CNY. The bistability of the final state is due to a group of swing states which, depending on the initial distribution of the trade currency preferences over the countries, all together adopt a preference for either USD or CNY. Of course, our analysis is based on the mathematical treatment
of the trade flows between the world countries and does not take
into account any geopolitical relations between the countries.
But, as it is often claimed that economics determines politics,
we argue that the obtained results demonstrate
a drastic change in the international trade, with a transition from dollar dominance to yuan dominance.
\begin{acknowledgments}
We thank L. Ermann for his help in collecting data
from the UN Comtrade database. We thank the UN Statistics Division for granting us friendly access to the UN Comtrade
database. This research has been partially supported through the grant
NANOX N$^\circ$ ANR-17-EURE-0009 (project MTDINA) in the frame
of the Programme des Investissements d'Avenir, France.
This research has also been supported by the
Programme Investissements d’Avenir ANR-15-IDEX-0003.
\end{acknowledgments}
|
2112.12743
|
\section{Introduction}
\label{sec:intro}
\vspace{-6pt}
In recent years, enormous progress has been made in neural text-to-speech (TTS), benefiting from the development of sequence-to-sequence (seq2seq) neural models~\cite{bahdanau2014neural,sutskever2014sequence} and making it possible to synthesize highly intelligible and natural speech~\cite{wang2017tacotron,shen2018natural,ren2019fastspeech,yu2020durian}. Despite the successful application of TTS in many scenarios, the ability to create expressive synthetic speech that can be flexibly controlled in terms of various speaking styles and speaker timbres is desirable for a better user experience. This paper proposes a new expressive speech synthesis task that creates diverse synthetic speech by combining the timbre and speaking style of different speakers.
To create a TTS system with the ability to synthesize various expressive speech, a straightforward method is to train a TTS model on a database with manual labels~\cite{lee2017emotional,choi2019multi,litao2021controllable,li2018emphatic,liu2021expressive}, for instance, a database with manually labeled emotion categories~\cite{lee2017emotional,litao2021controllable} or speaking styles~\cite{liu2021expressive}. However, the limitation of these methods is obvious: they heavily depend on the training data and cannot create new voices by combining different speaker timbres and speaking styles. To transplant a style to a target speaker for whom no labeled expressive recording exists, the cross-speaker style transfer task has attracted much attention~\cite{bian2019multi,whitehill2019multi,karlapati2020copycat,litao2021controllable,pan2021cross,shang2021incorporatingM3}. Reference embedding-based cross-speaker style transfer models~\cite{bian2019multi,whitehill2019multi,karlapati2020copycat,shang2021incorporatingM3,Li2021ControllableCE}, typically built on several general reference embedding methods \cite{skerry2018towards,wang2018style,zhang2019learning}, have shown promising performance on the style transfer task.
While those cross-speaker transfer methods can successfully produce expressive speech with a specific speaking style and the timbre of a speaker who has no such speaking style in the corpus, they typically depend on a source speaker with enough manually labeled expressive recordings. Producing synthetic speech with various styles then requires a source speaker who is an expert in expressing all expected styles. In practice, it is impossible for one source speaker to imitate all possible speaking styles and record enough recordings. In contrast, it is much easier to obtain an expressive corpus in which each speaker speaks only one specific style that he or
she is good at. With such a corpus, a practical task is to build a TTS system that can produce synthetic speech by combining different timbres and styles from different speakers, which is referred to as \emph{speaker-related multi-style and multi-speaker TTS (SRM2TTS)}.
However, it is non-trivial to achieve such an SRM2TTS task. Compared with the traditional cross-speaker style transfer task, in the SRM2TTS task the timbre and style are closely entangled, making it difficult to transfer styles across speakers with reference-based methods. Taking inspiration from the success of the label-assisted content-aware prosody prediction model on the style transfer task~\cite{pan2021cross}, a novel method for the SRM2TTS task is proposed in this work. Specifically, based on a typical neural seq2seq framework, a content-aware multi-scale prosody modeling module is proposed, which provides the style information to the TTS system based on the style label and input text. With an extra speaker identity controller, the proposed method can distinguish different styles and timbres, and thus can perform any combination of speakers and styles for SRM2TTS. Experiments have shown that the proposed method achieves good performance in synthesizing expressive speech by combining any speaker timbre and speaking style. Besides, benefiting from the explicit modeling of prosody features, the proposed method can flexibly control each prosodic component, e.g., pitch and energy, which increases the diversity of synthesized speech.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.5]{fig/newpp.pdf}
\caption{The architecture of the proposed model.}
\label{fig:model} \vspace{-0.4cm}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{fig/pp.pdf}
\caption{The architecture of the prosody predictor.}
\label{fig:pred}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{fig/Prosodyencoder4.pdf}
\caption{The architecture of the prosody encoder.}
\label{fig:enc}
\end{figure}
Our contribution can be summarized as follows:
(1) Synthesizing expressive speech by combining any style and timbre based on a multi-speaker database in which each speaker has a unique speaking style is proposed for the first time in this work. The realization of this task has profound implications from the perspective of usability.
(2) A novel method is proposed that realizes the combination and control of any style and timbre in expressive speech synthesis.
(3) Extensive experiments have shown that, with a novel fine-grained text-based prosody modeling module, the proposed method can explicitly model and flexibly control the prosodic components.
\section{Proposed Model}
The proposed framework is illustrated in Fig.~\ref{fig:model}. As shown, the proposed model is a typical attention-based seq2seq framework, in which the backbone of the encoder-decoder structure is a modified Tacotron2~\cite{shen2018natural}. Besides, a novel text-based fine-grained prosody module is included to predict the prosody, and a speaker identity controller controls the timbre of the synthetic speech.
\subsection{The backbone of the encoder-decoder framework}
Following~\cite{litao2021controllable}, a slightly modified version of Tacotron2~\cite{shen2018natural} is used as the encoder-decoder backbone. The encoder consists of a pre-net, which is composed of two fully connected layers, and a CBHG module~\cite{lee2017fully}.
The decoder is composed of an autoregressive recurrent neural network (RNN) and generates attention queries at each decoder time step. Here, the GMM attention mechanism is used, which has shown good performance in modeling long speech sequences~\cite{graves2013generating,battenberg2020location}. To control the timbre of the synthesized speech, an additional speaker embedding with a dimension of 256 is concatenated with the RNN input in the decoder. As in the original Tacotron2~\cite{shen2018natural}, a post-net consisting of a five-layer convolutional network is also adopted. Speech in this framework is represented by mel-spectrograms, and multi-band WaveRNN~\cite{yu2020durian} is adopted to reconstruct waveforms from the predicted spectrograms.
\begin{table*}[!htbp]
\caption{Comparison of our proposed method with Multi-R and PB in terms of style and speaker similarity MOS with confidence intervals of 95$\%$. The higher value means better performance, and the bold indicates the best performance out of three models in terms of each style.}
\label{tab:transfer}
\setlength{\tabcolsep}{3mm}
\centering
\begin{tabular}{l|lll|lll}
\toprule
\multicolumn{1}{c|}{\multirow{2}{*}{Style}} & \multicolumn{3}{c|}{style similarity MOS} & \multicolumn{3}{c}{speaker similarity MOS} \\ \cmidrule{2-7}
\multicolumn{1}{c|}{} & \multicolumn{1}{c}{Multi-R~\cite{bian2019multi}} & \multicolumn{1}{c}{PB~\cite{pan2021cross}} & \multicolumn{1}{c|}{Proposed} & \multicolumn{1}{c}{Multi-R~\cite{bian2019multi}} & \multicolumn{1}{c}{PB~\cite{pan2021cross}} & \multicolumn{1}{c}{Proposed} \\ \midrule
Story & 3.59$\pm$0.058 & 3.65$\pm$0.056 & \bf{3.77}$\pm$\bf{0.058} & 3.83$\pm$0.049 & 3.91$\pm$0.049 & \bf{4.01}$\pm$\bf{0.047} \\
Anchor & 3.48$\pm$0.061 & 3.61$\pm$0.057 & \bf{3.79}$\pm$\bf{0.058} & 3.74$\pm$0.050 & 4.01$\pm$0.047 & \bf{4.04}$\pm$\bf{0.048} \\
CS & 3.22$\pm$0.060 & 3.78$\pm$0.062 & \bf{3.84}$\pm$\bf{0.059} & 3.81$\pm$0.046 & \bf{3.84}$\pm$\bf{0.048} & \bf{3.84}$\pm$\bf{0.046} \\
Poetry & 2.84$\pm$0.057 & 3.88$\pm$0.060 & \bf{4.14}$\pm$\bf{0.054} & 3.82$\pm$0.049 & \bf{3.88}$\pm$\bf{0.050} & 3.86$\pm$0.047 \\
Game & 2.78$\pm$0.054 & 3.81$\pm$0.060 & \bf{4.03}$\pm$\bf{0.059} & 3.90$\pm$0.045 & 3.92$\pm$0.049 & \bf{4.04}$\pm$\bf{0.048} \\ \midrule
Overall & 3.18$\pm$0.023 & 3.74$\pm$0.027 & \bf{3.91}$\pm$\bf{0.026} & 3.82$\pm$0.021 & 3.91$\pm$0.022 & \bf{3.96}$\pm$\bf{0.021} \\ \bottomrule
\end{tabular}
\end{table*}
\subsection{Text-based fine-grained prosody module}
As mentioned in the introduction, when there is an exact correspondence between the speaking style and the speaker identity, the speaker information and style information are deeply entangled from a global perspective. Therefore, it is crucial to find the essential difference between speaker information and style information. Actually, the speaker information, typically the timbre, is global information, which means that the timbre related to the speaker identity basically does not change along with the speaking style. In contrast, the speaking style, which is generally presented by fine-grained prosody, is mainly local information and varies across different speech units. Directly representing the prosody as a global embedding makes it hard to distinguish from the speaker embedding in our case. Instead, a fine-grained prosody encoder, as shown in Fig.~\ref{fig:model}, is proposed to model the phoneme-level prosody. During the training stage, the prosodic features are represented by pitch, duration, and energy, all at the phoneme level. Meanwhile, a text-based prosody predictor is optimized with the input of the text encoder's output and the style embedding. During the inference stage, the prosody predictor provides the speaking style information for speech synthesis.
\textbf{Prosody predictor}~The structure of the prosody predictor is shown in Fig.~\ref{fig:pred}. It consists of five one-dimensional convolutional layers and one linear transformation layer. Each convolutional layer is followed by layer normalization, a ReLU activation function, and dropout. Considering the temporal nature of prosodic sequences, a positional encoding vector is added to the input. To optimize the prosody predictor, the L1 loss is used to calculate the deviation between the predicted prosody and the ground-truth prosody features.
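A minimal PyTorch sketch of such a predictor is given below; the hidden sizes, kernel width, and the sinusoidal form of the positional encoding are our assumptions rather than the paper's exact configuration:

```python
import math
import torch
import torch.nn as nn

class ProsodyPredictor(nn.Module):
    """Sketch of a text-based prosody predictor: five 1-D conv layers,
    each followed by LayerNorm, ReLU and dropout, then a linear projection
    to 3 phoneme-level prosody features (pitch, duration, energy)."""
    def __init__(self, in_dim=256, hidden=256, n_feats=3, dropout=0.1):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim if i == 0 else hidden, hidden,
                      kernel_size=3, padding=1)
            for i in range(5))
        self.norms = nn.ModuleList(nn.LayerNorm(hidden) for _ in range(5))
        self.dropout = nn.Dropout(dropout)
        self.proj = nn.Linear(hidden, n_feats)

    def forward(self, x):                      # x: (batch, time, in_dim)
        # sinusoidal positional encoding added to the input sequence
        t = torch.arange(x.size(1), dtype=torch.float32).unsqueeze(1)
        div = torch.exp(torch.arange(0, x.size(2), 2, dtype=torch.float32)
                        * (-math.log(10000.0) / x.size(2)))
        pe = torch.zeros(x.size(1), x.size(2))
        pe[:, 0::2] = torch.sin(t * div)
        pe[:, 1::2] = torch.cos(t * div)
        x = x + pe
        for conv, norm in zip(self.convs, self.norms):
            x = conv(x.transpose(1, 2)).transpose(1, 2)  # convolve over time
            x = self.dropout(torch.relu(norm(x)))
        return self.proj(x)                    # (batch, time, n_feats)
```

At training time the output would be compared with the ground-truth phoneme-level features using the L1 loss, as described above.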
~\textbf{Multi-scale prosody encoder}~The speaking style of human speech shows rich and subtle changes even within the same utterance. These changes are generally reflected at different scales. To obtain a better representation of the prosodic features, a multi-scale encoder, as shown in Fig.~\ref{fig:enc}, is proposed in our framework.
The input prosody features are first convolved with a one-dimensional convolution filter bank $\mathbf{F}=\left\{\mathbf{f}_{1}, \ldots, \mathbf{f}_{m}\right\}$ where $\mathbf{f}_{i}$ has a width of $i$. In practice, $m$ is 8 in the proposed model.
The outputs of the convolution groups are stacked together, and the processed sequence is further passed to a max-pooling layer and a 1-D convolution layer. Then a layer of bidirectional LSTM (BLSTM) is used to extract forward and backward sequence features. With this multi-scale modeling method, we can explicitly obtain both local and contextual features from the prosodic components.
\subsection{Style control}
Since the proposed method is based on explicit prosody features, it allows us to control a prosody feature by adjusting its value. Specifically, by multiplying or dividing the prosody features by a scale factor, we can flexibly control the prosody of the synthesized speech to further enhance its expressiveness.
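As a toy illustration of this control (the feature layout and numbers are hypothetical), scaling the predicted pitch values stretches or flattens the intonation while duration and energy stay untouched:

```python
def scale_prosody(features, pitch_scale=1.0, energy_scale=1.0):
    """Scale selected prosodic components of phoneme-level features.

    `features` is a list of (pitch, duration, energy) tuples; scales > 1
    exaggerate the component, scales < 1 flatten it (illustrative only).
    """
    return [(p * pitch_scale, d, e * energy_scale) for p, d, e in features]

# Example: exaggerate pitch by 1.5x, keep duration and energy unchanged
pred = [(200.0, 0.08, 0.5), (180.0, 0.12, 0.6)]
controlled = scale_prosody(pred, pitch_scale=1.5)
```

The scaled features would then be fed to the prosody encoder in place of the raw predictions.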
\subsection{Training and generation}
The training loss of the proposed model is given by
\begin{equation}\label{eq1}
\mathcal{L}=\mathcal{L}_{\text {taco}}+\mathcal{L}_{\text {prosody}},
\end{equation}
where $\mathcal{L}_{\text {taco}}$ is the loss function of Tacotron2~\cite{shen2018natural}, and $\mathcal{L}_{\text {prosody}}$ is the prosody loss. The training and inference stages are illustrated in Fig.~\ref{fig:model}. At the training stage, ground-truth prosody features are used as input of the prosody encoder. At the inference stage, the prosody features are predicted based on the input text and style id.
\section{Experiments}
\label{sec:typestyle}
\subsection{Experimental setup}
\subsubsection{Database}
To evaluate the performance of the proposed method on the SRM2TTS task, an internal Mandarin multi-speaker corpus, in which each speaker has a unique speaking style, is employed in the experiments. There are a total of six speakers, each with a unique style: reading, radio anchor, storytelling, customer service (CS), poetry, and game character.
Compared with the first four speaking styles, the latter two have stronger expressiveness; they are recorded by a child and a game character, respectively.
The total duration is 20 hours, and all recordings are down-sampled to 16kHz. Ten sentences of each speaker are randomly selected as the test set for subjective evaluation.
\subsubsection{Evaluation metrics}
In the SRM2TTS task, a good model should have the ability to produce synthetic speech with the expected speaking style and timbre. Therefore, the synthesized results are evaluated in terms of style similarity and speaker similarity.
\textbf{Style similarity}:
The style similarity compares the expected speaking style of natural speech with that of synthetic speech. Here, a Mean Opinion Score (MOS) evaluation with human raters is conducted to assess this similarity.
Among the speakers in the adopted database, the reading-style speaker comes from a public corpus (DB1\footnote{The dataset is available at \url{http://www.data-baker.com/hc_znv_1.html}}). Therefore, in the evaluation, DB1 is adopted as the target timbre to express the different speaking styles.
Twenty (gender-balanced) native Mandarin listeners are invited to participate in the evaluation.
\textbf{Speaker similarity}:
Speaker similarity compares the expected timbre of natural speech with that of synthetic speech. As with style similarity, a MOS evaluation is conducted in the subjective test.
\subsection{Comparison with other methods}
To evaluate the performance of the proposed model on the SRM2TTS task, two state-of-the-art style transfer methods, i.e., Multi-R~\cite{bian2019multi} and PB~\cite{pan2021cross}, are compared in this work. Multi-R~\cite{bian2019multi} is a Tacotron-based method that uses multiple references to transfer the prosody. PB~\cite{pan2021cross} is a cross-speaker style transfer model based on a prosodic bottleneck. For a fair comparison, Multi-R and PB use the same Tacotron backbone as our proposed model.
The MOS evaluations in terms of style similarity and speaker similarity are shown in Table~\ref{tab:transfer}. As the table shows, our model achieves the best performance in all style categories. Note that the reference-based method Multi-R obtains the lowest MOS scores for all speaking styles. This is mainly because, when each speaker has a unique speaking style, it is hard for a reference-based method to decouple the timbre and style of the speaker. Consequently, when the imitated speaking style differs significantly from the reading style, i.e., game and poetry, the reference-based method performs much worse. In contrast, the label-based PB and our method achieve better style similarity MOS scores, probably because the distinctive speaking styles make it easier for participants to judge, indicating the effectiveness of label-based methods on this SRM2TTS task. Compared with PB, our proposed method achieves a 4.5\% relatively higher style similarity MOS averaged over all style categories.
As for speaker similarity, no obvious MOS difference exists among the three models, demonstrating that the style transfer in PB and the proposed method does not bring an obvious negative effect on the timbre compared with Multi-R, which has very limited style transfer ability. Moreover, the proposed method achieves the best speaker similarity MOS in all style categories except CS and Poetry, indicating the good performance of the proposed method on the SRM2TTS task.
\begin{table}[]
\caption{Ablation study of different prosody components in terms of style similarity MOS with 95$\%$ confidence intervals. w/o means without.}
\label{tab:compare_Ablation}
\setlength{\tabcolsep}{0.1mm}
\centering
\small
\begin{tabular}{l|l|l|l|l|l}
\toprule
\multicolumn{1}{c|}{Method} & \multicolumn{1}{c}{Proposed} & \multicolumn{1}{c}{w/o energy} & \multicolumn{1}{c}{w/o duration} & \multicolumn{1}{c}{w/o pitch} & \multicolumn{1}{c}{w/o all} \\ \midrule
Story & \bf{3.76}$\pm$\bf{0.049} & 3.74$\pm$0.050 & 3.55$\pm$0.053 & 3.65$\pm$0.065 & 3.41$\pm$0.044 \\
Anchor & \bf{3.85}$\pm$\bf{0.042} & 3.76$\pm$0.048 & 3.67$\pm$0.063 & 3.73$\pm$0.049 & 3.37$\pm$0.045 \\
CS & \bf{3.87}$\pm$\bf{0.054} & 3.80$\pm$0.049 & 3.44$\pm$0.054 & 3.66$\pm$0.057 & 3.22$\pm$0.042 \\
Poetry & \bf{3.97}$\pm$\bf{0.045} & 3.81$\pm$0.050 & 3.27$\pm$0.044 & 3.59$\pm$0.062& 2.93$\pm$0.039 \\
Game & \bf{3.91}$\pm$\bf{0.041} & 3.83$\pm$0.044 & 3.13$\pm$0.043 & 3.49$\pm$0.041 & 2.77$\pm$0.036 \\ \midrule
Overall & \bf{3.87}$\pm$\bf{0.021} & 3.79$\pm$0.022 & 3.42$\pm$0.021 & 3.63$\pm$0.021 & 3.14$\pm$0.020 \\
\bottomrule
\end{tabular}
\vspace{-0.5cm}
\end{table}
\subsection{Ablation study}
The prosody prediction module plays an important role in realizing the SRM2TTS task. In this module, several prosody components, including pitch, energy, and duration, are considered in our method. To show the effectiveness of each component in SRM2TTS, an ablation study is performed by comparing the proposed method with several variants obtained by dropping one or all prosody components. The results are shown in Table~\ref{tab:compare_Ablation}, in which the style similarity is evaluated by the MOS score. Note that when all three prosodic components are removed, referred to as \textit{w/o all} in Table~\ref{tab:compare_Ablation}, the model degenerates into a general multi-speaker model. Because the human rating experiments in Table~\ref{tab:compare_Ablation} and Table~\ref{tab:transfer} were performed by two separate groups of raters, the MOS scores of the proposed method in these two tables differ slightly.
As can be seen from the table, dropping any prosodic component results in a significant drop in style similarity. Specifically, dropping the duration brings the biggest drop, with a style similarity MOS 11.7\% relatively lower than that of the proposed method. When no prosody component is adopted, i.e., \textit{w/o all}, the model is unable to perform the style transfer task; instead, it is just a multi-speaker TTS model that can only produce synthetic speech whose timbre and style belong to the same speaker in the corpus. All of these results indicate the importance of each prosody component in our prosody modeling module. Besides their effect on style similarity, these prosody components also play important roles in the manual control of prosody in synthetic speech, which is demonstrated in Section~\ref{sc:control}.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.41]{fig/f0_ff.pdf}
\caption{Pitch contours of synthesized speech with the pitch component increased by 20$\%$, unadjusted, and decreased by 20$\%$.}
\label{fig:modelf0}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.41]{fig/ene.pdf}
\caption{Energy contours of synthesized speech with the energy component increased by 20$\%$, unadjusted, and decreased by 20$\%$.}
\label{fig:modelenergy}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.42]{fig/dur.png}
\caption{Mel spectrograms of synthesized speech with the duration component increased by 20$\%$, unadjusted, and decreased by 20$\%$.}
\label{fig:modelduraion}
\end{figure}
\vspace{-0.7cm}
\subsection{Style control}
\label{sc:control}
Since we explicitly use the prosody features, i.e., pitch, duration, and energy, in the prosody prediction module, we can easily control the prosody by adjusting these features. For instance, we can simply multiply the duration by a scale factor to control the speaking rate.
Figs.~\ref{fig:modelf0}-\ref{fig:modelduraion} present the pitch contours, energy contours, and Mel spectrograms of speech synthesized by adjusting the pitch, energy, and duration, respectively. As can be seen, adjusting the prosody features precisely controls the corresponding prosody of the synthesized speech, indicating that our prosody encoder models each explicit prosody component independently in the final synthesized speech.
Although a larger scale factor means a greater change to the corresponding prosodic component, the scale cannot be arbitrary; for instance, a too-short duration or too-small energy would hurt intelligibility.
In the experiments, we found that pitch and energy can be effectively controlled within a scale of 20$\%$, and duration within a scale of 50$\%$. Synthesized samples are presented on the demo page\footnote{Audio samples can be found on the project page \url{https://qicongxie.github.io/SRM2TTS}}, and we encourage readers to listen to them.
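The scaling described above can be sketched as a small post-prediction hook (a hypothetical helper, assuming the predictor outputs one array per prosody stream; the clipping bounds encode the 20\%/50\% ranges found in the experiments):

```python
import numpy as np

# Empirically usable control ranges (relative scale) from the experiments:
# pitch and energy within +/-20%, duration within +/-50%.
LIMITS = {"pitch": 0.2, "energy": 0.2, "duration": 0.5}

def scale_prosody(features, component, scale):
    """Multiply one predicted prosody stream by `scale` before decoding.

    features:  dict of per-phone arrays, e.g. {"pitch": ..., "energy": ...,
               "duration": ...}, as produced by the prosody predictor.
    component: which stream to adjust.
    scale:     e.g. 1.2 raises the component by 20%, 0.8 lowers it by 20%.
    """
    limit = LIMITS[component]
    if not (1 - limit) <= scale <= (1 + limit):
        raise ValueError(f"{component} scale {scale} outside tested range")
    out = dict(features)  # leave the original prediction untouched
    out[component] = np.asarray(features[component]) * scale
    return out
```

For example, `scale_prosody(pred, "duration", 0.8)` shortens every phone's duration by 20\%, speeding up the utterance.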
\vspace{-0.2cm}
\section{Conclusion}
\label{sec:refs}
In this paper, a general stylized speech synthesis task is proposed. This task, referred to as SRM2TTS, aims to produce expressive synthetic speech by combining any speaking style of one speaker with the timbre of another speaker. Compared with the existing style transfer task, the proposed task is more general and practical, as it bypasses the dependency on a source speaker who has to record all expected speaking styles. The realization of this task is therefore promising for many applications.
To achieve this SRM2TTS task, a novel style modeling method based on explicit prosody features is proposed. The method builds on a Tacotron2 backbone with a fine-grained text-based prosody prediction module and a speaker controller. Extensive experiments have shown that the proposed method can successfully express the style of one speaker with the timbre of another. Furthermore, the explicit use of prosody features in the prosody prediction module allows us to control the prosody manually, producing more diverse and expressive synthetic speech.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{Introduction}\labell{s:intro}
Let $(M,\omega)$ be a symplectic manifold, and let $G$ be a Lie group acting properly on $M$. The action is \textbf{Hamiltonian} if it preserves the symplectic form, and there exists a $G$-equivariant\footnote{Some authors do not require the $G$-equivariant condition on a momentum map. See, for example, \cite{OR}.} smooth map $\Phi\colon M\to\g^*$, where $\g$ is the Lie algebra of $G$ and $\g^*$ is equipped with the co-adjoint action, satisfying
\begin{equation}\labell{e:Ham}
\xi_M\hook\omega=-d\langle\Phi,\xi\rangle
\end{equation}
for all $\xi\in\g$. Here, $\xi_M$ is the vector field on $M$ induced by $\xi$. We call $\Phi$ a \textbf{momentum map} for the action. A standard operation used when working with Hamiltonian group actions is symplectic reduction, which we describe now. Fix a value $a$ in the image of $\Phi$. The \textbf{symplectic reduced space} $M\red{a}G$, or \textbf{symplectic quotient}, is defined to be the space $\Phi^{-1}(a)/G_a$, where $G_a$ is the stabiliser of $a$ under the co-adjoint action of $G$ on $\g^*$.
If $a$ is a regular value of $\Phi$ and $G_a$ acts freely on $\Phi^{-1}(a)$, then the symplectic quotient $M\red{a}G$ is a symplectic manifold \cite{MW},\cite{meyer}. In fact, $G_a$ acts at least locally freely on $\Phi^{-1}(a)$ if and only if $a$ is a regular value of the momentum map. In this case, the symplectic quotient is a symplectic orbifold. If $a$ is a critical value of the momentum map, however, then $M\red{a}G$ is not necessarily an orbifold. Sjamaar--Lerman proved that it is a Whitney stratified space equipped with a Poisson algebra of functions inducing a symplectic structure on each stratum; they refer to this as a \textbf{symplectic stratified space} \cite{SL}.
These stratified symplectic quotients are interesting spaces, with or without the symplectic structure, and have been studied in many contexts. In \cite{HIP}, Herbig--Iyengar--Pflaum specify conditions on a linear Hamiltonian action of a (compact) torus $T$ which guarantee the existence of a deformation quantisation of the symplectic quotient at $0$. In certain ``admissible'' cases, which include the interesting $\SS^1$-cases, $\Phi^{-1}(0)$ is a cone over a manifold $L$, and it was conjectured by the authors that for most torus actions, $L/T$ is not a rational homology sphere. The space $L/T$ being a rational homology sphere is a necessary condition for the symplectic quotient to be an orbifold. These conjectures were resolved in \cite{FHS}, where Farsi--Herbig--Seaton use methods inspired by the algebraic geometry of complex torus actions and their GIT quotients to find necessary conditions, phrased in terms of the weight matrix, for the symplectic quotient of a unitary representation of a compact torus to be homeomorphic to an orbifold. These results were extended in \cite{HSS}, where in particular, Herbig--Schwarz--Seaton prove the following for $\SS^1$-actions (see Theorem 1.5 and the discussion afterward in their paper):
\begin{quote}\labell{quote:HSS}
Let $G=\SS^1$ act effectively, linearly, and symplectically on $\CC^n$ equipped with its standard symplectic form, and assume all weights are non-zero. The symplectic reduced space at $0$, via the standard quadratic momentum map, is regular-diffeomorphic to a linear symplectic (effective) orbifold if and only if the dimension of the reduced space is less than $4$.
\end{quote}
Here, by diffeomorphism we mean in the sense of (Sikorski) differential spaces \cite{sikorski1}, \cite{sikorski2}, \cite{sniatycki}, which we define in Appendix~\ref{a:diff sp} (Definition~\ref{d:diff space}). Since differential spaces are closed under taking subsets and quotients (see, for example, \cite[Chapter 2]{watts-phd}), there is a natural differential space structure on a symplectic quotient, denoted $\CIN(M\red{a}G)$ (equivalent to the structure studied by Arms--Cushman--Gotay \cite{ACG}), as well as on an orbifold \cite[Section 3]{watts-orb}. The adjective ``regular'' in the above statement refers (in the linear case) to the diffeomorphism preserving the subalgebra $\RR[\CC^n\red{0}\SS^1]$ of $\CIN(\CC^n\red{0}\SS^1)$ consisting of the image of invariant polynomials via the natural map $\CIN(\CC^n)^{\SS^1}\to\CIN(\CC^n\red{0}\SS^1)$; see \cite[Section 2]{HSS}. The main result of this paper provides evidence that ``regular'' can be dropped from the above quote.
In the context of Lie groupoids, it is known that (effective) orbifolds are \textbf{representable} by a Lie group action; that is, given an orbifold $X$ with Lie groupoid representative $\mathcal{G}$, there exists a compact Lie group $K$ and a manifold $N$ such that the Lie groupoid $K\ltimes N$ is Morita equivalent to $\mathcal{G}$ (see, for example, \cite[Theorem 2.19]{MM}). It is shown in \cite[Theorem A and Section 6]{watts-orb} that Morita equivalent Lie groupoids induce diffeomorphic orbit spaces; in fact this correspondence can be realised as a functor $F$ from Lie groupoids to differential spaces that is essentially injective when restricted to Lie groupoids representing orbifolds. Here, by essentially injective, we mean that if $F(\mathcal{G}_1)$ is diffeomorphic to $F(\mathcal{G}_2)$, then $\mathcal{G}_1$ is Morita equivalent to $\mathcal{G}_2$. Using this language, we can now formulate what we mean by a symplectic quotient being representable.
\begin{definition}\labell{d:repr}
Let $(M,\omega)$ be a symplectic manifold, let $G$ be a Lie group acting properly and in a Hamiltonian fashion, and let $a$ be a value in the image of the momentum map. Then the symplectic quotient $M\red{a}G$ is \textbf{representable} by a proper Lie group action if there exists a Lie group $K$ acting effectively and properly on a manifold $N$, and a diffeomorphism from $N/K$ to $M\red{a}G$. (Note that $N/K$ is equal to the orbit space of the Lie groupoid $K\ltimes N$.)
\end{definition}
Note that this definition might not be adequate in many circumstances. Indeed, the question of representability when applied to ineffective orbifolds, for example, requires more than just a diffeomorphism on the level of orbit spaces; information on isotropy groups must also somehow be recorded. For example, the trivial action of a non-trivial finite group on a point should not be equivalent to the trivial group acting on the point. This is why representability questions are typically phrased completely in terms of stacks or Lie groupoids, which contain such isotropy information. And so, perhaps one may wish to take ``representable'' as defined above as a weak form of representability. However, this is not important for our purposes. Thus, accepting our notion of representability as defined above, we continue on and focus on the case $G=\SS^1$.
Consider first the case of an effective linear action of $\SS^1$ on $\CC^n$ (assume $\SS^1$ is acting in the standard way with non-zero weights -- see the beginning of Section~\ref{s:proof}). This preserves the standard symplectic form, and has a homogeneous quadratic momentum map given by Equation~\ref{e:momentum}. The level set $\Phi^{-1}(0)$ is a cone over a product of ellipsoids $E_-\times E_+$, and $\SS^1$ acts on $E_-\times E_+$ with finite stabilisers; hence, $(E_-\times E_+)/\SS^1$ is an orbifold. Generalising to effective Hamiltonian actions of $\SS^1$ on a manifold $M$, using local normal forms, this translates to the following: If $x\in\Phi^{-1}(0)$ is a fixed point and $[x]$ is its orbit, then there is an open neighbourhood of $[x]\in M\red{0}\SS^1$ diffeomorphic to $\RR^{n-2m}\times F$ where $F$ is (stratified-homeomorphic to) a cone over a link $L_x\cong (E_-\times E_+)/\SS^1$ for some ellipsoids $E_-$ and $E_+$.
\begin{definition}\labell{d:unrepresentable}
Let $(M,\omega)$ be a symplectic manifold, let $\SS^1$ act effectively and in a Hamiltonian fashion on $(M,\omega)$, and let $a$ be a value in the image of the momentum map. Then the symplectic quotient $M\red{a}\SS^1$ is \textbf{unrepresentable} if there does not exist a Lie group $K$ acting effectively and properly on a manifold $N$, and a diffeomorphism from $N/K$ to $M\red{a}\SS^1$. It is \textbf{weakly unrepresentable} if there does not exist a Lie group $K$ acting effectively and properly on a manifold $N$ with quotient map $\pi_N\colon N\to N/K$, and a diffeomorphism $\psi$ from $N/K$ to $M\red{a}\SS^1$ such that for every $\SS^1$-fixed point $x\in\Phi^{-1}(0)$, the restricted action of $K$ on $(\psi\circ\pi_N)^{-1}(L_x)$ yields an orbifold.
\end{definition}
The extra condition in the definition of weakly unrepresentable is necessary in order to apply our techniques using classifying spaces of Lie groupoids. It is unknown whether it is needed; it is possible that in the case of representability, it is automatically satisfied. Indeed, the issue is the following: if the underlying semi-algebraic variety of the orbit space of an effective Lie group action $G\ltimes M$ is diffeomorphic to that of an orbifold, is $G\ltimes M$ then an orbifold? In general, the answer is no: consider $\O(n)$ acting on $\RR^n$ by rotations and reflections. The orbit space is $[0,\infty)$ for each $n$, and only in the case $n=1$ do we have an orbifold. However, there is evidence that the issue is the presence of codimension-$1$ strata (see \cite{CDGMW}), and this issue disappears in the case of symplectic quotients, as each stratum is symplectic and hence even-dimensional (there are no codimension-$1$ strata). This is all purely conjectural at this point, and so we stick with weak unrepresentability and state the main theorem of the paper.
\begin{main}\labell{t:main}
\emph{Let $\SS^1$ act effectively on a symplectic manifold in a Hamiltonian fashion. Fix a value $a$ of the momentum map and reduce at that value. Then the resulting symplectic quotient is diffeomorphic to an orbifold, or it is weakly unrepresentable. If the symplectic quotient is \emph{not} weakly unrepresentable, then either there are no $\SS^1$-fixed points on the $a$-level set of the momentum map, or there is at most one positive weight or at most one negative weight at each $\SS^1$-fixed point on the level set.}
\end{main}
There are no $\SS^1$-fixed points in the $a$-level set if and only if $a$ is a regular value, and we already mentioned above that the resulting symplectic quotient is in this case an orbifold. In the case that $a$ is a critical value, note that we do not claim that having at most one positive weight or at most one negative weight is a sufficient condition to obtain an orbifold.
The key technique used in the proof of the Main Theorem is assuming that the symplectic quotient is not weakly unrepresentable, and then checking the homotopy groups of classifying spaces of Lie groupoids whose orbit spaces are certain links in the orbit-type stratifications of the corresponding orbit space and symplectic quotient. These homotopy groups are Morita invariants (or stacky invariants, if you prefer stacks); see Proposition~\ref{p:morita}. They prove to be powerful tools in studying orbit spaces. In the case of Lie groupoids representing effective orbifolds, the fundamental groups of the corresponding classifying spaces turn out to be the orbifold fundamental groups in the sense of Thurston, which in fact are much weaker invariants than general stacky ones; indeed, they can be obtained from the underlying differential space structure. (See \cite[Proposition 3.19, Corollary 4.15, Theorem 5.5, and Theorem 5.10]{watts-orb} for a proof of this fact, and \cite[Definition 1.50, Theorem 2.18]{ALR} for a proof that the two notions of fundamental group match.)
This paper is broken down as follows. Section~\ref{s:proof} provides the proof of the main theorem. The proof requires some background on differential spaces, as well as Lie groupoids and their classifying spaces. A review of the necessary theory of differential spaces is given in Appendix~\ref{a:diff sp}. One necessary ingredient is the minimality of the orbit-type stratification of a symplectic quotient reduced at $0$. This is a folk theorem, and so we take the opportunity to give a proof of it using \'Sniatycki's theory of vector fields on subcartesian spaces (Theorem~\ref{t:minimal}). A review of Lie groupoids and their classifying spaces is given in Appendix~\ref{a:gpds}. Here, we prove another folk theorem: that the classifying space of an action groupoid $G\ltimes M$ is homotopy equivalent to the corresponding Borel construction $EG\times_G M$. The proof is essentially a slightly more detailed version of what appears in a preprint of Leida \cite{leida} (or at least the author cannot find a published version; it may also be in Leida's PhD thesis).
Finally, it is worth mentioning for the reader who is familiar with diffeology that a lot of the work in this paper using differential spaces could be done in the category of diffeological spaces instead. Indeed, there is a functor from Lie groupoids to diffeological spaces that is essentially injective when restricted to Lie groupoids representing orbifolds (see \cite{IKZ} and \cite[Theorem B]{watts-orb}). However, the author finds it more convenient to work with differential spaces in the context of orbit-type stratifications.
\subsection*{Acknowledgements:}
For the author, the question of the representability of a symplectic quotient originally came up during the question period after a talk by Christopher Seaton, to whom the author is grateful for many discussions along with Carla Farsi concerning orbit spaces, symplectic quotients, and Lie groupoids. The author also thanks Sarah Yeakel for explaining the details behind classifying spaces to him, and Carla Farsi and Markus Pflaum for their encouragement and interest in this project.
\section{Proof of Main Theorem}\labell{s:proof}
In the proof of the main theorem, we will restrict our attention to linear models. $\SS^1$-actions in this context take on a very nice form: about a fixed point of the $\SS^1$-action on a manifold, there is an $\SS^1$-equivariant neighbourhood symplectomorphic to $\CC^n$ equipped with the standard symplectic form, in which $\SS^1$ acts by $$e^{i\theta}\cdot(z_1,\dots,z_n)=(e^{\alpha_1i\theta}z_1,\dots,e^{\alpha_ni\theta}z_n).$$ Here, $\{\alpha_1,\dots,\alpha_n\}$ is the multi-set of \textbf{weights} of the action. There is a homogeneous quadratic momentum map $\Phi$ associated with this action:
\begin{equation}\labell{e:momentum}
\Phi(z_1,\dots,z_n)=\frac{1}{2}\sum_{i=1}^n|z_i|^2\alpha_i.
\end{equation}
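For instance (a standard illustrative example, not part of the original argument), take $n=2$ with weights $\{-1,1\}$, so that
\begin{equation*}
\Phi(z_1,z_2)=\tfrac{1}{2}\left(|z_2|^2-|z_1|^2\right).
\end{equation*}
The level set $\Phi^{-1}(0)=\{|z_1|=|z_2|\}$ is the cone over the torus $\{|z_1|^2=1\}\times\{|z_2|^2=1\}$, on which $\SS^1$ acts freely; the resulting link is a circle, and the symplectic quotient $\CC^2\red{0}\SS^1$ is homeomorphic to $\CC$, with global coordinate the invariant $z_1z_2$.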
We prove a number of lemmas regarding linear $\SS^1$-actions on $\CC^n$ in the following. First, the zero-set of $\Phi$ is a cone provided that there is at least one negative weight and at least one positive weight.
\begin{lemma}\labell{l:cone}
Let $\SS^1$ act effectively, linearly, and symplectically on $\CC^n$ equipped with its standard symplectic form with weights $\{\alpha_1,\dots,\alpha_n\}$ and homogeneous quadratic momentum map $\Phi$ given by Equation~\eqref{e:momentum}. Assume that $\alpha_1,\dots,\alpha_m<0$ and $\alpha_{m+1},\dots,\alpha_n>0$ for some $0<m<n$. Then $\Phi^{-1}(0)$ is diffeomorphic to ${\operatorname{Cone}}(E_-\times E_+)$, where $E_-$ is the ellipsoid in $\CC^m$ given by $\alpha_1|z_1|^2+\dots+\alpha_m|z_m|^2=-1$, and $E_+$ is the ellipsoid in $\CC^{n-m}$ given by $\alpha_{m+1}|z_{m+1}|^2+\dots+\alpha_n|z_n|^2=1$.
\end{lemma}
\begin{proof}
This can be seen immediately upon setting Equation~\eqref{e:momentum} equal to $0$. The differential structure on ${\operatorname{Cone}}(E_-\times E_+)$ is given in Definition~\ref{d:triviality}.
\end{proof}
Let $\pi_Z\colon\Phi^{-1}(0)\to\CC^n\red{0}\SS^1$ be the quotient map. Let $v=\pi_Z(0)\in\CC^n\red{0}\SS^1$, the image via $\pi_Z$ of the apex of the cone ${\operatorname{Cone}}(E_-\times E_+)$. We prove in the following lemma that if the (linear) symplectic quotient is representable, then the link at $v$ must be diffeomorphic to some quotient of some sphere by a compact Lie group.
\begin{lemma}\labell{l:links}
Given the set up of Lemma~\ref{l:cone}, assume that there exists a manifold $N$ and a Lie group $G$ acting properly on $N$, such that $(N/G,\CIN(N/G))$ is diffeomorphic to $(\CC^{n}\red{0}\SS^1,\CIN(\CC^{n}\red{0}\SS^1))$. Then there exists an open neighbourhood $U$ of $v$ such that $U\smallsetminus\{v\}$ is diffeomorphic to both $\RR\times((E_-\times E_+)/\SS^1)$ and to $\RR\times(\SS^k/H)$, where $k\leq\dim N$ and $H$ is a closed subgroup of $G$. Consequently, we have that $(E_-\times E_+)/\SS^1$ is diffeomorphic to $\SS^k/H$.
\end{lemma}
\begin{proof}
Identify $N/G$ and $\CC^{n}\red{0}\SS^1$. Fix $x\in N$ such that $v=G\cdot x$. By the slice theorem, there is an open neighbourhood of $x$ that is $G$-equivariantly diffeomorphic to $G\times_H V$ where $H$ is the stabiliser of $x$ and $V$ is the isotropy representation at $x$; identify these. Let $k+1=\dim V$. Then, identifying $V$ as $\RR^{k+1}$ equipped with an $H$-invariant metric, we have that $V/H$ is diffeomorphic to an open neighbourhood $U$ of $v$ (see Example~\ref{x:orbitspace}). At the same time, $V\smallsetminus\{0\}$ is $H$-equivariantly diffeomorphic to $\RR\times\SS^k$, on which $H$ acts diagonally (trivially on $\RR$). Thus, $U\smallsetminus\{v\}$ is diffeomorphic to $\RR\times(\SS^k/H)$.
By Lemma~\ref{l:cone}, $\Phi^{-1}(0)$ is a cone with link $E_-\times E_+$. Note that the link is $\SS^1$-invariant. Thus, shrinking $U$ if necessary, we have that $U\smallsetminus\{v\}$ is diffeomorphic to $\RR\times((E_-\times E_+)/\SS^1)$.
By Theorem~\ref{t:ots invt} and Corollary~\ref{c:minimal}, the diffeomorphism between $N/G$ and $\CC^n\red{0}\SS^1$ preserves the orbit-type stratifications on the spaces. It then follows that $(E_-\times E_+)/\SS^1$ is diffeomorphic to $\SS^k/H$.
\end{proof}
Since the link at $v$ has two diffeomorphic presentations, $\SS^k/H$ and $(E_-\times E_+)/\SS^1$, under the assumption that $\CC^n\red{0}\SS^1$ is not weakly unrepresentable the corresponding Lie groupoids are Morita equivalent. Furthermore, the corresponding classifying spaces are homotopy equivalent to each other, and to the corresponding Borel constructions. This is the content of the following lemma.
\begin{lemma}\labell{l:morita}
Given the set up of Lemma~\ref{l:links}, and assuming that $\CC^n\red{0}\SS^1$ is not weakly unrepresentable, we have that the action groupoids $H\ltimes\SS^k$ and $\SS^1\ltimes (E_-\times E_+)$ are Morita equivalent. Consequently, the Borel constructions $EH\times_H\SS^k$ and $E\SS^1\times_{\SS^1}(E_-\times E_+)$ are homotopy equivalent.
\end{lemma}
\begin{proof}
Since $(E_-\times E_+)/\SS^1$ is an effective orbifold, and by hypothesis $\SS^k/H$ is also an effective orbifold, the Morita equivalence is immediate from Lemma~\ref{l:links} and \cite[Theorem A]{watts-orb}: there is an essentially injective functor from Lie groupoids representing effective orbifolds to differential spaces sending each Lie groupoid to its orbit space equipped with its quotient differential structure. The homotopy equivalence is immediate from Propositions~\ref{p:morita} and \ref{p:action gpd}.
\end{proof}
Since the Borel constructions are base spaces of certain principal bundles, we can apply the long exact sequence of homotopy groups to these bundles and compare. This will narrow down which weights allow for our representable scenario assumed in Lemma~\ref{l:links}. This is the content of the following proposition.
\begin{proposition}\labell{p:main}
Given the set up of Lemma~\ref{l:morita}, there is at most one negative weight or at most one positive weight; moreover, $H$ is finite.
\end{proposition}
\begin{proof}
We have two fibrations:
\begin{gather*}
\SS^1\longrightarrow E\SS^1\times(E_-\times E_+)\longrightarrow E\SS^1\times_{\SS^1}(E_-\times E_+), \text{ and}\\
H\longrightarrow EH\times\SS^k\longrightarrow EH\times_H\SS^k.
\end{gather*}
By Lemma~\ref{l:morita}, $EH\times_H\SS^k$ and $E\SS^1\times_{\SS^1}(E_-\times E_+)$ are homotopy equivalent. Denote one of them by $X$. Recall that homotopy group functors respect products, $EG$ is contractible for any topological group $G$, and $E_-$ and $E_+$ are diffeomorphic to $\SS^{l_1}$ and $\SS^{l_2}$, respectively, where $l_1=2m-1$ and $l_2=2(n-m)-1$. We thus have the following two long exact sequences:
\begin{gather*}
\dots\to\pi_{p+1}(X)\to\pi_p(\SS^1)\to\pi_p(\SS^{l_1})\times\pi_p(\SS^{l_2})\to\pi_p(X)\to\pi_{p-1}(\SS^1)\to\dots, \text{ and}\\
\dots\to\pi_{p+1}(X)\to\pi_p(H)\to\pi_p(\SS^k)\to\pi_p(X)\to\pi_{p-1}(H)\to\dots.
\end{gather*}
Without loss of generality, assume $l_1\leq l_2$. By hypothesis, $l_1\geq 1$.
Assume for a contradiction that $l_1>1$. Then, since $l_1$ and $l_2$ are odd, we have $3\leq l_1\leq l_2$. Counting dimensions, we have that
\begin{equation}\labell{e:dim}
5\leq l_1+l_2-1=k-\dim H\leq k.
\end{equation}
Inserting this information into the long exact sequences of homotopy groups above, we immediately obtain:
\begin{equation}\labell{e:pi0H}
1\cong\pi_1(X)\cong\pi_0(H),
\end{equation}
\begin{equation}\labell{e:pi1H}
\ZZ\cong\pi_2(X)\cong\pi_1(H),
\end{equation}
\begin{equation}\labell{e:pipH}
\pi_p(\SS^{l_1})\times\pi_p(\SS^{l_2})\cong\pi_p(X)\cong\pi_{p-1}(H) \text{ for $2<p<k$}.
\end{equation}
Equation~\eqref{e:pi0H} implies that $H$ is connected. Since $H$ is compact we have that $H\cong(\TT^q\times K)/\Gamma$ where $\TT^q$ is the $q$-torus; $K$ is trivial or a compact, connected, semi-simple group; and $\Gamma$ is a finite group. It follows from the long exact sequence of homotopy groups for the fibration $\Gamma\to\TT^q\times K\to H$ and Equation~\eqref{e:pi1H} that $q=1$, $\pi_1(K)\cong 1$, $\Gamma$ is a cyclic group, and
\begin{equation}\labell{e:pipK}
\pi_p(K)\cong\pi_p(H) \text{ for $p\geq 2$.}
\end{equation}
From Hurewicz' Theorem and general properties of the homology of compact $1$-connected semi-simple Lie groups (namely, $H^2(K;\RR)=0$ and $H^3(K;\RR)\neq 0$), it follows from Equations~\eqref{e:pipH} and \eqref{e:pipK} that
\begin{equation}\labell{e:pi2H}
\pi_3(\SS^{l_1})\times\pi_3(\SS^{l_2})\text{ is finite, and}
\end{equation}
\begin{equation}\labell{e:pi3K}
K\cong 1 \text{ or } \dim K \geq 3 \text{ and } \pi_4(\SS^{l_1})\times\pi_4(\SS^{l_2}) \text{ is infinite.}
\end{equation}
Since $l_1\geq 3$, Equation~\eqref{e:pi2H} in fact forces $l_1>3$. Since $l_1$ and $l_2$ are odd, we have $l_2\geq l_1\geq 5$. But then, Equation~\eqref{e:pi3K} forces $K\cong 1$. Thus, $H\cong\SS^1$.
By Equation~\eqref{e:dim}, we have $k\geq 10$. Let $M\in\NN$ be such that $10\leq M\leq k$. Then, by Equation~\eqref{e:pipH}, for all $p=3,\dots,M-1$, we have $\pi_p(\SS^{l_1})\times\pi_p(\SS^{l_2})\cong 1$. This implies $M\leq l_1\leq l_2$. It then follows from the right-hand side of Equation~\eqref{e:dim} that $2M\leq k$, and so $M'=2M$ again satisfies $10\leq M'\leq k$. Iterating, $k$ must be infinite, which is absurd. We conclude that $l_1=1$.
Next, assume that $l_2>1$ (and hence $l_2\geq 3$). Equation~\eqref{e:dim} reduces to
\begin{equation}\labell{e:dim2}
3\leq l_2=k-\dim H.
\end{equation}
Looking at the long exact sequences of homotopy groups as above, we obtain the following information:
\begin{equation}\labell{e:exact seq}
1\to\pi_2(X)\to\ZZ\overset{f}{\longrightarrow}\ZZ\to\pi_1(X)\to 1 \text{ is exact,}
\end{equation}
\begin{equation}\labell{e:pipl2}
\pi_p(\SS^{l_2})\cong\pi_p(X) \text{ for $p\geq 3$},
\end{equation}
\begin{equation}\labell{e:pi1X}
\pi_1(X)\cong\ZZ_q \text{ for some $q$, or is trivial,}
\end{equation}
\begin{equation}\labell{e:pi2X}
\pi_2(X)\cong\pi_1(H).
\end{equation}
It follows from Equation~\eqref{e:exact seq} that $\pi_2(X)\cong\ZZ$ or is trivial. In the former case, $f$ must be the zero homomorphism, in which case $\pi_1(X)\cong\ZZ$, contradicting Equation~\eqref{e:pi1X}. Thus, $\pi_2(X)\cong 1$, and Equation~\eqref{e:pi2X} implies that the identity component $H_0$ of $H$ is simply-connected. Thus $H$, being compact, cannot have dimension $1$ or $2$.
Assume that $\dim H\geq 3$. It follows from the long exact sequence of homotopy groups induced by the fibration $\Gamma\to\TT^r\times K\to H_0\cong(\TT^r\times K)/\Gamma$, where $\Gamma$ is finite and $K$ is compact, connected, and semi-simple, that $r=0$ and $\Gamma\cong 1$, and so $H_0\cong K$. That is, $H_0$ is compact, $1$-connected, and semi-simple. From Equation~\eqref{e:dim2} we know that $k\geq 6$, and thus by Equation~\eqref{e:pipl2} and the second long exact sequence of homotopy groups above:
\begin{equation}\labell{e:pipl2-2}
\pi_p(\SS^{l_2})\cong\pi_{p-1}(H) \text{ for $p=3,4,5$}.
\end{equation}
Since $H_0$ is compact, $1$-connected and semi-simple, from Equation~\eqref{e:pipl2-2} we have that $\pi_2(H)\cong\pi_3(\SS^{l_2})$ is finite. Thus, $l_2>3$, and hence $l_2\geq 5$ and $\pi_2(H)\cong 1$. At the same time, since the cohomology $H^3(H_0;\RR)$ is infinite and $H_0$ is $2$-connected, Hurewicz' Theorem implies that $\pi_3(H)\cong\pi_4(\SS^{l_2})$ is infinite. This contradicts $l_2\geq 5$. We are left with the case that $\dim H=0$.
\end{proof}
\begin{proof}[Proof of Main Theorem]
Let $(M,\omega)$ be a symplectic manifold admitting a Hamiltonian $\SS^1$-action with momentum map $\Psi$. Fix a value $a\in\Psi(M)$. If $a$ is a regular value, then it is well-known that $M\red{a}\SS^1$ is representable: we have automatically from Equation~\eqref{e:Ham} that $\SS^1$ acts locally freely on $\Psi^{-1}(a)$, a submanifold of $M$, resulting in the orbifold $\Psi^{-1}(a)/\SS^1$.
Assume that $a$ is a critical value of $\Psi$, and that $M\red{a}\SS^1$ is not weakly unrepresentable: it is diffeomorphic to $N/G$ for some Lie group $G$ and manifold $N$ on which $G$ acts effectively and properly satisfying the orbifold condition near certain links as described in the definition of weakly unrepresentable. Without loss of generality, assume that $a=0$. Let $z\in\Psi^{-1}(0)$ be an $\SS^1$-fixed point (which must exist). Let $\dim M=2n$. By the slice theorem, there is an $\SS^1$-invariant open neighbourhood of $z$ that is $\SS^1$-equivariantly diffeomorphic to an effective, linear, symplectic representation $V$ of $\SS^1$, with weights $\alpha_1,\dots,\alpha_n$, equipped with the standard quadratic momentum map $\Phi$ as in Equation~\eqref{e:momentum}. Hereafter we focus on the representation $V$, identifying it with $\CC^n$ for convenience.
Assume without loss of generality that $\alpha_1,\dots,\alpha_j\neq 0$ and $\alpha_{j+1},\dots,\alpha_n=0$ ($0<j\leq n$). Then the zero-set of $\Phi$ is $\SS^1$-equivariantly isomorphic to $\CC^{n-j}$ or $${\operatorname{Cone}}(\SS^l\times\SS^m)\times\CC^{n-j}$$ as smooth stratified spaces for some positive odd integers $l$ and $m$. The former only occurs if the weights $\alpha_1,\dots,\alpha_j$ are all positive or all negative, and the resulting symplectic quotient is diffeomorphic to $\CC^{n-j}$. Assuming that there is at least one negative weight and at least one positive weight, the symplectic quotient is diffeomorphic to $\CC^j\red{0}\SS^1\times\CC^{n-j}$, where $\SS^1$ acts on $\CC^j$ via the restricted action on $\CC^j\times\{0\}$.
From the proof of Lemma~\ref{l:links}, $\CC^j\red{0}\SS^1$ is diffeomorphic to $\RR^{k+1}/H$ where $k\leq \dim N$ and $H$ is a compact subgroup of $G$. By Proposition~\ref{p:main} applied to the $\SS^1$-action on $\CC^j$, we conclude that there is exactly one negative weight or exactly one positive weight, and that $\dim H=0$. Hence $\CC^j\red{0}\SS^1$ is diffeomorphic to an orbifold $\RR^{k+1}/H$.
Since $z$ was an arbitrary fixed point in $\Psi^{-1}(0)$, it follows that $M\red{a}\SS^1$ is locally diffeomorphic to orbit spaces of linear actions of finite groups. Hence $M\red{a}\SS^1$ is an orbifold. Since every (effective) orbifold is representable, this completes the proof.
\end{proof}
\section{Introduction}\label{sec1}
Let $\mathbb{N}$ be the set of non-negative integers, $\mathbb{N}_{+}$ the set of positive integers and for given $k\in\mathbb{N}$ we define $\mathbb{N}_{\geq k}$ as the set of integers $\geq k$. Moreover, for a given set $A$, by $|A|$ we denote the number of elements of the set $A$.
Let $n\in\mathbb{N}_{\geq 3}$ and, for a given $k\leq n$, consider the $k$-th elementary symmetric polynomial
$$
\sigma_{k}(x_{1}, \ldots, x_{n})=\sum_{i_{1}<i_{2}<\ldots<i_{k}}x_{i_{1}}\cdots x_{i_{k}}.
$$
In the sequel, for simplicity of notation, we will write $\overline{1}_{k}$ instead of $(1,\ldots, 1)$, where the number of 1's is equal to $k$.
The question whether the sum of elements of a given finite set can be equal to the product of these elements is a classical one. Equivalently, we ask for which $n\in\mathbb{N}_{\geq 2}$ the Diophantine equation
$$
\sigma_{1}(\overline{X}_{n})=\sigma_{n}(\overline{X}_{n})
$$
has a solution in positive integers $x_{1},\ldots, x_{n}$. This question was investigated by many authors. For each $n\geq 3$ the equation has the solution $(\overline{1}_{n-2}, 2, n)$. In particular, $N(n)\geq 1$, where $N(n)$ is the number of solutions. Schinzel showed that there is no other solution for $n=6$ or $n=24$. He also investigated the existence of rational solutions in the case $n=3$ and proved that for each $m$ there are at least $m$ triples of integers with the same sum and the same product \cite{Sch}. Misiurewicz showed that $n=2, 3, 4, 6, 24, 114, 174, 444$ are the only values of $n<10^3$ for which $N(n)=1$ \cite{Mi} (recall that the value 114 was misprinted as 144 in \cite{Mi}). This was later extended by Brown and by Singmaster, Bennett and Dunn to $n\leq 1444000$. All these results were improved by Weingartner, who proved that $N(n)>1$ for $444<n<10^{11}$ \cite{Wei}. This was possible due to a connection with Sophie Germain primes \cite{Br} (see also \cite{Nyb}). The question whether there is a value $n>444$ such that $N(n)=1$ is still open. A nice exposition of the basic results concerning this problem can be found in \cite{Eck}. For more on the history of this problem and related investigations see Section D24 in \cite{Guy}.
Motivated by the findings concerning the equation $\sigma_{1}(\overline{X}_{n})=\sigma_{n}(\overline{X}_{n})$ it is natural to ask a question about the existence of positive integer solutions of the Diophantine equation
\begin{equation}\label{maineq}
\sigma_{2}(\overline{X}_{n})=\sigma_{n}(\overline{X}_{n}).
\end{equation}
Let us note that (\ref{maineq}) is equivalent to the Diophantine equation
$$
\sigma_{n-2}\left(\frac{1}{x_{1}},\ldots,\frac{1}{x_{n}}\right)=1
$$
which needs to be solved in positive integers; its solutions give quite special representations of 1 in terms of Egyptian fractions. By a solution of (\ref{maineq}) we mean a sequence $\overline{X}_{n}=(x_{1},\ldots, x_{n})$ satisfying the condition $x_{i}\leq x_{i+1}$ for $i=1,\ldots, n-1$.
In this paper we are interested in the structure of the set
$$
S(n)=\{\overline{X}_{n}\in\mathbb{N}_{+}^{n}:\;\overline{X}_{n}\;\mbox{is a solution of}\;(\ref{maineq})\}
$$
and investigate its various properties. Next, for $i\in\{0,\ldots, n\}$, let us put
$$
S_{i}(n)=\{\overline{X}_{n}\in S(n):\;x_{1}=\ldots=x_{n-i}=1,\;2\leq x_{n-i+1}\leq \ldots \leq x_{n}\},
$$
i.e., $S_{i}(n)\subset S(n)$ contains the solutions of (\ref{maineq}) with exactly $i$ terms different from 1. We clearly have the (disjoint) decomposition
$$
S(n)=\bigcup_{i=0}^{n}S_{i}(n).
$$
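For small $n$ the set $S(n)$ can be enumerated by brute force, using the bound $x_{n}\leq \frac{1}{2}n(3n-5)$ proved in Section \ref{sec2} (Theorem \ref{mainbound}). A minimal Python sketch (the names \texttt{sigma2} and \texttt{S} are ours):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def sigma2(xs):
    """Second elementary symmetric polynomial sigma_2."""
    return sum(a * b for a, b in combinations(xs, 2))

def S(n):
    """All non-decreasing n-tuples of positive integers with
    sigma_2 = sigma_n, searched up to the bound x_n <= n(3n-5)/2."""
    xmax = n * (3 * n - 5) // 2
    return [xs
            for xs in combinations_with_replacement(range(1, xmax + 1), n)
            if sigma2(xs) == prod(xs)]
```

This naive search is only feasible for very small $n$; the results of Section \ref{sec2} are what make the enumeration up to $n=16$ practical.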
Let us describe the content of the paper in some detail. In Section \ref{sec2} we prove the finiteness of the set $S(n)$ together with the maximum value of $x_{n}$ which can appear in a solution $\overline{X}_{n}$ of (\ref{maineq}) (Theorem \ref{mainbound}). We also present a bound for $x_{n-2}$ (Corollary \ref{xneqxn}). Using the obtained results we compute all the solutions of (\ref{maineq}) for $n\leq 16$.
In Section \ref{sec3} we investigate the set $S_{3}(n)$ in detail. Our findings allow us to prove that
$$
|S(n)|\geq |S_{3}(n)|\geq \frac{1}{2}\tau\left(\frac{1}{2}(n-2)(3n-1)\right),
$$
where $\tau(m)$ is the number of divisors of a positive integer $m$. Moreover, we investigate the behaviour of $x_{n}/x_{n-1}$, where $x_{n-1}, x_{n}$ are components of $\overline{X}_{n}\in S(n)$. In particular, we prove that the set of rational values $x_{n}/x_{n-1}$, where $\overline{X}_{n}\in S(n)$ and $n\in\mathbb{N}_{\geq 3}$, is dense in the set $[1,+\infty)$.
Finally, in the last section we state several questions and conjectures which, we believe, will motivate further investigations.
\section{Boundedness of $x_{n}$ and enumeration of $S(n)$ for $n\leq 16$}\label{sec2}
Before we start, we mention two basic identities involving symmetric polynomials. More precisely, we have
\begin{equation}\label{sigmaid1}
\sigma_{1}(\overline{X}_{k_{1}},\overline{Y}_{k_{2}})=\sigma_{1}(\overline{X}_{k_{1}})+\sigma_{1}(\overline{Y}_{k_{2}})
\end{equation}
and
\begin{equation}\label{sigmaid2}
\sigma_{2}(\overline{X}_{k_{1}},\overline{Y}_{k_{2}})=\sigma_{1}(\overline{X}_{k_{1}})\sigma_{1}(\overline{Y}_{k_{2}})+\sigma_{2}(\overline{X}_{k_{1}})+\sigma_{2}(\overline{Y}_{k_{2}}),
\end{equation}
where $\overline{X}_{k_{1}}, \overline{Y}_{k_{2}}$ are independent sets of variables. In the sequel we will use these identities several times.
We start with a simple bound for $x_{1}\cdot\ldots\cdot x_{n-2}$.
\begin{lem}\label{upperboundforxn-2}
If $n\geq 3$ and $\overline{X}_{n}\in S(n)$ then $x_{1}\cdot\ldots\cdot x_{n-2}\leq \binom{n}{2}$. In particular, $x_{n-2}\leq \binom{n}{2}$.
\end{lem}
\begin{proof}
If $\overline{X}_{n}\in S(n)$ with $x_{1}\leq\ldots\leq x_{n}$ then
$$
\sigma_{n}(\overline{X}_{n})=\sigma_{2}(\overline{X}_{n})\leq \binom{n}{2}x_{n-1}x_{n},
$$
and dividing by $x_{n-1}x_{n}$ we get the inequality from the statement.
\end{proof}
On the other hand, we have a lower bound $x_{1}\cdot\ldots\cdot x_{n-2}\geq 2$ as the following holds.
\begin{lem}\label{lowerboundforxn-2}
For $n\geq 3$ we have $S_{0}(n)=S_{1}(n)=S_{2}(n)=\emptyset$.
\end{lem}
\begin{proof}
Suppose that $\overline{X}_n\in S_i(n)$ for some $i\leq 2$. Then
$$\sigma_2(\overline{X}_n)\geq x_{n-1}x_n+x_{n-1}+x_n> x_{n-1}x_n= \sigma_n(\overline{X}_n),$$
which is a contradiction.
\end{proof}
Lemma \ref{lowerboundforxn-2} states that if $n\in\mathbb{N}_{\geq 3}$ and $S_i(n)\neq\emptyset$, then $i\geq 3$. However, if $S_i(n)\neq\emptyset$, then $i$ cannot be too big compared to $n$. This is due to the lemma below.
\begin{lem}\label{inumber}
Let $n\in\mathbb{N}_{\geq 3}$. If $S_i(n)\neq\emptyset$, then $i\leq 2+\log_2\binom{n}{2}$.
\end{lem}
\begin{proof}
Let $\overline{X}_{n}\in S_i(n)$. Then
$$2^{i-2}\leq x_{n-i+1}\cdot\ldots\cdot x_{n-2}\leq\binom{n}{2},$$
where the last inequality follows from Lemma \ref{upperboundforxn-2}. Taking the base-$2$ logarithm of both sides of the above inequality we get
$$i-2\leq\log_2\binom{n}{2}.$$
The lemma is proved.
\end{proof}
As an immediate consequence we get the following.
\begin{cor}\label{ones}
Let $n\in\mathbb{N}_{\geq 3}$ and $\overline{X}_{n}\in S(n)$. If $n\geq 6$ then $x_{1}=1$. If $n\geq 8$ then $x_{1}=x_{2}=1$.
\end{cor}
By $\overline{X}_{n-2}$ we mean $(x_1,\ldots,x_{n-2})$. The next result shows how to find $x_{n-1},x_n\in\mathbb{N}$ such that $\overline{X}_{n}\in S(n)$, where $\overline{X}_{n-2}$ is fixed.
\begin{thm}\label{sols}
Let $n\in\mathbb{N}_{\geq 3}$ and $\overline{X}_{n}\in S(n)$. Then
\begin{align*}
x_{n-1}&=\frac{\sigma_1(\overline{X}_{n-2})+d_{1}}{\sigma_{n-2}(\overline{X}_{n-2})-1},\quad x_n=\frac{\sigma_1(\overline{X}_{n-2})+d_{2}}{\sigma_{n-2}(\overline{X}_{n-2})-1},
\end{align*}
where $d_1,d_2\in\mathbb{N}$ are such that
$$d_{1}d_{2}=\sigma_1(\overline{X}_{n-2})^2+\sigma_2(\overline{X}_{n-2})(\sigma_{n-2}(\overline{X}_{n-2})-1)=:f(n,\overline{X}_{n-2}).$$
\end{thm}
\begin{proof}
Let us observe that
\begin{align*}
&\ (\sigma_{n-2}(\overline{X}_{n-2})-1)(\sigma_{n}(\overline{X}_n)-\sigma_{2}(\overline{X}_n))\\
= &\ (\sigma_{n-2}(\overline{X}_{n-2})-1)\\
&\ \cdot\left[(\sigma_{n-2}(\overline{X}_{n-2})-1)x_{n-1}x_n-\sigma_{1}(\overline{X}_{n-2})(x_{n-1}+x_n)-\sigma_{2}(\overline{X}_{n-2})\right]\\
= &\ \left[(\sigma_{n-2}(\overline{X}_{n-2})-1)x_{n-1}-\sigma_{1}(\overline{X}_{n-2})\right]\left[(\sigma_{n-2}(\overline{X}_{n-2})-1)x_n-\sigma_{1}(\overline{X}_{n-2})\right]\\
&\ -f(n,\overline{X}_{n-2}).
\end{align*}
Thus, because $\sigma_{n-2}(\overline{X}_{n-2})\geq 2$ we see that $\overline{X}_n\in S(n)$ if and only if there are positive integers $d_{1}, d_{2}$ such that $d_{1}d_{2}=f(n,\overline{X}_{n-2})$ and the system of equations
$$
(\sigma_{n-2}(\overline{X}_{n-2})-1)x_{n-1}-\sigma_{1}(\overline{X}_{n-2})=d_{1},\quad (\sigma_{n-2}(\overline{X}_{n-2})-1)x_{n}-\sigma_{1}(\overline{X}_{n-2})=d_{2}
$$
has a solution in integers $x_{n-1}, x_n$. Solving for $x_{n-1}, x_n$ we get the expressions from the statement.
\end{proof}
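Theorem \ref{sols} reduces the search for $(x_{n-1},x_n)$ to factoring $f(n,\overline{X}_{n-2})$. A small Python sketch of this procedure (the helper name \texttt{extend} is ours; we assume $\sigma_{n-2}(\overline{X}_{n-2})\geq 2$, which holds by Lemma \ref{lowerboundforxn-2}):

```python
from itertools import combinations
from math import isqrt, prod

def extend(prefix):
    """Given X_{n-2} = prefix with sigma_{n-2}(prefix) >= 2, return all
    pairs (x_{n-1}, x_n) completing it to a solution of sigma_2 = sigma_n,
    one pair per suitable divisor pair d1 <= d2 of f(n, prefix)."""
    s1 = sum(prefix)                                     # sigma_1
    s2 = sum(a * b for a, b in combinations(prefix, 2))  # sigma_2
    p = prod(prefix)                                     # sigma_{n-2}
    f = s1 * s1 + s2 * (p - 1)
    pairs = []
    for d1 in range(1, isqrt(f) + 1):
        if f % d1 == 0:
            d2 = f // d1
            if (s1 + d1) % (p - 1) == 0 and (s1 + d2) % (p - 1) == 0:
                x, y = (s1 + d1) // (p - 1), (s1 + d2) // (p - 1)
                if x >= prefix[-1]:  # keep the tuple non-decreasing
                    pairs.append((x, y))
    return pairs
```

For example, \texttt{extend((1, 2))} recovers the solution $(1,2,4,14)\in S(4)$, and \texttt{extend((1, 1, 2))} recovers the two solutions $(1,1,2,5,25)$ and $(1,1,2,7,11)$ of $S(5)$.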
\begin{rem}
{\rm According to Lemma \ref{upperboundforxn-2} and Theorem \ref{sols}, it suffices to find all the possible values of $x_{n-1}$ and $x_n$ for all $\overline{X}_{n-2}$ such that $\sigma_{n-2}(\overline{X}_{n-2})\leq\binom{n}{2}$. Since there are only finitely many such $(n-2)$-tuples $\overline{X}_{n-2}$, the number of solutions of \eqref{maineq} is finite for each $n\in\mathbb{N}_{\geq 3}$.}
\end{rem}
Now we state the result that gives an upper bound for the unknown $x_n$, where $n\in\mathbb{N}_{\geq 3}$ is fixed.
\begin{thm}\label{mainbound}
Let $n\in\mathbb{N}_{\geq 3}$ and let $m\in\mathbb{N}$ satisfy
$$
m=\sigma_{2}(\overline{X}_{n})=\sigma_{n}(\overline{X}_{n}).
$$
Then $m\leq n^2(3n-5)$ and $x_{n}\leq \frac{1}{2}n(3n-5)$.
\end{thm}
Before we prove Theorem \ref{mainbound} we need some preparation. First, if $\overline{X}_n\in S_i(n)$, then $i\geq 3$ by Lemma \ref{lowerboundforxn-2}. Thus, we may put
$$y_j=x_{n-i+j},\quad j\in\{1,\ldots,i-2\}$$
and
$$\overline{Y}_{i-2}=(y_1,\ldots,y_{i-2}).$$
Of course,
\begin{align}\label{4}
\overline{Y}_{i-2}\in\mathbb{N}_{\geq 2}^{i-2},\quad \overline{X}_{n-2}=(\overline{1}_{n-i},\overline{Y}_{i-2})
\end{align}
and
\begin{align}\label{5}
\sigma_{n-2}(\overline{X}_{n-2})=\sigma_{i-2}(\overline{Y}_{i-2}).
\end{align}
Let us estimate $\sigma_{1}(\overline{Y}_{i-2})$ and $\sigma_{2}(\overline{Y}_{i-2})$ from above in terms of $\sigma_{i-2}(\overline{Y}_{i-2})$.
\begin{lem}\label{estimate}
We have
$$\sigma_{1}(\overline{Y}_{i-2})\leq \frac{i-2}{2^{i-3}}\sigma_{i-2}(\overline{Y}_{i-2}), \quad \sigma_{2}(\overline{Y}_{i-2})\leq \frac{(i-2)(i-3)}{2^{i-3}}\sigma_{i-2}(\overline{Y}_{i-2}).$$
\end{lem}
\begin{proof}
Since $y_j\geq 2$ for each $j\in\{1,\ldots,i-2\}$, we have
$$y_k=\frac{\sigma_{i-2}(\overline{Y}_{i-2})}{\prod_{j\neq k}y_j}\leq\frac{\sigma_{i-2}(\overline{Y}_{i-2})}{2^{i-3}}$$
and
$$y_ky_l=\frac{\sigma_{i-2}(\overline{Y}_{i-2})}{\prod_{j\neq k,l}y_j}\leq\frac{\sigma_{i-2}(\overline{Y}_{i-2})}{2^{i-4}}$$
for $k,l\in\{1,\ldots,i-2\}$. As a result we get
$$\sigma_{1}(\overline{Y}_{i-2})=\sum_{k=1}^{i-2}y_k\leq (i-2)\frac{\sigma_{i-2}(\overline{Y}_{i-2})}{2^{i-3}}$$
and
$$\sigma_{2}(\overline{Y}_{i-2})=\sum_{1\leq k<l\leq i-2}y_ky_l\leq \binom{i-2}{2}\frac{\sigma_{i-2}(\overline{Y}_{i-2})}{2^{i-4}}.$$
The results follow.
\end{proof}
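The two bounds of Lemma \ref{estimate} can be sanity-checked exhaustively on small tuples; a Python sketch (the name \texttt{check\_estimate} and the sample ranges are ours, and the inequalities are multiplied through by $2^{i-3}$ to stay in integer arithmetic):

```python
from itertools import combinations, product
from math import prod

def check_estimate(max_entry=6):
    """Exhaustively verify, for tuples Y in {2,...,max_entry}^(i-2)
    with i-2 in {2,3,4}, the integer forms of the two bounds:
    2^(i-3)*sigma_1(Y) <= (i-2)*sigma_{i-2}(Y) and
    2^(i-3)*sigma_2(Y) <= (i-2)*(i-3)*sigma_{i-2}(Y)."""
    for m in (2, 3, 4):                  # m = i - 2
        i = m + 2
        for ys in product(range(2, max_entry + 1), repeat=m):
            top = prod(ys)               # sigma_{i-2}(Y) = y_1 * ... * y_{i-2}
            s1 = sum(ys)
            s2 = sum(a * b for a, b in combinations(ys, 2))
            if 2 ** (i - 3) * s1 > (i - 2) * top:
                return False
            if 2 ** (i - 3) * s2 > (i - 2) * (i - 3) * top:
                return False
    return True
```

Note that the all-$2$ tuples attain both bounds with equality, so the constants in the lemma cannot be improved.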
As an immediate consequence of Lemma \ref{estimate}, the formulae \eqref{sigmaid1}, \eqref{sigmaid2} and the conditions \eqref{4} and \eqref{5} we get the following.
\begin{cor}\label{estimate2}
If $n\in\mathbb{N}_{\geq 3}$ and $\overline{X}_n\in S_i(n)$, then
\begin{align}\label{6}
\sigma_{1}(\overline{X}_{n-2})\leq n-i+\frac{i-2}{2^{i-3}}\sigma_{n-2}(\overline{X}_{n-2})
\end{align}
and
\begin{align}\label{7}
\sigma_{2}(\overline{X}_{n-2})\leq \frac{1}{2}(n-i)(n-i-1)+\frac{(i-2)(n-3)}{2^{i-3}}\sigma_{n-2}(\overline{X}_{n-2}).
\end{align}
In particular, for each $\overline{X}_n\in S(n)$ we have
\begin{align}\label{sigma1bound}
\sigma_{1}(\overline{X}_{n-2})\leq n-3+\sigma_{n-2}(\overline{X}_{n-2})
\end{align}
and
\begin{align}\label{sigma2bound}
\sigma_{2}(\overline{X}_{n-2})\leq \frac{1}{2}(n-3)(n-4)+(n-3)\sigma_{n-2}(\overline{X}_{n-2}).
\end{align}
\end{cor}
\begin{proof}
The inequalities \eqref{sigma1bound} and \eqref{sigma2bound} follow as the expressions on the right hand side in \eqref{6} and \eqref{7} are clearly maximized for $i=3$.
\end{proof}
At this moment we are ready to give the proof of Theorem \ref{mainbound}.
\begin{proof}[Proof of Theorem \ref{mainbound}]
From Theorem \ref{sols} we know that $x_{n-1}=\frac{\sigma_1(\overline{X}_{n-2})+d_{1}}{\sigma_{n-2}(\overline{X}_{n-2})-1}$ and $x_n=\frac{\sigma_1(\overline{X}_{n-2})+d_{2}}{\sigma_{n-2}(\overline{X}_{n-2})-1}$, where $d_1d_2=f(n,\overline{X}_{n-2})$. Hence,
\begin{align*}
m &\ =\sigma_{n-2}(\overline{X}_{n-2})x_{n-1}x_n\\
&\ =\sigma_{n-2}(\overline{X}_{n-2})\frac{f(n,\overline{X}_{n-2})+(d_1+d_2)\sigma_1(\overline{X}_{n-2})+\sigma_1(\overline{X}_{n-2})^2}{(\sigma_{n-2}(\overline{X}_{n-2})-1)^2}.
\end{align*}
We consider two cases.
\bigskip
\textbf{Case I:} $\sigma_{n-2}(\overline{X}_{n-2})\leq n$. Then
\begin{align*}
x_n\leq \frac{\sigma_1(\overline{X}_{n-2})+f(n,\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-1}=\frac{\sigma_1(\overline{X}_{n-2})(\sigma_1(\overline{X}_{n-2})+1)}{\sigma_{n-2}(\overline{X}_{n-2})-1}+\sigma_2(\overline{X}_{n-2}).
\end{align*}
Using inequalities \eqref{sigma1bound} and \eqref{sigma2bound} we obtain
\begin{align*}
x_n\leq &\ \frac{(\sigma_{n-2}(\overline{X}_{n-2})+n-3)(\sigma_{n-2}(\overline{X}_{n-2})+n-2)}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
&\ +\frac{1}{2}(n-3)(n-4)+(n-3)\sigma_{n-2}(\overline{X}_{n-2})\\
= &\ \sigma_{n-2}(\overline{X}_{n-2})+2n-4+\frac{(n-2)(n-1)}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
&\ +\frac{1}{2}(n-3)(n-4)+(n-3)\sigma_{n-2}(\overline{X}_{n-2})\\
= &\ (n-2)(\sigma_{n-2}(\overline{X}_{n-2})+2)+\frac{(n-2)(n-1)}{\sigma_{n-2}(\overline{X}_{n-2})-1}+\frac{1}{2}(n-3)(n-4).
\end{align*}
A simple analysis of the last expression treated as a function of $\sigma_{n-2}(\overline{X}_{n-2})\in [2,n]$ shows that this expression is maximized for $\sigma_{n-2}(\overline{X}_{n-2})\in\{2,n\}$ and attains the value
$$(n-2)(n+3)+\frac{1}{2}(n-3)(n-4)=\frac{1}{2}n(3n-5).$$
Now we estimate the value of $m$. Since $d_1+d_2\leq 1+f(n,\overline{X}_{n-2})$, we have
\begin{align*}
m\leq &\ \sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{f(n,\overline{X}_{n-2})+(1+f(n,\overline{X}_{n-2}))\sigma_1(\overline{X}_{n-2})+\sigma_1(\overline{X}_{n-2})^2}{(\sigma_{n-2}(\overline{X}_{n-2})-1)^2}\\
= &\ \sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{\sigma_1(\overline{X}_{n-2})+1}{\sigma_{n-2}(\overline{X}_{n-2})-1}\cdot\frac{\sigma_1(\overline{X}_{n-2})+f(n,\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-1}
\end{align*}
From the estimation of $x_n$ we know that $\frac{\sigma_1(\overline{X}_{n-2})+f(n,\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-1}\leq\frac{1}{2}n(3n-5)$, so it remains to bound $\sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{\sigma_1(\overline{X}_{n-2})+1}{\sigma_{n-2}(\overline{X}_{n-2})-1}$ from above. By \eqref{sigma1bound} we have
\begin{align*}
\sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{\sigma_1(\overline{X}_{n-2})+1}{\sigma_{n-2}(\overline{X}_{n-2})-1}\leq \sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{\sigma_{n-2}(\overline{X}_{n-2})+n-2}{\sigma_{n-2}(\overline{X}_{n-2})-1}.
\end{align*}
An analysis of the derivative of the expression on the right hand side as a function of $\sigma_{n-2}(\overline{X}_{n-2})$ shows that this expression is maximized for $\sigma_{n-2}(\overline{X}_{n-2})\in\{2,n\}$ and attains the value $2n$. Summing up,
$$m\leq n^2(3n-5).$$
\bigskip
\textbf{Case II:} $\sigma_{n-2}(\overline{X}_{n-2})\geq n+1$. Since $x_{n-1}\geq 2$, we have
\begin{align*}
d_1= &\ (\sigma_{n-2}(\overline{X}_{n-2})-1)x_{n-1}-\sigma_{1}(\overline{X}_{n-2})\geq 2(\sigma_{n-2}(\overline{X}_{n-2})-1)-\sigma_{1}(\overline{X}_{n-2})\\
\geq &\ \sigma_{n-2}(\overline{X}_{n-2})-n+1\geq 2,
\end{align*}
where in the first inequality on the last line we used \eqref{sigma1bound}. Hence,
\begin{align*}
d_2=\frac{f(n,\overline{X}_{n-2})}{d_1}\leq\frac{\sigma_1(\overline{X}_{n-2})^2+\sigma_2(\overline{X}_{n-2})(\sigma_{n-2}(\overline{X}_{n-2})-1)}{\sigma_{n-2}(\overline{X}_{n-2})-n+1}.
\end{align*}
Consequently,
\begin{align*}
x_n\leq &\ \frac{\sigma_1(\overline{X}_{n-2})(\sigma_{n-2}(\overline{X}_{n-2})-n+1)+\sigma_1(\overline{X}_{n-2})^2+\sigma_2(\overline{X}_{n-2})(\sigma_{n-2}(\overline{X}_{n-2})-1)}{(\sigma_{n-2}(\overline{X}_{n-2})-1)(\sigma_{n-2}(\overline{X}_{n-2})-n+1)}\\
= &\ \frac{\sigma_1(\overline{X}_{n-2})(\sigma_{n-2}(\overline{X}_{n-2})-n+1+\sigma_1(\overline{X}_{n-2}))+\sigma_2(\overline{X}_{n-2})(\sigma_{n-2}(\overline{X}_{n-2})-1)}{(\sigma_{n-2}(\overline{X}_{n-2})-1)(\sigma_{n-2}(\overline{X}_{n-2})-n+1)}\\
\leq &\ \frac{\sigma_1(\overline{X}_{n-2})(2\sigma_{n-2}(\overline{X}_{n-2})-2)+\sigma_2(\overline{X}_{n-2})(\sigma_{n-2}(\overline{X}_{n-2})-1)}{(\sigma_{n-2}(\overline{X}_{n-2})-1)(\sigma_{n-2}(\overline{X}_{n-2})-n+1)}\\
\leq &\ \frac{2\sigma_1(\overline{X}_{n-2})+\sigma_2(\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-n+1}\\
\leq &\ \frac{2(n-3)+2\sigma_{n-2}(\overline{X}_{n-2})+\frac{1}{2}(n-3)(n-4)+(n-3)\sigma_{n-2}(\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-n+1}\\
= &\ \frac{\frac{1}{2}(n-3)n+(n-1)\sigma_{n-2}(\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-n+1}= \frac{\frac{1}{2}(n-3)n+(n-1)^2}{\sigma_{n-2}(\overline{X}_{n-2})-n+1}+n-1\\
\leq &\ \frac{\frac{1}{2}(n-3)n+(n-1)^2}{2}+n-1=\frac{1}{4}(3n^2-3n-2),
\end{align*}
where in the third and fifth line we used \eqref{sigma1bound} and \eqref{sigma2bound}.
Now we estimate the value of $m$. Since $d_1d_2=f(n,\overline{X}_{n-2})$, $d_1\leq d_2$ and, as shown above, $d_1\geq \sigma_{n-2}(\overline{X}_{n-2})-n+1$, the value of $d_1+d_2$ is maximized for $d_1= \sigma_{n-2}(\overline{X}_{n-2})-n+1=:d_1(\overline{X}_{n-2})$ and $d_2=\frac{f(n,\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-n+1}=:d_2(\overline{X}_{n-2})$. Hence, we have
\begin{align*}
m\leq &\ \sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{f(n,\overline{X}_{n-2})+(d_1(\overline{X}_{n-2})+d_2(\overline{X}_{n-2}))\sigma_1(\overline{X}_{n-2})+\sigma_1(\overline{X}_{n-2})^2}{(\sigma_{n-2}(\overline{X}_{n-2})-1)^2}\\
= &\ \sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{\sigma_1(\overline{X}_{n-2})+d_1(\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-1}\cdot\frac{\sigma_1(\overline{X}_{n-2})+d_2(\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-1}.
\end{align*}
From the estimation of $x_n$ we know that $$\frac{\sigma_1(\overline{X}_{n-2})+d_2(\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-1}\leq\frac{\frac{1}{2}(n-3)n+(n-1)^2}{\sigma_{n-2}(\overline{X}_{n-2})-n+1}+n-1,$$ so it remains to bound $\sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{\sigma_1(\overline{X}_{n-2})+d_1(\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-1}$ from above. By \eqref{sigma1bound} we have
\begin{align*}
&\ \sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{\sigma_1(\overline{X}_{n-2})+d_1(\overline{X}_{n-2})}{\sigma_{n-2}(\overline{X}_{n-2})-1}\leq \sigma_{n-2}(\overline{X}_{n-2})\cdot\frac{2\sigma_{n-2}(\overline{X}_{n-2})-2}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
= &\ 2\sigma_{n-2}(\overline{X}_{n-2}).
\end{align*}
Hence,
$$m\leq 2\sigma_{n-2}(\overline{X}_{n-2})\left(\frac{\frac{1}{2}(n-3)n+(n-1)^2}{\sigma_{n-2}(\overline{X}_{n-2})-n+1}+n-1\right).$$
An analysis of the derivative of the expression on the right hand side as a function of $\sigma_{n-2}(\overline{X}_{n-2})\in \left[n+1,\binom{n}{2}\right]$ shows that this expression is maximized for $\sigma_{n-2}(\overline{X}_{n-2})=n+1$ and attains the value $\frac{1}{2}(n+1)(3n^2-3n-2)$.
\bigskip
Since $n\geq 3$, we have $\frac{1}{4}(3n^2-3n-2)\leq\frac{1}{2}n(3n-5)$ and $\frac{1}{2}(n+1)(3n^2-3n-2)<n^2(3n-5)$. Thus $x_n\leq \frac{1}{2}n(3n-5)$ and $m\leq n^2(3n-5)$ in any case.
\end{proof}
\begin{rem}
{\rm From the proofs of Corollary \ref{estimate2} and Theorem \ref{mainbound} we see that $x_n=\frac{1}{2}n(3n-5)$ and $m=n^2(3n-5)$ if and only if $i=3$ and $\{\sigma_{n-2}(\overline{X}_{n-2}),x_{n-1}\}=\{2,n\}$, i.e. $\overline{X}_n=\left(\overline{1}_{n-3},2,n,\frac{1}{2}n(3n-5)\right)$.}
\end{rem}
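The extremal tuple $\left(\overline{1}_{n-3},2,n,\frac{1}{2}n(3n-5)\right)$ is easy to verify numerically; a Python sketch (the names \texttt{extremal} and \texttt{is\_solution} are ours):

```python
from itertools import combinations
from math import prod

def extremal(n):
    """The tuple (1^(n-3), 2, n, n(3n-5)/2) from the remark."""
    return (1,) * (n - 3) + (2, n, n * (3 * n - 5) // 2)

def is_solution(xs):
    """Check sigma_2(xs) = sigma_n(xs)."""
    return sum(a * b for a, b in combinations(xs, 2)) == prod(xs)
```

Here \texttt{prod(extremal(n))} equals $n^2(3n-5)$, in agreement with the value of $m$ in the remark.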
We are also able to estimate $x_{n-2}$.
\begin{lem}\label{C}
Let $n\in\mathbb{N}_{\geq 3}$ and $\overline{X}_{n}\in S(n)$. Assume that $x_{n-2}\geq 1+C(n-2)^{2/3}$ for some real number $C>0$. Then
\begin{align}\label{ineqC}
C\leq\frac{1}{C}(n-2)^{-1/3}+\sqrt{\frac{3}{2C}(n-2)^{-1}+\frac{1}{C^2}(n-2)^{-2/3}+(n-2)^{-1/3}+\frac{1}{2C}}.
\end{align}
\end{lem}
\begin{proof}
By Theorem \ref{sols} and the fact that $x_n\geq x_{n-1}\geq x_{n-2}\geq 1+C(n-2)^{2/3}$ we obtain the following chain of inequalities:
\begin{align*}
&\ 1+C(n-2)^{2/3}\leq x_{n-2}\leq x_{n-1}\leq \frac{\sigma_1(\overline{X}_{n-2})+\sqrt{f(n,\overline{X}_{n-2})}}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
&\ =\frac{\sigma_1(\overline{X}_{n-2})+\sqrt{\sigma_1(\overline{X}_{n-2})^2+\sigma_2(\overline{X}_{n-2})(\sigma_{n-2}(\overline{X}_{n-2})-1)}}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
&\ \leq\frac{\sigma_{n-2}(\overline{X}_{n-2})-1+n-2}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
&\ \quad +\frac{\sqrt{(\sigma_{n-2}(\overline{X}_{n-2})-1+n-2)^2+[\binom{n-2}{2}+(n-3)(\sigma_{n-2}(\overline{X}_{n-2})-1)](\sigma_{n-2}(\overline{X}_{n-2})-1)}}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
&\ =1+\frac{n-2}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
&\ \quad +\sqrt{\left(1+\frac{n-2}{\sigma_{n-2}(\overline{X}_{n-2})-1}\right)^2+\frac{n-3}{2}\frac{n-2}{\sigma_{n-2}(\overline{X}_{n-2})-1}+n-3}\\
&\ \leq 1+\frac{n-2}{x_{n-2}-1}+\sqrt{\left(1+\frac{n-2}{x_{n-2}-1}\right)^2+\frac{n-3}{2}\frac{n-2}{x_{n-2}-1}+n-3}\\
&\ \leq 1+\frac{n-2}{C(n-2)^{2/3}}+\sqrt{\left(1+\frac{n-2}{C(n-2)^{2/3}}\right)^2+\frac{n-3}{2}\frac{n-2}{C(n-2)^{2/3}}+n-3}\\
&\ \leq 1+\frac{1}{C}(n-2)^{1/3}\\
&\ \quad +\sqrt{1+\frac{2}{C}(n-2)^{1/3}+\frac{1}{C^2}(n-2)^{2/3}+\frac{1}{2C}(n-2)^{4/3}-\frac{1}{2C}(n-2)^{1/3}+n-3}\\
&\ =1+\frac{1}{C}(n-2)^{1/3}+\sqrt{\frac{3}{2C}(n-2)^{1/3}+\frac{1}{C^2}(n-2)^{2/3}+n-2+\frac{1}{2C}(n-2)^{4/3}}.
\end{align*}
After cancelling the $1$'s and dividing by $(n-2)^{2/3}$ we get the statement.
\end{proof}
\begin{cor}\label{xn2ineq}
Let $n\in\mathbb{N}_{\geq 3}$ and $\overline{X}_{n}\in S(n)$. Then we have
\begin{align}\label{ineqxS3}
x_{n-2}\leq 1+\lfloor 2(n-2)^{2/3}\rfloor ,
\end{align}
where the equality holds only if $n=3$. Moreover, for each $C>2^{-1/3}$ there are only finitely many values of $n\in\mathbb{N}_{\geq 3}$ such that there exists $\overline{X}_{n}\in S(n)$ with $x_{n-2}>\lceil C(n-2)^{2/3}\rceil$.
\end{cor}
\begin{proof}
If $x_{n-2}=1+C(n-2)^{2/3}$ for some $C>0$ then by Lemma \ref{C} we have
\begin{align}\label{ineqC2}
C\leq\frac{1}{C}+\sqrt{\frac{3}{2C}+\frac{1}{C^2}+1+\frac{1}{2C}}
\end{align}
since $n\geq 3$. The left hand side of \eqref{ineqC2} is an increasing function of $C$ while the right hand side is a decreasing one. Moreover, the equality holds for $C=2$. Hence, \eqref{ineqC2} holds if and only if $C\leq 2$. Thus $x_{n-2}\leq 1+2(n-2)^{2/3}$. As $x_{n-2}\in\mathbb{Z}$, we have $x_{n-2}\leq 1+\lfloor 2(n-2)^{2/3}\rfloor$. Moreover, if $n>3$ then $C=2$ does not satisfy inequality \eqref{ineqC}. Therefore the equality in \eqref{ineqxS3} holds only if $n=3$.
We are left with the proof of the ``moreover'' part. Let $C>2^{-1/3}$. Assume to the contrary that there are infinitely many values of $n\in\mathbb{N}$ such that there exists $\overline{X}_{n}\in S(n)$ with $x_{n-2}\geq 1+C(n-2)^{2/3}$. Letting $n$ tend to $+\infty$ in inequality \eqref{ineqC} we obtain
$$C\leq\frac{1}{\sqrt{2C}},$$
which means that $C\leq\frac{1}{\sqrt[3]{2}}$. This contradicts our assumption that $C>2^{-1/3}$.
\end{proof}
\begin{rem}
{\rm Let $\overline{X}_{n}\in S(n)$. If the rate of growth of $x_{n-2}$ is $2^{-1/3}n^{2/3}+o(n^{2/3})$, then the same is true for $x_{n-1}$ and $x_n$. Indeed, from the proof of Lemma \ref{C} we know that
\begin{align*}
&\ 2^{-1/3}(n-2)^{2/3}+o((n-2)^{2/3})\leq x_{n-2}\leq x_{n-1}=\frac{\sigma_1(\overline{X}_{n-2})+d_1}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
&\ \leq\frac{\sigma_1(\overline{X}_{n-2})+\sqrt{f(n,\overline{X}_{n-2})}}{\sigma_{n-2}(\overline{X}_{n-2})-1}\\
&\ \leq 1+2^{1/3}(n-2)^{1/3}\\
&\ \quad +\sqrt{3\cdot 2^{-2/3}(n-2)^{1/3}+2^{2/3}(n-2)^{2/3}+n-2+2^{-2/3}(n-2)^{4/3}}\\
&\ =2^{-1/3}n^{2/3}+o(n^{2/3}),
\end{align*}
where $\sigma_{n-2}(\overline{X}_{n-2})=x_{n-2}$ (i.e., $\overline{X}_{n-2}=(\overline{1}_{n-3},x_{n-2})$) and $d_1\sim\sqrt{f(n,\overline{X}_{n-2})}=\sqrt{f(n,x_{n-2})}$ as $n\to +\infty$. This means that also $d_2\sim\sqrt{f(n,x_{n-2})}$, and analogously we compute that
$$x_n=\frac{\sigma_1(\overline{X}_{n-2})+d_1}{\sigma_{n-2}(\overline{X}_{n-2})-1}=2^{-1/3}n^{2/3}+o(n^{2/3}).$$
In fact there is an infinite family of quadruples $(n,x,y,z)$ of positive integers such that $(\overline{1}_{n-3}, x, y, z)\in S_{3}(n)$ and $x\sim y\sim z\sim 2^{-1/3}n^{2/3}$, $n\to +\infty$. Namely,
\begin{align*}
n= &\ 4(4k^3+2k^2+2k-2)^3+2,\\
x= &\ 2(4k^3+2k^2+2k-2)\\
&\ +32k^6+32k^5+32k^4-16k^3-8k^2-10k+8,\\
y= &\ 2(4k^3+2k^2+2k-2)^2+1,\\
z= &\ (4k^3+2k^2+2k-2)(8k^3+4k^2+6k-1)+1,
\end{align*}
where $k\in\mathbb{N}$ is sufficiently large. In the above family we have $y=1+2^{-1/3}(n-2)^{2/3}$. However, it is possible to give a family of quadruples $(n,x,y,z)$ such that $(\overline{1}_{n-3}, x, y, z)\in S_{3}(n)$ and $x=1+2^{-1/3}(n-2)^{2/3}$. Such a family is
\begin{align*}
n= &\ 4(4k^3-10k^2+10k-6)^3+2,\\
x= &\ 2(4k^3-10k^2+10k-6)^2+1,\\
y= &\ (4k^3-10k^2+10k-6)(8k^3-20k^2+22k-11)+1,\\
z= &\ 2(4k^3-10k^2+10k-6)\\
&\ +32k^6-150k^5+352k^4-464k^3+392k^2-202k+58,
\end{align*}
where $k\in\mathbb{N}$ is sufficiently large.}
\end{rem}
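The first of the two families can be checked directly; a Python sketch (function names ours; we test the small admissible values $k=2,3$ and use the closed form of $\sigma_2$ to avoid building the full tuple of length $n$):

```python
def family1(k):
    """First parametric family (n, x, y, z) from the remark."""
    t = 4 * k**3 + 2 * k**2 + 2 * k - 2
    n = 4 * t**3 + 2
    x = (2 * t + 32 * k**6 + 32 * k**5 + 32 * k**4
         - 16 * k**3 - 8 * k**2 - 10 * k + 8)
    y = 2 * t**2 + 1
    z = t * (8 * k**3 + 4 * k**2 + 6 * k - 1) + 1
    return n, x, y, z

def in_S3(n, x, y, z):
    """Membership of (1^(n-3), x, y, z) in S_3(n), via the closed forms
    sigma_2 = C(n-3,2) + (n-3)(x+y+z) + xy + xz + yz and sigma_n = xyz."""
    ones = n - 3
    s2 = ones * (ones - 1) // 2 + ones * (x + y + z) + x * y + x * z + y * z
    return 2 <= x <= y <= z and s2 == x * y * z
```

For instance, $k=2$ gives $n=296354$ with $(x,y,z)=(3496,3529,3823)$.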
Gathering all the results above we are able to enumerate all elements of $S(n)$ for $n\leq 16$. More precisely, we have the following.
\begin{thm}\label{smallsolutions}
We have the following equalities of sets:
{\small
\begin{align*}
S(3)&=\{(2, 3, 6), (2, 4, 4), (3, 3, 3)\};\\
S(4)&=\{(1, 2, 4, 14), (2, 2, 2, 6)\};\\
S(5)&=\{(1, 1, 2, 5, 25), (1, 1, 2, 7, 11), (1, 1, 3, 3, 22), (1, 1, 3, 4, 9), (1, 2, 2, 2, 18),\\
&\quad\;\; (1, 2, 2, 4, 4), (2, 2, 2, 2, 3)\};\\
S(6)&=\{(1, 1, 1, 2, 6, 39), (1, 1, 1, 2, 7, 22), (1, 1, 1, 3, 4, 18), (1, 1, 1, 3, 6, 8)\};\\
S(7)&=\{(1, 1, 1, 1, 2, 7, 56), (1, 1, 1, 1, 2, 8, 31), (1, 1, 1, 1, 2, 11, 16), (1, 1, 1, 1, 3, 4, 46),\\
&\quad\;\; (1, 1, 1, 1, 3, 6, 12), (1, 1, 1, 1, 4, 6, 7), (1, 1, 1, 2, 2, 3, 20)\};\\
S(8)&=\{(\overline{1}_{5}, 2, 8, 76), (\overline{1}_{5}, 2, 10, 30), (\overline{1}_{5}, 4, 4, 22), (\overline{1}_{4}, 2, 2, 3, 50), (\overline{1}_{3}, 2, 2, 2, 3, 5)\};\\
S(9)&=\{(\overline{1}_{6}, 2, 9, 99), (\overline{1}_{6}, 2, 15, 21), (\overline{1}_{6}, 3, 5, 78), (\overline{1}_{6}, 3, 6, 29), (\overline{1}_{6}, 3, 8, 15)\};\\
S(10)&=\{(\overline{1}_{7}, 2, 10, 125), (\overline{1}_{7}, 2, 11, 67), (\overline{1}_{7}, 2, 13, 38), (\overline{1}_{7}, 3, 6, 51), (\overline{1}_{7}, 3, 7, 28), (\overline{1}_{7}, 4, 4, 93),\\
&\quad\;\; (\overline{1}_{7}, 4, 5, 26), (\overline{1}_{7}, 6, 7, 7), (\overline{1}_{6}, 2, 3, 3, 21), (\overline{1}_{6}, 3, 3, 3, 8)\};\\
S(11)&=\{(\overline{1}_{8}, 2, 11, 154), (\overline{1}_{8}, 2, 12, 82), (\overline{1}_{8},2,13,58),(\overline{1}_{8},2,14,46),(\overline{1}_{8},2,16,34),(\overline{1}_{8},2,18,28),\\
&\quad \;\;(\overline{1}_{8},2,19,26),(\overline{1}_{8},2,22,22),(\overline{1}_{8},3,6,118),(\overline{1}_{8},3,7,43),(\overline{1}_{8},3,8,28), (\overline{1}_{8},3,10,18),\\
&\quad \;\;(\overline{1}_{8},3,13,13),(\overline{1}_{8},4,5,40),(\overline{1}_{8},4,6,22),(\overline{1}_{8},4,7,16), (\overline{1}_{8},4,8,13),(\overline{1}_{8},4,10,10),\\
&\quad \;\;(\overline{1}_{8},5,5,19),(\overline{1}_{8},6,6,10),(\overline{1}_{8},7,7,7), (\overline{1}_{7},2,2,4,97),(\overline{1}_{7},2,2,5,27),(\overline{1}_{7},2,2,6,17),\\
&\quad\;\; (\overline{1}_{7},2,2,7,13),(\overline{1}_{6},2,2,2,3,11)\};\\
S(12)&=\{(\overline{1}_{9},2,12,186),(\overline{1}_{9},2,16,46),(\overline{1}_{9},2,18,36),(\overline{1}_{9},4,6,30),(\overline{1}_{9},4,8,16),\\
&\quad\;\; (\overline{1}_{9},6,6,12),(\overline{1}_{8},2,3,4,18),(\overline{1}_{8},2,4,4,10),(\overline{1}_{8},2,4,6,6),(\overline{1}_{7},2,2,2,2,101)\};\\
S(13)&=\{(\overline{1}_{10},2,13,221),(\overline{1}_{10},2,23,31),(\overline{1}_{10},3,7,166),(\overline{1}_{10},3,12,21),(\overline{1}_{10},4,5,155),\\
&\quad\;\;(\overline{1}_{10},5,5,34),(\overline{1}_{9},2,3,3,129),(\overline{1}_{9},3,3,3,16),(\overline{1}_{8},2,2,4,4,4)\};\\
S(14)&=\{(\overline{1}_{11},2,14,259),(\overline{1}_{11},2,15,136),(\overline{1}_{11},2,16,95),(\overline{1}_{11},2,19,54),(\overline{1}_{11},3,8,100),\\
&\quad\;\;(\overline{1}_{11},3,10,38),(\overline{1}_{11},4,6,63),(\overline{1}_{11},4,7,34),(\overline{1}_{10},2,2,5,159),(\overline{1}_{9},2,2,3,5,5)\};\\
S(15)&=\{(\overline{1}_{12},2,15,300),(\overline{1}_{12},2,16,157),(\overline{1}_{12},2,25,40),(\overline{1}_{12},2,27,36),(\overline{1}_{12},3,8,222),\\
&\quad \;\;(\overline{1}_{12},3,9,79),(\overline{1}_{12},3,13,27),(\overline{1}_{12},3,14,24),(\overline{1}_{12},4,6,105),(\overline{1}_{12},4,13,14),\\
&\quad\;\;(\overline{1}_{11},2,3,4,45),(\overline{1}_{11},2,3,7,12),(\overline{1}_{10},2,2,2,3,33)\};\\
S(16)&=\{(\overline{1}_{13},2,16,344),(\overline{1}_{13},2,22,62),(\overline{1}_{13},4,6,232),(\overline{1}_{13},4,8,38),(\overline{1}_{13},8,8,10),\\
&\quad\;\;(\overline{1}_{12},2,2,6,107),(\overline{1}_{12},2,2,7,46),(\overline{1}_{12},2,3,6,18),(\overline{1}_{11},2,2,2,3,46)\}.
\end{align*}
}
In particular, we have the following values of $|S(n)|$.
\begin{equation*}
\begin{array}{|c|cccccccccccccc|}
\hline
n & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\
\hline
|S(n)| & 3 & 2 & 7 & 4 & 7 & 5 & 5 &10 & 26 & 10 & 9 & 10 & 13 & 9 \\
\hline
\end{array}
\end{equation*}
\begin{center}{\rm Table 1. The number of elements of $S(n)$ for $n\leq 16$.}\end{center}
Moreover, for each $n\in\mathbb{N}_{\geq 3}$ we have $S(n)\neq \emptyset$.
\end{thm}
\begin{proof}
To find all elements of $S(n)$ we used a simple computer search. More precisely, we solved the equation (\ref{maineq}) for $x_{n}$ and got
$$
x_{n}=\frac{\sigma_{2}(\overline{X}_{n-1})}{\sigma_{n-1}(\overline{X}_{n-1})-\sigma_{1}(\overline{X}_{n-1})}.
$$
Next, for given $n$, we computed, via Lemma \ref{inumber}, the number of 1's in the solution of (\ref{maineq}) and used the bounds
$$
x_{n-1}\leq \frac{1}{2}n(3n-5),\quad x_{n-2}\leq 1+\lfloor 2(n-2)^{2/3}\rfloor
$$
to check all possibilities $x_{1}\leq x_{2}\leq\ldots\leq x_{n-2}\leq x_{n-1}$ for which $x_{n}$ computed above is a positive integer.
To get the last statement it is enough to note that for $n\geq 3$ we have that $\left(\overline{1}_{n-3}, 2, n, \frac{1}{2}n(3n-5)\right)\in S(n)$. All these computations took less than 2 hours on a laptop with 32 GB of RAM and an i7 processor.
\end{proof}
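The search described in the proof is straightforward to reproduce. A minimal sketch (in Python, without the pruning bounds used in the actual computation; all function names are ours) fixes the first $n-1$ entries and solves for the last one:

```python
from itertools import combinations_with_replacement
from math import prod

def e2(xs):
    """sigma_2: the second elementary symmetric function."""
    s = sum(xs)
    return (s * s - sum(x * x for x in xs)) // 2

def search_S(n, bound):
    """Nondecreasing n-tuples with sigma_2 = sigma_n, found by fixing
    x_1 <= ... <= x_{n-1} <= bound and solving for x_n."""
    sols = []
    for head in combinations_with_replacement(range(1, bound + 1), n - 1):
        denom = prod(head) - sum(head)   # sigma_{n-1} - sigma_1 of the head
        if denom <= 0:
            continue
        num = e2(head)                   # sigma_2 of the head
        if num % denom == 0:
            x_n = num // denom
            if x_n >= head[-1]:          # keep only the sorted representative
                sols.append(head + (x_n,))
    return sols
```

For instance, `search_S(3, 10)` recovers exactly the three triples of $S(3)$ listed in the theorem.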
\begin{cor}
We have
$$
\limsup_{n\rightarrow +\infty}\max_{\overline{X}_{n}\in S(n)}\frac{x_{n}}{x_{n-1}}=+\infty.
$$
\end{cor}
\begin{proof}
Because for each $n\in\mathbb{N}_{\geq 3}$ we have $\left(1,\ldots, 1, 2, n, n(3n-5)/2\right)\in S(n)$ we get
$$
\limsup_{n\rightarrow +\infty}\max_{\overline{X}_{n}\in S(n)}\frac{x_{n}}{x_{n-1}}\geq \limsup_{n\rightarrow +\infty}\frac{n(3n-5)}{2n}=+\infty,
$$
and hence the result.
\end{proof}
\section{Analysis of $S_{3}(n)$ and applications}\label{sec3}
From Theorem \ref{sols} we know that $(\overline{1}_{n-3},x,y,z)\in S_{3}(n)$ if and only if
\begin{equation}\label{yzsol}
y=\frac{n+d_{1}-2}{x-1}+1,\quad z=\frac{n+d_{2}-2}{x-1}+1,
\end{equation}
where
\begin{equation*}
d_{1}d_{2}=\frac{1}{2}(n-2)((x+1)n+2x^2-3x-3)=f(n,(\overline{1}_{n-3},x))=:f(n,x).
\end{equation*}
It is clear that $|S_{3}(n)|$ is finite because from Corollary \ref{xn2ineq} we know that $x\leq 1+\lfloor 2(n-2)^{2/3}\rfloor$. Thus, we can even present a crude upper bound
$$
|S_{3}(n)|\leq \sum_{2\leq x\leq 1+\lfloor 2(n-2)^{2/3}\rfloor}\tau(f(n,x)),
$$
where $\tau(m)$ is the number of positive integer divisors of $m$.
\begin{rem}
{\rm Using the above characterization we computed the set $S_{3}(n)$ for each $n\leq 300$. More precisely, for given $n$ and each $x\leq 1+\lfloor 2(n-2)^{2/3}\rfloor$ we computed the set
$$
D(n,x):=\{d:\;d|f(n,x)\}
$$
of positive integer divisors of $f(n,x)$. Next, for any given $d\in D(n,x)$ such that $d\leq \sqrt{f(n,x)}$ we checked whether the numbers $y, z$ given by (\ref{yzsol}), where $d_1=d$ and $d_2=f(n,x)/d$, are integers.
In the considered range the value of $|S_{3}(n)|$ attains maximum equal to 213 for $n=299$.}
\end{rem}
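The divisor-based computation sketched in the remark can be written down directly. The following Python fragment is our own illustrative version (the slightly padded bound on $x$ only guards against floating-point rounding); it lists the triples $(x,y,z)$ with $(\overline{1}_{n-3},x,y,z)\in S_{3}(n)$:

```python
from math import isqrt

def f(n, x):
    """f(n, x) from the factorization d1*d2 = f(n, x)."""
    return (n - 2) * ((x + 1) * n + 2 * x * x - 3 * x - 3) // 2

def S3(n):
    """Triples (x, y, z), x <= y <= z, with (1,...,1, x, y, z) in S_3(n)."""
    sols = set()
    xmax = 2 + int((8 * (n - 2) ** 2) ** (1 / 3))  # ~ 1 + 2(n-2)^(2/3), padded
    for x in range(2, xmax + 1):
        F = f(n, x)
        for d1 in range(1, isqrt(F) + 1):
            if F % d1:
                continue
            d2 = F // d1
            if (n + d1 - 2) % (x - 1) == 0 and (n + d2 - 2) % (x - 1) == 0:
                y = (n + d1 - 2) // (x - 1) + 1
                z = (n + d2 - 2) // (x - 1) + 1
                if x <= y <= z:          # sorted representative only
                    sols.add((x, y, z))
    return sorted(sols)
```

For example, `S3(10)` returns the eight triples corresponding to the $S(10)$ entries with exactly seven ones.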
From Theorem \ref{smallsolutions} we know that $S(n)$ is nonempty. We prove that, except for $n=4$, we have $|S_{3}(n)|\geq 3$. More precisely, the following is true.
\begin{thm}
\begin{enumerate}
\item For $n=3$ and each $n\in\mathbb{N}_{\geq 5}$ we have $|S_3(n)|\geq 3$.
\item We have
$$
|S_{3}(n)|\geq \frac{1}{2}\tau\left(\frac{1}{2}(n-2)(3n-1)\right).
$$
In particular $\limsup_{n\rightarrow +\infty}|S_{3}(n)|=\limsup_{n\rightarrow +\infty}|S(n)|=+\infty$.
\end{enumerate}
\end{thm}
\begin{proof}
To get the first statement we note that $|S_{3}(3)|=3, |S_{3}(5)|=4, |S_{3}(6)|=4, |S_{3}(7)|=6, |S_{3}(8)|=3$, so we can assume that $n\geq 9$. We observe that for each $n\geq 5$, the $n$-tuple $(\overline{1}_{n-3}, 2, n, \frac{1}{2}n(3n-5))$ belongs to $S_{3}(n)$. If $n\equiv 1\pmod*{2}$ we have two additional $n$-tuples
$$
\left(\overline{1}_{n-3}, 2, 2n-3, \frac{1}{2}(5n-3)\right),\quad \left(\overline{1}_{n-3}, 3, n-1, \frac{3}{2}(n+1)\right).
$$
Similarly, if $n\equiv 0\pmod*{2}$ we have the following additional solutions
$$
\left(\overline{1}_{n-3}, 2, \frac{1}{2}(3n-4), 2(2n-1)\right),\quad \left(\overline{1}_{n-3}, 4, \frac{1}{2}n, 2(n+3)\right).
$$
The second statement is a consequence of the following reasoning. Take $x=2$ in (\ref{yzsol}). Then the corresponding values of $y, z$ are
$$
y=n+d_{1}-1,\quad z=n+\frac{f(n,2)}{d_{1}}-1
$$
and are integers. Thus, the number of different pairs satisfying $y\leq z$ is equal to $\left\lceil\frac{1}{2}\tau(f(n,2))\right\rceil$ and we get the required inequality. Moreover, if we take $n\equiv 2\pmod*{2q_{1}\cdot\ldots\cdot q_{k}}$, where $k\in\mathbb{N}_{+}$ and $q_{1}<\ldots<q_{k}$ are primes, then $\frac{1}{2}\tau(f(n,2))\geq k$ and $\limsup_{n\rightarrow +\infty}|S_{3}(n)|=+\infty$. Thus $\limsup_{n\rightarrow +\infty}|S(n)|=+\infty$.
\end{proof}
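The three families used in the proof are easy to verify numerically; the sketch below (ours) checks the defining identity $\sigma_{2}=\sigma_{n}$ directly:

```python
from math import prod

def is_solution(xs):
    """Check sigma_2(xs) == sigma_n(xs), the latter being the product."""
    s = sum(xs)
    return (s * s - sum(x * x for x in xs)) // 2 == prod(xs)

def family(n):
    """The n-tuples exhibited in the proof for n >= 5 (parity-dependent)."""
    ones = (1,) * (n - 3)
    tups = [ones + (2, n, n * (3 * n - 5) // 2)]
    if n % 2 == 1:
        tups += [ones + (2, 2 * n - 3, (5 * n - 3) // 2),
                 ones + (3, n - 1, 3 * (n + 1) // 2)]
    else:
        tups += [ones + (2, (3 * n - 4) // 2, 2 * (2 * n - 1)),
                 ones + (4, n // 2, 2 * (n + 3))]
    return tups
```

Running `all(is_solution(t) for n in range(5, 60) for t in family(n))` confirms the identity for the first several dozen values of $n$.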
\begin{rem}
{\rm Let us note that if we take $x=3$ in (\ref{yzsol}) then the necessary and sufficient condition for $n$ in order to get $(\overline{1}_{n-3}, 3, y, z)\in S(n)$ is $n\not\equiv 0\pmod*{4}$.
}
\end{rem}
We are in position to compute the value of $\liminf_{\overline{X}_{n}\in S(n)}x_{n}/x_{n-1}$. More precisely, the following is true.
\begin{thm}\label{xneqxn}
There are infinitely many pairs $(n,X)$ of positive integers such that $(\overline{1}_{n-3}, 2, X, X)\in S_{3}(n)\subset S(n)$. In particular, we have
$$
\liminf_{n\rightarrow +\infty}\min_{\overline{X}_{n}\in S(n)}\frac{x_{n}}{x_{n-1}}=1.
$$
\end{thm}
\begin{proof}
To get the result we show that there are infinitely many values of $n$ for which there is an integer $X$ such that $(\overline{1}_{n-3}, 2, X, X)\in S_{3}(n)\subset S(n)$. For this it is enough to show that the equation $d^2=f(n,2)$ has infinitely many solutions in positive integers; then $X=n+d-1$ is the solution we are looking for. The equation $d^2=f(n,2)$ is equivalent to the Pell-type equation $Y^2-24d^2=25$, where $Y=6n-7$. Using standard methods, one can check that for each $m\in\mathbb{N}$ the pair $(d_{m},Y_{m})$, where $d_{m}=f_{m}(1, 10)$ and $Y_{m}=f_{m}(5, 49)$ for $m=0, 1, \ldots$, with $f_{m}(a,b)=10f_{m-1}(a,b)-f_{m-2}(a,b)$ and $f_{0}(a,b)=a, f_{1}(a,b)=b$, satisfies $Y_{m}^2-24d_{m}^2=1$, and hence the pair $(5d_{m}, 5Y_{m})$ solves the equation $Y^2-24d^2=25$. To get integer values of $n$ we need to take $m\equiv 1\pmod*{2}$. With $n=n(m)$ chosen in this way, the equation $\sigma_{2}(\overline{X}_{n})=\sigma_{n}(\overline{X}_{n})$ has the solution
$$
\overline{X}_{n(m)}=(\overline{1}_{n(m)-3}, 2, n(m)+d_{2m+1}-1, n(m)+d_{2m+1}-1),\quad m=0, 1, \ldots,
$$
satisfying $x_{n-1}=x_{n}$ and hence the result.
\end{proof}
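The recurrence in the proof immediately yields the entries of Table 2; a short sketch (ours, with the bookkeeping taken from the proof) is:

```python
def table_rows(count):
    """n(m) and x_n from d_m = f_m(1,10), Y_m = f_m(5,49) with
    f_m = 10 f_{m-1} - f_{m-2}, keeping the odd indices."""
    d_prev, d_cur = 1, 10
    y_prev, y_cur = 5, 49
    rows, idx = [], 1
    while len(rows) < count:
        if idx % 2 == 1:              # odd index gives an integral n
            n = (5 * y_cur + 7) // 6  # from 6n - 7 = 5 Y_m
            rows.append((n, n + 5 * d_cur - 1))  # x_n = n + d - 1, d = 5 d_m
        d_prev, d_cur = d_cur, 10 * d_cur - d_prev
        y_prev, y_cur = y_cur, 10 * y_cur - y_prev
        idx += 1
    return rows
```

`table_rows(6)` reproduces the six columns of Table 2.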
As an example, we computed the values of $n=n(m)$ and the corresponding solution $x_n$ for $m=0, 1, \ldots, 5$.
\begin{equation*}
\begin{array}{|c|llllll|}
\hline
m & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
n & 42 & 4002 & 392042 & 38416002 & 3764376042 & 368870436002 \\
x_{n} & 91 & 8901 & 872191 & 85465801 & 8374776291 & 820642610701 \\
\hline
\end{array}
\end{equation*}
\begin{center}
Table 2. Some values of $n$ such that there is an $x_{n}$ satisfying $(\overline{1}_{n-3}, 2, x_{n}, x_{n})\in S(n)$.
\end{center}
In Theorem \ref{xneqxn} we proved that the limit inferior of $\operatorname{min}\{x_{n}/x_{n-1}:\;\overline{X}_{n}\in S(n)\}$ is equal to 1. In this context it is natural to investigate the set
$$
D:=\left\{\frac{x_{n}}{x_{n-1}}:\;\overline{X}_{n}\in S(n), n\in\mathbb{N}_{\geq 3}\right\}\subset \mathbb{Q}
$$
and the set
$$
D_{i}:=\left\{\frac{x_{n}}{x_{n-1}}:\;\overline{X}_{n}\in S_{i}(n)\right\}\subset \mathbb{Q}
$$
where $i\in\mathbb{N}_{\geq 3}$, and ask whether $D$ and $D_{i}$ are dense in the set $[1,+\infty)$.
We prove the following.
\begin{thm}
The sets $D_3$ and $D$ are dense in $[1,+\infty)$.
\end{thm}
\begin{proof}
It suffices to show that $D_3$ is dense in $[1,+\infty)$ as $D\supset D_3$.
One can check that the set of limit points of $D_{3}$ contains the set
$$
\left\{\frac{\sqrt{24ab}+6b}{\sqrt{24ab}+6a}:\;a,b\in\mathbb{N}_{+},\;24ab\;\mbox{is not a square}\right\}.
$$
Indeed, if $(\overline{1}_{n-3}, x, x_{n-1}, x_{n})\in S_{3}(n)$ and we take $x=2$ in (\ref{yzsol}) then $x_{n-1}=n+d_{1}-1, x_{n}=n+d_{2}-1$ with $d_{1}d_{2}=f(n,2)$. If we put $d_{1}=at$ and $d_{2}=bt$ then we deal with Pell type equation $abt^2=f(n,2)$ or equivalently
\begin{align}\label{pell}
u^2-24abt^2=25,
\end{align}
where $u=6n-7$. From the theory of Pell equations we know that this equation has infinitely many solutions in positive integers provided that $24ab$ is not a square of an integer. Moreover, since $(u,t)=(5,0)$ is a solution of \eqref{pell}, there are infinitely many solutions of \eqref{pell} with $u\equiv 5\pmod{6}$, which ensures the integrality of $n$. Hence, $(d_{1}, d_{2}, n)$ satisfies
$$
d_{1}\sim a\frac{6n-7}{\sqrt{24ab}}, \quad d_{2}\sim b\frac{6n-7}{\sqrt{24ab}}, \quad n\to +\infty.
$$
In consequence
\begin{align*}
&\ \lim_{n\rightarrow +\infty}\frac{x_{n}}{x_{n-1}}=\lim_{n\rightarrow +\infty}\frac{n+d_{2}-1}{n+d_{1}-1}=\lim_{n\rightarrow +\infty}\frac{n+b\frac{6n-7}{\sqrt{24ab}}-1}{n+a\frac{6n-7}{\sqrt{24ab}}-1}=\frac{\sqrt{24ab}+6b}{\sqrt{24ab}+6a}\\
=&\ \frac{\sqrt{6ab}+3b}{\sqrt{6ab}+3a}=\frac{\sqrt{6\frac{b}{a}}+3\frac{b}{a}}{\sqrt{6\frac{b}{a}}+3}.
\end{align*}
Since the function $[1,+\infty)\ni x\mapsto\frac{\sqrt{6x}+3x}{\sqrt{6x}+3}\in [1,+\infty)$ is a continuous surjection, we infer that the set
$$
\left\{\frac{\sqrt{24ab}+6b}{\sqrt{24ab}+6a}:\;a,b\in\mathbb{N}_{+},\;24ab\;\mbox{is not a square}\right\}
$$
is dense in $[1,+\infty)$. Hence, the closure of $D_3$ is the whole interval $[1,+\infty)$.
\end{proof}
\section{Questions and conjectures}\label{sec4}
In this section we collect some questions and conjectures which appeared during various stages of our investigations.
The most important (and probably the most difficult) is the following
\begin{ques}
What is the order of growth of the number $|S(n)|$? Is it true that there is $\varepsilon >0$ such that $|S(n)|>n^{\varepsilon}$? In particular, is the equality
$$
\liminf_{n\rightarrow +\infty}|S(n)|=+\infty
$$
true?
\end{ques}
This is a difficult question. We believe that the inequality $|S(n)|>\log n$ is true. In this context one can also ask the following questions.
\begin{ques}
What is the average order of the function $|S(n)|$?
\end{ques}
\begin{ques}
Is the function $|S(n)|$ eventually increasing?
\end{ques}
\bigskip
For given $n\in\mathbb{N}_{\geq 3}$ and $\overline{X}_{n}\in S(n)$ let us put $X_{n}=\{x_{1},\ldots, x_{n}\}$ and
$$
M(n):=\operatorname{min}\{|X_{n}|:\;\overline{X}_{n}\in S(n)\}.
$$
From the emptiness of $S_{i}(n)$ for $i=0, 1, 2$, and from our investigations concerning the structure of the set $S_{3}(n)$ (more precisely, from the proof of Theorem \ref{xneqxn}), we know that
$$
\liminf_{n\rightarrow+\infty}M(n)\leq 3.
$$
Let $n\geq 6$. Then $M(n)=1$ is impossible by Corollary \ref{ones}. Moreover, $M(n)=2$ if and only if there is $k\in\{1,\ldots, n-1\}$ and $x\in\mathbb{N}_{\geq 2}$ such that
\begin{equation}\label{M2}
\sigma_{2}(\overline{1}_{n-k},\overline{x}_{k})=\sigma_{n}(\overline{1}_{n-k},\overline{x}_{k}),
\end{equation}
where $\overline{x}_{k}=(x,\ldots,x)$ with $k$ occurrences of $x$.
Equivalently, we deal with the equation
\begin{equation*}
y^2=8 x^k+4 k x^2-4 k x+1=:P_{k}(x),
\end{equation*}
where $y=2n+2k(x-1)-1.$ Let $C_{k}$ denote the curve defined by the equation above. One can check, using \cite[Theorem 3.1]{Ga}, that the discriminant of $P_{k}(x)$ is non-zero, and thus the polynomial $P_{k}$ has no multiple roots (moreover, since the Newton polygon of $P_{k}$ with respect to the $2$-adic valuation has only one slope, equal to $3/k$, we see that for each $k\in\mathbb{N}_{\geq 3}$ not divisible by $3$ the polynomial $P_{k}$ is irreducible over the field $\mathbb{Q}_2$ of $2$-adic numbers and thus irreducible over $\mathbb{Q}$). In consequence, the genus of the curve $C_{k}$ is equal to $\lfloor (k-1)/2\rfloor$. For $k=3$ the curve $C_{k}$ is an elliptic curve and we have $C_{k}(\mathbb{Q})\simeq \mathbb{Z}\times\mathbb{Z}_{3}$, where the infinite part is generated by the point $P=(8,24)$, and the torsion part is generated by $T=(0,1)$. Using standard methods one can check that the only integer points on $C_{3}$ which lead to solutions of the equation (\ref{M2}) are: $(x, y)=(3,17)$ which leads to $(k, n, x)=(3, 3, 3)$ (and the value $M(3)=1$) and $(x,y)=(7,57)$ which leads to $(k, n, x)=(3, 11, 7)$.
Similar analysis can be performed in the case $k=4$. For example, one can use the procedure {\tt IntegralQuarticPoints} of the Magma computational package (\cite{Mag}) and find that all the integral points on $C_{4}$ are:
$$(1,\pm 3), (0, \pm 1), (7, \pm 141), (-2, \pm 15), (-3, \pm 29), (172, \pm 83679).$$
The point $(7, 141)$ gives the solution $(k, n, x)=(4, 47, 7)$ and the point $(172, 83679)$ gives the solution $(k, n, x)=(4, 41156, 172)$ of the equation (\ref{M2}).
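These point-to-solution translations are easy to check mechanically. The following sketch (our own helper; it assumes $n\geq k$, which holds in all cases above) recovers $n$ from a candidate $x$ on $C_{k}$ and verifies the equation (\ref{M2}) directly:

```python
from math import isqrt

def check_M2(k, x):
    """If (x, y) lies on C_k with y = 2n + 2k(x-1) - 1, return n after
    verifying sigma_2 = sigma_n for (1,...,1, x,...,x); else None."""
    Pk = 8 * x ** k + 4 * k * x * x - 4 * k * x + 1
    y = isqrt(Pk)
    if y * y != Pk:
        return None                      # x does not give an integral point
    n = (y + 1 - 2 * k * (x - 1)) // 2   # P_k(x) is odd, so y is odd
    xs = (1,) * (n - k) + (x,) * k
    s = sum(xs)
    assert (s * s - sum(v * v for v in xs)) // 2 == x ** k
    return n
```

For instance, `check_M2(4, 172)` returns $41156$ and `check_M2(3, 7)` returns $11$, matching the solutions quoted above.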
If $k\geq 5$ then the curve $C_{k}$ is of genus $\geq 2$, and from the Faltings theorem we know that the set $C_{k}(\mathbb{Q})$ is finite. From numerical calculations we expect that besides the trivial points $(0,\pm 1)$ there are no additional integral points on $C_{k}$. We thus formulate the following.
\begin{conj}
The only values of $n\in\mathbb{N}_{\geq 3}$ such that $M(n)=2$ are $n=3, 4, 5, 11, 41156$. In particular, $\liminf_{n\rightarrow+\infty}M(n)=3$.
\end{conj}
Since $\left(\overline{1}_{n-3},2,n,\frac{1}{2}n(3n-5)\right)\in S(n)$ for each $n\in\mathbb{N}_{\geq 3}$, we know that
$$\limsup_{n\to +\infty} M(n)\leq 4.$$
We suppose that the condition $M(n)=3$ is rarely satisfied.
\begin{conj}
We have $\limsup_{n\to +\infty} M(n)=4.$
\end{conj}
In this context we also prove the following.
\begin{thm}
For each $m\in\mathbb{N}_{\geq 2}$ and each sequence $\overline{Y}_{m}=(y_{1}, y_{2},\ldots, y_{m})\in\mathbb{N}_{\geq 2}^{m}\backslash\{(2,2),(2,3)\}$ there are $k, Y\in\mathbb{N}_{+}$ such that for $n=k+m+1$ we have $(\overline{1}_{k},\overline{Y}_{m},Y)\in S(n)$. In particular,
$$
\limsup_{n\rightarrow+\infty}\max_{\overline{X}_n\in S(n)}|X_n|=+\infty.
$$
\end{thm}
\begin{proof}
To find suitable values of $k, Y$ we consider the equation
$$
\sigma_{2}(\overline{1}_{k},\overline{Y}_{m},Y)=\sigma_{n}(\overline{1}_{k},\overline{Y}_{m},Y)=\sigma_{m}(\overline{Y}_{m})Y,
$$
which is equivalent with the equation
$$
\sigma_{2}(\overline{1}_{k})+\sigma_{2}(\overline{Y}_{m})+\sigma_{1}(\overline{1}_{k})\sigma_{1}(\overline{Y}_{m})+(\sigma_{1}(\overline{1}_{k})+\sigma_{1}(\overline{Y}_{m}))Y=\sigma_{m}(\overline{Y}_{m})Y,
$$
i.e.
$$
\binom{k}{2}+\sigma_{2}(\overline{Y}_{m})+k\sigma_{1}(\overline{Y}_{m})=(\sigma_{m}(\overline{Y}_{m})-k-\sigma_{1}(\overline{Y}_{m}))Y.
$$
Thus, by taking $k=\sigma_{m}(\overline{Y}_{m})-\sigma_{1}(\overline{Y}_{m})-1$ we get the corresponding value of $Y=\binom{k}{2}+\sigma_{2}(\overline{Y}_{m})+k\sigma_{1}(\overline{Y}_{m})$. Under our assumption on $\overline{Y}_{m}$ it is clear that $k>0$ (for an easy proof of this fact see \cite{Eck}) and we get the required solution.
To get the second statement it is enough to take $\overline{Y}_{m}=(2,\ldots, m+1)$. Then $k=(m+1)!-\binom{m+2}{2}$ and $n=(m+1)!-\binom{m+2}{2}+m+1$. The corresponding element $\overline{X}_{n}\in S(n)$ satisfies $|X_{n}|=m+2$ and hence the result.
\end{proof}
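The construction in the proof amounts to a two-line computation; the sketch below (ours) returns the pair $(k, Y)$ for a given $\overline{Y}_{m}$:

```python
from math import comb, prod

def extend(Ym):
    """Return (k, Y) so that (1,...,1 [k ones], y_1,...,y_m, Y)
    satisfies sigma_2 = sigma_n, following the proof."""
    s1 = sum(Ym)                                  # sigma_1(Y_m)
    sm = prod(Ym)                                 # sigma_m(Y_m)
    s2 = (s1 * s1 - sum(y * y for y in Ym)) // 2  # sigma_2(Y_m)
    k = sm - s1 - 1
    return k, comb(k, 2) + s2 + k * s1
```

For example, `extend((3, 3))` gives $(2, 22)$, i.e., the tuple $(1,1,3,3,22)\in S(5)$ from Theorem \ref{smallsolutions}.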
\begin{prob}
For which $a\in\mathbb{N}_{\geq 4}$ the equation $\max_{\overline{X}_n\in S(n)}|X_n|=a$ has infinitely many solutions?
\end{prob}
Our expectation is that for each $a\in\mathbb{N}_{\geq 4}$ the equation $\max_{\overline{X}_n\in S(n)}|X_n|=a$ has infinitely many solutions.
\bigskip
We firmly believe that many results proved in this paper can be appropriately generalized in the context of the Diophantine equation
$$
\sigma_{i}(\overline{X}_{n})=\sigma_{n}(\overline{X}_{n}),
$$
where $i\in\{3,\ldots, n\}$ is fixed. However, even the case $i=3$ will require new ideas. For example, based on numerical calculations we formulate the following.
\begin{conj}
Let $n\in\mathbb{N}_{\geq 4}$ and suppose that $m\in\mathbb{N}$ satisfies
$$
m=\sigma_{3}(\overline{X}_{n})=\sigma_{n}(\overline{X}_{n}).
$$
Then
\begin{align*}
m\leq \frac{1}{12} &\ \left(3021 n^6-77575 n^5+920361 n^4-6235705 n^3\right.\\
&\ \left. \quad+24764202 n^2-53664304n+48986640\right)
\end{align*}
and
$$
x_{n}\leq \frac{1}{12} \left(27 n^4-190 n^3+471 n^2-500 n+216\right).
$$
\end{conj}
\bigskip
\noindent {\bf Acknowledgements.} We are grateful to {\L}ukasz Pa\'{n}kowski and B{\l}a\.{z}ej \.{Z}mija for interesting discussions concerning various approaches to Theorem \ref{mainbound}.
\section{Introduction}
Stellar clusters are among the most universally-useful entities studied by modern day astronomers. Having a set of singly-aged, same-distance stars in an identifiable cluster provides stellar and galactic astronomers with powerful diagnostic tools to interpret our universe. Of particular interest is the ability to use the ages and masses of constituent stellar clusters to track the star formation history and chemical evolution of a galaxy, the mass function of stellar clusters and their lifetime within differing galaxies (e.g.\ \citeauthor*{lamers2005} \citeyear{lamers2005}; \citeauthor*{piatti2009} \citeyear{piatti2009}; \citeauthor*{larsen2009} \citeyear{larsen2009}; \citeauthor*{mora2009} \citeyear{mora2009}; \citeauthor*{chandar2010a} \citeyear{chandar2010a}). However, such measures require deriving the ages and masses of stellar clusters when the constituent stars within those clusters are not individually resolved. With distant stellar clusters, one has only the integrated properties to work from, typically in either broad band photometry or integrated spectroscopy, to estimate ages (e.g.\ \citeauthor*{searle} \citeyear{searle}; \citeauthor*{vdb} \citeyear{vdb}). In such cases, we rely on sophisticated models of these star systems to derive the cluster properties (age, mass, chemistry) based on their observed, bulk properties. For this reason there exists a long history of effort interpreting integrated cluster properties through the use of these sophisticated models (e.g.\ \citeauthor*{girardi} \citeyear{girardi};
\citeauthor*{bruzual2003} \citeyear{bruzual2003}; \citeauthor*{cervino2009} \citeyear{cervino2009}; \citeauthor*{buzzoni} \citeyear{buzzoni}; \citeauthor*{chiosi} \citeyear{chiosi}; \citeauthor*{bruzual2001} \citeyear{bruzual2001}; \citeauthor*{bruzual2010} \citeyear{bruzual2010}; \citeauthor*{cervino2004} \citeyear{cervino2004}; \citeauthor*{cervino2006} \citeyear{cervino2006}; \citeauthor*{fagiolini2007} \citeyear{fagiolini2007}; \citeauthor*{lancon2000} \citeyear{lancon2000}; \citeauthor*{lancon2009} \citeyear{lancon2009}; \citeauthor*{fouesneau} \citeyear{fouesneau}; \citeauthor*{fouesneau2} \citeyear{fouesneau2}; \citeauthor*{brocato2000a} \citeyear{brocato2000a}; \citeauthor*{brocato2000b} \citeyear{brocato2000b};
\citeauthor*{cantiello} \citeyear{cantiello}; \citeauthor*{raimondo2005} \citeyear{raimondo2005}; \citeauthor*{raimondo2009} \citeyear{raimondo2009};
\citeauthor*{santos} \citeyear{santos}; \citeauthor*{gonzalez} \citeyear{gonzalez}; \citeauthor*{gonzalez2005} \citeyear{gonzalez2005}; \citeauthor*{gonzalez2010} \citeyear{gonzalez2010}; \citeauthor*{jesus} \citeyear{jesus}; \citeauthor*{pessev} \citeyear{pessev}; \citeauthor*{iaus266} \citeyear{iaus266}).
Quite recently, we presented our own independent effort to provide models for interpreting integrated stellar cluster photometry. Our model, MASSCLEAN (\textbf{MASS}ive \textbf{CL}uster \textbf{E}volution and \textbf{AN}alysis\footnote{\url{http://www.physics.uc.edu/\textasciitilde popescu/massclean/} \\ The \texttt{MASSCLEAN} package is freely available under the GNU General Public License.} -- \citeauthor*{masscleanpaper} \citeyear{masscleanpaper}), is unique in its design compared to other stellar cluster models and provides additional insight for interpreting presently available data on stellar clusters. In this paper we first present in \S 2 our recently expanded data-set of 70 million Monte Carlo models, MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} (\citeauthor*{paper2} \citeyear{paper2}). This database of models shows the stochastic and non-Gaussian behavior of integrated stellar cluster colors and magnitudes as a function of cluster mass and age. This newly completed, extended database of models was selected to match the mass, age and metallicity range expected in the stellar clusters of the Large Magellanic Cloud (LMC). In \S 3, we demonstrate the utility of this extensive database. Using MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}, a statistical inference code we have designed within MASSCLEAN, we work backwards from the integrated magnitudes and colors of real LMC clusters to derive their ages and masses. We also compare our new age results with previous age determinations based on the same photometric data, to provide initial tests of the cluster results obtained using MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}. A final discussion of our results and our conclusions is found in \S 4.
\section{The MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} Database -- Now Over 70 Million Monte Carlo Simulations}
We have computed the integrated colors and magnitudes as a function of mass and age for a simple stellar cluster. Our results -- the newest version of the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} database (\citeauthor*{paper2} \citeyear{paper2}) -- are based on over $70$ million Monte Carlo simulations
using Padova 2008 (\citeauthor*{padova2008} \citeyear{padova2008}) and Geneva (\citeauthor*{geneva1} \citeyear{geneva1}) isochrones, and for two metallicities, $Z=0.019$ (solar) and $Z=0.008$ (LMC). The simulations were done using the Kroupa IMF (\citeauthor*{Kroupa2002} \citeyear{Kroupa2002}) with $0.1$ $M_{\Sun}$ and $120$ $M_{\Sun}$ mass limits. The age range of $[6.6,9.5]$ in $\log(age/\mathrm{yr})$ was chosen (similar to \citeauthor*{masscleanpaper} \citeyear{masscleanpaper}; \citeauthor*{paper2} \citeyear{paper2}) to accommodate both the Padova and Geneva models. The mass range is $200$--$100,000$ $M_{\Sun}$. An average of $5000$ clusters was simulated for each mass and age.
In this work we will look closely at a subset of the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} database. We present integrated magnitudes and colors ($M_{V}$, $(U-B)_{0}$, $(B-V)_{0}$) representing 50 million Monte Carlo simulations using Padova 2008 stellar evolutionary models alone and for the single metallicity, $Z=0.008$ ([Fe/H]=-0.6), selected to match the bulk properties of the LMC over most of the past few billion years (\citeauthor*{piatti2009} \citeyear{piatti2009}).
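The core of such a Monte Carlo simulation is the stochastic, star-by-star sampling of the IMF up to the target cluster mass. The toy sketch below is ours, not the MASSCLEAN code; the $L\propto m^{3.5}$ relation and the $M_{V}$ zero point merely stand in for a real isochrone lookup. It illustrates why low mass clusters scatter so strongly:

```python
import math
import random

def sample_kroupa(m_lo=0.1, m_hi=120.0):
    """Draw a stellar mass from a two-segment Kroupa-like IMF
    (dN/dm ~ m^-1.3 below 0.5 Msun, ~ m^-2.3 above) by rejection
    sampling against a log-uniform proposal."""
    while True:
        m = m_lo * (m_hi / m_lo) ** random.random()  # proposal density ~ 1/m
        if m < 0.5:
            p = (m / m_lo) ** -0.3
        else:
            p = (0.5 / m_lo) ** -0.3 * (m / 0.5) ** -1.3
        if random.random() < p:
            return m

def integrated_MV(cluster_mass):
    """Populate a cluster star by star up to the target mass and sum
    the (toy) luminosities into one integrated magnitude."""
    total, lum = 0.0, 0.0
    while total < cluster_mass:
        m = sample_kroupa()
        total += m
        lum += m ** 3.5                   # toy main-sequence L ~ m^3.5 (Lsun)
    return 4.83 - 2.5 * math.log10(lum)   # 4.83 ~ solar M_V
```

Repeated calls of `integrated_MV(200)` scatter over several magnitudes, because a single massive star can dominate the light, whereas `integrated_MV(50000)` is comparatively stable; this is the mass dependence of the dispersion seen in the figures of this section.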
\subsection{The Distribution of Integrated Colors and Magnitudes as a Function of Mass and Age}
In \citeauthor*{paper2} \citeyear{paper2}, we analysed the influence of the stochastic fluctuations in the mass function of stellar clusters on their integrated colors as a function of cluster mass. Using 13 mass intervals ranging from 500--100,000 $M_{\Sun}$, we showed the mean value of the distribution of integrated colors varies with mass. We further showed that the dispersion of integrated colors, measured by $1 \sigma$ (standard deviation), increases as the value of the cluster mass decreases. In this new work we will present not just the sigma range and mean, but the full distribution of integrated colors and magnitudes. We are also using a much larger number of Monte Carlo simulations and cluster mass values in the 200--100,000 $M_{\Sun}$ interval. This allows us to look closely at where the distribution of real cluster properties lie in the critical color-age and color-color diagrams.
The distribution of integrated $M_{V}$ magnitudes as a function of mass and age is presented in the upper panel of Figure \ref{fig:one2}. The corresponding mass values are color-coded and presented in the bottom panel. Sixty-five (65) different mass intervals have been explored in the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} database as part of this demonstration.
All 65 mass intervals are plotted on top of each other, making it difficult to see the individual distributions. However, to first order, higher mass clusters have higher absolute magnitudes, $M_{V}$.
In Figures \ref{fig:two2} and \ref{fig:three2} we present the distribution of $(B-V)_{0}$ and $(U-B)_{0}$ integrated colors as a function of mass and age. The mass values are represented by the same colors as shown in the bottom panel of Figure \ref{fig:one2}. The variation of integrated colors computed in the infinite mass limit ($10^{6}$ $M_{\Sun}$ in our simulations) and consistent with standard simple stellar population (SSP) models based on Padova 2008 (\citeauthor*{padova2008} \citeyear{padova2008}), is represented by the solid white line in both figures. As was shown in \citeauthor*{paper2} \citeyear{paper2}, the dispersion in integrated colors grows larger for lower mass clusters, both in the blue and red side. As the cluster mass increases, the dispersion decreases and the integrated colors approach the infinite mass limit shown in white. While it is valuable to see the range of masses over-layed in a single plot, it is difficult to see the exact shape of the color distributions underlying each mass interval.
\begin{figure*}[htp]
\includegraphics[angle=270,width=16.0cm]{f1a.eps}
\hspace{3.5 cm}
\includegraphics[angle=270,width=10.0cm, bb=165 82 445 755]{f1b.eps}
\caption{\small The results from 50 million Monte Carlo simulations created with MASSCLEAN. Models were generated with 65 different masses and evolved following the Padova 2008 evolutionary models. Higher mass clusters are plotted over preceding lower mass clusters. This figure gives the (partially correct) impression that $M_{V}$ monotonically increases with cluster mass for all ages. \normalsize}\label{fig:one2}
\end{figure*}
\begin{figure*}[htp]
\centering
\includegraphics[angle=270,width=16.0cm]{f2.eps}
\caption{\small The results from the same 50 million Monte Carlo simulations as shown in Figure \ref{fig:one2}, using the same color coding to represent cluster mass. The range of $(B-V)_{0}$ colors as a function of mass and age is given. Again, higher mass clusters are plotted over preceding lower mass clusters, so the figure covers the real distribution of expected colors as a function of mass and age. A white line, which lies at the color center, represents the color dependence with age computed in the infinite mass limit ($10^{6} M_{\Sun}$ in our simulations). This figure gives the (correct) impression that the range of cluster colors observed as a function of age is greatly increased for lower mass clusters.\normalsize}\label{fig:two2}
\end{figure*}
\begin{figure*}[htp]
\centering
\includegraphics[angle=270,width=16.0cm]{f3.eps}
\caption{\small The same as Figure \ref{fig:two2}, only this figure shows $(U-B)_{0}$ colors as a function of mass and age. \normalsize}\label{fig:three2}
\end{figure*}
\subsection{The Shape of the Distribution of Integrated Colors and Magnitudes}
We have selected a few single values of mass for independent plots in Figures \ref{fig:four2}--\ref{fig:six2}.
The distribution of integrated $M_{V}$ magnitudes is presented in Figure \ref{fig:four2}, for cluster masses of 200, 1,000, 5,000, 10,000, 25,000, and 50,000 $M_{\Sun}$, respectively. The colors are the same as the ones used in Figure \ref{fig:one2}. Also shown, as a black line, is the mean value of the distribution of integrated $M_{V}$ magnitudes for each mass (\citeauthor*{paper2} \citeyear{paper2}). Several critical effects are immediately apparent. The range of observed integrated magnitudes becomes very large for the lower mass clusters. This was already presented in \citeauthor*{paper2} \citeyear{paper2}; however, the shape of that dispersion was not presented as we have done here. Here we see in Fig.\ \ref{fig:four2}, particularly with the 200 and 1,000 solar mass clusters, that the distribution becomes {\sl bimodal} for the lowest mass clusters, especially at young ages (log t $\le$ 8.0). A cluster with no more than 200 solar masses can, at times, have an absolute magnitude typical of a cluster $25-100$ times more massive at the same age. Even if one can derive an accurate age for a cluster, one cannot rely on the absolute magnitude alone to derive its mass unambiguously.
In Figures \ref{fig:five2} and \ref{fig:six2} we present the distribution of integrated colors $(B-V)_{0}$ and $(U-B)_{0}$ as a function of age for the same select values of cluster mass, 200, 1,000, 5,000, 10,000, 25,000, and 50,000 $M_{\Sun}$, respectively. The integrated colors as a function of age, computed in the infinite mass limit ($10^{6} M_{\Sun}$ in our simulations), are represented by the solid black line. Here the observed values start out closely aligned with the predictions of classical SSP codes (the infinite mass limit) for the high mass clusters, but quickly fan out to a rather broad distribution below 10,000 $M_{\Sun}$. When one gets below 1,000 $M_{\Sun}$, the color range is extreme, both in the red and blue colors predicted by our models. Moreover, the distribution becomes clearly bimodal (\citeauthor*{paper2} \citeyear{paper2}; \citeauthor*{lancon2002} \citeyear{lancon2002}; \citeauthor*{cervino2004} \citeyear{cervino2004}; \citeauthor*{fagiolini2007} \citeyear{fagiolini2007}). In \citeauthor*{paper2} \citeyear{paper2}, we discussed the increased dispersion in colors as the cluster mass decreased, though we did not investigate masses as low as shown here. However, even clusters with masses of 1,000 to as much as 5,000 $M_{\Sun}$ show an extraordinary range in colors, away from the predicted SSP single expectation value with age.
\begin{figure*}[htp]
\centering
\includegraphics[angle=270,width=5.25cm]{f4a.eps}
\includegraphics[angle=270,width=5.25cm]{f4b.eps}
\includegraphics[angle=270,width=5.25cm]{f4c.eps}
\includegraphics[angle=270,width=5.25cm]{f4d.eps}
\includegraphics[angle=270,width=5.25cm]{f4e.eps}
\includegraphics[angle=270,width=5.25cm]{f4f.eps}
\caption{\small Beginning in the upper left panel, the black line shows the mean value of the distribution of integrated $M_{V}$ magnitudes for a 50,000 $M_{\Sun}$ cluster. The red points are MASSCLEAN Monte Carlo derived magnitudes for a 50,000 $M_{\Sun}$ cluster, as a function of age; over one million models are used to derive them. The dispersion about the mean value is relatively small. The next panel to the right gives the magnitudes, as a function of age, for Monte Carlo derived clusters of mass 25,000 $M_{\Sun}$, with the black line again showing the mean value of the distribution of integrated $M_{V}$ magnitudes. The remaining four panels show progressively lower mass simulated clusters: 10,000 (upper right), 5,000 (lower left), 1,000 (lower center), and finally 200 $M_{\Sun}$ (lower right), in each case with the mean value of the distribution of integrated $M_{V}$ magnitudes for the corresponding cluster mass. For clusters with masses of only a few thousand solar masses, the observed range becomes extremely large, and such clusters can even mimic clusters of considerably lower or higher total mass. \normalsize}\label{fig:four2}
\end{figure*}
\begin{figure*}[htp]
\centering
\includegraphics[angle=270,width=5.25cm]{f5a.eps}
\includegraphics[angle=270,width=5.25cm]{f5b.eps}
\includegraphics[angle=270,width=5.25cm]{f5c.eps}
\includegraphics[angle=270,width=5.25cm]{f5d.eps}
\includegraphics[angle=270,width=5.25cm]{f5e.eps}
\includegraphics[angle=270,width=5.25cm]{f5f.eps}
\caption{\small As in Figure \ref{fig:four2}, we show six mass examples (50,000, 25,000, 10,000, 5,000, 1,000, and 200 $M_{\Sun}$), here with the observed distribution of $(B-V)_{0}$ colors as a function of age, compared to the infinite mass limit (the black line, the same in all six panels). For the higher mass clusters, 50,000 and 25,000 $M_{\Sun}$, the range of colors is small and stays reasonably close to the values computed in the infinite mass limit. In the remaining panels, however, the simulated values reach rather extreme colors, both bluer and, more often, much redder, for increasingly lower mass clusters.\normalsize}\label{fig:five2}
\end{figure*}
\begin{figure*}[htp]
\centering
\includegraphics[angle=270,width=5.25cm]{f6a.eps}
\includegraphics[angle=270,width=5.25cm]{f6b.eps}
\includegraphics[angle=270,width=5.25cm]{f6c.eps}
\includegraphics[angle=270,width=5.25cm]{f6d.eps}
\includegraphics[angle=270,width=5.25cm]{f6e.eps}
\includegraphics[angle=270,width=5.25cm]{f6f.eps}
\caption{\small The same as Figure \ref{fig:five2}, but showing the variation with mass and age of the $(U-B)_{0}$ colors, as derived through Monte Carlo simulations using MASSCLEAN. \normalsize}\label{fig:six2}
\end{figure*}
\subsection{The Distribution of Integrated Colors in the Color-Color Diagram}
All the data from Figures \ref{fig:two2} and \ref{fig:three2} are presented again in Figure \ref{fig:seven2} as a $(U-B)_{0}$ vs.\ $(B-V)_{0}$ color-color diagram. Once again, the variation of integrated colors computed in the infinite mass limit (as predicted by classical SSP codes) is represented by the solid white line. The 65 masses are color-coded as in Figure \ref{fig:one2}. Provided the mass of the cluster is greater than 50,000 $M_{\Sun}$ (in the pink), the predicted properties of our Monte Carlo generated clusters stay reasonably close to the integrated colors computed in the infinite mass limit. For clusters with masses less than 10,000 $M_{\Sun}$ (the dark green just beyond the yellow), the range of possible colors becomes quite large, overlapping heavily with clusters of many different ages and causing severe degeneracy problems.
\begin{figure*}[htp]
\centering
\includegraphics[angle=0,width=16.0cm]{f7.eps}
\caption{\small The result of 50 million Monte Carlo simulations, color coded by mass as described in Figure \ref{fig:one2}. Here we see the full range of colors predicted as a function of mass. Clusters more massive than 50,000 $M_{\Sun}$ (in the pink) typically adhere closely to the single-valued Padova-based SSP predictions, computed in the infinite mass limit. However, clusters with masses below 10,000 $M_{\Sun}$ (the dark green just beyond the yellow) can show extreme color ranges. \normalsize}\label{fig:seven2}
\end{figure*}
The {\it classical} method for cluster age determination using integrated colors is based mainly on the $(U-B)_{0}$ vs.\ $(B-V)_{0}$ color-color diagram. The large dispersion presented in Figure \ref{fig:seven2} shows the limitation of this method when only the infinite mass limit is used. To better see what is happening in the complex Figure \ref{fig:seven2}, we present the same data from a different perspective in Figure \ref{fig:eight2}, now color coded by age rather than mass. We plot the color-color diagram for 5 ages ($log(age/yr)$= 7.00, 7.50, 8.00, 8.50, 9.00) and all values of mass between 200 and 100,000 $M_{\Sun}$. Combined with what we have learned from Figure \ref{fig:seven2}, Figure \ref{fig:eight2} shows that even very young low mass clusters can mimic the UBV colors of more massive, old clusters. The extreme tail of colors belongs solely to intermediate and low mass clusters. Very young clusters ($log(age/yr)$= 7.00, 7.50) could easily be mistaken for 1 Gyr old ($log(age/yr)$= 9.00) clusters based on the single SSP relationship in a classical UBV diagram.
The $(U-B)_{0}$ versus $(B-V)_{0}$ color-color diagram is a very important tool for the age determination of stellar clusters, and the high level of dispersion shown in Figures \ref{fig:seven2} and \ref{fig:eight2} demonstrates its limitations when only the infinite mass limit is used.
The degeneracy may seem so extreme that all is lost. In fact, however, the purpose of our extensive database was not just to show the range and degeneracy of the colors with age and mass (and here we have limited ourselves to a single IMF, chemical composition, and evolutionary model), but to see whether such a database might provide a reasonable grid of properties from which to work backward and derive cluster ages and masses.
\subsection{Mass and Age Are Degenerate Quantities in UBV Colors}
Figures \ref{fig:one2} through \ref{fig:eight2} demonstrate the difficulty one faces in deriving unique ages of stellar clusters when mass is not given full consideration. In creating the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} database, we have created a distribution function of observed photometric properties with age and mass. Our figures show only the UBV bands; however, the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} database includes all photometric bands presently included in the Padova 2008 model release: UBVRIJHK. While our figures might seem to indicate there is little to match on, in fact, when simultaneously solving for all three unique values of UBV ($U-B$, $B-V$, and $M_{V}$), there will be optimized, most likely solutions when one searches in both age and mass grid space. If additional photometric bands are available for a cluster, these can be used to further constrain the cluster properties. The module which provides such a search we call MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}.
\section{Age Determination for Stellar Clusters -- MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}}
The newest addition to the \texttt{MASSCLEAN} package (\citeauthor*{masscleanpaper} \citeyear{masscleanpaper}) is MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}\footnote{As part of the \texttt{MASSCLEAN} package, MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} is open source and freely available under GNU General Public License at: \url{http://www.physics.uc.edu/\textasciitilde popescu/massclean/}}, a program which uses the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} database (\citeauthor*{paper2} \citeyear{paper2}) to determine the ages of stellar clusters.
Our program, MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}, is based on a method analogous to $\chi^{2}$ minimization\footnote{$\chi^{2}$ minimization provides reliable results only in the case of a Gaussian distribution, so it does not apply to low and intermediate mass clusters.}. Given the observational data (integrated magnitude and integrated colors), it searches for the minimum {\it hyper-radius} in the hyperspace of integrated magnitude, integrated colors, age, and mass. The present work is based on a $M_{V}$, $(U-B)_{0}$, $(B-V)_{0}$, age, and mass hyperspace, owing to the availability of integrated $UBV$ colors and magnitudes from \citeauthor*{hunter2003} \citeyear{hunter2003}, but the code can use additional colors. The most probable values of age and mass are reported, along with a list of other possible matches. The number of possible matches can be selected by the user or computed automatically from the photometric errors. The results can be used to generate confidence levels for age and mass (as presented in Tables 1 and 2 and Figure \ref{fig:nine2}), or to generate age-mass probability distributions (as presented in Figure \ref{fig:ten2}).
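The hyper-radius search can be sketched as a nearest-neighbor lookup over the pre-computed grid. This is a simplified stand-in for MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}, not the program itself: the three-entry database and the observed colors below are hypothetical, and no photometric-error weighting is applied.

```python
import math

def hyper_radius(obs, model):
    """Euclidean distance in the (M_V, U-B, B-V) subspace."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, model)))

def best_matches(obs, database, n=5):
    """Rank (log_age, mass) grid entries by hyper-radius from the observation.
    The first entry is the most probable (age, mass) solution; the rest form
    the list of other possible matches."""
    return sorted(database, key=lambda entry: hyper_radius(obs, entry[2]))[:n]

# Hypothetical miniature database: (log_age, mass, (M_V, U-B, B-V)).
db = [
    (7.3, 7000,  (-7.9, -0.55, 0.19)),
    (8.0, 15000, (-7.9, -0.35, 0.15)),
    (9.0, 50000, (-7.0,  0.18, 0.60)),
]
log_age, mass, _ = best_matches((-7.956, -0.550, 0.192), db, n=1)[0]
```

In practice the grid holds many Monte Carlo realizations per (age, mass) cell, so the ranked list of matches directly samples the distribution functions described above.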
Including additional bands (e.g.\ $(V-K)_{0}$) would increase the accuracy of the age and mass determination and could reduce the degeneracies. To do so, a much larger version of the database is required, since the range of stochastic fluctuations is much larger in $(V-K)_{0}$ (\citeauthor*{paper2} \citeyear{paper2}).
\begin{figure*}[htp]
\centering
\includegraphics[angle=0,width=16.0cm]{f8.eps}
\caption{\small Monte Carlo cluster simulations colored by age for $log(age/yr)= 7.00, 7.50, 8.00, 8.50, 9.00$. Here we show that young clusters of low mass can be found in the region that the infinite mass limit predicts for old stellar clusters. For comparison with observational data, the clusters from the \citeauthor*{hunter2003} \citeyear{hunter2003} catalog are displayed as black dots. The clusters with spectroscopic ages from \citeauthor*{santos} \citeyear{santos} (also presented in Table 1) are highlighted with pink squares. All the clusters presented in Table 2 are highlighted with yellow triangles. \normalsize}\label{fig:eight2}
\end{figure*}
\subsection{Age and Mass Determination for LMC Clusters}
As a first demonstration of the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} code, we compare our derived ages for 7 clusters with both spectroscopic ages (\citeauthor*{santos} \citeyear{santos}) and photometric ages (\citeauthor*{hunter2003} \citeyear{hunter2003}). Our results are computed using the previously de-reddened values of $M_{V}$, $(U-B)_{0}$, and $(B-V)_{0}$ from the \citeauthor*{hunter2003} \citeyear{hunter2003} catalog. The data are presented in Table \ref{table:one}: the integrated photometry and ages from \citeauthor*{hunter2003} \citeyear{hunter2003} in columns 2--5, the spectroscopic ages from \citeauthor*{santos} \citeyear{santos} in column 6, and the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} results in columns 7--9. The same data are also presented in Figure \ref{fig:nine2} (a), (b). The integrated $(U-B)_{0}$ and $(B-V)_{0}$ colors are shown in Figure \ref{fig:eight2} as pink squares.
The ages computed using MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} are in good agreement with the spectroscopic ages from \citeauthor*{santos} \citeyear{santos}.
One object in Figure \ref{fig:nine2} still lies significantly away from the age derived by \citeauthor*{santos} \citeyear{santos}, even using MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}. We investigated a few of the sources in this diagram by looking closely at the minimization solutions found within the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} hyperspace. Every available mass and age combination was evaluated against the observed UBV colors, and this output was used to create the likelihood plots shown in Figure \ref{fig:ten2}. These banana-shaped regions demonstrate the very strong degeneracy between age and mass in UBV colors.
Here we see that for NGC 1894, the most discrepant point in Figure \ref{fig:nine2} (b), our best solution for the age does not match that derived by \citeauthor*{santos} \citeyear{santos} via integrated spectroscopy, but agrees with the age from \citeauthor*{hunter2003} \citeyear{hunter2003}. If the older \citeauthor*{santos} \citeyear{santos} age is correct, then this cluster is fairly massive ($\ge$ 15,000 $M_{\Sun}$ based on Figure \ref{fig:ten2}).
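The age-mass likelihood maps described above can be approximated, under the same simplifying assumptions as before, by counting grid entries whose synthetic photometry falls within the photometric errors of the observation and binning them in the (log age, mass) plane. The tolerance value and the miniature grid below are illustrative only, not the actual MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} grid.

```python
from collections import Counter

def age_mass_distribution(obs, database, tol=0.1):
    """Count grid entries whose (M_V, U-B, B-V) values all lie within `tol`
    of the observed values, binned in the (log_age, mass) plane."""
    counts = Counter()
    for log_age, mass, colors in database:
        if all(abs(o - c) <= tol for o, c in zip(obs, colors)):
            counts[(log_age, mass)] += 1
    return counts

# Illustrative grid with repeated Monte Carlo realizations per (age, mass) cell.
grid = [
    (7.3, 7000,  (-7.90, -0.55, 0.19)),
    (7.3, 7000,  (-7.95, -0.52, 0.21)),
    (8.0, 15000, (-7.92, -0.50, 0.17)),
    (9.0, 50000, (-7.00,  0.18, 0.60)),
]
dist = age_mass_distribution((-7.956, -0.550, 0.192), grid)
```

With a densely populated grid, the resulting counts trace out elongated ridges in the (log age, mass) plane, the analogue of the banana-shaped likelihood regions.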
\begin{deluxetable}{lrrrrrrrrrr}
\tablecolumns{11}
\tablewidth{0pc}
\tablecaption{Results}
\tablehead{
\colhead{} & \multicolumn{4}{c}{{\footnotesize Integrated Photometry\tablenotemark{a}}} & \multicolumn{3}{c}{{\footnotesize Spectroscopy\tablenotemark{b} }} & \multicolumn{3}{c}{{\footnotesize MASSCLEAN}} \\
\cline{2-5} \cline{7-7} \cline{9-11}\\
\colhead{{\footnotesize Name}} & \colhead{{\footnotesize$M_{V}$}} & \colhead{{\footnotesize$(U-B)_{0}$}} & \colhead{{\footnotesize$(B-V)_{0}$}} & \colhead{{\footnotesize Age}} & \colhead{{\tiny }} & \colhead{{\footnotesize Age}} & \colhead{{\tiny }} & \colhead{{\footnotesize Age}} & \colhead{{\footnotesize Age}} & \colhead{{\footnotesize Mass}} \\
\colhead{{\footnotesize }} & \colhead{{\footnotesize$(mag)$}} & \colhead{{\footnotesize$(mag)$}} & \colhead{{\footnotesize$(mag)$}} & \colhead{{\footnotesize $(Myr)$}} & \colhead{{\tiny }} & \colhead{{\footnotesize $(Myr)$}} & \colhead{{\tiny }} & \colhead{{\footnotesize $(Myr)$}} & \colhead{{\footnotesize $(log)$}} & \colhead{{\footnotesize $(M_{\Sun}$)}} \\
\colhead{{\footnotesize$1$}} & \colhead{{\footnotesize$2$}} & \colhead{{\footnotesize$3$}} & \colhead{{\footnotesize$4$}} & \colhead{{\footnotesize$5$}} & \colhead{{\tiny }} & \colhead{{\footnotesize$6$}} & \colhead{{\tiny }} & \colhead{{\footnotesize$7$}} & \colhead{{\footnotesize$8$}} & \colhead{{\footnotesize$9$}} }
\startdata
{\footnotesize NGC 1804} & {\footnotesize$-6.493$ \phn} & {\footnotesize$-0.523$ \phn} & {\footnotesize$-0.103$ \phn} & {\footnotesize$19.1\pm7.21$} & {\tiny } & {\footnotesize$60\pm20$} & {\tiny } & {\footnotesize$29.51^{+15.16}_{-11.73}$} & {\footnotesize$7.47^{+0.18}_{-0.22}$} & {\footnotesize$2,000^{+2,000}_{-1,300}$} \\
{\footnotesize NGC 1839} & {\footnotesize$-7.087$ \phn} & {\footnotesize$-0.347$ \phn} & {\footnotesize$0.002$ \phn} & {\footnotesize$16.2\pm7.16$} & {\tiny } & {\footnotesize$90\pm30$} & {\tiny } & {\footnotesize$72.44^{+18.76}_{-32.63}$} & {\footnotesize$7.86^{+0.10}_{-0.26}$} & {\footnotesize$8,000^{+2,000}_{-4,000}$} \\
{\footnotesize SL 237} & {\footnotesize$-6.951$ \phn} & {\footnotesize$-0.494$ \phn} & {\footnotesize$0.156$ \phn} & {\footnotesize$15.2\pm7.02$} & {\tiny } & {\footnotesize$40\pm20$} & {\tiny } & {\footnotesize$38.90^{+10.07}_{-13.78}$} & {\footnotesize$7.59^{+0.10}_{-0.19}$} & {\footnotesize$5,000^{+1,000}_{-3,000}$} \\
{\footnotesize NGC 1870} & {\footnotesize$-7.493$ \phn} & {\footnotesize$-0.423$ \phn} & {\footnotesize$0.082$ \phn} & {\footnotesize$15.8\pm7.54$} & {\tiny } & {\footnotesize$60\pm30$} & {\tiny } & {\footnotesize$43.65^{+23.96}_{-8.98}$} & {\footnotesize$7.64^{+0.19}_{-0.10}$} & {\footnotesize$8,000^{+4,000}_{-2,000}$} \\
{\footnotesize NGC 1894} & {\footnotesize$-7.956$ \phn} & {\footnotesize$-0.550$ \phn} & {\footnotesize$0.192$ \phn} & {\footnotesize$15.0\pm8.02$} & {\tiny } & {\footnotesize$100\pm30$} & {\tiny } & {\footnotesize$20.89^{+14.59}_{-1.40}$} & {\footnotesize$7.32^{+0.23}_{-0.03}$} & {\footnotesize$7,000^{+5,000}_{-1,000}$} \\
{\footnotesize NGC 1913} & {\footnotesize$-7.547$ \phn} & {\footnotesize$-0.493$ \phn} & {\footnotesize$0.257$ \phn} & {\footnotesize$14.6\pm7.72$} & {\tiny } & {\footnotesize$40\pm20$} & {\tiny } & {\footnotesize$33.11^{+11.55}_{-9.12}$} & {\footnotesize$7.52^{+0.13}_{-0.14}$} & {\footnotesize$7,000^{+2,000}_{-2,000}$} \\
{\footnotesize NGC 1943} & {\footnotesize$-7.255$ \phn} & {\footnotesize$-0.359$ \phn} & {\footnotesize$0.137$ \phn} & {\footnotesize$15.5\pm7.31$} & {\tiny } & {\footnotesize$140\pm60$} & {\tiny } & {\footnotesize$79.43^{+7.66}_{-31.57}$} & {\footnotesize$7.90^{+0.04}_{-0.22}$} & {\footnotesize$10,000^{+1,000}_{-4,000}$} \\
\enddata
\tablenotetext{a}{{\footnotesize Hunter et al. 2003}}
\tablenotetext{b}{{\footnotesize Santos et al. 2006}}
\label{table:one}
\end{deluxetable}
\begin{figure}[htp]
\centering
\subfigure[]{\includegraphics[angle=270,width=8.25cm, bb=60 150 554 670]{f9a.eps}}
\subfigure[]{\includegraphics[angle=270,width=8.25cm, bb=60 150 554 670]{f9b.eps}}
\caption{\small In (a), the $log(age/yr)$ derived by \citeauthor*{hunter2003} \citeyear{hunter2003} using UBV photometry alone is shown versus the $log(age/yr)$ independently derived by \citeauthor*{santos} \citeyear{santos} from integrated spectra of the same clusters. In (b), we show the same set of stellar clusters with ages derived by MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}, using the same input UBV photometry as \citeauthor*{hunter2003} \citeyear{hunter2003}. \normalsize}\label{fig:nine2}
\end{figure}
\begin{figure}[htp]
\centering
\subfigure[]{\includegraphics[angle=270,width=8.0cm]{f10a.eps}}
\subfigure[]{\includegraphics[angle=270,width=8.0cm]{f10b.eps}}
\subfigure[]{\includegraphics[angle=270,width=8.0cm]{f10c.eps}}
\caption{\small Output from MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}: probability regions in the mass-age plane, based on the observed UBV magnitudes of these clusters. These diagrams show the very strong correlation between cluster mass and age in integrated UBV colors. Our best solution for NGC 1894 does not match the age given by \citeauthor*{santos} \citeyear{santos}. \normalsize}\label{fig:ten2}
\end{figure}
\subsection{Further Tests of the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} Program}
Regrettably, the \citeauthor*{santos} \citeyear{santos} study has only a small overlap with the \citeauthor*{hunter2003} \citeyear{hunter2003} catalog of clusters, a consistent, high quality UBV dataset with SSP-derived ages. We nevertheless wish to provide a further test of the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} package. The \citeauthor*{hunter2003} \citeyear{hunter2003} study includes over 900 LMC clusters, but only a fraction of these are good test cases, by which we mean that their location in the color-color diagram allows their ages to be more reliably determined using SSP models.
We selected a subset of 30 clusters which cover a wide range of $(U-B)_{0}$ and $(B-V)_{0}$ colors but are also located close to the infinite mass limit (the yellow triangles in Figure \ref{fig:eight2}); they are listed in Table 2. Given their locations in the color-color diagram, age determinations based on classical $\chi^2$ minimization methods and the infinite mass limit are expected to be relatively accurate (e.g.\ \citeauthor*{lancon2000} \citeyear{lancon2000}; \citeauthor*{paper2} \citeyear{paper2}). As shown in Figure \ref{fig:eleven2}, our MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} results are in good agreement with the ages from \citeauthor*{hunter2003} \citeyear{hunter2003} based on SSP model fits. In addition, we can also derive the mass, presented in column 8 of Table 2. New values of age and mass for all the clusters with $UBV$ colors in the \citeauthor*{hunter2003} \citeyear{hunter2003} catalog, derived using MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}}, will be presented in a subsequent paper (Popescu et al.\ 2010, in preparation).
\begin{figure}[htp]
\centering
\includegraphics[angle=270,width=8.25cm, bb=60 150 554 670]{f11.eps}
\caption{\small We have further checked the consistency of the ages derived using MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} by selecting a subset of \citeauthor*{hunter2003} \citeyear{hunter2003} clusters with previously well constrained ages. These data are also given in Table 2. \normalsize}\label{fig:eleven2}
\end{figure}
\begin{deluxetable}{lrrrrrrrr}
\tablecolumns{9}
\tablewidth{0pc}
\tablecaption{Results}
\tablehead{
\colhead{} & \multicolumn{4}{c}{{\footnotesize Integrated Photometry (Hunter et al. 2003)}} & \colhead{} & \multicolumn{3}{c}{{\footnotesize MASSCLEAN}} \\
\cline{2-5} \cline{7-9}\\
\colhead{{\footnotesize Name}} & \colhead{{\footnotesize$M_{V}$}} & \colhead{{\footnotesize$(U-B)_{0}$}} & \colhead{{\footnotesize$(B-V)_{0}$}} & \colhead{{\footnotesize Age}} & \colhead{{\tiny }} & \colhead{{\footnotesize Age}} & \colhead{{\footnotesize Age}} & \colhead{{\footnotesize Mass}} \\
\colhead{{\footnotesize }} & \colhead{{\footnotesize$(mag)$}} & \colhead{{\footnotesize$(mag)$}} & \colhead{{\footnotesize$(mag)$}} & \colhead{{\footnotesize $(Myr)$}} & \colhead{{\tiny }} & \colhead{{\footnotesize $(Myr)$}} & \colhead{{\footnotesize $(log)$}} & \colhead{{\footnotesize $(M_{\Sun}$)}} \\
\colhead{{\footnotesize$1$}} & \colhead{{\footnotesize$2$}} & \colhead{{\footnotesize$3$}} & \colhead{{\footnotesize$4$}} & \colhead{{\footnotesize$5$}} & \colhead{{\tiny }} & \colhead{{\footnotesize$6$}} & \colhead{{\footnotesize$7$}} & \colhead{{\footnotesize$8$}} }
\startdata
{\footnotesize KMK 88-88} & {\footnotesize$-7.848$ \phn} & {\footnotesize$-0.768$ \phn} & {\footnotesize$0.076$ \phn} & {\footnotesize$15.7$ \phn} & {\tiny } & {\footnotesize$15.85^{+3.65 \phn}_{-2.36} \phn$} & {\footnotesize$7.20^{+0.09}_{-0.07}$} & {\footnotesize$7,000^{+3,000 \phn}_{-3,000}$} \\
{\footnotesize H 88-308} & {\footnotesize$-7.151$ \phn} & {\footnotesize$-0.716$ \phn} & {\footnotesize$0.265$ \phn} & {\footnotesize$13.6$ \phn} & {\tiny } & {\footnotesize$16.22^{+5.66 \phn}_{-3.33} \phn$} & {\footnotesize$7.21^{+0.13}_{-0.10}$} & {\footnotesize$3,000^{+2,000 \phn}_{-500}$} \\
{\footnotesize NGC 2009} & {\footnotesize$-8.047$ \phn} & {\footnotesize$-0.735$ \phn} & {\footnotesize$0.204$ \phn} & {\footnotesize$12.6$ \phn} & {\tiny } & {\footnotesize$16.22^{+2.84 \phn}_{-4.74} \phn$} & {\footnotesize$7.21^{+0.07}_{-0.15}$} & {\footnotesize$8,000^{+2,000 \phn}_{-5,000}$} \\
{\footnotesize SL 288} & {\footnotesize$-6.897$ \phn} & {\footnotesize$-0.562$ \phn} & {\footnotesize$-0.097$ \phn} & {\footnotesize$19.1$ \phn} & {\tiny } & {\footnotesize$18.62^{+8.92 \phn}_{-5.44} \phn$} & {\footnotesize$7.27^{+0.17}_{-0.15}$} & {\footnotesize$1,500^{+2,500 \phn}_{-1,300}$} \\
{\footnotesize NGC 1922} & {\footnotesize$-7.742$ \phn} & {\footnotesize$-0.631$ \phn} & {\footnotesize$0.027$ \phn} & {\footnotesize$23.9$ \phn} & {\tiny } & {\footnotesize$20.42^{+7.77 \phn}_{-2.22} \phn$} & {\footnotesize$7.31^{+0.14}_{-0.05}$} & {\footnotesize$6,000^{+5,000 \phn}_{-2,000}$} \\
{\footnotesize BSDL 1181} & {\footnotesize$-6.150$ \phn} & {\footnotesize$-0.203$ \phn} & {\footnotesize$0.050$ \phn} & {\footnotesize$31.3$ \phn} & {\tiny } & {\footnotesize$37.15^{+21.73}_{-8.97} \phn$} & {\footnotesize$7.57^{+0.20}_{-0.12}$} & {\footnotesize$1,000^{+1,000 \phn}_{-300}$} \\
{\footnotesize SL 423} & {\footnotesize$-6.305$ \phn} & {\footnotesize$-0.442$ \phn} & {\footnotesize$0.071$ \phn} & {\footnotesize$43.3$ \phn} & {\tiny } & {\footnotesize$51.29^{+11.81}_{-11.47} \phn$} & {\footnotesize$7.71^{+0.09}_{-0.11}$} & {\footnotesize$3,000^{+2,000 \phn}_{-1,000}$} \\
{\footnotesize HS 371} & {\footnotesize$-6.409$ \phn} & {\footnotesize$-0.292$ \phn} & {\footnotesize$0.166$ \phn} & {\footnotesize$105.8$ \phn} & {\tiny } & {\footnotesize$100.00^{+23.03}_{-36.90} \phn$} & {\footnotesize$8.00^{+0.09}_{-0.20}$} & {\footnotesize$5,000^{+2,000 \phn}_{-2,000}$} \\
{\footnotesize KMK 88-45} & {\footnotesize$-5.116$ \phn} & {\footnotesize$-0.338$ \phn} & {\footnotesize$0.208$ \phn} & {\footnotesize$92.2$ \phn} & {\tiny } & {\footnotesize$109.65^{+10.58}_{-43.58} \phn$} & {\footnotesize$8.04^{+0.04}_{-0.22}$} & {\footnotesize$2,000^{+500}_{-1,300 \phn}$} \\
{\footnotesize HS 318} & {\footnotesize$-4.462$ \phn} & {\footnotesize$-0.267$ \phn} & {\footnotesize$0.172$ \phn} & {\footnotesize$112.8$ \phn} & {\tiny } & {\footnotesize$109.65^{+38.26}_{-14.15} \phn$} & {\footnotesize$8.04^{+0.13}_{-0.06}$} & {\footnotesize$700^{+800 \phn \phd \phn}_{-200}$} \\
{\footnotesize SL 408A} & {\footnotesize$-4.086$ \phn} & {\footnotesize$-0.277$ \phn} & {\footnotesize$0.208$ \phn} & {\footnotesize$111.0$ \phn} & {\tiny } & {\footnotesize$128.82^{+49.00}_{-28.82} \phn$} & {\footnotesize$8.11^{+0.14}_{-0.11}$} & {\footnotesize$700^{+300 \phn \phd \phn}_{-400}$} \\
{\footnotesize HS 346} & {\footnotesize$-6.119$ \phn} & {\footnotesize$-0.228$ \phn} & {\footnotesize$0.211$ \phn} & {\footnotesize$153.9$ \phn} & {\tiny } & {\footnotesize$147.91^{+18.05}_{-52.41} \phn$} & {\footnotesize$8.17^{+0.05}_{-0.19}$} & {\footnotesize$5,000^{+1,000 \phn}_{-2,000}$} \\
{\footnotesize KMK 88-76} & {\footnotesize$-4.673$ \phn} & {\footnotesize$-0.216$ \phn} & {\footnotesize$0.246$ \phn} & {\footnotesize$162.2$ \phn} & {\tiny } & {\footnotesize$158.49^{+41.04}_{-46.29} \phn$} & {\footnotesize$8.20^{+0.10}_{-0.15}$} & {\footnotesize$1,500^{+500 \phn \phd \phn}_{-800}$} \\
{\footnotesize NGC 1695} & {\footnotesize$-6.845$ \phn} & {\footnotesize$-0.231$ \phn} & {\footnotesize$0.167$ \phn} & {\footnotesize$144.0$ \phn} & {\tiny } & {\footnotesize$162.18^{+46.75}_{-52.53} \phn$} & {\footnotesize$8.21^{+0.11}_{-0.17}$} & {\footnotesize$11,000^{+4,000 \phn}_{-3,000}$} \\
{\footnotesize KMHK 494} & {\footnotesize$-4.378$ \phn} & {\footnotesize$-0.148$ \phn} & {\footnotesize$0.245$ \phn} & {\footnotesize$178.0$ \phn} & {\tiny } & {\footnotesize$177.83^{+62.05}_{-46.00} \phn$} & {\footnotesize$8.25^{+0.13}_{-0.13}$} & {\footnotesize$1,000^{+500 \phn \phd \phn}_{-300}$} \\
{\footnotesize SL 110} & {\footnotesize$-5.334$ \phn} & {\footnotesize$-0.195$ \phn} & {\footnotesize$0.169$ \phn} & {\footnotesize$184.8$ \phn} & {\tiny } & {\footnotesize$186.21^{+62.05}_{-46.00} \phn$} & {\footnotesize$8.27^{+0.07}_{-0.14}$} & {\footnotesize$3,000^{+1,000 \phn}_{-1,000}$} \\
{\footnotesize HS 218} & {\footnotesize$-5.607$ \phn} & {\footnotesize$-0.183$ \phn} & {\footnotesize$0.213$ \phn} & {\footnotesize$195.4$ \phn} & {\tiny } & {\footnotesize$199.53^{+14.27}_{-41.04} \phn$} & {\footnotesize$8.30^{+0.03}_{-0.10}$} & {\footnotesize$4,000^{+1,000 \phn}_{-1,000}$} \\
{\footnotesize KMK 88-49} & {\footnotesize$-4.496$ \phn} & {\footnotesize$-0.160$ \phn} & {\footnotesize$0.198$ \phn} & {\footnotesize$210.8$ \phn} & {\tiny } & {\footnotesize$208.93^{+14.27}_{-41.04} \phn$} & {\footnotesize$8.32^{+0.05}_{-0.11}$} & {\footnotesize$1,500^{+500 \phn \phd \phn}_{-500}$} \\
{\footnotesize KMK 88-35} & {\footnotesize$-4.488$ \phn} & {\footnotesize$-0.155$ \phn} & {\footnotesize$0.228$ \phn} & {\footnotesize$221.1$ \phn} & {\tiny } & {\footnotesize$208.93^{+30.95}_{-26.96} \phn$} & {\footnotesize$8.32^{+0.06}_{-0.06}$} & {\footnotesize$1,500^{+500 \phn \phd \phn}_{-500}$} \\
{\footnotesize SL 160} & {\footnotesize$-5.077$ \phn} & {\footnotesize$-0.132$ \phn} & {\footnotesize$0.173$ \phn} & {\footnotesize$240.9$ \phn} & {\tiny } & {\footnotesize$245.47^{+23.68}_{-21.60} \phn$} & {\footnotesize$8.39^{+0.04}_{-0.04}$} & {\footnotesize$3,000^{+500 \phn \phd \phn}_{-500}$} \\
{\footnotesize SL 580} & {\footnotesize$-4.957$ \phn} & {\footnotesize$-0.097$ \phn} & {\footnotesize$0.165$ \phn} & {\footnotesize$272.3$ \phn} & {\tiny } & {\footnotesize$281.84^{+72.97}_{-63.06} \phn$} & {\footnotesize$8.45^{+0.10}_{-0.11}$} & {\footnotesize$3,000^{+1,000 \phn}_{-1,000}$} \\
{\footnotesize SL 224} & {\footnotesize$-5.489$ \phn} & {\footnotesize$-0.072$ \phn} & {\footnotesize$0.191$ \phn} & {\footnotesize$303.5$ \phn} & {\tiny } & {\footnotesize$302.00^{+52.81}_{-50.80} \phn$} & {\footnotesize$8.48^{+0.07}_{-0.08}$} & {\footnotesize$5,000^{+1,000 \phn}_{-1,000}$} \\
{\footnotesize H 88-17} & {\footnotesize$-3.290$ \phn} & {\footnotesize$-0.051$ \phn} & {\footnotesize$0.209$ \phn} & {\footnotesize$335.1$ \phn} & {\tiny } & {\footnotesize$331.13^{+49.06}_{-61.98} \phn$} & {\footnotesize$8.52^{+0.06}_{-0.09}$} & {\footnotesize$700^{+300 \phn \phd \phn}_{-200}$} \\
{\footnotesize HS 32} & {\footnotesize$-3.984$ \phn} & {\footnotesize$-0.044$ \phn} & {\footnotesize$0.172$ \phn} & {\footnotesize$319.3$ \phn} & {\tiny } & {\footnotesize$354.81^{+43.29}_{-66.41} \phn$} & {\footnotesize$8.55^{+0.05}_{-0.09}$} & {\footnotesize$1,500^{+250 \phn \phd \phn}_{-500}$} \\
{\footnotesize H 88-235} & {\footnotesize$-4.046$ \phn} & {\footnotesize$-0.029$ \phn} & {\footnotesize$0.218$ \phn} & {\footnotesize$356.7$ \phn} & {\tiny } & {\footnotesize$363.08^{+35.03}_{-74.67} \phn$} & {\footnotesize$8.56^{+0.04}_{-0.10}$} & {\footnotesize$1,500^{+500 \phn \phd \phn}_{-500}$} \\
{\footnotesize HS 76} & {\footnotesize$-4.331$ \phn} & {\footnotesize$0.010$ \phn} & {\footnotesize$0.272$ \phn} & {\footnotesize$411.6$ \phn} & {\tiny } & {\footnotesize$389.05^{+112.14}_{-72.81}$} & {\footnotesize$8.59^{+0.11}_{-0.09}$} & {\footnotesize$2,000^{+500 \phn \phd \phn}_{-500}$} \\
{\footnotesize H 88-59} & {\footnotesize$-3.965$ \phn} & {\footnotesize$-0.008$ \phn} & {\footnotesize$0.258$ \phn} & {\footnotesize$377.6$ \phn} & {\tiny } & {\footnotesize$389.05^{+78.69}_{-65.45} \phn$} & {\footnotesize$8.59^{+0.08}_{-0.08}$} & {\footnotesize$1,500^{+500 \phn \phd \phn}_{-500}$} \\
{\footnotesize SL 588} & {\footnotesize$-5.436$ \phn} & {\footnotesize$0.095$ \phn} & {\footnotesize$0.303$ \phn} & {\footnotesize$609.0$ \phn} & {\tiny } & {\footnotesize$602.56^{+156.02}_{-166.04}$} & {\footnotesize$8.78^{+0.10}_{-0.14}$} & {\footnotesize$8,000^{+2,000 \phn}_{-3,000}$} \\
{\footnotesize HS 338} & {\footnotesize$-4.338$ \phn} & {\footnotesize$0.109$ \phn} & {\footnotesize$0.308$ \phn} & {\footnotesize$633.1$ \phn} & {\tiny } & {\footnotesize$630.96^{+127.62}_{-152.33}$} & {\footnotesize$8.80^{+0.08}_{-0.12}$} & {\footnotesize$3,000^{+1,000 \phn}_{-1,000}$} \\
{\footnotesize SL 268} & {\footnotesize$-7.032$ \phn} & {\footnotesize$0.181$ \phn} & {\footnotesize$0.596$ \phn} & {\footnotesize$1,000.0$ \phn} & {\tiny } & {\footnotesize$1,148.15^{+110.77}_{-256.90}$} & {\footnotesize$9.06^{+0.04}_{-0.11}$} & {\footnotesize$50,000^{+5,000}_{-10,000}$} \\
\enddata
\end{deluxetable}
\section{Discussion \& Conclusions}
That degeneracies in the UBV color-color diagram affect the derivation of age, as shown in Figures \ref{fig:seven2} and \ref{fig:eight2}, has previously been recognized by many authors (e.g.\ \citeauthor*{bruzual2003} \citeyear{bruzual2003}; \citeauthor*{cervino2009} \citeyear{cervino2009}; \citeauthor*{buzzoni} \citeyear{buzzoni}; \citeauthor*{chiosi} \citeyear{chiosi}; \citeauthor*{bruzual2001} \citeyear{bruzual2001}; \citeauthor*{bruzual2010} \citeyear{bruzual2010}; \citeauthor*{cervino2004} \citeyear{cervino2004}; \citeauthor*{cervino2006} \citeyear{cervino2006}; \citeauthor*{fagiolini2007} \citeyear{fagiolini2007}; \citeauthor*{lancon2000} \citeyear{lancon2000}; \citeauthor*{lancon2009} \citeyear{lancon2009}; \citeauthor*{fouesneau} \citeyear{fouesneau}; \citeauthor*{fouesneau2} \citeyear{fouesneau2}; \citeauthor*{gonzalez} \citeyear{gonzalez}; \citeauthor*{gonzalez2005} \citeyear{gonzalez2005}; \citeauthor*{gonzalez2010} \citeyear{gonzalez2010}; \citeauthor*{jesus} \citeyear{jesus}; \citeauthor*{pandey} \citeyear{pandey}; \citeauthor*{paper2} \citeyear{paper2}).
However, the scatter in the integrated magnitude of a cluster also means that using luminosity as a proxy for mass is highly unreliable for clusters of moderate and low mass. This is clearly demonstrated in our simulations, particularly in Figure \ref{fig:four2}. Present day SSP models are not designed to work with moderate and low mass clusters (e.g.\ \citeauthor*{bruzual2001} \citeyear{bruzual2001}; \citeauthor*{bruzual2010} \citeyear{bruzual2010}; \citeauthor*{lancon2000} \citeyear{lancon2000}; \citeauthor*{masscleanpaper} \citeyear{masscleanpaper}; \citeauthor*{paper2} \citeyear{paper2}). Yet this is in fact the way in which mass functions for stellar clusters in other galaxies have been derived. For some studies it might be argued that the masses are high enough that the errors are not extreme (\citeauthor*{larsen2000} \citeyear{larsen2000}), but the method has been extended to a lower mass regime, such as in the LMC (e.g.\ \citeauthor*{billett} \citeyear{billett}; \citeauthor*{hunter2003} \citeyear{hunter2003}), simply because there was no other way to derive this critical information.
The MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} database is now sufficiently populated that it can be used to work backwards and derive the mass and age of moderate and lower mass clusters in the LMC with a known (correctable) extinction. To make this possible with just three input bands, the dataset
uses the Padova 2008 models (\citeauthor*{padova2008} \citeyear{padova2008}) with a metallicity of $Z=0.008$ ($[\mathrm{Fe/H}]=-0.6$). While the simulations created for the database extend to ages up to $\log(age/yr) = 9.5$, we have avoided applying the current inference package MASSCLEAN{\fontfamily{ptm}\selectfont \textit{age}} to clusters in the LMC for which previous indicators suggested ages $\log(age/yr) > 9.0$. This is because of our concerns about the evolutionary models at such ages and the inevitable drop in metallicity for the very oldest LMC clusters. It seems simplistic to assume the same metallicity can be applied to old clusters as to recently formed clusters; however, the metallicity of the LMC does appear to have changed reasonably slowly over a fairly long period of time, from relatively recent times until about $\log(age/yr) = 9.5$ (\citeauthor*{bica} \citeyear{bica}; \citeauthor*{harris} \citeyear{harris}; \citeauthor*{piatti2009} \citeyear{piatti2009}). These studies also show that for some young clusters, $\log(age/yr) < 8$, the adopted metallicity for the current MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} database may be too low by a few dex. Whether the MASSCLEAN{\fontfamily{ptm}\selectfont \textit{colors}} database might be extended to include LMC clusters over a broader range of metallicities, from $[\mathrm{Fe/H}] = -1$ to $0$ and for $\log(age/yr) > 9.5$, and whether this might prove significant in deriving accurate age and mass estimates, remains to be explored. If such an additional parameter were added to the database, more input photometry would be needed for the stellar cluster in question, to keep the search from becoming hopelessly degenerate over all the properties.
\acknowledgements
We are grateful for suggestions made on an early draft of this work by Deidre Hunter and Bruce Elmegreen. Their ideas led to significant improvements in the presentation.
We thank the referee for useful comments and suggestions.
This material is based upon work supported by the National Science Foundation under Grant No.\ 0607497 and more recently, Grant No.\ 1009550, to the University of Cincinnati.
|
1007.3436
|
\section{Introduction}
The Riemann zeta function is defined as the series
$$
\zeta(n) = \frac{1}{1^n} + \frac{1}{2^n} +\frac{1}{3^n} + \ldots + \frac{1}{k^n} + \ldots
$$
for any integer $n\geq 2$. Three centuries ago Euler found that $\zeta(2) = \pi^2/6$, which is an irrational number.
The exact value of $\zeta(3)$ is still unknown, though Ap\'ery proved in 1979 that $\zeta(3)$ is also irrational
(see \cite{Porten}). The values of $\zeta(n)$ for even $n$ are known and can be written in terms of Bernoulli numbers.
We refer the interested reader to chapter 19 of \cite{BOOK} for a ``perfect" proof of the formula
$$
\zeta(2k) = \sum\limits_{n=1}^{\infty} \frac{1}{n^{2k}} = \frac{(-1)^{k-1} 2^{2k-1} B_{2k}}{(2k)!} \cdot \pi^{2k}~~~~(k\in\mathbb N).
$$
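As a quick numerical sanity check (ours, not part of the original text), the Bernoulli-number formula can be compared in Python against truncated partial sums of the series; the Bernoulli numbers $B_2$, $B_4$, $B_6$ are taken from standard tables:

```python
from math import pi, factorial
from fractions import Fraction

# Bernoulli numbers from standard tables (assumed here, not computed)
bernoulli = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42)}

def zeta_even(k):
    """zeta(2k) via the Bernoulli-number formula."""
    B = float(bernoulli[2 * k])
    return (-1) ** (k - 1) * 2 ** (2 * k - 1) * B * pi ** (2 * k) / factorial(2 * k)

def zeta_partial(n, terms=1_000_000):
    """Truncated Dirichlet series for zeta(n)."""
    return sum(1.0 / m ** n for m in range(1, terms))

for k in (1, 2, 3):          # checks zeta(2), zeta(4), zeta(6)
    assert abs(zeta_even(k) - zeta_partial(2 * k)) < 1e-5
```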
\noindent Notice that $\zeta(n)$ can be written as the following
multi-variable integral
$$
\zeta(n) = \int_0^1\cdots \int_0^1
\frac{1}{1-x_1x_2\cdots x_n}~dx_1dx_2\ldots dx_n.
$$
Indeed, the integrand is unbounded as all $x_i\to 1$, but since the
geometric series $\sum\limits_{q\geq 0} x^{q}$ converges uniformly
on the interval $|x|\leq R,~\forall R\in(0,1)$, we can write
$$
\frac{1}{1-x_1 x_2\cdots x_n} = \sum\limits_{q=0}^{\infty}
(x_1x_2\cdots x_n)^q
$$
then interchange summation with integration, and then integrate $(x_1x_2\cdots x_n)^q$ for each $q$. Using the
identities
$$
\frac{1}{1-xy}+\frac{1}{1+xy}=\frac{2}{1-x^2y^2} ~~~ \text{and}~~~
\frac{1}{1-xy}-\frac{1}{1+xy}=\frac{2xy}{1-x^2y^2}
$$
and a simple change of variables one can easily see that
$$
\int_0^1{\int_0^1{\frac{1}{1-xy}~dx}dy} = \frac{4}{3}
\int_0^1\int_0^1\frac{1}{1-x^2y^2}~dxdy
$$
By further generalizing this idea one arrives at the following:
$$
\zeta(n)=\frac{2^n}{2^n-1}\int_0^1\ldots\int_0^1\frac{1}{1-\prod\limits_{i=1}^n
x^2_i}~dx_1...dx_n.
$$
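The factor $2^n/(2^n-1)$ arises because the terms with even denominators contribute exactly $2^{-n}\zeta(n)$, so the odd-denominator terms sum to $(1-2^{-n})\zeta(n)$. A short Python check of this identity (our sketch, using truncated sums):

```python
def zeta_partial(n, terms=1_000_000):
    """Truncated Dirichlet series for zeta(n)."""
    return sum(1.0 / m ** n for m in range(1, terms))

def zeta_from_odd(n, terms=500_000):
    """zeta(n) via 2^n/(2^n - 1) times the sum over odd denominators only."""
    odd_sum = sum(1.0 / (2 * q + 1) ** n for q in range(terms))
    return 2 ** n / (2 ** n - 1) * odd_sum

for n in (2, 3, 4):
    assert abs(zeta_from_odd(n) - zeta_partial(n)) < 1e-5
```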
\noindent Notice that $(1,1)$ is the only point in the square
$[0,1]\times[0,1]$, which makes the integrand $1/(1-x^2y^2)$
singular. If we take another point on the graph of $1=x^2y^2$, say
$(a,1/a)$ with $a\in(0,\infty)$, then it follows easily (see Lemma 1
below) that
$$
\int_0^{1/a}\int_0^a\frac{1}{1-x^2y^2}~dxdy =
\int_0^1\int_0^1\frac{1}{1-x^2y^2}~dxdy.
$$
\noindent This result motivates the following definition
\begin{dfn} For any point $(a_1,a_2,\ldots, a_{n-1})\in\mathbb R^{n-1}$ such that
$a_i\in(0,+\infty),~\forall i\in\{1,\ldots,n-1\}$ we define
$$
I_n(a_1,...,a_{n-1})=\int_0^{\frac{1}{a_1\cdots
a_{n-1}}}\ldots\int_0^{a_2}\int_0^{a_1}\frac{1}
{1-\prod\limits_{i=1}^n{x_i^2}}~dx_1dx_2\ldots dx_n.
$$
\end{dfn}
\begin{lemma}
For any $a_i\in(0,+\infty)$, we have $I_n(a_1,...,a_{n-1}) =
I_n(1,1,\ldots,1)$.
\end{lemma}
\begin{proof}
Simply observe that by using the change of variables $x_i=a_iu_i$
for all $i\in \{1,\ldots,n\}$, where $a_n=1/(a_1a_2\cdots a_{n-1})$,
the Jacobian equals 1, and the integrand is unchanged.
\end{proof}
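The scaling argument in the proof can be seen concretely in a small Python check (ours, illustrative only): since $\prod_i a_i = 1$, the map $x_i = a_i u_i$ has unit Jacobian and leaves the integrand unchanged pointwise.

```python
import random
random.seed(0)

a = [0.5, 2.0, 1.25]                    # a_1 .. a_{n-1}, arbitrary positive values
a.append(1.0 / (a[0] * a[1] * a[2]))    # a_n = 1/(a_1 a_2 a_3), so prod(a) = 1

def integrand(xs):
    p = 1.0
    for x in xs:
        p *= x * x
    return 1.0 / (1.0 - p)

jac = a[0] * a[1] * a[2] * a[3]         # Jacobian of x_i = a_i * u_i
assert abs(jac - 1.0) < 1e-12

for _ in range(100):                    # pointwise invariance of the integrand
    u = [random.uniform(0.0, 0.99) for _ in range(4)]
    x = [ai * ui for ai, ui in zip(a, u)]
    assert abs(integrand(x) - integrand(u)) < 1e-9
```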
In this article we investigate $\zeta(n)$ following Beukers, Calabi
and Kolk (see \cite{Calabi}), who used the change of variables
$$
x=\frac{\sin(u)}{\cos(v)}~~~\text{and}~~~y=\frac{\sin(v)}{\cos(u)}~~\text{to
evaluate} ~\int_0^1\int_0^1 \frac{1}{1-x^2y^2}~dxdy.
$$
Such a proof of the identity $\zeta(2)=\pi^2/6$ may also be found in
chapter 6 of \cite{BOOK} and in papers of Elkies \cite{Elkies} and
Kalman \cite{Kalman}. Let us also mention here that Kalman's paper,
in addition to a few other proofs of the identity, contains some
history of the problem together with an extensive reference list.
Here we will be changing variables too, but in the integrals
$I_n(a_1,...,a_{n-1})$ and using the hyperbolic trig functions
$\sinh$ and $\cosh$ instead of $\sin$ and $\cos$. Such a change of
variables was considered independently of us by Silagadze and the
reader will find his results in \cite{Silagadze}.
\noindent {\bf Acknowledgement}: The authors would like to thank the
referee who drew our attention to Kalman's paper \cite{Kalman} and
has made a few useful suggestions that improved the exposition.
\section{Hyperbolic Change of Variables}
First observe that the change of variables
$$
x_i=\frac{\sin(u_i)}{\cos(u_{i+1})}~~~~~~\forall i\in\mathbb N\mod(n)
$$
reduces the integrand in $I_n(1,...,1)$ to 1 only when $n$ is even. The
region of integration $\Phi_n=[(x_1,\ldots,x_n)\in\mathbb R^n: 0 <
x_1,\ldots,x_n < 1]$ becomes the one-to-one image of the
$n$-dimensional polytope (note $u_{n+1} = u_1$)
$$
\Pi_n := [(u_1,u_2,\ldots,u_n)\in\mathbb R^n: u_i > 0,
u_i+u_{i+1}<\frac{\pi}{2}, 1\leq i \le n].
$$
We suggest here a different change of variables that will
produce an integrand of 1 for all values of $n$ in
$I_n(a_1,...,a_{n-1})$. But first we define the corresponding
region.
\begin{dfn}
For any point $(a_1,a_2,\ldots, a_{n-1})\in\mathbb R^{n-1}$ such that
$a_i\in(0,+\infty),~\forall i\in\{1,\ldots,n-1\}$ we define
$$\Phi_n(a_1,a_2,\ldots,a_{n-1}):=
[(x_1,\ldots,x_n)\in\mathbb R^n~|~ 0 < x_i < a_i, \forall
i\in\{1,\ldots,n\}],
$$
where $a_n=1/(a_1\cdot a_2\cdot \ldots\cdot a_{n-1})$.
\end{dfn}
\begin{lemma} The change in variables
$$
x_i=\frac{\rm sinh(u_i)}{\rm cosh(u_{i+1})} ~~~~~~~\forall i\in\mathbb N\mod(n)
$$
reduces the integrand of $I_n(a_1,...,a_{n-1})$ to 1 for all values
of $n\geq 2$. It also gives a one-to-one differentiable map between the
region $\Phi_n(a_1,a_2,\ldots,a_{n-1})$ and the set
$\Gamma_n\subset\mathbb R^n$ described by the following $n$
inequalities:
$$
0<u_i < \rm arcsinh\bigl(a_i\cdot\rm cosh(u_{i+1})\bigr),~~~~~~~ \forall i\in\mathbb N\mod(n).
$$
\end{lemma}
\begin{proof}
The inequalities for $\Gamma_n$ follow trivially from the
corresponding inequalities $0<x_i<a_i$ and the facts that
$\rm cosh(x)>0$ and $\rm arcsinh(x)$ is increasing everywhere. Injectivity
and smoothness of the map may be proven by writing down formulas,
which express each $u_i$ in terms of all $x_j$. For example, here
are the corresponding formulas for the set $\Gamma_3$:
$$
u_i = \rm arcsinh\left( x_i\cdot\sqrt{\frac{1+x^2_{i+1} +
x^2_{i-1}x^2_{i+1}}{1-x^2_1x^2_2x^2_3}}\right),~~~i\in\mathbb N\pmod{3}.
$$
The Jacobian is the determinant of the matrix
$$
A = \begin{pmatrix}
\frac{\cosh(u_1)}{\cosh(u_2)} & \frac{-\sinh(u_1)\sinh(u_2)}{\cosh^2(u_2)} & 0 & \ldots & 0 \\
0 & \frac{\cosh(u_2)}{\cosh(u_3)} & \frac{-\sinh(u_2)\sinh(u_3)}{\cosh^2(u_3)} & \ldots &0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{-\sinh(u_n)\sinh(u_1)}{\cosh^2(u_1)} & 0 & 0 & \ldots & \frac{\cosh(u_n)}{\cosh(u_1)}
\end{pmatrix}
$$
To compute this determinant we observe that the first column
expansion reduces the computation to two determinants of the upper
and lower triangular matrices. This results in the formula, where
the first term comes from the upper triangular matrix and the second
from the lower triangular matrix (recall that $u_{n+1} = u_1$) :
$$
{\rm Det}(A)=\prod\limits_{i=1}^n\frac{\cosh(u_i)}{\cosh(u_{i+1})} ~
+ ~ (-1)^{n-1}\cdot \prod\limits_{i=1}^n\frac{-\sinh(u_i)\sinh(u_{i+1})}{\cosh^2(u_{i+1})}=
1-\prod\limits_{i=1}^n\tanh^2(u_i).
$$
When using the above change of variables the denominator of the
integrand, $1-\prod\limits_{i=1}^nx_i^2$, becomes
$1-\prod\limits_{i=1}^n\tanh^2(u_i)$, which we just proved to be the
Jacobian.
\end{proof}
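The determinant identity in the proof is easy to verify numerically; the following sketch (ours, assuming NumPy is available) builds the matrix $A$ for an arbitrary $u$ with $n=4$ and compares $\det(A)$ with $1-\prod_i\tanh^2(u_i)$:

```python
import numpy as np

def jacobian_matrix(u):
    """Matrix A of partials dx_i/du_j for x_i = sinh(u_i)/cosh(u_{i+1}), indices mod n."""
    n = len(u)
    A = np.zeros((n, n))
    for i in range(n):
        j = (i + 1) % n
        A[i, i] = np.cosh(u[i]) / np.cosh(u[j])                          # dx_i/du_i
        A[i, j] = -np.sinh(u[i]) * np.sinh(u[j]) / np.cosh(u[j]) ** 2    # dx_i/du_{i+1}
    return A

u = np.array([0.3, 0.7, 1.1, 0.2])
det_numeric = np.linalg.det(jacobian_matrix(u))
det_closed = 1.0 - np.prod(np.tanh(u) ** 2)
assert abs(det_numeric - det_closed) < 1e-12
```

The same check passes for odd $n$ as well, consistent with the sign bookkeeping in the expansion.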
\section{Computations of $\zeta(2)$}
We begin with $\zeta(2)$, which is a rational multiple of $I_2(1)$. Lemma 1 implies that it's enough to compute
$$
I_2(a)=\int_0^{\frac{1}{a}}{\int_0^a{\frac{1}{1-x^2y^2}dx}dy} ~~~\mbox{for arbitrary}~a >0.
$$
\noindent We now perform the following change in variables
$$
x=\frac{\rm sinh(u)}{\rm cosh(v)}, ~ ~ y=\frac{\rm sinh(v)}{\rm cosh(u)}.
$$
As we proved above, our integrand reduces to 1 and all we must do is
worry about the limits. If $x= 0$ then clearly $u=0$, the same is
true for $y$ and $v$. If $x=a$ then $a\cdot\rm cosh(v)=\rm sinh(u)$ so
$v=\rm arcosh(\frac{\rm sinh(u)}{a})$ and if $y=\frac{1}{a}$ then
$(1/a)\cdot\rm cosh(u)=\rm sinh(v)$ so $v=\rm arcsinh(\frac{\rm cosh(u)}{a})$
thus describing our region of integration (see Figure 1). We then
write the integral $I_2(a)$ as follows
$$
\int\limits_0^{\rm arcsinh(a)}\rm arcsinh\left(\frac{\rm cosh(u)}{a}\right)du + \int\limits_{\rm arcsinh(a)}^\infty\left[\rm arcsinh\left(\frac{\rm cosh(u)}{a}\right)-
\rm arcosh\left(\frac{\rm sinh(u)}{a}\right)\right]du.
$$
\begin{figure*}[h]
\includegraphics[height=100mm,width=110mm]{Picture.pdf}
\caption{The set $\Gamma_2\subset\mathbb R^2,~~\forall a> 0$.}
\end{figure*}
\begin{lemma}
$\lim\limits_{a\to 0} \int_0^{\rm arcsinh(a)}\rm arcsinh(\frac{\rm cosh(u)}{a})du =0$
\end{lemma}
\begin{proof}
If we let $\rm cosh(\rm arcsinh(z))=Q$ then $Q=\sqrt{1+z^2}$. Therefore
$$
\rm arcsinh\left(\frac{\rm cosh(\rm arcsinh(a))}{a}\right)=\rm arcsinh\left(\sqrt{\frac{1}{a^2}+1}\right).
$$
Since $\rm arcsinh(\rm cosh(u)/a)$ is increasing for $u>0$, we can take the area of the rectangle with base
$[0,{\rm arcsinh}(a)]$ and height ${\rm arcsinh}\bigl(\cosh({\rm arcsinh}(a))/a\bigr)$ as an overestimate of the integral, that is
$$
\rm arcsinh(a)\cdot \rm arcsinh\left(\sqrt{\frac{1}{a^2}+1}\right)\geq
\int_0^{\rm arcsinh(a)}\rm arcsinh\left(\frac{\rm cosh(u)}{a}\right)du\geq0
$$
Then by applying L'Hospital's rule one can deduce
$$
\lim\limits_{a\to0}(\rm arcsinh(a)\cdot\rm arcsinh\left(\sqrt{\frac{1}{a^2}+1}\right))=0.
$$
\end{proof}
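The bound used in the proof is also easy to check numerically (an illustrative Python snippet of ours, not part of the argument): the overestimate $\rm arcsinh(a)\cdot\rm arcsinh(\sqrt{1/a^2+1})$ shrinks to $0$ as $a\to 0$.

```python
from math import asinh, sqrt

def upper_bound(a):
    """arcsinh(a) * arcsinh(sqrt(1/a^2 + 1)), the overestimate from the proof."""
    return asinh(a) * asinh(sqrt(1.0 / a ** 2 + 1.0))

vals = [upper_bound(10.0 ** (-k)) for k in range(1, 9)]   # a = 0.1 down to 1e-8
assert all(later < earlier for earlier, later in zip(vals, vals[1:]))
assert vals[-1] < 1e-6
```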
Now, since $I_2(a)=I_2(1)$, $\forall a >0$, we conclude that $I_2(1)=\lim\limits_{a\to 0} I_2(a)$, and therefore we have
$$
I_2(1)=\lim_{a \to 0}
\int_{\rm arcsinh(a)}^\infty{\left[\rm arcsinh\left(\frac{\rm cosh(u)}{a}\right)-\rm arcosh\left(\frac{\rm sinh(u)}{a}\right)\right]du}.
$$ Since
$$
\rm arcsinh\left(\frac{\rm cosh(x)}{a}\right)=\ln{\left(\frac{\rm cosh(x)}{a}+\sqrt{\frac{\rm cosh^2(x)}{a^2}+1}\right)}
$$
and
$$
\rm arcosh\left(\frac{\rm sinh(x)}{a}\right)=\ln{\left(\frac{\rm sinh(x)}{a}+\sqrt{\frac{\rm sinh^2(x)}{a^2}-1}\right)}
$$
we get
$$
I_2(1)=\lim_{a\to 0}\int_{\rm arcsinh(a)}^\infty{\ln{\left(\frac{\frac{\rm cosh(x)}{a}+\sqrt{\frac{\rm cosh^2(x)}{a^2}+1}}{\frac{\rm sinh(x)}{a}+
\sqrt{\frac{\rm sinh^2(x)}{a^2}-1}}\right)}dx}
$$
which, after taking the limit as $a\to0$, gives
$$
I_2(1)=\int_0^\infty\ln\left(\frac{\rm cosh(x)}{\rm sinh(x)}\right)dx.
$$
\noindent Using integration by parts with $u=\ln{\left(\frac{\rm cosh(x)}{\rm sinh(x)}\right)}$ and $dv=dx$ one
obtains the formula
$$
I_2(1)=\left.
x\ln{\left(\frac{\rm cosh(x)}{\rm sinh(x)}\right)}\right|_0^\infty+\int_0^\infty{\frac{2x}{\rm sinh(2x)}dx}.
$$
By examining the limits of the first term as $x$ goes
to $0$ and to $\infty$, we are left with only the integral
$$
I_2(1)=\int_0^\infty\frac{2x}{\rm sinh(2x)}dx.
$$
By applying the change in variables $u = 2x$ our formula becomes
$$
I_2(1)=\frac{1}{2}\int_0^\infty{\frac{u}{\rm sinh(u)}du}.
$$
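Before evaluating this integral analytically, one can confirm numerically that $\frac{1}{2}\int_0^\infty u/\sinh(u)\,du$ agrees with the odd-denominator series $\sum_{q\geq 0}(2q+1)^{-2}$, which equals $I_2(1)$ by the series representation from the introduction. A self-contained Python sketch (ours) with a composite Simpson rule:

```python
from math import sinh

def f(u):
    return 1.0 if u == 0.0 else u / sinh(u)    # removable singularity at u = 0

def simpson(g, a, b, n):                       # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

I2 = 0.5 * simpson(f, 0.0, 40.0, 40_000)       # tail beyond u = 40 is negligible
odd_series = sum(1.0 / (2 * q + 1) ** 2 for q in range(1_000_000))
assert abs(I2 - odd_series) < 1e-6
```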
Now we use the method of differentiation under the integral sign
and consider the function
$$
F(\alpha)=\frac{1}{2}\int_0^\infty\frac{\rm arctanh(\alpha
\rm tanh(x))}{\rm sinh(x)}dx.
$$ One should consider the function $F$ at the points $\alpha=1$ and $\alpha=0$: $F(1)$ is clearly the integral we
are trying to find and $F(0)=0$. Thus by differentiating under the integral sign with respect to $\alpha$, plus some algebra,
we obtain
$$
F'(\alpha) = f(\alpha)=\frac{1}{2}\int_0^\infty\frac{\rm cosh(x)}{1+(1-\alpha^2)\rm sinh^2(x)}dx.
$$
Then by performing the change in variables
$u=\sqrt{1-\alpha^2}\cdot \rm sinh(x)$ the integral becomes
$$
f(\alpha)=\frac{1}{2\sqrt{1-\alpha^2}}\int_0^\infty\frac{1}{1+u^2}du,
$$
which is simply
$$
\left.\frac{\rm arctan(u)}{2\sqrt{1-\alpha^2}}\right|_0^\infty =
\frac{\pi}{4\sqrt{1-\alpha^2}}.
$$
Since we took the derivative with respect to $\alpha$, we must now integrate with respect to $\alpha$, so we have
$$
\int_0^1{f(\alpha)~d\alpha}=F(1)-F(0)=F(1)-0=F(1)
$$
which, as stated above, is our goal. So
$$
I_2(1) = \int_0^1 f(\alpha)~d\alpha =\frac{\pi}{4}\int_0^1\frac{1}{\sqrt{1-\alpha^2}}~d\alpha =
\left.\frac{\pi}{4}\rm arcsin(\alpha)\right|_0^1 = \frac{\pi^2}{8},
$$
and hence $\zeta(2) = \frac{4}{3}\cdot\frac{\pi^2}{8} = \pi^2/6$.
\section{A formula for $\zeta(n),~n\geq 2$}
One could try to use a similar approach to compute $\zeta(n),~n > 2$; however, the computations become rather long.
Instead, we present an elementary proof of the following theorem, which generalizes our formula for $\zeta(2)$ from
the previous section.
\begin{theorem} Let $n\geq 2$ be a natural number. Then
$$
\int_0^1...\int_0^1\frac{1}{1-\prod\limits_{i=1}^n{x_i^2}}~dx_1...dx_n
= \frac{1}{(n-1)!}\cdot\int_0^\infty \ln^{n-1}(\rm coth(x))~dx.
$$
\end{theorem}
\noindent Let us start with the following lemma, which can be easily
proved by using induction on $k$, integration by parts and
L'Hospital's rule.
\begin{lemma}
$$
\int_0^1{\ln^{k}(z)z^{2q}dz}=\frac{(-1)^kk!}{(2q+1)^{k+1}},~~\forall
k\in \mathbb N ~\text{and} ~ q\geq 0.
$$
\end{lemma}
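Lemma 4 can be spot-checked numerically; a Python sketch of ours uses the substitution $z=e^{-t}$, which turns the left-hand side into $(-1)^k\int_0^\infty t^k e^{-(2q+1)t}\,dt$, a smooth integrand suitable for Simpson's rule:

```python
from math import exp, factorial

def lemma4_lhs(k, q, n=60_000, T=60.0):
    """integral_0^1 ln^k(z) z^(2q) dz, computed after substituting z = exp(-t):
    it equals (-1)^k * integral_0^inf t^k exp(-(2q+1) t) dt."""
    g = lambda t: t ** k * exp(-(2 * q + 1) * t)
    h = T / n
    s = g(0.0) + g(T)
    for i in range(1, n):                       # composite Simpson rule
        s += g(i * h) * (4 if i % 2 else 2)
    return (-1) ** k * s * h / 3.0

for k in (1, 2, 3):
    for q in (0, 1, 2):
        exact = (-1) ** k * factorial(k) / (2 * q + 1) ** (k + 1)
        assert abs(lemma4_lhs(k, q) - exact) < 1e-9
```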
\begin{proof}[Proof of the theorem.] Applying the substitution $z=\rm tanh(x)$
to the integral
$$
\frac{1}{(n-1)!} \int_0^\infty \ln^{n-1}(\rm coth(x))~dx
$$
gives
$$
\frac{1}{(n-1)!}\int_0^1{\frac{(-\ln(z))^{n-1}}{1-z^2}dz} =
\frac{1}{(n-1)!}\int_0^1 (-\ln(z))^{n-1}\cdot(\sum_{q\geq 0}
z^{2q})dz.
$$
Since the integral is improper at both ends and the geometric series $\sum\limits_{q\geq 0}
z^{2q}$ converges uniformly on the interval $|z|\leq R,~\forall
R\in(0,1)$, the last integral equals
$$
\frac{1}{(n-1)!}
\sum_{q\geq0}{(-1)^{n-1}\cdot \int_0^1{\ln^{n-1}(z)z^{2q}dz}} = ~ \mbox{by Lemma 4} ~ = \sum_{q\geq0}{\frac{1}{(2q+1)^n}}.
$$
\noindent Using the geometric series expansion one can easily show that we also have
$$
\int_0^1\ldots\int_0^1
\frac{1}{1-\prod\limits_{i=1}^n{x_i^2}}~dx_1...dx_n =
\sum_{q\geq0}{\frac{1}{(2q+1)^n}}.
$$
\end{proof}
\begin{corollary}
For any integer $n\geq 2$,
$$
\zeta(n) = \frac{2^n}{(2^n -1)\cdot(n-1)!}\cdot\int_0^\infty \ln^{n-1}(\rm coth(x))~dx.
$$
\end{corollary}
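The corollary lends itself to a numerical check for $n=3$ (our illustrative Python sketch): the $\ln^2$ singularity at $x=0$ is tamed by the substitution $x=e^{-s}$ on $(0,1]$, and the resulting value of $\frac{4}{7}\int_0^\infty\ln^2(\coth x)\,dx$ matches the partial sums of $\zeta(3)$.

```python
from math import exp, log, tanh, factorial

def coth(x):
    return 1.0 / tanh(x)

def simpson(g, a, b, n):                        # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

# integral over (0, 1]: substitute x = exp(-s) to tame the ln^2 singularity at 0
part1 = simpson(lambda s: log(coth(exp(-s))) ** 2 * exp(-s), 0.0, 50.0, 50_000)
part2 = simpson(lambda x: log(coth(x)) ** 2, 1.0, 25.0, 25_000)

zeta3 = 2 ** 3 / ((2 ** 3 - 1) * factorial(2)) * (part1 + part2)   # n = 3 in the corollary
zeta3_series = sum(1.0 / m ** 3 for m in range(1, 200_000))
assert abs(zeta3 - zeta3_series) < 1e-6
```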
|
1007.3938
|
\section{Introduction}
Metamaterials (MMs) are artificial materials engineered to achieve unusual electromagnetic (EM) properties that are not normally found in nature\cite{Fang}-\cite{Feng}. A comprehensive review can be found in the book\cite{Capolino}. In contrast to photonic crystals, metamaterials are made of periodic structures with unit cells much smaller than the wavelength of light. A general method to construct large area 3D MMs is the layer-by-layer fabrication technique\cite{Valentine2}-\cite{Shalaev}. In stacked MMs, interaction between adjacent layers makes it difficult to extract bulk material parameters from a single unit-cell layer. It has been found that the retrieved effective metamaterial parameters are often dependent on the number of unit cells along the propagation direction\cite{Valentine2, Andryieuski}. It appears that there is no clear methodology on how to accurately predict the bulk values of the effective permittivities and permeabilities from a single layer of unit cells. Currently there is also no simple and effective way to resolve the phase ambiguity, caused by phase wrapping, in determining the phase of the transmitted and reflected electromagnetic fields. Although the phase ambiguity is a common issue for the parameter retrieval of general composite materials, this issue becomes more significant for metamaterials, where typically resonances, positive refractive index, and negative refractive index are all present in the same frequency band. In this paper, we apply Herpin's equivalent theorem\cite{Herpin} to orthorhombic anisotropic media and provide a simple way to accurately predict the effective bulk material parameters from a single layer of unit cells. We introduce a graphical retrieval method and phase unwrapping techniques, which can simultaneously determine the six material parameters, the permittivity and permeability tensors, from one unit cell.
\section{Equivalent theory}
The layer-by-layer fabrication method renders metamaterials intrinsically anisotropic. In MMs one unit-cell layer is usually composed of several sub-layers of different materials or nanostructures. We limit our discussion to nonchiral materials. To illustrate the key point we adopt the multilayer method that has been used to model MMs by many groups\cite{Smith}-\cite{Chen}. We further assume that the principal axes of the sub-layers are parallel in each direction. In the principal coordinate system, the permittivity and permeability tensors are given by
\begin{equation}
\label{epmu}
\bar{\bar\epsilon}_n = \begin{pmatrix} \epsilon_{nx} & 0 & 0 \\ 0 & \epsilon_{ny} & 0 \\ 0 & 0 & \epsilon_{nz} \end{pmatrix} \,, \hspace{.3in}
\bar{\bar\mu}_n = \begin{pmatrix} \mu_{nx} & 0 & 0 \\ 0 & \mu_{ny} & 0 \\ 0 & 0 & \mu_{nz} \end{pmatrix} \,,
\end{equation}
where $n=1,2,\cdots$. The scalar terms $\epsilon_{nj}$ and $\mu_{nj}\ (j=x,y,z)$ are complex. Consider a monochromatic wave of frequency $\omega$ with time dependence $\exp(-i\omega t)$ propagating inside the orthorhombic anisotropic material. In each layer we have
\begin{eqnarray}
\label{Maxw}
\begin{split}
\nabla\times\bigl( \bar{\bar\mu}_n^{-1} \cdot \nabla\times{\bm E}\bigr) &=\, k_0^2\bigl(\bar{\bar\epsilon}_n \cdot{\bm E}\bigr) \,,\\
\nabla\times\bigl( \bar{\bar\epsilon}_n^{-1} \cdot \nabla\times{\bm H}\bigr) &=\, k_0^2\bigl(\bar{\bar\mu}_n \cdot{\bm H}\bigr) \,,
\end{split}
\end{eqnarray}
where $k_0=\omega/c$. If the plane of incidence is one of the crystal planes, the TE and TM polarizations are decoupled.
\begin{figure}[htb]
\centering\includegraphics[width=.33\textwidth]{UnitCell}
\caption{A schematic shows how to construct a symmetric unit cell, which is composed of two original asymmetric unit cells.}
\label{UnitCell}
\end{figure}
\noindent
According to Herpin's equivalent theorem\cite{Herpin}, every general multilayer is equivalent to a two-homogeneous-layer system and every symmetric multilayer is equivalent to a single homogeneous layer, characterized by an equivalent index and equivalent thickness\cite{Epstein}. In metamaterials, a single unit-cell layer can be considered as a multilayer system. So it is equivalent to a two-homogeneous-layer system, denoted as $AB$. We can then construct a symmetric unit cell by cascading two unit cells as $ABBA$, which is equivalent to a single homogeneous layer according to Herpin's theorem. This process is illustrated in Fig.~\ref{UnitCell}. Applying this methodology, a symmetric unit-cell layer can often be constructed regardless of the number of sub-layers and the complexity of each sub-layer in the original unit cell. The permittivities and permeabilities retrieved from the symmetric unit cell, which is composed of two original asymmetric unit cells, will represent the bulk material parameters. Thus, a length-independent description can be achieved. \\
To make this clearer, let $\bm{M}$ represent the characteristic matrix of one symmetric unit cell. According to Herpin's equivalent theorem, it can be replaced by an equivalent single layer. Assume the x-z plane is the plane of incidence. For TM mode, i.e. $\bm{H}=(0,H_y,0)$ and $\bm{E}=(E_x,0,E_z)$, we have
\begin{equation}
\label{Herpin}
\begin{split}
M_{11} &= M_{22} = \cos\psi_e \,, \\
M_{12} &= \frac{\sin\psi_e}{i{\cal Z}_e} \,, \hskip.1in M_{21} = -i{\cal Z}_e \sin\psi_e \,, \\
\end{split}
\end{equation}
where $\psi_e$ is the equivalent phase thickness of the symmetric unit cell; ${\cal Z}_e$ is the equivalent impedance. If the material contains $N$-layer symmetric unit cells, the characteristic matrix of the material is given by
\begin{equation}
\label{MatxN}
\begin{pmatrix} \cos\psi_e & \dfrac{1}{i{\cal Z}_e}\sin\psi_e \\ -i{\cal Z}_e\sin\psi_e & \cos\psi_e \end{pmatrix}^N =
\begin{pmatrix} \cos(N\psi_e) & \dfrac{1}{i{\cal Z}_e}\sin(N\psi_e) \\ -i{\cal Z}_e\sin(N\psi_e) & \cos(N\psi_e) \end{pmatrix} \,.
\end{equation}
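Equation~(\ref{MatxN}) can be verified directly; a short numerical sketch of ours (assuming NumPy, with arbitrary complex $\psi_e$ and ${\cal Z}_e$) compares the $N$-th matrix power with the closed form:

```python
import numpy as np

psi = 0.4 + 0.05j    # equivalent phase thickness (complex: lossy layer), arbitrary
Z = 1.7 - 0.2j       # equivalent impedance, arbitrary
N = 5

def herpin_matrix(phase, Z):
    """Characteristic matrix of Eq. (Herpin) for a given phase thickness and impedance."""
    return np.array([[np.cos(phase), np.sin(phase) / (1j * Z)],
                     [-1j * Z * np.sin(phase), np.cos(phase)]])

MN_power = np.linalg.matrix_power(herpin_matrix(psi, Z), N)
MN_closed = herpin_matrix(N * psi, Z)
assert np.allclose(MN_power, MN_closed)
```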
A similar expression for TE mode can be obtained by replacing the ${\cal Z}_e$ with the negative admittance $-{\cal Y}_e$. Here the ${\cal Z}_e$ and ${\cal Y}_e$ are, respectively, the generalized impedance and admittance because they include the incidence angle; i.e., Eq.~(\ref{MatxN}) is valid for both normal and oblique incidence. They are given by
\begin{equation}
\label{Disp}
\begin{split}
&\mbox{TM:}\hspace{.3in} {\cal Z}_e = \frac{k_z}{k_0\epsilon_x} \,,\hspace{.3in}
k_z^2 = k_0^2\epsilon_x\,\mu_y - \frac{\epsilon_x}{\epsilon_z} k_x^2 \,, \\
&\mbox{TE:}\hspace{.3in} {\cal Y}_e = \frac{k_z}{k_0\mu_x} \,,\hspace{.3in}
k_z^2 = k_0^2\epsilon_y\,\mu_x - \frac{\mu_x}{\mu_z} k_x^2 \,, \\
\end{split}
\end{equation}
where $(\epsilon_x,\epsilon_y,\epsilon_z)$ and $(\mu_x,\mu_y,\mu_z)$ are, respectively, the effective permittivities and permeabilities of the equivalent layer. As shown in Eq.~(\ref{MatxN}), the total phase of an $N$-layer system equals $N$ times the phase $\psi_e$ of the single layer; whereas the impedance ${\cal Z}_e$ or admittance ${\cal Y}_e$ is independent of the number of layers. These properties imply that the material parameters retrieved from the $N$ layers of unit cells are the same as those retrieved from one layer of unit cells. In other words, the bulk metamaterial parameters can be predicted from a single symmetric unit cell. Note that the characteristic matrix of $N$ layers of asymmetric unit cells does not have the convenient form of Eq.~(\ref{MatxN}). As a consequence, the retrieved material parameters will be dependent on the number of unit cells along the propagation direction. Hence, the bulk material parameters cannot be predicted from one unit-cell layer. In other words, a length-independent description cannot be achieved for asymmetric unit cells. \\
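The $ABBA$ construction of Fig.~\ref{UnitCell} can be illustrated numerically (our sketch, with hypothetical sub-layer parameters): for two homogeneous sub-layers with unequal impedances, the asymmetric cell $AB$ has $M_{11}\ne M_{22}$, while the symmetric cell $ABBA$ satisfies $M_{11}=M_{22}$, as required for a Herpin equivalent single layer.

```python
import numpy as np

def layer(phi, Z):
    """Characteristic matrix of one homogeneous layer (phase thickness phi, impedance Z)."""
    return np.array([[np.cos(phi), np.sin(phi) / (1j * Z)],
                     [-1j * Z * np.sin(phi), np.cos(phi)]])

A, B = layer(0.3, 1.2), layer(0.7, 0.5)   # hypothetical sub-layers, unequal impedances

M_AB = A @ B                    # asymmetric unit cell
M_ABBA = A @ B @ B @ A          # symmetric unit cell (two asymmetric cells back-to-back)

assert not np.isclose(M_AB[0, 0], M_AB[1, 1])     # M11 != M22 for AB
assert np.isclose(M_ABBA[0, 0], M_ABBA[1, 1])     # M11 == M22 for ABBA
```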
\section{Graphical retrieval method}
In this section, we will introduce a graphical retrieval method based on Herpin's theorem and the previous method\cite{Smith}. From above discussion a symmetric unit cell can be represented by three equivalent layers $ABA$. Let the $z$-direction perpendicular to the plane of the layers. Assume each layer is orthorhombic anisotropic with the principal axes parallel in each direction and the effective permittivity and permeability tensors are given by Eq.~(\ref{epmu}). Let the x-z plane be the plane of incidence. The characteristic matrix of TM mode is defined as
\begin{equation}
\begin{pmatrix} H^i_y \vspace{1mm} \\ E^i_x \end{pmatrix} = {\bm M} \begin{pmatrix} H^o_y \vspace{1mm} \\ E^o_x \end{pmatrix} =
\begin{pmatrix} M_{11} & M_{12} \vspace{1mm} \\ M_{21} & M_{22} \end{pmatrix} \begin{pmatrix} H^o_y \vspace{1mm} \\ E^o_x \end{pmatrix} \,,
\end{equation}
where `i' refers to the input; and `o' refers to the output. By matching boundary condition, the relationship between the scattering and characteristic matrices of a unit cell is given by
\begin{equation}
\label{MinS}
\begin{split}
M_{11} &= M_{22} = \frac{\Bigl[\bigl(1+S_{21}\bigr)\bigl(1-S_{12}\bigr) + S_{11}S_{22}\Bigr]} {2S_{11}} \,,\\
M_{12} &= \frac{\Bigl[\bigl(1+S_{21}\bigr)\bigl(1+S_{12}\bigr) - S_{11}S_{22}\Bigr]} {2S_{11}{\cal Z}_o} \,,\\
M_{21} &= \frac{{\cal Z}_i}{2S_{11}} \Bigl[\bigl(1-S_{21}\bigr)\bigl(1-S_{12}\bigr) - S_{11}S_{22}\Bigr] \,.
\end{split}
\end{equation}
where ${\cal Z}_i$ and ${\cal Z}_o$ are, respectively, the generalized input and output impedances, whose values depend on the incidence angle and background permittivity and permeability at the input and output sides. In our definition, $S_{ij} (i=j)$ is related to the transmission and $S_{ij} (i\ne j)$ is related to the reflection. That means, for the TE polarization $S_{11}$ and $S_{21}$ are, respectively, the transmission and reflection coefficients of the electric field; whereas for the TM polarization the transmission and reflection coefficients of the electric field are given by $\dfrac{\epsilon_1k_{2z}}{\epsilon_2k_{1z}}S_{11}$ and $-S_{21}$, respectively. Note that as long as the layer has inversion symmetry, $M_{11}=M_{22}$ regardless of the input and output impedances. For the scattering matrix, $S_{11}=S_{22}$ only if ${\cal Z}_i={\cal Z}_o$. In addition, if $M_{11}=M_{22}$, we then also have $S_{12}=S_{21}$. From Eq.~(\ref{Herpin}), the dispersion relation and the effective impedance ${\cal Z}_e$ of the unit cell $ABA$ are given by
\begin{equation}
\label{DispZe}
\begin{split}
& \cos(K_ed) = M_{11} = \cos(k_{1z}d_1)\cos(k_{2z}d_2) - \eta^+ \sin(k_{1z}d_1)\sin(k_{2z}d_2) \,,\\
& {\cal Z}_e^2 = \frac{M_{21}}{M_{12}} = {\cal Z}_1^2
\frac{\sin(k_{1z}d_1)\cos(k_{2z}d_2) + \eta^+\cos(k_{1z}d_1)\sin(k_{2z}d_2) - \eta^-\sin(k_{2z}d_2)}
{\sin(k_{1z}d_1)\cos(k_{2z}d_2) + \eta^+\cos(k_{1z}d_1)\sin(k_{2z}d_2) + \eta^-\sin(k_{2z}d_2)} \,,
\end{split}
\end{equation}
where the phase $\psi_e=K_ed$ and the $K_e$ is the effective propagation constant. The subscript 1 and 2 refer to the layer A and B, respectively. The ${\cal Z}_1$ is the generalized impedance of the equivalent layer A. The z-component wave vector $k_{1z}$ and $k_{2z}$ are determined from the dispersion relations in Eq.~(\ref{Disp}). The $d_1$ is twice the thickness of layer A; and the $d_2$ is the thickness of layer B. The period of the symmetric unit cell is $d=d_1+d_2$. Other parameters in Eq.~(\ref{DispZe}) are given by
\begin{equation}
\label{eta}
\begin{split}
\eta^\pm &= \frac{1}{2} \left(\frac{\epsilon_{2x}\,k_{1z}} {\epsilon_{1x}\,k_{2z}} \pm \frac{\epsilon_{1x}\,k_{2z}} {\epsilon_{2x}\,k_{1z}}\right)
\hspace{.25in}\mbox{for TM} \,, \\
\eta^\pm &= \frac{1}{2} \left(\frac{\mu_{2x}\,k_{1z}} {\mu_{1x}\,k_{2z}} \pm \frac{\mu_{1x}\,k_{2z}} {\mu_{2x}\,k_{1z}}\right)
\hspace{.25in}\mbox{for TE} \,.
\end{split}
\end{equation}
Although some MMs are mesoscopic media, we restrict our consideration to those satisfying $d\ll\lambda$. After a tedious derivation, Eq.~(\ref{DispZe}) can then be simplified to
\begin{equation}
\label{TM-TE}
\begin{split}
&\mbox{TM:}\hspace{.3in} \frac{K_e^2}{k_0^2} = \overline\epsilon_x\,\overline\mu_y - \frac{\overline\epsilon_x}{\overline\epsilon_z} \frac{k_x^2}{k_0^2}
\,,\hspace{.3in} {\cal Z}_e^2 = \frac{\overline\mu_y}{\overline\epsilon_x} - \frac{k_x^2}{k_0^2\overline\epsilon_x\,\overline\epsilon_z} \,,\\
&\mbox{TE:}\hspace{.3in} \frac{K_e^2}{k_0^2} = \overline\epsilon_y\,\overline\mu_x - \frac{\overline\mu_x}{\overline\mu_z} \frac{k_x^2}{k_0^2}
\,,\hspace{.3in} {\cal Y}_e^2 = \frac{\overline\epsilon_y}{\overline\mu_x} - \frac{k_x^2}{k_0^2\overline\mu_x\,\overline\mu_z} \,.
\end{split}
\end{equation}
Here, the $\overline\epsilon_j$ and $\overline\mu_j\ (j=x,y,z)$ are the bulk values of the effective permittivities and permeabilities, respectively. They are related to the material parameters of the equivalent layers A and B through
\begin{equation}
\label{epmuEff}
\begin{split}
\overline\epsilon_p &= \frac{d_1\epsilon_{1p} + d_2\epsilon_{2p}}{d} \,, \hspace{.4in}
\overline\epsilon_z = \frac{d\,\epsilon_{1z}\,\epsilon_{2z}}{d_1\epsilon_{2z} + d_2\epsilon_{1z}} \,, \\
\overline\mu_p &= \frac{d_1\mu_{1p} + d_2\mu_{2p}}{d} \,, \hspace{.4in}
\overline\mu_z = \frac{d\,\mu_{1z}\,\mu_{2z}} {d_1\mu_{2z} + d_2\mu_{1z}} \,,
\end{split}
\end{equation}
where $p=x,y$. Equation~(\ref{epmuEff}) is a simplification of the most general linear case treated by Lakhtakia\cite{Lakhtakia}. In practice, the material parameters of the equivalent layers are unknown; one can only measure scattering parameters. In most experiments ${\cal Z}_i={\cal Z}_o$; combining Eqs.~(\ref{MinS}) and~(\ref{TM-TE}), we obtain the retrieval formulas:
\begin{equation}
\label{RetvLine}
\begin{split}
&\mbox{TM:}\hskip.35in Y_M = \overline\epsilon_x\,\overline\mu_y - \frac{\overline\epsilon_x}{\overline\epsilon_z}X \,,\hskip.3in
Y_m = \frac{\overline\mu_y}{\overline\epsilon_x} - \frac{X}{\overline\epsilon_x\,\overline\epsilon_z} \,, \\
&\mbox{TE:}\hskip.35in Y_E = \overline\epsilon_y\,\overline\mu_x - \frac{\overline\mu_x}{\overline\mu_z}X \,,\hskip.3in
Y_e = \frac{\overline\epsilon_y}{\overline\mu_x} - \frac{X}{\overline\mu_x\,\overline\mu_z} \,,
\end{split}
\end{equation}
where
\begin{eqnarray}
\label{X} &&X \equiv \epsilon_b\mu_b\sin^2\theta \,,\hspace{.2in} Y_m\equiv \frac{\mu_b}{\epsilon_b} S\cos^2\theta \,,\hspace{.2in}
Y_e\equiv \frac{\epsilon_b}{\mu_b} S\cos^2\theta \,,\hspace{.2in}
S\equiv \frac{\bigl(1-S_{21}\bigr)^2 - S_{11}^2} {\bigl(1+S_{21}\bigr)^2 - S_{11}^2} \,,\\
\label{Y} &&Y_M = Y_E\equiv \left\{\frac{1}{k_0d} \left[2m\pi\pm \cos^{-1}\left(\frac{1-S_{21}^2+S_{11}^2}{2S_{11}}\right)\right]\right\}^2
\,,\hspace{.2in} m = 0,\pm1,\pm2,\cdots \,,
\end{eqnarray}
where $\theta$ is the incidence angle. The $\epsilon_b$ and $\mu_b$ are, respectively, the background relative permittivity and permeability. The retrieval formulas in Eq.~(\ref{RetvLine}) provide four straight lines ($Y_M$, $Y_m$, $Y_E$, $Y_e$ vs. $X$), two for each polarization. The $Y_M$ and $Y_E$ represent the dispersion lines for TM and TE polarizations, respectively. The $Y_m$ and $Y_e$ are the corresponding impedance and admittance lines. These straight lines are easy to implement experimentally. After measuring the scattering parameters at several incidence angles and plotting the data according to Eqs.~(\ref{RetvLine})-(\ref{Y}), one can use a linear regression technique to calculate the slopes and Y-intercepts of the four lines. From the slopes and Y-intercepts, the six effective material parameters, $\overline\epsilon_j$ and $\overline\mu_j\ (j=x,y,z)$, can be retrieved simultaneously. Let $Y_M^0$ and $Y_m^0$ represent the Y-intercepts of the two lines in TM polarization, $S_M$ and $S_m$ the corresponding slopes of the lines; whereas $Y_E^0$, $Y_e^0$, $S_E$, and $S_e$ are the corresponding quantities for TE polarization. Thus,
\begin{equation}
\label{RetVal}
\begin{split}
\mbox{TM:} \hskip.2in &n_m = \pm\sqrt{Y_M^0} \,,\hskip.2in Z = \pm\sqrt{Y_m^0} \,,\hskip.2in \overline\epsilon_x = \frac{n_m}{Z}
\,,\hskip.2in \overline\mu_y = n_mZ \,,\hskip.2in \overline\epsilon_z = -\frac{\overline\epsilon_x}{S_M} \,,\\
\mbox{TE:} \hskip.2in &n_e = \pm\sqrt{Y_E^0} \,,\hskip.2in Y = \pm\sqrt{Y_e^0} \,,\hskip.2in \overline\epsilon_y = n_eY
\,,\hskip.2in \overline\mu_x = \frac{n_e}{Y} \,,\hskip.2in \overline\mu_z = -\frac{\overline\mu_x}{S_E} \,,
\end{split}
\end{equation}
where the $\pm$ signs in Eq.~(\ref{RetVal}) are fixed, for passive media, by requiring the imaginary part of the refractive index to be positive, i.e., $\Im(n_m)>0$ and $\Im(n_e)>0$, and the real parts of the impedance and admittance to be positive, i.e., $\Re(Z)>0$ and $\Re(Y)>0$\cite{Smith}.
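The TM half of Eq.~(\ref{RetVal}), together with the passive-medium sign rules, can be sketched in a few lines of code. This is our own illustration, not part of the paper: `retrieve_tm` is an assumed name, its inputs are the fitted complex Y-intercepts $Y_M^0$, $Y_m^0$ and slope $S_M$, and the TE case is analogous.

```python
import numpy as np

def retrieve_tm(Y_M0, Y_m0, S_M):
    """Recover the TM-polarization parameters from the fitted
    Y-intercepts (Y_M0, Y_m0) and the dispersion-line slope S_M.
    The square-root signs are fixed by Im(n) > 0 and Re(Z) > 0,
    as required for passive media."""
    n_m = np.sqrt(complex(Y_M0))
    if n_m.imag < 0:
        n_m = -n_m                      # enforce Im(n_m) > 0
    Z = np.sqrt(complex(Y_m0))
    if Z.real < 0:
        Z = -Z                          # enforce Re(Z) > 0
    eps_x = n_m / Z
    mu_y = n_m * Z
    eps_z = -eps_x / S_M
    return n_m, Z, eps_x, mu_y, eps_z
```

Feeding in intercepts and slopes built from known lossy parameters reproduces those parameters, which is a convenient self-test of the sign convention.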
\section{Resolving phase branch}
\label{Phase}
There are two branch-determination issues in Eq.~(\ref{Y}): the $\pm$ sign in front of the inverse cosine and the phase branch $m$. The inverse cosine $\cos^{-1}(\cdot)$ denotes the principal value, $0\le\cos^{-1}(\cdot)\le\pi$. Knowing only the cosine value, the phase angle cannot be determined uniquely; one also needs the sine, which can be obtained from the imaginary part of the cosine as follows:
\begin{equation}
A \equiv \frac{1-S_{21}^2+S_{11}^2}{2S_{11}} = \cos\varphi = \cos(\varphi_r+i\varphi_i)
= \cos\varphi_r\cosh\varphi_i - i\sin\varphi_r\sinh\varphi_i \,,
\end{equation}
where $\varphi_r$ and $\varphi_i$ are real. For a passive medium, $\varphi_i>0$, and thus $\sinh\varphi_i>0$. Therefore, the $\pm$ sign in front of the inverse cosine in Eq.~(\ref{Y}) can be resolved from the imaginary part of $A$:
\begin{equation}
\label{pmSign}
\varphi = \left\{ \begin{matrix} \cos^{-1}A & & \mbox{if } \Im(A) < 0 \\
2\pi - \cos^{-1}A & & \mbox{if } \Im(A) > 0 \end{matrix} \right. \,.
\end{equation}
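Equation~(\ref{pmSign}) translates directly into code. A sketch of ours (`resolve_phase` is an assumed name; `np.arccos` on complex input returns the principal value used in the text):

```python
import numpy as np

def resolve_phase(S11, S21):
    """Resolve the sign of the inverse cosine in Eq. (Y).  For a
    passive medium Im(phi) > 0, so the branch of arccos is chosen
    from the sign of Im(A), as in Eq. (pmSign)."""
    A = (1 - S21**2 + S11**2) / (2 * S11)
    phi = np.arccos(A + 0j)             # principal value, Re(phi) in [0, pi]
    if A.imag > 0:
        phi = 2 * np.pi - phi           # flip so that Im(phi) > 0
    return phi
```

Either branch returns a phase with $\cos\varphi = A$ and $\varphi_i > 0$, as required for a passive medium.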
After determining the $\pm$ sign, the next step is to resolve the correct phase branch $m$. This step can be confusing when a negative refractive index may be involved. Notice that among the four retrieval lines in Eq.~(\ref{RetvLine}), only the dispersion lines, $Y_M\sim X$ and $Y_E\sim X$, depend on the phase branch $m$; the impedance and admittance lines are independent of $m$. Exploiting this property, we provide three methods to resolve the correct phase branch $m$. \\
{\it Method-1, Using $\overline\epsilon_x^2$ and $\overline\mu_x^2$}:\ \
For the TM polarization in Eq.~(\ref{RetvLine}), $\overline\epsilon_x^2$ can be obtained either from the intercepts as $\overline\epsilon_x^2=\dfrac{Y_M^0}{Y_m^0}$ or from the slopes as $\overline\epsilon_x^2=\dfrac{S_M}{S_m}$, and the two results should be close; when the branch is wrong, they can differ significantly. Similarly, for the TE polarization, $\overline\mu_x^2$ can be obtained from either $\overline\mu_x^2=\dfrac{Y_E^0}{Y_e^0}$ or $\overline\mu_x^2=\dfrac{S_E}{S_e}$. Hence, the correct branch $m$ is the one that minimizes the absolute value of the difference, i.e.
\begin{equation}
\label{Branch1}
\begin{split}
\mbox{TM:} \hspace{.3in} &\min\left\{\left|\dfrac{Y_M^0(m)}{Y_m^0}-\dfrac{S_M(m)}{S_m}\right|:\ \ m=0,\pm1,\pm2,\cdots\right\} \,, \\
\mbox{TE:} \hspace{.3in} &\min\left\{\left|\dfrac{Y_E^0(m)}{Y_e^0}-\dfrac{S_E(m)}{S_e}\right|:\ \ m=0,\pm1,\pm2,\cdots\right\} \,.
\end{split}
\end{equation}
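Method-1 is easily automated: only the dispersion line depends on $m$, so one can refit it for each candidate branch and keep the minimizer of Eq.~(\ref{Branch1}). A sketch of ours for the TM case (`best_branch`, the resolved phases `phi`, and the pre-measured impedance-line data `Y_m` are assumed inputs):

```python
import numpy as np

def best_branch(phi, X, Y_m, k0, d, m_range=range(-2, 3)):
    """Method-1 (TM case): for each candidate branch m, rebuild the
    dispersion-line data Y_M(m) = [(phi + 2*pi*m)/(k0*d)]**2, fit both
    lines against X, and keep the m minimizing
    |Y_M0(m)/Y_m0 - S_M(m)/S_m|  (both ratios estimate eps_x**2)."""
    S_m, Y_m0 = np.polyfit(X, Y_m, 1)      # impedance line: independent of m
    best, best_err = None, np.inf
    for m in m_range:
        Y_M = ((phi + 2 * np.pi * m) / (k0 * d)) ** 2
        S_M, Y_M0 = np.polyfit(X, Y_M, 1)  # dispersion line for this branch
        err = abs(Y_M0 / Y_m0 - S_M / S_m)
        if err < best_err:
            best, best_err = m, err
    return best
```

On synthetic lossless data with $\epsilon_x=2$, $\epsilon_z=4$, $\mu_y=1.5$, the search recovers $m=0$ when the optical thickness is small and $m=1$ when the measured phase has wrapped once past $2\pi$.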
{\it Method-2, Using $\dfrac{\overline\mu_y}{\overline\epsilon_z}$ and $\dfrac{\overline\epsilon_y}{\overline\mu_z}$}:\ \
Alternatively, we can use $\dfrac{\overline\mu_y}{\overline\epsilon_z}$ for TM and $\dfrac{\overline\epsilon_y}{\overline\mu_z}$ for TE as criteria to determine the correct phase branch, i.e.
\begin{equation}
\label{Branch2}
\begin{split}
\mbox{TM:} \hspace{.3in} &\min\left\{\left|Y_M^0(m) S_m - Y_m^0 S_M(m)\right|:\ \ m=0,\pm1,\pm2,\cdots\right\} \,, \\
\mbox{TE:} \hspace{.3in} &\min\left\{\left|Y_E^0(m) S_e - Y_e^0 S_E(m)\right|:\ \ m=0,\pm1,\pm2,\cdots\right\} \,.
\end{split}
\end{equation}
{\it Method-3, Using $\overline\epsilon_z\,\overline\mu_y$ and $\overline\epsilon_y\,\overline\mu_z$}:\ \
The third method is to use $\overline\epsilon_z\,\overline\mu_y$ for TM and $\overline\epsilon_y\,\overline\mu_z$ for TE as criteria to select the correct phase branch. Then, the algorithm becomes
\begin{equation}
\label{Branch3}
\begin{split}
\mbox{TM:} \hspace{.3in} &\min\left\{\left|\frac{Y_M^0(m)}{S_M(m)} - \frac{Y_m^0}{S_m}\right|:\ \ m=0,\pm1,\pm2,\cdots\right\} \,, \\
\mbox{TE:} \hspace{.3in} &\min\left\{\left|\frac{Y_E^0(m)}{S_E(m)} - \frac{Y_e^0}{S_e}\right|:\ \ m=0,\pm1,\pm2,\cdots\right\} \,.
\end{split}
\end{equation}
Among the three methods above, the branch number $m$ predicted by the first two is consistent at all frequencies in the several examples we tested. The third method predicts the same result as the first two at most frequencies, but can occasionally differ by $\pm1$ near the phase-transition frequencies in resonant regimes. Note that before applying Eqs.~(\ref{Branch1}), (\ref{Branch2}), or~(\ref{Branch3}) to resolve the phase branch, the $\pm$ sign in front of the inverse cosine in Eq.~(\ref{Y}) must be determined first. In the examples we tested, the above branch-resolving techniques require no reference to adjacent frequencies when determining the correct phase branch; relying on adjacent frequencies can become confusing when both positive and negative refractive indices are present in the same frequency band.
\begin{figure}[hbt]
\centering
\subfigure[TM polarization: dispersion lines (left panels) and impedance lines (right panels).]
{
\label{RetrvLine:a}
\includegraphics[width=.45\textwidth]{BiRetv3}
}
\hspace{1cm}
\subfigure[TE polarization: dispersion lines (left panels) and admittance lines (right panels).]
{
\label{RetrvLine:b}
\includegraphics[width=.45\textwidth]{BiRetv4}
}
\caption{Retrieval lines at the frequency $30\,$THz calculated from the scattering matrix of a single layer of unit cells at seven incidence angles (denoted by circles) uniformly distributed from $0$ to $30$ degrees. The horizontal axis is $\sin^2\theta$. The background medium is vacuum. Top panels: real part. Bottom panels: imaginary part.}
\label{RetrvLine}
\end{figure}
\begin{figure}[hbt]
\centering
\subfigure[One-period thickness.]
{
\label{Branch:a}
\includegraphics[width=.45\textwidth]{BiRetv5p1}
}
\hspace{1cm}
\subfigure[Six-period thickness.]
{
\label{Branch:b}
\includegraphics[width=.45\textwidth]{BiRetv5p6}
}
\caption{Branch number predicted by method-3 [Eq.~(\ref{Branch3})] vs. frequency. Since $d\ll\lambda$, most frequencies lie in the fundamental branch $m=0$. The branch $m=-1$ marks the frequencies of negative refractive index [$n<0$, see Fig.~\ref{nZY:a}], and $m=1$ corresponds to the frequencies with high values of the positive refractive index [see Fig.~\ref{nZY:a}]. Top: TM polarization. Bottom: TE polarization.}
\label{Branch}
\end{figure}
\section{Discussion}
In this section, we provide an example showing how to implement the graphical method from the scattering parameters. Typically the background is lossless, and thus the $X$ variable in Eq.~(\ref{RetvLine}) is real, whereas the slopes and Y-intercepts are usually complex. The real and imaginary parts of the retrieval lines should therefore be plotted separately, as shown in Fig.~\ref{RetrvLine}, which was calculated from the scattering matrix. A Drude model is used for the effective material parameters of the equivalent layers A and B,
\begin{equation}
\label{Drude}
\epsilon = 1 - \frac{f_{ep}^2}{f^2-f_{er}^2+i\gamma f} \,,\hspace{.3in} \mu = 1 - \frac{f_{mp}^2}{f^2-f_{mr}^2+i\gamma f} \,,
\end{equation}
where $f_{ep}=30\,$THz, $f_{mp}=20\,$THz, and $\gamma=3\,$THz. The $\epsilon_x$ and $\mu_x$ are described by Eq.~(\ref{Drude}) with resonances at $f_{er}=20\,$THz and $f_{mr}=25\,$THz for layer A, and at $f_{er}=35\,$THz and $f_{mr}=37\,$THz for layer B. The other parameters for A are $\epsilon_y=\epsilon_x-0.3$, $\epsilon_z=\epsilon_x+2$, $\mu_y=\mu_x-0.5$, and $\mu_z=1$; for B they are $\epsilon_y=\epsilon_x-0.8$, $\epsilon_z=\epsilon_x-0.5$, $\mu_y=\mu_x+0.2$, and $\mu_z=\mu_x-0.6$. The thickness is $240\,$nm for layer A and $320\,$nm for layer B, so the period of the unit cell ($ABA$) is $800\,$nm. Figure~\ref{Branch} shows the phase branch $m$ predicted at each frequency in the regime of interest for thicknesses of one (a) and six (b) periods. Since $d\ll\lambda$ in our example, for a one-unit-cell thickness most frequencies lie in the fundamental branch $m=0$, except in the regime of negative refractive index [see Fig.~\ref{nZY}(a)], where $m=-1$. For the six-period thickness, the phase branch jumps to $m=1$ in the frequency range $18\sim20\,$THz due to the high values of the positive refractive index in this regime [see Fig.~\ref{nZY}(a)]. \\
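For reproducibility, the resonant material model of Eq.~(\ref{Drude}) can be evaluated as below. This is our own helper (the name `drude` and THz units are our choices, matching the example parameters):

```python
def drude(f, f_p, f_r, gamma=3.0):
    """Resonant Drude response of Eq. (Drude); frequencies in THz."""
    return 1 - f_p**2 / (f**2 - f_r**2 + 1j * gamma * f)

# eps_x of layer A at 30 THz: plasma frequency 30 THz, resonance 20 THz
eps_xA = drude(30.0, 30.0, 20.0)
```

Far above the resonance the response tends to 1, and near the resonance the real part can become negative, which is what produces the negative-index bands discussed below.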
\begin{figure}[hbt]
\centering
\subfigure[Retrieved refractive index for TM polarization (left panels) and for TE polarization (right panels).]
{
\label{nZY:a}
\includegraphics[width=.45\textwidth]{BiRetv6}
}
\hspace{1cm}
\subfigure[Left panels: Retrieved impedance for TM polarization. Right panels: Retrieved admittance for TE polarization.]
{
\label{nZY:b}
\includegraphics[width=.45\textwidth]{BiRetv7}
}
\caption{Effective refractive index, impedance, and admittance retrieved from a single unit-cell layer. Top panels: real part. Bottom panels: imaginary part.}
\label{nZY}
\end{figure}
\begin{figure}[hbt]
\centering
\subfigure[The effective $\overline\epsilon_x$ (left panels) and $\overline\mu_x$ (right panels).]
{
\label{epmuxy:a}
\includegraphics[width=.45\textwidth]{BiRetv8}
}
\hspace{1cm}
\subfigure[The effective $\overline\epsilon_y$ (left panels) and $\overline\mu_y$ (right panels).]
{
\label{epmuxy:b}
\includegraphics[width=.45\textwidth]{BiRetv9}
}
\caption{In-plane values of the retrieved material parameters. Upper panels: real parts. Lower panels: imaginary parts.}
\label{epmuxy}
\end{figure}
\begin{figure}[h!]
\centering\includegraphics[width=.45\textwidth]{BiRetv10}
\caption{The effective $\overline\epsilon_z$ (right panels) and $\overline\mu_z$ (left panels). Upper panels: real parts. Lower panels: imaginary parts.}
\label{epmuz}
\end{figure}
Shown in Fig.~\ref{nZY} are the effective index of refraction, impedance, and admittance retrieved from one unit cell; these values are the same as those retrieved from six unit cells. In this example, both TM and TE polarizations contain two frequency bands where the effective index of refraction is negative. As illustrated in Fig.~\ref{Branch}, these two negative bands are accurately captured by the branch-resolving techniques provided in Sec.~\ref{Phase}. The six retrieved effective permittivities and permeabilities, obtained from the graphical-retrieval and phase-unwrapping scheme, i.e., Eqs.~(\ref{RetVal}), (\ref{pmSign}), and~(\ref{Branch3}), are shown as the blue solid curves in Figs.~\ref{epmuxy} and~\ref{epmuz}. For comparison, the green circles show the effective permittivities and permeabilities calculated from Eq.~(\ref{epmuEff}). The retrieved effective material parameters agree very well with the results of the effective medium theory. These values represent the bulk values of the material parameters, since they are extracted from symmetric unit cells. We have successfully applied the graphical method and phase-unwrapping techniques to our recent experiment\cite{Roberts}. From an experimental point of view, the straight-line graphical method is more accurate than methods that use a single data point: the linear regression is based on data collected at several incidence angles, so it has an averaging effect that reduces the measurement uncertainty. Using several incidence angles also helps to resolve the phase ambiguity. After the effective parameters of the bulk material have been determined, Eq.~(\ref{epmuEff}) can be used to recover the effective parameters of the equivalent layers, if desired; this extra benefit may help to improve the unit-cell design. \\
When the tensors in Eq.~(\ref{epmu}) are rotated about the z-axis without rotating the coordinate system, the off-diagonal elements in the x-y plane are no longer zero ($\epsilon_{xy}\ne0$ and $\mu_{xy}\ne0$), so the x and y components of the electromagnetic fields are coupled and the dispersion relations in Eq.~(\ref{Disp}) no longer hold. Without proper modification, the present parameter-retrieval scheme cannot be applied to this scenario. The original classification of electromagnetic materials as left-handed or right-handed can cause confusion with chiral materials, which are an important class of electromagnetic materials\cite{McCall}. To avoid such confusion, we can classify media by their phase-velocity characteristics: when the phase velocity and the time-averaged Poynting vector form an acute angle, the medium has positive-phase-velocity characteristics; when they form an obtuse angle, the medium has negative-phase-velocity characteristics.
\section{Conclusions}
In conclusion, we have shown that symmetric unit cells can often be constructed in metamaterials regardless of the number of sub-layers and the complexity of each sub-layer in the original unit cells. The graphical retrieval method and phase-unwrapping techniques presented here may be a useful tool for metamaterial design.
\section*{Acknowledgments}
The author thanks NAVAIR's ILIR program and the program managers Mark Spector and Steven Russell at the Office of Naval Research for funding this work. \\
\end{document}
\section*{Acknowledgements}
BKS is supported by National Science Foundation (NSF) CAREER Award DMS-1945396. SK is partially supported by JSPS KAKENHI Grant Number 21H03403. SV, KF, and SK were supported by JST, CREST Grant Number JPMJCR15D3, Japan.
\section{Conclusion \& Discussion}
\label{sec:discussion}
In this paper, we introduce a methodology for constructing filtrations which are computationally efficient, provably robust, and statistically consistent even in the presence of outliers. To our knowledge, our results are the first of this type.
To elaborate, we introduced \md{}, $\dnq$, as a computationally efficient and outlier-robust variant of the distance function based on the median-of-means principle, and established some of its theoretical properties. In particular, when the samples contain outliers in the adversarial contamination setting, we (i) showed that the $\dnq$-weighted filtrations are statistically consistent estimators of their true (uncontaminated) population counterparts, (ii) characterized their convergence rate in the bottleneck metric, and (iii) provided uniform confidence bands in the space of persistence diagrams. Furthermore, we used an empirical influence analysis framework to quantify the robustness of the $\dnq$-filtrations, and provided a framework for selecting the parameter $Q$.
Topological inference in the presence of outliers is a topic which has received considerable attention in recent years, and with good reason. We would like to highlight that the objective in this paper has been to develop a framework of topological inference in which the population target is the persistence diagram $\dgm\pa{\bbv[\Xb]}$. Therefore, the proposed methodology disregards, to a large extent, the distribution of mass on the support. As a future direction, we would like to explore a framework of inference which incorporates information from both the geometry of the underlying space and the structure of the probability measure generating the data. As noted in \citet[Section~5]{anai2019dtm}, their results follow from only a few simple properties of the distance-to-measure. We build on their foundation to provide some generalizations which we hope will be useful in the analysis of other estimators within this framework.
\section{Main Results}
\label{sec:main}
In the following, we present a MoM estimator for obtaining outlier-robust persistence diagrams in Section~\ref{sec:proposal}; its statistical properties, along with the influence analysis, are presented in Sections~\ref{sec:statistical}--\ref{sec:influence}. In Section~\ref{sec:lepski} we present a method for adaptively calibrating the MoM tuning parameter using a data-driven procedure. The proofs of all results are deferred to Section~\ref{sec:proofs}.
\subsection{Empirical distance function using the Median-of-Means principle}
\label{sec:proposal}
Let $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n} \subset \R^d$ be a sample of $n$ observations. We assume that the samples are obtained under sampling setting \samp{}. We emphasize that this setting encompasses the following~scenarios:
\begin{enumerate}[label=(\alph*)]
\item The samples $\Xn$ are obtained i.i.d.~from $\pr \in \mathcal{P}(\Xb, a, b)$ for compact $\Xb \subset \R^d$.
\item The samples are obtained from a distribution $\pr = (1-\pi)\pr_{signal} + \pi\pr_{noise}$, where $\pi \in (0, 1/2)$ and $\pr_{signal} \in \mathcal{P}(\Xb, a, b)$.
\item $\qty{\widetilde{\Xv}_1,\widetilde{\Xv}_2, \dots, \widetilde{\Xv}_n}$ is first sampled i.i.d.~from $\pr \in \mathcal{P}(\Xb, a, b)$, and then handed over to an adversary. The adversary is then free to examine the $n$ points, and replace any $m < n/2$ of them with some points of their choice. The modified dataset, $\Xn$, is then shuffled and handed to the topologist for inference, who has no prior knowledge of the original $\qty{\widetilde{\Xv}_1,\widetilde{\Xv}_2, \dots, \widetilde{\Xv}_n}$.
\end{enumerate}
The central objective is to derive a statistically consistent and computationally efficient estimator of $\dgm\pa{\bbv[\Xb]}$ which is robust to the misspecification scenarios detailed above, using the samples $\Xn$. To this end, the MoM Distance (\md{}) function $\dnq$ is defined as follows.
\begin{definition}[\md{}]
Given a collection of points $\Xn \subset \R^d$ and $1 \le Q \le n$, let $\pB{S_1, S_2, \dots S_Q}$ be a partition of $\Xn$ into $Q$ disjoint blocks, such that each block $S_q \subset \Xn$ comprises $|{S_q}| = \floor{n/Q}$ samples\footnote{Without loss of generality, we may assume that $n$ is divisible by $Q$, so that $n/Q \in \Z_+$.}. The MoM distance function $\dnq: \R^d \rightarrow \R_{\ge 0}$ is defined to be
\eq{
\dnq(\yv) \defeq \med\qty\Big{ \dsf_{n, S_q}(\yv) : q \in [Q] } = \med\qty\Big{ \inf_{\xv \in S_q} \norm{\xv -\yv} : q \in [Q]}.
\label{eq:momdist}
}
The proposed outlier robust persistence diagram $\dgm\pa{ \bbv[{\Xn, \dnq}] }$ is then obtained using $\dnq$-weighted filtration $V[\Xn, \dnq]$.
\label{def:mom}
\end{definition}
Note that we recover the usual empirical distance function, i.e., $\dsf_{n, 1} \equiv \dsf_n$ when $Q = 1$.
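Definition~\ref{def:mom} is simple to implement directly. A sketch of ours (`momdist` is an assumed name; the random shuffle-partition into $Q$ blocks is one concrete choice of partition):

```python
import numpy as np

def momdist(points, query, Q, rng=None):
    """MoM distance d_{n,Q}(y): partition the samples into Q blocks,
    take the distance from the query to each block, return the median."""
    rng = np.random.default_rng(rng)
    n = len(points)
    idx = rng.permutation(n)[: (n // Q) * Q]   # n/Q points per block
    blocks = np.array_split(points[idx], Q)
    dists = [np.linalg.norm(S - query, axis=1).min() for S in blocks]
    return np.median(dists)
```

With $Q=1$ this reduces to the empirical distance function, while for $Q > 2m$ a single far-away outlier can corrupt at most one block and therefore leaves the median essentially unchanged.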
\begin{remark} For each block $S_q$, the distance function $\dsf_{n, S_q} \in L_\infty({\R^d})$ can be viewed as the Kuratowski embedding of $S_q$. The most natural generalization of the multivariate median-of-means estimators proposed by \cite{minsker2015geometric} and \cite{lerasle2019monk} would suggest the following candidate for \md{}:
\eq{
\widetilde{\dsf}_{n,Q} = \arginf_{f \in L_{\infty}(\R^d)} \sum_{q=1}^{Q} \norminf{f - \dsf_{n,S_q}},\nonumber
}
where the median under consideration corresponds to the geometric median in $L_{\infty}(\R^d)$. Although $\widetilde{\dsf}_{n,Q}$ has theoretical appeal, its computation involves an infinite-dimensional optimization problem, making it infeasible in practice. In contrast, the proposed estimator in Definition~\ref{def:mom} is a pointwise median-of-means estimator with a tractable computational cost. This has the promise of being highly modular, and widely applicable in many practical settings. The technical difficulty lies in showing that the pointwise estimator $\dnq$ achieves an exponential concentration bound around $\dx$ in the $L_{\infty}(\R^d)$ metric.
\end{remark}
Similar to the proposed methodology in Definition~\ref{def:mom}, the procedure of partitioning the data $\Xn$ into smaller subsets, and then aggregating them as an estimator of persistent homology has been shown to satisfy several favorable properties by \cite{solomon2021geometry} and \cite{gomez2021curvature}, albeit in a different context. We argue that a similar principle, in our setting, also leads to provably robust estimators.
\textbf{Computational considerations.} Given a weighting function $f$, the first step in constructing the $f$-weighted filtration is to estimate the weights associated with the sample points, i.e., $w_i = f(\Xv_i)$ for all $i \in [n]$. After this step, the computational complexity of constructing the $f$-weighted filtration $V[\Xn, f]$ is independent of the choice of the weighting function $f$. Table~\ref{tab:comparison} compares the computational complexity of three robust filtrations: (i) the \md{} $\dnq$, (ii) the distance-to-measure $\delta_{n,k}$ (DTM, \citealp{anai2019dtm}), and (iii) the robust kernel density estimator $\fns$ (RKDE, \citealp{vishwanath2020robust}). Given a test point $\xv \in \R^d$, the distance from $\xv$ to each block $S_q$ is optimally computed using a $k$-d tree. The pre-processing step, which involves the construction of the $k$-d tree \citep{wald2006building}, typically has time complexity $O(\abs{S_q} \log \abs{S_q})$ for each block $q \in [Q]$ with $\abs{S_q} = n/Q$. Thereafter, $O(\log \abs{S_q})$ time is needed for a single query \citep[Chapter~10]{cormen2009introduction}. The results for each block $q\in[Q]$ are then aggregated to compute the median, which takes an additional $O(Q)$ time per query. This results in a total evaluation time of $O(n \cdot (Q + \log n/Q))$ for $n$ samples.
The distance-to-measure with parameter $m$ requires evaluating the distance to the $k$th nearest neighbor for $k=\floor{mn}$. This is, again, optimally computed using a $k$-d tree; however, unlike $\dnq$, the $k$-d tree must be constructed over all $n$ samples, resulting in a pre-processing time complexity of $O(n\log n)$. Thereafter, evaluation takes $O(k\log n)$ per query point, resulting in $O(n \cdot k\log n)$ for evaluation over $n$ samples. The robust KDE $\fns$, on the other hand, requires $O(n^2)$ time to compute the Gram matrix in each iteration of the KIRWLS algorithm, and takes $O(n^2\ell)$ for $\ell$ outer loops. After this pre-processing step, the coefficients of $\fns$ may be used to evaluate each query in $O(n)$ time. The three weighted filtrations $V[\Xn, \dnq]$, $V[\Xn, \delta_{n, k}]$ and $V[\Xn, \fns]$ are illustrated in Figure~\ref{fig:robust-examples}.
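The accounting above can be reproduced with one $k$-d tree per block, e.g., via SciPy's `cKDTree`. This is a sketch of ours (`MoMDist` is an assumed name, not the paper's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

class MoMDist:
    """MoM distance with per-block k-d trees: pre-processing costs
    O((n/Q) log(n/Q)) per block; each query costs one nearest-neighbour
    lookup per block plus a median, i.e. O(Q + log(n/Q))."""
    def __init__(self, points, Q, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(points))[: (len(points) // Q) * Q]
        self.trees = [cKDTree(S) for S in np.array_split(points[idx], Q)]

    def __call__(self, queries):
        d = np.stack([t.query(queries)[0] for t in self.trees])
        return np.median(d, axis=0)        # one value per query point
```

With $Q=1$ the class reduces to an ordinary nearest-neighbour distance evaluator over the full sample.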
\begin{table}\caption{Comparison of computational complexity for robust weighted filtrations.}
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{llll}
\toprule
Method & Pre-processing & Evaluation & Provably robust? \\
\midrule
$V[\Xn, \dnq]$ (\md{}--filtration) & $O\qty\Big( \f nQ \log(n/Q))$ & $O\qty\Big( n \cdot (Q + \log n/Q) )$ & Yes \\
$V[\Xn, \delta_{n, k}]$ \citep[DTM--filtration]{anai2019dtm} & $O( n \log n )$ & $O( kn \log n )$ & No \\
$V[\Xn, \fns]$ \citep[RKDE--filtration]{vishwanath2020robust} & $O(n^2 \ell)$ & $O(n^2)$ & No \\
\bottomrule
\end{tabular}}
\medskip
\scriptsize $n=\#$samples, $Q=\#$blocks, $k=\floor{mn}=$ DTM parameter, $\sigma=$RKDE bandwidth, and $\ell=\#$iterations of KIRWLS algorithm
\label{tab:comparison}
\end{table}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v_momdist.pdf}
\caption{$V^t[\Xn, \dnq]$, \md{}}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v_dtm.pdf}
\caption{$V^t[\Xn, \delta_{n, k}]$, DTM}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v_rkde.pdf}
\caption{$V^t[\Xn, \fns]$, RKDE}
\end{subfigure}
\caption{Comparison of $V^t[\Xn, f]$ for the median filtration value $t=\median\pb{w_1, w_2, \dots, w_n}$ and $p=\infty$.}
\label{fig:robust-examples}
\end{figure}
We conclude this section with the following result, which establishes that \md{} is $1$-Lipschitz.
\begin{lemma}
Given samples $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n}$ and $Q < n$,
\eq{
\abs{ \dnq(\xv) - \dnq(\yv) } \le \norm{ \xv - \yv }, \qq{for all $\xv, \yv \in \R^d$.}\nn
}
\label{lemma:lipschitz}
\end{lemma}
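Lemma~\ref{lemma:lipschitz} holds for any fixed partition, since each block distance is $1$-Lipschitz and the pointwise median preserves the Lipschitz constant. A quick numerical check of ours (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.normal(size=(60, 2))
blocks = np.array_split(pts, 6)        # a fixed partition with Q = 6

def dnq(y):
    """Median over blocks of the distance from y to each block."""
    return np.median([np.linalg.norm(S - y, axis=1).min() for S in blocks])

# |d(x) - d(y)| - ||x - y|| should never be positive
xs, ys = rng.normal(size=(40, 2)), rng.normal(size=(40, 2))
max_gap = max(abs(dnq(x) - dnq(y)) - np.linalg.norm(x - y)
              for x, y in zip(xs, ys))
```

Here `max_gap` stays non-positive (up to floating-point error) for every probe pair, consistent with the lemma.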
\subsection{Statistical properties of $\bbv[\dnq]$}
\label{sec:statistical}
We begin our analysis by characterizing the persistence diagrams obtained using the sublevel filtration of $\dnq$. The following result (proved in Section~\ref{proof:theorem:momdist-sublevel}), establishes that $\dgm\pa{\bbv[\dnq]}$ is a statistically consistent estimator of target population quantity $\dgm\pa{\bbv[\Xb]}$ under sampling setting~\samp{}, and establishes its rate of convergence in the $\Winf$ metric.
\begin{theorem}[Sublevel filtration]
Suppose $\pr \in \mathcal{P}(\bX, a, b)$ is a probability distribution with support~$\bX$ satisfying the $(a, b)-$standard condition, and $\Xn$ is obtained under sampling condition \samp{}. For $2m < Q < n$ and for all $\delta < e^{-(1+b)Q}$,
\eq{
\pr\qty\Bigg{ \Winf\qty\bigg( \dgm\pa{\bbv[\dnq]}, \dgm\pa{\bbv[\Xb]} ) \le \mathfrak{g}(n, Q, a, b) } \ge 1 - \delta,
\label{eq:mom-confidence-band}
}
where
\eq{
\mathfrak{g}(n, Q, a, b) = \qty\bigg( \frac{Q\log(n / Q)}{a n} + \frac{4Q \log(1/\delta)}{a(Q-2m)n} )^{1/b}.\nn
}
Furthermore, if the number of outliers grows with $n$ as $m = cn^\epsilon$ for some $c > 0$ and $\epsilon \in [0, 1)$, then
\eq{
\E\qty\Bigg[ \Winf\qty\bigg( \dgm\pa{\bbv[\dnq]}, \dgm\pa{\bbv[\Xb]} ) ] \lesssim \pa{\f{\log n}{n^{1-\e}}}^{1/b}.
\label{eq:rate-with-noise}
}
\label{theorem:momdist-sublevel}
\end{theorem}
\begin{remark}
The following salient observations can be made from Theorem~\ref{theorem:momdist-sublevel}.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item In addition to characterizing the uniform rate of convergence of $\dnq$, \eref{eq:mom-confidence-band} also provides a uniform confidence band for $\dgm\pa{\bbv[{\Xb}]}$ in the presence of outliers. The two terms appearing in $\mathfrak{g}(n, Q, a, b)$ may be interpreted as follows: The first term is similar to the term appearing in \citet[Theorem~2]{chazal2015convergence} with an effective sample size of $n/Q$ instead of $n$, which is a consequence of the Median-of-Means procedure. The second term incorporates the desired confidence level $\delta$ adaptive to the volume dimension $b>0$, with an effective sample size of $n/Q$. Notably, as the number of outliers $m$ increases, the number of blocks $Q$ must also increase; thereby widening the resulting confidence band.
\item The complex inter-dependence of the parameters $m, Q$ and $\delta$ in \eref{eq:mom-confidence-band} is simplified in \eref{eq:rate-with-noise}. In the absence of outliers, i.e., when $m=0$ and $Q=1$, we recover the same convergence rate as in \citet[Theorem~4]{chazal2015convergence},
\eq{
\E\qty\Bigg[ \Winf\qty\bigg( \dgm\pa{\bbv[\dnq]}, \dgm\pa{\bbv[\Xb]} ) ] \lesssim \pa{\f{\log n}{n}}^{1/b}.
\label{eq:dnq-rate}
}
Specifically, it becomes apparent that accommodating for more adverse noise conditions comes at the price of an attenuated rate of convergence.
\item The admissible confidence level $\delta$ for constructing the confidence band is implicitly dependent on the parameter $Q$. This phenomenon is unavoidable with estimators based on the median-of-means principle. We refer the reader to \citet[Section~2.4]{lugosi2019mean} for a comprehensive discussion on how robustness must come at the price of the confidence level $\delta$ being restricted.
\end{enumerate}
\end{remark}
The proof of Theorem~\ref{theorem:momdist-sublevel} relies on the following lemma, which allows us to control the deviation of a pointwise median-of-means estimator from its uncontaminated population counterpart in terms of a Binomial tail probability.
\begin{lemma}
Suppose $\pr \in \mathcal{P}(\bX)$ for $\bX \subset \R^d$ and $\Xn = \Xnm \cup \Ym$ is obtained under sampling condition \samp{} with $\Xnm$ observed i.i.d. from $\pr$. Let $\pr_n$ denote the empirical measure associated with $\Xn$ and for $2m < Q < n$, let $\pr_q$ be the empirical measure associated with the block $S_q$ for all $q \in [Q]$. Given a statistical functional $T : \mathcal{P}(\R^d) \rightarrow L_{\infty}(\R^d)$, let $T_Q(\pr_n) \in L_{\infty}(\R^d)$ be the pointwise MoM estimator given by
\eq{
T_Q(\pr_n)(\xv) = \med\qty\Big{ T(\pr_q)(\xv) : q \in [Q] }, \ \ \textup{ for all } \xv \in \R^d.\nn
}
Then, for $t > 0$
\eq{
\pr\qty\bigg({ \norminf{ T_Q(\pr_n) - T(\pr) } > t }) \le \pr\qty({ \sum_{q \in A} \xi_{q}(t; n, Q) > \f{Q-2m}{2} }),\nn
}
where $A = \qty{ q \in [Q] : S_q \cap \Y_m = \varnothing}$ are the indices for the blocks containing no outliers, and
\eq{
\xi_{q}(t; n, Q) \defeq \mathbbm{1}\qty\Big( \norminf{ T(\pr_q) - T(\pr) } > t ) \ \ \textup{ for all } q \in A. \nn
}
\label{lemma:mom}
\end{lemma}
The statement of Lemma~\ref{lemma:mom} holds for empirical processes arising from general classes of pointwise median-of-means estimators. In particular, taking $T(\pr_q) = \dsf_{n, S_q}$ to be the distance function w.r.t.~block $S_q$, the estimator $\dnq$ satisfies the conditions of Lemma~\ref{lemma:mom}. We also point out that the exponential concentration bound in Theorem~\ref{theorem:momdist-sublevel} is strictly better than similar bounds for other pointwise MoM estimators, e.g., \citet[Theorem~2]{humbert2020robust}. This is owing to the Chernoff bound (instead of a Hoeffding bound) used for bounding the Binomial tail probability appearing in Lemma~\ref{lemma:mom}, which provides a significant gain for Binomial random variables with shrinking success probability \citep{Hagerup1990AGT}.
\subsection{Statistical properties of $V[\Xn, \dnq]$}
\label{sec:statistical-2}
In practice, the sublevel filtration $V[\dnq]$ cannot be computed exactly, and one must rely on approximations using cubical homology. To this end, we now turn our attention to $\dnq$-weighted filtrations computed on the sample points directly. Before we study the statistical properties of the $\dnq$--weighted filtration, we provide a useful characterization of the persistence diagram obtained using the sublevel sets of $\dnq$.
\begin{lemma}
Given samples $\Xn$ and $Q<n$, $V[\dnq]$ and $V{[\R^d, \dnq]}$ are $(\id, \alpha)-$interleaved for $\alpha: t \mapsto 2^{\f{p-1}{p}} t$ for all $p \ge 1$. In particular, $V[\dnq] = V{[\R^d, \dnq]}$ when $p=1$.
\label{lemma:sublevel-equivalence}
\end{lemma}
We now turn our attention to the $\dnq$-weighted filtration $V[\Xn, \dnq]$. The following result establishes that the persistence module $\bbv[{\Xn, \dnq}]$ is sufficiently regular.
\begin{lemma}[Regularity]
For $\Xn$ obtained under sampling setting \samp{} and $\dnq$ defined in \eref{eq:momdist}, the persistence module $\bbv[{\Xn, \dnq}]$ is $q-$tame and pointwise finite-dimensional.
\label{lemma:momdist-regularity}
\end{lemma}
The proof of Lemma~\ref{lemma:momdist-regularity} is a direct consequence of \citet[Proposition~{3.1}]{anai2019dtm}, and ensures that the persistence diagram $\dgm\pa{\bbv[\Xn, \dnq]}$ is well-defined.
Next, in order to establish that $\dgm\pa{\bbv[{\Xn, \dnq}]}$ is a consistent estimator of $\dgm\pa{\bbv[\bX]}$ and to construct uniform confidence bands in the space of persistence diagrams $(\Omega, \winf)$, we need a tighter control for how the two persistence modules are interleaved. To this end, Lemmas \ref{lemma:ab-filtration} and \ref{lemma:ab-module} will be of assistance, and serve as generalizations of \citet[Lemma~4.8 \& Proposition~4.9]{anai2019dtm}. The following result, which holds for a general metric space $(\M, \rho)$ and an arbitrary weight function $f$, provides a handle for the interleavings between $f$-weighted filtrations computed on two nested sets using the same function $f$.
\begin{lemma}
Given a metric space $(\M, \rho)$, two compact subsets $\bX, \bY$ of $\M$ such that $\bX \subseteq \bY$, and a weight function $f: \mathcal{M} \rightarrow \R_{\ge 0}$, let $\Vt[ ][\bX,f][\rho]$ and $\Vt[ ][\bY, f][\rho]$ be their respective $f$--weighted filtrations. If $f$ satisfies the property that
\eq{
\inf_{\xv \in \bX}\rho(\xv, \yv) \le f(\yv) + a,\nn
}
for $a > 0$ and for all $\yv \in \bY$, then the filtrations are $(\id,\alpha)$--interleaved, i.e.,
\eq{
\Vt[][\bX, f] \subseteq \Vt[][\bY, f] \subseteq \Vt[\alpha(t)][\bX, f],\nn
}
for $\alpha: t \mapsto 2^{1 - \f 1 p} t + a + \sup_{\xv \in \bX}f(\xv)$.
\label{lemma:ab-filtration}
\end{lemma}
Since the map $\alpha$ appearing in Lemma~\ref{lemma:ab-filtration} is not purely a translation, it does not lead to a bound in the interleaving metric as per \eref{eq:interleaving-filtration}, and, therefore, a bound in the $\winf$ metric cannot be obtained from Lemma~\ref{lemma:ab-filtration} alone. The next result, which is stated only for the Euclidean space $(\R^d, \norm{\cdot})$, establishes that for sufficiently large values of $t$, the map $\alpha$ may be replaced by a translation map.
\begin{lemma}
Let $(\mathcal{M},\rho) = (\R^d, \norm{\cdot})$. Suppose $\bX$, $\bY \subset \R^d$ are compact sets such that $\bX \subseteq \bY$, and $f$ satisfies the same conditions as in Lemma~\ref{lemma:ab-filtration} for $a>0$. Let $t({\bX})$ be the filtration value for the simplex corresponding to $\bX$ in $\textup{nerve}\pb{\VVt[ ][\bX, f][\rho]}$, i.e.,
\eq{
t({\bX}) \defeq \inf \qty\Big{t>0: {\textstyle \bigcap\limits_{\xv \in \bX} } B_{f, \rho}(\xv, t) \neq \varnothing},\nn
}
and $\beta: t \mapsto t + c(\bX)$ be a non-decreasing map with
\eq{
c(\bX) \defeq a + \sup_{\xv \in \bX}f(\xv) + \pa{1 - \f 1 p}t(\bX).\nn
}
Then for all $t \ge t(\bX)$, the homomorphisms $\phi_{t}^{\beta(t)}: \bVt[t][\bX, f][\rho] \rightarrow \bVt[\beta(t)][\bX, f][\rho]$ are trivial, i.e.,
$$
{\textup{Im}\qty\big(\phi_{t}^{\beta(t)})} \cong \begin{cases}
\mathbf{F} & \text{ \ \ if \ \ } \bVt[t][\bX, f][\rho] = \textup{H}_0\pa{\Vt[t][\bX, f][\rho]} \\
\pb{\mathsf{0}} & \text{\ \ \ if \ \ } \bVt[t][\bX, f][\rho] = \textup{H}_k\pa{\Vt[t][\bX, f][\rho]}, \ \ k>0
\end{cases}.
$$
Furthermore, the bottleneck distance between the resulting $f$--weighted persistence diagrams is bounded above as
\eq{
\Winf\qty\Big( \dgm\pa{\bVt[ ][\bX, f][\rho]}, \dgm\pa{\bVt[ ][\bY, f][\rho]} ) \le c(\bX).\nn
}
\label{lemma:ab-module}
\end{lemma}
\begin{remark}
Unlike Lemma~\ref{lemma:ab-filtration}, which is stated for general metric spaces, restricting ourselves to the Euclidean space $(\R^d, \norm{\cdot})$ in Lemma~\ref{lemma:ab-module} is sufficient for the objective of this work. However, as outlined in the proof, the only issue arises when \citet[Lemma~B.1]{anai2019dtm} is invoked. While \citet[Lemma~B.1]{anai2019dtm} (which holds for affine spaces satisfying the parallelogram identity) extends naturally to Banach spaces, the extension to general metric spaces will require some care on a case-by-case basis.
\end{remark}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figures/plots/interleaving2.pdf}
\caption{Illustration of Lemmas~\ref{lemma:ab-filtration} and \ref{lemma:ab-module} for $p=2$, $t(\bX)=3$ and $a+\sup_{\xv\in \bX}f(\xv)=1$. The interleaving maps $\alpha$ and $\beta$ are illustrated in blue and red, respectively. When $t < t(\bX)$, the interleaving map is $\alpha$ from Lemma~\ref{lemma:ab-filtration}. For $t \ge t(\bX)$, the map $\beta$ is obtained using Lemma~\ref{lemma:ab-module}. Extending $\beta$ along the black line yields the interleaving bound.}
\label{fig:ab-interleaving}
\end{figure}
In essence, the preceding two results enable us to control the filtrations in two separate stages, and, then, ``stitch'' the results together. See Figure~\ref{fig:ab-interleaving} for an illustration. This forms the crux of the next result, which establishes an analogue of the stability result for $\dnq$-weighted filtrations, but unlike the stability for the usual distance function $\dn$, it is also robust to outliers.
\begin{theorem}[Stability \& robustness of $\dnq$-weighted filtrations]
Let $\Xn = \Xnm \cup \Ym$ be a collection of points obtained under sampling condition \samp{}. For $Q > 2m$ let $\dnq$ be the \textup{\md{}} function computed on the contaminated points $\Xn$ and let $\dsf_{n-m}$ be the distance function w.r.t.~the inliers $\Xnm$. Then
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dsf_{n-m}] ) \le \sup_{\xv \in \Xnm}\dnq(\xv) + \norminf{\dnq - \dsf_{n-m}} + \qty(1 - \f1p)t(\Xnm), \nn
}
where $t(\Xnm)$ is the filtration value of the simplex associated with the inliers $\Xnm$ in the filtration $V[\Xnm,\dnq]$. In particular, when $p=1$ we have
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dsf_{n-m}] ) \le \sup_{\xv \in \Xnm}\dnq(\xv) + \norminf{\dnq - \dsf_{n-m}}.
\label{eq:stability-p1}
}
\label{theorem:momdist-stability}
\end{theorem}
\begin{remark}The following observations follow from Theorem~\ref{theorem:momdist-stability}.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item In contrast to what would follow from Lemma~\ref{lemma:anai-et-al}~(ii) for the standard unweighted filtration, the term appearing in the r.h.s.~of \eref{eq:stability-p1} completely eliminates the dependence on the Hausdorff distance between $\Xn$ and $\Xnm$ in the $\dnq-$filtration. More generally, the same bound in Theorem~\ref{theorem:momdist-stability} holds even when $V[\Xn,\dnq]$ is replaced by $V[\mathbb{M}, \dnq]$ for any set $\mathbb{M} \supseteq \Xnm$.
\item Notably, $V[\Xn, \dnq]$ remains resilient to outliers. To see this, observe that the first term appearing in the r.h.s. of \eref{eq:stability-p1} may be bounded as
\eq{
\sup_{\xv \in \Xnm}\dnq(\xv) = \sup_{\xv \in \Xnm}\abs{ \dnq(\xv) - \dx(\xv) } \le \norminf{ \dsf_{n, Q} - \dx },\nn
}
where the first equality follows from the fact that $\dx(\xv)=0$ for all $\xv \in \Xnm$. Therefore, from the proof of Theorem~\ref{theorem:momdist-sublevel}, the r.h.s.~of \eref{eq:stability-p1} vanishes with high probability for sufficiently large sample sizes.
\item For $p=1$, a similar analysis for the DTM-filtrations appears in \citet[Theorem~{4.5}]{anai2019dtm} and the bottleneck distance is bounded above as
\eq{
\winf\qty\bigg( \dgm\pa{\bbv[\Xn, \delta_{n, k}]}, \dgm\pa{\bbv[\Xnm, \delta_{n-m, k}]} ) \le \sqrt{\f nk} W_2\pa{\Xnm, \Xn} + \sup_{\xv \in \Xnm}\delta_{n-m, k}.\nn
}
While the last term on the r.h.s. converges to the uncontaminated population analogue with high probability, the first term involving the Wasserstein distance $W_2(\Xnm, \Xn)$ can be large even for a few extreme outliers. In contrast, the r.h.s. of \eref{eq:stability-p1} converges to zero with high probability with no assumptions on the outliers $\Ym$.
\end{enumerate}
\label{remark:stability}
\end{remark}
With this background we are now in a position to state our main result, which characterizes the rate of convergence for the $\dnq$--weighted filtration on the contaminated sample points, $V[\Xn, \dnq]$, to the counterfactual population analogue $V[\bX]$ in the $\winf$ metric.
\begin{theorem}[$\dnq$-weighted filtration]
Let $p=1$. Suppose $\pr \in \mathcal{P}(\bX, a, b)$ is a probability distribution with support $\bX$ satisfying the $(a, b)-$standard condition, and $\Xn = \Xnm \cup \Ym$ is obtained under sampling condition \samp{}. Then, for $2m < Q < n$ and for all $\delta \in (0, 1)$,
\eq{
\pr\qty\Bigg{\Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) \le \mathfrak{f}(n, m, Q, \delta_1, \delta_2)} \ge 1 - \delta,\nn
}
where
\eq{
\mathfrak{f}(n, m, Q, \delta_1, \delta_2) \defeq \qty\bigg( \frac{Q\log(n / Q)}{a (n / Q)} + \frac{4Q \log(1/\delta_1)}{a(Q-2m)n} )^{1/b} + \qty\bigg( \frac{\log (n-m)}{a (n-m)} + \frac{4 \log(1/\delta_2)}{a (n-m)} )^{1/b},\nn
}
for $\delta_1, \delta_2 \in (0, 1)$ such that $\delta_1 \le e^{-(1+b)Q}$ and $\delta_1 + \delta_2 = \delta$. In particular, if $m_n = cn^\e$ for $0 \le \e < 1$, then
\eq{
\E\qty\bigg[ \Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{} [\bX] ) ] \lesssim \qty\Bigg( \f{\log n}{n^{1-\e}} )^{1/b}.
\label{eq:dnq-rate-1}
}
\label{theorem:momdist-consistency}
\end{theorem}
\begin{remark} We make the following observations from Theorem~\ref{theorem:momdist-consistency}.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item The term appearing in the r.h.s.~of \eref{eq:dnq-rate-1} is identical to the term appearing in the r.h.s.~of \eref{eq:dnq-rate} in Theorem~\ref{theorem:momdist-sublevel}. Therefore, the $\dnq$--weighted filtration and the $\dnq$ sublevel filtration converge to the same population limit with identical convergence rates. Both are slower than the minimax rate without outliers \citep[Theorem~4]{chazal2015convergence} by a factor of $n^{\e/b}$.
\item The uniform confidence band we obtain from Theorem~\ref{theorem:momdist-consistency} can, in principle, be computed for any confidence level $\delta \in (0, 1)$. However, the restriction on $\delta_1$ makes the confidence band obtained using $V[\Xn, \dnq]$ wider than that obtained using Theorem~\ref{theorem:momdist-sublevel}. This is, ultimately, the price we pay for choosing the computationally tractable $\dnq$-weighted filtration as the estimator, as opposed to the $\dnq$ sublevel filtration.
\end{enumerate}
\end{remark}
We conclude this section with the following result, which relates the sublevel filtration $V[\dnq]$ to $V[\Xn, \dnq]$.
\begin{proposition}
Given samples $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n}$ and $Q < n$, the filtrations $V[\dnq]$ and $V[\Xn, \dnq]$ are $(\eta, \xi)-$interleaved, where
\eq{
\eta: t \mapsto 2^{\ipfac}t + \sup_{\xv \in \Xnm}\dnq(\xv), \qq{} \xi: t \mapsto 2^{\ipfac}\eta(t),\nn
}
and $p \ge 1$. Specifically, when $p=1$,
\eq{
\winf\qty\bigg( \dgm\pa{\bbv[\dnq]}, \dgm\pa{\bbv[\Xn, \dnq]} ) \le \sup_{\xv \in \Xnm}\dnq(\xv).\nn
}
\label{prop:sublevel-2}
\end{proposition}
The above result characterizes the error incurred when using $V[\Xn, \dnq]$ to approximate the sublevel filtration $V[\dnq]$. In light of Remark~\ref{remark:stability}~(ii), this error vanishes with increasing sample size. In contrast, the approximation error for the DTM-filtration is non-vanishing \cite[Proposition~4.6]{anai2019dtm}.
\begingroup
\subsection{Influence analysis}
\label{sec:influence}
The statistical analysis in the previous sections establishes that, even in the presence of outliers, the effect of the outliers can eventually be mitigated as the number of samples increases. In this section, we provide a more precise characterization of the influence the outliers have on the resulting $\dnq$--weighted filtrations, in contrast to their non-robust counterpart---the $\dn$--weighted filtrations.
Given a probability measure $\pr \in \mathcal{P}(\bX, a, b)$, \citet[Definition~4.1]{vishwanath2020robust} characterized the influence an outlier at $\xv \in \R^d$ has on a persistence diagram $\dgm\pa{\bbv[f_\pr]}$---obtained using the sublevel sets of $f_\pr$---using the \textit{persistence influence} function
\eq{
\boldsymbol{\Psi}(f_\pr; \xv) \defeq \lim_{\e \rightarrow 0} \winf\qty\Big( \dgm\pa{\bbv[{f_{\pr^{\e}_{\xv}}}]}, \dgm\pa{\bbv[f_\pr]} ),
\label{eq:persinf}
}
where $\pr^{\e}_{\xv} = (1-\e)\pr + \e\,\delta_{\xv}$ is the perturbation curve w.r.t.~$\xv$ in the space of probability measures. The persistence influence is a generalization of the influence function in robust statistics \citep{hampel2011robust} to general metric spaces. The analysis in this section is similar in spirit to the analysis based on the persistence influence, but differs in two important aspects. First, the $\dnq$--weighted filtration is computed purely on the sample points---by partitioning the samples into $Q$ disjoint blocks---and, therefore, the notion of persistence influence is adapted to the samples, in contrast to \eref{eq:persinf}, which is based on the data-generating distribution $\pr$. Second, unlike the case of the persistence influence function---where the influence of outliers on the resulting persistence diagram is quantified in terms of the bottleneck distance---here we directly examine the influence the outlying point has on the resulting persistence diagram itself. This provides a more tractable interpretation of how outliers impact the resulting topological inference.
The discussion in the previous section focused on the weighted filtrations, which can be approximated using the weighted-\cech{} complex. Here, we explicitly restrict ourselves to the case of the weighted Rips filtrations, for two reasons. First, a majority of the computational applications of persistent homology are performed using the Rips complex, with several optimized implementations widely available, e.g., Ripser, Gudhi, and GiottoTDA. Second, since the Rips complex $\mathcal{R}^t[\bX, f]$ is defined to be the flag complex associated with the $1$--skeleton of $\check{C}^t[\bX, f]$, the weighted Rips persistence diagram is entirely characterized by its $0-$ and $1-$simplices.
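For $p=1$, the weighted Rips filtration is determined by a simple filtration matrix: the vertex $\xv$ enters at $f(\xv)$, and the edge $\pb{\xv, \yv}$ enters at $\max\pb{f(\xv), f(\yv), (\norm{\xv - \yv} + f(\xv) + f(\yv))/2}$. The NumPy sketch below builds this matrix; this is one common convention for $p=1$ weighted Rips complexes, and the reader should verify it against the convention used by their persistence library before passing it in place of a distance matrix.

```python
import numpy as np

def weighted_rips_matrix(X, f):
    """p = 1 weighted Rips filtration matrix: entry (i, i) is the vertex value
    f(x_i); entry (i, j) is max(f(x_i), f(x_j), (||x_i - x_j|| + f(x_i) + f(x_j)) / 2)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    F = np.maximum(f[:, None], f[None, :])             # max of the two vertex weights
    M = np.maximum(F, (D + f[:, None] + f[None, :]) / 2.0)
    np.fill_diagonal(M, f)                             # vertices enter at f(x_i)
    return M
```

Taking $f = \dnq$ yields the (approximate) $\dnq$-weighted Rips filtration used in the influence analysis below.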
\begingroup
\renewcommand{\Xnm}{\bX[n+m]}
With this background, we now introduce the empirical persistence influence framework. Suppose we are given a collection of observations $\Xn$, sampled i.i.d.~from a probability distribution $\pr$ of interest. Let $\dgm\pa{\bVt[ ][\Xn, f_n][]}$ be its weighted--Rips persistence diagram, where the weight function $f_n$ is constructed using the samples $\Xn$. Suppose $\Xn$ is contaminated with $m < \f n2$ outliers to obtain the contaminated dataset $\Xnm$. In particular, we may assume that the $m$ points are placed at an outlying location $\xvo$, i.e.,
\eq{
\Xnm = \Xn \bigcup \pb{\mathop{\medcup}\limits_{j=1}^m\pb{\xvo}},\nn
}
such that the factor $m$ and the location $\xvo$ together control the relative influence the outliers have. This is similar to the role played by the factor $\e$ in the perturbation curve associated with the persistence influence. Note that when $m=0$, the influence of the outliers is non-existent in the dataset.
Let $\dgm\pa{\bVt[ ][\Xnm, f_{n+m}]}$ be the weighted--Rips persistence diagram constructed on $\Xnm$ using the weight function $f_{n+m}$. This gives rise to a collection of spurious topological features in the resulting persistence diagram. If $\bn$ is the birth time of a hypothetical topological feature with mass $0$ at $\xvo$ (i.e., $0\delta_{\xvo}$) in $\dgm\pa{\bVt[ ][\Xn, f_n]}$, and $\bnm$ is the birth time of the observed topological feature corresponding to the $m$ points at $\xvo$ (i.e., $m\delta_{\xvo}$), then the \textit{empirical persistence influence} of $\xvo$ can be characterized by
\eq{
\text{influence}\pa{b; \Xn, f_n, m, \xvo} = \Delta b_{n, m}(\pb{\xvo}) = \bn - \bnm.
\label{eq:birth-influence}
}
Indeed, when $\bn - \bnm$ is small, the resulting weighted-Rips persistence diagram $\dgm\pa{\bVt[ ][\Xnm, f_{n+m}]}$ is more robust, and vice versa.
In a similar vein as \citet[Definition~4.1]{vishwanath2020robust} we may also characterize the influence the outliers have on the persistence diagrams resulting from the sublevel filtrations as
\eq{
\text{influence}\pa{\winf; \Xn, f_n, m, \xvo} = \winf\qty\bigg( \dgm\pa{\bbv[f_{m+n}]}, \dgm\pa{\bbv[f_n]} ) \le \norminf{f_{n+m} - f_{n}}.
\label{eq:winf-influence}
}
The following result establishes that, under some mild conditions and with high probability, the $\dnq$--weighted Rips persistence diagrams are more robust than their non-robust counterpart.
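The comparison in \eref{eq:winf-influence} is easy to probe numerically: evaluate the plain and MoM distance functions on a grid, with and without $m$ duplicated outliers, and compare the resulting sup-norm influences. In the toy NumPy sketch below (the sample, the grid, and the choice $Q = 7 > 2m$ are illustrative choices of ours), the influence of the outliers on $\dnq$ is a small fraction of their influence on $\dn$:

```python
import numpy as np

def dist_fn(X, grid):
    """Plain distance function d_n, evaluated at the grid points."""
    return np.linalg.norm(grid[:, None, :] - X[None, :, :], axis=-1).min(axis=1)

def mom_dist_fn(X, grid, Q, seed=0):
    """MoM distance function d_{n,Q}: pointwise median of blockwise distance functions."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(X)), Q)
    return np.median([dist_fn(X[b], grid) for b in blocks], axis=0)

rng = np.random.default_rng(1)
inliers = rng.standard_normal((200, 2))            # clean sample
x0 = np.array([[25.0, 25.0]])                      # outlying location
contaminated = np.vstack([inliers] + [x0] * 3)     # m = 3 duplicated outliers
grid = np.vstack([rng.standard_normal((50, 2)), x0])

Q = 7                                              # Q > 2m
infl_plain = np.abs(dist_fn(contaminated, grid) - dist_fn(inliers, grid)).max()
infl_mom = np.abs(mom_dist_fn(contaminated, grid, Q) - mom_dist_fn(inliers, grid, Q)).max()
```

With the outlying location included in the grid, the contaminated plain distance function vanishes there while the clean one is large, so the plain influence is of the order of $\dx(\xvo)$; the MoM influence stays small because fewer than half of the blocks contain an outlier.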
\endgroup
\begin{theorem}[Influence analysis of $\dnq$-weighted filtrations]
For $\Xn$ observed i.i.d. from $\pr \in \mathcal{P}(\bX, a, b)$ and $\xvo \in \R^d$, let $\Xmn$ be given by
\eq{
\Xmn = \Xn \bigcup \pb{\mathop{\medcup}\limits_{j=1}^m\pb{\xvo}}. \nn
}
For $2m < Q < n+m$, let $\dsf_{n+m}$ and $\dsf_{n+m, Q}$ denote the distance and MoM distance function w.r.t. $\Xmn$, and let $\db$ and $\dbq$ be as defined in \eref{eq:birth-influence} for $\dsf_{n+m}$ and $\dsf_{n+m, Q}$ respectively. Then
\eq{
\dbq \le \db \ \ \ \textup{a.s.}\nn
}
Furthermore, for $\nQ = (n+m)/Q$ and $c = \min\qty{a2^{-(1+b)}, a2^{-2b}}$, if
\eq{
\vp \defeq c\ \!\dx(\xvo)^b > \f{\log\nQ}{\nQ} + \f{4(1+b)^2Q^3}{\nQ} , \tag{I}
}
then, for all $\delta \in (0,1)$ satisfying
\eq{
(1+b)^2Q^2 \le \log(2/\delta) \le \f{\nQ\vp - \log\nQ}{4Q}, \tag{II}
}
with probability greater than $1-\delta$,
\eq{
\norminf{ \dsf_{n+m} - \dsf_n } - \norminf{\dsf_{n+m, Q} - \dsf_n} \ge %
\qty({ \f{2\log\nQ}{a\nQ} } + { \f{8 \log(2/\delta)}{a\nQ} })^{1/b}.\nn%
}
\label{theorem:momdist-influence}
\end{theorem}
\begin{remark} The result from Theorem~\ref{theorem:momdist-influence} may be interpreted as follows.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item The first part guarantees that the $\dnq$-weighted persistence diagram always has a smaller influence on the birth time in comparison to the non-robust counterpart. Since the Rips persistence diagram is entirely determined by the filtration values associated with the $0-$ and $1-$ simplices, this provides a partial picture for the influence $\xvo$ has on the resulting persistence diagrams. Characterizing the influence on the $1-$simplices is far more challenging owing to the combinatorial complexity in characterizing their lifetimes.
\item The second part compares the upper bounds on the empirical persistence influence from \eref{eq:winf-influence}. When conditions (I) and (II) hold, then with high probability, persistence diagrams obtained using $\dnq$ are closer to the truth than those obtained using $\dsf_n$. Therefore, the interplay between $n$, $m$ and $\xvo$ is better understood by characterizing when conditions (I) and (II) hold.
\item For fixed $n$, observe that (I) is satisfied whenever $\dx(\xvo)$ is sufficiently large, i.e., $\xvo$ is sufficiently far away from the support. On the other hand, if $\xvo$ is fixed, then (I) is satisfied when $\log \nQ / \nQ$ is sufficiently small, i.e., $n$ is sufficiently large. Together, this implies that for condition (I) to be satisfied, either (a) the outliers must be sufficiently well-separated from the support $\Xb$, so that the outliers $\xvo$ can be distinguished from the inliers $\Xn$, or (b) for outliers placed very close to the support $\Xb$, we need sufficiently many inliers $n$ to be able to distinguish the inliers from the outliers. Note also that if $n$ and $m$ are fixed, then the r.h.s. of (I) increases with $Q$. Although $Q$ can take any value in the range $2m < Q < n+m$, choosing a value of $Q$ much larger than $2m+1$ will likely breach condition (I) for a fixed $\xvo$. Equivalently, for a suboptimal choice of $Q$, the outliers need to be sufficiently far away from the inliers in order to be distinguishable.
\item The l.h.s. of (II) is equivalent to the constraint that $\delta \le e^{-(1+b)Q}$, which appears in Theorems~\ref{theorem:momdist-sublevel}~and~\ref{theorem:momdist-consistency}. The r.h.s. of (II) specifies a lower bound on the confidence level $\delta$. Condition (I) guarantees that the set of admissible values of $\delta \in (0,1)$ satisfying (II) is nonempty. For fixed $m, Q$ and $\xvo$, the r.h.s. of (II) is directly proportional to $n$, i.e., the lower bound on $\delta$ vanishes as $n \rightarrow \infty$.
\item When conditions (I) and (II) are satisfied, we have the following lower bound from the l.h.s. of (II):
\eq{
\norminf{ \dsf_{n+m} - \dsf_n } - \norminf{\dsf_{n+m, Q} - \dsf_n} \gtrsim \qty({ \f{\log((n+m)/Q)}{a(n+m)/Q}} + { \f{Q^2}{(n+m)/Q} })^{1/b}.\label{eq:inf-lb}
}
In the regime when $n,m \rightarrow \infty$, and for the optimal choice of $Q$, i.e., $Q=km$ for $k > 2$, the r.h.s. of \eref{eq:inf-lb} is non-trivial when $m = \Omega(n^{1/3})$. Therefore, under conditions (I) and (II), when there are sufficiently many outliers, there is greater evidence to support the robustness of $\dnq$.
\end{enumerate}
\end{remark}
\endgroup
\begingroup
\providecommand{\hQ}{\widehat{Q}}
\providecommand{\hm}{\widehat{m}}
\renewcommand{\ms}{m^*}
\providecommand{\h}{\mathfrak{h}}
\subsection{Auto-tuning the parameter $Q$}
\label{sec:lepski}
The result in Theorem~\ref{theorem:momdist-consistency} relies on the crucial assumption that the number of outliers $\ms$ is known \textit{a priori}. While this assumption may hold in certain adversarial settings, in general, this information may be unavailable. In order to make Theorem~\ref{theorem:momdist-consistency} more useful in practical settings, we discuss two solutions for calibrating the parameter $Q$. The first procedure is based on Lepski's method \citep{lepskii1991problem}, which is a powerful data-driven method for adaptive parameter selection. In this case, we also provide theoretical guarantees for the adaptively tuned estimator. The second procedure---which is based on some heuristic observations regarding the sample estimator $\bbv{}[\Xn, \dnq]$--- works well in practice, and may be used as a precursor to Lepski's method.
When the number of outliers $\ms$ is known, choosing $Q^*=2\ms+1$ results in the rate of convergence in Theorem~\ref{theorem:momdist-consistency}. However, without access to $m^*$, Lepski's method provides a systematic procedure for selecting a parameter $\hQ$ which provides the same error guarantees as $Q^*$ \citep{birge2001alternative}. The procedure is as follows. Let $\mmin$ and $\mmax$ be two coarse bounds on the (unknown) $\ms$ such that ${\mmin \le \ms \le \mmax}$. For a choice of $\theta>1$, let $m(j) = \theta^{j}\mmin$ and define
$$
\J \defeq \qty\Big{ j \ge 1 : \mmin \le m(j) < \theta\mmax }.
$$
For $\pr \in \mathcal{P}\pa{\bX, a, b}$ and $\Xn$ obtained under sampling condition \samp{}, let $\bbv_n(j) = \bbv[\Xn, \dsf_{n, Q(j)}]$ be the persistence module obtained using the \md{-weighted} filtration with ${Q(j) = 2m(j)+1}$.
For $\delta \in (0,1)$ and $\delta_{\max} = \delta - e^{-(1+b)(2\mmax+1)}$, let $\mathfrak{h}(n, m, \delta)$ be defined as follows:
\eq{
\mathfrak{h}(n, m, \delta) = 2\qty\Bigg( \f{2m+1}{an} \wo\qty( \f{ne^{ 4(1+b)(2\mmax+1) }}{2m+1} ) )^{1/b} + \qty\Bigg( \f{1}{a(n-m)} \wo\qty( (n-m)e^{4\log(1/\delta_{\max})} ) )^{1/b},\nn
}
where for $z>0$, $\wo(z)$ is the Lambert $\wo$ function given by the identity $\wo(z)e^{\wo(z)} = z$. With this background, let $\hj$ be the output of the following procedure:
\eq{
\hj \defeq \min \qty\Big{ j \in \J : \winf\pa{ \bbv_n(j), \bbv_n({j'}) } \le 2\mathfrak{h}(n, m(j'), \delta) \qq{for all} j' \in \J, j' > j }.
\label{eq:lepski-j}
}
The resulting weighted persistence module $\widehat{\bbv}_n = \bbv_n({\hj}) = \bbv[\Xn, \dsf_{n, Q(\hj)}]$ is the Lepski estimator for $\bbv[\bX]$. The following result establishes that the adaptive selection of $Q$ results in an estimator with the same convergence guarantees as in Theorem~\ref{theorem:momdist-consistency}.
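Abstracting away the persistence computations, the selection rule in \eref{eq:lepski-j} is a short loop. In the sketch below, \texttt{dist} plays the role of $\winf\pa{\bbv_n(j), \bbv_n(j')}$ and \texttt{bound} that of $\mathfrak{h}(n, m(j'), \delta)$; both are callables supplied by the caller, so this is only a schematic of the selection step, not of the full pipeline.

```python
def lepski_select(J, dist, bound):
    """Sketch of the Lepski selection rule: return the smallest j in J such
    that dist(j, j') <= 2 * bound(j') for every coarser candidate j' > j."""
    J = sorted(J)
    for j in J:
        if all(dist(j, jp) <= 2.0 * bound(jp) for jp in J if jp > j):
            return j
    return J[-1]  # fall back to the coarsest candidate
```

In our setting, \texttt{dist} would be backed by a bottleneck-distance routine applied to the diagrams of the candidate modules $\bbv_n(j)$.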
\begin{theorem}[Adaptive $\dnq$-weighted filtration]
Suppose $\Xn$ is obtained under sampling condition \samp{} for $\pr \in \mathcal{P}(\bX, a, b)$, and suppose $\mmin$ and $\mmax$ are known such that the unknown number of outliers $\ms$ satisfies ${\ms \in [\mmin, \mmax]}$ and $\ms < n/2$. For a chosen $\theta > 1$, let $\hj$ be the output of the data-driven procedure in \eref{eq:lepski-j} and let $\widehat{\bbv}_n = \bbv_n{(\hj)}$. Then, for all $\delta \in (0, 1)$,
\eq{
\pr\qty\bigg( \winf\qty\Big( \dgm\qty\big(\widehat{\bbv}_n), \dgm\qty\big(\bbv[\bX]) ) \le 3\mathfrak{h}(n, \theta m^*, \delta) ) \ge 1 - \delta \log_\theta\qty( \f{\theta \mmax}{\mmin} ).\nn
}
\label{theorem:lepski}
\end{theorem}
\begin{remark}
We make the following useful observations from Theorem~\ref{theorem:lepski}.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item We make the distinction that the output $\widehat\bbv_n$ of Lepski's method does not necessarily correspond to the optimal choice $\bbv_n^*$ if $\ms$ were known. Instead, Theorem~\ref{theorem:lepski} guarantees that the error associated with $\widehat\bbv_n$ is of the same order (up to constants) as that of $\bbv_n^*$.
\item While Lepski's method guarantees optimal errors for the adaptive estimator without any knowledge of the true $m^*$, in practice the empirical performance depends on several factors. Since the procedure in Theorem~\ref{theorem:lepski} is designed to match the guarantee of Theorem~\ref{theorem:momdist-consistency}, the success of the procedure crucially depends on the tightness of the bound $\mathfrak{f}(n, m, Q, \delta_1, \delta_2)$ in Theorem~\ref{theorem:momdist-consistency}. Furthermore, the implementation described in \eref{eq:lepski-j} requires knowledge of the parameters $a, b > 0$ arising from the $(a,b)-$standard condition. While the calibration of $a$ and $b$ in practice is more of an art and beyond the scope of this paper, we emphasize that it is possible to construct a statistically consistent estimator of the true population quantity $\bbv[\bX]$ in a purely data-adaptive fashion, even in the presence of adversarial contamination.
\item Unlike a standard grid search, Lepski's method adapts to the true noise level $m^*$ in an efficient manner. Given a reasonable estimate for $\mmin$ and $\mmax$, Lepski's method has a computational cost of $O( \log^2_\theta(\mmax/\mmin ) )$. However, the choice of $\theta > 1$ must also be made judiciously, e.g., replacing $\theta$ with $\sqrt{\theta}$ for the procedure in \eref{eq:lepski-j} will require $\sim4$ times more computational time.
\item In the worst case, when there are no reasonable estimates for $\mmin$ and $\mmax$, choosing $\mmin=1$ and $\mmax=n/2$ requires $O(\log^2_\theta(n))$ computational time. Beyond the additional computational price, a suboptimal choice of $\mmin$ and $\mmax$ also leads to poor performance. To see this, note that the term $\h(n, m, \delta)$ is a lower bound for the term $\mathfrak{f}(n, m, Q, \delta_1, \delta_2)$ in Theorem~\ref{theorem:momdist-consistency} when $Q=2m+1$ and $\delta_1 = e^{-(1+b)(2\mmax+1)} \le e^{-(1+b)Q}$. Therefore, when the number of outliers grows with $n$ as $m^* = cn^\e$ for $c>0$ and $\e \in [0, 1)$, a similar analysis to that in Theorem~\ref{theorem:momdist-sublevel} and Theorem~\ref{theorem:momdist-consistency} yields that
\eq{
\E\qty\bigg[ \winf\qty\Big( \dgm\qty\big(\widehat{\bbv}_n), \dgm\qty\big(\bbv[\bX]) ) ] \lesssim \pa{\f{\log n}{n/\mmax}}^{1/b}.\nn
}
Therefore, if the bound $\mmax$ is not tight, i.e., $\mmax = Cn^\beta$ for $\e < \beta$, then, asymptotically, the output of Lepski's method is not adaptive to the true noise $m^*$, and, instead, reflects the suboptimal choice of $\mmax$.
\end{enumerate}
\label{remark:lepski}
\end{remark}
In a similar vein, Lepski's method may be used to adaptively select the parameter $Q$ to obtain a statistically consistent sublevel set persistence module. The following result outlines a data-driven procedure to obtain ${\cj \in \J}$ such that the resulting sublevel persistence module $\overline{\bbv}_n = \bbv_n(\cj) = \bbv[\dsf_{n, Q(\cj)}]$ has the same convergence guarantee as Theorem~\ref{theorem:momdist-sublevel}.
\begin{theorem}[Adaptive sublevel filtration]
For $\pr \in \mathcal{P}(\bX, a, b)$, suppose $\Xn$ is obtained under sampling condition \samp{}, and suppose $\mmin$ and $\mmax$ are known such that the unknown number of outliers $\ms$ satisfies $\ms \in [\mmin, \mmax]$ and $\ms < n/2$. Let $\bbw_n(j) = \bbv[\dsf_{n, Q(j)}]$ be the sublevel persistence module obtained using $\dsf_{n, Q(j)}$ with ${Q(j) = 2m(j)+1}$ for all $j \in \J$. For a chosen $\theta > 1$, let $\cj$ be the output of the data-driven procedure,
\eq{
\cj = \min \qty\Big{ j \in \J : \winf\pa{ \bbw_n(j), \bbw_n({j'}) } \le 2\mathfrak{p}(n, m(j'), \delta) \qq{for all} j' \in \J, j' > j },\nn
}
where
\eq{
\mathfrak{p}(n, m, \delta) = \qty\Bigg( \f{2m+1}{an} \wo\qty( \f{ne^{ (1+b)\log(1/\delta) }}{2m+1} ) )^{1/b}.\nn
}
Then, for all $\delta \le e^{-(1+b)(2\mmax+1)}$ and $\overline{\bbv}_n = \bbw_n{(\cj)}$,
\eq{
\pr\qty\bigg( \winf\qty\Big( \dgm\qty\big(\overline{\bbv}_n), \dgm\qty\big(\bbv[\bX]) ) \le 3\mathfrak{p}(n, \theta m^*, \delta) ) \ge 1 - \delta \log_\theta\qty( \f{\theta \mmax}{\mmin} ).\nn
}
\label{corollary:lepski}
\end{theorem}
\endgroup
The proof is identical to that of Theorem~\ref{theorem:lepski}, and is, therefore, omitted. The success of Lepski's method depends on the tightness of the probabilistic bounds, knowledge of the (nuisance) parameters (i.e. $a,b$) appearing in these bounds, and a prudent choice for $\mmin$ and $\mmax$. While the calibration of $a$ is beyond the scope of this paper, in $\R^d$ a conservative choice for $b$ would be the dimension $d$ of the ambient space. We refer the reader to \cite[Section~4]{chazal2015convergence} for further details.
To address the last bottleneck in Lepski's method, we describe a heuristic method to select the parameter $Q$, which may be used to obtain reasonable choices for $\mmin$ and $\mmax$. %
The method is based on the observation that the blocks $\pb{S_q : q \in [Q]}$ may be resampled by shuffling the sample points $\Xn$ prior to partitioning them. The resulting estimator $\bbv[\Xn, \dnq]$ is an unbiased estimator of the same population quantity when ${2m < Q < n}$. Therefore, we may choose the smallest value of $Q$ for which the pairwise bottleneck distance over permutations of the data is minimized. Specifically, suppose $\Xn^\sigma = \pb{ \Xv_{\s(1)}, \Xv_{\s(2)}, \dots, \Xv_{\s(n)} }$ is a permutation of $\Xn$; then
\eq{
\widehat{Q}_{R} = \argmin_{Q \ge 1} \sum_{1 \le i < j \le N} \winf\qty\Big({ \bbv[{\Xn^{\sigma_i}}, \dnq], \bbv[{ \Xn^{\sigma_j} }, \dnq] }),\nn
}
where, for a chosen number of replicates $N$, $\sigma_i, \sigma_j$ are permutations of $[n]$ for each $i, j \in [N]$. Furthermore, for $\widehat{m}_{R} = \floor{\widehat{Q}_{R}/2}$ and for a constant $C > 1$, the bounds $\mmin$ and $\mmax$ may be taken to be $C\inv\widehat{m}_{R}$ and $C\widehat{m}_{R}$, respectively.
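A sketch of this heuristic in NumPy is given below, where we replace the bottleneck distance between the resampled diagrams with the sup-norm between the corresponding MoM distance functions on a reference grid---a computable proxy motivated by stability. The proxy, the candidate set, and all tuning choices here are ours:

```python
import numpy as np
from itertools import combinations

def mom_dist_fn(X, grid, Q, rng):
    """MoM distance function for one random partition of X into Q blocks."""
    blocks = np.array_split(rng.permutation(len(X)), Q)
    d = [np.linalg.norm(grid[:, None, :] - X[b][None, :, :], axis=-1).min(axis=1)
         for b in blocks]
    return np.median(d, axis=0)

def select_Q(X, grid, candidates, N=5, seed=0):
    """Heuristic Q-hat: the candidate minimizing the total pairwise discrepancy
    between N reshuffled replicates of the MoM distance function."""
    rng = np.random.default_rng(seed)
    scores = {}
    for Q in candidates:
        reps = [mom_dist_fn(X, grid, Q, rng) for _ in range(N)]
        scores[Q] = sum(np.abs(a - b).max() for a, b in combinations(reps, 2))
    return min(scores, key=scores.get)
```

The selected value then yields $\widehat{m}_{R} = \lfloor \widehat{Q}_{R}/2 \rfloor$ and, in turn, the coarse bounds $\mmin$ and $\mmax$ described above.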
\section{Proofs}
\label{sec:proofs}
In this section, we present the proofs for the results in Section~\ref{sec:main}.
\allowdisplaybreaks
\providecommand{\ut}[1]{U^{#1}}
\providecommand{\vt}[1]{V^{#1}}
\providecommand{\wt}[1]{W^{#1}}
\providecommand{\but}[1]{\mathbb{U}^{#1}}
\providecommand{\bvt}[1]{\mathbb{V}^{#1}}
\providecommand{\bwt}[1]{\mathbb{W}^{#1}}
\providecommand{\ball}[1]{B_{f\!, \rho}\pa{#1}}
\providecommand{\xvy}{\xv^*_{\yv}}
\subsection{Proof for Lemma~\ref{lemma:lipschitz}}
\label{proof:lemma:lipschitz}
We begin by noting that for each $q \in [Q]$, the distance function $\dsf_{n, S_q}$ associated with the block $S_q$ is $1-$Lipschitz \cite[Chapter~9.1]{boissonnat2018geometric}. Thus, for each $q \in [Q]$ and for all $\xv, \yv \in \R^d$ we have that
\eq{
0 \le \dsf_{n,q}(\xv) \le \dsf_{n,q}(\yv) + \norm{\xv-\yv},\nn
}
and, therefore, it follows that
\eq{
\median\pb{ \dsf_{n,q}(\xv) : q \in [Q] } \le \median\pb{ \dsf_{n,q}(\yv) : q \in [Q] } + \norm{\xv-\yv}.\nn
}
As a result, we obtain that $\dnq(\xv) \le \dnq(\yv) + \norm{\xv-\yv}$. Exchanging $\xv$ and $\yv$ in the steps above yields the desired result. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof for Lemma~\ref{lemma:mom}}
\label{proof:lemma:mom}
For $t > 0$, define two events
\eq{
E_1 = \pb{\norminf{T_Q(\pr_n) - T(\pr)} \le t}, \qq{and}
E_2 = \pb{ \#\qty\Big{q \in [Q]: \norminf{ T(\pr_q) - T(\pr) } > t} \le \f{Q}{2} }.\nn
}
First, we show that $E_2 \subseteq E_1$. To this end, for any $\omega \in E_2$, we have
\eq{
\omega \in E_2 &\Longrightarrow \omega \in \pb{\#\qty\Big{q \in [Q]: \norminf{T(\pr_q) - T(\pr)} > t} \le \f{Q}{2}}\n
&\Longrightarrow \omega \in \pb{\#\qty\Big{q \in [Q]: \norminf{T(\pr_q) - T(\pr)} \le t} > Q - \f{Q}{2}}\n
&\Longrightarrow \omega \in \pb{\#\qty\Big{q \in [Q]: \forall \xv \in \R^d, T(\pr)(\xv) - t \le T(\pr_q)(\xv) \le T(\pr)(\xv) + t} > \f{Q}{2}}\n
&\Longrightarrow \omega \in \pb{\forall \xv \in \R^d, T(\pr)(\xv) - t \le \med\qty\big{T(\pr_q)(\xv) : q \in [Q]} \le T(\pr)(\xv) + t}\n
&\Longrightarrow \omega \in \pb{\forall \xv \in \R^d, T(\pr)(\xv) - t \le T_Q(\pr_n)(\xv) \le T(\pr)(\xv) + t}\n
&\Longrightarrow \omega \in \pb{\norminf{T_Q(\pr_n) - T(\pr)} \le t}\n
&\Longrightarrow \omega \in E_1.\nn
}
Therefore, we have $E_2 \subseteq E_1$. Next, note that $E_2$ can be written as
\eq{
E_2 = \pb{\sum_{q=1}^Q \rqt \le \f Q 2},\nn
}
where, for each $q \in [Q]$,
\eq{
\rqt \defeq \mathbbm{1}\qty\Big( \norminf{ T(\pr_q) - T(\pr) } > t ).\nn
}
Since $0 \le \rqt \le 1$ a.s., we have that
\eq{
\sum_{q=1}^Q \rqt &= \sum_{q \in A}\rqt + \sum_{q\in A^c}\rqt \le \sum_{q \in A}\rqt + \abs{A^c} \le \sum_{q \in A}\rqt + m.\nn
}
As a result, we can further bound the probability of $E_2$ from below as
\eq{
\pr(E_2) \ge \pr\pa{\sum_{q \in A}\rqt \le \f Q 2 - m}.
\label{mom-lemma-1}
}
Combining \eref{mom-lemma-1} with the fact that $E_2 \subseteq E_1$, we obtain
\eq{
\pr\qty\bigg({ \norminf{ T_Q(\pr_n) - T(\pr) } > t }) = \pr(E_1^c) \le \pr(E_2^c) \le \pr\pa{\sum_{q \in A}\rqt > \f Q 2 - m},\nn
}
which gives us the desired result. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Theorem~\ref{theorem:momdist-sublevel}}
\label{proof:theorem:momdist-sublevel}
First, we note from the stability of persistence diagrams that,
\eq{
\pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t} \le \pr\pb{\norminf{\dnq - \dx} > t}.
\label{mom-concentration-stability}
}
Therefore, it suffices to control the probability of the event $\pb{\norminf{\dnq - \dx} > t}$. To this end, let $A = \pb{q\in [Q]: S_q \cap \Ym = \varnothing}$ be the blocks which contain no outliers. Since each of the $m$ outliers falls in at most one block, $\abs{A} \ge Q - m$, and from the assumption on $Q$, i.e., $2m < Q < n$, it follows that $\abs{A} > Q/2$. For $q \in [Q]$, let $\rqt$ be given by
\eq{
\rqt = \mathbbm{1}\qty\Big( { \norminf{\mathsf{d}_{n, q} - \dx} } > t ).\nn
}
On application of Lemma~\ref{lemma:mom} to the estimator $\dnq$, it follows that
\eq{
\pr\qty\bigg{{ \norminf{\mathsf{d}_{n, Q} - \dx} } > t} \le \pr\pa{\sum_{q \in A}\rqt > \f Q 2 - m}.
\label{eq:momdist-sublevel1}
}
Since $S_q \subseteq \Xnm$ for all $q \in A$, it follows that $\qty{\rqt: q \in A}$ are i.i.d.~$\textup{Bernoulli}\qty\big(p(t; n, Q))$ random variables, where
\eq{
p(t; n, Q) = \E\qty(\rqt) = \pr\qty\Big( { \norminf{\mathsf{d}_{n, q} - \dx} } > t ).\nn
}
For the remainder of the proof we need two key ingredients: (i) an upper bound for $\E(\rqt)$, and (ii) a tight bound for the binomial tail probability in \eref{eq:momdist-sublevel1}.
\textbf{Bound for $p(t; n, Q)$.} From \citet[Theorem~2]{chazal2015convergence}, under the $(a, b)-$standard condition it follows that
\eq{
p(t; n, Q) \le \f{2^b}{at^b}\exp({ -\nq at^b }) = \exp({ -\nq at^b - \log\qty\big(at^b) + b\log2 }).
\label{eq:momdist-pt}
}
\textbf{Binomial tail probability bound.} For $0 < \e < 1$, using the Chernoff-Hoeffding bound from Lemma~\ref{lemma:chernoff-hoeffding} yields,
\eq{
\pr\pa{\f{1}{\abs{A}}\sum_{q \in A}\rqt > \e} \le \exp\qty\Bigg( \abs{A} \pa{\f{2}{e} + \e \log p(t; n, Q)}).\nn
}
Using the bound for $p(t; n, Q)$ from \eref{eq:momdist-pt}, we obtain
\eq{
\pr\pa{\f{1}{\abs{A}}\sum_{q \in A}\rqt > \e} &\le \exp\qty\Bigg( \abs{A} \qty\Big( \f{2}{e} + b\e\log2 - \e \nq at^b - \e \log\!\qty\big(at^b)) )\n
&\le \exp\qty\Bigg( \abs{A} \qty\Big( 1 + b\e - \e \nq at^b - \e \log\!\qty\big(at^b)) )\n
&\le \exp\qty\bigg( \abs{A} \qty\Big( 1 + b\e - \e\Ot) ),\nn
}
where, in the last line, we use $\Ot \defeq (n/Q)at^b + \log(at^b)$ for brevity. When $t$ satisfies the condition
\eq{
\Ot \ge \f{2(1+b\e)}{\e}
\label{eq:Ot-condition1}
}
it follows that
\eq{
1 + b\e - \e\Ot \le -\f\e2 \Ot,\nonumber
}
and we get
\eq{
\pr\pa{\f{1}{\abs{A}}\sum_{q \in A}\rqt > \e} \le \exp\qty\Bigg( -\f{\abs{A}\e}{2} \Ot ).\nn
}
By setting $\delta$ equal to the r.h.s. of the inequality above, we obtain
\eq{
\Ot = \f{2\log(1/\delta)}{\abs{A}\e}.
\label{eq:Ot-Ae}
}
When $\delta \le e^{-(1+b)Q}$, using the fact that $Q > \abs{A}$ and $0 < \e < 1$, it follows that
\eq{
\Ot = \f{2\log(1/\delta)}{\abs{A}\e} \ge \f{2(1+b)Q}{\abs{A}\e} \ge \f{2(1+b\e)}{\e},\nn
}
and, therefore, the condition in \eref{eq:Ot-condition1} is satisfied. Consequently, for $\delta \le e^{-(1+b)Q}$, on rearranging the terms in \eref{eq:Ot-Ae} we obtain
\eq{
\pr\pa{\sum_{q \in A}\rqt > \f{2\log(1/\delta)}{\Ot}} \le \delta.
\label{eq:momdist-sublevel-bound1}
}
Comparing \eref{eq:momdist-sublevel1} with \eref{eq:momdist-sublevel-bound1} we conclude that
\eq{
\pr\pa{\sum_{q \in A}\rqt > \f{Q-2m}{2}} = \pr\pa{\sum_{q \in A}\rqt > \f{2\log(1/\delta)}{\Ot}} \le \delta,\nn
}
by setting
\eq{
\f{2\log(1/\delta)}{\Ot} = \f{Q-2m}{2} \Longleftrightarrow \Ot = \f{4{\log(1/\delta)}}{Q-2m}.\nn
}
Since $\Ot = \nq at^b + \log(at^b)$, this is equivalent to
\eq{
\exp( \nq a t^b ) \nq at^b = \nq \exp({ \f{4{\log(1/\delta)}}{Q-2m} }).\nn
}
Moreover, using the defining identity of the Lambert $\wo$ function, $\wo(x)e^{\wo(x)} = x$ \citep{hoorfar2008inequalities}, we obtain that
\eq{
t = \qty\Bigg( \f{Q}{an} \wo\qty( \nq \exp{ \f{4{\log(1/\delta)}}{Q-2m} })) ^{1/b}.
\label{eq:momdist-t-constraint}
}
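For completeness, the inversion behind the preceding display can be traced explicitly. Writing $u = (n/Q)\,at^b$ and $c = 4\log(1/\delta)/(Q-2m)$ (shorthand used only in this remark), the equation above reads $u e^{u} = (n/Q)\,e^{c}$, and the defining identity of $\wo$ gives

```latex
\[
  u\,e^{u} = \frac{n}{Q}\,e^{c}
  \quad\Longrightarrow\quad
  u = \wo\!\left(\frac{n}{Q}\,e^{c}\right)
  \quad\Longrightarrow\quad
  t = \left(\frac{Q}{an}\,\wo\!\left(\frac{n}{Q}\,e^{c}\right)\right)^{1/b},
\]
```

which is precisely the expression in \eref{eq:momdist-t-constraint}.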
Therefore, from \eref{mom-concentration-stability} and \eref{eq:momdist-sublevel1}, for $t$ satisfying \eref{eq:momdist-t-constraint} and for all $\tau \ge t$, we have that
\eq{
\pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > \tau} \le \pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t} \le \delta.
\label{eq:momdist-sublevel2}
}
Since $\delta \le e^{-(1+b)Q}$, observe that
\eq{
\f{4\log(1/\delta)}{Q-2m} \ge \f{4(1+b)Q}{Q-2m} \ge 4(1+b) > 1.\nonumber
}
Furthermore, using the fact that $\wo(z) \le \log(z)$ for $z > e$ \cite[Eq.~1.1]{hoorfar2008inequalities}, we may take $\tau$ to be
\eq{
t = \qty\Bigg( \f{Q}{an} \wo\qty( \nq \exp{ \f{4{\log(1/\delta)}}{Q-2m} })) ^{1/b} &\le \qty\Bigg( \f{Q}{an} \log\qty( \nq \exp{ \f{4{\log(1/\delta)}}{Q-2m} })) ^{1/b}\n
&= \qty\Bigg( \f{Q \log(n/Q)}{an} + \f{4Q{\log(1/\delta)}}{a(Q-2m)n} ) ^{1/b} \defeq \tau.\nn
}
Plugging this into \eref{eq:momdist-sublevel2}, we obtain the desired result.
For the second claim in the theorem, by inverting the relationship between $t$ and $\delta$ in \eref{eq:momdist-t-constraint} and using the fact that $\wo(z)$ is an increasing function for $z > 0$, observe that the constraint on $\delta$ equivalently specifies a constraint on $t$, i.e.,
\eq{
\delta \le e^{-(1+b)Q} \Longleftrightarrow t \ge \qty\Bigg( \f{Q}{an} \wo\qty( \nq \exp{ \f{4{(1+b)Q}}{Q-2m} })) ^{1/b}.\nn
}
A sufficient condition for this to hold is that
\eq{
t \ge t(n, Q) \defeq \qty\Bigg( \f{Q \log(n/Q)}{an} + \f{4(1+b)Q^2}{a(Q-2m)n} )^{1/b}.\nonumber
}
Therefore, from \eref{eq:momdist-sublevel2} we have that for all $t \ge t(n, Q)$
\eq{
\pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t} \le \exp( - \qty(\f{Q-2m}{4})\Ot ).\nn
}
Bounding $\pr\qty\big{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t}$ by its maximum value of $1$ on the interval $[0,t(n, Q)]$, we have
\eq{
\E\qty[ \Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) ] &= \int\limits_0^\infty\pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t} dt\n
&\le t(n, Q) + \int\limits_{t(n, Q)}^\infty \exp( - \qty(\f{Q-2m}{4})\Ot ) dt.\nonumber
}
By taking $w = \Ot$ and setting $r_n = 4(1+b)Q/(Q-2m)$, we further obtain
\eq{\label{eq:momdist-ebound1}
&\E\qty\Big[ \Winf\qty\big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) ] \n
&\qq{} \qq{} \lesssim t(n, Q) + \pa{\f{Q}{n}}^{1/b} \int\limits_{r_n}^{\infty} \f{e^{-w/4}\pa{\wo\pa{ \f{n}{Q} e^w }}^{1/b}}{w+1}dw\nn\\[5pt]
&\qq{} \qq{} \num{\lesssim}{ii} t(n, Q)%
+ \qty( \f{\log(n/Q)}{n/Q})^{1/b} \underbrace{\int\limits_{r_n}^\infty\f{e^{-w/4}}{w+1}dw}_{\circled{a}} %
+ \pa{\f{Q}{n}}^{1/b} \underbrace{\int\limits_{r_n}^\infty\f{e^{-w/4}w^{1/b}}{w+1}dw}_{\circled{b}},
}
where (ii) follows from the fact that $\wo(z) \le \log(z)$ for $z > e$, together with an application of Lemma~\ref{lemma:useful-inequalities}~(iii) when $b\ge 1$, or of Lemma~\ref{lemma:useful-inequalities}~(i), with the additional factor $2^{1/b - 1}$ absorbed into the symbol $\lesssim$, when $b<1$. The term $\circled{a}$ can be bounded above using the incomplete $\Gamma$ function as,
\eq{
\circled{a} = \int\limits_{r_n}^\infty\f{e^{-w/4}}{w+1}dw = e^{1/4} \int_{(r_n+1)/4}^\infty v^{-1}e^{-v}dv = e^{1/4} \Gamma\qty\big(0, (r_n+1)/4) < \infty, \nn
}
Similarly, using the fact that $w+1 > 1$, the term $\circled{b}$ may be bounded above as,
\eq{
\circled{b} = \int\limits_{r_n}^\infty\f{e^{-w/4}w^{1/b}}{w+1}dw \le \int\limits_{r_n}^\infty{e^{-w/4}w^{1/b}}dw = 4^{1+1/b}\,\Gamma(1 + b\inv) \int_{r_n}^{\infty}\pi(v)dv \le 4^{1+1/b}\,\Gamma(1 + b\inv) < \infty,\nn
}
where $\pi$ is the probability density function of the $\Gamma\qty\big(1+b\inv, 1/4)$ distribution (shape $1+b\inv$, rate $1/4$). Therefore, the inequality in \eref{eq:momdist-ebound1} becomes
\eq{
\E\qty[ \Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) ] \lesssim \qty\bigg(\f{\log(n/Q)}{n/Q} + \f{Q^2}{(Q-2m)n} )^{1/b} + \qty\bigg(\f{Q}{n})^{1/b}.\nn
}
When the number of outliers grows with $n$ as $m_n = cn^\e$ for some $0 \le \e < 1$, take the number of blocks to be $Q_n = 3c n^\beta$ with $\e \le \beta < 1$. Therefore,
\eq{
\E\qty[ \Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) ] &\lesssim \inf_{\e \le \beta < 1} \qty\bigg(\f{\log(n/n^\beta)}{n/n^\beta} + \f{n^{2\beta}}{(3n^\beta-2n^\e)n} )^{1/b} + \qty\bigg(\f{n^\beta}{n})^{1/b}\n[5pt]
&\lesssim \qty( \f{\log n}{n^{1-\e}} )^{1/b},\nn
}
which gives us the desired result. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Lemma~\ref{lemma:sublevel-equivalence}}
\label{proof:lemma:sublevel-equivalence}
\begingroup
\providecommand{\rd}{\R^d}
\renewcommand{\a}{\alpha}
For simplicity, let $f=\dnq$ denote the \md{} function. By definition, $V[f]$ and $V[\rd,f]$ are $( \id, \a )-$interleaved if the following relationship holds
\eq{
V^t[f] \subseteq V^t[\rd, f] \subseteq V^{\a(t)}[f].\nn
}
The first inclusion is straightforward since
\eq{
V^t[f] \subseteq \bigcup_{\xv \in V^t[f]}B_f(\xv, t) = \bigcup_{\xv \in \rd}B_f(\xv, t) = V^t[\rd,f].\nn
}
For the second inclusion, suppose $\xv \in V^t[\rd, f]$, i.e., there exists $\yv \in \rd$ such that $\norm{\xv-\yv} \le r_{f,\yv}(t)$. It suffices to show that $\xv \in V^{\a(t)}[f]$. To this end, note that since $\dnq$ is $1-$Lipschitz by Lemma~\ref{lemma:lipschitz} it follows that
\eq{
f(\xv) &\le f(\yv) + \norm{\xv-\yv}\n
&\le f(\yv) + r_{f, \yv}(t)\n
&= f(\yv) + (t^p - f(\yv)^p)^{\f 1p}\n
&\num{\le}{i} 2^{\f{p-1}{p}}\qty( f(\yv)^p + (t^p - f(\yv)^p) )^{\f 1p} = 2^{\f{p-1}{p}}t,\nn
}
where (i) follows from an application of Lemma~\ref{lemma:useful-inequalities}~(iii). Since $f(\xv) \le 2^{\f{p-1}{p}}t = \a(t)$, it implies that $\xv \in V^{\a(t)}[f]$, and the result follows. When $p=1$, note that $\a(t) = t$, and therefore $V[f] = V[\rd, f]$.
\endgroup
\null\nobreak\hfill\qedsymbol{}
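As an illustrative sanity check of the interleaving function (not needed for the proof), take $p = 2$, so that $\alpha(t) = \sqrt{2}\,t$: the key step in the proof above reduces to

```latex
\[
  f(\yv) + \qty\big(t^{2} - f(\yv)^{2})^{1/2}
  \;\le\; \sqrt{2}\,\qty\big(f(\yv)^{2} + t^{2} - f(\yv)^{2})^{1/2}
  \;=\; \sqrt{2}\,t,
\]
```

which is the power-mean inequality $u + v \le \sqrt{2}(u^{2} + v^{2})^{1/2}$ applied with $u = f(\yv)$ and $v = (t^{2} - f(\yv)^{2})^{1/2}$.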
\subsection{Proof of Lemma~\ref{lemma:ab-filtration}}
\label{proof:lemma:ab-filtration}
Since $\bX \subseteq \bY$, the inclusion $\Vt[][\bX, f][\rho] \subseteq \Vt[][\bY, f][\rho]$ holds trivially. For the next part, let ${\vt{t} = \Vt[][\bX, f][\rho]}$ and ${\ut{t} = \Vt[][\bY, f][\rho]}$ denote the respective $f$--weighted filtrations, to avoid notational overload. In order to show the second inclusion, i.e., $\ut{t} \subseteq \vt{\alpha(t)}$, consider $\zv \in \ut{t}$. Then, there exists $\yv \in \bY$ such that $\zv \in \ball{\yv, t}$. If $\yv \in \bX \subset \bY$, then it immediately follows that ${\zv \in \vt{t} \subseteq \vt{\alpha(t)}}$. In what remains, for $\yv \in \bY \setminus \bX$, it is sufficient to show that there exists $\xv \in \bX$ such that ${\zv \in \ball{\xv, \alpha(t)}}$.
To this end, let $\xvy = \arginf_{\xv \in \bX}\rho(\xv, \yv)$ be the projection of $\yv$ onto $\bX$ via $\rho$. Then the following two cases arise: (I) $\rho(\xvy, \zv) \le \rho(\xvy, \yv)$, and (II) $\rho(\xvy, \zv) \ge \rho(\xvy, \yv)$ (see Figure~\ref{fig:proof:ab-filtration1}).
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/illustrator/fig4_1.jpg}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\bigskip
\includegraphics[width=\textwidth]{./figures/illustrator/fig4_2.jpg}
\end{subfigure}
\caption{Illustration of Case I (Left) and Case II (Right).}
\label{fig:proof:ab-filtration1}
\end{figure}
\textbf{Case I.} The distance between $\xvy$ and $\zv$ will satisfy
\eq{
\rho(\xvy, \zv) \le \rho(\xvy, \yv) &\num{\le}{i} f(\yv) + a\n
&\num{\le}{ii} \qty\big({t^p - \rho(\yv, \zv)^p})^{\fpp} + a\n
&\le t + a,\nn
}
where (i) follows from the assumption on $f$, and (ii) follows from the fact that if $\zv \in \ball{\yv, t}$, then $\rho(\yv, \zv) \le \rfx[f][\yv][t] = \pa{t^p - f(\yv)^p}^{1/p}$. Furthermore, from Lemma~\ref{lemma:useful-inequalities}~(vi) we obtain
\eq{
\rho(\xvy, \zv) &\le \qty\Big( (t+a + f(\xvy))^p - f(\xvy)^p )^{\fpp}\n
&\le \qty\Big( \qty\big(t+a + \sup_{\xv \in \bX}f(\xv))^p - f(\xvy)^p )^{\fpp}\n
&\le \qty\Big( \qty\big(2^{1-\fpp}t+a + \sup_{\xv \in \bX}f(\xv))^p - f(\xvy)^p )^{\fpp}\n
&= \qty\big( \alpha(t)^p - f(\xvy)^p )^{\fpp} = \rfx[f][\xvy][\alpha(t)],\nn
}
where the last inequality holds because $2^{1-\fpp} \ge 1$. The last line implies that ${\zv \in \ball{\xvy, \alpha(t)} \subseteq \vt{\alpha(t)}}$.
\textbf{Case II.} For $r = \rho(\xvy, \yv)$, let $\yv'$ be the projection of $\zv$ onto $\partial B\pa{\xvy, r}$, i.e.,
$$
\yv' = \arginf_{\xv' \in \partial B\pa{\xvy, r}}\rho(\xv',\zv).
$$
The point $\yv'$ satisfies the following three properties: (PI)~${\rho(\xvy, \yv') = \rho(\xvy, \yv)}$, since $\yv' \in \partial B\pa{\xvy, r}$; (PII)~$\rho(\zv, \yv') \le \rho(\zv, \yv)$ by definition of $\yv'$; and (PIII)~$\rho(\xvy, \yv') + \rho(\yv', \zv) \ge \rho(\xvy, \zv)$ from the triangle inequality.
Since $\zv \in \ball{\yv, t}$, when $\rho(\xvy, \yv) \le a$ we may use the triangle inequality to obtain
\eq{
\rho(\xvy, \zv) \le \rho(\xvy, \yv) + \rho(\zv, \yv) \le a + \qty( t^p - f(\yv)^p )^{\fpp} \le a + t \le a + 2^{1-\fpp}t.\label{eq:rho-1}
}
Alternatively, when $\rho(\xvy, \yv) > a$ we obtain the following inequality,
\eq{
t^p &\ge \rho(\yv, \zv)^p + f(\yv)^p \n
&\num{\ge}{iii} \rho(\zv, \yv)^p + \pa{\rho(\xvy, \yv) - a}^p\n
&\num{=}{iv} \rho(\zv, \yv)^p + \pa{\rho(\xvy, \yv') - a}^p\n
&\num{\ge}{v} \rho(\zv, \yv')^p + \pa{\rho(\xvy, \yv') - a}^p\n
&\num{\ge}{vi} \qty\Big(\rho(\xvy, \zv) - \rho(\xvy, \yv'))^p + \qty\Big(\rho(\xvy, \yv') - a)^p\n
&\num{\ge}{vii} 2^{1-p}\qty\Big( \rho(\xvy, \zv)- a)^p,
\label{eq:proof:ab-filtration1}
}
where (iii) holds from the assumption on $f$, (iv--vi) follow from (PI--PIII) respectively, and (vii) uses Lemma~\ref{lemma:useful-inequalities}\,(i). Rearranging the terms of \eref{eq:proof:ab-filtration1} we get $\rho(\xvy, \zv) \le a + 2^{1-\fpp}t$. Therefore, from \eref{eq:rho-1} and \eref{eq:proof:ab-filtration1}, in case (II) we have that
\eq{
\rho(\xvy, \zv) &\le 2^{1-\fpp}t + a\n
&\num{\le}{viii} \qty\Big( \qty\big(2^{1-\fpp}t + a + \sup_{\xv \in \bX}f(\xv))^p - f(\xvy)^p )^{\fpp}\n
&= \rfx[f][\xvy][\alpha(t)],\nn
}
where (viii) uses Lemma~\ref{lemma:useful-inequalities}~(vi). Similar to case (I), we obtain $\zv \in \ball{\xvy, \alpha(t)} \subseteq \vt{\alpha(t)}$. \null\nobreak\hfill\qedsymbol{}
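As a quick check on the interleaving function obtained above, note that for $p = 1$ the multiplicative constant disappears and the interleaving becomes purely additive:

```latex
\[
  p = 1:\qquad
  \alpha(t) \;=\; 2^{1-\fpp}\,t + a + \sup_{\xv \in \bX} f(\xv)
  \;=\; t + a + \sup_{\xv \in \bX} f(\xv).
\]
```

In this case the interleaving constant $a + \sup_{\xv \in \bX} f(\xv)$ does not grow with $t$, which is the regime exploited in the subsequent bottleneck bounds.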
\subsection{Proof of Lemma~\ref{lemma:ab-module}}
\label{proof:lemma:ab-module}
\providecommand{\xvo}{{\xv_{o}}}
Let $t(\bX) = \inf\pb{t > 0: \bigcap_{\xv \in \bX}\ball{\xv, t} \neq \varnothing}$, and let $\xvo \in \bigcap_{\xv \in \bX}\ball{\xv, t}$. To ease the notation, let $\ut{t} = \Vt[t][\bY, f][\rho]$ denote the usual $f$-weighted filtration, and let $\wt{t}$ be defined as
\eq{
\wt{t} = \pb{\bigcup_{\xv \in \bX}\ball{\xv, \beta(t)}} \cup \pb{\bigcup_{\yv \in \bY \setminus \bX}\ball{\yv, t}},\nn
}
such that $\ut{t} \subset \wt{t} \subset \ut{\beta(t)}$. With this background, the proof closely follows that of \citet[Proposition~4.8]{anai2019dtm}. Specifically, the proof is based on the following outline:
\begin{enumerate}[label=\protect\circled{\arabic*}]
\item We first establish that for any $\yv \in \bY \setminus \bX$, there exists $\xv = \xvy \in \bX$ such that for all $t \ge t(\bX)$, $\ball{\yv, t} \cup \ball{\xv, \beta(t)}$ is star-shaped around $\xvo$. Since this holds for all $\yv \in \bY \setminus \bX$, it also holds for $\bigcup_{\yv \in \bY \setminus \bX}\ball{\yv, t}$, and, therefore, $\wt{t}$ is star-shaped and contractible to $\xvo$.
\item The inclusion map $\iota_t: \ut{t} \hookrightarrow \ut{\beta(t)}$ can be decomposed as $\iota_t = j_t \circ \kappa_t$ where ${j_t: \ut{t} \hookrightarrow \wt{t}}$ and ${\kappa_{t}: \wt{t} \hookrightarrow \ut{\beta(t)}}$. Since $\wt{t}$ is star-shaped and contractible, i.e., $\wt{t} \sim \pb{\xvo}$, the linear map between the homology groups induced by $\kappa_{t}$, i.e., $w_t: \bwt{t} \rightarrow \but{\beta(t)}$, will be trivial.
\item The interleavings $\alpha(t)$ (Lemma~\ref{lemma:ab-filtration}) and $\beta(t)$ are combined to provide the bound in $\Winf$.
\end{enumerate}
\textbf{Claim \circled{1}.} Let $\yv \in \bY \setminus \bX$. We need to show that there exists $\xv \in \bX$ such that ${\ball{\xv, \beta(t)}\cup\ball{\yv, t}}$ is star-shaped around $\xvo$, i.e., for any $\zv \in \ball{\yv, t}$ the curve $\Gamma[\xvo, \zv]$ is contained inside the set ${\ball{\xv, \beta(t)}\cup\ball{\yv, t}}$. See Figure~\ref{fig:lemma:ab-module-claim1}.
\begin{figure}
\includegraphics[width=\textwidth]{./figures/illustrator/fig5_4.pdf}
\caption{Illustration of Claim \protect\circled{1}.}
\label{fig:lemma:ab-module-claim1}
\end{figure}
To this end, let $\xv = \arginf_{\zv \in \bX}\rho(\zv, \yv)$ be the projection of $\yv$ onto $\bX$. Note that, from the definition of $\xvo$, $\xvo \in \ball{\xv, t}$ for all $t \ge t(\bX)$. For simplicity, let $S^t = {\ball{\xv, \beta(t)}\cup\ball{\yv, t}}$. Additionally, let $\pi(\xv)$ and $\pi(\yv)$ be the projections of $\xv$ and $\yv$ onto $\Gamma[\xvo, \zv]$, respectively, i.e.,
\eq{
\pi(\xv) = \arginf_{\xv' \in \Gamma[\xvo, \zv]}\rho(\xv', \xv),\nn
}
and, mutatis mutandis, the same for $\pi(\yv)$. By definition, $\rho(\xv, \pi(\xv)) \le \rho(\xv, \xvo)$ and ${\rho(\yv, \pi(\yv)) \le \rho(\yv, \zv)}$, and consequently, $\pi(\yv) \in \ball{\yv, t}$. This implies that $\Gamma[\pi(\yv), \zv] \subseteq S^t$. What remains to be established is that $\Gamma[\xvo, \pi(\yv)] \subseteq S^t$. In order to show this, note that it is sufficient to show that $\pi(\yv) \in \ball{\xv, \beta(t)}$. Indeed, if this holds, then $\Gamma[\xvo, \pi(\yv)] \subseteq \ball{\xv, \beta(t)} \subseteq S^t$, and it will follow that ${\Gamma[\xvo, \pi(\yv)] \cup \Gamma[\pi(\yv), \zv] = \Gamma[\xvo, \zv] \subseteq S^t}$.
Let $\tau = \rho(\yv, \pi(\yv))$. Since $\pi(\yv) \in \ball{\yv, t}$, when $\rho(\xv, \yv) > a$ it follows that
\eq{
\tau &\le \rfx[f][\yv][t]\n
&= \qty\Big(t^p - f(\yv)^p)^{\fpp}\n
&\le \qty\Big(t^p - \qty\big(\rho(\xv, \yv) - a)^p)^{\fpp},\nn
}
where the last inequality follows from the assumption on $f$. Thus, we have
\eq{
\rho(\xv, \yv) &\le \qty\Big(t^p - \tau^p)^{\fpp} + a.
\label{eq:lemma:ab-module-eq2}
}
Alternatively, when $\rho(\xv, \yv) \le a$, \eref{eq:lemma:ab-module-eq2} holds trivially. Since $\pi(\xv) \in \ball{\xv, t}$ and $\rho(\xv, \pi(\xv)) \le \rho(\xv, \xvo)$, it follows that
\eq{
\rho(\xv, \pi(\xv)) \le t(\bX).
\label{eq:lemma:ab-module-eq3}
}
Since $\rho$ is the Euclidean metric, \citet[Lemma~B.2]{anai2019dtm} applies, which, combined with Eqs.~\eqref{eq:lemma:ab-module-eq2} and~\eqref{eq:lemma:ab-module-eq3}, yields
\eq{
\rho(\xv, \pi(\yv))^2 &\num{\le}{i} \qty\Big(\qty\Big(t^p - \tau^p)^{\fpp} + a)^2 + \tau(2t(\bX) - \tau)\n
&\le \qty\Big(t^p - \tau^p)^{\f2p} + \tau(2t(\bX) - \tau) + a^2 + 2a\qty\Big(t^p - \tau^p)^{\fpp}\n
&\num{\le}{ii} (t + \kappa t(\bX))^2 + a^2 + 2at\n
&\le (t + \kappa t(\bX))^2 + a^2 + 2a(t+\kappa t(\bX))\n
&= \qty\big(t+a+\kappa t(\bX))^2,\nn
}
where (i) is a consequence of \citet[Lemma~B.2]{anai2019dtm}, (ii) follows from \citet[Lemma~B.3]{anai2019dtm} and noting that $t^p - \tau^p \le t^p$ since $\tau \le t$, and $\kappa = (1-\fpp)$. Additionally, from Lemma~\ref{lemma:useful-inequalities}~(vi) we obtain
\eq{
\rho(\xv, \pi(\yv)) &\le t+a+\kappa t(\bX) \le \qty\Big( \qty\big(t+a+\kappa t(\bX) + \sup_{\xv \in \bX}f(\xv))^p - f(\xv)^p )^{\fpp} = \rfx[f][\xv][\beta(t)].\nn
}
This implies that $\pi(\yv) \in \ball{\xv, \beta(t)}$, and establishes claim \circled{1}.
For claim \circled{2}, note that since $\wt{t} \sim \pb{\xvo}$, for the $k$th homology group $\bwt{t}$, we have that $\bwt{t} \simeq \mathbf{F}$ for $k=0$, and $\bwt{t} \simeq \pb{\mathsf{0}}$ for $k > 0$. Therefore, the map $w_t: \bwt{t} \rightarrow \but{\beta(t)}$ is trivial, and consequently, so is the linear map $\but{t} \rightarrow \but{\beta(t)}$.
In order to show claim \circled{3}, observe that the persistence modules $\but{ }$ and $\bvt{ }$ are
\eq{
\begin{cases}
\text{ $(\id,\alpha)$--interleaved } & \text{ for all $t$ and for $\alpha: t \mapsto 2^{1-\fpp}t + a + \sup_{\xv \in \bX}f(\xv)$}\nn \\
\text{ $(\id,\beta)$--interleaved } & \text{ for $t \ge t(\bX)$ and for $\beta: t \mapsto t+a+\kappa t(\bX) + \sup_{\xv \in \bX}f(\xv)$}\nn
\end{cases}.
}
When $t\le t(\bX)$, from \citet[Lemma~B.1]{anai2019dtm},
\eq{
\alpha(t) &= t + \qty\Big(2^{1-\fpp}-1)t + a + \sup_{\xv \in \bX}f(\xv) \le t + \kappa t(\bX) + a + \sup_{\xv \in \bX}f(\xv) = \beta(t).\nn
}
Thus, $\alpha(t) \le \beta(t)$ for $t \le t(\bX)$. Since $\beta: t \mapsto t + c(\bX)$ is an additive interleaving for $c(\bX) = \kappa t(\bX) + a + \sup_{\xv \in \bX}f(\xv)$, this implies that
\eq{
\Winf\qty\big(\dgm\pa{\but{ }}, \dgm\pa{\bvt{ }}) \le c(\bX),\nn
}
which establishes claim \circled{3}. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Theorem~\ref{theorem:momdist-stability}}
\label{proof:theorem:momdist-stability}
We begin by establishing the following result:
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dnq] ) \le \adjustlimits\sup_{\xv \in \Xnm}\dnq(\xv) + \qty( 1 - \f1p )t(\Xnm).\nn
}
Observe that from Lemma~\ref{lemma:ab-filtration} and Lemma~\ref{lemma:ab-module}, it suffices to show that for every $\yv \in \Ym$ the MoM-Dist function $\dnq$ satisfies the property that
\eq{
\inf_{\xv \in \Xnm}\norm{\xv - \yv} \le \dnq(\yv).\nn
}
To this end, let $A = \pb{ q \in [Q] : S_q \cap \Ym = \varnothing }$ be the blocks containing no outliers. For $\yv \in \Ym$ and every $q \in A$, we have that $S_q \subseteq \Xnm$, and therefore
\eq{
\inf_{\xv \in \Xnm} \norm{\xv - \yv} \le \inf_{\xv \in S_q}\norm{\xv - \yv} = \mathsf{d}_{n,q}(\yv).\nn
}
Since this holds for every $q \in A$, taking the infimum on the right hand side over $q \in A$ yields
\eq{
\inf_{\xv \in \Xnm} \norm{\xv - \yv} \le \inf_{q \in A} \mathsf{d}_{n,q}(\yv).\nn
}
Since $2m < Q$ by assumption, the pigeonhole principle gives $\abs{A} > Q/2$, so more than half of the values $\qty\Big{ \mathsf{d}_{n, q}(\yv) : q \in [Q] }$ are bounded below by $\inf_{q \in A} \mathsf{d}_{n,q}(\yv)$, and we further have that
\eq{
\inf_{q \in A} \mathsf{d}_{n,q}(\yv) \le \med\qty\Big{ \mathsf{d}_{n, q}(\yv) : q \in [Q] },\nn
}
which implies that $\inf_{\xv \in \Xnm}\norm{\xv - \yv} \le \dnq(\yv)$ for every $\yv \in \Ym$. Therefore, taking $a=0$ in Lemma~\ref{lemma:ab-filtration} and Lemma~\ref{lemma:ab-module} we obtain
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dnq] ) \le \adjustlimits\sup_{\xv \in \Xnm}\dnq(\xv) + \qty( 1 - \f1p )t(\Xnm).
\label{eq:stability-1}
}
Turning our attention to the quantity appearing in the statement of the theorem, note that an application of the triangle inequality yields
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dsf_{n-m}] ) &\le \Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dnq] ) \n
&\quad\quad+ \Winf\qty\bigg( \bVt[ ][\Xnm, \dnq], \bVt[ ][\Xnm, \dsf_{n-m}] )\n
&\num{\le}{$\star$} \adjustlimits\sup_{\xv \in \Xnm}\dnq(\xv) + \qty( 1 - \f1p )t(\Xnm) + \norminf{\dnq - \dsf_{n-m}},\nn
}
where the first term in ($\star$) follows from \eref{eq:stability-1} and the last term follows from Lemma~\ref{lemma:anai-et-al}~(i). This gives us the desired result. Furthermore, when $p=1$ note that $1 - 1/p = 0$, giving us the tighter bound in this case. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Theorem~\ref{theorem:momdist-consistency}}
\label{proof:theorem:momdist-consistency}
We begin by noting that $\bvt{}[\bX] = \bvt{}[\bX, \dx]$. Indeed, the distance function satisfies $\dx(\xv) = 0$ for all $\xv \in \bX$. We may further conclude that
\eq{
\sup_{\xv \in \bX}\dx(\xv) = 0.
\label{eq:supdist=0}
}
The bottleneck distance between $\bVt[ ][\Xnm \cup \Ym, \dnq]$ and $\bvt{}[\bX]$ may be bounded above as
\eq{
\Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) &\le \Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bVt[ ][\Xnm, \dnq] ) && =:\circled{a} \n
&\quad+ \Winf\qty\bigg( \bVt[ ][\Xnm, \dnq], \bVt[ ][\Xnm, \dx]) && =:\circled{b} \n
&\quad+ \Winf\qty\bigg( \bVt[ ][\Xnm, \dx], \bVt[ ][\bX, \dx] ). && =:\circled{c} \nn
}
When $p=1$, the term $\circled{b} \le \norminf{\dnq - \dx}$ using Lemma~\ref{lemma:anai-et-al}~(i), and the term $\circled{c} \le \mathsf{H}(\Xnm, \bX)$ using Lemma~\ref{lemma:anai-et-al}~(ii). The term $\circled{a}$ is bounded above by taking $p=1$ in \eref{eq:stability-1} (from the proof of Theorem~\ref{theorem:momdist-stability}) to give
\eq{
\circled{a} &= \sup_{\xv \in \Xnm}\dnq(\xv) \n
&\num{\le}{$\star$} \sup_{\xv \in \bX}\dnq(\xv)\n
&\num{\le}{$\dagger$} \norminf{\dnq - \dx} + \sup_{\xv \in \bX}\dx(\xv)\n
&\num{=}{$\ddagger$} \norminf{\dnq - \dx},\nn
}
where ($\star$) follows from the fact $\Xnm \subset \bX$, ($\dagger$) uses the identity $f(\xv) \le \norminf{f - g} + g(\xv)$ for all $\xv \in \bX$, and ($\ddagger$) follows from \eref{eq:supdist=0}. Plugging in the bounds for the bottleneck distance we obtain
\eq{
\Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) \le 2\norminf{\dnq - \dx} + \mathsf{H}(\Xnm, \bX).\nn
}
By noting that the Hausdorff distance $\mathsf{H}(\Xnm, \bX) = \norminf{\dsf_{n-m} - \dx}$, for $t_1, t_2$ such that $t_1 + t_2 = t$ we may bound the tail probability for the bottleneck distance as follows.
\eq{
\pr\qty\Bigg{\Winf\qty\Big( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) > t} &\le \pr\qty\Big({ 2\norminf{\dnq - \dx} > t_1 }) + \pr\qty\Big({ \norminf{\dsf_{n-m} - \dx} > t_2 })\n
&\le \delta_1 + \delta_2 = \delta,
\label{eq:momdist-filt}
}
where the relationship between $\delta_1, \delta_2$ and $t_1, t_2$ is given by \eref{eq:momdist-t-constraint}, i.e., $\delta_1 \le e^{-(1+b)Q}$ from the condition in Theorem~\ref{theorem:momdist-sublevel}, $\delta_2 = \delta - \delta_1$,
\eq{
t_1 = 2\qty\Bigg( \f{Q}{an} \wo\qty( \nq \exp{ \f{4{\log(1/\delta_1)}}{Q-2m} })) ^{1/b}, \ \ \text{ and } \ \ \ t_2 = \qty\Bigg( \f{1}{an} \wo\qty\Big( n e^{ 4{\log(1/\delta_2)} })) ^{1/b}.\nn
}
Furthermore, using the bound for the Lambert $\wo$ function ${\wo(z) \le \log z}$ for $z > e$, we have
\eq{
t_1 \le 2\qty\bigg( \frac{Q\log(n / Q)}{a (n / Q)} + \frac{4Q \log(1/\delta_1)}{a(Q-2m)n} )^{1/b}, \ \ \text{ } \ \ \ t_2 \le \qty\bigg( \frac{\log (n-m)}{a (n-m)} + \frac{4 \log(1/\delta_2)}{a (n-m)} )^{1/b},\nonumber
}
and $t = t_1 + t_2 \le \mathfrak{f}(n, m, Q, \delta_1, \delta_2)$. Therefore, the bound in \eref{eq:momdist-filt} yields
\eq{
\pr\Bigg\{\Winf\qty\Big( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] )& \le \mathfrak{f}(n, m, Q, \delta_1, \delta_2)\Bigg\} \n
&\ge \pr\qty\Bigg{\Winf\qty\Big( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) \le t_1 + t_2} \ge 1 - \delta,\nn
}
which gives the desired result. The second part of the theorem follows directly using the identical procedure as that used in the proof of Theorem~\ref{theorem:momdist-sublevel} in Section~\ref{proof:theorem:momdist-sublevel}. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Proposition~\ref{prop:sublevel-2}}
\label{proof:prop:sublevel-2}
We begin by noting from Lemma~\ref{lemma:sublevel-equivalence} that $V[\dnq]$ and $V[\R^d, \dnq]$ are $(\id, \alpha)-$interleaved for ${\alpha : t \mapsto \tp t}$. Furthermore, consider the intermediate filtrations $V[\Xnm, \dnq]$. From Theorem~\ref{theorem:momdist-stability} and Lemma~\ref{lemma:ab-filtration} we have that $V[\R^d, \dnq]$ and $V[\Xnm, \dnq]$ are $(\eta, \id)-$interleaved for
$$\eta: t \mapsto \tp t + \sup_{\xv \in \Xnm}\dnq(\xv).$$
Using an identical argument, but reversing the order, we have that $V[\Xnm, \dnq]$ and $V[\Xn, \dnq]$ are $(\id,\eta)-$interleaved. We can now apply the ``triangle inequality'' for generalized interleavings \cite[Proposition~3.11]{bubenik2015metrics} to obtain that $V[\dnq]$ and $V[\Xn, \dnq]$ are ${(\id \circ \eta \circ \id, \alpha \circ \id \circ \eta)-}$interleaved. On simplifying the interleaving maps, we obtain that the two filtrations are $(\eta, \xi)-$interleaved for ${\xi(t) = \alpha \circ \eta(t) = \tp\eta(t)}$. \null\nobreak\hfill\qedsymbol{}
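To make the composed interleaving explicit, write $c_p = 2^{(p-1)/p}$ for the constant denoted $\tp$ above (a notational assumption based on the interleaving function in Lemma~\ref{lemma:sublevel-equivalence}); then the two maps unwind to

```latex
\[
  \eta(t) = c_p\,t + \sup_{\xv \in \Xnm}\dnq(\xv),
  \qquad
  \xi(t) = c_p\,\eta(t) = c_p^{2}\,t + c_p\sup_{\xv \in \Xnm}\dnq(\xv),
\]
```

which collapses to a purely additive interleaving when $p = 1$, since $c_1 = 1$.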
\subsection{Proof of Theorem~\ref{theorem:momdist-influence}}
\label{proof:theorem:momdist-influence}
The birth time of a connected component at $\xvo$ in $\Vt[ ][\bX, f]$ is given by $b_f(\pb{\xvo}) = f(\xvo)$.
Therefore, $\db$ is given by
\eq{
\db = b_{n}(\pb{\xvo}) - b_{n+m}(\pb{\xvo}) = \dsf_n(\xvo) - \dsf_{n+m}(\xvo) = \dsf_n(\xvo),\nn
}
where the last equality follows from the fact that $\dsf_{n+m}(\xvo) = 0$, since $\xvo \in \Xmn$. On the other hand, from the proof of Theorem~\ref{theorem:momdist-stability},
\eq{
\dsf_n(\xvo) = \inf_{\xv \in \Xn}\norm{\xv - \xvo} \le \dsf_{n+m, Q}(\xvo).\nn
}
Therefore, we have that $\dbq = \dsf_n(\xvo) - \dsf_{n+m, Q}(\xvo) \le \db$, and the result follows.
For the second part, we begin by observing that $\norminf{ \dsf_{n+m} - \dsf_n }$ can be bounded from below as follows:
\eq{
\norminf{ \dsf_{n+m} - \dsf_n } &\ge \dsf_n(\xvo) - \dsf_{n+m}(\xvo)\n
&= \dsf_n(\xvo) - 0 \n
&=\inf_{\xv \in \Xn}\norm{\xv - \xvo} \n
&\ge\inf_{\xv \in \Xb}\norm{\xv - \xvo} = \dx(\xvo). \nn
}
Furthermore, for $\delta \le e^{-(1+b)Q}$ and $k \defeq \max\qty\big{1, {2^{\f{b-1}{b}}}}$, with probability greater than $1-\delta$,
\eq{
&\norminf{\dsf_{n+m,Q} - \dsf_n} \n
&\qq{} \num{\le}{i} \norminf{\dsf_{n+m,Q} - \dx} + \norminf{\dsf_n - \dx}\n
&\qq{} \num{\le}{ii} \qty\Bigg[ \f{1}{a\nQ}\wo\qty( \nQ \exp\qty{ \f{4Q\log(2/\delta)}{Q-2m} } ) ]^{1/b} + \qty\Bigg[ \f{1}{an}\wo\qty( n \exp\qty{ 4\log(2/\delta) } ) ]^{1/b}\n
&\qq{} \num{\le}{iii} \f{k}{a^{1/b}}\qty[{ \f{1}{\nQ} \wo\qty( \nQ \exp\qty{ \f{4Q\log(2/\delta)}{Q-2m} } ) + \f{1}{n} \wo\qty( n \exp\qty{ 4\log(2/\delta) } )}]^{1/b}\n
&\qq{} \num{\le}{iv} \f{k}{a^{1/b}}\qty[{\f{\log\nQ}{\nQ}} + \f{\log n}{n} + 4\log(2/\delta)\qty( \f{Q}{\nQ(Q-2m)} + \f{1}{n} ) ]^{1/b}\n
&\qq{} \num{\le}{v} \f{k2^{1/b}}{a^{1/b}}\qty(\f{\log\nQ + 4Q\log(2/\delta)}{\nQ})^{1/b} \defeq \et,\nn
}
where, for $\nQ = (n+m)/Q$, (i) is a consequence of the triangle inequality, (ii) follows from the proofs of Theorem~\ref{theorem:momdist-sublevel} and Theorem~\ref{theorem:momdist-consistency}, (iii) uses Lemma~\ref{lemma:useful-inequalities}, (iv) follows from the fact that $\wo(z) < \log(z)$ for $z > e$, and (v) uses the fact that $\nQ < n$ and $(Q-2m)\inv \le 1$ for $2m < Q$.
Observe that if $2\et \le \dx(\xvo)$, then with probability greater than $1-\delta$,
\eq{
\norminf{ \dsf_{n+m} - \dsf_n } - \norminf{\dsf_{n+m,Q} - \dsf_n} \ge \dx(\xvo) - \et \ge \et,
\label{eq:momdist-influence-triangle}
}
and the result follows. Therefore, in order to establish the claim for the second part it suffices to check that ${2\et \le \dx(\xvo)}$ under conditions (I) and (II). To this end, note that
\eq{
\dx(\xvo) \ge 2\et \quad \Longleftrightarrow \quad \vp \ge \f{\log\nQ + 4Q\log(2/\delta)}{\nQ},\n
}
which is satisfied whenever $\delta$ satisfies the r.h.s. of condition (II), i.e.,
\eq{
\log(2/\delta) \le \f{\nQ\vp - \log\nQ}{4Q}.\nn
}
Furthermore, the l.h.s. of condition (II), i.e., $\delta \le e^{-(1+b)Q}$, is satisfied only when
\eq{
(1+b)^2 Q^2 \le \f{\nQ\vp - \log\nQ}{4Q},\nn
}
or, equivalently, when condition (I) is satisfied:
\eq{
\vp \ge \f{\log\nQ}{\nQ} + \f{4(1+b)^2Q^3}{\nQ}.\nn
}
The result now follows from \eref{eq:momdist-influence-triangle}. \null\nobreak\hfill\qedsymbol{}
\begingroup
\providecommand{\hQ}{\widehat{Q}}
\providecommand{\hm}{\widehat{m}}
\renewcommand{\ms}{m^*}
\providecommand{\h}{\mathfrak{h}}
\subsection{Proof of Theorem~\ref{theorem:lepski}}
\label{proof:theorem:lepski}
Let $\js = \min\pb{ j \in \J : m(j) > \ms }$. By definition of $\J$ we have that $\abs{\J} \le 1 + \log_\theta(\mmax/\mmin)$ and $m(\js) < \theta\ms$ for $\theta > 1$. The outline of the proof is as follows. First, we show that $\mathfrak{h}(n,m,\delta)$ is non-decreasing in $m$, from which it follows that $\mathfrak{h}(n,m(j),\delta) \le \mathfrak{h}(n,m(j+1),\delta)$. Next, we show that the event $\pb{ \jh \le \js }$ contains the event $\mathcal{E}$ given by
\eq{
\mathcal{E} = \bigcap_{\qty{j \in \J : j \ge \js}}\qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX] ) \le \h(n, m(j), \delta) }.\nn
}
Then, using a standard procedure for obtaining the Lepski bound (e.g., Theorem~5.1 of \citealt{minsker2018sub} and Theorem~3.1 of \citealt{chen2020robust}), we show that the event $\mathcal{E}$, and, therefore, the event $\pb{\jh \le \js}$, holds with probability at least $1 - \delta \log_\theta(\mmax/\mmin)$. Lastly, we use the bound on the event $\pb{\jh \le \js}$ to obtain the desired result.
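As an aside, the selection rule defining $\jh$ — take the smallest index $j$ whose estimator is within $2\h(n, m(i), \delta)$ of every estimator at a later index $i$ — can be sketched generically. The sketch below is purely schematic: \texttt{dist} and \texttt{h} are placeholders for the bottleneck distance $\winf$ and the bound $\h$, not the implementation used in our package.

```python
def lepski_select(estimators, h, dist):
    """Generic Lepski rule: return the smallest index j such that the
    estimator at j is within 2*h(i) of every estimator at index i > j."""
    J = len(estimators)
    for j in range(J):
        if all(dist(estimators[i], estimators[j]) <= 2 * h(i)
               for i in range(j + 1, J)):
            return j
    return J - 1

# Toy example: scalar "estimators" drifting toward a target, with
# shrinking (hypothetical) confidence widths h(j).
ests = [0.0, 0.6, 0.9, 1.0, 1.02]
widths = [1.0, 0.5, 0.25, 0.12, 0.07]
j_hat = lepski_select(ests, lambda i: widths[i], lambda a, b: abs(a - b))
# j = 0 and j = 1 are ruled out by later, tighter estimators; j_hat = 2.
```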
\textbf{1. Monotonicity of $\h(n, m, \delta)$ in $m$.} Consider the function $f(z; \alpha, \beta) = \alpha\wo(\beta z)/ z$ for fixed constants $\alpha, \beta > 0$. The derivative of $f$ is given by
\eq{
f'(z; \alpha, \beta) = \f{d}{dz} \qty(\f{\alpha}{z}\wo(\beta z)) &= \alpha\pa{ \f{\beta}{z}\wo'(\beta z) - \f{1}{z^2}\wo(\beta z) }\nn\\[10pt]
&\num{=}{i}\alpha\pa{ \f{\beta}{z}\pb{\f{\wo(\beta z)}{\beta z (1 + \wo(\beta z))}} - \f{1}{z^2}\wo(\beta z) }\nn\\[10pt]
&= - \f{\alpha \wo(\beta z)^2}{z^2(1+\wo(\beta z))} < 0 \qq{for all} z > 0.\nn
}
Note that in (i) we have used the fact that the derivative of the Lambert $\wo$ function is given by $\wo'(z) = \wo(z)/\qty\big(z(1+\wo(z)))$. Therefore, $f$ is decreasing in $z$. The claim follows by noting that the function $\mathfrak{h}$ is given by
\eq{
\h(n, m, \delta) = f( n/(2m+1); \alpha_1, \beta_1 )^{1/b} + f(n-m; \alpha_2, \beta_2)^{1/b},\nn
}
for constants $\alpha_1, \beta_1, \alpha_2, \beta_2 > 0$ not depending on $n$ or $m$.
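As a numerical aside (outside the proof), the derivative identity for the Lambert $\wo$ function and the resulting monotonicity of $f$ can be checked directly with SciPy; the constants $\alpha = \beta = 1$ below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.special import lambertw

def f(z, alpha=1.0, beta=1.0):
    # f(z; alpha, beta) = alpha * W0(beta * z) / z, as in the proof.
    return alpha * lambertw(beta * z).real / z

# Finite-difference check of the identity W0'(z) = W0(z) / (z * (1 + W0(z))).
z, h = 2.0, 1e-6
numeric = (lambertw(z + h).real - lambertw(z - h).real) / (2 * h)
w = lambertw(z).real
closed_form = w / (z * (1 + w))
assert abs(numeric - closed_form) < 1e-8

# f is decreasing in z, which is what makes h(n, m, delta) non-decreasing in m.
zs = np.linspace(0.5, 50.0, 200)
vals = f(zs)
assert np.all(np.diff(vals) < 0)
```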
\textbf{2. $\mathcal{E}$ is a subset of $\pb{\jh \le \js}$.} We begin by noting that since $\ms \le m(\js)$, it follows that $2\ms < Q(j) = 2 m(j) + 1$ for all $j \ge \js$, so the first condition of Theorem~\ref{theorem:momdist-consistency} is satisfied. By taking
$$
\delta_1 = e^{-2(1+b)(2\mmax + 1)} \le e^{-2(1+b)Q(j)},
$$
and $\delta_2 = \delta - \delta_1$, note that $\h\qty\big(n, m(j), \delta) = \mathfrak{f}\qty\big(n, m(j), Q(j), \delta_1, \delta_2)$. Therefore, we may use Theorem~\ref{theorem:momdist-consistency} to obtain
\eq{
\pr\qty\bigg( \winf\qty\Big( \bbv_n(j), \bbv[\bX]) > \h(n, m(j), \delta) ) < \delta \qq{for all} j \ge \js.
\label{eq:j-condition}
}
Furthermore, by definition of $\jh$, it follows that for all $j < \jh$, there exists at least one $i > j$ such that ${\winf\pa{ \bbv_n(i), \bbv_n(j) } > 2\h(n, m(i), \delta)}$. Therefore,
\eq{
\pb{ \jh > \js } &\subseteq \bigcup_{\pb{ j \in \J : j > \js }} \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv_n(\js) ) > 2\h( n, m(j), \delta ) }\nn\\
&\num{\subseteq}{ii} \bigcup_{\pb{ j \in \J : j > \js }} \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX]) > \h( n, m(j), \delta ) } \cup \qty\Big{ \winf\qty\big( \bbv_n(\js), \bbv[\bX]) > \h( n, m(\js), \delta ) }\nn\\
&= \bigcup_{\pb{ j \in \J : j \ge \js }} \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX]) > \h( n, m(j), \delta ) } \defeq \mathcal{E}^c,
\label{eq:Ec}
}
where, in (ii) we have used the fact that $\h(n, m(\js), \delta) \le \h(n, m(j), \delta)$ for all $j > \js$, and, therefore
\eq{
\qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX]) \le \h( n, m(j), \delta ) } \cap \qty\Big{ \winf\qty\big( \bbv_n(\js), \bbv[\bX]) \le \h( n, m(\js), \delta ) } \nn\\
\subseteq \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv_n(\js) ) \le 2\h( n, m(j), \delta ) }. \qq{} \qq{} \nn
}
Taking complements in the above inclusion yields the inclusion in (ii). Therefore, we obtain that $\mathcal{E} \subseteq \pb{ \jh \le \js }$.
\textbf{3. Tail bound for the event $\mathcal{E}$.} Applying a union bound to \eref{eq:Ec}, we obtain
\eq{
\pr( \mathcal{E}^c ) &= \pr\qty( \bigcup_{\pb{ j \in \J : j \ge \js }} \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX]) > \h( n, m(j), \delta ) } )\nn\\
&\le \sum_{\pb{ j \in \J : j \ge \js }} \pr\qty\bigg( \winf\qty\big( \bbv_n(j), \bbv[\bX]) > \h( n, m(j), \delta ) )\nn\\
&\num{\le}{iv} \sum_{\pb{ j \in \J : j \ge \js }} \!\!\!\delta\nn\\
&\num{\le}{v} \delta \log_\theta\qty( \f{\theta\mmax}{\mmin} ),\nn
}
where (iv) follows from \eref{eq:j-condition} and (v) uses the fact that $\abs{\J} \le 1 + \log_\theta(\mmax/\mmin)$.
\textbf{4. Bound for $\winf(\bbv_n(\jh), \bbv[\bX])$.} We begin by noting that when the event $\mathcal{E}$ holds, we have that
\eq{
\winf\qty(\bbv_n(\jh), \bbv[\bX]) &\le \winf\qty(\bbv_n(\jh), \bbv_n(\js)) + \winf\qty(\bbv_n(\js), \bbv[\bX])\nn\\
&\num{\le}{vi} 2\h( n, m(\js), \delta ) + \h(n, m(\js), \delta)\nn\\
&\num{\le}{vii} 3\h( n, \theta\ms, \delta ),\nn
}
where the first term in (vi) follows from the definition of $\jh$, which is guaranteed to hold because $\mathcal{E} \subseteq \pb{\jh \le \js}$, and the second term in (vi) follows from the definition of $\mathcal{E}$. The inequality in (vii) uses the fact that $m(\js) < \theta\ms$ and the fact that $\h(n, m, \delta)$ is non-decreasing in $m$. Therefore, we have the inclusion
\eq{
\mathcal{E} \subseteq \qty\Big{ \winf\qty(\bbv_n(\jh), \bbv[\bX]) \le 3\h( n, \theta\ms, \delta ) }.\nn
}
Using the tail bound on $\mathcal{E}$ we obtain
\eq{
\pr\qty\Big( \winf\qty\big(\bbv_n(\jh), \bbv[\bX]) \le 3\h( n, \theta\ms, \delta ) ) \ge \pr( \mathcal{E} ) \ge 1 - \delta\log_\theta\qty( \f{\theta\mmax}{\mmin} ),\nn
}
which is the desired result. \null\nobreak\hfill\qedsymbol{}
\endgroup
\section{Introduction}
\label{sec:intro}
Given a compact set $\bX \subset \R^d$, its persistence diagram encodes the subtle geometric and topological features underlying $\bX$ as a multiscale summary, and forms the cornerstone of topological data analysis. Persistent homology serves as the backbone for computing persistence diagrams, and encodes the homological features underlying $\bX$ at different resolutions. The computation of persistent homology is typically achieved by constructing a \textit{filtration} $V_{\bX}$, i.e., a nested sequence of topological spaces, which captures the evolution of geometric and topological features as the resolution varies. The persistent homology of $V_{\bX}$, encoded in its persistence module $\bbv_{\bX}$, extracts the homological information from the filtration, and is then summarized in a persistence diagram $\dgm\pa{\bbv_{\bX}}$.
Broadly speaking, there are two different methods for obtaining filtrations. The first, and arguably more classical, method examines the union of balls of radius $r$ centered at the points of $\bX$, called the $r$--\textit{offset} of $\bX$ and denoted ${\bX}(r)$, for each resolution $r > 0$. The resulting filtration $V[\bX] = \pb{\Xb(r): r > 0}$ depends only on the metric properties of $\bX$. The second, and more general, approach is based on constructing a \textit{filter function} $f_{\Xb}$ which reflects the topological features underlying $\bX$. The resulting filtration $V[f_{\bX}]$, in this case, is obtained by probing the sublevel sets $f\inv_{\Xb}\left( (-\infty, r] \right)$ or the superlevel sets $f\inv_{\Xb}\qty( [r, \infty))$ associated with $f_\Xb$. While the two methods are vastly different, in principle they both attempt to explore the topological features underlying $\bX$.
In this context, the distance function $\dx$ to the set $\Xb$ plays a special role in topological data analysis, and satisfies the property that $V[\bX] = V[\dx]$. That is, the sublevel sets of the distance function encode the same topological information as the filtration from its offsets. The appeal of using the distance function in the computation of persistence diagrams comes from the celebrated stability of persistence diagrams \citep{chazal2016structure}. In a nutshell, the stability result guarantees that (i) the persistence diagrams resulting from two compact sets $\Xb$ and $\Yb$ are close whenever the sets themselves are close in the Hausdorff distance, and (ii) the functional persistence diagrams resulting from two filter functions $f$ and $g$ are close whenever $f$ and $g$ are close w.r.t.~the $\norminf{\cdot}$ metric.
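To make the role of the distance function concrete, here is a minimal numerical sketch (an illustration, not the package's code): for a finite sample, the empirical distance function evaluates to the distance to the nearest sample point, and its sublevel sets are exactly the offsets of the sample.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sample 200 points on the unit circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
Xn = np.column_stack([np.cos(theta), np.sin(theta)])

tree = cKDTree(Xn)
def d_n(x):
    # Empirical distance function: distance from x to the nearest sample point.
    # Its r-sublevel set {x : d_n(x) <= r} is the union of r-balls around Xn.
    return tree.query(np.atleast_2d(x))[0]

# The circle's center is at distance ~1 from the sample; sample points are at 0.
assert abs(d_n([0.0, 0.0])[0] - 1.0) < 1e-6
assert d_n(Xn[0])[0] == 0.0
```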
In the statistical setting, one has access to $\bX$ only through samples $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n}$ obtained using a probability distribution $\pr$ which is supported on the (unknown) set $\bX$. The objective, in a statistical inference framework, is to use the samples $\Xn$ to infer the true population persistence diagram $\dgm\pa{\bbv_\Xb}$. The offset $\Xn(r)$ and filter function $f_n$, constructed using the sample points, are themselves random quantities associated with their population counterparts $\Xb(r)$ and $f_\Xb$, respectively, and these may be used to construct a sample estimator $\dgm\pa{\bbv_{\Xn}}$. To this end, several existing works have studied the statistical properties of $\dgm\pa{\bbv_{\Xn}}$, e.g.,~constructing confidence bands and characterizing the convergence rate of $\dgm\pa{\bbv_{\Xn}}$ to $\dgm\pa{\bbv_{\Xb}}$ in the space of persistence diagrams \citep{fasy2014confidence,chazal2015subsampling,chazal2015convergence,chazal2017robust,vishwanath2020robust}.
\subsection{Contributions}
In practical settings, real-world data is likely subject to measurement errors and the presence of outliers. While some assumptions may be imposed on the noise and the outliers, in the most adverse settings the given data may be subject to adversarial contamination. In this setting, for $m<n/2$, we assume that the samples $\Xn$ to which we have access contain only ${n-m}$ points obtained from the probability distribution $\pr$ with $\supp\pa{\pr} = \bX$, and we make no further assumptions on the remaining $m$ points. In principle, the $m$ outliers may be carefully chosen by an adversary after examining the remaining $n-m$ points. The overarching objective of this paper is to construct an estimator of the (unknown) population quantity $\dgm\pa{\bbv[\bX]}$ from the corrupted sample $\Xn$ that is both statistically consistent and computationally efficient.
While the stability of persistence diagrams guarantees that small perturbations in the sample points induce only small changes in the resulting persistence diagrams, even a few outliers in the samples can lead to deleterious effects. This issue is further exacerbated in the adversarial setting, where the adversary is free to place the $m$ points wherever they will most drastically impact the resulting topological inference.
In this paper, we introduce \md{}, denoted by $\dnq$, as an outlier-robust variant of the empirical distance function which is constructed using the median-of-means principle, and we establish its theoretical properties. Notably, the \md{} relies on a tuning parameter $Q$ which is easy to interpret. While the persistence diagram resulting from the sublevel filtration of $\dnq$ is a valid candidate for statistical inference, it can be expensive to compute in practice. To overcome this, we use the weighted filtrations introduced by \cite{buchet2016efficient} and \cite{anai2019dtm} to construct $\dnq$-weighted filtrations, $V[\Xn, \dnq]$, as computationally efficient estimators of $\dgm\pa{\bbv[\bX]}$. Our main contributions are the following:
\begin{enumerate}[label=\textup{(\Roman*)}]
\item We show that sublevel set persistence diagrams of $\dnq$ are consistent estimators of the sublevel set persistence diagram of the true population counterpart $\dx$ even in the presence of outliers (Theorem~\ref{theorem:momdist-sublevel}).
\item We establish a stability result for the $\dnq$-weighted filtrations, $V[\Xn, \dnq]$, and we show that they are stable w.r.t.~adversarial contamination (Theorem~\ref{theorem:momdist-stability}).
\item Furthermore, we show that the persistence diagram $\dgm\pa{\bbv[\Xn, \dnq]}$ is both a computationally efficient and statistically consistent estimator of $\dgm\pa{\bbv[\bX]}$, and we establish its convergence rate (Theorem~\ref{theorem:momdist-consistency}).
\item Next, in a sensitivity analysis framework, we quantify the gain in robustness achieved when using the $\dnq$-weighted filtrations vis-\`{a}-vis its non-robust $\dsf_n$-weighted counterpart (Theorem~\ref{theorem:momdist-influence}).
\item Lastly, we propose a data-driven procedure for adaptively selecting the tuning parameter $Q$ using Lepski's method. For the data-driven choice $\widehat Q$, we show that the resulting estimator $\dgm\qty\big{\bbv[\Xn, \dsf_{n, \widehat Q}]}$ is statistically consistent and establish its convergence rate (Theorem~\ref{theorem:lepski}).
\end{enumerate}
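A minimal sketch of one natural instantiation of the pointwise median-of-means idea behind $\dnq$ is the following (our reading of the construction for illustration only, not the reference implementation): partition the sample into $Q$ blocks, evaluate each block's distance function, and take the pointwise median.

```python
import numpy as np
from scipy.spatial import cKDTree

def momdist(X, Q, rng=None):
    """Sketch of a pointwise median-of-means distance function: split X into
    Q disjoint blocks and return x -> median_q min_{i in block q} |x - X_i|."""
    rng = np.random.default_rng(rng)
    blocks = np.array_split(rng.permutation(len(X)), Q)
    trees = [cKDTree(X[b]) for b in blocks]
    def d(x):
        x = np.atleast_2d(x)
        return np.median([t.query(x)[0] for t in trees], axis=0)
    return d

# Circle signal with m = 20 adversarial points placed at the center.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)
signal = np.column_stack([np.cos(theta), np.sin(theta)])
X = np.vstack([signal, np.zeros((20, 2))])

d_mom = momdist(X, Q=41, rng=0)                 # Q = 41 > 2m = 40 blocks
d_emp = cKDTree(X).query([[0.0, 0.0]])[0]
# The empirical distance vanishes at the contaminated point, since the
# outliers can corrupt at most 20 of the 41 blocks, while d_mom stays large.
```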
\subsection{Related Work}
Several approaches have been proposed in existing literature to overcome the sensitivity of persistence diagrams to noise. The prevailing ideas in these approaches rely on constructing a filter function, $f_\pr$, which reflects both the topological information and the distribution of mass underlying the support $\supp\pa{\pr} = \bX$. Replacing the population probability measure $\pr$ with the empirical measure $\pr_n$ associated with the samples $\Xn$ results in an empirical estimator $f_{\pr_n}$. Some notable examples include the distance-to-measure \citep{chazal2011geometric}, the kernel distance \citep{phillips2015geometric}, and kernel density estimators \citep{fasy2014confidence}.
While these approaches mitigate, to some extent, the influence of noise on the resulting persistence diagrams, they are not without their drawbacks. For starters, while it may be argued that $\dgm\pa{\bbv[f_{\pr_n}]}$ is more resilient to noise, ultimately, this sample estimator corresponds to the population quantity $\dgm\pa{\bbv[f_{\pr}]}$, which may, nevertheless, omit some subtle geometric and topological features present in $\dgm\pa{\bbv[\bX]}$. Furthermore, from a statistical perspective, if $\Xn$ comprises only $n-m$ points from $\pr$ and the remaining $m$ points constitute outliers, then the sample estimator $\dgm\pa{\bbv[f_{\pr_n}]}$, obtained using $\Xn$, will no longer be a valid estimator of the population quantity $\dgm\pa{\bbv[f_{\pr}]}$ which we wish to infer.
Lastly, the exact computation of these estimators can be prohibitively expensive, if not impossible in practice. For instance, the exact computation of the distance-to-measure requires computing an order-$k$ Voronoi diagram. Moreover, in the general setting, the sublevel/superlevel filtrations arising from these approaches are computed using cubical homology, which relies on a (nuisance) grid resolution \mbox{parameter}. If this resolution is too coarse, then some subtle topological features are affected. On the flipside, if the resolution is too fine, then the accuracy is still impacted, as noted in \cite{fasy2014confidence}. In the high-dimensional setting, cubical homology also falls victim to the curse of dimensionality, i.e., for a fixed grid resolution, the number of simplices in the resulting cubical complex grows exponentially with the dimension of the ambient space.
In order to overcome these computational drawbacks, \cite{buchet2016efficient} and \cite{anai2019dtm} propose weighted filtrations, $V[\Xn, f_{\Xn}]$, using power distances. While the weighted filtrations circumvent the need for constructing grid-based approximations, they come at the expense of exact inference, i.e., the weighted filtrations $V[\Xn, f_{\Xn}]$ only approximate $V[f_{\Xn}]$ and do not provide valid statistical inference, even in the absence of outliers.
More recently, \cite{vishwanath2020robust} propose robust persistence diagrams which are resilient to outliers using kernel density estimators (KDE), and also develop a principled framework for characterizing the sensitivity to outliers using an analogue of influence functions. Although \citet[Theorem 1]{vishwanath2020robust} describes the gain in robustness by considering the robust KDE $\fn$ using the persistence influence function, \citet[Theorems 2 \& 3]{vishwanath2020robust} together establish that as $n \rightarrow \infty$ and $\sigma \rightarrow 0$, the persistence diagram $\dgm\pa{\fn}$ recovers the same information which underlies the sample points $\Xn$. However, if the underlying distribution is contaminated, e.g., ${\pr^* = (1-\pi) \pr_{signal} + \pi \pr_{noise}}$, then the topological inference we hope to target is that of $\pr_{signal}$ and not that of $\pr^*$.
Finally, with a similar objective of mitigating the impact of noise in topological inference, recent approaches have considered multi-parameter persistent homology as a robust tool for inferring the topological features underlying $\Xn$ \citep{carlsson2009theory}. While some recent results have demonstrated some promise (e.g., \citealt{vipond2021multiparameter}), they are, nevertheless, computationally infeasible for most applications, in addition to being hard to interpret \citep{otter2017roadmap,bjerkevik2020computing}.
On the statistical front, robust statistics was founded on the seminal works of \cite{tukey1960survey} and \cite{huber1964robust} with the objective of developing a framework of statistical inference stable to model misspecification and the presence of extraneous errors. Over the past few decades, robust counterparts for several inference tasks have been explored in literature \citep{huber2004robust,hampel2011robust}. More recently, in the landscape of big-data and high-dimensional statistics, the field of robust statistics has witnessed a renewed interest in the statistics and computer science literature \citep{diakonikolas2017being}. In particular, the classical problem of mean and covariance estimation has been revisited in several works \citep{audibert2011robust,minsker2015geometric,devroye2016sub,joly2016robust} with the objective of easing model assumptions to, either, the regularity of the data generating mechanism, or, the presence of outliers. See \cite{lugosi2019mean} for a recent survey. A common theme underlying these works is the constant struggle to achieve a Goldilocks equilibrium: the right balance of statistical optimality, computational efficiency and robustness to model misspecification.
In this regard, the median-of-means estimator, and, more broadly, the median-of-means principle \citep{lecue2020robust}, has emerged as a powerful tool for ``robustifying'' an existing estimator in near linear time. Although this comes at a slight cost in statistical optimality, median-of-means estimators are, nevertheless, easier to compute than statistically optimal \textit{and} robust methods such as the tournament estimators introduced by \cite{lugosi2019risk}. However, computing the median in high dimensions is not a well-defined task, and can be computationally burdensome. To make matters worse, robust topological summaries naïvely employing the median-of-means principle require estimating the median in infinite-dimensional space, which can be hopeless to achieve in a computationally tractable fashion. Our work overcomes this limitation by proposing a pointwise median-of-means estimator which, although computationally tractable, exhibits a concentration of measure phenomenon with respect to the true target population counterpart in the $\norminf{\cdot}$ metric.
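For readers unfamiliar with the principle, the classical scalar median-of-means mean estimator, which the pointwise construction described above generalizes, can be sketched as follows (a standard illustration, with arbitrary toy parameters):

```python
import numpy as np

def median_of_means(x, Q, rng=None):
    """Split x into Q blocks, average within each block, and return the
    median of the block means."""
    rng = np.random.default_rng(rng)
    blocks = np.array_split(rng.permutation(x), Q)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 1000)
corrupted = np.concatenate([clean, np.full(30, 1e6)])  # m = 30 adversarial points

mom = median_of_means(corrupted, Q=61, rng=0)          # Q = 61 > 2m blocks
naive = corrupted.mean()
# The naive mean is dragged to ~3e4 by the outliers, which can corrupt at
# most 30 of the 61 blocks; the median of block means stays near 0.
```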
\textbf{Organization.} The remainder of this paper is organized as follows. In Section~\ref{sec:preliminaries} we present the necessary background on persistent homology and robust statistics. We first introduce the proposed methodology in Section~\ref{sec:proposal}, and then present the main results in the remainder of the section. We establish the statistical properties of the proposed estimator in Section~\ref{sec:statistical}, and we present the influence analysis in Section~\ref{sec:influence}. Numerical results supporting the theory are provided in Section~\ref{sec:experiments}. The proofs of all the results are collected in Section~\ref{sec:proofs}.
\section{Experiments}
\label{sec:experiments}
In this section, we supplement the theory by illustrating the performance of the robust filtrations $\bbv[\dnq]$ and $\bbv[\Xn, \dnq]$ in synthetic experiments. The tools for data-adaptive construction of $\dnq$-weighted filtrations, along with the code for all experiments, are made publicly available in the \href{https://www.github.com/sidv23/RobustTDA.jl}{\textsf{RobustTDA.jl}} Julia package\footnote{\url{https://www.github.com/sidv23/RobustTDA.jl/}}. In all experiments, the persistence diagrams are computed using the \textsf{Ripserer.jl} backend \citep{cufar2020ripserer}, and we set the parameter $p=1$ for the weighted filtrations.
\subsection{Adaptive calibration of $Q$}
\label{exp:adaptive}
For $n=500$ and $K=30$ replicates, and for each $i \in [K]$, point clouds $\Xn^{(i)}$ are generated on a circle, and ${m^{(i)} \sim \text{Unif}\qty(\qty[50, 150])}$ outliers are added from a Matérn cluster process. This is illustrated in Figure~\ref{fig:lepski}\,(a). Taking $m_{\min} = 20$, $m_{\max}=200$ and $\theta=1.07$, the adaptive estimate $\widehat{m}^{(i)}$ is computed using Lepski's method, and $\widehat{m}_R^{(i)}$ is computed using the heuristic method described in Section~\ref{sec:lepski} with $N=50$. For a single replicate ${i \in [K]}$, Figure~\ref{fig:lepski}\,(b) plots $\sum_{1 \le i < j \le N} \winf\qty\big({ \bbv[{\Xn^{\sigma_i}}, \dnq], \bbv[{ \Xn^{\sigma_j} }, \dnq] })$ vs.~$Q$. In most cases, we have observed that the resampled bottleneck distance criterion stabilizes shortly before the optimal value of $m$. Figure~\ref{fig:lepski}\,(c) shows a boxplot of the relative errors $\qty\big{ \qty(\widehat{m}^{(i)} - {m}^{(i)})/{m}^{(i)} : i \in [K] }$ and $\qty\big{ \qty(\widehat{m}_R^{(i)} - {m}^{(i)})/{m}^{(i)} : i \in [K] }$ for Lepski's method and the heuristic procedure, respectively. Lepski's method is fairly robust to the choice of the hyperparameters, and consistently selects $\widehat m^{(i)} \ge m^{(i)}$. In contrast, since the resampled bottleneck distance from the heuristic procedure often stabilizes before $m^{(i)}$, we observe that $\widehat{m}^{(i)}_R < m^{(i)}$.
\begin{figure}[t]
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/calibration/scatterplot.pdf}
\caption{Scatterplot of $\Xn$}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/calibration/resampled.pdf}
\caption{Heuristic procedure}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[height=1.05\textwidth]{./figures/experiments/calibration/Box.pdf}
\caption{Relative error: Lepski vs. Resampling}
\end{subfigure}
\caption{Comparison of Lepski's method and the heuristic procedure for selecting the parameter $Q$.}
\label{fig:lepski}
\end{figure}
\subsection{Comparison of $\bbv[\dnq]$ and $\bbv[\Xn, \dnq]$}
\label{exp:sublevel}
The objective of this experiment is to illustrate that the $\dnq$-weighted filtration $V[\Xn, \dnq]$ reasonably approximates the sublevel filtration $V[\dnq]$. In the same setup as Section~\ref{exp:adaptive}, {$\Xn$ comprises $n=550$ points obtained by sampling $500$ points on a circle with additive Gaussian noise ($\s=0.01$) and $m=50$ outliers added from a Matérn cluster process.} For $Q=\widehat Q$ selected using Lepski's method, Figure~\ref{fig:sublevel}\,(a) depicts the \md{} function $\dnq$. Figure~\ref{fig:sublevel}\,(b) shows the scatter plot of $\Xn$ with the points colored by the weights $\dnq(\xv_i)$ for each $\xv_i \in \Xn$. The shaded regions show the $\dnq$-weighted offsets $V^t[\Xn, \dnq]$ for $t \in \qty{1.5, 1.75, 2, 2.25}$, colored from white to blue. Figure~\ref{fig:sublevel}\,(c) depicts the sublevel persistence diagram $\dgm\pa{ \bbv[\dnq] }$ computed using cubical homology on a grid of resolution $0.5$. As expected from Proposition~\ref{prop:sublevel-2}, the $\dnq$-weighted persistence diagram $\dgm\pa{\bbv[\Xn, \dnq]}$ in Figure~\ref{fig:sublevel}\,(d) captures the essential topological information in $\dgm\pa{ \bbv[\dnq] }$.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[height=\textwidth]{./figures/experiments/sublevel/sublevel.pdf}
\caption{\md{} function $\dnq$}
\end{subfigure}
\quad\quad
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[height=\textwidth]{./figures/experiments/sublevel/filtrations.pdf}
\caption{$V^t[\Xn, \dnq]$}
\end{subfigure}
\vspace{5mm}\\
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/experiments/sublevel/sublevel_dgm1.pdf}
\caption{$\dgm\pa{\bbv[\dnq]}$}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/experiments/sublevel/wrips_dgm1.pdf}
\caption{$\dgm\pa{\bbv[\Xn, \dnq]}$}
\end{subfigure}
\caption{Comparison of sublevel filtrations with the $\dnq$-weighted filtration.}
\label{fig:sublevel}
\end{figure}
\subsection{High dimensional topological inference}
\label{exp:highdim}
In this experiment, we illustrate the advantage of using $\dnq$-weighted filtrations for high dimensional topological inference. Points are uniformly sampled in $\R^3$ from two interlocked circles. Using a random rotation matrix $Q \in {SO}(100)$, the points are transformed to an arbitrary configuration in $\R^{100}$. The samples $\Xn \subset \R^{100}$ are obtained by replacing $12.5\%$ of the points in $\R^{100}$ with outliers sampled from $\textup{Uniform}\pa{ [-0.2, 0.2]^{100} }$. A scatterplot for $\Xn$ projected to $3$ arbitrary coordinates is shown in Figure~\ref{fig:highdim}\,(a). Since the point cloud is embedded in $\R^{100}$, computing sublevel filtrations using cubical homology with the same resolution as earlier requires storing $(10/0.5)^{100} \approx 10^{130}$ simplices in memory. In contrast, computing the $\dnq$-weighted filtrations is far less demanding. Figure~\ref{fig:highdim}\,(b) shows the persistence diagram $\dgm(\widehat{\bbv}_n)$ obtained using $\dnq$-weighted filtrations, where the parameter $Q$ is adaptively selected using Lepski's method. The two first-order homological features underlying the interlocked circles are recovered. Figure~\ref{fig:highdim}\,(c) illustrates the persistence diagram $\dgm(\bbv [{\Xn, \delta_{n,k}}])$ obtained using DTM-weighted filtrations. Since the DTM parameter $k \in [1, n]$ results in a smoothing similar to the parameter $Q \in [1, n]$ for the \md{}, the parameter $k$ is set to the value of $Q$ obtained using Lepski's method.
\begin{figure}[H]
\begin{subfigure}[b]{0.34\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/experiments/highdim/interlocked-noisy.pdf}
\caption{{$\Xn$ projected to coordinates $11, 53 \ \& \ 91$}}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\centering
\includegraphics[height=\textwidth]{./figures/experiments/highdim/interlocked-MOM.pdf}
\caption{MOM diagram $\dgm({\widehat{\bbv}_n})$}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\centering
\includegraphics[height=\textwidth]{./figures/experiments/highdim/interlocked-DTM.pdf}
\caption{{DTM} diagram $\dgm\pa{\bbv[\Xn, \delta_{n, k}]}$}
\end{subfigure}
\caption{Robust persistence diagrams for interlocked circles in $\R^{100}$ using $\dnq$ and $\delta_{n, k}$ weighted filtrations.}
\label{fig:highdim}
\end{figure}
\subsection{Recovering the true signal under adversarial contamination}
\label{exp:mnist}
In this experiment, we illustrate how $\bbv[{\Xn, \dnq}]$ can be used to recover the true topological features in the presence of adversarial contamination. In Figure~\ref{fig:mnist}\,(a), we consider a $28 \times 28$ image for the digit ``6'' from the MNIST database \citep{deng2012mnist}. We consider the setting in which an adversary is allowed to manipulate $10\%$ of the image by modifying the pixel intensities. Figure~\ref{fig:mnist}\,(b) depicts the adversarially contaminated version of the image, obtained by transforming the ``6'' into an ``8''.
For each pixel $p$ with pixel intensity $\iota(p)$, we convert the image to a point cloud $\Xn \subset \R^2$ by sampling $10\,\iota(p)$ points uniformly from the region enclosed by the pixel. \mbox{Figures~\ref{fig:mnist}(d, e)} illustrate the point clouds obtained from the true and contaminated images with $n-m \approx 1100$ and $n \approx 1300$, respectively. The persistence diagrams constructed using the distance function $\dsf_n$ for the two point clouds are reported in Figures~\ref{fig:mnist}(g, h). The persistence diagram in Figure~\ref{fig:mnist}\,(h) indicates the presence of the additional loop introduced by the adversary. To account for the adversarial contamination, we compute the \md{} function $\dnq$ with the parameter $Q$ selected using the contamination budget, i.e., $Q = 1 + 2(1100 \times 10\%) = 221$. Figure~\ref{fig:mnist}\,(f) shows the adversarially contaminated point cloud with each point $\xv_i \in \Xn$ colored by the value of $\dnq(\xv_i)$. The resulting $\dnq$-weighted persistence diagram $\dgm(\bbv[\Xn, \dnq])$ is reported in Figure~\ref{fig:mnist}\,(i). We note that $\dgm(\bbv[\Xn, \dnq])$ recovers the prominent features of Figure~\ref{fig:mnist}\,(g) up to a rescaling. Additionally, for each pixel $p$ we compute a rescaled version of $\dnq$, given by
\eq{
f_{n, Q}(p) = \f{\max\limits_x\dnq(x) - \dnq(p)}{\max\limits_x\dnq(x)},\nn
}
as a proxy for the pixel intensity obtained using $\dnq$. In Figure~\ref{fig:mnist}\,(c), we plot the level sets $\pb{p: f_{n,Q} = t}$ on the original image for $t \ge 0.8$.
\begingroup
\renewcommand{\Xnm}{\mathbb{X}_{n\minus m}}
\begin{figure}
\begin{subfigure}[c]{0.3\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/mnist/6.png}
\caption{Signal}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.3\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/mnist/8.png}
\caption{Adversarial contamination}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.305\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/heatmap2.pdf}
\caption{Recovered}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/scatter-6.pdf}
\caption{$\Xnm$}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/scatter-8.pdf}
\caption{$\Xn$}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/scatter-8-heat.pdf}
\caption{$\Xn$ colored by $\dnq$}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/dgm6.pdf}
\caption{$\dgm\qty( \bbv[\Xnm] )$}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/dgm8.pdf}
\caption{$\dgm\qty( \bbv[\Xn] )$}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/rdgm8.pdf}
\caption{$\dgm\qty( \bbv[\Xn, \dnq] )$}
\end{subfigure}\hspace{10pt}
\caption{Recovering the topological information underlying the signal in the presence of adversarial contamination.}
\label{fig:mnist}
\end{figure}
\endgroup
\subsection{Empirical influence analysis}
\label{exp:influence}
\providecommand{\fnms}{\ensuremath{f^{n+m}_{\rho,\s}}}
\providecommand{\Dnms}{\ensuremath{D^{RKDE}_{{n+m}, \rho,\s}}}
\providecommand{\dnms}{\ensuremath{\dsf_{n+m, \rho,\s}}}
In this experiment, we examine the influence of outliers on $\dnq$-weighted filtrations. For $n=500$, points $\Xn$ are sampled uniformly from a circle. We compute the unweighted persistence diagram $D_n = \dgm(\bbv[\Xn])$. In a small neighborhood around the center of the circle, outliers $\Ym$ are sampled uniformly from $[-0.1,0.1]^2$. For the composite sample $\Xn \cup \Ym$ and fixed values $Q=100$ and $k = 50$, we compute the \md{} weighted persistence diagram $D^{MoM}_{n+m,Q} = \dgm(\bbv[\Xn \cup \Ym, \dsf_{n+m,Q}])$, the DTM weighted persistence diagram $D^{DTM}_{n+m,k} = \dgm(\bbv[\Xn \cup \Ym, \delta_{n+m, k}])$, and the RKDE weighted persistence diagram ${\Dnms}$ from the RKDE $\fnms$ using the Hampel loss $\rho$ and a Gaussian kernel $K_\s$. Since the RKDE ${\fnms \defeq \sum_{i=1}^{n+m}w_i K_\s(\cdot, \Xv_i)}$ does not behave like a distance function, we convert $\fnms$ to a distance-like function $\dnms$ using an approach similar to that of \cite{phillips2015geometric} to obtain
\eq{
\dnms(\xv) \defeq \normh{K_\s(\cdot, \xv) - \fnms} = \sqrt{\mathop{\sum\sum}\limits_{1 \le i,j \le n+m} w_i w_j K_\s(\Xv_i, \Xv_j) + K_\s(\xv, \xv) - 2 \fnms(\xv)}.\nn
}
The RKDE-weighted persistence diagram $\Dnms = \dgm(\bbv[\Xn \cup \Ym, \dnms])$ is then computed using the $\dnms$-weighted filtration on the composite sample. The bandwidth of the kernel and the parameters for the Hampel loss function are selected using the same approach as in \cite{vishwanath2020robust}. For each diagram, we compute the birth time $b(\qty{\xvo})$ for the first outlier $\xvo \in \Ym$, and the bottleneck influence $\winf(D_{n+m}, D_n)$, as described in Section~\ref{sec:influence}. We generate $10$ such samples for each value of $m$, and report the average in Figure~\ref{fig:influence}.
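To make the conversion above concrete, the following sketch evaluates a distance-like function of this form in pure Python, assuming a Gaussian kernel and, for simplicity, uniform weights $w_i$ (the actual RKDE weights are obtained by iteratively reweighting under the Hampel loss). The function names are illustrative, not from an existing library.

```python
import math

def gauss_kernel(x, y, s):
    """Gaussian kernel K_s(x, y) for points given as coordinate tuples."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * s ** 2))

def kde_distance(query, points, weights, s):
    """Distance-like function d(x) = ||K_s(., x) - f||_H, where
    f = sum_i w_i K_s(., X_i), following the displayed formula."""
    gram = sum(
        wi * wj * gauss_kernel(xi, xj, s)
        for wi, xi in zip(weights, points)
        for wj, xj in zip(weights, points)
    )
    f_x = sum(wi * gauss_kernel(query, xi, s) for wi, xi in zip(weights, points))
    k_xx = gauss_kernel(query, query, s)  # equals 1 for the Gaussian kernel
    # Guard against tiny negative values from floating-point rounding.
    return math.sqrt(max(gram + k_xx - 2 * f_x, 0.0))
```

As the text notes, this function is uniformly bounded by $\sqrt{2 K_\s(\xv,\xv)}$, and it is small wherever the estimated density is high.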
From Figure~\ref{fig:influence}\,(a), we note that $D^{MoM}_{n+m,Q}$ and $D^{DTM}_{n+m,k}$ show similar behavior, although the outliers consistently appear earlier in the {DTM persistence diagram $D^{DTM}_{n+m,k}$.} Since the birth time $b(\qty{\xvo})$ alone does not fully characterize the impact an outlier has on inferring the topological feature underlying the circle, we also compute the maximum persistence for the first order persistence diagram in Figure~\ref{fig:influence}\,(b). We point out that the behavior of $b(\qty{\xvo})$ w.r.t. $m$ largely reflects the influence an outlier has on the relevant topological signal. Furthermore, for $D^{MoM}_{n+m,Q}$ we observe a sharp transition between $m=50$ and $m=80$; this is because the theoretical guarantees for $\dnq$ from Theorem~\ref{theorem:momdist-influence} are valid only when $2m < Q = 100$. Similarly, from Theorem~\ref{theorem:momdist-consistency}, the outliers are guaranteed to have little influence on $D^{MoM}_{n+m,Q}$ whenever $m \le 50$, as seen in Figure~\ref{fig:influence}\,(c).
On the other hand, while the RKDE remains resilient to uniform outliers, we note that $\Dnms$ is significantly impacted by the outliers concentrated near the center of the circle. This is evidenced by the sharp transitions for $\Dnms$ in Figures~\ref{fig:influence}\,(b, c). However, unlike $\dsf_{n+m, Q}$ and $\delta_{n+m, k}$, by construction ${\norminf{\dnms} \le \sup_{\xv} \sqrt{2 K_\s(\xv, \xv)} < \infty}$. Therefore, despite being more sensitive to the outliers, the influence they exert on $\Dnms$ in Figures~\ref{fig:influence}\,(a, b, c) remains bounded.
\begin{figure}[H]
\begin{subfigure}[c]{0.48\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/influence5/influence-birth.pdf}
\caption{$\text{influence}\pa{b; \Xn, f_n, m, \xvo}$}
\end{subfigure}
\begin{subfigure}[c]{0.48\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/influence5/influence-rpers.pdf}
\caption{Maximum persistence for the first-order diagram}
\end{subfigure}
\begin{subfigure}[c]{0.48\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/influence5/influence-bottleneck0.pdf}
\caption{$\text{influence}\pa{\winf; \Xn, f_n, m, \xvo}$}
\end{subfigure}
\caption{Influence analysis for $\dnq$-weighted filtrations vis-à-vis DTM-based filtrations and unweighted filtrations.}
\label{fig:influence}
\end{figure}
\section{Preliminaries}
\label{sec:preliminaries}
The following subsections introduce the essential ingredients used throughout the remainder of the paper.
\textbf{Definitions and Notations.} For two sets $A$ and $B \subseteq A$, $\id: B \rightarrow A$ given by $b \mapsto b$ denotes the identity map. For $n \in \Z_+$, we use the notation $[n] = \pb{1, 2, \dots, n}$, and for real-valued functions~$f$~and~$g$ we employ the notation $f(n) \lesssim g(n)$ if $f(n) = O\qty\big(g(n))$. Given a metric space $(\M, \rho)$ with metric ${\rho: \M \times \M \rightarrow \R_{\ge 0}}$, the ball of radius $r$ centered at $\xv \in \M$ is denoted $\Bfx[\rho][][r]$\footnote{When $r<0$ we explicitly define $B_\rho(\xv, r) = \varnothing$.}.
For a compact set $\bX \subset \M$, the $r$--offset of $\bX$ w.r.t the metric $\rho$ is given by
$$
\bX[\rho](r) = \bigcup_{\xv \in \bX}\Bfx[\rho][][r].
$$
The distance function w.r.t. the compact set $\Xb$ plays a central role in extracting the geometric and topological features underlying $\Xb$.
\begin{definition}[Distance function]
For a metric space $(\M, \rho)$ and a compact set $\bX \subseteq \M$, the distance function to the set $\bX$, denoted as $\dx$, is given by
\eq{
\dx(\yv) \defeq \inf_{\xv \in \bX}\rho\pa{\xv, \yv}, \ \ \ \text{for all } \yv \in \M.\nn
}
\end{definition}
For a finite collection of points $\Xn$, the distance function $d_{\Xn}$ is simply denoted as $\dsf_n$. For two compact sets $\bX, \bY \subset (\M, \rho)$ the \textit{Hausdorff distance} {between $\bX$ and $\bY$} is given by
$$
{\haus{\bX, \bY}[\rho] \defeq \max\pb{ \sup_{\xv \in \Xb}\dsf_{\Yb}(\xv), \ \sup_{\yv \in \Yb}\dx(\yv) } = \inf\pB{\e > 0: \bX \subseteq \bY[\rho](\e), \bY \subseteq \bX[\rho](\e)}},
$$
and metrizes the space of all compact subsets of $(\M, \rho)$. Throughout the paper we assume that $(\M, \rho) = (\R^d, \norm{\cdot})$ is the usual Euclidean space with the $\ell_2$ metric, and omit the subscript $\rho$. However, the results here should extend to general metric spaces $(\M,\rho)$ with simple modifications along the lines of \cite{chazal2015convergence} and \cite{buchet2016efficient}.
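As a concrete illustration of the two definitions above, both the distance function and the Hausdorff distance can be computed directly for finite point sets in Euclidean space; the following minimal Python sketch (names are illustrative) mirrors the formulas verbatim.

```python
import math

def dist_fn(y, X):
    """Distance function d_X(y) = inf_{x in X} ||x - y|| for a finite set X."""
    return min(math.dist(x, y) for x in X)

def hausdorff(X, Y):
    """Hausdorff distance between finite sets X and Y (Euclidean metric):
    max of the two directed sup-inf distances."""
    return max(max(dist_fn(x, Y) for x in X),
               max(dist_fn(y, X) for y in Y))
```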
We use $\mathcal{P}(\bX)$ to denote the set of Borel probability measures defined on $\R^d$ with support $\bX \subseteq \R^d$, and for $\xv \in \R^d$, $\dir{\xv}$ is used to denote a Dirac measure at $\xv$. A key assumption used throughout the paper is a regularity condition for the data generating mechanism. For $a,b>0$, the probability measure satisfies the $(a,b)-$standard condition if
\eq{\label{eq:ab-standard}
\pr\qty\Big( B(\xv, r) ) \ge 1 \wedge a r^b \quad \text{for all } \xv \in \Xb \text{ and } r>0.
}
We denote by $\mathcal{P}(\Xb, a, b)$ the subset of $\mathcal{P}(\Xb)$ which satisfies the $(a, b)-$standard condition in \eref{eq:ab-standard} for $a, b >0$. This regularity assumption is standard in the domain of statistical shape analysis (e.g., \citealt{cuevas2004boundary,chazal2015convergence,chazal2015subsampling,chazal2017robust}). Throughout the paper, we assume that the samples $\Xn$ are obtained in an adversarial contamination setting $(\scr{S})$, as defined below.
\begin{description}[labelindent=1cm]
\item[\textsc{Sampling Setting ($\scr{S}$).}] The data consist of $n$ samples $\Xn = \rangeb{\Xv}{n}$, where $m < {n}/{2}$ samples are contaminated with unknown outliers. No distributional assumption is made on these outliers. The remaining $n\!-\!m$ samples are observed iid from a distribution $\pr \in \mathcal{P}(\Xb, a, b)$, for compact $\Xb \subset \R^d$ and $a,b > 0$.
\end{description}
A glossary of notations for additional definitions and notations introduced in the subsequent sections is provided in Appendix~\ref{sec:glossary}.
\subsection{Background on Persistent Homology}
\label{sec:persistent}
In this section we provide the necessary background on persistent homology arising from single parameter filtrations. We refer the reader to \cite{chazal2017introduction,edelsbrunner2010computational} for a detailed introduction.
Given a compact set $\bX$, the starting point of any topological data analysis pipeline for extracting meaningful information from $\bX$ is a nested sequence of topological spaces called a filtration, simply denoted by $V$. The spaces in the sequence are parametrized by a resolution parameter $t$. There are several approaches for constructing a filtration using $\Xb$. One approach is to consider the collection of offsets built on top of $\Xb$, i.e., $V^t = V^t[\bX] = \Xb(t)$.
For $s < t$, the offsets are nested $V^s \subseteq V^t$, and $V[\bX] \defeq \pb{ V^t[\Xb] : t \in \R }$ is a nested sequence of topological spaces and defines the filtration built using the offsets of $\Xb$.
The second approach to constructing a filtration is using a filter function $f_{\Xb}: \R^d \rightarrow \R$ which carries the topological information underlying $\Xb$. In this scenario, one typically constructs the filtration from the sublevel sets associated with $f_\Xb$, given by $V^t = f\inv_{\Xb}\qty\big({ (-\infty, t] })$ for each resolution $t$. Again, for $s<t$, $V^s \subset V^t$ and the sequence $V[f_\Xb] = \pb{ V^t[f_{\Xb}] : t \in \R }$ constitutes the sublevel filtration from $f_{\Xb}$. Mutatis mutandis a similar notion holds for the superlevel filtration.
In general, the filtration $V[\Xb]$ can be very different from $V[f_{\Xb}]$, although the prevailing objective is for $V[f_{\Xb}]$ to encode the same information as in $V[\Xb]$. In this context, the distance function $\dx$ plays a special role owing to the fact that its sublevel filtration is the same filtration associated with the offsets, i.e., $V[\dx] = V[\bX]$. This fact plays an important role in motivating the \md{} estimator introduced in Section~\ref{sec:proposal}, and follows by noting that for every resolution $t > 0$,
\eq{
\dsf_{\Xb}\inv\qty\Big({ (-\infty, t] }) = \pb{\xv \in \R^d : \dx(\xv) \le t} = \bigcup_{\xv \in \Xb}B(\xv, t).\nn
}
Let $V = \pb{V^t : t \in \R}$ denote a generic filtration and let $\iota_{s}^t: V^s \hookrightarrow V^t$ denote the inclusion map between the filtered spaces at resolutions $s < t$. For each resolution $t$, let $\bbv^t = \textup{H}_*\pa{V^t; \bF}$ be the homology\footnote{Here, as per convention, the order of homology, denoted by $*$, is an arbitrary non-negative integer.} of $V^t$ with coefficients in a field $\bF$. As the resolution $t$ varies, the evolution of topological features is captured by $V$. Roughly speaking, new cycles (i.e., connected components, loops, holes, and higher dimensional analogues) are born, or existing cycles can merge and disappear. The collection of cycles in $V^t$ at each resolution $t$ is encoded as a vector space in $\bbv^t$. The inclusion maps $\iota_{s}^t: V^s \hookrightarrow V^t$ induce linear maps $\phi_s^t: \bbv^s \rightarrow \bbv^t$ between the vector spaces $\bbv^s$ and $\bbv^t$.
As such, the collection $V$ can be described more succinctly as the \emph{category} $V = \qty{V^t, \iota_s^t : s \le t}$ with the inclusion maps $\iota_s^t$ \mbox{representing the morphisms for ${s\le t}$}. The image of $V$ under the \emph{homology functor} $\mathbf{Hom}_* : V \mapsto \bbv$, gives us the \emph{persistence module}
\eq{
\bbv \defeq \qty\Big{\bbv^t, \phi_s^t: {s\le t}},\nonumber
}
where the induced maps $\phi_s^t: \bbv^s \rightarrow \bbv^t$ are homomorphisms between two vector spaces. For $r < s < t$, the persistence module can equivalently be represented as
\begin{figure}[H]
\includegraphics[]{./tikz/v-module.pdf}
\vspace{-10pt}
\end{figure}
Informally, a new topological feature is born at resolution $b \in \R$ if the cycle associated with that feature is not present in $\bbv^{b-\e}$ for all $\e > 0$. The same feature is said to die at resolution $d>b$ if the cycle associated with this feature disappears from $\bbv^{d+\e}$ for all $\e > 0$, resulting in the (ordered) persistence pair $(b,d)$. By collecting all the persistence pairs, the persistence module $\bbv$ may be succinctly represented by a \textit{persistence diagram},
\eq{
\dgm\pa{\bbv} \defeq \pb{ (b,d) \in \R^2: b \le d \le \infty }.\nn
}
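For zero-order homology, the persistence pairs of the offset filtration of a finite point set can be computed directly by single-linkage merging: each point is a component born at $t=0$, and two components merge when their balls of radius $t$ first intersect, i.e., at half the pairwise distance. The following union-find sketch is illustrative of this special case only, not a general-purpose persistence algorithm.

```python
import math
from itertools import combinations

def h0_persistence(points):
    """Zero-order persistence pairs (birth, death) of the offset filtration
    of a finite point set. Components are born at t = 0 and die when two
    balls first touch; the last surviving component is paired with inf."""
    parent = list(range(len(points)))

    def find(i):
        # Path-halving union-find lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Process merge events in increasing order of resolution t = d(x, y) / 2.
    edges = sorted(
        (math.dist(points[i], points[j]) / 2, i, j)
        for i, j in combinations(range(len(points)), 2)
    )
    pairs = []
    for t, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj          # merge: one component dies at t
            pairs.append((0.0, t))
    pairs.append((0.0, math.inf))    # the component that never dies
    return pairs
```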
\subsection{Interleaving of Persistence Modules}
\label{sec:interleaving}
Given two persistence modules $\bV = \pb{\bV[][t], \phi_s^t}_{s\le t}$ and $\bW = \pb{\bW[][t], \psi_s^t}_{s \le t}$, they are said to be equivalent (or isomorphic) if there exists a family of isomorphisms $\pb{\xi^t}_{t \in \R}$ such that each $\xi^t: \bV[][t] \rightarrow \bW[][t]$ is an isomorphism. This notion can be extended by defining two collections of maps $\pb{\alpha_t : t \in \R}$ and $\pb{\beta_t : t \in \R}$ which weave the two persistence modules together.
\bigskip
\begin{definition}[Interleaving of persistence modules] Given two persistence modules $\bV$ and $\bW$, and two monotone increasing maps ${\alpha,\beta: \R \rightarrow \R}$, $\bV$ and $\bW$ are said to be $\pa{\alpha,\beta}$--interleaved if the following diagrams commute for all $s \le t$
\begin{figure}[H]
\includegraphics[]{./tikz/commutative.pdf}
\end{figure}
\label{def:interleaving}
\end{definition}
\begin{remark}
The persistence modules $\bbv$ and $\bbw$ are purely algebraic objects, and their underlying filtrations $V$ and $W$ are not necessarily compatible. However, when the filtrations $V$ and $W$ arise as filtered subsets of the same underlying space (e.g., $\R^d$), we can similarly define an $(\alpha,\beta)-$interleaving between the filtrations $V$ and $W$ by replacing all linear maps in Definition~\ref{def:interleaving} by inclusion maps.
\end{remark}
The resulting persistence diagrams $\dgm\pa{\bbv}$ and $\dgm\pa{\bbw}$ are elements of the space of persistence diagrams $\Omega = \pb{ (x,y) : x \le y }$, which is endowed with the family of $q-$Wasserstein metrics $W_q(\cdot, \cdot)$ for $1 \le q \le \infty$. We refer the reader to \cite{edelsbrunner2010computational,mileyko2011probability} for more details. In the special case of $q = \infty$, the resulting metric $\winf$ is commonly referred to as the \textit{bottleneck distance}, and is given as follows.
\begin{definition}[Bottleneck distance]
Given two persistence diagrams $D_1, D_2 \in \Omega$, the bottleneck distance is given by
\eq{
\winf\pa{D_1, D_2} \defeq \inf_{\gamma \in \Gamma}\sup_{p \in D_1 \cup \Delta} \norminf{ p - \gamma(p) },\nonumber
}
where $\Gamma = \pb{\gamma : D_1 \cup \Delta \rightarrow D_2 \cup \Delta}$ is the set of all multi-bijections from $D_1$ to $D_2$ including the diagonal $\Delta = \pb{(x,y) : x=y}$ with infinite multiplicity.
\end{definition}
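For very small diagrams, the infimum over multi-bijections can be evaluated by brute force, padding each diagram with copies of the diagonal. The sketch below is purely illustrative (exponential in the diagram size) and uses the fact that the $\ell_\infty$ distance from a point $(x,y)$ to the diagonal $\Delta$ is $(y-x)/2$.

```python
from itertools import permutations

def _cost(p, q):
    """l_inf matching cost; the string 'diag' stands for the diagonal."""
    if p == "diag" and q == "diag":
        return 0.0
    if p == "diag":
        return (q[1] - q[0]) / 2   # l_inf distance from q to the diagonal
    if q == "diag":
        return (p[1] - p[0]) / 2
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def bottleneck(D1, D2):
    """Brute-force bottleneck distance between two small finite diagrams:
    pad each diagram with diagonal copies, then minimize the maximum
    matching cost over all bijections of the padded sets."""
    A = list(D1) + ["diag"] * len(D2)
    B = list(D2) + ["diag"] * len(D1)
    return min(max(_cost(p, q) for p, q in zip(A, perm))
               for perm in permutations(B))
```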
Although the space of persistence diagrams $(\Omega, W_q)$, together with the $q-$Wasserstein distance, presents a challenging mathematical structure for refined statistical analyses \citep{mileyko2011probability,turner2014frechet}, the stability of persistence diagrams provides a handle on this space by allowing us to directly work on the space generating the filtrations.
\begin{lemma}[Stability of persistence diagrams; \citealt{cohen2007stability,chazal2016structure}]
Given two compact sets $\bX, \bY \subset \R^d$,
\eq{
\winf\qty\Big({ \dgm\pa{ \bbv[\Xb] }, \dgm\pa{ \bbv[\Yb]} }) \le \haus{\Xb, \Yb}[].\nn
}
Alternatively, for two filter functions $f,g : \R^d \rightarrow \R$,
\eq{
\winf\qty\Big({ \dgm\pa{ \bbv[f] }, \dgm\pa{ \bbv[g]} }) \le \norminf{f-g}.\nn
}
\end{lemma}
\bigskip
\begin{remark} \null \hfill
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item When the interleaving maps $\pa{\alpha,\beta}$ are additive, i.e., of the form ${\alpha: t \mapsto t + \epsilon}$ and ${\beta: t \mapsto t+\delta}$, then persistence diagrams $\dgm\pa{\bV}$ and $\dgm\pa{\bW}$ obtained from the persistence modules satisfy the following relationships:
\eq{
\dgm\pa{\bV} \in \dgm\pa{\bW} \oplus \pc{-\delta, \e}^2 \hspace{1em} \text{ and } \hspace{1em} \dgm\pa{\bW} \in \dgm\pa{\bV} \oplus \pc{-\e, \delta}^2,\nn
}
where $\oplus$ denotes the Minkowski sum in $\R^2$. A \emph{coarser} bound is obtained from the stability theorem \citep{cohen2007stability} which guarantees that
\eq{
\Winf\qty\big(\dgm\pa{\bV}, \dgm\pa{\bW}) \le \max\pb{\e,\delta}.\nn
}
\item Furthermore, when the interleaving maps are identical, i.e., $\alpha \equiv \beta: t \mapsto t + \epsilon$, this notion can be extended to define an \textit{interleaving pseudo-distance} between persistence modules,
\eq{
d_{\mathcal{I}}\pa{\bV, \bW} \defeq \inf\qty\Big{ \e > 0 : \bV \text{ and } \bW \text{ are } (\alpha,\alpha)-\text{interleaved for } \alpha: t \mapsto t + \e }.\nn
}
From the isometry theorem \citep{chazal2016structure} the \emph{interleaving distance} is identical to the \emph{bottleneck distance}, i.e., $\Winf\qty\big(\dgm\pa{\bV}, \dgm\pa{\bW}) = d_{\mathcal{I}}\pa{\bV, \bW}$. In such cases, it is equivalent to say that $\bV$ and $\bW$ are $\pa{\alpha,\alpha}$--interleaved or ${d_{\mathcal{I}}\pa{\bV, \bW} \le \e}$. Similarly, for filtrations $V$ and $W$ comprising subsets of $\R^d$,
\eq{
d_{\mathcal{I}}\pa{V, W} \defeq \inf\qty\Big{ \e > 0 : V^t \subseteq W^{t+\e} \quad \text{and} \quad W^t \subseteq V^{t+\e} \quad \text{for all } t \in \R }.
\label{eq:interleaving-filtration}
}
By functoriality, $d_{\mathcal{I}}\pa{V, W} \le \e \Longrightarrow d_{\mathcal{I}}\pa{\bV, \bW} \le \e \Longrightarrow \Winf\qty\big(\dgm\pa{\bV}, \dgm\pa{\bW}) \le \e$.
\end{enumerate}
\label{remark:interleaving-1}
\end{remark}
\subsection{Weighted Rips Filtrations}
\label{sec:weightedrips}
In practice, given a compact set $\Xb \subset \R^d$ or a filter function $f$, the persistence modules $\bbv[\Xb]$ and $\bbv[f]$ are computed using simplicial complexes. In particular:
\begin{enumerate}[label=(\roman*)]
\item For each $t \in \R$, one may use the \cech{} or Alpha complex to compute the nerve of the cover, $\text{nerve}\pb{ B(\xv, t) : \xv \in \Xb }$. Since the Nerve lemma \citep{edelsbrunner2010computational} guarantees that ${V^t[\Xb] \cong \text{nerve}\pb{ B(\xv, t) : \xv \in \Xb }}$, the resulting persistence module $\bbv[\bX]$ may be computed exactly using simplicial homology.
\item In the case of $\bbv\pc{f}$, this is typically achieved by choosing a grid resolution parameter $\e$, and constructing a cubical complex $\k_\e$ on the underlying space. The function $f:\R^d \rightarrow \R$ may be extended to define $f: \k_\e \rightarrow \R$, and at each resolution $t \in \R$, the sublevel sets $V^t[f_\Xb]$ can be approximated using the lower-star filtration $\k_\e^t = \pb{ \sigma \in \k_\e : \max_{\xv \in \sigma}f(\xv) \le t }$. Therefore, the persistence module $\bbv[f]$ can be approximated by that of the filtration $\pb{\k_\e^t : t \in \R}$, and the resulting persistence module is computed using cubical homology.
\end{enumerate}
Note that (i) is able to compute the exact persistence module in practice, but is unable to weight points according to $f$. On the other hand, (ii) is only an approximate computation and depends on the nuisance parameter $\e$. Furthermore, the size of the cubical complex is $\abs{\k_\e} = O(\e^{-d})$, making it scale poorly in high dimensions. To overcome this limitation, \cite{buchet2016efficient} proposed the $f$-weighted filtrations, which were subsequently generalized by \cite{anai2019dtm}.
Given a non-negative \emph{weight function} $f: \R^d \rightarrow \R_{\ge 0}$ and \textit{power} $1 \le p \le \infty$, the \emph{weighted radius function} of resolution $t>0$ at $\xv$ is given by
\begin{equation}
\rfx \defeq \begin{cases}
\pa{t^p - f(\xv)^p}^{1/p} & \text{ if } t \ge f(\xv) \\
-\infty & \text{ if } t < f(\xv).
\end{cases}\nonumber
\end{equation}
Consequently, $\Bfx[f,\rho][][][]$ is the \emph{weighted ball of resolution $t$ at $\xv$} w.r.t.~the metric $\rho$, which is illustrated in Figure~\ref{fig:ball}, and is given by
\eq{
\Bfx[f,\rho][][][] \defeq B_{\rho}\pa{\xv, \rfx} = \pb{\yv \in \R^d: \rho(\xv,\yv) \le \rfx}.\nn
}
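The weighted radius function is straightforward to evaluate; the following illustrative helper follows the case analysis above, with the convention for the limit $p=\infty$ that the radius equals $t$ once $t \ge f(\xv)$.

```python
def weighted_radius(t, fx, p):
    """Weighted radius at resolution t for a point with weight fx = f(x):
    (t^p - fx^p)^(1/p) when t >= fx, and -inf (an empty ball) otherwise.
    In the limit p = inf, the radius is simply t once t >= fx."""
    if t < fx:
        return float("-inf")
    if p == float("inf"):
        return t
    return (t**p - fx**p) ** (1.0 / p)
```

For $f \equiv 0$ this reduces to the unweighted case: the ball of resolution $t$ has radius exactly $t$.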
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v1.pdf}
\caption{$V^t[\Xn]$ unweighted}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v2.pdf}
\caption{$V^t[\Xn, f]$ for $p=1$}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v3.pdf}
\caption{$V^t[\Xn, f]$ for $p=\infty$}
\end{subfigure}
\caption{Illustration of offsets for $t=0.5$ and $f(\xv) = \inf_{\yv \in \mathbb{S}^1}\norm{\xv-\yv}$.}
\label{fig:ball}
\end{figure}
Given $\bX \subseteq \R^d$, the collection of weighted balls ${\VVt[][\bX, f][] = \pB{\Bfx[f][\xv][]: \xv \in \bX}}$ is called the \emph{weighted cover} of $\bX$. The $f$-weighted offset at resolution $t$ is given by the union of balls in $\VVt[][\bX, f][]$,
\eq{
\Vt[][\bX,f][] \defeq \bigcup_{\xv \in \bX} \Bfx[f][\xv][].\nonumber
}
Together with the inclusion maps $\iota_s^t:\Vt[s][\bX,f][] \hookrightarrow \Vt[][\bX,f][]$, the $f-$\emph{weighted filtration} is given by
\eq{
V[\bX,f] \defeq \pB{\Vt[][\bX,f][], \iota_s^t : {s \le t}}.\nonumber
}
The image of $V[\bX,f]$ under the \emph{homology functor} $\mathbf{Hom}_* : V[\bX,f] \mapsto \bV[][][\bX,f]$, results in the \emph{weighted persistence module} $\bV[][][\bX,f] \defeq \qty{\bVt[][\bX,f][], \phi_s^t : {s\le t}}$, where the induced maps ${\phi_s^t: \bVt[s][\bX,f][] \rightarrow \bVt[][\bX,f][]}$ are linear maps between vector spaces. The weighted-simplicial complexes
\eq{
\Ct[][\bX,f][] = \text{nerve}\pb{\VVt[][\bX,f][]} \hspace{2mm} \text{ and } \hspace{2mm} \Rt[][\bX,f][] = \text{Rips}\pb{\VVt[][\bX,f][]}\nn
}
denote the weighted-\cech{} complex and weighted-Rips complex associated with the weighted cover $\VVt[][][]$, respectively. Without loss of generality $\bVt[][\bX,f][] = \textup{H}_*\pa{\Vt[][\bX,f][]}$ is the homology of the offset $\Vt[][\bX,f][]$, which, by the nerve lemma, is the same as the homology of the weighted-\cech{} complex. Furthermore, if $f(\xv) \equiv 0$ for all $\xv \in \R^d$ then the resulting filtrations are the usual unweighted filtrations. In particular, $V[\Xn] \cong \Ct[ ][\Xn,f][ ]$ and $\Rt[ ][\Xn, f][ ]$ correspond to the \cech{} and Rips filtrations, respectively. The following structural results appear in \cite{anai2019dtm}, and serve as analogues of the stability result for $f$-weighted filtrations.
\bigskip
\begin{lemma}[{\citealp[Propositions 3.2 \& 3.3]{anai2019dtm}}]
Given $\bX \subset \R^d$ and $f,g : \bX \rightarrow \R_+$
\begin{enumerate}[label=\textup{(\roman*)}]
\item $\bbv[\bX,f]$ and $\bbv[\bX, g]$ are $(\alpha,\alpha)$--interleaved for $\alpha: t \mapsto t + \norminf{f-g}$.
\end{enumerate}
Additionally, given $\bY \subset \R^d$ and $h: \bX \cup \bY \rightarrow \R_+$, if $h$ is $L$--Lipschitz and $\haus{\bX,\bY} \le \e$, then
\begin{enumerate}[label=\textup{(\roman*)}, resume]
\item $\bbv[\bX,h]$ and $\bbv[\bY, h]$ are $(\beta,\beta)$--interleaved for $\beta: t \mapsto t + \e \pa{1+L^p}^{1/p}$.
\end{enumerate}
\label{lemma:anai-et-al}
\end{lemma}
\subsection{Median-of-means Estimators}
\label{sec:mom}
Median-of-means (MoM) estimators have gained popularity in robust machine learning owing to their recent success, both theoretical and experimental. See, for example, \cite{devroye2016sub,lugosi2019mean,lecue2020robust}. The background for MoM estimators in the context of mean estimation is as follows: samples $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n}$ are observed and we wish to construct an estimator for the population mean $\theta$. The sample mean $\hat\theta = \overline{\bX}_n$ is known to achieve sub-Gaussian estimation error only when the samples $\Xn$ themselves are observed from a sub-Gaussian distribution.
Robust statistics deals with two important relaxations to this model: (i) the samples $\Xn$ are observed iid from $\pr$, but $\pr$ is no longer sub-Gaussian and is assumed to have heavy tails; and (ii) a fraction $\pi < \half$ of the samples are assumed to be contaminated with outliers, and the remaining $(1-\pi)n$ samples are observed from a well-behaved distribution $\pr$.
The median-of-means estimator $\hat\theta_{\text{MOM}}$, originally introduced by \cite{nemirovskij1983problem}, addresses these relaxations by constructing a robust estimator of location as follows: For $1 \le Q \le n$, the sample $\Xn$ is partitioned into $Q$ subsets $\pB{S_1, S_2, \dots, S_Q}$, where each index set $S_q \subset \pb{1,2,\dots,n}$ has size $|{S_q}| = \floor{n/Q}$. The \emph{MoM estimator} $\thetamom$ is, then, defined as
\eq{
\thetamom \defeq \text{median} \pb{\hat\theta_1, \hat\theta_2, \dots, \hat\theta_Q},\nonumber
}
where $\{\hat\theta_q: q \in [Q]\}$ are the sample means computed for each subset $\{S_q: q \in [Q]\}$. \cite{audibert2011robust} showed that, in the univariate setting, $\thetamom$ achieves sub-Gaussian rates of convergence for heavy tailed data. \cite{minsker2015geometric} and \cite{devroye2016sub} extended these results to the multivariate setting by considering the geometric median. The MoM idea has subsequently been extended in several other directions, e.g., U-statistics \citep{joly2016robust}, kernel mean embeddings \citep{lerasle2019monk} and general M-estimators \citep{lecue2020robust} among others. Most importantly, these extensions move away from the heavy-tailed framework and provide significant insights on how $\thetamom$ can overcome the second relaxation, i.e., estimation in the presence of outlying contamination. While the MoM estimators are not unique in their ability to recover the signal under heavy tailed noise, or in the presence of contamination, they are very simple to construct in most cases, and provide a clear characterization of the effect of noise on the estimation error.
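The following toy sketch illustrates the second relaxation: a handful of gross outliers ruin the sample mean, while the median-of-means estimate remains close to the true location as long as the outliers fall in fewer than half of the blocks. Names and parameter choices are illustrative, not from the paper's experiments.

```python
import random
import statistics

def mom_mean(xs, Q):
    """Median-of-means: partition xs into Q blocks of size floor(n/Q),
    average each block, and return the median of the block means."""
    b = len(xs) // Q
    means = [statistics.fmean(xs[q * b:(q + 1) * b]) for q in range(Q)]
    return statistics.median(means)

# 990 well-behaved samples from N(5, 1), plus m = 10 gross outliers.
random.seed(0)
sample = [random.gauss(5.0, 1.0) for _ in range(990)] + [1e6] * 10
random.shuffle(sample)
```

With $Q = 50$ blocks, the 10 outliers can corrupt at most 10 block means, so the median over the 50 block means stays near the true mean.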
\section{Glossary of Notations}
\label{sec:glossary}
\begin{center}
\begin{tabular}{r!{:}l}
$\mathsf{H}_\rho(\bX, \bY)$ & Hausdorff distance between $\bX \subseteq \M$ and $\bY \subseteq \M$ measured w.r.t. metric $\rho$.\\
$V^t[f]$ & Sublevel set of $f$ at level $t$ given by $\pb{\xv \in \R^d : f(\xv) \le t}$\\
$V[f]$ and $\bbv[f]$ & Sublevel filtration $\pb{V^t[f]: t \in \R}$ and its persistence module\\
$\rfx$ & The $f$--weighted radius function of resolution $t$ at $\xv$. $\rfx=\pa{t^p - f(\xv)^p}^{\f1p}$\\
$\Bfx[f,\rho][\xv][t]$ & $f$--weighted ball at $\xv$ with radius $\rfx$ w.r.t the metric $\rho$.\\
$\Vt[][][]$ & The $f$--weighted offset of $\bX$ at resolution $t$ given by $\Vt[][][] = \bigcup_{\xv\in\bX}\Bfx[f][\xv][t]$\\
$\Vt[ ][][]$ & $f$--weighted filtration, i.e., $\dots \rightarrow \Vt[t_1] \rightarrow \Vt[t_2] \rightarrow \dots \rightarrow \Vt[t_n] \rightarrow \dots$\\
$\bVt[ ][][]$ & $f$--weighted persistence module, i.e., $\bVt[ ][][] = \textbf{Hom}\pa{\Vt[ ][][]}$\\
$\dgm\pa{\bbv}$ & Persistence diagram associated with the persistence module $\bbv$\\
$\hat\theta_{n,Q}$ & MoM-estimator, $\median\{\hat\theta_{1},\dots, \hat\theta_Q\}$, where $\hat\theta_q$ is the estimator from block $S_q$.\\
$\dnq$ & \md{} function given by $\dnq(\xv) = \median\pb{\inf_{\yv \in S_q}\norm{\xv-\yv} : q \in [Q]}$\\
$\dx$ & Distance function to a compact set $\Xb$ given by $\dx(\yv) = \inf_{\xv \in \Xb}\norm{\xv-\yv}$
\end{tabular}
\end{center}
\section{Supplementary Results}
\label{supplementary-results}
The following lemma is a collection of well-known inequalities (and their slight variants). We state them here for reference, as they are used frequently in the proofs.
\begin{lemma}
For $0 < y \le x$ and $p \ge 1$, the following inequalities hold:
\begin{enumerate}[label=\textup{(\roman*)}, itemsep=7pt]
\item $x^p + y^p \le (x+y)^p \le 2^{p-1}(x^p + y^p)$;
\item $2^{1-p}x^{p} - y^p \le (x-y)^p \le x^p - y^p$;
\item $(x+y)^{\f 1 p} \le x^{\f 1 p} + y^{\f 1 p} \le 2^{\f{p-1}{p}}(x+y)^{\f 1 p}$;
\item $x^{\f 1 p} - y^{\f 1 p} \le (x-y)^{\f 1 p} \le 2^{\f{p-1}{p}}x^{\f 1 p} - y^{\f 1 p}$;
\item $y^{1 - \f1p}x^{\f1p} \le x \le y^{1-p}x^p$;
\item $x \le \qty\Big(\pa{x+y}^p - y^p)^{\fpp}$.
\end{enumerate}
\label{lemma:useful-inequalities}
\end{lemma}
\begin{proof}
\emph{Part (i).} Let $f(y) = (x+y)^p - x^p - y^p$ on the interval $0 < y \le x$. The derivative,
$$
f'(y) = p(x+y)^{p-1} - py^{p-1} \ge 0
$$
for all $0 < y \le x$ and $p \ge 1$. Therefore $f(y)$ is non-decreasing, and $f(y) \ge f(0) = 0$. This gives us the first inequality. For the second inequality, note that $g(z) = z^p$ is convex for $z\ge 0$. This follows from the fact that $g''(z) = p(p-1)z^{p-2} \ge 0$ for all $z \ge 0$ and $p\ge 1$. By convexity, we obtain
\eq{
2^{-p} \pa{x+y}^p = \pa{\half x + \half y}^p \le \f{x^p + y^p}{2},\nn
}
which leads to the second inequality.
\emph{Part (ii).} Let $z = (x-y)$. Applying the first inequality from the preceding part to $z$ and $y$ we get $z^p \le (y+z)^p - y^p$, i.e., $(x-y)^p \le x^p - y^p$. Similarly, from the second inequality, $(z+y)^p \le 2^{p-1}(z^p + y^p)$, which is the same as $2^{1-p}x^p - y^p \le (x-y)^p$.
\emph{Part (iii).} Note that $f(z) = z^{\f 1 p}$ is concave for all $z\ge 0$ and $p \ge 1$, since
\eq{
f''(z) = -\pa{\f{p-1}{p^2}} z^{\f{1-2p}{p}} \le 0,\nn
}
for all $z \ge 0$, $p \ge 1$. Therefore, by concavity,
\eq{
2^{-\f{1}{p}}(x+y)^{\f 1 p} \ge \f{x^{\f 1 p} + y^{\f 1 p}}{2},\nn
}
which leads to the right hand side inequality, i.e., $x^{\f 1 p} + y^{\f 1 p} \le 2^{1 - \f{1}{p}}(x + y)^{\f 1 p}$. For the left hand side inequality, let $f(y) = x^{\f 1 p} + y^{\f 1 p} - (x+y)^{\f 1 p}$ on the interval $0 < y \le x$. The derivative is given by
\eq{
f'(y) = \f{1}{p}\pa{y^{\f1p - 1}-(x+y)^{\f1p-1}} \ge 0,\nn
}
since $0 < 1/p \le 1$ and $0 < y \le x$. Thus, $f(y)$ is non-decreasing on the interval $[0,x]$, and, therefore, $f(y) \ge f(0) = 0$. This leads to the desired result.
\emph{Part (iv).} The proof is identical to the proof in Part (ii). The inequalities are obtained by taking $z=(x-y)$, and applying the results of Part (iii).
\emph{Part (v).} Since $y \le x$, it follows that $1 \le \pa{x/y}^{\f1p} \le x/y \le \pa{x/y}^p$ for $p\ge 1$. By rearranging the terms, we get $x \le y^{1-p}x^p$ and $x \ge y^{1 - \f1p}x^{\f1p}$.
\emph{Part (vi).} We have $x = (x + y - y) = \qty\big((x+y-y)^p)^{\fpp}$. From Part (ii) we have
$$
{(x+y-y)^p \le (x+y)^p - y^p},
$$
which, on rearrangement, yields $x \le \qty\big((x+y)^p - y^p)^{\fpp}$.
\end{proof}
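Since the lemma is used repeatedly, a quick numeric sanity check of all six parts over random triples $(x, y, p)$ with $0 < y \le x$ and $p \ge 1$ can be reassuring; the small tolerance absorbs floating-point rounding near the equality cases (e.g., $y = x$).

```python
import random

def check_inequalities(x, y, p, tol=1e-6):
    """Numerically check parts (i)-(vi) of the lemma for 0 < y <= x, p >= 1."""
    q = 1.0 / p
    checks = [
        x**p + y**p <= (x + y)**p + tol,                      # (i), left
        (x + y)**p <= 2**(p - 1) * (x**p + y**p) + tol,       # (i), right
        2**(1 - p) * x**p - y**p <= (x - y)**p + tol,         # (ii), left
        (x - y)**p <= x**p - y**p + tol,                      # (ii), right
        (x + y)**q <= x**q + y**q + tol,                      # (iii), left
        x**q + y**q <= 2**((p - 1) / p) * (x + y)**q + tol,   # (iii), right
        x**q - y**q <= (x - y)**q + tol,                      # (iv), left
        (x - y)**q <= 2**((p - 1) / p) * x**q - y**q + tol,   # (iv), right
        y**(1 - q) * x**q <= x + tol,                         # (v), left
        x <= y**(1 - p) * x**p + tol,                         # (v), right
        x <= ((x + y)**p - y**p)**q + tol,                    # (vi)
    ]
    return all(checks)

random.seed(1)
all_ok = all(
    check_inequalities(x, f * x, p)
    for x, f, p in (
        (random.uniform(0.5, 10.0), random.uniform(1e-3, 1.0), random.uniform(1.0, 5.0))
        for _ in range(500)
    )
)
```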
\begin{lemma}[Chernoff-Hoeffding bound simplified]
Suppose $Z_1, Z_2, \dots, Z_N$ are i.i.d. $\textup{Bernoulli}(p)$ random variables. Then, for $0 < \e < 1$,
\eq{
\pr\pa{\f{1}{N}\sum_{1 \le i \le N}Z_i > \e} \le \exp\qty\Bigg( N \pa{\f{2}{e} + \e \log(p)}).\nn
}
\label{lemma:chernoff-hoeffding}
\end{lemma}
\providecommand{\kl}{\mathsf{KL}}
\begin{proof}
For $0 < \e < 1$, using the Chernoff-Hoeffding bound for binomial random variables \citep[Theorem~1]{hoeffding1963probability} we have
\eq{
\pr\pa{\f{1}{N}\sum_{1 \le i \le N}Z_i > \e} \le \exp\qty\Bigg(-N \cdot \kl\qty\Big(\textup{Ber}(\e) || \textup{Ber}(p))),
\label{mom-concentration-4}
}
where $\textup{Ber}(\e)$ and $\textup{Ber}(p)$ are Bernoulli distributions with parameters $\e$ and $p$ respectively, and $\kl(\pr || \qr)$ is the Kullback-Leibler divergence of $\qr$ w.r.t $\pr$. Simplifying the quantity in the exponent, we get
\eq{
\kl\qty\big(\textup{Ber}(\e) || \textup{Ber}(p)) &= \e\log\pa{\f{\e}{p}} + (1-\e)\log\pa{\f{1-\e}{1-p}}\n
&= \underbrace{\e \log(\e) + (1-\e)\log(1-\e)}_{\ge -2/e} - \e\log(p) - (1-\e)\log(1-p)\n
&\ge -\f{2}{e} - \e \log(p),\nn
}
where the last inequality uses the fact that $x\log(x) \ge -1/e$ for all $0 \le x \le 1$, and ${-(1-\e)\log(1-p) \ge 0}$ for all ${0 \le \e, p \le 1}$. Substituting this in \eref{mom-concentration-4} yields the result.
\end{proof}
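As a numeric illustration of the lemma, the following sketch compares the simplified bound against a Monte Carlo estimate of the tail probability. Note the bound is informative only when $\e \log(1/p) > 2/e$, so the exponent is negative; the parameter choices below are illustrative.

```python
import math
import random

def simplified_bound(N, p, eps):
    """The bound exp(N * (2/e + eps * log(p))) from the lemma."""
    return math.exp(N * (2 / math.e + eps * math.log(p)))

# Monte Carlo estimate of P( mean of N Bernoulli(p) draws > eps ).
random.seed(2)
N, p, eps = 100, 0.05, 0.3
trials = 2000
hits = sum(
    (sum(random.random() < p for _ in range(N)) / N) > eps
    for _ in range(trials)
)
empirical = hits / trials
bound = simplified_bound(N, p, eps)
```

Here $\e \log(1/p) \approx 0.9 > 2/e \approx 0.736$, so the bound decays exponentially in $N$, and the empirical tail probability is essentially zero.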
\let\sh\undefined
\let\len\undefined
\section{Introduction} \label{introduction}
Sparse tensor algebra is used in many machine learning domains such as
graph neural networks~\cite{fusedMM,FeatGraph,hamilton2018inductive}.
Tensors are a generalization of matrices and are typically represented using $n$-dimensional arrays.
However, when used to represent large graph-like structures, representing a tensor with a dense array is wasteful, as most values in the tensor are zero. In such cases, programmers use {\em compressed} representations of these {\em sparse tensors}.
The problem of compiler optimizations for sparse codes is well known~\cite{kjolstad:2017:taco,aartbik:93,michelle2015,compiler_in_mlir,pingali97}, and there are several challenges that compilers face: (1) tensor computations have to deal with specific data formats; (2) load imbalance can arise due to irregular structure; and
(3) data locality issues arise due to the sparsity of the tensors.
TACO provides a compiler for automatically generating kernels for dense and sparse tensor algebra
operations~\cite{kjolstad:2017:taco}. The tensor application is expressed in terms of three languages: a tensor algebra language for expressing the computation
(Section~\ref{tensor-index-notation}), a data representation language for specifying how sparse tensors are compressed, and a
scheduling language that specifies the schedule of the computation (Section~\ref{scheduling_primitives}).
The scheduling language provides the ability to define different schedules for computations depending on tensors' dimensionality and sparsity patterns, because one schedule may not fit all data formats and datasets.
This allows the separation of {\em algorithmic} specification from the {\em scheduling} details of the computation.
Once both are specified, code can be generated to implement the desired algorithm and schedule.
One important consequence of TACO's code generation is that the asymptotic complexity of the kernels grows with the number of index variables in the tensor index notation~\cite{peterahrens}.
For example, the complexity of ${\sparse A_{ij}} = \sum\nolimits_{k} {\sparse B_{ij}}\cdot C_{ik}\cdot D_{jk}$\footnote{Highlighted tensors denote sparse tensors.} is $O(nnz(B_{IJ})K)$\footnote{$nnz(B_{IJ})$ denotes the number of nonzero values of the sparse tensor $B$ bounded by the hierarchical accesses $i$ and $j$.}, where $B$ is sparse.
If this example is extended with an additional computation, as in $A_{il} =\sum\nolimits_{kj} {\sparse B_{ij}}\cdot C_{ik}\cdot D_{jk}\cdot E_{jl}$, then the complexity is $O(nnz(B_{IJ})KL)$\footnote{$K$ and $L$ denote the number of iterations, or the dimensionality, of the $k$ and $l$ dimensions respectively.}---and this complexity increases with each additional index variable.
Hence, with increasing terms in the tensor expression, the asymptotic complexity of the resulting code blows up.
Interestingly, this asymptotic blowup is a consequence of doing multiple tensor operations in a single kernel.
The computation could instead be expressed as two separate kernels, with the result of the first computation stored in a temporary tensor: ${\sparse T_{ij}} =\sum\nolimits_{k} {\sparse B_{ij}}\cdot C_{ik}\cdot D_{jk} $; $A_{il} =\sum\nolimits_{j} {\sparse T_{ij}}\cdot E_{jl}$.
This computation has a complexity of $O(nnz(B_{IJ})(K+L))$. However, writing complex computations as {\em separate} TACO expressions has two downsides.
First, it is no longer possible to apply schedule transformations, such as outer-loop parallelization, across the entire computation.
Second, if the computations require large temporaries, materializing them results in performance degradation due to exhaustion of the last-level cache.
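The asymptotic gap itself can be made concrete with a minimal C sketch (hypothetical sizes; hand-written, not TACO output) that counts the innermost-loop iterations of the single fused kernel versus the two-kernel decomposition:

```c
#include <assert.h>

/* Innermost-loop iterations of the fused kernel
 * A_il = sum_{j,k} B_ij * C_ik * D_jk * E_jl:
 * each nonzero (i,j) of B drives a full K x L loop nest. */
long fused_iters(long nnz_B, long K, long L) {
    return nnz_B * K * L;
}

/* Two separate kernels: the SDDMM costs nnz(B) * K iterations, and the
 * following SpMM over T (which has the same sparsity as B) costs nnz(B) * L. */
long split_iters(long nnz_B, long K, long L) {
    return nnz_B * (K + L);
}
```

With, say, $nnz(B) = 10^6$ and $K = L = 100$, the fused kernel performs $10^{10}$ innermost iterations while the split form performs only $2 \times 10^{8}$.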
The correct schedule looks like neither the single-kernel approach nor the separate-kernels approach.
Instead, it performs a single outer loop over the $i$ and $j$ indices and then performs the inner loop of the first kernel, stores the results in a temporary, then uses those results in the inner loop of the second kernel.
This approach has an asymptotic complexity of $O(nnz(B_{IJ})(K+L))$, comparable to the separate kernel approach, but because the temporary is only live within the inner loops, it is much smaller and hence can fit in cache.
Moreover, the overall computation is a single loop nest, allowing for the outer loops to be parallelized, tiled, etc.
The above schedule transformation is analogous to ones in {\em dense} tensor contraction that combine loop distribution and fusion to create imperfectly-nested loops~\cite{saday1}.
But it is less clear how to use this technique on sparse loops for several reasons:
(i) analysis is harder, because of the sparse tensor accesses and non-affine bounds, as polyhedral techniques do not work due to the use of dynamic array bounds in loops;
(ii) producing good schedules is harder because performance can degrade by forcing a sparse tensor to be processed using dense iteration; and
(iii) code generation is harder, as the compiler must deal with storage-format-specific iteration machinery.
For example, a sparse matrix and dense matrix multiplication (SpMM) may be performed with a sparse matrix of Compressed Sparse Row format (CSR), Coordinate format (COO), etc.~\cite{chau2018}.
Hence, the compiler needs to tackle format-specific access patterns to generate code for SpMM for different storage formats.
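As a concrete illustration of format-specific iteration machinery, the following self-contained C sketch (toy data; the `pos`/`crd` array names loosely follow TACO's convention but are our own) performs the same SpMV with a CSR matrix and with a COO matrix; the loop structure that must be emitted differs in each case:

```c
#include <assert.h>

/* y = B * x with B in CSR: pos[] delimits each row's slice of
 * crd[]/vals[], so rows can be enumerated directly. */
void spmv_csr(int n, const int *pos, const int *crd,
              const double *vals, const double *x, double *y) {
    for (int i = 0; i < n; i++) {
        y[i] = 0.0;
        for (int p = pos[i]; p < pos[i + 1]; p++)
            y[i] += vals[p] * x[crd[p]];
    }
}

/* Same product with B in COO: nonzeros are a flat list of
 * (row, col, val) triples with no per-row structure to exploit. */
void spmv_coo(int n, int nnz, const int *row, const int *col,
              const double *vals, const double *x, double *y) {
    for (int i = 0; i < n; i++) y[i] = 0.0;
    for (int p = 0; p < nnz; p++)
        y[row[p]] += vals[p] * x[col[p]];
}
```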
Our insight for tackling the complex scheduling transformations needed to avoid asymptotic blowup while preserving locality is to use dense temporaries and introduce \textit{Sparse Loop Nest Restructuring (SparseLNR)\footnote{\url{https://github.com/adhithadias/SparseLNR}}} for tensor computations.
Crucially, these transformations can co-exist with TACO's other scheduling primitives~\cite{senanayake:2020:scheduling}.
This paper introduces a new representation called {\em branched iteration graphs} that support imperfect nesting of sparse iteration.
Given this representation, our compiler can restructure sparse tensor computations to remove the asymptotic blowup in sparse tensor algebra code generation while delivering good locality.
Our specific contributions are:
\begin{description}
\item[\textbf{Branched iteration graph for tensor multiplications}] We generalize the
iteration graph intermediate representation (IR) of TACO to support imperfectly nested loop structures.
\item[\textbf{Branch IR transformation}] We design a sparse tensor transformation
that transforms iteration graphs to express fusion and distribution.
\item[\textbf{New scheduling primitives}] We introduce a new scheduling primitive that lets programmers integrate fusion and distribution into TACO schedules.
\end{description}
For several real-world tensor algebra computations (described in Section~\ref{benchmarks}) on various datasets (shown in Table~\ref{tab:datasets}), using our new representation and transformations, we show that SparseLNR\xspace can achieve 1.23--1997x (single-thread) and 0.86--1263x (multi-thread) speedup over baseline TACO schedules, and 0.27--3.21x (single-thread) and 0.51--3.16x (multi-thread) speedup over TACO schedules of manually separated computations.
\section{Background}
\label{background}
This section provides the necessary background to understand sparse tensor algebra computations and different ways to schedule those computations.
\subsection{Tensor Index Notation}
\label{tensor-index-notation}
Tensor index notation is a high-level representation used for describing tensor algebra expressions~\cite{kjolstad:2017:taco}.
Throughout the paper we will be using both the {\em standard
notation} and {\em tensor index notation} to denote tensor operations.
For instance, the tensor computation $A_{ik} = \sum\nolimits_{j} {\sparse B_{ij}} C_{jk}$ written in standard notation is equivalent to $A(i,k) = {\sparse B(i,j)} * C(j,k)$, written in tensor index notation.\footnote{This computation is classic matrix-matrix multiply.}
Here, all the tensors are matrices and indices $i,j,$ and $k$ are used to iterate over matrices $A, B,$ and $C$.
In this computation, index $j$ must iterate over the intersection of the second dimension coordinates of $B$ and the first dimension coordinates of $C$, whereas indices $i$ and $k$ iterate over the first dimension coordinates of $B$ and the second dimension coordinates of $C$, respectively.
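A minimal C sketch of this iteration pattern (hand-written, not TACO output): because $C$ is dense and supports random access, iterating $j$ over the intersection reduces to walking only the nonzero column coordinates of $B$ in each row.

```c
#include <assert.h>

/* A(i,k) = sum_j B(i,j) * C(j,k), with B in CSR and C, A dense
 * (row-major). The j loop visits only B's nonzero columns of row i;
 * intersecting with a dense C does not filter anything out. */
void spmm_csr(int n, int K, const int *pos, const int *crd,
              const double *Bv, const double *C, double *A) {
    for (int i = 0; i < n; i++)
        for (int k = 0; k < K; k++) {
            double s = 0.0;
            for (int p = pos[i]; p < pos[i + 1]; p++)
                s += Bv[p] * C[crd[p] * K + k];
            A[i * K + k] = s;
        }
}
```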
\subsection{Iteration Graph}
\label{iteration_graph}
We first summarize TACO's iteration graph representation, which Kjolstad~\textit{et al.}~describe in great detail~\cite{kjolstad:2017:taco}.
When computing the tensor expression $ {\sparse A_{ij}} = \sum\nolimits_{k} {\sparse B_{ij}} C_{ik} D_{jk} $, coordinates $(i,j)$ of $B$, coordinates $(i,k)$ of $C$, and $(j,k)$ of $D$ need to be iterated.
An iterator on indices $(i,j,k)$ can iterate through all the coordinates of $B$, $C$, and $D$ and store the results in $A$.
TACO represents the iteration space of a tensor expression using an iteration graph, an intermediate representation that defines tensor access patterns of indices.
Figure~\ref{fig:iteration-graphs} shows a few examples of iteration graphs. For example, a tensor expression $ {\sparse A_{ij}} = \sum\nolimits_{k} {\sparse B_{ij}} C_{ik} D_{jk} $ results in an iteration graph as shown in Figure~\ref{fig:sddmm-iter-graph} such that the indices lay in $i, j, k$ order.
Here, the order of $j$ and $k$ is not strict if $C$ and $D$ are dense.
Figures~\ref{fig:mttkrp-iter-graph}~and~\ref{fig:spmv-spmv-iter-graph} are the iteration graphs of
tensor expressions $ A_{ij} = \sum\nolimits_{kl} {\sparse B_{ikl}} C_{lj} D_{kj} $ and
$ y_{i} = \sum\nolimits_{jk} {\sparse B_{ij}} {\sparse C_{jk}} v_{k} $
respectively.
Nodes in the iteration graph represent indices of tensor index notation.
In other words, the iteration graph is a directed graph of these indices.
These indices of the graph are topologically sorted so that the order respects sparse iteration constraints (\textit{i.e.,} constraints that define the sparse tensor access patterns of indices due to the lack of random access in general).
Each index in the iteration graph can be expressed as a loop to iterate through a tensor.
Therefore, a given tensor multiplication can be computed using nested loops, where each loop corresponds to an index variable in the iteration graph.
\begin{definition} \label{iteration_graph_definition}
An iteration graph is a directed graph $ G = (V,P) $ where $ V = \{v_1, v_2, \ldots, v_n\} $ defines the set of index variables in the tensor index notation, and $ P = \{p_1, p_2, \ldots, p_n\} $ defines the set of tensor paths, where a tensor path is a tuple of index variables associated with a particular tensor variable.
\end{definition}
\begin{figure}[!t]
\vspace{-1em}
\centering
\hfill
\begin{subfigure}[t]{0.3\columnwidth}
\centering
\includegraphics[scale=.3]{./images/iteration_graphs/sddmm-iter-graph.pdf}
\caption{\ }
\vspace{-10pt}
\label{fig:sddmm-iter-graph}
\Description[SDDMM original iteration graph]{<long description>}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.3\columnwidth}
\centering
\includegraphics[scale=.3]{./images/iteration_graphs/mttkrp-iter-graph.pdf}
\caption{\ }
\vspace{-10pt}
\label{fig:mttkrp-iter-graph}
\Description[MTTKRP original iteration graph]{<long description>}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.3\columnwidth}
\centering
\includegraphics[scale=.3]{./images/iteration_graphs/spmv-spmv-iter-graph.pdf}
\caption{\ }
\vspace{-10pt}
\label{fig:spmv-spmv-iter-graph}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\hfill
\caption{Iteration graphs (a) SDDMM kernel $ {\sparse A_{ij}} = \sum\nolimits_{k} {\sparse B_{ij}} C_{ik} D_{jk} $
(b) Matricized tensor times Khatri-Rao product (MTTKRP) kernel $ {\sparse A_{ij}} = \sum\nolimits_{kl} {\sparse B_{ikl}} C_{lj} D_{kj} $
(c) Sparse matrix vector multiplication (SpMV) kernel preceded by another SpMV kernel $ y_{i} = \sum\nolimits_{jk} {\sparse B_{ij}} ({\sparse C_{jk}} v_{k}) $ }
\label{fig:iteration-graphs}
\vspace{-1.5em}
\end{figure}
\subsection{Scheduling Primitives}\label{scheduling_primitives}
A tensor expression can have multiple valid schedules of computation as there are different valid orders of iterating through indices and multiple parallelization strategies.
Kjolstad \textit{et al.}~\cite{kjolstad:2017:taco} and
Senanayake \textit{et al.}~\cite{senanayake:2020:scheduling} have introduced scheduling primitives for tensor computations, with which the user can describe schedules to execute a given tensor computation.
The scheduling primitives in TACO are the \textit{split} directive, which splits a loop into two loops for tiling; the \textit{collapse} directive, which collapses doubly nested loops into a single loop to balance load among threads; the \textit{reorder} directive\footnote{Also referred to as the \textit{permute} directive in the literature.}, which reorders loops; the \textit{unroll} directive, which performs loop unrolling; and the \textit{parallelize} directive, which parallelizes loops with OpenMP-based multithreaded execution (for outer loops) or vectorized execution (for inner loops).
Furthermore, Kjolstad \textit{et al.}~\cite{kjolstad:2018:workspaces}
added the \textit{precompute} scheduling directive, which uses intermediate dense workspaces to remove sparse accesses when storing data values to output tensors.
\section{Overview} \label{overview}
A number of factors must be taken into account when deciding whether to apply transformations across kernels.
If the working sets are small, running the kernels separately, each with a good schedule, may be faster than a fused kernel.
But if the working sets are large, producing temporaries that do not fit in cache, it is better to fuse the two kernels and maximize data reuse by consuming the results of the first kernel in part of the second kernel, without waiting for the first kernel to complete.
\begin{figure*}[t]
\begin{adjustbox}{minipage=\linewidth,scale=0.9}
\begin{subfigure}[t]{0.50\textwidth}
\begin{lstlisting}[basicstyle=\small, gobble=8, tabsize=2, showtabs=false, showstringspaces=false]
int32_t jY = 0;
for (int32_t i = 0; i < C1_dimension; i++) {
for (int32_t jB = B2_pos[i]; jB < B2_pos[(i + 1)]; jB++) {
int32_t j = B2_crd[jB];
double tkY_val = 0.0;
for (int32_t k = 0; k < D2_dimension; k++) {
tkY_val += B_vals[jB] * C_vals[i * C2_dimension + k] * D_vals[j * D2_dimension + k];
}
Y_vals[jY] = tkY_val;
jY++;
}
}
\end{lstlisting}
\vspace{-10pt}
\caption[For the list of figures]{$ {\sparse Y_{ij}} = \sum\nolimits_{k} {\sparse B_{ij}} C_{ik} D_{jk} $}
\vspace{-10pt}
\label{fig:sddmm-kernel}
\Description[C++ code of the kernel sampled dense-dense matrix multiplication kernel]{<long description>}
\end{subfigure}%
\hfill%
\hspace{2em}%
\begin{subfigure}[t]{0.50\textwidth}
\begin{lstlisting}[basicstyle=\small, gobble=8, tabsize=2, showstringspaces=false]
for (int32_t i = 0; i < Y1_dimension; i++) {
for (int32_t jY = Y2_pos[i]; jY < Y2_pos[(i + 1)]; jY++) {
int32_t j = Y2_crd[jY];
for (int32_t l = 0; l < E2_dimension; l++) {
A_vals[i * A2_dimension + l] = A_vals[i * A2_dimension + l] + Y_vals[jY] * E_vals[j * E2_dimension + l];
}
}
}
\end{lstlisting}
\vspace{-10pt}
\caption[For the list of figures]{$ A_{il} = \sum\nolimits_{j} {\sparse Y_{ij}} E_{jl} $}
\vspace{-10pt}
\label{fig:spmm-kernel}
\Description[C++ code of the sparse matrix multiplication kernel]{<long description>}
\end{subfigure}%
\vskip\baselineskip
\begin{subfigure}[t]{0.50\textwidth}
\begin{lstlisting}[basicstyle=\small, gobble=8, tabsize=2, showstringspaces=false]
for (int32_t i = 0; i < C1_dimension; i++) {
for (int32_t jB = B2_pos[i]; jB < B2_pos[(i + 1)]; jB++) {
int32_t j = B2_crd[jB];
for (int32_t l = 0; l < E2_dimension; l++) {
double tkA = 0.0;
for (int32_t k = 0; k < D2_dimension; k++) {
tkA += B_vals[jB] * C_vals[i * C2_dimension + k] * D_vals[j * D2_dimension + k] * E_vals[j * E2_dimension + l];
}
A_vals[i * A2_dimension + l] = A_vals[i * A2_dimension + l] + tkA;
}
}
}
\end{lstlisting}
\vspace{-10pt}
\caption[For the list of figures]{$ A_{il} = \sum\nolimits_{jk} {\sparse B_{ij}} C_{ik} D_{jk} E_{jl} $}
\vspace{-5pt}
\label{fig:taco-fused-sddmm-spmm-kernel}
\Description[C++ code of the kernel fused sampled dense-dense matrix multiplication kernel and sparse matrix multiplication kernel]{<long description>}
\end{subfigure}%
\hfill%
\hspace{2em}%
\begin{subfigure}[t]{0.50\textwidth}
\begin{lstlisting}[basicstyle=\small, gobble=8, tabsize=2, showstringspaces=false]
for (int32_t i = 0; i < C1_dimension; i++) {
for (int32_t jB = B2_pos[i]; jB < B2_pos[(i + 1)]; jB++) {
int32_t j = B2_crd[jB];
double Y_val = 0.0;
for (int32_t k = 0; k < D2_dimension; k++) {
Y_val += B_vals[jB] * C_vals[i * C2_dimension + k] * D_vals[j * D2_dimension + k];
}
for (int32_t l = 0; l < E2_dimension; l++) {
A_vals[i * A2_dimension + l] = A_vals[i * A2_dimension + l] + Y_val * E_vals[j * E2_dimension + l];
}
}
}
\end{lstlisting}
\vspace{-10pt}
\caption[For the list of figures]{$ A_{il} = \sum\nolimits_{j} ( \sum\nolimits_{k} {\sparse B_{ij}} C_{ik} D_{jk} ) E_{jl}$}
\vspace{-5pt}
\label{fig:SparFF-fused-kernel}
\Description[C++ code of the kernel fused sampled dense-dense matrix multiplication kernel and sparse matrix multiplication kernel]{<long description>}
\end{subfigure}
\end{adjustbox}
\caption{Different schedules of executing $ A_{il} = \sum\nolimits_{jk} {\sparse B_{ij}} \cdot C_{ik} \cdot D_{jk} \cdot E_{jl} $. The code snippet~\ref{fig:spmm-kernel} executed immediately after the code snippet~\ref{fig:sddmm-kernel} computes the same result as the fused operations shown in the code snippets~\ref{fig:taco-fused-sddmm-spmm-kernel} and~\ref{fig:SparFF-fused-kernel}. Here, the code snippet~\ref{fig:taco-fused-sddmm-spmm-kernel} has a perfectly nested loop structure, while the code in~\ref{fig:SparFF-fused-kernel} uses an imperfectly nested loop structure for the same computation.}
\vspace{-10pt}
\label{fig:three graphs}
\end{figure*}
\subsection{Motivating Example} \label{motivating_example}
Consider the computation $A = Sparse\ B \odot (CD) \cdot E$, which is used in graph embedding and graph neural networks~\cite{fusedMM,graph_embedding_networks}.
The Hadamard product, or element-wise product, is denoted by $\odot$ and matrix multiplication is denoted by $\cdot$.
We can perform the above computation with the following sequence of
fine-grained, smaller tensor operations:
$T = gemm (C, D)$; $Sparse\ U = spelmm (Sparse\ B, T)$;
$A = spmm (Sparse\ U, E)$.
Here, $gemm$ stands for generalized matrix multiplication, $spelmm$ stands for sparse element-wise multiplication, and $spmm$ stands for sparse matrix multiplication.
Materialization of these intermediate tensors leads to
multiple issues:
\begin{enumerate}[align=left,leftmargin=*]
\item Dense matrix multiplication results in
redundant calculations and an unnecessary increase in asymptotic complexity, because its result is later sampled by the sparse $B$ matrix.
\item Values are produced long before they are consumed, which may cause them to be evicted from caches.
\item Having intermediate tensors is justifiable if intermediate results are needed for some other computation; nevertheless, a single kernel may be needed for faster operation.
\end{enumerate}
Introducing \textit{kernel fusion} to tensor computations can reduce these issues~\cite{fusedMM}.
In this section, we discuss different schedules for performing the computation $ A = Sparse\ B \odot (CD) \cdot E $, and motivate the need for supporting loop fusion for sparse tensor computations.
First, we discuss the opportunities for distribution in the running example using a fused kernel with high asymptotic complexity (Section~\ref{taco_original_kernel}).
Next, we discuss opportunities for fusion when the computation is split into two smaller kernels (Section~\ref{separate-kernel-execution-code}).
Finally, in Section~\ref{subsection:fused-kernel-code} we discuss how we can exploit these different scenarios to construct a distributed (versus the fused kernel in Section~\ref{taco_original_kernel}) and then fused (as compared to the kernel in Section~\ref{separate-kernel-execution-code}) implementation.
\subsubsection{Asymptotically expensive fused kernel} \label{taco_original_kernel}
The computation $A_{il} = \sum\nolimits_{jk} {\sparse B_{ij}} \cdot C_{ik} \cdot D_{jk} \cdot E_{jl}$ can be fully realized using a nested loop iterator defined by all indices $i,j,k,\ \mbox{and}\ l$.
The generalized way of producing kernels for a tensor multiplication of this kind in TACO is by generating an iteration graph (see Section~\ref{iteration_graph}).
Since the iteration graph contains all the indices in a linear tree pattern, TACO generates a kernel as in Figure~\ref{fig:taco-fused-sddmm-spmm-kernel}, with a time complexity of $O(nnz(B_{IJ})KL)$ due to the quadruply nested loops (lines 1--6).
\subsubsection{Asymptotically inexpensive distributed kernels} \label{separate-kernel-execution-code}
However, the computation $A_{il} = \sum\nolimits_{kj} {\sparse B_{ij}} \cdot C_{ik} \cdot D_{jk} \cdot E_{jl}$ can be performed by evaluating two smaller kernels: sampled dense-dense matrix multiplication (SDDMM):
$ {\sparse Y_{ij}} = \sum\nolimits_{k} {\sparse B_{ij}} \cdot C_{ik} \cdot D_{jk} $ followed by SpMM:
$ A_{il} = \sum\nolimits_{j} {\sparse Y_{ij}} \cdot E_{jl} $.
As these separate kernels are triply nested loops (lines 2--6 in Figure~\ref{fig:sddmm-kernel} and 1--3 in Figure~\ref{fig:spmm-kernel}), they have lower asymptotic complexity.
Here, the Hadamard product in SDDMM causes the sparsity structure of the ${\sparse Y_{ij}}$ matrix to be the same as that of ${\sparse B_{ij}}$.
Therefore, the asymptotic complexity of performing the two tensor computations with an intermediary matrix ${\sparse Y_{ij}}$ is $O(nnz(B_{IJ})(K+L))$.
These separate kernels can be realized through loop distribution of the kernel from Section~\ref{taco_original_kernel}.
Although we achieve a lower asymptotic complexity, we are using an intermediary tensor to
pass values between SDDMM and SpMM.
Hence, we miss the opportunity to exploit the temporal locality of the
operation.
The tensor contraction computed using linearly nested loops in Section~\ref{taco_original_kernel}
is expensive because of the high degree of nesting in the computation
and the redundant duplicate computations, but it
may still be a good fit for memory-constrained
systems because the computation does not require any memory for storing intermediate results.
Using a temporary tensor to hold the result of the SDDMM operation is acceptable as long as the dimensionality of the index variables $i$ and $j$, and the density of the temporary tensor, are small.
The code generation algorithm in TACO is limited to generating sequential code when the output tensor is in a sparse format (see the $jY$ variable in Figure~\ref{fig:sddmm-kernel}).
The kernel is sequential because the data format used to store the results of the computation limits random accesses.
Here, the output of the SDDMM operation is sparse (and the output of the SpMM is dense), so we cannot parallelize the outermost loop of the SDDMM operation in the separate-kernel execution, whereas the kernel in Figure~\ref{fig:taco-fused-sddmm-spmm-kernel} can be parallelized because the output of the combined kernel is dense.
This is another valid reason to prefer the single-kernel implementation despite its high asymptotic complexity.
\begin{comment}
One can define the output tensor of SDDMM kernel as dense to give it random access
so that the outermost loop in SDDMM kernel can be run in parallel in TACO.
But the dimensionality of both $i$ and $j$ can be 1 million each
we would need to store a 1M by 1M
array of values as the intermediary array. 1M by 1M 2D array would require 8 TB
of memory which makes the computation impractical.
As we will see in the example graphs in Section~\ref{evaluation}, real-world graphs could have
dimensionality in millions.
\end{comment}
\subsubsection{Fused kernel with low asymptotic complexity} \label{subsection:fused-kernel-code}
Since both the kernels $ {\sparse Y_{ij}} = \sum\nolimits_{k} {\sparse B_{ij}} \cdot C_{ik} \cdot D_{jk} $
in Figure~\ref{fig:sddmm-kernel} and
$A_{il} = \sum\nolimits_{j} {\sparse Y_{ij}} \cdot E_{jl}$ in Figure~\ref{fig:spmm-kernel} have the same
access patterns in their two outer-most loops, we can fuse them as shown in Figure~\ref{fig:SparFF-fused-kernel}, removing the use of the intermediary tensor to pass values
between the two separate kernels as explained in Section~\ref{separate-kernel-execution-code}.
This execution has a time complexity of $O(nnz(B_{IJ})(K+L))$, and at the same time removes the usage of
a large tensor temporary by using an imperfectly nested loop structure (lines 1--2, 5, and 8 in Figure~\ref{fig:SparFF-fused-kernel}).
Note that this partially-fused kernel provides the best of both worlds. Like the separate kernel approach, it has low asymptotic complexity. Like the fused kernel approach, it has good locality (since the temporaries only need to store data from the inner loops, their sizes are much smaller and the reuse distances are reduced).
Furthermore, because the outer loops of the partially fused kernel are shared between both computations, and there is no longer a loop-carried dependence for SDDMM, the overall kernel can be parallelized in the same way as the kernel of Figure~\ref{fig:taco-fused-sddmm-spmm-kernel}.
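The discussion above can be checked end-to-end with a small, self-contained C sketch (tiny hypothetical operands; hand-written rather than TACO-generated): the partially fused schedule produces exactly the same output as the fused quadruple loop, while doing only $O(nnz(B)(K+L))$ work and keeping just a scalar temporary live across the branch.

```c
#include <assert.h>
#include <string.h>

enum { NI = 2, NJ = 3, NK = 2, NL = 2 };

/* B (NI x NJ) in CSR; C (NI x NK), D (NJ x NK), E (NJ x NL) dense. */
static const int    B_pos[]  = {0, 2, 3};
static const int    B_crd[]  = {0, 2, 1};
static const double B_vals[] = {1.0, 2.0, 3.0};
static const double C[NI][NK] = {{1, 2}, {3, 4}};
static const double D[NJ][NK] = {{1, 0}, {0, 1}, {1, 1}};
static const double E[NJ][NL] = {{1, 1}, {2, 0}, {0, 2}};

/* Fused quadruple loop: O(nnz(B) * K * L). */
void fused(double A[NI][NL]) {
    memset(A, 0, sizeof(double) * NI * NL);
    for (int i = 0; i < NI; i++)
        for (int p = B_pos[i]; p < B_pos[i + 1]; p++) {
            int j = B_crd[p];
            for (int l = 0; l < NL; l++)
                for (int k = 0; k < NK; k++)
                    A[i][l] += B_vals[p] * C[i][k] * D[j][k] * E[j][l];
        }
}

/* Partially fused: the k-loop (SDDMM) and l-loop (SpMM) are
 * distributed under shared i,j loops: O(nnz(B) * (K + L)),
 * with only a scalar temporary live across the branch. */
void partially_fused(double A[NI][NL]) {
    memset(A, 0, sizeof(double) * NI * NL);
    for (int i = 0; i < NI; i++)
        for (int p = B_pos[i]; p < B_pos[i + 1]; p++) {
            int j = B_crd[p];
            double y = 0.0;
            for (int k = 0; k < NK; k++)
                y += B_vals[p] * C[i][k] * D[j][k];
            for (int l = 0; l < NL; l++)
                A[i][l] += y * E[j][l];
        }
}
```

Because multiplication distributes over the inner $k$-sum, the two functions compute identical results; only the loop structure and the amount of work differ.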
\subsection{Our approach: SparseLNR\xspace}
While the schedule of computation in Figure~\ref{fig:SparFF-fused-kernel} provides both good asymptotic complexity and good locality, no existing system can automatically generate this type of schedule when generating code for sparse computations. TACO only handles ``linear'' iteration graphs that yield perfectly-nested loops, and hence cannot handle the partially-fused, imperfectly nested loop structure needed by our example.
On the other hand, prior work on distribution and fusion for tensor computations~\cite{saday1} can support this type of code structure only for operations on dense tensors.
SparseLNR\xspace provides mechanisms for generating the code in Figure~\ref{fig:SparFF-fused-kernel} from a high level representation of the computation as well as scheduling directives that inform the structure of the code.
We introduce several components to perform this code generation, and Section~\ref{method} discusses them in detail.
\begin{enumerate}[align=left,leftmargin=*]
\item We introduce a new representation called a {\em branched iteration graph} that allows the representation of partially-fused iteration structures, where some loops in a loop nest are common between computations and others are separate. Hence, this graph represents imperfect nesting. We carefully place constraints on these graphs to ensure that the requirements of nested iteration over sparse structures are met. The branched iteration graph is described in more detail in Section~\ref{algorithm}.
\item We introduce new scheduling primitives for loop distribution and fusion that allow programmers to {\em generate} the branched iteration graph by applying scheduling transformations to the linear TACO iteration graph. We describe the primitives and how they systematically transform a branched iteration graph in Section~\ref{support_existing_transformations}.
\item We adapt TACO's code generation strategies to the branched iteration graph, allowing SparseLNR\xspace to generate sparse iteration code for tensor kernels that have had our distribution and fusion transformations applied to them. We discuss code generation in Section~\ref{code_generation}.
\end{enumerate}
\section{Detailed Design} \label{method}
This section describes the key components of SparseLNR\xspace. Section~\ref{subsec:representation} describes SparseLNR\xspace's new branched iteration graph representation. Section~\ref{subsec:transformation} shows how partial fusion is represented through iteration graph transformations. Section~\ref{subsec:directives} explains how scheduling directives can guide partial fusion while still composing with TACO's existing scheduling language. Finally, Section~\ref{subsec:codegen} explains how SparseLNR\xspace generates code.
\begin{figure*}[ht]
\vspace{-1em}
\centering
\hfill
\hfill
\begin{subfigure}[t]{0.14\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{./images/example_iteration_graph/sddmm-spmm-original.pdf}
\caption{Original kernel}
\vspace{-5pt}
\label{fig:sddmm-spmm-original}
\Description[Original iteration graph of sddmm spmm operation]{<long description>}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.13\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{./images/example_iteration_graph/sddmm-spmm-removed-2.pdf}
\caption{SDDMM}
\vspace{-5pt}
\label{fig:sddmm-spmm-removed}
\Description[SDDMM iteration graph recovered from the iteration graph of the combined iteration graph of SDDMM and SpMM operation]{<long description>}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.13\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{./images/example_iteration_graph/sddmm-spmm-add-back-2.pdf}
\caption{SpMM}
\vspace{-5pt}
\label{fig:sddmm-spmm-add-back}
\Description[Iteration graph of the SpMM operation recovered from the combined kernel]{<long description>}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.26\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{./images/example_iteration_graph/sddmm-spmm-combined-2.pdf}
\caption{Producer/Consumer kernels}
\vspace{-5pt}
\label{fig:sddmm-spmm-combined}
\Description[Reasoning about the producer and consumer iteration graphs]{<long description>}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.26\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{./images/example_iteration_graph/sddmm-spmm-branched-2.pdf}
\caption{Fused kernel}
\vspace{-5pt}
\label{fig:sddmm-spmm-branched}
\Description[Branched iteration graph for the SDDMM, SpMM operation]{<long description>}
\end{subfigure}%
\hfill
\hfill
\caption{loopfuse transformation performed on $ A_{il} = \sum\nolimits_{jk} {\sparse B_{ij}} C_{ik} D_{jk} E_{jl} $}
\label{fig:sddmm-spmm-all-graphs}
\vspace{-1.0em}
\end{figure*}
\subsection{Representation}
\label{subsec:representation}
SparseLNR\xspace uses a {\em branched iteration
graph} to represent sparse tensor algebra kernels,
which is an extension of the concrete index notation described in~\cite{kjolstad:2018:workspaces}. A branched iteration graph can be understood as
an iteration graph with branches in its index access patterns. By transforming the linear
index tree iteration graph generated by TACO into a branched iteration graph in the
context of tensor multiplication, we
remove the asymptotic blowup that arises from perfectly, linearly nested loops in dense/sparse
iterations.
\begin{definition}
A branched iteration graph is a directed graph $G = (V, G_{p}, G_{c}, P)$, where $V$ is a set of {\em unbranched} indices, organized as a sequence starting from the root of the iteration graph that then has two children graphs,
$G_{p}$ (producer) and $G_{c}$ (consumer), that define the two branches of $G$, where $G_p$ and $G_c$ themselves are branched iteration graphs, such that there is a {\em dependence edge} from $G_{p}$ to $G_{c}$ and a \textit{boundary} between $V$ and ($G_c, G_p$).
The {\em dependence edge} tracks the common set of indices in $G_{p}$ and $G_{c}$.
$P = \{p_1, p_2, \ldots, p_n\}$ defines the set of tensor paths, where a tensor path is a tuple of indices associated with a particular tensor variable.
\end{definition}
Intuitively, whereas a TACO iteration graph corresponds to a perfectly nested loop in which the order of the vertices in the graph corresponds to the nesting order of the loops, a branched iteration graph represents an imperfectly nested loop. $V$ corresponds to the common outer loops, just as in a TACO graph, while $G_p$ and $G_c$ correspond to the inner loop nests (which can themselves be imperfectly nested).
For example, in Figure~\ref{fig:sddmm-spmm-branched}, $V$ refers to the set of indices \{$i,j$\}, and $G_p$ and $G_c$ refer to the boxes \textit{Producer} and \textit{Consumer}, respectively.
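To make the definition concrete, the structure can be sketched as a small recursive record. The following is an illustrative Python mock-up (class and field names are ours), not SparseLNR's actual C++ representation, instantiated for the <SDDMM, SpMM> example:

```python
# Illustrative sketch of a branched iteration graph G = (V, G_p, G_c, P).
# Names and layout are ours; SparseLNR's real IR is implemented in C++ on TACO.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BranchedIterationGraph:
    V: list                                                # unbranched indices from the root
    producer: Optional["BranchedIterationGraph"] = None    # G_p
    consumer: Optional["BranchedIterationGraph"] = None    # G_c
    paths: dict = field(default_factory=dict)              # P: tensor name -> index tuple

    def is_branched(self) -> bool:
        # A graph with both children is imperfectly nested below V.
        return self.producer is not None and self.consumer is not None

# <SDDMM, SpMM>: V = {i, j}; the producer iterates k, the consumer iterates l.
g = BranchedIterationGraph(
    V=["i", "j"],
    producer=BranchedIterationGraph(
        V=["k"], paths={"B": ("i", "j"), "C": ("i", "k"), "D": ("j", "k")}),
    consumer=BranchedIterationGraph(
        V=["l"], paths={"A": ("i", "l"), "E": ("j", "l")}),
)
```

An unbranched graph (both children `None`) degenerates to an ordinary TACO iteration graph over `V`.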
\subsection{Branched Iteration Graph Transformation} \label{algorithm}\label{subsec:transformation}
In Section~\ref{motivating_example} we saw how we could perform loop fusion or distribution for a sparse tensor algebra computation.
We recognize this pattern in index traversal and exploit it to generate the branched iteration graph.
We name this pattern-recognition algorithm \emph{fusion after distribution} because it proceeds in two steps, as described in Algorithm~\ref{algorithm:sparsefd}:
(i) distributing the perfectly-nested indices in the iteration graph, and then
(ii) fusing the common indices.
\begin{algorithm}
\caption{Loop fusion after distribution}
\label{algorithm:sparsefd}
\begin{algorithmic}[1]
\Require{Topologically Ordered Iteration Graph $G_{I}=(I_{G},P)$}
\Require{Index Expression $Expr$: $A_{out} = A_{1}*A_{2}*\cdots*A_{n}$}
\Require{Bool $recursive$}
\Ensure{Branched Iteration Graph $G'_{I}$}
\vspace{0.5em}
\State fusible = isFusible($G_{I}$) \label{lst:line:is_fusible}
\IIf{!fusible} \textbf{return} $G_{I}$
\State $P_{T'} = P_{A_{out}} - P_{A_{n}}$ \Comment{index path for temporary tensor $T'$}
\State $G_{I-Producer}, ProducerExpr_{temp} := T'(P_{T'}) = Expr \setminus A_{n}$
\State $G_{I-Consumer}, ConsumerExpr_{temp} := A_{out} = T'(P_{T'}) * A_{n}$
\If{recursive} \label{lst:line:recursive_start}
\State $G_{I-Producer} = recursiveCall(G_{I-Producer},$ \par\qquad\(\hookrightarrow\)\enspace $ProducerExpr_{temp}, recursive)$
\EndIf \label{lst:line:recursive_end}
\State $List_{I-Producer} = GetIndices(G_{I-Producer})$ \label{lst:line:common_prefix_start}
\State $List_{I-Consumer} = GetIndices(G_{I-Consumer})$
\State \textbf{Define}: $I_{sharable} = \emptyset$ \label{lst:line:sharable_start}
\For{Each $i \in I_{G}$}
\If{$i \in List_{I-Producer} \textbf{ and } i \in List_{I-Consumer}$}
\State $I_{sharable} = I_{sharable} \cup i$
\Else \quad break;
\EndIf
\EndFor \label{lst:line:sharable_end}
\State \textbf{Define}: $I_{fusable} = \emptyset$ \label{lst:line:fusible_start}
\For{Each $i \in I_{G}$}
\If{$i\not\in I_{sharable} \textbf{ and } i \in List_{I-Producer} \textbf{ and }$ \par\qquad\(\hookrightarrow\)\enspace $i \in List_{I-Consumer}$}
\State $I_{fusable} = I_{fusable} \cup i$
\Else \quad break;
\EndIf
\EndFor \label{lst:line:fusible_end}
\State \textbf{Define}: $T(P_{I_{fusable}})$ \label{lst:line:temp_define}
\State $ProducerExpr := T(P_{I_{fusable}}) = T'(P_{T'})$ \label{lst:line:producer_with_temporary}
\State $ConsumerExpr := A_{out} = T(P_{I_{fusable}}) * A_{N}$ \label{lst:line:consumer_with_temporary}
\State \Return{$GraphRewrite(G_{I}, I_{sharable},$\par\qquad\(\hookrightarrow\)\enspace$ProducerExpr, ConsumerExpr)$}
\end{algorithmic}
\end{algorithm}
\paragraph{Topologically sorted iteration graph.}
The iteration graph in Figure~\ref{fig:sddmm-spmm-original} corresponds to the index expression
$ A(i,l) = {\sparse B(i,j)} * C(i,k) * D(j,k) * E(j,l) $, where $B$ is sparse.
We denote this kernel as <SDDMM, SpMM>. The indices here are
topologically ordered such that the ordering of the indices is constrained by the
sparsity patterns of the sparse tensors. The ordering
$ i \rightarrow j \rightarrow k \rightarrow l $ is consistent
with the access patterns of all the tensors:
$i \rightarrow l$ in A, $i \rightarrow j$ in B, $i \rightarrow k$ in C, $j \rightarrow k$ in D,
and $j \rightarrow l$ in E. However, a hard ordering is imposed on the $i$ and $j$ index
variables because $j$ cannot be accessed without accessing $i$ first.
The index access patterns of the tensor variables are marked in the graph as paths. $A_{1}$
denotes the first access dimension of the $\dense{A}$ tensor and $A_{2}$ denotes its second
access dimension. The fusion algorithm takes as input the iteration graph in
Figure~\ref{fig:sddmm-spmm-original} and the tensor index notation expression $ A(i,l) = {\sparse B(i,j)} * C(i,k) * D(j,k) * E(j,l) $.
We identify the iteration graph as fusible if there are indices that are present only in the last tensor and the output tensor of the tensor expression (line~\ref{lst:line:is_fusible} of Algorithm~\ref{algorithm:sparsefd}).
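The consistency requirement on the topological ordering can be sketched as a small check: given a candidate index ordering and the tensor paths, every path's indices must appear in the same relative order. This illustrative Python sketch (function name ours) captures the idea:

```python
def consistent_with_paths(order, paths):
    """Check that a candidate index ordering respects every tensor's access
    pattern: a path (a, b, ...) requires a to appear before b in `order`."""
    pos = {idx: n for n, idx in enumerate(order)}
    return all(
        all(pos[a] < pos[b] for a, b in zip(p, p[1:]))
        for p in paths
    )

# Paths for A(i,l) = B(i,j) * C(i,k) * D(j,k) * E(j,l).
paths = [("i", "l"), ("i", "j"), ("i", "k"), ("j", "k"), ("j", "l")]
```

The ordering $i \rightarrow j \rightarrow k \rightarrow l$ passes this check, while any ordering that visits $j$ before $i$ fails because of $B$'s access pattern.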
\paragraph{Distribution into two kernels.}
The description of the tensor kernel above captures all the information needed to perform
the kernel executions
SDDMM: ${\sparse T'(i,j)} = {\sparse B(i,j)} * C(i,k) * D(j,k) $ and
SpMM: $A(i,l) = {\sparse T'(i,j)} * E(j,l) $ sequentially (note that
separating the kernels requires a temporary matrix $T'$).
Therefore, we can recover two smaller kernels that yield the same
result as the larger tensor expression.
We denote the first kernel as the producer and the second kernel as the consumer.
To find these separate smaller kernels, we need to remove the last
tensor $E(j,l)$ from the original expression.
Line 8 of Algorithm~\ref{algorithm:sparsefd} creates the producer index expression and iteration graph
for the tensor computation performed first (SDDMM in our running example) by removing the
last tensor from the original expression,
and line 9 creates the consumer index expression and
iteration graph for the tensor computation performed second (SpMM in our
running example) by adding that tensor back to the producer's expression.
These two separate kernels have the iteration graphs shown in Figures~\ref{fig:sddmm-spmm-removed}
and~\ref{fig:sddmm-spmm-add-back}, respectively.
We perform this recovery of the two separate operations in order to identify the fusible
and shared indices between them, as we explain in the next paragraph.
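The distribution step can be illustrated with a short sketch: peel the last tensor off the multiplication chain, and take as the temporary's indices those shared between the producer tensors and the consumer side (last tensor plus output). This is an illustrative Python mock-up of the idea, not the algorithm's actual implementation:

```python
def distribute(expr_tensors, output):
    """Split a multiplication chain into a producer (everything but the last
    tensor) and a consumer (temporary * last tensor).  Each tensor is a
    (name, index-tuple) pair.  In this sketch the temporary's indices are
    those shared between the producer tensors and the consumer side."""
    *producer, last = expr_tensors
    cons_idx = set(last[1]) | set(output[1])
    temp_indices = []
    for _, idxs in producer:
        for i in idxs:
            if i in cons_idx and i not in temp_indices:
                temp_indices.append(i)
    temp = ("T'", tuple(temp_indices))
    producer_kernel = (temp, producer)        # T'(...) = product of remaining tensors
    consumer_kernel = (output, [temp, last])  # A(...)  = T'(...) * last tensor
    return producer_kernel, consumer_kernel

# A(i,l) = B(i,j) * C(i,k) * D(j,k) * E(j,l): peel off E(j,l).
expr = [("B", ("i", "j")), ("C", ("i", "k")), ("D", ("j", "k")), ("E", ("j", "l"))]
prod_kernel, cons_kernel = distribute(expr, ("A", ("i", "l")))
```

For the running example this recovers the SDDMM producer $T'(i,j)$ and the SpMM consumer $A(i,l) = T'(i,j) * E(j,l)$.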
\paragraph{Fusing common loops.}
Once we have the iteration graphs for the separate kernels, we reason about them together (see Figure~\ref{fig:sddmm-spmm-combined}).
Both sparse iterations need to traverse the space using index variables $i$ and $j$.
The iteration space defined by index $k$ is traversed only by the SDDMM operation, and the iteration
space defined by index $l$ only by the SpMM operation,
but the iterations over indices $k$ and $l$ need to happen one after the other.
The producer-consumer dependence must be satisfied:
the values consumed by the consumer must have been produced by the producer before their use.
The values shared between the producer and consumer can be stored in an intermediate scratch memory.
Furthermore, comparing the two graphs, the producer
graph and the consumer graph, identifies the indices that can and cannot be shared among the iterations.
The producer graph in Figure~\ref{fig:sddmm-spmm-removed} and the consumer
graph in Figure~\ref{fig:sddmm-spmm-add-back}
have a common prefix of indices in their iteration graphs. We run a prefix match to identify the indices
shared by the two kernels (lines~\ref{lst:line:common_prefix_start}--\ref{lst:line:sharable_end} of Algorithm~\ref{algorithm:sparsefd}),
as shown in Figure~\ref{fig:sddmm-spmm-combined}: both $i$ and $j$ are shared, and
the other variables are not.
The addition of indices $i$ and $j$ to the set of sharable indices
is described in lines~\ref{lst:line:sharable_start}--\ref{lst:line:sharable_end} of Algorithm~\ref{algorithm:sparsefd}.
We identify this point as a \emph{nest boundary} in the iteration graph in Figure~\ref{fig:sddmm-spmm-combined}, and denote the indices above the \emph{nest boundary} as fusible.
The final output of executing the {\em fusion after distribution} algorithm is a branched iteration graph.
Therefore, if the algorithm is applied recursively (see the kernel <SDDMM, SpMM, GEMM> in the benchmark Section~\ref{benchmarks}) on the producer (lines~\ref{lst:line:recursive_start}--\ref{lst:line:recursive_end} of Algorithm~\ref{algorithm:sparsefd}), it can still match the prefix even if the producer graph is already branched.
\paragraph{Materializing temporary variables.}
The next step of the algorithm is to identify the indices that cannot be fused as outermost
loops but are common to the
producer and the consumer.
The variables that are below the \emph{nest boundary} line and common to both the producer and consumer define the dimensions of the temporary variable that is shared between them.
For the case of <SDDMM, SpMM> described in Figure~\ref{fig:sddmm-spmm-combined}, no indices are
common below the \emph{nest boundary} line, so we can define the temporary as a scalar.
However, for the variant of <SDDMM, SpMM> described in
Section~\ref{support_existing_transformations}, where the transpose of $D$ is used in the computation, index $j$ is a common index below the \emph{nest boundary} line.
Therefore, the algorithm
defines a temporary vector bounded by the size of index $j$.
Lines~\ref{lst:line:fusible_start}--\ref{lst:line:fusible_end} of Algorithm~\ref{algorithm:sparsefd} describe how we identify the common indices below the \emph{nest boundary}, and line~\ref{lst:line:temp_define} defines this temporary variable.
\paragraph{Rewrite the iteration graph.}
After we find the fusible indices and shared indices and define the temporary variable, we define the producer and consumer expressions using the temporary variable that is shared between them (lines~\ref{lst:line:producer_with_temporary} and~\ref{lst:line:consumer_with_temporary} of Algorithm~\ref{algorithm:sparsefd}).
Then we rewrite the iteration graph to
model this behavior with the temporary variable, the producer, and the consumer (see Figure~\ref{fig:sddmm-spmm-combined}), which eventually generates the code shown in Figure~\ref{fig:SparFF-fused-kernel} for our running example.
\subsection{Scheduling} \label{scheduling}
In this section we describe (1) the invocation of the scheduling transformation and (2) its impact on the space of possible schedules.
\subsubsection{Scheduling Directive} \label{support_existing_transformations}\label{subsec:directives}
SparseLNR\xspace introduces a new scheduling
directive to TACO.
The user can call the \code{loopfuse} scheduling transformation as shown in Figure~\ref{fig:scheduling-primitive} with other scheduling directives.
Here, \textbf{1} refers to applying the algorithm once;
passing \textbf{2} or a higher number applies the algorithm recursively.
Sometimes it is necessary to combine \code{loopfuse} with other TACO scheduling directives, so it is important that our new directive composes with the existing scheduling language.
For example,
applying Algorithm~\ref{algorithm:sparsefd} to the tensor expression
$A(i,l) = {\sparse B(i,j)}*C(i,k)*D(k,j)*E(j,l)$ would not yield the code in Figure~\ref{fig:SparFF-fused-kernel}
by default, because the access pattern of the $D$ matrix is different: this example uses
the transpose of $D$. This difference results
in a different iteration graph, shown in Figure~\ref{fig:sddmm-spmm-reorder-original},
because the iteration
graph now needs to preserve the orderings $i \rightarrow j$ for B, $i \rightarrow k$ for C,
$k \rightarrow j$ for D, $j \rightarrow l$ for E, and $i \rightarrow l$ for A, with a hard ordering
of $i \rightarrow j$ because B is a sparse matrix.
Applying the \emph{fusion after distribution} algorithm would result in an iteration graph as depicted in
Figure~\ref{fig:sddmm-spmm-reorder-branched}.
However, since D is dense, there is no
hard constraint on the ordering of indices $k$ and $j$. Therefore, to arrive at the code in Figure~\ref{fig:SparFF-fused-kernel}, a loop reordering can be performed before the \code{loopfuse}
scheduling directive (see Figure~\ref{fig:scheduling-primitive}).
\begin{figure}[!t]
\centering
\begin{minipage}[b]{.33\columnwidth}
\begin{subfigure}{\linewidth}
\vspace{-5em}
\includegraphics[scale=0.40]{./images/example_iteration_graph_reorder/sddmm-spmm-reorder-original.pdf}
\caption{Original kernel}
\label{fig:sddmm-spmm-reorder-original}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{.50\columnwidth}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.9\linewidth]{./images/pngs/scheduling_primitive.pdf}
\caption{Scheduling directives}
\label{fig:scheduling-primitive}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.9\linewidth]{./images/example_iteration_graph_reorder/sddmm-spmm-reorder-branched-2.pdf}
\caption{Transformed kernel}
\label{fig:sddmm-spmm-reorder-branched}
\end{subfigure}
\end{minipage}
\vspace{-1.0em}
\caption{The \code{loopfuse} transformation performed on $A_{il} = \sum\nolimits_{jk} B_{ij} C_{ik} D_{kj} E_{jl}$.}%
\label{fig:sddmm-spmm-reorder}
\vspace{-1.5em}
\end{figure}
The \emph{nest boundary} at the branching point in the iteration graph constrains loop reordering across it.
However, loop reordering is still allowed among the indices within each \emph{nest boundary}.
\subsubsection{Scheduling Space}
In our current implementation, if two kernels have $n$ common outer loops, we fuse them all.
However, if loop reordering is possible and we choose only certain loops to fuse, then there are up to $2^n$ possible schedules to start with, given $n$ fusible loops (i.e., $n$ common iterators in the two kernels).
This is an upper bound without considering any constraints of sparse access patterns.
Reordering of inner loops can be performed after fusion, and other scheduling directives (split, parallelization, etc.) can be applied separately, giving more scheduling opportunities with imperfect nesting. This scheduling space is very large, so smart strategies for searching it are a promising avenue for future work.
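The $2^n$ upper bound can be sketched by enumerating all subsets of the common loops that could, in principle, be chosen for fusion (ignoring sparse access-pattern constraints). This is an illustrative Python sketch, not part of SparseLNR:

```python
def fusion_choices(common_loops):
    """Enumerate all 2^n subsets of the n common loops as candidate fusion
    choices.  Bit b of the mask selects common_loops[b]; the empty subset
    corresponds to no fusion and the full set to fusing every common loop."""
    n = len(common_loops)
    return [
        [common_loops[b] for b in range(n) if (mask >> b) & 1]
        for mask in range(2 ** n)
    ]
```

For the running example with common loops $i$ and $j$, this yields four candidates: fuse nothing, fuse only $i$, fuse only $j$, or fuse both.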
\subsection{Code Generation} \label{code_generation}\label{subsec:codegen}
We carefully redesigned the intermediate representation (IR) in TACO to support the branched iteration graph and manage temporaries such that the code generation backend does not require any changes.
We rewrite the graph loop structure with \code{where} statements
defining a producer-consumer relationship.
This placement of temporaries for the producer-consumer relationship and the change of the iteration graph explained in Section~\ref{algorithm}
preserve all the attributes that are necessary for the TACO
code generation backend.
In TACO, each index in the iteration graph is converted to one or more loops that iterate through dense dimensions or co-iterate over the levels of sparse data formats.
An iteration lattice~\cite{kjolstad:2017:taco} is used to co-iterate through the intersections of the sparse dimensions, which results in a single for-loop, a single while-loop, or multiple while-loops.
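As an illustration of the single while-loop case, co-iterating two sorted sparse coordinate lists through their intersection can be sketched as follows (illustrative Python, not TACO's generated C):

```python
def co_iterate_intersection(u_idx, u_val, v_idx, v_val):
    """Single while-loop co-iteration over the intersection of two sorted
    sparse coordinate lists -- the kind of loop an iteration lattice lowers
    a sparse-sparse elementwise multiply to.  Returns (index, product) pairs
    for coordinates present in both operands."""
    out = []
    pu = pv = 0
    while pu < len(u_idx) and pv < len(v_idx):
        if u_idx[pu] == v_idx[pv]:          # coordinate in both: emit product
            out.append((u_idx[pu], u_val[pu] * v_val[pv]))
            pu += 1
            pv += 1
        elif u_idx[pu] < v_idx[pv]:         # advance whichever lags behind
            pu += 1
        else:
            pv += 1
    return out
```

Dense dimensions, by contrast, lower to plain for-loops, and unions of sparse dimensions require the multi-while-loop cases of the lattice.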
\section{Implementation} \label{implementation}
We implement the branched iteration graph transformation described in Section~\ref{method} on top of the TACO~\cite{kjolstad:2017:taco} intermediate representation (IR).
Furthermore, we introduce a new scheduling directive to separate the transformation from the algorithmic language and to give the scheduling language more opportunities to generate performant schedules.
We change the iteration graph~\cite{kjolstad:2017:taco} and use
the concrete index notation~\cite{kjolstad:2018:workspaces} to introduce intermediate temporaries that are shared between the producer and the consumer.
We implement a \emph{nest boundary} between the fused loops and the shared index loops to constrain loop reordering transformations between them.
In our running example, the user cannot interchange inner loops with outer-level loops once the distribution operation is performed.
This new transformation applies in the context of tensor multiplication;
it does not generalize to tensor expressions with tensor additions.
To identify the producer and consumer graphs, we limit the number of tensors and index variables removed from the index expression per iteration to one.
We believe the algorithm could be generalized to support fusion of indices shared between multiple tensors, which would support higher-order tensors and complex tensor contractions.
\section{Evaluation} \label{evaluation}
We compare SparseLNR\xspace to two other techniques:
\noindent
\textbf{TACO Original.}
Given a large combined index expression containing multiple smaller index expressions, the code generated by TACO has a perfectly nested loop structure with at least one loop per index variable in the index expression.
We refer to this version as {\em TACO Original}.
\noindent
\textbf{TACO Separate.}
In some cases, the asymptotic complexity of {\em TACO Original} can be reduced
by manually separating a larger index expression
into multiple smaller index expressions by using temporary
tensors to store the intermediate results.
We refer to this version as {\em TACO Separate}.
When there are multiple ways to break down
the computation into smaller kernels, we evaluate all those combinations and report the best execution time.
\begin{figure*}[ht]
\centering
\begin{subfigure}[t]{0.43\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/sddmm-spmm-1.pdf}
\vspace{-4pt}
\caption{Single-threaded <SDDMM, SpMM>}
\vspace{-10pt}
\label{fig:sddmm-spmm-single-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\hspace{2em}%
\begin{subfigure}[t]{0.43\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/sddmm-spmm-64.pdf}
\vspace{-4pt}
\caption{Multi-threaded <SDDMM, SPMM>}
\vspace{-10pt}
\label{fig:sddmm-spmm-multi-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\vskip\baselineskip
\begin{subfigure}[t]{0.43\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/spmm-gemm-1.pdf}
\vspace{-4pt}
\caption{Single-threaded <SpMM, GeMM>}
\vspace{-10pt}
\label{fig:spmm-gemm-single-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\hspace{2em}%
\begin{subfigure}[t]{0.43\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/spmm-gemm-64.pdf}
\vspace{-4pt}
\caption{Multi-threaded <SpMM, GeMM>}
\vspace{-10pt}
\label{fig:spmm-gemm-multi-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\vskip\baselineskip
\begin{subfigure}[t]{0.43\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/spgemmh-gemm-1.pdf}
\vspace{-4pt}
\caption{Single-threaded <SpMMH, GeMM>}
\vspace{-10pt}
\label{fig:spgemmh-gemm-single-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\hspace{2em}%
\begin{subfigure}[t]{0.43\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/spgemmh-gemm-64.pdf}
\vspace{-4pt}
\caption{Multi-threaded <SpMMH, GeMM>}
\vspace{-10pt}
\label{fig:spgemmh-gemm-multi-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\vskip\baselineskip
\begin{subfigure}[t]{0.43\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/sddmm-spmm-gemm-1.pdf}
\vspace{-4pt}
\caption{Single-threaded <SDDMM, SpMM, GeMM>}
\vspace{-10pt}
\label{fig:sddmm-spmm-gemm-single-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\hspace{2em}%
\begin{subfigure}[t]{0.43\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/sddmm-spmm-gemm-64.pdf}
\vspace{-4pt}
\caption{Multi-threaded <SDDMM, SpMM, GeMM>}
\vspace{-10pt}
\label{fig:sddmm-spmm-gemm-multi-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\caption{Performance Comparison with TACO for benchmarks with 2-D matrices.}
\label{fig:evaluation-of-matrices}
\vspace{-1.5em}
\end{figure*}
\begin{figure*}[ht]
\vspace{-1.0em}
\centering
\begin{subfigure}[t]{0.40\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/mttkrp-gemm-1.pdf}
\vspace{-4pt}
\caption{Single-threaded <MTTKRP, GeMM>}
\vspace{-10pt}
\label{fig:mttkrp-gemm-single-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\hspace{2em}%
\begin{subfigure}[t]{0.40\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/mttkrp-gemm-64.pdf}
\vspace{-4pt}
\caption{Multi-threaded <MTTKRP, GEMM>}
\vspace{-10pt}
\label{fig:mttkrp-gemm-multi-thread}
\Description[MTTKRP, GEMM multi-threads performance graph]{<long description>}
\end{subfigure}%
\vskip\baselineskip
\begin{subfigure}[t]{0.40\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/ttm-ttm-1.pdf}
\vspace{-4pt}
\caption{Single-threaded <SpTTM, SpTTM>}
\vspace{-10pt}
\label{fig:ttm-ttm-single-thread}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\hspace{2em}%
\begin{subfigure}[t]{0.40\textwidth}
\centering%
\includegraphics[width=0.96\linewidth]{./plots/eval_plots/ttm-ttm-64.pdf}
\vspace{-4pt}
\caption{Multi-threaded <SpTTM, SpTTM>}
\vspace{-10pt}
\label{fig:ttm-ttm-multi-thread}
\Description[SpTTM, SpTTM multi-threads performance graph]{<long description>}
\end{subfigure}%
\caption{Performance Comparison with TACO for benchmarks with 3-D tensors.}
\label{fig:evaluation-of-matrices2}
\vspace{-1.0em}
\end{figure*}
\subsection{Experimental Setup} \label{setup}
All experiments run on a single-socket 64-core AMD Ryzen Threadripper 3990X at 2.2 GHz, with 32KB L1 data cache, 512KB shared L2 cache, and 16MB shared L3 cache.
We compile the code using GCC 7.5.0 with \code{-O3 -ffast-math}. We use \code{-fopenmp} for parallel versions with {\em OpenMP version 4.5}.
All parallel versions use 64 threads, which is the number of physical cores available in the machine.
\begin{table}[!ht]
\begin{center}
\caption{Test tensors used in the evaluation, drawn from the matrix and tensor collections listed in Section~\ref{setup}.}
\label{tab:datasets}
\vspace{-1em}
\begin{adjustbox}{minipage=\linewidth,scale=0.8}
\begin{tabular}{l|r|r}
\textbf{Tensor} & \textbf{Dimensions} & \textbf{Non-zeros}\\
\hline%
cora & { $2.7K \times 2.7K$} & {$5.4K$}\\
bcsstk17 & { $11K \times 11K$} & {$429K$}\\
pdb1HYS & { $36K \times 36K$} & {$4.34M$}\\
rma10 & { $47K \times 47K$} & {$2.37M$}\\
cant & { $62K \times 62K$} & {$4.01M$}\\
consph & { $83K \times 83K$} & {$6.01M$}\\
cop20k\_A & { $12K \times 12K$} & {$2.62M$}\\
shipsec1 & { $140K \times 140K$} & {$7.81M$}\\
scircuit & { $171K \times 171K$} & {$959K$}\\
mac\_econ\_fwd500 & { $207K \times 207K$} & {$1.27M$}\\
amazon & { $334K \times 334K$} & {$1.85M$}\\
webbase-1M & { $1.00M \times 1.00M$} & {$3.11M$}\\
circuit5M & { $5.56M \times 5.56M$} & {$59.52M$}\\
\hline%
flickr-3d & { $320K \times 2.82M \times 1.60M$} & { $112.89M$}\\
nell-2 & { $12K \times 9K \times 288K$} & { $76.88M$}\\
nell-1 & { $2.9M \times 2.14M \times 25.5M$} & { $143.6M$}\\
vast-2015-mc1-3d & { $165K \times 11K \times 2$} & { $26.02M$}\\
darpa1998 & { $22K \times 22K \times 23.7M$} & { $28.42M$}\\
\end{tabular}
\end{adjustbox}
\end{center}
\vspace{-1.5em}
\end{table}
\noindent
\textbf{Datasets.} \label{datasets} We use sparse tensors from four sources:
SuiteSparse~\cite{suitesparse};
Network Repository~\cite{nr-sigkdd16};
Formidable Repository of Open Sparse Tensors and Tools (FROSTT)~\cite{frosttdataset};
and the 1998 DARPA Intrusion Detection Evaluation~\cite{freebee}.
Dense tensors in kernels are randomly generated. Table~\ref{tab:datasets} gives the details of the sparse tensors.
\subsection{Benchmarks} \label{benchmarks}
\noindent
\textbf{<SDDMM, SpMM>.} An SDDMM computation followed by an SpMM operation, $A_{il} = \sum\nolimits_{jk} {\sparse B_{ij}} \cdot C_{ik} \cdot D_{jk} \cdot E_{jl}$.
This operation is used in graph neural networks~\cite{fusedMM}. We set the inner dimensions $k$ and $l$ to <64, 64>.
Fusion of SDDMM with SpMM results in a scalar intermediate to share the results between the fused loops as shown in Figure~\ref{fig:SparFF-fused-kernel}.
\noindent
\textbf{<SpMMH, GEMM>.} SpMMH here is pre-multiplying the Hadamard product of two dense matrices by a sparse matrix.
The combined kernel we evaluate is $ A_{il} = \sum\nolimits_{jk} {\sparse B_{ik}} \cdot C_{kj} \cdot D_{kj} \cdot E_{jl} $ with inner dimensions $j$ and $l$ set to <128, 128>.
\noindent
\textbf{<SpMM, GEMM>.} \label{spmm-gemm-benchmark} An SpMM kernel followed by a GEMM kernel, $ A_{il} = \sum\nolimits_{jk} {\sparse B_{ij}} \cdot C_{jk} \cdot D_{kl} $.
The inner dimensions $k$ and $l$ are set to <128, 64>.
This calculation is commonly used for updating the hidden state in GNNs~\cite{gnn}.
\noindent
\textbf{<SDDMM, SpMM, GEMM>.} We combine two of the prior kernels to show the recursive applicability of the algorithm.
$ A_{im} = \sum\nolimits_{jkl} {\sparse B_{ij}} \cdot C_{ik} \cdot D_{jk} \cdot F_{jl} \cdot W_{lm} $.
This computation corresponds to performing an SDDMM operation to obtain the attention values along the edges of a graph, multiplying the graph's feature matrix with a weight matrix to obtain a new feature set, and then performing a neighbor sum over the graph.
The inner dimensions $k$, $l$, and $m$ are set to <64,64,64>.
\noindent
\textbf{<MTTKRP, GEMM>.} A matricized tensor times Khatri-Rao product (MTTKRP) followed by a GEMM operation, $ A_{im} = \sum\nolimits_{jkl} {\sparse B_{ikl}} \cdot C_{lj} \cdot D_{kj} \cdot E_{jm} $.
The MTTKRP kernel is used in various sparse computations in fields such as signal processing and computer vision~\cite{mttkrp2018}.
We set the inner dimensions $j$ and $m$ to <32, 64>.
\noindent
\textbf{<SpTTM, SpTTM>.} A sparse tensor times matrix (SpTTM) operation followed by another SpTTM operation, $ {\sparse A_{ijm}} = \sum\nolimits_{kl} {\sparse B_{ijk}} \cdot C_{kl} \cdot D_{lm} $.
SpTTM is a computational kernel used in data analytics and data mining applications such as the popular Tucker decomposition~\cite{MA201999}.
The inner dimensions $l$ and $m$ are set to <32,64>.
\noindent
\textbf{Sparse Formats.}
SpMM, SDDMM kernels use standard compressed sparse row (CSR) format for their sparse matrices whereas SpTTM, MTTKRP kernels use compressed sparse fiber (CSF) format.
For the SDDMM, SpMM, MTTKRP kernels in {\em TACO separate} we use the versions provided in Senanayake \textit{et al.}~\cite{senanayake:2020:scheduling}.
For the rest of kernels we evaluate multiple schedules and select the best performing one.
TACO does not generate multi-threaded code when the output tensor is sparse. Prior work has evaluated against single-threaded code in such situations~\cite{kjolstad:2018:workspaces,compiler_in_mlir}. Following the strategy of Senanayake \textit{et al.}~\cite{senanayake:2020:scheduling}, we modified the TACO-generated code manually to add multithreading.
In general, the speedups over TACO original come from the reduction in asymptotic complexity, while the speedups over TACO separate come from reduced cache reuse distances, achieved by eliminating the large tensors used to store intermediate results.
For multi-threaded execution of the <SDDMM, SpMM> kernel, we see speedups of 0.90--1.23x compared to TACO separate and 3.31--16.05x compared to TACO original.
For single-threaded execution, we get 0.91--1.50x compared to TACO separate and 10.75--33.39x compared to TACO original.
In multi-threaded execution, the fused kernel outperforms separate execution on circuit5M, shipsec1, consph, pdb1HYS, and cant, the matrices with the most non-zeros among the tested datasets (Table~\ref{tab:datasets}).
We observe speedups of 1.60--1.99x against TACO separate and 1.29--50.55x against TACO original in single-threaded execution for the <SpMMH, GEMM> kernel.
For the same kernel in multi-threaded execution, we observe speedups of 1.28--2.40x against TACO separate and 1.66--36.50x against TACO original.
For the <SpMM, GEMM> kernel in single-threaded execution, we observe speedups of 1.23--3.27x and 6.91--79.86x over TACO separate and TACO original, respectively.
For the same kernel in multi-threaded execution, we observe speedups of 0.86--3.16x and 2.44--139x over TACO separate and TACO original, respectively.
We see substantial speedups in <SDDMM, SpMM, GEMM> due to the kernel presenting two opportunities for fusion: <SDDMM, SpMM> and <SpMM, GEMM>.
We see speedups ranging from 1.20--2.26x for single-threaded execution against TACO separate, 93--1997x over single-threaded TACO original, 1.06--2.09x for multithreaded TACO separate and 19--1263x for multithreaded TACO original.
We see that our approach under-performs against TACO separate on the datasets nell-2 and darpa1998 for the <MTTKRP, GEMM> benchmark.
In these datasets, the first dimensions of the tensors are bounded by 12092 and 22476, so the intermediate matrix in the TACO separate execution fits within the last-level cache.
Furthermore, executing kernels separately sometimes offers more opportunities to optimize the smaller kernels individually.
For these reasons, there may be datasets on which separate kernel execution outperforms the fused version, but we see considerable speedups over TACO separate when the intermediate tensors are large.
\begin{figure}[!t]
\centering
\unskip\ \vrule height -1ex\ %
\begin{subfigure}[t]{0.22\linewidth}
\centering
\includegraphics[width=1\linewidth]{./images/ttmloopstructure/fused.pdf}
\vspace{-15pt}
\caption{\ }
\vspace{-10pt}
\label{fig:fusedttm}
\Description[Branched iteration graph for the SDDMM, SpMM operation]{<long description>}
\end{subfigure}%
\unskip\ \vrule height -1ex\ %
\begin{subfigure}[t]{0.22\linewidth}
\centering
\includegraphics[width=1\linewidth]{./images/ttmloopstructure/original.pdf}
\vspace{-15pt}
\caption{\ }
\vspace{-10pt}
\label{fig:originalttm}
\Description[Branched iteration graph for the SDDMM, SpMM operation]{<long description>}
\end{subfigure}%
\unskip\ \vrule height -1ex\ %
\begin{subfigure}[t]{0.22\linewidth}
\centering
\includegraphics[width=1\linewidth]{./images/ttmloopstructure/separate1.pdf}
\vspace{-15pt}
\caption{\ }
\vspace{-10pt}
\label{fig:separate1ttm}
\Description[Branched iteration graph for the SDDMM, SpMM operation]{<long description>}
\end{subfigure}%
\unskip\ \vrule height -1ex\ %
\begin{subfigure}[t]{0.22\linewidth}
\centering
\includegraphics[width=1\linewidth]{./images/ttmloopstructure/separate2.pdf}
\vspace{-15pt}
\caption{\ }
\vspace{-10pt}
\label{fig:separate2ttm}
\Description[Branched iteration graph for the SDDMM, SpMM operation]{<long description>}
\end{subfigure}%
\unskip\ \vrule height -1ex\ %
\caption{Basic loop structure of different schedules for <SpTTM, SpTTM> kernel.}
\label{fig:ttmloopstructure}
\vspace{-2.0em}
\end{figure}
For <SpTTM, SpTTM>, there are two different schedules due to different associativity choices. We see varying results based on which choice is made, so we report both of them as {\em separate1} and {\em separate2} (see Figures~\ref{fig:ttm-ttm-single-thread}
and~\ref{fig:ttm-ttm-multi-thread}).
Figure~\ref{fig:ttmloopstructure} shows the basic loop structure for different versions of <SpTTM, SpTTM>.
The fused version has asymptotic complexity of $O(nnz(B_{IJK})L+nnz(B_{IJ})LM)$. The TACO original version $ {\sparse A_{ijm}} = \sum\nolimits {\sparse B_{ijk}} \cdot C_{kl} \cdot D_{lm} $
has asymptotic complexity of $O(nnz(B_{IJK})LM)$. But separate1
($ {\sparse T_{ijl}} = \sum\nolimits {\sparse B_{ijk}} \cdot C_{kl} $ followed by
$ {\sparse A_{ijm}} = \sum\nolimits {\sparse T_{ijl}} \cdot D_{lm} $) and
separate2 ($ T_{km} = \sum\nolimits C_{kl} \cdot D_{lm} $ followed by
$ {\sparse A_{ijm}} = \sum\nolimits {\sparse B_{ijk}} \cdot T_{km} $) versions have complexities of
$O(nnz(B_{IJK})L+nnz(B_{IJL})M)$ and
$O(KLM+nnz(B_{IJK})M)$, respectively.
When $k$ is small (for instance, dimension $k$ of the dataset {\em vast2015-mc1-3d} in Table~\ref{tab:datasets} is bounded by 2), the asymptotic complexity of our approach is comparable to TACO's baseline approach, and the size of the working set is small.
Therefore, the TACO separate approach incurs few cache misses, and the re-associated schedule hence has the best performance.
In the other datasets, where $k$ is large (for instance, the dimension $k$ of the dataset {\em darpa1998} in Table~\ref{tab:datasets} is bounded by 28.42M), these effects vanish,
and our approach is considerably faster than any competing version. We note that SparseLNR\xspace's representation could support re-association scheduling directives that would allow it to use the better schedules of TACO separate, but we leave that for future work.
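The trade-off between these complexity expressions can be compared numerically. The sketch below encodes the four formulas directly; the nnz and dimension values are hypothetical stand-ins for the two regimes discussed above, not measured quantities:

```python
def sp_ttm_ttm_costs(nnz_ijk, nnz_ij, K, L, M):
    """Asymptotic operation counts for the four <SpTTM, SpTTM> schedules.
    nnz_ijk: nonzeros of B; nnz_ij: nonzero (i,j) fibers of B. After the
    first TTM, the (i,j,l) intermediate is dense along l, so we take
    nnz(B_IJL) = nnz_ij * L (an assumption of this sketch)."""
    nnz_ijl = nnz_ij * L
    return {
        "fused":     nnz_ijk * L + nnz_ij * L * M,
        "original":  nnz_ijk * L * M,
        "separate1": nnz_ijk * L + nnz_ijl * M,   # same asymptotics as fused
        "separate2": K * L * M + nnz_ijk * M,
    }

# Small-k regime (e.g. vast2015-mc1-3d, where k is bounded by 2):
# the re-associated separate2 schedule has the lowest count.
small_k = sp_ttm_ttm_costs(nnz_ijk=2_000_000, nnz_ij=1_000_000,
                           K=2, L=32, M=64)
# Large-k regime (e.g. darpa1998, where k is bounded by 28.42M):
# the dense K*L*M term dominates separate2, and fused wins.
large_k = sp_ttm_ttm_costs(nnz_ijk=2_000_000, nnz_ij=100_000,
                           K=28_420_000, L=32, M=64)
```

Note that fused and separate1 coincide asymptotically; the difference between them in practice is the storage of the large intermediate, not the operation count.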
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{0.80\columnwidth}
\centering
\includegraphics[width=1\linewidth]{./plots/scaling_plots/circuit5M-speedup-threading.pdf}
\vspace{-15pt}
\caption{Circuit5M Scaling}
\vspace{-15pt}
\label{fig:circuit5M-speedup}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\vskip\baselineskip
\begin{subfigure}[t]{0.80\columnwidth}
\centering
\includegraphics[width=1\linewidth]{./plots/scaling_plots/webbase-speedup-threading.pdf}
\vspace{-15pt}
\caption{Webbase Scaling}
\vspace{-12pt}
\label{fig:webbase-speedup}
\Description[sddmm-spmm original iteration graph]{<long description>}
\end{subfigure}%
\caption{Speedup change with respect to the number of threads for <SDDMM, SpMM> benchmark.}
\label{fig:threading-speedup}
\vspace{-1.9em}
\end{figure}
The scaling results for webbase and circuit5M datasets, on <SDDMM, SpMM> benchmark are shown in Figure~\ref{fig:threading-speedup}; SparseLNR\xspace delivers comparable scaling to the best alternative approach.
We observed similar scaling for other benchmarks and datasets.
\subsection{Case Study: Performance with different inner dimensions} \label{casestudy1}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{0.45\columnwidth}
\includegraphics[width=\linewidth]{./plots/heatmaps/heatmap_with_taco_separate.pdf}
\caption{TACO-Sep. vs. SparseLNR\xspace}
\vspace{-10pt}
\label{fig:heatmap-with-taco-separate}
\end{subfigure}%
\hspace{1em}%
\begin{subfigure}[t]{0.45\columnwidth}
\includegraphics[width=\linewidth]{./plots/heatmaps/heatmap_with_taco_original.pdf}
\caption{TACO-Orig. vs. SparseLNR\xspace}
\vspace{-10pt}
\label{fig:heatmap-with-taco-original}
\end{subfigure}%
\caption{Speedup variation with respect to inner dimension sizes ($k$ and $l$) on the benchmark <SpMM,GEMM>. Figures (a) and (b) correspond to multi-threaded execution of SparseLNR\xspace against TACO-Separate and TACO-Original, respectively.}
\label{fig:heatmap-speedup}
\vspace{-1.5em}
\end{figure}
We chose the inner dimensions of the benchmarks explained in Section~\ref{benchmarks} arbitrarily.
In this case study, we consider the change in performance with respect to varying inner dimension sizes ($k$ and $l$) in the benchmark <SpMM, GEMM> ($ A_{il} = \sum\nolimits {\sparse B_{ij}} \cdot C_{jk} \cdot D_{kl} $) of Section~\ref{spmm-gemm-benchmark}.
Here, the dimensions $i$ and $j$ are determined by the size of the graph read into the sparse matrix $\sparse B_{ij}$ and cannot be changed arbitrarily.
However, the dimensions $k$ and $l$ can vary in size since the matrices $C_{jk}$ and $D_{kl}$ are dense.
Usually, in the GNN literature, these dimensions correspond to feature sizes in hidden layers.
Performing \emph{loop fusion after distribution} in the benchmark <SpMM, GEMM> produces a temporary vector whose length is the size of dimension $k$, while the $k$ and $l$ dimensions together define the size of the matrix $D_{kl}$.
When the size of the dimension $l$ is small, $D_{kl}$ can completely fit in higher level caches for the $k$ values considered.
Therefore, for smaller $l$ values, the speedup increases with the size of $k$ (see Figure~\ref{fig:heatmap-speedup}): with increasing $k$, the temporary tensor in TACO-Separate grows, which degrades its performance.
We see this behavior for $l$ dimension sizes of 16--128 in Figure~\ref{fig:heatmap-with-taco-separate}.
However, as $k$ increases and the $l$ dimension also grows, both $D_{kl}$ and the temporary vector become larger, and they keep getting evicted from the higher-level caches in SparseLNR\xspace.
Therefore, as shown in Figure~\ref{fig:heatmap-with-taco-separate}, the peak performance in columns 256 and 512 of dimension $l$ occurs not at $k = 512$, but when $k$ takes values in the range 64--256.
Increasing the sizes of $k$ and $l$ results in higher time complexity for TACO-Original; hence, the speedup of SparseLNR\xspace increases with the sizes of $k$ and $l$ in Figure~\ref{fig:heatmap-with-taco-original}.
\section{Related Work} \label{related_works}
Code generation for tensor algebra has been extensively researched.
This area of research can be primarily divided into two subareas --- sparse and dense.
First, we discuss the related work on code generation and optimization techniques for sparse tensor algebra and then move to the ones for dense.
\subsection{Sparse Tensor Algebra}
Automated sparse code generation~\cite{aartbik:93,compiler_in_mlir,kjolstad:2017:taco,aartbik93_data_structure_selection} is a heavily-researched topic.
Even though these methods are highly effective, they lack fine-grained optimizations like ours that apply across kernels to reduce the time complexity of the computation.
Ahrens \textit{et al.}~\cite{peterahrens} proposed splitting large tensor expressions into smaller kernels to minimize the time-complexity.
The Sparse Polyhedral Framework~\cite{LaMielle2010EnablingCG,sparse_polyhedral_framework,sparse_polyhedral_framework2} employs an inspector-executor strategy to transform the data layout and schedule of sparse computations to achieve locality and parallelism.
Kurt \textit{et al.}~\cite{tiled_sparse} improved SpMM and SDDMM kernels by optimizing their tile sizes using a sparsity signature.
However, these methods do not consider loop nest restructuring transformations to improve data locality across kernels.
Athena~\cite{athena}, Sparta~\cite{sparta}, and HiParTi~\cite{hiparti} provide highly optimized kernels for sparse tensor operations and contraction sequences, showing significant performance improvements.
Kernel fusion has been used in FusedMM~\cite{fusedMM} to accelerate SDDMM and SpMM used in graph neural network applications. Their transformation is structurally analogous to SparseLNR\xspace, but is specific to graph embeddings, and further performs kernel-specific optimizations.
None of these prior techniques handle arbitrary sparse tensor expressions supporting a variety of input formats.
\subsection{Dense Tensor Algebra}
Optimizations for computations over dense tensors have been well-studied for decades.
Numerous loop optimizations for dense tensor contractions on CPUs~\cite{Cociorva,saday1,saday2005,saday2006,Krishnan2003DataLO,saday2001,saday2002} and on GPUs~\cite{7349652,ABDELFATTAH2016108} have been proposed that exhibit superior performance.
However, these transformations are not directly applicable to sparse tensor algebra because of the data access restrictions of sparse tensors and the non-affine nature of the resulting loop nests.
\section{Conclusion} \label{discussion}
We presented SparseLNR\xspace, a loop restructuring framework for sparse tensor algebra programs.
SparseLNR\xspace improves the performance of sparse computations by reducing time complexity and enhancing data locality.
SparseLNR\xspace enables kernel distribution and loop fusion and achieves significant performance improvements for real-world benchmarks.
The new scheduling transformations introduced by SparseLNR\xspace expand the scheduling space of sparse tensor applications and facilitate fine-grained tuning.
\section{Appendix}
\end{document}
\section{Introduction}
Reinforcement learning (RL) can solve long horizon control tasks with continuous state-action spaces in robotics \cite{lillicrap2015ddpg, levine2015deepvisuo, schulman2017proximal}, such as locomotion \cite{hwangbo2019learning}, manipulation \cite{akkaya2019solving}, or human-robot interaction \cite{christen2019}.
However, tasks that involve extended planning and sparse rewards still pose many challenges in successfully reasoning over long horizons and in achieving generalization from training to different test environments.
To address this, hierarchical reinforcement learning (HRL) splits the decision-making problem into several subtasks at different levels of abstraction \cite{sutton1999tempabstr, andre2002funcabstrac}, often learned separately via curriculum learning \cite{frans2017meathierarchy,florensa2017stochierarchic, bacon2016optioncritic, sasha2017fun}, or end-to-end via off-policy training and goal-conditioning \cite{levy2018hierarchical, nachum2018data, nachum2018nearoptimal}.
However, these methods share the full-state space across layers, even if low-level control states are not strictly required for planning.
This limits i) modularity in the sense of transferring higher level policies across different control agents and ii) the ability to generalize to unseen test tasks without retraining.
\begin{figure}%
\centering
\hspace{-0.5cm}
\vspace{-0.1cm}
\subfloat[Example of agent transfer]{{
\includegraphics[width=0.5\columnwidth]{figures/hide_modular_new.png} }}
\qquad
\hspace{-0.7 cm}
\subfloat[Our proposed hierarchical architecture]{\raisebox{7ex}
{\includegraphics[width=0.5\columnwidth]{figures/hierarchy_overview.png} }}%
\caption{a) Illustrating the modularity of our approach: the \textbf{p}lanning policy of a simple agent is combined with more complex \textbf{c}ontrol policies. b) Our $2$-layer HRL architecture. The planning layer $\pi_1$ receives information crucial for planning, $s_{\text{plan}}$, and provides subgoals $g_1$ to the lower level. A goal-conditioned control policy $\pi_0$ learns to reach the target $g_1$ given the agent's internal state $s_{\text{internal}}$.}%
\label{fig:teaser}%
\vspace{-0.5cm}
\end{figure}
In this paper, we study how a more explicit hierarchical task decomposition into local control and global planning tasks can alleviate both issues. In particular, we hypothesize that explicit decoupling of the state-action spaces of different layers, whilst providing suitable task-relevant knowledge and efficient means to leverage it, leads to a task decomposition that is beneficial for generalization across agents and environments.
Thus, we propose a 2-level hierarchy (see Figure \ref{fig:teaser}b) that is suited for continuous control tasks with a 2D-planning component. Furthermore, we show in our experiments that planning in 3D is also possible.
Global environment information is only available to the planning layer, whereas the full internal state of the agent is only accessible by the control layer. To leverage the global information, we propose the integration of an efficient, RL-based planner.
The benefit of this explicit task decomposition is manifold. First, the individual layers have access only to task-relevant information, enabling each layer to focus on its individual task \cite{dayan2000feudalrl}. Second, the modularity allows for the composition of new agents without retraining. We demonstrate this by transferring the planning layer across different low-level agents ranging from a simple 2DoF ball to a 17DoF humanoid. The approach even allows generalization across domains, combining layers from navigation and robotic manipulation tasks to solve a compound task (see Figure \ref{fig:teaser}a).
In our framework, which we call HiDe, a goal-conditioned control policy $\pi_0$ on the lower-level of the hierarchy interacts with the environment. It has access to the proprioceptive state of an agent and learns to achieve subgoals $g_1$ that are provided by the planning layer policy $\pi_1$. The planning layer has access to task-relevant information, e.g., a top-down view image, and needs to find a subgoal-path towards a goal. We stress that the integration of such additional information into HRL approaches is non-trivial. For example, naively adding an image to HRL methods \cite{levy2018hierarchical, nachum2018data} causes an explosion of the state-space complexity and hence leads to failure as we show in Section \ref{sec:maze_nav}. We propose a specialized, efficient planning layer, based on MVProp \cite{nardelli2018value} with an added learned dynamic agent-centric attention window which transforms the task-relevant prior into a value map. The action of $\pi_1$ is the position that maximizes the masked value map and is fed as a subgoal to the control policy $\pi_0$. While the policies are functionally decoupled, they are trained jointly, which we show to be beneficial over separately training a control agent and attaching a conventional planner.
We focus on continuous control problems that involve navigation and path planning from top-down view, e.g., an agent navigating a warehouse or a robotic arm pushing a block. However, we show as a proof of concept that HiDe can also work in non-euclidean space and be extended to planning in 3D. In our experiments, we first demonstrate that generalization and scaling remain challenging for state-of-the-art HRL approaches and are outperformed by our method. We also compare against a baseline with a non-learning based planner, where a control policy trained with RL is guided by a conventional RRT planner \cite{rrt}. We then show that our method can scale beyond 3x longer horizons and generalize to randomly configured layouts. Lastly, we demonstrate transfer across agents and domains.
The results indicate that an explicit decomposition of policy layers, in combination with task-relevant knowledge and an efficient planner, is an effective tool to help generalize to unseen environments and make HRL more practical. In summary, our main contributions include:
\vspace{-0.25cm}
\begin{itemize}
\item A novel HRL architecture that enforces functional decomposition into global planning and low-level control through a strict separation of the state space per layer, in combination with an RL-based planner on the higher layer of the hierarchy, to solve long horizon control tasks.\vspace{-0.1cm}
\item Empirical evidence that task-relevant priors are essential to enable generalization to unseen test environments and to scale to larger environments, which HRL methods struggle with.
\vspace{-0.1cm}
\item Demonstration of transfer of individual modular layers across different agents and domains.
\end{itemize}
\vspace{-0.2cm}
\section{Related Work}
\paragraph{Hierarchical Reinforcement Learning}
Learning hierarchical policies has seen lasting interest \cite{sutton1999tempabstr,schmidhuber1991learningTG,dietterich1991maxq,parr1998rlhm,mcgovern2001autodiscovery,dayan2000feudalrl}, but many approaches are limited to discrete domains. Vezhnevets et al.~\citep{sasha2017fun} introduce FeUdal Networks (FUN), inspired by \cite{dayan2000feudalrl}. In FUN, a hierarchical decomposition is achieved via a learned state representation in latent space, but the method only works with discrete actions.
More recently, off-policy methods that work for goal-conditioned continuous control tasks have been introduced \cite{levy2018hierarchical, nachum2018data, nachum2018nearoptimal, tirumala2019}.
Nachum et al.~\citep{nachum2018data, nachum2018nearoptimal} present HIRO and HIRO-LR, an off-policy HRL method with two levels of hierarchy. The non-stationary signal of the upper policy is mitigated via off-policy corrections. In HIRO-LR, the method is extended by learning a representation of the state and subgoal space from environment images. In contrast to our approach, both methods use a dense reward function. Levy et al.~\citep{levy2018hierarchical} introduce Hierarchical Actor-Critic (HAC), which can jointly learn multiple policies in parallel via different hindsight techniques from sparse rewards. HAC, HIRO and HIRO-LR consist of a set of nested policies where the goal of a policy is provided by the layer above. In contrast to our method, the same state space is used in all layers, which prohibits transfer of layers across agents. We introduce a modular design to decouple the functionality of individual layers. This allows us to define different state, action and goal spaces for each layer.
Our method is closest to HIRO-LR, which also has access to a top-down view image. Although the learned space representation of HIRO-LR can generalize to a mirrored environment, the policies need to be retrained for each task. Contrarily, HiDe generalizes without retraining through the explicit use of the environment image for planning.
\paragraph{Planning in Reinforcement Learning}
In model-based RL, much attention has been given to learning a dynamics model of the environment and subsequently planning with it \cite{sutton1990ntegratedAF, sutton2012dyna, wang2019modelbased}. Eysenbach et al.~\citep{eysenbach2019search} propose a planning method that performs a graph search over the replay buffer. However, their method requires spawning the agent at different locations in the environment and learning a distance function in order to build the search graph. Unlike model-based RL, we do not learn state transitions explicitly. Instead, we learn a spatial value map from collected rewards.
Recently, differentiable planning modules that are trained via model-free RL have been proposed \cite{nardelli2018value, tamar2016value, oh2017vpn, srinivas2018upn}. Tamar et al.~\citep{tamar2016value} establish a connection between CNNs and Value Iteration \cite{bertsekas2000dpoc}. They propose \emph{Value Iteration Networks} (VIN), where model-free RL policies are additionally conditioned on a fully differentiable planning module. MVProp \cite{nardelli2018value} extends this work, making it more parameter-efficient and generalizable.
Our planning layer is based on MVProp. However, we do not rely on a fixed neighborhood mask for action selection. Instead, we learn an attention mask which is used to generate intermediate goals for the low-level policy. We also extend our planner to 3D, whereas MVProp was only demonstrated in 2D.
Gupta et al.~\citep{gupta2017cognitive} learn a map of indoor spaces and plan using a multi-scale VIN. However, their robot operates only on a discrete set of macro actions. Nasiriany et al.~\citep{nasiriany2019planning} use a goal-conditioned policy for learning a TDM-based planner on latent representations. Srinivas et al.~\citep{srinivas2018upn} propose Universal Planning Networks (UPN), which also learn how to plan an optimal action trajectory via a latent space representation. Müller et al.~\cite{mueller2018} separate planning from low-level control to achieve generalization by using a supervised planner and a PID controller.
In contrast to our approach, the latter methods either rely on expert demonstrations or need to be retrained in order to achieve transfer to harder tasks.
\section{Background}
\subsection{Goal-Conditioned Reinforcement Learning}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\state}[1]{{s}_{#1}}
\newcommand{\goal}[1]{{g}_{#1}}
\newcommand{\action}[1]{{a}_{#1}}
\newcommand{r}{r}
\newcommand{\gamma}{\gamma}
\newcommand{Q}{Q}
\newcommand{\pi}{\pi}
\newcommand{J}{J}
\newcommand{\mathcal{M}}{\mathcal{M}}
We model a Markov Decision Process (MDP) augmented with a set of goals $\mathcal{G}$. We define the MDP as a tuple $\mathcal{M} = \{\mathcal{S}, \mathcal{A}, \mathcal{G}, \mathcal{R}, \gamma, \mathcal{T}, \rho_0\}$, where $\mathcal{S}$ and $\mathcal{A}$ are the sets of states and actions, respectively, $\mathcal{R}_t=r(s_t,a_t, g_t)$ is a reward function, $\gamma \in [0,1]$ a discount factor, $\mathcal{T} = p(\state{t+1}|\state{t}, \action{t})$ the transition dynamics of the environment and $\rho_0 = p(\state{1})$ the initial state distribution, with $\state{t} \in \mathcal{S}$ and $\action{t} \in \mathcal{A}$. Each episode is initialized with a goal $g \in \mathcal{G}$ and an initial state sampled from $\rho_0$. We aim to find a policy $\pi: \mathcal{S} \times \mathcal{G} \rightarrow \mathcal{A}$ which maximizes the expected return. We use an actor-critic framework where the goal-augmented action-value function is defined as: $
Q(s_t,g_t, a_t) = \mathbb{E}_{a_t \sim \pi, s_{t+1} \sim \mathcal{T}} \left [\sum_{i=t}^{T}\gamma^{i-t}\mathcal{R}_t \right ].
$
The Q-function (critic) and the policy $\pi$ (actor) are approximated by using neural networks with parameters $\theta^Q$ and $\theta^{\pi}$. The objective for $\theta^Q$ minimizes the loss:
\begin{equation}
\begin{split}
&L(\theta^Q) = \mathbb{E}_{\mathcal{M}} \left [ \left( Q(\state{t}, \goal{t}, \action{t} ; \theta^Q) - y_t \right)^2 \right], \text{where}\\
&y_t = r(\state{t}, \goal{t}, \action{t}) + \gamma Q(\state{t+1}, \goal{t+1}, \action{t+1} ; \theta^Q).
\label{eq:q_bellman}
\end{split}
\end{equation}
The policy parameters $\theta^{\pi}$ are trained to maximize the Q-value:
\begin{equation}
L(\theta^{\pi}) = \mathbb{E}_{\pi} \left [Q(\state{t}, \goal{t}, \action{t} ; \theta^Q) \vert \state{t}, \goal{t}, \action{t} = \pi(\state{t}, \goal{t}; \theta^{\pi}) \right ]
\label{eq:pi}
\end{equation}
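A tabular numpy sketch of the critic target and loss above; the neural networks are replaced by hand-picked scalar stand-ins, all numbers are made up, and the termination mask `done` is a standard addition that the equations above leave implicit:

```python
import numpy as np

# Minimal numpy sketch of the goal-conditioned critic update, batched over
# 4 transitions. q_next stands in for Q(s_{t+1}, g_{t+1}, a_{t+1}; theta^Q).
gamma = 0.95
r = np.array([-1.0, -1.0, 0.0, -1.0])       # sparse reward: 0 only at the goal
q_next = np.array([-3.2, -1.0, 0.0, -5.0])
done = np.array([0.0, 0.0, 1.0, 0.0])       # no bootstrapping past the goal
y = r + gamma * (1.0 - done) * q_next       # bootstrapped TD target
q_pred = np.array([-4.5, -2.0, -0.5, -5.5]) # current critic estimates
critic_loss = np.mean((q_pred - y) ** 2)    # squared Bellman error L(theta^Q)
```

In the actual method, `q_pred` and `q_next` come from the critic network and the gradient of `critic_loss` updates $\theta^Q$; the actor is then updated to maximize the critic's value.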
\subsection{Hindsight Techniques}
\label{sec3_hac}
In HAC, Levy et al.~\citep{levy2018hierarchical} apply two hindsight techniques to address the challenges introduced by the non-stationary nature of hierarchical policies and by environments with sparse rewards. In order to train a policy $\pi_i$, optimal behavior of the lower-level policy is simulated by \emph{hindsight action transitions}. More specifically, the action $a_{i}$ of the upper policy is replaced with a state $s_{i-1}$ that is actually achieved by the lower-level policy $\pi_{i-1}$. Identically to HER \cite{andrychowicz2017hindsight}, \emph{hindsight goal transitions}
replace the subgoal $g_{i-1}$ with an achieved state $s_{i-1}$, which consequently assigns a reward to the lower-level policy $\pi_{i-1}$ for achieving the virtual subgoal. Additionally, a third technique called \emph{subgoal testing} is proposed. The incentive of subgoal testing is to help a higher-level policy understand the current capability of a lower-level policy and to learn Q-values for subgoal actions that are out of reach. We find all three techniques effective and apply them to our model during training.
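A minimal sketch of a hindsight goal transition (our own simplified relabeling, not the authors' code; the success tolerance `atol=0.1` and the transition layout are illustrative assumptions):

```python
import numpy as np

def relabel(transitions, achieved_final):
    """HER-style relabeling: replace the original goal with the state the
    agent actually achieved, and recompute the sparse reward
    (0 on success, -1 otherwise)."""
    out = []
    for s, a, g, s_next in transitions:
        g_new = achieved_final
        r_new = 0.0 if np.allclose(s_next, g_new, atol=0.1) else -1.0
        out.append((s, a, g_new, s_next, r_new))
    return out

# A 2-step episode that failed to reach the original goal (5, 5) ...
episode = [(np.array([0.0, 0.0]), 0, np.array([5.0, 5.0]), np.array([1.0, 0.0])),
           (np.array([1.0, 0.0]), 1, np.array([5.0, 5.0]), np.array([1.0, 1.0]))]
# ... is relabeled with the achieved final state as a virtual goal,
# so the last transition now yields a success reward.
relabeled = relabel(episode, achieved_final=np.array([1.0, 1.0]))
```

This turns otherwise uniformly failing episodes into useful learning signal, which is why the sparse-reward layers can be trained at all.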
\subsection{Value Propagation Networks}
Tamar et al.~\citep{tamar2016value} introduce value iteration networks (VIN) for path planning problems. Nardelli et al.~\citep{nardelli2018value} propose value propagation networks (MVProp) with better sample efficiency and generalization behavior. MVProp creates reward and propagation maps covering the environment. The reward map highlights the goal location, and the propagation map determines the propagation factor of values through a particular location.
The reward map is an image $\bar{r}_{i,j}$ of the same size as the environment image, where $\bar{r}_{i, j} = 0$ if the pixel $(i,j)$ overlaps with the goal position and $-1$ otherwise. The value map $V$ is calculated by unrolling max-pooling operations in a neighborhood $N$ for $k$ steps as follows: \begin{equation}
\begin{split}
v_{i,j}^{(0)} &= \bar{r}_{i,j}, \quad \bar{v}_{i,j}^{(k)} = \max_{(i',j')\in N(i,j)} \left(\bar{r}_{i,j}+p_{i,j}(v_{i',j'}^{(k-1)} - \bar{r}_{i,j})\right), \quad
v_{i,j}^{(k)} = \max\left (v_{i,j}^{(k-1)}, \bar{v}_{i,j}^{(k)} \right )
\end{split}
\label{eq:mvprop_main}
\end{equation}
The action (i.e., the target position) is selected to be the pixel $(i', j')$ maximizing the value in a predefined $3\times3$ neighborhood $N(i_0,j_0)$ of the agent's current position $(i_0,j_0)$:
\begin{equation}
\pi(s,(i_0,j_0))= \underset{i',j'\in N(i_0,j_0)}{\argmax} v_{i',j'}^{(k)}
\label{eq:mvprop_final}
\end{equation}
Note that the window $N(i_0,j_0)$ is determined by the discrete, pixel-wise actions.
\section{Hierarchical Decompositional Reinforcement Learning}
\label{sec4_method}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\columnwidth]{figures/planning_layer_hide.png}
\caption{Planning layer $\pi_1(s_{\text{pos}}, s_{\text{img}}, G)$ illustrated for the 2D case. Given an image $s_{\text{img}}$ and goal $G$, the MVProp network computes a value map $V$. An attention mask $M$, computed using the agent's position $s_{\text{pos}}$, restricts $V$ to a local subgoal map $\bar{V}$. The policy $\pi_1$ selects the maximum value and assigns the control policy $\pi_0$ a subgoal relative to the agent's position. For the 3D case, see Eq. \ref{eq:gaussian_likelihood}-\ref{eq:masked_v_argmax} and Figure \ref{fig:experiment_envs}c.}
\label{fig:model_planner}
\vspace{-0.5cm}
\end{figure*}
We introduce a hierarchical architecture, HiDe, allowing for an explicit functional \emph{decomposition} across layers. Our method achieves temporal abstraction via nested policies. Moreover, our architecture enforces functional decomposition explicitly by reducing the state in each layer to only task-relevant information. The planning layer is responsible for planning a path towards a goal and hence receives global information about the environment. The control layer has access to the agent's internal state and learns a control policy that can achieve subgoals from the planning layer. The layers are jointly trained using the hindsight techniques and subgoal testing presented in Section \ref{sec3_hac} to overcome the sparsity of the reward and the non-stationarity caused by off-policy training. Our design significantly improves generalization and makes cross-agent transfer possible (see Section \ref{sec5_experiments}).
\subsection{Planning Layer}
\label{sec4_planninglayer}
The planning layer is expected to learn high-level actions over a long horizon, which define a coarse path towards a goal. In related work \cite{levy2018hierarchical,nachum2018data,nachum2018nearoptimal}, the planning layer learns an \emph{implicit} value function and shares the same architecture as the lower layers. Since the task is learned for a specific environment, generalization is inherently limited. In contrast, we introduce a planning specific layer consisting of several components to learn the map and to find a feasible path to the goal.
Our planning layer for the 2D case is illustrated in Figure \ref{fig:model_planner}. We utilize a value propagation network (MVProp) \cite{nardelli2018value} to learn an \emph{explicit} value map which projects the collected rewards onto the environment space. For example, given a discretized 3D model of the environment, a convolutional network determines the per voxel flow probability $p_{i,j, k}$. The probability value of a voxel corresponding to an obstacle should be $0$ and that for free passages $1$, respectively.
Nardelli et al.~\citep{nardelli2018value} use only 2D images and a predefined $3\times3$ neighborhood of the agent's current position, and pass the location of the maximum value in this neighborhood as the goal position to the agent (Equation \ref{eq:mvprop_final}). We extend this approach to handle both 3D and 2D inputs and augment the MVProp network with an attention model which learns to define the neighborhood dynamically and adaptively. Given the value map $V$ and the agent's current position $s_{\text{pos}}$, we estimate how far the agent can move, modeled by a Gaussian. More specifically, we predict a full covariance matrix $\Sigma$ with the agent's global position $s_{\text{pos}}$ as mean. We then build a mask $M$ of the same size as the environment space $s_{\text{img}}$ by using the likelihood function:
\begin{equation}
m_{i,j,k} = \mathcal{N}((i,j,k) | s_{\text{pos}}, \Sigma)
\label{eq:gaussian_likelihood}
\end{equation}
Intuitively, the mask defines the density for the agent's success rate. Our planning policy selects an action (i.e., subgoal) that maximizes the masked value map as follows:
\begin{equation}
\begin{split}
\bar{V} &= M \cdot V \\
\pi_1(s_{\text{pos}}, s_{\text{img}}, G) &= \underset{i,j,k}{\text{argmax }} \bar{v}_{i,j,k} - s_{\text{pos}}\\
\end{split}
\label{eq:masked_v_argmax}
\end{equation}
where $\bar{v}_{i,j,k}$ corresponds to the value at voxel $(i,j,k)$ on the masked value map $\bar{V}$. Note that the subgoal selected by the planner is relative to the agent's current position $s_{\text{pos}}$, resulting in better generalization as we show in Section \ref{sec:ablation_study}. While we present the more general 3D case in Equations \ref{eq:gaussian_likelihood}-\ref{eq:masked_v_argmax}, we reduce the equations by one dimension for the 2D case used in most of our experiments.
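A minimal numpy sketch of Equations \ref{eq:gaussian_likelihood}-\ref{eq:masked_v_argmax} for the 2D case follows. The function names and the linear test value map are our own; the learned covariance $\Sigma$ is assumed to come from the attention head described above.

```python
import numpy as np

def gaussian_mask(shape, mean, cov):
    """Likelihood of each cell under N(mean, cov); used as the attention mask M."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d = np.stack([ys - mean[0], xs - mean[1]], axis=-1).astype(float)
    inv = np.linalg.inv(cov)
    expo = np.einsum('...i,ij,...j->...', d, inv, d)  # Mahalanobis distance
    norm = 2.0 * np.pi * np.sqrt(np.linalg.det(cov))
    return np.exp(-0.5 * expo) / norm

def select_subgoal(value_map, pos, cov):
    """argmax over the masked value map, returned relative to the agent."""
    m = gaussian_mask(value_map.shape, pos, cov)
    idx = np.unravel_index(np.argmax(m * value_map), value_map.shape)
    return np.array(idx) - np.array(pos)

# Hypothetical value map that increases towards the bottom-right corner:
ys, xs = np.mgrid[0:11, 0:11]
v = (ys + xs).astype(float)
offset = select_subgoal(v, pos=(5, 5), cov=np.diag([9.0, 9.0]))
```

With a narrow covariance the mask keeps the subgoal near the agent; a wider covariance lets the planner pick a more distant cell along the value gradient.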
The benefits of having an attention model are twofold. First, the planning layer considers the agent dynamics when assigning subgoals, which may lead to fine- or coarse-grained subgoals depending on the underlying agent's performance. Second, the Gaussian window allows us to define a dynamic set of actions for the planning policy $\pi_1$, which is essential to find a path of subgoals on the map. While the action space includes all pixels of the value map $V$, it is limited to the subset of reachable pixels by the Gaussian mask $M$. We find that this leads to better obstacle-avoidance behavior, e.g., near the corners and walls shown in Figure \ref{fig:window_comparison} in the Appendix.
Since our planning layer operates in a discrete action space, the resolution of the projected image defines the minimum amount of displacement for the agent, affecting maneuverability. This could be tackled by using a soft-argmax \cite{chapelle2010gradient} to select the subgoal pixel, which allows real-valued actions and provides invariance to image resolution. In our experiments we see no difference in terms of final performance. However, since the discrete setting allows the use of DQN \cite{mnih2013dqn} instead of DDPG \cite{lillicrap2015ddpg}, we prefer the discrete action space for simplicity and faster convergence. Both the MVProp (Equation \ref{eq:mvprop_main}) and Gaussian likelihood (Equation \ref{eq:gaussian_likelihood}) operations are differentiable. Hence, MVProp and the attention model parameters are trained by minimizing the standard mean squared Bellman error objective as defined in Equation \ref{eq:q_bellman}.
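For completeness, the soft-argmax alternative mentioned above can be sketched as the expected coordinate under a softmax over the value map. This is a generic illustration, not the paper's code; the temperature parameter and names are hypothetical.

```python
import numpy as np

def soft_argmax(v, temp=1.0):
    """Differentiable soft-argmax over a 2D value map: the expected (row, col)
    coordinate under softmax(v / temp). Low temperatures approach hard argmax."""
    w = np.exp((v - v.max()) / temp)  # subtract max for numerical stability
    w /= w.sum()
    ys, xs = np.mgrid[0:v.shape[0], 0:v.shape[1]]
    return np.array([(w * ys).sum(), (w * xs).sum()])

# With a single sharp peak and a low temperature, the soft-argmax
# essentially recovers the discrete argmax, but as a real-valued coordinate.
v = np.zeros((9, 9))
v[3, 6] = 5.0
coord = soft_argmax(v, temp=0.1)
```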
\subsection{Control Layer}
\label{sec4_controllayer}
The control layer learns a goal-conditioned control policy. Unlike the planning layer, it has access to the agent's full internal state $s_{\text{internal}}$, including joint positions and velocities. In the control tasks we consider, the agent has to learn a policy to reach a certain goal position, e.g., reach a target position in a navigation domain. We use the hindsight techniques (cf. Section \ref{sec3_hac}) so that the control policy receives rewards even in failure cases. All policies in our hierarchy are trained jointly. We use DDPG \cite{lillicrap2015ddpg} (Equations \ref{eq:q_bellman}-\ref{eq:pi}) to train the control layer and DQN \cite{mnih2013dqn} for the planning layer.
\section{Experiments}
\label{sec5_experiments}
We evaluate our method on a series of continuous control tasks\footnote{Videos available at: \small{\url{https://sites.google.com/view/hide-rl}}}. Experiment and implementation details are provided in the Appendix. First, we compare to various baseline methods in navigation tasks (see Figure \ref{wrap-fig:exp1_fwd}) in Section \ref{sec5_exp1} and provide an ablation study for our design choices in Section \ref{sec:ablation_study}. In Section \ref{subsec_mazenav_complex}, we show that HiDe scales to environments that are 2-3x larger (see Figure \ref{fig:experiment_envs}a). Section \ref{subsec_transfer_policies} demonstrates that our approach indeed leads to functional decomposition by transferring layers across agents and domains (see Figure \ref{fig:experiment_envs}b). Finally, we show in Section \ref{sec:ext_epx} that HiDe can be extended to planning in 3D and use non-image based priors, such as a joint map (see Figure \ref{fig:experiment_envs}c).
\subsection{Maze Navigation}
\label{sec:maze_nav}
We introduce the following task configurations:\newline
\textbf{Maze Training} The training environment, where the task is to reach a goal from a fixed start position. \\
\textbf{Maze Backward} The training environment with swapped start and goal positions. \\
\textbf{Maze Flipped} The mirrored training environment.\\
\textbf{Maze Random} A set of 500 randomly generated mazes with random start and goal positions.
We always train in the Maze Training environment. The reward signal during training is a constant $-1$ unless the agent reaches the given goal (except for HIRO/HIRO-LR, which use an L2-shaped reward). We test the agents on the above task configurations. We intend to answer the following questions:
\vspace{-0.2cm}
\begin{enumerate}
\item Can our method generalize to unseen test cases and environment layouts?
\vspace{-0.1cm}
\item Can we scale to larger environments with more complex layouts (see Figure \ref{fig:experiment_envs}a)?
\vspace{-0.2cm}
\end{enumerate}
We compare our method to state-of-the-art HRL approaches including HIRO \citep{nachum2018nearoptimal}, HIRO-LR \citep{nachum2018nearoptimal}, HAC \citep{levy2018hierarchical}, and a more conventional navigation baseline dubbed RRT+LL. HIRO-LR is the closest related work, since it also receives a top-down view image of the environment and is a fully learned hierarchy.
Our preliminary experiments have shown that HAC and HIRO cannot solve the task when provided with an environment image (see Table \ref{tab:hrl_image_fail} in the Appendix), likely due to the increase of the state space by a factor of 14. We therefore only show results of HAC and HIRO where they are able to solve the training task, i.e., without accessing an image. To compare against a baseline with complete separation, we introduce RRT+LL. We train a goal-conditioned control policy with RL in an empty environment and attach an RRT planner \cite{rrt}, which finds a path from top-down views via tree search and does not require training.
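To make the RRT+LL baseline concrete, a generic 2D RRT can be sketched as below. This is a minimal textbook variant, not the adapted implementation used in our experiments; the goal bias, step size, and helper names are our own choices.

```python
import math
import random

def rrt_plan(start, goal, is_free, bounds, step=0.5, goal_tol=0.5,
             max_iters=5000, goal_bias=0.1, rng=None):
    """Minimal RRT sketch: grow a tree from `start` by steering towards random
    samples (or the goal, with probability `goal_bias`) and backtrack a path
    once a node lands within `goal_tol` of the goal. `is_free(p)` is a
    user-supplied collision check."""
    rng = rng or random.Random(0)
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        target = goal if rng.random() < goal_bias else (
            rng.uniform(bounds[0][0], bounds[0][1]),
            rng.uniform(bounds[1][0], bounds[1][1]))
        near = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], target))
        dx, dy = target[0] - nodes[near][0], target[1] - nodes[near][1]
        d = math.hypot(dx, dy)
        if d == 0.0:
            continue
        new = (nodes[near][0] + step * dx / d, nodes[near][1] + step * dy / d)
        if not is_free(new):
            continue  # discard samples that collide with obstacles
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if math.dist(new, goal) <= goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:  # backtrack from the goal node to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget

# Obstacle-free toy query: plan from the origin to (5, 5).
path = rrt_plan((0.0, 0.0), (5.0, 5.0), is_free=lambda p: True,
                bounds=((0.0, 6.0), (0.0, 6.0)), rng=random.Random(1))
```

In the baseline, the waypoints returned by such a planner are fed as subgoals to the pretrained low-level control policy.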
See Table \ref{tab:comp_overview} in the Appendix for an overview of all the baselines.
\begin{table}
\centering
\caption{Success rates of achieving a goal in the maze navigation environments.}
\begin{tabular}{@{}c|ccc|cccc@{}}
\toprule
\multicolumn{1}{}{} & \multicolumn{3}{c}{Simple Maze} & \multicolumn{4}{c}{Complex Maze} \\ \midrule
Method & Training & Backward & Flipped & Training & Random & Backward & Flipped \\ \midrule
HAC & $82 \pm 16$ & $0 \pm 0$ & $0 \pm 0$ & $0 \pm 0$ & $0 \pm 0$ & $0 \pm 0$ & $0 \pm 0$ \\
HIRO & $91 \pm 2$ & $0 \pm 0$ & $0 \pm 0$ & $68 \pm 8$ & $36 \pm 5$ & $0 \pm 0$ & $0 \pm 0$ \\
HIRO-LR & $ 83 \pm 8 $ & $0 \pm 0$ & $0 \pm 0$ & $20 \pm 21$ & $15 \pm 7$ & $0 \pm 0$ & $0 \pm 0$ \\
RRT+LL & $25 \pm 13 $ & $22 \pm 12$ & $29 \pm 10$ & $13 \pm 9$ & $48 \pm 3$ & $7 \pm 4$ & $5 \pm 5$ \\
HiDe &$\mathbf{94 \pm 2}$ & $\mathbf{85 \pm 9}$ & $\mathbf{93 \pm 2}$ & $\mathbf{87 \pm 2}$ & $\mathbf{85 \pm 3}$ & $\mathbf{79 \pm 8}$ & $\mathbf{79 \pm 12}$ \\ \bottomrule
\end{tabular}
\label{tab:exp1}
\vspace{-0.5cm}
\end{table}
\subsubsection{Simple Maze Navigation}
\label{sec5_exp1}
\begin{wrapfigure}{r}{2.5cm}
\vspace{-0.5cm}
\includegraphics[width=2.5cm]{figures/exp1_forward.png}
\vspace*{-5mm}
\caption{Simple Maze Training.}
\label{wrap-fig:exp1_fwd}
\vspace{-1.2cm}
\end{wrapfigure}
Table \ref{tab:exp1} (left) summarizes the results for the simple maze tasks. All HRL models successfully learned the training task (see Figure \ref{wrap-fig:exp1_fwd}). The models' generalization abilities are evaluated in the unseen Maze Backward and Maze Flipped tasks. While HIRO, HIRO-LR and HAC manage to solve the training environment with success rates between 82\% and 91\%, they suffer from overfitting to the training task, indicated by the 0\% success rates in the unseen test scenarios. HIRO-LR, which implicitly uses the top-down view to learn a goal space representation, also fails.
For the navigation baseline RRT+LL, the success rate stays between 22\% and 29\% for all tasks. Although the planner can generalize to different tasks, it cannot learn to cooperate with the low-level control as in our hierarchy. In contrast, our method achieves 93\% and 85\% success rates in the generalization tasks \emph{without} retraining. We argue that this is mainly due to the strict separation of concerns, which allows the integration of task-relevant priors, in combination with HiDe's efficient planner.
\subsubsection{Complex Maze Navigation}
\label{subsec_mazenav_complex}
In this experiment, we evaluate how well the methods scale to larger environments with longer horizons. Thus, we train an ant agent in a more complex environment layout (cf. Figure \ref{fig:experiment_envs}a), i.e., we increase the size of the environment by roughly 50\% and add more obstacles, thereby also increasing the distance to the final reward. The results are reported in Table \ref{tab:exp1} (right). HAC fails to learn the training task, while HIRO and HIRO-LR reach success rates of 68\% and 20\%, respectively. Hence, there is a significant performance drop for both methods if the state-space increases. RRT+LL only reaches success rates between 5\% and 13\%, except for the Maze Random task. The higher success rates in Maze Random compared to the other test cases can be attributed to the randomization of both the environment layout as well as the start and goal position, which can result in short trajectories without obstacles.
Contrary to the baselines, our model's performance decreases only slightly in the training task compared to the simple maze in Section \ref{sec5_exp1} and also generalizes to all of the unseen test cases. The decrease in performance is due to the increased difficulty of the task. In terms of convergence, we find that in the simple maze case HIRO is competitive with HiDe, due to its reduced state space (no image), but convergence slows with an increase in maze complexity. In contrast, HiDe shows similar convergence behavior in both experiments (cf. Figure \ref{fig:curves} in the Appendix). To push the limits of our method, we gradually increase the environment size and observe that only at a 300\% increase does the performance drop to around 50\% (see Figure \ref{fig:env_increase} in the Appendix). These results indicate that task-relevant information and efficient methods to leverage it are essential to scale to larger environments. Most failure cases for HiDe arise when the agent gets stuck at a wall and falls over.
\subsubsection{Ablation Study}
\label{sec:ablation_study}
\begin{wraptable}{r}{6.5cm}
\vspace{-0.45cm}
\caption{Ablation study in the simple maze navigation environments from Section \ref{sec5_exp1}.}
\centering
\begin{tabular}{@{}c|ccc@{}}
\toprule
Methods & Training & Backward & Flipped \\ \midrule
HiDe-A & $88 \pm 2 $ & $17 \pm 15$ & $36 \pm 16$ \\
HiDe-3x3 & $46 \pm 32 $ & $2 \pm 3$ & $31\pm 28$ \\
HiDe-5x5 & $92 \pm 4 $ & $41 \pm 35$ & $82\pm 18$ \\
HiDe-9x9 & $93 \pm 4 $ & $16 \pm 27$ & $79\pm 7$ \\
HiDe-RRT & $77 \pm 9$ & $53 \pm 13$ & $72 \pm 6$ \\
HiDe & $\mathbf{94 \pm 2}$ & $\mathbf{85 \pm 9}$ & $\mathbf{93 \pm 2}$ \\ \bottomrule
\end{tabular}
\vspace{-0.4cm}
\label{tab:ablation}
\end{wraptable}
To support the claim that our architectural design choices are responsible for the generalization and scaling capabilities, we analyze empirical results of different variants of our method. To show the benefits of relative positions, we compare HiDe against a variant with absolute positions, dubbed HiDe-A. Unlike the case of relative positions, the policy in this setting needs to learn all values within the range of the environment dimensions. Second, we run an ablation study for HiDe with a fixed window size, i.e., we train and evaluate an ant agent on window sizes $3\times3$, $5\times5$, and $9\times9$. Lastly, we compare to a variant where HiDe's decoupled state-space structure is used for training, but the RL-based planner is replaced with an RRT planner.
As indicated in Table \ref{tab:ablation}, HiDe-A is competitive in the training task but fails to match the generalization performance of HiDe, showing that relative positions are crucial for generalization. The learned attention window (HiDe) achieves performance better than or comparable to the fixed-window variants.
Moreover, it eliminates the need for tuning the window size per agent and environment. HiDe-RRT performs significantly worse in all tasks, showing that our learned planner outperforms training with a conventional planner.
\subsection{Transfer of Policies}
\label{subsec_transfer_policies}
We argue that a key to transferability and generalization behavior in hierarchical RL lies in enforcing a separation of concerns across different layers. To examine whether the overall task is truly split into separate sub-tasks, we perform a set of experiments to demonstrate transfer behavior.
\paragraph{Agent Transfer}
\label{sec:exp_agent_transfer}
\begin{wrapfigure}{r}{3.0cm}
\vspace{-0.4cm}
\includegraphics[width=3.0cm]{figures/humanoid_maze_4.png}
\caption{Humanoid agent transfer.}
\label{wrap-fig:humanoid}
\vspace{-0.4cm}
\end{wrapfigure}
For this experiment, we train different control agents with HiDe. We then transfer the planning layer of one agent to another agent, e.g., we replace the planning layer of a complex ant agent with the planning layer trained on a simple $2$ DoF ball agent. We observe that transfer is possible and leads to only a marginal decrease in performance (cf. Table \ref{exp:agent_transfer} in the Appendix). Most failure cases arise at corners, where the ball's planner tries to use a path close to the walls. In contrast, the ant's planner is more conservative, as subgoals close to the wall may lead to overturning. This shows that layers can be transferred between agents, validating our hypothesis. To further demonstrate our method's transfer capabilities, we train a humanoid agent (17 DoF) in an empty environment. We then use the planning layer from a ball agent and connect it as is with the control layer of the trained humanoid (see Figure \ref{wrap-fig:humanoid})\footnote{Videos available at \small{\url{https://sites.google.com/view/hide-rl}}\label{video_ref}}.
\paragraph{Domain Transfer}
\label{sec:domain_transfer}
In this experiment, we demonstrate the capability of HiDe to transfer the planning layer from a simple ball agent, trained on a pure navigation task, to a robot manipulation agent (see Figure \ref{fig:experiment_envs}b).
To this end, we train a ball agent with HiDe. Moreover, we train a control policy for a robot manipulation task in the OpenAI Gym "Push" environment \cite{openai_gym}, which learns to move a cube to a relative position goal. Note that the manipulation task does not encounter any obstacles during training. To obtain the compound agent, we attach the planning layer of the ball agent to the manipulation policy (cf. Figure \ref{fig:teaser}a). The planning layer has access to the environment layout and the cube's position, which is a common assumption in robot manipulation tasks. For testing, we generate 500 random environment layouts. As in the navigation experiments in Section \ref{sec5_exp1}, state-of-the-art methods are able to solve these tasks when trained on a single, simple environment layout. However, they do not generalize to other layouts without retraining. In contrast, our evaluation of the compound HiDe agent on unseen testing layouts shows a success rate of 49\% (cf. Table \ref{tab:domain_transfer} in the Appendix). Thus, our modular approach can achieve domain transfer and generalize to different environments.
\subsection{Representation of Priors and 3D-Planning}
\label{sec:ext_epx}
While the majority of our experiments use 2D images for planning, we show via a proof-of-concept that our method i) can be extended to planning in 3D and ii) works with sources of information other than top-down views. To this end, we add a 3DoF robotic arm that has to reach goals in 3D configuration space (see Figure \ref{fig:experiment_envs}c). Instead of a top-down view, we project the robot's joint angles onto a 3D map which is used as input to our planning layer. We train the 3D variant of our planning layer (see Section \ref{sec4_planninglayer}) and 3D CNNs to compute the value map. Our method can successfully solve the task\footnoteref{video_ref}. If the planning space exceeded 3 dimensions, a mapping from the higher-dimensional representation space to a 2D or 3D latent space could be a potential solution. We leave this for future work.
\section{Environment Details}
\label{appendix_env_details}
We build on the Mujoco~\cite{todorov2012mujoco} environments used in \cite{nachum2018data}. The rewards in all experiments are sparse, i.e., $0$ for reaching the goal and ${-1}$ otherwise. We consider the goal reached if $|s-g|_{\max} < 1$. All environments use $dt=0.02$. Each episode in the simple maze navigation experiment (Section \ref{sec5_exp1}) is terminated after 500 steps and after 800 steps for the complex maze experiment (Section \ref{subsec_mazenav_complex}). In the robot manipulation experiment (Section \ref{subsec_transfer_policies}), we terminate after 800 steps and in the reacher experiments (Section \ref{sec:ext_epx}) after 200 steps.
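The termination and reward rule above amounts to a max-norm check on the goal distance; a minimal sketch (the function names are our own):

```python
import numpy as np

def goal_reached(s, g, tol=1.0):
    """Success criterion from the text: the max-norm |s - g|_inf must be < tol."""
    return np.max(np.abs(np.asarray(s, dtype=float) - np.asarray(g, dtype=float))) < tol

def sparse_reward(s, g, tol=1.0):
    """Sparse reward: 0 on reaching the goal, -1 otherwise."""
    return 0.0 if goal_reached(s, g, tol) else -1.0
```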
\subsection{Agents}
\label{subsec:agents}
Our ant agent is equivalent to the one in \cite{levy2018hierarchical}, i.e., the ant from Rllab~\cite{duan2016benchmarking} with a gear power of 16 instead of 150 and a frame skip of 10 instead of 5. Our ball agent is the PointMass agent from the DM Control Suite~\cite{deepmindcontrolsuite2018}. We changed the joints so that the ball rolls instead of sliding. Furthermore, we resize the motor gear and the ball itself to match the maze size. For the manipulation robot, we slightly adapt the "Push" task from OpenAI gym \cite{openai_gym}. The original environment uses an inverse kinematic controller to steer the robot, whereby joint positions are enforced and realistic physics are ignored. This can cause unwanted behavior, such as penetration through objects. Hence, we change the control inputs to motor torques for the joints. For the robot reacher task, we also adapt the "Reacher" task from OpenAI gym \cite{openai_gym}. Instead of being provided with Euclidean position goals, the agent needs to reach goals in the robot's joint angle configuration space.
\subsection{Environments}
\subsubsection{Navigation Mazes}
All navigation mazes are modelled by immovable blocks of size $4\times 4 \times 4$; \cite{nachum2018data} uses blocks of $8\times 8 \times 8$. The environment shapes are depicted in Figure \ref{fig:navigation_envs}. For the randomly generated mazes, we sample each block to be empty with probability $p=0.7$. The start and goal positions are sampled uniformly at random with a minimum distance of 5 blocks between them. Mazes where the start and goal positions are adjacent or where the goal is not reachable are discarded. For evaluation, we generated 500 such environments and reused them (one per episode) for all experiments. We will provide the random environments along with the code.
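The sampling procedure can be sketched as follows. Two details are our own reading of the text and should be treated as assumptions: the "5 blocks distance apart" constraint is interpreted as Manhattan distance, and reachability is checked with a 4-connected BFS.

```python
from collections import deque

import numpy as np

def bfs_reachable(grid, start, goal):
    """4-connected BFS over free cells (grid == 1)."""
    h, w = grid.shape
    seen, q = {start}, deque([start])
    while q:
        i, j = q.popleft()
        if (i, j) == goal:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and grid[ni, nj] == 1 \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                q.append((ni, nj))
    return False

def sample_maze(h=10, w=10, p_empty=0.7, min_dist=5, rng=None):
    """Rejection-sample a maze: each block is empty with probability p_empty;
    start/goal are uniform over free cells, at least min_dist blocks apart
    (Manhattan, an assumption), and the goal must be reachable."""
    rng = rng or np.random.default_rng()
    while True:
        grid = (rng.random((h, w)) < p_empty).astype(int)  # 1 = free block
        free = list(zip(*np.nonzero(grid)))
        if len(free) < 2:
            continue
        start, goal = [tuple(free[k])
                       for k in rng.choice(len(free), 2, replace=False)]
        if abs(start[0] - goal[0]) + abs(start[1] - goal[1]) < min_dist:
            continue  # too close (this also rules out adjacent positions)
        if bfs_reachable(grid, start, goal):
            return grid, start, goal

grid, start, goal = sample_maze(rng=np.random.default_rng(0))
```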
\subsubsection{Manipulation Environments}
The manipulation environments differ from the navigation mazes in scale. Each wall is of size $0.05 \times 0.05 \times 0.03$. We use a layout of $9 \times 9$ blocks. The object's position is the position fed to the planning layer. When the object escapes the top-down view range, the episode is terminated. The random layouts were generated using the same methodology as for the navigation mazes.
\subsubsection{Robot Reacher Environments}
The robot reacher environment is an adapted version of the OpenAI gym \cite{openai_gym} implementation. We add an additional degree of freedom to extend it to 3D space. The joint angles are projected onto a map of size $12\times12\times12$, where the axes correspond to the joint angles of the robot links. The goals are randomly sampled points in the robot's configuration space.
\section{Implementation Details}
\label{appendix_impl_details}
Our PyTorch~\cite{paszke2017pyTorch} implementation will be available on the project website\footnote{HiDe: \small{\url{https://sites.google.com/view/hide-rl}}}.
\subsection{Baseline Experiments}
For HIRO, HIRO-LR and HAC we used the authors' original implementations\footnote{HIRO: \url{https://github.com/tensorflow/models/tree/master/research/efficient-hrl}}\footnote{HAC: \url{https://github.com/andrew-j-levy/Hierarchical-Actor-Critc-HAC-}}. To improve the performance of HAC, we modified their Hindsight Experience Replay~\cite{andrychowicz2017hindsight} implementation so that they use the \textsc{Future} strategy. More importantly, we also added target networks to both the actor and critic to improve the performance.
For HIRO, we ran the hiro\_xy variant, which uses only position coordinates for subgoals instead of all joint positions to have a fair comparison with our method. For HIRO-LR, we provide top-down view images of size 5x5x3 as in the original implementation. We train both HIRO and HIRO-LR for 10 million steps as reported in \cite{nachum2018nearoptimal}.
For RRT+LL, we adapted the RRT python implementation\footnote{RRT: \url{https://github.com/AtsushiSakai/PythonRobotics}} from \cite{python_robotics} to our problem. We trained a goal-conditioned low-level control policy in an empty environment with DDPG. During testing, we provide the low-level control policies with subgoals from the RRT planner.
We used OpenAI's baselines \cite{baselines} for the DDPG+HER implementation. When pretraining for domain transfer, we made the achieved goals relative before feeding them into the network. For a better overview of the features the algorithms use, see Table \ref{tab:comp_overview}.
\begin{table}[H]
\centering
\begin{tabular}{@{}ccccccc@{}}
\toprule
\multicolumn{1}{c}{Features} &
\multicolumn{1}{c}{HiDe} &
\multicolumn{1}{c}{RRT+LL} &
\multicolumn{1}{c}{HIRO-LR} &
\multicolumn{1}{c}{HIRO} &
\multicolumn{1}{c}{HAC} &
\multicolumn{1}{c}{DDPG+HER} \\ \midrule
Images & {\color{black}\checkmark} & {\color{black}\checkmark} & {\color{black}\checkmark} & {\color{black}{x}} & {\color{black}{x}} & {\color{black}{x}} \\
Random start pos & {\color{black}{x}} & {\color{black}{x}} & {\color{black}{x}} & {\color{black}{x}} & {\color{black}\checkmark} & {\color{black}\checkmark} \\
Random end pos & {\color{black}{x}} & {\color{black}\checkmark} & {\color{black}\checkmark} & {\color{black}\checkmark} & {\color{black}\checkmark} & {\color{black}\checkmark} \\
Agent position & {\color{black}\checkmark} & {\color{black}\checkmark} & {\color{black}{x}} & {\color{black}\checkmark} & {\color{black}\checkmark} & {\color{black}\checkmark} \\
Shaped reward & {\color{black}{x}} & {\color{black}{x}} & {\color{black}\checkmark} & {\color{black}\checkmark} & {\color{black}{x}} & {\color{black}{x}} \\
Agent transfer & {\color{black}{\checkmark}} & {\color{black}{\checkmark}} & {\color{black}x} & {\color{black}x} & {\color{black}x} & {\color{black}x} \\ \bottomrule
\end{tabular}
\vspace{0.1cm}
\caption{Overview of related work and our method with their respective features. Features marked with a tick are included in the algorithm whereas features marked with a cross are not. See glossary below for a detailed description of the features. \newline \newline
\textbf{Glossary:\newline}
Images: If the state space has access to images.\newline
Random start pos: If the starting position is randomized during training.\newline
Random end pos: If the goal position is randomized during training.\newline
Agent position: If the state space has access to the agent's position.\newline
Shaped reward: If the algorithm learns using a shaped reward.\newline
Agent transfer: Whether transfer of layers between agents is possible without retraining.}
\label{tab:comp_overview}
\end{table}
\subsection{Ablation Baselines}
We compare HiDe against several different versions to justify our design choices. In HiDe-A(bsolute), we use absolute positions for the goal and the agent's position. In HiDe-3x3, HiDe-5x5, HiDe-9x9, we compare our learned attention window against fixed window sizes for selecting subgoals. In HiDe-RRT, we use RRT planning \cite{rrt} for the top-layer, while the lower layer is trained as in HiDe. This is to show that our learned planner improves the overall performance.
\subsection{Evaluation Details}
For evaluation in the maze navigation experiments of Section \ref{sec:maze_nav}, we trained 5 seeds each for 2.5M steps in the simple and 10M steps in the complex maze navigation ``Training'' environments. We performed continuous evaluation (every 100 episodes for 100 episodes). After training, we selected the best checkpoint based on the continuous evaluation of each seed. Then, we tested the learned policies for 500 episodes and reported the average success rate. Although the agent and goal positions are fixed, the initial joint positions and velocities are sampled from a uniform distribution, as is standard in OpenAI Gym environments \cite{openai_gym}. Therefore, the result tables (cf. Table \ref{tab:exp1}) contain means and standard deviations across 5 seeds.
\subsection{Network Structure}
\subsubsection{Planning Layer}
Input images for the planning layer are binarized: each pixel corresponds to one block ($0$ for a wall, $1$ for a corridor).
In our planning layer, we process the input image via two convolutional layers with $3 \times 3 \times 3$ kernels for the 3D case and $3 \times 3$ kernels for the 2D case. Both layers have only $1$ input and output channel and are padded so that the output size is the same as the input size. We propagate the value through the value map as in~\cite{nardelli2018value} $K=35$ times using a $3\times3\times3$ max pooling layer ($3\times3$ for 2D). Finally, the value map and position image are processed by 3 convolutions with $32$ output channels and a $3\times3\times3$ filter window ($3\times3$ in 2D), interleaved with $2\times2\times2$ max pooling ($2\times2$ for 2D) and $\relu$ activation functions, using zero padding. The final result is flattened and processed by two fully connected layers with $64$ neurons each, producing the outputs $\sigma_1, \sigma_2, \sigma_3$ and the respective pairwise correlation coefficients $\rho$. We use $\Softplus$ activation functions for the $\sigma$ values and $\tanh$ activation functions for the correlation coefficients. The final covariance matrix $\Sigma$ is given by
$$\Sigma =
\begin{pmatrix}
\sigma_1^2 & \rho_{1,2}\,\sigma_1 \sigma_2 & \rho_{1,3}\,\sigma_1 \sigma_3 \\
\rho_{1,2}\,\sigma_1 \sigma_2 & \sigma_2^2 & \rho_{2,3}\,\sigma_2 \sigma_3 \\
\rho_{1,3}\,\sigma_1 \sigma_3 & \rho_{2,3}\,\sigma_2 \sigma_3 & \sigma_3^2 \\
\end{pmatrix}
$$ so that the matrix is always symmetric and positive definite. For numerical reasons, we multiply by the binarized kernel mask instead of the actual Gaussian densities: we set the values greater than the mean to $1$ and the others to $0$.
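The assembly of $\Sigma$ from the network outputs can be sketched as below (our own numpy sketch; the Softplus and tanh heads that produce the $\sigma$ and $\rho$ values are assumed to run upstream, so the inputs here are already constrained to $\sigma_i > 0$ and $|\rho| < 1$):

```python
import numpy as np

def covariance(sig, rho):
    """Build the 3x3 covariance matrix from sigmas (sig = [s1, s2, s3],
    Softplus outputs, hence positive) and pairwise correlation coefficients
    (rho = [r12, r13, r23], tanh outputs, hence in (-1, 1)).
    The result is symmetric by construction."""
    s1, s2, s3 = sig
    r12, r13, r23 = rho
    return np.array([
        [s1 * s1,       r12 * s1 * s2, r13 * s1 * s3],
        [r12 * s1 * s2, s2 * s2,       r23 * s2 * s3],
        [r13 * s1 * s3, r23 * s2 * s3, s3 * s3],
    ])

S = covariance([1.0, 2.0, 3.0], [0.1, 0.2, 0.3])
```

For the 2D case, the same construction reduces to a $2\times2$ matrix with a single correlation coefficient.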
\subsubsection{Control Layer}
We use the same network architecture for the lower layer as proposed by~\cite{levy2018hierarchical}, i.e., three fully connected layers with $\relu$ activation functions. The output of the control layer is activated with $\tanh$ and then scaled to the action range.
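As a sketch, the actor forward pass reads as follows (numpy, with randomly initialized weights; the layer sizes are hypothetical, only the ReLU hidden layers and the scaled tanh output follow the description above):

```python
import numpy as np

def mlp_actor(state, weights, action_scale):
    """Sketch of the control-layer actor: fully connected ReLU hidden layers
    followed by a tanh output scaled to the action range."""
    x = np.asarray(state, dtype=float)
    for W, b in weights[:-1]:
        x = np.maximum(0.0, x @ W + b)        # fully connected + ReLU
    W, b = weights[-1]
    return action_scale * np.tanh(x @ W + b)  # bounded to [-scale, scale]

# Hypothetical sizes: 4-dim state, three hidden layers of 64 units, 2-dim action.
rng = np.random.default_rng(0)
sizes = [(4, 64), (64, 64), (64, 64), (64, 2)]
weights = [(rng.standard_normal(s), np.zeros(s[1])) for s in sizes]
action = mlp_actor(rng.standard_normal(4), weights, action_scale=2.0)
```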
\subsubsection{Training Parameters}
\begin{itemize}
\itemsep0em
\item Discount $\gamma = 0.98$ for all agents.
\item Adam optimizer with learning rate $0.001$ for all actors and critics.
\item Soft updates using a moving average; $\tau=0.05$ for all controllers.
\item Replay buffer sized to store 500 episodes, similar to~\cite{levy2018hierarchical}.
\item $40$ updates on each layer after each epoch, once the replay buffer contained at least 256 transitions.
\item Batch size 1024.
\item No gradient clipping.
\item Rewards 0 and -1 without any normalization.
\item Observations were not normalized either.
\item 2 HER transitions per transition using the \textsc{future} strategy~\cite{andrychowicz2017hindsight}.
\item Exploration noise: 0.05 and 0.1 for the planning and control layers, respectively.
\end{itemize}
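For reference, the settings above can be gathered into a single configuration dict. This is an illustrative grouping with key names of our own choosing, not the authors' code.

```python
# Hyperparameters from the list above, collected for readability.
HYPERPARAMS = {
    "gamma": 0.98,                  # discount for all agents
    "optimizer": "adam",
    "learning_rate": 1e-3,          # all actors and critics
    "polyak_tau": 0.05,             # soft target updates
    "replay_buffer_episodes": 500,
    "updates_per_epoch": 40,        # per layer
    "min_buffer_transitions": 256,
    "batch_size": 1024,
    "gradient_clipping": None,
    "reward_normalization": False,
    "observation_normalization": False,
    "her_transitions": 2,
    "her_strategy": "future",
    "exploration_noise": {"planning": 0.05, "control": 0.1},
}
```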
\subsection{Computational Infrastructure}
All HiDe, HAC and HIRO experiments were trained on 1 GPU (GTX 1080). OpenAI DDPG+HER baselines were trained on 19 CPUs using the baseline repository \cite{baselines}.
\section{Additional Results}
\label{additional_results}
\subsection{Maze Navigation}
\vspace{-0.5cm}
\begin{figure}[h]%
\centering
\hspace{-0.5cm}
\subfloat[Simple Maze Navigation]{{
\includegraphics[width=0.5\columnwidth]{figures/exp1_environments.png} }}%
\qquad
\hspace{-0.3 cm}
\subfloat[Complex Maze Navigation]{
{\includegraphics[width=0.4\columnwidth]{figures/exp2_envs_subset.png} }}%
\caption{Maze environments for the tasks reported in a) Section \ref{sec5_exp1} and \ref{sec:ablation_study}, b) Section \ref{subsec_mazenav_complex}. The red sphere indicates the goal, the pink cubes represent the subgoals. Agents are trained in only \textit{Training} and tested in \textit{Backward}, \textit{Flipped}, and \textit{Random} environments.}%
\label{fig:navigation_envs}%
\vspace{-0.3cm}
\end{figure}
\begin{table}[H]
\vspace{-0.5cm}
\caption{Success rates of the individual seeds for achieving a goal in the maze navigation tasks.}
\centering
\begin{tabular}{@{}c|ccc|cccc@{}}
\toprule
\multicolumn{1}{}{} & \multicolumn{3}{c}{Simple Maze} & \multicolumn{4}{c}{Complex Maze} \\ \midrule
Method & Training & Backward & Flipped & Training & Backward & Flipped & Random \\ \midrule
HAC 1 & 96.4 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 \\
HAC 2 & 82.0 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 \\
HAC 3 & 85.6 & 00.4 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 \\
HAC 4 & 92.8 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 \\
HAC 5 & 55.6 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 \\ \midrule
HIRO 1 & 89.0 & 00.0 & 00.0 & 68.0 & 00.0 & 00.0 & 44.6 \\
HIRO 2 & 89.2 & 00.0 & 00.0 & 64.0 & 00.0 & 00.0 & 36.2 \\
HIRO 3 & 94.0 & 00.0 & 00.0 & 80.0 & 00.0 & 00.0 & 33.6 \\
HIRO 4 & 91.3 & 00.0 & 00.0 & 61.0 & 00.0 & 00.0 & 32.8 \\
HIRO 5 & 90.8 & 00.0 & 00.0 & 81.0 & 00.0 & 00.0 & 34.4 \\ \midrule
RRT+LL 1 & 13.8 & 24.0 & 19.6 & 4.8 & 5.6 & 1.2 & 43.6 \\
RRT+LL 2 & 40.2 & 6.2 & 45.2 & 6.8 & 4.6 & 13.6 & 50.6 \\
RRT+LL 3 & 20.2 & 23.8 & 29.0 & 17.0 & 8.4 & 0.4 & 47.0 \\
RRT+LL 4 & 37.2 & 18.6 & 30.0 & 25.6 & 2.2 & 3.4 & 47.2 \\
RRT+LL 5 & 15.0 & 38.2 & 23.4 & 9.4 & 13.2 & 5.8 & 50.6 \\ \midrule
HIRO-LR 1 & 84.2 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 & 14.4 \\
HIRO-LR 2 & 80.0 & 00.0 & 00.0 & 00.0 & 00.0 & 00.0 & 10.4 \\
HIRO-LR 3 & 79.8 & 00.0 & 00.0 & 48.2 & 00.0 & 00.0 & 16.2 \\
HIRO-LR 4 & 76.0 & 00.0 & 00.0 & 20.4 & 00.0 & 00.0 & 26.6 \\
HIRO-LR 5 & 96.8 & 00.0 & 00.0 & 33.8 & 00.0 & 00.0 & 9.6 \\ \midrule
HiDe 1 & 93.4 & 90.2 & 94.0 & 86.4 & 82.6 & 87.4 & 88.6 \\
HiDe 2 & 90.8 & 68.2 & 91.8 & 83.4 & 66.0 & 66.0 & 81.6 \\
HiDe 3 & 94.8 & 91.4 & 96.2 & 87.6 & 84.6 & 91.0 & 85.4 \\
HiDe 4 & 94.0 & 85.2 & 92.6 & 87.2 & 76.6 & 77.6 & 83.2 \\
HiDe 5 & 96.2 & 87.8 & 92.2 & 89.0 & 87.4 & 77.4 & 87.6 \\ \bottomrule
\end{tabular}
\label{tab:exp2_appendix}
\end{table}
\begin{table}[H]
\centering
\caption{Results of the simple maze navigation for HAC and HIRO when provided with an image. Both methods fail to solve the task, as the state space complexity is too high.}
\begin{tabular}{@{}c|ccc@{}}
\toprule
& Training & Backward & Flipped \\ \midrule
HAC with Image & $0.0 \pm 0.0$ & $0.0 \pm 0.0$ & $0.0 \pm 0.0$ \\
HIRO with Image & $0.0 \pm 0.0$ & $0.0 \pm 0.0$ & $0.0 \pm 0.0$ \\ \bottomrule
\end{tabular}
\vspace{0.1cm}
\label{tab:hrl_image_fail}
\end{table}
\begin{figure}[H]%
\centering
\hspace{-0.5cm}
\vspace{-0.1cm}
\subfloat[]{{
\includegraphics[width=0.45\columnwidth]{figures/exp1_curves.pdf} }}
\qquad
\hspace{-0.7 cm}
\subfloat[]{
{\includegraphics[width=0.45\columnwidth]{figures/exp2_curves.pdf} }}%
\caption{Learning curves with success rates for the training tasks averaged over 5 seeds for a) the simple maze experiment from Section \ref{sec5_exp1}, b) the complex maze experiment from Section \ref{subsec_mazenav_complex}. HiDe matches the convergence properties of HIRO in the simple maze (left), despite having a much larger state space in the planning layer. In the more complex maze (right), HiDe shows similar convergence, while HIRO converges more slowly.}%
\label{fig:curves}%
\vspace{-0.5cm}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\columnwidth]{figures/hide_env_increase_barplot.png}
\vspace{-0.2cm}
\caption{Success rates for the training task when gradually increasing the environment size and number of obstacles. At a 300\% increase, HiDe's performance drops to 64\%. See Table \ref{tab:hiro_lr_image_res} for detailed per-seed results.}%
\label{fig:env_increase}
\end{figure}
\begin{table}[H]
\centering
\caption{Results for individual seeds of the HiDe experiments with gradually increasing number of obstacles and environment size.}
\begin{tabular}{@{}c|ccccc|c@{}}
\toprule
Maze Training & Seed 1 & Seed 2 & Seed 3 & Seed 4 & Seed 5 & Averaged \\ \midrule
HiDe env + 100\% & 84.0 & 83.4 & 82.5 & 75.8 & 75.8 & $80.3 \pm 4$ \\
HiDe env + 150\% & 74.2 & 85.4 & 81.4 & 75.6 & 84.8 & $80.3 \pm 5$ \\
HiDe env + 225\% & 78.0 & 69.6 & 70.8 & 82.2 & 91.4 & $78.4 \pm 9$ \\
HiDe env + 300\% & 62.8 & 58.8 & 69.8 & 66.4 & 64.0 & $64.4 \pm 4$ \\ \bottomrule
\end{tabular}
\vspace{0.1cm}
\label{tab:hiro_lr_image_res}
\end{table}
\subsection{Ablation Studies}
\begin{table}[H]
\centering
\begin{tabular}{@{}c|ccc@{}}
\toprule
Experiment & Training & Backward & Flipped \\ \midrule
HiDe-A 1 & 88.2 & 5.4 & 49.4 \\
HiDe-A 2 & 89.2 & 28.4 & 53.2 \\
HiDe-A 3 & 84.8 & 35.8 & 29.6 \\
HiDe-A 4 & 91.2 & 13.6 & 32.2 \\
HiDe-A 5 & 88.2 & 00.0 & 14.4 \\ \midrule
HiDe-RRT 1 & 82.0 & 35.2 & 78.8 \\
HiDe-RRT 2 & 80.0 & 72.4 & 75.0 \\
HiDe-RRT 3 & 73.6 & 53.6 & 69.4 \\
HiDe-RRT 4 & 63.4 & 50.6 & 63.4 \\
HiDe-RRT 5 & 85.6 & 54.0 & 74.2 \\ \midrule
HiDe 3x3 1 & 28.4 & 0.0 & 4.4 \\
HiDe 3x3 2 & 67.6 & 0.6 & 44.0 \\
HiDe 3x3 3 & 59.6 & 0.4 & 36.4 \\
HiDe 3x3 4 & 0.0 & 0.0 & 0.0 \\
HiDe 3x3 5 & 76.8 & 6.6 & 67.8 \\ \midrule
HiDe 5x5 1 & 91.0 & 82.8 & 97.0 \\
HiDe 5x5 2 & 94.6 & 10.6 & 89.6 \\
HiDe 5x5 3 & 88.0 & 72.4 & 87.2 \\
HiDe 5x5 4 & 91.2 & 14.4 & 83.6 \\
HiDe 5x5 5 & 97.6 & 83.6 & 50.6 \\ \midrule
HiDe 9x9 1 & 90.6 & 9.4 & 81.2 \\
HiDe 9x9 2 & 92.4 & 64.4 & 79.4 \\
HiDe 9x9 3 & 89.8 & 1.2 & 85.6 \\
HiDe 9x9 4 & 99.0 & 4.0 & 83.0 \\
HiDe 9x9 5 & 94.6 & 3.0 & 68.2 \\ \bottomrule
\end{tabular}
\vspace{0.1cm}
\caption{Individual seed results of the ablation study for the simple maze navigation task.}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{@{}c|ccccc|c@{}}
\toprule
& Ant 1 & Ant 2 & Ant 3 & Ant 4 & Ant 5 & Averaged \\ \midrule
Training & 0.0 & 78.8 & 0.0 & 0.0 & 0.0 & $16 \pm 35$\\
Random & 67.2 & 88.8 & 17.6 & 0.0 & 0.0 & $35 \pm 41$\\
Backward & 0.0 & 62.2 & 0.0 & 0.0 & 0.0 & $12 \pm 28$ \\
Flipped & 0.0 & 79.8 & 0.0 & 0.0 & 0.0 & $16 \pm 36$ \\ \bottomrule
\end{tabular}
\vspace{0.1cm}
\caption{Results for the complex maze navigation task with HiDe-Absolute.}
\label{tab:abs_complex}
\end{table}
\vspace{0.5cm}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\columnwidth]{figures/dynamic_static_window.PNG}
\caption{A visual comparison of \textit{(left)} our dynamic attention window with a \textit{(right)} fixed neighborhood. The green dot corresponds to the selected subgoal in this case. Notice how our window is shaped so that it avoids the wall and induces a further subgoal.}\label{fig:window_comparison}
\end{figure}
\subsection{Agent Transfer}
\label{appendix:transfer}
\begin{table}[H]
\centering
\begin{tabular}{@{}c|ccccc|c@{}}
\toprule
& A$\rightarrow$B 1 & A$\rightarrow$B 2 & A$\rightarrow$B 3 & A$\rightarrow$B 4 & A$\rightarrow$B 5 & Averaged \\ \midrule
Training & 99.8 & 100 & 100 & 100 & 100 & $100 \pm 0$ \\
Random & 98.4 & 98.6 & 98.0 & 98.6 & 98.8 & $98 \pm 0$ \\
Backward & 100 & 100 & 99.8 & 100 & 100 & $100 \pm 0$ \\
Flipped & 99.6 & 100 & 99.6 & 100 & 100 & $100 \pm 0$ \\ \bottomrule
\end{tabular}
\vspace{0.1cm}
\caption{Results of HiDe for ant to ball transfer for individual seeds.}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{@{}c|ccccc|c@{}}
\toprule
& B$\rightarrow$A 1 & B$\rightarrow$A 2 & B$\rightarrow$A 3 & B$\rightarrow$A 4 & B$\rightarrow$A 5 & Averaged \\ \midrule
Training & 73.4 & 73.4 & 71.2 & 68.2 & 71.6 & $72 \pm 2$ \\
Random & 86.2 & 84.4 & 83.8 & 82.6 & 78.0 & $83 \pm 3$ \\
Backward & 56.8 & 48.4 & 66.4 & 57.8 & 58.4 & $58 \pm 6$ \\
Flipped & 74.4 & 77.4 & 80.0 & 61.6 & 72.8 & $73 \pm 7$ \\ \bottomrule
\end{tabular}
\vspace{0.1cm}
\caption{Results of HiDe for ball to ant transfer for individual seeds.}
\label{exp:agent_transfer}
\end{table}
\subsection{Robotic Arm Manipulation}
\begin{table}[H]
\centering
\begin{tabular}{@{}c|ccccc|c@{}}
\toprule
& Arm 1 & Arm 2 & Arm 3 & Arm 4 & Arm 5 & Averaged \\ \midrule
Random & 50 & 48 & 47 & 48 & 51 & $49 \pm 1$ \\
\bottomrule
\end{tabular}
\vspace{0.1cm}
\caption{Results of the different seeds for the domain transfer experiments.}
\label{tab:domain_transfer}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=1\columnwidth, trim={20 350 20 10},clip]{figures/arm.png}
\caption{Example of three randomly configured test environments we use to demonstrate the domain transfer of the planning layer from a locomotion domain to a manipulation robot.}
\label{fig:arm}
\end{figure}
\newpage
\section{Algorithm}
\label{algorithm}
\begin{algorithm}[H]
\caption{Hierarchical Decompositional Reinforcement Learning (HiDe)}\label{alg:HiDe}
\begin{algorithmic}
\State \textbf{Input:}
\begin{itemize}
\item Agent position $s_{pos}$, goal position $G$, and the projection from environment coordinates to image coordinates and its inverse, $Proj$, $Proj^{-1}$.
\end{itemize}
\State \textbf{Parameters:}
\begin{enumerate}
\item maximum subgoal horizon $H=40$, subgoal testing frequency $\lambda=0.3$
\end{enumerate}
\State \textbf{Output:}
\begin{itemize}
\item $k=2$ trained actor and critic functions $\pi_0, \ldots, \pi_{k-1}, Q_0, \ldots, Q_{k-1}$
\end{itemize}
\State
\For{$M$ episodes} \Comment{Train for M episodes}
\State $s \leftarrow$ $S_{init}$, $g$ $\leftarrow$ $G_{k-1}$ \Comment{Get initial state and task goal}
\State $train\_level$($k-1$, $s$, $g$) \Comment{Begin training at the top level}
\State Update all actor and critic networks
\EndFor
\State \Function{$\pi_1$}{$s::state, g::goal$}
\State $v_{map}$ $\gets$ $MVProp(I, g_1)$ \Comment{Run MVProp on top-down view image and goal position}
\State $\sigma, \rho$ $\gets$ $CNN(v_{map}, Proj(s_{pos}))$ \Comment{Predict mask parameters defining the covariance $\Sigma$}
\State $v$ $\gets$ $v_{map} \odot \mathcal{N}(\cdot | s_{pos}, \Sigma)$ \Comment{Mask the value map with a Gaussian window}
\State \textbf{return} $a_1$ $\gets$ $Proj^{-1}(\argmax v) - s_{pos}$ \Comment{Output relative subgoal corresponding to the max value pixel}
\EndFunction
\State \Function{$\pi_0$}{$s::joints\_state, g::relative\_subgoal$}
\State \textbf{return} $a_0$ $\gets$ $MLP(s, g)$ \Comment{Output actions for actuators}
\EndFunction
\State
\Function{train\_level}{$i::level, s::state, g::goal$}
\State $s_i \gets s$, $g_i \gets g$ \Comment{Set current state and goal for level $i$}
\For{$H$ attempts or until $g_n$, $i \leq n < k$ achieved}
\State $a_i$ $\gets$ $\pi_i(s_i, g_i)$ + $noise$ (if not subgoal testing) \Comment{Sample (noisy) action from policy}
\If{$i > 0$}
\State Determine whether to test subgoal $a_i$
\State $s_{i}^{'} \gets$ $train\_level$($i-1, s_i, a_i$) \Comment{Train level $i-1$ using subgoal $a_i$}
\Else
\State Execute primitive action $a_0$ and observe next state $s_{0}^{'}$
\EndIf
\State \Comment{Create replay transitions}
\If{$i > 0$ and $a_i$ not reached}
\If{$a_i$ was subgoal tested} \Comment{Penalize subgoal $a_i$}
\State $Replay\_Buffer_i \gets [s = s_i, a = a_i, r = Penalty, s^{'} = s_{i}^{'}, g = g_{i}, \gamma = 0]$
\EndIf
\State $a_i \gets s_i^{'}$ \Comment{Replace original action with action executed in hindsight}
\EndIf
\State \Comment{Evaluate executed action on current goal and hindsight goals}
\State $Replay\_Buffer_i \gets [s = s_i, a = a_i, r \in \{-1,0\}, s^{'} = s_{i}^{'}, g = g_{i}, \gamma \in \{\gamma, 0\}]$
\State $HER\_Storage_i \gets [s = s_i, a = a_i, r = TBD, s^{'} = s_{i}^{'}, g = TBD, \gamma = TBD]$
\State $s_i \gets s_i^{'}$
\EndFor
\State $Replay\_Buffer_i \gets$ Perform HER using $HER\_Storage_i$ transitions
\State \textbf{return} $s_i^{'}$\Comment{Output current state}
\EndFunction
\end{algorithmic}
\end{algorithm}
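The planning-layer policy $\pi_1$ above combines MVProp-style value propagation on a top-down image with a Gaussian mask centered on the agent. The following Python sketch illustrates this mechanism on a grid; the function names, the fixed decay factor, and the isotropic window (HiDe predicts a full covariance $\Sigma$ with a CNN) are our simplifications for illustration, not the paper's implementation.

```python
import numpy as np

def neighbor_max(v):
    """Max over the 4-neighborhood of each pixel (zero outside the grid)."""
    p = np.pad(v, 1, constant_values=0.0)
    return np.max(np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                            p[1:-1, :-2], p[1:-1, 2:]]), axis=0)

def mvprop(occupancy, goal, gamma=0.8, iters=40):
    """MVProp-style value propagation on a top-down occupancy image.

    occupancy: 2D array in [0, 1] (~1 free space, ~0 walls); goal: (row, col).
    Values decay with distance to the goal and do not pass through walls,
    because each update is scaled by the receiving cell's occupancy.
    """
    v = np.zeros_like(occupancy, dtype=float)
    v[goal] = 1.0
    for _ in range(iters):
        v = np.maximum(v, occupancy * gamma * neighbor_max(v))
    return v

def select_subgoal(v_map, agent_pos, sigma):
    """Mask the value map with a Gaussian window centered on the agent and
    return the argmax pixel as a subgoal *relative* to the agent (cf. pi_1)."""
    rows, cols = np.indices(v_map.shape)
    d2 = (rows - agent_pos[0]) ** 2 + (cols - agent_pos[1]) ** 2
    mask = np.exp(-d2 / (2.0 * sigma ** 2))  # isotropic stand-in for N(.|s_pos, Sigma)
    best = np.unravel_index(np.argmax(v_map * mask), v_map.shape)
    return (best[0] - agent_pos[0], best[1] - agent_pos[1])
```

On an empty grid with the goal in one corner and the agent in the opposite corner, the selected subgoal points from the agent toward the goal rather than jumping directly to it, which is the intended effect of the attention window.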
\section{Conclusion}
In this paper, we introduce a novel HRL architecture that can solve long-horizon, continuous control tasks with sparse rewards that require planning. The architecture, which is trained end-to-end, consists of an RL-based planning layer, which learns an explicit value map, connected to a low-level control layer. Our method generalizes to previously unseen settings and environments. Furthermore, we show that planners can be transferred between different agents, enabling us to move a planner trained with a simple agent to a more complex agent, such as a humanoid or a robot manipulator.
The key insight lies in a strict separation of concerns, combined with task-relevant priors that allow for efficient planning and, in consequence, lead to better generalization. Interesting directions for future work include extensions to higher-dimensional planning tasks and multi-agent scenarios.
\section*{Broader Impact}
Our method HiDe has potential applications in the field of continuous control in robotics, specifically for navigation and manipulation robots. Example applications include a robot navigating a warehouse or pick-and-place tasks for a manipulator.
So far, RL methods have struggled to demonstrate good generalization across training and test environments, leaving much work to be done before they are applicable in the real world. We see our work as one important building block towards this goal and provide one example of how such generalization can be achieved.
While our work is mostly academic in nature and practical applications are still far-off, we see potential benefits in advancements in automation of tedious manual tasks, for example, in healthcare, factories and the supply chain. While automation may render some occupations redundant, it can also create new jobs and hence opportunities to alter the type of work we do. Moreover, robots may fill in or assist jobs in areas with labor shortages, such as in health and elderly care.
One potential risk in deploying RL robots is the lack of interpretability and explainability of actions, which is an active area of research in and of itself.
Enforcing a strict separation of subtasks in a hierarchical system such as HiDe may be helpful in understanding the behavior of robots trained with RL. More specifically, a human may not need to understand the inner workings of a robot (the control), but rather its decisions on a more abstract level (planning). This may also have implications for future systems that let people interactively shape and train complex robotic systems.
\section*{Acknowledgements}
\begin{wrapfigure}{r}{4.0cm}
\vspace{-1.0cm}
\includegraphics[width=4.0cm]{figures/erc.png}
\vspace{-0.9cm}
\end{wrapfigure}
The authors would like to thank Stefan Stev{\v{s}}i{\'{c}} and Christoph Gebhardt for their valuable comments and discussions throughout this project, and Alexis E. Block for the help with the preparation of the contributed talk. The project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme grant agreement No 717054.
\chapter{\texorpdfstring{$2$}{2}-Categories and Bicategories}
\label{ch:2cat_bicat}
In this chapter we define bicategories and $2$-categories. The definition of a bicategory and a series of examples are given in \Cref{sec:bicategories}. Several useful unity properties in bicategories are presented in \Cref{sec:bicategory-unity}. The definition of a $2$-category and a series of examples are given in \Cref{sec:2categories}. In \Cref{sec:multicategories,sec:polycat-2cat} we discuss the $2$-categories of multicategories and of polycategories, generalizing the $2$-category of small categories, functors, and natural transformations. Dualities of bicategories are discussed in \Cref{sec:dualities}.
\section{Bicategories}\label{sec:bicategories}
In this section we give the detailed definition of a bicategory and some examples.
\begin{convention}\label{not:discrete-cat1}
Recall from \cref{notation:terminal-category} that $\boldone$ denotes the category with one object $*$ and only its identity morphism. For a category $\C$, we usually identify the categories $\C\times \boldone$ and $\boldone \times \C$ with $\C$, and regard the canonical isomorphisms between them as $1_{\C}$. For an object $X$ in $\C$, its identity morphism $1_X$ is also denoted by $X$.\dqed
\end{convention}
\begin{motivation}
As we pointed out in Example \ref{ex:monoid-as-cat}, a monoid $(X,\mu,\operadunit)$ in $\Set$ may be regarded as a category $\Sigma X$ with one object $*$, morphism set $\Sigma X(*,*)=X$, identity morphism $1_*=\operadunit$, and composition $\mu$. The associativity and unity axioms of the monoid $X$ become the associativity and unity axioms of the category $\Sigma X$. So a category is a multi-object version of a monoid. In a similar way, a bicategory, to be defined shortly, is a multi-object\index{monoidal category!as a one-object bicategory} version of a monoidal category as in \Cref{def:monoidal-category}.\dqed
\end{motivation}
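The monoid-to-category construction $\Sigma X$ described above can be sketched in code; the class and the choice of the additive monoid $\mathbb{Z}/4$ are ours, purely for illustration.

```python
class OneObjectCategory:
    """The category Sigma X built from a monoid (X, mu, e): one object *,
    morphism set Sigma X(*, *) = X, composition mu, identity morphism e."""
    def __init__(self, elements, mu, unit):
        self.obj = "*"
        self.hom = set(elements)   # all morphisms * -> *
        self.compose = mu          # composition = monoid multiplication
        self.identity = unit       # 1_* = monoid unit

# Example: the additive monoid Z/4.
mod4 = OneObjectCategory(range(4), lambda g, f: (g + f) % 4, 0)

# The category axioms for Sigma X are exactly the monoid axioms for X.
for f in mod4.hom:
    assert mod4.compose(mod4.identity, f) == f == mod4.compose(f, mod4.identity)
    for g in mod4.hom:
        for h in mod4.hom:
            assert (mod4.compose(h, mod4.compose(g, f))
                    == mod4.compose(mod4.compose(h, g), f))
```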
\begin{definition}\label{def:bicategory}
A \emph{bicategory}\index{bicategory}\index{category!bi-} is a tuple \[\bigl(\B, 1, c, a, \ell, r\bigr)\] consisting of the following data.
\begin{description}
\item[Objects]\label{notation:obb} $\B$ is equipped with a class $\Ob(\B) = \B_0$, whose elements are called \emph{objects}\index{object!bicategory} or \index{0-cell}\emph{$0$-cells} in $\B$. If $X\in \B_0$, we also write $X\in \B$.
\item[Hom Categories] For each pair of objects $X,Y\in\B$, $\B$ is equipped with a category $\B(X,Y)$, called a \index{hom category}\index{category!hom}\emph{hom category}.
\begin{itemize}
\item Its objects are called \emph{$1$-cells}\index{1-cell} in $\B$. The collection of all the $1$-cells in $\B$ is denoted by $\B_1$.
\item Its morphisms are called \emph{$2$-cells}\index{2-cell} in $\B$. The collection of all the $2$-cells in $\B$ is denoted by $\B_2$.
\item Composition and identity morphisms in the category $\B(X,Y)$ are called \emph{vertical composition}\index{vertical composition!bicategory} and \index{identity 2-cell}\emph{identity $2$-cells}, respectively.
\item An isomorphism in $\B(X,Y)$ is called an \index{invertible 2-cell}\emph{invertible $2$-cell}, and its inverse is called a \index{vertical inverse}\emph{vertical inverse}.
\item For a $1$-cell $f$, its identity $2$-cell is denoted by $1_f$.
\end{itemize}
\item[Identity $1$-Cells]\label{notation:id-one-cell} For each object $X\in\B$, \[1_X : \boldone \to \B(X,X)\] is a functor. We identify the functor $1_X$ with the $1$-cell $1_X(*)\in\B(X,X)$, called the \index{identity 1-cell}\emph{identity $1$-cell of $X$}.
\item[Horizontal Composition]\label{notation:hor-comp} For each triple of objects $X,Y,Z \in \B$,
\[c_{XYZ} : \B(Y,Z) \times \B(X,Y) \to \B(X,Z)\]
is a functor, called the \index{horizontal composition!bicategory}\emph{horizontal composition}. For $1$-cells $f \in \B(X,Y)$ and $g \in \B(Y,Z)$, and $2$-cells $\alpha \in \B(X,Y)$ and $\beta \in \B(Y,Z)$, we use the notations
\[\begin{split}
c_{XYZ}(g,f) &= g \circ f \orspace gf,\\
c_{XYZ}(\beta,\alpha) &= \beta * \alpha.
\end{split}\]
\item[Associator]\label{notation:associator} For objects $W,X,Y,Z \in \B$,
\[a_{WXYZ} : c_{WXZ} \bigl(c_{XYZ} \times \Id_{\B(W,X)}\bigr) \to c_{WYZ}\bigl(\Id_{\B(Y,Z)} \times c_{WXY}\bigr)\]
is a natural isomorphism, called the \index{associator}\emph{associator}, between functors
\[\B(Y,Z) \times \B(X,Y) \times \B(W,X) \to \B(W,Z).\]
\item[Unitors]\label{notation:unitors} For each pair of objects $X,Y\in \B$,
\[\begin{tikzcd}
c_{XYY} \bigl(1_Y \times \Id_{\B(X,Y)}\bigr) \rar{\ell_{XY}} & \Id_{\B(X,Y)}
& c_{XXY}\bigl(\Id_{\B(X,Y)} \times 1_X\bigr) \lar[swap]{r_{XY}}\end{tikzcd}\]
are natural isomorphisms, called the \index{left unitor}\emph{left unitor} and the \index{right unitor}\emph{right unitor}, respectively.
\end{description}
The subscripts in $c$ will often be omitted. The subscripts in $a$, $\ell$, and $r$ will often be used to denote their components. The above data are required to satisfy the following two axioms for $1$-cells $f \in \B(V,W)$, $g \in \B(W,X)$, $h \in \B(X,Y)$, and $k \in \B(Y,Z)$.
\begin{description}
\item[Unity Axiom] The middle unity diagram\index{unity!bicategory}
\begin{equation}\label{bicat-unity}
\begin{tikzcd}[column sep=small] (g 1_W)f \arrow{rr}{a} \arrow[shorten >=-4pt]{rd}[swap]{r_g * 1_f} && g(1_W f) \arrow[shorten >=-4pt]{ld}{1_g * \ell_f}\\ & gf
\end{tikzcd}
\end{equation}
in $\B(V,X)$ is commutative.
\item[Pentagon Axiom]\index{pentagon axiom!bicategory}
The diagram
\begin{equation}\label{bicat-pentagon}
\begin{tikzpicture}[commutative diagrams/every diagram]
\node (P0) at (90:2cm) {$(kh)(gf)$};
\node (P1) at (90+72:2cm) {$((kh)g)f$} ;
\node (P2) at (220:1.6cm) {\makebox[3ex][r]{$(k(hg))f$}};
\node (P3) at (-40:1.6cm) {\makebox[3ex][l]{$k((hg)f)$}};
\node (P4) at (90+4*72:2cm) {$k(h(gf))$};
\path[commutative diagrams/.cd, every arrow, every label]
(P0) edge node {$a_{k,h,gf}$} (P4)
(P1) edge node {$a_{kh,g,f}$} (P0)
(P1) edge node[swap] {$a_{k,h,g} * 1_f$} (P2)
(P2) edge node {$a_{k,hg,f}$} (P3)
(P3) edge node[swap] {$1_k * a_{h,g,f}$} (P4);
\end{tikzpicture}
\end{equation}
in $\B(V,Z)$ is commutative.
\end{description}
This finishes the definition of a bicategory.
\end{definition}
\begin{explanation}\label{expl:bicategory}
We usually abbreviate a bicategory as above to $\B$.
\begin{enumerate}
\item We assume the hom categories $\B(X,Y)$ for objects $X,Y\in\B$ are disjoint. If not, we tacitly replace them with their disjoint union.
\item In each hom category $\B(X,Y)$, the vertical composition of $2$-cells is associative and unital in the strict sense. In other words, for $1$-cells $f,f',f''$, and $f'''$ in $\B(X,Y)$, and $2$-cells $\alpha : f \to f'$, $\alpha' : f' \to f''$, and $\alpha'' : f'' \to f'''$, the equalities\index{hom category!associativity and unity}\index{associativity!hom category}\index{unity!hom category}
\begin{equation}\label{hom-category-axioms}
\begin{split}
(\alpha'' \alpha') \alpha &= \alpha'' (\alpha' \alpha),\\
\alpha= \alpha 1_f &= 1_{f'} \alpha
\end{split}
\end{equation}
hold.
\item For $1$-cells $f,f' \in \B(X,Y)$, we display each $2$-cell $\alpha : f \to f'$ in diagrams as\label{notation:double-arrow}
\[\begin{tikzpicture}[commutative diagrams/every diagram]
\node (X) at (-1,0) {$X$}; \node (Y) at (1,0) {$Y$};
\node[font=\Large] at (-.1,0) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (.15,0) {$\alpha$};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge [bend left] node[above] {$f$} (Y)
edge [bend right] node[below] {$f'$} (Y);
\end{tikzpicture}\]
with a double arrow\index{2-cell!double arrow notation} for the $2$-cell. With this notation, the horizontal composition $c_{XYZ}$ is the assignment
\[\begin{tikzpicture}[commutative diagrams/every diagram]
\node (X) at (-1,0) {$X$}; \node (Y) at (1,0) {$Y$}; \node (Z) at (3,0) {$Z$};
\node (X1) at (5,0) {$X$}; \node (Z1) at (7,0) {$Z$};
\node[font=\Large] at (-.1,0) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (.15,0) {$\alpha$};
\node[font=\Large] at (1.9,0) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (2.15,0) {$\beta$};
\node[font=\Large] at (5.7,0) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (6.2,0) {$\beta*\alpha$};
\node at (4,0) {$\mapsto$};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge [bend left] node[above] {$f$} (Y)
(X) edge [bend right] node[below] {$f'$} (Y)
(Y) edge [bend left] node[above] {$g$} (Z)
(Y) edge [bend right] node[below] {$g'$} (Z)
(X1) edge [bend left] node[above] {$gf$} (Z1)
(X1) edge [bend right] node[below] {$g'f'$} (Z1);
\end{tikzpicture}\]
for $1$-cells $f,f' \in \B(X,Y)$, $g,g' \in \B(Y,Z)$ and $2$-cells $\alpha : f \to f'$, $\beta : g \to g'$.
\item That the horizontal composition $c_{XYZ}$ is a functor means:
\begin{enumerate}
\item It preserves identity\index{horizontal composition!preserves identity 2-cells} $2$-cells, i.e.,
\begin{equation}\label{bicat-c-id}
1_g * 1_f = 1_{gf}
\end{equation}
in $\B(X,Z)(gf,gf)$.
\item It preserves\index{horizontal composition!preserves vertical composition} vertical composition, i.e.,
\begin{equation}\label{middle-four}
(\beta'\beta) * (\alpha'\alpha) = (\beta'*\alpha')(\beta*\alpha)
\end{equation}
in $\B(X,Z)(gf,g''f'')$ for $1$-cells $f'' \in \B(X,Y)$, $g'' \in \B(Y,Z)$ and $2$-cells $\alpha' : f'\to f''$, $\beta' : g' \to g''$.
\end{enumerate}
The equality \eqref{middle-four} is called the \index{middle four exchange}\emph{middle four exchange}. It may be visualized as the equality of the two ways to compose the diagram
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=1.5]
\node (X) at (-1,0) {$X$}; \node (Y) at (1,0) {$Y$}; \node (Z) at (3,0) {$Z$};
\node[font=\Large] at (-.1,.3) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (.1,.3) {$\alpha$};
\node[font=\Large] at (1.9,.3) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (2.1,.3) {$\beta$};
\node[font=\Large] at (-.1,-.3) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (.1,-.3) {$\alpha'$};
\node[font=\Large] at (1.9,-.3) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (2.1,-.3) {$\beta'$};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge [bend left=60] node[above] {$f$} (Y)
(X) edge node[near start] {$f'$} (Y)
(Y) edge [bend left=60] node[above] {$g$} (Z)
(Y) edge node[near start] {$g'$} (Z)
(X) edge [bend right=60] node[below] {$f''$} (Y)
(Y) edge [bend right=60] node[below] {$g''$} (Z);
\end{tikzpicture}\]
down to a single $2$-cell.
\item Horizontal composition is associative up to the specified natural isomorphism $a$. So for $1$-cells $f \in \B(W,X)$, $g\in \B(X,Y)$, and $h\in \B(Y,Z)$, the component of $a$ is an invertible $2$-cell\index{associator!component}
\begin{equation}\label{associator-component}
\begin{tikzcd} a_{h,g,f} : (hg)f \rar{\cong} & h(gf)\end{tikzcd}
\end{equation}
in $\B(W,Z)$. The naturality\index{associator!naturality} of $a$ means that, for $2$-cells $\alpha : f \to f'$, $\beta : g \to g'$, and $\gamma : h \to h'$, the diagram
\begin{equation}\label{associator-naturality}
\begin{tikzcd}[column sep=large]
(hg)f \ar{r}{a_{h,g,f}} \ar{d}[swap]{(\gamma*\beta)*\alpha} & h(gf) \ar{d}{\gamma*(\beta*\alpha)}\\
(h'g')f' \ar{r}{a_{h',g',f'}} & h'(g'f')\end{tikzcd}
\end{equation}
in $\B(W,Z)$ is commutative.
\item Similarly, horizontal composition is unital with respect to the identity $1$-cells up to the specified natural isomorphisms $\ell$ and $r$. So for each $1$-cell $f \in \B(X,Y)$, their components are invertible $2$-cells\index{left unitor!component}\index{right unitor!component}
\begin{equation}\label{unitor-component}
\begin{tikzcd} \ell_f : 1_Yf \rar{\cong} & f\end{tikzcd}\andspace
\begin{tikzcd} r_f : f1_X \rar{\cong} & f\end{tikzcd}
\end{equation}
in $\B(X,Y)$. The naturality\index{left unitor!naturality}\index{right unitor!naturality} of $\ell$ and $r$ means the diagram
\begin{equation}\label{unitor-naturality}
\begin{tikzcd}[column sep=large]
1_Y f \ar{r}{\ell_f} \ar{d}[swap]{1_{1_Y} * \alpha} & f \ar{d}{\alpha} & f1_X \ar{d}{\alpha * 1_{1_X}} \ar{l}[swap]{r_f}\\
1_Y f' \ar{r}{\ell_{f'}} & f' & f'1_X \ar{l}[swap]{r_{f'}}\end{tikzcd}
\end{equation}
is commutative for each $2$-cell $\alpha : f \to f'$.
\item The unity axiom \eqref{bicat-unity} asserts the equality of $2$-cells
\[r * 1_f = (1_g * \ell)a \in \B(V,X)\bigl((g1_W)f, gf\bigr).\]
The right-hand side is the vertical composition of a component of the associator $a$ with the horizontal composition $1_g *\ell$.
\item Similarly, the pentagon axiom \eqref{bicat-pentagon} is an equality of $2$-cells in the set \[\B(V,Z)\bigl(((kh)g)f, k(h(gf))\bigr).\] One of the $2$-cells is the vertical composition of two instances of the associator $a$. The other $2$-cell is the vertical composition of the $2$-cells \[a*1_f, \qquad a, \andspace 1_k*a,\] the first and the last of which are horizontal compositions.\dqed
\end{enumerate}
\end{explanation}
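Written out with vertical composition denoted by juxtaposition, as above, the pentagon axiom \eqref{bicat-pentagon} is the equality

```latex
\[
a_{k,h,gf}\, a_{kh,g,f}
= \bigl(1_k * a_{h,g,f}\bigr)\, a_{k,hg,f}\, \bigl(a_{k,h,g} * 1_f\bigr)
\]
```

of $2$-cells in $\B(V,Z)\bigl(((kh)g)f, k(h(gf))\bigr)$.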
\begin{definition}\label{def:small-bicat}
Suppose $P$ is a property of categories. A bicategory $\B$ is \emph{locally $P$} if each hom category in $\B$ has property $P$. In particular, $\B$ is:
\begin{itemize}
\item \emph{locally small}\index{bicategory!locally small}\index{locally!small}\index{small!locally - bicategory} if each hom category is a small category.
\item \emph{locally discrete}\index{bicategory!locally discrete}\index{locally!discrete} if each hom category is discrete.
\item \emph{locally partially ordered}\index{bicategory!locally partially ordered}\index{locally!partially ordered} if each hom category is a partially ordered set regarded as a small category.
\end{itemize}
Finally, $\B$ is \emph{small}\index{bicategory!small} if it is locally small and if $\B_0$ is a set.
\end{definition}
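A standard illustration of a locally partially ordered bicategory, included here as an aside (it is not among this section's examples), is given by sets and relations: $0$-cells are sets, a $1$-cell $A \to B$ is a relation $R \subseteq A \times B$, and there is a unique $2$-cell $R \Rightarrow S$ precisely when $R \subseteq S$. The Python sketch below checks that horizontal composition of relations is monotone (functoriality on the posetal hom categories) and strictly associative and unital.

```python
def compose(S, R):
    """Horizontal composition of relations: R : A -> B, S : B -> C,
    each given as a set of pairs; returns S o R : A -> C."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def two_cell(R, S):
    """There is a (unique) 2-cell R => S iff R is a subrelation of S."""
    return R <= S

def identity(A):
    """Identity 1-cell on the set A."""
    return {(a, a) for a in A}

A, B, C = {0, 1}, {0, 1, 2}, {"x", "y"}
R = {(0, 0), (1, 2)}        # 1-cell A -> B
R2 = R | {(0, 1)}           # R => R2, since R is a subrelation of R2
S = {(0, "x"), (2, "y")}    # 1-cell B -> C
T = {("x", 0)}              # 1-cell C -> A

# Functoriality of horizontal composition = monotonicity in each variable.
assert two_cell(compose(S, R), compose(S, R2))
# Units and associativity hold strictly in this example.
assert compose(identity(B), R) == R == compose(R, identity(A))
assert compose(T, compose(S, R)) == compose(compose(T, S), R)
```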
\begin{definition}\label{def:subbicategory}
Suppose $\B$ and $\B'$ are bicategories. Then $\B'$ is called a \index{bicategory!sub-}\emph{sub-bicategory} of $\B$ if the following statements hold.
\begin{itemize}
\item $\B'_0$ is a sub-class of $\B_0$.
\item For objects $X,Y \in \B'$, $\B'(X,Y)$ is a sub-category of $\B(X,Y)$.
\item The identity $1$-cell of $X$ in $\B'$ is equal to the identity $1$-cell of $X$ in $\B$.
\item For objects $X,Y,Z$ in $\B'$, the horizontal composition $c'_{XYZ}$ in $\B'$ makes the diagram
\[\begin{tikzcd}
\B'(Y,Z)\times\B'(X,Y) \ar{r}{c'_{XYZ}} \ar{d} & \B'(X,Z) \ar{d}\\
\B(Y,Z)\times\B(X,Y) \ar{r}{c_{XYZ}} & \B(X,Z)\end{tikzcd}\]
commutative, in which the unnamed arrows are sub-category inclusions.
\item Every component of the associator in $\B'$ is equal to the corresponding component of the associator in $\B$, and similarly for the left unitors and the right unitors.
\end{itemize}
This finishes the definition of a sub-bicategory.
\end{definition}
The following special cases of the horizontal composition, in which one of the $2$-cells is an identity $2$-cell of some $1$-cell, will come up often.
\begin{definition}\label{def:whiskering}
In a bicategory $\B$, suppose given $1$-cells $h \in \B(W,X)$, $f,f' \in \B(X,Y)$, $g\in\B(Y,Z)$, and a $2$-cell $\alpha : f \to f'$, as in the diagram:
\[\begin{tikzpicture}[commutative diagrams/every diagram]
\node (W) at (-3,0) {$W$}; \node (X) at (-1,0) {$X$};
\node (Y) at (1,0) {$Y$}; \node (Z) at (3,0) {$Z$};
\node[font=\Large] at (-.1,0) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (.15,0) {$\alpha$};
\path[commutative diagrams/.cd, every arrow, every label]
(W) edge node[above] {$h$} (X)
(X) edge [bend left] node[above] {$f$} (Y)
edge [bend right] node[below] {$f'$} (Y)
(Y) edge node[above] {$g$} (Z);
\end{tikzpicture}\]
Then the horizontal compositions $\alpha * 1_h$ and $1_g * \alpha$ are called the \emph{whiskering of $h$ and $\alpha$}\index{whiskering!of a 1-cell and a 2-cell} and the \emph{whiskering of $\alpha$ and $g$}, respectively.
\end{definition}
\begin{explanation}\label{expl:whiskering}
The whiskering $\alpha * 1_h$ is a $2$-cell $fh \to f'h$ in $\B(W,Y)$. The whiskering $1_g * \alpha$ is a $2$-cell $gf \to gf'$ in $\B(X,Z)$.\dqed
\end{explanation}
The rest of this section contains examples of bicategories.
\begin{example}[Categories]\label{ex:category-as-bicat}
Categories are identified with \index{bicategory!locally discrete}\index{category!as a locally discrete bicategory}locally discrete bicategories. Indeed, in each category $\C$, each morphism set $\C(X,Y)$ may be regarded as a discrete category; i.e., there are only identity $2$-cells.
\begin{itemize}
\item The identity $1$-cells are the identity morphisms in $\C$.
\item The horizontal composition of $1$-cells is the composition in $\C$.
\item The horizontal composition and the vertical composition of identity $2$-cells yield identity $2$-cells.
\item The natural isomorphisms $a$, $\ell$, and $r$ are defined as the identity natural transformations.
\end{itemize}
We write $\C_{\bi}$ for this locally discrete bicategory.
Conversely, for a locally discrete bicategory $\B$, the natural isomorphisms $a$, $\ell$, and $r$ are identities by \eqref{associator-component} and \eqref{unitor-component}. So the identification above yields a category $(\B_0,\B_1,1,c)$.\dqed
\end{example}
\begin{example}[Monoidal Categories]\label{ex:moncat-bicat}
Monoidal categories are canonically identified with\index{monoidal category!as a one-object bicategory}\index{bicategory!one-object} one-object bicategories. Indeed, suppose $(\C,\otimes,\tensorunit,\alpha,\lambda,\rho)$ is a monoidal category as in \Cref{def:monoidal-category}. Then it yields a bicategory $\Sigma\C$ with:
\begin{itemize}
\item one object $*$;
\item hom category $\Sigma\C(*,*) = \C$;
\item identity $1$-cell $1_* = \tensorunit$;
\item horizontal composition $c = \otimes : \C\times\C \to \C$;
\item associator $a = \alpha$;
\item left unitor $\ell = \lambda$ and right unitor $r = \rho$.
\end{itemize}
The unity axiom \eqref{bicat-unity} and the pentagon axiom \eqref{bicat-pentagon} in $\Sigma\C$ are those of the monoidal category $\C$ in \eqref{monoidal-unit} and \eqref{pentagon-axiom}, respectively.
Conversely, for a bicategory $\B$ with one object $*$, the hom category $\B(*,*)$, along with the identification in the previous paragraph, is a monoidal category.\dqed
\end{example}
\begin{example}[Hom Monoidal Categories]\label{ex:hom-monoidal-cat}
For each object $X$ in a bicategory $\B$, the hom category\index{hom category!as a monoidal category} $\C = \B(X,X)$ is a monoidal category with:
\begin{itemize}
\item monoidal unit $\tensorunit = 1_X$;
\item monoidal product $\otimes = c_{XXX} : \C\times\C\to\C$;
\item associativity isomorphism $\alpha = a_{XXXX}$;
\item left and right unit isomorphisms $\lambda = \ell_{XX}$ and $\rho = r_{XX}$.
\end{itemize}
As in \Cref{ex:moncat-bicat}, the monoidal category axioms \eqref{monoidal-unit} and \eqref{pentagon-axiom} in $\C$ follow from the bicategory axioms \eqref{bicat-unity} and \eqref{bicat-pentagon} in $\B$. \dqed
\end{example}
\begin{example}[Products]\label{ex:product-bicat}
Suppose $\A$ and $\B$ are bicategories. The \index{product!bicategory}\index{bicategory!product}\emph{product bicategory} $\A \times \B$ is the bicategory defined by the following data.
\begin{itemize}
\item $(\A\times\B)_0 = \A_0\times\B_0$.
\item For objects $A,A'\in\A$ and $B,B'\in\B$, it has the Cartesian product hom category
\[(\A\times\B)\big((A,B),(A',B')\big) = \A(A,A')\times\B(B,B').\]
\item The identity $1$-cell of an object $(A,B)$ is $(1_A,1_B)$.
\item The horizontal composition is the composite functor
\[\begin{tikzcd}
\A(A',A'')\times\B(B',B'') \times \A(A,A')\times\B(B,B') \ar{d}[swap]{\cong} \ar{r}{c} & \A(A,A'')\times \B(B,B'')\\
\A(A',A'')\times \A(A,A') \times\B(B',B'') \times\B(B,B') \ar[start anchor={east},
end anchor={south}, bend right=20]{ur}[swap]{c_{\A}\times c_{\B}} &
\end{tikzcd}\]
with $c_{\A}$ and $c_{\B}$ the horizontal compositions in $\A$ and $\B$, respectively, and the left vertical functor permuting the middle two categories.
\item The associator, the left unitor, and the right unitor are all induced entrywise by those in $\A$ and $\B$.
\end{itemize}
The unity axiom and the pentagon axiom follow from those in $\A$ and $\B$.\dqed
\end{example}
\begin{example}[Spans]\label{ex:spans}
Suppose $\C$ is a category in which all pullbacks exist. For each diagram in $\C$ of the form $\begin{tikzcd}[column sep=small] X \rar{f} & B & Y, \lar[swap]{g}\end{tikzcd}$ we choose an arbitrary pullback diagram
\[\begin{tikzcd}[column sep=normal]
X \timesover{B} Y \dar[swap]{p_1} \rar{p_2} & Y \dar{g}\\
X \rar{f} & B \end{tikzcd}\]
in $\C$. A \emph{span}\index{span!bicategory}\index{bicategory!of spans} in $\C$ from $A$ to $B$ is a diagram of the form
\begin{equation}\label{axb-span}
\begin{tikzcd} A & X \lar[swap]{f_1} \rar{f_2} & B.\end{tikzcd}
\end{equation}
There is a bicategory $\Span(\C)$, or $\Span$ if $\C$ is clear from the context, consisting of the following data.
\begin{itemize}
\item Its objects are the objects in $\C$.
\item For objects $A,B \in \C$, the $1$-cells in $\Span(A,B)$ are the spans in $\C$ from $A$ to $B$. The identity $1$-cell of $A$ consists of two copies of the identity morphism $1_A$.
\item A $2$-cell in $\Span(A,B)$ from the span \eqref{axb-span} to the span $\begin{tikzcd}[column sep=small] A & X' \lar[swap]{f_1'} \rar{f_2'} & B \end{tikzcd}$ is a morphism $\phi : X \to X'$ in $\C$ such that the diagram
\begin{equation}\label{span-2cell}
\begin{tikzcd}[row sep=tiny]
& X \arrow{ld}[swap]{f_1} \arrow{dd}{\phi} \arrow{rd}{f_2} &\\
A && B\\
& X' \arrow{lu}{f_1'} \arrow{ru}[swap]{f_2'} &\end{tikzcd}
\end{equation}
is commutative. Identity $2$-cells are identity morphisms in $\C$, and vertical composition is the composition in $\C$.
\item The horizontal composition of $1$-cells is induced by the chosen pullbacks. More explicitly, suppose given a span $(f_1,f_2)$ from $A$ to $B$ and a span $(g_1,g_2)$ from $B$ to $C$ as in the lower half of the diagram:
\begin{equation}\label{span-1cell-hcomp}
\begin{tikzcd}[row sep=tiny]
&& X\timesover{B} Y \arrow{ld}[swap]{p_1} \arrow{rd}{p_2}
\arrow[out=180,in=90]{lldd}[swap]{f_1p_1} \arrow[out=0,in=90]{rrdd}{g_2p_2}&&\\
& X \arrow{ld}[swap, near start]{f_1} \arrow{rd}{f_2} && Y \arrow{ld}[swap]{g_1} \arrow{rd}[near start]{g_2} &\\
A && B && C\\ \end{tikzcd}
\end{equation}
The diamond in the middle is the chosen pullback of $(f_2,g_1)$. The horizontal composition $(g_1,g_2)(f_1,f_2)$ is the span $(f_1p_1,g_2p_2)$ from $A$ to $C$.
\item The horizontal composition of $2$-cells is induced by the universal property of pullbacks. More precisely, suppose given two horizontally composable $2$-cells $(\phi,\varphi)$ as in the following solid-arrow commutative diagram.
\[\begin{tikzcd}[row sep=small, column sep=large]
&& X\timesover{B} Y \arrow{ld}[swap]{p_1} \arrow{rd}{p_2}
\arrow[out=180,in=90]{lldd}[swap]{f_1p_1} \arrow[out=0,in=90]{rrdd}{g_2p_2} \arrow[dashed, bend right, shorten <=-4pt]{dddd}[very near end]{\theta} &&\\
& X \arrow{ld}[swap, near start]{f_1} \arrow{rd}[near start]{f_2} \arrow{dd}[near start]{\phi} && Y \arrow{ld}[swap]{g_1} \arrow{rd}[near start]{g_2} \arrow{dd}[near start]{\varphi} &\\
A && B && C\\
& X' \arrow{lu}[swap]{f_1'} \arrow{ru}{f_2'} && Y' \arrow{lu}[swap]{g_1'} \arrow{ru}{g_2'} &\\
&& X'\timesover{B} Y' \arrow{lu}[swap]{p_1'} \arrow{ru}{p_2'}
\arrow[out=180,in=270]{lluu}{f_1'p_1'} \arrow[out=0,in=270]{rruu}[swap]{g_2'p_2'}&&\end{tikzcd}\]
The commutativity of the solid-arrow diagram and the universal property of the pullback $X'\times_B Y'$ imply the existence of a unique morphism $\theta$ such that
\[p_1' \theta = \phi p_1 \andspace p_2'\theta = \varphi p_2.\]
These equalities imply that
\[f_1'p_1'\theta = f_1'\phi p_1 = f_1p_1 \andspace
g_2'p_2'\theta = g_2'\varphi p_2 = g_2p_2.\]
So $\theta$ is a $2$-cell from the span $(f_1p_1,g_2p_2)$ to the span $(f_1'p_1',g_2'p_2')$, and the horizontal composition $\varphi * \phi$ is defined to be $\theta$.
\item The associator, the left unitor, and the right unitor are similarly defined by the universal property of pullbacks.
\end{itemize}
The bicategory axioms \eqref{bicat-unity} and \eqref{bicat-pentagon} also follow from the universal property of pullbacks.\dqed
\end{example}
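When $\C$ is the category of finite sets, the chosen pullbacks can be taken to be the usual fiber products, so the horizontal composition \eqref{span-1cell-hcomp} is directly computable. The following is a minimal sketch; the helper names (`pullback`, `compose`) and the encoding of a span as a triple are ours, not from the text.

```python
# A span from A to B is encoded as (X, f1, f2), where X is a list of
# elements and f1 : X -> A, f2 : X -> B are dicts.

def pullback(f2, g1):
    """Chosen pullback X x_B Y of f2 : X -> B and g1 : Y -> B,
    together with its two projections p1, p2."""
    P = [(x, y) for x in f2 for y in g1 if f2[x] == g1[y]]
    p1 = {(x, y): x for (x, y) in P}
    p2 = {(x, y): y for (x, y) in P}
    return P, p1, p2

def compose(span_g, span_f):
    """Horizontal composite of a span (Y, g1, g2) from B to C with a
    span (X, f1, f2) from A to B: the span (X x_B Y, f1 p1, g2 p2)."""
    (_, g1, g2), (_, f1, f2) = span_g, span_f
    P, p1, p2 = pullback(f2, g1)
    return P, {q: f1[p1[q]] for q in P}, {q: g2[p2[q]] for q in P}

# Example: only the pair ('x1', 'y1') satisfies f2 = g1 over B.
f1 = {'x1': 'a', 'x2': 'a'}       # X -> A
f2 = {'x1': 'b1', 'x2': 'b2'}     # X -> B
g1 = {'y1': 'b1', 'y2': 'b3'}     # Y -> B
g2 = {'y1': 'c', 'y2': 'c'}       # Y -> C
P, h1, h2 = compose((['y1', 'y2'], g1, g2), (['x1', 'x2'], f1, f2))
assert P == [('x1', 'y1')]
assert (h1[('x1', 'y1')], h2[('x1', 'y1')]) == ('a', 'c')
```

Since the fiber product is only one choice of pullback, a different choice yields an isomorphic span; this is precisely why $\Span(\C)$ is a bicategory rather than a $2$-category.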
\begin{example}[Bimodules]\label{ex:bimodules}
There is a bicategory\index{bicategory!of bimodules}\index{bimodule!bicategory} $\Bimod$ given by the following data.
\begin{itemize}
\item Its objects are rings, which are always assumed to be unital and associative.
\item For two rings $R$ and $S$, the hom category $\Bimod(R,S)$ is the category whose objects are $(R,S)$-bimodules. An \emph{$(R,S)$-bimodule} $M$ is an abelian group together with
\begin{itemize}
\item a left $R$-module structure $(x,m) \mapsto xm$ and
\item a right $S$-module structure $(m,s) \mapsto ms$
\end{itemize}
such that the two actions commute in the sense that \[(xm)s=x(ms) \forspace (x,m,s) \in R\times M\times S.\] The $2$-cells in $\Bimod(R,S)$ are $(R,S)$-bimodule homomorphisms, with the identity maps as identity $2$-cells.
\item The identity $1$-cell of a ring $R$ is $R$ regarded as an $(R,R)$-bimodule.
\item For another ring $T$ and an $(S,T)$-bimodule $N$, the horizontal composition of $1$-cells is given by tensoring over $S$, \[c_{RST}(N,M) = M \tensorover{S} N,\] which is an $(R,T)$-bimodule. The horizontal composition of $2$-cells is given by tensoring bimodule homomorphisms.
\item The associator $a$, the left unitor $\ell$, and the right unitor $r$ are given by the natural isomorphisms
\[M \tensorover{S} \bigl(N\tensorover{T} P\bigr) \cong \bigl(M \tensorover{S} N\bigr) \tensorover{T} P, \qquad M\tensorover{S} S\cong M, \andspace R\tensorover{R} M\cong M\]
for $(T,U)$-bimodules $P$.
\end{itemize}
\end{example}
\begin{example}[Classifying Category and Picard Groupoid]\label{ex:picard-groupoid}
A bicategory $\B$ is \emph{locally essentially small} if each of its hom-categories has a set of isomorphism classes of objects. For a locally essentially small bicategory $\B$, its \index{classifying category}\index{category!classifying}\emph{classifying category}\label{notation:clab} $\Cla(\B)$ is defined as follows.
\begin{itemize}
\item Its objects are the objects in $\B$.
\item For objects $X,Y$ in $\Cla(\B)$, $\Cla(\B)(X,Y)$ is the set of isomorphism classes of $1$-cells in $\B(X,Y)$. The isomorphism class of a $1$-cell $f$ is written as $[f]$.
\item For each object $X$ in $\Cla(\B)$, its identity morphism is $[1_X]$.
\item For objects $X,Y,Z$ in $\B$, the composition is defined as
\[\Cla(\B)(Y,Z)\times\Cla(\B)(X,Y) \to \Cla(\B)(X,Z),\qquad [g]\circ[f] = [gf].\] This is well defined by the axioms \eqref{hom-category-axioms}, \eqref{bicat-c-id}, and \eqref{middle-four}.
\end{itemize}
Since the associator and the unitors of $\B$ are componentwise isomorphisms, composition in $\Cla(\B)$ is strictly associative and unital. The \index{Picard!groupoid}\index{groupoid!Picard}\emph{Picard groupoid} of $\B$, denoted by\label{notation:picb} $\Pic(\B)$, is the sub-groupoid of $\Cla(\B)$ consisting of all the objects and only the invertible morphisms.\dqed
\end{example}
\begin{example}[$2$-Vector Spaces]\label{ex:two-vector-space}\index{bicategory!of 2-vector spaces}
Suppose $\fieldc$ is the field of complex numbers. An $n\times m$ \emph{$2$-matrix}\index{2-matrix}\index{matrix!2-} $V$ is an array \[V = (V_{ij})_{1\leq i \leq n, 1\leq j \leq m}\] with each $V_{ij}$ a finite dimensional complex vector space. Given a $p \times n$ $2$-matrix $W = (W_{ki})$, the \emph{$2$-matrix product} $WV$ is the $p\times m$ $2$-matrix with $(k,j)$-entry the complex vector space
\begin{equation}\label{eq:2-mat-prod}(WV)_{kj} = \bigoplus_{i=1}^n \big(W_{ki} \tensor V_{ij}\big) \forspace 1\leq k \leq p, 1\leq j \leq m.
\end{equation}
Here $\oplus$ and $\tensor$ denote, respectively, direct sum and tensor product of finite dimensional complex vector spaces.
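On dimensions, the $2$-matrix product behaves exactly like ordinary matrix multiplication: since $\dim(W\oplus V) = \dim W + \dim V$ and $\dim(W\tensor V) = (\dim W)(\dim V)$, the dimension matrix of $WV$ in \eqref{eq:2-mat-prod} is the usual product of the dimension matrices. A quick sketch of this dimension count; the helper name is hypothetical.

```python
def two_matrix_product_dims(W, V):
    """Entrywise dimensions of the 2-matrix product
    (WV)_{kj} = direct sum over i of W_{ki} tensor V_{ij},
    with W and V given as p x n and n x m matrices of dimensions."""
    p, n, m = len(W), len(V), len(V[0])
    return [[sum(W[k][i] * V[i][j] for i in range(n))
             for j in range(m)] for k in range(p)]

# dims [[1, 2], [0, 3]] times dims [[4], [5]]:
assert two_matrix_product_dims([[1, 2], [0, 3]], [[4], [5]]) == [[14], [15]]

# The identity 2-matrix 1^n has the n x n identity as its dimension
# matrix, so on dimensions it is strictly neutral.
assert two_matrix_product_dims([[1, 0], [0, 1]], [[1, 2], [0, 3]]) == [[1, 2], [0, 3]]
```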
There is a bicategory\label{notation:twovc} $\twovc$, of \emph{coordinatized $2$-vector spaces}\index{2-vector space}, determined by the following data.
\begin{description}
\item[Objects] The objects in $\twovc$ are the symbols $\{n\}$ for non-negative integers $n$.
\item[Hom-category] The hom-category $\twovc(\{m\},\{n\})$ is defined as follows.
\begin{itemize}
\item Its objects are $n\times m$ $2$-matrices.
\item A morphism \[\theta = (\theta_{ij}) : V = (V_{ij}) \to V' = (V'_{ij})\] consists of $\fieldc$-linear maps $\theta_{ij} : V_{ij} \to V'_{ij}$ for $1\leq i \leq n$ and $1\leq j \leq m$.
\item The identity morphism of an object $V = (V_{ij})$ consists of the identity maps of the $V_{ij}$.
\item Composition is given by coordinate-wise composition of $\fieldc$-linear maps, i.e., $(\phi_{ij})(\theta_{ij})=(\phi_{ij}\theta_{ij})$.
\end{itemize}
\item[Identity $1$-Cells] The identity $1$-cell of an object $\{n\}$ is the $n\times n$ $2$-matrix $1^n$ with entries
\[1^n_{ij} = \begin{cases} \fieldc & \text{if } i=j,\\ 0 & \text{if } i\not= j.\end{cases}\]
\item[Horizontal Composition] On $1$-cells, it is given by the $2$-matrix product in \eqref{eq:2-mat-prod} above.
On $2$-cells, suppose $\theta : V \to V'$ is a $2$-cell in $\twovc(\{m\},\{n\})$, and $\phi : W \to W'$ is a $2$-cell in $\twovc(\{n\},\{p\})$. The horizontal composite \[\phi * \theta : WV \to W'V'\] is the $2$-cell in $\twovc(\{m\},\{p\})$ consisting of $\fieldc$-linear maps
\begin{equation}\label{phi-theta-kj}
(\phi*\theta)_{kj} = \bigoplus_{i=1}^n \big(\phi_{ki} \tensor \theta_{ij}\big) : \bigoplus_{i=1}^n \big(W_{ki} \tensor V_{ij}\big) \to \bigoplus_{i=1}^n \big(W'_{ki} \tensor V'_{ij}\big).
\end{equation}
\item[Associator] For composable $1$-cells
\[\begin{tikzcd}
\{m\} \ar{r}{V} & \{n\} \ar{r}{W} & \{p\} \ar{r}{X} & \{q\},\end{tikzcd}\]
the component
\[a_{X,W,V} : (XW)V \to X(WV)\] of the associator has $(l,j)$-entry, for $1\leq l \leq q$ and $1\leq j \leq m$, the following composite of canonical isomorphisms:
\[\begin{split}
[(XW)V]_{lj} &= \bigoplus_{i=1}^n (XW)_{li} \tensor V_{ij}\\
&= \bigoplus_{i=1}^n \Big(\bigoplus_{k=1}^p X_{lk} \tensor W_{ki}\Big)\tensor V_{ij}\\
&\iso \bigoplus_{i=1}^n \bigoplus_{k=1}^p \big(X_{lk}\tensor W_{ki}\big) \tensor V_{ij}\\
&\iso \bigoplus_{k=1}^p \bigoplus_{i=1}^n X_{lk}\tensor \big(W_{ki}\tensor V_{ij}\big)\\
&\iso \bigoplus_{k=1}^p X_{lk}\tensor \Big(\bigoplus_{i=1}^n W_{ki}\tensor V_{ij}\Big)\\
&= \bigoplus_{k=1}^p X_{lk}\tensor (WV)_{kj}\\
&= [X(WV)]_{lj}.
\end{split}\]
In the above canonical isomorphisms, we used the canonical distributivity isomorphisms
\[\begin{split}
(A\oplus B)\tensor C &\iso (A\tensor C)\oplus (B\tensor C),\\
A \tensor (B\oplus C) &\iso (A\tensor B) \oplus (A\tensor C),\end{split}\]
the symmetry isomorphism
\[A \oplus B \iso B \oplus A,\]
and the associativity isomorphism \[(A\tensor B)\tensor C \iso A\tensor (B\tensor C)\] of finite dimensional complex vector spaces.
\item[Unitors] The left unitor $\ell_V : 1^nV \to V$ is induced by the canonical isomorphisms \[\fieldc \tensor A \iso A, \quad 0\tensor A \iso 0, \andspace 0 \oplus A \iso A \iso A \oplus 0.\]
Similarly, the right unitor $r_V : V1^m \to V$ is induced by the last two canonical isomorphisms, together with \[A \tensor \fieldc \iso A \andspace A\tensor 0 \iso 0.\]
\end{description}
This finishes the definition of $\twovc$.
To see that $\twovc$ is actually a bicategory, observe the following.
\begin{itemize}
\item Each hom-category is an actual category because composition of $\fieldc$-linear maps is strictly associative and unital.
\item The horizontal composition preserves identity $2$-cells because direct sum and tensor product of finite dimensional complex vector spaces preserve identity maps. The middle four exchange \eqref{middle-four} holds because $\fieldc$-linear maps satisfy the analogous properties
\[\begin{split}
(\phi_2\phi_1) \oplus (\theta_2\theta_1) &= (\phi_2\oplus \theta_2)(\phi_1\oplus \theta_1),\\
(\phi_2\phi_1) \tensor (\theta_2\theta_1) &= (\phi_2\tensor\theta_2)(\phi_1\tensor \theta_1)\end{split}\]
whenever the composites are defined.
\item The associator, the left unitor, and the right unitor are natural isomorphisms because they have invertible components, and the canonical isomorphisms in their definitions are natural with respect to $\fieldc$-linear maps.
\item The unity axiom \eqref{bicat-unity} holds because the diagram
\[\begin{tikzcd}[row sep=small]
(B\tensor \fieldc) \tensor A \ar{rr}{\iso} \ar{dr}[swap]{\iso\tensor 1_A} && B\tensor (\fieldc \tensor A) \ar{dl}{1_B\tensor \iso}\\ & B\tensor A \\
\end{tikzcd}\]
of canonical isomorphisms between finite dimensional complex vector spaces is commutative.
\item The pentagon axiom \eqref{bicat-pentagon} holds because the diagram
\[\begin{tikzpicture}[commutative diagrams/every diagram]
\node (P0) at (90:2cm) {$(YX)(WV)$};
\node (P1) at (90+72:2cm) {$((YX)W)V$};
\node (P2) at (220:1.6cm) {\makebox[3ex][r]{$(Y(XW))V$}};
\node (P3) at (-40:1.6cm) {\makebox[3ex][l]{$Y((XW)V)$}};
\node (P4) at (90+4*72:2cm) {$Y(X(WV))$};
\path[commutative diagrams/.cd, every arrow, every label]
(P0) edge node {$\iso$} (P4)
(P1) edge node {$\iso$} (P0)
(P1) edge node[swap] {$\iso \tensor 1_V$} (P2)
(P2) edge node {$\iso$} (P3)
(P3) edge node[swap] {$1_Y \tensor \iso$} (P4);
\end{tikzpicture}\]
of canonical isomorphisms between finite dimensional complex vector spaces is commutative, where the $\tensor$ symbols among the objects are omitted to save space.
\end{itemize}
This shows that $\twovc$ is a bicategory. A $2$-categorical version of $\twovc$ will be considered in \Cref{ex:twovect-tc}.
\end{example}
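The middle four exchange in $\twovc$ rests on the two displayed identities for $\oplus$ and $\tensor$ of linear maps. In coordinates these are the familiar functoriality of block-diagonal sum and of the Kronecker product, which can be spot-checked numerically; the small integer matrices below are arbitrary choices, not data from the text.

```python
# Plain-list linear algebra: matmul is composition of linear maps,
# dsum is block-diagonal direct sum, kron is the tensor product.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dsum(A, B):
    top = [row + [0] * len(B[0]) for row in A]
    bot = [[0] * len(A[0]) + row for row in B]
    return top + bot

def kron(A, B):
    return [[A[i][k] * B[j][l]
             for k in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for j in range(len(B))]

phi1, phi2 = [[1, 2], [3, 4]], [[0, 1], [1, 1]]
th1, th2 = [[2, 0], [1, 1]], [[1, 1], [0, 2]]

# (phi2 phi1) (+) (th2 th1) = (phi2 (+) th2)(phi1 (+) th1)
assert dsum(matmul(phi2, phi1), matmul(th2, th1)) == \
    matmul(dsum(phi2, th2), dsum(phi1, th1))

# (phi2 phi1) (x) (th2 th1) = (phi2 (x) th2)(phi1 (x) th1)
assert kron(matmul(phi2, phi1), matmul(th2, th1)) == \
    matmul(kron(phi2, th2), kron(phi1, th1))
```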
\begin{example}\label{example:terminal-bicategory}
Generalizing the terminal category
(\cref{notation:terminal-category}), there is a trivial bicategory
with a single object, single 1-cell, and single 2-cell. We denote
this bicategory $\boldone$ and call it the \emph{terminal
bicategory}.\index{terminal!bicategory}\index{bicategory!terminal}
\end{example}
\section{Unity Properties}\label{sec:bicategory-unity}
In this section we record several useful unity properties in bicategories. The proofs here provide illustrations of basic computation in bicategories. Fix a bicategory $(\B,1,c,a,\ell,r)$ as in \Cref{def:bicategory}. The first property says that identity $2$-cells of identity $1$-cells can be canceled in horizontal composition.
\begin{lemma}\label{bicat-unit-cancellation}
Suppose $f,f' \in \B(X,Y)$ are two $1$-cells, and $\alpha,\alpha' : f \to f'$ are two $2$-cells. Then the following statements are equivalent.\index{identity 2-cell!cancellation properties}
\begin{enumerate}
\item $\alpha=\alpha'$.
\item $1_{1_Y} * \alpha = 1_{1_Y} * \alpha'$.
\item $\alpha * 1_{1_X} = \alpha' * 1_{1_X}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) implies both (2) and (3) by definition.
On the other hand, by the naturality of the unitors $\ell$ and $r$, the diagram
\[\begin{tikzcd}[column sep=large]
1_Y f \ar{r}{\ell_f}[swap]{\cong} \ar{d}[swap]{1_{1_Y} * \alpha} & f \ar{d}{\alpha} & f1_X \ar{d}{\alpha * 1_{1_X}} \ar{l}{\cong}[swap]{r_f}\\
1_Y f' \ar{r}{\ell_{f'}}[swap]{\cong} & f' & f'1_X \ar{l}{\cong}[swap]{r_{f'}}\end{tikzcd}\]
in the hom category $\B(X,Y)$ is commutative, and similarly for $\alpha'$. If (2) holds, then the left square and the invertibility of $\ell_f$ imply (1). Similarly, if (3) holds, then the right square and the invertibility of $r_f$ imply (1).
\end{proof}
\begin{lemma}\label{bicat-r-r}
For each object $X$ in $\B$, the equalities
\[\begin{split}
r_{1_X1_X} &= r_{1_X} * 1_{1_X} : (1_X1_X)1_X \to 1_X1_X,\\
\ell_{1_X1_X} &= 1_{1_X} * \ell_{1_X} : 1_X (1_X1_X) \to 1_X1_X
\end{split}\]
hold in $\B(X,X)$.
\end{lemma}
\begin{proof}
By the naturality of the right unitor, the diagram
\[\begin{tikzcd}[column sep=large]
(1_X1_X)1_X \ar{d}[swap]{r_{1_X}*1_{1_X}} \ar{r}{r_{1_X1_X}} & 1_X1_X \ar{d}{r_{1_X}}\\
1_X1_X \ar{r}{r_{1_X}} & 1_X\end{tikzcd}\]
in $\B(X,X)$ is commutative. The first equality now follows from this commutative diagram and the fact that $r_{1_X}$ is an invertible $2$-cell. The second equality is proved similarly.
\end{proof}
\begin{lemma}\label{hcomp-invertible-2cells}
Suppose $f,f' \in \B(X,Y)$, $g,g' \in \B(Y,Z)$ are $1$-cells, and $\alpha : f \to f'$, $\beta : g \to g'$ are invertible $2$-cells. Then $\beta * \alpha$ is also an invertible $2$-cell.\index{invertible 2-cell!composition}
\end{lemma}
\begin{proof}
Suppose $\alpha^{-1}$ and $\beta^{-1}$ are the vertical inverses of $\alpha$ and $\beta$, respectively. There are equalities
\[\begin{split}
(\beta^{-1} *\alpha^{-1})(\beta *\alpha)
&= (\beta^{-1}\beta)*(\alpha^{-1}\alpha)\\
&= 1_g * 1_f\\
&= 1_{gf}.
\end{split}\]
The first equality is from the middle four exchange \eqref{middle-four}, and the last equality is \eqref{bicat-c-id}. Similarly, there is an equality
\[(\beta *\alpha) (\beta^{-1} *\alpha^{-1}) = 1_{g'f'}.\]
Therefore, $\beta^{-1} *\alpha^{-1}$ is the vertical inverse of $\beta *\alpha$.
\end{proof}
The following observation contains the bicategorical generalizations of the unity diagrams \eqref{moncat-other-unit-axioms} in a monoidal category.
\begin{proposition}\label{bicat-left-right-unity}
Suppose $f\in\B(X,Y)$ and $g\in\B(Y,Z)$ are $1$-cells. Then the diagrams\index{bicategory!left unity}\index{bicategory!right unity}
\[\begin{tikzcd}[column sep=small]
(1_Zg)f \arrow{rr}{a} \arrow{rd}[swap]{\ell_g * 1_f} && 1_Z(gf) \arrow{ld}{\ell_{gf}}\\
& gf & \end{tikzcd}\qquad
\begin{tikzcd}[column sep=small]
(gf)1_X \arrow{rr}{a} \arrow{rd}[swap]{r_{gf}} && g(f1_X) \arrow{ld}{1_g*r_f}\\
& gf & \end{tikzcd}\]
in $\B(X,Z)$ are commutative.
\end{proposition}
\begin{proof}
To prove the commutativity of the second diagram, consider the following diagram in $\B(X,Z)$.
\[\begin{tikzcd}[column sep=small, row sep=huge]
&& (gf)(1_X1_X) \ar[phantom, swap, "(ii)"]{ddr} \ar{ddl}{1_{gf}*\ell_{1_X}} \ar{drr}{a} &&\\
((gf)1_X)1_X \ar[bend right=25, near start, phantom, "(i)"]{urr} \ar[start anchor={[xshift=-.5cm]}, end anchor={[xshift=-.5cm]}]{ddr}[swap]{a*1_{1_X}} \ar{urr}{a} \ar{dr}{r_{gf}*1_{1_X}} \arrow[dr, bend right=25, near end, phantom, "(\dagger)"] &&&& g(f(1_X1_X)) \ar{dl}[swap]{1_g*(1_f*\ell_{1_X})}\\
& (gf)1_X \ar{rr}{a} && g(f1_X) &\\
& (g(f1_X))1_X \ar[phantom, "(iii)"]{urr} \ar{u}[swap]{(1_g*r_f)*1_{1_X}} \ar{rr}{a} && g((f1_X)1_X) \ar{u}{1_g*(r_f*1_{1_X})} \ar[bend left=10, phantom, near start, "(iv)"]{uur} \ar[start anchor={[xshift=.5cm]}, end anchor={[xshift=.5cm]}]{uur}[swap]{1_g*a}
\end{tikzcd}\]
\begin{itemize}
\item The outer-most pentagon is commutative by the pentagon axiom \eqref{bicat-pentagon}.
\item $(i)$ is commutative by the unity axiom \eqref{bicat-unity}.
\item $(ii)$ is commutative by the naturality of the associator $a$ and the equality $1_g * 1_f = 1_{gf}$ in \eqref{bicat-c-id}.
\item $(iii)$ is commutative by the naturality of $a$.
\item $(iv)$ is commutative by the equalities
\[\begin{split}
\bigl(1_g * (1_f * \ell_{1_X})\bigr)(1_g * a)
&= (1_g1_g) * \bigl((1_f*\ell_{1_X})a\bigr)\\
&= 1_g * (r_f * 1_{1_X}).
\end{split}\]
The first equality holds by the middle four exchange \eqref{middle-four}. The second equality holds by the unity axiom \eqref{bicat-unity} and the equality $1_g1_g=1_g$ from \eqref{hom-category-axioms}.
\item Every edge is an invertible $2$-cell by Lemma \ref{hcomp-invertible-2cells}.
\end{itemize}
It follows that the sub-diagram $(\dagger)$ is commutative.
So there are equalities
\begin{equation}\label{using-dagger}
\begin{split}
\bigl((1_g * r_f)a\bigr) * 1_{1_X}
&= \bigl((1_g * r_f)a\bigr) * \bigl(1_{1_X}1_{1_X}\bigr)\\
&= \bigl((1_g * r_f) * 1_{1_X}\bigr) \bigl(a * 1_{1_X}\bigr)\\
&= r_{gf} * 1_{1_X}.
\end{split}
\end{equation}
The first equality follows from $1_{1_X}=1_{1_X}1_{1_X}$ in \eqref{hom-category-axioms}. The second equality is from the middle four exchange \eqref{middle-four}. The last equality is the commutativity of $(\dagger)$. \Cref{bicat-unit-cancellation} now implies the equality
\[(1_g * r_f)a = r_{gf},\] which is the desired commutative diagram.
To prove the commutativity of the first diagram, we consider the diagram
\[\begin{tikzcd}[column sep=tiny, row sep=huge]
&& (1_Z1_Z)(gf) \ar{ddr}[swap]{r_{1_Z}*1_{gf}} \ar{drr}{a} &&\\
((1_Z1_Z)g)f \ar[start anchor={[xshift=-.5cm]}, end anchor={[xshift=-.5cm]}]{ddr}[swap]{a*1_{f}} \ar{urr}{a} \ar{dr}{(r_{1_Z}*1_{g})*1_f} &&&& 1_Z(1_Z(gf)) \ar{dl}[swap]{1_{1_Z}*\ell_{gf}}\\
& (1_Zg)f \ar{rr}{a} && 1_Z(gf) &\\
& (1_Z(1_Zg))f \ar{u}[swap]{(1_{1_Z}*\ell_g)*1_{f}} \ar{rr}{a} && 1_Z((1_Zg)f) \ar{u}{1_{1_Z}*(\ell_g *1_{f})} \ar[bend left=10, phantom, near start, "(\ddagger)"]{uur} \ar[start anchor={[xshift=.5cm]}, end anchor={[xshift=.5cm]}]{uur}[swap]{1_{1_Z}*a}
\end{tikzcd}\]
in $\B(X,Z)$. A slight modification of the argument above shows that the sub-diagram $(\ddagger)$ is commutative. Similar to \eqref{using-dagger}, this implies the equality
\[1_{1_Z} * (\ell_g * 1_f) = 1_{1_Z} * \bigl(\ell_{gf}a\bigr).\]
Then we use \Cref{bicat-unit-cancellation} to cancel $1_{1_Z}$ and obtain the desired commutative diagram.
\end{proof}
Next is the bicategorical generalization of the equality $\lambda_{\tensorunit} = \rho_{\tensorunit}$ in a monoidal category.
\begin{proposition}\label{bicat-l-equals-r}
For each object $X$ in $\B$, the equality\index{left unitor!of identity 1-cell}\index{right unitor!of identity 1-cell}\index{identity 1-cell!left and right unitors}
\[\begin{tikzcd}\ell_{1_X} = r_{1_X} : 1_X 1_X \rar{\cong} & 1_X\end{tikzcd}\]
holds in $\B(X,X)$.
\end{proposition}
\begin{proof}
Consider the diagrams:
\[\begin{tikzcd}[column sep=tiny]
(1_X1_X)1_X \arrow{rr}{a} \arrow{rd}[swap]{r_{1_X} * 1_{1_X}} && 1_X(1_X1_X) \arrow{ld}{1_{1_X}*\ell_{1_X}}\\
& 1_X1_X & \end{tikzcd}\qquad
\begin{tikzcd}[column sep=tiny]
(1_X1_X)1_X \arrow{rr}{a} \arrow{rd}[swap]{r_{1_X1_X}} && 1_X(1_X1_X) \arrow{ld}{1_{1_X}*r_{1_X}}\\
& 1_X1_X & \end{tikzcd}\]
The first diagram is commutative by the unity axiom \eqref{bicat-unity}. The second diagram is the second commutative diagram in \Cref{bicat-left-right-unity}. Since $a$ is an invertible $2$-cell and since $r_{1_X1_X} = r_{1_X}*1_{1_X}$ by \Cref{bicat-r-r}, it follows that
\[1_{1_X} * \ell_{1_X} = 1_{1_X} * r_{1_X}.\] The desired equality now follows by applying \Cref{bicat-unit-cancellation}.
\end{proof}
\section{\texorpdfstring{$2$}{2}-Categories}\label{sec:2categories}
In this section we discuss $2$-categories and some examples.
\begin{motivation}
As discussed in \Cref{ex:moncat-bicat}, monoidal categories are identified with one-object bicategories. The algebra in a monoidal category is significantly simplified if the monoidal category is strict, since in that case we may forget about the associativity isomorphism and the unit isomorphisms. One can expect a similar simplification in a bicategory whose coherence isomorphisms $a$, $\ell$, and $r$ are identities. Bicategories with these properties are called $2$-categories. Just as every monoidal category can be strictified by Mac Lane's Coherence Theorem, we will see in \Cref{ch:coherence} that every bicategory can be strictified to a $2$-category. The schematic
\begin{center}
\begin{tikzpicture}[x=70mm, y=25mm,
block/.style ={rectangle, draw=black,
align=center, rounded corners,
minimum height=2em, outer sep=1.5mm},
top/.style={text width=13ex},
bot/.style={text width=26ex},
arrlabel/.style={font=\small}
]
\draw
(0,0) node[block,top] (bicat) {bicategories}
(1,0) node[block,top] (2cat) {$2$-categories}
(0,-1) node[block,bot] (mon) {monoidal categories}
(1,-1) node[block,bot] (strmon) {strict monoidal categories};
\draw[right hook->] (mon) -- node[arrlabel] {one object} (bicat);
\draw[right hook->] (strmon) -- node[arrlabel] {one object} (2cat);
\draw[left hook->,transform canvas={yshift=-1.2mm}] (2cat) -- (bicat);
\draw[left hook->,transform canvas={yshift=-1.2mm}] (strmon) -- (mon);
\draw[->,transform canvas={yshift=1.2mm}] (bicat) -- node[arrlabel] {strictification} (2cat);
\draw[->,transform canvas={yshift=1.2mm}] (mon) -- node[arrlabel] {strictification} (strmon);
\end{tikzpicture}
\end{center}
should help the reader keep track of the various concepts.\dqed
\end{motivation}
\begin{definition}\label{def:2category}
A \emph{$2$-category}\index{2-category}\index{category!2-}\index{bicategory!2-category} is a bicategory $(\B,1,c,a,\ell,r)$ in which the associator $a$, the left unitor $\ell$, and the right unitor $r$ are identity natural transformations.
\end{definition}
\begin{definition}\label{def:subiicategory}
Suppose $\B$ and $\B'$ are $2$-categories. Then $\B'$ is called a \emph{sub-$2$-category}\index{sub-2-category} of $\B$ if it is a sub-bicategory of $\B$ in the sense of \cref{def:subbicategory}.
\end{definition}
The terminology in \Cref{def:small-bicat} also applies to $2$-categories. For example, a $2$-category is \emph{locally small}\index{locally!small!2-category}\index{2-category!locally small}\index{small!locally - 2-category} if each hom category is a small category.
The following observation provides an explicit list of axioms for a $2$-category.
\begin{proposition}\label{2category-explicit}
A $2$-category $\B$ contains precisely the following data:\index{2-category!explicit data and axioms}
\begin{itemize}
\item A class $\B_0$ of objects.
\item For objects $X,Y\in\B_0$, a class $\B_1(X,Y)$ of $1$-cells from $X$ to $Y$.
\item An identity $1$-cell $1_X \in \B_1(X,X)$ for each object $X$.
\item For $1$-cells $f,f' \in \B_1(X,Y)$, a set $\B_2(X,Y)(f,f')$, or simply $\B_2(f,f')$, of $2$-cells from $f$ to $f'$.
\item An identity $2$-cell $1_f \in \B_2(f,f)$ for each $1$-cell $f \in \B_1(X,Y)$ and each pair of objects $X,Y$.
\item For objects $X$ and $Y$, and $1$-cells $f,f',f'' \in \B_1(X,Y)$, an assignment
\[\begin{tikzcd}\B_2(f',f'') \times \B_2(f,f') \rar{v} & \B_2(f,f'')\end{tikzcd},\qquad v(\alpha',\alpha) = \alpha'\alpha\] called the vertical composition.
\item For objects $X,Y$, and $Z$, an assignment
\[\begin{tikzcd}\B_1(Y,Z) \times \B_1(X,Y) \rar{c_1} & \B_1(X,Z)\end{tikzcd},\qquad c_1(g,f) = gf\]
called the horizontal composition of $1$-cells.
\item For objects $X,Y$, and $Z$, and $1$-cells $f,f' \in \B_1(X,Y)$ and $g,g' \in \B_1(Y,Z)$, an assignment
\[\begin{tikzcd}\B_2(Y,Z)(g,g') \times \B_2(X,Y)(f,f') \rar{c_2} & \B_2(X,Z)(gf,g'f')\end{tikzcd},\quad c_2(\beta,\alpha) = \beta * \alpha\]
called the horizontal composition of $2$-cells.
\end{itemize}
These data are required to satisfy the following axioms:
\begin{enumerate}[label=(\roman*)]
\item The vertical composition is associative and unital with respect to the identity $2$-cells, in the sense that \eqref{hom-category-axioms} holds.
\item The horizontal composition preserves identity $2$-cells and vertical composition, in the sense that \eqref{bicat-c-id} and \eqref{middle-four} hold.
\item The horizontal composition of $1$-cells is associative, in the sense that for $1$-cells $f \in \B_1(W,X)$, $g\in \B_1(X,Y)$, and $h\in \B_1(Y,Z)$, there is an equality
\begin{equation}\label{2cat-associator-id}
(hg)f = h(gf) \in \B_1(W,Z).
\end{equation}
\item The horizontal composition of $2$-cells is associative, in the sense that for $2$-cells $\alpha \in \B_2(W,X)(f,f')$, $\beta\in \B_2(X,Y)(g,g')$, and $\gamma\in \B_2(Y,Z)(h,h')$, there is an equality
\begin{equation}\label{2cat-associator-id-2cell}
(\gamma * \beta)* \alpha = \gamma * (\beta * \alpha)
\end{equation}
in $\B_2(W,Z)\bigl((hg)f, h'(g'f')\bigr)$.
\item The horizontal composition of $1$-cells is unital with respect to the identity $1$-cells, in the sense that there are equalities
\begin{equation}\label{2cat-unitor-id-1cell}
1_Y f = f = f1_X
\end{equation}
for each $f \in \B_1(X,Y)$.
\item The horizontal composition of $2$-cells is unital with respect to the identity $2$-cells of the identity $1$-cells, in the sense that there are equalities
\begin{equation}\label{2cat-unitor-id-2cell}
1_{1_Y} * \alpha = \alpha = \alpha * 1_{1_X}
\end{equation}
for each $\alpha \in \B_2(X,Y)(f,f')$.
\end{enumerate}
\end{proposition}
\begin{proof}
For each bicategory $\B$, the equalities in \eqref{hom-category-axioms} mean that each $\B(X,Y)$ is a category. The equalities \eqref{bicat-c-id} and \eqref{middle-four} mean that the horizontal composition $c$ is a functor. Now suppose $\B$ is a $2$-category; i.e., $a$, $\ell$, and $r$ are identity natural transformations. The equalities \eqref{2cat-associator-id} and \eqref{2cat-associator-id-2cell} hold because the associator $a$ is the identity natural transformation. The equalities in \eqref{2cat-unitor-id-1cell} and \eqref{2cat-unitor-id-2cell} hold because the unitors $\ell$ and $r$ are the identity natural transformations.
Conversely, suppose $\B$ is as in the statement of the Proposition. Then each $\B(X,Y)$ is a category by \eqref{hom-category-axioms}. The horizontal compositions of $1$-cells and $2$-cells together form a functor by \eqref{bicat-c-id} and \eqref{middle-four}. We define the natural transformations $a$, $\ell$, and $r$ as the identities, which are well-defined by \eqref{2cat-associator-id}--\eqref{2cat-unitor-id-2cell}. The unity axiom \eqref{bicat-unity} and the pentagon axiom \eqref{bicat-pentagon} are automatically true by \eqref{bicat-c-id}. So $\B$ is a $2$-category.
\end{proof}
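Axioms (iii) and (v) can be checked mechanically in concrete cases. As an illustration that is not an example from the text, sets and binary relations form a locally posetal $2$-category (the $2$-cells are inclusions of relations): composition of relations is strictly associative and strictly unital, in contrast with the composition of spans in \Cref{ex:spans}, which is associative only up to isomorphism.

```python
def rcompose(S, R):
    """Composite of relations R in A x B and S in B x C (axiom (iii))."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def rid(A):
    """Identity relation on a set A: the identity 1-cell of A."""
    return {(a, a) for a in A}

A, B = {0, 1}, {'x', 'y'}
R = {(0, 'x'), (1, 'x'), (1, 'y')}   # 1-cell A -> B
S = {('x', 'u'), ('y', 'v')}         # 1-cell B -> C
T = {('u', 7), ('v', 7)}             # 1-cell C -> D

# Strict associativity of horizontal composition, axiom (iii):
assert rcompose(rcompose(T, S), R) == rcompose(T, rcompose(S, R))

# Strict unitality with respect to identity 1-cells, axiom (v):
assert rcompose(rid(B), R) == R == rcompose(R, rid(A))
```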
Recall that $(\Cat,\times,\boldone)$ is the symmetric monoidal category with small categories as objects, functors as morphisms, product as monoidal product, and the discrete category $\boldone$ with one object as the monoidal unit. As discussed in \Cref{sec:enriched-cat}, it makes sense to talk about categories enriched in $\Cat$, or $\Cat$-categories for short.
\begin{proposition}\label{2cat-cat-enriched-cat}
A locally small $2$-category is precisely a $\Cat$-category.\index{2-category!as a $\Cat$-category}\index{enriched!category!2-category}
\end{proposition}
\begin{proof}
Suppose $\B$ is a locally small $2$-category. Using the explicit description of a $2$-category in \Cref{2category-explicit}, we observe that $\B$ is a $\Cat$-category as in \Cref{def:enriched-category}.
\begin{itemize}
\item Since $\B$ is a locally small bicategory, for each pair of objects $X,Y$ in $\B$, the hom category $\B(X,Y)$ is an object in $\Cat$---i.e., a small category---with its $1$-cells, $2$-cells, vertical composition, and identity $2$-cells.
\item For each triple of objects $X,Y,Z$ in $\B$, the composition
\[\begin{tikzcd}
\B(Y,Z)\times\B(X,Y)\rar{m_{XYZ}} & \B(X,Z)\end{tikzcd}\]
is the horizontal composition $c_{XYZ}$ in the bicategory $\B$, which is a functor, i.e., a morphism in $\Cat$.
\item For each object $X$ in $\B$, the identity of $X$ is the functor
\[\begin{tikzcd}
\boldone \rar{i_X} & \B(X,X)\end{tikzcd}\] given by the identity $1$-cell $1_X \in \B(X,X)$ of $X$.
\item The $\Cat$-category associativity diagram \eqref{enriched-cat-associativity} in $\B$ is commutative because the horizontal composition is associative for both $1$-cells and $2$-cells, in the sense of \eqref{2cat-associator-id} and \eqref{2cat-associator-id-2cell}.
\item The $\Cat$-category unity diagram \eqref{enriched-cat-unity} in $\B$ is commutative because the horizontal composition is unital for both $1$-cells and $2$-cells, in the sense of \eqref{2cat-unitor-id-1cell} and \eqref{2cat-unitor-id-2cell}.
\end{itemize}
Therefore, $\B$ is a $\Cat$-category. Conversely, the same identification shows that a $\Cat$-category is a locally small $2$-category.
\end{proof}
\begin{explanation}\label{expl:2cat-3descriptions}
There are three ways to think about a $2$-category.
\begin{description}
\item[Bicategorical] In \Cref{def:2category}, a $2$-category is defined as a bicategory with extra properties, namely, that the associator, the left unitor, and the right unitor are identities. This definition is useful in that every property of a bicategory is automatically true in a $2$-category.
\item[Set-Theoretic] Alternatively, the entire list of axioms of a $2$-category is stated in \Cref{2category-explicit}. This is a purely set-theoretic view of a $2$-category, which is useful in checking that something forms a $2$-category.
\item[Categorical] \Cref{2cat-cat-enriched-cat} states that locally small $2$-categories are categories enriched in $\Cat$. This is a categorical description of a $2$-category, which is useful in defining $2$-categorical concepts, including $2$-functors, $2$-natural transformations, $2$-adjunctions, $2$-monads, and so forth.
\end{description}
In the rest of this book, we will use all three descriptions of a $2$-category.\dqed
\end{explanation}
The rest of this section contains examples of $2$-categories.
\begin{example}[Strict Monoidal Categories]\label{ex:strict-moncat-2cat}
The identification in \Cref{ex:moncat-bicat} shows that strict monoidal categories are canonically identified with\index{strict!monoidal category!as a one-object 2-category}\index{2-category!one-object} one-object $2$-categories.\dqed
\end{example}
\begin{example}[Hom Strict Monoidal Categories]\label{ex:hom-strict-monoidal-cat}
Suppose $(\B,1,c,a,\ell,r)$ is a bicategory, and $X$ is an object in $\B$ such that $a_{XXXX}$, $\ell_{XX}$, and $r_{XX}$ are identities. This is true, for example, if $\B$ is a $2$-category. Then the identification in \Cref{ex:hom-monoidal-cat} gives the hom category $\B(X,X)$ the structure of a strict monoidal category.\dqed
\end{example}
\begin{example}[Relations]\label{ex:relations}
There is a \index{2-category!locally partially ordered}locally partially ordered $2$-category $\Rel$ of relations\index{2-category!of relations}\index{relations} consisting of the following data.
\begin{itemize}
\item Its objects are sets.
\item For two sets $A$ and $B$, the hom category $\Rel(A,B)$ is the category of relations from $A$ to $B$. A \emph{relation} from $A$ to $B$ is a subset $R \subseteq A \times B$. The set of relations from $A$ to $B$ is a partially ordered set under set inclusion. This partially ordered set is then regarded as a small category $\Rel(A,B)$ with a unique morphism $R \to R'$ if and only if $R \subseteq R' \subseteq A \times B$.
\item The identity $1$-cell of $A$ is the relation \[\Delta_A = \bigl\{(a,a) : a\in A\bigr\} \subseteq A \times A.\]
\item If $C$ is another set and if $S \subseteq B\times C$ is a relation from $B$ to $C$, then the horizontal composition $SR \subseteq A \times C$ is defined by
\[SR = \bigl\{(a,c) \in A\times C : \text{there exists $b\in B$ with $(a,b)\in R$ and $(b,c)\in S$}\bigr\}.\]
\item The horizontal composition of $2$-cells is defined by the condition that $R \subseteq R' \subseteq A \times B$ and $S \subseteq S' \subseteq B \times C$ imply \[SR \subseteq S'R' \subseteq A \times C.\] This property also shows that the middle four exchange \eqref{middle-four} is satisfied.
\end{itemize}
Since the horizontal composition is associative and unital for both $1$-cells and $2$-cells, $\Rel$ is a $2$-category by \Cref{2category-explicit}.\dqed
\end{example}
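The horizontal composition in $\Rel$ is easy to experiment with concretely. The following Python sketch (the function names and sample data are ours, not from the text) represents a relation as a set of pairs and checks that composition is strictly associative and unital, as the $2$-category structure requires.

```python
# A relation from A to B is a set of pairs (a, b) with a in A and b in B.
def compose(S, R):
    """Horizontal composition SR = {(a, c) : exists b with (a, b) in R, (b, c) in S}."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def identity(A):
    """The identity 1-cell of A: the diagonal relation."""
    return {(a, a) for a in A}

A, B = {1, 2}, {'x', 'y'}
R = {(1, 'x'), (2, 'x'), (2, 'y')}        # relation from A to B
S = {('x', True), ('y', False)}           # relation from B to {True, False}
T = {(True, 0), (False, 0), (False, 1)}   # relation from {True, False} to {0, 1}

# Horizontal composition is strictly associative and unital.
assert compose(T, compose(S, R)) == compose(compose(T, S), R)
assert compose(R, identity(A)) == R
assert compose(identity(B), R) == R
```

The $2$-cells (inclusions of relations) are automatically preserved: if $R \subseteq R'$ and $S \subseteq S'$, then every witness $b$ for $SR$ also witnesses $S'R'$.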
\begin{example}[$2$-Category of Small Categories]\label{ex:2cat-of-cat}
There is a $2$-category\index{2-category!of small categories}\index{small category!2-category} $\Cat$ consisting of the following data.
\begin{itemize}
\item Its objects are small categories.
\item For small categories $\C$ and $\D$, the hom category $\Cat(\D,\C)$ is the diagram category $\C^{\D}$. In other words:
\begin{itemize}
\item Its $1$-cells are functors $\D \to \C$.
\item Its $2$-cells are natural transformations between such functors.
\item The vertical composition is the vertical composition of such natural transformations.
\item The identity natural transformation $1_F$ for a functor $F : \D\to\C$ is the identity $2$-cell of $F$.
\end{itemize}
\item The identity functor $1_{\C}$ is the identity $1$-cell $1_{\C}$.
\item Horizontal composition of $1$-cells is the composition of functors.
\item Horizontal composition of $2$-cells is the horizontal composition of natural transformations.
\end{itemize}
The $2$-category axioms in \Cref{2category-explicit} are satisfied in $\Cat$.
For example, for the middle four exchange \eqref{middle-four}, consider the situation
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=1.5]
\node (X) at (-1,0) {$\C$}; \node (Y) at (1,0) {$\D$}; \node (Z) at (3,0) {$\E$};
\node[font=\Large] at (-.1,.3) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (.1,.3) {$\alpha$};
\node[font=\Large] at (1.9,.3) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (2.1,.3) {$\beta$};
\node[font=\Large] at (-.1,-.3) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (.1,-.3) {$\alpha'$};
\node[font=\Large] at (1.9,-.3) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (2.1,-.3) {$\beta'$};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge [bend left=60] node[above] {$F$} (Y)
(X) edge node[near start] {$F'$} (Y)
(Y) edge [bend left=60] node[above] {$G$} (Z)
(Y) edge node[near start] {$G'$} (Z)
(X) edge [bend right=60] node[below] {$F''$} (Y)
(Y) edge [bend right=60] node[below] {$G''$} (Z);
\end{tikzpicture}\]
with four natural transformations $\alpha$, $\alpha'$, $\beta$, and $\beta'$. For an object $X\in\C$, the morphisms $\bigl[(\beta'\beta)*(\alpha'\alpha)\bigr]_X$ and $\bigl[(\beta'*\alpha')(\beta *\alpha)\bigr]_X$ are the upper composite and the lower composite, respectively, in the diagram
\[\begin{tikzcd}
&& GF''X \arrow{rd}{\beta_{F''X}} &&\\
GFX \rar{G\alpha_X} & GF'X \arrow{ru}{G\alpha'_X} \arrow{rd}{\beta_{F'X}} && G'F''X \rar{\beta'_{F''X}} & G''F''X\\
&& G'F'X \arrow{ru}{G'\alpha'_X} &&
\end{tikzcd}\]
in $\E$. Since the quadrilateral is commutative by the naturality of $\beta : G \to G'$, the middle four exchange holds in $\Cat$. The other axioms of a $2$-category are checked similarly.
We previously used $\Cat$ to denote the category of small categories and functors. When the symbol $\Cat$ appears, the context will make it clear whether we are considering it as a category or as a $2$-category.\dqed
\end{example}
\begin{example}[$2$-Category of Small Enriched Categories]\label{ex:2cat-of-enriched-cat}
Recall the concepts of $\V$-categories, $\V$-functors, and $\V$-natural transformations in \Cref{sec:enriched-cat} for a monoidal category $\V$. There is a $2$-category\index{2-category!of $\V$-categories}\index{enriched!category!2-category} $\Cat_{\V}$ consisting of the following data.
\begin{itemize}
\item Its objects are small $\V$-categories.
\item For small $\V$-categories $\C$ and $\D$, the hom category $\Cat_{\V}(\D,\C)$ has:
\begin{itemize}
\item $\V$-functors $\D \to \C$ as $1$-cells.
\item $\V$-natural transformations between such $\V$-functors as $2$-cells.
\item vertical composition given by that of $\V$-natural transformations.
\item the identity $\V$-natural transformation $1_F$ for a $\V$-functor $F : \D\to\C$ as the identity $2$-cell of $F$.
\end{itemize}
\item The identity $\V$-functor $1_{\C}$ is the identity $1$-cell $1_{\C}$.
\item Horizontal composition of $1$-cells is the composition of $\V$-functors.
\item Horizontal composition of $2$-cells is that of $\V$-natural transformations.
\end{itemize}
The verification that $\Cat_{\V}$ is a $2$-category is adapted from \Cref{ex:2cat-of-cat}, which is the $\V=\Set$ case.\dqed
\end{example}
\begin{example}[$2$-Vector Spaces revisited]\label{ex:twovect-tc}
Related to the bicategory $\twovc$ in \Cref{ex:two-vector-space} is the $2$-category\label{notation:twovtc} $\twovtc$, of \emph{totally coordinatized $2$-vector spaces}\index{2-vector space!totally coordinatized}\index{2-category!of totally coordinatized 2-vector spaces}, specified by the following data.
\begin{description}
\item[Objects] The objects in $\twovtc$ are those in $\twovc$, i.e., $\{n\}$ for non-negative integers $n$.
\item[Hom-Category] The hom-category $\twovtc(\{m\},\{n\})$ is defined as follows.
\begin{itemize}
\item Its objects are $n\times m$ matrices with non-negative integers as entries.
\item A morphism \[\theta = (\theta_{ij}) : V = (v_{ij}) \to V' = (v'_{ij})\] is an $n\times m$ matrix whose $(i,j)$-entry $\theta_{ij}$ is a $v'_{ij}\times v_{ij}$ complex matrix for $1\leq i \leq n$ and $1\leq j \leq m$.
\item The identity morphism of an object $V = (v_{ij})$ is the $n\times m$ matrix $1_V$ with $(i,j)$-entry the $v_{ij}\times v_{ij}$ identity matrix.
\item With $\theta$ as above and another morphism $\theta' = (\theta'_{ij}) : V' \to V'' = (v''_{ij})$, their composition is defined by coordinate-wise matrix multiplication
\[\theta'\theta = \big(\theta'_{ij}\theta_{ij}\big) : V \to V''.\]
This is well defined because $\theta'_{ij}$ is a $v''_{ij}\times v'_{ij}$ complex matrix, and $\theta_{ij}$ is a $v'_{ij}\times v_{ij}$ complex matrix.
\end{itemize}
\item[Identity $1$-Cells] The identity $1$-cell $1^n$ of an object $\{n\}$ is the $n\times n$ identity matrix.
\item[Horizontal Composition] On $1$-cells, it is given by matrix multiplication of non-negative integer matrices.
To define the horizontal composition on $2$-cells, first we need to define direct sum and tensor product of complex matrices. Suppose $A = (a_{ij})$ is an $m\times n$ complex matrix, and $B = (b_{kl})$ is a $p\times q$ complex matrix. Their \emph{matrix direct sum}\index{matrix!direct sum} is the $(m+p)\times(n+q)$ complex matrix
\[A \oplus B = \begin{bmatrix}
A & 0 \\ 0 & B\end{bmatrix}\]
in which each $0$ means that every entry in that rectangular region, either $m\times q$ or $p\times n$, is $0$. Their \emph{matrix tensor product}\index{matrix!tensor product} is the $(mp)\times(nq)$ complex matrix
\[A\tensor B = \begin{bmatrix}
a_{11}B & a_{12}B & \cdots & a_{1n}B \\
a_{21}B & a_{22}B & \cdots & a_{2n}B \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1}B & a_{m2}B & \cdots & a_{mn}B
\end{bmatrix}\]
with each $a_{ij}B = (a_{ij}b_{kl})_{k,l}$ the scalar multiplication.
For $2$-cells $\theta = (\theta_{ij}) : V \to V'$ in $\twovtc(\{m\},\{n\})$ and $\phi = (\phi_{ki}) : W \to W'$ in $\twovtc(\{n\},\{p\})$, their horizontal composite
\[\phi * \theta = \Big(\bigoplus_{i=1}^n (\phi_{ki}\tensor\theta_{ij})\Big)_{k,j} : WV \to W'V'\]
is the $2$-cell in $\twovtc(\{m\},\{p\})$ with $(k,j)$-entry given by the formula \eqref{phi-theta-kj}, and with $\oplus$ and $\tensor$ interpreted as, respectively, matrix direct sum and matrix tensor product.
\item[Associator and Unitors] They are all defined as the identities.
\end{description}
This finishes the definition of $\twovtc$.
To check that $\twovtc$ is actually a $2$-category, we use the same reasoning as for the bicategory $\twovc$ in \Cref{ex:two-vector-space}, or the explicit criteria in \Cref{2category-explicit}. The associator, the left unitor, and the right unitor are well defined because matrix multiplication is strictly associative and unital with respect to identity matrices.
In \Cref{ex:two-vector-strict-functor,cor:two-vector-spaces}, we will observe that the bicategory $\twovc$ is biequivalent to the $2$-category $\twovtc$. So the latter is a strictification of the former.
\end{example}
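The matrix direct sum $A \oplus B$ and matrix tensor product $A \tensor B$ used in the horizontal composition of $\twovtc$ can be sketched directly. The following Python fragment (function names ours) implements both operations on matrices represented as nested lists, matching the block formulas above: $A\oplus B$ is block diagonal, and the $(i,j)$ block of $A\tensor B$ is $a_{ij}B$.

```python
def direct_sum(A, B):
    """Matrix direct sum: the block-diagonal matrix [[A, 0], [0, B]]."""
    n, q = len(A[0]), len(B[0])
    return [row + [0] * q for row in A] + [[0] * n + row for row in B]

def tensor(A, B):
    """Matrix tensor (Kronecker) product: the (i, j) block is a_ij * B."""
    p, q = len(B), len(B[0])
    return [
        [A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
        for i in range(len(A)) for k in range(p)
    ]

# An m x n and a p x q matrix give an (m+p) x (n+q) direct sum
# and an (mp) x (nq) tensor product.
assert direct_sum([[1]], [[2, 3], [4, 5]]) == [[1, 0, 0], [0, 2, 3], [0, 4, 5]]
assert tensor([[1, 2]], [[0, 1], [1, 0]]) == [[0, 1, 0, 2], [1, 0, 2, 0]]
```

With these two operations, the horizontal composite $\phi * \theta$ of $2$-cells in $\twovtc$ is computed entrywise by the displayed formula $\bigoplus_i (\phi_{ki}\tensor\theta_{ij})$.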
More examples of bicategories and $2$-categories will be given throughout the rest of this book and in the exercises in \Cref{sec:2cat_bicat_exercises}.
\section{\texorpdfstring{$2$}{2}-Category of Multicategories}\label{sec:multicategories}
In \Cref{ex:2cat-of-cat} we observed that there is a $2$-category $\Cat$ of small categories, functors, and natural transformations. In this section we present an example of a $2$-category $\Multicat$ that generalizes $\Cat$ to multicategories, to be defined shortly.
\begin{motivation}\label{mot:multicategory}
To motivate the definition of a multicategory, consider a set $X$ and the set $\Map(X^n,X)$ of functions from $X^n = X \times \cdots \times X$, with $n \geq 0$ factors of $X$, to $X$. We take $X^0$ to mean a one-element set $*$. Given functions
\begin{itemize}
\item $f \in \Map(X^n,X)$ with $n \geq 1$ and
\item $g_i \in \Map(X^{m_i},X)$ for each $1 \leq i \leq n$,
\end{itemize}
one can form the new function
\[f \circ (g_1,\ldots, g_n) \in \Map\bigl(X^{m_1 + \cdots + m_n},X\bigr)\]
as the composite
\[\begin{tikzcd}
X^{m_1} \times \cdots \times X^{m_n} \ar{rr}{(g_1,\ldots, g_n)} && X^{n} \rar{f} & X.\end{tikzcd}\]
In other words, first apply the $g_i$'s simultaneously and then apply $f$.
More generally, we may allow the domain and the codomain of each function to be from different sets, i.e., functions
\[\begin{tikzcd}X_{c_1} \times \cdots \times X_{c_n} \rar{f} & X_d.\end{tikzcd}\] In this case, the above composition is defined if and only if the codomain of $g_i$ is $X_{c_i}$ for $1\leq i \leq n$; i.e., they match with the domain of $f$. Together with permutations of the domain factors, these functions satisfy some associativity, unity, and equivariance conditions. A multicategory is an abstraction of this example that allows one to encode operations with multiple, possibly zero, inputs and one output, and their compositions.\dqed
\end{motivation}
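The composite $f \circ (g_1,\ldots,g_n)$ described above can be written out concretely. In the following Python sketch (names ours), \texttt{compose\_multi} splits the inputs among the $g_i$'s, applies them simultaneously, and then applies $f$, exactly as in the displayed composite.

```python
def compose_multi(f, gs, arities):
    """Form f o (g_1, ..., g_n): split the inputs among the g_i, then apply f."""
    def h(*xs):
        outs, pos = [], 0
        for g, m in zip(gs, arities):
            outs.append(g(*xs[pos:pos + m]))
            pos += m
        return f(*outs)
    return h

add = lambda x, y: x + y   # a 2-ary operation in Map(X^2, X)
neg = lambda x: -x         # a 1-ary operation
seven = lambda: 7          # a 0-ary operation, i.e., a constant

h = compose_multi(add, [neg, seven], [1, 0])   # h(x) = add(neg(x), seven())
assert h(3) == 4
```

Note that $0$-ary operations are allowed: \texttt{seven} consumes no inputs, so $h$ has arity $1 + 0 = 1$.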
To define a multicategory precisely, we first need some notation.
\begin{definition}\label{def:profile}
Suppose $\colorc$\label{notation:s-class} is a class.
\begin{enumerate}
\item Denote by\index{profile}\label{notation:profs}
\[\Profc = \coprodover{n \geq 0}\ \colorc^{\times n}\]
the class of finite ordered sequences of elements in $\colorc$. An element in $\Profc$ is called a \emph{$\colorc$-profile}.
\item A typical $\colorc$-profile of length\index{length of a profile} $n=|\uc|$ is denoted by $\uc = (c_1, \ldots, c_n) \in \colorc^{\times n}$\label{notation:us}. The empty $\colorc$-profile\index{empty profile} is denoted by $\varnothing$.
\item An element in $\Profcc$ is denoted vertically as\label{notation:duc} $\duc$ with $d\in\colorc$ and $\uc\in\Profc$.\defmark
\end{enumerate}
\end{definition}
\begin{definition}\label{notation:sigma-n}
For each integer $n \geq 0$, the symmetric group on $n$ letters is denoted by \index{symmetric group}$\Sigma_n$.
\end{definition}
Now we come to the definition of multicategory. In the literature,
what we call a multicategory in the next definition is sometimes
called a symmetric multicategory, in which case a multicategory refers
to the non-symmetric version. Since we only consider the version with
symmetric group action, we simply call them multicategories. See
\cref{note:multicats-polycats} for further discussion.
\begin{definition}\label{def:multicategory}
A \emph{multicategory}\index{multicategory}\index{category!multi-} $(\C, \gamma, \operadunit)$\label{notation:multicategory}, also called an \index{operad}\emph{operad}, consists of the following data.
\begin{itemize}
\item $\C$ is equipped with a class $\colorc$ of\index{object!multicategory} \emph{objects}.
\item For each $\duc \in \Profcc$ with $d\in\colorc$ and $\uc=(c_1,\ldots,c_n)\in\Profc$, $\C$ is equipped with a set\label{notation:cduc}
\[\C\duc = \C\sbinom{d}{c_1,\ldots,c_n}\]
of \emph{$n$-ary operations}\index{n-ary operation@$n$-ary operation}
with \emph{input profile}\index{input profile} $\uc$ and \emph{output}\index{output} $d$.
\item
For $\duc \in \Profcc$ as above and a permutation $\sigma \in \Sigma_n$, $\C$ is equipped with a bijection
\[\begin{tikzcd}\C\duc \rar{\sigma}[swap]{\cong} & \C\ducsigma,\end{tikzcd}\]
called the \emph{right action}\index{right action} or the \index{symmetric group!action}\emph{symmetric group action}, in which\label{notation:c-sigma}
\[\uc\sigma = (c_{\sigma(1)}, \ldots, c_{\sigma(n)})\]
is the right permutation\index{right permutation} of $\uc$ by $\sigma$.
\item For each $c \in \colorc$, $\C$ is equipped with an element\label{notation:unit-c}
\[\operadunit_c \in \C\cc,\] called the \index{colored unit}\emph{$c$-colored unit}.
\item For $\duc \in \Profcc$ as above with $n \geq 1$, suppose $\ub_1, \ldots, \ub_n \in \Profc$ and $\ub = (\ub_1,\ldots,\ub_n) \in \Profc$ is their \index{concatenation}concatenation. Then $\C$ is equipped with a map\label{notation:multicategory-composition}
\[\begin{tikzcd}
\C\duc \times \prod\limits_{i=1}^n \C\ciubi \rar{\gamma} & \C\dub\end{tikzcd}\]
called the \index{multicategory!composition}\emph{composition}.
\end{itemize}
These data are required to satisfy the following axioms.
\begin{description}
\item[Symmetric Group Action]
For $d\in\colorc$, $\uc\in\Profc$ with length $n$, and $\sigma,\tau\in\Sigma_n$, the diagram
\[\begin{tikzcd}
\C\duc \arrow{rd}[swap]{\sigma\tau} \rar{\sigma} & \C\ducsigma \dar{\tau}\\
& \C\sbinom{d}{\uc\sigma\tau}
\end{tikzcd}\]
is commutative. Moreover, the identity permutation in $\Sigma_n$ acts as the identity map on $\C\duc$.
\item[Associativity] Suppose that:\index{associativity!multicategory}
\begin{itemize}
\item in the definition of the composition $\gamma$,
\[\ub_j = \bigl(b^j_1, \ldots , b^j_{k_j}\bigr) \in \Profc\]
has length $k_j \geq 0$ for each $1 \leq j \leq n$, with at least one $k_j > 0$;
\item $\ua^j_i \in \Profc$ for each $1 \leq j \leq n$ and $1 \leq i \leq k_j$;
\item for each $1 \leq j \leq n$,
\[\ua_j = \begin{cases}\bigl(\ua^j_1, \ldots , \ua^j_{k_j}\bigr)
& \text{if $k_j > 0$},\\
\varnothing & \text{if $k_j = 0$};\end{cases}\]
\item $\ua = (\ua_1,\ldots , \ua_n)$ is their concatenation.
\end{itemize}
Then the \emph{associativity diagram}
\begin{equation}\label{multicategory-associativity}
\begin{tikzcd}\C\duc \times \biggl[\prod\limits_{j=1}^n \C\cjubj\biggr]
\times \prod\limits_{j=1}^n \biggl[\prod\limits_{i=1}^{k_j} \C\bjiuaji\biggr]
\rar{(\gamma, 1)} \dar{\cong}[swap]{\text{permute}}
& \C\dub \times \prod\limits_{j=1}^{n} \biggl[\prod\limits_{i=1}^{k_j} \C\bjiuaji\biggr] \arrow{dd}{\gamma}\\
\C\duc \times \prod\limits_{j=1}^n \biggl[\C\cjubj
\times \prod\limits_{i=1}^{k_j} \C\bjiuaji\biggr]
\dar[swap]{(1, \smallprod_j \gamma)} &\\
\C\duc \times \prod\limits_{j=1}^n \C\sbinom{c_j}{\ua_j} \rar{\gamma} & \C\dua
\end{tikzcd}\end{equation}
is commutative.
\item[Unity]
Suppose $d \in \colorc$.\index{unity!multicategory}
\begin{enumerate}
\item If $\uc = (c_1,\ldots,c_n) \in \Profc$ has length $n \geq 1$, then the \emph{right unity diagram}\index{right unity}
\begin{equation}\label{multicategory-right-unity}
\begin{tikzcd} \C\duc \times \{*\}^{n} \dar[swap]{(1, \smallprod \operadunit_{c_j})} \rar{\cong} & \C\duc \dar[equal]\\
\C\duc \times \prod\limits_{j=1}^n \C\cjcj \rar{\gamma} & \C\duc
\end{tikzcd}
\end{equation}
is commutative. Here $\{*\}$ is the one-point set, and $\{*\}^n$ is its $n$-fold product.
\item
If $\ub \in \Profc$, then the \index{unity!multicategory}\emph{left unity diagram}
\begin{equation}\label{multicategory-left-unity}
\begin{tikzcd}
\{*\} \times \C\dub \dar[swap]{(\operadunit_d, 1)} \rar{\cong} &
\C\dub \dar[equal]\\
\C\dd \times \C\dub \rar{\gamma} & \C\dub
\end{tikzcd}
\end{equation}
is commutative.
\end{enumerate}
\item[Equivariance]
Suppose that in the definition of $\gamma$, $|\ub_j| = k_j \geq 0$.\index{equivariance!multicategory}
\begin{enumerate}
\item For each $\sigma \in \Sigma_n$, the \index{top equivariance}\emph{top equivariance diagram}
\begin{equation}\label{operadic-eq-1}
\begin{tikzcd}[column sep=large]\C\duc \times \prod\limits_{j=1}^n \C\cjubj
\dar[swap]{\gamma} \rar{(\sigma, \sigma^{-1})}
& \C\ducsigma \times \prod\limits_{j=1}^n \C\sbinom{c_{\sigma(j)}}{\ub_{\sigma(j)}} \dar[swap]{\gamma}\\
\C\sbinom{d}{\ub_1,\ldots,\ub_n} \rar{\sigma\langle k_{\sigma(1)}, \ldots , k_{\sigma(n)}\rangle}
& \C\sbinom{d}{\ub_{\sigma(1)},\ldots,\ub_{\sigma(n)}}
\end{tikzcd}
\end{equation}
is commutative. Here $\sigma\langle k_{\sigma(1)}, \ldots , k_{\sigma(n)} \rangle \in \Sigma_{k_1+\cdots+k_n}$\label{notation:block-permutation} is the block permutation\index{block!permutation} that permutes the $n$ consecutive blocks of lengths $k_{\sigma(1)}$, $\ldots$, $k_{\sigma(n)}$ as $\sigma$ permutes $\{1,\ldots,n\}$, leaving the relative order within each block unchanged.
\item
Given permutations $\tau_j \in \Sigma_{k_j}$ for $1 \leq j \leq n$, the \index{bottom equivariance}\emph{bottom equivariance diagram}
\begin{equation}\label{operadic-eq-2}
\begin{tikzcd}\C\duc \times \prod\limits_{j=1}^n \C\cjubj
\dar[swap]{\gamma} \rar{(1, \smallprod \tau_j)} &
\C\duc \times \prod\limits_{j=1}^n \C\sbinom{c_j}{\ub_j\tau_j}\dar[swap]{\gamma} \\
\C\sbinom{d}{\ub_1,\ldots,\ub_n} \rar{\tau_1 \times \cdots \times \tau_n}
& \C\sbinom{d}{\ub_1\tau_1,\ldots,\ub_n\tau_n}
\end{tikzcd}
\end{equation}
is commutative. Here the block sum\index{block!sum} $\tau_1 \times\cdots \times\tau_n \in \Sigma_{k_1+\cdots+k_n}$\label{notation:block-sum} is the image of $(\tau_1, \ldots, \tau_n)$ under the canonical inclusion \[\Sigma_{k_1} \times \cdots \times \Sigma_{k_n} \to \Sigma_{k_1 + \cdots + k_n}.\]
\end{enumerate}
\end{description}
This finishes the definition of a multicategory.
Moreover:
\begin{itemize}
\item If $\C$ has only one object, then its set of $n$-ary operations is
denoted by $\C_n$.
\item A multicategory is \emph{small}\index{multicategory!small} if its class of objects is a set.\defmark
\end{itemize}
\end{definition}
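The block permutation $\sigma\langle k_{\sigma(1)},\ldots,k_{\sigma(n)}\rangle$ and the block sum $\tau_1\times\cdots\times\tau_n$ in the equivariance axioms can be sketched as follows. In this Python fragment (conventions ours), a permutation in $\Sigma_n$ is a tuple $p$ with $p[i-1] = \sigma(i)$, and the right action sends $\uc$ to $(c_{\sigma(1)},\ldots,c_{\sigma(n)})$.

```python
def act(u, sigma):
    """Right permutation: u . sigma = (u_{sigma(1)}, ..., u_{sigma(n)})."""
    return tuple(u[s - 1] for s in sigma)

def block_permutation(sigma, ks):
    """sigma<k_{sigma(1)}, ..., k_{sigma(n)}>: permute n consecutive blocks of
    lengths k_1, ..., k_n as sigma permutes {1, ..., n}, keeping the relative
    order within each block unchanged."""
    offsets = [0]
    for k in ks:
        offsets.append(offsets[-1] + k)
    beta = []
    for j in sigma:  # the blocks appear in the order b_{sigma(1)}, ..., b_{sigma(n)}
        beta.extend(range(offsets[j - 1] + 1, offsets[j - 1] + ks[j - 1] + 1))
    return tuple(beta)

def block_sum(taus):
    """tau_1 x ... x tau_n: act by tau_j within the j-th block."""
    out, shift = [], 0
    for tau in taus:
        out.extend(t + shift for t in tau)
        shift += len(tau)
    return tuple(out)

b1, b2 = ('a', 'b'), ('x',)
swap = (2, 1)                                    # the transposition in Sigma_2
beta = block_permutation(swap, [len(b1), len(b2)])
assert act(b1 + b2, beta) == b2 + b1             # blocks are swapped wholesale
assert act(b1 + b2, block_sum([(2, 1), (1,)])) == ('b', 'a', 'x')
```

So the block permutation rearranges whole blocks of a concatenation $(\ub_1,\ldots,\ub_n)$, while the block sum permutes within each block, which is exactly how the two actions appear in the top and bottom equivariance diagrams.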
\begin{explanation}\label{expl:multicategory}
Suppose $\C$ is a multicategory with object class $\colorc$.
\begin{itemize}
\item For $y \in \C\duc$ and $x_i \in \C\ciubi$ for $1 \leq i \leq n$, the image of the composition is written as
\[\gamma\bigl(y; x_1, \ldots, x_n\bigr) = y(x_1,\ldots,x_n) \in \C\dub.\]
\item The associativity axiom \eqref{multicategory-associativity} means the equality
\[\begin{split}
&\bigl(y(x_1,\ldots,x_n)\bigr)\bigl(w^1_1,\ldots,w^1_{k_1},\ldots,w^n_1,\ldots,w^n_{k_n}\bigr)\\
&= y\Bigl(x_1(w^1_1,\ldots,w^1_{k_1}),\ldots, x_n(w^n_1,\ldots,w^n_{k_n})\Bigr)
\end{split}\]
for $y\in \C\duc$, $x_j \in \C\cjubj$ for $1\leq j \leq n$, and $w^j_i \in \C\bjiuaji$ for $1\leq j \leq n$ and $1 \leq i \leq k_j$.
\item The right and left unity axioms \eqref{multicategory-right-unity} and \eqref{multicategory-left-unity} mean the equalities
\[y(\operadunit_{c_1},\ldots,\operadunit_{c_n}) = y = \operadunit_d(y).\]
\item The top and bottom equivariance axioms \eqref{operadic-eq-1} and \eqref{operadic-eq-2} mean the equalities
\[\begin{split}
\bigl(y(x_1,\ldots,x_n)\bigr)\sigma\langle k_{\sigma(1)}, \ldots , k_{\sigma(n)} \rangle
&= (y\sigma)(x_{\sigma(1)},\ldots,x_{\sigma(n)}),\\
\bigl(y(x_1,\ldots,x_n)\bigr)(\tau_1 \times\cdots \times\tau_n) &= y(x_1\tau_1,\ldots,x_n\tau_n),\end{split}\]
respectively.\dqed
\end{itemize}
\end{explanation}
\begin{example}[Categories]\label{ex:category-as-operad}
Categories are multicategories with only unary operations.\index{category!multicategory with only unary operations}
Indeed, a category $\C$ with $\Ob(\C)=\colorc$ may be regarded as a multicategory $\C'$ with
\[\C'\duc = \begin{cases} \C(c,d) & \text{if $\uc=c\in\colorc$},\\
\varnothing & \text{otherwise}.\end{cases}\]
In other words, $\C'$ only has unary operations, which are the morphisms in $\C$. The composition in $\C'$ is the one in $\C$. The colored units are the identity morphisms in $\C$. There are no non-trivial symmetric group actions. The unity and associativity in the category $\C$ become those in the multicategory $\C'$. The reverse identification is also true. In particular, monoids, which can be identified with one-object categories, are multicategories with only one object and only unary operations.\dqed
\end{example}
\begin{example}[Endomorphism Operads]\label{ex:endomorphism}
For a non-empty class $\colorc$, suppose $X = \{X_c\}_{c\in \colorc}$ is a $\colorc$-indexed class of sets. Then there is a multicategory $\End(X)$, called the \index{endomorphism!operad}\index{operad!endomorphism}\emph{endomorphism operad}, whose sets of $n$-ary operations are \[\End(X)\duc = \Map\bigl(X_{c_1} \times \cdots \times X_{c_n}, X_d\bigr)\] for $\duc\in\Profcc$ with $\uc=(c_1,\ldots,c_n)$. Here $\Map(A,B)$ is the set of functions from $A$ to $B$. The composition is the one in \Cref{mot:multicategory}. In other words, for
\begin{itemize}
\item $f \in \End(X)\duc$ with $\uc=(c_1,\ldots,c_n)$,
\item $g_j \in \End(X)\cjubj$ for $1\leq j \leq n$, and
\item the notation $X_{\uc} = X_{c_1}\times\cdots\times X_{c_n}$,
\end{itemize}
the element
\[\gamma\bigl(f;g_1,\ldots,g_n\bigr) \in \End(X)\dub\]
with $\ub = (\ub_1,\ldots,\ub_n)$ is the composite function
\[\begin{tikzcd}[column sep=scriptsize]
X_{\ub_1} \times \cdots\times X_{\ub_n} \ar{rr}{(g_1,\ldots,g_n)} && X_{\uc} \rar{f} & X_d.
\end{tikzcd}\]
The colored units are the identity functions of the sets $X_c$. The $\Sigma_n$-action is induced by the permutations of the $n$ factors in the domain $X_{c_1} \times \cdots \times X_{c_n}$. The multicategory axioms can actually be read off from this example.\dqed
\end{example}
\begin{example}[Associative Operad]\label{ex:ass}
There is a multicategory\label{notation:As} $\As$ with only one object, called the \index{associative operad}\index{operad!associative}\emph{associative operad}, with \[\As_n = \Sigma_n \forspace n \geq 0.\] The $\Sigma_n$-action is the group multiplication, with the identity permutation in $\Sigma_1$ as the unit. Given permutations $\sigma \in \Sigma_n$ with $n \geq 1$ and $\tau_i \in \Sigma_{k_i}$ for each $1 \leq i \leq n$, the composition is given by the product
\[\gamma\bigl(\sigma; \tau_1, \ldots, \tau_n\bigr) = \sigma\langle k_{1},\ldots, k_{n}\rangle \cdot (\tau_1 \times\cdots \times\tau_n) \in \Sigma_{k_1 + \cdots + k_n},\] with the notations in \eqref{operadic-eq-1} and \eqref{operadic-eq-2}.\dqed
\end{example}
\begin{example}[Commutative Operad]\label{ex:com}
There is a multicategory\label{notation:Com} $\Com$ with only one object, called the \index{commutative operad}\index{operad!commutative}\emph{commutative operad}, with \[\Com_n = \{*\} \forspace n \geq 0.\] Its composition and symmetric group actions are all trivial.\dqed
\end{example}
Next we extend the concept of a functor to multicategories.
\begin{definition}\label{def:multicategory-functor}
A \emph{multifunctor}\index{multifunctor}\index{functor!multi-} $F : \C \to \D$ between multicategories $\C$ and $\D$ consists of the following data:
\begin{itemize}
\item an assignment \[F : \colorc \to \colord,\] where $\colorc$ and $\colord$ are the classes of objects of $\C$ and $\D$, respectively;
\item for each $\czerouc \in \Profcc$ with $\uc=(c_1,\ldots,c_n)$, a function
\[F : \C\czerouc \to \D\Fczerouc,\] where $F\uc=(Fc_1,\ldots,Fc_n)$.
\end{itemize}
These data are required to preserve the symmetric group action, the colored units, and the composition in the following sense.
\begin{description}
\item[Symmetric Group Action] For each $\czerouc$ as above and each permutation $\sigma \in \Sigma_n$, the diagram\index{equivariance!multifunctor}
\begin{equation}\label{multifunctor-equivariance}
\begin{tikzcd}
\C\czerouc \ar{d}{\cong}[swap]{\sigma} \ar{r}{F} & \D\Fczerouc \ar{d}{\cong}[swap]{\sigma}\\
\C\czeroucsigma \ar{r}{F} & \D\Fczeroucsigma\end{tikzcd}
\end{equation}
is commutative.
\item[Units] For each $c\in\colorc$, the equality
\begin{equation}\label{multifunctor-unit}
F\operadunit_c = \operadunit_{Fc} \in \D\Fcc
\end{equation}
holds.
\item[Composition] The diagram
\begin{equation}\label{multifunctor-composition}
\begin{tikzcd}
\C\duc \times \prod\limits_{i=1}^n \C\ciubi \dar[swap]{\gamma} \ar{r}{(F,\prod F)} & \D\Fduc \times \prod\limits_{i=1}^n \D\Fciubi \dar{\gamma}\\
\C\dub \ar{r}{F} & \D\Fdub
\end{tikzcd}
\end{equation}
is commutative.
\end{description}
This finishes the definition of a multifunctor.
Moreover:
\begin{enumerate}
\item For another multifunctor $G : \D\to\E$ between multicategories, where $\E$ has object class $\colore$, the \index{composition!multifunctor}\emph{composition} $GF : \C\to\E$ is the multifunctor defined by composing the assignments on objects
\[\begin{tikzcd} \colorc \ar{r}{F} & \colord \ar{r}{G} & \colore
\end{tikzcd}\]
and the functions on $n$-ary operations
\[\begin{tikzcd}
\C\czerouc \ar{r}{F} & \D\Fczerouc \ar{r}{G} & \E\GFczerouc.
\end{tikzcd}\]
\item The \index{identity!multifunctor}\emph{identity multifunctor} $1_{\C} : \C\to\C$ is defined by the identity assignment on objects and the identity function on $n$-ary operations.\defmark
\end{enumerate}
\end{definition}
\begin{lemma}\label{multifunctor-comp-welldefined}
Composition of multifunctors is well defined, associative, and unital with respect to the identity multifunctors.
\end{lemma}
\begin{proof}
For multifunctors $F : \C\to\D$ and $G : \D\to\E$, $GF$ preserves the compositions in $\C$ and $\E$ because the diagram
\[\begin{tikzcd}
\C\duc \times \prod\limits_{i=1}^n \C\ciubi \dar[swap]{\gamma} \ar{r}{(F,\prod F)} \ar[bend left=20, start anchor={[xshift=-.8cm]}, end anchor={[xshift=.8cm]}]{rr}{(GF,\prod GF)} & \D\Fduc \times \prod\limits_{i=1}^n \D\Fciubi \dar{\gamma} \ar{r}{(G,\prod G)} & \E\GFduc \times \prod\limits_{i=1}^n \E\GFciubi \dar{\gamma}\\
\C\dub \ar{r}{F} \ar[bend right=20]{rr}{GF} & \D\Fdub \ar{r}{G} & \E\GFdub
\end{tikzcd}\]
is commutative. Similarly, $GF$ preserves the symmetric group actions and the colored units in $\C$ and $\E$, so it is a multifunctor. Composition of multifunctors is associative and unital because composition of functions is associative and unital.
\end{proof}
\begin{example}[Functors]\label{ex:functor-multifunctor}
A functor\index{functor!as a multifunctor} $F : \C\to\D$ between categories is also a multifunctor when $\C$ and $\D$ are regarded as multicategories with only unary operations as in \Cref{ex:category-as-operad}.\dqed
\end{example}
\begin{example}[Commutative Operad]\label{ex:com-is-terminal}
For each multicategory $\C$, there exists a unique multifunctor \[F : \C\to\Com,\] where $\Com$ is the commutative operad in \Cref{ex:com}.\dqed
\end{example}
\begin{example}[Operad Algebras]\label{ex:operad-algebra}
Suppose $\C$ is a multicategory, called an operad in this example, with a non-empty class $\colorc$ of objects. Suppose $X = \{X_c\}_{c\in \colorc}$ is a $\colorc$-indexed class of sets, and $\End(X)$ is the endomorphism operad in \Cref{ex:endomorphism}. A \emph{$\C$-algebra}\index{operad!algebra} structure on $X$ is an object-preserving multifunctor \[\theta : \C \to \End(X).\] At a typical $\duc$-entry for $\duc \in \Profcc$ with $\uc=(c_1,\ldots,c_n)$, $\theta$ can be written in adjoint form as the map
\[\begin{tikzcd}
\C\duc \times X_{c_1}\times \cdots\times X_{c_n} \ar{r}{\theta} & X_d.
\end{tikzcd}\]
For example:
\begin{enumerate}
\item Suppose $(M,\mu,\operadunit)$ is a monoid, regarded as an operad with one object and only unary operations as in \Cref{ex:category-as-operad}. Then an $M$-algebra structure on a set $X$ is a left $M$-action\index{set!with left monoid action} on $X$ in the usual sense. Indeed, an $M$-algebra structure morphism is a map \[M\times X \to X\] that is associative and unital with respect to $\mu$ and $\operadunit$.
\item For the associative operad $\As$ in \Cref{ex:ass}, an $\As$-algebra is precisely a monoid\index{monoid!as an operad algebra} in the usual sense. Indeed, for an $\As$-algebra $M$:
\begin{itemize}
\item The action by $\As_0$ determines the unit $\operadunit \in M$.
\item The action by the identity permutation $\id_2 \in \As_2=\Sigma_2$ gives $M$ a multiplication \[\mu : M\times M \to M.\] The associativity and unity axioms of a monoid hold in $M$ by the permutation identities
\[\begin{split}
\gamma(\id_2;\id_2,\id_1) &= \id_3 = \gamma(\id_2; \id_1,\id_2) \in \Sigma_3,\\
\gamma(\id_2;\id_1,\id_0) &= \id_1=\gamma(\id_2;\id_0,\id_1) \in \Sigma_1.
\end{split}\]
\item The above structure already determines the entire $\As$-algebra structure on $M$ because, by \eqref{multifunctor-equivariance}, the action by $\As_n$ is determined by the action by the identity permutation $\id_n\in\Sigma_n$. Furthermore, \eqref{multifunctor-unit} and \eqref{multifunctor-composition} imply that the $\id_n$-action is determined by the $\id_2$-action, which gives $\mu$.
\end{itemize}
\item Similarly, for the commutative operad $\Com$ in \Cref{ex:com}, a $\Com$-algebra is precisely a commutative monoid\index{commutative monoid!as operad algebra} in the usual sense.\dqed
\end{enumerate}
\end{example}
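Item (1) above can be made concrete with a small Python sketch of a left monoid action, checking the associativity and unitality axioms on sample elements. The monoid (strings under concatenation) and the action (prefixing) are illustrative choices, not taken from the text.

```python
# A left M-action on a set X is a map act : M x X -> X satisfying
#   act(mu(m1, m2), x) == act(m1, act(m2, x))   and   act(unit, x) == x.

# Illustrative monoid: strings under concatenation, with unit "".
def mu(m1, m2):
    return m1 + m2

unit = ""

# Illustrative action: prefixing, with X also taken to be strings.
def act(m, x):
    return m + x

# Check the two action axioms on sample elements.
for m1 in ["a", "bc"]:
    for m2 in ["", "d"]:
        for x in ["xyz"]:
            assert act(mu(m1, m2), x) == act(m1, act(m2, x))
            assert act(unit, x) == x
```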
Natural transformations and their horizontal and vertical compositions also have direct generalizations to multicategories.
\begin{definition}\label{def:multicat-natural-transformation}
Suppose $F,G : \C\to\D$ are multifunctors as in \Cref{def:multicategory-functor}. A \emph{multinatural transformation}\index{multinatural transformation}\index{natural transformation!multi-} $\theta : F\to G$ consists of unary operations
\[\theta_c \in \D\sbinom{Gc}{Fc} \forspace c\in\colorc\]
such that, for each $n$-ary operation $p \in \C\czerouc$ with $\uc=(c_1,\ldots,c_n)$, the \emph{naturality condition}
\[(Gp)\bigl(\theta_{c_1},\ldots,\theta_{c_n}\bigr) = \theta_{c_0}(Fp) \in \D\sbinom{Gc_0}{F\uc}\]
holds, in which the compositions on both sides are taken in $\D$.
\begin{itemize}
\item Each $\theta_c$ is called a \emph{component} of $\theta$.
\item The \emph{identity multinatural transformation}\index{multinatural transformation!identity} $1_F : F\to F$ has components \[(1_F)_c = \operadunit_{Fc}\in \D\Fcc \forspace c\in\colorc.\defmark\]
\end{itemize}
\end{definition}
\begin{definition}\label{def:multinatural-composition}
Suppose $\theta : F \to G$ is a multinatural transformation between multifunctors as in \Cref{def:multicat-natural-transformation}.
\begin{enumerate}
\item Suppose $\phi : G \to H$ is a multinatural transformation for a multifunctor $H : \C \to \D$. The \emph{vertical composition}\index{vertical composition!multinatural transformation}\label{notation:operad-vcomp} \[\phi\theta : F \to H\] is the multinatural transformation with components \[(\phi\theta)_c = \phi_c(\theta_c) \in \D\sbinom{Hc}{Fc} \forspace c\in\colorc.\]
\item Suppose $\theta' : F' \to G'$ is a multinatural transformation for multifunctors $F', G' : \D \to \E$. The \emph{horizontal composition}\index{horizontal composition!multinatural transformation}\label{notation:operad-hcomp}
\[\theta' \ast \theta : F'F \to G'G\] is the multinatural transformation with components
\[(\theta' \ast \theta)_c = \theta'_{Gc}(F'\theta_c) = (G'\theta_c)(\theta'_{Fc}) \in \E\sbinom{G'Gc}{F'Fc}\]
for each object $c\in\colorc$, in which the second equality follows from the naturality of $\theta'$.\defmark
\end{enumerate}
\end{definition}
\begin{example}\label{ex:nt-multi-nt}
Each natural transformation $\theta : F \to G$ between functors $F,G : \C\to\D$ is a multinatural transformation when $F$ and $G$ are regarded as multifunctors between multicategories as in \Cref{ex:functor-multifunctor}. Moreover, horizontal and vertical compositions of natural transformations become those of multinatural transformations.\dqed
\end{example}
\begin{theorem}\label{multicat-2cat}
There is a $2$-category\index{2-category!of multicategories}\index{multicategory!2-category} $\Multicat$ consisting of the following data.
\begin{itemize}
\item Its objects are small multicategories.
\item For small multicategories $\C$ and $\D$, the hom category $\Multicat(\C,\D)$ has:
\begin{itemize}
\item multifunctors $\C\to\D$ as $1$-cells;
\item multinatural transformations between such multifunctors as $2$-cells;
\item vertical composition as composition;
\item identity multinatural transformations as identity $2$-cells.
\end{itemize}
\item The identity $1$-cell $1_{\C}$ is the identity multifunctor $1_{\C}$.
\item Horizontal composition of $1$-cells is the composition of multifunctors.
\item Horizontal composition of $2$-cells is that of multinatural transformations.
\end{itemize}
\end{theorem}
\begin{proof}
One first needs to check that the horizontal and vertical compositions of multinatural transformations are well-defined. To check that the vertical composition $\phi\theta$ in \Cref{def:multinatural-composition} is a multinatural transformation $F\to H$, suppose $p\in\C\czerouc$ is an $n$-ary operation. Using (i) the naturality of $\theta$ and $\phi$ and (ii) the associativity in $\D$ three times, we compute as follows:
\[\begin{split}
(Hp)\bigl((\phi\theta)_{c_1},\ldots,(\phi\theta)_{c_n}\bigr)
&= (Hp)\bigl(\phi_{c_1}(\theta_{c_1}),\ldots,\phi_{c_n}(\theta_{c_n})\bigr)\\
&= \bigl[(Hp)(\phi_{c_1},\ldots,\phi_{c_n})\bigr](\theta_{c_1},\ldots,\theta_{c_n})\\
&= \bigl[\phi_{c_0}(Gp)\bigr](\theta_{c_1},\ldots,\theta_{c_n})\\
&= \phi_{c_0}\bigl[(Gp)(\theta_{c_1},\ldots,\theta_{c_n})\bigr]\\
&= \phi_{c_0}\bigl(\theta_{c_0}(Fp)\bigr)\\
&= \bigl(\phi_{c_0}\theta_{c_0}\bigr)(Fp)\\
&= (\phi\theta)_{c_0}(Fp).
\end{split}\]
This shows that $\phi\theta : F\to H$ is a well-defined multinatural transformation.
To check that the horizontal composition $\theta'*\theta$ in \Cref{def:multinatural-composition} is a multinatural transformation, we use (i) \eqref{multifunctor-composition} for $G'$, (ii) the naturality of $\theta$ and $\theta'$, and (iii) the associativity in $\E$ to compute as follows:
\[\begin{split}
(G'Gp)\bigl((\theta'*\theta)_{c_1},\ldots,(\theta'*\theta)_{c_n}\bigr)
&= (G'Gp)\bigl((G'\theta_{c_1})(\theta'_{Fc_1}),\ldots,(G'\theta_{c_n})(\theta'_{Fc_n})\bigr)\\
&= \bigl[(G'Gp)(G'\theta_{c_1},\ldots,G'\theta_{c_n})\bigr](\theta'_{Fc_1},\ldots,\theta'_{Fc_n})\\
&= \bigl[G'\bigl((Gp)(\theta_{c_1},\ldots,\theta_{c_n})\bigr)\bigr](\theta'_{Fc_1},\ldots,\theta'_{Fc_n})\\
&= \bigl[G'\bigl(\theta_{c_0}(Fp)\bigr)\bigr](\theta'_{Fc_1},\ldots,\theta'_{Fc_n})\\
&= \bigl[(G'\theta_{c_0})(G'Fp)\bigr](\theta'_{Fc_1},\ldots,\theta'_{Fc_n})\\
&= (G'\theta_{c_0})\bigl[(G'Fp)(\theta'_{Fc_1},\ldots,\theta'_{Fc_n})\bigr]\\
&= (G'\theta_{c_0})\bigl(\theta'_{Fc_0}(F'Fp)\bigr)\\
&= \bigl((G'\theta_{c_0})(\theta'_{Fc_0})\bigr)(F'Fp)\\
&= (\theta'*\theta)_{c_0}(F'Fp).
\end{split}\]
This shows that $\theta'*\theta : F'F\to G'G$ is a well-defined multinatural transformation.
Next one needs to check the $2$-category axioms in \Cref{2category-explicit}. The axioms involving $2$-cells are precisely the same as in $\Cat$ in \Cref{ex:2cat-of-cat} because the vertical and horizontal compositions of multinatural transformations involve only unary operations. Finally, the horizontal composition of $1$-cells (i.e., multifunctors) is associative and unital because composition of functions is associative and unital.
\end{proof}
\section{\texorpdfstring{$2$}{2}-Category of Polycategories}\label{sec:polycat-2cat}
In this section we present an extension of the $2$-categories $\Cat$ in \Cref{ex:2cat-of-cat} and $\Multicat$ in \Cref{multicat-2cat} to polycategories.
\begin{motivation}
A category consists of morphisms $f : X \to Y$ with one input and one output, together with their composition and associativity and unity axioms. Multicategories in \Cref{def:multicategory} extend categories by allowing the domain of each morphism
\[\begin{tikzcd} (X_1,\ldots,X_m) \ar{r}{f} & Y\end{tikzcd}\]
to be a finite sequence of objects, together with a suitable composition law and associativity and unity axioms. With permutations of the domain objects, there is also a $\Sigma_m$-equivariant structure for $m\geq 0$, along with equivariance axioms. Taking this process one step further, we may also allow the codomain of each morphism to be a finite sequence of objects. So now a morphism has the form
\[\begin{tikzcd} (X_1,\ldots,X_m) \ar{r}{f} & (Y_1,\ldots,Y_n)\end{tikzcd}\]
in which $m$ and $n$ need not be equal. A polycategory is such a categorical structure together with a suitable composition law, equivariant structure, and axioms.\dqed
\end{motivation}
To define polycategories, we extend the notations in \Cref{def:profile}.
\begin{definition}\label{def:biprofiles}
Suppose $\colorc$ is a class.
\begin{itemize}
\item An element in $\Profcsq$ is written vertically as $\uduc$ with $\uc$ in the first $\Profc$ factor.
\item For $\uc,\ua\in\Profc$ with $\uc=(c_1,\ldots,c_m)$ and $1\leq i \leq m$, we define
\[\uc \compi \ua = \bigl(\underbracket[0.5pt]{c_1,\ldots,c_{i-1}}_{\text{$\varnothing$ if $i=1$}},\ua, \underbracket[0.5pt]{c_{i+1},\ldots,c_m}_{\text{$\varnothing$ if $i=m$}}\bigr) \in \Profc.\defmark\]
\end{itemize}
\end{definition}
\begin{example}
If $\uc = (c_1,c_2,c_3)$ and $\ua = (a_1,a_2)$, then
\[\uc\comp_1\ua = (a_1,a_2,c_2,c_3),\qquad \uc \comp_2 \ua = (c_1,a_1,a_2,c_3), \andspace \uc \comp_3 \varnothing = (c_1,c_2).\]
In general, $\uc \compi \ua$ is obtained from $\uc$ by replacing its $i$th entry with the profile $\ua$.\dqed
\end{example}
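Since $\uc \compi \ua$ is just a splice of tuples, it is simple to implement; the following Python sketch (1-indexed, matching the text) reproduces the worked example above.

```python
def subst(c, i, a):
    """Replace the i-th entry of the profile c (1-indexed) by the profile a."""
    return c[:i - 1] + a + c[i:]

c = ("c1", "c2", "c3")
a = ("a1", "a2")

assert subst(c, 1, a) == ("a1", "a2", "c2", "c3")
assert subst(c, 2, a) == ("c1", "a1", "a2", "c3")
assert subst(c, 3, ()) == ("c1", "c2")   # substituting the empty profile deletes an entry
```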
\begin{definition}\label{def:polycategory}
A \emph{polycategory}\index{polycategory}\index{category!poly-} $(\C, \comp, \operadunit)$\label{notation:polycategory} consists of the following data.
\begin{itemize}
\item $\C$ is equipped with a class $\colorc$ of \index{object!polycategory}\emph{objects}.
\item For each $\uduc = \sbinom{d_1,\ldots,d_n}{c_1,\ldots,c_m} \in \Profcsq$, $\C$ is equipped with a set\label{notation:cuduc}
\[\C\uduc = \C\sbinom{d_1,\ldots,d_n}{c_1,\ldots,c_m}\]
of \emph{$(m,n)$-ary operations}\index{m0@$(m,n)$-ary operation}
with \emph{input profile}\index{input profile} $\uc$ and \emph{output profile}\index{output!profile} $\ud$.
\item
For $\uduc \in \Profcsq$ as above and permutations $(\tau,\sigma) \in \Sigma_m\times\Sigma_n$, $\C$ is equipped with a \index{symmetric group!action}\emph{symmetric group action}
\[\begin{tikzcd}\C\uduc \rar{(\tau,\sigma)}[swap]{\cong} & \C\sbinom{\sigma\ud}{\uc\tau},\end{tikzcd}\]
in which\label{notation:tau-d}
\[\sigma\ud= (d_{\sigma^{-1}(1)}, \ldots, d_{\sigma^{-1}(n)})\]
is the left permutation\index{left permutation} of $\ud$ by $\sigma$.
\item For each $c \in \colorc$, $\C$ is equipped with a \index{colored unit}\emph{$c$-colored unit}
\[\operadunit_c \in \C\cc.\]
\item For $\uduc, \ubua \in \Profcsq$ with $\ub=(b_1,\ldots,b_l)$ and \[b_j=c_i\] for some $1\leq i \leq m$ and $1\leq j \leq l$, $\C$ is equipped with a \index{composition!polycategory}\emph{composition}\label{notation:polycategory-composition}
\begin{equation}\label{polycat-compij}
\begin{tikzcd}
\C\uduc \times \C\ubua \rar{\compij} & \C\sbinom{\ub \compj \ud}{\uc\compi\ua}.\end{tikzcd}
\end{equation}
\end{itemize}
These data are required to satisfy the following axioms, in which $|\ua|=k$, $|\ub|=l$, $|\uc|=m$, $|\ud|=n$, $|\ue|=p$, and $|\uf|=q$.
\begin{description}
\item[Symmetric Group Action]
For $\tau,\tau'\in \Sigma_m$ and $\sigma,\sigma'\in\Sigma_n$, the diagram
\[\begin{tikzcd}
\C\uduc \arrow{rd}[swap]{(\tau\tau',\sigma'\sigma)} \rar{(\tau,\sigma)} & \C\sbinom{\sigma\ud}{\uc\tau} \dar{(\tau',\sigma')}\\
& \C\sbinom{\sigma'\sigma\ud}{\uc\tau\tau'}\end{tikzcd}\]
is commutative. Moreover, the identity $(\id_m,\id_n)\in\Sigma_m\times\Sigma_n$ acts as the identity map on $\C\uduc$.
\item[Unity]The diagram\index{unity!polycategory}
\[\begin{tikzcd}[column sep=large]
\C\uduc \times \{*\} \ar{r}{\cong} \ar{d}[swap]{(1,\operadunit_{c_i})} & \C\uduc \ar[equal]{d} & \{*\} \times \C\uduc \ar{d}{(\operadunit_{d_j}, 1)} \ar{l}[swap]{\cong}\\
\C\uduc \times \C\sbinom{c_i}{c_i} \ar{r}{\comp_{i,1}} & \C\uduc & \C\sbinom{d_j}{d_j}\times \C\uduc \ar{l}[swap]{\comp_{1,j}}
\end{tikzcd}\]
is commutative for each $1\leq i \leq |\uc|$ and $1\leq j \leq |\ud|$.
\item[Equivariance]
For $(\tau',\sigma',\tau,\sigma)\in \Sigma_k\times \Sigma_l\times \Sigma_m\times \Sigma_n$ and with $b_j=c_i$, the diagram\index{equivariance!polycategory}
\[\begin{tikzcd}[column sep=huge, row sep=large]
\C\uduc \times \C\ubua \ar{d}[swap]{(\tau,\sigma)\times(\tau',\sigma')} \ar{r}{\compij} & \C\sbinom{\ub\compj\ud}{\uc\compi\ua} \ar{d}{(\tau\comp_{\tau^{-1}(i)}\tau', \sigma'\compj\sigma)}\\
\C\sbinom{\sigma\ud}{\uc\tau} \times \C\sbinom{\sigma'\ub}{\ua\tau'} \ar{r}{\comp_{\tau^{-1}(i),\sigma'(j)}} & \C\sbinom{\sigma'\ub \comp_{\sigma'(j)} \sigma\ud}{\uc\tau \comp_{\tau^{-1}(i)} \ua\tau'}
\end{tikzcd}\]
is commutative, in which
\[\sigma' \compj \sigma = \underbrace{\sigma'\langle \overbracket[0.5pt]{1,\ldots,1}^{j-1},n,\overbracket[0.5pt]{1,\ldots,1}^{l-j}\rangle}_{\text{block permutation}} \cdot\, \bigl(\underbrace{\id_{j-1} \times \sigma \times \id_{l-j}}_{\text{block sum}}\bigr) \in \Sigma_{l+n-1}\]
with the notations in \eqref{operadic-eq-1} and \eqref{operadic-eq-2}, and similarly for $\tau \comp_{\tau^{-1}(i)} \tau'$.
\item[Associativity]
There are three associativity axioms.\index{associativity!polycategory}
\begin{enumerate}
\item With $b_j=c_i$ and $d_s=e_r$, the following diagram is commutative.
\begin{equation}\label{polycat-associativity-1}
\begin{tikzcd}[column sep=huge,row sep=large]
\C\ufue \times \C\uduc \times \C\ubua \ar{d}{1\times \compij} \ar{r}{\comprs\times 1} & \C\sbinom{\ud\comps \uf}{\ue\compr\uc} \times \C\ubua \ar{d}{\comp_{i+r-1,j}}\\
\C\ufue \times \C\sbinom{\ub\compj\ud}{\uc\compi\ua} \ar{r}{\comp_{r,s+j-1}} & \C\sbinom{\ub\compj(\ud\comps\uf)}{\ue\compr(\uc\compi\ua)}\end{tikzcd}
\end{equation}
\item With $(e_r,e_i) = (d_s,b_j)$ and $r<i$, the diagram
\begin{equation}\label{polycat-associativity-2}
\begin{tikzcd}[column sep=huge, row sep=large]
\C\ufue \times \C\uduc \times \C\ubua \ar{d}[swap]{\comprs \times 1} \ar{r}{\text{permute}} & \C\ufue \times \C\ubua \times \C\uduc \ar{d}{\compij \times 1}\\
\C\sbinom{\ud\comps\uf}{\ue\compr\uc} \times \C\ubua \ar{d}[swap]{\comp_{i-1+m,j}} & \C\sbinom{\ub\compj\uf}{\ue\compi\ua} \times \C\uduc \ar{d}{\comprs}\\
\C\sbinom{\ub\compj(\ud\comps\uf)}{(\ue\compr\uc)\comp_{i-1+m}\ua} \ar{r}{(\id_{p+m+k-2},\sigma)}
& \C\sbinom{\ud\comps(\ub\compj\uf)}{(\ue\compi\ua)\compr\uc}
\end{tikzcd}
\end{equation}
is commutative, where
\[\sigma = \bigl((1,2)(4,5)\bigr)\langle j-1,s-1,q,n-s,l-j \rangle \in \Sigma_{l+n+q-2}\]
is the block permutation induced by $(1,2)(4,5)\in\Sigma_5$.
\item With $(b_j,b_s) = (c_i,e_r)$ and $j<s$, the diagram
\begin{equation}\label{polycat-associativity-3}
\begin{tikzcd}[column sep=huge, row sep=large]
\C\ufue \times \C\uduc \times \C\ubua \ar{d}[swap]{1 \times \compij} \ar{r}{\text{permute}} & \C\uduc\times \C\ufue \times \C\ubua \ar{d}{1\times\comprs}\\
\C\ufue \times \C\sbinom{\ub\compj\ud}{\uc\compi\ua} \ar{d}[swap]{\comp_{r,s-1+n}} & \C\uduc \times \C\sbinom{\ub\comps\uf}{\ue\compr\ua} \ar{d}{\compij}\\
\C\sbinom{(\ub\compj\ud)\comp_{s-1+n}\uf}{\ue\compr(\uc\compi\ua)} \ar{r}{(\tau,\id_{l+n+q-2})} & \C\sbinom{(\ub\comps\uf)\compj\ud}{\uc\compi(\ue\compr\ua)}
\end{tikzcd}
\end{equation}
is commutative, where
\[\tau = \bigl((1,2)(4,5)\bigr)\langle i-1,r-1,k,p-r,m-i \rangle \in \Sigma_{p+m+k-2}\]
is the block permutation induced by $(1,2)(4,5)\in\Sigma_5$.
\end{enumerate}
\end{description}
This finishes the definition of a polycategory. A polycategory is \emph{small}\index{polycategory!small} if its class of objects is a set.
\end{definition}
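The left permutation $\sigma\ud$ used in the symmetric group action can also be sketched in Python. Here a permutation $\sigma\in\Sigma_n$ is encoded as the tuple of its values $(\sigma(1),\ldots,\sigma(n))$, an illustrative convention not fixed by the text.

```python
def left_permute(sigma, d):
    """Left permutation of the profile d by sigma, where sigma[i-1] = sigma(i).
    The k-th entry of the result is d_{sigma^{-1}(k)}."""
    n = len(d)
    # sigma.index(k) is the 0-based position i with sigma(i+1) = k, i.e. sigma^{-1}(k) - 1.
    return tuple(d[sigma.index(k + 1)] for k in range(n))

# sigma sends 1 -> 2, 2 -> 3, 3 -> 1, so sigma^{-1} sends 1 -> 3, 2 -> 1, 3 -> 2.
assert left_permute((2, 3, 1), ("d1", "d2", "d3")) == ("d3", "d1", "d2")
```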
\begin{example}[Multi-In Multi-Out Functions]\label{ex:endomorphism-polycat}
For a non-empty class $\colorc$, suppose $X = \{X_c\}_{c\in \colorc}$ is a $\colorc$-indexed class of sets. There is a \index{endomorphism!polycategory}polycategory $\PEnd(X)$ with
\[\PEnd(X)\uduc = \Map\bigl(X_{\uc},X_{\ud}\bigr) \forspace \uduc\in\Profcsq,\] where $X_{\uc} = X_{c_1}\times\cdots\times X_{c_m}$ if $\uc=(c_1,\ldots,c_m)$, and similarly for $X_{\ud}$. Here $\Map(A,B)$ is the set of functions from $A$ to $B$.
\begin{itemize}
\item The left-$\Sigma_n$, right-$\Sigma_m$ action is induced by permutations of the codomain factors and of the domain factors.
\item The $c$-colored unit is the identity function of $X_c$.
\item With $b_j=c_i$ in $\colorc$, the composition
\[\begin{tikzcd}
\PEnd(X)\uduc \times \PEnd(X)\ubua \rar{\compij} & \PEnd(X)\sbinom{\ub \compj \ud}{\uc\compi\ua}\end{tikzcd}\]
is defined as
\[\begin{split}
&(g \compij f)\bigl(\overbracket[0.5pt]{x_1,\ldots,x_{i-1},\uy,x_{i+1},\ldots,x_m}^{\text{in $X_{\uc\compi\ua}$}}\bigr)\\
&= \Bigl((f\uy)_1,\ldots,(f\uy)_{j-1}, g\bigl(\underbracket[0.5pt]{x_1,\ldots,x_{i-1}, (f\uy)_j, x_{i+1},\ldots,x_m}_{\text{in $X_{\uc}$}}\bigr), (f\uy)_{j+1},\ldots,(f\uy)_{l}\Bigr)\end{split}\]
for
\begin{itemize}
\item $(g,f) \in \PEnd(X)\uduc \times \PEnd(X)\ubua$ with $\compij(g,f) = g\compij f$,
\item $(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_m) \in X_{c_1}\times\cdots\times X_{c_{i-1}} \times X_{c_{i+1}} \times \cdots\times X_{c_m}$,
\item $\uy \in X_{\ua}$.
\end{itemize}
Since \[f\uy = \bigl((f\uy)_1,\ldots,(f\uy)_l\bigr) \in X_{\ub},\] its $j$th entry $(f\uy)_j$ is in $X_{b_j} = X_{c_i}$.
\end{itemize}
The polycategory axioms can actually be read off from this example. Also notice that this is an extension of the endomorphism operad in \Cref{ex:endomorphism}.\dqed
\end{example}
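The composition $\compij$ in $\PEnd(X)$ translates directly into Python code operating on functions that take several arguments and return tuples. Passing the arities $m$ and $k$ explicitly is an illustrative convention; the concrete functions below are not from the text.

```python
def poly_comp(g, m, f, k, i, j):
    """Form g o_{i,j} f: feed the j-th output of f into the i-th input of g
    (both 1-indexed).  g takes m inputs and returns a tuple of outputs;
    f takes k inputs and returns a tuple of outputs."""
    def h(*args):
        xs_before = args[:i - 1]          # x_1, ..., x_{i-1}
        ys = args[i - 1:i - 1 + k]        # the inputs of f
        xs_after = args[i - 1 + k:]       # x_{i+1}, ..., x_m
        fy = f(*ys)                       # tuple of length l
        gx = g(*(xs_before + (fy[j - 1],) + xs_after))
        # Splice the outputs of g into the j-th slot of f's outputs.
        return fy[:j - 1] + gx + fy[j:]
    return h

# Illustrative functions: f has 2 inputs and 2 outputs, g has 2 inputs and 1 output.
f = lambda y1, y2: (y1 + y2, y1 * y2)
g = lambda x1, x2: (x1 - x2,)

h = poly_comp(g, 2, f, 2, i=2, j=1)   # f's 1st output feeds g's 2nd input
assert h(10, 2, 3) == (10 - (2 + 3), 2 * 3)
```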
\begin{example}[Multicategories]\label{ex:multicat-as-polycat}
Each multicategory $(\C,\gamma,\operadunit)$ yields a polycategory\index{multicategory!as a polycategory} with the same class $\colorc$ of objects, the same colored units, and constituent sets
\[\C\uduc = \begin{cases} \C\duc & \text{if $\ud=d\in\colorc$},\\
\varnothing & \text{otherwise}.\end{cases}\] The left-$\Sigma_1$, right-$\Sigma_m$ action is given by the symmetric group action on $\C\duc$. With $\uc=(c_1,\ldots,c_m)$ and $1\leq i \leq m$, the polycategory composition is the composite below.
\[\begin{tikzpicture}[commutative diagrams/every diagram, yscale=2]
\node (A) at (-2,1) {$\C\duc \times \C\ciub$};
\node (B) at (2,1) {$\C\sbinom{d}{\uc\compi\ub}$};
\node (C) at (0,0) {$\C\duc \times \C\sbinom{c_1}{c_1} \times\cdots\times \C\sbinom{c_{i-1}}{c_{i-1}} \times \C\ciub \times \C\sbinom{c_{i+1}}{c_{i+1}} \times\cdots\times \C\sbinom{c_m}{c_m}$};
\path[commutative diagrams/.cd, every arrow, every label]
(A) edge node {$\comp_{i,1}$} (B)
(A) edge node[swap, near start] {$1\times (1_{c_1},\ldots,1_{c_{i-1}},1,1_{c_{i+1}},\ldots,1_{c_m})$} (C)
(C) edge node[swap, near end] {$\gamma$} (B);
\end{tikzpicture}\]
All other polycategory compositions are trivial.
\begin{itemize}
\item The polycategory unity and equivariance axioms follow from those in the multicategory $\C$.
\item The polycategory associativity axioms \eqref{polycat-associativity-1} and \eqref{polycat-associativity-2} follow from the associativity axiom and the unity axioms in the multicategory $\C$ and an induction.
\item The polycategory associativity axiom \eqref{polycat-associativity-3} is trivial, since it cannot happen for a multicategory.\dqed
\end{itemize}
\end{example}
Next we define functors and natural transformations for polycategories.
\begin{definition}\label{def:polyfunctor}
A \emph{polyfunctor}\index{polyfunctor}\index{functor!poly-} $F : \C \to \D$ between polycategories $\C$ and $\D$ consists of the following data:
\begin{itemize}
\item an assignment \[F : \colorc \to \colord,\] where $\colorc$ and $\colord$ are the classes of objects of $\C$ and $1.3$, respectively;
\item for each $\uduc \in \Profcsq$, a function
\[F : \C\uduc \to \D\Fuduc.\]
\end{itemize}
These data are required to preserve the symmetric group action, the colored units, and the composition in the following sense.
\begin{description}
\item[Symmetric Group Action] For $(\tau,\sigma) \in \Sigma_{|\uc|}\times\Sigma_{|\ud|}$, the diagram\index{equivariance!polyfunctor}
\begin{equation}\label{polyfunctor-equivariance}
\begin{tikzcd}
\C\uduc \ar{d}[swap]{(\tau,\sigma)} \ar{r}{F} & \D\Fuduc \ar{d}{(\tau,\sigma)}\\
\C\sbinom{\sigma\ud}{\uc\tau} \ar{r}{F} & \D\sbinom{F\sigma\ud}{F\uc\tau}\end{tikzcd}
\end{equation}
is commutative.
\item[Units] For each $c\in\colorc$, the equality\index{unity!polyfunctor}
\begin{equation}\label{polyfunctor-unit}
F\operadunit_c = \operadunit_{Fc} \in \D\Fcc
\end{equation}
holds.
\item[Composition] With $b_j=c_i$, the diagram
\begin{equation}\label{polyfunctor-composition}
\begin{tikzcd}
\C\uduc \times \C\ubua \ar{d}[swap]{\compij} \ar{r}{F\times F} & \D\Fuduc\times \D\sbinom{F\ub}{F\ua} \ar{d}{\compij}\\
\C\sbinom{\ub\compj\ud}{\uc\compi\ua} \ar{r}{F} & \D\sbinom{F\ub\compj F\ud}{F\uc\compi F\ua}
\end{tikzcd}
\end{equation}
is commutative.
\end{description}
This finishes the definition of a polyfunctor.
Moreover:
\begin{enumerate}
\item For another polyfunctor $G : \D\to\E$ between polycategories, where $\E$ has object class $\colore$, the \emph{composition}\index{composition!polyfunctor} $GF : \C\to\E$ is the polyfunctor defined by composing the assignments on objects
\[\begin{tikzcd} \colorc \ar{r}{F} & \colord \ar{r}{G} & \colore
\end{tikzcd}\]
and the functions
\[\begin{tikzcd}
\C\uduc \ar{r}{F} & \D\Fuduc \ar{r}{G} & \E\sbinom{GF\ud}{GF\uc}.
\end{tikzcd}\]
\item The \emph{identity polyfunctor}\index{identity!polyfunctor} $1_{\C} : \C\to\C$ is defined by the identity assignment on objects and the identity function on each entry of $\C$.\defmark
\end{enumerate}
\end{definition}
\begin{example}[Multifunctors]\label{ex:multifunctor-polyfunctor}
A multifunctor $F : \C\to\D$ between multicategories is also a polyfunctor\index{multifunctor!as a polyfunctor} when $\C$ and $\D$ are regarded as polycategories as in \Cref{ex:multicat-as-polycat}.\dqed
\end{example}
\begin{definition}\label{def:polynatural-transformation}
Suppose $F,G : \C\to\D$ are polyfunctors as in \Cref{def:polyfunctor}. A \index{polynatural transformation}\index{natural transformation!poly-}\emph{polynatural transformation} $\theta : F\to G$ consists of unary operations
\[\theta_c \in \D\sbinom{Gc}{Fc} \forspace c\in\colorc\]
such that, for each $p \in \C\uduc = \C\sbinom{d_1,\ldots,d_n}{c_1,\ldots,c_m}$, the \emph{naturality condition}
\[(Gp)\bigl(\theta_{c_1},\ldots,\theta_{c_m}\bigr)
= \bigl(\theta_{d_1},\ldots,\theta_{d_n}\bigr)(Fp) \in \D\sbinom{G\ud}{F\uc}\]
holds, in which
\[\begin{split}
(Gp)\bigl(\theta_{c_1},\ldots,\theta_{c_m}\bigr)
&= \bigl((Gp \comp_{1,1} \theta_{c_1}) \cdots\bigr) \comp_{m,1}\theta_{c_m},\\
\bigl(\theta_{d_1},\ldots,\theta_{d_n}\bigr)(Fp)
&= \theta_{d_1} \comp_{1,1} \bigl(\cdots (\theta_{d_n} \comp_{1,n} Fp)\bigr).
\end{split}\]
\begin{itemize}
\item Each $\theta_c$ is called a \emph{component} of $\theta$.
\item The \emph{identity polynatural transformation} $1_F : F\to F$ has components \[(1_F)_c = \operadunit_{Fc}\in \D\Fcc \forspace c\in\colorc.\defmark\]
\end{itemize}
\end{definition}
\begin{definition}\label{def:polynatural-composition}
Suppose $\theta : F \to G$ is a polynatural transformation between polyfunctors as in \Cref{def:polynatural-transformation}.
\begin{enumerate}
\item Suppose $\phi : G \to H$ is a polynatural transformation for a polyfunctor $H : \C \to 1.3$. The \emph{vertical composition}\index{vertical composition!polynatural transformation}\label{notation:poly-vcomp}
\[\phi\theta : F \to H\] is the polynatural transformation with components \[(\phi\theta)_c = \phi_c \comp_{1,1} \theta_c \in \D\sbinom{Hc}{Fc} \forspace c\in\colorc.\]
\item Suppose $\theta' : F' \to G'$ is a polynatural transformation for polyfunctors $F', G' : \D \to \E$. The \emph{horizontal composition}\index{horizontal composition!polynatural transformation}\label{notation:poly-hcomp}
\[\theta' \ast \theta : F'F \to G'G\] is the polynatural transformation with components
\[(\theta' \ast \theta)_c = \theta'_{Gc} \comp_{1,1} F'\theta_c = G'\theta_c \comp_{1,1} \theta'_{Fc} \in \E\sbinom{G'Gc}{F'Fc}\]
for each object $c\in\colorc$, in which the second equality follows from the naturality of $\theta'$.\defmark
\end{enumerate}
\end{definition}
\begin{example}\label{ex:multint-polynt}
Each multinatural transformation\index{multinatural transformation!as a polynatural transformation} $\theta : F \to G$ between multifunctors $F,G : \C\to\D$ is a polynatural transformation when $F$ and $G$ are regarded as polyfunctors between polycategories as in \Cref{ex:multifunctor-polyfunctor}. Moreover, horizontal and vertical compositions of multinatural transformations become those of polynatural transformations.\dqed
\end{example}
An adaptation of the proof of \Cref{multicat-2cat} yields the following result.
\begin{theorem}\label{polycat-2cat}
There is a $2$-category\index{2-category!of polycategories}\index{polycategory!2-category} $\Polycat$ consisting of the following data.
\begin{itemize}
\item Its objects are small polycategories.
\item For small polycategories $\C$ and $\D$, the hom category $\Polycat(\C,\D)$ has:
\begin{itemize}
\item polyfunctors $\C\to\D$ as $1$-cells;
\item polynatural transformations between such polyfunctors as $2$-cells;
\item vertical composition as composition;
\item identity polynatural transformations as identity $2$-cells.
\end{itemize}
\item The identity $1$-cell $1_{\C}$ is the identity polyfunctor $1_{\C}$.
\item Horizontal composition of $1$-cells is the composition of polyfunctors.
\item Horizontal composition of $2$-cells is that of polynatural transformations.
\end{itemize}
\end{theorem}
\section{Dualities}\label{sec:dualities}
In this section we discuss duality constructions in bicategories.
\begin{motivation}
Each category $\C$ has an opposite category $\Cop$, which has the same objects as $\C$ and a morphism $f^{\op} : Y \to X$ whenever $f : X \to Y$ is a morphism in $\C$. In a bicategory, there are two kinds of arrows, namely $1$-cells, which go between objects, and $2$-cells, which go between $1$-cells. Therefore, there are three opposites of a bicategory obtained by reversing the directions of
\begin{enumerate}[label=(\roman*)]
\item only the $1$-cells,
\item only the $2$-cells, or
\item both the $1$-cells and the $2$-cells.
\end{enumerate}
These opposites will be useful in \Cref{ch:functors} when we discuss oplax versions of lax functors and lax natural transformations between bicategories.\dqed
\end{motivation}
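For ordinary categories, reversing the $1$-cells amounts to swapping the order of composition, which can be sketched in a few lines of Python (morphisms modeled as plain functions, an illustrative choice).

```python
# In C, composition is the usual g-after-f.
def compose(g, f):
    return lambda x: g(f(x))

# In C^op, a morphism X -> Y is a function Y -> X of C, and composition
# swaps the arguments before composing in C.
def compose_op(g, f):
    return compose(f, g)

# Illustrative morphisms in C^op: f : X -> Y is the C-function y -> y + 1,
# and g : Y -> Z is the C-function z -> 2 * z.
f = lambda y: y + 1
g = lambda z: 2 * z

h = compose_op(g, f)      # the C^op-composite X -> Z, i.e. a C-function Z -> X
assert h(5) == f(g(5))    # underlying composite is f-after-g
```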
Suppose $(\B,1,c,a,\ell,r)$ is a bicategory as in \Cref{def:bicategory} with object class $B_0$. First we define the bicategory in which the $1$-cells in $\B$ are reversed.
\begin{definition}\label{def:bicategory-opposite}
Define the \emph{opposite bicategory}\index{opposite!bicategory}\index{bicategory!opposite}
\[\bigl(\Bop,1^{\op},c^{\op},a^{\op},\ell^{\op},r^{\op}\bigr)\]
as follows.
\begin{itemize}
\item $\Bop_0 = \B_0$; i.e., it has the same objects as $\B$.
\item For objects $X,Y$ in $\Bop$, its hom category is the hom category in $\B$, \[\Bop(X,Y) = \B(Y,X),\] with identity $1$-cell \[1^{\op}_X = 1_X \in \B(X,X)=\Bop(X,X).\]
\item Its horizontal composition is the following composite functor.
\[\begin{tikzcd}
\Bop(Y,Z) \times \Bop(X,Y) \ar[equal]{d} \ar{r}{c^{\op}_{XYZ}} & \Bop(X,Z) = \B(Z,X)\\
\B(Z,Y) \times \B(Y,X) \ar{r}{\text{permute}}[swap]{\cong} & \B(Y,X)\times \B(Z,Y) \ar{u}{c_{ZYX}}
\end{tikzcd}\]
\item For $1$-cells $f \in \Bop(W,X)$, $g\in \Bop(X,Y)$, and $h\in \Bop(Y,Z)$, the component of the associator $a^{\op}_{h,g,f}$ is the invertible $2$-cell
\[\begin{tikzcd}[column sep=large]
c^{\op}\bigl(c^{\op}(h,g),f\bigr) \ar[equal]{d} \ar{r}{a^{\op}_{h,g,f}} & c^{\op}\bigl(h,c^{\op}(g,f)\bigr) \ar[equal]{d}\\
f(gh) \ar{r}{a^{-1}_{f,g,h}} & (fg)h\end{tikzcd}\]
in $\B(Z,W) = \Bop(W,Z)$.
\item For each $1$-cell $f\in\Bop(X,Y)=\B(Y,X)$, the components of the left unitor $\ell^{\op}_{f}$ and of the right unitor $r^{\op}_f$ are the invertible $2$-cells
\[\begin{tikzcd}[column sep=large]
c^{\op}(1^{\op}_Y,f) = f1_Y \ar{r}{\ell^{\op}_f\,=\,r_f} & f & 1_Xf = c^{\op}(f,1^{\op}_X) \ar{l}[swap]{r^{\op}_f\,=\,\ell_f}\end{tikzcd}\]
in $\B(Y,X)$.
\end{itemize}
This finishes the definition of $\Bop$.
\end{definition}
Next we define the bicategory in which the $2$-cells in $\B$ are reversed.
\begin{definition}\label{def:bicategory-co}
Define the \emph{co-bicategory}\index{co-bicategory}\index{bicategory!co-} \[\bigl(\Bco,1^{\co},c^{\co},a^{\co},\ell^{\co},r^{\co}\bigr)\]
as follows.
\begin{itemize}
\item $\Bco_0 = \B_0$.
\item For objects $X,Y$ in $\Bco$, its hom category is the opposite category \[\Bco(X,Y) = \B(X,Y)^{\op}\] of the hom category $\B(X,Y)$, with identity $1$-cell \[1^{\co}_X = (1_X)^{\op} \in \B(X,X)^{\op}=\Bco(X,X).\]
\item Its horizontal composition is the composite
\[\begin{tikzcd}
\Bco(Y,Z) \times \Bco(X,Y) \ar[equal]{d} \ar{r}{c^{\co}_{XYZ}} & \Bco(X,Z) = \B(X,Z)^{\op}\\
\B(Y,Z)^{\op} \times \B(X,Y)^{\op} \ar{r}{\cong} & \big[\B(Y,Z)\times \B(X,Y)\bigr]^{\op} \ar{u}{c^{\op}_{XYZ}}
\end{tikzcd}\]
in which $c^{\op}_{XYZ}$ is the opposite functor of the horizontal composition $c_{XYZ}$ in $\B$ in the sense of \eqref{opposite-functor}.
\item For $1$-cells $f \in \Bco(W,X)$, $g\in \Bco(X,Y)$, and $h\in \Bco(Y,Z)$, the component of the associator $a^{\co}_{h,g,f}$ is the invertible $2$-cell
\[\begin{tikzcd}[column sep=large]
c^{\co}\bigl(c^{\co}(h,g),f\bigr) \ar[equal]{d} \ar{r}{a^{\co}_{h,g,f}} & c^{\co}\bigl(h,c^{\co}(g,f)\bigr) \ar[equal]{d}\\
(hg)f \ar{r}{(a^{-1}_{h,g,f})^{\op}} & h(gf)\end{tikzcd}\]
in $\Bco(W,Z) = \B(W,Z)^{\op}$.
\item For each $1$-cell $f\in\Bco(X,Y)$, the components of the left unitor $\ell^{\co}_{f}$ and of the right unitor $r^{\co}_f$ are the invertible $2$-cells
\[\begin{tikzcd}[column sep=huge]
c^{\co}(1^{\co}_Y,f) = 1_Yf \ar{r}{\ell^{\co}_f\,=\, (\ell_f^{-1})^{\op}} & f & f1_X = c^{\co}(f,1^{\co}_X) \ar{l}[swap]{r^{\co}_f\,=\, (r_f^{-1})^{\op}}\end{tikzcd}\]
in $\Bco(X,Y)=\B(X,Y)^{\op}$.
\end{itemize}
This finishes the definition of $\Bco$.
\end{definition}
Next we define the bicategory in which both the $1$-cells and the $2$-cells in $\B$ are reversed.
\begin{definition}\label{def:bicategory-coop}
Define the \emph{coop-bicategory}\index{coop-bicategory}\index{bicategory!coop-}
\[\Bcoop = (\Bco)^{\op}.\defmark\]
\end{definition}
\begin{lemma}\label{Bcoop-bicat}
For each bicategory $\B$, the following statements hold.
\begin{enumerate}
\item $\Bop$, $\Bco$, and $\Bcoop$ are well-defined bicategories.
\item $\Bcoop = (\Bop)^{\co}$.
\item $(\Bop)^{\op} = \B = (\Bco)^{\co}$.
\item If $\B$ is a $2$-category, then so are $\Bop$, $\Bco$, and $\Bcoop$.
\end{enumerate}
\end{lemma}
\begin{proof}
First consider $\Bop$. For $1$-cells $f\in \Bop(W,X) = \B(X,W)$ and $g\in \Bop(X,Y) = \B(Y,X)$, the unity axiom \eqref{bicat-unity} in $\Bop$ demands that the diagram on the left be commutative:
\[\begin{tikzcd}[column sep=tiny]
c^{\op}\bigl(c^{\op}(g,1_X^{\op}),f\bigr) \ar{dd}[swap]{a^{\op}_{g,1_X^{\op},f}} \ar[start anchor={[xshift=.5cm]}]{dr}{c^{\op}(r^{\op}_g,1^{\op}_f)} &\\
& c^{\op}(g,f)\\
c^{\op}\bigl(g,c^{\op}(1_X^{\op},f)\bigr) \ar[start anchor={[xshift=.5cm]}]{ur}[swap]{c^{\op}(1^{\op}_g,\ell^{\op}_f)} &
\end{tikzcd} \qquad {=} \qquad
\begin{tikzcd}[column sep=tiny]
f(1_Xg) \ar{dr}{1_f*\ell_g} \ar{dd}[swap]{a^{-1}_{f,1_X,g}} &\\
& fg\\
(f1_X)g \ar{ur}[swap]{r_f*1_g} & \end{tikzcd}\]
The left diagram is equal to the diagram in $\B(Y,W)$ on the right, which is commutative by the unity axiom in $\B$. The pentagon axiom \eqref{bicat-pentagon} in $\Bop$ is proved similarly by interpreting the diagram in a hom category in $\B$ and using the pentagon axiom in $\B$. A similar argument shows that $\Bco$ is a bicategory. Since $\Bcoop$ is the opposite bicategory of $\Bco$, it is also well-defined.
The second assertion, that $(\Bco)^{\op}=(\Bop)^{\co}$, follows by inspecting the two definitions.
The equality $(\Bop)^{\op} = \B$ also follows from the definition. The equality $\B = (\Bco)^{\co}$ follows from $(\Cop)^{\op} = \C$ for each category $\C$.
Finally, if $\B$ is a $2$-category---i.e., $a$, $\ell$, and $r$ are identity natural transformations---then the associators, the left unitors, and the right unitors in $\Bop$, $\Bco$, and $\Bcoop$ are also identities.
\end{proof}
\begin{explanation}\label{expl:bicategory-three-duals}
The opposite bicategory $\Bop$ does \emph{not} involve opposite categories, in the sense that its hom category $\Bop(X,Y)$ is the hom category $\B(Y,X)$ in $\B$. To visualize the definitions above, suppose $f,f' \in \B(X,Y)$ are $1$-cells, and $\alpha : f \to f'$ is a $2$-cell in $\B$, as displayed below.
\[\begin{tikzpicture}[commutative diagrams/every diagram]
\node (X) at (-1,0) {$X$}; \node (Y) at (1,0) {$Y$};
\node[font=\Large] at (-.1,0) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (.15,0) {$\alpha$};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge [bend left] node[above] {$f$} (Y)
edge [bend right] node[below] {$f'$} (Y);
\end{tikzpicture}\]
The corresponding $1$-cells and $2$-cells in $\Bop$, $\Bco$, and $\Bcoop$ are displayed below.
\[\begin{tikzpicture}[commutative diagrams/every diagram]
\node (X) at (-1,0) {$X$}; \node (Y) at (1,0) {$Y$};
\node[font=\Large] at (0,0) {\rotatebox{270}{$\Rightarrow$}};
\node at (0,.7) {$\Bop(Y,X)$};
\path[commutative diagrams/.cd, every arrow, every label]
(Y) edge [bend left] (X) edge [bend right] (X);
\node (X1) at (3,0) {$X$}; \node (Y1) at (5,0) {$Y$};
\node[font=\Large] at (4,0) {\rotatebox{90}{$\Rightarrow$}};
\node at (4,.7) {$\Bco(X,Y)$};
\path[commutative diagrams/.cd, every arrow, every label]
(X1) edge [bend left] (Y1) edge [bend right] (Y1);
\node (X2) at (7,0) {$X$}; \node (Y2) at (9,0) {$Y$};
\node[font=\Large] at (8,0) {\rotatebox{90}{$\Rightarrow$}};
\node at (8,.7) {$\Bcoop(Y,X)$};
\path[commutative diagrams/.cd, every arrow, every label]
(Y2) edge [bend left] (X2) edge [bend right] (X2);
\end{tikzpicture}\]
\dqed
\end{explanation}
\begin{example}[Categories]\label{ex:cat-opposite-bicat}
Each category $\C$ may be regarded as a locally discrete bicategory\index{category!as a locally discrete bicategory} $\C_{\bi}$ as in \Cref{ex:category-as-bicat}. There are equalities
\[\begin{split}
(\C_{\bi})^{\op} &= (\Cop)_{\bi},\\
(\C_{\bi})^{\co} &= \C_{\bi},\end{split}\]
where $\Cop$ is the opposite category of $\C$.\dqed
\end{example}
\begin{example}[Monoidal Categories]
\label{ex:rev-moncat-opposite-bicat}
Suppose $\C$ is a monoidal category, regarded as a \index{monoidal category!as a one-object bicategory}one-object bicategory $\Sigma\C$ as in \Cref{ex:moncat-bicat}. There are equalities
\[\begin{split}
(\Sigma\C)^{\op} &= \Sigma(\C^{\rev}),\\
(\Sigma\C)^{\co} &= \Sigma(\Cop)
\end{split}\] with:
\begin{itemize}
\item $\C^{\rev}$ the reversed monoidal category $(\C,\otimes\tau,\tensorunit,\alpha^{-1},\rho,\lambda)$ in \Cref{ex:reversed-moncat};
\item $\Cop$ the opposite monoidal category $(\Cop,\otimes^{\op}, \tensorunit,\alpha^{-1},\lambda^{-1},\rho^{-1})$ in \Cref{ex:opposite-monoidal-cat}.
\end{itemize}
In other words, as one-object bicategories, $\C^{\rev}$ and $\Cop$ are the opposite bicategory and the co-bicategory of $\Sigma\C$, respectively.\dqed
\end{example}
\begin{example}[Hom Monoidal Categories]
\label{ex:hom-moncat-opposite-bicat}
For each object $X$ in a bicategory $\B$, the hom category $\B(X,X)$ inherits a monoidal category structure as in \Cref{ex:hom-monoidal-cat}. There are equalities
\[\begin{split}
\Bop(X,X) &= \B(X,X)^{\rev},\\
\Bco(X,X) &= \B(X,X)^{\op}\end{split}\]
of monoidal categories.\dqed
\end{example}
\section{Exercises and Notes}\label{sec:2cat_bicat_exercises}
In the following exercises, $(\B,1,c,a,\ell,r)$ denotes a bicategory.
\begin{exercise}
Suppose $(X,\mu,\operadunit)$ is a monoid in $\Set$. Prove that it defines a bicategory $\Sigma^2X$ with one object, one $1$-cell, and $X$ as the set of $2$-cells if and only if $X$ is a \index{bicategory!one object, one 1-cell}\index{commutative monoid!one object, one 1-cell bicategory}commutative monoid.
\end{exercise}
\begin{exercise}
Prove the second equality in \Cref{bicat-r-r}.
\end{exercise}
\begin{exercise}
Prove that the first diagram in \Cref{bicat-left-right-unity} is commutative.
\end{exercise}
\begin{exercise}
Suppose $(\C,\otimes,\tensorunit,\alpha,\lambda,\rho)$ is a monoidal category, and $\D$ is a category. A \emph{left action}\index{category!with a left action} of $\C$ on $\D$ consists of:
\begin{itemize}
\item a functor $\phi : \C \times \D \to \D$, with $\phi(X,Y)$ also denoted by $XY$;
\item a natural isomorphism $\psi$ with components
\[\begin{tikzcd}[column sep=large]
(X_1 \otimes X_2)Y \arrow{r}{\psi_{X_1,X_2,Y}}[swap]{\cong} & X_1(X_2Y)\end{tikzcd}\]
for objects $X_1,X_2\in\C$ and $Y\in\D$;
\item a natural isomorphism $\eta$ with components
\[\begin{tikzcd}
\tensorunit Y \arrow{r}{\eta_Y}[swap]{\cong} & Y.\end{tikzcd}\]
\end{itemize}
The unity diagram
\[\begin{tikzcd}(X \otimes \tensorunit)Y \dar[swap]{\phi(\rho_X,Y)} \rar{\psi_{X,\tensorunit,Y}}
& X (\tensorunit Y) \dar{\phi(X,\eta_Y)}\\
XY \rar[equal] & XY
\end{tikzcd}\]
and the pentagon\index{pentagon axiom}
\[\begin{tikzpicture}[commutative diagrams/every diagram]
\node (P0) at (90:2.3cm) {$(X_1 \otimes X_2)(X_3 Y)$};
\node (P1) at (90+72:2cm) {$\bigl((X_1 \otimes X_2) \otimes X_3\bigr)Y$} ;
\node (P2) at (90+2*72:1.7cm) {\makebox[2ex][r]{$\bigl(X_1 \otimes (X_2 \otimes X_3)\bigr) Y$}};
\node (P3) at (90+3*72:1.7cm) {\makebox[2ex][l]{$X_1 \bigl((X_2 \otimes X_3) Y\bigr)$}};
\node (P4) at (90+4*72:2cm) {$X_1 \bigl(X_2 (X_3 Y)\bigr)$};
\path[commutative diagrams/.cd, every arrow, every label]
(P0) edge node {$\psi_{X_1,X_2,X_3 Y}$} (P4)
(P1) edge node {$\psi_{X_1\otimes X_2,X_3,Y}$} (P0)
(P1) edge node[swap] {$\phi(\alpha_{X_1,X_2,X_3}, Y)$} (P2)
(P2) edge node {$\psi_{X_1,X_2\otimes X_3,Y}$} (P3)
(P3) edge node[swap] {$\phi(X_1,\psi_{X_2,X_3,Y})$} (P4);
\end{tikzpicture}\]
are required to be commutative for objects $X,X_1,X_2,X_3\in\C$ and $Y\in\D$. Prove that this left action $(\phi,\psi,\eta)$ of $\C$ on $\D$ contains the same data as the bicategory $\B$ with:
\begin{itemize}
\item objects $\B_0 = \{0,1\}$;
\item hom categories $\B(0,0)=\boldone$, $\B(0,1) = \D$, and $\B(1,0)=\varnothing$ (the empty category);
\item hom monoidal category $\B(1,1)=\C$ as in Example \ref{ex:hom-monoidal-cat};
\item horizontal composition
\[c_{011} = \phi : \C\times\D \to \D;\]
\item associator $a_{0111} = \psi$;
\item left unitor $\ell_{01}=\eta$.
\end{itemize}
\end{exercise}
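A basic example to keep in mind: each monoidal category $\C$ acts on itself on the left, with $\phi = \otimes$, $\psi = \alpha$, and $\eta = \lambda$. In this case the unity and pentagon diagrams above are instances of the middle unity and pentagon axioms in $\C$.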
\begin{exercise}
Define and prove the right action and the bi-action versions of the previous exercise.
\end{exercise}
\begin{exercise}
In \Cref{ex:spans}, write down the rest of the bicategory structure and check the bicategory axioms.
\end{exercise}
\begin{exercise}\label{exer:cospan}
Suppose $\C$ is a category in which all pushouts exist. Dualizing \Cref{ex:spans}, show that there is a bicategory\label{notation:cospan} $\Cospan(\C)$\index{cospan}\index{span!co-}\index{bicategory!of cospans} with:
\begin{itemize}
\item the same objects as $\C$;
\item cospans, which are diagrams in $\C$ of the form $\begin{tikzcd}[column sep=small] A \rar{f_1} & X & B, \lar[swap]{f_2}\end{tikzcd}$ as $1$-cells from $A$ to $B$;
\item morphisms $X \to X'$ that make the diagram
\[\begin{tikzcd}[row sep=tiny]
& X \arrow{dd} &\\
A \arrow{ru}{f_1} \arrow{rd}[swap]{g_1} && B \arrow{ld}{g_2} \arrow{lu}[swap]{f_2} \\
& X' &\end{tikzcd}\]
commutative as $2$-cells from $(f_1,f_2)$ to $(g_1,g_2)$.
\end{itemize}
\end{exercise}
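As a hint, dual to the pullback composition of spans: the horizontal composite of cospans $A \to X \leftarrow B$ and $B \to Y \leftarrow C$ is the cospan
\[A \to X \sqcup_B Y \leftarrow C\]
obtained from the pushout of $X \leftarrow B \to Y$.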
\begin{exercise}\label{exer:partial-function}
Analogous to \Cref{ex:relations}, show that there is a locally partially ordered $2$-category\label{notation:partial-function} $\Par$ consisting of the following data.
\begin{itemize}
\item Its objects are sets.
\item For two sets $A$ and $B$, the objects in the hom category $\Par(A,B)$ are partial functions from $A$ to $B$. A \emph{partial function}\index{partial function}\index{2-category!of partial functions} from $A$ to $B$ is a pair $(A_0,f)$ consisting of a subset $A_0 \subseteq A$ and a function $f : A_0\to B$. The set of partial functions from $A$ to $B$ is a partially ordered set under the partial ordering: $(A_0,f_0) \leq (A_1,f_1)$ if and only if
\begin{itemize}
\item $A_0 \subseteq A_1 \subseteq A$ and
\item the restriction of $f_1$ to $A_0$ is $f_0$.
\end{itemize}
We regard this partially ordered set as a small category $\Par(A,B)$ with a unique morphism $(A_0,f_0) \to (A_1,f_1)$ if and only if \[(A_0,f_0) \leq (A_1,f_1).\]
\end{itemize}
An important part of this exercise is the definition of the horizontal composition of partial functions.
\end{exercise}
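One natural choice for this horizontal composition: the composite of $(A_0,f) \in \Par(A,B)$ and $(B_0,g) \in \Par(B,C)$ is the partial function
\[\bigl(f^{-1}(B_0),\ g \circ f|_{f^{-1}(B_0)}\bigr) \in \Par(A,C),\]
defined exactly on those elements of $A_0$ whose image under $f$ lies in $B_0$.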
\begin{exercise}
In \Cref{ex:2cat-of-cat,ex:2cat-of-enriched-cat}, check all the $2$-category axioms in $\Cat$ and in $\Cat_{\V}$.
\end{exercise}
\begin{exercise}\label{exer:cat-over}
Suppose $\A$ is a small category. Consider the \index{2-category!of over-categories}\index{category!over-}\index{over-category}over-category\label{notation:overcat} $\overcat{\Cat}{\A}$, whose objects are pairs $(\B,F)$ consisting of a small category $\B$ and a functor $F : \B\to\A$. A morphism $(\B,F) \to (\C,G)$ is a functor $H : \B\to\C$ such that the diagram
\[\begin{tikzcd}[column sep=small]
\B \arrow{rr}{H} \arrow[shorten >=-.15cm]{rd}[swap]{F} && \C \arrow[shorten >=-.15cm]{ld}{G}\\
& \A & \end{tikzcd}\]
is commutative, i.e., $GH = F$. Prove that $\overcat{\Cat}{\A}$ becomes a $2$-category if a $2$-cell $H \to H'$ is defined as a natural transformation $\alpha : H \to H'$ such that \[1_G * \alpha = 1_F.\] One can visualize this condition in the following ice cream cone diagram.
\[\begin{tikzpicture}[commutative diagrams/every diagram]
\node (B) at (-1,0) {$\B$}; \node (C) at (1,0) {$\C$};
\node (A) at (0,-1.5) {$\A$};
\node[font=\Large] at (-.1,0) {\rotatebox{270}{$\Rightarrow$}};
\node[font=\small] at (.15,0) {$\alpha$};
\path[commutative diagrams/.cd, every arrow, every label]
(B) edge [bend left] node[above] {$H$} (C)
(B) edge [bend right] node[below] {$H'$} (C)
(B) edge node[left] {$F$} (A)
(C) edge node[right] {$G$} (A);
\end{tikzpicture}\]
Observe that $\overcat{\Cat}{\boldone}$ is the $2$-category $\Cat$ in \Cref{ex:2cat-of-cat}.
\end{exercise}
\begin{exercise}\label{exer:moncat}
Generalizing \Cref{ex:2cat-of-cat}, show that there are $2$-categories:
\begin{enumerate}[label=(\roman*)]
\item\label{notation:catfl} $\Catfl$\index{2-category!of finite-limit preserving functors} with small categories as objects, functors that preserve finite limits up to isomorphisms as $1$-cells, and natural transformations as $2$-cells.
\item\label{notation:moncat} $\MonCat$\index{2-category!of monoidal categories}\index{monoidal category!2-category} with small monoidal categories as objects, monoidal functors as $1$-cells, and monoidal natural transformations as $2$-cells.
\item\label{notation:stgmoncat} $\StgMonCat$ and $\SttMonCat$ defined in the same way as $\MonCat$, but with strong monoidal functors and strict monoidal functors, respectively, as $1$-cells.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exer:enriched-multicat}
Suppose $\V$ is a symmetric monoidal category. Extend \Cref{ex:2cat-of-enriched-cat} and \Cref{multicat-2cat} to $\V$-multicategories. In other words:
\begin{itemize}
\item Define $\V$-multicategories\index{enriched!multicategory}\index{multicategory!enriched} as multicategories in which each $\C\duc$ is an object in $\V$, along with suitable structure morphisms and axioms in $\V$.
\item Define $\V$-multifunctors between $\V$-multicategories, $\V$-multinatural transformations between $\V$-multifunctors, and their horizontal and vertical compositions.
\item Show that there is a $2$-category\index{2-category!of enriched multicategories} $\Multicat_{\V}$ of small $\V$-multicategories, $\V$-multifunctors, and $\V$-multinatural transformations.
\end{itemize}
\end{exercise}
\begin{exercise}\label{exer:enriched-polycat}
Repeat the previous exercise for $\V$-polycategories.\index{enriched!polycategory}\index{polycategory!enriched}\index{2-category!of enriched polycategories}
\end{exercise}
\begin{exercise}
Check the details of \Cref{ex:endomorphism-polycat,ex:multicat-as-polycat}.
\end{exercise}
\begin{exercise}
Give a detailed proof of \Cref{polycat-2cat}.
\end{exercise}
\begin{exercise}
A \emph{lattice}\index{lattice} $(L,\leq)$ is a partially ordered set in which each pair of elements $\{x,y\}$ has both a least upper bound $x \vee y$ and a greatest lower bound $x \wedge y$. A \emph{distributive lattice} is a lattice that satisfies \[x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z) \forspace x,y,z\in L.\] Show that a distributive lattice $(L,\leq)$ yields a small polycategory\index{polycategory!distributive lattice} with object set $L$, and with $L\sbinom{d_1,\ldots,d_n}{c_1,\ldots,c_m}$ containing a unique element if and only if
\[c_1 \wedge \cdots \wedge c_m \leq d_1 \vee \cdots \vee d_n.\]
\end{exercise}
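For example, every totally ordered set $(L,\leq)$, with $x \vee y = \max\{x,y\}$ and $x \wedge y = \min\{x,y\}$, is a distributive lattice and hence yields such a polycategory.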
\begin{exercise}
Fill in the rest of the proof of \Cref{Bcoop-bicat}.
\end{exercise}
\subsection*{Notes}
\begin{note}[Discussion of Literature]
The concepts of a bicategory and of a $2$-category are due to B\'{e}nabou in \cite{benabou} and \cite{benabou-2cat}, respectively. The articles \cite{lack,leinster-bicat} are useful guides for $2$-categories and bicategories. The bicategories $\Bop$, $\Bco$, and $\Bcoop$ are what B\'{e}nabou called the \index{transpose}\emph{transpose}, the \index{conjugate}\emph{conjugate}, and the \index{symmetric}\emph{symmetric}, respectively. What we call $1$-cells and $2$-cells are sometimes called $1$-morphisms and $2$-morphisms in the literature. The reader is cautioned that Borceux \cite[Definition 7.7.1]{borceux2} uses different conventions for a bicategory. In particular, in his definition, the horizontal composition is a functor
\[\begin{tikzcd}
c_{XYZ} : \B(X,Y)\times\B(Y,Z)\ar{r} & \B(X,Z).
\end{tikzcd}\]
His associator, left unitor, and right unitor are denoted by $\alpha$, $\lambda$, and $\rho$, and have components
\[\begin{tikzcd}h(gf) \ar{r}{\alpha} & (hg)f,\end{tikzcd} \quad
\begin{tikzcd}f \ar{r}{\lambda} & f1_X,\end{tikzcd}\andspace
\begin{tikzcd}f \ar{r}{\rho} & 1_Yf.\end{tikzcd}\]
Moreover, these conventions do not agree with the notations in his Diagrams 7.18 and 7.19.
\end{note}
\begin{note}[$2$-Vector Spaces]
\Cref{ex:two-vector-space,ex:twovect-tc} about $2$-vector spaces are from \cite[Section 5]{kapranov-voevodsky}.
\end{note}
\begin{note}[Notation for $2$-Cells]
We denote a $2$-cell $\alpha : f \to f'$ by a single arrow both (i) in-line and (ii) in diagrams with $1$-cells as nodes and $2$-cells as edges, such as the Unity and the Pentagon axioms in a bicategory. We only write a $2$-cell as a double arrow $\Rightarrow$ in diagrams with $1$-cells as edges and $2$-cells occupying regions, such as pasting diagrams. This single-arrow notation by default agrees with the usage in, for example, \cite{awodey,leinster,simmons}, but the notations are not consistent in the literature. Another choice is to write a $2$-cell using a double arrow in-line, as in $\theta : f \Rightarrow f'$. However, with this second choice, in diagrams with $1$-cells as nodes, the $2$-cells, which are the edges, are usually written as single arrows. Some authors use a single arrow for a $2$-cell even in pasting diagrams.
Here is a mnemonic to remember our notations for $2$-cells: An arrow $\theta : f \to f'$---which could be a $1$-cell between objects or a $2$-cell between $1$-cells---goes between objects $f,f'$ of one lower dimension. The single arrow indicates not only the directionality of $\theta$, but also that it is one dimension higher than the symbols $f$ and $f'$. In a diagram, whether the nodes are objects or $1$-cells, the edges that connect them are one dimension higher, so only need single arrows. In pasting diagrams, to indicate that $2$-cells are one dimension higher than the $1$-cells, which are already displayed as single arrows, we denote the $2$-cells by double arrows.
\end{note}
\begin{note}[Quillen Model Structures]
With suitable concepts of functors to be discussed in \Cref{ch:functors}, $2$-categories form a Quillen \index{model category}\index{category!model}model category \cite{lack-model-2cat}. Another model structure on $2$-categories is in \cite{whpt,whpt-correction}. The analogous Quillen model structure for bicategories is in \cite{lack-model-bicat}.
\end{note}
\begin{note}[Unity Properties]
The proofs of the unity properties in \Cref{bicat-left-right-unity,bicat-l-equals-r} are adapted from \cite{kelly2} (Theorems 6 and 7), which has the analogous statements for monoidal categories. The two commutative triangles in \Cref{bicat-left-right-unity} are stated in \cite[page 68]{maclane-pare}.
\end{note}
\begin{note}[Multicategories and Polycategories]\label{note:multicats-polycats}
What we call a multicategory is also called a \emph{symmetric multicategory}\index{symmetric multicategory}\index{multicategory!symmetric}, with the plain term \emph{multicategory} reserved for the non-symmetric definition. The terms \emph{operad}\index{operad}, a \emph{symmetric operad}, and a \emph{colored operad} are also common. The book \cite{yau-operad} is a gentle introduction to multicategories.
Historically, multicategories without symmetric group actions were introduced by Lambek \cite{lambek}. May \cite{may} introduced the term \emph{operad} for a one-object multicategory. With the \index{Boardman-Vogt tensor product}Boardman-Vogt tensor product \cite[Definition 2.14]{boardman-vogt}, small multicategories and multifunctors form a symmetric monoidal closed category \cite[Section 4.1]{moerdijk-toen}. Homotopy theory of multicategories is discussed in \cite{moerdijk-toen,white-yau}. Applications of multicategories outside of pure mathematics can be found in \cite{yau-wd,yau-hqft}.
Polycategories without symmetric group actions were introduced by Szabo \cite{szabo}. Gan \cite{gan} used the term \index{dioperad}\emph{dioperad} for a one-object polycategory. Discussion of the relationships between categories, multicategories, polycategories, and other variations can be found in \cite{markl08,bluemonster}. The symmetric monoidal category structures on the category of small polycategories and other variants are discussed in \cite{hry}.
\end{note}
\chapter{Adjunctions and Monads}
\label{ch:adjunctions}
In this chapter we discuss adjoint pairs of $1$-cells in a bicategory.
This notion is sometimes called \emph{internal} adjunction, to
emphasize that it is a structure internal to a specific bicategory and
to distinguish it from the tricategorical notion of adjunction
\emph{between} bicategories, which is beyond our scope.
We give the basic definition of internal adjunctions in
\cref{sec:internal-adjunctions} and explain the related notion of mates.
In \cref{sec:internal-equivalences}
we focus on the special case of internal equivalences (invertible
pairs).
Applying this theory to bicategories such as
$\Bicatps(\B,\B')$ will allow us to understand biequivalences of
bicategories and the Coherence Theorem \ref{theorem:bicat-coherence}.
In \cref{sec:dual-pairs} we describe duality for modules over rings as
adjunctions in the bicategory $\Bimod$.
In \cref{sec:monads,sec:2-monads} we discuss monads, $2$-monads, and
their algebras. We will apply this in \Cref{ch:fibration} and show
that cloven fibrations, respectively split fibrations, of categories
correspond to pseudo, respectively strict, algebras over a certain
$2$-monad $\funnyf$ described in \cref{def:iimonad-on-catoverc}.
Pasting diagrams will be an essential part of our explanation in this
chapter, and we remind the reader that \cref{thm:bicat-pasting-theorem}
and \cref{conv:boundary-bracketing} explain how to interpret pasting
diagrams in a bicategory.
\begin{motivation}\label{motivation:internal-adjunction}
Two fundamental but apparently disparate examples are unified by the
theory of adjunctions in bicategories. The first is the notion of
an adjunction between categories (cf. \cref{def:adjunctions}). If
$\C$ and $\D$ are small categories, then an adjunction
\[
\begin{tikzpicture}[x=25mm,y=25mm]
\draw[0cell]
(0,0) node (x) {\C}
(1,0) node (y) {\D}
;
\draw[1cell]
(x) edge[bend left] node {F} (y)
(y) edge[bend left] node {G} (x)
;
\draw[2cell]
(.5,0) node[rotate=0,font=\Large] {\bot}
;
\end{tikzpicture}
\]
can be described in the $2$-category $\IICat$ as certain data satisfying
certain axioms which we shall elaborate below.
The second example is the notion of\index{dualizable!object}\index{object!dualizable} dualizable object $V$ in a
monoidal category, such as a finite-dimensional vector space $V$
over a field $\fieldk$ and its\index{linear dual} linear dual
$V^* = \Hom_\fieldk(V,\fieldk)$. Interpreted as data and axioms in
a one-object bicategory, these have the same general format as that of
an adjunction. Namely, duality for the pair $(V,V^*)$ consists of a unit and counit
\begin{align*}
\fieldk &\too V \otimes_\fieldk V^*\\
V^* \otimes_\fieldk V &\too \fieldk
\end{align*}
satisfying\index{triangle identities} triangle axioms for the composites
\begin{align*}
V \too & V \otimes_\fieldk V^* \otimes_\fieldk V \too V\\
V^* \too & V^* \otimes_\fieldk V \otimes_\fieldk V^* \too V^*.
\end{align*}
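Concretely, if $V$ has a finite basis $\{e_i\}$ with dual basis $\{e^i\} \subseteq V^*$, then the unit sends $1 \mapsto \sum_i e_i \otimes e^i$ and the counit is evaluation, $\phi \otimes v \mapsto \phi(v)$. The two triangle axioms then amount to the expansions
\[v = \sum_i e^i(v)\, e_i \andspace \phi = \sum_i \phi(e_i)\, e^i.\]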
We will extend this example to modules over commutative rings in
\cref{sec:dual-pairs}.
\end{motivation}
\section{Internal Adjunctions}\label{sec:internal-adjunctions}
\begin{definition}\label{definition:internal-adjunction}
An \emph{internal adjunction}\index{internal!adjunction}\index{adjunction!in a bicategory}\index{bicategory!adjunction} $f \dashv g$ in a bicategory
$\sB$ is a quadruple $(f,g,\eta,\epz)$ consisting of
\begin{itemize}
\item $1$-cells $f\cn X \to Y$ and $g\cn Y \to X$;
\item $2$-cells $\eta\cn 1_X \to gf$ and $\epz\cn fg \to 1_Y$.
\end{itemize}
These data are subject to the following two axioms, in the form of
commutative triangles. These are known as the\index{triangle identities!in a bicategory}\index{bicategory!triangle identities} \emph{triangle identities}.
\begin{equation}\label{diagram:triangles}
\begin{tikzpicture}[x=22mm,y=16mm,rotate=0,vcenter]
\draw[0cell]
(0,0) node (f1) {f\, 1_X}
(1,0) node (fgf1) {f(gf)}
(2,0) node (fgf2) {(fg)f}
(2,-1) node (1f) {1_Y\, f}
(2,-2) node (f) {f}
;
\draw[1cell]
(f1) edge[swap] node {r_f} (f)
(f1) edge node {1_f * \eta} (fgf1)
(fgf1) edge node {a_{f,g,f}^\inv} (fgf2)
(fgf2) edge node {\epz * 1_f} (1f)
(1f) edge node {\ell_f} (f)
;
\draw[2cell]
;
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=22mm,y=16mm,rotate=0,vcenter]
\draw[0cell]
(0,0) node (1g) {1_X\, g}
(1,0) node (gfg1) {(gf)g}
(2,0) node (gfg2) {g(fg)}
(2,-1) node (g1) {g\, 1_Y}
(2,-2) node (g) {g}
;
\draw[1cell]
(1g) edge[swap] node {\ell_g} (g)
(1g) edge node {\eta * 1_g} (gfg1)
(gfg1) edge node {a_{g,f,g}} (gfg2)
(gfg2) edge node {1_g * \epz} (g1)
(g1) edge node {r_g} (g)
;
\draw[2cell]
;
\end{tikzpicture}
\end{equation}
The $1$-cell $f$ is called the \emph{left adjoint}\index{left adjoint!internal adjunction}\index{internal adjunction!left adjoint} and $g$ is called
the\index{right adjoint!internal adjunction}\index{internal adjunction!right adjoint} \emph{right adjoint}. The $2$-cell $\eta$, respectively $\epz$,
is called the\index{unit!internal adjunction}\index{internal adjunction!unit} \emph{unit},
respectively\index{counit!internal adjunction}\index{internal adjunction!counit} \emph{counit}. Often we
will omit the word internal, or refer to $f$ and $g$ as an
\index{adjoint!pair}\index{internal adjunction!adjoint pair}\emph{adjoint pair} or \emph{dual pair}\index{dual pair} of $1$-cells. This
terminology is motivated by examples we will discuss in
\cref{sec:dual-pairs}.
\end{definition}
\begin{explanation}\label{explanation:internal-adjunction}
\
\begin{enumerate}
\item The left-hand side of \eqref{diagram:triangles} is an equality
in the local category $\sB(X,Y)$. We call this the\index{left
triangle identity} \emph{left
triangle identity} since it begins and ends with the left
adjoint, $f$. The right-hand side of \eqref{diagram:triangles} is
an equality in $\sB(Y,X)$. We call this the\index{right triangle
identity} \emph{right triangle
identity} since it begins and ends with the right adjoint, $g$.
\item As discussed in \cref{ex:bicat-pasting-simple}, we can present
the two commutative diagrams in \eqref{diagram:triangles} as two
equalities of pasting diagrams with the bracketing and associators
left implicit.
\begin{equation}\label{diagram:triangles-2fgf}
\begin{tikzpicture}[x=22mm,y=22mm,baseline={(eq).base}]
\def\x{1.5}
\def\h{.866}
\def\u{.5}
\draw[font=\Large] (\x+\u-.5,0) node (eq) {=};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (y0) {Y}
(0:1) node (y1) {Y}
(-60:1) node (x1) {X}
(-120:1) node (x0) {X}
;
\draw[1cell]
(x0) edge[swap] node {1_X} (x1)
(y0) edge node {1_Y} (y1)
(x0) edge node {f} (y0)
(x1) edge[swap] node {f} (y1)
;
}
\begin{scope}[shift={(0,.433)}]
\boundary
\draw[1cell]
(y0) edge node[swap, pos=.6] {g} (x1)
;
\draw[2cell]
(-90:.5) node[rotate=90,font=\Large] {\Rightarrow}
node[left=.04] {\eta}
(-30:.6) node[rotate=90,font=\Large] {\Rightarrow}
node[right=.04] {\epz}
;
\end{scope}
\begin{scope}[shift={(2.5,.433)}]
\boundary
\draw[2cell]
(-60:.5) node[rotate=120,font=\Large] {\Rightarrow}
++(30:.15) node {1_f}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\begin{equation}\label{diagram:triangles-2gfg}
\begin{tikzpicture}[x=22mm,y=22mm,baseline={(eq).base}]
\def\x{1.5}
\def\h{.866}
\def\u{.5}
\draw[font=\Large] (\x+\u-.5,0) node (eq) {=};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (W0) {X}
(0:1) node (W1) {X}
(-60:1) node (Z1) {Y}
(-120:1) node (Z0) {Y}
;
\draw[1cell]
(Z0) edge[swap] node {1_Y} (Z1)
(W0) edge node {1_X} (W1)
(Z0) edge node {g} (W0)
(Z1) edge[swap] node {g} (W1)
;
}
\begin{scope}[shift={(0,.433)}]
\boundary
\draw[1cell]
(W0) edge node[swap, pos=.6] {f} (Z1)
;
\draw[2cell]
(-90:.5) node[rotate=-90,font=\Large] {\Rightarrow}
node[left=.03] {\epz}
(-30:.6) node[rotate=-90,font=\Large] {\Rightarrow}
node[right=.04] {\eta}
;
\end{scope}
\begin{scope}[shift={(2.5,.433)}]
\boundary
\draw[2cell]
(-60:.5) node[rotate=-60,font=\Large] {\Rightarrow}
++(3:.15) node {1_g}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\item In a $2$-category, the associator and unitors are identities so the
triangle identities\index{triangle identities!in a 2-category}\index{2-category!triangle identities}
can be expressed via the pastings above (with no associators
needed), or as the following commutative triangles.
\[
\begin{tikzpicture}[x=22mm,y=16mm]
\draw[0cell]
(0,0) node (f) {f}
(1,0) node (fgf) {fgf}
(1,-1) node (f') {f}
;
\draw[1cell]
(f) edge node {1_f * \eta} (fgf)
(fgf) edge node {\epz * 1_f} (f')
(f) edge[swap] node {1_f} (f')
;
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=22mm,y=18mm]
\draw[0cell]
(0,0) node (f) {g}
(1,0) node (fgf) {gfg}
(1,-1) node (f') {g}
;
\draw[1cell]
(f) edge node {\eta * 1_g} (fgf)
(fgf) edge node {1_g * \epz} (f')
(f) edge[swap] node {1_g} (f')
;
\end{tikzpicture}
\]
\item The notion of dualizable object in a monoidal category
$M = (M,\otimes,\tensorunit)$ is equivalent to that of an internal
adjoint in the bicategory $\Si M$. We will explain this example
and its generalizations in \cref{sec:dual-pairs}.
\item Adjoints are unique up to canonical isomorphism; see
\cref{lemma:bicat-adj-unique} below.
\end{enumerate}
\end{explanation}
\begin{lemma}\label{lemma:bicat-adj-unique}\index{adjoint!uniqueness}
Let $(f,g,\eta,\epz)$ and $(f,g',\eta',\epz')$ be two adjunctions
with a common left adjoint $f$. Then there is a canonical
isomorphism $g \to g'$ given by
\[
\begin{tikzpicture}[x=23mm,y=20mm]
\draw[0cell]
(0,0) node (g0) {g}
++(1,0) node (g) {1_X\, g}
++(1,0) node (g'fg) {(g'f)g}
++(1,0) node (g'fg2) {g'(fg)}
++(1,0) node (g') {g' 1_Y}
++(1,0) node (g'0) {g'}
;
\draw[1cell]
(g0) edge node {\ell^\inv} (g)
(g) edge node {\eta' * 1_g} (g'fg)
(g'fg) edge node {a} (g'fg2)
(g'fg2) edge node {1_{g'} * \epz} (g')
(g') edge node {r} (g'0)
;
\end{tikzpicture}
\]
and inverse given by
\[
\begin{tikzpicture}[x=23mm,y=20mm]
\draw[0cell]
(0,0) node (g'0) {g'}
++(1,0) node (g') {1_X \, g'}
++(1,0) node (gfg') {(gf)g'}
++(1,0) node (gfg'2) {g(fg')}
++(1,0) node (g) {g 1_Y}
++(1,0) node (g0) {g.}
;
\draw[1cell]
(g'0) edge node{\ell^\inv} (g')
(g') edge node {\eta * 1_{g'}} (gfg')
(gfg') edge node {a} (gfg'2)
(gfg'2) edge node {1_g * \epz'} (g)
(g) edge node {r} (g0)
;
\end{tikzpicture}
\]
\end{lemma}
\begin{proof}
We use the triangle axioms for $fg'f$ and $gfg$ to show that the
composite $g \to g' \to g$ is the identity. The other composite
follows by symmetry. For clarity, we first assume that we are
working in a $2$-category, so that $a$, $\ell$, and $r$ are
identities. Then the diagram below shows that the composite is the
identity.
\[
\begin{tikzpicture}[x=25mm,y=16mm,scale=1]
\draw[0cell]
(0,0) node (g) {g}
(1,1) node (g'fg) {g'fg}
(2,2) node (g') {g'}
(1,-1) node (gfg) {gfg}
(2,0) node (M) {gfg'fg}
(3,1) node (gfg') {gfg'}
(3,-1) node (gfg2) {gfg}
(4,0) node (g2) {g}
;
\draw[1cell]
(g) edge node {\eta' * 1_g} (g'fg)
(g'fg) edge node {1_{g'} * \epz} (g')
(g') edge node {\eta * 1_{g'}} (gfg')
(gfg') edge node {1_g * \epz'} (g2)
(g) edge[swap] node {\eta * 1_g} (gfg)
(g'fg) edge[swap] node {\eta * 1_{g'fg}} (M)
(gfg) edge node {1_{gf} * \eta' * 1_g} (M)
(M) edge[swap] node {1_{gfg'} * \epz} (gfg')
(M) edge node {1_g * \epz' * 1_{fg}} (gfg2)
(gfg2) edge[swap] node {1_g * \epz} (g2)
(gfg) edge[swap] node {1_{gfg}} (gfg2)
;
\end{tikzpicture}
\]
The three squares are instances of the middle four exchange law
\eqref{middle-four} and the triangle is the axiom for $fg'f$
whiskered on both sides by $g$. The remaining composite along the
bottom is then the identity by the triangle axiom for $gfg$.
This is the heart of the argument in a general bicategory, and
indeed by the bicategorical Coherence Theorem
\ref{theorem:bicat-coherence} this argument implies the general
result. For the sake of completeness, we sketch how to incorporate
general unitors $r$ and $\ell$. Begin with the following diagram,
obtained from the previous one by inserting appropriate units.
\[
\begin{tikzpicture}[x=25mm,y=16mm]
\draw[0cell]
(0,0) node (a) {1g'1}
(-1,-1) node (b) {1g'fg}
(1,-1) node (c) {gfg'1}
(-2,-2) node (d) {11g}
(0,-2) node (e) {gfg'fg}
(2,-2) node (f) {g11}
(-1,-3) node (g) {gf1g}
(1,-3) node (h) {g1fg}
(0,-4) node (i) {gfg}
;
\draw[1cell]
(d) edge node {} (b)
(b) edge node {} (a)
(a) edge node {} (c)
(c) edge node {} (f)
(b) edge node {\eta * 1_{g'} * \epz} (c)
(b) edge node {} (e)
(e) edge node {} (c)
(d) edge node {\eta * \eta' * 1_g} (e)
(e) edge node {1_{g} * \epz' * \epz} (f)
(d) edge node {} (g)
(g) edge node {} (i)
(i) edge node {} (h)
(h) edge node {} (f)
(g) edge node {} (e)
(e) edge node {} (h)
;
\draw[2cell]
node[between=g and h at .5] {(*)}
;
\end{tikzpicture}
\]
Three of the squares again commute by middle four exchange \eqref{middle-four}; the
final square marked (*) commutes because it is the triangle axiom
for $fg'f$ whiskered on both sides by $g$. To complete the
argument, one uses appropriate unitors and unit axioms around the
boundary.
The argument incorporating general associators is similar; one
expands each node into a commutative diagram for different
bracketings, and adjusts the existing arrows appropriately. The new
regions all commute by some combination of axioms. One can check
this by hand, or apply the Coherence Theorem \ref{theorem:bicat-coherence} to guarantee there will
be no obstruction to completing the diagram.
\end{proof}
\begin{proposition}\label{proposition:adjunctions-preserved}\index{adjunction!preservation by pseudofunctors}\index{pseudofunctor!preserves adjunctions}
Suppose $K\cn \sA \to \sB$ is a pseudofunctor of bicategories. If
$(f,g,\eta,\epz)$ is an adjunction in $\sA$, then there is an
induced adjunction
\[
\begin{tikzpicture}[x=25mm,y=25mm]
\draw[0cell]
(0,0) node (x) {K(X)}
(1,0) node (y) {K(Y)}
;
\draw[1cell]
(x) edge[bend left] node {K(f)} (y)
(y) edge[bend left] node {K(g)} (x)
;
\draw[2cell]
(.5,0) node[rotate=0,font=\Large] {\bot}
;
\end{tikzpicture}
\]
in $\sB$. The unit $\ol\eta$ is given by the composite
\[
\begin{tikzpicture}[x=25mm,y=25mm]
\draw[0cell]
(0,0) node (1kx) {1_{K(X)}}
(1,0) node (k1x) {K(1_X)}
(2,0) node (kgf) {K(gf)}
(3,0) node (kgkf) {K(g)\,K(f)}
;
\draw[1cell]
(1kx) edge node {K^0_X} (k1x)
(k1x) edge node {K(\eta)} (kgf)
(kgf) edge node {(K^2_{g,f})^\inv} (kgkf)
;
\end{tikzpicture}
\]
and the counit $\ol\epz$ is given by the composite
\[
\begin{tikzpicture}[x=25mm,y=25mm]
\draw[0cell]
(0,0) node (kfkg) {K(f)\,K(g)}
(1,0) node (kfg) {K(fg)}
(2,0) node (k1y) {K(1_Y)}
(3,0) node (1ky) {1_{K(Y)}.}
;
\draw[1cell]
(kfkg) edge node {K^2_{f,g}} (kfg)
(kfg) edge node {K(\epz)} (k1y)
(k1y) edge node {(K^0_Y)^\inv} (1ky)
;
\end{tikzpicture}
\]
\end{proposition}
\begin{proof}
We must verify the triangle axioms of \eqref{diagram:triangles}. We
check the triangle axiom for $K(f)K(g)K(f)$ and leave the other as
\cref{exercise:other-triangle-psfun}. To simplify the notation, we let
$(-)'$ denote $K(-)$; for example $f' \cn= K(f)$, $\eta'\cn=K(\eta)$
etc.
Our verification consists of the following steps, and is shown in
\eqref{diagram:kfkgkf} below.
\begin{enumerate}
\item\label{it:kfkgkf-1} We begin with the composite
$\ell_{f'} \circ (\ol\epz * 1_{f'}) \circ a^\inv \circ (1_{f'} * \ol\eta)$ along the top and
right.
\item\label{it:kfkgkf-1.5} Each of $\ol\epz * 1_{f'}$ and $1_{f'} * \ol\eta$
decomposes as a composite of three $2$-cells because whiskering is
functorial.
\item\label{it:kfkgkf-2} We use a hexagon equivalent to the lax
associativity of $K$, \eqref{f2-bicat}. Note that we use $a$ to
denote the associators in both $\sA$ and $\sB$.
\item\label{it:kfkgkf-3} We use naturality of the functoriality
constraint $K^2$ in two places.
\item\label{it:kfkgkf-4} We use quadrangles equivalent to the lax
left and right unity axioms \eqref{f0-bicat}. Note that we use
$r$ and $\ell$, respectively, for the right and left unitors in
both $\sA$ and $\sB$.
\item\label{it:kfkgkf-6} We apply $K$ to the left triangle identity for
$(f,g)$ in $\sA$, using the functoriality of $K$ with respect to
$2$-cells.
\end{enumerate}
The remaining composite along the lower left is equal to $r_{f'}$,
and that is what we wanted to check.
\begin{equation}\label{diagram:kfkgkf}
\begin{tikzpicture}[x=11.25mm,y=11mm,baseline={(m).base}]
\draw[0cell]
(0,0) node (a) {f'1_{X'}}
(a) ++(-90:3) node (b) {f'}
(a) ++(-35:2.7) node (c) {f'(1_X)'}
(c) ++(0:2.5) node (d) {f'(gf)'}
(a) ++(0:1) node (e) {f'(g'f')}
(1+1.3,-1-1.3) node (i) {1_{Y'}f'}
(i) ++(180+20:3) node (j) {f'}
(i) ++(90:1) node (f) {(f'g')f'}
(i) ++(90+35:2.7) node (h) {(1_Y)'f'}
(h) ++(90:2.5) node (g) {(fg)'f'}
%
(d) ++(180+60:2.5) node (l) {(f(gf))'}
(l) ++(180:2.5) node (k) {(f 1_X)'}
(g) ++(270-60:2.5) node (m) {((fg)f)'}
(m) ++(270:2.5) node (n) {(1_Y\,f)'}
;
\draw[1cell]
(a) edge node (A) {1_{f'} * \ol\eta} (e)
(e) edge node {a^\inv} (f)
(f) edge node (B) {\ol\epz * 1_{f'}} (i)
%
(a) edge node {1_{f'} * K^0_X} (c)
(c) edge node {1_{f'} * \eta'} (d)
(d) edge[swap] node {1_{f'} * (K^2_{g,f})^\inv} (e)
(f) edge[swap] node {K^2_{f,g}*1_{f'}} (g)
(g) edge node {\epz'*1_{f'}} (h)
(h) edge[swap] node {(K^0_Y)^\inv * 1_{f'}} (i)
(i) edge node {\ell_{f'}} (j)
%
(k) edge[swap] node {(1_f * \eta)'} (l)
(l) edge node (D) {(a^\inv)'} (m)
(m) edge[swap] node {(\epz * 1_f)'} (n)
(n) edge[swap] node {\ell_f'} (j)
%
(a) edge[swap] node {r_{f'}} (b)
(b) edge[swap] node {((r_f)')^\inv} (k)
(c) edge node {K^2_{f,1_X}} (k)
(d) edge node {K^2_{f,gf}} (l)
(g) edge node {K^2_{fg,f}} (m)
(h) edge[swap] node[pos=.4] {K^2_{1_Y,f}} (n)
%
(k) edge[swap,bend right,out=-40,in=210] node[pos=.38] (C) {(r_f)'} (j)
;
\draw[2cell]
node[between=c and A at .5] {\eqref{it:kfkgkf-1.5}}
node[between=h and B at .5] (T) {\eqref{it:kfkgkf-1.5}}
node[between=d and g at .5] {\eqref{it:kfkgkf-2}}
node[between=c and l at .5] {\eqref{it:kfkgkf-3}}
node[between=h and m at .5] {\eqref{it:kfkgkf-3}}
node[between=c and b at .6] {\eqref{it:kfkgkf-4}}
node[between=j and h at .35] {\eqref{it:kfkgkf-4}}
node[between=k and n at .5] {\eqref{it:kfkgkf-6}}
;
\end{tikzpicture}
\end{equation}
\end{proof}
\begin{remark}\index{lax functor!does not preserve adjunctions}\index{adjunction!non-preservation by lax functors}
Note that both the unit and the counit of $K(f) \dashv K(g)$
require invertibility of either $K^2$ or $K^0$. General lax
functors do not preserve adjunctions.
\end{remark}
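To see concretely where invertibility enters, note that for a merely lax functor $K$ the candidate unit would begin
\[
1_{K(X)} \fto{K^0_X} K(1_X) \fto{K(\eta)} K(gf),
\]
but the lax constraint $K^2_{g,f}\cn K(g)\,K(f) \to K(gf)$ points toward $K(gf)$, so without its inverse there is no canonical way to continue on to $K(g)\,K(f)$. Dually, the candidate counit requires the inverse of $K^0_Y$.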
\begin{example}[Corepresented Adjunction]\label{example:corepresented-adjunction}\index{corepresented adjunction}\index{adjunction!corepresented}
Let $(f,g,\eta,\epz)$ be an adjunction in $\sB$
\[
\begin{tikzpicture}[x=25mm,y=25mm]
\draw[0cell]
(0,0) node (x) {X}
(1,0) node (y) {Y}
;
\draw[1cell]
(x) edge[bend left] node {f} (y)
(y) edge[bend left] node {g} (x)
;
\draw[2cell]
(.5,0) node[rotate=0,font=\Large] {\bot}
;
\end{tikzpicture}
\]
and let $W$ be any object in $\sB$. Horizontal composition with $f$
and $g$ defines the corepresentable pseudofunctors $f_*$ and $g_*$,
respectively. These are described in
\cref{corepresentable-pseudofunctor}.
These induce an adjunction of $1$-categories between $\sB(W,X)$ and
$\sB(W,Y)$ with unit $\eta_*$ and counit $\epz_*$ defined as follows.
The component of the unit $\eta_*$ at a $1$-cell $t \in \sB(W,X)$ is
\[
\begin{tikzpicture}[x=25mm,y=20mm]
\draw[0cell]
(0,0) node (t) {t}
(1,0) node (t1) {1_X t}
(2,0) node (gft1) {(gf)t}
(3,0) node (gft2) {g(ft).}
;
\draw[1cell]
(t) edge node {\ell_t^\inv} (t1)
(t1) edge node {\eta * 1_t} (gft1)
(gft1) edge node {a_{g,f,t}} (gft2)
;
\end{tikzpicture}
\]
The component of the counit $\epz_*$ at a $1$-cell $s \in \sB(W,Y)$ is
\[
\begin{tikzpicture}[x=25mm,y=20mm]
\draw[0cell]
(0,0) node (fgs2) {f(gs)}
(1,0) node (fgs1) {(fg)s}
(2,0) node (1s) {1_Ys}
(3,0) node (s) {s.}
;
\draw[1cell]
(fgs2) edge node {a_{f,g,s}^\inv} (fgs1)
(fgs1) edge node {\epz * 1_s} (1s)
(1s) edge node {\ell_s} (s)
;
\end{tikzpicture}
\]
We have two ways to verify that this is indeed an adjunction. The
first is to apply \cref{proposition:adjunctions-preserved} to the
pseudofunctor
\[
K = \sB(W,-)\cn \sB \to \IICat.
\]
One can verify that the formulas for the unit and counit given here
are those obtained from \cref{proposition:adjunctions-preserved} in
this case.
The second way to verify that $(f_*, g_*, \eta_*, \epz_*)$
is an adjunction is to directly check the triangle identities \eqref{diagram:triangles}
for an internal adjunction in the $2$-category $\IICat$. We leave
this to the reader in \cref{exercise:other-triangle-postcomp}.
\end{example}
\begin{example}\label{example:adjunctions-op}\index{adjunction!opposite bicategory}\index{opposite!bicategory!adjunction}
If $(f,g,\eta,\epz)$ is an adjunction in $\sB$, then
$(g,f,\eta,\epz)$ is an adjunction in $\sB^\op$. One also has
adjunctions in $\sB^\co$ and $\sB^\coop$. We leave these to the
reader in \cref{exercise:adjunctions-coop}.
Since reversing $1$-cells interchanges precomposition and
postcomposition, the argument in
\cref{example:corepresented-adjunction} applied to $\sB^\op$ shows
that precomposition also defines an adjunction, where precomposition
with $f$ is the \emph{right} adjoint, and precomposition with $g$ is
the \emph{left} adjoint. The unit and counit are given by $\eta$
and $\epz$, respectively, using formulas similar to those of
\cref{example:corepresented-adjunction}, but making use of the right
unitor $r$ instead of the left unitor $\ell$. Note that the left
unitor in $\sB^\op$ corresponds to the right unitor in $\sB$.
\end{example}
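Explicitly, for an object $W$, the unit of the precomposition adjunction has component at a $1$-cell $s \in \sB(X,W)$ given by the composite
\[
s \fto{r_s^\inv} s 1_X \fto{1_s * \eta} s(gf) \fto{a^\inv} (sg)f,
\]
and the counit has component at $t \in \sB(Y,W)$ given by
\[
(tf)g \fto{a} t(fg) \fto{1_t * \epz} t 1_Y \fto{r_t} t.
\]
These are sketches of the formulas alluded to above; checking the triangle identities proceeds as in \cref{example:corepresented-adjunction}.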
If $(f,g,\eta,\epz)$ is an adjunction with $f\cn X \to Y$, then the
represented adjunctions given by pre- and post-composition induce
isomorphisms of $2$-cells; corresponding $2$-cells under these
isomorphisms are known as mates and are defined as follows.
\begin{definition}\label{definition:mates}\index{mate}\index{2-cell!mate}\index{adjunction!mate}\index{bicategory!mate}
Suppose $(f_0, g_0, \eta_0, \epz_0)$ and $(f_1, g_1, \eta_1,
\epz_1)$ are adjunctions in $\B$, with $f_0\cn X_0 \to Y_0$
and $f_1\cn X_1 \to Y_1$. Suppose moreover that $a\cn X_0 \to X_1$
and $b\cn Y_0 \to Y_1$ are $1$-cells in $\B$.
The \emph{mate} of a $2$-cell $\omega\cn
f_1a \to bf_0$ is given by the pasting diagram at left below.
Likewise, the \emph{mate} of a $2$-cell $\nu\cn ag_0 \to g_1b$ is given by
the pasting diagram at right below.
\[
\begin{tikzpicture}[x=20mm,y=20mm,scale=.85]
\draw[0cell]
(0,0) node (x0) {X_0}
(1,-.5) node (x1) {X_1}
(0,-1) node (y0) {Y_0}
(1,-1.5) node (y1) {Y_1}
(-1,0.5) node (y0') {Y_0}
(2,-2) node (x1') {X_1}
(y0') ++(2.6,-.2) node (T) {X_0}
(x1') ++(-2.6,.2) node (B) {Y_1}
;
\draw[1cell]
(x0) edge node (a) {a} (x1)
(y0) edge[swap] node (b) {b} (y1)
(x0) edge[swap] node {f_0} (y0)
(x1) edge node {f_1} (y1)
(y0') edge node {g_0} (x0)
(y0') edge[swap,bend right=20] node[pos=.6] (1y) {1} (y0)
(y1) edge[swap] node {g_1} (x1')
(x1) edge[bend left=20] node[pos=.4] (1x) {1} (x1')
(y0') edge[bend left=20] node {g_0} (T)
(T) edge[bend left=15] node {a} (x1')
(y0') edge[bend right=15, swap] node {b} (B)
(B) edge[bend right=20, swap] node {g_1} (x1')
;
\draw[2cell]
node[between=y0 and x1 at .5, rotate=225, font=\Large] (A) {\Rightarrow}
(A) node[above left] {\omega}
node[between=1y and x0 at .4, rotate=225, font=\Large] (E) {\Rightarrow}
(E) node[above left] {\epz_0}
node[between=y1 and 1x at .6, rotate=225, font=\Large] (H) {\Rightarrow}
(H) node[below right] {\eta_1}
node[between=T and a at .5, rotate=225, font=\large] (U) {\Rightarrow}
(U) node[above left] {\ell^\inv}
node[between=B and b at .5, rotate=225, font=\large] (V) {\Rightarrow}
(V) node[below right] {r}
;
\end{tikzpicture}
\quad\qquad
\begin{tikzpicture}[x=20mm,y=20mm, scale=.85]
\draw[0cell]
(0,0) node (x0) {X_0}
(1,0.5) node (x1) {X_1}
(0,-1) node (y0) {Y_0}
(1,-0.5) node (y1) {Y_1}
(2,1) node (y1') {Y_1}
(-1,-1.5) node (x0') {X_0}
(x0') ++(2.6,.2) node (B) {Y_0}
(y1') ++(-2.6,-.2) node (T) {X_1}
;
\draw[1cell]
(x0) edge node (a) {a} (x1)
(y0) edge[swap] node (b) {b} (y1)
(y0) edge node {g_0} (x0)
(y1) edge[swap] node {g_1} (x1)
(x1) edge node {f_1} (y1')
(y1) edge[swap, bend right=20] node[pos=.4] (1y) {1} (y1')
(x0') edge[swap] node {f_0} (y0)
(x0') edge[bend left=20] node[pos=.6] (1x) {1} (x0)
(x0') edge[bend left=15] node {a} (T)
(T) edge[bend left=20] node {f_1} (y1')
(x0') edge[bend right=20, swap] node {f_0} (B)
(B) edge[bend right=15, swap] node {b} (y1')
;
\draw[2cell]
node[between=y0 and x1 at .5, rotate=-45, font=\Large] (A) {\Rightarrow}
(A) node[above right] {\nu}
node[between=x1 and 1y at .6, rotate=-45, font=\Large] (E) {\Rightarrow}
(E) node[above right] {\epz_1}
node[between=y0 and 1x at .6, rotate=-45, font=\Large] (H) {\Rightarrow}
(H) node[below left] {\eta_0}
node[between=T and a at .5, rotate=-45, font=\large] (U) {\Rightarrow}
(U) node[above right] {r^\inv}
node[between=B and b at .5, rotate=-45, font=\large] (V) {\Rightarrow}
(V) node[below left] {\ell}
;
\end{tikzpicture}
\]
This finishes the definition of mates.
\end{definition}
Note that, by the left and right unity property in
\cref{bicat-left-right-unity}, the unitor $r$ can be replaced by
$1_{g_1} * r_b$ together with a different collection of implicit associators;
similarly for the other unitors.
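Written as a composite of $2$-cells rather than a pasting diagram, and suppressing the associators, the mate of $\omega$ is
\[
a g_0 \fto{\ell^\inv} 1_{X_1}(a g_0) \fto{\eta_1 * 1} (g_1 f_1)(a g_0) \fto{1_{g_1} * \omega * 1_{g_0}} g_1 (b f_0) g_0 \fto{1 * \epz_0} (g_1 b) 1_{Y_0} \fto{r} g_1 b,
\]
and the mate of $\nu$ admits a similar description with $r^\inv$ and $\ell$ in place of $\ell^\inv$ and $r$. We record this only as a sketch; the pasting diagrams above are the official definition.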
\begin{lemma}\label{lemma:mate-pairs}
If $(f_0, g_0, \eta_0, \epz_0)$ and $(f_1, g_1, \eta_1, \epz_1)$ are adjunctions
in $\B$, with $f_0\cn X_0 \to Y_0$ and $f_1\cn X_1 \to
Y_1$, then taking mates establishes a bijection of $2$-cells
\[
\B(X_0,Y_1)(f_1 a, b f_0) \iso \B(Y_0,X_1)(a g_0, g_1 b)
\]
for any $1$-cells $a\cn X_0 \to X_1$ and $b\cn Y_0 \to Y_1$.
\end{lemma}
\begin{proof}
For $\om$ as in \cref{definition:mates}, the mate of the mate of
$\om$ is the composite of the following pasting diagram.
\[
\begin{tikzpicture}[x=20mm,y=20mm,scale=.75]
\draw[0cell]
(0,0) node (x0) {X_0}
(1,-.5) node (x1) {X_1}
(0,-1) node (y0) {Y_0}
(1,-1.5) node (y1) {Y_1}
(-1,0.5) node (y0') {Y_0}
(2,-2) node (x1') {X_1}
(y0') ++(2.6,-.2) node (T) {X_0}
(x1') ++(-2.6,.2) node (B) {Y_1}
;
\draw[1cell]
(x0) edge node (a) {a} (x1)
(y0) edge[swap] node (b) {b} (y1)
(x0) edge[swap] node {f_0} (y0)
(x1) edge node {f_1} (y1)
(y0') edge node {g_0} (x0)
(y0') edge[swap,bend right=20] node[pos=.6] (1y) {1} (y0)
(y1) edge[swap] node {g_1} (x1')
(x1) edge[bend left=20] node[pos=.4] (1x) {1} (x1')
(y0') edge[bend left=20] node (g0) {g_0} (T)
(T) edge[bend left=15] node (a') {a} (x1')
(y0') edge[bend right=15, swap] node (b') {b} (B)
(B) edge[bend right=20, swap] node (g1) {g_1} (x1')
;
\draw[2cell]
node[between=y0 and x1 at .5, rotate=225, font=\Large] (A) {\Rightarrow}
(A) node[above left] {\omega}
node[between=1y and x0 at .5, rotate=225, font=\Large] (E) {\Rightarrow}
(E) node[above left] {\epz_0}
node[between=y1 and 1x at .5, rotate=225, font=\Large] (H) {\Rightarrow}
(H) node[below right] {\eta_1}
node[between=T and a at .5, rotate=225, font=\large] (U) {\Rightarrow}
(U) node[above left] {\ell^\inv}
node[between=B and b at .5, rotate=225, font=\large] (V) {\Rightarrow}
(V) node[below right] {r}
;
\draw[0cell]
(g0) ++(120:1) node (mX0) {X_0}
(g1) ++(-60:1) node (mY1) {Y_1}
(a') ++(30:1.5) node (mX1) {X_1}
(b') ++(210:1.5) node (mY0) {Y_0}
;
\draw[1cell]
(mX0) edge[',bend right=10] node {f_0} (y0')
(mX0) edge[bend left=15] node {1} (T)
(B) edge[',bend right=15] node {1} (mY1)
(x1') edge[bend left=10] node {f_1} (mY1)
(mX0) edge[bend left=15] node {a} (mX1)
(mY0) edge[',bend right=15] node {b} (mY1)
(mX0) edge[',bend right=43] node {f_0} (mY0)
(mX1) edge[bend left=43] node {f_1} (mY1)
;
\draw[2cell]
(mX0) ++(-85:.58) node[rotate=225, 2label={below,\eta_0}] {\Rightarrow}
(mY1) ++(95:.5) node[rotate=225, 2label={below,\epz_1}] {\Rightarrow}
(mY0) ++(20:.8) node[rotate=225, 2label={below,\ell}] {\Rightarrow}
(mX1) ++(200:.8) node[rotate=225, 2label={above,r^\inv}] {\Rightarrow}
;
\end{tikzpicture}
\]
Applying naturality of the unitors together with the left and right unity properties of
\cref{bicat-left-right-unity}, the previous pasting diagram
simplifies to the diagram at left below, and then the diagram at right.
\[
\hspace{-1em}
\begin{tikzpicture}[x=20mm,y=20mm,scale=.75]
\draw[0cell]
(0,0) node (x0) {X_0}
(1.5,-.5) node (x1) {X_1}
(0,-1) node (y0) {Y_0}
(1.5,-1.5) node (y1) {Y_1}
(-1,0.5) node (y0') {Y_0}
(2.5,-2) node (x1') {X_1}
;
\draw[1cell]
(x0) edge[swap] node {f_0} (y0)
(x1) edge node {f_1} (y1)
(y0') edge node {g_0} (x0)
(y0') edge[swap,bend right=20] node[pos=.6] (1y) {1} (y0)
(y1) edge[swap] node {g_1} (x1')
(x1) edge[bend left=20] node[pos=.4] (1x) {1} (x1')
;
\draw[2cell]
node[between=y0 and x1 at .5, rotate=225, font=\Large] (A) {\Rightarrow}
(A) node[above left] {\omega}
node[between=1y and x0 at .5, rotate=225, font=\Large] (E) {\Rightarrow}
(E) node[above left] {\epz_0}
node[between=y1 and 1x at .5, rotate=225, font=\Large] (H) {\Rightarrow}
(H) node[below right] {\eta_1}
;
\draw[0cell]
(x0) ++(90:1.5) node (mX0) {X_0}
(y1) ++(-90:1.5) node (mY1) {Y_1}
;
\draw[1cell]
(mX0) edge[',bend right=10] node {f_0} (y0')
(mX0) edge node {1} (x0)
(mX0) edge[bend left=20] node (a) {a} (x1)
(x0) edge node {a} (x1)
(x1') edge[bend left=10] node {f_1} (mY1)
(y1) edge['] node {1} (mY1)
(mX0) edge[bend left=50] node {a} (x1')
(y0) edge[swap,bend right=20] node (b) {b} (mY1)
(y0) edge[swap] node (b) {b} (y1)
(y0') edge[',bend right=50] node {b} (mY1)
;
\draw[2cell]
(mX0) ++(-110:.7) node[rotate=225, 2label={below,\eta_0}] {\Rightarrow}
(mY1) ++(65:.7) node[rotate=225, 2label={below,\epz_1}] {\Rightarrow}
(y1) ++(225:.65) node[rotate=225, 2label={below,\ell}] {\Rightarrow}
(y0) ++(240:.7) node[rotate=225, 2label={below,r}] {\Rightarrow}
(x0) ++(45:.65) node[rotate=225, 2label={above,r^\inv}] {\Rightarrow}
(x1) ++(60:.7) node[rotate=225, 2label={below,\ell^\inv}] {\Rightarrow}
;
\end{tikzpicture}
\hspace{-.75em}
\begin{tikzpicture}[x=20mm,y=20mm,scale=.75]
\draw[0cell]
(0,0) node (x0) {X_0}
(1.5,-.5) node (x1) {X_1}
(0,-1) node (y0) {Y_0}
(1.5,-1.5) node (y1) {Y_1}
(-1,0.5) node (y0') {Y_0}
(2.5,-2) node (x1') {X_1}
;
\draw[1cell]
(x0) edge[swap] node {f_0} (y0)
(x1) edge node {f_1} (y1)
(y0') edge node {g_0} (x0)
(y0') edge[swap,bend right=20] node[pos=.6] (1y) {1} (y0)
(y1) edge[swap] node {g_1} (x1')
(x1) edge[bend left=20] node[pos=.4] (1x) {1} (x1')
;
\draw[2cell]
node[between=y0 and x1 at .5, rotate=225, font=\Large] (A) {\Rightarrow}
(A) node[above left] {\omega}
node[between=1y and x0 at .5, rotate=225, font=\Large] (E) {\Rightarrow}
(E) node[above left] {\epz_0}
node[between=y1 and 1x at .5, rotate=225, font=\Large] (H) {\Rightarrow}
(H) node[below right] {\eta_1}
;
\draw[0cell]
(x0) ++(90:1.5) node (mX0) {X_0}
(y1) ++(-90:1.5) node (mY1) {Y_1}
;
\draw[1cell]
(mX0) edge[',bend right=10] node {f_0} (y0')
(mX0) edge node {1} (x0)
(mX0) edge[bend left=45] node {f_0} (y0)
(mX0) edge[bend left=20] node (a) {a} (x1)
(x1') edge[bend left=10] node {f_1} (mY1)
(x1) edge[',bend right=45] node {f_1} (mY1)
(y1) edge['] node {1} (mY1)
(mX0) edge[bend left=50] node {a} (x1')
(y0) edge[swap,bend right=20] node (b) {b} (mY1)
(y0') edge[',bend right=50] node {b} (mY1)
;
\draw[2cell]
(mX0) ++(-110:.7) node[rotate=225, 2label={below,\eta_0}] {\Rightarrow}
(mY1) ++(65:.7) node[rotate=225, 2label={below,\epz_1}] {\Rightarrow}
(y1) ++(225:.45) node[rotate=180, 2label={below,\ell}] {\Rightarrow}
(y0) ++(240:.7) node[rotate=225, 2label={below,r}] {\Rightarrow}
(x0) ++(45:.45) node[rotate=180, 2label={below,r^\inv}] {\Rightarrow}
(x1) ++(60:.7) node[rotate=225, 2label={below,\ell^\inv}] {\Rightarrow}
;
\end{tikzpicture}
\]
The diagram at right simplifies to $\om$ by the left triangle
identities \eqref{diagram:triangles} for both $(f_0,g_0)$ and
$(f_1,g_1)$, followed by two instances of the middle unity axiom
\eqref{bicat-unity}.
One can likewise check, for $\nu$ as in \cref{definition:mates},
that the mate of the mate of $\nu$ is equal to $\nu$. We leave this
as \cref{exercise:double-mate}.
\end{proof}
\section{Internal Equivalences}\label{sec:internal-equivalences}
\begin{definition}\label{definition:internal-equivalence}\index{internal!equivalence}\index{adjoint!equivalence!in a bicategory}\index{bicategory!adjoint equivalence}
An adjunction $(f,g,\eta,\epz)$ with $f\cn X \to Y$ and
$g\cn Y \to X$ is called an \emph{internal equivalence} or
\emph{adjoint equivalence} if $\eta$ and $\epz$ are isomorphisms.
We say that $f$ and $g$ are members of an adjoint equivalence in
this case, and we write $X \hty Y$ if such an equivalence exists.
If $f$ is a member of an adjoint equivalence, we often let $f^\bdot$
denote an adjoint.
\end{definition}
\begin{definition}\label{definition:1-cell-isomorphism}
We will say that a pair of $1$-cells $(f,g)$ in a bicategory $\B$ are
mutually inverse \emph{isomorphisms}\index{isomorphism!1-cell}\index{1-cell!isomorphism} if $1_Y = fg$ and $gf = 1_X$.
\end{definition}
The notion of isomorphism for $1$-cells is much stronger than adjoint
equivalence, just as the notion of isomorphism between $1$-categories is
much stronger than the notion of equivalence. In most of what
follows, we will focus on the notion of adjoint equivalence, but for
our discussion of $2$-categorical special cases below, the notion of
isomorphism is useful.
We showed in \cref{proposition:adjunctions-preserved} that a
pseudofunctor of bicategories preserves adjoint pairs. The proof
shows that internal equivalences are preserved as well. We state this
as follows, and leave the proof to \cref{exercise:equivalences-preserved}.
\begin{proposition}\label{proposition:equivalences-preserved}\index{pseudofunctor!preserves equivalences}\index{internal!equivalence!preservation by pseudofunctors}
Suppose $K\cn \A \to \B$ is a pseudofunctor of bicategories. If
$(f,g,\eta,\epz)$ is an internal equivalence in $\A$, then $Kf$ and
$Kg$ are adjoint members of an internal equivalence in $\B$.
\end{proposition}
Recall from \cref{def:equivalence-in-bicategory} that a $1$-cell
$f\cn X \to Y$ is said to be \emph{invertible} or \emph{an
equivalence} if there exists a $1$-cell $g\cn Y \to X$ together with
isomorphisms $gf \iso 1_X$ and $1_Y \iso fg$. Note that there is no
assumption of compatibility between the two isomorphisms.
Clearly each of the $1$-cells in an adjoint equivalence is an
equivalence, and we now show that the converse is true. Here we give
a direct argument; the result can also be proved by applying the
Bicategorical Yoneda Lemma \ref{lemma:yoneda-bicat}, but we leave this to
\cref{exercise:yoneda-adj-equiv}.
\begin{proposition}\label{proposition:equiv-via-isos}\index{equivalence!in a bicategory}
A $1$-cell $f\cn X \to Y$ in $\sB$ is an equivalence if and only if it
is a member of an adjoint equivalence.
\end{proposition}
\begin{proof}
Suppose $f$ is an equivalence. Then there exists $g\cn Y \to X$ and
isomorphisms
\[
\eta\cn 1_X \iso gf \quad \mathrm{ and } \quad \mu\cn fg \iso 1_Y.
\]
We will show how to choose another isomorphism $\epz\cn fg \iso 1_Y$
satisfying the two triangle axioms, so that $(f,g,\eta,\epz)$ is an
adjoint equivalence.
First we define $\ol\epz\cn g(fg) \to g 1_Y$ as the following
composite:
\[
g(fg) \fto{a^\inv} (gf)g \fto{\eta^\inv * 1_g} 1_X\, g \fto{\ell_g}
g \fto{r_g^\inv} g 1_Y.
\]
Now postcomposition with $f$ and $g$ defines functors
\[
\begin{tikzpicture}[x=25mm,y=25mm]
\draw[0cell]
(0,0) node (x) {\sB(W,X)}
(1,0) node (y) {\sB(W,Y)}
;
\draw[1cell]
(x) edge[bend left] node {f_*} (y)
(y) edge[bend left] node {g_*} (x)
;
\end{tikzpicture}
\]
for any object $W$. The isomorphisms $\eta$ and $\mu$ make these
functors equivalences of $1$-categories. Taking $W = Y$ in
particular, $g_*$ induces a bijection on hom-$2$-cells
\[
\sB(Y,Y)(a,b) \fto{1_g*(-)} \sB(Y,X)(ga,gb)
\]
for any $1$-cells $a,b\cn Y \to Y$.
Therefore there is a unique $\epz\cn fg \to 1_Y$ such that $1_g*\epz =
\ol\epz$; moreover, $\epz$ is an isomorphism because $\ol\epz$ is and the
equivalence $g_*$ reflects isomorphisms. Then $\epz$ satisfies the right
triangle identity \eqref{diagram:triangles} by
definition. To verify the left triangle identity \eqref{diagram:triangles}, we must show
that the composite
\[
f1_X \fto{1_f * \eta} f(gf) \fto{a^\inv} (fg)f \fto{\epz * 1_f} 1_Y\,
f \fto{\ell_f} f
\]
is equal to $r_f$. Since $g_*$ is an equivalence of $1$-categories,
it suffices to apply $g_*$ and check that the resulting composite is
equal to $1_g * r_f$. This follows by using naturality of the
associator, the definition of $1_g*\epz = \ol\epz$, the pentagon axiom
\eqref{bicat-pentagon}, and middle four exchange
\eqref{middle-four}; we leave the reader to complete this in
\cref{exercise:equiv-via-isos-diagram}.
\end{proof}
\begin{example}
The (adjoint) equivalences in the $2$-category $\Cat$ are precisely
the (adjoint) equivalences of categories.
\end{example}
\begin{example}
Suppose $R$ is a commutative ring and let $\Mod_R$ be the monoidal
category of $R$-modules. Then the $1$-cells of $\Si \Mod_R$ that are
members of adjoint equivalences are precisely the invertible $R$-modules. The group of
invertible $R$-modules modulo isomorphism is known as the
\emph{Picard group}\index{Picard!group} of $R$.
\end{example}
\begin{example}[Invertible Strong Transformations]\label{example:equiv-in-bicatpsAB}\index{strong transformation!invertible}
Given small bicategories $\B$ and $\C$, the internal equivalences in
$\Bicatps(\B,\C)$ are invertible strong transformations. If $F$ and
$G$ are pseudofunctors from $\B$ to $\C$, and $(\phi,\phi^\bdot,\Th,\Xi)$ is an
adjoint equivalence in $\Bicatps(\B,\C)$, with
\[
\phi\cn F \to G \andspace \phi^\bdot\cn G \to F,
\]
then we make the following observations.
\begin{itemize}
\item For each object $X \in \B$, we have adjoint equivalences
$\phi_X\cn FX \to GX$ and $\phi^\bdot_X\cn GX \to FX$.
\item For each pair of objects $X,Y \in \B$, composition with
$\phi^\bdot_X$ and $\phi_Y$ induces an equivalence of categories
\[
\C(FX,FY) \to \C(GX,GY)
\]
under $\B(X,Y)$. In fact there are two such equivalences:
$(\phi^{\bdot*}_{X})(\phi_{Y*})$ and $(\phi_{Y*})(\phi^{\bdot*}_{X})$. The
associator induces a natural isomorphism between them
(\cref{exercise:rep-corep-assoc}).
\end{itemize}
\end{example}
\begin{definition}\label{definition:biequivalence}\index{biequivalence}\index{bicategory!biequivalence}
For bicategories $\B$ and $\C$, a pseudofunctor $F\cn \B \to \C$ is a \emph{biequivalence} if
there exists a pseudofunctor $G\cn \C \to \B$ together with internal equivalences
\[
\Id_\B \hty GF \quad \mathrm{ and } \quad FG \hty \Id_\C
\]
in $\Bicatps(\B,\B)$ and $\Bicatps(\C,\C)$, respectively.
\end{definition}
\begin{explanation}\label{biequivalence-interpret}
The internal equivalence $FG \hty \Id_\C$ entails strong
transformations
\[
\epz\cn FG \to \Id_\C \andspace \epz^\bdot\cn \Id_\C \to FG
\]
together with invertible modifications
\[
\Ga\cn 1_{\Id_\C} \iso \epz \circ \epz^\bdot \andspace \Ga'\cn \epz^\bdot \circ \epz \iso 1_{FG}.
\]
Likewise, the internal equivalence $\Id_\B \hty GF$ entails strong
transformations
\[
\eta \cn \Id_\B \to GF \andspace \eta^\bdot\cn GF \to \Id_\B
\]
together with invertible modifications
\[
\Th\cn 1_{\Id_\B} \iso \eta^\bdot \circ \eta \andspace \Th'\cn \eta \circ
\eta^\bdot \iso 1_{GF}.
\]
We note some specific consequences of this structure:
\begin{itemize}
\item For each object $X\in\C$, we have $\epz_X\cn FGX \to X$
providing an equivalence in $\C$. Therefore $F$ is surjective on
equivalence-classes of objects.
\item From \cref{example:equiv-in-bicatpsAB}, we observe that both of
the following are equivalences of categories:
\[
\B(A,B) \to \B(GFA,GFB) \andspace \C(X,Y) \to \C(FGX,FGY).\qedhere
\]
\end{itemize}
\end{explanation}
\begin{definition}\label{definition:2-equivalence}\index{2-equivalence}
Suppose that $\B$ and $\C$ are $2$-categories. Then a $2$-functor
$F\cn \B \to \C$ is a \emph{$2$-equivalence} if there is a $2$-functor
$G\cn \C \to \B$ together with $2$-natural isomorphisms
\[
\Id_\B \iso GF \quad \mathrm{ and } \quad FG \iso \Id_\C
\]
in $\iiCat(\B,\B)$ and $\iiCat(\C,\C)$, respectively.
\end{definition}
\begin{remark}
Note that the notion of $2$-equivalence is stricter than the notion of
biequivalence for $2$-categories. If a $2$-functor $F\cn \B \to \C$ is
a biequivalence of $2$-categories, its inverse will generally be a
pseudofunctor but not a $2$-functor.
\end{remark}
Recall from \cref{2cat-cat-enriched-cat} that $\A$ is a locally small
$2$-category if and only if $\A$ is a $\Cat$-category. Likewise,
\cref{ex:2functor} explains that $\Cat$-enriched functors between
$\Cat$-categories are precisely $2$-functors between the corresponding
locally small $2$-categories. And finally, \cref{ex:cat-nt} explains
that $\Cat$-natural transformations between $\Cat$-functors are
precisely $2$-natural transformations between the corresponding
$2$-functors. This forms the basis for the following lemma, whose proof
we give as \cref{exercise:2-equiv-Cat-equiv}.
\begin{lemma}\label{lemma:2-equiv-Cat-equiv}\index{2-equivalence!as $\Cat$-enriched equivalence}
A $2$-equivalence between locally small $2$-categories is precisely the same as a
$\Cat$-enriched equivalence.
\end{lemma}
\begin{lemma}\label{lemma:biequiv-implies-local-equiv}\index{biequivalence!local equivalence}
If $F$ is a biequivalence, then each local functor
\[
\B(A,B) \to \C(FA,FB)
\]
is essentially surjective and fully faithful. That is, $F$ is
essentially full on $1$-cells and fully faithful on $2$-cells.
\end{lemma}
\begin{proof}
Let $G$, together with the transformations and modifications described
in \cref{biequivalence-interpret}, exhibit $F$ as a biequivalence.
First we observe that both $F$ and $G$ are fully faithful on
$2$-cells. Suppose that $f$ and $g$ are $1$-cells $A \to B$ in $\B$.
Then, as discussed in \cref{example:equiv-in-bicatpsAB}, we have the
following pair of equivalences induced by $(\eta,\eta^\bdot)$ and
$(\epz^\bdot,\epz)$.
\[
\begin{tikzpicture}[x=25mm,y=16mm]
\draw[0cell]
(0,0) node (a) {\B(A,B)}
(1,0) node (b) {\C(FA,FB)}
(1,-1) node (c) {\B(G(FA),G(FB))}
;
\draw[1cell]
(a) edge node {F} (b)
(b) edge node {G} (c)
(a) edge[swap] node {\hty} (c)
;
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=35mm,y=16mm]
\draw[0cell]
(0,0) node (a) {\C(X,Y)}
(0,-1) node (b) {\B(GY,GX)}
(1,-1) node (c) {\C(F(GX),F(GY))}
;
\draw[1cell]
(a) edge[swap] node {G} (b)
(b) edge node {F} (c)
(a) edge node {\hty} (c)
;
\end{tikzpicture}
\]
These show that $F$ and $G$ must both be injective on $2$-cells and,
consequently, must also be surjective on $2$-cells. Thus both $F$ and
$G$ are fully faithful. Now for any $1$-cell $h\cn FA \to FB$ in
$\C$, there is some $\ol{h}\cn A \to B$ such that $Gh \iso (\eta_B
\ol{h}) \eta^\bdot_A$. Then the laxity of $\eta$ combined
with the component of $\Th'$ at $A$ gives
\[
Gh \iso (\eta_B \ol{h}) \eta^\bdot_A \iso (G(F\ol{h}) \eta_A) \eta^\bdot_A
\iso G(F\ol{h}) (\eta_A \eta^\bdot_A) \iso G(F\ol{h}).
\]
Now since $G$ is fully faithful, this implies $F\ol{h} \iso h$.
Therefore the local functor $F\cn \B(A,B) \to \C(FA,FB)$ is
essentially surjective and fully faithful.
\end{proof}
\begin{remark}
\cref{lemma:biequiv-implies-local-equiv} implies that a
biequivalence is a local equivalence of categories (see
\cref{def:equivalences}).\dqed
\end{remark}
The next two results make use of mates for dual pairs; see
\cref{definition:mates}.
\begin{lemma}\label{lemma:mate-iso}
If $(f,f^\bdot)$ is an adjoint equivalence, then a $2$-cell
$\theta\cn fs \to t$ is an isomorphism if and only if its mate
$\theta^\dagger$ is an isomorphism.
\end{lemma}
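Although we leave the full proof to the reader, the key point can be sketched as follows. Up to unitors and associators, the mate of $\theta$ is the composite
\[
s \fto{\ell_s^\inv} 1\,s \fto{\eta * 1_s} (f^\bdot f)s \fto{a} f^\bdot(fs) \fto{1_{f^\bdot} * \theta} f^\bdot t.
\]
Every arrow except $1_{f^\bdot} * \theta$ is invertible because $\eta$ is, and whiskering with $1_{f^\bdot}$ reflects isomorphisms since $f^\bdot$ is an equivalence, so post-composition with it is an equivalence of hom-categories as in \cref{example:corepresented-adjunction}. Hence the composite is invertible if and only if $\theta$ is.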
\begin{proposition}\label{proposition:adjoint-equivalence-componentwise}\index{strong transformation!invertible}\index{characterization of!an invertible strong transformation}
Suppose that $F$ and $G$ are pseudofunctors of bicategories $\B \to
\C$ and suppose that $\alpha\cn F \to G$ is a strong transformation. Then $\alpha$ is
invertible if and only if each component $\alpha_X\cn F(X) \to G(X)$ is an
invertible $1$-cell in $\C$.
\end{proposition}
\begin{proof}
One implication is direct and has been discussed in
\cref{example:equiv-in-bicatpsAB}. For the other implication,
suppose that $\alpha$ is a strong transformation and each component
$\alpha_X$ is invertible. By \cref{proposition:equiv-via-isos} we may
choose an adjoint inverse $\alpha^\bdot_X$ for each component.
We will show that these components assemble to give a strong
transformation $\alpha^\bdot\cn G \to F$ together with invertible
modifications $\eta\cn 1_F \iso \alpha^\bdot \alpha$ and $\epz\cn \alpha
\alpha^\bdot \iso 1_G$. We define the $2$-cell aspect of $\alpha^\bdot$ by
taking componentwise mates of the $2$-cells for $\alpha$. The
transformation axioms for $\alpha^\bdot$ follow from those of $\alpha$ by
\cref{lemma:mate-pairs}. Each mate of an isomorphism is again an
isomorphism by \cref{lemma:mate-iso}, and therefore $\alpha^\bdot$ is a
strong transformation. The componentwise units and counits
define the requisite invertible modifications to make $\alpha$ and
$\alpha^\bdot$ invertible strong transformations.
\end{proof}
\section{Duality For Modules Over Rings}\label{sec:dual-pairs}
This section describes several basic examples of duality in algebra as
adjunctions in the bicategory $\Bimod$. The results in this section
are not used elsewhere in this book. To review and fix notation, we
let $R$ and $S$ be rings. Let $\Mod_R$ and $\Mod_S$ denote the
categories of right modules over $R$ and $S$, respectively. Tensor
and Hom with an $(S,R)$-bimodule $M$ induce an adjunction of
$1$-categories
\[
\begin{tikzpicture}[x=40mm,y=20mm]
\draw[0cell]
(0,0) node (x) {\Mod_S}
(1,0) node (y) {\Mod_R.}
;
\draw[1cell]
(x) edge[bend left=20] node {- \otimes_S M} (y)
(y) edge[bend left=20] node {\Hom_R(M,-)} (x)
;
\draw[2cell]
(.5,0) node[rotate=0,font=\Large] {\bot}
;
\end{tikzpicture}
\]
The unit and counit of this adjunction have components
\[
N \to \Hom_R(M,N \otimes_S M) \andspace \Hom_R(M,L) \otimes_S M \to L
\]
for $N \in \Mod_S$ and $L \in \Mod_R$. The former sends an element $n
\in N$ to the $R$-module homomorphism $(m \mapsto n \otimes m)$ for $m
\in M$. The latter is defined on simple tensors by the evaluation map
$(f \otimes m \mapsto f(m))$ for $f \in \Hom_R(M,L)$ and $m \in M$.
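For reference, these are the unit and counit of the familiar natural bijection
\[
\Hom_R(N \otimes_S M, L) \iso \Hom_S\big(N, \Hom_R(M,L)\big),
\]
which sends $\phi$ to the map $(n \mapsto (m \mapsto \phi(n \otimes m)))$ and whose inverse sends $\psi$ to the map determined on simple tensors by $(n \otimes m \mapsto \psi(n)(m))$.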
We will need one more canonical map associated with this adjunction:
If $T$ is a third ring and $L$ is a $(T,R)$-bimodule, then
$\Hom_R(M,L)$ has a left $T$-module structure induced by the left
module structure of $L$. Tensoring the evaluation map by a
right $T$-module $K$, we obtain
\[
K \otimes_T \Hom_R(M,L) \otimes_S M \to K \otimes_T L.
\]
This is a homomorphism of right $R$-modules, and its adjoint
\[
K \otimes_T \Hom_R(M,L) \to \Hom_R(M,K \otimes_T L)
\]
is known as the \emph{coevaluation}. On simple tensors $k \otimes f \in
K \otimes_T \Hom_R(M,L)$, it is defined as the map $(m \mapsto k
\otimes f(m))$ for $m \in M$.
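As a simple sanity check on the formula, take $M = R$, regarded as an
$(R,R)$-bimodule. Evaluation at $1$ gives isomorphisms $\Hom_R(R,L)
\iso L$ and $\Hom_R(R, K \otimes_T L) \iso K \otimes_T L$, and under
these identifications the coevaluation
\[
K \otimes_T \Hom_R(R,L) \to \Hom_R(R, K \otimes_T L)
\]
becomes the identity map of $K \otimes_T L$, since $(m \mapsto k
\otimes f(m))$ evaluates at $1$ to $k \otimes f(1)$.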
We will need a lemma regarding a special case of the coevaluation and
its compatibility with evaluation. This may be verified either by
formal adjoint arguments or by a direct calculation on simple
tensors; we leave it to the reader in \cref{exercise:eval-coeval-assoc}.
\begin{lemma}\label{lemma:eval-coeval-assoc}
Suppose $M$ is a right $R$-module, and let $S = \Hom_R(M,M)$ be the
endomorphism ring. Give $M$ the left $S$-module structure induced
by evaluation. Then the following square commutes.
\[
\begin{tikzpicture}[x=65mm,y=16mm]
\draw[0cell]
(0,0) node (a) {\Hom_R(M,R) \otimes_S M \otimes_R \Hom_R(M,R)}
(0,-1) node (b) {\Hom_R(M,R) \otimes_S \Hom_R(M,M)}
(1,0) node (c) {R \otimes_R \Hom_R(M,R)}
(1,-1) node (d) {\Hom_R(M,R)}
;
\draw[1cell]
(a) edge['] node {1 \otimes \mathrm{coeval}} (b)
(c) edge node {\iso} (d)
(a) edge node {\mathrm{eval} \otimes 1} (c)
(b) edge['] node {\iso} (d)
;
\end{tikzpicture}
\]
\end{lemma}
\begin{lemma}[Dual Basis I]\label{lemma:DBLi}
Suppose $M$ is a right module over a ring $R$.
Then $M$ is projective if and only if there is an indexing set $I$ and families of elements
$\{m_i\}_{i \in I} \subset M$ and $\{f_i\}_{i \in I} \subset
\Hom_R(M,R)$ such that for any $x \in M$ we have $f_i(x)$ nonzero
for only finitely many $i$ and $x = \sum_i m_i\, f_i(x)$. Moreover,
$M$ is finitely generated and projective if and only if the indexing
set $I$ can be made finite.
\end{lemma}
\begin{proof}
Suppose given $\{m_i\}$ and $\{f_i\}$ as in the statement. For each
$i \in I$ let $R\langle e_i \rangle$ be the free right $R$-module
of rank 1 on the generator $e_i$. Let
\[
F = \bigoplus_{i \in I} R\langle e_i \rangle
\]
and define an $R$-linear surjection
\[
g\cn F \to M
\]
by $g(e_i) = m_i$ for each $i \in I$; this $g$ is surjective because
the condition $x = \sum_i m_i f_i(x)$ shows that the $m_i$ generate
$M$. Define
\[
f \cn M \to F
\]
by $f(x) = \sum_{i \in I} e_i f_i(x)$. Then the condition
$x = \sum_{i \in I} m_i f_i(x)$, with only finitely many $f_i(x)$
nonzero, implies that $f$ is a splitting of $g$ and hence $M$ is a
summand of $F$. For the converse, suppose $M$ is a summand of a
free module
\[
F = \bigoplus_{i \in I} R\langle e_i \rangle,
\]
with surjection $g\cn F \to M$ and splitting $f\cn M \to F$. Let
$m_i = g(e_i)$, and let $f_i$ be the composite
\[
M \to F \to R\langle e_i \rangle
\]
of $f$ with the projection to the $i$th component. Then the
splitting $fg = 1_M$ implies that for each $x$ we have
\[
x = \sum_{i \in I} m_i f_i(x)
\]
with only finitely many $f_i(x)$ nonzero.
In both directions of the preceding argument, the indexing set $I$
indexes the rank-one free summands of $F$. Thus we can take $I$ to be
finite if and only if $M$ is a summand of a finitely generated free
module, that is, if and only if $M$ is finitely generated and projective.
\end{proof}
\begin{definition}
The set $\{(m_i,f_i)\}_{i \in I}$ is known as
a \emph{dual basis} even though the $m_i$ are merely a generating
set, not necessarily a basis, for $M$.
\end{definition}
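For example, if $M = \bigoplus_{i \in I} R\langle e_i \rangle$ is
free, then the basis elements $m_i = e_i$ together with the coordinate
projections $f_i\cn M \to R$ form a dual basis, and in this case the
$m_i$ are an honest basis. For a general projective $M$, the pairs
$(m_i,f_i)$ retain the reconstruction property $x = \sum_i m_i\,
f_i(x)$, but the $m_i$ may satisfy nontrivial relations.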
\begin{lemma}[Dual Basis II]\label{lemma:DBLii}
Suppose $M$ is a right module over a ring $R$. The following are
equivalent.
\begin{enumerate}
\item\label{DBLiia} $M$ is finitely generated and projective.
\item\label{DBLiib} The coevaluation map
\[
M \otimes_R \Hom_R(M,R) \xrightarrow{\ \mathrm{coeval}\ } \Hom_R(M,M)
\]
is an isomorphism.
\item\label{DBLiic} The coevaluation map
\[
K \otimes_R \Hom_R(M,R) \xrightarrow{\ \mathrm{coeval}\ } \Hom_R(M,K)
\]
is an isomorphism for any right $R$-module $K$.
\end{enumerate}
\end{lemma}
\begin{proof}
First we note that \eqref{DBLiic} implies
\eqref{DBLiib} with $K = M$.
By \cref{lemma:DBLi}, $M$ is finitely generated and projective if
and only if there is a finitely indexed dual basis $\{(m_i, f_i)\}$. This is
equivalent to the condition that the identity homomorphism $M \to M$
is in the image of the coevaluation map. Therefore \eqref{DBLiib}
implies \eqref{DBLiia}.
Now to see \eqref{DBLiia} implies \eqref{DBLiic}, suppose we have a finite dual
basis $\{(m_i, f_i)\}$. Then for any
homomorphism $g \cn M \to K$ we have
\[
g(x) = g\big( \sum_i m_i f_i(x) \big) = \sum_i g(m_i) f_i(x).
\]
Therefore $g \mapsto \sum_i g(m_i) \otimes f_i$ gives a homomorphism
\[
\de\cn \Hom_R(M,K) \to K \otimes_R \Hom_R(M,R).
\]
Since
\[
g = \mathrm{coeval}(\sum_i g(m_i) \otimes f_i),
\]
the composite
$\mathrm{coeval} \circ \de$ is the identity on $\Hom_R(M,K)$. To
show that the other composite is the identity, we first note that
for any $\phi\cn M \to R$, the previous argument with $K = R$ shows
\begin{equation}\label{eq:phi-coeval}
\phi = \mathrm{coeval}( \sum_i \phi(m_i) \otimes f_i).
\end{equation}
Now for arbitrary $\sum_{j = 1}^N k_j
\otimes \phi_j$ in $K \otimes_R \Hom_R(M,R)$, we have
\[
(\de \circ \mathrm{coeval}) (\sum_j k_j \otimes \phi_j) = \sum_i \big( \sum_j
k_j \phi_j(m_i) \big) \otimes f_i.
\]
Interchanging order of summation and moving $\phi_j(m_i)$ across the
tensor, this is equal to
\[
\sum_j k_j \otimes \big( \sum_i \phi_j(m_i) f_i \big).
\]
Using \eqref{eq:phi-coeval} for each $\phi_j$, this last expression
is equal to $\sum_j k_j \otimes \phi_j$, and thus
$\de \circ \mathrm{coeval}$ is equal to the identity on
$K \otimes_R \Hom_R(M,R)$.
\end{proof}
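To see that finite generation cannot be dropped from
\cref{lemma:DBLii}, suppose $M$ is free of countably infinite rank.
Each element of $M \otimes_R \Hom_R(M,R)$ is a finite sum
$\sum_{i=1}^N m_i \otimes f_i$, and its image under coevaluation is
the homomorphism $x \mapsto \sum_{i=1}^N m_i f_i(x)$, whose image lies
in the submodule generated by $m_1,\ldots,m_N$. Since $M$ is not
finitely generated, the identity $1_M$ is not of this form, and
therefore the coevaluation $M \otimes_R \Hom_R(M,R) \to \Hom_R(M,M)$
is not surjective.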
The second dual basis lemma, \cref{lemma:DBLii}, gives the following
example, generalizing the case of vector spaces outlined in
\cref{motivation:internal-adjunction}.
\begin{example}[Duality for modules]\index{bimodule!duality}\index{dualizable!bimodule}
An $(S,R)$ bimodule $M$ is a left adjoint in the bicategory $\Bimod$
if and only if $M$ is finitely generated and projective over $R$.
In this case, $M^* = \Hom_R(M,R)$ gives its right adjoint, with the
unit and counit given by
\begin{align*}
& S \to \Hom_R(M,M) \fto{\mathrm{coeval}^{-1}} M \otimes_R
\Hom_R(M,R) = \Hom_R(M,R) \circ M\\
& M \circ \Hom_R(M,R) = \Hom_R(M,R) \otimes_S M \fto{\ \ \mathrm{eval}\ \ } R.\dqed
\end{align*}
\end{example}
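In the special case where $S = R = k$ is a field, this example says
that a $k$-vector space $V$ is a left adjoint in $\Bimod$ if and only
if $\dim_k V < \infty$. For finite-dimensional $V$ with basis
$\{e_i\}$ and dual basis $\{e^i\} \subset \Hom_k(V,k)$, the unit sends
$1 \in k$ to $\sum_i e_i \otimes e^i$ and the counit is evaluation,
$e^i \otimes e_j \mapsto e^i(e_j)$, as in
\cref{motivation:internal-adjunction}.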
\section{Monads}\label{sec:monads}
In this section we discuss the theory of internal monads in a
bicategory $\B$. In the case $\B = \Cat$, this recovers the notion of
monad acting on a category discussed in \cref{sec:categories}, beginning with
\cref{def:monad}.
\begin{motivation}
Recall the following are equivalent for a category $\C$ and endofunctor $T\cn \C \to
\C$.
\begin{itemize}
\item $T$ is a monad on $\C$.
\item $T$ is a monoid in the monoidal category $\Cat(\C,\C)$ under
composition.
\item The functor $\boldone \to \Cat(\C,\C)$ that sends the unique object to
$T$ is lax monoidal, with laxity given by the transformation $T^2
\to T$.
\end{itemize}
We will observe that these notions generalize, and remain
equivalent, in a general bicategory.
\end{motivation}
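Before giving the general definition, we note a standard source of
examples that the reader may verify directly: if $(f,g,\eta,\epz)$ is
an internal adjunction in $\B$ with $f\cn C \to D$, then $t = gf$ is a
monad on $C$, with unit $\eta\cn 1_C \to gf$ and multiplication
\[
\mu = 1_g * \epz * 1_f \cn (gf)(gf) \to gf,
\]
suppressing the associativity and unity isomorphisms of $\B$.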
\begin{definition}\label{monad-bicat}\index{bicategory!monad}\index{monad!in a bicategory}
Suppose $\B$ is a bicategory and recall from
\cref{example:terminal-bicategory} that $\boldone$ denotes the
terminal bicategory. A \emph{monad} in $\B$ is a lax
functor from $\boldone$ to $\B$. A
\emph{$1$-cell} between monads is a lax transformation of lax functors. A
\emph{$2$-cell} between monad $1$-cells is a modification of the
corresponding lax transformations.
\end{definition}
\begin{explanation}\label{monad-bicat-interpret}
\
\begin{enumerate}
\item Interpreting \cref{def:lax-functors} for this special case, a lax
functor $S\cn \boldone \to \B$ consists of the following data:
\begin{itemize}
\item an object $C = S\vstar$, where $\vstar$ denotes the unique object of $\boldone$,
\item a $1$-cell $t = S1_\vstar\cn C \to C$,
\item a $2$-cell $\mu = S^2_{1_\vstar}\cn t^2 \to t$, and
\item a $2$-cell $\eta = S^0_\vstar\cn 1_C \to t$.
\end{itemize}
These data make the following diagrams commute.
\begin{equation}
\begin{tikzpicture}[x=20mm,y=16mm,baseline=(X).base]
\draw[0cell]
(0,0) node (t3L) {(t^2)t}
(t3L) ++(50:1) node (t3R) {t(t^2)}
(t3L) ++(-50:1) node (t2L) {t^2}
(t3L) ++(2.5,0) node (tR) {t}
(tR) ++(130:1) node (t2R) {t^2}
(tR) ++(-130:1) node (tL) {t}
;
\draw[1cell]
(t3L) edge node (X) {a} (t3R)
(t3R) edge node {1_t * \mu} (t2R)
(t2R) edge node {\mu} (tR)
(t3L) edge[swap] node {\mu * 1_t} (t2L)
(t2L) edge[swap] node {\mu} (tL)
(tL) edge[swap] node {1_t} (tR)
;
\end{tikzpicture}
\end{equation}
\begin{equation}
\begin{tikzpicture}[x=23mm,y=15mm,baseline={(0,1).base}]
\draw[0cell]
(0,0) node (a) {t\, 1_{C}}
(.25,1) node (b) {t^2}
(1.25,1) node (c) {t}
(1.5,0) node(d) {t}
;
\draw[1cell]
(a) edge node[pos=.4] {1*\eta} (b)
(b) edge node {\mu} (c)
(c) edge node[pos=.6] {1_t} (d)
(a) edge node {r} (d)
;
\end{tikzpicture}
\qquad\qquad
\begin{tikzpicture}[x=23mm,y=15mm, baseline={(0,1).base}]
\draw[0cell]
(0,0) node (a) {1_{C}\, t}
(.25,1) node (b) {t^2}
(1.25,1) node (c) {t}
(1.5,0) node(d) {t}
;
\draw[1cell]
(a) edge node[pos=.4] {\eta*1} (b)
(b) edge node {\mu} (c)
(c) edge node[pos=.6] {1_t} (d)
(a) edge node {\ell} (d)
;
\end{tikzpicture}
\end{equation}
With these data, we say that $(t,\mu,\eta)$ is a monad \emph{on} $C$
or \emph{acting on} $C$. Note that these data are equivalent to the
statement that $t$ is a monoid in the monoidal category of
endomorphisms $\B(C,C)$.
\item Interpreting \cref{definition:lax-transformation},
suppose $S_0$ and $S_1$ are monads in $\B$ acting on $C_0$ and
$C_1$, respectively. Let $(t_i, \mu_i, \eta_i)$ denote the data of
$S_i$ for $i = 0, 1$ as above. A lax transformation $\alpha\cn S_0 \to S_1$
consists of the following data:
\begin{itemize}
\item a $1$-cell $m = \alpha_\vstar\cn C_0 \to C_1$;
\item a $2$-cell $\phi\cn t_1 m \to m t_0$ shown below.
\[
\begin{tikzpicture}[x=16mm,y=14mm]
\draw[0cell]
(0,0) node (a) {C_0}
(1,0) node (b) {C_0}
(0,-1) node (c) {C_1}
(1,-1) node (d) {C_1}
;
\draw[1cell]
(a) edge node {t_0} (b)
(b) edge node {m} (d)
(a) edge[swap] node {m} (c)
(c) edge[swap] node {t_1} (d)
;
\draw[2cell]
(.5,-.55) node[font=\Large,rotate=45] {\Rightarrow}
node[above left] {\phi}
;
\end{tikzpicture}
\]
\end{itemize}
Interpreting \eqref{unity-transformation} and
\eqref{2-cell-transformation}, these data make the following
diagrams commute.
\begin{equation}
\begin{tikzpicture}[x=27mm,y=15mm,baseline=(b).base]
\draw[0cell]
(0,0) node (a) {(t_1t_1)m}
(0,1) node (b) {t_1(t_1m)}
(0,2) node (c) {t_1(mt_0)}
(1,2) node (d) {(t_1m)t_0}
(2,2) node (e) {(mt_0)t_0}
(2,1) node (f) {m(t_0t_0)}
(2,0) node (g) {mt_0}
(1,0) node (h) {t_1m}
;
\draw[1cell]
(a) edge node {a} (b)
(b) edge node {1 * \phi} (c)
(c) edge node {a^\inv} (d)
(d) edge node {\phi * 1} (e)
(e) edge node {a} (f)
(f) edge node {1 * \mu_0} (g)
(a) edge node {\mu_1*1} (h)
(h) edge node {\phi} (g)
;
\draw[2cell]
;
\end{tikzpicture}
\end{equation}
\begin{equation}
\begin{tikzpicture}[x=22mm,y=15mm,baseline=(X).base]
\draw[0cell]
(0,0) node (1m) {1_{C_1}\, m}
(1,0) node (m) {m}
(2,0) node (m1) {m\, 1_{C_0}}
(0,-1) node (t1m) {t_1\, m}
(2,-1) node (mt0) {m\, t_0}
;
\draw[1cell]
(1m) edge node {\ell} (m)
(m) edge node {r^\inv} (m1)
(1m) edge[swap] node (X) {\eta_1 * 1} (t1m)
(m1) edge node {1 * \eta_0} (mt0)
(t1m) edge node {\phi} (mt0)
;
\draw[2cell]
;
\end{tikzpicture}
\end{equation}
\item If $(C,t,\mu,\eta)$ is a monad in $\B$, and $W$ is another
object of $\B$, then the represented pseudofunctor
\[
t_*\cn \B(W,C) \to \B(W,C)
\]
defines a represented monad in $\Cat$ acting on the category $\B(W,C)$. In
\cref{exercise:represented-monad} we ask the reader to verify that
a $1$-cell of monads
\[
(C, t, \mu, \eta) \to (C', t', \mu', \eta')
\]
induces a functor from the category of algebras over $t_*$ to the
category of algebras over $t'_*$ and a $2$-cell between
monad $1$-cells induces a natural transformation between these
functors of algebras.\dqed
\end{enumerate}
\end{explanation}
\begin{example}[Monoids]\index{monoid!as a monad}
Suppose $\M$ is a monoidal category.
Combining \cref{ex:monfunctor-laxfunctor,ex:mnt-icon}, we
have an isomorphism of categories
\[
\Bicatic(\boldone,\Si \M) \iso \mathrm{Mon}(\M)
\]
between the $1$-categories of, on the one hand, lax functors
$\boldone \to \Si \M$ with icons between them and, on the other
hand, monoids and monoid morphisms on $\M$. Thus (via
\cref{icon-is-icon}) a morphism of monoids $X \to Y$ in $\M$ gives
an example of a $1$-cell of monads $Y \to X$ (note the reversal of
direction) over the unique object of $\Si \M$.
\end{example}
\begin{example}[Internal Categories]\label{example:internal-cat}\index{category!internal}\index{internal category}
Suppose $\C$ is a category in which all pullbacks exist, and recall
the bicategory $\Span$ discussed in \cref{ex:spans}. Objects are
those of $\C$, $1$-cells are spans in $\C$, and $2$-cells are given by
span morphisms shown in \eqref{span-2cell}.
A monad in\index{monad!in $\Span$}\index{span!monad in -} $\Span$ is called an \emph{internal category} in $\C$, and
consists of the following:
\begin{itemize}
\item an object $C_0$;
\item a span $(C_1,t,s)$ as below;
\[
\begin{tikzpicture}[x=20mm,y=16mm]
\draw[0cell]
(0,0) node (L) {C_0}
(1,.5) node (M) {C_1}
(2,0) node (R) {C_0}
;
\draw[1cell]
(M) edge[swap] node {t} (L)
(M) edge node {s} (R)
;
\end{tikzpicture}
\]
\item a morphism $c\cn C_1 \times_{C_0} C_1 \to C_1$ that is a map
of spans as below;
\[
\begin{tikzpicture}[x=20mm,y=16mm]
\draw[0cell]
(0,0) node (L) {C_0}
(2,0) node (R) {C_0}
(1,.5) node (T) {C_1 \times_{C_0} C_1}
(1,-.5) node (B) {C_1}
;
\draw[1cell]
(T) edge[swap] node {t} (L)
(T) edge node {s} (R)
(B) edge node {t} (L)
(B) edge[swap] node {s} (R)
(T) edge node {c} (B)
;
\end{tikzpicture}
\]
\item a morphism $i\cn C_0 \to C_1$ that is a map of spans as below.
\[
\begin{tikzpicture}[x=20mm,y=16mm]
\draw[0cell]
(0,0) node (L) {C_0}
(2,0) node (R) {C_0}
(1,.5) node (T) {C_0}
(1,-.5) node (B) {C_1}
;
\draw[1cell]
(T) edge[swap] node {1} (L)
(T) edge node {1} (R)
(B) edge node {t} (L)
(B) edge[swap] node {s} (R)
(T) edge node {i} (B)
;
\end{tikzpicture}
\]
\end{itemize}
The objects $C_0$ and $C_1$ are called the \emph{objects} and
\emph{arrows} of the internal category, and the morphisms $s$, $t$,
$c$, and $i$ are known respectively as \emph{source}
(\emph{domain}), \emph{target} (\emph{codomain}),
\emph{composition}, and \emph{identity} (\emph{unit}).
The monad axioms are equivalent to associativity and unity
conditions generalizing those of \cref{def:categories}.
Notable special cases of this example include the following.
\begin{itemize}
\item $\C = \Set$: internal categories in $\Set$ are small
categories.
\item $\C = \Cat$: internal categories in the $1$-category $\Cat$ are\index{strict double category}\index{category!strict double -}
\emph{strict double categories}. See \cref{sec:double-cat} for
further discussion of double categories.\dqed
\end{itemize}
\end{example}
\subsection*{Comonads}
\begin{definition}\index{comonad!in a bicategory}\index{bicategory!comonad}
A \emph{comonad} in $\B$ is a monad in $\B^\co$; equivalently, it is
a colax functor from the terminal bicategory to $\B$. A \emph{$1$-cell} between
comonads is an oplax transformation of colax functors
$\boldone \to \B$ or, equivalently, a lax transformation of lax
functors $\boldone \to \B^\coop$. A \emph{$2$-cell} between
comonad $1$-cells is a modification of oplax transformations.
\end{definition}
\begin{explanation}
\
\begin{enumerate}
\item A comonad in $\B$ consists of $(C,s,\de,\epz)$ as follows:
\begin{itemize}
\item an object $C = S\vstar$, where $\vstar$ denotes the unique
object of $\boldone$;
\item a $1$-cell $s = S1_\vstar\cn C \to C$;
\item a $2$-cell $\de = S^2_{1_\vstar}\cn s \to s^2$; and
\item a $2$-cell $\epz = S^0_\vstar\cn s \to 1_C$.
\end{itemize}
\item A $1$-cell of comonads $(C_0, s_0, \de_0, \epz_0) \to (C_1, s_1,\de_1,
\epz_1)$ consists of
\begin{itemize}
\item a $1$-cell $n\cn C_0 \to C_1$;
\item a $2$-cell $\psi\cn ns_0 \to s_1n$ shown below.
\[
\begin{tikzpicture}[x=16mm,y=14mm]
\draw[0cell]
(0,0) node (a) {C_0}
(1,0) node (b) {C_0}
(0,-1) node (c) {C_1}
(1,-1) node (d) {C_1}
;
\draw[1cell]
(a) edge node {s_0} (b)
(b) edge node {n} (d)
(a) edge[swap] node {n} (c)
(c) edge[swap] node {s_1} (d)
;
\draw[2cell]
(.5,-.55) node[font=\Large,rotate=-135] {\Rightarrow}
node[above left] {\psi}
;
\end{tikzpicture}
\]
\end{itemize}
\item Just as for monads, the definition of comonad $1$-cell has been
chosen so that a $1$-cell between comonads
\[
(C,s,\de,\epz) \to (C',s',\de',\epz')
\]
induces a functor from the category of $s_*$-coalgebras in
$\B(W,C)$ to the category of $s'_*$-coalgebras in $\B(W,C')$ for
any object $W$ in $\B$. Likewise, a $2$-cell between comonad
$1$-cells induces a natural transformation between functors of
coalgebras. This is \cref{exercise:represented-comonad}.\dqed
\end{enumerate}
\end{explanation}
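As with monads, an internal adjunction $(f,g,\eta,\epz)$ in $\B$ with
$f\cn C \to D$ provides a standard example, which the reader may
verify: $s = fg$ is a comonad on $D$, with counit $\epz\cn fg \to 1_D$
and comultiplication
\[
\de = 1_f * \eta * 1_g \cn fg \to (fg)(fg),
\]
suppressing the associativity and unity isomorphisms of $\B$.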
\section{\texorpdfstring{$2$}{2}-Monads}\label{sec:2-monads}
The special case of monads on $2$-categories is important enough to
discuss explicitly. One approach is to apply the theory above to the
bicategory $\iiCat$ discussed in \cref{exer:2cat-of-2cat}. However,
the objects of $\iiCat$ are \emph{small} $2$-categories, and this leaves
out key examples. In particular, the $2$-category $\Cat$, formed by
small categories, functors, and natural transformations
(cf. \cref{ex:2cat-of-cat}), is a locally small $2$-category but not a
small $2$-category. Similarly, for a category $\C$ we have a $2$-category
$\Cat/\C$ of categories, functors, and natural transformations over
$\C$ (cf. \cref{exer:cat-over}).
Therefore we consider $\Cat$-monads on a $\Cat$-enriched category
$\A$. This notion is defined in \cref{def:enriched-monad}.
To connect with the theory of internal monads discussed above, we have
the following result, whose proof we give as
\cref{exercise:cat-monad-is-internal-monad}.
\begin{proposition}\label{cat-monad-is-internal-monad}
Suppose $\A$ is a small $2$-category, and let $\A'$ denote $\A$
regarded as a $\Cat$-enriched category. Then a monad on $\A$ in
$\iiCat$ is precisely a $\Cat$-monad on $\A'$.
\end{proposition}
And now we come to the definition of $2$-monad.
\begin{definition}\label{definition:2-monad}\index{2-monad}\index{monad!2-}
A \emph{$2$-monad} is a $\Cat$-enriched monad on a $\Cat$-enriched
category. Typically we denote a $2$-monad by $(\A,T,\mu,\eta)$, where
$\A$ is a $\Cat$-category, $T$ is a $\Cat$-functor, and $\mu$ and $\eta$ are
$\Cat$-natural transformations. Equivalently, regarding $\A$ as a
locally small $2$-category, we regard $T$ as a $2$-functor, and regard $\mu$ and
$\eta$ as $2$-natural transformations.
\end{definition}
\begin{convention}
In the remainder of this section we will let $\A$ be a
$\Cat$-enriched category, regarded as a $2$-category. Thus we will
refer to $1$-cells and $2$-cells of $\A$ under this correspondence. We
will use the terms ``$2$-functor'' and ``$2$-natural transformation'' without
further mention of the equivalent $\Cat$-enriched terms.
\end{convention}
\begin{definition}\label{definition:lax-algebra}\index{lax algebra}
Suppose $(\A,T,\mu,\eta)$ is a $2$-monad. A \emph{lax $T$-algebra}
is a quadruple $(X,\theta,\zeta,\omega)$ consisting of
\begin{itemize}
\item an object $X$ in $\A$,
\item a morphism $\theta\cn TX \to X$ in $\A$ called the\index{structure morphism} \emph{structure morphism},
\item a $2$-cell $\zeta\cn 1_X \to \theta \circ \eta_X$ in $\A$ called the\index{lax unity constraint}
\emph{lax unity constraint}, and
\item a $2$-cell
$\omega\cn \theta \circ T\theta \to \theta \circ \mu_X$ called the\index{lax associativity constraint}
\emph{lax associativity constraint}.
\end{itemize}
The $2$-cells $\zeta$ and $\omega$ are shown diagrammatically below.
\begin{equation}\label{zeta-theta-def}
\begin{tikzpicture}[x=20mm,y=20mm,baseline={(0,-.75).base}]
\draw[0cell]
(0,0) node (x) {X}
(0,-1) node (tx) {TX}
(1,-1) node (x') {X}
;
\draw[1cell]
(x) edge[swap] node {\eta_X} (tx)
(x) edge node (id) {1_X} (x')
(tx) edge[swap] node {\theta} (x')
;
\draw[2cell]
node[between=id and tx at .5, rotate=225, 2label={below,\zeta}]{\Rightarrow}
;
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=20mm,y=20mm,baseline={(0,-.75).base}]
\draw[0cell]
(0,0) node (t2x) {T^2 X}
(1,0) node (tx) {TX}
(0,-1) node (tx') {TX}
(1,-1) node (x) {X}
;
\draw[1cell]
(t2x) edge[swap] node {T\theta} (tx')
(t2x) edge node {\mu_X} (tx)
(tx) edge node (id) {\theta} (x)
(tx') edge[swap] node {\theta} (x)
;
\draw[2cell]
node[between=tx' and tx at .5, rotate=45, 2label={above,\omega}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
The above data is required to satisfy the following three pasting
diagram equalities relating $\omega$ and $\zeta$. The unlabeled
regions in the diagrams are commutative by
\cref{def:enriched-monad}.
\begin{description}
\item[Lax Unity]\index{unity!lax algebra}
\begin{equation}\label{lax-algebra-units}
\begin{tikzpicture}[x=32mm,y=30mm,baseline=(t2x.base)]
\draw[0cell]
(0,0) node (tx) {TX}
(1,0) node (tx') {TX}
(0,-1) node (x) {X}
(1,-1) node (x') {X}
(.5,-.333) node (t2x) {T^2 X}
(.5,-.666) node (tx'') {TX}
;
\draw[1cell]
(tx) edge node {1_{TX}} (tx')
(tx) edge['] node {\theta} (x)
(tx') edge node (m) {\theta} (x')
(x) edge['] node (1x) {1_X} (x')
(tx) edge['] node {\eta_{TX}} (t2x)
(t2x) edge['] node {\mu_X} (tx')
(t2x) edge node (Tm) {T\theta} (tx'')
(x) edge node {\eta_X} (tx'')
(tx'') edge node {\theta} (x')
;
\draw[2cell]
node[between=Tm and m at .5, rotate=45, 2label={below,\omega}] {\Rightarrow}
node[between=1x and tx'' at .5, shift={(-.025,0)}, rotate=90, 2label={below,\zeta}] {\Rightarrow}
;
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=20mm,y=20mm,rotate=-90,baseline=(m.base)]
\draw[0cell]
(0,0) node (tx) {TX}
(1,0) node (x) {X}
;
\draw[1cell]
(tx) edge[bend left] node[pos=.45] (m) {\theta} (x)
(tx) edge[swap,bend right] node[pos=.45] (m') {\theta} (x)
;
\draw[2cell]
node[between=m and m' at .5, shift={(-.025,0)}, rotate=0, 2label={below,1_\theta}] {\Rightarrow}
(m) ++ (60:.35) node[rotate=00, font=\LARGE] {=}
(m') ++ (-60:.35) node[rotate=00, font=\LARGE] {=}
;
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=32mm,y=30mm,baseline=(t2x.base)]
\draw[0cell]
(0,0) node (tx) {TX}
(0,-1) node (tx') {TX}
(1,0) node (tx'') {TX}
(.5,-.333) node (t2x) {T^2 X}
(1,-1) node (x) {X}
;
\draw[1cell]
(tx) edge['] node (1) {1_{TX}} (tx')
(tx) edge['] node[inner sep=0pt] {T\eta_{X}} (t2x)
(tx) edge node {1_{TX}} (tx'')
(t2x) edge node {T\theta} (tx')
(t2x) edge['] node {\mu_X} (tx'')
(tx') edge['] node (m) {\theta} (x)
(tx'') edge node {\theta} (x)
;
\draw[2cell]
node[between=t2x and x at .5, rotate=45, 2label={below,\omega}] {\Rightarrow}
node[between=1 and t2x at .5, shift={(0,-.025)}, rotate=0, 2label={below,T\zeta}, inner sep=.5mm] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
\item[Lax Associativity]\index{associativity!lax algebra}
\begin{equation}\label{lax-algebra-hexagon}
\begin{tikzpicture}[x=.7mm,y=.7mm]
\def\boundary{
\draw[0cell]
(0,0) node (LL) {T^3X}
(20,20) node (TL) {T^2X}
(20,-20) node (BL) {T^2X}
(50,20) node (TR) {TX}
(50,-20) node (BR) {TX}
(70,0) node (RR) {X}
;
\draw[1cell]
(LL) edge node {\mu_{TX}} (TL)
(TL) edge node {\mu_X} (TR)
(TR) edge node {\theta} (RR)
(LL) edge[swap] node {T^2\theta} (BL)
(BL) edge[swap] node {T\theta} (BR)
(BR) edge[swap] node {\theta} (RR)
;
}
\begin{scope}
\boundary
\draw[0cell]
(30,0) node (CC) {T^2X}
;
\draw[1cell]
(LL) edge node {T\mu_X} (CC)
(CC) edge node {\mu_X} (TR)
(CC) edge node {T\theta} (BR)
;
\draw[2cell]
node[between=BL and CC at .5, rotate=60, 2label={below,T\omega}] {\Rightarrow}
node[between=CC and RR at .55, rotate=90, 2label={below,\omega}] {\Rightarrow}
;
\draw (82.5,2.5) node[font=\LARGE] {=};
\end{scope}
\begin{scope}[shift={(95,0)}]
\boundary
\draw[0cell]
(40,0) node (CC) {TX}
;
\draw[1cell]
(TL) edge[swap] node {T\theta} (CC)
(BL) edge node {\mu_X} (CC)
(CC) edge node {\theta} (RR)
;
\draw[2cell]
node[between=BR and CC at .5, rotate=120, 2label={below,\omega}] {\Rightarrow}
node[between=CC and TR at .5, rotate=60, 2label={above,\omega}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\end{description}
Note we have used the strict functoriality of $T$ where $T\zeta$
and $T\omega$ appear in these diagrams. This finishes the definition of lax
algebra. Moreover:
\begin{itemize}
\item A \emph{pseudo $T$-algebra}\index{pseudo!algebra}\index{algebra!pseudo} is a lax $T$-algebra in which
$\zeta$ and $\omega$ are invertible $2$-cells.
\item A \emph{normalized}\index{normalized lax algebra} lax $T$-algebra is a lax $T$-algebra in
which $\zeta$ is the identity $2$-cell.
\item A \emph{strict $T$-algebra}\index{strict algebra} is a lax $T$-algebra in which both
$\zeta$ and $\omega$ are identities.\defmark
\end{itemize}
\end{definition}
\begin{remark}
In \cref{exercise:Cat-enriched-algebra} we ask the reader to verify
that the definition of strict $T$-algebra is precisely the definition of
$\Cat$-enriched algebra from \cref{def:enriched-monad-algebra}.
\end{remark}
Now we describe morphisms of lax algebras.
\begin{definition}\label{definition:lax-algebra-morphism}\index{morphism!lax algebras}\index{lax algebra!lax morphism}
Suppose $(\A,T,\mu,\eta)$ is a $2$-monad with lax $T$-algebras
$(X,\theta,\zeta,\omega)$ and $(X',\theta',\zeta',\omega')$. A
\emph{lax morphism} $X \to X'$ is a pair $(f,\phi)$ consisting
of
\begin{itemize}
\item a $1$-cell $f\cn X \to X'$ in $\A$ called the \emph{structure $1$-cell} and
\item a $2$-cell $\phi\cn \theta' \circ Tf \to f \circ \theta$ in
$\A$, called the \emph{structure $2$-cell}, shown below.
\[
\begin{tikzpicture}[x=20mm,y=20mm]
\draw[0cell]
(0,0) node (tx) {TX}
(1,0) node (tx') {TX'}
(0,-1) node (x) {X}
(1,-1) node (x') {X'}
;
\draw[1cell]
(tx) edge node {Tf} (tx')
(x) edge['] node {f} (x')
(tx) edge['] node {\theta} (x)
(tx') edge node {\theta'} (x')
;
\draw[2cell]
node[between=x and tx' at {.5}, rotate=225, 2label={below,\phi}] {\Rightarrow}
;
\end{tikzpicture}
\]
\end{itemize}
These are required to satisfy the following two pasting diagram
equalities; the unlabeled regions are commutative.
\begin{description}
\item[Lax Unity]\index{unity!lax morphism}
\begin{equation}\label{lax-morphism-unit-axiom}
\begin{tikzpicture}[x=20mm,y=20mm,baseline=(tx.base)]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (tx) {TX}
(0,-1) node (x) {X}
(1,-1) node (x') {X'}
(tx) ++(45:1) node (a) {X}
(a) ++(1,0) node (b) {X'}
;
\draw[1cell]
(x) edge['] node {f} (x')
(tx) edge['] node {\theta} (x)
(a) edge['] node (etax) {\eta_X} (tx)
(a) edge node {f} (b)
(b) edge[bend left] node (1x') {1_{X'}} (x')
;
\draw[2cell]
;
}
\draw[font=\Large] (2.25,0) node {=};
\begin{scope}[shift={(0,0)}]\
\boundary
\draw[0cell]
(1,0) node (tx') {TX'}
;
\draw[1cell]
(tx) edge node {Tf} (tx')
(tx') edge node {\theta'} (x')
(b) edge['] node {\eta_{X'}} (tx')
;
\draw[2cell]
node[between=x and tx' at {.5}, rotate=225, 2label={below,\phi}] {\Rightarrow}
node[between=1x' and tx' at {.5}, rotate=180, 2label={below,\zeta'}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(3,0)}]
\boundary
\draw[0cell]
;
\draw[1cell]
(a) edge[bend left] node (1x) {1_{X}} (x)
;
\draw[2cell]
node[between=1x and tx at {.5}, shift={(-.5mm,-0.8mm)}, rotate=180, 2label={below,\zeta}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\item[Lax Structure Preservation]
\begin{equation}\label{lax-morphism-structure-axiom}
\begin{tikzpicture}[x=.7mm,y=.7mm]
\def\boundary{
\draw[0cell]
(0,0) node (LL) {T^2X}
(20,20) node (TL) {TX}
(20,-20) node (BL) {T^2X'}
(50,20) node (TR) {X}
(50,-20) node (BR) {TX'}
(70,0) node (RR) {X'}
;
\draw[1cell]
(LL) edge node {\mu_{X}} (TL)
(TL) edge node {\theta} (TR)
(TR) edge node {f} (RR)
(LL) edge[swap] node {T^2 f} (BL)
(BL) edge[swap] node {T\theta'} (BR)
(BR) edge[swap] node {\theta'} (RR)
;
}
\begin{scope}
\boundary
\draw[0cell]
(30,0) node (CC) {TX}
;
\draw[1cell]
(LL) edge node {T\theta} (CC)
(CC) edge node {\theta} (TR)
(CC) edge node {Tf} (BR)
;
\draw[2cell]
node[between=BL and CC at .5, rotate=60, 2label={below,T\phi}] {\Rightarrow}
node[between=TL and CC at .45, rotate=120, 2label={above,\omega}] {\Rightarrow}
node[between=CC and RR at .55, rotate=90, 2label={below,\phi}] {\Rightarrow}
;
\draw (82.5,2.5) node[font=\LARGE] {=};
\end{scope}
\begin{scope}[shift={(95,0)}]
\boundary
\draw[0cell]
(40,0) node (CC) {TX'}
;
\draw[1cell]
(TL) edge[swap] node {Tf} (CC)
(BL) edge node {\mu_{X'}} (CC)
(CC) edge node {\theta'} (RR)
;
\draw[2cell]
node[between=BR and CC at .5, rotate=120, 2label={below,\omega'}] {\Rightarrow}
node[between=CC and TR at .5, rotate=60, 2label={above,\phi}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\end{description}
Note that we have used strict functoriality of $T$ where $T\phi$
appears above. This finishes the definition of lax morphism.
Moreover:
\begin{itemize}
\item A \emph{strong}\index{morphism!strong} morphism
is a lax morphism in which $\phi$ is an isomorphism.
\item A \emph{strict}\index{morphism!strict} morphism is a lax morphism in which
$\phi$ is an identity.\defmark
\end{itemize}
\end{definition}
\begin{definition}\label{definition:algebra-2-cell}\index{2-cell!lax morphisms}
Suppose $(\A,T,\mu,\eta)$ is a $2$-monad with lax $T$-algebras
$(X,\theta,\zeta,\omega)$ and $(X',\theta',\zeta',\omega')$.
Suppose, moreover, that $(f_1,\phi_1)$ and $(f_2,\phi_2)$ are lax
morphisms $X \to X'$. A \emph{$2$-cell} $f_1 \to f_2$ consists of
a $2$-cell in $\A$, $35\cn f \to f'$ such that
\[
35\, \phi_1 = \phi_2 T35,
\]
as shown in the following equality of of pasting diagrams.
\[
\begin{tikzpicture}[x=20mm,y=20mm]
\newcommand\boundary{
\draw[0cell]
(0,0) node (tx) {TX}
(1,0) node (tx') {TX'}
(0,-1) node (x) {X}
(1,-1) node (x') {X'}
;
\draw[1cell]
(tx) edge[bend left] node (Tf1) {Tf_1} (tx')
(x) edge[',bend right] node (f2) {f_2} (x')
(tx) edge['] node {\theta} (x)
(tx') edge node {\theta'} (x')
;
}
\draw[font=\Large] (1.75,-.4) node {=};
\begin{scope}[shift={(0,0)}]
\boundary
\draw[1cell]
(x) edge[bend left] node {f_1} (x')
;
\draw[2cell]
node[between=Tf1 and f2 at {.4}, rotate=225, 2label={below,\phi_1}] {\Rightarrow}
node[between=x and x' at {.46}, rotate=-90, 2label={above,\,\alpha}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(2.5,0)}]
\boundary
\draw[1cell]
(tx) edge[',bend right] node {Tf_2} (tx')
;
\draw[2cell]
node[between=Tf1 and f2 at {.66}, rotate=225, 2label={below,\phi_2}] {\Rightarrow}
node[between=tx and tx' at {.44}, rotate=-90, 2label={above,T\alpha}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\defmark
\]
\end{definition}
\begin{definition}\label{definition:t-alg-2-cats}
We have $2$-categories and inclusions
\[
\begin{tikzpicture}[x=26mm,y=20mm]
\draw[0cell]
(0,0) node (a) {\sAlg{T}}
++(1,0) node (b) {\Alg{T}}
++(1,0) node (c) {\PsAlg{T}}
++(1,0) node (d) {\LaxAlg{T}}
;
\path[1cell]
(a) edge[right hook->] node {} (b)
(b) edge[right hook->] node {} (c)
(c) edge[right hook->] node {} (d)
;
\draw[2cell]
;
\end{tikzpicture}
\]
defined as follows.
\begin{itemize}
\item $\sAlg{T}$ consists of strict $T$-algebras, strict
morphisms, and $2$-cells;
\item $\Alg{T}$ consists of strict $T$-algebras, strong
morphisms, and $2$-cells;
\item $\PsAlg{T}$ consists of pseudo $T$-algebras, strong
morphisms, and $2$-cells;
\item $\LaxAlg{T}$ consists of lax $T$-algebras, lax morphisms,
and $2$-cells.\defmark
\end{itemize}
\end{definition}
The verification that these form $2$-categories is left as
\cref{exercise:t-alg-checks}.
\begin{remark}
In \Cref{ch:fibration} we define a $2$-monad $\funnyf$
(cf. \cref{def:iimonad-on-catoverc}) and show that cloven fibrations
correspond to pseudo $\funnyf$-algebras while split fibrations
correspond to strict $\funnyf$-algebras.
\end{remark}
\section{Exercises and Notes}\label{sec:adj-exercises}
\begin{exercise}\label{exercise:other-triangle-psfun}
Verify the right triangle identity
in the proof of \cref{proposition:adjunctions-preserved}.
\end{exercise}
\begin{exercise}\label{exercise:adjunctions-coop}
\renewcommand{\labelenumi}{(\alph{enumi})}
Let $(f,g,\eta,\epz)$ be an internal adjunction in $\sB$.
\begin{enumerate}
\item Fill in the details of \cref{example:adjunctions-op} to show
that $(g^*,f^*,\eta^*,\epz^*)$ is an adjunction in $\sB^\op$.
\item Identify the corresponding adjunctions in $\sB^\co$ and $\sB^\coop$.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exercise:cartesian-dualizable}\index{dualizable!object}\index{object!dualizable}
Show that the only dualizable object in a Cartesian monoidal
category\index{monoidal category!Cartesian} is the terminal object.
\end{exercise}
\begin{exercise}\label{exercise:other-triangle-postcomp}
Verify the triangle identities \eqref{diagram:triangles} for
\cref{example:corepresented-adjunction} directly. Begin by labeling
the arrows on the diagram below, and explain why each region
commutes. Then draw, label, and explain the corresponding diagram
for the right triangle identity.
\begin{equation}\label{diagram:fgf}
\begin{tikzpicture}[x=17mm,y=8mm,baseline={(L).base}]
\draw[0cell]
(4,1) node (fgft') {f(g(ft))}
(fgft') ++(-4.5,0) node (ft) {ft}
(fgft') ++(0,-5.5) node (ft') {ft}
(fgft') ++(180+25:1.8) node (fgft) {f((gf)t)}
(fgft') ++(270-25:1.8) node (fgft'') {(fg)(ft)}
(fgft) ++(-1.5,0) node (f1t) {f(1_X\, t)}
(fgft'') ++(0,-2) node (1ft) {1_Y\, (ft)}
(fgft) ++(270-40:1.3) node (fgft2) {(f(gf))t}
(fgft'') ++(180+50:1.4) node (L) {((fg)f)t}
(fgft2) ++(-1.5,0) node (f1t2) {(f1_X)\, t}
(L) ++(0,-2) node (1ft') {(1_Y\, f)t}
;
\draw[1cell]
(ft) edge node (foleta) {1_f * (\eta_*)_t} (fgft')
(fgft') edge node (olepzf) {(\epz_*)_{ft}} (ft')
(ft) edge node[pos=.6] {} (f1t)
(f1t) edge node {} (fgft)
(fgft) edge node {} (fgft')
(fgft') edge node {} (fgft'')
(fgft'') edge node {} (1ft)
(1ft) edge node (ellft) {} (ft')
(ft) edge[swap] node (rft) {} (f1t2)
(f1t) edge node[pos=.25] {} (f1t2)
(f1t2) edge[swap] node {} (fgft2)
(fgft) edge[swap] node {} (fgft2)
(fgft2) edge[swap] node {} (L)
(L) edge[swap] node {} (fgft'')
(L) edge[swap] node {} (1ft')
(1ft') edge[swap] node {} (1ft)
(1ft') edge[swap] node {} (ft')
(f1t2) edge[swap, bend right,out=-50,in=220] node[pos=.33] {} (ft')
;
\end{tikzpicture}
\end{equation}
\end{exercise}
\begin{exercise}\label{exercise:adjunction-local}
Let $(f,g,\eta,\epz)$ be an adjunction in $\sB$. Let $W$ be another
object of $\sB$ and consider $1$-cells $x\cn W \to X$ and $y\cn W
\to Y$. Prove there is an isomorphism
\[
\sB(fx,y) \iso \sB(x,gy)
\]
that is natural with respect to $1$-cells $W \to W'$ and $2$-cells $x
\to x'$, $y \to y'$. (Hint: make use of the represented
pseudofunctors $\sB(W,-)$ and the corresponding description of
adjunctions between $1$-categories.)
\end{exercise}
\begin{exercise}\label{exercise:double-mate}
Return to the proof of \cref{lemma:mate-pairs} and show, for $\nu$
as in \cref{definition:mates}, that the mate of the mate of $\nu$ is
equal to $\nu$.
\end{exercise}
\begin{exercise}\label{exercise:equivalences-preserved}
Use the proof of \cref{proposition:adjunctions-preserved} to give a
proof that adjunctions preserve internal equivalences
(\cref{proposition:equivalences-preserved}).
\end{exercise}
\begin{exercise}\label{exercise:yoneda-adj-equiv}
Give an alternate proof of \cref{proposition:equiv-via-isos} by
applying the Bicategorical Yoneda Lemma \ref{lemma:yoneda-bicat} and appealing to the
analogous result for equivalences of $1$-categories.
\end{exercise}
\begin{exercise}\label{exercise:equiv-via-isos-diagram}
Complete the outline in the proof of
\cref{proposition:equiv-via-isos} to check the left triangle identity.
\end{exercise}
\begin{exercise}\label{exercise:2-equiv-Cat-equiv}
Verify the correspondence stated in
\cref{lemma:2-equiv-Cat-equiv} between $\Cat$-equivalences and
$2$-equivalences for locally small $2$-categories.
\end{exercise}
\begin{exercise}\label{exercise:eval-coeval-assoc}
Prove the compatibility of evaluation with coevaluation stated in
\cref{lemma:eval-coeval-assoc}.
\end{exercise}
\begin{exercise}\label{exercise:represented-monad}
Suppose that
\[
(C, t, \mu, \eta) \to (C', t', \mu', \eta')
\]
is a morphism of monads in a bicategory $\B$, and suppose that $W$
is another object of $\B$.
\begin{enumerate}
\item Show that the morphism of monads induces a functor from the
category of $t_*$-algebras in $\B(W,C)$ to the category of
$t'_*$-algebras in $\B(W,C')$ (see \cref{monad-bicat-interpret}).
\item Show that a modification of monad morphisms induces a
natural transformation between functors constructed in the
previous part.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exercise:represented-comonad}
Suppose that
\[
(C,s,\de,\epz) \to (C',s',\de',\epz')
\]
is a morphism of comonads in a bicategory $\B$, and suppose that $W$
is another object of $\B$.
\begin{enumerate}
\item Show that the morphism of comonads induces a functor from the
category of $s_*$-coalgebras in $\B(W,C)$ to the category of
$s'_*$-coalgebras in $\B(W,C')$.
\item Show that a modification of comonad morphisms induces a
natural transformation between functors constructed in the
previous part.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exercise:cat-monad-is-internal-monad}
Compare the diagrams of \cref{def:enriched-monad} and
\cref{monad-bicat-interpret} to prove
\cref{cat-monad-is-internal-monad}.
\end{exercise}
\begin{exercise}\label{exercise:Cat-enriched-algebra}
Suppose $T$ is a $2$-monad on a $\Cat$-enriched category $\A$. Verify
that the definition of strict $T$-algebra in
\cref{definition:lax-algebra} is precisely the definition of
$\Cat$-enriched algebra from \cref{def:enriched-monad-algebra}.
\end{exercise}
\begin{exercise}\label{exercise:t-alg-checks}
Verify that the four collections defined in
\cref{definition:t-alg-2-cats} are indeed $2$-categories.
\end{exercise}
\subsection*{Notes}
\begin{note}[Additional Examples of Duality]
In \cref{exercise:cartesian-dualizable} we observe that the only
dualizable object in a Cartesian monoidal category is the terminal
object. Therefore there are no nontrivial dual pairs in the category
of topological spaces. However, passing to a category of spectra
with smash product yields a number of interesting examples. In
\cite{may-sigurdsson,ponto-shulman} the authors discuss bicategories
of parametrized spectra and show that the duality theory there
recovers a host of duality theories in topology, including
Poincar\'e-Thom, Spanier-Whitehead, and Costenoble-Waner dualities.
\end{note}
\begin{note}[Set-Theoretic Considerations for Biequivalences]
\cref{definition:biequivalence} gives the definition of biequivalence
in terms of internal equivalences in $\Bicatps(\B,\B)$ and
$\Bicatps(\C,\C)$. In so doing, we have implicitly assumed that
$\B_0$ and $\C_0$ are both sets, as required by
\cref{def:bicat-of-lax-functors,subbicat-pseudofunctor}. Recall, as
described in \cref{conv:universe}, this means that $\B_0$ and $\C_0$ are
elements of a chosen Grothendieck universe $\calu$.
If we need to discuss biequivalences between bicategories $\B'$ and
$\C'$ whose collections of objects are not elements of $\calu$ (for
example, they may be subsets of $\calu$), there are two approaches.
The first is to take a larger universe $\calu'$ for which $\B'_0$ and
$\C'_0$ are elements. Then the theory developed above, when applied
to $\calu'$, defines biequivalences between $\B'$ and $\C'$. In this
way---under the assumption of the Axiom of
Universes---\cref{definition:biequivalence} does define biequivalences
for all bicategories.
The second approach is to give a definition of biequivalence in terms
of certain pseudofunctors, strong transformations, and modifications,
as sketched in \cref{biequivalence-interpret}. This definition does
not require $\B_0$ and $\C_0$ to be sets (elements of $\calu$).
However one can easily verify that the explicit definition is
equivalent to the internal one when $\B_0$ and $\C_0$ are sets.
We've chosen to adopt the first of these two approaches because it
gives a more compact and conceptual definition of biequivalences.
\end{note}
\begin{note}[Biequivalences in Tricategories]
Gurski \cite{gurski-biequivalences} gives a detailed treatment of
biequivalences in a general tricategory and discusses the equivalence
of different definitions. That work also gives a treatment of
biadjunctions and biequivalences in more general tricategories---the
tricategorical analogue of our discussion of adjunctions
above---together with coherence applications.
\end{note}
\begin{note}[Monads in $2$-Categories]
The theory of\index{monad!in a 2-category}\index{2-category!monad} monads in a $2$-category was first discussed by Lawvere
\cite{lawvere-metric}. Other foundational references are
\cite{street_monads,street-yoneda,kelly-coherence,power-coherence,bkp}.
The terminology for monad $1$- and $2$-cells is not entirely standard.
Street \cite{street_monads} uses \emph{morphism} and \emph{transformation}, while Lack
\cite{lack} uses \emph{morphism} and \emph{$2$-cell}. Similar terms are used
elsewhere in the literature. Since these cells are defined as lax
transformations and modifications, we have opted for the generic terms
``$1$-cell'' and ``$2$-cell'' in order to avoid unfortunate phrases such as ``a
morphism (of monads) is a transformation (of corresponding lax
functors)'' and ``a transformation of morphisms (of monads) is a
modification of transformations (of lax functors)''.
For morphisms of monad algebras, Street \cite{street_monads} uses \emph{pseudo
morphism} where we use \emph{strong morphism} following Kelly \cite{kelly-coherence}.
\end{note}
\begin{note}[Additional Features of $2$-Monad Theory]
There are two important features of $2$-monad theory that are just
beyond the scope of this text. The first is that there is a
well-developed theory for computing limits and colimits in $\sAlg{T}$
and $\Alg{T}$; see \cite{lack} for an overview, and \cite{bkp} for
more complete details.
The second important feature is that often in cases of interest one
has equivalences between pseudo algebras and strict algebras. This is
not generally the case; see Shulman \cite{shulman-pseudo} for several
examples. However, when one does have such equivalences, one has a
very general form of coherence theory. This was first described by
Kelly \cite{kelly-coherence} and followed by Power
\cite{power-coherence}. We mention two fundamental applications from
\cite{power-coherence} to the coherence theorem for bicategories. Our
approach in \cref{ch:coherence} will be much more elementary.
\begin{enumerate}
\item There is a $2$-monad on $\Cat$-graphs with a given vertex set $X$
whose strict algebras are $2$-categories with object set $X$ and whose
pseudoalgebras are bicategories with object set $X$. Power's
theorem implies that every bicategory is biequivalent to a
$2$-category with the same set of objects.
\item For a small $2$-category $\C$, there is a $2$-monad on the
$2$-category of categories indexed by $\Ob(\C)$, i.e. $2$-functors
$\Ob(\C) \to \Cat$, whose strict algebras are $2$-functors $\C \to
\Cat$ and whose pseudo algebras are pseudofunctors. Power's theorem
implies that every such pseudofunctor is equivalent, via an
invertible strong transformation, to a $2$-functor.
\end{enumerate}
Subsequent work of \cite{bkp,hermida,lack-codescent,lack-paoli}, among
many others, clarifies and
extends this theory. Another approach using algebraic weak
factorization systems is given in
\cite{bourke-garner-awfs1,bourke-garner-awfs2} and was further
generalized to coherence theorems for strict and lax weak equivalences
in \cite{gjo-extending}.
\end{note}
\chapter{Categories}
\label{ch:categorical_prelim}
In this chapter we recall some basic concepts of category theory, including monads, monoidal categories, and enriched categories. For more detailed discussion of basic category theory, the reader is referred to the references mentioned in Section \ref{sec:category-exercises}.
\section{Basic Category Theory}\label{sec:categories}
In this section we recall the concepts of categories, functors, natural transformations, adjunctions, equivalences, Yoneda Lemma, (co)limits, and monads. We begin by fixing a set-theoretic convention.
\begin{definition}\label{def:universe}
A \index{Grothendieck!universe}\emph{Grothendieck universe}, or just a \index{universe}\emph{universe}, is a set\label{notation:universe} $\calu$ with the following properties.
\begin{enumerate}
\item\label{univ1} If $x \in \calu$ and $y \in x$, then $y\in\calu$.
\item\label{univ2} If $x\in\calu$, then $\calp(x)\in\calu$, where $\calp(x)$ is the power set of $x$.
\item\label{univ3} If $I\in\calu$ and $x_i\in\calu$ for each $i\in I$, then the union $\bigcup_{i\in I} x_i \in \calu$.
\item\label{univ4} The set of finite ordinals $\bbN \in \calu$. \defmark
\end{enumerate}
\end{definition}
\begin{convention}\label{conv:universe}
We assume the\index{Axiom of Universes}
\begin{quote}
Axiom of Universes: Every set belongs to some universe.
\end{quote}
We fix a universe $\calu$. From now on, an element in $\calu$ is called a \index{set}\emph{set}, and a subset of $\calu$ is called a \index{class}\emph{class}. These conventions allow us to make the usual set-theoretic constructions, including Cartesian products, disjoint unions, and function sets.\dqed
\end{convention}
\begin{proposition}
A universe $\calu$ has the following properties.
\renewcommand{\theenumi}{{\textit{\alph{enumi}}}}
\begin{enumerate}
\item\label{univ-pair} If $x,y \in \calu$, then $\{x,y\} \in \calu$.
\item\label{univ-subsets} If $x \in \calu$ and $y \subset x$,
then $y \in \calu$.
\item\label{univ-prod} If $I\in \calu$ and $x_i \in \calu$ for each $i \in I$, then the
Cartesian product $\prod_{i \in I} x_i \in \calu$.
\item\label{univ-coprod} If $I\in \calu$ and $x_i \in \calu$ for each $i \in I$, then
the disjoint union $\coprod_{i \in I} x_i \in \calu$.
\item\label{univ-func} If $x,y \in \calu$, then $y^x \in \calu$,
where $y^x$ denotes the collection of functions $x \to y$.
\end{enumerate}
\end{proposition}
\begin{proof}
Combining Axioms \eqref{univ4}, \eqref{univ2}, and \eqref{univ1} of
\cref{def:universe}, we see that $\calu$ contains an $n$-element set
for each $n \in \bbN$. For Property \eqref{univ-pair}, we therefore
have $x \cup y \in \calu$ by Axiom \eqref{univ3}. Since $\{x,y\}
\in \calp(x \cup y)$, we have $\{x,y\} \in \calu$ by Axiom
\eqref{univ1}. For Property \eqref{univ-subsets}, $y \subset x$
means that $y \in \calp(x)$ and thus the assertion follows from
Axioms \eqref{univ1} and \eqref{univ2}. For Properties
\eqref{univ-prod} and \eqref{univ-coprod}, we first note that for
any $x,y \in \calu$ we have $x \times y \subset \calp(\calp(x \cup
y))$ and therefore $x \times y \in \calu$ by Property
\eqref{univ-subsets}. Hence, using Axiom \eqref{univ3}, the product
$I \times \bigcup_{i \in I} x_i$ is an element of $\calu$. The
assertions of Properties \eqref{univ-prod} and \eqref{univ-coprod},
respectively, now follow because
\[
\prod_{i \in I} x_i \subset \calp(I \times \bigcup_{i \in I} x_i) \andspace
\coprod_{i \in I} x_i \subset I \times \bigcup_{i \in I} x_i.
\]
Property \eqref{univ-func} follows because $y^x \subset \calp(x
\times y)$.
\end{proof}
\begin{definition}\label{def:categories}
A \index{category}\emph{category} $\C$ consists of:
\begin{itemize}
\item a class\label{notation:object} $\Ob(\C)$ of \index{object}\emph{objects} in $\C$;
\item a set\label{notation:morphism-set} $\C(X,Y)$, also denoted by $\C(X;Y)$, of \index{morphism}\emph{morphisms} for each pair of objects $X,Y\in \Ob(\C)$; each morphism $f \in \C(X,Y)$ has \label{notation:domain}\emph{domain}\index{domain} $X = \dom(f)$ and \emph{codomain}\index{codomain} $Y = \codom(f)$;
\item an assignment called \index{composition}\emph{composition}
\[\begin{tikzcd}
\C(Y,Z) \times \C(X,Y) \arrow{r}{\circ} & \C(X,Z),
\end{tikzcd}
\qquad \circ(g,f) = g \circ f\] for objects $X,Y,Z$ in $\C$;
\item an \index{identity!morphism}\emph{identity morphism}\label{notation:identity-morphism} $1_X\in \C(X,X)$ for each object $X$ in $\C$.
\end{itemize}
These data are required to satisfy the following two conditions.
\begin{description}
\item[Associativity]
For morphisms $f$, $g$, and $h$, the equality\index{associativity!category} \[h \circ (g \circ f) = (h \circ g) \circ f\] holds, provided the compositions are defined.
\item[Unity]
For each morphism $f \in \C(X,Y)$, the equalities\index{unity!category} \[1_Y \circ f = f = f \circ 1_X\] hold.
\end{description}
In subsequent chapters, a category is sometimes called a \index{category!1-}\index{1-category}\emph{$1$-category}.
\end{definition}
In a category $\C$, the class of objects $\Ob(\C)$ is also denoted by $\C_0$, and the collection of morphisms is denoted by\label{not:morphism} either $\Mor(\C)$ or $\C_1$. For an object $X\in\Ob(\C)$ and a morphism $f \in \Mor(\C)$, we often write $X\in\C$ and $f\in\C$. We also denote a morphism $f \in \C(X,Y)$ as
\[f : X \to Y, \qquad \begin{tikzcd}X \arrow{r}{f} & Y\end{tikzcd}, \andspace \begin{tikzcd}X \arrow{r}[description]{f} & Y.\end{tikzcd}\]
Morphisms $f : X \to Y$ and $g : Y \to Z$ are called \index{composable}\emph{composable}, and $g \circ f\in \C(X,Z)$ is often abbreviated to\label{notation:morphism-composition} $gf$, called their \index{composite}\emph{composite}.
The identity morphism $1_X$ of an object $X$ is also denoted by $1$ or even just $X$. A morphism $f : X \to Y$ in a category $\C$ is called an \emph{isomorphism}\index{isomorphism} if there exists a morphism $g : Y \to X$ such that $gf = 1_X$ and $fg=1_Y$. An isomorphism is sometimes denoted by \!\!$\begin{tikzcd}[column sep=scriptsize]X \rar{\cong} & Y.\end{tikzcd}$\label{not:iso} A category is \index{discrete category}\index{category!discrete}\emph{discrete} if it contains no non-identity morphisms. A \index{groupoid}\emph{groupoid} is a category in which every morphism is an isomorphism. The \index{category!opposite}\index{opposite!category}\emph{opposite category} of a category $\C$ is denoted by\label{notation:opposite-category} $\C^{\op}$. It has the same objects as $\C$ and morphism sets $\Cop(X,Y) = \C(Y,X)$, with identity morphisms and composition inherited from $\C$. A morphism $f : Y \to X$ in $\C$ is denoted by $f^{\op} : X \to Y$ in $\Cop(X,Y)$. A \index{category!small}\index{small category}\emph{small category} is a category whose class of objects forms a set. A category is \index{essentially!small category}\index{category!essentially small}\emph{essentially small} if its isomorphism classes of objects form a set.
\begin{definition}\label{def:functors}
For categories $\C$ and $\D$, a \emph{functor}\index{functor} $F : \C \to \D$ consists of:
\begin{itemize}
\item an assignment on objects
\[\Ob(\C) \to \Ob(\D), \qquad X \mapsto F(X);\]
\item an assignment on morphisms
\[\C(X,Y) \to \D\bigl(F(X),F(Y)\bigr), \qquad f \mapsto F(f).\]
\end{itemize}
These data are required to satisfy the following two conditions.
\begin{description}
\item[Composition] The equality \[F(gf) = F(g)F(f)\] of morphisms holds, provided the compositions are defined.
\item[Identities] For each object $X\in\C$, the equality \[F(1_X) = 1_{F(X)}\] in $\D\bigl(F(X),F(X)\bigr)$ holds.\defmark
\end{description}
\end{definition}
We often abbreviate $F(X)$ and $F(f)$ to $FX$ and $Ff$, respectively. Functors are composed by composing the assignments on objects and on morphisms. The \emph{identity functor}\index{identity!functor}\index{functor!identity} of a category $\C$ is determined by the identity assignments on objects and morphisms, and is written as either\label{not:idc} $\Id_{\C}$ or $1_{\C}$. We write\label{notation:cat} $\Cat$ \index{category!of small categories}for the category with small categories as objects, functors as morphisms, identity functors as identity morphisms, and composition of functors as composition. For categories $\C$ and $\D$, the collection of functors $\C \to \D$ is denoted by\label{not:funcd} $\Fun(\C,\D)$. For a functor $F : \C\to\D$, the functor
\begin{equation}\label{opposite-functor}
\begin{tikzcd}
\Cop \ar{r}{F^{\op}} & \D^{\op},\end{tikzcd}
\begin{cases}
\Ob(\Cop) \ni X & \mapsto FX \in \Ob(\D^{\op}),\\
\Cop(X,Y) \ni f & \mapsto (Ff)^{\op} \in \D^{\op}(FX,FY)
\end{cases}
\end{equation}
is called the \index{functor!opposite}\index{opposite!functor}\emph{opposite functor}.
\begin{definition}\label{def:natural-transformations}
Suppose $F, G : \C \to \D$ are functors. A \emph{natural transformation}\index{natural transformation}\index{transformation!natural} $\theta : F \to G$ consists of a morphism $\theta_X : FX \to GX$ in $\D$ for each object $X\in\C$ such that the diagram
\[\begin{tikzcd}FX \dar[swap]{Ff} \rar{\theta_X} & GX \dar{Gf}\\ FY \rar{\theta_Y} & GY\end{tikzcd}\]
in $\D$ is commutative for each morphism $f : X \to Y$ in $\C$.
\end{definition}
In other words, the equality \[Gf \circ \theta_X = \theta_Y \circ Ff\] holds in $\D(FX,GY)$. The collection of natural transformations $F \to G$ is denoted by\label{not:natfg} $\Nat(F,G)$. Each morphism $\theta_X$ is called a \emph{component} of $\theta$. The \emph{identity natural transformation}\index{identity!natural transformation} $1_F : F \to F$ of a functor $F$ has each component an identity morphism. A \index{natural isomorphism}\emph{natural isomorphism} is a natural transformation in which every component is an isomorphism.
\begin{definition}\label{def:vertical-comp}
Suppose $\theta : F \to G$ is a natural transformation for functors $F,G : \C\to \D$.
\begin{enumerate}
\item Suppose $\phi : G \to H$ is a natural transformation for another functor $H : \C \to \D$. The \emph{vertical composition}\index{vertical composition!natural transformation} \[\phi\theta : F \to H\] is the natural transformation with components
\begin{equation}\label{not:vcomp}
(\phi\theta)_X = \phi_X \circ \theta_X : FX\to HX \forspace X\in\C.
\end{equation}
\item Suppose $\theta' : F' \to G'$ is a natural transformation for functors $F', G' : \D \to \E$. The \emph{horizontal composition}\index{horizontal composition!natural transformation}\label{not:hcomp} \[\theta' \ast \theta : F'F \to G'G\] is the natural transformation whose component $(\theta' \ast \theta)_X$ for an object $X\in\C$ is defined as either composite in the commutative diagram
\begin{equation}\label{horizontal-composition}
\begin{tikzcd}
F'FX \rar{\theta'_{FX}} \dar[swap]{F'\theta_X} & G'FX \dar{G'\theta_X}\\ F'GX \rar{\theta'_{GX}} & G'GX\end{tikzcd}
\end{equation}
in $\D$.\defmark
\end{enumerate}
\end{definition}
For a category $\C$ and a small category $\D$, a \emph{$\D$-diagram in $\C$} is a functor $\D \to \C$. The \index{category!diagram}\index{diagram category}\emph{diagram category}\label{notation:diagram-category} $\C^{\D}$ has $\D$-diagrams in $\C$ as objects, natural transformations between such functors as morphisms, and vertical composition of natural transformations as composition.
\begin{definition}\label{def:adjunctions}
For categories $\C$ and $\D$, an \index{adjunction}\emph{adjunction}\label{notation:adjunction} from $\C$ to $\D$ is a triple $(L,R,\phi)$ consisting of:
\begin{itemize}
\item A pair of functors in opposite directions
\[\begin{tikzcd}\C \rar[shift left]{L} & \D. \lar[shift left]{R}\end{tikzcd}\]
\item A family of bijections
\[\begin{tikzcd}\D(LX,Y) \rar{\phi_{X,Y}}[swap]{\cong} & \C(X,RY)\end{tikzcd}\] that is natural in the objects $X\in\C$ and $Y\in\D$.
\end{itemize}
Such an adjunction is also called an \index{adjoint!pair}\emph{adjoint pair}, with $L$ the \index{left adjoint}\emph{left adjoint} and $R$ the \index{right adjoint}\emph{right adjoint}.
\end{definition}
We also denote such an adjunction by $L \dashv R$. We always display the left adjoint on top, pointing to the right. If an adjunction is displayed vertically, then the left adjoint is written on the left-hand side.
In an adjunction $L \dashv R$ as above, setting $Y=LX$ or $X=RY$, the natural bijection $\phi$ yields natural transformations
\begin{equation}\label{adjunction-unit}
\begin{tikzcd}1_{\C} \rar{\eta} & RL\end{tikzcd} \andspace
\begin{tikzcd}LR \rar{\varepsilon} & 1_{\D},\end{tikzcd}
\end{equation}
called the \index{unit!of an adjunction}\emph{unit} and the \index{counit!of an adjunction}\emph{counit}. The vertically composed natural transformations
\begin{equation}\label{triangle-identities}
\begin{tikzcd}R \rar{\eta R} & RLR \rar{R\varepsilon} & R\end{tikzcd} \andspace \begin{tikzcd}L \rar{L\eta} & LRL \rar{\varepsilon L} & L\end{tikzcd}
\end{equation}
are equal to $1_{R}$ and $1_{L}$, respectively. Here \[\eta R =
\eta*1_R,\qquad R\varepsilon = 1_R*\varepsilon,\] and similarly for
$L\eta$ and $\varepsilon L$. The identities in
\eqref{triangle-identities} are known as the \index{triangle
identities!adjunction}\emph{triangle identities}. Characterizations
of adjunctions are given in \cite[Chapter 3]{borceux1} and \cite[IV.1]{maclane}, one of which\index{characterization of!an adjunction} is the following. An adjunction $(L,R,\phi)$ is completely determined by
\begin{itemize}
\item the functors $L : \C\to\D$ and $R : \D\to\C$, and
\item the natural transformation $\eta : 1_{\C} \to RL$
\end{itemize}
such that for each morphism $f : X \to RY$ in $\C$ with $X\in\C$ and $Y\in\D$, there exists a unique morphism $f' : LX \to Y$ in $\D$ such that the diagram
\[\begin{tikzcd}X \dar[equal] \rar{\eta_X} & RLX \dar{Rf'}\\ X \rar{f} & RY\end{tikzcd}\] in $\C$ is commutative.
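For example, the forgetful functor $R$ from the category of monoids to $\Set$ has a left adjoint $L$ sending a set $X$ to the free monoid
\[
LX = \coprod_{n \geq 0} X^n
\]
of finite sequences in $X$. The unit $\eta_X : X \to RLX$ sends an element $x$ to the length-one sequence $(x)$, and the characterization above expresses the familiar fact that a monoid homomorphism $LX \to Y$ is uniquely determined by an arbitrary function $X \to RY$ on generators.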
\begin{definition}\label{def:equivalences}
A functor $F : \C \to \D$ is called an \index{characterization of!an equivalence}\index{equivalence}\emph{equivalence} if there exist
\begin{itemize}
\item a functor $G : \D\to\C$ and
\item natural isomorphisms \!$\begin{tikzcd}[column sep=scriptsize]\eta : 1_{\C} \rar{\cong} & GF\end{tikzcd}$\! and \!$\begin{tikzcd}[column sep=scriptsize]\varepsilon : FG \rar{\cong} & 1_{\D}.\end{tikzcd}$\defmark
\end{itemize}
If, in addition, $F$ is left adjoint to $G$ with unit $\eta$ and counit
$\epz$, then $(F,G,\eta,\epz)$ is called an \emph{adjoint equivalence}\index{adjoint!equivalence}.
\end{definition}
Equivalences can be characterized locally as follows. A functor $F$ is an equivalence if and only if it is both:
\begin{itemize}
\item \index{fully faithful}\emph{fully faithful}, which means that each function $\C(X,Z) \to \D(FX,FZ)$ on morphism sets is a bijection;
\item \index{essentially!surjective}\emph{essentially surjective}, which means that for each object $Y\in\D$, there exists an isomorphism \!$\begin{tikzcd}[column sep=scriptsize]FX \rar{\cong} & Y\end{tikzcd}$\! for some object $X\in\C$.
\end{itemize}
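For example, a \emph{skeleton} of a category $\C$, i.e., a full subcategory containing exactly one object from each isomorphism class, includes into $\C$ via a functor that is fully faithful and essentially surjective, hence an equivalence, although it is in general not an isomorphism of categories.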
\begin{definition}\label{def:representables}
Suppose $\C$ is a category, and $A$ is an object in $\C$. The functor
\[\Yo_A = \C(-,A) : \Cop \to \Set\] defined by
\[\begin{split}\Ob(\C) \ni X & \mapsto \Yo_A(X)=\C(X,A),\\
\C(X,Y) \ni f & \mapsto \Yo_A(f) = (-) \circ f : \C(Y,A) \to \C(X,A)
\end{split}\]
is called the \index{functor!representable}\index{representable!functor}\emph{representable functor induced by $A$}.
\end{definition}
The Yoneda Lemma\index{Yoneda!Lemma} states that there is a bijection
\begin{equation}\label{yoneda-lemma}
\Nat(\Yo_A,F) \cong F(A),
\end{equation}
defined by
\[\bigl(\theta : \Yo_A \to F\bigr) \mapsto \theta_A(1_A) \in F(A),\]
that is natural in the object $A\in\C$ and the functor $F : \Cop\to\Set$.
A special case of the Yoneda Lemma is the natural bijection
\[\Nat(\Yo_A,\Yo_B) \cong \Yo_B(A) = \C(A,B)\] for objects $A,B \in \C$. The \emph{Yoneda embedding}\index{Yoneda!embedding} is the functor
\begin{equation}\label{yoneda-embedding}
\Yo_{-} : \C \to \Fun(\Cop,\Set).
\end{equation}
This functor is fully faithful by the previous bijection.
\begin{definition}\label{def:colimits}
Suppose $F : \D \to \C$ is a functor. A \index{colimit}\emph{colimit of $F$}, if it exists, is a pair $(\colim\, F,\delta)$ consisting of
\begin{itemize}
\item an object $\colim\, F \in \C$ and
\item a morphism $\delta_d : Fd \to \colim\, F$ in $\C$ for each object $d \in \D$
\end{itemize}
that satisfies the following two conditions.
\begin{enumerate}
\item For each morphism $f : d \to d'$ in $\D$, the diagram
\[\begin{tikzcd}Fd \dar[swap]{Ff} \rar{\delta_d} & \colim\, F \dar[equal]\\ Fd' \rar{\delta_{d'}} & \colim\, F\end{tikzcd}\]
in $\C$ is commutative. A pair $(\colim\, F,\delta)$ with this property is called a \index{cocone}\emph{cocone of $F$}.
\item The pair $(\colim\, F,\delta)$ is \index{universal property}\emph{universal} among cocones of $F$. This means that if $(X,\delta')$ is another such pair that satisfies property (1), then there exists a unique morphism $h : \colim\, F \to X$ in $\C$ such that the diagram
\[\begin{tikzcd}Fd \dar[equal] \rar{\delta_d} & \colim\, F \dar{h}\\
Fd \rar{\delta'_d} & X\end{tikzcd}\]
is commutative for each object $d\in\D$.\defmark
\end{enumerate}
\end{definition}
A \index{limit}\emph{limit of $F$} $(\limit\, F,\delta)$, if it exists, is defined dually by turning the morphisms $\delta_d$ for $d\in\D$ and $h$ backward. A \index{small colimit}\emph{small (co)limit} is a (co)limit of a functor whose domain category is a small category. A category $\C$ is \index{complete}\index{cocomplete}\emph{(co)complete} if it has all small (co)limits. For a functor $F : \D \to \C$, its colimit, if it exists, is also denoted by\label{notation:colimit} $\colimof{x\in \D}\, Fx$ and $\colimof{\D}\, F$, and similarly for limits.
A left adjoint $F : \C \to \D$ \index{left adjoint!preservation of colimits}\index{colimit!preservation by left adjoints}preserves all the colimits that exist in $\C$. In other words, if $H : \E \to \C$ has a colimit, then $FH : \E\to \D$ also has a colimit, and the natural morphism
\begin{equation}\label{preserve-limits}
\colimover{e\in\E}\, FHe \to F\Bigl(\colimover{e\in\E}\, He\Bigr)
\end{equation}
is an isomorphism. Similarly, a right adjoint $G : \D \to \C$ preserves\index{limit!preservation by right adjoints}\index{right adjoint!preservation of limits} all the limits that exist in $\D$.
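For example, for a set $A$ the functor $- \times A : \Set \to \Set$ is left adjoint to the functor sending a set $Y$ to the set $Y^A$ of functions $A \to Y$, so $- \times A$ preserves all small colimits. In particular, there is a natural isomorphism
\[
(X \amalg Y) \times A \cong (X \times A) \amalg (Y \times A)
\]
for sets $X$ and $Y$.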
\begin{example}
Here are some special types of colimits in a category $\C$.
\begin{enumerate}
\item An \emph{initial object}\index{initial object} $\varnothing^{\C}$ in $\C$ is a colimit of the functor $\varnothing \to \C$, where $\varnothing$ is the \index{empty category}\index{category!empty}empty category with no objects and no morphisms. It is characterized by the universal property that for each object $X$ in $\C$, there is a unique morphism $\varnothing^{\C}\to X$ in $\C$.
\item A \index{coproduct}\emph{coproduct} is a colimit of a functor whose domain category is a discrete category. We use the symbols $\coprod$ and $\amalg$\label{not:coprod} to denote coproducts.
\item A \index{pushout}\emph{pushout} is a colimit of a functor whose domain category has the form
\[\begin{tikzcd}\bullet & \bullet \lar \rar & \bullet\end{tikzcd}\] with three objects and two non-identity morphisms.
\item A \index{coequalizer}\emph{coequalizer} is a colimit of a functor whose domain category has the form
\[\begin{tikzcd}\bullet \rar[shift left] \rar[shift right] & \bullet\end{tikzcd}\] with two objects and two non-identity morphisms.
\end{enumerate}
Terminal \index{terminal!object}objects, products, \index{product}\index{pullback}\index{equalizer}pullbacks, and equalizers are the corresponding limit concepts.\dqed
\end{example}
\begin{notation}\label{notation:terminal-category}\index{terminal!category}\index{category!terminal}
We let $\boldone$ denote the terminal category; it has a unique
object $*$ and a unique morphism $1_*$.
\end{notation}
\begin{definition}\label{def:monad}
A \index{monad}\emph{monad} on a category $\C$ is a triple\label{notation:monad} $(T,\mu,\eta)$ in which
\begin{itemize}
\item $T : \C \to \C$ is a functor and
\item $\mu : T^2 \to T$, called the \emph{multiplication}, and $\eta : 1_{\C} \to T$, called the \emph{unit}, are natural transformations
\end{itemize}
such that the \index{associativity!monad}associativity and \index{unity!monad}unity diagrams
\[\begin{tikzcd}T^3 \dar[swap]{\mu T} \rar{T\mu} & T^2 \dar{\mu}\\ T^2 \rar{\mu} & T\end{tikzcd}\qquad
\begin{tikzcd}
1_{\C} \circ T \dar[equal] \rar{\eta T} & T^2 \dar[swap]{\mu} & T\circ 1_{\C} \lar[swap]{T\eta} \dar[equal]\\
T \rar[equal] & T \rar[equal] & T\end{tikzcd}\] are commutative. We often abbreviate such a monad to $T$.
\end{definition}
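As an informal complement to the definition (our own illustration, not part of the text), the list construction on sets is a monad: $\eta_X$ forms one-element lists and $\mu_X$ flattens a list of lists. The following minimal Python sketch, with all names ours, checks the associativity and unity diagrams on sample elements.

```python
# A concrete monad on (a fragment of) Set: the list monad.
# T X = lists with entries in X, eta_X(x) = [x], mu_X flattens one level.

def T(f):
    """Functor action of T on a morphism f: apply f entrywise."""
    return lambda xs: [f(x) for x in xs]

def eta(x):
    """Unit eta_X : X -> TX."""
    return [x]

def mu(xss):
    """Multiplication mu_X : T^2 X -> TX."""
    return [x for xs in xss for x in xs]

# Associativity diagram: mu . (T mu) == mu . mu_{TX} on T^3 X.
xsss = [[[1, 2], [3]], [[4]]]
assert mu(T(mu)(xsss)) == mu(mu(xsss)) == [1, 2, 3, 4]

# Unity diagram: mu . (eta T) == id_{TX} == mu . (T eta).
xs = [1, 2, 3]
assert mu(eta(xs)) == xs
assert mu(T(eta)(xs)) == xs
```

Of course, running such assertions only tests the diagrams on particular elements; it does not replace the general naturality statements.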
\begin{definition}\label{def:monad-algebra}
Suppose $(T,\mu,\eta)$ is a monad on a category $\C$.
\begin{enumerate}
\item A \index{algebra!of a monad}\emph{$T$-algebra} is a pair $(X,\theta)$ consisting of
\begin{itemize}
\item an object $X$ in $\C$ and
\item a morphism $\theta : TX \to X$, called the \emph{structure morphism},
\end{itemize}
such that the associativity and unity diagrams
\begin{equation}\label{monad-alg-axioms}
\begin{tikzcd}
T^2X \dar[swap]{\mu_X} \rar{T\theta} & TX \dar{\theta}\\
TX \rar{\theta} & X\end{tikzcd}\qquad
\begin{tikzcd}
X \rar{\eta_X} \arrow[equal]{rd} & TX \dar{\theta}\\ & X\end{tikzcd}
\end{equation}
are commutative.
\item A \index{morphism!monad algebras}\emph{morphism of $T$-algebras} \[f : (X,\theta^X) \to (Y,\theta^Y)\] is a morphism $f : X \to Y$ in $\C$ such that the diagram
\[\begin{tikzcd}
TX \rar{Tf} \dar[swap]{\theta^X} & TY \dar{\theta^Y}\\ X \rar{f} & Y\end{tikzcd}\]
is commutative.
\item The category of $T$-algebras is denoted by\label{notation:algt}\index{category!of algebras over a monad} $\alg(T)$.\defmark
\end{enumerate}
\end{definition}
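For a concrete instance (again our own illustration), an algebra over the list monad on $\Set$ amounts to a monoid: the structure morphism $\theta$ evaluates a list of elements to a single element. The Python sketch below takes $X$ to be the integers and $\theta = \mathrm{sum}$, encoding the additive monoid.

```python
# An algebra over the list monad: theta : TX -> X must make the
# associativity and unity diagrams of a T-algebra commute.

def T(f):
    return lambda xs: [f(x) for x in xs]

def eta(x):
    return [x]

def mu(xss):
    return [x for xs in xss for x in xs]

theta = sum  # structure morphism TX -> X for X = integers

# Associativity: theta . (T theta) == theta . mu_X on T^2 X.
xss = [[1, 2], [3, 4, 5]]
assert theta(T(theta)(xss)) == theta(mu(xss)) == 15

# Unity: theta . eta_X == id_X.
assert theta(eta(7)) == 7
```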
\begin{definition}\label{def:eilenberg-moore}
For a monad $(T,\mu,\eta)$ on a category $\C$, the \emph{Eilenberg-Moore adjunction}\index{Eilenberg-Moore adjunction}\index{adjunction!Eilenberg-Moore} is the adjunction
\[\begin{tikzcd}\C \rar[shift left]{T} & \alg(T) \lar[shift left]{U}\end{tikzcd}\] in which:
\begin{itemize}
\item The right adjoint $U$ is the forgetful functor $U(X,\theta)=X$.
\item The left adjoint sends an object $X \in \C$ to the\index{free algebra} free $T$-algebra \[\bigl(TX, \mu_X : T^2X \to TX\bigr).\defmark\]
\end{itemize}
\end{definition}
\section{Monoidal Categories}\label{sec:moncat}
In this section we recall the definitions of a monoidal category, a monoidal functor, a monoidal natural transformation, and their symmetric and braided versions. We also recall Mac Lane's Coherence Theorem for monoidal categories and discuss some examples. One may think of a monoidal category as a categorical generalization of a monoid, in which there is a way to multiply together objects and morphisms.
\begin{definition}\label{def:monoidal-category}
A \index{monoidal category}\index{category!monoidal}\emph{monoidal category} is a tuple
\[(\C, \otimes, \tensorunit, \alpha, \lambda, \rho)\]
consisting of:
\begin{itemize}
\item a category $\C$;
\item a functor $\otimes : \C \times \C \to \C$\label{notation:monoidal-product} called the \index{monoidal product}\emph{monoidal product};
\item an object $\tensorunit\in\C$ called the \index{monoidal unit}\emph{monoidal unit};
\item a natural isomorphism
\begin{equation}\label{mon-cat-alpha}
\begin{tikzcd}[column sep=large](X \otimes Y) \otimes Z \rar{\alpha_{X,Y,Z}}[swap]{\cong} & X\otimes (Y \otimes Z)\end{tikzcd}
\end{equation}
for all objects $X,Y,Z \in \C$ called the \index{associativity!isomorphism}\emph{associativity isomorphism};
\item natural isomorphisms
\begin{equation}\label{mon-cat-lambda}
\begin{tikzcd}\tensorunit \otimes X \rar{\lambda_X}[swap]{\cong} & X\end{tikzcd} \andspace \begin{tikzcd}X \otimes \tensorunit \rar{\rho_X}[swap]{\cong} & X\end{tikzcd}
\end{equation}
for all objects $X \in \C$ called the \emph{left unit isomorphism}\index{left unit isomorphism} and the \index{right unit isomorphism}\emph{right unit isomorphism}, respectively.
\end{itemize}
These data are required to satisfy the following axioms.
\begin{description}
\item[Unity Axioms]
The \index{unity!monoidal category}\emph{middle unity diagram}
\begin{equation}\label{monoidal-unit}
\begin{tikzcd}(X \otimes \tensorunit) \otimes Y \dar[swap]{\rho_X \otimes Y} \rar{\alpha_{X,\tensorunit,Y}}
& X \otimes (\tensorunit \otimes Y) \dar{X \otimes \lambda_Y}\\ X \otimes Y \rar[equal] & X \otimes Y\end{tikzcd}
\end{equation}
is commutative for all objects $X,Y \in \C$. Moreover, the equality \[\begin{tikzcd}\lambda_{\tensorunit} = \rho_{\tensorunit} : \tensorunit \otimes \tensorunit \rar{\cong} & \tensorunit\end{tikzcd}\] holds.
\item[Pentagon Axiom]
The pentagon\index{pentagon axiom}
\begin{equation}\label{pentagon-axiom}
\begin{tikzpicture}[commutative diagrams/every diagram]
\node (P0) at (90:2.3cm) {$(W \otimes X) \otimes (Y \otimes Z)$};
\node (P1) at (90+72:2cm) {$\bigl((W \otimes X) \otimes Y\bigr) \otimes Z$} ;
\node (P2) at (90+2*72:2cm) {\makebox[5ex][r]{$\bigl(W \otimes (X \otimes Y)\bigr) \otimes Z$}};
\node (P3) at (90+3*72:2cm) {\makebox[5ex][l]{$W \otimes \bigl((X \otimes Y) \otimes Z\bigr)$}};
\node (P4) at (90+4*72:2cm) {$W \otimes \bigl(X \otimes (Y \otimes Z)\bigr)$};
\path[commutative diagrams/.cd, every arrow, every label]
(P0) edge node {$\alpha_{W,X,Y\otimes Z}$} (P4)
(P1) edge node {$\alpha_{W\otimes X,Y,Z}$} (P0)
(P1) edge node[swap] {$\alpha_{W,X,Y} \otimes Z$} (P2)
(P2) edge node {$\alpha_{W,X\otimes Y,Z}$} (P3)
(P3) edge node[swap] {$W \otimes \alpha_{X,Y,Z}$} (P4);
\end{tikzpicture}
\end{equation}
is commutative for all objects $W,X,Y,Z \in \C$.
\end{description}
A \index{strict!monoidal category}\index{monoidal category!strict}\emph{strict monoidal category} is a monoidal category in which the components of $\alpha$, $\lambda$, and $\rho$ are all identity morphisms.
\end{definition}
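The prototypical example is the Cartesian monoidal structure on $\Set$, where $\otimes$ is the Cartesian product and $\alpha$ re-brackets tuples. The following Python sketch (an illustration of ours, not a proof) checks the pentagon axiom on sample elements by computing both boundary composites.

```python
# In (Set, x) the associator re-brackets pairs:
# alpha_{X,Y,Z}((x, y), z) = (x, (y, z)).

def alpha(p):
    (x, y), z = p
    return (x, (y, z))

def alpha_tensor_Z(p):       # alpha_{W,X,Y} tensor Z
    wxy, z = p
    return (alpha(wxy), z)

def W_tensor_alpha(p):       # W tensor alpha_{X,Y,Z}
    w, xyz = p
    return (w, alpha(xyz))

w, x, y, z = "w", "x", "y", "z"
start = (((w, x), y), z)     # an element of ((W x X) x Y) x Z

# Two-step path: alpha_{W tensor X, Y, Z} then alpha_{W, X, Y tensor Z}.
top = alpha(alpha(start))

# Three-step path: (alpha tensor Z), alpha_{W, X tensor Y, Z}, (W tensor alpha).
bottom = W_tensor_alpha(alpha(alpha_tensor_Z(start)))

assert top == bottom == (w, (x, (y, z)))
```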
\begin{convention}\label{conv:empty-tensor}
In a monoidal category, an \index{empty tensor product}\emph{empty tensor product}, written as\label{notation:empty-tensor} $X^{\otimes 0}$ or $X^{\otimes \varnothing}$, means the monoidal unit $\tensorunit$. We sometimes use concatenation as an abbreviation for the monoidal product, so for example \[XY = X \otimes Y, \qquad (XY)Z = (X\otimes Y)\otimes Z,\] and similarly for morphisms. We usually suppress $\alpha$, $\lambda$, and $\rho$ from the notation, and abbreviate a monoidal category to $(\C, \otimes, \tensorunit)$ or $\C$. To emphasize the ambient monoidal category $\C$, we decorate the monoidal structure accordingly as $\otimes^{\C}$, $\tensorunit^{\C}$, $\alpha^{\C}$, $\lambda^{\C}$, and $\rho^{\C}$.\dqed
\end{convention}
\begin{remark}
In a monoidal category:
\begin{enumerate}
\item The axiom $\lambda_{\tensorunit} = \rho_{\tensorunit}$ is actually a consequence of the middle unity diagram \eqref{monoidal-unit} and the pentagon axiom \eqref{pentagon-axiom}.
\item The diagrams
\begin{equation}\label{moncat-other-unit-axioms}
\begin{tikzcd}
(\tensorunit \otimes X) \otimes Y \dar[swap]{\lambda_X \otimes Y} \rar{\alpha_{\tensorunit,X,Y}}
& \tensorunit \otimes (X\otimes Y) \dar{\lambda_{X\otimes Y}}\\ X \otimes Y \rar[equal]& X \otimes Y\end{tikzcd}\qquad
\begin{tikzcd}
(X \otimes Y) \otimes \tensorunit \dar[swap]{\rho_{X \otimes Y}} \rar{\alpha_{X,Y,\tensorunit}}
& X \otimes (Y\otimes \tensorunit ) \dar{X \otimes \rho_Y}\\ X \otimes Y \rar[equal]& X \otimes Y\end{tikzcd}
\end{equation}
are commutative. They are called the \index{left unity!monoidal category}\emph{left unity diagram} and the \index{right unity!monoidal category}\emph{right unity diagram}, respectively.\dqed
\end{enumerate}
\end{remark}
\begin{example}[Reversed Monoidal Category]\label{ex:reversed-moncat}
Every monoidal category \[(\C,\otimes,\tensorunit,\alpha,\lambda,\rho)\] induces another monoidal category with the order of the monoidal product reversed. More precisely, we define the following structures.
\begin{itemize}
\item First we define the composite functor
\[\begin{tikzcd}
\C\times\C \rar{\tau} \arrow[bend left]{rr}{\otimes'} & \C\times\C \rar{\otimes} & \C,\end{tikzcd}\]
called the \index{monoidal product!reversed}\index{reversed monoidal!product}\emph{reversed monoidal product}, in which $\tau$ switches the two arguments.
\item Next we define the natural isomorphism
\[\begin{tikzcd}[column sep=large](X \otimes' Y) \otimes' Z \rar{\alpha'_{X,Y,Z}}[swap]{\cong} & X\otimes' (Y \otimes' Z)
\end{tikzcd}\]
as \[\alpha'_{X,Y,Z} = \alpha_{Z,Y,X}^{-1}.\]
\item Then we define the natural isomorphisms
\[\begin{tikzcd}\tensorunit \otimes' X \rar{\lambda'_X}[swap]{\cong} & X\end{tikzcd} \andspace
\begin{tikzcd}X \otimes' \tensorunit \rar{\rho'_X}[swap]{\cong} & X\end{tikzcd}\]
as \[\lambda'_X = \rho_X \andspace \rho'_X = \lambda_X,\] respectively.
\end{itemize}
Then\label{notation:crev} \[\C^{\rev}= (\C,\otimes',\tensorunit,\alpha',\lambda',\rho')\] is a monoidal category, called the \index{monoidal category!reversed}\index{reversed monoidal!category}\emph{reversed monoidal category} of $\C$. For example, the middle unity diagram \eqref{monoidal-unit} in $\C^{\rev}$ is the diagram
\[\begin{tikzcd}Y \otimes (\tensorunit \otimes X) \dar[swap]{Y \otimes \lambda_X} \rar{\alpha^{-1}_{Y,\tensorunit,X}}
& (Y \otimes \tensorunit) \otimes X \dar{\rho_Y \otimes X}\\
Y \otimes X \rar[equal] & Y \otimes X\end{tikzcd}\]
in $\C$, which is commutative by the middle unity diagram in $\C$. A similar argument proves the pentagon axiom in $\C^{\rev}$. We will come back to this example in the next chapter when we discuss dualities of bicategories.\dqed
\end{example}
\begin{example}[Opposite Monoidal Category]
\label{ex:opposite-monoidal-cat}
For each monoidal category $\C$, its opposite category\index{opposite!monoidal category}\index{monoidal category!opposite} $\Cop$ has a monoidal structure
\[(\Cop,\otimes^{\op}, \tensorunit,\alpha^{-1},\lambda^{-1},\rho^{-1})\]
with monoidal product
\[\begin{tikzcd}
\Cop \times \Cop \cong (\C\times\C)^{\op} \ar{r}{\otimes^{\op}} & \Cop
\end{tikzcd}\]
the opposite functor of $\otimes$, and with the same monoidal unit. Its associativity isomorphism, left unit isomorphism, and right unit isomorphism are the inverses of their counterparts in $\C$.\dqed
\end{example}
\begin{definition}\label{def:monoid}
A \emph{monoid}\index{monoid} in a monoidal category $\C$ is a triple\label{notation:monoid} $(X,\mu,\operadunit)$ with:
\begin{itemize}
\item $X$ an object in $\C$;
\item $\mu : X \otimes X \to X$ a morphism, called the \emph{multiplication};
\item $\operadunit : \tensorunit \to X$ a morphism, called the \emph{unit}.
\end{itemize}
These data are required to make the following associativity and unity diagrams commutative.
\[\begin{tikzcd}[column sep=large]
(X\otimes X) \otimes X \arrow{dd}[swap]{\mu\otimes X} \rar{\alpha} & X \otimes (X \otimes X) \dar{X\otimes \mu}\\ & X \otimes X \dar{\mu}\\
X \otimes X \arrow{r}{\mu} & X\end{tikzcd}\qquad
\begin{tikzcd}\tensorunit \otimes X \rar{\operadunit \otimes X} \dar{\cong}[swap]{\lambda} & X \otimes X \dar{\mu} & X \otimes \tensorunit \lar[swap]{X \otimes \operadunit} \dar{\rho}[swap]{\cong}\\ X \rar[equal] & X \rar[equal] & X \end{tikzcd}\]
A morphism of monoids \[f : (X,\mu^X,\operadunit^X) \to (Y,\mu^Y,\operadunit^Y)\] is a morphism $f : X \to Y$ in $\C$ that preserves the multiplications and the units in the sense that the diagrams
\[\begin{tikzcd}X\otimes X \dar[swap]{\mu^X} \rar{f\otimes f} & Y \otimes Y \dar{\mu^Y}\\ X \rar{f} & Y\end{tikzcd} \qquad
\begin{tikzcd}\tensorunit \dar[equal] \rar{\operadunit^X} & X \dar{f}\\ \tensorunit \rar{\operadunit^Y} & Y\end{tikzcd}\]
are commutative. The category of monoids in a monoidal category $\C$ is denoted by\label{notation:monoid-category}\index{category!of monoids} $\Mon(\C)$.
\end{definition}
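For instance, in $(\Set,\times,*)$ the strings under concatenation form a monoid, with the unit morphism picking out the empty string from the one-point set. A brief Python check of the axioms on sample elements (encoding ours):

```python
# A monoid in (Set, x): X = strings, mu = concatenation,
# unit : {()} -> X picks out the empty string.

def mu(p):
    a, b = p
    return a + b

def unit(_pt):
    return ""

# Associativity: mu . (mu x X) == mu . (X x mu), after re-bracketing.
a, b, c = "ab", "cd", "e"
assert mu((mu((a, b)), c)) == mu((a, mu((b, c)))) == "abcde"

# Unity: composing with unit recovers lambda and rho.
assert mu((unit(()), a)) == a
assert mu((a, unit(()))) == a
```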
\begin{example}\label{ex:monoid-as-cat}
Suppose $(X,\mu,\operadunit)$ is a monoid in the category $\Set$ with
sets as objects, functions as morphisms, and monoidal product given by
the Cartesian product. There are two ways to regard $(X,\mu,\operadunit)$ as a category.
\begin{enumerate}
\item There is a category\index{monoid!as a category}\index{category!from a monoid} $\Sigma X$ with one object $*$, morphism set $\Sigma X(*,*) = X$, composition $\mu : X \times X \to X$, and identity morphism $1_* = \operadunit$. The associativity and unity of the monoid $(X,\mu,\operadunit)$ become those of the category $\Sigma X$.
\item We may also regard the set $X$ as a discrete category\index{monoid!as a discrete strict monoidal category}\index{strict!monoidal category!from a monoid} $X^{\dis}$, so there are no non-identity morphisms. This discrete category is a strict monoidal category with monoidal product $\mu$ on objects, and monoidal unit $\operadunit$.
\end{enumerate}
We will come back to the category $\Sigma X$ in the next chapter.\dqed
\end{example}
\begin{example}\label{ex:opposite-monoid}
In the context of \Cref{ex:monoid-as-cat}, consider the \index{monoid!opposite}\index{opposite!monoid}\emph{opposite monoid} \[X^{\op} = (X,\muop,\operadunit)\] in which \[\muop(a,b) = \mu(b,a) \forspace a,b\in X.\]
\begin{enumerate}
\item There is an equality \[\Sigma(X^{\op}) = (\Sigma X)^{\op}\] of categories. This means that the one-object category of the opposite monoid is the opposite category of the one-object category of $(X,\mu,\operadunit)$.
\item Recall the reversed monoidal category in Example \ref{ex:reversed-moncat}. Then there is an equality \[(X^{\op})^{\dis} = (X^{\dis})^{\rev}\] of strict monoidal categories. In other words, the discrete strict monoidal category of the opposite monoid is the \index{reversed monoidal!category}reversed monoidal category of the discrete strict monoidal category of $(X,\mu,\operadunit)$.\dqed
\end{enumerate}
\end{example}
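The opposite multiplication can be probed on elements. For string concatenation, a noncommutative monoid, $\muop$ differs from $\mu$ but is again associative, so $(X,\muop,\operadunit)$ is indeed a monoid (a small Python illustration of ours):

```python
# Opposite monoid: mu_op(a, b) = mu(b, a).

def mu(a, b):
    return a + b

def mu_op(a, b):
    return mu(b, a)

# For a noncommutative monoid the two multiplications differ.
assert mu("x", "y") == "xy"
assert mu_op("x", "y") == "yx"

# mu_op is again associative, with the same unit (the empty string).
assert mu_op(mu_op("a", "b"), "c") == mu_op("a", mu_op("b", "c")) == "cba"
assert mu_op("a", "") == mu_op("", "a") == "a"
```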
\begin{definition}\label{def:monoidal-functor}
For monoidal categories $\C$ and $\D$, a \index{monoidal functor}\index{functor!monoidal}\emph{monoidal functor} \[(F,F_2,F_0) : \C \to \D\] consists of:
\begin{itemize}
\item a functor $F : \C \to \D$;
\item a natural transformation
\begin{equation}\label{monoidal-f2}
\begin{tikzcd}FX \otimes FY \rar{F_2} & F(X \otimes Y) \in \D,\end{tikzcd}
\end{equation}
where $X$ and $Y$ are objects in $\C$;
\item a morphism
\begin{equation}\label{monoidal-f0}
\begin{tikzcd}\tensorunit^{\D} \rar{F_0} & F\tensorunit^{\C} \in \D.\end{tikzcd}
\end{equation}
\end{itemize}
These data are required to satisfy the following associativity and unity axioms.
\begin{description}
\item[Associativity]
The diagram\index{associativity!monoidal functor}
\begin{equation}\label{f2}
\begin{tikzcd}\bigl(FX \otimes FY\bigr) \otimes FZ \rar{\alpha^{\D}} \dar[swap]{F_2 \otimes FZ} & FX \otimes \bigl(FY \otimes FZ\bigr) \dar{FX \otimes F_2}\\
F(X \otimes Y) \otimes FZ \dar[swap]{F_2} & FX \otimes F(Y \otimes Z) \dar{F_2}\\
F\bigl((X \otimes Y) \otimes Z\bigr) \rar{F\alpha^{\C}} &
F\bigl(X \otimes (Y \otimes Z)\bigr)\end{tikzcd}
\end{equation}
is commutative for all objects $X,Y,Z \in \C$.
\item[Left Unity]
The diagram\index{left unity!monoidal functor}
\begin{equation}\label{f0-left}
\begin{tikzcd}\tensorunit^{\D} \otimes FX \dar[swap]{F_0 \otimes FX} \rar{\lambda^{\D}_{FX}} & FX \\
F\tensorunit^{\C} \otimes FX \rar{F_2} & F(\tensorunit^{\C} \otimes X)
\uar[swap]{F\lambda^{\C}_X}\end{tikzcd}
\end{equation}
is commutative for all objects $X \in \C$.
\item[Right Unity]
The diagram\index{right unity!monoidal functor}
\begin{equation}\label{f0-right}
\begin{tikzcd}FX \otimes \tensorunit^{\D} \dar[swap]{FX \otimes F_0} \rar{\rho^{\D}_{FX}} & FX \\
FX \otimes F\tensorunit^{\C} \rar{F_2} & F(X \otimes \tensorunit^{\C})
\uar[swap]{F\rho^{\C}_X}\end{tikzcd}
\end{equation}
is commutative for all objects $X \in \C$.
\end{description}
A monoidal functor $(F,F_2,F_0)$ is often abbreviated to $F$.
A \index{strong monoidal functor}\index{monoidal functor!strong}\emph{strong monoidal functor} is a monoidal functor in which the morphisms $F_0$ and $F_2$ are all isomorphisms. A \index{strict!monoidal functor}\index{monoidal functor!strict}\emph{strict monoidal functor} is a monoidal functor in which the morphisms $F_0$ and $F_2$ are all identity morphisms.
\end{definition}
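A standard concrete instance (our illustration, not from the text) is the list functor on $(\Set,\times)$: it is lax monoidal, with $F_2$ given by the Cartesian product of lists and $F_0$ picking out the one-element list over the one-point set. The Python sketch below checks the associativity and left unity axioms on sample data.

```python
# Lax monoidal structure on the list functor F X = lists over X:
# F2 : FX x FY -> F(X x Y) is the Cartesian product of lists,
# F0 : {()} -> F{()} picks out the one-element list [()].

def F(f):
    return lambda xs: [f(x) for x in xs]

def F2(xs, ys):
    return [(x, y) for x in xs for y in ys]

def alpha(p):                 # associator of (Set, x)
    (x, y), z = p
    return (x, (y, z))

F0 = [()]
lam = lambda p: p[1]          # left unitor of (Set, x): drop the point

xs, ys, zs = [1, 2], ["a"], [True, False]

# Associativity: F(alpha) . F2 . (F2 x id) == F2 . (id x F2) . alpha.
left = F(alpha)(F2(F2(xs, ys), zs))
right = F2(xs, F2(ys, zs))
assert left == right

# Left unity: F(lambda) . F2 . (F0 x id) == lambda_{FX}.
assert F(lam)(F2(F0, xs)) == xs
```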
\begin{definition}\label{def:monoidal-natural-transformation}
For monoidal functors $F,G : \C \to \D$, a \index{monoidal natural transformation}\index{natural transformation!monoidal}\emph{monoidal natural transformation} $\theta : F \to G$ is a natural transformation between the underlying functors such that the diagrams
\begin{equation}\label{mon-nat-transf-F2}
\begin{tikzcd}[column sep=large]
FX \otimes FY \dar[swap]{F_2} \rar{\theta_X \otimes \theta_Y} & GX \otimes GY \dar{G_2}\\
F(X \otimes Y) \rar{\theta_{X\otimes Y}} & G(X \otimes Y)\end{tikzcd}
\end{equation}
and
\begin{equation}\label{mon-nat-transf-F0}
\begin{tikzcd}\tensorunit^{\D} \dar[equal] \rar{F_0} & F\tensorunit^{\C} \dar{\theta_{\tensorunit^{\C}}}\\
\tensorunit^{\D} \rar{G_0} & G\tensorunit^{\C}\end{tikzcd}
\end{equation}
are commutative for all objects $X,Y \in \C$.
\end{definition}
The following strictification result for monoidal categories is due to Mac Lane; see \cite{maclane-rice}, \cite[XI.3 Theorem 1]{maclane}, and \cite{joyal-street}.
\begin{theorem}[Mac Lane's Coherence]
\label{maclane-thm}\index{strictification!monoidal category}\index{coherence!monoidal category}\index{monoidal category!coherence}\index{Theorem!Mac Lane's Coherence}
For each monoidal category $\C$, there exist a strict monoidal category $\C_{\st}$ and an adjoint equivalence
\[\begin{tikzcd}\C \rar[shift left]{L} & \C_{\st} \lar[shift left]{R}\end{tikzcd}\]
with (i) both $L$ and $R$ strong monoidal functors and (ii) $RL=1_{\C}$.
\end{theorem}
In other words, every monoidal category can be strictified via an adjoint equivalence consisting of strong monoidal functors.
\begin{itemize}
\item Another version of the Coherence Theorem \cite[VII.2 Theorem 1]{maclane} describes explicitly the free monoidal category generated by one object.
\item A third version of the Coherence Theorem \cite[VII.2 Corollary]{maclane} states that every \emph{formal diagram}\index{formal diagram} in a monoidal category is commutative. A formal diagram is a diagram that involves only the associativity isomorphism, the unit isomorphisms, their inverses, identity morphisms, the monoidal product, and composites.
\item A fourth version of the Coherence Theorem states that, for each category $\C$, the unique strict monoidal functor from the free monoidal category generated by $\C$ to the free strict monoidal category generated by $\C$ is an equivalence of categories \cite[Theorem 1.2]{joyal-street}.
\end{itemize}
Next we consider symmetric monoidal categories.
\begin{definition}\label{def:symmetric-monoidal-category}
A \index{category!symmetric monoidal}\index{symmetric monoidal!category}\index{monoidal category!symmetric}\emph{symmetric monoidal category} is a pair $\left(\C, \xi\right)$ in which:
\begin{itemize}
\item $\C = (\C,\otimes,\tensorunit,\alpha,\lambda,\rho)$ is a monoidal category as in Definition \ref{def:monoidal-category}.
\item $\xi$ is a natural isomorphism\label{notation:symmetry-iso}
\begin{equation}\label{symmetry-isomorphism}
\begin{tikzcd}X \otimes Y \rar{\xi_{X,Y}}[swap]{\cong} & Y \otimes X\end{tikzcd}
\end{equation}
for objects $X,Y \in \C$, called the \index{symmetry!isomorphism}\emph{symmetry isomorphism}.
\end{itemize}
These data are required to satisfy the following three axioms.
\begin{description}
\item[Symmetry Axiom]
The diagram\index{symmetry!axiom}
\begin{equation}\label{monoidal-symmetry-axiom}
\begin{tikzcd}X \otimes Y \rar{\xi_{X,Y}} \arrow[equal]{dr} & Y \otimes X \dar{\xi_{Y,X}}\\ & X \otimes Y\end{tikzcd}
\end{equation}
is commutative for all objects $X,Y \in \C$.
\item[Unit Axiom]
The diagram\index{unity!symmetric monoidal category}
\begin{equation}\label{symmetry-unit}
\begin{tikzcd}X \otimes \tensorunit \dar[swap]{\rho_X} \rar{\xi_{X,\tensorunit}}
& \tensorunit \otimes X \dar{\lambda_X}\\ X \rar[equal] & X\end{tikzcd}
\end{equation}
is commutative for all objects $X \in \C$.
\item[Hexagon Axiom]
The diagram\index{hexagon axiom!symmetric monoidal category}
\begin{equation}
\label{hexagon-axiom}
\begin{tikzpicture}[commutative diagrams/every diagram]
\node (P0) at (0:2cm) {$(X \otimes Y) \otimes Z$};
\node (P1) at (60:2cm) {\makebox[5ex][l]{$X \otimes (Y\otimes Z)$}};
\node (P2) at (120:2cm) {\makebox[5ex][r]{$X \otimes (Z\otimes Y)$}};
\node (P3) at (180:2cm) {$(X \otimes Z) \otimes Y$};
\node (P4) at (240:2cm) {\makebox[5ex][r]{$Y \otimes (X\otimes Z)$}};
\node (P5) at (300:2cm) {\makebox[5ex][l]{$(Y \otimes X)\otimes Z$}};
\path[commutative diagrams/.cd, every arrow, every label]
(P3) edge node {$\alpha$} (P2)
(P2) edge node {$X \otimes \xi_{Z,Y}$} (P1)
(P1) edge node {$\alpha^{-1}$} (P0)
(P3) edge node[swap] {$\xi_{X\otimes Z,Y}$} (P4)
(P4) edge node {$\alpha^{-1}$} (P5)
(P5) edge node[swap] {$\xi_{Y,X}\otimes Z$} (P0);
\end{tikzpicture}
\end{equation}
is commutative for all objects $X,Y, Z \in \C$.
\end{description}
A symmetric monoidal category is said to be \index{strict!symmetric monoidal category}\index{symmetric monoidal!category!strict}\emph{strict} if the underlying monoidal category is strict.
\end{definition}
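In $(\Set,\times,*)$ the symmetry isomorphism is the swap of coordinates. The following Python sketch (an illustration of ours) checks the symmetry axiom and the two boundary composites of the hexagon \eqref{hexagon-axiom} on sample elements, starting from the vertex $(X \otimes Z)\otimes Y$.

```python
# Symmetry of (Set, x): xi swaps the two coordinates of a pair.

def xi(p):
    return (p[1], p[0])

def alpha(p):
    (x, y), z = p
    return (x, (y, z))

def alpha_inv(p):
    x, (y, z) = p
    return ((x, y), z)

# Symmetry axiom: xi_{Y,X} . xi_{X,Y} == id.
assert xi(xi((1, "a"))) == (1, "a")

x_tensor_xi = lambda p: (p[0], xi(p[1]))   # X tensor xi_{Z,Y}
xi_tensor_z = lambda p: (xi(p[0]), p[1])   # xi_{Y,X} tensor Z

start = (("x", "z"), "y")                  # element of (X tensor Z) tensor Y

# Upper composite: alpha, then X tensor xi, then alpha inverse.
top = alpha_inv(x_tensor_xi(alpha(start)))
# Lower composite: xi_{X tensor Z, Y}, then alpha inverse, then xi tensor Z.
bottom = xi_tensor_z(alpha_inv(xi(start)))

assert top == bottom == (("x", "y"), "z")
```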
\begin{definition}\label{def:com-monoid}
A \index{commutative monoid}\index{monoid!commutative}\emph{commutative monoid} in a symmetric monoidal category $(\C,\xi)$ is a monoid $(X,\mu,\operadunit)$ in $\C$ such that the multiplication $\mu$ is commutative in the sense that the diagram
\[\begin{tikzcd}X\otimes X \dar[swap]{\mu} \rar{\xi_{X,X}} & X \otimes X \dar{\mu}\\ X \rar[equal] & X\end{tikzcd}\]
is commutative. A morphism of commutative monoids is a morphism of the underlying monoids. The category of commutative monoids in $\C$ is denoted by $\CMon(\C)$.\label{notation:cmon}
\end{definition}
\begin{definition}\label{def:symmetric-monoidal-functor}
For symmetric monoidal categories $\C$ and $\D$, a \index{functor!symmetric monoidal}\index{symmetric monoidal!functor}\index{monoidal functor!symmetric}\emph{symmetric monoidal functor} $(F,F_2,F_0) : \C \to \D$
is a monoidal functor between the underlying monoidal categories that is compatible with the symmetry isomorphisms, in the sense that the diagram
\begin{equation}\label{monoidal-functor-symmetry}
\begin{tikzcd}[column sep=large]
FX \otimes FY \dar[swap]{F_2} \rar{\xi_{FX,FY}}[swap]{\cong} & FY \otimes FX \dar{F_2} \\ F(X \otimes Y) \rar{F\xi_{X,Y}}[swap]{\cong} & F(Y \otimes X)\end{tikzcd}
\end{equation}
is commutative for all objects $X,Y \in \C$. A symmetric monoidal functor is said to be \emph{strong}\index{symmetric monoidal!functor!strong}\index{symmetric monoidal!functor!strict} (resp., \emph{strict}) if the underlying monoidal functor is so.
\end{definition}
The symmetric version of the Coherence Theorem \ref{maclane-thm} states that every symmetric monoidal category can be strictified to a strict symmetric monoidal category via an adjoint equivalence consisting of strong symmetric monoidal functors. The following variations from \cite{joyal-street} are also true.
\begin{itemize}
\item Every \index{formal diagram}\emph{formal diagram} in a symmetric monoidal category is commutative. Here a formal diagram is defined as in the non-symmetric case by allowing the symmetry isomorphism as well.
\item The unique strict symmetric monoidal functor from the free symmetric monoidal category generated by a category $\C$ to the free strict symmetric monoidal category generated by $\C$ is an equivalence of categories.
\end{itemize}
\begin{example}\label{ex:sym-mon-cat}
Here are some examples of symmetric monoidal categories.
\begin{itemize}
\item \label{notation:set} $(\Set, \times, *)$ : The category of sets and functions.\index{set} A monoid in $\Set$ is a monoid in the usual sense.
\item $(\Cat, \times, \boldone)$ : The category\index{category!of small categories} of small categories and functors. Here $\boldone$ is a category with one object and only the identity morphism.
\item \label{notation:hilb} $(\Hilb, \cotimes, \fieldc)$: The category of complex \index{Hilbert space}Hilbert spaces and bounded linear maps, with $\cotimes$ the completed tensor product of Hilbert spaces \cite{weidmann}.\dqed
\end{itemize}
\end{example}
\begin{definition}\label{def:sym-mon-closed}
A symmetric monoidal category $\C$ is \index{closed category}\index{category!closed}\emph{closed} if, for each object $X$, the functor \[-\otimes X : \C\to\C\] admits a right adjoint, denoted by\label{notation:internal-hom} $[X,-]$ and called the \index{internal hom}\emph{internal hom}.
\end{definition}
Next we turn to braided monoidal categories.
\begin{definition}\label{def:braided-monoidal-category}\index{braided!monoidal category}\index{monoidal category!braided}\index{category!braided monoidal}
A \emph{braided monoidal category} is a pair $(\C,\xi)$ in which:
\begin{itemize}
\item $(\C,\otimes,\tensorunit,\alpha,\lambda,\rho)$ is a monoidal category as in Definition \ref{def:monoidal-category}.
\item $\xi$ is a natural isomorphism
\begin{equation}\label{braiding-isomorphism}
\begin{tikzcd}[column sep=large]
X \otimes Y \rar{\xi_{X,Y}}[swap]{\cong} & Y \otimes X\end{tikzcd}
\end{equation}
for objects $X,Y \in \C$, called the \index{braiding!braided monoidal category}\emph{braiding}.
\end{itemize}
These data are required to satisfy the following axioms.
\begin{description}
\item[Unit Axiom] The diagram\index{unity!braided monoidal category}
\begin{equation}\label{braiding-unit}
\begin{tikzcd}X \otimes \tensorunit \dar[swap]{\rho} \rar{\xi_{X,\tensorunit}} & \tensorunit \otimes X \dar{\lambda}\\
X \rar[equal] & X\end{tikzcd}
\end{equation}
is commutative for all objects $X\in \C$.
\item[Hexagon Axioms] The following two hexagon diagrams\index{hexagon axiom!braided monoidal category} are required to be commutative for objects $X,Y,Z \in \C$.
\begin{equation}\label{hexagon-b1}
\begin{tikzpicture}[commutative diagrams/every diagram]
\node (P0) at (0:2cm) {$Y \otimes (Z\otimes X)$};
\node (P1) at (60:2cm) {\makebox[5ex][l]{$Y \otimes (X \otimes Z)$}};
\node (P2) at (120:2cm) {\makebox[5ex][r]{$(Y \otimes X) \otimes Z$}};
\node (P3) at (180:2cm) {$(X \otimes Y) \otimes Z$};
\node (P4) at (240:2cm) {\makebox[5ex][r]{$X \otimes (Y \otimes Z)$}};
\node (P5) at (300:2cm) {\makebox[5ex][l]{$(Y \otimes Z) \otimes X$}};
\path[commutative diagrams/.cd, every arrow, every label]
(P3) edge node {$\xi_{X,Y}\otimes Z$} (P2)
(P2) edge node {$\alpha$} (P1)
(P1) edge node {$Y \otimes \xi_{X,Z}$} (P0)
(P3) edge node[swap] {$\alpha$} (P4)
(P4) edge node {$\xi_{X,Y\otimes Z}$} (P5)
(P5) edge node[swap] {$\alpha$} (P0);
\end{tikzpicture}
\end{equation}
\begin{equation}\label{hexagon-b2}
\begin{tikzpicture}[commutative diagrams/every diagram]
\node (P0) at (0:2cm) {$(Z\otimes X) \otimes Y$};
\node (P1) at (60:2cm) {\makebox[5ex][l]{$(X \otimes Z) \otimes Y$}};
\node (P2) at (120:2cm) {\makebox[5ex][r]{$X \otimes (Z \otimes Y)$}};
\node (P3) at (180:2cm) {$X \otimes (Y \otimes Z)$};
\node (P4) at (240:2cm) {\makebox[5ex][r]{$(X \otimes Y) \otimes Z$}};
\node (P5) at (300:2cm) {\makebox[5ex][l]{$Z \otimes (X \otimes Y)$}};
\path[commutative diagrams/.cd, every arrow, every label]
(P3) edge node {$X\otimes \xi_{Y,Z}$} (P2)
(P2) edge node {$\alpha^{-1}$} (P1)
(P1) edge node {$\xi_{X,Z}\otimes Y$} (P0)
(P3) edge node[swap] {$\alpha^{-1}$} (P4)
(P4) edge node {$\xi_{X\otimes Y, Z}$} (P5)
(P5) edge node[swap] {$\alpha^{-1}$} (P0);
\end{tikzpicture}
\end{equation}
\end{description}
A braided monoidal category is said to be\index{strict!braided monoidal category}\index{braided!monoidal category!strict} \emph{strict} if the underlying monoidal category is strict.
A\index{functor!braided monoidal}\index{braided!monoidal functor}\index{monoidal functor!braided} \emph{braided monoidal functor} is defined just like a symmetric monoidal functor, and similarly for the \emph{strong} and \emph{strict} versions.
\end{definition}
\begin{explanation}\label{expl:hexagon-axioms}
The two hexagon diagrams \eqref{hexagon-b1} and \eqref{hexagon-b2} may be visualized as the \index{braid}braids, read bottom-to-top,
\begin{center}\begin{tikzpicture}[xscale=.6, yscale=.25]
\foreach \x in {0,2,4} {\coordinate (A\x) at (\x,-1.5); \coordinate (B\x) at (\x,5);}
\draw[strand] (A0) to[out=90, in=270] (B4);
\foreach \x in {2,4} {\pgfmathsetmacro\xminus{\x -2}
\draw[line width=7pt, white] (A\x) to [out=90, in=270] (B\xminus);
\draw[strand] (A\x) to [out=90, in=270] (B\xminus);}
\node at (0,-2.2) {\scriptsize{$X$}};
\node at (2,-2.2) {\scriptsize{$Y$}};
\node at (4,-2.2) {\scriptsize{$Z$}};
\end{tikzpicture}\hspace{1.5cm}
\begin{tikzpicture}[xscale=.6, yscale=.25]
\foreach \x in {0,2,4} {\coordinate (A\x) at (\x,-1.5); \coordinate (B\x) at (\x,5);}
\draw[strand] (A0) to[out=90, in=270] (B2); \draw[strand] (A2) to[out=90, in=270] (B4);
\draw[line width=7pt, white] (A4) to [out=90, in=270] (B0);
\draw[strand] (A4) to [out=90, in=270] (B0);
\node at (0,-2.2) {\scriptsize{$X$}};
\node at (2,-2.2) {\scriptsize{$Y$}};
\node at (4,-2.2) {\scriptsize{$Z$}};
\end{tikzpicture}\end{center}
in the \index{braid!group}braid group $B_3$ \cite{artin}, with the braiding $\xi$ interpreted as the generator
\begin{tikzpicture}[xscale=.3,yscale=.25,baseline={(0,0).base},strand]
\draw (0,0) to (1,1);
\draw[line width=4pt, white] (1,0) to (0,1);
\draw (1,0) to (0,1);
\end{tikzpicture}
in the braid group $B_2$. On the left, the two strings labeled by $Y$ and $Z$ cross over the string labeled by $X$. The two composites along the boundary of the hexagon diagram \eqref{hexagon-b1} correspond to passing $Y$ and $Z$ over $X$ either one at a time, or both at once. On the right, the string labeled by $Z$ crosses over the two strings labeled by $Y$ and $X$. The two composites along the boundary of \eqref{hexagon-b2} likewise correspond to the two ways of passing $Z$ over $X$ and $Y$.\dqed
\end{explanation}
The braided version\label{braided-coherence} of the Coherence Theorem \ref{maclane-thm} states that every braided monoidal category can be strictified to a strict braided monoidal category via an adjoint equivalence consisting of strong braided monoidal functors. The following variations from \cite{joyal-street} are also true.
\begin{itemize}
\item A \index{formal diagram!braided monoidal category}\emph{formal diagram}, defined as in the symmetric case with the braiding in place of the symmetry isomorphism, in a braided monoidal category is commutative if and only if composites with the same (co)domain have the same underlying braid.
\item For each category $\C$, the unique strict braided monoidal functor from the free braided monoidal category generated by $\C$ to the free strict braided monoidal category generated by $\C$ is an equivalence of categories.
\end{itemize}
\section{Enriched Categories}\label{sec:enriched-cat}
In this section we recall some basic definitions regarding enriched categories, which will be useful when we discuss $2$-categories. While a category has morphism sets, an enriched category has morphism objects in another category $\V$. The composition, identity morphisms, associativity, and unity are all phrased in the category $\V$. Fix a monoidal category $(\V,\otimes,\tensorunit,\alpha,\lambda,\rho)$ as in \Cref{def:monoidal-category}.
\begin{definition}\label{def:enriched-category}
A \emph{$\V$-category} $\C$, also called a \index{category!enriched}\index{enriched!category}\emph{category enriched in $\V$}, consists of:
\begin{itemize}
\item a class $\Ob(\C)$ of objects in $\C$;
\item for each pair of objects $X,Y$ in $\C$, an object $\C(X,Y)$ in $\V$, called the \index{hom object}\emph{hom object} with domain $X$ and codomain $Y$;
\item for each triple of objects $X,Y,Z$ in $\C$, a morphism
\[\begin{tikzcd}[column sep=large] \C(Y,Z) \otimes \C(X,Y) \rar{m_{XYZ}} & \C(X,Z)
\end{tikzcd}\]
in $\V$, called the \index{composition!enriched category}\emph{composition};
\item for each object $X$ in $\C$, a morphism
\[\begin{tikzcd} \tensorunit \rar{i_X} & \C(X,X)\end{tikzcd}\]
in $\V$, called the \emph{identity} of $X$.
\end{itemize}
These data are required to make the \index{associativity!enriched category}\emph{associativity diagram}
\begin{equation}\label{enriched-cat-associativity}
\begin{tikzcd}
\bigl(\C(Y,Z) \otimes \C(X,Y)\bigr) \otimes \C(W,X) \arrow{dd}[swap]{m \otimes 1} \rar{\alpha} & \C(Y,Z) \otimes \bigl(\C(X,Y)\otimes \C(W,X)\bigr) \dar{1 \otimes m}\\
& \C(Y,Z) \otimes \C(W,Y) \dar{m}\\
\C(X,Z) \otimes \C(W,X) \rar{m} & \C(W,Z)
\end{tikzcd}
\end{equation}
and the \index{unity!enriched category}\emph{unity diagram}
\begin{equation}\label{enriched-cat-unity}
\begin{tikzcd}
\tensorunit \otimes \C(X,Y) \dar[swap]{i_Y \otimes 1} \rar{\lambda} & \C(X,Y) \dar[equal] & \C(X,Y) \otimes \tensorunit \lar[swap]{\rho} \dar{1 \otimes i_X}\\
\C(Y,Y) \otimes \C(X,Y) \rar{m} & \C(X,Y) & \C(X,Y) \otimes \C(X,X) \lar[swap]{m}
\end{tikzcd}
\end{equation}
commute for objects $W,X,Y,Z$ in $\C$. This finishes the definition of a $\V$-category. A $\V$-category $\C$ is \emph{small} if $\Ob(\C)$ is a set.
\end{definition}
\begin{example}\label{ex:enriched-cat-examples}
Here are some examples of enriched categories.
\begin{enumerate}
\item A $\Set$-category, for the symmetric monoidal category $(\Set,\times,*)$ of sets, is precisely a category in the usual sense.
\item\label{notation:top} A $\Top$-category, for the symmetric monoidal category $(\Top,\times,*)$ of topological spaces, is usually called a \index{topological category}\index{category!topological}\emph{topological category}. If we restrict to compactly generated Hausdorff spaces, then an example of a $\Top$-category is $\Top$ itself. For two spaces $X$ and $Y$, the set $\Top(X,Y)$ of continuous maps from $X$ to $Y$ is given the compact-open topology.
\item\label{notation:ab} An $\Ab$-category, for the symmetric monoidal category $(\Ab,\otimes,\bbZ)$ of abelian groups, is sometimes called a \index{pre-additive category}\index{category!pre-additive}\emph{pre-additive category} in the literature. Explicitly, an $\Ab$-category $\C$ is a category in which each morphism set $\C(X,Y)$ is equipped with the structure of an abelian group such that composition distributes over addition, in the sense that
\[h(g_1+g_2)f = hg_1f + hg_2f\] when the compositions are defined.
\item\label{notation:ch} For a commutative ring $R$, suppose $(\Ch,\otimes_R,R)$ is the symmetric monoidal category of chain complexes of $R$-modules. A $\Ch$-category is usually called a \index{differential graded category}\index{category!differential graded}\emph{differential graded category}.
\item A symmetric monoidal closed category $\V$ becomes a $\V$-category with hom objects the internal hom $[X,Y]$ for objects $X,Y$ in $\V$. The composition $m$ is induced by the adjunction between $-\otimes X$ and $[X,-]$. The identity $i_X$ is adjoint to the left unit isomorphism $\lambda_X : \tensorunit \otimes X \cong X$.
\item We will see in \Cref{ch:2cat_bicat} that $\Cat$-categories are locally small $2$-categories.
\end{enumerate}
Although the definition of a $\V$-category does not require $\V$ to be symmetric, in practice $\V$ is often a symmetric monoidal category.\dqed
\end{example}
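As a hedged sketch (an unpacking not spelled out above), the following display records how, for $\V = (\Set,\times,*)$, the structure morphisms of a $\Set$-category recover the data of an ordinary category; the notation matches \Cref{def:enriched-category}.

```latex
% With V = (Set, x, *), the composition and identity morphisms of a
% Set-category C unpack to ordinary composition and identities:
\[
m_{XYZ} : \C(Y,Z) \times \C(X,Y) \to \C(X,Z),
\qquad (g,f) \mapsto g \circ f,
\]
\[
i_X : * \to \C(X,X), \qquad * \mapsto 1_X.
\]
% Evaluating \eqref{enriched-cat-associativity} at an element ((h,g),f)
% gives (hg)f = h(gf), and evaluating \eqref{enriched-cat-unity} at f
% gives 1_Y f = f = f 1_X, which are the usual category axioms.
```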
Next we recall functors, natural transformations, adjunctions, and monads in the enriched setting. In the next few definitions, the reader will notice that we recover the usual notions in \Cref{sec:categories} when $\V=\Set$.
\begin{definition}\label{def:enriched-functor}
Suppose $\C$ and $\D$ are $\V$-categories. A \index{functor!enriched}\index{enriched!functor}\emph{$\V$-functor} $F : \C \to \D$ consists of:
\begin{itemize}
\item an assignment on objects
\[\Ob(\C) \to \Ob(\D), \qquad X \mapsto FX;\]
\item for each pair of objects $X,Y$ in $\C$, a morphism
\[\begin{tikzcd}
\C(X,Y) \rar{F_{XY}} & \D\bigl(FX,FY\bigr)\end{tikzcd}\] in $\V$.
\end{itemize}
These data are required to satisfy the following two conditions.
\begin{description}
\item[Composition] For each triple of objects $X,Y,Z$ in $\C$, the diagram
\[\begin{tikzcd}
\C(Y,Z) \otimes \C(X,Y) \rar{m} \dar[swap]{F \otimes F} & \C(X,Z) \dar{F}\\
\D(FY,FZ) \otimes \D(FX,FY) \rar{m} & \D(FX,FZ)\end{tikzcd}\]
in $\V$ is commutative.
\item[Identities] For each object $X\in\C$, the diagram
\[\begin{tikzcd}
\tensorunit \dar[equal] \rar{i_X} & \C(X,X) \dar{F}\\
\tensorunit \rar{i_{FX}} & \D(FX,FX)
\end{tikzcd}\]
in $\V$ is commutative.
\end{description}
Moreover:
\begin{itemize}
\item For $\V$-functors $F : \C \to \D$ and $G : \D \to \E$, their composition \[GF : \C\to\E\] is the $\V$-functor defined by composing the assignments on objects and forming the composite \[(GF)_{XY} = G_{FX,FY} F_{XY} : \C(X,Y) \to \E(GFX,GFY)\] in $\V$ on hom objects.
\item The \index{identity!enriched functor}\index{enriched!identity functor}\emph{identity $\V$-functor} of $\C$, denoted $1_{\C} : \C\to \C$, is given by the identity map on $\Ob(\C)$ and the identity morphism $1_{\C(X,Y)}$ for objects $X,Y$ in $\C$.\defmark
\end{itemize}
\end{definition}
\begin{definition}\label{def:enriched-natural-transformation}
Suppose $F,G : \C\to\D$ are $\V$-functors between $\V$-categories $\C$ and $\D$.
\begin{enumerate}
\item A \index{enriched!natural transformation}\index{natural transformation!enriched}\emph{$\V$-natural transformation} $\theta : F\to G$ consists of a morphism \[\theta_X : \tensorunit \to \D(FX,GX)\] in $\V$, called a \emph{component} of $\theta$, for each object $X$ in $\C$, such that the diagram
\begin{equation}\label{enriched-nt-naturality}
\begin{tikzcd}[column sep=small]
\C(X,Y) \dar{\cong}[swap]{\rho^{-1}} \rar{\lambda^{-1}}[swap]{\cong} & \tensorunit \otimes \C(X,Y) \dar{\theta_Y \otimes F} \\
\C(X,Y) \otimes \tensorunit \dar[swap]{G\otimes\theta_X} & \D(FY,GY) \otimes \D(FX,FY) \dar{m} \\
\D(GX,GY) \otimes \D(FX,GX) \rar{m} & \D(FX,GY)
\end{tikzcd}
\end{equation}
is commutative for objects $X,Y$ in $\C$.
\item The \index{enriched!identity natural transformation}\emph{identity $\V$-natural transformation of $F$}, denoted by $1_F : F \to F$, is defined by the component \[(1_F)_X = i_{FX} : \tensorunit \to \D(FX,FX)\] for each object $X$ in $\C$.\defmark
\end{enumerate}
\end{definition}
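As a hedged consistency check (not part of the definition above), when $\V = \Set$ each component $\theta_X$ picks out a morphism $FX \to GX$, and the diagram \eqref{enriched-nt-naturality} recovers ordinary naturality.

```latex
% With V = Set, chase an element f in C(X,Y) around the diagram
% (enriched-nt-naturality).  The top-right composite sends f to
% theta_Y composed with Ff, and the left-bottom composite sends f to
% Gf composed with theta_X, so commutativity says precisely
\[
\theta_Y \circ Ff = Gf \circ \theta_X
\qquad \text{for all } f \in \C(X,Y),
\]
% which is the usual naturality square for theta.
```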
As with ordinary natural transformations, there are two types of composition for $\V$-natural transformations.
\begin{definition}\label{def:enriched-nt-composition}
Suppose $\theta : F \to G$ is a $\V$-natural transformation for $\V$-functors $F,G : \C\to\D$.
\begin{enumerate}
\item Suppose $\phi : G \to H$ is another $\V$-natural transformation for a $\V$-functor $H : \C\to\D$. The \index{enriched!natural transformation!vertical composition}\index{vertical composition!enriched natural transformation}\emph{vertical composition} \[\phi\theta : F \to H\] is the $\V$-natural transformation whose component $(\phi\theta)_X$ is the composite
\[\begin{tikzcd}[column sep=large]
\tensorunit \rar{(\phi\theta)_X} \dar{\cong}[swap]{\lambda^{-1}} & \D(FX,HX)\\
\tensorunit \otimes \tensorunit \rar{\phi_X\otimes\theta_X} & \D(GX,HX) \otimes \D(FX,GX) \uar[swap]{m}
\end{tikzcd}\]
in $\V$ for each object $X$ in $\C$.
\item Suppose $\theta' : F'\to G'$ is a $\V$-natural transformation for $\V$-functors $F',G' : \D\to\E$ with $\E$ a $\V$-category. The \index{enriched!natural transformation!horizontal composition}\index{horizontal composition!enriched natural transformation}\emph{horizontal composition} \[\theta' * \theta : F'F \to G'G\]
is the $\V$-natural transformation whose component $(\theta'*\theta)_X$, for an object $X$ in $\C$, is defined as the composite
\begin{equation}\label{enriched-hcomp-component}
\begin{tikzcd}
\tensorunit \arrow{dd}{\cong}[swap]{\lambda^{-1}} \rar{(\theta'*\theta)_X} & \E(F'FX,G'GX)\\
& \E(F'GX,G'GX) \otimes \E(F'FX,F'GX) \uar[swap]{m}\\
\tensorunit \otimes \tensorunit \rar{\theta'_{GX}\otimes \theta_X} & \E(F'GX,G'GX) \otimes \D(FX,GX) \uar[swap]{1\otimes F'}
\end{tikzcd}
\end{equation}
in $\V$.\defmark
\end{enumerate}
\end{definition}
For ordinary categories, adjunctions can be characterized in terms of the unit, the counit, and the triangle identities \eqref{triangle-identities}. In the enriched setting, we use the triangle identities as the definition for an adjunction.
\begin{definition}\label{def:enriched-adjunction}
Suppose $\C$ and $\D$ are $\V$-categories, and $L : \C\to\D$ and $R : \D\to\C$ are $\V$-functors. A \index{adjunction!enriched}\index{enriched!adjunction}\emph{$\V$-adjunction} $L\dashv R$ consists of
\begin{itemize}
\item a $\V$-natural transformation $\eta : 1_{\C} \to RL$ called the \emph{unit}, and
\item a $\V$-natural transformation $\varepsilon : LR \to 1_{\D}$ called the \emph{counit},
\end{itemize}
such that the diagrams
\[\begin{tikzcd}[column sep=small]
& RLR \arrow{rd}{1_R * \varepsilon} &\\
R \arrow{rr}{1_R} \arrow{ru}{\eta * 1_R} && R
\end{tikzcd}\qquad
\begin{tikzcd}[column sep=small]
& LRL \arrow{rd}{\varepsilon * 1_L} &\\
L \arrow{rr}{1_L} \arrow{ru}{1_L * \eta} && L
\end{tikzcd}\]
commute. In this case, $L$ is called the \index{left adjoint!enriched}\emph{left adjoint}, and $R$ is called the \index{right adjoint!enriched}\emph{right adjoint}.
\end{definition}
\begin{definition}\label{def:enriched-natural-iso}
Suppose $F,G : \C\to\D$ are $\V$-functors.
\begin{enumerate}
\item A $\V$-natural transformation $\theta : F \to G$ is called a \index{natural isomorphism!enriched}\index{enriched!natural isomorphism}\emph{$\V$-natural isomorphism} if there exists a $\V$-natural transformation $\theta^{-1} : G \to F$ such that the equalities
\[\theta^{-1}\theta = 1_F \andspace \theta\theta^{-1} = 1_G\] hold.
\item $F$ is called a \index{equivalence!enriched}\index{enriched!equivalence}\emph{$\V$-equivalence} if there exist
\begin{itemize}
\item a $\V$-functor $F' : \D\to\C$ and
\item $\V$-natural isomorphisms \!$\begin{tikzcd}[column sep=scriptsize]\eta : 1_{\C} \rar{\cong} & F'F\end{tikzcd}$\! and \!$\begin{tikzcd}[column sep=scriptsize]\varepsilon : FF' \rar{\cong} & 1_{\D}.\end{tikzcd}$\defmark
\end{itemize}
\end{enumerate}
\end{definition}
\begin{definition}\label{def:enriched-monad}
A \index{monad!enriched}\index{enriched!monad}\emph{$\V$-monad} in a $\V$-category $\C$ is a triple $(T,\mu,\eta)$ consisting of
\begin{itemize}
\item a $\V$-functor $T : \C \to \C$,
\item a $\V$-natural transformation $\mu : T^2 \to T$ called the \emph{multiplication}, and
\item a $\V$-natural transformation $\eta : 1_{\C} \to T$ called the \emph{unit},
\end{itemize}
such that the \index{associativity!enriched monad}associativity and \index{unity!enriched monad}unity diagrams
\[\begin{tikzcd}
T^3 \dar[swap]{\mu * 1_T} \rar{1_T*\mu} & T^2 \dar{\mu}\\ T^2 \rar{\mu} & T\end{tikzcd}\qquad
\begin{tikzcd}
1_\C T \dar[equal] \rar{\eta *1_T} & T^2 \dar[swap]{\mu} & T1_\C \lar[swap]{1_T*\eta} \dar[equal]\\
T \rar[equal] & T \rar[equal] & T\end{tikzcd}\] are commutative. We often abbreviate such a monad to $T$.
\end{definition}
\begin{definition}\label{def:enriched-monad-algebra}
Suppose $(T,\mu,\eta)$ is a $\V$-monad in a $\V$-category $\C$.
\begin{enumerate}
\item A \index{algebra!of an enriched monad}\emph{$T$-algebra} is a pair $(X,\theta)$ consisting of
\begin{itemize}
\item an object $X$ in $\C$ and
\item a morphism $\theta : \tensorunit \to \C(TX,X)$ in $\V$ called the \emph{structure morphism},
\end{itemize}
such that the associativity diagram
\[\begin{tikzcd}[column sep=tiny]
\tensorunit \dar{\cong}[swap]{\lambda^{-1}} \rar{\lambda^{-1}}[swap]{\cong} & \tensorunit \otimes \tensorunit \dar{\theta \otimes \mu_X}\\
\tensorunit \otimes \tensorunit \dar[swap]{\theta\otimes\theta} & \C(TX,X) \otimes \C(T^2X,TX) \arrow{dd}{m}\\
\C(TX,X) \otimes \C(TX,X) \dar[swap]{1\otimes T} &\\
\C(TX,X) \otimes \C(T^2X,TX) \rar{m} & \C(T^2X,X)
\end{tikzcd}\]
and the unity diagram
\[\begin{tikzcd}
\tensorunit \dar{\cong}[swap]{\lambda^{-1}} \rar{i_X} & \C(X,X)\\
\tensorunit\otimes\tensorunit \rar{\theta\otimes\eta_X} &
\C(TX,X)\otimes \C(X,TX) \arrow{u}[swap]{m}
\end{tikzcd}\]
are commutative.
\item For $T$-algebras $(X,\theta^X)$ and $(Y,\theta^Y)$, a \index{morphism!enriched monad algebras}\emph{morphism of $T$-algebras} \[f : (X,\theta^X) \to (Y,\theta^Y)\] is a morphism $f : \tensorunit \to \C(X,Y)$ in $\V$ such that the diagram
\[\begin{tikzcd}
\tensorunit \dar{\cong}[swap]{\lambda^{-1}} \rar{\lambda^{-1}}[swap]{\cong} & \tensorunit\otimes\tensorunit \rar{\theta^Y\otimes f} & \C(TY,Y) \otimes \C(X,Y) \dar{1\otimes T}\\
\tensorunit\otimes\tensorunit \dar[swap]{f\otimes\theta^X} && \C(TY,Y) \otimes \C(TX,TY) \dar{m}\\
\C(X,Y)\otimes \C(TX,X) \arrow{rr}{m} && \C(TX,Y)
\end{tikzcd}\]
is commutative.\defmark
\end{enumerate}
\end{definition}
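As a hedged sketch, for $\V = \Set$ the structure morphism picks out an ordinary morphism $\theta : TX \to X$, and the two diagrams above reduce to the familiar monad algebra axioms.

```latex
% Chasing elements with V = Set: the associativity diagram says the two
% ways of collapsing T^2 X agree, and the unity diagram says eta_X
% followed by theta is the identity:
\[
\theta \circ \mu_X = \theta \circ T\theta : T^2X \to X,
\qquad
\theta \circ \eta_X = 1_X : X \to X.
\]
```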
\section{Exercises and Notes}\label{sec:category-exercises}
\begin{exercise}
Check that the vertical composition of two natural transformations, when it is defined, is actually a natural transformation, and that vertical composition is associative and unital. Do the same for horizontal composition.
\end{exercise}
\begin{exercise}
Repeat the previous exercise for $\V$-natural transformations for a monoidal category $\V$.
\end{exercise}
\begin{exercise}
Suppose $\theta : F \to G$ is a natural transformation. Prove that $\theta$ is a \index{natural isomorphism}natural isomorphism if and only if there exists a unique natural transformation $\phi : G \to F$ such that $\phi\theta = 1_F$ and $\theta \phi = 1_G$.
\end{exercise}
\begin{exercise}
For an adjunction $L \dashv R$, prove the triangle identities \eqref{triangle-identities}.
\end{exercise}
\begin{exercise}
Prove the alternative characterization of an adjunction stated at the end of the paragraph containing \eqref{adjunction-unit}.
\end{exercise}
\begin{exercise}
Prove that for a functor $F$, the following statements are equivalent.\index{characterization of!an equivalence}
\begin{enumerate}[label=(\roman*)]
\item $F$ is part of an adjoint equivalence.
\item $F$ is an equivalence.
\item $F$ is both fully faithful and essentially surjective.
\end{enumerate}
\end{exercise}
\begin{exercise}
Prove that \index{adjunction!composition}adjunctions can be composed.
\end{exercise}
\begin{exercise}
Prove the Yoneda Lemma \eqref{yoneda-lemma}.
\end{exercise}
\begin{exercise}
Prove that the limit of a functor, if it exists, is unique up to a unique isomorphism. Do the same for the colimit.\index{limit!uniqueness}\index{colimit!uniqueness}
\end{exercise}
\begin{exercise}
Prove that a left adjoint preserves colimits, and that a right adjoint preserves limits.
\end{exercise}
\begin{exercise}
Suppose $\C$ is a monoidal category, except that the axiom $\lambda_{\tensorunit} = \rho_{\tensorunit}$ is not assumed. Prove that this axiom follows from the unity axiom \eqref{monoidal-unit} and the pentagon axiom \eqref{pentagon-axiom}.
\end{exercise}
\begin{exercise}
Prove that the unity diagrams \eqref{moncat-other-unit-axioms} are commutative in a monoidal category.
\end{exercise}
\begin{exercise}
In \Cref{ex:reversed-moncat}, check that $\C^{\rev}$ satisfies the pentagon axiom.
\end{exercise}
\begin{exercise}\label{exer:monoidal-functor-monoid}
Prove that each \index{monoidal functor!lifts to monoids}monoidal functor $(F,F_2,F_0) : \C\to\D$ induces a functor
\[\begin{tikzcd}\Mon(\C) \ar{r}{F} & \Mon(\D)\end{tikzcd}\]
that sends a monoid $(X,\mu,\operadunit)$ in $\C$ to the monoid $\bigl(FX,\mu^{FX},\operadunit^{FX}\bigr)$ in $\D$ with unit the composite
\[\begin{tikzcd}
\tensorunitd \rar{F_0} & F\tensorunitc \rar{F\operadunit} & FX
\end{tikzcd}\]
and multiplication the composite
\[\begin{tikzcd}
FX \otimes FX \rar{F_2} & F(X\otimes X) \rar{F\mu} & FX.\end{tikzcd}\]
In other words, \index{monoidal functor!preservation of monoids}monoidal functors preserve monoids.
\end{exercise}
\begin{exercise}\label{symmonoidal-functor-cmonoid} Repeat the previous exercise for a symmetric monoidal functor. In other words, prove that each \index{symmetric monoidal!functor!lifts to commutative monoids}symmetric monoidal functor $(F,F_2,F_0) : \C\to\D$ induces a functor, defined as in the previous exercise,
\[\begin{tikzcd}\CMon(\C) \rar{F} & \CMon(\D)\end{tikzcd}\]
between the categories of commutative monoids.
\end{exercise}
\begin{exercise}\label{exercise:3-cocycle-monoidal-cat}
Suppose that $G$ is a group and $M$ is a $G$-module. A
\emph{normalized $3$-cocycle}
\index{3-cocycle!normalized - as associativity}%
\index{normalized 3-cocycle!as associativity}%
\index{monoidal category!example from group 3-cocycle}%
\index{associativity!normalized 3-cocycle}%
for $G$ with coefficients in $M$ is a
function $h\cn G^3 \to M$ such that the following two equalities hold
in $M$ for all $x$, $y$, $z$, $w$ in $G$:
\begin{align*}
h(x,1,y) & = 0\\
w \cdot h(x,y,z) + h(w, xy, z) + h(w,x,y) & = h(w, x, yz) + h(wx, y, z).
\end{align*}
Given such an $h$, define a category $T = T(G,M,h)$ as follows.
The objects of $T$ are given by
the elements of $G$. For each $x$ the set of endomorphisms
$T(x,x) = M$, and for $x \not = y$ the morphism set $T(x,y)$ is empty.
Identities and composition (of endomorphisms) are given by the
identity and addition in $M$.
Show that the following structure makes $T$ a monoidal category:
The product of objects is given by multiplication in $G$.
The product of morphisms $p\cn x \to x$ and $q\cn y \to y$ is given
by
\[
p + x \cdot q \cn xy \to xy.
\]
The unit element of $G$ is the monoidal unit, and the unit isomorphisms are
trivial. The associativity isomorphism components $(xy)z \to x(yz)$ are defined
to be $h(x,y,z)$.
\end{exercise}
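The following hedged sketch (the details are the content of the exercise) indicates how the two cocycle equalities match the monoidal axioms for $T(G,M,h)$.

```latex
% In T(G,M,h) morphisms compose by addition in M, and the product of
% morphisms p : x -> x and q : y -> y is p + x.q.  The two legs of the
% pentagon axiom from ((wx)y)z to w(x(yz)) therefore compute to
\[
w \cdot h(x,y,z) + h(w,xy,z) + h(w,x,y)
\qquad \text{and} \qquad
h(w,x,yz) + h(wx,y,z),
\]
% so the pentagon axiom is exactly the second cocycle equality.  The
% normalization h(x,1,y) = 0 similarly makes the middle unity axiom
% hold with trivial unit isomorphisms.
```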
\begin{exercise}
Suppose $\C$ is a small category. Prove that monads on $\C$ are precisely the monoids\index{monoid} in a certain strict monoidal category.
\end{exercise}
\begin{exercise}
Repeat the previous exercise for $\V$-monads on a small $\V$-category $\C$.
\end{exercise}
\begin{exercise}
Show that the composite \eqref{enriched-hcomp-component} that defines the component $(\theta'*\theta)_X$ of the horizontal composition is equal to the following composite.
\[\begin{tikzcd}
\tensorunit \arrow{dd}{\cong}[swap]{\lambda^{-1}} & \E(F'FX,G'GX)\\
& \E(G'FX,G'GX) \otimes \E(F'FX,G'FX) \uar[swap]{m}\\
\tensorunit \otimes \tensorunit \rar{\theta_{X}\otimes \theta'_{FX}} & \D(FX,GX) \otimes \E(F'FX,G'FX) \uar[swap]{G'\otimes 1}
\end{tikzcd}\]
\end{exercise}
\begin{exercise}
For a $\V$-monad $T$ in a $\V$-category $\C$, show that $T$-algebras and their morphisms form a category.
\end{exercise}
\subsection*{Notes}
\begin{note}[General References]
For more detailed discussion of basic category theory we refer the reader to the introductory books \cite{awodey,grandis,leinster,riehl,roman,simmons}.
\end{note}
\begin{note}[Set-Theoretic Foundations]
Our set-theoretic convention using Grothendieck universes is from the appendix in \cite{bourbaki}. For more discussion of set-theoretic foundation in the context of category theory, the reader is referred to \cite{maclane-foundation,shulman-set,low}.
\end{note}
\begin{note}[The Yoneda Embedding]
In the literature, the Yoneda embedding of an object $A$ is sometimes denoted by $h_A$. We chose the symbol $\Yo_A$ to make it easier for the reader to remember that $\Yo$ stands for Yoneda.
\end{note}
\begin{note}[Monads]
For further discussion of monads, the reader may consult \cite{barr-wells,borceux2,godement,maclane,riehl}. Monads are also called \index{triple}\emph{triples} and \index{standard construction}\emph{standard constructions} in the literature.
\end{note}
\begin{note}[Monoidal Categories and Functors]
What we call a strict symmetric monoidal category is sometimes called a \index{category!permutative}\index{permutative category}\emph{permutative category} in the literature. What we call a (symmetric/braided) monoidal category is what Joyal and Street \cite{joyal-street} called a \index{symmetric tensor category}\index{braided!tensor category}\emph{(symmetric/braided) tensor category}. A monoidal functor is sometimes called a \index{monoidal functor!lax}\index{lax monoidal functor}\emph{lax monoidal functor} in the literature to emphasize the condition that the morphisms $F_2$ and $F_0$ are not necessarily invertible. A strong monoidal functor is also known as a\index{functor!tensor}\index{tensor functor} \emph{tensor functor}. Discussion of monoidal categories and their coherence can be found in\index{monoidal category!coherence}\index{coherence!monoidal category} \cite{joyal-street,kelly2,maclane-rice,maclane,yau-inf-operad}. \cref{exercise:3-cocycle-monoidal-cat} appears in work of
Joyal-Street \cite[Section 6]{joyal-street-mmr}.
\end{note}
\begin{note}[Enriched Categories]
The standard comprehensive reference for enriched category theory is Kelly's book \cite{kelly-enriched}. Some discussion can also be found in \cite[Chapter 6]{borceux2}. For the theory of enriched monads, the reader is referred to \cite{bkp,lack-street,street_monads}.
\end{note}
\chapter{The Yoneda Lemma and Coherence}
\label{ch:coherence}
In this chapter we discuss the Yoneda Lemma for bicategories.
To review and fix notation, we begin with a discussion
of the $1$-categorical Yoneda Lemma in \cref{sec:yoneda-unpacked}. This
entails three results which are usually all termed \emph{The Yoneda
Lemma} for a small category $\C$.
\begin{enumerate}
\item Natural transformations from a represented functor to an
arbitrary functor $F\cn \C^\op \to \Set$ are in bijection with the
value of $F$ at the representing object. We call this
\emph{The Objectwise Yoneda Lemma} \ref{yoneda-unpacked-objectwise}.
\item In the special case that $F$ is also a represented functor, the
bijection constructed in the first part is inverse to the Yoneda functor.
We call this \emph{The Yoneda Embedding Lemma}
\ref{yoneda-unpacked-embedding}.
\item The bijections constructed in the first part are natural with
respect to morphisms of the representing object. This is the most
general form, and thus is the one we name \emph{The Yoneda Lemma}
\ref{yoneda-unpacked-nat}.
\end{enumerate}
In \cref{sec:yoneda-bicat-definition,sec:yoneda-bicat-lemma} we give
the bicategorical analogue of \cref{sec:yoneda-unpacked}. First we
describe the Yoneda pseudofunctor, and prove that it is indeed a
pseudofunctor. Then we state and give detailed proofs for
bicategorical analogues of the Yoneda Lemma in the three forms
discussed above.
In \cref{sec:coherence} we apply the Bicategorical Yoneda Embedding
Lemma \ref{lemma:yoneda-embedding-bicat} to prove the Bicategorical
Coherence Theorem \ref{theorem:bicat-coherence}, which asserts that
every bicategory is biequivalent to a $2$-category. The proof is very
short, but depends on the theory of this chapter and on the
Bicategorical Whitehead Theorem \ref{theorem:whitehead-bicat} of
\cref{ch:whitehead}. As in previous chapters, we will rely on
\cref{thm:bicat-pasting-theorem} and \cref{conv:boundary-bracketing}
to interpret pasting diagrams in a bicategory.
Our discussion of the Yoneda Lemma for $1$-categories or bicategories
will require sets of natural transformations between functors, and
categories of strong transformations between pseudofunctors.
Therefore we will assume throughout that the categories and
bicategories to which we apply the Yoneda constructions are small.
\section{The \texorpdfstring{$1$}{1}-Categorical Yoneda Lemma}\label{sec:yoneda-unpacked}
Suppose $\C$ is a $1$-category, and for each object $A \in
\C$, recall from \cref{def:representables} that $\Yo_A$ denotes the represented functor
\[
\Yo_A = \C(-,A)\cn \C^\op \to \Set.
\]
For a $1$-cell $f\cn A \to B$ in $\C$, let $\Yo_f$ denote the
represented natural transformation
\[
\Yo_f = f_* = \C(-,f)
\]
whose component at an object $W \in \C$ is given by post-composition
\[
f_*\cn \C(W,A) \to \C(W,B).
\]
When $\C$ is small, we have a category of functors and natural
transformations from $\C^\op$ to $\Set$. One verifies that the
components $f_*$ are natural and therefore $\Yo = \Yo_{(-)}$ defines a
functor. To clarify, we recall the following notation from
\cref{def:functors,def:natural-transformations}.
\begin{definition}\label{definition:fun-nat}
Suppose that $\C$ is a small $1$-category, and
suppose $F$ and $G$ are functors $\C \to \D$.
\begin{itemize}
\item
$\Fun(\C,\D)$ denotes the $1$-category of functors and natural
transformations $\C \to \D$.
\item $\Nat(F,G)$ denotes the set of natural transformations $F \to G$.\defmark
\end{itemize}
\end{definition}
\noindent With this notation, we have defined a functor
\[
\Yo = \Yo_{(-)}\cn \C \to \Fun(\C^\op,\Set)
\]
known as the \emph{Yoneda functor}.\index{Yoneda!functor}
Now suppose that $F\cn \C^\op \to \Set$ is another functor. For each
object $A \in \C$, we have a morphism of sets
\[
e_A\cn \Nat(\Yo_A, F) \to FA
\]
defined, for each $\theta\cn \Yo_A \to F$, by
$e_A(\theta) = \theta_A1_A$.
\begin{lemma}[Objectwise Yoneda]\label{yoneda-unpacked-objectwise}\index{Yoneda!objectwise}
The morphisms $e_A$ are bijections of sets.
\end{lemma}
\begin{proof}
Naturality of $\theta$ means that the following square commutes for
each morphism $p\cn W \to Z$ in $\C$.
\begin{equation}\label{nat-theta}
\begin{tikzpicture}[x=35mm,y=20mm,baseline={(0,-1)}]
\draw[0cell]
(0,0) node (a) {\Yo_A(Z)}
(1,0) node (b) {\Yo_A(W)}
(0,-1) node (fa) {FZ}
(1,-1) node (fb) {FW}
;
\draw[1cell]
(a) edge node {\Yo_A(p)} (b)
(fa) edge['] node {Fp} (fb)
(a) edge['] node {\theta_Z} (fa)
(b) edge node {\theta_W} (fb)
;
\end{tikzpicture}
\end{equation}
Unpacking the notation, we have $\Yo_A(p) = p^* \cn \C(Z,A) \to
\C(W,A)$. In the special case $Z = A$, the equality $p^*1_A = 1_A
\circ p = p$ together with the
commutativity of \eqref{nat-theta} means that we have an equality
\begin{equation}\label{nat-theta-eq}
\theta_W(p) = (Fp)(\theta_A1_A).
\end{equation}
Indeed, this equality holding for all morphisms $p\cn W \to A$ is equivalent to
naturality of $\theta$, and therefore the element $\theta_A1_A \in
FA$ uniquely determines the natural transformation $\theta$.
\end{proof}
The equation \eqref{nat-theta-eq} shows, in the special case $F =
\Yo_B$, that $\theta = \Yo_{\theta_A 1_A}$. Thus we have the
following.
\begin{lemma}[Yoneda Embedding]\label{yoneda-unpacked-embedding}\index{Yoneda!embedding}
For each $A$ and $B$ in $\C$, the Yoneda functor is a bijection
\[
\Yo\cn \C(A,B) \to \Nat(\Yo_A,\Yo_B)
\]
inverse to $e_A$. Thus $\Yo\cn \C \to \Fun(\C^\op,\Set)$ is fully-faithful.
\end{lemma}
Now for each $p\cn W \to A$, precomposition with the natural
transformation $\Yo_p\cn \Yo_W \to \Yo_A$ defines a morphism of sets
\[
(\Yo_p)^* \cn \Nat(\Yo_A,F) \to \Nat(\Yo_W,F).
\]
This is functorial (contravariant), and thus we have a functor
\[
\Nat(\Yo_{(-)},F) \cn \C^\op \to \Set.
\]
\begin{lemma}[Yoneda]\label{yoneda-unpacked-nat}\index{Yoneda!Lemma}
The morphisms $e_A$ are natural with respect to morphisms in $\C$,
and thus $e$ defines a natural isomorphism
\[
e \cn \Nat(\Yo_{(-)}, F) \fto{\iso} F(-)
\]
for each functor $F\cn \C^\op \to \Set$.
\end{lemma}
\begin{proof}
We proved that the components of $e$ are bijections in \cref{yoneda-unpacked-objectwise}.
To prove that $e$ is natural, we must show that the following square commutes for all
$p\cn W \to A$ in $\C$.
\begin{equation}\label{nat-e}
\begin{tikzpicture}[x=35mm,y=20mm,baseline={(0,-1)}]
\draw[0cell]
(0,0) node (na) {\Nat(\Yo_A,F)}
(1,0) node (nb) {\Nat(\Yo_W,F)}
(0,-1) node (fa) {FA}
(1,-1) node (fb) {FW}
;
\draw[1cell]
(na) edge node {(\Yo_p)^*} (nb)
(fa) edge['] node {Fp} (fb)
(na) edge['] node {e_A} (fa)
(nb) edge node {e_W} (fb)
;
\end{tikzpicture}
\end{equation}
For a natural transformation $\theta\cn \Yo_A \to F$, the top-right
composite in \eqref{nat-e} is
\[
\theta \mapsto \theta\circ\Yo_p
\mapsto (\theta \circ \Yo_p)_W(1_W) = (\theta_W \circ (\Yo_p)_W)(1_W) =
\theta_W((\Yo_p)_W(1_W)) = \theta_W(p).
\]
On the other hand, the left-bottom composite is
\[
\theta \mapsto \theta_A1_A \mapsto (Fp)(\theta_A1_A).
\]
Equality of these two elements in $FW$ is precisely the equality
guaranteed by naturality of $\theta$ and shown in \eqref{nat-theta}
and \eqref{nat-theta-eq}.
\end{proof}
\cref{yoneda-unpacked-objectwise,yoneda-unpacked-embedding,yoneda-unpacked-nat}
are collectively referred to as the Yoneda Lemma for $1$-categories.
They are summarized in the discussion following
\cref{def:representables}.
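Before turning to bicategories, here is a hedged illustration of the three lemmas in a standard special case (not needed later): a group regarded as a one-object category.

```latex
% Let C have a single object * with C(*,*) = G, a group.  A functor
% F : C^op -> Set is then a set X = F(*) with a right G-action
% x.p = (Fp)(x), and Yo_* is G itself with the right translation
% action.  The bijection e_* sends theta : Yo_* -> F to theta(1) in X,
% and \eqref{nat-theta-eq} becomes
\[
\theta(p) = \theta(1) \cdot p \qquad \text{for } p \in G,
\]
% recovering the fact that G-equivariant maps G -> X correspond
% bijectively to elements of X.
```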
\section{The Bicategorical Yoneda Pseudofunctor}\label{sec:yoneda-bicat-definition}
Now we turn to the bicategorical case, following the outline
established by the $1$-categorical case above. In this section we
assume $\B$ is a small bicategory and thus we have a $2$-category
$\Bicatps(\B^\op, \Cat)$ by \cref{subbicat-pseudofunctor}. We define
a pseudofunctor
\[
\Yo\cn \B \to \Bicatps(\B^\op, \Cat)
\]
in four steps.
\begin{itemize}
\item Define $\Yo$ as an assignment on cells (\cref{definition:Yo}).
\item Define the lax unity constraint $\Yo^0$ (\cref{definition:Yo0})
and prove that it is a modification (\cref{proposition:Yo0}).
\item Define the lax functoriality constraint $\Yo^2$
(\cref{definition:Yo2}) and prove that it is a modification
(\cref{proposition:Yo2}).
\item Show that $(\Yo,\Yo^2,\Yo^0)$ defines a pseudofunctor (\cref{proposition:Yo-pseudo}).
\end{itemize}
\begin{definition}[Assignment on cells $\Yo$]\label{definition:Yo}\index{Yoneda!pseudofunctor}\index{pseudofunctor!Yoneda}
\
\begin{itemize}
\item For each object $A \in \B$, define $\Yo_A = \B(-,A)$, the representable pseudofunctor of
\cref{representable-pseudofunctor,def:representable-pseudofunctor}.
\item For each $1$-cell $f\cn A \to B$, define $\Yo_f = f_*$, the representable strong
transformation
\[
\B(-,A) \to \B(-,B)
\]
of \cref{representable-transformation,def:representable-transformation}.
\item For each $2$-cell $\alpha\cn f \to g$, define $\Yo_\alpha =
\alpha_*$, the representable modification of \cref{representable-modification,def:representable-modification}.
\end{itemize}
This finishes the definition of $\Yo$ as an assignment on cells.
\end{definition}
\begin{notation}
Suppose $A$ and $W$ are objects of $\B$. For each $1$-cell $f\cn W
\to A$, we will add a subscript to the inverse component of the left
unitor and write
\[
\ell^\inv_{W; f} = \ell^\inv_f\cn f \to 1_A f.
\]
Typically the object $W$ is omitted, but it will be useful for
clarity below. Naturality of the left unitor with respect to
$2$-cells $f \to f'$ in $\B(W,A)$ means that the components $\ell^\inv_{W;f}$
assemble to form a natural transformation of endofunctors on $\B(W,A)$
\[
1_{\B(W,A)} \to (1_A)_*.
\]
We denote this natural transformation by $\ell^\inv_{W;-}$.
\end{notation}
\begin{motivation}\label{motivation:Yo0}
For an object $A$, we have $1_{\Yo_A}$ and $\Yo_{1_A}$. These are
$1$-cells in $\Bicatps(\B^\op,\Cat)$, i.e., strong transformations,
from $\Yo_A$ to $\Yo_A$. The first of these is the identity on
$\Yo_A$, and the second of these is $\Yo$ evaluated at the identity
$1$-cell $1_A$ in $\B$. The unity constraint $(\Yo^0)_A$ must be a $2$-cell
in $\Bicatps(\B^\op, \Cat)$, i.e., a modification,
\[
(\Yo^0)_A\cn 1_{\Yo_A} \to \Yo_{1_A}.
\]
Thus it will have components at $W \in \B$
\[
(\Yo^0)_{A;W} \cn 1_{\B(W,A)} \to \Yo_{1_A;W} = (1_A)_*
\]
that are $2$-cells in $\Cat$, i.e., natural transformations. We
shall see that $\ell^\inv_{W;-}$ is just such a natural
transformation.
\end{motivation}
\begin{definition}[Lax unity $\Yo^0$]\label{definition:Yo0}\index{Yoneda!lax unity constraint}
For each pair of objects $A,W \in \B$, we let
\[
(\Yo^0)_{A;W} = \ell^\inv_{W; -} \cn 1_{\B(W,A)} \to \Yo_{1_A;W} = (1_A)_*.\defmark
\]
\end{definition}
\begin{proposition}\label{proposition:Yo0}
For each $A \in \B$, the components $(\Yo^0)_{A;W} = \ell^\inv_{W;-}$ assemble to form
a modification
\[
(\Yo^0)_A \cn 1_{\Yo_A} \to \Yo_{1_A}.
\]
\end{proposition}
\begin{proof}
We noted at the end of \cref{motivation:Yo0} that $\ell^\inv_{W;-}$
is a natural transformation by naturality of the left unitor. Thus
$(\Yo^0)_{A;W}$ is a $2$-cell in $\Cat$.
To verify that $(\Yo^0)_A$ is a modification, we need to show, for
any $1$-cell $p\cn W \to Z$ in $\B$, that the following two pasting
diagrams in $\Cat$ have the same composite. The unlabeled double
arrow on the left-hand side is the component $2$-cell of $\Yo_{1_A}$,
given by an associator component and described in
\cref{representable-transformation}. The empty region on the
right-hand side is strictly commuting because $1_{\Yo_A(Z)}$ is a
strict transformation.
\begin{equation}\label{ell-modification}
\begin{tikzpicture}[x=28mm,y=30mm]
\def1.2{1}
\def1{1}
\def-1} \def\v{-1{.55}
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (yz) {\Yo_A(Z)}
(1,0) node (yw) {\Yo_A(W)}
(0,-1) node (yz') {\Yo_A(Z)}
(1,-1) node (yw') {\Yo_A(W)}
;
\draw[1cell]
(yz) edge node {\Yo_A(p)} (yw)
(yz') edge['] node {\Yo_A(p)} (yw')
(yz) edge[bend right=40,'] node[pos=.6] {1_{\Yo_A(Z)}} (yz')
(yw) edge[bend left=40] node[pos=.4] {(\Yo_{1_A})_W} (yw')
;
}
\begin{scope}[shift={(0,.5)}]
\boundary
\draw[1cell]
(yz) edge[bend left=40] node[pos=.4] {(\Yo_{1_A})_Z} (yz')
;
\draw[2cell]
node[between=yz and yw' at .6, shift={(0,0)}, rotate=45, 2label={below,}] {\Rightarrow}
node[between=yz and yz' at .5, rotate=0,
2label={above,\ell^\inv_{Z; -}}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(2.3,.5)}]
\boundary
\draw[1cell]
(yw) edge[bend right=40,'] node[pos=.6] {1_{\Yo_A(W)}} (yw')
;
\draw[2cell]
node[between=yw and yw' at .5, rotate=0,
2label={above,\ell^\inv_{W; -}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
To see that these composites are equal, consider a $1$-cell $h\in
\Yo_A(Z) = \B(Z,A)$. The functor along the left-bottom boundary, $\Yo_A(p) \circ
1_{\Yo_A(Z)}$, sends $h$ to $hp$. The functor along the top-right boundary,
$(\Yo_{1_A})_W \circ \Yo_A(p)$, sends $h$
to $1_A(hp)$. Then the composite at left in
\eqref{ell-modification} is a natural transformation whose component
at $h$ is the following $2$-cell composite in $\Yo_A(W) = \B(W,A)$:
\[
hp \fto{\ell^\inv_{Z;h} * 1_p} (1_A h)p \fto{a} 1_A(hp).
\]
On the other hand, the composite at right in
\eqref{ell-modification} is a natural transformation whose component
at $h$ is
\[
hp \fto{\ell^\inv_{W;hp}} 1_A(hp).
\]
These $2$-cells are equal by the unity property in
\cref{bicat-left-right-unity}. Thus the natural transformations
$\ell^\inv_{W;-}$ assemble to give a modification
\[
(\Yo^0)_A\cn 1_{\Yo_A} \to \Yo_{1_A}
\]
for each $A \in \B$.
\end{proof}
Now we turn to the lax functoriality constraint $\Yo^2$.
\begin{notation}
Suppose
\[
A \fto{f} B \fto{g} C
\]
is a composable pair of $1$-cells in $\B$. For each $1$-cell $j\cn W
\to A$ we will add the subscript $W$ to the inverse component of the
associator and write
\[
a^\inv_{W;g,f,j} = a^\inv_{g,f,j} \cn g(fj) \to (gf)j.
\]
Naturality of the associator with respect to $2$-cells $j \to j'$ in
$\B(W,A)$ means that the components $a^\inv_{W;g,f,j}$ assemble to
form a natural transformation of functors $\B(W,A) \to \B(W,C)$
\[
g_*\,f_* \to (gf)_*.
\]
We denote this natural transformation by $a^\inv_{W;g,f,-}$.
\end{notation}
\begin{motivation}\label{motivation:Yo2}
For each composable pair of $1$-cells in $\B$,
\[
A \fto{f} B \fto{g} C,
\]
we have the composite strong transformation, $\Yo_g\Yo_f = g_* \,
f_*$, and the strong transformation for the composite, $\Yo_{gf} =
(gf)_*$. These are $1$-cells in $\Bicatps(\B^\op,\Cat)$ from $\Yo_A$ to
$\Yo_C$. The lax functoriality constraint $(\Yo^2)_{g,f}$ must be a
$2$-cell in $\Bicatps(\B^\op, \Cat)$, i.e., a modification,
\[
(\Yo^2)_{g,f} \cn \Yo_g\Yo_f \to \Yo_{gf}
\]
for each composable pair $f$ and $g$. Thus it will have components
at $W \in \B$
\[
(\Yo^2)_{g,f;W} \cn \Yo_{g;W}\Yo_{f;W} \to \Yo_{gf;W}
\]
that are $2$-cells in $\Cat$, i.e., natural transformations. We
shall see that $a^\inv_{W;g,f,-}$ is just such a natural transformation.
Moreover, \cref{def:lax-functors} requires that the modifications
$(\Yo^2)_{g,f}$ are natural with respect to $2$-cells $f \to f'$ and
$g \to g'$ in $\B(A,B)$ and $\B(B,C)$, respectively.
\end{motivation}
\begin{definition}[Lax functoriality $\Yo^2$]\label{definition:Yo2}\index{Yoneda!lax functoriality constraint}
For each pair of composable $1$-cells in $\B$,
\[
A \fto{f} B \fto{g} C,
\]
and each object $W \in \B$,
we let
\[
(\Yo^2)_{g,f;W} = a^\inv_{W; g,f,-} \cn \Yo_{g;W}\Yo_{f;W} \to \Yo_{gf;W}.\defmark
\]
\end{definition}
\begin{proposition}\label{proposition:Yo2}
For each pair of composable $1$-cells in $\B$,
\[
A \fto{f} B \fto{g} C,
\]
the components $(\Yo^2)_{g,f;W} = a^\inv_{W;g,f,-}$ assemble to
form a modification
\[
(\Yo^2)_{g,f} \cn \Yo_g\Yo_f \to \Yo_{gf}.
\]
These modifications are natural in $f$ and $g$.
\end{proposition}
\begin{proof}
We noted at the end of \cref{motivation:Yo2} that $a^\inv_{W;g,f,-}$
is a natural transformation by naturality of the associator. Thus
$(\Yo^2)_{g,f;W}$ is a $2$-cell in $\Cat$.
To verify that $(\Yo^2)_{g,f}$ is a modification, we need to show,
for each $1$-cell $p\cn W \to Z$, that the following two pasting
diagrams in $\Cat$ have the same composite. The unlabeled double
arrows on each side are given by associator components described in
\cref{representable-transformation} (for $\Yo_{gf}$, $\Yo_g$, and
$\Yo_{f}$) and \cref{def:lax-tr-comp} (for the composite
$\Yo_g\Yo_f$).
\begin{equation}\label{a-modification}
\begin{tikzpicture}[x=28mm,y=30mm]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (yz) {\Yo_A(Z)}
(1,0) node (yw) {\Yo_A(W)}
(0,-1) node (yz') {\Yo_C(Z)}
(1,-1) node (yw') {\Yo_C(W)}
;
\draw[1cell]
(yz) edge node {\Yo_A(p)} (yw)
(yz') edge['] node {\Yo_C(p)} (yw')
(yz) edge[bend right=40,'] node[pos=.6] {\Yo_g\Yo_f} (yz')
(yw) edge[bend left=40] node[pos=.4] {\Yo_{gf}} (yw')
;
}
\begin{scope}[shift={(0,.5)}]
\boundary
\draw[1cell]
(yz) edge[bend left=40] node[pos=.4] {\Yo_{gf}} (yz')
;
\draw[2cell]
node[between=yz and yw' at .6, shift={(0,0)}, rotate=45, 2label={below,}] {\Rightarrow}
node[between=yz and yz' at .5, rotate=0,
2label={above,a^\inv_{Z; g,f,-}}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(2.3,.5)}]
\boundary
\draw[1cell]
(yw) edge[bend right=40,'] node[pos=.6] {\Yo_g \Yo_f} (yw')
;
\draw[2cell]
node[between=yz and yw' at .4, shift={(-.125,0)}, rotate=45, 2label={below,}] {\Rightarrow}
node[between=yw and yw' at .5, rotate=0,
2label={above,a^\inv_{W; g,f,-}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
To see that these composites are equal, consider a $1$-cell
$k \in \Yo_A(Z) = \B(Z,A)$. The functor along the left-bottom
boundary, $\Yo_C(p) \circ (\Yo_g\Yo_f)$, sends $k$ to $(g(fk))p$. The
functor along the top-right boundary, $\Yo_{gf} \circ \Yo_A(p)$, sends $k$
to $(gf)(kp)$. Using the component $2$-cell in
\cref{representable-transformation}, the composite at left in
\eqref{a-modification} is a natural transformation whose component
at $k$ is the composite
\[
(g(fk))p \fto{a^\inv_{g,f,k} * 1_p} ((gf)k)p \fto{a_{gf,k,p}} (gf)(kp).
\]
On the other hand, the component at $k$ of the composite at right in
\eqref{a-modification} is given as follows. Here we use the
definition in \cref{representable-transformation}
together with
the formula in \cref{expl:lax-tr-comp}, which simplifies because associators in
$\Cat$ are strict:
\[
(g(fk))p \fto{a_{g,fk,p}} g((fk)p) \fto{1_g * a_{f,k,p}} g(f(kp))
\fto{a^\inv_{g,f,kp}} (gf)(kp).
\]
The two composites in \eqref{a-modification} are equal by the
pentagon axiom \eqref{bicat-pentagon}. Thus the natural
transformations $a^\inv_{W;g,f,-}$ assemble to give a modification
\[
(\Yo^2)_{g,f} \cn \Yo_g\Yo_f \to \Yo_{gf}
\]
for each pair of composable $1$-cells $f$ and $g$.
Lastly, we observe that these modifications, i.e., $2$-cells in
$\Bicatps(\B^\op,\Cat)$, are natural with respect to $2$-cells in $\B$
because the components $a^\inv_{W; g,f,-}$ are natural in $f$ and
$g$. Therefore the $2$-cells $(\Yo^2)_{g,f}$ assemble to give a
natural transformation $\Yo^2$ as in \cref{def:lax-functors}. This
finishes the definition of $\Yo^2$.
\end{proof}
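For reference, the pentagon instance invoked in the proof above can be
displayed explicitly. The following is a routine unpacking in the
notation of the proof, for the composable quadruple $g,f,k,p$ appearing
there.

```latex
% Pentagon axiom for the quadruple (g,f,k,p), from ((gf)k)p to g(f(kp)):
\[
(1_g * a_{f,k,p}) \circ a_{g,fk,p} \circ (a_{g,f,k} * 1_p)
= a_{g,f,kp} \circ a_{gf,k,p}.
\]
% Precomposing with a^\inv_{g,f,k} * 1_p and postcomposing with
% a^\inv_{g,f,kp} yields the equality of the two composites in
% (a-modification):
\[
a_{gf,k,p} \circ (a^\inv_{g,f,k} * 1_p)
= a^\inv_{g,f,kp} \circ (1_g * a_{f,k,p}) \circ a_{g,fk,p}.
\]
```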
\begin{proposition}\label{proposition:Yo-pseudo}
The triple $(\Yo,\Yo^2,\Yo^0)$ defines a pseudofunctor.
\end{proposition}
\begin{proof}
We have shown that $\Yo^0$ and $\Yo^2$ are modifications in
\cref{proposition:Yo0,proposition:Yo2}, respectively.
The components of $\Yo^2$ and $\Yo^0$ are invertible by
construction, since the components $a^\inv$ and $\ell^\inv$ are
invertible. Thus it remains to show that $(\Yo,\Yo^2,\Yo^0)$
satisfies the lax associativity and lax unity axioms of
\cref{def:lax-functors}.
The lax associativity axiom \eqref{f2-bicat} requires that the
following diagram \eqref{f2-yoneda} commutes in
$\Bicatps(\B^\op,\Cat)$ for all composable triples of $1$-cells
\[
A \fto{f} B \fto{g} C \fto{h} D.
\]
Note that the associator in the $2$-category
$\Bicatps(\B^\op,\Cat)$ is an identity and thus the upper-left arrow
below is labeled $1$.
\begin{equation}\label{f2-yoneda}
\begin{tikzpicture}[x=19mm,y=19mm,baseline={(a.base)}]
\draw[0cell]
(0,0) node (a) {(\Yo_h \circ \Yo_g) \circ \Yo_f}
(60:1) node (b) {\Yo_h \circ (\Yo_g \circ \Yo_f)}
(-60:1) node (c) {\Yo_{hg} \circ \Yo_f}
(3.25,0) node (d) {\Yo_{h(gf)}}
(d) ++(120:1) node (e) {\Yo_h \circ \Yo_{gf}}
(d) ++(240:1) node (f) {\Yo_{(hg)f}}
;
\path[1cell]
(a) edge node {1} (b)
(b) edge node {1_{\Yo_h}*a^\inv_{-;g,f,-}} (e)
(e) edge node[pos=.7] {a^\inv_{-;h,gf,-}} (d)
(a) edge[swap] node[pos=.2] {a^\inv_{-;h,g,-} *1_{\Yo_f}} (c)
(c) edge[swap] node {a^\inv_{-;hg,f,-}} (f)
(f) edge[swap] node {\Yo_a} (d)
;
\end{tikzpicture}
\end{equation}
This diagram of strong transformations and modifications commutes if
and only if the resulting diagram of components commutes when
evaluated at each $1$-cell $k\cn Z \to A$. The resulting diagram
is equivalent to an instance of the pentagon axiom
\eqref{bicat-pentagon}, and thus \eqref{f2-yoneda} commutes.
The lax left and right unity axiom \eqref{f0-bicat} requires that
the following two diagrams \eqref{f0-yoneda} commute for each $1$-cell
$f\cn A \to B$. As with the associator, the unitors in
$\Bicatps(\B^\op,\Cat)$ are identities and therefore the base of
each trapezoid below is labeled $1$.
\begin{equation}\label{f0-yoneda}
\begin{tikzpicture}[x=17mm,y=16mm,baseline=(b.base)]
\draw[0cell]
(0,0) node (a) {1_{\Yo_B} \circ \Yo_f}
(.15,1) node (b) {\Yo_{1_B} \circ \Yo_f}
(2,0) node (d) {\Yo_f}
(d) ++(-.15,1) node (c) {\Yo_{(1_B\circ f)}}
;
\draw[1cell]
(a) edge node[pos=.3] (X) {\ell^\inv_{-;-}*1_{\Yo_f}} (b)
(b) edge node {a^\inv_{-;1_B,f,-}} (c)
(c) edge node[pos=.6] {\Yo_\ell} (d)
(a) edge node {1} (d)
;
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=18mm,y=16mm,baseline=(b.base)]
\draw[0cell]
(0,0) node (a) {\Yo_f \circ 1_{\Yo_A}}
(.15,1) node (b) {\Yo_f \circ \Yo_{1_A}}
(2,0) node (d) {\Yo_f}
(d) ++(-.15,1) node (c) {\Yo_{(f\circ 1_A)}}
;
\draw[1cell]
(a) edge node[pos=.3] (X) {1_{\Yo_f}*\ell^\inv_{-;-}} (b)
(b) edge node {a^\inv_{-;f,1_A,-}} (c)
(c) edge node[pos=.6] {\Yo_r} (d)
(a) edge node {1} (d)
;
\end{tikzpicture}
\end{equation}
These diagrams of strong transformations and modifications commute
if and only if the resulting diagrams of components commute when
evaluated at each $1$-cell $k\cn Z \to A$. When evaluated at $k$, the
diagram at left is equivalent to the left unity property in
\cref{bicat-left-right-unity} and the diagram at right is equivalent
to the middle unity axiom \eqref{bicat-unity}.
\end{proof}
\begin{definition}\label{definition:Yo-bicat}
The pseudofunctor defined by \cref{proposition:Yo-pseudo} is denoted
$\Yo$ and called the \emph{Yoneda pseudofunctor}.
\end{definition}
Because the components of $\Yo^0$ and $\Yo^2$ are defined by the
left unitor and associator of $\B$, respectively, we have the
following corollary of \cref{proposition:Yo-pseudo}.
\begin{corollary}\label{Yo-2cat}\index{Yoneda!2-functor}\index{2-functor!Yoneda}
If $\B$ is a $2$-category, then
\[
\Yo\cn \B \to \Bicatps(\B^\op,\Cat)
\]
is a $2$-functor.
\end{corollary}
\section{The Bicategorical Yoneda Lemma}\label{sec:yoneda-bicat-lemma}
In this section we prove the Bicategorical Yoneda Lemma.
In addition to the notation $\Bicatps$, we will need the following.
\begin{definition}
Suppose that $\B$ is a small bicategory, and
suppose $F$ and $G$ are pseudofunctors $\B \to \C$. Then
\[
\Str(F,G) = \Bicatps(\B,\C)(F,G)
\]
denotes the $1$-category whose objects are the strong transformations
$F \to G$ and whose morphisms are the modifications between them.
\end{definition}
Throughout this section we assume that $\B$ is a small bicategory and
\[
F\cn \B^\op \to \Cat
\]
is a pseudofunctor.
\begin{definition}\label{definition:eA}
For each
$A \in \B$ we define a functor of $1$-categories called
\emph{evaluation}\index{evaluation}
\[
e_A\cn \Str(\Yo_A,F) \to FA
\]
as follows.
\begin{itemize}
\item For each strong transformation $\theta\cn \Yo_A \to F$,
the component $\theta_A$
is a functor $\Yo_A(A) \to FA$. We define
\[
e_A(\theta) = \theta_A1_A \in FA.
\]
\item For each modification $\Ga\cn \theta \to \theta'$, we have a
natural transformation $\Ga_A\cn \theta_A \to \theta'_A$. We let
$\Ga_{A;p}$ denote the component of $\Ga_A$ at $p \in \Yo_A(W)$
and define
\[
e_A(\Ga) = \Ga_{A;1_A} \cn \theta_A1_A \to \theta'_A1_A.
\]
\end{itemize}
Composition of modifications is strictly unital and associative, and
therefore $e_A$ is a functor. This finishes the definition of $e_A$.
\end{definition}
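For orientation, $e_A$ is the bicategorical analogue of evaluation at
the identity in the classical Yoneda Lemma. As a reminder for
comparison (not part of the formal development here), for a
$1$-category $\mathcal{C}$, a functor $F\cn \mathcal{C}^\op \to
\mathbf{Set}$, and an object $A$, one has the following bijection.

```latex
% Classical Yoneda evaluation: a bijection, natural in A and F.
\[
\mathrm{Nat}\big(\mathcal{C}(-,A),\, F\big) \fto{\iso} FA,
\qquad
\theta \mapsto \theta_A(1_A).
\]
```

The functor $e_A$ categorifies this assignment, and
\cref{yoneda-bicat-objectwise} below shows it is an equivalence rather
than a bijection.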
We will prove that each $e_A$ is an equivalence of categories, and
that together they form the components of an invertible strong
transformation. We begin with two explanations unpacking the data of
transformations and modifications in the category $\Str(\Yo_A,F)$.
\begin{explanation}\label{explanation:str-yoA-F-transformation}
Suppose $\theta \in \Str(\Yo_A,F)$, and consider the strong naturality
constraint for $\theta$. This is a natural isomorphism of functors
filling the following square for each $1$-cell $p\cn W \to Z$ in $\B$.
\begin{equation}\label{str-theta}
\begin{tikzpicture}[x=35mm,y=20mm,baseline={(0,-1).base}]
\draw[0cell]
(0,0) node (a) {\Yo_A(Z)}
(1,0) node (b) {\Yo_A(W)}
(0,-1) node (fa) {FZ}
(1,-1) node (fb) {FW}
;
\draw[1cell]
(a) edge node {\Yo_A(p)} (b)
(fa) edge['] node {Fp} (fb)
(a) edge['] node {\theta_Z} (fa)
(b) edge node {\theta_W} (fb)
;
\draw[2cell]
node[between=fa and b at .5, rotate=45, 2label={below,\theta_p}]{\Rightarrow}
;
\end{tikzpicture}
\end{equation}
These isomorphisms are natural with respect to $2$-cells $p \to p'$ in
$\Yo_A(Z)$ as discussed in \cref{expl:lax-transformation}.
Unpacking the notation, we have
\[
\Yo_A(p) = p^* \cn \B(Z,A) \to \B(W,A).
\]
In the special case
$Z = A$, we have the component of $\theta_p$ at $1_A$ as below:
\begin{equation}\label{str-theta-iso}
\theta_{p; 1_A}\cn (Fp)(\theta_A1_A) \fto{\iso}
\theta_W(1_A p). \dqed
\end{equation}
\end{explanation}
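In the $1$-categorical Yoneda argument the corresponding naturality
square commutes on the nose, so that $\theta_W(p)$ is literally equal to
$(Fp)(\theta_A 1_A)$. Here the comparison holds only up to isomorphism:
combining \eqref{str-theta-iso} with the left unitor gives the
following composite, in which both arrows are isomorphisms.

```latex
% Both arrows below are isomorphisms; cf. (str-theta-iso).
\[
(Fp)(\theta_A 1_A) \fto{\theta_{p;1_A}} \theta_W(1_A\,p)
\fto{\theta_W(\ell)} \theta_W(p).
\]
```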
\begin{explanation}\label{explanation:str-yoA-F-modification}
Suppose $\theta,\theta'\in \Str(\Yo_A,F)$, and suppose
$\Ga\cn \theta \to \theta'$ is a modification. The modification
axiom \eqref{modification-axiom-pasting} means that, for any $1$-cell
$p\cn W \to Z$, the following diagram of natural transformations
between functors commutes in $\Cat$.
\[
\begin{tikzpicture}[x=38mm,y=20mm]
\draw[0cell]
(0,0) node (a) {(Fp)\theta_Z}
(1,0) node (b) {(Fp)\theta'_Z}
(0,-1) node (c) {\theta_W \Yo_A(p)}
(1,-1) node (d) {\theta'_W \Yo_A(p)}
;
\path[1cell]
(a) edge node {1_{Fp}*\Ga_Z} (b)
(c) edge['] node {\Ga_W * 1_{\Yo_{A}(p)}} (d)
(a) edge['] node {\theta_p} (c)
(b) edge node {\theta'_p} (d)
;
\end{tikzpicture}
\]
Again taking the special case $Z = A$ and evaluating at $1_A \in
\Yo_A(A)$, we have the following commutative diagram of objects and
morphisms in $FW$.
\begin{equation}\label{eq:Gamma-p-1}
\begin{tikzpicture}[x=38mm,y=20mm,baseline={(0,-1).base}]
\draw[0cell]
(0,0) node (a) {(Fp)(\theta_A 1_A)}
(1,0) node (b) {(Fp)(\theta'_A 1_A)}
(0,-1) node (c) {\theta_W (1_A\,p)}
(1,-1) node (d) {\theta'_W (1_A\,p)}
;
\path[1cell]
(a) edge node {(Fp)(\Ga_{A;1_A})} (b)
(c) edge['] node {\Ga_{W;1_Ap}} (d)
(a) edge['] node {\theta_{p;1_A}} (c)
(b) edge node {\theta'_{p;1_A}} (d)
;
\end{tikzpicture}
\end{equation}
Moreover, since $\Ga_W$ is a natural transformation, the
following squares commute for the morphisms
\[
\ell\cn 1_A\, p \to p \andspace r^{\inv}\cn p \to p\,1_W
\]
in $\Yo_A(W) = \B(W,A)$.
\begin{equation}\label{eq:Gamma-p-2}
\begin{tikzpicture}[x=38mm,y=20mm,baseline={(0,-1).base}]
\draw[0cell]
(0,0) node (a) {\theta_W (1_A\,p)}
(1,0) node (b) {\theta_W (p)}
(2,0) node (b2) {\theta_W (p\,1_W)}
(0,-1) node (c) {\theta'_W (1_A\,p)}
(1,-1) node (d) {\theta'_W (p)}
(2,-1) node (d2) {\theta'_W (p\,1_W)}
;
\path[1cell]
(a) edge node {\theta_W(\ell)} (b)
(b) edge node {\theta_W(r^\inv)} (b2)
(c) edge['] node {\theta'_W(\ell)} (d)
(d) edge['] node {\theta'_W(r^\inv)} (d2)
(a) edge['] node {\Ga_{W;1_Ap}} (c)
(b) edge node {\Ga_{W;p}} (d)
(b2) edge node {\Ga_{W;p1_W}} (d2)
;
\end{tikzpicture}
\end{equation}
Note that the vertical arrows in \eqref{eq:Gamma-p-1} are
isomorphisms because $\theta$ and $\theta'$ are strong
transformations. The horizontal arrows in \eqref{eq:Gamma-p-2} are
isomorphisms because functors preserve isomorphisms. Thus,
combining \eqref{eq:Gamma-p-1} with the left half of
\eqref{eq:Gamma-p-2} (transposed), we observe that $\Ga_{W;p}$ is uniquely
determined by $\Ga_{A;1_A}$.
\end{explanation}
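Explicitly, combining \eqref{eq:Gamma-p-1} with the left square of
\eqref{eq:Gamma-p-2} expresses each component of $\Ga$ in terms of
$\Ga_{A;1_A}$; the following formula is a direct unpacking of those two
diagrams.

```latex
% Gamma_{W;p} reconstructed from Gamma_{A;1_A}; every arrow other
% than (Fp)(Gamma_{A;1_A}) below is an isomorphism.
\[
\Ga_{W;p}
= \theta'_W(\ell)
\circ \theta'_{p;1_A}
\circ (Fp)(\Ga_{A;1_A})
\circ \theta_{p;1_A}^\inv
\circ \theta_W(\ell^\inv).
\]
```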
Next we give a construction that will be useful in showing that each
$e_A$ is essentially surjective.
\begin{definition}\label{definition:e-inverse-D}
Suppose $F\cn \B^\op \to \Cat$ is a pseudofunctor and suppose $A \in
\B$. For each object $D \in FA$, we define a strong transformation
\[
\ol{D}\cn \Yo_A \to F
\]
as follows.
\begin{enumerate}
\item For each $W \in \B$, we define component functors
$\ol{D}_W\cn \Yo_A(W) \to FW$:
\begin{itemize}
\item For $h\in \Yo_A(W) = \B(W,A)$, let $\ol{D}_W(h) = (Fh)D$.
\item For $\gamma\cn h \to h'$, let $\ol{D}_W(\gamma) = (F\gamma)_D$, the
component at $D$ of the natural transformation $F\gamma\cn Fh \to Fh'$.
\end{itemize}
The functoriality and unit conditions for $\ol{D}_W$ with respect to
composition and identities in $\Yo_A(W)$ follow from the corresponding
conditions for $F$ with respect to $2$-cells in $\B$.
\item For each $p\cn W \to Z$ we define a natural
transformation
\[
\ol{D}_p\cn (Fp \circ \ol{D}_Z) \to (\ol{D}_W \circ
\Yo_A(p))
\]
as shown below.
\begin{equation}
\begin{tikzpicture}[x=35mm,y=20mm,baseline={(0,-1).base}]
\draw[0cell]
(0,0) node (a) {\Yo_A(Z)}
(1,0) node (b) {\Yo_A(W)}
(0,-1) node (fa) {FZ}
(1,-1) node (fb) {FW}
;
\draw[1cell]
(a) edge node {\Yo_A(p)} (b)
(fa) edge['] node {Fp} (fb)
(a) edge['] node {\ol{D}_Z} (fa)
(b) edge node {\ol{D}_W} (fb)
;
\draw[2cell]
node[between=fa and b at .5, rotate=45, 2label={below,\ol{D}_p}]{\Rightarrow}
;
\end{tikzpicture}
\end{equation}
The component of $\ol{D}_p$ at $h \in \Yo_A(Z)$ is the component of $F^2_{h,p}$ at $D$:
\[
(Fp \circ \ol{D}_Z)(h)
= ((Fp) (Fh)) D
\fto{({F^2_{h,p}})_D} (F(hp)) D
= (\ol{D}_W \circ \Yo_A(p))(h).
\]
Naturality of $\ol{D}_p$ follows from naturality of $F^2$, and
these components are isomorphisms because $F$ is a
pseudofunctor. In \cref{exercise:D-bar} we ask the reader to
verify the lax unity and lax associativity axioms for $\ol{D}$.
\end{enumerate}
This finishes the definition of $\ol{D}$.
\end{definition}
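This construction is the bicategorical analogue of the classical
inverse to Yoneda evaluation: for a $1$-category $\mathcal{C}$, a
functor $F\cn \mathcal{C}^\op \to \mathbf{Set}$, and an element
$D \in FA$, the same formula defines a natural transformation (a
reminder for comparison, not part of the formal development).

```latex
% Classical analogue of D-bar: the natural transformation
% C(-,A) -> F determined by D in FA.
\[
\ol{D}_W \cn \mathcal{C}(W,A) \to FW,
\qquad
\ol{D}_W(h) = (Fh)(D).
\]
```

In the bicategorical setting the equality $((Fp)(Fh))D = (F(hp))D$ of
the classical case is replaced by the constraint component
$(F^2_{h,p})_D$, which is why $\ol{D}$ is a strong transformation
rather than a strict one.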
\begin{lemma}[Objectwise Yoneda]\label{yoneda-bicat-objectwise}\index{Yoneda!objectwise!for a pseudofunctor}\index{pseudofunctor!objectwise Yoneda}\index{Bicategorical!Objectwise - Yoneda Lemma}
For each $A \in \B$, the functor
\[
e_A \cn \Str(\Yo_A,F) \to FA
\]
is an equivalence of categories.
\end{lemma}
\begin{proof}
For each $D \in FA$, we have the strong transformation $\ol{D}$
constructed in \cref{definition:e-inverse-D}. By definition,
$e_A(\ol{D}) = \ol{D}_A 1_A = (F1_A)D$ and we have the isomorphism
\[
e_A(\ol{D}) = (F1_A)D \fto{(F^0_D)^\inv} (1_{FA})D = D.
\]
Thus $e_A$ is essentially surjective.
We noted in \cref{explanation:str-yoA-F-modification} that
\eqref{eq:Gamma-p-1} and \eqref{eq:Gamma-p-2} imply that the
components of each modification $\Ga\cn \theta \to \theta'$ are
determined by $e_A(\Ga) = \Ga_{A;1_A}$. Therefore $e_A$ is fully
faithful.
\end{proof}
In the special case $F = \Yo_B$, the Yoneda pseudofunctor gives an
inverse for $e_A$, as follows.
\begin{lemma}[Yoneda Embedding]\label{lemma:yoneda-embedding-bicat}\index{Yoneda!embedding!bicategorical}\index{Bicategorical!Yoneda Embedding}
For each pair of objects $A$ and $B$ in $\B$, the Yoneda
pseudofunctor is an equivalence of categories
\[
\Yo\cn \B(A,B) \fto{\hty} \Str(\Yo_A,\Yo_B)
\]
inverse to $e_A$. Thus $\Yo\cn \B \to \Bicatps(\B^\op,\Cat)$ is a
local equivalence.
\end{lemma}
\begin{proof}
For a $1$-cell $f\cn A \to B$ in $\B$ we have $e_A(\Yo_f)= f 1_A$.
For a $2$-cell $\gamma \cn f \to f'$ in $\B(A,B)$ we have $e_A(\Yo_\gamma)
= \gamma * 1_{1_A}$. Thus the left unitor defines a natural
isomorphism
\[
\ell \cn e_A \circ \Yo \Rightarrow \Id_{\B(A,B)}.
\]
For a strong transformation $\theta\cn \Yo_A \to \Yo_B$ and for each
$1$-cell $p\cn W \to A$ in $\B$, we have
\[
\Yo_B(p)(\theta_A1_A) = (\theta_A 1_A) \circ p = \Yo_{\theta_A 1_A}(p).
\]
Therefore, by combining the left unitor $\ell$ with
\eqref{str-theta-iso} we have an isomorphism
$\Yo_{\theta_A 1_A}(p) \iso \theta_W(p)$. The discussion of
modifications in \cref{explanation:str-yoA-F-modification} shows
that these isomorphisms define an invertible modification
$\Yo_{\theta_A 1_A} \to \theta$ for each $\theta$, and these assemble
to a natural isomorphism
\[
\Yo \circ e_A \Rightarrow \Id_{\Str(\Yo_A,\Yo_B)}.
\]
Thus $e_A$ and $\Yo$ are inverse equivalences of $1$-categories.
\end{proof}
Examining the proof of \cref{lemma:yoneda-embedding-bicat}, we note
that the Yoneda embedding is a local \emph{isomorphism} if $\B$ has
trivial unitors. Thus we have the following.
\begin{corollary}\label{corollary:yoneda-embedding-2-cat}\index{Yoneda!embedding!2-categorical}\index{2-categorical!Yoneda Embedding}
If $\B$ is a $2$-category, then
\[
\Yo \cn \B(A,B) \to \Str(\Yo_A,\Yo_B)
\]
is an isomorphism of categories.
\end{corollary}
Now we return to the study of general pseudofunctors $F \cn \B^\op \to
\Cat$.
\begin{definition}\label{definition:ef}
Suppose $f\cn C \to A$ is a $1$-cell in $\B$. We define a natural
isomorphism $e_f$ filling the following diagram in $\Cat$.
\[
\begin{tikzpicture}[x=35mm,y=20mm,baseline={(0,-1).base}]
\draw[0cell]
(0,0) node (a) {\Str(\Yo_A,F)}
(1,0) node (b) {\Str(\Yo_C,F)}
(0,-1) node (c) {FA}
(1,-1) node (d) {FC}
;
\draw[1cell]
(a) edge node {(\Yo_f)^*} (b)
(c) edge['] node {Ff} (d)
(a) edge['] node {e_A} (c)
(b) edge node {e_C} (d)
;
\draw[2cell]
node[between=c and b at .5, rotate=45, 2label={below,e_f}]{\Rightarrow}
;
\end{tikzpicture}
\]
In this diagram, $(\Yo_f)^*$ is the represented functor given by
precomposition with the strong transformation $\Yo_f$.
For each $\theta \in \Str(\Yo_A,F)$, the left-bottom composite sends
$\theta$ to the object
\[
(Ff \circ e_A)(\theta) = (Ff)(\theta_A1_A)
\]
in $FC$. The top-right
composite sends $\theta$ to
\[
(e_C \circ (\Yo_f)^*)(\theta) = e_C(\theta \circ \Yo_f) =
\theta_C(f\, 1_C).
\]
We have a description of the strong naturality constraint $\theta_f$
in \cref{explanation:str-yoA-F-transformation} (take $p$ there to be
the $1$-cell $f\cn C \to A$). We define the component $e_{f;\theta}$
to be the composite of three isomorphisms in $FC$:
\begin{equation}\label{ef-composite}
(Ff)(\theta_A1_A) \fto{\theta_{f;1_A}} \theta_C(1_A\,f)
\fto{\theta_C(\ell)} \theta_C(f) \fto{\theta_C(r^\inv)} \theta_C(f 1_C).
\end{equation}
Note, because $\theta_C$ is functorial, we have $\theta_C(r^\inv)
\circ \theta_C(\ell) = \theta_C(r^\inv \ell)$.
To discuss naturality of $e_{f;\theta}$ with respect to
modifications $\Gamma\cn \theta \to \theta'$, we first note
\[
(Ff \circ e_A)(\Ga) = (Ff)(\Ga_{A;1_A})
\]
and
\[
(e_C \circ (\Yo_f)^*)(\Ga) = e_C(\Ga * 1_{\Yo_f}) = \Ga_{C;f 1_C}.
\]
In the context of \cref{explanation:str-yoA-F-modification}, we take
$p$ to be the $1$-cell $f\cn C \to A$ and then the diagrams
\eqref{eq:Gamma-p-1} and \eqref{eq:Gamma-p-2} combine to show that
the composite \eqref{ef-composite} is natural with respect to
$\Ga\cn \theta \to \theta'$. This finishes the definition of $e_f$.
\end{definition}
Now we come to the Yoneda Lemma for bicategories. If
$F\cn \B^\op \to \Cat$ is a pseudofunctor, then we apply
\cref{representable-pseudofunctor} to
$F \in \Bicatps(\B^\op,\Cat)^\op$ to obtain a pseudofunctor
\[
\Str(-,F)\cn \Bicatps(\B^\op,\Cat)^\op \to \Cat.
\]
Composing with (the opposite of) the Yoneda pseudofunctor, we thus
have a pseudofunctor
\[
\Str(\Yo_{(-)},F) \cn \B^\op \to \Cat.
\]
The Bicategorical Yoneda Lemma compares this pseudofunctor with $F$ itself.
\begin{lemma}[Yoneda]\label{lemma:yoneda-bicat}\index{Yoneda!Lemma!bicategorical}\index{Bicategorical!Yoneda Lemma}\index{strong transformation!invertible}\index{pseudofunctor!Bicategorical Yoneda Lemma}
Suppose $F\cn \B^\op \to \Cat$ is a pseudofunctor. The functors
$e_A$ and natural transformations $e_f$ assemble to form an
invertible strong transformation
\[
e\cn \Str(\Yo_{(-)}, F) \to F(-).
\]
\end{lemma}
\begin{proof}
For $A \in \B$, we have described the functors $e_A$ in
\cref{definition:eA} and proved they are invertible, i.e.,
equivalences of categories, in \cref{yoneda-bicat-objectwise}. For
$1$-cells $f\cn C \to A$ in $\B$, we have described the natural
isomorphisms $e_f$ in \cref{definition:ef}. The only remaining step
is to show that the components $e_A$ and $e_f$ satisfy the lax unity
and lax naturality axioms of \cref{definition:lax-transformation}.
We begin with the lax unity axiom
\eqref{unity-transformation-pasting}. Since the unitors in $\Cat$
are identities, we must show that the composites in the following
two pasting diagrams are equal.
\begin{equation}\label{eq:e-lax-unity}
\begin{tikzpicture}[x=35mm,y=20mm,baseline={(0,-1).base}]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (a) {\Str(\Yo_A,F)}
(1,0) node (b) {\Str(\Yo_A,F)}
(0,-1) node (c) {FA}
(1,-1) node (d) {FA}
;
\draw[1cell]
(a) edge[bend left] node {(\Yo_{1_A})^*} (b)
(c) edge[',bend right] node {1_{FA}} (d)
(a) edge['] node {e_A} (c)
(b) edge node {e_A} (d)
;
}
\begin{scope}[shift={(0,0)}]
\boundary
\draw[1cell]
(c) edge[bend left] node {F1_A} (d)
;
\draw[2cell]
node[between=c and b at .5, shift={(0,.37)}, rotate=45, 2label={below,e_{1_A}}]{\Rightarrow}
node[between=c and d at .5, shift={(0,0)}, rotate=90, 2label={below,F^0}]{\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(1.8,0)}]
\boundary
\draw[1cell]
(a) edge[',bend right] node {1_{\Str(\Yo_A,F)}} (b)
;
\draw[2cell]
node[between=a and b at .5, shift={(-.12,0)}, rotate=90, 2label={below,(\Yo^0)^*}]{\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
The two composites indicated are $2$-cells in $\Cat$, i.e., natural
transformations. Therefore it suffices to show that their
components at each $\theta\in \Str(\Yo_A,F)$ are equal. For each
such $\theta$, the two diagrams in \eqref{eq:e-lax-unity} describe
two different morphisms in $FA$ from $e_A(\theta) = \theta_A 1_A$ to
$e_A(\theta \Yo_{1_A}) = \theta_A(1_A1_A)$. Recalling the
definition of $e_{1_A}$ from \eqref{ef-composite}, the composite at
left is
\[
\theta_A1_A \fto{F^0_{\theta_A1_A}} (F 1_A)(\theta_A 1_A) \fto{\theta_{1_A;1_A}}
\theta_A(1_A 1_A) \fto{\theta_A(r^\inv\ell)} \theta_A(1_A1_A).
\]
Since $\ell_{1_A} = r_{1_A}$ by \cref{bicat-l-equals-r}, this
composite reduces to
\begin{equation}\label{lhs:e-lax-unity}
\theta_{1_A; 1_A} F^0_{\theta_A 1_A}.
\end{equation}
On
the right-hand side of \eqref{eq:e-lax-unity}, we have
\[
\theta_A 1_A \fto{(1_{e_A} * (\Yo^0)^*)_\theta} (e_A
\Yo_{1_A}^*)\theta = (\theta \Yo_{1_A})_A(1_A) = \theta_A(1_A1_A).
\]
Unpacking the notation, we find
\begin{equation}\label{rhs:e-lax-unity}
(1_{e_A} * (\Yo^0)^*)_\theta = \theta_A((\Yo^0)_{A;1_A})
\end{equation}
and, recalling \cref{definition:Yo0},
\[
(\Yo^0)_{A;1_A} =\ell^\inv_{A;1_A}\cn 1_A \to 1_A1_A.
\]
To show the composites \eqref{lhs:e-lax-unity} and
\eqref{rhs:e-lax-unity} are equal, recall the lax unity axiom
\eqref{unity-transformation-pasting} for $\theta$. This axiom
asserts that, for each object $W \in \B$, the following pasting
diagrams have equal composites.
\begin{equation}\label{eq:theta-lax-unity}
\begin{tikzpicture}[x=35mm,y=20mm,baseline={(0,-1).base}]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (a) {\Yo_A(W)}
(1,0) node (b) {\Yo_A(W)}
(0,-1) node (c) {FW}
(1,-1) node (d) {FW}
;
\draw[1cell]
(a) edge[bend left] node {\Yo_A(1_W)} (b)
(c) edge[',bend right] node {1_{FW}} (d)
(a) edge['] node {\theta_W} (c)
(b) edge node {\theta_W} (d)
;
}
\draw[font=\Large] (1.4,-.3) node{=};
\begin{scope}[shift={(0,0)}]
\boundary
\draw[1cell]
(c) edge[bend left] node {F1_W} (d)
;
\draw[2cell]
node[between=c and b at .5, shift={(0,.37)}, rotate=45, 2label={below,\theta_{1_W}}]{\Rightarrow}
node[between=c and d at .5, shift={(0,0)}, rotate=90, 2label={below,F^0}]{\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(1.8,0)}]
\boundary
\draw[1cell]
(a) edge[',bend right] node {1_{\Yo_A(W)}} (b)
;
\draw[2cell]
node[between=a and b at .5, shift={(-.03,0)}, rotate=90, 2label={below,(\Yo_A)^0}]{\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
Thus for $p\cn W \to A$, we have the following commuting diagram in
$FW$, in which the bottom arrow is $\theta_W$ applied to the unity
constraint $r^\inv_p \cn p \to p\,1_W$ of the representable
pseudofunctor $\Yo_A$.
\[
\begin{tikzpicture}[x=50mm,y=20mm]
\draw[0cell]
(0,0) node (a) {(1_{FW} \theta_W)p}
(.5,0) node (a') {\theta_W p}
(30:1) node (b) {(F1_W \theta_W)p}
++(-30:1) node (c) {\theta_W (p\,1_W)}
;
\path[1cell]
(a) edge node {F^0_{\theta_W p}} (b)
(b) edge node {\theta_{{1_W};p}} (c)
(a) edge[equal] node {} (a')
(a') edge['] node {\theta_W (r^\inv_p)} (c)
;
\draw[2cell]
;
\end{tikzpicture}
\]
For $W = A$ and $p = 1_A$, we have $r^\inv_{1_A} = \ell^\inv_{1_A}$ by
\cref{bicat-l-equals-r}, so this is precisely the equality of
\eqref{lhs:e-lax-unity} and \eqref{rhs:e-lax-unity} we wanted to
show. Therefore we conclude that $e$ satisfies the lax unity axiom.
Now we turn to lax associativity for $e$. Suppose we have a composable pair
of $1$-cells in $\B$,
\[
C \fto{g} B \fto{f} A.
\]
We must show that the composites in the following two pasting
diagrams are equal.
\begin{equation}\label{eq:e-2-cell-transformation-pasting}
\begin{tikzpicture}[x=40mm,y=17mm,baseline={(0,-3).base}]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (a) {\Str(\Yo_A,F)}
(2,0) node (b') {\Str(\Yo_C,F)}
(0,-1) node (c) {FA}
(1,-1.33) node (d) {FB}
(2,-1) node (d') {FC}
;
\draw[1cell]
(a) edge[bend left=16] node (Yfg) {(\Yo_{fg})^*} (b')
(c) edge[',bend right=16] node {Ff} (d)
(d) edge[',bend right=16] node {Fg} (d')
(a) edge['] node {e_A} (c)
(b') edge node {e_C} (d')
;
}
\begin{scope}[shift={(-.0,0)}]
\boundary
\draw[0cell]
(1,-.33) node (b) {\Str(\Yo_B,F)}
;
\draw[1cell]
(a) edge[bend right=16] node[pos=.35] {(\Yo_f)^*} (b)
(b) edge[bend right=16] node[pos=.65] {(\Yo_g)^*} (b')
(b) edge node {e_B} (d)
;
\draw[2cell]
node[between=c and b at .5, shift={(0,-.1)}, rotate=45, 2label={below,e_{f}}]{\Rightarrow}
node[between=d and b' at .5, shift={(0,-.1)}, rotate=45, 2label={below,e_{g}}]{\Rightarrow}
node[between=a and b' at .5, shift={(0,.06)}, rotate=90, 2label={below,(\Yo^2)^*_{f,g}}]{\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(.0,-2.7)}]
\boundary
\draw[1cell]
(c) edge[bend left=16] node (Ffg) {F(fg)} (d')
;
\draw[2cell]
node[between=Yfg and Ffg at .5, shift={(0,0)}, rotate=45, 2label={below,e_{fg}}]{\Rightarrow}
node[between=c and d' at .5, shift={(0,0.06)}, rotate=90, 2label={below,F^2}]{\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
As with the unity axiom, these two composites are $2$-cells in $\Cat$,
i.e., natural transformations, and therefore it suffices to show
their components are equal for each $\theta \in \Str(\Yo_A,F)$. For
such $\theta$, the two diagrams in
\eqref{eq:e-2-cell-transformation-pasting} describe two different
$1$-cells in $FC$ from
\[
((Fg) (Ff) e_A)\theta = (Fg Ff)(\theta_A 1_A)
\]
to
\[
(e_C (\Yo_{fg})^*) \theta = e_C(\theta \Yo_{fg}) = \theta_C((fg) 1_C).
\]
The composite of the diagram at top in
\eqref{eq:e-2-cell-transformation-pasting} is the following.
\begin{equation}\label{e-2-cell-top}
\begin{tikzpicture}[x=50mm,y=25mm,baseline={(0,-2).base}]
\draw[0cell]
(0,0) node (a) {\big((Fg) (Ff)\big) (\theta_A 1_A)}
++(1,0) node (a') {(Fg)(\theta_B(1_A f))}
++(-1,-1) node (b) {(Fg)(\theta_B(f 1_B))}
++ (1,0) node (b') {\theta_C(f(1_B g))}
++(-1,-1) node(c) {\theta_C(f(g1_C))}
++(1,0) node(d) {\theta_C((fg)1_C)}
;
\draw[1cell]
(a) edge node {(Fg)(\theta_{f;1_A})} (a')
(a') edge['] node {(Fg)(\theta_B(r^\inv \ell))} (b)
(b) edge node {(\theta \Yo_f)_g} (b')
(b') edge['] node {\theta_C(1_f * (r^\inv\ell))} (c)
(c) edge node {\theta((\Yo^2)_{f,g})} (d)
;
\draw[2cell]
;
\end{tikzpicture}
\end{equation}
The composite of the diagram at bottom in
\eqref{eq:e-2-cell-transformation-pasting} is the following.
\begin{equation}\label{e-2-cell-bot}
\begin{tikzpicture}[x=20mm,y=18mm,baseline={(0,-1).base}]
\draw[0cell]
(0,0) node (a) {\big((Fg) (Ff)\big)(\theta_A 1_A)}
++(1,-1) node (b) {\big(F(fg)\big)(\theta_A1_A)}
++ (2,0) node (b') {\theta_C(1_A(fg))}
++(1,1) node(c) {\theta_C((fg)1_C)}
;
\draw[1cell]
(a) edge['] node {F^2_{\theta_A 1_A}} (b)
(b) edge node {\theta_{fg;1_A}} (b')
(b') edge['] node {\theta_C(r^\inv \ell)} (c)
;
\draw[2cell]
;
\end{tikzpicture}
\end{equation}
Now we show that these are equal. As discussed in
\cref{representable-transformation}, the component $2$-cell of the
strong transformation $\Yo_f = f_*$ is given by an associator. In
\cref{exercise:e-2-cell-A} we ask the reader to show that the
component $2$-cell $(\theta \Yo_f)_g$ in \eqref{e-2-cell-top} is given
by
\begin{equation}\label{e-2-cell-A}
(Fg)(\theta_B(f1_B)) \fto{\theta_{g;f1_B}} \theta_C((f1_B)g)
\fto{\theta_C(a_{f,1_B,g})} \theta_C(f(1_B g)).
\end{equation}
Next, we have the following commutative diagram in $FC$, which we
explain below.
\[
\begin{tikzpicture}[x=38mm,y=20mm]
\draw[0cell]
(0,0) node (a) {(Fg)(\theta_B(1_Af))}
++(0,-1) node (b) {(Fg)(\theta_B(f))}
++(0,-1) node (c) {(Fg)(\theta_B(f1_B))}
(1,0) node (a') {\theta_C((1_Af)g)}
++(0,-1) node (b') {\theta_C(fg)}
++(0,-1) node (c') {\theta_C((f1_B)g)}
(2.5,0) node (a'') {\theta_C(1_A(fg))}
++(0,-2) node (c'') {\theta_C(f(1_Bg))}
;
\draw[1cell]
(a) edge['] node {(Fg)(\theta_B(\ell))} (b)
(b) edge['] node {(Fg)(\theta_B(r^\inv))} (c)
(a') edge['] node {\theta_C(\ell*1_g)} (b')
(b') edge['] node {\theta_C(r^\inv*1_g)} (c')
(a'') edge node {\theta_C(\ell)} (b')
(b') edge node {\theta_C(1_f*\ell^\inv)} (c'')
%
(a) edge node {\theta_{g;1_Af}} (a')
(b) edge node {\theta_{g;f}} (b')
(c) edge['] node {\theta_{g;f1_B}} (c')
(c') edge['] node {\theta_C(a_{f,1_B,g})} (c'')
(a') edge node {\theta_C(a_{1_A,f,g})} (a'')
;
\end{tikzpicture}
\]
Using \eqref{e-2-cell-A}, the left-bottom composite is equal to
\[
(\theta \Yo_f)_g \circ \big((Fg)(\theta_B(r^\inv \ell))\big).
\]
The two left-hand squares commute by naturality of $\theta_g$ with
respect to morphisms, i.e., $2$-cells, in $\Yo_A(B) = \B(B,A)$. The
two right-hand triangles commute by the middle unity axiom
\eqref{bicat-unity} and the left unity property
in \cref{bicat-left-right-unity}.
Recalling \cref{definition:Yo2}, the $2$-cell component $\Yo^2$ is
given by the inverse associator. Combining this with the preceding
observations (making use of functoriality of $\theta_C$ and
canceling $1_f*\ell$ with $1_f*\ell^\inv$) we have shown
that the composite \eqref{e-2-cell-top} is equal to the following composite.
\begin{equation}\label{e-2-cell-top-2}
\begin{tikzpicture}[x=38mm,y=20mm,baseline={(0,.8).base}]
\draw[0cell]
(0,-1) node (z) {\big((Fg)(Ff)\big)(\theta_A1_A)}
(0,0) node (a) {(Fg)(\theta_B(1_Af))}
(0,1) node (a') {\theta_C((1_Af)g)}
(1.15,1) node (a'') {\theta_C(1_A(fg))}
(2,1) node (b') {\theta_C(fg)}
(2,0) node (c'') {\theta_C(f(g1_C))}
(2,-1) node (w) {\theta_C((fg)1_C)}
;
\draw[1cell]
(z) edge node {(Fg)(\theta_{f;1_A})} (a)
(a'') edge node {\theta_C(\ell)} (b')
(b') edge node {\theta_C(1_f*r^\inv)} (c'')
%
(a) edge node {\theta_{g;1_Af}} (a')
(a') edge node {\theta_C(a_{1_A,f,g})} (a'')
(c'') edge node {\theta_C(a^\inv_{f,g,1_C})} (w)
;
\end{tikzpicture}
\end{equation}
Now we are ready to compare \eqref{e-2-cell-top-2} with
\eqref{e-2-cell-bot}. In \cref{exercise:e-2-cell-B} we ask the
reader to verify that the strong naturality axiom for $\theta$ with
respect to $f$ and $g$ implies that the following diagram commutes
in $FC$.
\begin{equation}\label{e-2-cell-B}
\begin{tikzpicture}[x=20mm,y=15mm,baseline={(0,1).base}]
\draw[0cell]
(0,0) node (a) {\big((Fg)(Ff)\big) (\theta_A 1_A)}
(1,1) node (b) {\big(Fg\big)(\theta_B(1_A f))}
(3,1) node (c) {\theta_C((1_A f) g)}
(4,0) node (d) {\theta_C(1_A(fg))}
(2,-1) node (e) {F(fg)(\theta_A1_A)}
;
\draw[1cell]
(a) edge node {(Fg)(\theta_{f;1_A})} (b)
(b) edge node {\theta_{g;1_A f}} (c)
(c) edge node {\theta_C(a_{1_A,f,g})} (d)
(a) edge['] node {F^2_{\theta_A 1_A}} (e)
(e) edge['] node {\theta_{fg;1_A}} (d)
;
\end{tikzpicture}
\end{equation}
This diagram, together with functoriality of $\theta_C$ and the
right unity property in \cref{bicat-left-right-unity}, shows that
the composites in \eqref{e-2-cell-bot} and \eqref{e-2-cell-top-2}
are equal. Thus the composites of the two pasting diagrams in
\cref{eq:e-2-cell-transformation-pasting} are equal. This completes
the verification of the lax associativity axiom for $e$. Therefore
$e$ is a strong transformation with invertible components. By
\cref{proposition:adjoint-equivalence-componentwise}, this implies
that $e$ is an invertible strong transformation.
\end{proof}
In \cref{exercise:e-natural} we ask the reader to prove the following.
\begin{proposition}\label{proposition:e-natural}
The strong invertible transformations $e$ are natural with respect
to $1$-cells, i.e., strong transformations, $\eta\cn F \to F'$ in
$\Bicatps(\B^\op,\Cat)$.
\end{proposition}
For an object $A$, a $1$-cell $f$, and a $2$-cell $\alpha$ in $\B$, we have the
corepresented pseudofunctor, respectively strong transformation,
respectively modification given by
\begin{itemize}
\item $\Yo^A = \B(A,-)$,
\item $\Yo^f = f^*$, and
\item $\Yo^\alpha = \alpha^*$.
\end{itemize}
These can be obtained by applying the content of
\cref{sec:yoneda-bicat-definition} to $\B^\op$, and thus $\Yo^{(-)}$
defines a pseudofunctor $\B^\op \to \Bicatps(\B,\Cat)$. As a corollary of
the Bicategorical Yoneda Lemma \ref{lemma:yoneda-bicat}, we have the following.
\begin{corollary}
For any pseudofunctor $F\cn \B \to \Cat$,
evaluation at the unit $1$-cell defines an invertible strong
transformation
\[
\Str(\Yo^{(-)},F) \to F.
\]
\end{corollary}
\section{The Coherence Theorem}\label{sec:coherence}
\begin{theorem}[Coherence]\label{theorem:bicat-coherence}\index{biequivalence!strictification}\index{strictification}\index{Bicategorical!Coherence Theorem}\index{Theorem!Bicategorical Coherence}\index{2-category!as a strictified bicategory}
Every bicategory is biequivalent to a $2$-category.
\end{theorem}
\begin{proof}
Suppose $\B$ is a bicategory. The Bicategorical Yoneda Embedding
\ref{lemma:yoneda-embedding-bicat} shows that
\[
\Yo\cn \B \to \Bicatps(\B^\op,\Cat)
\]
is a local equivalence. Let $\st{\B}$ denote the essential image of
$\Yo$. That is, $\st{\B}$ has objects given by the
represented pseudofunctors $\Yo_A$, and is the full sub-$2$-category
of $\Bicatps(\B^\op,\Cat)$ on these objects. Then by the
Whitehead Theorem for Bicategories \ref{theorem:whitehead-bicat},
$\Yo\cn \B \to \st{\B}$ is a biequivalence.
\end{proof}
If $\B$ is a $2$-category, recall from
\cref{corollary:yoneda-embedding-2-cat} that $\Yo$ is a $2$-functor.
\cref{corollary:yoneda-embedding-2-cat} shows that $\Yo$ is a local
isomorphism when $\B$ is a $2$-category, and therefore by the
$2$-categorical Whitehead Theorem \ref{theorem:whitehead-2-cat} we
obtain the following.
\begin{corollary}\index{2-equivalence}
If $\B$ is a $2$-category, then the strictification
\[
\B \to \st{\B}
\]
of \cref{theorem:bicat-coherence} is a $2$-equivalence.
\end{corollary}
\section{Exercises and Notes}
\begin{exercise}\label{exercise:D-bar}
Return to \cref{definition:e-inverse-D} and verify the following for
$\ol{D}$.
\begin{enumerate}
\item The lax unity axiom \eqref{unity-transformation-pasting}
follows from the unity axiom \eqref{f0-bicat} for $F$.
\item The lax associativity axiom
\eqref{2-cell-transformation-pasting} follows from the lax
associativity axiom \eqref{f2-bicat} for $F$.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exercise:e-2-cell-A}
In the proof of \cref{lemma:yoneda-bicat}, use the formula
\cref{def:lax-tr-comp} for the $2$-cell constraint of a composite
strong transformation to show that the description of
$(\theta \Yo_f)_g$ given in \eqref{e-2-cell-A} is correct.
\end{exercise}
\begin{exercise}\label{exercise:e-2-cell-B}
In the proof of \cref{lemma:yoneda-bicat}, use the strong naturality
axiom \eqref{2-cell-transformation-pasting} for $\theta$ together
with the pseudofunctoriality constraint of $\Yo$ given in
\cref{definition:Yo2} to show that \eqref{e-2-cell-B} is correct.
\end{exercise}
\begin{exercise}\label{exercise:e-natural}
Give a proof of \cref{proposition:e-natural}.
\end{exercise}
\begin{exercise}\label{exercise:e-inverse}
Extend the construction $\ol{D}$ in \cref{definition:e-inverse-D} to
define a strong transformation that is inverse to the evaluation
$e$.
\end{exercise}
\begin{exercise}\label{exercise:coherence-corollary}
Suppose that $\B$ is a $2$-category. Use the Bicategorical Yoneda
Lemma \ref{lemma:yoneda-bicat} to show that every pseudofunctor
$\B^\op \to \Cat$ is equivalent, in the $2$-category
$\Bicatps(\B^\op,\Cat)$, to a $2$-functor.
\end{exercise}
\subsection*{Notes}
\begin{note}[Discussion of Literature]
Our approach to the Bicategorical Yoneda Lemma
\ref{lemma:yoneda-bicat} follows brief sketches that have appeared in
the literature beginning with \cite{street_cat-structures}. A
detailed treatment has also appeared in \cite{bakovicYoneda}, which
includes \cref{exercise:e-inverse}, the construction of an inverse to
the evaluation $e$.
\end{note}
\begin{note}
\cref{exercise:coherence-corollary} appears in \cite[Corollary 9.2]{street_cat-structures}.
\end{note}
\begin{note}[Naturality of Coherence]
The Coherence Theorem \ref{theorem:bicat-coherence} that one obtains
from the Yoneda Lemma does not give a statement about naturality with
respect to pseudofunctors $G \cn \B \to \B'$. Gurski \cite[Chapter
2]{gurski-coherence} discusses coherence for both bicategories and
pseudofunctors following the monoidal case by Joyal-Street
\cite{joyal-street}.
\end{note}
\chapter{Bicategorical Limits and Nerves}\label{ch:constructions}
Limits and the Grothendieck nerve are fundamental concepts of $1$-category theory. In this chapter we discuss their bicategorical and $2$-categorical counterparts. Lax bilimits, lax limits, pseudo bilimits, and pseudo limits are discussed in \Cref{sec:bilimits}. Their colimit analogues are discussed in \Cref{sec:bicolimits}. $2$-limits and $2$-colimits are discussed in \Cref{sec:iilimits}. The Duskin nerve, which associates to each small bicategory a simplicial set, is the subject of \Cref{sec:duskin-nerves}. The Lack-Paoli $2$-nerve, which associates to each small bicategory a simplicial category, is discussed in \Cref{sec:iinerve}.
We remind the reader that the Bicategorical Pasting \Cref{thm:bicat-pasting-theorem} and \Cref{conv:boundary-bracketing} are used to interpret pasting diagrams.
\section{Bilimits}\label{sec:bilimits}
In this section we discuss lax and pseudo (bi)limits for bicategories.
\begin{motivation}\label{mot:bilimits}
Suppose $F : \C \to \D$ is a functor between $1$-categories with $\C$ small. Recall that a \emph{limit}\index{limit} of $F$ is a pair $(L,\pi)$ with
\begin{itemize}
\item $L$ an object in $\D$, and
\item $\pi : \conof{L} \to F$ a natural transformation from the constant functor $\conof{L} : \C\to\D$ at $L$
\end{itemize}
such that, for each object $X$ in $\D$, the map
\[\begin{tikzcd}
\D(X,L) \ar{r}{\pi_*}[swap]{\cong} & \Nat(\conof{X},F)\end{tikzcd}\]
given by post-composition with $\pi$ is a bijection. To extend the concept of limits to bicategories, we will use
\begin{itemize}
\item the constant pseudofunctor in place of the constant functor $\conof{L}$, and
\item lax transformations in place of natural transformations.
\end{itemize}
Moreover, the bicategorical analogue of the map $\pi_*$ is a functor. We will ask that it be an equivalence of categories instead of a bijection. There are several variations where we ask that $\pi_*$ be an isomorphism of categories and/or use strong transformations.\dqed
\end{motivation}
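To anchor the comparison, here is the simplest nontrivial instance of the $1$-categorical definition just recalled. This is an added illustration, writing $\D$ for the target $1$-category: when $\C$ is the discrete category with two objects, a natural transformation $\conof{X} \to F$ is just a pair of morphisms, and the defining bijection exhibits the limit as a product.

```latex
% Added sketch: \C discrete with objects 1 and 2.
% A natural transformation \conof{X} -> F is a pair of morphisms
% X -> F1 and X -> F2, so the bijection defining the limit becomes
\[
\D(X,L) \;\xrightarrow[\cong]{\pi_*}\; \Nat(\conof{X},F)
\;\cong\; \D(X,F1) \times \D(X,F2),
\]
% exhibiting L as the product F1 x F2.
```

The bicategorical definitions below replace this bijection by an equivalence or isomorphism of categories.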
\begin{definition}\label{def:lax-cone}
Suppose $F : \A\to\B$ is a lax functor between bicategories with $\A_0$ a set, and $L$ is an object in $\B$.
\begin{enumerate}
\item Define the category\label{notation:laxcone}
\[\laxcone(\conof{L},F) = \Bicat(\A,\B)(\conof{L},F)\]
in which:
\begin{itemize}
\item $\conof{L} : \A \to \B$ is the constant pseudofunctor at $L$ in \Cref{def:constant-pseudofunctor}.
\item $\Bicat(\A,\B)$ is the bicategory in \Cref{thm:bicat-of-lax-functors}.
\end{itemize}
An object in $\laxcone(\conof{L},F)$ is called a \index{lax!cone}\emph{lax cone of $L$ over $F$}.
\item Suppose $F$ is a pseudofunctor. Define the category\label{notation:pscone}
\[\pscone(\conof{L},F) = \Bicatps(\A,\B)(\conof{L},F)\]
in which $\Bicatps(\A,\B)$ is the bicategory in \Cref{subbicat-pseudofunctor}. An object in $\pscone(\conof{L},F)$ is called a \index{pseudocone}\emph{pseudocone of $L$ over $F$}.\defmark
\end{enumerate}
\end{definition}
\begin{explanation}\label{expl:lax-cone}
In \Cref{def:lax-cone}, a lax cone $\pi : \conof{L}\to F$ is a lax transformation as in \Cref{definition:lax-transformation}. More explicitly, it is determined by the following data:
\begin{description}
\item[Components] It has a component $1$-cell $\pi_A \in \B(L,FA)$ for each object $A\in \A$.
\item[Lax Naturality Constraints] For each $1$-cell $f\in\A(A,A')$, it has a component $2$-cell
\[\begin{tikzpicture}[xscale=2.5, yscale=1.5]
\draw[0cell]
(0,0) node (L1) {L}
(1,0) node (L2) {L}
(0,-1) node (F1) {FA}
(1,-1) node (F2) {FA'}
;
\draw[1cell]
(L1) edge node (1l) {1_L} (L2)
(L1) edge node[swap]{\pi_A} (F1)
(L2) edge node{\pi_{A'}} (F2)
(F1) edge node (ff) {Ff} (F2)
;
\draw[2cell]
node[between=ff and 1l at .5, rotate=45, font=\Large] (pi) {\Rightarrow}
(pi) node[below right] {\pi_f}
;
\end{tikzpicture}\]
in $\B(L,FA')$.
\end{description}
These data are required to satisfy the following three conditions.
\begin{description}
\item[Naturality of $\pi_f$] For each $2$-cell $\theta : f \to g$ in $\A(A,A')$, the equality
\[\begin{tikzpicture}[xscale=2.5, yscale=1.6]
\draw[0cell]
(0,0) node (F1) {FA}
($(F1) + (1,0)$) node (F2) {FA'}
($(F1)+(0,1)$) node (G1) {L}
($(F1)+(1,1)$) node (G2) {L}
($(F1)+(1.5,.5)$) node[font=\huge] {=}
($(F1)+(2,0)$) node (F3) {FA}
($(F3) + (1,0)$) node (F4) {FA'}
($(F3)+(0,1)$) node (G3) {L}
($(F3)+(1,1)$) node (G4) {L}
;
\draw[1cell]
(G1) edge node[swap]{\pi_A} (F1)
(F1) edge[bend right] node[swap] (ff) {Ff} (F2)
(G1) edge[bend left] node (i) {1_L} (G2)
(G2) edge node{\pi_{A'}} (F2)
(G3) edge node[swap]{\pi_A} (F3)
(F3) edge[bend right] node[swap] (ff2) {Ff} (F4)
(F3) edge[bend left=40] node (fg) {Fg} (F4)
(G3) edge[bend left] node (ii) {1_L} (G4)
(G4) edge node{\pi_{A'}} (F4)
;
\draw[2cell]
node[between=i and ff at .5, rotate=45, font=\Large] (pif) {\Rightarrow}
(pif) node[below right] {\pi_f}
node[between=ii and fg at .5, rotate=45, font=\Large] (pig) {\Rightarrow}
(pig) node[below right] {\pi_g}
node[between=F3 and F4 at .4, rotate=90, font=\Large] (ft) {\Rightarrow}
(ft) node[right] {F\theta}
;
\end{tikzpicture}\]
of pasting diagrams holds in $\B(L,FA')$.
\item[Lax Unity] For each object $A\in\A$, the pasting diagram equality
\[\begin{tikzpicture}[xscale=2.5, yscale=1.6]
\draw[0cell]
(0,0) node (F1) {FA}
($(F1)+(1,0)$) node (F2) {FA}
($(F1)+(0,1)$) node (G1) {L}
($(F1)+(1,1)$) node (G2) {L}
($(F1)+(1.5,.5)$) node[font=\huge] {=}
($(F1)+(2,0)$) node (F3) {FA}
($(F3) + (1,0)$) node (F4) {FA}
($(F3)+(0,1)$) node (G3) {L}
($(F3)+(1,1)$) node (G4) {L}
;
\draw[1cell]
(G1) edge[bend left] node (il) {1_L} (G2)
(G2) edge node{\pi_A} (F2)
(G1) edge node[swap]{\pi_A} (F1)
(F1) edge[bend right] node[swap] {1_{FA}} (F2)
(F1) edge[bend left=40] node (ifa) {F1_{A}} (F2)
(G3) edge[bend left] node{1_L} (G4)
(G4) edge node{\pi_A} (F4)
(G3) edge node[swap]{\pi_A} (F3)
(F3) edge[bend right] node[swap]{1_{FA}} (F4)
(G3) edge node[swap,inner sep=1pt, pos=.3]{\pi_A} (F4)
;
\draw[2cell]
node[between=il and ifa at .5, rotate=45, font=\Large] (pi) {\Rightarrow}
(pi) node[below right] {\pi_{1_A}}
node[between=F1 and F2 at .4, rotate=90, font=\Large] (f0) {\Rightarrow}
(f0) node[right] {F^0}
node[between=F3 and G4 at .25, rotate=45, font=\Large] (ell) {\Rightarrow}
(ell) node[below right] {\ell}
node[between=F3 and G4 at .75, rotate=45, font=\Large] (r) {\Rightarrow}
(r) node[above left] {r^{-1}}
;
\end{tikzpicture}\]
holds in $\B(L,FA)$.
\item[Lax Naturality] For $1$-cells $f\in\A(A,A')$ and $g\in\A(A',A'')$, the equality
\[\begin{tikzpicture}[xscale=1.9, yscale=1.6]
\draw[0cell]
(0,0) node (F1) {FA}
($(F1)+(1,-.3)$) node (F2) {FA'}
($(F1) + (2,0)$) node (F3) {FA''}
($(F1)+(0,1)$) node (G1) {L}
($(F1)+(2,1)$) node (G2) {L}
($(F1)+(2.7,.25)$) node[font=\huge] {=}
($(F1)+(3.4,0)$) node (F4) {FA}
($(F4)+(1,-.3)$) node (F5) {FA'}
($(F4)+(2,0)$) node (F6) {FA''}
($(F4)+(0,1)$) node (G3) {L}
($(F5)+(0,1)$) node (G4) {L}
($(F6)+(0,1)$) node (G5) {L}
;
\draw[1cell]
(F1) edge[bend right=10] node[swap]{Ff} (F2)
(F2) edge[bend right=10] node[swap]{Fg} (F3)
(F1) edge[bend left=25] node (fgf) {F(gf)} (F3)
(G1) edge[bend left=25] node (il) {1_L} (G2)
(G1) edge node[swap]{\pi_A} (F1)
(G2) edge node{\pi_{A''}} (F3)
(F4) edge[bend right=10] node[swap]{Ff} (F5)
(F5) edge[bend right=10] node[swap]{Fg} (F6)
(G3) edge[bend left=25] node{1_L} (G5)
(G3) edge[bend right=10] node[swap,inner sep=1pt, pos=.4]{1_L} (G4)
(G4) edge[bend right=10] node[swap, inner sep=1pt, pos=.6]{1_L} (G5)
(G3) edge node[swap]{\pi_A} (F4)
(G4) edge node[swap, inner sep=1pt, pos=.3]{\pi_{A'}} (F5)
(G5) edge node{\pi_{A''}} (F6)
;
\draw[2cell]
node[between=il and fgf at .5, rotate=45, font=\Large] (pgf) {\Rightarrow}
(pgf) node[below right] {\pi_{gf}}
node[between=F1 and F3 at .5, rotate=90, font=\Large] (f2) {\Rightarrow}
(f2) node[right] {F^2}
node[between=G3 and G5 at .5, rotate=90, font=\Large] (ell) {\Rightarrow}
(ell) node[right] {\ell}
node[between=F4 and G4 at .4, rotate=45, font=\Large] (pf) {\Rightarrow}
(pf) node[below right] {\pi_f}
node[between=F5 and G5 at .4, rotate=45, font=\Large] (pg) {\Rightarrow}
(pg) node[below right] {\pi_g}
;
\end{tikzpicture}\]
of pasting diagrams holds in $\B(L,FA'')$.
\end{description}
Moreover, if $F$ is a pseudofunctor, then a pseudocone $\pi : \conof{L}\to F$ is a strong transformation, which is as above with each component $2$-cell $\pi_f$ invertible.\dqed
\end{explanation}
\begin{explanation}\label{expl:lax-cone-morphism}
In \Cref{def:lax-cone}, a morphism of lax cones
\[\Gamma : \pi \to \phi \in \laxcone(\conof{L},F)\]
is a modification between lax transformations as in \Cref{def:modification}. More explicitly, it consists of a component $2$-cell
\[\Gamma_A : \pi_A \to \phi_A \inspace \B(L,FA)\]
for each object $A$ in $\A$, that satisfies the modification axiom
\[\begin{tikzpicture}[xscale=2, yscale=2]
\draw[0cell]
(0,0) node (F1) {FA}
($(F1) + (1,0)$) node (F2) {FA'}
($(F1)+(0,1)$) node (G1) {L}
($(F1)+(1,1)$) node (G2) {L}
($(F1)+(2,.5)$) node[font=\huge] {=}
($(F1)+(3,0)$) node (F3) {FA}
($(F3) + (1,0)$) node (F4) {FA'}
($(F3)+(0,1)$) node (G3) {L}
($(F3)+(1,1)$) node (G4) {L}
;
\draw[1cell]
(G1) edge node{1_L} (G2)
(G2) edge[bend left=40] node (phiap) {\phi_{A'}} (F2)
(G2) edge[bend right=40] node[swap, inner sep=1pt, pos=.5] (piap) {\pi_{A'}} (F2)
(G1) edge[bend right=40] node[swap] (pia) {\pi_A} (F1)
(F1) edge node[swap]{Ff} (F2)
(G3) edge node{1_L} (G4)
(G4) edge[bend left=40] node (phiapii) {\phi_{A'}} (F4)
(G3) edge[bend left=40] node[pos=.5] (phia) {\phi_{A}} (F3)
(G3) edge[bend right=40] node[swap] (piaii){\pi_A} (F3)
(F3) edge node[swap]{Ff} (F4)
;
\draw[2cell]
node[between=pia and piap at .6, rotate=45, font=\Large] (pf) {\Rightarrow}
(pf) node[above left] {\pi_{f}}
node[between=G2 and F2 at .6, rotate=0, font=\Large] (Gap) {\Rightarrow}
(Gap) node[above] {\Gamma_{A'}}
node[between=G3 and F3 at .6, rotate=0, font=\Large] (Ga) {\Rightarrow}
(Ga) node[above] {\Gamma_A}
node[between=phia and phiapii at .5, rotate=45, font=\Large] (phif) {\Rightarrow}
(phif) node[above left] {\phi_{f}}
;
\end{tikzpicture}\]
for each $1$-cell $f \in \A(A,A')$. A morphism of pseudocones is a modification between strong transformations, which is described in exactly the same way.\dqed
\end{explanation}
To define bilimits, we need a suitable analogue of the map $\pi_*$ in \Cref{mot:bilimits}, for which we need some preliminary observations.
\begin{lemma}\label{constant-induced-transformation}\index{strong transformation!induced by a 1-cell}
Suppose $f \in \B(X,Y)$ is a $1$-cell, and $\A$ is another bicategory. Then $f$ induces a strong transformation
\begin{equation}\label{eq:conof-f-laxnat}
\conof{X} \fto{\conof{f}} \conof{Y} : \A \to \B
\end{equation}
with:
\begin{itemize}
\item $f$ as each component $1$-cell;
\item lax naturality constraint
\[(\conof{f})_g = r_f^{-1}\ell_f : 1_Yf \to f1_X\] for each $1$-cell $g$ in $\A$.
\end{itemize}
\end{lemma}
\begin{proof}
To be more precise:
\begin{itemize}
\item The strong transformation $\conof{f}$ has component $1$-cell \[(\conof{f})_A = f : (\conof{X})_A = X \to Y = (\conof{Y})_A\] for each object $A$ in $\A$.
\item For each $1$-cell $g \in \A(A,B)$, its component $2$-cell $(\conof{f})_g$ is the composite as displayed below.
\[\begin{tikzpicture}[xscale=2.5, yscale=2]
\draw[0cell]
(0,0) node (X1) {X}
($(X1) + (1,0)$) node (X2) {X}
($(X1)+(0,1)$) node (Y1) {Y}
($(X1)+(1,1)$) node (Y2) {Y}
;
\draw[1cell]
(X1) edge node[swap]{1_X} (X2)
(X2) edge node[swap]{f} (Y2)
(X1) edge node{f} (Y1)
(Y1) edge node{1_Y} (Y2)
(X1) edge node[pos=.7, inner sep=1pt]{f} (Y2)
;
\draw[2cell]
node[between=Y1 and X2 at .3, rotate=-45, font=\Large] (l) {\Rightarrow}
(l) node[below left] {\ell_{f}}
node[between=Y1 and X2 at .6, rotate=-45, font=\Large] (r) {\Rightarrow}
(r) node[below left] {r^{-1}_{f}}
;
\end{tikzpicture}\]
\end{itemize}
The naturality of $(\conof{f})_g$ with respect to $g$ \eqref{lax-transformation-naturality} follows from the fact that $(\conof{f})_g$ is independent of $g$ and that $\conof{X}$ and $\conof{Y}$ have only identity component $2$-cells.
The lax unity axiom \eqref{unity-transformation-pasting} for $\conof{f}$ holds because both $(\conof{X})^0$ and $(\conof{Y})^0$ are componentwise identity $2$-cells of identity $1$-cells. The lax naturality axiom \eqref{2-cell-transformation-pasting} for $\conof{f}$ means the commutativity of the outermost diagram below.
\[\begin{tikzcd}
(1_Y1_Y)f \ar{dd}[swap]{\ell_{1_Y}*1} \ar{r}{a} & 1_Y(1_Yf) \ar{ddl}{\ell_{1_Yf}} \ar{r}{1*\ell_f} & 1_Yf \ar{r}{1*r^{-1}_f} \ar{ldd}{\ell_f} & 1_Y(f1_X) \ar{r}{a^{-1}} \ar{dr}[swap]{\ell_{f1_X}} & (1_Yf)1_X \ar{d}{\ell_f*1}\\
&&&& f1_X \ar{d}{r_f^{-1}*1} \ar{dll}[swap]{1} \\
1_Yf \ar{r}[swap]{\ell_f} & f \ar{r}[swap]{r_f^{-1}} & f1_X & f(1_X1_X) \ar{l}{1*\ell_{1_X}} & (f1_X)1_X \ar{l}{a}
\end{tikzcd}\]
The upper left and the upper right triangles are commutative by the left unity property in \Cref{bicat-left-right-unity}. The other three sub-diagrams are commutative by the naturality of $\ell$ twice and the unity axiom \eqref{bicat-unity}.
\end{proof}
\begin{lemma}\label{constant-induced-modification}\index{modification!induced by a 2-cell}
Each $2$-cell $\alpha : f \to g$ in $\B(X,Y)$ induces a modification
\[\begin{tikzcd}
\conof{f} \ar{r}{\conof{\alpha}} & \conof{g}
\end{tikzcd}\]
with component $2$-cell
\[\begin{tikzcd}
(\conof{f})_A = f \ar{r}{\alpha} & g = (\conof{g})_A\end{tikzcd}\]
for each object $A$ in $\A$.
\end{lemma}
\begin{proof}
The component $2$-cells of $\conof{f}$ and $\conof{g}$ are $r_f^{-1}\ell_f$ and $r_g^{-1}\ell_g$, respectively. The modification axiom \eqref{modification-axiom} for $\conof{\alpha}$ follows from the naturality of $\ell$ and $r$.
\end{proof}
\begin{lemma}\label{constant-induced-mod-composition}
For vertically composable $2$-cells $\alpha : f \to g$ and $\beta : g \to h$ in $\B$, the equality \[\conof{\beta\alpha} = \conof{\beta}\conof{\alpha}\] holds, in which the right-hand side is the vertical composite of modifications in \Cref{def:modification-composition}.
\end{lemma}
\begin{proof}
The two modifications involved have component $2$-cells $\beta\alpha$.
\end{proof}
In \Cref{exer:constant-at-something} the reader is asked to prove the next assertion.
\begin{proposition}\label{constant-at-something}
For bicategories $\A$ and $\B$ with $\A_0$ a set, there is a strict functor
\[\begin{tikzcd}
\B \ar{r}{\con} & \Bicatps(\A,\B)
\end{tikzcd}\]
defined by
\begin{itemize}
\item \Cref{def:constant-pseudofunctor} on objects,
\item \Cref{constant-induced-transformation} on $1$-cells, and
\item \Cref{constant-induced-modification} on $2$-cells,
\end{itemize}
with $\Bicatps(\A,\B)$ the bicategory in \Cref{subbicat-pseudofunctor}.
\end{proposition}
\begin{proposition}\label{laxcone-induced-functor}
Suppose $F : \A \to \B$ is a lax functor between bicategories with $\A_0$ a set, $L$ an object in $\B$, and $\pi : \conof{L} \to F$ a lax cone of $L$ over $F$.
\begin{enumerate}
\item For each object $X$ in $\B$, there is a functor
\[\begin{tikzcd}
\B(X,L) \ar{r}{\pi_*} & \laxcone(\conof{X},F)\end{tikzcd}\]
induced by post-composition with $\pi$.
\item If $F$ is a pseudofunctor with $\pi$ a pseudocone, then there is an induced functor
\[\begin{tikzcd}
\B(X,L) \ar{r}{\pi_*} & \pscone(\conof{X},F)\end{tikzcd}\]
defined in the same way.
\end{enumerate}
\end{proposition}
\begin{proof}
To be precise, the functor $\pi_*$ is defined as follows.
\begin{itemize}
\item $\pi_*$ sends each $1$-cell $f\in\B(X,L)$ to the horizontal composite
\[\begin{tikzcd}
\conof{X} \ar{r}{\conof{f}} & \conof{L} \ar{r}{\pi} & F\end{tikzcd}\]
as in \Cref{def:lax-tr-comp} with $\conof{f}$ the induced strong transformation in \Cref{constant-induced-transformation}.
\item $\pi_*$ sends each $2$-cell $\alpha : f \to g$ in $\B(X,L)$ to the whiskering $1_{\pi}*\conof{\alpha}$ in $\Bicat(\A,\B)$, as displayed below
\[\begin{tikzpicture}[xscale=2.5, yscale=2]
\draw[0cell]
(0,0) node (X) {\conof{X}}
($(X) + (1,0)$) node (L) {\conof{L}}
($(L)+(1,0)$) node (F) {F,};
\draw[1cell]
(X) edge[bend left=45] node{\conof{f}} (L)
(X) edge[bend right=45] node[swap]{\conof{g}} (L)
(L) edge node{\pi} (F)
;
\draw[2cell]
node[between=X and L at .45, rotate=-90, font=\Large] (al) {\Rightarrow}
(al) node[right] {\conof{\alpha}}
;
\end{tikzpicture}\]
with $\conof{\alpha}$ the induced modification in \Cref{constant-induced-modification}.
\end{itemize}
The assignment $\pi_*$ preserves identity morphisms by the equalities
\[1_{\pi} * \conof{1_f} = 1_{\pi} * 1_{\conof{f}} = 1_{\pi\conof{f}}\]
for each $1$-cell $f\in\B(X,L)$. The first equality follows from the definition of $\conof{\alpha}$ in \Cref{constant-induced-modification}, and the second equality is from \Cref{modification-comp-id}.
That $\pi_*$ preserves compositions follows from the equalities
\[1_{\pi}*\conof{\beta\alpha} = 1_{\pi}*(\conof{\beta}\conof{\alpha}) = \big(1_{\pi}*\conof{\beta}\big)\big(1_{\pi}*\conof{\alpha}\big).\]
The first equality is from \Cref{constant-induced-mod-composition}, and the second equality follows from the bicategory axioms \eqref{hom-category-axioms} and \eqref{middle-four} in $\Bicat(\A,\B)$. This proves the first assertion. The exact same proof also works in the other case with $F$ a pseudofunctor and $\pi$ a pseudocone, by the strong case in \Cref{lax-tr-compose}.
\end{proof}
\begin{definition}\label{def:bilimits}
Suppose $F : \A \to \B$ is a lax functor between bicategories with $\A_0$ a set.
\begin{enumerate}
\item A \index{lax!bilimit}\index{bilimit!lax}\index{limit!lax bi-}\emph{lax bilimit of $F$} is a pair $(L,\pi)$ with
\begin{itemize}
\item $L$ an object in $\B$, and
\item $\pi : \conof{L} \to F$ a lax cone of $L$ over $F$,
\end{itemize}
such that for each object $X$ in $\B$, the functor
\begin{equation}\label{pistar-lax}
\begin{tikzcd}
\B(X,L) \ar{r}{\pi_*}[swap]{\simeq} & \laxcone(\conof{X},F)\end{tikzcd}
\end{equation}
in \Cref{laxcone-induced-functor} is an equivalence of categories.
\item A \index{lax!limit}\index{limit!lax}\emph{lax limit of $F$} is a pair $(L,\pi)$ as in the previous item such that $\pi_*$ in \eqref{pistar-lax} is an isomorphism of categories.
\end{enumerate}
Suppose in addition that $F$ is a pseudofunctor.
\begin{enumerate}
\item A \index{pseudo!bilimit}\index{bilimit!pseudo}\index{limit!pseudo bi-}\emph{pseudo bilimit of $F$} is a pair $(L,\pi)$ with
\begin{itemize}
\item $L$ an object in $\B$, and
\item $\pi : \conof{L} \to F$ a pseudocone of $L$ over $F$,
\end{itemize}
such that for each object $X$ in $\B$, the functor
\begin{equation}\label{pistar-ps}
\begin{tikzcd}
\B(X,L) \ar{r}{\pi_*}[swap]{\simeq} & \pscone(\conof{X},F)\end{tikzcd}
\end{equation}
in \Cref{laxcone-induced-functor} is an equivalence of categories.
\item A \index{pseudo!limit}\index{limit!pseudo}\emph{pseudo limit of $F$} is a pair $(L,\pi)$ as in the previous item such that $\pi_*$ in \eqref{pistar-ps} is an isomorphism of categories.\defmark
\end{enumerate}
\end{definition}
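As a consistency check (an added remark, not part of the original text), suppose $\A$ is a discrete bicategory, so that its only $1$-cells and $2$-cells are identities. A pseudocone of $X$ over $F$ then amounts to a family of $1$-cells $X \to FA$, since the lax unity axiom determines the constraint $2$-cells $\pi_{1_A}$, so a pseudo bilimit of $F$ is a bicategorical product of the objects $FA$.

```latex
% Added sketch, assuming \A is discrete: the defining equivalence
% \eqref{pistar-ps} becomes
\[
\B(X,L) \;\simeq\; \pscone(\conof{X},F)
\;\simeq\; \prod_{A \in \A} \B(X,FA)
\quad\text{for each object $X$ in $\B$.}
\]
```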
\begin{explanation}\label{expl:lax-bilimits}
In the definition of a lax bilimit (resp., lax limit) $(L,\pi)$ of a lax functor $F : \A\to\B$, for an object $X$ in $\B$, asking that the functor $\pi_*$ in \eqref{pistar-lax} be an equivalence (resp., isomorphism) of categories is equivalent to asking that it be both (i) essentially surjective (resp., bijective) on objects and (ii) fully faithful on morphisms, which are defined after \Cref{def:equivalences}. Let us describe these conditions explicitly for the functor $\pi_*$.
\begin{description}
\item[Essentially Surjective] The functor $\pi_*$ is essentially surjective if, for each lax cone $\theta : \conof{X}\to F$, there exist
\begin{itemize}
\item a $1$-cell $f \in \B(X,L)$ and
\item an invertible modification
\[\begin{tikzcd}
\pi_*f=\pi\conof{f} \ar{r}{\Gamma}[swap]{\cong} & \theta.\end{tikzcd}\]
\end{itemize}
Explicitly, for each object $A$ in $\A$, $\Gamma$ has an invertible component $2$-cell
\[\begin{tikzpicture}[xscale=1.5, yscale=.7]
\draw[0cell]
(0,0) node (X) {X}
(1,1.5) node (L) {L}
(2,0) node (A) {FA}
;
\draw[1cell]
(X) edge[bend left] node {f} (L)
(L) edge[bend left] node {\pi_A} (A)
(X) edge[bend right] node[swap] (T) {\theta_A} (A)
;
\draw[2cell]
node[between=L and T at .5, rotate=-90, font=\Large] (G) {\Rightarrow}
(G) node[left] {\Gamma_A}
;
\end{tikzpicture}\]
in $\B(X,FA)$.
The modification axiom for $\Gamma$ states that, for each $1$-cell $g \in \A(A,B)$, the equality of pasting diagrams
\begin{equation}\label{lax-bilimits-mod-axiom}
\begin{tikzpicture}[xscale=2.5, yscale=1.7, baseline={(L1.base)}]
\draw[0cell]
(0,0) node (X1) {X}
($(X1)+(1,0)$) node (X2) {X}
($(X1)+(0,-1)$) node (L1) {L}
($(X1)+(1,-1)$) node (L2) {L}
($(X1)+(0,-2)$) node (A1) {FA}
($(X1)+(1,-2)$) node (B1) {FB}
;
\draw[1cell]
(X1) edge node {1_X} (X2)
(X2) edge node {f} (L2)
(X2) edge[bend left=45] node (th) {\theta_B} (B1)
(X1) edge node[pos=.3]{f} (L2)
(X1) edge node[swap]{f} (L1)
(L1) edge node[swap]{1_L} (L2)
(L1) edge node[swap]{\pi_A} (A1)
(L2) edge node{\pi_B} (B1)
(A1) edge node[swap]{Fg} (B1)
;
\draw[2cell]
node[between=A1 and L2 at .4, rotate=45, font=\Large] (P) {\Rightarrow}
(P) node[below right] {\pi_g}
node[between=L1 and X2 at .35, rotate=45, font=\Large] (l) {\Rightarrow}
(l) node[below right, inner sep=1pt] {\ell_f}
node[between=L1 and X2 at .7, rotate=45, font=\Large] (r) {\Rightarrow}
(r) node[below right, inner sep=1pt] {r_f^{-1}}
node[between=L2 and th at .5, font=\Large] (g) {\Rightarrow}
(g) node[above] {\Gamma_B}
;
\draw[0cell]
($(L2)+(.8,.5)$) node[font=\huge] (eq) {=}
($(X2)+(1.4,0)$) node (X3) {X}
($(X3)+(1,0)$) node (X4) {X}
($(X3)+(0,-1)$) node (L3) {L}
($(X3)+(0,-2)$) node (A2) {FA}
($(X3)+(1,-2)$) node (B2) {FB}
;
\draw[1cell]
(X3) edge node {1_X} (X4)
(X3) edge[bend left=45] node (tha) {\theta_A} (A2)
(X4) edge[bend left=45] node {\theta_B} (B2)
(X3) edge node[swap]{f} (L3)
(L3) edge node[swap]{\pi_A} (A2)
(A2) edge node[swap]{Fg} (B2)
;
\draw[2cell]
node[between=A2 and X4 at .8, rotate=45, font=\Large] (thg) {\Rightarrow}
(thg) node[below right] {\theta_g}
node[between=L3 and tha at .5, font=\Large] (ga) {\Rightarrow}
(ga) node[above] {\Gamma_A};
\end{tikzpicture}
\end{equation}
holds in $\B(X,FB)$.
In the case of a lax limit of $F$, essential surjectivity is replaced by the condition that $\pi_*$ be a bijection on objects. In other words, for each lax cone $\theta$ as above, there is a unique $1$-cell $f \in \B(X,L)$ such that $\pi\conof{f} = \theta$.
\item[Fully Faithful]
The functor $\pi_*$ is fully faithful if for
\begin{itemize}
\item each pair of $1$-cells $e,f\in\B(X,L)$ and
\item each modification $\Gamma : \pi\conof{e} \to \pi\conof{f}$,
\end{itemize}
there exists a unique $2$-cell
\begin{equation}\label{lax-bilimits-fullyfaithful}
\begin{tikzcd}e \ar{r}{\alpha} & f\end{tikzcd} \stspace \Gamma = 1_{\pi}*\conof{\alpha}.
\end{equation}
In this case, for each object $A$ in $\A$, the component $2$-cell $\Gamma_A$ is the whiskering $1_{\pi_A}*\alpha$ as in the diagram
\[\begin{tikzpicture}[xscale=2, yscale=1]
\draw[0cell]
(0,0) node (X) {X}
(1,0) node (L) {L}
(2,0) node (A) {FA}
;
\draw[1cell]
(X) edge[bend left=60] node (e) {e} (L)
(X) edge[bend right=60] node[swap] (f) {f} (L)
(L) edge node {\pi_A} (A)
;
\draw[2cell]
node[between=e and f at .5, rotate=-90, font=\Large] (alpha) {\Rightarrow}
(alpha) node[left] {\alpha};
\end{tikzpicture}\]
in $\B(X,FA)$.
\end{description}
Similarly, if $F$ is a pseudofunctor, then the functor $\pi_*$ in \eqref{pistar-ps} is an equivalence (resp., isomorphism) if and only if it is (i) essentially surjective (resp., bijective) on objects and (ii) fully faithful on morphisms. These two conditions have the same meaning as above, with (i) applied to a pseudocone $\theta : \conof{X} \to F$.\dqed
\end{explanation}
\begin{motivation}
In $1$-category theory, a limit, if it exists, is unique up to isomorphism. We now observe that a lax bilimit, if it exists, is unique up to equivalence.\dqed
\end{motivation}
\begin{definition}\label{def:equivalence-in-bicategory}\index{1-cell!invertible}\index{1-cell!equivalence}\index{equivalence!in a bicategory}\index{bicategory!equivalence}
In a bicategory $\B$, a $1$-cell $f\in \B(X,Y)$ is said to be \emph{invertible} or \emph{an equivalence} if there exist
\begin{itemize}
\item a $1$-cell $g\in\B(Y,X)$, called an \emph{inverse of $f$}, and
\item invertible $2$-cells $1_X\iso gf$ and $1_Y \iso fg$.\defmark
\end{itemize}
\end{definition}
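For example, in the bicategory of small categories, functors, and natural transformations, a $1$-cell $f : X \to Y$ is an equivalence in the sense of \Cref{def:equivalence-in-bicategory} precisely when $f$ is an equivalence of categories. An inverse $g$ is a quasi-inverse of $f$, and the invertible $2$-cells
\[\eta : 1_X \iso gf \qquad\text{and}\qquad \epsilon : 1_Y \iso fg\]
are natural isomorphisms.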
\begin{theorem}\label{thm:bilimit-uniqueness}\index{uniqueness of!bilimits}\index{bilimit!uniqueness}
Suppose $F : \A \to \B$ is a lax functor between bicategories with $\A_0$ a set. Suppose $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are lax cones over $F$ that satisfy one of the following statements.
\begin{enumerate}
\item Both $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are lax bilimits of $F$.
\item Both $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are lax limits of $F$.
\item $F$ is a pseudofunctor, and $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are both pseudo bilimits of $F$.
\item $F$ is a pseudofunctor, and $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are both pseudo limits of $F$.
\end{enumerate}
Then there exist
\begin{itemize}
\item an equivalence $f \in \B(L,\barof{L})$ and
\item an invertible modification $\barof{\pi}\conof{f} \cong \pi$.
\end{itemize}
\end{theorem}
\begin{proof}
For the first assertion, since $(L,\pi)$ is a lax bilimit of $F$ and since $(\barof{L},\barof{\pi})$ is another lax cone over $F$, the essential surjectivity of the functor $\pi_*$ in \eqref{pistar-lax} implies that there exist
\begin{itemize}
\item a $1$-cell $g \in \B(\barof{L},L)$ and
\item an invertible modification $\begin{tikzcd}\Gamma : \pi\conof{g} \ar{r}{\cong} & \barof{\pi}.\end{tikzcd}$
\end{itemize}
Reversing the roles of $(L,\pi)$ and $(\barof{L},\barof{\pi})$, we obtain
\begin{itemize}
\item a $1$-cell $f \in \B(L,\barof{L})$ and
\item an invertible modification $\begin{tikzcd}\barof{\Gamma} : \barof{\pi}\conof{f} \ar{r}{\cong} & \pi.\end{tikzcd}$
\end{itemize}
We will show that this $1$-cell $f$ is an equivalence with $g$ as an inverse.
We define a modification
\begin{equation}\label{sigma-modification}
\begin{tikzcd}\pi\conof{1_L} \ar{r}{\Sigma} & \pi\conof{gf}\end{tikzcd}
\end{equation}
whose component $2$-cell $\Sigma_A$, for each object $A$ in $\A$, is the composite of the pasting diagram on the left-hand side below
\begin{equation}\label{sigma-of-a}
\begin{tikzpicture}[xscale=2.5, yscale=2.5, baseline={(b.base)}]
\draw[0cell]
(0,0) node (L) {L}
(.5,1) node (L2) {L}
(1,0) node (Lp) {\barof{L}}
(2,0) node (L3) {L}
(1.5,1) node (A) {FA}
;
\draw[1cell]
(L) edge node {1_L} (L2)
(L2) edge node (pia) {\pi_A} (A)
(L2) edge node[swap, pos=.3] (f) {f} (Lp)
(L) edge node[swap] (f2) {f} (Lp)
(Lp) edge node[swap, pos=.7] (pa) {\barof{\pi}_A} (A)
(Lp) edge node[swap] (g) {g} (L3)
(L3) edge node[swap]{\pi_A} (A)
;
\draw[2cell]
node[between=L2 and f2 at .7, rotate=-90, font=\Large] (r) {\Rightarrow}
(r) node[right] {r_f}
node[between=pia and Lp at .3, rotate=-90, font=\Large] (gp) {\Rightarrow}
(gp) node[right] {\barof{\Gamma}_A^{-1}}
node[between=L2 and L3 at .8, rotate=-30, font=\Large] (g) {\Rightarrow}
(g) node[above left] {\Gamma_A^{-1}}
;
\draw[0cell]
($(A)+(1.5,0)$) node (a) {\pi_A1_L}
($(a)+(1,0)$) node (e) {\pi_A(gf)}
($(a)+(0,-.5)$) node (b) {(\barof{\pi}_Af)1_L}
($(a)+(.5,-1)$) node (c) {\barof{\pi}_A(f1_L)}
($(a)+(1,-.5)$) node (d) {(\pi_Ag)f}
;
\draw[1cell]
(a) edge node{\Sigma_A} (e)
(a) edge node[swap] {\barof{\Gamma}_A^{-1}*1} (b)
(b) edge node[swap] {a} (c)
(c) edge node[swap, pos=.6] {\Gamma_A^{-1}*r_f} (d)
(d) edge node[swap]{a} (e)
;
\end{tikzpicture}
\end{equation}
with the right normalized bracketing for the codomain. In other words, $\Sigma_A$ is the vertical composite in $\B(L,FA)$ on the right-hand side in \eqref{sigma-of-a}, with each $a$ denoting a component of the associator in $\B$.
Similarly, we define a modification
\begin{equation}\label{sigmap-modification}
\begin{tikzcd}\barof{\pi}\conof{1_{\barof{L}}} \ar{r}{\barof{\Sigma}} & \barof{\pi}\conof{fg}\end{tikzcd}
\end{equation}
whose component $2$-cell $\barof{\Sigma}_A$ is the composite of the pasting diagram
\[\begin{tikzpicture}[xscale=2.5, yscale=2.5]
\draw[0cell]
(0,0) node (L) {\barof{L}}
(.5,1) node (L2) {\barof{L}}
(1,0) node (Lp) {L}
(2,0) node (L3) {\barof{L}}
(1.5,1) node (A) {FA}
;
\draw[1cell]
(L) edge node {1_{\barof{L}}} (L2)
(L2) edge node (pia) {\barof{\pi}_A} (A)
(L2) edge node[swap, pos=.3] (f) {g} (Lp)
(L) edge node[swap] (f2) {g} (Lp)
(Lp) edge node[swap, pos=.7] (pa) {\pi_A} (A)
(Lp) edge node[swap] (g) {f} (L3)
(L3) edge node[swap]{\barof{\pi}_A} (A)
;
\draw[2cell]
node[between=L2 and f2 at .7, rotate=-90, font=\Large] (r) {\Rightarrow}
(r) node[right] {r_g}
node[between=pia and Lp at .3, rotate=-90, font=\Large] (gp) {\Rightarrow}
(gp) node[right] {\Gamma_A^{-1}}
node[between=L2 and L3 at .8, rotate=-30, font=\Large] (g) {\Rightarrow}
(g) node[above left] {\barof{\Gamma}_A^{-1}}
;
\end{tikzpicture}\]
with the right normalized bracketing for the codomain. In \Cref{sigma-sigmap-modification} below, we will show that $\Sigma$ and $\barof{\Sigma}$ actually satisfy the modification axiom, so they are indeed modifications. Since their components are composites of invertible $2$-cells, they are invertible modifications.
The full faithfulness \eqref{lax-bilimits-fullyfaithful} of the functors $\pi_*$ and $\barof{\pi}_*$ now implies the existence of unique $2$-cells $1_{L} \to gf$ and $1_{\barof{L}} \to fg$ inducing $\Sigma$ and $\barof{\Sigma}$, respectively. Since fully faithful functors reflect invertibility, and since $\Sigma$ and $\barof{\Sigma}$ are invertible, these $2$-cells are invertible $2$-cells $1_{L} \cong gf$ and $1_{\barof{L}}\cong fg$, completing the proof.
The second assertion follows from the first assertion and the fact that each lax limit is also a lax bilimit, since each isomorphism of categories is also an equivalence. The assertions for pseudo (bi)limits are proved by the same argument as above for lax (bi)limits.
\end{proof}
\begin{lemma}\label{sigma-sigmap-modification}
$\Sigma$ in \eqref{sigma-modification} and $\barof{\Sigma}$ in \eqref{sigmap-modification} satisfy the modification axiom.
\end{lemma}
\begin{proof}
We will prove the assertion for $\Sigma$; the proof for $\barof{\Sigma}$ is similar. The reader is asked to check it in \Cref{exer:sigmap}.
By the definition of $\Sigma_A$ in \eqref{sigma-of-a}, the modification axiom \eqref{modification-axiom} for $\Sigma$ is the equality of pasting diagrams
\begin{equation}\label{modaxiom-sigma}
\begin{tikzpicture}[xscale=2.3, yscale=2.2, vcenter]
\draw[0cell]
(0,0) node (L) {L}
($(L)+(1,0)$) node (L2) {L}
($(L)+(-.5,-.6)$) node (L3) {L}
($(L3)+(1,0)$) node (L4) {L}
($(L4)+(1,0)$) node (Lp) {\barof{L}}
($(Lp)+(0,-.75)$) node (L5) {L}
($(L3)+(0,-1.25)$) node (A) {FA}
($(A)+(1,0)$) node (B) {FB}
;
\draw[1cell]
(L) edge node {1_{L}} (L2)
(L) edge node[swap] {1_{L}} (L3)
(L2) edge node[swap] {1_{L}} (L4)
(L2) edge node (f) {f} (Lp)
(Lp) edge[bend right=0,swap] node[pos=.3] (ppb) {\barof{\pi}_B} (B)
(Lp) edge node {g} (L5)
(L3) edge node[swap] {1_L} (L4)
(L3) edge node[swap] {\pi_A} (A)
(L4) edge[bend right=0] node[swap, pos=.4] (pb) {\pi_B} (B)
(L4) edge[swap] node[pos=.55] (f2) {f} (Lp)
(A) edge node[swap]{Fh} (B)
(L5) edge node {\pi_B} (B)
;
\draw[2cell]
(L4) ++(20:.5) node[rotate=0, 2label={above,r_f}] {\Rightarrow}
node[between=A and L4 at .5, rotate=45, 2label={below,\pi_h}] {\Rightarrow}
(L4) ++(-70:.6) node[rotate=0, 2label={above,\barof{\Gamma}_B^{-1}}] {\Rightarrow}
(L5) ++(160:.22) node[rotate=0, 2label={above,\Gamma_B^{-1}}] {\Rightarrow}
;
\draw[0cell]
($(Lp)+(.4,-.2)$) node[font=\huge] (eq) {=}
($(L)+(2.8,0)$) node (l) {L}
($(l)+(1,0)$) node (l2) {L}
($(l)+(-.5,-.6)$) node (l3) {L}
($(l3)+(1,0)$) node (lp2) {\barof{L}}
($(lp2)+(1,0)$) node (lp) {\barof{L}}
($(lp2)+(0,-.75)$) node (l4) {L}
($(lp)+(0,-.75)$) node (l5) {L}
($(l3)+(0,-1.25)$) node (a) {FA}
($(a)+(1,0)$) node (b) {FB}
;
\draw[1cell]
(l) edge node {1_{L}} (l2)
(l) edge node[swap] (one) {1_{L}} (l3)
(l2) edge node (fp) {f} (lp)
(lp) edge node {g} (l5)
(l3) edge node[swap] (pia) {\pi_A} (a)
(a) edge node[swap] (fh) {Fh} (b)
(l5) edge node {\pi_B} (b)
(l) edge node {f} (lp2)
(lp2) edge node[swap]{1_{\barof{L}}} (lp)
(l3) edge node[swap, pos=.3] {f} (lp2)
(lp2) edge node[swap, pos=.3] (pibara) {\barof{\pi}_A} (a)
(lp2) edge node {g} (l4)
(l4) edge node[swap] {1_L} (l5)
(l4) edge node {\pi_A} (a)
;
\draw[2cell]
(l3) ++(20:.5) node[2label={above,r_f}] {\Rightarrow}
node[between=lp2 and l2 at .5, shift={(0,.2)}, rotate=45, 2label={below,r^{-1}_f\ell_f}] {\Rightarrow}
node[between=l4 and lp at .4, shift={(0,.2)}, rotate=45, 2label={below,r^{-1}_g\ell_g}] {\Rightarrow}
node[between=pia and l4 at .3, rotate=0, 2label={above,\barof{\Gamma}_A^{-1}}] {\Rightarrow}
node[between=pibara and l4 at .6, shift={(237:.4)}, rotate=0, 2label={above,\Gamma_A^{-1}}] {\Rightarrow}
node[between=b and l4 at .4, rotate=22.5, 2label={above,\pi_h}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
for each $1$-cell $h \in \A(A,B)$. For convenience, we refer to the pasting diagram on the left-hand side of \eqref{modaxiom-sigma} as $\Sigma_1$, and the one on the right as $\Sigma_2$. In $\Sigma_1$, the upper left quadrilateral is filled by the identity $2$-cell of $1_L$. To prove the equality \eqref{modaxiom-sigma}, we will start with $\Sigma_2$ and work toward $\Sigma_1$.
For the upper part of $\Sigma_2$, there is a commutative diagram
\[\begin{tikzcd}[column sep=tiny]
& (1_{\barof{L}}f)1_L \ar[bend left]{dr}{\ell_f*1} &\\
1_{\barof{L}}(f1_L) \ar[bend left]{ur}{a^{-1}} \ar{rr}{\ell_{f1_L}} \ar{d}[swap]{1*r_f} && f1_L\\
1_{\barof{L}}f \ar{rr}{\ell_f} && f \ar{u}[swap]{r_f^{-1}}
\end{tikzcd}\]
in which:
\begin{itemize}
\item The top triangle is commutative by the left unity property in \Cref{bicat-left-right-unity}.
\item The bottom half is commutative by the naturality of $\ell$.
\end{itemize}
Applying this commutative diagram to the top of $\Sigma_2$ and augmenting $\Sigma_2$ by $\Gamma_B$ along its right edge, we obtain the pasting diagram $\Sigma_3$ on the left-hand side below.
\[\begin{tikzpicture}[xscale=2.15, yscale=2.2]
\draw[0cell]
($(0,0)$) node (l) {L}
($(l)+(0,-.5)$) node (l3) {L}
($(l3)+(1,-.5)$) node (lp2) {\barof{L}}
($(lp2)+(1,0)$) node (lp) {\barof{L}}
($(lp2)+(0,-.75)$) node (l4) {L}
($(lp)+(-.125,-.75)$) node (l5) {L}
($(l3)+(0,-1.75)$) node (a) {FA}
($(a)+(2,0)$) node (b) {FB}
;
\draw[1cell]
(l) edge node[swap] (one) {1_{L}} (l3)
(l3) edge[bend left=30] node (fp) {f} (lp)
(lp) edge[swap] node {g} (l5)
(l3) edge node[swap] (pia) {\pi_A} (a)
(a) edge node[swap] (fh) {Fh} (b)
(l5) edge[swap] node {\pi_B} (b)
(lp2) edge node[swap]{1_{\barof{L}}} (lp)
(l3) edge node[swap, pos=.3] {f} (lp2)
(lp2) edge node[swap, pos=.3] (pibara) {\barof{\pi}_A} (a)
(lp2) edge node {g} (l4)
(l4) edge node[swap] {1_L} (l5)
(l4) edge node {\pi_A} (a)
(lp) edge[bend left=40] node {\barof{\pi}_B} (b)
;
\draw[2cell]
node[between=lp2 and fp at .5, rotate=45, 2label={below,\ell_f}] {\Rightarrow}
node[between=l4 and lp at .4, shift={(0,.2)}, rotate=45, 2label={below,r^{-1}_g\ell_g}] {\Rightarrow}
node[between=pia and l4 at .3, rotate=0, 2label={above,\barof{\Gamma}_A^{-1}}] {\Rightarrow}
node[between=pibara and l4 at .6, shift={(237:.4)}, rotate=0, 2label={above,\Gamma_A^{-1}}] {\Rightarrow}
node[between=b and l4 at .4, shift={(-.75,0)}, rotate=22.5, 2label={above,\pi_h}] {\Rightarrow}
(l5) ++(15:.25) node[rotate=0, 2label={above,\Gamma_B}] {\Rightarrow}
;
\draw[0cell]
($(lp)+(.5,-.2)$) node[font=\huge] (eq) {=}
;
\begin{scope}[shift={(3,0)}]
\draw[0cell]
($(0,0)$) node (l) {L}
($(l)+(0,-.5)$) node (l3) {L}
($(l3)+(1,-.5)$) node (lp2) {\barof{L}}
($(lp2)+(1,0)$) node (lp) {\barof{L}}
($(lp2)+(0,-.75)$) node (l4) {L}
($(l3)+(0,-1.75)$) node (a) {FA}
($(a)+(2,0)$) node (b) {FB}
;
\draw[1cell]
(l) edge node[swap] (one) {1_{L}} (l3)
(l3) edge[bend left=30] node (fp) {f} (lp)
(l3) edge node[swap] (pia) {\pi_A} (a)
(a) edge node[swap] (fh) {Fh} (b)
(lp2) edge node[swap]{1_{\barof{L}}} (lp)
(l3) edge node[swap, pos=.3] {f} (lp2)
(lp2) edge node[swap, pos=.3] (pibara) {\barof{\pi}_A} (a)
(lp2) edge node {g} (l4)
(l4) edge node[pos=.2] {\pi_A} (a)
(lp) edge[bend left=40] node {\barof{\pi}_B} (b)
(lp2) edge[out=-45, in=10, shorten >=.15cm, looseness=2] node (Pibaraii) {\barof{\pi}_A} (a)
;
\draw[2cell]
node[between=lp2 and fp at .5, rotate=45, 2label={below,\ell_f}] {\Rightarrow}
(l4) ++(.25,.1) node[2label={above,\Gamma_A}] {\Rightarrow}
node[between=pia and l4 at .3, rotate=0, 2label={above,\barof{\Gamma}_A^{-1}}] {\Rightarrow}
node[between=pibara and l4 at .6, shift={(237:.4)}, rotate=0, 2label={above,\Gamma_A^{-1}}] {\Rightarrow}
(lp) ++(260:.7) node[rotate=45, 2label={above,\barof{\pi}_h}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\]
The above equality follows by applying the modification axiom \eqref{lax-bilimits-mod-axiom} for $\Gamma$ to the right half of $\Sigma_3$. The pasting diagram on the right-hand side above is called $\Sigma_4$. In the middle of $\Sigma_4$, the composite $\Gamma_A\Gamma_A^{-1}$ is the identity $2$-cell of $\barof{\pi}_A$.
Augmenting $\Sigma_4$ by $r_f^{-1}$ along the top edge and then by $\barof{\Gamma}_B$ along the resulting right edge, we obtain the pasting diagram $\Sigma_5$ on the left-hand side below.
\[\begin{tikzpicture}[xscale=2.2, yscale=2]
\draw[0cell]
(0,0) node (L) {L}
($(L)+(2,0)$) node (L2) {L}
($(L)+(0,-.6)$) node (L3) {L}
($(L)+(1.5,-1)$) node (Lp) {\barof{L}}
($(L)+(.7,-1.3)$) node (Lp2) {\barof{L}}
($(L)+(0,-2)$) node (A) {FA}
($(L)+(2,-2)$) node (B) {FB}
;
\draw[1cell]
(L) edge node[swap] {1_{L}} (L3)
(L2) edge node[swap] {f} (Lp)
(L3) edge[bend left=15] node {1_{L}} (L2)
(L3) edge[bend left=15] node[pos=.5] (F1) {f} (Lp)
(L3) edge node[pos=.6] {f} (Lp2)
(L3) edge node[swap] (Pia) {\pi_A} (A)
(A) edge node[swap] (Fh) {Fh} (B)
(Lp) edge node[swap] (Pibarb) {\barof{\pi}_B} (B)
(Lp2) edge node[swap]{1_{\barof{L}}} (Lp)
(Lp2) edge node (Pibara) {\barof{\pi}_A} (A)
(L2) edge node[pos=.5] (Pib) {\pi_B} (B)
;
\draw[2cell]
node[between=Lp2 and F1 at .4, rotate=60, font=\Large] (Ellf) {\Rightarrow}
(Ellf) node[right] {\ell_f}
node[between=F1 and L2 at .4, rotate=60, font=\Large] (rfi) {\Rightarrow}
(rfi) node[left] {r_f^{-1}}
node[between=Pia and Lp2 at .5, rotate=0, font=\Large] (Gabi) {\Rightarrow}
(Gabi) node[above] {\barof{\Gamma}_A^{-1}}
node[between=Pibara and Pibarb at .5, rotate=0, font=\Large] (pibarh) {\Rightarrow}
(pibarh) node[above] {\barof{\pi}_h}
node[between=Lp and Pib at .5, font=\Large] (Gpb) {\Rightarrow}
(Gpb) node[above] {\barof{\Gamma}_B}
;
\draw[0cell]
($(L2)+(.4,-.6)$) node[font=\huge] (eq) {=}
($(L2)+(.8,0)$) node (l) {L}
($(l)+(2,0)$) node (l2) {L}
($(l)+(0,-.6)$) node (l3) {L}
($(l)+(.7,-1.3)$) node (lp2) {\barof{L}}
($(l)+(0,-2)$) node (a) {FA}
($(l)+(2,-2)$) node (b) {FB}
;
\draw[1cell]
(l) edge node[swap] {1_{L}} (l3)
(l3) edge[bend left=15] node {1_{L}} (l2)
(l3) edge node[pos=.6] {f} (lp2)
(l3) edge node[swap] (pia) {\pi_A} (a)
(a) edge node[swap] (fh) {Fh} (b)
(lp2) edge node[pos=.3] (pibara) {\barof{\pi}_A} (a)
(l2) edge node[pos=.5] (pib) {\pi_B} (b)
(l3) edge[bend left=80, looseness=2.8] node (piaii) {\pi_A} ([yshift=1pt]a.east)
;
\draw[2cell]
node[between=pia and lp2 at .4, rotate=0, font=\Large] (gabi) {\Rightarrow}
(gabi) node[above] {\barof{\Gamma}_A^{-1}}
node[between=lp2 and piaii at .5, rotate=0, font=\Large] (gab) {\Rightarrow}
(gab) node[above] {\barof{\Gamma}_A}
node[between=piaii and l2 at .3, rotate=45, font=\Large] (pih) {\Rightarrow}
(pih) node[above left] {\pi_h}
;
\end{tikzpicture}\]
The above equality follows by applying the modification axiom \eqref{lax-bilimits-mod-axiom} for $\barof{\Gamma}$ to the right half of $\Sigma_5$. The pasting diagram on the right-hand side above is called $\Sigma_6$. Since $\barof{\Gamma}_A\barof{\Gamma}_A^{-1} = 1_{\pi_A}$, the composite of $\Sigma_6$ is $\pi_h*1_L$.
If we apply the above sequence of operations---namely, augmenting by $\Gamma_B$, followed by $r_f^{-1}$ and then $\barof{\Gamma}_B$---to $\Sigma_1$, the result is also $\pi_h*1_L$ by the definition of $\Sigma_1$. Since the $2$-cells $\Gamma_B$, $\barof{\Gamma}_B$, and $r_f$ are invertible, it follows that $\Sigma_1 = \Sigma_2$.
\end{proof}
\section{Bicolimits}\label{sec:bicolimits}
In this section we discuss various colimits for bicategories.
\begin{motivation}
Suppose $F : \C \to \D$ is a functor between $1$-categories with $\C$ small. Recall that a \emph{colimit}\index{colimit} of $F$ is a limit of the opposite functor $\Fop : \Cop \to \Dop$. We now adapt this concept to the setting of bicategories.\dqed
\end{motivation}
The following definition will refer to concepts from \Cref{def:bilimits}.
\begin{definition}\label{def:lax-bicolimit}
Suppose $F : \A \to \B$ is a lax functor between bicategories with $\A_0$ a set. Suppose $\Fop : \Aop \to \Bop$ is the opposite lax functor of $F$ in \Cref{ex:opposite-lax-functor}.
\begin{enumerate}
\item For each object $L$ in $\B$, define the category
\begin{equation}\label{bicat-aop-bop}
\oplaxcone(F,\conof{L}) = \Bicat(\Aop,\Bop)(\conof{L},\Fop),
\end{equation}
in which on the right-hand side $\conof{L} : \Aop \to \Bop$ denotes the constant pseudofunctor at $L$ regarded as an object in $\Bop$. An object in this category is called an \emph{oplax cone of $L$ under $F$}.
\item A \index{lax!bicolimit}\index{bicolimit!lax}\index{colimit!lax bi-}\emph{lax bicolimit of $F$} is a lax bilimit of $\Fop$.
\item A \index{lax!colimit}\index{colimit!lax}\emph{lax colimit of $F$} is a lax limit of $\Fop$.
\end{enumerate}
Suppose, in addition, that $F$ is a pseudofunctor.
\begin{enumerate}
\item A \index{pseudo!bicolimit}\index{bicolimit!pseudo}\index{colimit!pseudo bi-}\emph{pseudo bicolimit of $F$} is a pseudo bilimit of $\Fop$.
\item A \index{pseudo!colimit}\index{colimit!pseudo}\emph{pseudo colimit of $F$} is a pseudo limit of $\Fop$.\defmark
\end{enumerate}
\end{definition}
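Since $\Bop(X,L) = \B(L,X)$ on hom-categories, the universal property of a lax bicolimit $(L,\pi)$, obtained by reading \Cref{def:bilimits} for $\Fop$, asserts an equivalence of categories
\[\B(L,X) \simeq \oplaxcone(F,\conof{X})\]
for each object $X$ in $\B$, now induced by pre-composition with the oplax cone $\pi$.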
Let us unwrap the above concepts.
\begin{explanation}[Oplax Cones]\label{expl:oplax-cone}
An oplax cone of $L$ under $F$ is by definition a lax transformation $\conof{L} \to \Fop$ for lax functors $\Aop \to \Bop$. By \Cref{strong-optransformation} such an \index{oplax cone}oplax cone is the same as an \emph{oplax} transformation $F \to \conof{L}$ between lax functors $\A \to \B$. Using \Cref{def:oplax-transformation}, an oplax cone $\alpha : F \to \conof{L}$ of $L$ under $F$ consists of the following data.
\begin{description}
\item[Component $1$-Cells]
$\alpha_A \in\B(FA,L)$ for each object $A\in\A$.
\item[Component $2$-Cells]
$\alpha_f \in \B(FA,L)$ as in
\[\begin{tikzpicture}[xscale=2, yscale=1.5]
\draw[0cell]
(0,0) node (a) {FA}
($(a)+(1,0)$) node (b) {FB}
($(a)+(0,-1)$) node (l) {L}
($(b)+(0,-1)$) node (l2) {L}
;
\draw[1cell]
(a) edge node (f) {Ff} (b)
(b) edge node {\alpha_B} (l2)
(a) edge node[swap] {\alpha_A} (l)
(l) edge node {1_L} (l2)
;
\draw[2cell]
node[between=a and l2 at .5, rotate=-135, font=\Large] (al) {\Rightarrow}
(al) node[above left] {\alpha_f}
;
\end{tikzpicture}\]
for each $1$-cell $f\in\A(A,B)$, that is natural in $f$ in the sense of \eqref{oplax-transformation-naturality}.
\end{description}
These data satisfy:
\begin{itemize}
\item the oplax unity axiom \eqref{unity-oplax-pasting} with $G1_X$ and $G^0$ replaced by $1_L$ and $1_{1_L}$, respectively;
\item the oplax naturality axiom \eqref{2cell-oplax-pasting} with $G(gf)$, $Gf$, and $Gg$ replaced by $1_L$, and $G^2$ replaced by $\ell_{1_L}$.\dqed
\end{itemize}
\end{explanation}
\begin{explanation}[Morphisms of Oplax Cones]\label{expl:morphism-oplax-cone}
A morphism\index{morphism!oplax cone} of oplax cones is a modification as in \Cref{def:modification} between lax transformations $\conof{L} \to \Fop$. More explicitly, a morphism $\Gamma : \alpha \to \beta$ of oplax cones of $L$ under $F$ consists of a component $2$-cell
\[\Gamma_A : \alpha_A \to \beta_A \inspace \B(FA,L)\] for each object $A$ in $\A$, that satisfies the modification axiom
\[\begin{tikzpicture}[xscale=2.2, yscale=2]
\draw[0cell]
(0,0) node (a) {FA}
($(a)+(1,0)$) node (b) {FB}
($(a)+(0,-1)$) node (l) {L}
($(b)+(0,-1)$) node (l2) {L}
($(l2)+(.5,.5)$) node[font=\huge] {=}
($(a)+(2,0)$) node (a2) {FA}
($(a2)+(1,0)$) node (b2) {FB}
($(a2)+(0,-1)$) node (l3) {L}
($(b2)+(0,-1)$) node (l4) {L}
;
\draw[1cell]
(a) edge node {Ff} (b)
(b) edge[bend left] node[swap] {\alpha_B} (l2)
(a) edge[bend left] node {\alpha_A} (l)
(a) edge[bend right] node[swap] {\beta_A} (l)
(l) edge node[swap] {1_L} (l2)
(a2) edge node (ff) {Ff} (b2)
(b2) edge[bend left] node {\alpha_B} (l4)
(b2) edge[bend right] node[swap] {\beta_B} (l4)
(a2) edge[bend right] node {\beta_A} (l3)
(l3) edge node[swap] (il) {1_L} (l4)
;
\draw[2cell]
(b) ++(235:.5) node[rotate=-135, 2label={below,\alpha_f}] {\Rightarrow}
node[between=a and l at .55, rotate=180, font=\Large] (ga) {\Rightarrow}
(ga) node[above] {\Gamma_A}
(a2) ++(-45:.55) node[rotate=-135, 2label={below,\beta_f}] {\Rightarrow}
node[between=b2 and l4 at .55, rotate=180, font=\Large] (gb) {\Rightarrow}
(gb) node[above] {\Gamma_B}
;
\end{tikzpicture}\]
in $\B(FA,L)$ for each $1$-cell $f \in \A(A,B)$.\dqed
\end{explanation}
\begin{explanation}[Lax Bicolimits]\label{expl:lax-bicolimit}
Lax bilimits of a lax functor $F : \A\to\B$ have to do with the category $\Bicat(\A,\B)(\conof{L},F)$ for an object $L$ in $\B$. Therefore, lax bicolimits of $F$ have to do with the category $\oplaxcone(F,\conof{L})$ of oplax cones of $L$ under $F$. Explicitly, a lax bicolimit of $F$ is a pair $(L,\pi)$ consisting of
\begin{itemize}
\item an object $L$ in $\B$, and
\item an oplax cone $\pi : F \to \conof{L}$ of $L$ under $F$,
\end{itemize}
such that for each object $X$ in $\B$, the functor
\begin{equation}\label{pistar-oplax}
\begin{tikzcd}
\B(L,X) \ar{r}{\pi^*}[swap]{\simeq} & \oplaxcone(F,\conof{X})\end{tikzcd}
\end{equation}
induced by pre-composition with $\pi$, is an equivalence of categories. This means that $\pi^*$ is essentially surjective and fully faithful.
\begin{description}
\item[Essentially Surjective]
The functor $\pi^*$ is essentially surjective if, for each oplax cone $\theta : F\to\conof{X}$ of $X$ under $F$, there exist
\begin{itemize}
\item a $1$-cell $f \in \B(L,X)$ and
\item an invertible modification
\[\begin{tikzcd}
\conof{f}\pi \ar{r}{\Gamma}[swap]{\cong} & \theta.\end{tikzcd}\]
\end{itemize}
Explicitly, for each object $A$ in $\A$, $\Gamma$ has an invertible component $2$-cell
\[\begin{tikzpicture}[xscale=2, yscale=1.5]
\draw[0cell]
(0,0) node (X) {X}
(-1,0) node (L) {L}
(-1,1) node (A) {FA}
;
\draw[1cell]
(L) edge node[swap] {f} (X)
(A) edge node[swap] {\pi_A} (L)
(A) edge[bend left] node (T) {\theta_A} (X)
;
\draw[2cell]
node[between=L and T at .5, rotate=45, font=\Large] (G) {\Rightarrow}
(G) node[above left] {\Gamma_A}
;
\end{tikzpicture}\]
in $\B(FA,X)$.
The modification axiom for $\Gamma$ states that, for each $1$-cell $g \in \A(A,B)$, the equality of pasting diagrams
\[\begin{tikzpicture}[xscale=2.5, yscale=1.7]
\draw[0cell]
(0,0) node (X1) {X}
($(X1)+(1,0)$) node (X2) {X}
($(X1)+(0,1)$) node (L1) {L}
($(X1)+(1,1)$) node (L2) {L}
($(X1)+(0,2)$) node (A1) {FA}
($(X1)+(1,2)$) node (B1) {FB}
;
\draw[1cell]
(X1) edge node (ix) {1_X} (X2)
(L2) edge node {f} (X2)
(A1) edge[bend right=45] node[swap] (th) {\theta_A} (X1)
(L1) edge node[swap]{f} (X1)
(L1) edge node (il) {1_L} (L2)
(A1) edge node[swap]{\pi_A} (L1)
(B1) edge node{\pi_B} (L2)
(A1) edge node{Fg} (B1)
;
\draw[2cell]
node[between=B1 and L1 at .5, rotate=-135, font=\Large] (P) {\Rightarrow}
(P) node[above left] {\pi_g}
node[between=il and ix at .4, rotate=-135, font=\Large] (r) {\Rightarrow}
(r) node[below right, inner sep=1pt] {\ell_f^{-1}r_f}
node[between=L1 and th at .5, rotate=180, font=\Large] (g) {\Rightarrow}
(g) node[above] {\Gamma_A}
;
\draw[0cell]
($(L2)+(.5,.5)$) node[font=\huge] (eq) {=}
($(X2)+(1.4,0)$) node (X3) {X}
($(X3)+(1,0)$) node (X4) {X}
($(X4)+(0,1)$) node (L3) {L}
($(X3)+(0,2)$) node (A2) {FA}
($(X3)+(1,2)$) node (B2) {FB}
;
\draw[1cell]
(X3) edge node {1_X} (X4)
(A2) edge[bend right=45] node {\theta_A} (X3)
(B2) edge[bend right=45] node[swap] (thb) {\theta_B} (X4)
(L3) edge node{f} (X4)
(B2) edge node{\pi_B} (L3)
(A2) edge node(fg) {Fg} (B2)
;
\draw[2cell]
(A2) ++(-85:1) node[rotate=-135, 2label={below,\theta_g}] {\Rightarrow}
node[between=L3 and thb at .5, rotate=180, font=\Large] (gb) {\Rightarrow}
(gb) node[above] {\Gamma_B};
\end{tikzpicture}\]
holds in $\B(FA,X)$.
\item[Fully Faithful]
The functor $\pi^*$ is fully faithful if for
\begin{itemize}
\item each pair of $1$-cells $e,f\in\B(L,X)$ and
\item each modification $\Gamma : \conof{e}\pi \to \conof{f}\pi$,
\end{itemize}
there exists a unique $2$-cell
\[\begin{tikzcd}e \ar{r}{\alpha} & f\end{tikzcd} \stspace \Gamma = \conof{\alpha}*1_{\pi}.\]
In this case, for each object $A$ in $\A$, the component $2$-cell $\Gamma_A$ is the whiskering $\alpha*1_{\pi_A}$ as in the diagram
\[\begin{tikzpicture}[xscale=2, yscale=1]
\draw[0cell]
(0,0) node (X) {X}
(-1,0) node (L) {L}
(-2,0) node (A) {FA}
;
\draw[1cell]
(L) edge[bend left=60] node (e) {e} (X)
(L) edge[bend right=60] node[swap] (f) {f} (X)
(A) edge node {\pi_A} (L)
;
\draw[2cell]
node[between=e and f at .5, rotate=-90, font=\Large] (alpha) {\Rightarrow}
(alpha) node[left] {\alpha};
\end{tikzpicture}\]
in $\B(FA,X)$.
\end{description}
Similarly, a lax colimit of $F$ is a pair $(L,\pi)$ as above such that $\pi^*$ in \eqref{pistar-oplax} is an isomorphism of categories. In this case, essential surjectivity is replaced by the condition that $\pi^*$ be a bijection on objects. In other words, for each oplax cone $\theta : F\to\conof{X}$, there is a unique $1$-cell $f \in \B(L,X)$ such that $\conof{f}\pi = \theta$.\dqed
\end{explanation}
\begin{explanation}[Pseudo Bicolimits]\label{expl:ps-bicolimit}
For a pseudofunctor $F : \A \to \B$ with $\A_0$ a set, pseudo bilimits of $F$ have to do with the category $\Bicatps(\A,\B)(\conof{L},F)$ for an object $L$ in $\B$. Therefore, pseudo bicolimits of $F$ have to do with the category
\[\Bicatps(\Aop,\Bop)(\conof{L},\Fop),\] whose objects are called \index{pseudocone!op-}\emph{op-pseudocones of $L$ under $F$}. They are oplax transformations $F \to \conof{L}$ as in \Cref{expl:oplax-cone} with invertible component $2$-cells.
A pseudo bicolimit of $F$ is a pair $(L,\pi)$ consisting of
\begin{itemize}
\item an object $L$ in $\B$, and
\item an op-pseudocone $\pi : F \to \conof{L}$ of $L$ under $F$,
\end{itemize}
such that for each object $X$ in $\B$, the functor
\begin{equation}\label{pistar-oppseudo}
\begin{tikzcd}
\B(L,X) \ar{r}{\pi^*}[swap]{\simeq} & \Bicatps(\Aop,\Bop)(\conof{X},\Fop)\end{tikzcd}
\end{equation}
induced by pre-composition with $\pi$, is an equivalence of categories. This can be explicitly described as in \Cref{expl:lax-bicolimit} with oplax cones replaced by op-pseudocones.
Similarly, a pseudo colimit of $F$ is a pair $(L,\pi)$ as above such that $\pi^*$ in \eqref{pistar-oppseudo} is an isomorphism of categories. In this case, essential surjectivity is replaced by bijectivity on objects.\dqed
\end{explanation}
\begin{corollary}\label{bicolimit-uniqueness}\index{uniqueness of!bicolimits}\index{bicolimit!uniqueness}
Suppose $F : \A \to \B$ is a lax functor between bicategories with $\A_0$ a set. Suppose $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are oplax cones under $F$ that satisfy one of the following statements.
\begin{enumerate}
\item Both $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are lax bicolimits of $F$.
\item Both $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are lax colimits of $F$.
\item $F$ is a pseudofunctor, and $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are both pseudo bicolimits of $F$.
\item $F$ is a pseudofunctor, and $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are both pseudo colimits of $F$.
\end{enumerate}
Then there exist
\begin{itemize}
\item an equivalence $f \in \B(L,\barof{L})$ and
\item an invertible modification $\conof{f}\pi \cong \barof{\pi}$.
\end{itemize}
\end{corollary}
\begin{proof}
Both $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are lax/pseudo (bi)limits of the lax functor $\Fop : \Aop\to\Bop$. By \Cref{thm:bilimit-uniqueness} there exist (i) an equivalence $f\in\Bop(\barof{L},L)$, which is the same as an equivalence in $\B(L,\barof{L})$, and (ii) an invertible modification $\Gamma : \conof{f}\pi \iso \barof{\pi}$.
\end{proof}
\section{\texorpdfstring{$2$}{2}-Limits}\label{sec:iilimits}
In this section we discuss $2$-(co)limits in the context of $2$-categories. Recall from \Cref{sec:2categories} that a $2$-category is a bicategory whose associator, left unitor, and right unitor are all identity natural transformations. In particular, the objects, the $1$-cells, and the horizontal composition in a $2$-category form a $1$-category. The corresponding concepts of $2$-functors and $2$-natural transformations are in \Cref{def:lax-functors,definition:lax-transformation}. For an object $X$ in a $2$-category $\B$, with $\A$ another $2$-category, note that the constant pseudofunctor $\conof{X} : \A\to\B$ in \Cref{def:constant-pseudofunctor} is a $2$-functor, called the \emph{constant $2$-functor at $X$}.
\begin{definition}\label{def:iilimits}
Suppose $F : \A\to\B$ is a $2$-functor between $2$-categories with $\A_0$ a set.
\begin{enumerate}
\item For an object $L$ in $\B$, define the category\label{notation:2cone} $\iicone(\conof{L},F)$ with:
\begin{itemize}
\item $2$-natural transformations $\conof{L} \to F$ as objects;
\item modifications between them as morphisms;
\item vertical composition of modifications in \Cref{def:modification-composition} as the composition;
\item the identity modifications as the identity morphisms.
\end{itemize}
An object in $\iicone(\conof{L},F)$ is called a \index{2-cone}\emph{$2$-cone of $L$ over $F$}.
\item A \index{2-limit}\index{limit!2-}\emph{$2$-limit of $F$} is a pair $(L,\pi)$ with
\begin{itemize}
\item $L$ an object in $\B$, and
\item $\pi : \conof{L} \to F$ a $2$-cone of $L$ over $F$,
\end{itemize}
such that for each object $X$ in $\B$, the functor
\begin{equation}\label{pistar-2}
\begin{tikzcd}
\B(X,L) \ar{r}{\pi_*}[swap]{\cong} & \iicone(\conof{X},F),\end{tikzcd}
\end{equation}
induced by post-composition with $\pi$, is an isomorphism of categories.
\item A \index{2-colimit}\index{colimit!2-}\emph{$2$-colimit of $F$} is a $2$-limit of the opposite $2$-functor $\Fop : \Aop \to\Bop$ of $F$ as in \Cref{ex:opposite-lax-functor}.
\item For an object $L$ in $\B$, define the category\label{notation:2cocone} $\iicone(F,\conof{L})$ with:
\begin{itemize}
\item $2$-natural transformations $F\to\conof{L}$ as objects;
\item modifications between them as morphisms;
\item vertical composition of modifications as the composition;
\item the identity modifications as the identity morphisms.
\end{itemize}
An object in $\iicone(F,\conof{L})$ is called a \emph{$2$-cocone of $L$ under $F$}.\defmark
\end{enumerate}
\end{definition}
\begin{explanation}[$2$-Cones]\label{expl:iicone}
Analogous to \Cref{expl:lax-cone}, for an object $L$ in $\B$, a $2$-cone $\pi : \conof{L}\to F$ of $L$ over $F$ is determined by a component $1$-cell $\pi_A \in \B(L,FA)$ for each object $A\in \A$, such that the following two conditions are satisfied.
\begin{enumerate}
\item For each $1$-cell $f\in\A(A,A')$, the diagram
\[\begin{tikzcd}[column sep=tiny]
& L \ar{dl}[swap]{\pi_A} \ar{dr}{\pi_{A'}} &\\
FA \ar{rr}{Ff} && FA'
\end{tikzcd}\]
in $\B(L,FA')$ is commutative.
\item For each $2$-cell $\theta : f \to g$ in $\A(A,A')$, the equality
\[\begin{tikzpicture}[xscale=2.3, yscale=2.2]
\draw[0cell]
(0,0) node (L) {L}
($(L)+(-.5,-1)$) node (A) {FA}
($(L)+(.5,-1)$) node (Ap) {FA'}
;
\draw[1cell]
(L) edge node[swap] {\pi_A} (A)
(L) edge node {\pi_{A'}} (Ap)
(A) edge[bend right] node[swap] {Ff} (Ap)
;
\draw[0cell]
($(L)+(1,-.5)$) node[font=\huge] (eq) {=}
($(L)+(2,0)$) node (L2) {L}
($(L2)+(-.5,-1)$) node (A2) {FA}
($(L2)+(.5,-1)$) node (Ap2) {FA'}
;
\draw[1cell]
(L2) edge node[swap] {\pi_A} (A2)
(L2) edge node {\pi_{A'}} (Ap2)
(A2) edge[bend right] node[swap] (f) {Ff} (Ap2)
(A2) edge[bend left] node (g) {Fg} (Ap2)
;
\draw[2cell]
node[between=A2 and Ap2 at .6, rotate=90, font=\Large] (t) {\Rightarrow}
(t) node[left] {F\theta}
;
\end{tikzpicture}\]
holds, in which an unlabeled region means an identity $2$-cell.
\end{enumerate}
Similar to \Cref{expl:lax-cone-morphism}, a morphism of $2$-cones \[\Gamma : \pi \to \phi \in \iicone(\conof{L},F)\] is a modification between $2$-natural transformations with a component $2$-cell
\[\Gamma_A : \pi_A \to \phi_A \inspace \B(L,FA)\]
for each object $A$ in $\A$, that satisfies the modification axiom
\[\begin{tikzpicture}[xscale=2, yscale=2.5]
\draw[0cell]
(0,0) node (L) {L}
($(L)+(0,-1)$) node (Ap) {FA'}
;
\draw[1cell]
(L) edge[bend left=50] node {\phi_{A'}} (Ap)
(L) edge[bend right=50] node[swap] {\pi_{A'}} (Ap)
;
\draw[2cell]
node[between=L and Ap at .5, rotate=0, font=\Large] (gap) {\Rightarrow}
(gap) node[above] {\Gamma_{A'}}
;
\draw[0cell]
($(L)+(1,-.5)$) node[font=\huge] (eq) {=}
($(L)+(2,0)$) node (L2) {L}
($(L2)+(0,-.6)$) node (A) {FA}
($(L2)+(0,-1)$) node (Ap2) {FA'}
;
\draw[1cell]
(L2) edge[bend left=50] node {\phi_{A}} (A)
(L2) edge[bend right=50] node[swap] {\pi_{A}} (A)
(A) edge node[swap] {Ff} (Ap2)
;
\draw[2cell]
node[between=L2 and A at .6, rotate=0, font=\Large] (ga) {\Rightarrow}
(ga) node[above] {\Gamma_{A}}
;
\end{tikzpicture}\]
for each $1$-cell $f \in \A(A,A')$.\dqed
\end{explanation}
\begin{explanation}[$2$-Limits]\label{expl:iilimits}
In \Cref{def:iilimits}, a pair $(L,\pi)$, with $\pi : \conof{L}\to F$ a $2$-cone of $L$ over $F$, is a $2$-limit of the $2$-functor $F : \A \to \B$ if and only if for each object $X$ in $\B$ the following two conditions hold.
\begin{description}
\item[Objects] The functor $\pi_*$ in \eqref{pistar-2} is bijective on objects. This means that for each $2$-cone $\theta : \conof{X} \to F$ as in \Cref{expl:iicone}, there exists a unique $1$-cell $f \in \B(X,L)$ such that, for each object $A$ in $\A$, the component $1$-cell $\theta_A$ factors as the horizontal composite
\[\begin{tikzcd}
X \ar{rr}{\theta_A} \ar{dr}[swap]{f} && FA\\
& L \ar{ur}[swap]{\pi_A} &
\end{tikzcd}\]
in $\B(X,FA)$.
\item[Morphisms] The functor $\pi_*$ is bijective on morphisms. This means that for
\begin{itemize}
\item each pair of $1$-cells $e,f\in\B(X,L)$ and
\item each modification $\Gamma : \pi\conof{e} \to \pi\conof{f}$,
\end{itemize}
there exists a unique $2$-cell $\alpha : e \to f$ such that \[\Gamma = 1_{\pi}*\conof{\alpha}.\] In this case, $\Gamma_A=1_{\pi_A}*\alpha$ in $\B(X,FA)$ for each object $A$ in $\A$.\dqed
\end{description}
\end{explanation}
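To make the $2$-dimensional universal property concrete, consider the simplest case, in which $\A$ has only identity $1$-cells and identity $2$-cells. Then a $2$-cone of $L$ over $F$ is just a family of $1$-cells $\{\pi_A \in \B(L,FA)\}_{A\in\A}$, since the two conditions in \Cref{expl:iicone} are automatically satisfied, and similarly a morphism of $2$-cones is just a family of component $2$-cells. The isomorphism \eqref{pistar-2} therefore becomes an isomorphism of categories
\[\B(X,L) \cong \prod_{A\in\A} \B(X,FA),\]
so a $2$-limit of $F$ is a product of the objects $\{FA\}_{A\in\A}$ in $\B$, with a universal property that determines not only $1$-cells into $L$ but also $2$-cells between them.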
\begin{explanation}[$2$-Cocones]\label{expl:iicocone}
Similar to \Cref{expl:iicone}, for an object $L$ in $\B$, a $2$-cocone $\pi : F\to\conof{L}$ of $L$ under $F$ is determined by a component $1$-cell $\pi_A \in \B(FA,L)$ for each object $A\in \A$, such that the following two conditions are satisfied.
\begin{enumerate}
\item For each $1$-cell $f\in\A(A,A')$, the diagram
\[\begin{tikzcd}[column sep=tiny]
FA \ar{rr}{Ff} \ar{dr}[swap]{\pi_A} && FA' \ar{dl}{\pi_{A'}}\\
& L &
\end{tikzcd}\]
in $\B(FA,L)$ is commutative.
\item For each $2$-cell $\theta : f \to g$ in $\A(A,A')$, the equality
\[\begin{tikzpicture}[xscale=2.3, yscale=2.2]
\draw[0cell]
(0,0) node (L) {L}
($(L)+(-.5,1)$) node (A) {FA}
($(L)+(.5,1)$) node (Ap) {FA'}
;
\draw[1cell]
(A) edge node[swap] {\pi_A} (L)
(Ap) edge node {\pi_{A'}} (L)
(A) edge[bend left] node {Ff} (Ap)
;
\draw[0cell]
($(L)+(1,.5)$) node[font=\huge] (eq) {=}
($(L)+(2,0)$) node (L2) {L}
($(L2)+(-.5,1)$) node (A2) {FA}
($(L2)+(.5,1)$) node (Ap2) {FA'}
;
\draw[1cell]
(A2) edge node[swap] {\pi_A} (L2)
(Ap2) edge node {\pi_{A'}} (L2)
(A2) edge[bend right] node[swap] {Fg} (Ap2)
(A2) edge[bend left] node {Ff} (Ap2)
;
\draw[2cell]
node[between=A2 and Ap2 at .4, rotate=-90, font=\Large] (t) {\Rightarrow}
(t) node[right] {F\theta}
;
\end{tikzpicture}\]
holds, in which an unlabeled region means an identity $2$-cell.
\end{enumerate}
Similar to \Cref{expl:lax-cone-morphism}, a morphism of $2$-cocones \[\Gamma : \pi \to \phi \in \iicone(F,\conof{L})\] is a modification with a component $2$-cell
\[\Gamma_A : \pi_A \to \phi_A \inspace \B(FA,L)\]
for each object $A$ in $\A$, that satisfies the modification axiom
\[\begin{tikzpicture}[xscale=2, yscale=2.5]
\draw[0cell]
(0,0) node (L) {L}
($(L)+(0,1)$) node (A) {FA}
;
\draw[1cell]
(A) edge[bend left=50] node {\phi_{A}} (L)
(A) edge[bend right=50] node[swap] {\pi_{A}} (L)
;
\draw[2cell]
node[between=L and A at .5, rotate=0, font=\Large] (ga) {\Rightarrow}
(ga) node[above] {\Gamma_{A}}
;
\draw[0cell]
($(L)+(1,.5)$) node[font=\huge] (eq) {=}
($(L)+(2,0)$) node (L2) {L}
($(L2)+(0,.6)$) node (Ap) {FA'}
($(L2)+(0,1)$) node (A2) {FA}
;
\draw[1cell]
(Ap) edge[bend left=50] node {\phi_{A'}} (L2)
(Ap) edge[bend right=50] node[swap] {\pi_{A'}} (L2)
(A2) edge node[swap] {Ff} (Ap)
;
\draw[2cell]
node[between=Ap and L2 at .6, rotate=0, font=\Large] (gap) {\Rightarrow}
(gap) node[above] {\Gamma_{A'}}
;
\end{tikzpicture}\]
for each $1$-cell $f \in \A(A,A')$.\dqed
\end{explanation}
\begin{explanation}[$2$-Colimits]\label{expl:iicolimits}
In \Cref{def:iilimits}, a pair $(L,\pi)$, with $\pi : F\to \conof{L}$ a $2$-cocone of $L$ under $F$, is a $2$-colimit of the $2$-functor $F : \A \to \B$ if and only if for each object $X$ in $\B$ the following two conditions hold.
\begin{enumerate}
\item For each $2$-cocone $\theta : F\to\conof{X}$ as in \Cref{expl:iicocone}, there exists a unique $1$-cell $f \in \B(L,X)$ such that, for each object $A$ in $\A$, the component $1$-cell $\theta_A$ factors as the horizontal composite
\[\begin{tikzcd}[row sep=tiny]
& L \ar{dd}{f} \\
FA \ar{ur}{\pi_A} \ar{dr}[swap]{\theta_A} &\\
& X
\end{tikzcd}\]
in $\B(FA,X)$.
\item For
\begin{itemize}
\item each pair of $1$-cells $e,f\in\B(L,X)$ and
\item each modification $\Gamma : \conof{e}\pi \to \conof{f}\pi$,
\end{itemize}
there exists a unique $2$-cell $\alpha : e \to f$ such that \[\Gamma =\conof{\alpha}*1_{\pi}.\] In this case, $\Gamma_A=\alpha*\pi_A$ in $\B(FA,X)$ for each object $A$ in $\A$.\dqed
\end{enumerate}
\end{explanation}
In \Cref{exer:iilimits-unique}, the reader is asked to prove the following uniqueness result for $2$-(co)limits.
\begin{proposition}\label{iilimits-unique}\index{uniqueness of!2-(co)limits}\index{2-colimit!uniqueness}\index{2-limit!uniqueness}
Suppose $F : \A\to\B$ is a $2$-functor between $2$-categories with $\A_0$ a set.
\begin{enumerate}
\item Suppose $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are both $2$-limits of $F$. Then there exists an isomorphism $f : L \to \barof{L}$ such that \[\barof{\pi}_A f = \pi_A\] in $\B(L,FA)$ for each object $A$ in $\A$.
\item Suppose $(L,\pi)$ and $(\barof{L},\barof{\pi})$ are both $2$-colimits of $F$. Then there exists an isomorphism $f : L \to \barof{L}$ such that \[f\pi_A=\barof{\pi}_A\] in $\B(FA,\barof{L})$ for each object $A$ in $\A$.
\end{enumerate}
\end{proposition}
\begin{explanation}
A $2$-functor $F : \A\to\B$ between $2$-categories with $\A_0$ a set may also be regarded as a lax functor or a pseudofunctor between bicategories. Therefore, there are five kinds of limits for $F$:
\begin{itemize}
\item lax (bi)limit and pseudo (bi)limit in \Cref{def:bilimits};
\item $2$-limit in \Cref{def:iilimits}.
\end{itemize}
In general, these limits are different. We will illustrate this point in the following example. Similarly, there are five kinds of colimits for $F$: lax (bi)colimit, pseudo (bi)colimit, and $2$-colimit.\dqed
\end{explanation}
\begin{example}[$2$-, Lax (bi)-, and Pseudo (bi)-pullbacks]\label{ex:iipullback}
Consider the category $\C$ with three objects and two non-identity morphisms, as displayed below.
\[\begin{tikzcd}
C_1 \ar{r}{c_{1}} & C_0 & C_2 \ar{l}[swap]{c_{2}}\end{tikzcd}\]
We regard $\C$ also as a locally discrete $2$-category, i.e., a $2$-category with no non-identity $2$-cells. Suppose $F : \C\to\A$ is a $2$-functor, which is uniquely determined by the data
\[\begin{tikzcd}
FC_1 \ar{r}{Fc_1} & FC_0 & FC_2 \ar{l}[swap]{Fc_2}\end{tikzcd}\]
consisting of three objects $\{FC_0,FC_1,FC_2\}$ and two $1$-cells $\{Fc_1,Fc_2\}$ in $\A$.
\begin{description}
\item[$2$-Cone and $2$-Pullback]\index{2-pullback}
For an object $L$ in $\A$, a $2$-cone $\pi : \conof{L}\to F$ is uniquely determined by three component $1$-cells $\pi_i : L \to FC_i$ that make the diagram
\[\begin{tikzcd}[row sep=tiny]
& L \ar{dl}[swap]{\pi_1} \ar{dd}{\pi_0} \ar{dr}{\pi_2} & \\
FC_1 \ar{dr}[swap]{Fc_1} & & FC_2 \ar{dl}{Fc_2}\\
& FC_0 &\end{tikzcd}\]
in $\A$ commutative.
A $2$-limit of $F$, which is also called a \emph{$2$-pullback}, is a pair $(L,\pi)$ as above such that the two conditions in \Cref{expl:iilimits} are satisfied for each object $X$ in $\A$.
\item[Lax Cone and Lax (Bi-)Pullback]\index{lax!bi-pullback}
By \Cref{expl:lax-cone}, a lax cone $\pi : \conof{L}\to F$ is uniquely determined by
\begin{itemize}
\item three component $1$-cells $\pi_i \in \A(L,FC_i)$ for $0\leq i \leq 2$, and
\item two component $2$-cells
\[\begin{tikzpicture}[xscale=2, yscale=1]
\draw[0cell]
(0,0) node (L) {L}
($(L)+(-1,-1)$) node (C1) {FC_1}
($(L)+(1,-1)$) node (C2) {FC_2}
($(L)+(0,-2)$) node (C0) {FC_0}
;
\draw[1cell]
(L) edge[bend right] node[swap] {\pi_1} (C1)
(C1) edge[bend right] node[swap] {Fc_1} (C0)
(L) edge[bend left] node {\pi_2} (C2)
(C2) edge[bend left] node {Fc_2} (C0)
(L) edge node[pos=.75] {\pi_0} (C0)
;
\draw[2cell]
node[between=C1 and C2 at .25, rotate=0, font=\Large] (pii) {\Rightarrow}
(pii) node[above] {\pi_{c_1}}
node[between=C1 and C2 at .75, rotate=180, font=\Large] (piii) {\Rightarrow}
(piii) node[above] {\pi_{c_2}}
;
\end{tikzpicture}\]
in $\A(L,FC_0)$.
\end{itemize}
A lax bilimit of $F$, which is also called a \emph{lax bi-pullback}, is a pair $(L,\pi)$ as above such that the two conditions in \Cref{expl:lax-bilimits} are satisfied for each object $X$ in $\A$.
A lax limit of $F$, which is also called a \index{lax!pullback}\emph{lax pullback}, is a pair $(L,\pi)$ as above such that the two conditions in \Cref{expl:lax-bilimits} are satisfied for each object $X$ in $\A$, with essential surjectivity replaced by bijectivity on objects.
\item[Pseudocone and Pseudo (Bi-)Pullback]
A pseudocone $\pi : \conof{L}\to F$ is a lax cone as above with both component $2$-cells $\pi_{c_1}$ and $\pi_{c_2}$ invertible. Pseudo bilimits and pseudo limits of $F$ are also called \index{pseudo!bi-pullback}\emph{pseudo bi-pullbacks} and \index{pseudo!pullback}\emph{pseudo pullbacks}, respectively. They are described as in the previous case, with essential surjectivity or bijectivity on objects applied to pseudocones instead of lax cones.\dqed
\end{description}
\end{example}
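To see how these three notions of pullback differ, one can specialize to the $2$-category $\Cat$; the following description is standard, and we state it without proof. For a cospan of functors
\[\begin{tikzcd}
\C_1 \ar{r}{F_1} & \C_0 & \C_2 \ar{l}[swap]{F_2}\end{tikzcd}\]
between small categories, a lax pullback is given by the comma category, whose objects are triples $(A_1,A_2,f)$ with $A_i$ an object in $\C_i$ and $f : F_1A_1 \to F_2A_2$ a morphism in $\C_0$, and whose morphisms are pairs of morphisms compatible with the components $f$. A pseudo pullback is described in the same way with each component $f$ required to be invertible. A $2$-pullback is the ordinary pullback of categories, whose objects are pairs $(A_1,A_2)$ with $F_1A_1 = F_2A_2$, corresponding to triples in which $f$ is an identity.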
\section{Duskin Nerves}\label{sec:duskin-nerves}
The Grothendieck nerve of a category expresses the categorical structure in terms of a simplicial set. Grothendieck's idea is to use categorical tools, via the nerve construction, to study and classify homotopy $n$-types. In this section we discuss a nerve construction due to Duskin that expresses bicategories in simplicial terms. We begin by recalling some basic definitions regarding simplicial objects.
\begin{definition}\label{def:ordinal-number-cat}
The \index{ordinal number category}\index{category!ordinal number}\emph{ordinal number category}\label{notation:ordinalcat} $\Delta$ has
\begin{description}
\item[Objects] \index{linearly ordered set}linearly ordered sets\label{notation:ordn} $\ord{n} = \{0 < 1 < \cdots < n\}$ for $n \geq 0$;
\item[Morphisms] $f : \ord{m} \to \ord{n}$ order-preserving maps, i.e., $f(i) \leq f(j)$ if $i \leq j$.
\end{description}
Moreover, for $0 \leq i \leq n$, the maps\label{notation:coface}
\[\begin{tikzcd}
\ord{n-1} \ar{r}{d^i} & \ord{n} & \ord{n+1} \ar{l}[swap]{s^i}\end{tikzcd}\]
with
\begin{itemize}
\item $d^i$ injective and omitting $i\in \ord{n}$ and
\item $s^i$ surjective and sending both $i,i+1\in \ord{n+1}$ to $i\in \ord{n}$
\end{itemize}
are called the \emph{$i$th coface map}\index{coface map} and the\index{codegeneracy map} \emph{$i$th codegeneracy map}, respectively.
\end{definition}
\begin{example}\label{ex:cosimplicial-id}
The coface and codegeneracy maps satisfy the \index{cosimplicial identities}\emph{cosimplicial identities}:
\begin{alignat*}{2}
d^jd^i &= d^id^{j-1} \qquad &&\text{if $i < j$},\\
s^jd^i &= d^is^{j-1} \quad &&\text{if $i < j$},\\
s^jd^j &= \Id = s^jd^{j+1}, \quad &&\\
s^jd^i &= d^{i-1}s^j \quad &&\text{if $i>j+1$},\\
s^js^i &= s^is^{j+1} \quad &&\text{if $i \leq j$}.
\end{alignat*}
In fact, the coface maps, the codegeneracy maps, and the cosimplicial identities give a generator-relation description of the category $\Delta$. This is a consequence of the surjection-injection factorization of functions. Moreover, regarding each $\ord{n}$ as a small category, there is a full embedding $\Delta \to \Cat$.\dqed
\end{example}
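For example, consider the morphism $f : \ord{2}\to\ord{2}$ with $f(0)=f(1)=0$ and $f(2)=2$. Its surjection-injection factorization is
\[\begin{tikzcd}
\ord{2} \ar{r}{s^0} & \ord{1} \ar{r}{d^1} & \ord{2},\end{tikzcd}\]
i.e., $f=d^1s^0$, since $s^0$ sends both $0,1\in\ord{2}$ to $0$ and sends $2$ to $1$, while $d^1$ sends $0$ to $0$ and $1$ to $2$, omitting $1\in\ord{2}$.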
\begin{definition}\label{def:simplicial-objects}
For a category $\C$, the diagram category $\C^{\Deltaop}$ is called the category of \index{simplicial!object}\emph{simplicial objects in $\C$}.
\begin{itemize}
\item Its objects, called \emph{simplicial objects}, are functors $X : \Deltaop \to \C$. The object $X(\ord{n})$ is written as $X_n$, and is called the object of \emph{$n$-simplices}.
\item Its morphisms, called \index{simplicial!map}\emph{simplicial maps}, are natural transformations between such functors.
\end{itemize}
If $\C=\Set$, then the diagram category\label{notation:sset} $\SSet$ is called the category of \index{simplicial!set}\emph{simplicial sets}.
\end{definition}
\begin{example}\label{ex:simplicial-objects}
Since the coface maps, the codegeneracy maps, and the cosimplicial identities completely describe $\Delta$, a simplicial object $X$ in $\C$ is uniquely determined by the objects $\{X_n\}_{n\geq 0}$ and the morphisms\label{notation:face-map}
\[\begin{tikzcd}
X_{n-1} & X_n \ar{l}[swap]{d_i} \ar{r}{s_i} & X_{n+1} \end{tikzcd} \forspace 0 \leq i \leq n,\]
called the \emph{$i$th face}\index{face} and the\index{degeneracy} \emph{$i$th degeneracy}, satisfying the \index{simplicial!identities}\emph{simplicial identities}:
\begin{alignat*}{2}
d_id_j &= d_{j-1}d_i \qquad &&\text{if $i < j$},\\
d_is_j &= s_{j-1}d_i \qquad &&\text{if $i < j$},\\
d_js_j &= \Id = d_{j+1}s_j,\quad &&\\
d_is_j &= s_jd_{i-1} \qquad &&\text{if $i>j+1$},\\
s_is_j &= s_{j+1}s_i \qquad &&\text{if $i \leq j$}.
\end{alignat*}
A simplicial map $f : X \to Y$ between simplicial objects in $\C$ is uniquely determined by the level-wise morphisms $f_n : X_n \to Y_n$ for $n \geq 0$ that strictly commute with all the faces and degeneracies.\dqed
\end{example}
As a refresher, we recall the definition of the nerve of a category.
\begin{definition}\label{def:grothendieck-nerve}
The \index{Grothendieck!nerve}\index{nerve}\emph{Grothendieck nerve}, or just the \emph{nerve}, is the functor
\[\Ner : \Cat \to \SSet\]
that sends a small category $\C$ to the simplicial set $\Ner(\C)$ defined as follows.
\begin{description}
\item[Simplices] For each $n \geq 0$, the set \[\Ner(\C)_n = \Cat(\ord{n},\C)\] consists of sequences of $n$ composable morphisms
\[\begin{tikzcd}
A_0 \ar{r}{f_1} & A_1 \ar{r}{f_2} & \cdots \ar{r}{f_{n-1}} & A_{n-1} \ar{r}{f_n} & A_n
\end{tikzcd} \inspace \C.\]
\item[Faces] For $0 \leq i \leq n$, the map \[d_i : \Ner(\C)_n \to \Ner(\C)_{n-1}\] removes $A_0$ and $f_1$ if $i=0$, removes $A_n$ and $f_n$ if $i=n$, and composes $f_i$ with $f_{i+1}$ if $0 < i < n$.
\item[Degeneracies] For $0 \leq i \leq n$, the map \[s_i : \Ner(\C)_n \to \Ner(\C)_{n+1}\] replaces the object $A_i$ by its identity morphism.
\end{description}
The value of $\Ner$ at a functor $\C \to \D$ is given by the composite $\ord{n}\to\C\to\D$ for each $n$-simplex in $\Ner(\C)$. This finishes the definition of the Grothendieck nerve.
\end{definition}
\begin{explanation}
In particular, the $0$-simplices in $\Ner(\C)$ are the objects in $\C$. The $1$-simplices in $\Ner(\C)$ are the morphisms in $\C$. The $2$-simplices in $\Ner(\C)$ are the composable pairs of morphisms in $\C$.\dqed
\end{explanation}
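For example, consider $\C=\ord{1}$. An $n$-simplex in $\Ner(\ord{1})$ is a functor $\ord{n}\to\ord{1}$, i.e., an order-preserving map, and such a map is determined by the smallest element of $\ord{n}$ sent to $1$, if one exists. So the set
\[\Ner(\ord{1})_n = \Cat(\ord{n},\ord{1})\]
has exactly $n+2$ elements. By the full embedding $\Delta\to\Cat$ in \Cref{ex:cosimplicial-id}, $\Ner(\ord{1})$ is the simplicial set represented by $\ord{1}$, i.e., the standard simplicial $1$-simplex.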
Now we define the first type of nerve of a bicategory. Recall from \Cref{thm:cat-of-bicat} the category $\Bicatsu$ with small bicategories as objects and strictly unitary lax functors as morphisms.
\begin{definition}\label{def:duskin-nerve}
The \emph{Duskin nerve}\index{Duskin nerve}\index{nerve!Duskin} is the functor
\[\DNer : \Bicatsu \to \SSet\]
defined as follows for a small bicategory $\B$ and $n \geq 0$.
\begin{description}
\item[Simplices] The set\label{notation:duskin-simplices}
\[\DNer(\B)_n = \Bicatsu(\ord{n},\B)\] consists of strictly unitary lax functors from the small category $\ord{n}$, regarded as a locally discrete bicategory, to $\B$.
\item[Faces] These are induced by the coface maps $d^i : \ord{n-1} \to \ord{n}$, regarded as strict functors between locally discrete bicategories.
\item[Degeneracies] These are induced by the codegeneracy maps $s^i : \ord{n+1} \to \ord{n}$, regarded as strict functors between locally discrete bicategories.
\item[Morphisms] The value of $\DNer$ at a strictly unitary lax functor $\B\to \B'$ is given by the composite $\ord{n}\to\B\to\B'$ for each $n$-simplex in $\DNer(\B)$.
\end{description}
This finishes the definition of the Duskin nerve.
\end{definition}
First we observe that the Duskin nerve restricts to the Grothendieck nerve.
\begin{proposition}\label{dnerve-category}
The diagram
\[\begin{tikzcd}[column sep=large]
\Cat \ar{d} \ar{dr}{\Ner} &\\
\Bicatsu \ar{r}[swap]{\DNer} & \SSet\end{tikzcd}\]
is commutative, in which the functor $\Cat\to\Bicatsu$ regards each small category as a locally discrete bicategory and each functor as a strict functor.
\end{proposition}
\begin{proof}
For a category $\C$, as explained in \Cref{ex:functor-laxfunctor}, strictly unitary lax functors $\ord{n} \to \C$ are precisely the functors $\ord{n} \to \C$, so $\DNer(\C)_n=\Ner(\C)_n$. In both the Duskin nerve and the Grothendieck nerve, the faces and the degeneracies are induced by the coface and codegeneracy maps. Similarly, $\DNer$ restricts to $\Ner$ on morphisms.
\end{proof}
To understand the Duskin nerve of a small bicategory $\B$, we first unwrap its lowest dimensions.
\begin{lemma}\label{duskin-0}
$\DNer(\B)_0$ is the set of objects in $\B$.
\end{lemma}
\begin{proof}
A $0$-simplex in $\DNer(\B)$ is a strictly unitary lax functor $F : \ord{0} \to \B$, where $\ord{0}$ has only one object $0$, its identity $1$-cell $1_0$, and its identity $2$-cell $1_{1_0}$. Let us write $X$ for the object $F0$ in $\B$. We want to show that $F$ is completely determined by the object $X$. The functor
\[F : \ord{0}(0,0) \to \B(X,X)\]
must send:
\begin{itemize}
\item the identity $1$-cell $1_0$ to the identity $1$-cell $1_X$ because $F$ is strictly unitary;
\item the identity $2$-cell $1_{1_0}$ to the identity $2$-cell $1_{1_X}$ by functoriality.
\end{itemize}
The only other datum is the lax functoriality constraint
\[F^2 : (F1_0)(F1_0) = 1_X1_X \to 1_X = F(1_01_0) \inspace \B(X,X).\]
Since $\ord{0}$ has no non-identity $2$-cells, the lax left unity axiom \eqref{f0-bicat} implies $F^2=\ell_{1_X}$, the left unitor at $1_X$. The lax right unity axiom requires $F^2=r_{1_X}$, which holds automatically because $\ell_{1_X}=r_{1_X}$ in $\B$ by \Cref{bicat-l-equals-r}. So it imposes no further conditions on $F$. The lax associativity axiom \eqref{f2-bicat} is the outermost diagram below
\[\begin{tikzcd}
(1_X1_X)1_X \ar{r}{a} \ar{d}[swap]{\ell*1} & 1_X(1_X1_X) \ar{d}{1*\ell}\\
1_X1_X \ar{r}{1} \ar{d}[swap]{\ell} & 1_X1_X \ar{d}{\ell}\\
1_X \ar{r}{1} & 1_X
\end{tikzcd}\]
in which every identity $2$-cell is written as $1$. Since $\ell_{1_X}=r_{1_X}$, the top square is already commutative in $\B$, while the bottom square commutes by definition. So once again this imposes no further conditions on $F$.
\end{proof}
\begin{lemma}\label{duskin-1}
$\DNer(\B)_1$ is the set of $1$-cells in $\B$.
\end{lemma}
\begin{proof}
A $1$-simplex in $\DNer(\B)$ is a strictly unitary lax functor $F : \ord{1} \to \B$, where $\ord{1}$ has objects $\{0,1\}$, their identity $1$-cells and the only non-identity $1$-cell $f : 0\to 1$, and their identity $2$-cells. Let us write $X=F0$ and $Y=F1$. As in the proof of \Cref{duskin-0}, $F$ preserves all the identity $1$-cells and identity $2$-cells, and $Ff$ is a $1$-cell in $\B(X,Y)$. We want to show that there are no further restrictions on $Ff$.
The lax left and right unity axioms \eqref{f0-bicat} are the equalities
\begin{align*}
F^2_{1_1,f} &= \ell_{Ff} : 1_Y (Ff) = (F1_1)(Ff) \to Ff,\\
F^2_{f,1_0} &= r_{Ff} : (Ff)1_X = (Ff)(F1_0) \to Ff.
\end{align*}
They impose no restrictions on the $1$-cell $Ff$.
For the lax associativity axiom \eqref{f2-bicat}, if all three $1$-cells involved are identity $1$-cells in $\ord{1}$, which are then necessarily the same, then the previous proof shows that this imposes no further conditions on $F$. The other three cases are the outermost diagrams below.
\[\begin{tikzcd}
\big((Ff)1_X\big)1_X \ar{d}[swap]{r*1} \ar{r}{a} & (Ff)(1_X1_X) \ar{d}{1*\ell} \\
(Ff)1_X \ar{r}{1} \ar{d}[swap]{r} & (Ff)1_X \ar{d}{r}\\
Ff \ar{r}{1} & Ff\end{tikzcd}\qquad
\begin{tikzcd}
\big(1_Y(Ff)\big)1_X \ar{d}[swap]{\ell*1} \ar{r}{a} & 1_Y\big((Ff)1_X\big) \ar{d}{1*r} \ar{dl}{\ell}\\
(Ff)1_X \ar{d}[swap]{r} & 1_Y(Ff) \ar{d}{\ell}\\
Ff \ar{r}{1} & Ff\end{tikzcd}\]
\[\begin{tikzcd}
(1_Y1_Y)(Ff) \ar{d}[swap]{\ell*1} \ar{r}{a} & 1_Y\big(1_Y(Ff)\big) \ar{d}{1*\ell} \ar{dl}{\ell}\\
1_Y(Ff) \ar{d}[swap]{\ell} & 1_Y(Ff) \ar{d}{\ell}\\
Ff \ar{r}{1} & Ff\end{tikzcd}\]
In the first diagram, the top square is commutative by the middle unity axiom \eqref{bicat-unity}, and the bottom square is commutative by definition. In each of the remaining two diagrams, the top left triangle is commutative by the left unity property in \Cref{bicat-left-right-unity}, and the trapezoid is commutative by the naturality of $\ell$. Therefore, they do not impose any further restriction on the $1$-cell $Ff$.
\end{proof}
\begin{lemma}\label{duskin-2}
$\DNer(\B)_2$ is the set of quadruples $(i,j,k,\theta)$ as in the diagram
\[\begin{tikzpicture}[xscale=1, yscale=1]
\draw[0cell]
(0,0) node (x) {X}
(1,1) node (y) {Y}
(2,0) node (z) {Z}
;
\draw[1cell]
(x) edge[bend left=10] node{i} (y)
(y) edge[bend left=10] node{j} (z)
(x) edge[bend right=10] node[swap] (k) {k} (z)
;
\draw[2cell]
node[between=y and k at .5, rotate=-90, font=\Large] (t) {\Rightarrow}
(t) node[right] {\theta}
;
\end{tikzpicture}\]
consisting of:
\begin{itemize}
\item $1$-cells $i \in \B(X,Y)$, $j\in \B(Y,Z)$, and $k\in\B(X,Z)$ for some objects $X,Y$, and $Z$ in $\B$;
\item a $2$-cell $\theta : ji \to k$ in $\B(X,Z)$.
\end{itemize}
\end{lemma}
\begin{proof}
A $2$-simplex in $\DNer(\B)$ is a strictly unitary lax functor $F : \ord{2} \to \B$, where $\ord{2}$ has:
\begin{itemize}
\item objects $\{0,1,2\}$,
\item their identity $1$-cells and three non-identity $1$-cells $f : 0\to 1$, $g : 1 \to 2$, and $gf : 0\to 2$, and
\item their identity $2$-cells.
\end{itemize}
Let us write
\begin{itemize}
\item $X=F0$, $Y=F1$, and $Z=F2$ for the images of the objects, and
\item $i = Ff : X\to Y$, $j = Fg : Y \to Z$, and $k = F(gf) : X \to Z$ for the images of the non-identity $1$-cells.
\end{itemize}
As in the proof of \Cref{duskin-1}, the lax left and right unity axioms \eqref{f0-bicat} state \[F^2_{1_1,f} = \ell_{i}, \quad F^2_{f,1_0} = r_{i},\] and similarly for $j$ and $k$. They impose no restrictions on the $1$-cells $i$, $j$, and $k$. The only other instance of the lax functoriality constraint is a $2$-cell \[\theta = F^2_{g,f} : (Fg)(Ff)=ji \to k=F(gf)\] in $\B(X,Z)$. It remains to check that no further restrictions are imposed on $\theta$.
For the lax associativity axiom \eqref{f2-bicat}, if at most one of the three $1$-cells involved is a non-identity $1$-cell, then the proof of \Cref{duskin-1} shows that no further restrictions are imposed. The only three cases left are the outermost diagrams below.
\[\begin{tikzcd}
(1_Zj)i \ar{d}[swap]{\ell*1} \ar{r}{a} & 1_Z(ji) \ar{d}{1*\theta} \ar{dl}{\ell}\\
ji \ar{d}[swap]{\theta} & 1_Zk \ar{d}{\ell}\\
k \ar{r}{1} & k\end{tikzcd}\qquad
\begin{tikzcd}
(ji)1_X \ar{d}[swap]{\theta*1} \ar{r}{a} \ar{dr}[swap]{r} & j(i1_X) \ar{d}{1*r}\\
k1_X \ar{d}[swap]{r} & ji \ar{d}{\theta}\\
k \ar{r}{1} & k\end{tikzcd}\qquad
\begin{tikzcd}
(j1_Y)i \ar{d}[swap]{r*1} \ar{r}{a} & j(1_Yi) \ar{d}{1*\ell}\\
ji \ar{d}[swap]{\theta} \ar{r}{1} & ji \ar{d}{\theta}\\
k \ar{r}{1} & k\end{tikzcd}\]
In the first two diagrams, the upper triangles are commutative by the left and right unity properties in \Cref{bicat-left-right-unity}, and the lower trapezoids are commutative by the naturality of $\ell$ and $r$. In the third diagram, the top square is commutative by the middle unity axiom \eqref{bicat-unity}, and the bottom square is commutative by definition. Therefore, these diagrams impose no restrictions on the $2$-cell $\theta$.
\end{proof}
\begin{proposition}\label{duskin-n}\index{characterization of!the Duskin nerve}
Each element in $\DNer(\B)_n$ consists of precisely the following data.
\begin{itemize}
\item An object $X_i \in \B$ for each $0\leq i \leq n$.
\item A $1$-cell $f_{ij} \in \B(X_i,X_j)$ for each pair $i < j$ in $\ord{n}$.
\item A $2$-cell $\theta_{ijk} : f_{jk}f_{ij} \to f_{ik}$ for each tuple $i < j < k$ in $\ord{n}$.
\end{itemize}
These data are required to make the diagram
\begin{equation}\label{duskin-cocycle}
\begin{tikzcd}
(f_{kl}f_{jk})f_{ij} \ar{r}{a} \ar{d}[swap]{\theta_{jkl}*1} & f_{kl}(f_{jk}f_{ij}) \ar{d}{1*\theta_{ijk}}\\
f_{jl}f_{ij} \ar{d}[swap]{\theta_{ijl}} & f_{kl}f_{ik} \ar{d}{\theta_{ikl}}\\
f_{il} \ar{r}{1} & f_{il}\end{tikzcd}
\end{equation}
in $\B(X_i,X_l)$ commutative for each tuple $i < j < k < l$ in $\ord{n}$.
\end{proposition}
\begin{proof}
The cases $n=0,1,2$ are precisely \Cref{duskin-0,duskin-1,duskin-2}. So suppose $n \geq 3$.
An $n$-simplex in $\DNer(\B)$ is a strictly unitary lax functor $F : \ord{n} \to \B$. We write:
\begin{itemize}
\item $X_i=Fi$ for the image of each object $i\in\ord{n}$;
\item $f_{ij}=Fx_{ij} \in \B(X_i,X_j)$ for the image of the morphism $x_{ij} : i \to j$ in $\ord{n}$ for $i<j$;
\item $\theta_{ijk} = F^2_{x_{jk},x_{ij}} : f_{jk}f_{ij} \to f_{ik}$ for the indicated lax functoriality constraint for $i<j<k$.
\end{itemize}
Reusing the proofs of \Cref{duskin-0,duskin-1,duskin-2}, $F$ preserves identity $1$-cells and identity $2$-cells. The lax left and right unity axioms \eqref{f0-bicat} and the lax associativity axiom \eqref{f2-bicat} do not impose restrictions on $F$ if the latter involves at most two non-identity $1$-cells in $\ord{n}$. On the other hand, if \eqref{f2-bicat} involves three non-identity $1$-cells in $\ord{n}$, then it is the commutative diagram \eqref{duskin-cocycle}.
\end{proof}
\begin{explanation}\label{expl:duskin-cocycle}
The commutative diagram \eqref{duskin-cocycle} is a kind of $2$-cocycle condition that can be restated as the equality
\[\begin{tikzpicture}[xscale=2, yscale=2]
\draw[0cell]
(0,0) node (i) {X_i}
(.3,1) node (j) {X_j}
(1.7,1) node (k) {X_k}
(2,0) node (l) {X_l}
;
\draw[1cell]
(i) edge node{f_{ij}} (j)
(j) edge node{f_{jk}} (k)
(k) edge node{f_{kl}} (l)
(i) edge node[swap]{f_{il}} (l)
(j) edge node[swap, inner sep=0pt, pos=.7]{f_{jl}} (l)
;
\draw[2cell]
node[between=i and k at .25, rotate=-90, font=\Large] (t1) {\Rightarrow}
(t1) node[right] {\theta_{ijl}}
node[between=i and k at .8, rotate=-135, font=\Large] (t2) {\Rightarrow}
(t2) node[below right] {\theta_{jkl}}
;
\draw[0cell]
(2.5,.4) node[font=\huge] {=}
($(i)+(3,0)$) node (i2) {X_i}
($(i2)+(.3,1)$) node (j2) {X_j}
($(i2)+(1.7,1)$) node (k2) {X_k}
($(i2)+(2,0)$) node (l2) {X_l}
;
\draw[1cell]
(i2) edge node{f_{ij}} (j2)
(j2) edge node{f_{jk}} (k2)
(k2) edge node{f_{kl}} (l2)
(i2) edge node[swap]{f_{il}} (l2)
(i2) edge node[swap, inner sep=0pt, pos=.3]{f_{ik}} (k2)
;
\draw[2cell]
node[between=j2 and l2 at .75, rotate=-90, font=\Large] (t3) {\Rightarrow}
(t3) node[left] {\theta_{ikl}}
node[between=j2 and l2 at .25, rotate=-45, font=\Large] (t4) {\Rightarrow}
(t4) node[below left] {\theta_{ijk}}
;
\end{tikzpicture}\]
of pasting diagrams.\dqed
\end{explanation}
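The lowest-dimensional case of \Cref{duskin-n} beyond \Cref{duskin-0,duskin-1,duskin-2} is $n=3$: a $3$-simplex in $\DNer(\B)$ consists of four objects $X_0,\ldots,X_3$, six $1$-cells $f_{ij}$ for $0\leq i<j\leq 3$, and four $2$-cells $\theta_{012}$, $\theta_{013}$, $\theta_{023}$, and $\theta_{123}$, subject to the single instance
\[\theta_{013}\,(\theta_{123}*1_{f_{01}}) = \theta_{023}\,(1_{f_{23}}*\theta_{012})\,a \inspace \B(X_0,X_3)\]
of \eqref{duskin-cocycle}, namely the case $(i,j,k,l)=(0,1,2,3)$.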
\begin{example}\label{duskin-01}
The faces and degeneracy between the $0$-simplices and the $1$-simplices are as follows. For an object $X$ in $\B$, regarded as a $0$-simplex in $\DNer(\B)$, $s_0X$ is the identity $1$-cell $1_X$ in $\B(X,X)$. For a $1$-cell $f \in \B(X,Y)$, regarded as a $1$-simplex in $\DNer(\B)$, $d_0f = Y$ and $d_1f=X$. In \Cref{exer:duskin-face} the reader is asked to describe the other faces and degeneracies in the Duskin nerve.\dqed
\end{example}
\section{\texorpdfstring{$2$}{2}-Nerves}\label{sec:iinerve}
In this section we discuss another nerve construction, due to Lack and Paoli, that uses $2$-categorical structures of bicategories. Recall from \Cref{ex:2cat-of-cat} the $2$-category $\Cat$ of small categories, functors, and natural transformations. We regard the category $\Deltaop$ as a locally discrete $2$-category as in \Cref{ex:category-as-bicat}.
\begin{definition}\label{def:simplicial-cat}
Define the $2$-category $\Catdeltaop$\index{simplicial!category} with
\begin{description}
\item[Objects] $2$-functors $\Deltaop \to \Cat$,
\item[$1$-Cells] $2$-natural transformations between such $2$-functors,
\item[$2$-Cells] modifications between such $2$-natural transformations,
\end{description}
and other structures as in \Cref{def:bicat-of-lax-functors}.
\end{definition}
\begin{explanation}
A $2$-functor $\Deltaop \to \Cat$ is the same thing as a functor from the $1$-category $\Deltaop$ to the $1$-category $\Cat$ because there are no non-identity $2$-cells in $\Deltaop$. Moreover, $\Catdeltaop$ is a sub-$2$-category of the $2$-category $\Bicat(\Deltaop,\Cat)$ in \Cref{2cat-of-lax-functors}.\dqed
\end{explanation}
Recall from \Cref{thm:iicat-of-bicat} the $2$-category $\Bicatsupic$ with small bicategories as objects, strictly unitary pseudofunctors as $1$-cells, and icons as $2$-cells.
\begin{definition}\label{def:iinerve}
The \emph{$2$-nerve}\index{2-nerve}\index{nerve!2-} is the $2$-functor
\[\iiner : \Bicatsupic \to \Catdeltaop\]
defined as follows.
\begin{description}
\item[Objects] For a small bicategory $\B$, \[\iiner(\B) : \Deltaop \to \Cat\] is the functor with value-category\label{notation:iinerve-simplices}
\[\iiner(\B)_n = \iiner(\B)(\ord{n}) = \Bicatsupic(\ord{n},\B)\] for $n\geq 0$. For a morphism $f : \ord{m} \to \ord{n}$ in $\Delta$, the functor \[\iiner(\B)_n \to \iiner(\B)_m\] is induced by pre-composition with $f$.
\item[$1$-Cells] For a strictly unitary pseudofunctor $F : \B\to\C$, the $2$-natural transformation
\[\iiner(F) : \iiner(\B) \to \iiner(\C) : \Deltaop \to \Cat\] is induced by post-composition with $F$.
\item[$2$-Cells] For an icon $\alpha : F \to G$ with $F,G : \B\to\C$ strictly unitary pseudofunctors agreeing on objects, the modification \[\iiner(\alpha) : \iiner(F) \to \iiner(G)\] is induced by post-whiskering with $\alpha$.
\end{description}
This finishes the definition of the $2$-nerve.
\end{definition}
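Here is a sketch of the two lowest simplicial degrees, obtained by direct inspection of \Cref{def:iinerve}. A strictly unitary pseudofunctor $\ord{0} \to \B$ is determined by an object of $\B$, and an icon between two such pseudofunctors exists only when they are equal, in which case the icon unity axiom \eqref{icon-unity} forces it to be the identity. A strictly unitary pseudofunctor $\ord{1} \to \B$ is determined by objects $X_0,X_1$ and a $1$-cell $f \in \B(X_0,X_1)$, and an icon between two such pseudofunctors is precisely a $2$-cell between the corresponding parallel $1$-cells. Therefore, there are isomorphisms of categories
\[\iiner(\B)_0 \cong \big(\text{the discrete category of objects of $\B$}\big) \andspace \iiner(\B)_1 \cong \coprod_{X_0,X_1 \in \B} \B(X_0,X_1).\]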
In \Cref{exer:iinerve-iifunctor} the reader is asked to check that $\iiner$ is actually a $2$-functor. Next is the $2$-nerve analogue of \Cref{duskin-n}.
\begin{proposition}\label{iinerve-explicit}\index{characterization of!the 2-nerve}
Suppose $\B$ is a small bicategory, and $n \geq 0$.
\begin{enumerate}
\item Each object in $\iiner(\B)_n$ consists of precisely the following data.
\begin{itemize}
\item An object $X_i \in \B$ for each $0\leq i \leq n$.
\item A $1$-cell $f_{ij} \in \B(X_i,X_j)$ for each pair $i < j$ in $\ord{n}$.
\item An invertible $2$-cell $\theta_{ijk} : f_{jk}f_{ij} \to f_{ik}$ for each tuple $i < j < k$ in $\ord{n}$.
\end{itemize}
These data are required to make the diagram \eqref{duskin-cocycle} commutative for each tuple $i<j<k<l$ in $\ord{n}$.
\item Suppose
\begin{itemize}
\item $F=\big(\{X_i\}, \{f_{ij}\}, \{\theta_{ijk}\}\big)$ and
\item $G=\big(\{X_i\}, \{g_{ij}\}, \{\phi_{ijk}\}\big)$
\end{itemize}
are two objects in $\iiner(\B)_n$. Then a morphism $F \to G$ in $\iiner(\B)_n$ consists of precisely the following: $2$-cells \[\alpha_{ij} : f_{ij} \to g_{ij} \inspace \B(X_i,X_j)\] for $i<j$ in $\ord{n}$, such that the diagram
\begin{equation}\label{iinerve-morphism}
\begin{tikzcd}
f_{jk}f_{ij} \ar{d}[swap]{\theta_{ijk}} \ar{r}{\alpha_{jk}*\alpha_{ij}} & g_{jk}g_{ij} \ar{d}{\phi_{ijk}}\\
f_{ik} \ar{r}{\alpha_{ik}} & g_{ik}\end{tikzcd}
\end{equation}
in $\B(X_i,X_k)$ is commutative for all $i<j<k$ in $\ord{n}$.
\end{enumerate}
\end{proposition}
\begin{proof}
For the first assertion, an object in $\iiner(\B)_n$ is a strictly unitary pseudofunctor $F : \ord{n} \to \B$. We reuse the proofs and notations of \Cref{duskin-0,duskin-1,duskin-2,duskin-n}, and note that every instance of the lax functoriality constraint $F^2$ is an invertible $2$-cell, since $F$ is a pseudofunctor.
For the second assertion, a morphism $\alpha : F \to G$ in $\iiner(\B)_n$ is an icon, which requires the object parts of $F$ and $G$ to be the same. Such an icon consists of a natural transformation
\[\alpha : F \to G : \ord{n}(i,j) \to \B(Fi,Fj) = \B(X_i,X_j)\] for each pair of objects $i,j$ in $\ord{n}$. The naturality of $\alpha$ is an empty condition because $\ord{n}(i,j)$ is either empty or is a discrete category with one object. Since $\ord{n}(i,j)$ is empty for $i>j$, we only need to consider the cases $i \leq j$. If $i=j$, since $F$ and $G$ are strictly unitary, the icon unity axiom \eqref{icon-unity} says \[\alpha_{1_i} = 1_{1_{X_i}},\] which is the identity $2$-cell of $1_{X_i}$.
For $i<j$ in $\ord{n}$, $\alpha$ assigns to the unique object $x_{ij} \in \ord{n}(i,j)$ a $2$-cell \[\alpha_{ij} : Fx_{ij} = f_{ij} \to g_{ij} = Gx_{ij} \inspace \B(X_i,X_j).\] The diagram \eqref{iinerve-morphism} is the icon naturality axiom \eqref{icon-naturality} for $\alpha$.
\end{proof}
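To illustrate the cocycle condition in the first assertion, consider the smallest nontrivial case, the tuple $0<1<2<3$ in $\ord{3}$. Writing $a : (f_{23}f_{12})f_{01} \to f_{23}(f_{12}f_{01})$ for the relevant component of the associator of $\B$, and assuming the orientation conventions of \eqref{duskin-cocycle}, the commutativity of that diagram asserts the equality of vertical composites
\[\theta_{023}\big(1_{f_{23}} * \theta_{012}\big)a = \theta_{013}\big(\theta_{123} * 1_{f_{01}}\big) \inspace \B(X_0,X_3).\]
In other words, the two ways of composing $f_{01}$, $f_{12}$, and $f_{23}$ down to $f_{03}$ using the $2$-cells $\theta$ agree; this is an instance of the lax associativity condition for the corresponding pseudofunctor $\ord{3} \to \B$.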
In \Cref{exer:iinerve-face} the reader is asked to describe the faces and degeneracies in the $2$-nerve.
\section{Exercises and Notes}\label{sec:constructions-exercises}
\begin{exercise}\label{exer:constant-at-something}
Prove \Cref{constant-at-something}.
\end{exercise}
\begin{exercise}\label{exer:conof-ef}
In the setting of \Cref{constant-induced-transformation}, suppose $e \in \B(W,X)$ and $f \in \B(X,Y)$ are composable $1$-cells in a bicategory. Prove that \[\conof{f}\conof{e} = \conof{fe},\] where the left-hand side is the horizontal composite in \Cref{def:lax-tr-comp}.
\end{exercise}
\begin{exercise}\label{exer:sigmap}
Show that $\barof{\Sigma}$ in \eqref{sigmap-modification} satisfies the modification axiom.
\end{exercise}
\begin{exercise}\label{exer:iilimits-unique}
Prove \Cref{iilimits-unique}, i.e., that $2$-limits, if they exist, are unique up to isomorphism.
\end{exercise}
\begin{exercise}\label{exer:duskin-face}
Give an explicit description of all the faces and degeneracies in the Duskin nerve $\DNer(\B)$.
\end{exercise}
\begin{exercise}\label{exer:iinerve-iifunctor}
In \Cref{def:iinerve}, check that the $2$-nerve defines a $2$-functor from $\Bicatsupic$ to $\Catdeltaop$.
\end{exercise}
\begin{exercise}\label{exer:iinerve-face}
Give an explicit description of all the faces and degeneracies in the $2$-nerve $\iiner(\B)$, i.e., the functors among the categories $\iiner(\B)_n$ for $n \geq 0$ corresponding to the coface maps and the codegeneracy maps in $\Delta$.
\end{exercise}
\subsection*{Notes}
\begin{note}[General References]
For more discussion of bilimits and $2$-limits, the reader is referred to \cite{bkps,fiore,kelly-limits,lack-limits,power-bilimits,power-robinson,street-limits,street_fibrations,street_fibrations-correction}.
\end{note}
\begin{note}[Bicategorical Nerves and Variants]
For further properties of the Duskin nerve and the $2$-nerve, the reader is referred to the original papers \cite{duskin,lack-paoli}. In \cite{lack-paoli}, our $\Bicatsupic$ is denoted by \textbf{NHom}, and strictly unitary pseudofunctors are called \index{homomorphism!normal}\emph{normal homomorphisms}. More discussion of bicategorical nerves can be found in \cite{gurski-nerve}.
There are many variants of the Grothendieck nerve. For example, there are \index{nerve!homotopy coherent}homotopy coherent nerves \cite{cordier-porter} for categories and nerves for \index{nerve!strict $\omega$-category}strict $\omega$-categories \cite{verity}.
\end{note}
\begin{note}[Geometric Realizations]
For a bicategory, the geometric realizations of the Duskin nerve\index{Duskin nerve!geometric realization} and of the $2$-nerve\index{2-nerve!geometric realization} are connected by a zig-zag of homotopy equivalences. The reader is referred to \cite{ccg} for a proof of this fact.
\end{note}
\endinput
\chapter*{List of Main Facts}
\chap{\Cref{ch:categorical_prelim}}
\fact{Every set belongs to some universe.}{\ref{conv:universe}}
\fact{An adjunction can be characterized in terms of an extension property.}{p.\ \pageref{def:equivalences}}
\fact{A functor is an equivalence if and only if it is part of an adjoint equivalence. This is also equivalent to being fully faithful and essentially surjective.}{p.\ \pageref{def:equivalences}}
\fact{The Yoneda embedding is fully faithful by the Yoneda Lemma.}{\ref{yoneda-lemma}}
\fact{Left adjoints preserve colimits. Right adjoints preserve limits.}{\ref{preserve-limits}}
\fact{\thm{Mac Lane's Coherence Theorem} Every monoidal category $\C$ is adjoint equivalent to a strict monoidal category via strong monoidal functors $L \dashv R$ such that $RL=1_{\C}$.}{\ref{maclane-thm}}
\fact{Every symmetric monoidal category is adjoint equivalent to a strict symmetric monoidal category via strong symmetric monoidal functors.}{p.\ \pageref{monoidal-functor-symmetry}}
\fact{Every braided monoidal category is adjoint equivalent to a strict braided monoidal category via strong braided monoidal functors.}{p.\ \pageref{braided-coherence}}
\fact{Monoidal functors preserve monoids.}{\ref{exer:monoidal-functor-monoid}}
\fact{Symmetric monoidal functors preserve commutative monoids.}{\ref{symmonoidal-functor-cmonoid}}
\chap{\Cref{ch:2cat_bicat}}
\fact{The middle four exchange is satisfied in each bicategory.}{\ref{middle-four}}
\fact{A category is a locally discrete bicategory.}{\ref{ex:category-as-bicat}}
\fact{A monoidal category is a one-object bicategory.}{\ref{ex:moncat-bicat}}
\fact{For a category with all pullbacks, there is a bicategory with spans as 1-cells.}{\ref{ex:spans}}
\fact{There is a bicategory with rings as objects and bimodules as 1-cells.}{\ref{ex:bimodules}}
\fact{Every locally essentially small bicategory has a classifying category and a Picard groupoid.}{\ref{ex:picard-groupoid}}
\fact{There is a bicategory $\twovc$ of coordinatized 2-vector spaces.}{\ref{ex:two-vector-space}}
\fact{Identity 2-cells of identity 1-cells can be canceled in horizontal composition.}{\ref{bicat-unit-cancellation}}
\fact{The horizontal composite of two invertible 2-cells is invertible.}{\ref{hcomp-invertible-2cells}}
\fact{Every bicategory has the left unity property and the right unity property.}{\ref{bicat-left-right-unity}}
\fact{For an identity 1-cell, the left unitor is equal to the right unitor.}{\ref{bicat-l-equals-r}}
\fact{A 2-category can be described by an explicit list of axioms.}{\ref{2category-explicit}}
\fact{A locally small 2-category is a $\Cat$-category.}{\ref{2cat-cat-enriched-cat}}
\fact{Strict monoidal categories are one-object 2-categories.}{\ref{ex:strict-moncat-2cat}}
\fact{There is a locally partially ordered 2-category with sets as objects and relations as 1-cells.}{\ref{ex:relations}}
\fact{There is a 2-category $\Cat$ of small categories, functors, and natural transformations.}{\ref{ex:2cat-of-cat}}
\fact{For a monoidal category $\V$, there is a 2-category $\Cat_{\V}$ of small $\V$-categories, $\V$-functors, and $\V$-natural transformations.}{\ref{ex:2cat-of-enriched-cat}}
\fact{There is a 2-category $\twovtc$ of totally coordinatized 2-vector spaces.}{\ref{ex:twovect-tc}}
\fact{There is a 2-category $\Multicat$ of small multicategories, multifunctors, and multinatural transformations.}{\ref{multicat-2cat}}
\fact{There is a 2-category $\Polycat$ of small polycategories, polyfunctors, and polynatural transformations.}{\ref{polycat-2cat}}
\fact{Every bicategory has an opposite bicategory, a co-bicategory, and a coop-bicategory.}{\ref{Bcoop-bicat}}
\fact{For a category with all pushouts, there is a bicategory with cospans as 1-cells.}{\ref{exer:cospan}}
\fact{There is a locally partially ordered 2-category with sets as objects and partial functions as 1-cells.}{\ref{exer:partial-function}}
\fact{For each small category $\A$, there is a 2-category of categories over $\A$.}{\ref{exer:cat-over}}
\fact{There is a 2-category $\MonCat$ of small monoidal categories, monoidal functors, and monoidal natural transformations.}{\ref{exer:moncat}}
\fact{For each symmetric monoidal category $\V$, there is a 2-category of small $\V$-multicategories, $\V$-multifunctors, and $\V$-multinatural transformations.}{\ref{exer:enriched-multicat}}
\fact{For each symmetric monoidal category $\V$, there is a 2-category of small $\V$-polycategories, $\V$-polyfunctors, and $\V$-polynatural transformations.}{\ref{exer:enriched-polycat}}
\chap{\Cref{ch:pasting-string}}
\fact{\thm{2-Categorical Pasting Theorem} Every pasting diagram in a 2-category has a unique composite.}{\ref{thm:2cat-pasting-theorem}}
\fact{\thm{Moving Brackets Lemma} Any two bracketings of the same length are related by a canonical finite sequence of steps, each moving one pair of brackets to the left or to the right.}{\ref{moving-brackets}}
\fact{A bracketed graph admits a composition scheme extension if and only if its underlying anchored graph admits a pasting scheme presentation.}{\ref{bicat-pasting-existence}}
\fact{\thm{Mac Lane's Coherence Theorem} In a bicategory, for each finite composable sequence of 1-cells with two bracketings, the two composite 1-cells are related by a unique constraint 2-cell that is a vertical composite of whiskerings of 1-cells with components of the associator or their inverses.}{\ref{maclane-coherence}}
\fact{\thm{Bicategorical Pasting Theorem} Every pasting diagram in a bicategory has a unique composite.}{\ref{thm:bicat-pasting-theorem}}
\chap{\Cref{ch:functors}}
\fact{A 2-functor strictly preserves identity 1-cells, identity 2-cells, vertical composition of 2-cells, and horizontal compositions of 1-cells and 2-cells.}{\ref{iifunctor}}
\fact{Every bicategory has an identity strict functor.}{\ref{ex:identity-strict-functor}}
\fact{A (strong, resp., strict) monoidal functor yields a lax (pseudo, resp., strict) functor between one-object bicategories.}{\ref{ex:monfunctor-laxfunctor}}
\fact{There is a strictly unitary pseudofunctor $F : \twovtc \to \twovc$.}{\ref{ex:two-vector-strict-functor}}
\fact{Every object in a bicategory induces a constant pseudofunctor from any other bicategory.}{\ref{constant-pseudofunctor}}
\fact{Every pullback-preserving functor induces a strictly unitary pseudofunctor between the bicategories of spans.}{\ref{spans-functor}}
\fact{There is a 1-category with small bicategories as objects and lax functors as morphisms. The same is true with pseudofunctors, strict functors, or colax functors in place of lax functors.}{\ref{thm:cat-of-bicat}}
\fact{A 2-natural transformation is determined by component 1-cells that are natural with respect to 1-cells and 2-cells.}{\ref{iinatural-transformation}}
\fact{Every lax functor has an identity strong transformation.}{\ref{id-lax-transformation}}
\fact{Every oplax transformation is uniquely determined by a lax transformation between the opposite lax functors.}{\ref{strong-optransformation}}
\fact{Monoidal natural transformations are examples of oplax transformations.}{\ref{monnt-oplax-transformation}}
\fact{A modification is invertible if and only if it has a vertical composite inverse.}{\ref{invertible-modification}}
\fact{For two bicategories $\A$ and $\B$, there is a bicategory $\Bicat(\A,\B)$ with lax functors $\A \to \B$ as objects, lax transformations as 1-cells, and modifications as 2-cells.}{\ref{thm:bicat-of-lax-functors}}
\fact{$\Bicat(\A,\B)$ is a 2-category if $\B$ is a 2-category.}{\ref{2cat-of-lax-functors}}
\fact{$\Bicat(\A,\B)$ contains a sub-bicategory $\Bicatps(\A,\B)$ with pseudofunctors as objects and strong transformations as 1-cells.}{\ref{subbicat-pseudofunctor}}
\fact{Every object in a bicategory induces a representable pseudofunctor.}{\ref{representable-pseudofunctor}}
\fact{Every 1-cell in a bicategory induces a representable strong transformation.}{\ref{representable-transformation}}
\fact{Every 2-cell in a bicategory induces a representable modification.}{\ref{representable-modification}}
\fact{There is a canonical bijection between icons and oplax transformations with component identity 1-cells.}{\ref{icon-is-icon}}
\fact{Monoidal natural transformations are icons.}{\ref{ex:mnt-icon}}
\fact{There is a 2-category $\Bicatic$ with small bicategories as objects, lax functors as 1-cells, and icons as 2-cells. The same is true with pseudofunctors or strict functors in place of lax functors.}{\ref{thm:iicat-of-bicat}}
\fact{$\Bicatic$ contains a sub-2-category that can be identified with the 2-category $\MonCat$ of small monoidal categories.}{\ref{moncat-bicaticon}}
\fact{There is a 2-category $\iiCat$ of small 2-categories, 2-functors, and 2-natural transformations.}{\ref{exer:2cat-of-2cat}}
\chap{\Cref{ch:constructions}}
\fact{Lax bilimits and lax limits of a lax functor, and pseudo bilimits and pseudo limits of a pseudofunctor, are unique up to an equivalence and an invertible modification.}{\ref{thm:bilimit-uniqueness}}
\fact{Lax bicolimits and lax colimits of a lax functor, and pseudo bicolimits and pseudo colimits of a pseudofunctor, are unique up to an equivalence and an invertible modification.}{\ref{bicolimit-uniqueness}}
\fact{2-limits and 2-colimits are unique up to an isomorphism.}{\ref{iilimits-unique}}
\fact{The Duskin nerve restricts to the Grothendieck nerve.}{\ref{dnerve-category}}
\fact{An $n$-simplex in the Duskin nerve consists of a family of objects, 1-cells, and 2-cells that satisfy a cocycle condition.}{\ref{duskin-n}}
\fact{An object in the $n$th category of the 2-nerve of a bicategory consists of a family of objects, 1-cells, and invertible 2-cells that satisfy the same cocycle condition as in the Duskin nerve.}{\ref{iinerve-explicit}}
\chap{\Cref{ch:adjunctions}}
\fact{For two adjunctions with the same left adjoint, the right adjoints are canonically isomorphic.}{\ref{lemma:bicat-adj-unique}}
\fact{Pseudofunctors preserve adjunctions.}{\ref{proposition:adjunctions-preserved}}
\fact{Every adjunction induces a corepresented adjunction from each object.}{\ref{example:corepresented-adjunction}}
\fact{Every adjunction induces an adjunction in the opposite bicategory with the left and right adjoints switched.}{\ref{example:adjunctions-op}}
\fact{Taking mates yields a bijection of 2-cells.}{\ref{lemma:mate-pairs}}
\fact{Pseudofunctors preserve internal equivalences.}{\ref{proposition:equivalences-preserved}}
\fact{A 1-cell in a bicategory is an equivalence if and only if it is a member of an adjoint equivalence.}{\ref{proposition:equiv-via-isos}}
\fact{Internal equivalences in $\Bicatps(\B,\C)$ are invertible strong transformations.}{\ref{example:equiv-in-bicatpsAB}}
\fact{A 2-equivalence between locally small 2-categories is the same as a $\Cat$-enriched equivalence.}{\ref{lemma:2-equiv-Cat-equiv}}
\fact{A strong transformation between pseudofunctors is invertible if and only if each component 1-cell is invertible.}{\ref{proposition:adjoint-equivalence-componentwise}}
\fact{For a category $\C$ with all pullbacks, monads in the bicategory $\Span(\C)$ are internal categories in $\C$.}{\ref{example:internal-cat}}
\fact{For a small 2-category $\A$, a monad on $\A$ in the 2-category $\iiCat$ is the same as a $\Cat$-monad on $\A$.}{\ref{cat-monad-is-internal-monad}}
\chap{\Cref{ch:whitehead}}
\fact{Every lax slice of a lax functor is a bicategory.}{\ref{defprop:lax-slice}}
\fact{Whiskering with a 1-cell induces a strict functor between lax slice bicategories, called the change-of-slice functor.}{\ref{lemma:base-change-functor}}
\fact{For a lax functor that is essentially surjective, essentially full, and fully faithful, each lax slice bicategory has an inc-lax terminal object.}{\ref{proposition:lax-slice-lax-terminal}}
\fact{For a pseudofunctor that is essentially surjective, essentially full, and fully faithful, the change-of-slice functors preserve initial components.}{\ref{lemma:lax-slice-change-fiber}}
\fact{\thm{Bicategorical Quillen Theorem A} Suppose $F : \B\to\C$ is a lax functor such that (i) each lax slice bicategory has an inc-lax terminal object and that (ii) the change-of-slice functors preserve initial components. Then there is a lax functor $G : \C\to\B$ together with lax transformations $\eta : 1_{\B}\to GF$ and $\epz : FG \to 1_{\C}$.}{\ref{theorem:Quillen-A-bicat}}
\fact{\thm{Bicategorical Whitehead Theorem} A pseudofunctor between bicategories is a biequivalence if and only if it is essentially surjective on objects, essentially full on 1-cells, and fully faithful on 2-cells.}{\ref{theorem:whitehead-bicat}}
\fact{The strictly unitary pseudofunctor $F : \twovtc \to \twovc$ is a biequivalence.}{\ref{cor:two-vector-spaces}}
\fact{For a lax functor whose domain is a 2-category, every lax slice is a 2-category.}{\ref{proposition:lax-slice-2-cat}}
\fact{For a lax functor of 2-categories that is 1-essentially surjective on objects, 1-fully faithful on 1-cells, and fully faithful on 2-cells, each lax slice 2-category has an inc-lax terminal object whose initial components are 2-unitary 1-cells.}{\ref{inc-lax-terminal-2-cat}}
\fact{\thm{2-Categorical Quillen Theorem A} Suppose $F : \B\to\C$ is a lax functor of 2-categories such that (i) each lax slice 2-category has an inc-lax terminal object whose initial components are 2-unitary 1-cells, and that (ii) the change-of-slice functors preserve initial components. Then there is a lax functor $G : \C\to\B$ together with a lax transformation $\eta : 1_{\B}\to GF$ and a strict transformation $\epz : FG \to 1_{\C}$ such that the component 1-cells of $\eta$ and $\epz$ are isomorphisms.}{\ref{theorem:Quillen-A-2-cat}}
\fact{\thm{2-Categorical Whitehead Theorem} A 2-functor between 2-categories is a 2-equivalence if and only if it is 1-essentially surjective on objects, 1-fully faithful on 1-cells, and fully faithful on 2-cells.}{\ref{theorem:whitehead-2-cat}}
\chap{\Cref{ch:coherence}}
\fact{The Yoneda embedding is fully faithful.}{\ref{yoneda-unpacked-embedding}}
\fact{Each small bicategory has a Yoneda pseudofunctor.}{\ref{proposition:Yo-pseudo}}
\fact{The Yoneda pseudofunctor of a small 2-category is a 2-functor.}{\ref{Yo-2cat}}
\fact{\thm{Objectwise Bicategorical Yoneda Lemma} For each object in a small bicategory, evaluation is an equivalence of categories.}{\ref{yoneda-bicat-objectwise}}
\fact{\thm{Bicategorical Yoneda Embedding} The Yoneda pseudofunctor is a local equivalence.}{\ref{lemma:yoneda-embedding-bicat}}
\fact{\thm{Bicategorical Yoneda Lemma} For a pseudofunctor $F : \B^{\op} \to \Cat$ with $\B$ a small bicategory, evaluation yields an invertible strong transformation.}{\ref{lemma:yoneda-bicat}}
\fact{\thm{Bicategorical Coherence Theorem} Every bicategory is biequivalent to a 2-category.}{\ref{theorem:bicat-coherence}}
\chap{\Cref{ch:fibration}}
\fact{Cartesian lifts are unique up to an isomorphism. Cartesian morphisms are closed under composition.}{\ref{cartesian-properties}}
\fact{Fibrations are closed under composition of functors.}{\ref{fibration-composition}}
\fact{Isomorphisms of categories are Cartesian functors.}{\ref{cartesian-iso}}
\fact{There is a 2-category $\Fib(\C)$ with fibrations over $\C$ as objects, Cartesian functors as 1-cells, and vertical natural transformations as 2-cells. The same statement also holds for cloven fibrations and split fibrations.}{\ref{iicat-fibrations}}
\fact{Sending a cloven fibration to its underlying fibration is a 2-equivalence from the 2-category of cloven fibrations to the 2-category of fibrations.}{\ref{fibcl-fib-iiequivalence}}
\fact{Fibrations are closed under pullbacks in $\Cat$. Equivalences of categories are closed under pullbacks along fibrations.}{\ref{fibration-pullback}}
\fact{\thm{Grothendieck Fibration Theorem} There is a 2-monad $\funnyf$ on $\catoverc$ with the properties that (i) there is a canonical bijection between pseudo $\funnyf$-algebras and cloven fibrations, and that (ii) the same bijection yields a correspondence between strict $\funnyf$-algebras and split fibrations.}{\ref{fibration=psalgebra}}
\chap{\Cref{ch:grothendieck}}
\fact{The Grothendieck construction $\intf$ of a lax functor $F : \Cop\to\Cat$ with $\C$ small is a category.}{\ref{grothendieck-cat}}
\fact{If the lax functoriality constraint $F^2$ is invertible, then there is a cloven fibration $\Usubf : \intf\to\C$.}{\ref{grothendieck-is-fibration}}
\fact{For a pseudofunctor $F$, $\Usubf$ is a split fibration if and only if $F$ is a strict functor.}{\ref{strict-functor-split-fib}}
\fact{For a lax functor $F : \Cop\to\Cat$ with $\C$ small, $\intf$ is equipped with an oplax cone under $F$, with respect to which it is a lax colimit of $F$.}{\ref{thm:lax-grothendieck-lax-colimit}}
\fact{The Grothendieck construction $\intalpha$ of a strong transformation $\alpha : F \to G$ between lax functors is a functor.}{\ref{intalpha-functor}}
\fact{If $F^0$ and $G^2$ are invertible (e.g., if they are pseudofunctors), then $\intalpha$ is a Cartesian functor.}{\ref{intalpha-cartesian}}
\fact{The Grothendieck construction $\intgamma$ of a modification $\Gamma : \alpha \to \beta$ between strong transformations is a vertical natural transformation.}{\ref{intgamma-vertical}}
\fact{The Grothendieck construction $\sint$ defines a 2-functor from the 2-category $\Bicatps(\Cop,\Cat)$ of pseudofunctors, strong transformations, and modifications, to the 2-category $\Fib(\C)$ of fibrations over $\C$.}{\ref{grothendieck-iifunctor}}
\fact{The Grothendieck construction is 1-essentially surjective.}{\ref{grothendieck-one-esssurjective}}
\fact{The Grothendieck construction is 1-fully faithful.}{\ref{grothendieck-one-fullyfaithful}}
\fact{\thm{Grothendieck Construction Theorem} The Grothendieck construction is a 2-equivalence between 2-categories.}{\ref{thm:grothendieck-iiequivalence}}
\chap{\Cref{ch:tricat-of-bicat}}
\fact{The pre-whiskering $\alpha\whis F$ of a lax transformation $\alpha$ with a lax functor $F$ is a lax transformation, which is a strong transformation if $\alpha$ is so.}{\ref{pre-whiskering-transformation}}
\fact{The post-whiskering $H\whis\alpha$ of a lax transformation $\alpha$ with a lax functor $H$ with $H^2$ invertible is a lax transformation, which is a strong transformation if $\alpha$ is so.}{\ref{post-whiskering-transformation}}
\fact{Every bicategory is a locally discrete tricategory, with componentwise identity pentagonator and 2-unitors.}{\ref{ex:bicat-as-tricat}}
\fact{There is a tricategory with small bicategories as objects and $\Bicatps(\cdot,\cdot)$ as hom bicategories.}{\ref{thm:tricatofbicat}}
\fact{In every tricategory, the left 2-unitor and the right 2-unitor are uniquely determined by the rest of the tricategorical data.}{\ref{exer:tricat-lambda-rho}}
\chap{\Cref{ch:monoidal_bicat}}
\fact{A monoidal bicategory is a tricategory with one object.}{\ref{def:monoidal-bicat}}
\fact{The data of a monoidal bicategory consists of composition and identity pseudofunctors, together with associator, unitor, pentagonator, and three 2-unitors.}{\ref{expl:monoidal-bicat}}
\fact{The data of a braided monoidal bicategory consists of a monoidal bicategory, a braiding, and two hexagonators.}{\ref{def:braided-monbicat}}
\fact{A braided monoidal bicategory satisfies axioms corresponding to (3,1)-crossing, (1,3)-crossing, (2,2)-crossing, and the Yang-Baxter axiom.}{\ref{def:braided-monbicat}}
\fact{A sylleptic monoidal bicategory consists of a braided monoidal bicategory together with a syllepsis satisfying the (2,1)- and (1,2)-syllepsis axioms.}{\ref{def:sylleptic-monbicat}}
\fact{A symmetric monoidal bicategory consists of a sylleptic monoidal bicategory satisfying a triple-braiding axiom.}{\ref{def:symmetric-monbicat}}
\fact{Cubical functors out of $\C \times \D$ are in bijection with 2-functors out of the Gray tensor product $\C \otimes \D$.}{\ref{theorem:cub-gray-adj}}
\fact{The Gray tensor product $- \otimes \D$ is left adjoint to $\Hom(\D,-)$.}{\ref{corollary:gray-hom-adj}}
\fact{The category $\Gray = (\iiCat,\otimes,\Hom)$ is symmetric monoidal closed.}{\ref{theorem:Gray-is-symm-mon}}
\fact{A Gray monoid is a 2-category with additional data and axioms for the monoidal product.}{\ref{explanation:data-axioms-for-Gray-monoids}}
\fact{The one-object bicategory arising from a braided strict monoidal category is a Gray monoid.}{\ref{example:brmoncat-grmon}}
\fact{A double category satisfies unity properties analogous to those in a monoidal category.}{\ref{psdcat-left-right-unity}, \ref{psdcat-l-equals-r}}
\fact{In a double category $\D$, the objects, horizontal 1-cells, and globular 2-cells form a bicategory $\cH\D$.}{\ref{definition:horizontal-bicat}}
\fact{For a category $\C$ with all pullbacks, $\Span(\C)$ is the horizontal bicategory of a double category.}{\ref{ex:psd-spans}}
\fact{The bicategory of rings and bimodules, $\Bimod$, is the horizontal bicategory of a double category.}{\ref{ex:psd-bimodules}}
\fact{There is a $2$-category $\Dbl$ whose objects are small double categories, $1$-cells are lax functors, and $2$-cells are transformations.}{\ref{proposition:dbl-is-a-2-cat}}
\fact{Taking horizontal bicategories defines a product-preserving functor of 1-categories from $\Dbl$ to $\Bicat$.}{\ref{theorem:H-preserves-products}}
\fact{In a monoidal double category $\D$, both $\D_0$ and $\D_1$ are monoidal categories, which are braided, respectively symmetric, if $\D$ is. In the latter case, the functors $s$, $t$, and $i$ are braided, respectively symmetric, monoidal functors.}{\ref{explanation:mon-psd}}
\fact{The double category of spans and the double category of bimodules are examples of symmetric monoidal double categories.}{\ref{ex:psd-spans-bimodules-symm-monoidal}}
\endinput
\chapter{Grothendieck Fibrations}\label{ch:fibration}
In this chapter we discuss Grothendieck fibrations between categories and explain their connection to $2$-monads. Grothendieck fibrations, Cartesian morphisms, and some of their basic properties are discussed in \Cref{sec:iicat-fibrations}. The main observation of this chapter is \Cref{fibration=psalgebra}, which describes a canonical bijection between cloven fibrations over a small category $\C$ and pseudo $\funnyf$-algebras for a $2$-monad $\funnyf$ on the $2$-category $\catoverc$.
The $2$-monad $\funnyf$ is defined in \Cref{sec:iimonad-f-fibrations}. In \Cref{sec:fibration-pseudoalgebra} it is observed that every pseudo $\funnyf$-algebra has a canonically associated cloven fibration. The converse statement, that every cloven fibration has a canonically associated pseudo $\funnyf$-algebra, is proved in \Cref{sec:pseudo-alg-from-fib}. In \Cref{sec:fib=psalg} it is proved that these two assignments are inverses of each other, thereby establishing the bijective correspondence between cloven fibrations over $\C$ and pseudo $\funnyf$-algebras. Moreover, under this correspondence, split fibrations over $\C$ correspond to strict $\funnyf$-algebras.
In some of the diagrams below, we use the symbols $\exists$ and $!$ for \emph{there exists} and \emph{unique}, respectively.
\section{Cartesian Morphisms and Fibrations}\label{sec:iicat-fibrations}
The purpose of this section is to introduce Cartesian morphisms and fibrations. \Cref{fibration-pullback} states that fibrations are closed under pullbacks and that equivalences of categories are closed under pullbacks along fibrations.
\begin{definition}\label{def:fibration}
Suppose $P : \E \to \C$ is a functor between $1$-categories.
\begin{enumerate}
\item For an object or a morphism $X$ in $\E$, we write its image $P(X)$ also as\label{notation:xsubp} $\subof{X}{P}$.
\item A \index{pre-lift}\index{lift!pre-}\emph{pre-lift}\label{notation:prelift} is a pair $\prelift{Y}{f}$ consisting of
\begin{itemize}
\item an object $Y$ in $\E$ and
\item a morphism $f : A \to \subof{Y}{P}$ in $\C$.
\end{itemize}
\item A \emph{lift}\index{lift} of a pre-lift $\prelift{Y}{f}$ as above is a morphism\label{notation:liftoff} $\liftof{f} : \subof{Y}{f} \to Y$ in $\E$ such that \[\subof{(\subof{Y}{f})}{P} = A \andspace \subof{\liftof{f}}{P} = f,\] as in the following diagram.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.4]
\draw[0cell]
(0,0) node (Y) {Y}
($(Y)+(0,1)$) node (Yf) {\subof{Y}{f}}
($(Y)+(.3,.4)$) node (s) {}
($(s)+(.5,0)$) node (t) {}
($(t)+(.3,-.4)$) node (Yp) {\subof{Y}{P}}
($(Yp)+(0,1)$) node (A) {A}
;
\draw[1cell]
(Yf) edge[dashed] node[swap] {\liftof{f}} (Y)
(s) edge[|->] node {P} (t)
(A) edge node {f} (Yp)
;
\end{tikzpicture}\]
We also call $\liftof{f}$ a \emph{lift of $f$}. If we need to specify both the morphism $f$ and the object $Y$, we will denote such a lift by $\lift{\preliftyf}$.
\item For the pre-lift $\prelift{Y}{1_{\subof{Y}{P}}}$, the lift $1_Y$ is called the \index{unitary!lift}\index{lift!unitary}\emph{unitary lift}.
\item A \index{pre-raise}\index{raise!pre-}\emph{pre-raise}\label{notation:preraise} is a triple $\preraise{g}{h}{f}$ consisting of:
\begin{itemize}
\item two morphisms $g : X \to Y$ and $h : Z \to Y$ in $\E$ with the same codomain;
\item a morphism $f : \subof{Z}{P} \to \subof{X}{P}$ in $\C$ such that $\subof{g}{P} f=\subof{h}{P}$.
\end{itemize}
\item A \index{raise}\emph{raise} of a pre-raise $\preraise{g}{h}{f}$ is a morphism\label{notation:raiseoff} $\raiseof{f} : Z \to X$ in $\E$ such that
\[\subof{\raiseof{f}}{P} = f \andspace g\raiseof{f}=h,\]
as in the following diagram.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.6]
\draw[0cell]
(0,0) node (X) {X}
($(X)+(1,0)$) node (Y) {Y}
($(X)+(.5,.85)$) node (Z) {Z}
($(Y)+(.3,.4)$) node (s) {}
($(s)+(.5,0)$) node (t) {}
($(t)+(.3,-.4)$) node (Xp) {\subof{X}{P}}
($(Xp)+(1,0)$) node (Yp) {\subof{Y}{P}}
($(Xp)+(.5,.85)$) node (Zp) {\subof{Z}{P}}
;
\draw[1cell]
(X) edge node {g} (Y)
(Z) edge node {h} (Y)
(Z) edge[dashed] node[swap] {\raiseof{f}} (X)
(s) edge[|->] node {P} (t)
(Xp) edge node {\subof{g}{P}} (Yp)
(Zp) edge node {\subof{h}{P}} (Yp)
(Zp) edge node[swap] {f} (Xp)
;
\end{tikzpicture}\]
\item A morphism $g : X \to Y$ in $\E$ is called a \index{Cartesian!morphism}\index{morphism!Cartesian}\emph{Cartesian morphism} if every pre-raise $\preraise{g}{h}{f}$ has a unique raise.
\item A lift of a pre-lift $\prelift{Y}{f}$ is called a \index{Cartesian!lift}\index{lift!Cartesian}\emph{Cartesian lift} of $f$ if it is also a Cartesian morphism.
\item The functor $P$ is called a \emph{Grothendieck fibration}\index{Grothendieck!fibration} or a \index{fibration}\emph{fibration} if every pre-lift has a Cartesian lift. To emphasize the category $\C$, we call a fibration $P : \E\to\C$ a fibration \emph{over $\C$}. We sometimes denote such a fibration by just the functor $P$.
\item For a fibration $P$, a \emph{cleavage}\index{cleavage} is a choice of a Cartesian lift of each pre-lift, called the \index{chosen Cartesian lift}\index{Cartesian!lift!chosen}\emph{chosen Cartesian lift}. A fibration equipped with a cleavage is called a\index{cloven fibration}\index{fibration!cloven} \emph{cloven fibration}.
\item A cleavage for a fibration $P$ is called:
\begin{enumerate}[label=(\roman*)]
\item A \emph{unitary cleavage}\index{unitary!cleavage}\index{cleavage!unitary} if for each object $Y$ in $\E$, the chosen Cartesian lift of the pre-lift $\prelift{Y}{1_{\subof{Y}{P}}}$ is the unitary lift $1_Y$.
\item A \emph{multiplicative cleavage}\index{multiplicative cleavage}\index{cleavage!multiplicative} if for each object $Y$ in $\E$ and each pair of composable morphisms $g : A \to B$ and $f : B \to \subof{Y}{P}$ in $\C$, the composite in $\E$ on the left-hand side below
\begin{equation}\label{multiplicative-cleavage}
\begin{tikzcd}
\subof{(\subof{Y}{f})}{g} \ar{r}{\liftof{g}} & \subof{Y}{f} \ar{r}{\liftof{f}} & Y
\end{tikzcd}\qquad\qquad
\begin{tikzcd}
\subof{Y}{fg} \ar{r}{\liftof{fg}} & Y
\end{tikzcd}
\end{equation}
is equal to the chosen Cartesian lift $\liftof{fg}$ of the pre-lift $\prelift{Y}{fg}$, in which $\liftof{f}$ and $\liftof{g}$ are the chosen Cartesian lifts of the pre-lifts $\prelift{Y}{f}$ and $\prelift{\subof{Y}{f}}{g}$, respectively.
\item A \emph{split cleavage}\index{split!cleavage}\index{cleavage!split} if it is both unitary and multiplicative.
\end{enumerate}
\item A fibration equipped with a split cleavage is called a \index{split!fibration}\index{fibration!split}\emph{split fibration}.
\end{enumerate}
In the above concepts, if we need to emphasize the functor $P$, then we add the phrase \emph{with respect to $P$}.
\end{definition}
\begin{explanation}\label{expl:fibration}
In \Cref{def:fibration}:
\begin{itemize}
\item A lift of a pre-lift is not required to be unique. Even for a fibration, each pre-lift is only required to have a Cartesian lift; that Cartesian lift need not be unique.
\item On the other hand, for a Cartesian morphism $g$, each pre-raise $\preraise{g}{h}{f}$ is required to have a \emph{unique} raise.
\item We sometimes write the image $P(X)$ as $X_P$ because $X$ is pushed forward by the functor $P$, and a subscript is often used for an induced map that preserves the direction.
\item In the notation for a lift of a pre-lift $\prelift{Y}{f}$, the domain of $\liftof{f}$ is denoted by $\subof{Y}{f}$, so we can write $\subof{(\subof{Y}{f})}{g} = \subof{Y}{fg}$ in \eqref{multiplicative-cleavage}. Had we used a notation such as $A^f$ for the domain of a lift of $f : A \to \ysubp$, the previous equality would say $A^g = A^{fg}$.\dqed
\end{itemize}
\end{explanation}
Next we discuss some basic properties of Cartesian morphisms.
\begin{proposition}\label{cartesian-properties}
Suppose $P : \E\to\C$ is a functor. Suppose:
\begin{itemize}
\item $\prelift{Y}{f}$ is a pre-lift with $f : A \to \subof{Y}{P}$ a morphism in $\C$.
\item $\liftof{f} : \subof{Y}{f} \to Y$ is a Cartesian lift of $f$.
\item $g : X \to Y$ and $h : Y \to Z$ are morphisms in $\E$.
\end{itemize}
Then the following statements hold.
\begin{enumerate}
\item\label{cartesian-properties-0}
Every isomorphism in $\E$ is a Cartesian morphism.
\item\label{cartesian-properties-i}
The identity morphism $1_{\subof{Y}{f}}$ is the unique morphism $\subof{Y}{f}\to\subof{Y}{f}$ with the properties
\[\liftof{f} \circ 1_{\subof{Y}{f}} = \liftof{f} \andspace P(1_{\subof{Y}{f}}) = 1_A.\]
\item\label{cartesian-properties-ii}
If $g$ is a Cartesian morphism and if $e : X\to X$ is a morphism such that $ge=g$ and $\subof{e}{P}=1_{\subof{X}{P}}$, then $e=1_X$.
\item\label{cartesian-properties-iii}\index{Cartesian!lift!uniqueness}\index{uniqueness of!Cartesian lifts}
If $\liftof{f}' : \subof{Y}{f}' \to Y$ is another Cartesian lift of $f$, then there exists a unique isomorphism $\raiseof{1_A} : \subof{Y}{f}' \to \subof{Y}{f}$ such that the triangle
\begin{equation}\label{two-cartesian-lifts}
\begin{tikzcd}[row sep=small]
\subof{Y}{f}' \ar{rr}{\raiseof{1_A}}[swap]{\cong} \ar{dr}[swap]{\liftof{f}'} && \subof{Y}{f} \ar{dl}{\liftof{f}}\\
& Y &\end{tikzcd}
\end{equation}
in $\E$ commutes and that $P(\raiseof{1_A})=1_A$.
\item\label{cartesian-properties-iv}
If $g$ and $h$ are Cartesian morphisms, then so is the composite $hg : X \to Z$.
\item\label{cartesian-properties-v}
If $hg$ and $h$ are Cartesian morphisms, then so is $g$.
\item\label{cartesian-properties-vi}
If $g$ is a Cartesian morphism such that $\subof{g}{P}$ is an isomorphism in $\C$, then $g$ is an isomorphism.
\end{enumerate}
\end{proposition}
\begin{proof}
For assertion \eqref{cartesian-properties-0}, if $i$ is an isomorphism in $\E$, then each pre-raise $\preraise{i}{j}{k}$ has the unique raise $i^{-1}j$.
Assertion \eqref{cartesian-properties-i} holds because the pre-raise $\preraise{\liftof{f}}{\liftof{f}}{1_A}$ has a unique raise, which must be $1_{\subof{Y}{f}}$.
Assertion \eqref{cartesian-properties-ii} follows from assertion \eqref{cartesian-properties-i} applied to the pre-lift $\prelift{Y}{\subof{g}{P}}$ and the Cartesian lift $g$.
For assertion \eqref{cartesian-properties-iii}, since $\liftof{f}$ is Cartesian, the pre-raise $\preraise{\liftof{f}}{\liftof{f}'}{1_A}$ has a unique raise $\raiseof{1_A}$. It remains to show that $\raiseof{1_A}$ is an isomorphism. Switching the roles of $\liftof{f}$ and $\liftof{f}'$, the pre-raise $\preraise{\liftof{f}'}{\liftof{f}}{1_A}$ has a unique raise $(\raiseof{1_A})'$. These raises imply the equalities
\[\liftof{f} = \liftof{f}' (\raiseof{1_A})' = \liftof{f} \raiseof{1_A} (\raiseof{1_A})'.\]
Assertion \eqref{cartesian-properties-i} now implies \[\raiseof{1_A} (\raiseof{1_A})' = 1_{\subof{Y}{f}}.\] Switching the roles of $\liftof{f}$ and $\liftof{f}'$, we infer that \[(\raiseof{1_A})' \raiseof{1_A} = 1_{\subof{Y}{f}'}.\] Therefore, $\raiseof{1_A}$ is an isomorphism with inverse $(\raiseof{1_A})'$.
The proofs for assertions \eqref{cartesian-properties-iv} and \eqref{cartesian-properties-v} are left to the reader in \cref{exer:cartesian-properties}.
For assertion \eqref{cartesian-properties-vi}, the pre-raise $\preraise{g}{1_Y}{\subof{g}{P}^{-1}}$ has a unique raise $e : Y \to X$, which we will show is the inverse of $g$. We already know that $ge=1_Y$, so it remains to show that $eg=1_X$. Since $ge=1_Y$ is Cartesian, the previous assertion implies that $e$ is also Cartesian. The pre-raise $\preraise{e}{1_X}{\subof{g}{P}}$ has a unique raise $h : X \to Y$. The equalities
\[h = 1_Y h = (ge)h = g(eh) = g1_X = g\]
now imply $eg = eh = 1_X$.
\end{proof}
Some examples and properties of fibrations follow.
\begin{example}[Identities]\label{ex:fibration-id-morphism}
Each identity morphism in $\E$ is a Cartesian morphism with respect to each functor with domain $\E$. Therefore, each fibration has a unitary cleavage. Moreover, the identity functor of $\E$ is a split fibration.\dqed
\end{example}
\begin{example}[Terminal category]\label{ex:fibration-terminal}
The unique functor $\E\to\boldone$ to the terminal category\index{category!terminal} $\boldone=\{*\}$ is a split fibration, since each pre-lift $\prelift{Y}{1_*}$ has the unitary lift $1_Y$ as a Cartesian lift.\dqed
\end{example}
\begin{proposition}\label{fibration-product}
For $1$-categories $\C$ and $\D$, the first-factor projection
\[P : \C\times\D\to\C\] is a split fibration.
\end{proposition}
\begin{proof}
Each morphism $(g,1)\in\C\times\D$ whose second component is an identity morphism in $\D$ is a Cartesian morphism, since a pre-raise $\preraise{(g,1)}{(h_1,h_2)}{r}$ has the unique raise $(r,h_2)$. For a pre-lift $\prelift{(X,Y)}{f}$, with an object $(X,Y)\in\C\times\D$ and a morphism $f : A \to X$ in $\C$, a Cartesian lift is given by $(f,1_Y)$. This cleavage is split because the chosen lift of $1_X$ is $1_{(X,Y)}$ and $(f,1_Y)(g,1_Y)=(fg,1_Y)$.
\end{proof}
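The componentwise raise in this proof can be verified by brute force on a small example. The following Python sketch is illustrative only, using our own finite encoding rather than anything from the text: it models the product of two poset categories with the first-factor projection and checks the unique-raise condition of a Cartesian morphism directly.

```python
# Illustrative sketch only (our own finite encoding, not from the text).
# An arrow of a poset category is a (dom, cod) pair; (A, A) is 1_A.
C_arrows = [("a", "a"), ("b", "b"), ("a", "b")]   # the poset a <= b
D_arrows = [("x", "x"), ("y", "y"), ("x", "y")]   # the poset x <= y
D_objects = ["x", "y"]

# The product category: arrows are pairs, and the first-factor
# projection P sends an arrow (c, d) to c.
E_arrows = [(c, d) for c in C_arrows for d in D_arrows]

def comp(g, f):
    """Composite g . f of two poset arrows, or None if not composable."""
    return (f[0], g[1]) if f[1] == g[0] else None

def raises(g, h, r):
    """All raises of the pre-raise <g, h, r> with respect to P:
    arrows m with P(m) = r and g . m = h."""
    return [(c, d) for (c, d) in E_arrows
            if c == r and comp(g[0], c) == h[0] and comp(g[1], d) == h[1]]

def is_cartesian(g):
    """Brute-force check that every pre-raise on g has a unique raise."""
    for h in E_arrows:
        if (h[0][1], h[1][1]) != (g[0][1], g[1][1]):
            continue                      # h must share g's codomain
        for r in C_arrows:
            if r[0] == h[0][0] and comp(g[0], r) == h[0]:  # valid pre-raise
                if len(raises(g, h, r)) != 1:
                    return False
    return True
```

In this encoding every arrow with identity second component passes `is_cartesian`, matching the proof's claim, while an arrow such as `(("a","a"), ("x","y"))` does not, because the pre-raise forcing a backwards second component has no raise.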
The proofs for the following two observations are left to \cref{exer:fibration-fromone}.
\begin{proposition}\label{fibration-fromone}
A functor $P : \boldone \to \C$ is a fibration if and only if the only morphism in $\C$ with codomain $P(*)$ is the identity morphism of $P(*)$.
\end{proposition}
\begin{proposition}\label{fibration-composition}\index{fibration!closure under composition}
Fibrations are closed under composition of functors. In other words, if $P : \E\to\C$ and $Q : \C\to\D$ are fibrations, then so is the composite $QP : \E\to\D$.
\end{proposition}
\begin{explanation}\label{expl:split-fib-not-preserved}
In \Cref{fibration-composition}, even if both $P$ and $Q$ are split fibrations, it does not automatically follow that the composite $QP$ has a split cleavage. The issue is that, given a pre-lift $\prelift{Y}{f}$ with respect to $QP$, its chosen Cartesian lift $\liftof{f}_{QP}$ is not necessarily equal to $(\liftof{f}_Q)_P$, with:
\begin{itemize}
\item $\liftof{f}_Q$ the chosen Cartesian lift of $\prelift{\subof{Y}{P}}{f}$ with respect to $Q$;
\item $(\liftof{f}_Q)_P$ the chosen Cartesian lift of $\prelift{Y}{\liftof{f}_Q}$ with respect to $P$.
\end{itemize}
If it is the case that $\liftof{f}_{QP} = (\liftof{f}_Q)_P$ for every pre-lift $\prelift{Y}{f}$ with respect to $QP$, then the composite $QP$ is a split fibration.\dqed
\end{explanation}
\begin{example}[Equivalences]\label{ex:fibration-nonexample}
An equivalence of $1$-categories is \emph{not} necessarily a fibration. For example, consider the category $\{0 \rightleftarrows 1\}$ with only two objects $0$ and $1$, and only two non-identity isomorphisms $0\to 1$ and $1 \to 0$. The equivalence
\[\begin{tikzcd}
\boldone \ar{r}{\simeq} & \{0 \rightleftarrows 1\}\end{tikzcd}\]
that sends $*\in\boldone$ to $0$ is \emph{not} a fibration by \Cref{fibration-fromone}.\dqed
\end{example}
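This non-example is small enough to check exhaustively. In the following Python sketch, which uses our own dictionary encoding and hypothetical names rather than anything from the text, the pre-lift whose $\C$-morphism is $j : 1 \to 0$ has no lift at all.

```python
# Illustrative sketch only (our own encoding, not from the text).
# The equivalence 1 -> {0 <=> 1} sends * to 0, so its only morphism
# image is 1_0.  Morphisms are encoded as name -> (dom, cod).
one_morphisms = {"1*": ("*", "*")}            # the terminal category
target = {"1_0": ("0", "0"), "1_1": ("1", "1"),
          "i": ("0", "1"), "j": ("1", "0")}   # 0 <=> 1, with j = i^{-1}
F = {"1*": "1_0"}                             # F on morphisms: * |-> 0

def lifts(f):
    """Morphisms of the domain category whose F-image is f."""
    return [m for m in one_morphisms if F[m] == f]

# Pre-lifts for this functor correspond to morphisms into 0 = F(*).
into_zero = [f for f, (dom, cod) in target.items() if cod == "0"]
missing = [f for f in into_zero if not lifts(f)]
```

The morphism `j` lies in `missing`, so the pre-lift $\prelift{*}{j}$ has no lift, in agreement with the criterion of \Cref{fibration-fromone}.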
\begin{example}[Surjections]\label{ex:fibration-surjection}
A functor $P : \E\to\C$ that is surjective on objects and morphism sets is \emph{not} necessarily a fibration. For example, consider the functor $P : \E\to\C$ described by the following diagram.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.6]
\draw[0cell]
(0,0) node (X) {X}
($(X)+(1,0)$) node (Y) {Y}
($(X)+(.5,.85)$) node (Z) {Z}
($(Y)+(.3,.4)$) node (s) {}
($(s)+(.5,0)$) node (t) {}
($(t)+(.3,-.4)$) node (Xp) {\subof{X}{P}}
($(Xp)+(1,0)$) node (Yp) {\subof{Y}{P}}
($(Xp)+(.5,.85)$) node (Zp) {\subof{Z}{P}}
;
\draw[1cell]
(X) edge node[swap] {g} (Y)
(Z) edge node {h} (Y)
(Z) edge[bend right=15] node[swap] {f_1} (X)
(Z) edge[bend left=15] node[pos=.3] {f_2} (X)
(s) edge[|->] node {P} (t)
(Xp) edge node {\subof{g}{P}} (Yp)
(Zp) edge node {\subof{h}{P}} (Yp)
(Zp) edge node[swap] {\subof{f}{P}} (Xp)
;
\end{tikzpicture}\]
In other words:
\begin{itemize}
\item $\E$ has three objects $\{X,Y,Z\}$, and four non-identity morphisms as displayed on the left-hand side above such that $gf_1=gf_2=h$.
\item $\C$ has three objects $\{\subof{X}{P},\subof{Y}{P},\subof{Z}{P}\}$, and three non-identity morphisms as displayed on the right-hand side above such that $\subof{g}{P}\subof{f}{P}=\subof{h}{P}$.
\item The functor $P$ is determined by adding the subscript $P$ and sending both $f_1$ and $f_2$ to $\subof{f}{P}$.
\end{itemize}
The functor $P$ is surjective on both objects and morphisms. The pre-lift $\prelift{Y}{\subof{g}{P}}$ has a unique lift $g$. However, the pre-raise $\preraise{g}{h}{\subof{f}{P}}$ has two different raises $f_1$ and $f_2$. So $g$ is not a Cartesian morphism in $\E$, and the pre-lift $\prelift{Y}{\subof{g}{P}}$ does not have a Cartesian lift.\dqed
\end{example}
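The failure in this example can be checked mechanically. The following Python sketch, using our own dictionary encoding of the finite categories above (not taken from the text), enumerates the raises of the pre-raise $\preraise{g}{h}{\subof{f}{P}}$ and finds both $f_1$ and $f_2$.

```python
# Illustrative sketch only (our own encoding, not from the text).
# Morphisms of E: name -> (dom, cod); identities are named "1X" etc.
E = {"1X": ("X", "X"), "1Y": ("Y", "Y"), "1Z": ("Z", "Z"),
     "g": ("X", "Y"), "h": ("Z", "Y"),
     "f1": ("Z", "X"), "f2": ("Z", "X")}

# Non-identity composites in E: g.f1 = g.f2 = h.
E_comp = {("g", "f1"): "h", ("g", "f2"): "h"}

def comp(q, p):
    """Composite q . p in E, or None if not composable."""
    if E[p][1] != E[q][0]:
        return None
    if q.startswith("1"):
        return p
    if p.startswith("1"):
        return q
    return E_comp.get((q, p))

# The functor P on morphisms; f1 and f2 are both sent to fP.
P = {"1X": "1Xp", "1Y": "1Yp", "1Z": "1Zp",
     "g": "gP", "h": "hP", "f1": "fP", "f2": "fP"}

def raises(g, h, r):
    """All raises of the pre-raise <g, h, r> with respect to P."""
    return [m for m in E
            if E[m] == (E[h][0], E[g][0])  # m : dom(h) -> dom(g)
            and P[m] == r                  # P-image is r
            and comp(g, m) == h]           # g . m = h
```

Since `raises("g", "h", "fP")` returns two morphisms, the raise is not unique and `g` is not Cartesian, exactly as argued above.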
Functors and natural transformations in the context of fibrations are defined next.
\begin{definition}\label{def:cartesian-functor}
Suppose $P : \E\to\C$ and $P' : \E' \to \C$ are functors.
\begin{enumerate}
\item A \emph{Cartesian functor}\index{Cartesian!functor}\index{functor!Cartesian}
\[
(P : \E\to\C) \fto{F} (P' : \E'\to\C)
\]
is a functor $F : \E\to\E'$ that satisfies the following two conditions.
\begin{enumerate}[label=(\roman*)]
\item $P'F=P$.
\item $F$ sends Cartesian morphisms in $\E$ to Cartesian morphisms in $\E'$.
\end{enumerate}
We usually denote such a Cartesian functor by either
\[F : \E\to\E' \orspace F : P \to P',\]
depending on whether we want to emphasize the domain categories or the functors.
\item Suppose $F,G : \E\to\E'$ are functors such that $P'F=P=P'G$. A natural transformation $\theta : F \to G$ is called a \index{vertical natural transformation}\index{natural transformation!vertical}\emph{vertical natural transformation} if
\begin{equation}\label{vertical-natural-tr}
1_{P'}*\theta = 1_P.\defmark
\end{equation}
\end{enumerate}
\end{definition}
\begin{explanation}\label{expl:vertical-nt}
A natural transformation $\theta : F \to G$ is vertical if the pasting diagram
\[\begin{tikzpicture}[xscale=2.5, yscale=1.6]
\draw[0cell]
(0,0) node (e) {\E}
($(e)+(1,0)$) node (ep) {\E'}
($(e)+(.5,-1)$) node (c) {\C}
;
\draw[1cell]
(e) edge[bend left] node {F} (ep)
(e) edge[bend right] node[swap] {G} (ep)
(e) edge node[swap] (p) {P} (c)
(ep) edge node (pp) {P'} (c)
;
\draw[2cell]
node[between=e and ep at .45, rotate=-90, font=\Large] (th) {\Rightarrow}
(th) node[right] {\theta}
node[between=p and pp at .5, rotate=0, font=\Large] () {=}
;
\end{tikzpicture}\]
has composite the identity natural transformation $1_P$.\dqed
\end{explanation}
\begin{proposition}\label{cartesian-iso}\index{Cartesian!functor!isomorphism of categories}
Suppose
\[\begin{tikzcd}[column sep=small]
\E \ar{rr}{G}[swap]{\iso} \ar{dr}[swap]{P} && \E' \ar{dl}{P'}\\
& \C &
\end{tikzcd}\]
is a commutative diagram of functors with $G$ an isomorphism of categories. Then:
\begin{enumerate}
\item $G$ is a Cartesian functor.
\item The inverse functor of $G$ is also a Cartesian functor.
\end{enumerate}
\end{proposition}
\begin{proof}
Write $H : \E'\to\E$ for the inverse functor of $G$. We first show that $H$ is a Cartesian functor. There are equalities
\[P' = P'1_{\E'} = P'GH = PH.\]
For the other property of a Cartesian functor, suppose $p\in \E'$ is a Cartesian morphism with respect to $P'$. We must show that $Hp \in\E$ is a Cartesian morphism. Each pre-raise $R = \preraise{Hp}{q}{r}$ with respect to $P$ yields a pre-raise $R'=\preraise{p}{Gq}{r}$ with respect to $P'$ by the equalities
\[(P'p)r = (PHp)r = Pq = P'Gq.\] Since $p$ is a Cartesian morphism by assumption, the pre-raise $R'$ has a unique raise $s$. Then $Hs\in \E$ is a raise of the pre-raise $R$ because
\[PHs = P's =r \andspace (Hp)(Hs)=H(ps)=HGq=q.\]
To see that $Hs$ is the unique raise of $R$, suppose $t$ is another raise of $R$. Similar to the equalities in the previous displayed line, $Gt$ is a raise of $R'$. The uniqueness of $s$ implies that $Gt=s$, so $t=Hs$. We have shown that $R$ has a unique raise, and $Hp$ is a Cartesian morphism. Therefore, $H$ is a Cartesian functor.
Reversing the roles of $G$ and $H$, we conclude that $G$ is a Cartesian functor.
\end{proof}
\begin{theorem}\label{iicat-fibrations}\index{2-category!of fibrations}\index{fibration!2-category}
Suppose $\C$ is a small category. Then there is a $2$-category $\Fibof{\C}$ with:
\begin{description}
\item[Objects] Fibrations $P : \E\to\C$ with $\E$ a small category.
\item[$1$-Cells] Cartesian functors between such fibrations.
\item[$2$-Cells] Vertical natural transformations between such Cartesian functors.
\item[Identity $1$-Cells] Identity functors.
\item[Vertical Composition and Identity $2$-Cells] Those of natural transformations.
\item[Horizontal Composition] Those of functors and natural transformations.
\end{description}
Furthermore, there are similar $2$-categories\label{notation:fibclofc} $\fibclofc$ and $\fibspofc$ whose objects are \index{cloven fibration!2-category}cloven fibrations and \index{split!fibration!2-category}split fibrations with small domain categories, respectively.
\end{theorem}
\begin{proof}
For $\fibofc$, we just need to check that it is a sub-$2$-category of the $2$-category $\overcat{\Cat}{\C}$ in \Cref{exer:cat-over}. The vertical composite of two vertical natural transformations is a vertical natural transformation by the middle four exchange \eqref{middle-four} for natural transformations. That the composite of two Cartesian functors is a Cartesian functor follows from the definition. The horizontal composite of two vertical natural transformations $\theta$ and $\theta'$ is a vertical natural transformation because the pasting diagram
\[\begin{tikzpicture}[xscale=2, yscale=1.6]
\draw[0cell]
(0,0) node (e) {\E}
($(e)+(1,0)$) node (ep) {\E'}
($(ep)+(0,-.5)$) node (s) {}
($(ep)+(1,0)$) node (epp) {\E''}
($(ep)+(0,-1)$) node (c) {\C}
;
\draw[1cell]
(e) edge[bend left] node {F} (ep)
(e) edge[bend right] node[swap] {G} (ep)
(ep) edge[bend left] node {F'} (epp)
(ep) edge[bend right] node[swap] {G'} (epp)
(e) edge[out=-90, in=180] node[swap] (p) {P} (c)
(ep) edge node (pp) {P'} (c)
(epp) edge[out=-90, in=0] node (ppp) {P''} (c)
;
\draw[2cell]
node[between=e and ep at .45, rotate=-90, font=\Large] (th) {\Rightarrow}
(th) node[right] {\theta}
node[between=ep and epp at .4, rotate=-90, font=\Large] (thp) {\Rightarrow}
(thp) node[right] {\theta'}
node[between=s and p at .5, rotate=0, font=\Large] () {=}
node[between=s and ppp at .5, rotate=0, font=\Large] () {=}
;
\end{tikzpicture}\]
has composite the identity natural transformation $1_P$.
For $\fibclofc$ and $\fibspofc$, we reuse the previous paragraph and observe that all the conditions in \Cref{2category-explicit} for a $2$-category are satisfied.
\end{proof}
\begin{corollary}\label{fibcl-fib-iiequivalence}
There is a $2$-equivalence of $2$-categories \[U : \fibclofc \to \fibofc\] that sends:
\begin{itemize}
\item each cloven fibration to the underlying fibration;
\item each $1$-/$2$-cell to itself.
\end{itemize}
\end{corollary}
\begin{proof}
The stated assignments form a $2$-functor by \Cref{iifunctor}. Since each fibration has a cleavage by definition, $U$ is surjective on objects. Since $U$ is the identity on $1$-cells and $2$-cells, the $2$-categorical Whitehead \Cref{theorem:whitehead-2-cat} implies that it is a $2$-equivalence.
\end{proof}
Fibrations have the following closure properties with respect to pullbacks.
\begin{theorem}\label{fibration-pullback}\index{fibration!closure under pullbacks}
Suppose given a pullback diagram
\[\begin{tikzcd}
\D\timesover{\C}\E \ar{d}[swap]{Q_1} \ar{r}{Q_2} & \E\ar{d}{P}\\
\D\ar{r}{F} & \C
\end{tikzcd}\]
in the $1$-category $\Cat$ with $P$ a fibration.
\begin{enumerate}
\item\label{fibration-closed-pullback} Then the first-factor projection $Q_1$ is also a fibration.
\item\label{fibration-weak-equivalence}\index{equivalence!closure under pullbacks along a fibration} If $F$ is an equivalence of categories, then the second-factor projection $Q_2$ is also an equivalence of categories.
\end{enumerate}
\end{theorem}
\begin{proof}
An object in $\D\times_{\C}\E$ is a pair $(X,Y)$ with $X$ an object in $\D$ and $Y$ an object in $\E$ such that $\subof{X}{F}=\subof{Y}{P}$, and similarly for morphisms.
For the first assertion, suppose $\prelift{(X,Y)}{f}$ is a pre-lift with respect to $Q_1$, with
\begin{itemize}
\item $(X,Y)$ an object in $\D\times_{\C}\E$ and
\item $f : W \to X=\subof{(X,Y)}{Q_1}$ a morphism in $\D$.
\end{itemize}
We must show that it has a Cartesian lift with respect to $Q_1$. Consider the pre-lift $\prelift{Y}{\subof{f}{F}}$ with respect to $P$, which is well-defined because
\[\subof{X}{F}= \subof{(X,Y)}{FQ_1}=\subof{(X,Y)}{PQ_2}=\subof{Y}{P}.\] Since $P$ is a fibration, $\prelift{Y}{\subof{f}{F}}$ has a Cartesian lift $\liftof{\subof{f}{F}} : Z \to Y$ in $\E$ with
\[\subof{Z}{P} = \subof{W}{F} \andspace \subof{(\liftof{\subof{f}{F}})}{P}= \subof{f}{F},\]
which implies $(f,\liftof{\subof{f}{F}}) \in \D\times_{\C}\E$. Since $\subof{(f,\liftof{\subof{f}{F}})}{Q_1}=f$, it remains to show that $(f,\liftof{\subof{f}{F}})$ is a Cartesian morphism with respect to $Q_1$.
Suppose given a pre-raise $\preraise{(f,\liftof{\subof{f}{F}})}{(h_1,h_2)}{g}$ with respect to $Q_1$ as in the following diagram.
\[\begin{tikzpicture}[xscale=2.7, yscale=1.8]
\draw[0cell]
(0,0) node (wz) {(W,Z)}
($(wz)+(1,0)$) node (xy) {(X,Y)}
($(wz)+(.5,.85)$) node (uv) {(U,V)}
($(xy)+(.3,.4)$) node (s) {}
($(s)+(.5,0)$) node (t) {}
($(t)+(.3,-.4)$) node (w) {W}
($(w)+(1,0)$) node (x) {X}
($(w)+(.5,.85)$) node (u) {U}
;
\draw[1cell]
(wz) edge node {(f,\liftof{\subof{f}{F}})} (xy)
(uv) edge node {(h_1,h_2)} (xy)
(uv) edge[dashed] node[swap] {(g,\exists !\,\raiseof{g}?)} (wz)
(s) edge[|->] node {Q_1} (t)
(w) edge node {f} (x)
(u) edge node {h_1} (x)
(u) edge node[swap] {g} (w)
;
\end{tikzpicture}\]
We must show that it has a unique raise with respect to $Q_1$. In other words, we must show that there is a unique morphism $\raiseof{g} : V \to Z$ in $\E$ such that
\[\liftof{\subof{f}{F}} \raiseof{g}=h_2 \andspace \subof{g}{F}=\subof{(\raiseof{g})}{P}.\] There is a pre-raise $\preraise{\liftof{\subof{f}{F}}}{h_2}{\subof{g}{F}}$ with respect to $P$ as in the following diagram.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.6]
\draw[0cell]
(0,0) node (z) {Z}
($(z)+(1,0)$) node (y) {Y}
($(z)+(.5,.85)$) node (v) {V}
($(y)+(.3,.4)$) node (s) {}
($(s)+(.5,0)$) node (t) {}
($(t)+(.3,-.4)$) node (w) {\subof{Z}{P}=\subof{W}{F}}
($(w)+(1,0)$) node (x) {\subof{Y}{P}=\subof{X}{F}}
($(w)+(.5,.85)$) node (u) {\subof{V}{P}=\subof{U}{F}}
;
\draw[1cell]
(z) edge node {\liftof{\subof{f}{F}}} (y)
(v) edge node {h_2} (y)
(v) edge[dashed] node[swap] {\exists !\,\raiseof{g}} (z)
(s) edge[|->] node {P} (t)
(w) edge node {\subof{f}{F}} (x)
(u) edge node {\subof{(h_2)}{P}=\subof{(h_1)}{F}} (x)
(u) edge node[swap] {\subof{g}{F}} (w)
;
\end{tikzpicture}\]
Since $\liftof{\subof{f}{F}}$ is Cartesian with respect to $P$, there is a unique raise $\raiseof{g} : V \to Z$. Then the pair $(g,\raiseof{g})$ is the desired unique raise of $\preraise{(f,\liftof{\subof{f}{F}})}{(h_1,h_2)}{g}$ with respect to $Q_1$. This proves the first assertion.
For the second assertion, we check that $Q_2$ is (i) essentially surjective and (ii) fully faithful. For essential surjectivity, suppose $Y$ is an object in $\E$. Since the equivalence $F$ is essentially surjective, there exist
\begin{itemize}
\item an object $X$ in $\D$ and
\item an isomorphism $g : \subof{X}{F} \iso \subof{Y}{P}$ in $\C$.
\end{itemize}
The pre-lift $\prelift{Y}{g}$ with respect to $P$ has a Cartesian lift, say $\subof{g}{Y} : \subof{Y}{g} \to Y$ in $\E$, so
\[\subof{(\subof{Y}{g})}{P} = \subof{X}{F} \andspace \subof{(\subof{g}{Y})}{P} = g.\]
Since $g$ is an isomorphism in $\C$, its Cartesian lift $\subof{g}{Y}$ is an isomorphism by \Cref{cartesian-properties}\eqref{cartesian-properties-vi}. Therefore, there is an object $(X,\subof{Y}{g}) \in \D\times_{\C}\E$ and an isomorphism
\[\begin{tikzcd}
Q_2(X,\subof{Y}{g}) = \subof{Y}{g} \ar{r}{\subof{g}{Y}}[swap]{\iso} & Y,\end{tikzcd}\]
proving the essential surjectivity of $Q_2$.
To show that $Q_2$ is fully faithful, suppose $(X_0,Z_0)$ and $(X_1,Z_1)$ are two objects in $\D\times_{\C}\E$. There is a commutative diagram
\[\begin{tikzcd}[column sep=tiny]
\big(\D\timesover{\C}\E\big)\big((X_0,Z_0),(X_1,Z_1)\big) \ar{d}[swap]{Q_1} \ar{r}{Q_2} & \E(Z_0,Z_1) \ni f \ar{d}{P}\\
\D(X_0,X_1) \ar{r}{F}[swap]{\iso} & \C(\subof{X}{0,F},\subof{X}{1,F}) = \C(\subof{Z}{0,P},\subof{Z}{1,P}) \ni \subof{f}{P}
\end{tikzcd}\]
of functions, in which $\subof{X}{0,F} = \subof{(X_0)}{F}$ and similarly for $\subof{X}{1,F}$, $\subof{Z}{0,P}$, and $\subof{Z}{1,P}$. Suppose $f \in \E(Z_0,Z_1)$. We must show that there exists a unique morphism $h$ in the upper-left corner such that $Q_2h=f$. Since the $\E$-component of $h$ is $f$, we must show that there exists a unique $e \in \D(X_0,X_1)$ such that $\subof{e}{F} = \subof{f}{P}$. Such a unique morphism $e$ exists because $F$ is fully faithful. This proves that $Q_2$ is fully faithful and finishes the proof of the second assertion.
\end{proof}
\section{A \texorpdfstring{$2$}{2}-Monad for Fibrations}\label{sec:iimonad-f-fibrations}
Fix a small category $\C$. The purpose of this section is to construct a $2$-monad $\funnyf$ on $\catoverc$ whose pseudo algebras will be shown to be cloven fibrations in \Cref{sec:fibration-pseudoalgebra}. In \Cref{sec:pseudo-alg-from-fib,sec:fib=psalg}, we will prove the converse, i.e., that cloven fibrations yield pseudo $\funnyf$-algebras such that the two constructions are inverses of each other. Moreover, under this correspondence, split fibrations over $\C$ are precisely the strict $\funnyf$-algebras.
To define the $2$-monad $\funnyf$, recall the concept of a $2$-monad on a $2$-category from \Cref{definition:2-monad} and the locally small $2$-category $\catoverc$ from \Cref{exer:cat-over}. The objects in $\catoverc$ are functors $\A \to \C$ with $\A$ a small category. Its $1$-cells and $2$-cells are functors and natural transformations, respectively, that respect the functors to $\C$.
\begin{definition}\label{def:iimonad-on-catoverc}\index{2-monad!for cloven and split fibrations}
Define a $2$-monad
\[(\funnyf,\mu,\eta) : \catoverc \to \catoverc\] consisting of
\begin{itemize}
\item a $2$-functor $\funnyf : \catoverc \to \catoverc$ and
\item $2$-natural transformations $\mu : \funnyfsq \to \funnyf$ and $\eta : 1_{\catoverc} \to \funnyf$
\end{itemize}
as follows.
\begin{description}
\item[$2$-Functor on Objects] For a functor $P : \A \to \C$ with $\A$ a small category, $\funnyfp \in \catoverc$ is determined by the following data.
\begin{description}
\item[Objects] An object in $\funnyfp$ is a triple $\xfy$ with
\begin{itemize}
\item $X$ an object in $\C$,
\item $Y$ an object in $\A$, and
\item $f : X \to \subof{Y}{P}$ a morphism in $\C$.
\end{itemize}
\item[Morphisms] A morphism in $\funnyfp$ is a pair
\begin{equation}\label{gh-morphism}
\begin{tikzcd}[column sep=large]
\xfyzero \ar{r}{\pairof{g}{h}} & \xfyone\end{tikzcd}
\end{equation}
with morphisms $g : X_0 \to X_1$ in $\C$ and $h : Y_0 \to Y_1$ in $\A$ such that the square
\[\begin{tikzcd}
X_0 \ar{d}[swap]{g} \ar{r}{f_0} & \subof{Y}{0,P} = \subof{(Y_0)}{P} \ar{d}{\subof{h}{P}}\\
X_1 \ar{r}{f_1} & \subof{Y}{1,P} = \subof{(Y_1)}{P}
\end{tikzcd}\]
in $\C$ commutes. We will use the shorthand $\subof{Y}{k,P}$ for the $P$-image $\subof{(Y_k)}{P} = P(Y_k)$ below.
\item[Composition and Identity Morphisms]
These are defined in $\C$ and $\A$ in the first and the second components, respectively.
\item[Functor to $\C$] The functor $\pisubp : \funnyfp \to \C$ projects onto the first component, so
\begin{equation}\label{pi-p}
\pisubp\xfy = X \andspace \pisubp\pairof{g}{h} = g.
\end{equation}
\end{description}
\item[$2$-Functor on $1$-Cells]
For a $1$-cell
\[\begin{tikzcd}
\A \ar{rr}{F} \ar{dr}[swap]{P} && \B \ar{dl}{Q}\\
& \C & \end{tikzcd}\] in $\catoverc$, the functor
\begin{equation}\label{funnyf-of-f}
\begin{tikzcd}[column sep=large]
\funnyfp \ar{r}{\funnyff} & \funnyfq\end{tikzcd}
\end{equation}
sends:
\begin{itemize}
\item an object $\xfy\in\funnyfp$ to $\xfysubf \in \funnyfq$;
\item a morphism $\pairof{g}{h} \in \funnyfp$ to $\pairof{g}{\hsubf} \in \funnyfq$.
\end{itemize}
\item[$2$-Functor on $2$-Cells]
Suppose $\theta : F \to G$ is a $2$-cell in $\catoverc$, as displayed on the left-hand side below.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.6]
\draw[0cell]
(0,0) node (p) {P}
($(p)+(1,0)$) node (q) {Q}
($(q)+(.7,0)$) node (p1) {\funnyfp}
($(p1)+(1,0)$) node (q1) {\funnyfq}
;
\draw[1cell]
(p) edge[bend left=50] node {F} (q)
(p) edge[bend right=50] node[swap] {G} (q)
(p1) edge[bend left=50] node {\funnyff} (q1)
(p1) edge[bend right=50] node[swap] {\funnyfg} (q1)
;
\draw[2cell]
node[between=p and q at .45, rotate=-90, font=\Large] (the) {\Rightarrow}
(the) node[right] {\theta}
node[between=p1 and q1 at .4, rotate=-90, font=\Large] (th) {\Rightarrow}
(th) node[right] {\funnyftheta}
;
\end{tikzpicture}\]
For an object $\xfy \in \funnyfp$, define $\funnyftheta_{\xfy}$ as the morphism
\begin{equation}
\begin{tikzcd}[column sep=large]
\funnyff\xfy = \xfysubf \ar{r}{\pairof{1_X}{\theta_Y}} & \xfysubg = \funnyfg\xfy
\end{tikzcd}
\end{equation}
in $\funnyfq$.
\item[Unit] With $P$ as above, the $P$-component of the unit $\eta$ is the functor
\begin{equation}\label{eta-p}
\eta_P : \A \to \funnyfp
\end{equation}
that sends:
\begin{itemize}
\item an object $Y \in \A$ to $\yponey \in \funnyfp$;
\item a morphism $g : Y \to Z$ in $\A$ to the morphism
\[\begin{tikzcd}[column sep=large]
\yponey \ar{r}{\gpg} & \zponez\in \funnyfp.\end{tikzcd}\]
\end{itemize}
\item[Multiplication] The $P$-component of the multiplication $\mu$ is the functor \[\mu_P : \funnyfsqp \to \funnyfp\] that sends:
\begin{itemize}
\item an object $\wgxfy \in \funnyfsqp$ to $\wfgy \in \funnyfp$.
\item a morphism
\begin{equation}\label{eij}
\begin{tikzcd}[column sep=large]
\wgxfyzero \ar{r}{\eij} & \wgxfyone\in \funnyfsqp\end{tikzcd}
\end{equation}
to the morphism
\begin{equation}\label{ej}
\begin{tikzcd}
\wfgyzero \ar{r}{\ej} & \wfgyone\in \funnyfp.\end{tikzcd}
\end{equation}
\end{itemize}
\end{description}
This finishes the definition of $(\funnyf,\mu,\eta)$. We prove that these data constitute a $2$-monad in \cref{funnyf-is-iimonad} below.
\end{definition}
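On objects, the unit and multiplication laws for these data amount to composing paths of morphisms in $\C$, which is easy to confirm by direct computation. The following Python sketch uses our own path encoding, not anything from the text: a $\C$-morphism is a list of generator names read left to right, so the composite $fg$ of $g : W \to X$ and $f : X \to \subof{Y}{P}$ is the path of $g$ followed by the path of $f$, and identities are empty paths.

```python
# Illustrative sketch only (our own encoding, not from the text).

def eta(P_ob, Y):
    """The unit: eta_P sends an object Y to (P(Y), 1_{P(Y)}, Y)."""
    return (P_ob[Y], [], Y)

def mu(triple):
    """The multiplication: mu_P sends (W, g, (X, f, Y)) to (W, fg, Y);
    the path of fg is the path of g followed by the path of f."""
    W, g, (X, f, Y) = triple
    return (W, g + f, Y)

P_ob = {"Y": "Ystar"}            # P sends the object Y to Ystar
obj = ("X", ["f"], "Y")          # an object <X, f, Y> of F(P)

# Unit laws: mu_P . eta_{F(P)} = id and mu_P . F(eta_P) = id.
unit_left = mu((obj[0], [], obj))                     # eta_{F(P)}, then mu_P
unit_right = mu((obj[0], obj[1], eta(P_ob, obj[2])))  # F(eta_P), then mu_P

# Associativity on an object <V, h, <W, g, <X, f, Y>>> of F^3(P).
cube = ("V", ["h"], ("W", ["g"], ("X", ["f"], "Y")))
assoc_left = mu((cube[0], cube[1], mu(cube[2])))               # mu_P . F(mu_P)
assoc_right = mu((cube[0], cube[1] + cube[2][1], cube[2][2]))  # mu_P . mu_{F(P)}
```

Both unit composites return the original object, and both associativity composites return the triple with path `["h", "g", "f"]`, mirroring the fact that either order of applying $\mu$ composes $f$, $g$, and $h$ in $\C$.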
\begin{explanation}\label{expl:funnyf}
An object $\wgxfy \in \funnyfsqp$ consists of
\begin{itemize}
\item an object $Y\in \A$,
\item two objects $W,X\in \C$, and
\item two composable morphisms
\[\begin{tikzcd}
W \ar{r}{g} & X \ar{r}{f} & \ysubp\in \C.\end{tikzcd}\]
\end{itemize}
A morphism $\eij$ as in \eqref{eij} consists of
\begin{itemize}
\item a morphism $j : Y_0 \to Y_1$ in $\A$ and
\item two morphisms $e : W_0 \to W_1$ and $i : X_0 \to X_1$ in $\C$,
\end{itemize}
such that the diagram in $\C$
\begin{equation}\label{eijp-diagram}
\begin{tikzcd}
W_0 \ar{d}[swap]{e} \ar{r}{g_0} & X_0 \ar{d}{i} \ar{r}{f_0} & \ysubzerop \ar{d}{\jsubp}\\
W_1 \ar{r}{g_1} & X_1 \ar{r}{f_1} & \ysubonep\end{tikzcd}
\end{equation}
is commutative. This implies that the morphism $\ej$ in \eqref{ej} is well-defined.\dqed
\end{explanation}
\begin{proposition}\label{funnyf-is-iimonad}
The triple $(\funnyf,\mu,\eta)$ in \Cref{def:iimonad-on-catoverc} is a $2$-monad on $\catoverc$.
\end{proposition}
\begin{proof}
First we use \Cref{iifunctor} to check that $\funnyf$ is a $2$-functor. For a $1$-cell $F : P \to Q$ in $\catoverc$, \[\funnyff : \funnyfp \to \funnyfq\] is a well-defined functor by the functoriality of $F$ and the entrywise definition of the composition and identity morphisms in $\funnyfp$ and $\funnyfq$. It is a $1$-cell in $\catoverc$ because for each object $\xfy\in\funnyfp$, there are equalities
\[\begin{split}
\pisubq\funnyff\xfy &= \pisubq\xfysubf\\
&=X\\
&=\pisubp\xfy,
\end{split}\] and similarly for morphisms. Moreover, $\funnyf$ preserves horizontal composition of $1$-cells and identity $1$-cells in $\catoverc$ by the functoriality of $F$.
For a $2$-cell $\theta : F \to G$ in $\catoverc$, the naturality of $\theta$ implies that $\funnyftheta$ is a natural transformation. It is a $2$-cell in $\catoverc$ by the equalities
\[\begin{split}
\pisubq\funnyftheta\xfy &= \pisubq\pairof{1_X}{\theta_Y}\\
& = 1_X\\
& = 1_{\pisubp\xfy}.
\end{split}\]
Moreover, $\funnyf$ preserves identity $2$-cells and vertical composition of $2$-cells by the entrywise definition of $\funnyftheta\xfy$ as $\pairof{1_X}{\theta_Y}$.
Finally, suppose $\varphi : F' \to G'$ is another $2$-cell in $\catoverc$ such that the horizontal composite $\varphi * \theta$ is defined. For each object $\xfy \in \funnyfp$, there are equalities
\[\begin{split}
\bigl(\funnyf(\varphi) * \funnyftheta\bigr)_{\xfy}
&= \funnyf(\varphi)\big(\funnyfg\xfy\big) \circ \funnyf(F')\big(\funnyftheta_{\xfy}\big)\\
&= \pairof{1_X}{\varphi_{Y_G}} \circ \pairof{1_X}{F'\theta_Y}\\
&= \pairof{1_X}{\varphi_{Y_G}(F'\theta_Y)}\\
&= \pairof{1_X}{(\varphi*\theta)_Y}\\
&= \funnyf(\varphi*\theta)_{\xfy}.
\end{split}\]
Therefore, $\funnyf$ preserves horizontal compositions of $2$-cells, and it is a $2$-functor.
Next we use \Cref{iinatural-transformation} to check that $\mu : \funnyfsq \to \funnyf$ is a $2$-natural transformation. For each object $P \in \catoverc$, $\mu_P$ is a well-defined functor because composition and identity morphisms in $\funnyfsqp$ and $\funnyfp$ are defined entrywise. For each $1$-cell $F : P\to Q$ as above, both composite functors in the diagram
\[\begin{tikzcd}
\funnyfsqp \ar{d}[swap]{\funnyfsqf} \ar{r}{\mu_P} & \funnyfp \ar{d}{\funnyff}\\
\funnyfsqq \ar{r}{\mu_Q} & \funnyfq\end{tikzcd}\]
send:
\begin{itemize}
\item an object $\wgxfy \in \funnyfsqp$ to the object $\tripleof{W}{fg}{\subof{Y}{F}} \in \funnyfq$;
\item a morphism $\eij \in \funnyfsqp$ to the morphism $\pairof{e}{j_F} \in \funnyfq$.
\end{itemize}
This proves the $1$-cell naturality of $\mu$.
The $2$-cell naturality of $\mu$ means that for each $2$-cell $\theta : F \to G$ in $\catoverc$, the diagram of natural transformations
\[\begin{tikzcd}
\funnyff\mu_P \ar{d}[swap]{\funnyftheta * 1_{\mu_P}} \ar{r}{1} & \mu_Q\funnyfsqf \ar{d}{1_{\mu_Q}*\funnyfsqtheta}\\
\funnyfg\mu_P \ar{r}{1} & \mu_Q\funnyfsqg
\end{tikzcd}\]
is commutative. This is true because for each object $\wgxfy \in \funnyfsqp$, there are equalities:
\[\begin{split}
\funnyftheta_{\mu_P\wgxfy} &= \funnyftheta_{\wfgy}\\
&= \pairof{1_W}{\theta_Y}\\
&= \mu_Q\pairof{1_W}{\pairof{1_X}{\theta_Y}}\\
&= \mu_Q\pairof{1_W}{\funnyftheta_{\xfy}}\\
&= \mu_Q\funnyfsqtheta_{\wgxfy}.
\end{split}\]
This shows that $\mu$ is a $2$-natural transformation. A similar argument shows that $\eta$ is a $2$-natural transformation.
For the $2$-monad associativity axiom for $(\funnyf,\mu,\eta)$ in \Cref{def:enriched-monad}, first consider an object
\begin{equation}\label{vhwgxfy}
\vhwgxfy \in \funnyfcubep
\end{equation}
consisting of an object $Y\in\A$ and morphisms
\[\begin{tikzcd}
V \ar{r}{h} & W \ar{r}{g} & X \ar{r}{f} & \ysubp\end{tikzcd} \in \C.\]
Both composite functors in the diagram
\[\begin{tikzcd}
\funnyfcubep \ar{d}[swap]{\mu_{\funnyfp}} \ar{r}{\funnyf(\mu_P)} & \funnyfsqp \ar{d}{\mu_P}\\
\funnyfsqp \ar{r}{\mu_P} & \funnyfp
\end{tikzcd}\]
send:
\begin{itemize}
\item the above object in $\funnyfcubep$ to the object $\tripleof{V}{fgh}{Y}\in \funnyfp$;
\item a morphism $\pairof{d}{\eij}\in\funnyfcubep$ to the morphism $\pairof{d}{j} \in \funnyfp$.
\end{itemize}
The unity axiom for $(\funnyf,\mu,\eta)$ is checked similarly.
\end{proof}
\begin{notation}
We usually abbreviate the $2$-monad $(\funnyf,\mu,\eta)$ in \Cref{funnyf-is-iimonad} to $\funnyf$.
\end{notation}
\section{From Pseudo Algebras to Fibrations}
\label{sec:fibration-pseudoalgebra}
Fix a small category $\C$. Recall from \Cref{definition:lax-algebra} that for a $2$-monad $T$, a pseudo $T$-algebra is a lax $T$-algebra whose lax unity constraint and lax associativity constraint are invertible $2$-cells. The purpose of this section is to show that for the $2$-monad $(\funnyf,\mu,\eta)$ in \Cref{funnyf-is-iimonad}, every pseudo $\funnyf$-algebra yields a canonical cloven fibration as in \Cref{def:fibration}. Moreover, under this assignment, strict $\funnyf$-algebras are sent to split fibrations. We begin by giving an explicit description of the structure of a lax $\funnyf$-algebra.
\begin{lemma}\label{pre-lax-Falg}
Suppose:
\begin{itemize}
\item $P : \A\to\C$ is an object in $\catoverc$.
\item $F : \funnyfp \to \A$ is a functor.
\item $\zeta$ and $\theta$ are natural transformations as displayed below, in which $\theta$ is defined assuming $F$ is a $1$-cell in $\catoverc$.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.6]
\draw[0cell]
(0,0) node (fiip) {\funnyfsqp}
($(fiip)+(1,0)$) node (fp) {\funnyfp}
($(fp)+(1,0)$) node (a) {\A}
($(fiip)+(0,-1)$) node (fpii) {\funnyfp}
($(fp)+(0,-1)$) node (aii) {\A}
;
\draw[1cell]
(fiip) edge node {\mu_P} (fp)
(fiip) edge node[swap] {\funnyff} (fpii)
(fp) edge node {F} (aii)
(fpii) edge node[swap] {F} (aii)
(a) edge node[swap] {\eta_P} (fp)
(a) edge[out=-90, in=0] node (ia) {1_{\A}} (aii)
;
\draw[2cell]
node[between=fiip and aii at .55, rotate=45, 2label={above,\theta}] {\Rightarrow}
node[between=fp and ia at .55, rotate=135, 2label={below,\zeta}] {\Rightarrow}
;
\end{tikzpicture}\]
\end{itemize}
Then the following statements hold.
\begin{enumerate}
\item Regarding $\pi_P : \funnyfp \to \C$ as an object in $\catoverc$, $F$ is a $1$-cell in $\catoverc$ if and only if the equalities
\begin{equation}\label{xfypf}
\subof{\xfy}{PF} = X \andspace \subof{\pairof{g}{h}}{PF} = g \in \C
\end{equation}
hold for each object $\xfy$ and each morphism $\pairof{g}{h} \in \funnyfp$.
\item For each object $Y\in \A$, denoting the $Y$-component of $\zeta$ by
\[\begin{tikzcd}
Y \ar{r}{\zeta_Y} & F\eta_P(Y) = \yponeysubf \in \A,
\end{tikzcd}\]
the naturality of $\zeta$ means that the diagram
\begin{equation}\label{gzetaz}
\begin{tikzcd}
Y \ar{d}[swap]{g} \ar{r}{\zeta_Y} & \yponeysubf \ar{d}{\gpgsubf}\\
Z \ar{r}{\zeta_Z} & \zponezsubf
\end{tikzcd}
\end{equation}
is commutative for each morphism $g : Y \to Z$ in $\A$. Moreover, if $F$ is a $1$-cell in $\catoverc$, then $\zeta$ is a $2$-cell in $\catoverc$ if and only if the equality
\begin{equation}\label{zetayp}
P(\zeta_Y) = 1_{\subof{Y}{P}} \in \C
\end{equation}
holds for each object $Y\in\A$.
\item Assuming $F$ is a $1$-cell in $\catoverc$, for each object $\wgxfy \in \funnyfsqp$, denote the corresponding component of $\theta$ by
\[\begin{tikzcd}[column sep=huge]
\wgxfyff \ar{r}{\theta_{\gfy}} & \wfgyf\in\A.
\end{tikzcd}\]
Then the naturality of $\theta$ means that the diagram
\begin{equation}\label{eijfftheta}
\begin{tikzcd}[column sep=huge]
\wgxfyffzero \ar{d}[swap]{\eijff} \ar{r}{\theta_{\gfyzero}}
& \wfgyfzero \ar{d}{\ejf}\\
\wgxfyffone \ar{r}{\theta_{\gfyone}} & \wfgyfone
\end{tikzcd}
\end{equation}
is commutative for each morphism $\eij \in\funnyfsqp$ as in \eqref{eij}. Moreover, $\theta$ is a $2$-cell in $\catoverc$ if and only if the equality
\begin{equation}\label{thetapiw}
P(\theta_{\gfy}) = 1_W \in \C
\end{equation}
holds for each object $\wgxfy\in\funnyfsqp$.
\end{enumerate}
\end{lemma}
\begin{proof}
For the first assertion, $F$ is a $1$-cell in $\catoverc$ if and only if $PF=\pi_P$. By the definition of $\pi_P$ in \eqref{pi-p}, the two equalities in \eqref{xfypf} are expressing the equality $PF=\pi_P$ on objects and morphisms, respectively.
For the second assertion, in the naturality diagram \eqref{gzetaz}, we used the definition $\eta_P(g) = \gpg$ in \eqref{eta-p}. The equality \eqref{zetayp} means the equality \[1_P *\zeta=1_P,\] which in turn means that $\zeta$ is a $2$-cell in $\catoverc$.
For the third assertion, in the naturality diagram \eqref{eijfftheta}, we used the definitions of $\funnyff$ applied to a morphism in \eqref{funnyf-of-f} and of $\mu_P$ applied to a morphism in \eqref{ej}. The equality \eqref{thetapiw} means the equality \[1_P*\theta = 1_{\pi_{\funnyfp}},\] which in turn means that $\theta$ is a $2$-cell in $\catoverc$.
\end{proof}
\begin{lemma}\label{lax-falg}
Suppose given a tuple $(P,F,\zeta,\theta)$ as in \Cref{pre-lax-Falg} such that:
\begin{itemize}
\item $F : \funnyfp \to \A$ is a $1$-cell in $\catoverc$.
\item $\zeta$ and $\theta$ are $2$-cells in $\catoverc$.
\end{itemize}
Then $P$ equipped with $(F,\zeta,\theta)$ is a lax $\funnyf$-algebra if and only if the following three statements hold.
\begin{description}
\item[First Lax Unity] For each object $\afy \in \funnyfp$, the diagram
\begin{equation}\label{ps-falg-coherence-i}
\begin{tikzpicture}[xscale=2.5, yscale=1.2, baseline={(z.base)}]
\draw[0cell]
(0,0) node (a) {\afysubf}
($(a)+(2,0)$) node (b) {\afysubf}
($(a)+(1,-1)$) node (c) {\aoneafyff}
($(a)+(0,-1)$) node[inner sep=0pt] (sw) {}
($(b)+(0,-1)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(a) edge node {1_{\afysubf}} (b)
(a) edge[-,shorten >=-1pt] node[swap, pos=.6] (z) {\zeta_{\afysubf}} (sw)
(sw) edge[shorten <=-1pt] node {} (c)
(c) edge[-,shorten >=-1pt] node {} (se)
(se) edge[shorten <=-1pt] node[swap, pos=.4] {\theta_{\oneafy}} (b)
;
\end{tikzpicture}
\end{equation}
in $\A$ is commutative.
\item[Second Lax Unity] For each object $\afy \in \funnyfp$, the diagram
\begin{equation}\label{ps-falg-coherence-ii}
\begin{tikzpicture}[xscale=2.5, yscale=1.2, baseline={(z.base)}]
\draw[0cell]
(0,0) node (a) {\afysubf}
($(a)+(2,0)$) node (b) {\afysubf}
($(a)+(1,-1)$) node (c) {\afyponeyff}
($(a)+(0,-1)$) node[inner sep=0pt] (sw) {}
($(b)+(0,-1)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(a) edge node {1_{\afysubf}} (b)
(a) edge[-,shorten >=-1pt] node[swap, pos=.6] (z) {\oneazetayf} (sw)
(sw) edge[shorten <=-1pt] node {} (c)
(c) edge[-,shorten >=-1pt] node {} (se)
(se) edge[shorten <=-1pt] node[swap, pos=.4] {\theta_{\foneypy}} (b)
;
\end{tikzpicture}
\end{equation}
in $\A$ is commutative.
\item[Lax Associativity] For each object $\vhwgxfy \in \funnyfcubep$ as in \eqref{vhwgxfy}, the diagram
\begin{equation}\label{ps-falg-coherence-iii}
\begin{tikzcd}[column sep=huge]
\vhwgxfyfff \ar{d}[swap]{\onevthetagfyf} \ar{r}{\theta_{\hgxfyf}} & \vghxfyff \ar{d}{\theta_{\ghfy}}\\
\vhwfgyff \ar{r}{\theta_{\hfgy}} & \vfghyf
\end{tikzcd}
\end{equation}
in $\A$ is commutative.
\end{description}
\end{lemma}
\begin{proof}
The commutative diagrams \eqref{ps-falg-coherence-i}, \eqref{ps-falg-coherence-ii}, and \eqref{ps-falg-coherence-iii} are the component forms of the two lax unity axioms \eqref{lax-algebra-units} and the lax associativity axiom \eqref{lax-algebra-hexagon} for a lax $\funnyf$-algebra, respectively.
\end{proof}
Using the above description of lax $\funnyf$-algebras, we now show that every pseudo $\funnyf$-algebra is canonically a cloven fibration.
\begin{lemma}\label{foneyf-zetayinv-cartesian}
Suppose:
\begin{itemize}
\item $\big(P : \A\to\C, F : \funnyfp\to\A, \zeta, \theta\big)$ is a pseudo $\funnyf$-algebra as in \Cref{lax-falg}.
\item $\prelift{Y}{f}$ is a pre-lift with respect to $P$ for an object $Y$ in $\A$ and a morphism $f : A \to \ysubp$ in $\C$.
\end{itemize}
Consider the morphism
\begin{equation}\label{foney}
\begin{tikzcd}[column sep=large]
\afy \ar{r}{\foney} & \yponey\end{tikzcd}
\end{equation}
in $\funnyfp$ and the composite
\begin{equation}\label{foneyf-zetayinv}
\begin{tikzcd}[column sep=large]
\afysubf \ar{r}{\foneysubf} & \yponeysubf \ar{r}{\zeta_Y^{-1}} & Y\end{tikzcd}
\end{equation}
in $\A$. Then the morphism $\zeta_Y^{-1}\foneysubf$ in \eqref{foneyf-zetayinv} is a Cartesian lift of the pre-lift $\prelift{Y}{f}$.
\end{lemma}
\begin{proof}
Applying the functor $P$ to the composite $\zeta_Y^{-1}\foneysubf$ yields the composite
\[\begin{tikzcd}[column sep=large]
\afysubpf \ar{r}{\foneysubpf} & \yponeysubpf \ar{r}{P(\zetainvy)}
& \ysubp\end{tikzcd}\]
in $\C$. This is equal to $f$ by \eqref{xfypf} and \eqref{zetayp}, so $\zeta_Y^{-1}\foneysubf$ is a lift of $\preliftyf$.
To show that this lift of $\preliftyf$ is a Cartesian morphism, suppose given a pre-raise $\preraise{\zetainvy\foneysubf}{g}{h}$ as displayed below.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.4]
\draw[0cell]
(0,0) node (a) {\afysubf}
($(a)+(1.3,0)$) node (yp) {\yponeysubf}
($(yp)+(.8,0)$) node (y) {Y}
($(a)+(1.05,1)$) node (x) {X}
($(y)+(.3,.5)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.3,-.5)$) node (aii) {A}
($(aii)+(.8,0)$) node (yii) {Y}
($(aii)+(.4,1)$) node (xp) {\xsubp}
;
\draw[1cell]
(a) edge node[swap] {\foneysubf} (yp)
(yp) edge node[swap] {\zetainvy} (y)
(x) edge node {g} (y)
(x) edge[dashed] node[swap] {\exists !\, \hplus ?} (a)
(s) edge[|->] node {P} (t)
(xp) edge node {\gsubp} (yii)
(xp) edge node[swap] {h} (aii)
(aii) edge node {f} (yii)
;
\end{tikzpicture}\]
We aim to show that the composite
\[\begin{tikzcd}
X \ar{r}{\zetax} & \xponexsubf \ar{r}{\hgsubf} & \afysubf\in\A
\end{tikzcd}\]
is the desired unique raise, in which $\hg\in\funnyfp$ is well-defined because $fh=\gsubp$. First, to show that it is a raise, we apply the functor $P$ to obtain the composite
\[\begin{tikzcd}[column sep=large]
\xsubp \ar{r}{P(\zetax)} & \xponexsubpf \ar{r}{\hgsubpf} & \afysubpf\in\C.
\end{tikzcd}\]
This is equal to $h$ by \eqref{xfypf} and \eqref{zetayp}. Moreover, the outermost diagram in
\[\begin{tikzpicture}[xscale=2.7, yscale=1.2]
\draw[0cell]
(0,0) node (x) {X}
($(x)+(2,0)$) node (y) {Y}
($(x)+(0,-1)$) node (xp) {\xponexsubf}
($(xp)+(2,0)$) node (yp) {\yponeysubf}
($(xp)+(1,-1)$) node (a) {\afysubf}
($(xp)+(0,-1)$) node[inner sep=0pt] (sw) {}
($(yp)+(0,-1)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x) edge node {g} (y)
(x) edge node[swap] {\zetax} (xp)
(yp) edge node[swap] {\zetainvy} (y)
(xp) edge node {\fhgsubf\,=\,\gpgsubf} (yp)
(xp) edge[-,shorten >=-1pt] node[swap,pos=.7] {\hgsubf} (sw)
(sw) edge[shorten <=-1pt] node{} (a)
(a) edge[-,shorten >=-1pt] node {} (se)
(se) edge[shorten <=-1pt] node[swap,pos=.3] {\foneysubf} (yp)
;
\end{tikzpicture}\]
is commutative, since the two sub-diagrams inside are commutative by the functoriality of $F$ and the naturality of $\zeta$ as in \eqref{gzetaz}.
It remains to check the uniqueness of the raise. Suppose $\hplus$ is another raise. We must show that $\hplus$ is equal to $\hgsubf \zetax$. The defining equalities for $\hplus$ are
\[\hplusp = h \andspace \foneysubf \hplus = \zetay g.\]
These equalities imply that the diagram
\begin{equation}\label{hghhplus}
\begin{tikzcd}[column sep=large]
\xponex \ar{d}[swap]{\hg} \ar{r}{\hhplus} & \aoneafyf \ar{d}{\oneafoneysubf}\\
\afy \ar{r}{\oneazetay} & \afyponeyf
\end{tikzcd}
\end{equation}
in $\funnyfp$ is commutative. Since
\begin{equation}\label{oneaoneyf}
\oneaoneyf = F\big(1_{\afy}\big) = 1_{\afysubf} \ ,
\end{equation}
the desired equality
\[\hplus = \hgsubf \zetax\]
means the commutativity of the outermost diagram in $\A$ below.
\[\begin{tikzpicture}[xscale=4, yscale=1.8]
\draw[0cell]
(0,0) node (x-i-i) {X}
($(x-i-i)+(1,0)$) node (x-i-ii) {\afysubf}
($(x-i-i)+(0,-1)$) node (x-ii-i) {\xponexsubf}
($(x-ii-i)+(1,0)$) node (x-ii-ii) {\aoneafyff}
($(x-ii-ii)+(1,0)$) node (x-ii-iii) {\afysubf}
($(x-ii-i)+(0,-1)$) node (x-iii-i) {\afysubf}
($(x-iii-i)+(1,0)$) node (x-iii-ii) {\afyponeyff}
($(x-iii-ii)+(1,0)$) node (x-iii-iii) {\afysubf}
($(x-i-ii)+(1,0)$) node[inner sep=0pt] (ne) {}
($(x-iii-i)+(0,-.5)$) node[inner sep=0pt] (sw) {}
($(x-iii-iii)+(0,-.5)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x-i-i) edge node {\hplus} (x-i-ii)
(x-i-i) edge node[swap] {\zetax} (x-ii-i)
(x-i-ii) edge node {\zeta_{\afysubf}} (x-ii-ii)
(x-ii-i) edge node {\hhplusf} (x-ii-ii)
(x-ii-i) edge node[swap] {\hgsubf} (x-iii-i)
(x-ii-ii) edge node {\theta_{\oneafy}} (x-ii-iii)
(x-ii-ii) edge node {\oneafoneyff} (x-iii-ii)
(x-ii-iii) edge node {\oneaoneyf} (x-iii-iii)
(x-iii-i) edge node {\oneazetayf} (x-iii-ii)
(x-iii-ii) edge node {\theta_{\foneypy}} (x-iii-iii)
(x-i-ii) edge[-,shorten >=-1pt] node {1_{\afysubf}} (ne)
(ne) edge[shorten <=-1pt] node {} (x-ii-iii)
(x-iii-i) edge[-,shorten >=-1pt] node {} (sw)
(sw) edge[-,shorten <=-1pt, shorten >=-1pt] node {1_{\afysubf}} (se)
(se) edge[shorten <=-1pt] node {} (x-iii-iii)
;
\end{tikzpicture}\]
In the above diagram:
\begin{itemize}
\item The top left square is commutative by the naturality \eqref{gzetaz} of $\zeta$.
\item The top right square is the first lax unity axiom \eqref{ps-falg-coherence-i}.
\item The middle left square is the image of the commutative square in \eqref{hghhplus} under the functor $F$, so it is commutative.
\item The middle right square is commutative by the naturality \eqref{eijfftheta} of $\theta$.
\item The bottom rectangle is the second lax unity axiom \eqref{ps-falg-coherence-ii}.
\end{itemize}
We have shown that $\hgsubf \zetax$ is the desired unique raise.
\end{proof}
\Cref{foneyf-zetayinv-cartesian} implies that pseudo $\funnyf$-algebras yield cloven fibrations.
\begin{proposition}\label{psalgebra-to-fibration}
Suppose \[\big(P : \A\to\C, F : \funnyfp\to\A, \zeta, \theta\big)\] is a pseudo $\funnyf$-algebra as in \Cref{lax-falg}. Then $P$ is a cloven fibration when each pre-lift $\prelift{Y}{f}$ is equipped with the chosen Cartesian lift in \eqref{foneyf-zetayinv}.
\end{proposition}
Recall from \Cref{definition:lax-algebra} that for a $2$-monad $T$, a \emph{strict} $T$-algebra is a lax $T$-algebra whose lax unity constraint and lax associativity constraint are identity $2$-cells. Also recall from \Cref{def:fibration} that a \emph{split} fibration is a fibration equipped with a cleavage that is both unitary and multiplicative. Next is the analogue of \Cref{psalgebra-to-fibration} for strict $\funnyf$-algebras and split fibrations.
\begin{proposition}\label{strictalgebra-to-split-fib}
Suppose \[\big(P : \A\to\C, F : \funnyfp\to\A, \zeta=1, \theta=1\big)\] is a strict $\funnyf$-algebra. Then $P$ is a split fibration when each pre-lift $\prelift{Y}{f}$ is equipped with the chosen Cartesian lift in \eqref{foneyf-zetayinv}.
\end{proposition}
\begin{proof}
By \Cref{psalgebra-to-fibration} we already know that $P$ is a cloven fibration. For a pre-lift $\preliftyf$, the chosen Cartesian lift is now $\foneysubf$, since $\zetay=1_Y$. For $f = 1_{\ysubp}$, the equalities in \eqref{oneaoneyf} show that the chosen Cartesian lift of $\prelift{Y}{1_{\ysubp}}$ is $1_Y$. So the given cleavage is unitary.
To show that the cleavage is multiplicative \eqref{multiplicative-cleavage}, suppose $Y\in \A$ is an object and $f,g$ are morphisms in $\C$ as displayed in the bottom row below.
\[\begin{tikzpicture}[xscale=3.5, yscale=1.3]
\draw[0cell]
(0,0) node (x-i-i) {\agfysubf}
($(x-i-i)+(1,0)$) node (x-i-ii) {\bgysubf}
($(x-i-ii)+(1,0)$) node (x-i-iii) {Y \in \A}
($(x-i-i)+(0,-1)$) node (x-ii-i) {A}
($(x-ii-i)+(1,0)$) node (x-ii-ii) {B}
($(x-ii-ii)+(1,0)$) node (x-ii-iii) {\ysubp\in\C}
;
\draw[1cell]
(x-i-i) edge node {\foneysubf} (x-i-ii)
(x-i-ii) edge node {\goneysubf} (x-i-iii)
(x-i-iii) edge[|->] node {P} (x-ii-iii)
(x-ii-i) edge node {f} (x-ii-ii)
(x-ii-ii) edge node {g} (x-ii-iii)
;
\end{tikzpicture}\]
The chosen Cartesian lift of the pre-lift $\preliftygf$ is the morphism $\gfoneysubf \in \A$. There are equalities
\[\begin{split}
\gfoneysubf &= \subof{\big(\goney \circ \foney\big)}{F}\\
&= \goneysubf \circ \foneysubf
\end{split}\]
by the entrywise definition of composition in $\funnyfp$ and the functoriality of $F$. Since $\goneysubf$ is the chosen Cartesian lift of the pre-lift $\preliftyg$, it remains to show that $\foneysubf$ is the chosen Cartesian lift of the pre-lift $\preliftbgysubff$. The chosen Cartesian lift of $\preliftbgysubff$ is
\[\begin{split}
\subof{\pairof{f}{1_{\bgysubf}}}{F}
&= \subof{\pairof{f}{\oneboneyf}}{F}\\
&= \foneysubf.
\end{split}\]
The first equality above is by \eqref{oneaoneyf}, and the second equality follows from \eqref{eijfftheta} because $\theta$ is the identity.
\end{proof}
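A standard family of split fibrations may help orient the correspondence just established. The following example is stated for orientation only and is not used in the proofs; the notation $\int_{\C} G$ is ours.

```latex
% Example (for orientation; not used in the development): for a functor
% G : C^op -> Cat, the projection from the Grothendieck construction,
% int_C G -> C, is a split fibration. Objects over A are pairs (A, X)
% with X an object of G(A), and the chosen Cartesian lift of a pre-lift
% <(A,X) ; f : B -> A> is
\[
\bigl(f, 1_{(Gf)X}\bigr) \colon \bigl(B, (Gf)X\bigr) \longrightarrow (A,X).
\]
% Strict functoriality of G, namely G(1_A) = 1_{G(A)} and G(fg) = G(g)G(f),
% makes this cleavage unitary and multiplicative.
```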
\section{From Fibrations to Pseudo Algebras}
\label{sec:pseudo-alg-from-fib}
For a fixed small category $\C$, the purpose of this section is to prove the converse of \Cref{psalgebra-to-fibration}. In other words, we will show that each cloven fibration yields a canonical pseudo $\funnyf$-algebra. In \Cref{sec:fib=psalg} we will observe that the assignments form a bijection between pseudo $\funnyf$-algebras and cloven fibrations. Moreover, under these assignments, strict $\funnyf$-algebras correspond to split fibrations.
Recall the $2$-monad $(\funnyf,\mu,\eta)$ in \Cref{def:iimonad-on-catoverc}. For a cloven fibration, we first define the structures---namely, $\funnyf$-action, lax unity constraint, and lax associativity constraint---that will be shown to constitute a pseudo $\funnyf$-algebra, starting with the $\funnyf$-action functor. We will use \Cref{pre-lax-Falg,lax-falg}, which provide an explicit description of a lax $\funnyf$-algebra.
For the rest of this section, suppose $P : \A\to\C$ is a cloven fibration with $\A$ a small category, in which the chosen Cartesian lift of a pre-lift $\preliftyf$ is denoted by
\[\begin{tikzcd}[column sep=large]
\yf \ar{r}{\lift{\preliftyf}} & Y.\end{tikzcd}\]
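Before defining the action functor, it may help to keep a standard example of a cloven fibration in mind. The following is stated for orientation only and is not used below; it assumes $\C$ has chosen pullbacks, and the notation $f^{*}p$ is ours.

```latex
% Guiding example (assumption: C has chosen pullbacks; not used below).
% The codomain functor cod : C^{[1]} -> C sends an object (p : E -> B)
% to B. The chosen Cartesian lift of a pre-lift <p ; f : A -> B> is the
% chosen pullback square of p along f, regarded as a morphism
% f^*p -> p lying over f:
\[\begin{tikzcd}
f^{*}E \ar{r} \ar{d}[swap]{f^{*}p} & E \ar{d}{p}\\
A \ar{r}{f} & B
\end{tikzcd}\]
```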
\begin{definition}\label{def:fib-falg-f-action}
Define a functor
\begin{equation}\label{fibration-f-action}
F : \funnyfp \to \A
\end{equation}
as follows.
\begin{description}
\item[Objects] For an object $\xfy \in \funnyfp$, define the object
\begin{equation}\label{functor-f-object}
\xfyf = \yf\in\A,
\end{equation}
which is the domain of the chosen Cartesian lift $\lift{\preliftyf}$.
\item[Morphisms]
For a morphism
\[\begin{tikzcd}[column sep=large]
\xfyzero \ar{r}{\pairof{g}{h}} & \xfyone \in \funnyfp\end{tikzcd}\]
as in \eqref{gh-morphism}, consider the pre-raise
\[\preraise{\lift{\preliftyfone}}{h\lift{\preliftyfzero}}{g}\] as displayed below.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.3]
\draw[0cell]
(0,0) node (x-i-i) {\yfzero}
($(x-i-i)+(1,0)$) node (x-i-ii) {Y_0}
($(x-i-i)+(0,-1)$) node (x-ii-i) {\yfone}
($(x-ii-i)+(1,0)$) node (x-ii-ii) {Y_1}
($(x-ii-ii)+(.3,.5)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.3,.5)$) node (y-i-i) {X_0}
($(y-i-i)+(1,0)$) node (y-i-ii) {\ysubzerop}
($(y-i-i)+(0,-1)$) node (y-ii-i) {X_1}
($(y-ii-i)+(1,0)$) node (y-ii-ii) {\ysubonep}
;
\draw[1cell]
(x-i-i) edge node {\lift{\preliftyfzero}} (x-i-ii)
(x-i-i) edge[dashed] node[swap] {\exists !\,\gplus} (x-ii-i)
(x-i-ii) edge node {h} (x-ii-ii)
(x-ii-i) edge node {\lift{\preliftyfone}} (x-ii-ii)
(s) edge[|->] node {P} (t)
(y-i-i) edge node {f_0} (y-i-ii)
(y-i-i) edge node[swap] {g} (y-ii-i)
(y-i-ii) edge node {\hsubp} (y-ii-ii)
(y-ii-i) edge node {f_1} (y-ii-ii)
;
\end{tikzpicture}\]
Since $\lift{\preliftyfone}$ is a Cartesian morphism, there is a unique raise $\gplus$. Define the morphism
\begin{equation}\label{functor-f-morphism}
\ghsubf=\gplus \in \A.
\end{equation}
\end{description}
This finishes the definition of $F$.
\end{definition}
\begin{lemma}\label{functor-f-well-def}
In \Cref{def:fib-falg-f-action}:
\begin{enumerate}
\item $F : \funnyfp \to \A$ is a well-defined functor.
\item $PF=\pisubp : \funnyfp \to \C$, so $F$ is a $1$-cell in $\catoverc$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first assertion follows from the uniqueness of a raise for a pre-raise whose first entry is a Cartesian morphism. The second assertion follows from the definition of $\pisubp$ in \eqref{pi-p}.
\end{proof}
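The uniqueness argument in the first assertion can be unpacked as follows; this is a routine sketch in generic notation rather than the macros above, writing $\beta_i$ for the chosen Cartesian lift of $\prelift{Y_i}{f_i}$.

```latex
% Preservation of composition by F, via uniqueness of raises.
% For composable morphisms (g,h) : (X_0,f_0,Y_0) -> (X_1,f_1,Y_1) and
% (g',h') : (X_1,f_1,Y_1) -> (X_2,f_2,Y_2), both F(g',h')F(g,h) and
% F(g'g, h'h) are raises of the same pre-raise <beta_2 ; h'h beta_0 ; g'g>:
\[
\beta_2\, F(g',h')\, F(g,h) = h'\,\beta_1\, F(g,h) = h'h\,\beta_0,
\qquad
P\bigl(F(g',h')\, F(g,h)\bigr) = g'g.
\]
% Since beta_2 is Cartesian, such a raise is unique, so the two morphisms
% agree; preservation of identities is similar.
```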
Next we define the lax unity constraint in a pseudo $\funnyf$-algebra.
\begin{definition}\label{def:fib-falg-lax-unity}
For each object $Y\in\A$, define $\zetay$ as the unique isomorphism
\begin{equation}\label{zetay-beta-one}
\begin{tikzcd}
Y \ar{dr}[swap]{1_Y} \ar{rr}{\zetay}[swap]{\iso} && \yponeysubf \ar{dl}{\lift{\preliftyone}}\\
& Y &
\end{tikzcd}
\end{equation}
in $\A$ in which:
\begin{itemize}
\item $\preliftyone$ is the pre-lift $\prelift{Y}{1_{\ysubp} : \ysubp \to \ysubp}$ with respect to $P$.
\item $\lift{\preliftyone}$ is the chosen Cartesian lift of $\preliftyone$.
\item $\zetay$ is the unique isomorphism in \eqref{two-cartesian-lifts}, which exists because the identity morphism $1_Y$ is also a Cartesian lift of $\preliftyone$. By \Cref{cartesian-properties}, the triangle \eqref{zetay-beta-one} commutes, and
\[P(\zetay)=1_{\ysubp} \in \C.\]
\end{itemize}
This finishes the definition of $\zetay$.
\end{definition}
\begin{lemma}\label{zetay-natural}
In \Cref{def:fib-falg-lax-unity}:
\begin{enumerate}
\item $\zetay = \lift{\preliftyone}^{-1}$.
\item $\zeta : 1_{\catoverc} \to F\etap$ is an invertible $2$-cell in $\catoverc$.
\end{enumerate}
\end{lemma}
\begin{proof}
The equality $\zetay = \lift{\preliftyone}^{-1}$ follows from the commutative triangle
\eqref{zetay-beta-one}.
To see that $\zeta$ is a natural transformation, suppose $g : Y \to Z$ is a morphism in $\A$. Consider the morphism
\[\begin{tikzcd}[column sep=large]
\yponey \ar{r}{\gpg} & \zponez \in \funnyfp.
\end{tikzcd}\]
By the definition \eqref{functor-f-morphism} of $F$ on the morphism $\gpg$, the square
\[\begin{tikzcd}[column sep=large]
\yponeysubf \ar{d}[swap]{\gpgsubf} \ar{r}{\lift{\preliftyone}} & Y \ar{d}{g}\\
\zponezsubf \ar{r}{\lift{\preliftzone}} & Z
\end{tikzcd}\]
in $\A$ is commutative. Since $\zetay=\lift{\preliftyone}^{-1}$ and $\zetaz=\lift{\preliftzone}^{-1}$, the previous commutative square implies that the one in \eqref{gzetaz} is also commutative. This means that $\zeta$ is a natural transformation.
Moreover, $\zeta$ defines a $2$-cell in $\catoverc$ because $P(\zetay)=1_{\ysubp}$. Finally, $\zeta$ is an invertible $2$-cell because each component $\zetay$ is an isomorphism, whose inverse is $\lift{\preliftyone}$.
\end{proof}
Next we define the lax associativity constraint in a pseudo $\funnyf$-algebra.
\begin{definition}\label{def:fib-falg-lax-ass}
For an object $\wgxfy\in\funnyfsqp$ as in \Cref{expl:funnyf}, consider the morphisms
\[\begin{tikzpicture}[xscale=4, yscale=1.3]
\draw[0cell]
(0,0) node (x-i-i) {\wgxfyff}
($(x-i-i)+(1.2,0)$) node (x-i-ii) {\xfyf}
($(x-i-ii)+(.6,0)$) node (x-i-iii) {Y \in \A}
($(x-i-i)+(0,-1)$) node (x-ii-i) {W}
($(x-i-ii)+(0,-1)$) node (x-ii-ii) {X}
($(x-i-iii)+(0,-1)$) node (x-ii-iii) {\ysubp\in\C}
($(x-ii-i)+(0,-1)$) node (x-iii-i) {\wfgyf}
($(x-ii-iii)+(0,-1)$) node (x-iii-iii) {Y\in\A}
;
\draw[1cell]
(x-i-i) edge node {\lift{\preliftxfyfg}} (x-i-ii)
(x-i-ii) edge node {\lift{\preliftyf}} (x-i-iii)
(x-i-iii) edge[|->] node {P} (x-ii-iii)
(x-iii-iii) edge[|->] node[swap] {P} (x-ii-iii)
(x-ii-i) edge node {g} (x-ii-ii)
(x-ii-ii) edge node {f} (x-ii-iii)
(x-iii-i) edge node {\lift{\preliftyfg}} (x-iii-iii)
;
\end{tikzpicture}\]
in which each $\lift{?}$ is the chosen Cartesian lift of the pre-lift in its subscript. Since Cartesian morphisms are closed under composition, the composite in the top row above is also a Cartesian lift of $\preliftyfg$. Therefore, as in \eqref{two-cartesian-lifts}, there is a unique isomorphism $\theta_{\gfy} \in \A$ such that the diagram
\begin{equation}\label{thetagfy-betayfg}
\begin{tikzcd}[column sep=large]
\wgxfyff \ar{d}[swap]{\lift{\preliftxfyfg}} \ar{r}{\theta_{\gfy}}[swap]{\iso} & \wfgyf \ar{d}{\lift{\preliftyfg}}\\
\xfyf \ar{r}{\lift{\preliftyf}} & Y
\end{tikzcd}
\end{equation}
in $\A$ is commutative and that
\begin{equation}\label{p-theta-1}
P(\theta_{\gfy}) = 1_W \in \C.
\end{equation}
This finishes the definition of $\theta_{\gfy}$.
\end{definition}
\begin{lemma}\label{theta-invertible-iicell}
In \Cref{def:fib-falg-lax-ass}, $\theta$ defines an invertible $2$-cell
\[\begin{tikzpicture}[xscale=2.5, yscale=1.4]
\draw[0cell]
(0,0) node (fiip) {\funnyfsqp}
($(fiip)+(1,0)$) node (fp) {\funnyfp}
($(fp)+(0,-1)$) node (a) {\A}
($(fiip)+(0,-1)$) node (fpii) {\funnyfp}
;
\draw[1cell]
(fiip) edge node {\mu_P} (fp)
(fiip) edge node[swap] {\funnyff} (fpii)
(fp) edge node {F} (a)
(fpii) edge node[swap] {F} (a)
;
\draw[2cell]
node[between=fiip and a at .55, rotate=45, 2label={above,\theta}, 2label={below,\iso}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\catoverc$.
\end{lemma}
\begin{proof}
Each component of $\theta$ is an isomorphism, and $P(\theta_{\gfy}) = 1_W$ in $\C$. Therefore, it remains to show that, for each morphism $\eij \in \funnyfsqp$ as in \eqref{eij}, the naturality diagram \eqref{eijfftheta} is commutative. Consider the cube
\begin{equation}\label{theta-naturality-cube}
\begin{tikzpicture}[xscale=3, yscale=1.4]
\draw[0cell]
(0,0) node (b11) {\wgxfyffzero}
($(b11)+(2,0)$) node (b12) {\wfgyfzero}
($(b11)+(0,-2)$) node (b21) {\wgxfyffone}
($(b12)+(0,-2)$) node (b22) {\wfgyfone}
($(b11)+(1.3,-1)$) node (f11) {\xfyfzero}
($(f11)+(2,0)$) node (f12) {Y_0}
($(f11)+(0,-2)$) node (f21) {\xfyfone}
($(f12)+(0,-2)$) node (f22) {Y_1}
;
\draw[1cell]
(b11) edge node {\theta_{\gfyzero}} (b12)
(b11) edge node[swap] {\eijff} (b21)
(b12) edge[dashed] node[swap,pos=.8] {\ejf} (b22)
(b21) edge[dashed] node[pos=.3] {\theta_{\gfyone}} (b22)
(f11) edge node {\lift{\preliftyfzero}} (f12)
(f11) edge node[pos=.75] {\ijf} (f21)
(f12) edge node {j} (f22)
(f21) edge node[pos=.3] {\lift{\preliftyfone}} (f22)
(b11) edge node[pos=.6] {\lift{\preliftxfyfgzero}} (f11)
(b12) edge node {\lift{\preliftyfgzero}} (f12)
(b21) edge node[swap] {\lift{\preliftxfyfgone}} (f21)
(b22) edge node[pos=.4] {\lift{\preliftyfgone}} (f22)
;
\end{tikzpicture}
\end{equation}
in $\A$. In this cube:
\begin{itemize}
\item The back face with the two dashed arrows is the naturality diagram \eqref{eijfftheta} for $\theta$, which we want to show is commutative.
\item The left, front, and right faces are commutative by the definitions \eqref{functor-f-morphism} of $\eijff$, $\ijf$, and $\ejf$, respectively.
\item The top and bottom faces are the commutative diagrams \eqref{thetagfy-betayfg} that define $\theta_{\gfyzero}$ and $\theta_{\gfyone}$, respectively.
\end{itemize}
The image of the back and the right faces in the cube \eqref{theta-naturality-cube} under the functor $P : \A\to\C$ is the commutative diagram
\begin{equation}\label{theta-cube-p}
\begin{tikzpicture}[xscale=2.5, yscale=1.3, baseline={(e.base)}]
\draw[0cell]
(0,0) node (b11) {W_0}
($(b11)+(1,0)$) node (b12) {W_0}
($(b12)+(1,0)$) node (f12) {\ysubzerop}
($(b11)+(0,-1)$) node (b21) {W_1}
($(b12)+(0,-1)$) node (b22) {W_1}
($(f12)+(0,-1)$) node (f22) {\ysubonep}
;
\draw[1cell]
(b11) edge node {1_{W_0}} (b12)
(b12) edge node {f_0g_0} (f12)
(b11) edge node[swap] (e) {e} (b21)
(b12) edge node[swap] {e} (b22)
(f12) edge node {\jsubp} (f22)
(b21) edge node {1_{W_1}} (b22)
(b22) edge node {f_1g_1} (f22)
;
\end{tikzpicture}
\end{equation}
in $\C$. By definition \eqref{functor-f-morphism}, $\ejf$ is the \emph{unique} raise of the pre-raise
\[\preraise{\lift{\preliftyfgone}}{j\lift{\preliftyfgzero}}{e}\]
given by the right face in the cube \eqref{theta-naturality-cube} and the right square in the diagram \eqref{theta-cube-p}. Using the uniqueness of $\ejf$ and the invertibility of $\theta_{\gfyzero}$, to show that the back face in the cube \eqref{theta-naturality-cube} is commutative, it suffices to show that the diagram
\[\begin{tikzpicture}[xscale=2.5, yscale=.8]
\draw[0cell]
(0,0) node (b11) {\wgxfyffzero}
($(b11)+(2,0)$) node (b12) {\wfgyfzero}
($(b11)+(0,-2)$) node (b21) {\wgxfyffone}
($(b12)+(0,-2)$) node (b22) {\wfgyfone}
($(b11)+(1.3,-1)$) node (f11) {}
($(f11)+(2,0)$) node (f12) {Y_0}
($(f11)+(0,-2)$) node (f21) {}
($(f12)+(0,-2)$) node (f22) {Y_1}
;
\draw[1cell]
(b12) edge node[swap] {\theta_{\gfyzero}^{-1}} (b11)
(b11) edge node[swap] {\eijff} (b21)
(b21) edge node {\theta_{\gfyone}} (b22)
(f12) edge node {j} (f22)
(b12) edge node[pos=.2] {\lift{\preliftyfgzero}} (f12)
(b22) edge node[pos=.2] {\lift{\preliftyfgone}} (f22)
;
\end{tikzpicture}\]
is commutative. The commutativity of this diagram follows from that of the top, bottom, left, and front faces in the cube \eqref{theta-naturality-cube}.
\end{proof}
Given a cloven fibration $P : \A\to\C$ with $\A$ a small category, so far we have defined
\begin{itemize}
\item a $1$-cell $F : \funnyfp \to \A$ in \eqref{fibration-f-action},
\item an invertible $2$-cell $\zeta : 1_{\catoverc} \to F\etap$ in \eqref{zetay-beta-one}, and
\item an invertible $2$-cell $\theta : F \circ \funnyff \to F\mu_P$ in \eqref{thetagfy-betayfg}
\end{itemize}
in $\catoverc$. Next we check that $P$ equipped with $(F,\zeta,\theta)$ is a pseudo $\funnyf$-algebra.
\begin{lemma}\label{fib-falg-coherence-i}
$(P,F,\zeta,\theta)$ satisfies the first lax unity axiom \eqref{ps-falg-coherence-i}.
\end{lemma}
\begin{proof}
For each object $\afy\in\funnyfp$, there is an equality
\begin{equation}\label{zeta-beta-inv}
\zeta_{\afysubf} = \lift{\preliftafyfone}^{-1}
\end{equation}
by \Cref{zetay-natural}. Moreover, consider the morphisms
\[\begin{tikzpicture}[xscale=4, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {\aoneafyff}
($(x11)+(1.2,0)$) node (x12) {\afysubf}
($(x12)+(.6,0)$) node (x13) {Y \in \A}
($(x11)+(0,-1)$) node (x21) {A}
($(x12)+(0,-1)$) node (x22) {A}
($(x13)+(0,-1)$) node (x23) {\ysubp\in\C}
;
\draw[1cell]
(x11) edge node {\lift{\preliftafyfone}} (x12)
(x12) edge node {\lift{\preliftyf}} (x13)
(x13) edge[|->] node {P} (x23)
(x21) edge node {1_A} (x22)
(x22) edge node {f} (x23)
;
\end{tikzpicture}\]
with each $\lift{?}$ the indicated chosen Cartesian lift. By the definition of $\theta_{\oneafy}$ in \eqref{thetagfy-betayfg}, it is the unique isomorphism such that the equality
\[\lift{\preliftyf}\theta_{\oneafy} = \lift{\preliftyf}\lift{\preliftafyfone}\]
holds and that $P(\theta_{\oneafy}) = 1_A$. This uniqueness property implies that there is an equality
\begin{equation}\label{theta-beta-inv}
\theta_{\oneafy} = \lift{\preliftafyfone}.
\end{equation}
It follows from \eqref{zeta-beta-inv} and \eqref{theta-beta-inv} that the composite
\[\theta_{\oneafy} \circ \zeta_{\afysubf}\]
is the identity morphism of $\afysubf$, proving the first lax unity axiom \eqref{ps-falg-coherence-i}.
\end{proof}
\begin{lemma}\label{fib-falg-coherence-ii}
$(P,F,\zeta,\theta)$ satisfies the second lax unity axiom \eqref{ps-falg-coherence-ii}.
\end{lemma}
\begin{proof}
For each object $\afy\in\funnyfp$, by the definition \eqref{functor-f-morphism}, $\oneazetayf$ is the unique raise of the pre-raise
\[\preraise{\lift{\preliftyponeyff}}{\zetay\circ \lift{\preliftyf}}{1_A}\]
as displayed below.
\[\begin{tikzpicture}[xscale=3, yscale=1.3]
\draw[0cell]
(0,0) node (x-i-i) {\afysubf}
($(x-i-i)+(1.8,0)$) node (x-i-ii) {Y}
($(x-i-i)+(0,-1)$) node (x-ii-i) {\afyponeyff}
($(x-i-ii)+(0,-1)$) node (x-ii-ii) {\yponeysubf}
($(x-ii-ii)+(.3,.5)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.3,.5)$) node (y-i-i) {A}
($(y-i-i)+(.4,0)$) node (y-i-ii) {\ysubp}
($(y-i-i)+(0,-1)$) node (y-ii-i) {A}
($(y-i-ii)+(0,-1)$) node (y-ii-ii) {\ysubp}
;
\draw[1cell]
(x-i-i) edge node {\lift{\preliftyf}} (x-i-ii)
(x-i-i) edge node[swap] {\exists !\,\oneazetayf} (x-ii-i)
(x-i-ii) edge node {\zetay} (x-ii-ii)
(x-ii-i) edge node {\lift{\preliftyponeyff}} (x-ii-ii)
(s) edge[|->] node {P} (t)
(y-i-i) edge node {f} (y-i-ii)
(y-i-i) edge node[swap] {1_A} (y-ii-i)
(y-i-ii) edge node {1_{\ysubp}} (y-ii-ii)
(y-ii-i) edge node {f} (y-ii-ii)
;
\end{tikzpicture}\]
On the other hand, by the definition \eqref{thetagfy-betayfg}, $\theta_{\foneypy}$ is the unique isomorphism such that the diagram
\[\begin{tikzpicture}[xscale=2.8, yscale=1.3]
\draw[0cell]
(0,0) node (x-i-i) {\afysubf}
($(x-i-i)+(2,0)$) node (x-i-ii) {Y}
($(x-i-i)+(0,-1)$) node (x-ii-i) {\afyponeyff}
($(x-i-ii)+(0,-1)$) node (x-ii-ii) {\yponeysubf}
;
\draw[1cell]
(x-i-i) edge node {\lift{\preliftyf}} (x-i-ii)
(x-ii-i) edge node {\theta_{\foneypy}} (x-i-i)
(x-ii-ii) edge node {\lift{\preliftyone}} node[swap]{=\,\zetay^{-1}} (x-i-ii)
(x-ii-i) edge node {\lift{\preliftyponeyff}} (x-ii-ii)
;
\end{tikzpicture}\]
in $\A$ is commutative and that $P(\theta_{\foneypy}) = 1_A$. It follows that
\[\oneazetayf = \theta_{\foneypy}^{-1},\] which implies the second lax unity axiom \eqref{ps-falg-coherence-ii}.
\end{proof}
\begin{lemma}\label{fib-falg-coherence-iii}
$(P,F,\zeta,\theta)$ satisfies the lax associativity axiom \eqref{ps-falg-coherence-iii}.
\end{lemma}
\begin{proof}
For each object
\[\vhwgxfy \in \funnyfcubep\]
as in \eqref{vhwgxfy}, consider the following diagram in $\A$.
\begin{equation}\label{psalg-coh-iii}
\scalebox{.9}{
\begin{tikzpicture}[xscale=3.2, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {\vghxfyff}
($(x11)+(0,-1)$) node (x21) {\vhwgxfyff}
($(x21)+(2,0)$) node (x22) {\wgxfyff}
($(x22)+(1.3,0)$) node (x23) {\xfyf}
($(x21)+(0,-1)$) node (x31) {\vhwfgyff}
($(x22)+(0,-1)$) node (x32) {\wfgyf}
($(x23)+(0,-1)$) node (x33) {Y}
($(x31)+(0,-1)$) node (x41) {\vfghyf}
($(x11)+(-.7,0)$) node[inner sep=0pt] (nw) {}
($(x41)+(-.7,0)$) node[inner sep=0pt] (sw) {}
($(x23)+(0,1)$) node[inner sep=0pt] (x13) {}
($(x33)+(0,-1)$) node[inner sep=0pt] (x43) {}
;
\draw[1cell]
(x21) edge node[swap] {\theta_{\hgxfyf}} (x11)
(x21) edge node {\lift{\preliftwgxfyffh}} (x22)
(x22) edge node {\lift{\preliftxfyfg}} (x23)
(x21) edge node {\onevthetagfyf} (x31)
(x22) edge node[swap] {\theta_{\gfy}} (x32)
(x23) edge node[swap] {\lift{\preliftyf}} (x33)
(x31) edge node {\lift{\preliftwfgyfh}} (x32)
(x32) edge node[pos=.3] {\lift{\preliftyfg}} (x33)
(x31) edge node {\theta_{\hfgy}} (x41)
(x11) edge[-, shorten >=-1pt] node {\lift{\preliftxfyfgh}} (x13)
(x13) edge[shorten <=-1pt] node {} (x23)
(x41) edge[-, shorten >=-1pt] node {\lift{\preliftyfgh}} (x43)
(x43) edge[shorten <=-1pt] node {} (x33)
(x11) edge[-, shorten >=-1pt] node {} (nw)
(nw) edge[-, shorten <=-1pt, shorten >=-1pt] node {\theta_{\ghfy}} (sw)
(sw) edge[shorten <=-1pt] node{} (x41)
;
\end{tikzpicture}
}
\end{equation}
In the above diagram:
\begin{itemize}
\item The left rectangle is the diagram \eqref{ps-falg-coherence-iii} that we want to show is commutative.
\item The middle rectangle is commutative by the definition \eqref{functor-f-morphism} of the morphism $\onevthetagfyf$.
\item The top, middle right, and bottom rectangles are the commutative diagrams \eqref{thetagfy-betayfg} that define the isomorphisms $\theta_{\hgxfyf}$, $\theta_{\gfy}$, and $\theta_{\hfgy}$, respectively.
\item Also by \eqref{thetagfy-betayfg}, the left-most arrow $\theta_{\ghfy}$ is the unique isomorphism such that the outermost diagram commutes and that $P(\theta_{\ghfy}) = 1_V$.
\end{itemize}
Along the first column in the diagram \eqref{psalg-coh-iii}, there are equalities
\[P\big(\theta_{\hgxfyf}\big) = P\big(\onevthetagfyf\big) = P\big(\theta_{\hfgy}\big) = 1_V\]
by \Cref{functor-f-well-def} and \eqref{p-theta-1}. Therefore, by the uniqueness property that defines $\theta_{\ghfy}$, the left rectangle in \eqref{psalg-coh-iii} is commutative because the other four sub-diagrams are commutative.
\end{proof}
We have checked all the axioms for a lax $\funnyf$-algebra, so we obtain the following result.
\begin{proposition}\label{fibration-to-psalgebra}
For each cloven fibration $P : \A\to\C$ with $\A$ a small category, when equipped with the structure $(F,\zeta,\theta)$ in \eqref{fibration-f-action}, \eqref{zetay-beta-one}, and \eqref{thetagfy-betayfg}, $(P,F,\zeta,\theta)$ is a pseudo $\funnyf$-algebra.
\end{proposition}
\begin{proof}
Combine \Cref{lax-falg,fib-falg-coherence-i,fib-falg-coherence-ii,fib-falg-coherence-iii}, and the fact that $\zeta$ and $\theta$ are invertible $2$-cells in $\catoverc$.
\end{proof}
Next is the analogue involving split fibrations and strict $\funnyf$-algebras.
\begin{proposition}\label{splitfib-to-strictalg}
In \Cref{fibration-to-psalgebra}, if $P$ is a split fibration, then $(P,F,\zeta,\theta)$ is a strict $\funnyf$-algebra.
\end{proposition}
\begin{proof}
We must show that $\zeta$ and $\theta$ are identity natural transformations. For each object $Y\in\A$, $\zetay$ in \eqref{zetay-beta-one} is the identity morphism of $Y$ because the given cleavage is unitary, i.e., $\lift{\preliftyone}=1_Y$.
Similarly, for each object $\wgxfy\in\funnyfsqp$, the multiplicativity \eqref{multiplicative-cleavage} of the given cleavage implies the equality
\[\lift{\preliftyfg} = \lift{\preliftyf} \circ \lift{\preliftxfyfg}\]
in \eqref{thetagfy-betayfg}. So $\theta_{\gfy}$ is the identity morphism.
\end{proof}
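As a concrete illustration of \Cref{splitfib-to-strictalg} (this example is ours, not from the surrounding text, though it uses only standard facts): for small categories $\C$ and $\D$, the first-factor projection from the product category is a split fibration, with the chosen Cartesian lift of each pre-lift given by pairing with an identity.

```latex
\[
\pi : \C\times\D \to \C,
\qquad
\liftof{f} \,=\, (f,1_D) : (A,D) \to (C,D)
\quad \text{for each pre-lift } \prelift{(C,D)}{f : A \to C}.
\]
```

Each $(f,1_D)$ is Cartesian, and this cleavage is unitary and multiplicative because $(1_C,1_D) = 1_{(C,D)}$ and $(f,1_D)\circ(g,1_D) = (fg,1_D)$. Hence $\pi$ is a split fibration, and $\fibtoalg{\pi}$ is a strict $\funnyf$-algebra, with $\zeta$ and $\theta$ identities exactly as in the proof above.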
\section{Fibrations are Pseudo Algebras}\label{sec:fib=psalg}
Suppose $\C$ is a small category. The purpose of this section is to observe that the constructions in the last two sections provide inverse bijections between cloven fibrations and pseudo $\funnyf$-algebras, and also between split fibrations and strict $\funnyf$-algebras.
\begin{definition}\label{def:fib-algebra-correspondence}
Suppose $(\funnyf,\mu,\eta)$ is the $2$-monad on $\catoverc$ in \Cref{funnyf-is-iimonad}.
\begin{enumerate}
\item For a pseudo $\funnyf$-algebra $(P,F,\zeta,\theta)$ as in \Cref{lax-falg}, the cloven fibration $P$ in \Cref{psalgebra-to-fibration} is denoted by\label{notation:algtofib}
\[\algtofib{P,F,\zeta,\theta}.\]
\item For a cloven fibration $P : \A\to\C$ with $\A$ a small category, the pseudo $\funnyf$-algebra $(P,F,\zeta,\theta)$ in \Cref{fibration-to-psalgebra} is denoted by\label{notation:fibtoalg}
\[\fibtoalg{P}.\]
\item Reusing the notations from \Cref{iicat-fibrations}, denote by $\fibclofc$ the collection of cloven fibrations over $\C$ with small domain categories. The sub-collection consisting of split fibrations over $\C$ is denoted by $\fibspofc$.
\item Denote by $\psfalg$ the collection of pseudo $\funnyf$-algebras, and by $\stfalg$ the sub-collection of strict $\funnyf$-algebras.\defmark
\end{enumerate}
\end{definition}
\begin{lemma}\label{fib-alg-fib}
For each cloven fibration $P : \A\to\C$ as in \Cref{sec:pseudo-alg-from-fib}, there is an equality
\[\algtofib{\fibtoalg{P}} = P\]
of cloven fibrations.
\end{lemma}
\begin{proof}
The underlying functor of the left-hand side is also $P$. We must show that the two cloven fibrations have the same cleavage. Since the chosen Cartesian lift for the left-hand side is from \eqref{foneyf-zetayinv}, for each pre-lift $\prelift{Y}{f : A \to \ysubp}$, we need to show that the equality
\begin{equation}\label{zeta-foneyf-liftyf}
\zetay^{-1} \foneysubf = \lift{\preliftyf}
\end{equation}
holds in $\A$, with $\foneysubf$ and $\zetay$ defined in \eqref{functor-f-morphism} and \eqref{zetay-beta-one}, respectively. In particular, $\foneysubf$ is defined as the unique raise of the pre-raise
\[\preraise{\lift{\preliftyone}}{1_Y\lift{\preliftyf}}{f}\]
as displayed below.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.3]
\draw[0cell]
(0,0) node (x-i-i) {\afysubf}
($(x-i-i)+(1.2,0)$) node (x-i-ii) {Y}
($(x-i-i)+(0,-1)$) node (x-ii-i) {\yponeysubf}
($(x-i-ii)+(0,-1)$) node (x-ii-ii) {Y}
($(x-ii-ii)+(.3,.5)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.3,.5)$) node (y-i-i) {A}
($(y-i-i)+(.7,0)$) node (y-i-ii) {\ysubp}
($(y-i-i)+(0,-1)$) node (y-ii-i) {\ysubp}
($(y-i-ii)+(0,-1)$) node (y-ii-ii) {\ysubp}
;
\draw[1cell]
(x-i-i) edge node {\lift{\preliftyf}} (x-i-ii)
(x-i-i) edge[dashed] node[swap] {\exists !\,\foneysubf} (x-ii-i)
(x-i-ii) edge node {1_Y} (x-ii-ii)
(x-ii-i) edge node {\lift{\preliftyone}} (x-ii-ii)
(s) edge[|->] node {P} (t)
(y-i-i) edge node {f} (y-i-ii)
(y-i-i) edge node[swap] {f} (y-ii-i)
(y-i-ii) edge node {1_{\ysubp}} (y-ii-ii)
(y-ii-i) edge node {1_{\ysubp}} (y-ii-ii)
;
\end{tikzpicture}\]
Since $\zetay^{-1} = \lift{\preliftyone}$ by \Cref{zetay-natural}, the desired equality \eqref{zeta-foneyf-liftyf} follows from the left commutative square above.
\end{proof}
\begin{lemma}\label{alg-fib-alg}
For each pseudo $\funnyf$-algebra $(P,F,\zeta,\theta)$ as in \Cref{lax-falg}, there is an equality
\[\fibtoalg{\algtofib{P,F,\zeta,\theta}} = (P,F,\zeta,\theta)\]
of pseudo $\funnyf$-algebras.
\end{lemma}
\begin{proof}
The underlying object of the left-hand side is also the functor $P : \A\to\C$. We must show that the pseudo $\funnyf$-algebra structure of the left-hand side, which we write as $(F',\zeta',\theta')$, is also specified by $(F,\zeta,\theta)$.
To show that $F=F'$, suppose $\afy \in \funnyfp$ is an object. By \eqref{functor-f-object}, the $\funnyf$-action functor $F'$ sends $\afy$ to the domain of the chosen Cartesian lift of $\preliftyf$, which is $\zetay^{-1}\foneysubf$ in \eqref{foneyf-zetayinv}. The domain of the latter is that of $\foneysubf$, which is $\afysubf$. So the $\funnyf$-action functors $F'$ and $F$ agree on objects.
Suppose $\gh \in \funnyfp$ is a morphism as in \eqref{gh-morphism}. By \eqref{functor-f-morphism}, $F'$ sends $\gh$ to the unique raise of the pre-raise
\[\preraise{\zetayone^{-1} \circ \foneyonesubf}{h \circ \zetayzero^{-1}\circ \foneyzerosubf}{g}\]
as displayed below.
\begin{equation}\label{ghfprime-preraise}
\begin{tikzpicture}[xscale=3.7, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {\xfyfzero}
($(x11)+(1,0)$) node (x12) {\yponeyzerosubf}
($(x12)+(.7,0)$) node (x13) {Y_0}
($(x11)+(0,-1)$) node (x21) {\xfyfone}
($(x12)+(0,-1)$) node (x22) {\yponeyonesubf}
($(x13)+(0,-1)$) node (x23) {Y_1}
($(x23)+(.2,.5)$) node (s) {}
($(s)+(.2,0)$) node (t) {}
($(t)+(.2,.5)$) node (y11) {X_0}
($(y11)+(.4,0)$) node (y12) {\ysubzerop}
($(y11)+(0,-1)$) node (y21) {X_1}
($(y12)+(0,-1)$) node (y22) {\ysubonep}
;
\draw[1cell]
(x11) edge node {\foneyzerosubf} (x12)
(x12) edge node {\zetayzero^{-1}} (x13)
(x11) edge[dashed] node[swap] {\exists !\,\ghsubfprime} (x21)
(x13) edge node {h} (x23)
(x21) edge node {\foneyonesubf} (x22)
(x22) edge node {\zetayone^{-1}} (x23)
(s) edge[|->] node {P} (t)
(y11) edge node {f_0} (y12)
(y11) edge node[swap] {g} (y21)
(y12) edge node {\hsubp} (y22)
(y21) edge node {f_1} (y22)
;
\end{tikzpicture}
\end{equation}
The morphism $\ghsubf$ satisfies
\[P\big(\ghsubf\big) = g \in \C\] by \eqref{xfypf} and the fact that $F : \funnyfp \to \A$ is a $1$-cell in $\catoverc$.
Therefore, by the uniqueness property of $\ghsubfprime$, to show the equality
\[\ghsubf = \ghsubfprime,\]
it remains to show that the left rectangle in \eqref{ghfprime-preraise} is commutative when its left vertical morphism is replaced by $\ghsubf$. Consider the diagram
\[\begin{tikzpicture}[xscale=4, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {\xfyfzero}
($(x11)+(1,0)$) node (x12) {\yponeyzerosubf}
($(x12)+(.7,0)$) node (x13) {Y_0}
($(x11)+(0,-1)$) node (x21) {\xfyfone}
($(x12)+(0,-1)$) node (x22) {\yponeyonesubf}
($(x13)+(0,-1)$) node (x23) {Y_1}
;
\draw[1cell]
(x11) edge node {\foneyzerosubf} (x12)
(x12) edge node {\zetayzero^{-1}} (x13)
(x11) edge node[swap] {\ghsubf} (x21)
(x12) edge node {\hphsubf} (x22)
(x13) edge node {h} (x23)
(x21) edge node {\foneyonesubf} (x22)
(x22) edge node {\zetayone^{-1}} (x23)
;
\end{tikzpicture}\]
in $\A$. The left sub-diagram is commutative by the functoriality of $F$ and the right square in \eqref{ghfprime-preraise}. The right sub-diagram is commutative by the naturality of $\zeta$ as in \eqref{gzetaz}. Therefore, $F=F'$ as functors.
Next, to show that $\zeta=\zeta'$, suppose $Y\in\A$ is an object. By \Cref{zetay-natural} there is an equality
\[\zetaprimey = \lift{\preliftyone}^{-1}\] with $\lift{\preliftyone}$ the chosen Cartesian lift of the pre-lift $\preliftyone$ in the cloven fibration $\algtofib{P,F,\zeta,\theta}$. The latter is defined in \eqref{foneyf-zetayinv}. So $\zetaprimey$ is the composite
\[\begin{tikzpicture}[xscale=4, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {Y}
($(x11)+(.6,0)$) node (x12) {\yponeysubf}
($(x12)+(1,0)$) node (x13) {\yponeysubf}
;
\draw[1cell]
(x11) edge node {\zetay} (x12)
(x12) edge node {\oneyponey^{-1}} node[swap] {=} (x13)
;
\end{tikzpicture}\]
in $\A$, which is equal to $\zetay$.
Finally, to show that $\theta = \theta'$, suppose $\wgxfy \in \funnyfsqp$ is an object as in \Cref{expl:funnyf}. Since $F=F'$, by \eqref{thetagfy-betayfg} $\theta'_{\gfy}$ is the unique isomorphism such that the diagram
\[\begin{tikzpicture}[xscale=4, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {\wgxfyff}
($(x11)+(1,0)$) node (x12) {\wfgyf}
($(x11)+(0,-1)$) node (x21) {\xfyf}
($(x12)+(0,-1)$) node (x22) {Y}
;
\draw[1cell]
(x11) edge node {\theta'_{\gfy}} node[swap]{\iso} (x12)
(x11) edge node[swap] {\lift{\preliftxfyfg}} (x21)
(x12) edge node {\lift{\preliftyfg}} (x22)
(x21) edge node {\lift{\preliftyf}} (x22)
;
\end{tikzpicture}\]
in $\A$ commutes and that $P\big(\theta'_{\gfy}\big) = 1_W$, with each $\beta_?$ the chosen Cartesian lift of its subscript in the cloven fibration $\algtofib{P,F,\zeta,\theta}$. Since $P\big(\theta_{\gfy}\big) = 1_W$ by \eqref{thetapiw}, it remains to show that the above diagram is commutative when $\theta'_{\gfy}$ is replaced by $\theta_{\gfy}$.
As each $\beta_?$ is defined as a composite as in \eqref{foneyf-zetayinv}, the desired diagram is the outermost diagram below.
\[\begin{tikzpicture}[xscale=4, yscale=1.7]
\draw[0cell]
(0,0) node (x11) {\wgxfyff}
($(x11)+(1,0)$) node (x12) {}
($(x12)+(.8,0)$) node (x13) {\wfgyf}
($(x11)+(0,-1)$) node (x21) {\xonexfyff}
($(x12)+(0,-1)$) node (x22) {\xfyf}
($(x13)+(0,-1)$) node (x23) {\yponeysubf}
($(x21)+(0,-1)$) node (x31) {\xfyf}
($(x22)+(0,-1)$) node (x32) {\yponeysubf}
($(x23)+(0,-1)$) node (x33) {Y}
($(x11)+(-.5,0)$) node[inner sep=0pt] (a) {}
($(a)+(0,-2)$) node[inner sep=0pt] (b) {}
($(x31)+(0,-.5)$) node[inner sep=0pt] (c) {}
($(x33)+(0,-.5)$) node[inner sep=0pt] (d) {}
($(x13)+(.4,0)$) node[inner sep=0pt] (e) {}
($(e)+(0,-2)$) node[inner sep=0pt] (f) {}
;
\draw[1cell]
(x11) edge node {\theta_{\gfy}} (x13)
(x11) edge node {\gonexfyff} (x21)
(x13) edge node[swap] {\goneysubf} (x22)
(x13) edge node[swap, pos=.7] {\fgoneysubf} (x23)
(x21) edge node {\theta_{\onexfy}} (x22)
(x21) edge node[pos=.4] {\zeta_{\xfyf}^{-1}} (x31)
(x22) edge node[swap] {1} (x31)
(x23) edge node[swap] {1} (x32)
(x23) edge node[swap] {\zetay^{-1}} (x33)
(x31) edge node[pos=.6] {\foneysubf} (x32)
(x32) edge node {\zetay^{-1}} (x33)
(x11) edge[-,shorten >=-1pt] node {} (a)
(a) edge[-,shorten >=-1pt, shorten <=-1pt] node[pos=.75] {\lift{\preliftxfyfg}} (b)
(b) edge[shorten <=-1pt] node {} (x31)
(x31) edge[-,shorten >=-1pt] node {} (c)
(c) edge[-,shorten >=-1pt, shorten <=-1pt] node {\lift{\preliftyf}} (d)
(d) edge[shorten <=-1pt] (x33)
(x13) edge[-,shorten >=-1pt] (e)
(e) edge[-,shorten >=-1pt, shorten <=-1pt] node[swap,pos=.75] {\lift{\preliftyfg}} (f)
(f) edge[shorten <=-1pt] (x33)
;
\end{tikzpicture}\]
In the above diagram:
\begin{itemize}
\item The three outer rectangles are the definitions of the three chosen Cartesian lifts $\beta_?$.
\item Since
\[\gonexfyff = \gonexoneyff\]
by \eqref{oneaoneyf}, the top trapezoid is commutative by the naturality \eqref{eijfftheta} of $\theta$.
\item The lower-left triangle is commutative by the first lax unity axiom \eqref{ps-falg-coherence-i} of the given pseudo $\funnyf$-algebra.
\item The other two sub-diagrams are commutative by the functoriality of $F$ and by definition.
\end{itemize}
We have shown that $\theta = \theta'$.
\end{proof}
Next is the main observation of this chapter regarding the correspondence between pseudo/strict $\funnyf$-algebras and cloven/split fibrations.
\begin{theorem}[Grothendieck Fibration]\label{fibration=psalgebra}\index{2-monad!for cloven and split fibrations}\index{cloven fibration!as a pseudo algebra}\index{split!fibration!as a strict algebra}\index{fibration!as algebra over a 2-monad}\index{Theorem!Grothendieck Fibration}
Each pair of assignments
\[\begin{tikzpicture}[xscale=2, yscale=1.5]
\draw[0cell]
(0,0) node (alg) {\psfalg}
($(alg)+(1.5,0)$) node (fib) {\fibclofc}
($(fib)+(1.2,0)$) node (stalg) {\stfalg}
($(stalg)+(1.5,0)$) node (fibsp) {\fibspofc}
;
\draw[1cell]
(alg) edge[bend left=25] node {\algtofib{-}} (fib)
(fib) edge[bend left=25] node {\fibtoalg{-}} (alg)
(stalg) edge[bend left=25] node {\algtofib{-}} (fibsp)
(fibsp) edge[bend left=25] node {\fibtoalg{-}} (stalg)
;
\draw[2cell]
node[between=alg and fib at .5, rotate=0] {\iso}
node[between=stalg and fibsp at .5, rotate=0] {\iso}
;
\end{tikzpicture}\]
consists of mutually inverse bijections.
\end{theorem}
\begin{proof}
\Cref{fib-alg-fib,alg-fib-alg} imply that the assignments on the left-hand side are inverse bijections. The inverse bijections on the right-hand side follow from those on the left-hand side by restriction, and \Cref{strictalgebra-to-split-fib,splitfib-to-strictalg}.
\end{proof}
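The distinction between the two sides of this theorem is not vacuous. The following standard example, sketched here under the assumption that $\C$ has chosen pullbacks and writing $\C^{\to}$ for the arrow category, gives a cloven fibration that is in general not split.

```latex
\[
\mathrm{cod} : \C^{\to} \to \C,
\qquad
\liftof{f} \,=\, \big(f^{*}X \to X\big)
\quad \text{a chosen pullback of } (X \to B) \text{ along } f : A \to B,
\]
\[
(fg)^{*}X \,\iso\, g^{*}\big(f^{*}X\big)
\quad \text{canonically, but in general not equal.}
\]
```

For composable $g : A' \to A$ and $f : A \to B$, pasting of pullback squares yields only a canonical isomorphism between the two chosen pullbacks, so the cleavage need not be multiplicative. Under the bijections above, $\fibtoalg{\mathrm{cod}}$ is then a pseudo $\funnyf$-algebra whose $\theta$ is a nonidentity isomorphism.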
\section{Exercises and Notes}\label{sec:fibration-exercises}
\begin{exercise}\label{exer:cartesian-properties}
Prove \Cref{cartesian-properties-iv,cartesian-properties-v} in \Cref{cartesian-properties}.
\end{exercise}
\begin{exercise}\label{exer:fibration-fromone}
Prove \Cref{fibration-fromone,fibration-composition}. Furthermore, show that if $P : \A\to\B$ and $Q : \B\to\C$ are split fibrations such that $\liftof{f}_{QP} = (\liftof{f}_Q)_P$ for every pre-lift $\prelift{Y}{f}$ with respect to $QP$, as in \Cref{expl:split-fib-not-preserved}, then the composite $QP : \A\to\C$ is a split fibration.
\end{exercise}
\begin{exercise}\label{exer:f-iimonad}
In \Cref{funnyf-is-iimonad}, check that:
\begin{enumerate}
\item The unit $\eta : 1_{\catoverc} \to \funnyf$ is a $2$-natural transformation.
\item $(\funnyf,\mu,\eta)$ satisfies the $2$-monad unity axiom.
\end{enumerate}
\end{exercise}
\subsection*{Notes}
\begin{note}[Discussion of Literature]
The concept of a fibration is due to Grothendieck \cite{grothendieck}. A fibration is also known as a \emph{fibered category}\index{fibered category}\index{category!fibered} in the literature. Some other places that discuss basic aspects of fibrations include \cite[Chapter 12]{barr-wells-category}, \cite[Chapter 8]{borceux2}, \cite[Chapter B1.3]{elephant}, and \cite{gray-fibred}. Generalizations of Grothendieck fibrations are discussed in \cite{street-yoneda,street_fibrations,street_fibrations-correction,street-conspectus}. Further discussions of fibrations can be found in \cite{benabou-fibered,fgiknv,harpaz,maltsiniotis}, among many others.
The main observation of this chapter, namely \Cref{fibration=psalgebra}, is stated in a number of papers in the literature, such as \cite{buckley}. However, we are not aware of any previously published detailed proof of this fact comparable to the one given in this chapter.
\end{note}
\chapter{Functors, Transformations, and Modifications}
\label{ch:functors}
The main purpose of this chapter is to introduce bicategorical analogues of functors and natural transformations. Lax functors and their variants are discussed in \Cref{sec:functors}. The main observation is that there is a category $\Bicat$ with small bicategories as objects and lax functors as morphisms. Lax transformations and their variants are discussed in \Cref{sec:natural-transformations}, and oplax transformations are discussed in \Cref{sec:oplax-transformations}. Strictly speaking, oplax transformations can be defined as lax transformations between the opposite lax functors, as we will see in \Cref{strong-optransformation}. However, the concept of an oplax transformation is so fundamental that it deserves its own name and discussion.
In \Cref{sec:modifications} we discuss modifications, which compare lax transformations. The main observation is that, for bicategories $\B$ and $\B'$ with $\B_0$ a set, there is a bicategory $\Bicat(\B,\B')$ with lax functors $\B\to\B'$ as objects, lax transformations as $1$-cells, and modifications as $2$-cells. Moreover, this is a $2$-category if $\B'$ is a $2$-category. In \Cref{sec:representables} we discuss the bicategorical analogues of representable functors. In particular, each object (resp., $1$-cell or $2$-cell) in a bicategory induces a representable pseudofunctor (resp., strong transformation or modification). In \Cref{sec:icons} we discuss icons, which are in canonical bijections with oplax transformations with component identity $1$-cells. The main observation is that there is a $2$-category $\Bicatic$ with small bicategories as objects, lax functors as $1$-cells, and icons as $2$-cells.
We remind the reader that we use \Cref{thm:bicat-pasting-theorem} and \Cref{conv:boundary-bracketing} to interpret pasting diagrams in bicategories. Furthermore, the bicategory axioms \eqref{hom-category-axioms}, \eqref{bicat-c-id}, and \eqref{middle-four} will often be used as in the computation \eqref{pasting-computation}, and we apply them tacitly. As before, $(\B,1,c,a,\ell,r)$ denotes a bicategory.
\section{Lax Functors}\label{sec:functors}
In this section we define lax functors and their variants between bicategories. The main observation is that there is a category whose objects are small bicategories and whose morphisms are lax functors. Recall from \Cref{not:discrete-cat1} that $\boldone$ denotes the discrete category with one object $*$.
\begin{motivation}
We saw in \Cref{ex:moncat-bicat} that each monoidal category may be regarded as a one-object bicategory. The next concept is the bicategorical analogue of a monoidal functor as in \Cref{def:monoidal-functor}. The bicategorical versions of the associativity axiom \eqref{f2} and the unity axioms \eqref{f0-left} and \eqref{f0-right} are \eqref{f2-bicat} and \eqref{f0-bicat} below.\dqed
\end{motivation}
\begin{definition}\label{def:lax-functors}
Suppose $(\B,1,c,a,\ell,r)$ and $(\B',1',c',a',\ell',r')$ are bicategories. A \index{lax functor}\index{functor!lax}\emph{lax functor} \[(F,F^2,F^0) : \B\to\B'\] from $\B$ to $\B'$ is a triple consisting of the following data.
\begin{description}
\item[Objects] $F : \B_0 \to \B'_0$ is a function on objects.
\item[Hom Categories] For each pair of objects $X,Y$ in $\B$, it is equipped with a \emph{local functor}\index{local!functor}\index{functor!local} \[F : \B(X,Y) \to \B'(FX,FY).\]
\item[Laxity Constraints] For all objects $X,Y,Z$ in $\B$, it is equipped with natural transformations
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=3.7, yscale=1.5]
\node (A) at (0,1) {$\B(Y,Z)\times\B(X,Y)$};
\node (B) at (1,1) {$\B(X,Z)$};
\node (C) at (0,0) {$\B'(FY,FZ)\times\B'(FX,FY)$};
\node (D) at (1,0) {$\B'(FX,FZ)$};
\node[font=\Large] at (.6,.5) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (.5,.6) {$F^2$};
\draw [arrow] (A) to node{\small{$c$}} (B);
\draw [arrow] (B) to node{\small{$F$}} (D);
\draw [arrow] (A) to node[swap]{\small{$F\times F$}} (C);
\draw [arrow] (C) to node[swap]{\small{$c'$}} (D);
\end{tikzpicture}\qquad
\begin{tikzpicture}[commutative diagrams/every diagram, xscale=2, yscale=1.5]
\node (A) at (0,1) {$\boldone$};
\node (B) at (1,1) {$\B(X,X)$}; \node (C) at (0,0) {};
\node (D) at (1,0) {$\B'(FX,FX)$};
\node[font=\Large] at (.6,.5) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (.4,.6) {$F^0$};
\draw [arrow] (A) to node{\small{$1_X$}} (B);
\draw [arrow] (B) to node{\small{$F$}} (D);
\draw [arrow, out=-90, in=170] (A) to node[near start, swap]{\small{$1'_{FX}$}} (D);
\end{tikzpicture}\]
with component $2$-cells
\[\begin{tikzcd}Fg \circ Ff \ar{r}{F^2_{g,f}} & F(gf)\end{tikzcd}\andspace
\begin{tikzcd} 1'_{FX} \ar{r}{F^0_X} & F1_X,\end{tikzcd}\]
called the \index{lax functor!lax functoriality constraint}\emph{lax functoriality constraint} and the \index{lax functor!lax unity constraint}\emph{lax unity constraint}.
\end{description}
The above data are required to make the following three diagrams commutative for all $1$-cells $f \in \B(W,X)$, $g \in \B(X,Y)$, and $h \in \B(Y,Z)$.
\begin{description}
\item[Lax Associativity]\index{associativity!lax functor}
\begin{equation}\label{f2-bicat}
\begin{tikzcd}
(Fh \circ Fg) \circ Ff \ar{r}{a'} \ar{d}[swap]{F^2_{h,g} *1_{Ff}}
& Fh \circ (Fg \circ Ff) \ar{d}{1_{Fh}*F^2_{g,f}}\\
F(hg) \circ Ff \ar{d}[swap]{F^2_{hg,f}} & Fh \circ F(gf) \ar{d}{F^2_{h,gf}}\\
F((hg)f) \ar{r}{Fa} & F(h(gf))
\end{tikzcd}
\end{equation}
in $\B'(FW,FZ)$.
\item[Lax Left and Right Unity]\index{unity!lax functor}
\begin{equation}\label{f0-bicat}
\begin{tikzcd}
1'_{FX} \circ Ff \ar{r}{\ell'} \ar{d}[swap]{F^0_X*1_{Ff}} & Ff\\
F1_X \circ Ff \ar{r}{F^2_{1_X,f}} & F(1_X\circ f) \ar{u}[swap]{F\ell}
\end{tikzcd}\qquad
\begin{tikzcd}
Ff \circ 1'_{FW} \ar{r}{r'} \ar{d}[swap]{1_{Ff}*F^0_W} & Ff\\
Ff \circ F1_W \ar{r}{F^2_{f,1_W}} & F(f\circ 1_W) \ar{u}[swap]{Fr}
\end{tikzcd}
\end{equation}
in $\B'(FW,FX)$.
\end{description}
This finishes the definition of a lax functor. Moreover:
\begin{itemize}
\item A lax functor is \emph{unitary}\index{lax functor!unitary}\index{unitary!lax functor} (resp., \index{lax functor!strictly unitary}\emph{strictly unitary}) if each lax unity constraint $F^0_X$ is an invertible $2$-cell (resp., identity $2$-cell).
\item A \emph{colax functor}\index{colax functor}\index{functor!colax} from $\B$ to $\B'$ is a lax functor from $\Bco$ to $\Bprimeco$, in which $\Bco$ and $\Bprimeco$ are the co-bicategories of $\B$ and $\B'$ as in \Cref{def:bicategory-co}.
\item A \emph{pseudofunctor}\index{pseudofunctor}\index{functor!pseudo-} is a lax functor in which $F^2$ and $F^0$ are natural isomorphisms.
\item A \emph{strict functor}\index{strict!functor}\index{functor!strict} is a lax functor in which $F^2$ and $F^0$ are identity natural transformations.
\item A strict functor between two $2$-categories is called a \index{functor!2-}\index{2-functor}\emph{$2$-functor}.
\item If $P$ is a property of functors, then a lax functor is said to be \emph{local $P$}\index{local!property of lax functors} or to \emph{have property $P$ locally} if each local functor between hom categories has property $P$. For example, a lax functor is a \emph{local equivalence}\index{local!equivalence} if each local functor is an equivalence of categories. \defmark
\end{itemize}
\end{definition}
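As promised in the Motivation above, this definition restricts to the monoidal case. The following identification is standard; we only sketch it, writing $I$ and $I'$ for the monoidal units of $M$ and $N$.

```latex
\[
F^2_{g,f} : Fg \otimes Ff \to F(g \otimes f)
\qquad\text{and}\qquad
F^0 : I' \to FI.
\]
```

For monoidal categories $M$ and $N$ regarded as one-object bicategories as in \Cref{ex:moncat-bicat}, a lax functor consists of a functor $F : M \to N$ together with the structure morphisms displayed above, and the axioms \eqref{f2-bicat} and \eqref{f0-bicat} reduce to the monoidal functor axioms \eqref{f2}, \eqref{f0-left}, and \eqref{f0-right}. Pseudofunctors and strict functors then correspond to strong and strict monoidal functors, respectively.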
\begin{explanation}\label{expl:lax-functor}
In \Cref{def:lax-functors}:
\begin{enumerate}
\item A lax functor strictly preserves vertical composition of $2$-cells and identity $2$-cells. On the other hand, a lax functor preserves identity $1$-cells and horizontal composition only up to the lax unity constraint $F^0$ and the lax functoriality constraint $F^2$, which do not even need to be invertible.
\item The naturality of $F^2$ means that, for $2$-cells $\alpha : f \to f'$ in $\B(X,Y)$ and $\beta : g \to g'$ in $\B(Y,Z)$, the diagram
\begin{equation}\label{f2-bicat-naturality}
\begin{tikzcd}
Fg \circ Ff \ar{d}[swap]{F\beta *F\alpha} \ar{r}{F^2_{g,f}}
& F(gf) \ar{d}{F(\beta*\alpha)}\\
Fg' \circ Ff' \ar{r}{F^2_{g',f'}} & F(g'f')
\end{tikzcd}
\end{equation}
in $\B'(FX,FZ)$ is commutative.
\item\label{fzero-natural}
The naturality of $F^0$ is the commutative diagram
\[\begin{tikzcd}
1'_{FX} \ar{d}[swap]{1_{1'_{FX}}} \ar{r}{F^0_X} & F1_X \ar{d}{F1_{1_X}}\\
1'_{FX} \ar{r}{F^0_X} & F1_X
\end{tikzcd}\]
in $\B'(FX,FX)$. Since $F^0_X 1_{1'_{FX}} = F^0_X$ by \eqref{hom-category-axioms} and since $F1_{1_X} = 1_{F1_X}$ by the functoriality of $F$, both composites in the above commutative diagram are equal to $F^0_X$. In other words, $F^0$ is completely determined by the $2$-cells $F^0_X : 1'_{FX} \to F1_X$ for objects $X$ in $\B$, with the naturality condition being redundant.
\item The lax associativity axiom \eqref{f2-bicat} is equal to the pasting diagram equality
\begin{equation}\label{f2-bicat-pasting}
\begin{tikzpicture}[xscale=3, yscale=2, baseline={(eq.base)}]
\node (F1) at (0,0) {$FW$}; \node (F2) at ($(F1) + (0,1)$) {$FX$};
\node (F3) at ($(F1)+(1,1)$) {$FY$}; \node (F4) at ($(F1)+(1,0)$) {$FZ$};
\node[font=\Large] at ($(F1)+(.8,.5)$) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.7,.6)$) {$F^2$};
\node[font=\Large] at ($(F1)+(.25,.5)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.15,.5)$) {$F^2$};
\node[font=\Large] at ($(F1)+(.55,-.2)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.4,-.15)$) {$Fa$};
\draw[arrow] (F1) to node{\small{$Ff$}} (F2);
\draw[arrow] (F2) to node{\small{$Fg$}} (F3);
\draw[arrow] (F3) to node{\small{$Fh$}} (F4);
\draw[arrow, bend right=60] (F1) to node[swap]{\small{$F(h(gf))$}} (F4);
\draw[arrow] (F1) to node{\scalebox{.8}{$F((hg)f)$}} (F4);
\draw[arrow] (F2) to node[near start,inner sep=0]{\scalebox{.8}{$F(hg)$}} (F4);
\node (eq) at ($(F1)+(1.5,.3)$) {\LARGE{$=$}};
\node (F5) at ($(F1)+(2,0)$) {$FW$};
\node (F6) at ($(F5) + (0,1)$) {$FX$}; \node (F7) at ($(F5) + (1,1)$) {$FY$};
\node (F8) at ($(F5)+(1,0)$) {$FZ$};
\node[font=\Large] at ($(F5)+(.2,.6)$) {\rotatebox{-45}{$\Rightarrow$}};
\node[font=\small] at ($(F5)+(.3,.8)$) {$F^2$};
\node[font=\Large] at ($(F5)+(.45,0)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F5)+(.6,0)$) {$F^2$};
\draw[arrow] (F5) to node{\small{$Ff$}} (F6);
\draw[arrow] (F6) to node{\small{$Fg$}} (F7);
\draw[arrow] (F7) to node{\small{$Fh$}} (F8);
\draw[arrow, bend right=60] (F5) to node[swap]{\small{$F(h(gf))$}} (F8);
\draw[arrow] (F5) to node[swap, inner sep=0]{\small{$F(gf)$}} (F7);
\end{tikzpicture}
\end{equation}
with the domain bracketing from \Cref{conv:boundary-bracketing} and $a'$ automatically inserted by \Cref{def:bicat-diagram-composite}.
\item The lax left unity axiom \eqref{f0-bicat} is equal to the pasting diagram equality:
\[\begin{tikzpicture}[xscale=3, yscale=2]
\node (F1) at (0,0) {$FW$}; \node (F2) at ($(F1) + (.5,.7)$) {$FX$};
\node (F3) at ($(F1)+(1,0)$) {$FX$};
\node[font=\Large] at ($(F1)+(.75,.35)$) {\rotatebox{-135}{$\Rightarrow$}};
\node at ($(F1)+(.65,.45)$) {\scalebox{.7}{$F^0$}};
\node[font=\Large] at ($(F1)+(.4,.4)$) {\rotatebox{-90}{$\Rightarrow$}};
\node at ($(F1)+(.3,.4)$) {\scalebox{.7}{$F^2$}};
\node[font=\Large] at ($(F1)+(.55,-.15)$) {\rotatebox{-90}{$\Rightarrow$}};
\node at ($(F1)+(.4,-.15)$) {$F\ell$};
\draw[arrow, bend left] (F1) to node{\small{$Ff$}} (F2);
\draw[arrow, bend left] (F2) to node{\small{$1'_{FX}$}} (F3);
\draw[arrow, bend right] (F2) to node[swap, inner sep=0]{\scalebox{.6}{$F1_{X}$}} (F3);
\draw[arrow] (F1) to node[near start, inner sep=1]{\scalebox{.6}{$F(1_Xf)$}} (F3);
\draw[arrow, bend right=60] (F1) to node[swap]{\small{$Ff$}} (F3);
\node at ($(F1)+(1.5,0)$) {\LARGE{$=$}};
\node (F4) at ($(F1)+(2,0)$) {$FW$}; \node (F5) at ($(F4) + (.5,.7)$) {$FX$};
\node (F6) at ($(F4) + (1,0)$) {$FX$};
\node[font=\Large] at ($(F4)+(.45,.2)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F4)+(.6,.2)$) {$\ell'$};
\draw[arrow, bend left] (F4) to node{\small{$Ff$}} (F5);
\draw[arrow, bend left] (F5) to node{\small{$1'_{FX}$}} (F6);
\draw[arrow, bend right=60] (F4) to node[swap]{\small{$Ff$}} (F6);
\end{tikzpicture}\]
Similarly, the lax right unity axiom \eqref{f0-bicat} is equal to the equality
\[\begin{tikzpicture}[xscale=3, yscale=2]
\node (F1) at (0,0) {$FW$}; \node (F2) at ($(F1) + (.5,.7)$) {$FW$};
\node (F3) at ($(F1)+(1,0)$) {$FX$};
\node[font=\Large] at ($(F1)+(.25,.35)$) {\rotatebox{-45}{$\Rightarrow$}};
\node at ($(F1)+(.35,.45)$) {\scalebox{.7}{$F^0$}};
\node[font=\Large] at ($(F1)+(.7,.4)$) {\rotatebox{-90}{$\Rightarrow$}};
\node at ($(F1)+(.6,.4)$) {\scalebox{.7}{$F^2$}};
\node[font=\Large] at ($(F1)+(.55,-.15)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.4,-.15)$) {$Fr$};
\draw[arrow, bend left] (F1) to node{\small{$1'_{FW}$}} (F2);
\draw[arrow, bend left] (F2) to node{\small{$Ff$}} (F3);
\draw[arrow, bend right] (F1) to node[swap, inner sep=0]{\scalebox{.6}{$F1_{W}$}} (F2);
\draw[arrow] (F1) to node[near end, inner sep=1]{\scalebox{.6}{$F(f1_W)$}} (F3);
\draw[arrow, bend right=60] (F1) to node[swap]{\small{$Ff$}} (F3);
\node at ($(F1)+(1.5,0)$) {\LARGE{$=$}};
\node (F4) at ($(F1)+(2,0)$) {$FW$}; \node (F5) at ($(F4) + (.5,.7)$) {$FW$};
\node (F6) at ($(F4) + (1,0)$) {$FX$};
\node[font=\Large] at ($(F4)+(.45,.2)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F4)+(.6,.2)$) {$r'$};
\draw[arrow, bend left] (F4) to node{\small{$1'_{FW}$}} (F5);
\draw[arrow, bend left] (F5) to node{\small{$Ff$}} (F6);
\draw[arrow, bend right=60] (F4) to node[swap]{\small{$Ff$}} (F6);
\end{tikzpicture}\]
of pasting diagrams.\dqed
\end{enumerate}
\end{explanation}
\begin{proposition}\label{iifunctor}\index{characterization of!a 2-functor}
For $2$-categories $\A$ and $\B$, a $2$-functor $F : \A\to\B$ consists of precisely the following data.
\begin{itemize}
\item A function $F : \A_0 \to \B_0$ on objects.
\item A functor $F: \A(X,Y) \to \B(FX,FY)$ for each pair of objects $X,Y$ in $\A$.
\end{itemize}
These data are required to satisfy the following two conditions.
\begin{enumerate}
\item $F$ is a functor between the underlying $1$-categories of $\A$ and $\B$.
\item $F$ preserves horizontal compositions of $2$-cells.
\end{enumerate}
\end{proposition}
\begin{proof}
If $F$ is a $2$-functor, then $F$ strictly preserves horizontal compositions of $1$-cells and of $2$-cells because $F^2$ is the identity natural transformation. Also, $F$ strictly preserves identity $1$-cells because $F^0$ is the identity. Conversely, if $F$ satisfies the stated conditions, then we define $F^2$ and $F^0$ as the identities. The lax associativity axiom \eqref{f2-bicat} and the lax unity axioms \eqref{f0-bicat} are trivially satisfied because every edge involved is an identity $2$-cell.
\end{proof}
\begin{explanation}\label{expl:iifunctor}
In other words, a $2$-functor $F : \A\to\B$ is an assignment of objects, $1$-cells, and $2$-cells in $\A$ to those in $\B$ that strictly preserves identity $1$-cells, identity $2$-cells, vertical compositions of $2$-cells, and horizontal compositions of $1$-cells and of $2$-cells.\dqed
\end{explanation}
\begin{example}[Opposite Lax Functors]\label{ex:opposite-lax-functor}\index{lax functor!opposite}
Each lax functor $(F,F^2,F^0) : \B \to \B'$ uniquely determines a lax functor
\[(\Fop, (\Fop)^2,(\Fop)^0) : \Bop \to \Bprimeop\] with the following data, in which $\Bop$ and $\Bprimeop$ are the opposite bicategories in \Cref{def:bicategory-opposite}.
\begin{itemize}
\item $\Fop = F$ on objects.
\item For objects $X,Y$ in $\B$, it is equipped with the functor
\[\Fop = F: \Bop(X,Y) = \B(Y,X) \to \B'(Y,X) =\Bprimeop(\Fop X,\Fop Y).\]
\item For $1$-cells $(g,f) \in\B(Z,Y)\times\B(Y,X)$, $(\Fop)^{2}_{g,f}$ is the $2$-cell
\[\begin{tikzcd}
Ff \circ Fg \ar{r}{F^2_{f,g}} & F(fg) \inspace \B'(Z,X)=\Bprimeop(X,Z).\end{tikzcd}\]
\item For each object $X$ in $\B$, $(\Fop)^{0}_X=F^0_X \in \B'(FX,FX) = \Bprimeop(\Fop X,\Fop X)$.
\end{itemize}
The lax associativity axiom and the lax unity axioms for $F^{\op}$ follow from those for $F$. We call $F^{\op}$ the \emph{opposite lax functor} of $F$, and similarly if $F$ is a pseudofunctor, a strict functor, or a $2$-functor.\dqed
\end{example}
\begin{example}[Identity Strict Functors]\label{ex:identity-strict-functor}
Each bicategory $\B$ has an \index{identity!strict functor}\index{strict!functor!identity}identity strict functor $1_{\B} : \B\to\B$.
\begin{itemize}
\item It is the identity function on the objects in $\B$.
\item It is the identity functor on $\B(X,Y)$ for objects $X,Y$ in $\B$.
\item For composable $1$-cells $(g,f)$, the component $(1_{\B})^2_{g,f}$ is the identity $2$-cell $1_{gf} = 1_g*1_f$.
\item The component $(1_{\B})^0_X$ is the identity $2$-cell $1_{1_X}$.
\end{itemize}
For $1_{\B}$, the lax associativity diagram \eqref{f2-bicat} follows from the naturality of the associator $a$, and both lax unity diagrams \eqref{f0-bicat} are commutative by definition.\dqed
\end{example}
\begin{example}[Functors]\label{ex:functor-laxfunctor}
Suppose $F : \C\to\D$ is a functor between categories. Regarding $\C$ and $\D$ as locally discrete bicategories as in \Cref{ex:category-as-bicat}, $F$ becomes a\index{functor!as a strict functor} strict functor because there are no non-identity $2$-cells in $\C$ and $\D$. Conversely, for each bicategory $\B$, each lax functor $\B \to \D$ is a strict functor. In particular, every lax (and hence strict) functor $\C\to\D$ is determined by a functor.\dqed
\end{example}
\begin{example}[Monoidal Functors]\label{ex:monfunctor-laxfunctor}
Suppose $(F,F_2,F_0) : \C\to\D$ is a monoidal functor as in \Cref{def:monoidal-functor}. Regarding $\C$ and $\D$ as one-object bicategories $\Sigma\C$ and $\Sigma\D$ as in \Cref{ex:moncat-bicat}, $(F,F_2,F_0) : \Sigma\C\to\Sigma\D$ is a \index{monoidal functor!as a lax functor}lax functor. Furthermore, it is a pseudofunctor (resp., strict functor) if the original monoidal functor is strong (resp., strict). Conversely, every lax functor $\Sigma\C\to\Sigma\D$ is determined by a monoidal functor $\C\to\D$, and similarly for pseudofunctors and strict functors.\dqed
\end{example}
\begin{example}[Colax Monoidal Functors]\label{ex:colax-monoidal-functor}
For monoidal categories $\C$ and $\D$, every colax functor $\Sigma\C\to\Sigma\D$ is determined by a colax monoidal functor $\C\to\D$, and vice versa. \index{monoidal functor!colax, a.k.a.\ oplax}Colax monoidal functors are also known as oplax monoidal functors and lax comonoidal functors.\dqed
\end{example}
\begin{example}[$\Cat$-Functors]\label{ex:2functor}
We saw in \Cref{2cat-cat-enriched-cat} that locally small $2$-categories are precisely $\Cat$-categories. A $2$-functor $F : \C\to\D$ between locally small $2$-categories is precisely a $\Cat$-functor\index{2-functor!as a $\Cat$-functor} in the sense of \Cref{def:enriched-functor}. Indeed, a $2$-functor $F$ satisfies $F^2 = \Id$ and $F^0=\Id$, so the two diagrams in \Cref{def:enriched-functor} are commutative. Conversely, given a $\Cat$-functor $G : \C\to\D$, by the two commutative diagrams in \Cref{def:enriched-functor}, we may define $G^2$ and $G^0$ to be the identity natural transformations. The three diagrams in \eqref{f2-bicat} and \eqref{f0-bicat} are commutative because every $2$-cell involved is an identity $2$-cell.\dqed
\end{example}
\begin{example}[Categories and Multi/Polycategories]\label{ex:cat-multicat-polycat}
There are $2$-functors
\[\begin{tikzcd}\Cat \ar{r} & \Multicat \ar{r} & \Polycat\end{tikzcd}\]
in which:
\begin{itemize}
\item $\Cat$ is the $2$-category of small categories, functors, and natural transformations in \Cref{ex:2cat-of-cat}.
\item $\Multicat$ is the $2$-category of small multicategories, multifunctors, and multinatural transformations in \Cref{multicat-2cat}.
\item $\Polycat$ is the $2$-category of small polycategories, polyfunctors, and polynatural transformations in \Cref{polycat-2cat}.
\item The first $2$-functor is specified as in \Cref{ex:category-as-operad,ex:functor-multifunctor,ex:nt-multi-nt}.
\item The second $2$-functor is specified as in \Cref{ex:multicat-as-polycat,ex:multifunctor-polyfunctor,ex:multint-polynt}.
\end{itemize}
All three $2$-categories above are locally small, so they are $\Cat$-categories.\dqed
\end{example}
\begin{example}[$2$-Vector Spaces]\label{ex:two-vector-strict-functor}
Recall the bicategory $\twovc$ of coordinatized $2$-vector spaces\index{2-vector space} in \Cref{ex:two-vector-space} and the $2$-category $\twovtc$ of totally coordinatized $2$-vector spaces in \Cref{ex:twovect-tc}. There is a strictly unitary pseudofunctor\index{pseudofunctor}\index{functor!pseudo-} \[(F,F^2,F^0) : \twovtc \to \twovc\] defined as follows.
\begin{itemize}
\item $F$ is the identity on objects. This is well defined because $\twovc$ and $\twovtc$ have the same objects.
\item A $1$-cell $A = (a_{ij})$ in $\twovtc(\{m\},\{n\})$ is an $n\times m$ matrix with each $a_{ij}$ a non-negative integer. Its image under $F$ is the $n\times m$ 2-matrix
\[F(A) = \big(\fieldc^{a_{ij}}\big)\]
with $\fieldc^0 = 0$.
\item A $2$-cell $M = (M_{ij}) : A \to A'=(a'_{ij})$ in $\twovtc(\{m\},\{n\})$ is an $n\times m$ matrix with each $M_{ij}$ an $a'_{ij} \times a_{ij}$ complex matrix. Its image under $F$ is the $n\times m$ matrix with $(i,j)$-entry the $\fieldc$-linear map
\[\begin{tikzcd}
\fieldc^{a_{ij}} \ar{r} & \fieldc^{a'_{ij}}
\end{tikzcd}\]
represented by the complex $a'_{ij} \times a_{ij}$ matrix $M_{ij}$ with respect to the standard bases.
\item The lax unity constraint $F^0$ is the identity. This is well defined because the identity 1-cell of $\{n\}$ in $\twovtc$ is the $n \times n$ identity matrix, with $1$'s along the diagonal and $0$'s in other entries. Its image under $F$ is the $n \times n$ $2$-matrix $1^n$ with copies of $\fieldc$ along the diagonal and $0$'s in other entries.
\item To define the lax functoriality constraint $F^2$, suppose $B = (b_{ki})$ is a 1-cell in $\twovtc(\{n\},\{p\})$, i.e., a $p \times n$ matrix with each $b_{ki}$ a non-negative integer. With $A = (a_{ij})$ as above and $BA = (c_{kj}) \in \twovtc(\{m\},\{p\})$, there are equalities as follows.
\[\begin{split}
F(B) &= \big(\fieldc^{b_{ki}}\big)\\
F(BA) &= \big(\fieldc^{c_{kj}}\big) \withspace c_{kj} = \sum_{i=1}^n \,b_{ki}a_{ij}\\
\big[F(B)F(A)\big]_{kj} &= \bigoplus_{i=1}^n\, \big(\fieldc^{b_{ki}} \otimes \fieldc^{a_{ij}}\big)
\end{split}\]
The 2-cell
\[\begin{tikzcd}[column sep=large]
F(B)F(A) \ar{r}{F^2_{B,A}} & F(BA)
\end{tikzcd}\]
in $\twovc$ has $(k,j)$-entry the following composite of canonical isomorphisms.
\begin{equation}\label{ftwoCanonicalIso}
\begin{tikzcd}
\bigoplus\limits_{i=1}^n\, \big(\fieldc^{b_{ki}} \otimes \fieldc^{a_{ij}}\big) \ar{r}{\iso}
& \bigoplus\limits_{i=1}^n\, \fieldc^{b_{ki}a_{ij}} \ar{r}{\iso}
& \fieldc^{\sum_{i=1}^n b_{ki}a_{ij}}
\end{tikzcd}
\end{equation}
\end{itemize}
This finishes the definition of $F$.
To see that $F$ is indeed a strictly unitary pseudofunctor, observe the following.
\begin{itemize}
\item $F$ is locally a functor between hom-categories. Indeed, if a complex $n \times m$ matrix represents a $\fieldc$-linear map $\fieldc^m \to \fieldc^n$ with respect to the standard bases, then the following two statements hold.
\begin{itemize}
\item Matrix multiplication corresponds to composition of $\fieldc$-linear maps.
\item The identity $n \times n$ matrix represents the identity map on $\fieldc^n$.
\end{itemize}
\item $F^2$ is natural because the canonical isomorphism in \eqref{ftwoCanonicalIso} is natural with respect to $A$ and $B$. Both the lax associativity axiom \eqref{f2-bicat} and the lax unity axiom \eqref{f0-bicat} follow from the definition that \eqref{ftwoCanonicalIso} is the canonical isomorphism.
\item $F^0$ is the identity, and $F^2$ is an isomorphism.
\end{itemize}
Therefore, $F$ is a strictly unitary pseudofunctor. In \Cref{cor:two-vector-spaces} we will observe that $F$ is a biequivalence.
\end{example}
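As a small illustration of the canonical isomorphism \eqref{ftwoCanonicalIso}, with entries chosen only for concreteness, take $n = 2$, $(b_{k1}, b_{k2}) = (1,2)$, and $(a_{1j}, a_{2j}) = (3,1)$. Then the $(k,j)$-entry of $F^2_{B,A}$ is the canonical isomorphism
\[\big(\fieldc^{1} \otimes \fieldc^{3}\big) \oplus \big(\fieldc^{2} \otimes \fieldc^{1}\big) \,\cong\, \fieldc^{3} \oplus \fieldc^{2} \,\cong\, \fieldc^{5},\]
whose codomain exponent agrees with the matrix entry $c_{kj} = b_{k1}a_{1j} + b_{k2}a_{2j} = 5$ of $BA$.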
The following observation describes a colax functor explicitly in terms of $\B$ and $\B'$.
\begin{proposition}\label{colax-functor-explicit}\index{characterization of!a colax functor}
Suppose $\B$ and $\B'$ are bicategories. A colax functor from $\B$ to $\B'$ is precisely a triple $(F,F^2,F^0)$ consisting of the following data.
\begin{itemize}
\item $F : \B_0 \to \B'_0$ is a function on objects.
\item For each pair of objects $X,Y$ in $\B$, it is equipped with a functor \[F : \B(X,Y) \to \B'(FX,FY).\]
\item For all objects $X,Y,Z$ in $\B$, it is equipped with natural transformations
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=3.7, yscale=1.5]
\node (A) at (0,1) {$\B(Y,Z)\times\B(X,Y)$};
\node (B) at (1,1) {$\B(X,Z)$};
\node (C) at (0,0) {$\B'(FY,FZ)\times\B'(FX,FY)$};
\node (D) at (1,0) {$\B'(FX,FZ)$};
\node[font=\Large] at (.6,.5) {\rotatebox{225}{$\Rightarrow$}};
\node[font=\small] at (.5,.6) {$F^2$};
\draw [arrow] (A) to node{\small{$c$}} (B);
\draw [arrow] (B) to node{\small{$F$}} (D);
\draw [arrow] (A) to node[swap]{\small{$F\times F$}} (C);
\draw [arrow] (C) to node[swap]{\small{$c'$}} (D);
\end{tikzpicture}\qquad
\begin{tikzpicture}[commutative diagrams/every diagram, xscale=2, yscale=1.5]
\node (A) at (0,1) {$\boldone$};
\node (B) at (1,1) {$\B(X,X)$}; \node (C) at (0,0) {};
\node (D) at (1,0) {$\B'(FX,FX)$};
\node[font=\Large] at (.6,.5) {\rotatebox{225}{$\Rightarrow$}};
\node[font=\small] at (.4,.6) {$F^0$};
\draw [arrow] (A) to node{\small{$1_X$}} (B);
\draw [arrow] (B) to node{\small{$F$}} (D);
\draw [arrow, out=-90, in=170] (A) to node[near start, swap]{\small{$1'_{FX}$}} (D);
\end{tikzpicture}\]
with component $2$-cells
\[\begin{tikzcd}Fg \circ Ff & F(gf) \ar{l}[swap]{F^2_{g,f}}\end{tikzcd}\andspace
\begin{tikzcd} 1'_{FX} & F1_X.\ar{l}[swap]{F^0_X}\end{tikzcd}\]
\end{itemize}
The above data are required to make the following three diagrams commutative for all $1$-cells $f \in \B(W,X)$, $g \in \B(X,Y)$, and $h \in \B(Y,Z)$.
\begin{description}
\item[Lax Associativity]\index{associativity!colax functor}
\begin{equation}\label{colax-f2}
\begin{tikzcd}
(Fh \circ Fg) \circ Ff \ar{r}{a'} & Fh \circ (Fg \circ Ff) \\
F(hg) \circ Ff \ar{u}{F^2 *1_{Ff}} & Fh \circ F(gf) \ar{u}[swap]{1_{Fh}*F^2}\\
F((hg)f) \ar{u}{F^2} \ar{r}{Fa} & F(h(gf)) \ar{u}[swap]{F^2}
\end{tikzcd}
\end{equation}
in $\B'(FW,FZ)$.
\item[Lax Left and Right Unity]\index{unity!colax functor}
\begin{equation}\label{colax-f0}
\begin{tikzcd}
1'_{FX} \circ Ff \ar{r}{\ell'} & Ff \\
F1_X \circ Ff \ar{u}{F^0*1_{Ff}} & F(1_X\circ f) \ar{l}[swap]{F^2} \ar{u}[swap]{F\ell}\end{tikzcd}\qquad
\begin{tikzcd}
Ff \circ 1'_{FW} \ar{r}{r'} & Ff \\
Ff \circ F1_W \ar{u}{1_{Ff}*F^0} & F(f\circ 1_W) \ar{l}[swap]{F^2} \ar{u}[swap]{Fr}
\end{tikzcd}
\end{equation}
in $\B'(FW,FX)$.
\end{description}
\end{proposition}
\begin{proof}
This follows from the definition $\Bco(X,Y) = \B(X,Y)^{\op}$, and similarly for $\Bprimeco$, together with the fact that each functor $\C^{\op}\to\D^{\op}$ is uniquely determined by a functor $\C\to\D$.
\end{proof}
The following observation is the bicategorical analogue of a constant functor at a fixed object in a category.
\begin{proposition}\label{constant-pseudofunctor}\label{constant pseudofunctor}\index{pseudofunctor!constant}
Suppose $X$ is an object in a bicategory $\B$, and $\A$ is another bicategory. Then there is a strictly unitary pseudofunctor \[\conof{X} : \A\to\B\] defined as follows.
\begin{itemize}
\item $\conof{X}$ sends each object of $\A$ to $X$.
\item For each pair of objects $Y,Z$ in $\A$, the functor
\[\conof{X} : \A(Y,Z) \to \B(X,X)\] sends
\begin{itemize}
\item every $1$-cell in $\A(Y,Z)$ to the identity $1$-cell $1_X$ of $X$;
\item every $2$-cell in $\A(Y,Z)$ to the identity $2$-cell $1_{1_X}$ of the identity $1$-cell.
\end{itemize}
\item For each object $Y$ of $\A$, the lax unity constraint is
\[(\conof{X})^0_Y = 1_{1_X} : 1_X \to 1_X.\]
\item For each pair of composable $1$-cells $(g,f)$ in $\A$, the lax functoriality constraint is
\[(\conof{X})^2_{g,f} = \ell_{1_X} : 1_X 1_X \to 1_X.\]
\end{itemize}
\end{proposition}
\begin{proof}
The naturality \eqref{f2-bicat-naturality} of $(\conof{X})^2$ follows from the unity properties in \eqref{hom-category-axioms} and \eqref{bicat-c-id}. \Cref{bicat-l-equals-r}, which says that $\ell_{1_X} = r_{1_X}$, is used twice below. The lax associativity diagram \eqref{f2-bicat} for $\conof{X}$ is the outermost diagram in
\[\begin{tikzcd}
(1_X1_X)1_X \ar{r}{a} \ar{d}[swap]{\ell*1} & 1_X(1_X1_X) \ar{d}{1*\ell}\\
1_X1_X \ar{d}[swap]{\ell} \ar{r}{1} \ar{dr}{\ell} & 1_X1_X \ar{d}{\ell}\\
1_X \ar{r}{1} & 1_X
\end{tikzcd}\]
in which $\ell = \ell_{1_X}$, and $1 = 1_{1_X}$ or $1_{1_X1_X}$. Since \[\ell_{1_X}*1_{1_X} = r_{1_X}*1_{1_X},\] the top square is commutative by the middle unity axiom \eqref{bicat-unity} and \eqref{hom-category-axioms}, which also implies the bottom square is commutative.
Similarly, the lax right unity diagram \eqref{f0-bicat} is the diagram
\[\begin{tikzcd}
1_X1_X \ar{r}{r} \ar{d}[swap]{1*1} & 1_X\\
1_X1_X \ar{r}{\ell} & 1_X \ar{u}[swap]{1}\end{tikzcd}\]
in which $r=r_{1_X}$. Since there are equalities
\[1_{1_X}\ell_{1_X} (1_{1_X}*1_{1_X}) = \ell_{1_X}1_{1_X1_X} = \ell_{1_X},\] the above diagram is commutative by \Cref{bicat-l-equals-r}. The lax left unity axiom is proved by the previous displayed line.
\end{proof}
\begin{definition}\label{def:constant-pseudofunctor}
For a bicategory $\A$ and an object $X$ in a bicategory $\B$, the strictly unitary pseudofunctor $\conof{X} : \A\to\B$ in \Cref{constant-pseudofunctor} is called the \emph{constant pseudofunctor} at $X$.
\end{definition}
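Note that, while $\conof{X}$ is strictly unitary, it is in general not a strict functor: its lax functoriality constraint has components
\[(\conof{X})^2_{g,f} = \ell_{1_X} : 1_X 1_X \to 1_X,\]
which are invertible but not identity $2$-cells unless $\ell_{1_X}$ is an identity. In particular, if $\B$ is a $2$-category, then $\conof{X}$ is a strict functor.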
For a category $\C$ with all pullbacks, recall from \Cref{ex:spans} that $\Span(\C)$ is the bicategory with objects those in $\C$, $1$-cells the spans in $\C$, and $2$-cells morphisms of spans.
\begin{proposition}\label{spans-functor}\index{functor!induced pseudofunctor in spans}\index{span!induced pseudofunctor}
Suppose $F : \C\to\D$ is a functor such that $\C$ and $\D$ have all pullbacks, and that $F$ preserves pullbacks up to isomorphisms as in \eqref{preserve-limits}. Then $F$ induces a strictly unitary pseudofunctor \[F_* : \Span(\C)\to\Span(\D).\] Moreover, if $F$ preserves chosen pullbacks, then $F_*$ is a strict functor.
\end{proposition}
\begin{proof}
The pseudofunctor $F_*$ is defined as follows.
\begin{description}
\item[Objects] $F_*$ is the same as $F$ on objects.
\item[Hom Categories] For objects $A,B\in\C$, the functor \[F_* : \Span(\C)(A,B) \to \Span(\D)(FA,FB)\] sends a span $(f_1,f_2)$ from $A$ to $B$ in $\C$ in the form \eqref{axb-span} to the span $(Ff_1,Ff_2)$ from $FA$ to $FB$ in $\D$. For a $2$-cell $\phi$ as in \eqref{span-2cell}, $F_*\phi = F\phi$. With these definitions, $F_*$ strictly preserves identity $1$-cells, identity $2$-cells, and vertical composition because they are defined using identity morphisms and composition in $\C$.
\item[Lax Unity Constraint] Because $F_*$ strictly preserves identity $1$-cells, we define each component of the lax unity constraint $F_*^0$ as the identity $2$-cell.
\item[Lax Functoriality Constraint] To define $F_*^2$, suppose $f=(f_1,f_2)$ is a span from $A$ to $B$, and $g=(g_1,g_2)$ is a span from $B$ to $C$ in $\C$ as in \eqref{span-1cell-hcomp}. Applying the functor $F$, there is a commutative solid-arrow diagram
\begin{equation}\label{span-f2}
\begin{tikzcd}[column sep=large]
&& FX\timesover{FB} FY \ar[densely dashed, shorten >=-3pt]{d}{\exists !\, \psi}[swap]{\cong} \arrow[shorten <=-1em]{ld}[swap]{p_1'} \arrow[shorten <=-1em]{rd}{p_2'}
\arrow[out=180,in=90]{lldd}[swap]{Ff_1\circ p_1'} \arrow[out=0,in=90]{rrdd}{Fg_2 \circ p_2'}&&\\
& FX \arrow{ld}[swap, near start]{Ff_1} \arrow{rd}[swap]{Ff_2} & F\bigl(X\timesover{B} Y\bigr) \ar{l}[swap, near start]{Fp_1} \ar{r}[near start]{Fp_2} & FY \arrow{ld}{Fg_1} \arrow{rd}[near start]{Fg_2} &\\
FA && FB && FC\\ \end{tikzcd}
\end{equation}
in $\D$ in which:
\begin{itemize}
\item $(p_1,p_2)$ is the chosen pullback of $(f_2,g_1)$ in $\C$.
\item The middle lower triangle is a pullback by the assumption that $F$ preserves pullbacks.
\item $(p_1',p_2')$ is the chosen pullback of $(Ff_2,Fg_1)$ in $\D$.
\end{itemize}
The universal property of pullbacks implies that there exists a unique isomorphism \[\begin{tikzcd}
FX \timesover{FB} FY \ar{r}{\psi}[swap]{\cong} & F\bigl(X\timesover{B} Y\bigr)\end{tikzcd}\] in $\D$ such that \[p_1'=Fp_1 \circ \psi \andspace p_2'=Fp_2\circ \psi.\] Therefore, $\psi : Fg \circ Ff \to F(gf)$ is an invertible $2$-cell in $\Span(\D)$, which we define as the component $(F_*^2)_{g,f}$.
\end{description}
The naturality of $F^2_*$ and the commutativity of the diagrams \eqref{f2-bicat} and \eqref{f0-bicat} also follow from the universal property of pullbacks. Therefore, $F_*$ is a pseudofunctor, which is strictly unitary by construction.
Finally, if $F$ preserves chosen pullbacks, then $\psi$ is the identity morphism. So $F^2_*$ is the identity natural transformation.
\end{proof}
Next we define composites of lax functors.
\begin{definition}\label{def:lax-functors-composition}
Suppose \[\begin{tikzcd}[column sep=huge]
\B \ar{r}{(F,F^2,F^0)} & \C \ar{r}{(G,G^2,G^0)} & \D\end{tikzcd}\]
are lax functors between bicategories. The \emph{composite}\index{composition!lax functors}\index{lax functor!composite}
\[\begin{tikzpicture}[xscale=4, yscale=1.3]
\draw[0cell]
(0,0) node (B) {\B}
(B)++(1,0) node (D) {\D}
;
\draw[1cell]
(B) edge node {(GF,(GF)^2,(GF)^0)} (D)
;
\end{tikzpicture}\]
is defined as follows.
\begin{description}
\item[Objects] $GF : \B_0 \to \D_0$ is the composite of the functions $F : \B_0 \to \C_0$ and $G : \C_0 \to \D_0$ on objects.
\item[Hom Categories] For objects $X,Y$ in $\B$, it is equipped with the composite functor
\[\begin{tikzpicture}[xscale=2.5, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {\B(X,Y)}
($(x11)+(1,0)$) node (x12) {\C(FX,FY)}
($(x12)+(1.2,0)$) node (x13) {\D(GFX,GFY).}
($(x11)+(0,.5)$) node[inner sep=0pt] (s) {}
($(x13)+(0,.5)$) node[inner sep=0pt] (t) {}
;
\draw[1cell]
(x11) edge node {F} (x12)
(x12) edge node {G} (x13)
(x11) edge[-,shorten >=-1pt] (s)
(s) edge[-,shorten <=-1pt, shorten >=-1pt] node {GF} (t)
(t) edge[shorten <=-1pt] (x13)
;
\end{tikzpicture}\]
\item[Lax Unity Constraint] For each object $X$ in $\B$, it is equipped with the vertically composed $2$-cell
\begin{equation}\label{lax-functors-comp-zero}
\begin{tikzpicture}[xscale=2.5, yscale=1.3, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {1_{GFX}}
($(x11)+(1,0)$) node (x12) {G1_{FX}}
($(x12)+(1,0)$) node (x13) {GF1_X}
($(x11)+(0,.5)$) node[inner sep=0pt] (s) {}
($(x13)+(0,.5)$) node[inner sep=0pt] (t) {}
;
\draw[1cell]
(x11) edge node {G^0_{FX}} (x12)
(x12) edge node {G(F^0_X)} (x13)
(x11) edge[-,shorten >=-1pt] (s)
(s) edge[-,shorten <=-1pt, shorten >=-1pt] node {(GF)^0_X} (t)
(t) edge[shorten <=-1pt] (x13)
;
\end{tikzpicture}
\end{equation}
in $\D(GFX,GFX)$.
\item[Lax Functoriality Constraint] For $1$-cells $(g,f) \in \B(Y,Z) \times \B(X,Y)$, it is equipped with the vertically composed $2$-cell
\begin{equation}\label{lax-functors-comp-two}
\begin{tikzpicture}[xscale=3.5, yscale=1.5, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {GFg \circ GFf}
($(x11)+(1,0)$) node (x12) {G(Fg\circ Ff)}
($(x12)+(1,0)$) node (x13) {GF(gf)}
($(x11)+(0,.5)$) node[inner sep=0pt] (s) {}
($(x13)+(0,.5)$) node[inner sep=0pt] (t) {}
;
\draw[1cell]
(x11) edge node {G^2_{Fg,Ff}} (x12)
(x12) edge node {G(F^2_{g,f})} (x13)
(x11) edge[-,shorten >=-1pt] (s)
(s) edge[-,shorten <=-1pt, shorten >=-1pt] node {(GF)^2_{g,f}} (t)
(t) edge[shorten <=-1pt] (x13)
;
\end{tikzpicture}
\end{equation}
in $\D(GFX,GFZ)$.
\end{description}
This finishes the definition of the composite.
\end{definition}
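For example, under the correspondence in \Cref{ex:monfunctor-laxfunctor}, if $F$ and $G$ arise from monoidal functors between monoidal categories, then the component \eqref{lax-functors-comp-two} is the composite
\[\begin{tikzcd}[column sep=large]
GFg \otimes GFf \ar{r}{G_2} & G(Fg \otimes Ff) \ar{r}{G(F_2)} & GF(g \otimes f)
\end{tikzcd}\]
of monoidal constraints, so the composite lax functor corresponds to the usual composite monoidal functor.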
\begin{lemma}\label{lax-functors-compose}
Suppose $F : \B\to\C$ and $G : \C\to\D$ are lax functors between bicategories.
\begin{enumerate}
\item The composite $(GF,(GF)^2,(GF)^0)$ is a lax functor from $\B$ to $\D$.
\item If both $F$ and $G$ are pseudofunctors (resp., strict functors, unitary, or strictly unitary), then so is the composite $GF$.
\end{enumerate}
\end{lemma}
\begin{proof}
To check that $GF$ is a lax functor, first observe that the naturality of $(GF)^2$ follows from the naturality of $F$ and $G$ as in \eqref{f2-bicat-naturality} using the commutative diagram
\[\begin{tikzcd}[column sep=huge]
GFg \circ GFf \ar{d}[swap]{GF\beta*GF\alpha} \ar{r}{G^2_{Fg,Ff}} & G(Fg\circ Ff) \ar{d}{G(F\beta*F\alpha)} \ar{r}{G(F^2_{g,f})} & GF(gf) \ar{d}{GF(\beta*\alpha)}\\
GFg' \circ GFf' \ar{r}{G^2_{Fg',Ff'}} & G(Fg'\circ Ff') \ar{r}{G(F^2_{g',f'})} & GF(g'f')
\end{tikzcd}\]
for $2$-cells $\alpha : f \to f'$ in $\B(X,Y)$ and $\beta : g \to g'$ in $\B(Y,Z)$.
The lax associativity diagram \eqref{f2-bicat} for $GF$ is the outermost diagram below
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=2.3, yscale=1.5]
\node (11) at (0,4) {$(GFh \circ GFg)\circ GFf$};
\node (12) at (4,4) {$GFh \circ (GFg\circ GFf)$};
\node (21) at (0,3) {$G(Fh \circ Fg)\circ GFf$};
\node (22) at (4,3) {$GFh \circ G(Fg\circ Ff)$};
\node (31) at (0,2) {$GF(hg)\circ GFf$};
\node (32) at (1.2,2) {$G((Fh \circ Fg)\circ Ff)$};
\node (33) at (2.8,2) {$G(Fh \circ (Fg\circ Ff))$};
\node (34) at (4,2) {$GFh \circ GF(gf)$};
\node (41) at (0,1) {$G(F(hg)\circ Ff)$};
\node (42) at (4,1) {$G(Fh \circ F(gf))$};
\node (51) at (0,0) {$GF((hg)f)$};
\node (52) at (4,0) {$GF(h(gf))$};
\draw[arrow] (11) to node{\small{$a$}} (12);
\draw[arrow] (11) to node[swap]{\small{$G^2*1$}} (21);
\draw[arrow] (12) to node{\small{$1*G^2$}} (22);
\draw[arrow] (22) to node[swap]{\small{$G^2$}} (33);
\draw[arrow] (21) to node{\small{$G^2$}} (32);
\draw[arrow] (32) to node{\small{$Ga$}} (33);
\draw[arrow] (32) to node{\small{$G(F^2*1)$}} (41);
\draw[arrow] (21) to node[swap]{\small{$GF^2*1$}} (31);
\draw[arrow] (31) to node[swap]{\small{$G^2$}} (41);
\draw[arrow] (33) to node[swap]{\small{$G(1*F^2)$}} (42);
\draw[arrow] (22) to node{\small{$1*GF^2$}} (34);
\draw[arrow] (34) to node{\small{$G^2$}} (42);
\draw[arrow] (41) to node[swap]{\small{$GF^2$}} (51);
\draw[arrow] (42) to node{\small{$GF^2$}} (52);
\draw[arrow] (51) to node{\small{$GFa$}} (52);
\end{tikzpicture}\]
in which every identity $2$-cell is denoted by $1$. Since $G1_{Ff} = 1_{GFf}$, the left triangle is commutative by the naturality of $G^2$. Similarly, the right triangle is commutative. The top hexagon is commutative by the lax associativity diagram \eqref{f2-bicat} for $G$. The bottom hexagon is $G$ applied to the lax associativity diagram \eqref{f2-bicat} for $F$, so it is commutative.
The lax left unity diagram \eqref{f0-bicat} for $GF$ is the outermost diagram below.
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=3, yscale=1.5]
\node (11) at (0,2) {$1_{GFX} \circ GFf$};
\node (12) at (2,2) {$GFf$};
\node (21) at (0,1) {$G1_{FX} \circ GFf$};
\node (22) at (1,1) {$G(1_{FX} \circ Ff)$};
\node (23) at (2,1) {$GF(1_X\circ f)$};
\node (31) at (0,0) {$GF1_X \circ GFf$};
\node (32) at (2,0) {$G(F1_X \circ Ff)$};
\draw[arrow] (11) to node{\small{$\ell$}} (12);
\draw[arrow] (11) to node[swap]{\small{$G^0*1$}} (21);
\draw[arrow] (21) to node{\small{$G^2$}} (22);
\draw[arrow] (22) to node{\small{$G\ell$}} (12);
\draw[arrow] (22) to node[swap,near start,inner sep=1pt]{\small{$G(F^0*1)$}} (32);
\draw[arrow] (23) to node[swap]{\small{$GF\ell$}} (12);
\draw[arrow] (21) to node[swap]{\small{$GF^0*1$}} (31);
\draw[arrow] (31) to node[swap]{\small{$G^2$}} (32);
\draw[arrow] (32) to node[swap]{\small{$GF^2$}} (23);
\end{tikzpicture}\]
Since $G1_{Ff} = 1_{GFf}$, the bottom trapezoid is commutative by the naturality of $G^2$. The top trapezoid is commutative by the left unity diagram for $G$. The right triangle is $G$ applied to the left unity diagram for $F$, so it is commutative. The lax right unity diagram for $GF$ is proved similarly. Therefore, $GF$ is a lax functor from $\B$ to $\D$.
If $F$ and $G$ are both pseudofunctors, then each component of $(GF)^0$ is the vertical composite of two invertible $2$-cells, so it is an invertible $2$-cell. This also covers the case where $F$ and $G$ are unitary lax functors. Similarly, $(GF)^2$ has invertible components, so $GF$ is a pseudofunctor. Finally, if $F$ and $G$ are both strict functors, then the components of $(GF)^0$ and $(GF)^2$ are vertical composites of identity $2$-cells, so they are identity $2$-cells. This also covers the case of strictly unitary lax functors.
\end{proof}
Recall from \Cref{def:small-bicat} that a bicategory is small if it has a set of objects and is locally small. A subcategory of a category $\C$ is called \emph{wide} if it contains all the objects in the category $\C$. The identity strict functor of a bicategory in \Cref{ex:identity-strict-functor} is used in the following observation.
\begin{theorem}\label{thm:cat-of-bicat}\index{category!of bicategories and lax functors}\index{bicategory!category of -}
There is a category $\Bicat$ with
\begin{itemize}
\item small bicategories as objects,
\item lax functors between them as morphisms,
\item composites of lax functors as in \Cref{def:lax-functors-composition}, and
\item identity strict functors in \Cref{ex:identity-strict-functor} as identity morphisms.
\end{itemize}
Furthermore:
\begin{enumerate}
\item $\Bicat$ contains the wide subcategories:
\begin{enumerate}[label=(\roman*)]
\item $\Bicatu$ with unitary lax functors as morphisms.
\item $\Bicatsu$ \label{notation:bicatsu}with strictly unitary lax functors as morphisms.
\item $\Bicatps$ with pseudofunctors as morphisms.
\item $\Bicatsup$ with strictly unitary pseudofunctors as morphisms.
\item $\Bicatst$ with strict functors as morphisms.
\end{enumerate}
\item There is a category\label{notation:bicatco} $\Bicatco$\index{category!of bicategories and colax functors} with small bicategories as objects and colax functors as morphisms.
\end{enumerate}
\end{theorem}
\begin{proof}
We checked in \Cref{lax-functors-compose} that the composite of two lax functors is a lax functor. The smallness assumption ensures that given any two small bicategories, there is only a set of lax functors between them. To show that $\Bicat$ is a category, we need to check that composition of lax functors is strictly associative and unital.
Reusing the notations in \Cref{def:lax-functors}, suppose $(H,H^2,H^0) : \D\to\E$ is a third lax functor. On objects, $H(GF)$ and $(HG)F$ are the same function $\B_0\to\E_0$ because composition of functions is strictly associative. Likewise, for objects $X,Y$ in $\B$, $H(GF)$ and $(HG)F$ are the same functors \[\B(X,Y)\to\E(HGFX,HGFY)\] because composition of functors is strictly associative.
For each object $X$ in $\B$, both $(H(GF))^0_X$ and $((HG)F)^0_X$ are equal to the vertical composite
\[\begin{tikzpicture}[xscale=2.7, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {1_{HGFX}}
($(x11)+(1,0)$) node (x12) {H1_{GFX}}
($(x12)+(1,0)$) node (x13) {HG1_{FX}}
($(x13)+(1,0)$) node (x14) {HGF1_X}
($(x12)+(0,.5)$) node[inner sep=0pt] (s) {}
($(x14)+(0,.5)$) node[inner sep=0pt] (t) {}
($(x11)+(0,-.5)$) node[inner sep=0pt] (s2) {}
($(x13)+(0,-.5)$) node[inner sep=0pt] (t2) {}
;
\draw[1cell]
(x11) edge node {H^0_{GFX}} (x12)
(x12) edge node {HG^0_{FX}} (x13)
(x13) edge node {HGF^0_X} (x14)
(x12) edge[-,shorten >=-1pt] (s)
(s) edge[-,shorten <=-1pt, shorten >=-1pt] node {H(GF)^0_X} (t)
(t) edge[shorten <=-1pt] (x14)
(x11) edge[-,shorten >=-1pt] (s2)
(s2) edge[-,shorten <=-1pt, shorten >=-1pt] node[swap] {(HG)^0_{FX}} (t2)
(t2) edge[shorten <=-1pt] (x13)
;
\end{tikzpicture}\]
of $2$-cells in $\E(HGFX,HGFX)$.
For $1$-cells $(g,f) \in \B(Y,Z) \times \B(X,Y)$, both $(H(GF))^2_{g,f}$ and $((HG)F)^2_{g,f}$ are equal to the vertical composite
\[\begin{tikzpicture}[xscale=3.2, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {HGFg \circ HGFf}
($(x11)+(1,0)$) node (x12) {H(GFg \circ GFf)}
($(x12)+(1,0)$) node (x13) {HG(Fg\circ Ff)}
($(x13)+(1,0)$) node (x14) {HGF(gf)}
($(x12)+(0,.5)$) node[inner sep=0pt] (s) {}
($(x14)+(0,.5)$) node[inner sep=0pt] (t) {}
($(x11)+(0,-.5)$) node[inner sep=0pt] (s2) {}
($(x13)+(0,-.5)$) node[inner sep=0pt] (t2) {}
;
\draw[1cell]
(x11) edge node {H^2} (x12)
(x12) edge node {HG^2} (x13)
(x13) edge node {HGF^2} (x14)
(x12) edge[-,shorten >=-1pt] (s)
(s) edge[-,shorten <=-1pt, shorten >=-1pt] node {H(GF)^2} (t)
(t) edge[shorten <=-1pt] (x14)
(x11) edge[-,shorten >=-1pt] (s2)
(s2) edge[-,shorten <=-1pt, shorten >=-1pt] node[swap] {(HG)^2} (t2)
(t2) edge[shorten <=-1pt] (x13)
;
\end{tikzpicture}\]
of $2$-cells in $\E(HGFX,HGFZ)$. Therefore, the composite lax functors $H(GF)$ and $(HG)F$ are equal.
Composition of lax functors is strictly unital with respect to the identity strict functors because the latter are defined by the identity functions on objects, identity functors on hom categories, and identity $2$-cells, which are preserved by lax functors and are strictly unital with respect to vertical composition. Therefore, $\Bicat$ is a category. Moreover, $\Bicatu$, $\Bicatsu$, $\Bicatps$, $\Bicatsup$, and $\Bicatst$ are categories because (strictly) unitary lax functors, (strictly unitary) pseudofunctors, and strict functors are also closed under composition by \Cref{lax-functors-compose}.
Finally, colax functors are just lax functors between co-bicategories, and their composites are defined as above. This composition is strictly associative and unital with respect to the identity strict functors of the co-bicategories. Therefore, $\Bicatco$ is a category.
\end{proof}
\begin{explanation}
Here is a conceptual way to explain the strict associativity of composition of lax functors. Coherence data are cells one dimension higher than the things being compared: the laxity of a lax functor $F$ is a $2$-cell $F^2_{g,f}$ comparing composites of $1$-cells, and similarly for $F^0_X$. In a bicategory there are no cells above dimension $2$, or, one might say, all higher cells are identities, so vertical composition of $2$-cells is strictly associative. The laxity of a composite of lax functors is itself a composite of $2$-cells, and since those compose associatively, the laxity of $H(GF)$ is the same as that of $(HG)F$.\dqed
\end{explanation}
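The first step of the proof above, that $H(GF)$ and $(HG)F$ agree on objects because function composition is strictly associative, can be checked in a small decategorified sketch. The functions \texttt{F}, \texttt{G}, \texttt{H} below are illustrative stand-ins for the object parts of lax functors, not part of the text:

```python
# Object parts of lax functors are plain functions between object sets;
# F, G, H are illustrative stand-ins.
def compose(g, f):
    """Composite of two functions, applied right to left."""
    return lambda x: g(f(x))

F = lambda x: x + 1   # object part of a lax functor B -> C
G = lambda x: 2 * x   # object part of a lax functor C -> D
H = lambda x: x ** 2  # object part of a lax functor D -> E

H_GF = compose(H, compose(G, F))  # H(GF) on objects
HG_F = compose(compose(H, G), F)  # (HG)F on objects

# Strict associativity: the two composites agree pointwise.
assert all(H_GF(x) == HG_F(x) for x in range(20))
```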
\begin{example}
In the context of \Cref{ex:functor-laxfunctor}, composition of functors is composition of strict functors between locally discrete bicategories. In the context of \Cref{ex:monfunctor-laxfunctor}, composition of monoidal functors is composition of lax functors between one-object bicategories. In the context of \Cref{ex:2functor}, composition of $\Cat$-functors between $\Cat$-categories is composition of strict functors between locally small $2$-categories.\dqed
\end{example}
\begin{example}\label{ex:span-induced-functors-compose}
In the setting of \Cref{spans-functor}, suppose $G : \D \to \E$ is a functor such that $\E$ has all pullbacks, and that $G$ preserves pullbacks up to isomorphism. An inspection of that proof shows that the composite of the strictly unitary pseudofunctors
\[\begin{tikzcd}
\Span(\C) \ar{r}{F_*} & \Span(\D) \ar{r}{G_*} & \Span(\E)\end{tikzcd}\]
is equal to the strictly unitary pseudofunctor $(GF)_*$ induced by the composite functor $GF : \C \to \E$. The key part is the equality \[(G_*F_*)^2 = (GF)_*^2.\] Using the notation in \eqref{span-f2}, this equality boils down to the commutative diagram
\[\begin{tikzcd}[column sep=tiny]
GFX \timesover{GFB} GFY \ar[shorten <=-.5cm]{dr}[swap]{G_*^2} \ar{rr}{(GF)_*^2} && GF\bigl(X\timesover{B}Y\bigr)\\
& G\bigl(FX\timesover{FB}FY\bigr) \ar[shorten >=-.5cm]{ur}[swap]{G_*F_*^2}\end{tikzcd}\]
which follows from the universal property of pullbacks.\dqed
\end{example}
\section{Lax Transformations}\label{sec:natural-transformations}
In this section we define lax transformations, which are the bicategorical analogues of natural transformations.
\begin{definition}\label{definition:lax-transformation}
Let $(F,F^2,F^0)$ and $(G,G^2,G^0)$ be lax functors $\B \to \B'$. A \index{transformation!lax}\index{lax transformation}\emph{lax transformation} $\alpha\cn F \to G$ consists of the following data.
\begin{description}
\item[Components] It is equipped with a component $1$-cell\label{notation:transformation-cells} $\alpha_X\in \B'(FX,GX)$ for each object $X$ in $\B$.
\item[Lax Naturality Constraints] \index{lax naturality constraint}For each pair of objects $X,Y$ in $\B$, it is equipped with a natural transformation
\[\alpha : \alpha_X^* G \to (\alpha_Y)_* F : \B(X,Y) \to \B'(FX,GY),\]
with a component $2$-cell \[\alpha_f \cn (Gf) \alpha_X \to \alpha_Y (Ff),\] as in the following diagram, for each $1$-cell $f\in\B(X,Y)$.
\[\begin{tikzpicture}[xscale=2.5, yscale=-1.8]
\node (F1) at (0,0) {$FX$}; \node (F2) at (1,0) {$FY$};
\node (G1) at (0,1) {$GX$}; \node (G2) at (1,1) {$GY$};
\draw[arrow] (F1) to node{\small{$Ff$}} (F2);
\draw[arrow] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow] (F2) to node{\small{$\alpha_Y$}} (G2);
\draw[arrow] (G1) to node{\small{$Gf$}} (G2);
\node[font=\Large] at (.5,.45) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (.4,.35) {$\alpha_{f}$};
\end{tikzpicture}\]
\end{description}
The above data are required to satisfy the following two pasting diagram equalities for all objects $X,Y,Z$ and $1$-cells $f \in \B(X,Y)$ and $g \in \B(Y,Z)$.
\begin{description}
\item[Lax Unity]\index{unity!lax transformation}
\begin{equation}\label{unity-transformation-pasting}
\begin{tikzpicture}[xscale=2.5, yscale=-2, baseline={(eq.base)}]
\node (F1) at (0,0) {$FX$}; \node (F2) at (1,0) {$FX$};
\node (G1) at (0,1) {$GX$}; \node (G2) at (1,1) {$GX$};
\draw[arrow, bend right] (F1) to node{\small{$F1_X$}} (F2);
\draw[arrow] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow] (F2) to node{\small{$\alpha_X$}} (G2);
\draw[arrow,bend left] (G1) to node[swap]{\small{$1_{GX}$}} (G2);
\draw[arrow,bend right] (G1) to node{\small{$G1_{X}$}} (G2);
\node[font=\Large] at (.5,.35) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (.38,.22) {$\alpha_{1_X}$};
\node[font=\Large] at (.45,1) {\rotatebox{90}{$\Rightarrow$}};
\node[font=\small] at (.6,1) {$G^0$};
\node (eq) at (1.7,.5) {\LARGE{$=$}};
\node (F3) at (2.4,0) {$FX$}; \node (F4) at (3.4,0) {$FX$};
\node (G3) at (2.4,1) {$GX$}; \node (G4) at (3.4,1) {$GX$};
\draw[arrow, bend right] (F3) to node{\small{$F1_X$}} (F4);
\draw[arrow] (F3) to node[swap]{\small{$\alpha_X$}} (G3);
\draw[arrow] (F4) to node{\small{$\alpha_X$}} (G4);
\draw[arrow,bend left] (G3) to node[swap]{\small{$1_{GX}$}} (G4);
\draw[arrow, bend left] (F3) to node[swap,inner sep=0pt]{\small{$\alpha_X$}} (G4);
\draw[arrow, bend left] (F3) to node[swap]{\small{$1_{FX}$}} (F4);
\node[font=\Large] at (2.73,.97) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (2.82,1) {$\ell$};
\node[font=\Large] at (2.95,.6) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (3.13,.6) {$r^{-1}$};
\node[font=\Large] at (2.85,0) {\rotatebox{90}{$\Rightarrow$}};
\node[font=\small] at (3,0) {$F^0$};
\end{tikzpicture}
\end{equation}
\item[Lax Naturality]\index{naturality!lax}
\begin{equation}\label{2-cell-transformation-pasting}
\begin{tikzpicture}[xscale=1.7, yscale=-2, baseline={(eq.base)}]
\node (F1) at (0,0) {$FX$}; \node (F2) at ($(F1) + (2,0)$) {$FZ$};
\node (G1) at ($(F1)+(0,1)$) {$GX$}; \node (G2) at ($(F1)+(1,1.3)$) {$GY$};
\node (G3) at ($(F1)+(2,1)$) {$GZ$};
\node[font=\Large] at ($(F1)+(.9,1)$) {\rotatebox{90}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(1.1,1)$) {$G^2$};
\node[font=\Large] at ($(F1)+(1.07,.3)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.9,.2)$) {$\alpha_{gf}$};
\draw[arrow, bend right=15] (F1) to node{\small{$F(gf)$}} (F2);
\draw[arrow] (F2) to node{\small{$\alpha_Z$}} (G3);
\draw[arrow] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow, bend left=10] (G1) to node[swap]{\small{$Gf$}} (G2);
\draw[arrow, bend left=10] (G2) to node[swap]{\small{$Gg$}} (G3);
\draw[arrow, bend right=15] (G1) to node{\small{$G(gf)$}} (G3);
\node (eq) at ($(F1)+(2.7,.5)$) {\LARGE{$=$}};
\node (F3) at ($(F1)+(3.4,0)$) {$FX$};
\node (F4) at ($(F3) + (1,.3)$) {$FY$}; \node (F5) at ($(F3) + (2,0)$) {$FZ$};
\node (G4) at ($(F3)+(0,1)$) {$GX$}; \node (G5) at ($(F3)+(1,1.3)$) {$GY$};
\node (G6) at ($(F3)+(2,1)$) {$GZ$};
\node[font=\Large] at ($(F3)+(.45,.75)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(.65,.85)$) {$\alpha_f$};
\node[font=\Large] at ($(F3)+(1.45,.75)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(1.65,.85)$) {$\alpha_g$};
\node[font=\Large] at ($(F3)+(.9,.05)$) {\rotatebox{90}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(1.1,.05)$) {$F^2$};
\draw[arrow, bend right=15] (F3) to node{\small{$F(gf)$}} (F5);
\draw[arrow] (F5) to node{\small{$\alpha_Z$}} (G6);
\draw[arrow] (F3) to node[swap]{\small{$\alpha_X$}} (G4);
\draw[arrow, bend left=10] (G4) to node[swap]{\small{$Gf$}} (G5);
\draw[arrow, bend left=10] (G5) to node[swap]{\small{$Gg$}} (G6);
\draw[arrow, bend left=10] (F3) to node[swap,inner sep=1]{\small{$Ff$}} (F4);
\draw[arrow, bend left=10] (F4) to node[swap,inner sep=1]{\small{$Fg$}} (F5);
\draw[arrow] (F4) to node[swap,near start, inner sep=1]{\small{$\alpha_Y$}} (G5);
\end{tikzpicture}
\end{equation}
\end{description}
This finishes the definition of a lax transformation. Moreover:
\begin{itemize}
\item A \emph{strong transformation}\index{strong transformation}\index{transformation!strong} is a lax transformation in which every component $\alpha_f$ is an invertible $2$-cell.
\item A \emph{strict transformation}\index{strict!transformation}\index{transformation!strict} is a lax transformation in which every component $\alpha_f$ is an identity $2$-cell.
\item A \emph{$2$-natural transformation}\index{2-natural transformation}\index{natural transformation!2-} is a strict transformation between $2$-functors between $2$-categories.\defmark
\end{itemize}
\end{definition}
\begin{explanation}\label{expl:lax-transformation}
In \Cref{definition:lax-transformation}:
\begin{enumerate}
\item The direction of each component $2$-cell $\alpha_f : (Gf)\alpha_X \to \alpha_Y(Ff)$ may seem strange because it goes from $G$ to $F$, while $\alpha$ itself goes from $F$ to $G$. One way to understand this directionality is that $\alpha_f$ preserves the direction of the $1$-cell $f$. As $f$ goes from $X$ to $Y$, $\alpha_f$ goes from $\alpha_X$ to $\alpha_Y$.
\item The naturality of $\alpha$ means that for each $2$-cell $\theta : f \to g$ in $\B(X,Y)$, the diagram
\begin{equation}\label{lax-transformation-naturality}
\begin{tikzcd}
(Gf)\alpha_X \ar{r}{\alpha_f} \ar{d}[swap]{G\theta*1_{\alpha_X}} & \alpha_Y(Ff) \ar{d}{1_{\alpha_Y}*F\theta}\\
(Gg)\alpha_X \ar{r}{\alpha_g} & \alpha_Y(Fg)\end{tikzcd}
\end{equation}
in $\B'(FX,GY)$ is commutative. This is equivalent to the pasting diagram equality:
\begin{equation}\label{lax-transformation-nat-pasting}
\begin{tikzpicture}[xscale=2.5, yscale=-2, baseline={(eq.base)}]
\node (F1) at (0,0) {$FX$}; \node (F2) at ($(F1) + (1,0)$) {$FY$};
\node (G1) at ($(F1)+(0,1)$) {$GX$}; \node (G2) at ($(F1)+(1,1)$) {$GY$};
\node[font=\Large] at ($(F1)+(.5,.7)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.6,.8)$) {$\alpha_f$};
\node[font=\Large] at ($(F1)+(.4,0)$) {\rotatebox{90}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.55,0)$) {$F\theta$};
\draw[arrow, bend left] (F1) to node[swap]{\small{$Ff$}} (F2);
\draw[arrow, bend right] (F1) to node{\small{$Fg$}} (F2);
\draw[arrow] (F2) to node{\small{$\alpha_Y$}} (G2);
\draw[arrow] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow, bend left] (G1) to node[swap]{\small{$Gf$}} (G2);
\node (eq) at ($(F1)+(1.5,.5)$) {\LARGE{$=$}};
\node (F3) at ($(F1)+(2,0)$) {$FX$}; \node (F4) at ($(F3) + (1,0)$) {$FY$};
\node (G3) at ($(F3)+(0,1)$) {$GX$}; \node (G4) at ($(F3)+(1,1)$) {$GY$};
\node[font=\Large] at ($(F3)+(.4,1)$) {\rotatebox{90}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(.55,1)$) {$G\theta$};
\node[font=\Large] at ($(F3)+(.5,.25)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(.6,.35)$) {$\alpha_g$};
\draw[arrow, bend right] (F3) to node{\small{$Fg$}} (F4);
\draw[arrow] (F4) to node{\small{$\alpha_Y$}} (G4);
\draw[arrow] (F3) to node[swap]{\small{$\alpha_X$}} (G3);
\draw[arrow, bend left] (G3) to node[swap]{\small{$Gf$}} (G4);
\draw[arrow, bend right] (G3) to node{\small{$Gg$}} (G4);
\end{tikzpicture}
\end{equation}
\item The lax unity axiom \eqref{unity-transformation-pasting} means the commutative diagram
\begin{equation}\label{unity-transformation}
\begin{tikzcd}
1_{GX}\alpha_X \ar{r}{\ell} \ar{d}[swap]{G^0*1_{\alpha_X}} & \alpha_X \ar{r}{r^{-1}} & \alpha_X 1_{FX} \ar{d}{1_{\alpha_X}*F^0}\\
(G1_X)\alpha_X \ar{rr}{\alpha_{1_X}} && \alpha_X(F1_X)
\end{tikzcd}
\end{equation}
in $\B'(FX,GX)$.
\item The lax naturality axiom \eqref{2-cell-transformation-pasting} means the commutative diagram
\begin{equation}\label{2-cell-transformation}
\begin{tikzcd}
(Gg) \big(\alpha_Y Ff\big) \ar{r}{a^{-1}} & \big((Gg) \alpha_Y\big) Ff \ar{r}{\alpha_g*1_{Ff}} & \big(\alpha_Z Fg\big) Ff \ar{d}{a}\\
(Gg) \big((Gf) \alpha_X\big) \ar{u}{1_{Gg}*\alpha_f} && \alpha_Z \big((Fg)(Ff)\big) \ar{d}{1_{\alpha_Z}*F^2}\\
\bigl((Gg)(Gf)\bigr)\alpha_X \ar{u}{a} \ar{r}{G^2*1_{\alpha_X}} & G(gf) \alpha_X \ar{r}{\alpha_{gf}} & \alpha_Z F(gf)
\end{tikzcd}
\end{equation}
in $\B'(FX,GZ)$. Here the bracketing follows \Cref{conv:boundary-bracketing}, and the instances of $a^{\pm 1}$ come from \Cref{def:bicat-diagram-composite}.\dqed
\end{enumerate}
\end{explanation}
\begin{example}[Natural Transformations]\label{ex:nt-lax-transformation}
Suppose $\theta : F \to G$ is a natural transformation between functors $F,G : \C\to\D$ with $\C$ and $\D$ categories. Regarding $F$ and $G$ as strict functors between locally discrete bicategories as in \Cref{ex:functor-laxfunctor}, $\theta$ becomes a \index{natural transformation!as a strict transformation}strict transformation. The lax unity axiom \eqref{unity-transformation} and the lax naturality axiom \eqref{2-cell-transformation} are true because there are only identity $2$-cells in $\D$.\dqed
\end{example}
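A minimal executable shadow of this example: for ordinary functors modeled on Python values (the list functor, and an option-style functor using \texttt{None}), a transformation is a family of component functions, and the naturality square \eqref{lax-transformation-naturality} degenerates to a commuting square of functions. The names \texttt{fmap\_list}, \texttt{fmap\_maybe}, and \texttt{safe\_head} are illustrative:

```python
def fmap_list(h, xs):
    """Action of the list functor on an arrow h."""
    return [h(x) for x in xs]

def fmap_maybe(h, m):
    """Action of an option-style functor on h (None = nothing)."""
    return None if m is None else h(m)

def safe_head(xs):
    """Component of a transformation list -> maybe at each object."""
    return xs[0] if xs else None

# Naturality: maybe(h) . safe_head == safe_head . list(h) at every input.
h = lambda n: n * 10
for xs in ([], [1], [1, 2, 3]):
    assert fmap_maybe(h, safe_head(xs)) == safe_head(fmap_list(h, xs))
```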
\begin{example}[$\Cat$-Natural Transformations]\label{ex:cat-nt}
We saw in \Cref{ex:2functor} that a $2$-functor $F : \C\to\D$ between locally small $2$-categories is precisely a $\Cat$-functor. If $G : \C\to\D$ is another $2$-functor (i.e., $\Cat$-functor), then a $2$-natural transformation $\alpha : F\to G$ is precisely a $\Cat$-natural transformation\index{2-natural transformation!as a $\Cat$-natural transformation} in the sense of \Cref{def:enriched-natural-transformation}. Indeed, the component identity $2$-cells of $\alpha$ and its naturality \eqref{lax-transformation-naturality} are equivalent to the diagram \eqref{enriched-nt-naturality} of a $\Cat$-natural transformation. The lax unity axiom \eqref{unity-transformation} and the lax naturality axiom \eqref{2-cell-transformation} are trivially true because every $2$-cell involved is an identity $2$-cell.\dqed
\end{example}
\begin{proposition}\label{iinatural-transformation}\index{characterization of!a 2-natural transformation}
For $2$-functors $F,G : \A\to\B$ between $2$-categories, a $2$-natural transformation $\alpha : F \to G$ consists of exactly a component $1$-cell $\alpha_X \in \B(FX,GX)$ for each object $X$ in $\A$ such that the following two conditions are satisfied.
\begin{description}
\item[$1$-Cell Naturality] For each $1$-cell $f \in \A(X,Y)$, the two composite $1$-cells
\[\begin{tikzcd}
FX \ar{d}[swap]{Ff} \ar{r}{\alpha_X} & GX \ar{d}{Gf}\\
FY \ar{r}{\alpha_Y} & GY
\end{tikzcd}\]
in $\B(FX,GY)$ are equal.
\item[$2$-Cell Naturality]
For each $2$-cell $\theta : f \to g$ in $\A(X,Y)$, the diagram
\[\begin{tikzcd}
(Gf)\alpha_X \ar{r}{1} \ar{d}[swap]{G\theta*1_{\alpha_X}} & \alpha_Y(Ff) \ar{d}{1_{\alpha_Y}*F\theta}\\
(Gg)\alpha_X \ar{r}{1} & \alpha_Y(Fg)\end{tikzcd}\]
in $\B(FX,GY)$ is commutative.
\end{description}
\end{proposition}
\begin{proof}
If $\alpha$ is a $2$-natural transformation, then its naturality \eqref{lax-transformation-naturality} is the stated $2$-cell naturality, while $1$-cell naturality holds because each component $2$-cell $\alpha_f$ is the identity $2$-cell. Conversely, assuming the two stated conditions, we define each component $2$-cell $\alpha_f$ as the identity $2$-cell of $(Gf)\alpha_X = \alpha_Y(Ff)$. This implies the naturality \eqref{lax-transformation-naturality} of $\alpha$. The lax unity axiom \eqref{unity-transformation-pasting} and the lax naturality axiom \eqref{2-cell-transformation-pasting} are trivially satisfied because every $2$-cell involved is an identity $2$-cell.
\end{proof}
The following result is the bicategorical analogue of the identity natural transformation of a functor.
\begin{proposition}\label{id-lax-transformation}
Suppose $(F,F^2,F^0) : \B\to\B'$ is a lax functor between bicategories. Then there is a strong transformation\index{identity!strong transformation}\index{strong transformation!identity} \[1_F : F \to F\] defined by the following data.
\begin{itemize}
\item For each object $X$ in $\B$, the component $1$-cell $(1_F)_X$ is the identity $1$-cell $1_{FX} \in \B(FX,FX)$.
\item For each $1$-cell $f \in \B(X,Y)$, the component $2$-cell is the vertical composite
\begin{equation}\label{idlaxtr-component}
\begin{tikzcd}
(Ff)1_{FX} \ar[bend left, start anchor={[xshift=-.3cm]},
end anchor={[xshift=.3cm]}]{rr}{(1_F)_f} \ar{r}{r_{Ff}} & Ff \ar{r}{\ell^{-1}_{Ff}} & 1_{FY}(Ff)
\end{tikzcd}
\end{equation}
in $\B'(FX,FY)$.
\end{itemize}
\end{proposition}
\begin{proof}
To simplify notation, in this proof we omit the subscripts in $a$, $\ell$, and $r$, and write every identity $2$-cell as $1$. The naturality of $1_F$ with respect to $2$-cells \eqref{lax-transformation-naturality} means that, for each $2$-cell $\theta : f \to g$ in $\B(X,Y)$, the outermost diagram in
\[\begin{tikzcd}
(Ff)1_{FX} \ar{r}{r} \ar{d}[swap]{F\theta*1} & Ff \ar{r}{\ell^{-1}} \ar{d}{F\theta} & 1_{FY}(Ff) \ar{d}{1*F\theta}\\
(Fg)1_{FX} \ar{r}{r} & Fg \ar{r}{\ell^{-1}} & 1_{FY}(Fg)
\end{tikzcd}\]
is commutative. The two squares above are commutative by the naturality of $r$ and $\ell$.
The lax unity axiom \eqref{unity-transformation} is the outermost diagram below.
\[\begin{tikzcd}
1_{FX}1_{FX} \ar{r}{\ell} \ar{d}[swap]{F^0*1} & 1_{FX} \ar{r}{r^{-1}} \ar{d}{F^0} & 1_{FX}1_{FX} \ar{d}{1*F^0}\\
(F1_X)1_{FX} \ar{r}{r} & F1_X \ar{r}{\ell^{-1}} & 1_{FX}(F1_X)
\end{tikzcd}\]
Using the equality $\ell_{1_{FX}} = r_{1_{FX}}$ from \Cref{bicat-l-equals-r} twice along the top row, the two squares above are commutative by the naturality of $r$ and $\ell$.
The lax naturality axiom \eqref{2-cell-transformation} is the outermost diagram below.
\[\begin{tikzcd}
(Fg)\big(1_{FY}(Ff)\big) \ar{r}{a^{-1}} & \big((Fg)1_{FY}\big)(Ff) \ar{r}{r*1} & (Fg)(Ff) \ar{r}{\ell^{-1}*1} \ar{d}{1} & \big(1_{FZ}(Fg)\big)(Ff) \ar{d}{a}\\
(Fg)(Ff) \ar{u}{1*\ell^{-1}} \ar{rr}{1} && (Fg)(Ff) \ar{r}{\ell^{-1}} \ar{dd}{F^2} & 1_{FZ}\big((Fg)(Ff)\big) \ar{dd}{1*F^2}\\
(Fg)\big((Ff)1_{FX}\big) \ar{u}{1*r} &&&\\
\big((Fg)(Ff)\big)1_{FX} \ar{u}{a} \ar{uurr}{r} \ar{r}{F^2*1} & F(gf)1_{FX} \ar{r}{r} & F(gf) \ar{r}{\ell^{-1}} & 1_{FZ}F(gf)
\end{tikzcd}\]
The vertical unity property of identity $2$-cells \eqref{hom-category-axioms} will be used several times below without further comment. In the diagram above:
\begin{itemize}
\item The top left rectangle is commutative by the middle unity axiom \eqref{bicat-unity}.
\item The top right square is commutative by the left unity diagram in \Cref{bicat-left-right-unity}.
\item The left triangle involving $1*r$ is commutative by the right unity diagram in \Cref{bicat-left-right-unity}.
\item The lower left triangle involving $F^2*1$ is commutative by the naturality of $r$.
\item The lower right rectangle is commutative by the naturality of $\ell$.
\end{itemize}
This shows that $1_F : F \to F$ is a lax transformation. Since every component $2$-cell $(1_F)_f = \ell^{-1}_{Ff} r_{Ff}$ is the composite of two invertible $2$-cells, $1_F$ is a strong transformation.
\end{proof}
\begin{definition}\label{def:identity-strong-transformation}
For a lax functor $F$, the strong transformation $1_F : F \to F$ in \Cref{id-lax-transformation} is called the \emph{identity transformation} of $F$.
\end{definition}
We emphasize that the identity transformation $1_F$ of a lax functor $F$ is, in general, \emph{not} a strict transformation because the component $2$-cells $(1_F)_f= \ell^{-1}_{Ff} r_{Ff}$ are not identity $2$-cells. On the other hand, it is a strict transformation if $\B'$ is a $2$-category.
Next we define composition of lax transformations.
\begin{definition}\label{def:lax-tr-comp}
Suppose $\alpha : F\to G$ and $\beta : G \to H$ are lax transformations for lax functors $F,G,H : \B\to \B'$. The \emph{horizontal composite}\index{horizontal composition!lax transformation}\index{lax transformation!horizontal composition}\index{composition!lax transformation!horizontal} $\beta\alpha : F \to H$ is defined with the following data.
\begin{description}
\item[Component $1$-Cells] For each object $X$ in $\B$, it is equipped with the horizontal composite $1$-cell
\[\begin{tikzcd}
FX \ar{r}{\alpha_X} \ar[bend left, start anchor={[xshift=-.3cm]},
end anchor={[xshift=.3cm]}]{rr}{(\beta\alpha)_X} & GX \ar{r}{\beta_X} & HX
\end{tikzcd}\] in $\B'(FX,HX)$.
\item[Component $2$-Cells] For each $1$-cell $f \in \B(X,Y)$, $(\beta\alpha)_f$ is the following $2$-cell
\begin{equation}\label{transf-hcomp-iicell-pasting}
\begin{tikzpicture}[xscale=2.5, yscale=-1.8, baseline={(G1).base}]
\node (F1) at (0,0) {$FX$}; \node (F2) at (1,0) {$FY$};
\node (G1) at (0,1) {$GX$}; \node (G2) at (1,1) {$GY$};
\node (H1) at (0,2) {$HX$}; \node (H2) at (1,2) {$HY$};
\draw[arrow] (F1) to node[swap]{\small{$Ff$}} (F2);
\draw[arrow] (F1) to node[near start]{\small{$\alpha_X$}} (G1);
\draw[arrow] (F2) to node[swap, near start]{\small{$\alpha_Y$}} (G2);
\draw[arrow] (G1) to node[swap]{\small{$Gf$}} (G2);
\draw[arrow] (G1) to node[near start]{\small{$\beta_X$}} (H1);
\draw[arrow] (G2) to node[swap, near start]{\small{$\beta_Y$}} (H2);
\draw[arrow] (H1) to node[swap]{\small{$Hf$}} (H2);
\draw[arrow,bend left] (F1) to node[swap]{\small{$\beta_X\alpha_X$}} (H1);
\draw[arrow,bend right] (F2) to node{\small{$\beta_Y\alpha_Y$}} (H2);
\node[font=\Large] at (.5,.55) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (.6,.7) {$\alpha_{f}$};
\node[font=\Large] at (.5,1.55) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (.6,1.7) {$\beta_{f}$};
\end{tikzpicture}
\end{equation}
whose vertical boundaries are bracketed as indicated.\defmark
\end{description}
\end{definition}
\begin{explanation}\label{expl:lax-tr-comp}
The component $2$-cell $(\beta\alpha)_f$ in \eqref{transf-hcomp-iicell-pasting} is the vertical composite
\begin{equation}\label{transf-hcomp-iicell}
\begin{tikzcd}[column sep=large]
(Hf)\big(\beta_X\alpha_X\big) \ar{r}{(\beta\alpha)_f} \ar{d}[swap]{a^{-1}} & \big(\beta_Y\alpha_Y\big)(Ff)\\
\big((Hf)\beta_X\big)\alpha_X \ar{d}[swap]{\beta_f*1} & \beta_Y\big(\alpha_Y(Ff)\big) \ar{u}[swap]{a^{-1}}\\
\big(\beta_Y(Gf)\big)\alpha_X \ar{r}{a} & \beta_Y\big((Gf)\alpha_X\big) \ar{u}[swap]{1*\alpha_f}
\end{tikzcd}
\end{equation}
of five $2$-cells in $\B'(FX,HY)$.\dqed
\end{explanation}
\begin{lemma}\label{lax-tr-compose}
In \Cref{def:lax-tr-comp}, $\beta\alpha : F \to H$ is a lax transformation, which is strong if both $\alpha$ and $\beta$ are strong.
\end{lemma}
\begin{proof}
The naturality of $(\beta\alpha)_f$ with respect to $2$-cells follows from the naturality of $\alpha$, $\beta$, and $a^{\pm 1}$. Using \Cref{expl:lax-tr-comp}, the lax unity diagram \eqref{unity-transformation} for $\beta\alpha$ is the boundary of the following diagram.
\[\begin{tikzcd}
1_{HX}(\beta_X\alpha_X) \ar{r}{\ell} \ar{dd}[swap]{H^0*1} \ar{ddr}{a^{-1}} & \beta_X\alpha_X \ar{r}{r^{-1}} \ar[end anchor={[xshift=-.2cm, yshift=-.3cm]}]{ddr}{1} \ar[bend left=30, start anchor={[xshift=.1cm]}, end anchor={[xshift=.7cm]}, shorten >=-.2cm]{ddd}[near end]{r^{-1}*1} & (\beta_X\alpha_X)1_{FX} \ar{r}{1*F^0} & (\beta_X\alpha_X)(F1_X)\\
& & \beta_X(\alpha_X1_{FX}) \ar{u}{a^{-1}} \ar{dddr}[near start]{1*(1*F^0)} &\\
(H1_X)(\beta_X\alpha_X) \ar{dd}[swap]{a^{-1}} & (1_{HX}\beta_X)\alpha_X \ar{ddl}[sloped, anchor=center, above]{(H^0*1)*1} \ar{uu}{\ell*1} & \beta_X\alpha_X \ar{u}{1*r^{-1}} &\\
& (\beta_X1_{GX})\alpha_X \ar{d}{(1*G^0)*1} \ar{r}{a} & \beta_X(1_{GX}\alpha_X) \ar{u}{1*\ell} \ar{d}{1*(G^0*1)} &\\
\big((H1_X)\beta_X\big)\alpha_X \ar{r}{\beta_{1_X}*1} & \big(\beta_X(G1_X)\big)\alpha_X \ar{r}{a} & \beta_X\big((G1_X)\alpha_X\big) \ar{r}{1*\alpha_{1_X}} & \beta_X\big(\alpha_X(F1_X)\big) \ar{uuuu}[swap]{a^{-1}}
\end{tikzcd}\]
In the above diagram:
\begin{itemize}
\item Every identity $2$-cell is written as $1$.
\item Along the top row, the left and middle triangles are commutative by the unity diagrams in \Cref{bicat-left-right-unity}. The sub-diagram under the top middle triangle is commutative by the middle unity axiom \eqref{bicat-unity}.
\item Along the bottom row, the first sub-diagram is commutative by the lax unity axiom of $\beta$. The bottom right triangle is commutative by the lax unity axiom of $\alpha$.
\item The other three sub-diagrams are commutative by the naturality of $a^{\pm 1}$.
\end{itemize}
We ask the reader to check the lax naturality axiom for $\beta\alpha$ in \Cref{exer:lax-tr-compose}.
If both $\alpha$ and $\beta$ are strong transformations, then every component $2$-cell $(\beta\alpha)_f$ is the vertical composite of five invertible $2$-cells, which is therefore invertible.
\end{proof}
\begin{remark}
Even if $\alpha$ and $\beta$ are strict transformations, the horizontal composite $\beta\alpha$ is, in general, \emph{not} strict.\dqed
\end{remark}
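In the locally discrete case of \Cref{ex:nt-lax-transformation}, the horizontal composite of \Cref{def:lax-tr-comp} reduces to composing component arrows, $(\beta\alpha)_X = \beta_X\alpha_X$, and the coherence $2$-cells disappear. A small sketch with illustrative component functions (\texttt{alpha} from the list functor to an option-style functor, \texttt{beta} back to lists):

```python
def fmap_list(h, xs):
    """Action of the list functor on an arrow h."""
    return [h(x) for x in xs]

def fmap_maybe(h, m):
    """Action of an option-style functor on h (None = nothing)."""
    return None if m is None else h(m)

def alpha(xs):
    """Component of alpha : list -> maybe (first element, if any)."""
    return xs[0] if xs else None

def beta(m):
    """Component of beta : maybe -> list (zero- or one-element list)."""
    return [] if m is None else [m]

def beta_alpha(xs):
    """Composite component (beta alpha)_X = beta_X . alpha_X."""
    return beta(alpha(xs))

# Each transformation is natural, and so is the componentwise composite.
h = lambda n: n + 1
for m in (None, 5):
    assert fmap_list(h, beta(m)) == beta(fmap_maybe(h, m))
for xs in ([], [3, 4]):
    assert fmap_list(h, beta_alpha(xs)) == beta_alpha(fmap_list(h, xs))
```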
\section{Oplax Transformations}\label{sec:oplax-transformations}
In this section we discuss a variation of lax transformations in which the component $2$-cells go in the opposite direction.
\begin{definition}\label{def:oplax-transformation}
Suppose $(F,F^2,F^0)$ and $(G,G^2,G^0)$ are lax functors $\B \to \B'$. An \index{oplax transformation}\index{transformation!oplax}\emph{oplax transformation} $\alpha\cn F \to G$ consists of the following data.
\begin{description}
\item[Components] It is equipped with a component $1$-cell $\alpha_X\in \B'(FX,GX)$ for each object $X$ in $\B$.
\item[Oplax Naturality Constraints] For each pair of objects $X,Y$ in $\B$, it is equipped with a natural transformation
\[\alpha : (\alpha_Y)_* F\to \alpha_X^* G : \B(X,Y) \to \B'(FX,GY),\]
with a component $2$-cell $\alpha_f \cn \alpha_Y (Ff) \to (Gf) \alpha_X$, as in the following diagram, for each $1$-cell $f\in\B(X,Y)$.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.8]
\node (F1) at (0,1) {$FX$}; \node (F2) at (1,1) {$FY$};
\node (G1) at (0,0) {$GX$}; \node (G2) at (1,0) {$GY$};
\draw[arrow] (F1) to node{\small{$Ff$}} (F2);
\draw[arrow] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow] (F2) to node{\small{$\alpha_Y$}} (G2);
\draw[arrow] (G1) to node{\small{$Gf$}} (G2);
\node[font=\Large] at (.4,.6) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at (.6,.6) {$\alpha_{f}$};
\end{tikzpicture}\]
\end{description}
The above data are required to satisfy the following two pasting diagram equalities for all objects $X,Y,Z$ and $1$-cells $f \in \B(X,Y)$ and $g \in \B(Y,Z)$.
\begin{description}
\item[Oplax Unity]\index{unity!oplax}
\begin{equation}\label{unity-oplax-pasting}
\begin{tikzpicture}[xscale=2.5, yscale=2, baseline={(eq.base)}]
\node (F1) at (0,1) {$FX$}; \node (F2) at (1,1) {$FX$};
\node (G1) at (0,0) {$GX$}; \node (G2) at (1,0) {$GX$};
\draw[arrow, bend left] (F1) to node{\small{$1_{FX}$}} (F2);
\draw[arrow, bend right] (F1) to node[swap]{\small{$F1_X$}} (F2);
\draw[arrow] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow] (F2) to node{\small{$\alpha_X$}} (G2);
\draw[arrow,bend right] (G1) to node[swap]{\small{$G1_{X}$}} (G2);
\node[font=\Large] at (.4,.2) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at (.6,.2) {$\alpha_{1_X}$};
\node[font=\Large] at (.45,1) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.6,1) {$F^0$};
\node (eq) at (1.7,.5) {\LARGE{$=$}};
\node (F3) at (2.4,1) {$FX$}; \node (F4) at (3.4,1) {$FX$};
\node (G3) at (2.4,0) {$GX$}; \node (G4) at (3.4,0) {$GX$};
\draw[arrow, bend left] (F3) to node{\small{$1_{FX}$}} (F4);
\draw[arrow] (F3) to node[swap]{\small{$\alpha_X$}} (G3);
\draw[arrow] (F4) to node{\small{$\alpha_X$}} (G4);
\draw[arrow,bend left] (G3) to node{\small{$1_{GX}$}} (G4);
\draw[arrow,bend right] (G3) to node[swap]{\small{$G1_{X}$}} (G4);
\draw[arrow, bend left] (F3) to node[inner sep=0pt]{\small{$\alpha_X$}} (G4);
\node[font=\Large] at (2.9,1) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at (3.05,1) {$r$};
\node[font=\Large] at (2.6,.7) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at (2.7,.6) {$\ell^{-1}$};
\node[font=\Large] at (2.85,0) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (3,0) {$G^0$};
\end{tikzpicture}
\end{equation}
\item[Oplax Naturality]\index{naturality!oplax}
\begin{equation}\label{2cell-oplax-pasting}
\begin{tikzpicture}[xscale=1.7, yscale=2, baseline={(eq.base)}]
\node (F1) at (0,0) {$FX$}; \node (F2) at ($(F1) + (1,.3)$) {$FY$};
\node (F3) at ($(F1) + (2,0)$) {$FZ$};
\node (G1) at ($(F1)+(0,-1)$) {$GX$}; \node (G2) at ($(F1)+(2,-1)$) {$GZ$};
\node[font=\Large] at ($(F1)+(.9,0)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(1.1,0)$) {$F^2$};
\node[font=\Large] at ($(F1)+(.8,-.8)$) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(1.1,-.9)$) {$\alpha_{gf}$};
\draw[arrow, bend right=15] (F1) to node[swap]{\small{$F(gf)$}} (F3);
\draw[arrow, bend left=10] (F1) to node{\small{$Ff$}} (F2);
\draw[arrow, bend left=10] (F2) to node{\small{$Fg$}} (F3);
\draw[arrow] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow, bend right=15] (G1) to node[swap]{\small{$G(gf)$}} (G2);
\draw[arrow] (F3) to node{\small{$\alpha_Z$}} (G2);
\node (eq) at ($(F1)+(2.7,-.5)$) {\LARGE{$=$}};
\node (F4) at ($(F1)+(3.4,0)$) {$FX$};
\node (F5) at ($(F4) + (1,.3)$) {$FY$}; \node (F6) at ($(F4) + (2,0)$) {$FZ$};
\node (G3) at ($(F4)+(0,-1)$) {$GX$}; \node (G4) at ($(F4)+(1,-.7)$) {$GY$};
\node (G5) at ($(F4)+(2,-1)$) {$GZ$};
\node[font=\Large] at ($(F4)+(.65,-.1)$) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at ($(F4)+(.5,0)$) {$\alpha_f$};
\node[font=\Large] at ($(F4)+(1.65,-.1)$) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at ($(F4)+(1.5,0)$) {$\alpha_g$};
\node[font=\Large] at ($(F4)+(.9,-.95)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F4)+(1.1,-.95)$) {$G^2$};
\draw[arrow, bend left=10] (F4) to node{\small{$Ff$}} (F5);
\draw[arrow, bend left=10] (F5) to node{\small{$Fg$}} (F6);
\draw[arrow] (F4) to node[swap]{\small{$\alpha_X$}} (G3);
\draw[arrow] (F5) to node[swap, near end]{\small{$\alpha_Y$}} (G4);
\draw[arrow] (F6) to node{\small{$\alpha_Z$}} (G5);
\draw[arrow, bend right=15] (G3) to node[swap]{\small{$G(gf)$}} (G5);
\draw[arrow, bend left=10] (G3) to node{\small{$Gf$}} (G4);
\draw[arrow, bend left=10] (G4) to node{\small{$Gg$}} (G5);
\end{tikzpicture}
\end{equation}
\end{description}
This finishes the definition of an oplax transformation.
\end{definition}
\begin{explanation}\label{expl:oplax-transformation}
In \Cref{def:oplax-transformation}:
\begin{enumerate}
\item The naturality of $\alpha$ means that for each $2$-cell $\theta : f \to g$ in $\B(X,Y)$, the diagram
\begin{equation}\label{oplax-transformation-naturality}
\begin{tikzcd}
\alpha_Y(Ff) \ar{r}{\alpha_f} \ar{d}[swap]{1_{\alpha_Y}*F\theta} & (Gf)\alpha_X \ar{d}{G\theta*1_{\alpha_X}}\\
\alpha_Y(Fg) \ar{r}{\alpha_g} & (Gg)\alpha_X\end{tikzcd}
\end{equation}
in $\B'(FX,GY)$ is commutative. This is equivalent to the pasting diagram equality:
\begin{equation}\label{oplax-transformation-nat-pasting}
\begin{tikzpicture}[xscale=2.5, yscale=2, baseline={(eq.base)}]
\node (F1) at (0,0) {$FX$}; \node (F2) at ($(F1) + (1,0)$) {$FY$};
\node (G1) at ($(F1)+(0,-1)$) {$GX$}; \node (G2) at ($(F1)+(1,-1)$) {$GY$};
\node[font=\Large] at ($(F1)+(.45,-.1)$) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.6,-.2)$) {$\alpha_f$};
\node[font=\Large] at ($(F1)+(.4,-1)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.55,-1)$) {$G\theta$};
\draw[arrow, bend left] (F1) to node{\small{$Ff$}} (F2);
\draw[arrow] (F2) to node{\small{$\alpha_Y$}} (G2);
\draw[arrow] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow, bend left] (G1) to node{\small{$Gf$}} (G2);
\draw[arrow, bend right] (G1) to node[swap]{\small{$Gg$}} (G2);
\node (eq) at ($(F1)+(1.5,-.5)$) {\LARGE{$=$}};
\node (F3) at ($(F1)+(2,0)$) {$FX$}; \node (F4) at ($(F3) + (1,0)$) {$FY$};
\node (G3) at ($(F3)+(0,-1)$) {$GX$}; \node (G4) at ($(F3)+(1,-1)$) {$GY$};
\node[font=\Large] at ($(F3)+(.4,0)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(.55,0)$) {$F\theta$};
\node[font=\Large] at ($(F3)+(.45,-.7)$) {\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(.6,-.8)$) {$\alpha_g$};
\draw[arrow, bend left] (F3) to node{\small{$Ff$}} (F4);
\draw[arrow, bend right] (F3) to node[swap]{\small{$Fg$}} (F4);
\draw[arrow] (F4) to node{\small{$\alpha_Y$}} (G4);
\draw[arrow] (F3) to node[swap]{\small{$\alpha_X$}} (G3);
\draw[arrow, bend right] (G3) to node[swap]{\small{$Gg$}} (G4);
\end{tikzpicture}
\end{equation}
\item The oplax unity axiom \eqref{unity-oplax-pasting} means the commutative diagram
\begin{equation}\label{unity-oplax}
\begin{tikzcd}
\alpha_X1_{FX} \ar{r}{r} \ar{d}[swap]{1_{\alpha_X}*F^0} & \alpha_X \ar{r}{\ell^{-1}} & 1_{GX}\alpha_X \ar{d}{G^0*1_{\alpha_X}}\\
\alpha_X(F1_X) \ar{rr}{\alpha_{1_X}} && (G1_X)\alpha_X
\end{tikzcd}
\end{equation}
in $\B'(FX,GX)$.
\item The oplax naturality axiom \eqref{2cell-oplax-pasting} means the commutative diagram
\begin{equation}\label{2cell-oplax}
\begin{tikzcd}[column sep=large]
\big((Gg)\alpha_Y\big)(Ff) \ar{r}{a} & (Gg)\big(\alpha_Y(Ff)\big) \ar{r}{1_{Gg}*\alpha_f} & (Gg)\big((Gf)\alpha_X\big) \ar{d}{a^{-1}}\\
\big(\alpha_Z(Fg)\big)(Ff) \ar{u}{\alpha_g*1_{Ff}} \ar{d}[swap]{a} && \big((Gg)(Gf)\big)\alpha_X \ar{d}{G^2*1_{\alpha_X}}\\
\alpha_Z\big((Fg)(Ff)\big) \ar{r}{1_{\alpha_Z}*F^2} & \alpha_Z\big(F(gf)\big) \ar{r}{\alpha_{gf}} & G(gf)\alpha_X
\end{tikzcd}
\end{equation}
in $\B'(FX,GZ)$.\dqed
\end{enumerate}
\end{explanation}
\begin{lemma}\label{strong-optransformation}
Suppose $\alpha : F \to G$ is an oplax transformation between lax functors $F,G : \B\to\B'$. Then:
\begin{enumerate}
\item $\alpha$ is uniquely determined by a\index{characterization of!an oplax transformation} lax transformation $\alphaop : \Gop \to \Fop$ with $\Fop$ and $\Gop$ the opposite lax functors $\Bop\to\Bprimeop$ in \Cref{ex:opposite-lax-functor}.
\item Each component $2$-cell of $\alpha$ is invertible if and only if $\alpha$ defines a strong transformation $\alpha' : F\to G$ with the same component $1$-cells and $\alpha'_f=\alpha_f^{-1}$ for each $1$-cell $f$.
\item Each component $2$-cell of $\alpha$ is an identity if and only if $\alpha$ is a strict transformation.
\end{enumerate}
\end{lemma}
\begin{proof}
The first assertion follows from an inspection of \Cref{def:bicategory-opposite,definition:lax-transformation,ex:opposite-lax-functor}. For the other two assertions, the naturality, the oplax unity, and the oplax naturality of $\alpha$ are equivalent to, respectively, the naturality, the lax unity, and the lax naturality of $\alpha'$, since $a$, $\ell$, and $r$ are invertible.
\end{proof}
\begin{example}\label{ex:id-oplax-transformation}
The identity strong transformation $\Id_F : F \to F$ of a lax functor $F$ may be regarded as an oplax transformation with the same component $1$-cells $1_{FX}$ and with component $2$-cells the vertical composite
\[\begin{tikzcd}
1_{FY}(Ff) \ar{r}{\ell_{Ff}} & Ff \ar{r}{r^{-1}_{Ff}} & (Ff)1_{FX}\end{tikzcd}\]
for each $1$-cell $f\in\B(X,Y)$.\dqed
\end{example}
The following observation says that monoidal natural transformations give rise to oplax transformations.
\begin{proposition}\label{monnt-oplax-transformation}\index{oplax transformation!monoidal natural transformation}\index{monoidal natural transformation!as an oplax transformation}
Suppose $F,G : \C\to\D$ are monoidal functors, and $\theta : F \to G$ is a monoidal natural transformation. Regarding $F$ and $G$ as lax functors $\Sigma\C \to \Sigma\D$ between one-object bicategories, $\theta$ yields an oplax transformation $\vartheta : F\to G$ with the following data:
\begin{itemize}
\item For the unique object $*$ in $\Sigma\C$, the component $1$-cell $\vartheta_* \in \Sigma\D(*,*)$ is the monoidal unit $\tensorunit \in \D$.
\item For each object $X\in\C$ (i.e., $1$-cell in $\Sigma\C(*,*)$), the component $2$-cell $\vartheta_X$ is the composite
\[\begin{tikzcd}
\tensorunit \otimes FX \ar{r}{\vartheta_X} \ar{d}[swap]{\lambda} & GX \otimes \tensorunit\\
FX \ar{r}{\theta_X} & GX \ar{u}[swap]{\rho^{-1}}\end{tikzcd}\]
in $\D$, with $\lambda$ and $\rho$ the left and right unit isomorphisms, and $\theta_X$ the $X$-component of $\theta$.
\end{itemize}
Moreover, if $\theta$ is a natural isomorphism, then $\vartheta$ has invertible component $2$-cells.
\end{proposition}
\begin{proof}
The naturality \eqref{oplax-transformation-naturality} of $\vartheta$ means that, for each morphism $f : X\to Y$ in $\C$, the outermost diagram in
\[\begin{tikzcd}
\tensorunit\otimes FX \ar{r}{\lambda} \ar{d}[swap]{\tensorunit\otimes Ff} & FX \ar{d}[swap]{Ff} \ar{r}{\theta_X} & GX \ar{d}{Gf} \ar{r}{\rho^{-1}} & GX \otimes \tensorunit \ar{d}{Gf\otimes\tensorunit}\\
\tensorunit\otimes FY \ar{r}{\lambda} & FY \ar{r}{\theta_Y} & GY \ar{r}{\rho^{-1}} & GY\otimes\tensorunit
\end{tikzcd}\]
is commutative. The three squares above are commutative by the naturality of $\lambda$, $\theta$, and $\rho$.
The oplax unity axiom \eqref{unity-oplax} of $\vartheta$ means that the outermost diagram in
\[\begin{tikzcd}
\tensorunit\otimes\tensorunit \ar{d}[swap]{\tensorunit\otimes F^0} \ar{r}{\rho} & \tensorunit \ar{r}{\Id} \ar{d}[swap]{F^0} & \tensorunit \ar{d}{G^0} \ar{r}{\lambda^{-1}} & \tensorunit\otimes\tensorunit \ar{d}{G^0\otimes\tensorunit}\\
\tensorunit\otimes F\tensorunit \ar{r}{\lambda} & F\tensorunit \ar{r}{\theta_{\tensorunit}} & G\tensorunit \ar{r}{\rho^{-1}} & G\tensorunit\otimes\tensorunit
\end{tikzcd}\]
is commutative. Using the equality $\lambda_{\tensorunit}=\rho_{\tensorunit}$ in \Cref{def:monoidal-category}, the left and right squares above are commutative by the naturality of $\lambda$ and $\rho$, respectively. The middle square is commutative by the axiom \eqref{mon-nat-transf-F0} of a monoidal natural transformation.
The oplax naturality axiom \eqref{2cell-oplax} of $\vartheta$ means that the outermost diagram in
\[\begin{tikzcd}
\big((GY)\tensorunit\big)(FX) \ar{r}{\alpha} & (GY)\big(\tensorunit(FX)\big) \ar{r}{GY\otimes\lambda} & (GY)(FX) \ar{r}{GY\otimes\theta_X} & (GY)(GX) \ar{ddl}[swap]{\Id} \ar{d}{(GY)\otimes\rho^{-1}}\\
(GY)(FX) \ar{u}{\rho^{-1}\otimes FX} \ar{urr}[swap,near start]{\Id} &&& (GY)\big((GX)\tensorunit\big) \ar{d}{\alpha^{-1}}\\
(FY)(FX) \ar{u}{\theta_Y\otimes FX} \ar[bend left=15]{ddrr}[swap]{F^2} \ar{rr}{\theta_Y\otimes\theta_X} && (GY)(GX) \ar{r}{\rho^{-1}} \ar{ddr}[swap]{G^2} & \big((GY)(GX)\big)\tensorunit \ar{d}{G^2\otimes\tensorunit}\\
\big(\tensorunit(FY)\big)(FX) \ar{u}{\lambda\otimes FX} \ar{d}[swap]{\alpha} &&& G(YX)\tensorunit\\
\tensorunit\big((FY)(FX)\big) \ar{r}{\tensorunit\otimes F^2} \ar[shift right=.5cm, bend right=60, shorten >=-5pt, shorten <=-5pt]{uu}[swap]{\lambda} & \tensorunit F(YX) \ar{r}{\lambda} & F(YX) \ar{r}{\theta_{YX}} & G(YX) \ar{u}[swap]{\rho^{-1}}
\end{tikzcd}\]
is commutative, with $\alpha$ denoting the associativity isomorphism, and concatenation of objects denoting their monoidal products. In the above diagram, starting at the upper left corner and going counterclockwise, the seven sub-diagrams are commutative for the following reasons.
\begin{enumerate}
\item The unity axiom \eqref{monoidal-unit} of a monoidal category.
\item The functoriality of $\otimes$.
\item The left unity diagram in \eqref{moncat-other-unit-axioms}, which is equal to the one-object case of the left unity diagram in \Cref{bicat-left-right-unity}.
\item The naturality of $\lambda$.
\item The axiom \eqref{mon-nat-transf-F2} of a monoidal natural transformation.
\item The naturality of $\rho$.
\item The right unity diagram in \eqref{moncat-other-unit-axioms}, which is equal to the one-object case of the right unity diagram in \Cref{bicat-left-right-unity}.
\end{enumerate}
Since $\lambda$ is invertible, we have shown that the above diagram is commutative, so $\vartheta$ is an oplax transformation.
Finally, if $\theta$ is a natural isomorphism, then each component $2$-cell of $\vartheta$ is a composite of three isomorphisms and is therefore invertible.
\end{proof}
\section{Modifications}\label{sec:modifications}
We have discussed lax functors $\B \to \B'$ and lax transformations between such lax functors. In this section we define a structure called a modification that compares lax transformations. The main observation is that there is a bicategory $\Bicat(\B,\B')$ with lax functors $\B\to\B'$ as objects, lax transformations as $1$-cells, and modifications as $2$-cells.
\begin{definition}\label{def:modification}
Suppose $\alpha,\beta : F \to G$ are lax transformations for lax functors $F,G : \B\to\B'$. A \emph{modification}\index{modification} $\Gamma : \alpha \to \beta$ consists of a component $2$-cell\label{notation:modification-compcell}
\[\Gamma_X : \alpha_X \to \beta_X\] in $\B'(FX,GX)$ for each object $X$ in $\B$, that satisfies the following \emph{modification axiom}
\begin{equation}\label{modification-axiom}
\begin{tikzpicture}[xscale=2, yscale=-2, baseline={(eq.base)}]
\node (F1) at (0,0) {$FX$}; \node (F2) at ($(F1) + (1,0)$) {$FY$};
\node (G1) at ($(F1)+(0,1)$) {$GX$}; \node (G2) at ($(F1)+(1,1)$) {$GY$};
\node[font=\Large] at ($(F1)+(.2,.5)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.1,.4)$) {$\alpha_{f}$};
\node[font=\Large] at ($(F1)+(1,.55)$) {$\Rightarrow$};
\node[font=\small] at ($(F1)+(1,.4)$) {$\Gamma_Y$};
\draw[arrow] (F1) to node{\small{$Ff$}} (F2);
\draw[arrow, bend right=40] (F2) to node{\small{$\beta_Y$}} (G2);
\draw[arrow, bend left=40] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow] (G1) to node[swap]{\small{$Gf$}} (G2);
\draw[arrow, bend left=40] (F2) to node[swap]{\small{$\alpha_Y$}} (G2);
\node (eq) at ($(F1)+(2,.5)$) {\LARGE{$=$}};
\node (F3) at ($(F1)+(3,0)$) {$FX$}; \node (F4) at ($(F3) + (1,0)$) {$FY$};
\node (G3) at ($(F3)+(0,1)$) {$GX$}; \node (G4) at ($(F3)+(1,1)$) {$GY$};
\node[font=\Large] at ($(F3)+(.85,.5)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(.75,.4)$) {$\beta_{f}$};
\node[font=\Large] at ($(F3)+(0,.55)$) {$\Rightarrow$};
\node[font=\small] at ($(F3)+(0,.4)$) {$\Gamma_X$};
\draw[arrow] (F3) to node{\small{$Ff$}} (F4);
\draw[arrow, bend right=40] (F4) to node{\small{$\beta_Y$}} (G4);
\draw[arrow, bend left=40] (F3) to node[swap]{\small{$\alpha_X$}} (G3);
\draw[arrow] (G3) to node[swap]{\small{$Gf$}} (G4);
\draw[arrow, bend right=40] (F3) to node{\small{$\beta_X$}} (G3);
\end{tikzpicture}
\end{equation}
for each $1$-cell $f \in \B(X,Y)$. A modification is \emph{invertible}\index{modification!invertible} if each component $\Gamma_X$ is an invertible $2$-cell.
\end{definition}
\begin{explanation}\label{expl:modification}
The modification axiom \eqref{modification-axiom} is the commutative diagram
\begin{equation}\label{modification-axiom-pasting}
\begin{tikzcd}[column sep=large]
(Gf)\alpha_X \ar{r}{1_{Gf}*\Gamma_X} \ar{d}[swap]{\alpha_f} & (Gf)\beta_X \ar{d}{\beta_f}\\
\alpha_Y(Ff) \ar{r}{\Gamma_Y*1_{Ff}} & \beta_Y(Ff)\end{tikzcd}
\end{equation}
in $\B'(FX,GY)$.\dqed
\end{explanation}
\begin{definition}\label{def:modification-composition}
Suppose $F,G,H : \B\to\B'$ are lax functors, and $\alpha,\beta,\gamma : F\to G$ are lax transformations.
\begin{description}
\item[Identity Modifications] The \emph{identity modification}\index{identity!modification}\index{modification!identity} of $\alpha$, denoted by\label{notation:id-modification} $1_\alpha : \alpha \to \alpha$, consists of the identity $2$-cell
\[(1_\alpha)_X = 1_{\alpha_X} : \alpha_X \to \alpha_X\] in $\B'(FX,GX)$ for each object $X$ in $\B$.
\item[Vertical Composition] Suppose $\Gamma : \alpha \to \beta$ and $\Sigma : \beta \to \gamma$ are modifications. The \emph{vertical composite}\label{notation:modification-vcomp}\index{vertical composition!modification}\index{modification!horizontal and vertical compositions}\index{composition!modification!horizontal and vertical}
\[\Sigma\Gamma : \alpha \to \gamma\] consists of the vertical composite $2$-cell
\[\begin{tikzcd}
\alpha_X \ar{r}{\Gamma_X} \ar[bend left, start anchor={[xshift=-.3cm]},
end anchor={[xshift=.3cm]}]{rr}{(\Sigma\Gamma)_X} & \beta_X \ar{r}{\Sigma_X} & \gamma_X\end{tikzcd}\]
in $\B'(FX,GX)$ for each object $X$ in $\B$.
\item[Horizontal Composition]
With $\Gamma : \alpha \to \beta$ as above, suppose $\Gamma' : \alpha' \to \beta'$ is a modification for lax transformations $\alpha',\beta' : G\to H$. The \emph{horizontal composite}\label{notation:modification-hcomp}\index{horizontal composition!modification}
\[\Gamma'*\Gamma : \alpha'\alpha \to \beta'\beta\] consists of the horizontal composite $2$-cell
\begin{equation}\label{modification-hcomp}
(\Gamma'*\Gamma)_X = \Gamma'_X * \Gamma_X : (\alpha'\alpha)_X \to (\beta'\beta)_X
\end{equation}
in $\B'(FX,HX)$ for each object $X$ in $\B$. Here $\alpha'\alpha, \beta'\beta : F \to H$ are the horizontal composite lax transformations in \Cref{def:lax-tr-comp}.\defmark
\end{description}
\end{definition}
\begin{explanation}\label{expl:modification-comp}
For each object $X$ in $\B$, the components of the vertical composite and the horizontal composite modifications are the composites of the pasting diagrams
\[\begin{tikzpicture}[xscale=2.5, yscale=2]
\node (F) at (0,0) {$FX$}; \node (G) at ($(F)+(1,0)$) {$GX$};
\node at ($(F)+(.5,.7)$) {$(\Sigma\Gamma)_X$};
\node[font=\Large] at ($(F)+(.5,.2)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F)+(.65,.2)$) {$\Gamma_X$};
\node[font=\Large] at ($(F)+(.5,-.15)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F)+(.65,-.15)$) {$\Sigma_X$};
\draw[arrow, bend left=60] (F) to node{\small{$\alpha_X$}} (G);
\draw[arrow] (F) to node[near start]{\scalebox{.7}{$\beta_X$}} (G);
\draw[arrow, bend right=60] (F) to node[swap]{\small{$\gamma_X$}} (G);
\node (F1) at ($(F)+(2,0)$) {$FX$};
\node (G1) at ($(F1)+(1,0)$) {$GX$};
\node (H1) at ($(F1)+(2,0)$) {$HX$};
\node at ($(F1)+(1,.7)$) {$(\Gamma'*\Gamma)_X$};
\node[font=\Large] at ($(F1)+(.45,0)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.6,0)$) {$\Gamma_X$};
\node[font=\Large] at ($(F1)+(1.45,0)$) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(1.6,0)$) {$\Gamma'_X$};
\draw[arrow, bend left=60] (F1) to node{\small{$\alpha_X$}} (G1);
\draw[arrow, bend right=60] (F1) to node[swap]{\small{$\beta_X$}} (G1);
\draw[arrow, bend left=60] (G1) to node{\small{$\alpha'_X$}} (H1);
\draw[arrow, bend right=60] (G1) to node[swap]{\small{$\beta'_X$}} (H1);
\end{tikzpicture}\]
in $\B'$.\dqed
\end{explanation}
\begin{lemma}\label{modification-comp-id}
In \Cref{def:modification-composition}:
\begin{enumerate}
\item $1_\alpha$, $\Sigma\Gamma$, and $\Gamma'*\Gamma$ are modifications.
\item Vertical composition of modifications is strictly associative and unital with respect to identity modifications.
\item $1_{\alpha'} * 1_{\alpha} = 1_{\alpha'\alpha}$.
\item Modifications satisfy the middle four exchange \eqref{middle-four}.
\end{enumerate}
\end{lemma}
\begin{proof}
The modification axiom \eqref{modification-axiom} for $1_{\alpha}$ follows from the bicategory axioms \eqref{hom-category-axioms} and \eqref{bicat-c-id}.
The modification axiom for the horizontal composite $\Gamma'*\Gamma$ is the equality of pasting diagrams:
\[\begin{tikzpicture}[xscale=2, yscale=-2]
\node (F1) at (0,0) {$FX$}; \node (F2) at ($(F1) + (1,0)$) {$FY$};
\node (G1) at ($(F1)+(0,1)$) {$GX$}; \node (G2) at ($(F1)+(1,1)$) {$GY$};
\node (H1) at ($(F1)+(0,2)$) {$HX$}; \node (H2) at ($(F1)+(1,2)$) {$HY$};
\node[font=\Large] at ($(F1)+(.2,.5)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.1,.4)$) {$\alpha_{f}$};
\node[font=\Large] at ($(F1)+(1,.55)$) {$\Rightarrow$};
\node[font=\small] at ($(F1)+(1,.4)$) {$\Gamma_Y$};
\node[font=\Large] at ($(F1)+(.2,1.5)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F1)+(.08,1.38)$) {$\alpha'_{f}$};
\node[font=\Large] at ($(F1)+(1,1.57)$) {$\Rightarrow$};
\node[font=\small] at ($(F1)+(1,1.4)$) {$\Gamma'_Y$};
\draw[arrow] (F1) to node{\small{$Ff$}} (F2);
\draw[arrow, bend right=40] (F2) to node{\small{$\beta_Y$}} (G2);
\draw[arrow, bend left=40] (F1) to node[swap]{\small{$\alpha_X$}} (G1);
\draw[arrow] (G1) to node[swap]{\small{$Gf$}} (G2);
\draw[arrow, bend left=40] (F2) to node[swap]{\small{$\alpha_Y$}} (G2);
\draw[arrow] (H1) to node[swap]{\small{$Hf$}} (H2);
\draw[arrow, bend right=40] (G2) to node{\small{$\beta'_Y$}} (H2);
\draw[arrow, bend left=40] (G1) to node[swap]{\small{$\alpha'_X$}} (H1);
\draw[arrow, bend left=40] (G2) to node[swap]{\small{$\alpha'_Y$}} (H2);
\node at ($(F1)+(2,1)$) {\LARGE{$=$}};
\node (F3) at ($(F1)+(3,0)$) {$FX$}; \node (F4) at ($(F3) + (1,0)$) {$FY$};
\node (G3) at ($(F3)+(0,1)$) {$GX$}; \node (G4) at ($(F3)+(1,1)$) {$GY$};
\node (H3) at ($(F3)+(0,2)$) {$HX$}; \node (H4) at ($(F3)+(1,2)$) {$HY$};
\node[font=\Large] at ($(F3)+(.85,.5)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(.75,.4)$) {$\beta_{f}$};
\node[font=\Large] at ($(F3)+(0,.55)$) {$\Rightarrow$};
\node[font=\small] at ($(F3)+(0,.4)$) {$\Gamma_X$};
\node[font=\Large] at ($(F3)+(.85,1.52)$) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at ($(F3)+(.75,1.4)$) {$\beta'_{f}$};
\node[font=\Large] at ($(F3)+(0,1.57)$) {$\Rightarrow$};
\node[font=\small] at ($(F3)+(0,1.4)$) {$\Gamma'_X$};
\draw[arrow] (F3) to node{\small{$Ff$}} (F4);
\draw[arrow, bend right=40] (F4) to node{\small{$\beta_Y$}} (G4);
\draw[arrow, bend left=40] (F3) to node[swap]{\small{$\alpha_X$}} (G3);
\draw[arrow] (G3) to node[swap]{\small{$Gf$}} (G4);
\draw[arrow, bend right=40] (F3) to node{\small{$\beta_X$}} (G3);
\draw[arrow] (H3) to node[swap]{\small{$Hf$}} (H4);
\draw[arrow, bend right=40] (G4) to node{\small{$\beta'_Y$}} (H4);
\draw[arrow, bend left=40] (G3) to node[swap]{\small{$\alpha'_X$}} (H3);
\draw[arrow, bend right=40] (G3) to node{\small{$\beta'_X$}} (H3);
\end{tikzpicture}\]
The equalities for the top halves and the bottom halves follow from the modification axioms for $\Gamma'$ and $\Gamma$, respectively. Alternatively, using the expanded forms of the component $2$-cells $(\alpha'\alpha)_f$ and $(\beta'\beta)_f$ in \Cref{expl:lax-tr-comp}, the desired modification axiom \eqref{modification-axiom-pasting} for the horizontal composite is a diagram with twelve $2$-cells along the boundary. It factors into a ladder diagram with five squares. Three of them are commutative by the naturality of $a^{\pm 1}$, and the other two are the modification axioms for $\Gamma'$ and $\Gamma$.
The proof for the vertical composite $\Sigma\Gamma$ is similar; we ask the reader to supply it in \Cref{exer:mod-hcomp} below.
Vertical composition of modifications is strictly associative and unital because vertical composition of $2$-cells is strictly associative and unital in $\B'$.
The equality $1_{\alpha'} * 1_{\alpha} = 1_{\alpha'\alpha}$ holds by the bicategory axiom \eqref{bicat-c-id} in $\B'$ for each object $X$ in $\B$. Similarly, the middle four exchange holds for modifications because it holds in $\B'$ for each object $X$ in $\B$.
\end{proof}
\begin{lemma}\label{invertible-modification}\index{characterization of!an invertible modification}
A modification $\Gamma : \alpha \to \beta$ is invertible if and only if there is a unique modification $\Gamma^{-1} : \beta \to \alpha$ such that
\[\Gamma^{-1}\Gamma = 1_{\alpha} \andspace \Gamma\Gamma^{-1} = 1_{\beta}.\]
\end{lemma}
\begin{proof}
The \emph{if} direction holds by the definition of an invertible modification. For the other direction, if each component $2$-cell $\Gamma_X : \alpha_X \to \beta_X$ is invertible, then we define the $2$-cell \[\Gamma^{-1}_X = (\Gamma_X)^{-1} : \beta_X \to \alpha_X.\] The equalities $\Gamma^{-1}\Gamma = 1_{\alpha}$ and $\Gamma\Gamma^{-1} = 1_{\beta}$ are satisfied. The modification axiom for $\Gamma^{-1}$ is obtained by adding (i) a copy of $\Gamma^{-1}_X$ to the left and (ii) a copy of $\Gamma^{-1}_Y$ to the right of each side of the modification axiom \eqref{modification-axiom} for $\Gamma$. The uniqueness of $\Gamma^{-1}$ follows from the uniqueness of the inverse of an invertible morphism in a category.
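In more detail, whiskering both sides of the modification axiom \eqref{modification-axiom-pasting} for $\Gamma$ with $\Gamma^{-1}_Y * 1_{Ff}$ gives
\[\begin{split}
(\Gamma^{-1}_Y * 1_{Ff})\beta_f(1_{Gf}*\Gamma_X) &= (\Gamma^{-1}_Y * 1_{Ff})(\Gamma_Y * 1_{Ff})\alpha_f\\
&= \big((\Gamma^{-1}_Y\Gamma_Y) * (1_{Ff}1_{Ff})\big)\alpha_f = \alpha_f
\end{split}\]
by the middle four exchange \eqref{middle-four} and \eqref{bicat-c-id}. Composing both sides with $1_{Gf}*\Gamma^{-1}_X$ then yields the modification axiom
\[\alpha_f(1_{Gf}*\Gamma^{-1}_X) = (\Gamma^{-1}_Y * 1_{Ff})\beta_f\]
for $\Gamma^{-1}$.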
\end{proof}
\begin{definition}\label{def:bicat-of-lax-functors}
Suppose $\B$ and $\B'$ are bicategories with $\B_0$ a set. Define\label{notation:functor-bicat}
\[\Bicat(\B,\B')\] with the following data.
\begin{description}
\item[Objects] The objects in $\Bicat(\B,\B')$ are lax functors $\B\to\B'$.
\item[Hom Categories] For lax functors $F,G : \B\to\B'$, $\Bicat(\B,\B')(F,G)$ is the category with:
\begin{itemize}
\item lax transformations $F \to G$ as objects;
\item modifications $\Gamma : \alpha \to \beta$ between such lax transformations as morphisms;
\item vertical composition of modifications as composition;
\item identity modifications as identity morphisms.
\end{itemize}
So in $\Bicat(\B,\B')$, $1$-cells are lax transformations between lax functors from $\B$ to $\B'$, and $2$-cells are modifications between them.
\item[Identity $1$-Cells] For each lax functor $F : \B\to\B'$, its identity $1$-cell is the identity transformation $1_F : F \to F$ in \Cref{def:identity-strong-transformation}.
\item[Horizontal Composition] For lax functors $F,G,H : \B\to\B'$, the horizontal composition
\[\begin{tikzcd}
\Bicat(\B,\B')(G,H) \times \Bicat(\B,\B')(F,G) \ar{r}{c} & \Bicat(\B,\B')(F,H)
\end{tikzcd}\]
is given by:
\begin{itemize}
\item the horizontal composition of lax transformations in \Cref{def:lax-tr-comp} for $1$-cells;
\item the horizontal composition of modifications in \Cref{def:modification-composition} for $2$-cells.
\end{itemize}
\item[Associator] For lax transformations $\alpha : F \to G$, $\beta : G \to H$, and $\gamma : H \to I$ with $F,G,H,I : \B\to\B'$ lax functors, the component
\[a_{\gamma,\beta,\alpha} : (\gamma\beta)\alpha \to \gamma(\beta\alpha)\]
of the associator $a$ is the modification with, for each object $X\in\B$, the component $2$-cell
\[a_{\gamma_X,\beta_X,\alpha_X} : (\gamma_X\beta_X)\alpha_X \to \gamma_X(\beta_X\alpha_X) \inspace \B'(FX,IX),\] which is a component of the associator in $\B'$.
\item[Unitors] For each lax transformation $\alpha$ as above, the component
\[\ell_{\alpha} : 1_{G}\alpha \to \alpha\]
of the left unitor $\ell$ is the modification with, for each object $X\in\B$, the component $2$-cell
\[\ell_{\alpha_X} : 1_{GX}\alpha_X \to \alpha_X \inspace \B'(FX,GX),\]
which is a component of the left unitor in $\B'$. The right unitor $r$ is defined analogously using the right unitor in $\B'$.
\end{description}
This finishes the definition of $\Bicat(\B,\B')$.
\end{definition}
\begin{theorem}\label{thm:bicat-of-lax-functors}\index{lax functor!bicategory}\index{bicategory!of lax functors}
Suppose $\B$ and $\B'$ are bicategories such that $\B$ has a set of objects. Then $\Bicat(\B,\B')$ with the structure in \Cref{def:bicat-of-lax-functors} is a bicategory.
\end{theorem}
\begin{proof}
The naturality and invertibility of the associator $a$, the left unitor $\ell$, and the right unitor $r$ in $\Bicat(\B,\B')$ follow from the corresponding properties in $\B'$. The assumption that $\B_0$ be a set ensures that for each pair of lax functors $F,G : \B\to\B'$ and each pair of lax transformations $\alpha,\beta : F\to G$, there is only a set of modifications $\alpha \to \beta$. That each $\Bicat(\B,\B')(F,G)$ is a category and the functoriality of the horizontal composition $c$ follow from \Cref{modification-comp-id}. The pentagon axiom \eqref{bicat-pentagon} and the middle unity axiom \eqref{bicat-unity} hold in $\Bicat(\B,\B')$ because, for each object $X$ in $\B$, they hold in $\B'$.
\end{proof}
The above bicategory is particularly nice when the target is a $2$-category.
\begin{corollary}\label{2cat-of-lax-functors}
Suppose $\B$ is a bicategory with a set of objects, and $\B'$ is a $2$-category. Then $\Bicat(\B,\B')$ is a $2$-category.\index{2-category!of lax functors}
\end{corollary}
\begin{proof}
The assertion is that $a$, $\ell$, and $r$ in $\Bicat(\B,\B')$ are identity natural transformations. Each of their components is a modification whose components are those of $a$, $\ell$, or $r$ in $\B'$, which are identities by assumption. This is enough because the identity modification of a lax transformation is defined componentwise by identity $2$-cells.
\end{proof}
An important variation of the bicategory $\Bicat(\B,\B')$ is the following sub-bicategory, which will be a part of the definition of a biequivalence between bicategories.
\begin{corollary}\label{subbicat-pseudofunctor}\index{pseudofunctor!bicategory}\index{bicategory!of pseudofunctors}
For bicategories $\B$ and $\B'$ such that $\B$ has a set of objects, the bicategory $\Bicat(\B,\B')$ contains a sub-bicategory $\Bicatps(\B,\B')$ with
\begin{itemize}
\item pseudofunctors $\B\to\B'$ as objects,
\item strong transformations between such pseudofunctors as $1$-cells, and
\item modifications between such strong transformations as $2$-cells.
\end{itemize}
Moreover, if $\B'$ is a $2$-category, then $\Bicatps(\B,\B')$ is a $2$-category.
\end{corollary}
\begin{proof}
For pseudofunctors $F,G : \B\to\B'$, the category $\Bicatps(\B,\B')(F,G)$ is the full sub-category of $\Bicat(\B,\B')(F,G)$ consisting of all the strong transformations $F\to G$ and all the modifications between them. The identity transformation of a pseudofunctor, or any lax functor in general, is a strong transformation. The horizontal composition of two strong transformations is a strong transformation by \Cref{lax-tr-compose}.
If $\B'$ is a $2$-category, then, as in the proof of \Cref{2cat-of-lax-functors}, $a$, $\ell$, and $r$ are identities.
\end{proof}
The reader is asked to consider other variations in \Cref{exer:bicat-of-functors}.
\section{Representables}\label{sec:representables}
In this section we discuss representable pseudofunctors, representable transformations, and representable modifications. These concepts will be used in the bicategorical Yoneda embedding.
\begin{definition}\label{def:1cell-induced-functors}
Suppose $f \in \B(X,Y)$ is a $1$-cell in a bicategory $\B$, and $Z$ is an object in $\B$.
\begin{enumerate}
\item Define the \index{pre-composition functor}\index{functor!pre-composition}\emph{pre-composition functor}\label{notation:prepost-comp} \[f^* : \B(Y,Z) \to \B(X,Z)\]
by:
\begin{itemize}
\item $f^*(h)=hf$ for each $1$-cell $h\in\B(Y,Z)$.
\item $f^*(\alpha)=\alpha*1_f$ for each $2$-cell $\alpha$ in $\B(Y,Z)$.
\end{itemize}
\item Define the \index{post-composition functor}\index{functor!post-composition}\emph{post-composition functor} \[f_* : \B(Z,X) \to \B(Z,Y)\]
by:
\begin{itemize}
\item $f_*(g)=fg$ for each $1$-cell $g\in\B(Z,X)$.
\item $f_*(\theta)=1_f*\theta$ for each $2$-cell $\theta$ in $\B(Z,X)$.
\end{itemize}
\end{enumerate}
The functoriality of these functors follows from that of the horizontal composition in $\B$.
\end{definition}
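The functoriality of $f^*$, for example, can be checked directly: for $2$-cells $\alpha : h \to h'$ and $\varphi : h' \to h''$ in $\B(Y,Z)$,
\[f^*(\varphi\alpha) = (\varphi\alpha)*1_f = (\varphi\alpha)*(1_f1_f) = (\varphi*1_f)(\alpha*1_f) = f^*(\varphi)f^*(\alpha)\]
by the vertical unity property \eqref{hom-category-axioms} and the middle four exchange \eqref{middle-four}, while $f^*(1_h) = 1_h*1_f = 1_{hf} = 1_{f^*(h)}$ by \eqref{bicat-c-id}. The computation for $f_*$ is analogous.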
Recall from \Cref{def:bicategory-opposite} the opposite bicategory $\Bop$.
\begin{proposition}\label{representable-pseudofunctor}
Each object $X$ in a bicategory $\B$ induces a pseudofunctor \[\B(-,X) : \Bop \to \Cat.\]
\end{proposition}
\begin{proof}
To be precise, $\B(-,X)$ is defined as follows.
\begin{description}
\item[Objects] $\B(-,X)$ sends each object $A$ in $\Bop$ to the hom-category $\B(A,X)$.
\item[$1$-Cells] For objects $A,B$ in $\B$, the functor
\[\begin{tikzcd}[column sep=large]
\Bop(A,B)=\B(B,A) \ar{r}{\B(-,X)} & \Cat\bigl(\B(A,X),\B(B,X)\bigr)\end{tikzcd}\]
sends a $1$-cell $f\in\B(B,A)$ to the pre-composition functor \[\B(f,X)=f^* : \B(A,X)\to\B(B,X).\]
\item[$2$-Cells] The functor $\B(-,X)$ sends a $2$-cell $\theta : f\to g$ in $\B(B,A)$ to the natural transformation
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=3, yscale=1]
\node (s) at (0,0) {$\B(A,X)$}; \node (t) at (1,0) {$\B(B,X)$};
\draw [arrow, bend left=60] (s) to node{\small{$f^*$}} (t);
\draw [arrow, bend right=60] (s) to node[swap]{\small{$g^*$}} (t);
\node[font=\Large] at (.55,0) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.45,0) {$\theta^*$};
\end{tikzpicture}\]
whose component at a $1$-cell $h\in\B(A,X)$ is the $2$-cell
\[\theta^*_h = 1_h*\theta : hf \to hg\] in $\B(B,X)$. That $\B(-,X)$ preserves identity $2$-cells means the second equality in \[(1_f)^*_h = 1_h * 1_f = 1_{hf} = 1_{f^*(h)} = (\Id_{f^*})_h\] for each $1$-cell $h\in\B(A,X)$, which is true by \eqref{bicat-c-id}. To see that $\B(-,X)$ preserves vertical composition, suppose $\varphi : g \to e$ is another $2$-cell in $\B(B,A)$. Using the definition \eqref{not:vcomp} of vertical composition, the desired equality $(\varphi\theta)^* = \varphi^*\theta^*$ means \[1_h *(\varphi\theta) = (1_h*\varphi)(1_h*\theta),\] which is true by the vertical unity property \eqref{hom-category-axioms} and the middle four exchange \eqref{middle-four}.
\item[Lax Unity Constraint] For an object $A$ in $\B$, the natural transformation
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=4, yscale=1]
\node (s) at (0,0) {$\B(A,X)$}; \node (t) at (1,0) {$\B(A,X)$};
\draw [arrow, bend left=75] (s) to node{\small{$\Id_{\B(A,X)}$}} (t);
\draw [arrow, bend right=75] (s) to node[swap]{\small{$1_A^*$}} (t);
\node[font=\LARGE] at (.7,0) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.45,0) {$\B(-,X)^0_A$};
\end{tikzpicture}\]
sends a $1$-cell $h \in \B(A,X)$ to the $2$-cell
\[\begin{tikzcd}h \ar{r}{r_h^{-1}}[swap]{\cong} & h1_A = (1_A^*)(h)\end{tikzcd}\] in $\B(A,X)$, in which $r$ is the right unitor in $\B$.
\item[Lax Functoriality Constraint] For a pair of $1$-cells \[(g,f) \in \Bop(B,C)\times\Bop(A,B) = \B(C,B)\times\B(B,A),\] the natural transformation
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=4, yscale=1]
\node (s) at (0,0) {$\B(A,X)$}; \node (t) at (1,0) {$\B(C,X)$};
\draw [arrow, bend left=75] (s) to node{\small{$g^* f^*$}} (t);
\draw [arrow, bend right=75] (s) to node[swap]{\small{$(fg)^*$}} (t);
\node[font=\LARGE] at (.7,0) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.45,0) {$\B(-,X)^2_{g,f}$};
\end{tikzpicture}\]
sends a $1$-cell $h \in \B(A,X)$ to the $2$-cell
\[\begin{tikzcd}
(g^* f^*)(h) = (hf)g \ar{r}{a_{h,f,g}} & h(fg) = (fg)^*(h)
\end{tikzcd}\]
in $\B(C,X)$, which is a component of the associator $a$ in $\B$.
\end{description}
This finishes the definition of $\B(-,X)$.
The naturality of $\B(-,X)^2$ as in \eqref{f2-bicat-naturality} is the diagram on the left
\[\begin{tikzcd}[column sep=large]
g^* f^* \ar{d}[swap]{\beta^* *\alpha^*} \ar{r}{\B(-,X)^2} & (fg)^* \ar{d}{(\alpha*\beta)^*}\\ g'^* f'^* \ar{r}{\B(-,X)^2} & (f'g')^*\end{tikzcd}\qquad
\begin{tikzcd}
(hf)g \ar{r}{a_{h,f,g}} \ar{d}[swap]{(1_h*\alpha)*\beta} & h(fg) \ar{d}{1_h*(\alpha*\beta)}\\
(hf')g' \ar{r}{a_{h,f',g'}} & h(f'g')\end{tikzcd}\]
in $\Cat\bigl(\B(A,X),\B(C,X)\bigr)$ for $2$-cells $\alpha : f\to f'$ in $\B(B,A)$ and $\beta : g \to g'$ in $\B(C,B)$. This is commutative if and only if for each $1$-cell $h\in \B(A,X)$, the diagram in $\B(C,X)$ on the right above is commutative. For the $2$-cell on the left, we used
the equalities
\[\begin{split}
(\beta^* * \alpha^*)_h &= (1_{hf'}*\beta)((1_h*\alpha)*1_g)\\
&= (1_{hf'}(1_h*\alpha)) * (\beta 1_g)\\
&= (1_h*\alpha) *\beta,\end{split}\]
which hold by \eqref{horizontal-composition}, \eqref{middle-four}, and \eqref{hom-category-axioms}, respectively. The commutativity of the right diagram above follows from the naturality of $a$ \eqref{associator-naturality}.
The lax associativity diagram \eqref{f2-bicat} is the diagram on the left
\[\begin{tikzcd}[column sep=large]
(h^*g^*)f^* \ar{r}{a\,=\,\Id} \ar{d}[swap]{\B(-,X)^2*\Id} & h^*(g^*f^*) \ar{d}{\Id*\B(-,X)^2}\\
(gh)^*f^* \ar{d}[swap]{\B(-,X)^2} & h^*(fg)^* \ar{d}{\B(-,X)^2}\\
(f(gh))^* \ar{r}{(a^{-1})^*} & ((fg)h)^*\end{tikzcd}\qquad
\begin{tikzcd}[column sep=large]
((ef)g)h \ar{d}[swap]{a_{ef,g,h}} \ar[bend left]{dr}{a_{e,f,g} * 1_h} &\\
(ef)(gh) \ar{d}[swap]{a_{e,f,gh}} & (e(fg))h \ar{d}{a_{e,fg,h}}\\
e(f(gh)) \ar{r}{1_e*a^{-1}_{f,g,h}} & e((fg)h)\end{tikzcd}\]
in $\Cat\bigl(\B(A,X),\B(D,X)\bigr)$ for $1$-cells $f\in\B(B,A)$, $g\in \B(C,B)$, and $h\in\B(D,C)$. This is commutative if and only if for each $1$-cell $e\in \B(A,X)$, the diagram in $\B(D,X)$ on the right above is commutative. The commutativity of this diagram follows from the pentagon axiom \eqref{bicat-pentagon} in $\B$.
The lax left and right unity diagrams \eqref{f0-bicat} are the diagrams
\[\begin{tikzcd}[column sep=large]
\Id f^* \ar{d}[swap]{\B(-,X)^0*\Id_{f^*}} \ar{r}{\ell\,=\,\Id} & f^*\\
1_B^*f^* \ar{r}{\B(-,X)^2} & (f1_B)^* \ar{u}[swap]{(\ell^{\op})^*}\end{tikzcd}\qquad
\begin{tikzcd}[column sep=large]
f^*\Id \ar{d}[swap]{\Id_{f^*}*\B(-,X)^0} \ar{r}{r\,=\,\Id} & f^*\\
f^*1_A^* \ar{r}{\B(-,X)^2} & (1_Af)^* \ar{u}[swap]{(r^{\op})^*}\end{tikzcd}\]
in $\Cat\bigl(\B(A,X),\B(B,X)\bigr)$ for a $1$-cell $f\in\B(B,A)$. They are commutative if and only if the diagrams
\[\begin{tikzcd}
hf \ar{d}[swap]{r_{hf}^{-1}} \ar{r}{1_{hf}} & hf\\
(hf)1_B \ar{r}{a_{h,f,1_B}} & h(f1_B) \ar{u}[swap]{1_h*r_f}\end{tikzcd}\qquad
\begin{tikzcd}
hf \ar{d}[swap]{r_{h}^{-1}*1_f} \ar{r}{1_{hf}} & hf\\
(h1_A)f \ar{r}{a_{h,1_A,f}} & h(1_Af) \ar{u}[swap]{1_h*\ell_f}
\end{tikzcd}\]
in $\B(B,X)$ are commutative for each $1$-cell $h\in\B(A,X)$. The left diagram above is commutative by the unity property in \eqref{hom-category-axioms} and the right unity diagram in \Cref{bicat-left-right-unity}. The right diagram above is commutative by \eqref{hom-category-axioms} and the middle unity property \eqref{bicat-unity}.
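Explicitly, if we write the right unity property in \Cref{bicat-left-right-unity} as $r_{hf} = (1_h * r_f)\,a_{h,f,1_B}$ and the middle unity axiom \eqref{bicat-unity} as $(1_h * \ell_f)\,a_{h,1_A,f} = r_h * 1_f$, then the boundaries of the two diagrams reduce to identities:
\[\begin{split}
(1_h * r_f)\,a_{h,f,1_B}\,r_{hf}^{-1} &= r_{hf}\,r_{hf}^{-1} = 1_{hf},\\
(1_h * \ell_f)\,a_{h,1_A,f}\,(r_h^{-1} * 1_f) &= (r_h * 1_f)(r_h^{-1} * 1_f) = (r_h r_h^{-1}) * (1_f 1_f) = 1_{hf},
\end{split}\]
where the middle equality in the second line uses the middle four exchange \eqref{middle-four}.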
\end{proof}
\begin{corollary}\label{corepresentable-pseudofunctor}
Each object $X$ in a bicategory $\B$ induces a pseudofunctor \[\B(X,-) : \B \to \Cat.\]
\end{corollary}
\begin{proof}
Apply \Cref{representable-pseudofunctor} to $\Bop$, and use the equalities $\Bop(-,X)=\B(X,-)$ and $(\Bop)^{\op}=\B$.
\end{proof}
\begin{definition}\label{def:representable-pseudofunctor}
The pseudofunctor $\B(-,X) : \Bop\to\Cat$ in \Cref{representable-pseudofunctor} is called a \index{representable!pseudofunctor}\index{pseudofunctor!representable}\emph{representable pseudofunctor}. The pseudofunctor $\B(X,-) : \B\to\Cat$ in \Cref{corepresentable-pseudofunctor} is called a \index{corepresentable pseudofunctor}\index{pseudofunctor!corepresentable}\emph{corepresentable pseudofunctor}.
\end{definition}
\begin{proposition}\label{representable-transformation}
Each $1$-cell $f \in \B(X,Y)$ in a bicategory $\B$ induces a strong transformation
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=3, yscale=1]
\node (s) at (0,0) {$\Bop$}; \node (t) at (1,0) {$\Cat$};
\draw [arrow, bend left=60] (s) to node{\small{$\B(-,X)$}} (t);
\draw [arrow, bend right=60] (s) to node[swap]{\small{$\B(-,Y)$}} (t);
\node[font=\Large] at (.55,0) {\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.45,0) {$f_*$};
\end{tikzpicture}\]
with $\B(-,X)$ and $\B(-,Y)$ the pseudofunctors in \Cref{representable-pseudofunctor}.
\end{proposition}
\begin{proof}
To be precise, $f_*=\B(-,f)$ is defined as follows.
\begin{description}
\item[Component $1$-Cells] It is equipped with the post-composition functor
\[f_*=\B(A,f) : \B(A,X)\to \B(A,Y)\] in \Cref{def:1cell-induced-functors} for each object $A\in\B$.
\item[Component $2$-Cells] For each $1$-cell $g\in\B(B,A) = \Bop(A,B)$, it is equipped with the natural transformation
\[\begin{tikzpicture}[xscale=2.5, yscale=-1.8]
\node (F1) at (0,0) {$\B(A,X)$}; \node (F2) at (1,0) {$\B(B,X)$};
\node (G1) at (0,1) {$\B(A,Y)$}; \node (G2) at (1,1) {$\B(B,Y)$};
\draw[arrow] (F1) to node{\small{$g^*$}} (F2);
\draw[arrow] (F1) to node[swap]{\small{$f_*$}} (G1);
\draw[arrow] (F2) to node{\small{$f_*$}} (G2);
\draw[arrow] (G1) to node{\small{$g^*$}} (G2);
\node[font=\Large] at (.52,.52) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (.35,.35) {$(f_*)_g$};
\end{tikzpicture}\]
that sends each $1$-cell $h \in \B(A,X)$ to the component
\[\begin{tikzcd}
g^*f_*(h) = (fh)g \ar{r}{a_{f,h,g}} & f(hg) = f_*g^*(h)\end{tikzcd}\]
of the associator in $\B$.
\end{description}
The naturality of this natural transformation follows from that of $a$.
For each object $A$ in $\B$, the lax unity axiom \eqref{unity-transformation} for $f_*$ follows from the right unity diagram in \Cref{bicat-left-right-unity}. For each pair of composable $1$-cells $(i,h) \in \B(C,B)\times\B(B,A)$, the lax naturality axiom \eqref{2-cell-transformation} for $f_*$ follows from the pentagon axiom \eqref{bicat-pentagon} in $\B$. The details for these two assertions are similar to parts of the proof of \Cref{representable-pseudofunctor}, so we ask the reader to check them in Exercise \ref{exer:rep-tr} below.
\end{proof}
\begin{definition}\label{def:representable-transformation}
The strong transformation $f_* = \B(-,f)$ in \Cref{representable-transformation} is called a \index{representable!transformation}\index{transformation!representable}\emph{representable transformation}.
\end{definition}
\begin{proposition}\label{representable-modification}
Suppose $f,g \in\B(X,Y)$ are $1$-cells, and $\alpha : f \to g$ is a $2$-cell. Then $\alpha$ induces a modification \[\alpha_* : f_* \to g_* : \B(-,X)\to\B(-,Y)\] between the representable transformations $f_*$ and $g_* $. Moreover, if $\alpha$ is an invertible $2$-cell, then $\alpha_*$ is an invertible modification.
\end{proposition}
\begin{proof}
To be precise, $\alpha_*$ is the modification with, for each object $A \in \B$, a component $2$-cell, i.e., a natural transformation
\[\begin{tikzcd}
(f_*)_A = \B(A,f) \ar{r}{(\alpha_*)_A} & \B(A,g) = (g_*)_A\end{tikzcd}\]
in $\Cat\big(\B(A,X),\B(A,Y)\big)$. For each $1$-cell $h \in \B(A,X)$, it has the component $2$-cell
\[(\alpha_*)_{A,h} = \alpha * 1_h : (f_*)_A(h) = fh \to gh = (g_*)_A(h) \inspace \B(A,Y).\]
The naturality of $(\alpha_*)_A$ means that, for each $2$-cell $\theta : h \to h'$ in $\B(A,X)$, the diagram
\[\begin{tikzcd}
fh \ar{r}{\alpha * 1_h} \ar{d}[swap]{1_f*\theta} & gh \ar{d}{1_g*\theta}\\
fh' \ar{r}{\alpha*1_{h'}} & gh'\end{tikzcd}\]
in $\B(A,Y)$ is commutative. This is true since both composites are equal to $\alpha *\theta$ by the bicategory axioms \eqref{hom-category-axioms} and \eqref{middle-four}.
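Explicitly, by \eqref{hom-category-axioms} and \eqref{middle-four},
\[(1_g * \theta)(\alpha * 1_h) = (1_g\alpha) * (\theta 1_h) = \alpha * \theta = (\alpha 1_f) * (1_{h'}\theta) = (\alpha * 1_{h'})(1_f * \theta).\]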
It remains to check the modification axiom \eqref{modification-axiom} for $\alpha_*$. It asserts that, for each $1$-cell $j\in\Bop(A,B) = \B(B,A)$, the diagram in $\Cat\big(\B(A,X),\B(B,Y)\big)$,
\[\begin{tikzcd}[column sep=large]
j^*(f_*)_A \ar{r}{1*(\alpha_*)_A} \ar{d}[swap]{(f_*)_j} & j^*(g_*)_A \ar{d}{(g_*)_j}\\
(f_*)_Bj^* \ar{r}{(\alpha_*)_B * 1} & (g_*)_Bj^*
\end{tikzcd}\]
is commutative. This means that for each $1$-cell $h\in\B(A,X)$, the diagram
\[\begin{tikzcd}[column sep=huge]
(fh)j \ar{r}{(\alpha* 1_h)*1_j} \ar{d}[swap]{a_{f,h,j}} & (gh)j \ar{d}{a_{g,h,j}}\\
f(hj) \ar{r}{\alpha*1_{hj}} & g(hj)
\end{tikzcd}\]
in $\B(B,Y)$ is commutative. This follows from the naturality of $a$ and the bicategory axiom \eqref{bicat-c-id}.
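In more detail, the naturality of $a$ \eqref{associator-naturality} with respect to the $2$-cells $(\alpha, 1_h, 1_j)$ gives
\[a_{g,h,j}\,\big((\alpha * 1_h) * 1_j\big) = \big(\alpha * (1_h * 1_j)\big)\,a_{f,h,j},\]
and $1_h * 1_j = 1_{hj}$ by \eqref{bicat-c-id}.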
If $\alpha : f \to g$ is an invertible $2$-cell, then each component $2$-cell $(\alpha_*)_{A,h} = \alpha * 1_h$ is an invertible $2$-cell. So $\alpha_*$ is an invertible modification.
\end{proof}
\begin{definition}\label{def:representable-modification}
The modification $\alpha_* : f_* \to g_*$ in \Cref{representable-modification} is called a \index{representable!modification}\index{modification!representable}\emph{representable modification}.
\end{definition}
\section{Icons}\label{sec:icons}
In this section we observe that there is a $2$-category with small bicategories as objects, lax functors as $1$-cells, and icons, to be defined below, as $2$-cells.
\begin{motivation}\label{mot:no-bicat-of-bicat}
We saw in \Cref{thm:cat-of-bicat} that there is a category $\Bicat$ of small bicategories and lax functors. A natural question is whether this category can be extended to a bicategory with lax transformations as $2$-cells. Given lax transformations $\alpha$ and $\beta$ as in the diagram
\[\begin{tikzpicture}[xscale=2, yscale=2]
\node (B) at (0,0) {$\B$};
\node (C) at (1,0) {$\C$};
\node (D) at (2,0) {$\D$};
\node[font=\Large] at (.4,0){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.6,0) {$\alpha$};
\node[font=\Large] at (1.4,0){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (1.6,0) {$\beta$};
\draw[arrow, bend left] (B) to node{\small{$F$}} (C);
\draw[arrow, bend right] (B) to node[swap]{\small{$G$}} (C);
\draw[arrow, bend left] (C) to node{\small{$H$}} (D);
\draw[arrow, bend right] (C) to node[swap]{\small{$K$}} (D);
\end{tikzpicture}\]
we would define the horizontal composition $\beta *\alpha : HF\to KG$ as having the component $1$-cell
\[\begin{tikzcd}
HFX \ar[bend left, start anchor={[xshift=-.3cm]},
end anchor={[xshift=.3cm]}]{rr}{(\beta*\alpha)_X} \ar{r}{H\alpha_X} & HGX \ar{r}{\beta_{GX}} & KGX\end{tikzcd}\]
for each object $X$ in $\B$.
For each $1$-cell $f \in \B(X,Y)$, we would define the component $2$-cell $(\beta*\alpha)_f$ using the diagram
\[\begin{tikzpicture}[xscale=3, yscale=-1.5]
\node (F1) at (0,0) {$HFX$}; \node (F2) at (1,0) {$HFY$};
\node (G1) at (0,1) {$HGX$}; \node (G2) at (1,1) {$HGY$};
\node (H1) at (0,2) {$KGX$}; \node (H2) at (1,2) {$KGY$};
\draw[arrow] (F1) to node{\small{$HFf$}} (F2);
\draw[arrow] (F1) to node[swap]{\small{$H\alpha_X$}} (G1);
\draw[arrow] (F2) to node{\small{$H\alpha_Y$}} (G2);
\draw[arrow] (G1) to node{\small{$HGf$}} (G2);
\draw[arrow] (G1) to node[swap]{\small{$\beta_{GX}$}} (H1);
\draw[arrow] (G2) to node{\small{$\beta_{GY}$}} (H2);
\draw[arrow] (H1) to node{\small{$KGf$}} (H2);
\draw[arrow,bend left=40] (F1.west) to node[swap]{\small{$(\beta*\alpha)_X$}} (H1.west);
\draw[arrow,bend right=40] (F2.east) to node{\small{$(\beta*\alpha)_Y$}} (H2.east);
\node[font=\Large] at (.5,.5) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (.4,.4) {$?$};
\node[font=\Large] at (.5,1.5) {\rotatebox{45}{$\Rightarrow$}};
\node[font=\small] at (.38,1.38) {$\beta_{Gf}$};
\end{tikzpicture}\]
by filling the two squares with appropriate $2$-cells. For the bottom square we can use $\beta_{Gf}$. For the top square, we cannot simply apply $H$ to $\alpha_f$ because it would only yield the $2$-cell along the top row below.
\[\begin{tikzcd}
H\big((Gf)\alpha_X\big) \ar{r}{H\alpha_f} & H\big(\alpha_Y(Ff)\big)\\
(HGf)(H\alpha_X) \ar{u}{H^2} \ar[densely dashed]{r}{?} & (H\alpha_Y)(HFf) \ar{u}{H^2}\end{tikzcd}\]
If we first pre-compose with $H^2$, then we would get the correct domain. However, we still cannot get the correct codomain by post-composing with $H^2$ because of its direction. Note that if $H$ is a strict functor, then $H^2$ is an identity, so this issue does not arise. Along these lines, in \Cref{exer:2cat-of-2cat} the reader is asked to check that there is a $2$-category of small $2$-categories, $2$-functors, and $2$-natural transformations.
If $H$ is a pseudofunctor, one may obtain a composite by using the inverse of $H^2$. However, the resulting horizontal composition is not strictly associative, and therefore cannot serve as the composition of $2$-cells in a bicategory. We will return to this idea when we discuss tricategories in \Cref{ch:tricat-of-bicat}.\dqed
\end{motivation}
In contrast with the observations above, we can obtain a bicategory structure on $\Bicat$ by using the following more specialized concept for $2$-cells.
\begin{definition}\label{def:icon}
Suppose $F, G : \B \to \B'$ are lax functors between bicategories such that $FX=GX$ for each object $X$ in $\B$. An \emph{icon}\index{icon} $\alpha : F\to G$ consists of a natural transformation
\[\begin{tikzpicture}[xscale=4, yscale=2]
\node (s) at (0,0) {$\B(X,Y)$};
\node (t) at (1,0) {$\B'(FX,FY) = \B'(GX,GY)$};
\node[font=\LARGE] at (.3,0){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.38,0) {$\alpha$};
\draw[arrow, bend left] (s.45) to node{\small{$F$}} (t.165);
\draw[arrow, bend right] (s.315) to node[swap]{\small{$G$}} (t.195);
\end{tikzpicture}\]
for each pair of objects $X,Y$ in $\B$, such that the following two pasting diagram equalities are satisfied for all objects $X,Y,Z$ and $1$-cells $f \in \B(X,Y)$ and $g \in \B(Y,Z)$.
\begin{description}
\item[Icon Unity]\index{unity!icon}
\begin{equation}\label{icon-unity-pasting}
\begin{tikzpicture}[xscale=2.5, yscale=2, baseline={(eq.base)}]
\node (F1) at (0,0) {$FX$}; \node (F2) at (1,0) {$FX$};
\node[font=\Large] at (.5,.2){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.65,.2) {$F^0_X$};
\node[font=\Large] at (.5,-.2){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.67,-.2) {$\alpha_{1_X}$};
\draw[arrow, bend left=90] (F1) to node{\small{$1_{FX}$}} (F2);
\draw[arrow] (F1) to node[near start]{\scalebox{.8}{$F1_X$}} (F2);
\draw[arrow, bend right=90] (F1) to node[swap]{\small{$G1_X$}} (F2);
\node (eq) at ($(F1)+(1.5,0)$) {\LARGE{$=$}};
\node (G1) at ($(F1)+(2,0)$) {$GX$}; \node (G2) at ($(G1)+(1,0)$) {$GX$};
\node[font=\Large] at ($(G1)+(.4,0)$){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(G1)+(.6,0)$) {$G^0_X$};
\draw[arrow, bend left=90] (G1) to node{\small{$1_{GX}$}} (G2);
\draw[arrow, bend right=90] (G1) to node[swap]{\small{$G1_X$}} (G2);
\end{tikzpicture}
\end{equation}
\item[Icon Naturality]\index{naturality!icon}
\begin{equation}\label{icon-naturality-pasting}
\begin{tikzpicture}[xscale=2.5, yscale=2, baseline={(eq.base)}]
\node (F1) at (0,0) {$FX$};
\node (F2) at ($(F1)+(.5,.8)$) {$FY$};
\node (F3) at ($(F1)+(1,0)$) {$FZ$};
\node[font=\Large] at (.45,.4){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.6,.4) {$F^2$};
\node[font=\Large] at (.45,-.2){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.6,-.2) {$\alpha_{gf}$};
\draw[arrow, bend left] (F1) to node{\small{$Ff$}} (F2);
\draw[arrow, bend left] (F2) to node{\small{$Fg$}} (F3);
\draw[arrow] (F1) to node[near start]{\scalebox{.8}{$F(gf)$}} (F3);
\draw[arrow, bend right=60] (F1) to node[swap]{\small{$G(gf)$}} (F3);
\node (eq) at ($(F1)+(1.5,0)$) {\LARGE{$=$}};
\node (Fa) at ($(F1)+(2,0)$) {$FX$};
\node (Fb) at ($(Fa)+(.5,.8)$) {$FY$};
\node (Fc) at ($(Fa)+(1,0)$) {$FZ$};
\node[font=\Large] at ($(Fa)+(.45,-.1)$){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at ($(Fa)+(.6,-.1)$) {$G^2$};
\node[font=\Large] at ($(Fa)!.4!(Fb)$){\rotatebox{-45}{$\Rightarrow$}};
\node[font=\small] at ($(Fa)!.55!(Fb)$) {$\alpha_f$};
\node[font=\Large] at ($(Fb)!.6!(Fc)$){\rotatebox{-135}{$\Rightarrow$}};
\node[font=\small] at ($(Fb)!.45!(Fc)$) {$\alpha_g$};
\draw[arrow, bend left] (Fa) to node{\small{$Ff$}} (Fb);
\draw[arrow, bend left] (Fb) to node{\small{$Fg$}} (Fc);
\draw[arrow, bend right=20] (Fa) to node[swap, inner sep=0pt, near start]{\scalebox{.7}{$Gf$}} (Fb);
\draw[arrow, bend right=20] (Fb) to node[swap, inner sep=0pt, near end]{\scalebox{.7}{$Gg$}} (Fc);
\draw[arrow, bend right=60] (Fa) to node[swap]{\small{$G(gf)$}} (Fc);
\end{tikzpicture}
\end{equation}
\end{description}
This finishes the definition of an icon.
\end{definition}
\begin{explanation}\label{expl:icon}
In \Cref{def:icon}:
\begin{enumerate}
\item An icon stands for an \underline{i}dentity \underline{c}omponent \underline{o}plax transformatio\underline{n}. We will justify this name in \Cref{icon-is-icon} below.
\item An icon $\alpha : F \to G$ consists of a $2$-cell $\alpha_f : Ff \to Gf$ in $\B'(FX,FY)$ for each $1$-cell $f \in \B(X,Y)$. The naturality of $\alpha$ as a natural transformation means that, for each $2$-cell $\theta : f \to g$ in $\B(X,Y)$, the diagram
\begin{equation}\label{icon-2cell-naturality}
\begin{tikzcd}
Ff \ar{d}[swap]{F\theta} \ar{r}{\alpha_f} & Gf \ar{d}{G\theta}\\
Fg \ar{r}{\alpha_g} & Gg\end{tikzcd}
\end{equation}
in $\B'(FX,FY)=\B'(GX,GY)$ is commutative.
\item The icon unity axiom \eqref{icon-unity-pasting} is the equality
\begin{equation}\label{icon-unity}
\alpha_{1_X}(F^0_X) = G^0_X
\end{equation}
in $\B'(FX,FX) = \B'(GX,GX)$ for each object $X$ in $\B$.
\item The icon naturality axiom \eqref{icon-naturality-pasting} is the equality
\begin{equation}\label{icon-naturality}
\alpha_{gf}F^2_{g,f} = G^2_{g,f}\big(\alpha_g * \alpha_f\big)
\end{equation}
in $\B'(FX,FZ) = \B'(GX,GZ)$ for each pair $(g,f)$ of composable $1$-cells in $\B$.\dqed
\end{enumerate}
\end{explanation}
\begin{proposition}\label{icon-is-icon}\index{characterization of!an icon}
Suppose $F, G : \B \to \B'$ are lax functors between bicategories such that $FX=GX$ for each object $X$ in $\B$. Then there is a canonical bijection between:
\begin{enumerate}
\item Icons $\alpha : F \to G$.
\item Oplax transformations $\alpha' : F\to G$ with component identity $1$-cells.
\end{enumerate}
\end{proposition}
\begin{proof}
Given an icon $\alpha : F \to G$, we define $\alpha' : F \to G$ with:
\begin{itemize}
\item component identity $1$-cell \[\alpha'_X = 1_{FX} = 1_{GX}\] for each object $X$ in $\B$;
\item component $2$-cell
\[\alpha'_f = r^{-1}_{Gf}\alpha_f\ell_{Ff} : 1_{FY}(Ff)\to (Gf)1_{FX}\]
for each $1$-cell $f \in \B(X,Y)$.
\end{itemize}
The naturality \eqref{oplax-transformation-naturality} of $\alpha'$ with respect to $2$-cells in $\B(X,Y)$ follows from that of $\alpha$ \eqref{icon-2cell-naturality}, $\ell$, and $r$.
The oplax naturality axiom \eqref{2cell-oplax} for $\alpha'$ is the outermost diagram below
\[\begin{tikzcd}[column sep=small]
\big((Gg)1_{FY}\big)(Ff) \ar{r}{a} & (Gg)\big(1_{FY}(Ff)\big) \ar{r}{1*\ell} & (Gg)(Ff) \ar{r}{1*\alpha_f} & (Gg)(Gf) \ar[start anchor={[xshift=-.3cm]}, end anchor={[xshift=-.2cm]}]{ddl}[swap]{1} \ar{d}{1*r^{-1}}\\
(Gg)(Ff) \ar{u}{r^{-1}*1} \ar{ur}[sloped, anchor=center, above]{1*\ell^{-1}} \ar[bend right=15]{urr}{1} &&& (Gg)\big((Gf)1_{GX}\big) \ar{d}{a^{-1}}\\
(Fg)(Ff) \ar{u}{\alpha_g*1} \ar{r}{1} & (Fg)(Ff) \ar{r}{\alpha_g*\alpha_f} \ar{ddr}{F^2} & (Gg)(Gf) \ar{r}{r^{-1}} \ar{ddr}[swap]{G^2} & \big((Gg)(Gf)\big)(1_{GX}) \ar{d}{G^2*1}\\
\big(1_{FZ}(Fg)\big)(Ff) \ar{u}{\ell*1} \ar{d}[swap]{a} &&& G(gf)1_{GX}\\
1_{FZ}\big((Fg)(Ff)\big) \ar{r}{1*F^2} \ar[start anchor={[xshift=.3cm]}]{uur}[near end]{\ell} & 1_{FZ} F(gf) \ar{r}{\ell} & F(gf) \ar{r}{\alpha_{gf}} & G(gf) \ar{u}[swap]{r^{-1}}
\end{tikzcd}\]
in which every identity $2$-cell is written as $1$. In the diagram above, the lower right parallelogram is commutative by the icon naturality axiom \eqref{icon-naturality}. The other sub-diagrams are commutative by the naturality of $\ell$ and $r$, the bicategory axioms \eqref{hom-category-axioms}, \eqref{bicat-c-id}, and \eqref{middle-four}, the middle unity axiom \eqref{bicat-unity}, and the left and right unity properties in \Cref{bicat-left-right-unity}. The oplax unity axiom \eqref{unity-oplax} for $\alpha'$ is proved similarly. Therefore, $\alpha'$ is an oplax transformation with component identity $1$-cells.
Conversely, given an oplax transformation $\alpha' : F\to G$ with component identity $1$-cells, we define the $2$-cell \[\alpha_f = r_{Gf} \alpha'_f \ell_{Ff}^{-1} : Ff \to Gf\] for each $1$-cell $f \in \B(X,Y)$. In \Cref{exer:icon-is-icon} the reader is asked to check that $\alpha : F \to G$ is an icon. This gives the desired canonical bijection because the two assignments $\alpha \mapsto \alpha'$ and $\alpha' \mapsto \alpha$ are inverses of each other.
\end{proof}
\begin{example}[Monoidal Natural Transformations]\label{ex:mnt-icon}\index{monoidal natural transformation!as an icon}
Suppose $F,G : \C\to\D$ are monoidal functors between monoidal categories, regarded as lax functors between one-object bicategories as in \Cref{ex:monfunctor-laxfunctor}. Then a monoidal natural transformation $\theta : F \to G$ is precisely an icon from $F$ to $G$. Indeed, \Cref{monnt-oplax-transformation} says that a monoidal natural transformation $\theta$ yields an oplax transformation $\vartheta : F \to G$ with component identity $1$-cells. The corresponding icon, in the sense of \Cref{icon-is-icon}, is precisely $\theta$. The converse is similar.\dqed
\end{example}
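Concretely, for a monoidal natural transformation $\theta : F \to G$ as in \Cref{ex:mnt-icon}, with horizontal composition in the one-object bicategory given by the monoidal product, the icon axioms \eqref{icon-unity} and \eqref{icon-naturality} read
\[\theta_{\tensorunit}\,F^0 = G^0 \qquad\text{and}\qquad \theta_{y \otimes x}\,F^2_{y,x} = G^2_{y,x}\,(\theta_y \otimes \theta_x)\]
for objects $x,y$, which are precisely the unit and compatibility axioms of a monoidal natural transformation.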
To make icons the $2$-cells of a bicategory, next we define their vertical and horizontal compositions.
\begin{definition}\label{def:icon-composition}
Suppose $F,G,H : \B \to \C$ are lax functors between bicategories such that $FX=GX=HX$ for each object $X$ in $\B$. In the following definitions, $X$ and $Y$ run through the objects in $\B$.
\begin{description}
\item[Identities] The \emph{identity icon}\index{identity!icon}\index{icon!identity} of $F$, denoted by $1_F : F \to F$, is the icon consisting of the identity natural transformation of $F : \B(X,Y) \to \C(FX,FY)$.
\item[Vertical Composition] Suppose $\alpha : F \to G$ and $\beta : G \to H$ are icons. The \emph{vertical composite}\index{vertical composition!icon} $\beta\alpha : F \to H$ is defined by the vertical composite of the natural transformations $\alpha$ and $\beta$ in the pasting diagram
\[\begin{tikzpicture}[xscale=3, yscale=2]
\node (F1) at (0,0) {$\B(X,Y)$}; \node (F2) at (1,0) {$\C(FX,FY).$};
\node[font=\Large] at (.5,.2){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.6,.2) {$\alpha$};
\node[font=\Large] at (.5,-.2){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.6,-.2) {$\beta$};
\draw[arrow, bend left=70] (F1) to node{\small{$F$}} (F2);
\draw[arrow] (F1) to node[near start]{\scalebox{.8}{$G$}} (F2);
\draw[arrow, bend right=70] (F1) to node[swap]{\small{$H$}} (F2);
\end{tikzpicture}\]
\item[Horizontal Composition] Suppose $J,K : \C\to\D$ are lax functors between bicategories such that $JZ=KZ$ for each object $Z$ in $\C$, and $\gamma : J \to K$ is an icon. The \emph{horizontal composite}\index{horizontal composition!icon} $\gamma * \alpha : JF \to KG$ is defined by the horizontal composite of the natural transformations $\alpha$ and $\gamma$ in the pasting diagram
\[\begin{tikzpicture}[xscale=3, yscale=2]
\node (F1) at (0,0) {$\B(X,Y)$}; \node (F2) at (1,0) {$\C(FX,FY)$};
\node (F3) at (2,0) {$\D(JFX,JFY).$};
\node[font=\Large] at (.4,0){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (.5,0) {$\alpha$};
\node[font=\Large] at (1.4,0){\rotatebox{-90}{$\Rightarrow$}};
\node[font=\small] at (1.5,0) {$\gamma$};
\draw[arrow, bend left] (F1.60) to node{\small{$F$}} (F2.120);
\draw[arrow, bend right] (F1.300) to node[swap]{\small{$G$}} (F2.240);
\draw[arrow, bend left] (F2.60) to node{\small{$J$}} (F3.120);
\draw[arrow, bend right] (F2.300) to node[swap]{\small{$K$}} (F3.240);
\end{tikzpicture}\defmark\]
\end{description}
\end{definition}
\begin{explanation}\label{expl:icon-composition}
In \Cref{def:icon-composition}, for each $1$-cell $f \in \B(X,Y)$:
\begin{enumerate}
\item The identity icon $1_F$ sends $f$ to the identity $2$-cell $1_{Ff} \in \C(FX,FY)$.
\item The vertical composite $\beta\alpha$ assigns to $f$ the composite $2$-cell
\[\begin{tikzcd}
Ff \ar{r}{\alpha_f} \ar[bend left, start anchor={[xshift=-.3cm]},
end anchor={[xshift=.3cm]}]{rr}{(\beta\alpha)_f} & Gf \ar{r}{\beta_f} & Hf
\end{tikzcd}\]
in $\C(FX,FY)$.
\item The horizontal composite $\gamma *\alpha$ assigns to $f$ either composite $2$-cell in the commutative diagram
\[\begin{tikzcd}
JFf \ar{r}{J\alpha_f} \ar{d}[swap]{\gamma_{Ff}} & JGf \ar{d}{\gamma_{Gf}}\\
KFf \ar{r}{K\alpha_f} & KGf\end{tikzcd}\]
in $\D(JFX,JFY)$.\dqed
\end{enumerate}
\end{explanation}
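As an illustration of the verification requested in \Cref{exer:icon-id-comp}, assuming the unit constraint of the composite lax functor in \Cref{def:lax-functors-composition} is $(JF)^0_X = J(F^0_X)\,J^0_{FX}$, the icon unity axiom for the horizontal composite $\gamma * \alpha$ follows from the computation
\[\begin{split}
(\gamma * \alpha)_{1_X}\,(JF)^0_X &= \gamma_{G1_X}\,J(\alpha_{1_X})\,J(F^0_X)\,J^0_{FX}\\
&= \gamma_{G1_X}\,J\big(\alpha_{1_X} F^0_X\big)\,J^0_{FX}\\
&= \gamma_{G1_X}\,J(G^0_X)\,J^0_{FX}\\
&= K(G^0_X)\,\gamma_{1_{GX}}\,J^0_{FX}\\
&= K(G^0_X)\,K^0_{GX} = (KG)^0_X,
\end{split}\]
using, in order, the functoriality of $J$ on hom-categories, icon unity \eqref{icon-unity} for $\alpha$, the naturality \eqref{icon-2cell-naturality} of $\gamma$, and icon unity for $\gamma$ together with the equality $FX = GX$.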
Next is the main result of this section.
\begin{theorem}\label{thm:iicat-of-bicat}\index{icon!2-category}\index{2-category!of bicategories, lax functors, and icons}
There is a $2$-category $\Bicatic$ defined by the following data.
\begin{itemize}
\item Its objects are small bicategories.
\item Its $1$-cells are lax functors, with horizontal composite as in \Cref{def:lax-functors-composition} and identity $1$-cells the identity strict functors in \Cref{ex:identity-strict-functor}.
\item Its $2$-cells are icons, with identity $2$-cells, horizontal composition, and vertical composition as in \Cref{def:icon-composition}.
\end{itemize}
Furthermore, $\Bicatic$ contains the following sub-$2$-categories with the same objects and with icons as $2$-cells:
\begin{enumerate}[label=(\roman*)]
\item $\Bicatuic$ with unitary lax functors as $1$-cells.
\item $\Bicatsuic$ with strictly unitary lax functors as $1$-cells.
\item $\Bicatpsic$ \label{notation:bicatpsic}with pseudofunctors as $1$-cells.
\item $\Bicatsupic$ \label{notation:bicatsupic}with strictly unitary pseudofunctors as $1$-cells.
\item $\Bicatstic$ with strict functors as $1$-cells.
\end{enumerate}
These $2$-categories are related by the following $2$-functors that are identity on objects and inclusions on $1$-cells and $2$-cells.
\[\begin{tikzpicture}[xscale=2.5, yscale=2]
\node (st) at (0,0) {$\Bicatstic$}; \node (sup) at (1,0) {$\Bicatsupic$};
\node (su) at (1.5,.5) {$\Bicatsuic$}; \node (ps) at (1.5,-.5) {$\Bicatpsic$};
\node (u) at (2,0) {$\Bicatuic$}; \node (l) at (3,0) {$\Bicatic$};
\draw[arrow] (st) to (sup); \draw[arrow] (sup) to (su); \draw[arrow] (sup) to (ps);
\draw[arrow] (su) to (u); \draw[arrow] (ps) to (u); \draw[arrow] (u) to (l);
\end{tikzpicture}\]
\end{theorem}
\begin{proof}
In \Cref{exer:icon-id-comp} we ask the reader to check that the identity icon and the vertical and horizontal composites are well-defined icons. The assumption of small bicategories ensures that, for each pair of lax functors $F,G : \B\to\B'$, there is only a set of icons from $F$ to $G$. To check that $\Bicatic$ is a $2$-category, we use the criteria in \Cref{2category-explicit}. In \Cref{thm:cat-of-bicat} we observed that $\Bicat$ is a category, so \eqref{2cat-associator-id} and \eqref{2cat-unitor-id-1cell} hold. The bicategory axioms \eqref{hom-category-axioms}, \eqref{bicat-c-id}, and \eqref{middle-four}, and the horizontal associativity of $2$-cells \eqref{2cat-associator-id-2cell} hold because natural transformations satisfy these properties. Finally, the identity $2$-cell $1_{1_{\B}}$ of the identity $1$-cell $1_{\B}$ of a small bicategory $\B$ sends each $1$-cell $f \in \B(X,Y)$ to its identity $2$-cell $1_f$. Since the identity strict functor $1_{\B}$ is defined by the identity function on objects and identity functors, the horizontal unity of $2$-cells \eqref{2cat-unitor-id-2cell} follows.
The proofs for the existence of the sub-$2$-categories $\Bicatuic$, $\Bicatsuic$, $\Bicatpsic$, $\Bicatsupic$, and $\Bicatstic$ are similar, using the corresponding categories in \Cref{thm:cat-of-bicat}.
\end{proof}
By \Cref{exer:moncat}, there is a $2$-category $\MonCat$ of small monoidal categories, monoidal functors, and monoidal natural transformations. There are similar $2$-categories $\StgMonCat$ and $\SttMonCat$ with strong monoidal functors and strict monoidal functors, respectively, as $1$-cells.
\begin{corollary}\label{moncat-bicaticon}
The $2$-category $\Bicatic$ contains a sub-$2$-category that can be identified with $\MonCat$. The analogous statements are also true with $\big(\Bicatic,\MonCat\big)$ replaced by:
\begin{enumerate}
\item $\big(\Bicatpsic,\StgMonCat\big)$.
\item $\big(\Bicatstic,\SttMonCat\big)$.
\end{enumerate}
\end{corollary}
\begin{proof}
We use:
\begin{itemize}
\item \Cref{ex:moncat-bicat} to identify small monoidal categories with small one-object bicategories;
\item \Cref{ex:monfunctor-laxfunctor} to identify (strong, respectively strict) monoidal functors with (pseudo, respectively strict) lax functors;
\item \Cref{ex:mnt-icon} to identify monoidal natural transformations with icons.
\end{itemize}
Via the above identifications, the $2$-categorical structures in the monoidal cases agree with the bicategorical cases.
\end{proof}
\section{Exercises and Notes}\label{sec:functors-exercises}
\begin{exercise}\label{exer:catc-catd} Suppose $(F,F_2,F_0) : \C\to\D$ is a monoidal functor. Show that $F$ induces a $2$-functor $F_* : \Cat_{\C}\to\Cat_{\D}$ from the $2$-category $\Cat_{\C}$ of small $\C$-categories, $\C$-functors, and $\C$-natural transformations in \Cref{ex:2cat-of-enriched-cat} to the $2$-category $\Cat_{\D}$, defined as follows:
\begin{itemize}
\item $F_*$ preserves the object sets.
\item $F_*$ is $F$ on morphism $\C$-objects.
\item On composition, $F_*$ is induced by $F_2$ and the composition in a $\C$-category.
\item On identities, $F_*$ is induced by $F_0$ and the identities in a $\C$-category.
\item On a $\C$-functor, $F_*$ is the identity on objects and $F$ on $\C$-morphisms between morphism $\C$-objects.
\item On a $\C$-natural transformation $\theta$, $F_*$ is induced by $F_0$ and the components of $\theta$.
\end{itemize}
\end{exercise}
\begin{exercise}\label{exer:catv-cat}
Suppose $\V$ is a monoidal category with monoidal unit $\tensorunit$.
\begin{enumerate}[label=(\roman*)]
\item Show that there is a monoidal functor $\V(\tensorunit,-) : \V\to\Set$.
\item Show that there is a $2$-functor $\Cat_{\V} \to \Cat$ given by:
\begin{itemize}
\item the identity assignment on objects;
\item the assignment $\V(\tensorunit,-)$ on morphism categories.
\end{itemize}
\end{enumerate}
\end{exercise}
\begin{exercise}
In \Cref{ex:cat-multicat-polycat}, show that each of the two $2$-functors is the left adjoint of a $\Cat$-adjunction, and describe the right adjoint explicitly.
\end{exercise}
\begin{exercise} In \Cref{spans-functor}, check the naturality of $F^2_*$ and the commutativity of the diagrams \eqref{f2-bicat} and \eqref{f0-bicat}.
\end{exercise}
\begin{exercise}\index{functor!induced pseudofunctor in cospans}
A \emph{cospan}\index{cospan}\index{span!co-} in a category $\C$ is a diagram of the form $\begin{tikzcd}[column sep=small] A \ar{r} & X & B. \ar{l}\end{tikzcd}$ Formulate and prove the cospan version of \Cref{spans-functor}.
\end{exercise}
\begin{exercise}
In \Cref{colax-functor-explicit}, express the naturality of $F^2$, the lax associativity axiom \eqref{colax-f2}, and the lax unity axioms \eqref{colax-f0} in terms of pasting diagrams.
\end{exercise}
\begin{exercise}\label{exercise:rep-corep-assoc}
Suppose that $f\cn X \to Y$ and $g\cn Z \to W$ are $1$-cells in a bicategory $\B$. Show that the associator in $\B$ defines a natural isomorphism between the two functors
\[\B(W,X) \to \B(Z,Y)\]
given by $g^*f_*$ and $f_*g^*$.
\end{exercise}
\begin{exercise}
In \Cref{lax-functors-compose}, check the lax right unity diagram for $GF$.
\end{exercise}
\begin{exercise}
Check the details of \Cref{ex:span-induced-functors-compose}.
\end{exercise}
\begin{exercise}\label{exer:rep-tr}
In the proof of \Cref{representable-transformation}, check the lax unity axiom and the lax naturality axiom for $f_*$.
\end{exercise}
\begin{exercise}\label{exer:lax-tr-compose}
In the proof of \Cref{lax-tr-compose}, check the lax naturality axiom for $\beta\alpha$. [Hint: Since every component $2$-cell of $\beta\alpha$ is a vertical composite of five $2$-cells, the lax naturality diagram \eqref{2-cell-transformation} involves twenty $2$-cells. This diagram factors into $18$ sub-diagrams, which are commutative by the pentagon axiom \eqref{bicat-pentagon} six times, the naturality of $a^{\pm 1}$ nine times, the middle four exchange \eqref{middle-four}, and the lax naturality of $\alpha$ and $\beta$.]
\end{exercise}
\begin{exercise}\label{exer:mod-hcomp}
In the proof of \Cref{modification-comp-id}, check that the vertical composite $\Sigma\Gamma$ is a modification. Also, write down the proof for the horizontal composite as a commutative diagram using \Cref{expl:lax-tr-comp}.
\end{exercise}
\begin{exercise}\label{exer:bicat-of-functors}
Depending on whether we use (i) lax, pseudo, or strict functors, and (ii) lax, strong, or strict transformations, there are nine versions of $\Bicat(\B,\B')$, with (lax, lax) the original version in \Cref{thm:bicat-of-lax-functors} and (pseudo, strong) the version in \Cref{subbicat-pseudofunctor}. Among the other seven versions, check that:
\begin{enumerate}[label=(\roman*)]
\item The four versions (pseudo or strict, lax) and (lax or strict, strong) are bicategories.
\item The other three versions (-, strict) are generally not bicategories.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exer:2cat-of-2cat}\index{2-category!of 2-categories, 2-functors, and 2-natural transformations}
Show that there is a $2$-category $\iiCat$ defined by the following data.
\begin{itemize}
\item Its objects are small $2$-categories.
\item Its $1$-cells are $2$-functors, with identity $1$-cells as in \Cref{ex:identity-strict-functor} and horizontal composition as in \Cref{def:lax-functors-composition}.
\item Its $2$-cells are $2$-natural transformations, with identity $2$-cells as in \Cref{id-lax-transformation} and vertical composition as in \Cref{def:lax-tr-comp}.
\end{itemize}
\end{exercise}
\begin{exercise}\label{exer:icon-is-icon}
In the proof of \Cref{icon-is-icon}:
\begin{enumerate}[label=(\roman*)]
\item In the first half, check the oplax unity axiom \eqref{unity-oplax} for $\alpha'$.
\item In the second half, check that $\alpha : F \to G$ is an icon.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exer:icon-id-comp}
In \Cref{def:icon-composition}, check that the identity icon and the vertical and horizontal composites are well-defined icons.
\end{exercise}
\subsection*{Notes}
\begin{note}[Lax Functors]
\Cref{thm:cat-of-bicat} is due to B\'enabou \cite{benabou}, who used the terms \emph{morphisms} and \index{homomorphism}\emph{homomorphisms} for what we call lax functors and pseudofunctors. The name \emph{lax functor} probably goes back to Street \cite{street_lax}. We follow B\'enabou in the usage of \emph{unitary} and \emph{strictly unitary}. What we call a \emph{strictly unitary lax functor} is sometimes called a \emph{normal lax functor}\index{lax functor!normal} in the literature.
\end{note}
\begin{note}[Lax and Oplax Transformations]
The concept of a lax transformation is also due to B\'enabou \cite[Section 8]{benabou}, who defined it in terms of a cylinder construction. In fact, it was in the process of proving properties of those cylinders that B\'enabou introduced pastings, which we discussed in detail in \Cref{ch:pasting-string}, to simplify and clarify diagrams in bicategories.
The terminology regarding \emph{lax} and \emph{oplax} transformations is not consistent in the literature. For example, the authors of \cite{gps,gurski-coherence,elephant} use the term \emph{lax transformation} for what we call an oplax transformation. Our terminology agrees with that in \cite{benabou,kelly-clubs,leinster-bicat,lack-icons,street_fibrations,street_fibrations-correction}. In \cite{street_lax}, lax and oplax transformations are called \emph{left lax transformations} and \emph{right lax transformations}, respectively.
\end{note}
\begin{note}[Compositions in $\Bicatps$]
We saw in \Cref{subbicat-pseudofunctor} that $\Bicatps(\A,\B)$ is a bicategory with pseudofunctors as objects, strong transformations as $1$-cells, and modifications as $2$-cells. There are in fact compositions
\[\begin{tikzcd}
\Bicatps(\B,\C)\times\Bicatps(\A,\B) \ar{r}{\tensor} & \Bicatps(\A,\C)\end{tikzcd}\]
for bicategories $\A,\B$, and $\C$, such that there is a \emph{tricategory} with bicategories as objects, the bicategories $\Bicatps(\A,\B)$ as hom bicategories, and $\tensor$ as the composition. We will discuss tricategories in \Cref{ch:tricat-of-bicat}.
\end{note}
\begin{note}[Icons]
Icons and the results in \Cref{sec:icons} are due to Lack \cite{lack-icons}. Lack uses the term ``oplax natural transformation'' for what we have simply called ``oplax transformation'', and the \emph{n} in icon was taken to stand for ``natural''. We will use the concept of icons in \Cref{sec:iinerve} when we discuss $2$-nerves of bicategories.
\end{note}
\begin{note}
\Cref{exer:catc-catd,exer:catv-cat} about enriched categories are from \cite[Section 6.4]{borceux2}.
\end{note}
\endinput
\chapter{Grothendieck Construction}\label{ch:grothendieck}
Throughout this chapter $\C$ denotes a small category, also regarded as a locally discrete $2$-category. The main subject of this chapter is the Grothendieck construction that associates to each pseudofunctor $F : \Cop\to\Cat$ a cloven fibration $\Usubf : \intf \to \C$. The main observation is \Cref{thm:grothendieck-iiequivalence}, which says that the Grothendieck construction defines a $2$-equivalence of $2$-categories
\[\begin{tikzcd}
\Bicatpscopcat \ar{r}{\int} & \fibofc
\end{tikzcd}\]
with $\Bicatpscopcat$ and $\fibofc$ defined in \Cref{subbicat-pseudofunctor,iicat-fibrations}, respectively. In particular, at the object level, this means that every fibration over $\C$ is isomorphic to one of the form $\Usubf : \intf \to \C$ for some pseudofunctor $F$.
In \Cref{sec:grothendieck} the Grothendieck construction of a pseudofunctor $\Cop\to\Cat$ is defined. It is shown that the category $\intf$ is equipped with a cloven fibration over $\C$, which is split if and only if the pseudofunctor $F$ is a strict functor.
In \Cref{sec:grothendieck-laxcolim} it is proved that the Grothendieck construction $\intf$ of a $\C$-indexed category $F$ is a lax colimit of $F$. This result is independent of the fact that the Grothendieck construction is a $2$-equivalence of $2$-categories.
The rest of the proof that the Grothendieck construction is a $2$-equivalence is divided into several steps. In \Cref{sec:grothendieck-iifunctor} we define the Grothendieck construction on strong transformations and modifications, and show that it is a $2$-functor. By the Whitehead \Cref{theorem:whitehead-2-cat} for $2$-categories, to establish the desired $2$-equivalence, we will show that the Grothendieck construction is $1$-essentially surjective on objects in \Cref{sec:fibration-indexed-cat}, $1$-fully faithful on $1$-cells in \Cref{sec:grothendieck-ifully-faithful}, and fully faithful on $2$-cells in \Cref{sec:grothendieck-iiequivalence}.
\Cref{sec:grothendieck-bicat} contains a brief discussion of another Grothendieck construction for $\C$-indexed \emph{bicategories}.
\section{From Pseudofunctors to Fibrations}
\label{sec:grothendieck}
In this section we define the Grothendieck construction of a pseudofunctor $\Cop\to\Cat$, and observe that it yields a cloven fibration. Moreover, this cloven fibration is a split fibration if and only if the given pseudofunctor is a strict functor.
\begin{motivation}\label{mot:icat-grothendieck}
Suppose for the moment that $\Cat$ is the $1$-category of small categories and functors. For a functor $F : \Cop \to \Cat$, the \index{Grothendieck construction!for a functor}Grothendieck construction $\int_{\C} F$ is a category that glues the categories $FA$, for $A\in\C$, together using the parametrizing functor $F$.
\begin{itemize}
\item An object in $\int_{\C} F$ is a pair $(A \in \C, X\in FA)$.
\item A morphism $(f,p) : (A,X)\to(B,Y)$ consists of morphisms
\[f : A \to B\in\C \andspace p : X \to (Ff)(Y)\in FA.\]
\end{itemize}
Identities and composition are the obvious ones. A natural way to extend the Grothendieck construction is to take advantage of the fact, discussed in \Cref{ex:2cat-of-cat}, that $\Cat$ is a $2$-category with natural transformations as $2$-cells. We can now allow $F$ to be a lax functor.\dqed
\end{motivation}
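For example, when $F : \Cop \to \Cat$ takes values in discrete categories, so that $F$ is in effect a presheaf of sets on $\C$, a morphism $p : X \to (Ff)(Y)$ in the discrete category $FA$ exists precisely when $X = (Ff)(Y)$. In this case $\int_{\C} F$ is the familiar category of elements of the presheaf $F$: its objects are the pairs $(A \in \C, X \in FA)$, and its morphisms $(A,X) \to (B,Y)$ are the morphisms $f : A \to B$ in $\C$ with $(Ff)(Y) = X$.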
In the following definition, $\Cat$ is a $2$-category. Recall from \Cref{def:lax-functors} the concept of a lax functor.
\begin{definition}\label{def:grothendieck-cat}
A \emph{$\C$-indexed category}\index{indexed!category}\index{category!indexed} is a lax functor\index{lax functor!as an indexed category}\index{lax functor!Grothendieck construction for}
\[(F,F^2,F^0) : \Cop \to \Cat.\]
Given a $\C$-indexed category $F$, its \emph{Grothendieck construction}\index{Grothendieck construction!for a lax functor} $\intf$ is the category defined as follows.
\begin{description}
\item[Objects] An object is a pair $(A,X)$ with $A$ an object in $\C$ and $X$ an object in $FA$.
\item[Morphisms] A morphism
\begin{equation}\label{fp-ax-by}
(f,p) : (A,X) \to (B,Y) \in \intf
\end{equation}
consists of
\begin{itemize}
\item a morphism $f : A \to B$ in $\C$, and
\item a morphism $p : X \to \tothe{f}{F}Y$ in $FA$, where $\tothe{f}{F}=Ff : FB \to FA$.
\end{itemize}
\item[Identities] The identity morphism of an object $(A,X)$ consists of
\begin{itemize}
\item the identity morphism $1_A : A \to A$ in $\C$, and
\item the morphism
\begin{equation}\label{fzeroax}
(F^0_A)_X : X \to \tothe{1_A}{F} X \inspace FA,
\end{equation}
where \[F^0_A : 1_{FA} \to F1_A = \tothe{1_A}{F}\] is the $A$-component natural transformation of $F^0$.
\end{itemize}
\item[Composition] Suppose given composable morphisms
\begin{equation}\label{fpgq}
\begin{tikzcd}
(A,X) \ar{r}{(f,p)} & (B,Y) \ar{r}{(g,q)} & (C,Z) \in \intf,\end{tikzcd}
\end{equation}
with
\begin{itemize}
\item $g : B \to C$ a morphism in $\C$, and
\item $q : Y \to \tothe{g}{F}Z$ a morphism in $FB$, where $\tothe{g}{F} = Fg : FC \to FB$.
\end{itemize}
The composite \[(g,q)(f,p) : (A,X) \to (C,Z)\] is defined by
\begin{itemize}
\item the composite $gf : A \to C$ in $\C$, and
\item the composite
\begin{equation}\label{intf-composite}
\begin{tikzcd}[column sep=large]
X \ar{r}{p} & \tothe{f}{F}Y \ar{r}{\tothe{f}{F}q} & \tothe{f}{F}\tothe{g}{F}Z \ar{r}{(F^2_{f,g})_Z} & \tothe{gf}{F}Z
\end{tikzcd}
\end{equation}
in $FA$, where $(F^2_{f,g})_Z$ is the $Z$-component of the natural transformation $F^2_{f,g}$.
\end{itemize}
\end{description}
This finishes the definition of $\intf$. We show that $\intf$ is
a category in \Cref{grothendieck-cat} below.
\end{definition}
\begin{explanation}\label{expl:grothendieck-construction}
A picture for the Grothendieck construction $\intf$ is as follows.
\begin{center}\begin{tikzpicture}[xscale=1, yscale=.7]
\node (c) at (4,0) {};
\draw[thick] (c) circle (2 and .8);
\node at (1.5,0) {$\C$};
\node (a) at ($(c)+(-1.5,0)$) {$A$};
\node (b) at ($(c)+(1.5,0)$) {$B$};
\draw [->] (a) to node{\small{$f$}} (b);
\node (fa) at ($(a)+(0,2.6)$) {};
\draw[thick] (fa) circle (1 and 1.5);
\node (FA) at ($(fa)+(0,2)$) {$FA$};
\node (x) at ($(fa)+(0,.8)$) {$X$};
\node (fy) at ($(fa)+(0,-.8)$) {$\ftof Y$};
\draw [->] (x) to node[swap]{\small{$p$}} (fy);
\node (fb) at ($(fa)+(3,0)$) {$Y$};
\draw[thick] (fb) circle (1 and 1.5);
\node (FB) at ($(fb)+(0,2)$) {$FB$};
\draw [->] (FB) to node[swap]{\small{$\ftof$}} (FA);
\draw [lightgray, |->, line width=1pt, bend left=10] (fb) to (fy);
\end{tikzpicture}
\end{center}
The identity morphism of an object $(A,X)$ involves the lax unity constraint $(F^0_A)_X$, and composition involves the lax functoriality constraint $(F^2_{f,g})_Z$. These are not invertible in general. Despite such laxity, we now observe that the Grothendieck construction is actually a $1$-category.\dqed
\end{explanation}
\begin{lemma}\label{grothendieck-cat}
For each $\C$-indexed category $F : \Cop \to \Cat$, the Grothendieck construction $\intf$ is a category.
\end{lemma}
\begin{proof}
We need to check the unity and associativity axioms of a category. For the unity axiom, suppose $(f,p) : (A,X) \to (B,Y)$ is a morphism in $\intf$, and $\big(1_A, (F^0_A)_X\big)$ is the identity morphism of $(A,X)$. The composite
\[\begin{tikzcd}[column sep=huge]
(A,X) \ar{r}{(1_A,(F^0_A)_X)} & (A,X) \ar{r}{(f,p)} & (B,Y)\end{tikzcd}\]
has
\begin{itemize}
\item first component the morphism $f1_A = f : A \to A$ in $\C$;
\item second component the long composite along the boundary of the
following diagram in $FA$.
\[\begin{tikzcd}[column sep=large, row sep=large]
\tothe{1_A}{F}X \ar{rr}{\tothe{1_A}{F}p} && \tothe{1_A}{F}\tothe{f}{F}Y \ar{d}{(F^2_{1_A,f})_Y}\\
X \ar{u}{(F^0_A)_X} \ar{r}{p} & \tothe{f}{F}Y \ar{ur}[sloped, anchor=center, above]{(F^0_A*1_{\tothe{f}{F}})_Y} \ar[equal]{r}& \tothe{f}{F}Y
\end{tikzcd}\]
\end{itemize}
We need to show that this long composite is equal to $p$. In the above diagram, the left trapezoid is commutative by the naturality of the natural transformation $F^0_A$ because
\[(F^0_A*1_{\tothe{f}{F}})_Y = (F^0_A)_{\tothe{f}{F}Y}.\]
The right triangle is commutative by the lax left unity axiom \eqref{f0-bicat}. This proves half of the unity axiom. The other half is proved similarly.
For the associativity axiom, suppose given three composable morphisms
\[\begin{tikzcd}
(A,W) \ar{r}{(f,p)} & (B,X) \ar{r}{(g,q)} & (C,Y) \ar{r}{(h,r)} & (D,Z)\end{tikzcd}\] in $\sint{\C}F$. In both composites
\[\big((h,r)(g,q)\big)(f,p) \andspace (h,r)\big((g,q)(f,p)\big),\]
the first component is the composite $hgf : A \to D$ in $\C$. Their second components are the two composites along the boundary of the following diagram in $FA$.
\[\begin{tikzcd}[column sep=large]
W \ar{r}{p} \ar{d}[swap]{p} & \tothe{f}{F}X \ar{r}{\tothe{f}{F}q} & \tothe{f}{F}\tothe{g}{F}Y \ar{r}{(F^2_{f,g})_Y} & \tothe{gf}{F}Y \ar{d}{\tothe{gf}{F}r}\\
\tothe{f}{F}X \ar{d}[swap]{\tothe{f}{F}q} &&& \tothe{gf}{F}\tothe{h}{F}Z \ar{d}{(F^2_{gf,h})_Z}\\
\tothe{f}{F}\tothe{g}{F}Y \ar[bend left=10]{uurr}{1} \ar{r}{\tothe{f}{F}\tothe{g}{F}r} & \tothe{f}{F}\tothe{g}{F}\tothe{h}{F}Z \ar[bend left=20]{urr}[near end, inner sep=2pt]{(F^2_{f,g})_{\tothe{h}{F}Z}} \ar{r}{\tothe{f}{F}(F^2_{g,h})_Z} & \tothe{f}{F}\tothe{hg}{F}Z \ar{r}{(F^2_{f,hg})_Z} & \tothe{hgf}{F}Z
\end{tikzcd}\]
In the above diagram from left to right, the first sub-diagram is commutative by definition. The second sub-diagram is commutative by the naturality of the natural transformation $F^2_{f,g}$. The third sub-diagram is commutative by the lax associativity axiom \eqref{f2-bicat}.
\end{proof}
Recall from \Cref{def:fibration} that a fibration over $\C$ is a functor $P : \E\to\C$ in which every pre-lift has a Cartesian lift. In the rest of this section, we observe that the Grothendieck construction yields a fibration over $\C$ whenever $\Ftwo$ is invertible. First we define the functor to $\C$.
\begin{definition}\label{def:grothendieck-over-c}
For a $\C$-indexed category $F : \Cop \to\Cat$, denote by
\[\begin{tikzcd}
\intf \ar{r}{\Usubf} & \C\end{tikzcd}\]
the functor that sends:
\begin{itemize}
\item an object $(A,X) \in \intf$ to the object $A\in\C$;
\item a morphism $(f,p) \in \intf$ to the morphism $f\in\C$.
\end{itemize}
In other words, $\Usubf$ is the first-factor projection.
Note that $\Usubf$ is a well-defined functor because the $\C$-components of the composition and identity morphisms in $\intf$ are defined in $\C$.
\end{definition}
\begin{proposition}\label{grothendieck-is-fibration}
Suppose $F : \Cop \to\Cat$ is a $\C$-indexed category with $\Ftwo$ invertible.
\begin{enumerate}
\item\label{fone-cartesian}
For each morphism $f : A \to B$ in $\C$ and each object $Y\in FB$, the morphism
\[\begin{tikzcd}[column sep=large]
(A,\ftof Y) \ar{r}{(f,1_{\ftof Y})} & (B,Y) \in \intf\end{tikzcd}\]
is a Cartesian morphism with respect to $\Usubf$.
\item\label{gro-is-fibration}
The functor
\[\begin{tikzcd}
\intf \ar{r}{\Usubf} & \C\end{tikzcd}\] is a fibration over $\C$.\index{Grothendieck construction!yields a fibration}
\end{enumerate}
\end{proposition}
\begin{proof}
For the first assertion, consider a pre-raise
\[\preraise{(f,1_{\ftof Y})}{(h,q)}{g}\]
with respect to $\Usubf$, as pictured below.
\[\begin{tikzpicture}[xscale=3, yscale=1.4]
\draw[0cell]
(0,0) node (a) {(A,\ftof Y)}
($(a)+(1,0)$) node (b) {(B,Y)}
($(a)+(.5,1)$) node (c) {(C,Z)}
($(b)+(.3,.5)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.3,-.5)$) node (a2) {A}
($(a2)+(1,0)$) node (b2) {B}
($(a2)+(.5,1)$) node (c2) {C}
;
\draw[1cell]
(a) edge node {(f,1_{\ftof Y})} (b)
(c) edge[dashed] node[swap] {(g,\exists !\, p?)} (a)
(c) edge node {(h,q)} (b)
(s) edge[|->] node{\Usubf} (t)
(a2) edge node {f} (b2)
(c2) edge node[swap] {g} (a2)
(c2) edge node {h} (b2)
;
\end{tikzpicture}\]
We must show that it has a unique raise. Since $\Usubf$ projects onto the first factor, the first component of a raise must be the morphism $g\in\C$. For any morphism
\[\begin{tikzcd}
(C,Z) \ar{r}{(g,p)} & (A,\ftof Y) \in \intf,
\end{tikzcd}\]
the composite with $(f,1_{\ftof Y})$ has second component the long composite in the diagram
\[\begin{tikzcd}[column sep=large]
Z \ar{d}[swap]{p} \ar{r}{q} & \htof Y = \fgtof Y\\
\gtof \ftof Y \ar{r}{\gtof 1_{\ftof Y}}[swap]{=} & \gtof \ftof Y \ar{u}{\iso}[swap]{(\Ftwosub{g,f})_Y}
\end{tikzcd}\]
in $FC$. Therefore, the given pre-raise has a unique raise $(g,p)$ with $p$ the composite
\[p = (\Ftwosub{g,f})_Y^{-1} \circ q\]
in $FC$, proving that $(f,1_{\ftof Y})$ is a Cartesian morphism.
For the second assertion, suppose given a pre-lift $\preliftbyf$ with respect to $\Usubf$ consisting of:
\begin{itemize}
\item an object $(B\in\C, Y\in FB)$ in $\intf$;
\item a morphism $f : A \to B$ in $\C$.
\end{itemize}
We must show that it has a Cartesian lift. The morphism $(f,1_{\ftof Y})$ satisfies \[\Usubf(f,1_{\ftof Y})=f,\] so it is a lift of the pre-lift $\preliftbyf$. By the first assertion, it is a Cartesian lift.
\end{proof}
For a $\C$-indexed category $F : \Cop\to\Cat$ with $\Ftwo$ invertible, as in \Cref{grothendieck-is-fibration}, we regard $\Usubf : \intf \to \C$ as a cloven fibration such that each pre-lift $\preliftbyf$ has chosen Cartesian lift $(f,\oneftofy)$. Recall from \Cref{def:fibration} that a \emph{split} fibration is a cloven fibration that is both unitary and multiplicative. Also recall from \Cref{def:lax-functors} that a \emph{strict} functor is a lax functor whose laxity constraints are identities. The following observation says that, under the Grothendieck construction, strict functors $\Cop\to\Cat$ correspond to split fibrations over $\C$.
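A standard example, assuming that $\C$ has chosen pullbacks: the assignment $A \mapsto \C/A$, with $\ftof : \C/B \to \C/A$ given by the chosen pullback along $f : A \to B$, defines a pseudofunctor $F : \Cop \to \Cat$ whose $\Ftwo$ is invertible but not the identity in general, since pullbacks are only determined up to isomorphism. The Grothendieck construction $\intf$ is equivalent to the arrow category of $\C$, with $\Usubf$ corresponding to the codomain functor, and the resulting cloven fibration is in general not split.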
\begin{proposition}\label{strict-functor-split-fib}
For each pseudofunctor $F : \Cop\to\Cat$, the following two statements are equivalent.
\begin{enumerate}
\item $F$ is a strict functor.
\item $\Usubf : \intf \to \C$ is a split fibration.
\end{enumerate}
\end{proposition}
\begin{proof}
First suppose $F$ is a strict functor, so $F$ strictly preserves identity morphisms and composites, with $\Fzero$ and $\Ftwo$ identities. The chosen Cartesian lift of a pre-lift $\preliftbyone$ is
\[(\oneb,1_{\onebtof Y}) = (\oneb, 1_Y),\]
which is the identity morphism of $(B,Y)$ by \eqref{fzeroax}. This shows that the cloven fibration $\Usubf$ is unitary.
To see that $\Usubf$ is multiplicative, consider an object $(C,Z) \in \intf$ and composable morphisms $f : A\to B$ and $g : B \to C$ in $\C$, as displayed below.
\begin{equation}\label{usubf-multiplicative}
\begin{tikzpicture}[xscale=3.5, yscale=1.4, baseline={(u.base)}]
\draw[0cell]
(0,0) node (a) {A}
($(a)+(1,0)$) node (b) {B}
($(b)+(1,0)$) node (c) {C \in \C}
($(a)+(0,-1)$) node (x) {(A,\ftof\gtof Z)}
($(b)+(0,-1)$) node (y) {(B,\gtof Z)}
($(c)+(0,-1)$) node (z) {(C,Z)\in\intf}
;
\draw[1cell]
(a) edge node {f} (b)
(b) edge node {g} (c)
(x) edge node {(f,1_{\ftof\gtof Z})} (y)
(y) edge node{(g,1_{\gtof Z})} (z)
(z) edge[|->] node (u) {\Usubf} (c)
;
\end{tikzpicture}
\end{equation}
The two morphisms in the top row in \eqref{usubf-multiplicative} are the chosen Cartesian lifts of the pre-lifts $\prelift{(B,\gtof Z)}{f}$ and $\prelift{(C,Z)}{g}$. Their composite is equal to the chosen Cartesian lift $(gf,1_{\gftof Z})$ of the pre-lift $\prelift{(C,Z)}{gf}$ because, by \eqref{intf-composite}, its second component is the composite
\begin{equation}\label{usubf-multiplicative-2}
\begin{tikzcd}[column sep=large]
\ftof\gtof Z \ar{r}{1_{\ftof\gtof Z}} & \ftof\gtof Z \ar{r}{\ftof 1_{\gtof Z}} & \ftof\gtof Z \ar{r}{(\Ftwosub{f,g})_Z} & \gftof Z
\end{tikzcd}
\end{equation}
in $FA$ of three identity morphisms. This shows that the cloven fibration $\Usubf$ is also multiplicative. Therefore, $\Usubf$ is a split fibration.
Conversely, suppose $\Usubf$ is a split fibration. To show that $\Fzero$ is the identity, suppose $A\in\C$ and $X\in FA$ are objects. The pre-lift $\preliftaxone$ has chosen Cartesian lift
\[\big(\onea,1_{\oneatof X}\big) = 1_{(A,X)} = \big(\onea,(\Fzeroa)_X\big)\]
by the unitarity of the split fibration $\Usubf$ and \eqref{fzeroax}. So $\Fzeroa$ is the identity natural transformation.
To show that $\Ftwo$ is the identity, consider morphisms $f : A\to B$ and $g : B \to C$ in $\C$, an object $Z\in FC$, and the chosen Cartesian lifts in \eqref{usubf-multiplicative}. By \eqref{usubf-multiplicative-2} and the multiplicativity of the split fibration $\Usubf$, the diagram
\[\begin{tikzcd}[column sep=large]
\ftof\gtof Z \ar{d}{=}[swap]{1_{\ftof\gtof Z}} \ar{r}{1_{\gftof Z}} & \gftof Z\\
\ftof\gtof Z \ar{r}{=}[swap]{\ftof 1_{\gtof Z}} & \ftof\gtof Z \ar{u}[swap]{(\Ftwosub{f,g})_Z}
\end{tikzcd}\]
in $FA$ is commutative. This shows that $\Ftwosub{f,g}$ is the identity natural transformation. Therefore, $F$ is a strict functor.
\end{proof}
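For a concrete illustration of \Cref{strict-functor-split-fib}, suppose $G$ is a group, regarded as a category $\C$ with one object, and suppose $S$ is a right $G$-set. Sending the unique object of $\C$ to the discrete category $S$, and each $g \in G$ to the functor $s \mapsto sg$, defines a strict functor $F : \Cop \to \Cat$. The Grothendieck construction $\intf$ is the translation groupoid of $S$: its objects are the elements of $S$, and its morphisms $s \to t$ are the elements $g \in G$ with $s = tg$. By \Cref{strict-functor-split-fib}, the functor $\Usubf : \intf \to \C$ is a split fibration.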
\section{As a Lax Colimit}\label{sec:grothendieck-laxcolim}
The purpose of this section is to observe that the Grothendieck construction is a lax colimit. Recall from \eqref{bicat-aop-bop} and \Cref{expl:oplax-cone} the concept of an oplax cone.
\begin{definition}\label{def:lax-grothendieck-oplax-cone}
Suppose $F : \Cop \to \Cat$ is a $\C$-indexed category. Define the following structures.
\begin{description}
\item[Component $1$-Cells] For each object $A$ in $\C$, define the functor\label{notation:piofa}
\[\begin{tikzcd}
FA \ar{r}{\pi_A} & \intf
\end{tikzcd}\]
by sending:
\begin{itemize}
\item each object $X \in FA$ to the object $(A,X) \in\intf$;
\item each morphism $p : X \to X'$ in $FA$ to the morphism
\[\big(1_A, (F^0_A)_{X'} \circ p\big) : (A,X) \to (A,X') \inspace \intf.\]
\end{itemize}
\item[Component $2$-Cells] For each morphism $f \in \C(A,B)$, define the natural transformation
\[\begin{tikzpicture}[xscale=2.5, yscale=1.5]
\draw[0cell]
(0,0) node (A) {FA}
(1,0) node (B) {FB}
(0,-1) node (G) {\intf}
(1,-1) node (G2) {\intf}
;
\draw[1cell]
(B) edge node[swap] {Ff} (A)
(B) edge node {\pi_B} (G2)
(A) edge node[swap] {\pi_A} (G)
(G2) edge node {\Id} (G)
;
\draw[2cell]
node[between=B and G at .6, rotate=-45, font=\Large] (pi) {\Rightarrow}
(pi) node[above right] {\pi_f}
;
\end{tikzpicture}\]
with component
\[\begin{tikzcd}
(\pi_f)_Y = (f,1_{\tothe{f}{F}Y}) : \pi_A\tothe{f}{F}Y = (A,\tothe{f}{F}Y) \ar{r} & (B,Y) = \pi_BY \in \intf
\end{tikzcd}\]
for each object $Y \in FB$.\defmark
\end{description}
\end{definition}
\begin{lemma}\label{lax-grothendieck-oplax-cone}
\Cref{def:lax-grothendieck-oplax-cone} defines an oplax cone \[\pi : F \to \conof{\intf}\] of $\intf$ under $F$.
\end{lemma}
\begin{proof}
To see that each $\pi_A$ is actually a functor, first note that it sends the identity morphism $1_X$ for an object $X\in FA$ to $(1_A,(F^0_A)_X)$, which is the identity morphism of $(A,X)=\pi_AX$ in $\intf$.
For composable morphisms $p : X \to X'$ and $p' : X' \to X''$ in $FA$, $\pi_A$ preserves their composite if and only if the boundary of the following diagram in $FA$ commutes.
\[\begin{tikzcd}[column sep=large]
X \ar{d}[swap]{p} \ar{r}{p} & X' \ar{r}{p'} & X'' \ar{r}{(F^0_A)_{X''}} & \tothe{1_A}{F}X''\\
X' \ar[equal]{ur} \ar{r}[swap]{(F^0_A)_{X'}} & \tothe{1_A}{F}X' \ar{r}[swap]{\tothe{1_A}{F}p'} & \tothe{1_A}{F}X'' \ar[equal]{ur} \ar{r}[swap]{\tothe{1_A}{F}(F^0_A)_{X''}} & \tothe{1_A}{F}\tothe{1_A}{F}X'' \ar{u}[swap]{(F^2_{1_A,1_A})_{X''}}
\end{tikzcd}\]
From left to right, the three sub-diagrams are commutative by definition, the naturality of $F^0_A$, and the lax right unity axiom \eqref{f0-bicat} for $F$.
The naturality of $\pi_f$ with respect to $f$ is trivial because $\C$ has no non-identity $2$-cells. The oplax unity axiom \eqref{unity-oplax-pasting} and the oplax naturality axiom \eqref{2cell-oplax-pasting} for $\pi$ both follow from the lax left unity axiom \eqref{f0-bicat} for $F$.
\end{proof}
\begin{theorem}\label{thm:lax-grothendieck-lax-colimit}\index{Grothendieck construction!is a lax colimit}\index{lax!colimit!Grothendieck construction}
Suppose $F : \Cop \to \Cat$ is a $\C$-indexed category. Then the pair
\[\left(\intf,\pi\right)\] with
\begin{itemize}
\item $\intf$ the Grothendieck construction in \Cref{grothendieck-cat}, and
\item $\pi : F \to \conof{\intf}$ the oplax cone in \Cref{lax-grothendieck-oplax-cone}
\end{itemize}
is a lax colimit of $F$.
\end{theorem}
\begin{proof}
The assertion means that, for each category $\D$, there is an isomorphism
\[\begin{tikzcd}
\Cat\big(\intf,\D\big) \ar{r}{\pi^*}[swap]{\cong} & \oplaxcone(F,\conof{\D}) = \Bicat(\C,\Catop)(\conof{\D},\Fop)
\end{tikzcd}\]
of categories induced by pre-composition with the oplax cone $\pi$. By \Cref{laxcone-induced-functor}, the functor $\pi^*$ is well-defined.
For the purpose of constructing an inverse, let us first describe the functor $\pi^*$ explicitly. For each functor $G : \intf \to \D$, the oplax transformation
\[\begin{tikzpicture}[xscale=3.5, yscale=2]
\draw[0cell]
(0,0) node (c) {\Cop}
(1,0) node (cat) {\Cat}
;
\draw[1cell]
(c) edge[bend left=40] node {F} (cat)
(c) edge[bend right=40] node[swap] {\conof{\D}} (cat)
;
\draw[2cell]
node[between=c and cat at .25, rotate=-90, font=\Large] (pig) {\Rightarrow}
(pig) node[right] {\pi^*G = \conof{G}\pi}
;
\end{tikzpicture}\]
is determined by the following data.
\begin{enumerate}
\item For each object $A$ in $\C$, the functor
\[(\conof{G}\pi)_A : FA \to \D\]
sends:
\begin{itemize}
\item each object $X\in FA$ to $G(A,X)\in\D$;
\item each morphism $p : X \to X'$ in $FA$ to the morphism
\[G\big(1_A,(F^0_A)_{X'}\circ p\big) : G(A,X) \to G(A,X') \inspace \D.\]
\end{itemize}
\item For each morphism $f : A \to B$ in $\C$, the natural transformation
\[\begin{tikzpicture}[xscale=2.3, yscale=2]
\draw[0cell]
(0,0) node (A) {FA}
(1,0) node (B) {FB}
(.5,-.7) node (D) {\D}
;
\draw[1cell]
(B) edge node[swap] (f) {Ff} (A)
(A) edge[out=-90, in=180] node[swap, pos=.4] {(\conof{G}\pi)_A} (D)
(B) edge[out=-90, in=0] node[pos=.4] {(\conof{G}\pi)_B} (D)
;
\draw[2cell]
node[between=f and D at .7, rotate=0, font=\Large] (pi) {\Rightarrow}
(pi) node[above] {(\conof{G}\pi)_f}
;
\end{tikzpicture}\]
has component morphism
\[\begin{tikzcd}[column sep=huge]
G(A,\tothe{f}{F}Y) \ar{r}{G(f,1_{\tothe{f}{F}Y})} & G(B,Y)
\end{tikzcd}\]
in $\D$ for each object $Y\in FB$.
\end{enumerate}
Moreover, for each natural transformation $\alpha : G \to H$ between functors $G,H : \intf\to \D$, the modification
\[\pi^*\alpha = \conof{\alpha}*1_{\pi} : \conof{G}\pi \to \conof{H}\pi\]
is equipped with, for each object $A \in \C$, the natural transformation
\[\begin{tikzpicture}[xscale=3, yscale=2]
\draw[0cell]
(0,0) node (A) {FA}
(1,0) node (D) {\D}
;
\draw[1cell]
(A) edge[bend left=40] node {(\conof{G}\pi)_A} (D)
(A) edge[bend right=40] node[swap] {(\conof{H}\pi)_A} (D)
;
\draw[2cell]
node[between=A and D at .25, rotate=-90, font=\Large] (al) {\Rightarrow}
(al) node[right] {(\conof{\alpha}*1_{\pi})_A}
;
\end{tikzpicture}\]
with component morphism
\begin{equation}\label{alpha-of-ax}
\begin{tikzcd}[column sep=large]
(\conof{G}\pi)_A(X) = G(A,X) \ar{r}{\alpha_{(A,X)}} & H(A,X) = (\conof{H}\pi)_A(X) \in \D
\end{tikzcd}
\end{equation}
for each object $X\in FA$.
Now we define a functor that is strictly inverse to $\pi^*$. Using \Cref{expl:oplax-cone}, each oplax transformation $\alpha : F \to \conof{\D}$ has
\begin{itemize}
\item component functors $\alpha_A : FA \to \D$ for objects $A \in \C$, and
\item component natural transformations
\[\begin{tikzpicture}[xscale=2.3, yscale=1.7]
\draw[0cell]
(0,0) node (A) {FA}
(1,0) node (B) {FB}
(.5,-.7) node (D) {\D}
;
\draw[1cell]
(B) edge node[swap] (f) {Ff} (A)
(A) edge[out=-90, in=180] node[swap, pos=.4] {\alpha_A} (D)
(B) edge[out=-90, in=0] node[pos=.4] {\alpha_B} (D)
;
\draw[2cell]
node[between=f and D at .7, rotate=0, font=\Large] (pi) {\Rightarrow}
(pi) node[above] {\alpha_f}
;
\end{tikzpicture}\]
for morphisms $f : A \to B$ in $\C$, that satisfy the oplax unity axiom and the oplax naturality axiom.
\end{itemize}
Given such an oplax transformation, we define a functor
\[\barof{\alpha} : \intf \to \D\]
by sending:
\begin{itemize}
\item each object $(A,X)\in \intf$ to the object $\alpha_AX\in \D$;
\item each morphism \[(f,p) : (A,X) \to (B,Y) \in \intf\] to the composite
\[\begin{tikzpicture}[xscale=3, yscale=1]
\draw[0cell]
(0,0) node (X) {\alpha_AX}
(1,0) node (Y) {\alpha_BY}
(.5,-.7) node (fy) {\alpha_A \tothe{f}{F}Y}
;
\draw[1cell]
(X) edge node (f) {\barof{\alpha}(f,p)} (Y)
(X) edge[out=-90, in=180] node[swap, pos=.4] {\alpha_Ap} (fy)
(fy) edge[out=0, in=-90] node[swap, pos=.6] {(\alpha_f)_Y} (Y)
;
\end{tikzpicture}\]
in $\D$.
\end{itemize}
The oplax unity axiom \eqref{unity-oplax-pasting} for $\alpha$ implies that $\barof{\alpha}$ preserves identity morphisms. To see that $\barof{\alpha}$ preserves composites, suppose given composable $1$-cells
\[\begin{tikzcd}
(A,X) \ar{r}{(f,p)} & (B,Y) \ar{r}{(g,q)} & (C,Z)\end{tikzcd}\]
in $\intf$. Then $\barof{\alpha}$ preserves their composite if and only if the boundary of the following diagram in $\D$ commutes.
\[\begin{tikzcd}[column sep=huge]
\alpha_AX \ar{d}[swap]{\alpha_Ap} \ar{r}{\alpha_Ap} & \alpha_A\tothe{f}{F}Y \ar{r}{(\alpha_f)_Y} & \alpha_BY \ar{d}{\alpha_Bq}\\
\alpha_A\tothe{f}{F}Y \ar[equal]{ur} \ar{d}[swap]{\alpha_A\tothe{f}{F}q} && \alpha_B\tothe{g}{F}Z \ar{d}{(\alpha_g)_Z}\\
\alpha_A\tothe{f}{F}\tothe{g}{F}Z \ar[bend left=15]{urr}[pos=.6]{(\alpha_f)_{\tothe{g}{F}Z}} \ar{r}{\alpha_A(F^2_{f,g})_Z} & \alpha_A\tothe{gf}{F}Z \ar{r}{(\alpha_{gf})_Z} & \alpha_CZ
\end{tikzcd}\]
The upper left triangle is commutative by definition. The middle sub-diagram is commutative by the naturality of $\alpha_f$. The lower right triangle is commutative by the oplax naturality axiom \eqref{2cell-oplax-pasting} for $\alpha$.
Next, each modification $\Gamma : \alpha\to\beta$ of oplax transformations $\alpha,\beta : F \to \conof{\D}$ has a component natural transformation $\Gamma_A$ as on the left-hand side below
\[\begin{tikzpicture}[xscale=2.2, yscale=2]
\draw[0cell]
(0,0) node (A) {FA}
(1,0) node (D) {\D}
;
\draw[1cell]
(A) edge[bend left=40] node {\alpha_A} (D)
(A) edge[bend right=40] node[swap] {\beta_A} (D)
;
\draw[2cell]
node[between=A and D at .45, rotate=-90, font=\Large] (ga) {\Rightarrow}
(ga) node[right] {\Gamma_A}
;
\draw[0cell]
($(D)+(1,0)$) node (F) {\intf}
($(F)+(1,0)$) node (D2) {\D}
;
\draw[1cell]
(F) edge[bend left=40, shorten <=-.1cm] node[pos=.45] {\barof{\alpha}} (D2)
(F) edge[bend right=40, shorten <=-.1cm] node[pos=.45, swap] {\barof{\beta}} (D2)
;
\draw[2cell]
node[between=F and D2 at .45, rotate=-90, font=\Large] (ga) {\Rightarrow}
(ga) node[right] {\barof{\Gamma}}
;
\end{tikzpicture}\]
for each object $A\in\C$. Given such a modification, we define a natural transformation $\barof{\Gamma}$ as on the right-hand side above, with a component morphism
\[\begin{tikzpicture}[xscale=5, yscale=2]
\draw[0cell]
(0,0) node (A) {\barof{\alpha}(A,X) = \alpha_AX}
(1,0) node (B) {\beta_AX = \barof{\beta}(A,X)}
;
\draw[1cell]
(A) edge node{\barof{\Gamma}_{(A,X)} = (\Gamma_A)_X} (B)
;
\end{tikzpicture}\]
in $\D$ for each object $(A,X)\in\intf$. The naturality of $\barof{\Gamma}$ means that, for each morphism $(f,p)$ in $\intf$ as above, the boundary of the following diagram in $\D$ commutes.
\[\begin{tikzcd}[column sep=huge]
\alpha_AX \ar{d}[swap]{\alpha_Ap} \ar{r}{(\Gamma_A)_X} & \beta_AX \ar{d}{\beta_Ap}\\
\alpha_A\tothe{f}{F}Y \ar{d}[swap]{(\alpha_f)_Y} \ar{r}{(\Gamma_A)_{\tothe{f}{F}Y}} & \beta_A\tothe{f}{F}Y \ar{d}{(\beta_f)_Y}\\
\alpha_BY \ar{r}{(\Gamma_B)_Y} & \beta_BY
\end{tikzcd}\]
The top square is commutative by the naturality of $\Gamma_A$. The bottom square is commutative by the modification axiom for $\Gamma$ as in \Cref{expl:morphism-oplax-cone}.
If $\Gamma$ is the identity modification of $\alpha$, then $\barof{\Gamma}$ is the identity natural transformation, since each $\Gamma_A$ is the identity natural transformation of $\alpha_A$. Similarly, the vertical composite of two modifications between oplax transformations is sent to the vertical composite of the corresponding natural transformations. Therefore, we have defined a functor $\phi$ as in
\[\begin{tikzcd}
\Cat\big(\intf,\D\big) \ar[shift left=.5ex]{r}[pos=.5]{\pi^*} & \oplaxcone(F,\conof{\D}) \ar[shift left=.5ex]{l}{\phi}
\end{tikzcd}\]
given by
\[\phi(\alpha) = \barof{\alpha} \andspace \phi(\Gamma) = \barof{\Gamma}.\]
It remains to check that the functors $\pi^*$ and $\phi$ are inverses of each other.
Starting on the left side, for a functor $G : \intf\to\D$, $(\phi\pi^*)(G)$ sends:
\begin{itemize}
\item each object $(A,X)\in\intf$ to the object $(\conof{G}\pi)_A(X) = G(A,X)$;
\item each morphism $(f,p)\in\intf$ as above to the composite
\[\begin{tikzpicture}[xscale=5, yscale=2]
\draw[0cell]
(0,0) node (A) {G(A,X)}
(1,0) node (B) {G(A,\tothe{f}{F}Y)}
(1.7,0) node (C) {G(B,Y)}
;
\draw[1cell]
(A) edge node{G\big(1_A, (F^0_A)_{\tothe{f}{F}Y}\circ p\big)} (B)
(B) edge node{G(f,1_{\tothe{f}{F}Y})} (C)
;
\end{tikzpicture}\]
in $\D$.
\end{itemize}
Since $G$ preserves composites, to see that the above composite is $G(f,p)$, it is enough to check that the composite
\[\begin{tikzpicture}[xscale=4.5, yscale=2]
\draw[0cell]
(0,0) node (A) {(A,X)}
(1,0) node (B) {(A,\tothe{f}{F}Y)}
(1.7,0) node (C) {(B,Y)}
;
\draw[1cell]
(A) edge node{\big(1_A, (F^0_A)_{\tothe{f}{F}Y}\circ p\big)} (B)
(B) edge node{(f,1_{\tothe{f}{F}Y})} (C)
;
\end{tikzpicture}\]
in $\intf$ is $(f,p)$. In this composite, the first component is $f1_A=f$. The second component is the long composite of the following diagram in $FA$.
\[\begin{tikzcd}[column sep=large]
X \ar{d}[swap]{p} && \tothe{f}{F}Y\\
\tothe{f}{F}Y \ar{r}{(F^0_A)_{\tothe{f}{F}Y}} \ar[bend left]{urr}{1_{\tothe{f}{F}Y}} & \tothe{1_A}{F}\tothe{f}{F}Y \ar{r}{\tothe{1_A}{F}1_{\tothe{f}{F}Y}}[swap]{=} & \tothe{1_A}{F}\tothe{f}{F}Y \ar{u}[swap]{(F^2_{1_A,f})_Y}
\end{tikzcd}\]
Since the triangle is commutative by the lax left unity axiom \eqref{f0-bicat} for $F$, the entire composite is $p$. So $\phi\pi^*$ is the identity assignment on objects.
For a natural transformation $\alpha : G \to H$ between functors $G,H : \intf\to \D$, the equality
\[\alpha = (\phi\pi^*)(\alpha)\] follows from \eqref{alpha-of-ax} and \eqref{gamma-of-ax}. We have shown that $\phi\pi^*$ is the identity functor on $\Cat\big(\intf,\D\big)$. The other equality $\pi^*\phi = \Id$ is similar, and the reader is asked to check it in \Cref{exer:pistar-phi}.
\end{proof}
\section{As a \texorpdfstring{$2$}{2}-Functor}
\label{sec:grothendieck-iifunctor}
In \Cref{grothendieck-is-fibration} we saw that the Grothendieck construction sends each $\C$-indexed category $F$ with $\Ftwo$ invertible to a fibration over $\C$. Most of the rest of this chapter is devoted to showing that the Grothendieck construction yields a $2$-equivalence
\[\begin{tikzcd}
\Bicatpscopcat \ar{r}{\sint} & \fibofc
\end{tikzcd}\]
in the sense of \Cref{definition:2-equivalence}. Here:
\begin{itemize}
\item $\Bicatpscopcat$ is the $2$-category in \Cref{subbicat-pseudofunctor} with pseudofunctors $\Cop \to \Cat$ as objects, strong transformations as $1$-cells, and modifications as $2$-cells.
\item $\fibofc$ is the $2$-category in \Cref{iicat-fibrations} with fibrations over $\C$ with small domain categories as objects, Cartesian functors as $1$-cells, and vertical natural transformations as $2$-cells.
\end{itemize}
As the first step in establishing this $2$-equivalence, in this section we observe that the Grothendieck construction extends to a $2$-functor.
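Before turning to the formal definitions, the shape of the construction can be illustrated with a small computation. The following is a minimal sketch for the simplest special case, a strict presheaf of sets (so each value category is discrete); the indexing category, the sets, and the reindexing function are hypothetical illustrative data, not taken from the text.

```python
# Minimal illustrative sketch (hypothetical data): the Grothendieck
# construction of a strict presheaf of sets F : C^op -> Set, where
# each value category F(A) is discrete.  Objects of the Grothendieck
# category are pairs (A, X) with X in F(A), and the projection
# functor U sends (A, X) to A.

# A tiny indexing category C: objects A, B and one arrow f : A -> B.
C_objects = ["A", "B"]

# F on objects: a finite set for each object of C.
F = {"A": {"x0", "x1"}, "B": {"y0"}}

# F on the arrow f (contravariant): the reindexing f^* : F(B) -> F(A).
f_star = {"y0": "x0"}

def int_F_objects():
    """All objects (A, X) of the Grothendieck category of F."""
    return sorted((A, X) for A in C_objects for X in F[A])

def U(pair):
    """The projection to C on objects."""
    return pair[0]

# A morphism (A, X) -> (B, Y) lying over f is a morphism
# p : X -> f^*(Y) in F(A); in the discrete case this is just the
# condition X == f_star[Y].
def is_morphism_over_f(source, target):
    (A, X), (B, Y) = source, target
    return (A, B) == ("A", "B") and X == f_star[Y]

print(int_F_objects())
```

In this discrete case the construction recovers the classical category of elements, with the fiber over each object $A$ being the set $F(A)$.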
First we define the Grothendieck construction of strong transformations. Recall from \Cref{definition:lax-transformation} that a strong transformation $\alpha : F \to G$ between lax functors is a lax transformation such that each component $2$-cell $\alpha_f$ is invertible.
\begin{notation}\label{not:lax-functor-subscripts}
The following abbreviations will often be used to simplify the presentation.
\begin{enumerate}
\item For a lax functor $(F,\Ftwo,\Fzero) : \Cop \to \Cat$ and objects $A\in\C$ and $X\in FA$, we abbreviate the natural transformation $\Fzeroa$ to $\Fzero$ and the $X$-component morphism $(\Fzeroa)_X$ in \eqref{fzeroax} to $\Fzeroax$, $\Fzero_X$, or even just $\Fzero$.
\item Similar abbreviations will be applied to $F^2$ and other natural transformations.\defmark
\end{enumerate}
\end{notation}
\begin{definition}\label{def:grothendieck-icell}
For each strong transformation $\alpha$ as in
\[\begin{tikzpicture}[xscale=2, yscale=1]
\draw[0cell]
(0,0) node (x11) {\Cop}
(1,0) node (x12) {\Cat}
;
\draw[1cell]
(x11) edge[bend left=50] node{F} (x12)
(x11) edge[bend right=50] node[swap]{G} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .45, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
;
\end{tikzpicture}\]
with $F,G$ lax functors, define the functor\label{notation:intalpha}
\[\begin{tikzcd}
\intf \ar{r}{\intalpha} & \intg\end{tikzcd}\]
as follows.
\begin{itemize}
\item $\intalpha$ sends each object $(A\in\C,X\in FA)$ in $\intf$ to the object $(A,\alphaa X\in GA)$ in $\intg$.
\item $\intalpha$ sends each morphism $(f,p) : (A,X) \to (B,Y)$ in $\intf$ to the morphism
\begin{equation}\label{intalpha-fp}
\begin{tikzcd}[column sep=huge]
(A,\alphaa X) \ar{r}{\big(f,\,\alphafyinv \circ \alphaa p\big)} & (B,\alphab Y) \in \intg\end{tikzcd}
\end{equation}
whose second component is the composite
\[\begin{tikzpicture}[xscale=2, yscale=1, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {\alphaa X}
(1,0) node (x12) {\alphaa \ftof Y}
(3,0) node (x13) {\ftog \alphab Y \in GA}
;
\draw[1cell]
(x11) edge node{\alphaa p} (x12)
(x12) edge node{\alphafyinv \,=\, (\alphaf)_Y^{-1}} (x13)
;
\end{tikzpicture}\]
with $\alphafyinv$ the inverse of the $Y$-component of the natural isomorphism $\alphaf$.
\end{itemize}
This finishes the definition of $\intalpha$. We show that $\intalpha$ is a functor in \cref{intalpha-functor} below.
\end{definition}
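In the strict, discrete special case (each component $2$-cell $\alpha_f$ an identity and each value category discrete), the formula \eqref{intalpha-fp} reduces to applying the component $\alpha_A$ in the second coordinate. A minimal sketch, with hypothetical component data:

```python
# Illustrative sketch (hypothetical data): the functor induced by a
# strict transformation alpha : F -> G sends an object (A, X) to
# (A, alpha_A(X)).  In the strict case each component 2-cell alpha_f
# is an identity, so no inverse component needs to be composed in.
alpha = {
    "A": {"x0": "u0", "x1": "u1"},   # alpha_A : F(A) -> G(A)
    "B": {"y0": "v0"},               # alpha_B : F(B) -> G(B)
}

def int_alpha(pair):
    """The induced functor on objects: (A, X) |-> (A, alpha_A X)."""
    A, X = pair
    return (A, alpha[A][X])

print(int_alpha(("A", "x1")))  # -> ('A', 'u1')
```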
\begin{explanation}\label{expl:bicatpscopcat-icell}
In \Cref{def:grothendieck-icell}, $F$ and $G : \Cop \to \Cat$ do \emph{not} need to be pseudofunctors. A strong transformation $\alpha : F \to G$ consists of a component $1$-cell, i.e., a functor
\[\begin{tikzcd}
FA \ar{r}{\alphaa} & GA\end{tikzcd}\]
for each object $A \in \C$, and an invertible component $2$-cell, i.e., a natural isomorphism
\[\begin{tikzpicture}[xscale=2, yscale=1.4]
\draw[0cell]
(0,0) node (x11) {FA}
($(x11)+(1,0)$) node (x12) {FB}
($(x11)+(0,-1)$) node (x21) {GA}
($(x12)+(0,-1)$) node (x22) {GB}
;
\draw[1cell]
(x12) edge node[swap] {\ftof} (x11)
(x11) edge node[swap] {\alphaa} (x21)
(x12) edge node {\alphab} (x22)
(x22) edge node {\ftog} (x21)
;
\draw[2cell]
node[between=x21 and x12 at .4, rotate=135, 2label={below,\alphaf}] {\Rightarrow}
;
\end{tikzpicture}\]
for each morphism $f : A \to B \in\C$. This data is required to satisfy the lax unity axiom \eqref{unity-transformation-pasting} and the lax naturality axiom \eqref{2-cell-transformation-pasting}.\dqed
\end{explanation}
\begin{lemma}\label{intalpha-functor}
For each strong transformation $\alpha : F \to G$ as in \Cref{def:grothendieck-icell},
\[\begin{tikzcd}
\intf \ar{r}{\intalpha} & \intg\end{tikzcd}\]
is a functor.
\end{lemma}
\begin{proof}
To see that $\intalpha$ preserves identity morphisms, consider an object $(A,X) \in \intf$ with identity morphism $(1_A,\Fzeroax)$. By \eqref{intalpha-fp}, the image of $(1_A,\Fzeroax)$ under $\intalpha$ has second component the top composite in the following diagram
\[\begin{tikzpicture}[xscale=2, yscale=1]
\draw[0cell]
(0,0) node (x11) {\alphaa X}
(1.5,0) node (x12) {\alphaa \oneatof X}
(3,0) node (x13) {\oneatog \alphaa X}
($(x11)+(0,-1)$) node[inner sep=0pt] (sw) {}
($(x13)+(0,-1)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node{\alphaa \Fzeroax} (x12)
(x12) edge node{\alphaoneaxinv} (x13)
(x11) edge[-,shorten >=-1pt] (sw)
(sw) edge[-,shorten <=-1pt, shorten >=-1pt] node {(\Gzeroa)_{\alphaa X}} (se)
(se) edge[shorten <=-1pt] (x13)
;
\end{tikzpicture}
\]
in $GA$. This composite is equal to $(\Gzeroa)_{\alphaa X}$ by the lax unity axiom \eqref{unity-transformation-pasting} for $\alpha$. Therefore, $\intalpha$ preserves identity morphisms.
To see that $\intalpha$ preserves composites, consider composable morphisms
\[\begin{tikzcd}
(A,X) \ar{r}{(f,p)} & (B,Y) \ar{r}{(g,q)} & (C,Z) \in \intf\end{tikzcd}\]
as in \eqref{fpgq}. By \eqref{intf-composite} and \eqref{intalpha-fp}, the second components of
\[\left(\intalpha\right)\big((g,q) \circ (f,p)\big) \andspace \left(\intalpha\right)(g,q) \circ \left(\intalpha\right)(f,p)\]
are equal if and only if the outermost diagram below is commutative in $GA$.
\[\begin{tikzcd}[column sep=large]
\alphaa X \ar{d}[swap]{\alphaa p} \ar{r}{\alphaa p} & \alphaa \ftof Y \ar{r}{\alphafyinv} & \ftog \alphab Y \ar{d}[swap]{\ftog\alphab q} \ar{r}{\ftog\alphab q} & \ftog \alphab \gtof Z \ar{d}{\ftog\alphagzinv}\\
\alphaa \ftof Y \ar[equal]{ur} \ar{d}[swap]{\alphaa \ftof q} && \ftog \alphab \gtof Z \ar{r}{\ftog \alphagzinv} & \ftog\gtog\alphac Z \ar{d}{\Gtwosub{\alphac Z}}\\
\alphaa \ftof \gtof Z \ar[bend left=15]{urr}{\alphainv_{f,\gtof Z}} \ar{rr}{\alphaa(\Ftwosub{Z})} && \alphaa \gftof Z \ar{r}{\alphagfzinv} & \gftog \alphac Z
\end{tikzcd}\]
In the above diagram:
\begin{itemize}
\item The upper left triangle and the upper right square are commutative by definition.
\item The lax naturality axiom \eqref{2-cell-transformation-pasting} for $\alpha$ implies that the bottom sub-diagram is commutative.
\item The remaining sub-diagram is commutative by the naturality of $\alphaf$.
\end{itemize}
Therefore, $\intalpha$ preserves composites of morphisms.
\end{proof}
To show that $\intalpha$ is a Cartesian functor, we will use the following observation.
\begin{lemma}\label{cartesian-invertible}
Suppose:
\begin{itemize}
\item $F : \Cop\to\Cat$ is a lax functor with $\Fzero$ invertible.
\item $(f,p) : (A,X) \to (B,Y)$ is a Cartesian morphism in $\intf$ with respect to the functor $\Usubf : \intf \to \C$.
\end{itemize}
Then $p : X \to \ftof Y$ is an isomorphism in $FA$.
\end{lemma}
\begin{proof}
To construct the inverse to $p$, consider the pre-raise
\[\preraise{(f,p)}{(f,1_{\ftof Y})}{\onea}\]
with respect to $\Usubf$, as pictured below.
\begin{equation}\label{oneat-fp-fone}
\begin{tikzpicture}[xscale=3, yscale=1.5, baseline={(s.base)}]
\draw[0cell]
(0,0) node (a) {(A,X)}
($(a)+(1,0)$) node (b) {(B,Y)}
($(a)+(.5,1)$) node (c) {(A,\ftof Y)}
($(b)+(.3,.5)$) node (s) {}
($(s)+(.3,0)$) node (t) {}
($(t)+(.3,-.5)$) node (a2) {A}
($(a2)+(.6,0)$) node (b2) {B}
($(a2)+(.3,1)$) node (c2) {A}
;
\draw[1cell]
(a) edge node {(f,p)} (b)
(c) edge[dashed] node[swap] {\exists !\, (\onea,t)} (a)
(c) edge node {(f,1_{\ftof Y})} (b)
(s) edge[|->] node{\Usubf} (t)
(a2) edge node {f} (b2)
(c2) edge node[swap] {\onea} (a2)
(c2) edge node {f} (b2)
;
\end{tikzpicture}
\end{equation}
Since $(f,p)$ is a Cartesian morphism in $\intf$, this pre-raise has a unique raise $(\onea,t)$ with $t : \ftof Y \to \oneatof X$ in $FA$. We aim to show that the composite
\begin{equation}\label{t-fzeroxinv}
\begin{tikzcd}[column sep=large]
\ftof Y \ar{r}{t} & \oneatof X \ar{r}{(\Fzerosub{X})^{-1}} & X
\end{tikzcd}
\end{equation}
in $FA$ is the inverse of $p$.
Consider the following diagram in $FA$.
\begin{equation}\label{t-fzeroinv-p=1}
\begin{tikzpicture}[xscale=2.5, yscale=1.5, baseline={(x21.base)}]
\draw[0cell]
(0,0) node (x11) {\ftof Y}
($(x11)+(1,0)$) node (x12) {\foneatof Y\,=\,\ftof Y}
($(x11)+(0,-1)$) node (x21) {\oneatof X}
($(x12)+(0,-1)$) node (x22) {\oneatof \ftof Y}
($(x21)+(0,-1)$) node (x31) {X}
($(x22)+(0,-1)$) node (x32) {\ftof Y}
($(x12)+(.8,0)$) node[inner sep=0pt] (ne) {}
($(x32)+(.8,0)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node {1} (x12)
(x11) edge node[swap] {t} (x21)
(x22) edge node[swap] {\Ftwosub{Y}} (x12)
(x21) edge node {\oneatof p} (x22)
(x21) edge node[swap] {(\Fzerosub{X})^{-1}} (x31)
(x32) edge node[swap] {\Fzerosub{\ftof Y}} (x22)
(x31) edge node {p} (x32)
(x32) edge[-,shorten >=-1pt] (se)
(se) edge[-,shorten <=-1pt, shorten >=-1pt] node[swap] {1} (ne)
(ne) edge[shorten <=-1pt] (x12)
;
\end{tikzpicture}
\end{equation}
\begin{itemize}
\item The top square is the second component of the left commutative triangle in \eqref{oneat-fp-fone}.
\item The bottom square is commutative by the naturality of $\Fzero$.
\item The right rectangle is commutative by the lax left unity \eqref{f0-bicat} of $F$.
\end{itemize}
So $(\Fzerosub{X})^{-1}t$ is a right inverse of $p$.
To show that $(\Fzerosub{X})^{-1}t$ is also a left inverse of $p$, it suffices to show the commutativity of the left triangle in
\begin{equation}\label{pt-fzerox}
\begin{tikzpicture}[xscale=2.5, yscale=1.5, baseline={(0,-.5)}]
\draw[0cell]
(0,0) node (x11) {X}
($(x11)+(1,0)$) node (x12) {\oneatof X}
($(x11)+(.5,-1)$) node (x21) {\ftof Y}
;
\draw[1cell]
(x11) edge node{\Fzerosub{X}} (x12)
(x11) edge node[swap]{p} (x21)
(x21) edge node[swap]{t} (x12)
;
\draw[0cell]
($(x21)+(1.5,0)$) node (a) {(A,X)}
($(a)+(1,0)$) node (b) {(B,Y)}
($(a)+(.5,1)$) node (c) {(A,X)}
;
\draw[1cell]
(a) edge node {(f,p)} (b)
(c) edge node[swap] {(\onea,tp)} (a)
(c) edge node {(f,p)} (b)
;
\end{tikzpicture}
\end{equation}
in $FA$. By \eqref{fzeroax}, the identity morphism of $(A,X)$ is $(\onea,\Fzerosub{X})$. Since $(f,p)$ is a Cartesian morphism in $\intf$, to show the commutativity of the left triangle in \eqref{pt-fzerox}, by \Cref{cartesian-properties}\eqref{cartesian-properties-ii} it suffices to show that the triangle in $\intf$ on the right-hand side is commutative.
In this triangle, the first component is $f\onea = f$, and the second component is the outermost diagram in $FA$ below.
\[\begin{tikzcd}[column sep=huge, row sep=large]
X \ar{d}[swap]{p} &&\\
\ftof Y \ar{d}[swap]{t} \ar[bend left]{rr}{1} & X \ar{r}{p} & \ftof Y \ar{d}{1}\\
\oneatof X \ar{ur}{(\Fzerosub{X})^{-1}} \ar{r}{\oneatof p} & \oneatof \ftof Y \ar{r}{\Ftwosub{Y}} \ar{ur}{(\Fzero)^{-1}} & \foneatof Y
\end{tikzcd}\]
\begin{itemize}
\item The top left sub-diagram is commutative by \eqref{t-fzeroinv-p=1}.
\item The parallelogram is commutative by the naturality of $\Fzero$.
\item The bottom right triangle is commutative by the lax left unity \eqref{f0-bicat} of $F$.
\end{itemize}
We have shown that $(\Fzerosub{X})^{-1}t$ is also a left inverse of $p$.
\end{proof}
Recall from \Cref{def:cartesian-functor} that a Cartesian functor is a functor (i) that respects the functors to $\C$ and (ii) that preserves Cartesian morphisms.
\begin{lemma}\label{intalpha-cartesian}
Suppose $\alpha : F \to G$ is a strong transformation with
\begin{itemize}
\item $F,G : \Cop\to\Cat$ lax functors and
\item $\Fzero$ and $\Gtwo$ invertible.
\end{itemize}
Then the functor
\[\begin{tikzcd}
\intf \ar{r}{\intalpha} & \intg\end{tikzcd}\]
in \Cref{intalpha-functor} is a Cartesian functor.
\end{lemma}
\begin{proof}
The diagram
\begin{equation}\label{intalpha-ug}
\begin{tikzpicture}[xscale=1.5, yscale=1, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {\intf}
($(x11)+(2,0)$) node (x12) {\intg}
($(x11)+(1,-1)$) node (x21) {\C}
;
\draw[1cell]
(x11) edge node{\intalpha} (x12)
(x11) edge[bend right=15] node[swap]{\Usubf} (x21)
(x12) edge[bend left=15] node{\Usubg} (x21)
;
\end{tikzpicture}
\end{equation}
is commutative because (i) each of $\Usubf$ and $\Usubg$ projects onto the first factor, and (ii) $\intalpha$ leaves the first component of objects and morphisms unchanged. The commutativity of the diagram \eqref{intalpha-ug} does not require the invertibility of $\Fzero$ and $\Gtwo$.
To check that $\intalpha$ preserves Cartesian morphisms, suppose
\[(f,p) : (A,X) \to (B,Y)\] is a Cartesian morphism in $\intf$. We must show that $\left(\intalpha\right)(f,p)$, whose second component is in \eqref{intalpha-fp}, is a Cartesian morphism in $\intg$. Suppose given a pre-raise
\[\preraise{(f, \alphafyinv \circ \alphaa p)}{(h,r)}{i}\]
with respect to $\Usubg$, as pictured below.
\[\begin{tikzpicture}[xscale=3, yscale=1.5]
\draw[0cell]
(0,0) node (a) {(A,\alphaa X)}
($(a)+(1.5,0)$) node (b) {(B,\alphab Y)}
($(a)+(.75,1)$) node (c) {(C,Z)}
($(b)+(.3,.5)$) node (s) {}
($(s)+(.3,0)$) node (t) {}
($(t)+(.3,-.5)$) node (a2) {A}
($(a2)+(.6,0)$) node (b2) {B}
($(a2)+(.3,1)$) node (c2) {C}
;
\draw[1cell]
(a) edge node {(f,\alphafyinv \circ \alphaa p)} (b)
(c) edge[dashed] node[swap] {(i,\exists !\, q?)} (a)
(c) edge node {(h,r)} (b)
(s) edge[|->] node{\Usubg} (t)
(a2) edge node {f} (b2)
(c2) edge node[swap] {i} (a2)
(c2) edge node {h} (b2)
;
\end{tikzpicture}\]
We must show that there exists a unique raise. The first entry of such a raise must be $i$.
By \eqref{intf-composite} we must show that there exists a unique morphism
\[\begin{tikzcd}
Z \ar{r}{q} & \itog\alphaa X \in GC\end{tikzcd}\]
such that the outermost diagram in
\begin{equation}\label{s-itogalphaap}
\begin{tikzpicture}[xscale=3.2, yscale=1.4, baseline={(0,-1)}]
\draw[0cell]
(0,0) node (x11) {Z}
($(x11)+(1,0)$) node (x12) {}
($(x12)+(1,0)$) node (x13) {\htog\alphab Y\,=\,\fitog\alphab Y}
($(x11)+(0,-1)$) node (x21) {\itog\alphaa X}
($(x12)+(0,-1)$) node (x22) {\itog\alphaa\ftof Y}
($(x13)+(0,-1)$) node (x23) {\itog\ftog\alphab Y}
;
\draw[1cell]
(x11) edge node{r} (x13)
(x11) edge node[swap] {q} (x21)
(x23) edge node {\iso} node[swap] {\Gtwosub{\alphab Y}} (x13)
(x21) edge node {\itog\alphaa p} node[swap]{\iso} (x22)
(x22) edge node {\itog\alphafyinv} node[swap]{\iso} (x23)
;
\end{tikzpicture}
\end{equation}
in $GC$ is commutative. By the invertibility of $\Fzero$ and \Cref{cartesian-invertible}, $p$ is an isomorphism. Since $\Gtwo$ is invertible by assumption, $q$ exists and is uniquely determined by the boundary in \eqref{s-itogalphaap}. In other words, $q$ must be the composite
\[\begin{tikzpicture}[xscale=3.2, yscale=1.4, baseline={(0,-1)}]
\draw[0cell]
(0,0) node (x11) {Z}
($(x11)+(1,0)$) node (x12) {}
($(x12)+(1,0)$) node (x13) {\htog\alphab Y\,=\,\fitog\alphab Y}
($(x11)+(0,-1)$) node (x21) {\itog\alphaa X}
($(x12)+(0,-1)$) node (x22) {\itog\alphaa\ftof Y}
($(x13)+(0,-1)$) node (x23) {\itog\ftog\alphab Y}
;
\draw[1cell]
(x11) edge node{r} (x13)
(x13) edge node {(\Gtwosub{\alphab Y})^{-1}} (x23)
(x23) edge node[swap] {\itog\alphafy} (x22)
(x22) edge node[swap] {\itog\alphaa p^{-1}} (x21)
;
\end{tikzpicture}\]
in $GC$.
\end{proof}
\Cref{intalpha-cartesian} implies that the Grothendieck construction $\int$ sends $1$-cells in $\Bicatpscopcat$ to $1$-cells in $\fibofc$. Recall from \Cref{def:modification} the concept of a modification. Next we define the Grothendieck construction of modifications.
\begin{definition}\label{def:grothendieck-twocells}
Suppose $\Gamma : \alpha\to\beta$ is a modification as in
\[\begin{tikzpicture}[xscale=3, yscale=1]
\draw[0cell]
(0,0) node (x11) {\Cop}
(1,0) node (x12) {\Cat}
;
\draw[1cell]
(x11) edge[bend left=70] node (F) {F} (x12)
(x11) edge[bend right=70] node[swap] (G) {G} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .25, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
node[between=x11 and x12 at .7, rotate=-90, 2label={above,\beta}] {\Rightarrow}
node[between=F and G at .6, rotate=0, 2label={above,\Gamma}] {\Rrightarrow}
;
\end{tikzpicture}\]
with $\alpha,\beta$ strong transformations and $F,G$ lax functors. For each object $(A,X)\in\intf$, define the morphism
\[\begin{tikzcd}[column sep=huge]
\left(\intalpha\right)(A,X) = (A,\alphaa X) \ar{r}{\left(\intgamma\right)_{(A,X)}} & (A,\betaa X) = \left(\intbeta\right)(A,X) \in \intg
\end{tikzcd}\]
as
\begin{equation}\label{intgamma-ax}
\left(\intgamma\right)_{(A,X)} = \left(\onea, \Gzerosub{A,\betaa X} \circ \Gammaax\right)
\end{equation}
in which:
\begin{itemize}
\item The second component is the composite
\[\begin{tikzcd}[column sep=large]
\alphaa X \ar{r}{\Gammaax} & \betaa X \ar{r}{\Gzerosub{A,\betaa X}} & \oneatog\betaa X \in GA.\end{tikzcd}\]
\item $\Gammaax = (\Gammaa)_X$ is the $X$-component of the natural transformation $\Gammaa$.
\item $\Gzerosub{A,\betaa X} = (\Gzero_A)_{\betaa X}$ is the $\betaa X$-component of the natural transformation $\Gzeroa$.
\end{itemize}
This finishes the definition of $\intgamma$. We show that $\intgamma$ is a vertical natural transformation in \cref{intgamma-vertical} below.
\end{definition}
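In the strict case, where the unit constraint $\Gzero$ is an identity, the component formula \eqref{intgamma-ax} reduces to the pair $(\onea, \Gammaax)$. A minimal sketch, again with hypothetical data:

```python
# Illustrative sketch (hypothetical data, strict case): the component
# of the induced vertical natural transformation at (A, X) is the
# morphism (1_A, Gamma_{A,X}), with no unit constraint to insert.

# Gamma_A recorded on objects: X |-> the morphism Gamma_{A,X} in G(A),
# encoded here as a (source, target) pair.
Gamma = {
    "A": {"x0": ("u0", "w0"), "x1": ("u1", "w1")},
    "B": {"y0": ("v0", "z0")},
}

def int_Gamma_component(pair):
    """Component of the induced transformation at (A, X)."""
    A, X = pair
    return ("1_" + A, Gamma[A][X])

print(int_Gamma_component(("A", "x0")))  # -> ('1_A', ('u0', 'w0'))
```

Note that the first coordinate is always an identity morphism, which is exactly the verticality condition checked at the end of the proof of \Cref{intgamma-vertical}.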
\begin{explanation}
The modification $\Gamma : \alpha\to\beta$ consists of a component $2$-cell, i.e., natural transformation
\[\begin{tikzpicture}[xscale=2.5, yscale=1]
\draw[0cell]
(0,0) node (x11) {FA}
(1,0) node (x12) {GA}
;
\draw[1cell]
(x11) edge[bend left=50] node{\alphaa} (x12)
(x11) edge[bend right=50] node[swap]{\betaa} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .45, rotate=-90, 2label={above,\Gammaa}] {\Rightarrow}
;
\end{tikzpicture}\]
for each object $A\in\C$, that satisfies the modification axiom
\begin{equation}\label{gamma-alpha-beta-modaxiom}
\begin{tikzpicture}[xscale=2,yscale=1.5,baseline={(eq.base)}]
\def\w{1.2}
\def\h{1}
\def\s{.8}
\draw[font=\Large] (0,0) node (eq) {=};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x11) {FA}
($(x11)+(1,0)$) node (x12) {FB}
($(x11)+(0,-1)$) node (x21) {GA}
($(x12)+(0,-1)$) node (x22) {GB}
;
\draw[1cell]
(x12) edge[swap] node (foff) {\ftof} (x11)
(x11) edge[bend right] node[swap] (ba) {\betaa} (x21)
(x12) edge[bend left] node (ab) {\alphab} (x22)
(x22) edge node (goff) {\ftog} (x21)
;
}
\begin{scope}[shift={(-\w-\s,\h/2)}]
\boundary
\draw[1cell]
(x11) edge[bend left] node (aa) {\alphaa} (x21)
;
\draw[2cell]
node[between=x11 and x21 at .6, rotate=180, 2label={below,\Gammaa}] {\Rightarrow}
node[between=x12 and goff at .5, rotate=135, 2label={below,\alphaf}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(\s,\h/2)}]
\boundary
\draw[1cell]
(x12) edge[bend right] node[swap] (bb) {\betab} (x22)
;
\draw[2cell]
node[between=x12 and x22 at .6, rotate=180, 2label={below,\Gammab}] {\Rightarrow}
node[between=foff and x21 at .7, rotate=135, 2label={below,\betaf}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
for each morphism $f : A \to B$ in $\C$.\dqed
\end{explanation}
\begin{lemma}\label{intgamma-vertical}
Suppose $\Gamma : \alpha\to\beta$ is a modification with
\begin{itemize}
\item $\alpha,\beta : F \to G$ strong transformations and
\item $F,G : \Cop\to\Cat$ lax functors.
\end{itemize}
Then $\intgamma$ as in
\[\begin{tikzpicture}[xscale=2.5, yscale=1]
\draw[0cell]
(0,0) node (x11) {\intf}
(1,0) node (x12) {\intg}
;
\draw[1cell]
(x11) edge[bend left=65] node{\intalpha} (x12)
(x11) edge[bend right=65] node[swap]{\intbeta} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .4, rotate=-90, 2label={above,\intgamma}] {\Rightarrow}
;
\end{tikzpicture}\]
is a vertical natural transformation with components as in \eqref{intgamma-ax}.
\end{lemma}
\begin{proof}
By \eqref{intalpha-fp} and \eqref{intgamma-ax}, the naturality of $\intgamma$ means that for each morphism
\[\begin{tikzcd}[column sep=large]
(A,X) \ar{r}{(f,p)} & (B,Y) \in \intf,
\end{tikzcd}\]
the diagram
\[\begin{tikzpicture}[xscale=5, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {(A,\alphaa X)}
($(x11)+(1,0)$) node (x12) {(A,\betaa X)}
($(x11)+(0,-1)$) node (x21) {(B,\alphab Y)}
($(x12)+(0,-1)$) node (x22) {(B,\betab Y)}
;
\draw[1cell]
(x11) edge node{\big(\onea, \Gzerosub{A,\betaa X}\circ \Gammasub{A,X}\big)} (x12)
(x11) edge node[swap]{\big(f, \alphafyinv\circ \alphaa p\big)} (x21)
(x12) edge node {\big(f, \betafyinv\circ \betaa p\big)} (x22)
(x21) edge node {\big(\oneb, \Gzerosub{B,\betab Y}\circ \Gammasub{B,Y}\big)} (x22)
;
\end{tikzpicture}\]
in $\intg$ is commutative. The first component is the equality $\oneb f = f\onea$. The second component is the outermost diagram in $GA$ below.
\[\begin{tikzpicture}[xscale=3.2, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {\alphaa X}
($(x11)+(1,0)$) node (x12) {\betaa X}
($(x12)+(1,0)$) node (x13) {\oneatog \betaa X}
($(x11)+(0,-1)$) node (x21) {\alphaa\ftof Y}
($(x12)+(0,-1)$) node (x22) {\betaa\ftof Y}
($(x13)+(0,-1)$) node (x23) {\oneatog \betaa \ftof Y}
($(x21)+(0,-1)$) node (x31) {\ftog\alphab Y}
($(x22)+(0,-1)$) node (x32) {\ftog\betab Y}
($(x23)+(0,-1)$) node (x33) {\oneatog\ftog\betab Y}
($(x31)+(0,-1)$) node (x41) {\ftog\betab Y}
($(x32)+(0,-1)$) node (x42) {\ftog\onebtog\betab Y}
($(x33)+(0,-1)$) node (x43) {\ftog\betab Y}
;
\draw[1cell]
(x11) edge node{\Gammasub{A,X}} (x12)
(x12) edge node{\Gzerosub{A,\betaa X}} (x13)
(x11) edge node[swap]{\alphaa p} (x21)
(x12) edge node[swap] {\betaa p} (x22)
(x13) edge node {\oneatog\betaa p} (x23)
(x21) edge node {\Gammasub{A,\ftof Y}} (x22)
(x22) edge node {\Gzerosub{A,\betaa\ftof Y}} (x23)
(x21) edge node[swap] {\alphafyinv} (x31)
(x22) edge node[swap] {\betafyinv} (x32)
(x23) edge node {\oneatog \betafyinv} (x33)
(x31) edge node {\ftog\Gammasub{B,Y}} (x32)
(x32) edge node {\Gzerosub{A,\ftog\betab Y}} (x33)
(x31) edge node[swap] {\ftog\Gammasub{B,Y}} (x41)
(x32) edge node[swap] {\ftog\Gzerosub{B,\betab Y}} (x42)
(x32) edge node {1} (x43)
(x33) edge node {\Gtwosub{\betab Y}} (x43)
(x41) edge node[swap] {\ftog\Gzerosub{B,\betab Y}} (x42)
(x42) edge node[swap] {\Gtwosub{\betab Y}} (x43)
;
\end{tikzpicture}\]
In the above diagram:
\begin{itemize}
\item The top two sub-diagrams are commutative by the naturality of $\Gammaa$ and $\Gzeroa$.
\item In the middle row, the two sub-diagrams are commutative by the modification axiom \eqref{gamma-alpha-beta-modaxiom} for $\Gamma$ and the naturality of $\Gzeroa$.
\item In the bottom row, the left square is commutative by definition. The two triangles are commutative by the lax left unity and the lax right unity axioms \eqref{f0-bicat} for $G$.
\end{itemize}
This shows that $\intgamma : \intalpha \to \intbeta$ is a natural transformation.
Finally, $\intgamma$ is a vertical natural transformation in the sense of \eqref{vertical-natural-tr}; i.e., the equality
\[1_{\Usubg} * \intgamma = 1_{\Usubf}\]
holds because both sides send an object $(A,X)\in\intf$ to $\onea \in \C$.
\end{proof}
\Cref{intgamma-vertical} implies that the Grothendieck construction $\int$ sends $2$-cells in $\Bicatpscopcat$ to $2$-cells in $\fibofc$. Next we show that the Grothendieck construction preserves identity $1$-cells and is locally a functor.
\begin{lemma}\label{grothendieck-local-functor}
Suppose $F,G : \Cop\to\Cat$ are lax functors.
\begin{enumerate}
\item\label{gro-preserves-idonecell}
Then $\intonef$ is the identity functor of $\intf$.
\item\label{gro-preserves-idtwocell}
Suppose $\alpha : F\to G$ is a strong transformation with identity modification $\onealpha$. Then $\int\onealpha$ in \Cref{intgamma-vertical} is the identity natural transformation of the functor $\intalpha$ in \Cref{intalpha-functor}.
\item\label{gro-preserves-vcomp}
Given modifications $\Gamma : \alpha \to \beta$ and $\Sigma : \beta \to \gamma$ with $\alpha,\beta,\gamma : F \to G$ strong transformations, there is an equality
\[\left(\sintsigma\right)\left(\sintgamma\right) = \sintsigmagamma\]
of natural transformations $\intalpha \to \intsmallgamma$.
\end{enumerate}
\end{lemma}
\begin{proof}
The identity transformation $\onef$ of $F$ is defined in \Cref{id-lax-transformation}, with
\[(\onef)_A = 1_{FA} \andspace (\onef)_f=1_{\ftof}\]
for each object $A$ and each morphism $f\in\C$. Assertion \eqref{gro-preserves-idonecell} now follows from \Cref{def:grothendieck-icell}.
Assertion \eqref{gro-preserves-idtwocell} follows from the equalities
\[\begin{split}
\left(\sint\onealpha\right)_{(A,X)}
&= \left(\onea, \Gzerosub{A,\alphaa X} \circ 1_{\alphaa X}\right)\\
&= \left(\onea, \Gzerosub{A,\alphaa X}\right)\\
&= 1_{(A,\alphaa X)}\\
&= 1_{\left(\intalpha\right)(A,X)}
\end{split}\]
in $\intg$ for each object $(A,X) \in \intf$. The first, third, and fourth equalities above follow from \eqref{intgamma-ax}, \eqref{fzeroax}, and \Cref{def:grothendieck-icell}, respectively.
For assertion \eqref{gro-preserves-vcomp}, we must show that for each object $(A,X)\in\intf$, the diagram
\[\begin{tikzpicture}[xscale=6.5, yscale=1.1]
\draw[0cell]
(0,0) node (x11) {\left(\sintalpha\right)(A,X)=(A,\alphaa X)}
($(x11)+(1,0)$) node (x12) {\left(\sintsmallgamma\right)(A,X)=(A,\smallgammaa X)}
($(x11)+(.5,-1)$) node (x21) {\left(\sintbeta\right)(A,X)=(A,\betaa X)}
($(x11)+(0,-1)$) node[inner sep=0pt] (sw) {}
($(x12)+(0,-1)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node{\left(\sintsigmagamma\right)_{(A,X)}} (x12)
(x11) edge[-,shorten >=-1pt] node[swap, pos=.75]{\left(\sintgamma\right)_{(A,X)}} (sw)
(sw) edge[shorten <=-1pt] (x21)
(x21) edge[-,shorten >=-1pt] (se)
(se) edge[shorten <=-1pt] node[swap, pos=.25] {\left(\sintsigma\right)_{(A,X)}} (x12)
;
\end{tikzpicture}\]
in $\intg$ is commutative. By \eqref{intgamma-ax}, the first component of the previous diagram is the equality $\onea\onea=\onea \in \C$. The second component is the outermost diagram in $GA$ below.
\[\begin{tikzpicture}[xscale=4.4, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {\alphaa X}
($(x11)+(1,0)$) node (x12) {\betaa X}
($(x12)+(1,0)$) node (x13) {\smallgammaa X}
($(x11)+(0,-1)$) node (x21) {\betaa X}
($(x12)+(0,-1)$) node (x22) {}
($(x13)+(0,-1)$) node (x23) {\oneatog \smallgammaa X}
($(x21)+(0,-1)$) node (x31) {\oneatog\betaa X}
($(x31)+(.6,0)$) node (x32) {\oneatog\smallgammaa X}
($(x32)+(.7,0)$) node (x33) {\oneatog\oneatog\smallgammaa X}
($(x23)+(0,-1)$) node (x34) {\oneatog \smallgammaa X}
;
\draw[1cell]
(x11) edge node{\Gammaax} (x12)
(x12) edge node{\Sigmaax} (x13)
(x11) edge node[swap] {\Gammaax} (x21)
(x12) edge[equal] (x21)
(x13) edge node[swap] {\Gzerosub{A,\smallgammaa X}} (x32)
(x13) edge node {\Gzerosub{A,\smallgammaa X}} (x23)
(x21) edge node[swap] {\Gzerosub{A,\betaa X}} (x31)
(x23) edge node[swap] {\Gzero} (x33)
(x23) edge node {1} (x34)
(x31) edge node[swap] {\oneatog\Sigmaax} (x32)
(x32) edge node[swap] {\oneatog\Gzerosub{A,\smallgammaa X}} (x33)
(x33) edge node[swap] {\Gtwosub{\smallgammaa X}} (x34)
;
\end{tikzpicture}\]
From the upper left to the lower right, the first three sub-diagrams are commutative by definition and the naturality of $\Gzeroa$. The lower right triangle is commutative by the lax left unity axiom \eqref{f0-bicat} of $G$.
\end{proof}
Next we check that the Grothendieck construction preserves horizontal composition of $1$-cells. Recall from \Cref{def:lax-tr-comp} the horizontal composite of lax transformations.
\begin{lemma}\label{gro-hcomp-onecell}
Suppose $F,G,H : \Cop\to\Cat$ are lax functors, and $\alpha : F \to G$ and $\beta : G \to H$ are strong transformations. Then the diagram
\[\begin{tikzpicture}[xscale=3, yscale=1]
\draw[0cell]
(0,0) node (x11) {\sintf}
($(x11)+(1,0)$) node (x12) {\sinth}
($(x11)+(.5,-1)$) node (x21) {\sintg}
;
\draw[1cell]
(x11) edge node {\sintbetaalpha} (x12)
(x11) edge node[swap]{\sintalpha} (x21)
(x21) edge node[swap] {\sintbeta} (x12)
;
\end{tikzpicture}\]
is commutative.
\end{lemma}
\begin{proof}
Both functors $\left(\intbeta\right)\left(\intalpha\right)$ and $\intbetaalpha$ send an object $(A,X)$ in $\intf$ to the object $(A,\betaa\alphaa X)$ in $\inth$.
For a morphism $f : A \to B$ in $\C$, by \Cref{def:lax-tr-comp} $(\beta\alpha)_f$ is the natural transformation given by the following pasting diagram.
\[\begin{tikzpicture}[xscale=3, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {FA}
($(x11)+(1,0)$) node (x12) {FB}
($(x11)+(0,-1)$) node (x21) {GA}
($(x12)+(0,-1)$) node (x22) {GB}
($(x21)+(0,-1)$) node (x31) {HA}
($(x22)+(0,-1)$) node (x32) {HB}
;
\draw[1cell]
(x12) edge node[swap] (ff) {\ftof} (x11)
(x11) edge node[swap]{\alphaa} (x21)
(x12) edge node {\alphab} (x22)
(x22) edge node[swap] (fg) {\ftog} (x21)
(x21) edge node[swap] {\betaa} (x31)
(x22) edge node {\betab} (x32)
(x32) edge node (fh) {\ftoh} (x31)
;
\draw[2cell]
node[between=ff and x21 at .6, rotate=135, 2label={below,\alphaf}] {\Rightarrow}
node[between=fg and x31 at .6, rotate=135, 2label={below,\betaf}] {\Rightarrow}
;
\end{tikzpicture}\]
So by \eqref{intalpha-fp}, the functor $\intbetaalpha$ sends a morphism $(f,p)\in\intf$ as in \eqref{fp-ax-by} to the morphism
\[\begin{tikzcd}
\left(A,\betaa\alphaa X\right) \ar{r}{(f,\cdots)} & \left(A,\betab\alphab Y\right) \in \sinth\end{tikzcd}\]
with second component the left-bottom composite in the following diagram in $HA$.
\[\begin{tikzcd}[column sep=huge]
\betaa\alphaa X \ar{d}[swap]{\betaa\alphaa p} \ar[bend left=20]{dr}{\betaa\left(\alphafyinv \circ \alphaa p\right)} &&\\
\betaa\alphaa \ftof Y \ar{r}{\betaa \alphafyinv} & \betaa\ftog\alphab Y \ar{r}{\betainv_{f,\alphab Y}} & \ftoh\betab\alphab Y
\end{tikzcd}\]
Also by \eqref{intalpha-fp}, the composite functor $\left(\intbeta\right)\left(\intalpha\right)$ sends the morphism $(f,p)$ to $(f,\cdots)$ with second component the shorter composite in the previous diagram. Since the triangle there is commutative by the functoriality of $\betaa$, we have shown that $\left(\intbeta\right)\left(\intalpha\right)$ and $\intbetaalpha$ agree on morphisms.
\end{proof}
Next we check that the Grothendieck construction preserves horizontal composition of $2$-cells. Recall from \Cref{def:modification-composition} the horizontal composite of modifications.
\begin{lemma}\label{gro-hcomp-twocell}
Suppose $\Gamma$ and $\Gamma'$ are modifications as in
\[\begin{tikzpicture}[xscale=2, yscale=1]
\draw[0cell]
(0,0) node (x11) {F}
($(x11)+(1,0)$) node (x12) {G}
($(x12)+(1,0)$) node (x13) {H}
;
\draw[1cell]
(x11) edge[bend left=50] node{\alpha} (x12)
(x11) edge[bend right=50] node[swap]{\beta} (x12)
(x12) edge[bend left=50] node{\alpha'} (x13)
(x12) edge[bend right=50] node[swap]{\beta'} (x13)
;
\draw[2cell]
node[between=x11 and x12 at .45, rotate=-90, 2label={above,\Gamma}] {\Rightarrow}
node[between=x12 and x13 at .45, rotate=-90, 2label={above,\Gamma'}] {\Rightarrow}
;
\end{tikzpicture}\]
with $\alpha,\alpha',\beta,\beta'$ strong transformations and $F,G,H : \Cop\to\Cat$ lax functors. Then there is an equality
\[\left(\intgammap\right) * \left(\intgamma\right) = \intgammapgamma\]
of natural transformations $\intalphapalpha \to \intbetapbeta$.
\end{lemma}
\begin{proof}
Suppose $(A,X)$ is an object in $\intf$. To prove the equality
\[\left(\intgammap * \intgamma\right)_{(A,X)}
= \big(\intgammapgamma\big)_{(A,X)} \in \sinth,\]
first observe that they both have first component $\onea$. By \eqref{modification-hcomp}, \eqref{intalpha-fp}, and \eqref{intgamma-ax}, their second components form the outermost diagram in
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\draw[0cell]
(0,0) node (x0) {\alphaa'\alphaa X}
($(x0)+(-.5,0)$) node (x11) {\betaa'\alphaa X}
($(x11)+(1,0)$) node (x12) {\betaa'\alphaa X}
($(x11)+(0,-1)$) node (x21) {\oneatoh\betaa'\alphaa X}
($(x12)+(0,-1)$) node (x22) {\betaa'\betaa X}
($(x21)+(0,-1)$) node (x31) {\oneatoh\betaa'\betaa X}
($(x22)+(0,-1)$) node (x32) {\oneatoh\betaa'\betaa X}
($(x31)+(0,-1)$) node (x41) {\oneatoh\betaa'\oneatog\betaa X}
($(x32)+(0,-1)$) node (x42) {\oneatoh\oneatoh\betaa'\betaa X}
;
\draw[1cell]
(x0) edge node[swap] {\Gammapsub{A,\alphaa X}} (x11)
(x0) edge node {\Gammapsub{A,\alphaa X}} (x12)
(x11) edge[equal, bend right=60] (x12)
(x11) edge node[swap]{\Hzerosub{A,\betaa'\alphaa X}} (x21)
(x12) edge node {\betaa'\Gammaax} (x22)
(x21) edge node[swap]{\oneatoh\betaa'\Gammaax} (x31)
(x22) edge node {\Hzerosub{A,\betaa'\betaa X}} (x32)
(x31) edge[equal, bend left=70] (x32)
(x31) edge node {\oneatoh\Hzerosub{A,\betaa'\betaa X}} (x42)
(x31) edge node[swap]{\oneatoh\betaa'\Gzerosub{A,\betaa X}} (x41)
(x42) edge node[swap] {\Htwosub{\betaa'\betaa X}} (x32)
(x41) edge node[swap] {\oneatoh(\betapsub{\onea,\betaa X})^{-1}} (x42)
;
\end{tikzpicture}\]
in $HA$. In the above diagram, from top to bottom:
\begin{itemize}
\item The first sub-diagram is commutative by definition.
\item The next sub-diagram is commutative by the naturality of $\Hzeroa$.
\item The next triangle is commutative by the lax right unity axiom \eqref{f0-bicat} of $H$.
\item The bottom triangle is the result of applying the functor $\oneatoh$ to the lax unity axiom \eqref{unity-transformation-pasting} of $\beta'$.
\end{itemize}
This proves the desired equality.
\end{proof}
\begin{proposition}\label{grothendieck-iifunctor}\index{Grothendieck construction!is a 2-functor}\index{2-functor!from the Grothendieck construction}
There is a $2$-functor
\[\begin{tikzcd}
\Bicatpscopcat \ar{r}{\int} & \fibofc
\end{tikzcd}\]
defined by
\begin{itemize}
\item \Cref{grothendieck-is-fibration} on objects,
\item \Cref{intalpha-cartesian} on $1$-cells, and
\item \Cref{intgamma-vertical} on $2$-cells,
\end{itemize}
with the $2$-categories $\Bicatpscopcat$ and $\fibofc$ defined in \Cref{subbicat-pseudofunctor,iicat-fibrations}, respectively.
\end{proposition}
\begin{proof}
The Grothendieck construction is locally a functor and preserves identity $1$-cells by \Cref{grothendieck-local-functor}. It preserves horizontal compositions of $1$-cells and of $2$-cells by \Cref{gro-hcomp-onecell,gro-hcomp-twocell}, respectively. Therefore, by \Cref{iifunctor} the Grothendieck construction is a $2$-functor.
\end{proof}
\begin{definition}\label{def:grothendieck-iifunctor}
The $2$-functor in \Cref{grothendieck-iifunctor} is called the \emph{Grothendieck construction}.
\end{definition}
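For instance, if $\C$ is the terminal category, then a pseudofunctor $F : \Cop\to\Cat$ amounts to a single category $F\ast$, and the Grothendieck construction recovers it: the assignment
\[(\ast, X) \mapsto X\]
defines an isomorphism of categories $\intf \iso F\ast$, under which the fibration $\Usubf$ corresponds to the unique functor from $F\ast$ to the terminal category.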
\section{From Fibrations to Pseudofunctors}
\label{sec:fibration-indexed-cat}
The purpose of this section is to prove that the Grothendieck construction in \Cref{grothendieck-iifunctor} is $1$-essentially surjective in the sense of \Cref{definition:2-equiv-terms}.
\begin{definition}\label{def:fiber-category}
For a functor $G : \A \to \B$ between $1$-categories and an object $Y\in\B$, the \index{fiber}\emph{fiber of $Y$} is the subcategory $\Ginv(Y)$ of $\A$ with
\begin{itemize}
\item objects $X\in\A$ such that $GX=Y$, and
\item morphisms $f : X_1 \to X_2 \in \A$ such that $GX_1=GX_2=Y$ and $Gf=1_Y$.
\end{itemize}
This finishes the definition of the category $\Ginv(Y)$.
\end{definition}
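For instance, for the codomain functor $\mathrm{cod} : \A^{\to} \to \A$, which sends an object of the arrow category $\A^{\to}$, i.e., a morphism of $\A$, to its codomain, the fiber of an object $Y\in\A$ is the slice category over $Y$:
\[\mathrm{cod}^{-1}(Y) = \A/Y.\]
Indeed, a morphism of $\A^{\to}$, i.e., a commutative square in $\A$, lies in this fiber precisely when its codomain component is $1_Y$.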
\begin{convention}\label{conv:fibration-cleavage}
For the rest of this section, suppose $P : \E\to\C$ is a fibration as in \Cref{def:fibration}. We fix a \emph{unitary} cleavage for $P$, which is always possible by \Cref{ex:fibration-id-morphism}. This means that for each pre-lift $\preliftyf$, we fix a Cartesian lift $\liftf : \yf \to Y$ such that the chosen Cartesian lift for $\preliftyone$ is $1_Y$ for each object $Y\in\E$.\dqed
\end{convention}
Next we:
\begin{enumerate}
\item Define a pseudofunctor $F : \Cop\to\Cat$ associated to the given fibration $P$ in \Cref{F-is-pseudofunctor}.
\item Check that $\intf$ and $P$ are isomorphic as fibrations in \Cref{varphi-phi-inverses}.
\end{enumerate}
\begin{definition}\label{def:fibration-pseudofunctor}
For the given fibration $P : \E\to\C$, define the assignments
\[(F,\Ftwo) : \Cop\to\Cat\]
as follows.
\begin{description}
\item[Objects] For each object $A\in\C$, define the category
\[FA = \Pinv(A).\]
\item[Morphisms] For a morphism $f : A \to B$ in $\C$, define a functor
\[\begin{tikzcd}
\Pinv(A) & \Pinv(B) \ar{l}[swap]{\ftof}
\end{tikzcd}\]
as follows.
\begin{itemize}
\item For each object $Y\in\Pinv(B)$, define
\begin{equation}\label{ftof-of-y}
\ftof Y = \yf \in \Pinv(A),
\end{equation}
which is the domain of the chosen Cartesian lift of $\preliftyf$.
\item For each morphism $e : Y \to Y' \in \Pinv(B)$, consider the chosen Cartesian lifts $\liftf$ of $\preliftyf$ and $\liftf'$ of $\preliftypf$. The pre-raise $\preraise{\liftf'}{e\liftf}{\onea}$, as pictured in
\begin{equation}\label{preraise-fe}
\begin{tikzpicture}[xscale=2.4, yscale=1.3, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\ftof Y=\yf}
($(x11)+(1,0)$) node (x12) {Y}
($(x11)+(0,-1)$) node (x21) {\ftof Y' =\ypf}
($(x12)+(0,-1)$) node (x22) {Y'}
($(x22)+(.3,.5)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.3,.5)$) node (y11) {A}
($(y11)+(.7,0)$) node (y12) {B}
($(y11)+(0,-1)$) node (y21) {A}
($(y12)+(0,-1)$) node (y22) {B,}
;
\draw[1cell]
(x11) edge node {\liftf} (x12)
(x11) edge[dashed, shorten <=-.15cm, shorten >=-.1cm] node[swap] {\exists !\,\ftof e} (x21)
(x12) edge node {e} (x22)
(x21) edge node {\liftf'} (x22)
(s) edge[|->] node {P} (t)
(y11) edge node {f} (y12)
(y11) edge node[swap] {\onea} (y21)
(y12) edge node {\oneb} (y22)
(y21) edge node {f} (y22)
;
\end{tikzpicture}
\end{equation}
has a unique raise $\ftof e$ because $\liftf'$ is a Cartesian morphism.
\end{itemize}
\item[Lax Functoriality Constraint]
For morphisms $f : A \to B$ and $g : B \to C$ in $\C$, define a natural isomorphism
\[\begin{tikzpicture}[xscale=4, yscale=1.5]
\draw[0cell]
(0,0) node (a) {\Pinv(A)}
($(a)+(1,0)$) node (c) {\Pinv(C)}
($(a)+(.5,.5)$) node (b) {\Pinv(B)}
;
\draw[1cell]
(c) edge[bend right] node[swap] {\gtof} (b)
(b) edge[bend right] node[swap] {\ftof} (a)
(c) edge[bend left=50] node (gf) {\gftof} (a)
;
\draw[2cell]
node[between=b and gf at .5, rotate=-90, 2label={above,\Ftwosub{f,g}}, 2label={below,\iso}] {\Rightarrow}
;
\end{tikzpicture}\]
as follows. For each object $Z\in \Pinv(C)$, consider the three chosen Cartesian lifts along the top row below.
\[\begin{tikzpicture}[xscale=2, yscale=1]
\draw[0cell]
(0,0) node (x11) {\zgcommaf}
($(x11)+(1,0)$) node (x12) {\zg}
($(x12)+(1,0)$) node (x13) {Z}
($(x13)+(.5,-.1)$) node (s) {}
($(s)+(0,-.8)$) node (t) {}
($(x13)+(1,0)$) node (x14) {\zgf}
($(x14)+(1,0)$) node (x15) {Z}
($(x11)+(0,-1)$) node (x21) {A}
($(x12)+(0,-1)$) node (x22) {B}
($(x13)+(0,-1)$) node (x23) {C}
($(x14)+(0,-1)$) node (x24) {A}
($(x15)+(0,-1)$) node (x25) {C}
;
\draw[1cell]
(x11) edge node {\liftf} (x12)
(x12) edge node {\liftg} (x13)
(x14) edge node {\liftgf} (x15)
(s) edge[|->] node{P} (t)
(x21) edge node {f} (x22)
(x22) edge node {g} (x23)
(x24) edge node {gf} (x25)
;
\end{tikzpicture}\]
By \Cref{cartesian-properties}\eqref{cartesian-properties-iii} and \eqref{cartesian-properties-iv}, there is a unique isomorphism
\[\begin{tikzcd}[column sep=large]
\zgcommaf \ar{r}{(\Ftwo_{f,g})_Z}[swap]{\iso} & \zgf \in \Pinv(A)
\end{tikzcd}\]
such that the diagram
\begin{equation}\label{ftwofg-z}
\begin{tikzcd}[column sep=large]
\zgcommaf \ar{d}[swap]{\liftf} \ar{r}{(\Ftwo_{f,g})_Z}[swap]{\iso} & \zgf \ar{d}{\liftgf}\\
\zg \ar{r}{\liftg} & Z\end{tikzcd}
\end{equation}
is commutative.
\end{description}
This finishes the definition of $(F,\Ftwo)$.
\end{definition}
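For instance, suppose $P$ is the codomain functor $\C^{\to}\to\C$ of a category $\C$ with chosen pullbacks. It is a fibration: a Cartesian lift of the pre-lift given by an object $Z \to B$ of the fiber over $B$ and a morphism $f : A \to B$ is the chosen pullback square of $Z \to B$ along $f$, and choosing the pullback along each identity morphism to be the identity square yields a unitary cleavage. In this case \Cref{def:fibration-pseudofunctor} yields $FA = \C/A$, the functor $\ftof : \C/B \to \C/A$ is given by pullback along $f$, and $(\Ftwosub{f,g})_Z$ is the canonical isomorphism comparing the iterated pullback along $g$ and then $f$ with the pullback along the composite $gf$.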
Recall from \Cref{def:lax-functors} that a strictly unitary pseudofunctor is a lax functor $(G,\Gtwo,\Gzero)$ with $\Gzero$ the identity and with $\Gtwo$ invertible.
\begin{lemma}\label{F-is-pseudofunctor}
In \Cref{def:fibration-pseudofunctor}, $(F,\Ftwo) : \Cop\to\Cat$ is a strictly unitary pseudofunctor.
\end{lemma}
\begin{proof}
For each morphism $f : A \to B$ in $\C$, we first check that \[\ftof : \Pinv(B) \to \Pinv(A)\] is a well-defined functor. It preserves identity morphisms because if $e=1_Y$, then the unique raise $\raiseof{\onea}$ in \eqref{preraise-fe} must be $1_{\yf}$.
For composable morphisms $e_1 : Y_0 \to Y_1$ and $e_2 : Y_1 \to Y_2$ in $\Pinv(B)$, consider the chosen Cartesian lift
\[\begin{tikzcd}
\subof{Y}{i,f} = \subof{(Y_i)}{f} \ar{r}{\liftfi} & Y_i\end{tikzcd}\]
of the pre-lift $\prelift{Y_i}{f}$ for $i=0,1,2$. The pre-raise $\preraise{\liftftwo}{e_2e_1\liftfzero}{\onea}$,
as pictured in
\[\begin{tikzpicture}[xscale=2.4, yscale=1.3, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\yzerof}
($(x11)+(1,0)$) node (x12) {Y_0}
($(x11)+(0,-1)$) node (x21) {\yonef}
($(x12)+(0,-1)$) node (x22) {Y_1}
($(x21)+(0,-1)$) node (x31) {\ytwof}
($(x22)+(0,-1)$) node (x32) {Y_2}
($(x22)+(.3,0)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.3,1)$) node (y11) {A}
($(y11)+(.7,0)$) node (y12) {B}
($(y11)+(0,-1)$) node (y21) {A}
($(y12)+(0,-1)$) node (y22) {B}
($(y21)+(0,-1)$) node (y31) {A}
($(y22)+(0,-1)$) node (y32) {B,}
($(x11)+(-.6,0)$) node[inner sep=0pt] (ne) {}
($(x31)+(-.6,0)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node {\liftfzero} (x12)
(x11) edge[dashed, shorten <=-.1cm, shorten >=-.1cm] node[swap] {\exists !\,\ftof e_1} (x21)
(x12) edge node {e_1} (x22)
(x21) edge node {\liftfone} (x22)
(x21) edge[dashed, shorten <=-.1cm, shorten >=-.1cm] node[swap] {\exists !\,\ftof e_2} (x31)
(x22) edge node {e_2} (x32)
(x31) edge node {\liftftwo} (x32)
(s) edge[|->] node {P} (t)
(y11) edge node {f} (y12)
(y11) edge node[swap] {\onea} (y21)
(y12) edge node {\oneb} (y22)
(y21) edge node {f} (y22)
(y21) edge node[swap] {\onea} (y31)
(y22) edge node {\oneb} (y32)
(y31) edge node {f} (y32)
(x11) edge[-,dashed, shorten >=-1pt] (ne)
(ne) edge[-,dashed, shorten <=-1pt, shorten >=-1pt] node[swap] {\exists !\,\ftof (e_2e_1)} (se)
(se) edge[dashed, shorten <=-1pt] (x31)
;
\end{tikzpicture}\]
has a unique raise $\ftof(e_2e_1)$. Since $(\ftof e_2)(\ftof e_1)$ is also a raise of this pre-raise, it is equal to $\ftof(e_2e_1)$ by uniqueness. This shows that $\ftof$ is a functor.
For the identity morphism $\onea$ of an object $A\in\C$, $\oneatof$ is the identity functor of $FA=\Pinv(A)$ because the chosen cleavage for $P$ is unitary. So we define the lax unity constraint $\Fzero$ as the identity.
The naturality of $\Ftwo$ in the sense of \eqref{f2-bicat-naturality} is trivial because $\C$ has only identity $2$-cells.
The lax left and right unity axioms \eqref{f0-bicat} for $F$ mean that $\Ftwosub{1,g}$ and $\Ftwosub{f,1}$ are identities. They are true by (i) the unitarity of the chosen cleavage and (ii) the uniqueness property in \Cref{cartesian-properties}\eqref{cartesian-properties-iii}.
It remains to check the lax associativity axiom \eqref{f2-bicat} for $F$. Consider morphisms
\[\begin{tikzcd}
A \ar{r}{f} & B \ar{r}{g} & C \ar{r}{h} & D \in \C,\end{tikzcd}\]
the corresponding functors
\[\begin{tikzcd}
\Pinv(A) & \Pinv(B) \ar{l}[swap]{\ftof} & \Pinv(C) \ar{l}[swap]{\gtof} & \Pinv(D), \ar{l}[swap]{\htof}\end{tikzcd}\]
an object $W \in \Pinv(D)$, and the four chosen Cartesian lifts along the top row below.
\[\begin{tikzpicture}[xscale=1.7, yscale=1]
\draw[0cell]
(0,0) node (x11) {\whgfcomma}
($(x11)+(1,0)$) node (x12) {\whgcomma}
($(x12)+(1,0)$) node (x13) {\wh}
($(x13)+(1,0)$) node (x14) {W}
($(x14)+(.4,-.1)$) node (s) {}
($(s)+(0,-.8)$) node (t) {}
($(x14)+(1,0)$) node (x15) {\whgf}
($(x15)+(1,0)$) node (x16) {W}
($(x11)+(0,-1)$) node (x21) {A}
($(x12)+(0,-1)$) node (x22) {B}
($(x13)+(0,-1)$) node (x23) {C}
($(x14)+(0,-1)$) node (x24) {D}
($(x15)+(0,-1)$) node (x25) {A}
($(x16)+(0,-1)$) node (x26) {D}
;
\draw[1cell]
(x11) edge node {\liftf} (x12)
(x12) edge node {\liftg} (x13)
(x13) edge node {\lifth} (x14)
(x15) edge node {\lifthgf} (x16)
(s) edge[|->] node{P} (t)
(x21) edge node {f} (x22)
(x22) edge node {g} (x23)
(x23) edge node {h} (x24)
(x25) edge node {hgf} (x26)
;
\end{tikzpicture}\]
The pre-raise $\preraise{\lifthgf}{\lifth \liftg \liftf}{\onea}$, as pictured in
\[\begin{tikzpicture}[xscale=2, yscale=1.3, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\whgfcomma}
($(x11)+(1,0)$) node (x12) {\whgcomma}
($(x12)+(1,0)$) node (x13) {\wh}
($(x11)+(0,-1)$) node (x21) {\whgf}
($(x12)+(0,-1)$) node (x22) {}
($(x13)+(0,-1)$) node (x23) {W}
($(x23)+(.5,.5)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.5,.5)$) node (y11) {A}
($(y11)+(.7,0)$) node (y12) {C}
($(y11)+(0,-1)$) node (y21) {A}
($(y12)+(0,-1)$) node (y22) {D,}
;
\draw[1cell]
(x11) edge node {\liftf} (x12)
(x11) edge[dashed, shorten >=-.1cm] node[swap] {\exists !\,\raiseof{\onea}} (x21)
(x12) edge node {\liftg} (x13)
(x13) edge node {\lifth} (x23)
(x21) edge node {\lifthgf} (x23)
(s) edge[|->] node {P} (t)
(y11) edge node {gf} (y12)
(y11) edge node[swap] {\onea} (y21)
(y12) edge node {h} (y22)
(y21) edge node {hgf} (y22)
;
\end{tikzpicture}\]
has a unique raise $\raiseof{\onea}$ because $\lifthgf$ is a Cartesian morphism. By \eqref{ftwofg-z}, both composites in the lax associativity axiom \eqref{f2-bicat} for $F$, when applied to $W$, are raises for this pre-raise. So they are equal by uniqueness.
\end{proof}
To show that the Grothendieck construction $\intf$ in \Cref{grothendieck-cat} is isomorphic to $P$ as a fibration, we now define a functor $\intf \to \E$ that will be shown to be an isomorphism of fibrations.
\begin{definition}\label{def:varphi-grothendieck}
Suppose $F : \Cop\to\Cat$ is the strictly unitary pseudofunctor in \Cref{F-is-pseudofunctor} associated to the given fibration $P : \E\to\C$. Define assignments
\begin{equation}\label{varphi-grothendieck}
\begin{tikzcd}
\intf \ar{r}{\varphi} & \E\end{tikzcd}
\end{equation}
as follows.
\begin{description}
\item[Objects] For each object $\big(A\in\C,X\in\Pinv(A)\big)$ in $\intf$, define the object
\begin{equation}\label{varphi-ax}
\varphi (A,X) = X \in \E.
\end{equation}
\item[Morphisms] For a morphism $(f,p) : (A,X) \to (B,Y)$ in $\intf$, define $\varphi(f,p)$ as the composite
\begin{equation}\label{varphi-fp}
\begin{tikzpicture}[xscale=3, yscale=1.5, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {\varphi(A,X)=X}
($(x11)+(1,0)$) node (x12) {\ftof Y=\yf}
($(x12)+(1,0)$) node (x13) {Y=\varphi(B,Y)}
($(x11)+(0,.5)$) node[inner sep=0pt] (nw) {}
($(x13)+(0,.5)$) node[inner sep=0pt] (ne) {}
;
\draw[1cell]
(x11) edge node {p} (x12)
(x12) edge node {\liftf} (x13)
(x11) edge[-,shorten >=-1pt] (nw)
(nw) edge[-, shorten <=-1pt, shorten >=-1pt] node {\varphi(f,p)} (ne)
(ne) edge[shorten <=-1pt] (x13)
;
\end{tikzpicture}
\end{equation}
in $\E$, with $\liftf$ the chosen Cartesian lift of $\preliftyf$, and the middle equality from \eqref{ftof-of-y}.
\end{description}
This finishes the definition of $\varphi$.
\end{definition}
Recall from \Cref{def:grothendieck-over-c} the first-factor projection functor $\Usubf : \intf \to \C$.
\begin{lemma}\label{varphi-functor}
$\varphi : \intf \to \E$ in \Cref{def:varphi-grothendieck} is a functor such that the diagram
\begin{equation}\label{usubf-varphi-p}
\begin{tikzcd}[column sep=small]
\intf \ar{rr}{\varphi} \ar{dr}[swap]{\Usubf} && \E \ar{dl}{P}\\
& \C &
\end{tikzcd}
\end{equation}
is commutative.
\end{lemma}
\begin{proof}
First we check that $\varphi$ is a functor. The identity morphism of an object $(A,X)\in\intf$ is \[(\onea, \Fzerosub{A,X}) = (\onea,\onex).\] There are equalities
\[\varphi(\onea,\onex)=\liftonea\onex=\onex\onex=\onex,\]
with the first equality from \eqref{varphi-fp}, and with the second equality from the unitarity of the chosen cleavage of $P$.
Suppose
\[\begin{tikzcd}
(A,X) \ar{r}{(f,p)} & (B,Y) \ar{r}{(g,q)} & (C,Z) \in \intf\end{tikzcd}\]
are composable morphisms. To see that $\varphi$ preserves their composite, first let us describe the morphism $\varphi\big((g,q)(f,p)\big)$. The morphism $q : Y \to \gtof Z = \zg$ is in $\Pinv(B)$. With $q$ in place of $e$ in \eqref{preraise-fe},
\[\begin{tikzcd}
\yf=\ftof Y \ar{r}{\ftof q} & \ftof\gtof Z= \zgcommaf \in\Pinv(A)\end{tikzcd}\]
is the unique morphism such that the diagram
\begin{equation}\label{ftof-q}
\begin{tikzcd}
\yf \ar{r}{\liftf_Y} \ar{d}[swap]{\ftof q} & Y \ar{d}{q}\\
\zgcommaf \ar{r}{\liftf_{\zg}} & \zg
\end{tikzcd}
\end{equation}
in $\E$ commutes, with $\liftf_Y$ and $\liftf_{\zg}$ the chosen Cartesian lifts of $\preliftyf$ and $\prelift{\zg}{f}$, respectively. By \eqref{intf-composite} and \eqref{varphi-fp},
\[\varphi\big((g,q)(f,p)\big) = \varphi\big(gf, \Ftwosub{Z} \circ \ftof q\circ p\big)\]
is the left-bottom composite in the diagram
\[\begin{tikzcd}
X \ar{d}[swap]{p} \ar{r}{p} & \yf \ar{r}{\liftf} & Y \ar{d}{q}\\
\yf \ar[equal]{ur} \ar{d}[swap]{\ftof q} && \zg \ar{d}{\liftg}\\
\zgcommaf \ar[bend left=20]{urr}[near end]{\liftf_{\zg}} \ar{r}{\Ftwo} & \zgf \ar{r}{\liftgf} & Z
\end{tikzcd}\]
in $\E$, with $\liftg$ and $\liftgf$ the chosen Cartesian lifts of $\prelift{Z}{g}$ and $\prelift{Z}{gf}$, respectively. In this diagram:
\begin{itemize}
\item The top composite is $\varphi(f,p)$, and the right composite is $\varphi(g,q)$.
\item The top left triangle is commutative by definition.
\item The middle sub-diagram is commutative by \eqref{ftof-q}.
\item The lower right triangle is commutative by \eqref{ftwofg-z}.
\end{itemize}
Therefore, $\varphi$ preserves composites.
To check the commutativity of the diagram \eqref{usubf-varphi-p}, suppose $(A,X)\in\intf$ is an object. There are equalities
\[P\varphi(A,X) = P(X)=A=\Usubf(A,X)\] by \eqref{varphi-ax} because $X\in FA=\Pinv(A)$. For a morphism $(f,p) : (A,X) \to (B,Y) \in \intf$, there are equalities
\[P\varphi(f,p) = P(\liftf p) = (P\liftf)(Pp) = f\onea=f=\Usubf(f,p)\] by \eqref{varphi-fp} because $p\in FA=\Pinv(A)$. Therefore, $P\varphi=\Usubf$.
\end{proof}
Next we define a functor that will be shown to be the inverse of the functor $\varphi$.
\begin{definition}\label{def:phi-grothendieck}
Suppose $F : \Cop\to\Cat$ is the strictly unitary pseudofunctor in \Cref{F-is-pseudofunctor} associated to the given fibration $P : \E\to\C$. Define assignments
\begin{equation}\label{phi-grothendieck}
\begin{tikzcd}
\E \ar{r}{\phi} & \intf\end{tikzcd}
\end{equation}
as follows.
\begin{description}
\item[Objects] For each object $X\in3$, define the object
\begin{equation}\label{phix-object}
\phi X = \big(PX\in\C, X\in\Pinv(PX)\big)\in\intf.
\end{equation}
\item[Morphisms] For a morphism $q : X \to Y$ in $3$, define the morphism
\begin{equation}\label{phi-morphism-def}
\begin{tikzcd}[column sep=huge]
(PX,X) \ar{r}{\phi q\,=\,(\qsubp,\qstar)} & (PY,Y)\in\intf
\end{tikzcd}
\end{equation}
in which:
\begin{itemize}
\item $\qsubp : PX \to PY\in\C$ is the image of $q$ in $\C$.
\item $\qstar\in\Pinv(PX)$ is the unique raise of the pre-raise $\preraise{\liftqsubp}{q}{1_{PX}}$, as pictured below.
\begin{equation}\label{phiq-morphism}
\begin{tikzpicture}[xscale=2.5, yscale=1.3, baseline={(s.base)}]
\draw[0cell]
(0,0) node (a) {\qsubptof Y=\yqp}
($(a)+(1,0)$) node (b) {Y}
($(a)+(.5,1)$) node (c) {X}
($(b)+(.3,.5)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.3,-.5)$) node (a2) {PX}
($(a2)+(1,0)$) node (b2) {PY}
($(a2)+(.5,1)$) node (c2) {PX}
;
\draw[1cell]
(a) edge node[pos=.4] {\liftqsubp} (b)
(c) edge[dashed, shorten >=-.1cm] node[swap] {\exists !\, \qstar} (a)
(c) edge node {q} (b)
(s) edge[|->] node{P} (t)
(a2) edge node {\qsubp} (b2)
(c2) edge node[swap] {1} (a2)
(c2) edge node {\qsubp} (b2)
;
\end{tikzpicture}
\end{equation}
Here $\liftqsubp$ is the chosen Cartesian lift of $\prelift{Y}{\qsubp}$ with respect to $P$, and the equality $\qsubptof Y=\yqp$ is from \eqref{ftof-of-y}.
\end{itemize}
\end{description}
This finishes the definition of $\phi$.
\end{definition}
\begin{lemma}\label{phi-functor}
$\phi : \E\to\intf$ in \Cref{def:phi-grothendieck} is a functor.
\end{lemma}
\begin{proof}
For the identity morphism $\onex$ of an object $X\in\E$, there are equalities
\[\phi \onex = \big(1_{PX}, \starof{\onex}\big) = (1_{PX},\onex) = 1_{(PX,X)} = 1_{\phi X}\]
in which:
\begin{itemize}
\item The first and the last equalities follow from the definitions \eqref{phi-morphism-def} and \eqref{phix-object}.
\item The second equality follows from the unitarity of the chosen cleavage of $P$ and the uniqueness of the raise in \eqref{phiq-morphism}.
\item The third equality follows from \eqref{fzeroax} and that $\Fzero=1$.
\end{itemize}
So $\phi$ preserves identity morphisms.
For composable morphisms
\[\begin{tikzcd}
X \ar{r}{q} & Y \ar{r}{r} & Z \in \E,\end{tikzcd}\]
we must show that
\begin{equation}\label{rstar-qstar}
(\rsubp,\rstar)\circ (\qsubp,\qstar) = \big(\rqsubp,\rqstar\big)\in \intf.
\end{equation}
On the left-hand side, the first component of the composite is $\rsubp\qsubp = \rqsubp$ in $\C$. By \eqref{intf-composite}, its second component is the left-bottom composite in the diagram
\begin{equation}\label{phi-preserves-comp}
\begin{tikzpicture}[xscale=2.5, yscale=1.3, baseline={(qstar.base)}]
\draw[0cell]
(0,0) node (x11) {X}
($(x11)+(1,0)$) node (x12) {Y}
($(x12)+(1,0)$) node (x13) {\zrp}
($(x13)+(1,0)$) node (x14) {Z}
($(x11)+(0,-1)$) node (x21) {\yqp}
($(x12)+(0,-1)$) node (x22) {\yqp}
($(x13)+(0,-1)$) node (x23) {\zrpqp}
($(x14)+(0,-1)$) node (x24) {\zrqp}
($(x12)+(0,.5)$) node[inner sep=0pt] (s){}
($(x14)+(0,.5)$) node[inner sep=0pt] (t){}
;
\draw[1cell]
(x11) edge node {q} (x12)
(x12) edge node {\rstar} (x13)
(x13) edge node {\liftrsubp} (x14)
(x11) edge node (qstar) {\qstar} (x21)
(x22) edge node{\liftqsubp} (x12)
(x23) edge node {\liftqsubp} (x13)
(x24) edge node[swap] {\liftrqsubp} (x14)
(x21) edge[equal] (x22)
(x22) edge node {\qsubptof\rstar} (x23)
(x23) edge node {\Ftwo} (x24)
(x12) edge[-,shorten >=-1pt] (s)
(s) edge[-,shorten <=-1pt, shorten >=-1pt] node{r} (t)
(t) edge[shorten <=-1pt] (x14)
;
\end{tikzpicture}
\end{equation}
in $\E$. In this diagram:
\begin{itemize}
\item The top rectangle is the left commutative triangle in \eqref{phiq-morphism} with $r$ in place of $q$.
\item The left-most square is the left commutative triangle in \eqref{phiq-morphism}.
\item The middle square is the left commutative square in \eqref{preraise-fe} with $f$ and $e$ replaced by $\qsubp$ and $\rstar$, respectively. The two vertical morphisms $\liftqsubp$ are the chosen Cartesian lifts of the pre-lifts $\prelift{Y}{\qsubp}$ and $\prelift{\zrp}{\qsubp}$, respectively.
\item The right-most square is the commutative diagram \eqref{ftwofg-z} with $f$ and $g$ replaced by $\qsubp$ and $\rsubp$, respectively.
\end{itemize}
To show that the second components of the two sides in \eqref{rstar-qstar} are equal, we must show that the left-bottom composite in \eqref{phi-preserves-comp} is equal to $\rqstar$. The images under $P$ of $\qstar$, $\qsubptof\rstar$, and $\Ftwo$ are all equal to $1_{PX}$. Therefore, by the uniqueness of $\rqstar$ in \eqref{phiq-morphism}, it remains to show that the outermost diagram in \eqref{phi-preserves-comp} is commutative, which is already observed in the previous paragraph. So $\phi$ preserves composites of morphisms.
\end{proof}
\begin{lemma}\label{varphi-phi-inverses}
Consider the functors
\[\begin{tikzcd}[column sep=large]
\intf \ar[shift left=.5ex]{r}{\varphi} & \E \ar[shift left=.5ex]{l}{\phi}
\end{tikzcd}\]
in \Cref{varphi-functor,phi-functor}.
\begin{enumerate}
\item They are inverses of each other.
\item With respect to the functors $\Usubf : \intf \to \C$ and $P : \E\to\C$, both $\varphi$ and $\phi$ are Cartesian functors.
\end{enumerate}
\end{lemma}
\begin{proof}
For the first assertion, observe that the functors are inverses of each other on objects by the definitions \eqref{varphi-ax} and \eqref{phix-object}.
For a morphism $(f,p) : (A,X) \to (B,Y)$ in $\intf$, by \eqref{varphi-fp}
\[\varphi(f,p) = \liftf p \withspace p\in FA=\Pinv(A).\]
So there are equalities
\begin{equation}\label{P-of-liftf-p}
P(\liftf p) = (P\liftf)(Pp) =f\onea=f\in\C.
\end{equation}
By \eqref{phiq-morphism}, $\starof{(\liftf p)}$ is the unique raise of the pre-raise $\preraise{\liftf}{\liftf p}{1_{PX}}$, as pictured below.
\begin{equation}\label{unique-raise=p}
\begin{tikzpicture}[xscale=2.5, yscale=1.3, baseline={(s.base)}]
\draw[0cell]
(0,0) node (a) {\yf}
($(a)+(1,0)$) node (b) {Y}
($(a)+(.5,1)$) node (c) {X}
($(b)+(.3,.5)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(.3,-.5)$) node (a2) {PX}
($(a2)+(1,0)$) node (b2) {PY}
($(a2)+(.5,1)$) node (c2) {PX}
;
\draw[1cell]
(a) edge node {\liftf} (b)
(c) edge[dashed, shorten >=-.1cm] node[swap] {\exists !\, p} (a)
(c) edge node {\liftf p} (b)
(s) edge[|->] node{P} (t)
(a2) edge node {f} (b2)
(c2) edge node[swap] {1} (a2)
(c2) edge node {f} (b2)
;
\end{tikzpicture}
\end{equation}
By uniqueness this raise must be $p$. So there are equalities
\[\phi\varphi(f,p) = \phi(\liftf p) = \big(P(\liftf p),\starof{(\liftf p)}\big) = (f,p)\]
by \eqref{varphi-fp}, \eqref{phi-morphism-def}, \eqref{P-of-liftf-p}, and the left commutative triangle in \eqref{unique-raise=p}. This shows that $\phi\varphi$ is the identity functor of $\intf$.
Finally, for a morphism $q : X \to Y$ in $\E$, there are equalities
\[\varphi\phi q = \varphi(\qsubp,\qstar) = (\liftqsubp)(\qstar) = q\]
by \eqref{phi-morphism-def}, \eqref{varphi-fp}, and the left commutative triangle in \eqref{phiq-morphism}, respectively. We have shown that $\varphi$ and $\phi$ are inverse functors of each other.
For the second assertion, observe that there is a commutative triangle
\[\begin{tikzcd}[column sep=small]
\intf \ar{rr}{\varphi}[swap]{\iso} \ar{dr}[swap]{\Usubf} && \E \ar{dl}{P}\\
& \C &
\end{tikzcd}\]
by \eqref{usubf-varphi-p}, with $\varphi$ an isomorphism by the first assertion. So $\varphi$ and its inverse are both Cartesian functors by \Cref{cartesian-iso}.
\end{proof}
\begin{proposition}\label{grothendieck-one-esssurjective}\index{Grothendieck construction!1-essential surjectivity}
The $2$-functor in \Cref{grothendieck-iifunctor}, i.e., the Grothendieck construction, is $1$-essentially surjective in the sense of \Cref{definition:2-equiv-terms}.
\end{proposition}
\begin{proof}
Given a fibration $P : \E\to\C$, we:
\begin{itemize}
\item picked a unitary cleavage in \Cref{conv:fibration-cleavage};
\item constructed a strictly unitary pseudofunctor $F : \Cop\to\Cat$ in \Cref{F-is-pseudofunctor};
\item showed that the fibrations $\Usubf : \intf \to \C$ and $P : \E\to\C$ are isomorphic via the Cartesian functor $\varphi$ in \Cref{varphi-phi-inverses}.
\end{itemize}
So the Grothendieck construction is $1$-essentially surjective.
\end{proof}
\section{\texorpdfstring{$1$}{1}-Fully Faithfulness}
\label{sec:grothendieck-ifully-faithful}
The purpose of this section is to show that the Grothendieck construction is $1$-fully faithful on $1$-cells in the sense of \Cref{definition:2-equiv-terms}. Recall from \Cref{def:lax-functors} that a pseudofunctor is a lax functor whose laxity constraints are invertible.
\begin{convention}\label{conv:cartesian-functor-h}
For the rest of this section, suppose
\[(F,\Ftwo,\Fzero), (G,\Gtwo,\Gzero) : \Cop\to\Cat\]
are pseudofunctors, and $H$ as in
\begin{equation}\label{ufugh}
\begin{tikzcd}[column sep=small]
\intf \ar{dr}[swap]{\Usubf} \ar{rr}{H} && \intg \ar{dl}{\Usubg}\\
& \C &
\end{tikzcd}
\end{equation}
is a Cartesian functor in the sense of \Cref{def:cartesian-functor}.\dqed
\end{convention}
\begin{explanation}\label{expl:cartesian-h}
Let us spell out the details of, and fix some notation for, the functor $H$.
\begin{description}
\item[Objects] $H$ sends each object $(A,X)\in\intf$ to an object
\begin{equation}\label{cartesian-h-objects}
H(A,X) = (A,\Hsubtwo X) \in \intg
\end{equation}
for some object $\Hsubtwo X\in GA$. The first component is $A\in\C$ by the commutativity of \eqref{ufugh}.
\item[Morphisms] $H$ sends each morphism $(f,p) : (A,X) \to (B,Y)$ in $\intf$, with $p : X \to \ftof Y$ in $FA$, to a morphism
\begin{equation}\label{cartesian-h-morphisms}
\begin{tikzcd}[column sep=large]
(A,\Hsubtwo X \in GA) \ar{r}{(f,\Hsubtwo p)} & (B,\Hsubtwo Y \in GB) \in\intg
\end{tikzcd}
\end{equation}
for some morphism
\[\begin{tikzcd}
\Hsubtwo X \ar{r}{\Hsubtwo p} & \ftog(\Hsubtwo Y) \in GA.\end{tikzcd}\]
The first component of $H(f,p)$ is $f\in\C$ by the commutativity of \eqref{ufugh}.
\item[Identities] By definition \eqref{fzeroax}, $H$ sends the identity morphism $(\onea,\Fzeroax)$ of an object $(A,X)$ in $\intf$ to the identity morphism
\begin{equation}\label{cartesian-h-identity}
\big(\onea,\Hsubtwo(\Fzeroax)\big) = \big(\onea, \Gzerosub{A,\Hsubtwo X}\big)
\end{equation}
of $(A,\Hsubtwo X)$ in $\intg$.
\item[Composites] For composable morphisms
\[\begin{tikzcd}
(A,X) \ar{r}{(f,p)} & (B,Y) \ar{r}{(g,q)} & (C,Z) \in \intf,\end{tikzcd}\]
by \eqref{intf-composite} the second component of the equality
\[(g,\Hsubtwo q)\circ (f,\Hsubtwo p) = H\big((g,q)\circ(f,p)\big) \in\intg\]
means the commutativity of the diagram
\begin{equation}\label{cartesian-h-composite}
\begin{tikzcd}[column sep=huge]
\Hsubtwo X \ar{d}[swap]{\Hsubtwo p} \ar{r}{\Hsubtwo(\Ftwosub{Z} \circ \ftof q \circ p)} & \gftog(\Hsubtwo Z)\\
\ftog(\Hsubtwo Y) \ar{r}{\ftog(\Hsubtwo q)} & \ftog\gtog(\Hsubtwo Z) \ar{u}[swap]{(\Gtwosub{f,g})_{\Hsubtwo Z}}\end{tikzcd}
\end{equation}
in $GA$.
\end{description}
In the rest of this section, we will use these notations for $H$.\dqed
\end{explanation}
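For example, when $F$ and $G$ are strict functors, so that the constraints $\Fzero$, $\Ftwo$, $\Gzero$, and $\Gtwo$ are all identities, the diagram \eqref{cartesian-h-composite} reduces to the equality
\[\Hsubtwo(\ftof q \circ p) = \ftog(\Hsubtwo q) \circ \Hsubtwo p \in GA,\]
which says that in this case $\Hsubtwo$ is strictly compatible with composition and with reindexing along $f$.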
The following preliminary observations will be used several times in the rest of this section.
\begin{lemma}\label{etofy-def}
Suppose $f : A \to B$ is a morphism in $\C$, and $Y\in FB$ is an object.
\begin{enumerate}
\item The image of the morphism
\[\begin{tikzcd}[column sep=large]
(A,\ftof Y) \ar{r}{(f,1_{\ftof Y})} & (B,Y) \in \intf\end{tikzcd}\]
under $H$, namely
\begin{equation}\label{htwo-oneftofy}
\begin{tikzcd}[column sep=huge]
\big(A,\Htwoftofy\big) \ar{r}{\big(f,\Htwooneftofy\big)} & (B,\Htwoy) \in\intg,\end{tikzcd}
\end{equation}
is a Cartesian morphism with respect to $\Usubg$.
\item There is a unique morphism $\etofy$ such that the diagram
\begin{equation}\label{etofy-diagram}
\begin{tikzpicture}[xscale=4.5, yscale=1.4, baseline={(e.base)}]
\draw[0cell]
(0,0) node (x11) {\ftoghtwoy}
($(x11)+(1,0)$) node (x12) {\ftoghtwoy}
($(x11)+(0,-1)$) node (x21) {\oneatog\Htwoftofy}
($(x12)+(0,-1)$) node (x22) {\oneatog\ftoghtwoy}
;
\draw[1cell]
(x11) edge node {1_{\ftoghtwoy}} (x12)
(x11) edge node[swap] (e) {\etofy} (x21)
(x22) edge node[swap] {(\Gtwosub{\onea,f})_{\Htwoy}} (x12)
(x21) edge node {\oneatog(\Htwooneftofy)} (x22)
;
\end{tikzpicture}
\end{equation}
in $GA$ is commutative.
\end{enumerate}
\end{lemma}
\begin{proof}
The first assertion follows from the assumption that $H$ is a Cartesian functor and \Cref{grothendieck-is-fibration}\eqref{fone-cartesian}, which says that $(f,1_{\ftof Y})$ is a Cartesian morphism with respect to $\Usubf$.
For the second assertion, consider the pre-raise
\[\preraise{(f,\Htwooneftofy)}{(f,\oneftoghtwoy)}{\onea}\]
as displayed below.
\begin{equation}\label{etofy-definition}
\begin{tikzpicture}[xscale=4, yscale=1.5, baseline={(e.base)}]
\draw[0cell]
(0,0) node (a) {\big(A,\Htwoftofy\big)}
($(a)+(.9,0)$) node (b) {(B,\Htwoy)}
($(a)+(.5,.9)$) node (c) {\big(A, \ftog(\Htwoy)\big)}
($(b)+(.2,.9)$) node (s) {}
($(s)+(.3,0)$) node (t) {}
($(t)+(.2,-.9)$) node (a2) {A}
($(a2)+(.3,0)$) node (b2) {B}
($(a2)+(.15,.9)$) node (c2) {A}
;
\draw[1cell]
(a) edge node {(f,\Hsubtwo(\oneftofy))} (b)
(c) edge[dashed, shorten >=-.1cm] node[swap, pos=.8] (e) {(\onea, \exists !\, \etofy)} (a)
(c) edge node[pos=.8] {(f, \oneftoghtwoy)} (b)
(s) edge[|->] node{\Usubg} (t)
(a2) edge node {f} (b2)
(c2) edge node[swap] {\onea} (a2)
(c2) edge node {f} (b2)
;
\end{tikzpicture}
\end{equation}
Since $(f,\Hsubtwo(\oneftofy))$ is a Cartesian morphism by the first assertion, this pre-raise has a unique raise $(\onea,\etofy)$. In the second component, by \eqref{intf-composite}, the left commutative triangle in \eqref{etofy-definition} yields the diagram \eqref{etofy-diagram}.
\end{proof}
\begin{lemma}\label{fp-factor}
Suppose $(f,p) : (A,X) \to (B,Y)$ is a morphism in $\intf$.
\begin{enumerate}
\item The morphism $(f,p)$ factors as follows.
\begin{equation}\label{fp-factor-diagram}
\begin{tikzpicture}[xscale=3.5, yscale=1, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {(A,X)}
($(x11)+(1,0)$) node (x12) {(B,Y)}
($(x11)+(.5,-1)$) node (x21) {(A,\ftof Y)}
($(x11)+(0,-1)$) node[inner sep=0pt] (sw) {}
($(x12)+(0,-1)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node {(f,p)} (x12)
(x11) edge[-, shorten >=-1pt] node[swap,pos=.6] (s) {\big(\onea, \Fzerosub{A,\ftof Y} \circ p\big)} (sw)
(sw) edge[shorten <=-1pt] (x21)
(x21) edge[-, shorten >=-1pt] (se)
(se) edge[shorten <=-1pt] node[swap,pos=.4] {(f, \oneftofy)} (x12)
;
\end{tikzpicture}
\end{equation}
\item The diagram
\begin{equation}\label{htwop-factor}
\begin{tikzpicture}[xscale=4, yscale=1.5, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\Htwox}
($(x11)+(1,0)$) node (x12) {\ftog(\Htwoy)}
($(x11)+(0,-1)$) node (x21) {\oneatog\Htwoftofy}
($(x12)+(0,-1)$) node (x22) {\oneatog\ftog(\Htwoy)}
;
\draw[1cell]
(x11) edge node {\Hsubtwo p} (x12)
(x11) edge node[swap] (s) {\Hsubtwo\big(\Fzerosub{A,\ftof Y} \circ p\big)} (x21)
(x22) edge node[swap] {(\Gtwosub{\onea,f})_{\Htwoy}} (x12)
(x21) edge node {\oneatog(\Htwooneftofy)} (x22)
;
\end{tikzpicture}
\end{equation}
in $GA$ is commutative.
\end{enumerate}
\end{lemma}
\begin{proof}
For the first assertion, observe that the diagram
\[\begin{tikzpicture}[xscale=3.5, yscale=1.4, baseline={(e.base)}]
\draw[0cell]
(0,0) node (x11) {X}
($(x11)+(.8,0)$) node (x12) {}
($(x12)+(.9,0)$) node (x13) {\ftof Y}
($(x11)+(0,-1)$) node (x21) {\ftof Y}
($(x12)+(0,-1)$) node (x22) {\oneatof\ftof Y}
($(x13)+(0,-1)$) node (x23) {\oneatof\ftof Y}
;
\draw[1cell]
(x11) edge node {p} (x13)
(x11) edge node[swap] {p} (x21)
(x21) edge node (e) {1} (x13)
(x23) edge node[swap] {(\Ftwosub{\onea,f})_Y} (x13)
(x21) edge node[swap] {\Fzerosub{A,\ftof Y}} (x22)
(x22) edge node{=} node[swap] {\oneatof(\oneftofy)} (x23)
;
\end{tikzpicture}\]
in $FA$ is commutative, with the lower right triangle commutative by the lax left unity \eqref{f0-bicat} of $F$. The previous commutative diagram and \eqref{intf-composite} imply the commutativity of the diagram \eqref{fp-factor-diagram}.
For the second assertion, we apply the functor $H$ to the diagram \eqref{fp-factor-diagram} to obtain the commutative diagram
\[\begin{tikzpicture}[xscale=3.5, yscale=1, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {(A,\Htwox)}
($(x11)+(1,0)$) node (x12) {(B,\Htwoy)}
($(x11)+(.5,-1)$) node (x21) {(A,\Htwoftofy)}
($(x11)+(0,-1)$) node[inner sep=0pt] (sw) {}
($(x12)+(0,-1)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node {(f,\Hsubtwo p)} (x12)
(x11) edge[-, shorten >=-1pt] node[swap,pos=.6] (s) {\big(\onea, \Hsubtwo(\Fzerosub{A,\ftof Y} \circ p)\big)} (sw)
(sw) edge[shorten <=-1pt] (x21)
(x21) edge[-, shorten >=-1pt] (se)
(se) edge[shorten <=-1pt] node[swap,pos=.4] {\big(f, \Hsubtwo(\oneftofy)\big)} (x12)
;
\end{tikzpicture}\]
in $\intg$. By \eqref{intf-composite}, the second component of the previous commutative diagram is the diagram \eqref{htwop-factor}.
\end{proof}
To show that the Grothendieck construction is $1$-fully faithful, we will:
\begin{enumerate}
\item Construct a strong transformation $\xi : F\to G$ in \Cref{xi-lax-naturality}.
\item Show that $\intxi = H$ in \Cref{intxi=H}.
\item Show that the previous two properties uniquely determine $\xi$ in \Cref{xi-unique}.
\end{enumerate}
Recall from \Cref{definition:lax-transformation} that a lax transformation has component $1$-cells and component $2$-cells.
\begin{definition}\label{def:transformation-gamma}
Define assignments $\xi : F\to G$ as follows.
\begin{description}
\item[Component $1$-Cells] For each object $A\in\C$, define the assignments
\begin{equation}\label{xia-fa-ga}
\begin{tikzcd}
FA \ar{r}{\xi_A} & GA\end{tikzcd}
\end{equation}
as follows.
\begin{description}
\item[Objects] For each object $X\in FA$, define the object
\begin{equation}\label{xia-object}
\xi_AX = \Hsubtwo X \in GA
\end{equation}
with $\Hsubtwo X$ as in \eqref{cartesian-h-objects}.
\item[Morphisms] Each morphism $p : X \to Y$ in $FA$ yields a morphism
\[\begin{tikzcd}[column sep=huge]
(A,X) \ar{r}{(\onea,\Fzeroay \circ p)} & (A,Y) \in \intf
\end{tikzcd}\]
with second component the composite
\[\begin{tikzcd}
X \ar{r}{p} & Y \ar{r}{\Fzeroay} & \oneatof Y \in FA.
\end{tikzcd}\]
Define the morphism $\xi_Ap$ as the composite
\begin{equation}\label{xia-morphism}
\begin{tikzpicture}[xscale=3.5, yscale=1.5, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\xi_AX = \Hsubtwo X}
($(x11)+(1,0)$) node (x12) {\xi_AY = \Hsubtwo Y}
($(x11)+(.5,-.7)$) node (x21) {\oneatog(\Hsubtwo Y)}
($(x11)+(0,-.7)$) node[inner sep=0pt] (sw) {}
($(x12)+(0,-.7)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node {\xi_A p} (x12)
(x11) edge[-, shorten >=-1pt] node[swap,pos=.6] (s) {\Hsubtwo(\Fzeroay \circ p)} (sw)
(sw) edge[shorten <=-1pt] (x21)
(x21) edge[-, shorten >=-1pt] (se)
(se) edge[shorten <=-1pt] node[swap,pos=.4] {(\Gzerosub{A,\Hsubtwo Y})^{-1}} (x12)
;
\end{tikzpicture}
\end{equation}
in $GA$, with
\begin{equation}\label{htwo-fzero-p}
H\big(\onea,\Fzeroay \circ p\big)
= \big(\onea, \Hsubtwo(\Fzeroay \circ p)\big) \in\intg
\end{equation}
as in \eqref{cartesian-h-morphisms}.
\end{description}
This finishes the definition of $\xi_A$.
\item[Component $2$-Cells] For each morphism $f : A \to B$ in $\C$ and each object $Y\in FB$, we define $\xify$ as the composite
\begin{equation}\label{xi-component-iicell}
\begin{tikzpicture}[xscale=2.5, yscale=1.3, baseline={(eq.base)}]
\draw[0cell]
(0,0) node (x11) {\ftog\xib Y}
($(x11)+(1,0)$) node (x12) {}
($(x12)+(1.8,0)$) node (x13) {\xia\ftof Y}
($(x11)+(0,-1)$) node (x21) {\ftoghtwoy}
($(x12)+(0,-1)$) node (x22) {\oneatog\Htwoftofy}
($(x13)+(0,-1)$) node (x23) {\Htwoftofy}
;
\draw[1cell]
(x11) edge node {\xify} (x13)
(x11) edge[equal] node (eq) {} (x21)
(x13) edge[equal] (x23)
(x21) edge node {\etofy} (x22)
(x22) edge node {\big(\Gzero_{A,\Hsubtwo(\ftof Y)}\big)^{-1}} (x23)
;
\end{tikzpicture}
\end{equation}
in $GA$, with the vertical equalities from \eqref{xia-object} and $\etofy$ from \eqref{etofy-diagram}.
\end{description}
This finishes the definition of $\xi : F \to G$.
\end{definition}
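For example, when $F$ and $G$ are strict functors, the composite \eqref{xia-morphism} reduces to the equality $\xi_A p = \Hsubtwo p$, and the component $2$-cell \eqref{xi-component-iicell} reduces to $\xify = \etofy$, which by \eqref{etofy-diagram} is then characterized by the equality
\[\Htwooneftofy \circ \etofy = 1_{\ftoghtwoy} \in GA.\]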
To show that $\xi$ is a strong transformation, we begin with its component $1$-cells.
\begin{lemma}\label{xia-is-functor}
For each object $A\in\C$, $\xia : FA\to GA$ in \eqref{xia-fa-ga} is a functor.
\end{lemma}
\begin{proof}
For an object $X\in FA$, $\xia \onex$ is the identity morphism by \eqref{cartesian-h-identity} and \eqref{xia-morphism}.
Given composable morphisms
\[\begin{tikzcd}
X \ar{r}{p} & Y \ar{r}{q} & Z \in FA,\end{tikzcd}\]
we must show that
\[(\xia q)(\xia p) = \xia(qp) \in GA.\]
By \eqref{xia-morphism}, this equality means that the two composites below
\begin{equation}\label{xiaq-xiap}
\begin{tikzpicture}[xscale=3.5, yscale=1.5, baseline={(e.base)}]
\draw[0cell]
(0,0) node (x11) {\Htwox}
($(x11)+(1,0)$) node (x12) {\oneatog(\Htwoz)}
($(x12)+(1,0)$) node (x13) {\Htwoz}
($(x11)+(.5,-.33)$) node (star) {(\ast)}
($(x11)+(0,-1)$) node (x21) {\oneatog\Htwoy}
($(x12)+(0,-1)$) node (x22) {\Htwoy}
;
\draw[1cell]
(x11) edge node {\Hsubtwo\big(\Fzeroaz \circ qp\big)} (x12)
(x12) edge node {(\Gzeroahtwoz)^{-1}} (x13)
(x11) edge node[swap] (e) {\Hsubtwo\big(\Fzeroay \circ p\big)} (x21)
(x22) edge node[swap] {\Hsubtwo\big(\Fzeroaz\circ q\big)} (x12)
(x21) edge node {(\Gzeroahtwoy)^{-1}} (x22)
;
\end{tikzpicture}
\end{equation}
in $GA$ are equal. Therefore, it suffices to show that the sub-diagram $(\ast)$ is commutative.
To prove the commutativity of $(\ast)$ in \eqref{xiaq-xiap}, first note that the diagram
\begin{equation}\label{oneafr-oneafq}
\begin{tikzpicture}[xscale=4, yscale=1.2, baseline={(e.base)}]
\draw[0cell]
(0,0) node (x11) {(A,X)}
($(x11)+(1,0)$) node (x12) {(A,Z)}
($(x11)+(.5,-1)$) node (x21) {(A,Y)}
($(x11)+(0,-1)$) node[inner sep=0pt] (sw) {}
($(x12)+(0,-1)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node {\big(\onea, \Fzeroaz \circ qp\big)} (x12)
(x11) edge[-, shorten >=-1pt] node[swap,pos=.6] (e) {\big(\onea, \Fzeroay \circ p\big)} (sw)
(sw) edge[shorten <=-1pt] (x21)
(x21) edge[-, shorten >=-1pt] (se)
(se) edge[shorten <=-1pt] node[swap,pos=.4] {\big(\onea, \Fzeroaz \circ q\big)} (x12)
;
\end{tikzpicture}
\end{equation}
in $\intf$ is commutative. Indeed, the first components in \eqref{oneafr-oneafq} give the equality $\onea\onea=\onea$. The second components give the two composites below.
\[\begin{tikzcd}[column sep=large]
X \ar{d}[swap]{p} & Z \ar{dr}{\Fzeroaz} && \oneatof Z\\
Y \ar{ur}{q} \ar{r}[swap]{\Fzeroay} & \oneatof Y \ar{r}[swap]{\oneatof q} & \oneatof Z \ar{r}{\oneatof(\Fzeroaz)} \ar[bend left,equal]{ur} & \oneatof\oneatof Z \ar{u}[swap]{\Ftwosub{Z}}
\end{tikzcd}\]
The left triangle is commutative by the naturality of $\Fzeroa$, and the right triangle is commutative by the lax right unity \eqref{f0-bicat} of $F$.
Applying the functor $H$ to \eqref{oneafr-oneafq} yields the commutative diagram
\[\begin{tikzpicture}[xscale=5, yscale=1.2]
\draw[0cell]
(0,0) node (x11) {(A,\Htwox)}
($(x11)+(1,0)$) node (x12) {(A,\Htwoz)}
($(x11)+(.5,-1)$) node (x21) {(A,\Htwoy)}
($(x11)+(0,-1)$) node[inner sep=0pt] (sw) {}
($(x12)+(0,-1)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node {\big(\onea, \Hsubtwo(\Fzeroaz \circ qp)\big)} (x12)
(x11) edge[-, shorten >=-1pt] node[swap,pos=.6] (e) {\big(\onea, \Hsubtwo(\Fzeroay \circ p)\big)} (sw)
(sw) edge[shorten <=-1pt] (x21)
(x21) edge[-, shorten >=-1pt] (se)
(se) edge[shorten <=-1pt] node[swap,pos=.4] {\big(\onea, \Hsubtwo(\Fzeroaz \circ q)\big)} (x12)
;
\end{tikzpicture}\]
in $\intg$. Therefore, by \eqref{intf-composite} applied to the previous commutative diagram, the sub-diagram $(\ast)$ in \eqref{xiaq-xiap} is equal to the outermost diagram below.
\[\begin{tikzpicture}[xscale=3.5, yscale=1.7, baseline={(e.base)}]
\draw[0cell]
(0,0) node (x11) {\Htwox}
($(x11)+(.9,0)$) node (x12) {\oneatog\Htwoy}
($(x12)+(1.2,0)$) node (x13) {\oneatog\oneatog\Htwoz}
($(x11)+(0,-1)$) node (x21) {\oneatog\Htwoy}
($(x12)+(0,-1)$) node (x22) {\Htwoy}
($(x13)+(0,-1)$) node (x23) {\oneatog\Htwoz}
;
\draw[1cell]
(x11) edge node {\Hsubtwo\big(\Fzeroay \circ p\big)} (x12)
(x12) edge node {\oneatog\Hsubtwo(\Fzeroaz \circ q)} (x13)
(x11) edge node[swap] (e) {\Hsubtwo\big(\Fzeroay \circ p\big)} (x21)
(x21) edge[equal] (x12)
(x13) edge node {\Gtwosub{\Htwoz}} (x23)
(x13) edge[bend right] node[swap] {(\Gzerosub{A,\oneatog\Htwoz})^{-1}} (x23)
(x21) edge node[swap] {(\Gzeroahtwoy)^{-1}} (x22)
(x22) edge node[swap] {\Hsubtwo\big(\Fzeroaz\circ q\big)} (x23)
;
\end{tikzpicture}\]
In this diagram:
\begin{itemize}
\item The left triangle is commutative by definition.
\item The middle sub-diagram is commutative by the naturality of $\Gzeroa$.
\item The right sub-diagram is commutative by the lax left unity \eqref{f0-bicat} of $G$.
\end{itemize}
We have shown that the sub-diagram $(\ast)$ in \eqref{xiaq-xiap} is commutative, and $\xia$ preserves composites.
\end{proof}
Next we observe that the component $2$-cells of $\xi$ are invertible.
\begin{lemma}\label{xify-invertible}
Suppose $f : A \to B$ is a morphism in $\C$, and $Y\in FB$ is an object. Then the morphisms
\[\begin{tikzcd}[column sep=large]
\ftog\xiby=\ftog(\Htwoy) \ar[shift left=.5ex]{r}{\xify} & \Htwoftofy =\xia\ftof Y \ar[shift left=.5ex]{l}{\Htwooneftofy}
\end{tikzcd}\]
in $GA$, defined in \eqref{htwo-oneftofy} and \eqref{xi-component-iicell}, are inverses of each other.
\end{lemma}
\begin{proof}
The equality \[\Htwooneftofy \circ \xify = 1_{\ftoghtwoy}\] asserts the commutativity of the outer boundary of the following diagram in $GA$.
\[\begin{tikzpicture}[xscale=5, yscale=1.7]
\draw[0cell]
(0,0) node (x11) {\ftoghtwoy}
($(x11)+(1,0)$) node (x12) {\ftoghtwoy}
($(x11)+(0,-1)$) node (x21) {\oneatog\Htwoftofy}
($(x12)+(0,-1)$) node (x22) {\oneatog\ftoghtwoy}
($(x21)+(0,-1)$) node (x31) {\Htwoftofy}
($(x22)+(0,-1)$) node (x32) {\ftoghtwoy}
($(x11)+(-.3,0)$) node[inner sep=0pt] (a) {}
($(x31)+(-.3,0)$) node[inner sep=0pt] (b) {}
($(x12)+(.3,0)$) node[inner sep=0pt] (c) {}
($(x32)+(.3,0)$) node[inner sep=0pt] (d) {}
;
\draw[1cell]
(x11) edge node {1} (x12)
(x11) edge node {\etofy} (x21)
(x22) edge node {\Gtwosub{\Htwoy}} (x12)
(x21) edge node {\oneatog(\Htwooneftofy)} (x22)
(x21) edge node {(\Gzerosub{A,\Htwoftofy})^{-1}} (x31)
(x22) edge node[swap] {(\Gzerosub{A,\ftog(\Htwoy)})^{-1}} (x32)
(x31) edge node[swap] {\Htwooneftofy} (x32)
(x11) edge[-, shorten >=-1pt] (a)
(a) edge[-, shorten <=-1pt, shorten >=-1pt] node[pos=.7]{\xify} (b)
(b) edge[shorten <=-1pt] (x31)
(x32) edge[-, shorten >=-1pt] (d)
(d) edge[-, shorten <=-1pt, shorten >=-1pt] node[pos=.3]{1} (c)
(c) edge[shorten <=-1pt] (x12)
;
\end{tikzpicture}\]
\begin{itemize}
\item The left rectangle is the commutative diagram \eqref{xi-component-iicell} that defines $\xify$.
\item The top square is the commutative diagram \eqref{etofy-diagram}.
\item The bottom square is commutative by the naturality of $\Gzeroa$.
\item The right rectangle is commutative by the lax left unity \eqref{f0-bicat} of $G$.
\end{itemize}
The converse equality
\[(\Gzerosub{A,\Htwoftofy})^{-1} \circ \etofy \circ \Htwooneftofy = 1_{\Htwoftofy}\]
is equivalent to the equality
\begin{equation}\label{xify-inverse}
\etofy \circ \Htwooneftofy = \Gzerosub{A,\Htwoftofy}.
\end{equation}
To prove this equality, consider the pre-raise
\[\preraise{(f,\Htwooneftofy)}{(f,\Htwooneftofy)}{\onea}\]
as displayed below.
\begin{equation}\label{htwooneftofy-preraise}
\begin{tikzpicture}[xscale=4, yscale=1.5, baseline={(one.base)}]
\draw[0cell]
(0,0) node (a) {\big(A,\Htwoftofy\big)}
($(a)+(.9,0)$) node (b) {(B,\Htwoy)}
($(a)+(.5,.9)$) node (c) {\big(A,\Htwoftofy\big)}
($(b)+(.2,.9)$) node (s) {}
($(s)+(.3,0)$) node (t) {}
($(t)+(.2,-.9)$) node (a2) {A}
($(a2)+(.3,0)$) node (b2) {B}
($(a2)+(.15,.9)$) node (c2) {A}
;
\draw[1cell]
(a) edge node {(f,\Hsubtwo(\oneftofy))} (b)
(c) edge[dashed, shorten >=-.1cm] node[swap, pos=.8] (one) {(\onea, \exists !\, \Gzerosub{A,\Htwoftofy})} (a)
(c) edge node[pos=.8] {(f, \Htwooneftofy)} (b)
(s) edge[|->] node{\Usubg} (t)
(a2) edge node {f} (b2)
(c2) edge node[swap] {\onea} (a2)
(c2) edge node {f} (b2)
;
\end{tikzpicture}
\end{equation}
Since $\big(f,\Htwooneftofy\big)$ is a Cartesian morphism by \eqref{htwo-oneftofy}, this pre-raise has a unique raise, which must be the identity morphism of $\big(A,\Htwoftofy\big)$, namely, \[\big(\onea, \Gzerosub{A,\Htwoftofy}\big)\] by \eqref{fzeroax}.
By the uniqueness of this raise, to prove \eqref{xify-inverse}, it suffices to show that \[\big(\onea, \etofy \circ \Htwooneftofy\big)\] is also a raise of the pre-raise \eqref{htwooneftofy-preraise}. In other words, by \eqref{intf-composite}, it suffices to show that the following diagram in $GA$ is commutative.
\[\begin{tikzpicture}[xscale=3.5, yscale=1.4, baseline={(e.base)}]
\draw[0cell]
(0,0) node (x11) {\Htwoftofy}
($(x11)+(.7,0)$) node (x12) {}
($(x12)+(1.2,0)$) node (x13) {\ftog\Htwoy}
($(x11)+(0,-1)$) node (x21) {\ftog\Htwoy}
($(x12)+(0,-1)$) node (x22) {\oneatog\Htwoftofy}
($(x13)+(0,-1)$) node (x23) {\oneatog\ftog\Htwoy}
;
\draw[1cell]
(x11) edge node {\Htwooneftofy} (x13)
(x11) edge node[swap] {\Htwooneftofy} (x21)
(x21) edge node (e) {1} (x13)
(x23) edge node[swap] {\Gtwosub{\Htwoy}} (x13)
(x21) edge node[swap] {\etofy} (x22)
(x22) edge node[swap] {\oneatog\Htwooneftofy} (x23)
;
\end{tikzpicture}\]
The upper left triangle is commutative by definition. The lower right triangle is commutative by \eqref{etofy-diagram}.
\end{proof}
\begin{lemma}\label{xi-lax-unity}
$\xi : F \to G$ satisfies the lax unity axiom \eqref{unity-transformation-pasting}.
\end{lemma}
\begin{proof}
The lax unity axiom for $\xi$ means the equality
\begin{equation}\label{xi-lax-unity-1}
\xioneax \circ \Gzerosub{A,\xiax} = \xia(\Fzeroax) \in GA
\end{equation}
for each object $A\in\C$ and each object $X\in FA$. By \eqref{xia-morphism} and \eqref{xi-component-iicell}, this equality means that the two composites in the diagram
\begin{equation}\label{xi-lax-unity-2}
\begin{tikzpicture}[xscale=4, yscale=1.4, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\oneatog\Htwox}
($(x11)+(.8,0)$) node (x12) {\oneatog\Htwooneatofx}
($(x12)+(1.1,0)$) node (x13) {\Htwooneatofx}
($(x11)+(0,1)$) node (x01) {\Htwox}
($(x11)+(.4,.5)$) node (s) {(\diamondsuit)}
;
\draw[1cell]
(x11) edge node {\etooneax} (x12)
(x12) edge node {(\Gzerosub{A,\Htwooneatofx})^{-1}} (x13)
(x01) edge node[swap] {\Gzerosub{A,\Htwox}} (x11)
(x01) edge[out=0,in=90] node[pos=.6] {\Htwo(\Fzero\Fzero)} (x12)
;
\end{tikzpicture}
\end{equation}
in $GA$ are equal, in which
\[\Fzero\Fzero =\Fzeroaoneatofx \circ \Fzeroax.\]
So it suffices to show that the sub-diagram $(\diamondsuit)$ is commutative. By the invertibility of $\Gzeroa$ and the uniqueness of $\etooneax$ in \eqref{etofy-diagram}, $(\diamondsuit)$ in \eqref{xi-lax-unity-2} is commutative if and only if the diagram
\begin{equation}\label{xi-lax-unity-3}
\begin{tikzpicture}[xscale=4.2, yscale=1.4, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\Htwox}
($(x11)+(1,0)$) node (x12) {\oneatog\Htwox}
($(x11)+(0,-1)$) node (x21) {\oneatog\Htwooneatofx}
($(x12)+(0,-1)$) node (x22) {\oneatog\oneatog\Htwox}
;
\draw[1cell]
(x11) edge node {\Gzerosub{A,\Htwox}} (x12)
(x11) edge node[swap] (s) {\Htwo(\Fzero\Fzero)} (x21)
(x22) edge node[swap] {\Gtwosub{\Htwox}} (x12)
(x21) edge node {\oneatog(\Htwooneoneatofx)} (x22)
;
\end{tikzpicture}
\end{equation}
is commutative. The diagram \eqref{xi-lax-unity-3} is commutative by:
\begin{itemize}
\item the commutative diagram \eqref{htwop-factor} for the morphism
\[(f,p)=(\onea,\Fzeroax) : (A,X) \to (A,X);\]
\item the equality $\Hsubtwo\Fzeroax = \Gzerosub{A,\Htwox}$ in \eqref{cartesian-h-identity}.
\end{itemize}
So $(\diamondsuit)$ is commutative, and the equality \eqref{xi-lax-unity-1} holds.
\end{proof}
\begin{lemma}\label{xi-lax-naturality}
$\xi : F \to G$ in \Cref{def:transformation-gamma} is a strong transformation.
\end{lemma}
\begin{proof}
The naturality of $\xif$ in the sense of \eqref{lax-transformation-naturality} is trivial because $\C$ has no non-identity $2$-cells. By \Cref{xia-is-functor,xify-invertible,xi-lax-unity}, it remains to show that $\xi$ satisfies the lax naturality axiom \eqref{2-cell-transformation-pasting}.
The lax naturality axiom for $\xi$ means the equality
\begin{equation}\label{xi-lax-naturality-1}
\xigfz \circ \big(\Gtwosub{g,f}\big)_{\xic Z} = \xia\big(\Ftwosub{g,f}\big)_Z \circ \xifgtofz \circ \ftog(\xigz) \in GA
\end{equation}
for morphisms $f : A \to B$ and $g : B \to C$ in $\C$ and objects $Z\in FC$. By \eqref{xia-morphism}, \eqref{xi-component-iicell}, and \eqref{xify-invertible}, the equality \eqref{xi-lax-naturality-1} means that the two composites in the diagram
\[\begin{tikzpicture}[xscale=4.4, yscale=1.5, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\ftog\gtog\Htwoz}
($(x11)+(1,0)$) node (x12) {\ftog\Htwogtofz}
($(x12)+(1,0)$) node (x13) {\Htwoftofgtofz}
($(x11)+(0,-1)$) node (x21) {\gftog{\Htwoz}}
($(x13)+(0,-1)$) node (x23) {\oneatog\Htwogftofz}
($(x23)+(0,-1)$) node (x33) {\Htwogftofz}
($(x11)+(.5,-.5)$) node (s) {(\diamondsuit)}
;
\draw[1cell]
(x11) edge node {\ftog(\Htwoonegtofz)^{-1}} (x12)
(x12) edge node {(\Htwooneftofgtofz)^{-1}} (x13)
(x11) edge node {(\Gtwosub{f,g})_{\Htwoz}} (x21)
(x13) edge node[swap] {\Hsubtwo(\Fzero\Ftwo)} (x23)
(x21) edge node {\etogfz} (x23)
(x23) edge node[swap] {(\Gzerosub{A,\Htwogftofz})^{-1}} (x33)
;
\end{tikzpicture}\]
are equal, in which
\[\Fzero\Ftwo = \Fzerosub{A,\gftof Z} \circ (\Ftwosub{f,g})_Z.\]
So it suffices to show that the sub-diagram $(\diamondsuit)$ is commutative. By the uniqueness of $\etogfz$ in \eqref{etofy-diagram}, the previous sub-diagram $(\diamondsuit)$ is commutative if and only if the diagram
\begin{equation}\label{xi-lax-naturality-2}
\begin{tikzpicture}[xscale=4.4, yscale=1.5, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\Htwoftofgtofz}
($(x11)+(1,0)$) node (x12) {\oneatog\Htwogftofz}
($(x12)+(1,0)$) node (x13) {\oneatog\gftog\Htwoz}
($(x11)+(0,-1)$) node (x21) {\ftog\Htwogtofz}
($(x12)+(0,-1)$) node (x22) {\ftog\gtog\Htwoz}
($(x13)+(0,-1)$) node (x23) {\gftog(\Htwoz)}
;
\draw[1cell]
(x11) edge node {\Hsubtwo(\Fzero\Ftwo)} (x12)
(x12) edge node {\oneatog\Htwoonegftofz} (x13)
(x11) edge node {\Htwooneftofgtofz} (x21)
(x13) edge node[swap] {(\Gtwosub{\onea,gf})_{\Htwoz}} (x23)
(x21) edge node[swap] {\ftog(\Htwoonegtofz)} (x22)
(x22) edge node[swap] {(\Gtwosub{f,g})_{\Htwoz}} (x23)
;
\end{tikzpicture}
\end{equation}
is commutative.
To prove the commutativity of \eqref{xi-lax-naturality-2}, first note that the diagram
\[\begin{tikzpicture}[xscale=3, yscale=1.5, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\ftof\gtof Z}
($(x11)+(1,0)$) node (x12) {\ftof\gtof Z}
($(x12)+(1,0)$) node (x13) {\ftof\gtof Z}
($(x11)+(0,-1)$) node (x21) {\gftof Z}
($(x13)+(0,-1)$) node (x23) {\gftof Z}
($(x21)+(0,-1)$) node (x31) {\oneatof\gftof Z}
($(x23)+(0,-1)$) node (x33) {\oneatof\gftof Z}
;
\draw[1cell]
(x11) edge node {1_{\ftof\gtof Z}} (x12)
(x12) edge node[swap]{=} node {\ftof 1_{\gtof Z}} (x13)
(x11) edge node {(\Ftwosub{f,g})_Z} (x21)
(x13) edge node[swap] {(\Ftwosub{f,g})_Z} (x23)
(x21) edge node {=} (x23)
(x21) edge node {\Fzerosub{A,\gftof Z}} (x31)
(x33) edge node {(\Ftwosub{\onea,gf})_Z} (x23)
(x31) edge node[swap]{=} node{\oneatof\big(\onegftofz\big)} (x33)
;
\end{tikzpicture}\]
in $FA$ is commutative, with the bottom rectangle commutative by the lax left unity \eqref{f0-bicat} of $F$. The previous commutative diagram and \eqref{intf-composite} imply that the diagram
\[\begin{tikzpicture}[xscale=4.5, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {(A,\ftof\gtof Z)}
($(x11)+(1,0)$) node (x12) {(B,\gtof Z)}
($(x11)+(0,-1)$) node (x21) {(A,\gftof Z)}
($(x12)+(0,-1)$) node (x22) {(C,Z)}
;
\draw[1cell]
(x11) edge node {\big(f,\oneftofgtofz\big)} (x12)
(x11) edge node[swap]{\big(\onea, \Fzero\Ftwo\big)} (x21)
(x12) edge node {\big(g,\onegtofz\big)} (x22)
(x21) edge node {\big(gf,\onegftofz\big)} (x22)
;
\end{tikzpicture}\]
in $\intf$ is commutative. Applying the functor $H$ to the previous commutative diagram yields the commutative diagram
\[\begin{tikzpicture}[xscale=5, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {\big(A,\Htwoftofgtofz\big)}
($(x11)+(1,0)$) node (x12) {\big(B,\Htwogtofz\big)}
($(x11)+(0,-1)$) node (x21) {\big(A,\Htwogftofz\big)}
($(x12)+(0,-1)$) node (x22) {(C,\Htwoz)}
;
\draw[1cell]
(x11) edge node {\big(f,\Hsubtwo(\oneftofgtofz)\big)} (x12)
(x11) edge node[swap]{\big(\onea, \Hsubtwo(\Fzero\Ftwo)\big)} (x21)
(x12) edge node {\big(g,\Hsubtwo(\onegtofz)\big)} (x22)
(x21) edge node {\big(gf,\Hsubtwo(\onegftofz)\big)} (x22)
;
\end{tikzpicture}\]
in $\intg$. By \eqref{intf-composite}, the second component of the previous commutative diagram is \eqref{xi-lax-naturality-2}.
\end{proof}
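For example, when $F$ and $G$ are strict functors, so that $\Ftwo$ and $\Gtwo$ are identities, the lax naturality equality \eqref{xi-lax-naturality-1} reduces to the cocycle condition
\[\xigfz = \xifgtofz \circ \ftog(\xigz) \in GA\]
for the component $2$-cells of $\xi$.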
\begin{lemma}\label{intxi=H}
For the strong transformation $\xi : F \to G$ in \Cref{def:transformation-gamma}, the functor
\[\begin{tikzcd}
\intf \ar{r}{\intxi} & \intg\end{tikzcd}\]
in \Cref{def:grothendieck-icell} is equal to the functor $H$.
\end{lemma}
\begin{proof}
For an object $(A,X)\in\intf$, there are equalities
\[\big(\intxi\big)(A,X) = (A,\xia X) = (A, \Htwox) = H(A,X)\]
by \Cref{def:grothendieck-icell}, \eqref{xia-object}, and \eqref{cartesian-h-objects}, respectively.
For a morphism $(f,p) : (A,X) \to (B,Y)$ in $\intf$, by \eqref{intalpha-fp} and \eqref{cartesian-h-morphisms}, the desired equality
\[\big(\intxi\big)(f,p) = H(f,p)\]
is equivalent to the equality
\begin{equation}\label{intxi-H-morphisms}
\xifyinv \circ \xia p = \Hsubtwo p.
\end{equation}
Since $p : X \to \ftof Y$, by \eqref{cartesian-h-identity}, \eqref{xia-morphism} and \Cref{xify-invertible}, the equality \eqref{intxi-H-morphisms} means the commutativity of the outer boundary of the following diagram in $GA$.
\[\begin{tikzpicture}[xscale=3.5, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {\Htwox}
($(x11)+(1,0)$) node (x12) {\ftog(\Htwoy)}
($(x12)+(1.2,0)$) node (x13) {\ftog(\Htwoy)}
($(x11)+(1,-1)$) node (mid) {\oneatog\ftog(\Htwoy)}
($(x11)+(0,-2)$) node (x21) {\oneatog\Htwoftofy}
($(x21)+(1,0)$) node (x22) {\oneatog\Htwoftofy}
($(x13)+(0,-2)$) node (x23) {\Htwoftofy}
($(x11)+(.5,-.5)$) node (di) {(\diamondsuit)}
($(x12)+(.6,-.5)$) node (he) {(\heartsuit)}
;
\draw[1cell]
(x11) edge node {\Hsubtwo p} (x12)
(x12) edge[equal] (x13)
(x11) edge node[swap]{\Hsubtwo\big(\Fzerosub{A,\ftof Y} \circ p\big)} (x21)
(x22) edge node {\oneatog(\Htwooneftofy)} (mid)
(mid) edge node {\Gtwosub{\Htwoy}} (x12)
(x23) edge node {\Htwooneftofy} (x13)
(x21) edge[equal] (x22)
(x22) edge node {\Hsubtwo(\Fzerosub{A,\ftof Y})^{-1}} (x23)
;
\end{tikzpicture}\]
\begin{itemize}
\item The sub-diagram $(\diamondsuit)$ is the commutative diagram \eqref{htwop-factor}.
\item The sub-diagram $(\heartsuit)$ is commutative by \eqref{htwop-factor} for the morphism
\[(f,\oneftofy) : (A,\ftof Y) \to (B,Y) \in \intf.\]
\end{itemize}
Therefore, $\intxi$ and $H$ agree on morphisms as well.
\end{proof}
\begin{lemma}\label{xi-unique}
The property $\intxi = H$ uniquely determines the strong transformation $\xi$ in \Cref{def:transformation-gamma}.
\end{lemma}
\begin{proof}
Suppose $\theta : F \to G$ is a strong transformation such that $\inttheta = H$. We will show that $\theta = \xi$ as strong transformations, starting with their component $1$-cells.
For each object $A\in\C$ and each object $X\in FA$, the functor $\thetaa : FA \to GA$ satisfies
\[(A,\thetaa X) = \big(\inttheta\big)(A,X) = H(A,X) = (A,\Htwox) = (A,\xia X)\]
by \Cref{def:grothendieck-icell}, \eqref{cartesian-h-objects}, and \eqref{xia-object}. So $\thetaa = \xia$ on objects.
For a morphism $p : X \to Y$ in $FA$, there is the morphism
\begin{equation}\label{h-onea-fzeroay-p}
H\big(\onea,\Fzeroay \circ p\big) = \big(\onea, \Hsubtwo(\Fzeroay \circ p)\big) : (A,\Htwox) \to (A,\Htwoy)
\end{equation}
in $\intg$ in \eqref{htwo-fzero-p}. Also, the morphism
\[\big(\inttheta\big)\big(\onea,\Fzeroay \circ p\big) = (\onea, \cdots)\in\intg\]
has second component the composite
\begin{equation}\label{inttheta-onea-fzeroay-p}
\begin{tikzpicture}[xscale=3.5, yscale=1.5, baseline={(x11.base)}]
\def\u{.9}
\def\h{1}
\draw[0cell]
(0,0) node (x11) {\Htwox}
($(x11)+(\h,0)$) node (x12) {\Htwooneatofy}
($(x12)+(\u,0)$) node (x13) {\oneatog\Htwoy \in GA}
;
\draw[1cell]
(x11) edge node {\thetaa\big(\Fzeroay \circ p\big)} (x12)
(x12) edge node {\thetaoneayinv} (x13)
;
\end{tikzpicture}
\end{equation}
by \eqref{intalpha-fp}. Consider the following diagram in $GA$.
\[\begin{tikzpicture}[xscale=3.5, yscale=1.5]
\def\v{-1}
\def\u{.4}
\def\h{1}
\draw[0cell]
(0,0) node (x11) {\Htwox}
($(x11)+(\h,0)$) node (x12) {\oneatog(\Htwoy)}
($(x12)+(\u,0)$) node[inner sep=0pt] (x13) {}
($(x11)+(0,\v)$) node (x21) {\Htwox}
($(x12)+(0,\v)$) node (x22) {\Htwooneatofy}
($(x21)+(0,\v)$) node (x31) {\Htwox}
($(x22)+(0,\v)$) node (x32) {\Htwoy}
($(x32)+(\u,0)$) node[inner sep=0pt] (x33) {}
;
\draw[1cell]
(x11) edge node {\Hsubtwo(\Fzeroay \circ p)} (x12)
(x11) edge[equal] (x21)
(x22) edge node {\thetaoneayinv} (x12)
(x21) edge node {\thetaa\big(\Fzeroay \circ p\big)} (x22)
(x21) edge[equal] (x31)
(x32) edge node {\thetaa(\Fzeroay)} (x22)
(x31) edge node {\thetaa p} (x32)
(x32) edge[-,shorten >=-1pt] (x33)
(x33) edge[-,shorten >=-1pt,shorten <=-1pt] node[swap] {\Gzerosub{A,\Htwoy}} (x13)
(x13) edge[shorten <=-1pt] (x12)
;
\end{tikzpicture}\]
\begin{itemize}
\item The top left square is commutative by \eqref{h-onea-fzeroay-p}, \eqref{inttheta-onea-fzeroay-p}, and $\inttheta = H$.
\item The bottom left square is commutative by the functoriality of $\thetaa$.
\item The right rectangle is commutative by the lax unity \eqref{unity-transformation-pasting} of $\theta$.
\end{itemize}
Therefore, there are equalities
\[\thetaa p = \big(\Gzerosub{A,\Htwoy}\big)^{-1} \circ \Hsubtwo(\Fzeroay \circ p) = \xia p\]
by the previous commutative diagram and \eqref{xia-morphism}. We have shown that $\thetaa = \xia$ as functors, so $\theta$ and $\xi$ have the same component $1$-cells.
For their component $2$-cells, consider a morphism $f : A \to B \in \C$ and an object $Y\in FB$. We must show that $\thetafy = \xify$. The image of the morphism
\[\begin{tikzcd}[column sep=large]
(A,\ftof Y) \ar{r}{(f,\oneftofy)} & (B,Y) \in \intf\end{tikzcd}\]
under $\inttheta$ has second component the composite
\[\begin{tikzpicture}[xscale=3, yscale=1,baseline={(0,0).base}]
\draw[0cell]
(0,0) node (x11) {\Htwoftofy}
(1,0) node (x12) {\Htwoftofy}
(2,0) node (x13) {\ftog \Htwoy \in GA}
;
\draw[1cell]
(x11) edge node{\thetaaoneftofy} node[swap]{=} (x12)
(x12) edge node{\thetafyinv} (x13)
;
\end{tikzpicture}\]
by \eqref{intalpha-fp}. Since $\inttheta =H$, it follows that
\[\thetafy = \big(\Htwooneftofy\big)^{-1} = \xify,\] with the second equality from \Cref{xify-invertible}. Therefore, the component $2$-cells $\theta_f$ and $\xi_f$ are equal.
\end{proof}
\begin{proposition}\label{grothendieck-one-fullyfaithful}\index{Grothendieck construction!1-fully faithfulness}
The Grothendieck construction in \Cref{grothendieck-iifunctor} is $1$-fully faithful in the sense of \Cref{definition:2-equiv-terms}.
\end{proposition}
\begin{proof}
This follows from \Cref{intxi=H,xi-unique}.
\end{proof}
\section{As a \texorpdfstring{$2$}{2}-Equivalence}
\label{sec:grothendieck-iiequivalence}
The purpose of this section is to show that the Grothendieck construction is a $2$-equivalence between $2$-categories. Based on the results in the previous sections, it remains to show that the Grothendieck construction is fully faithful---i.e., a bijection---on $2$-cells.
\begin{convention}\label{conv:gro-iicell-fullyfaithful}
For the rest of this section, suppose:
\begin{itemize}
\item $F,G : \Cop \to \Cat$ are pseudofunctors.
\item $\alpha, \beta : F \to G$ are strong transformations.
\item $\gamma$ as in
\[\begin{tikzpicture}[xscale=2, yscale=1.3]
\def\h{1}
\def\v{-1}
\draw[0cell]
(0,0) node (x11) {\intf}
($(x11)+(\h,0)$) node (x12) {\intg}
($(x11)+(\h/2,\v)$) node (x21) {\C}
;
\draw[1cell]
(x11) edge[bend left=35] node{\intalpha} (x12)
(x11) edge[bend right=35] node[swap]{\intbeta} (x12)
(x11) edge[out=-90,in=165] node[swap,pos=.4] (uf) {\Usubf} (x21)
(x12) edge[out=-90,in=15] node[pos=.4] (ug) {\Usubg} (x21)
;
\draw[2cell]
node[between=x11 and x12 at .45, rotate=-90, 2label={above,\gamma}] {\Rightarrow}
;
\end{tikzpicture}\]
is a vertical natural transformation in the sense of \eqref{vertical-natural-tr}.\dqed
\end{itemize}
\end{convention}
\begin{explanation}\label{expl:theta-iicell}
To say that $\gamma : \intalpha \to \intbeta$ is a vertical natural transformation means that it is a natural transformation that satisfies
\begin{equation}\label{one-gamma-one}
1_{\Usubg} * \gamma = 1_{\Usubf}.
\end{equation}
For each object $A\in\C$,
\[\alphaa,\betaa : FA \to GA\] are functors. For each object $X\in FA$, $\gamma$ has a component morphism $\gamma_{(A,X)}\in\intg$ as in
\begin{equation}\label{onea-gammaax}
\begin{tikzcd}[column sep=large]
\big(\intalpha\big)(A,X) \ar[equal]{d} \ar{r}{\gamma_{(A,X)}} & \big(\intbeta\big)(A,X) \ar[equal]{d}\\
(A,\alphaax) \ar{r}{\big(\onea,\gammaax\big)} & (A,\betaax)
\end{tikzcd}
\end{equation}
with second component a morphism
\begin{equation}\label{gamma-of-ax}
\begin{tikzcd}[column sep=large]
\alphaax \ar{r}{\gammaax} & \oneatog\betaax \in GA.\end{tikzcd}
\end{equation}
The first component is $\onea$ by \eqref{one-gamma-one}. Moreover, $\gamma_{(A,X)}$ is natural with respect to morphisms in $\intf$.\dqed
\end{explanation}
To prove that the Grothendieck construction is fully faithful on $2$-cells, we will:
\begin{enumerate}
\item Construct a modification $\Gamma : \alpha \to \beta$ in \Cref{Gamma-is-modification}.
\item Show that $\Gamma$ is the unique modification with the property $\intgamma = \gamma$ in \Cref{Gamma-is-unique}.
\end{enumerate}
\begin{definition}\label{def:mod-gamma}
Define an assignment $\Gamma : \alpha \to \beta$ as follows. For each object $A\in\C$, $\Gammaa$ as in
\begin{equation}\label{Gammaa-transformation}
\begin{tikzpicture}[xscale=2, yscale=1.5, baseline={(x11.base)}]
\def\h{1}
\draw[0cell]
(0,0) node (x11) {FA}
($(x11)+(\h,0)$) node (x12) {GA}
;
\draw[1cell]
(x11) edge[bend left=45] node{\alphaa} (x12)
(x11) edge[bend right=45] node[swap]{\betaa} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .4, rotate=-90, 2label={above,\Gammaa}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
is the assignment whose component $(\Gammaa)_X = \Gammaax$ is the composite
\begin{equation}\label{Gammaa-of-x}
\begin{tikzpicture}[xscale=4, yscale=1, baseline={(e.base)}]
\def\v{-1}
\def\h{1}
\draw[0cell]
(0,0) node (x11) {\alphaax}
($(x11)+(\h,0)$) node (x12) {\betaax}
($(x11)+(\h/2,\v)$) node (x21) {\oneatog\betaax}
($(x11)+(0,\v)$) node[inner sep=0pt] (sw) {}
($(x12)+(0,\v)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node {\Gammaax} (x12)
(x11) edge[-, shorten >=-1pt] node[swap,pos=.6] (e) {\gammaax} (sw)
(sw) edge[shorten <=-1pt] (x21)
(x21) edge[-, shorten >=-1pt] (se)
(se) edge[shorten <=-1pt] node[swap,pos=.4] {\big(\Gzerosub{A,\betaax}\big)^{-1}} (x12)
;
\end{tikzpicture}
\end{equation}
in $GA$ for each object $X\in FA$, with $\gammaax$ as in \eqref{gamma-of-ax}.
\end{definition}
\begin{lemma}\label{gammaa-natural}
$\Gammaa : \alphaa \to \betaa$ in \eqref{Gammaa-transformation} is a natural transformation.
\end{lemma}
\begin{proof}
By \eqref{Gammaa-of-x}, the naturality of $\Gammaa$ with respect to a morphism $p : X \to Y$ in $FA$ means the commutativity around the boundary of the following diagram in $GA$.
\begin{equation}\label{Gammaa-natural-1}
\begin{tikzpicture}[xscale=3, yscale=1.5, baseline={(p.base)}]
\def\v{-1}
\def\u{1.2}
\def\h{1}
\draw[0cell]
(0,0) node (x11) {\alphaax}
($(x11)+(\h,0)$) node (x12) {\oneatog\betaax}
($(x12)+(\u,0)$) node (x13) {\betaax}
($(x11)+(0,\v)$) node (x21) {\alphaay}
($(x12)+(0,\v)$) node (x22) {\oneatog\betaay}
($(x13)+(0,\v)$) node (x23) {\betaay}
($(x11)+(\h/2,\v/3)$) node (di) {(\diamondsuit)}
;
\draw[1cell]
(x11) edge node {\gammaax} (x12)
(x12) edge node[pos=.6] {\big(\Gzerosub{A,\betaax}\big)^{-1}} (x13)
(x11) edge node[swap] (p) {\alphaa p} (x21)
(x12) edge node{\oneatog\betaa p} (x22)
(x13) edge node {\betaa p} (x23)
(x21) edge node {\gammaay} (x22)
(x22) edge node[pos=.6] {\big(\Gzerosub{A,\betaay}\big)^{-1}} (x23)
;
\end{tikzpicture}
\end{equation}
Since the right square is commutative by the naturality of $\Gzeroa$, it suffices to show that the sub-diagram $(\diamondsuit)$ in \eqref{Gammaa-natural-1} is commutative.
To prove the commutativity of $(\diamondsuit)$ in \eqref{Gammaa-natural-1}, consider the naturality of $\gamma : \intalpha \to \intbeta$ with respect to the morphism
\[\begin{tikzpicture}[xscale=3.8, yscale=1.7]
\draw[0cell]
(0,0) node (x11) {(A,X)}
($(x11)+(1,0)$) node (x12) {(A,Y) \in \intf.}
;
\draw[1cell]
(x11) edge node {\big(\onea, \Fzeroay \circ p\big)} (x12)
;
\end{tikzpicture}\]
By \eqref{intalpha-fp}, it is the following commutative diagram in $\intg$.
\begin{equation}\label{Gammaa-natural-2}
\begin{tikzpicture}[xscale=3, yscale=1.5, baseline={(p.base)}]
\def\v{-1}
\def\h{1}
\draw[0cell]
(0,0) node (x11) {(A,\alphaax)}
($(x11)+(\h,0)$) node (x12) {(A,\betaax)}
($(x11)+(0,\v)$) node (x21) {(A,\alphaay)}
($(x12)+(0,\v)$) node (x22) {(A,\betaay)}
;
\draw[1cell]
(x11) edge node {\big(\onea, \gammaax\big)} (x12)
(x11) edge node[swap] (p) {\big(\onea,\alphaoneayinv \circ \alphaa(\Fzeroay \circ p) \big)} (x21)
(x12) edge node {\big(\onea,\betaoneayinv \circ \betaa(\Fzeroay \circ p) \big)} (x22)
(x21) edge node {\big(\onea, \gammaay\big)} (x22)
;
\end{tikzpicture}
\end{equation}
By \eqref{intf-composite} and the functoriality of $\alphaa$ and $\betaa$, the second component of the commutative diagram \eqref{Gammaa-natural-2} is the outermost diagram in $GA$ below.
\[\begin{tikzpicture}[xscale=3.4, yscale=1.5]
\def\v{-1}
\def\h{1}
\def\w{.8}
\draw[0cell]
(0,0) node (x11) {\alphaax}
($(x11)+(\w/2,\v/2)$) node (di) {(\diamondsuit)}
($(x11)+(\w,0)$) node (x12) {\oneatog\betaax}
($(x12)+(\h,0)$) node (x13) {\oneatog\betaay}
($(x13)+(\h,0)$) node (x14) {\oneatog\betaa\oneatog Y}
($(x11)+(0,\v)$) node (x21) {\alphaay}
($(x13)+(0,\v)$) node (x23) {\oneatog\betaay}
($(x14)+(0,\v)$) node (x24) {\oneatog\oneatog\betaay}
($(x11)+(0,2*\v)$) node (x31) {\alphaa\oneatof Y}
($(x12)+(0,2*\v)$) node (x32) {\oneatog\alphaa Y}
($(x13)+(0,2*\v)$) node (x33) {\oneatog\oneatog\betaa Y}
($(x14)+(0,2*\v)$) node (x34) {\oneatog\betaa Y}
;
\draw[1cell]
(x11) edge node {\gammaax} (x12)
(x12) edge node {\oneatog\betaa p} (x13)
(x13) edge node {\oneatog\betaa\Fzeroay} node[swap]{\iso} (x14)
(x11) edge node[swap] {\alphaa p} (x21)
(x13) edge[equal] (x23)
(x14) edge node {\oneatog\betaoneayinv} node[swap]{\iso} (x24)
(x21) edge node {\gammaay} (x23)
(x23) edge node {\oneatog\Gzerosub{A,\betaay}} (x24)
(x21) edge node[swap] {\alphaa\Fzeroay} (x31)
(x21) edge node[pos=.7] {\Gzerosub{A,\alphaay}} (x32)
(x23) edge node[swap] {\Gzerosub{A,\oneatog\betaay}} (x33)
(x23) edge node {1} (x34)
(x24) edge node {\Gtwosub{\betaay}} node[swap]{\iso} (x34)
(x31) edge node[pos=.3] {\alphaoneayinv} (x32)
(x32) edge node[pos=.4] {\oneatog\gammaay} (x33)
(x33) edge node[pos=.3] {\Gtwosub{\betaay}} (x34)
;
\end{tikzpicture}\]
In the above diagram:
\begin{itemize}
\item The boundary is a commutative diagram by \eqref{Gammaa-natural-2}.
\item The upper left rectangle is $(\diamondsuit)$ in \eqref{Gammaa-natural-1}, which we want to show is commutative.
\item From the lower left to the upper right counterclockwise, the other five sub-diagrams are commutative by
\begin{itemize}
\item the lax unity \eqref{unity-transformation-pasting} of $\alpha$,
\item the naturality of $\Gzeroa$,
\item the lax left unity \eqref{f0-bicat} of $G$,
\item the lax right unity \eqref{f0-bicat} of $G$, and
\item the lax unity \eqref{unity-transformation-pasting} of $\beta$.
\end{itemize}
\end{itemize}
Therefore, the two composites in $(\diamondsuit)$ become equal after post-composition with three isomorphisms. Since post-composition with an isomorphism is injective, $(\diamondsuit)$ is commutative.
\end{proof}
\begin{lemma}\label{Gamma-is-modification}
$\Gamma : \alpha \to \beta$ in \Cref{def:mod-gamma} is a modification.
\end{lemma}
\begin{proof}
By \eqref{Gammaa-of-x}, the modification axiom \eqref{modification-axiom} for $\Gamma$ means the commutativity of the diagram
\begin{equation}\label{Gamma-modaxiom-1}
\begin{tikzpicture}[xscale=3, yscale=1.5, baseline={(p.base)}]
\def\v{-1}
\def\u{1.2}
\def\h{1}
\draw[0cell]
(0,0) node (x11) {\ftog \alphaby}
($(x11)+(\h,0)$) node (x12) {\ftog\onebtog\betaby}
($(x12)+(\u,0)$) node (x13) {\ftog\betaby}
($(x11)+(0,\v)$) node (x21) {\alphaa\ftofy}
($(x12)+(0,\v)$) node (x22) {\oneatog\betaa\ftofy}
($(x13)+(0,\v)$) node (x23) {\betaa\ftofy}
;
\draw[1cell]
(x11) edge node {\ftog\gammaby} (x12)
(x12) edge node {\ftog\big(\Gzerosub{B,\betaby}\big)^{-1}} (x13)
(x11) edge node[swap] (p) {\alphafy} (x21)
(x13) edge node {\betafy} (x23)
(x21) edge node {\gammaaftofy} (x22)
(x22) edge node {\big(\Gzerosub{A,\betaa\ftofy}\big)^{-1}} (x23)
;
\end{tikzpicture}
\end{equation}
in $GA$ for each morphism $f : A \to B \in \C$ and each object $Y\in FB$. To prove that \eqref{Gamma-modaxiom-1} is commutative, consider the naturality of $\gamma : \intalpha \to \intbeta$ with respect to the morphism
\[\begin{tikzcd}[column sep=large]
(A,\ftofy) \ar{r}{(f,1_{\ftofy})} & (B,Y) \in \intf.
\end{tikzcd}\]
By \eqref{intalpha-fp}, it is the commutative diagram
\begin{equation}\label{Gamma-modaxiom-2}
\begin{tikzpicture}[xscale=3.8, yscale=1.5, baseline={(p.base)}]
\def\v{-1}
\def\h{1}
\draw[0cell]
(0,0) node (x11) {(A,\alphaa\ftofy)}
($(x11)+(\h,0)$) node (x12) {(A,\betaa\ftofy)}
($(x11)+(0,\v)$) node (x21) {(B,\alphaby)}
($(x12)+(0,\v)$) node (x22) {(B,\betaby)}
;
\draw[1cell]
(x11) edge node {\big(\onea, \gammaaftofy\big)} (x12)
(x11) edge node[swap] (p) {\big(f,\alphafyinv \circ \alphaa \oneftofy\big)} (x21)
(x12) edge node {\big(f,\betafyinv \circ \betaa \oneftofy \big)} (x22)
(x21) edge node {\big(\oneb, \gammaby\big)} (x22)
;
\end{tikzpicture}
\end{equation}
in $\intg$.
By \eqref{intf-composite}, the second component of the commutative diagram \eqref{Gamma-modaxiom-2} is the outermost diagram in $GA$ below.
\[\begin{tikzpicture}[xscale=3.5, yscale=1.5]
\def\v{-1}
\def\h{1}
\draw[0cell]
(0,0) node (x11) {\alphaa\ftofy}
($(x11)+(\h,0)$) node (x12) {\oneatog\betaa\ftofy}
($(x12)+(\h,0)$) node (x13) {\oneatog\betaa\ftofy}
($(x11)+(0,\v)$) node (x21) {\alphaa\ftofy}
($(x12)+(0,\v)$) node (x22) {\betaa\ftofy}
($(x13)+(0,\v)$) node (x23) {\oneatog\ftog\betaby}
($(x21)+(0,\v)$) node (x31) {\ftog\alphaby}
($(x22)+(0,\v)$) node (x32) {\ftog\onebtog\betaby}
($(x23)+(0,\v)$) node (x33) {\ftog\betaby}
;
\draw[1cell]
(x11) edge node {\gammaaftofy} (x12)
(x12) edge node {\oneatog\betaa\oneftofy} node[swap]{=} (x13)
(x11) edge node{=} node[swap] (p) {\alphaa\oneftofy} (x21)
(x12) edge node[swap] {\big(\Gzerosub{A,\betaa\ftofy}\big)^{-1}} (x22)
(x13) edge node {\oneatog\betafyinv} node[swap]{\iso} (x23)
(x21) edge node[swap] {\alphafyinv} (x31)
(x33) edge node[swap,pos=.6] {\betafy} (x22)
(x23) edge node[swap]{\iso} node {\big(\Gtwosub{\onea,f}\big)_{\betaby}} (x33)
(x31) edge node[swap] {\ftog\gammaby} (x32)
(x32) edge node[swap] {\big(\Gtwosub{f,\oneb}\big)_{\betaby}} (x33)
;
\end{tikzpicture}\]
In the above diagram:
\begin{itemize}
\item The boundary is a commutative diagram by \eqref{Gamma-modaxiom-2}.
\item The right trapezoid is commutative by the naturality of $\Gzeroa$ and the equality
\[\big(\Gtwosub{\onea,f}\big)_{\betaby} = \big(\Gzerosub{A,\ftog\betaby}\big)^{-1},\]
which follows from the lax left unity \eqref{f0-bicat} of $G$.
\item The left sub-diagram is equivalent to the diagram \eqref{Gamma-modaxiom-1} by the equality
\[\big(\Gtwosub{f,\oneb}\big)_{\betaby} = \ftog\big(\Gzerosub{B,\betaby}\big)^{-1},\]
which follows from the lax right unity \eqref{f0-bicat} of $G$.
\end{itemize}
Therefore, the diagram \eqref{Gamma-modaxiom-1} is commutative.
\end{proof}
\begin{lemma}\label{Gamma-is-unique}
$\Gamma : \alpha \to \beta$ in \Cref{def:mod-gamma} is the unique modification such that
\[\intgamma = \gamma\] as natural transformations.
\end{lemma}
\begin{proof}
By \Cref{intgamma-vertical,Gamma-is-modification}, $\intgamma : \intalpha \to \intbeta$ is a vertical natural transformation. For each object $(A,X) \in \intf$, there are equalities
\[\begin{split}
\left(\intgamma\right)_{(A,X)}
&= \left(\onea, \Gzerosub{A,\betaa X} \circ \Gammaax\right)\\
&= \big(\onea,\gammaax\big)\\
&= \gamma_{(A,X)}
\end{split}\]
by \eqref{intgamma-ax}, \eqref{Gammaa-of-x}, and \eqref{onea-gammaax}, respectively. This shows that $\intgamma = \gamma$. The uniqueness of $\Gamma$ follows from \eqref{intgamma-ax}, \eqref{onea-gammaax}, and the invertibility of $\Gzerosub{A,\betaax}$, because the requirement
\[\Gzerosub{A,\betaa X} \circ \Gammaax = \gammaax\]
forces the definition \eqref{Gammaa-of-x}.
\end{proof}
\begin{theorem}[Grothendieck Construction]\label{thm:grothendieck-iiequivalence}\index{Grothendieck construction!is a 2-equivalence}\index{2-equivalence!Grothendieck construction}\index{Theorem!Grothendieck Construction}
The Grothendieck construction
\[\begin{tikzcd}
\Bicatpscopcat \ar{r}{\int} & \fibofc
\end{tikzcd}\]
in \Cref{grothendieck-iifunctor} is a $2$-equivalence of $2$-categories in the sense of \Cref{definition:2-equivalence}.
\end{theorem}
\begin{proof}
The Grothendieck construction is
\begin{itemize}
\item $1$-essentially surjective on objects by \Cref{grothendieck-one-esssurjective},
\item $1$-fully faithful on $1$-cells by \Cref{grothendieck-one-fullyfaithful}, and
\item fully faithful on $2$-cells by \Cref{Gamma-is-unique}.
\end{itemize}
So it is a $2$-equivalence by the Whitehead \Cref{theorem:whitehead-2-cat} for $2$-categories.
\end{proof}
\section{Bicategorical Grothendieck Construction}
\label{sec:grothendieck-bicat}
The Grothendieck construction discussed so far in this chapter is not the only construction that generalizes the one in \Cref{mot:icat-grothendieck}. In this section we discuss a variation of the Grothendieck construction for diagrams of bicategories.
\begin{motivation}\label{mot:bicat-grothendieck}
In \Cref{sec:grothendieck} we extended the usual Grothendieck construction for a functor $F : \Cop \to \Cat$ by using the $2$-categorical structure on $\Cat$ and allowing $F$ to be a lax functor. Another natural way to extend the Grothendieck construction is to replace the $1$-category $\Cat$ by a $1$-category of bicategories. Recall from \Cref{thm:cat-of-bicat} the $1$-category $\Bicatps$ with small bicategories as objects and pseudofunctors as morphisms.\dqed
\end{motivation}
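The data being generalized can be made concrete in a small, self-contained Lean sketch of the $1$-categorical Grothendieck construction for an indexed category. This is only an illustration: the structure and field names (\texttt{Indexed}, \texttt{GrObj}, \texttt{GrHom}, and so on) are invented for this sketch, do not follow any particular library, and composition and its axioms are omitted.

```lean
/- A naive presentation of categories and of a C-indexed category,
   i.e. a functor F : Cᵒᵖ → Cat.  Names are illustrative only. -/
structure Cat where
  Obj  : Type
  Hom  : Obj → Obj → Type
  id   : (X : Obj) → Hom X X
  comp : {X Y Z : Obj} → Hom Y Z → Hom X Y → Hom X Z

structure Functr (C D : Cat) where
  obj : C.Obj → D.Obj
  map : {X Y : C.Obj} → C.Hom X Y → D.Hom (obj X) (obj Y)

/- For each A in C a category F A, and for each f : A → B a
   reindexing functor f^* : F B → F A (contravariance). -/
structure Indexed (C : Cat) where
  fib  : C.Obj → Cat
  pull : {A B : C.Obj} → C.Hom A B → Functr (fib B) (fib A)

/- Objects and morphisms of ∫F: an object is a pair (A, X) with
   X an object of F A; a morphism (A,X) → (B,Y) is a pair (f, p)
   with f : A → B in C and p : X → f^* Y in F A. -/
structure GrObj {C : Cat} (F : Indexed C) where
  base  : C.Obj
  fiber : (F.fib base).Obj

structure GrHom {C : Cat} (F : Indexed C) (s t : GrObj F) where
  base  : C.Hom s.base t.base
  fiber : (F.fib s.base).Hom s.fiber ((F.pull base).obj t.fiber)
```

A morphism in the sketch pairs an arrow $f$ in the base with an arrow $p : X \to \ftof Y$ in the fiber over the domain, matching the description recalled from \Cref{mot:icat-grothendieck}.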
\begin{definition}\label{def:grothendieck-bicat}
A \emph{$\C$-indexed bicategory}\index{indexed!bicategory}\index{bicategory!indexed} is a functor \[F : \Cop \to \Bicatps.\] For a $\C$-indexed bicategory $F$, its \emph{bicategorical Grothendieck construction}\index{bicategorical Grothendieck construction}\index{Grothendieck construction!bicategorical} is the bicategory\label{notation:intbicf} $\intbi{\C}F$ defined as follows.
\begin{description}
\item[Objects] An object is a pair $(A,X)$ with $A$ an object in $\C$ and $X$ an object in $FA$.
\item[$1$-Cells] Given two objects $(A,X)$ and $(B,Y)$, a $1$-cell
\[(f,p) \in \Big(\intbi{\C}F\Big)\big((A,X),(B,Y)\big)\] is a pair consisting of
\begin{itemize}
\item a morphism $f : A \to B$ in $\C$ and
\item a $1$-cell $p \in (FA)(X,\tothe{f}{F}Y)$, where $\tothe{f}{F}=Ff : FB \to FA$.
\end{itemize}
We call $f$ the \emph{$\C$-component} and $p$ the \emph{$F$-component} of the $1$-cell, and similarly for objects.
\item[Identity $1$-Cells] The identity $1$-cell of an object $(A,X)$ consists of
\begin{itemize}
\item $\C$-component the identity morphism $1_A : A\to A$ in $\C$ and
\item $F$-component the identity $1$-cell $1_X \in (FA)(X,X)$.
\end{itemize}
This is well-defined because \[\tothe{1_A}{F}=F1_A=1_{FA} : FA \to FA\] is the identity strict functor on $FA$ as in \Cref{ex:identity-strict-functor}.
\item[$2$-Cells] Given two $1$-cells $(f,p),(f',p') : (A,X) \to (B,Y)$, a $2$-cell \[\theta : (f,p) \to (f',p')\] requires the equality $f=f' : A \to B$ in $\C$, in which case $\theta : p \to p'$ is a $2$-cell in $(FA)(X,\tothe{f}{F}Y)$.
\item[Identity $2$-Cells] For a $1$-cell $(f,p)$ as above, its identity $2$-cell is the identity $2$-cell $1_p$ of $p$ in $(FA)(X,\tothe{f}{F}Y)$.
\item[Vertical Composition] For $2$-cells
\[\begin{tikzcd}
(f,p) \ar{r}{\theta} & (f,p') \ar{r}{\theta'} & (f,p'') \inspace (FA)(X,\tothe{f}{F}Y),\end{tikzcd}\]
their vertical composite $\theta'\theta : (f,p) \to (f,p'')$ is taken in $(FA)(X,\tothe{f}{F}Y)$.
\item[Horizontal Composition of $1$-Cells] For two $1$-cells
\[\begin{tikzcd}
(A,X) \ar{r}{(f,p)} & (B,Y) \ar{r}{(g,q)} & (C,Z),\end{tikzcd}\]
their horizontal composite $(g,q)(f,p)$ consists of
\begin{itemize}
\item $\C$-component the composite $gf : A \to C$ in $\C$ and
\item $F$-component the horizontal composite
\[\begin{tikzcd}
X \ar{r}{p} & \tothe{f}{F}Y \ar{r}{\tothe{f}{F}q} & \tothe{f}{F}\tothe{g}{F}Z = \tothe{gf}{F}Z \inspace FA.\end{tikzcd}\]
\end{itemize}
\item[Horizontal Composition of $2$-Cells] For two $2$-cells $\theta$ and $\phi$ as in
\[\begin{tikzpicture}[xscale=2.5, yscale=2]
\draw[0cell]
(0,0) node (A) {(A,X)}
(1,0) node (B) {(B,Y)}
(2,0) node (C) {(C,Z),}
;
\draw[1cell]
(A) edge[bend left, shorten <=-.1cm, shorten >=-.1cm] node (f) {(f,p)} (B)
(A) edge[bend right, shorten <=-.1cm, shorten >=-.1cm] node[swap] (fp) {(f,p')} (B)
(B) edge[bend left, shorten <=-.1cm, shorten >=-.1cm] node (g) {(g,q)} (C)
(B) edge[bend right, shorten <=-.1cm, shorten >=-.1cm] node[swap] (gq) {(g,q')} (C)
;
\draw[2cell]
node[between=f and fp at .5, rotate=-90, font=\Large] (t) {\Rightarrow}
(t) node[right] {\theta}
node[between=g and gq at .5, rotate=-90, font=\Large] (phi) {\Rightarrow}
(phi) node[right] {\phi}
;
\end{tikzpicture}\]
their horizontal composite is that of
\[\begin{tikzpicture}[xscale=2.5, yscale=2]
\draw[0cell]
(0,0) node (A) {X}
(1,0) node (B) {\tothe{f}{F}Y}
(2,0) node (C) {\tothe{gf}{F}Z}
;
\draw[1cell]
(A) edge[bend left, shorten <=-.1cm, shorten >=-.1cm] node (f) {p} (B)
(A) edge[bend right, shorten <=-.1cm, shorten >=-.1cm] node[swap] (fp) {p'} (B)
(B) edge[bend left, shorten <=-.1cm, shorten >=-.1cm] node[pos=.5] (g) {\tothe{f}{F}q} (C)
(B) edge[bend right, shorten <=-.1cm, shorten >=-.1cm] node[swap, pos=.5] (gq) {\tothe{f}{F}q'} (C)
;
\draw[2cell]
node[between=A and B at .4, rotate=-90, font=\Large] (t) {\Rightarrow}
(t) node[right] {\theta}
node[between=B and C at .4, rotate=-90, font=\Large] (phi) {\Rightarrow}
(phi) node[right] {\tothe{f}{F}\phi}
;
\end{tikzpicture}\]
in $(FA)\big(X,\tothe{gf}{F}Z\big)$.
\item[Associator] For three $1$-cells
\[\begin{tikzcd}
(A,W) \ar{r}{(f,p)} & (B,X) \ar{r}{(g,q)} & (C,Y) \ar{r}{(h,r)} & (D,Z),\end{tikzcd}\]
the corresponding component of the associator
\[\begin{tikzcd}
\big((h,r)(g,q)\big)(f,p) \ar{r}{a} & (h,r)\big((g,q)(f,p)\big)
\end{tikzcd}\]
is the $2$-cell given by the composite of the pasting diagram
\[\begin{tikzpicture}[xscale=3, yscale=2]
\draw[0cell]
(0,0) node (A) {\tothe{f}{F}X}
(1,0) node (B) {\tothe{f}{F}\tothe{g}{F}Y}
(2,0) node (C) {\tothe{hgf}{F}Z}
(0,-1) node (D) {W}
(1,-1) node (E) {\tothe{f}{F}X}
(2,-1) node (F) {\tothe{f}{F}\tothe{g}{F}Y}
;
\draw[1cell]
(A) edge[bend left=55] node[inner sep=2pt] (fgrq) {\tothe{f}{F}((\tothe{g}{F}r)q)} (C)
(A) edge node[swap]{\tothe{f}{F}q} (B)
(B) edge node[swap]{\tothe{f}{F}\tothe{g}{F}r} (C)
(D) edge node{p} (A)
(D) edge node{(p} (E)
(E) edge node{\tothe{f}{F}q)} (F)
(F) edge node[swap]{\tothe{f}{F}\tothe{g}{F}r} (C)
;
\draw[2cell]
node[between=fgrq and B at .5, rotate=-90, font=\Large] (t) {\Rightarrow}
(t) node[right] {((\tothe{f}{F})^2)^{-1}}
node[between=B and E at .5, rotate=-90, font=\Large] (a) {\Rightarrow}
(a) node[right] {a}
;
\end{tikzpicture}\]
in $FA$ with the indicated bracketing along the codomain. Here
\begin{itemize}
\item $(\tothe{f}{F})^2$ is the $(\tothe{g}{F}r,q)$-component of the lax functoriality constraint of the pseudofunctor $\tothe{f}{F} = Ff : FB\to FA$, and
\item $a$ is the $(\tothe{f}{F}\tothe{g}{F}r,\tothe{f}{F}q,p)$-component of the associator in $FA$.
\end{itemize}
\item[Left Unitor] For a $1$-cell $(f,p) : (A,X) \to (B,Y)$, the corresponding component of the left unitor
\[\begin{tikzcd}
(1_B,1_Y)(f,p) \ar{r}{\ell} & (f,p)\end{tikzcd}\]
is the $2$-cell given by the composite of the pasting diagram
\[\begin{tikzpicture}[xscale=2.5, yscale=1.5]
\draw[0cell]
(0,0) node (X) {X}
(1,1) node (Y1) {\tothe{f}{F}Y}
(2,0) node (Y2) {\tothe{f}{F}Y}
;
\draw[1cell]
(X) edge[bend left] node{p} (Y1)
(Y1) edge[bend left] node{\tothe{f}{F}1_Y} (Y2)
(Y1) edge[bend right] node[swap, inner sep=0pt] {1_{\tothe{f}{F}Y}} (Y2)
(X) edge[bend right=15] node[near start] (p) {p} (Y2)
;
\draw[2cell]
node[between=Y1 and Y2 at .5, rotate=-135, font=\Large] (t) {\Rightarrow}
(t) node[below right] {}
node[between=Y1 and p at .4, rotate=-90, font=\Large] (l) {\Rightarrow}
(l) node[left] {\ell_p}
;
\end{tikzpicture}\]
in $FA$. Here:
\begin{itemize}
\item The unlabeled $2$-cell is $((\tothe{f}{F})^0)^{-1}$, which is the $Y$-component of the lax unity constraint of the pseudofunctor $\tothe{f}{F}$.
\item $\ell_p$ is the $p$-component of the left unitor in $FA$.
\end{itemize}
\item[Right Unitor] For a $1$-cell $(f,p)$ as above, the corresponding component of the right unitor
\[\begin{tikzcd}
(f,p)(1_A,1_X) \ar{r}{r} & (f,p)\end{tikzcd}\]
is the $p$-component of the right unitor
\[\begin{tikzpicture}[xscale=2, yscale=1.3]
\draw[0cell]
(0,0) node (X1) {X}
(1,1) node (X2) {\tothe{1_A}{F}X=X}
(2,0) node (Y) {\tothe{1_A}{F}\tothe{f}{F}Y= \tothe{f}{F}Y}
;
\draw[1cell]
(X1) edge[bend left] node {1_X} (X2)
(X2) edge[bend left] node{\tothe{1_A}{F}p = p} (Y)
(X1) edge[bend right=15] node[swap, pos=.6] (p) {p} (Y)
;
\draw[2cell]
node[between=X2 and p at .5, rotate=-90, font=\Large] (l) {\Rightarrow}
(l) node[right] {r_p}
;
\end{tikzpicture}\]
in $FA$.
\end{description}
This finishes the definition of the bicategorical Grothendieck construction $\intbi{\C}F$. We show that $\intbi{\C}F$ is a bicategory in \Cref{grothendieck-bicat} below.
\end{definition}
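The cell data just defined can be summarized in a small Lean sketch. This is a data-only illustration under simplifying assumptions: the indexing is treated as strict, all coherence data (lax functoriality and unity constraints, associators, unitors) is omitted, and every name (\texttt{Base}, \texttt{IndexedBicat}, \texttt{GObj}, and so on) is invented for this sketch.

```lean
/- Data-only sketch of the bicategorical Grothendieck construction.
   Coherence data is omitted; all names are illustrative. -/
structure Base where                     -- the indexing 1-category C
  Obj  : Type
  Hom  : Obj → Obj → Type
  comp : {A B C : Obj} → Hom B C → Hom A B → Hom A C

structure Bicat where                    -- only the cells used below
  Obj   : Type
  Hom   : Obj → Obj → Type                         -- 1-cells
  Two   : {X Y : Obj} → Hom X Y → Hom X Y → Type   -- 2-cells
  hcomp : {X Y Z : Obj} → Hom Y Z → Hom X Y → Hom X Z

structure PsFun (B C : Bicat) where      -- underlying data of a pseudofunctor
  obj : B.Obj → C.Obj
  map : {X Y : B.Obj} → B.Hom X Y → C.Hom (obj X) (obj Y)

structure IndexedBicat (C : Base) where  -- F on objects and arrows
  fib  : C.Obj → Bicat
  pull : {A B : C.Obj} → C.Hom A B → PsFun (fib B) (fib A)

variable {C : Base} (F : IndexedBicat C)

structure GObj where                     -- objects (A, X)
  base  : C.Obj
  fiber : (F.fib base).Obj

structure GOne (s t : GObj F) where      -- 1-cells (f, p), p : X → f^* Y
  base  : C.Hom s.base t.base
  fiber : (F.fib s.base).Hom s.fiber ((F.pull base).obj t.fiber)

/- A 2-cell (f, p) ⇒ (f', p') requires f = f'; it is then a 2-cell
   p ⇒ p' in the hom category (F A)(X, f^* Y). -/
def GTwo (s t : GObj F) (f : C.Hom s.base t.base)
    (p p' : (F.fib s.base).Hom s.fiber ((F.pull f).obj t.fiber)) : Type :=
  (F.fib s.base).Two p p'
```

With this data, the horizontal composite of $(f,p)$ and $(g,q)$ has $\C$-component $gf$ and $F$-component the composite of $p$ with $\tothe{f}{F}q$, as in the definition above; expressing it in the sketch would additionally require the identification $\tothe{f}{F}\tothe{g}{F} = \tothe{gf}{F}$, which holds because $F$ is a functor.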
\begin{explanation}
In \Cref{def:grothendieck-bicat}, the codomain of the functor $F$ is the category $\Bicatps$, with pseudofunctors as morphisms, instead of the category $\Bicat$, with lax functors as morphisms. The reason for this restriction is that, to define the associator in $\intbi{\C}F$, we need the inverse of the lax functoriality constraint of $\tothe{f}{F}=Ff$. Moreover, to define the left unitor in $\intbi{\C}F$, we need the inverse of the lax unity constraint of $\tothe{f}{F}$.\dqed
\end{explanation}
\begin{theorem}\label{grothendieck-bicat}
For each $\C$-indexed bicategory $F : \Cop \to \Bicatps$, the bicategorical Grothendieck construction $\intbi{\C} F$ in \Cref{def:grothendieck-bicat} is a bicategory.
\end{theorem}
\begin{proof}
Every hom category in $\intbi{\C} F$ is actually a category because its vertical composition and identity $2$-cells are defined in the hom categories in the bicategories $FA$ for $A\in\C$.
For each morphism $f : A \to B$ in $\C$, $\tothe{f}{F}=Ff : FB \to FA$ is a pseudofunctor, which preserves vertical composition and identity $2$-cells. So the horizontal composite of two identity $2$-cells in $\intbi{\C}F$ is the horizontal composite of two identity $2$-cells in some bicategory $FA$, which must be an identity $2$-cell. Similarly, the middle four exchange \eqref{middle-four} is true in $\intbi{\C}F$ because it is true in each bicategory $FA$. For the same reason, the associator $a$, the left unitor $\ell$, and the right unitor $r$ in $\intbi{\C}F$ are natural. It remains to check the pentagon axiom and the unity axiom in $\intbi{\C}F$.
For composable $1$-cells
\[\begin{tikzcd}[column sep=large]
(A,X) \ar{r}{(f,p)} & (B,Y) \ar{r}{(1_B,1_Y)} & (B,Y) \ar{r}{(g,q)} & (C,Z)\end{tikzcd}\]
in $\intbi{\C}F$, the unity axiom \eqref{bicat-unity} states that the diagram
\[\begin{tikzcd}[column sep=tiny]
\big((g,q)(1_B,1_Y)\big)(f,p) \ar[end anchor=west]{dr}[swap]{r*1} \ar{rr}{a} && (g,q)\big((1_B,1_Y)(f,p)\big) \ar[end anchor=east]{dl}{1*\ell}\\
& (g,q)(f,p) &\end{tikzcd}\]
is commutative. Both composites have the composite \[g1_Bf = gf : A \to C\] as the $\C$-component. For the $F$-component, we need to show the equality
\[\begin{tikzpicture}[xscale=2, yscale=1.3]
\draw[0cell]
(0,0) node (X) {X}
($(X)+(.6,0)$) node (Y1) {\tothe{f}{F}Y}
($(Y1)+(1,0)$) node (Y2) {\tothe{f}{F}Y}
($(Y2)+(0,1)$) node (y) {}
($(Y2)+(1,0)$) node (Z) {\tothe{f}{F}\tothe{g}{F}Z}
($(Z)+(.6,0)$) node[font=\LARGE] {=}
($(Z)+(1,0)$) node (X2) {X}
($(X2)+(.6,0)$) node (Y3) {\tothe{f}{F}Y}
($(Y3)+(1.3,0)$) node (Z2) {\tothe{f}{F}\tothe{g}{F}Z}
;
\draw[1cell]
(X) edge node (p) {p} (Y1)
(Y1) edge node (fiy) {\tothe{f}{F}1_Y} (Y2)
(Y2) edge node[swap]{\tothe{f}{F}q} (Z)
(Y1) edge[out=90, in=90, looseness=1.1] node[pos=.2] (fqiy) {\tothe{f}{F}(q1_Y)} (Z)
(Y1) edge[bend right=60] node[swap, inner sep=2pt] (ify) {\scalebox{.8}{$1_{\tothe{f}{F}Y}$}} (Y2)
(X) edge[out=-90, in=-90, looseness=1.7] node[swap] (p2) {p} (Y2)
(X2) edge node{p} (Y3)
(Y3) edge[bend left=60] node{\tothe{f}{F}(q1_Y)} (Z2)
(Y3) edge[bend right=60] node[swap]{\tothe{f}{F}q} (Z2)
;
\draw[2cell]
node[between=y and Y2 at .5, rotate=-90, font=\Large] (f2) {\Rightarrow}
(f2) node[right] {f^2}
node[between=fiy and ify at .5, rotate=-90, font=\large] (f0) {\Rightarrow}
(f0) node[right] {\scalebox{.8}{$f^0$}}
node[between=Y1 and p2 at .5, rotate=-90, font=\Large] (l) {\Rightarrow}
(l) node[left] {\ell_p}
node[between=Y3 and Z2 at .4, rotate=-90, font=\Large] (fr) {\Rightarrow}
(fr) node[right] {\tothe{f}{F}r_q}
;
\end{tikzpicture}\]
of pasting diagrams, with the $2$-cells labeled $f^2$ and $f^0$ given by $((\tothe{f}{F})^2)^{-1}$ and $((\tothe{f}{F})^0)^{-1}$, respectively. Since $\tothe{f}{F}r_q$ is an invertible $2$-cell, it suffices to show that the pasting diagram on the left-hand side below
\[\begin{tikzpicture}[xscale=1.8, yscale=2]
\draw[0cell]
(0,0) node (X) {X}
($(X)+(.6,0)$) node (Y1) {\tothe{f}{F}Y}
($(Y1)+(1,0)$) node (Y2) {\tothe{f}{F}Y}
($(Y2)+(0,1)$) node (y) {}
($(Y2)+(1,0)$) node (Z) {\tothe{f}{F}\tothe{g}{F}Z}
($(Z)+(.6,0)$) node[font=\LARGE] {=}
($(Z)+(1,0)$) node (X2) {X}
($(X2)+(.6,0)$) node (Y3) {\tothe{f}{F}Y}
($(Y3)+(1,0)$) node (Y4) {\tothe{f}{F}Y}
($(Y4)+(0,1)$) node (y2) {}
($(Y4)+(1,0)$) node (Z2) {\tothe{f}{F}\tothe{g}{F}Z}
;
\draw[1cell]
(X) edge node (p) {p} (Y1)
(Y1) edge node (fiy) {\scalebox{.8}{$\tothe{f}{F}1_Y$}} (Y2)
(Y2) edge node[swap]{\tothe{f}{F}q} (Z)
(Y1) edge[bend left=50] node[pos=0.4, inner sep=0pt]{\scalebox{.7}{$\tothe{f}{F}(q1_Y)$}} (Z)
(Y1) edge[out=90, in=90, looseness=1.6] node[near start]{\tothe{f}{F}q} (Z)
(Y1) edge[bend right=60] node[swap, inner sep=2pt] (ify) {\scalebox{.8}{$1_{\tothe{f}{F}Y}$}}
(Y2)
(X) edge[out=-90, in=-90, looseness=1.4] node[swap] (p2) {p} (Y2)
(X2) edge node (p3) {p} (Y3)
(Y3) edge[out=90, in=90, looseness=1.6] node[near start]{\tothe{f}{F}q} (Z2)
(Y3) edge[bend right=60] node[inner sep=2pt]{\scalebox{.8}{$1_{\tothe{f}{F}Y}$}} (Y4)
(Y4) edge node[swap]{\tothe{f}{F}q} (Z2)
(X2) edge[out=-90, in=-90, looseness=1.4] node[swap] (p4) {p} (Y4)
;
\draw[2cell]
node[between=Y2 and y at .3, rotate=-90, font=\Large] (f2) {\Rightarrow}
(f2) node[right] {f^2}
node[between=Y2 and y at .8, rotate=-90, font=\Large] (fr) {\Rightarrow}
(fr) node[right] {\tothe{f}{F}r_q^{-1}}
node[between=fiy and ify at .5, rotate=-90, font=\Large] (f0) {\Rightarrow}
(f0) node[right] {\scalebox{.8}{$f^0$}}
node[between=Y1 and p2 at .5, rotate=-90, font=\Large] (l) {\Rightarrow}
(l) node[left] {\ell_p}
node[between=Y4 and y2 at .5, rotate=-90, font=\Large] (r) {\Rightarrow}
(r) node[right] {r^{-1}_{\tothe{f}{F}q}}
node[between=Y3 and p4 at .5, rotate=-90, font=\Large] (ell) {\Rightarrow}
(ell) node[left] {\ell_p}
;
\end{tikzpicture}\]
has composite the identity $2$-cell of $(\tothe{f}{F}q)p$. By the lax right unity axiom \eqref{f0-bicat} of the pseudofunctor $\tothe{f}{F}$, the left pasting diagram and the right pasting diagram above have the same composites. By the unity axiom \eqref{bicat-unity} in the bicategory $FA$, the composite of the right pasting diagram above is the identity $2$-cell of $(\tothe{f}{F}q)p$. This proves the unity axiom \eqref{bicat-unity} in $\intbi{\C}F$.
Next, suppose given composable $1$-cells
\[\begin{tikzcd}
(A,V) \ar{r}{(f,p)} & (B,W) \ar{r}{(g,q)} & (C,X) \ar{r}{(h,r)} & (D,Y) \ar{r}{(i,s)} & (E,Z)
\end{tikzcd}\]
in $\intbi{\C}F$. The pentagon axiom \eqref{bicat-pentagon} states that the diagram
\[\begin{tikzpicture}[xscale=2.5, yscale=1.5]
\draw[0cell]
(0,0) node (A) {\big[(i,s)(h,r)\big]\big[(g,q)(f,p)\big]}
($(A)+(-1,-1)$) node (B) {\big[\big((i,s)(h,r)\big)(g,q)\big](f,p)}
($(A)+(1,-1)$) node (C) {(i,s)\big[(h,r)\big((g,q)(f,p)\big)\big]}
($(A)+(-1,-2)$) node (D) {\big[(i,s)\big((h,r)(g,q)\big)\big](f,p)}
($(A)+(1,-2)$) node (E) {(i,s)\big[\big((h,r)(g,q)\big)(f,p)\big]}
;
\draw[1cell]
(B) edge node{a} (A)
(A) edge node{a} (C)
(B) edge node[swap]{a*1} (D)
(D) edge node{a} (E)
(E) edge node[swap]{1*a} (C)
;
\end{tikzpicture}\]
is commutative. Both composites have $\C$-component $ihgf : A \to E$.
For the $F$-component, we need to check the equality
\[\begin{tikzpicture}[xscale=3.8, yscale=2]
\draw[0cell]
(0,0) node (V) {V}
($(V)+(0,-1)$) node (W) {\tothe{f}{F}W}
($(W)+(1,0)$) node (Z) {(ihgf)^*Z}
($(W)+(0,-1)$) node (X) {\tothe{gf}{F}X}
($(X)+(1,0)$) node (Y) {\tothe{hgf}{F}Y}
($(Z)+(.45,0)$) node[font=\huge] {=}
($(V)+(1.8,0)$) node (V1) {V}
($(V1)+(0,-1)$) node (W1) {\tothe{f}{F}W}
($(W1)+(1,0)$) node (Z1) {(ihgf)^*Z}
($(W1)+(.3,0)$) node (w) {}
($(W1)+(0,-1)$) node (X1) {\tothe{gf}{F}X}
($(X1)+(1,0)$) node (Y1) {\tothe{hgf}{F}Y}
;
\draw[1cell]
(V) edge node[swap]{p} (W)
(W) edge[bend left=70, looseness=2] node[pos=.5]{\scalebox{.8}{$\tothe{f}{F}\big(\tothe{g}{F}((\tothe{h}{F}s)r)q\big)$}} (Z)
(W) edge node[swap]{\tothe{f}{F}q} (X)
(X) edge node[pos=.7] (xz) {} (Z)
(X) edge node[swap]{\tothe{gf}{F}r} (Y)
(Y) edge node[midway,below,sloped]{\tothe{(hgf)}{F}s} (Z)
(V1) edge node[swap]{p} (W1)
(W1) edge[bend left=70, looseness=2] node[pos=.5] (fgh) {\scalebox{.8}{$\tothe{f}{F}\big(\tothe{g}{F}((\tothe{h}{F}s)r)q\big)$}} (Z1)
(W1) edge node[inner sep=1pt] (fhg) {\scalebox{.7}{$\tothe{f}{F}([(\tothe{hg}{F}s)(\tothe{g}{F}r)]q)$}} (Z1)
(W1) edge node[swap]{\tothe{f}{F}q} (X1)
(X1) edge node[pos=.7] (xz1){} (Z1)
(X1) edge node[swap]{\tothe{gf}{F}r} (Y1)
(Y1) edge node[midway,below,sloped]{\tothe{(hgf)}{F}s} (Z1)
;
\draw[2cell]
node[between=W and Z at .4, rotate=-90, font=\Large] (f) {\Rightarrow}
(f) node[right] {f^2}
node[between=xz and Y at .4, rotate=-45, font=\Large] (gf) {\Rightarrow}
(gf) node[below left] {(gf)^2}
node[between=fgh and w at .5, rotate=-90, font=\Large] (fgi) {\Rightarrow}
(fgi) node[right] {\tothe{f}{F}(g^2*1)}
node[between=w and X1 at .3, rotate=-90, font=\Large] (f2) {\Rightarrow}
(f2) node[right] {f^2}
node[between=xz1 and Y1 at .4, rotate=-45, font=\Large] (f3) {\Rightarrow}
(f3) node[below left] {f^2}
;
\end{tikzpicture}\]
of pasting diagrams, with:
\begin{itemize}
\item $\tothe{gf}{F}((\tothe{h}{F}s)r)$ the unlabeled arrow on the left-hand side;
\item $\tothe{f}{F}[(\tothe{hg}{F}s)(\tothe{g}{F}r)]$ the unlabeled arrow on the right-hand side;
\item $f^2=((\tothe{f}{F})^2)^{-1}$, and similarly for $g^2$ and $(gf)^2$.
\end{itemize}
Similar to the proof of the unity axiom above, this proof uses the pentagon axiom in the bicategory $FA$, the naturality and lax associativity of $(\tothe{f}{F})^2$, and the lax associativity constraint of the composite $\tothe{f}{F}\tothe{g}{F}$ in \Cref{def:lax-functors-composition}. We ask the reader to fill in the details in \Cref{exer:grothendieck-bicat-pentagon}.
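For the reader filling in \Cref{exer:grothendieck-bicat-pentagon}, we record the last of these ingredients in the present notation: by \Cref{def:lax-functors-composition}, for composable $1$-cells $r$ and $s$ in $FC$, the lax functoriality constraint of the composite $\tothe{f}{F}\tothe{g}{F}$ is the composite
\[\big(\tothe{f}{F}\tothe{g}{F}s\big)\big(\tothe{f}{F}\tothe{g}{F}r\big) \to \tothe{f}{F}\big((\tothe{g}{F}s)(\tothe{g}{F}r)\big) \to \tothe{f}{F}\tothe{g}{F}(sr)\]
of $(\tothe{f}{F})^2$ followed by $\tothe{f}{F}$ applied to $(\tothe{g}{F})^2$.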
\end{proof}
\section{Exercises and Notes}\label{sec:grothendieck-exercises}
\begin{exercise}
In the proof of \Cref{lax-grothendieck-oplax-cone}, check the oplax unity axiom and the oplax naturality axiom for $\pi$.
\end{exercise}
\begin{exercise}\label{exer:pistar-phi}
Near the end of the proof of \Cref{thm:lax-grothendieck-lax-colimit}, check that the composite $\pi^*\phi$ is the identity functor on $\oplaxcone(F,\conof{1.3})$.
\end{exercise}
\begin{exercise}\label{exer:grothendieck-bicat-pentagon}
In the proof of \Cref{grothendieck-bicat}, finish the proof of the pentagon axiom \eqref{bicat-pentagon} in the bicategorical Grothendieck construction $\intbi{\C}F$.
\end{exercise}
\subsection*{Notes}
\begin{note}[Discussion of Literature]
The Grothendieck construction described above is due to Grothendieck \cite{grothendieck}. A discussion of the $1$-categorical Grothendieck construction in \Cref{mot:icat-grothendieck} can be found in \cite[Chapter 12]{barr-wells-category}. There are many other variations of the Grothendieck construction, such as those in \cite{beardsley-wong,buckley,moerdijk-weiss,street_fibrations,street_fibrations-correction}. For discussion of \Cref{thm:lax-grothendieck-lax-colimit}---that the Grothendieck construction is a lax colimit---in the context of $\infty$-categories, the reader is referred to \cite{ghn}.
\end{note}
\begin{note}[{\Cref{thm:grothendieck-iiequivalence}}]
The main result of this chapter, \Cref{thm:grothendieck-iiequivalence}, is an important result in category theory that allows one to go between pseudofunctors $\Cop\to\Cat$ and fibrations over $\C$. Less detailed proofs of this $2$-equivalence can be found in \cite[B1.3.6]{elephant} and \cite[8.3]{borceux2}. Similar to \Cref{ch:fibration}, we are not aware of any completely detailed proof of \Cref{thm:grothendieck-iiequivalence} besides the one given in this chapter.
\end{note}
\begin{note}[Bicategorical Grothendieck Construction]
The bicategorical Grothendieck construction in \Cref{sec:grothendieck-bicat} is from \cite[Section 7]{ccg}, which also contains a detailed discussion of the\index{bicategorical Grothendieck construction!homotopy type} homotopy type of the bicategorical Grothendieck construction. A proof of \Cref{grothendieck-bicat} is, however, not given there.
\end{note}
To prove this, consider a pre-raise $\preraise{\liftf p}{g}{h}$ as in
\begin{equation}\label{unique-raise-hplus}
\begin{tikzpicture}[xscale=2, yscale=1.5, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {X}
($(x11)+(1,0)$) node (x12) {\yf}
($(x12)+(1,0)$) node (x13) {Y}
($(x12)+(0,1)$) node (z) {Z}
($(x13)+(1.2,1)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(1.2,-1)$) node (y11) {A}
($(y11)+(1,0)$) node (y12) {A}
($(y12)+(1,0)$) node (y13) {B}
($(y12)+(0,1)$) node (pz) {PZ}
;
\draw[1cell]
(x11) edge node {p} (x12)
(x12) edge node {\liftf} (x13)
(z) edge node {g} (x13)
(z) edge[dashed] node[swap]{\exists!\, \raiseof{h}?} (x11)
(s) edge[|->] node {P} (t)
(y11) edge node{\onea} (y12)
(y12) edge node {f} (y13)
(pz) edge node {g'} (y13)
(pz) edge node[swap] {h} (y11)
;
\end{tikzpicture}
\end{equation}
in which $g' = Pg$. We must show that there is a unique raise $\raiseof{h}$.
To prove the existence of such a raise, first consider the pre-raise $\preraise{\liftgp}{g}{1_{PZ}}$ as in
\begin{equation}\label{raise-d-gp}
\begin{tikzpicture}[xscale=2, yscale=1.3, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\ygp}
($(x11)+(1,0)$) node (x12) {Y}
($(x11)+(.5,1)$) node (z) {Z}
($(x12)+(1.2,1)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(1.2,-1)$) node (y11) {PZ}
($(y11)+(1,0)$) node (y12) {B}
($(y11)+(.5,1)$) node (pz) {PZ}
;
\draw[1cell]
(x11) edge node {\liftgp} (x12)
(z) edge node {g} (x12)
(z) edge[dashed, shorten >=-.1cm] node[swap]{\exists!\, d} (x11)
(s) edge[|->] node {P} (t)
(y11) edge node{g'} (y12)
(pz) edge node {g'} (y12)
(pz) edge node[swap] {1} (y11)
;
\end{tikzpicture}
\end{equation}
in which $\liftgp$ is the chosen Cartesian lift of $\prelift{Y}{g'}$. Since $\liftgp$ is a Cartesian morphism, a unique raise $d : Z \to \ygp$ exists. Using $d$, consider the pre-raise
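Explicitly, $d$ is the unique morphism such that
\[\liftgp d = g \quad\text{and}\quad Pd = 1_{PZ},\]
so in particular $d$ lies in the fiber $\Pinv(PZ)$.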
\[\preraise{(f,p)}{(g',d)}{h}\]
with respect to $\Usubf$ as displayed below.
\begin{equation}\label{raise-hs-fq}
\begin{tikzpicture}[xscale=3, yscale=1.3, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {(A,X)}
($(x11)+(1,0)$) node (x12) {(B,Y)}
($(x11)+(.5,1)$) node (z) {(PZ,Z)}
($(x12)+(1.2,1)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(1.2,-1)$) node (y11) {A}
($(y11)+(1,0)$) node (y12) {B}
($(y11)+(.5,1)$) node (pz) {PZ}
;
\draw[1cell]
(x11) edge node {(f,p)} (x12)
(z) edge node {(g',d)} (x12)
(z) edge[dashed] node[swap]{\exists!\, (h,s)} (x11)
(s) edge[|->] node {P} (t)
(y11) edge node{f} (y12)
(pz) edge node {g'} (y12)
(pz) edge node[swap] {h} (y11)
;
\end{tikzpicture}
\end{equation}
Since $(f,p)$ is a Cartesian morphism in $\intf$ by assumption, this pre-raise has a unique raise $(h,s)$. By \eqref{intf-composite}, the morphism $s \in \Pinv(PZ)$ is uniquely characterized by the property that the diagram
\begin{equation}\label{s-in-pinvpz}
\begin{tikzcd}
Z \ar{d}[swap]{s} \ar{r}{d} & \ygp\\
\xh \ar{r}{\htof p} & \yfcommah \ar{u}[swap]{\Ftwo}
\end{tikzcd}
\end{equation}
in $\Pinv(PZ)$ is commutative. In the diagram \eqref{s-in-pinvpz}:
\begin{itemize}
\item $p : X \to \yf \in \Pinv(A)$ is part of the given Cartesian morphism $(f,p)$.
\item $\htof p$ is defined as the unique raise as in \eqref{preraise-fe}. In other words, $\htof p\in\Pinv(PZ)$ is the unique morphism such that the diagram
\begin{equation}\label{htof-p}
\begin{tikzcd}
\xh \ar{d}[swap]{\htof p} \ar{r}{\lifth_X} & X \ar{d}{p}\\
\yfcommah \ar{r}{\lifth_{\yf}} & \yf
\end{tikzcd}
\end{equation}
in $\E$ is commutative, with $\lifth_X$ and $\lifth_{\yf}$ the chosen Cartesian lifts of $\prelift{X}{h}$ and $\prelift{\yf}{h}$, respectively.
\item $\Ftwo \in \Pinv(PZ)$ is defined as in \eqref{ftwofg-z}. In other words, it is the unique isomorphism in $\Pinv(PZ)$ such that the diagram
\begin{equation}\label{ftwo-yfh}
\begin{tikzcd}
\yfcommah \ar{d}[swap]{\lifth_{\yf}} \ar{r}{\Ftwo} & \ygp \ar{d}{\liftgp}\\
\yf \ar{r}{\liftf} & Y\end{tikzcd}
\end{equation}
in $\E$ is commutative.
\end{itemize}
We aim to show that the composite
\begin{equation}\label{s-lifthx}
\begin{tikzcd}
Z \ar{r}{s} & \xh \ar{r}{\lifth_X} & X \in \E
\end{tikzcd}
\end{equation}
is the desired unique raise in \eqref{unique-raise-hplus}.
To see that $\lifth_X s$ is a raise in \eqref{unique-raise-hplus}, first observe that
\[P(\lifth_X s) = P(\lifth_X) P(s) = h 1_{PZ} = h.\] To check the commutativity of the left triangle in \eqref{unique-raise-hplus}, consider the following diagram in $\E$.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.3, baseline={(x21.base)}]
\draw[0cell]
(0,0) node (x11) {Z}
($(x11)+(1,0)$) node (x12) {}
($(x12)+(1,0)$) node (x13) {Y}
($(x11)+(0,-1)$) node (x21) {\xh}
($(x12)+(0,-1)$) node (x22) {\yfcommah}
($(x13)+(0,-1)$) node (x23) {\ygp}
($(x21)+(0,-1)$) node (x31) {X}
($(x22)+(0,-1)$) node (x32) {\yf}
($(x23)+(0,-1)$) node (x33) {Y}
($(x13)+(1.2,0)$) node[inner sep=0pt] (ne) {}
($(x33)+(1.2,0)$) node[inner sep=0pt] (se) {}
;
\draw[1cell]
(x11) edge node {g} (x13)
(x11) edge node[swap] {s} (x21)
(x11) edge node {d} (x23)
(x21) edge node {\htof p} (x22)
(x22) edge node[swap] {\Ftwo} (x23)
(x21) edge node[swap] {\lifth_X} (x31)
(x22) edge node[swap] {\lifth_{\yf}} (x32)
(x23) edge node[swap] {\liftgp} (x33)
(x31) edge node[swap] {p} (x32)
(x32) edge node[swap] {\liftf} (x33)
(x13) edge[-,shorten >=-1pt] (ne)
(ne) edge[-,shorten >=-1pt, shorten <=-1pt] node {1} (se)
(se) edge[shorten <=-1pt] (x33)
;
\end{tikzpicture}\]
Starting at the top and going counter-clockwise, the four sub-diagrams above are commutative by the left commutative triangle in \eqref{raise-d-gp}, \eqref{s-in-pinvpz}, \eqref{htof-p}, and \eqref{ftwo-yfh}, respectively. This shows that the composite $\lifth_X s$ in \eqref{s-lifthx} is a raise in \eqref{unique-raise-hplus}.
It remains to prove the uniqueness of $\lifth_X s$. Suppose $\hplus$ is another raise in \eqref{unique-raise-hplus}. We must show that
\begin{equation}\label{hplus-lifthx-s}
\hplus = \lifth_X s.
\end{equation}
Consider the pre-raise $\preraise{\lifth_X}{\hplus}{1_{PZ}}$ with respect to $P$, as displayed below.
\begin{equation}\label{raise-lifthx-hplus}
\begin{tikzpicture}[xscale=2, yscale=1.3, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {\xh}
($(x11)+(1,0)$) node (x12) {X}
($(x11)+(.5,1)$) node (z) {Z}
($(x12)+(1.2,1)$) node (s) {}
($(s)+(.4,0)$) node (t) {}
($(t)+(1.2,-1)$) node (y11) {PZ}
($(y11)+(1,0)$) node (y12) {A}
($(y11)+(.5,1)$) node (pz) {PZ}
;
\draw[1cell]
(x11) edge node {\lifth_X} (x12)
(z) edge node {\hplus} (x12)
(z) edge[dashed, shorten >=-.1cm] node[swap]{\exists!\, t} (x11)
(s) edge[|->] node {P} (t)
(y11) edge node{h} (y12)
(pz) edge node {h} (y12)
(pz) edge node[swap] {1} (y11)
;
\end{tikzpicture}
\end{equation}
Since $\lifth_X$ is a Cartesian morphism by assumption, there exists a unique raise $t : Z \to \xh \in \Pinv(PZ)$ such that
\[\hplus = \lifth_X t.\]
Therefore, to prove the equality in \eqref{hplus-lifthx-s}, it suffices to show that $t=s$. Furthermore, by the uniqueness of $s$ in \eqref{s-in-pinvpz} and of $d$ in \eqref{raise-d-gp}, to show that $t=s$, it is enough to show that the outermost diagram in
\[\begin{tikzpicture}[xscale=2, yscale=1.3, baseline={(s.base)}]
\draw[0cell]
(0,0) node (x11) {Z}
($(x11)+(1,0)$) node (x12) {X}
($(x12)+(1,0)$) node (x13) {\yf}
($(x13)+(1,0)$) node (x14) {Y}
($(x11)+(0,-1)$) node[inner sep=0pt] (x21) {}
($(x12)+(0,-1)$) node (x22) {\xh}
($(x13)+(0,-1)$) node (x23) {\yfcommah}
($(x14)+(0,-1)$) node (x24) {\ygp}
($(x11)+(0,1)$) node[inner sep=0pt] (nw) {}
($(x14)+(0,1)$) node[inner sep=0pt] (ne) {}
;
\draw[1cell]
(x11) edge node {\hplus} (x12)
(x12) edge node {p} (x13)
(x13) edge node {\liftf} (x14)
(x22) edge node[swap] {\lifth_X} (x12)
(x23) edge node[swap] {\lifth_{\yf}} (x13)
(x24) edge node[swap] {\liftgp} (x14)
(x22) edge node[swap] {\htof p} (x23)
(x23) edge node[swap] {\Ftwo} (x24)
(x11) edge[-,shorten >=-1pt] (nw)
(nw) edge[-,shorten <=-1pt,shorten >=-1pt] node {g} (ne)
(ne) edge[shorten <=-1pt] (x14)
(x11) edge[-,shorten >=-1pt] node[swap] {t} (x21)
(x21) edge[shorten <=-1pt] (x22)
;
\end{tikzpicture}\]
in $\E$ is commutative. In this diagram:
\begin{itemize}
\item The top rectangle is the left commutative triangle in \eqref{unique-raise-hplus}.
\item The left square in the second row is the left commutative triangle in \eqref{raise-lifthx-hplus}.
\item The middle square and the right square are \eqref{htof-p} and \eqref{ftwo-yfh}, respectively.
\end{itemize}
We have proved the desired equality \eqref{hplus-lifthx-s}.
\chapter{Further \texorpdfstring{$2$}{2}-Dimensional Categorical Structures}
\label{ch:monoidal_bicat}
In this chapter we give short introductions to further structures in
dimension 2.
\cref{sec:braided-monoidal-bicat} covers monoidal bicategories as
one-object tricategories. It includes definitions of braided, sylleptic, and
symmetric monoidal structures.
\cref{sec:gray-tensor} covers the Gray tensor product for
$2$-categories, a monoidal product that is weaker than the Cartesian
product and is a new phenomenon in dimension 2.
\cref{sec:double-cat} covers general double categories, also known as
pseudo double categories in the literature. Several important
examples of bicategories are obtained from double categories.
\cref{sec:mon-double-cat} extends \cref{sec:double-cat} to define
monoidal double categories.
For each topic, notes at the end of the chapter give additional topics
of interest and references to the literature. As in previous
chapters, the Bicategorical Pasting \Cref{thm:bicat-pasting-theorem}
and \Cref{conv:boundary-bracketing} are used to interpret pasting
diagrams in bicategories.
\section{Braided, Sylleptic, and Symmetric Monoidal Bicategories}
\label{sec:braided-monoidal-bicat}
In this section we give the definitions of:
\begin{itemize}
\item a monoidal bicategory in \Cref{def:monoidal-bicat};
\item a braided monoidal bicategory in \Cref{def:braided-monbicat};
\item a sylleptic monoidal bicategory in \Cref{def:sylleptic-monbicat};
\item a symmetric monoidal bicategory in \Cref{def:symmetric-monbicat}.
\end{itemize}
Further explanations are given after each definition.
\begin{motivation}
Recall from \Cref{ex:monoid-as-cat} that a monoid may be regarded as a one-object category. Also, by \Cref{ex:moncat-bicat} a monoidal category may be identified with a one-object bicategory. The analogue for the next dimension is the following definition.
\end{motivation}
Recall \Cref{def:tricategory} of a tricategory and \Cref{def:2category} of a $2$-category.
\begin{definition}\label{def:monoidal-bicat}
A \emph{monoidal bicategory}\index{monoidal bicategory}\index{bicategory!monoidal} is a tricategory with one object. A \emph{monoidal $2$-category}\index{monoidal 2-category}\index{2-category!monoidal} is a monoidal bicategory whose only hom bicategory is a $2$-category.
\end{definition}
\begin{explanation}\label{expl:monoidal-bicat}
Interpreting \Cref{def:tricategory} in the one-object case, we obtain the following explicit description of a monoidal bicategory
\[\big(\B,\tensor,\monunit,a,\ell,r,\pi,\mu,\lambda,\rho\big).\]
\begin{description}
\item[Base Bicategory] It has a bicategory $\B$ called the \emph{base bicategory}. The $n$-fold product bicategory $\B \times \cdots \times \B$ is written as $\B^n$ below.
\item[Composition] It has a pseudofunctor
\[\begin{tikzcd}[column sep=huge]
\B \times \B \ar{r}{(\tensor,\tensortwo,\tensorzero)} & \B\end{tikzcd}\]
called the \emph{composition}.
\item[Identity] It has a pseudofunctor
\[\begin{tikzcd}[column sep=huge]
\boldone \ar{r}{(\monunit,\monunittwo,\monunitzero)} & \B\end{tikzcd}\]
called the \emph{identity}. The object $\monunit(*) \in \B$ is also denoted by $\monunit$, called the \emph{identity object}. In the terminology of \Cref{monad-bicat,monad-bicat-interpret}, the identity $\monunit$ is a monad in $\B$ acting on the identity object $\monunit \in\B$ with
\begin{itemize}
\item a $1$-cell $t = \monunit(1_*) \in \B(\monunit,\monunit)$,
\item an invertible multiplication $2$-cell $\monunittwo : tt \to t \in \B(\monunit,\monunit)$, and
\item an invertible unit $2$-cell $\monunitzero : 1_{\monunit} \to t \in \B(\monunit,\monunit)$.
\end{itemize}
\item[Associator] It has an adjoint equivalence $(a,\abdot,\etaa,\epza)$ as in
\[\begin{tikzpicture}[xscale=2.5, yscale=1.2, baseline={(a.base)}]
\draw[0cell]
(0,0) node (x11) {\B^3}
($(x11)+(1,0)$) node (x12) {\B^2}
($(x11)+(0,-1)$) node (x21) {\B^2}
($(x12)+(0,-1)$) node (x22) {\B}
;
\draw[1cell]
(x11) edge node (s) {\tensor\times 1} (x12)
(x11) edge node[swap] (a) {1\times \tensor} (x21)
(x12) edge node {\tensor} (x22)
(x21) edge node[swap] (t) {\tensor} (x22)
;
\draw[2cell]
node[between=s and t at .5, rotate=-135, 2label={below,a}] {\Rightarrow}
;
\end{tikzpicture}\]
in the bicategory $\Bicatps(\B^3, \B)$, called the \emph{associator}. Its left and right adjoints have component $1$-cells
\[\begin{tikzcd}[column sep=large]
(C\tensor B) \tensor A \ar[shift left]{r}{a_{C,B,A}} & C\tensor (B\tensor A) \ar[shift left]{l}{\abdot_{C,B,A}} \in \B
\end{tikzcd}\]
for objects $A,B,C\in\B$.
\item[Unitors] It has adjoint equivalences $(\ell,\ellbdot,\etaell,\epzell)$ and $(r,\rbdot,\etar,\epzr)$ as in
\[\begin{tikzpicture}[xscale=3, yscale=1.2, baseline={(a.base)}]
\draw[0cell]
(0,0) node (x11) {\B}
($(x11)+(1,0)$) node (x12) {\B}
($(x11)+(1/2,1)$) node (tl) {\B^2}
($(x12)+(1/2,0)$) node (x13) {\B}
($(x13)+(1,0)$) node (x14) {\B}
($(x13)+(1/2,1)$) node (tr) {\B^2}
;
\draw[1cell]
(x11) edge node[swap] (i) {1} (x12)
(x11) edge node[pos=.4] (a) {\monunit\times 1} (tl)
(tl) edge node[pos=.6] {\tensor} (x12)
(x13) edge node[swap] (ii) {1} (x14)
(x13) edge node[pos=.4] {1\times \monunit} (tr)
(tr) edge node[pos=.6] {\tensor} (x14)
;
\draw[2cell]
node[between=tl and i at .5, rotate=-90, 2label={above,\ell}] {\Rightarrow}
node[between=tr and ii at .5, rotate=-90, 2label={above,r}] {\Rightarrow}
;
\end{tikzpicture}\]
in the bicategory $\Bicatps(\B,\B)$, called the \emph{left unitor} and the \emph{right unitor}, respectively. Their left and right adjoints have component $1$-cells
\[\begin{tikzcd}[column sep=large]
\monunit \tensor A \ar[shift left]{r}{\ell_A} & A \ar[shift left]{l}{\ellbdot_A} \ar[shift right]{r}[swap]{\rbdot_A} & A \tensor \monunit \ar[shift right]{l}[swap]{r_A} \in \B.\end{tikzcd}\]
\item[Pentagonator] It has an invertible $2$-cell $\pi$ in $\Bicatps(\B^4,\B)$ as in \eqref{tricategory-pentagonator}, called the \emph{pentagonator}, with invertible component $2$-cells
\[\begin{tikzpicture}[xscale=5,yscale=1.2, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {\big((D\tensor C)\tensor B\big)\tensor A}
($(x11)+(1,0)$) node (x12) {D\tensor\big(C\tensor(B\tensor A)\big)}
($(x11)+(.05,1)$) node (x01) {\big(D\tensor(C\tensor B)\big)\tensor A}
($(x11)+(.95,1)$) node (x02) {D\tensor\big((C\tensor B)\tensor A\big)}
($(x11)+(1/2,-.9)$) node (x31) {(D\tensor C)\tensor(B\tensor A)}
;
\draw[1cell]
(x11) edge node[pos=.2] {a_{D,C,B}\tensor 1_A} (x01)
(x01) edge node (s) {a_{D,C\tensor B,A}} (x02)
(x02) edge node[pos=.8] {1_D\tensor a_{C,B,A}} (x12)
(x11) edge node[swap,pos=.3] {a_{D\tensor C,B,A}} (x31)
(x31) edge node[swap,pos=.7] {a_{D,C,B\tensor A}} (x12)
;
\draw[2cell]
node[between=s and x31 at .4, shift={(-.5,0)}, rotate=-90, 2label={above,\pi_{D,C,B,A}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\B\big(((D\tensor C)\tensor B)\tensor A, D\tensor(C\tensor(B\tensor A))\big)$ for objects $A,B,C,D\in\B$.
\item[$2$-Unitors] It has invertible $2$-cells $\mu$, $\lambda$, and $\rho$ in $\Bicatps(\B^2,\B)$ as in \eqref{tricategory-iiunitors}, called the \emph{middle $2$-unitor}, the \emph{left $2$-unitor}, and the \emph{right $2$-unitor}, respectively, with invertible component $2$-cells
\[\begin{tikzpicture}[xscale=4,yscale=1.2,baseline={(r.base)}]
\draw[0cell]
(0,0) node (x11) {B\tensor A}
($(x11)+(1,0)$) node (x12) {B\tensor A}
($(x11)+(.1,1)$) node (x01) {(B\tensor \monunit)\tensor A}
($(x11)+(.9,1)$) node (x02) {B \tensor (\monunit\tensor A)}
;
\draw[1cell]
(x11) edge node[pos=.3] (r) {\rbdot_B\tensor 1_A} (x01)
(x01) edge node (s) {a_{B,\monunit,A}} (x02)
(x02) edge node[pos=.7] {1_B\tensor \ell_A} (x12)
(x11) edge node[swap] (one) {1_{B\tensor A}} (x12)
;
\draw[2cell]
node[between=s and one at .5, shift={(-.3,0)}, rotate=-90, 2label={above,\mu_{B,A}}] {\Rightarrow}
;
\end{tikzpicture}\]
\[\begin{tikzpicture}[xscale=3,yscale=1.2, baseline={(lambda.base)}]
\def\b{15}
\draw[0cell]
(0,0) node (x11) {(\monunit\tensor B)\tensor A}
($(x11)+(1,0)$) node (x12) {B\tensor A}
($(x11)+(1/2,-1)$) node (x21) {\monunit\tensor (B\tensor A)}
;
\draw[1cell]
(x11) edge node (s) {\ell_B \tensor 1_A} (x12)
(x11) edge[bend right=\b] node[swap,pos=.3] (a) {a_{\monunit,B,A}} (x21)
(x21) edge[bend right=\b] node[swap,pos=.6] {\ell_{B\tensor A}} (x12)
;
\draw[2cell]
node[between=s and x21 at .5, shift={(-.2,0)}, rotate=-90, 2label={above,\lambda_{B,A}}] (lambda) {\Rightarrow}
;
\draw[0cell]
($(x12)+(2/3,0)$) node (y11) {B\tensor A}
($(y11)+(1,0)$) node (y12) {B\tensor (A\tensor \monunit)}
($(y11)+(1/2,-1)$) node (y21) {(B\tensor A)\tensor \monunit}
;
\draw[1cell]
(y11) edge node (a) {1_B \tensor \rbdot_A} (y12)
(y11) edge[bend right=\b] node[swap,pos=.3] {\rbdot_{B\tensor A}} (y21)
(y21) edge[bend right=\b] node[swap,pos=.6] {a_{B,A,\monunit}} (y12)
;
\draw[2cell]
node[between=a and y21 at .5, shift={(-.2,0)}, rotate=-90, 2label={above,\rho_{B,A}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\B$.
\end{description}
The above data satisfy the three axioms \eqref{nb4cocycle}, \eqref{left-normalization-axiom}, and \eqref{right-normalization-axiom} of a tricategory. In the left and right normalization axioms, whenever the symbol $1$ appears in an object, that copy of $1$ should be replaced by the identity object $\monunit$.
\end{explanation}
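For example, each monoidal category $(\M,\otimes,\monunit,\alpha,\lambda,\rho)$ may be regarded as a monoidal bicategory: the base bicategory is $\M$ regarded as a locally discrete bicategory with only identity $2$-cells, the composition and identity are induced by $\otimes$ and $\monunit$, the adjoint equivalences $a$, $\ell$, and $r$ have component $1$-cells $\alpha$, $\lambda$, and $\rho$ with their inverses as adjoints, and the pentagonator and the three $2$-unitors are identity $2$-cells. The existence of these identity $2$-cells, hence the tricategory axioms, follows from the pentagon and unit axioms of $\M$ and their standard consequences.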
The following preliminary observation is needed to define the braided analogue of a monoidal bicategory. We ask the reader to prove it in \Cref{exer:monbicat-mates}. The $\tensor$ symbols among the objects are omitted to save space.
\begin{lemma}\label{pi-mates}\label{pentagonator!mates of the -}\index{mate!pentagonator}\index{pentagonator!mate}
In a monoidal bicategory $\B$, the pentagonator $\pi$ induces invertible $2$-cells\label{notation:pin} $\pi_n$ in $\Bicatps(\B^4,\B)$ for $1\leq n \leq 10$, with component $2$-cells as displayed below.
\[\scalebox{.9}{\begin{tikzpicture}[xscale=3.5,yscale=1.2]
\def\n{-2.7} \def\p{-5.3}
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x11) {\big((DC)B\big)A}
($(x11)+(1,0)$) node (x12) {D\big(C(BA)\big)}
($(x11)+(.05,1)$) node (x01) {\big(D(CB)\big)A}
($(x11)+(.95,1)$) node (x02) {D\big((CB)A\big)}
($(x11)+(1/2,-.9)$) node (x31) {(DC)(BA)}
;}
\begin{scope}[shift={(0,0)}]
\boundary
\draw[1cell]
(x01) edge node[swap,pos=.7] {\abdot\tensor 1} (x11)
(x02) edge node[swap] (s) {\abdot} (x01)
(x12) edge node[swap,pos=.3] {1\tensor \abdot} (x02)
(x31) edge node[pos=.6] {\abdot} (x11)
(x12) edge node[pos=.3] {\abdot} (x31)
;
\draw[2cell]
node[between=s and x31 at .5, shift={(-.2,0)}, rotate=90, 2label={below,\pi_1}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(1.9,0)}]
\boundary
\draw[1cell]
(x01) edge node[swap,pos=.7] {\abdot\tensor 1} (x11)
(x02) edge node[swap] (s) {\abdot} (x01)
(x12) edge node[swap,pos=.3] {1\tensor \abdot} (x02)
(x31) edge node[pos=.6] {\abdot} (x11)
(x31) edge node[swap,pos=.6] {a} (x12)
;
\draw[2cell]
node[between=s and x31 at .5, shift={(0,0)}, rotate=-135, 2label={below,\pi_2}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(0,\n)}]
\boundary
\draw[1cell]
(x11) edge node[pos=.3] {a\tensor 1} (x01)
(x01) edge node (s) {a} (x02)
(x12) edge node[swap,pos=.3] {1\tensor \abdot} (x02)
(x31) edge node[pos=.6] {\abdot} (x11)
(x31) edge node[swap,pos=.6] {a} (x12)
;
\draw[2cell]
node[between=s and x31 at .5, shift={(0,0)}, rotate=160, 2label={below,\pi_3}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(1.9,\n)}]
\boundary
\draw[1cell]
(x01) edge node[swap,pos=.7] {\abdot\tensor 1} (x11)
(x01) edge node (s) {a} (x02)
(x02) edge node[pos=.7] {1\tensor a} (x12)
(x31) edge node[pos=.6] {\abdot} (x11)
(x12) edge node[pos=.3] {\abdot} (x31)
;
\draw[2cell]
node[between=s and x31 at .4, shift={(-.3,0)}, rotate=165, 2label={below,\pi_4}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(0,2*\n)}]
\boundary
\draw[1cell]
(x11) edge node[pos=.3] {a\tensor 1} (x01)
(x02) edge node[swap] (s) {\abdot} (x01)
(x12) edge node[swap,pos=.3] {1\tensor \abdot} (x02)
(x31) edge node[pos=.6] {\abdot} (x11)
(x31) edge node[swap,pos=.6] {a} (x12)
;
\draw[2cell]
node[between=s and x31 at .5, shift={(0,0)}, rotate=20, 2label={above,\pi_5}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(1.9,2*\n)}]
\boundary
\draw[1cell]
(x01) edge node[swap,pos=.7] {\abdot\tensor 1} (x11)
(x02) edge node[swap] (s) {\abdot} (x01)
(x02) edge node[pos=.7] {1\tensor a} (x12)
(x11) edge node[swap,pos=.3] {a} (x31)
(x12) edge node[pos=.3] {\abdot} (x31)
;
\draw[2cell]
node[between=s and x31 at .5, shift={(0,0)}, rotate=-20, 2label={above,\pi_6}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(0,3*\n)}]
\boundary
\draw[1cell]
(x01) edge node[swap,pos=.7] {\abdot\tensor 1} (x11)
(x01) edge node (s) {a} (x02)
(x02) edge node[pos=.7] {1\tensor a} (x12)
(x11) edge node[swap,pos=.3] {a} (x31)
(x12) edge node[pos=.3] {\abdot} (x31)
;
\draw[2cell]
node[between=s and x31 at .5, shift={(0,0)}, rotate=20, 2label={above,\pi_7}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(-1,3*\n)}]
\boundary
\draw[1cell]
(x01) edge node[swap,pos=.7] {\abdot\tensor 1} (x11)
(x01) edge node (s) {a} (x02)
(x12) edge node[swap,pos=.3] {1\tensor \abdot} (x02)
(x11) edge node[swap,pos=.3] {a} (x31)
(x31) edge node[swap,pos=.6] {a} (x12)
;
\draw[2cell]
node[between=s and x31 at .5, shift={(-.2,0)}, rotate=90, 2label={below,\pi_8}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(0,4*\n)}]
\boundary
\draw[1cell]
(x01) edge node[swap,pos=.7] {\abdot\tensor 1} (x11)
(x02) edge node[swap] (s) {\abdot} (x01)
(x02) edge node[pos=.7] {1\tensor a} (x12)
(x31) edge node[pos=.6] {\abdot} (x11)
(x12) edge node[pos=.3] {\abdot} (x31)
;
\draw[2cell]
node[between=s and x31 at .4, shift={(-.3,0)}, rotate=165, 2label={below,\pi_9}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(-1,4*\n)}]
\boundary
\draw[1cell]
(x01) edge node[swap,pos=.7] {\abdot\tensor 1} (x11)
(x01) edge node (s) {a} (x02)
(x02) edge node[pos=.7] {1\tensor a} (x12)
(x11) edge node[swap,pos=.3] {a} (x31)
(x31) edge node[swap,pos=.6] {a} (x12)
;
\draw[2cell]
node[between=s and x31 at .5, shift={(0,0)}, rotate=20, 2label={above,\pi_{10}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
}\]
\end{lemma}
\begin{motivation}\label{mot:braided-monbicat}
From \Cref{def:braided-monoidal-category}, a braided monoidal category is a monoidal category equipped with a natural isomorphism $\xi : X \otimes Y \iso Y \otimes X$ that switches the $\otimes$-factors, and that satisfies the unit axiom \eqref{braiding-unit} and the two hexagon axioms \eqref{hexagon-b1} and \eqref{hexagon-b2}. The next definition is the bicategorical analogue of a braided monoidal category, with the natural isomorphism $\xi$ turned into an adjoint equivalence, and with the hexagon axioms turned into invertible modifications. There are further coherence axioms that govern iterates of these structures.
\end{motivation}
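For reference, the two hexagon axioms being categorified can be written as equations between composites. The following display is a reminder of the $1$-categorical shadow of the hexagonators defined below, stated with braiding $\xi$ and associator $a$; bracketing conventions may differ slightly from \eqref{hexagon-b1} and \eqref{hexagon-b2}.

```latex
% 1-categorical hexagon axioms (reminder; conventions may vary):
\[
(1_B \otimes \xi_{A,C})\, a_{B,A,C}\, (\xi_{A,B} \otimes 1_C)
  \;=\; a_{B,C,A}\; \xi_{A,\,B\otimes C}\; a_{A,B,C},
\]
\[
(\xi_{A,C} \otimes 1_B)\, a^{-1}_{A,C,B}\, (1_A \otimes \xi_{B,C})
  \;=\; a^{-1}_{C,A,B}\; \xi_{A\otimes B,\,C}\; a^{-1}_{A,B,C}.
\]
% In the bicategorical definition these equalities are replaced by
% the invertible 2-cells R_{A|B,C} and R_{A,B|C}.
```

The left-hand sides trace the upper boundaries of the component hexagons below, and the right-hand sides the lower boundaries.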
\begin{definition}\label{def:braided-monbicat}\index{braided monoidal!bicategory}\index{monoidal bicategory!braided}\index{bicategory!braided monoidal}
A \emph{braided monoidal bicategory} is a quadruple
\[\big(\B,\beta,\Rone,\Rtwo\big)\]
consisting of the following data.
\begin{enumerate}
\item $\B$ is a monoidal bicategory $\big(\B,\tensor,\monunit,a,\ell,r,\pi,\mu,\lambda,\rho\big)$ as in \Cref{def:monoidal-bicat}.
\item $(\beta,\betabdot,\etabeta,\epzbeta)$ \label{notation:beta-adjoint}is an adjoint equivalence as in
\[\begin{tikzpicture}[xscale=3, yscale=1.1]
\def\q{15}
\draw[0cell]
(0,0) node (x11) {\B^2}
($(x11)+(1,0)$) node (x12) {\B}
($(x11)+(1/2,-1)$) node (x2) {\B^2}
;
\draw[1cell]
(x11) edge node (i) {\tensor} (x12)
(x11) edge[bend right=\q] node[swap,pos=.5] (a) {\tau} (x2)
(x2) edge[bend right=\q] node[swap,pos=.5] {\tensor} (x12)
;
\draw[2cell]
node[between=i and x2 at .5, shift={(-.1,0)}, rotate=-90, 2label={above,\beta}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\Bicatps(\B^2,\B)$, called the \index{braiding!braided monoidal bicategory}\emph{braiding}, in which $\tau$ switches the two arguments. For objects $A,B\in\B$, the component $1$-cells of $\beta$ and $\betabdot$ are
\[\begin{tikzcd}[column sep=large]
A\tensor B \ar[shift left]{r}{\beta_{A,B}} & B \tensor A. \ar[shift left]{l}{\betabdot_{A,B}}\end{tikzcd}\]
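Unpacking the adjoint equivalence componentwise gives the following invertible $2$-cells; this is an illustrative sketch, and the component notation below is not fixed by the text above.

```latex
% Componentwise data of the braiding adjoint equivalence (sketch):
\[
(\etabeta)_{A,B} \colon 1_{A\tensor B} \iso \betabdot_{A,B}\,\beta_{A,B},
\qquad
(\epzbeta)_{A,B} \colon \beta_{A,B}\,\betabdot_{A,B} \iso 1_{B\tensor A},
\]
% subject to the two triangle identities, as for any adjunction.
```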
\item $\Rone$ \label{notation:left-hex}is an invertible $2$-cell (i.e., modification)
\[\begin{tikzcd}[column sep=large]
\big[\big(\tensor \whis (1\times \beta)\big) a\big]\big[\tensor \whis (\beta\times 1)\big] \ar{r}{\Rone} & \big[a\big(\beta\whis (1\times\tensor)\big)\big] a
\end{tikzcd}\]
in
\[\Bicatps(\B^3,\B)\big(\tensor(\tensor\times 1), \tensor(1\times\tensor)(1\times\tau)(\tau\times 1)\big)\]
called the \index{left hexagonator}\emph{left hexagonator}. Components of $\Rone$ are invertible $2$-cells
\[\begin{tikzpicture}[xscale=4.7,yscale=1.2]
\draw[0cell]
(0,0) node (x11) {(A\tensor B)\tensor C}
($(x11)+(1,0)$) node (x12) {B\tensor (C\tensor A)}
($(x11)+(.1,1)$) node (x01) {(B\tensor A)\tensor C}
($(x11)+(.9,1)$) node (x02) {B\tensor (A\tensor C)}
($(x11)+(.1,-1)$) node (x21) {A\tensor (B\tensor C)}
($(x11)+(.9,-1)$) node (x22) {(B\tensor C)\tensor A}
;
\draw[1cell]
(x11) edge node[pos=.2] {\beta_{A,B}\tensor 1_C} (x01)
(x01) edge node {a_{B,A,C}} (x02)
(x02) edge node[pos=.8] {1_B\tensor \beta_{A,C}} (x12)
(x11) edge node[swap,pos=.2] {a_{A,B,C}} (x21)
(x21) edge node {\beta_{A,B\tensor C}} (x22)
(x22) edge node[swap,pos=.8] {a_{B,C,A}} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .4, shift={(0,.2)}, rotate=-90, 2label={above,R_{A|B,C}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\B\big((A\tensor B)\tensor C, B \tensor (C\tensor A)\big)$, with the left-normalized bracketings in the (co)domain, for objects $A,B,C\in\B$.
\item $\Rtwo$ \label{notation:right-hex}is an invertible $2$-cell (i.e., modification)
\[\begin{tikzcd}[column sep=large]
\big[\big(\tensor\whis(\beta\times 1)\big) \abdot\big] \big[\tensor \whis(1\times\beta)\big] \ar{r}{\Rtwo} & \big[\abdot\big(\beta\whis(\tensor\times 1)\big)\big] \abdot
\end{tikzcd}\]
in
\[\Bicatps(\B^3,\B)\big(\tensor(1\times\tensor), \tensor(\tensor\times 1)(\tau\times 1)(1\times \tau)\big)\]
called the \index{right hexagonator}\emph{right hexagonator}. Components of $\Rtwo$ are invertible $2$-cells
\[\begin{tikzpicture}[xscale=4.7,yscale=1.2]
\draw[0cell]
(0,0) node (x11) {A\tensor (B\tensor C)}
($(x11)+(1,0)$) node (x12) {(C\tensor A)\tensor B}
($(x11)+(.1,1)$) node (x01) {A\tensor (C\tensor B)}
($(x11)+(.9,1)$) node (x02) {(A\tensor C)\tensor B}
($(x11)+(.1,-1)$) node (x21) {(A\tensor B)\tensor C}
($(x11)+(.9,-1)$) node (x22) {C\tensor (A\tensor B)}
;
\draw[1cell]
(x11) edge node[pos=.2] {1_A \tensor \beta_{B,C}} (x01)
(x01) edge node {\abdot_{A,C,B}} (x02)
(x02) edge node[pos=.8] {\beta_{A,C}\tensor 1_B} (x12)
(x11) edge node[swap,pos=.1] {\abdot_{A,B,C}} (x21)
(x21) edge node {\beta_{A\tensor B,C}} (x22)
(x22) edge node[swap,pos=.9] {\abdot_{C,A,B}} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .4, shift={(0,.2)}, rotate=-90, 2label={above,R_{A,B|C}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\B\big(A\tensor (B\tensor C), (C \tensor A)\tensor B\big)$, with the left-normalized bracketings in the (co)domain.
\end{enumerate}
The above data are required to satisfy the following four pasting diagram axioms for objects $A,B,C,D\in\B$, with $\tensor$ abbreviated to concatenation and iterates of $\tensor$ denoted by parentheses. The $2$-cells $\pi_n$ in \Cref{pi-mates} and $\tensorzero_{A,B} : 1_{A\tensor B} \to 1_A\tensor 1_B$ in \eqref{tensorzero-gf} will be used in the axioms below.
\begin{description}
\newcommand{\tikzset{every node/.style={scale=.78}}}{\tikzset{every node/.style={scale=.78}}}
\item[(3,1)-Crossing Axiom]\index{31crossingaxiom@(3,1)-crossing axiom}\index{braided monoidal!bicategory!(3,1)-crossing axiom}
\[
\begin{tikzpicture}[x=23mm,y=13.6mm]
{
\tikzset{every node/.style={scale=.78}}
\draw[0cell,font={\small}]
(-1.25,0) node (l) {A((BC)D)}
(l) ++(0,.75) node (lt) {A(B(CD))}
(lt) ++(45:1.1) node (ltt) {A(B(DC))}
(l) ++(0,-.75) node (lb) {(A(BC))D}
(lb) ++(-45:1.1) node (lbb) {D(A(BC))}
(1.25,0) node (r) {A(D(BC))}
(r) ++(0,.75) node (rt) {A((DB)C)}
(rt) ++(135:1.1) node (rtt) {A((BD)C)}
(r) ++(0,-.75) node (rb) {(AD)(BC)}
(rb) ++(-135:1.1) node (rbb) {(DA)(BC)}
(lt) ++(160:1.2) node (lt2) {(AB)(CD)}
(ltt) ++(120:1.2) node (ltt2) {(AB)(DC)}
(rtt) ++(60:1.2) node (rtt2) {(A(BD))C}
(rt) ++(20:1.2) node (rt2) {(A(DB))C}
(rb) ++(-20:1.2) node (rb2) {((AD)B)C}
(rbb) ++(-60:1.2) node (rbb2) {((DA)B)C}
(lbb) ++(240:1.2) node (lbb2) {D((AB)C)}
(lb) ++(200:1.2) node (lb2) {((AB)C)D}
node[between=ltt2 and rtt2 at .5] (mt) {((AB)D)C}
node[between=lbb2 and rbb2 at .5] (mb) {(D(AB))C}
(lt2.east) ++(0:-.02) node (lt2') {}
(ltt2.south) ++(-90:.00) node (ltt2') {}
(rb.south) ++(0:.14) node (rb') {}
(rbb.east) ++(-.05,.05) node (rbb') {}
;
\draw[1cell]
(lt) edge['] node {1(1\beta)} (ltt)
(ltt) edge['] node {1\abdot} (rtt)
(rtt) edge['] node {1(\beta 1)} (rt)
(rt) edge['] node {1a} (r)
(lt) edge node {1\abdot} (l)
(l) edge node {1_A\,\beta_{BC,D}} (r)
(r) edge['] node {\abdot} (rb)
(rb) edge['] node {\beta_{A,D} 1_{BC}} (rbb)
(l) edge node {\abdot} (lb)
(lb) edge node {\beta} (lbb)
(lbb) edge node {\abdot} (rbb)
%
(lt2) edge node (1be) {1_{AB}\,\beta_{CD}} (ltt2)
(ltt2) edge node {\abdot} (mt)
(mt) edge node {a1} (rtt2)
(rtt2) edge node {(1\beta_{B,D})1} (rt2)
(rt2) edge node {\abdot 1} (rb2)
(rb2) edge node (be11) {(\beta_{A,D} 1)1} (rbb2)
(lt2) edge['] node {\abdot} (lb2)
(lb2) edge['] node {\beta_{(AB)C,D}} (lbb2)
(lbb2) edge['] node {\abdot} (mb)
(mb) edge['] node {\abdot 1} (rbb2)
%
(lt2) edge['] node {a} (lt)
(ltt2) edge node {a} (ltt)
(rtt2) edge['] node {a} (rtt)
(rt2) edge node {a} (rt)
(lb) edge['] node {\abdot 1} (lb2)
(lbb) edge node {1\abdot} (lbb2)
(rbb) edge['] node {\abdot} (rbb2)
(rb) edge node {\abdot} (rb2)
%
(lt2') edge[',bend right=42,looseness=1.2] node[pos=.75] (11be) {(11)\,\beta} (ltt2')
(rb') edge[bend left=55+30,looseness=1.2] node[pos=.15] (be11) {\beta\,(11)} (rbb')
;
}
\draw[2cell]
(lt2) ++(35:1) node[shift={(0,0)}, rotate=-45,
2label={below,}] (T1) {\Rightarrow}
(T1) ++(240:.25) node {\otimes^0_{A,B} 1_{\beta}}
(rb) ++(240:.75) node[shift={(0,0)}, rotate=135,
2label={below,}] (T2) {\Rightarrow}
(T2) ++(.24,.18) node {1_{\beta}\otimes^{-0}_{B,C}}
node[between=lt and rt at .5, shift={(0,-.2)}, rotate=225,
2label={below left,1R^1_{B,C|D}}] {\Rightarrow}
node[between=lb and rb at .5, shift={(0,.1)}, rotate=225,
2label={below left,R_{A,BC|D}}] {\Rightarrow}
(mt) ++(0,-.45) node[rotate=-135,
2label={above left,\pi_3^\inv}] {\Rightarrow}
(mb) ++(0,.45) node[rotate=-90,
2label={above,\pi_1}] {\Rightarrow}
(l) ++(-.6,0) node[rotate=180,
2label={below,\pi_2}] {\Rightarrow}
(r) ++(.6,0) node[rotate=180,
2label={below,\pi_4^\inv}] {\Rightarrow}
node[between=rt and rtt2 at .4, shift={(.1,0)}, rotate=180,
2label={below,a^\inv_{1,\beta,1}}] {\Rightarrow}
node[between=lb and lbb2 at .5, shift={(-.1,0)}, rotate=180,
2label={below,\beta_{\abdot,1}}] {\Rightarrow}
(lt) ++(95:.5) node[rotate=-90, 2label={above,a^\inv_{1,1,\beta}}] {\Rightarrow}
(rbb2) ++(80:.4) node[rotate=180, 2label={below,\abdot_{\beta,1,1}}] {\Rightarrow}
;
\draw[font=\Huge] (0,-3) node [rotate=90] (eq) {$=$};
\begin{scope}[yscale=1]
\def\D{++(0,-6)}
\def\S{++(0,1)}
\def\T{++(0,.4)}
{
\tikzset{every node/.style={scale=.78}}
\draw[0cell,font={\small}]
(lt2)\D\T node (lt2B) {(AB)(CD)}
(ltt2)\D node (ltt2B) {(AB)(DC)}
(rtt2)\D node (rtt2B) {(A(BD))C}
(rt2)\D\T node (rt2B) {(A(DB))C}
(rb2)\D\T\S node (rb2B) {((AD)B)C}
(rbb2)\D\T\S\T node (rbb2B) {((DA)B)C}
(lbb2)\D\T\S\T node (lbb2B) {D((AB)C)}
(lb2)\D\T\S node (lb2B) {((AB)C)D}
(mt)\D node (mtB) {((AB)D)C}
(mb)\D\T\S\T node (mbB) {(D(AB))C}
;
\draw[1cell]
(lt2B) edge node (1be) {1_{AB}\,\beta_{C,D}} (ltt2B)
(ltt2B) edge node {\abdot} (mtB)
(mtB) edge node {a1} (rtt2B)
(rtt2B) edge node {(1\beta_{B,D})1} (rt2B)
(rt2B) edge node {\abdot 1} (rb2B)
(rb2B) edge node (be11) {(\beta_{A,D} 1)1} (rbb2B)
(lt2B) edge['] node {\abdot} (lb2B)
(lb2B) edge['] node {\beta_{(AB)C,D}} (lbb2B)
(lbb2B) edge['] node {\abdot} (mbB)
(mbB) edge['] node {\abdot 1} (rbb2B)
%
(mtB) edge node[pos=.33] {\beta_{AB,D}\,1_C} (mbB)
;
}
\draw[2cell]
node[between=ltt2B and lbb2B at .5, shift={(0,0)}, rotate=225,
2label={below left,R_{AB,C|D}}] {\Rightarrow}
node[between=rtt2B and rbb2B at .5, shift={(0,0)}, rotate=225,
2label={below left,R^2_{A,B|D}1}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\]
In the above pasting diagrams:
\begin{itemize}
\item The $2$-cells $1R^1_{B,C|D}$ and $R^2_{A,B|D}1$ are \emph{not} $1\tensor R^1_{B,C|D}$ and $R^2_{A,B|D}\tensor 1$, but are induced by the right hexagonator $\Rtwo$. They will be defined precisely in \eqref{right-hex-mate-1} and \eqref{right-hex-mate-2} below.
\item The component $2$-cells
\[\abdot_{\beta,1,1} = \abdot_{\beta_{A,D},1_B,1_C}, \quad a_{1,1,\beta} = a_{1_A,1_B,\beta_{C,D}}, \andspace a_{1,\beta,1} = a_{1_A,\beta_{B,D},1_C}\]
of the strong transformations $\abdot$ and $a$ are used, and similarly in the next three axioms.
\item The component $2$-cell
\[\beta_{\abdot,1} = \beta_{\abdot_{A,B,C},1_D}\]
of the strong transformation $\beta$ is used, and similarly in the next three axioms.
\end{itemize}
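The abbreviations in the last two bullets follow the general pattern for component $2$-cells of strong transformations. As a schematic reminder, with $F$, $G$, and $f$ generic:

```latex
% Naturality constraint of a strong transformation a : F \to G at a
% 1-cell f : X \to Y (schematic):
\[
a_f \colon (Gf)\, a_X \iso a_Y\, (Ff).
\]
% Thus a_{1,\beta,1} is the constraint of the associator a at the
% 1-cell (1_A, \beta_{B,D}, 1_C), and \beta_{\abdot,1} that of the
% braiding \beta at (\abdot_{A,B,C}, 1_D).
```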
\item[(1,3)-Crossing Axiom]\index{13crossingaxiom@(1,3)-crossing axiom}\index{braided monoidal!bicategory!(1,3)-crossing axiom}
\[
\begin{tikzpicture}[x=23mm,y=13.6mm]
{
\tikzset{every node/.style={scale=.78}}
\draw[0cell,font={\small}]
(-1.25,0) node (l) {(A(BC))D}
(l) ++(0,.75) node (lt) {((AB)C)D}
(lt) ++(45:1.1) node (ltt) {((BA)C)D}
(l) ++(0,-.75) node (lb) {A((BC)D)}
(lb) ++(-45:1.1) node (lbb) {((BC)D)A}
(1.25,0) node (r) {((BC)A)D}
(r) ++(0,.75) node (rt) {(B(CA))D}
(rt) ++(135:1.1) node (rtt) {(B(AC))D}
(r) ++(0,-.75) node (rb) {(BC)(AD)}
(rb) ++(-135:1.1) node (rbb) {(BC)(DA)}
(lt) ++(160:1.2) node (lt2) {(AB)(CD)}
(ltt) ++(120:1.2) node (ltt2) {(BA)(CD)}
(rtt) ++(60:1.2) node (rtt2) {B((AC)D)}
(rt) ++(20:1.2) node (rt2) {B((CA)D)}
(rb) ++(-20:1.2) node (rb2) {B(C(AD))}
(rbb) ++(-60:1.2) node (rbb2) {B(C(DA))}
(lbb) ++(240:1.2) node (lbb2) {(B(CD))A}
(lb) ++(200:1.2) node (lb2) {A(B(CD))}
node[between=ltt2 and rtt2 at .5] (mt) {B(A(CD))}
node[between=lbb2 and rbb2 at .5] (mb) {B((CD)A)}
(lt2.east) ++(0:-.02) node (lt2') {}
(ltt2.south) ++(-90:.00) node (ltt2') {}
(rb.south) ++(0:.14) node (rb') {}
(rbb.east) ++(-.05,.05) node (rbb') {}
;
\draw[1cell]
(lt) edge['] node {(\beta 1)1} (ltt)
(ltt) edge['] node {a1} (rtt)
(rtt) edge['] node {(1\beta)1} (rt)
(rt) edge['] node {\abdot 1} (r)
(lt) edge node {a1} (l)
(l) edge node {\beta_{A,BC}\,1_D} (r)
(r) edge['] node {a} (rb)
(rb) edge['] node {1_{BC} \beta_{A,D}} (rbb)
(l) edge node {a} (lb)
(lb) edge node {\beta} (lbb)
(lbb) edge node {a} (rbb)
%
(lt2) edge node (1be) {\beta_{A,B}\,1_{CD}} (ltt2)
(ltt2) edge node {a} (mt)
(mt) edge node {1\abdot} (rtt2)
(rtt2) edge node {1(\beta_{A,C}1)} (rt2)
(rt2) edge node {1a} (rb2)
(rb2) edge node (be11) {1(1\beta_{A,D})} (rbb2)
(lt2) edge['] node {a} (lb2)
(lb2) edge['] node {\beta_{A,B(CD)}} (lbb2)
(lbb2) edge['] node {a} (mb)
(mb) edge['] node {1a} (rbb2)
%
(lt2) edge['] node {\abdot} (lt)
(ltt2) edge node {\abdot} (ltt)
(rtt2) edge['] node {\abdot} (rtt)
(rt2) edge node {\abdot} (rt)
(lb2) edge node {1\abdot} (lb)
(lbb2) edge['] node {\abdot 1} (lbb)
(rbb) edge['] node {a} (rbb2)
(rb2) edge['] node {\abdot} (rb)
%
(lt2') edge[',bend right=42,looseness=1.2] node[pos=.75] (be11) {\beta\,(11)} (ltt2')
(rb') edge[bend left=55+30,looseness=1.2] node[pos=.15] (11be) {(11)\,\beta} (rbb')
;
}
\draw[2cell]
(lt2) ++(35:1) node[shift={(0,0)}, rotate=-45,
2label={below,}] (T1) {\Rightarrow}
(T1) ++(240:.25) node {1_{\beta} \otimes^0_{C,D}}
(rb) ++(240:.75) node[shift={(0,0)}, rotate=135,
2label={below,}] (T2) {\Rightarrow}
(T2) ++(.24,.18) node {\otimes^{-0}_{B,C}1_{\beta}}
node[between=lt and rt at .5, shift={(0,-.2)}, rotate=225,
2label={below left,R^2_{A|B,C}1}] {\Rightarrow}
node[between=lb and rb at .5, shift={(0,.1)}, rotate=225,
2label={below left,R_{A|BC,D}}] {\Rightarrow}
(mt) ++(0,-.45) node[rotate=-135,
2label={above left,\pi_5^\inv}] {\Rightarrow}
(mb) ++(0,.45) node[rotate=-90,
2label={above,\pi_{10}}] {\Rightarrow}
(l) ++(-.6,0) node[rotate=180,
2label={below,\pi_3^\inv}] {\Rightarrow}
(r) ++(.6,0) node[rotate=180,
2label={below,\pi_6^\inv}] {\Rightarrow}
node[between=rt and rtt2 at .4, shift={(.1,0)}, rotate=180,
2label={below,\abdotinv_{1,\beta,1}}] {\Rightarrow}
node[between=lb and lbb2 at .4, shift={(-.2,0)}, rotate=-90,
2label={above,\,\beta^\inv_{1,\abdot}}] {\Rightarrow}
(lt) ++(95:.5) node[rotate=-90, 2label={above,\abdotinv_{\beta,1,1}}] {\Rightarrow}
(rbb2) ++(90:.45) node[rotate=135, 2label={below,\big(\abdotinv_{1,1,\beta}\big)'}] {\Rightarrow}
;
\draw[font=\Huge] (0,3) node [rotate=90] (eq) {$=$};
%
\begin{scope}[yscale=1]
\def\D{++(0,6)}
\def\S{++(0,-1)}
\def\T{++(0,-.4)}
{
\tikzset{every node/.style={scale=.78}}
\draw[0cell,font={\small}]
(lt2)\D\T\S node (lt2B) {(AB)(CD)}
(ltt2)\D\T\S\T node (ltt2B) {(BA)(CD)}
(rtt2)\D\T\S\T node (rtt2B) {B((AC)D)}
(rt2)\D\T\S node (rt2B) {B((CA)D)}
(rb2)\D\T node (rb2B) {B(C(AD))}
(rbb2)\D node (rbb2B) {B(C(DA))}
(lbb2)\D node (lbb2B) {(B(CD))A}
(lb2)\D\T node (lb2B) {A(B(CD))}
(mt)\D\T\S\T node (mtB) {B(A(CD))}
(mb)\D node (mbB) {B((CD)A)}
;
\draw[1cell]
(lt2B) edge node (1be) {\beta_{A,B}\,1_{CD}} (ltt2B)
(ltt2B) edge node {a} (mtB)
(mtB) edge node {1\abdot} (rtt2B)
(rtt2B) edge node {1(\beta_{A,C}1)} (rt2B)
(rt2B) edge node {1a} (rb2B)
(rb2B) edge node (be11) {1(1\beta_{A,D})} (rbb2B)
(lt2B) edge['] node {a} (lb2B)
(lb2B) edge['] node {\beta_{A,B(CD)}} (lbb2B)
(lbb2B) edge['] node {a} (mbB)
(mbB) edge['] node {1a} (rbb2B)
%
(mtB) edge node[pos=.33] {1_B\,\beta_{A,CD}} (mbB)
;
}
\draw[2cell]
node[between=ltt2B and lbb2B at .5, shift={(0,0)}, rotate=225,
2label={below left,R_{A|B,CD}}] {\Rightarrow}
node[between=rtt2B and rbb2B at .5, shift={(0,0)}, rotate=225,
2label={below left,1R^1_{A|C,D}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\]
In the above pasting diagrams, the $2$-cells $1R^1_{A|C,D}$ and $R^2_{A|B,C}1$ are \emph{not} $1\tensor R^1_{A|C,D}$ and $R^2_{A|B,C}\tensor 1$, but are induced by the left hexagonator $\Rone$. They will be defined in \Cref{expl:left-hex-mates} below.
The 2-cell $\big(\abdotinv_{1,1,\beta}\big)'$ is a mate of $\abdotinv_{1,1,\beta}$.
\item[(2,2)-Crossing Axiom]\index{22crossingaxiom@(2,2)-crossing axiom}\index{braided monoidal!bicategory!(2,2)-crossing axiom}
\[\begin{tikzpicture}[xscale=10,yscale=1.2,baseline={(eq.base)}]
\def\small{\small}
\def\d{.1} \def\f{.26} \def\i{.1}
\draw[font=\Huge] (0,0) node [rotate=90] (eq) {$=$};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x1) {(A(BC))D}
($(x1)+(0,1)$) node (x2) {(A(CB))D}
($(x2)+(0,1)$) node (x3) {((AC)B)D}
($(x3)+(\i,1)$) node (x4) {(AC)(BD)}
($(x1)+(1,0)$) node (x9) {C((DA)B)}
($(x9)+(0,1)$) node (x8) {C((AD)B)}
($(x8)+(0,1)$) node (x7) {C(A(DB))}
($(x7)+(-\i,1)$) node (x6) {(CA)(DB)}
($(x4)!.5!(x6)$) node (x5) {(CA)(BD)}
($(x1)+(.02,-1)$) node (y1) {((AB)C)D}
($(x9)+(-.02,-1)$) node (y4) {C(D(AB))}
($(y1)!.33!(y4)$) node (y2) {(AB)(CD)}
($(y1)!.67!(y4)$) node (y3) {(CD)(AB)}
;
\draw[1cell]
(x1) edge node {(1\beta_{B,C})1} (x2)
(x2) edge node {\abdot 1} (x3)
(x3) edge node[pos=.2] {a} (x4)
(x4) edge node {\beta_{A,C}(11)} (x5)
(x5) edge node {(11)\beta_{B,D}} (x6)
(x6) edge node[pos=.7] {a} (x7)
(x7) edge node {1\abdot} (x8)
(x8) edge node {1(\beta_{A,D}1)} (x9)
(x1) edge node[swap,pos=.2] {\abdot 1} (y1)
(y1) edge node[swap] {a} (y2)
(y2) edge node[swap] {\beta_{AB,CD}} (y3)
(y3) edge node[swap] {a} (y4)
(y4) edge node[swap,pos=.8] {1\abdot} (x9)
;}
\begin{scope}[shift={(-.5,1.8)},yscale=1.2]
{
\tikzset{every node/.style={scale=.78}}
\boundary
\draw[0cell]
($(x1)+(.33,.5)$) node (z1) {(C(AB))D}
($(x1)+(.67,.5)$) node (z4) {C((AB)D)}
($(x3)!.33!(x7)$) node (z2) {((CA)B)D}
($(x3)!.67!(x7)$) node (z3) {C(A(BD))}
;
\draw[1cell]
(y1) edge node {\beta 1} (z1)
(z1) edge node {a} (z4)
(z1) edge node[swap] {\abdot 1} (z2)
(x3) edge node[swap] {(\beta 1)1} (z2)
(z2) edge node[swap] {a} (x5)
(x5) edge node[swap] {a} (z3)
(z3) edge node[swap] {1(1\beta)} (x7)
(z3) edge node[swap] {1\abdot} (z4)
(z4) edge node {1\beta} (y4)
;
}
\draw[2cell]
node[between=x3 and z2 at .7, shift={(0,.4)}, rotate=-45, 2label={above,a_{\beta,1,1}}] {\Rightarrow}
node[between=z3 and x7 at .3, shift={(0,.3)}, rotate=225, 2label={below,\ainv_{1,1,\beta}}] {\Rightarrow}
node[between=x1 and z1 at .4, shift={(0,.6)}, rotate=-45, 2label={above,R_{A,B|C}1}] {\Rightarrow}
node[between=z4 and x9 at .6, shift={(0,.6)}, rotate=225, 2label={below,1R_{A,B|D}}] {\Rightarrow}
node[between=z1 and z4 at .45, shift={(0,1)}, rotate=-90, 2label={above,\pi_8}] {\Rightarrow}
node[between=y2 and y3 at .4, shift={(0,.8)}, rotate=-90, 2label={above,R_{AB|C,D}}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(-.5,-4.7)},yscale=1.4]
{
\tikzset{every node/.style={scale=.78}}
\boundary
\draw[0cell]
($(x5)+(0,-.7)$) node (z0) {(AC)(DB)}
($(x3)+(\f,.2)$) node (za1) {A(C(BD))}
($(za1)+(-\d,-.7)$) node (za2) {A((CB)D)}
($(za1)+(\d,-.7)$) node (za3) {A(C(DB))}
($(za2)+(0,-1)$) node (za4) {A((BC)D)}
($(za3)+(0,-1)$) node (za5) {A((CD)B)}
($(za4)+(\d,-.7)$) node (za6) {A(B(CD))}
($(x3)+(1-\f,.2)$) node (zb1) {((CA)D)B}
($(zb1)+(-\d,-.7)$) node (zb2) {((AC)D)B}
($(zb1)+(\d,-.7)$) node (zb3) {(C(AD))B}
($(zb2)+(0,-1)$) node (zb4) {(A(CD))B}
($(zb3)+(0,-1)$) node (zb5) {(C(DA))B}
($(zb4)+(\d,-.7)$) node (zb6) {((CD)A)B}
;
\draw[1cell]
(x4) edge node[font=\small,pos=.6] {(11)\beta} (z0)
(z0) edge node[font=\small,pos=.4] {\beta(11)} (x6)
(za4) edge node[font=\small,swap,pos=.75] {1(\beta 1)} (za2)
(za2) edge node[pos=.3] {1a} (za1)
(za1) edge node[font=\small,swap,pos=.3] {1(1\beta)} (za3)
(za4) edge node[swap,pos=.3] {1a} (za6)
(za6) edge node[swap,pos=.7] {1\beta} (za5)
(za5) edge node[swap,pos=.6] {1a} (za3)
(x1) edge node[swap] {a} (za4)
(x2) edge node[pos=.6] {a} (za2)
(za1) edge node[pos=.7] {\abdot} (x4)
(za3) edge node[swap] {\abdot} (z0)
(za5) edge node[swap] {\abdot} (zb4)
(za6) edge node[swap,pos=.1] {\abdot} (y2)
(zb4) edge node[pos=.6] {\abdot 1} (zb2)
(zb2) edge node[font=\small,swap,pos=.6] {(\beta 1)1} (zb1)
(zb1) edge node {a1} (zb3)
(zb3) edge node[font=\small,swap,pos=.25] {(1\beta)1} (zb5)
(zb4) edge node[swap,pos=.3] {\beta 1} (zb6)
(zb6) edge node[swap,pos=.7] {a1} (zb5)
(z0) edge node[swap] {\abdot} (zb2)
(x6) edge node[pos=.3] {\abdot} (zb1)
(zb3) edge node[pos=.4] {a} (x8)
(zb5) edge node[swap] {a} (x9)
(y3) edge node[swap,pos=.9] {\abdot} (zb6)
;
}
\draw[2cell]
node[between=x5 and z0 at .5, shift={(-.5,0)}, rotate=-90, 2label={above,\beta_{1,1}\betainv_{1,1}}] {\Rightarrow}
node[between=x3 and za1 at .4, shift={(0,-.2)}, 2label={above,\pi_7}] {\Rightarrow}
node[between=x7 and zb1 at .4, shift={(0,-.2)}, rotate=180, 2label={below,\pi_3}] {\Rightarrow}
node[between=za1 and z0 at .35, shift={(0,-.1)}, rotate=-90, 2label={above,\abdot_{1,1,\beta}}] {\Rightarrow}
node[between=zb1 and z0 at .65, shift={(0,-.2)}, rotate=-90, 2label={above,\abdotinv_{\beta,1,1}}] {\Rightarrow}
node[between=x1 and za4 at .5, shift={(-.2,.5)}, rotate=0, 2label={above,\ainv_{1,\beta,1}}] {\Rightarrow}
node[between=x9 and zb5 at .3, shift={(0,.7)}, rotate=180, 2label={below,a_{1,\beta,1}}] {\Rightarrow}
node[between=za4 and za5 at .5, shift={(0,.3)}, rotate=0, 2label={above,1R_{B|C,D}}] {\Rightarrow}
node[between=za5 and zb4 at .4, shift={(0,.5)}, rotate=-45, 2label={above,\pi_9}] {\Rightarrow}
node[between=zb4 and zb5 at .2, shift={(0,.5)}, rotate=-90, 2label={above,R^1_{A|C,D}1}] {\Rightarrow}
node[between=y2 and y3 at .4, shift={(0,1)}, rotate=-90, 2label={above,R_{A,B|CD}}] {\Rightarrow}
node[between=y1 and y2 at .4, shift={(0,.8)}, rotate=225, 2label={below,\pi_7^{-1}}] {\Rightarrow}
node[between=y3 and y4 at .6, shift={(0,.8)}, rotate=-45, 2label={above,\pi_3^{-1}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
In the above pasting diagrams:
\begin{itemize}
\item $R_{A,B|C}1$ and $1R_{A,B|D}$ are induced by the right hexagonator, and will be defined in \Cref{expl:right-hex-mates} below.
\item $1R_{B|C,D}$ and $R^1_{A|C,D}1$ are induced by the left hexagonator, and will be defined in \Cref{expl:left-hex-mates} below.
\end{itemize}
\item[Yang-Baxter Axiom]\index{Yang-Baxter!axiom}\index{braided monoidal!bicategory!Yang-Baxter axiom}
\[\begin{tikzpicture}[x=17mm,y=17mm,baseline={(eq.base)}]
\def\d{1.2}
\draw[font=\Huge] (0,-.707-\d-.25*.75) node [rotate=0] (eq) {$=$};
\newcommand{\boundary}{
\draw[0cell,font=\small]
(0,0) node (x1) {(AB)C}
($(x1)+(-135:1)$) node (x2) {(BA)C}
($(x2)+(-90:\d)$) node (x3) {B(AC)}
($(x3)+(-90:.75)$) node (x4) {B(CA)}
($(x4)+(-90:\d)$) node (x5) {(BC)A}
($(x5)+(-45:1)$) node (x6) {(CB)A}
($(x6)+(0:1)$) node (x7) {C(BA)}
($(x1)+(0:1)$) node (y1) {A(BC)}
($(y1)+(-45:1)$) node (y2) {A(CB)}
($(y2)+(-90:\d)$) node (y3) {(AC)B}
($(y3)+(-90:.75)$) node (y4) {(CA)B}
($(y4)+(-90:\d)$) node (y5) {C(AB)}
;
\draw[1cell]
(x1) edge['] node {\beta_{A,B}1} (x2)
(x2) edge['] node {a} (x3)
(x3) edge['] node[pos=.5] {1\beta_{A,C}} (x4)
(x4) edge['] node[pos=.5] {\abdot} (x5)
(x5) edge['] node {\beta_{B,C}1} (x6)
(x6) edge['] node {a} (x7)
(x1) edge node {a} (y1)
(y1) edge node {1\beta_{B,C}} (y2)
(y2) edge node {\abdot} (y3)
(y3) edge node[pos=.5] {\beta_{A,C}1} (y4)
(y4) edge node {a} (y5)
(y5) edge node {1\beta_{A,B}} (x7)
;}
\begin{scope}[shift={(-2.4,0)}]
{
\tikzset{every node/.style={scale=.78}}
\boundary
\draw[0cell,font=\small]
node[between=x5 and y1 at .4,shift={(.4,0)}] (z) {(BC)A}
;
\draw[1cell]
(y1) edge node[swap,pos=.7] {\beta} (z)
(z) edge node[pos=.5] {a} (x4)
(z) edge node[pos=.5] (one) {1} (x5)
(z) edge[bend right=0] node[pos=.35] {\beta 1} (x6)
(y2) edge[bend right=0] node[pos=.4] {\beta} (x6)
;
}
\draw[2cell]
node[between=x1 and z at .5, shift={(-.2,0)}, rotate=-0, 2label={above,R_{A|B,C}}] {\Rightarrow}
node[between=x4 and one at .5, shift={(0,0)}, rotate=-0, 2label={below,\etaainv}] {\Rightarrow}
node[between=one and x6 at .25, shift={(0,0)}, rotate=0, 2label={below,r}] {\Rightarrow}
node[between=y2 and z at .5, shift={(0,0)}, rotate=-0, 2label={above,\beta_{1,\beta}}] {\Rightarrow}
node[between=z and y5 at .55, shift={(0,-.5)}, rotate=-0, 2label={above,(R^1_{A|C,B})^{-1}}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(1.4,0)}]
{
\tikzset{every node/.style={scale=.78}}
\boundary
\draw[0cell,font=\small]
node[between=x2 and x7 at .4,shift={(.4,0)}] (z) {(BA)C}
;
\draw[1cell]
(x1) edge node[pos=.65] {\beta 1} (z)
(x2) edge node[pos=.5] (one) {1} (z)
(x3) edge node[',pos=.5] {\abdot} (z)
(z) edge node[',pos=.5] {\beta} (x7)
(x1) edge node[pos=.5] {\beta} (y5)
;
}
\draw[2cell]
node[between=z and x6 at .5, shift={(-.2,0)}, rotate=0, 2label={above,R^1_{B,A|C}}] {\Rightarrow}
node[between=x3 and one at .5, shift={(-30:0)}, rotate=0, 2label={above,\hspace{-2ex}\etaainv}] {\Rightarrow}
node[between=x1 and one at .6, shift={(-.1,0)}, rotate=0, 2label={above,\ell}] {\Rightarrow}
node[between=z and y5 at .45, shift={(0,0)}, rotate=0, 2label={above,\betainv_{\beta,1}}] {\Rightarrow}
node[between=y2 and z at .45, shift={(0,.1)}, rotate=0, 2label={above,(R^3_{A,B|C})^{-1}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
In the above pasting diagrams:
\begin{itemize}
\item $\ell$ and $r$ are the left and right unitors in the base bicategory $\B$.
\item $\etaa : 1 \to \abdot a$ is the unit of the adjoint equivalence $(a,\abdot,\etaa,\epza)$, whose inverse is denoted by\label{notation:etaainv} $\etaainv$.
\item $R^1_{A|C,B}$ is induced by the left hexagonator, and will be defined in \Cref{expl:left-hex-mates} below.
\item $R^1_{B,A|C}$ and $R^3_{A,B|C}$ are induced by the right hexagonator, and will be defined in \Cref{expl:right-hex-mates} below.
\end{itemize}
\end{description}
This finishes the definition of a braided monoidal bicategory.
\end{definition}
\begin{explanation}[Hexagonators]\label{expl:braided-monbicat-data}
The (co)domain of the left hexagonator $\Rone$ is an iterated horizontal composite as in \Cref{def:lax-tr-comp} of three strong transformations, with $\whis$ the whiskerings in \Cref{def:whiskering-transformation}. This is well-defined by \Cref{lax-tr-compose,pre-whiskering-transformation,post-whiskering-transformation}. The same remark also applies to the right hexagonator $\Rtwo$.
\end{explanation}
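The hexagonators measure the failure of the two hexagon routes to agree on the nose. In a strict skeleton of finite sets, where the object $n$ stands for $\{0,\dots,n-1\}$, the tensor product is encoded by flat indices, and the braiding is the flip, both hexagonators may be taken to be identities, so the left hexagon reduces to an equation of functions that can be checked by brute force. A minimal Python sketch; the helpers `beta`, `tensor`, and `compose` and the flat-index encoding are illustrative assumptions, not notation from the text:

```python
def beta(m, n):
    # Flip braiding m (x) n -> n (x) m on flat indices: i*n + j |-> j*m + i.
    return lambda x: (x % n) * m + (x // n)

def tensor(f, g, g_dom, g_cod):
    # f (x) g on flat indices; g_dom and g_cod are the sizes of g's
    # domain and codomain.
    return lambda x: f(x // g_dom) * g_cod + g(x % g_dom)

def compose(g, f):
    return lambda x: g(f(x))

a, b, c = 2, 3, 5            # sizes of the objects A, B, C
ident = lambda x: x

# Left hexagon, strictly:
#   beta_{A,B(x)C} = (1_B (x) beta_{A,C}) o (beta_{A,B} (x) 1_C).
lhs = beta(a, b * c)
rhs = compose(tensor(ident, beta(a, c), a * c, a * c),
              tensor(beta(a, b), ident, c, c))
assert all(lhs(x) == rhs(x) for x in range(a * b * c))
```

In a genuine braided monoidal bicategory this equation only holds up to the invertible $2$-cell $\Rone$, which is exactly what the pasting diagrams above keep track of.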
\begin{explanation}[Mates of the right hexagonator]\label{expl:right-hex-mates}\index{mate!right hexagonator}\index{right hexagonator!mate}
In the (3,1)-crossing axiom, the $2$-cell $1R^1_{B,C|D}$ is defined as the vertical composite
\begin{equation}\label{right-hex-mate-1}
\begin{tikzpicture}[xscale=5.5,yscale=1.5,baseline={(t.base)}]
\def\h{-1} \def\v{-1}
\draw[0cell]
(0,0) node (x1) {\big\{\big[(1_A\tensor a_{D,B,C}) \big(1_A \tensor (\beta_{B,D}\tensor 1_C)\big)\big] (1_A \tensor \abdot_{B,D,C})\big\}\big[1_A \tensor (1_B\tensor \beta_{C,D})\big]}
($(x1)+(0,\v)$) node (x2) {\big\{\big[(1_A 1_A) \tensor \big(a_{D,B,C} (\beta_{B,D}\tensor 1_C)\big)\big] (1_A \tensor \abdot_{B,D,C})\big\}\big[1_A \tensor (1_B\tensor \beta_{C,D})\big]}
($(x2)+(0,\v)$) node (x3) {\big\{\big[(1_A 1_A)1_A\big] \tensor \big[\big(a_{D,B,C} (\beta_{B,D}\tensor 1_C)\big)\abdot_{B,D,C}\big]\big\} \big[1_A \tensor (1_B\tensor \beta_{C,D})\big]}
($(x3)+(0,\v)$) node (x4) {\big[\big((1_A 1_A)1_A\big)1_A\big] \tensor \big\{\big[\big(a_{D,B,C} (\beta_{B,D}\tensor 1_C)\big)\abdot_{B,D,C}\big] (1_B\tensor \beta_{C,D})\big\}}
($(x4)+(0,\v)$) node (x5) {(1_A 1_A) \tensor \big(\beta_{B\tensor C,D} \abdot_{B,C,D}\big)}
($(x5)+(0,\v)$) node (x6) {\big(1_A\tensor \beta_{B\tensor C,D}\big) \big(1_A\tensor \abdot_{B,C,D}\big)}
($(x1)+(\h,0)$) node[inner sep=0pt] (a) {}
($(x6)+(\h,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\{\tensortwo * 1\}* 1} (x2)
(x2) edge node {\tensortwo * 1} (x3)
(x3) edge node (t) {\tensortwo} (x4)
(x4) edge node {(rr) \tensor R^1_{B,C|D}} (x5)
(x5) edge node {\tensortwoinv} (x6)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[pos=.75] {1R^1_{B,C|D}} (b)
(b) edge[shorten <=-1pt] (x6)
;
\end{tikzpicture}
\end{equation}
in $\B\big(A(B(CD)),A(D(BC))\big)$ with:
\begin{itemize}
\item $r$ the right unitor in the base bicategory $\B$;
\item $R^1_{B,C|D}$ the composite of the following pasting diagram in $\B$, with all the $\tensor$ symbols omitted to save space.
\[\begin{tikzpicture}[xscale=4.5,yscale=1.2]
\def\v{1} \def\w{1} \def\u{.23}
\draw[0cell]
(0,0) node (x1) {B(CD)}
($(x1)+(\u,\v)$) node (x2) {B(DC)}
($(x1)+(\w-\u,\v)$) node (x3) {(BD)C}
($(x1)+(\w,0)$) node (x4) {(DB)C}
($(x1)+(\u,-\v)$) node (y1) {(BC)D}
($(x1)+(\w-\u,-\v)$) node (y2) {D(BC)}
($(x4)+(\u,-\v)$) node (y3) {D(BC)}
;
\draw[1cell]
(x1) edge node[pos=.2] {1_B \beta_{C,D}} (x2)
(x2) edge node {\abdot} (x3)
(x3) edge node[pos=.8] {\beta_{B,D} 1_C} (x4)
(x4) edge node[pos=.8] {a} (y3)
(x1) edge node[swap,pos=.1] {\abdot} (y1)
(y1) edge[bend right=80,looseness=2] node[swap] {\beta_{BC,D}} (y3)
(y1) edge node {\beta_{BC,D}} (y2)
(y2) edge node[pos=.2] {\abdot} (x4)
(y2) edge node[swap] {1} (y3)
;
\draw[2cell]
node[between=x1 and x4 at .4, shift={(0,.2)}, rotate=-90, 2label={above,R_{B,C|D}}] {\Rightarrow}
node[between=y2 and y3 at .4, shift={(0,.4)}, rotate=-90, 2label={above,\epza}] {\Rightarrow}
node[between=y1 and y3 at .45, shift={(0,-.5)}, rotate=-90, 2label={above,\ell}] {\Rightarrow}
;
\end{tikzpicture}\]
\end{itemize}
In the previous pasting diagram:
\begin{itemize}
\item $R_{B,C|D}$ is a component of the right hexagonator $\Rtwo$.
\item $\epza$ is the counit of the adjoint equivalence $(a,\abdot,\etaa,\epza)$.
\item $\ell$ is the left unitor in the base bicategory $\B$.
\end{itemize}
In other words, $R^1_{B,C|D}$ is induced by the right hexagonator in a manner similar to the mates in \Cref{definition:mates}.
Moreover:
\begin{itemize}
\item With the appropriate change of symbols, the previous pasting diagram also defines the $2$-cell $R^1_{B,A|C}$ in the Yang-Baxter axiom.
\item The $2$-cell $1R_{A,B|D}$ in the (2,2)-crossing axiom is a vertical composite involving $R_{A,B|D}$, similar to \eqref{right-hex-mate-1}.
\end{itemize}
Next, in the (3,1)-crossing axiom, the $2$-cell $R^2_{A,B|D}1$ is defined as the vertical composite
\begin{equation}\label{right-hex-mate-2}
\begin{tikzpicture}[xscale=5.5,yscale=1.5,baseline={(t.base)}]
\def\h{-1} \def\v{-1}
\draw[0cell]
(0,0) node (x1) {\big\{\big[\big((\beta_{A,D} \tensor 1_B)\tensor 1_C\big) \big(\abdot_{A,D,B} \tensor 1_C\big)\big] \big((1_A\tensor \beta_{B,D}) \tensor 1_C \big)\big\} \big(a_{A,B,D} \tensor 1_C\big)}
($(x1)+(0,\v)$) node (x2) {\big\{\big[\big((\beta_{A,D} \tensor 1_B)\abdot_{A,D,B}\big)\tensor (1_C 1_C)\big] \big((1_A\tensor \beta_{B,D}) \tensor 1_C \big)\big\} \big(a_{A,B,D} \tensor 1_C\big)}
($(x2)+(0,\v)$) node (x3) {\big\{\big[\big((\beta_{A,D} \tensor 1_B)\abdot_{A,D,B}\big)(1_A\tensor \beta_{B,D})\big] \tensor \big((1_C 1_C)1_C\big)\big\} \big(a_{A,B,D} \tensor 1_C\big)}
($(x3)+(0,\v)$) node (x4) {\big\{\big[\big((\beta_{A,D} \tensor 1_B)\abdot_{A,D,B}\big)(1_A\tensor \beta_{B,D})\big] a_{A,B,D}\big\} \tensor \big[\big((1_C 1_C)1_C\big)1_C\big]}
($(x4)+(0,\v)$) node (x5) {\big(\abdot_{D,A,B} \beta_{A\tensor B,D}\big) (1_C 1_C)}
($(x5)+(0,\v)$) node (x6) {\big(\abdot_{D,A,B} \tensor 1_C\big) \big(\beta_{A\tensor B,D} \tensor 1_C\big)}
($(x1)+(\h,0)$) node[inner sep=0pt] (a) {}
($(x6)+(\h,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\{\tensortwo * 1\}* 1} (x2)
(x2) edge node {\tensortwo * 1} (x3)
(x3) edge node (t) {\tensortwo} (x4)
(x4) edge node {R^2_{A,B|D} \tensor (rr)} (x5)
(x5) edge node {\tensortwoinv} (x6)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[pos=.75] {R^2_{A,B|D}1} (b)
(b) edge[shorten <=-1pt] (x6)
;
\end{tikzpicture}
\end{equation}
in $\B\big(((AB)D)C,((DA)B)C\big)$, with $R^2_{A,B|D}$ the composite of the following pasting diagram in $\B$.
\[\begin{tikzpicture}[xscale=4.5,yscale=1.2]
\def\v{1} \def\w{1} \def\u{.23} \def\i{.35}
\draw[0cell]
(0,0) node (x1) {A(BD)}
($(x1)+(\u,\v)$) node (x2) {A(DB)}
($(x1)+(\w-\u,\v)$) node (x3) {(AD)B}
($(x1)+(\w,0)$) node (x4) {(DA)B}
($(x1)+(\u,-\v)$) node (y1) {(AB)D}
($(x1)+(\w-\u,-\v)$) node (y2) {D(AB)}
($(x1)+(-\u,-\v)$) node (y) {(AB)D}
;
\draw[1cell]
(y) edge node[pos=.2] {a} (x1)
(x1) edge node[pos=.2] {1_A \beta_{B,D}} (x2)
(x2) edge node {\abdot} (x3)
(x3) edge node[pos=.8] {\beta_{A,D} 1_B} (x4)
(y) edge[bend right=80,looseness=2] node[swap] {\beta_{AB,D}} (y2)
(y2) edge node[swap,pos=.8] {\abdot} (x4)
(y) edge node[swap] {1} (y1)
(y1) edge node {\beta_{AB,D}} (y2)
(x1) edge node[pos=.6] {\abdot} (y1)
;
\draw[2cell]
node[between=x1 and x4 at .4, shift={(0,.2)}, rotate=-90, 2label={above,R_{A,B|D}}] {\Rightarrow}
node[between=y and y1 at .4, shift={(0,.3)}, rotate=-90, 2label={above,\etaainv}] {\Rightarrow}
node[between=y and y2 at .5, shift={(0,-.5)}, rotate=-90, 2label={above,r}] {\Rightarrow}
;
\end{tikzpicture}\]
Here $\etaa$ is the unit of the adjoint equivalence $(a,\abdot,\etaa,\epza)$, with inverse $\etaainv$. Moreover, the $2$-cell $R_{A,B|C}1$ in the (2,2)-crossing axiom is a vertical composite involving $R_{A,B|C}$, similar to \eqref{right-hex-mate-2}.
Finally, the $2$-cell $R^3_{A,B|C}$ in the Yang-Baxter axiom is the composite of the pasting diagram
\[\begin{tikzpicture}[xscale=4.5,yscale=1.2]
\def\v{1} \def\w{1} \def\u{.23} \def\i{.2} \def\p{20}
\draw[0cell]
(0,0) node (x1) {A(BC)}
($(x1)+(\u,\v)$) node (x2) {A(CB)}
($(x1)+(\w-\u,\v)$) node (x3) {(AC)B}
($(x1)+(\w,0)$) node (x4) {(CA)B}
($(x1)+(\u,-\v)$) node (y1) {(AB)C}
($(x1)+(\w-\u,-\v)$) node (y2) {C(AB)}
($(x1)+(0,-2*\v)$) node (y0) {(AB)C}
($(x4)+(0,-2*\v)$) node (y3) {C(AB)}
;
\draw[1cell]
(x1) edge node[pos=.2] {1_A \beta_{B,C}} (x2)
(x2) edge node {\abdot} (x3)
(x3) edge node[pos=.8] {\beta_{A,C} 1_B} (x4)
(x1) edge node[pos=.6] {\abdot} (y1)
(y1) edge node {\beta_{AB,C}} (y2)
(y2) edge node[pos=.2] {\abdot} (x4)
(y0) edge[bend left=\p] node {a} (x1)
(y0) edge node[pos=.3] {1} (y1)
(x4) edge[bend left=\p] node {a} (y3)
(y2) edge node {1} (y3)
(y0) edge[bend right=15] node[swap,pos=.6] {\beta_{AB,C}} (y2)
(y0) edge node[swap] {\beta_{AB,C}} (y3)
;
\draw[2cell]
node[between=x1 and x4 at .4, shift={(0,.2)}, rotate=-90, 2label={above,R_{A,B|C}}] {\Rightarrow}
node[between=y0 and x1 at .5, shift={(-.5,0)}, rotate=-45, 2label={above,\etaainv}] {\Rightarrow}
node[between=y3 and x4 at .5, shift={(.4,0)}, rotate=225, 2label={below,\epza}] {\Rightarrow}
node[between=y0 and y1 at .6, shift={(.6,0)}, rotate=-45, 2label={above,r}] {\Rightarrow}
node[between=y0 and y3 at .75, shift={(0,.5)}, rotate=-90, 2label={above,\ell}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\B$.
\end{explanation}
\begin{explanation}[Mates of the left hexagonator]\label{expl:left-hex-mates}\index{mate!left hexagonator}\index{left hexagonator!mate}
The $2$-cell $1R^1_{A|C,D}$ appearing in the (1,3)-crossing axiom is defined as a vertical composite similar to \eqref{right-hex-mate-1}, with $R^1_{A|C,D}$ the composite of the following pasting diagram in $\B$.
\[\begin{tikzpicture}[xscale=4.5,yscale=1.2]
\def\v{1} \def\w{1} \def\u{.23} \def\i{.35}
\draw[0cell]
(0,0) node (x1) {(AC)D}
($(x1)+(\u,\v)$) node (x2) {(CA)D}
($(x1)+(\w-\u,\v)$) node (x3) {C(AD)}
($(x1)+(\w,0)$) node (x4) {C(DA)}
($(x1)+(\u,-\v)$) node (y1) {A(CD)}
($(x1)+(\w-\u,-\v)$) node (y2) {(CD)A}
($(x1)+(-\u,-\v)$) node (y) {A(CD)}
;
\draw[1cell]
(y) edge node[pos=.2] {\abdot} (x1)
(x1) edge node[pos=.2] {\beta_{A,C} 1_D} (x2)
(x2) edge node {a} (x3)
(x3) edge node[pos=.8] {1_C\beta_{A,D}} (x4)
(y) edge[bend right=80,looseness=2] node[swap] {\beta_{A,CD}} (y2)
(y2) edge node[swap,pos=.8] {a} (x4)
(y) edge node[swap] {1} (y1)
(y1) edge node {\beta_{A,CD}} (y2)
(x1) edge node[pos=.6] {a} (y1)
;
\draw[2cell]
node[between=x1 and x4 at .4, shift={(0,.2)}, rotate=-90, 2label={above,R_{A|C,D}}] {\Rightarrow}
node[between=y and y1 at .4, shift={(0,.3)}, rotate=-90, 2label={above,\epza}] {\Rightarrow}
node[between=y and y2 at .45, shift={(0,-.5)}, rotate=-90, 2label={above,r}] {\Rightarrow}
;
\end{tikzpicture}\]
Here $R_{A|C,D}$ is a component of the left hexagonator $\Rone$. Moreover:
\begin{itemize}
\item In the (2,2)-crossing axiom, the $2$-cell $1R_{B|C,D}$ is a vertical composite similar to \eqref{right-hex-mate-1} involving $R_{B|C,D}$.
\item Also in the (2,2)-crossing axiom, the $2$-cell $R^1_{A|C,D}1$ is a vertical composite similar to \eqref{right-hex-mate-2} involving $R^1_{A|C,D}$.
\item In the Yang-Baxter axiom, the $2$-cell $R^1_{A|C,B}$ is defined by the previous pasting diagram with $B$ in place of $D$.
\end{itemize}
Finally, the $2$-cell $R^2_{A|B,C}1$ in the (1,3)-crossing axiom is defined as a vertical composite similar to \eqref{right-hex-mate-2}, with $R^2_{A|B,C}$ the composite of the pasting diagram
\[\begin{tikzpicture}[xscale=4.5,yscale=1.2]
\def\v{1} \def\w{1} \def\u{.23} \def\i{.35}
\draw[0cell]
(0,0) node (x1) {(AB)C}
($(x1)+(\u,\v)$) node (x2) {(BA)C}
($(x1)+(\w-\u,\v)$) node (x3) {B(AC)}
($(x1)+(\w,0)$) node (x4) {B(CA)}
($(x1)+(\u,-\v)$) node (y1) {A(BC)}
($(x1)+(\w-\u,-\v)$) node (y2) {(BC)A}
($(x4)+(\u,-\v)$) node (y3) {(BC)A}
;
\draw[1cell]
(x1) edge node[pos=.2] {\beta_{A,B} 1_C} (x2)
(x2) edge node {a} (x3)
(x3) edge node[pos=.8] {1_B \beta_{A,C}} (x4)
(x4) edge node[pos=.8] {\abdot} (y3)
(x1) edge node[swap,pos=.1] {a} (y1)
(y1) edge[bend right=80,looseness=2] node[swap] {\beta_{A,BC}} (y3)
(y1) edge node {\beta_{A,BC}} (y2)
(y2) edge node[pos=.2] {a} (x4)
(y2) edge node[swap] {1} (y3)
;
\draw[2cell]
node[between=x1 and x4 at .4, shift={(0,.2)}, rotate=-90, 2label={above,R_{A|B,C}}] {\Rightarrow}
node[between=y2 and y3 at .4, shift={(0,.3)}, rotate=-90, 2label={above,\etaainv}] {\Rightarrow}
node[between=y1 and y3 at .45, shift={(0,-.5)}, rotate=-90, 2label={above,\ell}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\B$.
\end{explanation}
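All of the mates above follow one pattern: a $2$-cell relating composites that involve the associator $a$ is transported along the adjoint inverse $\abdot$ by pasting with the unit $\etaa$ and counit $\epza$. In the degenerate case where the ambient bicategory has integer matrices as $1$-cells and equalities as $2$-cells, an adjoint equivalence is simply an invertible matrix and taking a mate is conjugation. A minimal Python sketch; the specific matrices and the helper `matmul` are illustrative assumptions:

```python
def matmul(p, q):
    # Product of 2x2 integer matrices.
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a_mat = [[1, 1], [0, 1]]    # an invertible 1-cell, playing the role of a
a_inv = [[1, -1], [0, 1]]   # its inverse, playing the role of the adjoint a-dot
g     = [[2, 3], [4, 5]]

# A 2-cell f.a => a.g, here the equation f a = a g, determines f:
f = matmul(matmul(a_mat, g), a_inv)

# Its mate is the equation (a-dot) f = g (a-dot):
assert matmul(a_inv, f) == matmul(g, a_inv)
```

In the bicategorical setting the unit and counit are not identities, which is why the mates above must record the explicit $2$-cells $\etaainv$, $\epza$, $\ell$, and $r$.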
\begin{explanation}[Visualization]\label{expl:braided-monbicat-axioms}
In \Cref{def:braided-monbicat}:
\begin{description}
\item[Braiding] As in a braided monoidal category, the braiding $\beta$ may be visualized as the generating braid
\begin{tikzpicture}[xscale=.3,yscale=.25,baseline={(0,0).base},strand]
\draw (0,0) to (1,1);
\draw[line width=4pt, white] (1,0) to (0,1);
\draw (1,0) to (0,1);
\end{tikzpicture}
in the braid group $B_2$. However, in this case the braiding does not admit a strict inverse. Instead, it is the left adjoint of an adjoint equivalence with right adjoint $\betabdot$.
\item[Hexagonators] The left hexagonator $\Rone$ may be visualized as the left braid in \Cref{expl:hexagon-axioms}, i.e.,
\begin{tikzpicture}[xscale=.3,yscale=.25,baseline={(0,0).base},strand]
\draw (0,0) to (1,1);
\draw[line width=3pt, white] (.5,0) to (0,1);
\draw (.5,0) to (0,1);
\draw[line width=3pt, white] (1,0) to (.5,1);
\draw (1,0) to (.5,1);
\end{tikzpicture}
with the last two strings crossing over the first string. The domain of $\Rone$ corresponds to first crossing the second string over the first string, followed by crossing the last string over the first string. The codomain corresponds to crossing the last two strings over the first string in one step.
The right hexagonator $\Rtwo$ admits a similar interpretation using the right braid
\begin{tikzpicture}[xscale=.3,yscale=.25,baseline={(0,0).base},strand]
\draw (0,0) to (.5,1);
\draw (.5,0) to (1,1);
\draw[line width=3pt, white] (1,0) to (0,1);
\draw (1,0) to (0,1);
\end{tikzpicture}
in \Cref{expl:hexagon-axioms}. The domain corresponds to crossing one string over two strings to its left, one string at a time. The codomain corresponds to crossing one string over two strings in one step.
\item[Crossing Axioms] The (3,1)-crossing axiom may be visualized using the left-most picture below.
\begin{center}
\begin{tikzpicture}[xscale=1.6,yscale=1,baseline={(0,0).base},strand]
\def\s{6}
\draw (0,0) to (.5,1);
\draw (.25,0) to (.75,1);
\draw (.5,0) to (1,1);
\draw[line width=\s pt, white] (1,0) to (0,1);
\draw (1,0) to (0,1);
\node at (.5,-.5) {(3,1)-crossing};
\begin{scope}[shift={(2.5,0)}]
\draw (0,0) to (1,1);
\draw[line width=\s pt, white] (.5,0) to (0,1);
\draw (.5,0) to (0,1);
\draw[line width=\s pt, white] (.75,0) to (.25,1);
\draw (.75,0) to (.25,1);
\draw[line width=\s pt, white] (1,0) to (.5,1);
\draw (1,0) to (.5,1);
\node at (.5,-.5) {(1,3)-crossing};
\end{scope}
\begin{scope}[shift={(5,0)}]
\draw (0,0) to (.75,1);
\draw (.25,0) to (1,1);
\draw[line width=\s pt, white] (.75,0) to (0,1);
\draw (.75,0) to (0,1);
\draw[line width=\s pt, white] (1,0) to (.25,1);
\draw (1,0) to (.25,1);
\node at (.5,-.5) {(2,2)-crossing};
\end{scope}
\end{tikzpicture}
\end{center}
The common domain of the two pasting diagrams in the (3,1)-crossing axiom corresponds to crossing one string over three strings, one string at a time. The common codomain corresponds to crossing one string over three strings in one step. The two pasting diagrams correspond to two ways to transform from the common domain to the common codomain using the structures in a braided monoidal bicategory.
The (1,3)-crossing axiom and the (2,2)-crossing axiom admit similar interpretations, using the middle and the right pictures above.
\item[Yang-Baxter Axiom] The Yang-Baxter axiom may be visualized using the following pictures.
\begin{center}
\begin{tikzpicture}[xscale=1,yscale=.6,baseline={(0,0).base},rounded corners,strand]
\def\s{6}
\def\e{.1}
\begin{scope}[shift={(0,0)}]
\draw (2,0) to (0,-2);
\begin{scope}[shift={(-\e,0)}]
\draw[line width=\s pt, white] (1.2,0) to (0,-1.2) to ++(.8,-.8);
\draw (1.2,0) to (0,-1.2) to ++(.8,-.8);
\end{scope}
\draw[line width=\s pt, white] (0,0) to (2,-2);
\draw (0,0) to (2,-2);
\node at (1,-2.5) {domain};
\end{scope}
\begin{scope}[shift={(4,0)}]
\draw (2,0) to (0,-2);
\begin{scope}[shift={(\e,0)}]
\draw[line width=\s pt, white] (1.2,0) to ++(.8,-.8) to ++(-1.2,-1.2);
\draw (1.2,0) to ++(.8,-.8) to ++(-1.2,-1.2);
\end{scope}
\draw[line width=\s pt, white] (0,0) to (2,-2);
\draw (0,0) to (2,-2);
\node at (1,-2.5) {codomain};
\end{scope}
\end{tikzpicture}
\end{center}
The common domain of the two pasting diagrams in the Yang-Baxter axiom corresponds to the left picture above, while the common codomain corresponds to the right picture. The two pasting diagrams correspond to two ways to transform from the domain to the codomain using the structures in a braided monoidal bicategory. The Yang-Baxter axiom is so named because it is a version of the Yang-Baxter equation\index{Yang-Baxter!equation}
\[\def\rl{(R\tensor 1)} \def\rr{(1\tensor R)}
\rl \rr \rl = \rr \rl \rr,\]
which appears in statistical mechanics and quantum group theory.\dqed
\end{description}
\end{explanation}
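For the flip braiding, both sides of the Yang-Baxter equation act on a threefold tensor product by permuting the factors, with $R\tensor 1$ and $1\tensor R$ acting as the adjacent transpositions $s_1$ and $s_2$, so the equation can be checked directly. A minimal Python sketch; the helpers `s` and `compose` are illustrative assumptions:

```python
from itertools import permutations

def s(i):
    # Adjacent transposition swapping factors i and i+1 of a triple.
    def swap(t):
        t = list(t)
        t[i], t[i + 1] = t[i + 1], t[i]
        return tuple(t)
    return swap

def compose(*fs):
    # Right-to-left composite of the functions fs.
    def h(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return h

s1, s2 = s(0), s(1)          # R (x) 1 and 1 (x) R for the flip braiding
lhs = compose(s1, s2, s1)    # (R (x) 1)(1 (x) R)(R (x) 1)
rhs = compose(s2, s1, s2)    # (1 (x) R)(R (x) 1)(1 (x) R)
assert all(lhs(t) == rhs(t) for t in permutations("ABC"))
```

In a braided monoidal bicategory this equality of permutations is promoted to the equality of pasting diagrams in the Yang-Baxter axiom.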
\begin{remark}\label{rk:braided-tensorzero}
In the (3,1)-crossing axiom and the (1,3)-crossing axiom, the four $2$-cells involving $\tensorzero$ are usually suppressed in the literature. A similar practice is common in presentations of the tricategorical axioms, as we pointed out in \Cref{expl:associahedron}.
\end{remark}
\begin{motivation}\label{mot:sylleptic-monbicat}
Recall that a symmetric monoidal category is precisely a braided monoidal category whose braiding $\xi$ satisfies $\xi\xi = 1$, which is the symmetry axiom \eqref{monoidal-symmetry-axiom}. At the bicategory level, due to the existence of $2$-cells, there is an intermediate structure in which the symmetry axiom is replaced by an invertible modification. We introduce this structure next.
\end{motivation}
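For the flip braiding on sets or vector spaces the composite $\beta_{B,A}\beta_{A,B}$ is the identity on the nose, so the syllepsis may be chosen to be an identity $2$-cell. A minimal Python sketch with tensor products encoded by flat indices; the helper `beta` and the encoding are illustrative assumptions:

```python
def beta(m, n):
    # Flip braiding m (x) n -> n (x) m on flat indices: i*n + j |-> j*m + i.
    return lambda x: (x % n) * m + (x // n)

m, n = 4, 7
double = lambda x: beta(n, m)(beta(m, n)(x))   # beta_{B,A} after beta_{A,B}
assert all(double(x) == x for x in range(m * n))
```

In a general braided monoidal bicategory no such $2$-cell is required to exist; supplying one, subject to the two syllepsis axioms, is exactly the extra datum of a sylleptic structure.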
\begin{definition}\label{def:sylleptic-monbicat}\index{sylleptic monoidal!bicategory}\index{monoidal bicategory!sylleptic}\index{bicategory!sylleptic monoidal}
A \emph{sylleptic monoidal bicategory} is a quintuple
\[\big(\B,\beta,\Rone,\Rtwo,\syl\big)\]
consisting of the following data.
\begin{enumerate}
\item $(\B,\beta,\Rone,\Rtwo)$ is a braided monoidal bicategory.
\item $\syl$ \label{notation:syllepsis}is an invertible $2$-cell (i.e., modification)
\[\begin{tikzcd}[column sep=normal]
(\beta\whis\tau)\beta \ar{r}{\syl} & 1_{\tensor} \in \Bicatps(\B^2,\B)(\tensor,\tensor)
\end{tikzcd}\]
called the \index{syllepsis}\emph{syllepsis}. Components of $\syl$ are invertible $2$-cells
\[\begin{tikzpicture}[xscale=3.5,yscale=.6]
\def\v{1} \def\w{1} \def\p{20}
\draw[0cell]
(0,0) node (x1) {A \tensor B}
($(x1)+(\w,0)$) node (x2) {A \tensor B}
($(x1)+(\w/2,\v)$) node (x3) {B \tensor A}
;
\draw[1cell]
(x1) edge[bend right=75] node[swap] {1_{A\tensor B}} (x2)
(x1) edge[bend left=\p] node[pos=.6] {\beta_{A,B}} (x3)
(x3) edge[bend left=\p] node[pos=.4] {\beta_{B,A}} (x2)
;
\draw[2cell]
node[between=x1 and x2 at .4, shift={(0,0)}, rotate=-90, 2label={above,\syl_{A,B}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\B(A\tensor B,A\tensor B)$.
\end{enumerate}
The following two pasting diagram axioms are required to hold for objects $A,B,C\in\B$, with the same conventions as in the axioms in \Cref{def:braided-monbicat}.
\begin{description}
\newcommand{\tikzset{every node/.style={scale=.78}}}{\tikzset{every node/.style={scale=.78}}}
\item[(2,1)-Syllepsis Axiom]\index{21syllepsisaxiom@(2,1)-syllepsis axiom}\index{sylleptic monoidal!bicategory!(2,1)-syllepsis axiom}
\[\begin{tikzpicture}[x=30mm,y=30mm,scale=.78,baseline={(eq.base)}]
\def\a{360/7}
\draw[font=\Huge] (0,0) node [rotate=0] (eq) {$=$};
\newcommand{\boundary}{
\draw[0cell,font={\small}]
(90:1) node (x1) {A(BC)}
(90+\a:1) node (x2) {A(CB)}
(90+2*\a:1) node (x3) {(AC)B}
(90+3*\a:1) node (x4) {(CA)B}
(90+4*\a:1) node (x5) {C(AB)}
(90+5*\a:1) node (x6) {(AB)C}
(90+6*\a:1) node (x7) {A(BC)}
;
\draw[1cell]
(x1) edge node (one) {1} (x7)
(x1) edge['] node {1\beta_{B,C}} (x2)
(x2) edge node[swap] {\abdot} (x3)
(x3) edge node[swap] (bAC1) {\beta_{A,C}1} (x4)
(x4) edge node[swap] (a) {a} (x5)
(x5) edge['] node[pos=.5] {\beta_{C,AB}} (x6)
(x6) edge['] node[pos=.5] {a} (x7)
;}
\begin{scope}[shift={(-1.3,0)}]
{
\tikzset{every node/.style={scale=.78}}
\boundary
\draw[0cell,font=\small]
($(x2)+(-18:.9)$) node (z1) {(AB)C}
($(z1)+(180+3*180/7:.7)$) node (z2) {C(AB)}
;
\draw[1cell]
(x1) edge node[swap,pos=.4] {\abdot} (z1)
(z1) edge node[swap,pos=.5] (aABC) {a} (x7)
(z1) edge node[pos=.5,swap] {1} (x6)
(z1) edge node[swap,pos=.6] {\beta} (z2)
(z2) edge node[pos=.5] (B) {\beta} (x6)
(z2) edge node[swap] {\abdot} (x4)
(z2) edge node[pos=.6] {1} (x5)
;
}
\draw[2cell]
node[between=x3 and z1 at .5, shift={(-51:0)}, rotate=-15,
2label={above,\hspace{-1pc}R_{A,B|C}}] {\Rightarrow}
node[between=z2 and a at .55, shift={(0,0)}, rotate=20, 2label={above,\epza}] {\Rightarrow}
node[between=x5 and B at .5, shift={(25:.1)}, rotate=110, 2label={above,r\,}] {\Rightarrow}
node[between=z1 and B at .6, shift={(231:.05)}, rotate=70, 2label={above,\syl\,}] {\Rightarrow}
node[between=aABC and x6 at .3, shift={(0,0)}, rotate=110, 2label={above,r\,}] {\Rightarrow}
node[between=x1 and aABC at .6, shift={(0,0)}, rotate=70, 2label={above,\epza}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(1.3,0)}]
{
\tikzset{every node/.style={scale=.78}}
\boundary
\draw[0cell,font=\small]
($(x5)+(180-40:.9)$) node (z2) {(AC)B}
($(z2)+(2*180/7:.7)$) node (z1) {A(CB)}
;
\draw[1cell]
(x1) edge node[swap,pos=.4] (1B) {1\beta} (z1)
(z1) edge node[swap,pos=.4] {1\beta} (x7)
(x2) edge node[swap,pos=.5] {1} (z1)
(x2) edge['] node[pos=.4] (abdot) {\abdot} (z2)
(x3) edge node[pos=.5] {1} (z2)
(x4) edge node[swap,pos=.5] {\beta 1} (z2)
(z2) edge node[swap,pos=.5] {a} (z1)
;
}
\draw[2cell]
node[between=x6 and z2 at .5, shift={(180-51:0)}, rotate=139, 2label={below,\hspace{-.5pc}R^{-1}_{C|A,B}}] {\Rightarrow}
node[between=bAC1 and z2 at .55, shift={(-60:.05)}, rotate=85, 2label={above,\syl^1\hspace{-2pt}}] {\Rightarrow}
node[between=x3 and abdot at .6, shift={(-60:.05)}, rotate=30, 2label={above,\ell\,}] {\Rightarrow}
node[between=abdot and z1 at .55, shift={(-.02,-.075)}, rotate=80, 2label={above,\epza}] {\Rightarrow}
node[between=x2 and 1B at .65, shift={(0,.0)}, rotate=10, 2label={above,\ell}] {\Rightarrow}
node[between=1B and x7 at .45, shift={(0,-.0)}, rotate=70, 2label={above,\syl^2\hspace{-3pt}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
\item[(1,2)-Syllepsis Axiom]\index{12syllepsisaxiom@(1,2)-syllepsis axiom}\index{sylleptic monoidal!bicategory!(1,2)-syllepsis axiom}
\[\begin{tikzpicture}[x=30mm,y=30mm,scale=.78,baseline={(eq.base)}]
\def\a{360/7}
\draw[font=\Huge] (0,0) node [rotate=0] (eq) {$=$};
\newcommand{\boundary}{
\draw[0cell,font={\small}]
(90:1) node (x1) {B(AC)}
(90+\a:1) node (x2) {(BA)C}
(90+2*\a:1) node (x3) {(AB)C}
(90+3*\a:1) node (x4) {(AB)C}
(90+4*\a:1) node (x5) {A(BC)}
(90+5*\a:1) node (x6) {(BC)A}
(90+6*\a:1) node (x7) {B(CA)}
;
\draw[1cell]
(x7) edge['] node (one) {1\beta_{C,A}} (x1)
(x2) edge['] node {\beta_{B,A}1} (x3)
(x1) edge node[swap] {\abdot} (x2)
(x4) edge['] node[swap] (bAC1) {1} (x3)
(x4) edge node[swap] (a) {a} (x5)
(x5) edge['] node[pos=.5] {\beta_{A,BC}} (x6)
(x6) edge['] node[pos=.5] {a} (x7)
;}
\begin{scope}[shift={(1.3,0)},rotate=-3*360/7]
{
\tikzset{every node/.style={scale=.78}}
\boundary
\draw[0cell,font=\small]
($(x1)+(-18-360/7:.9)$) node (z1) {(BC)A}
($(z1)+(8*180/7:.7)$) node (z2) {A(BC)}
;
\draw[1cell]
(x7) edge node[',pos=.5] (aABC) {\abdot} (z1)
(z1) edge node[swap,pos=.5] {\beta} (z2)
(z2) edge node[swap] {\abdot} (x3)
(x6) edge node[pos=.4] {1} (z1)
(x5) edge node[pos=.5] (B) {\beta} (z1)
(x5) edge node[',pos=.5] {1} (z2)
(x4) edge node[',pos=.6] {a} (z2)
;
}
\draw[2cell]
node[between=x2 and z1 at .5, shift={(-51:0)}, rotate=140, 2label={below,\hspace{-.5pc}R_{B,C|A}}] {\Rightarrow}
node[between=bAC1 and z2 at .45, shift={(0,0)}, rotate=80, 2label={above,\etaainv\hspace{-.8ex}}] {\Rightarrow}
node[between=z2 and a at .55, shift={(0,0)}, rotate=20, 2label={above,\ell}] {\Rightarrow}
node[between=z2 and B at .6, shift={(-90:0.09)}, rotate=80, 2label={above,\syl\,}] {\Rightarrow}
node[between=x6 and B at .6, shift={(231:.05)}, rotate=30, 2label={above,\ell\,}] {\Rightarrow}
node[between=aABC and x6 at .3, shift={(90:0.08)}, rotate=90, 2label={above,\etaainv}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(-1.3,0)}]
{
\tikzset{every node/.style={scale=.78}}
\tikzset{rotate=-3*360/7}
\boundary
\draw[0cell,font=\small]
($(x5)+(180-40:.9)$) node (z2) {(BA)C}
($(z2)+(2*180/7:.7)$) node (z1) {B(AC)}
;
\draw[1cell]
(z1) edge node[pos=.5] (1B) {1} (x1)
(z1) edge node[swap,pos=.4] {1\beta} (x7)
(z1) edge node[pos=.5] {\abdot} (x2)
(z2) edge['] node[pos=.4] (abdot) {1} (x2)
(z2) edge node[',pos=.5] {\beta 1} (x3)
(x4) edge node[swap,pos=.5] {\beta 1} (z2)
(z2) edge node[swap,pos=.5] {a} (z1)
;
}
\draw[2cell]
node[between=x6 and z2 at .5, shift={(180-51:0)}, rotate=-15, 2label={above,\hspace{-1pc}R^{-1}_{A|B,C}}] {\Rightarrow}
node[between=bAC1 and z2 at .4, shift={(-60:.05)}, rotate=60, 2label={above,\syl^1\hspace{-2pt}}] {\Rightarrow}
node[between=x3 and abdot at .6, shift={(0:.05)}, rotate=110, 2label={above,r\,}] {\Rightarrow}
node[between=abdot and z1 at .35, shift={(0:0)}, rotate=60, 2label={above,\etaainv\hspace{-1.2ex}}] {\Rightarrow}
node[between=x2 and 1B at .7, shift={(0,.0)}, rotate=100, 2label={above,r\,}] {\Rightarrow}
node[between=1B and x7 at .4, shift={(0,-.0)}, rotate=60, 2label={above,\syl^2\hspace{-3pt}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
\end{description}
The $2$-cells $\syl^1$ and $\syl^2$ are induced by the syllepsis, and will be defined in \eqref{syllepsis-mates} below. This finishes the definition of a sylleptic monoidal bicategory.
\end{definition}
\begin{explanation}[Syllepsis]\label{expl:syllepsis}
In \Cref{def:sylleptic-monbicat}:
\begin{enumerate}
\item The domain of the syllepsis $\syl$ is the horizontal composite as in \Cref{def:lax-tr-comp} of the strong transformations
\[\begin{tikzcd}
\tensor \ar{r}{\beta} & \tensor\tau \ar{r}{\beta \whis\tau} & \tensor\tau\tau = \tensor,
\end{tikzcd}\]
with $\whis$ the pre-whiskering in \Cref{def:whiskering-transformation}. That $(\beta\whis\tau)\beta$ is a strong transformation follows from \Cref{lax-tr-compose,pre-whiskering-transformation}.
\item In the (2,1)-syllepsis axiom and the (1,2)-syllepsis axiom, the $2$-cells $\syl^1$ and $\syl^2$ are defined as the vertical composites
\begin{equation}\label{syllepsis-mates}
\begin{tikzpicture}[xscale=2.4,yscale=1.5,baseline={(nu.base)}]
\def\h{-1} \def\v{-1}
\draw[0cell]
(0,0) node (x1) {\big((\betatau)_{A,B}\tensor 1_C\big) \big(\beta_{A,B}\tensor 1_C\big)}
($(x1)+(0,\v)$) node (x2) {\big((\betatau)_{A,B}\beta_{A,B}\big) \tensor (1_C 1_C)}
($(x2)+(0,\v)$) node (x3) {1_{A\tensor B}\tensor 1_C}
($(x3)+(0,\v)$) node (x4) {1_{(A\tensor B)\tensor C}}
($(x1)+(\h,0)$) node[inner sep=0pt] (a) {}
($(x4)+(\h,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\tensortwo} (x2)
(x2) edge node (nu) {\syl_{A,B}\tensor \ell} (x3)
(x3) edge node {\tensorzeroinv} (x4)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[pos=.7] {\syl^1} (b)
(b) edge[shorten <=-1pt] (x4)
;
\begin{scope}[shift={(2.2,0)}]
\draw[0cell]
(0,0) node (x1) {\big(1_A\tensor(\betatau)_{B,C}\big) \big(1_A\tensor\beta_{B,C}\big)}
($(x1)+(0,\v)$) node (x2) {(1_A 1_A) \tensor \big((\betatau)_{B,C}\beta_{B,C}\big)}
($(x2)+(0,\v)$) node (x3) {1_A \tensor 1_{B\tensor C}}
($(x3)+(0,\v)$) node (x4) {1_{A\tensor (B\tensor C)}}
($(x1)+(\h,0)$) node[inner sep=0pt] (a) {}
($(x4)+(\h,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\tensortwo} (x2)
(x2) edge node {\ell\tensor \syl_{B,C}} (x3)
(x3) edge node (t) {\tensorzeroinv} (x4)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[pos=.7] {\syl^2} (b)
(b) edge[shorten <=-1pt] (x4)
;
\end{scope}
\end{tikzpicture}
\end{equation}
in $\B\big((AB)C,(AB)C\big)$ and $\B\big(A(BC),A(BC)\big)$, respectively, with $\tensorzeroinv$ the inverse of the lax unity constraint \eqref{tensorzero-gf} for $\tensor$.\dqed
\end{enumerate}
\end{explanation}
\begin{explanation}[Visualization]\label{expl:sylleptic-monbicat-visual}
In \Cref{def:sylleptic-monbicat}:
\begin{description}
\item[Braiding] Due to the existence of the syllepsis, the braiding $\beta$ in a sylleptic monoidal bicategory may be visualized as the virtual crossing
\begin{tikzpicture}[xscale=.3,yscale=.25,baseline={(0,0)},strand]
\draw (0,0) to (1,1);
\draw (1,0) to (0,1);
\end{tikzpicture}.
\item[Syllepsis] It is the isomorphism
\begin{tikzpicture}[xscale=.2,yscale=.25,baseline={(0,0)}, rounded corners,strand]
\draw (0,0) to (1.25,.5) to (0,1);
\draw (1,0) to (-.25,.5) to (1,1);
\end{tikzpicture}
$\iso$
\begin{tikzpicture}[xscale=.2,yscale=.25,baseline={(0,0)}, strand]
\foreach \x in {0,.75} \draw (\x,0) to (\x,1);
\end{tikzpicture}\hspace{.5mm}
that straightens its domain to the identity.
\item[Axioms] In the (2,1)-syllepsis axiom, the common domain is the left picture below
\begin{center}
\begin{tikzpicture}[xscale=1,baseline={(0,0)},rounded corners,strand]
\draw (0,0) to (.5,.5) to (0,1);
\draw (.5,0) to (1,.5) to (.5,1);
\draw (1,0) to (0,.5) to (1,1);
\node at (.5,-.5) {(2,1)-syllepsis};
\begin{scope}[shift={(3,0)}]
\draw (0,0) to (1,.5) to (0,1);
\draw (.5,0) to (0,.5) to (.5,1);
\draw (1,0) to (.5,.5) to (1,1);
\node at (.5,-.5) {(1,2)-syllepsis};
\end{scope}
\end{tikzpicture}
\end{center}
with common codomain the identity
\begin{tikzpicture}[xscale=.2,yscale=.25,baseline={(0,0)}]
\foreach \x in {0,.5,1} \draw (\x,0) to (\x,1);
\end{tikzpicture}. The two pasting diagrams correspond to two ways to transform from the domain to the codomain using the structures in a sylleptic monoidal bicategory. The (1,2)-syllepsis axiom admits a similar interpretation with common domain the right picture above.\dqed
\end{description}
\end{explanation}
\begin{definition}\label{def:symmetric-monbicat}\index{symmetric monoidal!bicategory}\index{monoidal bicategory!symmetric}\index{bicategory!symmetric monoidal}
A \emph{symmetric monoidal bicategory} is a sylleptic monoidal bicategory as in \Cref{def:sylleptic-monbicat} that satisfies the pasting diagram axiom
\begin{equation}\label{symmetric-monbicat-axiom}
\begin{tikzpicture}[xscale=3.5,yscale=1.2,baseline={(eq.base)}]
\def\h{.3} \def\i{.1} \def\v{1}
\draw[font=\Huge] (0,0) node [rotate=0] (eq) {$=$};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x1) {AB}
($(x1)+(\i,-\v)$) node (x2) {BA}
($(x2)+(1-2*\i,0)$) node (x3) {AB}
($(x3)+(\i,\v)$) node (x4) {BA}
;
\draw[1cell]
(x1) edge node {\beta} (x4)
(x1) edge node[swap,pos=.3] {\beta} (x2)
(x2) edge node[swap] {\beta} (x3)
(x3) edge node[swap,pos=.8] {\beta} (x4)
;}
\begin{scope}[shift={(-1-\h,\v/2)}]
\boundary
\draw[1cell]
(x1) edge node[pos=.4] {1} (x3)
;
\draw[2cell]
node[between=x2 and x3 at .25, shift={(0,.4)}, rotate=90, 2label={below,\syl}] {\Rightarrow}
node[between=x1 and x4 at .75, shift={(0,-.4)}, rotate=90, 2label={below,r}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(\h,\v/2)}]
\boundary
\draw[1cell]
(x2) edge node[pos=.6] {1} (x4)
;
\draw[2cell]
node[between=x2 and x3 at .75, shift={(0,.4)}, rotate=90, 2label={below,\syl}] {\Rightarrow}
node[between=x1 and x4 at .2, shift={(0,-.4)}, rotate=90, 2label={below,\ell}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
for objects $A$ and $B$ in $\B$.
\end{definition}
\begin{explanation}[Visualization]\label{expl:sym-monbicat-axiom}
With the braiding and the syllepsis interpreted as in \Cref{expl:sylleptic-monbicat-visual}, the symmetric monoidal bicategory axiom \eqref{symmetric-monbicat-axiom} may be visualized as the commutativity of the following diagram.
\begin{center}
\begin{tikzpicture}[xscale=1,yscale=.7, rounded corners]
\def\a{2.5} \def\b{1.5}
\draw[strand] (0,0) to (1+.08,.5) to (0-.08,1) to (1,1.5);
\draw[strand] (1,0) to (0-.08,.5) to (1+.08,1) to (0,1.5);
\draw[->,out=45,in=180] (.5,1.8) to (2.2,2.25);
\draw[->,out=0,in=90] (3.8,2.25) to (5.5,1.3);
\draw[->,out=-45,in=180] (.5,-.3) to (2.2,-.75);
\draw[->,out=0,in=-90] (3.8,-.75) to (5.5,.2);
\begin{scope}[shift={(\a,\b)}]
\draw[strand] (0,0) to (0,1) to (1,1.5);
\draw[strand] (1,0) to (1,1) to (0,1.5);
\end{scope}
\begin{scope}[shift={(\a,-\b)}]
\draw[strand] (0,0) to (1,.5) to (1,1.5);
\draw[strand] (1,0) to (0,.5) to (0,1.5);
\end{scope}
\begin{scope}[shift={(2*\a,.5)}]
\draw[strand] (0,0) to (1,.5);
\draw[strand] (1,0) to (0,.5);
\end{scope}
\end{tikzpicture}
\end{center}
In other words, given three consecutive virtual crossings, straightening the first two virtual crossings is the same as straightening the last two virtual crossings.
\end{explanation}
\section{The Gray Tensor Product}
\label{sec:gray-tensor}
The Gray tensor product is a monoidal product for $2$-categories that
is a weakening of the Cartesian product. In
\cref{theorem:cub-gray-adj} we show that the Gray tensor product
classifies certain pseudofunctors of $2$-categories called \emph{cubical
pseudofunctors} (\cref{definition:cubical-psfun}). Extending this,
\cref{theorem:Gray-is-symm-mon} shows that the underlying $1$-category
of $\IICat$ is symmetric monoidal closed (as a $1$-category) with
respect to the Gray tensor product and the internal hom given by
pseudofunctors of $2$-categories.
We begin with a relatively simple construction. This will have some,
but not all, of the properties we require of a tensor product.
Suppose $\C$ and $\D$ are $2$-categories. In the next definition, we
consider $\IICat$ as a $\Cat$-enriched category with $2$-categories
as objects.
\begin{definition}\label{definition:box-product}\index{box product}\index{product!box}\index{2-category!box product}
The \emph{box product} $\C \Box \D$ is the $\Cat$-enriched pushout
in the following diagram, induced by the inclusions $\Ob\C \to \C$
and $\Ob\D \to \D$.
\begin{equation}\label{diagram:C-box-D}
\begin{tikzpicture}[x=30mm,y=20mm,vcenter]
\draw[0cell]
(0,0) node (a) {\Ob\C \times \Ob\D}
(1,0) node (b) {\C \times \Ob\D}
(0,-1) node (c) {\Ob\C \times \D}
(1,-1) node (d) {\C \Box \D}
;
\path[1cell]
(a) edge node {} (b)
(a) edge node {} (c)
(b) edge node {} (d)
(c) edge node {} (d)
;
\end{tikzpicture}
\end{equation}
This finishes the definition of the box product.
\end{definition}
\begin{explanation}\
\begin{enumerate}
\item The $1$-cells of $\C \times \Ob \D$ are given by $(f,1_Y)$ for a
$1$-cell $f\in \C(X,X')$ and an object $Y \in \D$, and the $2$-cells
of $\C \times \Ob \D$ are given by $(\alpha,1_{1_Y})$ for a $2$-cell
$\alpha \in \C(X,X')(f_1,f_2)$ and an object $Y \in \D$. We denote
their images in $\C \Box \D$ as $f \Box Y$ and $\alpha \Box Y$,
respectively, and do likewise for $1$-cells and $2$-cells in $\Ob \C
\times \D$.
\item Unpacking \cref{definition:box-product}, we can describe $\C \Box
\D$ as follows.
\begin{description}
\item[Objects] The objects are pairs $(X,Y)$, written $X \Box Y$,
with $X \in \C$ and $Y \in \D$.
\item[$1$-Cells] The $1$-cells are generated under composition by pairs
consisting of a $1$-cell and an object, called \emph{basic $1$-cells}\index{basic 1-cell!box product}\index{box product!basic 1-cell}\index{1-cell!basic, in box product}
and written as
\begin{itemize}
\item $f \Box Y\cn X \Box Y \to X' \Box Y$, for $f \in \C(X,X')$ and $Y \in \D$, or
\item $X \Box g\cn X \Box Y \to X \Box Y'$, for $g \in \D(Y,Y')$ and $X \in \C$.
\end{itemize}
Because the arrows in \eqref{diagram:C-box-D} are $2$-functors,
these $1$-cells are subject to the following conditions.
\begin{itemize}
\item For $X \in \C$ and $Y \in \D$ we have
\[
1_X \Box Y = 1_{X \Box Y} = X \Box 1_Y.
\]
\item For $f \in \C(X,X')$, $f' \in \C(X',X'')$, and $Y \in \D$
we have
\[
(f'\Box Y) (f \Box Y) = (f'f) \Box Y.
\]
\item For $g \in \D(Y,Y')$, $g' \in \D(Y',Y'')$, and $X \in \C$
we have
\[
(X \Box g') (X \Box g) = X \Box (g'g).
\]
\end{itemize}
\item[$2$-Cells] The $2$-cells are generated under horizontal and
vertical composition by pairs consisting of a $2$-cell and an
object, called \emph{basic $2$-cells}\index{basic 2-cell!box product}\index{box product!basic 2-cell}\index{2-cell!basic, in box product} and written as
\begin{itemize}
\item $\alpha \Box Y\cn f_1 \Box Y \to f_2 \Box Y$ for $\alpha \in
\C(X,X')(f_1,f_2)$ and $Y \in \D$.
\item $X \Box \beta\cn X \Box g_1 \to X \Box g_2$ for $\beta \in
\D(Y,Y')(g_1,g_2)$ and $X \in \C$.
\end{itemize}
Because the arrows in \eqref{diagram:C-box-D} are $2$-functors,
these $2$-cells are subject to the following conditions.
\begin{itemize}
\item For $f \in \C(X,X')$ and $g \in \D(Y,Y')$ we have
\[
1_f \Box Y = 1_{f \Box Y} \andspace X \Box {1_g} = 1_{X \Box g}.
\]
\item For $\alpha \in \C(X,X')(f_1,f_2)$, $\alpha' \in
\C(X',X'')(f'_1,f'_2)$, and $Y \in \D$ we have
\[
(\alpha' \Box Y) * (\alpha \Box Y) = (\alpha' * \alpha) \Box Y.
\]
\item For $\beta \in \D(Y,Y')(g_1,g_2)$, $\beta' \in
\D(Y',Y'')(g'_1,g'_2)$, and $X \in \C$ we have
\[
(X \Box \beta') * (X \Box \beta) = X \Box (\beta' * \beta).
\]
\]
\item For $\alpha \in \C(X,X')(f_1,f_2)$ and $\alpha' \in
\C(X,X')(f_2,f_3)$ we have
\[
(\alpha' \Box Y) (\alpha \Box Y) = (\alpha'\alpha) \Box Y.
\]
\item For $\beta \in \D(Y,Y')(g_1,g_2)$ and $\beta' \in
\D(Y,Y')(g_2,g_3)$ we have
\[
(X \Box \beta') (X \Box \beta) = X \Box (\beta' \beta).
\]
\]
\end{itemize}
\end{description}
This concludes the unpacking of \cref{definition:box-product}.
\item By the universal property of the pushout, there is a
$2$-functor
\[
j\cn \C \Box \D \to \C \times \D
\]
which is bijective on objects. It sends a $1$-cell $f \Box Y$ to $f
\times 1_Y$, a $2$-cell $\alpha \Box Y$ to $\alpha \times 1_{1_Y}$, and
similarly for $X \Box g$ or $X \Box \beta$. The composites $(f \Box
Y') (X \Box g)$ and $({X'} \Box g) (f \Box Y)$ are distinct in $\C
\Box \D$, but both are mapped by $j$ to $f \times g$. This
observation is a basis for \cref{motivation:box-vs-times} below.
\dqed
\end{enumerate}
\end{explanation}
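For example, the composition and unity conditions above imply that
every $1$-cell in $\C \Box \D$ can be written as an alternating
composite of basic $1$-cells, such as
\[
X \Box Y \fto{f \Box Y} X' \Box Y \fto{X' \Box g} X' \Box Y'
\fto{f' \Box Y'} X'' \Box Y'.
\]
Two such alternating composites are equal only if the conditions
above force them to be; in particular, $(f \Box Y')(X \Box g)$ and
$({X'} \Box g)(f \Box Y)$ are distinct $1$-cells of $\C \Box \D$.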
\begin{motivation}\label{motivation:box-vs-times}
We now turn to the definition of the Gray tensor product. This will
have the same $0$-cells and $1$-cells as the box product, but additional
$2$-cells. Recall that the two composites in the square below are unrelated
in $\C \Box \D$, and their images in $\C \times \D$ are equal.
\begin{equation*}
\begin{tikzpicture}[x=40mm,y=20mm,vcenter]
\draw[0cell]
(0,0) node (xy) {X \Box Y}
(xy) ++(1,0) node (x'y) {X' \Box Y}
(xy) ++(0,-1) node (xy') {X \Box Y'}
(xy') ++(1,0) node (x'y') {X' \Box Y'}
;
\draw[1cell]
(xy) edge node {f \Box Y} (x'y)
(xy') edge node {f \Box {Y'}} (x'y')
(xy) edge['] node {{X} \Box g} (xy')
(x'y) edge node {{X'} \Box g} (x'y')
;
\end{tikzpicture}
\end{equation*}
In the Gray tensor product, the corresponding square is filled by a generally
nontrivial isomorphism, $\Si_{f,g}$. In this way the Gray tensor
product is an intermediary between $\C \Box \D$ and $\C \times
\D$.
\end{motivation}
We define the objects, $1$-, and $2$-cells of the Gray tensor product now,
and prove that they form a $2$-category in
\cref{proposition:gray-tensor-2-cat} below.
\begin{definition}\label{definition:gray-tensor}
For two $2$-categories $\C$ and $\D$, the \emph{Gray tensor product}
\index{Gray tensor product}\index{tensor product!Gray}\index{product!Gray tensor}\index{2-category!Gray tensor product}
$\C \otimes \D$ is a $2$-category defined as follows.
The objects and $1$-cells of $\C \otimes \D$ are the same as those of
$\C \Box \D$, now denoted with $\otimes$ instead of $\Box$.
\index{basic 1-cell!Gray tensor product}\index{Gray tensor product!basic 1-cell}\index{1-cell!basic, in Gray tensor product}
The
$2$-cells are defined in two stages, as follows. The
\emph{proto-$2$-cells}\index{proto-2-cell}\index{Gray tensor product!proto-2-cell}\index{2-cell!proto, in Gray tensor product}
are generated under horizontal composition by
the basic $2$-cells of $\C \Box \D$, now denoted $\alpha \otimes Y$ and
$X \otimes \beta$,
\index{basic 2-cell!Gray tensor product}\index{Gray tensor product!basic 2-cell}\index{2-cell!basic, in Gray tensor product}
together with a third type: $2$-cells
\[
\Si_{f,g}\cn (f \otimes {Y'})({X} \otimes g) \to ({X'} \otimes
g) (f \otimes {Y}) \andspace
\]
\[
\Si^\inv_{f,g}\cn ({X'} \otimes
g) (f \otimes {Y}) \to (f \otimes {Y'})({X} \otimes g) \phantom{\andspace}
\]
\index{Gray structure 2-cell}
for each pair of nonidentity $1$-cells $f \in \C(X,X')$ and $g \in
\D(Y,Y')$. If either $f$ or $g$ is an identity $1$-cell, then
$\Si_{f,g}$ is the respective identity $2$-cell.
This horizontal composition is required to be associative and
unital, satisfying the relations induced by $\Box$, i.e.,
\begin{itemize}
\item $(\alpha' \otimes Y) * (\alpha \otimes Y) = (\alpha' * \alpha) \otimes Y$
and
\item $(X \otimes \beta') * (X \otimes \beta) = X \otimes (\beta' * \beta)$
\end{itemize}
for horizontally composable $2$-cells $\alpha$ and $\alpha'$ in $\C$,
respectively $\beta$ and $\beta'$ in $\D$.
The $2$-cells of $\C \otimes \D$ are equivalence classes of vertical
composites of proto-$2$-cells, where the equivalence relation is the
smallest one which includes the following.
\begin{enumerate}
\item\label{sigma:inv} The vertical composites $\Si_{f,g}
\Si^\inv_{f,g}$ and $\Si^\inv_{f,g} \Si_{f,g}$ are equivalent to
the respective identities.
\item\label{sigma:basic-vert} The basic $2$-cells from $\C \Box \D$
satisfy the vertical composition relations induced by $\Box$,
namely
\[
(\alpha_2 \otimes Y) (\alpha_1 \otimes Y) \sim (\alpha_2 \alpha_1) \otimes Y
\andspace
(X \otimes \beta_2) (X \otimes \beta_1) \sim (X \otimes \beta_2 \beta_1).
\]
\item\label{sigma:f'f} For $f \in \C(X,X')$, $f'\in \C(X',X'')$, and
$g\in \D(Y,Y')$ we have
\[
\big(\Si_{f',g} * (1_f \otimes Y)\big) \,
\big((1_{f'} \otimes {Y'}) * \Si_{f,g}\big) \sim \Si_{f'f,g}
\]
\item\label{sigma:g'g} For $g\in \D(Y,Y')$, $g'\in \D(Y',Y'')$, and
$f\in \C(X,X')$ we have
\[
\big(({X'} \otimes 1_{g'}) * \Si_{f,g} \big)\,
\big(\Si_{f,g'} * (X \otimes 1_g)\big) \sim \Si_{f,g'g}
\]
\item\label{sigma:horiz} For $f$, $f'$, $g$, $g'$ as above we have
\[
\big( ({X''} \otimes 1_{g'}) * (1_{f'} \otimes {Y'}) * \Si_{f,g} \big)\,
\big( \Si_{f',g'} * (1_f \otimes {Y'}) * ({X} \otimes 1_g)
\big) \sim
\]
\[
\big( \Si_{f',g'} * ({X'} \otimes 1_{g}) * (1_f \otimes {Y}) \big)\,
\big( (1_{f'} \otimes {Y''}) * ({X'} \otimes 1_{g'}) *
\Si_{f,g} \big)
\]
\item\label{sigma:nat} For $\alpha \in \C(X,X')(f_1,f_2)$ and $\beta \in
\D(Y,Y')(g_1,g_2)$ we have
\[
\big( ({X'} \otimes \beta) * (\alpha \otimes {Y}) \big) \, \Sigma_{f_1,g_1}
\sim
\Si_{f_2,g_2} \, \big( (\alpha \otimes {Y'}) * ({X} \otimes \beta) \big)
\]
\item\label{sigma:closure} The equivalence relation is closed under
vertical composition.
\item\label{sigma:exchange} For any horizontally composable
proto-$2$-cells $\la$ and $\la'$ we have
\[
(1 * \la)(\la' * 1) \sim (\la' * \la) \sim (\la' * 1)(1 * \la).
\]
\end{enumerate}
Each $2$-cell $\La$ is represented by a vertical composite of
proto-$2$-cells, $\la_1 \cdots \la_n$, and thus the vertical composition of $2$-cells is defined by
concatenation. The horizontal composition of $2$-cells is defined by
\begin{equation}\label{eq:gray-tensor-proto-2}
(\la'_1 \la'_2) * (\la_1 \la_2) = (\la'_1 * \la_1) (\la'_2 * \la_2)
\end{equation}
for appropriately composable $2$-cells $\la_1$, $\la_2$, $\la'_1$,
and $\la'_2$. This is extended to arbitrary horizontal composites
\[
(\la'_1 \cdots \la'_n) * (\la_1 \cdots \la_m)
\]
where $\la_i$ are proto-$2$-cells in $(\C \otimes \D)(X\otimes Y, X' \otimes
Y')$ and $\la'_i$ are $2$-cells in $(\C \otimes \D)(X' \otimes Y', X''
\otimes Y'')$, by inserting appropriate identity $2$-cells so that $m
= n$, and then by induction on \eqref{eq:gray-tensor-proto-2}.
Condition \eqref{sigma:exchange} implies that this definition is
independent of how identities are inserted and satisfies the middle
four exchange property \eqref{middle-four}. Preservation of units
\eqref{bicat-c-id} follows from the corresponding properties of
$\Box$, together with conditions \eqref{sigma:f'f} and
\eqref{sigma:g'g} with $f'$ and $g'$ being identities.
This finishes the definition of the Gray tensor product $\C \otimes
\D$. We prove that it is a $2$-category in
\cref{proposition:gray-tensor-2-cat} below.
\end{definition}
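As a consistency check, suppose $f' = 1_{X'}$ in condition
\eqref{sigma:f'f}. Since $\Si_{1_{X'},g}$ is an identity $2$-cell by
convention, the left-hand side reduces to
\[
\big(1 * (1_f \otimes Y)\big) \, \big((1_{1_{X'}} \otimes {Y'}) *
\Si_{f,g}\big) \sim \Si_{f,g},
\]
in agreement with the right-hand side $\Si_{1_{X'}f,g} = \Si_{f,g}$.
Condition \eqref{sigma:g'g} behaves similarly when $g'$ is an
identity $1$-cell.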
\begin{explanation}[Properties of $\Sigma_{f,g}$]\label{explanation:gray-tensor}\
\index{Gray structure 2-cell!properties}
Because $\Sigma_{f,g}$ is an identity $2$-cell whenever $f$ or $g$ is
an identity $1$-cell, conditions \eqref{sigma:f'f},
\eqref{sigma:g'g}, and \eqref{sigma:horiz} are equivalent to the
requirement that all possible composites formed from the following
pasting diagram are equal to $\Si_{f'f,g'g}$ for all $f$, $f'$, $g$,
and $g'$.
\begin{equation}\label{sigma:f'fg'g-pasting}
\begin{tikzpicture}[x=40mm,y=20mm,vcenter]
\draw[0cell]
(0,0) node (xy) {X \otimes Y}
(xy) ++(1,0) node (x'y) {X' \otimes Y}
(x'y) ++(1,0) node (x''y) {X'' \otimes Y}
(xy) ++(0,-1) node (xy') {X \otimes Y'}
(xy') ++(1,0) node (x'y') {X' \otimes Y'}
(x'y') ++(1,0) node (x''y') {X'' \otimes Y'}
(xy') ++(0,-1) node (xy'') {X \otimes Y''}
(xy'') ++(1,0) node (x'y'') {X' \otimes Y''}
(x'y'') ++(1,0) node (x''y'') {X'' \otimes Y''}
;
\draw[1cell]
(xy) edge node {f \otimes Y} (x'y)
(x'y) edge node {f' \otimes Y} (x''y)
(xy') edge node {f \otimes {Y'}} (x'y')
(x'y') edge node {f' \otimes {Y'}} (x''y')
(xy'') edge node {f \otimes {Y''}} (x'y'')
(x'y'') edge node {f' \otimes {Y''}} (x''y'')
(xy) edge['] node {{X} \otimes g} (xy')
(xy') edge['] node {{X} \otimes g'} (xy'')
(x'y) edge node {{X'} \otimes g} (x'y')
(x'y') edge node {{X'} \otimes g'} (x'y'')
(x''y) edge node {{X''} \otimes g} (x''y')
(x''y') edge node {{X''} \otimes g'} (x''y'')
;
\draw[2cell]
(xy) ++(-40:.6) node[rotate=45, 2label={below,\Sigma_{f,g}}] {\Rightarrow}
(x'y) ++(-40:.6) node[rotate=45, 2label={below,\Sigma_{f',g}}] {\Rightarrow}
(xy') ++(-40:.6) node[rotate=45, 2label={below,\Sigma_{f,g'}}] {\Rightarrow}
(x'y') ++(-40:.6) node[rotate=45, 2label={below,\Sigma_{f',g'}}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
In particular, condition \eqref{sigma:horiz} means that the two ways
of forming a vertical composite from $\Si_{f',g'} * \Si_{f,g}$ are
equal. Furthermore, condition \eqref{sigma:nat} means that the composites
of the following pasting diagrams are equal in $\C \otimes \D$.
\begin{equation}\label{sigma:nat-pasting}
\begin{tikzpicture}[x=30mm,y=28mm]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (xy) {X \otimes Y}
(xy) ++(1,0) node (x'y) {X' \otimes Y}
(xy) ++(0,-1) node (xy') {X \otimes Y'}
(xy') ++(1,0) node (x'y') {X' \otimes Y'}
;
\draw[1cell]
(xy) edge[bend left=40] node {f_2 \otimes Y} (x'y)
(xy') edge[bend right=40,'] node {f_1 \otimes {Y'}} (x'y')
(xy) edge[bend right=40,'] node[pos=.33] {{X} \otimes g_1} (xy')
(x'y) edge[bend left=40] node[pos=.66] {{X'} \otimes g_2} (x'y')
;
}
\begin{scope}
\boundary
\draw[1cell]
(xy) edge[bend right=40,'] node[pos=.4] {f_1 \otimes Y} (x'y)
(x'y) edge[bend right=40,'] node {{X'} \otimes g_1} (x'y')
;
\draw[2cell]
(xy') ++(45:.45) node[rotate=45, 2label={below,\Si_{f_1,g_1}}] {\Rightarrow}
node[between=xy and x'y at .5, shift={(-.1,0)}, rotate=90, 2label={below,\alpha \otimes Y}] {\Rightarrow}
node[between=x'y and x'y' at .5, shift={(0,-.125)}, rotate=0, 2label={above,X' \otimes \beta}] {\Rightarrow}
;
\end{scope}
\draw (1.52,-.5) node[font=\Large] {=};
\begin{scope}[shift={(2.05,0)}]
\boundary
\draw[1cell]
(xy) edge[bend left=40] node {X \otimes g_2} (xy')
(xy') edge[bend left=40] node[pos=.6] {f_2 \otimes Y'} (x'y')
;
\draw[2cell]
(x'y) ++(225:.4) node[rotate=45, 2label={above,\Si_{f_2,g_2}}] {\Rightarrow}
node[between=xy' and x'y' at .5, shift={(-.1,0)}, rotate=90, 2label={below,\alpha \otimes Y'}] {\Rightarrow}
node[between=xy and xy' at .5, shift={(0,.125)}, rotate=0, 2label={below,X \otimes \beta}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}\\[-2pc]\dqed
\end{explanation}
\begin{proposition}\label{proposition:gray-tensor-2-cat}
For $2$-categories $\C$ and $\D$, the Gray tensor product $\C \otimes
\D$ is a $2$-category.\index{Gray tensor product!yields a 2-category}
\end{proposition}
\begin{proof}
The data of $\C \otimes \D$ is described above in
\cref{definition:gray-tensor}. We verify the axioms of
\cref{2category-explicit} as follows, noting that identity $1$-cells
and $2$-cells are basic (i.e., are cells from $\C \Box \D$), and also
$2$-cells of the form $\Si_{f,1}$ and $\Si_{1,g}$ are identities.
\begin{enumerate}[label=\textit{(\roman*)}]
\item Vertical composition of $2$-cells is associative by definition
and unital by condition \eqref{sigma:basic-vert} for basic $2$-cells
and conditions \eqref{sigma:f'f} and \eqref{sigma:g'g} for
$\Sigma_{f,g}$.
\item Horizontal composition of $2$-cells preserves identities and
satisfies middle four exchange, as noted at the end of
\cref{definition:gray-tensor}.
\item Horizontal composition of $1$-cells is associative because it is
so in $\C \Box \D$.
\item Horizontal composition of $2$-cells is associative by definition.
\item Horizontal composition of $1$-cells is unital because it is so
in $\C \Box \D$.
\item Horizontal composition of $2$-cells is unital with respect to
the identity $2$-cells of identity $1$-cells because it is so in $\C
\Box \D$.\qedhere
\end{enumerate}
\end{proof}
\begin{explanation}[Compatibility between $\Box$ and $\otimes$]
\index{Gray tensor product!compatibility with box product}\index{box product!compatibility with Gray tensor product}
The conditions for composition of basic $1$- and $2$-cells from $\C \Box
\D$ ensure that there is a $2$-functor
\[
\C \Box \D \to \C \otimes \D,
\]
and this is an isomorphism on underlying $1$-categories. For objects
$X \in \C$ and $Y \in \D$ we have inclusion $2$-functors
\begin{align*}
\C & \fto{- \otimes Y} \C \otimes \D\\
\D & \fto{X \otimes -} \C \otimes \D.\dqed
\end{align*}
\end{explanation}
We will prove below that the Gray tensor product is a symmetric
monoidal product on the $1$-category $\IICat$ and adjoint to the
internal hom given by pseudofunctors of $2$-categories. For this, we
will need the following notion, which characterizes the Gray tensor
product.
\begin{definition}\label{definition:cubical-psfun}\index{characterization of!the Gray tensor product}
A pseudofunctor
\[
F\cn \C_1 \times \cdots \times \C_n \to \D
\]
is \emph{cubical}\index{cubical pseudofunctor}\index{pseudofunctor!cubical}
if the following \emph{cubical condition}\index{cubical condition}\index{cubical pseudofunctor!cubical condition}\index{pseudofunctor!cubical condition} holds. Suppose $(f_1 \times \cdots
\times f_n)$ and $(f'_1 \times \cdots \times f'_n)$ form a composable
pair of $1$-cells in $\C_1 \times \cdots \times \C_n$. If, for all
$i > j$, either $f_i$ or $f'_j$ is an identity $1$-cell, then the
lax functoriality constraint
\[
F^2\cn F(f'_1 \times \cdots \times f'_n) \circ F(f_1 \times \cdots \times f_n)
\to F((f'_1f_1) \times \cdots \times (f'_nf_n))
\]
is an identity $2$-cell.
\end{definition}
\begin{explanation}\label{explanation:cubical}\
\begin{itemize}
\item In the case $n=3$, the condition that $F$ be cubical means
that $F^2$ must be the identity for the following cases of
$(f'_1 \times f'_2 \times f'_3)(f_1 \times f_2 \times f_3)$:
\begin{align*}
(f'_1 \times f'_2 \times f'_3) \,& (f_1 \times \,1\, \times \,1\,)\\
(\,1\; \times f'_2 \times f'_3) \,& (f_1 \times f_2 \times \,1\,)\\
(\,1\; \times \,1\; \times f'_3) \,& (f_1 \times f_2 \times f_3).
\end{align*}
\item The lax left and right unity properties \eqref{f0-bicat} imply
that $F$ is strictly unitary (i.e., $F^0$ is the identity)\index{cubical pseudofunctor!is strictly unitary}. We
ask the reader to verify this in
\cref{exercise:cubical-preserves-id}.
\item In the case $n=1$, a cubical pseudofunctor is a $2$-functor.
\item For objects $X_1 \in \C_1$ and $X_2 \in \C_2$ in the case $n =
2$, the cubical condition implies that the pseudofunctor
composites of $F$ with the constant $2$-functors $\Delta_{X_1}$ and
$\Delta_{X_2}$, respectively, are $2$-functors
\[
\C_1 \fto{1_{\C_1} \times \De_{X_2}} \C_1 \times \C_2 \fto{F} \D
\]
\[
\C_2 \fto{\De_{X_1} \times 1_{\C_2}} \C_1 \times \C_2 \fto{F} \D.
\]
\item For a composable pair of $1$-cells
\begin{equation}\label{eq:fxg-f'xg'}
(f \times g)\cn (X \times Y) \to (X' \times Y') \andspace
(f' \times g')\cn (X' \times Y') \to (X'' \times Y'')
\end{equation}
in the case $n = 2$, the cubical condition implies
that we have
\[
F(f \times g) = F(1_{X'} \times g)F(f \times 1_{Y}).
\]
Moreover, the lax functoriality constraint
\[
F(1_{X''} \times g') \,
F(f' \times 1_{Y'}) \,
F(1_{X'} \times g) \,
F(f \times 1_{Y}) \to
F((f'f) \times (g'g))
\]
is given by
\begin{equation}\label{eq:f2-cubical}
F^2_{(f' \times g'),(f \times g)} =
1_{F(1_{X''} \times g')} * F^2_{(f' \times 1_{Y'}), (1_{X'} \times
g)} * 1_{F(f \times 1_Y)}.\dqed
\end{equation}
\end{itemize}
\end{explanation}
\begin{definition}\label{definition:univ-cubical}
The \emph{universal cubical pseudofunctor}\index{universal cubical pseudofunctor}\index{cubical pseudofunctor!universal}\index{universal!cubical pseudofunctor}
\[
c\cn \C \times \D \to \C \otimes \D
\]
is defined as follows.
\begin{itemize}
\item On objects $X \times Y$ we define
\[
c(X \times Y) = X \otimes Y.
\]
\item On $1$-cells $f \times g\cn (X \times Y) \to (X' \times Y')$, we
define
\[
c(f \times g) = (X' \otimes g) \circ (f \otimes Y).
\]
\item On $2$-cells $\alpha \times \beta\cn (f_1 \times g_1) \to (f_2 \times
g_2)$ in $(\C \times \D)(X\times Y, X' \times Y')$, we define
\[
c(\alpha \times \beta) = (X' \otimes \beta) * (\alpha \otimes Y).
\]
\end{itemize}
The lax unity constraint $c^0$ is defined to be the identity. For a
composable pair of $1$-cells $(f \times g)$ and $(f' \times g')$
as in \eqref{eq:fxg-f'xg'}, the lax functoriality constraint
\[
c^2\cn (X'' \otimes g')(f' \otimes Y')(X' \otimes g)(f \otimes Y)
\to (X'' \otimes (g'g))((f'f) \otimes Y)
\]
is given by $1 * \Sigma_{f',g} * 1$.
Note that $c^2$ satisfies the condition to be cubical because
$\Sigma_{1,g}$ and $\Sigma_{f',1}$ are both identities.
This finishes the definition of $c$. We show that $c^2$ is natural
and verify the axioms of a lax functor (\cref{def:lax-functors}) in
\cref{proposition:univ-cubical} below.
\end{definition}
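Explicitly, the codomain of $c^2 = 1 * \Sigma_{f',g} * 1$ is
\[
(X'' \otimes g')(X'' \otimes g)(f' \otimes Y)(f \otimes Y)
= \big(X'' \otimes (g'g)\big)\big((f'f) \otimes Y\big),
\]
where the equality uses the composition relations for basic
$1$-cells, so $c^2$ has the codomain displayed above.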
\begin{proposition}\label{proposition:univ-cubical}
The universal $c$ constructed in \cref{definition:univ-cubical} is a cubical
pseudofunctor.
\end{proposition}
\begin{proof}
Condition \eqref{sigma:nat} of \cref{definition:gray-tensor} implies
that $\Sigma_{f',g}$ is natural with respect to $2$-cells (see
\eqref{sigma:nat-pasting}), and hence $c^2 = 1 * \Sigma_{f',g} * 1$
is a natural transformation. The lax associativity axiom
\eqref{f2-bicat} follows from conditions \eqref{sigma:f'f} and
\eqref{sigma:g'g} of \cref{definition:gray-tensor} (see
\eqref{sigma:f'fg'g-pasting}). The lax left and right unity axioms
are trivial because all of the $2$-cells involved are identities.
\end{proof}
We will make use of the following lemma, which follows because
$2$-functors preserve composition and identities strictly. It is a
special case of \cref{exercise:cubical-composite}, which we leave to
the reader.
\begin{lemma}\label{lemma:cubical-2-composite}
\index{cubical pseudofunctor!composition with 2-functors}
Suppose that $F\cn \C_1 \times \C_2 \to \C$ is a cubical
pseudofunctor between $2$-categories and suppose that the following
are $2$-functors between $2$-categories:
\begin{align*}
G & \cn \C \to \D\\
G_1 & \cn \C'_1 \to \C_1\\
G_2 & \cn \C'_2 \to \C_2.
\end{align*}
Then the following composite pseudofunctors are cubical:
\[
\C_1 \times \C_2 \fto{F} \C \fto{G} \D
\]
\[
\C'_1 \times \C'_2 \fto{G_1 \times G_2} \C_1 \times \C_2 \fto{F} \C.
\]
\end{lemma}
\begin{notation}\label{definition:2catcub}
For small $2$-categories $\C_1$, $\ldots$, $\C_n$, and $\D$, the collection of
cubical pseudofunctors
\[
F\cn \C_1 \times \cdots \times \C_n \to \D
\]
is a subset of $\Bicatps(\C_1 \times \cdots \times \C_n, \D)$,
denoted $\IICatcub(\C_1,\ldots ,\C_n; \D)$.
\end{notation}
For the remainder of this section we will assume, unless otherwise
stated, that our $2$-categories are small.
\begin{theorem}\label{theorem:cub-gray-adj}
For $2$-categories $\B$, $\C$, and $\D$, composition with the
universal cubical pseudofunctor $c$ induces a bijection of sets
\[
\IICatcub(\C,\D;\B) \iso \IICat(\C \otimes \D, \B)
\]
which is natural with respect to $2$-functors $\C' \to \C$, $\D' \to
\D$, and $\B \to \B'$.
\index{cubical pseudofunctor!classified by Gray tensor product}
\index{Gray tensor product!classifies cubical pseudofunctors}
\end{theorem}
\begin{proof}
By \cref{lemma:cubical-2-composite}, the composite
\[
\C \times \D \fto{c} \C \otimes \D \fto{G} \B
\]
is cubical for any $2$-functor $G$. The naturality statement follows
from \cref{lemma:cubical-2-composite}, either by direct inspection
or by applying the computations in \cref{sec:representables}. We
leave this to the reader in \cref{exercise:naturality-cub-gray}.
For any cubical pseudofunctor $F\cn \C \times \D \to \B$, define a
$2$-functor $\ol{F}\cn \C \otimes \D \to \B$ as follows.
\begin{itemize}
\item On objects $X \otimes Y$ we define
\[
\ol{F}(X \otimes Y) = F(X \times Y).
\]
\item On $1$-cells $f \otimes Y\cn (X \otimes Y) \to (X' \otimes Y)$
and $X \otimes g\cn (X \otimes Y) \to (X \otimes Y')$ we define
\begin{align*}
\ol{F}(f \otimes Y) & = F(f \times 1_Y), \andspace\\
\ol{F}(X \otimes g) & = F(1_X \times g).
\end{align*}
\item On $2$-cells $\alpha \otimes Y\cn (f_1 \otimes Y) \to (f_2 \otimes
Y)$ in $(\C \otimes \D)(X \otimes Y, X' \otimes Y)$, and $X
\otimes \beta \cn (X \otimes g_1) \to (X \otimes g_2)$ in $(\C \otimes
\D)(X\otimes Y, X \otimes Y')$, we define
\begin{align*}
\ol{F}(\alpha \otimes Y) & = F(\alpha \times 1_{1_Y}), \andspace\\
\ol{F}(X \otimes \beta) & = F(1_{1_X} \times \beta).
\end{align*}
\item On $2$-cells $\Sigma_{f,g}$ we define
\[
\ol{F}\Sigma_{f,g} = F^2_{f \times 1, 1 \times g}.
\]
\end{itemize}
This defines $\ol{F}$ on generating $1$- and $2$-cells. We extend its
definition by requiring that $\ol{F}$ preserve composition strictly, thus
forming a $2$-functor. The assumption that $F$ is cubical, together
with the axioms for composition in $\C \otimes \D$, ensures that
$\ol{F}$ is well-defined.
By construction, both $F$ and $\ol{F}c$ take the same values on $0$-,
$1$-, and $2$-cells of $\C \times \D$. Since $\ol{F}$ is a $2$-functor,
the lax functoriality constraint of the composite (see
\eqref{lax-functors-comp-two}) is given by $\ol{F}(c^2) = \ol{F}(1 *
\Sigma_{f',g} * 1) = F^2$, by \eqref{eq:f2-cubical}. Therefore
we have $\ol{F}c = F$. Likewise, for any $2$-functor $G\cn \C \otimes
\D \to \B$, we have
\[
\ol{(Gc)}\Sigma_{f,g} = (Gc)^2_{f \times 1, 1 \times g} = G\Sigma_{f,g}
\]
and hence $\ol{(Gc)} = G$.
\end{proof}
\begin{proposition}\label{proposition:gray-tensor-functorial}
\index{Gray tensor product!functorial}
The Gray tensor product induces a functor
\[
\otimes\cn \IICat \times \IICat \to \IICat.
\]
\end{proposition}
\begin{proof}
We have seen in \cref{proposition:gray-tensor-2-cat} that $\C
\otimes \D$ is a $2$-category whenever $\C$ and $\D$ are
$2$-categories. For $2$-functors
\[
F\cn \C \to \C' \andspace G\cn \D \to \D',
\]
the composite with the universal cubical pseudofunctor
\begin{equation}\label{eq:cFG}
\C \times \D \fto{F \times G} \C' \times \D' \fto{c} \C' \otimes \D'
\end{equation}
is cubical by \cref{lemma:cubical-2-composite}.
We define
\[
F \otimes G\cn \C \otimes \D \to \C' \otimes \D'
\]
as the unique $2$-functor corresponding to \eqref{eq:cFG} via
\cref{theorem:cub-gray-adj}. Preservation of composition and
identities follows from uniqueness of the correspondence in
\cref{theorem:cub-gray-adj}.
\end{proof}
We are now ready to prove that the Gray tensor product is a monoidal
product on $\IICat$. To distinguish between $(\IICat,\times)$ and
$(\IICat,\otimes)$, we let $\Gray$ denote the latter.
\begin{theorem}\label{theorem:Gray-is-monoidal-category}
\index{Gray tensor product!monoidal product}
\index{monoidal category!2-categories and Gray tensor product}
There is a monoidal category
\[
\Gray = (\IICat, \otimes, \boldone, a, \ell, r)
\]
whose underlying category is $\IICat$, the $1$-category of
$2$-categories and $2$-functors, and whose monoidal product is the Gray
tensor product, $\otimes$. The unit object is the terminal
$2$-category, $\boldone$. The associator and unitors are induced by
those of the Cartesian product.
\end{theorem}
\begin{proof}
We have shown that $\otimes$ is functorial in
\cref{proposition:gray-tensor-functorial}. The two unitors are
induced by the unitors of the Cartesian product, via
\cref{theorem:cub-gray-adj}.
The component of the associator at $(\C_1, \C_2, \C_3) \in \IICat^{3}$ is a
$2$-functor\index{Gray tensor product!associator}
\[
a_{\C_1,\C_2,\C_3}\cn (\C_1 \otimes \C_2) \otimes \C_3 \to \C_1 \otimes (\C_2 \otimes \C_3)
\]
defined as follows. First, the associator for the Cartesian product
gives the values of $a_{\C_1,\C_2,\C_3}$ on objects, generating
$1$-cells, and generating $2$-cells of the form $(\al_1 \otimes X_2)
\otimes X_3$, $(X_1 \otimes \al_2) \otimes X_3$, and $(X_1 \otimes
X_2) \otimes \al_3$, where $X_i$ and $\al_i$ are objects and $2$-cells,
respectively, in $\C_i$. Next, there are three additional
generating $2$-cells in $(\C_1 \otimes \C_2) \otimes \C_3$, each
involving the Gray structure $2$-cells $\Sigma$. Their images under
the associator are uniquely determined by the images of their
sources and targets, and we list them here for $1$-cells $f_i \in
\C_i(X_i,Y_i)$.
\begin{align*}
\Sigma_{f_1,f_2} \otimes X_3 & \mapsto \Sigma_{f_1,(f_2 \otimes X_3)}\\
\Sigma_{(f_1\otimes X_2),f_3} & \mapsto \Sigma_{f_1,(X_2 \otimes f_3)}\\
\Sigma_{(X_1 \otimes f_2), f_3} & \mapsto X_1 \otimes \Sigma_{f_2, f_3}
\end{align*}
The components $a_{\C_1,\C_2,\C_3}$ are bijective in all
dimensions and therefore are isomorphisms of $2$-categories.
Naturality of $a$ with respect to $2$-functors
\[
(\C_1,\C_2,\C_3) \to (\C'_1,\C'_2,\C'_3)
\]
follows from the definition of $\otimes$ on $2$-functors (see
\cref{proposition:gray-tensor-functorial}). For example, the three
types of $2$-cells involving $\Sigma$, as listed above, are preserved
by $2$-functors in each variable, and therefore their values under the
associator are preserved. Checking the unity and pentagon axioms,
\eqref{monoidal-unit} and \eqref{pentagon-axiom}, respectively, is
straightforward and we leave it to the reader in
\cref{exercise:Gray-is-monoidal-category}.
\end{proof}
\begin{notation}\label{notation:psfun-hom}
For bicategories $\C$ and $\D$, we let $\Hom(\C,\D)$
\index{hom object!adjoint to Gray tensor product}
\index{Gray tensor product!closed structure}
denote the full
sub-bicategory of $\Bicatps(\C,\D)$ consisting of strict functors,
strong transformations, and modifications. Recall, by
\cref{subbicat-pseudofunctor}, that $\Hom(\C,\D)$ is a $2$-category
whenever $\D$ is a $2$-category.
\end{notation}
\begin{definition}\label{definition:hom-eval}
Suppose that $\D$ and $\B$ are $2$-categories. The
\emph{evaluation pseudofunctor}\index{evaluation pseudofunctor}\index{pseudofunctor!evaluation} is a cubical pseudofunctor
\[
\ev\cn \Hom(\D,\B) \times \D \to \B
\]
defined as follows.
\begin{itemize}
\item For a $2$-functor $H\cn \D \to \B$ and an object $Y \in \D$, we
define
\[
\ev(H \times Y) = HY.
\]
\item For a strong transformation $\al\cn H \to H'$ in $\Hom(\D,
\B)$ and a $1$-cell $g\cn Y \to Y'$ in $\D$, we define
\[
\ev(\al \times g) = (H'g) \circ \al_Y.
\]
\item For a modification $\Ga\cn \al_1 \to \al_2$ in $\Hom(\D,\B)(H,H')$
and a $2$-cell $\be\cn g_1 \to g_2$ in $\D(Y,Y')$, we define
\[
\ev(\Ga \times \be) = (H'\be) * \Ga_Y.
\]
\end{itemize}
The lax unity constraint $\ev^0$ is defined to be the identity. For a
composable pair of $1$-cells $(\al \times g)\cn (H \times Y) \to (H'
\times Y')$ and $(\al' \times g') \cn (H' \times Y') \to (H'' \times
Y'')$, the lax functoriality constraint
\[
\ev^2\cn (H''g') \, \al'_{Y'} \, (H'g) \, \al_Y \to (H''(g'g)) \, (\al'\al)_Y
\]
is given by $1 * (\al'_g)^{-1} * 1$. Note that $\ev$ satisfies the
condition to be cubical because $\al'_g$ is an identity $2$-cell if $\al'$ is
an identity strong transformation or $g$ is an identity $1$-cell.
This finishes the definition of $\ev$. We show that $\ev^2$ is natural
and verify the axioms of a lax functor (\cref{def:lax-functors}) in
\cref{proposition:eval-cubical} below.
\end{definition}
\begin{proposition}\label{proposition:eval-cubical}
The evaluation pseudofunctor $\ev$ constructed in \cref{definition:hom-eval} is
cubical.\index{cubical functor!evaluation pseudofunctor}\index{evaluation pseudofunctor!as a cubical functor}
\end{proposition}
\begin{proof}
Naturality of the $2$-cells $\ev^2 = 1 * (\al'_g)^{-1} * 1$ follows from
naturality in $g$ of the component $2$-cells $\al'_g$ (see
\cref{definition:lax-transformation}) and the modification axiom
\eqref{modification-axiom}. The lax associativity axiom
\eqref{f2-bicat} follows from the lax naturality axiom
\eqref{2-cell-transformation-pasting} for $\al'$ and the definition
of composition for lax transformations
\eqref{transf-hcomp-iicell-pasting}.
\end{proof}
\begin{proposition}\label{proposition:cub-hom-adj}
\index{evaluation pseudofunctor!induces correspondence with cubical pseudofunctors}
Suppose that $\B$, $\C$, and $\D$ are $2$-categories. The function
\[
\IICat(\C,\Hom(\D,\B)) \to \IICatcub(\C,\D; \B)
\]
defined by sending a $2$-functor $G\cn \C \to \Hom(\D, \B)$ to the composite
\[
\C \times \D \fto{G \times 1_{\D}} \Hom(\D,\B) \times \D \fto{\ev} \B
\]
is a bijection of sets which is natural with respect to
$2$-functors $\C' \to \C$, $\D' \to \D$, and $\B \to \B'$.
\end{proposition}
\begin{proof}
We showed that the evaluation pseudofunctor $\ev$ is cubical in
\cref{proposition:eval-cubical}. Hence the composite $\ev \circ (G
\times 1_{\D})$ is cubical by \cref{lemma:cubical-2-composite}. The
naturality statement is similar to that of
\cref{theorem:cub-gray-adj}, and we leave it to the reader in
\cref{exercise:naturality-cub-hom}.
Given a cubical pseudofunctor $F \cn \C \times \D \to \B$, we
construct a $2$-functor $\wt{F}\cn \C \to \Hom(\D, \B)$ as follows.
\begin{itemize}
\item For each object $X \in \C$ we use the constant $2$-functor
$\conof{X}$ and let $\wt{F}X$ be the composite
\[
\D \fto{\conof{X} \times 1_{\D}} \C \times \D \fto{F} \B.
\]
Thus $(\wt{F}X)Y = F(X,Y)$ for objects $Y \in \D$.
Recalling \cref{explanation:cubical}, the cubical condition
implies that $\wt{F}X$ is a $2$-functor.
\item For each $1$-cell $f\cn X \to X'$ in $\C$ we use the induced
strong transformation $\conof{f}\cn \conof{X} \to \conof{X'}$
described in \cref{constant-induced-transformation}. Recall the
lax naturality constraint of $\conof{f}$ \eqref{eq:conof-f-laxnat}
is given by unitors, and therefore $\conof{f}$ is a $2$-natural
transformation since $\C$ is a $2$-category. Let $\wt{F}f$ be the
post whiskering $F \whis (\conof{f} \times 1_{1_{\D}})$ described
in \cref{def:whiskering-transformation}. Thus $(\wt{F}f)_Y =
F(f,1_Y)$ for objects $Y \in \D$.
\cref{post-whiskering-transformation} shows that $\wt{F}f$ is a
strong transformation.
\item For each $2$-cell $\al\cn f_1 \to f_2$ in $\C(X,X')$ we use the
induced modification $\conof{\al}\cn \conof{f_1} \to \conof{f_2}$
described in \cref{constant-induced-modification}. For an object
$Y \in \D$, let
\begin{equation}\label{eq:Ftilde-alpha}
(\wt{F}\al)_Y = F((\De_\al \times 1_{1_{1_{\D}}})_{Y}) = F(\al,1_{1_Y}).
\end{equation}
\end{equation}
This is a special case of the composite modification $\Sigma
\otimes \Gamma$ defined in \cref{def:transformation-tensor}
\eqref{notation:sigmatensorgamma} with $\Sigma = 1_{1_F}$ and
$\Gamma = \De_\al \times 1_{1_{1_{\D}}}$. The composite is shown to be a
modification in \cref{tensor-modification}.
\end{itemize}
For composable $f$ and $f'$ in $\C$ we have $\conof{f'} \conof{f} =
\conof{f'f}$ and likewise $\conof{\al'} \conof{\al} =
\conof{\al'\al}$ for composable $\al$ and $\al'$ in $\C$. These
equalities, together with the cubical condition for $F$, imply that
$\wt{F}$ is a $2$-functor.
To verify that we have constructed a bijection, first suppose
\[
F\cn \C \times \D \to \B
\]
is a cubical pseudofunctor and consider the composite
\[
\C \times \D \fto{\wt{F} \times 1_{\D}} \Hom(\D, \B) \times \D
\fto{\ev} \B.
\]
This gives the same assignment on $0$-, $1$-, and $2$-cells as $F$. Since
$\wt{F} \times 1_{\D}$ is a $2$-functor, the lax functoriality
constraint for a pair of composable $1$-cells, $(f \times g)$ and $(f'
\times g')$ as in \eqref{eq:fxg-f'xg'}, is given by the component of
$\ev^2$ at $\wt{F}f' \times g'$ and $\wt{F}f \times g$. That
component, by definition, is
\begin{equation}\label{eq:cub-hom-adj-1}
1 * ((\wt{F}f')_g)^{-1} * 1.
\end{equation}
Unpacking the formula in \eqref{post-whis-iicell}, we have
\[
(\wt{F}f')_g = (F \whis (\De_{f'} \times 1_{1_{\D}}))_g =
(F^2_{f' \times 1_{Y'}, 1_{X'} \times g})^{-1} \; (F 1_{f' \times g})
\; (F^2_{1_{X''} \times g, f' \times 1_Y})
\]
and since $F$ is cubical only the first term is nontrivial. Using
the expression for $F^2$ given in \eqref{eq:f2-cubical}, we see that
\eqref{eq:cub-hom-adj-1} is equal to $F^2$. Therefore $\ev \circ
(\wt{F} \times 1_{\D}) = F$.
Now, for the other direction, suppose we have a $2$-functor
\[
G \cn \C \to \Hom(\D,\B).
\]
For each $X \in \C$, the composite below with the indicated
bracketing
\[
\begin{tikzpicture}[x=30mm,y=20mm]
\draw[0cell]
(0,0) node (d) {\D}
(d) ++(1.2,0) node (cd) {\C \times \D}
(cd) ++(1.5,0) node (hd) {\Hom(\D,\B) \times \D}
(hd) ++(1,0) node (b) {\B}
;
\draw[1cell]
(d) edge node {\conof{X} \times 1_{\D}} (cd)
(cd) edge node {G \times 1_{\D}} (hd)
(hd) edge node {\ev} (b)
(cd) -- ++(0,-.5) -- node{\ev \circ (G \times 1_{\D})} ++(2.5,0) -- (b)
;
\end{tikzpicture}
\]
is a $2$-functor whose assignment on $0$-, $1$-, and $2$-cells is equal to
that of the $2$-functor $GX$, and therefore is equal to $GX$. For each
$1$-cell $f\cn X \to X'$ in $\C$, the whiskering
\[
(\ev \circ (G \times 1_{\D})) \whis (\De_f \times 1_{1_{\D}})
\]
shown below is a strong transformation whose components at $0$- and $1$-cells
of $\D$ are equal to those of $Gf$, and therefore is equal to $Gf$.
\[
\begin{tikzpicture}[x=30mm,y=20mm]
\draw[0cell]
(0,0) node (d) {\D}
(d) ++(1.2,0) node (cd) {\C \times \D}
(cd) ++(1.5,0) node (hd) {\Hom(\D,\B) \times \D}
(hd) ++(1,0) node (b) {\B}
;
\draw[1cell]
(d) edge[bend left] node {\conof{X} \times 1_{\D}} (cd)
(d) edge[bend right,'] node {\conof{X'} \times 1_{\D}} (cd)
(cd) edge node {G \times 1_{\D}} (hd)
(hd) edge node {\ev} (b)
(cd) -- ++(0,-.5) -- node{\ev \circ (G \times 1_{\D})} ++(2.5,0) -- (b)
;
\draw[2cell]
node[between=d and cd at .33, rotate=-90, 2label={above,\conof{f}
\times 1_{1_{\D}}}] {\Rightarrow}
;
\end{tikzpicture}
\]
For each $2$-cell $\al \cn f_1 \to f_2$ in $\C(X,X')$ and an object $Y
\in \D$, we use \eqref{eq:Ftilde-alpha} and find
\[
\big( (\ev \circ (G \times 1_{\D}))\al\big)_Y =
(\ev \circ (G \times 1_{\D}))(\al,1_{1_Y}) =
(G\al)_Y.
\]
Therefore $(\ev \circ (G \times 1_{\D}))\al = G\al$.
\end{proof}
Combining \cref{theorem:cub-gray-adj,proposition:cub-hom-adj}, we have
the following adjunction of functors from $\IICat$, regarded as a
$1$-category, to itself.
\begin{corollary}\label{corollary:gray-hom-adj}
\index{Gray tensor product!adjoint to hom}
\index{closed category!with respect to Gray tensor product}
For each $2$-category $\D$, the functor $- \otimes \D$ is left adjoint
to the functor $\Hom(\D,-)$.
\end{corollary}
Recalling the definition of symmetric monoidal closed category from
\cref{def:sym-mon-closed}, we have the following further structure on
the monoidal category $\Gray$.
\begin{theorem}\label{theorem:Gray-is-symm-mon}
\index{Gray tensor product!symmetric monoidal closed}
\index{symmetric monoidal!category!2-categories with Gray tensor product}
There is a symmetric monoidal closed category
\[
\Gray = (\IICat, \otimes, \Hom, \xi, \boldone, a, \ell, r).
\]
\end{theorem}
\begin{proof}
We proved that $(\IICat, \otimes, \boldone, a, \ell, r)$ is a
monoidal category in \cref{theorem:Gray-is-monoidal-category}. The
$\otimes$-$\Hom$ adjunction is given in
\cref{corollary:gray-hom-adj}, following from the characterization
of the Gray tensor product via cubical pseudofunctors in
\cref{theorem:cub-gray-adj} and the characterization of cubical
pseudofunctors via $\Hom$ in \cref{proposition:cub-hom-adj}.
Now we construct the symmetry natural isomorphism $\xi$.
\index{Gray tensor product!symmetry}
\index{symmetry!for Gray tensor product}
For
$2$-categories $\C$ and $\D$, the component of $\xi$ at $(\C,\D)$ is
the $2$-functor
\[
\C \otimes \D \to \D \otimes \C
\]
defined on generating cells as follows.
\begin{itemize}
\item For objects $X \otimes Y \in \C \otimes \D$,
\[
\xi(X \otimes Y) = Y \otimes X.
\]
\item For $1$-cells $f \otimes Y$ and $X \otimes g$ in $\C \otimes \D$,
\[
\xi(f \otimes Y) = Y \otimes f \andspace \xi(X \otimes g) = g
\otimes X.
\]
\item For $2$-cells $\al \otimes Y$ and $X \otimes \be$ in $\C \otimes
\D$,
\[
\xi(\al \otimes Y) = Y \otimes \al \andspace \xi(X \otimes \be) = \be
\otimes X.
\]
\item For $2$-cells $\Si_{f,g}$ in $\C \otimes \D$,
\[
\xi\,\Si_{f,g} = \Si_{g,f}^{-1}.
\]
\end{itemize}
The $2$-functors $\xi$ are bijective in all dimensions, and therefore
are isomorphisms of $2$-categories. Naturality of $\xi$ with respect
to $2$-functors
\[
(\C,\D) \to (\C',\D')
\]
follows from the definition of $\otimes$ on $2$-functors (see
\cref{proposition:gray-tensor-functorial}).
The symmetry axiom \eqref{monoidal-symmetry-axiom} and unit axiom
\eqref{symmetry-unit} are verified on generating cells by the
definition of $\xi$. The hexagon axiom \eqref{hexagon-axiom}
follows from the definition of $\xi$ and the associator. We leave
further verification of the details to the reader in
\cref{exercise:Gray-is-symm-mon}.
\end{proof}
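As a quick sanity check, which the proof above leaves implicit, one can verify on generating cells that $\xi$ is self-inverse; this already gives the symmetry axiom. The following sketch is our elaboration, in the notation of the proof.

```latex
% On the generating 2-cells \Si_{f,g}, the composite
% \xi_{\D,\C} \circ \xi_{\C,\D} acts by
\[
\xi_{\D,\C}\,\xi_{\C,\D}\,(\Si_{f,g})
  = \xi_{\D,\C}\big(\Si_{g,f}^{-1}\big)
  = \big(\Si_{f,g}^{-1}\big)^{-1}
  = \Si_{f,g}.
\]
% On objects and on the generating cells f \otimes Y, X \otimes g,
% \al \otimes Y, and X \otimes \be, the composite is the identity
% by definition, so \xi_{\D,\C} \circ \xi_{\C,\D} = 1_{\C \otimes \D}.
```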
Recall from \cref{def:monoid} the notion of monoid in a monoidal
category---an object together with multiplication and unit morphisms
satisfying axioms for associativity and unity.
\begin{definition}\label{definition:gray-monoid}
A \emph{Gray monoid}\index{Gray monoid}\index{Gray tensor product!monoid}\index{monoid!Gray} is a monoid $(\C,\gmtimes,\gmunit)$ in $\Gray$.
\end{definition}
\begin{explanation}[Monoids in $\Gray$]\label{explanation:monoids-in-Gray}
Rewriting \cref{def:monoid} in this context, a Gray monoid is a
triple $(\C, \gmtimes, \gmunit)$ consisting of a $2$-category $\C$ and
$2$-functors
\begin{align*}
\gmtimes\cn &\; \C \otimes \C \to \C \\
\gmunit\cn &\; \boldone \to \C
\end{align*}
such that the following diagrams of $2$-categories and $2$-functors commute.
\begin{equation}\label{eq:gray-monoid-diagrams}
\begin{tikzcd}[column sep=large]
(\C\otimes \C) \otimes \C \arrow{dd}[swap]{\gmtimes\otimes \C} \rar{a}
& \C \otimes (\C \otimes \C) \dar{\C\otimes \gmtimes}\\
& \C \otimes \C \dar{\gmtimes}\\
\C \otimes \C \arrow{r}{\gmtimes} & \C
\end{tikzcd}
\qquad
\begin{tikzcd}
\boldone \otimes \C \rar{\gmunit \otimes \C} \arrow{dr}[swap]{\ell}
& \C \otimes \C \dar{\gmtimes}
& \C \otimes \boldone \lar[swap]{\C \otimes \gmunit} \arrow{dl}{r}\\
& \C
&
\end{tikzcd}
\end{equation}
\end{explanation}
\begin{explanation}[Data and axioms for Gray monoids]\label{explanation:data-axioms-for-Gray-monoids}
\index{Gray monoid!data and axioms}
Unpacking the definition of the Gray tensor product, we have an even
more explicit list of data and axioms. A Gray monoid
\[
(\C,\gmtimes,\gmunit)
\]
consists of a
$2$-category $\C$ together with the following data.
\begin{description}
\item[Unit] A distinguished object $\gmunit$.
\item[Objects] For each pair of objects $W$ and $X$, an object $W
\gmtimes X$.
\item[$1$-Cells] For each object $W$ and $1$-cell $f\cn X \to X'$, $1$-cells
\begin{align*}
W \gmtimes f & \cn W \gmtimes X \to W \gmtimes X' \andspace\\
f \gmtimes W & \cn X \gmtimes W \to X' \gmtimes W.
\end{align*}
\item[$2$-Cells] For each object $W$ and $2$-cell $\al\cn f_1 \to f_2$
in $\C(X,X')$, $2$-cells
\begin{align*}
W \gmtimes \al & \cn W \gmtimes f_1 \to W \gmtimes f_2 \andspace \\
\al \gmtimes W & \cn f_1 \gmtimes W \to f_2 \gmtimes W.
\end{align*}
For each $1$-cell $f\cn X \to X'$ and $1$-cell $g\cn Y \to Y'$, a
$2$-cell isomorphism
\[
\Si_{f,g}\cn (f \gmtimes Y')(X \gmtimes g) \fto{\iso} (X' \gmtimes g)(f
\gmtimes Y).
\]
\end{description}
These data are subject to the following axioms.
\begin{enumerate}
\item For each object $W$, the assignments on cells
\begin{align*}
W \gmtimes - & \cn \C \to \C \andspace\\
- \gmtimes W & \cn \C \to \C
\end{align*}
are $2$-functors.
\item The unit $\gmunit$ is strict. That is, for each object $X$, $1$-cell
$f$, and $2$-cell $\al$, we have the following equalities.
\begin{align*}
\gmunit \gmtimes X =\ & X = X \gmtimes \gmunit\\
\gmunit \gmtimes f =\ \, & f \, = f \gmtimes \gmunit\\
\gmunit \gmtimes \al =\ \, & \al \, = \al \gmtimes \gmunit
\end{align*}
\item The product $\gmtimes$ is strictly associative. That is, for
objects $Z$, $W$, and $X$ we have
\[
(Z \gmtimes W) \gmtimes X = Z \gmtimes (W \gmtimes X).
\]
For each $1$-cell $f$ and $2$-cell $\al$ we have the following equalities.
\begin{align*}
(Z \gmtimes W) \gmtimes f & = Z \gmtimes (W \gmtimes f) &
(Z \gmtimes W) \gmtimes \al & = Z \gmtimes (W \gmtimes \al) \\
(Z \gmtimes f) \gmtimes W & = Z \gmtimes (f \gmtimes W) &
(Z \gmtimes \al) \gmtimes W & = Z \gmtimes (\al \gmtimes W) \\
(f \gmtimes Z) \gmtimes W & = f \gmtimes (Z \gmtimes W) &
(\al \gmtimes Z) \gmtimes W & = \al \gmtimes (Z \gmtimes W)
\end{align*}
\item\label{gm:hex} For $1$-cells $f\cn X \to X'$, $g\cn Y \to Y'$, and $h\cn Z \to
Z'$ we have the following equalities.
\[
\Sigma_{f,g} \gmtimes Z = \Sigma_{f,(g \gmtimes Z)}, \quad
\Sigma_{f \gmtimes Y,h} = \Sigma_{f,(Y \gmtimes h)}, \andspace
\Sigma_{X \gmtimes g,h} = X \gmtimes \Sigma_{g,h}.
\]
\item\label{gm:func} For $f \in \C(X,X')$, $f'\in \C(X',X'')$, $g\in \C(Y,Y')$, and
$g' \in \C(Y',Y'')$ we have the following equalities of pasting
diagrams.
\[
\begin{tikzpicture}[x=25mm,y=20mm,vcenter]
\tikzset{1cell/.append style={nodes={scale=.8}}}
\draw[0cell]
(0,0) node (xy) {X \gmtimes Y}
(xy) ++(1,0) node (x'y) {X' \gmtimes Y}
(x'y) ++(1,0) node (x''y) {X'' \gmtimes Y}
(xy) ++(0,-1) node (xy') {X \gmtimes Y'}
(xy') ++(1,0) node (x'y') {X' \gmtimes Y'}
(x'y') ++(1,0) node (x''y') {X'' \gmtimes Y'}
;
\draw[1cell]
(xy) edge node {f \gmtimes Y} (x'y)
(x'y) edge node {f' \gmtimes Y} (x''y)
(xy') edge node {f \gmtimes {Y'}} (x'y')
(x'y') edge node {f' \gmtimes {Y'}} (x''y')
(xy) edge['] node[pos=.33] {{X} \gmtimes g} (xy')
(x'y) edge node {{X'} \gmtimes g} (x'y')
(x''y) edge node[pos=.67] {{X''} \gmtimes g} (x''y')
;
\draw[2cell]
(xy) ++(-40:.6) node[rotate=45, 2label={below,\Sigma_{f,g}}] {\Rightarrow}
(x'y) ++(-40:.6) node[shift={(.11,0)}, rotate=45, 2label={below,\Sigma_{f',g}}] {\Rightarrow}
;
\draw (2.5,-.5) node[font=\large] {=};
\begin{scope}[shift={(3,0)}]
\draw[0cell]
(0,0) node (xy) {X \gmtimes Y}
(xy) ++(1.3,0) node (x''y) {X'' \gmtimes Y}
(xy) ++(0,-1) node (xy') {X \gmtimes Y'}
(xy') ++(1.3,0) node (x''y') {X'' \gmtimes Y'}
;
\draw[1cell]
(xy) edge node {(f'f) \gmtimes Y} (x''y)
(xy') edge node {(f'f) \gmtimes {Y'}} (x''y')
(xy) edge['] node[pos=.33] {{X} \gmtimes g} (xy')
(x''y) edge node[pos=.67] {{X''} \gmtimes g} (x''y')
;
\draw[2cell]
(xy) ++(-40:.6) node[shift={(.15,0)}, rotate=45, 2label={below,\Sigma_{f'f,g}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\]
\[
\begin{tikzpicture}[x=25mm,y=20mm,vcenter]
\tikzset{1cell/.append style={nodes={scale=.8}}}
\draw[0cell]
(0,0) node (xy) {X \gmtimes Y}
(xy) ++(1,0) node (x'y) {X' \gmtimes Y}
(xy) ++(0,-1) node (xy') {X \gmtimes Y'}
(xy') ++(1,0) node (x'y') {X' \gmtimes Y'}
(xy') ++(0,-1) node (xy'') {X \gmtimes Y''}
(xy'') ++(1,0) node (x'y'') {X' \gmtimes Y''}
;
\draw[1cell]
(xy) edge node {f \gmtimes Y} (x'y)
(xy') edge node {f \gmtimes {Y'}} (x'y')
(xy'') edge node {f \gmtimes {Y''}} (x'y'')
(xy) edge['] node {{X} \gmtimes g} (xy')
(xy') edge['] node {{X} \gmtimes g'} (xy'')
(x'y) edge node {{X'} \gmtimes g} (x'y')
(x'y') edge node {{X'} \gmtimes g'} (x'y'')
;
\draw[2cell]
(xy) ++(-40:.6) node[rotate=45, 2label={below,\Sigma_{f,g}}] {\Rightarrow}
(xy') ++(-40:.6) node[rotate=45, 2label={below,\Sigma_{f,g'}}] {\Rightarrow}
;
\draw (1.6,-1) node[font=\large] {=};
\begin{scope}[shift={(2.4,-.35)}]
\draw[0cell]
(0,0) node (xy) {X \gmtimes Y}
(xy) ++(1,0) node (x''y) {X' \gmtimes Y}
(xy) ++(0,-1.3) node (xy') {X \gmtimes Y''}
(xy') ++(1,0) node (x''y') {X' \gmtimes Y''}
;
\draw[1cell]
(xy) edge node {f \gmtimes Y} (x''y)
(xy') edge node {f \gmtimes {Y''}} (x''y')
(xy) edge['] node {{X} \gmtimes (g'g)} (xy')
(x''y) edge node {{X'} \gmtimes (g'g)} (x''y')
;
\draw[2cell]
(xy) ++(-40:.6) node[shift={(0,-.15)}, rotate=45, 2label={below,\Sigma_{f,g'g}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\]
\item\label{gm:nat} For $\al \in \C(X,X')(f_1,f_2)$ and $\be \in
\C(Y,Y')(g_1,g_2)$ we have the following equality of pasting diagrams.
\[
\begin{tikzpicture}[x=30mm,y=28mm]
\tikzset{1cell/.append style={nodes={scale=.8}}}
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (xy) {X \gmtimes Y}
(xy) ++(1,0) node (x'y) {X' \gmtimes Y}
(xy) ++(0,-1) node (xy') {X \gmtimes Y'}
(xy') ++(1,0) node (x'y') {X' \gmtimes Y'}
;
\draw[1cell]
(xy) edge[bend left=40] node {f_2 \gmtimes Y} (x'y)
(xy') edge[bend right=40,'] node {f_1 \gmtimes {Y'}} (x'y')
(xy) edge[bend right=40,'] node[pos=.33] {{X} \gmtimes g_1} (xy')
(x'y) edge[bend left=40] node[pos=.66] {{X'} \gmtimes g_2} (x'y')
;
}
\begin{scope}
\boundary
\draw[1cell]
(xy) edge[bend right=40,'] node[pos=.4] {f_1 \gmtimes Y} (x'y)
(x'y) edge[bend right=40,'] node {{X'} \gmtimes g_1} (x'y')
;
\draw[2cell]
(xy') ++(45:.45) node[rotate=45, 2label={below,\Si_{f_1,g_1}}] {\Rightarrow}
node[between=xy and x'y at .5, shift={(-.1,0)}, rotate=90, 2label={below,\al \gmtimes Y}] {\Rightarrow}
node[between=x'y and x'y' at .5, shift={(0,-.125)}, rotate=0, 2label={above,X' \gmtimes \be}] {\Rightarrow}
;
\end{scope}
\draw (1.52,-.5) node[font=\Large] {=};
\begin{scope}[shift={(2.05,0)}]
\boundary
\draw[1cell]
(xy) edge[bend left=40] node {X \gmtimes g_2} (xy')
(xy') edge[bend left=40] node[pos=.6] {f_2 \gmtimes Y'} (x'y')
;
\draw[2cell]
(x'y) ++(225:.4) node[rotate=45, 2label={above,\Si_{f_2,g_2}}] {\Rightarrow}
node[between=xy' and x'y' at .5, shift={(-.1,0)}, rotate=90, 2label={below,\al \gmtimes Y'}] {\Rightarrow}
node[between=xy and xy' at .5, shift={(0,.125)}, rotate=0, 2label={below,X \gmtimes \be}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\]
\end{enumerate}
Note, in particular, the following consequences of these axioms:
\begin{itemize}
\item Since $W \gmtimes -$ is a $2$-functor, we have $W \gmtimes 1_X =
1_{W \gmtimes X}$ and $W \gmtimes 1_{1_X} = 1_{1_{W \gmtimes X}}$.
Likewise, $- \gmtimes W$ preserves identity $1$- and $2$-cells.
\item Condition \eqref{gm:func} together with the invertibility of
$\Si_{f,g}$ implies that $\Si_{1,g}$ and $\Si_{f,1}$ are identity $2$-cells.\dqed
\end{itemize}
\end{explanation}
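The second consequence above deserves a line of justification. Here is a sketch, our elaboration using only axioms (1) and \eqref{gm:func}:

```latex
% Set f = f' = 1_X in the first diagram of axiom (5). By axiom (1),
% 1_X \gmtimes Y = 1_{X \gmtimes Y} and 1_X \gmtimes Y' = 1_{X \gmtimes Y'},
% so the whiskerings are trivial and the pasting equality collapses to
\[
\Si_{1_X,g} = \Si_{1_X 1_X,\,g} = \Si_{1_X,g} \circ \Si_{1_X,g}.
\]
% Composing both sides with \Si_{1_X,g}^{-1} shows that \Si_{1_X,g} is
% an identity 2-cell. The argument for \Si_{f,1_Y} is dual, using the
% second diagram of axiom (5).
```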
\begin{example}[Braided strict monoidal categories]\label{example:brmoncat-grmon}
\index{Gray monoid!one-object example}
\index{braided monoidal category!strict case as one-object Gray monoid}
Suppose $\M = (\M, \otimes_{\M})$ is a braided monoidal category
whose underlying monoidal category is strict; i.e., the associator
and unitors are identities.
Then the one-object bicategory $\Si \M$ explained in
\cref{ex:moncat-bicat} is a $2$-category. The unique object of $\Si
\M$ is denoted $*$, and recall that $1_*$ is defined to be the
monoidal unit, $\monunit \in \M$.
The braiding $\xi$ makes $\Si \M$ a Gray
monoid as follows.
For objects $X$ and $Y$ (i.e., $1$-cells of $\Si \M$) and
morphisms
\[
f\cn X \to X' \andspace g\cn Y \to Y'
\]
(i.e., $2$-cells of $\Si \M$), we define $\gmtimes$ using the
monoidal product of $\M$ (recalling that $\monunit$ is a strict unit).
\begin{align*}
X \gmtimes * & = X \otimes_{\M} \monunit = X&
* \gmtimes Y & = \monunit \otimes_{\M} Y = Y\\
f \gmtimes * & = f \otimes_{\M} 1_{\monunit} = f&
* \gmtimes g & = 1_{\monunit} \otimes_{\M} g = g.
\end{align*}
Thus the unique
object $* \in \Si \M$ is a strict unit for $\gmtimes$. Moreover, we have
\begin{align*}
(X \gmtimes *) \otimes_{\M} (* \gmtimes Y) &
= X \otimes_{\M} Y \andspace \\
(* \gmtimes Y) \otimes_{\M} (X \gmtimes *) &
= Y \otimes_{\M} X.
\end{align*}
Therefore we define the Gray-monoidal structure $2$-cells $\Si_{X,Y} =
\xi_{X,Y}$. We leave it to the reader in
\cref{exercise:brmoncat-grmon} to verify that the axioms for $\xi$
given in \cref{def:braided-monoidal-category}
imply conditions \eqref{gm:hex}, \eqref{gm:func}, and \eqref{gm:nat}
of \cref{explanation:data-axioms-for-Gray-monoids}.
\end{example}
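To make the correspondence with the braid axioms concrete, here is one instance, added for illustration: in $\Si\M$, condition \eqref{gm:nat} unpacks to naturality of the braiding.

```latex
% In \Si\M, 1-cells are objects X, Y of \M and 2-cells are morphisms of \M.
% For f: X -> X' and g: Y -> Y' in \M, the two pastings of condition (6)
% compose (in \M) to the two sides of
\[
\Si_{X',Y'} \circ (f \otimes_{\M} g)
  = (g \otimes_{\M} f) \circ \Si_{X,Y},
\]
% which, with \Si_{X,Y} = \xi_{X,Y}, is precisely the naturality square
% of the braiding \xi of \M.
```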
\section{Double Categories}
\label{sec:double-cat}
In this section and the following we discuss double categories, also
known in the literature as pseudo double categories.
Recall from \cref{example:internal-cat} that a strict double category
is an internal category in the $1$-category $\Cat$, i.e., a monad in
$\Span(\Cat)$.
\begin{explanation}[Strict double category]\index{strict double category!unpacked}
Unpacking the definition, a strict double category $\C$ is a tuple
\[
(\C_0,\C_1, \hcirc, i, s, t)
\]
consisting of the following data.
\begin{itemize}
\item A category $\C_0$.
\item A span $(\C_1,t,s)$ in $\Cat$ as below.
\begin{equation}\label{dc1}
\begin{tikzpicture}[x=20mm,y=20mm,baseline={(t).base}]
\draw[0cell]
(0,0) node (L) {\C_0}
(1,.5) node (M) {\C_1}
(2,0) node (R) {\C_0}
;
\draw[1cell]
(M) edge[swap] node (t) {t} (L)
(M) edge node {s} (R)
;
\end{tikzpicture}
\end{equation}
\item A functor $\hcirc\cn \C_1 \times_{\C_0} \C_1 \to \C_1$
that is a map of spans as below
\begin{equation}\label{dc2}
\begin{tikzpicture}[x=20mm,y=20mm,baseline={(L).base}]
\draw[0cell]
(0,0) node (L) {\C_0}
(2,0) node (R) {\C_0}
(1,.5) node (T) {\C_1 \times_{\C_0} \C_1}
(1,-.5) node (B) {\C_1}
;
\draw[1cell]
(T) edge[swap] node {t p_1} (L)
(T) edge node {s p_2} (R)
(B) edge node {t} (L)
(B) edge[swap] node {s} (R)
(T) edge node {\hcirc} (B)
;
\end{tikzpicture}
\end{equation}
where the top span is the horizontal composite of $(\C_1,t,s)$ with
itself in $\Span(\Cat)$. Thus $\C_1 \times_{\C_0} \C_1$ is the
pullback over $(s,t)$ and $p_1$, respectively $p_2$, denotes projection to
the first, respectively second, component.
\item A functor $i\cn \C_0 \to \C_1$ that is a map of spans as below.
\begin{equation}\label{dc3}
\begin{tikzpicture}[x=20mm,y=20mm,baseline={(L).base}]
\draw[0cell]
(0,0) node (L) {\C_0}
(2,0) node (R) {\C_0}
(1,.5) node (T) {\C_0}
(1,-.5) node (B) {\C_1}
;
\draw[1cell]
(T) edge[swap] node {1} (L)
(T) edge node {1} (R)
(B) edge node {t} (L)
(B) edge[swap] node {s} (R)
(T) edge node {i} (B)
;
\end{tikzpicture}
\end{equation}
\end{itemize}
These data satisfy the monad axioms explained in
\cref{monad-bicat,monad-bicat-interpret}. Thus $\hcirc$ is strictly
associative and unital.
\end{explanation}
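Before weakening the definition, it may help to keep a concrete instance in mind. The following standard example is our addition, not from the text; $\mathcal{A}$ denotes an arbitrary $1$-category.

```latex
% The strict double category of commutative squares in \mathcal{A}:
%   \C_0 = \mathcal{A}: objects and (vertical) morphisms of \mathcal{A};
%   \C_1: objects are morphisms M of \mathcal{A}, and a morphism M -> M'
%         is a pair (f,g) making the square below commute;
%   s and t send M: R -> S to R and S, and i sends R to 1_R;
%   \hcirc is given by composition in \mathcal{A}.
\[
\begin{tikzcd}
R \rar{M} \dar[swap]{f} & S \dar{g}\\
R' \rar[swap]{M'} & S'
\end{tikzcd}
\]
% Strict associativity and unitality of \hcirc follow from those of
% composition in \mathcal{A}.
```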
\begin{explanation}[Terminology for double categories]\label{explanation:dcat-terms}
The following terms are used for strict double categories, and for general double categories to be defined in \cref{definition:psdouble-cat} below.
The category $\C_0$ is called the \emph{category of objects}.\index{category of objects!double category}\index{double category!category of objects}
The
category $\C_1$ is called the \emph{category of arrows}.\index{category of arrows!double category}\index{double category!category of arrows}
The
functor $\hcirc$ is called \emph{horizontal composition},\index{horizontal composition!double category}\index{double category!horizontal composition}
and we
write $M \hcirc N$ for $\hcirc(M,N)$. The functors $i$, $s$, and
$t$ are called, respectively, the \emph{unit}\index{unit!double category}\index{double category!unit}, \emph{source}\index{source!double category}\index{double category!source}, and
\emph{target}\index{target!double category}\index{double category!target} functors. We let $\C$ denote the tuple $(\C_0, \C_1, \hcirc, i, s, t)$.
The objects of $\C_0$ are called \emph{objects}\index{objects!double category}\index{double category!objects} of $\C$. The
morphisms of $\C_0$ are called \emph{vertical morphisms}\index{vertical morphisms!double category}\index{double category!vertical morphisms}\index{double category!morphisms!vertical}\index{morphisms!vertical!double category} of $\C$.
The objects of $\C_1$ are called \emph{horizontal $1$-cells}\index{horizontal 1-cells!double category}\index{double category!horizontal 1-cells}\index{double category!1-cells!horizontal}\index{1-cell!horizontal in a double category} of $\C$
and represented as slashed arrows from, respectively to, their
images under the source, respectively target, functors. For
example, a horizontal $1$-cell $M$ with $sM = R$ and $tM = S$ is
drawn as
\[
\begin{tikzpicture}[x=20mm,y=20mm]
\draw[0cell]
(0,0) node (R) {R}
(1,0) node (S) {S.}
;
\draw[1cell]
(R) edge[slashed] node {M} (S)
;
\end{tikzpicture}
\]
The morphisms of $\C_1$ are called \emph{$2$-cells}\index{2-cell!double category}\index{double category!2-cells} of $\C$.
Since $s$ and $t$ are functors, a morphism $\al\cn M \to
M'$ in $\C_1$ has source and target morphisms in $\C_0$.
If $s\al = f\cn R \to
R'$ and $t\al = g\cn S \to S'$, we display this as follows.
\[
\begin{tikzpicture}[x=20mm,y=15mm]
\draw[0cell]
(0,0) node (R) {R}
(1,0) node (S) {S}
(0,-1) node (R') {R'}
(1,-1) node (S') {S'}
;
\draw[1cell]
(R) edge[slashed] node (M) {M} (S)
(R') edge[slashed,'] node (M') {M'} (S')
(R) edge['] node {f} (R')
(S) edge node {g} (S')
;
\draw[2cell]
node[between=M and M' at .5, rotate=-90, 2label={above,\ \al}] {\Rightarrow}
;
\end{tikzpicture}
\]
However, we caution the reader that horizontal $1$-cells and vertical
morphisms cannot be composed, as
horizontal $1$-cells are objects in $\C_1$, while vertical morphisms are morphisms in $\C_0$.
A $2$-cell $\al$ whose source and target are both
identity morphisms in $\C_0$ is called a \emph{globular $2$-cell}.\index{globular 2-cell!double category}\index{double category!globular 2-cell}\index{2-cell!globular in a double category}
\end{explanation}
\begin{explanation}[Horizontal $2$-category]
A strict double category $\C$ has an associated $2$-category, called
the \emph{horizontal $2$-category}\index{horizontal 2-category}\index{2-category!horizontal} $\cH\C$ that consists of the
following:
\begin{itemize}
\item The objects of $\cH\C$ are the
objects of $\C$.
\item The $1$-cells $R \to S$ in $\cH\C$ are the horizontal $1$-cells $M$
such that $sM = R$ and $tM = S$.
\item The $2$-cells $M \to N$ in $\cH\C$ are the globular $2$-cells $M \to N$
in $\C$.
\item The identity $2$-cells, vertical composition, identity $1$-cells,
and horizontal composition in $\cH\C$ are given by, respectively,
the identity morphisms in $\C_1$, composition in $\C_1$, the unit
$i$, and $\hcirc$. \dqed
\end{itemize}
\end{explanation}
We now turn to a weakening of strict double categories, in which horizontal composition
is weakly unital and associative, but vertical composition remains
strictly unital and associative.
\begin{definition}\label{definition:psdouble-cat}
A \emph{double category}\index{double category} $\D$ is a tuple
\[
(\D_0, \D_1, \hcirc, i, s, t, a, \ell, r)
\]
consisting of the following.
\begin{itemize}
\item The data $(\D_0, \D_1, \hcirc, i, s, t)$ are categories and
functors as in \eqref{dc1}, \eqref{dc2}, and \eqref{dc3} for a
strict double category.
\item The data $(a, \ell, r)$ are natural isomorphisms filling the
following diagrams of categories and functors.
\[
\begin{tikzpicture}[x=40mm,y=20mm]
\draw[0cell]
(0,0) node (a) {\D_1 \times_{\D_0} \D_1 \times_{\D_0} \D_1}
(1,0) node (b) {\D_1 \times_{\D_0} \D_1}
(0,-1) node (c) {\D_1 \times_{\D_0} \D_1}
(1,-1) node (d) {\D_1}
;
\draw[1cell]
(a) edge node {\hcirc \times_{\D_0} 1_{\D_1}} (b)
(a) edge['] node {1_{\D_1} \times_{\D_0} \hcirc} (c)
(b) edge node {\hcirc} (d)
(c) edge['] node {\hcirc} (d)
;
\draw[2cell]
node[between=a and d at .5, rotate=225, 2label={below,a}] {\Rightarrow}
;
\end{tikzpicture}
\]
\[
\begin{tikzpicture}[x=60mm,y=30mm]
\draw[0cell]
(0,0) node (a) {\D_1 \times_{\D_0} \D_1}
(.5,.5) node (b) {\D_1}
(.5,-.5) node (b') {\D_1}
(1,0) node (c) {\D_1 \times_{\D_0} \D_1}
;
\draw[1cell]
(b) edge['] node {(it) \times_{\D_0} 1_{\D_1}} (a)
(a) edge['] node {\hcirc} (b')
(b) edge[] node {1_{\D_1} \times_{\D_0} (is)} (c)
(c) edge node {\hcirc} (b')
(b) edge node (I) {1_{\D_1}} (b')
;
\draw[2cell]
node[between=a and c at .3, rotate=0, 2label={above,\ell}] {\Rightarrow}
node[between=c and a at .3, rotate=180, 2label={below,r}] {\Rightarrow}
;
\end{tikzpicture}
\]
\end{itemize}
The components of $a$, $\ell$, and $r$ satisfy the following axioms.
\begin{description}
\item[Globular Condition]\index{globular condition}\index{double category!globular condition} The components of $a$, $\ell$, and $r$ are
  globular; i.e., their images under $s$ and $t$ are identities in $\D_0$.
\item[Unity Axiom]\index{unity!double category}\index{double category!unity} For each pair $(M,N) \in \D_1 \times_{\D_0} \D_1$
  with $sM = S = tN$, the following middle unity diagram commutes in
  $\D_1$.
\begin{equation}\label{psdcat-unity}
\begin{tikzcd}[column sep=small] (M \hcirc (iS)) \hcirc N
\arrow{rr}{a_{M, iS, N}}
\arrow[shorten >=-4pt]{rd}[swap]{r_M \hcirc 1_N}
&& M \hcirc ((iS) \hcirc N)
\arrow[shorten >=-4pt]{ld}{1_M \hcirc \ell_N}\\ & M \hcirc N
\end{tikzcd}
\end{equation}
\item[Pentagon Axiom]\index{pentagon axiom!double category}\index{double category!pentagon axiom} For each quadruple
\[
  (M, N, P, Q) \in \D_1 \times_{\D_0} \D_1 \times_{\D_0} \D_1
  \times_{\D_0} \D_1,
  \]
  the following pentagon commutes in $\D_1$.
\begin{equation}\label{psdcat-pentagon}
\begin{tikzpicture}[commutative diagrams/every diagram]
\node (P0) at (90:2cm) {$(M\hcirc N) \hcirc (P \hcirc Q)$};
\node (P1) at (90+72:2cm) {$((M \hcirc N) \hcirc P) \hcirc Q$} ;
\node (P2) at (220:1.6cm) {\makebox[3ex][r]{$(M \hcirc (N
\hcirc P)) \hcirc Q$}};
\node (P3) at (-40:1.6cm) {\makebox[3ex][l]{$M \hcirc ((N
\hcirc P) \hcirc Q)$}};
\node (P4) at (90+4*72:2cm) {$M \hcirc (N \hcirc (P
\hcirc \hspace{.5pt} Q))$};
\draw[commutative diagrams/.cd, every arrow, every label]
(P0) edge node {$a_{M,N,P \hcirc Q}$} (P4)
(P1) edge node {$a_{M \hcirc N,P,Q}$} (P0)
(P1) edge node[swap] {$(a_{M,N,P}) \hcirc 1_Q$} (P2)
(P2) edge node {$a_{M,N \hcirc P,Q}$} (P3)
(P3) edge node[swap] {$1_M \hcirc (a_{N,P,Q})$} (P4);
\end{tikzpicture}
\end{equation}
\end{description}
This finishes the definition of a double category. A double
category $\D$ is \emph{small}\index{small!double category}\index{double category!small} if both $\D_0$ and $\D_1$ are small
categories.
\end{definition}
\begin{explanation}
\
\begin{itemize}
\item The term \emph{pseudo double category}\index{double category!pseudo}\index{pseudo!double category} is sometimes used for
what we have called simply ``double category'' here.
\item We extend the terminology of strict double categories described in
\cref{explanation:dcat-terms} to general double categories.
\item If $(a,\ell,r)$ are identity transformations,\index{strict double category!special case of double category} then
  $(\D_0,\D_1,\hcirc,i,s,t)$ satisfies the axioms of a strict double
  category.\dqed
\end{itemize}
\end{explanation}
The unity properties\index{double category!unity properties}
of \cref{sec:bicategory-unity} generalize to
double categories. In particular, the left and right unity
properties of \cref{bicat-left-right-unity,bicat-l-equals-r}
generalize to \cref{psdcat-left-right-unity,psdcat-l-equals-r}.
The proofs are direct generalizations and we leave them to
\cref{exercise:psdcat-left-right-unity,exercise:psdcat-l-equals-r}.
\begin{proposition}\label{psdcat-left-right-unity}
Suppose $M \cn R \sto S$ and $N\cn S \sto T$ are horizontal $1$-cells
in a double category $\D$. Then the following diagrams in
$\D_1$ are commutative.
\[\begin{tikzcd}[column sep=.3cm]
(1_T \hcirc N) \hcirc M \arrow{rr}{a} \arrow{rd}[swap]{\ell_N
\hcirc 1_M} && 1_T \hcirc (N \hcirc M) \arrow{ld}{\ell_{N
\hcirc M}}\\
& N \hcirc M & \end{tikzcd}\qquad
\begin{tikzcd}[column sep=.3cm]
(N \hcirc M) \hcirc 1_R \arrow{rr}{a} \arrow{rd}[swap]{r_{N
\hcirc M}} && N \hcirc (M \hcirc 1_R) \arrow{ld}{1_N \hcirc r_M}\\
& N \hcirc M &
\end{tikzcd}\]
\end{proposition}
\begin{proposition}\label{psdcat-l-equals-r}
For each object $R$ in $\D$ we have $\ell_{iR} = r_{iR}$.
\end{proposition}
\begin{definition}\label{definition:horizontal-bicat}
A double category $\D$ has a \emph{horizontal bicategory}\index{horizontal bicategory}\index{bicategory!horizontal}\index{double category!horizontal bicategory} $\cH\D$
whose objects are those of $\D$, $1$-cells $R \to S$ are those
horizontal $1$-cells $M$ such that $sM = R$ and $tM = S$, and $2$-cells
are globular $2$-cells of $\D$. The associator and unitors of $\cH\D$
are given by those of $\D$, and the axioms are identical.
\end{definition}
\begin{example}[Products]\label{definition:dbl-times}
Given double categories
\[
\D = (\D_0,\D_1,\hcirc, i, s, t, a, \ell, r) \andspace \D' = (\D_0',\D_1',\hcirc', i', s', t', a', \ell', r'),
\]
the \emph{product double category}\index{double category!product}\index{product!double category} is defined via the Cartesian product of categories,
functors, and natural transformations
\[
(\D_0 \times \D_0', \D_1 \times \D_1', \hcirc'' , i \times i', s
\times s', t \times t', a'', \ell'', r'')
\]
where each of $\hcirc''$, $a''$, $\ell''$, and $r''$ is defined by
composing with various isomorphisms of categories of the form
\[
(\A \times_\B \C) \times (\A' \times_{\B'} \C') \iso (\A \times \A')
\times_{\B \times \B'} (\C \times \C').
\]
We leave the reader to verify the axioms of
\cref{definition:psdouble-cat} in \cref{exercise:dbl-times}.
\end{example}
\begin{example}\label{example:psd-terminal}
The empty product is the \emph{terminal double category}\index{terminal!double category}\index{double category!terminal}, denoted
$\boldone$ and consisting of a single object, unique vertical
$1$-morphism, horizontal $1$-cell, and $2$-cell.
\end{example}
Many of the bicategories that arise in practice occur as the horizontal
bicategories of double categories. Here we give two basic examples
generalizing \cref{ex:spans,ex:bimodules}.
\begin{example}[Spans]\label{ex:psd-spans}
\index{double category!of spans}\index{span!double category}
Suppose $\C$ is a category with all pullbacks. We have a double
category of objects and spans in $\C$ defined as follows. Let
$\D_0=\C$. Let $\D_1$ be the category whose objects are spans
\[
\begin{tikzpicture}[x=20mm,y=20mm]
\draw[0cell]
(0,0) node (r) {A}
(1,0) node (x) {X}
(2,0) node (s) {B}
;
\draw[1cell]
(x) edge node {} (r)
(x) edge node {} (s)
;
\end{tikzpicture}
\]
and whose morphisms are commuting diagrams of spans
\[
\begin{tikzpicture}[x=20mm,y=10mm]
\draw[0cell]
(0,0) node (r) {A}
(1,0) node (x) {X}
(2,0) node (s) {B}
(0,-1) node (r') {A'}
(1,-1) node (x') {X'}
(2,-1) node (s') {B'.}
;
\draw[1cell]
(x) edge node {} (r)
(x) edge node {} (s)
(x') edge node {} (r')
(x') edge node {} (s')
(x) edge node {} (x')
(r) edge node {} (r')
(s) edge node {} (s')
;
\end{tikzpicture}
\]
Horizontal composition, unit spans, associators and unitors are
given just as in $\Span(\C)$ (see \cref{ex:spans}), and the double
category axioms follow likewise. The horizontal bicategory of this
double category is
$\Span(\C)$.
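Concretely, as in \cref{ex:spans}, the composite of horizontal $1$-cells $A \leftarrow X \to B$ and $B \leftarrow Y \to C$ is given by a chosen pullback,
\[
A \longleftarrow X \times_B Y \longrightarrow C,
\]
where $X \times_B Y$ denotes the pullback of $X \to B \leftarrow Y$.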
\end{example}
\begin{example}[Bimodules]\label{ex:psd-bimodules}
\index{double category!of bimodules}\index{bimodule!double category}
We have a double category of rings and bimodules defined as follows.
Let $\D_0$ be the category of rings and ring homomorphisms. Let
$\D_1$ be the category whose objects are bimodules ${}_RM_S$ over
any pair of rings $R$ and $S$, and whose morphisms ${}_RM_S \to
{}_{R'}M'_{S'}$ are equivariant bimodule homomorphisms $(f,\theta,g)$
consisting of ring homomorphisms $f\cn R \to R'$ and $g\cn S \to S'$
together with a bimodule homomorphism $\theta\cn g_*M \to f^*M'$.
Horizontal composition is given by the tensor product, just as in
$\Bimod$ (see \cref{ex:bimodules}), and the unit bimodule $iR$ for a
ring $R$ is that ring regarded as a bimodule over itself. The
source, respectively target, functors give the rings which act on
the left, respectively right, of a given bimodule. The associator and
unitors are given by those of $\Bimod$, and the double category
axioms in this case follow from those of $\Bimod$. The horizontal
bicategory of this double category is $\Bimod$.
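Explicitly, as in \cref{ex:bimodules}, the composite of bimodules ${}_RM_S$ and ${}_SN_T$ is the tensor product over the common ring $S$,
\[
M \tensor_S N,
\]
an $(R,T)$-bimodule with $R$ acting on the left factor and $T$ acting on the right factor.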
\end{example}
\begin{definition}\label{definition:double-functor}
Suppose that $\D$ and $\E$ are double categories. A
\emph{lax functor}\index{lax functor!double category}\index{double category!lax functor} $F\cn \D \to \E$ is a tuple
\[
(F_0, F_1, F_1^2, F_1^0)
\]
consisting of the following.
\begin{itemize}
\item Functors $F_0\cn \D_0 \to \E_0$ and $F_1\cn \D_1 \to \E_1$
inducing a map of spans; i.e., $sF_1 = F_0s$ and $tF_1 = F_0t$.
\item Natural transformations $F_1^2$ and $F_1^0$ with globular
components
\[
(F_1^2)_{N,M}\cn F_1M \hcirc F_1N \to F_1(M \hcirc N) \andspace (F_1^0)_R\cn
(iF_0)(R) \to F_1(iR).
\]
\end{itemize}
These data are required to satisfy the axioms \eqref{f2-bicat}
and \eqref{f0-bicat}, so that $F_0$ and $F_1$ induce a lax functor
\[
\cH F \cn \cH\D \to \cH\E
\]
between horizontal bicategories.
This finishes the definition of a lax functor. Moreover:
\begin{itemize}
\item A lax functor $F$ for which the natural transformations $F_1^2$ and
$F_1^0$ are natural isomorphisms, respectively identities, is
called a \emph{pseudofunctor}\index{pseudofunctor!double category}\index{double category!pseudofunctor}, respectively \emph{strict functor}\index{strict!functor!double category}\index{double category!strict functor}.
When clear from context, we will omit subscripts on the functors
$F_0$ and $F_1$.
\item As in \cref{conv:functor-subscript} we let $F^{-0}$ and
$F^{-2}$ denote the inverses of $F^{0}$ and
$F^{2}$, respectively.
\item There is an identity strict functor $\D \to \D$ given by
  $(1_{\D_0},1_{\D_1},1_{\hcirc},1_i)$.\index{identity!strict functor!double category}
\item If $F\cn \D \to \D'$ and $G\cn \D' \to \D''$ are two lax functors
of double categories, the composite $GF$\index{composition!lax functors!double category}\index{double category!lax functor!composite} is defined by
composing functors $G_0F_0$ and $G_1F_1$, and composing the lax
functors of bicategories, $\cH G$ and $\cH F$. The composition of
lax functors between bicategories is strictly associative and
unital (see \cref{thm:cat-of-bicat} and its proof), and therefore
so is the composition of lax functors between double
categories.\defmark
\end{itemize}
\end{definition}
\begin{definition}\label{definition:double-transformation}
Suppose $\D$ and $\E$ are double categories, and that $F$ and
$G$ are lax functors $\D \to \E$. A \emph{transformation}\index{transformation!double category}\index{double category!transformation} $\theta\cn F
\to G$ consists of natural transformations
\[
\theta_0 \cn F_0 \to G_0 \andspace \theta_1\cn F_1 \to G_1
\]
both denoted $\theta$ and subject to the following axioms.
\begin{itemize}
\item For all horizontal $1$-cells $M$ (i.e., objects of $\D_1$) we
  have $s\theta_M = \theta_{sM}$ and $t\theta_M = \theta_{tM}$.
\item For horizontal $1$-cells $M\cn R \sto S$ and $N\cn S \sto T$,
we have
\[
\theta_{N \hcirc M} \ (F_1^2)_{N,M} = (G_1^2)_{N,M} \ (\theta_N \hcirc \theta_M),
\]
an equality of $2$-cells (i.e., morphisms in $\E_1$) from $FN \hcirc
FM$ to $G(N \hcirc M)$. This may be pictured as the following
equality of diagrams.
\[
\begin{tikzpicture}[x=20mm,y=15mm]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (R) {FR}
(1,0) node (S) {FS}
(2,0) node (T) {FT}
(0,-2) node (R') {GR}
(2,-2) node (T') {GT}
;
\draw[1cell]
(R) edge[slashed] node (FM) {FM} (S)
(S) edge[slashed] node (FN) {FN} (T)
(R') edge[slashed,'] node (GNM) {G(N \hcirc M)} (T')
;
}
\draw[font=\Large] (2.5,-1) node {=};
\begin{scope}
\boundary
\draw[0cell]
(0,-1) node (R1) {FR}
(2,-1) node (T1) {FT}
;
\draw[1cell]
(R1) edge[slashed] node (FNM) {F(N \hcirc M)} (T1)
(R) edge['] node {1} (R1)
(T) edge node {1} (T1)
(R1) edge['] node {\theta_R} (R')
(T1) edge node {\theta_T} (T')
;
\draw[2cell]
node[between=S and FNM at .5, rotate=-90, 2label={above,\ F_1^2}] {\Rightarrow}
node[between=FNM and GNM at .5, rotate=-90,
2label={above,\ \theta_{N \hcirc M}}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(3,0)}]
\boundary
\draw[0cell]
(0,-1) node (R2) {GR}
(1,-1) node (S2) {GS}
(2,-1) node (T2) {GT}
;
\draw[1cell]
(R) edge['] node {\theta_R} (R2)
(S) edge node {\theta_S} (S2)
(T) edge node {\theta_T} (T2)
(R2) edge['] node {1} (R')
(T2) edge node {1} (T')
(R2) edge[slashed] node (GM) {GM} (S2)
(S2) edge[slashed] node (GN) {GN} (T2)
;
\draw[2cell]
node[between=FM and GM at .6, rotate=-90, 2label={above,\ \theta_M}] {\Rightarrow}
node[between=FN and GN at .6, rotate=-90, 2label={above,\ \theta_N}] {\Rightarrow}
node[between=S2 and GNM at .4, rotate=-90,
2label={above,\ G_1^2}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\]
\item For each object $R$, we have
\[
\theta_{iR} \ (F_1^0)_R = (G_1^0)_R \ i(\theta_R),
\]
an equality of $2$-cells (i.e., morphisms in $\E_1$) from $i(FR)$ to
$G(iR)$. This may be pictured as the following equality of
pasting diagrams.
\[
\begin{tikzpicture}[x=30mm,y=15mm]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (R) {FR}
(1,0) node (T) {FR}
(0,-2) node (R') {GR}
(1,-2) node (T') {GR}
;
\draw[1cell]
(R) edge[slashed] node (iFR) {i(FR)} (T)
(R') edge[slashed,'] node (GiR) {G(iR)} (T')
;
}
\draw[font=\Large] (1.5,-1) node {=};
\begin{scope}
\boundary
\draw[0cell]
(0,-1) node (R1) {FR}
(1,-1) node (T1) {FR}
;
\draw[1cell]
(R1) edge[slashed] node (FiR) {F(iR)} (T1)
(R) edge['] node {1} (R1)
(T) edge node {1} (T1)
(R1) edge['] node {\theta_R} (R')
(T1) edge node {\theta_R} (T')
;
\draw[2cell]
node[between=iFR and FiR at .6, rotate=-90, 2label={above,\ F_1^0}] {\Rightarrow}
node[between=FiR and GiR at .5, rotate=-90,
2label={above,\ \theta_{iR}}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(2,0)}]
\boundary
\draw[0cell]
(0,-1) node (R2) {GR}
(1,-1) node (T2) {GR}
;
\draw[1cell]
(R) edge['] node {\theta_R} (R2)
(T) edge node {\theta_R} (T2)
(R2) edge['] node {1} (R')
(T2) edge node {1} (T')
(R2) edge[slashed] node (iGR) {i(GR)} (T2)
;
\draw[2cell]
node[between=iGR and GiR at .5, rotate=-90, 2label={above,\ G_1^0}] {\Rightarrow}
node[between=iFR and iGR at .6, rotate=-90,
2label={above,\ i(\theta_{R})}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\]
\end{itemize}
This finishes the definition of a transformation. Moreover:
\begin{itemize}
\item A transformation $\theta$ is \emph{invertible}\index{transformation!double category!invertible}\index{double category!transformation!invertible}\index{invertible!transformation!double category} if the
  components of $\theta_0$ and $\theta_1$ are isomorphisms. In this case $\theta$ has an
  inverse given by $(\theta_0^{-1},\theta_1^{-1})$.
\item There is an identity strict transformation
\index{strict transformation!double category!identity}\index{double category!strict transformation!identity}
$F \to F$ given by
$(1_{F_0},1_{F_1})$. The necessary axioms hold by functoriality
of $s$, $t$, $\hcirc$, and $i$.
\item If $\theta\cn F \to G$ and $\be\cn G \to H$ are transformations,
  then their composite
  \index{transformation!double category!vertical composition}\index{double category!transformation!vertical composition}\index{vertical composition!transformation!double category}
  $\be\theta$ is given by composing $\be_0\theta_0$
  and $\be_1\theta_1$. In
  \cref{exercise:vcomposite-double-transformation} we ask the reader
  to verify that $\be\theta$ satisfies the axioms of a transformation,
  and that composition of transformations is strictly associative
  and unital.
\item Suppose $F,F'\cn \D \to \D'$ and $G,G'\cn \D' \to \D''$ are
  lax functors of double categories. If $\theta\cn F \to F'$ and
  $\be\cn G \to G'$ are transformations, then there is a
  horizontal composite
  \index{transformation!double category!horizontal composition}\index{double category!transformation!horizontal composition}\index{horizontal composition!transformation!double category}
  $\be * \theta$ whose constituent transformations
  are the horizontal compositions $\be_0 * \theta_0$ and $\be_1 * \theta_1$. In
  \cref{exercise:hcomposite-double-transformation} we ask the reader
  to verify that $\be * \theta$ satisfies the axioms of a
  transformation, and that the middle four exchange law \eqref{middle-four} holds.\defmark
\end{itemize}
\end{definition}
\begin{remark}
\index{transformation!bicategory and double category comparison}
\index{oplax transformation!bicategory and double category comparison}
\index{double category!transformation!comparison with bicategory}
\index{bicategory!oplax transformation!comparison with double category}
Note that the axioms for a transformation $F \to G$ are
similar, but not the same, as the axioms for an oplax transformation
between $\cH F$ and $\cH G$. If $\theta_0$ is the identity
transformation, then the components of $\theta_1$ are globular and
$\theta_1$ induces an icon $\cH F \to \cH G$. However, a general
transformation $\theta\cn F \to G$ does not induce any kind of
transformation between $\cH F$ and $\cH G$ because the components of
$\theta_1$ are generally not globular.
\end{remark}
Using the composition described in
\cref{definition:double-functor,definition:double-transformation}, we
have the following result. We leave verification of the details to
\cref{exercise:dbl-is-a-2-cat}.
\begin{proposition}\label{proposition:dbl-is-a-2-cat}
\index{2-category!of double categories}\index{double category!2-category of double categories}
There is a $2$-category $\Dbl$ whose objects are small double
categories, $1$-cells are lax functors, and $2$-cells are transformations.
\end{proposition}
\begin{notation}
As for $\Cat$, we will use $\Dbl$ for both the $2$-category and its
underlying $1$-category.
\end{notation}
For the remainder of this chapter we will assume, unless otherwise
stated, that our double categories are small.
In \cref{exercise:H-preserves-products} we ask the reader to verify
that $\cH$ is functorial with respect to lax functors and is
product-preserving, thus proving the following result.
\begin{theorem}\label{theorem:H-preserves-products}
\index{horizontal bicategory!product-preserving functor}
Taking horizontal bicategories defines a product-preserving functor
of $1$-categories
\[
\cH\cn \Dbl \to \Bicat,
\]
where $\Bicat$ is the $1$-category of small bicategories and lax
functors in \cref{thm:cat-of-bicat}.
\end{theorem}
\section{Monoidal Double Categories}
\label{sec:mon-double-cat}
A monoidal double category is defined by generalizing the definition
of monoidal category from $(\Cat, \times)$ to $(\Dbl, \times)$. We
will make use of pasting diagrams in the $2$-category $\Dbl$,
interpreted using the $2$-Categorical Pasting Theorem
\ref{thm:2cat-pasting-theorem}.
\begin{definition}[Monoidal double category]\label{definition:monoidal-psd-cat}
A \emph{monoidal double category}\index{monoidal!double category}\index{double category!monoidal} is a tuple
\[
(\D, \otimes, \monunit, a, \ell, r)
\]
consisting of the following.
\begin{itemize}
\item A double category $\D$
  called the \emph{base double category}\index{monoidal!double category!base}\index{double category!monoidal!base}\index{double category!base}. The $n$-fold product $\D
  \times \cdots \times \D$ is written as $\D^n$ below.
\item A pseudofunctor $\otimes \cn \D \times \D \to \D$ called the
\emph{monoidal product}.
\item A pseudofunctor $\monunit\cn \boldone \to \D$ called the
\emph{monoidal unit}, where $\boldone$ is the terminal double
category of \cref{example:psd-terminal}.
\item An invertible transformation
\[
a \cn \otimes \circ (\otimes \times 1) \fto{\iso}
\otimes \circ (1 \times \otimes),
\]
called the \emph{associator}.
\item Invertible transformations
\[
\ell \cn \otimes \circ (\monunit \times 1) \to 1 \andspace
r \cn \otimes \circ (1 \times \monunit) \to 1.
\]
called the \emph{left unitor} and \emph{right unitor}, respectively.
\end{itemize}
These data are required to satisfy the following two axioms, given
as equalities of pasting diagrams in the $2$-category $\Dbl$. The unlabeled regions
commute strictly.
\begin{description}
\item[Pentagon Axiom]\index{pentagon axiom!monoidal double category}\index{double category!monoidal!pentagon axiom}
\begin{equation}\label{mondbl-pentagon}
\begin{tikzpicture}[x=21mm,y=23mm]
\def\boundary{
\draw[0cell]
(0,0) node (LL) {\D^4}
(.707,.707) node (TL) {\D^3}
(.707,-.707) node (BL) {\D^3}
(TL) ++(1,0) node (TR) {\D^2}
(BL) ++(1,0) node (BR) {\D^2}
(2.414,0) node (RR) {\D}
;
\path[1cell]
(LL) edge node {\otimes \times 1 \times 1} (TL)
(TL) edge node {\otimes \times 1} (TR)
(TR) edge node {\otimes} (RR)
(LL) edge[swap] node {1 \times 1 \times \otimes} (BL)
(BL) edge[swap] node {1 \times \otimes} (BR)
(BR) edge[swap] node {\otimes} (RR)
;
}
\boundary
\draw[0cell]
(1.414,0) node (CC) {\D^3}
;
\draw[1cell]
(TL) edge node {1 \times \otimes} (CC)
(BL) edge['] node {\otimes \times 1} (CC)
(CC) edge node {\otimes} (RR)
;
\draw[2cell]
node[between=TR and CC at .5, rotate=240, 2label={above,a}] {\Rightarrow}
node[between=CC and BR at .5, rotate=300, 2label={above,a}] {\Rightarrow}
;
\draw (RR) ++(.4,0) node (eq) {=};
\begin{scope}[shift={(3.214,0)}]
\boundary
\draw[0cell]
(1,0) node (CC) {\D^3}
;
\draw[1cell]
(LL) edge node {1 \times \otimes \times 1} (CC)
(CC) edge['] node[pos=.6] {\otimes \times 1} (TR)
(CC) edge node[pos=.6] {1 \times \otimes} (BR)
;
\draw[2cell]
node[between=TR and BR at .5, shift={(.2,0)}, rotate=270, 2label={below,a}] {\Rightarrow}
node[between=CC and BL at .5, rotate=240, 2label={below,1
\times a}] {\Rightarrow}
node[between=TL and CC at .5, rotate=300,
2label={above,a \times 1}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\item[Middle Unity Axiom]\index{middle unity axiom!monoidal double category}\index{double category!monoidal!middle unity axiom}
\begin{equation}\label{mondbl-middle-unity}
\begin{tikzpicture}[x=20mm,y=20mm]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (a) {\D^2}
(0,1) node (c) {\D^3}
(1,0) node (b) {\D^2}
(1,1) node (d) {\D^2}
(2,.5) node (e) {\D}
;
\draw[1cell]
(a) edge node {1 \times \monunit \times 1} (c)
(c) edge node {\otimes \times 1} (d)
(d) edge node {\otimes} (e)
(a) edge['] node {1} (b)
(b) edge['] node {\otimes} (e)
;
}
\draw (2.5,.6) node {=};
\begin{scope}[shift={(0,0)}]
\boundary
\draw[1cell]
(c) edge node[pos=.4] {1 \times \otimes} (b)
;
\draw[2cell]
(a) ++(55:.35) node[rotate=-90,2label={above,1 \times \ell}]
{\Rightarrow}
(d) ++(-80:.5) node[rotate=-90,2label={above,a}]
{\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(3.5,0)}]
\boundary
\draw[1cell]
(a) edge['] node {1} (d)
;
\draw[2cell]
(c) ++(-55:.4) node[rotate=-45,2label={above,r \times 1}]
{\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\end{description}
This finishes the definition of a monoidal double category.
Moreover,
\begin{itemize}
\item $\D$ is \emph{braided monoidal}
\index{monoidal!braided!double category}
\index{braided monoidal!double category}
\index{double category!monoidal!braided}
if it is equipped with an
invertible transformation $\beta\cn \otimes \to \otimes
  \circ \tau$, where $\tau$ denotes the twist $\D^2 \to \D^2$ given
by $\tau(R \times S) = S \times R$. The invertible transformation
$\beta$ is called the \emph{braiding} and is subject to
the following two axioms, stated as equalities of pasting
diagrams in $\Dbl$. The unlabeled regions commute strictly. Each
of the unlabeled arrows is given by the monoidal product, $\otimes$,
and we let $\gamma$ denote the cyclic permutation given by $\ga(R
\times S \times T) = (S \times T \times R)$.
\begin{description}
\item[(1,2)-Braid Axiom]\index{12braidaxiom@(1,2)-braid axiom}\index{braided monoidal!double category!(1,2)-braid axiom}\index{double category!braided monoidal!(1,2)-braid axiom}
\begin{equation}\label{mondbl-br1}
\begin{tikzpicture}[x=20mm,y=15mm,baseline={(0,1.4)}]
\newcommand{\boundary}{
\draw[0cell]
(0,2) node (t1) {\D^3}
(t1) ++(1.5,0) node (t2) {\D^2}
(2,1) node (MR) {\D}
(0,0) node (b1) {\D^3}
(b1) ++(1.5,0) node (b2) {\D^2}
;
\draw[1cell]
(t1) edge node {\otimes \times 1} (t2)
(t2) edge node {} (MR)
(b1) edge['] node {1 \times \otimes} (b2)
(b2) edge['] node {} (MR)
(t1) edge['] node {\gamma} (b1)
;
}
\begin{scope}[shift={(0,0)}]
\boundary
\draw[0cell]
(t1) ++(1,-.5) node (t3) {\D^2}
(b1) ++(1,.5) node (b3) {\D^2}
;
\draw[1cell]
(t1) edge['] node {1 \times \otimes} (t3)
(t3) edge node {} (MR)
(b1) edge node {\otimes \times 1} (b3)
(b3) edge node {} (MR)
(t3) edge['] node {\tau} (b3)
;
\draw[2cell]
node[between=t3 and t2 at .5, rotate=-135, 2label={below,a}] {\Rightarrow}
node[between=b3 and b2 at .5, rotate=-45, 2label={below,a}] {\Rightarrow}
(MR) ++(180:.6) node[rotate=-90,2label={below,\beta}] {\Rightarrow}
;
\end{scope}
\draw (2.5,1) node {=};
\begin{scope}[shift={(3,0)}]
\boundary
\draw[0cell]
(.8,1) node (ML) {\D^3}
;
\draw[1cell]
(t1) edge['] node[pos=.6] {\tau \times 1} (ML)
(ML) edge['] node[pos=.4] {1 \times \tau} (b1)
(ML) edge['] node {\otimes \times 1} (t2)
(ML) edge node {1 \times \otimes} (b2)
;
\draw[2cell]
(ML) ++(90:.6) node[shift={(-.15,0)}, rotate=-90,2label={above,\,\beta \times 1}] {\Rightarrow}
(ML) ++(-90:.6) node[shift={(-.15,0)}, rotate=-90,2label={above,\,1 \times \beta}] {\Rightarrow}
(ML) ++(.8,0) node[rotate=-90,2label={below,a}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\item[(2,1)-Braid Axiom]\index{21braidaxiom@(2,1)-braid axiom}\index{braided monoidal!double category!(2,1)-braid axiom}\index{double category!braided monoidal!(2,1)-braid axiom}
\begin{equation}\label{mondbl-br2}
\begin{tikzpicture}[x=20mm,y=15mm,baseline={(0,1.4)}]
\newcommand{\boundary}{
\draw[0cell]
(0,2) node (t1) {\D^3}
(t1) ++(1.5,0) node (t2) {\D^2}
(2,1) node (MR) {\D}
(0,0) node (b1) {\D^3}
(b1) ++(1.5,0) node (b2) {\D^2}
;
\draw[1cell]
(t1) edge node {1 \times \otimes} (t2)
(t2) edge node {} (MR)
(b1) edge['] node {\otimes \times 1} (b2)
(b2) edge['] node {} (MR)
(t1) edge['] node {\gamma^{-1}} (b1)
;
}
\begin{scope}[shift={(0,0)}]
\boundary
\draw[0cell]
(t1) ++(1,-.5) node (t3) {\D^2}
(b1) ++(1,.5) node (b3) {\D^2}
;
\draw[1cell]
(t1) edge['] node {\otimes \times 1} (t3)
(t3) edge node {} (MR)
(b1) edge node {1 \times \otimes} (b3)
(b3) edge node {} (MR)
(t3) edge['] node {\tau} (b3)
;
\draw[2cell]
node[between=t3 and t2 at .5, rotate=-135, 2label={above,a^{-1}}] {\Rightarrow}
node[between=b3 and b2 at .5, rotate=-45, 2label={above,a^{-1}}] {\Rightarrow}
(MR) ++(180:.6) node[rotate=-90,2label={below,\beta}] {\Rightarrow}
;
\end{scope}
\draw (2.5,1) node {=};
\begin{scope}[shift={(3,0)}]
\boundary
\draw[0cell]
(.8,1) node (ML) {\D^3}
;
\draw[1cell]
(t1) edge['] node[pos=.6] {1 \times \tau} (ML)
(ML) edge['] node[pos=.4] {\tau \times 1} (b1)
(ML) edge['] node {1 \times \otimes} (t2)
(ML) edge node {\otimes \times 1} (b2)
;
\draw[2cell]
(ML) ++(90:.6) node[shift={(-.15,0)},
rotate=-90,2label={above,\,1 \times \beta}] {\Rightarrow}
(ML) ++(-90:.6) node[shift={(-.15,0)},
rotate=-90,2label={above,\,\beta \times 1}] {\Rightarrow}
(ML) ++(.7,0) node[rotate=-90,2label={above,a^{-1}}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\end{description}
\item $\D$ is \emph{symmetric monoidal}\index{symmetric monoidal!double category}\index{monoidal!symmetric!double category}\index{double category!monoidal!symmetric}\index{double category!symmetric monoidal} if, furthermore,
$(\beta\tau) \circ \beta = 1_{\tensor}$.\defmark
\end{itemize}
\end{definition}
\begin{explanation}\label{explanation:mon-psd}\
\begin{enumerate}
\item
\index{double category!monoidal!comparison with monoidal categories}
The pentagon and middle unity axioms for monoidal double
categories generalize \eqref{pentagon-axiom} and
\eqref{monoidal-unit} for monoidal categories. In particular, by
  restriction these axioms imply that the categories $\D_0$ and
  $\D_1$ are monoidal.
\item The associativity and unity diagrams \eqref{mondbl-pentagon}
and \eqref{mondbl-middle-unity} are just like the ones mentioned
in \cref{expl:tricategory-definition}, but in this case they
really are pasting diagrams in a $2$-category so there is no
ambiguity.
\item
\index{double category!braided monoidal!comparison with braided monoidal categories}
The two braid axioms generalize the hexagon axioms
\eqref{hexagon-b1} and \eqref{hexagon-b2} for braided monoidal
  categories; they imply that each of $\D_0$ and $\D_1$ is a braided
monoidal category. In \cref{exercise:mondbl-br-unit-axiom} we ask
the reader to show that the two braid axioms imply unit axioms
generalizing \eqref{symmetry-unit}.
\item We can think of the braiding $\beta$ as a crossing between two
lines as in \cref{expl:hexagon-axioms}. Then the (1,2)-braid
axiom states the equality between two ways to cross two lines over
one line, similar to the first picture in
\cref{expl:hexagon-axioms}. The left-hand side of the axiom
describes crossing two lines over one line all at once, while the
right-hand side of the axiom describes crossing two lines over one
line, one at a time. The (2,1)-braid axiom admits a similar
interpretation with one line crossing over two lines, similar to
the second picture in \cref{expl:hexagon-axioms}, either all at once or
one line at a time.
\item If $\D$ is symmetric monoidal, then each of $\D_0$ and $\D_1$
is a symmetric monoidal category.
\item The source and target functors, $s,t\cn \D_1 \to \D_0$, are strict
  monoidal. If $\D$ is braided, respectively symmetric, then $s$
  and $t$ are braided, respectively symmetric, monoidal functors.
\item The unit functor, $i\cn \D_0 \to \D_1$, is strong monoidal
  with constraint isomorphism $i(R \otimes R') \to iR \otimes iR'$
  given by $\otimes^0$. If $\D$ is braided, respectively symmetric,
then $i$ is a braided, respectively symmetric, monoidal
functor.\dqed
\end{enumerate}
\end{explanation}
\begin{example}[Spans and Bimodules]\label{ex:psd-spans-bimodules-symm-monoidal}
\index{symmetric monoidal!double category!of spans}
\index{span!double category!symmetric monoidal}
\index{symmetric monoidal!double category!of bimodules}
\index{bimodule!double category!symmetric monoidal}
In
\cref{exercise:psd-spans-monoidal,exercise:psd-bimodules-monoidal}
we ask the reader to verify the following two examples.
\begin{enumerate}
\item The double category of spans described in \cref{ex:psd-spans}
is symmetric monoidal.
\item The double category of rings and bimodules described in
\cref{ex:psd-bimodules} is symmetric monoidal.\dqed
\end{enumerate}
\end{example}
\section{Exercises and Notes}\label{sec:monoidal-bicat-exercises}
\begin{exercise}\label{exer:monbicat-mates}
Prove \Cref{pi-mates}. Hint: All of these induced $2$-cells are similar to mates in \Cref{definition:mates}. They are defined using the original $2$-cells, the left and right unitors, and the (co)units of the relevant adjoint equivalences. See the examples in \Cref{expl:right-hex-mates,expl:left-hex-mates}. One may proceed by the following steps.
\begin{enumerate}
\item Starting with $\pi^{-1}$, first define each component of $\pi_3$ using the pasting diagram
\[\begin{tikzpicture}[xscale=3.5,yscale=1.4]
\draw[0cell]
(0,0) node (x11) {\big((DC)B\big)A}
($(x11)+(1,0)$) node (x12) {D\big(C(BA)\big)}
($(x11)+(.05,1)$) node (x01) {\big(D(CB)\big)A}
($(x11)+(.95,1)$) node (x02) {D\big((CB)A\big)}
($(x11)+(.5,-.9)$) node (x31) {(DC)(BA)}
(x02)++(.9,0) node (x03) {D\big((CB)A\big)}
(x11)++(-.5,-.9) node (x30) {(DC)(BA)}
;
\draw[1cell]
(x11) edge node[pos=.2] {a\tensor 1} (x01)
(x01) edge node (s) {a} (x02)
(x02) edge node[swap,pos=.25] {1\tensor a} (x12)
(x11) edge node[pos=.65] {a} (x31)
(x31) edge node[pos=.4] {a} (x12)
(x02) edge node {1} (x03)
(x12) edge node[swap] {1\tensor \abdot} (x03)
(x01) edge[bend left=75] node {a} (x03)
(x30) edge node[pos=.4] {\abdot} (x11)
(x30) edge node[swap] {1} (x31)
(x30) edge[out=-60,in=-90,looseness=2.3] node[swap] {a} (x12)
;
\draw[2cell]
node[between=x30 and x31 at .8, shift={(0,-.7)}, rotate=90, 2label={below,r^{-1}}] {\Rightarrow}
node[between=x30 and x31 at .45, shift={(0,.4)}, rotate=90, 2label={below,\epzainv}] {\Rightarrow}
node[between=s and x31 at .4, shift={(-.3,0)}, rotate=90, 2label={below,\pi^{-1}}] {\Rightarrow}
node[between=x02 and x03 at .2, shift={(0,-.6)}, rotate=90, 2label={below,1\etaainv}] {\Rightarrow}
node[between=x02 and x03 at 0, shift={(0,.6)}, rotate=90, 2label={below,\ell}] {\Rightarrow}
;
\end{tikzpicture}\]
in which:
\begin{itemize}
\item $\epzainv$ is the inverse of the counit $\epza : a\abdot \to 1$.
\item $\etaainv$ is the inverse of the unit $\etaa : 1 \to \abdot a$.
\item $1\etaainv$ is the vertical composite
\[\begin{tikzpicture}[xscale=4,yscale=1.3]
\draw[0cell]
(0,0) node (x11) {(1_D \tensor \abdot)(1_D \tensor a)}
($(x11)+(1,0)$) node (x12) {1_{D((CB)A)}}
($(x11)+(.05,-1)$) node (x21) {(1_D1_D) \tensor (\abdot a)}
($(x11)+(.95,-1)$) node (x22) {1_D \tensor 1_{(CB)A}}
;
\draw[1cell]
(x11) edge node (one) {1\etaainv} (x12)
(x11) edge node[swap,pos=.15] {\tensortwo} (x21)
(x21) edge node (s) {\ell \tensor \etaainv} (x22)
(x22) edge node[swap,pos=.8] {\tensorzeroinv} (x12)
;
\end{tikzpicture}\]
in $\B$ with $\tensorzeroinv$ the inverse of the lax unity constraint \eqref{tensorzero-gf} for the composition $\tensor$.
\end{itemize}
\item Starting with $\pi_3$, one similarly defines $\pi_5^{-1}$ by replacing one copy of $a$ with its adjoint $\abdot$.
\item Starting with $\pi_5$, one defines $\pi_2$, which is then used to define $\pi_1$.
\item $\pi_7$ and $\pi_8$ are defined similarly to $\pi_3$ in the first step.
\item Starting with $\pi_7$, one defines $\pi_4$, $\pi_6$, and $\pi_{10}$.
\item Finally, with $\pi_6$, one defines $\pi_9$.
\end{enumerate}
\end{exercise}
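All of the $2$-cells in the preceding exercise are built by one mate-style recipe. As an orienting sketch, suppressing the associators and unitors that must be inserted in the bicategorical setting: if $f \dashv f^\bdot$ is an adjunction with unit $\eta \cn 1 \to f^\bdot f$ and counit $\epz \cn f f^\bdot \to 1$, then each $2$-cell $\theta \cn g \to hf$ determines a mate
\[
g f^\bdot \to h f f^\bdot \to h,
\]
namely the whiskering of $\theta$ with $f^\bdot$ followed by the whiskering of $\epz$ with $h$. The dual construction uses $\eta$ when $f$ appears on the other side.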
\begin{exercise}\label{exer:mu-rho-mates}\index{middle 2-unitor!mate}\index{mate!middle 2-unitor}\index{right 2-unitor!mate}\index{mate!right 2-unitor}
In a monoidal bicategory $\B$, prove that the middle $2$-unitor and the right $2$-unitor induce invertible $2$-cells $\mu'$ and $\rho'$, respectively, in $\Bicatps(\B^2,\B)$, with the following component $2$-cells.
\[\begin{tikzpicture}[xscale=3,yscale=1.2,baseline={(r.base)}]
\def\b{15}
\draw[0cell]
(0,0) node (x11) {(B\monunit)A}
($(x11)+(.8,0)$) node (x12) {B(\monunit A)}
($(x11)+(.4,-1)$) node (x2) {BA}
;
\draw[1cell]
(x11) edge[bend right=\b] node[swap,pos=.4] (r) {r\tensor 1} (x2)
(x11) edge node (s) {a} (x12)
(x12) edge[bend left=\b] node[pos=.4] {1\tensor \ell} (x2)
;
\draw[2cell]
node[between=s and x2 at .7, shift={(0,0)}, rotate=-180, 2label={below,\mu'}] {\Rightarrow}
;
\draw[0cell]
($(x2)+(1.5,0)$) node (y21) {(BA)\monunit}
($(y21)+(-.4,1)$) node (y11) {BA}
($(y21)+(.4,1)$) node (y12) {B(A\monunit)}
;
\draw[1cell]
(y12) edge node[swap] (a) {1 \tensor r} (y11)
(y21) edge[bend left=\b] node {r} (y11)
(y21) edge[bend right=\b] node[swap] {a} (y12)
;
\draw[2cell]
node[between=a and y21 at .7, shift={(0,0)}, rotate=-180, 2label={below,\rho'}] {\Rightarrow}
;
\end{tikzpicture}\]
\end{exercise}
\begin{exercise}\label{exer:syllepsis-mate}\index{syllepsis!mate}\index{mate!syllepsis}
In a sylleptic monoidal bicategory, prove that the syllepsis $\syl$ induces an invertible $2$-cell $\syl'$ in $\Bicatps(\B^2,\B)(\tensor,\tensor\tau)$ with the following component $2$-cells.
\[\begin{tikzpicture}[xscale=3,yscale=1.2]
\def\b{55}
\draw[0cell]
(0,0) node (x) {A\tensor B}
($(x)+(1,0)$) node (y) {B\tensor A}
;
\draw[1cell]
(x) edge[bend left=\b] node {\beta_{A,B}} (y)
(x) edge[bend right=\b] node[swap] {\betabdot_{B,A}} (y)
;
\draw[2cell]
node[between=x and y at .45, shift={(0,0)}, rotate=-90, 2label={above,\syl'}] {\Rightarrow}
;
\end{tikzpicture}\]
\end{exercise}
\begin{exercise}\label{exercise:cubical-preserves-id}
Suppose $F$ is a cubical pseudofunctor as in
\cref{definition:cubical-psfun}. Use the lax left and right unity
properties \eqref{f0-bicat} to show that $F$ is strictly unitary
(i.e., $F^0$ is the identity).
\end{exercise}
\begin{exercise}\label{exercise:cubical-composite}
Suppose that
\[
F\cn \C_1 \times \cdots \times \C_n \to \D
\]
is a cubical pseudofunctor. Suppose, moreover, that for each $1 \leq
j \leq n$ we have cubical pseudofunctors
\[
F_j\cn \C_{j,1} \times \cdots \times \C_{j,n_j} \to \C_j.
\]
Show that $F \circ (F_1 \times \cdots \times F_n)$ is also a cubical
pseudofunctor.
\end{exercise}
\begin{exercise}\label{exercise:naturality-cub-gray}
Prove the naturality statement in \cref{theorem:cub-gray-adj}.
\end{exercise}
\begin{exercise}\label{exercise:Gray-is-monoidal-category}
Verify the unity and pentagon axioms for the monoidal category
$\Gray = (\IICat, \otimes, \boldone, a, \ell, r)$ in
\cref{theorem:Gray-is-monoidal-category}.
\end{exercise}
\begin{exercise}\label{exercise:naturality-cub-hom}
Prove the naturality statement in \cref{proposition:cub-hom-adj}.
\end{exercise}
\begin{exercise}\label{exercise:Gray-is-symm-mon}
Complete the argument in \cref{theorem:Gray-is-symm-mon} showing
that $\xi$ defines a symmetry for $\Gray$.
\end{exercise}
\begin{exercise}\label{exercise:brmoncat-grmon}
Complete \cref{example:brmoncat-grmon} by showing that the axioms
for a braiding (\cref{def:braided-monoidal-category}) imply
conditions \eqref{gm:hex}, \eqref{gm:func}, and \eqref{gm:nat} of
\cref{explanation:data-axioms-for-Gray-monoids}.
\end{exercise}
\begin{exercise}\label{exercise:bicatps-composition-cubical}
Use the formula given in \eqref{tensortwo-x} to show that the
composition pseudofunctor $(\otimes,\otimes^2,\otimes^0)$, defined
for $\Bicatps$ in \cref{sec:composite-tr-mod}, is cubical.
\end{exercise}
\begin{exercise}\label{exercise:psdcat-left-right-unity}
Verify that the proof of \cref{bicat-left-right-unity} generalizes
to a proof of \cref{psdcat-left-right-unity}.
\end{exercise}
\begin{exercise}\label{exercise:psdcat-l-equals-r}
Verify that the proof of \cref{bicat-l-equals-r} generalizes to a
proof of \cref{psdcat-l-equals-r}.
\end{exercise}
\begin{exercise}\label{exercise:vcomposite-double-transformation}
Suppose that $F, G, H \cn \D \to \E$ are three lax functors of double categories. Suppose $\alpha \cn F \to G$ and $\beta\cn G \to H$ are two
transformations. Verify the following.
\begin{enumerate}
\item The composite $\beta\alpha$ described in
\cref{definition:double-transformation} satisfies the axioms
to be a transformation.
\item Composition of transformations is strictly associative and
unital.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exercise:hcomposite-double-transformation}
Suppose $F,F'\cn \D \to \D'$ and $G,G'\cn \D' \to \D''$ are lax
functors of double categories. Suppose that $\alpha\cn F \to F'$ and
$\beta\cn G \to G'$ are transformations.
Verify the following.
\begin{enumerate}
\item The horizontal composite $\beta * \alpha$ described in
\cref{definition:double-transformation} satisfies the axioms
to be a transformation.
\item Horizontal composition of transformations satisfies the
middle four exchange law \eqref{middle-four}.
\end{enumerate}
\end{exercise}
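The middle four exchange law cited in the preceding exercise expresses the same compatibility familiar from $2$-categories. As a sketch: for transformations $\alpha\cn F \to F'$, $\alpha'\cn F' \to F''$, $\beta\cn G \to G'$, and $\beta'\cn G' \to G''$ between composable lax functors, writing vertical composition by juxtaposition,
\[
(\beta'\beta) * (\alpha'\alpha) = (\beta' * \alpha')(\beta * \alpha).
\]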
\begin{exercise}\label{exercise:dbl-is-a-2-cat}
Use
\cref{exercise:vcomposite-double-transformation,exercise:hcomposite-double-transformation}
to prove that $\Dbl$ is a $2$-category (\cref{proposition:dbl-is-a-2-cat}).
\end{exercise}
\begin{exercise}\label{exercise:dbl-times}
Complete the verification of \cref{definition:dbl-times}, showing
that $\D \times \D'$ is a double category.
\end{exercise}
\begin{exercise}\label{exercise:H-preserves-products}
Give a proof of \cref{theorem:H-preserves-products}: taking
horizontal bicategories defines a product-preserving functor of
$1$-categories
\[
\cH\cn \Dbl \to \Bicat.
\]
\end{exercise}
\begin{exercise}\label{exercise:mondbl-br-unit-axiom}
Suppose that $\D$ is a braided monoidal double category. Show that
the braid axioms \eqref{mondbl-br1} and \eqref{mondbl-br2} imply the
following equality of pasting diagrams, generalizing
\eqref{symmetry-unit}.
\[
\begin{tikzpicture}[x=20mm,y=20mm]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (a) {\D}
(1,.75) node (b) {\D^2}
(2,0) node (c) {\D}
;
\draw[1cell]
(a) edge node {1 \times \monunit} (b)
(b) edge node {\otimes} (c)
(a) edge[',bend right,looseness=1,out=-60, in=240] node {1} (c)
;
}
\begin{scope}[shift={(0,0)}]
\boundary
\draw[0cell]
(1,0) node (b') {\D^2}
;
\draw[1cell]
(a) edge['] node {\monunit \times 1} (b')
(b) edge['] node {\tau} (b')
(b') edge['] node {\otimes} (c)
;
\draw[2cell]
(b') ++(45:.4) node[rotate=-135,2label={below,\beta}] {\Rightarrow}
(b') ++(-90:.3) node[rotate=-90,2label={above,\ell}] {\Rightarrow}
;
\end{scope}
\draw (2.5,.125) node {=};
\begin{scope}[shift={(3,0)}]
\boundary
\draw[2cell]
(b) ++(-90:.75) node[rotate=-90,2label={above,r}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\]
\end{exercise}
\begin{exercise}\label{exercise:psd-spans-monoidal}
Suppose $\C$ is a category with products and pullbacks.
Show that the double category of spans in $\C$ (\cref{ex:psd-spans})
is symmetric monoidal with respect to the product in $\C$.
\end{exercise}
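In the preceding exercise, the monoidal product is computed componentwise. As a sketch: given spans $A \leftarrow S \to B$ and $A' \leftarrow S' \to B'$ in $\C$, their monoidal product is the span
\[
A \times A' \leftarrow S \times S' \to B \times B',
\]
whose legs are the products of the given legs, and the braiding is induced by the canonical symmetry $X \times Y \cong Y \times X$ of the product in $\C$.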
\begin{exercise}\label{exercise:psd-bimodules-monoidal}
Show that the double category of rings and bimodules
(\cref{ex:psd-bimodules}) is symmetric monoidal with respect to the
tensor product.
\end{exercise}
\subsection*{Notes}
\begin{note}[Monoidal Bicategories]
The idea of a monoidal bicategory came from \cite{cw,ckww}. The definition of a monoidal bicategory as a one-object tricategory is from \cite{gps}, except that their lax transformations are actually our oplax transformations, as we pointed out in \Cref{note:tricat-discussion}. Their coherence theorem for tricategories implies that every monoidal bicategory is monoidally biequivalent to a Gray monoid (\cref{definition:gray-monoid}). Another coherence theorem for monoidal bicategories is in \cite{gurski-coherence}. It states that in each free monoidal bicategory, a diagram made up of only constraint $2$-cells is commutative.
See \cite{stay} for further discussion of the axioms for monoidal
bicategories along with braided, sylleptic, and symmetric variants.
\end{note}
\begin{note}[Braided Monoidal Bicategories]
\Cref{def:braided-monbicat} of a braided monoidal bicategory is essentially from \cite{gurski-monoidal,mccrudden}, with minor conventional changes. Our left hexagonator $\Rone$ and right hexagonator $\Rtwo$ correspond to McCrudden's $R^{-1}$ and $S^{-1}$, respectively. The notations $\Rone$ and $\Rtwo$ are due to Kapranov and Voevodsky \cite{kapranov-voevodsky,kapranov-voevodsky-b}, who gave the first definition of a braided monoidal\index{braided monoidal!2-category}\index{monoidal 2-category!braided} 2-category. The (3,1)-crossing, (1,3)-crossing, and (2,2)-crossing axioms are bicategorical analogues of the axioms denoted by, respectively, $((\bullet\tensor\bullet\tensor\bullet)\tensor\bullet)$, $(\bullet\tensor(\bullet\tensor\bullet\tensor\bullet))$, and $((\bullet\tensor\bullet)\tensor(\bullet\tensor\bullet))$ in a braided monoidal $2$-category in the sense of Kapranov and Voevodsky.
The Yang-Baxter axiom, called the \index{Breen polytope}\emph{Breen polytope} in \cite{stay}, is due to Breen \cite{breen}. Kapranov and Voevodsky did not originally include the $2$-categorical version of this axiom in their definition of a braided monoidal $2$-category. Baez and Neuchl \cite{baez-neuchl} later added this axiom to the definition of a braided monoidal $2$-category. It was improved further by Crans \cite{crans} with the inclusion of several unity axioms. The diagrams in the axioms of a braided monoidal bicategory also appeared in \cite{barnatan,batanin}.
Applications of braided monoidal $2$-/bicategories to topological quantum field theory appeared in \cite{baez-neuchl,ktz}.
\end{note}
\begin{note}[Sylleptic and Symmetric Monoidal Bicategories]
\Cref{def:symmetric-monbicat} of a symmetric monoidal bicategory is essentially the one in \cite{stay}, whose syllepsis is phrased as $\syl'$ in \Cref{exer:syllepsis-mate}. Sylleptic\index{sylleptic monoidal!2-category}\index{monoidal 2-category!sylleptic} and symmetric monoidal $2$-categories\index{symmetric monoidal!2-category}\index{monoidal 2-category!symmetric} were defined in \cite{day-street}.
\end{note}
\begin{note}[Coherence Results for Braided and Symmetric Monoidal Bicategories]
Coherence results for braided monoidal bicategories are proved in \cite{gurski-monoidal}. In particular, it is proved there that each braided monoidal bicategory is equivalent to a braided monoidal $2$-category as defined by Crans \cite{crans}. Coherence results for symmetric monoidal bicategories are proved in \cite{gurski-osorno}. Further strictification results for symmetric monoidal bicategories can be found in \cite{bartlett,schommer-pries}.
\end{note}
\begin{note}[The Gray Tensor Product]
The Gray tensor product\index{Gray tensor product!John Gray} is due to John Gray \cite{gray-fibred,gray}.
He gave a more general definition of a lax tensor product, not
requiring the structure $2$-cells $\Si_{f,g}$ to be invertible. We
focus on the case where $\Si_{f,g}$ are invertible as it is somewhat
simpler and is far more common in the literature. Our exposition for
the definition and basic properties of the Gray tensor product
follows that of \cite{gurski-coherence}. Further references include \cite{gps}
and \cite{lack}.
\end{note}
\begin{note}[The Box Product]
The box product of \cref{definition:box-product} is an example of a
general construction for enriched categories known as the \emph{funny
tensor product}.\index{funny tensor product}\index{tensor product!funny}\index{enriched!category!funny tensor product} This point of view, and its relationship to the
Gray tensor product, is explained in
\cite{lack,gurski-coherence,bourke-gurski}. In \cite{bourke-gurski},
the authors use a general theory of factorization systems to construct
the Gray tensor product, and prove that it gives a symmetric
monoidal closed structure on $\IICat$, without resorting to generators and
relations.
\end{note}
\begin{note}[Gray Monoids]
Our
\cref{explanation:monoids-in-Gray,explanation:data-axioms-for-Gray-monoids}
appear in \cite[Lemmas 3 and 4]{baez-neuchl}, respectively, where the
term \emph{semistrict monoidal $2$-category}\index{semistrict monoidal 2-category}\index{monoidal 2-category!semistrict} is used for what we call
Gray monoids. The explicit formulation in
\cref{explanation:data-axioms-for-Gray-monoids} is the definition of
semistrict monoidal $2$-category given in \cite{kapranov-voevodsky}.
\end{note}
\begin{note}
\Cref{exercise:cubical-composite} is given as \cite[Proposition
3.5]{gurski-coherence}. It is the key step in showing that cubical
pseudofunctors form a \emph{non-symmetric multicategory}\index{multicategory!non-symmetric},
a notion of multicategory more general than that of
\cref{def:multicategory}, in which one omits all data and axioms
relating to the permutations $\sigma \in \Sigma_n$. See \cite[Section
11.7]{yau-operad} for detailed definitions and explanations, where
this notion is called \emph{non-symmetric colored operad}\index{operad!non-symmetric colored}.
\end{note}
\begin{note}[Composition of Lax Transformations]
\Cref{exercise:bicatps-composition-cubical} shows that the
composition $\otimes$ for the tricategory $\bicat$ is cubical. The
alternative composition $\otimes'$ sketched in
\cref{note:opcubical-composition} is \emph{opcubical}\index{opcubical condition}, meaning
that it satisfies the opposite of the cubical condition
(\cref{definition:cubical-psfun}): the roles of the two factors are
interchanged in determining which components of the lax
functoriality constraint are identities. This is noted in \cite[Remark
5.2]{gurski-coherence}.
\end{note}
\begin{note}[Double Categories]\label{note:double-cats}
The concept of a strict double category goes back to Ehresmann
\cite{ehresmann1}. The more general version of \cref{definition:psdouble-cat} is
also called \emph{pseudo-double category}\index{double category!pseudo-double category}\index{pseudo!double category@-double category} in \cite{grandis-pare-1999}
and \emph{weak double category}\index{weak double category}\index{double category!weak} in \cite{grandis-pare-2019}. Our
usage follows Shulman \cite{shulman} and Hansen-Shulman
\cite{hansen-shulman}.
There is some variation in the literature about the orientation of
diagrams for double categories, and hence some variation in what the
terms ``vertical'' and ``horizontal'' refer to. The terms ``tight''
and ``loose'' are also used in, e.g., \cite{hansen-shulman} to avoid
this ambiguity. With this language, the concept we have called
``transformation'' is also called ``tight transformation''.
\end{note}
\begin{note}[Monoidal Double Categories]
The usual definition of monoidal [pseudo] double category one finds in
the literature is that of a \emph{pseudomonoid}\index{pseudomonoid} in $(\Dbl,\times)$.
The notion of pseudomonoid is an abstraction to general monoidal
bicategories of the structure defining monoidal categories, but the
details are just beyond the scope of this text. Our
\cref{definition:monoidal-psd-cat} is an unpacking of the usual one.
Pseudomonoids in general monoidal $2$-categories are discussed in
\cite{day-street}.
\end{note}
\begin{note}[Unity Properties]
The special case of \cref{exercise:mondbl-br-unit-axiom} for braided
monoidal $1$-categories can be found, along with several other unity
properties, in \cite[Proposition 1]{joyal-street}.
\end{note}
\begin{note}[Monoidal Bicategories from Monoidal Double Categories]
Work of Shulman \cite{shulman} and Hansen-Shulman
\cite{hansen-shulman} describes conditions under which a symmetric
monoidal structure for a double category descends to give a symmetric
monoidal structure on its horizontal bicategory, with $\Bimod$ being
one of the key examples.
\cref{exercise:psd-spans-monoidal,exercise:psd-bimodules-monoidal} appear,
along with several other examples, in \cite{hansen-shulman}.
\end{note}
\endinput
\chapter*{List of Notations}
\newcommand{\where}[1]{\> \> \pageref{#1} \> \>}
\newcommand{\> \> \> \> \hspace{1em}}{\> \> \> \> \hspace{1em}}
\begin{tabbing}
\phantom{\textbf{Notation}} \= \hspace{1.5cm}\= \phantom{\textbf{Page}}\= \hspace{.4cm}\= \phantom{\textbf{Description}} \\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:categorical_prelim}} \>\> \textbf{Page} \>\> \textbf{Description}\\
$\calu$ \where{notation:universe} a Grothendieck universe\\
$\Ob(\C)$, $\C_0$ \where{notation:object} class of objects in a category $\C$\\
$\C(X,Y)$, $\C(X;Y)$ \where{notation:morphism-set} set of morphisms from $X$ to $Y$ in $\C$\\
$\dom(f)$ \where{notation:domain} domain of a morphism $f$\\
$\codom(f)$ \where{notation:domain} codomain of a morphism $f$\\
$1_X$, $1$ \where{notation:identity-morphism} identity morphism of $X$\\
$\Mor(\C)$, $\C_1$ \where{not:morphism} collection of morphisms in $\C$\\
$gf$ or $g\comp f$ \where{notation:morphism-composition} composition of morphisms\\
$\begin{tikzcd}[column sep=small]X \rar{\cong} & Y\end{tikzcd}$ \where{not:iso} an isomorphism\\
$\C^{\op}$ \where{notation:opposite-category} opposite category of $\C$\\
$\Id_{\C}$, $1_{\C}$ \where{not:idc} identity functor of $\C$\\
$\Cat$ \where{notation:cat} category of small categories\\
$\Fun(\C,\D)$ \where{not:funcd} collection of functors $\C \to \D$\\
$\Nat(F,G)$ \where{not:natfg} collection of natural transformations $F\to G$\\
$\theta \ast \theta'$ \where{not:hcomp} horizontal composition of natural transformations\\
$\C^{\D}$ \where{notation:diagram-category} diagram category of functors $\D \to \C$\\
$L \dashv R$ \where{notation:adjunction} an adjunction\\
$\Yo_{-}$ \where{yoneda-embedding} Yoneda embedding\\
$\colim\, F$, $\colimover{x\in \D}\, Fx$ \where{notation:colimit} colimit of $F \cn \D \to \C$\\
$\coprod$, $\amalg$ \where{not:coprod} coproducts\\
$\boldone$ \where{notation:terminal-category} terminal category\\
$\otimes$ \where{notation:monoidal-product} monoidal product\\
$\tensorunit$ \where{notation:monoidal-product} monoidal unit\\
$\alpha$ \where{mon-cat-alpha} associativity isomorphism\\
$\lambda$, $\rho$ \where{mon-cat-lambda} left and right unit isomorphisms\\
$\C^{\rev}$ \where{notation:crev} reversed monoidal category\\
$(X,\mu,\operadunit)$ \where{notation:monoid} a monoid\\
$\Mon(\C)$ \where{notation:monoid-category} category of monoids in $\C$\\
$(F,F_2,F_0)$ \where{def:monoidal-functor} a monoidal functor\\
$\xi$ \where{def:symmetric-monoidal-category} symmetry isomorphism\\
$\CMon(\C)$ \where{notation:cmon} category of commutative monoids in $\C$\\
$\Set$ \where{notation:set} category of sets\\
$[-,-]$ \where{notation:internal-hom} internal hom\\
$\Top$ \where{notation:top} category of topological spaces\\
$\Ab$ \where{notation:ab} category of abelian groups\\
$\Ch$ \where{notation:ch} category of chain complexes\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:2cat_bicat}}\> \> \> \> \hspace{1em}\\
$\Ob(\B)$, $\B_0$ \where{notation:obb} objects/0-cells in a bicategory\\
$\B_1$, $\B_2$ \where{notation:obb} 1-cells and 2-cells in a bicategory\\
$\B(X,Y)$ \where{notation:obb} hom category\\
$1_X$ \where{notation:id-one-cell} identity 1-cell of an object $X$\\
$c$ \where{notation:hor-comp} horizontal composition\\
$g\circ f$, $gf$ \where{notation:hor-comp} horizontal composition of 1-cells\\
$\beta*\alpha$ \where{notation:hor-comp} horizontal composition of 2-cells\\
$a$ \where{notation:associator} associator\\
$\ell$, $r$ \where{notation:unitors} left and right unitors\\
$\Rightarrow$ \where{notation:double-arrow} a 2-cell\\
$\C_{\bi}$ \where{ex:category-as-bicat} locally discrete bicategory of a category $\C$\\
$\Sigma\C$ \where{ex:moncat-bicat} one-object bicategory of a monoidal category $\C$\\
$\A\times\B$ \where{ex:product-bicat} product bicategory\\
$\Span(\C)$ \where{axb-span} bicategory of spans in $\C$\\
$\Bimod$ \where{ex:bimodules} bicategory with bimodules as 1-cells\\
$\Cla(\B)$ \where{notation:clab} classifying category\\
$\Pic(\B)$ \where{notation:picb} Picard groupoid\\
$V = (V_{ij})$ \where{ex:two-vector-space} a 2-matrix\\
$\twovc$ \where{notation:twovc} bicategory of coordinatized 2-vector spaces\\
$\boldone$ \where{example:terminal-bicategory} terminal bicategory\\
$\Rel$ \where{ex:relations} 2-category of relations\\
$\Cat$ \where{ex:2cat-of-cat} 2-category of small categories, functors, and\\
\> \> \> \> \hspace{1em} natural transformations\\
$\Cat_{\V}$ \where{ex:2cat-of-enriched-cat} $\V$-enriched version of $\Cat$\\
$\twovtc$ \where{notation:twovtc} 2-category of totally coordinatized 2-vector spaces\\
$\Map(-,-)$ \where{mot:multicategory} set of functions\\
$X^n$, $X^{\times n}$ \where{mot:multicategory} $n$-fold product $X \times \cdots \times X$\\
$\Profc$ \where{notation:profs} class of $\colorc$-profiles\\
$\uc$, $(c_1,\ldots,c_n)$ \where{notation:us} a $\colorc$-profile\\
$\duc$ \where{notation:duc} an element in $\Profcc$\\
$\Sigma_n$ \where{notation:sigma-n} symmetric group on $n$ letters\\
$(\C, \gamma, \operadunit)$ \where{notation:multicategory} a multicategory, a.k.a.\ an operad\\
$\End(X)$ \where{ex:endomorphism} endomorphism operad\\
$\As$ \where{notation:As} associative operad\\
$\Com$ \where{notation:Com} commutative operad\\
$\phi\theta$ \>\> \pageref{notation:operad-vcomp}, \pageref{notation:poly-vcomp} \>\> vertical composite of multinatural and\\
\> \> \> \> \hspace{1em} polynatural transformations\\
$\theta' \ast \theta$ \>\> \pageref{notation:operad-hcomp}, \pageref{notation:poly-hcomp} \>\> horizontal composite of multinatural\\
\> \> \> \> \hspace{1em} and polynatural transformations\\
$\Multicat$ \where{multicat-2cat} $2$-category of multicategories, multifunctors, and\\
\> \> \> \> \hspace{1em} multinatural transformations\\
$(\C, \comp, \operadunit)$ \where{notation:polycategory} a polycategory\\
$\C\uduc$ \where{notation:cuduc} an entry of a polycategory\\
$\PEnd(X)$ \where{ex:endomorphism-polycat} polycategory of functions\\
$\Polycat$ \where{polycat-2cat} $2$-category of polycategories, polyfunctors, and\\
\> \> \> \> \hspace{1em} polynatural transformations\\
$\Bop$ \where{def:bicategory-opposite} opposite bicategory\\
$\Bco$ \where{def:bicategory-co} co-bicategory\\
$\Bcoop$ \where{def:bicategory-coop} coop-bicategory\\
$\Cospan(\C)$ \where{notation:cospan} bicategory of cospans in $\C$\\
$\Par$ \where{notation:partial-function} 2-category of partial functions\\
$\overcat{\Cat}{\A}$ \where{notation:overcat} over-category\\
$\Catfl$ \where{notation:catfl} $\Cat$ with finite limit-preserving functors\\
$\MonCat$ \where{notation:moncat} 2-category of monoidal categories, monoidal functors,\\
\> \> \> \> \hspace{1em} and monoidal natural transformations\\
$\StgMonCat$ \where{notation:stgmoncat} $\MonCat$ with strong monoidal functors\\
$\SttMonCat$ \where{notation:stgmoncat} $\MonCat$ with strict monoidal functors\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:pasting-string}}\> \> \> \> \hspace{1em}\\
$(V_G,E_G,\psi_G)$ \where{def:graph} a graph\\
$|G|$ \where{notation:geo-real} geometric realization of a graph $G$\\
$\fieldc$ \where{notation:complex-plane} complex plane\\
$\bullet$, $\raisebox{-.05cm}{\scalebox{.8}{\begin{tikzpicture}\node [draw,circle,thick,minimum size=.4cm,inner sep=0pt] {$v$};\end{tikzpicture}}}$
\where{notation:vertex} a vertex in a graph\\
$v_0e_1v_1\cdots e_nv_n$ \where{notation:path} a path\\
$\ext_G$ \where{notation:extg} exterior face\\
$\bd_F$ \where{notation:boundary} boundary of a face $F$\\
$s_F$, $t_F$ \where{notation:source} source and sink\\
$\dom_F$, $\codom_F$ \where{notation:dom-codom} domain and codomain\\
$HG$ \where{notation:vcomp-graph} vertical composite of anchored graphs\\
$|\phi|$ \where{notation:2pasting-comp} composite of a 2-category pasting diagram\\
$b(P)$, $(P)$ \where{notation:bofp} bracketed directed path\\
$|\phi|$ \>\> \pageref{pasting-diagram-composite}, \pageref{notation:bipasting-comp} \>\> composite of a composition/pasting diagram\\
$G/A$, $H/\{A_i\}_{1\leq i \leq j}$ \where{notation:collapsing} collapsing\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:functors}}\> \> \> \> \hspace{1em}\\
$(F,F^2,F^0)$ \where{def:lax-functors} a lax functor\\
$\Fop$ \where{ex:opposite-lax-functor} opposite lax functor\\
$1_{\B}$ \where{ex:identity-strict-functor} identity strict functor\\
$\conof{X}$ \where{constant-pseudofunctor} constant pseudofunctor\\
$\Bicat$ \where{thm:cat-of-bicat} category of bicategories and lax functors\\
$\Bicatsu$ \where{notation:bicatsu} wide subcategory of $\Bicat$ with\\
\> \> \> \> \hspace{1em} strictly unitary lax functors\\
$\Bicatco$ \where{notation:bicatco} category of bicategories and colax functors\\
$\alpha_X$, $\alpha_f$ \where{notation:transformation-cells} component 1-/2-cells of a lax transformation\\
$1_F$ \where{id-lax-transformation} identity transformation of a lax functor\\
$\beta\alpha$ \where{def:lax-tr-comp} horizontal composite of lax transformations\\
$\Gamma_X$ \where{notation:modification-compcell} component 2-cell of a modification\\
$1_\alpha$ \where{notation:id-modification} identity modification of $\alpha$\\
$\Sigma\Gamma$ \where{notation:modification-vcomp} vertical composite of modifications\\
$\Gamma'*\Gamma$ \where{notation:modification-hcomp} horizontal composite of modifications\\
$\Bicat(\B,\B')$ \where{notation:functor-bicat} bicategory of lax functors, lax transformations,\\
\> \> \> \> \hspace{1em} and modifications\\
$\Bicatps(\B,\B')$ \where{subbicat-pseudofunctor} $\Bicat(\B,\B')$ with pseudofunctors and\\
\> \> \> \> \hspace{1em} strong transformations\\
$f^*$, $f_*$ \where{notation:prepost-comp} pre/post-composition functors\\
$\B(-,X)$ \where{representable-pseudofunctor} representable pseudofunctor\\
$\B(X,-)$ \where{corepresentable-pseudofunctor} corepresentable pseudofunctor\\
$f_*$, $\B(-,f)$ \where{representable-transformation} representable transformation\\
$\alpha_*$ \where{representable-modification} representable modification\\
$\Bicatic$ \where{thm:iicat-of-bicat} $2$-category of bicategories, lax functors, and icons\\
$\Bicatpsic$ \where{notation:bicatpsic} $\Bicatic$ with pseudofunctors\\
$\Bicatsupic$ \where{notation:bicatsupic} $\Bicatic$ with strictly unitary pseudofunctors\\
$\iiCat$ \where{exer:2cat-of-2cat} $2$-category of 2-categories, 2-functors, and\\
\> \> \> \> \hspace{1em} 2-natural transformations\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:constructions}}\> \> \> \> \hspace{1em}\\
$\laxcone(\conof{L},F)$ \where{notation:laxcone} category of lax cones of $L$ over $F$\\
$\pscone(\conof{L},F)$ \where{notation:pscone} category of pseudocones of $L$ over $F$\\
$\conof{f}$ \where{constant-induced-transformation} constant strong transformation induced by a 1-cell\\
$\conof{\alpha}$ \where{constant-induced-modification} constant modification induced by a 2-cell\\
$\oplaxcone(F,\conof{L})$ \where{bicat-aop-bop} category of oplax cones of $L$ under $F$\\
$\iicone(\conof{L},F)$ \where{notation:2cone} category of 2-cones of $L$ over $F$\\
$\iicone(F,\conof{L})$ \where{notation:2cocone} category of 2-cocones of $L$ under $F$\\
$\Delta$ \where{notation:ordinalcat} ordinal number category\\
$\ord{n}$ \where{notation:ordn} linearly ordered set $\{0 < 1 < \cdots < n\}$\\
$d^i$ \where{notation:coface} $i$th coface map $\ord{n-1}\to\ord{n}$\\
$s^i$ \where{notation:coface} $i$th codegeneracy map $\ord{n+1}\to\ord{n}$\\
$\C^{\Deltaop}$ \where{def:simplicial-objects} category of simplicial objects in $\C$\\
$\SSet$ \where{notation:sset} category of simplicial sets\\
$d_i$, $s_i$ \where{notation:face-map} $i$th face and $i$th degeneracy\\
$\Ner$ \where{def:grothendieck-nerve} Grothendieck nerve\\
$\DNer$ \where{def:duskin-nerve} Duskin nerve\\
$\DNer(\B)_n$ \where{notation:duskin-simplices} the set $\Bicatsu(\ord{n},\B)$\\
$\Catdeltaop$ \where{def:simplicial-cat} 2-category of 2-functors $\Deltaop \to \Cat$,\\
\> \> \> \> \hspace{1em} 2-natural transformations, and modifications\\
$\iiner$ \where{def:iinerve} 2-nerve\\
$\iiner(\B)_n$ \where{notation:iinerve-simplices} the category $\Bicatsupic(\ord{n},\B)$\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:adjunctions}}\> \> \> \> \hspace{1em}\\
$\eta$, $\epz$ \where{definition:internal-adjunction} unit and counit of an internal adjunction\\
$f^{\bdot}$ \where{definition:internal-equivalence} an adjoint of $f$\\
$(C,t,\mu,\eta)$ \where{monad-bicat-interpret} a monad in a bicategory\\
$(\A,T,\mu,\eta)$ \where{definition:2-monad} a 2-monad\\
$(X,\theta,\zeta,\omega)$ \where{definition:lax-algebra} a lax algebra for a 2-monad\\
$\sAlg{T}$ \where{definition:t-alg-2-cats} 2-category of strict $T$-algebras, strict morphisms,\\
\> \> \> \> \hspace{1em} and 2-cells\\
$\Alg{T}$ \where{definition:t-alg-2-cats} $\sAlg{T}$ with strong morphisms\\
$\PsAlg{T}$ \where{definition:t-alg-2-cats} $\Alg{T}$ with pseudo $T$-algebras\\
$\LaxAlg{T}$ \where{definition:t-alg-2-cats} $\PsAlg{T}$ with lax $T$-algebras and lax morphisms\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:whitehead}}\> \> \> \> \hspace{1em}\\
$F \sdar X$ \where{sec:lax-slice} lax slice bicategory of $F$ at $X$\\
$F \sdar u$ \where{lemma:base-change-functor} change-of-slice functor\\
$\lto$ \where{definition:lax-terminal} a lax terminal object\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:coherence}}\> \> \> \> \hspace{1em}\\
$\Yo$ \where{definition:Yo} Yoneda pseudofunctor\\
$\Yo^0$ \where{definition:Yo0} lax unity constraint of $\Yo$\\
$\Yo^2$ \where{definition:Yo2} lax functoriality constraint of $\Yo$\\
$\Str(F,G)$ \where{sec:yoneda-bicat-lemma} the 1-category $\Bicatps(\B,\C)(F,G)$\\
$e_A$ \where{definition:eA} evaluation at an object $A$\\
$e_f$ \where{definition:ef} evaluation at a 1-cell $f$\\
$\st{\B}$ \where{theorem:bicat-coherence} the essential image of $\Yo$ for a bicategory $\B$\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:fibration}}\> \> \> \> \hspace{1em}\\
$\subof{X}{P}$ \where{notation:xsubp} the image of $X$ under $P$\\
$\prelift{Y}{f}$ \where{notation:prelift} a pre-lift\\
$\liftof{f}$, $\lift{\preliftyf}$ \where{notation:liftoff} a lift of $\prelift{Y}{f}$\\
$\preraise{g}{h}{f}$ \where{notation:preraise} a pre-raise\\
$\raiseof{f}$ \where{notation:raiseoff} a raise of $\preraise{g}{h}{f}$\\
$\Fibof{\C}$ \where{iicat-fibrations} 2-category of fibrations over $\C$, Cartesian functors,\\
\> \> \> \> \hspace{1em} and vertical natural transformations\\
$\fibclofc$ \where{notation:fibclofc} $\Fibof{\C}$ with cloven fibrations\\
$\fibspofc$ \where{notation:fibclofc} $\Fibof{\C}$ with split fibrations\\
$(\funnyf,\mu,\eta)$ \where{def:iimonad-on-catoverc} 2-monad with cloven fibrations as pseudo algebras\\
$\algtofib$ \where{notation:algtofib} cloven fibration of a pseudo $\funnyf$-algebra\\
$\fibtoalg$ \where{notation:fibtoalg} pseudo $\funnyf$-algebra of a cloven fibration\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:grothendieck}}\> \> \> \> \hspace{1em}\\
$\intf$ \where{def:grothendieck-cat} Grothendieck construction of $F$\\
$\Usubf$ \where{def:grothendieck-over-c} projection functor $\intf \to \C$\\
$\pi$ \where{notation:piofa} oplax cone of $\intf$ under $F$\\
$\intalpha$ \where{notation:intalpha} Grothendieck construction of a strong transformation\\
$\intgamma$ \where{intgamma-ax} Grothendieck construction of a modification\\
$\Ginv(Y)$ \where{def:fiber-category} fiber of $Y$ with respect to $G$\\
$\intbi{\C}F$ \where{notation:intbicf} bicategorical Grothendieck construction\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:tricat-of-bicat}}\> \> \> \> \hspace{1em}\\
$\alpha\whis F$, $F\whis\alpha$ \where{notation:alphawhis} pre/post-whiskering\\
$f_G$, $\alpha_{g,G}$ \where{conv:functor-subscript} images of $f$ and $\alpha_g$ under $G$\\
$\Gtwoinv$, $\Gzeroinv$ \where{notation:gtwoinv} inverses of $\Gtwo$ and $\Gzero$\\
$\boldone$ \where{notation:unit-bicat} unit bicategory\\
$\T^n_{i_1,\ldots,i_{n+1}}$ \where{tricategory-product-abbreviation} $\T_{i_n,i_{n+1}} \times \cdots \times \T_{i_1,i_2}$\\
$\T^n_{[r,r+n]}$ \where{tricategory-product-abbreviation} $\T^n_{r,r+1,\ldots,r+n}$\\
$\T(X_1,X_2)$ \where{notation:hom-bicat} hom bicategories of a tricategory\\
$(\tensor,\tensortwo,\tensorzero)$ \where{tricat-composition} composition in a tricategory\\
$(1_X,1_X^2,1_X^0)$ \where{tricat-identity} identity of an object $X$\\
$(a,\abdot,\etaa,\epza)$ \where{tricategory-associator} associator in a tricategory\\
$(\ell,\ellbdot,\etaell,\epzell)$ \where{tricategory-unitors} left unitor in a tricategory\\
$(r,\rbdot,\etar,\epzr)$ \where{tricategory-unitors} right unitor in a tricategory\\
$\pi$ \where{tricategory-pentagonator} pentagonator\\
$\mu$, $\lambda$, $\rho$ \where{tricategory-iiunitors} middle, left, and right 2-unitors\\
$\tensorzeroinv$ \where{notation:tensorzeroinv} inverse of $\tensorzero$\\
$\bicata_{i,j}$ \where{conv:bicategory-index} bicategory $\Bicatps(\A_i,\A_j)$\\
$G\tensor F$ \where{notation:gtensorf} composite lax functor $GF$\\
$\beta\tensor\alpha$ \where{transformation-composite} composite lax transformation\\
$\Sigma\tensor\Gamma$ \where{mod-composite-component} composite modification\\
$\tensortwo$ \where{tensortwo-component} lax functoriality constraint for $\tensor$\\
$\MC$ \where{notation:maclane} Mac Lane's Coherence \Cref{maclane-coherence}\\
$\iso$ \where{notation:iso} coherence isomorphism\\
$\nat$ \where{notation:nat} naturality properties\\
$\unity$ \where{notation:unity} unity properties\\
$\midfour$ \where{notation:midfour} middle four exchange\\
$\tensorzero$ \where{tensorzero} lax unity constraint for $\tensor$\\
$\bicat$ \where{thm:tricatofbicat} tricategory of small bicategories, pseudofunctors,\\
\> \> \> \> \hspace{1em} strong transformations, and modifications\\
NB4 \where{notation:nb4} non-abelian 4-cocycle condition\\
\> \> \> \> \hspace{1em}\\
\textbf{\Cref{ch:monoidal_bicat}}\> \> \> \> \hspace{1em}\\
$\pi_1,\ldots,\pi_9$ \where{notation:pin} mates of the pentagonator\\
$(\beta,\betabdot,\etabeta,\epzbeta)$ \where{notation:beta-adjoint} braiding in a braided monoidal bicategory\\
$\Rone$ \where{notation:left-hex} left hexagonator\\
$\Rtwo$ \where{notation:right-hex} right hexagonator\\
$\etaainv$ \where{notation:etaainv} inverse of $\etaa$\\
$R^i_{--|-}$ \where{right-hex-mate-1} mates of the right hexagonator\\
$R^i_{-|--}$ \where{expl:left-hex-mates} mates of the left hexagonator\\
$\syl$ \where{notation:syllepsis} syllepsis\\
$\C \Box \D$ \where{definition:box-product} box product of $\C$ and $\D$\\
$\C \otimes \D$ \where{definition:gray-tensor} Gray tensor product of $\C$ and $\D$\\
$\Sigma_{f,g}$ \where{definition:gray-tensor} structure 2-cells in the Gray tensor product\\
$c$ \where{definition:univ-cubical} universal cubical pseudofunctor\\
$\Hom(\C,\D)$ \where{notation:psfun-hom} 2-category of strict functors, strong transformations,\\
\> \> \> \> \hspace{1em} and modifications\\
$\ev$ \where{proposition:eval-cubical} evaluation pseudofunctor\\
$\Gray$ \where{theorem:Gray-is-monoidal-category} symmetric monoidal closed category of 2-categories\\
\> \> \> \> \hspace{1em} with Gray tensor product and pseudofunctor hom\\
$(\C, \gmtimes, \gmunit)$ \where{definition:gray-monoid} Gray monoid\\
$(\D_0,\D_1)$ \where{explanation:dcat-terms} categories of objects and arrows in a double category\\
$\hcirc$ \where{explanation:dcat-terms} horizontal composition in a double category\\
$(i,s,t)$ \where{explanation:dcat-terms} unit, source, and target in a double category\\
$R \sto S$ \where{explanation:dcat-terms} horizontal 1-cell in a double category\\
$\cH\D$ \where{definition:horizontal-bicat} horizontal bicategory of a double category $\D$\\
$\boldone$ \where{example:psd-terminal} terminal double category\\
$\Dbl$ \where{proposition:dbl-is-a-2-cat} 2-category of small double categories, lax functors,\\
\> \> \> \> \hspace{1em} and transformations
\end{tabbing}
\endinput
\chapter{Pasting Diagrams}\label{ch:pasting-string}
In this chapter we discuss pasting diagrams in $2$-categories and bicategories in general. Pasting diagrams provide a convenient and visual way to encode iterated vertical composites of horizontal composites of $2$-cells. For example, pasting diagrams are crucial parts of internal adjunctions, the proof of the local characterization of a biequivalence, tricategories, and various monoidal bicategories, all of which will be discussed later in this book. Furthermore, diagrammatic arguments using pasting diagrams are used throughout the literature.
Although it is not logically necessary, we will treat the $2$-category case first before tackling general bicategories because the former is much easier than the latter. We begin in \Cref{sec:pasting-2cat} with some examples to motivate the definitions of a pasting scheme and of a pasting diagram in a $2$-category. Such pasting schemes are defined precisely in \Cref{sec:pasting-scheme} using graph theoretic concepts. In \Cref{sec:2cat-pasting-theorem}, we first define pasting diagrams in $2$-categories. Then we prove a $2$-categorical pasting theorem that states that the composite of a pasting diagram in a $2$-category has a unique value regardless of which pasting scheme presentation is used.
Pasting diagrams in bicategories are discussed in \Cref{sec:bicategorical-pasting,sec:bicat-pscheme-extension,sec:bicat-pasting-theorem}, culminating in the Bicategorical Pasting \Cref{thm:bicat-pasting-theorem}. It states that each pasting diagram in a bicategory has a unique composite. \Cref{sec:string-diagrams} is about string diagrams, which provide an alternative diagrammatic formalism for pasting diagrams in $2$-categories and bicategories in general.
The discussion of graphs and diagrams in this chapter will require us
to introduce several new terms. In \cref{exercise:key-terms} we list
the key terminology and recommend that the reader make a glossary to
keep track of their meanings and relationships.
Throughout this chapter, $\A$ denotes a $2$-category, and $\B$ denotes a bicategory.
\section{Examples of \texorpdfstring{$2$}{2}-Categorical Pastings}\label{sec:pasting-2cat}
In this section we present a few motivating examples of pasting diagrams in $2$-categories. The precise definition will be given in the following sections.\index{pasting diagram!2-category examples}\index{2-category!pasting diagram}
\begin{motivation}\label{mot:pasting-2cat}
In category theory, commutative diagrams provide a convenient way to represent composites of morphisms, relationships between these composites, and the (co)domain objects involved. Analogously, in $2$-categories and more generally bicategories, a long expression involving vertical composites of horizontal composites of $2$-cells is not easy to read. Pasting diagrams provide a convenient way to represent iterated vertical composites of $2$-cells, each of which is a whiskering of one $2$-cell with some $1$-cells. In pasting diagrams, objects, $1$-cells, and $2$-cells are represented as vertices, edges, and double arrows in regions bounded by edges, respectively. The diagrammatic notation of a $2$-cell in \Cref{expl:bicategory} and the concept of whiskering in \Cref{def:whiskering} will be used often throughout the rest of this book.\dqed
\end{motivation}
Suppose $\A$ is a $2$-category. We begin with a few simple examples to illustrate the idea of pasting.
\begin{example}\label{ex:pasting-simple}
Consider the pasting diagram in $\A$ on the left:
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=1.2]
\node (W) at (0,1) {$W$}; \node (X) at (-1.5,0) {$X$};
\node (Y) at (0,-1) {$Y$}; \node (Z) at (1.5,0) {$Z$};
\node[font=\Large] at (-.6,0) {\rotatebox{-45}{$\Rightarrow$}};
\node at (-.5,.2) {\scriptsize{$\alpha$}};
\node[font=\Large] at (.5,0) {\rotatebox{-45}{$\Rightarrow$}};
\node at (.7,.2) {\scriptsize{$\beta$}};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge node[above] {$f$} (W)
(X) edge node[below] {$g$} (Y)
(Y) edge node[right, near start] {$h$} (W)
(Y) edge node[below] {$j$} (Z)
(W) edge node[above] {$i$} (Z);
\end{tikzpicture}\quad = \quad
\begin{tikzcd}
if \ar{r}{1_i *\alpha} & ihg \ar{r}{\beta*1_g} & jg
\end{tikzcd}\]
The ingredients in the pasting diagram are:
\begin{itemize}
\item $1$-cells $f\in\A(X,W)$, $g\in\A(X,Y)$, $h\in\A(Y,W)$, $i\in\A(W,Z)$, and $j\in\A(Y,Z)$;
\item $2$-cells $\alpha : f \to hg$ and $\beta : ih \to j$.
\end{itemize}
The entire pasting diagram represents the vertical composite $(\beta*1_g)(1_i*\alpha)$, which is a $2$-cell in $\A(X,Z)$, displayed on the right above.
\end{example}
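The bookkeeping in this example can be made mechanical. The following Python sketch uses an ad hoc encoding of our own: a composite $1$-cell is a tuple of $1$-cell names, and a $2$-cell is a (domain, codomain) pair. Since composition in a $2$-category is strictly associative, flat tuples suffice, and we can verify that the two whiskerings are vertically composable.

```python
# Ad hoc illustrative encoding (ours, not part of the text): a composite
# 1-cell is a tuple of 1-cell names, read right to left, so ("i", "f")
# stands for the composite if; a 2-cell is a (domain, codomain) pair.
alpha = (("f",), ("h", "g"))   # alpha : f => hg
beta = (("i", "h"), ("j",))    # beta  : ih => j

def whisker_left(k, two_cell):
    """1_k * theta : prepend the 1-cell k to both domain and codomain."""
    dom, cod = two_cell
    return ((k,) + dom, (k,) + cod)

def whisker_right(two_cell, k):
    """theta * 1_k : append the 1-cell k to both domain and codomain."""
    dom, cod = two_cell
    return (dom + (k,), cod + (k,))

step1 = whisker_left("i", alpha)   # 1_i * alpha : if => ihg
step2 = whisker_right(beta, "g")   # beta * 1_g  : ihg => jg
# The codomain of the first factor matches the domain of the second, so
# the vertical composite (beta * 1_g)(1_i * alpha) : if => jg exists.
assert step1[1] == step2[0]
```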
\begin{example}\label{ex:pasting-simple2}
With $1$-cells $f,g,i,j$ as in the previous example, consider the pasting diagram on the left:
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=1.2]
\node (W) at (0,1) {$W$}; \node (X) at (-1.5,0) {$X$};
\node (Y) at (0,-1) {$Y$}; \node (Z) at (1.5,0) {$Z$};
\node[font=\Large] at (-.7,0) {\rotatebox{-135}{$\Rightarrow$}};
\node at (-.5,-.1) {\scriptsize{$\alpha$}};
\node[font=\Large] at (.5,0) {\rotatebox{-135}{$\Rightarrow$}};
\node at (.7,-.1) {\scriptsize{$\beta$}};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge node[above] {$f$} (W)
(X) edge node[below] {$g$} (Y)
(W) edge node[right, near start] {$h$} (Y)
(Y) edge node[below] {$j$} (Z)
(W) edge node[above] {$i$} (Z);
\end{tikzpicture}\quad = \quad
\begin{tikzcd}
if \ar{r}{\beta*1_f} & jhf \ar{r}{1_j*\alpha} & jg
\end{tikzcd}\]
Here we have a $1$-cell $h\in\A(W,Y)$, and $2$-cells $\alpha : hf \to g$ and $\beta : i\to jh$. The entire pasting diagram represents the vertical composite $(1_j*\alpha)(\beta*1_f)$, which is a $2$-cell in $\A(X,Z)$, displayed on the right above. In \Cref{explanation:internal-adjunction}, we will see the previous two pasting diagrams in the definition of an internal adjunction in a $2$-category.\dqed
\end{example}
\begin{example}\label{ex:pasting-complicated}
Here is a more complicated pasting diagram in $\A$:
\[\begin{tikzpicture}[commutative diagrams/every diagram, scale=1.5]
\node (A) at (-2,0) {$A$}; \node (B) at (-1,1) {$B$};
\node (C) at (1,1) {$C$}; \node (D) at (2,0) {$D$};
\node (E) at (0,0) {$E$}; \node (F) at (-1,-1) {$F$};
\node (G) at (1,-1) {$G$};
\node[font=\Large] at (-1.2,.4) {\rotatebox{-45}{$\Rightarrow$}};
\node at (-1.05,.5) {\scriptsize{$\alpha$}};
\node[font=\Large] at (-.2,.7) {\rotatebox{-45}{$\Rightarrow$}};
\node at (0,.7) {\scriptsize{$\beta$}};
\node[font=\Large] at (-1.1,-.4) {\rotatebox{-90}{$\Rightarrow$}};
\node at (-.95,-.4) {\scriptsize{$\gamma$}};
\node[font=\Large] at (.2,-.5) {\rotatebox{-90}{$\Rightarrow$}};
\node at (.35,-.5) {\scriptsize{$\delta$}};
\node[font=\Large] at (1.4,0) {\rotatebox{-135}{$\Rightarrow$}};
\node at (1.5,-.1) {\scriptsize{$\theta$}};
\path[commutative diagrams/.cd, every arrow, every label]
(A) edge node[above] {$a$} (B)
(A) edge node[above, near end] {$b$} (E)
(A) edge node[below] {$c$} (F)
(F) edge node[right] {$e$} (E)
(E) edge node[right, near start] {$d$} (B)
(B) edge node[above] {$f$} (C)
(E) edge node[below] {$g$} (C)
(F) edge node[below] {$h$} (G)
(C) edge node[right, near start] {$i$} (G)
(C) edge node[above] {$j$} (D)
(G) edge node[below] {$k$} (D);
\end{tikzpicture}\]
The ingredients are:
\begin{itemize}
\item $1$-cells $a \in \A(A,B)$, $b\in\A(A,E)$, $c\in\A(A,F)$, $d\in\A(E,B)$, $e\in\A(F,E)$, $f\in\A(B,C)$, $g\in\A(E,C)$, $h\in\A(F,G)$, $i\in\A(C,G)$, $j\in\A(C,D)$, and $k\in\A(G,D)$;
\item $2$-cells $\alpha : a \to db$, $\beta : fd \to g$, $\gamma : b\to ec$, $\delta : ige \to h$, and $\theta : j \to ki$.
\end{itemize}
The entire pasting diagram is the vertical composite
\begin{equation}\label{complicated-vcomp}
\begin{tikzcd}[column sep=large]
jfa \ar{d}[swap]{1_{jf} * \alpha} &&& khc\\
jfdb \ar{r}{\theta * 1_{fdb}} & kifdb \ar{r}{1_{ki}*\beta*1_b} & kigb \ar{r}{1_{kig}*\gamma} & kigec \ar{u}[swap]{1_k*\delta* 1_c}\end{tikzcd}
\end{equation}
which is a $2$-cell in $\A(A,D)$.
Observe that this vertical composite is not the only one that makes sense. For example, the above vertical composite is equal to the vertical composite
\begin{equation}\label{complicated-vcomp2}
\begin{tikzcd}[column sep=large]
jfa \ar{d}[swap]{\theta*1_{fa}} &&& khc\\
kifa \ar{r}{1_{kif}*\alpha} & kifdb \ar{r}{1_{ki}*\beta*1_b} & kigb \ar{r}{1_{kig}*\gamma} & kigec \ar{u}[swap]{1_k*\delta* 1_c}\end{tikzcd}
\end{equation}
in which the first whiskering involves $\theta$ instead of $\alpha$. This is justified by the computation:
\begin{equation}\label{pasting-computation}
\begin{split}
(\theta * 1_{fdb})(1_{jf} * \alpha)
&= \bigl(\theta * 1_f * 1_{db}\bigr)(1_{jf}*\alpha)\\
&= \bigl((\theta * 1_f)(1_{jf})\bigr) * (1_{db}\alpha)\\
&= \theta * 1_f * \alpha\\
&= (1_{ki}\theta) * \bigl((1_f*\alpha)(1_{fa})\bigr)\\
&= \bigl(1_{ki} * 1_f * \alpha\bigr)(\theta * 1_{fa})\\
&= (1_{kif}*\alpha)(\theta*1_{fa}).
\end{split}
\end{equation}
In the above computation:
\begin{itemize}
\item The first and the last equalities follow from \eqref{bicat-c-id}, which is the fact that horizontal composition preserves identity $2$-cells.
\item The second and the second-to-last equalities follow from the middle four exchange \eqref{middle-four}.
\item The third and the fourth equalities follow from the fact that identity $2$-cells are units for vertical composition \eqref{hom-category-axioms}.
\end{itemize}
For later discussion of pasting diagrams in bicategories, we observe that the three properties we just used---namely, \eqref{hom-category-axioms}, \eqref{bicat-c-id}, and \eqref{middle-four}---also hold in bicategories.
In fact, there are $8$ possible ways to compose the pasting diagram above, and they are all equal to each other by computations similar to \eqref{pasting-computation}. This example illustrates the key property that each pasting diagram has a \emph{unique} composite. Therefore, when we draw a pasting diagram in a $2$-category, we do not need to choose which vertical composite it represents. If we have to compute with it, we may choose any vertical composite that makes sense.\dqed
\end{example}
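The count of $8$ composites can be verified mechanically: a way to compose the pasting diagram corresponds to an order in which the five $2$-cells are applied, subject to precedence constraints forced by matching (co)domains. The following Python sketch (the encoding of the constraints is ours, read off from the diagram) enumerates the valid orders.

```python
from itertools import permutations

# The five 2-cells of the example.  The precedence constraints are read
# off from the diagram: a 2-cell can only be applied once the 1-cells in
# its domain are present.  For instance, beta : fd => g needs the d
# produced by alpha, and delta : ige => h needs the i, g, and e produced
# by theta, beta, and gamma.  This encoding is ours, for illustration.
cells = ["alpha", "beta", "gamma", "delta", "theta"]
before = [("alpha", "beta"), ("alpha", "gamma"),
          ("beta", "delta"), ("gamma", "delta"), ("theta", "delta")]

def is_valid(order):
    pos = {c: i for i, c in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in before)

composites = [order for order in permutations(cells) if is_valid(order)]
print(len(composites))  # 8, as stated above
```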
Up to this point, we have not actually defined what a pasting diagram in a $2$-category is. We will do so precisely in the next two sections.
\section{Pasting Schemes}\label{sec:pasting-scheme}
In this section we define the concept of a pasting scheme, which will be used in \Cref{sec:2cat-pasting-theorem} to define pasting diagrams in $2$-categories precisely.
\begin{motivation}\label{mot:2cat-pasting-precise}
As the examples in \Cref{sec:pasting-2cat} suggest, a pasting diagram in a $2$-category is supposed to represent a vertical composite of $2$-cells, each being the whiskering of a $2$-cell with an identity $2$-cell on either side, or both sides.
\[\begin{tikzpicture}[scale=1, shorten >=-3pt, shorten <=-2pt]
\node (s) at (0,0) {$\bullet$}; \node (s1) at (1,0) {$\bullet$};
\node (F) at (2,0) {\rotatebox{270}{\LARGE{$\Rightarrow$}}};
\node (u) at (2,.7) {$\bullet$}; \node (v) at (1.5,-.7) {$\bullet$};
\node (w) at (2.5,-.7) {$\bullet$}; \node (t1) at (3,0) {$\bullet$};
\node (t) at (4,0) {$\bullet$};
\draw [arrow] (s) to (s1); \draw [arrow] (s1) to (u);
\draw [arrow] (s1) to (v); \draw [arrow] (v) to (w);
\draw [arrow] (u) to (t1); \draw [arrow] (w) to (t1); \draw [arrow] (t1) to (t);
\end{tikzpicture}\]
In order to state the precise definition of a pasting diagram and to prove the uniqueness of its composite, we first need some graph theoretic concepts to express nodes, edges, and regions bounded by edges.\dqed
\end{motivation}
\begin{definition}\label{def:graph}
A \emph{graph}\index{graph} is a triple \[G=(V_G,E_G,\psi_G)\] consisting of:
\begin{itemize}
\item a finite set $V_G$ of \emph{vertices}\index{vertex} with at least two elements;
\item a finite set $E_G$ of \emph{edges}\index{edge} with at least two elements such that $E_G \cap V_G = \varnothing$;
\item an \emph{incidence function}\index{incidence function} $\psi_G : E_G\to V_G^{\times 2}$. For each edge $e$, if $\psi_G(e)=(u,v)$, then $u$ and $v$ are called the \emph{tail}\index{tail} and the \emph{head}\index{head} of $e$, respectively, and together they are called the \emph{ends}\index{ends} of $e$.
\end{itemize}
Moreover:
\begin{enumerate}
\item The \emph{geometric realization}\index{geometric realization} of a graph $G$ is the topological quotient\label{notation:geo-real}
\[|G| = \Bigl[\bigl(\coprodover{v\in V_G} \{v\}\bigr) \coprod \bigl(\coprodover{e\in E_G} [0,1]_e\bigr)\Bigr]\Big/\sim\]
in which:
\begin{itemize}
\item $\{v\}$ is a one-point space indexed by a vertex $v$.
\item Each $[0,1]_e$ is a copy of the topological unit interval $[0,1]$ indexed by an edge $e$.
\item The identification $\sim$ is generated by
\[u \sim 0 \in [0,1]_e \ni 1 \sim v \ifspace \psi_G(e)=(u,v).\]
\end{itemize}
\item A \emph{plane graph}\index{plane graph} is a graph together with a topological embedding of its geometric realization into the \index{complex plane}complex plane\label{notation:complex-plane} $\fieldc$.\defmark
\end{enumerate}
\end{definition}
\begin{explanation}\label{expl:plane-graph}
It is important to note that in the definition of a plane graph, a specific topological embedding of its geometric realization into $\fieldc$, instead of just the existence of one, is required. In practice, each vertex $v$ is drawn either as a bullet\label{notation:vertex} $\bullet$ or as a circle $\raisebox{-.05cm}{\scalebox{.8}{\begin{tikzpicture}\node [draw,circle,thick,minimum size=.4cm,inner sep=0pt] {$v$};\end{tikzpicture}}}$ with the name of the vertex inside. Each edge $e$ with tail $u$ and head $v$ is drawn as an arrow from $u$ to $v$, as in:
\[\begin{tikzpicture}
\node [normaldot] (a) {}; \node [normaldot, right=1cm of a] (b) {};
\draw [arrow] (a) to node{$e$} (b); \end{tikzpicture}
\qquad\orspace\qquad
\begin{tikzpicture}
\node [smallplain] (u) {$u$};
\node [smallplain, right=1cm of u] (v) {$v$};
\draw [arrow] (u) to node{$e$} (v);
\end{tikzpicture}\]
A plane graph is a graph together with a drawing of it in the complex plane $\fieldc$ such that its edges meet only at their ends. To simplify the notation, we will identify a plane graph $G$ with its geometric realization $|G|$ and with the latter's topologically embedded image in $\fieldc$.\dqed
\end{explanation}
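The combinatorial part of \Cref{def:graph}, before any topology, translates directly into data. The following Python sketch (the class and all names are our own, for illustration only; the geometric realization and embedding are not encoded) records a triple $(V_G, E_G, \psi_G)$ and validates the stated requirements. The sample data encodes the atomic graph of \Cref{ex:atomic-graph}.

```python
# A graph (V_G, E_G, psi_G) as a data structure: psi sends each edge to
# its (tail, head) pair.  Illustrative encoding only; the topological
# part of the definition is not modeled here.
class Graph:
    def __init__(self, vertices, edges, psi):
        assert len(vertices) >= 2                  # at least two vertices
        assert len(edges) >= 2                     # at least two edges
        assert not (set(edges) & set(vertices))    # E_G and V_G disjoint
        assert set(psi) == set(edges)              # psi defined on all edges
        assert all(u in vertices and v in vertices
                   for (u, v) in psi.values())     # ends are vertices
        self.V, self.E, self.psi = set(vertices), set(edges), dict(psi)

    def ends(self, e):
        """The (tail, head) pair of an edge e."""
        return self.psi[e]

# Sample data: the atomic graph of a later example, with source s,
# sink t, and an interior face bounded by h1, ..., h5.
G = Graph(
    ["s", "sF", "u", "v", "w", "tF", "t"],
    ["f", "h1", "h2", "h3", "h4", "h5", "g"],
    {"f": ("s", "sF"), "h1": ("sF", "u"), "h2": ("u", "tF"),
     "h3": ("sF", "v"), "h4": ("v", "w"), "h5": ("w", "tF"),
     "g": ("tF", "t")},
)
```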
For the purpose of discussing pasting diagrams, we need plane graphs with special properties, which we define next.
\begin{definition}\label{def:graph-path}
Suppose $G= (V_G,E_G,\psi_G)$ is a graph.
\begin{enumerate}
\item A \emph{path}\index{path} in $G$ is an alternating sequence\label{notation:path} $v_0e_1v_1\cdots e_nv_n$ with $n\geq 0$ of vertices $v_i$'s and edges $e_i$'s such that:
\begin{itemize}
\item each $e_i$ has ends $\{v_{i-1},v_i\}$;
\item the vertices $v_i$'s are distinct.
\end{itemize}
This is also called a \emph{path from $v_0$ to $v_n$}. A path is \emph{trivial} if $n=0$, and is \emph{non-trivial} if $n\geq 1$.
\item If $p = v_0e_1v_1\cdots e_nv_n$ is a path, then
\[p^* = v_ne_n\cdots v_1e_1v_0\] is the \emph{reversed path}\index{reversed path} from $v_n$ to $v_0$.
\item A \emph{directed path}\index{directed path} is a path such that each $e_i$ has head $v_i$.
\item $G$ is \emph{connected}\index{connected} if for each pair of distinct vertices $\{u,v\}$, there exists a path from $u$ to $v$.\defmark
\end{enumerate}
\end{definition}
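The path conditions just defined are mechanical to check. The following Python sketch (an illustrative encoding of our own, with the incidence function as a dictionary from edge names to (tail, head) pairs) tests whether an alternating sequence is a directed path.

```python
# The incidence function psi of a graph, as a dictionary from edge names
# to (tail, head) pairs.  A path v0 e1 v1 ... en vn is encoded as the
# alternating list [v0, e1, v1, ..., en, vn].  Names are ours.
def is_directed_path(psi, seq):
    vertices, edges = seq[0::2], seq[1::2]
    if len(set(vertices)) != len(vertices):   # vertices must be distinct
        return False
    # each edge e_i must have tail v_{i-1} and head v_i
    return all(psi[e] == (vertices[i], vertices[i + 1])
               for i, e in enumerate(edges))

psi = {"f": ("s", "sF"), "h1": ("sF", "u"), "h2": ("u", "tF"),
       "g": ("tF", "t")}
# The directed path s -f-> sF -h1-> u -h2-> tF -g-> t:
print(is_directed_path(psi, ["s", "f", "sF", "h1", "u", "h2", "tF", "g", "t"]))
```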
\begin{convention}\label{conv:graph-identification}
Using the orientation of the complex plane $\fieldc$, we identify two connected plane graphs if they are related by an orientation-preserving, incidence relation-preserving homeomorphism that maps vertices to vertices and edges to edges.\dqed
\end{convention}
To express $1$-cells, $2$-cells, and their (co)domains in graph theoretic terms, we need suitable concepts of faces and boundaries, which we define next.
\begin{definition}\label{def:faces}
Suppose $G$ is a connected plane graph.
\begin{enumerate}
\item The connected components of the complement $\fieldc \setminus |G|$ are called the \emph{open faces}\index{open face} of $G$. Their closures are called \emph{faces}\index{face} of $G$. The unique unbounded face is called the \index{exterior face}\emph{exterior face}, denoted by\label{notation:extg} $\ext_G$. The bounded faces are called \index{interior face}\emph{interior faces}.
\item The vertices and edges in the boundary\label{notation:boundary} $\bd_F$ of a face $F$ of $G$ form an alternating sequence $v_0e_1v_1\cdots e_nv_n$ of vertices and edges such that:
\begin{itemize}
\item $v_0 = v_n$.
\item The ends of $e_i$ are $\{v_{i-1},v_i\}$.
\item Traversing $\bd_F$ from $v_0$ to $v_n=v_0$ along the edges $e_1, e_2,\ldots,e_n$ in this order, ignoring their tail-to-head orientation, the face $F$ is always on the right-hand side.
\end{itemize}
\item An interior face $F$ of $G$ is \emph{anchored}\index{anchored} if it is equipped with
\begin{itemize}
\item two distinct vertices\label{notation:source} $s_F$ and $t_F$, called the \emph{source}\index{source} and the \index{sink}\emph{sink} of $F$, respectively, and
\item two directed paths\label{notation:dom-codom} $\dom_F$ and $\codom_F$ from $s_F$ to $t_F$, called the \emph{domain}\index{domain} and the \emph{codomain}\index{codomain} of $F$, respectively,
\end{itemize}
such that \[\bd_F = \dom_F \codom_F^*\] with the first vertex $t_F$ in $\codom_F^*$ removed on the right-hand side.
\item The exterior face of $G$ is \emph{anchored} if it is equipped with
\begin{itemize}
\item two distinct vertices $s_G$ and $t_G$, called the \emph{source} and the \emph{sink} of $G$, respectively, and
\item two directed paths $\dom_G$ and $\codom_G$ from $s_G$ to $t_G$, called the \emph{domain} and the \emph{codomain} of $G$, respectively,
\end{itemize}
such that \[\bd_{\ext_G} = \codom_G\dom_G^*\] with the first vertex $t_G$ in $\dom_G^*$ removed on the right-hand side.
\item $G$ is \emph{anchored}\index{anchored!graph}\index{graph!anchored} if every face of $G$ is anchored.
\item $G$ is an \emph{atomic graph}\index{atomic graph}\index{graph!atomic} if it is an anchored graph with exactly one interior face.\defmark
\end{enumerate}
\end{definition}
\begin{explanation}
In an anchored graph, the boundary of each interior face is oriented clockwise. On the other hand, the boundary of the exterior face is oriented counter-clockwise.\dqed
\end{explanation}
\begin{example}\label{ex:atomic-graph}
Here is an atomic graph $G$
\[\begin{tikzpicture}[scale=.7]
\node [plain] (s) {$s$}; \node [plain, right=1cm of s] (s1) {$s_F$};
\node [right=1cm of s1] (F) {$F$}; \node [plain, above=.5cm of F] (u) {$u$}; \node [plain, below left=.8cm and .4cm of F] (v) {$v$};
\node [plain, below right=.8cm and .4cm of F] (w) {$w$};
\node [plain, right=1cm of F] (t1) {$t_F$};
\node [plain, right=1cm of t1] (t) {$t$};
\node [above=.7cm of s] () {$\ext_G$};
\draw [arrow] (s) to node {\scriptsize{$f$}} (s1);
\draw [arrow] (s1) to node {\scriptsize{$h_1$}} (u);
\draw [arrow] (s1) to node[swap] {\scriptsize{$h_3$}} (v);
\draw [arrow] (v) to node {\scriptsize{$h_4$}} (w);
\draw [arrow] (u) to node {\scriptsize{$h_2$}} (t1);
\draw [arrow] (w) to node[swap] {\scriptsize{$h_5$}} (t1);
\draw [arrow] (t1) to node {\scriptsize{$g$}} (t);
\end{tikzpicture}\]
with:
\begin{itemize}
\item unique interior face $F$ with source $s_F$, sink $t_F$, $\dom_F = s_F h_1 u h_2 t_F$, and $\codom_F = s_F h_3 v h_4 w h_5 t_F$;
\item exterior face $\ext_G$ with source $s$, sink $t$,
\[\begin{split}
\dom_G &= sfs_F h_1 u h_2 t_Fgt,\\
\codom_G &=sfs_F h_3 v h_4 w h_5 t_Fgt.
\end{split}\]
\end{itemize}
Comparing this example with \Cref{mot:2cat-pasting-precise}, we see that an atomic graph is the graph theoretic manifestation of a whiskering of a $2$-cell with identity $2$-cells on either side, or both sides.\dqed
\end{example}
\begin{lemma}\label{atomic-domain}
If $G$ is an atomic graph with unique interior face $F$, then
\[\dom_F \subseteq \dom_G \andspace \codom_F \subseteq \codom_G.\]
\end{lemma}
\begin{proof}
Since $G$ has only one interior face, the boundary $\bd_{\ext_G}=\codom_G\dom_G^*$ of the exterior face contains all of its edges. Traversing an edge $e$ in $\dom_F$ from its tail to its head, $F$ is on the right-hand side, so $\ext_G$ is on the left-hand side. Therefore, $e$ cannot be contained in the directed path $\codom_G$. This proves the first containment. The second containment is proved similarly.
\end{proof}
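For the atomic graph of \Cref{ex:atomic-graph}, the containments in \Cref{atomic-domain} can be checked edge by edge. A small Python sketch, using our own names for the edge sets read off from the picture:

```python
# Edge sets of the four directed paths in the atomic graph example;
# the names match the labels in the picture and are ours.
dom_F, codom_F = {"h1", "h2"}, {"h3", "h4", "h5"}
dom_G = {"f", "h1", "h2", "g"}
codom_G = {"f", "h3", "h4", "h5", "g"}
# dom_F is contained in dom_G, and codom_F in codom_G:
print(dom_F <= dom_G and codom_F <= codom_G)
```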
\begin{explanation}\label{expl:anchored-graph}
It follows from \Cref{atomic-domain} that each atomic graph $G$ consists of its unique interior face $F$, a directed path from the source $s$ of $G$ to the source $s_F$ of $F$, and a directed path from the sink $t_F$ of $F$ to the sink $t$ of $G$.\dqed
\end{explanation}
Next we define a composition on anchored graphs that mimics the vertical composition of $2$-cells in a bicategory.
\begin{definition}\label{def:anchored-composition}
Suppose $G$ and $H$ are anchored graphs such that:
\begin{itemize}
\item $s_G=s_H$,
\item $t_G=t_H$, and
\item $\codom_G = \dom_H$.
\end{itemize}
The \index{vertical composition!anchored graph}\emph{vertical composite}\label{notation:vcomp-graph} $HG$ is the anchored graph defined by the following data.
\begin{enumerate}
\item The connected plane graph of $HG$ is the quotient
\[\dfrac{G \sqcup H}{\bigl\{\codom_G\,=\,\dom_H\bigr\}}\] of the disjoint union of $G$ and $H$, with the codomain of $G$ identified with the domain of $H$.
\item The interior faces of $HG$ are the interior faces of $G$ and $H$, which are already anchored.
\item The exterior face of $HG$ is the intersection of $\ext_G$ and $\ext_H$, with
\begin{itemize}
\item source $s_G=s_H$,
\item sink $t_G=t_H$,
\item domain $\dom_G$, and
\item codomain $\codom_H$.
\end{itemize}
\end{enumerate}
This finishes the definition of the anchored graph $HG$.
\end{definition}
The following observation ensures that taking vertical composites is associative.
\begin{lemma}\label{graph-comp-associative}
If $G$, $H$, and $I$ are anchored graphs such that the vertical composites $IH$ and $HG$ are defined, then \[(IH)G=I(HG).\]
\end{lemma}
\begin{proof}
Both iterated vertical composites have:
\begin{itemize}
\item connected plane graphs
\[\dfrac{G \sqcup H \sqcup I}{\bigl\{\codom_G\,=\,\dom_H,\, \codom_H\,=\, \dom_I\bigr\}};\]
\item interior faces those in $G$, $H$, and $I$, anchored as they are there;
\item exterior face $\ext_G\cap\ext_H\cap\ext_I$;
\item source $s_G=s_H=s_I$;
\item sink $t_G=t_H=t_I$;
\item domain $\dom_G$;
\item codomain $\codom_I$.
\end{itemize}
This proves the lemma.
\end{proof}
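The gluing conditions of \Cref{def:anchored-composition} and the associativity just proved can be mirrored at the level of boundary data. The following Python sketch (an illustrative encoding of our own, recording only sources, sinks, and boundary paths, and omitting the plane embedding and interior faces) checks the compatibility conditions and the associativity equation.

```python
from collections import namedtuple

# An anchored graph reduced to its boundary data: source, sink, and the
# domain/codomain directed paths, recorded here just as tuples of edge
# names.  This sketch illustrates only the gluing conditions.
Anchored = namedtuple("Anchored", ["source", "sink", "dom", "cod"])

def vcomp(H, G):
    """Boundary data of the vertical composite HG."""
    assert G.source == H.source and G.sink == H.sink
    assert G.cod == H.dom   # codomain of G is glued to the domain of H
    return Anchored(G.source, G.sink, G.dom, H.cod)

G = Anchored("s", "t", ("f", "h1", "g"), ("f", "h2", "g"))
H = Anchored("s", "t", ("f", "h2", "g"), ("f", "h3", "g"))
I = Anchored("s", "t", ("f", "h3", "g"), ("f", "h4", "g"))

# Associativity at the level of boundary data, as in the lemma:
assert vcomp(I, vcomp(H, G)) == vcomp(vcomp(I, H), G)
```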
Using \Cref{graph-comp-associative}, we will safely omit parentheses when we write iterated vertical composites of anchored graphs. Next is the main graph theoretic definition of this section.
\begin{definition}\label{def:pasting-scheme}
A \emph{pasting scheme}\index{pasting scheme} is an anchored graph $G$ together with a decomposition
\[G=G_n\cdots G_1\]
into vertical composites of $n \geq 1$ atomic graphs $G_1,\ldots,G_n$. Such a decomposition is called a \index{pasting scheme!presentation}\emph{pasting scheme presentation} of $G$.
\end{definition}
\begin{explanation}\label{expl:pasting-scheme}
We are \emph{not} asserting that each anchored graph admits a pasting scheme presentation. In fact, as we will see shortly, there exist anchored graphs that cannot be made into pasting schemes. We are also \emph{not} asserting the uniqueness of a pasting scheme presentation for a given anchored graph. If $G$ admits a pasting scheme presentation $G_n\cdots G_1$, then:
\begin{itemize}
\item $G$ has $n$ interior faces, one in each atomic graph $G_i$ for $1\leq i \leq n$.
\item Each $G_i$ has the same source and the same sink as $G$ by the definition of vertical composite.
\item The codomain of $G_i$ is equal to the domain of $G_{i+1}$ for each $1\leq i \leq n-1$.
\item The domains of $G$ and $G_1$ are equal. The codomains of $G$ and $G_n$ are equal.
\item If $1\leq i\leq j\leq n$, then $G_j\cdots G_i$ is a pasting scheme.\dqed
\end{itemize}
\end{explanation}
The rest of this section contains examples.
\begin{example}\label{ex:anchored-not-pscheme}
The anchored graph $G$
\[\begin{tikzpicture}
\matrix[row sep=.4cm,column sep=.8cm,ampersand replacement=\&] {
\& \node [smallplain] (u) {$u$}; \&\\
\node [smallplain] (s) {$s$}; \& \node [smallplain] (v) {$v$}; \&
\node [smallplain] (t) {$t$}; \\
\& \node [smallplain] (w) {$w$}; \&\\};
\draw [arrow] (s) to (u); \draw [arrow] (s) to (w);
\draw [arrow] (v) to (s); \draw [arrow] (v) to (u); \draw [arrow] (v) to (w);
\draw [arrow] (u) to (t); \draw [arrow] (w) to (t);
\end{tikzpicture}\]
\emph{cannot} be made into a pasting scheme. This anchored graph has five vertices $\{s,t,u,v,w\}$ and seven edges.
\begin{itemize}
\item The exterior face is anchored with source $s$, sink $t$, domain the directed path from $s$ to $t$ via $u$, and codomain the directed path from $s$ to $t$ via $w$.
\item Each of the three interior faces is anchored with source $v$, and sink $u$, $w$, or $t$.
\end{itemize}
The anchored graph $G$ does \emph{not} admit a pasting scheme presentation because it contains no atomic subgraphs with source $s$ and sink $t$: every interior face of $G$ has source $v$, but there is no directed path in $G$ from $s$ to $v$.\dqed
\end{example}
\begin{example}\label{ex:different-embedding}
Let us reuse the same underlying graph $G$ in \Cref{ex:anchored-not-pscheme} to explain the importance of a specific topological embedding of the geometric realization into $\fieldc$. Another topological embedding of $|G|$ into $\fieldc$ yields the anchored graph $G'$
\[\begin{tikzpicture}
\matrix[row sep=.4cm,column sep=.8cm,ampersand replacement=\&] {
\& \node [smallplain] (u) {$u$}; \&\\
\node [smallplain] (v) {$v$}; \& \node [smallplain] (s) {$s$}; \&
\node [smallplain] (t) {$t$}; \\
\& \node [smallplain] (w) {$w$}; \&\\};
\draw [arrow] (s) to (u); \draw [arrow] (s) to (w);
\draw [arrow] (v) to (s); \draw [arrow] (v) to (u); \draw [arrow] (v) to (w);
\draw [arrow] (u) to (t); \draw [arrow] (w) to (t);
\end{tikzpicture}\]
with source $v$ and sink $t$. It has a unique pasting scheme presentation \[G'=G'_3G'_2G'_1,\] where $G'_1$, $G'_2$, and $G'_3$ are the atomic graphs below from left to right.
\[\begin{tikzpicture}
\matrix[row sep=.4cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G'_1$}; \&\\
\& \node [smallplain] (u) {$u$}; \&\\
\node [smallplain] (v) {$v$}; \& \node [smallplain] (s) {$s$}; \&
\node [smallplain] (t) {$t$};\\};
\draw [arrow] (s) to (u); \draw [arrow] (v) to (s);
\draw [arrow] (v) to (u); \draw [arrow] (u) to (t);
\end{tikzpicture}\qquad
\begin{tikzpicture}
\matrix[row sep=.4cm,column sep=.8cm,ampersand replacement=\&] {
\node () {$G'_2$}; \& \node [smallplain] (u) {$u$}; \&\\
\node [smallplain] (v) {$v$}; \& \node [smallplain] (s) {$s$}; \&
\node [smallplain] (t) {$t$}; \\
\& \node [smallplain] (w) {$w$}; \&\\};
\draw [arrow] (s) to (u); \draw [arrow] (s) to (w); \draw [arrow] (v) to (s);
\draw [arrow] (u) to (t); \draw [arrow] (w) to (t);
\end{tikzpicture}\qquad
\begin{tikzpicture}
\matrix[row sep=.4cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G'_3$}; \&\\
\node [smallplain] (v) {$v$}; \& \node [smallplain] (s) {$s$}; \&
\node [smallplain] (t) {$t$}; \\
\& \node [smallplain] (w) {$w$}; \&\\};
\draw [arrow] (s) to (w); \draw [arrow] (v) to (s); \draw [arrow] (v) to (w);
\draw [arrow] (w) to (t);
\end{tikzpicture}\]
In summary, changing the topological embedding of the geometric realization into $\fieldc$ can change whether the resulting anchored graph admits a pasting scheme presentation or not.\dqed
\end{example}
\begin{example}\label{ex:pasting-scheme-simple}
Corresponding to \Cref{ex:pasting-simple} is the anchored graph $G$
\[\begin{tikzpicture}[scale=.5]
\matrix[row sep=.3cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G$}; \&\\
\& \node[normaldot] (w) {}; \&\\
\node[normaldot] (x) {}; \&\& \node[normaldot] (z) {};\\
\& \node[normaldot] (y) {};\\};
\draw [arrow] (x) to (w); \draw [arrow] (x) to (y);
\draw [arrow] (y) to (w); \draw [arrow] (y) to (z); \draw [arrow] (w) to (z);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=.5]
\matrix[row sep=.3cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G_1$}; \&\\
\& \node[normaldot] (w) {}; \&\\
\node[normaldot] (x) {}; \& \& \node[normaldot] (z) {};\\
\& \node[normaldot] (y) {};\\};
\draw [arrow] (x) to (w); \draw [arrow] (x) to (y);
\draw [arrow] (y) to (w); \draw [arrow] (w) to (z);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=.5]
\matrix[row sep=.3cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G_2$}; \&\\
\& \node[normaldot] (w) {}; \&\\
\node[normaldot] (x) {}; \& \& \node[normaldot] (z) {};\\
\& \node[normaldot] (y) {};\\};
\draw [arrow] (x) to (y); \draw [arrow] (y) to (w);
\draw [arrow] (y) to (z); \draw [arrow] (w) to (z);
\end{tikzpicture}\]
on the left. It has a unique pasting scheme presentation $G=G_2G_1$.\dqed
\end{example}
\begin{example}\label{ex:pasting-scheme-simple2}
Corresponding to \Cref{ex:pasting-simple2} is the anchored graph $G$
\[\begin{tikzpicture}[scale=.5]
\matrix[row sep=.3cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G$}; \&\\
\& \node[normaldot] (w) {}; \&\\
\node[normaldot] (x) {}; \& \& \node[normaldot] (z) {};\\
\& \node[normaldot] (y) {};\\};
\draw [arrow] (x) to (w); \draw [arrow] (x) to (y);
\draw [arrow] (w) to (y); \draw [arrow] (y) to (z); \draw [arrow] (w) to (z);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=.5]
\matrix[row sep=.3cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G_1$}; \&\\
\& \node[normaldot] (w) {}; \&\\
\node[normaldot] (x) {}; \& \& \node[normaldot] (z) {};\\
\& \node[normaldot] (y) {};\\};
\draw [arrow] (x) to (w); \draw [arrow] (w) to (y);
\draw [arrow] (w) to (z); \draw [arrow] (y) to (z);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=.5]
\matrix[row sep=.3cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G_2$}; \&\\
\& \node[normaldot] (w) {}; \&\\
\node[normaldot] (x) {}; \& \& \node[normaldot] (z) {};\\
\& \node[normaldot] (y) {};\\};
\draw [arrow] (x) to (y); \draw [arrow] (x) to (w);
\draw [arrow] (w) to (y); \draw [arrow] (y) to (z);
\end{tikzpicture}\]
on the left. It has a unique pasting scheme presentation $G=G_2G_1$.\dqed
\end{example}
\begin{example}\label{ex:pasting-scheme-complicated}
Corresponding to \Cref{ex:pasting-complicated} is the anchored graph $G$
\[\begin{tikzpicture}[scale=.4]
\matrix[row sep=.3cm,column sep=.5cm,ampersand replacement=\&] {
\& \node[normaldot] (b) {}; \& \& \node[normaldot] (c) {}; \&\\
\node[normaldot] (a) {}; \&\& \node[normaldot] (e) {}; \&\&
\node[normaldot] (d) {};\\
\& \node[normaldot] (f) {}; \& \& \node[normaldot] (g) {}; \&\\};
\draw [arrow] (a) to (b); \draw [arrow] (a) to (e); \draw [arrow] (a) to (f); \draw [arrow] (b) to (c); \draw [arrow] (c) to (d); \draw [arrow] (c) to (g);
\draw [arrow] (e) to (b); \draw [arrow] (e) to (c);
\draw [arrow] (f) to (e); \draw [arrow] (f) to (g); \draw [arrow] (g) to (d);
\end{tikzpicture}\]
with a pasting scheme presentation $G=G_5G_4G_3G_2G_1$ given by the atomic graphs:
\[\begin{tikzpicture}[scale=.3]
\matrix[row sep=.3cm,column sep=.5cm,ampersand replacement=\&] {
\node () {$G_1$}; \& \node[normaldot] (b) {}; \&\& \node[normaldot] (c) {}; \&\\
\node[normaldot] (a) {}; \&\& \node[normaldot] (e) {}; \&\&
\node[normaldot] (d) {};\\
\node(){};\&\&\&\& \\};
\draw [arrow] (a) to (b); \draw [arrow] (a) to (e); \draw [arrow] (b) to (c); \draw [arrow] (c) to (d); \draw [arrow] (e) to (b);
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=.3]
\matrix[row sep=.3cm,column sep=.5cm,ampersand replacement=\&] {
\node () {$G_2$}; \& \node[normaldot] (b) {}; \& \& \node[normaldot] (c) {}; \&\\
\node[normaldot] (a) {}; \&\& \node[normaldot] (e) {}; \&\&
\node[normaldot] (d) {};\\
\& \& \& \node[normaldot] (g) {}; \&\\};
\draw [arrow] (a) to (e); \draw [arrow] (b) to (c); \draw [arrow] (c) to (d); \draw [arrow] (c) to (g); \draw [arrow] (e) to (b); \draw [arrow] (g) to (d);
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=.3]
\matrix[row sep=.3cm,column sep=.5cm,ampersand replacement=\&] {
\node () {$G_3$}; \& \node[normaldot] (b) {}; \& \& \node[normaldot] (c) {}; \&\\
\node[normaldot] (a) {}; \&\& \node[normaldot] (e) {}; \&\&
\node[normaldot] (d) {};\\
\& \& \& \node[normaldot] (g) {}; \&\\};
\draw [arrow] (a) to (e); \draw [arrow] (b) to (c); \draw [arrow] (c) to (g); \draw [arrow] (e) to (b); \draw [arrow] (e) to (c); \draw [arrow] (g) to (d);
\end{tikzpicture}
\]
\[\begin{tikzpicture}[scale=.3]
\matrix[row sep=.3cm,column sep=.5cm,ampersand replacement=\&] {
\node(){$G_4$}; \& \& \& \node[normaldot] (c) {}; \&\\
\node[normaldot] (a) {}; \&\& \node[normaldot] (e) {}; \&\&
\node[normaldot] (d) {};\\
\& \node[normaldot] (f) {}; \& \& \node[normaldot] (g) {}; \&\\};
\draw [arrow] (a) to (e); \draw [arrow] (a) to (f); \draw [arrow] (c) to (g);
\draw [arrow] (e) to (c); \draw [arrow] (f) to (e); \draw [arrow] (g) to (d);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=.3]
\matrix[row sep=.3cm,column sep=.5cm,ampersand replacement=\&] {
\node(){$G_5$}; \& \& \& \node[normaldot] (c) {}; \&\\
\node[normaldot] (a) {}; \&\& \node[normaldot] (e) {}; \&\&
\node[normaldot] (d) {};\\
\& \node[normaldot] (f) {}; \& \& \node[normaldot] (g) {}; \&\\};
\draw [arrow] (a) to (f); \draw [arrow] (c) to (g); \draw [arrow] (e) to (c); \draw [arrow] (f) to (e); \draw [arrow] (f) to (g); \draw [arrow] (g) to (d);
\end{tikzpicture}\]
Specifically, the above pasting scheme presentation corresponds to the vertical composite in \eqref{complicated-vcomp}.
Another pasting scheme presentation of $G$ is $G_5G_4G_3G_2'G_1'$ with $G_1'$ and $G_2'$ the following atomic graphs.
\[\begin{tikzpicture}[scale=.3]
\matrix[row sep=.3cm,column sep=.5cm,ampersand replacement=\&] {
\node(){$G'_1$}; \& \node[normaldot] (b) {}; \& \& \node[normaldot] (c) {}; \&\\
\node[normaldot] (a) {}; \&\&\&\& \node[normaldot] (d) {};\\
\&\& \& \node[normaldot] (g) {}; \&\\};
\draw [arrow] (a) to (b); \draw [arrow] (b) to (c); \draw [arrow] (c) to (d); \draw [arrow] (c) to (g); \draw [arrow] (g) to (d);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=.3]
\matrix[row sep=.3cm,column sep=.5cm,ampersand replacement=\&] {
\node(){$G'_2$}; \& \node[normaldot] (b) {}; \& \& \node[normaldot] (c) {}; \&\\
\node[normaldot] (a) {}; \&\& \node[normaldot] (e) {}; \&\&
\node[normaldot] (d) {};\\
\&\& \& \node[normaldot] (g) {}; \&\\};
\draw [arrow] (a) to (b); \draw [arrow] (a) to (e); \draw [arrow] (b) to (c); \draw [arrow] (c) to (g); \draw [arrow] (e) to (b); \draw [arrow] (g) to (d);
\end{tikzpicture}\]
This second pasting scheme presentation of $G$ corresponds to the vertical composite in \eqref{complicated-vcomp2}. In fact, there are precisely $8$ pasting scheme presentations of $G$, corresponding to the $8$ composites mentioned in \Cref{ex:pasting-complicated}.\dqed
\end{example}
\section{\texorpdfstring{$2$}{2}-Categorical Pasting Theorem}
\label{sec:2cat-pasting-theorem}
In this section we define pasting diagrams in $2$-categories and prove a uniqueness result about their composites.
\begin{definition}\label{def:2cat-pasting-diagram}
Suppose $\A$ is a $2$-category, and $G$ is an anchored graph. A \index{diagram!G@$G$-}\emph{$G$-diagram} in $\A$ is an assignment $\phi$ as follows.
\begin{itemize}
\item $\phi$ assigns to each vertex $v$ in $G$ an object $\phi_v$ in $\A$.
\item $\phi$ assigns to each edge $e$ in $G$ with tail $u$ and head $v$ a $1$-cell \[\phi_e \in \A(\phi_u,\phi_v).\] For a directed path $P = v_0e_1v_1\cdots e_mv_m$ in $G$ with $m \geq 1$, define the horizontal composite $1$-cell
\[\phi_P =\phi_{e_m}\cdots\phi_{e_1} \in \A(\phi_{v_0},\phi_{v_m}).\]
\item $\phi$ assigns to each interior face $F$ of $G$ a $2$-cell
\[\phi_F : \phi_{\dom_F} \to \phi_{\codom_F}\] in $\A(\phi_{s_F},\phi_{t_F})$.
\end{itemize}
If $G$ admits a pasting scheme presentation, then a $G$-diagram is called a \emph{pasting diagram}\index{pasting diagram!2-category}\index{diagram!pasting -, in a 2-category} in $\A$ of shape $G$.
\end{definition}
Recall that a path in a graph is trivial (resp., non-trivial) if it contains no edges (resp., at least one edge).
\begin{definition}\label{def:2cat-pasting-composite}
Suppose $\phi$ is a pasting diagram in a $2$-category $\A$ of shape $G$. With respect to a pasting scheme presentation $G_n\cdots G_1$ of $G$:
\begin{enumerate}
\item For each $1\leq i \leq n$, suppose $F_i$ is the unique interior face of $G_i$, with $P_i$ the directed path in $G_i$ from $s_G$ to $s_{F_i}$, and $P_i'$ the directed path in $G_i$ from $t_{F_i}$ to $t_G$. Define the $2$-cell
\[\phi_{G_i} : \phi_{\dom_{G_i}} \to \phi_{\codom_{G_i}}\] in $\A(\phi_{s_G},\phi_{t_G})$ by
\[\phi_{G_i} = \begin{cases}
1_{\phi_{P_i'}} * \phi_{F_i} * 1_{\phi_{P_i}} & \text{if $P_i$ and $P_i'$ are non-trivial};\\
1_{\phi_{P_i'}} * \phi_{F_i} & \text{if $P_i$ is trivial, and $P_i'$ is non-trivial};\\
\phi_{F_i} * 1_{\phi_{P_i}} & \text{if $P_i$ is non-trivial, and $P_i'$ is trivial};\\
\phi_{F_i} & \text{if $P_i$ and $P_i'$ are trivial}.
\end{cases}\]
\item The \index{composite!pasting diagram in a 2-category}\emph{composite of $\phi$}, denoted by\label{notation:2pasting-comp} $|\phi|$, is defined as the vertical composite
\[|\phi| = \phi_{G_n}\cdots \phi_{G_1} : \phi_{\dom_G} = \phi_{\dom_{G_1}} \to \phi_{\codom_{G_n}}=\phi_{\codom_G},\] which is a $2$-cell in $\A(\phi_{s_G},\phi_{t_G})$.\defmark
\end{enumerate}
\end{definition}
\begin{example}\label{ex:atomic-pasting}
For the atomic graph $G$ in \Cref{ex:atomic-graph}, a pasting diagram $\phi$ in $\A$ of shape $G$ is a diagram
\[\begin{tikzpicture}[commutative diagrams/every diagram, scale=1.6]
\node (s) at (-2,0) {$S$}; \node (s1) at (-1,0) {$S_F$};
\node [font=\LARGE] at (-.1,-.1) {\rotatebox{270}{$\Rightarrow$}};
\node at (.1,-.1) {$\theta$}; \node (u) at (0,.5) {$U$};
\node (v) at (-.5,-.6) {$V$}; \node (w) at (.5,-.6) {$W$};
\node (t1) at (1,0) {$T_F$}; \node (t) at (2,0) {$T$};
\draw [arrow] (s) to node{\scriptsize{$f$}} (s1);
\draw [arrow] (s1) to node{\scriptsize{$h_1$}} (u);
\draw [arrow] (s1) to node[swap]{\scriptsize{$h_3$}} (v);
\draw [arrow] (v) to node[swap]{\scriptsize{$h_4$}} (w);
\draw [arrow] (u) to node{\scriptsize{$h_2$}} (t1);
\draw [arrow] (w) to node[swap]{\scriptsize{$h_5$}} (t1);
\draw [arrow] (t1) to node{\scriptsize{$g$}} (t);
\end{tikzpicture}\]
of
\begin{itemize}
\item objects $S,T,U,V,W,S_F$, and $T_F$;
\item $1$-cells $h_1,\ldots,h_5,f$, and $g$;
\item a $2$-cell $\theta : h_2h_1 \to h_5h_4h_3$ in $\A(S_F,T_F)$.
\end{itemize}
Its composite is the horizontal composite
\[1_{g} * \theta * 1_{f} : gh_2h_1f \to gh_5h_4h_3f,\]
which is a $2$-cell in $\A(S,T)$.\dqed
\end{example}
\begin{example}\label{ex:different-embedding-pasting}
For the pasting scheme $G'=G'_3G'_2G'_1$ in \Cref{ex:different-embedding}, a pasting diagram $\phi$ in $\A$ of shape $G'$ is a diagram
\[\begin{tikzpicture}[commutative diagrams/every diagram, scale=.8]
\node (v) at (-3,0) {$V$}; \node (s) at (0,0) {$S$};
\node (t) at (3,0) {$T$}; \node (u) at (0,2) {$U$};
\node (w) at (0,-2) {$W$};
\node [font=\LARGE] at (-.65,1) {\rotatebox{-45}{$\Rightarrow$}};
\node at (-.9,.8) {\scriptsize{$\theta_{1}$}};
\node [font=\LARGE] at (1,0) {\rotatebox{-90}{$\Rightarrow$}};
\node at (1.4,0) {\scriptsize{$\theta_{2}$}};
\node [font=\LARGE] at (-.7,-.7) {\rotatebox{-135}{$\Rightarrow$}};
\node at (-1.05,-.5) {\scriptsize{$\theta_{3}$}};
\draw [arrow] (v) to node{\scriptsize{$f_1$}} (u);
\draw [arrow] (u) to node{\scriptsize{$f_2$}} (t);
\draw [arrow] (v) to node[swap]{\scriptsize{$g_1$}} (w);
\draw [arrow] (w) to node[swap]{\scriptsize{$g_2$}} (t);
\draw [arrow] (v) to node{\scriptsize{$h_1$}} (s);
\draw [arrow] (s) to node[swap]{\scriptsize{$h_2$}} (u);
\draw [arrow] (s) to node{\scriptsize{$h_3$}} (w);
\end{tikzpicture}\]
of
\begin{itemize}
\item objects $S,T,U,V$, and $W$;
\item $1$-cells $f_1,f_2,g_1,g_2,h_1,h_2$, and $h_3$;
\item $2$-cells
\begin{itemize}
\item $\theta_{1} : f_1 \to h_2h_1$ in $\A(V,U)$,
\item $\theta_{2} : f_2h_2 \to g_2h_3$ in $\A(S,T)$, and
\item $\theta_{3} : h_3h_1 \to g_1$ in $\A(V,W)$.
\end{itemize}
\end{itemize}
Its composite is the vertical composite
\[|\phi| = \phi_{G'_3}\phi_{G'_2}\phi_{G'_1} : \phi_{\dom_{G'}} = f_2f_1 \to g_2g_1 = \phi_{\codom_{G'}}\] in $\A(V,T)$, where the constituent $2$-cells are the horizontal composites
\[\begin{split}
\phi_{G'_1} &= 1_{f_2} * \theta_{1} : f_2f_1 \to f_2h_2h_1,\\
\phi_{G'_2} &= \theta_{2} * 1_{h_1} : f_2h_2h_1 \to g_2h_3h_1,\\
\phi_{G'_3} &= 1_{g_2} * \theta_{3} : g_2h_3h_1 \to g_2g_1.
\end{split}\]
Since $G'_3G'_2G'_1$ is the only pasting scheme presentation of the anchored graph $G'$, there are no other ways to define the composite.\dqed
\end{example}
\begin{example}\label{ex:pasting-diagram-simple}
For the pasting scheme $G=G_2G_1$ in \Cref{ex:pasting-scheme-simple}, a pasting diagram in $\A$ of shape $G$ and its composite are as described in \Cref{ex:pasting-simple}. Similarly:
\begin{itemize}
\item For the pasting scheme $G=G_2G_1$ in \Cref{ex:pasting-scheme-simple2}, a pasting diagram in $\A$ of shape $G$ and its composite are as described in \Cref{ex:pasting-simple2}.
\item For the pasting schemes $G=G_5G_4G_3G_2G_1$ and $G=G_5G_4G_3G'_2G'_1$ in \Cref{ex:pasting-scheme-complicated}, a pasting diagram in $\A$ of shape $G$ and its composite are as described in \Cref{ex:pasting-complicated}.\dqed
\end{itemize}
\end{example}
\begin{motivation}
The anchored graph $G$ in \Cref{ex:pasting-scheme-complicated} has $8$ pasting scheme presentations, corresponding to the $8$ possible ways to compose the $2$-cells as described in \Cref{ex:pasting-complicated}. As discussed there, these $8$ composites are equal to each other. The following result shows that this is, in fact, the general situation.\dqed
\end{motivation}
\begin{theorem}[$2$-Categorical Pasting]\label{thm:2cat-pasting-theorem}\index{2-category!pasting theorem}\index{pasting theorem!2-categorical}\index{Theorem!2-Categorical Pasting}
Every pasting diagram in a $2$-category has a unique composite.
\end{theorem}
\begin{proof}
Suppose $\phi$ is a pasting diagram in a $2$-category $\A$ for some anchored graph $G$ that admits pasting scheme presentations \[G = G_n\cdots G_1 \andspace G = G'_n\cdots G'_1.\] To show that the composites of $\phi$ with respect to them are equal, we proceed by induction on $n$, which is the number of interior faces of $G$. If $n=1$, then the unique interior face of $G_1$ is equal to the unique interior face of $G'_1$. So $\phi_{G_1}=\phi_{G'_1}$ by \Cref{def:2cat-pasting-composite}(1).
Suppose $n>1$. If $G_1=G'_1$, then the induction hypothesis---applied to the restriction of $\phi$ to the pasting scheme presentations $G_n\cdots G_2$ and $G'_n\cdots G'_2$ of the same anchored graph---implies the equality \[\phi_{G_n}\cdots \phi_{G_2} = \phi_{G'_n}\cdots \phi_{G'_2}.\] Together with $\phi_{G_1}=\phi_{G'_1}$, we conclude that the composites of $\phi$ with respect to the two pasting scheme presentations are equal.
If $G_1 \not= G'_1$, then by \Cref{atomic-domain} their interior faces $F_1$ and $F'_1$ do not intersect, except possibly for $t_{F_1}=s_{F'_1}$ or $t_{F'_1}=s_{F_1}$. Denote by $G_1 \cup G'_1$ their union anchored graph. Without loss of generality, we may assume that $\dom_G$ goes through $s_{F_1}$ before $s_{F'_1}$. This union is displayed below with each edge representing a directed path.
\[\begin{tikzpicture}[scale=1]
\node [plain] (s) {$s_G$}; \node [above=.4cm of s](){$G_1\cup G'_1$};
\node [plain, right=1cm of s] (s1) {$s_{F_1}$};
\node [right=.5cm of s1] () {$F_1$};
\node [plain, right=1.5cm of s1] (t1) {$t_{F_1}$};
\node [plain, right=1cm of t1] (s2) {$s_{F'_1}$};
\node [right=.5cm of s2] () {$F'_1$};
\node [plain, right=1.5cm of s2] (t2) {$t_{F'_1}$};
\node [plain, right=1cm of t2] (t) {$t_{G}$};
\draw [arrow] (s) to node{$Q_1$} (s1);
\draw [arrow, bend left=30] (s1) to node{$\dom_{F_1}$} (t1);
\draw [arrow, bend right=30] (s1) to node[swap]{$\codom_{F_1}$} (t1);
\draw [arrow] (t1) to node{$Q_2$} (s2);
\draw [arrow, bend left=30] (s2) to node{$\dom_{F'_1}$} (t2);
\draw [arrow, bend right=30] (s2) to node[swap]{$\codom_{F'_1}$} (t2);
\draw [arrow] (t2) to node{$Q_3$} (t);
\end{tikzpicture}\]
This union has exactly two pasting scheme presentations, each with two atomic graphs. By the induction hypothesis, it suffices to show that, when $\phi$ is restricted to $G_1\cup G'_1$, the two pasting scheme presentations yield the same composite of $\phi$. Depending on whether the directed paths $Q_1$, $Q_2$, and $Q_3$ are trivial or not, there are $8$ cases to check.
If $Q_1$ and $Q_3$ are trivial, and if $Q_2$ is not trivial, then we need to check that
\begin{equation}\label{pasting-computation-proof}
\begin{split}
&\bigl(\phi_{F'_1} * 1_{\phi_{Q_2}\phi_{\codom_{F_1}}}\bigr)
\bigl(1_{\phi_{\dom_{F'_1}}\phi_{Q_2}} * \phi_{F_1}\bigr)\\
&= \bigl(1_{\phi_{\codom_{F'_1}}\phi_{Q_2}} *\phi_{F_1}\bigr)
\bigl(\phi_{F'_1}* 1_{\phi_{Q_2}\phi_{\dom_{F_1}}}\bigr).\end{split}
\end{equation}
This equality holds by the computation \eqref{pasting-computation} with a change of symbols. The other seven cases follow by similar computations using the axioms \eqref{hom-category-axioms}, \eqref{bicat-c-id}, and \eqref{middle-four}.
\end{proof}
To reiterate \Cref{thm:2cat-pasting-theorem}, the composite of a pasting diagram in a $2$-category is independent of the choice of a particular pasting scheme presentation.
\section{Composition Schemes}
\label{sec:bicategorical-pasting}
In this section we give a precise definition of a pasting diagram in a bicategory.
\begin{motivation}\label{mot:bracketing}
We saw that pasting diagrams in $2$-categories have to do with pasting scheme presentations of anchored graphs. A crucial fact---used for instance in defining $\phi_P$ in \Cref{def:2cat-pasting-diagram} and $\phi_{G_i}$ in \Cref{def:2cat-pasting-composite}---is that horizontal composition is strictly associative in a $2$-category. In a bicategory, as opposed to a $2$-category, the horizontal composition is not strictly associative. For example, a sequence of $1$-cells
\[\begin{tikzcd}
W \ar{r}{f} & X \ar{r}{g} & Y \ar{r}{h} & Z\end{tikzcd}\] in a bicategory has two horizontal composites $(hg)f$ and $h(gf)$, and they are in general not equal. The same goes for horizontal composites of $2$-cells. Therefore, to define a pasting diagram in a bicategory, we first need to discuss bracketing in anchored graphs. Once we have the suitable language, we will apply it to $1$-cells and $2$-cells in a bicategory.\dqed
\end{motivation}
\begin{definition}\label{def:bracketing}
\emph{Bracketings}\index{bracketing} are defined recursively as follows:
\begin{itemize}
\item The only bracketing of length $0$ is the empty sequence $\varnothing$.
\item The only bracketing of length $1$ is the symbol $-$, called a dash.
\item If $b$ and $b'$ are bracketings of lengths $m$ and $n$, respectively, then $(bb')$ is a bracketing of length $m+n$.
\end{itemize}
We usually omit the outermost pair of parentheses, so the unique bracketing of length $2$ is $- -$. Moreover:
\begin{enumerate}
\item A \emph{left normalized bracketing}\index{left normalized bracketing} is either $-$ or $(b)-$ with $b$ a left normalized bracketing.
\item A \emph{right normalized bracketing}\index{right normalized bracketing} is either $-$ or $-(b)$ with $b$ a right normalized bracketing.\defmark
\end{enumerate}
\end{definition}
\begin{definition}\label{def:bracketing-directed-path}
For a directed path $P=v_0e_1v_1\cdots e_nv_n$ in a graph, a \emph{bracketing for $P$} is a choice of a bracketing $b$ of length $n$.
\begin{itemize}
\item In this case, we write\label{notation:bofp} $b(P)$, called a \index{bracketed!directed path}\emph{bracketed directed path}, for the bracketed sequence obtained from $b$ by replacing its $n$ dashes with $e_1,\ldots,e_n$ from left to right.
\item If the bracketing is clear from the context, then we abbreviate $b(P)$ to $(P)$ or even $P$.
\end{itemize}
We sometimes suppress the vertices and write $P$ as $(e_1,\ldots,e_n)$, in which case $b(P)$ is also denoted by $b(e_1,\ldots,e_n)$.
\end{definition}
\begin{example}\label{ex:bracketed-directed-path}
A directed path $P=(e_1,\ldots,e_n)$ with $0 \leq n \leq 2$ has a unique bracketing. The only bracketings of length $3$ are $(- -)-$ and $-(- -)$, and only the first one is left normalized. The five bracketings of length $4$ are
\[((- -)-)-,\quad (- -)(- -),\quad -(-(- -)),\quad (-(- -))-,\andspace -((- -)-),\] of which only the first is left normalized. In fact, an induction shows that, for each $n\geq 1$, there is a unique left normalized bracketing of length $n$.
Suppose $P=(e_1,e_2,e_3,e_4)$ is a directed path in a graph. Then $b(P)$ for the five possible bracketings for $P$ are the bracketed sequences
\[((e_1e_2)e_3)e_4,\quad (e_1e_2)(e_3e_4),\quad e_1(e_2(e_3e_4)),\quad (e_1(e_2e_3))e_4,\andspace e_1((e_2e_3)e_4).\]\dqed
\end{example}
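More generally (a standard count, not asserted in the surrounding text): bracketings of length $n\geq 1$ are exactly the full parenthesizations of $n$ factors, so their number is the Catalan number $C_{n-1}$. This can be recorded as:

```latex
% Number of bracketings of length n >= 1 (Catalan numbers):
\[
  \#\{\text{bracketings of length } n\}
  \,=\, C_{n-1}
  \,=\, \frac{1}{n}\binom{2(n-1)}{n-1}.
\]
% For n = 1, 2, 3, 4, 5 this gives 1, 1, 2, 5, 14 bracketings,
% matching the explicit lists of lengths 3 and 4 above.
```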
Recall from \Cref{def:faces} that an anchored graph is a connected plane graph whose interior faces and exterior face are all anchored. Next we introduce anchored graphs whose various domains and codomains are bracketed.
\begin{definition}\label{def:bracketed-graph}
A \emph{bracketing}\index{anchored!graph!bracketing}\index{bracketing!anchored graph} for an anchored graph $G$ consists of a bracketing for each of the directed paths:
\begin{itemize}
\item $\dom_G$ and $\codom_G$;
\item $\dom_F$ and $\codom_F$ for each interior face $F$ of $G$.
\end{itemize}
An anchored graph $G$ with a bracketing is called a \index{graph!bracketed}\index{bracketed!graph}\emph{bracketed graph}.
\end{definition}
\begin{definition}\label{def:bracketed-graph-vcomp}
Suppose $G$ and $H$ are bracketed graphs such that:
\begin{itemize}
\item The vertical composite $HG$ is defined as in \Cref{def:anchored-composition}.
\item $(\codom_G) = (\dom_H)$ as bracketed directed paths.
\end{itemize}
Then the anchored graph $HG$ is given the bracketing determined as follows:
\begin{itemize}
\item $(\dom_{HG}) = (\dom_G)$;
\item $(\codom_{HG}) = (\codom_H)$;
\item Each interior face $F$ of $HG$ is either an interior face of $G$ or an interior face of $H$, and not both. Corresponding to these two cases, the directed paths $\dom_F$ and $\codom_F$ are bracketed as they are in $G$ or $H$.
\end{itemize}
Equipped with this bracketing, $HG$ is called the \emph{vertical composite}\index{vertical composition!bracketed graphs} of the bracketed graphs $G$ and $H$.
\end{definition}
The bracketed graph version of \Cref{graph-comp-associative} is also true; i.e., vertical composites of bracketed graphs are strictly associative. So we will safely omit parentheses when we write iterated vertical composites of bracketed graphs. Recall that an atomic graph is an anchored graph with only one interior face. The following concept plays the role of an atomic graph in the bracketed setting.
\begin{definition}\label{def:consistent-bracketing}
Suppose $G$ is an atomic graph with
\begin{itemize}
\item unique interior face $F$,
\item $P=(e_1,\ldots,e_m)$ the directed path from $s_G$ to $s_F$, and
\item $P'=(e'_1,\ldots,e'_n)$ the directed path from $t_F$ to $t_G$,
\end{itemize}
as displayed below with each edge representing a directed path.
\[\begin{tikzpicture}[scale=1]
\node [plain] (s) {$s_G$}; \node [left=.2cm of s](){$G=$};
\node [plain, right=1cm of s] (s1) {$s_{F}$};
\node [right=.5cm of s1] () {$F$};
\node [plain, right=1.5cm of s1] (t1) {$t_{F}$};
\node [plain, right=1cm of t1] (t) {$t_{G}$};
\draw [arrow] (s) to node{$P$} (s1);
\draw [arrow, bend left=30] (s1) to node{$\dom_{F}$} (t1);
\draw [arrow, bend right=30] (s1) to node[swap]{$\codom_{F}$} (t1);
\draw [arrow] (t1) to node{$P'$} (t);
\end{tikzpicture}\]
A bracketing for $G$ is \emph{consistent}\index{bracketing!consistent} if it satisfies both
\begin{equation}\label{consistent-bracketing}
\begin{split}
(\dom_G) &= b\bigl(e_1,\ldots,e_m, (\dom_F), e'_1,\ldots,e'_n\bigr),\\
(\codom_G) &= b\bigl(e_1,\ldots,e_m, (\codom_F), e'_1,\ldots,e'_n\bigr)
\end{split}
\end{equation}
for some bracketing $b$ of length $m+n+1$. In $(\dom_G)$, the bracketed directed path $(\dom_F)$ is substituted for the $(m+1)$st dash in $b$, and similarly in $(\codom_G)$. An atomic graph with a consistent bracketing is called a \index{consistent graph}\index{graph!consistent}\emph{consistent graph}.
\end{definition}
\begin{definition}\label{def:associativity-graph}
An \emph{associativity graph}\index{associativity!graph}\index{graph!associativity} is a consistent graph in which the unique interior face $F$ satisfies one of the following two conditions:
\begin{equation}\label{associativity-graph1}
(\dom_{F}) = (E_1E_2)E_3 \andspace
(\codom_{F}) = E_1'(E_2'E_3'),
\end{equation}
or
\begin{equation}\label{associativity-graph2}
(\dom_{F}) = E_1(E_2E_3) \andspace
(\codom_{F}) = (E_1'E_2')E_3'.
\end{equation}
Moreover, in each case and for each $1\leq i \leq 3$, $E_i$ and $E'_i$ are non-trivial bracketed directed paths with the same length and the same bracketing.
\end{definition}
\begin{explanation}\label{expl:consistent-bracketing}
In a consistent graph $G$ with unique interior face $F$, if we consider each of $\dom_F$ and $\codom_F$ as a single edge, then the bracketings for $\dom_G$ and $\codom_G$ are the same. That is what the word \emph{consistent} refers to.
In an associativity graph, the unique interior face has either one of the following two forms.
\[\begin{tikzpicture}[scale=1]
\matrix[column sep=1cm, row sep=.7cm,ampersand replacement=\&]
{\node[normaldot] (s) {}; \& \node[normaldot] (a) {};
\& \node[normaldot] (b) {}; \\
\node[normaldot] (c) {}; \& \node[normaldot] (d) {};
\& \node[normaldot] (t) {};\\};
\draw [arrow] (s) to node{\scriptsize{$(E_1$}} (a);
\draw [arrow] (s) to node[swap]{\scriptsize{$E'_1$}} (c);
\draw [arrow] (a) to node{\scriptsize{$E_2)$}} (b);
\draw [arrow] (b) to node{\scriptsize{$E_3$}} (t);
\draw [arrow] (c) to node{\scriptsize{$(E'_2$}} (d);
\draw [arrow] (d) to node{\scriptsize{$E'_3)$}} (t);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=1]
\matrix[column sep=1cm, row sep=.7cm,ampersand replacement=\&]
{\node[normaldot] (s) {}; \& \node[normaldot] (a) {}; \\
\node[normaldot] (c) {}; \& \node[normaldot] (b) {}; \\
\node[normaldot] (d) {}; \& \node[normaldot] (t) {};\\};
\draw [arrow] (s) to node{\scriptsize{$E_1$}} (a);
\draw [arrow] (a) to node{\scriptsize{$(E_2$}} (b);
\draw [arrow] (b) to node{\scriptsize{$E_3)$}} (t);
\draw [arrow] (s) to node[swap]{\scriptsize{$(E'_1$}} (c);
\draw [arrow] (c) to node[swap]{\scriptsize{$E'_2)$}} (d);
\draw [arrow] (d) to node{\scriptsize{$E'_3$}} (t);
\end{tikzpicture}\]
Here each edge represents a non-trivial bracketed directed path, with $E_i$ and $E'_i$ having the same length and the same bracketing. They are designed to move brackets from left to right, or from right to left. As we will see in the examples below, associativity graphs are for components of the associator and its inverse. Moreover, since $E_i$ and $E'_i$ have the same number of edges, we can speak of corresponding edges in them.\dqed
\end{explanation}
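As a small worked instance of \eqref{consistent-bracketing} (our illustration, not an example from the text): take an atomic graph with $m=2$, $n=1$, and choose the bracketing $b = (- -)(- -)$ of length $m+n+1=4$. Replacing the four dashes from left to right with $e_1$, $e_2$, $(\dom_F)$, and $e'_1$, so that $(\dom_F)$ occupies the $(m+1)$st dash, yields the consistent bracketing

```latex
\[
\begin{split}
  (\dom_G)   &= (e_1 e_2)\bigl((\dom_F)\, e'_1\bigr),\\
  (\codom_G) &= (e_1 e_2)\bigl((\codom_F)\, e'_1\bigr).
\end{split}
\]
% The same outer bracketing b appears in both lines; only the face
% boundary (dom_F versus codom_F) changes, which is exactly what
% consistency requires.
```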
\begin{definition}\label{def:bicategorical-pasting-scheme}
A \emph{composition scheme}\index{composition scheme} is a bracketed graph $G$ together with a decomposition
\[G=G_n\cdots G_1\]
into vertical composites of $n \geq 1$ consistent graphs $G_1,\ldots,G_n$. Such a decomposition is called a \index{composition scheme!presentation}\emph{composition scheme presentation} of $G$.
\end{definition}
\begin{explanation}\label{expl:bicat-pasting-scheme}
The bicategorical analogue of \Cref{expl:pasting-scheme} is true. In particular, if $G=G_n\cdots G_1$ is a composition scheme, then:
\begin{itemize}
\item $G$ has $n$ interior faces, one in each consistent graph $G_i$ for $1\leq i \leq n$.
\item Each $G_i$ has the same source and the same sink as $G$.
\item For each $1\leq i \leq n-1$, $(\codom_{G_i}) = (\dom_{G_{i+1}})$ as bracketed directed paths.
\item $(\dom_{G}) = (\dom_{G_1})$ and $(\codom_{G}) = (\codom_{G_n})$.
\item If $1\leq i\leq j\leq n$, then $G_j\cdots G_i$ is a composition scheme.\dqed
\end{itemize}
\end{explanation}
We now apply the concepts above to bicategories.
\begin{definition}\label{def:bicat-pasting-diagram}
Suppose $\B$ is a bicategory, and $G$ is a bracketed graph.
\begin{enumerate}
\item A \emph{$1$-skeletal $G$-diagram}\index{diagram!1-skeletal $G$-} in $\B$ is an assignment $\phi$ as follows.
\begin{itemize}
\item $\phi$ assigns to each vertex $v$ in $G$ an object $\phi_v$ in $\B$.
\item $\phi$ assigns to each edge $e$ in $G$ with tail $u$ and head $v$ a $1$-cell \[\phi_e \in \B(\phi_u,\phi_v).\]
\end{itemize}
\item Suppose $\phi$ is such a $1$-skeletal $G$-diagram, and $P = v_0e_1v_1\cdots e_mv_m$ is a directed path in $G$ with $m \geq 1$ and with an inherited bracketing $(P)$. Define the $1$-cell
\begin{equation}\label{phi-directed-path}
\phi_P \in \B(\phi_{v_0},\phi_{v_m})
\end{equation}
as follows.
\begin{itemize}
\item First replace the edge $e_i$ in $(P)$ by the $1$-cell $\phi_{e_i} \in \B(\phi_{v_{i-1}},\phi_{v_i})$ for $1\leq i \leq m$.
\item Then form the horizontal composite of the resulting parenthesized sequence
\[\begin{tikzcd}
\phi_{v_0} \ar{r}{\phi_{e_1}} & \phi_{v_1} \ar{r}{\phi_{e_2}} & \cdots \ar{r}{\phi_{e_m}} & \phi_{v_m}\end{tikzcd}\]
of $1$-cells.
\end{itemize}
\item A \emph{$G$-diagram}\index{diagram!G@$G$-} in $\B$ is a $1$-skeletal $G$-diagram $\phi$ in $\B$ that assigns to each interior face $F$ of $G$ a $2$-cell
\[\phi_F : \phi_{\dom_F} \to \phi_{\codom_F}\] in $\B(\phi_{s_F},\phi_{t_F})$.
\item A $G$-diagram is called a \emph{composition diagram}\index{composition diagram}\index{diagram!composition} of shape $G$ if $G$ admits a composition scheme presentation.
\item A $G$-diagram is called a \emph{pasting diagram}\index{diagram!pasting -, in a bicategory}\index{pasting diagram!bicategory} if the underlying anchored graph admits a pasting scheme presentation.\defmark
\end{enumerate}
\end{definition}
In \Cref{bicat-pasting-existence} below, we will show that a $G$-diagram $\phi$ is a pasting diagram in $\B$ if and only if $G$ admits a composition scheme extension. Before we define the composite of a pasting diagram, we first define the composite of a composition diagram.
\begin{definition}[Composite of a composition diagram]\label{def:bicat-pasting-composite}
Suppose $\phi$ is a composition diagram in a bicategory $\B$ of shape $G$. With respect to a composition scheme presentation $G_n\cdots G_1$ of $G$:
\begin{enumerate}
\item For each $1\leq i \leq n$, suppose $G_i$ has:
\begin{itemize}
\item unique interior face $F_i$;
\item directed path $P_i = (e_{i1},\ldots,e_{ik_i})$ from $s_G$ to $s_{F_i}$;
\item directed path $P_i' = (e'_{i1},\ldots,e'_{il_i})$ from $t_{F_i}$ to $t_G$.
\end{itemize}
By \eqref{consistent-bracketing} the bracketing of the consistent graph $G_i$ satisfies
\[\begin{split}
(\dom_{G_i}) &= b_i\bigl(e_{i1},\ldots,e_{ik_i}, (\dom_{F_i}), e'_{i1},\ldots,e'_{il_i}\bigr),\\
(\codom_{G_i}) &= b_i\bigl(e_{i1},\ldots,e_{ik_i}, (\codom_{F_i}), e'_{i1},\ldots,e'_{il_i}\bigr)
\end{split}\]
for some bracketing $b_i$ of length $k_i+l_i+1$. Define the $2$-cell
\begin{equation}\label{basic-2cell}
\phi_{G_i} = b_i\bigl(1_{\phi_{e_{i1}}},\ldots, 1_{\phi_{e_{ik_i}}}, \phi_{F_i}, 1_{\phi_{e'_{i1}}},\ldots, 1_{\phi_{e'_{il_i}}}\bigr) : \phi_{\dom_{G_i}} \to \phi_{\codom_{G_i}}
\end{equation}
in $\B(\phi_{s_G},\phi_{t_G})$ as follows:
\begin{itemize}
\item The identity $2$-cell of each $\phi_{e_{ij}}$ is substituted for $e_{ij}$ in $b_i$, and similarly for the identity $2$-cell of each $\phi_{e'_{ij}}$.
\item The $2$-cell $\phi_{F_i}$ is substituted for the $(k_i+1)$st entry in $b_i$.
\item $\phi_{G_i}$ is the iterated horizontal composite of the resulting bracketed sequence of $2$-cells, with the horizontal compositions determined by the brackets in $b_i$.
\end{itemize}
\item The \index{composition diagram!composite}\index{composite!composition diagram}\emph{composite of $\phi$}, denoted by $|\phi|$, is defined as the vertical composite
\begin{equation}\label{pasting-diagram-composite}
\begin{tikzcd}[column sep=huge]
\phi_{\dom_G} = \phi_{\dom_{G_1}} \ar{r}{|\phi| \,=\, \phi_{G_n}\cdots \phi_{G_1}} & \phi_{\codom_{G_n}}=\phi_{\codom_G},\end{tikzcd}
\end{equation}
which is a $2$-cell in $\B(\phi_{s_G},\phi_{t_G})$.
\end{enumerate}
This finishes the definition of the composite of $\phi$.
\end{definition}
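As a toy model of the definition just given, one can represent a $2$-cell by a triple (domain, codomain, label) and form the vertical composite $\phi_{G_n}\cdots\phi_{G_1}$ only when consecutive (co)domains match. This is an illustrative sketch only; the encoding and names are ours:

```python
# Toy model: a 2-cell is (dom, codom, label); vertical composition is defined
# exactly when the codomain of each cell equals the domain of the next.
def vertical_composite(cells):
    """cells = [phi_{G_1}, ..., phi_{G_n}], listed in composition order."""
    for f, g in zip(cells, cells[1:]):
        assert f[1] == g[0], "mismatched bracketed (co)domains"
    # Written phi_{G_n} . ... . phi_{G_1}, as in the definition above.
    label = " . ".join(c[2] for c in reversed(cells))
    return (cells[0][0], cells[-1][1], label)
```

The assertion is the analogue of requiring $(\codom_{G_i}) = (\dom_{G_{i+1}})$: when it fails, no composite is defined, which is precisely the phenomenon the examples below address.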
The rest of this section contains examples. In the following examples, suppose $(\B,1,c,a,\ell,r)$ is a bicategory.
\begin{example}\label{ex:bicat-pasting-simple}
The anchored graph $G$ in \Cref{ex:pasting-scheme-simple}, displayed on the left below, admits a unique bracketing because, in both interior faces and the exterior face, the domain and the codomain have at most two edges.
\[\begin{tikzpicture}[scale=.5]
\matrix[row sep=.3cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G$}; \&\\
\& \node[normaldot] (w) {}; \&\\
\node[normaldot] (x) {}; \&\& \node[normaldot] (z) {};\\
\& \node[normaldot] (y) {};\\};
\draw [arrow] (x) to node{\scriptsize{$e_4$}} (w);
\draw [arrow] (y) to node[swap]{\scriptsize{$e_5$}} (z);
\draw [arrow] (x) to node[swap]{\scriptsize{$e_1$}} (y);
\draw [arrow] (y) to node{\scriptsize{$e_2$}} (w);
\draw [arrow] (w) to node{\scriptsize{$e_3$}} (z);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=.5]
\matrix[row sep=.3cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G_1$}; \&\\
\& \node[normaldot] (w) {}; \&\\
\node[normaldot] (x) {}; \& \& \node[normaldot] (z) {};\\
\& \node[normaldot] (y) {};\\};
\draw [arrow] (x) to node{\scriptsize{$e_4$}} (w);
\draw [arrow] (x) to node[swap]{\scriptsize{$(e_1$}} (y);
\draw [arrow] (y) to node[near start, swap]{\scriptsize{$e_2)$}} (w);
\draw [arrow] (w) to node{\scriptsize{$e_3$}} (z);
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=.5]
\matrix[row sep=.3cm,column sep=.8cm,ampersand replacement=\&] {
\& \node () {$G_2$}; \&\\
\& \node[normaldot] (w) {}; \&\\
\node[normaldot] (x) {}; \& \& \node[normaldot] (z) {};\\
\& \node[normaldot] (y) {};\\};
\draw [arrow] (x) to node[swap]{\scriptsize{$e_1$}} (y);
\draw [arrow] (y) to node[swap]{\scriptsize{$e_5$}} (z);
\draw [arrow] (y) to node[near end]{\scriptsize{$(e_2$}} (w);
\draw [arrow] (w) to node{\scriptsize{$e_3)$}} (z);
\end{tikzpicture}\]
In contrast to \Cref{ex:pasting-scheme-simple}, where $G$ was merely an anchored graph, the bracketed graph $G$ does \emph{not} admit a composition scheme presentation. Indeed, since $G$ has only two interior faces, the only candidate for a composition scheme presentation involves the consistent graphs $G_1$ and $G_2$ above with the bracketings
\[(\codom_{G_1}) = (e_1e_2)e_3 \andspace
(\dom_{G_2}) = e_1(e_2e_3).\]
Since $(\codom_{G_1})$ and $(\dom_{G_2})$ are different bracketed directed paths, the vertical composite $G_2G_1$ is not defined.
In terms of a $G$-diagram $\phi$ in $\B$,
\[\begin{tikzpicture}[commutative diagrams/every diagram, xscale=1.2, yscale=.8]
\node () at (-2,0) {$\phi=$};
\node (W) at (0,1) {$W$}; \node (X) at (-1.5,0) {$X$};
\node (Y) at (0,-1) {$Y$}; \node (Z) at (1.5,0) {$Z$};
\node[font=\Large] at (-.6,0) {\rotatebox{-45}{$\Rightarrow$}};
\node at (-.5,.2) {\scriptsize{$\alpha$}};
\node[font=\Large] at (.5,0) {\rotatebox{-45}{$\Rightarrow$}};
\node at (.7,.2) {\scriptsize{$\beta$}};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge node[above] {$f$} (W)
(X) edge node[below] {$g$} (Y)
(Y) edge node[right, near start] {$h$} (W)
(Y) edge node[below] {$j$} (Z)
(W) edge node[above] {$i$} (Z);
\end{tikzpicture}\]
the non-existence of a composition scheme presentation of $G$ means that, without suitable adjustment, $\phi$ does not have a well-defined composite in general. The issue is that the two $2$-cells involved are
\[\begin{tikzcd} if \ar{r}{1_i *\alpha} & i(hg)\end{tikzcd}\andspace
\begin{tikzcd} (ih)g \ar{r}{\beta*1_g} & jg\end{tikzcd}\]
in $\B(X,Z)$. However, the $1$-cells $i(hg)$ and $(ih)g$ are not equal in general in a bicategory, so the $2$-cells $1_i *\alpha$ and $\beta*1_g$ cannot be vertically composed in $\B(X,Z)$.
To fix this, observe that the $1$-cells $i(hg)$ and $(ih)g$ are related by a component of the associator $a$, namely, the invertible $2$-cell
\[\begin{tikzcd}[column sep=large]
i(hg) \ar{r}{a^{-1}_{i,h,g}}[swap]{\cong} & (ih)g\end{tikzcd}\] in $\B(X,Z)$. This suggests that we should expand $G$ into the bracketed graph $G'$ on the left by inserting an associativity graph $G_0$ of the form \eqref{associativity-graph1},
\[\begin{tikzpicture}[scale=1]
\matrix[column sep=1cm, row sep=.5cm, ampersand replacement=\&]
{\node[normaldot] (a) {}; \& \node[normaldot] (b) {}; \&
\node[normaldot] (c) {}; \\
\node[normaldot] (e) {}; \& \node[normaldot] (f) {}; \& \node[normaldot] (d) {};\\};
\path (a) to node{$G'$} ++(1,1);
\path (b) to node[inner sep=0]{\scriptsize{$F_0$}} (f);
\draw [arrow] (a) to node{\scriptsize{$(e_1$}} (b);
\draw [arrow] (b) to node{\scriptsize{$e_2)$}} (c);
\draw [arrow] (c) to node{\scriptsize{$e_3$}} (d);
\draw [arrow, bend left=60] (a) to node{\scriptsize{$e_4$}} (c);
\draw [arrow] (a) to node[swap]{\scriptsize{$e_1'$}} (e);
\draw [arrow] (e) to node{\scriptsize{$(e_2'$}} (f);
\draw [arrow] (f) to node[near end]{\scriptsize{$e_3')$}} (d);
\draw [arrow, bend right=60] (e) to node{\scriptsize{$e_5$}} (d);
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=1]
\matrix[column sep=.7cm, row sep=.5cm,ampersand replacement=\&] {
\node[normaldot] (x) {}; \& \node[normaldot] (y) {}; \&
\node[normaldot] (w) {};\\ \&\& \node[normaldot] (z) {};\\ \&\&\\};
\path (x) to node{$G_1$} ++(1,1);
\draw [arrow, bend left=60] (x) to node{\scriptsize{$e_4$}} (w);
\draw [arrow] (x) to node[swap]{\scriptsize{$(e_1$}} (y);
\draw [arrow] (y) to node[swap]{\scriptsize{$e_2)$}} (w);
\draw [arrow] (w) to node{\scriptsize{$e_3$}} (z);
\end{tikzpicture} ~
\begin{tikzpicture}[scale=1]
\matrix[column sep=.7cm, row sep=.5cm,ampersand replacement=\&]
{\node[normaldot] (s) {}; \& \node[normaldot] (a) {};
\& \node[normaldot] (b) {}; \\
\node[normaldot] (c) {}; \& \node[normaldot] (d) {};
\& \node[normaldot] (t) {};\\ \&\&\\};
\path (s) to node{$G_0$} ++(1,1);
\draw [arrow] (s) to node{\scriptsize{$(e_1$}} (a);
\draw [arrow] (s) to node[swap]{\scriptsize{$e'_1$}} (c);
\draw [arrow] (a) to node{\scriptsize{$e_2)$}} (b);
\draw [arrow] (b) to node{\scriptsize{$e_3$}} (t);
\draw [arrow] (c) to node{\scriptsize{$(e'_2$}} (d);
\draw [arrow] (d) to node{\scriptsize{$e'_3)$}} (t);
\end{tikzpicture}~
\begin{tikzpicture}[scale=1]
\matrix[column sep=.7cm, row sep=.5cm,ampersand replacement=\&] {
\node[normaldot] (x) {}; \& \node[inner sep=0] () {$G_2$}; \& \\
\node[normaldot] (y) {}; \& \node[normaldot] (w) {}; \& \node[normaldot] (z) {};\\};
\draw [arrow] (x) to node[swap]{\scriptsize{$e'_1$}} (y);
\draw [arrow, bend right=60] (y) to node{\scriptsize{$e_5$}} (z);
\draw [arrow] (y) to node{\scriptsize{$(e'_2$}} (w);
\draw [arrow] (w) to node{\scriptsize{$e'_3)$}} (z);
\end{tikzpicture}\]
whose interior face $F_0$ is bracketed with
\[(\dom_{F_0}) = (e_1e_2)e_3 \andspace
(\codom_{F_0}) = e_1'(e_2'e_3').\]
The bracketed graph $G'$ has a unique composition scheme presentation
\[G'=G_2G_0G_1\] with the consistent graphs displayed above
and the bracketings
\[(\codom_{G_1}) = (e_1e_2)e_3 = (\dom_{G_0}) \andspace
(\codom_{G_0}) = e'_1(e'_2e'_3) = (\dom_{G_2}).\]
The other bracketings are all uniquely defined.
With the bracketed graph $G$ replaced by the composition scheme $G'=G_2G_0G_1$, we replace the $G$-diagram $\phi$ by the pasting diagram $\phi'$ of shape $G'$ by inserting the invertible $2$-cell $a^{-1}_{i,h,g} : i(hg)\to (ih)g$ for the interior face $F_0$.
\[\begin{tikzpicture}[commutative diagrams/every diagram, scale=1]
\node () at (-2,.7) {$\phi$}; \node (W) at (0,1) {$W$};
\node (X) at (-1.5,0) {$X$};
\node (Y) at (0,-1) {$Y$}; \node (Z) at (1.5,0) {$Z$};
\node[font=\Large] at (-.6,0) {\rotatebox{-45}{$\Rightarrow$}};
\node at (-.5,.2) {\scriptsize{$\alpha$}};
\node[font=\Large] at (.5,0) {\rotatebox{-45}{$\Rightarrow$}};
\node at (.7,.2) {\scriptsize{$\beta$}};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge node[above] {$f$} (W)
(X) edge node[below] {$g$} (Y)
(Y) edge node[right, near start] {$h$} (W)
(Y) edge node[below] {$j$} (Z)
(W) edge node[above] {$i$} (Z);
\end{tikzpicture} \quad
\begin{tikzpicture}
\draw [->, line join=round, decorate, decoration={zigzag, segment length=4, amplitude=1, post=lineto, post length=2pt}] (0,0) -- (0.9,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[commutative diagrams/every diagram, scale=1]
\node () at (3.5,.7) {$\phi'$};
\node (X) at (0,0) {$X$}; \node (Y) at (1.5,0) {$Y$}; \node (W) at (3,0) {$W$};
\node (Y') at (0,-1) {$Y$}; \node (W') at (1.5,-1) {$W$}; \node (Z) at (3,-1) {$Z$};
\node[font=\Large] at (1.4,.5) {\rotatebox{-90}{$\Rightarrow$}};
\node at (1.7,.5) {\scriptsize{$\alpha$}};
\node[font=\Large] at (1.4,-.5) {\rotatebox{-90}{$\Rightarrow$}};
\node at (1.8,-.5) {\scriptsize{$a^{-1}$}};
\node[font=\Large] at (1.4,-1.5) {\rotatebox{-90}{$\Rightarrow$}};
\node at (1.7,-1.5) {\scriptsize{$\beta$}};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge[out=60,in=120] node[above] {$f$} (W)
(X) edge node[above] {$g$} (Y)
(Y) edge node[above] {$h$} (W)
(W) edge node[right] {$i$} (Z)
(X) edge node[left] {$g$} (Y')
(Y') edge node[above] {$h$} (W')
(W') edge node[above] {$i$} (Z)
(Y') edge[out=-60,in=240] node[below] {$j$} (Z);
\end{tikzpicture}\]
Its composite is now defined as the vertical composite
\[|\phi'| = \Bigl(\begin{tikzcd}
if \ar{r}{1_i *\alpha} & i(hg) \ar{r}{a^{-1}} & (ih)g \ar{r}{\beta*1_g} & jg
\end{tikzcd}\Bigr)\] of the $2$-cells
\[\phi_{G_1} =1_i *\alpha,\quad \phi_{G_0} = a^{-1},\andspace
\phi_{G_2} = \beta*1_g\] in $\B(X,Z)$.
The upshot of this example is that, in a situation as above where we have a pasting diagram $\phi$ of shape $G$ that is not a composition diagram because of mismatched bracketings, we may expand $G$ into a composition scheme $G'$ by inserting associativity graphs, and insert instances of the associator or its inverse. The composite of $\phi$ is then defined as the composite of the composition diagram $\phi'$, which is defined on $G'$. In \Cref{sec:bicat-pscheme-extension} we describe precisely the situations where $G$ can be replaced by a composition scheme. Then in \Cref{sec:bicat-pasting-theorem} we show that, regardless of which composition scheme we replace $G$ with, the resulting composition diagram has the same composite.
This is an important example because one of the axioms of an internal adjunction in a bicategory \eqref{diagram:triangles} involves a pasting diagram of shape $G'$, along with $\ell$ and $r$. It also illustrates the advantage of $2$-categories over general bicategories. In \Cref{ex:pasting-simple}, where we worked in a $2$-category, the composite $(\beta*1_g)(1_i*\alpha)$ was defined without any adjustment.\dqed
\end{example}
\begin{example}\label{ex:bicat-pasting-2}
The anchored graph $G'$ in \Cref{ex:different-embedding} has a unique bracketing because, in all three interior faces and the exterior face, the domain and the codomain have at most two edges. Similar to \Cref{ex:bicat-pasting-simple}, this bracketed graph $G'$ does \emph{not} admit a composition scheme presentation.
Suppose given a $G'$-diagram $\phi'$ in $\B$, as displayed on the left below.
\[
\begin{tikzpicture}[x=20mm,y=20mm]
\newcommand{\core}{
\draw[0cell]
(0,0) node (v) {V}
(1,0) node (s) {S}
(1.75,.5) node (u) {U}
(1.75,-.5) node (w) {W}
(2.5,0) node (t) {T}
;
\draw[1cell]
(s) edge[swap] node {h_2} (u)
(s) edge node {h_3} (w)
(u) edge node {f_2} (t)
(w) edge[swap] node {g_2} (t)
;
\draw[2cell]
node[between=s and t at .6, rotate=-90,font=\Large] (T2) {\Rightarrow}
(T2) node[right] {\theta_2}
;
}
\begin{scope}
\core
\draw [->, line join=round, decorate, decoration={zigzag, segment
length=4, amplitude=1, post=lineto, post length=2pt}]
(t) ++(.25,0) -- ++(0.45,0);
\draw[0cell]
(v) ++(.1,.75) node {\phi'}
;
\draw[1cell]
(v) edge node[pos=.4] {h_1} (s)
(v) edge[bend left] node (f1) {f_1} (u)
(v) edge[bend right, swap] node (g1) {g_1} (w)
;
\draw[2cell]
node[between=f1 and s at .6, rotate=-45,font=\Large] (T1) {\Rightarrow}
(T1) node[above right] {\theta_1}
node[between=g1 and s at .6, rotate=225,font=\Large] (T3) {\Rightarrow}
(T3) node[below right] {\theta_3}
;
\end{scope}
\begin{scope}[shift={(3.55,0)}]
\core
\draw[0cell]
(v) ++(-.1,.9) node {\phi''}
(s) ++(0,.6) node (s') {S}
(u) ++(0,.6) node (u') {U}
(s) ++(0,-.6) node (s'') {S}
(w) ++(0,-.6) node (w'') {W}
;
\draw[1cell]
(v) edge node[pos=.65] {h_1} (s)
(v) edge node[pos=.5] {h_1} (s')
(v) edge[swap] node[pos=.5] {h_1} (s'')
(s') edge node {h_2} (u')
(s'') edge[swap] node {h_3} (w'')
(u') edge[bend left=40] node {f_2} (t)
(w'') edge[swap, bend right=35] node {g_2} (t)
(v) edge[bend left=50, looseness=1.2] node (f1') {f_1} (u')
(v) edge[bend right=50, looseness=1.2, swap] node (g1') {g_1} (w'')
;
\draw[2cell]
node[between=f1' and s' at .6, rotate=-45,font=\Large] (T1) {\Rightarrow}
(T1) node[above right] {\theta_1}
node[between=g1' and s'' at .6, rotate=225,font=\Large] (T3) {\Rightarrow}
(T3) node[below right] {\theta_3}
(u) ++(130:.3) node[rotate=-45,font=\Large] (a') {\Rightarrow}
(a') node[below left] {\,a^\inv}
(w) ++(230:.3) node[rotate=225,font=\Large] (a) {\Rightarrow}
(a) node[above left] {a}
;
\end{scope}
\end{tikzpicture}
\]
The composite of $\phi'$ is not defined in general because
\[\begin{split}
\codom(1_{f_2}*\theta_1) = f_2(h_2h_1) & \not= (f_2h_2)h_1 = \dom(\theta_2*1_{h_1}),\\
\codom(\theta_2*1_{h_1}) = (g_2h_3)h_1 &\not= g_2(h_3h_1) = \dom(1_{g_2}*\theta_3).\end{split}\]
The issue is again mismatched bracketings.
We fix this issue by:
\begin{itemize}
\item expanding $G'$ into a composition scheme $G''$ by inserting two associativity graphs, one of the form \eqref{associativity-graph1} and the other \eqref{associativity-graph2};
\item inserting instances of the associator $a$ or its inverse $a^{-1}$ to obtain the composition diagram $\phi''$ of shape $G''$ on the right above.
\end{itemize}
The composite of $\phi'$ is now defined as the vertical composite
\[\begin{tikzcd}
f_2f_1 \ar{rrr}{|\phi''|} \ar{d}[swap]{1_{f_2}*\theta_1} &&& g_2g_1\\
f_2(h_2h_1) \ar{r}{a^{-1}} & (f_2h_2)h_1 \ar{r}{\theta_2*1_{h_1}} & (g_2h_3)h_1 \ar{r}{a} & g_2(h_3h_1) \ar{u}[swap]{1_{g_2}*\theta_3} \end{tikzcd}\]
of $2$-cells in $\B(V,T)$.
\dqed
\end{example}
\section{Composition Scheme Extensions}
\label{sec:bicat-pscheme-extension}
In both \Cref{ex:bicat-pasting-simple,ex:bicat-pasting-2}, a bracketed graph was extended to a composition scheme by inserting associativity graphs. In this section we formalize this idea and characterize the bracketed graphs that admit such an extension.
\begin{definition}\label{def:collapsing}
Suppose $G$ is a bracketed graph with a decomposition \[G = G_2AG_1,\quad G_2A, \orspace AG_1\] into a vertical composite of bracketed graphs in which $A$ is an associativity graph with unique interior face $F$. Using the notations in \Cref{def:associativity-graph}, the bracketed graph obtained from $G$ by identifying each edge in $E_i$ with its corresponding edge in $E'_i$ for each $1\leq i \leq 3$, along with their corresponding tails and heads, is said to be obtained from $G$ by\label{notation:collapsing} \index{collapsing}\emph{collapsing $A$}, denoted by $G/A$.
\end{definition}
\begin{explanation}\label{expl:collapsing}
In the context of \Cref{def:collapsing}:
\begin{itemize}
\item $(\dom_{G/A}) = (\dom_G)$ and $(\codom_{G/A}) = (\codom_{G})$.
\item The interior faces in $G/A$ are those in $G$ minus the interior face of $A$, and their (co)domains are bracketed as they are in $G$.
\item Collapsing associativity graphs is a strictly associative operation. So we can iterate the collapsing process without worrying about the order of the collapses.
\item If $G$ originally has the form $G_2AG_1$, then the bracketed graph $G/A$ is \emph{not} the vertical composite $G_2G_1$ of the bracketed graphs $G_1$ and $G_2$ because
\[(\codom_{G_1}) = (\dom_A) \not= (\codom_A) = (\dom_{G_2})\] as bracketed directed paths. However, forgetting the bracketings, the underlying anchored graph of $G/A$ is the vertical composite of the underlying anchored graphs of $G_1$ and $G_2$.\dqed
\end{itemize}
\end{explanation}
\begin{definition}\label{def:bicat-pasting-extension}
Suppose $G$ is a bracketed graph. A \emph{composition scheme extension}\index{composition scheme!extension} of $G$ consists of the following data.
\begin{enumerate}
\item A composition scheme $H=H_n\cdots H_1$ as in \Cref{def:bicategorical-pasting-scheme}.
\item A proper subsequence of associativity graphs
\[\{A_1,\ldots,A_j\} \subset \{H_1,\ldots,H_n\}\]
such that $G$ is obtained from $H$ by collapsing $A_1,\ldots,A_j$.
\end{enumerate}
In this case, we also denote the bracketed graph $G$ by $H/\{A_1,\ldots,A_j\}$.
\end{definition}
\begin{explanation}\label{expl:bicat-pscheme-extension}
In the context of \Cref{def:bicat-pasting-extension}:
\begin{itemize}
\item $(\dom_{G}) = (\dom_H)$ and $(\codom_{G}) = (\codom_{H})$.
\item The interior faces in $G$ are those in $H$ minus those in $\{A_1,\ldots,A_j\}$, and their (co)domains are bracketed as they are in $H$.
\item The order in which the associativity graphs $A_1,\ldots,A_j$ are collapsed does not matter.
\item A bracketed graph may admit multiple composition scheme extensions.\dqed
\end{itemize}
\end{explanation}
\begin{example}\label{ex:collapsing}
Here are some examples for \Cref{def:bicat-pasting-extension}.
\begin{enumerate}
\item A composition scheme is a composition scheme extension, corresponding to the case $j=0$.
\item In \Cref{ex:bicat-pasting-simple}, $G'$ is a composition scheme extension of $G$, since the latter is obtained from $G'$ by collapsing the associativity graph $G_0$.
\item In \Cref{ex:bicat-pasting-2}, $G''$ is a composition scheme extension of $G'$, since the latter is obtained from the former by collapsing two associativity graphs.
\end{enumerate}
The last two examples illustrate that a bracketed graph obtained from a composition scheme by collapsing associativity graphs does \emph{not} in general admit a composition scheme presentation.\dqed
\end{example}
Our next objective is to characterize bracketed graphs that admit a composition scheme extension. We will need the following observation about moving brackets using associativity graphs.
\begin{lemma}[Moving Brackets]\label{moving-brackets}\index{Moving Brackets Lemma}
Suppose $G$ is a bracketed atomic graph with interior face $F$ such that:
\begin{itemize}
\item $(\dom_G) = (\dom_F)$ and $(\codom_G) = (\codom_F)$ as bracketed directed paths.
\item $(\dom_G)$ and $(\codom_G)$ have the same length.
\end{itemize}
Then one of the following two statements holds.
\begin{enumerate}
\item $(\dom_G) = (\codom_G)$.
\item There exists a canonical vertical composite
\[A_k\cdots A_1\] of associativity graphs $A_1,\ldots,A_k$ such that
\[(\dom_{A_1}) = (\dom_G) \andspace (\codom_{A_k}) = (\codom_G).\]
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose $(\dom_G)$ and $(\codom_G)$ have length $n$, and $b_n^l$ is the left normalized bracketing of length $n$. First we consider the case where
\[(\codom_G) = b_n^l(e_1,\ldots,e_n) = b^l_{n-1}(e_1,\ldots,e_{n-1})e_n.\]
We proceed by induction on $n$. If $n\leq 2$, then there is a unique bracketing of length $n$, so $(\dom_G) = b_n^l(e_1,\ldots,e_n) = (\codom_G)$ and statement (1) holds.
Suppose $n\geq 3$. Then \[(\dom_G)=E_1E_2\] for some non-trivial bracketed directed paths $E_1$ and $E_2$. If $E_2$ has length $1$ (i.e., it contains the single edge $e_n$), then the induction hypothesis applies with $E_1$ as the domain and $b^l_{n-1}(e_1,\ldots,e_{n-1})$ as the codomain. Since adding an edge at the end of an associativity graph yields an associativity graph, we are done in this case.
If $E_2$ has length $> 1$, then it has the form \[E_2 = E_{21}E_{22}\] for some non-trivial bracketed directed paths $E_{21}$ and $E_{22}$. There is a unique associativity graph $A_1$ of the form \eqref{associativity-graph2} that satisfies
\[\begin{split}
(\dom_{A_1}) &= E_1(E_{21}E_{22}) = (\dom_G),\\
(\codom_{A_1}) &= (E_1E_{21})E_{22}.\end{split}\]
Now we repeat the previous argument with $(\codom_{A_1})$ as the new domain. So if $E_{22}$ has length $1$, then we apply the induction hypothesis with $E_1E_{21}$ as the domain and $b^l_{n-1}(e_1,\ldots,e_{n-1})$ as the codomain. If $E_{22}$ has length $>1$, then it has the form \[E_{22}=E_{221}E_{222}.\] There is a unique associativity graph $A_2$ of the form \eqref{associativity-graph2} that satisfies
\[\begin{split}
(\dom_{A_2}) &= (E_1E_{21})(E_{221}E_{222}) = (\codom_{A_1}),\\
(\codom_{A_2}) &= ((E_1E_{21})E_{221})E_{222}.\end{split}\]
This procedure must stop after a finite number of steps because $(\dom_G)$ has finite length. When it stops, the right-most bracketed directed path $E_{2\cdots 2}$ has length $1$, so we can apply the induction hypothesis as above. This finishes the induction.
An argument dual to the above shows that $b_n^l(e_1,\ldots,e_n)$ and $(\codom_G)$ are connected by a canonical finite sequence of associativity graphs of the form \eqref{associativity-graph1}. Splicing the two vertical composites of associativity graphs together yields the desired vertical composite.
\end{proof}
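The induction in the proof is effectively an algorithm: repeatedly rotate a bracketing of the form $x(yz)$ to $(xy)z$ until the left normalized bracketing $b^l_n$ is reached, and connect two bracketings through that common normal form. The following minimal sketch mirrors this procedure, with bracketings encoded as nested pairs of edge names; the encoding and function names are ours, not from the text:

```python
def leaves(t):
    """The underlying directed path: the ordered tuple of edge names."""
    return (t,) if isinstance(t, str) else leaves(t[0]) + leaves(t[1])

def rotate_once(t):
    """Apply one associativity move x(yz) -> (xy)z at the leftmost available
    position, or return None if t is already left normalized."""
    if isinstance(t, str):
        return None
    left, right = t
    rotated = rotate_once(left)
    if rotated is not None:
        return (rotated, right)
    if not isinstance(right, str):
        return ((left, right[0]), right[1])   # x(yz) -> (xy)z
    return None

def to_left_normal(t):
    """The sequence of bracketings from t down to its left normalized form."""
    seq = [t]
    while (step := rotate_once(seq[-1])) is not None:
        seq.append(step)
    return seq

def connect(s, t):
    """Moving Brackets: rotations s -> b^l, then the reversed rotations
    b^l -> t (read backwards, these are inverse associativity moves)."""
    down, up = to_left_normal(s), to_left_normal(t)
    assert leaves(s) == leaves(t) and down[-1] == up[-1]
    return down + up[-2::-1]
```

For example, `connect(('e1', ('e2', ('e3', 'e4'))), (('e1', ('e2', 'e3')), 'e4'))` passes through the left normalized form `((('e1', 'e2'), 'e3'), 'e4')`, just as the proof routes both $(\dom_G)$ and $(\codom_G)$ through $b^l_n(e_1,\ldots,e_n)$.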
The next observation characterizes bracketed graphs that admit a composition scheme extension. Recall from \Cref{def:pasting-scheme} the concept of a pasting scheme presentation for an anchored graph.
\begin{theorem}\label{bicat-pasting-existence}\index{characterization of!a bracketed graph admitting a composition scheme extension}
For a bracketed graph $G$, the following two statements are equivalent.
\begin{enumerate}
\item $G$ admits a composition scheme extension.
\item The underlying anchored graph of $G$ admits a pasting scheme presentation.
\end{enumerate}
\end{theorem}
\begin{proof}
For the implication (1) $\Rightarrow$ (2), suppose $H = H_n\cdots H_1$ is a composition scheme. By definition, this is also a pasting scheme presentation for the underlying anchored graph of $H$ because each consistent graph $H_i$ has an underlying atomic graph. If
\[\{A_1,\ldots,A_j\} \subset \{H_1,\ldots,H_n\}\]
is a proper subsequence of associativity graphs, then the vertical composite of the remaining underlying atomic graphs in \[\{H_1,\ldots,H_n\} \setminus \{A_1,\ldots,A_j\}\] is defined. Moreover, it is a pasting scheme presentation for the underlying anchored graph of the bracketed graph $H/\{A_1,\ldots,A_j\}$.
For the implication (2) $\Rightarrow$ (1), suppose $G=G_m\cdots G_1$ is a pasting scheme presentation for the underlying anchored graph of $G$. For each $1\leq i \leq m$, denote by:
\begin{itemize}
\item $F_i$ the unique interior face of $G_i$;
\item $P_i$ the directed path in $G_i$ from $s_G$ to $s_{F_i}$;
\item $P_i'$ the directed path in $G_i$ from $t_{F_i}$ to $t_G$.
\end{itemize}
Equip the atomic graph $G_i$ with the consistent bracketing in which:
\begin{itemize}
\item $(\dom_{F_i})$ and $(\codom_{F_i})$ are bracketed as they are in $G$;
\item $(\dom_{G_i}) = \bigl((P_i)(\dom_{F_i})\bigr)(P_i')$;
\item $(\codom_{G_i}) = \bigl((P_i)(\codom_{F_i})\bigr)(P_i')$.
\end{itemize}
Here $(P_i)$ and $(P_i')$ are either empty or left normalized bracketings. By the Moving Brackets \Cref{moving-brackets}:
\begin{itemize}
\item Either
\[(\dom_G) = (\dom_{G_1}),\]
or else there is a canonical vertical composite of associativity graphs
\[A_{1k_1}\cdots A_{11}\]
with domain $(\dom_G)$ and codomain $(\dom_{G_1})$.
\item For each $2\leq i \leq m$, either
\[(\codom_{G_{i-1}}) = (\dom_{G_{i}}),\]
or else there is a canonical vertical composite of associativity graphs
\[A_{ik_i}\cdots A_{i1}\]
with domain $(\codom_{G_{i-1}})$ and codomain $(\dom_{G_{i}})$.
\item Either
\[(\codom_{G_m}) = (\codom_G),\]
or else there is a canonical vertical composite of associativity graphs
\[A_{m+1,k_{m+1}}\cdots A_{m+1,1}\]
with domain $(\codom_{G_m})$ and codomain $(\codom_G)$.
\end{itemize}
The corresponding vertical composite
\[H = \overbracket[0.5pt]{(A_{m+1,k_{m+1}}\cdots A_{m+1,1})}^{\text{or $\varnothing$}} G_m \cdots \overbracket[0.5pt]{(A_{2k_2}\cdots A_{21})}^{\text{or $\varnothing$}} G_1 \overbracket[0.5pt]{(A_{1k_1}\cdots A_{11})}^{\text{or $\varnothing$}}\]
is a composition scheme. Moreover, $G$ is obtained from $H$ by collapsing all the associativity graphs $A_{ij}$ for $1\leq i \leq m+1$ and $1 \leq j \leq k_i$.
\end{proof}
The previous proof actually proves slightly more.
\begin{itemize}
\item The proof of (1) $\Rightarrow$ (2) shows that, if $H=H_n\cdots H_1$ is a composition scheme extension of $G$ with the associativity graphs $\{A_1,\ldots,A_j\}$, then the underlying anchored graph of $G$ admits a pasting scheme presentation given by the vertical composite of the remaining underlying atomic graphs in $\{H_i\}_{1\leq i \leq n} \setminus \{A_i\}_{1\leq i \leq j}$.
\item The proof of (2) $\Rightarrow$ (1) shows that each pasting scheme presentation of the underlying anchored graph of $G$ can be canonically extended to a composition scheme extension of $G$.
\end{itemize}
\section{Bicategorical Pasting Theorem}
\label{sec:bicat-pasting-theorem}
In this section we prove that each pasting diagram in a bicategory has a uniquely defined composite. This is the bicategorical analogue of the $2$-categorical pasting \Cref{thm:2cat-pasting-theorem}.
\begin{motivation}
Given a pasting diagram $\phi$ in a bicategory $\B$ of shape $G$ that admits a composition scheme extension $G'$, similar to \Cref{ex:bicat-pasting-simple,ex:bicat-pasting-2}, we would like to define the composite of $\phi$ as the composite of the composition diagram $\phi'$ of shape $G'$. To define this composite precisely and to prove its uniqueness, we first need the following concepts and a version of Mac Lane's Coherence Theorem for monoidal categories in the current setting.\dqed
\end{motivation}
Recall that $a$ denotes the associator in our ambient bicategory $\B$.
\begin{definition}\label{def:associator-or-inverse}
Suppose $A$ is an associativity graph, and $\phi$ is a $1$-skeletal $A$-diagram in $\B$ as in \Cref{def:bicat-pasting-diagram}.
\begin{enumerate}
\item We call $\phi$ \index{extendable}\emph{extendable} if, using the notations in \Cref{def:associativity-graph}, for each $1\leq i \leq 3$ and each edge $e$ in $E_i$ with corresponding edge $e'$ in $E'_i$, there is an equality of $1$-cells \[\phi_e = \phi_{e'}.\] As defined in \eqref{phi-directed-path}, this implies the equality $\phi_{E_i} = \phi_{E'_i}$ of $1$-cells.
\item Suppose $\phi$ is extendable. Define the \index{canonical extension}\emph{canonical extension of $\phi$} as the $A$-diagram that assigns to the unique interior face $F$ of $A$ the $2$-cell
\[\begin{tikzcd}[column sep=huge]
\phi_{\dom_F} = \phi_{E_3}(\phi_{E_2}\phi_{E_1}) \ar{r}{\phi_F \,=\, a^{-1}} & (\phi_{E'_3}\phi_{E'_2})\phi_{E'_1} = \phi_{\codom_F}\end{tikzcd}\]
if $A$ satisfies \eqref{associativity-graph1}, or
\[\begin{tikzcd}[column sep=huge]
\phi_{\dom_F} = (\phi_{E_3}\phi_{E_2})\phi_{E_1} \ar{r}{\phi_F \,=\, a} & \phi_{E'_3}(\phi_{E'_2}\phi_{E'_1}) = \phi_{\codom_F}\end{tikzcd}\]
if $A$ satisfies \eqref{associativity-graph2}.\defmark
\end{enumerate}
\end{definition}
\begin{example}
In \Cref{ex:bicat-pasting-simple}, the composition diagram $\phi'$ involves a canonical extension of $\phi$ that uses $a^{-1}$. In \Cref{ex:bicat-pasting-2} the composition diagram $\phi''$ involves two canonical extensions of $\phi'$, one for each of $a$ and $a^{-1}$.\dqed
\end{example}
\begin{theorem}[Mac Lane's Coherence]\label{maclane-coherence}\index{Mac Lane's Coherence Theorem}\index{coherence!Mac Lane's}\index{Theorem!Mac Lane's Coherence}
Suppose:
\begin{enumerate}
\item $G=A_k\cdots A_1$ and $G'=A'_l\cdots A'_1$ are composition schemes such that:
\begin{itemize}
\item All the $A_i$ and $A_j'$ are associativity graphs.
\item $(\dom_G) = (\dom_{G'})$ and $(\codom_G) = (\codom_{G'})$ as bracketed directed paths.
\end{itemize}
\item $\phi$ is a $1$-skeletal $G$-diagram in $\B$ whose restriction to each $A_i$ is extendable. With the canonical extension of $\phi$ in each $A_i$, the resulting composition diagram of shape $G$ is denoted by $\phibar$.
\item $\phi'$ is a $1$-skeletal $G'$-diagram in $\B$ whose restriction to each $A'_j$ is extendable. With the canonical extension of $\phi'$ in each $A'_j$, the resulting composition diagram of shape $G'$ is denoted by $\phibar'$.
\item $\phi_{e} = \phi'_{e}$ for each edge $e$ in $\dom_G$.
\end{enumerate}
Then there is an equality \[|\phibar| = |\phibar'|\] of composite $2$-cells in $\B(\phi_{s_G},\phi_{t_G})$.
\end{theorem}
\begin{proof}
The desired equality is
\[\phibar_{A_k}\cdots\phibar_{A_1} = \phibar'_{A'_l}\cdots\phibar'_{A'_1}\] with
\begin{itemize}
\item each side a vertical composite as in \eqref{pasting-diagram-composite}, and
\item $\phibar_{A_i}$ and $\phibar'_{A'_j}$ horizontal composites as in \eqref{basic-2cell}.
\end{itemize}
The proof adapts Mac Lane's proof of his Coherence Theorem for monoidal categories in \cite[pages 166--168]{maclane}, which characterizes the free monoidal category on one object. The adaptation is as follows.
\begin{itemize}
\item Suppose the edges in $\dom_G$, and hence also in $\codom_G$, are $e_1,\ldots,e_n$ from the source $s_G$ to the sink $t_G$. By hypothesis there are equalities of $1$-cells:
\begin{itemize}
\item $\phi_{e_i} = \phi'_{e_i}$ for $1\leq i \leq n$;
\item $\phi_{\dom_G} = \phi'_{\dom_{G'}}$ and $\phi_{\codom_G} = \phi'_{\codom_{G'}}$.
\end{itemize}
Mac Lane considered $\otimes$-words involving $n$ objects in a monoidal category. Here we consider bracketings of the sequence of $1$-cells $(\phi_{e_1},\ldots,\phi_{e_n})$.
\item Identity morphisms within $\otimes$-words are replaced by identity $2$-cells in the ambient bicategory $\B$.
\item Each instance of the associativity isomorphism $\alpha$ in a monoidal category is replaced by a component of the associator $a$.
\item A basic arrow in Mac Lane's sense is a $\otimes$-word of length $n$ involving one instance of $\alpha$ and $n-1$ identity morphisms. Basic arrows are replaced by $2$-cells of the forms $\phibar_{A}$ or $\phibar'_{A}$ for an associativity graph $A$.
\item Composites of basic arrows are replaced by vertical composites of $2$-cells.
\item The bifunctoriality of the monoidal product is replaced by the functoriality of the horizontal composition in $\B$.
\item The pentagon axiom \eqref{pentagon-axiom} in a monoidal category is replaced by the pentagon axiom \eqref{bicat-pentagon} in the bicategory $\B$.
\end{itemize}
Mac Lane's proof shows that, given any two $\otimes$-words $u$ and $w$ of length $n$ involving the same sequence of objects, any two composites of basic arrows from $u$ to $w$ are equal. With the adaptation detailed above, Mac Lane's argument yields the desired equality of composite $2$-cells.
\end{proof}
In \eqref{pasting-diagram-composite} we defined the composite of a composition diagram. Generalizing that definition, we now define the composite of a pasting diagram in a bicategory.
\begin{definition}[Composite of a pasting diagram]\label{def:bicat-diagram-composite}\index{composite!pasting diagram in a bicategory}
Suppose that $\phi$ is a pasting diagram of shape $G$ in a bicategory $\B$, and suppose $H=H_n\cdots H_1$ is a composition scheme extension of $G$ with associativity graphs $\{A_1,\ldots,A_j\}$ as in \Cref{def:bicat-pasting-extension}. The \emph{composite} of $\phi$ with respect to $H=H_n\cdots H_1$, denoted by $|\phi|$, is defined as follows.
\begin{enumerate}
\item First define the composition diagram $\phi_H$ of shape $H$ by the following data:
\begin{itemize}
\item The restriction of $\phi_H$ to
\begin{itemize}
\item $(\dom_H)=(\dom_G)$,
\item $(\codom_H)=(\codom_G)$, and
\item the interior faces in $\{H_1,\ldots,H_n\} \setminus \{A_1,\ldots,A_j\}$ (i.e., in $G$),
\end{itemize}
agrees with $\phi$.
\item For each $1\leq i \leq j$, the restriction of $\phi_H$ to the associativity graph $A_i$ is extendable. The value of $\phi_H$ at the unique interior face of $A_i$ is given as in \Cref{def:associator-or-inverse}(2); i.e., it is either a component of the associator $a$ or its inverse.
\end{itemize}
\item Now we define the $2$-cell\label{notation:bipasting-comp} $|\phi|$ in $\B(\phi_{s_G},\phi_{t_G})$ by
\[\begin{tikzcd}[column sep=large]
\phi_{\dom_G} \ar{r}{|\phi| \,=\, |\phi_H|} & \phi_{\codom_G},\end{tikzcd}\] where $|\phi_H|$ is the composite of $\phi_H$ as in \eqref{pasting-diagram-composite} with respect to $H_n\cdots H_1$.
\end{enumerate}
This finishes the definition of the composite of $\phi$.
\end{definition}
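Unwinding the two steps above, the composite $|\phi|$ is the vertical composite of $2$-cells
\[|\phi| = |\phi_H| = \bigl(\phi_H\bigr)_{H_n}\cdots\bigl(\phi_H\bigr)_{H_1}\]
as in \eqref{pasting-diagram-composite}, where each $\bigl(\phi_H\bigr)_{H_i}$ is a horizontal composite as in \eqref{basic-2cell}. The interior faces coming from $G$ contribute the $2$-cells of $\phi$, while each associativity graph $A_i$ contributes a whiskered component of the associator $a$ or its inverse.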
Next is the main result of this section, which is a bicategorical version of the $2$-categorical Pasting \Cref{thm:2cat-pasting-theorem}.
\begin{theorem}[Bicategorical Pasting]\label{thm:bicat-pasting-theorem}\index{pasting theorem!bicategorical}\index{Theorem!Bicategorical Pasting}
Suppose $\B$ is a bicategory. Every pasting diagram in $\B$ has a unique composite.
\end{theorem}
\begin{proof}
Suppose $G$ is a bracketed graph whose underlying anchored graph
admits a pasting scheme presentation, and suppose $\phi$ is a pasting
diagram of shape $G$ in $\B$. Existence of a composite follows from
\cref{bicat-pasting-existence}: $G$ has a composition scheme
extension $H$, and $\phi$ has a composite with respect to $H$ as
described in \cref{def:bicat-diagram-composite}.
Now we turn to uniqueness. Suppose we are given two composition scheme extensions of $G$, say
\begin{itemize}
\item $H= H_{j+n}\cdots H_1$ with associativity graphs $\{A_1,\ldots,A_j\}$ and
\item $H' = H'_{k+n}\cdots H'_1$ with associativity graphs $\{A'_1,\ldots,A'_k\}$
\end{itemize}
as in \Cref{def:bicat-pasting-extension}. We want to show that the composites of $\phi$ with respect to $H= H_{j+n}\cdots H_1$ and $H' = H'_{k+n}\cdots H'_1$ are the same. The proof is an induction on the number $n$ of interior faces of $G$.
The case $n=1$ follows from
\begin{enumerate}[label=(\roman*)]
\item \Cref{moving-brackets},
\item Mac Lane's Coherence \Cref{maclane-coherence}, and
\item the naturality of the associator $a$ and its inverse
\end{enumerate}
as follows. Suppose the unique interior face $F$ of $G$ appears in $H_p$ and $H'_q$ for some $1\leq p \leq j+1$ and $1\leq q \leq k+1$. Since $H_p$ and $H_q'$ are consistent graphs, by \eqref{consistent-bracketing} there exist bracketings $b$ and $b'$ of the same length, say $m$, and an index $1\leq l \leq m$ such that
\[\begin{split}
(\dom_{H_p}) &= b\bigl(e_1,\ldots,e_{l-1}, (\dom_F), e_{l+1},\ldots,e_m\bigr),\\
(\codom_{H_p}) &= b\bigl(e_1,\ldots,e_{l-1}, (\codom_F), e_{l+1},\ldots,e_m\bigr),\\
(\dom_{H'_q}) &= b'\bigl(e_1,\ldots,e_{l-1}, (\dom_F), e_{l+1},\ldots,e_m\bigr),\\
(\codom_{H'_q}) &= b'\bigl(e_1,\ldots,e_{l-1}, (\codom_F), e_{l+1},\ldots,e_m\bigr).
\end{split}\]
There is a unique bracketed atomic graph $C$ with interior face $C_F$ such that
\begin{itemize}
\item $(\dom_C) = (\dom_{C_F}) = (\dom_{H_p})$ and
\item $(\codom_C) = (\codom_{C_F}) = (\dom_{H'_q})$.
\end{itemize}
By (i) there exists a canonical vertical composite
\[C'=C_r \cdots C_1\] of associativity graphs $C_1,\ldots,C_r$ such that:
\begin{itemize}
\item $(\dom_{C'}) = (\dom_{C_1}) = (\dom_C)$.
\item $(\codom_{C'}) = (\codom_{C_r}) = (\codom_C)$.
\item No $C_i$ changes the bracketing of $(\dom_F)$.
\end{itemize}
Indeed, since the bracketed directed path $(\dom_F)$ appears as the $l$th entry in both $b$ and $b'$, we can first regard $(\dom_F)$ as a single edge, say $e_l$, in $C$. Applying (i) in that setting gives a vertical composite of associativity graphs with domain $b(e_1,\ldots,e_m)$ and codomain $b'(e_1,\ldots,e_m)$. Then we substitute $(\dom_F)$ for each occurrence of $e_l$ in the resulting vertical composite.
The sequence of edges \[\bigl\{e_1,\ldots,e_{l-1},\dom_F,e_{l+1},\ldots,e_m\bigr\}\] in $\dom_{H_p}$ is the same as the corresponding sequences of edges in $\dom_G$ and $\dom_{H'_q}$. So the underlying $1$-skeletal $G$-diagram of $\phi$ uniquely determines a composition diagram $\phi_{C'}$ of shape $C'$, in which every interior face is assigned either a component of the associator $a$ or its inverse, corresponding to the two cases \eqref{associativity-graph2} and \eqref{associativity-graph1}. Its composite with respect to the composition scheme presentation $C_r\cdots C_1$ is denoted by $|\phi_{C'}|$. Similar remarks apply with $\codom_F$, $\codom_{H_p}$, $\codom_G$, and $\codom_{H'_q}$ replacing $\dom_F$, $\dom_{H_p}$, $\dom_G$, and $\dom_{H'_q}$, respectively.
Moreover, since $n=1$, by the definitions of $H_p$ and $H_q'$ there are equalities
\[\begin{split}
\{H_1,\ldots,H_{j+1}\} &= \{A_1,\ldots,A_{p-1},H_p,A_p,\ldots,A_j\},\\
\{H'_1,\ldots,H'_{k+1}\} &= \{A'_1,\ldots,A'_{q-1},H'_q,A'_q,\ldots,A'_k\}.
\end{split}\]
Consider the following diagram in $\B(\phi_{s_G},\phi_{t_G})$.
\[\begin{tikzpicture}[commutative diagrams/every diagram, scale=1.5]
\node (dH) at (-1,3) {$\phi_{\dom_H}$};
\node (dHprime) at (1,3) {$\phi_{\dom_{H'}}$};
\node (dHp) at (-1,2) {$\phi_{\dom_{H_p}}$};
\node (dHq) at (1,2) {$\phi_{\dom_{H_q'}}$};
\node (cHp) at (-1,1) {$\phi_{\codom_{H_p}}$};
\node (cHq) at (1,1) {$\phi_{\codom_{H'_q}}$};
\node (cH) at (-1,0) {$\phi_{\codom_H}$};
\node (cHprime) at (1,0) {$\phi_{\codom_{H'}}$};
\draw [arrow] (dH) to node{\scriptsize{$1_{\phi_{\dom_G}}$}} (dHprime);
\draw [arrow] (dH) to node[swap]{\scriptsize{$\phi_{A_{p-1}}\cdots\phi_{A_1}$}} (dHp);
\draw [arrow] (dHprime) to node{\scriptsize{$\phi_{A'_{q-1}}\cdots \phi_{A'_1}$}} (dHq);
\draw [arrow] (dHp) to node{\scriptsize{$|\phi_{C'}|$}} (dHq);
\draw [arrow] (dHp) to node[swap]{\scriptsize{$\phi_{H_p}$}} (cHp);
\draw [arrow] (dHq) to node{\scriptsize{$\phi_{H'_q}$}} (cHq);
\draw [arrow] (cHp) to node{\scriptsize{$|\phi_{C'}|$}} (cHq);
\draw [arrow] (cHp) to node[swap]{\scriptsize{$\phi_{A_j}\cdots\phi_{A_p}$}} (cH);
\draw [arrow] (cHq) to node{\scriptsize{$\phi_{A'_k}\cdots \phi_{A'_q}$}} (cHprime);
\draw [arrow] (cH) to node{\scriptsize{$1_{\phi_{\codom_G}}$}} (cHprime);
\end{tikzpicture}\]
The left-bottom boundary and the top-right boundary are the composites of $\phi$ with respect to $H= H_{j+1}\cdots H_1$ and $H' = H'_{k+1}\cdots H'_1$, respectively. The top and bottom rectangles are commutative by (ii). The middle rectangle is commutative by (iii). This proves the initial case $n=1$.
Suppose $n\geq 2$. We consider the two interior faces of $G$, say $F_1$ and $F_1'$, that appear first in the lists
\[\{H_1,\ldots,H_{j+n}\}\setminus \{A_1,\ldots,A_j\} \andspace \{H'_1,\ldots,H'_{k+n}\} \setminus \{A'_1,\ldots,A'_k\},\] respectively. If $F_1 = F_1'$, then, similar to the case $n=1$, the two composites of $\phi$ are equal by (i)--(iii) and the induction hypothesis.
For the other case, suppose $F_1 \not= F_1'$. Since $G$ has an underlying anchored graph, by \Cref{atomic-domain} $F_1$ and $F'_1$ do not intersect, except possibly for $t_{F_1}=s_{F'_1}$ or $t_{F'_1}=s_{F_1}$. Similar to the $n=1$ case, by (i)--(iii) and the induction hypothesis, we are reduced to the case with $n=2$, $j=k=0$, the underlying anchored graph of $G$ as displayed below with each edge representing a directed path,
\[\begin{tikzpicture}[scale=1]
\node [plain] (s) {$s_G$}; \node [plain, right=1cm of s] (s1) {$s_{F_1}$};
\node [right=.5cm of s1] () {$F_1$};
\node [plain, right=1.5cm of s1] (t1) {$t_{F_1}$};
\node [plain, right=1cm of t1] (s2) {$s_{F'_1}$};
\node [right=.5cm of s2] () {$F'_1$};
\node [plain, right=1.5cm of s2] (t2) {$t_{F'_1}$};
\node [plain, right=1cm of t2] (t) {$t_{G}$};
\draw [arrow] (s) to node{$Q_1$} (s1);
\draw [arrow, bend left=30] (s1) to node{$\dom_{F_1}$} (t1);
\draw [arrow, bend right=30] (s1) to node[swap]{$\codom_{F_1}$} (t1);
\draw [arrow] (t1) to node{$Q_2$} (s2);
\draw [arrow, bend left=30] (s2) to node{$\dom_{F'_1}$} (t2);
\draw [arrow, bend right=30] (s2) to node[swap]{$\codom_{F'_1}$} (t2);
\draw [arrow] (t2) to node{$Q_3$} (t);
\end{tikzpicture}\]
and
\[\begin{split}
(\dom_G) &= b''\bigl((Q_1), (\dom_{F_1}), (Q_2), (\dom_{F'_1}), (Q_3)\bigr),\\
(\codom_G) &= b''\bigl((Q_1), (\codom_{F_1}), (Q_2), (\codom_{F'_1}), (Q_3)\bigr)
\end{split}\]
for some bracketing $b''$. In this case, the equality of the two composites of $\phi$ follows from computation similar to \eqref{pasting-computation} and the bicategory axioms \eqref{hom-category-axioms}, \eqref{bicat-c-id}, and \eqref{middle-four}.
For example, suppose $Q_1$ and $Q_3$ are trivial, with $Q_2$ non-trivial, and $b'' = -(- -)$. In this case, we need to check the commutativity of the outermost diagram below, in which identity $2$-cells are all written as $1$.
\[\begin{tikzcd}[column sep=large, row sep=huge, shorten <=-3pt]
(\phi_{\dom_{F_1'}}\phi_{Q_2})\phi_{\dom_{F_1}} \ar{d}[swap]{(1*1)*\phi_{F_1}}
\ar[shorten >=-3pt]{r}{(\phi_{F_1'}*1)*1} \ar[shorten >=-3pt]{dr}[description]{(\phi_{F_1'}*1)*\phi_{F_1}}
& (\phi_{\codom_{F_1'}}\phi_{Q_2})\phi_{\dom_{F_1}} \ar{d}{(1*1)*\phi_{F_1}}\\
(\phi_{\dom_{F_1'}}\phi_{Q_2})\phi_{\codom_{F_1}} \ar[shorten >=-3pt]{r}[swap]{(\phi_{F_1'}*1)*1}
& (\phi_{\codom_{F_1'}}\phi_{Q_2})\phi_{\codom_{F_1}}
\end{tikzcd}\]
Both triangles are commutative by \eqref{hom-category-axioms} and \eqref{middle-four}. The other cases follow from similar computation.
\end{proof}
In summary, for a $G$-diagram $\phi$ in a bicategory $\B$ for some bracketed graph $G$:
\begin{enumerate}
\item $\phi$ has a composite if $G$ admits at least one composition scheme extension. Such bracketed graphs are characterized in \Cref{bicat-pasting-existence}.
\item By the Bicategorical Pasting \Cref{thm:bicat-pasting-theorem}, the composite of $\phi$ is the same regardless of which composition scheme extension of $G$ is used.
\end{enumerate}
For the rest of this book, the Bicategorical Pasting \Cref{thm:bicat-pasting-theorem} will be used along with the following conventions.
\begin{convention}\label{conv:boundary-bracketing}\index{pasting diagram!convention}
Suppose $\phi$ is a pasting diagram of shape $G$ in a bicategory $\B$ as in \Cref{def:bicat-diagram-composite}.
\begin{enumerate}
\item Suppose $\dom_G$ consists of the edges $(e_1,\ldots,e_n)$ in this order from the source $s_G$ to the sink $t_G$.
\begin{itemize}
\item If $n \leq 2$, then $\dom_G$ has a unique bracketing.
\item If $n \geq 3$ and if we draw $\phi$ without explicitly specifying the bracketings for $\dom_G$, then the left-normalized bracketing
\[(\cdots(\phi_{e_n}\phi_{e_{n-1}}) \cdots )\phi_{e_1}\]
for the composable $1$-cells $(\phi_{e_n},\ldots,\phi_{e_1})$ is used.
\end{itemize}
The same convention applies to the codomain of $G$.
\item
We say that two pasting diagrams are \emph{equal} if their composites, in the sense of \Cref{def:bicat-diagram-composite}, are equal as $2$-cells in $\B$. This is well-defined by the Bicategorical Pasting \Cref{thm:bicat-pasting-theorem}.\dqed
\end{enumerate}
\end{convention}
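For example, if $\dom_G$ consists of three edges $(e_1,e_2,e_3)$ with $1$-cells $f = \phi_{e_1}$, $g = \phi_{e_2}$, and $h = \phi_{e_3}$, then this convention reads the unlabeled domain as the left-normalized composite
\[\phi_{\dom_G} = (\phi_{e_3}\phi_{e_2})\phi_{e_1} = (hg)f,\]
and similarly for $\codom_G$.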
\section{String Diagrams}\label{sec:string-diagrams}
In this section we discuss string diagrams. They provide another way to visually represent pasting diagrams in $2$-categories and bicategories, in which the roles of vertices and regions are interchanged. Since we have already discussed pasting diagrams in detail, our discussion of string diagrams is informal and consists mainly of examples. We begin with $2$-categories. The passage from pasting diagrams to string diagrams is based on the following rule.
\begin{definition}\label{def:string-atomic}
Suppose $\phi$ is a pasting diagram in a $2$-category $\A$ for some atomic graph $A$ with:
\begin{itemize}
\item source $s$, sink $t$, and interior face $F$;
\item $\dom_F=v_0f_1v_1\cdots f_mv_m$ from $s_F=v_0$ to $t_F=v_m$;
\item $\codom_F=u_0g_1u_1\cdots g_nu_n$ from $s_F=u_0$ to $t_F=u_n$.
\end{itemize}
The \emph{string diagram}\index{string diagram!pasting diagram in a 2-category} of $\phi$ is specified as follows.
\begin{enumerate}
\item For each vertex $v$ in $A$, the object $\phi_v$ is represented by an open, connected, bounded plane region, with $\phi_s$ the left-most region and $\phi_t$ the right-most region. The regions for distinct vertices do not intersect.
\item For each edge $f$ in $A$ with tail $u$ and head $v$, the $1$-cell $\phi_f : \phi_u \to \phi_v$ is represented as a non-self-intersecting line segment, called a \emph{string}, that separates the regions for $\phi_u$ and $\phi_v$. The strings for distinct edges do not intersect.
\item The $2$-cell $\phi_F : \phi_{\dom_F} \to \phi_{\codom_F}$ is represented by a rectangular box in the plane with the following properties:
\begin{itemize}
\item The left boundary of $\phi_F$ is part of the boundary for the region $\phi_{s_F}$.
\item The right boundary of $\phi_F$ is part of the boundary for the region $\phi_{t_F}$.
\item The top boundary of $\phi_F$ intersects one end of each string $\phi_{f_i}$ for $1\leq i \leq m$ transversely from left to right.
\item The bottom boundary of $\phi_F$ intersects one end of each string $\phi_{g_j}$ for $1\leq j \leq n$ transversely from left to right.
\end{itemize}
\item An outermost rectangular box $\phi_{\ext_A}$, called the \emph{exterior box}, is drawn with the following properties:
\begin{itemize}
\item The box $\phi_F$ is contained in the interior of $\phi_{\ext_A}$.
\item The left boundary of $\phi_{\ext_A}$ is part of the boundary for the region $\phi_{s}$.
\item The right boundary of $\phi_{\ext_A}$ is part of the boundary for the region $\phi_{t}$.
\item The top boundary of $\phi_{\ext_A}$ intersects the other end of each string $\phi_{f_i}$ for $1\leq i \leq m$ transversely from left to right.
\item The bottom boundary of $\phi_{\ext_A}$ intersects the other end of each string $\phi_{g_j}$ for $1\leq j \leq n$ transversely from left to right.\defmark
\end{itemize}
\end{enumerate}
\end{definition}
\begin{definition}\label{def:string-general}
Suppose $\phi$ is a pasting diagram in a $2$-category $\A$ for some pasting scheme $G=G_n\cdots G_1$ as in \Cref{def:pasting-scheme}. The \emph{string diagram} of $\phi$ is obtained by the following steps:
\begin{enumerate}
\item Vertically stack the string diagrams of $\phi_{G_1},\ldots,\phi_{G_n}$ from top to bottom, and proportionally scale them in such a way that they all have the same width.
\item Erase the overlapping horizontal boundaries of $\phi_{\ext_{G_i}}$ and $\phi_{\ext_{G_{i+1}}}$ for $1\leq i \leq n-1$.
\item Connect each string in $\phi_{\codom_{G_i}}$ with its corresponding string in $\phi_{\dom_{G_{i+1}}}$.\defmark
\end{enumerate}
\end{definition}
\begin{example}\label{ex:atom-string}
Consider the pasting diagram $\phi$ in a $2$-category $\A$ of shape $G$ in \Cref{ex:atomic-pasting}, reproduced on the left below.
\[\begin{tikzpicture}[scale=1.2, shorten >=-2pt, shorten <=-2pt]
\node at (0,1.5) {pasting diagram $\phi$};
\node (s) at (-2,0) {$S$}; \node (s1) at (-1,0) {$S_F$};
\node [font=\LARGE] at (-.2,-.1) {\rotatebox{270}{$\Rightarrow$}};
\node at (.1,-.1) {$\theta$}; \node (u) at (0,.6) {$U$};
\node (v) at (-.6,-.8) {$V$}; \node (w) at (.6,-.8) {$W$};
\node (t1) at (1,0) {$T_F$}; \node (t) at (2,0) {$T$};
\draw [arrow] (s) to node{\scriptsize{$f$}} (s1);
\draw [arrow] (s1) to node{\scriptsize{$h_1$}} (u);
\draw [arrow] (s1) to node[swap]{\scriptsize{$h_3$}} (v);
\draw [arrow] (v) to node[swap]{\scriptsize{$h_4$}} (w);
\draw [arrow] (u) to node{\scriptsize{$h_2$}} (t1);
\draw [arrow] (w) to node[swap]{\scriptsize{$h_5$}} (t1);
\draw [arrow] (t1) to node{\scriptsize{$g$}} (t);
\end{tikzpicture}\qquad
\begin{tikzpicture}[xscale=.8, yscale=.7]
\node at (3,4) {string diagram of $\phi$};
\draw [thick] (2,1) rectangle (4,2); \node at (3,1.5) {$\theta$};
\draw [thick, lightgray] (0,0) rectangle (6,3);
\draw [thick] (1,0) -- (1,3); \draw[thick] (2.2,2) -- (2.2,3);
\draw[thick] (3.8,2) -- (3.8,3); \draw[thick] (5,0) -- (5,3);
\node at (1,3.3) {\scriptsize{$f$}}; \node at (2.2,3.3) {\scriptsize{$h_1$}};
\node at (3.8,3.3) {\scriptsize{$h_2$}}; \node at (5,3.3) {\scriptsize{$g$}};
\draw[thick] (2.1,0) -- (2.1,1); \draw[thick] (3,0) -- (3,1); \draw[thick] (3.9,0) -- (3.9,1);
\node at (1,-.3) {\scriptsize{$f$}}; \node at (2.1,-.3) {\scriptsize{$h_3$}};
\node at (3,-.3) {\scriptsize{$h_4$}}; \node at (3.9,-.3) {\scriptsize{$h_5$}};
\node at (5,-.3) {\scriptsize{$g$}};
\node at (.5,1.5) {$S$}; \node at (1.5,1.5) {$S_F$}; \node at (3,2.5) {$U$};
\node at (2.5,.5) {$V$}; \node at (3.5,.5) {$W$}; \node at (4.5,1.5) {$T_F$};
\node at (5.5,1.5) {$T$};
\end{tikzpicture}\]
Its string diagram is on the right above, in which the outermost gray box is $\phi_{\ext_G}$. Reading the string diagram from left to right---$f$, followed by $\theta$, and then $g$---yields the composite $|\phi| = 1_g * \theta * 1_f$.\dqed
\end{example}
\begin{example}\label{ex:gprime-string}
Consider the pasting diagram $\phi$ in a $2$-category $\A$ of shape $G'=G'_3G'_2G'_1$ in \Cref{ex:different-embedding-pasting}, reproduced on the left below.
\[\begin{tikzpicture}[scale=.7]
\node at (0,3.3) {pasting diagram $\phi$};
\node (v) at (-3,0) {$V$}; \node (s) at (0,0) {$S$};
\node (t) at (3,0) {$T$}; \node (u) at (0,2) {$U$};
\node (w) at (0,-2) {$W$}; \node () at (0,-3){};
\node [font=\LARGE] at (-.65,1) {\rotatebox{-45}{$\Rightarrow$}};
\node at (-.9,.8) {\scriptsize{$\theta_{1}$}};
\node [font=\LARGE] at (1,0) {\rotatebox{-90}{$\Rightarrow$}};
\node at (1.4,0) {\scriptsize{$\theta_{2}$}};
\node [font=\LARGE] at (-.7,-.7) {\rotatebox{-135}{$\Rightarrow$}};
\node at (-1.05,-.5) {\scriptsize{$\theta_{3}$}};
\draw [arrow] (v) to node{\scriptsize{$f_1$}} (u);
\draw [arrow] (u) to node{\scriptsize{$f_2$}} (t);
\draw [arrow] (v) to node[swap]{\scriptsize{$g_1$}} (w);
\draw [arrow] (w) to node[swap]{\scriptsize{$g_2$}} (t);
\draw [arrow] (v) to node{\scriptsize{$h_1$}} (s);
\draw [arrow] (s) to node[swap]{\scriptsize{$h_2$}} (u);
\draw [arrow] (s) to node{\scriptsize{$h_3$}} (w);
\end{tikzpicture}\qquad\qquad
\begin{tikzpicture}[xscale=1.4]
\node at (1.5,4.5) {string diagram of $\phi$};
\draw [thick] (.5,2.9) rectangle (1.5,3.4); \node at (1,3.15) {$\theta_1$};
\draw [thick] (1.3,1.7) rectangle (2.3,2.2); \node at (1.8,1.95) {$\theta_2$};
\draw [thick] (.5,.5) rectangle (1.5,1); \node at (1,.75) {$\theta_3$};
\draw [thick, lightgray] (0,0) rectangle (2.8,3.9);
\draw [thick, dashed, lightgray] (0,2.45) -- (2.8,2.45);
\draw [thick, dashed, lightgray] (0,1.4) -- (2.8,1.4);
\draw [thick] (1,3.9) -- (1,3.4); \draw[thick] (2.2,3.9) -- (2.2,2.2);
\node at (1,4.1) {\scriptsize{$f_1$}}; \node at (2.2,4.1) {\scriptsize{$f_2$}};
\draw [thick] (.6,2.9) -- (.6,1); \node at (.8,1.2) {\scriptsize{$h_1$}};
\draw[thick] (1.4,2.9) -- (1.4,2.2); \node at (1.6,2.7) {\scriptsize{$h_2$}};
\draw [thick] (1.4,1.7) -- (1.4,1); \node at (1.6,1.2) {\scriptsize{$h_3$}};
\draw [thick] (1,.5) -- (1,0); \draw[thick] (2.2,1.7) -- (2.2,0);
\node at (1,-.2) {\scriptsize{$g_1$}}; \node at (2.2,-.2) {\scriptsize{$g_2$}};
\node at (.3,1.95) {$V$}; \node at (.95,1.95) {$S$}; \node at (1.6,3.65) {$U$};
\node at (1.6,.25) {$W$}; \node at (2.5,.75) {$T$};
\node at (3.2,3.15) {$\phi_{G'_1}$}; \node at (3.2,1.95) {$\phi_{G'_2}$};
\node at (3.2,.75) {$\phi_{G'_3}$};
\end{tikzpicture}\]
Its string diagram is on the right above. The two dashed gray lines are the overlapping horizontal boundaries erased in step (2) in \Cref{def:string-general}, and are not part of the string diagram of $\phi$. Reading the string diagram from top to bottom, and within each $\phi_{G'_i}$ from left to right, yields the composite $|\phi| = \phi_{G'_3}\phi_{G'_2}\phi_{G'_1}$.\dqed
\end{example}
\begin{example}\label{ex:pasting-proof-string}
The equality \eqref{pasting-computation-proof} corresponds to the equality
\[\begin{tikzpicture}
\draw[thick] (.5,1.75) rectangle (1.5,2.25); \draw[thick] (2.5,.75) rectangle (3.5,1.25);
\draw[thick,lightgray] (0,.2) rectangle (4,2.8);
\node at (1,2) {$\phi_{F_1}$}; \node at (3,1) {$\phi_{F'_1}$};
\node at (.5,1) {$\phi_{s_G}$}; \node at (1.5,1) {$\phi_{t_{F_1}}$};
\node at (2.5,2) {$\phi_{s_{F'_1}}$}; \node at (3.5,2) {$\phi_{t_G}$};
\draw [thick] (1,2.8) -- (1,2.25); \draw [thick] (1,1.75) -- (1,.2);
\draw [thick] (2,2.8) -- (2,.2);
\draw [thick] (3,2.8) -- (3,1.25); \draw [thick] (3,.75) -- (3,.2);
\node at (.9,3.1) {$\phi_{\dom_{F_1}}$};
\node at (2,3.1) {$\phi_{Q_2}$};
\node at (3.1,3.1) {$\phi_{\dom_{F'_1}}$};
\node at (.9,-.1) {$\phi_{\codom_{F_1}}$};
\node at (2,-.1) {$\phi_{Q_2}$};
\node at (3.1,-.1) {$\phi_{\codom_{F'_1}}$};
\node at (5,1.5) {\LARGE{$=$}};
\draw[thick] (6.5,.75) rectangle (7.5,1.25); \draw[thick] (8.5,1.75) rectangle (9.5,2.25);
\draw[thick,lightgray] (6,.2) rectangle (10,2.8);
\node at (7,1) {$\phi_{F_1}$}; \node at (9,2) {$\phi_{F'_1}$};
\node at (6.5,2) {$\phi_{s_G}$}; \node at (7.5,2) {$\phi_{t_{F_1}}$};
\node at (8.5,1) {$\phi_{s_{F'_1}}$}; \node at (9.5,1) {$\phi_{t_G}$};
\draw [thick] (7,2.8) -- (7,1.25); \draw [thick] (7,.75) -- (7,.2); \draw [thick] (8,2.8) -- (8,.2);
\draw [thick] (9,2.8) -- (9,2.25); \draw [thick] (9,1.75) -- (9,.2);
\node at (6.9,3.1) {$\phi_{\dom_{F_1}}$};
\node at (8,3.1) {$\phi_{Q_2}$};
\node at (9.1,3.1) {$\phi_{\dom_{F'_1}}$};
\node at (6.9,-.1) {$\phi_{\codom_{F_1}}$};
\node at (8,-.1) {$\phi_{Q_2}$};
\node at (9.1,-.1) {$\phi_{\codom_{F'_1}}$};
\end{tikzpicture}\]
of string diagrams, where we assume that each of $\dom_{F_1}$, $\codom_{F_1}$, $Q_2$, $\dom_{F_1'}$, and $\codom_{F_1'}$ has a single edge for ease of drawing. In the general case, each of them consists of a finite number of parallel strings.\dqed
\end{example}
Next we turn to string diagrams in bicategories.
\begin{definition}\label{def:string-consistent}
Suppose $\phi$ is a composition diagram in a bicategory $\B$ for some consistent graph $G$ with interior face $F$ as in \Cref{def:consistent-bracketing}. Using the underlying atomic graph of $G$, the \emph{string diagram}\index{string diagram!composition diagram in a bicategory} of $\phi$ is specified as in \Cref{def:string-atomic} along with the bracketings of $\dom_G$, $\codom_G$, $\dom_F$, and $\codom_F$.
\end{definition}
\begin{definition}\label{def:string-bicat-general}
Suppose $\phi$ is a composition diagram in a bicategory $\B$ for some composition scheme $G=G_n\cdots G_1$ as in \Cref{def:bicategorical-pasting-scheme,def:bicat-pasting-diagram}, in which $G_i$ has interior face $F_i$. The underlying anchored graph of $G$ has a pasting scheme presentation $G_n\cdots G_1$ with the underlying anchored graphs of the $G_i$'s. The \emph{string diagram} of $\phi$ is specified as in \Cref{def:string-general} along with the bracketings of $\dom_G = \dom_{G_1}$, $\codom_G=\codom_{G_n}$, $\dom_{F_i}$, and $\codom_{F_i}$ for $1\leq i \leq n$.
\end{definition}
\begin{example}\label{ex:bicat-string-simple}
Consider the composition diagram $\phi'$ in $\B$ of shape $G' = G_2G_0G_1$ in \Cref{ex:bicat-pasting-simple}, reproduced on the left below.
\[\begin{tikzpicture}[scale=1]
\node () at (1.5,1.6) {pasting diagram $\phi'$};
\node (X) at (0,0) {$X$}; \node (Y) at (1.5,0) {$Y$}; \node (W) at (3,0) {$W$};
\node (Y') at (0,-1) {$Y$}; \node (W') at (1.5,-1) {$W$}; \node (Z) at (3,-1) {$Z$};
\node[font=\Large] at (1.4,.5) {\rotatebox{-90}{$\Rightarrow$}};
\node at (1.7,.5) {\scriptsize{$\alpha$}};
\node[font=\Large] at (1.4,-.5) {\rotatebox{-90}{$\Rightarrow$}};
\node at (1.8,-.5) {\scriptsize{$a^{-1}$}};
\node[font=\Large] at (1.4,-1.5) {\rotatebox{-90}{$\Rightarrow$}};
\node at (1.7,-1.5) {\scriptsize{$\beta$}};
\path[commutative diagrams/.cd, every arrow, every label]
(X) edge[out=60,in=120] node[above] {$f$} (W)
(X) edge node[above] {$g$} (Y)
(Y) edge node[above] {$h$} (W)
(W) edge node[right] {$i$} (Z)
(X) edge node[left] {$g$} (Y')
(Y') edge node[above] {$h$} (W')
(W') edge node[above] {$i$} (Z)
(Y') edge[out=-60,in=240] node[below] {$j$} (Z);
\end{tikzpicture}\qquad\qquad
\begin{tikzpicture}[xscale=1.2, yscale=.8]
\node at (1.5,4.7) {string diagram of $\phi'$};
\draw[thick] (.5,3) rectangle (1.5,3.5); \node at (1,3.25) {\scalebox{.8}{$\alpha$}};
\draw[thick] (.5,1.7) rectangle (2.5,2.2); \node at (1.5,1.95) {\scalebox{.8}{$a^{-1}$}};
\draw[thick] (1.5,.5) rectangle (2.5,1); \node at (2,.75) {\scalebox{.8}{$\beta$}};
\draw[thick,lightgray] (0,0) rectangle (3,4);
\draw [thick] (1,4) -- (1,3.5); \node at (1,4.2) {\scriptsize{$f$}};
\draw [thick] (.7,3) -- (.7,2.2); \node at (.5,2.6) {\scriptsize{$(g$}};
\draw[thick] (1.3,3) -- (1.3,2.2); \node at (1.5,2.6) {\scriptsize{$h)$}};
\draw[thick] (2.3,4) -- (2.3,2.2); \node at (2.3,4.2) {\scriptsize{$i$}};
\draw [thick] (.7,1.7) -- (.7,0); \node at (.7,-.2) {\scriptsize{$g$}};
\draw[thick] (1.7,1.7) -- (1.7,1); \node at (1.5,1.35) {\scriptsize{$(h$}};
\draw[thick] (2.3,1.7) -- (2.3,1); \node at (2.5,1.35) {\scriptsize{$i)$}};
\draw[thick] (2,.5) -- (2,0); \node at (2,-.2) {\scriptsize{$j$}};
\node at (.3,.75) {$X$}; \node at (2.7,3.25) {$Z$};
\node at (1,2.6) {$Y$}; \node at (1.15,.75) {$Y$};
\node at (1.9,3.25) {$W$}; \node at (2,1.35) {$W$};
\end{tikzpicture}\]
Its string diagram is on the right above. The bracketings for the (co)domain of $a^{-1}$ are indicated by the parentheses around $g$, $h$, and $i$.\dqed
\end{example}
\begin{example}\label{ex:bicat-string-2}
The composition diagram $\phi''$ in $\B$ of shape $G''$ in \Cref{ex:bicat-pasting-2} is reproduced on the left below.
\[\begin{tikzpicture}[x=20mm,y=20mm]
\newcommand{\core}{
\draw[0cell]
(0,0) node (v) {V}
(1,0) node (s) {S}
(1.75,.5) node (u) {U}
(1.75,-.5) node (w) {W}
(2.5,0) node (t) {T}
;
\draw[1cell]
(s) edge[swap] node {h_2} (u)
(s) edge node {h_3} (w)
(u) edge node {f_2} (t)
(w) edge[swap] node {g_2} (t)
;
\draw[2cell]
node[between=s and t at .6, rotate=-90,font=\Large] (T2) {\Rightarrow}
(T2) node[right] {\theta_2}
;
}
\core
\draw[0cell]
(v) ++(-.1,.9) node {\phi''}
(s) ++(0,.6) node (s') {S}
(u) ++(0,.6) node (u') {U}
(s) ++(0,-.6) node (s'') {S}
(w) ++(0,-.6) node (w'') {W}
;
\draw[1cell]
(v) edge node[pos=.65] {h_1} (s)
(v) edge node[pos=.5] {h_1} (s')
(v) edge[swap] node[pos=.5] {h_1} (s'')
(s') edge node {h_2} (u')
(s'') edge[swap] node {h_3} (w'')
(u') edge[bend left=40] node {f_2} (t)
(w'') edge[swap, bend right=35] node {g_2} (t)
(v) edge[bend left=50, looseness=1.2] node (f1') {f_1} (u')
(v) edge[bend right=50, looseness=1.2, swap] node (g1') {g_1} (w'')
;
\draw[2cell]
node[between=f1' and s' at .6, rotate=-45,font=\Large] (T1) {\Rightarrow}
(T1) node[above right] {\theta_1}
node[between=g1' and s'' at .6, rotate=225,font=\Large] (T3) {\Rightarrow}
(T3) node[below right] {\theta_3}
(u) ++(130:.3) node[rotate=-45,font=\Large] (a') {\Rightarrow}
(a') node[below left] {\,a^\inv}
(w) ++(230:.3) node[rotate=225,font=\Large] (a) {\Rightarrow}
(a) node[above left] {a}
;
\end{tikzpicture}
\qquad\qquad
\begin{tikzpicture}[xscale=1.4, yscale=.8]
\node at (1.5,4.9) {string diagram of $\phi''$};
\draw[thick] (.5,3) rectangle (1.5,3.5); \node at (1,3.25) {\scalebox{.8}{$\theta_1$}};
\draw[thick] (.5,1.7) rectangle (2.5,2.2); \node at (1.5,1.95) {\scalebox{.8}{$a^{-1}$}};
\draw[thick] (1.5,.5) rectangle (2.5,1); \node at (2,.75) {\scalebox{.8}{$\theta_2$}};
\draw[thick] (.5,-.8) rectangle (2.5,-.3); \node at (1.5,-.55) {\scalebox{.8}{$a$}};
\draw[thick] (.5,-2.1) rectangle (1.5,-1.6); \node at (1,-1.85) {\scalebox{.8}{$\theta_3$}};
\draw[thick,lightgray] (0,-2.6) rectangle (3,4);
\draw [thick] (1,4) -- (1,3.5); \node at (1,4.2) {\scriptsize{$f_1$}};
\draw [thick] (.7,3) -- (.7,2.2); \node at (.5,2.6) {\scriptsize{$(h_1$}};
\draw[thick] (1.3,3) -- (1.3,2.2); \node at (1.5,2.6) {\scriptsize{$h_2)$}};
\draw[thick] (2.3,4) -- (2.3,2.2); \node at (2.3,4.2) {\scriptsize{$f_2$}};
\draw [thick] (.7,1.7) -- (.7,-.3); \node at (.5,1.35) {\scriptsize{$h_1$}};
\draw[thick] (1.7,1.7) -- (1.7,1); \node at (1.5,1.35) {\scriptsize{$(h_2$}};
\draw[thick] (2.3,1.7) -- (2.3,1); \node at (2.5,1.35) {\scriptsize{$f_2)$}};
\draw[thick] (1.7,.5) -- (1.7,-.3); \node at (1.5,.1) {\scriptsize{$(h_3$}};
\draw[thick] (2.3,.5) -- (2.3,-.3); \node at (2.5,.1) {\scriptsize{$g_2)$}};
\draw[thick] (.7,-.8) -- (.7,-1.6); \node at (.5,-1.2) {\scriptsize{$(h_1$}};
\draw[thick] (1.3,-.8) -- (1.3,-1.6); \node at (1.5,-1.2) {\scriptsize{$h_3)$}};
\draw[thick] (1,-2.1) -- (1,-2.6); \node at (1,-2.8) {\scriptsize{$g_1$}};
\draw[thick] (2.3,-.8) -- (2.3,-2.6); \node at (2.3,-2.8) {\scriptsize{$g_2$}};
\node at (.3,.75) {$V$}; \node at (2.7,3.25) {$T$};
\node at (1,2.6) {$S$}; \node at (1,.75) {$S$}; \node at (1,-1.2) {$S$};
\node at (1.9,3.25) {$U$}; \node at (2,1.35) {$U$};
\node at (2,.1) {$W$}; \node at (1.9,-1.85) {$W$};
\end{tikzpicture}\]
Its string diagram is on the right above. The bracketings for the (co)domain of $a^{\pm 1}$ are indicated by the parentheses around $f_2$, $g_2$, $h_1$, $h_2$, and $h_3$.\dqed
\end{example}
\section{Exercises and Notes}\label{sec:pasting-string-exercises}
\begin{exercise}\label{exercise:key-terms}
Make a glossary of the key terms for graphs and diagrams in this
chapter, listed below. The bicategorical terms include and extend
the 2-categorical terms.\\[1pc]
\begin{center}
\begin{tabular}[t]{l}
\textbf{2-Categorical}\\
\hline
anchored graph \\
atomic graph\\
pasting scheme\\
pasting scheme presentation\\
$G$-diagram\\
pasting diagram
\end{tabular}
\hspace{1cm}
\begin{tabular}[t]{l}
\textbf{Bicategorical}\\
\hline
bracketed graph \\
associativity graph \\
composition scheme \\
$G$-diagram \\
composition diagram \\
pasting diagram \\
composition scheme extension
\end{tabular}
\end{center}
\end{exercise}
\begin{exercise}
In \Cref{ex:pasting-complicated}, show that there are exactly $8$ ways to compose the pasting diagram in a general $2$-category, and that they are all equal to each other.
\end{exercise}
\begin{exercise}
For the anchored graph $G$ in \Cref{ex:pasting-scheme-complicated}:
\begin{itemize}
\item State all of its bracketings.
\item Given a $G$-diagram in $\B$, for each bracketing of $G$, describe the expansion of $G$ into a composition scheme and the composite of the resulting composition diagram.
\end{itemize}
\end{exercise}
\begin{exercise}
Near the end of the proof of \Cref{thm:2cat-pasting-theorem}, check the other seven cases.
\end{exercise}
\begin{exercise}
In the proof of \Cref{moving-brackets}, show that $b_n^l(e_1,\ldots,e_n)$ and $(\codom_G)$ are connected by a canonical finite sequence of associativity graphs of the form \eqref{associativity-graph1}.
\end{exercise}
\begin{exercise}
Near the end of the proof of \Cref{thm:bicat-pasting-theorem}, check the other cases.
\end{exercise}
\begin{exercise}
Describe the string diagrams for the pasting diagrams in \Cref{ex:pasting-simple2,ex:pasting-complicated}, as well as their bicategorical versions.
\end{exercise}
\subsection*{Notes}
\begin{note}[Graph Theory]
For basics of graph theory, the reader may consult \cite{bondy-murty}.
\end{note}
\begin{note}[Discussion of Literature]\label{note:pasting-literature-discussion}
Pasting diagrams in $2$-categories and bicategories were introduced by B\'{e}nabou \cite{benabou}, and they have been used ever since. A $2$-categorical pasting theorem similar to \Cref{thm:2cat-pasting-theorem} was proved by Power\index{pasting theorem!Power's} \cite{power}, who also considered plane graphs with a source and a sink. The main difference between Power's approach and ours is that he assumed that his graphs have no directed cycles. On the other hand, our acyclicity condition is the existence of a vertical decomposition into atomic graphs. An advantage of our definition is that it parallels the way pasting diagrams are used in practice, namely, as vertical composites of whiskerings of one $2$-cell with a number of $1$-cells.
A bicategorical pasting theorem was established by Verity\index{pasting theorem!Verity's} \cite{verity}, who extended Power's concept of graphs to include bracketings of the (co)domain of each interior face and of the global (co)domain. His proof involves first using the bicategorical coherence theorem that says that each bicategory $\B$ is retract biequivalent to a $2$-category $\A$. Given such a biequivalence $h : \B \to \A$, a pasting diagram in $\B$ is sent to a pasting diagram in $\A$, which has a unique composite by Power's $2$-categorical pasting theorem. Using the fact that a biequivalence is locally full and faithful, a unique $2$-cell composite is then obtained back in the bicategory $\B$. The proof that this composite is independent of the choice of a biequivalence $h$ also uses the bicategorical coherence theorem.
In contrast, our elementary proof of the Bicategorical Pasting \Cref{thm:bicat-pasting-theorem} stays entirely within the given bicategory, and only uses the basic axioms of a bicategory. In particular, our approach does \underline{not} rely on:
\begin{itemize}
\item Power's $2$-categorical pasting theorem;
\item \Cref{thm:bicat-of-lax-functors}, which states that $\Bicat(\B,\B')$ is a bicategory;
\item the local characterization of a biequivalence, which is the Bicategorical Whitehead \Cref{theorem:whitehead-bicat};
\item the Bicategorical Coherence \Cref{theorem:bicat-coherence}.
\end{itemize}
As our detailed proof for the Whitehead \Cref{theorem:whitehead-bicat} shows:
\begin{enumerate}
\item Proving a bicategorical pasting theorem using the local characterization of a biequivalence is logically circular.
\item Proving the local characterization of a biequivalence without using pasting diagrams is inadvisable because it would introduce into the proof many complicated calculations that are currently handled by pasting diagrams.
\end{enumerate}
Therefore, for both conceptual and logical reasons, it is best to base a bicategorical pasting theorem only on the basic axioms of a bicategory, as we have done in this chapter.
\end{note}
\begin{note}[$n$-Categorical Pastings]
For pastings in\index{pasting theorem!9ncategorical@$n$-categorical}\index{n-category@$n$-category}\index{category!9n@$n$-} $n$-categories, the reader is referred to \cite{johnson-ncat,power2}.
\end{note}
\begin{note}[Moving Brackets \Cref{moving-brackets}]
The first part of the proof of the Moving Brackets \Cref{moving-brackets} is essentially what Mac Lane \cite[page 166]{maclane} means by successively moving outermost parentheses to the front.
\end{note}
\begin{note}[String Diagrams]
For a survey of string diagrams\index{string diagram!in monoidal categories} in monoidal categories and their many variants, the reader is referred to the article \cite{selinger}. The string diagram corresponding to a pasting diagram is actually obtained from the dual graph\index{graph!dual}\index{string diagram!as a dual graph} construction \cite{bondy-murty}.
\end{note}
\endinput
\newcommand{\sect}[1]{\addcontentsline{toc}{section}{#1}\section*{#1}}
\chapter*{Preface}
\addtocontents{toc}{\SkipTocEntry}
\sect{\texorpdfstring{$2$}{2}-Dimensional Categories}
The theory of $2$-dimensional categories, which includes $2$-categories and
bicategories, is a fundamental part of modern category theory with a wide
range of applications not only in mathematics, but also in physics
\cite{baez-neuchl,kapranov-voevodsky,kapranov-voevodsky-b,ktz,parzygnat,schommer-pries},
computer science \cite{preller-lambek}, and linguistics
\cite{lambek-linguistics,lambek-physics}. The basic definitions and
properties of $2$-categories and
bicategories were introduced by B\'{e}nabou in \cite{benabou-2cat} and
\cite{benabou}, respectively.
The one-object case is illustrative: a monoid, which is a set
with a unital and associative multiplication, is a
one-object category. A monoidal category, which is a category with
a product that is associative and unital up to coherent isomorphism, is a one-object
bicategory. The definition of a bicategory is obtained from that of a
category by replacing the hom sets with hom categories, the
composition and identities with functors, and the associativity and
unity axioms with natural isomorphisms called the associator and the
unitors. These data satisfy unity and pentagon axioms that are
conceptually identical to those in a monoidal category. A
$2$-category is a bicategory in which the associator and the unitors
are identities.
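For instance, the correspondence between monoidal categories and one-object bicategories can be made explicit. Given a monoidal category $\mathsf{M}$, the following sketch (in which the delooping notation $\Sigma\mathsf{M}$ is ours and is not used elsewhere in this text) describes the associated one-object bicategory:
\[
\begin{aligned}
\text{objects:} &\quad \text{a single object } \ast,\\
\text{hom category:} &\quad (\Sigma\mathsf{M})(\ast,\ast) = \mathsf{M},\\
\text{horizontal composition:} &\quad g \circ f = g \otimes f,\\
\text{identity $1$-cell:} &\quad 1_\ast = \text{the monoidal unit of } \mathsf{M},
\end{aligned}
\]
with associator and unitors those of $\mathsf{M}$. Under this correspondence, the pentagon and unity axioms of $\mathsf{M}$ become precisely the bicategory axioms of $\Sigma\mathsf{M}$.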
For example, small categories, functors, and natural transformations
form a $2$-category $\Cat$. As we will see in
\Cref{sec:multicategories,sec:polycat-2cat}, there are similar
$2$-categories of multicategories and of polycategories. An important
bicategory in algebra is $\Bimod$, with rings as objects, bimodules as
$1$-cells, and bimodule homomorphisms as $2$-cells. Another important
bicategory is $\Span(\C)$ for a category $\C$ with all pullbacks.
This bicategory has the same objects as $\C$ and has spans in $\C$ as
$1$-cells. We will see in \Cref{example:internal-cat} that internal
categories in $\C$ are monads in the bicategory $\Span(\C)$.
\sect{Purpose and Audience}
The literature on bicategories and $2$-categories is scattered in a
large number of research papers that span over half a century.
Moreover, some fundamental results, well-known to experts, are
mentioned with little or no detail in the research literature. This presents a
significant obstruction for beginners in the study of $2$-dimensional
categories. Varying terminology across the literature compounds the difficulty.
This book is a self-contained introduction to bicategories and
$2$-categories, assuming only the most elementary aspects of category
theory, which are summarized in \Cref{ch:categorical_prelim}.
The content is written for non-expert readers, and provides
complete details in both the basic definitions and fundamental results
about bicategories and $2$-categories.
It aims to serve as both an
entry point for students and a reference for
researchers in related fields.
A review of basic category theory is followed by a systematic
discussion of $2$-/bicategories, pasting diagrams, morphisms
(functors, transformations, and modifications),
$2$-/bilimits, the Duskin nerve, $2$-nerve, adjunctions and monads in
bicategories, $2$-monads, biequivalences, the Bicategorical Yoneda
Lemma, and the Coherence Theorem for bicategories. The next two
chapters discuss Grothendieck fibrations and the Grothendieck
construction. The last two chapters provide introductions to more advanced topics,
including tricategories, monoidal bicategories, the Gray tensor
product, and double categories.
\sect{Features}
\begin{description}
\item[Details] As mentioned above, one aspect that makes this subject
challenging for beginners is the lack of detailed proofs, or
sometimes even precise statements, of some fundamental results that
are well-known to experts. To make the subject of
$2$-dimensional categories as widely accessible as possible,
this text presents precise
statements and completely detailed proofs of the following
fundamental but hard-to-find results.
\begin{itemize}
\item The Bicategorical Pasting \Cref{thm:bicat-pasting-theorem},
which shows that every pasting diagram has a well-defined and unique
composite.
\item The Whitehead \Cref{theorem:whitehead-bicat}, which gives a
local characterization of a biequivalence, and a $2$-categorical
version in \Cref{theorem:whitehead-2-cat}.
\item The Bicategorical Yoneda \Cref{lemma:yoneda-bicat} and the
corresponding Coherence \Cref{theorem:bicat-coherence} for
bicategories.
\item The Grothendieck Fibration \Cref{fibration=psalgebra}: cloven
and split fibrations are, respectively, pseudo and strict
$\funnyf$-algebras for a $2$-monad $\funnyf$.
\item The Grothendieck Construction
\Cref{thm:grothendieck-iiequivalence}: the Grothendieck construction
is a $2$-equivalence from the $2$-category of pseudofunctors
$\Cop\to\Cat$ to the $2$-category of fibrations over $\C$.
\item The Grothendieck construction is a lax colimit
(\Cref{thm:lax-grothendieck-lax-colimit}).
\item The Gray tensor product is symmetric monoidal with adjoint
$\Hom$, providing a symmetric monoidal closed
structure on the category of $2$-categories and $2$-functors
(\Cref{theorem:Gray-is-symm-mon}).
\end{itemize}
\item[$2$-categorical restrictions] The special case of $2$-categories
is both simpler and of independent importance. There is an
extensive literature for $2$-categories in their own right, some of
which does not have a bicategorical analogue. Whenever appropriate,
the $2$-categorical version of a bicategorical concept is presented.
For example, \Cref{def:2category} of a $2$-category is immediately
unpacked into explicit data and axioms, and then restated in terms
of a $\Cat$-enriched category. Another example is the Whitehead
Theorem in \Cref{ch:whitehead}, which is first discussed for
bicategories and then restricted to $2$-categories.
\item[Motivation and explanation] Definitions of main concepts are
preceded by motivational discussion that makes the upcoming
definitions easier to understand. Whenever useful, main definitions are
immediately followed by a detailed explanation that helps
the reader interpret and unpack the various components. In the
text, these are clearly marked as \emph{Motivation} and
\emph{Explanation}, respectively.
\item[Review] To make this book self-contained and accessible to
beginners, definitions and facts in basic category theory are
summarized in \Cref{ch:categorical_prelim}.
\item[Exercises and notes] Exercises are collected in the final section of
each chapter. Most of them involve proof techniques that are
already discussed in detail in that chapter or earlier in this
book. At the end of each chapter we provide additional notes
regarding references, terminology, related concepts, or other
information that may be inessential but helpful to the reader.
\item[Organization] Extensive and precise cross-references are given
when earlier definitions and results are used. Near the end of this
book, in addition to a detailed index, we also include a list of
main facts and a list of notations, each organized by chapters.
\end{description}
\addtocontents{toc}{\SkipTocEntry}
\sect{Related Literature}
The literature on bicategories and $2$-categories is extensive,
and a comprehensive review is beyond our scope. Here we mention only
a selection of key references for background or further reading. The
Notes section at the end of each chapter provides additional
references for the content of that chapter.
\begin{description}
\item[$1$-categories]
\cite{awodey,grandis,leinster,riehl,roman,simmons}. These are
introductory books on basic category theory at the advanced
undergraduate and beginning graduate level. The standard reference
for enriched category theory is \cite{kelly-enriched}.
\item[$2$-categories] A standard reference is \cite{kelly-street}.
\item[Bicategories] Besides the founding paper \cite{benabou}, the
papers
\cite{lack,leinster-bicat,street_fibrations,street_fibrations-correction,street_cat-structures}
are often used as references.
\item[Tricategories] The basic definitions and coherence of tricategories are discussed in
\cite{gps,gurski-coherence}.
\item[$(\infty,1)$-categories] Different models of $(\infty,1)$-categories
are discussed in the books
\cite{bergner,cisinski,leinster-higher,lurie,paoli,riehl-cht,simpson}.
\end{description}
\sect{Chapter Summaries}
A brief description of each chapter follows.
\begin{description}
\item[\Cref{ch:categorical_prelim}] To make this book self-contained
and accessible to beginners, in the first chapter we review basic
concepts of category theory. Starting from the definitions of a
category, a functor, and a natural transformation, we review limits,
adjunctions, equivalences, the Yoneda Lemma, and monads. Then we
review monoidal categories, which serve as both examples and
motivation for bicategories, and Mac Lane's Coherence Theorem. Next
we review enriched categories, which provide one characterization of
$2$-categories.
\item[\Cref{ch:2cat_bicat}] The definitions of a bicategory and of a
$2$-category, along with basic examples, are given in this chapter.
\Cref{sec:bicategory-unity} contains several useful unity properties
in bicategories, generalizing those in monoidal categories. These
unity properties underlie many fundamental results in bicategory
theory, and are often used implicitly in the literature. They will
be used many times in later chapters. Examples include the
uniqueness of lax and pseudo bilimits in
\Cref{thm:bilimit-uniqueness}, an explicit description of the Duskin
nerve in \Cref{sec:duskin-nerves}, mates in \Cref{lemma:mate-pairs},
the Whitehead \Cref{theorem:whitehead-bicat}, the Bicategorical
Yoneda \Cref{lemma:yoneda-bicat}, and the tricategory of
bicategories in \Cref{ch:tricat-of-bicat}, to name a few.
\item[\Cref{ch:pasting-string}] This chapter provides pasting theorems
for $2$-categories and bicategories. We discuss a $2$-categorical
pasting theorem first, although our bicategorical pasting theorem
does not depend on the $2$-categorical version. Each pasting
theorem says that a pasting diagram, in a $2$-category or a
bicategory, has a unique composite. We refer the reader to
\cref{note:pasting-literature-discussion} for a discussion of why
it is important to \emph{not} base a bicategorical pasting theorem
on a $2$-categorical version, the Whitehead Theorem (i.e., local
characterization of a biequivalence), or the Bicategorical
Coherence Theorem. String diagrams, which provide another way to
visualize and manipulate pasting diagrams, are discussed in
\Cref{sec:string-diagrams}.
\item[\Cref{ch:functors}] This chapter presents bicategorical
analogues of functors and natural transformations. We introduce lax
functors between bicategories, lax transformations between lax
functors, and modifications between lax transformations. We discuss
important
variations, including pseudofunctors, strong transformations, and
icons. The representable pseudofunctors, representable
transformations, and representable modifications in
\Cref{sec:representables} will be important in \Cref{ch:coherence}
when we discuss the Bicategorical Yoneda \Cref{lemma:yoneda-bicat}.
\item[\Cref{ch:constructions}] This chapter is about bicategorical
analogues of limits and nerves. Using lax functors and
pseudofunctors, we define lax cones and pseudocones with respect to
a lax functor. These concepts are used to define lax and pseudo
versions of bilimits and limits. Analogous to the $1$-categorical
fact that limits are unique up to an isomorphism, we show in
\Cref{thm:bilimit-uniqueness} that lax and pseudo (bi)limits are
unique up to an equivalence and an invertible modification. We also
discuss the
dual concepts of lax and pseudo (bi)colimits, and $2$-(co)limits.
Next we describe the Duskin nerve and $2$-nerve, which associate to
each small bicategory a
simplicial set and a simplicial category, respectively. These are two different
generalizations of the $1$-categorical Grothendieck nerve, and
for each we give an explicit description of their simplices.
\item[\Cref{ch:adjunctions}] In this chapter we discuss bicategorical
analogues of adjunctions, adjoint equivalences, and monads.
After defining an internal adjunction in a bicategory and discussing
some basic properties and examples, we discuss the theory of mates,
which is a useful consequence of adjunctions.
The basic concept of sameness between bicategories is that of a
biequivalence, which is defined using adjoint equivalences in
bicategories. Biequivalences between bicategories will play major
roles in \Cref{ch:whitehead,ch:coherence,ch:grothendieck}. The
second half of this chapter is about monads in a bicategory,
$2$-monads on a $2$-category, and various concepts of algebras of a
$2$-monad. In \Cref{ch:fibration} we will use pseudo and strict
algebras of a $2$-monad $\funnyf$ to characterize cloven and split
fibrations.
\item[\Cref{ch:whitehead}] In this chapter we provide a careful proof
of a central result in basic bicategory theory, namely, the local
characterization of a biequivalence between bicategories, which we
call the Whitehead Theorem. This terminology comes from homotopy
theory, with the Whitehead Theorem stating that a continuous map
between CW complexes is a homotopy equivalence if and only if it
induces an isomorphism on all homotopy groups. In $1$-category
theory, a functor is an equivalence if and only if it is essentially
surjective on objects and fully faithful on morphisms. Analogously,
the Bicategorical Whitehead \Cref{theorem:whitehead-bicat}
says that a pseudofunctor between bicategories is a biequivalence if
and only if it is essentially surjective on objects (i.e., surjective up to
adjoint equivalences), essentially full on $1$-cells (i.e., surjective
up to isomorphisms), and fully faithful on $2$-cells (i.e., a
bijection). Although the statement of this result is
similar to the $1$-categorical version, the actual details in the
proof are much more involved. We give an outline in the
introduction of the
chapter. The Bicategorical
Whitehead \Cref{theorem:whitehead-bicat} will be used in
\Cref{ch:coherence} to prove the
Coherence \cref{theorem:bicat-coherence} for bicategories. Furthermore, the
$2$-Categorical Whitehead \Cref{theorem:whitehead-2-cat} will be
used in \Cref{ch:grothendieck} to establish a $2$-equivalence
between a $2$-category of Grothendieck fibrations and a $2$-category
of pseudofunctors.
\item[\Cref{ch:coherence}] The Yoneda Lemma is a central result in
$1$-category theory, and it entails several related
statements about represented functors and natural transformations.
In this chapter we discuss their bicategorical analogues. In
\Cref{sec:yoneda-unpacked} we discuss several versions of the
$1$-categorical Yoneda Lemma, both as a refresher and as motivation
for the bicategorical versions. In
\Cref{sec:yoneda-bicat-definition} we construct a bicategorical
version of the Yoneda embedding for a bicategory, which we call the
Yoneda pseudofunctor. In \Cref{sec:yoneda-bicat-lemma} we first
establish the Bicategorical Yoneda Embedding in
\Cref{lemma:yoneda-embedding-bicat}, which states that the Yoneda
pseudofunctor is a local equivalence. Then we prove the
Bicategorical Yoneda \Cref{lemma:yoneda-bicat}, which describes a
pseudofunctor $F : \B^\op \to \Cat$ in terms of strong
transformations from the Yoneda pseudofunctor to $F$. A consequence
of the Bicategorical Whitehead \Cref{theorem:whitehead-bicat} and
the Bicategorical Yoneda Embedding is the Bicategorical Coherence
\Cref{theorem:bicat-coherence}, which states that every bicategory
is biequivalent to a $2$-category.
\item[\Cref{ch:fibration}] This chapter is about Grothendieck
fibrations. A functor is called a fibration if, in our terminology,
every pre-lift has a Cartesian lift. A fibration with a chosen
Cartesian lift for each pre-lift is called a cloven fibration, which
is furthermore a split fibration if it satisfies a unity property
and a multiplicativity property. After discussing some basic
properties and examples of fibrations, we observe that there is a
$2$-category $\Fib(\C)$ with fibrations over a given small category
$\C$ as objects. In \Cref{fibration-pullback} we observe that
fibrations are closed under pullbacks, and that equivalences of
$1$-categories are closed under pullbacks along fibrations. The
rest of this chapter contains the construction of a $2$-monad
$\funnyf$ on the over-category $\catoverc$ and a detailed proof of
the Grothendieck Fibration \Cref{fibration=psalgebra}. The latter
provides an explicit bijection between cloven fibrations and pseudo
$\funnyf$-algebras, and also between split fibrations and strict
$\funnyf$-algebras.
\item[\Cref{ch:grothendieck}] This chapter presents the fundamental
concept of the Grothendieck construction $\intf$ of a lax functor $F
: \Cop \to \Cat$. For a pseudofunctor $F$, the category $\intf$ is
equipped with a fibration $\Usubf : \intf \to \C$ over $\C$, which
is split precisely when $F$ is a strict functor. Using the concepts
from \Cref{ch:constructions}, next we show that the Grothendieck
construction is a lax colimit of $F$. Most of the rest of this
chapter contains a detailed proof of the Grothendieck Construction
\Cref{thm:grothendieck-iiequivalence}: the Grothendieck construction
is part of a $2$-equivalence from the $2$-category of pseudofunctors
$\Cop\to\Cat$, strong transformations, and modifications, to the
$2$-category of fibrations over $\C$, Cartesian functors, and
vertical natural transformations. \Cref{sec:grothendieck-bicat}
briefly discusses a generalization of the Grothendieck
construction that applies to an indexed bicategory.
\item[\Cref{ch:tricat-of-bicat}] This chapter is about a
$3$-dimensional generalization of a bicategory called a tricategory.
After a preliminary discussion of whiskerings of a lax
transformation with a lax functor, we define a tricategory. The
Bicategorical Pasting \Cref{thm:bicat-pasting-theorem} plays a
crucial role in interpreting the axioms of a tricategory, which are
all stated in terms of pasting diagrams. The rest of this chapter
contains the detailed definitions and a proof of the existence of a
tricategory $\bicat$ with small bicategories as objects,
pseudofunctors as $1$-cells, strong transformations as $2$-cells,
and modifications as $3$-cells.
\item[\Cref{ch:monoidal_bicat}] Other $2$-dimensional categorical
structures are discussed in this chapter. Motivated by the fact
that monoidal categories are one-object bicategories, a monoidal
bicategory is defined as a one-object tricategory. Then we discuss
the braided, sylleptic, and symmetric versions of monoidal
bicategories. Just as it is for tricategories, the Bicategorical
Pasting \Cref{thm:bicat-pasting-theorem} is crucial in interpreting
their axioms. Next we discuss the Gray tensor product on
$2$-categories, which provides a symmetric monoidal structure that
is different from the Cartesian one, and the corresponding Gray
monoids. The last part of this chapter discusses double categories
and monoidal double categories.
\end{description}
\sect{Chapter Interdependency}
The core concepts in
\Cref{ch:2cat_bicat,ch:pasting-string,ch:functors} are used in all the
subsequent chapters. \Cref{ch:adjunctions,ch:whitehead,ch:coherence,ch:fibration}
are independent of \cref{ch:constructions}.
\Cref{ch:whitehead,ch:coherence} require internal adjunctions, mates,
and internal equivalences from
\Cref{sec:internal-adjunctions,sec:internal-equivalences}.
\Cref{ch:fibration} uses $2$-monads from \Cref{sec:2-monads}.
\Cref{ch:grothendieck} depends on all of \Cref{ch:fibration}, and
\Cref{sec:grothendieck-laxcolim} uses lax colimits from
\Cref{sec:bicolimits}. The rest of \Cref{ch:grothendieck} uses the
$2$-Categorical Whitehead \Cref{theorem:whitehead-2-cat}.
\Cref{ch:tricat-of-bicat,ch:monoidal_bicat} use internal adjunctions,
mates, and internal equivalences from
\Cref{sec:internal-adjunctions,sec:internal-equivalences}, but none of
the other material after \Cref{ch:functors}. \Cref{ch:monoidal_bicat}
depends on the whiskerings of \Cref{sec:whiskering} and the definition of
a tricategory from \Cref{sec:tricategories}. The following graph
summarizes these dependencies.
\begin{center}
\begin{tikzpicture}[x=25mm, y=12mm,
block/.style ={rectangle, draw=black,
align=center, rounded corners,
minimum height=2em, outer sep=1.5mm, minimum width=5ex},
every node/.style={font=\small}
]
\draw
(0,0) node[block] (24) {2 -- 4}
(1,1) node[block] (5) {5}
(1,-1.5) node[block] (6) {6}
(2,0) node[block] (78) {7, 8}
(78) ++(1,-.5) node[block] (9) {9}
(5) ++(3,0) node[block] (10) {10}
(6) ++(1.5,0) node[block] (11) {11}
(11) ++(1.5,0) node[block] (12) {12}
;
\draw[->] (5) -- (24);
\draw[->] (6) -- (24);
\draw[->] (78) -- node['] {6.1, 6.2} (6);
\draw[->] (10) -- node['] {5.2} (5);
\draw[->] (10) -- node['] {7.5} (78);
\draw[->] (9) -- node[', pos=.1] {6.5} (6);
\draw[->] (11) -- node {6.1, 6.2} (6);
\draw[->] (12) -- node {11.1, 11.2} (11);
\draw[->] (10) -- (9);
\end{tikzpicture}
\end{center}
\addtocontents{toc}{\SkipTocEntry}
\sect{Acknowledgement}
For helpful feedback on an early draft, the authors thank John Baez,
Michael Horst, Martti Karvonen, Ralf Meyer, Joe Moeller, Emily Riehl,
David Roberts, and Michael Shulman.
\chapter{Tricategory of Bicategories}
\label{ch:tricat-of-bicat}
The main objectives of this chapter are:
\begin{enumerate}
\item to introduce a three-dimensional analogue of a bicategory called a \emph{tricategory};
\item to observe that there is a tricategory $\bicat$ with small bicategories as objects, pseudofunctors as $1$-cells, strong transformations as $2$-cells, and modifications as $3$-cells.
\end{enumerate}
The existence of $\bicat$ is analogous to the fact that there is a $2$-category $\Cat$ with small categories as objects and diagram categories as hom categories, as explained in \Cref{ex:2cat-of-cat}. However, the details are far more involved when we move from small categories, functors, and natural transformations to small bicategories, pseudofunctors, strong transformations, and modifications.
To prepare for the definition of a tricategory, in \Cref{sec:whiskering} we define whiskerings of a lax transformation with a lax functor on either side. Tricategories are defined in \Cref{sec:tricategories}. The bicategory $\Bicatps(\A,\B)$ in \Cref{subbicat-pseudofunctor}---with pseudofunctors $\A \to \B$ as objects, strong transformations as $1$-cells, and modifications as $2$-cells---and the concept of an adjoint equivalence in \Cref{definition:internal-equivalence} will play major roles in the definition of a tricategory. The whiskerings in \Cref{sec:whiskering} are used in the definitions of the pentagonator and the three $2$-unitors, which are among the data of a tricategory. The tricategorical axioms are stated as equalities of pasting diagrams, which are interpreted using the Bicategorical Pasting \Cref{thm:bicat-pasting-theorem} and \Cref{conv:boundary-bracketing}. The rest of that section contains further explanation of the definition of a tricategory and an example.
The remaining sections are about the tricategory $\bicat$ with small bicategories as objects and $\Bicatps(\cdot,\cdot)$ as hom bicategories. The tricategorical composition in $\bicat$, which is a pseudofunctor $(\tensor,\tensortwo,\tensorzero)$, is defined and justified in \Cref{sec:composite-tr-mod,sec:tensorzero}. The whiskerings in \Cref{sec:whiskering} are used to define the composites of strong transformations and of modifications. The associator in $\bicat$, which is an adjoint equivalence, is discussed in \Cref{sec:tricat-associator}. The rest of the tricategorical data in $\bicat$---namely, the identities, the left and right unitors, the pentagonator, and the three $2$-unitors---are described in \Cref{sec:tricat-other}.
Recall the concepts of bicategories, lax/pseudo functors, lax/strong transformations, and modifications in \Cref{def:bicategory,def:lax-functors,definition:lax-transformation,def:modification}.
\section{Whiskerings of Transformations}
\label{sec:whiskering}
In this section we introduce whiskerings of a lax transformation with a lax functor. These whiskerings will be used in the definition of a tricategory.
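Before stating the definition, it may help to recall the special case of the $2$-category $\Cat$: for functors $F : \A\to\B$ and $H : \C\to\D$ and a natural transformation $\alpha : G\to G'$ between functors $G,G' : \B\to\C$, the whiskerings are the natural transformations with components
\[(\alpha\whis F)_X = \alpha_{FX} \qquad\text{and}\qquad (H\whis \alpha)_X = H(\alpha_X)\]
for objects $X$. The bicategorical definition below has the same component $1$-cells, with the lax functoriality constraint of $H$ entering the component $2$-cells of the post-whiskering.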
\begin{definition}\label{def:whiskering-transformation}
Suppose given bicategories $\A,\B,\C,\D$, lax functors $F,G,G',H$ with $\Htwo$ invertible, and a lax transformation $\alpha : G\to G'$ as displayed below.
\[\begin{tikzpicture}[xscale=2, yscale=1.4]
\draw[0cell]
(0,0) node (x11) {\A}
($(x11)+(1,0)$) node (x12) {\B}
($(x12)+(1,0)$) node (x13) {\C}
($(x13)+(1,0)$) node (x14) {\D}
;
\draw[1cell]
(x11) edge node {F} (x12)
(x12) edge[bend left=45] node {G} (x13)
(x12) edge[bend right=45] node[swap] {G'} (x13)
(x13) edge node {H} (x14)
;
\draw[2cell]
node[between=x12 and x13 at .5, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
;
\end{tikzpicture}\]
\begin{enumerate}
\item The \index{lax transformation!whiskering}\index{whiskering!of a lax transformation and a lax functor}\index{pre-whiskering}\emph{pre-whiskering of $\alpha$ with $F$}, denoted by\label{notation:alphawhis} $\alpha\whis F$, is defined by the following data.
\begin{description}
\item[Component $1$-Cells] For each object $X\in\A$, it has a component $1$-cell
\[(\alpha\whis F)_X = \alpha_{FX} \in \C(GFX,G'FX).\]
\item[Component $2$-Cells] For each morphism $f : X \to Y$ in $\A$, it has a component $2$-cell
\begin{equation}\label{pre-whis-iicell}
(\alpha\whis F)_f = \alpha_{Ff} \in \C(GFX,G'FY),
\end{equation}
which is a component $2$-cell of $\alpha$, as displayed below.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {GFX}
($(x11)+(1,0)$) node (x12) {GFY}
($(x11)+(0,-1)$) node (x21) {G'FX}
($(x12)+(0,-1)$) node (x22) {G'FY}
;
\draw[1cell]
(x11) edge node {GFf} (x12)
(x11) edge node[swap] {(\alpha\whis F)_X\,=\,\alpha_{FX}} (x21)
(x12) edge node {\alpha_{FY}\,=\, (\alpha\whis F)_Y} (x22)
(x21) edge node[swap] {G'Ff} (x22)
;
\draw[2cell]
node[between=x11 and x22 at .6, rotate=45, 2label={above,\alpha_{Ff}}] {\Rightarrow}
;
\end{tikzpicture}\]
\end{description}
\item The \index{post-whiskering}\emph{post-whiskering of $\alpha$ with $H$}, denoted by $H \whis \alpha$,
is defined by the following data.
\begin{description}
\item[Component $1$-Cells] For each object $X\in \B$, it has a component $1$-cell
\[(H\whis \alpha)_X = H(\alpha_X) \in \D(HGX,HG'X).\]
\item[Component $2$-Cells] For each morphism $f : X \to Y$ in $\B$, its component $2$-cell is the vertical composite
\begin{equation}\label{post-whis-iicell}
(H\whis \alpha)_f = \big(\Htwosub{\alpha_Y,Gf}\big)^{-1} (H\alphaf) \big(\Htwosub{G'f,\alpha_X}\big)
\end{equation}
in $\D(HGX,HG'Y)$, as displayed on the left-hand side below.
\[\begin{tikzpicture}[xscale=3, yscale=2.3]
\draw[0cell]
(0,0) node (x11) {HGX}
($(x11)+(1,0)$) node (x12) {HG'Y}
($(x11)+(1/2,1)$) node (bot) {HG'X}
($(x11)+(1/2,-1)$) node (top) {HGY}
;
\draw[1cell]
(x11) edge[out=-90,in=180] node[swap] {HGf} (top)
(top) edge[out=0,in=-90] node[swap] {H\alpha_Y} (x12)
(x11) edge[out=90,in=180] node {H\alpha_X} (bot)
(bot) edge[out=0,in=90] node {HG'f} (x12)
(x11) edge[bend right=20] node[swap] (s) {H\big((\alpha_Y)(Gf)\big)} (x12)
(x11) edge[bend left=20] node (t) {H\big((G'f)(\alpha_X)\big)} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .45, shift={(0,1.5)}, rotate=-90, 2label={above,\Htwosub{G'f,\alpha_X}}] {\Rightarrow}
node[between=x11 and x12 at .45, rotate=-90, 2label={above,H\alphaf}] {\Rightarrow}
node[between=x11 and x12 at .45, shift={(0,-1.5)}, rotate=-90, 2label={above,(\Htwosub{\alpha_Y,Gf})^{-1}}] {\Rightarrow}
;
\draw[0cell]
($(x12)+(1,.4)$) node (y11) {GX}
($(y11)+(1,0)$) node (y12) {GY}
($(y11)+(0,-.8)$) node (y21) {G'X}
($(y12)+(0,-.8)$) node (y22) {G'Y}
;
\draw[1cell]
(y11) edge node {Gf} (y12)
(y11) edge node[swap] {\alpha_X} (y21)
(y12) edge node {\alpha_Y} (y22)
(y21) edge node[swap] {G'f} (y22)
;
\draw[2cell]
node[between=y11 and y22 at .6, rotate=45, 2label={above,\alphaf}] {\Rightarrow}
;
\end{tikzpicture}\]
Here $\alpha_f \in \C(GX,G'Y)$ is the component $2$-cell of $\alpha$ displayed on the right-hand side above, and $\Htwo$ is the lax functoriality constraint of $H$.
\end{description}
\end{enumerate}
This finishes the definitions of the pre-whiskering and the post-whiskering.
\end{definition}
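For example, if $H$ is a strict functor, so that $\Htwo$ consists of identities, then the component $2$-cell \eqref{post-whis-iicell} simplifies to
\[(H\whis\alpha)_f = H\alphaf,\]
the image of the component $2$-cell $\alphaf$ under the local functor of $H$. The pre-whiskering, by contrast, involves no constraint of $F$ at all: it only reindexes the components of $\alpha$ along $F$.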
\begin{convention}\label{conv:functor-subscript}
To save space, we sometimes use the following abbreviations:
\begin{itemize}
\item We write $Gf$, if it is defined, as $f_G$ for a $0$-cell, $1$-cell, or $2$-cell $f$ and a lax functor $G$.
\item For something that already has a subscript, such as $\alpha_g$, we write $G\alpha_g$ as $\alpha_{g,G}$.
\item If $G$ is a lax functor with $\Gtwo$ or $\Gzero$ invertible, we write the inverses of $\Gtwo$ and $\Gzero$ as\label{notation:gtwoinv} $\Gtwoinv$ and $\Gzeroinv$, respectively.
\end{itemize}
\end{convention}
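For example, in the notation of \Cref{def:whiskering-transformation}, these conventions allow us to write the component $2$-cell $\alpha_{Ff}$ in \eqref{pre-whis-iicell} as $\alpha_{f_F}$, and the $2$-cell $H(\alpha_f)$ as $\alpha_{f,H}$:
\[\alpha_{Ff} = \alpha_{f_F}, \qquad H(\alpha_f) = \alpha_{f,H}.\]
These abbreviations appear in the diagrams below.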
\begin{lemma}\label{pre-whiskering-transformation}
In the context of \Cref{def:whiskering-transformation}, the pre-whiskering
\[\alpha\whis F : GF \to G'F\]
is a lax transformation, which is a strong transformation if $\alpha$ is so.
\end{lemma}
\begin{proof}
The naturality of $\alpha\whis F$ with respect to $2$-cells in $\A$, in the sense of \eqref{lax-transformation-naturality}, follows from the naturality of $\alpha$.
The lax unity axiom \eqref{unity-transformation} for $\alpha\whis F$ means the commutativity around the boundary of the following diagram in $\C(GFX,G'FX)$ for each object $X\in\A$.
\[\begin{tikzpicture}[xscale=2.5, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {1_{G'FX}\alpha_{FX}}
($(x11)+(1,0)$) node (x12) {\alpha_{FX}}
($(x12)+(1,0)$) node (x13) {\alpha_{FX}1_{GFX}}
($(x11)+(0,-1)$) node (x21) {(G'1_{FX})\alpha_{FX}}
($(x13)+(0,-1)$) node (x23) {\alpha_{FX}(G1_{FX})}
($(x21)+(0,-1)$) node (x31) {(G'F1_{X})\alpha_{FX}}
($(x23)+(0,-1)$) node (x33) {\alpha_{FX}(GF1_{X})}
($(x11)+(-.6,0)$) node[inner sep=0pt] (a) {}
($(x31)+(-.6,0)$) node[inner sep=0pt] (b) {}
($(x13)+(.6,0)$) node[inner sep=0pt] (c) {}
($(x33)+(.6,0)$) node[inner sep=0pt] (d) {}
;
\draw[1cell]
(x11) edge node {\ell} (x12)
(x12) edge node {r^{-1}} (x13)
(x11) edge node {(G')^0_{FX}*1} (x21)
(x13) edge node[swap] {1*G^0_{FX}} (x23)
(x21) edge node {\alpha_{1_{FX}}} (x23)
(x21) edge node {G'F^0_X *1} (x31)
(x23) edge node[swap] {1*GF^0_X} (x33)
(x31) edge node {\alpha_{F1_X}} (x33)
(x11) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt, shorten >=-1pt] node[swap] {(G'F)^0_X*1_{\alpha_{FX}}} (b)
(b) edge[shorten <=-1pt] (x31)
(x13) edge[-,shorten >=-1pt] (c)
(c) edge[-,shorten <=-1pt, shorten >=-1pt] node {1_{\alpha_{FX}}*(GF)^0_X} (d)
(d) edge[shorten <=-1pt] (x33)
;
\end{tikzpicture}\]
\begin{itemize}
\item The left and the right rectangles are commutative by the definitions of $(G'F)^0$ and $(GF)^0$ in \eqref{lax-functors-comp-zero}.
\item The top middle square is commutative by the lax unity \eqref{unity-transformation} of $\alpha$.
\item The bottom middle square is commutative by the lax naturality \eqref{lax-transformation-naturality} of $\alpha$.
\end{itemize}
Using \Cref{conv:functor-subscript}, the lax naturality axiom \eqref{2-cell-transformation} for $\alpha \whis F$ means the commutativity around the boundary of the following diagram in $\C(GFX,G'FZ)$ for $1$-cells $f : X \to Y$ and $g : Y \to Z$ in $\A$.
\[\begin{tikzpicture}[xscale=3.1, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {g_{G'F}(f_{G'F}\alpha_{FX})}
($(x11)+(1,0)$) node (x12) {g_{G'F}(\alpha_{FY}f_{GF})}
($(x12)+(1,0)$) node (x13) {(g_{G'F}\alpha_{FY})f_{GF}}
($(x13)+(1,0)$) node (x14) {(\alpha_{FZ}g_{GF})f_{GF}}
($(x11)+(0,-1)$) node (x21) {(g_{G'F}f_{G'F})\alpha_{FX}}
($(x12)+(0,-1)$) node (x22) {(g_F f_F)_{G'}\alpha_{FX}}
($(x13)+(0,-1)$) node (x23) {\alpha_{FZ}(g_F f_F)_G}
($(x14)+(0,-1)$) node (x24) {\alpha_{FZ}(g_{GF}f_{GF})}
($(x21)+(0,-1)$) node (x31) {(gf)_{G'F}\alpha_{FX}}
($(x24)+(0,-1)$) node (x34) {\alpha_{FZ} (gf)_{GF}}
;
\draw[1cell]
(x11) edge node {1*\alpha_{f_F}} (x12)
(x12) edge node {a^{-1}} (x13)
(x13) edge node {\alpha_{g_F}*1} (x14)
(x21) edge node {a} (x11)
(x14) edge node {a} (x24)
(x21) edge[bend left=45] node {(G')^2*1} (x22)
(x22) edge node {\alpha_{g_F f_F}} (x23)
(x24) edge[bend right=45] node[swap] {1*G^2} (x23)
(x21) edge node[swap] {(G'F)^2*1} (x31)
(x22) edge node[pos=.3] {\Ftwo_{G'}*1} (x31)
(x23) edge node[swap,pos=.3] {1*\Ftwo_G} (x34)
(x24) edge node {1*(GF)^2} (x34)
(x31) edge node {\alpha_{(gf)_F}} (x34)
;
\end{tikzpicture}\]
\begin{itemize}
\item The top sub-diagram is commutative by the lax naturality \eqref{2-cell-transformation} for $\alpha$.
\item The lower left and lower right triangles are commutative by the definitions of $(G'F)^2$ and $(GF)^2$ in \eqref{lax-functors-comp-two}.
\item The commutativity of the lower middle sub-diagram follows from the naturality \eqref{lax-transformation-naturality} of $\alpha$ with respect to $2$-cells.
\end{itemize}
We have shown that $\alpha\whis F$ is a lax transformation.
Finally, if $\alpha$ is a strong transformation, then each component $2$-cell $\alpha_{Ff}$ of $\alpha\whis F$ is invertible. So $\alpha\whis F$ is also a strong transformation.
\end{proof}
\begin{lemma}\label{post-whiskering-transformation}
In the context of \Cref{def:whiskering-transformation}, the post-whiskering
\[H \whis \alpha : HG \to HG'\]
is a lax transformation, which is a strong transformation if $\alpha$ is so.
\end{lemma}
\begin{proof}
The naturality \eqref{lax-transformation-naturality} of $H\whis\alpha$ means that for each $2$-cell $\theta : f \to g$ in $\B(X,Y)$, the boundary of the following diagram in $\D(HGX,HG'Y)$ commutes.
\[\begin{tikzpicture}[xscale=3, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {(HG'f)(H\alpha_{X})}
($(x11)+(1,0)$) node (x12) {H\big((G'f)\alpha_X\big)}
($(x12)+(1,0)$) node (x13) {H\big(\alpha_Y Gf\big)}
($(x13)+(1,0)$) node (x14) {(H\alpha_Y)(HGf)}
($(x11)+(0,-1)$) node (x21) {(HG'g)(H\alpha_X)}
($(x12)+(0,-1)$) node (x22) {H\big((G'g)\alpha_X\big)}
($(x13)+(0,-1)$) node (x23) {H\big(\alpha_Y Gg\big)}
($(x14)+(0,-1)$) node (x24) {(H\alpha_Y)(HGg)}
;
\draw[1cell]
(x11) edge node {H^2} (x12)
(x12) edge node {H\alphaf} (x13)
(x13) edge node {(H^2)^{-1}} (x14)
(x11) edge node[swap] {HG'\theta*1} (x21)
(x12) edge node[swap] {H(G'\theta*1)} (x22)
(x13) edge node[swap] {H(1*G\theta)} (x23)
(x14) edge node[swap] {1*HG\theta} (x24)
(x21) edge node[swap] {H^2} (x22)
(x22) edge node[swap] {H\alpha_g} (x23)
(x23) edge node[swap] {(H^2)^{-1}} (x24)
;
\end{tikzpicture}\]
The two outer squares are commutative by the naturality of $\Htwo$. The middle square is commutative by the naturality \eqref{lax-transformation-naturality} of $\alpha$ and the functoriality of the local functors of $H$.
The lax unity axiom \eqref{unity-transformation} for $H\whis\alpha$ means the commutativity around the boundary of the following diagram in $\D(HGX,HG'X)$ for each object $X\in\B$.
\[\begin{tikzpicture}[xscale=2.3, yscale=1.3]
\draw[0cell]
(0,0) node (x11) {1_{HG'X}H\alpha_X}
($(x11)+(2,0)$) node (x13) {H\alpha_{X}}
($(x13)+(2,0)$) node (x15) {(H\alpha_{X})1_{HGX}}
($(x11)+(1,-1)$) node (x22) {H(1_{G'X}\alpha_X)}
($(x15)+(-1,-1)$) node (x24) {H(\alpha_X1_{GX})}
($(x11)+(0,-2)$) node (x31) {(H1_{G'X})(H\alpha_X)}
($(x15)+(0,-2)$) node (x35) {(H\alpha_X)(H1_{GX})}
($(x22)+(0,-2)$) node (x42) {H\big((G'1_X)\alpha_X\big)}
($(x24)+(0,-2)$) node (x44) {H\big(\alpha_X(G1_X)\big)}
($(x31)+(0,-2)$) node (x51) {(HG'1_X)(H\alpha_X)}
($(x35)+(0,-2)$) node (x55) {(H\alpha_X)(HG1_X)}
;
\draw[1cell]
(x11) edge node {\ell} (x13)
(x13) edge node {r^{-1}} (x15)
(x11) edge node[swap] {H^0*1} (x31)
(x31) edge node {H^2} (x22)
(x22) edge node[swap] {H\ell} (x13)
(x13) edge node[swap] {Hr^{-1}} (x24)
(x24) edge node[pos=.3] {(\Htwo)^{-1}} (x35)
(x15) edge node {1*H^0} (x35)
(x31) edge node[swap] {H(G')^0*1} (x51)
(x22) edge node {H((G')^0*1)} (x42)
(x51) edge node {\Htwo} (x42)
(x24) edge node[swap] {H(1*G^0)} (x44)
(x44) edge node[pos=.3] {(\Htwo)^{-1}} (x55)
(x35) edge node {1*HG^0} (x55)
(x42) edge node {H\alpha_{1_{X}}} (x44)
(x51) edge node {(H\whis\alpha)_{1_X}} (x55)
;
\end{tikzpicture}\]
\begin{itemize}
\item The top left and right triangles are commutative by the lax left unity and the lax right unity \eqref{f0-bicat} of $H$, respectively.
\item The middle pentagon is commutative by the lax unity \eqref{unity-transformation} for $\alpha$ and the functoriality of the local functors of $H$.
\item The lower left and right parallelograms are commutative by the naturality \eqref{f2-bicat-naturality} of $\Htwo$.
\item The bottom trapezoid is commutative by \eqref{post-whis-iicell}.
\end{itemize}
Using \eqref{post-whis-iicell}, \eqref{lax-functors-comp-two} for $(HG)^2$ and $(HG')^2$, and \Cref{conv:functor-subscript}, the lax naturality axiom \eqref{2-cell-transformation} for $H\whis\alpha$ means the commutativity around the boundary of the following diagram in $\D(HGX,HG'Z)$ for $1$-cells $f : X \to Y$ and $g : Y \to Z$ in $\B$.
\[\begin{tikzpicture}[xscale=3.3, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {g_{HG'}(\alpha_{Y,H}f_{HG})}
($(x11)+(1,0)$) node (x12) {(g_{HG'}\alpha_{Y,H})f_{HG}}
($(x12)+(1,0)$) node (x13) {(g_{G'}\alpha_Y)_H f_{HG}}
($(x13)+(1,0)$) node (x14) {(\alpha_Z g_G)_H f_{HG}}
($(x11)+(0,-1)$) node (x21) {g_{HG'}(\alpha_Y f_G)_H}
($(x12)+(0,-1)$) node (x22) {(g_{G'}(\alpha_Y f_G))_H}
($(x13)+(0,-1)$) node (x23) {((g_{G'}\alpha_Y) f_G)_H}
($(x14)+(0,-1)$) node (x24) {(\alpha_{Z,H}g_{HG})f_{HG}}
($(x11)+(0,-2)$) node (x31) {g_{HG'}(f_{G'}\alpha_X)_H}
($(x12)+(0,-2)$) node (x32) {(g_{G'}(f_{G'}\alpha_X))_H}
($(x32)+(1/2,-.5)$) node (di) {(\diamondsuit)}
($(x13)+(0,-2)$) node (x33) {((\alpha_Z g_G)f_G)_H}
($(x14)+(0,-2)$) node (x34) {\alpha_{Z,H}(g_{HG}f_{HG})}
($(x11)+(0,-3)$) node (x41) {g_{HG'}(f_{HG'}\alpha_{X,H})}
($(x12)+(0,-3)$) node (x42) {((g_{G'}f_{G'})\alpha_X)_H}
($(x13)+(0,-3)$) node (x43) {(\alpha_Z(g_G f_G))_H}
($(x14)+(0,-3)$) node (x44) {\alpha_{Z,H}(g_G f_G)_H}
($(x11)+(0,-4)$) node (x51) {(g_{HG'}f_{HG'})\alpha_{X,H}}
($(x14)+(0,-4)$) node (x54) {\alpha_{Z,H}(gf)_{HG}}
($(x11)+(0,-5)$) node (x61) {(g_{G'} f_{G'})_H \alpha_{X,H}}
($(x12)+(0,-5)$) node (x62) {(gf)_{HG'}\alpha_{X,H}}
($(x13)+(0,-5)$) node (x63) {((gf)_{G'} \alpha_X)_H}
($(x14)+(0,-5)$) node (x64) {(\alpha_Z (gf)_G)_H}
;
\draw[1cell]
(x11) edge node {a^{-1}} (x12)
(x12) edge node {\Htwo*1} (x13)
(x13) edge node {\alpha_{g,H}*1} (x14)
(x21) edge node {1*\Htwoinv} (x11)
(x13) edge node[swap] {\Htwo} (x23)
(x14) edge node[swap,pos=.3] {\Htwo} (x33)
(x14) edge node {\Htwoinv*1} (x24)
(x21) edge node {\Htwo} (x22)
(x22) edge node {a^{-1}_H} (x23)
(x31) edge node {1*\alpha_{f,H}} (x21)
(x32) edge node {(1*\alphaf)_H} (x22)
(x23) edge node[swap] {(\alpha_g*1)_H} (x33)
(x24) edge node {a} (x34)
(x31) edge node {\Htwo} (x32)
(x41) edge node {1*\Htwo} (x31)
(x42) edge node {a_H} (x32)
(x33) edge node[swap] {a_H} (x43)
(x34) edge node {1*\Htwo} (x44)
(x43) edge node {\Htwoinv} (x44)
(x51) edge node {a} (x41)
(x44) edge node {1*\Gtwo_H} (x54)
(x51) edge node[swap] {\Htwo*1} (x61)
(x61) edge node[swap] {\Htwo} (x42)
(x42) edge node[swap, inner sep=1pt, pos=.5] {((G')^2*1)_H} (x63)
(x43) edge node[swap, inner sep=1pt, pos=.5] {(1*\Gtwo)_H} (x64)
(x64) edge node[swap] {\Htwoinv} (x54)
(x61) edge node[swap] {(G')^2_H*1} (x62)
(x62) edge node[swap] {\Htwo} (x63)
(x63) edge node[swap] {\alpha_{gf,H}} (x64)
;
\end{tikzpicture}\]
\begin{itemize}
\item The middle sub-diagram $(\diamondsuit)$ is commutative by the lax naturality \eqref{2-cell-transformation} for $\alpha$ and the functoriality of the local functors of $H$.
\item Starting at the lower left triangle and going clockwise along the boundary, the other seven sub-diagrams are commutative by (i) the naturality \eqref{f2-bicat-naturality} of $\Htwo$ and (ii) the lax associativity \eqref{f2-bicat} of $H$ alternately.
\end{itemize}
We have shown that $H\whis\alpha$ is a lax transformation.
Finally, if $\alpha$ is a strong transformation, then each component $2$-cell of $H\whis\alpha$ in \eqref{post-whis-iicell} is the vertical composite of three invertible $2$-cells, so it is invertible.
\end{proof}
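The two whiskerings are compatible with each other. In the setting of \Cref{def:whiskering-transformation}, the lax transformations $H\whis(\alpha\whis F)$ and $(H\whis\alpha)\whis F$ have the same component $1$-cells $H(\alpha_{FX})$ for objects $X\in\A$ and, by \eqref{pre-whis-iicell} and \eqref{post-whis-iicell}, the same component $2$-cells
\[\big(\Htwosub{\alpha_{FY},GFf}\big)^{-1} \big(H\alpha_{Ff}\big) \big(\Htwosub{G'Ff,\alpha_{FX}}\big)\]
for $1$-cells $f\in\A(X,Y)$. So the two iterated whiskerings are equal as lax transformations $HGF \to HG'F$.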
\section{Tricategories}\label{sec:tricategories}
The definition of a tricategory is given in this section.
\begin{convention}\label{conv:tricat}
To simplify the presentation in the rest of this chapter, we adopt the following conventions.
\begin{enumerate}
\item When the bicategory $\Bicat(\A,\B)$ or any of its sub-bicategories in \Cref{thm:bicat-of-lax-functors,subbicat-pseudofunctor,2cat-of-lax-functors,exer:bicat-of-functors} are mentioned, we tacitly assume that the domain bicategory $\A$ has a set of objects.
\item We sometimes denote an adjoint equivalence $(f,\fbdot,\eta,\epz)$ in a bicategory as in \Cref{definition:internal-equivalence} by its left adjoint $f$. We write $\etaf$ and $\epzf$ for $\eta$ and $\epz$, respectively, if we need to emphasize their association with $f$.
\item Recall from \cref{example:terminal-bicategory} that $\boldone$ denotes the\label{notation:unit-bicat} terminal bicategory with one object $*$, only its identity $1$-cell $1_*$, and only its identity $2$-cell $1_{1_*}$. For each bicategory $\B$, the product bicategories $\B\times\boldone$ and $\boldone\times\B$ will be identified with $\B$ via the canonical strict functors between them, which will be suppressed from the notations.
\item Suppose $\T_{i,j}$ is a bicategory for $i,j\in\{1,2,\ldots\}$. For $n \geq 1$, we use the following abbreviations for product bicategories:\index{product!bicategory}\index{bicategory!product}
\begin{equation}\label{tricategory-product-abbreviation}
\begin{split}
\T^n_{i_1,\ldots,i_{n+1}} &= \prod_{k=1}^n \T_{i_{n+1-k}, i_{n+2-k}}\\
&= \T_{i_n,i_{n+1}} \times \cdots \times \T_{i_1,i_2},\\
\T^n_{[r,r+n]} &= \T^n_{r,r+1,\ldots,r+n}.\\
\end{split}
\end{equation}
For example, we have
\[\begin{split}
\T^3_{[1,4]} &= \T_{3,4} \times \T_{2,3} \times \T_{1,2},\\
\T^4_{[1,5]} &= \T_{4,5} \times \T_{3,4} \times \T_{2,3} \times \T_{1,2},
\end{split}\]
and $\T^2_{1,2,4} = \T_{2,4} \times \T_{1,2}$.\dqed
\end{enumerate}
\end{convention}
\begin{motivation}\label{mot:tricategory}
To go from the definition of a category to that of a bicategory, we replace:
\begin{itemize}
\item the hom-sets with hom-categories;
\item the composition and the identity morphisms with the horizontal composition and the identity $1$-cell functors;
\item the equalities in the associativity and the unity axioms, which are \emph{properties} and not structures in a category, with the associators and the left/right unitors, which are natural isomorphisms.
\end{itemize}
The coherence axioms in a bicategory---namely, the unity axiom and the pentagon axiom---are motivated in part by useful properties in a category.
The process of going from a bicategory to a tricategory, to be defined shortly, is similar: we replace
\begin{itemize}
\item the hom categories with hom bicategories;
\item the identity $1$-cell functors and the horizontal composition with pseudofunctors;
\item the associators and the left/right unitors with adjoint equivalences;
\item the unity axiom and the pentagon axiom, which are \emph{properties} in a bicategory, with invertible modifications.
\end{itemize}
Furthermore, the left and right unity properties in \Cref{bicat-left-right-unity} are converted to invertible modifications. There are several coherence axioms that govern iterates of these structures.\dqed
\end{motivation}
\begin{definition}\label{def:tricategory}
A \emph{tricategory}\index{tricategory}\index{category!tri-}
\[\T = \big(\T_0, \T(-,-), \tensor, 1, a, \ell, r, \pi, \mu, \lambda, \rho\big)\]
consists of the following data subject to the three axioms stated afterwards.
\begin{description}
\item[Objects] $\T$ is equipped with a class $\T_0$ of \index{object!tricategory}\emph{objects}, also called \index{0-cell!tricategory}\emph{$0$-cells} in $\T$. For each object $X\in\T_0$, we also write $X\in\T$.
\item[Hom Bicategories] For each pair of objects $X_1,X_2\in\T$, it has a bicategory\label{notation:hom-bicat} \[\T(X_1,X_2) = \T_{1,2},\] called the \index{hom bicategory}\index{bicategory!hom}\index{tricategory!hom bicategory}\emph{hom bicategory} with domain $X_1$ and codomain $X_2$.
\begin{itemize}
\item The $0$-,$1$-,$2$-cells in $\T_{1,2}$ are called \emph{$1$-,$2$-,$3$-cells} in $\T$, respectively.
\item Objects in $\T_{1,2}$ are denoted by $f,g,h,$ etc., in the rest of this definition.
\end{itemize}
\item[Composition] For each triple of objects $X_1,X_2,X_3\in\T$, it has a\index{pseudofunctor} pseudofunctor
\begin{equation}\label{tricat-composition}
\begin{tikzcd}[column sep=large]
\T^2_{[1,3]} = \T(X_2,X_3) \times \T(X_1,X_2) \ar{r}{(\tensor,\tensortwo,\tensorzero)} & \T(X_1,X_3)= \T_{1,3}\end{tikzcd}
\end{equation}
called the \emph{composition}\index{composition!tricategory}\index{tricategory!composition} in $\T$, using the notations in \eqref{tricategory-product-abbreviation}. For objects $(g,f) \in \T^2_{[1,3]}$, their \emph{composite} $g \tensor f\in\T_{1,3}$ will sometimes be abbreviated to $gf$, omitting $\tensor$ from the notation and using parentheses for iterates of $\tensor$.
\item[Identities] For each object $X\in\T$, it has a pseudofunctor
\begin{equation}\label{tricat-identity}
\begin{tikzcd}[column sep=huge]
\boldone \ar{r}{(1_X,1_X^2,1_X^0)} & \T(X,X)\end{tikzcd}
\end{equation}
called the \emph{identity of $X$}.
\begin{itemize}
\item We sometimes abbreviate $1_X$ to $1$. The object $1_X(*)\in \T(X,X)$ is also written as $1_X$, called the \index{tricategory!identity}\index{identity 1-cell!tricategory}\emph{identity $1$-cell of $X$}.
\item The $1$-cell $1_X(1_*) \in \T(X,X)(1_X,1_X)$ is denoted by $i_X$.
\end{itemize}
\item[Associator] For each quadruple of objects $X_1,X_2,X_3,X_4\in\T$, it has an \index{bicategory!adjoint equivalence}\index{adjoint!equivalence!in a bicategory}adjoint equivalence $(a,\abdot,\etaa,\epza)$ as in\index{associator!tricategory}
\begin{equation}\label{tricategory-associator}
\begin{tikzpicture}[xscale=2.5, yscale=1.4, baseline={(a.base)}]
\draw[0cell]
(0,0) node (x11) {\T^3_{[1,4]}}
($(x11)+(1,0)$) node (x12) {\T^2_{1,2,4}}
($(x11)+(0,-1)$) node (x21) {\T^2_{1,3,4}}
($(x12)+(0,-1)$) node (x22) {\T_{1,4}}
;
\draw[1cell]
(x11) edge node (s) {\tensor\times 1} (x12)
(x11) edge node[swap] (a) {1\times \tensor} (x21)
(x12) edge node {\tensor} (x22)
(x21) edge node[swap] (t) {\tensor} (x22)
;
\draw[2cell]
node[between=s and t at .5, rotate=-135, 2label={below,a}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
in the bicategory $\Bicatps\big(\T^3_{[1,4]}, \T_{1,4}\big)$, called the \index{tricategory!associator}\emph{associator}.
\begin{itemize}
\item As a strong transformation
\[\begin{tikzcd}
\tensor(\tensor\times 1) \ar{r}{a} & \tensor(1\times \tensor),
\end{tikzcd}\]
its component $1$-cells and invertible component $2$-cells are in the hom bicategory $\T_{1,4}$, which are $2$-cells and invertible $3$-cells, respectively, in $\T$.
\item Component $1$-cells of $a$ are $1$-cells
\[\begin{tikzcd}
(h\tensor g)\tensor f \ar{r}{a_{h,g,f}} & h\tensor(g\tensor f) \in \T_{1,4}
\end{tikzcd}\]
for objects $(h,g,f) \in \T^3_{[1,4]}$.
\end{itemize}
\item[Unitors] For each pair of objects $X_1,X_2\in\T$, it has adjoint equivalences $(\ell,\ellbdot,\etaell,\epzell)$ and $(r,\rbdot,\etar,\epzr)$ as in
\begin{equation}\label{tricategory-unitors}
\begin{tikzpicture}[xscale=3, yscale=1.4, baseline={(a.base)}]
\draw[0cell]
(0,0) node (x11) {\T_{1,2}}
($(x11)+(1,0)$) node (x12) {\T_{1,2}}
($(x11)+(1/2,1)$) node (tl) {\T_{2,2}\times\T_{1,2}}
($(x12)+(1/2,0)$) node (x13) {\T_{1,2}}
($(x13)+(1,0)$) node (x14) {\T_{1,2}}
($(x13)+(1/2,1)$) node (tr) {\T_{1,2}\times\T_{1,1}}
;
\draw[1cell]
(x11) edge node[swap] (i) {1} (x12)
(x11) edge node[pos=.4] (a) {1_{X_2}\times 1} (tl)
(tl) edge node[pos=.6] {\tensor} (x12)
(x13) edge node[swap] (ii) {1} (x14)
(x13) edge node[pos=.4] {1\times 1_{X_1}} (tr)
(tr) edge node[pos=.6] {\tensor} (x14)
;
\draw[2cell]
node[between=tl and i at .5, rotate=-90, 2label={above,\ell}] {\Rightarrow}
node[between=tr and ii at .5, rotate=-90, 2label={above,r}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
in the bicategory $\Bicatps\big(\T_{1,2},\T_{1,2}\big)$, called the \index{tricategory!left and right unitors}\index{left unitor!tricategory}\emph{left unitor} and the \index{right unitor!tricategory}\emph{right unitor}, respectively.
\begin{itemize}
\item As strong transformations
\[\begin{tikzcd}
\tensor(1_{X_2}\times 1) \ar{r}{\ell} & 1 & \tensor(1\times 1_{X_1}) \ar{l}[swap]{r},
\end{tikzcd}\]
the component $1$-cells and invertible component $2$-cells of both $\ell$ and $r$ are in the hom bicategory $\T_{1,2}$, which are $2$-cells and invertible $3$-cells, respectively, in $\T$.
\item Component $1$-cells of $\ell$ and $r$ are $1$-cells
\[\begin{tikzcd}
1_{X_2} \tensor f \ar{r}{\ell_f} & f & f \tensor 1_{X_1} \ar{l}[swap]{r_{f}} \in \T_{1,2}
\end{tikzcd}\]
for objects $f\in \T_{1,2}$.
\end{itemize}
\item[Pentagonator] For each tuple of objects $X_p\in\T$ for $1\leq p \leq 5$, it has an invertible $2$-cell (i.e., modification)\index{pentagonator}\index{tricategory!pentagonator}
\begin{equation}\label{tricategory-pentagonator}
\begin{tikzcd}
\Big[\big(\tensor\whis (1\times a)\big) \big(a\whis (1\times\tensor\times 1)\big)\Big] \big(\tensor\whis(a\times 1)\big) \ar{d}{\pi}\\
\big(a \whis(1\times 1\times \tensor)\big) \big(a\whis(\tensor\times 1 \times 1)\big)
\end{tikzcd}
\end{equation}
in the hom-category
\[\Bicatps\big(\T^4_{[1,5]}, \T_{1,5}\big)\big(\tensor(\tensor\times 1)(\tensor\times 1 \times 1), \tensor(1\times \tensor)(1\times 1\times\tensor)\big),\]
called the \emph{pentagonator}.
\begin{itemize}
\item The domain of $\pi$ is an iterated horizontal composite as in \Cref{def:lax-tr-comp} of three strong transformations, with $\whis$ the whiskerings in \Cref{def:whiskering-transformation}. This is well-defined by \Cref{lax-tr-compose,pre-whiskering-transformation,post-whiskering-transformation}.
\item Similarly, the codomain of $\pi$ is the horizontal composite of the strong transformations $\big(a\whis(\tensor\times 1 \times 1)\big)$ and $\big(a \whis(1\times 1\times \tensor)\big)$.
\item Components of $\pi$ are invertible $2$-cells
\begin{equation}\label{pentagonator-component}
\begin{tikzpicture}[xscale=5,yscale=1.5, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {\big((j\tensor h)\tensor g\big)\tensor f}
($(x11)+(1,0)$) node (x12) {j\tensor\big(h\tensor(g\tensor f)\big)}
($(x11)+(.1,1)$) node (x01) {\big(j\tensor(h\tensor g)\big)\tensor f}
($(x11)+(.9,1)$) node (x02) {j\tensor\big((h\tensor g)\tensor f\big)}
($(x11)+(1/2,-.9)$) node (x31) {(j\tensor h)\tensor(g\tensor f)}
;
\draw[1cell]
(x11) edge node[pos=.3] {a_{j,h,g}\tensor 1_f} (x01)
(x01) edge node (s) {a_{j,hg,f}} (x02)
(x02) edge node[pos=.7] {1_j\tensor a_{h,g,f}} (x12)
(x11) edge node[swap] {a_{jh,g,f}} (x31)
(x31) edge node[swap] {a_{j,h,gf}} (x12)
;
\draw[2cell]
node[between=s and x31 at .4, shift={(-.3,0)}, rotate=-90, 2label={above,\pi_{j,h,g,f}}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
in $\T_{1,5}\big(((j\tensor h)\tensor g)\tensor f, j\tensor(h\tensor(g\tensor f))\big)$ for objects $(j,h,g,f)\in\T^4_{[1,5]}$.
\end{itemize}
\item[$2$-Unitors] For objects $X_1,X_2,X_3\in \T$, it has three invertible $2$-cells\index{2-unitor}\index{middle 2-unitor}\index{left 2-unitor}\index{right 2-unitor}\index{tricategory!2-unitors}
\begin{equation}\label{tricategory-iiunitors}
\begin{tikzcd}[%
,row sep = 0ex
,/tikz/column 1/.append style={anchor=base east}
,/tikz/column 2/.append style={anchor=base west}
]
\Big[\big(\tensor\whis(1\times\ell)\big) \big(a \whis (1\times 1_{X_2}\times 1)\big)\Big] \big(\tensor\whis (\rbdot \times 1)\big) \ar{r}{\mu} & \onetensor,\\
\tensor \whis (\ell\times 1) \ar{r}{\lambda} & (\ell\whis\tensor)\big(a \whis (1_{X_3}\times 1 \times 1)\big),\\
\tensor\whis(1\times \rbdot) \ar{r}{\rho} & \big(a\whis (1\times 1 \times 1_{X_1})\big)(\rbdot \whis \tensor)
\end{tikzcd}
\end{equation}
in the bicategory $\Bicatps\big(\T^2_{[1,3]},\T_{1,3}\big)$, called the \emph{middle $2$-unitor}, the \emph{left $2$-unitor}, and the \emph{right $2$-unitor}, respectively.
\begin{itemize}
\item $\rbdot : 1 \to \tensor(1\times 1_{X_1})$ is the chosen right adjoint of $r$, with component $1$-cells
\[\begin{tikzcd}
f \ar{r}{\rbdot_f} & f\tensor 1_{X_1}.\end{tikzcd}\]
\item Components of $\mu$, $\lambda$, and $\rho$ are invertible $2$-cells
\begin{equation}\label{mid-iiunitor-component}
\begin{tikzpicture}[xscale=4.7,yscale=1.5,baseline={(r.base)}]
\draw[0cell]
(0,0) node (x11) {g\tensor f}
($(x11)+(1,0)$) node (x12) {g\tensor f}
($(x11)+(.1,1)$) node (x01) {(g\tensor 1_{X_2})\tensor f}
($(x11)+(.9,1)$) node (x02) {g \tensor (1_{X_2}\tensor f)}
;
\draw[1cell]
(x11) edge node[pos=.3] (r) {\rbdot_g\tensor 1_f} (x01)
(x01) edge node (s) {a_{g,1,f}} (x02)
(x02) edge node[pos=.7] {1_g\tensor \ell_f} (x12)
(x11) edge node[swap] (one) {1_{g\tensor f}} (x12)
;
\draw[2cell]
node[between=s and one at .5, shift={(-.3,0)}, rotate=-90, 2label={above,\mu_{g,f}}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
\begin{equation}\label{left-right-iiunitor-component}
\begin{tikzpicture}[xscale=3,yscale=1.5, baseline={(lambda.base)}]
\def\b{15}
\draw[0cell]
(0,0) node (x11) {(1_{X_3}\tensor g)\tensor f}
($(x11)+(1,0)$) node (x12) {g\tensor f}
($(x11)+(1/2,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x21) {1_{X_3}\tensor (g\tensor f)}
;
\draw[1cell]
(x11) edge node (s) {\ell_g \tensor 1_f} (x12)
(x11) edge[bend right=\b] node[swap] (a) {a_{1,g,f}} (x21)
(x21) edge[bend right=\b] node[swap] {\ell_{g\tensor f}} (x12)
;
\draw[2cell]
node[between=s and x21 at .5, shift={(-.2,0)}, rotate=-90, 2label={above,\lambda_{g,f}}] (lambda) {\Rightarrow}
;
\draw[0cell]
($(x12)+(-1} \def\v{-1,0)$) node (y11) {g\tensor f}
($(y11)+(1,0)$) node (y12) {g\tensor (f\tensor 1_{X_1})}
($(y11)+(1/2,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (y21) {(g\tensor f)\tensor 1_{X_1}}
;
\draw[1cell]
(y11) edge node (a) {1_g \tensor \rbdot_f} (y12)
(y11) edge[bend right=\b] node[swap] {\rbdot_{g\tensor f}} (y21)
(y21) edge[bend right=\b] node[swap] {a_{g,f,1}} (y12)
;
\draw[2cell]
node[between=a and y21 at .5, shift={(-.2,0)}, rotate=-90, 2label={above,\rho_{g,f}}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
in the hom bicategory $\T_{1,3}$ for objects $(g,f)\in \T^2_{[1,3]}$.
\end{itemize}
\end{description}
The subscripts in $a$, $\ell$, $r$, $\pi$, $\mu$, $\lambda$, and $\rho$ will often be omitted.
The above data are required to satisfy the following three axioms, with $\tensor$ abbreviated to concatenation and iterates of $\tensor$ denoted by parentheses. As always, pasting diagrams are interpreted using the Bicategorical Pasting \Cref{thm:bicat-pasting-theorem} and \Cref{conv:boundary-bracketing}.
\begin{description}
\item[Non-Abelian 4-Cocycle Condition]\index{non-abelian 4-cocycle condition} The equality of pasting diagrams
\begin{equation}\label{nb4cocycle}
\begin{tikzpicture}[xscale=2.5,yscale=1.8,baseline={(eq.base)}]
\def-1} \def\v{-1{.2}
\def.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2} \def\d{1{.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2}
\def1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3{1.8} \def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{1} \def.4{.5} \def\q{.3} \def-1} \def\v{-1{.5} \def\z{40}
\draw[font=\huge] (0,0) node [rotate=90] (eq) {$=$};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x11) {(k(j(hg)))f}
($(x11)+(-\b,-\q)$) node (x21) {(k((jh)g))f}
($(x11)+(\b,-\q)$) node (x22) {k((j(hg))f)}
($(x11)+(-.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x31) {((k(jh))g)f}
($(x11)+(.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x33) {k(j((hg)f))}
($(x11)+(-.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1,-2*-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15-.4)$) node (x41) {(((kj)h)g)f}
($(x11)+(.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1,-2*-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15-.4)$) node (x44) {k(j(h(gf)))}
($(x11)+(-.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2} \def\d{1,-3*-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x51) {((kj)h)(gf)}
($(x11)+(.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2} \def\d{1,-3*-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x52) {(kj)(h(gf))}
;
\draw[1cell]
(x41) edge node {(a1)1} (x31)
(x31) edge node {a1} (x21)
(x21) edge node[pos=.7] {(1a)1} (x11)
(x11) edge node {a} (x22)
(x22) edge node {1a} (x33)
(x33) edge node {1(1a)} (x44)
(x41) edge node[swap] {a} (x51)
(x51) edge node {a} (x52)
(x52) edge node[swap] {a} (x44)
;
}
\begin{scope}[shift={(0,3*-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15+-1} \def\v{-1)}]
\boundary
\draw[0cell]
($(x11)+(0,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x32) {k(((jh)g)f)}
($(x11)+(-.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2} \def\d{1,-1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3)$) node (x42) {(k(jh))(gf)}
($(x11)+(.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2} \def\d{1,-1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3)$) node (x43) {k((jh)(gf))}
;
\draw[1cell]
(x21) edge node {a} (x32)
(x32) edge node {1(a1)} (x22)
(x32) edge node {1a} (x43)
(x31) edge node {a} (x42)
(x42) edge node[swap] {a} (x43)
(x43) edge node {1a} (x44)
(x51) edge[bend left=\z] node {a(1_g 1_f)} (x42)
(x51) edge[bend right=\z] node[swap] {a1_{gf}} (x42)
;
\draw[2cell]
node[between=x11 and x32 at .4, shift={(-.4,0)}, rotate=-90, 2label={above,\ainv_{1,a,1}}] {\Rightarrow}
node[between=x32 and x33 at .5, shift={(0,-.1)}, rotate=-90, 2label={above,1\pi}] {\Rightarrow}
node[between=x31 and x32 at .5, shift={(0,-.1)}, rotate=-90, 2label={above,\pi}] {\Rightarrow}
node[between=x41 and x42 at .25, shift={(0,.6)}, rotate=-45, 2label={above,\ainv_{a,1,1}}] {\Rightarrow}
node[between=x42 and x51 at .65, rotate=0, 2label={above,1_a(\tensorzeroinv_{g,f})}] {\Rightarrow}
node[between=x43 and x51 at .4, shift={(.3,0)}, rotate=-90, 2label={above,\pi}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(0,--1} \def\v{-1)}]
\def1.2{1.2}
\boundary
\draw[0cell]
($(x11)+(-.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2} \def\d{1,-1.2)$) node (x32) {((kj)(hg))f}
($(x11)+(.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2} \def\d{1,-1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3)$) node (x43) {(kj)((hg)f)}
;
\draw[1cell]
(x41) edge node[swap] {a1} (x32)
(x32) edge node[swap] {a1} (x11)
(x32) edge node {a} (x43)
(x43) edge node {a} (x33)
(x43) edge[bend left=\z] node {(1_k 1_j)a} (x52)
(x43) edge[bend right=\z] node[swap] {1_{kj}a} (x52)
;
\draw[2cell]
node[between=x31 and x32 at .4, shift={(.2,0)}, rotate=-45, 2label={above,\pi 1}] {\Rightarrow}
node[between=x22 and x43 at .5, shift={(-.6,.3)}, rotate=-90, 2label={above,\pi}] {\Rightarrow}
node[between=x43 and x44 at .5, shift={(0,.5)}, rotate=-135, 2label={above,a_{1,1,a}}] {\Rightarrow}
node[between=x43 and x52 at .65, rotate=180, 2label={below,(\tensorzeroinv_{k,j}) 1_a}] {\Rightarrow}
node[between=x32 and x51 at .5, rotate=-90, 2label={above,\pi}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
holds in $\T_{1,6}\big((((kj)h)g)f,k(j(h(gf)))\big)$ for objects $(k,j,h,g,f)\in\T^5_{[1,6]}$.
\begin{itemize}
\item The $2$-cells $1\pi$ in the top side and $\pi 1$ in the bottom side are \emph{not} $1\tensor \pi$ and $\pi \tensor 1$. They will be explained precisely in \Cref{expl:nb4}.
\item $a_{a,1,1}$, $a_{1,a,1}$, and $a_{1,1,a}$ are component $2$-cells of the strong transformation $a$.
\item $\tensorzero$ is the lax unity constraint for the pseudofunctor $\tensor$, and
\begin{equation}\label{tensorzero-gf}
\begin{tikzcd}
1_{g \tensor f} \ar{r}{\tensorzero_{g,f}} & 1_g \tensor 1_f \in \T_{1,3}(g\tensor f,g \tensor f)
\end{tikzcd}
\end{equation}
is its $(g,f)$-component, with inverse denoted by\label{notation:tensorzeroinv} $\tensorzeroinv_{g,f}$, and similarly for $\tensorzeroinv_{k,j}$.
\end{itemize}
\item[Left Normalization]\index{left normalization axiom} The equality of pasting diagrams
\begin{equation}\label{left-normalization-axiom}
\begin{tikzpicture}[baseline={(eq.base)}]
\def-1} \def\v{-1{.05}
\def.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2} \def\d{1{.5} \def\b{1} \def\bp{1.04} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{.3}
\def1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3{1.4} \def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{1} \def.4{.5} \def\q{.95} \def-1} \def\v{-1{.5} \def\z{60}
\draw[font=\huge] (0,0) node [rotate=0] (eq) {$=$};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x11) {(h(1g))f}
($(x11)+(-\b,-.4)$) node (x21) {((h1)g)f}
($(x11)+(\b,-.4)$) node (x22) {(hg)f}
($(x11)+(-\b,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15-.4)$) node (x31) {(hg)f}
($(x11)+(\b,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15-.4)$) node (x32) {h(gf)}
($(x11)+(0,-2*-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x41) {h(gf)}
;
\draw[1cell]
(x31) edge node[pos=.3] {(\rbdot 1)1} (x21)
(x21) edge[bend left] node {a1} (x11)
(x11) edge[bend left] node {(1\ell)1} (x22)
(x22) edge node (as) {a} (x32)
(x31) edge[out=-90,in=180] node[swap,pos=.4] {a} (x41)
(x41) edge[out=0,in=-90] node[swap,pos=.6] {1} (x32)
;
}
\begin{scope}[xscale=2.6,yscale=3,shift={(-\b-3*-1} \def\v{-1/4,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)}]
\boundary
\draw[0cell]
($(x11)+(0,-.4)$) node (A) {h((1g)f)}
($(x11)+(0,-\q)$) node (B) {h(1(gf))}
($(x11)+(0,-1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3)$) node (C) {(h1)(gf)}
;
\draw[1cell]
(x11) edge node[swap] {a} (A)
(A) edge node[swap] {1a} (B)
(A) edge[bend left=20] node[pos=.3] (onelone) {1(\ell 1)} (x32)
(C) edge node {a} (B)
(B) edge[bend right=15] node[pos=.6] {1\ell} (x32)
(x21) edge node[swap,pos=.4] {a} (C)
(x41) edge[bend left=\z] node {\rbdot (1_g 1_f)} (C)
(x41) edge[bend right=\z] node[swap,pos=.45] {\rbdot 1_{gf}} (C)
;
\draw[2cell]
node[between=x22 and A at .6, shift={(0,.7)}, rotate=-135, 2label={above,\ainv_{1,\ell,1}}] {\Rightarrow}
node[between=B and as at .4, shift={(0,-.2)}, rotate=-135, 2label={above,1\lambda}] {\Rightarrow}
node[between=x21 and A at .5, shift={(0,.6)}, rotate=-90, 2label={above,\pi}] {\Rightarrow}
node[between=x31 and C at .2, shift={(0,.7)}, rotate=-90, 2label={above,\ainv_{\rbdot,1,1}}] {\Rightarrow}
node[between=C and x41 at .7, 2label={above,1_{\rbdot}\tensorzeroinv_{g,f}}] {\Rightarrow}
node[between=C and x32 at .5, shift={(0,-.5)}, rotate=-45, 2label={above,\mu}] {\Rightarrow}
;
\end{scope}
\begin{scope}[xscale=1.8,yscale=3,shift={(\b+-1} \def\v{-1,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)}]
\boundary
\draw[1cell]
(x31) edge[bend left=15] node {1_{hg}1_f} (x22)
(x31) edge[bend right=15] node[swap,pos=.6] {1_{(hg)f}} (x22)
(x31) edge node[swap] (ass) {a} (x32)
;
\draw[2cell]
node[between=x21 and x22 at .4, shift={(0,.5)}, rotate=-45, 2label={above,\mu 1}] {\Rightarrow}
node[between=x31 and x22 at .45, rotate=-45, 2label={above,\tensorzeroinv_{hg,f}}] {\Rightarrow}
node[between=x31 and x32 at .6, shift={(0,.4)}, rotate=-90, 2label={above,r_a}] {\Rightarrow}
node[between=ass and x41 at .5, shift={(-.3,0)}, rotate=-90, 2label={above,\ellinv_a}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
holds in $\T_{1,4}\big((hg)f,h(gf)\big)$ for objects $(h,g,f)\in\T^3_{[1,4]}$.
\begin{itemize}
\item On the right-hand side, $r_a$ and $\ell_a$ are the $a_{h,g,f}$-components of the right unitor and the left unitor, respectively, in the hom bicategory $\T_{1,4}$.
\item The $2$-cells $1\lambda$ and $\mu 1$ are \emph{not} $1\tensor\lambda$ and $\mu \tensor 1$. They will be explained precisely in \Cref{expl:left-normalization}.
\end{itemize}
\item[Right Normalization]\index{right normalization axiom} The equality of pasting diagrams
\begin{equation}\label{right-normalization-axiom}
\begin{tikzpicture}[baseline={(eq.base)}]
\def-1} \def\v{-1{.05}
\def.7} \def\b{1.25} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{2} \def\d{1{.5} \def\b{1} \def\bp{1.04} \def.25} \def\cp{.5} \def\cpp{.75} \def\e{-.1{.3}
\def1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3{1.4} \def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{1} \def.4{.5} \def\q{.95} \def-1} \def\v{-1{.5} \def\z{60}
\draw[font=\huge] (0,0) node [rotate=0] (eq) {$=$};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x11) {h((g1)f)}
($(x11)+(-\b,-.4)$) node (x21) {h(gf)}
($(x11)+(\b,-.4)$) node (x22) {h(g(1f))}
($(x11)+(-\b,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15-.4)$) node (x31) {(hg)f}
($(x11)+(\b,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15-.4)$) node (x32) {h(gf)}
($(x11)+(0,-2*-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x41) {(hg)f}
;
\draw[1cell]
(x31) edge node (as) {a} (x21)
(x21) edge[bend left] node {1(\rbdot 1)} (x11)
(x11) edge[bend left] node {1a} (x22)
(x22) edge node[pos=.3] {1(1\ell)} (x32)
(x31) edge[out=-90,in=180] node[swap,pos=.4] {1} (x41)
(x41) edge[out=0,in=-90] node[swap,pos=.6] {a} (x32)
;
}
\begin{scope}[xscale=2.6,yscale=3,shift={(-\b-3*-1} \def\v{-1/4,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)}]
\boundary
\draw[0cell]
($(x11)+(0,-.4)$) node (A) {(h(g1))f}
($(x11)+(0,-\q)$) node (B) {((hg)1)f}
($(x11)+(0,-1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3)$) node (C) {(hg)(1f)}
;
\draw[1cell]
(A) edge node[swap] {a} (x11)
(x31) edge[bend left=20] node[pos=.7] {(1\rbdot)1} (A)
(B) edge node[swap] (aone) {a1} (A)
(x31) edge[bend right=15] node[pos=.4] {\rbdot 1} (B)
(B) edge node {a} (C)
(C) edge node[pos=.7] {a} (x22)
(C) edge[bend left=\z] node[pos=.5] {(1_h1_g)\ell} (x41)
(C) edge[bend right=\z] node[swap,pos=.55] {1_{hg}\ell} (x41)
;
\draw[2cell]
node[between=x21 and A at .7, shift={(0,.7)}, rotate=-45, 2label={below,a_{1,\rbdot,1}}] {\Rightarrow}
node[between=B and as at .5, shift={(0,-.3)}, rotate=-45, 2label={above,\rho 1}] {\Rightarrow}
node[between=A and x22 at .5, shift={(0,.6)}, rotate=-90, 2label={above,\pi}] {\Rightarrow}
node[between=x32 and C at .5, shift={(0,.7)}, rotate=-90, 2label={above,a_{1,1,\ell}}] {\Rightarrow}
node[between=C and x41 at .7, rotate=180, 2label={below,\tensorzeroinv_{h,g}1_{\ell}}] {\Rightarrow}
node[between=C and x31 at .6, shift={(0,-.3)}, rotate=-135, 2label={above,\mu}] {\Rightarrow}
;
\end{scope}
\begin{scope}[xscale=1.8,yscale=3,shift={(\b+-1} \def\v{-1,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)}]
\boundary
\draw[1cell]
(x21) edge[bend left=15] node[pos=.4] {1_h 1_{gf}} (x32)
(x21) edge[bend right=15] node[swap,pos=.4] {1_{h(gf)}} (x32)
(x31) edge node[swap] (ass) {a} (x32)
;
\draw[2cell]
node[between=x21 and x22 at .5, shift={(0,.5)}, rotate=-135, 2label={below,1\mu}] {\Rightarrow}
node[between=x21 and x32 at .55, rotate=-135, 2label={below,\tensorzeroinv_{h,gf}}] {\Rightarrow}
node[between=x31 and x32 at .4, shift={(0,.4)}, rotate=-90, 2label={above,\ell_a}] {\Rightarrow}
node[between=ass and x41 at .5, shift={(-.3,0)}, rotate=-90, 2label={above,\rinv_a}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}
\end{equation}
holds in $\T_{1,4}\big((hg)f,h(gf)\big)$ for objects $(h,g,f)\in\T^3_{[1,4]}$. The $2$-cells $\rho 1$ and $1\mu$ will be explained precisely in \Cref{expl:right-normalization}.
\end{description}
This finishes the definition of a tricategory.
\end{definition}
\begin{explanation}[Tricategorical data]\label{expl:tricategory-axiom}
In the definition of a tricategory:
\begin{enumerate}
\item The lax functoriality constraint $\tensortwo$ in \eqref{tricat-composition} has invertible component $2$-cells
\begin{equation}\label{tricat-tensortwo-component}
\begin{tikzcd}[column sep=huge]
(\beta'\tensor\alpha') (\beta\tensor\alpha) \ar{r}{\tensortwo_{(\beta',\alpha'),(\beta,\alpha)}}[swap]{\iso} & (\beta'\beta) \tensor (\alpha'\alpha) \in \T_{1,3}\big(g\tensor f,g''\tensor f''\big)
\end{tikzcd}
\end{equation}
for $1$-cells
\[\begin{tikzcd}
f \ar{r}{\alpha} & f' \ar{r}{\alpha'} & f'' \in \T_{1,2}\end{tikzcd} \andspace
\begin{tikzcd}
g \ar{r}{\beta} & g' \ar{r}{\beta'} & g'' \in \T_{2,3}.\end{tikzcd}\]
Its inverse will be written as $\tensortwoinv$.
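For orientation (a special case, not part of the definition): when the hom bicategories are $2$-categories and $\tensor$ is a strict $2$-functor, the constraint $\tensortwo$ may be taken to be the identity, and \eqref{tricat-tensortwo-component} reduces to the middle four exchange law
\[ (\beta'\tensor\alpha')(\beta\tensor\alpha) = (\beta'\beta)\tensor(\alpha'\alpha). \]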
\item For the pseudofunctor $1_X : \boldone \to \T(X,X)$ in \eqref{tricat-identity}, its lax unity constraint and lax functoriality constraint are invertible $2$-cells
\begin{equation}\label{onexzerotwo}
\begin{tikzcd}
1_{1_X} \ar{r}{(1_X)^0}[swap]{\iso} & i_X & i_X i_X \ar{l}{\iso}[swap]{(1_X)^2} \in \T(X,X)(1_X,1_X).
\end{tikzcd}
\end{equation}
Naturality \eqref{f2-bicat-naturality} of $(1_X)^2$ is trivially satisfied, since the only $2$-cell in $\boldone$ is the identity $2$-cell of the identity $1$-cell $1_*$.
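Unpacking this (a standard observation rather than additional data): since $\boldone$ consists of the single object $*$, the identity $1$-cell $1_*$, and its identity $2$-cell, the pseudofunctor $1_X$ amounts to the data
\[ 1_X(*) \in \T(X,X) \andspace i_X = 1_X(1_*) \in \T(X,X)\big(1_X(*),1_X(*)\big), \]
together with the invertible $2$-cells in \eqref{onexzerotwo}. The object $1_X(*)$, which is a $1$-cell of $\T$, is also denoted $1_X$.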
\item The associator $(a,\abdot,\eta^a,\epz^a)$ in \eqref{tricategory-associator} is an adjoint equivalence in the bicategory $\Bicatps\big(\T^3_{[1,4]},\T_{1,4}\big)$ in the sense of \Cref{definition:internal-equivalence}. The left adjoint $a$ and the right adjoint $\abdot$ have component $1$-cells
\begin{equation}\label{aadot-component-icell}
\begin{tikzcd}
(h\tensor g)\tensor f \ar[shift left]{r}{a_{h,g,f}} & h\tensor(g\tensor f) \ar[shift left]{l}{\abdot_{h,g,f}} \in \T_{1,4}
\end{tikzcd}
\end{equation}
and invertible component $2$-cells
\begin{equation}\label{aadot-component-iicell}
\begin{tikzpicture}[xscale=2.8, yscale=1.5, baseline={(a.base)}]
\def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{-1} \def1{1}
\draw[0cell]
(0,0) node (x11) {(hg)f}
($(x11)+(1,0)$) node (x12) {h(gf)}
($(x11)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x21) {(h'g')f'}
($(x12)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x22) {h'(g'f')}
;
\draw[1cell]
(x11) edge node (s) {a_{h,g,f}} (x12)
(x11) edge node[swap] (a) {(\gamma\beta)\alpha} (x21)
(x12) edge node {\gamma(\beta\alpha)} (x22)
(x21) edge node[swap] (t) {a_{h',g',f'}} (x22)
;
\draw[2cell]
node[between=s and t at .6, shift={(.3,0)}, rotate=-135, 2label={below,a_{\gamma,\beta,\alpha}}] {\Rightarrow}
;
\draw[0cell]
($(x12)+(1,0)$) node (y11) {(hg)f}
($(y11)+(1,0)$) node (y12) {h(gf)}
($(y11)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (y21) {(h'g')f'}
($(y12)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (y22) {h'(g'f')}
;
\draw[1cell]
(y12) edge node[swap] (s) {\abdot_{h,g,f}} (y11)
(y11) edge node[swap] (a) {(\gamma\beta)\alpha} (y21)
(y12) edge node {\gamma(\beta\alpha)} (y22)
(y22) edge node (t) {\abdot_{h',g',f'}} (y21)
;
\draw[2cell]
node[between=s and t at .6, shift={(-.3,0)}, rotate=-45, 2label={above,\abdot_{\gamma,\beta,\alpha}}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
in $\T_{1,4}\big((hg)f, h'(g'f')\big)$ and $\T_{1,4}\big(h(gf),(h'g')f'\big)$, respectively, for $1$-cells $\alpha : f \to f'$ in $\T_{1,2}$, $\beta : g \to g'$ in $\T_{2,3}$, and $\gamma : h \to h'$ in $\T_{3,4}$.
The unit and the counit,
\begin{equation}\label{aadot-unit-counit}
\begin{tikzcd}
1_{\tensor(\tensor\times 1)} \ar{r}{\eta^a}[swap]{\iso} & \abdot a\end{tikzcd} \andspace
\begin{tikzcd}
a\abdot \ar{r}{\epz^a}[swap]{\iso} & 1_{\tensor(1\times\tensor)},\end{tikzcd}
\end{equation}
have invertible component $2$-cells
\[\begin{tikzpicture}[xscale=2.8, yscale=1.5, baseline={(a.base)}]
\def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{-1} \def1{1}
\draw[0cell]
(0,0) node (x11) {(hg)f}
($(x11)+(1,0)$) node (x12) {h(gf)}
($(x12)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x22) {(hg)f}
;
\draw[1cell]
(x11) edge node {a_{h,g,f}} (x12)
(x12) edge node {\abdot_{h,g,f}} (x22)
(x11) edge[out=-90,in=180] node[swap] (one) {1_{(hg)f}} (x22)
;
\draw[2cell]
node[between=one and x12 at .6, shift={(0,-.3)}, rotate=45, 2label={above,\eta^a_{h,g,f}}] {\Rightarrow}
;
\draw[0cell]
($(x12)+(1,0)$) node (y11) {(hg)f}
($(y11)+(1,0)$) node (y12) {h(gf)}
($(y11)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (y21) {h(gf)}
;
\draw[1cell]
(y12) edge node[swap] {\abdot_{h,g,f}} (y11)
(y11) edge node[swap] {a_{h,g,f}} (y21)
(y12) edge[out=-90,in=0] node (i) {1_{h(gf)}} (y21)
;
\draw[2cell]
node[between=y11 and i at .4, shift={(0,-.3)}, rotate=-45, 2label={above,\epz^a_{h,g,f}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\T_{1,4}\big((hg)f,(hg)f\big)$ and $\T_{1,4}\big(h(gf),h(gf)\big)$, respectively.
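Being an \emph{adjoint} equivalence, rather than merely an equivalence, means that in addition to the invertibility of $\eta^a$ and $\epz^a$, the triangle identities hold in $\Bicatps\big(\T^3_{[1,4]},\T_{1,4}\big)$. Suppressing the structure isomorphisms of this bicategory and writing whiskering as juxtaposition, they read
\[ \big(\epz^a a\big)\big(a\, \eta^a\big) = 1_{a} \andspace \big(\abdot\, \epz^a\big)\big(\eta^a \abdot\big) = 1_{\abdot}. \]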
\item The left unitor $(\ell, \ellbdot, \etaell, \epzell)$ and the right unitor $(r,\rbdot,\etar,\epzr)$ in \eqref{tricategory-unitors} are adjoint equivalences in $\Bicatps(\T_{1,2},\T_{1,2})$, with component $1$-cells
\begin{equation}\label{unitors-component-icell}
\begin{tikzcd}
1_{X_2} \tensor f \ar[shift left]{r}{\ell_f} & f \ar[shift left]{l}{\ellbdot_f} \ar[shift right]{r}[swap]{\rbdot_f} & f \tensor 1_{X_1} \ar[shift right]{l}[swap]{r_{f}} \in \T_{1,2}
\end{tikzcd}
\end{equation}
and invertible component $2$-cells
\begin{equation}\label{unitors-component-iicell}
\begin{tikzpicture}[xscale=1.8, yscale=1.5, baseline={(al.base)}]
\def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{-1} \def1{1} \def-1} \def\v{-1{.7}
\draw[0cell]
(0,0) node (x11) {1_{X_2} f}
($(x11)+(1,0)$) node (x12) {f}
($(x11)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x21) {1_{X_2}f'}
($(x12)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x22) {f'}
;
\draw[1cell]
(x11) edge node (s) {\ell_f} (x12)
(x11) edge node[swap] (al) {1\alpha} (x21)
(x12) edge node {\alpha} (x22)
(x21) edge node[swap] (t) {\ell_{f'}} (x22)
;
\draw[2cell]
node[between=s and t at .6, shift={(.1,0)}, rotate=-135, 2label={below,\ell_{\alpha}}] {\Rightarrow}
;
\draw[0cell]
($(x12)+(-1} \def\v{-1,0)$) node (y11) {1_{X_2}f}
($(y11)+(1,0)$) node (y12) {f}
($(y11)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (y21) {1_{X_2}f'}
($(y12)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (y22) {f'}
;
\draw[1cell]
(y12) edge node[swap] (sy) {\ellbdot_f} (y11)
(y11) edge node[swap] {1\alpha} (y21)
(y12) edge node {\alpha} (y22)
(y22) edge node (ty) {\ellbdot_{f'}} (y21)
;
\draw[2cell]
node[between=sy and ty at .6, shift={(-.2,0)}, rotate=-45, 2label={above,\ellbdot_{\alpha}}] {\Rightarrow}
;
\draw[0cell]
($(y12)+(-1} \def\v{-1,0)$) node (w11) {f}
($(w11)+(1,0)$) node (w12) {f1_{X_1}}
($(w11)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (w21) {f'}
($(w12)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (w22) {f'1_{X_1}}
;
\draw[1cell]
(w12) edge node[swap] (sw) {r_f} (w11)
(w11) edge node[swap] {\alpha} (w21)
(w12) edge node {\alpha 1} (w22)
(w22) edge node (tw) {r_{f'}} (w21)
;
\draw[2cell]
node[between=sw and tw at .6, shift={(-.1,0)}, rotate=-45, 2label={above,r_{\alpha}}] {\Rightarrow}
;
\draw[0cell]
($(w12)+(-1} \def\v{-1,0)$) node (z11) {f}
($(z11)+(1,0)$) node (z12) {f1_{X_1}}
($(z11)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (z21) {f'}
($(z12)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (z22) {f'1_{X_1}}
;
\draw[1cell]
(z11) edge node (sz) {\rbdot_f} (z12)
(z11) edge node[swap] {\alpha} (z21)
(z12) edge node {\alpha 1} (z22)
(z21) edge node[swap] (tz) {\rbdot_{f'}} (z22)
;
\draw[2cell]
node[between=sz and tz at .6, shift={(.1,0)}, rotate=-135, 2label={below,\rbdot_{\alpha}}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
in $\T_{1,2}(1_{X_2}f,f')$, $\T_{1,2}(f,1_{X_2}f')$, $\T_{1,2}(f1_{X_1},f')$, and $\T_{1,2}(f,f'1_{X_1})$, respectively, for $1$-cells $\alpha : f \to f'$ in $\T_{1,2}$.
The units and the counits,
\begin{equation}\label{unitors-unit-counit}
\begin{tikzcd}[%
,row sep = 0ex
,/tikz/column 1/.append style={anchor=base east}
,/tikz/column 2/.append style={anchor=base west}
,/tikz/column 3/.append style={anchor=base east}
,/tikz/column 4/.append style={anchor=base west}]
1_{\tensor(1_{X_2}\times 1)} \ar{r}{\etaell} & \ellbdot\ell & 1_{\tensor(1\times 1_{X_1})} \ar{r}{\etar} & \rbdot r\\
\ell\ellbdot \ar{r}{\epzell} & 1_{1_{\T_{1,2}}} & r \rbdot \ar{r}{\epzr} & 1_{1_{\T_{1,2}}},
\end{tikzcd}
\end{equation}
have invertible component $2$-cells
\[\begin{tikzpicture}[xscale=1.8, yscale=1.5, baseline={(a.base)}]
\def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{-1} \def1{1} \def-1} \def\v{-1{.7}
\draw[0cell]
(0,0) node (x11) {1_{X_2} f}
($(x11)+(1,0)$) node (x12) {f}
($(x12)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x22) {1_{X_2}f}
;
\draw[1cell]
(x11) edge node {\ell_f} (x12)
(x12) edge node {\ellbdot_f} (x22)
(x11) edge[out=-90,in=180] node[swap] (ietal) {1} (x22)
;
\draw[2cell]
node[between=ietal and x12 at .5, shift={(0,-.3)}, rotate=45, 2label={above,\etaell_{f}}] {\Rightarrow}
;
\draw[0cell]
($(x12)+(-1} \def\v{-1,0)$) node (y11) {1_{X_2}f}
($(y11)+(1,0)$) node (y12) {f}
($(y11)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (y21) {f}
;
\draw[1cell]
(y12) edge node[swap] {\ellbdot_f} (y11)
(y11) edge node[swap] {\ell_f} (y21)
(y12) edge[out=-90,in=0] node (iepzl) {1} (y21)
;
\draw[2cell]
node[between=y11 and iepzl at .5, shift={(0,-.3)}, rotate=-45, 2label={above,\epzell_{f}}] {\Rightarrow}
;
\draw[0cell]
($(y12)+(-1} \def\v{-1,0)$) node (w11) {f}
($(w11)+(1,0)$) node (w12) {f1_{X_1}}
($(w11)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (w21) {f1_{X_1}}
;
\draw[1cell]
(w12) edge node[swap] (sw) {r_f} (w11)
(w11) edge node[swap] {\rbdot_f} (w21)
(w12) edge[out=-90,in=0] node (ietar) {1} (w21)
;
\draw[2cell]
node[between=ietar and w11 at .5, shift={(-.1,-.3)}, rotate=135, 2label={below,\etar_{f}}] {\Rightarrow}
;
\draw[0cell]
($(w12)+(-1} \def\v{-1,0)$) node (z11) {f}
($(z11)+(1,0)$) node (z12) {f1_{X_1}}
($(z12)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (z22) {f}
;
\draw[1cell]
(z11) edge node (sz) {\rbdot_f} (z12)
(z12) edge node {r_f} (z22)
(z11) edge[out=-90,in=180] node[swap] (iepzr) {1} (z22)
;
\draw[2cell]
node[between=z12 and iepzr at .5, shift={(0,-.1)}, rotate=-135, 2label={below,\epzr_{f}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\T_{1,2}\big(1_{X_2}f,1_{X_2}f\big)$, $\T_{1,2}(f,f)$, $\T_{1,2}\big(f 1_{X_1},f 1_{X_1}\big)$, and $\T_{1,2}(f,f)$, respectively.
\end{enumerate}
The above data---namely, the composition, the associator, the left unitor, and the right unitor---are categorified versions of bicategorical data.
\end{explanation}
\begin{explanation}[Pentagonator and $2$-unitors]\label{expl:tricategory-definition}
The pentagonator $\pi$ and the $2$-unitors $\mu$, $\lambda$, and $\rho$ are the tricategorical analogues of the pentagon axiom \eqref{bicat-pentagon}, the unity axiom \eqref{bicat-unity}, and the left and right unity \emph{properties} in \Cref{bicat-left-right-unity}. In the literature, they are often denoted by the following diagrams.
\[\begin{tikzpicture}[xscale=2.5,yscale=1.8,baseline={(pi.base)}]
\def-1} \def\v{-1{.5}
\def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{-1} \def1{1} \def1{.3}
\draw[font=\Large] (0,0) node (pi) {$\stackrel{\pi}{\Rrightarrow}$};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x11) {\T^4_{[1,5]}}
($(x11)+(1,0)$) node (x12) {\T^3_{1,2,3,5}}
($(x11)+(-1,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x21) {\T^3_{1,3,4,5}}
($(x12)+(1,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x23) {\T^2_{1,2,5}}
($(x11)+(0,2*-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x31) {\T^2_{1,4,5}}
($(x31)+(1,0)$) node (x32) {\T_{1,5}}
;
\draw[1cell]
(x11) edge node (ar1) {\tensor \times 1 \times 1} (x12)
(x11) edge node[swap] (ar2) {1 \times 1 \times \tensor} (x21)
(x12) edge node (ar3) {\tensor \times 1} (x23)
(x21) edge node[swap] (ar4) {1\times \tensor} (x31)
(x23) edge node (ar5) {\tensor} (x32)
(x31) edge node[swap] {\tensor} (x32)
;
}
\begin{scope}[shift={(-1-1--1} \def\v{-1,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)}]
\boundary
\draw[0cell]
($(x21)!.45!(x23)$) node (x22) {\T^3_{1,2,4,5}}
;
\draw[1cell]
(x11) edge node[pos=.4] (arr1) {1\times\tensor\times 1} (x22)
(x22) edge node[swap] (arr2) {\tensor\times 1} (x23)
(x22) edge node[pos=.6] (arr3) {1\times\tensor} (x31)
;
\draw[2cell]
node[between=x21 and x22 at .5, rotate=180, 2label={below,1\times a}] {\Rightarrow}
node[between=x12 and x22 at .6, rotate=-135, 2label={above,a\times 1}] {\Rightarrow}
node[between=x22 and x32 at .5, rotate=-135, 2label={below,a}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(1+-1} \def\v{-1,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)}]
\boundary
\draw[0cell]
($(x21)!.6!(x23)$) node (x22) {\T^2_{1,3,5}}
;
\draw[1cell]
(x12) edge node[swap,pos=.4] (arr1) {1\times\tensor} (x22)
(x21) edge node (arr2) {\tensor\times 1} (x22)
(x22) edge node[swap] (arr3) {\tensor} (x32)
;
\draw[2cell]
node[between=x11 and x22 at .5] {=}
node[between=x22 and x23 at .5, rotate=180, 2label={below,a}] {\Rightarrow}
node[between=x22 and x31 at .5, rotate=-135, 2label={below,a}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
\[\begin{tikzpicture}[xscale=2,yscale=2,baseline={(pi.base)}]
\def-1} \def\v{-1{.5}
\def1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3{2} \def.4{1.5} \def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{1} \def1.2{.7} \def\f{.5} \def1{1} \def1{1.5}
\draw[font=\Large] (0,0) node (pi) {$\stackrel{\mu}{\Rrightarrow}$};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x00) {\T^2_{[1,3]}}
($(x00)+(-1,-.4)$) node (x11) {\T^2_{[1,3]}}
($(x00)+(1,-.4)$) node (x12) {\T^2_{[1,3]}}
($(x00)+(0,-1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3)$) node (x2) {\T_{1,3}}
;
\draw[1cell]
(x00) edge[out=180,in=90] node[swap] (ileft) {1} (x11)
(x11) edge node[swap] {\tensor} (x2)
(x00) edge[out=0,in=90] node (iright) {1} (x12)
(x12) edge node {\tensor} (x2)
;
}
\begin{scope}[shift={(-1--1} \def\v{-1,1.2)}]
\boundary
\draw[0cell]
($(x00)+(0,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x1) {\T^3_{1,2,2,3}}
;
\draw[1cell]
(x00) edge node[pos=.5] {1\times 1_{X_2}\times 1} (x1)
(x1) edge node {\tensor\times 1} (x11)
(x1) edge node[swap] {1\times\tensor} (x12)
;
\draw[2cell]
node[between=ileft and x1 at .5, shift={(0,-.4)}, rotate=0, 2label={above,\rbdot\times 1}] {\Rightarrow}
node[between=x11 and x12 at .5, shift={(0,-.3)}, rotate=0, 2label={above,a}] {\Rightarrow}
node[between=x1 and iright at .5, shift={(0,-.4)}, rotate=0, 2label={above,1\times\ell}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(1+-1} \def\v{-1,1.2)}]
\boundary
\draw[2cell]
node[between=x11 and x12 at .5, 2label={above,1}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
\[\begin{tikzpicture}[xscale=2,yscale=2,baseline={(pi.base)}]
\def-1} \def\v{-1{.5}
\def1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3{.75} \def.4{.7} \def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{1} \def1.2{.5} \def\f{.5} \def1{1} \def1{1.5}
\draw[font=\Large] (0,0) node (pi) {$\stackrel{\lambda}{\Rrightarrow}$};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x00) {\T^3_{1,2,3,3}}
($(x00)+(-1,-1.2)$) node (x11) {\T^2_{[1,3]}}
($(x00)+(1,-1.2)$) node (x12) {\T^2_{[1,3]}}
($(x11)+(0,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x21) {\T_{1,3}}
($(x12)+(0,--1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x22) {\T_{1,3}}
;
\draw[1cell]
(x11) edge[bend left] node[pos=.3] {1_{X_3}\times 1 \times 1} (x00)
(x00) edge[bend left] node[pos=.7] {\tensor\times 1} (x12)
(x12) edge node {\tensor} (x22)
(x11) edge node[swap] {\tensor} (x21)
(x21) edge node (i) {1} (x22)
;
}
\begin{scope}[shift={(-1--1} \def\v{-1,.4)}]
\boundary
\draw[1cell]
(x11) edge[bend right] node (ii) {1} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .4, shift={(0,.3)}, rotate=-90, 2label={above,\ell\times 1}] {\Rightarrow}
node[between=i and ii at .5] {=}
;
\end{scope}
\begin{scope}[shift={(1+-1} \def\v{-1,.4)}]
\boundary
\draw[0cell]
($(x00)+(0,-1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3)$) node (c) {\T^2_{1,3,3}}
;
\draw[1cell]
(x00) edge node[pos=.4] {1\times\tensor} (c)
(x21) edge node[pos=.7] {1_{X_3}\times 1} (c)
(c) edge node {\tensor} (x22)
;
\draw[2cell]
node[between=c and x12 at .5, rotate=205, 2label={above,a}] {\Rightarrow}
node[between=x11 and c at .5, shift={(0,.3)}] {=}
node[between=c and i at .5, shift={(-.1,0)}, rotate=-90, 2label={above,\ell}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
\[\begin{tikzpicture}[xscale=2,yscale=2,baseline={(pi.base)}]
\def-1} \def\v{-1{.5}
\def1.8} \def\v{1} \def.4{.5} \def\w{1.4} \def\q{.3{.75} \def.4{.7} \def-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15{1} \def1.2{.5} \def\f{.5} \def1{1} \def1{1.5}
\draw[font=\Large] (0,0) node (pi) {$\stackrel{\rho}{\Rrightarrow}$};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x00) {\T^3_{1,1,2,3}}
($(x00)+(-1,-1.2)$) node (x11) {\T^2_{[1,3]}}
($(x00)+(1,-1.2)$) node (x12) {\T^2_{[1,3]}}
($(x11)+(0,-1)$) node (x21) {\T_{1,3}}
($(x12)+(0,-1)$) node (x22) {\T_{1,3}}
;
\draw[1cell]
(x11) edge[bend left] node[pos=.3] {1\times 1 \times 1_{X_1}} (x00)
(x00) edge[bend left] node[pos=.7] {1\times\tensor} (x12)
(x12) edge node {\tensor} (x22)
(x11) edge node[swap] {\tensor} (x21)
(x21) edge node (i) {1} (x22)
;
}
\begin{scope}[shift={(-1.5,.4)}]
\boundary
\draw[1cell]
(x11) edge[bend right] node (ii) {1} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .4, shift={(0,.3)}, rotate=90, 2label={below,1\times\rbdot}] {\Rightarrow}
node[between=i and ii at .5] {=}
;
\end{scope}
\begin{scope}[shift={(1.5,.4)}]
\boundary
\draw[0cell]
($(x00)+(0,-.75)$) node (c) {\T^2_{1,1,3}}
;
\draw[1cell]
(x00) edge node[pos=.4] {\tensor\times 1} (c)
(x21) edge node[pos=.7] {1\times 1_{X_1}} (c)
(c) edge node {\tensor} (x22)
;
\draw[2cell]
node[between=c and i at .5, shift={(-.1,0)}, rotate=90, 2label={below,\rbdot}] {\Rightarrow}
node[between=x11 and c at .5, shift={(0,.3)}] {=}
node[between=c and x12 at .5, rotate=25, 2label={below,a}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
When these structures are presented in such diagrammatic forms, it is important to be aware of the following points.
\begin{enumerate}
\item The domain of $\pi$ is \emph{not} a pasting diagram in some bicategory, but an iterated horizontal composite of three strong transformations, each with one whiskering as in \Cref{def:whiskering-transformation}. Therefore, \Cref{lax-tr-compose,pre-whiskering-transformation,post-whiskering-transformation} are needed to make sure that the domain of $\pi$ is well-defined.
\item Since the horizontal composite of lax/strong transformations is not strictly associative, one must specify an order for the iterated horizontal composite. This order is not displayed in the diagrammatic presentation of $\pi$.
\item Similar remarks apply to the codomain of $\pi$ and to $\mu$, $\lambda$, and $\rho$.\dqed
\end{enumerate}
\end{explanation}
\begin{explanation}[Associahedron]\label{expl:associahedron}\index{associahedron}
The inverse of the lax unity constraint $\tensorzero$ in \eqref{tensorzero-gf} appears six times, twice in each of the three tricategorical axioms. In the literature, these six $2$-cells involving $\tensorzeroinv$ are usually not displayed explicitly in the tricategorical axioms.
Furthermore, suppose
\begin{itemize}
\item the two sides of the non-abelian 4-cocycle condition are glued together along their common boundary, and
\item each of the two bi-gons involving $\tensorzeroinv$ is collapsed down to a single edge.
\end{itemize}
Then the resulting $3$-dimensional object has nine faces, as displayed below.
\[\begin{tikzpicture}[xscale=2.5,yscale=1.8,baseline={(eq.base)}]
\def\a{.7} \def\b{1.25} \def\c{2} \def\d{1}
\def\t{1.8} \def\v{1} \def\s{.5} \def\w{1.4} \def\q{.3}
\draw[0cell]
(0,0) node (x11) {(k(j(hg)))f}
($(x11)+(-\b,-\q)$) node (x21) {(k((jh)g))f}
($(x11)+(\b,-\q)$) node (x22) {k((j(hg))f)}
($(x11)+(-2,-1)$) node (x31) {((k(jh))g)f}
($(x11)+(2,-1)$) node (x33) {k(j((hg)f))}
($(x11)+(-2,-2.5)$) node (x41) {(((kj)h)g)f}
($(x11)+(2,-2.5)$) node (x44) {k(j(h(gf)))}
($(x11)+(-.7,-3)$) node (x51) {((kj)h)(gf)}
($(x11)+(.7,-3)$) node (x52) {(kj)(h(gf))}
($(x11)+(0,-1)$) node (x32) {k(((jh)g)f)}
($(x11)+(-.7,-2.3)$) node (x42) {(k(jh))(gf)}
($(x11)+(.35,-1.8)$) node (x43) {k((jh)(gf))}
($(x11)+(-\d,-1.2)$) node (b32) {((kj)(hg))f}
($(x11)+(\d,-1.2)$) node (b43) {(kj)((hg)f)}
;
\draw[1cell]
(x41) edge node {(a1)1} (x31)
(x31) edge node {a1} (x21)
(x21) edge node[pos=.7] {(1a)1} (x11)
(x11) edge node {a} (x22)
(x22) edge node {1a} (x33)
(x33) edge node {1(1a)} (x44)
(x41) edge node[swap] {a} (x51)
(x51) edge node {a} (x52)
(x52) edge node[swap] {a} (x44)
(x41) edge[dashed] (b32)
(b32) edge[dashed] (x11)
(b32) edge[dashed] (b43)
(b43) edge[dashed, shorten <=-.2cm, shorten >=-.15cm] (x33)
(b43) edge[dashed] (x52)
(x21) edge[cross line] (x32)
(x32) edge (x22)
(x32) edge[cross line] (x43)
(x31) edge[cross line] (x42)
(x42) edge (x43)
(x43) edge[cross line] (x44)
(x51) edge (x42)
;
\end{tikzpicture}\]
This is the Stasheff associahedron \cite{stasheff} $K_5$\index{K5@$K_5$} that describes ways to move parentheses from $(((kj)h)g)f$ to $k(j(h(gf)))$. As drawn above, its front and back faces correspond to the top and the bottom sides of the non-abelian 4-cocycle condition.
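As a sanity check on the picture (this count is not part of the axiom itself, but is standard for $K_5$), the associahedron has $14$ vertices, one for each complete parenthesization of five letters (the Catalan number $C_4$), $21$ edges, one for each single application of associativity, and the $9$ faces mentioned above, namely $6$ pentagons and $3$ squares. These counts satisfy Euler's formula:
\[V - E + F = 14 - 21 + 9 = 2.\]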
\end{explanation}
\begin{explanation}[Non-abelian 4-cocycle condition]\label{expl:nb4}
In the top side of this axiom, the $2$-cell $1\pi$ is interpreted as follows. First, $\pi$ has a component $2$-cell
\[\begin{tikzcd}
\big[\big(1_j \tensor a_{h,g,f}\big) a_{j,h\tensor g,f}\big] \big(a_{j,h,g}\tensor 1_f\big) \ar{r}{\pi_{j,h,g,f}} & a_{j,h,g\tensor f} a_{j\tensor h,g,f}
\end{tikzcd}\]
in $\T_{1,5}\big(((jh)g)f, j(h(gf))\big)$. Then $1\pi$ is defined as the vertical composite $2$-cell
\begin{equation}\label{one-pi-iicell}
\begin{tikzpicture}[xscale=4.5,yscale=1.5,baseline={(x3.base)}]
\def\u{-1} \def\v{-1}
\draw[0cell]
(0,0) node (x1) {\big[\big(1_k \tensor (1_j \tensor a_{h,g,f})\big) \big(1_k \tensor a_{j,h\tensor g, f}\big)\big] \big[1_k \tensor (a_{j,h,g}\tensor 1_f)\big]}
($(x1)+(0,-1)$) node (x2) {\big[(1_k 1_k) \tensor \big((1_j \tensor a_{h,g,f}) a_{j,h\tensor g, f}\big)\big] \big[1_k \tensor (a_{j,h,g}\tensor 1_f)\big]}
($(x2)+(0,-1)$) node (x3) {\big[(1_k 1_k)1_k\big] \tensor \big[\big((1_j \tensor a_{h,g,f})a_{j,h\tensor g, f}\big) (a_{j,h,g}\tensor 1_f)\big]}
($(x3)+(0,-1)$) node (x4) {(1_k 1_k) \tensor \big(a_{j,h,g\tensor f} a_{j\tensor h,g,f}\big)}
($(x4)+(0,-1)$) node (x5) {\big(1_k \tensor a_{j,h,g\tensor f}\big) \big(1_k \tensor a_{j\tensor h,g,f}\big)}
($(x1)+(-1,0)$) node[inner sep=0pt] (a) {}
($(x5)+(-1,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\tensortwo * 1} (x2)
(x2) edge node {\tensortwo} (x3)
(x3) edge node {(\ell_{1_k}*1)\tensor \pi_{j,h,g,f}} (x4)
(x4) edge node {\tensortwoinv} (x5)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[pos=.7]{1\pi} (b)
(b) edge[shorten <=-1pt] (x5)
;
\end{tikzpicture}
\end{equation}
in $\T_{1,6}\big(k(((jh)g)f), k(j(h(gf)))\big)$, with $\tensortwoinv$ the inverse of $\tensortwo$ in \eqref{tricat-tensortwo-component}.
Similarly, the $2$-cell $\pi 1$ in the bottom side of the non-abelian 4-cocycle condition is defined as the vertical composite $2$-cell
\begin{equation}\label{pi-one-iicell}
\begin{tikzpicture}[xscale=4.5,yscale=1.5,baseline={(x3.base)}]
\def\u{-1} \def\v{-1}
\draw[0cell]
(0,0) node (x1) {\big[\big((1_k \tensor a_{j,h,g})\tensor 1_f\big) \big(a_{k,j\tensor h,g}\tensor 1_f\big)\big] \big[(a_{k,j,h}\tensor 1_g)\tensor 1_f\big]}
($(x1)+(0,-1)$) node (x2) {\big[\big((1_k \tensor a_{j,h,g}) a_{k,j\tensor h,g}\big)\tensor (1_f 1_f)\big] \big[(a_{k,j,h}\tensor 1_g)\tensor 1_f\big]}
($(x2)+(0,-1)$) node (x3) {\big[\big((1_k \tensor a_{j,h,g}) a_{k,j\tensor h,g}\big)(a_{k,j,h}\tensor 1_g)\big] \tensor \big[(1_f 1_f)1_f\big]}
($(x3)+(0,-1)$) node (x4) {\big(a_{k,j,h\tensor g} a_{k\tensor j,h,g}\big) \tensor (1_f1_f)}
($(x4)+(0,-1)$) node (x5) {\big(a_{k,j,h\tensor g}\tensor 1_f\big) \big(a_{k\tensor j,h,g}\tensor 1_f\big)}
($(x1)+(-1,0)$) node[inner sep=0pt] (a) {}
($(x5)+(-1,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\tensortwo * 1} (x2)
(x2) edge node {\tensortwo} (x3)
(x3) edge node {\pi_{k,j,h,g} \tensor (\ell_{1_f}*1)} (x4)
(x4) edge node {\tensortwoinv} (x5)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[pos=.7]{\pi 1} (b)
(b) edge[shorten <=-1pt] (x5)
;
\end{tikzpicture}
\end{equation}
in $\T_{1,6}\big((((kj)h)g)f, (k(j(hg)))f\big)$.
\end{explanation}
\begin{explanation}[Left normalization]\label{expl:left-normalization}
This axiom has one copy of the left $2$-unitor $\lambda$. Since all the $2$-cells involved are invertible, $\lambda$ is uniquely determined by the rest of the data of a tricategory, minus $\rho$, via the left normalization axiom with $h=1_{X_3}$.
Moreover, $1\lambda$ and $\mu 1$ are defined as the composite $2$-cells
\begin{equation}\label{one-lambda-iicell}
\begin{tikzpicture}[xscale=2.5,yscale=1.5,baseline={(x2.base)}]
\def\u{-1} \def\v{-1}
\draw[0cell]
(0,0) node (x1) {1_h \tensor (\ell_g \tensor 1_f)}
($(x1)+(0,-1)$) node (x2) {(1_h 1_h) \tensor \big(\ell_{g\tensor f} a_{1,g,f}\big)}
($(x2)+(0,-1)$) node (x3) {\big(1_h\tensor \ell_{g\tensor f}\big) \big(1_h \tensor a_{1,g,f}\big)}
($(x1)+(-1,0)$) node[inner sep=0pt] (a) {}
($(x3)+(-1,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\ellinv_{1_h} \tensor \lambda_{g,f}} (x2)
(x2) edge node {\tensortwoinv} (x3)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[swap,pos=.5]{1\lambda} (b)
(b) edge[shorten <=-1pt] (x3)
;
\end{tikzpicture}
\end{equation}
\begin{equation}\label{mu-one-iicell}
\begin{tikzpicture}[xscale=4,yscale=1.5,baseline={(x3.base)}]
\def\u{-1} \def\v{-1}
\draw[0cell]
(0,0) node (x1) {\big[\big((1_h \tensor \ell_g) \tensor 1_f\big) \big(a_{h,1,g} \tensor 1_f\big)\big] \big[(\rbdot_h\tensor 1_g) \tensor 1_f\big]}
($(x1)+(0,-1)$) node (x2) {\big[\big((1_h\tensor \ell_g) a_{h,1,g}\big) \tensor (1_f 1_f)\big] \big[(\rbdot_h\tensor 1_g) \tensor 1_f\big]}
($(x2)+(0,-1)$) node (x3) {\big[\big((1_h\tensor \ell_g) a_{h,1,g}\big) (\rbdot_h \tensor 1_g)\big] \tensor \big[(1_f 1_f)1_f\big]}
($(x3)+(0,-1)$) node (x4) {1_{h\tensor g} \tensor 1_f}
($(x1)+(-1,0)$) node[inner sep=0pt] (a) {}
($(x4)+(-1,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\tensortwo * 1} (x2)
(x2) edge node {\tensortwo} (x3)
(x3) edge node {\mu_{h,g} \tensor [\ell_{1_f}(\ell_{1_f}*1)]} (x4)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[pos=.5]{\mu 1} (b)
(b) edge[shorten <=-1pt] (x4)
;
\end{tikzpicture}
\end{equation}
in $\T_{1,4}\big(h((1g)f),h(gf)\big)$ and $\T_{1,4}\big((hg)f,(hg)f\big)$, respectively.
\end{explanation}
\begin{explanation}[Right normalization]\label{expl:right-normalization}
Similar to the left $2$-unitor, the right $2$-unitor $\rho$ is uniquely determined by the rest of the data of a tricategory, minus $\lambda$, via the right normalization axiom with $f=1_{X_1}$.
Moreover, $\rho 1$ and $1\mu$ are defined as the composite $2$-cells
\begin{equation}\label{rho-one-iicell}
\begin{tikzpicture}[xscale=2.5,yscale=1.5,baseline={(x2.base)}]
\def\u{-1} \def\v{-1}
\draw[0cell]
(0,0) node (x1) {(1_h \tensor \rbdot_g) \tensor 1_f}
($(x1)+(0,-1)$) node (x2) {\big(a_{h,g,1} \rbdot_{h\tensor g}\big) \tensor (1_f 1_f)}
($(x2)+(0,-1)$) node (x3) {\big(a_{h,g,1} \tensor 1_f\big) \big(\rbdot_{h\tensor g} \tensor 1_f\big)}
($(x1)+(-1,0)$) node[inner sep=0pt] (a) {}
($(x3)+(-1,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\rho_{h,g} \tensor \ellinv_{1_f}} (x2)
(x2) edge node {\tensortwoinv} (x3)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[swap,pos=.5]{\rho 1} (b)
(b) edge[shorten <=-1pt] (x3)
;
\end{tikzpicture}
\end{equation}
\begin{equation}\label{one-mu-iicell}
\begin{tikzpicture}[xscale=4,yscale=1.5,baseline={(x3.base)}]
\def\u{-1} \def\v{-1}
\draw[0cell]
(0,0) node (x1) {\big[\big(1_h \tensor (1_g\tensor \ell_f)\big) \big(1_h \tensor a_{g,1,f}\big)\big] \big[1_h \tensor (\rbdot_g\tensor 1_f)\big]}
($(x1)+(0,-1)$) node (x2) {\big[(1_h1_h) \tensor \big((1_g \tensor \ell_f)a_{g,1,f}\big)\big] \big[1_h \tensor (\rbdot_g\tensor 1_f)\big]}
($(x2)+(0,-1)$) node (x3) {\big[(1_h1_h)1_h\big] \tensor \big[\big((1_g \tensor \ell_f)a_{g,1,f}\big) (\rbdot_g \tensor 1_f) \big]}
($(x3)+(0,-1)$) node (x4) {1_h\tensor 1_{g\tensor f}}
($(x1)+(-1,0)$) node[inner sep=0pt] (a) {}
($(x4)+(-1,0)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\tensortwo * 1} (x2)
(x2) edge node {\tensortwo} (x3)
(x3) edge node {[\ell_{1_h}(\ell_{1_h}*1)] \tensor \mu_{g,f}} (x4)
(x1) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node[pos=.5]{1\mu} (b)
(b) edge[shorten <=-1pt] (x4)
;
\end{tikzpicture}
\end{equation}
in $\T_{1,4}\big((hg)f,(h(g1))f\big)$ and $\T_{1,4}\big(h(gf),h(gf)\big)$, respectively.
\end{explanation}
\begin{example}[Bicategories]\label{ex:bicat-as-tricat}\index{bicategory!as a tricategory}
Each bicategory $(\B,1,c,a,\ell,r)$ yields a \index{tricategory!locally discrete}\emph{locally discrete} tricategory $\T$ with the following structures.
\begin{description}
\item[Objects] $\T_0=\B_0$, the class of objects in $\B$.
\item[Hom bicategories] For objects $X_1,X_2\in\T$, the hom bicategory $\T_{1,2}$ is the hom-category $\B(X_1,X_2)$, regarded as a locally discrete bicategory as in \Cref{ex:category-as-bicat}.
\item[Composition] For objects $X_1,X_2,X_3\in\T$, the composition $(\tensor,\tensortwo,\tensorzero)$ is the horizontal composition
\[\begin{tikzcd}
\T^2_{[1,3]} = \B(X_2,X_3) \times \B(X_1,X_2) \ar{r}{c} & \B(X_1,X_3) = \T_{1,3}
\end{tikzcd}\]
in $\B$, regarded as a strict functor between locally discrete bicategories as in \Cref{ex:functor-laxfunctor}.
\item[Identities] For each object $X\in\T$, the identity of $X$ in $\T$ is the identity of $X$ in $\B$, again regarded as a strict functor between locally discrete bicategories.
\item[Associator] The associator of $\T$ is the associator $a$ of $\B$, regarded as a strict transformation as in \Cref{ex:nt-lax-transformation}. The chosen right adjoint $\abdot$ of $a$ is the inverse of the natural isomorphism $a$, regarded as a strict transformation. The unit $1 \to \abdot a$ and the counit $1\to a\abdot$ are both identity modifications.
\item[Unitors] In exactly the same manner, the left unitor $\ell$ and the right unitor $r$ of $\T$ are those of $\B$, regarded as strict transformations. The chosen right adjoints $\ellbdot$ and $\rbdot$ are the inverses of the natural isomorphisms $\ell$ and $r$, respectively.
\item[Pentagonator] Each component \eqref{pentagonator-component} of the pentagonator $\pi$ of $\T$ is an identity $2$-cell. This is well-defined by the pentagon axiom \eqref{bicat-pentagon} in $\B$.
\item[$2$-Unitors] Each component \eqref{mid-iiunitor-component} of the middle $2$-unitor $\mu$ of $\T$ is an identity $2$-cell, which is well-defined by the unity axiom \eqref{bicat-unity} in $\B$. Similarly, the left $2$-unitor $\lambda$ and the right $2$-unitor $\rho$ of $\T$ are componentwise \eqref{left-right-iiunitor-component} identity $2$-cells. They are well-defined by the left and right unity properties in \Cref{bicat-left-right-unity}.
\end{description}
The three tricategorical axioms are satisfied because in each of the six pasting diagrams, every $2$-cell is an identity $2$-cell.
\end{example}
\section{Composites of Transformations and Modifications}
\label{sec:composite-tr-mod}
In this section we begin the construction of the tricategory of bicategories by defining $\tensor$ and $\tensortwo$ that form a part of a pseudofunctor
\[\begin{tikzcd}[column sep=large]
\Bicatps(\B,\C)\times\Bicatps(\A,\B) \ar{r}{(\tensor,\tensortwo,\tensorzero)} & \Bicatps(\A,\C)
\end{tikzcd}\]
as in \eqref{tricat-composition}. The plan for this section is as follows. We:
\begin{itemize}
\item define $\tensor$ in \Cref{def:transformation-tensor};
\item define $\tensortwo$ in \Cref{def:tensortwo};
\item prove that $\tensortwo$ is a natural isomorphism in \Cref{tensortwo-modification,tensortwo-iicell-natural};
\item prove the lax associativity axiom \eqref{f2-bicat} in \Cref{tensortwo-lax-associative}.
\end{itemize}
\begin{convention}\label{conv:bicategory-index}
Suppose $\A_i$ is a bicategory for $i \in \{1,2,\ldots\}$. Define the bicategory
\[\bicata_{i,j} = \Bicatps(\A_i,\A_j)\]
as in \Cref{subbicat-pseudofunctor}. This notation will be used with \eqref{tricategory-product-abbreviation}. For example, we have
\[\begin{split}
\bicata^2_{[1,3]} &= \bicata_{2,3} \times \bicata_{1,2}\\
&= \Bicatps(\A_2,\A_3)\times \Bicatps(\A_1,\A_2),\\
\bicata^3_{[1,4]} & = \bicata_{3,4} \times \bicata_{2,3} \times \bicata_{1,2}\\
&= \Bicatps(\A_3,\A_4) \times \Bicatps(\A_2,\A_3)\times \Bicatps(\A_1,\A_2),
\end{split}\]
and $\bicata^2_{1,2,4} = \bicata_{2,4}\times \bicata_{1,2}$.
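In general -- this is simply an unpacking of \eqref{tricategory-product-abbreviation} for consecutive indices, not new notation -- the abbreviation expands as
\[\bicata^n_{[1,n+1]} = \bicata_{n,n+1} \times \bicata_{n-1,n} \times \cdots \times \bicata_{1,2}.\]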
\end{convention}
\begin{definition}\label{def:transformation-tensor}
Suppose given
\begin{itemize}
\item bicategories $\A_1,\A_2$, and $\A_3$,
\item lax functors $F,F',G$, and $G'$ with $\Gptwo$ invertible,
\item lax transformations $\alpha, \alpha' : F \to F'$ and $\beta, \beta' : G\to G'$, and
\item modifications $\Gamma : \alpha \to \alpha'$ and $\Sigma : \beta \to \beta'$,
\end{itemize}
as displayed below.
\[\begin{tikzpicture}[xscale=3, yscale=1.4]
\def1{1}
\draw[0cell]
(0,0) node (x11) {\A_1}
($(x11)+(1,0)$) node (x12) {\A_2}
($(x12)+(1,0)$) node (x13) {\A_3}
;
\draw[1cell]
(x11) edge[bend left=75] node {F} (x12)
(x11) edge[bend right=75] node[swap] {F'} (x12)
(x12) edge[bend left=75] node {G} (x13)
(x12) edge[bend right=75] node[swap] {G'} (x13)
;
\draw[2cell]
node[between=x11 and x12 at .35, rotate=-90, 2label={below,\alpha}] {\Rightarrow}
node[between=x11 and x12 at .65, rotate=-90, 2label={above,\alpha'}] {\Rightarrow}
node[between=x11 and x12 at .5, rotate=0, 2label={above,\Gamma}] {\Rrightarrow}
node[between=x12 and x13 at .35, rotate=-90, 2label={below,\beta}] {\Rightarrow}
node[between=x12 and x13 at .65, rotate=-90, 2label={above,\beta'}] {\Rightarrow}
node[between=x12 and x13 at .5, rotate=0, 2label={above,\Sigma}] {\Rrightarrow}
;
\end{tikzpicture}\]
\begin{enumerate}
\item Define the \emph{composite}\label{notation:gtensorf}\index{tricategory of bicategories!composition}\index{composition!tricategory!of bicategories}
\[G\tensor F = GF : \A_1 \to \A_3,\]
which is a lax functor by \Cref{lax-functors-compose}. It is a pseudofunctor if both $F$ and $G$ are so.
\item Define the \emph{composite}\index{composition!lax transformation!in tricategory of bicategories}\index{lax transformation!composition in tricategory of bicategories} $\beta\tensor\alpha$ as the horizontal composite in \Cref{def:lax-tr-comp}
\begin{equation}\label{transformation-composite}
\beta\tensor\alpha = (G'\whis\alpha)(\beta\whis F) : GF \to G'F'
\end{equation}
of
\begin{itemize}
\item the pre-whiskering $\beta\whis F : GF \to G'F$ in \Cref{pre-whiskering-transformation} and
\item the post-whiskering $G'\whis \alpha : G'F \to G'F'$ in \Cref{post-whiskering-transformation}.
\end{itemize}
\item Define the \emph{composite}\label{notation:sigmatensorgamma}\index{composition!modification!in tricategory of bicategories}
\[\begin{tikzcd}
\beta\tensor\alpha \ar{r}{\Sigma\tensor\Gamma} & \beta'\tensor\alpha'\end{tikzcd}\]
as having the horizontal composite component $2$-cell
\begin{equation}\label{mod-composite-component}
(\Sigma\tensor\Gamma)_X = (G'\Gamma_X) * \Sigma_{FX} \in \A_3\big((\beta\tensor\alpha)_X,(\beta'\tensor\alpha')_X\big)
\end{equation}
for each object $X\in \A_1$, as displayed below.
\[\begin{tikzpicture}[xscale=2.8, yscale=1.4]
\def1{1}
\draw[0cell]
(0,0) node (x11) {GFX}
($(x11)+(1,0)$) node (x12) {G'FX}
($(x12)+(1,0)$) node (x13) {G'F'X}
;
\draw[1cell]
(x11) edge[bend left=45] node {\beta_{FX}} (x12)
(x11) edge[bend right=45] node[swap] {\beta'_{FX}} (x12)
(x12) edge[bend left=45] node {G'\alpha_X} (x13)
(x12) edge[bend right=45] node[swap] {G'\alpha'_X} (x13)
;
\draw[2cell]
node[between=x11 and x12 at .4, rotate=-90, 2label={above,\Sigma_{FX}}] {\Rightarrow}
node[between=x12 and x13 at .35, rotate=-90, 2label={above,G'\Gamma_X}] {\Rightarrow}
;
\end{tikzpicture}\]
\end{enumerate}
This finishes the definitions of the composites $\beta\tensor\alpha$ and $\Sigma\tensor\Gamma$.
\end{definition}
\begin{explanation}\label{expl:composite-tr-mod}
By \Cref{lax-tr-compose,pre-whiskering-transformation,post-whiskering-transformation}, the composite
\[\beta\tensor\alpha = (G'\whis\alpha)(\beta\whis F) : GF\to G'F'\]
is a lax transformation, which is strong if both $\alpha$ and $\beta$ are strong. It may be visualized as follows.
\[\begin{tikzpicture}[xscale=2.2,yscale=1.5]
\def\h{1} \def\v{-.6}
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x11) {\A_1}
($(x11)+(1,0)$) node (x12) {\A_2}
($(x12)+(1,0)$) node (x13) {\A_3}
;
\draw[1cell]
(x11) edge[bend left=45] node (f) {F} (x12)
(x12) edge[bend right=45] node[swap] (gp) {G'} (x13)
;}
\begin{scope}
\boundary
\draw[0cell]
($(x11)+(-1/2,-.3)$) node {\beta\tensor\alpha}
($(x13)+(1/2,0)$) node (x14) {\beta\whis F}
;
\draw[1cell]
(x12) edge[bend left=45] node (g) {G} (x13)
;
\draw[2cell]
node[between=x12 and x13 at .45, rotate=-90, 2label={above,\beta}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(0,-.6)}]
\boundary
\draw[0cell]
($(x13)+(1/2,0)$) node (x14) {G'\whis\alpha}
;
\draw[1cell]
(x11) edge[bend right=45] node[swap] (fp) {F'} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .45, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
By \Cref{def:lax-tr-comp}:
\begin{enumerate}
\item For each object $X\in\A_1$, its component $1$-cell is the horizontal composite
\begin{equation}\label{composite-tr-icell}
\begin{tikzpicture}[xscale=2.5, yscale=1.4, baseline={(x11.base)}]
\def\h{1} \def\v{.5}
\draw[0cell]
(0,0) node (x11) {GFX}
($(x11)+(1,0)$) node (x12) {G'FX}
($(x12)+(1,0)$) node (x13) {G'F'X}
($(x11)+(0,.5)$) node[inner sep=0pt] (a) {}
($(x13)+(0,.5)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x11) edge node {\beta_{FX}} (x12)
(x12) edge node {G'\alpha_X} (x13)
(x11) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node {(\beta\tensor\alpha)_X} (b)
(b) edge[shorten <=-1pt] (x13)
;
\end{tikzpicture}
\end{equation}
in $\A_3$. So the component $2$-cell $(\Sigma\tensor\Gamma)_X$ in \eqref{mod-composite-component} is well-defined.
\item For each $1$-cell $f : X \to Y$ in $\A_1$, its component $2$-cell $(\beta\tensor\alpha)_f$ is the composite of the pasting diagram
\begin{equation}\label{composite-tr-iicell}
\begin{tikzpicture}[xscale=3, yscale=2.3, baseline={(f.base)}]
\def\h{1} \def\i{1.3} \def\v{-1} \def\s{.4}
\draw[0cell]
(0,0) node (x11) {GFX}
($(x11)+(1,0)$) node (x12) {G'FX}
($(x12)+(\i,0)$) node (x13) {G'F'X}
($(x11)+(0,-1)$) node (x21) {GFY}
($(x12)+(0,-1)$) node (x22) {G'FY}
($(x13)+(0,-1)$) node (x23) {G'F'Y}
($(x11)+(0,.4)$) node[inner sep=0pt] (a) {}
($(x13)+(0,.4)$) node[inner sep=0pt] (b) {}
($(x21)+(0,-.4)$) node[inner sep=0pt] (c) {}
($(x23)+(0,-.4)$) node[inner sep=0pt] (d) {}
;
\draw[1cell]
(x11) edge node {\beta_{FX}} (x12)
(x12) edge node {G'\alpha_X} (x13)
(x11) edge node[swap] (f) {GFf} (x21)
(x12) edge node[swap] {G'Ff} (x22)
(x12) edge[bend left=20] (x23)
(x12) edge[bend right=20] (x23)
(x13) edge node {G'F'f} (x23)
(x21) edge node[swap] {\beta_{FY}} (x22)
(x22) edge node[swap] {G'\alpha_Y} (x23)
(x11) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node{(\beta\tensor\alpha)_X} (b)
(b) edge[shorten <=-1pt] (x13)
(x21) edge[-,shorten >=-1pt] (c)
(c) edge[-,shorten <=-1pt,shorten >=-1pt] node{(\beta\tensor\alpha)_Y} (d)
(d) edge[shorten <=-1pt] (x23)
;
\draw[2cell]
node[between=x11 and x22 at .4, rotate=-135, 2label={above,\beta_{Ff}}] {\Rightarrow}
node[between=x13 and x22 at .25, rotate=-135, 2label={above,\Gptwo}] {\Rightarrow}
node[between=x13 and x22 at .5, rotate=-135, 2label={above,G'\alphaf}] {\Rightarrow}
node[between=x13 and x22 at .75, rotate=-135, 2label={above,\Gptwoinv}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
in $\A_3(GFX,G'F'Y)$, with the top and bottom rows bracketed as indicated, and $\alpha_f : (F'f)\alpha_X \to \alpha_Y(Ff)$ a component $2$-cell of $\alpha$. In other words, it is the vertical composite
\begin{equation}\label{composite-tr-iicell-eq}
\begin{tikzcd}[column sep=large]
(G'F'f)\big[(G'\alpha_X)\beta_{FX}\big] \ar{r}{(\beta\tensor\alpha)_f} \ar{d}[swap]{\ainv} & \big[(G'\alpha_Y)\beta_{FY}\big](GFf)\\
\big[(G'F'f)(G'\alpha_X)\big]\beta_{FX} \ar{d}[swap]{\Gptwo *1} & (G'\alpha_Y)\big[\beta_{FY}(GFf)\big] \ar{u}[swap]{\ainv}\\
G'\big((F'f)\alpha_X\big)\beta_{FX} \ar{d}[swap]{(G'\alpha_f)*1} & (G'\alpha_Y)\big[(G'Ff)\beta_{FX}\big] \ar{u}[swap]{1*\beta_{Ff}}\\
G'\big(\alpha_Y Ff\big)\beta_{FX} \ar{r}{\Gptwoinv*1} & \big[(G'\alpha_Y)(G'Ff)\big]\beta_{FX} \ar{u}[swap]{a}
\end{tikzcd}
\end{equation}
of seven $2$-cells.\dqed
\end{enumerate}
\end{explanation}
\begin{lemma}\label{tensor-modification}
In \Cref{def:transformation-tensor},
\[\Sigma\tensor\Gamma : \beta\tensor\alpha \to \beta'\tensor\alpha'\]
is a modification, which is invertible if both $\Sigma$ and $\Gamma$ are invertible.
\end{lemma}
\begin{proof}
By \eqref{mod-composite-component}, \eqref{composite-tr-icell}, \eqref{composite-tr-iicell-eq}, and \Cref{conv:functor-subscript}, the modification axiom \eqref{modification-axiom-pasting} for $\Sigma\tensor\Gamma$ means the commutativity around the boundary of the following diagram in $\A_3(GFX,G'F'Y)$ for each $1$-cell $f : X \to Y$ in $\A_1$.
\[\begin{tikzpicture}[xscale=4.5, yscale=1.4]
\def\h{1} \def\v{-1} \def\q{15}
\draw[0cell]
(0,0) node (x11) {f_{G'F'}(\alpha_{X,G'}\beta_{FX})}
($(x11)+(1,0)$) node (x12) {f_{G'F'}(\alpha'_{X,G'}\beta'_{FX})}
($(x12)+(1,0)$) node (x13) {(f_{G'F'}\alpha'_{X,G'})\beta'_{FX}}
($(x11)+(0,-1)$) node (x21) {(f_{G'F'}\alpha_{X,G'})\beta_{FX}}
($(x13)+(0,-1)$) node (x23) {(f_{F'}\alpha'_X)_{G'}\beta'_{FX}}
($(x21)+(0,-1)$) node (x31) {(f_{F'}\alpha_X)_{G'}\beta_{FX}}
($(x23)+(0,-1)$) node (x33) {(\alpha'_Y f_F)_{G'}\beta'_{FX}}
($(x31)+(0,-1)$) node (x41) {(\alpha_Y f_F)_{G'}\beta_{FX}}
($(x33)+(0,-1)$) node (x43) {(\alpha'_{Y,G'} f_{G'F})\beta'_{FX}}
($(x41)+(0,-1)$) node (x51) {(\alpha_{Y,G'} f_{G'F})\beta_{FX}}
($(x43)+(0,-1)$) node (x53) {\alpha'_{Y,G'} (f_{G'F}\beta'_{FX})}
($(x51)+(0,-1)$) node (x61) {\alpha_{Y,G'} (f_{G'F}\beta_{FX})}
($(x53)+(0,-1)$) node (x63) {\alpha'_{Y,G'} (\beta'_{FY} f_{GF})}
($(x61)+(0,-1)$) node (x71) {\alpha_{Y,G'} (\beta_{FY} f_{GF})}
($(x71)+(1,0)$) node (x72) {(\alpha_{Y,G'} \beta_{FY})f_{GF}}
($(x63)+(0,-1)$) node (x73) {(\alpha'_{Y,G'} \beta'_{FY}) f_{GF}}
;
\draw[1cell]
(x11) edge node {1*(\Sigma\tensor\Gamma)_X} (x12)
(x12) edge node {a^{-1}} (x13)
(x11) edge node[swap] {a^{-1}} (x21)
(x21) edge[bend right=\q] node {(1*\Gamma_{X,G'})*\Sigma_{FX}} (x13)
(x13) edge node {\Gptwo*1} (x23)
(x21) edge node[swap] {\Gptwo*1} (x31)
(x31) edge[bend right=\q] node {(1*\Gamma_X)_{G'}*\Sigma_{FX}} (x23)
(x23) edge node {\alpha'_{f,G'}*1} (x33)
(x31) edge node[swap] {\alpha_{f,G'}*1} (x41)
(x41) edge[bend right=\q] node {(\Gamma_Y*1)_{G'}*\Sigma_{FX}} (x33)
(x33) edge node {\Gptwoinv*1} (x43)
(x41) edge node[swap] {\Gptwoinv*1} (x51)
(x51) edge[bend right=\q] node {(\Gamma_{Y,G'}*1)*\Sigma_{FX}} (x43)
(x43) edge node {a} (x53)
(x51) edge node[swap] {a} (x61)
(x61) edge[bend right=\q] node {\Gamma_{Y,G'}*(1*\Sigma_{FX})} (x53)
(x53) edge node {1*\beta'_{f_F}} (x63)
(x61) edge node[swap] {1*\beta_{f_F}} (x71)
(x71) edge[bend right=\q] node {\Gamma_{Y,G'}*(\Sigma_{FY}*1)} (x63)
(x63) edge node {a^{-1}} (x73)
(x71) edge node[swap] {a^{-1}} (x72)
(x72) edge node[swap] {(\Sigma\tensor\Gamma)_Y *1} (x73)
;
\end{tikzpicture}\]
From top to bottom:
\begin{itemize}
\item The top, the fifth, and the bottom sub-diagrams are commutative by the naturality \eqref{associator-naturality} of the associator $a$ in $\A_3$.
\item The second and the fourth sub-diagrams are commutative by the naturality \eqref{f2-bicat-naturality} of $\Gptwo$.
\item The third sub-diagram is commutative by the modification axiom \eqref{modification-axiom-pasting} for $\Gamma$ and the functoriality of the local functors of $G'$.
\item The sixth sub-diagram is commutative by the modification axiom \eqref{modification-axiom-pasting} for $\Sigma$.
\end{itemize}
This shows that $\Sigma\tensor\Gamma$ is a modification.
Finally, if both $\Sigma$ and $\Gamma$ are invertible modifications, then each component $2$-cell $(\Sigma\tensor\Gamma)_X$ in \eqref{mod-composite-component} is the horizontal composite of two invertible $2$-cells in $\A_3$, which is invertible by \Cref{hcomp-invertible-2cells}.
\end{proof}
For the rest of this chapter, we consider the assignment
\[\begin{tikzcd}
\Bicatps(\A_2,\A_3)\times\Bicatps(\A_1,\A_2) \ar{r}{\tensor} & \Bicatps(\A_1,\A_3),
\end{tikzcd}\]
which is well-defined by \Cref{lax-functors-compose,expl:composite-tr-mod,tensor-modification}. Next we define the lax functoriality constraint $\tensortwo$ corresponding to $\tensor$.
\begin{definition}\label{def:tensortwo}
Suppose given
\begin{itemize}
\item bicategories $\A_1,\A_2$, and $\A_3$,
\item pseudofunctors $F,F',F'',G,G'$, and $G''$, and
\item strong transformations $\alpha, \alpha'$, $\beta$, and $\beta'$,
\end{itemize}
as displayed below.
\[\begin{tikzpicture}[xscale=2.3, yscale=1.4]
\def\q{75}
\draw[0cell]
(0,0) node (x11) {\A_1}
($(x11)+(1,0)$) node (x12) {\A_2}
($(x12)+(1,0)$) node (x13) {\A_3}
;
\draw[1cell]
(x11) edge[bend left=\q] node (f) {F} (x12)
(x11) edge node[pos=.2] {F'} (x12)
(x11) edge[bend right=\q] node[swap] (fpp) {F''} (x12)
(x12) edge[bend left=\q] node (g) {G} (x13)
(x12) edge node[pos=.2] {G'} (x13)
(x12) edge[bend right=\q] node[swap] (gpp) {G''} (x13)
;
\draw[2cell]
node[between=f and fpp at .35, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
node[between=f and fpp at .65, rotate=-90, 2label={above,\alpha'}] {\Rightarrow}
node[between=g and gpp at .35, rotate=-90, 2label={above,\beta}] {\Rightarrow}
node[between=g and gpp at .65, rotate=-90, 2label={above,\beta'}] {\Rightarrow}
;
\end{tikzpicture}\]
Define
\begin{equation}\label{tensortwo-component}
\begin{tikzcd}
(\beta'\tensor\alpha')(\beta\tensor\alpha) \ar{r}{\tensortwo} & (\beta'\beta)\tensor(\alpha'\alpha)
\end{tikzcd}
\end{equation}
with the following vertical composite component $2$-cell
\begin{equation}\label{tensortwo-x}
\begin{tikzpicture}[xscale=6, yscale=1.4,baseline={(iso.base)}]
\draw[0cell]
(0,0) node (x11) {\big[(\beta'\tensor\alpha')(\beta\tensor\alpha)\big]_X}
($(x11)+(1,0)$) node (x12) {\big[(\beta'\beta)\tensor(\alpha'\alpha)\big]_X}
($(x11)+(0,-1)$) node (x21) {\big[(G''\alpha'_X)(\beta'_{F'X})\big]\big[(G'\alpha_X)\beta_{FX}\big]}
($(x12)+(0,-1)$) node (x22) {\big[G''(\alpha'_X\alpha_X)\big]\big[\beta'_{FX}\beta_{FX}\big]}
($(x21)+(0,-1)$) node (x31) {\big[(G''\alpha'_X)\big(\beta'_{F'X}G'\alpha_X\big)\big]\beta_{FX}}
($(x22)+(0,-1)$) node (x32) {\big[(G''\alpha'_X)(G''\alpha_X)\big]\big[\beta'_{FX}\beta_{FX}\big]}
($(x31)+(1/2,-1)$) node (bot) {\big[(G''\alpha'_X)\big((G''\alpha_X)\beta'_{FX}\big)\big]\beta_{FX}}
($(x31)+(0,-1)$) node[inner sep=0pt] (a) {}
($(x32)+(0,-1)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x11) edge node {\tensortwo_X} (x12)
(x11) edge node[swap] {1} (x21)
(x22) edge node {1} (x12)
(x21) edge node[swap] (iso) {\iso} (x31)
(x32) edge node[swap] {\Gpptwo*1} (x22)
(x31) edge[-,shorten >=-1pt] node[swap,pos=.6] {\big(1*\betapinv_{\alpha_X}\big)*1} (a)
(a) edge[shorten <=-1pt] (bot)
(bot) edge[-,shorten >=-1pt] (b)
(b) edge[shorten <=-1pt] node[swap,pos=.4] {\iso} (x32)
;
\end{tikzpicture}
\end{equation}
in $\A_3(GFX,G''F''X)$ for each object $X\in\A_1$.
\begin{itemize}
\item $(\beta'\tensor\alpha')(\beta\tensor\alpha)$ is the horizontal composite in \Cref{def:lax-tr-comp} of the composite lax transformations $\beta\tensor\alpha$ and $\beta'\tensor\alpha'$ in \eqref{transformation-composite}.
\item $(\beta'\beta)\tensor(\alpha'\alpha)$ is the composite in \eqref{transformation-composite} of the horizontal composites $\alpha'\alpha$ and $\beta'\beta$.
\item Each symbol $\iso$ is given by Mac Lane's Coherence \Cref{maclane-coherence}, so it is a vertical composite of horizontal composites of identity $2$-cells and a component of the associator $a$ or its inverse.
\end{itemize}
This finishes the definition of $\tensortwo$.
\end{definition}
\begin{explanation}\label{expl:tensortwo}
In the definition of $\tensortwo_X$:
\begin{enumerate}
\item Mac Lane's Coherence \Cref{maclane-coherence} guarantees that each invertible $2$-cell denoted by $\iso$ has a unique value regardless of how the parentheses are moved using components of the associator and their inverses.
\item The $2$-cell
\[\begin{tikzcd}[column sep=large]
(G''\alpha_X)\beta'_{FX} \ar{r}{\beta'_{\alpha_X}} & \beta'_{F'X} G'\alpha_X
\end{tikzcd}\]
is the component $2$-cell of $\beta'$ at the $1$-cell $\alpha_X : FX \to F'X$ in $\A_2$.
\item $\tensortwo_X$ is the composite of the pasting diagram
\begin{equation}\label{tensortwo-pasting}
\begin{tikzpicture}[xscale=2.7, yscale=1.5, baseline={(ga.base)}]
\draw[0cell]
(0,0) node (x11) {GFX}
($(x11)+(1,0)$) node (x12) {G'FX}
($(x12)+(1,0)$) node (x13) {G''FX}
($(x12)+(0,-1)$) node (x22) {G'F'X}
($(x22)+(1,0)$) node (x23) {G''F'X}
($(x23)+(1,0)$) node (x24) {G''F''X}
;
\draw[1cell]
(x11) edge node {(\beta_{FX}} (x12)
(x12) edge node {\beta'_{FX})} (x13)
(x13) edge[out=0,in=90] node {G''(\alpha'_X\alpha_X)} (x24)
(x12) edge node[swap] (ga) {G'\alpha_X)} (x22)
(x13) edge node[pos=.5] {G''\alpha_X} (x23)
(x22) edge node[swap] {[\beta'_{F'X}} (x23)
(x23) edge node[swap] {G''\alpha'_X]} (x24)
;
\draw[2cell]
node[between=x12 and x23 at .6, rotate=45, 2label={above,\betapinv_{\alpha_X}}] {\Rightarrow}
node[between=x13 and x24 at .5, rotate=45, 2label={below,\Gpptwo}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
in $\A_3(GFX,G''F''X)$, with the indicated bracketings in its (co)domain.
\dqed
\end{enumerate}
\end{explanation}
\begin{convention}\label{conv:large-diagram}
To typeset large diagrams in the rest of this chapter, we will use \Cref{conv:functor-subscript} along with the following abbreviations to justify the commutativity of each sub-diagram:
\begin{itemize}
\item $\MC$ \label{notation:maclane}means Mac Lane's Coherence \Cref{maclane-coherence}.
\item $\iso$ \label{notation:iso}denotes a coherence isomorphism in a bicategory whose existence follows from \Cref{moving-brackets}, and whose uniqueness is guaranteed by Mac Lane's Coherence \Cref{maclane-coherence}.
\item $\nat$ \label{notation:nat}means the naturality of:
\begin{itemize}
\item the associator $a$ as in \eqref{associator-naturality};
\item an instance of $\iso$ with repeated applications of \eqref{associator-naturality};
\item either the left unitor or the right unitor as in \eqref{unitor-naturality};
\item the lax functoriality constraint of a lax functor as in \eqref{f2-bicat-naturality};
\item a lax transformation with respect to $2$-cells as in \eqref{lax-transformation-naturality}.
\end{itemize}
\item $\unity$ \label{notation:unity}means either the unity axiom \eqref{bicat-unity} in a bicategory, or the unity properties in \Cref{bicat-left-right-unity,bicat-l-equals-r}.
\item $\midfour$ \label{notation:midfour}means the middle four exchange \eqref{middle-four}, possibly used with the unity properties \eqref{hom-category-axioms} and \eqref{bicat-c-id}. Moreover, if $\midfour$ is applied along with some other property $P$, then we only mention $P$.
\item To save space, we use concatenation to denote the horizontal composite of $2$-cells. For example, $(\Gpptwo 1)1$ means $(\Gpptwo*1)*1$.\dqed
\end{itemize}
\end{convention}
\begin{lemma}\label{tensortwo-modification}
In \Cref{def:tensortwo},
\[\begin{tikzcd}
(\beta'\tensor\alpha')(\beta\tensor\alpha) \ar{r}{\tensortwo} & (\beta'\beta)\tensor(\alpha'\alpha)
\end{tikzcd}\]
is an invertible modification.
\end{lemma}
\begin{proof}
Each component $2$-cell $\tensortwo_X$ in \eqref{tensortwo-x} is defined as the vertical composite of four invertible $2$-cells, which is invertible by \Cref{hcomp-invertible-2cells}. It remains to check the modification axiom \eqref{modification-axiom-pasting} for $\tensortwo$, which is the commutativity of the diagram
\begin{equation}\label{tensortwo-mod-axiom}
\begin{tikzcd}
(G''F''f)\big[(\beta'\tensor\alpha')(\beta\tensor\alpha)\big]_X \ar{d}[swap]{\big((\beta'\tensor\alpha')(\beta\tensor\alpha)\big)_f} \ar{r}{1*\tensortwo_X} & (G''F''f)\big[(\beta'\beta)\tensor(\alpha'\alpha)\big]_X \ar{d}{\big((\beta'\beta)\tensor(\alpha'\alpha)\big)_f}\\
\big[(\beta'\tensor\alpha')(\beta\tensor\alpha)\big]_Y (GFf) \ar{r}{\tensortwo_Y*1} & \big[(\beta'\beta)\tensor(\alpha'\alpha)\big]_Y (GFf)
\end{tikzcd}
\end{equation}
in the hom-category $\A_3(GFX,G''F''Y)$ for $1$-cells $f : X \to Y$ in $\A_1$. When its four sides are expanded using \eqref{transf-hcomp-iicell}, \eqref{composite-tr-iicell-eq}, and \eqref{tensortwo-x}, the boundary of the diagram \eqref{tensortwo-mod-axiom} consists of 40 edges. We will prove its commutativity by filling it with 51 sub-diagrams as follows.
\begin{equation}\label{tensortwo-modax-outline}
\begin{tikzpicture}[xscale=2, yscale=1, baseline={(x31.base)}]
\draw[0cell]
(0,0) node (x11) {\bullet}
($(x11)+(1,0)$) node (x12) {\bullet}
($(x12)+(1,0)$) node (x13) {\bullet}
($(x13)+(1,0)$) node (x14) {\bullet}
($(x14)+(1,0)$) node (x15) {\bullet}
($(x13)+(0,-1)$) node (x23) {\bullet}
($(x14)+(0,-1)$) node (x24) {\bullet}
($(x11)+(0,-1.7)$) node (x31) {\bullet}
($(x31)+(1,0)$) node (x32) {\bullet}
($(x32)+(1,0)$) node (x33) {\bullet}
($(x33)+(1,0)$) node (x34) {\bullet}
($(x34)+(1,0)$) node (x35) {\bullet}
($(x31)+(0,-.7)$) node (x41) {\bullet}
($(x34)+(0,-.7)$) node (x44) {\bullet}
($(x41)+(0,-.7)$) node (x51) {\bullet}
($(x51)+(1,0)$) node (x52) {\bullet}
($(x52)+(1/2,0)$) node (x525) {\bullet}
($(x52)+(1,0)$) node (x53) {\bullet}
($(x53)+(1,0)$) node (x54) {\bullet}
($(x51)+(0,-1)$) node (x61) {\bullet}
($(x61)+(1,0)$) node (x62) {\bullet}
($(x62)+(1,0)$) node (x63) {\bullet}
($(x63)+(1,0)$) node (x64) {\bullet}
($(x64)+(1,0)$) node (x65) {\bullet}
($(x11)!.5!(x32)$) node {A_1^7}
($(x12)!.5!(x33)$) node {A_2^6}
($(x13)!.5!(x24)$) node {A_3^6}
($(x14)!.5!(x35)$) node {A_4^5}
($(x31)!.5!(x52)$) node {A_5^2}
($(x23)!.5!(x54)$) node {A_6^4}
($(x51)!.5!(x62)$) node {A_7^4}
($(x52)!.5!(x63)$) node {A_8^4}
($(x53)!.5!(x64)$) node {A_9^5}
($(x34)!.5!(x65)$) node {A_{10}^8}
($(x11)+(0,.5)$) node[inner sep=0pt] (a) {}
($(x15)+(0,.5)$) node[inner sep=0pt] (b) {}
($(x61)+(0,-.5)$) node[inner sep=0pt] (c) {}
($(x65)+(0,-.5)$) node[inner sep=0pt] (d) {}
;
\draw[1cell]
(x11) edge (x12)
(x12) edge (x13)
(x13) edge (x14)
(x14) edge (x15)
(x11) edge node[swap] {10} (x31)
(x12) edge node[swap] {8} (x32)
(x13) edge node[swap] {7} (x23)
(x23) edge (x33)
(x14) edge node[swap] {7} (x24)
(x24) edge (x34)
(x15) edge node {6} (x35)
(x31) edge (x32)
(x32) edge (x33)
(x23) edge (x24)
(x34) edge (x35)
(x31) edge (x41)
(x41) edge (x51)
(x33) edge (x52)
(x44) edge (x34)
(x44) edge (x54)
(x35) edge node {9} (x65)
(x51) edge (x52)
(x525) edge (x52)
(x525) edge (x53)
(x54) edge (x53)
(x51) edge node[swap] {5} (x61)
(x52) edge node[swap] {5} (x62)
(x53) edge node[swap] {6} (x63)
(x54) edge node[swap] {5} (x64)
(x61) edge (x62)
(x62) edge (x63)
(x63) edge (x64)
(x64) edge (x65)
(x11) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt, shorten >=-1pt] node {1*\tensortwo_X} (b)
(b) edge[shorten <=-1pt] (x15)
(x61) edge[-,shorten >=-1pt] (c)
(c) edge[-,shorten <=-1pt, shorten >=-1pt] node[swap] {\tensortwo_Y*1} (d)
(d) edge[shorten <=-1pt] (x65)
;
\end{tikzpicture}
\end{equation}
In this sub-division of the diagram \eqref{tensortwo-mod-axiom}:
\begin{itemize}
\item $A_i^n$ is the $i$th sub-diagram, which is further divided into $n$ sub-diagrams.
\item An arrow labeled by a number $k$ represents $k$ composable arrows. For example, the sub-diagram $A_1^7$ is divided into 7 sub-diagrams, and its left and right boundaries have 10 edges and 8 edges, respectively. In what follows, we omit the superscript and write $A_i$ for $A_i^n$.
\item The left boundary, which is $\big((\beta'\tensor\alpha')(\beta\tensor\alpha)\big)_f$, has 17 arrows.
\item The right boundary, which is $\big((\beta'\beta)\tensor(\alpha'\alpha)\big)_f$, has 15 arrows.
\item The top and the bottom boundaries are by definition $1*\tensortwo_X$ and $\tensortwo_Y*1$, respectively, and each has 4 arrows.
\item In each sub-diagram $A_i$, each edge is an invertible $2$-cell in $\A_3$.
\end{itemize}
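Indeed, the two counting claims above are consistent: summing the superscripts in \eqref{tensortwo-modax-outline} recovers the $51$ sub-diagrams, and the four boundary segments account for all $40$ boundary edges, since
\[7+6+6+5+2+4+4+4+5+8 = 51 \qquad\text{and}\qquad 17+15+4+4 = 40.\]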
Next is the diagram $A_1$ with the notations in \Cref{conv:large-diagram}.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {\ensuremath{f_{G''F''}\big[(\alpha'_{X,G''}\beta'_{F'X})(\alpha_{X,G'}\beta_{FX})\big]}}
($(x11)+(0,-1)$) node (x21) {\ensuremath{\big[f_{G''F''}(\alpha'_{X,G''}\beta'_{F'X})\big](\alpha_{X,G'}\beta_{FX})}}
($(x21)+(0,-1)$) node (x31) {\ensuremath{\big[(f_{G''F''}\alpha'_{X,G''})\beta'_{F'X}\big](\alpha_{X,G'}\beta_{FX})}}
($(x31)+(0,-1)$) node (x41) {\ensuremath{\big[(f_{F''}\alpha'_X)_{G''}\beta'_{F'X}\big](\alpha_{X,G'}\beta_{FX})}}
($(x41)+(0,-1)$) node (x51) {\ensuremath{\big[(\alpha'_Y f_{F'})_{G''}\beta'_{F'X}\big](\alpha_{X,G'}\beta_{FX})}}
($(x51)+(0,-1)$) node (x61) {\ensuremath{\big[(\alpha'_{Y,G''}f_{G''F'})\beta'_{F'X}\big](\alpha_{X,G'}\beta_{FX})}}
($(x61)+(0,-1)$) node (x71) {\ensuremath{\big[\alpha'_{Y,G''}(f_{G''F'}\beta'_{F'X})\big](\alpha_{X,G'}\beta_{FX})}}
($(x71)+(0,-1)$) node (x81) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}f_{G'F'})\big](\alpha_{X,G'}\beta_{FX})}}
($(x81)+(0,-1)$) node (x91) {\ensuremath{\big[(\alpha'_{Y,G''}\beta'_{F'Y})f_{G'F'}\big](\alpha_{X,G'}\beta_{FX})}}
($(x91)+(0,-1)$) node (x101) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[f_{G'F'}(\alpha_{X,G'}\beta_{FX})\big]}}
($(x101)+(0,-1)$) node (x111) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[(f_{G'F'}\alpha_{X,G'})\beta_{FX}\big]}}
($(x11)+(1,0)$) node (x12) {\ensuremath{f_{G''F''}\big[(\alpha'_{X,G''}(\beta'_{F'X}\alpha_{X,G'}))\beta_{FX}\big]}}
($(x12)+(0,-2)$) node (x22) {\ensuremath{(f_{G''F''}\alpha'_{X,G''})\big[(\beta'_{F'X}\alpha_{X,G'})\beta_{FX}\big]}}
($(x22)+(0,-1)$) node (x32) {\ensuremath{(f_{F''}\alpha'_X)_{G''}\big[(\beta'_{F'X}\alpha_{X,G'})\beta_{FX}\big]}}
($(x32)+(0,-1)$) node (x42) {\ensuremath{(\alpha'_Y f_{F'})_{G''}\big[(\beta'_{F'X}\alpha_{X,G'})\beta_{FX}\big]}}
($(x42)+(0,-1)$) node (x52) {\ensuremath{(\alpha'_{Y,G''} f_{G''F'})\big[(\beta'_{F'X}\alpha_{X,G'})\beta_{FX}\big]}}
($(x52)+(0,-1)$) node (x62) {\ensuremath{\big[\alpha'_{Y,G''} (f_{G''F'}(\beta'_{F'X}\alpha_{X,G'}))\big]\beta_{FX}}}
($(x62)+(0,-1)$) node (x72) {\ensuremath{\big[\alpha'_{Y,G''} ((f_{G''F'}\beta'_{F'X})\alpha_{X,G'})\big]\beta_{FX}}}
($(x72)+(0,-1)$) node (x82) {\ensuremath{\big[\alpha'_{Y,G''} ((\beta'_{F'Y}f_{G'F'})\alpha_{X,G'})\big]\beta_{FX}}}
($(x82)+(0,-2)$) node (x92) {\ensuremath{\big[\alpha'_{Y,G''} (\beta'_{F'Y}(f_{G'F'}\alpha_{X,G'}))\big]\beta_{FX}}}
($(x11)!.5!(x22)$) node {\MC}
($(x31)!.5!(x32)$) node {\nat}
($(x41)!.5!(x42)$) node {\nat}
($(x51)!.5!(x52)$) node {\nat}
($(x61)!.5!(x62)$) node {\MC}
($(x81)!.5!(x72)$) node {\nat}
($(x101)!.5!(x82)$) node {\MC}
;
\draw[1cell]
(x11) edge node[swap] {\ainv} (x21)
(x21) edge node[swap] {\ainv 1} (x31)
(x31) edge node[swap] {(\Gpptwo 1)1} (x41)
(x41) edge node[swap] {(\alpha'_{f,G''}1)1} (x51)
(x51) edge node[swap] {(\Gpptwoinv 1)1} (x61)
(x61) edge node[swap] {a 1} (x71)
(x71) edge node[swap] {(1\beta'_{F'f})1} (x81)
(x81) edge node[swap] {\ainv 1} (x91)
(x91) edge node[swap] {a} (x101)
(x101) edge node[swap] {1\ainv} (x111)
(x12) edge node {\iso} (x22)
(x22) edge node {\Gpptwo 1} (x32)
(x32) edge node {\alpha'_{f,G''}1} (x42)
(x42) edge node {\Gpptwoinv 1} (x52)
(x52) edge node {\iso} (x62)
(x62) edge node {(1\ainv)1} (x72)
(x72) edge node {(1(\beta'_{F'f}1))1} (x82)
(x82) edge node {(1a)1} (x92)
(x11) edge node {1*\iso} (x12)
(x31) edge node {\iso} (x22)
(x41) edge node {\iso} (x32)
(x51) edge node {\iso} (x42)
(x61) edge node {\iso} (x52)
(x71) edge node {\iso} (x72)
(x81) edge node {\iso} (x82)
(x111) edge node {\iso} (x92)
;
\end{tikzpicture}\]
Next is the diagram $A_2$.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\draw[0cell]
(0,0) node (x12) {\ensuremath{f_{G''F''}\big[(\alpha'_{X,G''}(\beta'_{F'X}\alpha_{X,G'}))\beta_{FX}\big]}}
($(x12)+(0,-1)$) node (x22) {\ensuremath{(f_{G''F''}\alpha'_{X,G''})\big[(\beta'_{F'X}\alpha_{X,G'})\beta_{FX}\big]}}
($(x22)+(0,-1)$) node (x32) {\ensuremath{(f_{F''}\alpha'_X)_{G''}\big[(\beta'_{F'X}\alpha_{X,G'})\beta_{FX}\big]}}
($(x32)+(0,-1)$) node (x42) {\ensuremath{(\alpha'_Y f_{F'})_{G''}\big[(\beta'_{F'X}\alpha_{X,G'})\beta_{FX}\big]}}
($(x42)+(0,-1)$) node (x52) {\ensuremath{(\alpha'_{Y,G''} f_{G''F'})\big[(\beta'_{F'X}\alpha_{X,G'})\beta_{FX}\big]}}
($(x52)+(0,-1)$) node (x62) {\ensuremath{\big[\alpha'_{Y,G''} (f_{G''F'}(\beta'_{F'X}\alpha_{X,G'}))\big]\beta_{FX}}}
($(x62)+(0,-1)$) node (x72) {\ensuremath{\big[\alpha'_{Y,G''} ((f_{G''F'}\beta'_{F'X})\alpha_{X,G'})\big]\beta_{FX}}}
($(x72)+(0,-1)$) node (x82) {\ensuremath{\big[\alpha'_{Y,G''} ((\beta'_{F'Y}f_{G'F'})\alpha_{X,G'})\big]\beta_{FX}}}
($(x82)+(0,-1)$) node (x92) {\ensuremath{\big[\alpha'_{Y,G''} (\beta'_{F'Y}(f_{G'F'}\alpha_{X,G'}))\big]\beta_{FX}}}
($(x12)+(1,0)$) node (x13) {\ensuremath{f_{G''F''}\big[(\alpha'_{X,G''}(\alpha_{X,G''}\beta'_{FX}))\beta_{FX}\big]}}
($(x13)+(0,-1)$) node (x23) {\ensuremath{(f_{G''F''}\alpha'_{X,G''})\big[(\alpha_{X,G''}\beta'_{FX})\beta_{FX}\big]}}
($(x23)+(0,-1)$) node (x33) {\ensuremath{(f_{F''}\alpha'_{X})_{G''}\big[(\alpha_{X,G''}\beta'_{FX})\beta_{FX}\big]}}
($(x33)+(0,-1)$) node (x43) {\ensuremath{(\alpha'_{Y}f_{F'})_{G''}\big[(\alpha_{X,G''}\beta'_{FX})\beta_{FX}\big]}}
($(x43)+(0,-1)$) node (x53) {\ensuremath{(\alpha'_{Y,G''}f_{G''F'})\big[(\alpha_{X,G''}\beta'_{FX})\beta_{FX}\big]}}
($(x53)+(0,-1)$) node (x63) {\ensuremath{\big[\alpha'_{Y,G''}(f_{G''F'}(\alpha_{X,G''}\beta'_{FX}))\big]\beta_{FX}}}
($(x63)+(0,-1)$) node (x73) {\ensuremath{\big[\alpha'_{Y,G''}((f_{G''F'}\alpha_{X,G''})\beta'_{FX})\big]\beta_{FX}}}
($(x73)+(0,-1)$) node (x83) {\ensuremath{\big[\alpha'_{Y,G''}((f_{F'}\alpha_{X})_{G''}\beta'_{FX})\big]\beta_{FX}}}
($(x83)+(0,-1)$) node (x93) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}(f_{F'}\alpha_{X})_{G'})\big]\beta_{FX}}}
($(x12)!.4!(x23)$) node {\nat}
($(x22)!.4!(x33)$) node {\midfour}
($(x32)!.4!(x43)$) node {\midfour}
($(x42)!.4!(x53)$) node {\midfour}
($(x52)!.5!(x63)$) node {\nat}
;
\draw[1cell]
(x12) edge node[swap] {\iso} (x22)
(x22) edge node[swap] {\Gpptwo 1} (x32)
(x32) edge node[swap] {\alpha'_{f,G''} 1} (x42)
(x42) edge node[swap] {\Gpptwoinv 1} (x52)
(x52) edge node[swap] {\iso} (x62)
(x62) edge node[swap] {(1 \ainv)1} (x72)
(x72) edge node[swap] {(1(\beta'_{F'f}1))1} (x82)
(x82) edge node[swap] {(1a)1} (x92)
(x13) edge node {\iso} (x23)
(x23) edge node {\Gpptwo 1} (x33)
(x33) edge node {\alpha'_{f,G''}1} (x43)
(x43) edge node {\Gpptwoinv 1} (x53)
(x53) edge node {\iso} (x63)
(x63) edge node {(1\ainv)1} (x73)
(x73) edge node {(1(\Gpptwo 1))1} (x83)
(x83) edge node {(1\beta'_{f_{F'}\alpha_X})1} (x93)
(x12) edge node {1(1(\betapinv_{\alpha_X}1))} (x13)
(x22) edge node {1(\betapinv_{\alpha_X}1)} (x23)
(x32) edge node {1(\betapinv_{\alpha_X}1)} (x33)
(x42) edge node {1(\betapinv_{\alpha_X}1)} (x43)
(x52) edge node {1(\betapinv_{\alpha_X}1)} (x53)
(x62) edge node[swap] {(1(1\betapinv_{\alpha_X}))1} (x63)
(x92) edge node {(1(1\Gptwo))1} (x93)
;
\end{tikzpicture}\]
The bottom square in $A_2$ is commutative by the lax naturality \eqref{2-cell-transformation} of $\beta'$ applied to the composable $1$-cells $\alpha_X : FX \to F'X$ and $F'f : F'X \to F'Y$ in $\A_2$.
Next is the diagram $A_3$.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\draw[0cell]
(0,0) node (x13) {\ensuremath{f_{G''F''}\big[(\alpha'_{X,G''}(\alpha_{X,G''}\beta'_{FX}))\beta_{FX}\big]}}
($(x13)+(0,-2)$) node (x23) {\ensuremath{(f_{G''F''}\alpha'_{X,G''})\big[(\alpha_{X,G''}\beta'_{FX})\beta_{FX}\big]}}
($(x23)+(0,-1)$) node (x33) {\ensuremath{(f_{F''}\alpha'_{X})_{G''}\big[(\alpha_{X,G''}\beta'_{FX})\beta_{FX}\big]}}
($(x33)+(0,-1)$) node (x43) {\ensuremath{(\alpha'_{Y}f_{F'})_{G''}\big[(\alpha_{X,G''}\beta'_{FX})\beta_{FX}\big]}}
($(x43)+(0,-1)$) node (x53) {\ensuremath{(\alpha'_{Y,G''}f_{G''F'})\big[(\alpha_{X,G''}\beta'_{FX})\beta_{FX}\big]}}
($(x53)+(0,-1)$) node (x63) {\ensuremath{\big[\alpha'_{Y,G''}(f_{G''F'}(\alpha_{X,G''}\beta'_{FX}))\big]\beta_{FX}}}
($(x63)+(0,-1)$) node (x73) {\ensuremath{\big[\alpha'_{Y,G''}((f_{G''F'}\alpha_{X,G''})\beta'_{FX})\big]\beta_{FX}}}
($(x73)+(0,-1)$) node (x83) {\ensuremath{\big[\alpha'_{Y,G''}((f_{F'}\alpha_{X})_{G''}\beta'_{FX})\big]\beta_{FX}}}
($(x13)+(1,0)$) node (x14) {\ensuremath{f_{G''F''}\big[(\alpha'_{X,G''}\alpha_{X,G''})(\beta'_{FX}\beta_{FX})\big]}}
($(x14)+(0,-1)$) node (x24) {\ensuremath{\big[f_{G''F''}(\alpha'_{X,G''}\alpha_{X,G''})\big](\beta'_{FX}\beta_{FX})}}
($(x24)+(0,-1)$) node (x34) {\ensuremath{\big[(f_{G''F''}\alpha'_{X,G''})\alpha_{X,G''}\big](\beta'_{FX}\beta_{FX})}}
($(x34)+(0,-1)$) node (x44) {\ensuremath{\big[(f_{F''}\alpha'_X)_{G''}\alpha_{X,G''}\big](\beta'_{FX}\beta_{FX})}}
($(x44)+(0,-1)$) node (x54) {\ensuremath{\big[(\alpha'_Y f_{F'})_{G''}\alpha_{X,G''}\big](\beta'_{FX}\beta_{FX})}}
($(x54)+(0,-1)$) node (x64) {\ensuremath{\big[(\alpha'_{Y,G''} f_{G''F'})\alpha_{X,G''}\big](\beta'_{FX}\beta_{FX})}}
($(x64)+(0,-2)$) node (x74) {\ensuremath{\big[\alpha'_{Y,G''}(f_{G''F'}\alpha_{X,G''})\big](\beta'_{FX}\beta_{FX})}}
($(x74)+(0,-1)$) node (x84) {\ensuremath{\big[\alpha'_{Y,G''}(f_{F'}\alpha_X)_{G''}\big](\beta'_{FX}\beta_{FX})}}
($(x13)!.5!(x34)$) node {\MC}
($(x23)!.5!(x44)$) node {\nat}
($(x33)!.5!(x54)$) node {\nat}
($(x43)!.5!(x64)$) node {\nat}
($(x53)!.5!(x74)$) node {\MC}
($(x73)!.5!(x84)$) node {\nat}
;
\draw[1cell]
(x13) edge node[swap] {\iso} (x23)
(x23) edge node[swap] {\Gpptwo 1} (x33)
(x33) edge node[swap] {\alpha'_{f,G''}1} (x43)
(x43) edge node[swap] {\Gpptwoinv 1} (x53)
(x53) edge node[swap] {\iso} (x63)
(x63) edge node[swap] {(1\ainv)1} (x73)
(x73) edge node[swap] {(1(\Gpptwo 1))1} (x83)
(x14) edge node {\ainv} (x24)
(x24) edge node {\ainv 1} (x34)
(x34) edge node {(\Gpptwo 1)1} (x44)
(x44) edge node {(\alpha'_{f,G''}1)1} (x54)
(x54) edge node {(\Gpptwoinv 1)1} (x64)
(x64) edge node {a1} (x74)
(x74) edge node {(1\Gpptwo)1} (x84)
(x13) edge node {1*\iso} (x14)
(x23) edge node {\iso} (x34)
(x33) edge node {\iso} (x44)
(x43) edge node {\iso} (x54)
(x53) edge node {\iso} (x64)
(x73) edge node {\iso} (x74)
(x83) edge node {\iso} (x84)
;
\end{tikzpicture}\]
Next is the diagram $A_4$.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\def\h{1} \def\v{-1} \def\u{-2}
\draw[0cell]
(0,0) node (x14) {\ensuremath{f_{G''F''}\big[(\alpha'_{X,G''}\alpha_{X,G''})(\beta'_{FX}\beta_{FX})\big]}}
($(x14)+(0,-1)$) node (x24) {\ensuremath{\big[f_{G''F''}(\alpha'_{X,G''}\alpha_{X,G''})\big](\beta'_{FX}\beta_{FX})}}
($(x24)+(0,-1)$) node (x34) {\ensuremath{\big[(f_{G''F''}\alpha'_{X,G''})\alpha_{X,G''}\big](\beta'_{FX}\beta_{FX})}}
($(x34)+(0,-1)$) node (x44) {\ensuremath{\big[(f_{F''}\alpha'_X)_{G''}\alpha_{X,G''}\big](\beta'_{FX}\beta_{FX})}}
($(x44)+(0,-1)$) node (x54) {\ensuremath{\big[(\alpha'_Y f_{F'})_{G''}\alpha_{X,G''}\big](\beta'_{FX}\beta_{FX})}}
($(x54)+(0,-1)$) node (x64) {\ensuremath{\big[(\alpha'_{Y,G''} f_{G''F'})\alpha_{X,G''}\big](\beta'_{FX}\beta_{FX})}}
($(x64)+(0,-1)$) node (x74) {\ensuremath{\big[\alpha'_{Y,G''}(f_{G''F'}\alpha_{X,G''})\big](\beta'_{FX}\beta_{FX})}}
($(x74)+(0,-1)$) node (x84) {\ensuremath{\big[\alpha'_{Y,G''}(f_{F'}\alpha_X)_{G''}\big](\beta'_{FX}\beta_{FX})}}
($(x84)+(0,-1)$) node (x94) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_Y f_F)_{G''}\big](\beta'_{FX}\beta_{FX})}}
($(x14)+(1,0)$) node (x15) {\ensuremath{f_{G''F''}\big[(\alpha'_X \alpha_X)_{G''}(\beta'_{FX}\beta_{FX})\big]}}
($(x15)+(0,-1)$) node (x25) {\ensuremath{\big[f_{G''F''}(\alpha'_X \alpha_X)_{G''}\big](\beta'_{FX}\beta_{FX})}}
($(x25)+(0,-1)$) node (x35) {\ensuremath{\big[f_{F''}(\alpha'_X \alpha_X)\big]_{G''}(\beta'_{FX}\beta_{FX})}}
($(x35)+(0,-1)$) node (x45) {\ensuremath{\big[(f_{F''}\alpha'_X)\alpha_X\big]_{G''}(\beta'_{FX}\beta_{FX})}}
($(x45)+(0,-1)$) node (x55) {\ensuremath{\big[(\alpha'_Y f_{F'})\alpha_X\big]_{G''}(\beta'_{FX}\beta_{FX})}}
($(x55)+(0,-3)$) node (x65) {\ensuremath{\big[\alpha'_Y (f_{F'}\alpha_X)\big]_{G''}(\beta'_{FX}\beta_{FX})}}
($(x65)+(0,-1)$) node (x75) {\ensuremath{\big[\alpha'_Y (\alpha_Y f_{F})\big]_{G''}(\beta'_{FX}\beta_{FX})}}
($(x14)!.4!(x25)$) node {\nat}
($(x44)!.4!(x55)$) node {\nat}
($(x84)!.4!(x75)$) node {\nat}
;
\draw[1cell]
(x14) edge node[swap] {\ainv} (x24)
(x24) edge node[swap] {\ainv 1} (x34)
(x34) edge node[swap] {(\Gpptwo 1)1} (x44)
(x44) edge node[swap] {(\alpha'_{f,G''}1)1} (x54)
(x54) edge node[swap] {(\Gpptwoinv 1)1} (x64)
(x64) edge node[swap] {a1} (x74)
(x74) edge node[swap] {(1\Gpptwo)1} (x84)
(x84) edge node[swap] {(1\alpha_{f,G''})1} (x94)
(x15) edge node {\ainv} (x25)
(x25) edge node {\Gpptwo 1} (x35)
(x35) edge node {\ainv_{G''}1} (x45)
(x45) edge node {(\alpha'_f 1)_{G''}1} (x55)
(x55) edge node {a_{G''}1} (x65)
(x65) edge node {(1\alpha_f)_{G''}1} (x75)
(x14) edge node {1(\Gpptwo 1)} (x15)
(x24) edge node {(1\Gpptwo)1} (x25)
(x44) edge node {\Gpptwo 1} (x45)
(x54) edge node {\Gpptwo 1} (x55)
(x84) edge node {\Gpptwo 1} (x65)
(x94) edge node {\Gpptwo 1} (x75)
;
\end{tikzpicture}\]
The two unmarked sub-diagrams in $A_4$ are commutative by the lax associativity \eqref{f2-bicat} of $G''$.
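For reference, and up to the orientation conventions fixed in \eqref{f2-bicat}, the lax associativity constraint of $G''$ at composable $1$-cells $f$, $g$, and $h$ is the equality of pasted $2$-cells
\[(G'')^2_{h,gf}\,\big(1_{G''h}\,(G'')^2_{g,f}\big)\,a \;=\; (G''a)\,(G'')^2_{hg,f}\,\big((G'')^2_{h,g}\,1_{G''f}\big) : \big(G''h\,G''g\big)G''f \to G''\big(h(gf)\big);\]
the two unmarked sub-diagrams are instances of this equality.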
Next is the diagram $A_5$.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\def\h{1} \def\v{-1} \def\u{-2}
\draw[0cell]
(0,0) node (x111) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[(f_{G'F'}\alpha_{X,G'})\beta_{FX}\big]}}
($(x111)+(0,-1)$) node (x121) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[(f_{F'}\alpha_X)_{G'}\beta_{FX}\big]}}
($(x121)+(0,-1)$) node (x131) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[(\alpha_Y f_F)_{G'}\beta_{FX}\big]}}
($(x111)+(1,0)$) node (x92) {\ensuremath{\big[\alpha'_{Y,G''} (\beta'_{F'Y}(f_{G'F'}\alpha_{X,G'}))\big]\beta_{FX}}}
($(x92)+(0,-1)$) node (x93) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}(f_{F'}\alpha_{X})_{G'})\big]\beta_{FX}}}
($(x93)+(0,-1)$) node (x102) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}(\alpha_Y f_F)_{G'})\big]\beta_{FX}}}
($(x111)!.5!(x93)$) node {\nat}
($(x121)!.5!(x102)$) node {\nat}
;
\draw[1cell]
(x111) edge node[swap] {1(\Gptwo 1)} (x121)
(x121) edge node[swap] {1(\alpha_{f,G'}1)} (x131)
(x92) edge node {(1(1\Gptwo))1} (x93)
(x93) edge node {(1(1\alpha_{f,G'}))1} (x102)
(x111) edge node {\iso} (x92)
(x121) edge node {\iso} (x93)
(x131) edge node {\iso} (x102)
;
\end{tikzpicture}\]
Next is the diagram $A_6$.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\def\h{1} \def\v{-1} \def\s{-1.2} \def\t{-.8} \def\u{-2}
\draw[0cell]
(0,0) node (x83) {\ensuremath{\big[\alpha'_{Y,G''}((f_{F'}\alpha_{X})_{G''}\beta'_{FX})\big]\beta_{FX}}}
($(x83)+(0,-1)$) node (x93) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}(f_{F'}\alpha_{X})_{G'})\big]\beta_{FX}}}
($(x93)+(0,-1)$) node (x102) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}(\alpha_Y f_F)_{G'})\big]\beta_{FX}}}
($(x102)+(0,-1)$) node (x102b) {\ensuremath{\big[\alpha'_{Y,G''}((\alpha_Y f_F)_{G''}\beta'_{FX})\big]\beta_{FX}}}
($(x83)+(1,0)$) node (x84) {\ensuremath{\big[\alpha'_{Y,G''}(f_{F'}\alpha_X)_{G''}\big](\beta'_{FX}\beta_{FX})}}
($(x84)+(0,-1)$) node (x94) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_Y f_F)_{G''}\big](\beta'_{FX}\beta_{FX})}}
($(x94)+(0,-1.2)$) node (x104) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} f_{G''F})\big](\beta'_{FX}\beta_{FX})}}
($(x104)+(0,-.8)$) node (x114) {\ensuremath{\big[(\alpha'_{Y,G''}\alpha_{Y,G''}) f_{G''F}\big](\beta'_{FX}\beta_{FX})}}
($(x102b)+(1/2,-1)$) node (x103) {\ensuremath{\big[\alpha'_{Y,G''}((\alpha_{Y,G''} f_{G''F})\beta'_{FX})\big]\beta_{FX}}}
($(x83)!.65!(x94)$) node {\nat}
($(x102b)!.4!(x114)$) node {\nat}
($(x103)!.5!(x114)$) node {\MC}
;
\draw[1cell]
(x83) edge node[swap] {(1\beta'_{f_{F'}\alpha_X}) 1} (x93)
(x93) edge node[swap] {(1(1\alpha_{f,G'}))1} (x102)
(x102b) edge node {(1\beta'_{\alpha_Y f_F})1} (x102)
(x102b) edge[bend right=20] node[swap,pos=.4] {(1(\Gpptwoinv 1))1} (x103)
(x83) edge node {\iso} (x84)
(x84) edge node {(1\alpha_{f,G''})1} (x94)
(x104) edge node[swap] {(1\Gpptwo)1} (x94)
(x104) edge node {\ainv 1} (x114)
(x114) edge[bend left=20] node {\iso} (x103)
(x83) edge[bend left] node[pos=.15] {(1(\alpha_{f,G''}1))1} (x102b)
(x94) edge[bend left=10] node[swap,pos=.3] {\iso} (x102b)
(x104) edge[bend right=20] node[swap,pos=.4] {\iso} (x103)
;
\end{tikzpicture}\]
The unlabeled sub-diagram in $A_6$ is commutative by the naturality \eqref{lax-transformation-naturality} of $\beta'$ with respect to the $2$-cell $\alpha_f : (F'f)\alpha_X \to \alpha_Y(Ff)$ in $\A_2(FX,F'Y)$.
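Spelled out, and up to the orientation conventions of \eqref{lax-transformation-naturality}, this naturality square asserts that for a $2$-cell $\theta : p \to q$ between $1$-cells $p, q : FX \to F'Y$ in $\A_2$,
\[\big(1_{\beta'_{F'Y}}\,\theta_{G'}\big)\beta'_{p} \;=\; \beta'_{q}\big(\theta_{G''}\,1_{\beta'_{FX}}\big) : p_{G''}\,\beta'_{FX} \to \beta'_{F'Y}\,q_{G'},\]
here instantiated at $p = (F'f)\alpha_X$, $q = \alpha_Y(Ff)$, and $\theta = \alpha_f$.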
Next is the diagram $A_7$.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\def\h{1} \def\v{-1} \def\u{-2}
\draw[0cell]
(0,0) node (x131) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[(\alpha_Y f_F)_{G'}\beta_{FX}\big]}}
($(x131)+(0,-1)$) node (x141) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[(\alpha_{Y,G'} f_{G'F})\beta_{FX}\big]}}
($(x141)+(0,-2)$) node (x151) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[\alpha_{Y,G'} (f_{G'F}\beta_{FX})\big]}}
($(x151)+(0,-1)$) node (x161) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[\alpha_{Y,G'} (\beta_{FY}f_{GF})\big]}}
($(x161)+(0,-1)$) node (x171) {\ensuremath{(\alpha'_{Y,G''}\beta'_{F'Y})\big[(\alpha_{Y,G'} \beta_{FY})f_{GF}\big]}}
($(x171)+(0,-1)$) node (x181) {\ensuremath{\big[(\alpha'_{Y,G''}\beta'_{F'Y})(\alpha_{Y,G'} \beta_{FY})\big]f_{GF}}}
($(x131)+(1,0)$) node (x102) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}(\alpha_Y f_F)_{G'})\big]\beta_{FX}}}
($(x102)+(0,-1)$) node (x112) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}(\alpha_{Y,G'} f_{G'F}))\big]\beta_{FX}}}
($(x112)+(0,-1)$) node (x122) {\ensuremath{\big[\alpha'_{Y,G''}((\beta'_{F'Y}\alpha_{Y,G'}) f_{G'F})\big]\beta_{FX}}}
($(x122)+(0,-1)$) node (x132) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}\alpha_{Y,G'})\big]( f_{G'F}\beta_{FX})}}
($(x132)+(0,-1)$) node (x142) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}\alpha_{Y,G'})\big](\beta_{FY} f_{GF})}}
($(x142)+(0,-2)$) node (x152) {\ensuremath{\big[(\alpha'_{Y,G''}(\beta'_{F'Y}\alpha_{Y,G'}))\beta_{FY}\big]f_{GF}}}
($(x131)!.5!(x112)$) node {\nat}
($(x141)!.5!(x132)$) node {\MC}
($(x151)!.5!(x142)$) node {\nat}
($(x161)!.5!(x152)$) node {\MC}
;
\draw[1cell]
(x131) edge node[swap] {1(\Gptwoinv 1)} (x141)
(x141) edge node[swap] {1a} (x151)
(x151) edge node[swap] {1(1\beta_{f_F})} (x161)
(x161) edge node[swap] {1\ainv} (x171)
(x171) edge node[swap] {\ainv} (x181)
(x102) edge node {(1(1\Gptwoinv))1} (x112)
(x112) edge node {(1\ainv)1} (x122)
(x122) edge node {\iso} (x132)
(x132) edge node {1\beta_{f_F}} (x142)
(x142) edge node {\iso} (x152)
(x131) edge node {\iso} (x102)
(x141) edge node {\iso} (x112)
(x151) edge node {\iso} (x132)
(x161) edge node {\iso} (x142)
(x181) edge node {\iso} (x152)
;
\end{tikzpicture}\]
Next is the diagram $A_8$.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\def\h{1} \def\v{-1}
\draw[0cell]
(0,0) node (x102b) {\ensuremath{\big[\alpha'_{Y,G''}((\alpha_Y f_F)_{G''}\beta'_{FX})\big]\beta_{FX}}}
($(x102b)+(0,-1)$) node (x102) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}(\alpha_Y f_F)_{G'})\big]\beta_{FX}}}
($(x102)+(0,-1)$) node (x112) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}(\alpha_{Y,G'} f_{G'F}))\big]\beta_{FX}}}
($(x112)+(0,-1)$) node (x122) {\ensuremath{\big[\alpha'_{Y,G''}((\beta'_{F'Y}\alpha_{Y,G'}) f_{G'F})\big]\beta_{FX}}}
($(x122)+(0,-1)$) node (x132) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}\alpha_{Y,G'})\big]( f_{G'F}\beta_{FX})}}
($(x132)+(0,-1)$) node (x142) {\ensuremath{\big[\alpha'_{Y,G''}(\beta'_{F'Y}\alpha_{Y,G'})\big](\beta_{FY} f_{GF})}}
($(x142)+(0,-1)$) node (x152) {\ensuremath{\big[(\alpha'_{Y,G''}(\beta'_{F'Y}\alpha_{Y,G'}))\beta_{FY}\big]f_{GF}}}
($(x102b)+(1,0)$) node (x103) {\ensuremath{\big[\alpha'_{Y,G''}((\alpha_{Y,G''} f_{G''F})\beta'_{FX})\big]\beta_{FX}}}
($(x103)+(0,-1)$) node (x113) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} (f_{G''F}\beta'_{FX}))\big]\beta_{FX}}}
($(x113)+(0,-1)$) node (x123) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} (\beta'_{FY}f_{G'F}))\big]\beta_{FX}}}
($(x123)+(0,-1)$) node (x133) {\ensuremath{\big[\alpha'_{Y,G''}((\alpha_{Y,G''} \beta'_{FY})f_{G'F})\big]\beta_{FX}}}
($(x133)+(0,-1)$) node (x143) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} \beta'_{FY})\big](f_{G'F}\beta_{FX})}}
($(x143)+(0,-1)$) node (x153) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} \beta'_{FY})\big](\beta_{FY}f_{GF})}}
($(x153)+(0,-1)$) node (x163) {\ensuremath{\big[(\alpha'_{Y,G''}(\alpha_{Y,G''} \beta'_{FY}))\beta_{FY}\big]f_{GF}}}
($(x122)!.4!(x143)$) node {\nat}
($(x132)!.4!(x153)$) node {\midfour}
($(x142)!.4!(x163)$) node {\nat}
;
\draw[1cell]
(x102b) edge node[swap] {(1\beta'_{\alphay f_F})1} (x102)
(x102) edge node[swap] {(1(1\Gptwoinv))1} (x112)
(x112) edge node[swap] {(1\ainv)1} (x122)
(x122) edge node[swap] {\iso} (x132)
(x132) edge node[swap] {1\beta_{f_F}} (x142)
(x142) edge node[swap] {\iso} (x152)
(x103) edge node {(1a)1} (x113)
(x113) edge node {(1(1\beta'_{f_F}))1} (x123)
(x123) edge node {(1\ainv)1} (x133)
(x133) edge node {\iso} (x143)
(x143) edge node {1\beta_{f_F}} (x153)
(x153) edge node {\iso} (x163)
(x102b) edge node {(1(\Gpptwoinv 1))1} (x103)
(x122) edge node {(1(\betapinv_{\alphay} 1))1} (x133)
(x132) edge node {(1\betapinv_{\alphay})1} (x143)
(x142) edge node {(1\betapinv_{\alphay})1} (x153)
(x152) edge node {((1\betapinv_{\alphay})1)1} (x163)
;
\end{tikzpicture}\]
The top square in $A_8$ is commutative by the lax naturality \eqref{2-cell-transformation} of $\beta'$ applied to the composable $1$-cells $Ff : FX \to FY$ and $\alphay : FY \to F'Y$ in $\A_2$.
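Up to the orientation conventions fixed in \eqref{2-cell-transformation}, this lax naturality constraint at composable $1$-cells $u : X \to Y$ and $v : Y \to Z$ in $\A_2$ is the equality of pasted $2$-cells
\[\beta'_{vu}\big((G'')^2_{v,u}\,1_{\beta'_X}\big) \;=\; \big(1_{\beta'_Z}\,(G')^2_{v,u}\big)\,a\,\big(\beta'_{v}\,1_{G'u}\big)\,a^{-1}\,\big(1_{G''v}\,\beta'_{u}\big)\,a : \big(G''v\,G''u\big)\beta'_X \to \beta'_Z\,G'(vu),\]
here instantiated at $u = Ff$ and $v = \alphay$.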
Next is the diagram $A_9$.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\def\h{1} \def\v{-1} \def\u{-2}
\draw[0cell]
(0,0) node (x103) {\ensuremath{\big[\alpha'_{Y,G''}((\alpha_{Y,G''} f_{G''F})\beta'_{FX})\big]\beta_{FX}}}
($(x103)+(0,-1)$) node (x113) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} (f_{G''F}\beta'_{FX}))\big]\beta_{FX}}}
($(x113)+(0,-1)$) node (x123) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} (\beta'_{FY}f_{G'F}))\big]\beta_{FX}}}
($(x123)+(0,-1)$) node (x133) {\ensuremath{\big[\alpha'_{Y,G''}((\alpha_{Y,G''} \beta'_{FY})f_{G'F})\big]\beta_{FX}}}
($(x133)+(0,-1)$) node (x143) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} \beta'_{FY})\big](f_{G'F}\beta_{FX})}}
($(x143)+(0,-1)$) node (x153) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} \beta'_{FY})\big](\beta_{FY}f_{GF})}}
($(x153)+(0,-1)$) node (x163) {\ensuremath{\big[(\alpha'_{Y,G''}(\alpha_{Y,G''} \beta'_{FY}))\beta_{FY}\big]f_{GF}}}
($(x103)+(1,0)$) node (x114) {\ensuremath{\big[(\alpha'_{Y,G''}\alpha_{Y,G''}) f_{G''F}\big](\beta'_{FX}\beta_{FX})}}
($(x114)+(0,-1)$) node (x124) {\ensuremath{(\alpha'_{Y,G''}\alpha_{Y,G''})\big[(f_{G''F}\beta'_{FX})\beta_{FX}\big]}}
($(x124)+(0,-1)$) node (x134) {\ensuremath{(\alpha'_{Y,G''}\alpha_{Y,G''})\big[(\beta'_{FY}f_{G'F})\beta_{FX}\big]}}
($(x134)+(0,-2)$) node (x144) {\ensuremath{(\alpha'_{Y,G''}\alpha_{Y,G''})\big[\beta'_{FY}(f_{G'F}\beta_{FX})\big]}}
($(x144)+(0,-1)$) node (x154) {\ensuremath{(\alpha'_{Y,G''}\alpha_{Y,G''})\big[\beta'_{FY}(\beta_{FY}f_{GF})\big]}}
($(x154)+(0,-1)$) node (x164) {\ensuremath{\big[(\alpha'_{Y,G''}\alpha_{Y,G''})(\beta'_{FY}\beta_{FY})\big]f_{GF}}}
($(x103)!.5!(x124)$) node {\MC}
($(x113)!.5!(x134)$) node {\nat}
($(x123)!.5!(x144)$) node {\MC}
($(x143)!.5!(x154)$) node {\nat}
($(x153)!.5!(x164)$) node {\MC}
;
\draw[1cell]
(x103) edge node[swap] {(1a)1} (x113)
(x113) edge node[swap] {(1(1\beta'_{f_F}))1} (x123)
(x123) edge node[swap] {(1\ainv)1} (x133)
(x133) edge node[swap] {\iso} (x143)
(x143) edge node[swap] {1\beta_{f_F}} (x153)
(x153) edge node[swap] {\iso} (x163)
(x114) edge node {\iso} (x124)
(x124) edge node {1(\beta'_{f_F}1)} (x134)
(x134) edge node {1a} (x144)
(x144) edge node {1(1\beta_{f_F})} (x154)
(x154) edge node {\iso} (x164)
(x114) edge node[swap] {\iso} (x103)
(x113) edge node {\iso} (x124)
(x123) edge node {\iso} (x134)
(x143) edge node {\iso} (x144)
(x153) edge node {\iso} (x154)
(x163) edge node {\iso*1} (x164)
;
\end{tikzpicture}\]
Next is the diagram $A_{10}$.
\[\begin{tikzpicture}[xscale=7, yscale=1.5]
\def\h{1} \def\v{-1} \def\u{-2}
\draw[0cell]
(0,0) node (x94) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_Y f_F)_{G''}\big](\beta'_{FX}\beta_{FX})}}
($(x94)+(0,-1)$) node (x104) {\ensuremath{\big[\alpha'_{Y,G''}(\alpha_{Y,G''} f_{G''F})\big](\beta'_{FX}\beta_{FX})}}
($(x104)+(0,-1)$) node (x114) {\ensuremath{\big[(\alpha'_{Y,G''}\alpha_{Y,G''}) f_{G''F}\big](\beta'_{FX}\beta_{FX})}}
($(x114)+(0,-2)$) node (x124) {\ensuremath{(\alpha'_{Y,G''}\alpha_{Y,G''})\big[(f_{G''F}\beta'_{FX})\beta_{FX}\big]}}
($(x124)+(0,-1)$) node (x134) {\ensuremath{(\alpha'_{Y,G''}\alpha_{Y,G''})\big[(\beta'_{FY}f_{G'F})\beta_{FX}\big]}}
($(x134)+(0,-1)$) node (x144) {\ensuremath{(\alpha'_{Y,G''}\alpha_{Y,G''})\big[\beta'_{FY}(f_{G'F}\beta_{FX})\big]}}
($(x144)+(0,-1)$) node (x154) {\ensuremath{(\alpha'_{Y,G''}\alpha_{Y,G''})\big[\beta'_{FY}(\beta_{FY}f_{GF})\big]}}
($(x154)+(0,-2)$) node (x164) {\ensuremath{\big[(\alpha'_{Y,G''}\alpha_{Y,G''})(\beta'_{FY}\beta_{FY})\big]f_{GF}}}
($(x94)+(1,0)$) node (x75) {\ensuremath{\big[\alpha'_Y (\alpha_Y f_{F})\big]_{G''}(\beta'_{FX}\beta_{FX})}}
($(x75)+(0,-1)$) node (x85) {\ensuremath{\big[(\alpha'_Y\alpha_Y)f_{F}\big]_{G''}(\beta'_{FX}\beta_{FX})}}
($(x85)+(0,-1)$) node (x95) {\ensuremath{\big[(\alpha'_Y\alpha_Y)_{G''}f_{G''F}\big](\beta'_{FX}\beta_{FX})}}
($(x95)+(0,-1)$) node (x105) {\ensuremath{(\alpha'_Y\alpha_Y)_{G''}\big[f_{G''F}(\beta'_{FX}\beta_{FX})\big]}}
($(x105)+(0,-1)$) node (x115) {\ensuremath{(\alpha'_Y\alpha_Y)_{G''}\big[(f_{G''F}\beta'_{FX})\beta_{FX}\big]}}
($(x115)+(0,-1)$) node (x125) {\ensuremath{(\alpha'_Y\alpha_Y)_{G''}\big[(\beta'_{FY}f_{G'F})\beta_{FX}\big]}}
($(x125)+(0,-1)$) node (x135) {\ensuremath{(\alpha'_Y\alpha_Y)_{G''}\big[\beta'_{FY}(f_{G'F}\beta_{FX})\big]}}
($(x135)+(0,-1)$) node (x145) {\ensuremath{(\alpha'_Y\alpha_Y)_{G''}\big[\beta'_{FY}(\beta_{FY}f_{GF})\big]}}
($(x145)+(0,-1)$) node (x155) {\ensuremath{(\alpha'_Y\alpha_Y)_{G''}\big[(\beta'_{FY}\beta_{FY})f_{GF}\big]}}
($(x155)+(0,-1)$) node (x165) {\ensuremath{\big[(\alpha'_Y\alpha_Y)_{G''}(\beta'_{FY}\beta_{FY})\big]f_{GF}}}
($(x114)!.5!(x115)$) node {\nat}
($(x124)!.4!(x125)$) node {\midfour}
($(x134)!.4!(x135)$) node {\midfour}
($(x144)!.4!(x145)$) node {\midfour}
($(x154)!.5!(x165)$) node {\nat}
;
\draw[1cell]
(x104) edge node {(1\Gpptwo)1} (x94)
(x104) edge node[swap] {\ainv 1} (x114)
(x114) edge node[swap] {\iso} (x124)
(x124) edge node[swap] {1(\beta'_{f_F}1)} (x134)
(x134) edge node[swap] {1a} (x144)
(x144) edge node[swap] {1(1\beta_{f_F})} (x154)
(x154) edge node[swap] {\iso} (x164)
(x75) edge node {\ainv_{G''}1} (x85)
(x85) edge node {\Gpptwoinv 1} (x95)
(x95) edge node {a} (x105)
(x95) edge[bend right] node[swap,pos=.3] {\iso} node[pos=.2] {\MC} (x115)
(x105) edge node {1\ainv} (x115)
(x115) edge node {1(\beta'_{f_F}1)} (x125)
(x125) edge node {1a} (x135)
(x135) edge node {1(1\beta_{f_F})} (x145)
(x145) edge node {1\ainv} (x155)
(x145) edge[bend right] node[swap,pos=.3] {\iso} node[pos=.2] {\MC} (x165)
(x155) edge node {\ainv} (x165)
(x94) edge node {\Gpptwo 1} (x75)
(x114) edge node {(\Gpptwo 1)1} (x95)
(x124) edge node {\Gpptwo 1} (x115)
(x134) edge node {\Gpptwo 1} (x125)
(x144) edge node {\Gpptwo 1} (x135)
(x154) edge node {\Gpptwo 1} (x145)
(x164) edge node {(\Gpptwo 1)1} (x165)
;
\end{tikzpicture}\]
The top square in $A_{10}$ is commutative by the lax associativity \eqref{f2-bicat} of $G''$.
We have proved that the diagram \eqref{tensortwo-mod-axiom} is commutative, so $\tensortwo$ is an invertible modification.
\end{proof}
Next we show that $\tensortwo$ is natural with respect to $2$-cells.
\begin{lemma}\label{tensortwo-iicell-natural}
The modification $\tensortwo$ in \eqref{tensortwo-component} is natural in the sense of \eqref{f2-bicat-naturality}.
\end{lemma}
\begin{proof}
In the context of \Cref{def:tensortwo}, consider arbitrary
\begin{itemize}
\item strong transformations $\alpha_1,\alpha'_1,\beta_1$, and $\beta'_1$, and
\item modifications $\Gamma, \Gamma', \Sigma$, and $\Sigma'$,
\end{itemize}
as displayed below.
\[\begin{tikzpicture}[xscale=3, yscale=2]
\def\h{1} \def\l{2} \def\p{.15} \def\q{75} \def\r{.3} \def\t{.7}
\def\c{.35} \def\d{.65} \def\s{.6}
\draw[0cell]
(0,0) node (x11) {\A_1}
($(x11)+(1,0)$) node (x12) {\A_2}
($(x12)+(1,0)$) node (x13) {\A_3}
;
\draw[1cell]
(x11) edge[bend left=\q, looseness=\l] node (f) {F} (x12)
(x11) edge node[pos=\p] {F'} (x12)
(x11) edge[bend right=\q, looseness=\l] node[swap] (fpp) {F''} (x12)
(x12) edge[bend left=\q, looseness=\l] node (g) {G} (x13)
(x12) edge node[pos=\p] {G'} (x13)
(x12) edge[bend right=\q, looseness=\l] node[swap] (gpp) {G''} (x13)
;
\draw[2cell]
node[between=x11 and x12 at .35, shift={(0,.6)}, rotate=-90, 2label={below,\alpha}] {\Rightarrow}
node[between=x11 and x12 at \d, shift={(0,.6)}, rotate=-90, 2label={above,\alpha_1}] {\Rightarrow}
node[between=x11 and x12 at .35, shift={(0,-.6)}, rotate=-90, 2label={below,\alpha'}] {\Rightarrow}
node[between=x11 and x12 at \d, shift={(0,-.6)}, rotate=-90, 2label={above,\alpha'_1}] {\Rightarrow}
node[between=x12 and x13 at .35, shift={(0,.6)}, rotate=-90, 2label={below,\beta}] {\Rightarrow}
node[between=x12 and x13 at \d, shift={(0,.6)}, rotate=-90, 2label={above,\beta_1}] {\Rightarrow}
node[between=x12 and x13 at .35, shift={(0,-.6)}, rotate=-90, 2label={below,\beta'}] {\Rightarrow}
node[between=x12 and x13 at \d, shift={(0,-.6)}, rotate=-90, 2label={above,\beta'_1}] {\Rightarrow}
node[between=f and fpp at .3, 2label={above,\Gamma}] {\Rrightarrow}
node[between=f and fpp at .7, 2label={above,\Gamma'}] {\Rrightarrow}
node[between=g and gpp at .3, 2label={above,\Sigma}] {\Rrightarrow}
node[between=g and gpp at .7, 2label={above,\Sigma'}] {\Rrightarrow}
;
\end{tikzpicture}\]
The naturality \eqref{f2-bicat-naturality} of $\tensortwo$ means the commutativity of the diagram
\[\begin{tikzpicture}[xscale=6, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {(\beta'\tensor\alpha')(\beta\tensor\alpha)}
($(x11)+(1,0)$) node (x12) {(\beta'_1\tensor\alpha'_1)(\beta_1\tensor\alpha_1)}
($(x11)+(0,-1)$) node (x21) {(\beta'\beta)\tensor(\alpha'\alpha)}
($(x21)+(1,0)$) node (x22) {(\beta'_1\beta_1)\tensor(\alpha'_1\alpha_1)}
;
\draw[1cell]
(x11) edge node[swap] {\tensortwo} (x21)
(x12) edge node {\tensortwo} (x22)
(x11) edge node {(\Sigma'\tensor\Gamma')*(\Sigma\tensor\Gamma)} (x12)
(x21) edge node {(\Sigma'*\Sigma)\tensor(\Gamma'*\Gamma)} (x22)
;
\end{tikzpicture}\]
of modifications, with:
\begin{itemize}
\item $\Sigma\tensor\Gamma$ the composite modification in \Cref{def:transformation-tensor};
\item $*$ the horizontal composite of modifications in \Cref{def:modification-composition}.
\end{itemize}
In other words, for each object $X$ in $\A_1$, we must show the commutativity of the diagram
\begin{equation}\label{tensortwo-natural}
\begin{tikzpicture}[xscale=6.5, yscale=1.5, baseline={(t.base)}]
\draw[0cell]
(0,0) node (x11) {(\beta'\tensor\alpha')_X (\beta\tensor\alpha)_X}
($(x11)+(1,0)$) node (x12) {(\beta'_1\tensor\alpha'_1)_X (\beta_1\tensor\alpha_1)_X}
($(x11)+(0,-1)$) node (x21) {\big[(\beta'\beta)\tensor(\alpha'\alpha)\big]_X}
($(x21)+(1,0)$) node (x22) {\big[(\beta'_1\beta_1)\tensor(\alpha'_1\alpha_1)\big]_X}
;
\draw[1cell]
(x11) edge node[swap] (t) {\tensortwo_X} (x21)
(x12) edge node {\tensortwo_X} (x22)
(x11) edge node {(\Sigma'\tensor\Gamma')_X*(\Sigma\tensor\Gamma)_X} (x12)
(x21) edge node {[(\Sigma'*\Sigma)\tensor(\Gamma'*\Gamma)]_X} (x22)
;
\end{tikzpicture}
\end{equation}
of vertical composites of $2$-cells in $\A_3(GFX,G''F''X)$.
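Equivalently, written as a single equation of $2$-cells with $\circ$ denoting vertical composition, the commutativity of \eqref{tensortwo-natural} states
\begin{equation*}
\big[(\Sigma'*\Sigma)\tensor(\Gamma'*\Gamma)\big]_X \circ \tensortwo_X
= \tensortwo_X \circ \big[(\Sigma'\tensor\Gamma')_X * (\Sigma\tensor\Gamma)_X\big].
\end{equation*}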
Expanding the boundary using \eqref{modification-hcomp}, \eqref{mod-composite-component}, and \eqref{tensortwo-x}, along with \Cref{conv:large-diagram}, the diagram \eqref{tensortwo-natural} is the outermost diagram below.
\begin{equation}\label{tensortwo-natural-ii}
\begin{tikzpicture}[xscale=8, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {(\alpha'_{X,G''}\beta'_{F'X})(\alpha_{X,G'}\beta_{FX})}
($(x11)+(1,0)$) node (x12) {(\alpha'_{1,X,G''}\beta'_{1,F'X})(\alpha_{1,X,G'}\beta_{1,FX})}
($(x11)+(0,-1)$) node (x21) {\big[\alpha'_{X,G''}(\beta'_{F'X}\alpha_{X,G'})\big]\beta_{FX}}
($(x21)+(1,0)$) node (x22) {\big[\alpha'_{1,X,G''}(\beta'_{1,F'X}\alpha_{1,X,G'})\big]\beta_{1,FX}}
($(x21)+(0,-1)$) node (x31) {\big[\alpha'_{X,G''}(\alpha_{X,G''}\beta'_{FX})\big]\beta_{FX}}
($(x31)+(1,0)$) node (x32) {\big[\alpha'_{1,X,G''}(\alpha_{1,X,G''}\beta'_{1,FX})\big]\beta_{1,FX}}
($(x31)+(0,-1)$) node (x41) {(\alpha'_{X,G''}\alpha_{X,G''})(\beta'_{FX}\beta_{FX})}
($(x41)+(1,0)$) node (x42) {(\alpha'_{1,X,G''}\alpha_{1,X,G''})(\beta'_{1,FX}\beta_{1,FX})}
($(x41)+(0,-1)$) node (x51) {(\alpha'_X\alpha_X)_{G''}(\beta'_{FX}\beta_{FX})}
($(x51)+(1,0)$) node (x52) {(\alpha'_{1,X}\alpha_{1,X})_{G''}(\beta'_{1,FX}\beta_{1,FX})}
($(x11)!.4!(x22)$) node {\nat}
($(x21)!.4!(x32)$) node {(\spadesuit)}
($(x31)!.4!(x42)$) node {\nat}
($(x41)!.4!(x52)$) node {\nat}
;
\draw[1cell]
(x11) edge node[swap] {\iso} (x21)
(x21) edge node[swap] {(1\betapinv_{\alpha_X})1} (x31)
(x31) edge node[swap] {\iso} (x41)
(x41) edge node[swap] {\Gpptwo 1} (x51)
(x12) edge node {\iso} (x22)
(x22) edge node {(1\betapinv_{1,\alpha_{1,X}})1} (x32)
(x32) edge node {\iso} (x42)
(x42) edge node {\Gpptwo 1} (x52)
(x11) edge node {(\Gamma'_{X,G''}\Sigma'_{F'X})(\Gamma_{X,G'}\Sigma_{FX})} (x12)
(x21) edge node {(\Gamma'_{X,G''}(\Sigma'_{F'X}\Gamma_{X,G'}))\Sigma_{FX}} (x22)
(x31) edge node {(\Gamma'_{X,G''}(\Gamma_{X,G''}\Sigma'_{FX}))\Sigma_{FX}} (x32)
(x41) edge node {(\Gamma'_{X,G''}\Gamma_{X,G''})(\Sigma'_{FX}\Sigma_{FX})} (x42)
(x51) edge node {(\Gamma'_{X}\Gamma_X)_{G''}(\Sigma'_{FX}\Sigma_{FX})} (x52)
;
\end{tikzpicture}
\end{equation}
By the middle four exchange \eqref{middle-four}, to show that the sub-diagram $(\spadesuit)$ is commutative, it suffices to show that the following diagram is commutative.
\[\begin{tikzpicture}[xscale=8, yscale=1.5]
\def\i{.3} \def\j{.4}
\draw[0cell]
(0,0) node (x11) {\beta'_{F'X}\alpha_{X,G'}}
($(x11)+(1,0)$) node (x12) {\beta'_{1,F'X}\alpha_{1,X,G'}}
($(x11)+(1/2,-1)$) node (x21) {\beta'_{F'X}\alpha_{1,X,G'}}
($(x21)+(0,-1)$) node (x31) {\alpha_{1,X,G''}\beta'_{FX}}
($(x11)+(0,-3)$) node (x41) {\alpha_{X,G''}\beta'_{FX}}
($(x41)+(1,0)$) node (x42) {\alpha_{1,X,G''}\beta'_{1,FX}}
($(x11)+(1/2,-\i)$) node {\midfour}
($(x41)+(1/2,\i)$) node {\midfour}
;
\draw[1cell]
(x11) edge node {\Sigma'_{F'X}\Gamma_{X,G'}} (x12)
(x12) edge node {\betapinv_{1,\alpha_{1,X}}} (x42)
(x11) edge node[swap] {\betapinv_{\alpha_{X}}} (x41)
(x41) edge node[swap] {\Gamma_{X,G''}\Sigma'_{FX}} (x42)
(x21) edge node[swap] {\betapinv_{\alpha_{1,X}}} (x31)
(x11) edge node[swap] {1\Gamma_{X,G'}} (x21)
(x21) edge node[swap] {\Sigma'_{F'X}1} (x12)
(x41) edge node {\Gamma_{X,G''}1} (x31)
(x31) edge node {1\Sigma'_{FX}} (x42)
;
\end{tikzpicture}\]
\begin{itemize}
\item The left trapezoid above is commutative by the naturality \eqref{lax-transformation-naturality} of $\beta' : G'\to G''$ with respect to the $2$-cell $\Gamma_X : \alpha_X \to \alpha_{1,X}$ in $\A_2(FX,F'X)$.
\item The right trapezoid is commutative by the modification axiom \eqref{modification-axiom-pasting} of $\Sigma' : \beta' \to \beta'_1$ for the $1$-cell $\alpha_{1,X} \in \A_2(FX,F'X)$.
\end{itemize}
We have shown that the diagram \eqref{tensortwo-natural} is commutative.
\end{proof}
Next we show that $\tensortwo$ is lax associative.
\begin{lemma}\label{tensortwo-lax-associative}
The modification $\tensortwo$ in \eqref{tensortwo-component} satisfies the lax associativity axiom \eqref{f2-bicat}.
\end{lemma}
\begin{proof}
In the context of \Cref{def:tensortwo}, consider arbitrary
\begin{itemize}
\item pseudofunctors $F'''$ and $G'''$, and
\item strong transformations $\alpha''$ and $\beta''$,
\end{itemize}
as displayed below.
\[\begin{tikzpicture}[xscale=3.5, yscale=2]
\def\l{2} \def\p{.3} \def\q{75} \def\cp{.5} \def\cpp{.75}
\draw[0cell]
(0,0) node (x11) {\A_1}
($(x11)+(1,0)$) node (x12) {\A_2}
($(x12)+(1,0)$) node (x13) {\A_3}
;
\draw[1cell]
(x11) edge[bend left=\q, looseness=\l] node (f) {F} (x12)
(x11) edge[bend left=35] node[pos=\p] {F'} (x12)
(x11) edge[bend right=35] node[pos=\p,swap] {F''} (x12)
(x11) edge[bend right=\q, looseness=\l] node[swap] (fppp) {F'''} (x12)
(x12) edge[bend left=\q, looseness=\l] node (g) {G} (x13)
(x12) edge[bend left=35] node[pos=\p] {G'} (x13)
(x12) edge[bend right=35] node[pos=\p,swap] {G''} (x13)
(x12) edge[bend right=\q, looseness=\l] node[swap] (gppp) {G'''} (x13)
;
\draw[2cell]
node[between=f and fppp at .25, shift={(-.1,0)}, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
node[between=f and fppp at \cp, shift={(-.1,0)}, rotate=-90, 2label={above,\alpha'}] {\Rightarrow}
node[between=f and fppp at \cpp, shift={(-.1,0)}, rotate=-90, 2label={above,\alpha''}] {\Rightarrow}
node[between=g and gppp at .25, shift={(-.1,0)}, rotate=-90, 2label={above,\beta}] {\Rightarrow}
node[between=g and gppp at \cp, shift={(-.1,0)}, rotate=-90, 2label={above,\beta'}] {\Rightarrow}
node[between=g and gppp at \cpp, shift={(-.1,0)}, rotate=-90, 2label={above,\beta''}] {\Rightarrow}
;
\end{tikzpicture}\]
The lax associativity axiom \eqref{f2-bicat} for $\tensortwo$ means the commutativity of the diagram
\begin{equation}\label{tensortwo-laxas}
\begin{tikzcd}
\big[(\beta''\tensor\alpha'')(\beta'\tensor\alpha')\big](\beta\tensor\alpha) \ar{r}{a} \ar{d}[swap]{\tensortwo*1} & (\beta''\tensor\alpha'')\big[(\beta'\tensor\alpha')(\beta\tensor\alpha)\big] \ar{d}{1*\tensortwo}\\
\big[(\beta''\beta') \tensor (\alpha''\alpha')\big](\beta\tensor\alpha) \ar{d}[swap]{\tensortwo} & (\beta''\tensor\alpha'')\big[(\beta'\beta)\tensor (\alpha'\alpha)\big] \ar{d}{\tensortwo}\\
\big[(\beta''\beta')\beta\big] \tensor \big[(\alpha''\alpha')\alpha\big] \ar{r}{a\tensor a} & \big[\beta''(\beta'\beta)\big] \tensor \big[\alpha''(\alpha'\alpha)\big]
\end{tikzcd}
\end{equation}
of vertical composites of modifications. Therefore, we must show that, when evaluated at each object $X$ in $\A_1$, the two vertical composites
\begin{equation}\label{tensortwo-laxas-ii}
\begin{tikzpicture}[xscale=6.5,yscale=1.5]
\draw[0cell]
(0,0) node (x11) {\big[(\alpha''_{X,G'''}\beta''_{F''X})(\alpha'_{X,G''}\beta'_{F'X})\big](\alpha_{X,G'}\beta_{FX})}
($(x11)+(1,0)$) node (x12) {(\alpha''_{X,G'''}\beta''_{F''X})\big[(\alpha'_{X,G''}\beta'_{F'X}) (\alpha_{X,G'}\beta_{FX})\big]}
($(x11)+(0,-1)$) node (x21) {\big[(\alpha''_X\alpha'_X)_{G'''}(\beta''_{F'X}\beta'_{F'X})\big](\alpha_{X,G'}\beta_{FX})}
($(x21)+(1,0)$) node (x22) {(\alpha''_{X,G'''}\beta''_{F''X})\big[(\alpha'_X\alpha_X)_{G''}(\beta'_{FX}\beta_{FX})\big]}
($(x21)+(0,-1)$) node (x31) {\big((\alpha''_X\alpha'_X)\alpha_X\big)_{G'''}\big((\beta''_{FX}\beta'_{FX})\beta_{FX}\big)}
($(x31)+(1,0)$) node (x32) {\big(\alpha''_X(\alpha'_X\alpha_X)\big)_{G'''}\big(\beta''_{FX}(\beta'_{FX}\beta_{FX})\big)}
;
\draw[1cell]
(x11) edge node[swap] {\tensortwo_X*1} (x21)
(x21) edge node[swap] {\tensortwo_X} (x31)
(x12) edge node {1*\tensortwo_X} (x22)
(x22) edge node {\tensortwo_X} (x32)
(x11) edge node {a^{\C}} (x12)
(x31) edge node {a^{\B}_{G'''} * a^{\C}} (x32)
;
\end{tikzpicture}
\end{equation}
of $2$-cells are equal in $\A_3(GFX,G'''F'''X)$. Using \eqref{tensortwo-x}:
\begin{itemize}
\item Each of the top left vertical arrow and the two right vertical arrows is a composite of four $2$-cells.
\item On the other hand, the lower left vertical arrow is a composite of eight $2$-cells because $(\beta''\beta')_{\alpha_X}$ is a composite of five $2$-cells by \eqref{transf-hcomp-iicell}.
\end{itemize}
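For reference, written as a single equation of modifications with $\circ$ denoting vertical composition, the lax associativity diagram \eqref{tensortwo-laxas} asserts
\begin{equation*}
(a\tensor a)\circ\tensortwo\circ(\tensortwo*1)
= \tensortwo\circ(1*\tensortwo)\circ a.
\end{equation*}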
By \eqref{tensortwo-pasting}, the left-bottom composite in \eqref{tensortwo-laxas-ii} is equal to the composite of the pasting diagram
\[\begin{tikzpicture}[xscale=5,yscale=1.5]
\draw[0cell]
(0,0) node (gf) {GFX}
($(gf)+(0,-1)$) node (gpf) {G'FX}
($(gpf)+(0,-2)$) node (gpfp) {G'F'X}
($(gpfp)+(1,0)$) node (gppfp) {G''F'X}
($(gppfp)+(1,0)$) node (gppfpp) {G''F''X}
($(gppfpp)+(0,1.5)$) node (gpppfpp) {G'''F''X}
($(gpppfpp)+(0,1.5)$) node (gpppfppp) {G'''F'''X}
($(gpf)+(1/2,-1)$) node (gppf) {G''FX}
($(gf)+(1/2,0)$) node (gpppf) {G'''FX}
;
\draw[0cell]
($(gppf)+(1/2,1/2)$) node (gpppfp) {G'''F'X}
;
\draw[1cell]
(gf) edge node[swap] {\beta_{FX}} (gpf)
(gpf) edge node[swap] {\alpha_{X,G'}} (gpfp)
(gpfp) edge node[swap] {\beta'_{F'X}} (gppfp)
(gppfp) edge node[swap] {\alpha'_{X,G''}} (gppfpp)
(gppfpp) edge node[swap] {\beta''_{F''X}} (gpppfpp)
(gpppfpp) edge node[swap] {\alpha''_{X,G'''}} (gpppfppp)
(gpf) edge node[pos=.3] {\beta'_{FX}} (gppf)
(gppf) edge node[pos=.7] {\beta''_{FX}} (gpppf)
(gpppf) edge[bend left=30] node {(\alpha''_X(\alpha'_X\alpha_X))_{G'''}} (gpppfppp)
;
\draw[1cell]
(gppf) edge node[swap] {\alpha_{X,G''}} (gppfp)
(gppfp) edge node[swap] {\beta''_{F'X}} (gpppfp)
(gpppfp) edge node[swap] {\alpha'_{X,G'''}} (gpppfpp)
(gpppfp) edge node[swap,pos=.3] {(\alpha''_X\alpha'_X)_{G'''}} (gpppfppp)
(gpppf) edge node[swap,pos=.65] {\alpha_{X,G'''}} (gpppfp)
(gpppf) edge[bend right=30] node[swap] {((\alpha''_X\alpha'_X)\alpha_X)_{G'''}} (gpppfppp)
;
\draw[2cell]
node[between=gpfp and gppf at .6, shift={(0,-.3)}, rotate=45, 2label={above,\betapinv_{\alpha_X}}] {\Rightarrow}
node[between=gppfp and gpppfpp at .5, shift={(0,-.4)}, rotate=135, 2label={below,\betappinv_{\alpha'_X}}] {\Rightarrow}
node[between=gppf and gpppfp at .4, shift={(0,-.4)}, rotate=120, 2label={below,\betappinv_{\alpha_X}}] {\Rightarrow}
node[between=gpppfp and gpppfpp at .75, shift={(0,.5)}, rotate=130, 2label={below,\Gppptwo}] {\Rightarrow}
node[between=gpppfp and gppfp at -.3, rotate=90, 2label={below,\Gppptwo}] {\Rightarrow}
node[between=gpppf and gpppfppp at .45, rotate=90, 2label={below,a^{\B}_{G'''}}] {\Rightarrow}
;
\end{tikzpicture}\]
whose domain and codomain have the bracketings of the upper left corner and the lower right corner in \eqref{tensortwo-laxas-ii}, respectively. By the lax associativity \eqref{f2-bicat-pasting} of $G'''$, the above pasting diagram is equal to the one below.
\[\begin{tikzpicture}[xscale=5,yscale=1.5]
\draw[0cell]
(0,0) node (gf) {GFX}
($(gf)+(0,-1)$) node (gpf) {G'FX}
($(gpf)+(0,-2)$) node (gpfp) {G'F'X}
($(gpfp)+(1,0)$) node (gppfp) {G''F'X}
($(gppfp)+(1,0)$) node (gppfpp) {G''F''X}
($(gppfpp)+(0,1.5)$) node (gpppfpp) {G'''F''X}
($(gpppfpp)+(0,1.5)$) node (gpppfppp) {G'''F'''X}
($(gpf)+(1/2,-1)$) node (gppf) {G''FX}
($(gf)+(1/2,0)$) node (gpppf) {G'''FX}
;
\draw[0cell]
($(gppf)+(1/2,1/2)$) node (gpppfp) {G'''F'X}
;
\draw[1cell]
(gf) edge node[swap] {\beta_{FX}} (gpf)
(gpf) edge node[swap] {\alpha_{X,G'}} (gpfp)
(gpfp) edge node[swap] {\beta'_{F'X}} (gppfp)
(gppfp) edge node[swap] {\alpha'_{X,G''}} (gppfpp)
(gppfpp) edge node[swap] {\beta''_{F''X}} (gpppfpp)
(gpppfpp) edge node[swap] {\alpha''_{X,G'''}} (gpppfppp)
(gpf) edge node[pos=.3] {\beta'_{FX}} (gppf)
(gppf) edge node[pos=.7] {\beta''_{FX}} (gpppf)
(gpppf) edge[bend left=30] node {(\alpha''_X(\alpha'_X\alpha_X))_{G'''}} (gpppfppp)
;
\draw[1cell]
(gppf) edge node[swap] {\alpha_{X,G''}} (gppfp)
(gppfp) edge node[swap] {\beta''_{F'X}} (gpppfp)
(gpppfp) edge node[swap] {\alpha'_{X,G'''}} (gpppfpp)
(gpppf) edge node[swap,pos=.65] {\alpha_{X,G'''}} (gpppfp)
(gpppf) edge node[pos=.4] {(\alpha'_X\alpha_X)_{G'''}} (gpppfpp)
;
\draw[2cell]
node[between=gpfp and gppf at .6, shift={(0,-.3)}, rotate=45, 2label={above,\betapinv_{\alpha_X}}] {\Rightarrow}
node[between=gppfp and gpppfpp at .5, shift={(0,-.3)}, rotate=135, 2label={below,\betappinv_{\alpha'_X}}] {\Rightarrow}
node[between=gppf and gpppfp at .4, shift={(0,-.4)}, rotate=120, 2label={below,\betappinv_{\alpha_X}}] {\Rightarrow}
node[between=gpppfp and gppfp at -.4, rotate=60, 2label={below,\Gppptwo}] {\Rightarrow}
node[between=gpppf and gpppfppp at .75, shift={(0,-.5)}, rotate=90, 2label={below,\Gppptwo}] {\Rightarrow}
;
\end{tikzpicture}\]
The previous pasting diagram is equal to the one below
\[\begin{tikzpicture}[xscale=5,yscale=1.5]
\draw[0cell]
(0,0) node (gf) {GFX}
($(gf)+(0,-1)$) node (gpf) {G'FX}
($(gpf)+(0,-2)$) node (gpfp) {G'F'X}
($(gpfp)+(1,0)$) node (gppfp) {G''F'X}
($(gppfp)+(1,0)$) node (gppfpp) {G''F''X}
($(gppfpp)+(0,1.5)$) node (gpppfpp) {G'''F''X}
($(gpppfpp)+(0,1.5)$) node (gpppfppp) {G'''F'''X}
($(gpf)+(1/2,-1)$) node (gppf) {G''FX}
($(gf)+(1/2,0)$) node (gpppf) {G'''FX}
;
\draw[1cell]
(gf) edge node[swap] {\beta_{FX}} (gpf)
(gpf) edge node[swap] {\alpha_{X,G'}} (gpfp)
(gpfp) edge node[swap] {\beta'_{F'X}} (gppfp)
(gppfp) edge node[swap] {\alpha'_{X,G''}} (gppfpp)
(gppfpp) edge node[swap] {\beta''_{F''X}} (gpppfpp)
(gpppfpp) edge node[swap] {\alpha''_{X,G'''}} (gpppfppp)
(gpf) edge node[pos=.3] {\beta'_{FX}} (gppf)
(gppf) edge node[pos=.7] {\beta''_{FX}} (gpppf)
(gpppf) edge[bend left=30] node {(\alpha''_X(\alpha'_X\alpha_X))_{G'''}} (gpppfppp)
;
\draw[1cell]
(gppf) edge node[swap] {\alpha_{X,G''}} (gppfp)
(gppf) edge[bend left=30] node[pos=.25] {(\alpha'_X\alpha_X)_{G''}} (gppfpp)
(gpppf) edge node[pos=.4] {(\alpha'_X\alpha_X)_{G'''}} (gpppfpp)
;
\draw[2cell]
node[between=gpfp and gppf at .6, shift={(0,-.3)}, rotate=45, 2label={above,\betapinv_{\alpha_X}}] {\Rightarrow}
node[between=gppfp and gpppfpp at .2, rotate=45, 2label={above,\Gpptwo}] {\Rightarrow}
node[between=gppf and gpppfpp at .7, shift={(0,-.4)}, rotate=135, 2label={below,\betappinv_{\alpha'_X\alpha_X}}] {\Rightarrow}
node[between=gpppf and gpppfppp at .75, shift={(0,-.5)}, rotate=90, 2label={below,\Gppptwo}] {\Rightarrow}
;
\end{tikzpicture}\]
by the lax naturality \eqref{2-cell-transformation-pasting} of $\beta''$ for the composable $1$-cells $\alpha_X$ and $\alpha'_X$, since both $\Gpptwo$ and $\Gppptwo$ are invertible.
By \eqref{tensortwo-pasting}, the composite of the previous pasting diagram is equal to the top-right composite in \eqref{tensortwo-laxas-ii}, which is therefore commutative.
\end{proof}
\section{Composition for Bicategories}
\label{sec:tensorzero}
In this section we finish the construction of a pseudofunctor
\[\begin{tikzcd}[column sep=large]
\Bicatps(\A_2,\A_3)\times\Bicatps(\A_1,\A_2) \ar{r}{(\tensor,\tensortwo,\tensorzero)} & \Bicatps(\A_1,\A_3)
\end{tikzcd}\]
by defining the lax unity constraint $\tensorzero$. The plan for this section is as follows. We:
\begin{itemize}
\item define $\tensorzero$ in \Cref{def:tensorzero};
\item prove that $\tensorzero$ is a natural isomorphism in \Cref{tensorzero-modification};
\item prove the lax left and right unity axioms \eqref{f0-bicat} in \Cref{tensorzero-laxunity}.
\end{itemize}
For a lax functor $F$, recall from \Cref{id-lax-transformation} the identity transformation $1_F : F \to F$, which is a strong transformation.
\begin{definition}\label{def:tensorzero}
Suppose $F$ and $G$ as in
\[\begin{tikzcd}
\A_1 \ar{r}{F} & \A_2 \ar{r}{G} & \A_3\end{tikzcd}\]
are pseudofunctors between bicategories. Define $\tensorzero$ as in
\begin{equation}\label{tensorzero}
\begin{tikzpicture}[xscale=3,yscale=1.5,baseline={(x1.base)}]
\def\q{45}
\draw[0cell]
(0,0) node (x1) {GF}
($(x1)+(1,0)$) node (x2) {GF}
;
\draw[1cell]
(x1) edge[bend left=\q] node {1_{GF}} (x2)
(x1) edge[bend right=\q] node[swap] {1_G \tensor 1_F} (x2)
;
\draw[2cell]
node[between=x1 and x2 at .45, rotate=-90, 2label={above,\tensorzero}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
with component $2$-cells the vertical composites
\begin{equation}\label{tensorzero-x}
\begin{tikzpicture}[xscale=4.5,yscale=1,vcenter]
\def\q{45}
\draw[0cell]
(0,0) node (x1) {(1_{GF})_X = 1_{GFX}}
($(x1)+(1,0)$) node (x3) {1_{FX,G}1_{GFX} = (1_G\tensor 1_F)_X}
($(x1)+(1/2,-1)$) node (x2) {1_{GFX}1_{GFX}}
($(x1)+(0,-1)$) node[inner sep=0pt] (a) {}
($(x3)+(0,-1)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x1) edge node {\tensorzero_X} (x3)
(x1) edge[-,shorten >=-1pt] node[swap,pos=.7] {\ellinv_{1_{GFX}}} (a)
(a) edge[shorten <=-1pt] (x2)
(x2) edge[-,shorten >=-1pt] (b)
(b) edge[shorten <=-1pt] node[swap,pos=.3] {G^0_{FX}*1_{1_{GFX}}} (x3)
;
\end{tikzpicture}
\end{equation}
in $\A_3(GFX,GFX)$ for objects $X\in\A_1$.
\end{definition}
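Equivalently, unraveling \eqref{tensorzero-x}, the component of $\tensorzero$ at an object $X$ is the vertical composite
\begin{equation*}
\tensorzero_X = \big(G^0_{FX} * 1_{1_{GFX}}\big) \circ \ellinv_{1_{GFX}}
\colon 1_{GFX} \to 1_{FX,G}1_{GFX},
\end{equation*}
which first inserts an identity $1$-cell using $\ellinv$ and then applies the lax unity constraint of $G$.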
\begin{lemma}\label{tensorzero-modification}
$\tensorzero : 1_{GF} \to 1_G\tensor 1_F$ in \eqref{tensorzero} is an invertible modification.
\end{lemma}
\begin{proof}
Since $\Gzero$ is invertible, so is $\Gzero_{FX} * 1_{1_{GFX}}$ by \Cref{hcomp-invertible-2cells}. So each component $2$-cell of $\tensorzero$ is the vertical composite of two invertible $2$-cells in $\A_3$, which is invertible. By \eqref{idlaxtr-component}, \eqref{tensorzero-x}, and \Cref{conv:large-diagram}, the modification axiom \eqref{modification-axiom-pasting} for $\tensorzero$ means the commutativity of the diagram
\begin{equation}\label{tensorzero-modax}
\begin{tikzpicture}[xscale=4,yscale=1.6,baseline={(x21.base)}]
\def\z{.6}
\draw[0cell]
(0,0) node (x11) {f_{GF}1_{GFX}}
($(x11)+(1,0)$) node (x12) {f_{GF}(1_{GFX}1_{GFX})}
($(x12)+(1,0)$) node (x13) {f_{GF}(1_{FX,G}1_{GFX})}
($(x11)+(0,-1)$) node (x21) {f_{GF}}
($(x21)+(0,-1)$) node (x31) {1_{GFY}f_{GF}}
($(x31)+(1,0)$) node (x32) {(1_{GFY}1_{GFY})f_{GF}}
($(x32)+(1,0)$) node (x33) {(1_{FY,G}1_{GFY})f_{GF}}
($(x13)+(0,-2/3)$) node (xa3) {f_{GF}1_{GFX}}
($(xa3)+(0,-2/3)$) node (xb3) {1_{FY,G}f_{GF}}
($(x11)+(0,\z)$) node[inner sep=0pt] (t1) {}
($(x13)+(0,\z)$) node[inner sep=0pt] (t2) {}
($(x11)+(-.4,0)$) node[inner sep=0pt] (l1) {}
($(x31)+(-.4,0)$) node[inner sep=0pt] (l2) {}
($(x31)+(0,-\z)$) node[inner sep=0pt] (b1) {}
($(x33)+(0,-\z)$) node[inner sep=0pt] (b2) {}
($(x13)+(.5,0)$) node[inner sep=0pt] (r1) {}
($(x33)+(.5,0)$) node[inner sep=0pt] (r2) {}
($(x12)!.5!(xa3)$) node[shift={(30:.2)}] {B_1}
($(x12)!.5!(x32)$) node {B_2}
($(x32)!.5!(xb3)$) node[shift={(-30:.2)}] {B_3}
;
\draw[1cell]
(x11) edge node {1\ellinv_{1_{GFX}}} (x12)
(x12) edge node {1(\Gzero_{FX}1)} (x13)
(x11) edge[-,shorten >=-1pt] (t1)
(t1) edge[-,shorten <=-1pt,shorten >=-1pt] node {1*\tensorzero_X} (t2)
(t2) edge[shorten <=-1pt] (x13)
(x11) edge node[swap] {r_{f_{GF}}} (x21)
(x21) edge node[swap] {\ellinv_{f_{GF}}} (x31)
(x11) edge[-,shorten >=-1pt] (l1)
(l1) edge[-,shorten <=-1pt,shorten >=-1pt] node {(1_{GF})_f} (l2)
(l2) edge[shorten <=-1pt] (x31)
(x31) edge node[swap] {\ellinv_{1_{GFY}}1_{f_{GF}}} (x32)
(x32) edge node[swap] {(\Gzero_{FY}1)1} (x33)
(x31) edge[-,shorten >=-1pt] (b1)
(b1) edge[-,shorten <=-1pt,shorten >=-1pt] node {\tensorzero_Y* 1} (b2)
(b2) edge[shorten <=-1pt] (x33)
(x13) edge node[swap]{3} (xa3)
(xa3) edge node[swap]{4} (xb3)
(xb3) edge node[swap]{2} (x33)
(x13) edge[-,shorten >=-1pt] (r1)
(r1) edge[-,shorten <=-1pt,shorten >=-1pt] node[swap] {(1_G\tensor 1_F)_f} (r2)
(r2) edge[shorten <=-1pt] (x33)
(x11) edge node[swap,pos=.4] {1} (xa3)
(x31) edge node[pos=.4] {\Gzero_{FY}*1} (xb3)
;
\end{tikzpicture}
\end{equation}
in $\A_3(GFX,GFX)$ for each $1$-cell $f \in \A_1(X,Y)$. The right edge $(1_G\tensor 1_F)_f$ decomposes into nine $2$-cells by \eqref{idlaxtr-component} and \eqref{composite-tr-iicell-eq}, as indicated by the numbers 2, 3, and 4 decorating the arrows. It remains to show that the sub-diagrams $B_1$, $B_2$, and $B_3$ are commutative.
Next is the diagram $B_1$, rearranged into a rectangle.
\[\begin{tikzpicture}[xscale=8,yscale=1.5]
\draw[0cell]
(0,0) node (A) {f_{GF}1_{GFX}}
($(A)+(0,1)$) node (B) {f_{GF}(1_{GFX}1_{GFX})}
($(B)+(0,1)$) node (C) {f_{GF}(1_{FX,G}1_{GFX})}
($(C)+(1,0)$) node (D) {(f_{GF}1_{FX,G})1_{GFX}}
($(D)+(0,-1)$) node (E) {(f_F 1_{FX})_{G}1_{GFX}}
($(E)+(0,-1)$) node (F) {f_{GF}1_{GFX}}
($(A)+(1/2,1)$) node (G) {(f_{GF}1_{GFX})1_{GFX}}
($(A)+(1/2,1/4)$) node {\midfour}
($(C)!.4!(G)$) node {\nat}
;
\draw[1cell]
(A) edge node {1\ellinv_{1_{GFX}}} (B)
(B) edge node {1(\Gzero_{FX}1)} (C)
(C) edge node {\ainv} (D)
(D) edge node {\Gtwo 1} (E)
(E) edge node {r_{f_F,G} 1} (F)
(A) edge node[swap] {1} (F)
(A) edge node[pos=.4] {\unity} node[swap,pos=.6] {\rinv_{f_{GF}}1} (G)
(B) edge node {\ainv} (G)
(G) edge node[pos=.4] {(1\Gzero_{FX})1} (D)
(G) edge node[swap,pos=.4] {r_{f_{GF}}1} (F)
;
\end{tikzpicture}\]
The right triangle is commutative by the lax right unity \eqref{f0-bicat} of $G$.
Next is the diagram $B_2$.
\[\begin{tikzpicture}[xscale=8,yscale=1.5]
\draw[0cell]
(0,0) node (x11) {f_{GF}1_{GFX}}
($(x11)+(1/2,0)$) node (x12) {f_{GF}1_{GFX}}
($(x12)+(1/2,0)$) node (x13) {(1_{FY}f_F)_{G}1_{GFX}}
($(x11)+(0,-1)$) node (x21) {f_{GF}}
($(x21)+(1/2,0)$) node (x22) {(1_{FY}f_F)_{G}}
($(x22)+(1/2,0)$) node (x23) {(1_{FY,G}f_{FG})1_{GFX}}
($(x21)+(0,-1)$) node (x31) {1_{GFY}f_{GF}}
($(x31)+(1/2,0)$) node (x32) {1_{FY,G}f_{GF}}
($(x32)+(1/2,0)$) node (x33) {1_{FY,G}(f_{FG}1_{GFX})}
($(x11)!.5!(x22)$) node {\nat}
;
\draw[1cell]
(x11) edge node {1} (x12)
(x12) edge node {\ellinv_{f_F,G}1} (x13)
(x31) edge node[swap] {\Gzero_{FY}1} (x32)
(x33) edge node[pos=.3] {1 r_{f_{GF}}} (x32)
(x11) edge node[swap] {r_{f_{GF}}} (x21)
(x21) edge node[swap] {\ellinv_{f_{GF}}} (x31)
(x13) edge node {\Gtwoinv 1} (x23)
(x23) edge node {a} (x33)
(x21) edge node[swap] {\ellinv_{f_F,G}} (x22)
(x22) edge node {\Gtwoinv} (x32)
(x12) edge node[swap,pos=.3] {(\Gtwoinv \ellinv_{f_F,G})1} node[pos=.55] {\midfour} (x23)
(x23) edge node[swap,pos=.4] {r} node[pos=.3] {\unity} (x32)
;
\end{tikzpicture}\]
The bottom left square is commutative by the lax left unity \eqref{f0-bicat} of $G$.
Next is the diagram $B_3$, rearranged into a rectangle.
\[\begin{tikzpicture}[xscale=8,yscale=1.5]
\draw[0cell]
(0,0) node (x11) {1_{GFY}f_{GF}}
($(x11)+(1,0)$) node (x13) {1_{FY,G} f_{GF}}
($(x11)+(1/2,-1)$) node (x22) {1_{GFY}(1_{GFY}f_{GF})}
($(x22)+(1/2,0)$) node (x23) {1_{FY,G}(1_{GFY} f_{GF})}
($(x11)+(0,-2)$) node (x31) {(1_{GFY}1_{GFY})f_{GF}}
($(x31)+(1,0)$) node (x33) {(1_{FY,G}1_{GFY})f_{GF}}
($(x11)+(1/2,-1/2)$) node {\midfour}
($(x22)!.5!(x33)$) node {\nat}
;
\draw[1cell]
(x11) edge node {\Gzero_{FY}1} (x13)
(x31) edge node[swap] {(\Gzero_{FY}1)1} (x33)
(x11) edge node[swap] {\ellinv_{1_{GFY}}1_{f_{GF}}} (x31)
(x13) edge node {1\ellinv_{f_{GF}}} (x23)
(x23) edge node {\ainv} (x33)
(x11) edge node[swap,pos=.5] {1\ellinv_{f_{GF}}} (x22)
(x22) edge node[swap] {\unity} node[pos=.4] {\ainv} (x31)
(x22) edge node {\Gzero_{FY}1} (x23)
;
\end{tikzpicture}\]
We have shown that the diagram \eqref{tensorzero-modax} is commutative, so $\tensorzero$ is an invertible modification.
\end{proof}
Recall the notation $\bicata_{i,j} = \Bicatps(\A_i,\A_j)$ and related notations from
\cref{conv:bicategory-index}. We will use these in the results below.
\begin{lemma}\label{tensorzero-laxunity}
The tuple $(\tensor,\tensortwo,\tensorzero)$ satisfies the lax left and right unity axioms \eqref{f0-bicat}.
\end{lemma}
\begin{proof}
The lax left unity axiom is the following assertion. Given bicategories $\A_1$, $\A_2$, and $\A_3$, and strong transformations $\alpha$ and $\beta$ as in
\[\begin{tikzpicture}[xscale=2.3, yscale=1.4]
\def\q{45}
\draw[0cell]
(0,0) node (A) {\A_1}
($(A)+(1,0)$) node (B) {\A_2}
($(B)+(1,0)$) node (C) {\A_3}
;
\draw[1cell]
(A) edge[bend left=\q] node {F} (B)
(A) edge[bend right=\q] node[swap] {F'} (B)
(B) edge[bend left=\q] node {G} (C)
(B) edge[bend right=\q] node[swap] {G'} (C)
;
\draw[2cell]
node[between=A and B at .45, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
node[between=B and C at .45, rotate=-90, 2label={above,\beta}] {\Rightarrow}
;
\end{tikzpicture}\]
between pseudofunctors, the diagram of modifications
\begin{equation}\label{tensorzero-lax-unity}
\begin{tikzcd}
1_{G'F'}(\beta\tensor\alpha) \ar{d}[swap]{\tensorzero * 1} \ar{r}{\ell} & \beta\tensor\alpha\\
(1_{G'}\tensor 1_{F'})(\beta\tensor\alpha) \ar{r}{\tensortwo} & (1_{G'}\beta)\tensor (1_{F'}\alpha) \ar{u}[swap]{\ell\tensor\ell}
\end{tikzcd}
\end{equation}
in $\bicata_{1,3}(GF,G'F')$ is commutative. Using \eqref{tensortwo-x} and \eqref{tensorzero-x}, and evaluating at an object $X\in\A_1$, the diagram \eqref{tensorzero-lax-unity} yields the boundary of the following diagram in $\A_3(GFX,G'F'X)$.
\begin{equation}\label{tensorzero-lax-left-unity}
\begin{tikzpicture}[xscale=6.5, yscale=1.5, baseline={(g.base)}]
\draw[0cell]
(0,0) node (x11) {1_{G'F'X}(\alpha_{X,G'}\beta_{FX})}
($(x11)+(1,0)$) node (x12) {\alpha_{X,G'}\beta_{FX}}
($(x11)+(0,-1)$) node (x21) {(1_{G'F'X}1_{G'F'X})(\alpha_{X,G'}\beta_{FX})}
($(x21)+(1,0)$) node (x22) {(1_{F'X}\alpha_X)_{G'}(1_{G'FX}\beta_{FX})}
($(x21)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x31) {(1_{F'X,G'}1_{G'F'X})(\alpha_{X,G'}\beta_{FX})}
($(x31)+(1,0)$) node (x32) {(1_{F'X,G'}\alpha_{X,G'})(1_{G'FX}\beta_{FX})}
($(x31)+(0,-1} \def\h{1} \def-1} \def\v{-1{2*\h/3} \def.4{.2} \def\b{15)$) node (x41) {\big[1_{F'X,G'}(1_{G'F'X}\alpha_{X,G'})\big]\beta_{FX}}
($(x41)+(1,0)$) node (x42) {\big[1_{F'X,G'}(\alpha_{X,G'}1_{G'FX})\big]\beta_{FX}}
($(x21)!.4!(x22)$) node (d11) {\bullet}
($(x21)!.6!(x22)$) node (d12) {\bullet}
($(x31)!.4!(x32)$) node (d21) {\bullet}
($(x31)!.6!(x32)$) node (d22) {\bullet}
($(d11)!.5!(d22)$) node {(\clubsuit)}
;
\draw[1cell]
(x11) edge node {\ell_{\alpha_{X,G'}\beta_{FX}}} (x12)
(x41) edge node {(1 (1_{G'})_{\alpha_X}^{-1})1} (x42)
(x11) edge node[swap] {\ellinv_{1_{G'F'X}}1} (x21)
(x21) edge node[swap] (g) {(\Gpzero_{F'X}1)1} (x31)
(x31) edge node[swap] {\iso} (x41)
(x42) edge node[swap] {\iso} (x32)
(x32) edge node[swap] {\Gptwo 1} (x22)
(x22) edge node[swap] {\ell_{\alpha_X,G'} \ell_{\beta_{FX}}} (x12)
(d11) edge (d12)
(d12) edge (d22)
(d21) edge (d11)
(d21) edge (d22)
(x11) edge (d11)
(d12) edge (x12)
(d21) edge (x41)
(d22) edge (x42)
;
\end{tikzpicture}
\end{equation}
We prove its commutativity by checking that each of the five sub-diagrams is commutative.
The middle square $(\clubsuit)$ in \eqref{tensorzero-lax-left-unity} is the outer diagram below.
\[\begin{tikzpicture}[xscale=6, yscale=1.5]
\draw[0cell]
(0,0) node (d11) {(1_{G'F'X}\alpha_{X,G'})\beta_{FX}}
($(d11)+(1,0)$) node (d12) {(1_{G'F'X}\alpha_{X,G'})(1_{G'FX}\beta_{FX})}
($(d11)+(0,-1)$) node (d21) {\big[1_{G'F'X}(1_{G'F'X}\alpha_{X,G'})\big]\beta_{FX}}
($(d21)+(1,0)$) node (d22) {\big[1_{G'F'X}(\alpha_{X,G'}1_{G'FX})\big]\beta_{FX}}
;
\draw[1cell]
(d11) edge node {1\ellinv} (d12)
(d21) edge node[swap] {(1 (1_{G'})_{\alpha_X}^{-1})1} (d22)
(d21) edge node {(1\ell)1} (d11)
(d12) edge node {\iso} (d22)
(d11) edge node {(1\rinv)1} (d22)
;
\end{tikzpicture}\]
The lower left triangle is commutative by the definition of $(1_{G'})_{\alpha_X}$ in \eqref{idlaxtr-component}. The upper right triangle is the boundary of the following diagram in $\A_3(GFX,G'F'X)$.
\[\begin{tikzpicture}[xscale=6, yscale=1.5]
\def\n{.4}
\draw[0cell]
(0,0) node (d11) {(1_{G'F'X}\alpha_{X,G'})\beta_{FX}}
($(d11)+(1,0)$) node (d13) {(1_{G'F'X}\alpha_{X,G'})(1_{G'FX}\beta_{FX})}
($(d11)+(0,-1)$) node (d21) {\alpha_{X,G'}\beta_{FX}}
($(d21)+(1/2,0)$) node (d22) {\alpha_{X,G'}(1\beta_{FX})}
($(d22)+(1/2,0)$) node (d23) {1\big[\alpha_{X,G'}(1\beta_{FX})\big]}
($(d21)+(0,-1)$) node (d31) {(\alpha_{X,G'}1)\beta_{FX}}
($(d31)+(1,0)$) node (d33) {1\big[(\alpha_{X,G'}1)\beta_{FX}\big]}
($(d31)+(1/2,-1)$) node (d42) {\big[1_{G'F'X}(\alpha_{X,G'}1_{G'FX})\big]\beta_{FX}}
($(d11)+(-.3,0)$) node[inner sep=0pt] (a) {}
($(a)+(0,-3)$) node[inner sep=0pt] (b) {}
($(d13)+(\n,0)$) node[inner sep=0pt] (c) {}
($(c)+(0,-3)$) node[inner sep=0pt] (d) {}
($(d11)+(1/3,-1/3)$) node {\midfour}
($(d33)+(0,-1/2)$) node {\MC}
($(d22)!.5!(d33)$) node {\nat}
($(d11)+(-.2,-.5)$) node{\nat}
($(d23)+(-1/6,1/3)$) node {\unity}
($(d21)+(1/6,-1/3)$) node {\unity}
($(d42)+(0,1/2)$) node {\unity}
;
\draw[1cell]
(d11) edge node {1\ellinv} (d13)
(d11) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] (b)
(b) edge[shorten <=-1pt] node {(1\rinv)1} (d42)
(d13) edge[-,shorten >=-1pt] (c)
(c) edge[-,shorten >=-1pt,shorten <=-1pt] (d)
(d) edge[shorten <=-1pt] node[swap,pos=.3] {\iso} (d42)
(d11) edge node {\ell 1} (d21)
(d13) edge node[swap] {\ell 1} (d22)
(d13) edge node {a} (d23)
(d23) edge node[swap] {\ell} (d22)
(d22) edge node[swap] {1\ell} (d21)
(d31) edge node {r1} (d21)
(d31) edge node[swap] {a} (d22)
(d23) edge node {1\ainv} (d33)
(d33) edge node[swap] {\ell} (d31)
(d42) edge node[swap] {\ell 1} (d31)
(d33) edge node[swap] {\ainv} (d42)
;
\end{tikzpicture}\]
This shows that the middle square $(\clubsuit)$ in \eqref{tensorzero-lax-left-unity} is commutative.
Abbreviating every identity $1$-cell to $1$, the four trapezoids in \eqref{tensorzero-lax-left-unity} are further divided as follows.
\[\begin{tikzpicture}[xscale=9, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {1(\alpha_{X,G'}\beta_{FX})}
($(x11)+(1,0)$) node (x12) {\alpha_{X,G'}\beta_{FX}}
($(x11)+(0,-1)$) node (x21) {(11)(\alpha_{X,G'}\beta_{FX})}
($(x21)+(1,0)$) node (x22) {(1\alpha_X)_{G'}(1\beta_{FX})}
($(x21)+(0,-1)$) node (x31) {(1_{G'}1)(\alpha_{X,G'}\beta_{FX})}
($(x31)+(1,0)$) node (x32) {(1_{G'}\alpha_{X,G'})(1\beta_{FX})}
($(x31)+(0,-1)$) node (x41) {\big[1_{G'}(1\alpha_{X,G'})\big]\beta_{FX}}
($(x41)+(1,0)$) node (x42) {\big[1_{G'}(\alpha_{X,G'}1)\big]\beta_{FX}}
($(x21)!.4!(x22)$) node (d11) {\bullet}
($(x21)!.6!(x22)$) node (d12) {\bullet}
($(x31)!.4!(x32)$) node (d21) {\bullet}
($(x31)!.6!(x32)$) node (d22) {\bullet}
($(d11)+(0,1/2)$) node {\unity}
($(d11)!.5!(d22)$) node {(\clubsuit)}
($(x21)!.6!(d11)$) node {(\diamondsuit)}
($(d22)!.4!(x32)$) node {\nat}
($(x31)!.6!(d21)$) node {\nat}
;
\draw[1cell]
(x11) edge node {\ell} (x12)
(x41) edge node[pos=.3] {(1 (1_{G'})_{\alpha_X}^{-1})1} (x42)
(x11) edge node[swap] {\ellinv_{1}1} (x21)
(x21) edge node[swap] (g) {(\Gpzero_{F'X}1)1} (x31)
(x31) edge node[swap] {\iso} (x41)
(x42) edge node[swap] {\iso} (x32)
(x32) edge node[swap] {\Gptwo 1} (x22)
(x22) edge node[swap] {\ell_{\alpha_X,G'} \ell_{\beta_{FX}}} (x12)
(d11) edge node[swap] {1\ellinv} (d12)
(d12) edge node {\iso} (d22)
(d21) edge node {(1\ell)1} (d11)
(d21) edge (d22)
(x11) edge node {\ainv} (d11)
(d12) edge node[swap,pos=.6] {\ell\ell} (x12)
(d21) edge (x41)
(d22) edge (x42)
(d11) edge[bend left=15] node[pos=.4] {\ell 1} node[swap] {\midfour} (x12)
(d21) edge[bend right=15] node {\midfour} node[swap] {\midfour} (x42)
(x21) edge node[swap,pos=.4] {\iso} (d21)
(d12) edge node[pos=.3] {(\Gpzero 1)1} (x32)
;
\end{tikzpicture}\]
\begin{itemize}
\item In the bottom trapezoid, the left and the right boundaries both have the form $(\Gpzero_{F'X}1)1$. The inside arrow is $(\Gpzero_{F'X}(1_{G'})^{-1}_{\alpha_X})1$.
\item In the right trapezoid, the upper triangle is commutative by the lax left unity \eqref{f0-bicat} of $G'$.
\end{itemize}
In the left trapezoid, the sub-diagram $(\diamondsuit)$ is the outer diagram below.
\[\begin{tikzpicture}[xscale=6, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {1(\alpha_{X,G'}\beta_{FX})}
($(x11)+(1,0)$) node (x12) {(1\alpha_{X,G'})\beta_{FX}}
($(x11)+(1/2,-1)$) node (c) {\big[(11)\alpha_{X,G'}\big]\beta_{FX}}
($(x11)+(0,-2)$) node (x21) {(11)(\alpha_{X,G'}\beta_{FX})}
($(x21)+(1,0)$) node (x22) {\big[1(1\alpha_{X,G'})\big]\beta_{FX}}
($(c)+(0,-1/2)$) node {\MC}
($(x11)!.5!(c)$) node {\nat + \unity}
($(c)+(1/3,.1)$) node {\unity}
;
\draw[1cell]
(x11) edge node {\ainv} (x12)
(x11) edge node[swap] {\ellinv_{1}1} (x21)
(x21) edge node {\iso} (x22)
(x22) edge node[swap] {(1\ell)1} (x12)
(c) edge node {(r_1 1)1} (x12)
(x21) edge node {\ainv} (c)
(c) edge node {a1} (x22)
;
\end{tikzpicture}\]
We have shown that the diagram \eqref{tensorzero-lax-left-unity} is commutative.
The lax right unity axiom is proved by a similar argument, and we ask the reader to check it in \Cref{exer:tensorzero-lax-right}.
\end{proof}
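For later reference, we record the shape of the lax right unity axiom to be checked in \Cref{exer:tensorzero-lax-right}. For strong transformations $\alpha$ and $\beta$ as above, it is the assertion that the diagram of modifications
\[\begin{tikzcd}
(\beta\tensor\alpha)1_{GF} \ar{d}[swap]{1 * \tensorzero} \ar{r}{r} & \beta\tensor\alpha\\
(\beta\tensor\alpha)(1_{G}\tensor 1_{F}) \ar{r}{\tensortwo} & (\beta 1_{G})\tensor (\alpha 1_{F}) \ar{u}[swap]{r\tensor r}
\end{tikzcd}\]
in $\bicata_{1,3}(GF,G'F')$ is commutative. It is obtained from the lax left unity diagram \eqref{tensorzero-lax-unity} by replacing $\ell$ with $r$ and using the identity $1$-cell $(1_G,1_F)$ of $(G,F)$ instead of that of $(G',F')$.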
\begin{proposition}\label{tensor-pseudofunctor}\index{tricategory of bicategories!composition}
For bicategories $\A_1$, $\A_2$, and $\A_3$, the tuple
\[\begin{tikzcd}[column sep=large]
\bicata^2_{[1,3]} = \Bicatps(\A_2,\A_3)\times\Bicatps(\A_1,\A_2) \ar{r}{(\tensor,\tensortwo,\tensorzero)} & \Bicatps(\A_1,\A_3) = \bicata_{1,3}
\end{tikzcd}\]
defined in
\begin{itemize}
\item \Cref{def:transformation-tensor} for $\tensor$,
\item \Cref{def:tensortwo} for $\tensortwo$, and
\item \Cref{def:tensorzero} for $\tensorzero$,
\end{itemize}
is a pseudofunctor.
\end{proposition}
\begin{proof}
The assignment $\tensor$ is well-defined by
\begin{itemize}
\item \Cref{lax-functors-compose} on objects,
\item \Cref{lax-tr-compose,pre-whiskering-transformation,post-whiskering-transformation} on $1$-cells, and
\item \Cref{tensor-modification} on $2$-cells.
\end{itemize}
Moreover:
\begin{itemize}
\item $\tensortwo$ is a natural isomorphism by \Cref{tensortwo-modification,tensortwo-iicell-natural}.
\item \Cref{tensorzero-modification,expl:lax-functor}\eqref{fzero-natural} imply that $\tensorzero$ is a natural isomorphism.
\end{itemize}
The lax associativity axiom \eqref{f2-bicat} and the lax left and right unity axioms \eqref{f0-bicat} hold by \Cref{tensortwo-lax-associative,tensorzero-laxunity}, respectively.
\end{proof}
We call the pseudofunctor $(\tensor,\tensortwo,\tensorzero)$ the \emph{composition}.
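To summarize, for composable pseudofunctors, strong transformations, and modifications, the composition acts by
\[\tensor(G,F) = GF, \qquad \tensor(\beta,\alpha) = \beta\tensor\alpha, \qquad \tensor(\Sigma,\Gamma) = \Sigma\tensor\Gamma,\]
and its lax functoriality and lax unity constraints are the invertible $2$-cells
\[\begin{tikzcd}[column sep=large]
(\beta'\tensor\alpha')(\beta\tensor\alpha) \ar{r}{\tensortwo} & (\beta'\beta)\tensor(\alpha'\alpha)
\end{tikzcd}
\qquad\text{and}\qquad
\begin{tikzcd}[column sep=large]
1_{GF} \ar{r}{\tensorzero} & 1_{G}\tensor 1_{F}.
\end{tikzcd}\]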
\section{The Associator}\label{sec:tricat-associator}
In this section we define the associator \eqref{tricategory-associator} of the tricategory of bicategories. The plan for this section is as follows:
\begin{itemize}
\item The left adjoint $a$ of the associator is defined in \Cref{def:tricatofbicat-associator}. It is shown to be a $1$-cell in \Cref{tricatofbicat-associator-modax,tricatofbicat-associator-iicell-nat,tricatofbicat-associator-laxunity,tricatofbicat-associator-laxnat}.
\item The right adjoint $\abdot$ of the associator is defined in \Cref{def:ass-right-adjoint}.
\item The unit and the counit of the associator are defined in \Cref{def:tricatofbicat-ass-unit}. They are observed to be invertible $2$-cells in \Cref{etaa-iicell,epza-iicell}.
\item The triangle identities are checked in \Cref{ass-adjoint-equivalence}.
\end{itemize}
We remind the reader of \Cref{conv:functor-subscript,conv:bicategory-index,conv:large-diagram}, and that composition of lax functors is strictly associative, as shown in \Cref{thm:cat-of-bicat}. The associator involves the pseudofunctor $\tensor$ in \Cref{tensor-pseudofunctor}.
\begin{definition}\label{def:tricatofbicat-associator}
Suppose $\A_1,\A_2,\A_3$, and $\A_4$ are bicategories. Define $a$ as in
\begin{equation}\label{tricatofbicat-associator}
\begin{tikzpicture}[xscale=3, yscale=1.4, baseline={(A.base)}]
\def\q{45}
\draw[0cell]
(0,0) node (A) {\bicata^3_{[1,4]}}
($(A)+(1,0)$) node (B) {\bicata_{1,4}}
;
\draw[1cell]
(A) edge[bend left=\q] node {\tensor(\tensor\times 1)} (B)
(A) edge[bend right=\q] node[swap] {\tensor(1\times\tensor)} (B)
;
\draw[2cell]
node[between=A and B at .45, rotate=-90, 2label={above,a}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
with the following components.
\begin{description}
\item[Component $1$-Cells] For pseudofunctors
\begin{equation}\label{functors-fgh}
\begin{tikzcd}
\A_1 \ar{r}{F} & \A_2 \ar{r}{G} & \A_3 \ar{r}{H} & \A_4,
\end{tikzcd}
\end{equation}
i.e., an object $(H,G,F) \in \bicata^3_{[1,4]}$, $a$ has a component $1$-cell
\[\begin{tikzpicture}[xscale=7, yscale=1.4, baseline={(A.base)}]
\draw[0cell]
(0,0) node (A) {\big(\tensor(\tensor\times 1)\big)(H,G,F) = HGF}
($(A)+(1,0)$) node (B) {HGF = \big(\tensor(1\times\tensor)\big)(H,G,F)}
;
\draw[1cell]
(A) edge node {a_{H,G,F} ~=~ 1_{HGF}} (B)
;
\end{tikzpicture}\]
which is the identity strong transformation of the composite $HGF \in \bicata_{1,4}$ in \Cref{id-lax-transformation}.
\item[Component $2$-Cells] For strong transformations
\begin{equation}\label{transformation-abg}
\begin{tikzpicture}[xscale=2.3, yscale=1.4, baseline={(A.base)}]
\def\q{45}
\draw[0cell]
(0,0) node (A) {\A_1}
($(A)+(1,0)$) node (B) {\A_2}
($(B)+(1,0)$) node (C) {\A_3}
($(C)+(1,0)$) node (D) {\A_4,}
;
\draw[1cell]
(A) edge[bend left=\q] node {F} (B)
(A) edge[bend right=\q] node[swap] {F'} (B)
(B) edge[bend left=\q] node {G} (C)
(B) edge[bend right=\q] node[swap] {G'} (C)
(C) edge[bend left=\q] node {H} (D)
(C) edge[bend right=\q] node[swap] {H'} (D)
;
\draw[2cell]
node[between=A and B at .45, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
node[between=B and C at .45, rotate=-90, 2label={above,\beta}] {\Rightarrow}
node[between=C and D at .45, rotate=-90, 2label={above,\gamma}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
i.e., a $1$-cell
\[(\gamma,\beta,\alpha)\in \bicata^3_{[1,4]}\big((H,G,F),(H',G',F')\big),\]
$a$ has a component $2$-cell in $\bicata_{1,4}(HGF,H'G'F')$, i.e., a modification (to be justified in \Cref{tricatofbicat-associator-modax} below)
\begin{equation}\label{tricatofbicat-ass-iicell}
\begin{tikzpicture}[xscale=3, yscale=1.7, baseline={(a.base)}]
\draw[0cell]
(0,0) node (x11) {HGF}
($(x11)+(1,0)$) node (x12) {H'G'F'}
($(x11)+(0,-1)$) node (x21) {HGF}
($(x21)+(1,0)$) node (x22) {H'G'F'}
;
\draw[1cell]
(x11) edge node {(\gamma\tensor\beta)\tensor\alpha} (x12)
(x11) edge node[swap] (a) {a_{H,G,F}\,=\, 1_{HGF}} (x21)
(x12) edge node {1_{H'G'F'}\,=\, a_{H',G',F'}} (x22)
(x21) edge node[swap] {\gamma\tensor(\beta\tensor\alpha)} (x22)
;
\draw[2cell]
node[between=x11 and x22 at .65, rotate=40, 2label={above,a_{\gamma,\beta,\alpha}}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
given by the vertical composite $2$-cell
\begin{equation}\label{tricatofbicat-ass-iicell-comp}
\begin{tikzpicture}[xscale=6, yscale=1.5, baseline={(x21.base)}]
\draw[0cell]
(0,0) node (x11) {\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big)1_{HGFX}}
($(x11)+(1,0)$) node (x12) {1_{H'G'F'X}\big(\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})\big)}
($(x11)+(0,-1)$) node (x21) {(\alpha_{X,G'}\beta_{FX})_{H'}(\gamma_{GFX}1_{HGFX})}
($(x21)+(1,0)$) node (x22) {\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})}
($(x21)+(1/2,-1)$) node (x3) {(\alpha_{X,H'G'}\beta_{FX,H'})\gamma_{GFX}}
;
\draw[1cell]
(x11) edge node {(a_{\gamma,\beta,\alpha})_X} (x12)
(x11) edge node[swap] {a} (x21)
(x21) edge node[swap] {\Hptwoinv r} (x3)
(x3) edge node[swap] {a} (x22)
(x22) edge node[swap] {\ellinv} (x12)
;
\end{tikzpicture}
\end{equation}
in $\A_4(HGFX,H'G'F'X)$ for each object $X\in\A_1$.
\end{description}
This finishes the definition of $a$.
\end{definition}
\begin{explanation}\label{expl:tricatofbicat-associator}
By the naturality \eqref{unitor-naturality} of $\ell$ and $r$, and the right unity property in \Cref{bicat-left-right-unity}, the component $(a_{\gamma,\beta,\alpha})_X$ in \eqref{tricatofbicat-ass-iicell-comp} is equal to the following two composites.
\begin{equation}\label{ass-iicell-alternative}
\begin{tikzpicture}[xscale=6, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big)1_{HGFX}}
($(x11)+(1,0)$) node (x12) {1_{H'G'F'X}\big(\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})\big)}
($(x11)+(0,-1)$) node (x21) {(\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}}
($(x21)+(1,0)$) node (x22) {1_{H'G'F'X}\big((\alpha_{X,H'G'}\beta_{FX,H'})\gamma_{GFX}\big)}
($(x21)+(1/2,-1)$) node (x3) {1_{H'G'F'X}\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big)}
;
\draw[1cell]
(x11) edge node {(a_{\gamma,\beta,\alpha})_X} (x12)
(x11) edge node[swap] {r} (x21)
(x21) edge node[swap,pos=.4] {\ellinv} (x3)
(x3) edge node[swap,pos=.7] {1(\Hptwoinv 1)} (x22)
(x22) edge node[swap] {1a} (x12)
;
\draw[0cell]
($(x11)+(0,1)$) node (y11) {\big((\alpha_{X,H'G'}\beta_{FX,H'})\gamma_{GFX}\big)1_{HGFX}}
($(y11)+(1,0)$) node (y12) {\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})}
($(y11)+(1/2,1)$) node (y2) {\big(\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})\big)1_{HGFX}}
;
\draw[1cell]
(x11) edge node {(\Hptwoinv 1)1} (y11)
(y11) edge node {a1} (y2)
(y2) edge node {r} (y12)
(y12) edge node {\ellinv} (x12)
;
\end{tikzpicture}
\end{equation}
Moreover, in each composite the order of $r$ and $\ellinv$ may be interchanged, again by the naturality \eqref{unitor-naturality} of $\ell$ and $r$.
\end{explanation}
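For example, in the first composite in \eqref{ass-iicell-alternative}, the naturality \eqref{unitor-naturality} of $\ell$ allows one to apply $\ellinv$ before $r$:
\[\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big)1_{HGFX}
\xrightarrow{\ellinv}
1_{H'G'F'X}\Big[\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big)1_{HGFX}\Big]
\xrightarrow{1r}
1_{H'G'F'X}\big[(\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big],\]
after which the arrows $1(\Hptwoinv 1)$ and $1a$ proceed as before.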
\begin{lemma}\label{tricatofbicat-associator-modax}
The component $a_{\gamma,\beta,\alpha}$ in \eqref{tricatofbicat-ass-iicell} is an invertible modification.
\end{lemma}
\begin{proof}
Each component of $a_{\gamma,\beta,\alpha}$ is a vertical composite of four invertible $2$-cells in $\A_4$, so it is invertible.
Writing every identity $1$-cell as 1, the modification axiom \eqref{modification-axiom-pasting} for $a_{\gamma,\beta,\alpha}$ with respect to a $1$-cell $f \in \A_1(X,Y)$ is the commutativity of the diagram
\begin{equation}\label{tricatofbicat-ass-modax}
\begin{tikzpicture}[xscale=5.7, yscale=1.5, baseline={(f.base)}]
\def\q{50}
\draw[0cell]
(0,0) node (x11) {f_{H'G'F'}\big[\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big)1\big]}
($(x11)+(1,0)$) node (x12) {f_{H'G'F'}\big[1\big(\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})\big)\big]}
($(x11)+(0,-1)$) node (x21) {\big[\big((\alpha_{Y,G'}\beta_{FY})_{H'}\gamma_{GFY}\big)1\big]f_{HGF}}
($(x21)+(1,0)$) node (x22) {\big[1\big(\alpha_{Y,H'G'}(\beta_{FY,H'}\gamma_{GFY})\big)\big]f_{HGF}}
;
\draw[1cell]
(x11) edge[bend left=\q] node {1(a_{\gamma,\beta,\alpha})_X} node[swap] {4} (x12)
(x11) edge node (f) {[(\gamma\tensor(\beta\tensor\alpha))1]_f} node[swap] {12} (x21)
(x12) edge node {[1((\gamma\tensor\beta)\tensor\alpha)]_f} node[swap] {19} (x22)
(x21) edge[bend right=\q] node[swap] {(a_{\gamma,\beta,\alpha})_Y 1} node {4} (x22)
;
\end{tikzpicture}
\end{equation}
in $\A_4(HGFX,H'G'F'Y)$. The numbers 4, 4, 12, and 19 decorating the arrows indicate the number of $2$-cells into which each arrow decomposes using \eqref{idlaxtr-component}, \eqref{transf-hcomp-iicell}, \eqref{composite-tr-iicell-eq}, and \eqref{tricatofbicat-ass-iicell-comp}. As in \Cref{tensortwo-modification}, the commutativity of \eqref{tricatofbicat-ass-modax} is proved by sub-dividing it into a number of sub-diagrams, each of which is commutative by the axioms and properties in \Cref{conv:large-diagram}. The only exception is the following sub-diagram, which appears in the middle of the expanded form of \eqref{tricatofbicat-ass-modax}, with $\gamma=\gamma_{GFX}$.
\[\begin{tikzpicture}[xscale=6.5, yscale=1.3]
\def\q{-.8}
\draw[0cell]
(0,0) node (x11) {\ensuremath{\big[f_{H'G'F'}(\alpha_{X,H'G'}\beta_{FX,H'})\big]\gamma}}
($(x11)+(1,0)$) node (x12) {\ensuremath{\big[(f_{H'G'F'}\alpha_{X,H'G'})\beta_{FX,H'}\big]\gamma}}
($(x11)+(0,-1)$) node (x21) {\ensuremath{\big[f_{H'G'F'}(\alpha_{X,G'}\beta_{FX})_{H'}\big]\gamma}}
($(x21)+(1,0)$) node (x22) {\ensuremath{\big[(f_{G'F'}\alpha_{X,G'})_{H'}\beta_{FX,H'}\big]\gamma}}
($(x21)+(0,-1)$) node (x31) {\ensuremath{\big[f_{G'F'}(\alpha_{X,G'}\beta_{FX})\big]_{H'}\gamma}}
($(x31)+(1,0)$) node (x32) {\ensuremath{\big[(f_{G'F'}\alpha_{X,G'})\beta_{FX}\big]_{H'}\gamma}}
($(x31)+(0,-1)$) node (x41) {\ensuremath{\big[(f_{G'F'}\alpha_{X,G'})\beta_{FX}\big]_{H'}\gamma}}
($(x41)+(1,0)$) node (x42) {\ensuremath{\big[(f_{G'F'}\alpha_{X,G'})_{H'}\beta_{FX,H'}\big]\gamma}}
($(x41)+(0,-1)$) node (x51) {\ensuremath{\big[(f_{F'}\alpha_{X})_{G'}\beta_{FX}\big]_{H'}\gamma}}
($(x51)+(1,0)$) node (x52) {\ensuremath{\big[(f_{F'}\alpha_{X})_{H'G'}\beta_{FX,H'}\big]\gamma}}
($(x51)+(0,-1)$) node (x61) {\ensuremath{\big[(\alpha_Y f_{F})_{G'}\beta_{FX}\big]_{H'}\gamma}}
($(x61)+(1,0)$) node (x62) {\ensuremath{\big[(\alpha_Y f_{F})_{H'G'}\beta_{FX,H'}\big]\gamma}}
($(x61)+(0,-1)$) node (x71) {\ensuremath{\big[(\alpha_{Y,G'} f_{G'F})\beta_{FX}\big]_{H'}\gamma}}
($(x71)+(1,0)$) node (x72) {\ensuremath{\big[(\alpha_{Y,G'} f_{G'F})_{H'}\beta_{FX,H'}\big]\gamma}}
($(x71)+(0,-1)$) node (x81) {\ensuremath{\big[\alpha_{Y,G'} (f_{G'F}\beta_{FX})\big]_{H'}\gamma}}
($(x81)+(1,0)$) node (x82) {\ensuremath{\big[(\alpha_{Y,H'G'} f_{H'G'F})\beta_{FX,H'}\big]\gamma}}
($(x81)+(0,-1)$) node (x91) {\ensuremath{\big[\alpha_{Y,G'} (\beta_{FY} f_{GF})\big]_{H'}\gamma}}
($(x91)+(1,0)$) node (x92) {\ensuremath{\big[\alpha_{Y,H'G'} (f_{H'G'F}\beta_{FX,H'})\big]\gamma}}
($(x91)+(0,-1)$) node (x101) {\ensuremath{\big[(\alpha_{Y,G'}\beta_{FY})f_{GF}\big]_{H'}\gamma}}
($(x101)+(1,0)$) node (x102) {\ensuremath{\big[\alpha_{Y,H'G'}(f_{G'F}\beta_{FX})_{H'}\big]\gamma}}
($(x101)+(0,-1)$) node (x111) {\ensuremath{\big[(\alpha_{Y,G'}\beta_{FY})_{H'}f_{H'GF}\big]\gamma}}
($(x111)+(1,0)$) node (x112) {\ensuremath{\big[\alpha_{Y,H'G'}(\beta_{FY}f_{GF})_{H'}\big]\gamma}}
($(x111)+(0,-1)$) node (x121) {\ensuremath{\big[(\alpha_{Y,H'G'}\beta_{FY,H'})f_{H'GF}\big]\gamma}}
($(x121)+(1,0)$) node (x122) {\ensuremath{\big[\alpha_{Y,H'G'}(\beta_{FY,H'}f_{H'GF})\big]\gamma}}
($(x31)+(-.4,0)$) node[inner sep=0pt] (a) {}
($(x101)+(-.4,0)$) node[inner sep=0pt] (b) {}
($(x11)+(1/2,\q)$) node {\laxas}
($(x71)+(1/2,\q)$) node {\laxas}
($(x111)+(1/2,0)$) node {\laxas}
;
\draw[1cell]
(x11) edge node[swap] {(1\Hptwo)1} (x21)
(x21) edge node[swap] {\Hptwo 1} (x31)
(x31) edge node[swap] {\ainv_{H'}1} (x41)
(x41) edge node[swap] {(\Gptwo 1)_{H'}1} (x51)
(x51) edge node[swap] {(\alpha_{f,G'}1)_{H'}1} (x61)
(x61) edge node {(\Gptwoinv 1)_{H'}1} (x71)
(x71) edge node[swap] {a_{H'}1} (x81)
(x81) edge node[swap] {(1\beta_{f_F})_{H'}1} (x91)
(x91) edge node[swap] {\ainv_{H'}1} (x101)
(x101) edge node[swap] {\Hptwoinv 1} (x111)
(x111) edge node[swap] {(\Hptwoinv 1)1} (x121)
(x12) edge node {(\Hptwo 1)1} (x22)
(x22) edge node {\Hptwo 1} (x32)
(x32) edge node {\Hptwoinv 1} (x42)
(x42) edge node {(\Gptwo_{H'} 1)1} (x52)
(x52) edge node {(\alpha_{f,H'G'} 1)1} (x62)
(x62) edge node {(\Gptwoinv_{H'} 1)1} (x72)
(x72) edge node {(\Hptwoinv 1)1} (x82)
(x82) edge node {a1} (x92)
(x92) edge node {(1\Hptwo)1} (x102)
(x102) edge node {(1\beta_{f_F,H'})1} (x112)
(x112) edge node {(1\Hptwoinv)1} (x122)
(x11) edge node {\ainv 1} (x12)
(x31) edge node {\ainv_{H'}1} (x32)
(x51) edge node {\Hptwoinv 1} (x52)
(x61) edge node {\Hptwoinv 1} (x62)
(x71) edge node[pos=.6] {\Hptwoinv 1} (x72)
(x81) edge node[pos=.4] {\Hptwoinv 1} (x102)
(x91) edge node[pos=.4] {\Hptwoinv 1} (x112)
(x121) edge node {a1} (x122)
(x31) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node {(\beta\tensor\alpha)_{f,H'}1} (b)
(b) edge[shorten <=-1pt] (x101)
;
\end{tikzpicture}\]
The three marked sub-diagrams above are commutative by the lax associativity \eqref{f2-bicat} of $H'$. The other four sub-diagrams are commutative by the naturality \eqref{f2-bicat-naturality} of $\Hptwo$.
\end{proof}
Next we check that $a$ satisfies the axioms of a lax transformation.
\begin{lemma}\label{tricatofbicat-associator-iicell-nat}
The construction $a$ in \eqref{tricatofbicat-associator} is natural in the sense of \eqref{lax-transformation-naturality}.
\end{lemma}
\begin{proof}
Naturality of $a$ means that, for modifications $(\Omega,\Sigma,\Gamma)$ as in
\[\begin{tikzpicture}[xscale=3, yscale=1.4]
\def\q{65}
\draw[0cell]
(0,0) node (A) {\A_1}
($(A)+(1,0)$) node (B) {\A_2}
($(B)+(1,0)$) node (C) {\A_3}
($(C)+(1,0)$) node (D) {\A_4,}
;
\draw[1cell]
(A) edge[bend left=\q] node {F} (B)
(A) edge[bend right=\q] node[swap] {F'} (B)
(B) edge[bend left=\q] node {G} (C)
(B) edge[bend right=\q] node[swap] {G'} (C)
(C) edge[bend left=\q] node {H} (D)
(C) edge[bend right=\q] node[swap] {H'} (D)
;
\draw[2cell]
node[between=A and B at .5, shift={(0,-.1)}, 2label={above,\Gamma}] {\Rrightarrow}
node[between=A and B at .35, rotate=-90, 2label={below,\alpha}] {\Rightarrow}
node[between=A and B at .65, rotate=-90, 2label={above,\alpha'}] {\Rightarrow}
node[between=B and C at .5, shift={(0,-.1)}, 2label={above,\Sigma}] {\Rrightarrow}
node[between=B and C at .35, rotate=-90, 2label={below,\beta}] {\Rightarrow}
node[between=B and C at .65, rotate=-90, 2label={above,\beta'}] {\Rightarrow}
node[between=C and D at .5, shift={(0,-.1)}, 2label={above,\Omega}] {\Rrightarrow}
node[between=C and D at .35, rotate=-90, 2label={below,\gamma}] {\Rightarrow}
node[between=C and D at .65, rotate=-90, 2label={above,\gamma'}] {\Rightarrow}
;
\end{tikzpicture}\]
i.e., a $2$-cell in $\bicata^3_{[1,4]}\big((H,G,F),(H',G',F')\big)$, the diagram
\[\begin{tikzpicture}[xscale=6, yscale=1.5]
\draw[0cell]
(0,0) node (x11) {\big[\gamma\tensor(\beta\tensor\alpha)\big]1_{HGF}}
($(x11)+(1,0)$) node (x12) {\big[\gamma'\tensor(\beta'\tensor\alpha')\big]1_{HGF}}
($(x11)+(0,-1)$) node (x21) {1_{H'G'F'}\big[(\gamma\tensor\beta)\tensor\alpha\big]}
($(x21)+(1,0)$) node (x22) {1_{H'G'F'}\big[(\gamma'\tensor\beta')\tensor\alpha'\big]}
;
\draw[1cell]
(x11) edge node {[\Omega\tensor(\Sigma\tensor\Gamma)]*1} (x12)
(x11) edge node[swap] (f) {a_{\gamma,\beta,\alpha}} (x21)
(x12) edge node {a_{\gamma',\beta',\alpha'}} (x22)
(x21) edge node[swap] {1*[(\Omega\tensor\Sigma)\tensor\Gamma]} (x22)
;
\end{tikzpicture}\]
of modifications is commutative.
Evaluating at an object $X\in\A_1$ and using \eqref{modification-hcomp}, \eqref{mod-composite-component}, and \eqref{composite-tr-icell} to expand the boundaries, the previous diagram yields the diagram
\[\begin{tikzpicture}[xscale=6, yscale=1.5]
\def\q{45}
\draw[0cell]
(0,0) node (x11) {\big[(\alpha_{X,G'}\beta_{FX})_{H'} \gamma_{GFX}\big]1_{HGFX}}
($(x11)+(1,0)$) node (x12) {\big[(\alpha'_{X,G'}\beta'_{FX})_{H'} \gamma'_{GFX}\big]1_{HGFX}}
($(x11)+(0,-1)$) node (x21) {(\alpha_{X,G'}\beta_{FX})_{H'} (\gamma_{GFX}1_{HGFX})}
($(x21)+(1,0)$) node (x22) {(\alpha'_{X,G'}\beta'_{FX})_{H'} (\gamma'_{GFX}1_{HGFX})}
($(x21)+(0,-1)$) node (x31) {(\alpha_{X,H'G'}\beta_{FX,H'}) \gamma_{GFX}}
($(x31)+(1,0)$) node (x32) {(\alpha'_{X,H'G'}\beta'_{FX,H'}) \gamma'_{GFX}}
($(x31)+(0,-1)$) node (x41) {\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})}
($(x41)+(1,0)$) node (x42) {\alpha'_{X,H'G'}(\beta'_{FX,H'}\gamma'_{GFX})}
($(x41)+(0,-1)$) node (x51) {1_{H'G'F'X}\big[\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})\big]}
($(x51)+(1,0)$) node (x52) {1_{H'G'F'X}\big[\alpha'_{X,H'G'}(\beta'_{FX,H'}\gamma'_{GFX})\big]}
;
\draw[1cell]
(x11) edge node[swap] {a} (x21)
(x21) edge node[swap] {\Hptwoinv r} (x31)
(x31) edge node[swap] {a} (x41)
(x41) edge node[swap] {\ellinv} (x51)
(x12) edge node {a} (x22)
(x22) edge node {\Hptwoinv r} (x32)
(x32) edge node {a} (x42)
(x42) edge node {\ellinv} (x52)
(x11) edge[bend left=\q] node {[(\Gamma_{X,G'}\Sigma_{FX})_{H'} \Omega_{GFX}]1} (x12)
(x21) edge[bend left=\q] node {(\Gamma_{X,G'}\Sigma_{FX})_{H'} (\Omega_{GFX}1)} (x22)
(x31) edge[bend left=\q] node {(\Gamma_{X,H'G'}\Sigma_{FX,H'}) \Omega_{GFX}} (x32)
(x41) edge[bend left=\q] node {\Gamma_{X,H'G'}(\Sigma_{FX,H'} \Omega_{GFX})} (x42)
(x51) edge[bend left=\q] node {1[\Gamma_{X,H'G'}(\Sigma_{FX,H'} \Omega_{GFX})]} (x52)
;
\end{tikzpicture}\]
in $\A_4(HGFX,H'G'F'X)$. In the previous diagram, every sub-diagram is commutative by naturality.
\end{proof}
\begin{lemma}\label{tricatofbicat-associator-laxunity}
The construction $a$ in \eqref{tricatofbicat-associator} satisfies the lax unity axiom \eqref{unity-transformation}.
\end{lemma}
\begin{proof}
The lax unity axiom for $a$ means that, for pseudofunctors $(H,G,F)$ as in \eqref{functors-fgh}, the diagram
\[\begin{tikzpicture}[xscale=3.5, yscale=1.5]
\def\h{1} \def\v{-1}
\draw[0cell]
(0,0) node (x11) {1_{HGF}1_{HGF}}
($(x11)+(\h,0)$) node (x12) {1_{HGF}}
($(x12)+(\h,0)$) node (x13) {1_{HGF}1_{HGF}}
($(x11)+(0,\v)$) node (x21) {\big[1_H\tensor(1_G\tensor 1_F)\big]1_{HGF}}
($(x21)+(2*\h,0)$) node (x23) {1_{HGF}\big[(1_H\tensor 1_G)\tensor 1_F\big]}
;
\draw[1cell]
(x11) edge node {\ell} (x12)
(x12) edge node {\rinv} (x13)
(x11) edge node {[\tensor(1\times\tensor)]^0_{H,G,F} *1} (x21)
(x13) edge node {1*[\tensor(\tensor\times 1)]^0_{H,G,F}} (x23)
(x21) edge node {a_{(1_H,1_G,1_F)}} (x23)
;
\end{tikzpicture}\]
of modifications is commutative.
Evaluating at an object $X\in\A_1$, abbreviating $1_{HGFX}$ to $1$, and using \eqref{lax-functors-comp-zero}, \eqref{tensorzero-x}, and \eqref{tricatofbicat-ass-iicell-comp} to expand the boundary, the previous diagram yields the boundary of the following diagram in $\A_4(HGFX,HGFX)$.
\[\begin{tikzpicture}[xscale=8, yscale=1.5]
\def\h{1} \def\v{-1}
\draw[0cell]
(0,0) node (x11) {11}
($(x11)+(\h/2,0)$) node (x12) {1}
($(x12)+(\h/2,0)$) node (x13) {11}
($(x11)+(0,\v)$) node (x21) {(11)1}
($(x21)+(\h/2,0)$) node (x22) {11}
($(x21)+(\h,0)$) node (x23) {1(11)}
($(x21)+(0,\v)$) node (x31) {(1_{GFX,H}1)1}
($(x31)+(\h/2,0)$) node (x32) {1_{GFX,H}1}
($(x31)+(\h,0)$) node (x33) {1(1_{GFX,H}1)}
($(x31)+(0,\v)$) node (x41) {[(1_{GFX}1_{GFX})_{H}1]1}
($(x41)+(\h,0)$) node (x43) {1[1_{FX,HG}1]}
($(x41)+(0,\v)$) node (x51) {[(1_{FX,G}1_{GFX})_H 1]1}
($(x51)+(\h/2,0)$) node (x52) {1_{FX,HG}1}
($(x52)+(\h/2,0)$) node (x53) {1[1_{FX,HG}(11)]}
($(x51)+(0,\v)$) node (x61) {(1_{FX,G}1_{GFX})_H (11)}
($(x61)+(\h/2,0)$) node (x62) {(1_{FX,HG}1)1}
($(x62)+(\h/2,0)$) node (x63) {1[1_{FX,HG}(1_{GFX,H}1)]}
($(x61)+(0,\v)$) node (x71) {(1_{FX,HG} 1_{GFX,H})1}
($(x71)+(\h,0)$) node (x73) {1_{FX,HG}(1_{GFX,H}1)}
;
\draw[1cell]
(x11) edge node[swap] {\ellinv 1} (x21)
(x21) edge node[swap] {(\Hzero 1)1} (x31)
(x31) edge node[swap] {(\ellinv_H 1)1} (x41)
(x41) edge node[swap] {[(\Gzero 1)_H 1]1} (x51)
(x51) edge node[swap] {a} (x61)
(x61) edge node[swap] {\Htwoinv r} (x71)
(x13) edge node {1\ellinv} (x23)
(x23) edge node {1(\Hzero 1)} (x33)
(x33) edge node {1(\Gzero_H 1)} (x43)
(x43) edge node {1(1\ellinv)} (x53)
(x53) edge node {1[1(\Hzero 1)]} (x63)
(x73) edge node[swap] {\ellinv} (x63)
(x11) edge node {\ell} (x12)
(x12) edge node {\rinv} (x13)
(x21) edge node {r} (x22)
(x23) edge node[swap] {\ell} (x22)
(x31) edge node {r} (x32)
(x33) edge node[swap] {\ell} (x32)
(x43) edge node[swap] {\ell} (x52)
(x22) edge node {\Hzero 1} (x32)
(x32) edge node {\Gzero_H 1} (x52)
(x71) edge node[pos=.65] {a} (x73)
(x61) edge node {r_H r} (x52)
(x52) edge node {\rinv 1} (x62)
(x62) edge node[pos=.4] {(1\Hzero)1} (x71)
;
\end{tikzpicture}\]
In the previous diagram, the bottom left parallelogram is commutative by the lax right unity \eqref{f0-bicat} of $H$. The other sub-diagrams are commutative by the axioms and properties in \Cref{conv:large-diagram}.
\end{proof}
\begin{lemma}\label{tricatofbicat-associator-laxnat}
The construction $a$ in \eqref{tricatofbicat-associator} satisfies the lax naturality axiom \eqref{2-cell-transformation}.
\end{lemma}
\begin{proof}
The lax naturality axiom for $a$ means that, for strong transformations $\alpha$, $\alpha'$, $\beta$, $\beta'$, $\gamma$, and $\gamma'$ as in
\[\begin{tikzpicture}[xscale=2.3, yscale=1.4]
\def\h{1} \def\q{75} \def\l{2}
\draw[0cell]
(0,0) node (x11) {\A_1}
($(x11)+(1,0)$) node (x12) {\A_2}
($(x12)+(1,0)$) node (x13) {\A_3}
($(x13)+(1,0)$) node (x14) {\A_4,}
;
\draw[1cell]
(x11) edge[bend left=\q, looseness=\l] node (f) {F} (x12)
(x11) edge node[pos=.2] {F'} (x12)
(x11) edge[bend right=\q, looseness=\l] node[swap] (fpp) {F''} (x12)
(x12) edge[bend left=\q, looseness=\l] node (g) {G} (x13)
(x12) edge node[pos=.2] {G'} (x13)
(x12) edge[bend right=\q, looseness=\l] node[swap] (gpp) {G''} (x13)
(x13) edge[bend left=\q, looseness=\l] node (h) {H} (x14)
(x13) edge node[pos=.2] {H'} (x14)
(x13) edge[bend right=\q, looseness=\l] node[swap] (hpp) {H''} (x14)
;
\draw[2cell]
node[between=f and fpp at .3, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
node[between=f and fpp at .7, rotate=-90, 2label={above,\alpha'}] {\Rightarrow}
node[between=g and gpp at .3, rotate=-90, 2label={above,\beta}] {\Rightarrow}
node[between=g and gpp at .7, rotate=-90, 2label={above,\beta'}] {\Rightarrow}
node[between=h and hpp at .3, rotate=-90, 2label={above,\gamma}] {\Rightarrow}
node[between=h and hpp at .7, rotate=-90, 2label={above,\gamma'}] {\Rightarrow}
;
\end{tikzpicture}\]
the diagram of modifications
\[\begin{tikzpicture}[xscale=6.5, yscale=1.5]
\def\h{1} \def\v{-1} \def\q{15}
\draw[0cell]
(0,0) node (x1) {\big[\gamma'\gamma \tensor(\beta'\beta \tensor \alpha'\alpha)\big]1_{HGF}}
($(x1)+(-\h/2,\v)$) node (x21) {\big[\big(\gamma'\tensor(\beta'\tensor\alpha')\big) \big(\gamma\tensor(\beta\tensor\alpha)\big)\big]1_{HGF}}
($(x21)+(\h,0)$) node (x22) {1_{H''G''F''}\big[(\gamma'\gamma\tensor\beta'\beta)\tensor \alpha'\alpha\big]}
($(x21)+(0,\v)$) node (x31) {\big(\gamma'\tensor(\beta'\tensor\alpha')\big) \big[\big(\gamma\tensor(\beta\tensor\alpha)\big) 1_{HGF}\big]}
($(x31)+(\h,0)$) node (x32) {1_{H''G''F''}\big[\big((\gamma'\tensor\beta')\tensor\alpha'\big) \big((\gamma\tensor\beta)\tensor\alpha\big)\big]}
($(x31)+(0,\v)$) node (x41) {\big(\gamma'\tensor(\beta'\tensor\alpha')\big) \big[1_{H'G'F'}\big((\gamma\tensor\beta)\tensor\alpha\big)\big]}
($(x41)+(\h,0)$) node (x42) {\big[1_{H''G''F''}\big((\gamma'\tensor\beta')\tensor\alpha'\big)\big] \big((\gamma\tensor\beta)\tensor\alpha\big)}
($(x41)+(\h/2,\v)$) node (x5) {\big[\big(\gamma'\tensor(\beta'\tensor\alpha')\big) 1_{H'G'F'}\big] \big((\gamma\tensor\beta)\tensor\alpha\big)}
\draw[1cell]
(x21) edge[bend left=\q] node {[\tensor(1\times\tensor)]^2} (x1)
(x1) edge[bend left=\q] node {a_{\gamma'\gamma,\beta'\beta,\alpha'\alpha}} (x22)
(x21) edge node[swap] {a} (x31)
(x31) edge node[swap] {1*a_{\gamma,\beta,\alpha}} (x41)
(x41) edge[bend right=\q] node[swap,pos=.3] {\ainv} (x5)
(x5) edge[bend right=\q] node[swap,pos=.7] {a_{\gamma',\beta',\alpha'}*1} (x42)
(x42) edge node[swap] {a} (x32)
(x32) edge node[swap] {1*[\tensor(\tensor\times 1)]^2} (x22)
;
\end{tikzpicture}\]
is commutative. Evaluating at an object $X\in\A_1$, and using \eqref{modification-hcomp}, \eqref{lax-functors-comp-two}, \eqref{tensortwo-x}, and \eqref{tricatofbicat-ass-iicell-comp} to expand the boundary, the previous diagram yields a diagram $D$ in $\A_4(HGFX,H''G''F''X)$ consisting of 32 $2$-cells. As in \Cref{tensortwo-modification,tricatofbicat-associator-modax}, the commutativity of the diagram $D$ is proved by dividing it into sub-diagrams. Each sub-diagram is commutative by the lax associativity \eqref{f2-bicat} of $H''$, or the axioms and properties in \Cref{conv:large-diagram}, with the following two exceptions.
With the abbreviations $\alpha'=\alpha'_{X,H''G''}$, $\beta = \beta_{FX,H'}$, $\gamma = \gamma_{GFX}$, and $\gamma'=\gamma'_{G'FX}$, the first exception is the sub-diagram
\[\begin{tikzpicture}[xscale=6.5, yscale=1.5]
\def\h{1} \def\v{-1} \def\q{15}
\draw[0cell]
(0,0) node (x1) {\big[\alpha'\big((\beta'_{F'X,H''}\alpha_{X,H''G'})\gamma'\big)\big](\beta\gamma)}
($(x1)+(-\h/2,\v)$) node (x21) {\big[\alpha'\big(\beta'_{F'X,H''}(\alpha_{X,H''G'}\gamma')\big)\big](\beta\gamma)}
($(x21)+(\h,0)$) node (x22) {\big[\alpha'\big((\beta'_{F'X}\alpha_{X,G'})_{H''}\gamma'\big)\big](\beta\gamma)}
($(x21)+(0,\v)$) node (x31) {\big[\alpha'\big(\beta'_{F'X,H''}(\gamma'_{G'F'X}\alpha_{X,H'G'})\big)\big](\beta\gamma)}
($(x31)+(\h,0)$) node (x32) {\big[\alpha'\big((\alpha_{X,G''}\beta'_{FX})_{H''}\gamma'\big)\big](\beta\gamma)}
($(x31)+(0,\v)$) node (x41) {\big[\alpha'\big((\beta'_{F'X,H''}\gamma'_{G'F'X})\alpha_{X,H'G'}\big)\big](\beta\gamma)}
($(x41)+(\h,0)$) node (x42) {\big[\alpha'\big((\alpha_{X,H''G''}\beta'_{FX,H''})\gamma'\big)\big](\beta\gamma)}
($(x41)+(\h/2,\v)$) node (x5) {\big[\alpha'\big(\alpha_{X,H''G''}(\beta'_{FX,H''}\gamma')\big)\big](\beta\gamma)}
\draw[1cell]
(x21) edge[bend left=\q] node {(1\ainv)1} (x1)
(x1) edge[bend left=\q] node {[1(\Hpptwo 1)]1} (x22)
(x22) edge node {[1(\betapinv_{\alpha_X,H''} 1)]1} (x32)
(x21) edge node[swap] {[1(1\gamma'_{\alpha_{X,G'}})]1} (x31)
(x31) edge node[swap] {(1\ainv)1} (x41)
(x41) edge[bend right=\q] node[swap,pos=.3] {[1(\gamma'\tensor\beta')^{-1}_{\alpha_X}]1} (x5)
(x5) edge[bend right=\q] node[swap,pos=.7] {(1\ainv)1} (x42)
(x42) edge node[swap] {[(1\Hpptwo)1]1} (x32)
;
\end{tikzpicture}\]
that is commutative by the definition \eqref{composite-tr-iicell-eq} of $(\gamma'\tensor\beta')_{\alpha_X}$.
Next, with the abbreviations $\alpha'=\alpha'_{X,G''}$, $\beta=\beta_{FX}$, $\beta'=\beta'_{F'X}$, and $\gamma = \gamma_{GFX}$, the other exception is the sub-diagram
\[\begin{tikzpicture}[xscale=6.5, yscale=1.5]
\def\h{1} \def\v{-1} \def\q{15}
\draw[0cell]
(0,0) node (x1) {\big[(\alpha'\beta')_{H''} \big((\alpha_{X,G'}\beta)_{H''}\gamma'_{GFX}\big)\big]\gamma}
($(x1)+(-\h/2,\v)$) node (x21) {\big[(\alpha'\beta')_{H''} \big(\gamma'_{G'F'X}(\alpha_{X,G'}\beta)_{H'}\big)\big]\gamma}
($(x21)+(\h,0)$) node (x22) {\big[(\alpha'\beta')_{H''} \big((\alpha_{X,H''G'}\beta_{H''})\gamma'_{GFX}\big)\big]\gamma}
($(x21)+(0,\v)$) node (x31) {\big[(\alpha'\beta')_{H''} \big(\gamma'_{G'F'X}(\alpha_{X,H'G'}\beta_{H'})\big)\big]\gamma}
($(x31)+(\h,0)$) node (x32) {\big[(\alpha'\beta')_{H''} \big(\alpha_{X,H''G'}(\beta_{H''}\gamma'_{GFX})\big)\big]\gamma}
($(x31)+(0,\v)$) node (x41) {\big[(\alpha'\beta')_{H''} \big((\gamma'_{G'F'X}\alpha_{X,H'G'})\beta_{H'}\big)\big]\gamma}
($(x41)+(\h,0)$) node (x42) {\big[(\alpha'\beta')_{H''} \big(\alpha_{X,H''G'}(\gamma'_{G'FX}\beta_{H'})\big)\big]\gamma}
($(x41)+(\h/2,\v)$) node (x5) {\big[(\alpha'\beta')_{H''} \big((\alpha_{X,H''G'}\gamma'_{G'FX})\beta_{H'}\big)\big]\gamma}
\draw[1cell]
(x21) edge[bend left=\q] node {(1\gammapinv_{\alpha_{X,G'}\beta})1} (x1)
(x1) edge[bend left=\q] node {[1(\Hpptwoinv 1)]1} (x22)
(x22) edge node {(1a)1} (x32)
(x32) edge node {[1(1\gamma'_{\beta})]1} (x42)
(x21) edge node[swap] {[1(1\Hptwoinv)]1} (x31)
(x31) edge node[swap] {(1\ainv)1} (x41)
(x41) edge[bend right=\q] node[swap,pos=.3] {[1(\gammapinv_{\alpha_{X,G'}}1)]1} (x5)
(x5) edge[bend right=\q] node[swap,pos=.7] {(1a)1} (x42)
;
\end{tikzpicture}\]
that is commutative by the lax naturality \eqref{2-cell-transformation} of $\gamma'$ applied to the composable $1$-cells
\[\begin{tikzcd}
GFX \ar{r}{\beta_{FX}} & G'FX \ar{r}{\alpha_{X,G'}} & G'F'X
\end{tikzcd}\]
in $\A_3$.
\end{proof}
\begin{lemma}\label{tricatofbicat-ass-icell}
The construction $a$ in \eqref{tricatofbicat-associator} is a $1$-cell
\[\begin{tikzcd}
\tensor(\tensor\times 1) \ar{r}{a} & \tensor(1\times\tensor)
\end{tikzcd}\]
in $\Bicatps\big(\bicata^3_{[1,4]},\bicata_{1,4}\big)$.
\end{lemma}
\begin{proof}
The assertion means that $a$ is a strong transformation. We checked that $a$ has well-defined invertible component $2$-cells in \Cref{tricatofbicat-associator-modax}. The lax transformation axioms are checked in \Cref{tricatofbicat-associator-iicell-nat,tricatofbicat-associator-laxunity,tricatofbicat-associator-laxnat}.
\end{proof}
Next we define a right adjoint of $a$.
\begin{definition}\label{def:ass-right-adjoint}
Suppose $\A_1,\A_2,\A_3$, and $\A_4$ are bicategories. Define $\abdot$ as in
\begin{equation}\label{ass-adjiont}
\begin{tikzpicture}[xscale=3, yscale=1.4, baseline={(A.base)}]
\def\h{1} \def\q{45}
\draw[0cell]
(0,0) node (A) {\bicata^3_{[1,4]}}
($(A)+(1,0)$) node (B) {\bicata_{1,4}}
;
\draw[1cell]
(A) edge[bend left=\q] node {\tensor(\tensor\times 1)} (B)
(A) edge[bend right=\q] node[swap] {\tensor(1\times\tensor)} (B)
;
\draw[2cell]
node[between=A and B at .45, rotate=90, 2label={below,\abdot}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
with the following components.
\begin{description}
\item[Component $1$-Cells] For pseudofunctors
\[\begin{tikzcd}
\A_1 \ar{r}{F} & \A_2 \ar{r}{G} & \A_3 \ar{r}{H} & \A_4,
\end{tikzcd}\]
$\abdot$ has a component $1$-cell
\[\begin{tikzpicture}[xscale=7, yscale=1.4, baseline={(A.base)}]
\def\h{1}
\draw[0cell]
(0,0) node (A) {\big(\tensor(1\times\tensor)\big)(H,G,F) = HGF}
($(A)+(1,0)$) node (B) {HGF = \big(\tensor(\tensor\times 1)\big)(H,G,F)}
;
\draw[1cell]
(A) edge node {\abdot_{H,G,F} ~=~ 1_{HGF}} (B)
;
\end{tikzpicture}\]
which is the identity strong transformation of the composite $HGF \in \bicata_{1,4}$.
\item[Component $2$-Cells] For strong transformations
\[\begin{tikzpicture}[xscale=2.3, yscale=1.4]
\def\h{1} \def\q{45}
\draw[0cell]
(0,0) node (A) {\A_1}
($(A)+(1,0)$) node (B) {\A_2}
($(B)+(1,0)$) node (C) {\A_3}
($(C)+(1,0)$) node (D) {\A_4,}
;
\draw[1cell]
(A) edge[bend left=\q] node {F} (B)
(A) edge[bend right=\q] node[swap] {F'} (B)
(B) edge[bend left=\q] node {G} (C)
(B) edge[bend right=\q] node[swap] {G'} (C)
(C) edge[bend left=\q] node {H} (D)
(C) edge[bend right=\q] node[swap] {H'} (D)
;
\draw[2cell]
node[between=A and B at .45, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
node[between=B and C at .45, rotate=-90, 2label={above,\beta}] {\Rightarrow}
node[between=C and D at .45, rotate=-90, 2label={above,\gamma}] {\Rightarrow}
;
\end{tikzpicture}\]
$\abdot$ has a component $2$-cell in $\bicata_{1,4}(HGF,H'G'F')$, i.e., a modification
\[\begin{tikzpicture}[xscale=3, yscale=1.7, baseline={(a.base)}]
\def\h{1} \def\v{-1}
\draw[0cell]
(0,0) node (x11) {HGF}
($(x11)+(\h,0)$) node (x12) {H'G'F'}
($(x11)+(0,\v)$) node (x21) {HGF}
($(x21)+(\h,0)$) node (x22) {H'G'F'}
;
\draw[1cell]
(x11) edge node {\gamma\tensor(\beta\tensor\alpha)} (x12)
(x11) edge node[swap] (a) {\abdot_{H,G,F}\,=\, 1_{HGF}} (x21)
(x12) edge node {1_{H'G'F'}\,=\, \abdot_{H',G',F'}} (x22)
(x21) edge node[swap] {(\gamma\tensor\beta)\tensor\alpha} (x22)
;
\draw[2cell]
node[between=x11 and x22 at .65, rotate=40, 2label={above,\abdot_{\gamma,\beta,\alpha}}] {\Rightarrow}
;
\end{tikzpicture}\]
given by the vertical composite $2$-cell
\begin{equation}\label{ass-adjoint-iicell-comp}
\begin{tikzpicture}[xscale=6, yscale=1.5, baseline={(x21.base)}]
\def\h{1} \def\v{-1}
\draw[0cell]
(0,0) node (x11) {\big(\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})\big)1_{HGFX}}
($(x11)+(\h,0)$) node (x12) {1_{H'G'F'X}\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big)}
($(x11)+(0,\v)$) node (x21) {\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})}
($(x21)+(\h,0)$) node (x22) {(\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}}
($(x21)+(\h/2,\v)$) node (x3) {(\alpha_{X,H'G'}\beta_{FX,H'})\gamma_{GFX}}
;
\draw[1cell]
(x11) edge node {(\abdot_{\gamma,\beta,\alpha})_X} (x12)
(x11) edge node[swap] {r} (x21)
(x21) edge node[swap,pos=.4] {\ainv} (x3)
(x3) edge node[swap,pos=.6] {\Hptwo 1} (x22)
(x22) edge node[swap] {\ellinv} (x12)
;
\end{tikzpicture}
\end{equation}
in $\A_4(HGFX,H'G'F'X)$ for each object $X\in\A_1$.
\end{description}
This finishes the definition of $\abdot$.
\end{definition}
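Unwinding the diagram in \eqref{ass-adjoint-iicell-comp}, the component $2$-cell of $\abdot$ is the vertical composite
\[(\abdot_{\gamma,\beta,\alpha})_X \,=\, \ellinv \circ (\Hptwo 1) \circ \ainv \circ r,\]
read from right to left: the right unitor, the inverse associativity constraint in $\A_4$, the lax functoriality constraint of $H'$, and the inverse left unitor.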
\begin{explanation}\label{expl:ass-adjoint}
The component $(\abdot_{\gamma,\beta,\alpha})_X$ in \eqref{ass-adjoint-iicell-comp} is equal to the following two composite $2$-cells
\[\begin{tikzpicture}[xscale=6, yscale=1.5, baseline={(x21.base)}]
\def\h{1} \def\v{-1}
\draw[0cell]
(0,0) node (x11) {\big(\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})\big)1_{HGFX}}
($(x11)+(\h,0)$) node (x12) {1_{H'G'F'X}\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big)}
($(x11)+(0,\v)$) node (x21) {\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})}
($(x21)+(\h,0)$) node (x22) {1_{H'G'F'X}\big((\alpha_{X,H'G'}\beta_{FX,H'})\gamma_{GFX}\big)}
($(x21)+(\h/2,\v)$) node (x3) {1_{H'G'F'X}\big(\alpha_{X,H'G'}(\beta_{FX,H'}\gamma_{GFX})\big)}
;
\draw[1cell]
(x11) edge node {(\abdot_{\gamma,\beta,\alpha})_X} (x12)
(x11) edge node[swap] {r} (x21)
(x21) edge node[swap,pos=.4] {\ellinv} (x3)
(x3) edge node[swap,pos=.6] {1\ainv} (x22)
(x22) edge node[swap] {1(\Hptwo 1)} (x12)
;
\draw[0cell]
($(x11)+(0,1)$) node (y21) {\big((\alpha_{X,H'G'}\beta_{FX,H'})\gamma_{GFX}\big)1_{HGFX}}
($(y21)+(1,0)$) node (y22) {(\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}}
($(y21)+(1/2,1)$) node (y3) {\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma_{GFX}\big)1_{HGFX}}
;
\draw[1cell]
(x11) edge node {\ainv 1} (y21)
(y21) edge node[pos=.3] {(\Hptwo 1)1} (y3)
(y3) edge node {r} (y22)
(y22) edge node {\ellinv} (x12)
;
\end{tikzpicture}\]
by the naturality \eqref{unitor-naturality} of $r$ and $\ellinv$. Moreover, in each composite, the order in which $r$ and $\ellinv$ are applied may be interchanged.
\end{explanation}
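For reference, the naturality \eqref{unitor-naturality} of $r$ used above states that, for each $2$-cell $\theta : f \to f'$ in $\A_4$, the diagram
\[\begin{tikzcd}
f1 \ar{r}{\theta * 1} \ar{d}[swap]{r} & f'1 \ar{d}{r}\\
f \ar{r}{\theta} & f'
\end{tikzcd}\]
is commutative, and similarly for $\ell$ and the inverses $\rinv$ and $\ellinv$.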
\begin{lemma}\label{ass-adjoint-icell}
The construction $\abdot$ in \eqref{ass-adjiont} is a $1$-cell
\[\begin{tikzcd}
\tensor(1\times\tensor) \ar{r}{\abdot} & \tensor(\tensor\times 1)
\end{tikzcd}\]
in $\Bicatps\big(\bicata^3_{[1,4]},\bicata_{1,4}\big)$.
\end{lemma}
\begin{proof}
The assertion means that $\abdot$ is a strong transformation. The proof is similar to the case for $a$ in \Cref{tricatofbicat-associator-modax,tricatofbicat-associator-iicell-nat,tricatofbicat-associator-laxunity,tricatofbicat-associator-laxnat}. We ask the reader to write down the details in \Cref{exer:ass-adjoint-icell}.
\end{proof}
Our next objective is to show that $(a,\abdot)$ is part of an adjoint equivalence with left adjoint $a$ and right adjoint $\abdot$. So we need to define the unit and the counit, and check the triangle identities.
\begin{definition}\label{def:tricatofbicat-ass-unit}
Define the assignments
\begin{equation}\label{tricatofbicat-ass-unit}
\begin{tikzcd}
1_{\tensor(\tensor\times 1)} \ar{r}{\etaa} & \abdot a\end{tikzcd}
\andspace
\begin{tikzcd}
a\abdot \ar{r}{\epza} & 1_{\tensor(1\times\tensor)}\end{tikzcd}
\end{equation}
as follows. For each object $(H,G,F) \in \bicata^3_{[1,4]}$ as in \eqref{functors-fgh} and each object $X \in \A_1$, $\etaa$ and $\epza$ have the component $2$-cells
\begin{equation}\label{tricatbofbicat-ass-unit-comp}
\begin{tikzpicture}[xscale=7.5, yscale=1]
\def\h{1} \def\v{-1}
\draw[0cell]
(0,0) node (x11) {\big(1_{\tensor(\tensor\times 1)}\big)_{(H,G,F),X} = 1_{HGFX}}
($(x11)+(\h,0)$) node (x12) {1_{HGFX}1_{HGFX} = (\abdot a)_{(H,G,F),X},}
($(x11)+(0,\v)$) node (x21) {(a\abdot)_{(H,G,F),X} = 1_{HGFX}1_{HGFX}}
($(x21)+(\h,0)$) node (x22) {1_{HGFX} = \big(1_{\tensor(1\times\tensor)}\big)_{(H,G,F),X}}
($(x21)+(1,0)$) node (x22) {1_{HGFX} = \big(1_{\tensor(1\times\tensor)}\big)_{(H,G,F),X}}
;
\draw[1cell]
(x11) edge node {\etaa_{(H,G,F),X}~=~ \ellinv_{1_{HGFX}}} (x12)
(x21) edge node {\epza_{(H,G,F),X}~=~ \ell_{1_{HGFX}}} (x22)
;
\end{tikzpicture}
\end{equation}
in $\A_4(HGFX,HGFX)$. This finishes the definitions of $\etaa$ and $\epza$.
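Since $\ell_{1_{HGFX}} = r_{1_{HGFX}}$ by \Cref{bicat-l-equals-r}, the component $2$-cells in \eqref{tricatbofbicat-ass-unit-comp} can equivalently be written as
\[\etaa_{(H,G,F),X} = \rinv_{1_{HGFX}} \andspace \epza_{(H,G,F),X} = r_{1_{HGFX}}.\]
This equality is used again when the triangle identities are checked in \Cref{ass-adjoint-equivalence}.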
\end{definition}
\begin{lemma}\label{etaa-iicell}
In \Cref{def:tricatofbicat-ass-unit}:
\begin{enumerate}
\item For each object $(H,G,F) \in \bicata^3_{[1,4]}$, $\etaa_{(H,G,F)}$ is an invertible $2$-cell in $\bicata_{1,4}$.
\item $\etaa$ is an invertible $2$-cell in $\Bicatps\big(\bicata^3_{[1,4]},\bicata_{1,4}\big)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first assertion means that $\etaa_{(H,G,F)}$ is an invertible modification. Since each component of $\ellinv$ is an invertible $2$-cell in $\A_4$, it remains to check the modification axiom \eqref{modification-axiom-pasting}. Using \eqref{idlaxtr-component}, \eqref{transf-hcomp-iicell}, and \eqref{tricatbofbicat-ass-unit-comp} to expand the boundary, the modification axiom for $\etaa_{(H,G,F)}$ is the commutativity around the boundary of the following diagram in $\A_4(HGFX,HGFY)$ for each $1$-cell $f\in\A_1(X,Y)$.
\[\begin{tikzpicture}[xscale=7.5, yscale=1.3]
\def\h{1} \def\v{-1} \def\q{0}
\draw[0cell]
(0,0) node (x11) {f_{HGF} 1_{HGFX}}
($(x11)+(\h,0)$) node (x12) {f_{HGF} (1_{HGFX} 1_{HGFX})}
($(x11)+(0,\v)$) node (x21) {f_{HGF}}
($(x21)+(\h,0)$) node (x22) {(f_{HGF} 1_{HGFX}) 1_{HGFX}}
($(x21)+(0,\v)$) node (x31) {1_{HGFY}f_{HGF}}
($(x31)+(\h,0)$) node (x32) {f_{HGF}1_{HGFX}}
($(x31)+(0,\v)$) node (x41) {(1_{HGFY}1_{HGFY}) f_{HGF}}
($(x41)+(\h,0)$) node (x42) {(1_{HGFY}f_{HGF})1_{HGFX}}
($(x41)+(0,\v)$) node (x51) {1_{HGFY}(1_{HGFY} f_{HGF})}
($(x51)+(\h,0)$) node (x52) {1_{HGFY} (f_{HGF}1_{HGFX})}
($(x51)+(\h/2,0)$) node (x6) {1_{HGFY} f_{HGF}}
;
\draw[1cell]
(x11) edge node[swap] {r} (x21)
(x21) edge node[swap] {\ellinv} (x31)
(x31) edge node[swap] {\ellinv_1 1} (x41)
(x11) edge node {1\ellinv_1} (x12)
(x12) edge node {\ainv} (x22)
(x22) edge node {r1} (x32)
(x32) edge node {\ellinv 1} (x42)
(x42) edge node {a} (x52)
(x52) edge[bend left=\q] node {1r} (x6)
(x6) edge[bend left=\q] node {1\ellinv} (x51)
(x51) edge node {\ainv} (x41)
(x11) edge node (B) {1} (x32)
(x42) edge node[swap] (A) {r} (x31)
(x42) edge node[swap] {r} (x6)
(x6) edge node[swap] {\rinv_1 1} (x41)
;
\draw[0cell] ($(A)!.45!(B)$) node {(\clubsuit)};
\end{tikzpicture}\]
The middle trapezoid $(\clubsuit)$ is commutative by the naturality of $r$. The other four sub-diagrams are commutative by unity as in \Cref{conv:large-diagram}.
The second assertion means that $\etaa$ is an invertible modification. Since each component $\etaa_{(H,G,F)}$ is an invertible $2$-cell in $\bicata_{1,4}$, it remains to check the modification axiom for $\etaa$. This means that for each $1$-cell $(\gamma,\beta,\alpha)$ in $\bicata^3_{[1,4]}$ as in \eqref{transformation-abg}, we need to check the commutativity of the diagram
\[\begin{tikzpicture}[xscale=6.5, yscale=1.3]
\def\h{1} \def\v{-1}
\draw[0cell]
(0,0) node (x11) {\big((\gamma\tensor\beta)\tensor\alpha\big) 1_{HGF}}
($(x11)+(\h,0)$) node (x12) {\big((\gamma\tensor\beta)\tensor\alpha\big) (1_{HGF} 1_{HGF})}
($(x11)+(0,\v)$) node (x2) {(\gamma\tensor\beta)\tensor\alpha}
($(x2)+(0,\v)$) node (x31) {1_{H'G'F'} \big((\gamma\tensor\beta)\tensor\alpha\big)}
($(x31)+(\h,0)$) node (x32) {(1_{H'G'F'}1_{H'G'F'}) \big((\gamma\tensor\beta)\tensor\alpha\big)}
;
\draw[1cell]
(x11) edge node[swap] {r} (x2)
(x2) edge node[swap] {\ellinv} (x31)
(x31) edge node {\etaa_{(H',G',F')}*1} (x32)
(x11) edge node {1*\etaa_{(H,G,F)}} (x12)
(x12) edge node {(\abdot a)_{(\gamma,\beta,\alpha)}} (x32)
;
\end{tikzpicture}\]
of $2$-cells (i.e., modifications) in $\bicata_{1,4}(HGF,H'G'F')$. Evaluating at an object $X\in\A_1$, using the abbreviations $1=1_{HGFX}$, $1'=1_{H'G'F'X}$, $\alpha = \alpha_{X,H'G'}$, $\beta=\beta_{FX,H'}$, and $\gamma=\gamma_{GFX}$, and using \eqref{transf-hcomp-iicell}, \eqref{tricatofbicat-ass-iicell-comp}, and \eqref{ass-adjoint-iicell-comp} to expand the boundary, the previous diagram yields the boundary of the following diagram in $\A_4(HGFX,H'G'F'X)$.
\[\begin{tikzpicture}[xscale=8, yscale=1.3]
\def\h{1} \def\v{-1} \def\q{15}
\draw[0cell]
(0,0) node (x0) {\big(\alpha(\beta\gamma)\big)(11)}
($(x0)+(-\h/2,0)$) node (x11) {\big(\alpha(\beta\gamma)\big)1}
($(x11)+(\h,0)$) node (x12) {\big[\big(\alpha(\beta\gamma)\big)1\big]1}
($(x11)+(0,\v)$) node (x21) {\alpha(\beta\gamma)}
($(x21)+(\h,0)$) node (x22) {\big(\alpha(\beta\gamma)\big)1}
($(x21)+(0,\v)$) node (x31) {1'\big(\alpha(\beta\gamma)\big)}
($(x31)+(\h,0)$) node (x32) {\big((\alpha\beta)\gamma\big)1}
($(x31)+(0,\v)$) node (x41) {(1'1')\big(\alpha(\beta\gamma)\big)}
($(x41)+(\h,0)$) node (x42) {\big((\alpha_{X,G'}\beta_{FX})_{H'}\gamma\big)1}
($(x41)+(0,\v)$) node (x51) {1'\big[1'\big(\alpha(\beta\gamma)\big)\big]}
($(x51)+(\h,0)$) node (x52) {\big[1'\big((\alpha_{X,G'}\beta_{FX})_{H'} \gamma\big)\big]1}
($(x51)+(0,\v)$) node (x61) {1'\big(\alpha(\beta\gamma)\big)}
($(x61)+(\h,0)$) node (x62) {1'\big[\big((\alpha_{X,G'}\beta_{FX})_{H'} \gamma\big)1\big]}
($(x61)+(0,\v)$) node (x71) {1'\big((\alpha\beta)\gamma\big)}
($(x71)+(\h,0)$) node (x72) {1'\big[(\alpha_{X,G'}\beta_{FX})_{H'} (\gamma1)\big]}
($(x21)+(\h/2,0)$) node (m1) {1'\big[\big(\alpha(\beta\gamma)\big)1\big]}
($(x41)+(\h/2,0)$) node (m2) {1'\big[\big((\alpha\beta)\gamma\big)1\big]}
($(x61)+(\h/2,0)$) node (m3) {1'\big[(\alpha\beta)(\gamma 1)\big]}
;
\draw[1cell]
(x11) edge node[swap] {r} (x21)
(x21) edge node[swap] {\ellinv} (x31)
(x31) edge node[swap] {\ellinv 1} (x41)
(x11) edge node {1\ellinv} (x0)
(x0) edge node {\ainv} (x12)
(x12) edge node {r1} (x22)
(x22) edge node {\ainv 1} (x32)
(x32) edge node {(\Hptwo 1)1} (x42)
(x42) edge node {\ellinv 1} (x52)
(x52) edge node {a} (x62)
(x62) edge node {1a} (x72)
(x72) edge node {1(\Hptwoinv r)} (x71)
(x71) edge node {1a} (x61)
(x61) edge node {1\ellinv} (x51)
(x51) edge node {\ainv} (x41)
(x11) edge node[swap,pos=.4] {\ellinv} (m1)
(x11) edge node[pos=.7] {1} (x22)
(m1) edge node[pos=.4] {1(\ainv 1)} (m2)
(m2) edge node {1a} (m3)
(x32) edge node[swap] {\ellinv} (m2)
(m3) edge node[pos=.3] {1(1r)} (x71)
(m3) edge node[swap,pos=.2] {1(\Hptwo 1)} (x72)
;
\end{tikzpicture}\]
In the previous diagram, each sub-diagram is commutative by the naturality and unity properties in \Cref{conv:large-diagram}.
\end{proof}
\begin{lemma}\label{epza-iicell}
In \Cref{def:tricatofbicat-ass-unit}:
\begin{enumerate}
\item For each object $(H,G,F) \in \bicata^3_{[1,4]}$, $\epza_{(H,G,F)}$ is an invertible $2$-cell in $\bicata_{1,4}$.
\item $\epza$ is an invertible $2$-cell in $\Bicatps\big(\bicata^3_{[1,4]},\bicata_{1,4}\big)$.
\end{enumerate}
\end{lemma}
\begin{proof}
This is similar to the proof of \Cref{etaa-iicell}. We ask the reader to check it in \Cref{exer:epza-iicell}.
\end{proof}
\begin{proposition}\label{ass-adjoint-equivalence}\index{tricategory of bicategories!associator}
In the bicategory $\Bicatps\big(\bicata^3_{[1,4]},\bicata_{1,4}\big)$, the quadruple \[(a,\abdot,\etaa,\epza)\] with
\begin{itemize}
\item $a : \tensor(\tensor\times 1)\to \tensor(1\times\tensor)$ in \eqref{tricatofbicat-associator},
\item $\abdot : \tensor(1\times\tensor)\to \tensor(\tensor\times 1)$ in \eqref{ass-adjiont}, and
\item $\etaa : 1_{\tensor(\tensor\times 1)} \to \abdot a$ and $\epza : a\abdot \to 1_{\tensor(1\times\tensor)}$ in \eqref{tricatofbicat-ass-unit},
\end{itemize}
is an adjoint equivalence.
\end{proposition}
\begin{proof}
By \Cref{tricatofbicat-ass-icell,ass-adjoint-icell}, $a$ and $\abdot$ are $1$-cells. By \Cref{etaa-iicell,epza-iicell}, $\etaa$ and $\epza$ are invertible $2$-cells. It remains to check the triangle identities in \eqref{diagram:triangles}. The left triangle identity is the commutativity of the leftmost diagram in \eqref{eq:extra-1} below; a diagram of $2$-cells in $\Bicatps\big(\bicata^3_{[1,4]},\bicata_{1,4}\big)$.
\begin{equation}\label{eq:extra-1}\begin{tikzcd}
a1_{\tensor(\tensor\times 1)} \ar{d}[swap]{r} \ar{r}{1*\etaa} & a(\abdot a) \ar{r}{\ainv} & (a\abdot) a \ar{d}{\epza*1}\\
a && 1_{\tensor(1\times \tensor)}a \ar{ll}[swap]{\ell}
\end{tikzcd}\qquad
\begin{tikzcd}
11 \ar{d}[swap]{r} \ar{r}{1\ellinv} \ar[bend right=20]{rr}[swap,near start]{\rinv 1} & 1(11) \ar{r}{\ainv} & (11)1 \ar{d}{\ell 1} \\
1 && 11 \ar{ll}[swap]{\ell}
\end{tikzcd}\end{equation}
Evaluating at an object $(H,G,F) \in \bicata^3_{[1,4]}$ as in \eqref{functors-fgh} and an object $X\in\A_1$, and abbreviating $1_{HGFX}$ to $1$, the diagram on the left of \eqref{eq:extra-1} yields the outer diagram on the right of \eqref{eq:extra-1}; a diagram in $\A_4(HGFX,HGFX)$. In the right diagram, the top sub-diagram is commutative by the unity axiom \eqref{bicat-unity}, and the bottom sub-diagram is commutative by the equality $\ell_1=r_1$ in \Cref{bicat-l-equals-r}.
The right triangle identity is the commutativity of the leftmost diagram in \eqref{eq:extra-2} below; a diagram of $2$-cells in $\Bicatps\big(\bicata^3_{[1,4]},\bicata_{1,4}\big)$.
\begin{equation}\label{eq:extra-2}\begin{tikzcd}
1_{\tensor(\tensor\times 1)}\abdot \ar{d}[swap]{\ell} \ar{r}{\etaa*1} & (\abdot a)\abdot \ar{r}{a} & \abdot(a\abdot) \ar{d}{1*\epza}\\
\abdot && \abdot 1_{\tensor(1\times \tensor)} \ar{ll}[swap]{r}
\end{tikzcd}\qquad
\begin{tikzcd}
11\ar{d}[swap]{\ell} \ar{r}{\ellinv 1} \ar[bend right=20]{rr}[swap,near start]{\ellinv} & (11)1 \ar{r}{a} & 1(11) \ar{d}{1\ell} \\
1 && 11 \ar{ll}[swap]{r}
\end{tikzcd}\end{equation}
Evaluating at an object $(H,G,F) \in \bicata^3_{[1,4]}$ and an object $X\in\A_1$, the diagram on the left of \eqref{eq:extra-2} yields the outer diagram on the right of \eqref{eq:extra-2}; a diagram in $\A_4(HGFX,HGFX)$. In the right diagram, the top sub-diagram is commutative by the left unity property in \Cref{bicat-left-right-unity}, and the bottom sub-diagram is commutative by the equality $\ell_1=r_1$ and the naturality \eqref{unitor-naturality} of $\ell$.
\end{proof}
We call $(a,\abdot,\etaa,\epza)$ the \emph{associator}.
\section{The Other Structures}\label{sec:tricat-other}
In this section we define the rest of the structures in the tricategory of bicategories. We define:
\begin{itemize}
\item the identities as in \eqref{tricat-identity} in \Cref{def:tricatofbicat-identities};
\item the left unitor as in \eqref{tricategory-unitors} in \Cref{def:tricatofbicat-left-unitor,def:tricatofbicat-ell-adjoint,def:ell-unit};
\item the right unitor as in \eqref{tricategory-unitors} in \Cref{def:tricatofbicat-r,def:r-adjoint,def:r-unit};
\item the pentagonator as in \eqref{tricategory-pentagonator} in \Cref{def:tricatofbicat-pentagonator};
\item the middle, left, and right $2$-unitors as in \eqref{tricategory-iiunitors} in \Cref{def:tricatofbicat-mid-iiunitor,def:tricatofbicat-left-iiunitor,def:tricatofbicat-right-iiunitor}, respectively.
\end{itemize}
\begin{definition}\label{def:tricatofbicat-identities}\index{tricategory of bicategories!identity}
For each bicategory $\A$, define an assignment
\begin{equation}\label{tricatof-bicat-id}
\begin{tikzcd}
\boldone \ar{r}{1_{\A}} & \Bicatps(\A,\A)\end{tikzcd}
\end{equation}
called the \emph{identity of $\A$}, as follows.
\begin{description}
\item[Object] The unique object $*\in\boldone$ is sent to the identity strict functor \[1_{\A} \in \Bicatps(\A,\A)\] of $\A$ in \Cref{ex:identity-strict-functor}.
\item[$1$-Cell] The unique identity $1$-cell $1_*$ in $\boldone$ is sent to the identity strong transformation
\[\begin{tikzcd}
1_{\A} \ar{r}{1_{1_{\A}}} & 1_{\A}\end{tikzcd}\]
of $1_{\A}$ in \Cref{id-lax-transformation}.
\item[$2$-Cell] The unique identity $2$-cell in $\boldone$ is sent to the identity modification of $1_{1_{\A}}$ in \Cref{def:modification-composition}. In other words, its component at an object $X\in\A$ is the identity $2$-cell $1_{1_X} \in \A(X,X)$ of the identity $1$-cell $1_X$.
\item[Lax Unity Constraint] $1_{\A}^0$ is the identity modification of $1_{1_{\A}}$.
\item[Lax Functoriality Constraint] It is the modification
\[\begin{tikzcd}
1_{1_{\A}}1_{1_{\A}} \ar{r}{1_{\A}^2} & 1_{1_{\A}}\end{tikzcd}\]
whose component at an object $X\in\A$ is the $2$-cell
\[\begin{tikzcd}
(1_{1_{\A}})_X (1_{1_{\A}})_X = 1_X 1_X \ar{r}{\ell_{1_X}}[swap]{\iso} & 1_X = (1_{1_{\A}})_X \end{tikzcd}\]
in $\A(X,X)$.
\end{description}
This finishes the definition of $1_{\A}$.
\end{definition}
\begin{lemma}\label{tricatofbicat-identities}
$1_{\A}$ in \Cref{def:tricatofbicat-identities} is a pseudofunctor.
\end{lemma}
\begin{proof}
The naturality \eqref{f2-bicat-naturality} of $1_{\A}^2$ follows from the fact that $\boldone$ has no non-identity $2$-cells. The lax left unity axiom \eqref{f0-bicat} holds because both $1_{\A}^0$ and the left unitor in $\boldone$ are identities, while $1_{\A}^2$ is the left unitor in each component. The unity property $\ell_1 = r_1$ in \Cref{bicat-l-equals-r} implies both the lax right unity axiom and the lax associativity axiom \eqref{f2-bicat}.
\end{proof}
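To make the last assertion concrete, here is a componentwise sketch, assuming the standard form of the lax associativity axiom. At an object $X\in\A$, abbreviating $1 = 1_X$, each component of $1_{\A}^2$ is $\ell_{1}$, and $1_{\A}$ sends the identity associator of $\boldone$ to an identity $2$-cell, so the axiom \eqref{f2-bicat} reduces to the equality
\[\ell_{1} \circ (\ell_{1} * 1_{1}) = \ell_{1} \circ (1_{1} * \ell_{1}) \circ a_{1,1,1}\]
of $2$-cells in $\A(X,X)$. The unity axiom \eqref{bicat-unity} gives $(1_{1} * \ell_{1}) \circ a_{1,1,1} = r_{1} * 1_{1}$, which equals $\ell_{1} * 1_{1}$ by \Cref{bicat-l-equals-r}, so the two sides agree.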
The next few definitions are about the left unitor.
\begin{definition}\label{def:tricatofbicat-left-unitor}
For bicategories $\A_1$ and $\A_2$, define an assignment $\ell$ as in
\begin{equation}\label{tricatofbicat-left-unitor}
\begin{tikzpicture}[xscale=4.5, yscale=1.4, baseline={(A.base)}]
\def\q{45}
\draw[0cell]
(0,0) node (A) {\Bicatps(\A_1,\A_2)=\bicata_{1,2}}
($(A)+(1,0)$) node (B) {\bicata_{1,2}=\Bicatps(\A_1,\A_2)}
;
\draw[1cell]
(A) edge[bend left=\q] node {\tensor(1_{\A_2}\times 1)} (B)
(A) edge[bend right=\q] node[swap] {1_{\bicata_{1,2}}} (B)
;
\draw[2cell]
node[between=A and B at .47, rotate=-90, 2label={above,\ell}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
with the following components. In \eqref{tricatofbicat-left-unitor}, $1_{\bicata_{1,2}}$ is the identity strict functor of $\bicata_{1,2}$ in \Cref{ex:identity-strict-functor}, and $1_{\A_2}$ is the identity of $\A_2$ in \eqref{tricatof-bicat-id}.
\begin{description}
\item[Component $1$-Cells] For each pseudofunctor $F : \A_1\to\A_2$, i.e., an object $F\in\bicata_{1,2}$, $\ell$ has a component $1$-cell
\[\begin{tikzpicture}[xscale=5.1, yscale=1.4, baseline={(A.base)}]
\draw[0cell]
(0,0) node (A) {\big(\tensor(1_{\A_2}\times 1)\big)(F) = 1_{\A_2}F = F}
($(A)+(1,0)$) node (B) {F=(1_{\bicata_{1,2}})(F),}
;
\draw[1cell]
(A) edge node {\ell_F ~=~ 1_F} (B)
;
\end{tikzpicture}\]
which is the identity strong transformation of $F$ in \Cref{id-lax-transformation}.
\item[Component $2$-Cells] For each strong transformation $\alpha$ as in \eqref{transformation-abg}, i.e., a $1$-cell in $\bicata_{1,2}(F,F')$, $\ell$ has a component $2$-cell (i.e., modification)
\[\begin{tikzpicture}[xscale=5, yscale=1.7, baseline={(a.base)}]
\draw[0cell]
(0,0) node (x11) {\big(\tensor(1_{\A_2}\times 1)\big)(F) = F}
($(x11)+(1,0)$) node (x12) {F'=\big(\tensor(1_{\A_2}\times 1)\big)(F')}
($(x11)+(0,-1)$) node (x21) {(1_{\bicata_{1,2}})(F)=F}
($(x21)+(1,0)$) node (x22) {F'=(1_{\bicata_{1,2}})(F')}
;
\draw[1cell]
(x11) edge node {1_{1_{\A_2}}\tensor\alpha} (x12)
(x11) edge node[swap] (a) {\ell_F~=~1_F} (x21)
(x12) edge node {1_{F'}~=~\ell_{F'}} (x22)
(x21) edge node[swap] {\alpha} (x22)
;
\draw[2cell]
node[between=x11 and x22 at .4, shift={(-.2,0)}, rotate=40, 2label={above,\ell_{\alpha}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\bicata_{1,2}(F,F')$ given by the $2$-cell
\[\begin{tikzpicture}[xscale=6, yscale=1.4, baseline={(A.base)}]
\draw[0cell]
(0,0) node (A) {(\alpha 1_F)_X = \alpha_X 1_{FX}}
($(A)+(1,0)$) node (B) {1_{F'X}(\alpha_X 1_{FX}) = \big[1_{F'}(1_{1_{\A_2}}\tensor\alpha)\big]_X}
;
\draw[1cell]
(A) edge node {\ell_{\alpha,X} ~=~ \ellinv_{\alpha_X 1_{FX}}} (B)
;
\end{tikzpicture}\]
in $\A_2(FX,F'X)$ for each object $X\in\A_1$.
\end{description}
This finishes the definition of $\ell$.
\end{definition}
\begin{definition}\label{def:tricatofbicat-ell-adjoint}
Using the same notations as in \Cref{def:tricatofbicat-left-unitor}, define an assignment $\ellbdot$ as in
\begin{equation}\label{tricatofbicat-ell-adjoint}
\begin{tikzpicture}[xscale=3, yscale=1.4, baseline={(A.base)}]
\def\q{45}
\draw[0cell]
(0,0) node (A) {\bicata_{1,2}}
($(A)+(1,0)$) node (B) {\bicata_{1,2}}
;
\draw[1cell]
(A) edge[bend left=\q] node {\tensor(1_{\A_2}\times 1)} (B)
(A) edge[bend right=\q] node[swap] {1_{\bicata_{1,2}}} (B)
;
\draw[2cell]
node[between=A and B at .45, rotate=90, 2label={below,\ellbdot}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
with the following components.
\begin{description}
\item[Component $1$-Cells] For each object $F\in\bicata_{1,2}$, $\ellbdot$ has a component $1$-cell
\[\begin{tikzpicture}[xscale=4.5, yscale=1.4, baseline={(A.base)}]
\draw[0cell]
(0,0) node (A) {(1_{\bicata_{1,2}})(F) = F}
($(A)+(1,0)$) node (B) {F = \big(\tensor(1_{\A_2}\times 1)\big)(F).}
;
\draw[1cell]
(A) edge node {\ellbdot_F ~=~ 1_F} (B)
;
\end{tikzpicture}\]
\item[Component $2$-Cells] For each $1$-cell $\alpha \in \bicata_{1,2}(F,F')$, $\ellbdot$ has a component $2$-cell
\[\begin{tikzpicture}[xscale=5, yscale=1.7, baseline={(a.base)}]
\draw[0cell]
(0,0) node (x11) {(1_{\bicata_{1,2}})(F)=F}
($(x11)+(1,0)$) node (x12) {F'=(1_{\bicata_{1,2}})(F')}
($(x11)+(0,-1)$) node (x21) {\big(\tensor(1_{\A_2}\times 1)\big)(F) = F}
($(x21)+(1,0)$) node (x22) {F'=\big(\tensor(1_{\A_2}\times 1)\big)(F')}
;
\draw[1cell]
(x11) edge node {\alpha} (x12)
(x11) edge node[swap] (a) {\ellbdot_F~=~1_F} (x21)
(x12) edge node {1_{F'}~=~\ellbdot_{F'}} (x22)
(x21) edge node[swap] {1_{1_{\A_2}}\tensor\alpha} (x22)
;
\draw[2cell]
node[between=x11 and x22 at .4, shift={(-.2,0)}, rotate=40, 2label={above,\ellbdot_{\alpha}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\bicata_{1,2}(F,F')$ given by the composite $2$-cell
\[\begin{tikzpicture}[xscale=5, yscale=1.4, baseline={(A.base)}]
\draw[0cell]
(0,0) node (x11) {\big[(1_{1_{\A_2}}\tensor\alpha)1_{F}\big]_X = (\alpha_X 1_{FX})1_{FX}}
($(x11)+(1,0)$) node (x12) {1_{F'X}\alpha_X = (1_{F'}\alpha)_X}
($(x11)+(0,-1)$) node (x21) {\alpha_X 1_{FX}}
($(x21)+(1,0)$) node (x22) {\alpha_X}
;
\draw[1cell]
(x11) edge node {\ellbdot_{\alpha,X}} (x12)
(x11) edge node[swap] {r} (x21)
(x21) edge node {r} (x22)
(x22) edge node[swap] {\ellinv} (x12)
;
\end{tikzpicture}\]
in $\A_2(FX,F'X)$ for each object $X\in\A_1$.
\end{description}
This finishes the definition of $\ellbdot$.
\end{definition}
Next we define the unit and the counit for $(\ell,\ellbdot)$.
\begin{definition}\label{def:ell-unit}
Define the assignments
\[\begin{tikzcd}
1_{\tensor(1_{\A_2}\times 1)} \ar{r}{\etaell} & \ellbdot \ell\end{tikzcd}
\andspace
\begin{tikzcd}
\ell \ellbdot \ar{r}{\epzell} & 1_{1_{\bicata_{1,2}}}\end{tikzcd}\]
with, respectively, the component $2$-cells
\[\begin{tikzcd}[%
column sep=huge ,row sep = 0ex
,/tikz/column 1/.append style={anchor=base east}
,/tikz/column 2/.append style={anchor=base west}]
\big(1_{\tensor(1_{\A_2}\times 1)}\big)_{F,X} = 1_{FX} \ar{r}{\etaell_{F,X}~=~ \ellinv_{1_{FX}}} & 1_{FX}1_{FX} = (\ellbdot\ell)_{F,X},\\
(\ell\ellbdot)_{F,X} = 1_{FX}1_{FX} \ar{r}{\epzell_{F,X}~=~ \ell_{1_{FX}}} & 1_{FX} = \big(1_{1_{\bicata_{1,2}}}\big)_{F,X}\end{tikzcd}\]
in $\A_2(FX,FX)$ for each object $F\in\bicata_{1,2}$ and each object $X\in\A_1$.
\end{definition}
\begin{proposition}\label{tricatofbicat-ell}\index{tricategory of bicategories!left unitor}
In the bicategory $\Bicatps(\bicata_{1,2},\bicata_{1,2})$, the quadruple
\[(\ell,\ellbdot,\etaell,\epzell)\]
defined in
\begin{itemize}
\item \Cref{def:tricatofbicat-left-unitor} for $\ell$,
\item \Cref{def:tricatofbicat-ell-adjoint} for $\ellbdot$, and
\item \Cref{def:ell-unit} for $\etaell$ and $\epzell$,
\end{itemize}
is an adjoint equivalence.
\end{proposition}
\begin{proof}
This is similar to the case for $(a,\abdot,\etaa,\epza)$ in \Cref{ass-adjoint-equivalence}. We ask the reader to write down the details in \Cref{exer:tricatofbicat-ell}.
\end{proof}
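As a sketch for \Cref{exer:tricatofbicat-ell}: since $\etaell$ and $\epzell$ have components $\ellinv_{1_{FX}}$ and $\ell_{1_{FX}}$, evaluating the left triangle identity for $(\ell,\ellbdot,\etaell,\epzell)$ at an object $F\in\bicata_{1,2}$ and an object $X\in\A_1$, and abbreviating $1 = 1_{FX}$, yields the equality
\[\ell_1 \circ (\ell_1 * 1_1) \circ \ainv \circ (1_1 * \ellinv_1) = r_1\]
of $2$-cells $11 \Rightarrow 1$ in $\A_2(FX,FX)$. This is the outer diagram on the right of \eqref{eq:extra-1}, which commutes by the unity axiom \eqref{bicat-unity} and the equality $\ell_1 = r_1$ in \Cref{bicat-l-equals-r}; the right triangle identity reduces similarly to the right diagram of \eqref{eq:extra-2}.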
We call $(\ell,\ellbdot,\etaell,\epzell)$ the \emph{left unitor}. The next few definitions, which reuse the notations in \Cref{def:tricatofbicat-left-unitor}, concern the right unitor.
\begin{definition}\label{def:tricatofbicat-r}
Define an assignment $r$ as in
\begin{equation}\label{tricatofbicat-r}
\begin{tikzpicture}[xscale=3, yscale=1.4, baseline={(A.base)}]
\def\q{45}
\draw[0cell]
(0,0) node (A) {\bicata_{1,2}}
($(A)+(1,0)$) node (B) {\bicata_{1,2}}
;
\draw[1cell]
(A) edge[bend left=\q] node {\tensor(1\times 1_{\A_1})} (B)
(A) edge[bend right=\q] node[swap] {1_{\bicata_{1,2}}} (B)
;
\draw[2cell]
node[between=A and B at .45, rotate=-90, 2label={above,r}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
with the following components.
\begin{description}
\item[Component $1$-Cells] For each object $F\in\bicata_{1,2}$, $r$ has a component $1$-cell
\[\begin{tikzpicture}[xscale=5, yscale=1.4, baseline={(A.base)}]
\draw[0cell]
(0,0) node (A) {\big(\tensor(1\times 1_{\A_1})\big)(F) = F1_{\A_1} = F}
($(A)+(1,0)$) node (B) {F = (1_{\bicata_{1,2}})(F).}
;
\draw[1cell]
(A) edge node {r_F ~=~ 1_F} (B)
;
\end{tikzpicture}\]
\item[Component $2$-Cells] For each $1$-cell $\alpha \in \bicata_{1,2}(F,F')$, $r$ has a component $2$-cell
\[\begin{tikzpicture}[xscale=5, yscale=1.7, baseline={(a.base)}]
\draw[0cell]
(0,0) node (x11) {\big(\tensor(1\times 1_{\A_1})\big)(F) = F}
($(x11)+(1,0)$) node (x12) {F'=\big(\tensor(1\times 1_{\A_1})\big)(F')}
($(x11)+(0,-1)$) node (x21) {(1_{\bicata_{1,2}})(F) = F}
($(x21)+(1,0)$) node (x22) {F' = (1_{\bicata_{1,2}})(F')}
;
\draw[1cell]
(x11) edge node {\alpha\tensor 1_{1_{\A_1}}} (x12)
(x11) edge node[swap] (a) {r_F~=~1_F} (x21)
(x12) edge node {1_{F'}~=~ r_{F'}} (x22)
(x21) edge node[swap] {\alpha} (x22)
;
\draw[2cell]
node[between=x11 and x22 at .4, shift={(-.2,0)}, rotate=40, 2label={above,r_{\alpha}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\bicata_{1,2}(F,F')$ given by the composite $2$-cell
\[\begin{tikzpicture}[xscale=5, yscale=1.4, baseline={(A.base)}]
\draw[0cell]
(0,0) node (x11) {(\alpha 1_{F})_X = \alpha_X 1_{FX}}
($(x11)+(1,0)$) node (x13) {1_{F'X}(1_{X,F'}\alpha_X) = \big[1_{F'}(\alpha\tensor 1_{1_{\A_1}})\big]_X}
($(x11)+(0,-1)$) node (x21) {\alpha_X}
($(x21)+(1/2,0)$) node (x22) {1_{F'X}\alpha_X}
($(x22)+(1/2,0)$) node (x23) {1_{X,F'}\alpha_X}
;
\draw[1cell]
(x11) edge node {r_{\alpha,X}} (x13)
(x11) edge node[swap] {r} (x21)
(x21) edge node {\ellinv} (x22)
(x22) edge node {\Fpzero 1} (x23)
(x23) edge node[swap] {\ellinv} (x13)
;
\end{tikzpicture}\]
in $\A_2(FX,F'X)$ for each object $X\in\A_1$.
\end{description}
This finishes the definition of $r$.
\end{definition}
\begin{definition}\label{def:r-adjoint}
Define an assignment $\rbdot$ as in
\begin{equation}\label{r-adjoint}
\begin{tikzpicture}[xscale=3, yscale=1.4, baseline={(A.base)}]
\def\q{45}
\draw[0cell]
(0,0) node (A) {\bicata_{1,2}}
($(A)+(1,0)$) node (B) {\bicata_{1,2}}
;
\draw[1cell]
(A) edge[bend left=\q] node {\tensor(1\times 1_{\A_1})} (B)
(A) edge[bend right=\q] node[swap] {1_{\bicata_{1,2}}} (B)
;
\draw[2cell]
node[between=A and B at .45, rotate=90, 2label={below,\rbdot}] {\Rightarrow}
;
\end{tikzpicture}
\end{equation}
with the following components.
\begin{description}
\item[Component $1$-Cells] For each object $F\in\bicata_{1,2}$, $\rbdot$ has a component $1$-cell
\[\begin{tikzpicture}[xscale=4.5, yscale=1.4, baseline={(A.base)}]
\draw[0cell]
(0,0) node (A) {(1_{\bicata_{1,2}})(F) = F}
($(A)+(1,0)$) node (B) {F = \big(\tensor(1\times 1_{\A_1})\big)(F).}
;
\draw[1cell]
(A) edge node {\rbdot_F ~=~ 1_F} (B)
;
\end{tikzpicture}\]
\item[Component $2$-Cells] For each $1$-cell $\alpha \in \bicata_{1,2}(F,F')$, $\rbdot$ has a component $2$-cell
\[\begin{tikzpicture}[xscale=5, yscale=1.7, baseline={(a.base)}]
\draw[0cell]
(0,0) node (x11) {(1_{\bicata_{1,2}})(F) = F}
($(x11)+(1,0)$) node (x12) {F' = (1_{\bicata_{1,2}})(F')}
($(x11)+(0,-1)$) node (x21) {\big(\tensor(1\times 1_{\A_1})\big)(F) = F}
($(x21)+(1,0)$) node (x22) {F'=\big(\tensor(1\times 1_{\A_1})\big)(F')}
;
\draw[1cell]
(x11) edge node {\alpha} (x12)
(x11) edge node[swap] (a) {\rbdot_F~=~1_F} (x21)
(x12) edge node {1_{F'}~=~ \rbdot_{F'}} (x22)
(x21) edge node[swap] {\alpha\tensor 1_{1_{\A_1}}} (x22)
;
\draw[2cell]
node[between=x11 and x22 at .4, shift={(-.2,0)}, rotate=40, 2label={above,\rbdot_{\alpha}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\bicata_{1,2}(F,F')$ given by the composite $2$-cell
\[\begin{tikzpicture}[xscale=5, yscale=1.4, baseline={(A.base)}]
\draw[0cell]
(0,0) node (x11) {\big[(\alpha\tensor 1_{1_{\A_1}})1_{F}\big]_X = (1_{X,F'}\alpha_X)1_{FX}}
($(x11)+(1,0)$) node (x12) {1_{F'X}\alpha_X = (1_{F'}\alpha)_X}
($(x11)+(.5,-1)$) node (x2) {1_{X,F'}\alpha_X}
;
\draw[1cell]
(x11) edge node {\rbdot_{\alpha,X}} (x12)
(x11) edge node[swap] {r} (x2)
(x2) edge node[swap] {\Fpzeroinv 1} (x12)
;
\end{tikzpicture}\]
in $\A_2(FX,F'X)$ for each object $X\in\A_1$.
\end{description}
This finishes the definition of $\rbdot$.
\end{definition}
\begin{definition}\label{def:r-unit}
Define the assignments
\[\begin{tikzcd}
1_{\tensor(1\times 1_{\A_1})} \ar{r}{\etar} & \rbdot r\end{tikzcd}
\andspace
\begin{tikzcd}
r \rbdot \ar{r}{\epzr} & 1_{1_{\bicata_{1,2}}}\end{tikzcd}\]
with, respectively, the component $2$-cells
\[\begin{tikzcd}[%
column sep=huge ,row sep = 0ex
,/tikz/column 1/.append style={anchor=base east}
,/tikz/column 2/.append style={anchor=base west}]
\big(1_{\tensor(1\times 1_{\A_1})}\big)_{F,X} = 1_{FX} \ar{r}{\etar_{F,X}~=~ \ellinv_{1_{FX}}} & 1_{FX}1_{FX} = (\rbdot r)_{F,X},\\
(r\rbdot)_{F,X} = 1_{FX}1_{FX} \ar{r}{\epzr_{F,X}~=~ \ell_{1_{FX}}} & 1_{FX} = \big(1_{1_{\bicata_{1,2}}}\big)_{F,X}\end{tikzcd}\]
in $\A_2(FX,FX)$ for each object $F\in\bicata_{1,2}$ and each object $X\in\A_1$.
\end{definition}
\begin{proposition}\label{tricatofbicat-right-unitor}\index{tricategory of bicategories!right unitor}
In the bicategory $\Bicatps(\bicata_{1,2},\bicata_{1,2})$, the quadruple
\[(r,\rbdot,\etar,\epzr)\]
defined in
\begin{itemize}
\item \Cref{def:tricatofbicat-r} for $r$,
\item \Cref{def:r-adjoint} for $\rbdot$, and
\item \Cref{def:r-unit} for $\etar$ and $\epzr$,
\end{itemize}
is an adjoint equivalence.
\end{proposition}
\begin{proof}
This is similar to the case for $(a,\abdot,\etaa,\epza)$ in \Cref{ass-adjoint-equivalence}. We ask the reader to write down the details in \Cref{exer:tricatofbicat-r}.
\end{proof}
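Similarly, a sketch for \Cref{exer:tricatofbicat-r}: because $\etar$ and $\epzr$ have the same component $2$-cells $\ellinv_{1_{FX}}$ and $\ell_{1_{FX}}$ as $\etaell$ and $\epzell$, the two triangle identities, evaluated at an object $F\in\bicata_{1,2}$ and an object $X\in\A_1$ and abbreviating $1 = 1_{FX}$, reduce to the outer diagrams on the right of \eqref{eq:extra-1} and \eqref{eq:extra-2}. For instance, the right triangle identity becomes the equality
\[r_1 \circ (1_1 * \ell_1) \circ a \circ (\ellinv_1 * 1_1) = \ell_1\]
of $2$-cells in $\A_2(FX,FX)$, which holds by the left unity property in \Cref{bicat-left-right-unity}, the naturality \eqref{unitor-naturality} of $\ell$, and the equality $\ell_1 = r_1$.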
We call $(r,\rbdot,\etar,\epzr)$ the \emph{right unitor}.
Next we define the pentagonator. In the next few definitions, $\tensor$ is the composition in \Cref{tensor-pseudofunctor}, and $a$ is the associator in \Cref{ass-adjoint-equivalence}.
\begin{definition}\label{def:tricatofbicat-pentagonator}\index{tricategory of bicategories!pentagonator}
Suppose $\A_p$ for $1\leq p \leq 5$ are bicategories. Define a $2$-cell
\[\pi \in \Bicatps\big(\bicata^4_{[1,5]}, \bicata_{1,5}\big)\big(\tensor(\tensor\times 1)(\tensor\times 1 \times 1), \tensor(1\times \tensor)(1\times 1\times\tensor)\big)\]
as in \eqref{tricategory-pentagonator}, called the \emph{pentagonator}, with the following components. For pseudofunctors
\[\begin{tikzcd}
\A_1\ar{r}{F} & \A_2\ar{r}{G} & \A_3\ar{r}{H} & \A_4\ar{r}{J} & \A_5,\end{tikzcd}\]
$\pi$ has a component $2$-cell
\[\begin{tikzpicture}[xscale=5,yscale=1.5, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {JHGF}
($(x11)+(1,0)$) node (x12) {JHGF}
($(x11)+(.1,1)$) node (x01) {JHGF}
($(x11)+(.9,1)$) node (x02) {JHGF}
($(x11)+(.5,-.9)$) node (x31) {JHGF}
;
\draw[1cell]
(x11) edge node[pos=.3] {a_{J,H,G}\tensor 1_F ~=~ 1_{JHG}\tensor 1_F} (x01)
(x01) edge node (s) {a_{J,HG,F} ~=~ 1_{JHGF}} (x02)
(x02) edge node[pos=.7] {1_J \tensor a_{H,G,F} ~=~ 1_J\tensor 1_{HGF} } (x12)
(x11) edge node[swap,pos=.4] {a_{JH,G,F} ~=~ 1_{JHGF}} (x31)
(x31) edge node[swap,pos=.6] {a_{J,H,GF} ~=~ 1_{JHGF}} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .4, shift={(0,.5)}, rotate=-90, 2label={above,\pi_{J,H,G,F}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\bicata_{1,5}(JHGF,JHGF)$ given by the composite $2$-cell
\begin{equation}\label{tricatofbicat-pentagonator-component}
\begin{tikzpicture}[xscale=6.5,yscale=1.5, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {\big[(1_J\tensor 1_{HGF})_X 1_{JHGFX}\big]\big(1_{JHG} \tensor 1_F\big)_X}
($(x11)+(1,0)$) node (x12) {1_{JHGFX}1_{JHGFX}}
($(x11)+(0,-1)$) node (x21) {\big[\big(1_{HGFX,J}1_{JHGFX}\big)1_{JHGFX}\big]\big(1_{FX,JHG}1_{JHGFX}\big)}
($(x21)+(1,0)$) node (x22) {\big(1_{JHGFX}1_{JHGFX}\big)1_{JHGFX}}
($(x21)+(.5,-1)$) node (x3) {\big[\big(1_{JHGFX}1_{JHGFX}\big)1_{JHGFX}\big]\big(1_{JHGFX}1_{JHGFX}\big)}
;
\draw[1cell]
(x11) edge node {\pi_{(J,H,G,F),X}} (x12)
(x11) edge node[swap] {1} (x21)
(x21) edge node[swap, pos=.3] {[(\Jzeroinv 1)1][\JHGzeroinv 1]} (x3)
(x3) edge node[swap, pos=.6] {(\ell 1)\ell} (x22)
(x22) edge node[swap] {\ell 1} (x12)
;
\end{tikzpicture}
\end{equation}
in $\A_5(JHGFX,JHGFX)$ for each object $X\in\A_1$. This finishes the definition of $\pi$.
\end{definition}
Next we define the three $2$-unitors. In the next three definitions, $\ell$ is the left unitor in \Cref{tricatofbicat-ell}, and $r$ is the right unitor in \Cref{tricatofbicat-right-unitor}.
\begin{definition}\label{def:tricatofbicat-mid-iiunitor}\index{tricategory of bicategories!middle 2-unitor}
Suppose $\A_1,\A_2$, and $\A_3$ are bicategories. Define a $2$-cell
\[\mu \in \Bicatps\big(\bicata^2_{[1,3]},\bicata_{1,3}\big)(\tensor,\tensor)\]
as in \eqref{tricategory-iiunitors}, called the \emph{middle $2$-unitor}, with the following components. For pseudofunctors
\[\begin{tikzcd}
\A_1\ar{r}{F} & \A_2\ar{r}{G} & \A_3,\end{tikzcd}\]
$\mu$ has a component $2$-cell
\[\begin{tikzpicture}[xscale=6,yscale=1.5,baseline={(r.base)}]
\draw[0cell]
(0,0) node (x11) {GF}
($(x11)+(1,0)$) node (x12) {GF}
($(x11)+(.1,1)$) node (x01) {(G1_{\A_2})F=GF}
($(x11)+(.9,1)$) node (x02) {G(1_{\A_2}F) = GF}
;
\draw[1cell]
(x11) edge node[pos=.3] (r) {\rbdot_G \tensor 1_F ~=~ 1_G\tensor 1_F} (x01)
(x01) edge node (s) {a_{G,1,F} ~=~ 1_{GF}} (x02)
(x02) edge node[pos=.7] {1_G \tensor \ell_F ~=~ 1_G\tensor 1_F} (x12)
(x11) edge node[swap] (one) {1_{GF} ~=~ 1_{G\tensor F}} (x12)
;
\draw[2cell]
node[between=s and one at .5, shift={(-.3,0)}, rotate=-90, 2label={above,\mu_{G,F}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\bicata_{1,3}(GF,GF)$ given by the composite $2$-cell
\begin{equation}\label{mu-iicell-component}
\begin{tikzpicture}[xscale=6.5,yscale=1.5, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {\big[(1_G\tensor 1_F)_X 1_{GFX}\big]\big(1_G \tensor 1_F\big)_X}
($(x11)+(1,0)$) node (x12) {1_{GFX}}
($(x11)+(0,-1)$) node (x21) {\big[\big(1_{FX,G} 1_{GFX}\big) 1_{GFX}\big]\big(1_{FX,G} 1_{GFX}\big)}
($(x21)+(1,0)$) node (x22) {1_{GFX}1_{GFX}}
($(x21)+(0,-1)$) node (x31) {\big[\big(1_{GFX}1_{GFX}\big) 1_{GFX}\big]\big(1_{GFX} 1_{GFX}\big)}
($(x31)+(1,0)$) node (x32) {\big(1_{GFX}1_{GFX}\big)\big({1_{GFX}1_{GFX}}\big)}
;
\draw[1cell]
(x11) edge node {\mu_{(G,F),X}} (x12)
(x11) edge node[swap] {1} (x21)
(x21) edge node[swap] {[(\Gzeroinv 1)1](\Gzeroinv 1)} (x31)
(x31) edge node {(\ell 1)1} (x32)
(x32) edge node[swap] {\ell \ell} (x22)
(x22) edge node[swap] {\ell} (x12)
;
\end{tikzpicture}
\end{equation}
in $\A_3(GFX,GFX)$ for each object $X\in\A_1$. This finishes the definition of $\mu$.
\end{definition}
\begin{definition}\label{def:tricatofbicat-left-iiunitor}\index{tricategory of bicategories!left 2-unitor}
Reusing the notations in \Cref{def:tricatofbicat-mid-iiunitor}, define a $2$-cell
\[\lambda \in \Bicatps\big(\bicata^2_{[1,3]},\bicata_{1,3}\big)\big(\tensor(\tensor\times 1)(1_{\A_3}\times 1 \times 1),\tensor\big)\]
as in \eqref{tricategory-iiunitors}, called the \emph{left $2$-unitor}, with component $2$-cells
\[\begin{tikzpicture}[xscale=5.5,yscale=1.5, baseline={(lambda.base)}]
\def\b{15}
\draw[0cell]
(0,0) node (x11) {(1_{\A_3}G)F = GF}
($(x11)+(1,0)$) node (x12) {GF}
($(x11)+(.5,-1)$) node (x21) {1_{\A_3}(GF) = GF}
;
\draw[1cell]
(x11) edge node (s) {\ell_G\tensor 1_F ~=~ 1_G \tensor 1_F} (x12)
(x11) edge[bend right=\b] node[swap,pos=.4] (a) {a_{1,G,F} ~=~ 1_{GF}} (x21)
(x21) edge[bend right=\b] node[swap,pos=.6] {\ell_{G\tensor F} ~=~ 1_{GF}} (x12)
;
\draw[2cell]
node[between=s and x21 at .5, shift={(-.3,0)}, rotate=-90, 2label={above,\lambda_{G,F}}] (lambda) {\Rightarrow}
;
\end{tikzpicture}\]
in $\bicata_{1,3}(GF,GF)$ given by the $2$-cell
\begin{equation}\label{lambda-comp-iicell}
\begin{tikzpicture}[xscale=4.5,yscale=1.5, baseline={(x11.base)}]
\draw[0cell]
(0,0) node (x11) {(1_G \tensor 1_F)_X = 1_{FX,G} 1_{GFX}}
($(x11)+(1,0)$) node (x12) {1_{GFX}1_{GFX}}
($(x11)+(0,.5)$) node[inner sep=0pt] (a) {}
($(x12)+(0,.5)$) node[inner sep=0pt] (b) {}
;
\draw[1cell]
(x11) edge node {\Gzeroinv 1} (x12)
(x11) edge[-,shorten >=-1pt] (a)
(a) edge[-,shorten <=-1pt,shorten >=-1pt] node {\lambda_{(G,F),X}} (b)
(b) edge[shorten <=-1pt] (x12)
;
\end{tikzpicture}
\end{equation}
in $\A_3(GFX,GFX)$ for each object $X\in\A_1$. This finishes the definition of $\lambda$.
\end{definition}
\begin{definition}\label{def:tricatofbicat-right-iiunitor}\index{tricategory of bicategories!right 2-unitor}
Reusing the notations in \Cref{def:tricatofbicat-mid-iiunitor}, define a $2$-cell
\[\rho \in \Bicatps\big(\bicata^2_{[1,3]},\bicata_{1,3}\big)\big(\tensor,\tensor(1\times \tensor)(1 \times 1\times 1_{\A_1})\big)\]
as in \eqref{tricategory-iiunitors}, called the \emph{right $2$-unitor}, with component $2$-cells
\[\begin{tikzpicture}[xscale=5.5,yscale=1.5]
\def\b{15}
\draw[0cell]
(0,0) node (y11) {GF}
($(y11)+(1,0)$) node (y12) {G(F 1_{\A_1}) = GF}
($(y11)+(.5,-1)$) node (y2) {(GF) 1_{\A_1} = GF}
;
\draw[1cell]
(y11) edge node (a) {1_G\tensor \rbdot_F ~=~ 1_G \tensor 1_F} (y12)
(y11) edge[bend right=\b] node[swap,pos=.4] {\rbdot_{G\tensor F} ~=~ 1_{GF}} (y2)
(y2) edge[bend right=\b] node[swap,pos=.6] {a_{G,F,1} ~=~ 1_{GF}} (y12)
;
\draw[2cell]
node[between=a and y2 at .5, shift={(-.3,0)}, rotate=-90, 2label={above,\rho_{G,F}}] {\Rightarrow}
;
\end{tikzpicture}\]
in $\bicata_{1,3}(GF,GF)$ given by the $2$-cell
\begin{equation}\label{rho-iicell-component}
\rho_{(G,F),X} = \Gzeroinv 1 \in \A_3(GFX,GFX)
\end{equation}
in \eqref{lambda-comp-iicell} for each object $X\in\A_1$. This finishes the definition of $\rho$.
\end{definition}
\begin{lemma}\label{pi-mu-lambda-rho}
$\pi$, $\mu$, $\lambda$, and $\rho$ in \Cref{def:tricatofbicat-pentagonator,def:tricatofbicat-mid-iiunitor,def:tricatofbicat-left-iiunitor,def:tricatofbicat-right-iiunitor} are well-defined invertible $2$-cells.
\end{lemma}
\begin{proof}
This is similar to the proofs of \Cref{tensortwo-modification,tricatofbicat-associator-modax}. We ask the reader to write down the details in \Cref{exer:pi-mu-lambda-rho}.
\end{proof}
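As a hint for \Cref{exer:pi-mu-lambda-rho}, invertibility is the easier half. By \eqref{tricatofbicat-pentagonator-component}, \eqref{mu-iicell-component}, \eqref{lambda-comp-iicell}, and \eqref{rho-iicell-component}, each component is a composite of horizontal composites of identity $2$-cells, components of the unitors, and inverses of lax unity constraints, all of which are invertible $2$-cells. For example,
\[\rho_{(G,F),X} = \Gzeroinv 1\]
is invertible because $G^0$ is an invertible $2$-cell and horizontal composition preserves invertibility. The substance of the exercise is verifying the modification axiom for each of $\pi$, $\mu$, $\lambda$, and $\rho$.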
Here is the main result of this chapter.
\begin{theorem}\label{thm:tricatofbicat}\index{tricategory of bicategories}\index{bicategory!tricategory structure}
There is a tricategory $\bicat$ defined by the following data.
\begin{itemize}
\item The objects of $\bicat$ are small bicategories.
\item For each pair of bicategories $\A_1$ and $\A_2$, it has the hom bicategory
\[\bicat(\A_1,\A_2) = \Bicatps(\A_1,\A_2)\]
in \Cref{subbicat-pseudofunctor}.
\item The identities, the composition, the associator, and the left and right unitors are in \Cref{tricatofbicat-identities,tensor-pseudofunctor,ass-adjoint-equivalence,tricatofbicat-ell,tricatofbicat-right-unitor}, respectively.
\item The pentagonator and the middle, left, and right $2$-unitors are in \Cref{pi-mu-lambda-rho}.
\end{itemize}
\end{theorem}
\begin{proof}
Since all the tricategorical data of $\bicat$ have been defined, it remains to check the three tricategorical axioms.
The non-abelian 4-cocycle condition \eqref{nb4cocycle}, called\label{notation:nb4} NB4 below, means that, given pseudofunctors
\[\begin{tikzcd}
\A_1\ar{r}{F} & \A_2\ar{r}{G} & \A_3\ar{r}{H} & \A_4\ar{r}{J} & \A_5\ar{r}{K} & \A_6\end{tikzcd}\]
and an object $X\in\A_1$, the diagram
\begin{equation}\label{fourcocycle}
\begin{tikzcd}
\big[\big(\left\lbrace(t_6 t_5) t_4\right\rbrace t_3\big) t_2\big] t_1 \ar[bend left]{r}{\phi_1} \ar[bend right]{r}[swap]{\phi_2} & (b_3 b_2) b_1
\end{tikzcd}
\end{equation}
of $2$-cells in $\A_6\big(KJHGFX,KJHGFX\big)$ is commutative. In \eqref{fourcocycle}:
\begin{itemize}
\item On the left-hand side, the six $1$-cells are
\begin{align*}
t_1 &= \big[(1_{KJH}\tensor 1_G) \tensor 1_F\big]_X = 1_{FX,KJHG}\big(1_{GFX,KJH} 1_{KJHGFX}\big),\\
t_2 &= \big[1_{KJHG}\tensor 1_F\big]_X = 1_{FX,KJHG} 1_{KJHGFX},\\
t_3 &= \big[(1_K\tensor 1_{JHG})\tensor 1_F\big]_X = 1_{FX,KJHG} \big(1_{JHGFX,K} 1_{KJHGFX}\big) ,\\
t_4 &= 1_{KJHGFX},\\
t_5 &= \big[1_K \tensor 1_{JHGF}\big]_X = 1_{JHGFX,K} 1_{KJHGFX}, \andspace\\
t_6 &= \big[1_K\tensor(1_J\tensor 1_{HGF})\big]_X = \big(1_{HGFX,J} 1_{JHGFX}\big)_K 1_{KJHGFX}.
\end{align*}
They correspond to the six edges in the common top boundary of the two sides of NB4, evaluated at $X$.
\item On the right-hand side, the three $1$-cells are
\[b_1 = b_2 = b_3 = 1_{KJHGFX}.\]
They correspond to the three edges in the common bottom boundary of the two sides of NB4, evaluated at $X$.
\item $\phi_1$ and $\phi_2$ are the $2$-cells given by the composites of the top and the bottom pasting diagrams, respectively, in NB4, evaluated at $X$.
\end{itemize}
The $1$-cell $t_6\cdots t_1$, with the left-normalized bracketing, on the left-hand side of \eqref{fourcocycle} is entirely made up of:
\begin{itemize}
\item identity $1$-cells, such as $1_{KJHGFX}$;
\item horizontal composites in $\A_5$ and $\A_6$;
\item applications of the pseudofunctors, such as $1_{FX,KJHG}$.
\end{itemize}
To compute the $2$-cell $\phi_1$ in $\A_6$ in \eqref{fourcocycle}, we use the definitions of:
\begin{itemize}
\item $1\pi$ in \eqref{one-pi-iicell};
\item the composite modification in \eqref{mod-composite-component}, $\tensortwo$ in \eqref{tensortwo-x}, and $\tensorzero$ in \eqref{tensorzero-x};
\item $a_{\gamma,\beta,\alpha}$ in \eqref{tricatofbicat-ass-iicell-comp} with $\alpha$, $\beta$, and $\gamma$ all identity strong transformations;
\item $\ell$ in \eqref{tricatofbicat-left-unitor} and $\pi$ in \eqref{tricatofbicat-pentagonator-component}.
\end{itemize}
The result is that $\phi_1$ is a vertical composite in $\A_6$ of horizontal composites involving:
\begin{itemize}
\item identity $2$-cells;
\item the associator, the left unitor, the right unitor, and their inverses, in one of the bicategories $\A_p$;
\item the lax functoriality and unity constraints of the pseudofunctors $F,G,H,J$, and $K$;
\item applications of the pseudofunctors $F,G,H,J$, and $K$.
\end{itemize}
Using the definition of $\pi 1$ in \eqref{pi-one-iicell}, we see that the $2$-cell $\phi_2$ is also a vertical composite of horizontal composites involving the above $2$-cells. Similar to the proofs of \Cref{tensortwo-modification,tensorzero-modification,tricatofbicat-associator-laxunity,tricatofbicat-associator-laxnat}, the equality $\phi_1=\phi_2$ is proved by dividing the diagram in question into a number of sub-diagrams, each of which is commutative by the axioms and properties in \Cref{conv:large-diagram}, or the pseudofunctor axioms in \Cref{def:lax-functors}.
The other two tricategorical axioms are proved by the same kind of reasoning as for NB4. For the left normalization axiom \eqref{left-normalization-axiom}, we use the definitions of $1\lambda$ in \eqref{one-lambda-iicell}, $\mu 1$ in \eqref{mu-one-iicell}, $\mu$ in \eqref{mu-iicell-component}, and $\lambda$ in \eqref{lambda-comp-iicell}. For the right normalization axiom \eqref{right-normalization-axiom}, we use the definitions of $\rho 1$ in \eqref{rho-one-iicell}, $1\mu$ in \eqref{one-mu-iicell}, and $\rho$ in \eqref{rho-iicell-component}.
\end{proof}
\section{Exercises and Notes}
\label{sec:tricatofbicat-exercises}
\begin{exercise}\label{exer:tricat-lambda-rho}\index{tricategory!left 2-unitor}\index{tricategory!right 2-unitor}
In a tricategory:
\begin{enumerate}
\item Write down a formula for the left $2$-unitor $\lambda$ in terms of the rest of the tricategorical data, minus $\rho$. Hint: Use \eqref{one-lambda-iicell}.
\item Write down a formula for the right $2$-unitor $\rho$ in terms of the rest of the tricategorical data, minus $\lambda$. Hint: Use \eqref{rho-one-iicell}.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exer:tensorzero-lax-right}
In \Cref{tensorzero-laxunity}, prove the lax right unity axiom for $(\tensor,\tensortwo,\tensorzero)$.
\end{exercise}
\begin{exercise}\label{exer:ass-adjoint-icell}
Prove \Cref{ass-adjoint-icell}, i.e., that $\abdot$ is a $1$-cell.
\end{exercise}
\begin{exercise}\label{exer:epza-iicell}
Prove \Cref{epza-iicell}, i.e., that $\epza$ is an invertible $2$-cell.
\end{exercise}
\begin{exercise}\label{exer:tricatofbicat-ell}
Prove \Cref{tricatofbicat-ell}, i.e., that $(\ell,\ellbdot,\etaell,\epzell)$ is an adjoint equivalence.
\end{exercise}
\begin{exercise}\label{exer:tricatofbicat-r}
Prove \Cref{tricatofbicat-right-unitor}, i.e., that $(r,\rbdot,\etar,\epzr)$ is an adjoint equivalence.
\end{exercise}
\begin{exercise}\label{exer:pi-mu-lambda-rho}
Prove \Cref{pi-mu-lambda-rho}, i.e., that $\pi$, $\mu$, $\lambda$, and $\rho$ are invertible $2$-cells.
\end{exercise}
\subsection*{Notes}
\begin{note}[Discussion of Literature]\label{note:tricat-discussion}
Our \Cref{def:tricategory} of a tricategory is essentially the one in Gurski's book \cite{gurski-coherence}, with some conventional changes. Gurski's definition of a tricategory uses what he called \emph{transformations}, which are our oplax transformations in \Cref{def:oplax-transformation} with invertible component $2$-cells. Consequently, Gurski's concept of a modification is an oplax version of our modification. Other presentational differences were mentioned in \Cref{expl:tricategory-definition,expl:associahedron}. Since we are dealing with pseudofunctors, strong transformations, adjoint equivalences, and invertible modifications, these differences between our definition of a tricategory and the one in \cite{gurski-coherence} are simply a matter of conventions. With these conventional differences in mind, our \Cref{tensor-pseudofunctor,ass-adjoint-equivalence,thm:tricatofbicat} correspond to Propositions 5.1 and 5.3 and Theorem 5.7 in \cite{gurski-coherence}.
The original source of tricategories is the paper \cite{gps} by Gordon, Power, and Street. They use the same kind of transformations as in Gurski's book, i.e., our oplax transformations. However, in their original definition of a tricategory, the associator, the left unitor, and the right unitor are only equivalences in the sense of \Cref{def:equivalence-in-bicategory}, instead of adjoint equivalences as in Gurski's and our definitions.
\end{note}
\begin{note}[Alternative Composition, $\otimes'$]\label{note:opcubical-composition}
The tricategory\index{tricategory of bicategories} $\bicat$ in \Cref{thm:tricatofbicat} is one of two tricategories that can be defined using small bicategories, pseudofunctors, strong transformations, and modifications. In fact, in \eqref{transformation-composite} we could also have defined the composite $\beta \tensor' \alpha$ as $(\beta\whis F')(G\whis\alpha)$, which may be visualized as follows.
\[\begin{tikzpicture}[xscale=2.2,yscale=1.5]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x11) {\A_1}
($(x11)+(1,0)$) node (x12) {\A_2}
($(x12)+(1,0)$) node (x13) {\A_3}
;
\draw[1cell]
(x11) edge[bend right=45] node[swap] (fp) {F'} (x12)
(x12) edge[bend left=45] node (g) {G} (x13)
;}
\begin{scope}
\boundary
\draw[0cell]
($(x11)+(-1/2,-.75)$) node {\beta\tensor'\alpha}
($(x13)+(1/2,0)$) node (x14) {G\whis\alpha}
;
\draw[1cell]
(x11) edge[bend left=45] node (f) {F} (x12)
;
\draw[2cell]
node[between=x11 and x12 at .45, rotate=-90, 2label={above,\alpha}] {\Rightarrow}
;
\end{scope}
\begin{scope}[shift={(0,-1.5)}]
\boundary
\draw[0cell]
($(x13)+(1/2,0)$) node (x14) {\beta\whis F'}
;
\draw[1cell]
(x12) edge[bend right=45] node[swap] (gp) {G'} (x13)
;
\draw[2cell]
node[between=x12 and x13 at .45, rotate=-90, 2label={above,\beta}] {\Rightarrow}
;
\end{scope}
\end{tikzpicture}\]
With this definition, each component is
\[(\beta\tensor'\alpha)_X = \beta_{F'X} \alpha_{X,G}.\]
The corresponding composite modification has components
\[(\Sigma\tensor'\Gamma)_X = \Sigma_{F'X} * \Gamma_{X,G},\]
as displayed below.
\[\begin{tikzpicture}[xscale=2.8, yscale=1.4]
\draw[0cell]
(0,0) node (x11) {GFX}
($(x11)+(1,0)$) node (x12) {GF'X}
($(x12)+(1,0)$) node (x13) {G'F'X}
;
\draw[1cell]
(x11) edge[bend left=45] node {\alpha_{X,G}} (x12)
(x11) edge[bend right=45] node[swap] {\alpha'_{X,G}} (x12)
(x12) edge[bend left=45] node {\beta_{F'X}} (x13)
(x12) edge[bend right=45] node[swap] {\beta'_{F'X}} (x13)
;
\draw[2cell]
node[between=x11 and x12 at .4, rotate=-90, 2label={above,\Gamma_{X,G}}] {\Rightarrow}
node[between=x12 and x13 at .4, rotate=-90, 2label={above,\Sigma_{F'X}}] {\Rightarrow}
;
\end{tikzpicture}\]
Since the definition of $\tensor$ has changed, all other tricategorical data---i.e., $\tensortwo$, $\tensorzero$, the adjoint equivalences $a$, $\ell$, and $r$, and the invertible modifications $\pi$, $\mu$, $\lambda$, and $\rho$---also need to be suitably adjusted. An argument similar to the one we gave for $\bicat$ shows that there is a tricategory $\bicat'$ with the same objects and hom bicategories as $\bicat$, but with $\tensor'$, etc., in place of their original versions in $\bicat$. According to \cite[5.6]{gps} and \cite[Theorem 5.9]{gurski-coherence}, there is a triequivalence $\bicat \to \bicat'$. We will not discuss any details of this result or even the definition of a triequivalence in this book. For more discussion of tricategories and their coherence, the reader is referred to \cite{gps,gurski-coherence}.
\end{note}
\endinput
\chapter{Whitehead Theorem for Bicategories}\label{ch:whitehead}
Recall from \cref{def:equivalences} that a functor of categories is an
equivalence if and only if it is essentially surjective and fully
faithful. In this chapter we prove a generalization of this result
for pseudofunctors. The key terms are the following.
\begin{definition}
Suppose $F\cn \B \to \C$ is a lax functor of bicategories.
\begin{itemize}
\item We say that $F$ is \index{object!essentially surjective}\index{essentially!surjective}\emph{essentially surjective} if it is
surjective on adjoint-equivalence classes of objects.
\item We say that $F$ is \index{1-cell!essentially full}\index{essentially!full}\emph{essentially full} if it is surjective
on isomorphism classes of $1$-cells.
\item We say that $F$ is \index{2-cell!fully faithful}\index{fully faithful}\emph{fully faithful} if it is a bijection
on $2$-cells.\defmark
\end{itemize}
\end{definition}
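\begin{explanation}
As a consistency check, one can verify that for locally discrete
bicategories, i.e., categories regarded as bicategories with only
identity $2$-cells, these notions reduce to familiar ones. In a
locally discrete bicategory an adjoint equivalence is precisely an
isomorphism, so a functor is essentially surjective in the above
sense if and only if it is essentially surjective in the sense of
\cref{def:equivalences}. Moreover, since the only $2$-cells are
identities, essentially full means full, and fully faithful means
faithful. The Whitehead Theorem for Bicategories therefore
specializes to the classical characterization of equivalences of
categories.
\end{explanation}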
The main result, which we prove in \cref{sec:Whitehead-bicat}, is that
a pseudofunctor is a biequivalence if and only if it is essentially
surjective, essentially full, and fully faithful. We call this the
Whitehead Theorem for Bicategories because it is a bicategorical
analogue of Whitehead's theorem for topological spaces. See the notes
in \cref{sec:whitehead-exercises} for further discussion of this
point.
The implication that a biequivalence is essentially surjective,
essentially full, and fully faithful is straightforward and we explain
it in \cref{sec:Whitehead-bicat}. The reverse implication requires
more work, and our approach splits into four conceptual steps. First,
in \cref{sec:lax-slice} we describe a lax slice construction for a
general lax functor $F$. Second, in \cref{sec:lax-terminal} we show
that if $F$ is essentially surjective, essentially full, and
fully faithful, then the lax slices can be equipped with extra
structure in the form of certain terminal objects. If, moreover, $F$
is a pseudofunctor, this structure is suitably preserved by
change-of-slice functors.
Third, in \cref{sec:Quillen-A-bicat} we show that if $F$ is any lax
functor whose lax slices have this extra structure and whose
change-of-slice functors preserve it, then there is a reverse lax
functor $G$ determined by these data. Finally, in
\cref{sec:Whitehead-bicat} we show that if $F$ is a pseudofunctor
that is essentially surjective, essentially full, and fully faithful,
then the $G$ constructed in \cref{sec:Quillen-A-bicat} is an inverse
biequivalence for $F$.
As before, we remind the reader that \cref{thm:bicat-pasting-theorem}
and \cref{conv:boundary-bracketing} explain how to interpret pasting
diagrams in a bicategory. These will be essential to the construction
of lax slices and the reverse functor $G$. In this chapter $F\cn\B
\to \C$ is a lax functor or pseudofunctor of bicategories.
\section{The Lax Slice Bicategory}\label{sec:lax-slice}
In this section we describe a bicategorical generalization of slice
categories.
\begin{definition}
Given a lax functor $F \cn \B \to \C$ and an object $X \in \C$, the
\emph{lax slice}\index{lax slice!bicategory}\index{bicategory!lax slice} bicategory $F \sdar X$ consists of the following.
\begin{enumerate}
\item Objects are pairs $(A,f_A)$ where $A \in \B$ and $FA \fto{f_A} X$ in
$\C$.
\item $1$-cells $(A_0, f_0) \to (A_1,f_1)$ are
pairs $(p,\theta_p)$ where $A_0 \fto{p} A_1$ in $\B$ and
$\theta_p\cn f_{0} \to f_{1} (Fp)$ in $\C$. We depict this as a triangle.
\[
\begin{tikzpicture}[x=25mm,y=25mm]
\draw[0cell]
(0,0) node (x) {X}
(120:1) node (0) {FA_0}
(60:1) node (1) {FA_1}
;
\draw[1cell]
(0) edge[swap] node {f_0} (x)
(1) edge node {f_1} (x)
(0) edge node {Fp} (1)
;
\draw[2cell]
(-.05,.6) node[rotate=30,font=\Large] {\Rightarrow}
node[below right] {\theta_p}
;
\end{tikzpicture}
\]
\item $2$-cells $(p_0,\theta_0) \to (p_1,\theta_1)$ are
singletons $(\ga)$ where $\ga$ is a $2$-cell $p_0 \to p_1$ in $\B$ such that $F\ga$
satisfies the equality shown in the pasting diagram below, known as
the \emph{ice cream cone condition}\index{ice cream cone condition} with respect to $\theta_0$
and $\theta_1$.
\[
\begin{tikzpicture}[x=35mm,y=35mm, scale=.85]
\draw (0,0) node[font=\Large] {=};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x) {X}
(120:1) node (0) {FA_0}
(60:1) node (1) {FA_1}
;
\draw[1cell]
(0) edge[swap] node {f_0} (x)
(1) edge node {f_1} (x)
(0) edge[bend left] node (F1) {Fp_1} (1)
;
}
\begin{scope}[shift={(-.8,-.5)}]
\boundary
\draw[1cell]
(0) edge[swap, bend right] node (F0) {Fp_0} (1)
;
\draw[2cell]
(0,.38) node[rotate=30,font=\Large] (A) {\Rightarrow}
(A) ++(.05,-.1) node {\theta_0}
node[between=F0 and F1 at .5, shift={(-.1,0)}, rotate=90, font=\Large] (Fal) {\Rightarrow}
(Fal) node[right] {F\ga}
;
\end{scope}
\begin{scope}[shift={(.8,-.5)}]
\boundary
\draw[1cell]
;
\draw[2cell]
(0,.6) node[rotate=30,font=\Large] (B) {\Rightarrow}
(B) ++(.05,-.1) node {\theta_1}
;
\end{scope}
\end{tikzpicture}
\]
\end{enumerate}
We describe the additional data of $F \sdar X$ and prove that it
satisfies the bicategory axioms in \cref{defprop:lax-slice}.
\end{definition}
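\begin{explanation}
One can check that the lax slice generalizes the classical comma
category. If $\B$ and $\C$ are locally discrete, i.e., categories
regarded as bicategories with only identity $2$-cells, and $F$ is a
functor, then $F \sdar X$ reduces to the comma category $F
\downarrow X$: the $2$-cell $\theta_p$ in a $1$-cell $(p,\theta_p)$
is necessarily an identity, so a $1$-cell is a morphism $p \cn A_0
\to A_1$ in $\B$ such that $f_0 = f_1 (Fp)$, and all $2$-cells are
identities.
\end{explanation}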
\begin{proposition}\label{defprop:lax-slice}
Given a lax functor $F\cn \B \to \C$ and an object $X \in \C$, the
lax slice $F \sdar X$ is a bicategory.
\end{proposition}
\begin{proof}
The objects, $1$-cells, and $2$-cells of $F \sdar X$ are defined above.
We structure the rest of the proof as follows.
\begin{enumerate}
\item\label{it:slice-1} Define identity $1$-cells and $2$-cells.
\item\label{it:slice-2} Define horizontal and vertical composition for $1$-cells and
$2$-cells.
\item\label{it:slice-3} Verify each collection of $1$-cells and $2$-cells between a given pair of
objects forms a category.
\item\label{it:slice-4} Verify functoriality of horizontal composition.
\item\label{it:slice-5} Define components of the associator and unitor.
\item\label{it:slice-6} Verify that the associator and unitors are natural
isomorphisms.
\item\label{it:slice-7} Verify the pentagon and unity axioms.
\end{enumerate}
\newcommand{\step}[1]{\textbf{Step (\ref{it:slice-#1}).}}
\step{1} The identity $1$-cell for an object $(A,f_A)$ is $(1_A,r')$
where
\[
r'= (1_{f_A} * F^0) \circ r^{-1},
\]
shown in the pasting diagram below.
\begin{equation}\label{slice-1-1}
\begin{tikzpicture}[x=35mm,y=35mm,baseline=(B.base),scale=.85]
\draw[0cell]
(0,0) node (x) {X}
(120:1) node (a) {FA}
(60:1) node (b) {FA}
;
\draw[1cell]
(a) edge[swap] node {f_{A}} (x)
(b) edge node {f_{A}} (x)
(a) edge[bend left] node (T) {F1_{A}} (b)
(a) edge[swap, bend right] node (B) {1_{FA}} (b)
;
\draw[2cell]
node[between=x and B at .6, rotate=30, font=\Large] (A) {\Rightarrow}
(A) ++(.02,0) node[below] {r^\inv}
node[between=a and b at .5, rotate=90, font=\Large] (C) {\Rightarrow}
(C) node[right] {F^0_{A}}
;
\end{tikzpicture}
\end{equation}
The identity $2$-cell for a $1$-cell $(p,\theta)$ is given by $(1_p)$,
noting that this satisfies the necessary condition because $F1_p =
1_{Fp}$.
\step{2} The horizontal composite of $1$-cells
\[
(A_0,f_0) \fto{(p_0,\theta_0)}
(A_1,f_1) \fto{(p_1,\theta_1)}
(A_2,f_2)
\]
is $(p_1p_0, {\theta'})$, where ${\theta'}$ is given by the
composite of the
pasting diagram formed from $\theta_0$, $\theta_1$, and $F^2$ as shown below.
\begin{equation}\label{slice-2-1}
\begin{tikzpicture}[x=15mm,y=20mm,baseline=(f1.base),scale=.85]
\draw[0cell]
(0,0) node (x) {X}
(-2,2) node (a) {FA_0}
(0,1.75) node (b) {FA_1}
(2,2) node (c) {FA_2}
;
\draw[1cell]
(a) edge[swap] node (f0) {f_{0}} (x)
(b) edge node (f1) {f_{1}} (x)
(c) edge node (f2) {f_{2}} (x)
(a) edge[bend right=10] node (b0) {Fp_0} (b)
(b) edge[bend right=10] node (b1) {Fp_1} (c)
(a) edge[bend left] node (T) {F(p_1p_0)} (c)
;
\draw[2cell]
node[between=f0 and b at .5, rotate=45, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_0}
node[between=f2 and b at .5, rotate=45, font=\Large] (B) {\Rightarrow}
(B) node[below right] {\theta_1}
node[between=b and T at .5, rotate=90, font=\Large] (C) {\Rightarrow}
(C) node[right] {F^2_{p_1, p_0}}
;
\end{tikzpicture}
\end{equation}
Horizontal and vertical composites of $2$-cells in $F \sdar X$ are
given by their composites in $\B$, as we now explain.
Given $1$-cells and $2$-cells
\begin{equation}\label{slice-2-2}
\begin{tikzpicture}[x=40mm,y=20mm,baseline=(a0.base),scale=.8]
\draw[0cell]
(0,0) node (a0) {(A_0,f_0)}
(1,0) node (a1) {(A_1,f_1)}
(2,0) node (a2) {(A_2,f_2)}
;
\draw[1cell]
(a0) edge[bend right, swap] node {(p_0,\theta_0)} (a1)
(a1) edge[bend right, swap] node {(p_1,\theta_1)} (a2)
(a0) edge[bend left] node (X) {(p_0',\theta_0')} (a1)
(a1) edge[bend left] node {(p_1',\theta_1')} (a2)
;
\draw[2cell]
node[between=a0 and a1 at .5, rotate=90, font=\Large] (al0) {\Rightarrow}
(al0) node[right] {(\ga_0)}
node[between=a1 and a2 at .5, rotate=90, font=\Large] (al1) {\Rightarrow}
(al1) node[right] {(\ga_1)}
;
\end{tikzpicture}
\end{equation}
the following equalities of pasting diagrams show that $F(\ga_1 *
\ga_0)$ satisfies the necessary condition for $\ga_1 * \ga_0$ to define a $2$-cell in $F
\sdar X$.
The first equality follows by naturality of $F^2$. The second follows by the conditions for
$(\ga_0)$ and $(\ga_1)$ separately.
\begin{equation*
\begin{tikzpicture}[x=11mm,y=18mm,baseline={(A0.base)}, scale=.97]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (x) {X}
(-2,2) node (a) {FA_0}
(0,1.5) node (b) {FA_1}
(2,2) node (c) {FA_2}
;
\draw[1cell]
(a) edge[swap] node (f0) {f_{0}} (x)
(b) edge node (f1) {f_{1}} (x)
(c) edge node (f2) {f_{2}} (x)
(a) edge[bend left=60, looseness=1.5] node (T') {F(p_1'p_0')} (c)
;
\draw[2cell]
;
}
\begin{scope}[shift={(0,0)}]
\boundary
\draw[1cell]
(a) edge[bend left=10] node (T) {F(p_1p_0)} (c)
(a) edge[bend right=20] node (b0) {} (b)
(b) edge[bend right=20] node (b1) {} (c)
;
\draw[2cell]
node[between=T and T' at .4, shift={(-.4,0)}, rotate=90, font=\Large] (A) {\Rightarrow}
(A) node[right] {F(\ga_1 * \ga_0)}
node[between=b and T at .5, rotate=90, font=\Large] (C) {\Rightarrow}
(C) node[right] {F^2_{p_1, p_0}}
node[between=f0 and b at .5, rotate=45, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_0}
node[between=f2 and b at .55, rotate=45, font=\Large] (B) {\Rightarrow}
(B) node[below right] {\theta_1}
;
\end{scope}
\begin{scope}[shift={(3.5,-2.3)}]
\boundary
\draw[font=\LARGE] (x) ++(130:3.4) node[rotate=-45] (eq) {=};
\draw[font=\LARGE] (x) ++(50:3.4) node[rotate=45] (eq) {=};
\draw[1cell]
(a) edge[bend left=45, looseness=1.25] node {} (b)
(b) edge[bend left=45, looseness=1.25] node {} (c)
(a) edge[bend right=20] node (b0) {} (b)
(b) edge[bend right=20] node (b1) {} (c)
;
\draw[2cell]
node[between=b and T' at .6, shift={(-.4,0)}, rotate=90, font=\Large] (A0) {\Rightarrow}
(A0) node[right] {F^2_{p_1',p_0'}}
node[between=a and b at .5, shift={(-.2,.1)}, rotate=90, font=\Large] (Fal0) {\Rightarrow}
(Fal0) node[right] {F\ga_0}
node[between=b and c at .5, shift={(-.25,.15)}, rotate=90, font=\Large] (Fal1) {\Rightarrow}
(Fal1) node[right] {F\ga_1}
node[between=f0 and b at .5, rotate=45, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_0}
node[between=f2 and b at .55, rotate=45, font=\Large] (B) {\Rightarrow}
(B) node[below right] {\theta_1}
;
\end{scope}
\begin{scope}[shift={(7,0)}]
\boundary
\draw[1cell]
(a) edge[bend left=45, looseness=1.25] node {} (b)
(b) edge[bend left=45, looseness=1.25] node {} (c)
;
\draw[2cell]
node[between=b and T' at .6, shift={(-.4,0)}, rotate=90, font=\Large] (A2) {\Rightarrow}
(A2) node[right] {F^2_{p_1',p_0'}}
node[between=f0 and b at .5, shift={(135:.25)}, rotate=45, font=\Large] (A3) {\Rightarrow}
(A3) node[below right] {\theta_0'}
node[between=f2 and b at .55, shift={(45:.25)}, rotate=45, font=\Large] (B2) {\Rightarrow}
(B2) node[below right] {\theta_1'}
;
\end{scope}
\end{tikzpicture}
\end{equation*}
Likewise, given $\ga$ and $\ga'$ as below,
\begin{equation}\label{slice-2-4}
\begin{tikzpicture}[x=45mm,y=20mm,baseline=(a0.base),scale=.85]
\draw[0cell]
(0,0) node (a0) {(A_0,f_0)}
(1,0) node (a1) {(A_1,f_1)}
;
\draw[1cell]
(a0) edge[bend right=60, swap, looseness=1.5] node (p) {(p,\theta)} (a1)
(a0) edge node (p') {(p',\theta')} (a1)
(a0) edge[bend left=60, looseness=1.5] node (p'') {(p'',\theta'')} (a1)
;
\draw[2cell]
node[between=p and p' at .5, rotate=90, font=\Large] (al0) {\Rightarrow}
(al0) node[right] {(\ga)}
node[between=p' and p'' at .5, rotate=90, font=\Large] (al1) {\Rightarrow}
(al1) node[right] {(\ga')}
;
\end{tikzpicture}
\end{equation}
the composite $\ga'\, \ga$ satisfies the necessary condition to
define a $2$-cell
\[
(\ga'\,\ga)\cn (p,\theta) \to (p'', \theta'')
\]
because $F$ is functorial with respect to composition of $2$-cells.
\step{3} Vertical composition in $F\sdar X$ is strictly associative
and unital because it is defined in $\B$. Therefore each collection of
$1$-cells and $2$-cells between a given pair of objects forms a
category.
\step{4} Likewise, because horizontal composition of $2$-cells in
$F\sdar X$ is defined by the horizontal composites in $\B$, and these
are functorial, it follows that horizontal composition of $2$-cells in
$F \sdar X$ is functorial.
\step{5} The remaining data to describe in $F \sdar X$ are the associator and
two unitors. Consider a composable triple of $1$-cells
\[
\begin{tikzpicture}[x=30mm,y=20mm]
\draw[0cell]
(0,0) node (a0) {(A_0,f_0)}
(1,0) node (a1) {(A_1,f_1)}
(2,0) node (a2) {(A_2,f_2)}
(3,0) node (a3) {(A_3,f_3).}
;
\draw[1cell]
(a0) edge node {(p_0,\theta_0)} (a1)
(a1) edge node {(p_1,\theta_1)} (a2)
(a2) edge node {(p_2,\theta_2)} (a3)
;
\end{tikzpicture}
\]
Lax associativity \eqref{f2-bicat} for $F$ gives an equality of
pasting diagrams shown below.
\begin{equation}\label{slice-5-1}
\begin{tikzpicture}[x=20mm,y=20mm,baseline={(0,1)},scale=.75]
\draw[font=\Large] (2.55,.5) node (eq) {=};
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (0) {FA_0}
(3,0) node (3) {FA_3}
(1.25,-1) node (4) {FA_1}
;
\draw[1cell]
(0) edge[swap] node (c0) {Fp_0} (4)
(4) edge[swap] node (p21) {(Fp_2) \circ (Fp_1)} (3)
(0) edge[bend left=85,looseness=1.9] node (c210R) {F(p_2(p_1p_0))} (3)
;
\draw[2cell]
;
}
\begin{scope}[shift={(3,0)}]
\boundary
\draw[0cell]
(2,0) node (2) {FA_2}
;
\draw[1cell]
(0) edge[swap] node[scale=.9] (p10) {(Fp_1) \circ (Fp_0)} (2)
(2) edge node (c2) {F(p_2)} (3)
(0) edge[bend left=45,looseness=1.25] node[pos=.5] (c10) {F(p_1p_0)} (2)
;
\draw[2cell]
(4) ++(0,.5) node[rotate=90, font=\Large] {\Rightarrow}
node[right] {a_\C}
node[between=p10 and c10 at .5, rotate=90, font=\Large] (F10) {\Rightarrow}
(F10) node[right] {F^2}
node[between=2 and c210R at .5, rotate=90, font=\Large] (F210R) {\Rightarrow}
(F210R) node[right] {F^2}
;
\end{scope}
\begin{scope}[shift={(-1,0)}]
\boundary
\draw[0cell]
;
\draw[1cell]
(4) edge[bend left=40, looseness=1.1] node[pos=.8] (c21) {F(p_2p_1)} (3)
(0) edge[bend left=50,looseness=1.15] node (c210L) {F((p_2p_1)p_0)} (3)
;
\draw[2cell]
node[between=p21 and c21 at .5, shift={(180:.15)}, rotate=120, font=\Large] (F21) {\Rightarrow}
(F21) ++(210:.2) node {F^2}
node[between=c0 and c210L at .5, shift={(0:.1)}, rotate=90, font=\Large] (F210L) {\Rightarrow}
(F210L) node[left] {F^2}
node[between=c210L and c210R at .5, rotate=90, font=\Large] (Fa) {\Rightarrow}
(Fa) node[right] {Fa_\B}
;
\end{scope}
\end{tikzpicture}
\end{equation}
Combining these with the triangles
\begin{equation}\label{slice-5-2}
\begin{tikzpicture}[x=24mm,y=20mm,baseline=(f0.base)]
\draw[0cell]
(0,0) node (0) {FA_0}
(1,0) node (1) {FA_1}
(2,0) node (2) {FA_2}
(3,0) node (3) {FA_3}
(1.5,-1.5) node (x) {X}
;
\draw[1cell]
(0) edge node {Fp_0} (1)
(1) edge node {Fp_1} (2)
(2) edge node {Fp_2} (3)
(0) edge[swap] node[pos=.33] (f0) {f_0} (x)
(1) edge[swap] node[pos=.33] (f1) {f_1} (x)
(2) edge node[pos=.33] (f2) {f_2} (x)
(3) edge node[pos=.33] (f3) {f_3} (x)
;
\draw[2cell]
node[between=f0 and 1 at .5, shift={(0,-.1)}, rotate=45, font=\Large] (th0) {\Rightarrow}
(th0) node[above left] {\theta_0}
node[between=f1 and 2 at .5, shift={(0,-.1)}, rotate=45, font=\Large] (th1) {\Rightarrow}
(th1) node[above left] {\theta_1}
node[between=f3 and 2 at .5, shift={(0,-.1)}, rotate=45, font=\Large] (th2) {\Rightarrow}
(th2) node[above left] {\theta_2}
;
\end{tikzpicture}
\end{equation}
shows that $Fa_\B$ satisfies the relevant ice cream cone condition and hence
$a_\B$ defines a $2$-cell
\[
(a_\B)\cn ((p_2,\theta_2)(p_1,\theta_1))\, (p_0,\theta_0) \to
(p_2,\theta_2) \, ((p_1,\theta_1) (p_0,\theta_0))
\]
in $F\sdar X$. Recall that we implicitly make use of associators to
interpret pasting diagrams of three triangles; the component of
$a_\C$ in \eqref{slice-5-1} cancels with its inverse to form the
composite in the target of $(a_\B)$.
The left and right unitors are defined similarly; in \cref{exercise:lr-lax-slice}
we ask the reader to verify that the unitors $r_\B$ and $\ell_\B$
satisfy the appropriate ice cream cone conditions. Therefore, given
a $1$-cell $(p,\theta)\cn (A_0,f_0) \to (A_1, f_1)$, we have $2$-cells
\[
(r_\B)\cn (p,\theta) (1_{A_0},r') \to (p,\theta) \quad \text{and}
\quad
(\ell_\B) \cn (1_{A_1},r') (p,\theta) \to (p,\theta).
\]
\step{6} Naturality of the associator and unitors defined in the
previous step is a consequence of the corresponding naturality in
$\B$ and $\C$ together with naturality of $F^0$ and $F^2$.
Moreover, each component is invertible because a lax functor
preserves invertibility of $2$-cells.
\step{7} Because the associator and unitor are defined by the
corresponding components in $\B$, it follows that they satisfy the
unity and pentagon axioms, \eqref{bicat-unity} and
\eqref{bicat-pentagon}.
\end{proof}
\begin{proposition}\label{lemma:base-change-functor}\index{lax slice!change-of-slice functor}\index{change-of-slice functor}\index{functor!change-of-slice}
Suppose $F\cn \B \to \C$ is a lax functor of bicategories. Given a
$1$-cell $u\cn X \to Y$, there is a strict functor
\[
F \sdar u \cn (F \sdar X) \to (F \sdar Y)
\]
induced by whiskering with $u$.
\end{proposition}
\begin{proof}
The assignment on $0$-, $1$- and $2$-cells, respectively, is given by
\begin{align*}
(A,f_A) & \mapsto (A, uf_A)\\
(p,\theta) & \mapsto (p, a_\C^\inv \circ (1_u * \theta)) \\
(\ga) & \mapsto (\ga),
\end{align*}
where the associator $a_\C$ is used to ensure that the target of the
$2$-cell $a_\C^\inv \circ (1_u * \theta)$ is $(u f_{A_1}) \circ (Fp)$.
To show that $F \sdar u$ is strictly unital, recall that the identity
$1$-cell of $(A,f_A)$ is $(1_A,r')$ where
\[
r' = (1_{f_A} * F^0) \circ r^\inv
\]
is shown in \eqref{slice-1-1}. Then, using the functoriality of
$(1_u * -)$, the $2$-cell
component of $(F\sdar u)(1_A,r')$ is shown along the top and right of
the diagram below. The right unity property in
\cref{bicat-left-right-unity} together with naturality of $a_\C$
shows that the diagram commutes and therefore $F\sdar u$ is strictly
unital.
\[
\begin{tikzpicture}[x=30mm,y=20mm]
\draw[0cell]
(0,0) node (a) {uf_A}
(1,0) node (b) {u(f_A 1_{FA})}
(2.5,0) node (c) {u(f_A F1_A)}
(1,-1) node (d) {(uf_A)1_{FA}}
(2.5,-1) node (e) {(uf_A) F1_A}
;
\path[1cell]
(a) edge node {1_u * r^\inv} (b)
(b) edge node {1_u * (1_{f_A} * F^0)} (c)
(c) edge node {a^\inv_\C} (e)
(b) edge node {a^\inv_\C} (d)
(d) edge[swap] node {1_{uf_A} * F^0} (e)
(a) edge[swap] node {r^\inv} (d)
;
\draw[2cell]
;
\end{tikzpicture}
\]
A similar calculation using the functoriality of whiskering and
naturality of the associator shows that $F\sdar u$ is strictly
functorial with respect to horizontal composition.
\end{proof}
\begin{definition}
We call the strict functor $F \sdar u$ constructed in
\cref{lemma:base-change-functor} the \emph{change-of-slice functor}.
\end{definition}
\section{Lax Terminal Objects in Lax Slices}\label{sec:lax-terminal}
In this section we introduce a specialized notion of terminal object
called inc-lax terminal and prove two key results. First,
\cref{proposition:lax-slice-lax-terminal} proves that if a lax functor
$F$ is essentially surjective, essentially full, and fully faithful,
then the lax slices can be equipped with our specialized form of
terminal object. Second, \cref{lemma:lax-slice-change-fiber} proves
that if $F$ is furthermore a pseudofunctor, then these terminal
objects are preserved by change-of-slice functors. These are the two
key properties of lax slices required for the construction of a
reverse lax functor in \cref{sec:Quillen-A-bicat}.
Given an object $X$ of a bicategory $\C$, recall
\cref{constant-pseudofunctor} describes $\conof{X}$, the constant
pseudofunctor at $X$.
\begin{definition}\label{definition:lax-terminal}
We say that $\lto \in \C$ is \emph{lax terminal}\index{lax terminal object}\index{object!lax terminal} if there is a
lax transformation $k \cn \Id_\C \to \conof{\lto}$. Such a
transformation has component $1$-cells $k_X\cn X \to \lto$ for $X
\in \C$ and $2$-cells
\[
\begin{tikzpicture}[x=16mm,y=16mm]
\draw[0cell]
(0,0) node (1) {\lto}
(1,0) node (1') {\lto}
(1') ++(90:1) node (y) {Y}
(1) ++(90:1) node (x) {X}
;
\draw[1cell]
(x) edge node {u} (y)
(x) edge[swap] node {k_X} (1)
(y) edge node {k_Y} (1')
(1) edge[swap] node {1_{\lto}} (1')
;
\draw[2cell]
(.55,.45) node[rotate=45,font=\Large] (R) {\Rightarrow}
(R) node[above left] {k_u}
;
\end{tikzpicture}
\]
satisfying the lax unity and lax naturality axioms.
\end{definition}
\begin{definition}\label{definition:inc-lax-terminal}\index{inc-lax!lax transformation}\index{lax transformation!inc-lax}
Given lax functors $F,G\cn \B \to \C$, we say that a lax
transformation $k\cn F \to G$ is \emph{inc-lax} or
\emph{\underline{in}itial-\underline{c}omponent-lax} if each
component
\[
k_X\cn FX \to GX
\]
is initial in the category $\C(FX,GX)$.
\end{definition}
\begin{definition}\index{inc-lax!terminal object}\index{lax terminal object!inc-}\index{object!inc-lax terminal}
Suppose that $\lto \in \C$ is a lax terminal object with lax
transformation $k\cn \Id_\C \to \conof{\lto}$.
We say $\lto$ is an \emph{inc-lax terminal} object
if $k$ is inc-lax and the component $k_\lto$ at $\lto$ is
the identity $1$-cell $1_\lto$.
\end{definition}
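\begin{explanation}
As an example, one can check that, for a category $\C$ regarded as a
locally discrete bicategory, each hom-category $\C(X,\lto)$ is
discrete, and a discrete category has an initial object if and only
if it has exactly one object. Therefore $\lto$ is an inc-lax
terminal object in this case if and only if it is a terminal object
of $\C$ in the usual sense, with each $k_X$ the unique morphism $X
\to \lto$.
\end{explanation}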
\begin{explanation}
The universal property of initial $1$-cells implies that, for a $1$-cell
$u\cn X \to Y$, the lax naturality constraint $k_u$ is equal to the
composite of the left unitor with the universal $2$-cell from each
$k_X$ to the composite $k_Y \, u$, as shown below.
\[
\begin{tikzpicture}[x=16mm,y=16mm]
\newcommand{\boundary}{
\draw[0cell]
(0,0) node (1) {\lto}
(1,0) node (1') {\lto}
(1') ++(90:1) node (y) {Y}
(1) ++(90:1) node (x) {X}
;
\draw[1cell]
(x) edge node {u} (y)
(x) edge[swap] node {k_X} (1)
(y) edge node {k_Y} (1')
(1) edge[swap] node {1_{\lto}} (1')
;
}
\draw (1.75,.5) node[font=\Large] {=};
\begin{scope}
\boundary
\draw[2cell]
(.55,.45) node[rotate=45,font=\Large] (R) {\Rightarrow}
(R) node[above left] {k_u}
;
\end{scope}
\begin{scope}[shift={(2.5,0)}]
\boundary
\draw[1cell]
(x) edge[swap] node[pos=.25,inner sep=0] {k_X} (1')
;
\draw[2cell]
(.4,.3) node[rotate=45,font=\Large] (R) {\Rightarrow}
(R) node[below right] {\ell}
(.75,.65) node[rotate=45,font=\Large] (E) {\Rightarrow}
(E) ++(.075,-.16) node {\exists !}
;
\end{scope}
\end{tikzpicture}
\]
\end{explanation}
\begin{definition}\label{definition:preserves-inc}
Suppose that $\B$ and $\C$ have inc-lax terminal objects $(\lto,k)$
and $(\lto',k')$, respectively.
We say that a lax functor $F \cn \B \to \C$ \emph{preserves initial
components}\index{preserves initial components}\index{lax functor!preserves initial components} if each composite
\[
FX \fto{Fk_X} F\lto \fto{k'_{(F\lto)}} \lto'
\]
is initial in $\C(FX,\lto')$.
\end{definition}
\begin{lemma}\label{lemma:preserves-initial-1-cells}
Suppose that $F\cn \B \to \C$ preserves initial components. If
\[
f\cn X \to \lto
\]
is any initial $1$-cell in
$\B(X,\lto)$, then the composite
\[
FX \fto{Ff} F\lto \fto{k'_{(F\lto)}} \lto'
\]
is initial in $\C(FX,\lto')$.
\end{lemma}
\begin{proof}
If $f$ is initial, then there is a unique isomorphism $f \iso
k_X$. Therefore $Ff \iso Fk_X$ and hence their composites
with $k'_{(F\lto)}$ are isomorphic. Now
\[
(k'_{(F\lto)}) \circ (Fk_X)
\]
is initial by hypothesis, and therefore the result follows.
\end{proof}
Now we show that, if $F$ is essentially surjective, essentially full,
and fully faithful, then each lax slice $F\sdar X$ has an inc-lax
terminal object, and each change-of-slice functor $F \sdar u$ preserves initial
components. The first of these results requires the axiom of choice,
and the second depends on the first.
\begin{proposition}\label{proposition:lax-slice-lax-terminal}
Suppose $F$ is a lax functor that is \index{essentially!surjective}essentially surjective,
\index{essentially!full}essentially full, and \index{fully faithful}fully faithful. Then for each $X \in \C$ the
lax slice $F \sdar X$ has an inc-lax terminal object.\index{inc-lax!terminal object!existence}
\end{proposition}
\begin{proof}
Since $F$ is essentially surjective on objects, there is a choice of
object $\ol{X} \in \B$ and invertible $1$-cell
\begin{equation}\label{folX}
f_{\ol{X}}\cn F\ol{X} \to X
\end{equation}
with adjoint inverse
\[
f_{\ol{X}}^\bdot\cn X \to F\ol{X}.
\]
Therefore
$(\ol{X},f_{\ol{X}})$ is an object of $F \sdar X$; we will show that
it is an inc-lax terminal object. Given any other object $(A, f_A)$ in $F \sdar X$, we
have a composite
\[
FA \fto{f_A} X \fto{f_{\ol{X}}^{\bdot}} F\ol{X}
\]
in $\C$. Since $F$ is essentially surjective on $1$-cells, there is a
choice of $1$-cell $p_A$ together with a $2$-cell isomorphism
\[
\theta^{\dagger}_A\cn f_{\ol{X}}^\bdot \; f_A \to Fp_A
\]
whose mate $\theta_A$ fills the triangle
\begin{equation}\label{thetaA}
\begin{tikzpicture}[x=20mm,y=20mm]
\draw[0cell]
(0,0) node (x) {X}
(120:1) node (0) {FA}
(60:1) node (1) {F\ol{X}}
;
\draw[1cell]
(0) edge[swap] node {f_A} (x)
(1) edge node {f_{\ol{X}}} (x)
(0) edge node {Fp_A} (1)
;
\draw[2cell]
(.05,.5) node[rotate=30,font=\Large] {\Rightarrow}
node[above left] {\theta_A}
;
\end{tikzpicture}
\end{equation}
Note that $\theta_A$ is therefore also an isomorphism
by \cref{lemma:mate-iso}. If $(A,f_A)$ is equal to the object
$(\ol{X},f_{\ol{X}})$, then we require the choice of
$(p_{\ol{X}},\theta_{\ol{X}})$ to be the identity $1$-cell $(1_{\ol{X}},r')$
described in \eqref{slice-1-1}.
Therefore $(p_A,\theta_A)$ defines a $1$-cell $(A,f_A) \to
(\ol{X},f_{\ol{X}})$ in $F \sdar X$ that is the identity $1$-cell if
$(A,f_A) = (\ol{X},f_{\ol{X}})$. Now we show that $(p_A,\theta_A)$ is
initial in the category of $1$- and $2$-cells $(A,f_A) \to
(\ol{X},f_{\ol{X}})$. The universal property for initial $1$-cells then
implies that the components defined by
$k_{(A,f_A)} = (p_A,\theta_A)$ assemble to form a lax
transformation to the constant pseudofunctor at $(\ol{X},f_{\ol{X}})$
(see \cref{exercise:inc-lax-components}).
Given any other $1$-cell $(q,\omega)\cn (A,f_A) \to
(\ol{X},f_{\ol{X}})$, we compose with $\theta_A^\inv$ to obtain a $2$-cell
\[
{\ga'}\cn f_{\ol{X}} \; (Fp_A) \to f_{\ol{X}} \; (Fq)
\]
shown below.
\[
\begin{tikzpicture}[x=25mm,y=25mm]
\draw[0cell]
(0,0) node (x) {X}
(120:1) node (0) {FA}
(60:1) node (1) {F\ol{X}}
(180:1) node (2) {F\ol{X}}
;
\path[1cell]
(0) edge node[pos=.75] {f_A} (x)
(1) edge node {f_{\ol{X}}} (x)
(0) edge node (Fpa) {Fq} (1)
(0) edge[swap] node {Fp_{A}} (2)
(2) edge[swap] node {f_{\ol{X}}} (x)
;
\draw[2cell]
(.05,.6) node[rotate=30,font=\Large] {\Rightarrow}
node[above left] {\om}
(150:.5) node[rotate=30,font=\Large] {\Rightarrow}
node[above left] {\theta_A^\inv}
;
\end{tikzpicture}
\]
Since $f_{\ol{X}}$ is an adjoint equivalence, this uniquely determines
a $2$-cell
\[
\ga\cn Fp_A \to Fq
\]
such that $1_{f_{\ol{X}}} * \ga = \om\, \theta_A^\inv$.
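Here we use that $f_{\ol{X}}$ is part of an adjoint equivalence, so the post-composition functor
\[
f_{\ol{X}} \circ - \cn \C(FA, F\ol{X}) \to \C(FA, X)
\]
is an equivalence of hom-categories and in particular fully faithful; whiskering with $1_{f_{\ol{X}}}$ is therefore a bijection from $2$-cells $Fp_A \to Fq$ to $2$-cells $f_{\ol{X}}\,(Fp_A) \to f_{\ol{X}}\,(Fq)$.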
Therefore, because $F$ is fully
faithful on $2$-cells, we have a unique $2$-cell
\[
\ol{\ga} \cn p_A \to q
\]
such that $F\ol{\ga} = \ga$ and hence satisfies the ice cream cone condition shown
below.
\begin{equation*}
\begin{tikzpicture}[x=30mm,y=35mm,baseline={(0,-1.25)},scale=.9]
\begin{scope}[shift={(-1.7,0)}]
\draw[0cell]
(0,0) node (x) {X}
(120:1) node (0) {FA}
(60:1) node (1) {F\ol{X}}
;
\draw[1cell]
(0) edge[swap] node {f_A} (x)
(1) edge node {f_{\ol{X}}} (x)
(0) edge[bend right,swap] node (Fpa) {Fp_A} (1)
(0) edge[bend left] node (Fq) {Fq} (1)
;
\draw[2cell]
(.05,.35) node[rotate=30,font=\Large] {\Rightarrow}
node[above left] {\theta_A}
node[between=Fpa and Fq at .5, shift={(.05,0)}, rotate=90, font=\Large] (E) {\Rightarrow}
(E) node[left] {F\ol{\ga}}
;
\end{scope}
\begin{scope}[shift={(-.75,.2)},scale=.75]
\draw[font=\LARGE]
(1,0) ++(1.4,.5) node[rotate=0] {=}
(1,0) ++(-1.4,.5) node[rotate=0] {=}
;
\draw[0cell]
(0,0) node (Fa) {FA}
(1,0) node (Fxt) {F\ol{X}}
(2,0) node (x) {X}
(1,1.25) node (Fxt2) {F\ol{X}}
;
\draw[1cell]
(Fa) edge[bend right=60, looseness=1.15] node (faB) {f_A} (x)
(Fa) edge[swap] node (Fpa) {Fp_A} (Fxt)
(Fxt) edge node (fxt) {f_{\ol{X}}} (x)
(Fa) edge[bend left=60, looseness=1.25] node (fa) {f_A} (x)
(Fa) edge[bend left] node (Fq) {Fq} (Fxt2)
(Fxt2) edge[bend left] node (fxt2) {f_{\ol{X}}} (x)
;
\draw[2cell]
node[between=faB and Fxt at .5, rotate=90, font=\Large] (Ta) {\Rightarrow}
(Ta) node[left] {\theta_A}
node[between=Fxt and fa at .5, rotate=90, font=\Large] (Tainv) {\Rightarrow}
(Tainv) node[right] {\theta_A^\inv}
node[between=fa and Fxt2 at .5, rotate=90, font=\Large] (Om) {\Rightarrow}
(Om) node[right] {\omega}
;
\end{scope}
\begin{scope}[shift={(1.7,0)}]
\draw[0cell]
(0,0) node (x) {X}
(120:1) node (0) {FA}
(60:1) node (1) {F\ol{X}}
;
\draw[1cell]
(0) edge[swap] node {f_A} (x)
(1) edge node {f_{\ol{X}}} (x)
(0) edge[bend left] node (Fq) {Fq} (1)
;
\draw[2cell]
(.05,.55) node[rotate=30,font=\Large] {\Rightarrow}
node[above left] {\om}
;
\end{scope}
\end{tikzpicture}
\end{equation*}
Therefore $(\ol{\ga})$ is a $2$-cell in $F \sdar X$ from
$(p_A,\theta_A)$ to $(q,\om)$. The diagram above, together with
the invertibility of $\theta_A$ and the uniqueness of both $\ga$ and
$\ol{\ga}$ implies that $(\ol{\ga})$ is the unique such $2$-cell in $F
\sdar X$.
\end{proof}
\begin{proposition}\label{lemma:lax-slice-change-fiber}
Suppose $F$ is a pseudofunctor that is essentially surjective,
essentially full, and fully faithful. Then for each $1$-cell
$u\cn X \to Y$ in $\C$, the strict functor $F \sdar u$ preserves
initial components.\index{change-of-slice functor!preserves initial components}
\end{proposition}
\begin{proof}
For $(A, f_A) \in F \sdar X$, let $(p_A, \theta_A)$ denote the
initial $1$-cell from $(A,f_A)$ to the inc-lax terminal object
\[
(\ol{X},f_{\ol{X}}) \in F \sdar X.
\]
Let $(\ol{u},\theta_{\ol{u}})$ denote the initial $1$-cell from
\[
(F \sdar u)(\ol{X}, f_{\ol{X}}) = (\ol{X},u f_{\ol{X}})
\]
to the inc-lax terminal object
\[
(\ol{Y},f_{\ol{Y}}) \in F \sdar Y.
\]
We must show that the composite of $(\ol{u},\theta_{\ol{u}})$ with $(F \sdar
u)(p_A,\theta_A)$ is initial. This composite is given by $(\ol{u}
p_A, {\theta'})$, where ${\theta'}$ is the $2$-cell determined by the
pasting diagram below.
\begin{equation}\label{base-change-thetabar}
\begin{tikzpicture}[x=15mm,y=20mm,baseline=(xt.base), scale=.85]
\draw[0cell]
(0,0) node (y) {Y}
(-.8,.8) node (x) {X}
(-2,2) node (a) {FA}
(.15,1.75) node (xt) {F\ol{X}}
(2,2) node (yt) {F\ol{Y}}
;
\draw[1cell]
(a) edge[swap] node (f0) {f_{A}} (x)
(xt) edge node (f1) {f_{\ol{X}}} (x)
(x) edge[swap] node {u} (y)
(yt) edge node (f2) {f_{\ol{Y}}} (y)
(a) edge[bend right=10] node (b0) {Fp_A} (xt)
(xt) edge[bend right=10] node[pos=.75] (b1) {F\ol{u}} (yt)
(a) edge[bend left] node (T) {F(\ol{u}\,p_A)} (yt)
;
\draw[2cell]
node[between=f0 and xt at .4, rotate=45, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_A}
node[between=f2 and xt at .55, rotate=45, font=\Large] (B) {\Rightarrow}
(B) node[below right] {\theta_{\ol{u}}}
node[between=xt and T at .5, shift={(-.1,0)}, rotate=90, font=\Large] (C) {\Rightarrow}
(C) node[right] {F^2_{\ol{u}, p_A}}
;
\end{tikzpicture}
\end{equation}
The argument in \cref{proposition:lax-slice-lax-terminal} shows that
$\theta_A$ and $\theta_{\ol{u}}$ are isomorphisms. Since
$F$ is a pseudofunctor by hypothesis, the $2$-cells $F^2$
are isomorphisms and hence ${\theta'}$ is an isomorphism. Then, as
in the proof of \cref{proposition:lax-slice-lax-terminal},
composition with the inverse of ${\theta'}$ shows that
$(\ol{u}\, p_A, {\theta'})$ is initial.
\end{proof}
\section{Quillen Theorem A for Bicategories}\label{sec:Quillen-A-bicat}
In this section we explain how to construct a reverse lax functor $G$.
We assume only that $F$ is a lax functor, that its lax slices are
equipped with inc-lax terminal objects, and that these are preserved
by change-of-slice. The end of \cref{sec:lax-terminal} explains how,
with the axiom of choice, one can choose such data when $F$ is an
essentially surjective, essentially full, and fully faithful
pseudofunctor. However, if one has a constructive method for
obtaining these data in practice, then \cref{theorem:Quillen-A-bicat}
gives a construction of $G$ that does not depend on choice. In
\cref{sec:Whitehead-bicat} we show that, under the hypotheses of the
Bicategorical Whitehead Theorem \ref{theorem:whitehead-bicat}, the $G$
constructed here is an inverse biequivalence for $F$.
\begin{theorem}[Bicategorical Quillen Theorem A]\label{theorem:Quillen-A-bicat}\index{Bicategorical!Quillen Theorem A}\index{Theorem!Bicategorical Quillen - A}
Suppose $F\cn \B \to \C$ is a lax functor of bicategories and
suppose the following:
\begin{enumerate}
\item\label{QA-hypothesis-1} For each $X \in \C$, the lax slice
bicategory $F\sdar X$ has an inc-lax terminal object
$(\ol{X},f_{\ol{X}})$.
Let $k^X$ denote the inc-lax transformation $\Id_{F\sdar X} \to
\conof{{(\ol{X},f_{\ol{X}})}}$.\index{lax slice!bicategory}\index{inc-lax!terminal object}
\item\label{QA-hypothesis-2} For each $u\cn X \to Y$ in $\C$, the
change-of-slice functor $F\sdar u$
preserves initial components (\cref{definition:preserves-inc}).\index{change-of-slice functor!preserves initial components}
\end{enumerate}
Then there is a lax functor $G \cn \C \to \B$ together with
lax transformations
\[
\eta\cn \Id_\B \to GF \quad \mathrm{\ and\ } \quad \epz\cn FG \to \Id_\C.
\]
\end{theorem}
The proof is structured as follows:
\begin{enumerate}
\item \label{it:G-1} \cref{definition:G}: define the data for $G =
(G,G^2,G^0)$:
\begin{enumerate}
\item \label{it:G-1a} define $G$ as an assignment on $0$-, $1$-, and $2$-cells;
\item \label{it:G-1b} define the components of $G^0$ and $G^2$.
\end{enumerate}
\item \label{it:G-2} \cref{proposition:G-lax}: Show that $G$ defines a lax functor:
\begin{enumerate}
\item \label{it:G-2a} show that $G$ is functorial with respect to $2$-cells;
\item \label{it:G-2b} show that $G^2$ and $G^0$ are natural with respect to $2$-cells;
\item \label{it:G-2c} verify the lax associativity axiom \eqref{f2-bicat};
\item \label{it:G-2d} verify the left and right unity axioms \eqref{f0-bicat}.
\end{enumerate}
\item \label{it:G-3} Establish the existence of $\eta$ and $\epz$:
\begin{enumerate}
\item \label{it:G-3a} define the components of $\eta$ and $\epz$;
\item \label{it:G-3b} verify the $2$-cell components of $\eta$ and $\epz$ are natural
with respect to $2$-cells;
\item \label{it:G-3c} verify the unity axiom \eqref{unity-transformation-pasting} for $\eta$
and $\epz$;
\item \label{it:G-3d} verify the horizontal naturality axiom
\eqref{2-cell-transformation-pasting} for $\eta$ and $\epz$.
\end{enumerate}
\end{enumerate}
\newcommand{\Gstep}[1]{\textbf{Step (\ref{it:G-#1}).}}
\newcommand{\Gsteps}[2]{\textbf{Steps (\ref{it:G-#1}) and (\ref{it:G-#2}).}}
\begin{definition}\label{definition:G}
Suppose $F\cn\B \to \C$ is a lax functor satisfying the
assumptions of \cref{theorem:Quillen-A-bicat}.
\Gstep{1a} We define an assignment on cells $G\cn\C \to \B$ as
follows.
\begin{itemize}
\item For each object $X$ in $\C$, the slice $F \sdar X$ has an inc-lax terminal
object $(\ol{X},f_{\ol{X}})$. Define $GX = \ol{X}$.
\item For each $1$-cell $u\cn X \to Y$ in $\C$, we have $(\ol{X},
uf_{\ol{X}}) \in F \sdar Y$, and inc-lax terminal object
$(\ol{Y},f_{\ol{Y}}) \in F \sdar Y$.
The component of $k^Y$ at $(\ol{X},uf_{\ol{X}})$ is an initial $1$-cell
\[
(\ol{u},\theta_{\ol{u}}) \cn (\ol{X}, uf_{\ol{X}}) \to (\ol{Y}, f_{\ol{Y}}).
\]
Define $Gu = \ol{u}$.
\item Given a $2$-cell $\ga\cn u_0 \to u_1$ in $\C$, we have $1$-cells in
$F\sdar Y$ given by $(\ol{u_0},\theta_{0})$ and
$(\ol{u_1},\theta_{1})$, the components of $k^Y$.
Pasting the latter of these with $\ga$ yields a $1$-cell $(\ol{u_1},
\theta_1(\ga * 1_{f_{\ol{X}}}))$ shown in the pasting diagram below.
\begin{equation}\label{Gdef-3}
\begin{tikzpicture}[x=15mm,y=15mm,baseline={(x.base)}]
\draw[0cell]
(0,0) node (y) {Y}
(120:2) node (a) {F\ol{X}}
(120:1) node (x) {X}
(60:2) node (b) {F\ol{Y}}
;
\draw[1cell]
(a) edge[swap] node {f_{\ol{X}}} (x)
(x) edge[bend left] node {u_1} (y)
(x) edge[swap, bend right=40] node {u_0} (y)
(b) edge node {f_{\ol{Y}}} (y)
(a) edge node (T) {F\ol{u_1}} (b)
;
\draw[2cell]
node[between=y and T at .65, rotate=30, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_{1}}
node[between=x and y at .55, rotate=30, font=\Large] (B) {\Rightarrow}
(B) node[above left] {\ga}
;
\end{tikzpicture}
\end{equation}
Since $(\ol{u_0},\theta_0)$ is initial by construction and
\[
(\ol{u_1},\theta_1(\ga * 1_{f_{\ol{X}}}))
\]
is another $1$-cell in $F \sdar Y$ with source
$(\ol{X},u_0f_{\ol{X}})$ and target $(\ol{Y},f_{\ol{Y}})$, there
is a unique $2$-cell $(\ol{\ga})$ in $F \sdar Y$ such that
$F\ol{\ga}$ satisfies the ice cream cone condition shown below.
\begin{equation}\label{Gdef-4}
\begin{tikzpicture}[x=17mm,y=17mm,baseline={(eq.base)}]
\draw[font=\Large] (1.375,0) node (eq) {=};
\newcommand{\boundary}{
\draw[0cell]
(0,-.25) node (y) {Y}
(120:2) node (a) {F\ol{X}}
node[between=a and y at .5] (x) {X}
(60:2) node (b) {F\ol{Y}}
;
\draw[1cell]
(a) edge[swap] node {f_{\ol{X}}} (x)
(b) edge node (fy) {f_{\ol{Y}}} (y)
(x) edge[swap, bend right] node {u_0} (y)
(a) edge[bend left] node (T) {F\ol{u_1}} (b)
;
}
\begin{scope}[shift={(0,-.75)}]
\boundary
\draw[1cell]
(a) edge[swap, bend right] node (B) {F\ol{u_0}} (b)
;
\draw[2cell]
node[between=y and B at .6, rotate=50, font=\Large] (D) {\Rightarrow}
(D) node[left, shift={(-.05,.05)}] {\theta_{0}}
node[between=a and b at .5, rotate=90, font=\Large] (C) {\Rightarrow}
(C) node[right] {F\ol{\ga}}
;
\end{scope}
\begin{scope}[shift={(2.75,-.75)}]
\boundary
\draw[1cell]
(x) edge[bend left] node {u_1} (y)
;
\draw[2cell]
node[between=y and T at .5, shift={(.05,.1)}, rotate=30, font=\Large] (BB) {\Rightarrow}
(BB) node[below, shift={(.05,-.05)}] {\theta_{1}}
node[between=x and y at .55, rotate=30, font=\Large] (CC) {\Rightarrow}
(CC) node[above left] {\ga}
;
\end{scope}
\end{tikzpicture}
\end{equation}
Define $G\ga = \ol{\ga}$.
\end{itemize}
\Gstep{1b} Next we define the components of the lax constraints $G^0$ and $G^2$.
\begin{itemize}
\item Following the definition of $G$ for $Y = X$ and $u = 1_X$, we
obtain a $1$-cell
\[
G1_X = \ol{1_X}\cn \ol{X} \to \ol{X}
\]
together with $\theta_{\ol{1_X}}$ filling the triangle below.
\begin{equation}\label{Gdef-7}
\begin{tikzpicture}[x=15mm,y=13mm,baseline=(x.base)]
\draw[0cell]
(0,0) node (y) {X.}
(120:2) node (a) {F\ol{X}}
(120:1) node (x) {X}
(60:2) node (b) {F\ol{X}}
;
\draw[1cell]
(a) edge[swap] node {f_{\ol{X}}} (x)
(x) edge[swap] node {1_X} (y)
(b) edge node {f_{\ol{X}}} (y)
(a) edge node (T) {F\ol{1_X}} (b)
;
\draw[2cell]
node[between=y and T at .65, rotate=30, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_{\ol{1_X}}}
;
\end{tikzpicture}
\end{equation}
Composing $\theta_{\ol{1_X}}$ with the left unitor $\ell$ we obtain
a $1$-cell in $F \sdar X$
\[
(\ol{1_X}, \ell_{f_{\ol{X}}} \circ \theta_{\ol{1_X}}) \cn (\ol{X},
f_{\ol{X}}) \to (\ol{X}, f_{\ol{X}}).
\]
By the unit condition for inc-lax terminal objects, the identity $1$-cell for
$(\ol{X},f_{\ol{X}})$ is initial and hence we have a unique $2$-cell
\[
1_{GX} = 1_{\ol{X}} \to \ol{1_X} = G1_X
\]
whose image under $F$ satisfies the ice cream cone condition for
\[
(\ol{1_X}, \ell_{f_{\ol{X}}} \circ \theta_{\ol{1_X}}) \andspace
(1_{\ol{X}}, r').
\]
We define $G^0_X$ to be this $2$-cell.
\item Given a pair of composable arrows $u\cn X \to Y$ and $v\cn Y \to Z$ in $\C$, we have
initial $1$-cells $(\ol{u},\theta_{\ol{u}})$ and $(\ol{v},
\theta_{\ol{v}})$ shown below.
\begin{equation}\label{Gdef-9}
\begin{tikzpicture}[x=15mm,y=13mm,baseline={(x.base)}]
\draw[0cell]
(0,0) node (y) {Y}
(120:2) node (a) {F\ol{X}}
(120:1) node (x) {X}
(60:2) node (b) {F\ol{Y}}
;
\draw[1cell]
(a) edge[swap] node {f_{\ol{X}}} (x)
(x) edge[swap] node {u} (y)
(b) edge node {f_{\ol{Y}}} (y)
(a) edge node (T) {F\ol{u}} (b)
;
\draw[2cell]
node[between=y and T at .65, rotate=30, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_{\ol{u}}}
;
\end{tikzpicture}
\qquad \mathrm{ and } \qquad
\begin{tikzpicture}[x=15mm,y=13mm,baseline={(x.base)}]
\draw[0cell]
(0,0) node (y) {Z}
(120:2) node (a) {F\ol{Y}}
(120:1) node (x) {Y}
(60:2) node (b) {F\ol{Z}}
;
\draw[1cell]
(a) edge[swap] node {f_{\ol{Y}}} (x)
(x) edge[swap] node {v} (y)
(b) edge node {f_{\ol{Z}}} (y)
(a) edge node (T) {F\ol{v}} (b)
;
\draw[2cell]
node[between=y and T at .65, rotate=30, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_{\ol{v}}}
;
\end{tikzpicture}
\end{equation}
Pasting these together and composing with $F^2_{\ol{v},\ol{u}}$, we
obtain a $1$-cell in $F \sdar Z$
\[
(\ol{v} \circ \ol{u}, \theta')\cn (\ol{X}, v (u f_{\ol{X}})) \to (\ol{Z}, f_{\ol{Z}}),
\]
where $\theta'$ is given by the following pasting diagram.
\begin{equation}\label{Gdef-10}
\begin{tikzpicture}[x=13mm,y=11mm,baseline={(x.base)}]
\draw[0cell]
(-60:.25) node (y) {Y}
(120:2.5) node (a) {F\ol{X}}
(120:1.25) node (x) {X}
(60:2) node (b) {F\ol{Y}}
(-60:1.75) node (z) {Z}
(z) ++(60:4.25) node (c) {F\ol{Z}}
;
\draw[1cell]
(a) edge[swap] node {f_{\ol{X}}} (x)
(x) edge[swap] node {u} (y)
(b) edge node {f_{\ol{Y}}} (y)
(a) edge[bend right] node (T) {F\ol{u}} (b)
(y) edge[swap] node {v} (z)
(b) edge[bend right] node {F\ol{v}} (c)
(c) edge node {f_{\ol{Z}}} (z)
(a) edge[bend left] node (TT) {F(\ol{v} \circ \ol{u})} (c)
;
\draw[2cell]
node[between=y and T at .65, rotate=30, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_{\ol{u}}}
node[between=y and c at .4, shift={(0,-.1)}, rotate=30, font=\Large] (B) {\Rightarrow}
(B) node[below right] {\theta_{\ol{v}}}
node[between=b and TT at .5, shift={(-.05,0)}, rotate=90, font=\Large] (C) {\Rightarrow}
(C) node[right] {F^2_{\ol{v},\ol{u}}}
;
\end{tikzpicture}
\end{equation}
Now by definition, $(\ol{u},\theta_{\ol{u}}) = k^Y_{(\ol{X},uf_{\ol{X}})}$.
Therefore by hypothesis \eqref{QA-hypothesis-2} the composite
$(\ol{v} \circ \ol{u}, \theta')$ is an initial $1$-cell
$(\ol{X},v(uf_{\ol{X}})) \to (\ol{Z},f_{\ol{Z}})$. We also have
the component of $k^Z$ at $(\ol{X},(vu)f_{\ol{X}})$. This is an
initial $1$-cell
\[
(\ol{vu}, \theta_{\ol{vu}})\cn (\ol{X},(vu) f_{\ol{X}}) \to
(\ol{Z}, f_{\ol{Z}})
\]
where $\theta_{\ol{vu}}$ is displayed below.
\begin{equation}\label{Gdef-11}
\begin{tikzpicture}[x=15mm,y=13mm,baseline={(x.base)}]
\draw[0cell]
(0,0) node (y) {Z}
(120:2) node (a) {F\ol{X}}
(120:1) node (x) {X}
(60:2) node (b) {F\ol{Z}}
;
\draw[1cell]
(a) edge[swap] node {f_{\ol{X}}} (x)
(x) edge[swap] node {vu} (y)
(b) edge node {f_{\ol{Z}}} (y)
(a) edge node (T) {F\ol{vu}} (b)
;
\draw[2cell]
node[between=y and T at .65, rotate=30, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_{\ol{vu}}}
;
\end{tikzpicture}
\end{equation}
Composing $\theta_{\ol{vu}}$ with the associator
\[
a_\C^\inv \cn v(uf_{\ol{X}}) \to (vu)f_{\ol{X}}
\]
yields another $1$-cell
\[
(\ol{vu}, a_\C^\inv \theta_{\ol{vu}})\cn (\ol{X}, v(uf_{\ol{X}})) \to (\ol{Z},f_{\ol{Z}}),
\]
and therefore there is a unique $2$-cell in $\B$
\[
(Gv) \circ (Gu) = \ol{v} \circ \ol{u} \to \ol{vu} = G(vu)
\]
whose image under $F$ satisfies the ice cream cone condition for
the triangles \eqref{Gdef-10} and \eqref{Gdef-11}. We define
$G^2_{v,u}$ to be this $2$-cell.\defmark
\end{itemize}
\end{definition}
\begin{proposition}\label{proposition:G-lax}
Under the hypotheses of \cref{theorem:Quillen-A-bicat}, the
assignment on cells defined above specifies a lax functor $G\cn \C
\to \B$.
\end{proposition}
\begin{proof}
\Gstep{2a} To verify that $G$ defines a functor $\C(X,Y) \to \B(GX,GY)$ for
each $X$ and $Y$, first note that when $\ga = 1_{u}$, then
$1_{\ol{u}}$ satisfies the ice cream cone condition above, and hence by
uniqueness of $2$-cells out of an initial $1$-cell, we have
\[
G1_u = \ol{(1_u)} = 1_{(\ol{u})} = 1_{Gu}.
\]
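Indeed, since $F$ restricts to a functor on each hom-category, we have
\[
F1_{\ol{u}} = 1_{F\ol{u}},
\]
and whiskering with an identity $2$-cell is an identity, so both pastings in the cone condition \eqref{Gdef-4} with $\ga = 1_u$ reduce to $\theta_{\ol{u}}$.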
Now we turn to functoriality with respect to vertical composition of
$2$-cells. Consider a pair of composable $2$-cells
\[
u_0 \fto{\ga} u_1 \fto{\de} u_2
\]
between $1$-cells $u_0, u_1, u_2\in \C(X,Y)$. We will show that the
chosen lift $G(\de\ga) = \ol{\de\ga}$ is equal to the composite
\[
(G\de) \circ (G\ga) = \ol{\de} \circ \ol{\ga}.
\]
To do this, we note that $(\ol{u_0},\theta_0)$ is an initial $1$-cell
and therefore we simply need to observe that $\ol{\de} \circ \ol{\ga}$
satisfies the ice cream cone condition for $\ol{\de \ga}$. Then the
uniqueness of $2$-cells from $(\ol{u_0},\theta_0)$ to
$(\ol{Y},f_{\ol{Y}})$ will imply the result. This is done by the four
pasting diagrams below. The first equality follows by functoriality of $F$:
we have $(F\ol{\de}) (F\ol{\ga}) = F(\ol{\de} \circ \ol{\ga})$. The next
two equalities follow by the conditions for $\ol{\ga}$ and $\ol{\de}$
individually.
\begin{equation}\label{Gdef-5}
\begin{tikzpicture}[x=18mm,y=18mm,baseline={(a.base)},scale=.9]
\draw[font=\Large] (1.65,0) node (eq) {=};
\newcommand{\boundary}{
\draw[0cell]
(0,-.25) node (y) {Y}
(120:2) node (a) {F\ol{X}}
(150:1) node (x) {X}
(60:2) node (b) {F\ol{Y}}
;
\draw[1cell]
(a) edge[swap] node {f_{\ol{X}}} (x)
(b) edge[bend left] node (fy) {f_{\ol{Y}}} (y)
(x) edge[swap, bend right=70] node {u_0} (y)
(a) edge[bend left=70] node (T) {F\ol{u_2}} (b)
(a) edge[swap, bend right=70] node (B) {F\ol{u_0}} (b)
;
}
\begin{scope}[shift={(0,-1.125)}]
\boundary
\draw[1cell]
;
\draw[2cell]
node[between=y and B at .6, rotate=50, font=\Large] (TH0) {\Rightarrow}
(TH0) node[left, shift={(-.05,.05)}] {\theta_{0}}
node[between=B and T at .5, rotate=90, font=\Large] (Fga) {\Rightarrow}
(Fga) node[right] {F(\ol{\de} \circ \ol{\ga})}
;
\end{scope}
\begin{scope}[shift={(3.3,-1.125)}]
\boundary
\draw[1cell]
(a) edge[bend right=10] node[pos=.2] (M) {F\ol{u_1}} (b)
;
\draw[2cell]
node[between=y and B at .6, rotate=50, font=\Large] (TH0) {\Rightarrow}
(TH0) node[left, shift={(-.05,.05)}] {\theta_{0}}
node[between=B and T at .25, rotate=90, font=\Large] (Fga) {\Rightarrow}
(Fga) node[right] {F\ol{\ga}}
node[between=y and T at .85, rotate=90, font=\Large] (Fde) {\Rightarrow}
(Fde) node[right] {F\ol{\de}}
;
\end{scope}
\end{tikzpicture}
\end{equation}
\begin{equation}\label{Gdef-6}
\begin{tikzpicture}[x=18mm,y=18mm,baseline={(a.base)},scale=.9]
\draw[font=\Large] (1.65,0) node (eq) {=};
\draw[font=\Large] (-1.65,0) node (eq) {=};
\newcommand{\boundary}{
\draw[0cell]
(0,-.25) node (y) {Y}
(120:2) node (a) {F\ol{X}}
(150:1) node (x) {X}
(60:2) node (b) {F\ol{Y}}
;
\draw[1cell]
(a) edge[swap] node {f_{\ol{X}}} (x)
(b) edge[bend left] node (fy) {f_{\ol{Y}}} (y)
(x) edge[swap, bend right=70] node (u0) {u_0} (y)
(x) edge[bend right=10] node[pos=.3] (u1) {u_1} (y)
(a) edge[bend left=70] node (T) {F\ol{u_2}} (b)
;
\draw[2cell]
node[between=u0 and u1 at .55, rotate=50, font=\Large] (ga) {\Rightarrow}
(ga) node[above left] {\ga}
;
}
\begin{scope}[shift={(0,-1.125)}]
\boundary
\draw[1cell]
(a) edge[bend right=10] node[pos=.2] (M) {F\ol{u_1}} (b)
;
\draw[2cell]
node[between=y and T at .85, rotate=90, font=\Large] (Fde) {\Rightarrow}
(Fde) node[right] {F\ol{\de}}
node[between=y and T at .5, shift={(.05,-.1)}, rotate=50, font=\Large] (TH1) {\Rightarrow}
(TH1) node[below, shift={(.05,-.05)}] {\theta_{1}}
;
\end{scope}
\begin{scope}[shift={(3.3,-1.125)}]
\boundary
\draw[1cell]
(x) edge[bend left=70] node[pos=.7] (u2) {u_2} (y)
;
\draw[2cell]
node[between=y and T at .5, shift={(.05,.2)}, rotate=50, font=\Large] (TH2) {\Rightarrow}
(TH2) node[below, shift={(.05,-.05)}] {\theta_{2}}
node[between=u0 and u1 at .55, rotate=50, font=\Large] (ga) {\Rightarrow}
(ga) node[above left] {\ga}
node[between=u0 and u2 at .70, rotate=50, font=\Large] (de) {\Rightarrow}
(de) node[below right] {\de}
;
\end{scope}
\end{tikzpicture}
\end{equation}
Since $\ol{\de \ga}$ is the unique $2$-cell satisfying
this condition, we must have $\ol{\de \ga} = \ol{\de} \circ \ol{\ga}$.
Therefore the definition of $G$ is functorial with respect to vertical
composition of $2$-cells.
\Gstep{2b} Naturality of $G^0$ is vacuous. Naturality of $G^2$
follows because $(\ol{v} \circ \ol{u}, \theta')$ shown in \eqref{Gdef-10} is
initial. Therefore given $\ga\cn u_0 \to u_1$ and $\de\cn v_0 \to v_1$, the two
composites
\[
(\ol{v_0} \circ \ol{u_0},\theta'_0) \to (\ol{v_1u_1},\theta_{\ol{v_1u_1}})
\]
are equal.
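Spelled out, naturality of $G^2$ is the equality of $2$-cells
\[
G^2_{v_1,u_1} \circ \big( (G\de) * (G\ga) \big)
= G(\de * \ga) \circ G^2_{v_0,u_0}
\cn (Gv_0)(Gu_0) \to G(v_1u_1),
\]
and both sides underlie $2$-cells in $F \sdar Z$ out of the initial $1$-cell $(\ol{v_0} \circ \ol{u_0},\theta'_0)$, so they agree by the uniqueness of such $2$-cells.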
\Gstep{2c} Now we need to verify the lax associativity axiom
\eqref{f2-bicat} and the two lax unity axioms \eqref{f0-bicat} for $G$. We
show that each of the $2$-cells involved is the projection to $\B$ of a
$2$-cell in a lax slice category, and that each composite in the
diagrams is a $2$-cell whose source is initial. Thus we conclude in
each diagram that the two relevant composites are equal.
First, let us consider the lax associativity hexagon \eqref{f2-bicat}
for $G^2$ and the associators.
Given a composable triple
\[
W \fto{s} X \fto{u} Y \fto{v} Z
\]
we need to show that the following diagram commutes
\begin{equation}\label{Gdef-12}
\begin{tikzpicture}[x=20mm,y=20mm,baseline=(X.base)]
\draw[0cell]
(0,0) node (3) {((Gv)(Gu))\, (Gs)}
(3) ++(60:1) node (2) {(Gv)\, ((Gu)(Gs))}
(3) ++(-60:1) node (4) {(G(vu))\, (Gs)}
(3) ++(3,0) node (0) {G(v\,(us))}
(0) ++(120:1) node (1) {(Gv)\, (G(us))}
(0) ++(-120:1) node (5) {G((vu)\, s)}
;
\draw[1cell]
(3) edge node (X) {a_\B} (2)
(2) edge node {1 * G^2} (1)
(1) edge node {G^2} (0)
(3) edge[swap] node {G^2 * 1} (4)
(4) edge[swap] node {G^2} (5)
(5) edge[swap] node {Ga_\C} (0)
;
\end{tikzpicture}
\end{equation}
where $a_\B$ and $a_\C$ denote the associators in $\B$ and $\C$
respectively. To do this, we observe that this entire diagram is the
projection to $\B$ of the following diagram in $F \sdar Z$, where we
use two key details from the description in \cref{defprop:lax-slice}:
\begin{itemize}
\item The horizontal composition of $2$-cells in $F\sdar Z$
(namely, the whiskering of $2$-cells by $1$-cells) is given by
horizontal composition in $\B$.
\item The associator in $F \sdar Z$ is given by $(a_\B)$.
\end{itemize}
\begin{equation}\label{Gdef-13}
\begin{tikzpicture}[x=25mm,y=20mm,baseline=(X.base)]
\draw[0cell]
(0,0) node (3) {
\big(
(\ol{v}, \theta_{\ol{v}}) (\ol{u}, 1_v * \theta_{\ol{u}})
\big)
\, (\ol{s}, 1_{vu} * \theta_{\ol{s}})
}
(3) ++(70:1) node (2) {
(\ol{v}, \theta_{\ol{v}}) \, \big(
(\ol{u}, 1_v * \theta_{\ol{u}}) (\ol{s}, 1_v * (1_u * \theta_{\ol{s}}))
\big)
}
(3) ++(-70:1) node (4) {
(\ol{vu}, \theta_{\ol{vu}})\, (\ol{s}, 1_{vu} * \theta_{\ol{s}})
}
(3) ++(3,0) node (0) {
(\ol{v\,(us)}, \theta_{\ol{v(us)}})
}
(0) ++(180-70:1) node (1) {
(\ol{v}, \theta_{\ol{v}}) \, (\ol{us}, 1_v * \theta_{\ol{us}})
}
(0) ++(180+70:1) node (5) {
(\ol{(vu)\, s}, \theta_{\ol{(vu)s}})
}
;
\draw[1cell]
(3) edge node[pos=.4] (X) {(a_\B)} (2)
(2) edge node {(1 * G^2)} (1)
(1) edge node[pos=.6] {(G^2)} (0)
(3) edge[swap] node[pos=.3] {(G^2 * 1)} (4)
(4) edge[swap] node {(G^2)} (5)
(5) edge[swap] node[pos=.7] {(\ol{a_\C})} (0)
;
\end{tikzpicture}
\end{equation}
Now the $1$-cells $(\ol{v},\theta_{\ol{v}})$,
$(\ol{u},\theta_{\ol{u}})$, and $(\ol{s},\theta_{\ol{s}})$ are defined
to be components of $k^Z$, $k^Y$, and $k^X$, respectively. We have
$(F \sdar u)(\ol{s}, \theta_{\ol{s}}) = (\ol{s}, 1_u *
\theta_{\ol{s}})$ and therefore
\[
(\ol{u},\theta_{\ol{u}}) (\ol{s},1_u * \theta_{\ol{s}})
\]
is initial by hypothesis \eqref{QA-hypothesis-2} and
\cref{lemma:preserves-initial-1-cells}. The strict functor $F \sdar v$
sends this composite to
\[
(\ol{u},1_v * \theta_{\ol{u}}) (\ol{s}, 1_v * (1_u * \theta_{\ol{s}})),
\]
so the upper-left corner of the hexagon is initial by
hypothesis \eqref{QA-hypothesis-2} and \cref{lemma:preserves-initial-1-cells} again.
Since $a_\B$ is an isomorphism, this implies that
\[
\big(
(\ol{v}, \theta_{\ol{v}}) (\ol{u}, 1_v * \theta_{\ol{u}})
\big)
\, (\ol{s}, 1_{vu} * \theta_{\ol{s}})
\]
is also an initial $1$-cell. Therefore the two composites around the diagram are equal
and consequently their projections to $\B$ are equal.
\Gstep{2d} Next we consider the lax unity axioms \eqref{f0-bicat} for a $1$-cell $u\cn X \to
Y$. We use subscripts $\B$ or $\C$ to denote the respective unitors.
As with the lax associativity axiom, the necessary diagrams
are projections to $\B$ of diagrams in $F \sdar Y$, each of whose
source $1$-cell is initial. Therefore the diagrams in $F \sdar Y$
commute and hence their projections to $\B$ commute.
\begin{equation}\label{Gdef-14}
\begin{tikzpicture}[x=23mm,y=18mm,baseline={(0,1)}]
\draw[0cell]
(0,0) node (a) {(Gu) (1_{GX})}
(.25,1) node (b) {(Gu) (G1_X)}
(1.25,1) node (c) {G(u 1_X)}
(1.5,0) node(d) {Gu}
;
\draw[1cell]
(a) edge node[pos=.4] {1*G^0} (b)
(b) edge node {G^2} (c)
(c) edge node[pos=.6] {Gr_\C} (d)
(a) edge[swap] node {r_\B} (d)
;
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=23mm,y=18mm, baseline={(0,1)}]
\draw[0cell]
(0,0) node (a) {(1_{GY}) (Gu)}
(.25,1) node (b) { (G1_Y) (Gu)}
(1.25,1) node (c) {G(1_Y u)}
(1.5,0) node(d) {Gu}
;
\draw[1cell]
(a) edge node[pos=.4] {G^0*1} (b)
(b) edge node {G^2} (c)
(c) edge node[pos=.6] {G(\ell_\C)} (d)
(a) edge[swap] node {\ell_\B} (d)
;
\end{tikzpicture}
\end{equation}
This completes the proof that $G$ is a lax functor $\C \to \B$.
\end{proof}
\begin{proof}[Proof of \cref{theorem:Quillen-A-bicat}]
Now we turn to the transformations
\[
\eta\cn \Id_\B \to GF \quad \mathrm{ and } \quad \epz\cn FG \to \Id_\C.
\]
\Gstep{3a} The components of $\epz$ are already defined in the
construction of $G$: given an object $X$, we define $\epz_X =
f_{\ol{X}}$, the $1$-cell part of the inc-lax terminal object $(\ol{X},
f_{\ol{X}})$. For a $1$-cell $u$, we define $\epz_u = \theta_{\ol{u}}$,
the $2$-cell part of the initial $1$-cell
\[
(\ol{u},\theta_{\ol{u}}) \cn (\ol{X},u f_{\ol{X}}) \to (\ol{Y},f_{\ol{Y}}).
\]
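Unwinding the slice structure, the $2$-cell $\epz_u = \theta_{\ol{u}}$ relates the composites
\[
u \circ f_{\ol{X}} = u \circ \epz_X
\andspace
f_{\ol{Y}} \circ F(Gu) = \epz_Y \circ (FG)(u),
\]
which is exactly the shape required of the lax naturality constraint for $\epz \cn FG \to \Id_\C$.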
To define the components of $\eta$, suppose $A$ and $B$ are objects of $\B$
and suppose $p\cn A \to B$ is a $1$-cell between them. Then $(A,1_{FA})$ defines
an object of $F \sdar FA$. Therefore there is an initial $1$-cell
\begin{equation}\label{bracketA}
([A],\theta_{[A]})\cn(A,1_{FA}) \to (\ol{FA}, f_{\ol{FA}})
\end{equation}
to the inc-lax terminal object in $F \sdar FA$. We define
\[
\eta_A = [A] \cn A \to \ol{FA} = G(FA).
\]
Given a $1$-cell $p\cn A \to B$ in $\B$ we have two different $1$-cells
in $F \sdar FB$
\[
(A,(Fp)\, 1_{FA}) \to (\ol{FB},f_{\ol{FB}}).
\]
One of these is the composite
\begin{equation}\label{comp-1}
(\ol{Fp}, \theta_{\ol{Fp}}) \circ (F \sdar Fp)(\eta_A,\theta_{\eta_A}),
\end{equation}
and note that this is initial by hypothesis \eqref{QA-hypothesis-2}
and \cref{lemma:preserves-initial-1-cells}. The other $1$-cell is the composite
\begin{equation}\label{comp-2}
(\eta_B,\theta_{\eta_B}) \circ (p,\upsilon),
\end{equation}
where $\upsilon$ denotes a composite of unitors. The $2$-cell
components of the composites \eqref{comp-1} and \eqref{comp-2} are
given, respectively, by the two pasting diagrams below.
\begin{equation}\label{Gdef-15}
\begin{tikzpicture}[x=11.1mm,y=15mm,baseline={(F2.base)}]
\draw[0cell]
(0,0) node (z) {FB}
(90:2) node (xt) {F(\ol{FA})}
(45:2.828) node (zt) {F(\ol{FB})}
(135:2.828) node (xt0) {FA}
(135:1.414) node (y) {FA}
;
\draw[1cell]
(xt0) edge node (F1xt) {F\eta_A} (xt)
(xt0) edge[swap] node {1_{FA}} (y)
(y) edge[swap] node {Fp} (z)
(xt0) edge[bend left=60, looseness=1.1] node (U) {F(\ol{Fp}\, \eta_A)} (zt)
(xt) edge node (fFa) {f_{\ol{FA}}} (y)
(zt) edge node[pos=.45] (fFb) {f_{\ol{FB}}} (z)
(xt) edge node (T) {F(\ol{Fp})} (zt)
;
\draw[2cell]
node[between=fFb and xt at .6, rotate=30, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_{\ol{Fp}}}
node[between=y and F1xt at .4, shift={(.108,0.08)}, rotate=30, font=\Large] (B) {\Rightarrow}
(B) node[above left] {\theta_{\eta_A}}
node[between=xt and U at .5, shift={(.1,0)}, rotate=90, font=\Large] (D) {\Rightarrow}
(D) node[right] (F2) {F^2}
;
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=11mm,y=15mm,baseline={(F2.base)}]
\draw[0cell]
(0,0) node (z) {FB}
(90:2) node (xt) {FB}
(45:2.828) node (zt) {F(\ol{FB})}
(135:2.828) node (xt0) {FA}
(135:1.414) node (y) {FA}
;
\draw[1cell]
(xt0) edge node (F1xt) {Fp} (xt)
(xt0) edge[swap] node {1_{FA}} (y)
(y) edge[swap] node {Fp} (z)
(xt0) edge[bend left=60, looseness=1.1] node (U) {F(\eta_B\, p)} (zt)
(xt) edge node {1_{FB}} (z)
(zt) edge node[pos=.45] (fFb) {f_{\ol{FB}}} (z)
(xt) edge node (T) {F(\eta_B)} (zt)
;
\draw[2cell]
node[between=fFb and xt at .6, rotate=30, font=\Large] (A) {\Rightarrow}
(A) node[below right] {\theta_{\eta_B}}
node[between=y and xt at .45, rotate=30, font=\Large] (B) {\Rightarrow}
(B) node[above left] {\upsilon}
node[between=xt and U at .5, shift={(.1,0)}, rotate=90, font=\Large] (D) {\Rightarrow}
(D) node[right] (F2) {F^2}
;
\end{tikzpicture}
\end{equation}
Since the diagram at left in
\eqref{Gdef-15} corresponds to an initial $1$-cell, we therefore have a
unique $2$-cell $(G(Fp)) \eta_A \to \eta_B p$ in $\B$ whose image under
$F$ satisfies the ice cream cone condition with respect to the two
outermost triangles in \eqref{Gdef-15}. We take $\eta_p$ to be this $2$-cell.
\Gstep{3b} Naturality of the components $\epz_u$ with respect to
$2$-cells $\ga\cn u_0 \to u_1$ is precisely the condition in
\eqref{Gdef-4} defining $G\ga = \ol{\ga}$. Naturality of the
components $\eta_p$ with respect to $2$-cells $\om\cn p_0 \to p_1$
follows because the source $1$-cell shown at left in \eqref{Gdef-15} is initial.
\Gsteps{3c}{3d} The lax transformation axioms for $\epz$ and $\eta$ follow immediately
from the inc-lax terminal conditions for $k^X$; the unit axiom follows from the unit
condition for $k^X$, and the $2$-cell axiom follows from uniqueness of
$2$-cells out of an initial $1$-cell.
\end{proof}
\section{The Whitehead Theorem for Bicategories}\label{sec:Whitehead-bicat}
In this section we apply the bicategorical Quillen Theorem A
(\ref{theorem:Quillen-A-bicat}) to prove the Bicategorical Whitehead
Theorem.
\begin{theorem}[Whitehead Theorem for Bicategories]\label{theorem:whitehead-bicat}\index{Bicategorical!Whitehead Theorem}\index{Theorem!Bicategorical Whitehead}\index{biequivalence!local characterization}\index{characterization of!a biequivalence}\index{pseudofunctor!biequivalence}
A pseudofunctor of bicategories $F\cn \B \to \C$ is a biequivalence
if and only if $F$ is
\begin{enumerate}
\item essentially surjective\index{essentially!surjective} on objects,
\item essentially full\index{essentially!full} on $1$-cells, and
\item fully faithful\index{fully faithful} on $2$-cells.
\end{enumerate}
\end{theorem}
\begin{proof}
One implication is immediate: if $F$ is a biequivalence with inverse
$G$, then the internal equivalence $FG \hty \Id_\C$ implies that $F$
is essentially surjective on objects.
\cref{lemma:biequiv-implies-local-equiv} proves that $F$ is
essentially full on $1$-cells and fully faithful on $2$-cells.
If $F$ is essentially surjective, essentially full, and fully faithful, then
\cref{proposition:lax-slice-lax-terminal,lemma:lax-slice-change-fiber}
show that the lax slices have inc-lax terminal objects and that the
strict functors $F \sdar u$ preserve initial components.
Therefore we apply \cref{theorem:Quillen-A-bicat} to obtain $G\cn\C
\to \B$ together with $\epz$ and $\eta$.
Moreover, the proof of \cref{proposition:lax-slice-lax-terminal} shows that
the components $\epz_X = f_{\ol{X}}$ and $\epz_u = \theta_{\ol{u}}$
are invertible. Likewise, if the constraints $F^0$ and $F^2$ are
invertible then the ice cream cone conditions for $F(G^0)$ and
$F(G^2)$, together with invertibility of the $\theta_{\ol{u}}$,
imply that $F(G^0)$ and $F(G^2)$ are invertible. Thus $G^0$ and
$G^2$ are invertible because $F$ is fully faithful on $2$-cells and
therefore reflects isomorphisms. Therefore $G$ is a pseudofunctor.
Likewise in the construction of $\eta_A$ via
\cref{proposition:lax-slice-lax-terminal}, we note that
$\theta_{\eta_A}$ and $f_{\ol{FA}}$ are both invertible, so
$F\eta_A$ is invertible. The assumption that $F$ is essentially
full on $1$-cells and fully faithful on $2$-cells implies that $F$
reflects invertibility of $1$-cells, and therefore $\eta_A$ is
invertible. Similarly, the construction of $\eta_p$ under these
hypotheses implies that $F(\eta_p)$ is invertible and hence $\eta_p$
is invertible.
Now $\eta$ and $\epz$ are strong transformations with invertible
components. Therefore by
\cref{proposition:adjoint-equivalence-componentwise} we conclude
that $\eta$ and $\epz$ are internal equivalences in $\Bicat(\C,\C)$
and $\Bicat(\B,\B)$, respectively. Thus $F$ and $G$ are inverse
biequivalences.
\end{proof}
If $\M$ and $\N$ are monoidal categories and $F\cn\M \to \N$ is a
monoidal functor, we consider the corresponding pseudofunctor of
one-object bicategories
\[
\Si F \cn \Si\M \to \Si\N
\]
and obtain the following.
\begin{corollary}\index{monoidal functor!monoidal equivalence}\index{equivalence!monoidal}
A monoidal functor is a monoidal equivalence if and only if it is an
equivalence of underlying categories.
\end{corollary}
Recall the bicategory $\twovc$ of coordinatized $2$-vector spaces, the $2$-category $\twovtc$ of totally coordinatized $2$-vector spaces, and the strictly unitary pseudofunctor\index{pseudofunctor}\index{functor!pseudo-} \[F : \twovtc \to \twovc\] in \Cref{ex:two-vector-space,ex:twovect-tc,ex:two-vector-strict-functor}. We now observe that $F$ is an example of strictification of bicategories.
\begin{corollary}\label{cor:two-vector-spaces}\index{2-vector space}
$F : \twovtc \to \twovc$ is a biequivalence.
\end{corollary}
\begin{proof}
We check the three conditions in the Whitehead \Cref{theorem:whitehead-bicat}.
\begin{enumerate}
\item By definition $F$ is the identity function on objects.
\item $F$ is essentially full on 1-cells because each finite dimensional complex vector space is isomorphic to some $\fieldc^n$.
\item $F$ is fully faithful on $2$-cells because, with respect to the standard bases, there is a bijection between $\fieldc$-linear maps $\fieldc^m \to \fieldc^n$ and complex $n \times m$ matrices.
\end{enumerate}
Therefore, the strictly unitary pseudofunctor $F$ is a biequivalence.
\end{proof}
\section{Quillen Theorem A and The Whitehead Theorem for \texorpdfstring{$2$}{2}-Categories}\label{sec:quillen-whitehead-2-cat}
In this section we specialize to discuss stronger variants of
the bicategorical Quillen A and Whitehead Theorems.
Throughout this section,
$\B$ and $\C$ will be $2$-categories.
Observe first that, if $F\cn \B \to \C$ is a $2$-functor and is essentially surjective,
essentially full, and fully faithful, then by the bicategorical
Whitehead Theorem \ref{theorem:whitehead-bicat} we have a bi-inverse
$G$ which is generally a pseudofunctor but not a $2$-functor.
However, the work of the preceding sections culminating in \cref{theorem:whitehead-bicat} can
be modified to prove a stronger result under correspondingly stronger
hypotheses (cf. \cref{definition:2-equiv-terms}). That will be our focus for this section.
We will begin with the case that $F\cn \B \to \C$ is merely a lax
functor, but we will address the cases where $F$ is a pseudofunctor
and a $2$-functor at the end of this section.
Our first specialization is that the lax slices $F\sdar X$ are
$2$-categories in this case.
\begin{proposition}\label{proposition:lax-slice-2-cat}
Suppose $F\cn \B \to \C$ is a lax functor of $2$-categories, and
suppose that $X$ is an object of $\C$. Then the lax slice
bicategory $F \sdar X$ of \cref{sec:lax-slice} is a $2$-category.\index{lax slice!2-category}\index{2-category!lax slice}
\end{proposition}
\begin{proof}
Inspection of step (\ref{it:slice-5}) in the proof of \cref{defprop:lax-slice}
shows that the associator and unitors of $F \sdar X$ are determined
by those of $\B$. Therefore $F \sdar X$ is a $2$-category if $\B$ is a $2$-category.
\end{proof}
\begin{remark}
As this proof makes clear, the conclusion depends only on
associators and unitors in $\B$ being identities. The result holds
even if $\C$ is merely a bicategory.
\end{remark}
\begin{definition}
We say that a $1$-cell $(p,\theta)$ in $F \sdar X$ is \emph{$2$-unitary}\index{2-unitary}\index{1-cell!2-unitary}
if $\theta$ is an identity $2$-cell.
\end{definition}
Next we turn to the inc-lax terminal phenomena of
\cref{sec:lax-slice}. The results of
\cref{proposition:lax-slice-lax-terminal,lemma:lax-slice-change-fiber}
apply in the $2$-categorical case, but a stronger result holds under
stronger hypotheses.
\begin{definition}\label{definition:2-equiv-terms}
Suppose $F \cn \B \to \C$ is a $2$-functor of $2$-categories.
\begin{itemize}
\item We say that $F$ is \emph{$1$-essentially surjective}\index{1-cell!1-essentially surjective}\index{essentially!surjective!1-} if $F$
is surjective on $1$-cell isomorphism-classes of objects. This is
equivalent to the statement that the underlying functor of
$1$-categories is essentially surjective.
\item We say that $F$ is \emph{$1$-fully faithful}\index{1-cell!1-fully faithful}\index{fully faithful!1-} if it is bijective
on $1$-cells. This is equivalent to the statement that the
underlying functor of $1$-categories is fully faithful.\defmark
\end{itemize}
\end{definition}
These conditions are somewhat stronger than the bicategorical
analogues. However, because a $2$-equivalence is an equivalence of
underlying $1$-categories, they do hold for $2$-equivalences. The
Whitehead Theorem for $2$-categories \ref{theorem:whitehead-2-cat} below
shows that the converse is true.
\begin{proposition}\label{inc-lax-terminal-2-cat}
Suppose $F \cn \B \to \C$ is a lax functor of $2$-categories, and
suppose that $F$ is $1$-essentially surjective, $1$-fully faithful, and
fully faithful on $2$-cells. Then for each $X \in \C$, there is an
inc-lax terminal object $(\ol{X},f_{\ol{X}}) \in F \sdar X$ whose initial
components are $2$-unitary $1$-cells.
\end{proposition}
\begin{proof}
The proof of this result follows the proof of
\cref{proposition:lax-slice-lax-terminal}, with the following
important changes.
\begin{itemize}
\item In the choice of inc-lax terminal object
$(\ol{X}, f_{\ol{X}})$ described in \eqref{folX}, we choose
$f_{\ol{X}}$ to be an \emph{isomorphism} rather than an adjoint
equivalence (cf. \cref{definition:1-cell-isomorphism}), and choose
$f^\bdot_{\ol{X}}$ to be its inverse---this is possible because
$F$ is $1$-essentially surjective on objects.
\item In the choice of initial component $(p_A,\theta_A)$ described
in \eqref{thetaA}, we choose $\theta_A$ to be an identity
$2$-cell---this is possible because $f_{\ol{X}}$ and
$f_{\ol{X}}^\bdot$ are inverse isomorphisms.
\item Note, given these choices, that $p_A$ is an isomorphism if and
only if $f_A$ is an isomorphism.\qedhere
\end{itemize}
\end{proof}
Next we have the corresponding specialization of the bicategorical Quillen Theorem A.
\begin{theorem}[$2$-categorical Quillen Theorem A]\label{theorem:Quillen-A-2-cat}\index{2-categorical!Quillen Theorem A}\index{Theorem!2-Categorical Quillen - A}
Suppose $F\cn \B \to \C$ is a lax functor of $2$-categories, and
suppose the following.
\begin{enumerate}
\item\label{QA2-hypothesis-1} For each $X \in \C$, the lax slice
$2$-category $F\sdar X$ has an inc-lax terminal object
$(\ol{X},f_{\ol{X}})$ whose initial components are $2$-unitary $1$-cells.\index{lax slice!2-category}\index{inc-lax!terminal object}\index{2-unitary}
\item\label{QA2-hypothesis-2} For each $u\cn X \to Y$ in $\C$, the
change-of-slice functor $F\sdar u$
preserves initial components (\cref{definition:preserves-inc}).\index{change-of-slice functor!preserves initial components}
\end{enumerate}
Then there is a lax functor $G \cn \C \to \B$ together with
a lax transformation $\eta$ and a strict transformation $\epz$ as below
\[
\eta\cn \Id_\B \to GF \quad \mathrm{\ and\ } \quad \epz\cn FG \to \Id_\C.
\]
Moreover, the $1$-cell components of $\eta$ and $\epz$ are isomorphisms.
\end{theorem}
\begin{proof}
The proof of this result follows from the proof of the bicategorical
Quillen Theorem A, noting the following key differences in the
constructions of $G$, $\epz$, and $\eta$.
\begin{itemize}
\item \Gstep{1b} In the definition of $G^0$ just after
\eqref{Gdef-7}, $\theta_{\ol{1_X}}$ is the identity $2$-cell and
both unitors are identities. Therefore the ice cream cone
condition for $G^0$ is $F(G^0) = F^0$.
\item Likewise, the definition of $G^2$ makes use of \eqref{Gdef-9},
\eqref{Gdef-10}, and associators; all of these
except possibly $F^2$ are identity $2$-cells and therefore the ice
cream cone condition for $G^2$ reduces to $F(G^2) = F^2$.
\item \Gstep{3a} In the definition of $\epz$ we have
$\epz_X = f_{\ol{X}}$ and $\epz_u = \theta_{\ol{u}}$, the first of
which is an isomorphism $1$-cell and the second of which is an identity
$2$-cell. Therefore $\epz$ is a strict transformation whose
components are isomorphisms.
\item Likewise, $\eta_A = [A]$ is an isomorphism because both
$1_{FA}$ and $f_{\ol{FA}}$ are isomorphisms
(cf. \eqref{bracketA}). The $2$-cell $\eta_p$ is uniquely
determined by the two pasting diagrams in \eqref{Gdef-15}. Under
our assumptions, the $2$-cells $\theta_{\eta_A}$,
$\theta_{\ol{Fp}}$, $\upsilon$, and $\theta_{\eta_B}$ in those
diagrams are identities. Therefore we
have
\[
\eta_p \circ F^2_{\ol{Fp}, \eta_A} = F^2_{\eta_B, p}.\qedhere
\]
\end{itemize}
\end{proof}
If $F$ is bijective on $2$-cells and therefore reflects isomorphisms,
the proof of \cref{theorem:Quillen-A-2-cat} shows slightly more.
\begin{lemma}\label{lemma:refinements-Quillen-A-2-cat}
Suppose that $F\cn \B \to \C$ satisfies the hypotheses of
\cref{theorem:Quillen-A-2-cat} and suppose, moreover, that $F$ is
bijective on $2$-cells. Then we have the following additional
refinements of \cref{theorem:Quillen-A-2-cat}.
\begin{itemize}
\item If $F$ is a pseudofunctor, then $G$ is a pseudofunctor and
$\eta$ is a strong transformation.
\item If $F$ is a $2$-functor, then $G$ is a $2$-functor and $\eta$ is a
strict transformation (i.e., both $\eta$ and $\epz$ are $2$-natural isomorphisms).
\end{itemize}
\end{lemma}
Combining \cref{inc-lax-terminal-2-cat,theorem:Quillen-A-2-cat} with
\cref{lemma:refinements-Quillen-A-2-cat}, we have The Whitehead
Theorem for $2$-categories.
\begin{theorem}[Whitehead Theorem for $2$-categories]\label{theorem:whitehead-2-cat}\index{2-categorical!Whitehead Theorem}\index{Theorem!2-Categorical Whitehead}\index{2-equivalence!local characterization}\index{characterization of!a 2-equivalence}\index{2-functor!2-equivalence}
A $2$-functor of $2$-categories $F\cn \B \to \C$ is a $2$-equivalence
if and only if $F$ is
\begin{enumerate}
\item $1$-essentially surjective\index{essentially!surjective!1-} on objects;
\item $1$-fully faithful\index{fully faithful!1-} on $1$-cells;
\item fully faithful\index{fully faithful} on $2$-cells.
\end{enumerate}
\end{theorem}
\section{Exercises and Notes}\label{sec:whitehead-exercises}
\begin{exercise}\label{exercise:lr-lax-slice}
Return to \cref{defprop:lax-slice} and verify that the left and right
unitors described in Step (\ref{it:slice-5}) satisfy the relevant ice cream cone conditions to be $2$-cells
in $F\sdar X$. Hint: both will use the right unity axiom
\eqref{f0-bicat}; one will also use the unity axiom
\eqref{bicat-unity} while the other will use
\cref{bicat-left-right-unity}.
\end{exercise}
\begin{exercise}\label{exercise:inc-lax-components}
Suppose $\lto$ is an object of $\B$ and suppose that for each object
$A \in \B$ there is a $1$-cell $k_A\cn A \to \lto$ that is initial in
$\B(A,\lto)$. Show that the $1$-cells $k_A$ are the components of a lax
transformation $\Id \to \conof{\lto}$.
\end{exercise}
\begin{exercise}\label{exercise:unity-axiom-for-G}
Return to \cref{definition:G} and verify the left and right unity
axioms shown in \eqref{Gdef-14}. Hint: use the unit condition for
inc-lax terminal objects and hypothesis \eqref{QA-hypothesis-2}.
\end{exercise}
\subsection*{Notes}
\begin{note}[Discussion of Literature]
Kelly gives a brief outline of the $\V$-enriched Whitehead
theorem\index{Theorem!enriched Whitehead} in \cite[Section
1.11]{kelly-enriched}, which in particular implies the
$2$-categorical Whitehead Theorem \ref{theorem:whitehead-2-cat}.
The thesis of Schommer-Pries \cite{schommer-pries} proves an analogue
of \cref{theorem:whitehead-bicat} for symmetric monoidal bicategories,
and we follow him in calling this result a bicategorical Whitehead
Theorem.
Gurski \cite[Lemma 3.1]{gurski-biequivalences} gives a short
and direct proof of the bicategorical Whitehead Theorem without a
Quillen A theorem.
\end{note}
\begin{note}[Lax Slices]
The lax slice constructed in \cref{defprop:lax-slice} is similar to
(the opposite of) the \index{bicategory!oplax comma}oplax comma bicategory discussed by Buckley
\cite[Construction 4.2.1]{buckley}. However, the construction there
is given only for pseudofunctors. Moreover, even for pseudofunctors
the (op)lax slice over an object $X$ is not quite the same as the
(op)lax comma over the constant pseudofunctor $\conof{X}$. The
difference arises because unitors in a bicategory are nontrivial. We
saw this arise in the definition of icons, and one expects a
correspondence result like \cref{icon-is-icon}.
\end{note}
\begin{note}[Whitehead's Theorem]
Whitehead's theorem for topological spaces states that a
continuous function between CW complexes is a homotopy equivalence if
and only if it induces a bijection of connected components and also
induces isomorphisms of homotopy groups in all dimensions, for all
choices of basepoint.
To explain this further, suppose $F\cn \B \to \C$ is a pseudofunctor
and consider the following.
\begin{enumerate}
\item Let $\Pi_0(\B)$ denote the equivalence classes of objects, with
two objects being in the same class if and only if there is an
invertible $1$-cell between them. Then $F$ is essentially surjective
if and only if it induces a surjection
\[
\Pi_0(\B) \to \Pi_0(\C).
\]
\item Given two objects $X$ and $Y$, let $\Pi_1(\B;X,Y)$ denote the
isomorphism classes of $1$-cells $X \to Y$. Then $F$ is essentially
full if and only if it induces a surjection
\[
\Pi_1(\B; X,Y) \to \Pi_1(\C; FX, FY)
\]
for all $X$ and $Y$.
\item Given two $1$-cells $f$ and $g$, let $\Pi_2(\C; f,g)$ denote the
set of $2$-cells $f \to g$. Then $F$ is fully faithful if and only if
it induces a bijection
\[
\Pi_2(\B; f,g) \to \Pi_2(\C; Ff, Fg)
\]
for all $f$ and $g$.
\end{enumerate}
Now if $F$ is fully faithful, then it must induce injections on
isomorphism classes of $1$-cells and on adjoint-equivalence classes of
$0$-cells. Moreover, \cref{proposition:equiv-via-isos} implies that two
$0$-cells are equivalent in $\Pi_0(\B)$ if and only if there
is an adjoint equivalence between them. Therefore $F$ is essentially
surjective, essentially full, and fully faithful if and only if it
induces bijections on each of the $\Pi_n$ for $n = 0, 1, 2$.
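For instance, for the one-object bicategory $\Si\M$ of a monoidal category $\M$ considered above, these invariants unpack as follows:
\[
\Pi_0(\Si\M) = \{\ast\} \qquad \text{and} \qquad \Pi_2(\Si\M; x, y) = \M(x,y),
\]
while $\Pi_1(\Si\M; \ast, \ast)$ is the set of isomorphism classes of objects of $\M$.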
It should be noted that, while the $\Pi_n$ are algebraic analogues of
homotopy groups, their relation to the homotopy groups of the
nerve of a bicategory is subtle and generally difficult to compute.
When $\B$ is a groupoid then the homotopy groups of $|N\B|$ vanish
above dimension 2. Moreover, the nonvanishing homotopy groups are
computed by the $\Pi_n$. The essential difference in general is that
paths in a topological space are reversible, while $1$-cells and $2$-cells
in a bicategory are generally not invertible.
\end{note}
\begin{note}[Quillen's Theorems A and B]
Quillen's Theorems A and B give conditions that imply a functor
of categories $F\cn \C \to \D$ induces a homotopy equivalence,
respectively fibration, on geometric realizations of nerves
\cite{quillenKI}. Bicategorical analogues of Quillen's Theorem B have
been discussed in \cite{chr} and depend on a notion of fibration
discussed in \cite{bakovicFib} and \cite{buckley}.
Chiche \cite{chiche} gives a Quillen Theorem A for lax functors of
$2$-categories, making use of a Grothendieck construction to analyze
functors whose lax slices have homotopy equivalent nerves.
We call \cref{theorem:Quillen-A-bicat} Quillen's Theorem A for
Bicategories because it generalizes to bicategories the essential
algebraic content of the original result. Applying a bicategorical
nerve, for example either of the nerves discussed in
\cref{ch:constructions}, one observes that a lax functor satisfying
the hypotheses of \cref{theorem:Quillen-A-bicat} induces a homotopy
equivalence---the transformations $\epz$ and $\eta$ do not need to be
invertible for such a conclusion. See \cite[Proposition 7.1]{ccg}.
The reader might wonder whether a similar result holds under a weaker
hypothesis on the lax slices, for example merely under the assumption
that the lax slices have contractible nerves. In that case, one will
require some other way to obtain compatibility between the lax slices
over different objects; this is crucial to the proof of the lax
associativity axiom for $G$. We leave this to the interested reader.
\end{note}
1109.2510
\section{Introduction}
QKD protocols such as BB84\cite{bb84} and six-state\cite{sixstate1}, as they are defined in the literature, require the alignment of Alice's and Bob's local measurement frames: $X_A=X_B=X$, $Y_A=Y_B=Y$, and $Z_A=Z_B=Z$, where $\{X,Y,Z\}$ stand for the Pauli operators $\{\sigma_X,\sigma_Y,\sigma_Z\}$.
It has been known for some time that alignment can be dispensed with if one is ready to change the physical implementation of the qubit \cite{BGLPS04,TE11,AW07}; but these schemes require complex quantum states and are considered impractical. More recently, it was shown that reference frame independence can be achieved by changing the protocol\cite{LSROB10}, while keeping the same physical implementation. The main idea in the reference frame independent (rfi) protocol is that Alice and Bob share a common well-aligned measurement basis $Z=Z_A=Z_B$ (which is naturally available in many practical implementations~\footnote{Consider, for example, the time basis in a time-bin implementation, the circular polarization basis, or the basis of different paths.}), while the other measurements can be misaligned by an arbitrary but fixed angle $\beta$
\begin{equation}
X_B=\cos\beta X_A+\sin\beta Y_A, \ \ \ Y_B=\cos\beta Y_A-\sin\beta X_A.
\end{equation}
Since the ``natural basis'' is automatically aligned, we have access to the quantum bit error rate in the $Z$ basis,
\begin{equation}
Q=\text{Pr(error)}=\text{Pr(a$\neq$b)}=\frac{1-\av{Z\otimes Z}}{2} .
\end{equation}
However, we also require additional information, namely the parameter $C$, in order to bound Eve's information.
\section{The parameter $C$}
The original rfi protocol introduced the $\beta$-independent parameter $C$,
\begin{equation}
\label{eq:C2}
C=\av{X_A\otimes X_B}^2+\av{X_A\otimes Y_B}^2+\av{Y_A\otimes X_B}^2+\av{Y_A\otimes Y_B}^2 ,
\end{equation}
which is an entanglement witness ($C\leq1$ for separable states and $C=2$ for maximally entangled states), in order to bound Eve's information.
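As an illustrative numerical check (a sketch, not part of the protocol), one can verify for $d=2$ that $C$ is independent of the misalignment angle $\beta$, reaches $2$ on a maximally entangled state, and stays below $1$ on a separable state:

```python
import numpy as np

# Pauli matrices in Alice's frame
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bob_frame(beta):
    """Bob's X and Y axes, misaligned by a fixed angle beta about the shared Z axis."""
    return (np.cos(beta) * X + np.sin(beta) * Y,
            np.cos(beta) * Y - np.sin(beta) * X)

def expval(rho, A, B):
    return np.real(np.trace(rho @ np.kron(A, B)))

def C_param(rho, beta):
    """The witness C: sum of the four squared X/Y correlators."""
    XB, YB = bob_frame(beta)
    return sum(expval(rho, A, B) ** 2 for A in (X, Y) for B in (XB, YB))

# |phi+> = (|00> + |11>)/sqrt(2): maximally entangled
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(phi, phi.conj())

# |00><00|: separable product state
rho_sep = np.diag([1.0, 0, 0, 0]).astype(complex)

for beta in (0.0, 0.7, 2.1):
    assert abs(C_param(rho_bell, beta) - 2.0) < 1e-12  # C = 2, beta-independent
    assert C_param(rho_sep, beta) <= 1.0 + 1e-12       # C <= 1 for separable states
```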
We note that the protocol can be generalized to qudits in the following manner. Denoting by $\{\ket{0},\ket{1},...,\ket{d-1}\}$ the computational basis of the Hilbert space describing a qudit, it is well known that the Pauli operators admit a generalization to higher dimensions known as the Weyl operators, which are unitary operators of the form $X^kZ^\ell$ for $k,\ell\in\{0,1,...,d-1\}$ and
\begin{equation}
Z=\sum_{j=0}^{d-1} \omega^j\proj{j}, \ \ X=\sum_{j=0}^{d-1}\ket{j+1}\bra{j},
\label{eq:def}
\end{equation}
where $\omega=e^{2\pi i/d}$ is a primitive $d$th root of unity and $j+1$ denotes addition modulo $d$. To accommodate a relative unitary rotation around $Z$, let $X_A=UXU^\dagger$ and $X_B=VXV^\dagger$, where $[U,Z]=[V,Z]=0$. In the protocol, Alice and Bob perform projective measurements onto the eigenstates of $X_A^{k_1}Z^{\ell_1}$ and $X_B^{k_2}Z^{\ell_2}$, and from the statistics estimate $Q$ and
\begin{equation}
C = \sum_{k_1,k_2=1}^{d-1} \, \sum_{\ell_1,\ell_2=0}^{d-1} |\langle X_A^{k_1} Z^{\ell_1} \otimes X_B^{k_2} Z^{\ell_2} \rangle |^2
\label{eq:Cd}
\end{equation}
to bound Eve's information. In the appendix we prove that (\ref{eq:Cd}) is a generalization of (\ref{eq:C2}) with all the desired properties: it is independent of the local unitaries $U$ and $V$ mentioned above and is an entanglement witness ($C\leq(d-1)^2$ for separable states and $C=d(d-1)$ for maximally entangled states).
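These properties are easy to verify numerically. The sketch below (an illustration, with the arbitrary choice $d=3$) checks the Weyl commutation relation $ZX=\omega XZ$ and the value $C=d(d-1)$ on a maximally entangled pair of qudits:

```python
import numpy as np

def weyl(d):
    """Generalized Pauli (Weyl) generators X, Z in dimension d."""
    omega = np.exp(2j * np.pi / d)
    Z = np.diag([omega ** j for j in range(d)])
    X = np.roll(np.eye(d, dtype=complex), 1, axis=0)  # |j> -> |j+1 mod d>
    return X, Z

def C_qudit(rho, d):
    """C as in the sum above: k_1, k_2 run over 1..d-1, l_1, l_2 over 0..d-1."""
    X, Z = weyl(d)
    total = 0.0
    for k1 in range(1, d):
        for k2 in range(1, d):
            for l1 in range(d):
                for l2 in range(d):
                    W1 = np.linalg.matrix_power(X, k1) @ np.linalg.matrix_power(Z, l1)
                    W2 = np.linalg.matrix_power(X, k2) @ np.linalg.matrix_power(Z, l2)
                    total += abs(np.trace(rho @ np.kron(W1, W2))) ** 2
    return total

d = 3
X, Z = weyl(d)
omega = np.exp(2j * np.pi / d)
assert np.allclose(Z @ X, omega * X @ Z)  # Weyl commutation relation

# |Phi> = (1/sqrt(d)) sum_j |jj>: maximally entangled qudit pair
phi = np.zeros(d * d, dtype=complex)
phi[::d + 1] = 1 / np.sqrt(d)
rho = np.outer(phi, phi.conj())
assert abs(C_qudit(rho, d) - d * (d - 1)) < 1e-9  # C = d(d-1)
```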
\section{The tomographic approach} The way measurement results are used in the rfi protocol and its generalized version is not optimal, in the sense that the tomographic information deducible from the measurement statistics can be used directly, instead of via the parameter $C$. Estimating $C$ requires knowledge of the $[d(d-1)]^2$ correlators $\langle X_A^{k_1} Z^{\ell_1} \otimes X_B^{k_2} Z^{\ell_2} \rangle$, which can alternatively be used to completely specify the state as\cite{P77}
\begin{equation}
\rho_{AB}=\frac{1}{d^2}\sum_{k_1,k_2,\ell_1,\ell_2=0}^{d-1} \langle X_A^{k_1} Z^{\ell_1} \otimes X_B^{k_2} Z^{\ell_2} \rangle \left( X_A^{k_1} Z^{\ell_1} \otimes X_B^{k_2} Z^{\ell_2} \right)^\dagger
\label{eq:rhoexpression}
\end{equation}
in the measurement bases of Alice $\{X_A^{k_1}Z^{\ell_1}\}$ and Bob $\{X_B^{k_2}Z^{\ell_2}\}$. However, working only from $C$ discards some information that can lead to a tighter bound on Eve's information. Using tomography, one can have an rfi protocol without the need for $C$\cite{LKEKO03}.
Let us explain in detail how that is possible. The most direct approach is to make $d^2$ measurements $X^kZ^\ell$ on each subsystem (Alice has one, Bob the other) and combine the measurement outcomes to find each correlator directly. This is very inefficient because it requires $d^2$ different estimates to be made with good precision, which requires many copies of the state. (Recall that collective attacks are the optimal attack in general for Eve in this scenario~\cite{KGR05,RGK05}.) Also it is unnecessary because many of the Weyl operators have the same set of eigenvectors ($Z^\ell$ for $\ell=0,...,d-1$ for instance); hence the measurement statistics of one can be used to calculate the average values of all the others. In general, the minimum number of measurements needed to completely specify the state is still unknown. However, if $d$ is prime, one can reconstruct the state by making only $d+1$ measurements corresponding to $d+1$ mutually unbiased bases on each subsystem, say $\mathcal{B}=\{Z,X Z^\ell:\ell=0,...,d-1\}$. After the measurements, Alice and Bob can estimate their marginal probability distribution locally, and if they share the measurement outcomes, the joint probability distribution $p(a,b|A\otimes B)$ where $A,B\in\mathcal{B}$. It is well known that the eigenbasis of any $X^kZ^\ell$ is among the eigenbases of observables in $\mathcal{B}$; therefore from $p(a,b|A\otimes B)$ we can compute all the average values using
\begin{equation}
\langle X_A^{k_1} Z^{\ell_1} \otimes X_B^{k_2} Z^{\ell_2} \rangle=\sum_{a,b} \lambda_a\lambda_bp(a,b|A\otimes B),
\end{equation}
where $\lambda_a$ is the eigenvalue associated to the eigenvector representing outcome $a$ of $X_A^{k_1} Z^{\ell_1}$, and $A$ is the operator in $\mathcal{B}$ with the same eigenbasis as $X_A^{k_1} Z^{\ell_1}$; similarly for Bob.
Hence a full state reconstruction is possible by~(\ref{eq:rhoexpression}).
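The reconstruction can be checked numerically. The following sketch rebuilds a randomly drawn two-qubit density matrix from its $d^4$ Weyl correlators; note that each correlator $\langle W\rangle$ is paired with the adjoint operator $W^\dagger$, the pairing required when the Weyl operators are not Hermitian:

```python
import numpy as np

def weyl(d):
    """Weyl generators X, Z in dimension d."""
    omega = np.exp(2j * np.pi / d)
    Z = np.diag([omega ** j for j in range(d)])
    X = np.roll(np.eye(d, dtype=complex), 1, axis=0)
    return X, Z

def reconstruct(rho, d):
    """Rebuild rho from its d^4 Weyl correlators <W1 (x) W2>."""
    X, Z = weyl(d)
    out = np.zeros((d * d, d * d), dtype=complex)
    for k1 in range(d):
        for l1 in range(d):
            for k2 in range(d):
                for l2 in range(d):
                    W = np.kron(
                        np.linalg.matrix_power(X, k1) @ np.linalg.matrix_power(Z, l1),
                        np.linalg.matrix_power(X, k2) @ np.linalg.matrix_power(Z, l2),
                    )
                    # pair the correlator tr(rho W) with the adjoint W^dagger
                    out += np.trace(rho @ W) * W.conj().T
    return out / d ** 2

d = 2
rng = np.random.default_rng(0)
M = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
rho = M @ M.conj().T
rho /= np.trace(rho)  # random two-qubit density matrix
assert np.allclose(reconstruct(rho, d), rho)
```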
Once $\rho_{AB}$ is found relative to the (partially) unaligned frames, it is possible to find local rotations $U_A, U_B$ such that $U_A\otimes U_B\,\rho_{AB}\,U_A^\dagger\otimes U_B^\dagger=\tilde{\rho}_{AB}$, where $\tilde{\rho}_{AB}$ is a diagonal matrix of the eigenvalues of $\rho_{AB}$. This procedure is always possible if Alice's and Bob's marginals are uniformly random; if this is not the case, Alice and Bob can do the randomization themselves. Then $\tilde{\rho}_{AB}$ is a mixture of $d$-dimensional Bell states (maximally entangled states) in the new bases effected by the local rotations. Using the eigenvalues, $\boldsymbol{\lambda}$, of this state, the asymptotic rate will be\cite{SS10}
\begin{equation}
r_\infty = \log{d} - H(\boldsymbol{\lambda}),
\end{equation}
and the finite-key bound can also be calculated from those methods to give
\begin{equation}
r_{N} = \frac{n}{N} \left(\min_{E|\mathbf{P\pm\mu}} H(A|E) - \text{leak}_{\text{EC}}/n - \frac{2}{n} \log \frac{1}{\epsilon_{\text{PA}}}- (2\log d+3) \sqrt{\frac{\log(2/\bar{\epsilon})}{n}}\right) ,
\label{eq:rn}
\end{equation}
where the security parameter is
$
\epsilon = \epsilon_{\text{EC}}+\epsilon_{\text{PA}} + n_{\text{PE}}\epsilon_{\text{PE}} +\bar{\epsilon} ,
$
and $\epsilon_{\text{EC}}$ is the probability of failure of the error correction step, $\epsilon_{\text{PA}}$ the probability of failure of the privacy amplification, and $\epsilon_\text{PE}$ the probability that the estimate of any measured parameter $\mathbf{P}$ is outside a tolerated range $\mu$. The minimization of $H(A|E)$ is done over all attacks of Eve, denoted by $E$, compatible with the observed parameters $\mathbf{P}$ within tolerated fluctuations $\mu$.
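As a purely arithmetic illustration of these formulas, the sketch below evaluates $r_\infty=\log d-H(\boldsymbol{\lambda})$ for a Bell-diagonal eigenvalue vector and the finite-size penalty terms of (\ref{eq:rn}); every parameter value here (the eigenvalues, $n$, $N$, the leakage, the epsilons) is an assumption chosen only for illustration, not taken from the text:

```python
import numpy as np

def shannon(p):
    """Shannon entropy H(p) in bits, ignoring zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Asymptotic rate r_inf = log2(d) - H(lambda) for an assumed
# Bell-diagonal eigenvalue vector (depolarizing-like, q = 1%).
d, q = 2, 0.01
eigs = [1 - 3 * q, q, q, q]
r_inf = np.log2(d) - shannon(eigs)

# Finite-size correction terms of the key-rate formula, with
# assumed (hypothetical) parameter values.
n, N = 10**6, 2 * 10**6          # signals kept for the key / total signals
eps_PA = eps_bar = 1e-10
leak_EC = 0.1 * n                # assumed error-correction leakage
pa_term = (2 / n) * np.log2(1 / eps_PA)
stat_term = (2 * np.log2(d) + 3) * np.sqrt(np.log2(2 / eps_bar) / n)
H_A_E = np.log2(d)               # best case: Eve has full uncertainty
r_N = (n / N) * (H_A_E - leak_EC / n - pa_term - stat_term)

assert 0 < r_inf < np.log2(d)
assert 0 < r_N < r_inf  # finite-size rate below the asymptotic rate
```

As expected, the statistical term decays like $1/\sqrt{n}$, so the finite-size rate approaches the prefactor-limited value as $N$ grows.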
We also note that, by doing tomography, it is possible to relax the assumption of the ``natural basis'', allowing Alice and Bob to have one axis which they know is close to being aligned but is not perfectly aligned. This will lead to an increase in the error rate $Q$. For example, for $d=2$ the $Z$ axes must not differ by more than $41.5^{\circ}$ for $Q$ to be small enough to have any hope of generating the shared randomness required to grow a secret key. However, in many settings it is natural to assume that one basis is aligned.
\section{Bounds from Uncertainty Relations}
Recently, a tighter finite-key bound for the Bennett-Brassard 1984 (BB84) protocol\cite{bb84} was found by Tomamichel et al.~\cite{TLGR11}. For some time there has been the idea of viewing the security of QKD as reducing to suitable uncertainty relations (URs). After a series of works \cite{K06,RB09,BCCRR10}, this program was brought to completion by Tomamichel and Renner~\cite{TR10}, who provided a UR involving the quantity that captures Eve's uncertainty in QKD: the \textit{smooth min-entropy conditional on a quantum observer}. The result is directly applicable to finite-key bounds\cite{TLGR11}.
We can also use this approach instead of the tomographic approach described above to obtain a security bound for the rfi protocols. That is possible by using the complete knowledge of the eigenvalues $\boldsymbol{\lambda}$ to infer the max-entropy in the virtual bases corresponding to the local rotations that optimized Alice and Bob's correlations.
The UR lower bounds Eve's uncertainty on the key,
\begin{equation}
H_{\text{min}}^{\bar{\epsilon}}(\mathbf{Z}|E) + H_{\text{max}}^{\bar{\epsilon}}(\mathbf{X}|B) \geq \log \frac{1}{c},\label{eq:UR}
\end{equation}
where $c$ quantifies the `incompatibility' between the measurements $\mathbf{Z}=Z^{\otimes n}$ and $\mathbf{X}=X^{\otimes n}$. Moreover, as any decent measure of uncertainty, $H_\text{max}$ can only increase under information processing and in particular under Bob's measurement~\cite{TCR10}, so
\begin{equation}
H_{\text{max}}^{\bar{\epsilon}}(\mathbf{X}|B) \leq H_{\text{max}}^{\bar{\epsilon}}(\mathbf{X}|\mathbf{X'})
\end{equation}
where the measurement $\mathbf{X'}=X'^{\otimes n}$ is made on system $B$. Now comes the crucial insight for what follows: the protocol does not have to prescribe the \textit{actual} measurement of $\mathbf{X}$ and $\mathbf{X'}$. Given the observed parameters, we are free to imagine the measurement $\mathbf{X'}$ on Bob's side that makes his uncertainty as small as possible on a \textit{hypothetical} measurement $\mathbf{X}$. The data required for this is easily found once the state is reconstructed from the tomographic data.
Let $X$ and $X'$ be $d$-outcome observables mutually unbiased to $Z$, corresponding to measurements on Alice's and Bob's systems respectively, whence the right-hand side of (\ref{eq:UR}) reduces to $n\log d$. Using a bound for $H_{\text{max}}^{\bar{\epsilon}}$ found using an extension of the method of~\cite{TLGR11},
\begin{equation}
H^{\bar{\epsilon}}_{\text{max}}(\mathbf{X}|\mathbf{X'}) \leq n H(\mathbf{v}_{X\otimes X'}(\mu)),
\end{equation}
it remains to maximize $H\left(\mathbf{v}_{X\otimes X'}(\mu)\right)$, where $\mathbf{v}_{X\otimes X'}(\mu)$ is the vector of probabilities of the different outcomes (the parameters $\mathbf{P}$) in this virtual basis, up to a finite sampling uncertainty $\mu$. In this way, we have been conservative, giving Eve the maximum information on the key compatible with our observed data. The finite-key rate therefore reads
\begin{equation}
r_N \leq \frac{n}{N}\Big(\log d - H(\mathbf{v}_{X\otimes X'}(\mu)) - \text{leak}_{\text{EC}}/n - \frac{2}{n} \log\frac{1}{2 (\epsilon_{\text{PA}}-\bar{\epsilon})}\Big)\,.
\label{eq:fkskr}
\end{equation}
We can maximize this value over the trade-off between the signals devoted to the key and those devoted to parameter estimation, as well as over the choices of $\epsilon_{\text{PE}}$ and $\epsilon_{\text{PA}}$ compatible with $\epsilon_{\text{sec}}$ for a given $\epsilon_{\text{EC}}$; note that $\bar{\epsilon}$ is now a function of $\epsilon_{\text{PE}}$.
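As an illustration, the bound (\ref{eq:fkskr}) is straightforward to evaluate numerically. The sketch below is our own illustration, not part of the original analysis: all parameter values ($n$, $N$, $Q$, the leakage model, and the $\epsilon$'s) are hypothetical, chosen only to show the shape of the computation for $d=2$.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def finite_key_rate(n, N, d, v, leak_EC, eps_PA, eps_bar):
    """Right-hand side of the finite-key bound for r_N (eq. fkskr)."""
    return (n / N) * (np.log2(d) - shannon_entropy(v)
                      - leak_EC / n
                      - (2.0 / n) * np.log2(1.0 / (2.0 * (eps_PA - eps_bar))))

# Illustrative (hypothetical) parameter values for d = 2:
n, N = 8_000, 10_000          # signals kept for the key vs. total signals
Q = 0.01                      # observed error rate
v = [1 - Q, Q]                # toy outcome distribution in the virtual basis
leak_EC = 1.1 * shannon_entropy(v) * n   # simple error-correction leakage model
r = finite_key_rate(n, N, d=2, v=v, leak_EC=leak_EC,
                    eps_PA=1e-10, eps_bar=1e-12)
```

With these toy numbers the bound evaluates to a positive rate well below $\log d$; the finite-size penalty terms vanish as $n\to\infty$, recovering the asymptotic behaviour discussed below.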
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\textwidth]{RFIplotwinset.pdf}
\caption{Finite key rate $r_{N}$ as a function of the number of signals $N$ for dimensions $d=2$ and $3$, computed via the uncertainty relation, equation~(\ref{eq:fkskr}), and via the tomographic approach, equation~(\ref{eq:rn}). The plots are for $\epsilon_{\text{sec}}=10^{-10}$, $\epsilon_{\text{EC}}=10^{-20}$, and $Q=1\%$.}
\label{fig:finite}
\end{center}
\end{figure}
\section{Conclusions} The derivation of the UR is generic: it is tied neither to BB84 nor, for that matter, to QKD. Still, one may surmise that, in the context of QKD, such a UR is of practical use only for two-measurement protocols like BB84 and its higher-dimensional generalizations \cite{CBKG02,SS10}. For the six-state protocol, which uses three measurements, the UR yields the same bound as for BB84, while a better bound is found by taking into account the detailed structure of the states \cite{review2}. Indeed, for the rfi protocol, the bound described in the previous section with $d=2$ is equivalent to the six-state bound, and the asymptotic secret-key bound obtained with the UR technique is correspondingly worse. However, the critical number of signals at which the secret key rate becomes positive is reduced using the UR approach (see Figure~\ref{fig:finite}). This suggests that there should exist a better technique for constructing finite-key bounds that combines the best of both methods, probably by eliminating the need for the smoothing correction term given in Section 3.3.4 of Renner's thesis \cite{rennerthesis}.
\section*{Acknowledgments}
This work was supported by the National Research Foundation and the Ministry of Education, Singapore. We would like to thank an anonymous referee for insightful feedback, and Markus Grassl and Stephanie Wehner for helpful discussions.
\section*{Appendix.}
The essential ingredients in the proof that equation~(\ref{eq:Cd}) generalizes $C$ are twofold: (i) the relation between average values of operators and the Hilbert-Schmidt inner product, namely $\langle O\rangle_\rho=\langle\rho,O\rangle$, where $\langle \cdot,\cdot\rangle$ is the Hilbert-Schmidt inner product defined as $\langle A,B\rangle=\operatorname{Tr}(A^\dagger B)$, and (ii) the fact that the Weyl operators form an orthogonal basis, $\langle X^{k_1}Z^{\ell_1},X^{k_2}Z^{\ell_2}\rangle=d\delta_{k_1,k_2}\delta_{\ell_1,\ell_2}$. We recall the expansion of the inner product in this basis,
\begin{equation}
\langle\rho,\rho\rangle = \frac{1}{d}\mathop{\sum_{k,\ell=0}^{d-1}} \langle\rho,X^kZ^\ell\rangle\langle X^kZ^\ell,\rho\rangle = \frac{1}{d}\mathop{\sum_{k,\ell=0}^{d-1}} |\langle\rho,X^kZ^\ell\rangle|^2.
\end{equation}
To prove that $C$ is invariant with respect to rotations around $Z$, first note that $C$ can be rewritten as
\begin{eqnarray}
C &=& \mathop{\sum_{k_1,k_2=0}^{d-1}}_{\ell_1,\ell_2=0} |\langle X_A^{k_1} Z^{\ell_1} \otimes X_B^{k_2} Z^{\ell_2} \rangle |^2 - \mathop{\sum_{k_2,\ell_2=0}^{d-1}}_{\ell_1=0} |\langle Z^{\ell_1} \otimes X_B^{k_2} Z^{\ell_2} \rangle |^2 \nonumber \\
&-& \mathop{\sum_{k_1,\ell_1=0}^{d-1}}_{\ell_2=0} |\langle X_A^{k_1}Z^{\ell_1} \otimes Z^{\ell_2} \rangle |^2 + \mathop{\sum_{\ell_1,\ell_2=0}^{d-1}} |\langle Z^{\ell_1} \otimes Z^{\ell_2} \rangle |^2
\end{eqnarray}
where the first sum simplifies to $d^2\operatorname{Tr}(\rho_{AB}^2)$. In the second sum we can switch bases from $X_B^{k_2}Z^{\ell_2}$ to $X^{k_2}Z^{\ell_2}$, since both are bases for $\mathcal{L}(\mathbb{C}^d)$, leaving the sum invariant; the same holds for the third sum. The final term is obviously invariant under rotations about $Z$. Therefore $C$ is independent of the local unitaries $U$ and $V$ commuting with $Z$.
To show that $C$ acts as an entanglement witness, consider the product state $\rho_{AB}=\sigma_A\otimes\sigma_B$ for which $C$ factorizes into
\begin{equation}
C = \mathop{\sum_{k_1=1}^{d-1}}_{\ell_1=0} |\langle X_A^{k_1} Z^{\ell_1} \rangle_{\sigma_A} |^2 \mathop{\sum_{k_2=1}^{d-1}}_{\ell_2=0} |\langle X_B^{k_2} Z^{\ell_2} \rangle_{\sigma_B} |^2
\end{equation}
and note that
\begin{equation}
\mathop{\sum_{k_1=1}^{d-1}}_{\ell_1=0} |\langle X_A^{k_1} Z^{\ell_1} \rangle_{\sigma_A} |^2 = d\operatorname{Tr}(\sigma_A^2)-1-\mathop{\sum_{\ell_1=1}^{d-1}} |\langle Z^{\ell_1} \rangle_{\sigma_A} |^2 \leq d-1,
\end{equation}
from which $C\leq(d-1)^2$ for all product states, and hence for all separable states by convexity. Thus if $C>(d-1)^2$ for a particular state, then the state is entangled; the converse, however, is not implied: entangled states can have $C<(d-1)^2$.
Note that $C$ is a sum over tensor products of operators that do not commute with $Z$, the raw key basis. The maximum value of $C$ is only achieved for maximally entangled states. The maximum value that can be obtained with a separable state is $(d-1)^2$, therefore there is a gap between separable states and maximally entangled states that scales linearly with $d$.
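To make the witness concrete, the following sketch (our own illustration, not part of the original text) builds the Weyl operators numerically and evaluates $C$, in the inclusion-exclusion form above with $k_1,k_2\geq 1$, for a maximally entangled state and for a product state. Numerically, the maximally entangled state attains $C=d(d-1)$, while the product state stays below the separable bound $(d-1)^2$, exhibiting the linear-in-$d$ gap.

```python
import numpy as np

def weyl_ops(d):
    """Generalized shift X and clock Z in dimension d: X|j> = |j+1 mod d>, Z|j> = w^j |j>."""
    w = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)
    Z = np.diag(w ** np.arange(d))
    return X, Z

def correlator_C(rho, d):
    """C = sum over k1, k2 >= 1 and all l1, l2 of |<X^k1 Z^l1 (x) X^k2 Z^l2>|^2."""
    X, Z = weyl_ops(d)
    total = 0.0
    for k1 in range(1, d):
        for l1 in range(d):
            A = np.linalg.matrix_power(X, k1) @ np.linalg.matrix_power(Z, l1)
            for k2 in range(1, d):
                for l2 in range(d):
                    B = np.linalg.matrix_power(X, k2) @ np.linalg.matrix_power(Z, l2)
                    total += abs(np.trace(rho @ np.kron(A, B))) ** 2
    return total

d = 3
phi = np.zeros(d * d, dtype=complex)
phi[:: d + 1] = 1 / np.sqrt(d)                # |Phi> = sum_i |ii> / sqrt(d)
rho_ent = np.outer(phi, phi.conj())
e0 = np.zeros(d); e0[0] = 1.0
rho_prod = np.kron(np.outer(e0, e0), np.outer(e0, e0))   # |00><00|

C_ent = correlator_C(rho_ent, d)              # d(d-1), above the bound (d-1)^2
C_prod = correlator_C(rho_prod, d)            # 0, below the separable bound
```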
\bibliographystyle{plain}
\section{Introduction}
Developing the theory of quantum measurements, von Neumann \cite{Neumann_1}
mentioned that the process of quantum measurements is analogous to decision
making. The formal analogy between these processes has been described in
several mathematical works \cite{Benioff_2,Holevo_3,Holevo_4,Holevo_5}.
However, this analogy remained rather formal, without comparing quantum
measurements with real decision making, as done by humans. Several important
questions have not been answered:
(i) First of all, in what sense can human decision making be characterized
by quantum measurements?
(ii) What would be a general scheme for describing measurements under
uncertainty and decisions under uncertainty?
(iii) How can one correctly define a quantum probability of non-commuting
events, for both quantum measurements and quantum decisions?
(iv) What is the role of entanglement in quantum measurements and in quantum
decision making?
(v) When should decision making be treated by quantum rules, and when is it
sufficient to use classical theory?
In this report, we present a general approach and mathematical techniques
common to quantum measurements and decision making that provides natural
answers to these questions. The reason for developing a common quantum
approach to measurements and to decision making is twofold. First, quantum
theory provides tools for taking into account behavioral biases in human
decision making \cite{Yukalov_6,Yukalov_7,Yukalov_8}. Note that the possibility
of describing cognition effects by means of quantum theory was suggested by Bohr
\cite{Bohr_9,Bohr_10}. Second, formulating a quantum theory of decision making
defines the main directions for creating artificial quantum intelligence
\cite{Yukalov_11,Yukalov_12}.
\section{Operationally testable events}
Let us first briefly recall the definition of quantum probabilities for
measurements corresponding to operationally testable events and their
connection to simple events in decision making
\cite{Yukalov_6,Yukalov_7,Yukalov_8}.
The operator of an observable $\hat{A}$ is a self-adjoint operator, whose
eigenvectors are given by the eigenproblem
\be
\label{1}
\hat A | n \rgl = A_n | n \rgl \; ,
\ee
forming a complete basis. The closed linear envelope of this basis composes
a Hilbert space $\mathcal{H}_A \equiv {\rm span}\{\vert n \rangle\}$. The measurement
of an eigenvalue $A_n$ is an event that we denote by the same letter. This
event is represented by a projector $\hat{P}_n$ according to the correspondence
\be
\label{2}
A_n \ra \hat P_n \equiv | n \rgl \lgl n | \; .
\ee
The system statistical state, or the decision maker strategic state, is given
by a statistical operator $\hat{\rho}_A$. The probability of an event $A_n$ is
\be
\label{3}
p(A_n) \equiv {\rm Tr}_A \hat \rho_A \hat P_n \equiv \lgl \hat P_n \rgl \; .
\ee
In the theory of measurements or decision theory, the set
$\mathcal{A} \equiv \{\hat{P}_n\}$ of projectors plays the role of the algebra
of observables, with the expected value (3) being the event probability.
In eigenproblem (1), a nondegenerate spectrum is tacitly assumed. If the
spectrum is degenerate, then in the eigenproblem
\be
\label{4}
\hat A | n_j \rgl = A_n | n_j \rgl
\ee
an eigenvalue $A_n$ corresponds to several eigenvectors $\vert n_j \rangle$,
where $j = 1,2,\ldots$. The Hilbert space is again spanned by these
eigenvectors, $\mathcal{H}_A \equiv {\rm span}\{\vert n_j \rangle\}$. Now an event
of measuring $A_n$ is associated with the projector
\be
\label{5}
\hat P_n \equiv \sum_j \hat P_{n_j} \qquad
( \hat P_{n_j} \equiv | n_j \rgl \lgl n_j | ) \; .
\ee
The event probability is then defined as in Eq.~(3).
A basis formed by the eigenvectors of a self-adjoint operator is complete,
because a self-adjoint operator is normal: the eigenvectors of any normal
operator, satisfying $\hat{A}^+ \hat{A} = \hat{A} \hat{A}^+$, form a
complete basis. Normal operators include self-adjoint and unitary operators.
In the case of degenerate spectra, the basis may not be uniquely defined. In
quantum measurements, this does not lead to any problem of principle, since
one can use the summed projector (5). But in decision theory the question
arises: in what sense can a single event $A_n$ correspond to several projectors
$\hat{P}_{n_j}$, given that an operationally testable event either happens or
does not?
Fortunately, this problem of nonuniqueness is easily avoidable, both in the
theory of quantum measurements and in decision theory. This is done by invoking
the von Neumann suggestion of degeneracy lifting \cite{Neumann_1}. For this
purpose, the operator of the observable $\hat{A}$ is slightly shifted by an
infinitesimal term,
\be
\label{6}
\hat A \ra \hat A + \nu \hat\Gm \qquad (\nu \ra 0 ) \; ,
\ee
where $\hat{\Gamma}$ is an operator lifting the symmetry responsible for the
spectrum degeneracy. Then the operator spectrum splits into the set
\be
\label{7}
A_n \ra A_n + \nu \Gm_{n_j} \; ,
\ee
thus removing the degeneracy. Finally, the event probability is defined as
\be
\label{8}
p(A_n) = \lim_{\nu\ra 0} p(A_n + \nu \Gm_{n_j} ) \; .
\ee
In that way, the degeneracy is avoided, and one always deals with a unique
correspondence between eigenvalues, representing events, and eigenvectors.
This procedure is also analogous to the Bogolubov method of quasi-averages,
breaking the symmetry of statistical systems by introducing infinitesimal
sources \cite{Bogolubov_13,Bogolubov_14}.
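The degeneracy-lifting prescription (6)--(8) can be checked numerically: perturbing a degenerate observable by $\nu\hat\Gamma$ splits the level, and the summed probability of the split sublevels reproduces the probability given by the summed projector (5). Below is a minimal sketch with matrices of our own choosing (a $3\times 3$ observable with a doubly degenerate eigenvalue); it is an illustration, not an example from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random orthogonal basis; observable with a doubly degenerate eigenvalue A_1 = 1.
Qm, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A = Qm @ np.diag([1.0, 1.0, 2.0]) @ Qm.T        # degenerate spectrum {1, 1, 2}
Gamma = Qm @ np.diag([0.0, 1.0, 0.0]) @ Qm.T    # lifts the symmetry, as in (6)
P1 = Qm @ np.diag([1.0, 1.0, 0.0]) @ Qm.T       # summed projector (5) for A_1 = 1

G = rng.normal(size=(3, 3))
rho = G @ G.T
rho /= np.trace(rho)                            # statistical operator

p_exact = np.trace(rho @ P1)                    # probability via the summed projector

# Degeneracy lifting (6)-(8): perturb A and sum the split-level probabilities.
nu = 1e-6
_, V = np.linalg.eigh(A + nu * Gamma)           # eigenvalues ~ 1, 1 + nu, 2
p_lifted = sum(np.trace(rho @ np.outer(V[:, j], V[:, j])) for j in (0, 1))
```

As $\nu\to 0$ the split eigenvectors remain a basis of the degenerate subspace, so $p_{\rm lifted}$ agrees with $p_{\rm exact}$ regardless of how the subspace is resolved.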
The union of mutually orthogonal (incompatible) events is represented by
the projector sum:
\be
\label{9}
\bigcup_n A_n \ra \sum_n \hat P_n \qquad
( \hat P_m \hat P_n = \dlt_{mn} \hat P_n ) \; .
\ee
The probability of such a union is additive:
\be
\label{10}
p \left ( \bigcup_n A_n \right ) = \sum_n p( A_n) \; .
\ee
In the definition of the quantum probability (3), the averaging is done
with a statistical operator $\hat{\rho}_A$, generally implying a mixed
state. In some special cases, a quantum system can be prepared in a pure
state described by a wave function $\vert \psi \rangle$, which corresponds
to the statistical operator $\vert \psi \rangle \langle \psi \vert$. The
setups of experiments with physical systems and with decision makers are
quite different. When performing quantum measurements, we may encounter two
types of systems: open and quasi-isolated.
An {\it open} system interacts with its surroundings, keeps information on the
preparation of its initial state and, generally, can feel past interactions
through retarded memory effects. Even if, at a given time, the interactions
with its surroundings can be neglected, the system remains open if it
possesses the memory of its preparation and of past interactions. In addition,
there can exist quantum statistical correlations making the system entangled
with its surroundings.
A physical system is {\it quasi-isolated} when it is isolated from interactions
with its surroundings, has no retarded memory effects, and is not entangled
with its surroundings through quantum statistical correlations. Such a system
can be conditionally treated as isolated for a short interval of time. However,
to confirm its isolation, one needs to perform measurements, at least
nondestructive ones, which perturb the system, making it not absolutely
isolated, but quasi-isolated \cite{Yukalov_15,Yukalov_16,Yukalov_17}.
Contrary to physical systems, a decision maker, even when isolated from the
surrounding society, always keeps memory and information of many previous
interactions. Therefore a decision maker must always be considered an open
system.
\section{Composite separable events}
When one deals with not just a single event, but with several events, the
situation becomes more involved. This especially concerns the measurements
of noncommuting observables, whose operators do not commute with each other
\cite{Gudder_18}. Correctly defining the quantum probability of such events is
a long-standing problem. It has been shown \cite{Yukalov_19} that the
L\"{u}ders probability \cite{Luders_20} of consecutive measurements is
a transition probability between two quantum states and cannot be accepted
as a quantum extension of the classical conditional probability. Also, the
Wigner distribution \cite{Wigner_21} is just a weighted L\"{u}ders probability
and cannot be treated as a quantum extension of the classical joint
probability. The correct and most general definition of a quantum joint
probability can be given \cite{Yukalov_19} by employing the Choi-Jamiolkowski
isomorphism \cite{Choi_22,Jamiolkowski_23}, which expresses channel-state
duality.
Composite events, composed of two or more single events, can be separable
or entangled. First, we consider separable events.
Consider measurements involving two observables associated with the operators
$\hat{A}$ and $\hat{B}$, with the related eigenproblems
\be
\label{11}
\hat A | n \rgl = A_n | n \rgl \; , \qquad
\hat B | \al \rgl = B_\al | \al \rgl \; .
\ee
As above, we shall denote the events of measuring the eigenvalues $A_n$ and
$B_\alpha$ by the same letters, respectively. To each event, we put into
correspondence the appropriate projectors,
\be
\label{12}
A_n \ra \hat P_n \equiv | n \rgl \lgl n| \; , \qquad
B_\al \ra \hat P_\al \equiv | \al \rgl \lgl \al | \; .
\ee
Constructing two Hilbert space copies
\be
\label{13}
{\cal H}_A \equiv {\rm span} \{ | n \rgl \} \; , \qquad
{\cal H}_B \equiv {\rm span} \{ | \al \rgl \} \; ,
\ee
we define the algebras of observables
\be
\label{14}
{\cal A} \equiv \{ \hat P_n \} \; , \qquad
{\cal B} \equiv \{ \hat P_\al \} \; ,
\ee
acting on the corresponding Hilbert spaces. The composite algebra
$\mathcal{A} \bigotimes \mathcal{B}$ acts on the tensor-product space
$\mathcal{H}_A \bigotimes \mathcal{H}_B$. The system statistical state
(decision-maker strategic state) also acts on the space
$\mathcal{H}_A \bigotimes \mathcal{H}_B$. The joint probability of events,
corresponding to the observables from the algebra
$\mathcal{A} \bigotimes \mathcal{B}$, is defined as
\be
\label{15}
p \left ( {\cal A} \bigotimes {\cal B} \right ) \equiv
{\rm Tr}_{AB} \; \hat\rho {\cal A} \bigotimes {\cal B} \; ,
\ee
with the trace over $\mathcal{H}_A \bigotimes \mathcal{H}_B$.
For any two operators from the algebra $\mathcal{A}$, it is possible to
introduce the Hilbert-Schmidt scalar product that is a map
\be
\label{16}
\sgm_A : \; {\cal A} \times {\cal A} \ra \mathbb{C} \; .
\ee
Thus, for the operators $\hat{A}_1$ and $\hat{A}_2$ from the algebra
$\mathcal{A}$, the scalar product reads as
\be
\label{17}
(\hat A_1 , \hat A_2) \equiv {\rm Tr}_{A} \hat A_1^+ \hat A_2 \; ,
\ee
with the trace over $\mathcal{H}_A$. The operators from the algebra
$\mathcal{A}$, acting on the Hilbert space $\mathcal{H}_A$, and complemented
with the scalar product $\sigma_A$, form the Hilbert-Schmidt operator space
\be
\label{18}
\widetilde {\cal A} \equiv \{ {\cal A} , {\cal H}_A , \sgm_A \} \; .
\ee
Similarly, one defines the Hilbert-Schmidt space
\be
\label{19}
\widetilde {\cal B} \equiv \{ {\cal B} , {\cal H}_B , \sgm_B \} \; .
\ee
The tensor-product of the above Hilbert-Schmidt spaces forms the space
\be
\label{20}
\widetilde{\cal{C}} \equiv
\widetilde{\cal{A}} \bigotimes \widetilde{\cal{B}} \; .
\ee
An operator $\hat{C}$ acting on $\widetilde{\mathcal{C}}$ is called {\it separable}
if and only if it can be represented as a sum
\be
\label{21}
\hat C = \sum_i \hat A_i \bigotimes \hat B_i \qquad
(\hat A_i \in \widetilde{\cal A} , \; \hat B_i \in \widetilde{\cal B} ) \; .
\ee
A composite event is named a prospect. The prospect $A_n \bigotimes B_\alpha$
corresponds to the prospect operator
\be
\label{22}
\hat P\left ( A_n \bigotimes B_\al \right ) =
\hat P_n \bigotimes \hat P_\al \; .
\ee
The prospect operator (22) is evidently separable. Therefore the corresponding
prospect $A_n \bigotimes B_\alpha$ is also called separable. The related
prospect probability reads
\be
\label{23}
p \left ( A_n \bigotimes B_\al \right ) =
{\rm Tr}_{AB} \; \hat\rho \hat P_n \bigotimes \hat P_\al \; ,
\ee
with the trace over $\mathcal{H}_A \bigotimes \mathcal{H}_B$.
More generally, the prospect $A_n \bigotimes \bigcup_\alpha B_\alpha$, where
$\bigcup_\alpha B_\alpha$ is a union of mutually orthogonal events, corresponds
to the prospect operator
\be
\label{24}
\hat P \left ( A_n \bigotimes \bigcup_\al B_\al \right ) =
\sum_\al \hat P_n \bigotimes \hat P_\al \; .
\ee
This operator is separable, and the related prospect probability
\be
\label{25}
p \left ( A_n \bigotimes \bigcup_\al B_\al \right ) =
\sum_\al p\left ( A_n \bigotimes B_\al \right )
\ee
is additive with respect to the events $A_n \bigotimes B_\alpha$.
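A short numerical sketch of the separable-prospect probabilities (23) and their additivity (25) follows. It is our own illustration (a random statistical state on $\mathcal{H}_A\bigotimes\mathcal{H}_B$, hypothetical dimensions): the union over $B_\alpha$ reproduces the marginal probability of $A_n$ alone.

```python
import numpy as np

rng = np.random.default_rng(2)
dA, dB = 2, 3

# Random statistical state on H_A (x) H_B (illustrative).
G = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
rho = G @ G.conj().T
rho /= np.trace(rho).real

def proj(d, i):
    """Rank-one projector |i><i| in dimension d."""
    e = np.zeros(d); e[i] = 1.0
    return np.outer(e, e)

# Prospect probabilities (23): p(A_n (x) B_alpha) = Tr rho (P_n (x) P_alpha).
p = np.array([[np.trace(rho @ np.kron(proj(dA, n), proj(dB, a))).real
               for a in range(dB)] for n in range(dA)])

# Additivity (25): p(A_n (x) U_alpha B_alpha) = sum_alpha p(A_n (x) B_alpha),
# which equals the probability of A_n alone, Tr rho (P_n (x) 1_B).
p_union = p.sum(axis=1)
marginal = np.array([np.trace(rho @ np.kron(proj(dA, n), np.eye(dB))).real
                     for n in range(dA)])
```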
\section{Composite entangled events}
An operator $\hat{C}$ from the Hilbert-Schmidt space (20) is termed
{\it entangled}, or non-separable, if it cannot be represented as sum (21),
so that
\be
\label{26}
\hat C \neq \sum_i \hat A_i \bigotimes \hat B_i \qquad
( \hat A_i \in \widetilde{\cal A} , \; \hat B_i \in \widetilde{\cal B} ) \; .
\ee
The appearance of entangled events, corresponding to entangled operators,
is connected with the existence of operationally nontestable measurements,
also called uncertain, incomplete, indecisive, or inconclusive measurements.
Correspondingly, in decision making one deals with uncertain events.
Let us define an uncertain event $B$ as a set of possible events $B_\alpha$,
characterized by weights $|b_\alpha|^2$,
\be
\label{27}
B = \{ B_\al : \; \al = 1,2, \ldots \} \; .
\ee
Since the uncertain event is not operationally testable, it is not required
that the weights $|b_\alpha|^2$ sum to one. The uncertain-event
operator is
\be
\label{28}
\hat P(B) = | B \rgl \lgl B | \; ,
\ee
with the state
\be
\label{29}
| B \rgl = \sum_\al b_\al | \al \rgl
\ee
that is not necessarily normalized to one. The uncertain-event operator (28),
which can be written as
\be
\label{30}
\hat P(B) = \sum_{\al\bt} b_\al b_\bt^* | \al \rgl \lgl \bt | \; ,
\ee
is not a projector onto a subspace that would correspond to a degenerate
spectrum, because it does not have the form (5). Moreover, it is not a
projector at all, since it is not necessarily idempotent,
$$
\hat P^2(B) = \lgl B | B \rgl \hat P(B) \neq \hat P(B) \; ,
$$
since state (29) is not, generally, normalized to one.
A composite event, formed of an operationally testable event $A_n$ and an
uncertain event (27), is the uncertain prospect
\be
\label{31}
\pi_n = A_n \bigotimes B \; ,
\ee
whose prospect operator is
\be
\label{32}
\hat P(\pi_n) \equiv | \pi_n \rgl \lgl \pi_n | =
\sum_\al | b_\al|^2 \hat P_n \bigotimes \hat P_\al +
\sum_{\al\neq\bt} b_\al b_\bt^* \hat P_n \bigotimes | \al \rgl \lgl \bt | \; .
\ee
This operator is not separable in the Hilbert-Schmidt space (20), because
the operator $\vert \alpha \rangle \langle \beta \vert$ does not belong to
space (19), which is composed of the projectors $\hat{P}_\alpha$. Hence,
operator (32) is entangled, and the corresponding prospect (31) is termed
entangled.
The prospect operators are assumed to satisfy the resolution of unity
\be
\label{33}
\sum_n \hat P(\pi_n) = \hat 1 \; ,
\ee
where $\hat{1}$ is the identity operator in the space
$\mathcal{H}_A \bigotimes \mathcal{H}_B$. But these prospect operators are
not necessarily orthogonal, since
$$
\hat P(\pi_m) \hat P(\pi_n) =
\lgl \pi_m | \pi_n \rgl | \pi_m \rgl \lgl \pi_n | \; ,
$$
and they are not idempotent, since
$$
\hat P^2(\pi_n) = \lgl \pi_n | \pi_n \rgl \hat P(\pi_n) \; .
$$
That is, they are not projectors. The family $\{\hat{P}(\pi_n)\}$ of such
positive operators, obeying the resolution of unity (33), is called a
{\it positive operator-valued measure} \cite{Benioff_2,Holevo_4,Holevo_5}.
For a given lattice $\{\pi_n\}$ of prospects, the prospect probabilities
\be
\label{34}
p(\pi_n) = {\rm Tr} \hat\rho \hat P(\pi_n) \; ,
\ee
where the trace is over $\mathcal{H}_A \bigotimes \mathcal{H}_B$, satisfy
the conditions
\be
\label{35}
\sum_n p(\pi_n) = 1 \; , \qquad 0 \leq p(\pi_n) \leq 1 \; ,
\ee
making the set $\{p(\pi_n)\}$ a probability measure.
The analysis of the prospect probability (34) results in the following
properties \cite{Yukalov_24,Yukalov_25,Yukalov_26}. The probability can be
written in the form
\be
\label{36}
p(\pi_n) = f(\pi_n) + q(\pi_n) \; .
\ee
The first term,
\be
\label{37}
f(\pi_n) =
\sum_\al | b_\al |^2 p \left ( A_n \bigotimes B_\al \right ) \; ,
\ee
corresponds to classical probability, possessing the features
\be
\label{38}
\sum_n f(\pi_n) = 1 \; , \qquad 0 \leq f(\pi_n) \leq 1 \; .
\ee
The classical term (37) is an objective quantity reflecting the given
properties of the prospect. The second term,
\be
\label{39}
q(\pi_n) =
\sum_{\al\neq\bt} b_\al b_\bt^* \lgl n \al | \hat\rho | n \bt \rgl \; ,
\ee
is purely quantum, being caused by interference and coherence effects. In the
theory of measurements, this quantum term is called the interference, or
coherence, factor; in decision theory it is called the attraction factor, since
it characterizes the subjective attractiveness of different prospects to the
decision maker.
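The decomposition (36)--(39) can be verified directly: expanding the prospect operator (32) in the product basis splits $\mathrm{Tr}\,\hat\rho\hat P(\pi_n)$ into the diagonal term (37) and the off-diagonal interference term (39). The sketch below, an illustration of our own with a random pure state and hypothetical amplitudes $b_\alpha$, checks $p(\pi_n)=f(\pi_n)+q(\pi_n)$ numerically.

```python
import numpy as np

rng = np.random.default_rng(3)
dA, dB = 2, 2

# Random (generically entangled) pure statistical state on H_A (x) H_B.
psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

b = np.array([0.8, 0.5])        # hypothetical amplitudes b_alpha (need not sum to one)

def e(d, i):
    v = np.zeros(d); v[i] = 1.0
    return v

p_list, f_list, q_list = [], [], []
for n in range(dA):
    # Prospect operator (32): P(pi_n) = |pi_n><pi_n| with |pi_n> = |n> (x) |B>.
    pi_vec = np.kron(e(dA, n), b)
    P_pi = np.outer(pi_vec, pi_vec.conj())
    p_n = np.trace(rho @ P_pi).real                    # prospect probability (34)

    # Classical term (37): sum_alpha |b_alpha|^2 p(A_n (x) B_alpha).
    f_n = sum(abs(b[a]) ** 2 * abs(np.kron(e(dA, n), e(dB, a)) @ psi) ** 2
              for a in range(dB))

    # Interference term (39): sum over alpha != beta of b_a b_b* <n a|rho|n b>.
    q_n = sum((b[a] * np.conj(b[bt]) *
               (np.kron(e(dA, n), e(dB, a)) @ rho @ np.kron(e(dA, n), e(dB, bt)))).real
              for a in range(dB) for bt in range(dB) if a != bt)

    p_list.append(p_n)
    f_list.append(f_n)
    q_list.append(q_n)
```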
According to the {\it quantum-classical correspondence principle}
\cite{Bohr_27}, when quantum effects disappear, quantum theory should reduce
to classical theory, which implies
\be
\label{40}
p(\pi_n) \ra f(\pi_n) \; , \qquad q(\pi_n) \ra 0 \; .
\ee
In general, the reduction of quantum measurements to their classical
counterparts is called decoherence \cite{Zurek_28}.
The quantum factor (39) varies in the range
\be
\label{41}
-1 \leq q(\pi_n) \leq 1
\ee
and satisfies the {\it alternation law}
\be
\label{42}
\sum_n q(\pi_n) = 0 \; .
\ee
For a large number of considered prospects $N$, we get the {\it quarter law}
\be
\label{43}
\lim_{N\ra\infty} \; \frac{1}{N} \sum_{n=1}^N | q(\pi_n) | =
\frac{1}{4} \; .
\ee
A well-known example of interference arising in measurements is provided
by the double-slit experiment \cite{Neumann_1}. The passage of a particle
through one of two slits corresponds to prospect (31). The operationally
testable event $A_n$ is the registration of the particle by the
$n$-th detector, while the passage through one of the two slits, either $B_1$
or $B_2$, is described by the uncertain event $B = \{B_1, B_2\}$.
The quantum term is not always nonzero \cite{Yukalov_29}. For it to be nonzero,
two necessary conditions must be satisfied. The first condition is
that the considered prospect be entangled, as described in Sec. 4. The second
necessary requirement is the entanglement of the statistical operator. The
latter is entangled when, e.g.,
\be
\label{44}
\hat\rho \neq \hat\rho_A \bigotimes \hat\rho_B \; ,
\ee
where
$$
\hat\rho_A \equiv {\rm Tr}_B \hat\rho \; , \qquad
\hat\rho_B \equiv {\rm Tr}_A \hat\rho \; ,
$$
that is, when the statistical state of a composite system is not a product
of its subsystem states. More generally, the system state is entangled
if it admits no separable representation, so that
\be
\label{45}
\hat\rho \neq
\sum_\al \lbd_\al \hat\rho_{\al A} \bigotimes \hat\rho_{\al B} \; ,
\ee
where
$$
\sum_\al \lbd_\al = 1 \; , \qquad 0 \leq \lbd_\al \leq 1 \; .
$$
Condition (45) is necessary, but not sufficient for the occurrence of a nonzero
term (39).
In quantum theory, entanglement is a well known notion. In decision theory, it
corresponds to the correlations between different possible events that are
perceived by the decision maker.
\section{Conclusion}
We have shown that quantum measurements and quantum decision making can be
described by the same mathematical tools. In both these cases, the problem
of degeneracy can be avoided by employing the von Neumann method of degeneracy
lifting, which is analogous to the Bogolubov quasi-averaging procedure. The
correct definition of quantum probabilities of composite events, called
prospects, is given by using the Choi-Jamiolkowski isomorphism. This allows us
to describe any type of composite events, including those corresponding to
noncommutative observables.
The notion of uncertain events and measurements makes it feasible to give a
general scheme for describing measurements under uncertainty and decisions
under uncertainty. This is done by means of positive operator-valued measures.
Prospects are classified into two principally different types, separable and
entangled, depending on the structure of the related prospect operators in
the Hilbert-Schmidt space.
The appearance of a quantum interference term in the quantum probability of
composite events is shown to require two necessary conditions, prospect
entanglement and statistical state entanglement.
Classical measurements and classical decision making are particular cases
of the corresponding quantum counterparts. The reduction of quantum
measurements and decision making to the classical limit occurs when the
interference term becomes zero.
The investigation of the analogies between quantum measurements and quantum
decision making suggests the ways of creating artificial quantum intelligence
\cite{Yukalov_11,Yukalov_12} and gives keys for better understanding of
quantum effects in self-organization of complex systems \cite{YS_30}.
\section{INTRODUCTION\label{introduction}}
The occurrence of Active Galactic Nuclei (AGNs) in Luminous Infrared Galaxies (LIRGs) has been extensively discussed, since there are many lines of evidence for a direct and causal relationship between the two.
LIRGs emit predominantly in the far infrared (FIR), and by definition their infrared (IR) luminosities at 8--1000~$\mu$m satisfy $11< \log L_{\rm IR}/{\rm L}_{\rm \odot} <12$ \citep{sanders96}.
Although the bulk of the energy in LIRGs is due to intense star formation activity \citep{sanders96}, AGNs are often found among them.
Also, a significant fraction of LIRGs are tidally interacting objects.
The fraction of AGN-dominated sources is known to be higher in LIRGs with larger IR luminosity (e.g., \citealt{veilleux95,kim95,nardini08,nardini10,petric11,iwasawa11,ah12}) and in later interaction stages \citep{petric11,iwasawa11}.
Both mid-infrared (MIR) and X-ray studies are very effective in revealing nuclear activity in LIRGs.
Although LIRGs, in particular their nuclei, are often heavily absorbed at optical and near-infrared (NIR) wavelengths, both MIR and hard X-ray photons can penetrate such an obscuring interstellar medium.
AGN signatures can be studied at MIR because AGNs typically show a characteristic power-law-like spectral energy distribution (SED), which often shows an excess at MIR over the SED from warm and cold dust (e.g., \citealt{granato97,ah01,nenkova02,armus07,donley12}).
Various kinds of AGN diagnostics have been developed at MIR, and empirical assessments of relative AGN contribution to the entire system have been made \citep{genzel98,strum02,armus07,spoon07,desai07,petric11,stierwalt13}.
AGNs can also be studied in X-rays because they show luminous and characteristic power-law spectra, the hard part of which suffers less absorption by circum-AGN material.
Therefore, X-ray observations have been heavily utilized to study AGNs in LIRGs (e.g., \citealt{iwasawa11}).
NGC~3256 is the most luminous LIRG in the local universe ($z<0.01$).
At a distance of 35~Mpc (for consistency with \citealt{sakamoto14}, which adopted \citealt{sanders03}\footnote{\cite{sanders03} adopted distance with $cz$ using the cosmic attractor model outlined in Appendix A of \cite{mould00}, using $H_{\rm 0}=75$ km s$^{-1}$ Mpc$^{-1}$ and adopting a flat cosmology in which $\Omega_{\rm M}=0.3$ and $\Omega_{\rm \lambda}=0.7$.}), its IR luminosity ($L_{\rm IR}$) is as large as $3.6 \times 10^{11}$ $\rm {L}_{\rm \odot}$ \citep{sanders03}, or total IR luminosity at 3--1100~$\mu$m (TIR) is (2.7--3.3)$ \times 10^{11}$ $\rm{L}_{\rm \odot}$(\citealt{engelbracht08} converted for our assumed distance).
It is also among the most luminous X-ray sources without a confirmed AGN in the local universe \citep{moran99,lira02,ps11}.
It is a major merging system showing tidally distorted morphology at galaxy scale with tidal tails, disturbed outer spiral arms, and prominent dust lanes with complex morphology (e.g., \citealt{graham84,kotilainen96,zepf99,lipari00,ah02,english03,ah06a,ah06b,sakamoto14}).
Its double nuclei, often referred to as the Northern and Southern nuclei (hereafter, N and S nuclei, respectively), are separated by only 5\arcsec~(850~pc).
They are clearly visible at both radio and MIR wavelengths (e.g., \citealt{norris95,kotilainen96,neff03,ah06b,lira08,sakamoto14}), but the S nucleus is hidden by dust in the optical.
The merger is likely in the late stage just before coalescence of the nuclei \citep{goals,stierwalt13}.
The N nucleus is a core of a starburst galaxy.
Its optical spectrum shows H~{\sc ii} region-like features.
Extended outflowing ionized gas with shocks is also detected and attributed to a superwind powered by the starburst \citep{moran99,lipari00,ifu1,ifu2}.
The stellar population examined in the $K$ band indicates a young starburst population \citep{doyon94}.
Prominent Polycyclic Aromatic Hydrocarbon (PAH) features as well as low-ionization fine structure lines have been detected at MIR \citep{siebenmorgen04,mh06,lira08,ps10,ds10}, also indicating star formation activity.
The nucleus is spatially resolved with {\it HST} at 1.6~$\mu$m at 0\farcs 14 full-width half maximum (FWHM) resolution \citep{ah02}, the 8.1~m Gemini Telescope at 8.74~$\mu$m at its diffraction-limit (0\farcs 30 FWHM) resolution \citep{ah06b}, and {\it Chandra} at X-ray at 0\farcs 5 FWHM resolution \citep{lira02}.
The presence of an AGN at the optically obscured S nucleus has long been suspected (e.g., \citealt{kotilainen96}), but there is still no firm evidence for it.
Dust lanes cover the S nucleus and make it invisible at optical and NIR wavelengths below $K$ band.
A compact (unresolved at 0\farcs 5 FWHM) and moderately absorbed (neutral Hydrogen column density $N_{\rm H} \simeq 5 \times 10^{22}$ cm$^{-2}$) X-ray source was detected with {\it Chandra} at this nucleus \citep{lira02}.
Although it looks much fainter (by at least two orders of magnitude) than expected for a classical Seyfert nucleus given its MIR luminosity, it is brighter than expected for typical starburst galaxies (\citealt{lira02}; see also \citealt{awaki91,turner97,guainazzi05,fukazawa11}).
Analysis of the MIR SED extracted within a kpc-scale aperture (including both nuclei and the host galaxy component) indicated that AGN contribution to the total MIR luminosity is $<5$\% \citep{ah12}.
\cite{sakamoto14} recently discovered a bipolar collimated jet-like molecular-gas outflow from the S nucleus, and argued that it is likely driven by an AGN.
Due to its proximity, NGC~3256 is among the best targets to examine the characteristics and roles of AGN, if any, within a LIRG system.
For this purpose, it is essential to verify the presence of an AGN in the S nucleus using spatially resolved and/or sensitive MIR and X-ray observations (e.g., \citealt{lira02,ds10b,ds11,ps10}).
Recent spatially-resolved MIR spectroscopy has revealed some interesting differences between the two nuclei.
By using the slit-scanning spectroscopy data taken with the {\it Spitzer} IRS \citep{irs}, \cite{ps10} noted that the S nucleus shows deeper silicate absorption at 9.7~$\mu$m, stronger H$_2$ lines, and a slightly smaller equivalent width (EW) of PAH 6.2~$\mu$m than those at the N nucleus.
More dramatic differences are found in ground-based spectra obtained with higher-spatial resolution.
The silicate absorption is much deeper and the PAH features are undetectably weaker at 0\farcs 36 (61~pc) scale at the S nucleus than in the IRS spectrum \citep{ds10}.
In the X-ray, \cite{lira02} reported that the point-like source at the S nucleus is more heavily absorbed than other point sources and the diffuse component within the galaxy.
In this work we utilize archival and published data from {\it Spitzer}, {\it HST}, and {\it Chandra} to analyze the nuclear spectrophotometry more comprehensively than previous studies and to search for an AGN in the S nucleus.
\section{INFRARED DATA}
\subsection{{\it Spitzer} IRAC\label{irac_data_analysis}}
We examined {\it Spitzer} IRAC \citep{irac} archival images of NGC~3256 at its four channels for morphological and photometric studies.
Although \cite{lira08} (see also \citealt{lira02}) published the IRAC images and provided nuclear photometries, they did not use the 8.0~$\mu$m channel image in their analysis because it is slightly saturated at around the N nucleus.
The 8.0~$\mu$m information is helpful in conjunction with information of the 3.6, 4.5, and 5.8~$\mu$m channels to create IRAC color-color diagrams for AGN diagnostics (e.g., \citealt{lacy04,sajina05,stern05,donley12}).
Fortunately, the images were taken under the HDR (High Dynamic Range) mode in which both short- and long-exposure frames are taken within an Astronomical Observation Request (AOR).
The short-exposure frame is not saturated at the N nucleus.
We retrieved the Post Basic-Calibrated Data (Post BCD or PBCD) of standard mapping observations (AOR: 3896832; PI: G. Fazio) from the {\it Spitzer} Heritage Archive.
This is the same data set as that used by \cite{lira02,lira08}.
For the 8.0~$\mu$m channel, since short-exposure frames are not recommended for science use in general (IRAC data handling book), we confirmed flux calibration consistency between the long- and short-exposure frames using photometry of field stars.
Then the saturated pixels at around the N nucleus in the long-exposure frame were substituted with corresponding pixels in the short-exposure frame image.
For the 4.5~$\mu$m channel, additional simple sky subtraction was made to correct for its tilted background.
As shown by \cite{lira08} for the 3.6, 4.5, and 5.8~$\mu$m channels, the two compact nuclei are clearly separated in all four channels (Figure~\ref{fig_morphology}).
Aperture photometry was performed on each nucleus with our IDL program (Table~\ref{tab_irac_flux}).
We used an aperture of 2\farcs 8 diameter for the photometry, and applied an aperture correction following \cite{apcor}.
Flux contribution from the host galaxy is estimated within a concentric annulus whose inner and outer radii are 2\farcs 0 and 2\farcs 6, respectively.
The size of the annulus was set as small as possible to estimate the flux level near the nucleus by minimizing the effect of host galaxy structure.
Standard deviation within the annulus is quadratically added to the statistical error of the flux measurement of the nuclei.
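The aperture-plus-annulus procedure described above (aperture sum, local background estimated in a concentric annulus, annulus scatter added in quadrature to the statistical error) can be sketched as follows. This is a toy numpy illustration under simplified assumptions, not our actual IDL program; the function name, error model, and toy image are for illustration only.

```python
import numpy as np

def aperture_photometry(img, cx, cy, r_ap, r_in, r_out, ap_corr=1.0):
    """Sum the flux in a circular aperture, subtract a local background
    estimated in a concentric annulus, and add the annulus standard
    deviation in quadrature to the statistical error (a toy sketch of
    the procedure, not the actual reduction code)."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - cx, y - cy)
    in_ap = r <= r_ap
    in_ann = (r >= r_in) & (r <= r_out)
    bkg = np.median(img[in_ann])                 # background level per pixel
    bkg_std = np.std(img[in_ann])                # host-structure scatter
    raw = img[in_ap].sum() - bkg * in_ap.sum()
    stat_err = np.sqrt(np.abs(raw))              # toy Poisson-like error
    flux = ap_corr * raw                         # aperture correction
    err = ap_corr * np.hypot(stat_err, bkg_std * in_ap.sum())
    return flux, err

# toy frame: flat background of 2 counts plus a 100-count point source
img = np.full((64, 64), 2.0)
img[32, 32] += 100.0
flux, err = aperture_photometry(img, 32, 32, r_ap=5, r_in=8, r_out=11)
```

On this flat toy background the background subtraction is exact, so the recovered flux is the injected 100 counts.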
Although our fluxes are systematically slightly fainter than those of \cite{lira08}, the two are consistent because \cite{lira08} performed simple aperture photometry with slightly larger apertures than ours (3\farcs 6, 3\farcs 6, and 4\farcs 0 diameter apertures for the 3.6~$\mu$m, 4.5~$\mu$m, and 5.8~$\mu$m images, respectively).
We note that, of the flux integrated over the entire galaxy reported by \cite{engelbracht08}, the N nucleus contains only about 7\%, 7\%, 6\%, and 7\%, and the S nucleus about 4\%, 9\%, 4\%, and 2\%, in the 3.6~$\mu$m, 4.5~$\mu$m, 5.8~$\mu$m, and 8.0~$\mu$m channels, respectively.
Flux ratio maps were also made for 4.5~$\mu$m/3.6~$\mu$m, 5.8~$\mu$m/4.5~$\mu$m, and 8.0~$\mu$m/5.8~$\mu$m (Figure~\ref{fig_ratioimage}).
The point spread function (PSF) size matching was made for these ratio maps using a bright field star as a reference.
The measured PSF FWHM is about 2.9 pixels, or 1\farcs 7.
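PSF matching of this kind is commonly done, under the assumption of Gaussian PSFs, by convolving the sharper image with a Gaussian kernel whose variance is the difference of the two PSF variances. The sketch below is an illustrative stand-in for the matching applied to the ratio maps, not our actual reduction code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

SIG2FWHM = 2.0 * np.sqrt(2.0 * np.log(2.0))  # ~2.355

def match_psf(img, fwhm_orig, fwhm_target):
    """Degrade the sharper image to the coarser PSF, assuming both PSFs
    are Gaussian: the matching kernel is a Gaussian whose variance is
    the difference of the two PSF variances."""
    s_o, s_t = fwhm_orig / SIG2FWHM, fwhm_target / SIG2FWHM
    if s_t <= s_o:
        return img.copy()
    return gaussian_filter(img, np.sqrt(s_t**2 - s_o**2))

# demo: a point source observed at sigma = 2 pix, matched to sigma = 3 pix
img = np.zeros((65, 65))
img[32, 32] = 1.0
obs = gaussian_filter(img, 2.0)
matched = match_psf(obs, fwhm_orig=2.0 * SIG2FWHM, fwhm_target=3.0 * SIG2FWHM)
y, x = np.indices(matched.shape)
var_x = (matched * (x - 32) ** 2).sum() / matched.sum()  # second moment in x
```

The second moment of the matched image recovers the target PSF variance (9 pix$^2$), confirming the kernel choice.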
\subsection{{\it Spitzer} IRS\label{irs_data_analysis}}
We processed our own IRS spectra so that the spectrophotometry was done in a way consistent with the corresponding IRAC photometry for each nucleus, although similar spectra had been already published \citep{brandl06,bs09,ps10,ds10,ah12}.
We used the same IRS mapping data set as \cite{ps10} and \cite{ah12}, and followed almost the same processing methods as adopted in those studies.
The BCD data (AOR: 17659904; PI: G. Rieke) of the Short-Low (SL) module were retrieved from the {\it Spitzer} Heritage Archive.
Similar mapping data with other modules (Long-Low (LL), Long-High (LH), and Short-High (SH)) were not used since spatial resolution was worse for the longer wavelength data.
They are not suitable for our analysis in which separating the two nuclei is essential.
The standard CUBISM data processing \citep{cubism,ps10} was made, including background subtraction, hot/rogue pixel identification and subtraction, and cube creation.
Aperture photometry was performed with our IDL program to extract a spectrum for each nucleus from the data cubes created with CUBISM (Figure~\ref{fig_irac_irs_sed}).
We fixed an aperture size of 3\farcs 6 diameter over 5.2--14.5~$\mu$m and applied the aperture correction.
Flux contribution from the host galaxy is estimated within a concentric annulus whose inner and outer radii are 2\farcs 0 and 2\farcs 6, respectively.
The size of the annulus was set as small as possible to estimate flux level near the nucleus by minimizing the effect of host galaxy structure.
However, since the IRS spatial resolution ($\simeq 4$\arcsec~FWHM) is larger than that of IRAC and is comparable to the separation between the two nuclei ($\simeq $5\arcsec), we carefully applied masks to exclude the other nuclear component from the annulus.
Standard deviation within the annulus is quadratically added to the statistical error of the flux measurement of the nuclei.
The aperture correction factor was measured with a mapping observation of a standard star, HR~7341.
The standard star data (AOR 19324160) taken in the same observing campaign as the NGC~3256 observation were retrieved from the {\it Spitzer} Heritage Archive, and they were reduced by CUBISM with exactly the same parameters.
We then measured the star with the same photometry program and aperture settings, and found the aperture correction factor by comparing the measured spectrum with the standard-star flux information from the IRS Instrument Handbook.
As in the case of the IRAC photometry, those nuclear spectrophotometries are about 12--20\% of that over the entire galaxy (13\farcs 4 $\times$ 13 \farcs 4; \citealt{ah12}), although the overall spectral shapes look similar, i.e., all show prominent PAH 6.2, 7.7, 8.6, 11.3~$\mu$m features and fine structure lines such as [Ne~{\sc ii}].
We confirmed characteristics reported earlier by \cite{ps10}, namely brighter H$_2$ fluxes, deeper silicate absorption at 9.7~$\mu$m, and smaller EW of PAH 6.2~$\mu$m at the S nucleus (see \S \ref{infrared_agn_signature_results} below for our silicate absorption depth measurements).
\subsubsection{Flux Matching between the IRAC and IRS Spectrophotometries}\label{irac_iras_fluxmatch}
The IRS nuclear photometries are about twice as large as the IRAC 5.8~$\mu$m and 8.0~$\mu$m nuclear photometries at both nuclei (Figure~\ref{fig_irac_irs_sed}), and we interpret this as a result of greater contamination from the host component due to the larger PSF of the IRS.
Therefore, in order to combine the IRS and IRAC spectrophotometries in a consistent way for our subsequent analyses, we adjusted the IRAC nuclear photometries for the IRS resolution in the following way.
For the N nucleus, we simply scaled the observed nuclear IRAC fluxes by a factor measured in both the 5.8~$\mu$m and 8.0~$\mu$m channels through comparison with synthetic IRAC fluxes computed from the IRS spectrum.
We utilized a standard photometry tool (spitzer\_synthphot.pro) from {\it Spitzer} Data Analysis Cookbook.
Since the IRS SL module does not fully cover the IRAC 5.8~$\mu$m bandpass between 4.9 and 5.2~$\mu$m, we slightly extrapolated the spectra shortward with the help of a scaled SWIRE NGC~6090 SED template (see \S \ref{infrared_agn_signature_results} below for more details about the template).
We found that a single scaling factor on the IRAC N nuclear fluxes simultaneously matches both the 5.8~$\mu$m and 8.0~$\mu$m fluxes with the IRS-synthetic photometries, and the factor is applied to all four IRAC channel fluxes.
For the S nucleus, we added scaled IRAC fluxes at the N nucleus to the fluxes at the S nucleus to match IRS-synthetic photometries at the S nucleus.
Here we utilized the fact that colors of the N nucleus are very similar to those in the surrounding region of the S nucleus (Figure~\ref{fig_ratioimage}).
We calculated the scaling factor by considering, again, both 5.8~$\mu$m and 8.0~$\mu$m fluxes.
We found that a single scaling factor simultaneously matches the fluxes in both channels, and applied the factor to all four IRAC channels.
The adjusted IRAC fluxes for each nucleus are also shown in Table~\ref{tab_irac_flux}.
Note that the adjusted fluxes are used only for analyzing MIR SED together with IRS.
\subsection{T-ReCS Spectrum\label{trecs_data_analysis}}
The $N$-band high-spatial low-spectral resolution spectrophotometry of the S nucleus taken by \cite{ds10} shows remarkably different signatures just on the nucleus, and we therefore include the data in our analysis.
The spectrum was taken with T-ReCS at the Gemini-South Telescope through a 0\farcs 36-wide slit under 0\farcs 35 FWHM seeing condition, providing the best spatial resolution at $N$ band owing to both the diffraction-limited 8~m telescope and the seeing-matched narrow slit.
The spectrum is extracted over 0\farcs 36 (or four pixel width) around the peak at the S nucleus.
\cite{ds10} performed flux calibration with a standard star observation while considering the slit loss effect, and we further applied an aperture correction to estimate the flux of the central compact source, compensating for the loss of photons outside the extraction aperture box.
By assuming a Gaussian PSF of the seeing size, we estimated and applied a multiplicative factor of 1.30 to the original data presented by \cite{ds10} in their Figure~3.
Since the error of the spectrum is not explicitly given, we estimated it from the scatter of the data points around the fitted model profile (see Figure~3 of \citealt{ds10}).
\subsection{{\it HST} NICMOS Images\label{nicmos_data_analysis}}
In order to extend the shorter wavelength coverage of the nuclear spectrophotometries, we also analyzed high-resolution {\it HST} NICMOS images at NIR.
We retrieved the combined NIC3 images of F110W (1.1~$\mu$m wide band), F160W (1.6~$\mu$m wide band), and F222M (2.2~$\mu$m medium-wide band) from the {\it Hubble} Legacy Archive.
The NIC3 camera provides a larger field-of-view and is good for comparison with the IRAC images.
Their spatial resolutions (PSF size in FWHM) are 0\farcs 26 for all three bands.
Flux measurement is based on standard flux conversion factors for the instrument and the filter from the NICMOS Handbook.
The {\it HST} images presented by \cite{lira02} and \cite{ah02,ah06a,ah06b} are from a different data set taken with the NIC2 camera of the NICMOS instrument.
A small correction of tilted background was made for each filter image by fitting the image with a tilted plane while masking sources for each filter band.
Astrometry is corrected with three bright field stars in the 2MASS catalog outside of the galaxy, and the global accuracy is about 0\farcs 2.
The F160W and F222M images are shown along with the IRAC images in Figure~\ref{fig_morphology}.
Zoomed-in images of all three filters around the S nucleus are also shown in Figure~\ref{fig_nicmos_snuc}.
A compact source marginally appears on the F222M image, although it is blended with surrounding structures.
This source is not clearly identified in F160W and F110W.
For more NICMOS images for other lines/bands and their descriptions, see \cite{lira02} and \cite{ah02,ah06a}.
\section{INFRARED RESULTS}
\subsection{Infrared Morphology}\label{infrared_morphology}
Figure~\ref{fig_morphology} compares overall morphology of the galaxy in all IRAC channels and NICMOS F160W and F222M images.
In general, IRAC channels (in particular the longer 5.8 and 8.0~$\mu$m channels) trace star-forming regions and the NICMOS bands trace stellar population.
The N nucleus is bright and distinct in all IRAC channels.
The S nucleus is barely visible at 2.2~$\mu$m and is more evident in the IRAC channels.
The N nucleus is brighter than the S nucleus at 3.6, 5.8, and 8.0~$\mu$m, while the two nuclei are comparable at 4.5~$\mu$m.
Such characteristics were noted for 2.2, 3.6, 4.5, and 5.8~$\mu$m by \cite{lira02,lira08}.
\cite{kotilainen96} also showed that both nuclei are equally bright at $L$' band (3.6~$\mu$m).
The flux ratios at the S nucleus are remarkably different from the rest due to a compact red source at the S nucleus.
The S nucleus shows larger 2.2~$\mu$m/1.6~$\mu$m and 4.5~$\mu$m/3.6~$\mu$m, and smaller 5.8~$\mu$m/4.5~$\mu$m flux ratios than those of the N nucleus and most of the host galaxy (Figure~\ref{fig_ratioimage}).
The large-scale dust lanes known around the S nucleus in the optical wavelengths \citep{zepf99,moran99,lipari00,ah02,ah06a,sakamoto14} are not visible in the MIR flux ratio maps.
The red S nucleus stands out even more in the {\it HST} data when the stellar component with complex morphology is subtracted.
We modeled the stellar component of the host galaxy in the F222M band using F160W and F110W images.
First we measured stellar colors at each pixel by using the two images, and we then synthesized the stellar F222M image by linearly extrapolating from the F160W image by considering the stellar colors.
We scaled the synthesized image by a factor of 0.74 to match the observed F222M image in regions well outside the S nucleus.
This correction factor is probably due to color corrections on the filters for very red stellar continuum along with the heavy dust lanes around the S nucleus \citep{zepf99,moran99,lipari00,ah02,ah06a,sakamoto14}.
We found a distinct compact peak at the S nucleus on the F222M image subtracted for its synthetic stellar component (Figure~\ref{fig_nicmos_snuc}), indicating an additional component at the S nucleus.
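The per-pixel extrapolation step can be sketched as follows under one plausible reading: each pixel's 1.1--1.6~$\mu$m flux slope is linearly extended to 2.2~$\mu$m and the global 0.74 scale factor is applied. The exact form of the color-based extrapolation used in the analysis may differ; the function name and toy values are illustrative.

```python
import numpy as np

def synth_stellar_f222m(f110w, f160w, scale=0.74):
    """Synthesize a stellar 2.2 um image by linearly extrapolating the
    per-pixel 1.1-1.6 um flux slope to 2.2 um, then applying the global
    scale factor (0.74 in the text). One plausible reading of the
    color-based extrapolation, not necessarily the exact scheme."""
    slope = (f160w - f110w) / (1.6 - 1.1)        # flux per micron, per pixel
    return scale * (f160w + slope * (2.2 - 1.6))

# the residual image isolating a non-stellar nuclear component would be
# f222m_observed - synth_stellar_f222m(f110w, f160w)
f110w = np.array([[1.0]])
f160w = np.array([[2.0]])
model = synth_stellar_f222m(f110w, f160w)        # 0.74 * (2.0 + 2.0*0.6) = 2.368
```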
This compact source coincides with the red peak in the F222M/F160W image, a radio continuum source at the S nucleus (e.g., \citealt{norris95,neff03,sakamoto14}), and a compact (at 0\farcs 30 FWHM resolution) 8.74~$\mu$m continuum source at the S nucleus \citep{ah06b}.
The red compact source at the S nucleus contains an unresolved (at 0\farcs 26 FWHM, or 44 pc) core.
We fitted the source with one and two circular Gaussian components.
In the single-Gaussian model, we found a systematic radial variation in the residual map, in the sense that the center has a compact positive residual peak while the surrounding region systematically shows negative residuals.
In the double-Gaussian model, we assumed that the two Gaussian components share the same center position and that the more compact component is unresolved (at 0\farcs 26 FWHM).
The residual map has much less systematic structure compared with the single-Gaussian model.
The reduced $\chi^2$ are 1.8 and 1.0 for the single- and double-Gaussian models, respectively.
Therefore, the double-Gaussian model with an unresolved core is preferred.
The fitting results are summarized in Table~\ref{tab_nicmos_measurement}.
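The model comparison can be illustrated with a one-dimensional radial analogue of the fit (the actual fit uses circular two-dimensional Gaussians): a concentric double Gaussian with the compact component fixed to the PSF width yields a reduced $\chi^2$ near unity on data drawn from such a profile, while a single Gaussian leaves systematic residuals. All widths, amplitudes, and the noise level below are illustrative, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

PSF_SIGMA = 1.0  # width of the unresolved core, fixed to the PSF (pixels)

def single_gauss(r, a, s):
    return a * np.exp(-r**2 / (2 * s**2))

def double_gauss(r, a1, a2, s2):
    # two concentric components sharing the same center:
    # an unresolved core (width fixed to the PSF) plus an extended one
    return (a1 * np.exp(-r**2 / (2 * PSF_SIGMA**2))
            + a2 * np.exp(-r**2 / (2 * s2**2)))

rng = np.random.default_rng(0)
r = np.linspace(0.0, 8.0, 80)
sigma_n = 0.05                                  # per-point noise
data = double_gauss(r, 5.0, 2.0, 3.0) + rng.normal(0.0, sigma_n, r.size)

p1, _ = curve_fit(single_gauss, r, data, p0=[6.0, 2.0])
p2, _ = curve_fit(double_gauss, r, data, p0=[4.0, 1.0, 2.0])

chi2_1 = np.sum((data - single_gauss(r, *p1))**2 / sigma_n**2) / (r.size - 2)
chi2_2 = np.sum((data - double_gauss(r, *p2))**2 / sigma_n**2) / (r.size - 3)
# chi2_2 is near unity while chi2_1 is substantially larger,
# favoring the double-Gaussian model with an unresolved core
```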
\subsection{MIR Spectra/SEDs and Color-Based AGN Diagnostics at Individual Nuclei}\label{infrared_agn_signature_results}
The two nuclei are basically similar to each other in the IRS spectra, but we found noticeable differences between them.
The IRS spectra of both nuclei show prominent PAH and mild silicate absorption features as well as some fine structure lines (Figure~\ref{fig_irac_irs_sed}).
Such characteristics are common to LIRGs (e.g., \citealt{petric11}).
We compare the observations with the starburst IRS template of \cite{brandl06}, which was generated by averaging IRS spectra of local starburst galaxies.
Since this template is taken with exactly the same instrument and module (SL), we can directly compare it with our observations.
Although the template reproduces the observations rather well, we found two notable differences between them.
One is the depth of the 9.7~$\mu$m silicate absorption, which is characterized by $S_{\rm Si~9.7~\mu m} \equiv \ln (F^{\rm obs}_{\lambda}/F^{\rm cont}_{\lambda})$.
Here, $F^{\rm cont}_{\lambda}$ is the continuum flux at 9.7~$\mu$m estimated following \cite{ds10}.
We measured $S_{\rm Si~9.7~\mu m}= -0.65\pm 0.20$, $-1.31\pm 0.34$, and $-0.50$ for the N nucleus, S nucleus, and the IRS starburst template, respectively.
The S nucleus shows significantly deeper absorption than the N nucleus, which shows similar depth within uncertainty as the starburst template.
We note that the N nucleus shows slightly smaller $S_{\rm Si~9.7~\mu m}$ than the template, although the observed 9.7~$\mu$m flux at the N nucleus is larger than the template scaled at 6.0--8.0~$\mu$m (Figure~\ref{fig_irac_irs_sed}).
This is because the N nucleus also shows higher continuum at $\gtrsim 12$~$\mu$m than the scaled template due to systematically redder color at $\gtrsim 9$~$\mu$m, and the estimated continuum level at 9.7~$\mu$m is also higher for the N nucleus.
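As a quick numerical illustration of the definition above, the measured strengths correspond to apparent 9.7~$\mu$m fluxes of $\sim$52\% of the interpolated continuum at the N nucleus ($S=-0.65$) and $\sim$27\% at the S nucleus ($S=-1.31$):

```python
import numpy as np

def silicate_strength(f_obs, f_cont):
    # S_Si(9.7 um) = ln(F_obs / F_cont); negative values indicate absorption
    return np.log(f_obs / f_cont)

ratio_n = np.exp(-0.65)   # N nucleus: observed flux ~52% of the continuum
ratio_s = np.exp(-1.31)   # S nucleus: observed flux ~27% of the continuum
```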
Another difference is the color at $\lambda \lesssim 6$~$\mu$m.
The ratio of our IRS observations to the starburst template (Figure~\ref{fig_irs_irac_ratio}) shows a steep excess at the S nucleus below $\simeq 6$~$\mu$m, reaching a data-to-template ratio of $\sim 2$ at $\simeq 5$~$\mu$m.
The N nucleus does not show such a systematic deviation from the template.
We also compare the observations with a starburst-powered LIRG SED template of NGC~6090 (log $L_{\rm IR}$ ($\rm L_{\rm \odot}) =11.51$; \citealt{sanders03}) from \cite{swire} (Figure~\ref{fig_irac_irs_sed}).
The template was generated to fit the observations with the GRASIL code \citep{silva98}, which is a physical starburst evolution model for estimating SEDs.
Although this template also roughly resembles the spectra of two nuclei, it seems closer to the N nucleus at $<6$~$\mu$m.
The IRAC SEDs further illustrate the difference between the two nuclei (Figure~\ref{fig_irac_irs_sed}).
At the S nucleus, the IRAC flux monotonically increases toward longer wavelengths, i.e., no flux drop is found at 4.5~$\mu$m.
At the N nucleus, the IRAC 3.6--4.5~$\mu$m color is almost flat, indicating a contribution from the stellar component, which has a blue color at $>1.6$~$\mu$m (e.g., \citealt{sawicki02}).
The slope between 4.5~$\mu$m and 8.0~$\mu$m is steeper (redder) at the N nucleus than at the S nucleus.
These differences are caused by an excess flux at 4.5~$\mu$m at the S nucleus.
By utilizing IRAC color-color diagrams for AGN diagnostics, we found that the S nucleus shows AGN-like SED at 3.6--8.0~$\mu$m (Figure~\ref{fig_irac_color_color}).
Two types of such diagrams have been developed for AGN diagnostics.
One is based on flux ratios of 5.8~$\mu$m/3.6~$\mu$m vs. 8.0~$\mu$m/4.5~$\mu$m originally proposed by \cite{lacy04}.
We take color boundaries for AGN selection from \cite{donley12}, who modified the flux ratio cut of 5.8~$\mu$m/3.6~$\mu$m from the original one.
The other is based on the 5.8~$\mu$m$-$8.0~$\mu$m and 3.6~$\mu$m$-$4.5~$\mu$m colors (in magnitudes) proposed by \cite{stern05}.
Although the same IRAC flux information is used in both plots, they are not completely consistent in identifying photometric AGN candidates.
The IRAC colors of the S nucleus fall in the area for AGNs, which includes SEDs of pure power laws and of hot blackbodies at 300--1000~K.
The N nucleus is closer in the IRAC colors to starburst-dominated sources (either spiral galaxies, starbursts, or ultra-luminous infrared galaxies (ULIRGs)).
The difference in colors can be again interpreted as due to 4.5~$\mu$m excess at the S nucleus.
\subsection{MIR Spectrophotometry Modeling\label{MIR_SED_modeling}}
It is very likely that the S nucleus is composed of multiple components with different characteristics.
This is because the spectrum of the S nucleus at 0\farcs 36 scale is remarkably different from the spectra at arcsec resolution \citep{ds10}.
Within the 0\farcs 36 aperture, the S nucleus shows much deeper silicate absorption without detectable PAH.
The PAH features are evident in the IRS at $\sim 3$\arcsec~scale (\citealt{ds10,ps10}, this work), the wider-slit T-ReCS (1\farcs 3-wide; \citealt{lira08}), and the TIMMI2 (1\farcs 2-wide; \citealt{siebenmorgen04,mh06}) spectra.
This difference is probably because circumnuclear star-forming regions dominate the MIR flux within the larger apertures.
In terms of flux level, all the larger-aperture spectrophotometries are comparable to our IRAC 8.0~$\mu$m nuclear photometry, but the T-ReCS 8.0~$\mu$m flux within the 0\farcs 36 aperture is as small as 30\% of the IRAC flux.
We constructed simple composite SED models for the S nucleus with and without AGN to examine how additional AGN contribution can better reproduce the MIR observations compared to a pure (but absorbed) starburst model.
We fitted the models to both the high spatial-resolution T-ReCS spectrum, in particular its deep silicate absorption, and the nuclear IRAC and IRS spectrophotometries, in particular their prominent PAH features.
For simplicity, only two components are in the composites.
One of them needs to be compact and heavily absorbed to reproduce the very deep silicate absorption of the T-ReCS spectrum.
Another one represents the extended circumnuclear star-forming region which appears to dominate the PAH feature.
We made two such composite models.
The AGN-starburst composite model assumes a heavily absorbed AGN in addition to surrounding star-forming regions at $\sim 3$\arcsec~scale.
The starburst-starburst composite model assumes heavily and mildly absorbed starburst regions.
Emission from the heated dust associated with the compact and heavily absorbed starburst component is not considered in the fitting.
As a reference, we also constructed a single-component absorbed starburst model without an AGN.
Although \cite{ah12} already performed similar but more sophisticated SED modeling of the whole-aperture IRS spectrum, ours improves on it in the following three respects.
Firstly, we include the IRAC 4.5~$\mu$m flux, below the IRS wavelength coverage, in the fitting so that the bluer SED at $<6$~$\mu$m of the S nucleus can be clearly traced.
The IRAC 3.6~$\mu$m flux is not included in the fitting because the flux is usually contaminated by stellar emission and the relative contribution of the stellar component with respect to the non-stellar component varies from one star-forming region to another.
Secondly, we utilize the nuclear spectrophotometry.
\cite{ah12} used a very large aperture IRS spectrum that includes both nuclei and their surroundings.
Given the only mild differences between the N and S nuclei at the IRS spatial resolution (\S \ref{infrared_agn_signature_results}), it is essential to separately analyze the nuclear spectra.
Thirdly, we require our model to fit the high-spatial-resolution T-ReCS spectrum while simultaneously fitting the IRAC and IRS spectrophotometry.
We fit the T-ReCS spectrum at 8.0--12.9~$\mu$m (\S \ref{trecs_data_analysis}) and the IRAC 4.5~$\mu$m--IRS spectrophotometries at 4.5--14.5~$\mu$m (\S \ref{irac_iras_fluxmatch}).
For reference, we also fit only the IRS spectra (5.2--14.5~$\mu$m) to evaluate the effect of including the IRAC 4.5~$\mu$m photometry.
We employed a standard $\chi^2$ minimization technique implemented in the MPFIT software package \citep{mpfit}.
For the starburst component, we adopt the LIRG SED template of NGC~6090 from \cite{swire} since the IRS starburst template of \cite{brandl06} does not completely cover the IRAC 4.5~$\mu$m channel.
For the AGN component, we adopt a pure power-law SED ($f_{\nu} \propto \nu^{\alpha}$) with a power-law index ($\alpha$) of $-0.5$.
The colors/flux ratios of this SED are well within those expected for AGNs (Figure~\ref{fig_irac_color_color}), and, as we show later, this intrinsically bluer power law fits the observed IRAC SED when heavily extinguished to match the deep silicate feature of the T-ReCS spectrum.
For the extinction curve, we adopt three different types and compare the results because, as we demonstrate later, a slight difference in the silicate absorption profile can result in quite different best-fit parameters with similarly good statistics for the same T-ReCS data.
This strongly affects our fit to the IRAC and IRS data, which cover a wider wavelength range.
We cannot identify the most appropriate extinction curve without good anchor point(s) of the fit away from the very deep silicate absorption.
For the same reason, we fit the two sets of spectrophotometry data one by one, because a fit to the T-ReCS spectrum at 8.0--12.9~$\mu$m hardly constrains the fit around 4.5~$\mu$m in the IRAC and IRS spectrophotometries.
Therefore, we first fit the T-ReCS spectrum with each of the three extinction curves by two SED models (either starburst or AGN).
During the fit, we fix the redshift, and fit the SED scaling factor and extinction (represented by optical depth at 9.7$~\mu$m, $\tau_{\rm 9.7\mu m}$).
We then subtract the fitted model from the IRAC and IRS data, and fit the residual spectra with the starburst SED by using the same extinction curve.
We again fix the redshift, and fit the SED scaling factor and $\tau_{\rm 9.7\mu m}$.
We assume a screen geometry for both components.
We do not consider the effect of radiation from heated dust, although such radiation is expected for a compact dusty region surrounding the central energy sources (either starburst or AGN).
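The per-component fitting step just described (a scaling factor and $\tau_{\rm 9.7}$ of a screen-absorbed template, minimized by least squares as with MPFIT) can be sketched as follows. The Gaussian silicate profile and power-law template below are toy stand-ins for the actual extinction curves and SEDs, and the mock data are noiseless; this only illustrates the parameterization, not the real fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def ext_curve(lam):
    # toy extinction profile normalized to unity at 9.7 um
    # (a Gaussian stand-in for the CT06 / modified-PAHFIT curves)
    return np.exp(-(lam - 9.7)**2 / (2 * 0.9**2))

def template(lam):
    # featureless toy template: f_nu ~ nu^-0.5, i.e. ~ lam^0.5
    return lam**0.5

def absorbed(lam, scale, tau97):
    # screen geometry: F_obs(lam) = scale * F_template(lam) * exp(-tau_9.7 * k(lam))
    return scale * template(lam) * np.exp(-tau97 * ext_curve(lam))

lam = np.linspace(8.0, 12.9, 60)         # T-ReCS-like wavelength range, um
mock = absorbed(lam, 2.0, 10.0)          # noiseless mock observation
p, _ = curve_fit(absorbed, lam, mock, p0=[1.0, 5.0])
scale_fit, tau_fit = p                   # recovers (2.0, 10.0)
```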
We employ the following three extinction curves in the fitting.
The first one is a theoretical extinction curve of \cite{ct06} (hereafter, CT06) for the Galactic center.
The second is from a spectral fitting code PAHFIT \citep{pahfit}.
\cite{pahfit} introduced a hybrid curve consisting of an observed 9.7~$\mu$m silicate absorption profile, an empirical 18~$\mu$m silicate absorption profile, and a power-law component.
The 9.7~$\mu$m silicate feature is taken from \cite{kemper04} based on the observation toward the Galactic center.
This curve is found to work very well on IRS spectra of normal and star-forming galaxies (\citealt{pahfit}; see also, e.g., \citealt{ds10b,ps10,ds11}).
The third one uses the same function of PAHFIT but with a different parameter set to reproduce the updated extinction curve toward the Galactic center by \cite{fritz11}.
The curve of \cite{fritz11} shows stronger silicate absorption than that of CT06, although both curves are for the Galactic center.
For our fitting purpose, we modified parameters of the PAHFIT extinction curve to reproduce the very high-resolution extinction curve of \cite{fritz11}.
We set the power-law slope of the continuum extinction curve to zero, and the relative strength of the power-law component with respect to the 9.7~$\mu$m silicate feature strength ($\beta$; \citealt{pahfit}) to 0.155.
Because the extinction curve of \cite{fritz11} is almost proportional to that of CT06 below the silicate feature, we replaced the short-wavelength side of the extinction curve ($<7.3$~$\mu$m) with the scaled curve of CT06.
This gives a smaller $A_{\rm V}/\tau_{9.7}$ (5.8) than that of CT06 ($A_{\rm V}/\tau_{9.7}=9$).
The three extinction curves are compared in Figure~\ref{fig_extinction_curve}.
Results of the fits are summarized in Figure~\ref{fig_sedmodeling} and Table~\ref{tab_sed_fitting_results}.
As we show below, the original PAHFIT extinction curve gives much poorer fits; the best-fit SED models with this curve are therefore not shown in Figure~\ref{fig_sedmodeling}.
\subsubsection{Absorbed Single-Starburst Model}
In the absorbed single-starburst model (Figure~\ref{fig_sedmodeling} top), we modeled the IRAC 4.5~$\mu$m--IRS spectrophotometry of the S nucleus only with the absorbed LIRG SED template.
Although the model reproduces the IRS spectrum reasonably well, neither the 4.5~$\mu$m excess nor deep silicate absorption at the 0\farcs 36 scale can be reproduced.
The reduced $\chi^2$ values are about 4--6 for all three extinction curves, and the IRAC 4.5~$\mu$m flux contributes most to the $\chi^2$.
\subsubsection{AGN-Starburst Composite Model\label{agn_starburst_composite_model}}
In the AGN-starburst composite model (Figure~\ref{fig_sedmodeling} middle), we found a good fit, although not statistically satisfactory, to the T-ReCS spectrum with the AGN template by using the extinction curve of CT06 (reduced $\chi^2=1.4$) and our modified PAHFIT extinction curve (reduced $\chi^2=1.2$).
On the other hand, the original PAHFIT extinction curve gives a poor fit (reduced $\chi^2=3.6$) because the curve always produces asymmetric extinction at the two ends of the silicate absorption profile, while the observed profile is more symmetric than the curve.
We note that the best-fit parameters are significantly different for different extinction curves ($\tau_{\rm 9.7}$=9.4 and 12.7 for the extinction curve of CT06 and our modified PAHFIT extinction curve, respectively).
As explained earlier, this is due to the slightly different profiles of the silicate absorption feature and the absence of fit anchor point(s) outside the feature.
For the fit to the IRAC 4.5~$\mu$m--IRS spectrophotometries, both the extinction curve of CT06 (reduced $\chi^2=1.4$) and our modified PAHFIT extinction curve (reduced $\chi^2=0.8$) resulted in $\tau_{\rm 9.7}\simeq 0.5$--0.7.
We note that the IRAC 4.5~$\mu$m flux dominates the $\chi^2$ in the fit with the extinction curve of CT06, in the sense that the fit under-predicts the observation.
\subsubsection{Starburst-Starburst Composite Model\label{starburst_starburst_composite_model}}
In the starburst-starburst composite model (Figure~\ref{fig_sedmodeling} bottom), we found that a heavily absorbed LIRG SED also fits the T-ReCS spectrum (reduced $\chi^2=1.2$ for the extinction curve of CT06 and our modified PAHFIT extinction curve).
This is because the silicate absorption profile of the extinction curve essentially dominates the shape of the absorbed SED within the T-ReCS wavelength coverage.
Again, the original PAHFIT extinction curve gives a poor fit (reduced $\chi^2=2.7$), for the same reason as in the AGN-starburst composite model.
The fits to the IRAC 4.5~$\mu$m--IRS spectrophotometries (reduced $\chi^2 \simeq 4$--6) are much worse than those of the AGN-starburst composite model (reduced $\chi^2 \simeq 0.8$--1.4), because contribution to the 4.5~$\mu$m flux from the heavily absorbed LIRG template is negligibly small.
\subsubsection{Summary and Implication of the SED Modeling Results\label{summary_MIR_SED_modeling}}
We found that the AGN-starburst composite model fits the MIR observations much better than the starburst-starburst composite model since AGN contribution can reproduce the 4.5~$\mu$m excess.
In particular, the modified PAHFIT extinction curve gives a better fit to the 4.5~$\mu$m flux, and we prefer the results with this extinction curve, i.e., the right-middle panel of Figure~\ref{fig_sedmodeling}.
This model can also roughly explain the NICMOS 2.2~$\mu$m flux of the unresolved nuclear component, although the flux is not included in the fit (Figure~\ref{fig_sedmodeling_best}).
In our best-fit model with the modified PAHFIT extinction curve, the AGN contributes $\simeq 24$\% and $\simeq 2$\% of the 6 and 24~$\mu$m fluxes, respectively.
As for the optical depth toward the AGN component, the two extinction curves provide statistically equally good fits to the T-ReCS spectrum, with $\tau_{\rm 9.7}=9.4\pm 0.4$ and $12.7\pm 0.5$ for the extinction curve of CT06 and our modified PAHFIT extinction curve, respectively; we therefore adopt the possible range $\tau_{\rm 9.7}=9$--13.
We estimate the column density toward the AGN to be on the order of $N_{\rm H} \sim10^{23}$ cm$^{-2}$.
We assumed a screen geometry for the dusty region.
The optical depth at 9.7~$\mu$m over the AGN continuum, $\tau_{\rm 9.7}=9$--13, corresponds to $A_{\rm V}\sim 80$ mag using the conversion factors $A_{\rm V}$/$\tau_{\rm 9.7} = 9$ and $5.8$ for the extinction curves of CT06 and our modified PAHFIT, respectively.
Further assuming a standard conversion factor for the solar neighborhood, $N_{\rm H}$/$A_{\rm V} = 1.9 \times 10^{21}$ cm$^{-2}$ mag$^{-1}$ \citep{bohlin78}, the neutral hydrogen column density toward the AGN is estimated to be $N_{\rm H} \simeq 1.5 \times 10^{23}$ cm$^{-2}$.
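The conversion chain above, from silicate optical depth to visual extinction to hydrogen column density, can be sketched numerically as a minimal check, using only the conversion factors quoted in the text:

```python
# tau_9.7 -> A_V -> N_H conversion chain, with the factors quoted in the text.
# The screen geometry of the dusty region is inherited from the SED fit.

AV_PER_TAU = {"CT06": 9.0, "modified_PAHFIT": 5.8}   # A_V / tau_9.7
TAU_97 = {"CT06": 9.4, "modified_PAHFIT": 12.7}      # best-fit optical depths
NH_PER_AV = 1.9e21                                   # cm^-2 mag^-1 (Bohlin et al. 1978)

def column_density(curve):
    """Return (A_V in mag, N_H in cm^-2) for the given extinction curve."""
    a_v = TAU_97[curve] * AV_PER_TAU[curve]
    return a_v, a_v * NH_PER_AV

for curve in ("CT06", "modified_PAHFIT"):
    a_v, n_h = column_density(curve)
    print(f"{curve}: A_V ~ {a_v:.0f} mag, N_H ~ {n_h:.1e} cm^-2")
```

Both curves land near $A_{\rm V}\sim 80$ mag and $N_{\rm H}\simeq 1.5\times10^{23}$ cm$^{-2}$, as quoted above.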
We can also estimate the star formation rate (SFR) and IR luminosity of the S nucleus.
By subtracting the synthetic 8.0$~\mu$m flux of the AGN component (16~mJy), we estimate the 8~$\mu$m flux from the starburst component of our best AGN-starburst composite model to be 35~mJy.
We then estimate the SFR to be 0.43 M$_{\odot}$ yr$^{-1}$ using the calibration of \cite{zhu08}, which is derived for a young (10--100~Myr continuous burst) starburst with a Salpeter initial mass function (0.1--100 M$_{\odot}$), based on the IR (8--1000~$\mu$m) luminosity--SFR conversion and the monochromatic 8~$\mu$m luminosity--IR luminosity correlation.
This SFR corresponds to $L_{\rm IR} \simeq 2.5\times10^{9}$ L$_{\odot}$.
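As a consistency check, the quoted SFR and $L_{\rm IR}$ can be tied together with the standard Kennicutt (1998) $L_{\rm IR}$--SFR conversion for a Salpeter IMF, on which IR-based calibrations of this kind rest; this is a sketch of that conversion, not the exact calibration of \cite{zhu08}:

```python
L_SUN_ERG_S = 3.828e33      # erg/s per solar luminosity

# Kennicutt (1998) conversion for a Salpeter IMF (0.1-100 Msun):
# SFR [Msun/yr] = 4.5e-44 * L_IR(8-1000 um) [erg/s]
KENNICUTT_98 = 4.5e-44

L_IR = 2.5e9 * L_SUN_ERG_S  # quoted IR luminosity of the starburst component
sfr = KENNICUTT_98 * L_IR
print(f"SFR ~ {sfr:.2f} Msun/yr")  # ~0.43, matching the value in the text
```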
\cite{zhu08} noted that the correlation between the MIR and IR luminosities is essentially the same for AGN-starburst composite galaxies, AGNs, and star-forming galaxies.
Therefore, the IR luminosities of the AGN and the S nucleus, including the AGN and the circumnuclear starburst component, are estimated to be $\simeq 1.1\times10^{9}$ L$_{\odot}$ and $3.6\times10^{9}$ L$_{\odot}$, respectively\footnote{\cite{sakamoto14} referred to this paper in its submitted form for the FIR luminosities. The quoted luminosities are $\simeq 2.9\times10^{10}$ L$_{\odot}$, $\simeq 1.5\times10^{10}$ L$_{\odot}$, and $\simeq 5.0\times10^{9}$ L$_{\odot}$ for the N nucleus, the S nucleus, and the AGN in the S nucleus, respectively. During the revision of this paper, we revised the numbers as shown here.}.
We note that the AGN luminosity is highly uncertain because the correlation between the MIR and IR luminosities is established for field AGNs found in a blank-sky survey \citep{zhu08}, and it is not clear if the same correlation applies to the heavily absorbed AGN at the S nucleus.
For comparison, the IR luminosity of the N nucleus estimated in the same way from the observed 8.0$~\mu$m flux is $\simeq 1.5\times10^{10}$ L$_{\odot}$.
The total infrared luminosity at 3--1100~$\mu$m (TIR) of the N nucleus is also estimated from the whole-aperture TIR luminosity by \cite{engelbracht08}, who used both IRAC and MIPS \citep{mips} 24~$\mu$m, 70~$\mu$m, and 160~$\mu$m fluxes.
Since the IRAC colors of the N nucleus are very similar to those of the rest of the galaxy, and the S nucleus, which shows distinct colors, contributes only a few percent of the whole-aperture flux (\S \ref{irac_data_analysis}), we scale the whole-aperture TIR luminosity according to the nuclear fluxes at 8.0~$\mu$m.
We then found $L_{\rm TIR}=$(2.0--2.4) $\times 10^{10}$ L$_{\odot}$ for the N nucleus.
\section{X-RAY DATA}\label{x_ray_analysis}
We analyzed archival data of two {\it Chandra} observations of NGC~3256 to search for X-ray indications of the presence of an AGN
in the S nucleus.
The superb spatial resolution of {\it Chandra} enables us to
extract the X-ray spectrum of the S nucleus separately from other emission components in the host galaxy.
The two {\it Chandra} observations are summarized in Table~\ref{tab_chandra_log}.
The observations performed on 2000 Jan 5 and 2003 May 23 are referred to as the first and second observations,
respectively, hereafter. NGC~3256 was located on the back-illuminated
CCD chip ACIS-S3 in both observations.
The {\it Chandra} Interactive Analysis of Observations (CIAO) software package version 4.6 combined with
the latest calibration database (CALDB) version 4.6.3 was used to analyze the {\it Chandra} data.
The data were reprocessed to generate level=2 event files using the latest calibration.
We made a light curve for a source-free region in the same CCD chip to examine the stability of the background and discarded
time intervals showing high background.
The background was stable in the second observation, while the background rates were
relatively high in the first. The resulting exposure times after discarding high-background intervals are 16.2 and 27.2~ksec
for the first and second observations, respectively, totaling $\simeq 2.7$ times the exposure time of \cite{lira02}.
X-ray spectra of the S nucleus were extracted using a circular region with a radius of 2.2 pixels or 1\farcs 1.
Background spectra were made using a circular region with a radius of
5\farcs 2 (first observation) or 5\farcs 9 (second observation)
that does not contain noticeable point sources near NGC 3256
and were subtracted from the source spectra.
The net counts after the background subtraction are shown in Table~\ref{tab_chandra_log}
and the background subtracted spectra are presented in Figure~\ref{chandra_spec}.
The spectra of the S nucleus were binned so that each bin contains at least one count.
Spectral fits were performed by using XSPEC
version 12.8.2. We applied a maximum likelihood method to fit the spectra using
the modified version of the $C$ statistic \citep{cash79}, in which a Poisson distribution is assumed for the number of counts in each bin.
While the absolute value of $C$ does not provide a measure of goodness of fit, $\Delta C$ can be used to examine relative goodness of fit or to generate confidence intervals.
The errors are at the 90\% confidence range for one parameter of interest ($\Delta C = 2.7$).
The Galactic absorption $N_{\rm H} = 9.35\times 10^{20}$ cm$^{-2}$ \citep{kalberla05}
calculated by using the tool {\tt nh} in FTOOLS was applied to all the models examined below.
The {\tt phabs} model in XSPEC was used
for photoelectric absorption, in which the
cross sections from \cite{balucinska92} with a He cross section in \cite{yan98} are used.
All the spectral components except for the Galactic absorption were assumed to be emitted/absorbed at the source redshift.
\section{X-RAY RESULTS}\label{x_ray_results}
We fitted the spectra from the two observations simultaneously. Since the photon statistics are limited, we assumed the same spectral parameters for
the two spectra.
A constant factor was multiplied to the model to represent variability between the two observations.
This factor was fixed at unity for the first observation and left free for the second observation.
\subsection{Power-Law Models}\label{powerlaw_model}
A simple power law modified by absorption intrinsic to the source (Model A) was examined first. The resulting parameters are summarized in Table~\ref{tab_chandra_pl_fit_results}; a photon index and absorption column density of
$-0.42^{+0.67}_{-0.27}$ and $<9.7\times10^{21}$ cm$^{-2}$ (90\% confidence upper limit), respectively, were obtained. Although this model represents the overall shape of the spectra, the best-fit photon index is extremely flat.
If the photon index is fixed at a steeper value of 1.8, which is typically observed in Seyfert galaxies (e.g., \citealt{dadina08}),
the $C$ statistic becomes worse by $\Delta C$ = 17.7 (Model B).
An apparently flat spectrum is often explained by a reflection dominated spectrum or a combination of heavily absorbed and lightly absorbed continua, and these possibilities are tested below.
The former case is expected if the primary X-ray emission is obscured by a large amount of matter exceeding a column density of $\sim10^{24}$ cm$^{-2}$ and if emission scattered by the medium surrounding the X-ray source dominates the observed spectra. In order to represent a spectrum dominated by reflection from cold matter, the {\tt pexrav} model \citep{magdziarz95} in XSPEC modified by intrinsic absorption was used. The incident spectrum was assumed to be a power law with an exponential high-energy cutoff. The photon index and the cutoff energy were fixed at 1.8 and 300 keV, respectively, since these parameters were not well constrained by the observed spectra. A slab-shaped reflector
with an inclination angle of 60$^\circ$ was assumed, where 0$^\circ$ corresponds to face-on.
The abundance of the reflector was assumed to be solar, where the abundance table of \cite{anders89} was used.
The reflection normalization factor (``{\tt ref\_refl}'' parameter) was fixed at $-1$ so that only the reflected continuum emission is obtained.
This model (Model C) represents the observed spectral shape, and the best-fit parameters are shown in Table~\ref{tab_chandra_pl_fit_results}, though the absence of an Fe-K emission line is not compatible with this reflection dominated model, as discussed in \S \ref{x_ray_evidence}.
Another possibility to explain the very flat continuum is a multicomponent model consisting of heavily and lightly absorbed continua. We assumed a power-law continuum with a fixed photon index of 1.8 as the incident spectrum and absorption column densities of
$N_{\rm H, 1}$ and $N_{\rm H, 2}$ for the light and heavy absorbers, respectively.
This model is expressed as
$e^{-\sigma(E) N_{\rm H, 1}} ~[f e^{-\sigma(E) N_{\rm H, 2}} + (1-f)]~A~E^{-\Gamma}$,
where $E$, $\sigma(E)$, $f$, $A$, and $\Gamma$ are the photon energy,
photoelectric absorption cross section, fraction of the continuum emission absorbed by $N_{\rm H, 2}$, normalization of the power law, and
photon index, respectively (Model D). The observed spectra are well represented by this model.
The result of this fit is also in Table~\ref{tab_chandra_pl_fit_results}.
The best-fit model is shown in Figures \ref{chandra_spec} and \ref{chandra_ufsp}.
The measured column density of the heavy absorber ($7^{+19}_{-3} \times10^{22}$ cm$^{-2}$) is consistent with the MIR estimate of the absorption toward the AGN from the AGN-starburst composite model ($\simeq 1.5 \times 10^{23}$ cm$^{-2}$) (\S \ref{summary_MIR_SED_modeling}).
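The qualitative behavior of the dual-absorber model can be illustrated with a toy numerical version; the photoelectric cross section below is a crude power-law placeholder (an actual fit uses tabulated cross sections, as in the {\tt phabs} model), so the sketch only demonstrates how a heavy partial absorber hardens the observed spectrum:

```python
import numpy as np

# Toy Model D:  exp(-sigma*NH1) * [f*exp(-sigma*NH2) + (1-f)] * A * E^-Gamma
# sigma(E) ~ 2e-22 (E/keV)^-3 cm^2 is a rough placeholder for the
# photoelectric cross section per H atom, not the tabulated XSPEC values.

def sigma(E_keV):
    return 2e-22 * E_keV**-3.0

def model_d(E_keV, nh1=0.34e22, nh2=7e22, f=0.94, A=1.0, gamma=1.8):
    """Dual-absorber power law; columns in cm^-2, energies in keV."""
    partial = f * np.exp(-sigma(E_keV) * nh2) + (1.0 - f)
    return np.exp(-sigma(E_keV) * nh1) * partial * A * E_keV**-gamma

E = np.array([1.0, 2.0, 5.0, 8.0])
print(model_d(E))  # the heavy absorber suppresses the soft band, so the
                   # observed spectrum rises with energy and appears very hard
```

With the best-fit columns the 5 keV flux exceeds the 1 keV flux, mimicking the apparently flat (negative photon index) continuum; removing the heavy absorber ($f=0$) restores an ordinary declining $\Gamma=1.8$ power law.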
The constant factors multiplied to the models obtained from the fits are summarized in Table~\ref{tab_chandra_pl_fit_results}. The best-fit values for the models examined above are in the range from 0.92 to 0.95, and the error ranges contain unity. Therefore, flux variability between the two observations is not significant.
Observed fluxes and luminosities corrected for absorption in the 2--10 keV band are also shown in Table~\ref{tab_chandra_pl_fit_results}, except for Model B, for which
the $C$ statistic was significantly worse than those for the other models.
The observed spectra do not show an indication of an Fe-K emission line. We calculated an upper limit on the EW of a fluorescent
Fe-K line at 6.4 keV by adding a Gaussian spectral component to the models examined above.
The center energy was fixed at 6.4 keV. The line width $\sigma$ was assumed
to be 10 eV, since previous observations of type 2 AGNs show that the Fe-K line
width is much narrower than the energy resolution of ACIS \citep{shu11}.
The upper limit on the line EW is shown in Table~\ref{tab_chandra_pl_fit_results}.
\subsection{Thermal Emission Models}\label{thermal_model}
We also examined a thermal emission model. The APEC model \citep{smith01} was used as emission from optically-thin plasma in collisional ionization equilibrium.
A combination of multiple components, one of which is heavily absorbed, is required to explain the apparent flat spectrum.
We therefore examined a
model consisting of unabsorbed and absorbed APEC components
represented by the expression
{\tt apec}$(T_1) + e^{-\sigma(E) N_{\rm H}}$ {\tt apec}$(T_2)$, where $T_1$ and $T_2$
are temperatures for unabsorbed and absorbed APEC components, respectively (Model E).
Only lower bounds on the allowed temperature ranges of these two components were obtained, at 1.2 keV and 9.4 keV, respectively; the best-fit plasma temperatures $kT_1$ and $kT_2$ were pegged at 64 keV, the upper bound of the allowed parameter range. The intrinsic column density of the absorbed component was obtained to be
$N_{\rm H} = 5.1^{+3.3}_{-1.9}\times10^{22}$ cm$^{-2}$. This model resulted in a $C$ statistic of 77.6 for 88 degrees of freedom and describes the shape of the observed spectra.
X-ray spectra of starburst galaxies generally show emission from thermal
plasma with a temperature $kT=1$ keV or less
\citep{ptak99,strickland04,tullmann06}, and the
high temperatures obtained from the fits are unusual. We therefore
examined whether the presence of low-temperature ($kT<1$ keV) plasma is
consistent with the observed spectrum by applying extra absorption to
the model examined above. This trial model is
expressed by
$e^{-\sigma(E) N_{\rm H, 1}} [$
{\tt apec}$(T_1) + e^{-\sigma(E) N_{\rm H, 2}}$
{\tt apec}$(T_2) ]$,
and we found that $kT_1$ of 1 keV or less is allowed
for the component representing the low energy part of the spectra (Model F).
If $kT_1$= 1 keV is assumed, the best-fit $N_{\rm H, 1}$ is $1.3\times10^{22}$ cm$^{-2}$, and $C$=77.3 is obtained for 88 degrees of freedom.
The resulting spectral parameters, observed fluxes, and absorption corrected luminosities
for the thermal models are summarized in Table~\ref{tab_chandra_thermal_fit_results}.
\section{DISCUSSION}
\subsection{Evidence of AGN in the S Nucleus\label{agn_evidence}}
\subsubsection{In Infrared\label{ir_evidence}}
The IRAC colors at the S nucleus indicate a non-starburst-like SED (\S \ref{infrared_agn_signature_results}).
Specifically, the excess of the 4.5~$\mu$m flux makes 8.0~$\mu$m/4.5~$\mu$m flux ratio smaller and [3.6]$-$[4.5] color larger in both diagnostic diagrams in Figure~\ref{fig_irac_color_color}, bringing the S nucleus away from starburst-dominated galaxies in the observed colors.
Such colors are found only at the S nucleus in the IRAC images (\S \ref{infrared_morphology}).
The MIR spectrum of the flux ratio between the two nuclei (Figure~\ref{fig_irs_irac_ratio}a) can be used to examine the presence of an AGN in a way least dependent on the SED templates.
It shows a complicated trend of both the excess 4--6~$\mu$m flux and the deeper 9.7~$\mu$m absorption at the S nucleus.
The ratio changes in a way different from the dust extinction curves (Figure~\ref{fig_extinction_curve}) and, therefore, the two nuclei must have different intrinsic SED shapes.
This conclusion is independent of the detailed shape of the extinction curve.
Such an excess at 4--6~$\mu$m cannot be reproduced only from starburst-dominated SEDs because the observed 8.0~$\mu$m/4.5~$\mu$m flux ratio is smaller than, and the observed [3.6]$-$[4.5] color is larger (redder) than, those of starburst-dominated galaxies \citep{ruiz10}.
We demonstrated with our AGN-starburst composite SED model that the AGN contribution can reproduce the observed 4.5~$\mu$m enhancement (\S \ref{summary_MIR_SED_modeling}).
We discovered a very compact 2.2~$\mu$m source at the S nucleus, and its unresolved core (at 0\farcs 26 FWHM) seems to be a NIR counterpart of the compact (at 0\farcs 30 FWHM resolution) MIR core \citep{ah06b}.
The 2.2~$\mu$m core coincides with the compact MIR and radio sources at the S nucleus (\S \ref{infrared_morphology}).
Both the 2.2~$\mu$m core and the MIR core show a comparable source size.
Although this 2.2~$\mu$m component is not included in our SED fitting together with the T-ReCS spectrophotometry, the heavily absorbed power-law AGN SED that is fitted to the MIR data roughly predicts the 2.2~$\mu$m flux (\S \ref{summary_MIR_SED_modeling}).
Therefore, it is very likely that the same power-law component, or hot blackbody-dominated component (see below), dominates the NIR--MIR SED of the nucleus at $<50$~pc scale.
Although our SED model fitting prefers a model with an AGN, the nuclear IRAC colors can also be represented by a blackbody of $\simeq 600$~K (Figure~\ref{fig_irac_color_color}).
We neglected dust emission in our SED modeling of the starburst-starburst composite model (\S \ref{MIR_SED_modeling}), but such dust emission might dominate the SED around 4.5~$\mu$m and account for the 4.5~$\mu$m excess.
\cite{verley07} studied SEDs of embedded star-forming regions by considering a hot dusty shell, with radiative-transfer calculations using the {\it DUSTY} code \citep{dusty}.
No PAH emission is considered within {\it DUSTY}.
The young star clusters are modeled with the {\it starburst99} \citep{starburst99} stellar-population model.
Among the parameter space they explored, which was designed to cover a range of typical extragalactic H~{\sc ii} regions, the S nucleus is closest in IRAC colors to cases with an inner-surface temperature of the dusty shell of $T_{\rm in}=600$~K and a total visual extinction of $A_{\rm V}=46.4$ mag.
A heavy 9.7~$\mu$m silicate absorption is also reproduced when the extinction is very high ($A_{\rm V}\sim 100$ mag).
In such configurations of the dusty shell, the large extinction suppresses the stellar SED at 3.6~$\mu$m, and the color becomes similar to that of a pure blackbody of $T=600$~K.
The luminosity of the 600~K blackbody component is $\sim 3\times 10^8$ L$_{\rm \odot}$ if the blackbody dominates the observed nuclear IRAC SED.
If we assume a shell of dust emitting such a blackbody, the radius of the shell is $< 0.1$ pc.
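The shell radius follows from the Stefan--Boltzmann law for a spherical blackbody, $L = 4\pi R^2 \sigma T^4$; a minimal numerical check with the quoted luminosity and temperature:

```python
import math

# Radius of a spherical, optically thick blackbody shell with
# L ~ 3e8 Lsun at T = 600 K, from L = 4*pi*R^2 * sigma_SB * T^4.

SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26      # W
PC = 3.086e16         # m

L = 3e8 * L_SUN
T = 600.0
R = math.sqrt(L / (4.0 * math.pi * SIGMA_SB * T**4))
print(f"R ~ {R / PC:.3f} pc")  # ~0.04 pc, i.e. well under 0.1 pc
```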
MIR SEDs dominated by a hot blackbody are unusual for star-bursting galactic nuclei.
Some types of star-forming objects are known to show SEDs similar to the IRAC SED of the S nucleus, but they are likely not responsible for the S nucleus for the following reasons.
Hot blackbody-dominated SEDs are found in blue compact dwarf galaxies \citep{hunt05} and ultra-compact H~{\sc ii} regions (e.g., \citealt{churchwell90,churchwell02}).
In blue compact dwarf galaxies, vigorous star formation is clearly seen at optical wavelengths, and the MIR SEDs resemble a blackbody because of the lack of prominent PAH emission due to low metallicity.
Because the S nucleus is most likely the nucleus of a massive disk galaxy undergoing a merger (e.g., \citealt{english03,sakamoto14}) and the metallicity of the nuclear region is about solar (e.g., \citealt{sb95,lipari00}), the explanation for blue compact dwarf galaxies is unlikely to apply to the S nucleus.
In addition, the typical X-ray luminosity of such galaxies is about a few $10^{39}$ erg s$^{-1}$ \citep{kaaret11,thuan14}, which is about an order of magnitude fainter than the observed luminosity of the S nucleus.
Ultra-compact H~{\sc ii} regions are manifestations of newly formed massive stars that are still embedded in their natal molecular clouds, and hence in dusty cocoons \citep{churchwell90}.
They are compact ($<$1~pc) and emit mostly in the infrared from hot dust heated by the central O star.
Strong silicate 9.7~$\mu$m absorption is sometimes seen.
Although they are among the most luminous FIR sources in the Galaxy and their MIR spectral characteristics are similar to those of the S nucleus, even the most luminous ultra-compact H~{\sc ii} regions in the classical sample of \cite{wood89} are $\sim 4 \times10^3$ times less luminous than the S nucleus at $\sim 10$~$\mu$m.
Their X-ray luminosities (typically a few $10^{33}$ erg s$^{-1}$; \citealt{tsujimoto06}) are also much fainter than the observed luminosity of the S nucleus.
Nuclear starburst galaxies and (U)LIRGs hardly show such a hot dust component \citep{marshall07,armus07,dacunha10}.
In a sample of representative local star-forming ULIRGs, \cite{dacunha10} required no component hotter than 250~K in their UV--FIR SED fitting.
\cite{armus07} pointed out, from their detailed analysis of the infrared (1--1000~$\mu$m) SEDs of local ULIRGs with and without AGNs, that detection of hot dust at $T\gtrsim 300$~K in a nuclear spectrum provides indirect evidence for a buried AGN.
In summary, our infrared analysis strongly prefers the presence of an AGN at the S nucleus.
In order to better constrain the nuclear activities, a sophisticated SED decomposition such as that of \cite{marshall07}, or physical SED modeling with dust radiative transfer, would be necessary.
Additional high spatial-resolution photometry to fill the wavelength gap between NICMOS and the $N$ band, as well as spatially-resolved photometry of the S nucleus at longer infrared wavelengths, would greatly help such modeling and analysis.
\subsubsection{In X-Ray}\label{x_ray_evidence}
{\it Chandra} imaging of NGC~3256 shows an X-ray source at the position of the S nucleus. Its spatial extent is consistent with the PSF of {\it Chandra} \citep{lira02}. We first discuss possible origins of the X-ray emission assuming that the X-rays come from a single point source.
The X-ray spectra are extremely hard; if a simple power law model modified by intrinsic absorption is applied, the best-fit photon index of $-0.43$ is obtained (Model A).
Such a flat spectrum is unusual for primary X-ray emission and suggests a contribution of reprocessed and/or absorbed emission. If
the primary source is hidden behind optically thick matter with a column density greater than $N_{\rm H} \sim10^{24}$ cm$^{-2}$ and the observed spectrum is dominated by emission scattered by cold matter, the resulting spectrum becomes very hard. Such a spectrum is referred to as reflection dominated. This situation is observed in
Compton-thick AGNs, for which an absorption column density exceeds $1.5\times10^{24}$ cm$^{-2}$
(e.g., \citealt{comastri04}). In this case, a strong fluorescent Fe-K emission line with an EW greater than $\sim 700$ eV is seen
(e.g., \citealt{guainazzi05,fukazawa11}). The observed spectra, however, do not show an indication of an Fe-K emission line, and the upper limit on the EW is 190 eV (for Model C).
This limit is inconsistent with a reflection dominated spectrum, and the presence of a Compton-thick nucleus is therefore ruled out.
A dual absorber model (Model D) also provided a good description of the spectrum. Such spectra are seen in AGNs obscured by Compton-thin matter. The fraction, $f=0.94$, of the continuum absorbed by a large column density ($7\times10^{22}$ cm$^{-2}$) is in the range typically observed in Seyfert 2s \citep{turner97,noguchi10}. The less absorbed emission component is often interpreted as emission scattered by ionized medium in the opening part of the putative obscuring torus (e.g., \citealt{turner97}).
The upper limit on the EW of an Fe-K emission line (550 eV) is consistent with that expected for the absorption column density of $7\times10^{22}$ cm$^{-2}$ (EW$\simeq$ 50--150 eV; \citealt{turner97,guainazzi05,fukazawa11}). The luminosity in the 2--10 keV band corrected for absorption is estimated to be $1.5\times10^{40}$ erg s$^{-1}$. The error on the absorption column density introduces uncertainties in the correction of absorption, and the allowed range for the 2--10 keV luminosity is (1.2--2.9)$\times10^{40}$ erg s$^{-1}$. This luminosity is in the range for low-luminosity AGNs (e.g., \citealt{ho01,terashima02,cappi06}).
Thus the observed properties are in accordance with a low-luminosity AGN obscured by Compton-thin matter.
The luminosity of the X-ray source at the S nucleus is much higher than that of a single X-ray binary containing a stellar mass black hole or a neutron star.
If the X-ray source is a single compact object, its mass should be greater than
\[
1000 M_{\odot}
\left( \frac{1.0}{\lambda_{\rm Edd}} \right)
\left( \frac{\kappa_{\rm 2-10~keV}}{10} \right)
\left( \frac{L_{\rm 2-10~keV}}{1.5\times10^{40}~{\rm erg~s}^{-1}} \right),
\]
where $\lambda_{\rm Edd}$, $\kappa_{\rm 2-10~keV}$, $L_{\rm 2-10~keV}$ are
the Eddington ratio, bolometric correction factor, and intrinsic luminosity in 2--10 keV, respectively.
Note that we assumed an Eddington luminosity of $1.5\times10^{38}$ erg
s$^{-1}$ per solar mass, based on an assumed
hydrogen-to-helium ratio of 0.76:0.23 by weight. The bolometric correction
factor $\kappa_{2-10 \rm keV}$ is known to depend on luminosity
and Eddington ratio. Studies of AGN SEDs show that $\kappa_{2-10
\rm keV}$ is 10--20 for low-luminosity AGNs accreting at an Eddington
ratio of $<$0.1 \citep{vasudevan09,ho08}.
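The mass bound above amounts to simple arithmetic; a sketch with the fiducial values quoted in the text:

```python
# Lower bound on the compact-object mass from the Eddington argument:
#   M > kappa * L_X / (lambda_Edd * L_Edd_per_Msun)

L_EDD_PER_MSUN = 1.5e38   # erg/s per solar mass (X = 0.76, as assumed above)

def min_mass(L_x=1.5e40, kappa=10.0, lambda_edd=1.0):
    """Minimum mass in Msun for a given 2-10 keV luminosity,
    bolometric correction kappa, and Eddington ratio."""
    return kappa * L_x / (lambda_edd * L_EDD_PER_MSUN)

print(min_mass())                            # 1000 Msun for the fiducial values
print(min_mass(kappa=20.0, lambda_edd=0.1))  # sub-Eddington accretion raises the bound
```

The bound scales linearly with $\kappa_{\rm 2-10~keV}$ and inversely with $\lambda_{\rm Edd}$, so accretion below the Eddington rate only strengthens the case against a single stellar-mass object.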
The X-ray fluxes of the S nucleus for the two observations with a 3 year interval are consistent with each other within errors.
A systematic analysis of the X-ray light curves of a large AGN sample shows that
a high percentage of low-luminosity AGNs do not show significant variability
\citep{gm12}, and the absence of variability is
consistent with the
X-ray emission being from a low-luminosity AGN.
On the other hand, the absence of variability might be explained by the possibility that the X-ray emission comes from multiple sources.
If there are tens of stellar mass black holes well within the size of the PSF of {\it Chandra} (0\farcs 49 or 83~pc
at the distance of NGC~3256),
the luminosity measured could be explained. The very hard spectra observed, however, are not compatible with any spectral state
of stellar-mass black hole binaries (\citealt{mcclintock06,done07}),
and a superposition of X-ray spectra suffering different absorption column densities would be required.
A large number of X-ray binaries are expected in starburst galaxies.
The integrated luminosity of high-mass X-ray binaries, which have hard
X-ray spectra and are formed by starburst activity, is related to the SFR \citep{ranalli03,grimm03,gilfanov04}.
The scaling law of \cite{grimm03} between SFR and integrated X-ray luminosity ($L_{\rm 2-10~keV}$) for low SFR values ($<4.5$ M$_{\odot}$ yr$^{-1}$) predicts $L_{\rm 2-10~keV}$ of $6\times10^{38}$ erg s$^{-1}$ for SFR of $\simeq 0.43$ M$_{\odot}$ yr$^{-1}$ (\S \ref{summary_MIR_SED_modeling}).
The observed luminosity is about a factor of $\sim 20$ larger than this expectation.
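This expectation can be reproduced with the commonly quoted low-SFR form of the \cite{grimm03} relation, $L_{\rm 2-10~keV} \approx 2.6\times10^{39}\,{\rm SFR}^{1.7}$ erg s$^{-1}$; the coefficients below are taken from that commonly cited form and should be checked against the original paper:

```python
# Approximate low-SFR (nonlinear) scaling of Grimm et al. (2003) for the
# collective 2-10 keV luminosity of high-mass X-ray binaries, valid for
# SFR < 4.5 Msun/yr. Coefficients are the commonly quoted values.

def hmxb_luminosity(sfr):
    return 2.6e39 * sfr**1.7  # erg/s

L_expected = hmxb_luminosity(0.43)  # ~6e38 erg/s, as quoted in the text
L_observed = 1.5e40                 # absorption-corrected 2-10 keV luminosity
print(f"expected {L_expected:.1e} erg/s, excess factor ~ {L_observed / L_expected:.0f}")
```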
{\it Chandra} images of nearby starburst galaxies
indeed show many point sources (e.g., \citealt{griffiths00,strickland00,bauer01}). Such point sources, however, are distributed over a region of several hundred parsecs, which is much larger than the limit on the source size of the S nucleus.
A single nuclear source is therefore more plausible,
although the possibility of a superposition of multiple sources cannot be completely excluded.
A two-component thermal plasma model also explains the shape of the observed spectra (\S \ref{thermal_model}). Thermal plasma emission with a temperature lower than $\sim 1$ keV is commonly observed in starburst galaxies (e.g., \citealt{strickland06}). Such a component is extended to the scale of the host galaxy, and its intrinsic absorption is generally small. Our spectral fit indicates that emission from plasma with $kT<1$ keV should be significantly absorbed by a column density of $1.3\times10^{22}$ cm$^{-2}$ or higher if such a component exists. Thus the observed properties, namely the small source size and the low-$kT$ plasma confined and hidden behind absorbing matter, are unusual for a starburst interpretation.
\subsubsection{Arguments at Other Wavelengths}
In addition to our MIR and X-ray evidence for the AGN, recent radio and NIR observations also suggest AGN activity in the S nucleus.
\cite{sakamoto14} found a highly collimated bipolar jet-like outflow of molecular gas from the S nucleus using {\it ALMA}.
It extends up to 4\arcsec~(700~pc) from the nucleus and is associated with a bipolar spur of radio continuum emission.
From the morphology and kinematics of the molecular outflow and from the radio feature, they inferred the presence of a jet-driving AGN in the S nucleus.
They also found that the continuum spectral slope at 860~$\mu$m is flatter at the S nucleus than at the N nucleus.
More synchrotron emission from an AGN in the S nucleus would explain the difference.
A similar bipolar outflow from the S nucleus was also detected in the NIR H$_2$ 1--0 $S$(1) line by \cite{emonts14}, and was suggested to be AGN-driven based on its energetics and high mass-loading factor.
In addition to the X-ray to MIR SED (\citealt{lira02}; see also \S \ref{X_MIR} below), the X-ray to radio SED has been used to distinguish types of nuclear activity because these wavelengths are least sensitive to extinction and contamination by stellar activity is minimal there \citep{terashima03}.
\cite{neff03} found the ratio of 6~cm radio to 2--10 keV X-ray luminosities, $R_{\rm X}$ ($=L_{\rm R}/L_{\rm X}=\nu L_{\rm \nu}$ (5~GHz)/$L_{\rm 2-10~keV}$)$=1\times 10^{-2}$, for the compact source in the S nucleus from their {\it VLA} observations and the {\it Chandra} X-ray results of \cite{lira02}.
By comparing with the ratios of various kinds of Galactic and extragalactic sources, they argued that the compact source is most likely a low-luminosity AGN, although a possibility of a collection of supernova remnants cannot be rejected.
Our new X-ray luminosity is only 26\% larger than that of \cite{lira02}, and the revised $R_{\rm X}$ remains in the range of low-luminosity AGNs as \cite{neff03} discussed.
\subsection{Distribution of Dusty Material in and around the S Nucleus\label{dust_distribution}}
Our absorption magnitude and column density toward the AGN in the S nucleus, $A_{\rm V}\sim 80$ mag and $N_{\rm H} \sim10^{23}$ cm$^{-2}$, are much larger than previous estimates at NIR--MIR wavelengths.
They are $A_{\rm V} = 5.3$ and 10.7 mag from the $JHK'L'$ continuum and line photometry, respectively \citep{kotilainen96}, 16 mag from the {\it HST} NICMOS $H$--$K$ nuclear color measurement \citep{lira02}, and $15\pm 5$ mag from the {\it HST} NICMOS line flux ratio \citep{ah06a}.
Also, \cite{lira08} estimated $A_{\rm V} \sim 10$ mag on the basis of MIR SED fitting within 3\farcs 6--4\farcs 0 apertures.
It seems that the previous measurements of $A_{\rm V} = 5$--15 mag (i.e., $N_{\rm H} \sim 10^{22}$ cm$^{-2}$) correspond to the extended dust lanes on 1~kpc scales (e.g., \citealt{zepf99,moran99,lipari00,ah02,ah06a,sakamoto14}) or the dusty circumnuclear star-forming region (\S \ref{MIR_SED_modeling}).
In our MIR SED analysis, we applied mild extinction corresponding to $A_{\rm V} \simeq 5$--8 mag to the starburst component of the AGN-starburst composite model (\S \ref{agn_starburst_composite_model}).
This component is most likely spatially extended because \cite{ah06b} reported a slightly extended structure in a 0\farcs 30 FWHM resolution (51~pc) image of the S nucleus at 8.74~$\mu$m.
We also found a slightly extended component, besides an unresolved core, in the NICMOS 2.2~$\mu$m image (0\farcs 75 FWHM or 128~pc) (\S \ref{infrared_morphology}).
In our X-ray analysis, we showed that the lightly absorbed component of the dual-component model suffers from $N_{\rm H,1}=0.34^{+1.56}_{-0.25}\times10^{22}$ cm$^{-2}$ (\S \ref{powerlaw_model}), matching the values from previous and our measurements at NIR--MIR.
Additional heavy extinction on the AGN is required to account for both the deep silicate absorption at the 0\farcs 36 scale and the heavy absorption on another component of the dual-component model in the X-ray analysis.
Such a heavily absorbed region is most likely within the central $\lesssim 0.5$\arcsec~of the S nucleus.
In the MIR, the silicate absorption becomes much shallower at arcsec scales (\citealt{ds10}; \S \ref{MIR_SED_modeling}).
In the X-ray, high spatial-resolution {\it Chandra} spectra show that the column density of the heavily absorbed component ($N_{\rm H,2}=7^{+19}_{-3}\times10^{22}$ cm$^{-2}$) is much larger than that of the power-law component in the whole-aperture {\it XMM-Newton} spectrum ($0.14\pm 0.03 \times10^{22}$ cm$^{-2}$; \citealt{ps11}).
For comparison, from high spatial-resolution {\it ALMA} molecular-gas imaging, \cite{sakamoto14} obtained a column density of
$N_{\rm H} = 10^{23}$--$10^{24}$ cm$^{-2}$
to the S nucleus in their 0\farcs 5 (80~pc) beam.
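For a rough cross-check between the extinction-based and column-density-based measurements quoted above, one can apply the standard Galactic conversion $N_{\rm H}/A_{\rm V} \simeq 1.9\times10^{21}$ cm$^{-2}$ mag$^{-1}$ (an assumed, Galactic value; the gas-to-dust ratio in NGC~3256 may well differ). A minimal Python sketch:

```python
# Hedged sketch: translate visual extinction A_V into an equivalent hydrogen
# column density N_H, assuming the Galactic ratio N_H/A_V ~ 1.9e21 cm^-2/mag.
# The ratio is an assumption; it is not measured for NGC 3256 in this work.

NH_PER_AV = 1.9e21   # cm^-2 per mag (assumed Galactic value)

def nh_from_av(av_mag):
    return NH_PER_AV * av_mag

print(f"{nh_from_av(10):.1e}")   # A_V = 5-15 mag dust lanes -> N_H ~ 1.9e22
print(f"{nh_from_av(80):.1e}")   # A_V ~ 80 mag toward the AGN -> N_H ~ 1.5e23
```

With this conversion, the $A_{\rm V}\simeq 80$ mag inferred toward the AGN from the MIR SED modeling corresponds to $N_{\rm H}\sim 1.5\times10^{23}$ cm$^{-2}$, within the ALMA range quoted above.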
Both MIR and X-ray analyses strongly suggest that a large amount of dust is distributed immediately around the AGN in the form of a thick and smooth shell or torus.
ULIRGs generally show deeper silicate absorption ($S_{\rm 9.7\mu m} \ll -1$) than AGNs (e.g., \citealt{hao07}), and such deep absorption requires that the energy source be deeply embedded in dust that is both optically and geometrically (along the radial direction) thick, as well as geometrically smooth \citep{levenson07,nenkova08a,nenkova08b,sirocky08}.
For example, a slab geometry cannot make the absorption $S_{\rm 9.7\mu m} < -1.1$ even with $\tau_{\rm V}=1000$, because a temperature gradient is needed within the dusty region and the illuminated surface of the dust must be hidden from our direct view \citep{levenson07}.
Also, a clumpy medium cannot make the absorption very deep, because the illuminated surfaces of individual clumps can be seen directly or can illuminate the dark surfaces of other clumps \citep{nenkova08a,nenkova08b}.
The observed upper limit of the silicate absorption depth at the 0\farcs 36 resolution, $S_{\rm 9.7\mu m} < -3$ \citep{ds10}, where the bottom of the feature was not determined owing to the very strong absorption and limited sensitivity, is already in the range of the most heavily absorbed sources in the local universe \citep{hao07}.
Our MIR SED analysis implies a much deeper depth ($S_{\rm 9.7\mu m} = -9$ -- $-13$ where $S_{\rm 9.7\mu m} \equiv -\tau_{\rm 9.7}$ in our assumed geometry).
This strengthens the need for an optically and geometrically thick dusty region immediately surrounding the AGN.
On the other hand, our X-ray spectrum analysis with a dual-component model suggests that a classical picture of type-2 AGNs with a dusty torus immediately surrounding a supermassive black hole (e.g., \citealt{antonucci93}) applies to the S nucleus.
Firstly, the elevated column density at the S nucleus is compatible with a typical dusty torus of Compton-thin type-2 AGNs
(e.g., \citealt{awaki91,turner97,guainazzi05,fukazawa11}).
Secondly, the model also suggests that $\simeq 6$\% of the primary X-ray spectrum is seen unabsorbed by the heavy absorber, and this component can be interpreted as emission scattered by ionized medium in the opening part of the putative obscuring torus around the AGN.
Such a phenomenon is commonly seen in typical Seyfert 2s \citep{turner97,noguchi10}.
To reproduce deep silicate absorption with such a torus, the torus must be geometrically thick enough along the height direction, as well as along the radial direction, or our viewing angle must be close enough to the equatorial plane of the torus, so that the inner illuminated surface is hidden from our line-of-sight.
\subsection{Revisiting X-Ray-to-MIR SED\label{X_MIR}}
The S nucleus is known to be X-ray-quiet, and this fact has been considered to be evidence against an AGN there \citep{lira02}, although the galaxy as a whole is among the most luminous X-ray sources without a confirmed AGN in the local universe \citep{moran99,lira02,ps11}.
\cite{lira02} revealed a compact X-ray source at the S nucleus with the first {\it Chandra} observation and reported the neutral hydrogen column density obscuring the nucleus to be $N_{\rm H} = 5 \times 10^{22}$ cm$^{-2}$ (best fit)--$1 \times 10^{23}$ cm$^{-2}$ (acceptable).
By using a radio--FIR--MIR--X-ray SED of the S nucleus, \cite{lira02} showed that the observed X-ray luminosity is higher than that of typical starbursts but is at least two orders of magnitude lower than expected for a classical Seyfert nucleus.
\cite{alexander05} plotted the data of \cite{lira02} for the total aperture in the correlation diagram between IR and (absorption-corrected) X-ray luminosities together with well-studied local starbursts and AGNs.
They demonstrated that the X-ray luminosity of NGC~3256 between 0.5 keV and 8 keV is well below the range of AGNs but the galaxy is slightly more X-ray luminous than the starburst galaxies.
Our AGN-starburst composite model indicates that the observed SED is heavily contaminated at MIR by the circumnuclear star formation, and we found that the AGN component alone is consistent with typical AGNs in terms of X-Ray-to-MIR SED.
\cite{horst06,horst08} demonstrated that contamination by the circumnuclear starburst at MIR must be removed when evaluating AGN SEDs.
By using high spatial-resolution MIR photometries, \cite{horst06,horst08} and \cite{gandhi09} showed that the ratio of the MIR luminosity at 12.3~$\mu$m to the absorption-corrected X-ray luminosity at 2--10 keV scatters within a range of log $L_{\rm 12.3~\mu m}$/$L_{\rm 2-10~keV} = -0.5$ to 1.5, with a mean of $0.61 \pm 0.31$ among both type-1 and -2 Seyfert nuclei.
In the case of the S nucleus, we adopt the T-ReCS flux at 12.3~$\mu$m within the 0\farcs 36 aperture \citep{ds10}
and the absorption-corrected X-ray luminosity of the dual-component model (Model D; \S \ref{x_ray_evidence}).
Then we find log $L_{\rm 12.3~\mu m}$/$L_{\rm 2-10~keV} \sim 0.8$--1.2 ($=1.1$ with the best-fit 2--10 keV luminosity) for the AGN in the S nucleus.
It is in the range of ratios that \cite{horst08} and \cite{gandhi09} obtained for Seyfert nuclei.
Therefore, we conclude that the apparent X-ray quietness of the S nucleus should no longer be considered as evidence against AGN in the nucleus.
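The luminosity ratio quoted above is simple arithmetic on the two luminosities. A Python sketch with the best-fit 2--10 keV luminosity of Model D from this work; note that the 12.3~$\mu$m luminosity below is back-computed from the quoted best-fit ratio for illustration only, not an independently stated number:

```python
import math

# Sketch of the MIR/X-ray luminosity diagnostic discussed in the text.
# L_X: absorption-corrected 2-10 keV luminosity, Model D (from the text).
# L_MIR: implied nuclear 12.3 um luminosity, back-computed from the quoted
# best-fit ratio log(L_12.3um / L_2-10keV) = 1.1 (illustrative assumption).

L_X = 1.5e40              # erg/s
L_MIR = L_X * 10**1.1     # erg/s, implied

ratio = math.log10(L_MIR / L_X)
print(round(ratio, 2))            # 1.1

# Compare with the Seyfert range log(L_MIR/L_X) = -0.5 to 1.5 quoted above:
print(-0.5 <= ratio <= 1.5)       # True
```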
\subsection{Comparison with Previous Results}
Although our conclusion of the presence of an AGN in the S nucleus appears to contradict most of the earlier studies, the results are consistent with each other once different aperture sizes and contamination from circumnuclear star formation are taken into account.
\cite{lira02,lira08} noted the 4.5~$\mu$m excess at the S nucleus among the three IRAC channels.
\cite{lira02} also noted, by adding information from ground-based $JHK'L'N$-band photometry, that the 1--10~$\mu$m SED of the S nucleus in $\nu F_{\rm \nu}$ is flatter than those of typical starburst galaxies and is closer to those of AGNs.
However, they concluded that the AGN contribution is insignificant, if any.
This is because the FIR--MIR--X-ray SED for the S nucleus in \cite{lira02} is not consistent with the SEDs of typical Seyfert 2 nuclei, and the T-ReCS $N$-band spectrum taken with a 1\farcs 3 slit in \cite{lira08} shows evidence for star formation (e.g., PAH features).
In our work, we showed that the circumnuclear star formation dominates the MIR flux at a $\sim 3$\arcsec~scale, and it is not surprising that the T-ReCS spectrum from the 1\farcs 3 slit shows PAH features because the T-ReCS flux and the IRS flux within the 3\farcs 6 aperture are almost the same (\S \ref{MIR_SED_modeling}).
We also showed that the X-ray-to-MIR luminosity ratio is consistent with typical AGNs if we consider only the AGN contribution at MIR after excluding contribution from the circumnuclear star-forming regions (\S \ref{X_MIR}).
Our deeper {\it Chandra} spectrum enabled us to better constrain the X-ray characteristics owing to improved statistics.
On the basis of their first {\it Chandra} observation, \cite{lira02} presented spectral analysis of a composite X-ray spectrum of three hard sources including the S nucleus, and
constrained the spectral shape of the S nucleus using X-ray hardness.
We used two {\it Chandra} data sets totaling $\simeq 2.7$ times more effective exposure time, and were able to obtain spectra of the S nucleus alone. The relatively flat spectra we obtained are
qualitatively consistent with the results by \cite{lira02}. They assumed an absorbed power law model to constrain the allowed ranges of
the photon index and absorption column density. Our spectral fits indicate that a single-component absorbed power law model gives an extremely flat photon index ($\Gamma < 0.25$), otherwise the quality of fit becomes significantly worse. The absence of a strong Fe-K fluorescent line
in our spectrum strongly prefers
the model consisting of two components, one of which is absorbed by Compton-thin matter ($N_{\rm H} \sim 7\times10^{22}$ cm$^{-2}$),
rather than a single absorbed power law.
The integrated spectrum of NGC~3256 measured with {\it XMM-Newton} provided a strong constraint on the flux of an Fe-K fluorescent line \citep{ps11}. Their upper limit on the flux is $3.5\times10^{-15}$ erg cm$^{-2}$ s$^{-1}$ or $3.4\times10^{-7}$ photons cm$^{-2}$ s$^{-1}$. Our limit on the Fe-K flux ($3.6\times 10^{-7}$ photons cm$^{-2}$ s$^{-1}$ for Model D) is almost the same as that obtained by \cite{ps11}. We obtained spectra of the S nucleus alone, in contrast to the integrated spectra previously reported,
and succeeded in setting a limit on the Fe-K EW ($<190$ eV for Model C and $<550$ eV for Model D) that excludes the possibility of the presence of a Compton-thick AGN.
The AGN in the S nucleus is energetically much less important at MIR than the rest of NGC~3256 (\S \ref{summary_MIR_SED_modeling}), and previous estimates of the fractional AGN contribution within the galaxy were not sensitive enough to detect the AGN.
\cite{ah12} performed SED modeling of the IRS spectrum extracted over a kpc-scale region to explore a possible AGN contribution.
They concluded that the AGN contributions to the whole-aperture MIR (at 6 and 24~$\mu$m) and IR luminosities are very small ($<5$\% and $<1$\%, respectively).
Our estimates of the AGN contribution to the S nucleus at $\sim 3$\arcsec~scale are $\sim 24$\% and $\sim 2$\% at 6 and 24~$\mu$m, respectively (\S \ref{summary_MIR_SED_modeling}).
Since the entire-galaxy IRS spectrum of \cite{ah12} is about 8 times brighter than our nuclear ones (\S \ref{irac_data_analysis}, \ref{irs_data_analysis}), the AGN contributes $\sim 3$\% and $\sim 0.3$\% of the whole-aperture MIR luminosities at 6 and 24~$\mu$m, respectively.
Therefore, our results are consistent with those of \cite{ah12}.
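The scaling from nuclear to whole-aperture AGN fractions above is a one-line dilution estimate. A sketch with the numbers quoted in the text (the factor of $\sim 8$ is the quoted brightness ratio of the entire-galaxy IRS spectrum to the nuclear one):

```python
# Sketch: dilute the nuclear AGN fractions by the quoted factor ~8 between
# the entire-galaxy and nuclear IRS fluxes (all numbers taken from the text).

flux_ratio = 8.0                      # entire-galaxy / nuclear IRS flux
f_nuc_6um, f_nuc_24um = 0.24, 0.02    # nuclear AGN fractions at 6 and 24 um

f_whole_6um = f_nuc_6um / flux_ratio      # ~3% of the whole-aperture flux
f_whole_24um = f_nuc_24um / flux_ratio    # ~0.3% (0.25%) at 24 um
print(f_whole_6um, f_whole_24um)
```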
The absorption-corrected X-ray luminosity of the S nucleus for the dual absorber model with Seyfert-like power-law continua (Model D),
which we conclude is the most plausible, is $1.5\times10^{40}$ erg s$^{-1}$.
It is about 20\% of the total X-ray luminosity integrated over NGC 3256,
$7.4\times10^{40}$ erg s$^{-1}$, measured with {\it XMM-Newton}
\citep{ps11}.
\section{SUMMARY AND CONCLUSIONS}
NGC~3256 is the most luminous LIRG in the local universe ($z<0.01$), and it is a merging galaxy with two (N and S) nuclei.
Presence of an AGN in the S nucleus has been controversial.
We examined spectrophotometric characteristics of the S nucleus at both near--mid-infrared (NIR--MIR) and X-ray using archival and published data, and found several pieces of evidence to support a low-luminosity AGN obscured by Compton-thin matter.
The following are our findings and their implications.
We found in IRAC flux ratio maps that the S nucleus shows distinct photometric properties at 3.6--8.0~$\mu$m, most notably in its excess of 4.5~$\mu$m flux.
In contrast, the N nucleus is similar in colors to the star-forming regions within the host galaxy.
We applied the IRAC color-color diagrams for AGN diagnostics to the nuclear photometries for each nucleus, and found that the N and S nuclei show starburst- and AGN (power-law)-like SEDs, respectively.
This difference originates from the 4.5~$\mu$m excess at the S nucleus.
Using high-resolution {\it HST} NICMOS images, we extracted a compact source at the S nucleus by subtracting the stellar component at 2.2$~\mu$m.
The S nucleus consists of an unresolved (at 0\farcs 26 FWHM) and a resolved (0\farcs 75 FWHM or 128~pc after subtracting the instrumental resolution) component.
The corresponding structures are not seen at 1.6~$\mu$m and shorter wavelengths.
Because of its position and size, we identify its unresolved core with the compact MIR core (at 0\farcs 30 FWHM resolution at 8.74~$\mu$m).
The flux of the 2.2$~\mu$m core is consistent with our AGN spectral energy distribution (SED) model.
We analyzed the IRS nuclear spectrophotometries together with the IRAC nuclear photometries at 3.6--14.5~$\mu$m.
The IRS spectrum of the S nucleus shows bluer colors at $<6$~$\mu$m with respect to both the N nucleus and the IRS starburst template of \cite{brandl06} in a consistent way with the IRAC SED.
We conducted SED modeling of the S nucleus to reproduce the high spatial-resolution (0\farcs 36 aperture) T-ReCS $N$-band spectrophotometry \citep{ds10} as well as the nuclear IRAC 4.5~$\mu$m and IRS data.
Our AGN-starburst composite model (a heavily absorbed power-law AGN SED superposed on a mildly absorbed starburst-powered LIRG SED template) successfully reproduces both the deep silicate absorption at 9.7~$\mu$m within the 0\farcs 36 aperture and the distinct PAH features with the 4--6~$\mu$m excess at $\sim 3$\arcsec~scale.
We estimated $A_{\rm V}$ toward the AGN to be as large as 80 mag.
All the MIR results point toward a heavily absorbed (but Compton-thin) type-2 AGN in the S nucleus.
The AGN is most likely accompanied by circumnuclear star-forming regions.
We obtained a deep {\it Chandra} X-ray spectrum with $\simeq 2.7$ times more exposure time than in the previous study, and performed spectral analysis of a point-like source at the S nucleus.
We found that a dual-component (for primary and scattered ones) power-law model successfully fits the hard spectrum.
The inferred column density that is associated with the heavily absorbed primary component is $N_{\rm H}=7^{+19}_{-3} \times10^{22}$ cm$^{-2}$, which is in the range typically observed in Seyfert 2s and is consistent with our MIR measurement.
The lightly absorbed component can be interpreted as emission scattered by ionized medium in the opening part of the obscuring material around the AGN.
The successful dual-component model suggests a Compton-thin type-2 AGN in the S nucleus.
We also examined three other models for the S nucleus, namely a Compton-thick AGN, a single X-ray emitting compact source or a collection of such sources in the S nucleus, and thermal plasma with $kT<1$ keV.
We found that all of them are either rejected or less likely.
A Compton-thick AGN is rejected because of a limit on EW of a fluorescent Fe-K emission line at 6.4 keV ($<190$ eV for a reflection-dominated model).
Models with compact stellar source(s) are less likely because of the observed high luminosity, hard spectrum, and/or small region size well within the {\it Chandra} resolution (0\farcs 49 or 83~pc), although we cannot reject possibilities of models with multiple compact stellar sources.
A model with the thermal plasma seems unusual, although it is statistically possible, because the low-$kT$ plasma needs to be confined and hidden behind absorbing matter.
Our model with an AGN is quantitatively consistent with most of the earlier studies that found no AGN in the S nucleus if we consider different aperture sizes and the star-forming circumnuclear region that dominates the MIR flux at $\sim 3$\arcsec~scale.
In our best MIR SED model (the AGN-starburst composite model), the AGN contributes $\sim 2$\% and $\sim 0.2$\% of the whole-aperture MIR luminosities at 6 and 24~$\mu$m, respectively.
In our best X-ray model (a dual absorber model with Seyfert-like power-law continua), the absorption-corrected X-ray luminosity of the S nucleus is about 20\% of the total X-ray luminosity integrated over NGC 3256.
In particular, we showed that the X-ray to MIR luminosity ratio of the AGN is consistent with that of typical Seyfert 2s if we exclude contribution of MIR flux from the circumnuclear star formation.
\acknowledgments
We thank our referee for useful comments and suggestions to improve this paper.
This work is supported by MOST grants 100-2112-M-001-001-MY3 (Y.O.) and 102-2119-M-001-011-MY3 (K.S.).
This work is based in part on observations made with the {\it Spitzer} Space Telescope, obtained from the NASA/IPAC Infrared Science Archive, both of which are operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration.
This research has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, and software provided by the Chandra X-ray Center (CXC) in the application package CIAO.
Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
{\it Facilities:} \facility{Spitzer (IRAC)}, \facility{Spitzer (IRS)}, \facility{Chandra (ACIS)}, \facility{HST (NICMOS)}.
\section{Edge rings}
In this section, unless stated otherwise, $G=K_{r_1,\dots, r_n}$ is the complete multipartite graph on $[d]$ with vertex set partitioned as $V(G)=V_1\sqcup\dots \sqcup V_n$, $n\geq 2$, $|V_k|=r_k$ for all $k$. In this context $d=\sum_{k=1}^n r_k$ and, without loss of generality, we will always assume that $1\leq r_1\leq \dots \leq r_n$.
It is easy to see that for any two odd cycles in $G$ which have no common vertex there is a bridge between them, i.e. $G$ satisfies the so-called odd cycle condition.
Consequently, by \cite{OH-jalg} the edge ring
$$
R={\NZQ K}[G]={\NZQ K}[x_i x_j: i\in V_a, j\in V_b, 1\leq a<b\leq n]\subset {\NZQ K}[x_1,\dots, x_d]
$$ is normal, hence a Cohen--Macaulay domain (\cite{Hochster}).
Before we address the nearly Gorenstein property, we recall that Ohsugi and Hibi \cite{OH-Ill} classified the complete multipartite edge rings which are Gorenstein. With notation as above, their result is the following.
\begin{Theorem} (Ohsugi, Hibi \cite[Remark 2.8]{OH-Ill})
\label{thm:gore-multipartite}
The edge ring of the complete multipartite graph $K_{r_1,\dots, r_n}$ is Gorenstein if and only if
\begin{enumerate}
\item $n=2$ and $(r_1,r_2) \in \{ (1,m), (m,m): m\geq 1 \}$, or
\item $n=3$ and $1\leq r_1\leq r_2 \leq r_3 \leq 2$, or
\item $n=4$ and $r_1=r_2=r_3=r_4=1$.
\end{enumerate}
\end{Theorem}
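The case analysis in the theorem can be packaged as a small predicate; the following Python sketch is our own transcription of the statement, not code from the literature:

```python
def is_gorenstein_multipartite(r):
    """Ohsugi-Hibi criterion for K[K_{r_1,...,r_n}] to be Gorenstein,
    transcribed from the theorem above (r = list of part sizes, n >= 2)."""
    r = sorted(r)
    n = len(r)
    if n == 2:
        return r[0] == 1 or r[0] == r[1]   # (1, m) or (m, m)
    if n == 3:
        return r[2] <= 2                   # i.e. 1 <= r_1 <= r_2 <= r_3 <= 2
    if n == 4:
        return r == [1, 1, 1, 1]
    return False                           # no Gorenstein examples for n >= 5

print(is_gorenstein_multipartite([2, 2, 2]))   # True
print(is_gorenstein_multipartite([2, 2, 3]))   # False
```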
For some complete multipartite graphs the edge ring fits into classes of algebras for which the nearly Gorenstein property is already understood.
\begin{Example}
\label{edge-rings-as-sqfreeveronese}
{\em
When $r_1=\dots=r_n=1$, the edge ring $R$ is the squarefree Veronese subalgebra of degree $2$ in the polynomial ring ${\NZQ K}[x_1,\dots, x_n]$, and according to \cite[Theorem 4.14]{HHS}, $R$ is nearly Gorenstein if and only if it is Gorenstein. The latter property holds if and only if $n\leq 4$, by using work of De Negri and Hibi \cite{DeNegri-Hibi}, or Bruns, Vasconcelos and Villarreal \cite{BVV}.}
\end{Example}
\begin{Example}
\label{edge-rings-as-hibi-rings}{\em
According to Higashitani and Matsushita \cite[Proposition 2.2]{HM-2020}, when $n=2$, or when $n=3$ and $r_1=1$, the corresponding edge ring is isomorphic to a Hibi ring, and for the latter the nearly Gorenstein property is described in \cite{HHS}. We refer to \cite{TH87} for background on Hibi rings.}
\end{Example}
\begin{Theorem}(\cite[Theorem 5.4]{HHS}, \cite{TH87})
\label{thm:hibirings-ng}
Let $P$ be a finite poset. Then the Hibi ring $R$ of the distributive lattice of the order ideals in $P$ is nearly Gorenstein if and only if $P$ is the disjoint union of pure connected posets $P_1,\dots, P_q$ such that $|\rank(P_i)-\rank(P_j)| \leq 1$ for $1\leq i<j\leq q$.
In particular, $R$ is a Gorenstein ring if and only if $P$ is pure.
\end{Theorem}
Based on that, when $G$ is a complete bipartite graph we obtain the following classification.
\begin{Proposition}\label{prop:ng-bipartite}
Let $G=K_{r_1, r_2}$ be the complete bipartite graph with $1\leq r_1\leq r_2$.
Then the edge ring ${\NZQ K}[G]$ is nearly Gorenstein if and only if $r_1=1$, or $r_1 \geq 2$ and $r_2\in \{r_1, r_1+1\}$.
When $2\leq r_1= r_2-1$, the ring ${\NZQ K}[G]$ is nearly Gorenstein and not Gorenstein.
\end{Proposition}
\begin{proof}
By \cite[Proposition 2.2]{HM-2020}, ${\NZQ K}[G]$ is isomorphic to the Hibi ring associated to the distributive lattice of order ideals in the poset $P$ which consists of two disjoint chains with $r_1-1$ and $r_2-1$ elements, respectively. By Theorem~\ref{thm:hibirings-ng}, ${\NZQ K}[G]$ is nearly Gorenstein if and only if $r_1=1$, or $r_1\geq 2$ and $r_2\in \{r_1, r_1+1\}$.
\end{proof}
For non-bipartite graphs we prove the following result.
\begin{Theorem}
\label{thm:ng-graphs}
Let $R$ be the edge ring of a complete multipartite graph $K_{r_1,\dots, r_n}$ with $n\geq 3$. The following statements are equivalent:
\begin{enumerate}
\item[(i)] $R$ is a Gorenstein ring;
\item[(ii)] $R$ is a nearly Gorenstein ring.
\end{enumerate}
\end{Theorem}
\begin{proof}
Clearly, $(i)\implies (ii)$.
We prove the converse.
When $n=3$ and $r_1=1\leq r_2\leq r_3$, by \cite[Proposition 2.2]{HM-2020} the ring $R$ is isomorphic to the Hibi ring associated to the distributive lattice of order ideals in a poset $Q$ with maximal chains $q_1<\dots <q_{r_2}$, $q_{r_2+1}<\dots<q_{r_2+r_3}$ and $q_1<q_{r_2+r_3}$. The poset $Q$ is connected, hence $R$ is nearly Gorenstein if and only if it is Gorenstein, i.e. $1=r_1\leq r_2\leq r_3\leq 2$.
We now consider the remaining cases: either $n=3$ and $r_1\geq 2$, or $n\geq 4$.
Assume, by contradiction that $R$ is nearly Gorenstein and not Gorenstein, i.e.
\begin{equation}
\label{eq:tracemax}
\tr(\omega_R)={\frk m}_R.
\end{equation}
The monomials in $R$ and in $\omega_R$ have a nice combinatorial description as feasible integer solutions to certain systems of inequalities, as follows.
We denote by $H=\sum_{\{i,j\}\in E(G)} {\NZQ N} ({\mathbf e}_i+{\mathbf e}_j) \subset {\NZQ N}^d$ the affine semigroup generated by the columns of the vertex-edge incidence matrix of $G$, and by $\mathcal{C}={\NZQ R}_+H$ the rational cone over $H$.
For ${\mathbf u}=(u_1,\dots, u_d)\in {\NZQ N}^d$, it follows from \cite{OH-jalg} and \cite[Proposition 3.4]{Villarreal}
that ${\mathbf u} \in H$ (equivalently, ${\mathbf x}^{\mathbf u} \in R$) if and only if
\begin{eqnarray}
\label{eq:r}
\sum_{i=1}^d u_i \equiv 0 \mod 2, \\
\nonumber u_1,\dots, u_d \geq 0, \quad \text{and}\\
\nonumber \sum_{i\notin V_k} u_i \geq \sum_{j\in V_k} u_j \text{ for all } k=1,\dots, n.
\end{eqnarray}
The latter inequalities are equivalent to
\begin{equation}
\label{two-r}\sum_{i=1}^d u_i \geq 2\sum_{j\in V_k} u_j, \text{ for } k=1,\dots, n.
\end{equation}
Since $R$ is normal, by \cite{Danilov}, \cite{Stanley} (see also \cite[Theorem 6.3.5(b)]{BH}), a ${\NZQ K}$-basis for $\omega_R$ is given by the monomials ${\mathbf x}^{\mathbf u}$ where ${\mathbf u}=(u_1,\dots, u_d) \in {\NZQ Z}^d$ satisfies
\begin{eqnarray}
\label{eq:omega}
\label{zero}\sum_{i=1}^d u_i \equiv 0 \mod 2, \\
\label{one} u_1,\dots, u_d \geq 1, \text{and}\\
\label{two}\sum_{i=1}^d u_i \geq 2+2\sum_{j\in V_k} u_j, \text{ for } k=1,\dots, n.
\end{eqnarray}
From the equations above it is easy to see that if the monomial ${\mathbf x}^{\mathbf u}$ is in $R$ or in $\omega_R$, we can permute the exponents of $x_i$ and $x_j$ whenever $i, j \in V_k$ for some $k$, and we obtain another monomial in $R$, or in $\omega_R$, respectively.
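The two membership criteria above are easy to make executable. The following Python sketch (our own illustration, with the vertices of each $V_k$ taken as consecutive 0-based index blocks) tests the conditions for $K_{2,2,3}$:

```python
# Sketch (our notation): membership tests for the edge ring R = K[G] of
# G = K_{r_1,...,r_n} and for its canonical module omega_R, following the
# inequality descriptions in the text.

def blocks(r):
    """Index blocks V_1,...,V_n of [d] for part sizes r = (r_1,...,r_n)."""
    out, start = [], 0
    for rk in r:
        out.append(range(start, start + rk))
        start += rk
    return out

def in_R(u, r):
    """Is x^u a monomial of the edge ring K[K_{r_1,...,r_n}]?"""
    s = sum(u)
    return (s % 2 == 0 and all(ui >= 0 for ui in u)
            and all(s >= 2 * sum(u[i] for i in Vk) for Vk in blocks(r)))

def in_omega(u, r):
    """Is x^u a monomial of the canonical module omega_R?"""
    s = sum(u)
    return (s % 2 == 0 and all(ui >= 1 for ui in u)
            and all(s >= 2 + 2 * sum(u[i] for i in Vk) for Vk in blocks(r)))

r = (2, 2, 3)
print(in_R((1, 0, 1, 0, 0, 0, 0), r))      # True: an edge between V_1 and V_2
print(in_R((1, 1, 0, 0, 0, 0, 0), r))      # False: both endpoints inside V_1
print(in_omega((1, 1, 1, 2, 1, 1, 1), r))  # True
```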
In what follows ${\mathbf u}=(u_1,\dots, u_d)$ and ${\mathbf v}=(v_1,\dots, v_d)$.
For a monomial ${\mathbf x}^{\mathbf u} \in \omega_R$ and $1\leq k\leq n$ we say that $V_k$ (or simply, $k$) is a \textit{heavy component} in ${\mathbf u}$ if
\begin{equation}
\label{eq:heavy}\sum_{i=1}^d u_i=2+ 2\sum_{j\in V_k}u_j.
\end{equation}
\begin{Claim} {\em For any ${\mathbf x}^{\mathbf u} \in \omega_R$ there exist at most two heavy components in ${\mathbf u}$. In particular, there is at least one non-heavy component in ${\mathbf u}$. }\end{Claim}
\begin{proof}
Indeed, if $k_1<k_2<k_3$ are heavy components in ${\mathbf u}$, then by adding the equations \eqref{eq:heavy} for these indices we get
$$
3\sum_{i=1}^d u_i= 6+ \sum_{j\in V_{k_1}\cup V_{k_2}\cup V_{k_3}}2u_j.
$$
If $n=3$, then $\sum_{i=1}^d u_i=6$. Since $u_i\geq 1$ for all $i$, this gives $d\leq 6$; on the other hand, $r_1\geq 2$ in this case, so $d\geq 6$. Hence $r_1=r_2=r_3=2$, and ${\NZQ K}[G]$ is a Gorenstein ring (by Theorem \ref{thm:gore-multipartite}), which is not the case.
If $n\geq 4$, then at least one $u_i$ lies outside $V_{k_1}\cup V_{k_2}\cup V_{k_3}$, so the right-hand side above is at most $6+2(\sum_{i=1}^d u_i-1)$, hence $\sum_{i=1}^d u_i\leq 4$. As $\sum_{i=1}^d u_i\geq d\geq n\geq 4$, we get $d=4$, hence $n=4$ and $r_1=r_2=r_3=r_4=1$. Example \ref{edge-rings-as-sqfreeveronese} implies that $R$ is a Gorenstein ring, which is false.
\end{proof}
\begin{Claim} {\em For any $1\leq i\leq d$ there exists a monomial ${\mathbf x}^{\mathbf u} \in \omega_R$ such that $u_i=1$. }
\end{Claim}
\begin{proof} We fix $i$ and denote $a_i=\min \{u_i: {\mathbf x}^{\mathbf u} \in \omega_R\}$. By \eqref{one}, $a_i \geq 1$. Assume $a_i\geq 2$, let ${\mathbf x}^{\mathbf u}\in \omega_R$ attain this minimum, and say $i\in V_k$.
If $r_k>1$, we may pick $j \in V_k$, $j\neq i$. Then it is easy to check that the monomial $m=\frac{{\mathbf x}^{\mathbf u}}{x_i}{x_j} \in \omega_R$ and $\deg_{x_i}(m)=a_i-1$, a contradiction.
When $r_k=1$, then $n\geq 4$ and by the previous claim there is at least one non-heavy component $V_{k_1}$ in ${\mathbf u}$ which is different from $V_k$. We pick $j\in V_{k_1}$ and since the monomial $m=\frac{{\mathbf x}^{\mathbf u}}{x_i}{x_j} \in \omega_R$ and $\deg_{x_i}(m)=a_i-1$ we obtain a contradiction.
\end{proof}
It follows at once that
\begin{equation*}
\gcd({\mathbf x}^{\mathbf u} : {\mathbf x}^{\mathbf u} \in \omega_R)=\prod_{i=1}^d x_i,
\end{equation*}
where the greatest common divisor is computed in the polynomial ring $S={\NZQ K}[x_1,\dots, x_d]$.
Since $\omega_R$ is generated by monomials, one gets that $\omega_R^{-1}$ is also generated by monomials in ${\NZQ K}[x_1^{\pm 1}, \dots, x_d^{\pm 1}]$.
If $f={\mathbf x}^{\mathbf u}/{\mathbf x}^{\mathbf v} \in \omega_R^{-1}$ with ${\mathbf x}^{\mathbf u}$ and ${\mathbf x}^{\mathbf v}$ coprime monomials in $S$, then ${\mathbf x}^{\mathbf v}$ divides the greatest common divisor of the monomials in $\omega_R$. Hence, in order to determine a system of generators for the $R$-module $\omega_R^{-1}$ it is enough to scan among the (non-reduced) fractions $f={\mathbf x}^{\mathbf u}/(x_1\dots x_d)$, where ${\mathbf x}^{\mathbf u}$ is in the set
$$
\mathcal {B}=\left\{ {\mathbf x}^{\mathbf u} \in S: \sum_{i=1}^d u_i \equiv d \mod 2, \quad {\mathbf x}^{\mathbf u} \cdot \omega_R \subseteq x_1\dots x_d R\right\}.
$$
A monomial ${\mathbf x}^{\mathbf u}$ is in $\mathcal{B}$ if and only if $\sum_{i=1}^d u_i \equiv d \mod 2$ and
$$
x_1^{u_1+v_1-1}\cdots x_d^{u_d+v_d-1} \in R
$$
for all $x_1^{v_1}\cdots x_d^{v_d}$ in $\omega_R$. That is equivalent, via \eqref{eq:r}, \eqref{zero}, \eqref{two-r}, to the fact that
\begin{eqnarray}
\sum_{i=1}^d u_i \equiv d \mod 2, \text{ and }\\
\label{eq:first}\sum_{i=1}^d u_i +\sum_{i=1}^d v_i \geq d-2r_k + 2\sum_{j\in V_k} u_j + 2 \sum_{j\in V_k} v_j, \\
\nonumber \text{ for } k=1, \dots, n,\text{ and any } {\mathbf x}^{\mathbf v} \in \omega_R.
\end{eqnarray}
For $k=1,\dots, n $ we set
$$
E_k=\min \left\{ \sum_{i=1}^d v_i- 2 \sum_{j\in V_k}v_j: {\mathbf x}^{\mathbf v} \in \omega_R\right\}.
$$
Therefore, \eqref{eq:first} is equivalent to
\begin{equation}
\label{eq-with-ek}
\sum_{i=1}^d u_i \geq d-2r_k-E_k+2\sum_{j\in V_k} u_j \text{ for } k=1,\dots, n.
\end{equation}
Before computing $E_k$ we make a simple observation regarding $d$ and the $r_i$'s.
\begin{Claim}
$2 r_i+2 \leq d$ for all $i=1,\dots, n-1$.
\end{Claim}
\begin{proof}
Indeed, if that were not the case, then $2r_n+2 \geq 2 r_{n-1} +2>d$, hence $2 r_n\geq 2 r_{n-1} \geq d-1$. This implies $r_n+r_{n-1} \geq d-1$, equivalently $\sum_{i=1}^{n-2}r_i\leq 1$, which is not possible in our setup.
\end{proof}
Next we show that $E_k$ does not depend on $k$.
\begin{Claim}{\em $E_k=2$ for $k=1,\dots, n$.}
\end{Claim}
\begin{proof}
We fix $1\leq k \leq n$.
Clearly, $E_k \geq 2$, by \eqref{two}.
Then $E_k=2$ once we find
\begin{equation}
\label{good-v}
{\mathbf x}^{\mathbf v} \in \omega_R \text{ such that } \sum_{i=1}^d v_i=2 + 2\sum_{j\in V_k} v_j.
\end{equation}
Using Eqs. \eqref{zero}, \eqref{one}, \eqref{two}, and setting $s_\ell=\sum_{j\in V_\ell} v_j-r_\ell$ for $\ell=1,\dots, n$, we observe that finding ${\mathbf v}$ as in \eqref{good-v} is equivalent to finding integers $s_1,\dots, s_n$ such that
\begin{eqnarray}
\label{s-zero} s_1,\dots, s_n \geq 0,\\
\label{s-one} \sum_{i=1}^n s_i \geq 2 s_\ell +2 r_\ell +2-d, \text{ for } 1\leq \ell \leq n, \ell\neq k, \text{ and} \\
\label{s-two} \sum_{i=1}^n s_i=2 s_k+2+2r_k-d.
\end{eqnarray}
Here $s_\ell$ is the sum of the components of ${\mathbf v}$ from $V_\ell$, each decreased by one.
Note that \eqref{s-two} already implies that $\sum_{i=1}^n s_i \equiv d \mod 2$.
We have two cases to consider.
{\em Case $k=n$}:
We let $s_\ell= \lceil d/2 \rceil -r_\ell-1$ for $\ell=1,\dots, n-1$.
For \eqref{s-two} to hold, we must let
\begin{eqnarray*}
s_n &=& \sum_{i=1}^{n-1} s_i-2-2r_n+d=(n-1)\lceil d/2 \rceil -(d-r_n)-(n-1)-2-2r_n+d \\
&=& (n-1)(\lceil d/2\rceil -1)-r_n-2.
\end{eqnarray*}
If $n=3$, then $r_1, r_2\geq 2$ in our setup, hence $d\geq r_n+4$ and $s_n \geq 2(\lceil d/2 \rceil -1)-r_n-2 \geq d-r_n-4 \geq 0$. If $n\geq 4$, then $2\lceil d/2\rceil \geq d\geq r_n+n-1$, hence $s_n \geq \frac{(n-1)(r_n+n-3)}{2}-r_n-2 \geq \frac{3(r_n+1)}{2}-r_n-2=\frac{r_n-1}{2} \geq 0$.
For $\ell<n$, one has $s_\ell \geq 0$ by the previous Claim. Also, $2s_\ell+2+2 r_\ell-d=2\lceil d/2 \rceil -d$ is either $0$ or $1$, depending on $d$ being even or odd.
Since all $s_i\geq 0$ and $\sum_{i=1}^n s_i \equiv d \mod 2$, \eqref{s-one} and \eqref{s-zero} are all verified.
{\em Case $1\leq k \leq n-1$}:
We let $s_n=0$
and $s_\ell= \lceil d/2 \rceil -r_\ell-1$ for $\ell=1,\dots, n-1$ where $\ell\neq k$.
For \eqref{s-two} to hold, we must let
\begin{eqnarray}
\label{sk}
s_k &=& (\sum_{1\leq i \leq n-1, i\neq k}s_i )+s_n-2-2r_k+d.
\end{eqnarray}
Clearly, $s_k\geq 0$ since
$d\geq 2r_k+2$.
Arguing as in the other case, for $k\neq \ell<n$ one has $s_\ell \geq 0$ and \eqref{s-one} holds.
We are left to verify that
\begin{equation}
\label{eq-last}
\sum_{i=1}^n s_i \geq 2 s_n+2 r_n+2-d.
\end{equation}
Substituting \eqref{s-two} into the LHS above, \eqref{eq-last} is equivalent to
\begin{equation*}
s_k+r_k \geq s_n+r_n.
\end{equation*}
Using \eqref{sk} we get that
$$
s_k+r_k=(\sum_{1\leq i\leq n-1, i\neq k} s_i) +s_n+d-r_k-2= (\sum_{1\leq i \leq n-1, i\neq k}s_i)+s_n+r_n+(d-r_k-r_n-2)\geq s_n+r_n,
$$
where for the latter inequality we used the observation that $d\geq r_k+r_n+2$ in our setup.
Consequently, $s_1,\dots, s_n$ fulfil \eqref{s-zero}, \eqref{s-one}, \eqref{s-two}, and $E_k=2$.
\end{proof}
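The Claim can be spot-checked by brute force on small examples; the following Python sketch (our own illustration) minimizes $\sum_{i=1}^d v_i - 2\sum_{j\in V_k} v_j$ over the monomials of $\omega_R$ with bounded exponents:

```python
from itertools import product

# Sketch: brute-force verification of E_k = 2 on K_{2,2,3}, searching
# exponent vectors of omega_R inside a finite window (exponents <= vmax).

def E_k(k, r, vmax=3):
    Vs, start = [], 0
    for rk in r:
        Vs.append(range(start, start + rk))
        start += rk
    d, best = sum(r), None
    for v in product(range(1, vmax + 1), repeat=d):
        s = sum(v)
        # v is an exponent vector of omega_R: even degree and slack >= 2
        if s % 2 == 0 and all(s >= 2 + 2 * sum(v[i] for i in Vk) for Vk in Vs):
            val = s - 2 * sum(v[i] for i in Vs[k])
            best = val if best is None else min(best, val)
    return best

print([E_k(k, (2, 2, 3)) for k in range(3)])   # [2, 2, 2]
```

Note that the window only affects attainment of the minimum; the lower bound $E_k\geq 2$ is forced by the description of $\omega_R$ itself.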
We can now finish the proof of Theorem \ref{thm:ng-graphs}.
We first note that $d\geq 2r_1+3$. Indeed, $d\geq 2r_1+2$ by the previous Claim, and if $d=2r_1+2$, then $nr_1\leq d=2r_1+2$ gives $(n-2)r_1\leq 2$; recalling that $r_1\geq 2$ when $n=3$, this leaves $(r_1,\dots, r_n)=(2,2,2)$ or $(1,1,1,1)$, which are Gorenstein by Theorem \ref{thm:gore-multipartite} and Example \ref{edge-rings-as-sqfreeveronese}, a contradiction.
We show that for $i\in V_1$ and $j\in V_2$ the generator $x_ix_j$ of ${\frk m}_R$ does not belong to $\tr(\omega_R)=\omega_R\cdot \omega_R^{-1}$.
Since $\omega_R$ and $\omega_R^{-1}$ are generated by monomials, if $x_ix_j \in \tr(\omega_R)$, then $x_ix_j={\mathbf x}^{\mathbf v}\cdot f$ for some monomials ${\mathbf x}^{\mathbf v} \in \omega_R$ and $f \in \omega_R^{-1}$; as noted above, $f={\mathbf x}^{\mathbf u}/(x_1\cdots x_d)$ with ${\mathbf u} \in {\NZQ N}^d$.
Then ${\mathbf u}+{\mathbf v}=\mathbf{1}+{\mathbf e}_i+{\mathbf e}_j$, where $\mathbf{1}=(1,\dots,1)$, and since $v_\ell \geq 1$ and $u_\ell\geq 0$ for all $\ell$, necessarily ${\mathbf u} \in \{\mathbf{0},\, {\mathbf e}_i,\, {\mathbf e}_j,\, {\mathbf e}_i+{\mathbf e}_j\}$.
Set $k=2$ if ${\mathbf u}={\mathbf e}_j$, and $k=1$ otherwise. Since $E_k=2$, there exists ${\mathbf x}^{\mathbf w} \in \omega_R$ with $\sum_{i'=1}^d w_{i'}-2\sum_{j'\in V_k} w_{j'}=2$. As $f \in \omega_R^{-1}$, the monomial ${\mathbf x}^{\mathbf w}f={\mathbf x}^{{\mathbf w}+{\mathbf u}-\mathbf{1}}$ lies in $R$, hence by \eqref{two-r}
$$
\sum_{i'=1}^d (w_{i'}+u_{i'}-1) \geq 2\sum_{j'\in V_k}(w_{j'}+u_{j'}-1),
$$
equivalently
$$
2=\sum_{i'=1}^d w_{i'}-2\sum_{j'\in V_k} w_{j'} \geq d-2r_k+2\sum_{j'\in V_k}u_{j'}-\sum_{i'=1}^d u_{i'}.
$$
If ${\mathbf u}={\mathbf e}_j$, then $k=2$ and the right-hand side equals $d-2r_2+1\geq 3$, since $2r_2+2\leq d$ by the previous Claim. In the remaining cases $k=1$ and $2\sum_{j'\in V_1}u_{j'}-\sum_{i'=1}^d u_{i'}\geq 0$, so the right-hand side is at least $d-2r_1\geq 3$. Either way we reach a contradiction.
Consequently, $x_ix_j \notin \tr(\omega_R)$ and $\tr(\omega_R) \subsetneq {\frk m}_R$, a contradiction with \eqref{eq:tracemax}.
\end{proof}
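The endgame can also be certified computationally on the smallest non-Gorenstein example. The Python sketch below (our own illustration) checks that for $K_{2,2,3}$ no monomial factorization puts an edge generator between $V_1$ and $V_2$ in $\tr(\omega_R)$, by exhibiting, for each admissible ${\mathbf v}$, a witness ${\mathbf x}^{\mathbf w}\in\omega_R$ whose product with the candidate inverse monomial leaves $R$:

```python
from itertools import product

# Sketch: certify that x_1*x_3 (an edge between V_1 and V_2) is not in
# tr(omega_R) for K_{2,2,3}. Any factorization x_1x_3 = x^v * x^u/(x_1...x_d)
# with x^v in omega_R forces v in {1, 1+e_i, 1+e_j, 1+e_i+e_j}; a witness
# w with x^w in omega_R and x^{w+u-1} not in R proves x^u/(x_1...x_d)
# is not in omega_R^{-1}. (Finding no witness would be inconclusive.)

r = (2, 2, 3)
d = sum(r)
Vs = [range(0, 2), range(2, 4), range(4, 7)]
i, j = 0, 2          # 0-based: a vertex of V_1 and a vertex of V_2

def in_R(u):
    s = sum(u)
    return (s % 2 == 0 and min(u) >= 0
            and all(s >= 2 * sum(u[t] for t in Vk) for Vk in Vs))

def in_omega(v):
    s = sum(v)
    return (s % 2 == 0 and min(v) >= 1
            and all(s >= 2 + 2 * sum(v[t] for t in Vk) for Vk in Vs))

def witness_against(u, wmax=3):
    """Some w with x^w in omega_R and x^{w+u-1} not in R, if one exists."""
    for w in product(range(1, wmax + 1), repeat=d):
        if in_omega(w) and not in_R(tuple(a + b - 1 for a, b in zip(w, u))):
            return w
    return None

possibly_in_trace = False
for di, dj in product((0, 1), repeat=2):
    v = tuple(1 + di * (t == i) + dj * (t == j) for t in range(d))
    u = tuple(1 + (t == i) + (t == j) - vt for t, vt in enumerate(v))
    if in_omega(v) and witness_against(u) is None:
        possibly_in_trace = True
print(possibly_in_trace)   # False
```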
\section{Stable set rings}
In this section we consider an algebra generated by the monomials coming from the stable sets of a graph.
Let $G$ be a finite simple graph on $[n]$ and let $E(G)$ denote the set of edges of $G$. A subset $C \subset [n]$ is a {\em clique} of $G$ if $\{i, j\} \in E(G)$ for all $i, j \in C$ with $i \neq j$. A subset $W \subset [n]$ is {\em stable} in $G$ if $\{i, j\} \not\in E(G)$ for all $i, j \in W$ with $i \neq j$. In particular, the empty set as well as each $\{ i \} \subset [n]$ is both a clique of $G$ and a stable subset of $G$. Let $\Delta(G)$ denote the {\em clique complex} of $G$, which is the simplicial complex on $[n]$ consisting of all cliques of $G$. Let $\delta$ denote the maximal cardinality of cliques of $G$. Thus $\dim \Delta(G) = \delta - 1$. We say that $G$ is {\em pure} if $\Delta(G)$ is a pure simplicial complex, i.e. the cardinality of each maximal clique of $G$ is $\delta$.
The graph $G$ is called {\em perfect} if for all induced subgraphs $H$ of $G$, including $G$ itself, the chromatic number is equal to the maximal cardinality of cliques contained in $H$, see \cite[p.~165]{BB}.
Let ${\NZQ K}[x_1, \ldots, x_n, t]$ denote the polynomial ring in $n + 1$ variables over the field ${\NZQ K}$. If, in general, $W \subset [n]$, then $x^W t$ stands for the squarefree monomial
\[
x^W t = \big(\prod_{i \in W}x_i \big) \cdot t \in {\NZQ K}[x_1, \ldots, x_n, t].
\]
Let ${\rm Stab}_{\NZQ K}(G)$ denote the subalgebra of ${\NZQ K}[x_1, \ldots, x_n, t]$ which is generated by those $x^W t$ for which $W$ is a stable set of $G$. Letting $\deg(x^W t)=1$ for any stable set $W$, the algebra ${\rm Stab}_{\NZQ K}(G)$ becomes standard graded. We call ${\rm Stab}_{\NZQ K}(G)$ the {\em stable set ring} of $G$.
It is known \cite[Example 1.3 (c)]{OH} that ${\rm Stab}_{\NZQ K}(G)$ is normal if $G$ is perfect. It follows that, when $G$ is perfect, ${\rm Stab}_{\NZQ K}(G)$ is spanned over ${\NZQ K}$ by those monomials $(\prod_{i=1}^{n} x_i^{a_i}) t^q$ with $\sum_{i \in C} a_i \leq q$ for each maximal clique $C$ of $G$. Furthermore, the canonical module $\omega_{{\rm Stab}_{\NZQ K}(G)}$ of ${\rm Stab}_{\NZQ K}(G)$ is spanned over ${\NZQ K}$ by those monomials $(\prod_{i=1}^{n} x_i^{a_i}) t^q$ with each $a_i > 0$ and with $\sum_{i \in C} a_i < q$ for each maximal clique $C$ of $G$. Moreover, it is proved in \cite[Theorem 2.1 (b)]{OHjct} that ${\rm Stab}_{\NZQ K}(G)$ is Gorenstein if and only if $G$ is pure.
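Since, for perfect $G$, Gorensteinness of ${\rm Stab}_{\NZQ K}(G)$ reduces to purity of $\Delta(G)$, the criterion can be checked mechanically on small examples. A brute-force Python sketch (ours; function names are not from the paper):

```python
from itertools import combinations

def all_cliques(n, edges):
    # Every vertex subset all of whose pairs are edges; includes the empty set.
    return [set(c) for r in range(n + 1)
            for c in combinations(range(1, n + 1), r)
            if all({i, j} in edges for i, j in combinations(c, 2))]

def is_pure(n, edges):
    """G is pure iff all inclusion-maximal cliques have the same size,
    which (for perfect G) is equivalent to Stab_K(G) being Gorenstein
    by [OHjct, Theorem 2.1 (b)]."""
    cs = all_cliques(n, edges)
    maximal = [c for c in cs if not any(c < d for d in cs)]
    return len({len(c) for c in maximal}) == 1

# Path 1-2-3: maximal cliques {1,2} and {2,3} both have size 2 -> pure.
assert is_pure(3, [{1, 2}, {2, 3}])
# Triangle {1,2,3} with a pendant edge {3,4}: maximal cliques have
# sizes 3 and 2 -> not pure, hence Stab_K(G) is not Gorenstein.
assert not is_pure(4, [{1, 2}, {1, 3}, {2, 3}, {3, 4}])
```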
The following lemma gives a necessary combinatorial condition for ${\rm Stab}_{\NZQ K}(G)$ to be nearly Gorenstein.
\begin{Lemma}
\label{perfect}
Let $G$ be a finite simple perfect graph such that ${\rm Stab}_{\NZQ K}(G)$ is nearly Gorenstein.
Then every connected component of $G$ is pure.
\end{Lemma}
\begin{proof}
Assume $V(G)=[n]$. Denote $R={\rm Stab}_{\NZQ K}(G)$.
Since each $x_i t$ as well as $t$ belongs to $R$, the quotient field of $R$ is the rational function field ${\NZQ K}(x_1, \ldots, x_n, t)$ over ${\NZQ K}$.
Suppose $G_1$ is a connected component of $G$ which is not pure.
Let $\delta$ and $\delta_1$ denote the maximal cardinality of cliques of $G$ and of $G_1$, respectively.
Then there is an edge $\{i_0, j_0\} \in E(G_1)$ for which $i_0$ belongs to a clique $C$ of $G$ with $|C| = \delta_1$ and for which $j_0$ belongs to {\em no} clique $C$ of $G$ with $|C| = \delta_1$.
Let $z=\prod_{i=1}^{n} x_i^{a'_i} t^{q'} \in \omega_R^{-1}$. Set $v_1=x_1\cdots x_n t^{\delta+1}$.
It is easy to check that $v_1\in \omega_R$ and that each monomial belonging to $\omega_R$ is divisible (in ${\NZQ K}[x_1,\dots, x_n, t]$) by $v_1$. Hence $a'_i \geq -1$ for all $i$. Clearly, $x_{j_0}v_1\in \omega_R$ and $1\neq x_{j_0}v_1z\in R$, hence $q'\geq -\delta$.
Since $G$ is not pure, $R$ is not a Gorenstein ring and thus
$$
\tr(\omega_R)=\omega_R\cdot \omega_R^{-1}={\frk m}_R.
$$
Let $w' = \prod_{i=1}^{n} x_i^{a'_i} t^{q'} \in \omega_R^{-1}$ and $w = \prod_{i=1}^{n} x_i^{a_i} t^{q} \in \omega_R$ with $w'w = x_{i_0}t$. Since $q' \geq - \delta$ and $q \geq \delta + 1$, one has $q' = - \delta$ and $q = \delta + 1$.
Let $v = x_1x_2\cdots x_n t^{\delta+1}\cdot (x_{i_0} x_{j_0})^{\delta-\delta_1}$. One has $v\in \omega_R$ and $x_{j_0}v\in \omega_R$.
We claim that $w'\cdot x_{j_0}v \in {\frk m}_R$ is divisible by $x_{i_0}x_{j_0}t$, but it is not divisible by $t^2$.
This is clear when $\delta>\delta_1$. In case $\delta=\delta_1$, since $i_0$ belongs to a clique $C$ of $G$ with $|C| = \delta$, one has $a_{i_0} = 1$. Thus $a'_{i_0} = 0$ and the claim is verified.
Thus $w'\cdot x_{j_0}v$ must be of the form $x^W t$, where $W$ is a stable set of $G$, which contradicts $\{i_0, j_0\} \in E(G)$. Hence ${\frk m}_R \subsetneq \tr(\omega_R)$, as desired.
\end{proof}
Recall that the $a$-invariant of any graded algebra $R$ with canonical module $\omega_R$ is defined as
$a(R)= -\min\{i: (\omega_R)_i\neq 0\}$.
\begin{Corollary}
\label{a-inv}
If $G$ is a perfect graph then $a({\rm Stab}_{\NZQ K}(G))=-\dim \Delta(G)-2$.
\end{Corollary}
\begin{proof}
Let $\delta$ be the maximal size of a clique in $G$.
From the proof of Lemma~\ref{perfect}, $v=x_1\cdots x_nt^{\delta+1}$ is in $(\omega_{{\rm Stab}_{\NZQ K}(G)})_{\delta+1}$ and $v$ divides every monomial in $\omega_{{\rm Stab}_{\NZQ K}(G)}$. Since $\dim \Delta(G)=\delta-1$, the conclusion follows.
\end{proof}
\begin{Theorem}
\label{ngstable}
Let $G$ be a finite simple graph with $G_1, \ldots, G_s$ its connected components and suppose that $G$ is perfect. Let $\delta_i$ denote the maximal cardinality of cliques of $G_i$. Then ${\rm Stab}_{\NZQ K}(G)$ is nearly Gorenstein if and only if each $G_i$ is pure and $|\delta_i - \delta_j| \leq 1$ for $1 \leq i < j \leq s$.
\end{Theorem}
\begin{proof}
Suppose that ${\rm Stab}_{\NZQ K}(G)$ is nearly Gorenstein. It follows from Lemma \ref{perfect} that each $G_i$ is pure and each ${\rm Stab}_{\NZQ K}(G_i)$ is Gorenstein. Since ${\rm Stab}_{\NZQ K}(G)$ is the Segre product of ${\rm Stab}_{\NZQ K}(G_1), \ldots, {\rm Stab}_{\NZQ K}(G_s)$, it follows from \cite[Corollary 4.16]{HHS} that
$$
|a({\rm Stab}_{\NZQ K}(G_i))-a({\rm Stab}_{\NZQ K}(G_j))|\leq 1 \text{ for all }i,j.
$$
Corollary~\ref{a-inv} yields $|\delta_i - \delta_j| \leq 1$ for $1 \leq i < j \leq s$. The ``if'' part also follows from \cite[Corollary 4.16]{HHS}.
\end{proof}
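The combined criterion of Theorem \ref{ngstable} (every component pure and the components' clique numbers differing by at most one) can likewise be tested by brute force on small graphs. The following Python sketch is ours and assumes, without checking, that the input graph is perfect:

```python
from itertools import combinations

def components(n, edges):
    """Connected components of a graph on {1,...,n}, by simple flooding."""
    adj = {v: set() for v in range(1, n + 1)}
    for i, j in (tuple(e) for e in edges):
        adj[i].add(j); adj[j].add(i)
    seen, comps = set(), []
    for v in range(1, n + 1):
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u); stack.extend(adj[u] - comp)
        seen |= comp; comps.append(comp)
    return comps

def maximal_clique_sizes(vertices, edges):
    vs = sorted(vertices)
    cliques = [set(c) for r in range(len(vs) + 1)
               for c in combinations(vs, r)
               if all({i, j} in edges for i, j in combinations(c, 2))]
    return {len(c) for c in cliques if not any(c < d for d in cliques)}

def nearly_gorenstein_criterion(n, edges):
    """Theorem: for perfect G, Stab_K(G) is nearly Gorenstein iff every
    component is pure and the clique numbers delta_i differ by <= 1.
    (Perfectness of the input is assumed, not verified.)"""
    data = [maximal_clique_sizes(comp, edges) for comp in components(n, edges)]
    if any(len(sizes) != 1 for sizes in data):   # some component not pure
        return False
    deltas = [next(iter(s)) for s in data]
    return max(deltas) - min(deltas) <= 1

# An edge (delta = 2) plus an isolated vertex (delta = 1): both components
# are pure and |2 - 1| <= 1, so the criterion holds.
assert nearly_gorenstein_criterion(3, [{1, 2}])
# A triangle plus an isolated vertex: |3 - 1| = 2, so the criterion fails.
assert not nearly_gorenstein_criterion(4, [{1, 2}, {1, 3}, {2, 3}])
```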
\begin{Corollary}
Let $G$ be a finite simple graph which is perfect and connected. Then the ring ${\rm Stab}_{\NZQ K}(G)$ is nearly Gorenstein if and only if it is Gorenstein.
\end{Corollary}
\medskip
{\bf Acknowledgement.} The first author was partially supported by JSPS KAKENHI 19H00637.
\medskip
\section{Introduction}
Supersymmetry is a well-studied and elegant extension of the Standard Model of particle physics. If supersymmetry is broken near the weak scale, not only is it possible to achieve gauge coupling unification, but the Hierarchy Problem is also addressed through cancellations of the quadratically divergent corrections to the Higgs mass squared. Furthermore, supersymmetric theories provide a host of natural particle candidates for dark matter. Specifically, we are interested in supersymmetric models in which the lightest neutralino is the lightest supersymmetric particle (LSP), and constitutes some or all of the observed dark matter in the universe. In many cases, neutralino dark matter is predicted to scatter on nuclei with a cross section large enough to be observed by current or next generation direct detection experiments.
Despite these virtues, supersymmetric theories often suffer from fine-tuning issues related to the fact that the Z mass can be calculated from supersymmetric parameters that may take values quite far from the weak scale. Here we address the question of what can be learned about fine-tuning from direct dark matter searches.
I begin by summarizing the results of Amsel, Freese, and Sandick (2011)~\cite{AFS}, in which ease of discoverability of neutralino dark matter in direct detection experiments, fine-tuning of electroweak symmetry breaking (EWSB), and the relationship between the two are explored for two supersymmetric scenarios in which there are large degrees of universality at the supersymmetric grand unification (GUT) scale. The conclusions reached are in agreement with those of Ref.~\cite{Perelstein:2011tg}, which investigated the MSSM with relevant parameters specified at the weak scale and with the assumption that neutralinos constitute all of the dark matter in the Universe. After presenting the results of~\cite{AFS}, I go on to explore the sensitivity of those conclusions to the mass of the Standard Model-like Higgs boson~\cite{higgsCMS,higgsATLAS} and the recently improved measurement of the branching ratio, $BR(B_s \rightarrow \mu^+ \mu^-)$~\cite{LHCbmm}.
The two scenarios studied are special cases of the Minimal Supersymmetric Standard Model (MSSM); namely, the constrained MSSM (CMSSM), and models with non-universal Higgs masses (NUHM). All CMSSM models can be described by four parameters and a sign: a universal mass for all gauginos, $M_{1/2}$, a universal mass for all scalars, $M_0$, a universal trilinear coupling, $A_0$, the ratio of the Higgs vacuum expectation values, $\tan \beta$, and the sign of the Higgs mass parameter, $\mu$. In NUHM scenarios, additional freedom in the Higgs sector is introduced by allowing the supersymmetry-breaking contributions to the effective masses of the up- and down-type Higgs scalars, $m_{H_u}$ and $m_{H_d}$, respectively, to differ from the universal mass taken by the supersymmetric scalar particles at the GUT scale, $M_0$.
After introducing the methodology and parameter space, the relationship between EWSB fine-tuning and discoverability of dark matter via neutralino-nucleon elastic scattering is discussed, with emphasis on the specific mechanisms by which the relic abundance of neutralino dark matter is reduced to cosmologically viable values in the CMSSM and the NUHM. Finally, the impact of updated constraints on the conclusions of~\cite{AFS} is addressed.
\section{The Parameter Space}
In Ref.~\cite{AFS}, we explore the parameter space of the CMSSM and NUHM by scanning over the relevant input parameters, applying constraints, and identifying trends in viable models. In both the CMSSM and the NUHM, we assume $\mu >0$ and scan the ranges $1 < \tan\beta < 60$ and $-12$ TeV $< A_{0} < 12$ TeV. In the CMSSM, we scan $0 < M_{0} < 4$ TeV and $0 < M_{1/2} < 2$ TeV while in NUHM space we take $0 < M_{0} < 3$ TeV, $0 < M_{1/2} < 2$ TeV\footnote{For NUHM models, our scan is more dense for $M_{1/2} < 1$ TeV. The difference in density of points does not affect our conclusions.}, and the GUT-scale Higgs scalar mass parameters $-3$ TeV $< M_{H_{u,d}}(M_{GUT}) < 3$ TeV. Since gaugino universality is assumed in both cases, the electroweak scale mass relations of $M_{1} : M_{2} : M_{3} \approx 1 : 2 : 6$ hold for both the CMSSM and the NUHM.
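Purely as an illustration of the setup, a random scan over the quoted CMSSM ranges might be sketched as follows; the paper does not specify its sampling scheme, so the flat distributions (and the function and key names) below are our assumption:

```python
import random

def sample_cmssm(rng):
    """Draw one CMSSM input point from the scan ranges quoted above;
    mu > 0 is fixed by assumption.  M0, M12, A0 in TeV."""
    return {
        "M0":     rng.uniform(0.0, 4.0),    # universal scalar mass
        "M12":    rng.uniform(0.0, 2.0),    # universal gaugino mass
        "A0":     rng.uniform(-12.0, 12.0), # universal trilinear coupling
        "tanb":   rng.uniform(1.0, 60.0),   # ratio of Higgs vevs
        "sgn_mu": +1,
    }

rng = random.Random(0)          # seeded for reproducibility
pts = [sample_cmssm(rng) for _ in range(1000)]
assert all(0.0 <= p["M0"] <= 4.0 and 1.0 <= p["tanb"] <= 60.0 for p in pts)
```

In a real scan each sampled point would then be passed to a spectrum calculator and kept only if it survives the constraints listed below.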
We begin by imposing a lower limit on the mass of the light CP-even Higgs boson, $m_{h} > 114$ GeV~\cite{LEPhiggs}. Accelerator bounds on SUSY parameters are enforced, including $m_{\tilde{\chi}_1^\pm} > 104$ GeV~\cite{Abbiendi:2003sc} and, following~\cite{Feldman:2009zc}, $m_{\tilde{t}_1, \tilde{\tau}_1} > 100$ GeV. We allow a $3\sigma$ range for the $BR(b\rightarrow s\gamma)$ as recommended by the Heavy Flavor Averaging Group~\cite{hfag}; accounting for the improved Standard Model calculation~\cite{bsgSM}, we take $2.77\times 10^{-4}< BR(b\rightarrow s\gamma) <4.27\times 10^{-4}$. We also demand $-11.4\times 10^{-10}< \delta(g_{\mu}-2)<9.4\times 10^{-9}$~\cite{Djouadi:2006be}. Finally, we require $BR( B_s \to \mu^{+}\mu^{-}) < 10^{-7}$ as measured by CDF~\cite{CDFbmumu}. After summarizing the results of~\cite{AFS} as obtained by implementing these constraints, we explore the sensitivity of our conclusions to the Higgs mass and the recently improved limit on $BR( B_s \to \mu^{+}\mu^{-})$.
For all models, we apply the $2\sigma$ upper limit on the relic abundance of neutralino dark matter\footnote{In~\cite{AFS} we also investigate the implications of requiring that the entire dark matter abundance is due to neutralinos.} of $\Omega_{\tilde{\chi}_1^0} h^{2} < 0.12$~\cite{WMAP7}. For bino-like neutralino LSPs, thermal freeze out typically results in an overabundance of dark matter. Regions of parameter space where the abundance falls in or below the observed range often involve one or more specific mechanisms that act to reduce the neutralino abundance. Examples of these mechanisms include coannihilations of LSPs with other supersymmetric particles and LSP annihilations at a Higgs pole. In the former case, it is necessary that a viable coannihilation candidate particle be nearly degenerate in mass with the neutralino LSP, while in the latter case, $s$-channel annihilations through $h$ or $A$ exchange are enhanced when $2m_{\tilde{\chi}_1^0} \approx m_{h,A}$. If the LSP has a significant higgsino admixture, it is possible for the relic density to be consistent with the measured abundance of dark matter even in the absence of coannihilations or a resonance. We make no a priori assumptions about the composition of the neutralino LSP, and we differentiate model categories by the mass relation obeyed by the relevant particles as listed in Table~\ref{tab:degeneracies}. We label the model categories by annihilation mechanisms; the named mechanism is usually, but not always, the primary one for producing the correct relic abundance. In some cases, models may satisfy more than one mass relation, while in other cases, none of the mass relations are satisfied. In the latter case, points that obey the limit on the relic density of dark matter are labeled ``other,'' and typically the LSP is a mixed bino-higgsino state.
\begin{table}[h]
\begin{tabular}{lc p{5mm}c lc}
\hline
\tablehead{1}{l}{b}{Model Category}
& \tablehead{1}{c}{b}{Mass Relation}
& \tablehead{1}{c}{b}{ }
& \tablehead{1}{l}{b}{Model Category}
& \tablehead{1}{c}{b}{Mass Relation}\\
\hline
Stop Coannihilation & $m_{\tilde{t}_1} - m_{\neut} < 0.2 m_{\neut} $ & & Light Higgs Pole & $| \frac{1}{2}m_h - m_{\neut} | < 0.1 m_{\neut} $ \\
Stau Coannihilation & $m_{\tilde{\tau}_1} - m_{\neut} < 0.2 m_{\neut} $ & & Heavy Higgs Pole & $| \frac{1}{2}m_A - m_{\neut} | < 0.1 m_{\neut} $ \\
Chargino Coannihilation & $m_{\tilde{\chi}_1^\mp} - m_{\neut} < 0.15 m_{\neut} $ & & & \\
\hline
\end{tabular}
\caption{Annihilation mechanisms as specified by mass relations. All model points that do not obey one of these mass relations are labeled ``other.''}
\label{tab:degeneracies}
\end{table}
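The thresholds of Table~\ref{tab:degeneracies} can be encoded as a simple classifier. This Python sketch (the function, argument names, and example point are ours) returns every mass relation a model point satisfies, or "other" if none holds:

```python
def mass_relations(m_chi, m_stop, m_stau, m_chargino, m_h, m_A):
    """Mass relations of Table 1 (thresholds copied from the table);
    a model point may satisfy several relations, or none ('other').
    All masses in GeV; m_chi is the neutralino LSP mass."""
    rels = {
        "stop coannihilation":     m_stop - m_chi < 0.2 * m_chi,
        "stau coannihilation":     m_stau - m_chi < 0.2 * m_chi,
        "chargino coannihilation": m_chargino - m_chi < 0.15 * m_chi,
        "light Higgs pole":        abs(0.5 * m_h - m_chi) < 0.1 * m_chi,
        "heavy Higgs pole":        abs(0.5 * m_A - m_chi) < 0.1 * m_chi,
    }
    satisfied = [name for name, ok in rels.items() if ok]
    return satisfied or ["other"]

# A 60 GeV LSP with m_h = 125 GeV sits on the light Higgs pole:
# |62.5 - 60| = 2.5 GeV < 0.1 * 60 GeV.
point = mass_relations(m_chi=60.0, m_stop=800.0, m_stau=300.0,
                       m_chargino=120.0, m_h=125.0, m_A=900.0)
assert point == ["light Higgs pole"]
```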
\begin{figure}
\includegraphics[width=.5\textwidth, height=65mm]{msugra_m0_mhf.pdf}
\includegraphics[width=.5\textwidth, height=65mm]{nuh_m0_mhf.pdf}
\caption{The $(M_{1/2},M_0)$ plane of the CMSSM (left) and the NUHM (right).
Models are color-coded by mass relation as described in the legend.} \label{fig:m0_mhf}\end{figure}
In Figure~\ref{fig:m0_mhf} we present the $(M_{1/2},M_0)$ plane of the CMSSM (left) and the NUHM (right). Of the mass relations plotted, some are more localized in the CMSSM plane than in the NUHM plane. For example, the $m_{\tilde{\chi}_1^0}\approx m_{\tilde{\chi}_1^\pm}$ points in the CMSSM all occur at large $M_0$ and small $M_{1/2}$ since that is the only region of the CMSSM plane where the neutralino LSP is significantly higgsino-like so that this near-degeneracy is possible. In the NUHM, however, the restriction that $m_{H_u}(M_{GUT})=m_{H_d}(M_{GUT})=M_0$ is relaxed, so there is considerably more freedom in the Higgs sector. As a result, the neutralino LSP may be higgsino-like in any region of the $(M_{1/2},M_0)$ plane. Indeed, there are $m_{\tilde{\chi}_1^0}\approx m_{\tilde{\chi}_1^\pm}$ points spread throughout the NUHM plane in the right panel of Fig.~\ref{fig:m0_mhf}.
\section{Fine-tuning}
Despite the successes of the MSSM, fine-tuning of the Z mass is a generic issue for supersymmetric models.
Neglecting loop corrections, the Z mass in the MSSM is given by
\begin{eqnarray}
\label{zmass}
m_Z^2 =
\frac{|m_{H_d}^2-m_{H_u}^2|}{\sqrt{1-\sin^2 2\beta}}-m_{H_d}^2-m_{H_u}^2-2|\mu|^2,
\end{eqnarray}
where all parameters are defined at $m_Z$.
Clearly, a cancellation among the terms on the right-hand side is required in order to obtain the measured value of $m_Z$. However, the typical values of these parameters can lie orders of magnitude above the weak scale.
As noted in~\cite{Ellis:1986yg} and~\cite{BarbieriGiudice}, the degree of fine-tuning may be quantified using log-derivatives. Here, we follow~\cite{Perelstein:2007nx} and compute the quantity
\begin{eqnarray}
A(\xi)\,=\,\left| \frac{\partial\log m_Z^2}{\partial\log \xi}\right|,
\label{eq:ftpars}
\end{eqnarray}
where $\xi=m_{H_u}^2$, $m_{H_d}^2$, $b$, and $\mu$ are the relevant Lagrangian parameters.
Then
\begin{eqnarray}
A(\mu) &=&
\frac{4\mu^2}{m_Z^2}\,\left(1+\frac{m_A^2+m_Z^2}{m_A^2} \tan^2 2\beta
\right), \nonumber \\ A(b) &=& \left( 1+\frac{m_A^2}{m_Z^2}\right)\tan^2
2\beta, \nonumber \\ A(m_{H_u}^2) &=& \left| \frac{1}{2}\cos2\beta
+\frac{m_A^2}{m_Z^2}\cos^2\beta
-\frac{\mu^2}{m_Z^2}\right|\times\left(1-\frac{1}{\cos2\beta}+
\frac{m_A^2+m_Z^2}{m_A^2} \tan^2 2\beta \right), \nonumber \\ A(m_{H_d}^2) &=&
\left| -\frac{1}{2}\cos2\beta +\frac{m_A^2}{m_Z^2}\sin^2\beta
-\frac{\mu^2}{m_Z^2}\right|\times\left|1+\frac{1}{\cos2\beta}+
\frac{m_A^2+m_Z^2}{m_A^2} \tan^2 2\beta \right|, \nonumber \\
\label{eq:deltapieces}
\end{eqnarray}
where it is assumed that $\tan\beta>1$.
The overall fine-tuning $\Delta$ is
defined as
\begin{eqnarray}
\Delta = \sqrt{A(\mu)^2+A(b)^2+A(m_{H_u}^2)^2+A(m_{H_d}^2)^2},
\label{eq:delta}
\end{eqnarray}
with values of $\Delta$ far above one indicating significant fine-tuning. Quantum corrections further contribute to the fine-tuning,
e.g.~the one-loop contribution to the $m_{H_u}^2$ parameter from top and stop loops. In this study, we compute the fine-tuning parameter, $\Delta$, accurate to at least one loop, as well as the cross sections for scattering of neutralino dark matter on nuclei, with MicrOMEGAs~\cite{micromegas}, employing the spectrum calculator SUSPECT~\cite{suspect}.
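A tree-level evaluation of Eqs.~(\ref{eq:ftpars}) through (\ref{eq:delta}), i.e. without the loop corrections included in the actual study via MicrOMEGAs and SUSPECT, can be sketched in Python (ours; masses in GeV):

```python
import math

def fine_tuning(mu, mA, tanb, mZ=91.19):
    """Tree-level fine-tuning measures A(xi) of Eq. (3) combined into
    Delta of Eq. (4); tan(beta) > 1 is assumed, as in the text."""
    beta = math.atan(tanb)
    t2b = math.tan(2 * beta)                 # tan(2 beta)
    c2b = math.cos(2 * beta)                 # cos(2 beta), negative here
    r = (mA**2 + mZ**2) / mA**2 * t2b**2
    A_mu = 4 * mu**2 / mZ**2 * (1 + r)
    A_b = (1 + mA**2 / mZ**2) * t2b**2
    A_Hu = abs(0.5 * c2b + mA**2 / mZ**2 * math.cos(beta)**2
               - mu**2 / mZ**2) * (1 - 1 / c2b + r)
    A_Hd = abs(-0.5 * c2b + mA**2 / mZ**2 * math.sin(beta)**2
               - mu**2 / mZ**2) * abs(1 + 1 / c2b + r)
    return math.sqrt(A_mu**2 + A_b**2 + A_Hu**2 + A_Hd**2)

# Delta grows rapidly with mu, the qualitative point made in the text.
assert fine_tuning(400, 500, 10) > fine_tuning(200, 500, 10) > 1
```

The numerical values produced by this tree-level sketch should not be compared directly with the figures, which include loop corrections.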
\section{Direct Dark Matter Searches}
For each viable model point, we calculate the spin-independent neutralino-nucleon elastic scattering cross section, $\sigma_{SI}$, to be compared with the limits from direct dark matter searches. Here, we focus on the XENON-100 limit~\cite{xenon100} and the projected sensitivity of the XENON-1T experiment~\cite{xenon1t}. For details of the calculation and relevant uncertainties, see~\cite{AFS,Falk:1998xj,Ellis:2008hf}.
In Fig.~\ref{fig:mchisigsi} we show $\sigma_{SI}$ as a function of LSP mass, $m_{\tilde{\chi}_1^0}$, for model points that pass all constraints, as in~\cite{AFS}. Points are color-coded by mass relation as indicated in the legend. The black (upper) and green (lower) curves in each panel represent the upper limit on $\sigma_{SI}$ from XENON-100 and the projected sensitivity of XENON-1T, respectively. We find that there is significantly more variation in $\sigma_{SI}$ in the NUHM than in the CMSSM, especially for $m_{\tilde{\chi}_1^0} \lesssim 150$ GeV or $m_{\tilde{\chi}_1^0} \gtrsim 700$ GeV. This is a straightforward consequence of the additional freedom in the Higgs sector in the NUHM for two reasons: First, since $\mu$ is fixed by the electroweak vacuum conditions, which are related to the Higgs scalar masses, the LSP can be made Higgsino-like for nearly all choices of $M_{1/2}$ and $M_0$ in the NUHM. Furthermore, in the NUHM it is possible to maintain nearly the measured value of the relic density of neutralinos even if they are largely higgsino-like. In the CMSSM, higgsino-like LSPs annihilate efficiently, resulting in very small $\Omega_{\tilde{\chi}_1^0}$, and would therefore be difficult to observe in a direct detection experiment. In the NUHM, however, the varied higgsino content leads to a much larger range of effective scattering cross sections. Second, since the mass of the heavy CP-even Higgs boson, H, is not constrained by the choice of $M_0$ in the NUHM as it is in the CMSSM, a larger range of $m_H$ is possible. Since $\sigma_{SI} \propto 1/m_H^4$ for scattering via Higgs exchange, a larger range of $\sigma_{SI}$ is therefore possible. Higgs masses are bounded from below by collider constraints, so the main effect is that since $m_H$ can be much larger in the NUHM than in the CMSSM, lower scattering cross sections are possible. These findings are consistent with those presented in~\cite{nuhm1}.
\begin{figure}
\includegraphics[width=65mm,height=63mm]{msugra_xenonbound.pdf}
\includegraphics[width=65mm, height=63mm]{nuh_xenonbound.pdf}
\caption{Spin-independent neutralino-nucleon elastic scattering cross section, $\sigma_{SI}$, as a function of neutralino mass $M_{\tilde{\chi}_1^0}$ for the CMSSM (left panels) and the NUHM (right panels). Model points are color-coded by mass relation as indicated in the legend. The limit on $\sigma_{SI}$ from XENON-100 and the projected sensitivity of XENON-1T are shown as black and green curves, respectively.
} \label{fig:mchisigsi}
\end{figure}
Returning to the question of the relationship between annihilation mechanism (mass hierarchy) and fine-tuning,
Figs.~\ref{fig:coann} and \ref{fig:poles} show the $(m_{\tilde{\chi}_1^0},\sigma_{SI})$ plane for a variety of subsets of our CMSSM and NUHM parameter spaces chosen by mass relation. Points in Figs.~\ref{fig:coann} and \ref{fig:poles} are color-coded by the value of $\Delta$, the fine-tuning parameter, for each model point.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=65mm, height=63mm]{msugra_xenonbound_tau.pdf} &
\includegraphics[width=65mm, height=63mm]{nuh_xenonbound_tau.pdf} \\
\includegraphics[width=65mm, height=63mm]{msugra_xenonbound_top.pdf} &
\includegraphics[width=65mm, height=63mm]{nuh_xenonbound_top.pdf}
\end{tabular}
\caption{Spin independent elastic scattering cross section, $\sigma_{SI}$, as a function of LSP mass, $M_{\tilde{\chi}_1^0}$, for the CMSSM (left panels) and the NUHM (right panels). Models are divided according to the mass relations as discussed in the text.
}
\label{fig:coann}
\end{figure}
\paragraph{Stau Coannihilation}
In the top left and right panels of Fig.~\ref{fig:coann} we show the points obeying the mass relation that roughly characterizes models for which neutralino-stau coannihilation is significant, namely $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\tau}_1}$, in the CMSSM and the NUHM, respectively. In both cases, the least fine-tuned models are the most accessible to direct detection experiments. In the CMSSM, there is a clear anti-correlation between $\sigma_{SI}$ and $\Delta$, but in the NUHM the relationship is less clear. Furthermore, in the CMSSM it appears that all cases with very light $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\tau}_1} \lesssim 180$ GeV would be accessible to XENON-1T\footnote{We note that statements regarding detectability in specific experiments depend on the strangeness content of the nucleon, which is not well-known and can change $\sigma_{SI}$ by a factor of a few~\cite{Ellis:2008hf}. We caution against a strict interpretation of whether a model point is detectable in a particular experiment, but use the sensitivity contours as general guidelines.}; however, this conclusion does not hold for the NUHM, where there is considerably more variation in both $\sigma_{SI}$ and $\Delta$.
\paragraph{Stop Coannihilation}
The bottom left and right panels of Fig.~\ref{fig:coann} show the model points obeying the mass relation $m_{\tilde{\chi}_1^0} \approx m_{\tilde{t}_1}$, in the CMSSM and the NUHM, respectively. In both the CMSSM and the NUHM these models are extremely fine-tuned with $\Delta > 1000$. The large fine-tuning comes from the fact that in order to get $m_{\tilde{t}_1}$ to be low enough to be close to the LSP mass, the running of $m_{\tilde{t}_1}$ from $M_0(M_{GUT})$ must be accelerated. This can be achieved with a large value of the top trilinear coupling, $|A_t| > 1$ TeV. These large values of $A_{t}$ also drive $m_{H_{u}}$ to be large and negative.
One can see from Eq.~\ref{zmass} that in order for EWSB to produce the observed value of $m_Z$, in the CMSSM, a large value of $\mu$ is then required.
Eqs.~\ref{eq:deltapieces} and~\ref{eq:delta} show the strong dependence of $\Delta$ on $\mu$. To illustrate this more clearly, consider the case of large $\tan\beta$, when $\Delta = \sqrt{5} \times \mu^2/m_Z^2 + O(1/\tan^2\beta)$. Clearly, large $\mu$ implies large $\Delta$. It is then clear why these models are all very fine-tuned in the CMSSM. The additional freedom in the Higgs sector of the NUHM admits somewhat smaller $\mu$, but overall the fine-tuning is uncomfortably large in both the CMSSM and the NUHM for models with $m_{\tilde{\chi}_1^0} \approx m_{\tilde{t}_1}$.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=65mm, height=63mm]{msugra_xenonbound_chi.pdf} &
\includegraphics[width=65mm, height=63mm]{nuh_xenonbound_chi.pdf} \\
\includegraphics[width=65mm, height=63mm]{msugra_xenonbound_higgs.pdf} &
\includegraphics[width=65mm, height=63mm]{nuh_xenonbound_higgs.pdf}
\end{tabular}
\caption{Spin independent elastic scattering cross section, $\sigma_{SI}$, as a function of LSP mass, $M_{\tilde{\chi}_1^0}$, for the CMSSM (left panels) and the NUHM (right panels). Models are divided according to the mass relations as discussed in the text.
}
\label{fig:poles}
\end{figure}
\paragraph{Chargino Coannihilation}
In the top left and right panels of Fig.~\ref{fig:poles} we show the points obeying the mass relation that roughly characterizes neutralino-chargino coannihilation, $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\chi}_1^\pm}$, in the CMSSM and the NUHM, respectively. Since the neutralino LSP is a mixture of bino, neutral wino, and higgsino states while the chargino is a mixture of charged wino and higgsino states, and since gaugino universality implies that the bino mass is always $\sim 1/2$ the wino mass, in order for $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\chi}_1^\pm}$ they must both have significant higgsino components. When $\mu < M_1$, $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\chi}_1^\pm} \approx \mu$, in which case heavier neutralinos and/or charginos will have larger $\mu$ and are therefore more fine-tuned, as is evident in the upper panels of Fig.~\ref{fig:poles} for both the CMSSM and the NUHM.
Since the LSP is a mixed bino-higgsino in these cases, $\sigma_{SI}$ is generally quite large. There is more variation in $\sigma_{SI}$ in the NUHM than in the CMSSM for the reasons discussed above related to the variation in higgsino content and $m_H$. In the CMSSM, all models in this category would be detectable by XENON-1T, while this is not quite the case in the NUHM.
\paragraph{Higgs Poles}
Finally, in the bottom panels of Fig.~\ref{fig:poles} we show models in which the LSP is nearly degenerate with the light or pseudoscalar Higgs for the CMSSM (left) and the NUHM (right). The light Higgs pole occurs where $2 m_{\tilde{\chi}_1^0} \approx m_h$, so $m_{\tilde{\chi}_1^0} \approx 60$ GeV. For such a light LSP, it is necessary that both $M_1$ and $\mu$ are small, so the fine-tuning is relatively low in these models. The heavy Higgs pole occurs where $2 m_{\tilde{\chi}_1^0} \approx m_A$. Again, because of the additional freedom in the Higgs sector in the NUHM, the parameter space for A-pole annihilations is larger than in the CMSSM, resulting in a larger range of $\sigma_{SI}$ in the NUHM than in the CMSSM. In the CMSSM, A-pole points at lower $m_{\tilde{\chi}_1^0}$ and with larger $\sigma_{SI}$, i.e.~the most accessible to direct dark matter searches, are the least fine-tuned. In the NUHM, that conclusion does not hold; points with $\Delta$ as small as a few$\times 10$ have cross sections that may not be accessible even to next generation direct detection experiments.
\begin{figure}[h]
\includegraphics[width=65mm, height=63mm]{msugra_bloodybone.pdf}
\includegraphics[width=65mm, height=63mm]{nuh_bloodybone.pdf}
\caption{Spin-independent neutralino-nucleon scattering cross section, $\sigma_{SI}$, as a function of fine-tuning parameter, $\Delta$, for the CMSSM (left) and
the NUHM (right). Color-coding indicates the mass relation obeyed as described in the legend.}
\label{fig:bloodybone}
\end{figure}
Finally, the left and right panels of Fig.~\ref{fig:bloodybone} show the spin-independent neutralino-nucleon elastic scattering cross section as a function of the fine-tuning parameter in the CMSSM and the NUHM, respectively. From the general downward slope of the points in the $(\Delta,\sigma_{SI})$ plane, it is evident that as $\Delta$ becomes large,
$\sigma_{SI}$ tends to decrease in both the CMSSM and the NUHM.
This is related to the fact that large $\Delta$ implies large $\mu$, which, all other factors being fixed, would result in a more bino-like LSP. Especially in the CMSSM, the least fine-tuned models tend to be the easiest to rule out, with the general trend that increasing sensitivity to $\sigma_{SI}$ will test increasingly fine-tuned models.
In the NUHM, the relation between $\sigma_{SI}$ and fine-tuning does not hold as clearly. For example, models with small fine-tuning and $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\chi}_1^\pm}$ may be much more difficult to discover via direct dark matter searches if we are in an NUHM scenario than if the CMSSM is an adequate description of nature. Given the additional freedom in the Higgs sector of the NUHM, it is perhaps surprising that the CMSSM and the NUHM exhibit as many similarities as they do.
\section{Implications of Higgs searches and the limit on BR$(B_s \rightarrow \mu^+ \mu^-)$}
Thus far we have summarized the results of~\cite{AFS}. Since the publication of~\cite{AFS}, however, there have been two measurements that have a profound effect on the parameter space of the CMSSM and the NUHM: First, the LHCb Collaboration placed a stringent limit on the branching ratio, $BR(B_s \rightarrow \mu^+ \mu^-) < 4.5 \times 10^{-9}$ at the 95\% confidence level~\cite{LHCbmm}. This branching ratio is enhanced at large $\tan\beta$, a general feature of models excluded by the new constraint. The second important measurement is the discovery of a new particle with properties consistent with those expected of a Standard Model-like Higgs boson with a mass of $\sim 125$ GeV~\cite{higgsCMS,higgsATLAS}. This relatively large mass, within the CMSSM or NUHM, favors large $A_0$ and large $\tan\beta$~\cite{Ellis:2012aa}.
To investigate the implications of these recent developments for our results, we demand $BR(B_s \rightarrow \mu^+\mu^-) < 4.5 \times 10^{-9}$ and $123$ GeV $ < m_h < 127$ GeV. In Fig.~\ref{fig:m0mhf2}, we show the remaining model points in the CMSSM (left) and the NUHM (right) after implementing these constraints. Comparison with Fig.~\ref{fig:m0_mhf} shows that, especially in the CMSSM, the combined effect of the limit on the $BR(B_s\rightarrow \mu^+\mu^-)$ and $m_h \approx 125$ GeV leaves few viable model points. We note that these data were generated prior to indications of a large Standard Model-like Higgs mass, so the parameter ranges were not chosen to optimize the parameter space with $m_h \approx 125$ GeV. The indication, however, is that the fraction of surviving models is far greater for the NUHM than the CMSSM.
For example, one can see from Fig.~\ref{fig:m0mhf2} that in the CMSSM no points with a substantial higgsino fraction survive. The primary cosmologically-viable region of CMSSM parameter space where the neutralino LSP is significantly higgsino-like is known as the focus point region, where $\mu$ is small and $M_0$ is large. For $M_0 < 4$ TeV as we explore here, we do not find any model points compatible with $m_h \approx 125$ GeV, though the possibility of mixed bino-higgsino dark matter and large enough $m_h$ at larger $M_0$ still exists~\cite{Ellis:2012aa,CMSSMhiggs}. Among NUHM models, by contrast, there are many surviving model points with $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\chi}_1^\pm}$, where the LSP is a mixed bino-higgsino state.
A striking feature in both the CMSSM and NUHM $(M_{1/2},M_0)$ planes is that the light Higgs pole appears to have been excluded. The constraint on the mass of the Standard Model-like Higgs leads to a dearth of viable models at low $M_{1/2}$ in both the CMSSM and the NUHM.
\begin{figure}
\includegraphics[width=.5\textwidth, height=70mm]{cmssm_fig5_CETUP-eps-converted-to.pdf}
\includegraphics[width=.5\textwidth, height=70mm]{nuhm_fig5_CETUP-eps-converted-to.pdf}
\caption{The $(M_{1/2},M_0)$ plane of the CMSSM (left) and the NUHM (right) after applying $BR(B_s \rightarrow \mu^+\mu^-) < 4.5 \times 10^{-9}$ and $123$ GeV $ < m_h < 127$ GeV.
Models are color-coded by mass relation as described in the legend.}
\label{fig:m0mhf2}
\end{figure}
Turning to the direct detection prospects, the upper panels of Fig.~\ref{fig:higgsbmm} reveal that in the CMSSM, the models most likely to be discovered by direct dark matter searches are already excluded by the measurement of the Higgs mass and the $BR(B_s \rightarrow \mu^+ \mu^-)$, while in the NUHM, models with $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\chi}_1^\pm}$, as well as some scenarios in which $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\tau}_1}$ and/or annihilations are enhanced by the A-pole, may be accessible to XENON-1T or a similar next generation direct dark matter experiment. Finally, the lower panels of Fig.~\ref{fig:higgsbmm} illustrate that in the CMSSM, the least fine-tuned models have already been excluded; remaining CMSSM points have large fine-tuning and small spin-independent neutralino-nucleon elastic scattering cross sections. NUHM scenarios, by contrast, still allow for the possibility of low fine-tuning and large enough spin-independent neutralino-nucleon elastic scattering cross sections that there is still reason to be optimistic about the prospects for discovery in next generation direct detection experiments such as XENON-1T.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=65mm, height=63mm]{cmssm_fig2_CETUP-eps-converted-to.pdf} &
\includegraphics[width=65mm, height=63mm]{nuhm_fig2_CETUP-eps-converted-to.pdf} \\
\includegraphics[width=65mm, height=63mm]{cmssm_fig3_CETUP-eps-converted-to.pdf} &
\includegraphics[width=65mm, height=63mm]{nuhm_fig3_CETUP-eps-converted-to.pdf}
\end{tabular}
\caption{Spin-independent neutralino-nucleon elastic scattering cross section, $\sigma_{SI}$, as a function of neutralino mass, $m_{\tilde{\chi}_1^0}$, (top panels) and fine-tuning parameter, $\Delta$, (bottom) for the CMSSM (left panels) and the NUHM (right panels). Model points are color-coded by mass hierarchy as indicated in the legend.}
\label{fig:higgsbmm}
\end{figure}
\section{Summary}
The relationship between the degree of fine-tuning and discoverability of dark matter in current and next generation
direct detection experiments has been investigated. In~\cite{AFS} it was found that there is considerably more variation in the spin-independent neutralino-nucleon elastic scattering cross section in the NUHM than in the CMSSM, and as a result there is less correlation between the degree of fine-tuning and direct detection prospects in the NUHM than in the CMSSM. The least fine-tuned CMSSM model points would be the first probed by direct dark matter searches. In the NUHM, this conclusion is only approximate.
The relationship between the degree of fine-tuning and the discoverability of dark matter in direct detection experiments was also examined in light of the specific mechanism(s) by which the relic abundance of neutralino dark matter is suppressed to cosmologically viable values. These mechanisms are modeled by considering mass relations roughly indicative of the regions of parameter space in which a particular mechanism would act to significantly enhance the dark matter annihilation rate in the early universe. Models with $m_{\tilde{\chi}_1^0}\approx m_{\tilde{\chi}_1^\pm}$ may have low fine-tuning and a large spin-independent neutralino-nucleon elastic scattering cross section. For $m_{\tilde{\chi}_1^0} \sim m_{\tilde{\tau}_1}$, in the CMSSM most cases with very light $m_{\tilde{\chi}_1^0} \approx m_{\tilde{\tau}_1} \lesssim 200$ GeV would be accessible at XENON-1T or a similar experiment. For the NUHM, however, it is possible that the lightest neutralino has $\sigma_{SI} \lesssim 10^{-12}$ pb for a large range of $m_{\tilde{\chi}_1^0}$. For the case of $m_{\tilde{\chi}_1^0} \sim m_{\tilde{t}_1}$, it is clear that the neutralino dark matter would not be discoverable even with an experiment like XENON-1T; furthermore, almost all of the points in this case are quite fine-tuned, with $\Delta > 1000$.
Finally, we investigated the impact on our results of the discovery of a new particle consistent with a Standard Model-like Higgs boson with a mass of $\sim125$ GeV as well as the improvement in the limit on the $BR(B_s \rightarrow \mu^+ \mu^-)$.
We find that these two constraints already exclude the least fine-tuned CMSSM points and that remaining viable parameter space may be difficult to probe with next generation direct dark matter searches. However, relatively low fine-tuning and good direct detection prospects are still possible in NUHM scenarios.
\begin{theacknowledgments}
P.S. thanks Katherine Freese and Steven Amsel for fruitful collaboration and the Center for Theoretical Underground Physics and Related Areas (CETUP* 2012) in South Dakota for its hospitality and for partial support during the completion of this work.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Computation of the anisotropic $2^{nd}$ Piola-Kirchhoff tensor \label{PKtensoraniso_computation_app}}
The transformation is considered reversible under the quasi-static assumption, so the $2^{nd}$ Piola-Kirchhoff tensor $\tens{\Sigma}$ is as in equation \ref{PK_VS_psi}: $\displaystyle \tens{\Sigma} = 2 \partialderiv{\psi}{\tens{C}}$.
For the anisotropic part, this translates into equation \ref{PK_aniso_dpsidC} when the derivative is decomposed with respect to the fiber stretch $\lambda$: $\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso}} = 2* \partialderiv{\textcolor{vertjoli}{\psi^{lam}}}{\tens{C}} = 2* \partialderiv{\textcolor{vertjoli}{\psi^{lam}}}{\lambda} \partialderiv{\lambda}{\lambda^2} \partialderiv{\lambda^2}{\tens{C}}$.
With the energy function defined in equation \ref{psi_aniso_continuous}: \\
$\displaystyle \textcolor{vertjoli}{\psi^{lam}} \vertjoliwriting{ = \int_{\theta = 0}^{\pi} \int_{\phi = 0}^{2 \pi}{ (\textcolor{vertjoli}{\rho_{1}} (\theta, \phi) \textcolor{vertjoli}{\delta \psi^{lam}_{1}} (\theta, \phi) + \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) \textcolor{vertjoli}{\delta \psi^{lam}_{2}} (\theta, \phi) ) \sin \theta \text{d} \theta \text{d} \phi} }$
The only quantities that depend on $\tens{C}$, and therefore on $\lambda$, in \ref{psi_aniso_continuous} are $\textcolor{vertjoli}{\delta \psi^{lam}_{1}}$ and $\textcolor{vertjoli}{\delta \psi^{lam}_{2}}$, so they are the ones to be differentiated with respect to $\lambda$. Furthermore, the terms for fiber one and fiber two are independent and can be treated separately, so $\vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}}$ and $\vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}}$ can be defined as:
\begin{equation}
\begin{cases}
\displaystyle \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} = 2* \partialderiv{\textcolor{vertjoli}{\delta \psi^{lam}_{1}} }{\lambda} \partialderiv{\lambda}{\lambda^2} \partialderiv{\lambda^2}{\tens{C}} \\
\displaystyle \vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}} = 2* \partialderiv{\textcolor{vertjoli}{\delta \psi^{lam}_{2}} }{\lambda} \partialderiv{\lambda}{\lambda^2} \partialderiv{\lambda^2}{\tens{C}} \\
\end{cases}
\label{dPiola_dpsi}
\end{equation}
And so $\vertjoliwriting{\tens{\Sigma}_{aniso}}$ can be computed with the integral \ref{PK_aniso_continuous_app}:
\begin{equation}
\vertjoliwriting{\tens{\Sigma}_{aniso}} \vertjoliwriting{ = \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi}{(\textcolor{vertjoli}{\rho_{1}} (\theta, \phi) \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} (\theta, \phi) + \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) \vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}} (\theta, \phi) ) \sin \theta \text{d} \theta \text{d} \phi} }
\label{PK_aniso_continuous_app}
\end{equation}
Thanks to the definitions of the fiber tractions $t_{0,1}$ and $t_{0,2}$ in \ref{t1_t2_VS_psi1_psi2}, it turns out that:
\begin{center}
$
\begin{cases}
\displaystyle \partialderiv{\textcolor{vertjoli}{\delta \psi^{lam}_{1}}}{l} = t_{0,1} = k_1 ( \frac{ \lambda}{\lambda_{u,1}} - 1 )_{+} + t_{u,1} \\
\displaystyle \partialderiv{\textcolor{vertjoli}{\delta \psi^{lam}_{2}}}{l} = t_{0,2} = k_2 (\frac{\lambda}{\lambda_{u,2}} -1)_{+} + t_{u,2}
\end{cases}
\text{with} \quad \lambda(\theta, \phi)^2 = \vect{r_0}(\theta, \phi).\tens{C}.\vect{r_0}(\theta, \phi)
$
\end{center}
The two other useful partial derivatives are given in equation \ref{useful_partial_derivative_app}:
\begin{equation}
\partialderiv{\lambda}{\lambda^2} = \frac{1}{2\lambda} \quad \text{and} \quad \partialderiv{\lambda^2}{\tens{C}} = \vect{r_0} \otimes \vect{r_0}
\label{useful_partial_derivative_app}
\end{equation}
The computation of $\displaystyle \partialderiv{\lambda^2}{\tens{C}} = \vect{r_0} \otimes \vect{r_0}$ in Cartesian coordinates is detailed in equation \ref{dlsquare_dC}.
First, the expression of $\lambda(\theta, \phi)^2$ in Cartesian coordinates is needed; it is given in \ref{elongation_matric_app}.
\begin{equation}
\displaystyle \lambda(\theta, \phi)^2 = \vect{r_0}(\theta, \phi).\tens{C}.\vect{r_0}(\theta, \phi) = C_{xx} r^2_x + C_{yy}r^2_y + C_{zz}r^2_z + (C_{xy}+C_{yx}) r_x r_y+ (C_{yz} +C_{zy})r_y r_z+ (C_{xz}+C_{zx}) r_x r_z
\label{elongation_matric_app}
\end{equation}
\begin{equation}
\displaystyle \partialderiv{\lambda^2}{\tens{C}} =
\begin{pmatrix}
\displaystyle \frac{\partial (\lambda^2)}{\partial C_{xx} } & \displaystyle \frac{\partial (\lambda^2)}{\partial C_{xy} } & \displaystyle \frac{\partial (\lambda^2)}{\partial C_{xz} } \\
\displaystyle \frac{\partial (\lambda^2)}{\partial C_{xy} } & \displaystyle \frac{\partial (\lambda^2)}{\partial C_{yy} } & \displaystyle \frac{\partial (\lambda^2)}{\partial C_{yz} } \\
\displaystyle \frac{\partial (\lambda^2)}{\partial C_{xz} } & \displaystyle \frac{\partial (\lambda^2)}{\partial C_{yz} } & \displaystyle \frac{\partial (\lambda^2)}{\partial C_{zz} } \\
\end{pmatrix}
=
\begin{pmatrix}
r^2_x & r_x r_y & r_x r_z\\
r_x r_y & r^2_y & r_y r_z\\
r_x r_z & r_y r_z & r^2_z \\
\end{pmatrix}
= \vect{r_0} \otimes \vect{r_0}
\label{dlsquare_dC}
\end{equation}
Finally, when everything is assembled, $\vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}}$ and $\vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}}$ can be computed as in equation \ref{dPiola_aniso_app}:
\begin{equation}
\begin{cases}
\displaystyle \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} = 2* \partialderiv{\textcolor{vertjoli}{\delta \psi^{lam}_{1}} }{\lambda} \partialderiv{\lambda}{\lambda^2} \partialderiv{\lambda^2}{\tens{C}} = 2* l_{0,1} t_{0,1} \frac{1}{2\lambda} \vect{r_0} \otimes \vect{r_0} = \frac{l_{0,1}}{\lambda} (k_1 ( \frac{ \lambda}{\lambda_{u,1}} - 1 )_{+} + t_{u,1}) \vect{r_0} \otimes \vect{r_0} \\
\displaystyle \vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}} = 2* \partialderiv{\textcolor{vertjoli}{\delta \psi^{lam}_{2}} }{\lambda} \partialderiv{\lambda}{\lambda^2} \partialderiv{\lambda^2}{\tens{C}} = \displaystyle 2* l_{0,2} t_{0,2} \frac{1}{2\lambda} \vect{r_0} \otimes \vect{r_0} = \frac{l_{0,2}}{\lambda} (k_2 ( \frac{ \lambda}{ \lambda_{u,2}} - 1 )_{+} + t_{u,2}) \vect{r_0} \otimes \vect{r_0} \\
\end{cases}
\label{dPiola_aniso_app}
\end{equation}
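A minimal numerical illustration of the closed form in equation \ref{dPiola_aniso_app} for a single fiber direction; the material parameters below ($k_1$, $\lambda_{u,1}$, $t_{u,1}$, $l_{0,1}$) are hypothetical values, not taken from the paper:

```python
# Minimal illustration of the closed form for delta Sigma_aniso,1 along one
# direction: a rank-one tensor along r0 (x) r0, scaled by the fiber traction
# with the positive-part bracket (.)_+. Parameter values are hypothetical.
import numpy as np

k1, lam_u1, t_u1, l01 = 5.0, 1.05, 0.1, 1.0
r0 = np.array([1.0, 0.0, 0.0])            # fiber direction
C = np.diag([1.44, 1.0, 1.0])             # uniaxial stretch lambda = 1.2 along r0

lam = float(np.sqrt(r0 @ C @ r0))         # lambda^2 = r0 . C . r0
t01 = k1 * max(lam / lam_u1 - 1.0, 0.0) + t_u1   # traction, active since lam > lam_u1
dSigma = (l01 / lam) * t01 * np.outer(r0, r0)
print(lam, dSigma[0, 0])                  # lambda = 1.2; only the xx entry is nonzero
```

With these values the bracket is active ($\lambda = 1.2 > \lambda_{u,1}$), and only the $\vect{r_0} \otimes \vect{r_0}$ component of the stress contribution is nonzero.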
\section{Computation of the quadrature used in the micro-sphere model \label{quadrature_microsphere_section} }
\subsection{Quadrature used in the FEM code to calculate the fiber contribution \\}
At each Gauss point, the integral \ref{PK_aniso_app} has to be computed:
\begin{equation}
\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso}} \vertjoliwriting{ = \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi}{(\textcolor{vertjoli}{\rho_{1}} (\theta, \phi) \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} (\theta, \phi) + \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) \vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}} (\theta, \phi) ) \sin \theta \text{d} \theta \text{d} \phi} }
\label{PK_aniso_app}
\end{equation}
In a classical FEM code, the tangent also has to be computed. The tangent is the derivative of the 2$^{nd}$ Piola-Kirchhoff tensor with respect to $\tens{e}$, i.e. the second derivative of $\psi$ with respect to $\tens{e}$. With the same arguments as in appendix \ref{PKtensoraniso_computation_app}, the tangent can be computed as in equation \ref{tangent_app}:
\begin{equation}
\vertjoliwriting{\bigtens{T}_{aniso}} \vertjoliwriting{ = \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi}{(\textcolor{vertjoli}{\rho_{1}} (\theta, \phi) \vertjoliwriting{\delta \bigtens{T}_{aniso,1}} (\theta, \phi) + \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) \vertjoliwriting{\delta \bigtens{T}_{aniso,2}} (\theta, \phi) ) \sin \theta \text{d} \theta \text{d} \phi} }
\label{tangent_app}
\end{equation}
where:
\begin{equation}
\begin{cases}
\displaystyle \vertjoliwriting{\delta \bigtens{T}_{aniso,1}} =
\begin{cases}
\displaystyle \frac{k_1 - t_{u,1}}{\lambda^3} \vect{r_0} \otimes \vect{r_0} \otimes \vect{r_0} \otimes \vect{r_0} \quad \text{if} \quad \lambda > \lambda_{u,1} \\
\displaystyle \frac{- t_{u,1}}{\lambda^3} \vect{r_0} \otimes \vect{r_0} \otimes \vect{r_0} \otimes \vect{r_0} \quad \text{if not} \\
\end{cases} \\
\displaystyle \vertjoliwriting{\delta \bigtens{T}_{aniso,2}} =
\begin{cases}
\displaystyle \frac{k_2 - t_{u,2}}{\lambda^3} \vect{r_0} \otimes \vect{r_0} \otimes \vect{r_0} \otimes \vect{r_0} \quad \text{if} \quad \lambda > \lambda_{u,2} \\
\displaystyle \frac{- t_{u,2}}{\lambda^3} \vect{r_0} \otimes \vect{r_0} \otimes \vect{r_0} \otimes \vect{r_0} \quad \text{if not} \\
\end{cases} \\
\end{cases}
\label{dtangent}
\end{equation}
First of all, the integrals can be reduced thanks to the $\pi$-periodicity in $\phi$ of the fiber densities:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso}} \vertjoliwriting{ = \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}{(\textcolor{vertjoli}{\rho_{1}} (\theta, \phi) \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} (\theta, \phi) + \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) \vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}} (\theta, \phi) ) \sin \theta \text{d} \theta \text{d} \phi} }$
\end{center}
and
\begin{center}
$
\vertjoliwriting{\bigtens{T}_{aniso}} \vertjoliwriting{ = \displaystyle \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}{(\textcolor{vertjoli}{\rho_{1}} (\theta, \phi) \vertjoliwriting{\delta \bigtens{T}_{aniso,1}} (\theta, \phi) + \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) \vertjoliwriting{\delta \bigtens{T}_{aniso,2}} (\theta, \phi) ) \sin \theta \text{d} \theta \text{d} \phi} }
$
\end{center}
Then, the density functions following equation \ref{density_functions}, recalled below:
\begin{center}
$\begin{cases}
\displaystyle \textcolor{vertjoli}{\rho_{1}} (\theta, \phi) = K_1 e^{(\sigma_{p, fib1}\cos(2(\phi-\mu_{fib1})) + \sigma_{t, fib1}\cos(2(\theta-\nu_{fib1})))}, \quad \text{with: } K_1 = \frac{1}{K_{fib1}} \\
\\
\displaystyle \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) = K_2 e^{(\sigma_{p, fib2}\cos(2(\phi-\mu_{fib2})) + \sigma_{t, fib2}\cos(2(\theta-\nu_{fib2})))} , \quad \text{with: } K_2 = \frac{1}{K_{fib2}}
\end{cases}$
\end{center}
are subject to several assumptions:
\begin{itemize}
\item the fiber coordinate system is chosen such that $\vect{e_x^{fib}}$ coincides with the direction of the first fiber family. It implies that:
\begin{itemize}
\item $\displaystyle \rho_{fib1} (\theta, \phi) = K_1 e^{\sigma_{p,1}\cos(2\phi) + \sigma_{t,1}\cos(2(\theta-\nu_{1})) }$
\item $\displaystyle \rho_{fib2} (\theta, \phi) = K_2 e^{\sigma_{p,2}\cos(2(\phi+\mu_{1}-\mu_{2})) + \sigma_{t,2}\cos(2(\theta-\nu_{2})) }$.
\end{itemize}
\item The fiber coordinate system is chosen such that the two families of fibers lie mainly in the plane $(\vect{e}_x^{lam},\vect{e}_y^{lam} )$ (this is part of the definition of the coordinate system, as $\vect{e}_x^{lam}$ corresponds to the first fiber direction and $\vect{e}_y^{lam}$ corresponds to the second fiber direction). It implies that $\nu_1 = \nu_2 = \pi/2$ and so:
\begin{itemize}
\item $\displaystyle \rho_{fib1} (\theta, \phi) = K_1 e^{\sigma_{p,1}\cos(2\phi) + \sigma_{t,1}\cos(2(\theta-\pi/2)) } = K_1 e^{\sigma_{p,1}\cos(2\phi) - \sigma_{t,1}\cos(2\theta) } $
\item $\displaystyle \rho_{fib2} (\theta, \phi) = K_2 e^{\sigma_{p,2}\cos(2(\phi+\mu_{1}-\mu_{2})) + \sigma_{t,2}\cos(2(\theta-\pi/2)) } = K_2 e^{\sigma_{p,2}\cos(2(\phi+\mu_{1}-\mu_{2})) - \sigma_{t,2}\cos(2\theta) }$.
\end{itemize}
\end{itemize}
From now on, the integrals that need to be computed are:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso}} \textcolor{vertjoli}{= \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}K_1 e^{\sigma_{p,1}\cos(2\phi) - \sigma_{t,1}\cos(2\theta) } \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} + K_2 e^{\sigma_{p,2}\cos(2(\phi+\mu_{1}-\mu_{2})) - \sigma_{t,2}\cos(2\theta) } \vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}} \sin\theta\text{d}\theta \text{d}\phi}$
\end{center}
and
\begin{center}
$
\vertjoliwriting{\bigtens{T}_{aniso}} \vertjoliwriting{ = \displaystyle \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}{(K_1 e^{\sigma_{p,1}\cos(2\phi) - \sigma_{t,1}\cos(2\theta) } \vertjoliwriting{\delta \bigtens{T}_{aniso,1}} (\theta, \phi) + K_2 e^{\sigma_{p,2}\cos(2(\phi+\mu_{1}-\mu_{2})) - \sigma_{t,2}\cos(2\theta) } \vertjoliwriting{\delta \bigtens{T}_{aniso,2}} (\theta, \phi) ) \sin \theta \text{d} \theta \text{d} \phi} }
$
\end{center}
Each of these can be decomposed into two parts, all of which can be treated in the same way:
\begin{itemize}
\item $\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} = \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}K_1 e^{\sigma_{p,1}\cos(2\phi) - \sigma_{t,1}\cos(2\theta) } \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi) } \sin \theta \text{d} \theta \text{d} \phi$
\item $\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,2}} = \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}K_2 e^{\sigma_{p,2}\cos(2(\phi+\mu_{1}-\mu_{2})) - \sigma_{t,2}\cos(2\theta) } \vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}} \textcolor{vertjoli}{(\theta, \phi) } \sin \theta \text{d} \theta \text{d} \phi$
\item $\displaystyle \vertjoliwriting{\bigtens{T}_{aniso,1}} = \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi} K_1 e^{\sigma_{p,1}\cos(2\phi) - \sigma_{t,1}\cos(2\theta) } \vertjoliwriting{\delta \bigtens{T}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi) } \sin \theta \text{d} \theta \text{d} \phi$
\item $\displaystyle \vertjoliwriting{\bigtens{T}_{aniso,2}} = \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}K_2 e^{\sigma_{p,2}\cos(2(\phi+\mu_{1}-\mu_{2})) - \sigma_{t,2}\cos(2\theta) } \vertjoliwriting{\delta \bigtens{T}_{aniso,2}} \textcolor{vertjoli}{(\theta, \phi) } \sin \theta \text{d} \theta \text{d} \phi$
\end{itemize}
The quadratures are chosen based not on the functions that change between the integrals of equations \ref{PK_aniso_app} and \ref{tangent_app}, but on the factors that do not change: the densities $\textcolor{vertjoli}{\rho_{1}}$ and $\textcolor{vertjoli}{\rho_{2}}$. Furthermore, the terms depending on the two fibers are independent. So, $\vertjoliwriting{\tens{\Sigma}_{aniso,1}}$ is treated as an example, but the procedure is the same for the other terms to be computed ($\vertjoliwriting{\tens{\Sigma}_{aniso,2}}$, $\vertjoliwriting{\bigtens{T}_{aniso,1}}$ and $\vertjoliwriting{\bigtens{T}_{aniso,2}}$).
Here is the computation for $\vertjoliwriting{\tens{\Sigma}_{aniso,1}}$:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} = \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}K_1 e^{\sigma_{p,1}\cos(2\phi) - \sigma_{t,1}\cos(2\theta) } \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi) } \sin \theta \text{d} \theta \text{d} \phi$
\end{center}
The integral can be separated in $\theta$ and $\phi$:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}}= K_1 \int_{\theta=0}^{\pi} e^{ - \sigma_{t,1}\cos(2\theta)} (\int_{\phi=0}^{\pi}e^{\sigma_{p,1}\cos(2\phi)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi) } \text{d}\phi) \sin\theta\text{d}\theta$
\end{center}
\subsubsection{Quadrature in $\phi$ \label{phi_quadrature}\\}
First the integral in $\phi$ of equation \ref{Ione_integral_phi} is considered:
\begin{equation}
\displaystyle I_1 = \int_{\phi=0}^{\pi}e^{\sigma_{p,1}\cos(2\phi)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi) } \text{d}\phi
\label{Ione_integral_phi}
\end{equation}
The function $f_1$ is defined as: $\displaystyle f_1(\phi) = e^{\sigma_{p,1}\cos(2\phi)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi) } $
and then $I_1$ becomes
\begin{center}
$\displaystyle I_1 = \int_{\phi=0}^{\pi}f_1(\phi) \text{d}\phi$
\end{center}
For the integral in $\phi$, a uniformly distributed weight is used because a rather smooth distribution of $f_1 = e^{\sigma_{p,1}\cos(2\phi)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} (\theta, \phi) $ in terms of $\phi$ is expected.
The function is integrated over the segment $[0, \pi]$ with a uniform distribution of the angle $\phi$: $\displaystyle \forall i \in [1,m], \phi_i = \frac{(i-1)\pi}{m} $, with $m$ the number of equatorial in-plane discretization points.\\
Then, the rectangle method is used to calculate the integral in $\phi$:
\begin{center}
$\displaystyle I_1 \approx \sum_{i=1}^m (\phi_{i+1}-\phi_i)f_1(\phi_i) \approx \frac{1}{m} \sum_{i=1}^m f_1(\phi_i) \approx \frac{1}{m} \sum_{i=1}^m e^{\sigma_{p,1}\cos(2\phi_i)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi_i) } $
\end{center}
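The rectangle rule above can be sketched with a scalar integrand standing in for the tensor one (an illustration with an arbitrary $\sigma_{p,1}$, not the paper's code). The quadrature weight is written as the explicit step length $\phi_{i+1}-\phi_i = \pi/m$; any constant factor common to $\vertjoliwriting{\tens{\Sigma}_{aniso,1}}$ and $1/K_1$ cancels after normalization, since the same rule is used for both.

```python
# Sketch of the uniform rectangle rule for the phi-integral, with a scalar
# integrand standing in for the tensor one (sigma_p value is illustrative).
# Nodes phi_i = (i-1)*pi/m, step phi_{i+1} - phi_i = pi/m.
import math

sigma_p = 1.5
f1 = lambda phi: math.exp(sigma_p * math.cos(2 * phi))

def rectangle(m):
    return (math.pi / m) * sum(f1((i - 1) * math.pi / m) for i in range(1, m + 1))

ref = rectangle(20000)                 # fine reference value
for m in (4, 8, 16, 32):
    print(m, abs(rectangle(m) - ref))  # error drops quickly: f1 is pi-periodic
```

Because the integrand is smooth and $\pi$-periodic, uniform sampling over one period converges very fast, which justifies the "rather smooth distribution" argument above.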
At this point, $\vertjoliwriting{\tens{\Sigma}_{aniso,1}}$ is:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} = K_1 \int_{\theta=0}^{\pi} e^{ - \sigma_{t,1}\cos(2\theta)} ( \frac{1}{m} \sum_{i=1}^m e^{\sigma_{p,1}\cos(2\phi_i)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi_i) } ) \sin\theta\text{d}\theta$
\end{center}
which can be rearranged and rewritten as:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} = \frac{K_1}{m} \sum_{i=1}^m e^{\sigma_{p,1}\cos(2\phi_i)} \int_{\theta=0}^{\pi} e^{ - \sigma_{t,1}\cos(2\theta)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi_i) } \sin\theta\text{d}\theta$
\end{center}
\subsubsection{Quadrature in $\theta$ \label{theta_quadrature}\\ }
Second, the integral in $\theta$ of equation \ref{Itwo_integral_theta} is considered:
\begin{equation}
\displaystyle I_2 = \int_{\theta=0}^{\pi} e^{ - \sigma_{t,1}\cos(2\theta)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi_i) } \sin\theta\text{d}\theta
\label{Itwo_integral_theta}
\end{equation}
The well-known trigonometric identity:
\begin{equation}
\displaystyle \cos(2 \theta) = 2\cos^2(\theta) - 1
\label{trigo_formula}
\end{equation}
allows $I_2$ to be rewritten as:
\begin{center}
$\displaystyle I_2= \int_{\theta=0}^{\pi} e^{ - \sigma_{t,1}(2\cos^2(\theta) - 1)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi_i) } \sin \theta \text{d} \theta = e^{ \sigma_{t,1}} \int_{\theta=0}^{\pi} e^{ - 2\sigma_{t,1}\cos^2(\theta)} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\theta, \phi_i) } \sin\theta\text{d}\theta$
\end{center}
Then the change of variables of equation \ref{change_variable_theta_x} is applied:
\begin{equation}
\begin{cases}
\displaystyle
x = \sqrt{2\sigma_{t,1}}\cos(\theta)\\
\displaystyle dx = - \sqrt{2\sigma_{t,1}}\sin(\theta) \, d\theta \\
\end{cases}
\text{and}
\begin{cases}
\displaystyle \theta = 0 \Leftrightarrow x = \sqrt{2\sigma_{t,1}}\\
\displaystyle \theta = \pi \Leftrightarrow x = - \sqrt{2\sigma_{t,1}}\\
\end{cases}
\label{change_variable_theta_x}
\end{equation}
and the following expression is found for $I_2$:
\begin{center}
$\displaystyle I_2 = e^{ \sigma_{t,1}} \int_{x=\sqrt{2\sigma_{t,1}}}^{- \sqrt{2\sigma_{t,1}}} e^{ - x^2} \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\arccos(\frac{x}{\sqrt{2\sigma_{t,1}}}),\phi_i) } (- \frac{dx}{\sqrt{2\sigma_{t,1}}})$
\end{center}
The function $f_2$ is defined as: $\displaystyle f_2(x) = \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} \textcolor{vertjoli}{(\arccos(\frac{x}{\sqrt{2\sigma_{t,1}}}),\phi_i) }$
and then $I_2$ becomes:
\begin{center}
$\displaystyle I_2 = \frac{e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}} \int_{x=-\sqrt{2\sigma_{t,1}}}^{ \sqrt{2\sigma_{t,1}}} e^{ - x^2} f_2(x) dx$
\end{center}
The Gauss-Hermite quadrature rule gives:
\begin{center}
$\displaystyle \int_{x= - \infty }^{+ \infty} e^{ - x^2} f(x) dx \approx \sum_{j=1}^{n} \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2} f(x_j)$
\end{center}
with $x_j$ the roots of the Hermite polynomial $H_n$, defined recursively by:
\begin{center}
$
\begin{cases}
H_0(x) = 1, H_1(x) = 2x\\
H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x) \\
\end{cases}
$
\end{center}
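The recursion and the weight formula can be checked numerically. The sketch below (illustrative, using \texttt{numpy}'s \texttt{hermgauss} as a reference) evaluates the weights $\sqrt{\pi}\,2^{n+1} n! / [H_n'(x_j)]^2$ through the identity $H_n'(x) = 2n H_{n-1}(x)$:

```python
# Check of the Gauss-Hermite weights w_j = sqrt(pi) 2^{n+1} n! / [H_n'(x_j)]^2
# against numpy's reference nodes/weights, using the recursion of the text and
# the identity H_n'(x) = 2 n H_{n-1}(x).
import math
import numpy as np

def hermite(k, x):
    """Physicists' Hermite polynomial H_k(x) via the recursion above."""
    if k == 0:
        return 1.0
    h_prev, h = 1.0, 2.0 * x                  # H_0 = 1, H_1 = 2x
    for j in range(1, k):
        h_prev, h = h, 2.0 * x * h - 2.0 * j * h_prev  # H_{j+1} = 2x H_j - 2j H_{j-1}
    return h

n = 8
x, w_ref = np.polynomial.hermite.hermgauss(n)  # reference nodes and weights
w = np.array([math.sqrt(math.pi) * 2 ** (n + 1) * math.factorial(n)
              / (2 * n * hermite(n - 1, xj)) ** 2 for xj in x])
print(np.max(np.abs(w - w_ref)))               # weights match the formula
print(abs(w.sum() - math.sqrt(math.pi)))       # sum of weights = int e^{-x^2} dx
```

The sum of the weights reproduces $\int_{-\infty}^{+\infty} e^{-x^2} dx = \sqrt{\pi}$, as expected for the choice $f \equiv 1$.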
The function $\overset{\sim}{f_2}$ is defined as follows:
\begin{equation}
\begin{array}{ccccc}
\overset{\sim}{f_2} & : & \mathbb{R} & \to & \mathbb{R} \\
& & x & \mapsto & \overset{\sim}{f_2}(x) = \begin{cases}
f_2(x) \quad \forall x \in \displaystyle [-\sqrt{2\sigma_{t,1}} ; \sqrt{2\sigma_{t,1}}] \\
0 \quad \text{elsewhere} \\
\end{cases}\\
\end{array}
\label{f2tilde}
\end{equation}
The Gauss-Hermite approximation is then applied to the function $\overset{\sim}{f_2}$:
\begin{center}
$\displaystyle \int_{x= - \infty }^{+ \infty} e^{ - x^2} \overset{\sim}{f_2} dx \approx \sum_{j=1}^{n} \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2} \overset{\sim}{f_2}(x_j)$
\end{center}
and so $I_2$ becomes:
\begin{equation}
\displaystyle I_2 \approx \frac{e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}}\sum_{j=1}^{n} \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2} \overset{\sim}{f_2}(x_j)
\label{I2_app}
\end{equation}
The last question to be answered concerns the validity of such an approximation. The function $\displaystyle \overset{\sim}{f_2}$ is bounded (continuous in $x$ on the interval $\displaystyle [-\sqrt{2\sigma_{t,1}} ; \sqrt{2\sigma_{t,1}}]$ and equal to $0$ elsewhere), so:
\begin{center}
$\displaystyle I_2 = \frac{e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}} \int_{x=-\sqrt{2\sigma_{t,1}}}^{ \sqrt{2\sigma_{t,1}}} e^{ - x^2} f_2(x) dx \leq \frac{M e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}} \int_{x=-\sqrt{2\sigma_{t,1}}}^{ \sqrt{2\sigma_{t,1}}} e^{ - x^2}dx$
\end{center}
with $M$ the maximum of the function $\overset{\sim}{f_2}$. The remaining integral looks like the integral of a normal distribution.
The integral $I_{t,a}$ is defined as the integral of a normal distribution centered at $0$, over the interval $[-a,a]$:
\begin{center}
$\displaystyle I_{t,a} = \int_{t=-a}^{a} \frac{1}{\sigma \sqrt{2\pi}} e^{ - (\frac{t}{\sqrt{2}\sigma})^2} dt $
\end{center}
For $a = 3 \sigma$, this integral represents more than 99 percent of the integral over the whole domain:
\begin{center}
$\displaystyle I_{t, 3\sigma}= \int_{t= -3 \sigma}^{3 \sigma} \frac{1}{\sigma \sqrt{2\pi}} e^{ - (\frac{t}{\sqrt{2}\sigma})^2} dt \geq 0.99*I_{t, \infty} = 0.99* \int_{t= - \infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}} e^{ - (\frac{t}{\sqrt{2}\sigma})^2} dt $
\end{center}
The following change of variable
\begin{center}
$
\begin{cases}
\displaystyle u= \frac{t}{\sqrt{2}\sigma}\\
\displaystyle du = \frac{dt}{\sqrt{2}\sigma} \\
\end{cases}
\text{and}
\begin{cases}
\displaystyle t = - 3 \sigma \Leftrightarrow u = - \frac{3\sqrt{2}}{2}\\
\displaystyle t = 3 \sigma \Leftrightarrow u = \frac{3\sqrt{2}}{2} \\
\end{cases}
$
\end{center}
is used in $I_{t, 3\sigma}$:
\begin{center}
$\displaystyle I_{t, 3\sigma} = I_{u, \frac{3\sqrt{2}}{2}}= \int_{u= - \frac{3\sqrt{2}}{2}}^{ \frac{3\sqrt{2}}{2} } \frac{1}{\sigma \sqrt{2\pi}} e^{ - u^2} \sqrt{2} \sigma du \geq 0.99 * I_{u, \infty} = 0.99* \int_{u= - \infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}} e^{ - u^2} \sqrt{2} \sigma du$
\end{center}
which can be rewritten as:
\begin{center}
$\displaystyle I_{u, \frac{3\sqrt{2}}{2}}= \frac{1}{ \sqrt{\pi}} \int_{u= - \frac{3\sqrt{2}}{2}}^{ \frac{3\sqrt{2}}{2} } e^{ - u^2} du \geq 0.99* I_{u, \infty} = 0.99* \frac{1}{\sqrt{\pi}} \int_{u= - \infty}^{\infty} e^{ - u^2} du$
\end{center}
From this, a limit value for $\sigma_{t,1}$ can be found above which the approximation of $I_2$ by Hermite polynomials is accurate for all values of $\theta$:
\begin{center}
$\displaystyle \sqrt{2 \sigma_{t,1}} \geq \frac{3\sqrt{2}}{2} \Leftrightarrow \sigma_{t,1} \geq \frac{9}{4} = 2.25 = \sigma_{t,min}$
\end{center}
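This bound can be checked directly with the error function, since $\int_{-a}^{a} e^{-u^2} du = \sqrt{\pi}\,\mathrm{erf}(a)$ (a numerical illustration, not part of the paper):

```python
# Numerical check of the truncation argument: with a = 3*sqrt(2)/2, i.e.
# a = sqrt(2*sigma_t,min), the Gaussian integral over [-a, a] captures a
# fraction erf(a) of the total, since int_{-a}^{a} e^{-u^2} du = sqrt(pi)*erf(a).
import math

a = 3 * math.sqrt(2) / 2
fraction = math.erf(a)        # fraction of int_R e^{-u^2} du captured on [-a, a]
sigma_t_min = a ** 2 / 2      # invert sqrt(2*sigma_t) >= a
print(fraction)               # about 0.9973, the familiar three-sigma rule
print(sigma_t_min)            # 2.25, as in the text
```

The captured fraction is about $0.9973$, which is the usual three-$\sigma$ rule, confirming both the 99 percent claim and $\sigma_{t,min} = 9/4 = 2.25$.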
Under the assumption of $\sigma_{t,1} \geq \sigma_{t,min}$, $\vertjoliwriting{\tens{\Sigma}_{aniso,1}}$ becomes:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} \approx \frac{K_1}{m} \frac{e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}} \sum_{i=1}^m \sum_{j=1}^{n} e^{\sigma_{p,1}\cos(2\phi_i)}\frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2} \overset{\sim}{f_2}(x_j)$
\end{center}
which can be rearranged as:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} \approx \frac{K_1}{m} \frac{e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}} \sum_{i=1}^m \sum_{j=1}^{n} e^{\sigma_{p,1}\cos(2\phi_i)} \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2} \textcolor{vertjoli}{\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,1}}(\arccos(\frac{x_j}{\sqrt{2\sigma_{t,1}}}),\phi_i)}$
\end{center}
And finally:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} \approx \frac{K_1}{m} \frac{e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}} \sum_{i=1}^m \sum_{j=1}^{n} e^{\sigma_{p,1}\cos(2\phi_i)} \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(\sqrt{2\sigma_{t,1}}\cos(\theta_j))]^2} \textcolor{vertjoli}{\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,1}}(\theta_j, \phi_i) }$
\end{center}
In the same way, $\vertjoliwriting{\tens{\Sigma}_{aniso,2}}$ can be computed as:
\begin{center}
$\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,2}} \approx \frac{K_2}{m} \frac{e^{ \sigma_{t,2}}}{\sqrt{2\sigma_{t,2}}} \sum_{i=1}^m \sum_{j=1}^{n} e^{\sigma_{p,2}\cos(2 (\phi_i + \mu_1 - \mu_2))} \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(\sqrt{2\sigma_{t,2}}\cos(\theta_j))]^2} \textcolor{vertjoli}{\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,2}}(\theta_j, \phi_i) }$
\end{center}
\subsubsection{Value of $K_1$ and $K_2$\\}
The last quantities to determine are $K_1$ and $K_2$, the normalization constants ensuring that $\rho_{fib1}$ and $\rho_{fib2}$ are probability densities:
\begin{center}
$\displaystyle \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}\rho_{fib1}(\theta, \phi) \sin\theta\text{d}\theta \text{d}\phi = 1 $ and $\displaystyle \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi}\rho_{fib2}(\theta, \phi)\sin\theta\text{d}\theta \text{d}\phi = 1$
\end{center}
or, rewritten in terms of $K_1$ and $K_2$:
\begin{equation}
\displaystyle \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi} K_1 e^{\sigma_{p,1}\cos(2\phi) - \sigma_{t,1}\cos(2\theta) } \sin\theta\text{d}\theta \text{d}\phi = 1 \quad \text{and} \quad \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi} K_2 e^{\sigma_{p,2}\cos(2(\phi+\mu_{1}-\mu_{2})) - \sigma_{t,2}\cos(2\theta) } \sin\theta\text{d}\theta \text{d}\phi = 1
\end{equation}
This finally gives the values of $K_1$ and $K_2$:
\begin{equation}
\displaystyle K_1 = \frac{1}{\displaystyle \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi} e^{\sigma_{p,1}\cos(2\phi) - \sigma_{t,1}\cos(2\theta) } \sin\theta\text{d}\theta \text{d}\phi } \quad \text{and} \quad K_2 = \frac{1}{\displaystyle \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi} e^{\sigma_{p,2}\cos(2(\phi+\mu_{1}-\mu_{2})) - \sigma_{t,2}\cos(2\theta) } \sin\theta\text{d}\theta \text{d}\phi }
\end{equation}
The computations of $K_1$ and $K_2$ are analogous, so only one of the two is detailed; to simplify slightly, $\frac{1}{K_1}$ is computed instead:
\begin{center}
$\displaystyle \frac{1}{K_1} = \int_{\theta=0}^{\pi} \int_{\phi=0}^{\pi} e^{\sigma_{p,1}\cos(2\phi) - \sigma_{t,1}\cos(2\theta) } \sin\theta\text{d}\theta \text{d}\phi$
\end{center}
The integral separates into its $\theta$ and $\phi$ parts:
\begin{center}
$\displaystyle \frac{1}{K_1} = \int_{\theta=0}^{\pi} (\int_{\phi=0}^{\pi} e^{\sigma_{p,1}\cos(2\phi)} \text{d}\phi) e^{- \sigma_{t,1}\cos(2\theta)} \sin\theta\text{d}\theta $
\end{center}
For the $\phi$ angle, the same quadrature rule as in \ref{phi_quadrature} is used, leading to:
\begin{center}
$\displaystyle \frac{1}{K_1} \approx \int_{\theta=0}^{\pi} (\sum_{i=1}^{m} \frac{1}{m} e^{\sigma_{p,1}\cos(2\phi_i)}) e^{- \sigma_{t,1}\cos(2\theta)} \sin\theta\text{d}\theta$ with $ \displaystyle \forall i \in [1,m], \phi_i = \frac{(i-1)\pi}{m} $
\end{center}
Or in other terms:
\begin{center}
$\displaystyle \frac{1}{K_1} \approx \frac{1}{m} \sum_{i=1}^{m} e^{\sigma_{p,1}\cos(2\phi_i)} \int_{\theta=0}^{\pi} e^{- \sigma_{t,1}\cos(2\theta)} \sin\theta\text{d}\theta$
\end{center}
Using the same trigonometric identity \ref{trigo_formula}, the computation gives:
\begin{center}
$\displaystyle \frac{1}{K_1} \approx \frac{1}{m} (\sum_{i=1}^{m} e^{\sigma_{p,1}\cos(2\phi_i)} ) e^{ \sigma_{t,1}} \int_{\theta=0}^{\pi} e^{-2\sigma_{t,1}\cos^2(\theta)} \sin\theta\text{d}\theta$
\end{center}
With the same change of variable as in \ref{change_variable_theta_x}, $\displaystyle \frac{1}{K_1}$ becomes:
\begin{center}
$\displaystyle \frac{1}{K_1} \approx \frac{1}{m} (\sum_{i=1}^{m} e^{\sigma_{p,1}\cos(2\phi_i)} ) \frac{e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}} \int_{x = - \sqrt{2\sigma_{t,1}}}^{\sqrt{2\sigma_{t,1}}} e^{-x^2} \text{d}x$
\end{center}
Which is:
\begin{center}
$\displaystyle \frac{1}{K_1} \approx \frac{e^{ \sigma_{t,1}}}{m\sqrt{2\sigma_{t,1}}} \sum_{i=1}^{m} e^{\sigma_{p,1}\cos(2\phi_i)} \int_{x = - \sqrt{2\sigma_{t,1}}}^{\sqrt{2\sigma_{t,1}}} e^{-x^2} \text{d}x$
\end{center}
To compute this term exactly, the error function $\operatorname{erf}$ is needed:
\begin{center}
$\displaystyle \frac{1}{K_1} \approx \frac{e^{ \sigma_{t,1}}}{m\sqrt{2\sigma_{t,1}}} \sum_{i=1}^{m} e^{\sigma_{p,1}\cos(2\phi_i)} \frac{\sqrt{\pi}}{2} (\operatorname{erf}(\sqrt{2\sigma_{t,1}}) - \operatorname{erf}(-\sqrt{2\sigma_{t,1}})) = \frac{e^{ \sigma_{t,1}}}{m\sqrt{2\sigma_{t,1}}} \sum_{i=1}^{m} e^{\sigma_{p,1}\cos(2\phi_i)} \sqrt{\pi} \operatorname{erf}(\sqrt{2\sigma_{t,1}})$
\end{center}
\begin{equation}
\displaystyle {K_1} \approx \frac{m\sqrt{2\sigma_{t,1}}}{e^{ \sigma_{t,1}}} \frac{1}{ \sqrt{\pi}\operatorname{erf}(\sqrt{2\sigma_{t,1}}) \sum_{i=1}^{m} e^{\sigma_{p,1}\cos(2\phi_i)}} \quad \text{with} \quad
\forall i \in [1,m], \phi_i = \frac{(i-1)\pi}{m}
\label{K1_final}
\end{equation}
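The closed form above rests on the identity $\int_{0}^{\pi} e^{-\sigma \cos(2\theta)} \sin\theta\,\text{d}\theta = \frac{e^{\sigma}}{\sqrt{2\sigma}} \sqrt{\pi}\operatorname{erf}(\sqrt{2\sigma})$, obtained by the change of variable $x = \sqrt{2\sigma}\cos\theta$. It can be checked numerically with a pure-Python Simpson rule (the value of $\sigma$ below is an arbitrary test choice):

```python
import math

def theta_integral(sigma, n=20001):
    """Composite Simpson rule for the integral of
    exp(-sigma*cos(2*theta)) * sin(theta) over [0, pi]."""
    h = math.pi / (n - 1)
    s = 0.0
    for i in range(n):
        t = i * h
        w = 1 if i in (0, n - 1) else (4 if i % 2 == 1 else 2)
        s += w * math.exp(-sigma * math.cos(2 * t)) * math.sin(t)
    return s * h / 3

sigma = 2.25  # the sigma_t_min value used in the text
closed = (math.exp(sigma) / math.sqrt(2 * sigma)) * math.sqrt(math.pi) \
         * math.erf(math.sqrt(2 * sigma))
assert abs(theta_integral(sigma) - closed) < 1e-6
```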
Recalling the value of $\vertjoliwriting{\tens{\Sigma}_{aniso,1}}$:
\begin{center}
$ \displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} \approx \frac{K_1}{m} \frac{e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}} \sum_{i=1}^m \sum_{j=1}^{n} e^{\sigma_{p,1}\cos(2\phi_i)} \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(\sqrt{2\sigma_{t,1}}\cos\theta_j)]^2} \textcolor{vertjoli}{\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,1}}(\theta_j, \phi_i) }$
\end{center}
Defining:
\begin{equation}
\displaystyle \forall j \in [1,n], \lambda_{j,n} = \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(\sqrt{2\sigma_{t,1}}\cos\theta_j)]^2} =\frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2}
\label{weight_Hermite}
\end{equation}
$\vertjoliwriting{\tens{\Sigma}_{aniso,1}}$ becomes:
\begin{equation}
\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} \approx \frac{K_1}{m} \frac{e^{ \sigma_{t,1}}}{\sqrt{2\sigma_{t,1}}} \sum_{i=1}^m \sum_{j=1}^{n} e^{\sigma_{p,1}\cos(2\phi_i)} \lambda_{j,n} \textcolor{vertjoli}{\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,1}}(\theta_j, \phi_i) }
\label{PKstressanisoone_K1}
\end{equation}
Combining \ref{PKstressanisoone_K1} and \ref{K1_final}:
\begin{equation}
\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} \approx \frac{\displaystyle \sum_{i=1}^m \sum_{j=1}^{n} e^{\sigma_{p,1}\cos(2\phi_i)} \lambda_{j,n} \textcolor{vertjoli}{\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,1}}(\theta_j, \phi_i) } }{\displaystyle \sqrt{\pi} \operatorname{erf}(\sqrt{2\sigma_{t,1}}) \sum_{i=1}^m e^{\sigma_{p,1}\cos(2\phi_i)}}
\quad \text{with}
\begin{cases}
\displaystyle \forall i \in [1,m], \phi_i = \frac{(i-1)\pi}{m} \\
\displaystyle \forall j \in [1,n], \cos(\theta_{j,k}) = \frac{x_j}{\sqrt{2\sigma_{t,k}}} \\
\text{and} \\
\displaystyle \forall j \in [1,n], \lambda_{j,n} =\frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2}
\end{cases}
\end{equation}
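A quick sanity check of this final formula: replacing the tensor integrand $\delta \overset{\sim}{\tens{\Sigma}}_{aniso,1}$ by the scalar function $1$ makes the double sum collapse to $\sum_j \lambda_{j,n} = \sqrt{\pi}$, so the formula should return $1/\operatorname{erf}(\sqrt{2\sigma_{t,1}}) \approx 1.003$, i.e. the quadrature integrates the fibril density itself to approximately $1$ (the small residue reflects the infinite-range Gauss-Hermite rule applied to a truncated integral). A minimal sketch, with $n=3$ nodes from the appendix table and arbitrary test values for $\sigma_{t,1}$, $\sigma_{p,1}$ and $m$:

```python
import math

# Gauss-Hermite nodes and weights for n = 3 (appendix table):
x = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
lam = [math.sqrt(math.pi) / 6, 2 * math.sqrt(math.pi) / 3, math.sqrt(math.pi) / 6]

sigma_t, sigma_p, m = 2.25, 1.0, 32  # arbitrary test values
a = math.sqrt(2 * sigma_t)
phis = [(i - 1) * math.pi / m for i in range(1, m + 1)]
thetas = [math.acos(xj / a) for xj in x]

def quadrature(f):
    """Discrete average of a scalar f over the fibril density,
    following the structure of the final quadrature formula."""
    num = sum(math.exp(sigma_p * math.cos(2 * phi)) * lj * f(th, phi)
              for phi in phis for lj, th in zip(lam, thetas))
    den = math.sqrt(math.pi) * math.erf(a) \
          * sum(math.exp(sigma_p * math.cos(2 * phi)) for phi in phis)
    return num / den

# With f = 1 the density integrates to ~1 (up to the erf truncation):
assert abs(quadrature(lambda th, phi: 1.0) - 1.0) < 0.01
```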
\subsection{Summary of the quadrature for the Piola-Kirchhoff tensor \label{summary_Piola_Kirchhoff} \\}
To sum up the computation of the 2$^{nd}$ Piola-Kirchhoff tensor:
\begin{equation}
\begin{cases}
\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,1}} \approx \frac{\displaystyle \sum_{i=1}^m \sum_{j=1}^{n} e^{\sigma_{p,1}\cos(2\phi_i)} \lambda_{j,n} \textcolor{vertjoli}{\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,1}}(\theta_j, \phi_i) } }{\displaystyle \sqrt{\pi} \operatorname{erf}(\sqrt{2\sigma_{t,1}}) \sum_{i=1}^m e^{\sigma_{p,1}\cos(2\phi_i)}}\\
\displaystyle \vertjoliwriting{\tens{\Sigma}_{aniso,2}} \approx \frac{\displaystyle \sum_{i=1}^m \sum_{j=1}^{n} e^{\sigma_{p,2}\cos(2(\phi_i+\mu_{1}-\mu_{2}))} \lambda_{j,n} \textcolor{vertjoli}{\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,2}}(\theta_j, \phi_i) } }{\displaystyle \sqrt{\pi} \operatorname{erf}(\sqrt{2\sigma_{t,2}}) \sum_{i=1}^m e^{\sigma_{p,2}\cos(2(\phi_i+\mu_{1}-\mu_{2}))}}
\end{cases}
\quad \text{with}
\begin{cases}
\displaystyle \forall i \in [1,m], \phi_i = \frac{(i-1)\pi}{m} \\
\displaystyle \forall j \in [1,n], \cos(\theta_{j,k}) = \frac{x_j}{\sqrt{2\sigma_{t,k}}} \\
\text{and} \\
\displaystyle \forall j \in [1,n], \lambda_{j,n} =\frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2}
\end{cases}\\
\label{PKaniso_quadrature}
\end{equation}
with $x_j$, $\forall j \in [1,n]$, the roots of the Hermite polynomial $H_n$ (see table \ref{Hermite_tabular_1_5} for the first five polynomials), defined by the recursion:
\begin{equation}
\begin{cases}
H_0(x) = 1, \quad H_1(x) = 2x\\
H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x) \\
\end{cases}
\label{Hermite_polynomial_recursive}
\end{equation}
with $\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,1}}$ and $\vertjoliwriting{\delta \overset{\sim}{\tens{\Sigma}}_{aniso,2}}$ defined as in \ref{f2tilde} and $\vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}}$ and $\vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}}$ defined as in \ref{dPiola_aniso_final_app}:
\begin{equation}
\begin{cases}
\displaystyle \vertjoliwriting{\delta \tens{\Sigma}_{aniso,1}} = (\frac{k}{\lambda l_0} ( \frac{ \lambda}{\lambda_{u,1}} - 1 )_{+} + t_{u,1}) \vect{r} \otimes \vect{r} \\
\displaystyle \vertjoliwriting{\delta \tens{\Sigma}_{aniso,2}} = (\frac{k}{\lambda l_0} ( \frac{ \lambda}{ \lambda_{u,2}} - 1 )_{+} + t_{u,2}) \vect{r} \otimes \vect{r} \\
\end{cases}
\text{with} \quad l(\theta, \phi)^2 = \vect{r}(\theta, \phi)\cdot\tens{C}\cdot\vect{r}(\theta, \phi)
\label{dPiola_aniso_final_app}
\end{equation}
Finally, in discrete form, $\vect{r}$ is given by equation \ref{direction_vector_discrete}:
\begin{equation}
\begin{split}
\vect{r}(\theta_{j,k}, \phi_i) = (\sin\theta_{j,k} \cos\phi_i \vect{e}_x^{lam}(1) + \sin\theta_{j,k} \sin\phi_i \vect{e}_y^{lam} (1) + \cos \theta_{j,k} \vect{e}_z^{lam} (1) ) \vect{e}_X \\ + (\sin\theta_{j,k} \cos\phi_i \vect{e}_x^{lam}(2) + \sin\theta_{j,k} \sin\phi_i \vect{e}_y^{lam} (2) + \cos \theta_{j,k} \vect{e}_z^{lam} (2) ) \vect{e}_Y \\ + (\sin\theta_{j,k} \cos\phi_i \vect{e}_x^{lam}(3) + \sin\theta_{j,k} \sin\phi_i \vect{e}_y^{lam} (3) + \cos \theta_{j,k} \vect{e}_z^{lam} (3) ) \vect{e}_Z
\end{split}
\label{direction_vector_discrete}
\end{equation}
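Equation \ref{direction_vector_discrete} expresses $\vect{r}$ in the global $(\vect{e}_X, \vect{e}_Y, \vect{e}_Z)$ frame by combining the unit vector of spherical angles $(\theta, \phi)$ with the components of the lamellar basis vectors. A compact sketch (assuming, as the construction suggests, that the lamellar basis is orthonormal, so that $\vect{r}$ stays a unit vector):

```python
import math

def direction_vector(theta, phi, ex_lam, ey_lam, ez_lam):
    """r(theta, phi) in the global (X, Y, Z) frame, given the lamellar
    basis vectors expressed in that frame (assumed orthonormal)."""
    s = math.sin(theta)
    local = (s * math.cos(phi), s * math.sin(phi), math.cos(theta))
    return tuple(local[0] * ex_lam[k] + local[1] * ey_lam[k] + local[2] * ez_lam[k]
                 for k in range(3))

# With an orthonormal lamellar basis, r remains a unit vector:
ex = (1.0, 0.0, 0.0)
ey = (0.0, math.cos(0.3), math.sin(0.3))
ez = (0.0, -math.sin(0.3), math.cos(0.3))
r = direction_vector(0.7, 1.1, ex, ey, ez)
assert abs(sum(c * c for c in r) - 1.0) < 1e-12
```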
\newpage
\section{First five Hermite polynomials, their roots and associated weights \label{HermitePolynomialAppendix}}
Tables \ref{Hermite_tabular_1_5} and \ref{Hermite_tabular_1_5_numerical_value} present the first five Hermite polynomials of order $n$, their roots $x_{j,n}$, the associated angles $\theta_{j,n}$ for $\sigma_{t,min} = 2.25$, and the weights associated with the roots, first in closed form and then numerically.
\begin{table}[h!]
\begin{tabular}{|c|c|p{3.5cm}|p{4.25cm}|p{4.cm}|}
\hline
$n$ & $H_n(x)$ & $x_{j,n}$ : roots of $H_n(x)$ & $\displaystyle \theta_{j,n} = \arccos(\frac{x_{j,n}}{\sqrt{2\sigma_{t}}})$ with $\sigma_{t} = 2.25$ (in degrees) & $\displaystyle \lambda_{j,n} = \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2}= \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(\sqrt{2\sigma_{t}}\cos\theta_j)]^2}$ \\
\hline
$n=1$ & $H_1(x) = 2x$ & $x_{1,1} = 0$ & $\theta_{1,1} = 90 $ & $\displaystyle \lambda_{1,1} = \sqrt{\pi} $ \\
\hline
\multirow{2}{*}{$n=2$} & \multirow{2}{3.5cm}{$H_2(x) = 4x^2 - 2$} & $\displaystyle x_{1,2} = - \frac{\sqrt{2}}{2} $ & $\theta_{1,2} = 109.471 220 634 504 $ & $\displaystyle \lambda_{1,2} = \frac{\sqrt{\pi}}{2} $ \\ \cline{3-5} && $\displaystyle x_{2,2} = \frac{\sqrt{2}}{2} $ & $\theta_{2,2} = 70.528 779 365 496 $ & $\displaystyle \lambda_{2,2} = \frac{\sqrt{\pi}}{2} $\\
\hline
\multirow{3}{*}{$n=3$} & \multirow{3}{3.5cm}{$H_3(x) = 8x^3-12x$} & $\displaystyle x_{1,3} = - \sqrt{ \frac{3}{2}}$ & $\theta_{1,3} = 125.264 389 669 801 $ & $\displaystyle \lambda_{1,3} = \frac{\sqrt{\pi}}{6} $ \\ \cline{3-5} && $x_{2,3} = 0 $ & $\theta_{2,3} = 90 $ & $\displaystyle \lambda_{2,3} = \frac{2\sqrt{\pi}}{3} $ \\ \cline{3-5} && $\displaystyle x_{3,3} = \sqrt{ \frac{3}{2}} $ & $\theta_{3,3} = 54.735 610 317 232 $ & $\displaystyle \lambda_{3,3} = \frac{\sqrt{\pi}}{6} $\\
\hline
\multirow{4}{*}{$n=4$} & \multirow{4}{3.5cm}{$H_4(x) = 16x^4 - 48x^2 +12$} & $\displaystyle x_{1,4} = - \sqrt{\frac{3+\sqrt{6}}{2}}$ & $\theta_{1,4} = 141.090 413 810 743 $ & $\displaystyle \lambda_{1,4} = \frac{\sqrt{\pi}(3-\sqrt{6})}{12} $ \\ \cline{3-5} && $\displaystyle x_{2,4} = - \sqrt{\frac{3-\sqrt{6}}{2}} $ & $\theta_{2,4} = 104.319 054 669 049 $ & $\displaystyle \lambda_{2,4} = \frac{\sqrt{\pi}(3+\sqrt{6})}{12} $ \\ \cline{3-5} && $\displaystyle x_{3,4} = \sqrt{\frac{3-\sqrt{6}}{2}}$ & $\theta_{3,4} = 75.680 945 330 951 $ & $\displaystyle \lambda_{3,4} = \frac{\sqrt{\pi}(3+\sqrt{6})}{12} $ \\ \cline{3-5} && $\displaystyle x_{4,4} = \sqrt{\frac{3+\sqrt{6}}{2}} $ & $\theta_{4,4} = 38.909 586 189 258 $ & $\displaystyle \lambda_{4,4} = \frac{\sqrt{\pi}(3-\sqrt{6})}{12} $\\
\hline
\multirow{5}{*}{$n=5$} & \multirow{5}{3.5cm}{$H_5(x) = 32x^5-160x^3+120x$} & $\displaystyle x_{1,5} = - \sqrt{\frac{5+\sqrt{10}}{2}}$ & $\theta_{1,5} = 162.236 386 701 988 $ & $\displaystyle \lambda_{1,5} = \frac{(7-2\sqrt{10})\sqrt{\pi}}{60} $ \\ \cline{3-5} && $\displaystyle x_{2,5} = - \sqrt{\frac{5-\sqrt{10}}{2}} $ & $\theta_{2,5} = 116.864 071 047 785$ & $\displaystyle \lambda_{2,5} = \frac{(7+2\sqrt{10})\sqrt{\pi}}{60} $ \\ \cline{3-5} && $x_{3,5} = 0$ & $\theta_{3,5} = 90 $ & $\displaystyle \lambda_{3,5} = \frac{8\sqrt{\pi}}{15} $ \\ \cline{3-5} && $\displaystyle x_{4,5} = \sqrt{\frac{5-\sqrt{10}}{2}} $ & $\theta_{4,5} = 63.135 928 952 216 $ & $\displaystyle \lambda_{4,5} = \frac{(7+2\sqrt{10})\sqrt{\pi}}{60}$\\ \cline{3-5} && $\displaystyle x_{5,5} = \sqrt{\frac{5+\sqrt{10}}{2}} $ & $\theta_{5,5} = 17.763 613 298 012 $ & $\displaystyle \lambda_{5,5} = \frac{(7-2\sqrt{10})\sqrt{\pi}}{60} $\\
\hline
\end{tabular}
\caption{First five Hermite polynomials of order $\protect n$, roots $\protect x_{j,n}$ of the Hermite polynomials, associated angles $\protect \theta_{j,n}$ for $\protect \sigma_{t,min} = 2.25$, and weights associated with the roots. \label{Hermite_tabular_1_5} \ccite{greenwood1948}, \ccite{salzer1952} }
\end{table}
\begin{table}[h!]
\begin{tabular}{|c|c|p{4.25cm}|p{4.25cm}|p{4.25cm}|}
\hline
$n$ & $H_n(x)$ & $x_{j,n}$ : roots of $H_n(x)$ & $\theta_{j,n} = \arccos(\frac{x_{j,n}}{\sqrt{2\sigma_{t}}})$ with $\sigma_{t} = 2.25$ (in degrees) & $\lambda_{j,n} = \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(x_j)]^2}= \frac{\sqrt{\pi}2^{n+1} n!}{[H_n'(\sqrt{2\sigma_{t}}\cos\theta_j)]^2}$ \\
\hline
$n=1$ & $H_1(x) = 2x$ & $x_{1,1} = 0.000 000 000 000$ & $\theta_{1,1} = 90 $ & $\lambda_{1,1} = 1.772 453 850 906$ \\
\hline
\multirow{2}{*}{$n=2$} & \multirow{2}{2.5cm}{$H_2(x) = 4x^2 - 2$} & $x_{1,2} = -0.707 106 781 187$ & $\theta_{1,2} = 109.471 220 634 504 $ & $\lambda_{1,2} = 0.886 226 925 453$ \\ \cline{3-5} && $x_{2,2} = 0.707 106 781 187$ & $\theta_{2,2} = 70.528 779 365 496 $ & $\lambda_{2,2} = 0.886 226 925 453$\\
\hline
\multirow{3}{*}{$n=3$} & \multirow{3}{2.5cm}{$H_3(x) = 8x^3-12x$} & $x_{1,3} = - 1.224 744 871 392$ & $\theta_{1,3} = 125.264 389 669 801 $ & $\lambda_{1,3} = 0.295 408 975 151$ \\ \cline{3-5} && $x_{2,3} = 0.000 000 000 000 $ & $\theta_{2,3} = 90 $ & $\lambda_{2,3} = 1.181 635 900 604$ \\ \cline{3-5} && $x_{3,3} = 1.224 744 871 392 $ & $\theta_{3,3} = 54.735 610 317 232 $ & $\lambda_{3,3} = 0.295 408 975 151 $\\
\hline
\multirow{4}{*}{$n=4$} & \multirow{4}{2.5cm}{$H_4(x) = 16x^4 - 48x^2 +12$} & $x_{1,4} = -1.650 680 123 886 $ & $\theta_{1,4} = 141.090 413 810 743 $ & $\lambda_{1,4} = 0.081 312 835 447 3 $ \\ \cline{3-5} && $x_{2,4} = -0.524 647 623 275 $ & $\theta_{2,4} = 104.319 054 669 049 $ & $\lambda_{2,4} = 0.804 914 090 006 $ \\ \cline{3-5} && $x_{3,4} = 0.524 647 623 275$ & $\theta_{3,4} = 75.680 945 330 951 $ & $\lambda_{3,4} = 0.804 914 090 006 $ \\ \cline{3-5} && $x_{4,4} = 1.650 680 123 886 $ & $\theta_{4,4} = 38.909 586 189 258 $ & $\lambda_{4,4} = 0.081 312 835 447 3$\\
\hline
\multirow{5}{*}{$n=5$} & \multirow{5}{2.5cm}{$H_5(x) = 32x^5-160x^3+120x$} & $x_{1,5} = -2.020 182 870 456$ & $\theta_{1,5} = 162.236 386 701 988 $ & $\lambda_{1,5} = 0.019 953 242 059 0$ \\ \cline{3-5} && $x_{2,5} = -0.958 572 464 614 $ & $\theta_{2,5} = 116.864 071 047 785$ & $\lambda_{2,5} = 0.393 619 323 152$ \\ \cline{3-5} && $x_{3,5} = 0.000 000 000 000$ & $\theta_{3,5} = 90 $ & $\lambda_{3,5} = 0.945 308 720 483 $ \\ \cline{3-5} && $x_{4,5} = 0.958 572 464 614 $ & $\theta_{4,5} = 63.135 928 952 216 $ & $\lambda_{4,5} = 0.393 619 323 152$\\ \cline{3-5} && $x_{5,5} = 2.020 182 870 456 $ & $\theta_{5,5} = 17.763 613 298 012 $ & $\lambda_{5,5} = 0.019 953 242 059 0$\\
\hline
\end{tabular}
\caption{First five Hermite polynomials of order $\protect n$, numerical values of the roots $\protect x_{j,n}$ of the Hermite polynomials, of the associated angles $\protect \theta_{j,n}$ for $\protect \sigma_{t,min} = 2.25$, and of the weights associated with the roots. \label{Hermite_tabular_1_5_numerical_value} \ccite{greenwood1948}, \ccite{salzer1952} }
\end{table}
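The table entries can be regenerated from the recursion \ref{Hermite_polynomial_recursive} together with the weight formula \ref{weight_Hermite}, using $H_n'(x) = 2n H_{n-1}(x)$ for the derivative. A short Python sketch that recovers the $n=3$ row:

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x), by the recursion of the text:
    H_0 = 1, H_1 = 2x, H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def weight(n, xj):
    """Gauss-Hermite weight lambda_{j,n} = sqrt(pi) 2^(n+1) n! / H_n'(xj)^2,
    using H_n'(x) = 2 n H_{n-1}(x)."""
    dH = 2.0 * n * hermite(n - 1, xj)
    return math.sqrt(math.pi) * 2 ** (n + 1) * math.factorial(n) / dH ** 2

# Check against the n = 3 row of the table:
roots3 = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
expected = [math.sqrt(math.pi) / 6, 2 * math.sqrt(math.pi) / 3, math.sqrt(math.pi) / 6]
for xj, lj in zip(roots3, expected):
    assert abs(hermite(3, xj)) < 1e-12      # xj is indeed a root of H_3
    assert abs(weight(3, xj) - lj) < 1e-12  # table weight recovered
```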
\section{Conclusions}
In this paper, we have built a multi-scale model of the cornea, coupled to a patient-specific geometry, to investigate the origin of keratoconus. We first used our model to reproduce the pressure versus apex displacement curve from Elsheikh et al. \cite{elsheikh_biomechanical_2008} and determined a reference set of mechanical parameters describing a healthy cornea. We show that the central element of the mechanical response is that of the fibrils, in particular their prestretch.
Our simulation of a cornea with a keratoconic geometry but healthy mechanical parameters shows that the change of geometry alone cannot reproduce the response of a keratoconic cornea to an increase of the intraocular pressure \cite{mcmonnies_corneal_2010}. In fact, we showed that the keratoconic response is well reproduced when the mechanical properties are altered, whatever the initial geometry, and that the main component involved in this response is the lamella stiffness. The lamellar weakening is even sufficient to obtain a shape resembling an early-stage keratoconus.
Although they could be complemented by a better description of the induced remodeling, our simulations show the importance of a fine measurement of the mechanical properties in the understanding and diagnosis of keratoconus.
\paragraph*{Acknowledgment}
We kindly thank A. Pandolfi for providing the 3D mesh code, K. M. Meek and S. Hayes for the corneal X-ray experimental data and J. Knoeri and V. Borderie for providing elevation and thickness maps.
\paragraph*{Authors' contribution} C.G. and P.L.T designed the model. C.G. and J.D. implemented the model. C.G. performed the simulations, and analyzed the results. C.G. and J.M.A. discussed the results. J.M.A. supervised the research. All authors read and approved the manuscript.
\paragraph*{Competing interests}
The authors declare no competing interests.
\paragraph*{Financial disclosure}
No funding has been received for this article.
\section{Discussion}
To investigate the origin of keratoconus, we have compared the effects of a change in geometry and of a change in mechanical properties. To do so, we constructed a patient-specific mesh, which reproduces the geometry measured in the clinic. We have built a multi-scale model, which explicitly contains the different contributions (fibrils, isotropic matrix, etc.), but it was not possible to obtain patient-specific data for these parameters. The collagen organization was obtained from experimental observations (X-ray \cite{aghamohammadzadeh_x-ray_2004} or SHG \cite{winkler_three-dimensional_2013}). The different mechanical stiffnesses were manually calibrated to reproduce the reported data \cite{elsheikh_biomechanical_2008}. As we only have access to the displacement of the apex in the human cornea, with a variability between corneas, we did not attempt a proper identification. This implies that our "reference" set of parameters may not be unique. Corneal strain maps have been measured on other animals (such as bovine \cite{boyce_full-field_2008}), but then the keratoconus geometry is not available on the same animal.
We have tested the influence of small variations of each mechanical parameter, and we observed that the most sensitive one is the unfolding stretch, i.e. the stretch at which the fibrils start to generate force. Associated with our observation that the stress distribution corresponds to the fibril distribution (Fig.~\ref{CS_diopter_all_15mmHg}), this supports the idea that the forces in the cornea are mainly due to the fibrils, and only partly to the isotropic matrix or the volume variation. Note that the fibers become more and more unfolded as the pressure increases above physiological pressure, contributing to the increase of the tissue stiffness (see Fig.~\ref{apical_displacement_plus_minus_1_percent}). SHG observations of the lamellae show straight fibrils \cite{latour_vivo_2012, benoit_simultaneous_2016}. It may be explained by the fibril organization at smaller scale \cite{bell_hierarchical_2018}. In any case, it implies that the fibril tensions play a major role in the corneal response, which could have an impact on the recovery of the cornea after a laser surgery.
We simulated the inflation of a cornea with a keratoconic geometry. Directly using our reference mechanical parameters fails to reproduce the reported variation of keratometry during the inflation test \cite{mcmonnies_corneal_2010}. This shows that the keratoconic geometry alone (a thinner cornea) is not enough to produce a keratoconic behavior. However, a 30 to 40 \% decrease in the average fiber stiffnesses allows our model to reproduce the $2$-diopter variation, even for healthy geometries. Thus, our approach shows that mechanical weakening, contrary to the geometry, is able to reproduce the keratoconus changes in SimK, emphasizing the importance of mechanical weakening in the keratoconic response. The weakened parameters reproducing the keratoconus behavior (see Table~\ref{studies_parameters}) indicate that a relatively small decrease of the fibril stiffness is enough to obtain a $2$-diopter variation. This points toward the key role of the collagen lamellae in the development of keratoconus, in agreement with the proposed treatments by the addition of cross-links.
By using the weakened mechanical parameters on the healthy stress-free configuration (see Fig.~\ref{map_elevation_RefMesh20_all}), we were able to partly reproduce a keratoconic shape at physiological pressure. This again supports the idea that the primary driver of keratoconus is a weakening of the collagen fibrils, consistent with the disorganization of the lamellae observed in \cite{meek_changes_2005}. However, the obtained shape is not that of a real keratoconus, with its large, slightly off-centered elevation peak. This may come from our quasi-incompressibility assumption, which prevents a thinning of the cornea. But more likely, to go further in the modeling of keratoconus, we need a better understanding of the remodeling going on inside the tissue.
\section{Results}
We first determine the values of our model parameters by reproducing experimental data on \textit{ex-vivo} inflation assays \cite{elsheikh_biomechanical_2008}: these parameters will be our "reference" parameters used to investigate the origin of the keratoconus.
\subsection{Parameter estimation \label{parameter_estimation}}
We simulated the experiment by Elsheikh et al. \cite{elsheikh_biomechanical_2008}. To do so, we used the stress-free geometry $\Omega_{0, stress-free}^{ref}$ of a healthy cornea and applied a pressure from $0$ to $160$ mmHg while determining the apex displacement. Figure~\ref{apical_displacement_plus_minus_1_percent} shows the envelope of the experimental data (in pink), which comes from inter-cornea variability. The triangular markers are our simulation using the "reference" parameters (see Table \ref{independant_parameters}), obtained after manual calibration.
We then varied each parameter independently by $1\%$. The most sensitive parameters are the unfolding stretches $\lambda_{u, min}$ and $\lambda_{u, max}$ (the results for the other parameters are presented in appendix~\ref{sensitivity_analysis}, Fig.~\ref{apical_displacement_all}). Figure~\ref{apical_displacement_plus_minus_1_percent} shows that an increase (resp. decrease) of both unfolding stretches by $1\%$ moves the pressure vs apex displacement curve to the right (resp. to the left), well outside the experimental data range. The unfolding stretch corresponds to the stretch above which the lamellae start to respond elastically. As $\lambda_u > 1$, the lamellae in the reference configuration are folded and do not contribute to the tissue rigidity. Once they become activated, the tissue becomes much stiffer. This explains why a change in the unfolding stretch leads to a shift of the pressure vs apical displacement curve: increasing the unfolding stretch elongates the heel region without much change to the linear part.
\begin{figureth}
\includegraphics[width = 1.1\linewidth]{apical_displacement_lambda_u_sensitivity.pdf}
\caption[Pressure versus apical displacement for different limit values of $\protect \lambda_{u}$]{\label{apical_displacement_plus_minus_1_percent} Pressure versus apical displacement for three different $\protect \lambda_{u}$. Pink zones: envelopes of the experimental data from \cite{elsheikh_biomechanical_2008}. '$\nabla$': reference case. 'o': $1\%$ decrease of $\lambda_{u}$. 'x': $1\%$ increase of $\lambda_{u}$. }
\end{figureth}
\subsection{Keratoconus: geometrical and mechanical effect \label{keratoconus_geometry}}
To distinguish between a mechanical and a geometrical origin of keratoconus, we first simulated a healthy and a stage 4 keratoconic cornea with "reference" mechanical parameters, and compared with the observations from McMonnies and Boneham \cite{mcmonnies_corneal_2010}. They showed that the simK of healthy corneas does not change significantly for a change of intra-ocular pressure in the range of $15$--$30$~mmHg, whereas the simK of keratoconic corneas increases by 2 diopters. Figure~\ref{simK} shows the simulated keratometry (or simK) as a function of the applied pressure: for the "reference" parameters ($\nabla$ symbols, see table~\ref{studies_parameters}) in both healthy (pink) and keratoconic (purple) corneas, the simK does not change significantly (less than $0.5$ diopter). This implies that a modification of the mechanical properties is needed to reproduce the keratoconus response.
Then, we modified the mechanical parameters to obtain a change of keratometry of 2 diopters, by manual adjustment. We modified separately the non-fibrillar matrix stiffness ($\kappa_{1}^ {apparent}$), the distributed fibril stiffness ($C_i*k_{lam}$), or the pre-elongation ($\lambda_u$). The only parameter that gives a significant change of diopter without an order-of-magnitude change is the fibril stiffness $k_{lam}$, the mean values of the distributed lamella stiffnesses $(C_1*k_{lam})$ and $(C_2*k_{lam})$ decreasing by around 40 and 30\% respectively. Table~\ref{studies_parameters} gives the changed parameters of each simulation.
To obtain a change of $1$ diopter by weakening the matrix, a two orders of magnitude change was needed on $\kappa_{1}^ {apparent}$, and no set of parameters was found to produce a change greater than $0.3$ diopter through a variation of the pre-elongation parameters $\lambda_u$. Figure~\ref{simK} shows the simK variation with pressure for the reference and weakened fibril stiffness cases. Our results show that the keratoconus pressure response can easily be captured by a change in the mechanical behavior, even if we changed the parameters slightly differently for healthy and keratoconic corneas.
\begin{figureth}
\includegraphics[width = 0.9\linewidth]{SimK.pdf}
\caption[SimK Computation]{\label{simK} Computation of the SimK for the reference and fiber weakness cases considered in table~\ref{studies_parameters} with healthy (up) and keratoconic (down) geometries. Modifying the value of the mechanical parameters of the anisotropic part of the cornea, a variation of 2 diopters can be observed.}
\end{figureth}
Figure~\ref{CS_diopter_all_15mmHg} presents the stresses in the Nasal-Temporal (NT) and Superior-Inferior (SI) directions for healthy and keratoconic geometries, without and with mechanical weaknesses, at physiological pressure. The pattern at the boundary is due to the highly rigid boundary condition, and is heterogeneous in the thickness. Both healthy and keratoconic corneas show a higher concentration of the stress in the central region of the anterior surface (even higher in the keratoconic case), whereas the stress in the posterior surface is quite homogeneous. This means that the geometry has a strong impact on the stress, even if it does not affect the keratometry response. On the contrary, modifications of the mechanical parameters do not affect the pattern strongly, mainly smoothing it. This indicates that the stress distribution is mostly due to the fiber distribution, except in the vicinity of the corneal boundary.
\begin{figureth}
\includegraphics[width = 1.1\linewidth]{CS_diopter_all_15mmHg.png}
\caption[Cauchy stress at physiological pressure for different cases of mechanical weaknesses in the cornea]{\label{CS_diopter_all_15mmHg} Cauchy stress at physiological pressure for different cases of mechanical weaknesses in the cornea with healthy and keratoconic geometries. Fig.~\ref{CS_diopter_all_15mmHg}a-h: Naso-Temporal and Superior-Inferior stresses for the healthy geometry (a-d: reference case and e-h: case of the fibril weakness with an increase of 2 diopters between 15 and 31 mmHg) on the anterior and posterior surfaces. Fig.~\ref{CS_diopter_all_15mmHg}i-p: Naso-Temporal and Superior-Inferior stresses for the keratoconic geometry on the anterior and posterior surfaces (i-l: reference case and m-p: case of the fibril weakness with an increase of 2 diopters between 15 and 31 mmHg). }
\end{figureth}
\subsection{Induced keratoconus \label{induced_keratoconus}}
So far, we have separated the problem of the geometry and of the mechanical parameters: we have chosen either the healthy parameters and changed the geometry, or chosen an observed geometry and modified the mechanical parameters. In both cases, we show that the change in diopter associated with the keratoconus response cannot be explained by the change in geometry, but can be reproduced by a decrease in the mechanical properties, in particular of the fiber rigidity. To do so, we started from an observed geometry and simulated a stress-free configuration, obtained such that it reproduces the observed geometry at physiological pressure for the chosen set of mechanical parameters. This means that the keratoconic cornea has a stress-free configuration different from that of the healthy cornea. Here, we ask what the geometry of a cornea under pressure would be if the keratoconic mechanical parameters were applied to the healthy stress-free configuration: we would like to see whether the change of mechanical parameters alone can recreate the keratoconic geometry.
We first determine the stress-free configuration of our reference case (healthy geometry, with reference mechanical parameters), and simulated the response of the cornea at different pressures for weakened fibril stiffness corresponding to a $2$ diopter increase.
Figure~\ref{SimK_RefMesh20} shows the computed SimK for this new case. We also reproduced the simulation of the reference case, which leads to a constant SimK (see Fig.~\ref{simK}). The decreased mechanical properties lead to a higher SimK at physiological pressure than for the reference case, although it is smaller than the one for the simulation starting from keratoconic stress-free configuration (around $61$ D). This reflects the fact that a different stress-free configuration will lead to a different geometry under pressure, and is in line with stage-1 keratoconus based on Krumeich's classification \cite{naderan_histopathologic_2017}. We also observe an increase of $2$ diopters, consistent with a keratoconic response.
\begin{figureth}
\includegraphics[width = 0.9\linewidth]{SimK_RefMesh20.pdf}
\caption[Simulated keratoconus]{\label{SimK_RefMesh20} SimK computed for the different cases of mechanical weakness on the reference stress-free configuration.}
\end{figureth}
Figure~\ref{CS_diopter_healthy_mesh20_15mmHg} presents the NT and SI stress distributions for the reference and mechanical weakness cases. The distributions of stresses are very similar to those in Fig.~\ref{CS_diopter_all_15mmHg}, in agreement with our previous observation that this stress pattern is more controlled by the fiber distribution than by the cornea geometry.
\begin{figureth}
\includegraphics[width = 1.0\linewidth]{CS_diopter_healthy_mesh20_15mmHg.png}
\caption[Cauchy stress at physiological pressure for different cases of mechanical weaknesses in the cornea]{\label{CS_diopter_healthy_mesh20_15mmHg} Stress at physiological pressure for the reference parameters and for a mechanical weakening of the cornea, with the healthy geometry and the stress-free configuration of the reference case used for every computation.
(a-d) Naso-temporal and superior-inferior stresses for the reference case; (e-h) naso-temporal and superior-inferior stresses for the case of fibril weakness with an increase of 2 diopters between 15 and 31 mmHg.}
\end{figureth}
Figure~\ref{map_elevation_RefMesh20_all} shows the elevation maps obtained at physiological pressure and at $P = 30$ mmHg, for the reference case and for the weakened mechanical properties. The fibril weakening does not lead to a major change of the elevation, but the elevation of the central region of the posterior surface is higher than in the reference case (even more clearly at 30 mmHg), which could raise the suspicion of a very early stage of keratoconus. These results are consistent with the value of the SimK at physiological pressure computed previously, and tend to indicate that keratoconus may appear following a weakening of the anisotropic part of the cornea. On the other hand, the elevation maps do not show an off-centered elevation (nor is an off-centered thinning seen on the thickness maps) that would lead one to suspect a keratoconus \cite{duncan_assessing_2016, belin_scheimpflug_2013}. Indeed, the quasi-incompressibility of the cornea does not allow for a significant change of the cornea geometry with a thinning of the cone region, so the cornea cannot evolve into an advanced-stage keratoconic one, although the change of diopter, and thus of curvature radii, is consistent with a keratoconus.
\begin{figureth}
\includegraphics[width = 1.0\linewidth]{map_elevation_RefMesh20_all.png}
\caption[Elevation maps]{\label{map_elevation_RefMesh20_all} Anterior and posterior elevation maps with respect to the best fit spheres for the reference (a-d) and weakened fiber (e-h) cases at physiological pressure (a,b,e,f) and for $P=30$ mmHg (c,d,g,h).}
\end{figureth}
\section{Introduction}
The cornea is a critical part of the eye, providing two thirds of its optical power through its specific lens shape. In the keratoconus disease, the shape of the cornea is progressively altered to become conical, leading to optical aberrations and thus to a loss of vision \cite{sedaghat_comparative_2018}. A late detection of keratoconus imposes laser surgery, with possible complications \cite{vesaluoma_corneal_2000, moilanen_long-term_2003, holzer_femtosecond_2006}. Conversely, if keratoconus is detected at an early stage, appropriate contact lenses can be used to stop its progression \cite{barnett_contact_2011, downie_contact_2015}. This explains the interest in early diagnosis methods in the literature \cite{cavas-martinez_new_2017}.
The origin of keratoconus is still undetermined: it has been shown to be favored by genetics, but also by mechanical rubbing of the eye \cite{najmi_correlation_2019}. Early keratoconus is associated with both a thinning of the cornea \cite{pinero_corneal_2010} and a decrease of the mechanical properties \cite{ambekar_effect_2011}, combined with a loss of the highly organized structure of the cornea \cite{radner_altered_1998}. However, it is not clear whether the thinning is a consequence of the weakening of the cornea or comes first. To tackle this question, we propose a modeling approach in which we can change the cornea geometry and its mechanical properties independently from healthy to keratoconic ones.
Patient-specific images of the cornea are obtained by clinicians using topographers. They give morpho-geometric indicators of an early stage of keratoconus \cite{pinero_corneal_2010, cavas-martinez_new_2017, cavas-martinez_study_2018}, such as corneal thickness, anterior and posterior surface geometries, and pachymetry. On the other hand, cornea mechanical properties are difficult to estimate specifically \textit{in-vivo} \cite{eliasy_determination_2019, kling_corneal_2014}. They have been investigated \textit{ex-vivo} with inflation tests \cite{elsheikh_biomechanical_2008, benoit_simultaneous_2016} or strip stretching \cite{elsheikh_comparative_2005, zeng_comparison_2001}. They show a response similar to other collagen-rich tissues (such as aorta \cite{choudhury_local_2009}, tendon \cite{goulam_houssen_monitoring_2011} or skin \cite{lynch_novel_2017}), with a first heel region, in which the stress increases slowly and non-linearly with the stretch, followed by a linear region, in which the force increases proportionally to the stretch. Indeed, it has long been known \cite{maurice_structure_1957} that the optical and mechanical properties of the cornea are linked to the micro-structural organization of the stroma \cite{ruberti_corneal_2011, meek_corneal_2015}, a collagen-rich tissue made of a plywood of collagen lamellae anchored in a matrix of proteoglycans and keratocytes. It is classically accepted that the mechanical properties arise from a progressive straightening of the lamellae in the heel region, followed by their stretching in the linear part \cite{ashofteh_yazdi_characterization_2020}, as reported for tendon \cite{fang_modelling_2016} for example. Only a few papers have questioned this interpretation, with contradictory observations \cite{benoit_simultaneous_2016, bell_hierarchical_2018} attributed either to the probed scales or to differences in the experimental conditions.
The techniques used today to image the corneal lamellae are either destructive (such as X-ray scattering \cite{newton_circumcorneal_1998, aghamohammadzadeh_x-ray_2004, meek_use_2009}) or limited to a very small field of view (transmission electron microscopy \cite{bergmanson_assessment_2005} and scanning electron microscopy \cite{radner_interlacing_2002, feneck_comparative_2018}, which are also destructive, or Second Harmonic Generation microscopy \cite{winkler_nonlinear_2011, latour_vivo_2012, mercatelli_three-dimensional_2017, avila_quantitative_2019}, which is not). This experimental complexity means that the available data are not patient-specific and thus do not represent the variability of human eyes.
The organization of the lamellae has been shown to differ between keratoconic and healthy corneas \cite{meek_changes_2005, akhtar_ultrastructural_2008}, so one can expect different mechanical properties. Brillouin microscopy showed that a mechanical loss occurs in the region of the cone in keratoconus \cite{scarcelli_biomechanical_2014, seiler_brillouin_2019}. Still, there is no consensus on the difference in rigidity {\it in-vivo} between healthy and keratoconic corneas \cite{ambekar_effect_2011}. Mechanically, a global difference between healthy and keratoconic corneas has been observed {\it in-vivo} in the change of the diopter under pressure \cite{mcmonnies_corneal_2010}.
Usually, the cornea is modeled as a hyperelastic, quasi-incompressible material reinforced by fibers \cite{studer_biomechanical_2010, nguyen_inverse_2011, petsche_role_2013, gefen_biomechanical_2009, whitford_biomechanical_2015, montanino_modeling_2018} representing the two families of lamellae. The validation of these models relies on only a few experiments measuring the displacement of the apex (\cite{elsheikh_biomechanical_2008, lombardo_analysis_2014} for human cornea) or the 3D displacement of the anterior surface (\cite{boyce_full-field_2008} in bovine cornea), and exclusively in healthy cases. Note that most models do not include a variation of the mechanical properties through the cornea thickness, although nanoindentation has shown that the anterior part is stiffer than the posterior part \cite{dias_anterior_2013}.
We propose here a multi-scale and heterogeneous model of the cornea, based on the experimental lamellae orientations. This model is calibrated on the available experimental data, showing the high sensitivity of the response to the pre-strain of the lamellae. The model is then implemented in a finite element code to simulate variations of intra-ocular pressure (or bulge tests) on patient-specific geometries, thanks to clinical keratometer elevation maps. We show that a mechanical weakening of the cornea is needed to reproduce the reported variation of diopter with pressure \cite{mcmonnies_corneal_2010}, for both healthy and keratoconic geometries. On the other hand, a change in geometry without mechanical variation does not reproduce the keratoconic response. We also show that the mechanical weakening tends to induce a keratoconic shape if we start from a stress-free healthy geometry, but the quasi-incompressibility of the cornea does not allow the thinning observed in keratoconus. All of this points towards the importance of a weakening of the mechanical properties in the development of keratoconus. In particular, our analysis shows that a weakening of the collagen lamellae is the most likely to induce the pathology. Our observations support the importance of an early measurement of the cornea mechanical response, as well as the importance of treatments strengthening the collagen fibers.
\section{Methods}
The mechanical problem we solve is an inflation test, where the cornea is fixed on a pressure chamber at its border and put under pressure. A patient-specific mesh is created using clinical elevation and thickness maps. The fixation is located at the sclera, the white and very stiff tissue surrounding the cornea. The material response of the cornea is provided by the stroma, modeled as a hyperelastic matrix reinforced by collagen lamellae. The lamellae orientations are extracted from X-ray \cite{aghamohammadzadeh_x-ray_2004} and SHG images \cite{winkler_three-dimensional_2013, petsche_role_2013}.
\subsection{Patient-specific geometry}
To construct a patient-specific mesh, we proceed in two steps. First, we construct an idealized geometry of the cornea using an analytical description: the geometry of the healthy cornea is almost regular and well described by a parametric quadratic equation \cite{gatinel_corneal_2011}. Considering the apex of the cornea at the origin of a coordinate system with the z-axis oriented vertically and downwards, the anterior and posterior surfaces of the cornea are described by the biconic function \cite{janunts_parametric_2015}:
\begin{equation}
z(x, y, R_x, R_y, Q_x, Q_y) = z_0 + \frac{ \displaystyle \frac{x^2}{R_x}+\frac{y^2}{R_y} }{\displaystyle 1 + \sqrt{1 - (1 + Q_x) \frac{x^2}{R_x^2} - (1 + Q_y) \frac{y^2}{R_y^2} } } ,
\label{biconic_equation}
\end{equation}
where $R_x $ and $R_y$ are the curvature radii of the flattest ($x$ axis) and steepest ($y$ axis) meridians of the cornea, and $Q_x$ and $Q_y$ are the associated asphericities. Note that the $x$ and $y$ directions can be rotated by an angle $\psi$ from the classical nasal-temporal (N-T) and inferior-superior (I-S) axes (see Fig.~\ref{corrected_mesh}b for an illustration of the anterior surface). Finally, $z_0$ is an arbitrary translation along the $z$ axis.
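As an illustration, the biconic surface of Eq.~\eqref{biconic_equation} can be evaluated directly; the following Python sketch uses illustrative (not fitted) values for the radii and asphericities:

```python
import numpy as np

def biconic_z(x, y, Rx, Ry, Qx, Qy, z0=0.0):
    """Elevation z(x, y) of a biconic surface (Eq. for the cornea surfaces).

    Rx, Ry: curvature radii of the flattest / steepest meridians (mm);
    Qx, Qy: associated asphericities. The values used below are
    illustrative only, not fitted clinical parameters.
    """
    num = x**2 / Rx + y**2 / Ry
    den = 1.0 + np.sqrt(1.0 - (1.0 + Qx) * x**2 / Rx**2
                            - (1.0 + Qy) * y**2 / Ry**2)
    return z0 + num / den

# Sanity checks: the apex sits at z0, and the surface is even in x and y.
z_apex = biconic_z(0.0, 0.0, Rx=7.8, Ry=7.6, Qx=-0.2, Qy=-0.3)
z_off = biconic_z(1.0, 0.5, Rx=7.8, Ry=7.6, Qx=-0.2, Qy=-0.3)
```

The least-squares fit to the clinical elevations then amounts to minimizing the residual of this function over $(R_x, R_y, Q_x, Q_y, z_0)$ and the rotation angle $\psi$.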
To adapt the mesh to a real cornea, we use anonymized clinical data obtained by an anterior segment OCT combined with a MS-39 placido type topographer (Dr. J. Knoeri's personal communication). Figures~\ref{cornea_mesh_OCT}a, c, g and i present the maps of clinical anterior and posterior elevations for a healthy (Fig.~\ref{cornea_mesh_OCT}a and c) and a keratoconic cornea (Fig.~\ref{cornea_mesh_OCT}g and i). For each surface, a best fit sphere (BFS) is determined during the acquisition. The distance between the BFS and the real surface is called the anterior or posterior elevation (for the exterior and interior surfaces of the cornea, respectively). Figures~\ref{cornea_mesh_OCT}e and k show clinical maps of the thicknesses of the same corneas. We first perform a least-squares fit of Eq.~\eqref{biconic_equation} to the clinical data. Then, the cornea's thickness at the apex is used to place the anterior surface with respect to the posterior surface. This is used to create an idealized mesh (see Fig.~\ref{corrected_mesh}a - grey mesh) thanks to the code provided by Prof. A. Pandolfi \cite{pandolfi_model_2006}.
This mesh is then corrected to match the real cornea. First, we adjust the anterior and posterior surfaces to match the clinical observations exactly (see Fig.~\ref{corrected_mesh}a - pink mesh). This step requires the interpolation of the elevation maps at the node positions, which is done with a bi-dimensional B-spline approximation. Second, the points in the volume of the mesh (i.e., between the interior and exterior surfaces) are corrected to be linearly distributed between the two surfaces. This procedure ensures that the mesh is both realistic and regular.
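The surface-matching step can be sketched with SciPy's bi-dimensional B-spline interpolation; the elevation map and node positions below are synthetic placeholders, not clinical data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic elevation map on a regular grid (stand-in for the clinical map).
x = np.linspace(-5.0, 5.0, 41)  # mm
y = np.linspace(-5.0, 5.0, 41)
elev = np.add.outer(x**2, y**2) * 1e-3  # placeholder smooth surface

# Bi-dimensional B-spline approximation of the elevation map ...
spline = RectBivariateSpline(x, y, elev)

# ... evaluated at (arbitrary) mesh-node positions.
nodes = np.array([[0.3, -1.2], [2.5, 2.5]])
z_nodes = spline.ev(nodes[:, 0], nodes[:, 1])
```

The interpolated values are then imposed on the surface nodes, and the interior nodes are redistributed linearly between the two corrected surfaces.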
At the end of the process, the elevations (Fig.~\ref{cornea_mesh_OCT}b and d for the healthy cornea, Fig.~\ref{cornea_mesh_OCT}h and j for the keratoconic cornea) and thicknesses (Fig.~\ref{cornea_mesh_OCT}f for the healthy cornea, Fig.~\ref{cornea_mesh_OCT}l for the keratoconic cornea) are reproduced on the mesh to be compared to the clinical ones. Although they are determined at different positions and thus cannot be compared directly, the B-spline approximation captures the clinical data (elevations and thicknesses) well, despite its expected tendency to smooth the shape.
An important point is that this mesh is built in the loaded configuration where the cornea is subjected to the physiological intra-ocular pressure (IOP). We call this configuration $\Omega_{physio}$.
\begin{figureth}
\includegraphics[width = 1.0\linewidth]{opthalmo_carto.png}
\caption[Elevations and thicknesses clinical and computed maps]{\label{cornea_mesh_OCT}Elevation and thickness maps of healthy and keratoconic cornea. (a-f) Clinical and computed maps for a healthy cornea. (g-l) Clinical and computed maps for an advanced stage of keratoconic cornea. (a, c, e, g, i, k) Clinical data obtained by an OCT combined with a MS-39 placido type topographer. (b, d, f, h, j, l) Computed maps at physiological pressure for the same corneas and adapted meshes. (a, b, g, h) Clinical and computed anterior elevations with respect to the best fit sphere (BFS). Scale bar in $\mu m$. (c, d, i, j) Clinical and computed posterior elevations with respect to the BFS. Scale bar in $\mu m$. (e, f, k, l) Clinical and computed thickness. Scale bar in $\mu m$.}
\end{figureth}
\begin{figureth}
\includegraphics[width = 0.7\linewidth]{corrected_mesh.png}
\caption[Mesh used in the finite element code process]{\label{corrected_mesh}Example of a mesh construction for a keratoconic cornea. Mesh parameters: 12250 nodes and 10404 hexahedral elements. (a) Vertical cross-section along the long axis of the cornea of the idealized mesh (grey) and the patient-specific mesh at physiological pressure $\protect \Omega_{physio}$ (pink). (b) 3D picture of the patient-specific mesh at physiological pressure $\protect \Omega_{physio}$. (c) Cross-section through the apex of the patient-specific mesh at physiological pressure $\protect \Omega_{physio}$ (pink) and in stress-free configuration $\protect \Omega_{0, stress-free}$ (blue) to be defined later.}
\end{figureth}
\subsection{Mechanical equilibrium of the cornea: variational formulation \label{mechanical_equilibrium_cornea}}
We use a weak formulation written in the unknown unloaded configuration $\Omega_0$ to represent the energetic equilibrium, the different terms being summarized in Fig.~\ref{inflation_test_meca}. It reads:
\begin{equation}
\displaystyle \mathcal{P}_{i} = \mathcal{P}_{e} + \mathcal{P}_{sclera},
\end{equation}
where $\mathcal{P}_{i}$ is the inner power, $\mathcal{P}_{e}$ is the power of the external forces and $\mathcal{P}_{sclera}$ is the power associated with the elastic boundary conditions. We look for a quasi-static solution of the problem, in which the inertia terms are neglected. We also neglect body forces. The external forces are associated with the pressure $P$ applied on the posterior surface of the cornea, producing a virtual power in the Lagrangian formalism:
\begin{equation}
\displaystyle \forall \vect{w} \in \mathcal{V}(\Omega_0) \text{,} \quad \mathcal{P}_{e} = - P \int_{\Gamma^{post}_0} J \vect{n}_0 .\tens{F}^{-1} .\vect{w} \text{d} \Gamma,
\end{equation}
with $\vect{w}$ an admissible test function (satisfying the boundary conditions), $J = \det (\tens{F})$ the change in volume, $\tens{F}$ the gradient of the transformation sending $\Omega_0$ to $\Omega (t)$ and $\vect{n}_0$ the external normal on the posterior surface in the stress-free configuration.
The anterior surface is free of loading. The stiff sclera fixed to the pressure chamber is treated as an elastic support boundary condition, producing the virtual power:
\begin{equation}
\displaystyle \forall \vect{w} \in \mathcal{V}(\Omega_0) \text{,} \quad \mathcal{P}_{sclera} = - \int_{\Gamma^{sclera}_0} a \vect{u}. \vect{w} \text{d} \Gamma,
\end{equation}
with $\vect{u}$ the displacement vector, and $a$ the boundary elastic modulus, assumed to be large with respect to the cornea stiffness.
\begin{figureth}
\includegraphics[width = 1.0\linewidth]{inflation_test_meca_P.png}
\caption[Schematic view of the mechanical problem of the inflation test.]{\label{inflation_test_meca} Schematic view of the mechanical problem of an inflation test. A pressure $P$ is applied on the posterior surface of the cornea, while the anterior surface of the cornea is stress-free, and the sclera is fixed to a pressure chamber treated as an elastic boundary condition of stiffness $a$.}
\end{figureth}
Finally, the internal power:
\begin{equation}
\displaystyle \forall \vect{w} \in \mathcal{V}(\Omega_0) \text{,} \quad \mathcal{P}_{i} = \int_{\Omega_0} \tens{\Sigma} : d_{\vect{u}} \tens{e}.\vect{w} \text{d} \Omega,
\end{equation}
introduces the $2^{nd}$ Piola-Kirchhoff stress tensor $\tens{\Sigma}$, which is related to the energy function $\psi$ through its derivative with respect to the Green-Lagrange tensor $\displaystyle \tens{e} = \frac{1}{2} (\tens{F}^{T}\tens{F} - \tens{1})$:
\begin{equation}
\tens{\Sigma} := \frac{d \psi}{d \tens{e}},
\end{equation}
and $ d_{\vect{u}} \tens{e}. \vect{w}= \frac{1}{2} (( \tens{\nabla}_{\vect{\xi}} \vect{w}) ^T . \tens{F} +\tens{F}^T. \tens{\nabla}_{\vect{\xi}} \vect{w}) $ is the symmetric part of the gradient tensor of the test function in the current configuration, brought back to the reference configuration.\\
The weak formulation of our mechanical problem leads to the following equilibrium equation in Lagrangian form:
\begin{equation}
\forall \vect{w} \in \mathcal{V}(\Omega_0) \text{,} \quad \int_{\Omega_0} \tens{\Sigma} : d_{\vect{u}} \tens{e}.\vect{w} \text{d} \Omega = - P \int_{\Gamma^{post}_0} J \vect{n}_0 .\tens{F}^{-1} . \vect{w} \text{d} \Gamma - \int_{\Gamma^{sclera}_0} a \vect{u} . \vect{w} \text{d} \Gamma .
\label{final_weak_formulation_Sigma}
\end{equation}
\subsection{Constitutive behavior \label{constitutive_behavior_cornea}}
We consider that the mechanical resistance of the cornea arises from the stroma, its main layer \cite{pandolfi_fiber_2012, simonini_customized_2015}. The stroma is a collagen-rich tissue that we describe as a hyperelastic material made of fibers in an isotropic, weakly compressible matrix. The associated energy function $\psi$ is thus split into three contributions:
\begin{equation}
\psi = \textcolor{bleufonce}{\psi^{iso}} + \textcolor{rose}{\psi^{vol}} + \textcolor{vertjoli}{\psi^{lam}},
\label{decomposition_Psi}
\end{equation}
with an isotropic part $\textcolor{bleufonce}{\psi^{iso}}$ corresponding to the matrix, the keratocytes and the randomly distributed lamellae, a volumetric part $\textcolor{rose}{\psi^{vol}}$ penalizing any change of volume and an anisotropic part $\textcolor{vertjoli}{\psi^{lam}}$, taking into account the mechanical role of the oriented lamellae.
The isotropic part $\textcolor{bleufonce}{\psi^{iso}}$ is chosen here as a Mooney-Rivlin function of the reduced invariants $\bar{I}_1 = I_1 I_3^{-1/3}$ and $\bar{I}_2 = I_2 I_3^{-2/3}$ \cite{pandolfi_model_2006, simonini_customized_2015} of the right Cauchy-Green tensor $\tens{C} = \tens{F}^T \tens{F}$:
\begin{equation}
\textcolor{bleufonce}{\psi^{iso}} \bleufoncecwriting{:= \kappa_1 (\bar{I}_1-3) + \kappa_2(\bar{I}_2-3) } ,
\label{psi_iso}
\end{equation}
while the volumetric part $\textcolor{rose}{\psi^{vol}}$ penalizes any volume change through a very large bulk modulus \rosewriting{\textit{K}} \cite{simonini_customized_2015}:
\begin{equation}
\textcolor{rose}{\psi^{vol}} \rosewriting{:= \displaystyle K(J^2 - 1 - 2\log J),} \quad \text{with} \quad J^2 = I_3 .
\label{psi_vol}
\end{equation}
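The two matrix contributions of Eqs.~\eqref{psi_iso} and \eqref{psi_vol} can be sketched numerically as follows (the parameter values are arbitrary); both vanish in the undeformed state, and only $\textcolor{rose}{\psi^{vol}}$ reacts to a pure dilation, since $\textcolor{bleufonce}{\psi^{iso}}$ depends on the reduced (isochoric) invariants only:

```python
import numpy as np

def psi_iso(C, kappa1, kappa2):
    """Mooney-Rivlin part in the reduced invariants of C = F^T F."""
    I1 = np.trace(C)
    I2 = 0.5 * (I1**2 - np.trace(C @ C))
    I3 = np.linalg.det(C)
    I1b = I1 * I3**(-1.0 / 3.0)
    I2b = I2 * I3**(-2.0 / 3.0)
    return kappa1 * (I1b - 3.0) + kappa2 * (I2b - 3.0)

def psi_vol(C, K):
    """Volumetric penalty K (J^2 - 1 - 2 log J), with J^2 = det C."""
    J = np.sqrt(np.linalg.det(C))
    return K * (J**2 - 1.0 - 2.0 * np.log(J))

# Undeformed state: both terms vanish.
I = np.eye(3)
# Pure dilation with J = 1.1 (det C = 1.21): only psi_vol is non-zero.
C_dil = (1.1**(2.0 / 3.0)) * np.eye(3)
```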
The anisotropic contribution is due to the anisotropic distribution of the lamellae. X-ray and SHG observations have shown a two-peak distribution of lamellae (see Fig.~\ref{MeekData}) \cite{aghamohammadzadeh_x-ray_2004, latour_vivo_2012} that we describe by two families of lamellae $\vertjoliwriting{ (lam_1, lam_2) }$. We model their contribution by an angular integration (AI) approach \cite{studer_biomechanical_2010, petsche_role_2013}. At each material point of the cornea, the two families of lamellae have a given directional density distribution $\vertjoliwriting{ (\textcolor{vertjoli}{\rho_{1}} (\theta, \phi) , \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) ) }$. The contribution $\textcolor{vertjoli}{\psi^{lam}}$ of the two families of lamellae at each point adds local contributions of all possible directions, through the integration on a sphere of radius 1 (called "micro-sphere"):
\begin{equation}
\textcolor{vertjoli}{\psi^{lam}} \vertjoliwriting{ := \int_{\theta = 0}^{\pi} \int_{\phi = 0}^{2 \pi}{ (\textcolor{vertjoli}{\rho_{1}} (\theta, \phi) \textcolor{vertjoli}{\delta \psi^{lam}_{1}} (\theta, \phi) + \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) \textcolor{vertjoli}{\delta \psi^{lam}_{2}} (\theta, \phi) ) \sin \theta \text{d} \theta \text{d} \phi} }
\label{psi_aniso_continuous}
\end{equation}
performed in the local system of coordinates at the given spatial quadrature point $(\vect{e}_r^{lam} , \vect{e}_\theta^{lam} , \vect{e}_\phi^{lam} )$ (see Fig.~\ref{MeekData}). At each mesh node, a local Cartesian basis $(\vect{e}_x^{lam}, \vect{e}_y^{lam} , \vect{e}_z^{lam} )$ (see Fig.~\ref{MeekData}d) is created using the main directions of the lamellae extracted from \cite{aghamohammadzadeh_x-ray_2004}: $\vect{e}_x^{lam}$ is in the direction of one lamellae family (chosen as the one whose direction is closer to the long axis of the cornea in the central part, and the one closer to the tangential direction in the periphery), interpolated at the node from the data at the experimental points; $\vect{e}_z^{lam} $ is normal to the surface at the node, and $\vect{e}_y^{lam} $ completes the trihedron. Then, $(\vect{e}_r^{lam} , \vect{e}_\theta^{lam} , \vect{e}_\phi^{lam} )$ defines the local spherical system characterizing the direction $(\theta, \phi)$ of a particular quadrature point of the micro-sphere (see Fig.~\ref{MeekData}e).
\begin{figureth}
\includegraphics[width = 0.9\linewidth]{meekData.png}
\caption[Meek's data and microsphere.]{\label{MeekData} Distribution of lamellae orientation in a cornea. (a) Experimental polar plot of the direction of the lamellae obtained from X-ray observation (Figure from \cite{aghamohammadzadeh_x-ray_2004}, kindly provided by S. Hayes and K. M. Meek). (b) Zoom on a sub-region of the cornea. (c) Experimental (pink) and associated optimized angular intensity (green) at one point of measurement. (d-e) Local Cartesian coordinates system $ \displaystyle ( \protect \vect{e}_x^{lam}, \protect \vect{e}_y^{lam} , \protect \vect{e}_z^{lam} )$ at the same particular point of measurement, and the associated spherical coordinates.}
\end{figureth}
\subsubsection{Elementary response of a lamella $\textcolor{vertjoli}{\delta \psi^{lam}}$ }
In many tissues, collagen fibrils are crimped \cite{fratzl_collagen_2008}, which explains the non-linear response of the tissue: a heel region in which the crimps disappear, generating a low force, followed by a linear region in which the fibrils are stretched (and aligned) with a spring-like behavior. In the cornea, the collagen fibrils appear highly aligned within the lamellae \cite{winkler_nonlinear_2011}. Still, they can buckle, but we expect this buckling to occur at a stretch smaller than the one at physiological pressure. Note that experiments on cornea strips have shown that the fibrils are tilted and that this tilt decreases in the heel region to create the non-linear response, as do the crimps in other tissues \cite{bell_hierarchical_2018}.
\begin{figureth}
\includegraphics[width=10cm]{fiber_configurations.png}
\caption[Schematic representation of the different configurations of the lamella]{\label{fiber_configuration} Schematic representation of the different configurations of the lamella: the 'unfolding' configuration corresponds to the limit of the lamella in compression, the reference and deformed configuration are those considered in our problem.}
\end{figureth}
We model a collagen lamella as a bi-domain material (see Fig.~\ref{fiber_configuration}). For stretches below an "unfolding" stretch $\lambda_{u}$, the lamella creates a constant prestress $t_u$, while for higher stretches, the lamella has a spring-like behavior of apparent "stiffness" $k$. The elementary energy function is therefore given by:
\begin{equation}
\displaystyle \textcolor{vertjoli}{\delta \psi^{lam}_{i}} (\theta, \phi) := \frac{1}{2} k_i \lambda_{u,i} l_{0,i} \left( \frac{ \lambda_i}{\lambda_{u,i} } - 1\right) _{+}^2 + t_{u,i} l_{0,i} \lambda_{i}, \quad \forall i \in \{1, 2\},
\label{dpsianiso_without_compression}
\end{equation}
where $(\,)_+$ denotes the positive part function.
The elongation $\lambda (\theta, \phi) $ of a lamella of reference direction $\vect{r_0} (\theta, \phi) $ is directly obtained, under an affine assumption, as a function of the right Cauchy-Green tensor:
\begin{equation}
\lambda(\theta, \phi) := \displaystyle \sqrt{ \frac{\vect{r_0}(\theta, \phi).\tens{C}.\vect{r_0}(\theta, \phi)}{\vect{r_0}(\theta, \phi).\vect{r_0}(\theta, \phi)} } = \sqrt{ \vect{r_0}(\theta, \phi).\tens{C}.\vect{r_0}(\theta, \phi)} \quad (||\vect{r_0}||^2 = 1),
\label{stretch_computation}
\end{equation}
with $\vect{r_0}(\theta, \phi) := \sin \theta \cos \phi \vect{e}_x^{lam} + \sin \theta \sin \phi \vect{e}_y^{lam} + \cos \theta \vect{e}_z^{lam} $.
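Equations~\eqref{dpsianiso_without_compression} and \eqref{stretch_computation} combine into a short numerical sketch (all parameter values below are illustrative, not calibrated):

```python
import numpy as np

def lam_direction(theta, phi):
    """Unit reference direction r0(theta, phi) in the local lamella basis."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def stretch(C, r0):
    """Affine stretch of a lamella of unit reference direction r0."""
    return np.sqrt(r0 @ C @ r0)

def delta_psi_lam(lmbda, k, lam_u, l0, t_u=0.0):
    """Bi-domain elementary energy: spring-like above the unfolding
    stretch lam_u, constant prestress t_u below."""
    return (0.5 * k * lam_u * l0 * max(lmbda / lam_u - 1.0, 0.0)**2
            + t_u * l0 * lmbda)

# Uniaxial 5% stretch along e_x: a lamella along x is stretched by 1.05 ...
C = np.diag([1.05**2, 1.0, 1.0])
lam_x = stretch(C, lam_direction(np.pi / 2, 0.0))
# ... and stores energy only if stretched beyond its unfolding stretch.
w_on = delta_psi_lam(lam_x, k=1.0, lam_u=1.02, l0=1.0)
w_off = delta_psi_lam(lam_x, k=1.0, lam_u=1.10, l0=1.0)
```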
\subsubsection{Density functions $\vertjoliwriting{ (\textcolor{vertjoli}{\rho_{1}} (\theta, \phi) , \textcolor{vertjoli}{\rho_{2}} (\theta, \phi) ) }$}
The distribution of each family of lamellae is described by the Von Mises distribution of Eq.~\eqref{VonMises_distribution}:
\begin{equation}
\text{VM} (\theta, \phi | \kappa_{ip}, \kappa_{t}, \mu, \nu) := \frac{\displaystyle e^{ \kappa_{ip} \cos(2(\phi-\mu)) } e^{ \kappa_{t} \cos(2(\theta-\nu)) }}{\displaystyle C_{lam} },
\label{VonMises_distribution}
\end{equation}
where $C_{lam}$ is a normalization factor ensuring that the total density of the distribution over the sphere equals 1. The in-plane $\kappa_{ip}$ and out-of-plane $ \kappa_{t}$ concentrations measure the dispersion (the larger the $\kappa$, the narrower the peak), while $\mu$ and $\nu$ describe the mean orientations (in-plane and out-of-plane, respectively).
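A small numerical sketch of this density and of its normalization over the micro-sphere (the concentration values are illustrative; a simple midpoint quadrature rule is accurate enough here):

```python
import numpy as np

def vm_unnormalized(theta, phi, k_ip, k_t, mu, nu):
    """Two-angle Von Mises density of Eq. (VM), before normalization."""
    return (np.exp(k_ip * np.cos(2.0 * (phi - mu)))
            * np.exp(k_t * np.cos(2.0 * (theta - nu))))

# Micro-sphere quadrature grid (midpoint rule in theta and phi).
n_t, n_p = 200, 400
theta = (np.arange(n_t) + 0.5) * np.pi / n_t
phi = (np.arange(n_p) + 0.5) * 2.0 * np.pi / n_p
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dA = (np.pi / n_t) * (2.0 * np.pi / n_p)

vals = vm_unnormalized(TH, PH, k_ip=2.0, k_t=7.0, mu=0.0, nu=0.0)
C_lam = np.sum(vals * np.sin(TH)) * dA     # normalization factor
rho = vals / C_lam                         # normalized density
total = np.sum(rho * np.sin(TH)) * dA      # total density over the sphere
```

The same quadrature grid serves to evaluate the anisotropic energy $\psi^{lam}$ of Eq.~\eqref{psi_aniso_continuous}, by weighting the elementary lamella energies with $\rho \sin\theta \, \mathrm{d}\theta \, \mathrm{d}\phi$.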
To reproduce the X-ray experimental data from \cite{aghamohammadzadeh_x-ray_2004} at each point of measure (see Fig.~\ref{MeekData}c), we consider that the diffracted signal is the sum of the two in-plane distributions of the lamellae families, supplemented by an isotropic contribution:
\begin{equation}
I_{m} (\phi | \kappa_{ip,1}, \kappa_{ip,2}, \mu_1, \mu_2) = I_{iso} + C_1 \text{VM}_{ip} (\phi | \kappa_{ip,1}, \mu_1) + C_2\text{VM}_{ip} (\phi | \kappa_{ip,2}, \mu_2) ,
\label{I_minimised}
\end{equation}
where $ I_{iso} $ is a constant component representing the isotropic part of the measurement, $\mu_1 + \pi/2$ and $\mu_2 + \pi/2$ are the mean directions of the lamellae (the intensity peak is shifted by $\pi/2$ with respect to the main direction of the lamellae \cite{aghamohammadzadeh_x-ray_2004}), $ \kappa_{ip,1}$ and $\kappa_{ip,2}$ are the concentrations of the lamellae distributions, and $C_1$ and $C_2$ measure the number of oriented lamellae of each family at the measurement point. The seven fields $C_1$, $C_2$, $\kappa_{ip,1}$, $\kappa_{ip,2}$, $\mu_1$, $\mu_2$ and $ I_{iso} $, identified at the experimental points by a least-squares minimization, are then bi-linearly interpolated at each node of the mesh.
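The identification step can be sketched by fitting the model of Eq.~\eqref{I_minimised} to a synthetic intensity profile with SciPy; here the in-plane Von Mises densities are normalized over $[0, 2\pi)$ with the modified Bessel function $I_0$, and the "true" parameters are arbitrary stand-ins for a real measurement:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import i0

def vm_ip(phi, kappa, mu):
    """In-plane Von Mises density, normalized over [0, 2*pi)."""
    return np.exp(kappa * np.cos(2.0 * (phi - mu))) / (2.0 * np.pi * i0(kappa))

def intensity(phi, I_iso, C1, C2, k1, k2, mu1, mu2):
    """Model of Eq. (I_m): isotropic floor plus two lamella families."""
    return I_iso + C1 * vm_ip(phi, k1, mu1) + C2 * vm_ip(phi, k2, mu2)

# Synthetic "measured" profile (true parameters chosen for illustration).
phi = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
p_true = (0.3, 1.0, 0.8, 2.0, 3.0, 0.0, np.pi / 2)
data = intensity(phi, *p_true)

# Least-squares identification of the seven parameters.
p_fit, _ = curve_fit(intensity, phi, data,
                     p0=(0.2, 0.8, 0.6, 1.5, 2.5, 0.1, 1.4))
```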
The X-ray experiments give no indication of the out-of-plane distribution. Using Second Harmonic Generation (SHG) microscopy, it has been shown that the lamellae have a maximum out-of-plane angle of around 30$\degree$ in the anterior region of healthy corneas, well represented by a Gaussian distribution \cite{winkler_three-dimensional_2013}, and that this maximum out-of-plane angle decreases with depth \cite{petsche_role_2013, winkler_three-dimensional_2013}. We therefore assumed that the out-of-plane Von Mises distribution has an in-plane mean orientation ($\nu = 0$), so that it reduces to $\displaystyle \text{VM}_t (\theta | \kappa_{t}) = \frac{e^{ \kappa_{t} \cos(2\theta) }}{\displaystyle C(\kappa_{t}) }$, and that the out-of-plane concentration varies exponentially with depth \cite{petsche_role_2013}:
\begin{equation}
\displaystyle \kappa_{t}(s) = (\kappa_{t, min} - \kappa_{t, max}) \, \frac{ e^{\gamma(1-s) }-1 }{e^{\gamma } -1} + \kappa_{t, max},
\quad \text{with}
\begin{cases}
\gamma = 3.19,\\
\kappa_{t, min} = 7, \\
\kappa_{t, max} = 700,
\end{cases}
\label{cutoffangle_outofplanedensity}
\end{equation}
where $s$ is the normalized depth ($0$ at the anterior surface, $1$ at the posterior one), and $\displaystyle C(\kappa_{t})$ normalizes the distribution. $\kappa_{t, min} $ and $\kappa_{t, max}$ have been chosen such that the maximum cut-off angle is around $30 \degree$ on the anterior surface ($\kappa_{t} = \kappa_{t, min}$, so the peak of the distribution is broad) and around $0 \degree$ (in-plane lamellae) on the posterior surface of the cornea ($\kappa_{t} = \kappa_{t, max}$, so the peak of the distribution is narrow). No lateral heterogeneity in the out-of-plane distribution of the lamellae has been reported.
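A direct transcription of Eq.~\eqref{cutoffangle_outofplanedensity}, with the constants given above:

```python
import numpy as np

def kappa_t(s, k_min=7.0, k_max=700.0, gamma=3.19):
    """Out-of-plane concentration vs normalized depth s
    (s = 0: anterior surface, s = 1: posterior surface)."""
    return ((k_min - k_max) * (np.exp(gamma * (1.0 - s)) - 1.0)
            / (np.exp(gamma) - 1.0) + k_max)

# Anterior surface: broad distribution (kappa_t = k_min);
# posterior surface: tight, in-plane distribution (kappa_t = k_max).
k_ant = kappa_t(0.0)
k_post = kappa_t(1.0)
```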
\subsection{Parameters of the mechanical model}
Once the lamellae orientations are known, our model still has $11$ parameters to be determined: $2$ for the isotropic energy $\textcolor{bleufonce}{\psi^{iso}}$ ($\bleufoncecwriting{\kappa_1} $ and $\bleufoncecwriting{\kappa_2}$), $1$ for the volumetric energy $\textcolor{rose}{\psi^{vol}}$ ($\rosewriting{\displaystyle K}$) and $8$ for the anisotropic energy $\textcolor{vertjoli}{\psi^{lam}}$ ($\vertjoliwriting{k_i}$, $\vertjoliwriting{\lambda_{u,i}}$, $\vertjoliwriting{ l_{0,i}}$ and $\vertjoliwriting{t_{u,i}}$). Furthermore, all of them except $\rosewriting{\displaystyle K}$ have to be distributed locally to represent the variation of the micro-structure across the cornea.
The isotropic energy function $\textcolor{bleufonce}{\psi^{iso}}$ (Eq.~\eqref{psi_iso}) involves two parameters: $\bleufoncecwriting{\kappa_1}$ and $\bleufoncecwriting{\kappa_2}$. For simplicity, as we have no specific information, we assume that they are proportional to each other:
\begin{equation}
\bleufoncecwriting{\kappa_2} = \displaystyle \alpha \bleufoncecwriting{\kappa_1},
\label{Iiso2}
\end{equation}
with $\alpha$ a constant to be identified. We also assume that they are proportional to the fraction of the isotropic part of the signal $I_{iso}$ (Eq.~\eqref{I_minimised}), so that they are spatially distributed:
\begin{equation}
\bleufoncecwriting{\kappa_1(x,y,s) = \kappa_{1}^ {apparent}} * I_{iso}(x,y,s).
\label{kappa_1_apparent}
\end{equation}
We consider that this term varies in the cornea's thickness, since the elastic modulus of the posterior stroma is reported to be $39.3\%$ of the modulus of the anterior stroma \cite{dias_anterior_2013}. We thus apply the same exponential variation as for the out-of-plane angular distribution (Eq.~\eqref{kappa1throughthickness}), namely:
\begin{center}
$
\displaystyle I_{iso}(x,y,s) = ( I_{iso}^{ant}(x,y) - I_{iso}^{post}(x,y) ) * \frac{ e^{\gamma(1-s) }-1 }{e^{\gamma } -1} + I_{iso}^{post}(x,y) ,
$
\end{center}
\begin{equation}
\quad \text{with}
\begin{cases}
\gamma = 3.19,\\
I_{iso}^{ant}(x,y) \quad \text{depending on the in-plane position } (x,y), \\
I_{iso}^{post}(x,y) = 39.3\% * I_{iso}^{ant}(x,y) .
\end{cases}
\label{kappa1throughthickness}
\end{equation}
Here $I_{iso}^{ant}$ is obtained by equating the mean of $I_{iso}(x,y,s)$ over $s$ with the experimental value $I_{iso}$ obtained from the X-ray data. In the end, only $\kappa_{1}^{apparent}$, a global parameter, needs to be determined to reproduce the experimental data.
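The depth interpolation above is straightforward to evaluate. The following sketch (illustrative only: the anterior and posterior values passed in are placeholders, not measured data) implements the exponential profile of Eq.~\eqref{kappa1throughthickness}:

```python
import math

def i_iso(s, i_ant, i_post, gamma=3.19):
    """Exponential depth interpolation of the isotropic fraction.

    s: normalized depth (0 = anterior surface, 1 = posterior surface).
    The weight equals 1 at s = 0 and 0 at s = 1, so the anterior and
    posterior values are recovered exactly at the two surfaces.
    """
    w = (math.exp(gamma * (1.0 - s)) - 1.0) / (math.exp(gamma) - 1.0)
    return (i_ant - i_post) * w + i_post

# Posterior value set to 39.3% of the anterior one, as in the text:
print(i_iso(0.0, 1.0, 0.393), i_iso(1.0, 1.0, 0.393))  # ≈ 1.0 and ≈ 0.393
```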
The volumetric energy function $\textcolor{rose}{\psi^{vol}}$ (Eq.~\eqref{psi_vol}) involves an independent penalty parameter $\rosewriting{K}$ to impose volume conservation, which we consider as a global constant parameter, and which needs to be determined through experimental data.
The anisotropic energy functions $\textcolor{vertjoli}{\delta \psi^{lam}_{1}}$ and $\textcolor{vertjoli}{\delta \psi^{lam}_{2}}$ of the two lamellae families (Eq.~\eqref{dpsianiso_without_compression}) involve eight local parameters: $k_1, \lambda_{u,1}, l_{0,1}, t_{u,1}, k_2, \lambda_{u,2}, l_{0,2}$ and $t_{u,2}$ (four per lamellae family).\\
$t_{u,1}$ and $t_{u,2}$ are the forces generated by "undulated" lamellae, which are much smaller than those generated by stretched lamellae. For simplicity, we therefore neglect them, taking $t_{u,1} = t_{u,2} = 0$. Thus, the energy functions (Eq.~\eqref{dpsianiso_without_compression}) reduce to:
$$\displaystyle \textcolor{vertjoli}{\delta \psi^{lam}_{i}} (\theta, \phi) := \frac{1}{2} k_i \lambda_{u,i} l_{0,i} ( \frac{ \lambda_i}{\lambda_{u,i} } - 1) _{+}^2, \quad \forall i \in [1:2].$$
The product $\lambda_{u,i} l_{0,i}$ of the unfolding elongation and the reference length is the unfolding length $l_{u,i}$ of a lamella. We assume that all the lamellae are identical and thus have the same unfolding length: $l_{u,1} = l_{u,2} = l_{u} = Cte$. So the energy function becomes $$\displaystyle \textcolor{vertjoli}{\delta \psi^{lam}_{i}} (\theta, \phi) := \frac{1}{2} k_i l_u ( \frac{ \lambda_i}{\lambda_{u,i} } - 1) _{+}^2, \quad \forall i \in [1:2].$$
The apparent "stiffnesses" $k_1$ and $ k_2$ are a measure of the relative stiffness of each lamellae. Thus, they are proportional to the number of fibers in the lamellae direction and hence to the coefficients $C_1$ and $C_2$ (Eq.~\eqref{I_minimised}). Thus, there is a proportionality factor $k_{lamellae,apparent}$ such that:
\begin{equation}
k_i = k_{lamellae,apparent} C_i,
\label{stiffnesses_relationship}
\end{equation}
Finally, we can define an effective "stiffness" $k_{lam} = l_u k_{lamellae,apparent}$, so that the energy function becomes:
\begin{equation}
\displaystyle \textcolor{vertjoli}{\delta \psi^{lam}_{i}} (\theta, \phi) := \frac{1}{2} C_i k_{lam} ( \frac{ \lambda_i}{\lambda_{u,i} } - 1) _{+}^2, \quad \forall i \in [1:2].
\label{dpsianiso_all_reduced}
\end{equation}
which leaves only a single global constant parameter, $k_{lam}$. \\
The last parameters are the unfolding stretches $\lambda_{u,1}, \lambda_{u,2}$. The "unfolding" elongations are assumed to depend on the dispersion of the lamellae. Indeed, the more the lamellae are stretched in the reference configuration (i.e. the closer the "unfolding" elongation is to $1$), the more the lamellae are aligned and therefore the less they are dispersed (i.e. the greater the $\kappa_{ip}$). Conversely, the less the lamellae are stretched in the reference configuration (i.e. the farther the reference length is from the "unfolding" length), the less the lamellae are aligned and therefore the more they are dispersed (i.e. the smaller the $\kappa_{ip}$). As a first approach, the two quantities are related by $\lambda_{u} = a / \kappa_{ip} + b$, with coefficients $a$ and $b$ determined from the limits:
\begin{equation}
\displaystyle \lambda_{u, min} = \frac{a}{\kappa_{ip, max}} + b, \quad \text{and} \quad \displaystyle \lambda_{u, max} = \frac{a}{\kappa_{ip, min}} + b
\label{unfolding_elongation_max_min}
\end{equation}
which introduces two new independent parameters, $\lambda_{u, max}$ and $\lambda_{u, min}$, the maximum and minimum unfolding elongations of the lamellae in the whole cornea, to be determined experimentally.
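The coefficients $a$ and $b$ follow from Eq.~\eqref{unfolding_elongation_max_min} by solving the $2\times 2$ linear system. A minimal sketch (the $\kappa_{ip}$ bounds below are hypothetical placeholders, not values from the paper; the elongations are the calibrated ones):

```python
def unfolding_coefficients(lam_min, lam_max, kap_min, kap_max):
    """Solve lambda_u = a / kappa_ip + b given its two limit values.

    lam_min is attained at kap_max (well-aligned lamellae) and
    lam_max at kap_min (dispersed lamellae).
    """
    a = (lam_min - lam_max) / (1.0 / kap_max - 1.0 / kap_min)
    b = lam_min - a / kap_max
    return a, b

# Calibrated elongations with placeholder kappa_ip bounds:
a, b = unfolding_coefficients(1.0195, 1.0245, 1.0, 5.0)
print(a / 5.0 + b, a / 1.0 + b)  # ≈ 1.0195 and ≈ 1.0245
```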
The anisotropic contribution (Eq.~\eqref{psi_aniso_continuous}) finally reduces to
\begin{equation}
\displaystyle \textcolor{vertjoli}{\psi^{lam}} \vertjoliwriting{ = \int_{\theta = 0}^{\pi} \int_{\phi = 0}^{2 \pi} \sum_{i=1}^{2} \frac{1}{2} C_i k_{lam} \big( \frac{ \lambda_i (\theta, \phi) }{ \lambda_{u,i} } - 1 \big)_{+}^2 \frac{ e^{ \kappa_{ip,i} \cos(2 (\phi-\mu_i) ) } e^{ \kappa_{t,i} \cos(2\theta) } } { C^{lam}_i } \sin \theta \text{d} \theta \text{d} \phi }
\label{psi_aniso_reduced}
\end{equation}
with only three unknown global parameters left: $\lambda_{u, max}$, $\lambda_{u, min}$ and $k_{lam}$.
Table~\ref{independant_parameters} summarizes the independent global parameters used in the model, the constitutive equations where they appear, and the values determined to reproduce the experimental data from \cite{elsheikh_biomechanical_2008} and \cite{mcmonnies_corneal_2010}.
\begin{tableth}
\begin{tabular}{|c|c|p{5.5cm}|p{2.3cm}|c|}
\hline
Parameter notation & Energy function & Parameter description & Equation & Value \\
\hline
$\bleufoncecwriting{\kappa_{1}^{apparent}} $ & \multirow{5}{*}{} & Matrix stiffness & Eq. \eqref{psi_iso}, \eqref{kappa_1_apparent} & $60$ Pa \\
\cline{1-1} \cline{3-5}
$ \alpha $ & $\textcolor{bleufonce}{\psi^{iso}}$ & Proportionality factor between the two matrix parameters & Eq. \eqref{Iiso2} & $1/4$ \\
\hline
$\rosewriting{K}$ & $\textcolor{rose}{\psi^{vol}}$ & Hyperelastic bulk modulus & Eq. \eqref{psi_vol} & $80$ kPa\\
\hline
$k_{lam}$ & \multirow{5}{*}{} & Apparent stiffness of a collagen lamella for a given length & Eq. \eqref{dpsianiso_all_reduced}, \eqref{psi_aniso_reduced} & $65$ Pa\\
\cline{1-1} \cline{3-5} $\lambda_{u, max}$ & $\textcolor{vertjoli}{\psi^{lam}}$ & Maximum "unfolding" elongation $\lambda_{u}$ in the reference configuration & Eq. \eqref{unfolding_elongation_max_min}, \eqref{psi_aniso_reduced} & 1.0245 \\
\cline{1-1} \cline{3-5} $\lambda_{u, min}$ & & Minimum "unfolding" elongation $\lambda_{u}$ in the reference configuration & Eq. \eqref{unfolding_elongation_max_min}, \eqref{psi_aniso_reduced} & 1.0195\\
\hline
\end{tabular}
\caption[Summary of the global parameters of the model]{Summary of the global parameters of the model, their contribution, where they appear, and their values determined by simulating an inflation test to reproduce the data from \cite{elsheikh_biomechanical_2008}. \label{independant_parameters}}
\end{tableth}
Once we have simplified the model by reducing the number of independent parameters, we use a finite element code - MoReFEM - developed at Inria by the M$\Xi$DISIM team \cite{gilles_gitlab_nodate} to solve Eq.~\eqref{final_weak_formulation_Sigma}. The Galerkin method is used for the spatial discretization, using Q1 hexahedral finite elements. To compute the anisotropic part of the 2$^{nd}$ Piola-Kirchhoff tensor at the Gauss points, a numerical quadrature is used for the integral (Eq.~\eqref{psi_aniso_continuous}) on the microsphere, with a uniform rule of 20 equally distributed points for the in-plane angle $\phi$ and a Gauss-Hermite quadrature rule with a $5^{th}$ order polynomial and 5 quadrature points for the out-of-plane angle $\theta$. Two loading conditions are used:
\begin{itemize}
\item Loading from $2$ mmHg to $160$ mmHg to mimic the \textit{ex-vivo} experiment of Elsheikh et al. \cite{elsheikh_biomechanical_2008} on human corneas under pressure: we use this to calibrate the model.
\item Loading from $15$ mmHg to $30$ mmHg to mimic the \textit{in-vivo} experiment of McMonnies and Boneham \cite{mcmonnies_corneal_2010}: we use this to investigate the origin of the keratoconus.
\end{itemize}
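As an illustration of the microsphere quadrature described above, the sketch below combines the uniform 20-point rule in $\phi$ with a 5-point Gauss rule for the out-of-plane direction. Note that, for simplicity, a Gauss-Legendre rule in $\cos\theta$ is used here as a stand-in for the Gauss-Hermite rule employed in MoReFEM, and the integrand is a placeholder rather than the actual lamellar energy:

```python
import numpy as np

def microsphere_integral(f, n_phi=20, n_theta=5):
    """Approximate the integral of f(theta, phi) sin(theta) over the sphere.

    Uniform midpoint rule in phi, Gauss-Legendre rule in u = cos(theta).
    """
    u, w = np.polynomial.legendre.leggauss(n_theta)  # nodes/weights on [-1, 1]
    theta = np.arccos(u)
    phi = 2.0 * np.pi * (np.arange(n_phi) + 0.5) / n_phi
    dphi = 2.0 * np.pi / n_phi
    return sum(wt * f(t, p) * dphi for t, wt in zip(theta, w) for p in phi)

# Sanity check: integrating 1 over the unit sphere gives its area 4*pi.
print(microsphere_integral(lambda t, p: 1.0))  # ≈ 12.566
```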
\subsection{Stress-free configuration \label{Stres_free} }
To numerically solve Eq.~\eqref{final_weak_formulation_Sigma}, we need to start from a stress-free configuration. However, the patient-specific geometry is obtained under physiological intra-ocular pressure (IOP). As the IOP was not determined during this clinical acquisition, we assume that it is the mean IOP of healthy individuals ($14.5$ mmHg \cite{hashemi_distribution_2005}). We then use the patient-specific configuration $\Omega_{physio}$ (associated with the positions $\vect{x}_{physio}$) as the target of a shooting method to determine the stress-free configuration. Starting from an assumed reference configuration $(\Omega_0 , \vect{\xi})$, the procedure is the following:\\
\begin{algorithm}
\caption{Computation of the stress-free configuration}
\begin{algorithmic}
\STATE \textbf{Step 1} - Computation of the deformed configuration under IOP pressure $(\Omega_p, \vect{x}_p)$\\
\STATE \textbf{Step 2} - Determine the differences $\displaystyle \vect{\Delta}_{\vect{x}} = \vect{x}_p-\vect{x}_{physio}$.\\
\STATE \textbf{Step 3} - While any of the differences $|\vect{\Delta}_{\vect{x}} |$ is larger than a tolerance (taken at $10^{-6}$mm), update the reference configuration by $\vect{\xi}_{new} = \vect{\xi} - \vect{u} $. Otherwise, we consider that we have found the reference configuration.
\end{algorithmic}
\end{algorithm}
Figure~\ref{corrected_mesh}c presents the two meshes used in the algorithm for a stage 4 keratoconic cornea. The pink one is the corrected mesh under physiological pressure $\Omega_{physio}$ and the blue one corresponds to the associated stress-free configuration $\Omega_{0, stress-free}$ mesh (for $P=0$ mmHg), the two being barely distinguishable. Note that the reference configuration needs to be recomputed each time any mechanical parameter of the model is changed.
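The shooting procedure above is a fixed-point iteration on the reference geometry. A minimal sketch, where `solve_inflation` is a placeholder supplied by the caller that stands in for the finite element solve under IOP, and where, as an assumption, the mismatch itself is taken as the correction of Step 3:

```python
import numpy as np

def stress_free_configuration(solve_inflation, x_physio, tol=1e-6, max_iter=50):
    """Fixed-point iteration for the stress-free geometry.

    solve_inflation(xi): node positions of the mesh xi deformed under IOP
    (placeholder for the finite element computation).
    x_physio: target patient-specific node positions under IOP.
    """
    xi = x_physio.copy()                  # initial guess: the in-vivo geometry
    for _ in range(max_iter):
        x_p = solve_inflation(xi)         # Step 1: inflate the candidate mesh
        delta = x_p - x_physio            # Step 2: mismatch to the target
        if np.max(np.abs(delta)) < tol:   # Step 3: convergence test
            return xi
        xi = xi - delta                   # correct the reference configuration
    raise RuntimeError("shooting method did not converge")

# Toy check with a "solver" that adds a uniform inflation displacement:
xi = stress_free_configuration(lambda x: x + 0.1, np.array([1.0, 2.0, 3.0]))
print(xi)  # ≈ [0.9, 1.9, 2.9]
```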
\subsection{simK determination}
To compare our data with McMonnies and Boneham \cite{mcmonnies_corneal_2010}, we computed the simK of our cornea at different pressures. The simK is the dioptric power (D) associated with the steepest meridian of the cornea, identified within a small radius ($r = 1$ mm - see Fig.~\ref{simK_computation}). To compute the simK, we fit the biconic equation (Eq.~\eqref{biconic_equation}) to the deformed anterior surface inside a $1$ mm radius from the apex. We obtain the two radii for each pressure level and compute the power D from the steepest one:
\begin{equation}
D(P)= simK(P) = \frac{ n_{aqh} - n_{air} }{R_{steep}(P)}
\end{equation}
where $R_{steep}$ is the radius of the steepest meridian and $n_{aqh}$ and $n_{air}$ are the refractive indices of the aqueous humor and air (taken as 1.3375 and 1.0000, respectively).
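For illustration, the conversion from the steepest radius to dioptric power can be written as follows (the radius value used is a hypothetical example, not a result of the paper):

```python
def sim_k(r_steep_mm, n_aqh=1.3375, n_air=1.0):
    """simK (in diopters) from the steepest corneal radius given in mm."""
    return (n_aqh - n_air) * 1000.0 / r_steep_mm  # factor 1000: mm -> m

# A steep radius of 7.5 mm corresponds to a power of about 45 D:
print(sim_k(7.5))  # ≈ 45.0
```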
\begin{figureth}
\includegraphics[width = 0.7\linewidth]{SimK_computation.png}
\caption[SimK Computation]{\label{simK_computation} Example of the considered surface used to compute the SimK. A subregion of $1$ mm in radius of the anterior surface is fitted by a biconic function. The steepest meridian is used to compute the SimK.}
\end{figureth}
\section{Sensitivity analysis \label{sensitivity_analysis}}
\begin{figureth}
\includegraphics[width = 1.1\linewidth]{apical_displacement_all.pdf}
\caption[Pressure versus apical displacement for the sensitivity analysis cases]{\label{apical_displacement_all} Pressure versus apical displacement for the sensitivity analysis cases. Pink zones: envelopes of the experimental data from \cite{elsheikh_biomechanical_2008}. '$\nabla$' markers: reference case. 'o' violet markers: $1\%$ decrease of $\lambda_{u}$. 'x' violet markers: $1\%$ increase of $\lambda_{u}$. 'o' blue markers: $1\%$ decrease of $\kappa_{1}^{apparent}$. 'x' blue markers: $1\%$ increase of $\kappa_{1}^{apparent}$. 'o' green markers: $1\%$ decrease of $k_{lam}$. 'x' green markers: $1\%$ increase of $k_{lam}$.}
\end{figureth}
\section{Mechanical parameters used in the computation}
\renewcommand{\arraystretch}{1.5}
\begin{tableth}
\begin{tabular}{|p{5cm}|p{4.6cm}|p{4.6cm}|}
\hline
Geometry & Healthy (associated stress-free configuration) & Keratoconic (associated stress-free configuration ) \\
\hline
Ref = no mechanical weakness & RefH ($\displaystyle \Omega_{0}^{RefH}$ ) & RefK ($\displaystyle \Omega_{0}^{RefK}$ ) \\
\hline
ElongM1 = Pre-elongation minus one percent & ElongM1 ($\displaystyle \Omega_{0}^{ElongM1}$ ) & / \\
\hline
ElongP1 = Pre-elongation plus one percent & ElongP1 ($\displaystyle \Omega_{0}^{ElongP1}$ ) & /\\
\hline
Fib2 = Mechanical weakness on the lamellae leading to a 2 diopters change & Fib2H ($\displaystyle \Omega_{0}^{Fib2H}$) and Fib2H2 ($\displaystyle \rosewriting{\Omega_{0}^{RefH}}$ ) & Fib2K ($\displaystyle \Omega_{0}^{Fib2K}$ ) \\
\hline
\end{tabular}
\caption[Keratoconic cases]{Cases considered in the mechanical study of the keratoconic cornea (Sec.~\ref{keratoconus_geometry} and~\ref{induced_keratoconus}). The reference case for the healthy geometry (RefH) corresponds to the one calibrated on Elsheikh's group data (see Sec.~\ref{parameter_estimation}). Between brackets are noted the stress-free meshes $\displaystyle \Omega_{0, stress-free}$ used for each computational case. \label{all_cases_names} }
\end{tableth}
\renewcommand{\arraystretch}{1.5}
\begin{tableth}
\begin{tabular}{|p{5.78cm}|c|c|c|c|c|}
\hline
\diagbox{Parameter}{Case considered} & RefH / RefK & ElongM1 & ElongP1 & Fib2H / Fib2H2 & Fib2K \\
\hline
Average of distributed isotropic coefficient $\displaystyle\bleufoncecwriting{\kappa_1}$ (MPa) & 0.97 & 0.97 & 0.97 & 0.97 & 0.97 \\
\hline
Minimum "unfolding" elongation $\displaystyle \lambda_{u,min}$ & 1.0195 & \rosewriting{1.0093} & \rosewriting{1.0297} & 1.0195 & 1.0195 \\
\hline
Maximum "unfolding" elongation $\displaystyle \lambda_{u,max}$ & 1.0245 & \rosewriting{1.0143} & \rosewriting{1.0347} & 1.0245 & 1.0245 \\
\hline
Average of distributed anisotropic coefficient $\displaystyle \vertjoliwriting{C_1*k_{lam}}$ (MPa) & 7.15 & 7.15 & 7.15 & \rosewriting{4.10} & \rosewriting{4.11}\\
\hline
Average of distributed anisotropic coefficient $\displaystyle \vertjoliwriting{C_2*k_{lam}}$ (MPa) & 18.70 & 18.70 & 18.70 & \rosewriting{12.43} & \rosewriting{12.76} \\
\hline
\end{tabular}
\caption[Reference parameters]{Mechanical parameters used in the different computations on the cornea. The different cases are presented in Table~\ref{all_cases_names}. For the distributed parameters ($\displaystyle\bleufoncecwriting{\kappa_1}$, $\vertjoliwriting{C_1*k_{lam}}$ and $\vertjoliwriting{C_2*k_{lam}}$) the average values over the whole cornea are given. \label{studies_parameters} }
\end{tableth}
\section{Introduction}
We consider only finite and simple graphs. For all the graph theoretic terms which are used in this paper but not defined, we refer the reader to~\cite{Bondy}.
Let $G=(V,E)$ be a graph. We denote by $|V(G)|$ and $|E(G)|$ the sizes of the vertex set and the edge set of $G$, respectively. The \textit{open neighborhood} of a vertex $v \in V$ is $N(v) = \{u \in V~|~uv \in E \}$ and the \textit{closed neighborhood} is $N[v]= N(v) \cup \{v\}$. A vertex $v$ is said to dominate itself and its neighbors. A subset $D \subseteq V$ is said to be a \textit{dominating set} (DS) of $G$ if every vertex of $G$ is dominated by $D$. Equivalently, a subset $D$ of vertices is a \textit{dominating set} of $G$ if for each $v\in V$, $|N[v] \cap D| \geq 1$. The minimum cardinality of a dominating set is called the \textit{domination number} (DN) of $G$ and it is denoted by $\gamma(G)$. Different types of domination have been researched extensively. The literature on the studies of domination has been surveyed and detailed in the books~\cite{Haynes} and \cite{Haynes2}.
In~\cite{Harary3}, Harary and Haynes defined a generalization of domination as follows: a subset $D \subseteq V$ is a \textit{k-tuple dominating} set of $G$ if for every vertex $v \in V$, either $v$ is in $D$ and has at least $k-1$ neighbors in $D$ or $v$ is in $V\setminus D$ and has at least $k$ neighbors in $D$. Equivalently, a subset $D \subseteq V$ is a \textit{k-tuple dominating} set of $G$ if for each $v\in V$, $|N[v] \cap D| \geq k$. A $2\text{-tuple}$ dominating set is called a \textit{double dominating set} (DDS). The cardinality of a minimum DDS of $G$ is called the \textit{double domination number} (DDN) of $G$. Some bounds for the double domination number in graphs are given in \cite{Hajian} and \cite{Harant}.
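The neighborhood condition $|N[v] \cap D| \geq k$ is easy to check computationally. A small sketch (the graph used is an illustrative example, not one from the paper):

```python
def is_k_tuple_dominating(adj, d, k):
    """Check |N[v] ∩ D| >= k for every vertex v.

    adj: adjacency dict mapping each vertex to the set of its neighbors.
    """
    d = set(d)
    return all(len(({v} | adj[v]) & d) >= k for v in adj)

# In the complete graph K4, every 2-subset is a double dominating set:
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(is_k_tuple_dominating(k4, {0, 1}, 2))  # -> True
print(is_k_tuple_dominating(k4, {0}, 2))     # -> False
```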
In~\cite{Harary2}, Harary introduced the notion of signed graphs and balance. A \textit{signed graph} is a graph whose edges are labelled with positive or negative signs. We denote it by $\Sigma = (G,\sigma)$, where $G$ is called the underlying graph of $\Sigma$ and $\sigma$ is called the \textit{signature~(signing)} of $G$. A signature $\sigma$ can also be viewed as a function from $E(G)$ into $\{+, -\}$. If the edges of $\Sigma$ are all positive, \textit{i.e.,} $\sigma^{-1}(-) = \emptyset$, then the signed graph is called the \textit{all positive} signed graph, and we denote it by $|\Sigma|$.
In a signed graph, \textit{switching} a vertex $v$ is to change the sign of each edge incident to $v$. If we switch every vertex of a subset $X$ of vertices, then we write the resulting signed graph as $\Sigma^{X}$. We say a signature $\Sigma_{1}$ is \textit{switching equivalent} or simply \textit{equivalent} to a signature $\Sigma_{2}$, denoted by $\Sigma_{1} \sim \Sigma_{2}$, if both $\Sigma_{1}$ and $\Sigma_{2}$ have the same underlying graph $G$ and $\Sigma_{1} = \Sigma_{2}^{X}$ for some $X \subseteq V(G)$.
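Switching acts on the signature edge by edge: an edge changes sign exactly when one of its endpoints is switched. A minimal sketch on a hypothetical signed triangle:

```python
def switch(edges, x):
    """Switch the vertex set x in a signed graph.

    edges: dict mapping frozenset({u, v}) -> sign (+1 or -1).
    An edge changes sign iff exactly one of its endpoints lies in x.
    """
    x = set(x)
    return {e: (-s if len(e & x) == 1 else s) for e, s in edges.items()}

# Switching vertex 0 of a triangle flips its two incident edges, so the
# product of signs around the cycle (its balance) is preserved:
tri = {frozenset({0, 1}): -1, frozenset({1, 2}): 1, frozenset({0, 2}): 1}
print(switch(tri, {0})[frozenset({0, 1})])  # -> 1
```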
A cycle in a signed graph is called \emph{positive} if the product of the signs of its edges is positive, and negative otherwise. A signed graph is \textit{balanced} if each of its cycles is positive. The following theorem gives a necessary and sufficient condition for two signed graphs $\Sigma_{1}$ and $\Sigma_{2}$ to be switching equivalent.
\begin{theorem}~\cite{Zaslavsky}
\label{Signature}
Two signed graphs $\Sigma_{1}$ and $\Sigma_{2}$ are switching equivalent if and only if they have the same set of negative cycles.
\end{theorem}
Let $X \subseteq V(G)$ and $Y = V(G) \setminus X$. We denote by $[X:Y]$ the set of edges of $G$ with one end point in $X$ and the other end point in $Y$, and $|[X:Y]|$ denotes the number of edges in $[X:Y]$. The set $[X:Y]$ is called the \textit{edge cut} of $G$ associated with $X$. Further, $G[X:Y]$ denotes the subgraph of $G$ induced by the edges of $[X:Y]$. Similarly for $\Sigma = (G,\sigma)$, we denote by $\Sigma [X:Y]$ the subgraph induced by the edges of $\Sigma$ with one end point in $X$ and the other end point in $Y$.
Several notions of graph theory, such as the theory of nowhere zero flows and the theory of minors and graph homomorphisms, have been already extended to signed graphs. In 2013, Acharya~\cite{Acharya} extended the concept of domination to signed graphs. In 2016, Ashraf and Germina~\cite{Ashraf} generalized the notion of double domination to signed graphs as follows.
\begin{definition}\label{def_dd}~\cite{Ashraf}
\rm{A subset $D \subseteq V$ is a \textit{double dominating set} of a signed graph $\Sigma$ if it satisfies the following two conditions: (i) for every $v \in V,~~|N[v] \cap D| \geq 2$ and, (ii) $\Sigma [D : V\setminus D]$ is balanced.}
\end{definition}
Clearly, Definition~\ref{def_dd} generalizes the concept of double domination in unsigned graphs. The cardinality of a minimum DDS of $\Sigma$ is called the double domination number of $\Sigma$ and is denoted by $\gamma_{\times 2}(\Sigma)$. The following theorem shows that double domination is switching invariant.
\begin{theorem}\cite{Ashraf}
Double domination is invariant under switching.
\end{theorem}
\begin{definition}\label{def_GPG}
\rm{For positive integers $n$ and $k$ satisfying $2 \leq 2k <n$, the \textit{generalised Petersen graph} $P_{n,k}$ is defined by $$V(P_{n,k}) = \{u_{0},u_{1},...,u_{n-1},v_{0},v_{1},...,v_{n-1}\}~ \text{and} ~E(P_{n,k}) = \{u_{i}u_{i+1},u_{i}v_{i},v_{i}v_{i+k}~|~i=0,1,...,n-1\},$$ where the subscripts are read modulo $n$.}
\end{definition}
We denote the sets $\{u_{0},u_{1},...,u_{n-1}\}$ and $\{v_{0},v_{1},...,v_{n-1}\}$ by $U$ and $V_{v}$, respectively. From the definition, it is clear that $P_{n,k}$ is a cubic graph and $P_{5,2}$ is the well-known Petersen graph. The edges $u_{i}v_{i}$ for $0 \leq i\leq n-1$ are called the \textit{spokes} and we denote the set of spokes by $S_{s}$. The cycle induced by vertices of $U$ is called the \textit{outer cycle} of $P_{n,k}$ and is denoted by $C_{o}$. The cycle(s) induced by vertices of $V_{v}$ is(are) called the \textit{inner cycle(s)} of $P_{n,k}$. If $\text{gcd}(n,k) = d$ then the subgraph induced by vertices of $V_{v}$, consists of $d$ pairwise disjoint $\frac{n}{d}$-cycles. If $d >1$ then no two vertices among $v_{0}, v_{1},..., v_{d-1}$ can be in the same $\frac{n}{d}$-cycle.
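Definition~\ref{def_GPG} translates directly into code. A small sketch building the edge set of $P_{n,k}$ and confirming on $P_{5,2}$ that the graph is cubic:

```python
def petersen_edges(n, k):
    """Edge set of the generalized Petersen graph P_{n,k} (2 <= 2k < n)."""
    edges = set()
    for i in range(n):
        edges.add(frozenset({('u', i), ('u', (i + 1) % n)}))  # outer cycle
        edges.add(frozenset({('u', i), ('v', i)}))            # spoke
        edges.add(frozenset({('v', i), ('v', (i + k) % n)}))  # inner cycle(s)
    return edges

# The Petersen graph P_{5,2}: 10 vertices, 15 edges, every vertex of degree 3.
print(len(petersen_edges(5, 2)))  # -> 15
```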
\begin{definition}
\rm{The I-graph $I(n,j,k)$ is a graph with vertex set $$V(I(n,j,k)) = \{u_{0},u_{1},...,u_{n-1},v_{0},v_{1},...,v_{n-1}\}$$ and edge set $$E(I(n,j,k)) = \{u_{i}u_{i+j},u_{i}v_{i},v_{i}v_{i+k}~|~i=0,1,...,n-1\},$$ where subscripts are read modulo $n$.}
\end{definition}
The class of generalized Petersen graphs is a sub-class of the class of I-graphs.
In \cite{Boben}, Boben, Pisanski and Zitnik have studied various properties of I-graphs such as connectedness, girth, and whether they are bipartite or vertex-transitive. They also characterized the automorphism groups of I-graphs.
The rest of this paper is organized as follows. First we give a lower bound and an upper bound for the DDN of signed cubic graphs. Further, we show that if $D$ is a DDS of a cubic graph $G$ such that $|D|= \frac{|V(G)|}{2}$ then $G[D : V\setminus D]$ admits a cycle decomposition. Also we show that if $D$ with $|D|= \frac{|V(G)|}{2}$ is not a DDS of a cubic graph $G$ then it is not necessarily true that $G[D : V\setminus D]$ admits a cycle decomposition. Second we obtain some bounds on the DDN of signed generalized Petersen graphs. Finally, we give bounds on the DDN of signed I-graphs.
\section{Bounds on DDN of Signed Cubic Graph}
In \cite{Ashraf} the authors obtained a bound on the double domination number of a signed graph.
\begin{theorem}\cite{Ashraf}\label{bound_0}
Let $\Sigma$ be any signed graph without isolated vertices on $n$ vertices, then $2\leq \gamma_{\times 2}(\Sigma) \leq n$. Moreover, these bounds are sharp.
\end{theorem}
In the following theorem, we show that the lower bound of Theorem~\ref{bound_0} can be improved if the underlying graph of $\Sigma$ is cubic.
\begin{theorem}\label{general bounds}
For $m\geq 2$, let $\Sigma$ be any signed cubic graph on $2m$ vertices. Then $$m \leq \gamma_{\times 2}(\Sigma) \leq 2m.$$
\end{theorem}
\begin{proof}
The upper bound follows from Theorem~\ref{bound_0}.
To get the lower bound, it suffices to show that $\Sigma$ cannot have a DDS of size $m-1$. Suppose on the contrary that there exists a DDS $D$ of $\Sigma$ such that $|D|=m-1$. Since $D$ is a DDS, each vertex $v \in D$ satisfies $|N[v] \cap D| \geq 2$, so at least one of its three neighbors lies in $D$; hence each vertex of $D$ is adjacent to at most two vertices of $V \setminus D$. Thus
\begin{equation}\label{eq 1}
|[D:V \setminus D]| \leq 2m-2.
\end{equation}
Also, since $D$ is a DDS, every vertex of $V \setminus D$ is adjacent to at least two vertices of $D$. Thus
\begin{equation}\label{eq 2}
|[V \setminus D:D]| \geq 2m+2.
\end{equation}
But since $|[D:V \setminus D]| = |[V \setminus D:D]|$, it is impossible for a set $D$ to satisfy (\ref{eq 1}) and (\ref{eq 2}) simultaneously. Thus the set $D$ cannot be a DDS of $\Sigma$. This implies that any DDS of $\Sigma$ must be of size at least $m$. Therefore we have $\gamma_{\times 2}(\Sigma) \geq m$, and this completes the proof.
\end{proof}
Note that the lower bound of Theorem~\ref{general bounds} can be achieved. For that, let $G$ be a disjoint union of $m$ copies of $K_{4}$. Take $\Sigma = (G,\sigma)$ such that all the edges of $\Sigma$ are positive, that is $\sigma^{-1}(-) = \emptyset$. It is then clear that $\Sigma$ is a signed cubic graph on $4m$ vertices and $\gamma_{\times 2}(\Sigma) = 2m$.
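For small graphs, the double domination number of the underlying graph can be verified by brute force. The sketch below confirms that a single copy of $K_4$ (a cubic graph on $2m = 4$ vertices) attains the lower bound $m = 2$ of Theorem~\ref{general bounds}:

```python
from itertools import combinations

def double_domination_number(adj):
    """Brute-force gamma_x2 of a small unsigned graph (adjacency dict)."""
    vertices = list(adj)
    for size in range(1, len(vertices) + 1):
        for d in combinations(vertices, size):
            ds = set(d)
            if all(len(({v} | adj[v]) & ds) >= 2 for v in adj):
                return size
    return None

k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(double_domination_number(k4))  # -> 2
```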
As an application of DDS, we show that if $D$ with $|D|= \frac{|V(G)|}{2}$ is a DDS of a cubic graph $G$ then $G[D:V \setminus D]$ admits a cycle decomposition.
\begin{lem}\label{2-regular lemma}
For $m\geq 2$, let $G$ be a cubic graph on $2m$ vertices. If $D$ is a DDS of $G$ such that $|D|=m$ then $G[D:V \setminus D]$ is a 2-regular subgraph of $G$.
\end{lem}
\begin{proof}
Let $D$ be a DDS of the cubic graph $G$ on $2m$ vertices with $|D|=m$. We complete the proof by showing that each vertex of $D$ is adjacent to exactly two vertices of $V \setminus D$, and each vertex of $V \setminus D$ is adjacent to exactly two vertices of $D$.
Suppose on the contrary that there are $r$ vertices of $D$, each of which is adjacent to at most one vertex of $V \setminus D$, where $1\leq r \leq m$. Since $D$ is a DDS, each of the remaining $m-r$ vertices of $D$ is adjacent to exactly two vertices of $V \setminus D$. Therefore
\begin{equation}\label{eq 3}
|[D:V \setminus D]| \leq 2(m-r)+r = 2m-r.
\end{equation}
On the other hand, each vertex of $V \setminus D$ is adjacent to at least two vertices of $D$ as $D$ is a DDS. So we have
\begin{equation}\label{eq 4}
|[V \setminus D:D]| \geq 2m.
\end{equation}
As $|[D:V \setminus D]|=|[V \setminus D:D]|$, inequalities (\ref{eq 3}) and (\ref{eq 4}) cannot hold simultaneously. Thus every vertex of $D$ is adjacent to exactly two vertices of $V \setminus D$. Similarly it can be shown that every vertex of $V \setminus D$ is adjacent to exactly two vertices of $D$. Hence $G[D:V \setminus D]$ is a 2-regular subgraph of $G$, and the proof is complete.
\end{proof}
A graph in which each vertex has even degree is called an \textit{even graph}. Veblen's theorem (see Theorem 2.7 of~\cite{Bondy}) says that a graph admits a cycle decomposition if and only if it is even. The following theorem is a direct consequence of Veblen's theorem and Lemma~\ref{2-regular lemma}.
\begin{theorem}\label{cor_1}
For $m\geq 2$, let $G$ be a cubic graph on $2m$ vertices. If $D$ is a DDS of $G$ such that $|D|=m$ then $G[D:V \setminus D]$ admits a cycle decomposition.
\end{theorem}
If $D$ is not a DDS of a cubic graph $G$ with $|D|= \frac{|V(G)|}{2}$, then $G[D:V \setminus D]$ need not be a union of vertex-disjoint cycles. For instance, let $G = P_{4,1}$ and $D = \{u_{0},u_{1},u_{2},u_{3}\}$. It is easy to see that $G[D:V \setminus D]$ is not a union of vertex-disjoint cycles.
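This counterexample is easy to verify mechanically: with $D$ equal to the outer vertices of $P_{4,1}$, every vertex of the edge cut has degree one, so $G[D:V \setminus D]$ is the perfect matching formed by the four spokes. A small sketch:

```python
def edge_cut_degrees(n, k, d):
    """Degrees in the subgraph induced by the edge cut [D : V \\ D] of P_{n,k}."""
    deg = {}
    def add(a, b):
        if (a in d) != (b in d):            # the edge crosses the cut
            deg[a] = deg.get(a, 0) + 1
            deg[b] = deg.get(b, 0) + 1
    for i in range(n):
        add(('u', i), ('u', (i + 1) % n))   # outer cycle
        add(('u', i), ('v', i))             # spoke
        add(('v', i), ('v', (i + k) % n))   # inner cycle
    return deg

# D = outer vertices of P_{4,1}: only the spokes cross the cut, and every
# vertex has degree 1 in G[D : V \ D] -- a matching, not a union of cycles.
d = {('u', i) for i in range(4)}
print(set(edge_cut_degrees(4, 1, d).values()))  # -> {1}
```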
\subsection{Bounds on $\gamma_{\times 2}(P_{n,k},\sigma)$}
A DDS of $(P_{n,k}, \sigma)$ need not be a DDS of $(P_{n,k}, \sigma')$, where $\sigma'$ is not equivalent to $\sigma$. For example, let $\Sigma = (P_{4,1}, \sigma)$, where $\sigma$ is a signature of $P_{4,1}$ for which the outer cycle $C_{o}$ and the inner cycle $C_{i}$ are positive. It is easy to check that the set $D = \{u_{0},v_{0},u_{2},v_{2}\}$, see Figure~\ref{P4}, forms a DDS of $\Sigma$. But if we take a signature $\sigma'$ for which $C_{o}$ and $C_{i}$ are negative, then $D = \{u_{0},v_{0},u_{2},v_{2}\}$ does not satisfy the condition (ii) of Definition~\ref{def_dd}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.45]
\node[vertex] (v1) at (0,6) {};
\draw [black] (0,6) circle (9pt);
\node [below] at (-0.7,6.9) {$u_{0}$};
\node[vertex] (v2) at (6,6) {};
\node [below] at (6.6,6.8) {$u_{1}$};
\node[vertex] (v3) at (6,0) {};
\draw [black] (6,0) circle (9pt);
\node [below] at (6.75,0.1) {$u_{2}$};
\node[vertex] (v4) at (0,0) {};
\node [below] at (-0.6,0.2) {$u_{3}$};
\node[vertex] (v5) at (1,5) {};
\draw [black] (1,5) circle (9pt);
\node [below] at (1.6,4.8) {$v_{0}$};
\node[vertex] (v6) at (5,5) {};
\node [below] at (4.5,4.8) {$v_{1}$};
\node[vertex] (v7) at (5,1) {};
\draw [black] (5,1) circle (9pt);
\node [below] at (4.4,1.9) {$v_{2}$};
\node[vertex] (v8) at (1,1) {};
\node [below] at (1.6,1.9) {$v_{3}$};
\filldraw [black] (0,6) circle (3.5pt);
\filldraw [black] (6,6) circle (3.5pt);
\filldraw [black] (6,0) circle (3.5pt);
\filldraw [black] (0,0) circle (3.5pt);
\filldraw [black] (1,5) circle (3.5pt);
\filldraw [black] (5,5) circle (3.5pt);
\filldraw [black] (5,1) circle (3.5pt);
\filldraw [black] (1,1) circle (3.5pt);
\foreach \from/\to in {v1/v2,v2/v3,v3/v4,v4/v1,v5/v6,v6/v7,v7/v8,v8/v5,v1/v5,v2/v6,v3/v7,v4/v8} \draw (\from) -- (\to);
\end{tikzpicture}
\caption{A DDS of $P_{4,1}$.}
\label{P4}
\end{figure}
We saw that for two distinct signed graphs $\Sigma_1$ and $\Sigma_2$ having the same underlying graph, a DDS of $\Sigma_1$ need not be a DDS of $\Sigma_2$. So, in order to obtain an upper bound on the DDN of signed generalized Petersen graphs, we will construct DDSs of generalized Petersen graphs in such a way that they satisfy condition (ii) of Definition~\ref{def_dd} for all possible signatures of generalized Petersen graphs.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.45]
\node[vertex] (v1) at (0,3.2) {};
\draw [black] (0,3.2) circle (9pt);
\node [below] at (0,4.5) {$u_{0}$};
\node[vertex] (v2) at (0,0) {};
\draw [black] (0,0) circle (9pt);
\node [below] at (0,-0.35) {$v_{0}$};
\node[vertex] (v3) at (3,3.2) {};
\node [below] at (3,4.5) {$u_{1}$};
\node[vertex] (v4) at (3,0) {};
\node [below] at (3,-0.35) {$v_{1}$};
\node[vertex] (v5) at (6,3.2) {};
\draw [black] (6,3.2) circle (9pt);
\node [below] at (6,4.5) {$u_{2}$};
\node[vertex] (v6) at (6,0) {};
\draw [black] (6,0) circle (9pt);
\node [below] at (6,-0.35) {$v_{2}$};
\node[vertex] (v7) at (9,3.2) {};
\node [below] at (9,4.5) {$u_{3}$};
\node[vertex] (v8) at (9,0) {};
\node [below] at (9,-0.35) {$v_{3}$};
\node[vertex] (v9) at (-3,3.2) {};
\draw [black] (-3,3.2) circle (9pt);
\node [below] at (-3,4.5) {$u_{2m}$};
\node[vertex] (v10) at (-3,0) {};
\node [below] at (-3,-0.35) {$v_{2m}$};
\node[vertex] (v11) at (-6,3.2) {};
\draw [black] (-6,3.2) circle (9pt);
\node [below] at (-6,4.5) {$u_{2m-1}$};
\node[vertex] (v12) at (-6,0) {};
\node [below] at (-6,-0.35) {$v_{2m-1}$};
\node[vertex] (v13) at (-9,3.2) {};
\draw [black] (-9,3.2) circle (9pt);
\node [below] at (-9,4.5) {$u_{2m-2}$};
\node[vertex] (v14) at (-9,0) {};
\draw [black] (-9,0) circle (9pt);
\node [below] at (-9,-0.35) {$v_{2m-2}$};
\node[vertex] (v15) at (-12,3.2) {};
\node [below] at (-12,4.5) {$u_{2m-3}$};
\node[vertex] (v16) at (-12,0) {};
\node [below] at (-12,-0.35) {$v_{2m-3}$};
\filldraw [black] (0,3.2) circle (3.5pt);
\filldraw [black] (3,3.2) circle (3.5pt);
\filldraw [black] (6,3.2) circle (3.5pt);
\filldraw [black] (9,3.2) circle (3.5pt);
\filldraw [black] (0,0) circle (3.5pt);\filldraw [black] (9,0) circle (3.5pt);
\filldraw [black] (3,0) circle (3.5pt);
\filldraw [black] (6,0) circle (3.5pt);
\filldraw [black] (-3,3.2) circle (3.5pt);
\filldraw [black] (-3,0) circle (3.5pt);
\filldraw [black] (-6,3.2) circle (3.5pt);
\filldraw [black] (-6,0) circle (3.5pt);
\filldraw [black] (-9,3.2) circle (3.5pt);
\filldraw [black] (-9,0) circle (3.5pt);
\filldraw [black] (-12,3.2) circle (3.5pt);
\filldraw [black] (-12,0) circle (3.5pt);
\foreach \from/\to in {v1/v3,v3/v5,v5/v7,v1/v2,v2/v4,v3/v4,v4/v6,v5/v6,v6/v8,v7/v8,v9/v1,v9/v10,v10/v12,v11/v12,v11/v13,v13/v14,v2/v10,v9/v11,v12/v14,v14/v16,v15/v16,v13/v15} \draw (\from) -- (\to);
\draw (9,3.2) -- (12,3.2);
\draw (9,0) -- (12,0);
\draw (-12,3.2) -- (-15,3.2);
\draw (-12,0) -- (-15,0);
\end{tikzpicture}
\caption{A DDS of $P_{2m+1,1}$.}
\label{P_2m+1}
\end{figure}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.45]
\node[vertex] (v1) at (0,3.2) {};
\draw [black] (0,3.2) circle (9pt);
\node [below] at (0,4.5) {$u_{0}$};
\node[vertex] (v2) at (0,0) {};
\draw [black] (0,0) circle (9pt);
\node [below] at (0,-0.35) {$v_{0}$};
\node[vertex] (v3) at (3,3.2) {};
\node [below] at (3,4.5) {$u_{1}$};
\node[vertex] (v4) at (3,0) {};
\node [below] at (3,-0.35) {$v_{1}$};
\node[vertex] (v5) at (6,3.2) {};
\draw [black] (6,3.2) circle (9pt);
\node [below] at (6,4.5) {$u_{2}$};
\node[vertex] (v6) at (6,0) {};
\draw [black] (6,0) circle (9pt);
\node [below] at (6,-0.35) {$v_{2}$};
\node[vertex] (v7) at (9,3.2) {};
\node [below] at (9,4.5) {$u_{3}$};
\node[vertex] (v8) at (9,0) {};
\node [below] at (9,-0.35) {$v_{3}$};
\node[vertex] (v9) at (-3,3.2) {};
\draw [black] (-3,3.2) circle (9pt);
\node [below] at (-3,4.5) {$u_{2m-1}$};
\node[vertex] (v10) at (-3,0) {};
\draw [black] (-3,0) circle (9pt);
\node [below] at (-3,-0.35) {$v_{2m-1}$};
\node[vertex] (v11) at (-6,3.2) {};
\draw [black] (-6,3.2) circle (9pt);
\node [below] at (-6,4.5) {$u_{2m-2}$};
\node[vertex] (v12) at (-6,0) {};
\draw [black] (-6,0) circle (9pt);
\node [below] at (-6,-0.35) {$v_{2m-2}$};
\node[vertex] (v13) at (-9,3.2) {};
\node [below] at (-9,4.5) {$u_{2m-3}$};
\node[vertex] (v14) at (-9,0) {};
\node [below] at (-9,-0.35) {$v_{2m-3}$};
\filldraw [black] (0,3.2) circle (3.5pt);
\filldraw [black] (3,3.2) circle (3.5pt);
\filldraw [black] (6,3.2) circle (3.5pt);
\filldraw [black] (9,3.2) circle (3.5pt);
\filldraw [black] (0,0) circle (3.5pt);\filldraw [black] (9,0) circle (3.5pt);
\filldraw [black] (3,0) circle (3.5pt);
\filldraw [black] (6,0) circle (3.5pt);
\filldraw [black] (-3,3.2) circle (3.5pt);
\filldraw [black] (-3,0) circle (3.5pt);
\filldraw [black] (-6,3.2) circle (3.5pt);
\filldraw [black] (-6,0) circle (3.5pt);
\filldraw [black] (-9,3.2) circle (3.5pt);
\filldraw [black] (-9,0) circle (3.5pt);
\foreach \from/\to in {v1/v3,v3/v5,v5/v7,v1/v2,v2/v4,v3/v4,v4/v6,v5/v6,v6/v8,v7/v8,v9/v1,v9/v10,v10/v12,v11/v12,v11/v13,v13/v14,v2/v10,v9/v11,v12/v14} \draw (\from) -- (\to);
\draw (9,3.2) -- (12,3.2);
\draw (9,0) -- (12,0);
\draw (-9,3.2) -- (-12,3.2);
\draw (-9,0) -- (-12,0);
\end{tikzpicture}
\caption{A DDS of $P_{2m,1}$.}
\label{P_2m}
\end{figure}
The following two lemmas will be used to obtain bounds on the DDN of signed generalized Petersen graphs with $k=1$.
\begin{lem}\label{bound_1}
Let $\Sigma = (P_{2m+1,1}, \sigma)$ be any signed generalized Petersen graph. Then $$2m+1 \leq \gamma_{\times 2}(\Sigma) \leq 2m+2.$$
\end{lem}
\begin{proof}
The lower bound follows from Theorem~\ref{general bounds}.
To get the upper bound, it suffices to construct a DDS of $\Sigma$ that uses $2m+2$ vertices. Consider the set $D=\{u_{2i},v_{2i}~|~i = 0,1,\ldots,m-1\} \cup \{u_{2m-1},u_{2m}\}$. It is easy to check that $D$ is a DDS of $P_{2m+1,1}$, as illustrated in Figure~\ref{P_2m+1}, and $|D| = 2m+2$.
To complete the proof, it remains to show that $\Sigma [D:V\setminus D]$ is balanced. Note that $\Sigma[D:V\setminus D]$ is the union of two vertex-disjoint paths $P_{1}$ and $P_{2}$, where $P_{1}=u_{0}u_{1}u_{2}\ldots u_{2m-3}u_{2m-2}$ and $P_{2}=u_{2m}v_{2m}v_{0}v_{1}v_{2}\ldots v_{2m-2}v_{2m-1}u_{2m-1}$. This implies that $\Sigma[D:V \setminus D]$ is acyclic, and so $\Sigma [D:V \setminus D]$ is balanced. Thus $D$ is a DDS of $\Sigma$.
\end{proof}
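The construction in this proof is easy to verify mechanically on small cases. The following sketch (Python; the helper names \texttt{petersen}, \texttt{is\_dds}, \texttt{cut\_edges}, and \texttt{is\_forest} are ours, not from the paper) builds $P_{2m+1,1}$, forms the set $D$ of Lemma~\ref{bound_1}, and checks that every vertex is dominated at least twice, read here as $|N[x]\cap D|\geq 2$, and that the cut $[D:V\setminus D]$ is acyclic, so that $\Sigma[D:V\setminus D]$ is balanced for every signature $\sigma$.

```python
# Illustrative check (a sketch, not part of the proof): for small m, the set D
# from Lemma "bound_1" double-dominates P_{2m+1,1}, and the cut [D : V \ D]
# is a forest, hence balanced under every signature.

def petersen(n, k):
    """Adjacency lists of the generalized Petersen graph P_{n,k}."""
    adj = {(t, i): set() for t in "uv" for i in range(n)}
    for i in range(n):
        for a, b in ((("u", i), ("u", (i + 1) % n)),  # outer cycle C_o
                     (("v", i), ("v", (i + k) % n)),  # inner edges, step k
                     (("u", i), ("v", i))):           # spokes
            adj[a].add(b)
            adj[b].add(a)
    return adj

def is_dds(adj, D):
    # |N[x] ∩ D| >= 2 for every vertex x (closed neighbourhood)
    return all(len(({x} | adj[x]) & D) >= 2 for x in adj)

def cut_edges(adj, D):
    # edges with exactly one endpoint in D
    return {frozenset((a, b)) for a in D for b in adj[a] if b not in D}

def is_forest(edges):
    """Union-find cycle test on an edge set."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for e in edges:
        ra, rb = map(find, e)
        if ra == rb:          # this edge would close a cycle
            return False
        parent[ra] = rb
    return True

for m in range(1, 9):
    n = 2 * m + 1
    adj = petersen(n, 1)
    D = {(t, 2 * i) for t in "uv" for i in range(m)} | {("u", 2 * m - 1), ("u", 2 * m)}
    assert len(D) == 2 * m + 2
    assert is_dds(adj, D)
    assert is_forest(cut_edges(adj, D))
```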
\begin{lem}\label{bound_2}
Let $\Sigma = (P_{2m,1}, \sigma)$ be any signed generalized Petersen graph. Then $$2m \leq \gamma_{\times 2}(\Sigma) \leq 2m+2.$$ Moreover, there exists a signed graph $\Sigma = (P_{2m,1}, \sigma)$, such that $\gamma_{\times 2}(\Sigma) = 2m$.
\end{lem}
\begin{proof}
From Theorem~\ref{general bounds}, it is obvious that $2m \leq \gamma_{\times 2}(\Sigma)$.
To get the upper bound, we exhibit a DDS of $\Sigma$ having $2m+2$ vertices. Consider the set $D=\{u_{2i},v_{2i}~|~i = 0,1,\ldots,m-1\} \cup \{u_{2m-1},v_{2m-1}\}$, as depicted in Figure~\ref{P_2m}. It is clear that each vertex of $P_{2m,1}$ is dominated at least twice by $D$ and that $|D| = 2m+2$.
Now we show that $\Sigma[D:V \setminus D]$ is balanced. Notice that $[D:V \setminus D] = E(P_{1}) \cup E(P_{2})$, where $P_{1}=u_{0}u_{1}u_{2}\ldots u_{2m-2}$ and $P_{2}=v_{0}v_{1}v_{2}\ldots v_{2m-2}$ are two vertex-disjoint paths. Therefore $\Sigma[D:V \setminus D]$ is acyclic, and so $\Sigma[D:V \setminus D]$ is balanced. This shows that $D$ is a DDS of $\Sigma$. Hence $\gamma_{\times 2}(\Sigma) \leq 2m+2$.
Let $\Sigma=(P_{2m,1}, \sigma)$, where $\sigma$ is any signature such that both the outer cycle $C_{o}$ and the inner cycle $C_{i}$ are positive in $\Sigma$. Consider the set $D=\{u_{2i},v_{2i}~|~i = 0,1,\ldots,m-1\}$, which is clearly a DDS of $P_{2m,1}$ with $|D|=2m$. It is easy to see that $\Sigma[D:V \setminus D] = C_{o} \cup C_{i}$. Thus $\Sigma[D:V \setminus D]$ is balanced, as $C_{o}$ and $C_{i}$ are positive in $\Sigma$. Hence $\gamma_{\times 2}(\Sigma) = 2m$, and the proof is complete.
\end{proof}
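The tight example in the last paragraph can be checked the same way. Assuming the same reading of double domination ($|N[x]\cap D|\geq 2$), the sketch below (Python; the helper name \texttt{petersen\_n1} is ours) confirms that $D=\{u_{2i},v_{2i}\}$ has size $2m$ and that its cut against the complement consists of exactly the outer and inner cycle edges, so $D$ is a DDS of $\Sigma$ whenever both cycles are positive.

```python
# Illustrative check for the tight case of Lemma "bound_2" (a sketch): in
# P_{2m,1}, D = {u_{2i}, v_{2i} : 0 <= i < m} double-dominates, and the cut
# [D : V \ D] equals E(C_o) ∪ E(C_i) -- no spoke edges appear in it.

def petersen_n1(n):
    """Adjacency lists of P_{n,1}."""
    adj = {(t, i): set() for t in "uv" for i in range(n)}
    for i in range(n):
        for a, b in ((("u", i), ("u", (i + 1) % n)),  # outer cycle
                     (("v", i), ("v", (i + 1) % n)),  # inner cycle
                     (("u", i), ("v", i))):           # spokes
            adj[a].add(b)
            adj[b].add(a)
    return adj

for m in range(2, 9):
    n = 2 * m
    adj = petersen_n1(n)
    D = {(t, 2 * i) for t in "uv" for i in range(m)}
    assert len(D) == 2 * m
    assert all(len(({x} | adj[x]) & D) >= 2 for x in adj)   # double domination
    cut = {frozenset((a, b)) for a in D for b in adj[a] if b not in D}
    outer = {frozenset((("u", i), ("u", (i + 1) % n))) for i in range(n)}
    inner = {frozenset((("v", i), ("v", (i + 1) % n))) for i in range(n)}
    assert cut == outer | inner          # the cut is exactly C_o ∪ C_i
```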
Lemma~\ref{bound_1} and Lemma~\ref{bound_2} together yield the following theorem.
\begin{theorem}\label{bound_(n,1)}
Let $\Sigma = (P_{n,1}, \sigma)$ be any signed generalized Petersen graph. Then $$ n \leq \gamma_{\times 2}(\Sigma) \leq 2\big(\lfloor \frac{n}{2} \rfloor +1\big). $$
\end{theorem}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.55]
\node[vertex] (v5) at ({360/17*0 }:7cm) {};
\node[vertex] (v4) at ({360/17*1 }:7cm) {};
\node[vertex] (v3) at ({360/17 *2 }:7cm) {};
\node[vertex] (v2) at ({360/17 *3 }:7cm) {};
\node[vertex] (v1) at ({360/17 *4 }:7cm) {};
\node[vertex] (v17) at ({360/17 *5}:7cm) {};
\node[vertex] (v16) at ({360/17 *6 }:7cm) {};
\node[vertex] (v15) at ({360/17 *7}:7cm) {};
\node[vertex] (v14) at ({360/17 *8}:7cm) {};
\node[vertex] (v13) at ({360/17 *9}:7cm) {};
\node[vertex] (v12) at ({360/17 *10}:7cm) {};
\node[vertex] (v11) at ({360/17 *11}:7cm) {};
\node[vertex] (v10) at ({360/17 *12}:7cm) {};
\node[vertex] (v9) at ({360/17 *13}:7cm) {};
\node[vertex] (v8) at ({360/17 *14}:7cm) {};
\node[vertex] (v7) at ({360/17 *15}:7cm) {};
\node[vertex] (v6) at ({360/17 *16}:7cm) {};
\draw ({360/17 *0 +1.5}:7cm) arc ({360/17*0+1.5}:{360/17*1 -1.5 }:7cm);
\draw ({360/17 *1 +1.5}:7cm) arc ({360/17*1+1.5}:{360/17*2 -1.5 }:7cm);
\draw ({360/17 *2 +1.5}:7cm) arc ({360/17*2+1.5}:{360/17*3 -1.5 }:7cm);
\draw ({360/17 *3 +1.5}:7cm) arc ({360/17*3+1.5}:{360/17*4 -1.5 }:7cm);
\draw ({360/17 *4 +1.5}:7cm) arc ({360/17*4+1.5}:{360/17*5 -1.5 }:7cm);
\draw ({360/17 *5 +1.5}:7cm) arc ({360/17*5+1.5}:{360/17*6 -1.5 }:7cm);
\draw ({360/17 *6 +1.5}:7cm) arc ({360/17*6+1.5}:{360/17*7 -1.5 }:7cm);
\draw ({360/17 *7 +1.5}:7cm) arc ({360/17*7+1.5}:{360/17*8 -1.5 }:7cm);
\draw ({360/17 *8 +1.5}:7cm) arc ({360/17*8+1.5}:{360/17*9 -1.5 }:7cm);
\draw ({360/17 *9 +1.5}:7cm) arc ({360/17*9+1.5}:{360/17*10 -1.5 }:7cm);
\draw ({360/17 *10 +1.5}:7cm) arc ({360/17*10+1.5}:{360/17*11 -1.5 }:7cm);
\draw ({360/17 *11+1.5}:7cm) arc ({360/17*11+1.5}:{360/17*12 -1.5 }:7cm);
\draw ({360/17 *12 +1.5}:7cm) arc ({360/17*12+1.5}:{360/17*13 -1.5 }:7cm);
\draw ({360/17 *13 +1.5}:7cm) arc ({360/17*13+1.5}:{360/17*14 -1.5 }:7cm);
\draw ({360/17 *14 +1.5}:7cm) arc ({360/17*14+1.5}:{360/17*15 -1.5 }:7cm);
\draw ({360/17 *15 +1.5}:7cm) arc ({360/17*15+1.5}:{360/17*16 -1.5 }:7cm);
\draw ({360/17 *16 +1.5}:7cm) arc ({360/17*16+1.5}:{360/17*17 -1.5 }:7cm);
\node[xshift=0.4cm, yshift=-0.05cm] at ({360/17*0 }:7cm) {$u_{4}$};
\node[xshift=0.4cm] at ({360/17*1 }:7cm) {$u_{3}$};
\node[xshift=0.25cm, yshift=0.2cm] at ({360/17 *2 }:7cm) {$u_{2}$};
\node[xshift=0.15cm,yshift=0.25cm] at ({360/17 *3 }:7cm) {$u_{1}$};
\node[ yshift=0.3cm] at ({360/17 *4 }:7cm) {$u_{0}$};
\node[xshift=-0.15cm, yshift=0.3cm] at ({360/17 *5}:7cm) {$u_{16}$};
\node[xshift=-0.3cm, yshift=0.25cm] at ({360/17 *6 }:7cm) {$u_{15}$};
\node[xshift=-0.4cm, yshift=0.25cm] at ({360/17 *7}:7cm) {$u_{14}$};
\node[xshift=-0.45cm] at ({360/17 *8}:7cm) {$u_{13}$};
\node[xshift=-0.45cm, yshift=-0.15cm] at ({360/17 *9}:7cm) {$u_{12}$};
\node[xshift=-0.3cm, yshift=-0.25cm] at ({360/17 *10}:7cm) {$u_{11}$};
\node[xshift=-0.1cm, yshift=-0.35cm] at ({360/17 *11}:7cm) {$u_{10}$};
\node[yshift=-0.35cm] at ({360/17 *12}:7cm) {$u_{9}$};
\node[xshift=0.1cm, yshift=-0.35cm] at ({360/17 *13}:7cm) {$u_{8}$};
\node[xshift=0.2cm, yshift=-0.3cm] at ({360/17 *14}:7cm) {$u_{7}$};
\node[xshift=0.35cm, yshift=-0.25cm] at ({360/17 *15}:7cm) {$u_{6}$};
\node[xshift=0.35cm, yshift=-0.15cm] at ({360/17 *16}:7cm) {$u_{5}$};
\filldraw[black] ({360/17*0 }:7cm) circle (3pt);
\filldraw [black] ({360/17 *1}:7cm) circle (3pt);
\filldraw [black] ({360/17 *2}:7cm) circle (3pt);
\filldraw [black] ({360/17 *3}:7cm) circle (3pt);
\filldraw [black] ({360/17 *4}:7cm) circle (3pt);
\filldraw [black] ({360/17 *5}:7cm) circle (3pt);
\filldraw [black] ({360/17 *6}:7cm) circle (3pt);
\filldraw [black] ({360/17 *7}:7cm) circle (3pt);
\filldraw [black] ({360/17 *8}:7cm) circle (3pt);
\filldraw [black] ({360/17 *9}:7cm) circle (3pt);
\filldraw [black] ({360/17 *10}:7cm) circle (3pt);
\filldraw [black] ({360/17 *11}:7cm) circle (3pt);
\filldraw [black] ({360/17 *12}:7cm) circle (3pt);
\filldraw [black] ({360/17 *13}:7cm) circle (3pt);
\filldraw [black] ({360/17 *14}:7cm) circle (3pt);
\filldraw [black] ({360/17 *15}:7cm) circle (3pt);
\filldraw [black] ({360/17 *16}:7cm) circle (3pt);
\draw [black] ({360/17*0 }:7cm) circle (7pt);
\draw [black] ({360/17 *1}:7cm) circle (7pt);
\draw [black] ({360/17 *2}:7cm) circle (7pt);
\draw [black] ({360/17 *3}:7cm) circle (7pt);
\draw [black] ({360/17 *4}:7cm) circle (7pt);
\draw [black] ({360/17 *5}:7cm) circle (7pt);
\draw [black] ({360/17 *6}:7cm) circle (7pt);
\draw [black] ({360/17 *7}:7cm) circle (7pt);
\draw [black] ({360/17 *8}:7cm) circle (7pt);
\draw [black] ({360/17 *9}:7cm) circle (7pt);
\draw [black] ({360/17 *10}:7cm) circle (7pt);
\draw [black] ({360/17 *11}:7cm) circle (7pt);
\draw [black] ({360/17 *12}:7cm) circle (7pt);
\draw [black] ({360/17 *13}:7cm) circle (7pt);
\draw [black] ({360/17 *14}:7cm) circle (7pt);
\draw [black] ({360/17 *15}:7cm) circle (7pt);
\draw [black] ({360/17 *16}:7cm) circle (7pt);
\node[vertex] (v22) at ({360/17*0 }:5.5cm) {};
\node[vertex] (v21) at ({360/17*1 }:5.5cm) {};
\node[vertex] (v20) at ({360/17 *2 }:5.5cm) {};
\node[vertex] (v19) at ({360/17 *3 }:5.5cm) {};
\node[vertex] (v18) at ({360/17 *4 }:5.5cm) {};
\node[vertex] (v34) at ({360/17 *5}:5.5cm) {};
\node[vertex] (v33) at ({360/17 *6 }:5.5cm) {};
\node[vertex] (v32) at ({360/17 *7}:5.5cm) {};
\node[vertex] (v31) at ({360/17 *8}:5.5cm) {};
\node[vertex] (v30) at ({360/17 *9}:5.5cm) {};
\node[vertex] (v29) at ({360/17 *10}:5.5cm) {};
\node[vertex] (v28) at ({360/17 *11}:5.5cm) {};
\node[vertex] (v27) at ({360/17 *12}:5.5cm) {};
\node[vertex] (v26) at ({360/17 *13}:5.5cm) {};
\node[vertex] (v25) at ({360/17 *14}:5.5cm) {};
\node[vertex] (v24) at ({360/17 *15}:5.5cm) {};
\node[vertex] (v23) at ({360/17 *16}:5.5cm) {};
\filldraw[black] ({360/17*0 }:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *1}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *2}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *3}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *4}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *5}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *6}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *7}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *8}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *9}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *10}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *11}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *12}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *13}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *14}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *15}:5.5cm) circle (3pt);
\filldraw [black] ({360/17 *16}:5.5cm) circle (3pt);
\draw [black] ({360/17 *1}:5.5cm) circle (7pt);
\draw [black] ({360/17 *2}:5.5cm) circle (7pt);
\draw [black] ({360/17 *15}:5.5cm) circle (7pt);
\draw [black] ({360/17 *14}:5.5cm) circle (7pt);
\draw [black] ({360/17 *11}:5.5cm) circle (7pt);
\draw [black] ({360/17 *10}:5.5cm) circle (7pt);
\draw [black] ({360/17 *7}:5.5cm) circle (7pt);
\draw [black] ({360/17 *6}:5.5cm) circle (7pt);
\node[xshift=0.15cm, yshift=0.25cm] at ({360/17*0 }:5.5cm) {$v_{4}$};
\node[xshift=0.05cm, yshift=0.25cm] at ({360/17*1 }:5.5cm) {$v_{3}$};
\node[xshift=-0.05cm, yshift=0.25cm] at ({360/17 *2 }:5.5cm) {$v_{2}$};
\node[xshift=-0.2cm, yshift=0.25cm] at ({360/17 *3 }:5.5cm) {$v_{1}$};
\node[ xshift=-0.3cm, yshift=0.2cm] at ({360/17 *4 }:5.5cm) {$v_{0}$};
\node[xshift=-0.35cm, yshift=0.1cm] at ({360/17 *5}:5.5cm) {$v_{16}$};
\node[xshift=-0.4cm, yshift=0.05cm] at ({360/17 *6 }:5.5cm) {$v_{15}$};
\node[xshift=-0.35cm, yshift=-0.15cm] at ({360/17 *7}:5.5cm) {$v_{14}$};
\node[xshift=-0.25cm, yshift=-0.3cm] at ({360/17 *8}:5.5cm) {$v_{13}$};
\node[xshift=-0.15cm, yshift=-0.35cm] at ({360/17 *9}:5.5cm) {$v_{12}$};
\node[xshift=0.1cm, yshift=-0.35cm] at ({360/17 *10}:5.5cm) {$v_{11}$};
\node[xshift=0.2cm, yshift=-0.35cm] at ({360/17 *11}:5.5cm) {$v_{10}$};
\node[xshift=0.25cm, yshift=-0.25cm] at ({360/17 *12}:5.5cm) {$v_{9}$};
\node[xshift=0.35cm, yshift=-0.15cm] at ({360/17 *13}:5.5cm) {$v_{8}$};
\node[xshift=0.35cm, yshift=-0.1cm] at ({360/17 *14}:5.5cm) {$v_{7}$};
\node[xshift=0.4cm, yshift=-0.05cm] at ({360/17 *15}:5.5cm) {$v_{6}$};
\node[xshift=0.3cm, yshift=0.1cm] at ({360/17 *16}:5.5cm) {$v_{5}$};
\foreach \from/\to in {v1/v18,v2/v19,v3/v20,v4/v21,v5/v22,v6/v23,v7/v24,v8/v25,v9/v26,v10/v27,v11/v28,v12/v29,v13/v30,v14/v31,v15/v32,v16/v33,v17/v34,v18/v20,v19/v21,v20/v22,v21/v23,v22/v24,v23/v25,v24/v26,v25/v27,v26/v28,v27/v29,v28/v30,v29/v31,v30/v32,v31/v33,v32/v34,v33/v18,v34/v19} \draw (\from) -- (\to);
\end{tikzpicture}
\caption{An example for the upper bound of Lemma~\ref{bound_3}: a DDS of $P_{17,2}$.}\label{P_17,2}
\end{center}
\end{figure}
We will use the following two lemmas to obtain bounds on the DDN of signed generalized Petersen graphs $(P_{n,k}, \sigma)$ with $\gcd(n,k) = 1$ and $k\geq 2$.
\begin{lem}\label{bound_3}
Let $\Sigma = (P_{n,k}, \sigma)$ be any signed generalized Petersen graph, where $\gcd(n,k) =1$ and $k\geq2$. Let $\lceil \frac{n}{k} \rceil = 2m+1$ for some $m\geq1$. Then $ n \leq \gamma_{\times 2}(\Sigma) \leq n+mk$.
\end{lem}
\begin{proof}
Recall that $U$ denotes the set of $u\text{-vertices}$ and $V_v$ denotes the set of $v\text{-vertices}$ of $P_{n,k}$. Clearly $|U| = |V_{v}| = n$. Let $V_{1},V_{2},\ldots,V_{2m},V_{2m+1}$ be a partition of the set $V_v$ such that $V_{i}= \{v_{(i-1)k},v_{(i-1)k+1},\ldots,v_{(i-1)k+(k-1)}\}$ for $1\leq i\leq 2m$ and $V_{2m+1} = V_v \setminus \bigcup_{i=1}^{2m}V_i$. For each $1 \leq i \leq 2m$, it is obvious that $|V_{i}| = k$. Thus we have $|V_{2m+1}| = n-2mk$.
To get the upper bound, we take the set $D= U \cup \left(\bigcup_{i=1}^{m} V_{2i}\right)$; see Figure~\ref{P_17,2} for an example. It is clear that $|D|=n+mk$. As the cycle $C_o$ lies completely inside $G[D]$, where $G = P_{n,k}$, every $u\text{-vertex}$ is dominated at least twice by $D$. Moreover, since $U \subseteq D$, each $v\text{-vertex}$ has its spoke neighbor in $D$, so it suffices to show that each vertex of $V_v$ is either in $D$ or adjacent to at least one $v\text{-vertex}$ in $D$. Clearly each vertex of $\bigcup_{i=1}^{m} V_{2i}$ is in $D$. Further, for $1\leq i \leq m$, each vertex of $V_{2i-1}$ is adjacent to a vertex of $V_{2i}$, and $V_{2i} \subseteq D$; thus each vertex of $\bigcup_{i=1}^{m} V_{2i-1}$ is adjacent to a $v\text{-vertex}$ in $D$. Similarly, each vertex of $V_{2m+1}$ is adjacent to a vertex of $V_{2m}$, and $V_{2m} \subseteq D$, so each vertex of $V_{2m+1}$ is also adjacent to a $v\text{-vertex}$ in $D$. This shows that $D$ is a DDS of $P_{n,k}$.
Now we show that $\Sigma[D:V \setminus D]$ is balanced. To prove this, it is enough to show that $\Sigma[D:V \setminus D]$ is acyclic. Since $C_{o}$ lies in $G[D]$, each vertex $u_{i}$ is adjacent to at most one vertex of $V \setminus D$, so no $u\text{-vertex}$ can lie on a cycle of $\Sigma[D:V \setminus D]$. Thus any cycle in $\Sigma[D:V \setminus D]$ would consist of $v\text{-vertices}$ only. But since $\gcd(n,k) =1$, the $v\text{-vertices}$ of $P_{n,k}$ induce a single inner cycle $C_i$, so such a cycle would have to be $C_{i}$ itself. However, the vertices $v_{k-1} \in V_{1}$ and $v_{n-1} \in V_{2m+1}$ are adjacent and both belong to $V \setminus D$, so the edge $v_{k-1}v_{n-1}$ does not lie in $[D:V \setminus D]$. Thus $\Sigma[D:V \setminus D]$ cannot contain a cycle, and therefore $\Sigma [D:V \setminus D]$ is balanced. This implies that $\gamma_{\times 2}(\Sigma) \leq n+mk$.
The lower bound follows from Theorem~\ref{general bounds}, and the proof is complete.
\end{proof}
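The construction of Lemma~\ref{bound_3} can be sanity-checked computationally, for instance on $P_{17,2}$ from Figure~\ref{P_17,2}. The sketch below (Python; the helper names \texttt{petersen}, \texttt{is\_forest}, and \texttt{verify\_odd} are ours) forms $D=U\cup V_2\cup V_4\cup\dots\cup V_{2m}$ and verifies $|D|=n+mk$, double domination in the sense $|N[x]\cap D|\geq 2$, and acyclicity of the cut.

```python
# Illustrative check of Lemma "bound_3" (a sketch): gcd(n,k)=1, k>=2,
# ceil(n/k)=2m+1.  D = U ∪ V_2 ∪ V_4 ∪ ... ∪ V_{2m}, where
# V_i = {v_{(i-1)k}, ..., v_{ik-1}}.

def petersen(n, k):
    """Adjacency lists of the generalized Petersen graph P_{n,k}."""
    adj = {(t, i): set() for t in "uv" for i in range(n)}
    for i in range(n):
        for a, b in ((("u", i), ("u", (i + 1) % n)),
                     (("v", i), ("v", (i + k) % n)),
                     (("u", i), ("v", i))):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def is_forest(edges):
    """Union-find cycle test on an edge set."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for e in edges:
        ra, rb = map(find, e)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

def verify_odd(n, k):
    m = (-(-n // k) - 1) // 2            # ceil(n/k) = 2m + 1
    adj = petersen(n, k)
    D = {("u", i) for i in range(n)} | \
        {("v", j) for i in range(1, m + 1) for j in range((2 * i - 1) * k, 2 * i * k)}
    assert len(D) == n + m * k
    assert all(len(({x} | adj[x]) & D) >= 2 for x in adj)   # double domination
    cut = {frozenset((a, b)) for a in D for b in adj[a] if b not in D}
    assert is_forest(cut)    # cut acyclic => balanced for every signature
    return True

for n, k in [(17, 2), (13, 2), (13, 3), (19, 4)]:   # all have ceil(n/k) odd
    assert -(-n // k) % 2 == 1 and verify_odd(n, k)
```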
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.55]
\node[vertex] (v5) at ({360/15*0 }:7cm) {};
\node[vertex] (v4) at ({360/15*1 }:7cm) {};
\node[vertex] (v3) at ({360/15 *2 }:7cm) {};
\node[vertex] (v2) at ({360/15 *3 }:7cm) {};
\node[vertex] (v1) at ({360/15 *4 }:7cm) {};
\node[vertex] (v15) at ({360/15 *5}:7cm) {};
\node[vertex] (v14) at ({360/15 *6 }:7cm) {};
\node[vertex] (v13) at ({360/15 *7}:7cm) {};
\node[vertex] (v12) at ({360/15 *8}:7cm) {};
\node[vertex] (v11) at ({360/15 *9}:7cm) {};
\node[vertex] (v10) at ({360/15 *10}:7cm) {};
\node[vertex] (v9) at ({360/15 *11}:7cm) {};
\node[vertex] (v8) at ({360/15 *12}:7cm) {};
\node[vertex] (v7) at ({360/15 *13}:7cm) {};
\node[vertex] (v6) at ({360/15 *14}:7cm) {};
\draw ({360/15 *0 +1.5}:7cm) arc ({360/15*0+1.5}:{360/15*1 -1.5 }:7cm);
\draw ({360/15 *1 +1.5}:7cm) arc ({360/15*1+1.5}:{360/15*2 -1.5 }:7cm);
\draw ({360/15 *2 +1.5}:7cm) arc ({360/15*2+1.5}:{360/15*3 -1.5 }:7cm);
\draw ({360/15 *3 +1.5}:7cm) arc ({360/15*3+1.5}:{360/15*4 -1.5 }:7cm);
\draw ({360/15 *4 +1.5}:7cm) arc ({360/15*4+1.5}:{360/15*5 -1.5 }:7cm);
\draw ({360/15*5 +1.5}:7cm) arc ({360/15*5+1.5}:{360/15*6 -1.5 }:7cm);
\draw ({360/15 *6 +1.5}:7cm) arc ({360/15*6+1.5}:{360/15*7 -1.5 }:7cm);
\draw ({360/15*7 +1.5}:7cm) arc ({360/15*7+1.5}:{360/15*8 -1.5 }:7cm);
\draw ({360/15 *8 +1.5}:7cm) arc ({360/15*8+1.5}:{360/15*9 -1.5 }:7cm);
\draw ({360/15 *9 +1.5}:7cm) arc ({360/15*9+1.5}:{360/15*10 -1.5 }:7cm);
\draw ({360/15 *10 +1.5}:7cm) arc ({360/15*10+1.5}:{360/15*11 -1.5 }:7cm);
\draw ({360/15 *11+1.5}:7cm) arc ({360/15*11+1.5}:{360/15*12 -1.5 }:7cm);
\draw ({360/15 *12 +1.5}:7cm) arc ({360/15*12+1.5}:{360/15*13 -1.5 }:7cm);
\draw ({360/15 *13 +1.5}:7cm) arc ({360/15*13+1.5}:{360/15*14 -1.5 }:7cm);
\draw ({360/15 *14 +1.5}:7cm) arc ({360/15*14+1.5}:{360/15*15 -1.5 }:7cm);
\node[xshift=0.4cm, yshift=-0.05cm] at ({360/15*0 }:7cm) {$u_{4}$};
\node[xshift=0.4cm] at ({360/15*1 }:7cm) {$u_{3}$};
\node[xshift=0.25cm, yshift=0.2cm] at ({360/15 *2 }:7cm) {$u_{2}$};
\node[xshift=0.15cm,yshift=0.25cm] at ({360/15 *3 }:7cm) {$u_{1}$};
\node[ yshift=0.3cm] at ({360/15 *4 }:7cm) {$u_{0}$};
\node[xshift=-0.2cm, yshift=0.3cm] at ({360/15 *5}:7cm) {$u_{14}$};
\node[xshift=-0.4cm, yshift=0.15cm] at ({360/15 *6 }:7cm) {$u_{13}$};
\node[xshift=-0.4cm, yshift=0.1cm] at ({360/15 *7}:7cm) {$u_{12}$};
\node[xshift=-0.45cm, yshift=-0.05cm] at ({360/15 *8}:7cm) {$u_{11}$};
\node[xshift=-0.35cm, yshift=-0.3cm] at ({360/15 *9}:7cm) {$u_{10}$};
\node[xshift=-0.15cm, yshift=-0.25cm] at ({360/15 *10}:7cm) {$u_{9}$};
\node[yshift=-0.35cm] at ({360/15 *11}:7cm) {$u_{8}$};
\node[xshift=0.1cm, yshift=-0.35cm] at ({360/15 *12}:7cm) {$u_{7}$};
\node[xshift=0.25cm, yshift=-0.3cm] at ({360/15 *13}:7cm) {$u_{6}$};
\node[xshift=0.4cm, yshift=-0.2cm] at ({360/15 *14}:7cm) {$u_{5}$};
\filldraw [black] ({360/15 *0}:7cm) circle (3pt);
\filldraw [black] ({360/15 *1}:7cm) circle (3pt);
\filldraw [black] ({360/15 *2}:7cm) circle (3pt);
\filldraw [black] ({360/15 *3}:7cm) circle (3pt);
\filldraw [black] ({360/15 *4}:7cm) circle (3pt);
\filldraw [black] ({360/15 *5}:7cm) circle (3pt);
\filldraw [black] ({360/15 *6}:7cm) circle (3pt);
\filldraw [black] ({360/15 *7}:7cm) circle (3pt);
\filldraw [black] ({360/15 *8}:7cm) circle (3pt);
\filldraw [black] ({360/15 *9}:7cm) circle (3pt);
\filldraw [black] ({360/15 *10}:7cm) circle (3pt);
\filldraw [black] ({360/15 *11}:7cm) circle (3pt);
\filldraw [black] ({360/15 *12}:7cm) circle (3pt);
\filldraw [black] ({360/15 *13}:7cm) circle (3pt);
\filldraw [black] ({360/15 *14}:7cm) circle (3pt);
\draw [black] ({360/15*0 }:7cm) circle (7pt);
\draw [black] ({360/15 *1}:7cm) circle (7pt);
\draw [black] ({360/15 *2}:7cm) circle (7pt);
\draw [black] ({360/15 *3}:7cm) circle (7pt);
\draw [black] ({360/15 *4}:7cm) circle (7pt);
\draw [black] ({360/15 *5}:7cm) circle (7pt);
\draw [black] ({360/15 *6}:7cm) circle (7pt);
\draw [black] ({360/15 *7}:7cm) circle (7pt);
\draw [black] ({360/15 *8}:7cm) circle (7pt);
\draw [black] ({360/15 *9}:7cm) circle (7pt);
\draw [black] ({360/15 *10}:7cm) circle (7pt);
\draw [black] ({360/15 *11}:7cm) circle (7pt);
\draw [black] ({360/15 *12}:7cm) circle (7pt);
\draw [black] ({360/15 *13}:7cm) circle (7pt);
\draw [black] ({360/15 *14}:7cm) circle (7pt);
\node[vertex] (v20) at ({360/15*0 }:5.5cm) {};
\node[vertex] (v19) at ({360/15*1 }:5.5cm) {};
\node[vertex] (v18) at ({360/15*2 }:5.5cm) {};
\node[vertex] (v17) at ({360/15*3 }:5.5cm) {};
\node[vertex] (v16) at ({360/15*4 }:5.5cm) {};
\node[vertex] (v30) at ({360/15*5 }:5.5cm) {};
\node[vertex] (v29) at ({360/15*6 }:5.5cm) {};
\node[vertex] (v28) at ({360/15*7 }:5.5cm) {};
\node[vertex] (v27) at ({360/15*8 }:5.5cm) {};
\node[vertex] (v26) at ({360/15*9 }:5.5cm) {};
\node[vertex] (v25) at ({360/15*10 }:5.5cm) {};
\node[vertex] (v24) at ({360/15*11 }:5.5cm) {};
\node[vertex] (v23) at ({360/15*12 }:5.5cm) {};
\node[vertex] (v22) at ({360/15*13 }:5.5cm) {};
\node[vertex] (v21) at ({360/15*14 }:5.5cm) {};
\filldraw [black] ({360/15 *0}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *1}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *2}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *3}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *4}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *5}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *6}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *7}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *8}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *9}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *10}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *11}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *12}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *13}:5.5cm) circle (3pt);
\filldraw [black] ({360/15 *14}:5.5cm) circle (3pt);
\draw [black] ({360/15*2 }:5.5cm) circle (7pt);
\draw [black] ({360/15*1 }:5.5cm) circle (7pt);
\draw [black] ({360/15*13 }:5.5cm) circle (7pt);
\draw [black] ({360/15*12 }:5.5cm) circle (7pt);
\draw [black] ({360/15*9 }:5.5cm) circle (7pt);
\draw [black] ({360/15*8 }:5.5cm) circle (7pt);
\draw [black] ({360/15*5 }:5.5cm) circle (7pt);
\node[xshift=0.3cm, yshift=0.1cm] at ({360/15 *14}:5.5cm) {$v_{5}$};
\node[xshift=0.35cm] at ({360/15 *13}:5.5cm) {$v_{6}$};
\node[xshift=0.35cm, yshift=-0.1cm] at ({360/15 *12}:5.5cm) {$v_{7}$};
\node[xshift=0.3cm, yshift=-0.155cm] at ({360/15 *11}:5.5cm) {$v_{8}$};
\node[xshift=0.25cm, yshift=-0.3cm] at ({360/15 *10}:5.5cm) {$v_{9}$};
\node[xshift=0.05cm, yshift=-0.35cm] at ({360/15 *9}:5.5cm) {$v_{10}$};
\node[xshift=-0.05cm, yshift=-0.35cm] at ({360/15 *8}:5.5cm) {$v_{11}$};
\node[xshift=-0.25cm, yshift=-0.2cm] at ({360/15 *7}:5.5cm) {$v_{12}$};
\node[xshift=-0.35cm, yshift=-0.2cm] at ({360/15 *6}:5.5cm) {$v_{13}$};
\node[xshift=-0.4cm] at ({360/15 *5}:5.5cm) {$v_{14}$};
\node[xshift=-0.35cm, yshift=0.1cm] at ({360/15 *4}:5.5cm) {$v_{0}$};
\node[xshift=-0.3cm, yshift=0.15cm] at ({360/15 *3}:5.5cm) {$v_{1}$};
\node[xshift=-0.1cm, yshift=0.3cm] at ({360/15 *2}:5.5cm) {$v_{2}$};
\node[xshift=0.1cm, yshift=0.25cm] at ({360/15 *1}:5.5cm) {$v_{3}$};
\node[xshift=0.15cm, yshift=0.2cm] at ({360/15 *0}:5.5cm) {$v_{4}$};
\foreach \from/\to in {v5/v20,v4/v19,v3/v18,v2/v17,v1/v16,v15/v30,v14/v29,v13/v28,v12/v27,v11/v26,v10/v25,v9/v24,v8/v23,v7/v22,v6/v21,v16/v18,v17/v19,v18/v20,v19/v21,v20/v22,v21/v23,v22/v24,v23/v25,v24/v26,v25/v27,v26/v28,v27/v29,v28/v30,v29/v16,v30/v17} \draw (\from) -- (\to);
\end{tikzpicture}
\caption{An example for the upper bound of Lemma~\ref{bound_4}: a DDS of $P_{15,2}$.}\label{P_15,2}
\end{center}
\end{figure}
\begin{lem}\label{bound_4}
Let $\Sigma = (P_{n,k}, \sigma)$ be any signed generalized Petersen graph, where $\gcd(n,k) =1$ and $k\geq2$. Let $\lceil \frac{n}{k} \rceil = 2m$, for some $m\geq2$. Then $ n \leq \gamma_{\times 2}(\Sigma) \leq 2n-mk$.
\end{lem}
\begin{proof}
Let $V_{1},V_{2},\ldots,V_{2m}$ be a partition of the set $V_v$ such that $V_{i}= \{v_{(i-1)k},v_{(i-1)k+1},\ldots,v_{(i-1)k+(k-1)}\}$ for $1 \leq i \leq 2m-1$ and $V_{2m} = V_v \setminus \bigcup_{i=1}^{2m-1}V_i$. Note that $|V_{i}| = k$ for each $1 \leq i \leq 2m-1$. Thus we have $|V_{2m}| = n-k(2m-1)$.
Consider the set $D=U \cup \left(\bigcup_{i=1}^{m} V_{2i}\right)$; see Figure~\ref{P_15,2} for an example. Clearly $|D| = n+k(m-1)+\big(n-k(2m-1)\big) = 2n-km$. We prove that $D$ is a DDS of $\Sigma$, which gives the required upper bound.
Since the cycle $C_o$ lies completely inside $G[D]$, every $u\text{-vertex}$ is dominated at least twice by $D$. Note that for each $v\text{-vertex}$, the corresponding neighbor $u\text{-vertex}$ is in $D$ as $U \subseteq D$. Thus to show that $D$ dominates every vertex of $V_{v}$ at least twice, we just need to show that each vertex of $V_v$ is either in $D$ or adjacent to at least one $v\text{-vertex}$ in $D$. Clearly each vertex of $\cup_{i=1}^{m} V_{2i}$ is in $D$. Further, for $1\leq i \leq m-1$, each vertex of $V_{2i-1}$ is adjacent to a $v\text{-vertex}$ in $D$ because each vertex of $V_{2i-1}$ is adjacent to a vertex of $V_{2i}$ and $V_{2i} \subseteq D$. Also each vertex of $V_{2m-1}$ is adjacent to a vertex of $V_{2m-2}$ and $V_{2m-2} \subseteq D$. Therefore each vertex of $V_{2m-1}$ is also adjacent to a $v\text{-vertex}$ in $D$. Hence $D$ is a DDS of $P_{n,k}$.
Now it remains to show that $\Sigma[D:V \setminus D]$ is balanced; for this, it is enough to show that $\Sigma[D:V \setminus D]$ is acyclic. Since $C_o$ lies completely inside $G[D]$, every $u\text{-vertex}$ is adjacent to at most one vertex of $V \setminus D$, so no $u\text{-vertex}$ can lie on a cycle of $\Sigma[D:V \setminus D]$. As $\gcd(n,k)=1$, the $v\text{-vertices}$ induce a single inner cycle $C_i$, so any cycle in $\Sigma[D:V \setminus D]$ would have to be $C_{i}$ itself. Further, the vertices $v_{0} \in V_{1}$ and $v_{n-k} \in V_{2m-1}$ are adjacent and both belong to $V\setminus D$; indeed $v_{n-k} \in V_{2m-1}$ because $(2m-1)k < n < 2mk$ (here $n \neq 2mk$, since $\gcd(n,k)=1$ and $k \geq 2$), so $(2m-2)k < n-k < (2m-1)k$. Hence the edge $v_{0}v_{n-k}$ does not lie in $[D:V \setminus D]$, and therefore $\Sigma[D:V \setminus D]$ is acyclic, and so $\Sigma [D:V \setminus D]$ is balanced. Hence $D$ is a DDS of $\Sigma$. This implies that $\gamma_{\times 2}(\Sigma) \leq 2n-km$.
The lower bound follows from Theorem~\ref{general bounds}, and the proof is complete.
\end{proof}
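As with Lemma~\ref{bound_3}, the construction can be checked on small instances such as $P_{15,2}$ from Figure~\ref{P_15,2}. The following sketch (Python; the helper names \texttt{petersen}, \texttt{is\_forest}, and \texttt{verify\_even} are ours) forms $D=U\cup V_2\cup\dots\cup V_{2m}$, with $V_{2m}$ holding the remaining $n-(2m-1)k$ inner vertices, and verifies $|D|=2n-mk$, double domination, and acyclicity of the cut.

```python
# Illustrative check of Lemma "bound_4" (a sketch): gcd(n,k)=1, k>=2,
# ceil(n/k)=2m with m>=2.  "Dominated at least twice" is read as
# |N[x] ∩ D| >= 2, matching the proofs.

def petersen(n, k):
    """Adjacency lists of the generalized Petersen graph P_{n,k}."""
    adj = {(t, i): set() for t in "uv" for i in range(n)}
    for i in range(n):
        for a, b in ((("u", i), ("u", (i + 1) % n)),
                     (("v", i), ("v", (i + k) % n)),
                     (("u", i), ("v", i))):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def is_forest(edges):
    """Union-find cycle test on an edge set."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for e in edges:
        ra, rb = map(find, e)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

def verify_even(n, k):
    m = -(-n // k) // 2                                  # ceil(n/k) = 2m
    adj = petersen(n, k)
    Dv = {("v", j) for i in range(1, m) for j in range((2 * i - 1) * k, 2 * i * k)}
    Dv |= {("v", j) for j in range((2 * m - 1) * k, n)}  # the block V_{2m}
    D = {("u", i) for i in range(n)} | Dv
    assert len(D) == 2 * n - m * k
    assert all(len(({x} | adj[x]) & D) >= 2 for x in adj)
    cut = {frozenset((a, b)) for a in D for b in adj[a] if b not in D}
    assert is_forest(cut)    # cut acyclic => balanced for every signature
    return True

for n, k in [(15, 2), (11, 2), (11, 3), (17, 3)]:   # all have ceil(n/k) even
    assert -(-n // k) % 2 == 0 and verify_even(n, k)
```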
\begin{theorem}\label{bound_(n,k)=1}
Let $\Sigma = (P_{n,k}, \sigma)$ be any signed generalized Petersen graph, where $\gcd(n,k) =1$ and $k\geq2$. Then $ n \leq \gamma_{\times 2}(\Sigma) \leq \frac{3n}{2}. $
\end{theorem}
\begin{proof}
For any positive integers $n$ and $k$, we have $\lfloor \frac{n}{k} \rfloor \leq \frac{n}{k} \leq \lceil \frac{n}{k} \rceil$. So if $\lceil \frac{n}{k} \rceil = 2m$, then $\frac{n}{2} \leq mk$, and hence the upper bound of Lemma~\ref{bound_4} satisfies $2n-mk \leq \frac{3n}{2}$.
Further, under the assumptions of Lemma~\ref{bound_3}, we have $k \nmid n$ (as $\gcd(n,k)=1$ and $k \geq 2$), so $\lfloor \frac{n}{k} \rfloor = \lceil \frac{n}{k} \rceil - 1 = 2m$. Thus $2m \leq \frac{n}{k}$, which implies $mk \leq \frac{n}{2}$, and hence the upper bound of Lemma~\ref{bound_3} satisfies $n+mk \leq \frac{3n}{2}$. We conclude that $ n \leq \gamma_{\times 2}(\Sigma) \leq \frac{3n}{2}$. This completes the proof.
\end{proof}
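The arithmetic in this proof can also be checked exhaustively over a range of parameters. The sketch below (Python; the helper name \texttt{lemma\_upper\_bound} is ours) recomputes the relevant lemma's upper bound from the parity of $\lceil n/k \rceil$ and confirms it never exceeds $\frac{3n}{2}$.

```python
# Illustrative arithmetic check of the theorem (a sketch): the upper bound
# supplied by Lemma bound_3 (ceil(n/k) odd) or Lemma bound_4 (ceil(n/k) even)
# is at most 3n/2 whenever gcd(n,k) = 1 and 2 <= k <= (n-1)/2.
from math import gcd

def lemma_upper_bound(n, k):
    c = -(-n // k)                 # ceil(n/k)
    if c % 2 == 1:                 # Lemma bound_3: c = 2m+1
        m = (c - 1) // 2
        return n + m * k
    m = c // 2                     # Lemma bound_4: c = 2m
    return 2 * n - m * k

for n in range(5, 300):
    for k in range(2, (n - 1) // 2 + 1):
        if gcd(n, k) == 1:
            # compare 2*ub with 3n to avoid fractions
            assert 2 * lemma_upper_bound(n, k) <= 3 * n
```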
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.55]
\node[vertex] (v5) at ({360/16*0 }:7cm) {};
\node[vertex] (v4) at ({360/16*1 }:7cm) {};
\node[vertex] (v3) at ({360/16 *2 }:7cm) {};
\node[vertex] (v2) at ({360/16 *3 }:7cm) {};
\node[vertex] (v1) at ({360/16 *4 }:7cm) {};
\node[vertex] (v16) at ({360/16 *5}:7cm) {};
\node[vertex] (v15) at ({360/16 *6 }:7cm) {};
\node[vertex] (v14) at ({360/16 *7}:7cm) {};
\node[vertex] (v13) at ({360/16 *8}:7cm) {};
\node[vertex] (v12) at ({360/16 *9}:7cm) {};
\node[vertex] (v11) at ({360/16 *10}:7cm) {};
\node[vertex] (v10) at ({360/16 *11}:7cm) {};
\node[vertex] (v9) at ({360/16 *12}:7cm) {};
\node[vertex] (v8) at ({360/16 *13}:7cm) {};
\node[vertex] (v7) at ({360/16 *14}:7cm) {};
\node[vertex] (v6) at ({360/16 *15}:7cm) {};
\draw ({360/16 *0 +1.5}:7cm) arc ({360/16*0+1.5}:{360/16*1 -1.5 }:7cm);
\draw ({360/16 *1 +1.5}:7cm) arc ({360/16*1+1.5}:{360/16*2 -1.5 }:7cm);
\draw ({360/16 *2 +1.5}:7cm) arc ({360/16*2+1.5}:{360/16*3 -1.5 }:7cm);
\draw ({360/16 *3 +1.5}:7cm) arc ({360/16*3+1.5}:{360/16*4 -1.5 }:7cm);
\draw ({360/16 *4 +1.5}:7cm) arc ({360/16*4+1.5}:{360/16*5 -1.5 }:7cm);
\draw ({360/16 *5 +1.5}:7cm) arc ({360/16*5+1.5}:{360/16*6 -1.5 }:7cm);
\draw ({360/16 *6 +1.5}:7cm) arc ({360/16*6+1.5}:{360/16*7 -1.5 }:7cm);
\draw ({360/16 *7 +1.5}:7cm) arc ({360/16*7+1.5}:{360/16*8 -1.5 }:7cm);
\draw ({360/16 *8 +1.5}:7cm) arc ({360/16*8+1.5}:{360/16*9 -1.5 }:7cm);
\draw ({360/16 *9 +1.5}:7cm) arc ({360/16*9+1.5}:{360/16*10 -1.5 }:7cm);
\draw ({360/16 *10 +1.5}:7cm) arc ({360/16*10+1.5}:{360/16*11 -1.5 }:7cm);
\draw ({360/16 *11+1.5}:7cm) arc ({360/16*11+1.5}:{360/16*12 -1.5 }:7cm);
\draw ({360/16 *12 +1.5}:7cm) arc ({360/16*12+1.5}:{360/16*13 -1.5 }:7cm);
\draw ({360/16 *13 +1.5}:7cm) arc ({360/16*13+1.5}:{360/16*14 -1.5 }:7cm);
\draw ({360/16 *14 +1.5}:7cm) arc ({360/16*14+1.5}:{360/16*15 -1.5 }:7cm);
\draw ({360/16 *15 +1.5}:7cm) arc ({360/16*15+1.5}:{360/16*16 -1.5 }:7cm);
\node[xshift=0.35cm] at ({360/16*0 }:7cm) {$u_{4}$};
\node[xshift=0.3cm, yshift=0.1cm] at ({360/16*1 }:7cm) {$u_{3}$};
\node[xshift=0.25cm, yshift=0.2cm] at ({360/16 *2 }:7cm) {$u_{2}$};
\node[xshift=0.15cm,yshift=0.25cm] at ({360/16 *3 }:7cm) {$u_{1}$};
\node[ yshift=0.3cm] at ({360/16 *4 }:7cm) {$u_{0}$};
\node[xshift=-0.15cm, yshift=0.3cm] at ({360/16 *5}:7cm) {$u_{15}$};
\node[xshift=-0.4cm, yshift=0.2cm] at ({360/16 *6 }:7cm) {$u_{14}$};
\node[xshift=-0.4cm, yshift=0.2cm] at ({360/16 *7}:7cm) {$u_{13}$};
\node[xshift=-0.45cm] at ({360/16 *8}:7cm) {$u_{12}$};
\node[xshift=-0.35cm, yshift=-0.2cm] at ({360/16 *9}:7cm) {$u_{11}$};
\node[xshift=-0.15cm, yshift=-0.35cm] at ({360/16 *10}:7cm) {$u_{10}$};
\node[ yshift=-0.3cm] at ({360/16 *11}:7cm) {$u_{9}$};
\node[yshift=-0.35cm] at ({360/16 *12}:7cm) {$u_{8}$};
\node[xshift=0.25cm, yshift=-0.3cm] at ({360/16 *13}:7cm) {$u_{7}$};
\node[xshift=0.3cm, yshift=-0.25cm] at ({360/16 *14}:7cm) {$u_{6}$};
\node[xshift=0.35cm, yshift=-0.15cm] at ({360/16 *15}:7cm) {$u_{5}$};
\foreach \i in {0,...,15} \filldraw[black] ({360/16*\i}:7cm) circle (3pt);
\foreach \i in {0,...,15} \draw[black] ({360/16*\i}:7cm) circle (7pt);
\node[vertex] (v21) at ({360/16*0 }:5.5cm) {};
\node[vertex] (v20) at ({360/16*1 }:5.5cm) {};
\node[vertex] (v19) at ({360/16 *2 }:5.5cm) {};
\node[vertex] (v18) at ({360/16 *3 }:5.5cm) {};
\node[vertex] (v17) at ({360/16 *4 }:5.5cm) {};
\node[vertex] (v32) at ({360/16 *5}:5.5cm) {};
\node[vertex] (v31) at ({360/16 *6 }:5.5cm) {};
\node[vertex] (v30) at ({360/16 *7}:5.5cm) {};
\node[vertex] (v29) at ({360/16 *8}:5.5cm) {};
\node[vertex] (v28) at ({360/16 *9}:5.5cm) {};
\node[vertex] (v27) at ({360/16 *10}:5.5cm) {};
\node[vertex] (v26) at ({360/16 *11}:5.5cm) {};
\node[vertex] (v25) at ({360/16 *12}:5.5cm) {};
\node[vertex] (v24) at ({360/16 *13}:5.5cm) {};
\node[vertex] (v23) at ({360/16 *14}:5.5cm) {};
\node[vertex] (v22) at ({360/16 *15}:5.5cm) {};
\foreach \i in {0,...,15} \filldraw[black] ({360/16*\i}:5.5cm) circle (3pt);
\foreach \i in {0,...,4,15} \draw[black] ({360/16*\i}:5.5cm) circle (7pt);
\node[xshift=0.05cm, yshift=0.25cm] at ({360/16*0 }:5.5cm) {$v_{4}$};
\node[ yshift=0.3cm] at ({360/16*1 }:5.5cm) {$v_{3}$};
\node[xshift=-0.15cm, yshift=0.3cm] at ({360/16 *2 }:5.5cm) {$v_{2}$};
\node[xshift=-0.3cm, yshift=0.1cm] at ({360/16 *3 }:5.5cm) {$v_{1}$};
\node[ xshift=-0.35cm] at ({360/16 *4 }:5.5cm) {$v_{0}$};
\node[xshift=-0.35cm, yshift=-0.1cm] at ({360/16 *5}:5.5cm) {$v_{15}$};
\node[xshift=-0.3cm, yshift=-0.15cm] at ({360/16 *6 }:5.5cm) {$v_{14}$};
\node[xshift=-0.15cm, yshift=-0.25cm] at ({360/16 *7}:5.5cm) {$v_{13}$};
\node[yshift=-0.3cm] at ({360/16 *8}:5.5cm) {$v_{12}$};
\node[ yshift=-0.35cm] at ({360/16 *9}:5.5cm) {$v_{11}$};
\node[xshift=0.1cm, yshift=-0.35cm] at ({360/16 *10}:5.5cm) {$v_{10}$};
\node[xshift=0.2cm, yshift=-0.25cm] at ({360/16 *11}:5.5cm) {$v_{9}$};
\node[xshift=0.25cm, yshift=-0.25cm] at ({360/16 *12}:5.5cm) {$v_{8}$};
\node[xshift=0.4cm, yshift=-0.05cm] at ({360/16 *13}:5.5cm) {$v_{7}$};
\node[xshift=0.35cm] at ({360/16 *14}:5.5cm) {$v_{6}$};
\node[xshift=0.3cm, yshift=0.15cm] at ({360/16 *15}:5.5cm) {$v_{5}$};
\foreach \from/\to in {v1/v17,v2/v18,v3/v19,v4/v20,v5/v21,v6/v22,v7/v23,v8/v24,v9/v25,v10/v26,v11/v27,v12/v28,v13/v29,v14/v30,v15/v31,v16/v32,v17/v23,v18/v24,v19/v25,v20/v26,v21/v27,v22/v28,v23/v29,v24/v30,v25/v31,v26/v32,v27/v17,v28/v18,v29/v19,v30/v20,v31/v21,v32/v22} \draw (\from) -- (\to);
\end{tikzpicture}
\caption{An example for the upper bound of Theorem~\ref{bound_5}: a DDS of $P_{16,6}$.}\label{P_16,6}
\end{center}
\end{figure}
Finally, we give a lower bound and an upper bound for the DDN of signed generalized Petersen graphs, where $\gcd(n,k)=d \geq 2$.
\begin{theorem}\label{bound_5}
Let $\Sigma = (P_{n,k}, \sigma)$ be any signed generalized Petersen graph, where $\gcd(n,k)=d \geq 2$. Then $$n \leq \gamma_{\times 2}(\Sigma) \leq n+d \Big\lceil \frac{n}{3d} \Big\rceil.$$
\end{theorem}
\begin{proof}
Since $\gcd(n,k)=d\geq2$, $P_{n,k}$ has exactly $d$ disjoint $\frac{n}{d}$-cycles induced by the vertices of $V_{v}$. For each $1 \leq r \leq d$, let $C_{r} = v_{(r-1)}v_{(r-1)+k}v_{(r-1)+2k}\ldots v_{(r-1)+(\frac{n}{d}-1)k}v_{(r-1)}$ be a cycle of length $\frac{n}{d}$, where indices are taken modulo $n$. Let $V_{r} = \{v_{(r-1)+3(j-1)k}~|~j = 1,2,\ldots,\lceil \frac{n}{3d} \rceil\} \subset V(C_{r})$. For each $1 \leq r \leq d$, it is clear that $|V_r| = \lceil \frac{n}{3d} \rceil$. Note that every vertex of $V(C_{r}) \setminus V_r$ is adjacent to at least one vertex of $V_r$ for $1 \leq r \leq d$.
To get the upper bound, consider the set $D = U \cup \left(\cup_{r=1}^{d}V_{r}\right)$; see Figure~\ref{P_16,6} for an example. It is clear that $|D| = n+d \lceil \frac{n}{3d} \rceil$ and that $D$ dominates every vertex of $U$ at least twice. Moreover, every vertex of $V_{v}$ is either in $D$ with at least one neighbor in $D$, or in $V\setminus D$ with at least two neighbors in $D$. Therefore $D$ is a DDS of $P_{n,k}$.
Now we show that $\Sigma[D:V \setminus D]$ is balanced. If $\Sigma[D:V \setminus D]$ contains a cycle, then that cycle must be $C_r$ for some $1 \leq r \leq d$, since the outer cycle $C_o$ lies entirely inside $G[D]$. However, for each $1 \leq r \leq d$, the two consecutive vertices $v_{(r-1)+k}$ and $v_{(r-1)+2k}$ of $C_r$ are contained in $V \setminus D$, so $\Sigma[D:V \setminus D]$ cannot contain any $C_r$. Hence $\Sigma[D:V \setminus D]$ is acyclic, and therefore balanced. Thus we have $\gamma_{\times 2}(\Sigma) \leq n+d\lceil \frac{n}{3d} \rceil$.
The lower bound follows from Theorem~\ref{general bounds}, and the proof is complete.
\end{proof}
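The construction in the proof above is concrete enough to check mechanically. The following sketch is an illustration only, not part of the proof: the edge rules of $P_{n,k}$ ($u_iu_{i+1}$, $u_iv_i$, $v_iv_{i+k}$) and the rule $V_r = \{v_{(r-1)+3(j-1)k}\}$ are taken from the paper, while the function names are ours. It builds $P_{16,6}$, forms $D$, and verifies the double domination condition $|N[w] \cap D| \geq 2$ for every vertex $w$ (equivalent to the characterization used in the proof).

```python
# Sanity check of the set D from Theorem bound_5 on P_{16,6}.
# Illustration only; not part of the paper's proof.
from math import gcd, ceil

def petersen_adj(n, k):
    """Adjacency sets of P_{n,k}: edges u_i u_{i+1}, u_i v_i, v_i v_{i+k}."""
    adj = {w: set() for i in range(n) for w in (("u", i), ("v", i))}
    for i in range(n):
        for a, b in ((("u", i), ("u", (i + 1) % n)),
                     (("u", i), ("v", i)),
                     (("v", i), ("v", (i + k) % n))):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def construction_D(n, k):
    """The set D = U together with the V_r's from the proof of Theorem bound_5."""
    d = gcd(n, k)
    m = ceil(n / (3 * d))
    D = {("u", i) for i in range(n)}
    for r in range(1, d + 1):
        D |= {("v", ((r - 1) + 3 * (j - 1) * k) % n) for j in range(1, m + 1)}
    return D

def is_dds(adj, D):
    # double domination: every vertex has at least two of its closed
    # neighborhood N[w] inside D
    return all(len((adj[w] | {w}) & D) >= 2 for w in adj)

n, k = 16, 6
d = gcd(n, k)
adj = petersen_adj(n, k)
D = construction_D(n, k)
print(len(D), n + d * ceil(n / (3 * d)), is_dds(adj, D))
```

For $P_{16,6}$ this reproduces the set shown in Figure~\ref{P_16,6}: the inner vertices selected are $v_0,\ldots,v_5$, and $|D| = 22 = n + d\lceil \frac{n}{3d}\rceil$.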
\subsection{Bounds on $\gamma_{\times 2}(I(n,j,k),\sigma)$}
It is clear that $P_{n,k} = I(n,1,k)$. Since $I(n,j,k) = I(n,k,j)$, we may assume that $2\leq j\leq k$. The following theorem gives bounds on $\gamma_{\times2}(I(n,j,k),\sigma)$ when $\gcd(n,k)=1$.
\begin{theorem}\label{bound_I-graph_(n,k)=1}
Let $\Sigma = (I(n,j,k),\sigma)$ be any signed I-graph, where $\gcd(n,k)=1$ and $k\geq 2$. Then $$n \leq \gamma_{\times2}(\Sigma) \leq \frac{3n}{2}.$$
\end{theorem}
\begin{proof}
The lower bound follows from Theorem~\ref{general bounds}.
If $\lceil \frac{n}{k} \rceil = 2m$, the set $D$ considered in Lemma~\ref{bound_4} is a DDS of $\Sigma$, so $\gamma_{\times2}(\Sigma) \leq 2n-mk$. If $\lceil \frac{n}{k} \rceil = 2m+1$, the set $D$ considered in Lemma~\ref{bound_3} is a DDS of $\Sigma$, so $\gamma_{\times2}(\Sigma) \leq n+mk$. In either case, mimicking the proof of Theorem~\ref{bound_(n,k)=1} yields the required upper bound. This completes the proof.
\end{proof}
In the following theorem we give bounds on $\gamma_{\times2}(I(n,j,k),\sigma)$, where $\gcd(n,k) \geq 2$.
\begin{theorem}\label{bound_I-graph_(n,k)=d}
Let $\Sigma = (I(n,j,k),\sigma)$ be any signed I-graph, where $\gcd(n,k)=d \geq 2$. Then $$n \leq \gamma_{\times 2}(\Sigma) \leq n+d \Big\lceil \frac{n}{3d} \Big\rceil.$$
\end{theorem}
\begin{proof}
The lower bound follows from Theorem~\ref{general bounds}.
Note that the structure of the cycles induced by the vertices of $V_{v}$ in $I(n,j,k)$ is the same as in $P_{n,k}$. Since the set $D$ considered in the proof of Theorem~\ref{bound_5} contains the whole set $U$, this same set $D$ is a DDS of any $\Sigma = (I(n,j,k),\sigma)$. Therefore $$n \leq \gamma_{\times 2}(\Sigma) \leq n+d \Big\lceil \frac{n}{3d} \Big\rceil.$$
This completes the proof.
\end{proof}
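The key observation in the proof above — that $D$ contains all of $U$, so changing the outer edges from $u_iu_{i+1}$ to $u_iu_{i+j}$ does not affect double domination — can also be checked mechanically. The sketch below is an illustration only (the I-graph edge rules $u_iu_{i+j}$, $u_iv_i$, $v_iv_{i+k}$ follow the standard definition with $P_{n,k} = I(n,1,k)$; the helper names are ours) and verifies the claim for $I(16,3,6)$, where $\gcd(16,6)=2$.

```python
# Sanity check: the set D from Theorem bound_5 is also a DDS of I(16, 3, 6).
# Illustration only; not part of the paper's proof.
from math import gcd, ceil

def i_graph_adj(n, j, k):
    """Adjacency sets of I(n,j,k): edges u_i u_{i+j}, u_i v_i, v_i v_{i+k}."""
    adj = {w: set() for i in range(n) for w in (("u", i), ("v", i))}
    for i in range(n):
        for a, b in ((("u", i), ("u", (i + j) % n)),
                     (("u", i), ("v", i)),
                     (("v", i), ("v", (i + k) % n))):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def construction_D(n, k):
    """The set D = U together with the V_r's, as in Theorem bound_5."""
    d = gcd(n, k)
    m = ceil(n / (3 * d))
    D = {("u", i) for i in range(n)}
    for r in range(1, d + 1):
        D |= {("v", ((r - 1) + 3 * (j - 1) * k) % n) for j in range(1, m + 1)}
    return D

n, j, k = 16, 3, 6
adj = i_graph_adj(n, j, k)
D = construction_D(n, k)
# double domination: |N[w] ∩ D| >= 2 for every vertex w
ok = all(len((adj[w] | {w}) & D) >= 2 for w in adj)
print(len(D), ok)
```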
\noindent
\textbf{Acknowledgment.} The first author is grateful to the Indian Institute of Technology Guwahati, India, for providing him with a graduate fellowship to carry out this research.
\section{Introduction}
We will study spectral characteristics of self-adjoint operators of the form
\[
H_V \phi
=
-\phi'' + V\phi
\]
in $L^2({\mathbb R})$, where $V:{\mathbb R} \to {\mathbb R}$ is a bounded, continuous function, known as the \emph{potential}. Our results concern the class of (uniformly) \emph{limit-periodic} potentials, that is, potentials $V$ which are uniform limits of continuous periodic functions on ${\mathbb R}$. Let ${\mathrm{LP}} = {\mathrm{LP}}({\mathbb R})$ denote the set of uniformly limit-periodic functions ${\mathbb R} \to {\mathbb R}$. Equipped with the $L^\infty$ norm, this is a complete metric space of functions. It is well known that the spectrum of $H_V$ has a tendency to be a Cantor set whenever $V$ is limit-periodic; compare, for example, \cite{AS81, FL, MC, M, PT88}. Here we show the following result:
\begin{theorem} \label{t:zmdense:coup}
There is a residual subset $\mathcal C \subseteq {\mathrm{LP}}$ such that $\sigma(H_{\lambda V})$ is a perfect set of zero Lebesgue measure, and $H_{\lambda V}$ has purely singular continuous spectrum for all $V \in \mathcal C$ and all $\lambda > 0$.
\end{theorem}
We will first address the question of zero-measure Cantor spectrum. By the Baire Category Theorem, it suffices to prove the following theorem to demonstrate generic persistence of zero-measure spectrum at arbitrary coupling.
\begin{theorem} \label{t:urdense:coup}
For $R > 0$, $\delta > 0$, and $\Lambda > 1$, let $U_{R,\delta,\Lambda}$ denote the set of $V \in {\mathrm{LP}}$ with ${\mathrm{Leb}}(\sigma(H_{\lambda V}) \cap [-R,R]) < \delta$ for all $\Lambda^{-1} \leq \lambda \leq \Lambda$. Then, for all $R,\delta > 0$, and $\Lambda > 1$, $U_{R,\delta,\Lambda}$ is a dense, open subset of ${\mathrm{LP}}$.
\end{theorem}
Moreover, if we control things more carefully, we can even get spectra of global Hausdorff dimension zero (though this set will only be dense).
\begin{theorem} \label{t:hd0dense}
There is a dense set $\mathcal H \subseteq {\mathrm{LP}}$ such that $\sigma(H_{\lambda V})$ has Hausdorff dimension zero and such that $H_{\lambda V}$ has purely singular continuous spectrum for all $V \in \mathcal H$ and all $\lambda > 0$.
\end{theorem}
Though the foregoing result is a continuum analog of a known result for discrete Schr\"odinger operators, it is rather striking in this setting since, heuristically speaking, the small coupling and high energy regimes both tend to conspire to ``thicken'' the spectrum, but this construction beats both: one gets spectrum of global Hausdorff dimension zero for small coupling.
Our proofs work by adapting a construction of Avila \cite{avila2009CMP} involving discrete Schr\"odinger operators with limit-periodic potentials to the setting of continuum Schr\"odinger operators.
\medskip
It is interesting to ask whether one can produce quasi-periodic continuum potentials which exhibit zero-measure Cantor spectrum. Of course, there are examples (such as the critical Almost-Mathieu operator) in the discrete setting, but one should keep in mind that the high energy region in the continuum case is analogous to the weak coupling regime in the discrete case. Thus, when looking for evidence for the desired phenomenon in the discrete setting, one really needs to look for quasi-periodic potentials that give rise to zero-measure Cantor spectrum for all non-zero values of the coupling constant. No such examples are presently known. Indeed, the only known almost periodic examples of this kind in the discrete case are the ones discussed in \cite{avila2009CMP}, and hence we were naturally compelled to work out the continuum analog of that work.
We also note that an even easier question is still open in the discrete case. Is it true that, for fixed irrational frequency $\alpha$, the set of $f \in C({\mathbb T},{\mathbb R})$, for which the discrete Schr\"odinger operator with potential $n \mapsto f(n\alpha)$ has zero-measure Cantor spectrum, is dense, or even residual? This question motivated the work \cite{ADZ14}, where only the following weaker result was shown: for fixed irrational frequency $\alpha$, the set of $f \in C({\mathbb T},{\mathbb R})$, for which the density of states measure is singular, is residual.
\medskip
To give additional motivation for the results above, let us put them in context. It was shown by Fillman and Lukic \cite{FL} that for an explicit dense set of limit-periodic continuum Schr\"odinger operators, the spectrum is homogeneous in the sense of Carleson, and hence in particular of positive Lebesgue measure. Theorem~\ref{t:zmdense:coup} is a companion result, which says that the generic behavior is different. Another perspective on these results is provided by Deift's question about solutions to the KdV equation with almost periodic initial condition; see \cite[Problem~1]{D}. Egorova answered the conjecture in the affirmative for a class of reflectionless limit-periodic potentials with homogeneous spectrum \cite{E} (the same class that is considered in \cite{FL}). Additionally, there has been some recent progress on this question in the case of small quasi-periodic initial data \cite{BDGL, DG}. In particular, the works \cite{BDGL,E} suggest that homogeneity of the spectrum, along with reflectionlessness of the initial condition, may indeed be important to one's ability to show almost periodicity in time for the solution in question, as conjectured by Deift. Indeed, the initial data covered by the works \cite{DG,E} obey these conditions. Thus, in order to explore the limitations of the approach to Deift's question suggested in these recent papers, it is natural to ask if and ``how often'' the necessary conditions are satisfied. The examples provided by Theorem~\ref{t:zmdense:coup}, and especially those provided by Theorem~\ref{t:hd0dense}, are particularly bad from this perspective. Indeed, whenever the spectrum of a continuum Schr\"odinger operator is this small, we are currently very far from a suitable description of the associated isospectral torus, which gives rise to the phase space for the associated KdV evolution, and this prevents us from proving existence, uniqueness, and almost periodicity of the solution of the KdV equation with such initial data. 
In other words, Theorems~\ref{t:zmdense:coup} and \ref{t:hd0dense} may be viewed as particular challenges to overcome in order to answer Deift's question in full generality.
\medskip
Let us add some comments on the results established here and interesting questions for further study. As pointed out above, our results hold uniformly in the coupling constant. This phenomenon has shown up repeatedly in the limit-periodic theory. Indeed, not a single limit-periodic potential $V$ is known such that the spectral type of the Schr\"odinger operator with potential $\lambda V$ changes as $\lambda$ is varied in the set ${\mathbb R} \setminus \{ 0 \}$. Thus, no phase transitions may be observed. Similarly, the spectral type is always pure and hence there are also no phase transitions as the energy varies in the spectrum. Finally, no change in spectral type may be observed as one varies the element of the hull. In the quasi-periodic theory one can observe changing spectral type in each of these three scenarios. It would therefore be of obvious interest to exhibit limit-periodic examples for which phase transitions occur, or to show that this can never happen.
Another difference between the limit-periodic theory and the quasi-periodic theory we wish to point out is that there is no obvious way to distinguish between regularity classes of sampling functions in the limit-periodic case, whereas this distinction is very important in the quasi-periodic case. Indeed, in the quasi-periodic case there is a very deep understanding of the case of highly regular sampling functions (such as trigonometric polynomials, analytic functions, and Gevrey functions). It is here where one can observe a variety of phase transitions, and in particular the occurrence of absolutely continuous spectrum and pure point spectrum. In fact, recent work by many authors, culminating in Avila's global theory \cite{avilaGlobal1,avilaGlobal2}, has explained these phenomena in great detail in the analytic one-frequency case. On the other hand, the generic behavior for sampling functions that are merely continuous is very similar to the generic behavior in the limit-periodic case; singular continuous spectra dominate in this regime. Of course, the differences between limit-periodic potentials and quasi-periodic potentials have their roots in the topological structure of their respective hulls; specifically, the hull of a quasi-periodic potential will be isomorphic to a finite-dimensional torus, while the hull of a limit-periodic potential will be isomorphic to a solenoid (i.e.\ an inverse limit of circles, which is also sometimes called an odometer). A possible way to discuss suitable analogues of regularity issues in the limit-periodic case could be devised in terms of the speed of approximation by periodic potentials, relative to the periods of the approximants, and hence it would be interesting to extend the work of Pastur and Tkachenko to more general classes \cite{PT84,PT88}; see also \cite{Chul81,MC}.
\medskip
The structure of the paper is as follows. In Section~\ref{sec:prep}, we collect some relevant background which will help in the proofs of Theorems~\ref{t:zmdense:coup}, \ref{t:urdense:coup}, and \ref{t:hd0dense}. In Section~\ref{sec:smallspec}, we describe a construction which enables one to produce periodic Schr\"odinger operators whose spectra are suitably thin for specific ranges of energies and couplings (Lemma~\ref{l:smallspec}). Section~\ref{sec:zm} uses this construction to prove Theorems~\ref{t:zmdense:coup} and \ref{t:urdense:coup}, and Section~\ref{sec:hd0} contains the proof of Theorem~\ref{t:hd0dense}.
\section*{Acknowledgements}
The authors are grateful to the anonymous referee for an excellent report with many helpful comments and observations.
\section{Preparatory Work} \label{sec:prep}
Here, we collect a few technical lemmas which will be used to prove the main theorems. Let us recall the definitions of some of the relevant tools. First, we describe the transfer matrices, which are used to propagate solutions of the time-independent Schr\"odinger equation. Given a potential $V \in C({\mathbb R})$, $E \in {\mathbb C}$, and $s,t \in {\mathbb R}$, the associated \emph{transfer matrices} $A_E(s,t) = A_E^V(s,t)$ are uniquely defined by
\[
\begin{pmatrix}
y'(s) \\ y(s)
\end{pmatrix}
=
A_E(s,t)
\begin{pmatrix}
y'(t) \\ y(t)
\end{pmatrix}
\]
whenever $y$ is a solution of the time-independent Schr\"odinger equation
\begin{equation} \label{eq:se}
-y'' + Vy
=
Ey.
\end{equation}
The \emph{Lyapunov exponent}, which tracks the exponential growth of solutions to \eqref{eq:se}, is given by
\[
L(E)
=
L(E,V)
=
\lim_{x \to \infty} \frac{1}{x} \log\|A_E^V(x,0)\|
\]
whenever this limit exists. It is not hard to see that if $V$ is continuous and $T$-periodic, then $L(E,V)$ exists and satisfies
\begin{equation} \label{per:le}
L(E,V)
=
\frac{1}{T} \log \rho(A_E^V(T,0)),
\end{equation}
where $\rho(A)$ denotes the spectral radius of the matrix $A$, i.e., the maximal modulus of an eigenvalue of $A$. Notice that \eqref{per:le} immediately implies that $L$ is a continuous function of $E$ whenever $V$ is periodic. The transfer matrix over a full period which appears on the right hand side of \eqref{per:le} is called the \emph{monodromy matrix} of the corresponding periodic potential. There is more than one possible choice for the monodromy matrix here; clearly, any transfer matrix over a full period will yield the same result in \eqref{per:le}, since all such matrices are conjugate to one another.
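Formula \eqref{per:le} makes $L(E,V)$ computable in closed form for piecewise-constant periodic potentials, since the transfer matrix over each constant piece is explicit. The sketch below is our illustration under that piecewise-constant assumption (it is not a construction from this paper): it composes the exact piece matrices, acting on $(y', y)$ as in the convention above, and evaluates $L$ via the spectral radius of the monodromy matrix. For the free potential $V = 0$ one has $L(E) = 0$ for $E \geq 0$ and $L(E) = \sqrt{-E}$ for $E < 0$.

```python
# Lyapunov exponent via (per:le) for a piecewise-constant periodic potential.
# Illustration only; restricted to step potentials, not a general solver.
import math

def piece_matrix(q, h):
    # exact transfer matrix over a piece of length h where y'' = q y
    # (q = V - E), acting on the state vector (y', y)
    if q > 0:
        mu = math.sqrt(q)
        return [[math.cosh(mu * h), mu * math.sinh(mu * h)],
                [math.sinh(mu * h) / mu, math.cosh(mu * h)]]
    if q < 0:
        w = math.sqrt(-q)
        return [[math.cos(w * h), -w * math.sin(w * h)],
                [math.sin(w * h) / w, math.cos(w * h)]]
    return [[1.0, 0.0], [h, 1.0]]

def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

def lyapunov(pieces, E):
    # pieces: list of (V_value, length) over one period; L = log(rho)/T
    M = [[1.0, 0.0], [0.0, 1.0]]
    T = 0.0
    for V, h in pieces:
        M = matmul(piece_matrix(V - E, h), M)   # later pieces act on the left
        T += h
    D = M[0][0] + M[1][1]                        # discriminant tr(Phi_E)
    # spectral radius of an SL(2,R) matrix from its trace
    rho = 1.0 if abs(D) <= 2 else abs(D) / 2 + math.sqrt(D * D / 4 - 1)
    return math.log(rho) / T

print(lyapunov([(0.0, 1.0)], -1.0))  # approx sqrt(1) = 1
print(lyapunov([(0.0, 1.0)], 4.0))   # approx 0 (inside the free spectrum)
```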
\subsection{The IDS for Periodic Operators}
If $V$ is $T$-periodic, denote the associated monodromy matrices by $\Phi_{E}(s) = \Phi_E^V(s) = \Phi_E^V(s;T) := A_E(s+T,s)$, $\Phi_E := \Phi_{E}(0) = A_E(T,0)$, and denote the discriminant by $D(E) := \mathrm{tr} (\Phi_E)$. Recall that ${\mathrm{SL}}(2,{\mathbb R})$ acts on the upper half-plane ${\mathbb C}_+ = \{ z \in {\mathbb C} : \mathrm{Im}(z) > 0\}$ via M\"obius transformations, i.e.,
\[
A \cdot z
=
\frac{az + b}{cz + d},
\text{ where }
A
=
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in {\mathrm{SL}}(2,{\mathbb R}).
\]
One can easily check that $A \in {\mathrm{SL}}(2,{\mathbb R})$ is elliptic (i.e., $\mathrm{tr} (A) \in (-2,2)$) if and only if the M\"obius action of $A$ on ${\mathbb C}_+$ has a unique fixed point. It turns out that there is a remarkable relationship between the integrated density of states, which is given by the average of a certain spectral measure over one period (see \cite[Equation~(10)]{AS83} and its discussion there), and the M\"obius action of the elliptic monodromy matrices. The following formula is \cite[Equation~(17)]{avila2015JAMS}:
\begin{equation} \label{eq:ids:UHP}
\frac{dk}{dE}(E_0)
=
\frac{1}{2\pi T} \int_0^T \frac{dt}{\mathrm{Im}(z_{E_0}(t))}
\end{equation}
for $E_0$ with $D(E_0) \in (-2,2)$, where $k$ denotes the IDS, and $z_{E_0}(t)$ denotes the unique element of ${\mathbb C}_+$ which is fixed by the M\"obius action of $\Phi_{E_0}(t)$. We can use the relation \eqref{eq:ids:UHP} to find a relationship between the (derivative of the) integrated density of states and norms of transfer matrices.
For each $E$ such that $D(E) \in (-2,2)$ and each $t \in {\mathbb R}$, there exists a conjugacy $M_{E}(t) = M_E^V(t) = M_E^V(t;T) \in \mathrm{SL}(2,{\mathbb R})$ such that
\[
M_{E}(t) \Phi_{E}(t) M_{E}(t)^{-1}
\in
\mathrm{SO}(2,{\mathbb R}).
\]
Of course, $M_{E}(t)$ is not unique, since one may post-compose it with a rotation, but this is the only ambiguity. More specifically, if $\Phi \in {\mathrm{SL}}(2,{\mathbb R})$ is elliptic and $A\Phi A^{-1},B \Phi B^{-1} \in {\mathrm{SO}}(2,{\mathbb R})$, then one can check that the M\"obius action of $A B^{-1}$ on ${\mathbb C}_+$ fixes $i$, which implies that $A = OB$ for some $O \in \mathrm{SO}(2,{\mathbb R})$ (since $\mathrm{SO}(2,{\mathbb R})$ is the stabilizer of $i$ with respect to the action of ${\mathrm{SL}}(2,{\mathbb R})$ on ${\mathbb C}_+$).
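The fixed-point description above can be made concrete: for an elliptic $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the relation $A \cdot z = z$ reads $cz^2 + (d-a)z - b = 0$, and the matrix $M = \mathrm{Im}(z)^{-1/2}\begin{pmatrix} 1 & -\mathrm{Re}(z) \\ 0 & \mathrm{Im}(z) \end{pmatrix}$ sends $z$ to $i$, hence conjugates $A$ into ${\mathrm{SO}}(2,{\mathbb R})$. The following sketch is illustrative only (it assumes $c \neq 0$; the function names are ours) and verifies this numerically for one elliptic matrix.

```python
# Fixed point of the Mobius action of an elliptic SL(2,R) matrix, and the
# explicit conjugacy into SO(2,R). Illustration only; assumes c != 0.
import cmath, math

def elliptic_fixed_point(A):
    (a, b), (c, d) = A
    # A.z = z  <=>  c z^2 + (d - a) z - b = 0; take the root in C_+
    disc = cmath.sqrt((d - a) ** 2 + 4 * b * c)
    for z in (((a - d) + disc) / (2 * c), ((a - d) - disc) / (2 * c)):
        if z.imag > 0:
            return z
    raise ValueError("matrix is not elliptic")

def conjugator(z):
    # M = Im(z)^{-1/2} [[1, -Re z], [0, Im z]]; note det M = 1 and M.z = i
    s = z.imag ** -0.5
    return [[s, -s * z.real], [0.0, s * z.imag]]

def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, -1.0], [1.0, 0.0]]    # tr(A) = 1 in (-2, 2), det(A) = 1: elliptic
z = elliptic_fixed_point(A)
M = conjugator(z)
Minv = [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]]  # inverse, since det M = 1
R = matmul(matmul(M, A), Minv)
# R should have the rotation form [[cos t, -sin t], [sin t, cos t]]
print(z, R)
```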
\begin{lemma} \label{l:ids:tm}
For all $Q, R > 0$, there is a constant $C_0 = C_0(Q,R)$ with the following property. Suppose $V$ is $T$-periodic with $T \geq 1$ and $\|V\|_\infty \leq Q$. Denote the associated discriminant by $D$ and the integrated density of states by $k$. If $D(E_0) \in (-2,2)$ and $|E_0| \leq R$, then
\begin{equation} \label{eq:ids:tm}
\frac{dk}{dE}(E_0)
\geq
\frac{C_0}{T} \int_0^T \! \|M_{E_0}(t)\|^2 \, dt.
\end{equation}
\end{lemma}
In the course of the proof, we will use the following solution estimate from \cite[Lemma~3.1]{simon96PAMS}.
\begin{lemma}\label{l:simon96}
For all $Q,R > 0$, there is a constant $C_1 = C_1(Q,R) > 0$ such that if $u$ satisfies $-u'' + Vu = Eu$ with $|E| \leq R$ and $\| V \|_\infty \leq Q$, then
\[
|u'(x)|^2
\leq
C_1 \int_{x-1}^{x+1} \! |u(t)|^2 \, dt
\]
for all $x \in {\mathbb R}$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{l:ids:tm}]
Since $M_E(t)$ is unique modulo left-multiplication by an element of $\mathrm{SO}(2,{\mathbb R})$, its Hilbert--Schmidt norm is independent of the choice of conjugacy. Since
\[
M_{E}(t)
=
\mathrm{Im}(z_E(t))^{-1/2}
\begin{pmatrix}
1 & - \mathrm{Re}(z_E(t)) \\
0 & \mathrm{Im}(z_E(t))
\end{pmatrix}
\]
clearly furnishes an example of a matrix which conjugates $\Phi_E(t)$ to a rotation, we may explicitly compute the Hilbert--Schmidt norm of $M_{E}(t)$ via
\begin{equation} \label{eq:hsnorm}
\| M_{E}(t) \|_2^2
=
\frac{1+|z_E(t)|^2}{\mathrm{Im}(z_E(t))}.
\end{equation}
With $\theta$ chosen such that $2\cos\theta = D(E)$, there are solutions $\psi_\pm$ of $H \psi = E\psi$ such that $\psi_\pm(x+T) = e^{\pm i \theta}\psi_\pm(x)$ for all $x \in {\mathbb R}$ (indeed, we may take $\psi_- = \overline\psi_+$). Then, the fixed points of the M\"obius action of $\Phi_E(t)$ are precisely $\psi_\pm'(t)/\psi_\pm(t)$. We choose $\psi \in \{\psi_+,\psi_-\}$ so that $\mathrm{Im}(\psi'(t)/\psi(t)) > 0$, and hence $z_E(t) = \psi'(t)/\psi(t)$.
Applying Lemma~\ref{l:simon96}, Fubini's theorem, and the hypothesis $T\geq 1$, we observe:
\begin{align*}
\int_0^T \! |\psi'(t)|^2 \, dt
& \leq
C_1 \int_0^T \! \int_{t-1}^{t+1} \! |\psi(s)|^2 \, ds \, dt \\
& \leq
C_1 \int_{-T}^{2T} \! \int_{s-1}^{s+1} \! |\psi(s)|^2 \, dt \, ds \\
& =
2C_1 \int_{-T}^{2T} |\psi(s)|^2 \, ds.
\end{align*}
Consequently, if we denote the Wronskian of $\psi$ and $\overline \psi$ by $W = W(\psi,\overline\psi) := \psi'\overline\psi - \psi\overline{\psi'}$, we obtain
\begin{align*}
\int_{0}^{T} \! \frac{dt}{\mathrm{Im}(z_E(t))}
& =
\frac{1}{3} \int_{-T}^{2T} \! \frac{dt}{\mathrm{Im}(z_E(t))} \\
& =
\frac{1}{3} \int_{-T}^{2T} \! \frac{2i}{W} |\psi(t)|^2 \, dt \\
& \geq
\frac{1}{6C_1+3} \int_{0}^{T} \! \frac{2i}{W} \left( |\psi(t)|^2 + |\psi'(t)|^2 \right)\, dt \\
& =
C_0 \int_0^T \! \frac{1+|z_E(t)|^2}{\mathrm{Im}(z_E(t))} \, dt,
\end{align*}
where $C_0 := \frac{1}{6C_1+3}$. Using \eqref{eq:hsnorm}, this yields the conclusion of the lemma with the Hilbert--Schmidt norm on the right hand side of \eqref{eq:ids:tm}. For all matrices $A$, one has $\| A \| \leq \| A \|_2$ by the Cauchy--Schwarz inequality, so the lemma is proved.
\end{proof}
\subsection{Band-counting in Finite Energy Windows}
We will also need the following elementary estimate on the number of bands that one may observe in a finite energy window.
\begin{lemma} \label{l:bandcount}
If $V \in C({\mathbb R})$ is $T$-periodic, then $[-R,R]$ intersects at most
\[
\frac{T}{\pi} \sqrt{R+\|V\|_\infty} + 1
\]
bands of $\sigma(H_V)$ for each $R > 0$.
\end{lemma}
\begin{proof}
Regard the free operator $H_0 = -\Delta$ as a $T$-periodic operator. Listed in ascending order, the periodic and antiperiodic eigenvalues of $H_0$ on $L^2([0,T])$ are
\[
E_{n}
=
\frac{n^2 \pi^2}{T^2},
\quad
n \geq 0.
\]
Let $Q = \|V\|_\infty$, and choose $n \in {\mathbb Z}_{\geq 0}$ maximal with $E_{n} \leq R + Q$. By standard eigenvalue perturbation theory, at most $n+1$ bands of $\sigma(H_V)$ touch $[-R,R]$. Since $E_n \leq R + Q$, we have
\[
n
\leq
\frac{T}{\pi} \sqrt{R+Q},
\]
which proves the lemma.
\end{proof}
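The bound of Lemma~\ref{l:bandcount} can be sanity-checked numerically for a concrete periodic potential by scanning the discriminant $D(E) = \mathrm{tr}(\Phi_E)$ and counting the intervals where $|D(E)| \leq 2$. The sketch below is our illustration (the step potential and the grid resolution are arbitrary choices, and grid scanning can only undercount bands, so the count is a lower bound on the true number of bands meeting $[-R,R]$); it uses the exact transfer matrices available for piecewise-constant potentials.

```python
# Count bands of a period-2 step potential meeting [-R, R] and compare with
# the bound of Lemma l:bandcount. Illustration only; not from the paper.
import math

def piece_matrix(q, h):
    # exact transfer matrix over a piece where y'' = q y, acting on (y', y)
    if q > 0:
        mu = math.sqrt(q)
        return [[math.cosh(mu * h), mu * math.sinh(mu * h)],
                [math.sinh(mu * h) / mu, math.cosh(mu * h)]]
    if q < 0:
        w = math.sqrt(-q)
        return [[math.cos(w * h), -w * math.sin(w * h)],
                [math.sin(w * h) / w, math.cos(w * h)]]
    return [[1.0, 0.0], [h, 1.0]]

def discriminant(pieces, E):
    M = [[1.0, 0.0], [0.0, 1.0]]
    for V, h in pieces:
        P = piece_matrix(V - E, h)
        M = [[sum(P[i][r] * M[r][j] for r in range(2)) for j in range(2)]
             for i in range(2)]
    return M[0][0] + M[1][1]

# V = 1 on [0,1), V = 0 on [1,2); T = 2, Q = ||V||_inf = 1, window R = 20
pieces, T, Q, R = [(1.0, 1.0), (0.0, 1.0)], 2.0, 1.0, 20.0
grid = [-R + 0.001 * i for i in range(40001)]
inside = [abs(discriminant(pieces, E)) <= 2.0 for E in grid]
# count connected components of {|D(E)| <= 2} seen on the grid
bands = sum(1 for i, b in enumerate(inside)
            if b and (i == 0 or not inside[i - 1]))
bound = T / math.pi * math.sqrt(R + Q) + 1
print(bands, bound)
```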
\section{The Measure of the Spectrum in Finite Energy Windows} \label{sec:smallspec}
We may combine the ingredients of Section~\ref{sec:prep} to construct periodic operators whose spectra are exponentially small (relative to the period) in finite energy regions for compact ranges of coupling constants which are bounded away from zero.
\begin{lemma} \label{l:smallspec}
Suppose $V$ is a $T$-periodic potential, $\varepsilon > 0$, and $\Lambda, R > 1$. There exists $N_0 \in {\mathbb Z}_+$ such that for all $N \in {\mathbb Z}_+$ with $N \geq N_0$, there exists a $\widetilde T := NT$-periodic potential $\widetilde V = \widetilde V_N$ such that $\|V - \widetilde V\|_\infty < \varepsilon$, and
\[
{\mathrm{Leb}}(\sigma(H_{\lambda \widetilde V}) \cap [-R,R])
\leq
e^{-\widetilde T^{1/2}}
\]
for all $\lambda \in [\Lambda^{-1},\Lambda]$.
\end{lemma}
\begin{proof}
The construction works by first perturbing $V$ to produce a family of potentials which are very close to $V$, and whose resolvent sets cover $[-R,R]$. Thus, for every $E \in [-R,R]$, one of these new potentials will have $L(E) > 0$. We then form a new potential by concatenating these finite families over long blocks and suitably connecting them. Positive exponents over sub-blocks enable us to produce growth of transfer matrices, and we then parlay growth of transfer matrices into upper bounds on band lengths via Lemma~\ref{l:ids:tm}. The details follow.
Denote $I = [\Lambda^{-1}, \Lambda]$. First, choose $N' > 1/T$ large enough that the maximal distance between $N'$-break points of $\lambda V$ contained in $[-R,R]$ is less than $\varepsilon/9$ for all $\lambda \in I$, where an $N'$-break point of $\lambda V$ is a (possibly degenerate) band edge of $\sigma(H_{\lambda V})$, viewed as a $T' := N' T$-periodic operator.
By \cite{simon76} and compactness, there are finitely many potentials $V_1',V_2',\ldots,V_m' \in B_{\varepsilon/(9\Lambda)}(V)$ which are $T'$-periodic and such that for every $\lambda \in I$, there is a $1 \leq j \leq m$ such that $\lambda V_j'$ has all gaps open. More specifically, for each $\lambda_0 > 0$, there is a potential $q$ within $\varepsilon/(9\Lambda)$ of $V$ which is $T'$-periodic and such that $\lambda_0 q$ has all gaps open; since gaps will remain open for $\lambda q$ with $\lambda$ in a suitably small neighborhood of $\lambda_0$, we may pass to finitely many perturbations by using compactness of $I$. Given $1 \leq j \leq m$ and $\lambda \in I$, we have $\| \lambda V_j' - \lambda V\|_\infty < \varepsilon/9$ and the distance between $N'$-break points of $\lambda V$ is less than $\varepsilon / 9$; thus,
\[
{\mathrm{Leb}}(J\cap [-R,R])
<
\varepsilon/3
\]
for all bands $J$ of $\sigma(H_{\lambda V_j'})$.
\begin{claim} \label{cl:posle}
There is a finite set $\mathcal F = \{W_1,\ldots,W_\ell\} \subseteq B_{\varepsilon}(V)$ of $T'$-periodic potentials such that
\begin{equation} \label{eq:resolventcover}
[-R,R] \cap \bigcap_{j=1}^\ell
\sigma(H_{\lambda W_j}) = \emptyset
\end{equation}
for all $\lambda \in I$.
\end{claim}
\begin{proof}[Proof of Claim] Given $\lambda_0 \in I$, choose $j$ so that $\sigma(H_{\lambda_0 V_j'})$ has all spectral gaps open, and let $\gamma_0$ denote the minimal length of a gap of $\sigma(H_{\lambda_0 V_j'})$ which intersects $[-R,R]$. Now, put
\[
\gamma
=
\min\left( \frac{\varepsilon}{3}, \frac{\gamma_0}{2\Lambda} \right),
\quad
k
=
\left\lceil \frac{\varepsilon}{3\gamma} \right\rceil,
\]
and define new potentials $U_{-k}, \ldots, U_{k}$ by $U_i = V_j' + i\gamma$. Then, it is easy to see that each $U_i$ is in $B_\varepsilon(V)$ and that the resolvent sets of $H_{\lambda_0 U_{-k}}, \ldots, H_{\lambda_0 U_k}$ cover $[-R,R]$. Thus, \eqref{eq:resolventcover} holds for this family and $\lambda = \lambda_0$. Since this will also hold for the same finite family and for $\lambda$ within a neighborhood of $\lambda_0$, the claim follows by compactness of $I$.
\end{proof}
By Claim~\ref{cl:posle} and \eqref{per:le}, we have
\begin{equation} \label{eq:pos:avgLE}
\min_{|E| \leq R} \min_{\lambda \in I} \max_{1 \leq j \leq \ell} L(E,\lambda W_j)
\overset{\mathrm{def}}=
\eta
>
0,
\end{equation}
by continuity of the Lyapunov exponent in the periodic setting. Now, suppose that $N$ is sufficiently large. To construct the desired potential, choose $\widetilde N \in {\mathbb Z}_+$ maximal with $\ell N' (\widetilde N + 2) \leq N$, define $\widetilde T = N T$, and generate a new $\widetilde T$-periodic potential $\widetilde V = \widetilde V_N$ by concatenating each $W_j$ a total of $\widetilde N+1$ times, and forming continuous connections which are uniformly close to $V$. More specifically, denote $s_j = j(\widetilde N + 2) T'$ for each integer $0 \leq j \leq \ell$, and define $\widetilde V$ on $[0,NT]$ by
\[
\widetilde V(x)
=
\begin{cases}
W_j(x) & x \in [s_{j-1}, s_j - T'], \, 1 \leq j \leq \ell \\
\varphi_j(x) & x \in [s_j - T', s_j], \, 1 \leq j \leq \ell - 1 \\
\varphi_\ell(x) & x \in [s_\ell - T', NT]
\end{cases}
\]
When $1 \leq j \leq \ell - 1$, $\varphi_j$ is chosen to be a continuous function on $[s_j - T',s_j]$ with
\[
\varphi_j(s_j-T')
=
W_j(T'),
\quad
\varphi_j(s_j)
=
W_{j+1}(0),
\quad
\sup_{x \in [s_j - T',s_j]} |\varphi_j(x) - V(x)| < \varepsilon.
\]
Similarly, $\varphi_\ell$ is continuous with $\varphi_\ell(s_\ell-T') = W_\ell(T')$, $\varphi_\ell(NT) = W_1(0)$, and $\| \varphi_\ell - V\|_\infty < \varepsilon$.
Now, suppose $E \in [-R,R]$ and $\lambda \in I$ are such that $\widetilde D_\lambda(E) \in (-2,2)$, where $\widetilde D_\lambda$ denotes the discriminant of $H_{\lambda \widetilde V}$. By \eqref{eq:pos:avgLE}, there is $j \in \{1,\ldots,\ell\}$ such that $L(E,\lambda W_j) \geq \eta$. But then the associated transfer matrices over subintervals of $[s_{j-1}, s_j - T']$ of length $\widetilde N T$ are exponentially large. More specifically, we have
\begin{equation} \label{eq:simplele}
\begin{split}
\| A_E^{\lambda \widetilde V}(s_j+t-2T' , s_{j-1}+t) \|
& \ge
\rho( A_E^{\lambda W_j}(t+T',t)^{\widetilde N} ) \\
& =
\rho( A_E^{\lambda W_j}(t+T',t))^{\widetilde N} \\
& =
e^{\widetilde N T' L(E,\lambda W_j) } \\
& \geq
e^{\widetilde T \eta/(2\ell)}
\end{split}
\end{equation}
for all $t \in [0,T']$; we have used \eqref{per:le}. Notice that the last step requires $N$ sufficiently large to get $\widetilde N \geq \frac{1}{2} (\widetilde N + 3)$. We can see that the estimate above implies lower bounds on the norms of the matrices which conjugate the monodromy matrices into rotations. More specifically, with $\Phi = \Phi_E^{\lambda \widetilde V}(s_{j-1} + t;\widetilde T) $, we have $X \Phi X^{-1} \in {\mathrm{SO}}(2,{\mathbb R})$ for
\begin{align*}
X
& =
M_E^{\lambda \widetilde V}(s_{j-1}+t;\widetilde T), \\
X
& =
M_E^{\lambda \widetilde V}(s_j+t-2T'; \widetilde T) A_E^{\lambda \widetilde V}(s_j + t - 2T',s_{j-1} +t),
\end{align*}
by periodicity of $\widetilde V$ and definition of $M_E$; more specifically, $\widetilde T$-periodicity of $\widetilde V$ allows one to conclude that $A_E^{\lambda \widetilde V}(s_j + t - 2T', s_{j-1} + t)$ conjugates $\Phi$ to $\Phi_E^{\lambda \widetilde V}(s_j + t -2T';\widetilde T)$, since
\[
A_E^{\lambda \widetilde V}(s_j + t - 2T', s_{j-1} + t)
=
A_E^{\lambda \widetilde V}(s_j + t - 2T'+\widetilde T, s_{j-1} + t+\widetilde T),
\]
and $\Phi_E^{\lambda \widetilde V}(s_j + t -2T';\widetilde T)$ is then conjugated to a rotation by $M_E^{\lambda \widetilde V}(s_j + t - 2T'; \widetilde T)$. Since conjugacies of elliptic matrices to rotations are unique modulo left-multiplication by elements of $\mathrm{SO}(2,{\mathbb R})$, we have
\begin{equation} \label{eq:conjrel1}
M_E^{\lambda \widetilde V}(s_j+t-2T'; \widetilde T) A_E^{\lambda \widetilde V}(s_j + t - 2T',s_{j-1} +t)
=
O M_E^{\lambda \widetilde V}(s_{j-1}+t; \widetilde T)
\end{equation}
for some rotation $O = O(E,\lambda, t) \in {\mathrm{SO}}(2,{\mathbb R})$. Using the lower bound on the norm of $A_E^{\lambda \widetilde V}(s_j+t-2T',s_{j-1}+t)$, \eqref{eq:conjrel1} implies
\begin{equation} \label{eq:conjbounds}
\max(\| M_{E}^{\lambda \widetilde V}(s_j + t -2 T'; \widetilde T)\|, \| M_{E}^{\lambda \widetilde V}(s_{j-1}+t; \widetilde T) \|)
\geq
e^{\widetilde T\eta/(4\ell)}
\end{equation}
for all $t \in [0,T']$. Notice that this uses $\|M^{-1}\| = \|M\|$ for $M \in {\mathrm{SL}}(2,{\mathbb R})$ (which follows from the singular value decomposition). A bit more precisely, if one has $A = M_1^{-1} O M_2$ with $M_j \in {\mathrm{SL}}(2,{\mathbb R})$ and $O \in {\mathrm{SO}}(2)$, then one cannot simultaneously have $\|M_1\|, \|M_2\| < \|A\|^{1/2}$.
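This obstruction can be checked numerically. The Python sketch below (illustrative only, with randomly generated matrices) verifies both the identity $\|M^{-1}\| = \|M\|$ on ${\mathrm{SL}}(2,{\mathbb R})$ and the resulting bound $\max(\|M_1\|,\|M_2\|) \geq \|A\|^{1/2}$ for $A = M_1^{-1} O M_2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2(rng):
    # A random 2x2 real matrix rescaled to determinant one.
    M = rng.normal(size=(2, 2))
    d = np.linalg.det(M)
    if d < 0:
        M = M[::-1]        # swap rows so the determinant becomes positive
        d = -d
    return M / np.sqrt(d)

for _ in range(500):
    M1, M2 = random_sl2(rng), random_sl2(rng)
    th = rng.uniform(0.0, 2.0 * np.pi)
    O = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    A = np.linalg.inv(M1) @ O @ M2
    # ||M^{-1}|| = ||M|| in SL(2,R) (singular values s, 1/s), hence
    # ||A|| <= ||M1|| ||M2||, and so max(||M1||, ||M2||) >= ||A||^{1/2}.
    assert np.isclose(np.linalg.norm(np.linalg.inv(M1), 2),
                      np.linalg.norm(M1, 2))
    assert max(np.linalg.norm(M1, 2), np.linalg.norm(M2, 2)) \
        >= np.sqrt(np.linalg.norm(A, 2)) - 1e-9
```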
These estimates are uniform in $\lambda \in I$ and $E \in [-R,R] \cap \sigma(H_{\lambda \widetilde V})$. Consequently, for any band $J \subseteq \sigma(H_{\lambda \widetilde V})$, we have
\begin{equation}\label{e.bandest}
{\mathrm{Leb}}(J \cap [-R,R])
\leq
C e^{-\widetilde T \eta /(4\ell)}
\end{equation}
by Lemma~\ref{l:ids:tm}, where $C$ denotes a constant which depends only on $R$ and $Q := \Lambda(\|V\|_\infty + \varepsilon)$. We have also used that $dk(J) = 1/\widetilde T$ for any band $J$ of $\sigma(H_{\lambda \widetilde V})$, where $dk$ denotes the associated IDS.
Since all potentials in question are uniformly bounded (by $\Lambda(\|V_0\|_\infty + \varepsilon)$) and $R \geq 1$, Lemma~\ref{l:bandcount} implies that the number of bands of $\sigma(H_{\lambda \widetilde V})$ which touch $[-R,R]$ is bounded above by
\[
\frac{1}{\pi}\widetilde T\sqrt{R + \Lambda(\|V\|_\infty+\varepsilon)}
=
\frac{1}{\pi} \widetilde T\sqrt{R+Q}.
\]
Thus, by \eqref{e.bandest},
\begin{equation} \label{eq:spec:expsmall}
{\mathrm{Leb}}(\sigma(H_{\lambda \widetilde V}) \cap [-R,R])
\leq
C\widetilde T e^{-\eta \widetilde T /(4\ell)}.
\end{equation}
Since $\widetilde T = NT$ and $C$ only depends on $R$ and $Q$, we may choose $\widetilde N$ sufficiently large (hence $N$ sufficiently large) and make the right hand side of \eqref{eq:spec:expsmall} smaller than $e^{-\widetilde T^{1/2}}$. Since $\|V - \widetilde V\|_\infty < \varepsilon$, the lemma is proved.
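The mechanism at work here, large transfer-matrix norms forcing exponentially narrow bands, can be illustrated in a discrete toy model (a hypothetical discrete Schrödinger operator, not the continuum operator of the lemma), where the spectrum is exactly $\{E : |D(E)| \leq 2\}$ for the discriminant $D$:

```python
import numpy as np

def discriminants(Es, v):
    # D(E) = tr A(E) of the one-period transfer matrix for the discrete
    # Schrodinger operator u_{n+1} + u_{n-1} + v_n u_n = E u_n,
    # evaluated on a whole grid of energies at once.
    a, b = np.ones_like(Es), np.zeros_like(Es)
    c, d = np.zeros_like(Es), np.ones_like(Es)
    for vn in v:
        a, b, c, d = (Es - vn) * a - c, (Es - vn) * b - d, a, b
    return a + d

def spectrum_measure(v, Emin=-8.0, Emax=8.0, num=160001):
    # Leb{E in [Emin, Emax] : |D(E)| <= 2}, estimated on a fine grid;
    # for a periodic operator this set is exactly the spectrum.
    Es = np.linspace(Emin, Emax, num)
    return (Emax - Emin) * np.mean(np.abs(discriminants(Es, v)) <= 2)

rng = np.random.default_rng(1)
v = rng.uniform(-1.0, 1.0, size=6)      # one period of a random potential
m_free = spectrum_measure(0.0 * v)      # free case: spectrum is [-2, 2]
m_coup = spectrum_measure(3.0 * v)      # larger coupling: narrower bands
assert abs(m_free - 4.0) < 0.02
assert m_coup < m_free
```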
\end{proof}
\begin{remark}
Let us comment briefly on the relationship between the proof of Lemma~\ref{l:smallspec} and the arguments in \cite{avila2009CMP}. The primary difference between our arguments and those of \cite{avila2009CMP} is that we do not attempt to push positive Lyapunov exponents through to the limit. We use growth of transfer matrices purely as a means to control the size of the spectrum (via Lemma~\ref{l:ids:tm}). Consequently, for our purposes, it suffices to consider ``local'' growth behavior of the transfer matrices of $\widetilde V$; more specifically, we only need to produce growth within subblocks of commuting matrices, where one has a simple relationship between the spectral radius and the norm (cf.\ \eqref{eq:simplele}).
If one wishes to obtain a global understanding of transfer matrix growth and control the Lyapunov exponent of $\widetilde V$, then one must deal with concatenated blocks of noncommuting matrices, and the simple Lyapunov behavior exploited in \eqref{eq:simplele} may break down. In this situation, one must produce analogs of \cite[Claim~3.6]{avila2009CMP} and \cite[Claim~3.7]{avila2009CMP} to produce the necessary global growth of transfer matrices.
\end{remark}
\section{Singular Continuous Spectrum of Zero Lebesgue Measure} \label{sec:zm}
\begin{proof}[Proof of Theorem~\ref{t:urdense:coup}]
Let $R$, $\delta$, and $\Lambda$ be given. We first show that $U_{R,\delta,\Lambda}$ is dense in ${\mathrm{LP}}$. To that end, let $V\in {\mathrm{LP}}$ be $T$-periodic, and let $\varepsilon > 0$. Choose $N$ large enough that $e^{-\sqrt{NT}} < \delta$ and Lemma~\ref{l:smallspec} applies, and then let $\widetilde V_N$ be the potential given by the conclusion of the lemma with the same choices of $\varepsilon$, $\Lambda$, and $R$. Evidently, $\widetilde V_N \in U_{R,\delta,\Lambda} \cap B_\varepsilon(V)$, so we are done (since periodic potentials are dense in ${\mathrm{LP}}$).
\newline
It remains to be seen that $U_{R,\delta,\Lambda}$ is open in ${\mathrm{LP}}$. Suppose $V \in U_{R,\delta,\Lambda}$. By compactness of $I := [\Lambda^{-1},\Lambda]$, it suffices to show that, for every $\lambda \in I$, there exist $\tau = \tau(\lambda) > 0$ and $r = r(\lambda) > 0$ such that
\[
{\mathrm{Leb}}(\sigma(H_{\lambda' V'}) \cap [-R,R])
<
\delta
\]
whenever $\lambda' \in I$ and $V' \in {\mathrm{LP}}$ satisfy $|\lambda - \lambda'| < \tau$ and $\| V - V' \|_\infty < r$. To see why such $\tau$ and $r$ exist, fix $\lambda \in I$, and choose a cover of $\sigma(H_{\lambda V}) \cap [-R,R]$ by open intervals $I_1,\ldots, I_n$ such that $\sum_{j=1}^n |I_j| < \delta$ (which we may do by compactness of $\sigma(H_{\lambda V}) \cap [-R,R]$). Choose $\varepsilon > 0$ small enough that
\[
B_\varepsilon(\sigma(H_{\lambda V}) \cap [-R,R])
\subseteq
\bigcup_{j=1}^n I_j,
\quad
\text{and}
\quad
\sum_{j=1}^n|I_j| + 2\varepsilon < \delta,
\]
where $B_\varepsilon(S)$ denotes the $\varepsilon$-neighborhood of the set $S$. Now, take
\[
\tau
=
\tau(\lambda)
=
\frac{\varepsilon}{2\|V\|_\infty},
\quad
r
=
r(\lambda)
=
\frac{\varepsilon}{2\Lambda},
\]
and suppose $|\lambda - \lambda'| < \tau$ and $\|V - V'\|_\infty < r$; since $\|\lambda V - \lambda ' V' \|_\infty < \varepsilon$, we have
\[
\sigma(H_{\lambda' V'})
\subseteq
B_\varepsilon(\sigma(H_{\lambda V}))
\]
by the usual $L^\infty$ eigenvalue perturbation theory. Consequently,
\[
{\mathrm{Leb}}(\sigma(H_{\lambda' V'}) \cap [-R,R])
\leq
\sum_{j=1}^n |I_j|
+
2\varepsilon
<
\delta.
\]
Note that the second term originates because new spectrum might ``creep in'' at the edges of the interval $[-R,R]$. Thus, we have proved that $\tau(\lambda)$ and $r(\lambda)$ satisfy the desired properties.
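The perturbation step used above is simply the fact that a self-adjoint perturbation of operator norm less than $\varepsilon$ moves each spectral point by less than $\varepsilon$ (Weyl's inequality). A finite-dimensional Python sketch (a hypothetical Dirichlet discretization, not the operators of the theorem):

```python
import numpy as np

rng = np.random.default_rng(2)

# Dirichlet finite-difference discretization of -u'' + v u on n grid
# points, used as a toy stand-in for H_V.
n = 200
lap = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
v = rng.uniform(-1.0, 1.0, size=n)
eps = 0.05
v_pert = v + rng.uniform(-eps, eps, size=n)   # ||v - v'||_inf < eps

ev = np.linalg.eigvalsh(lap + np.diag(v))
ev_pert = np.linalg.eigvalsh(lap + np.diag(v_pert))

# Weyl: each (sorted) eigenvalue moves by at most the operator norm of
# the perturbation, which for a diagonal perturbation is its sup norm.
assert np.max(np.abs(ev - ev_pert)) <= np.max(np.abs(v_pert - v)) + 1e-12
assert np.max(np.abs(v_pert - v)) < eps
```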
\end{proof}
We can use the foregoing theorem and Baire category to produce generic singular continuous spectrum supported on a set of zero Lebesgue measure. One still needs to exclude eigenvalues on a generic set, but this is easy to do using Gordon methods; we provide the details for the reader's convenience.
\begin{proof}[Proof of Theorem~\ref{t:zmdense:coup}]
By Theorem~\ref{t:urdense:coup}, $U_{R,\delta,\Lambda}$ is a dense open subset of ${\mathrm{LP}}$ for all $R,\Lambda,\delta$. Now, take a trio of sequences $R_n, \Lambda_n \to \infty$ and $\delta_n \to 0$; by the Baire Category Theorem,
\[
\mathcal Z
=
\bigcap_{n = 1}^\infty U_{R_n,\delta_n, \Lambda_n}
\]
is a dense $G_\delta$ in ${\mathrm{LP}}$ such that ${\mathrm{Leb}}(\sigma(H_{\lambda V})) = 0$ for all $V \in \mathcal Z$ and all $\lambda > 0$.
Next, let $\mathcal G \subseteq {\mathrm{LP}}$ denote the set of \emph{Gordon potentials} in ${\mathrm{LP}}$. More specifically, $V \in \mathcal G$ if and only if $V \in {\mathrm{LP}}$ and there exist $T_k \to \infty$ such that
\[
\lim_{k \to \infty} C^{T_k} \max_{|x| \leq T_k} |V(x) - V(x+T_k)|
=
0
\]
for all $C > 0$. It is easy to see that $\mathcal G$ is dense in ${\mathrm{LP}}$ (as it contains all periodic potentials). Moreover, one can check that $\mathcal G$ is a $G_\delta$. A bit more concretely, for $n, m \in {\mathbb Z}_+$, denote
\[
\mathcal O_{n,m}
=
\left\{
V \in {\mathrm{LP}} : \exists T \in (n-1,n+1) \text{ with } \max_{|x| \leq T} |V(x) - V(x+T)| < m^{-T}
\right\}.
\]
It is straightforward to check that $\mathcal O_{n,m}$ is open in ${\mathrm{LP}}$ and that
\[
\mathcal G
=
\bigcap_{N\geq 1} \bigcup_{n \geq N} \bigcup_{m \geq N} \mathcal O_{n,m}.
\]
Since $\sigma_{\mathrm{pp}}(H_{\lambda V}) = \emptyset$ for every $V \in \mathcal G$ and every $\lambda > 0$ \cite{G76}, we obtain the desired result with $\mathcal C = \mathcal G \cap \mathcal Z$.
\end{proof}
\section{Zero Hausdorff Dimension} \label{sec:hd0}
Here, we will prove Theorem~\ref{t:hd0dense}. For the convenience of the reader, and to establish notation, let us briefly recall how Hausdorff measures and dimension are defined; for further details, see \cite{Falconer}.
Given a set $S \subseteq {\mathbb R}$ and $\delta > 0$, a $\delta$-\emph{cover} of $S$ is a countable collection of intervals $\{I_j\}$ such that $S \subseteq \bigcup_j I_j$ and ${\mathrm{Leb}}(I_j) < \delta$ for each $j$. Then, for each $\alpha \geq 0$, one defines the $\alpha$-dimensional \emph{Hausdorff measure} of $S$ by
\[
h^\alpha(S)
=
\lim_{\delta \downarrow 0} \inf_{\delta\text{-covers}} \sum_j {\mathrm{Leb}}(I_j)^\alpha.
\]
For each $S \subseteq {\mathbb R}$, there is a unique $\alpha_0 \in [0,1]$ such that
\[
h^\alpha(S)
=
\begin{cases}
\infty & 0 \leq \alpha < \alpha_0 \\
0 & \alpha_0 < \alpha
\end{cases}
\]
We denote $\alpha_0 = \dim_{{\mathrm{H}}}(S)$ and refer to this as the \emph{Hausdorff dimension} of the set $S$.
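To fix ideas, the middle-thirds Cantor set gives a quick numerical illustration of these definitions (a standard example, not used in the sequel). The Python sketch below evaluates the cover sums coming from the natural level-$n$ covers by $2^n$ intervals of length $3^{-n}$:

```python
import math

# alpha-dimensional cover sum of the level-n cover of the middle-thirds
# Cantor set C: 2^n intervals of length 3^{-n} give (2 * 3^{-alpha})^n.
def cover_sum(n, alpha):
    return (2.0 * 3.0 ** (-alpha)) ** n

alpha0 = math.log(2.0) / math.log(3.0)   # = dim_H(C), about 0.6309

# Above alpha0 the sums tend to 0 (so h^alpha(C) = 0); below, they blow up.
assert cover_sum(300, alpha0 + 0.05) < 1e-6
assert cover_sum(300, alpha0 - 0.05) > 1e6
assert abs(cover_sum(300, alpha0) - 1.0) < 1e-9
```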
\begin{proof}[Proof of Theorem~\ref{t:hd0dense}]
Let $V_0 \in {\mathrm{LP}}$ be $T_0$-periodic, and suppose $\varepsilon_0 > 0$. We will construct a sequence $(V_n)_{n=1}^\infty$ consisting of periodic potentials so that $V_\infty = \lim_n V_n$ satisfies $\|V_0 - V_\infty\|_\infty < \varepsilon_0$ and
\[
h^\alpha(\sigma(H_{\lambda V_\infty}))
=
0
\]
for all $\lambda > 0$ and all $\alpha>0$; evidently, this suffices to show that $\sigma(H_{\lambda V_\infty})$ has Hausdorff dimension zero for all $\lambda > 0$. Throughout the proof, $H_{n,\lambda} = -\Delta + \lambda V_n$ and $\Sigma_{n,\lambda} = \sigma(H_{n,\lambda})$ for $1 \leq n \leq \infty$, $\lambda > 0$.
\newline
Denote $\Lambda_n = r_n = 2^n$ (in general, one may take any pair of sequences tending to $\infty$ not too quickly). Take $\varepsilon_1 = \varepsilon_0/2$. By Lemma~\ref{l:smallspec}, we may construct a $T_1$-periodic $V_1$ with $T_1 > 1$, $\|V_0 - V_1 \|_\infty < \varepsilon_1$, and for which
\[
\delta_1
:=
\max_{\Lambda_1^{-1} \leq \lambda \leq \Lambda_1} {\mathrm{Leb}}(\Sigma_{1,\lambda} \cap [-r_1,r_1])
<
e^{-T_1^{1/2}}.
\]
Having constructed $V_{n-1}$, $\delta_{n-1}$, and $\varepsilon_{n-1}$, let
\begin{equation}\label{eq:nextepsdef}
\varepsilon_{n}
=
\min\left(\frac{\varepsilon_{n-1}}{2},
\frac{1}{2} n^{-T_{n-1}},
\frac{\delta_{n-1}}{4\Lambda_{n-1}} \right).
\end{equation}
By Lemma~\ref{l:smallspec}, we may construct a $T_{n} := N_n T_{n-1}$-periodic potential $V_{n}$ with $\|V_n - V_{n-1}\| < \varepsilon_{n}$ such that
\begin{equation} \label{eq:smallspec}
\delta_{n}
:=
\max_{\Lambda_n^{-1} \leq \lambda \leq \Lambda_n} {\mathrm{Leb}}(\Sigma_{n,\lambda} \cap [-r_n,r_n])
<
e^{-T_{n}^{1/2}}.
\end{equation}
Clearly, $V_\infty = \lim_{n\to\infty}V_n$ exists and is limit-periodic. From the first condition in \eqref{eq:nextepsdef}, we deduce
\[
\| V_0 - V_\infty\|_\infty
<
\sum_{j = 1}^\infty \varepsilon_j
\leq
\sum_{j=1}^\infty 2^{-j} \varepsilon_0
=
\varepsilon_0,
\]
so $V_\infty \in B_{\varepsilon_0}(V_0)$. Similarly, using the first and second conditions in \eqref{eq:nextepsdef}, we observe that
\[
\|V_n - V_\infty\|_\infty
<
n^{-T_n}
\]
for each $n \geq 2$. From this, it is easy to see that $\lambda V_\infty$ is a Gordon potential for every $\lambda > 0$. In particular, $H_{\lambda V_\infty}$ has purely continuous spectrum for all $\lambda > 0$. Thus, it remains to show that the spectrum has Hausdorff dimension zero. The key observation in this direction is that \eqref{eq:nextepsdef} yields
\begin{equation} \label{eq:taildist}
\|\lambda V_n - \lambda V_\infty\|_\infty
\leq
\Lambda_n \sum_{j = n + 1}^\infty \varepsilon_j
<
\sum_{k = 2}^\infty 2^{-k} \delta_n
=
\delta_n/2
\end{equation}
for all $n \in {\mathbb Z}_+$ and all $\Lambda_n^{-1} \leq \lambda \leq \Lambda_n$.
\begin{claim} \label{cl:hdim}
For all $j \in {\mathbb Z}_+$, and all $\lambda > 0$, $\dim_{\mathrm H}(\Sigma_{\infty,\lambda} \cap [-r_j,r_j]) = 0$.
\end{claim}
\begin{proof}[Proof of Claim]
Let $j \in {\mathbb Z}_+$, $\delta > 0$, $\lambda > 0$, and $\alpha > 0$ be given. Choose $n \geq j$ for which $2\delta_n < \delta$ and $\lambda \in [\Lambda_n^{-1},\Lambda_n]$. Then, by \eqref{eq:taildist}, the $\delta_n/2$-neighborhood of $\Sigma_{n,\lambda} \cap [-r_n,r_n]$ together with the intervals $[-r_n,-r_n + 2\delta_n]$ and $[r_n - 2\delta_n,r_n]$ forms a $\delta$-cover of $\Sigma_{\infty,\lambda} \cap [-r_j,r_j]$; denote this cover by $\mathcal I_n$. By \eqref{eq:smallspec} and Lemma~\ref{l:bandcount}, we have
\[
\sum_{I \in \mathcal I_n}
|I|^{\alpha}
\leq
\left(\frac{1}{\pi} T_n \sqrt{\Lambda_n(\|V_0\|_\infty + \varepsilon_0) + r_n} + 3 \right) 2^\alpha e^{-\alpha T_n^{1/2}}.
\]
Sending $\delta \downarrow 0$ (and hence $n \to \infty$), we have $h^{\alpha}(\Sigma_{\infty,\lambda} \cap [-r_j,r_j]) = 0$, which proves the claim.
\end{proof}
With Claim~\ref{cl:hdim} in hand, we have $h^\alpha(\Sigma_{\infty,\lambda}) = 0$ immediately. Since this holds for all $\alpha > 0$ and all $\lambda > 0$, the theorem follows.
\end{proof}
\section{Introduction}
\label{sec:intro}
In universal data compression, a single code achieves asymptotically optimal performance on all sources within a given family. Intuition suggests that a good universal coder should acquire an accurate model of the source statistics from a sufficiently long data sequence and incorporate this knowledge in its operation. For lossless codes, this intuition has been made rigorous by Rissanen \cite{Ris84}. Under his scheme, the data are encoded in a {\em two-stage} set-up, in which the binary representation of each source block consists of two parts: (1) a suitably quantized maximum-likelihood estimate of the source parameters, and (2) lossless encoding of the data matched to the acquired model; the redundancy of the resulting code converges to zero as $\log n/n$, where $n$ is the block length.
In this paper, we extend Rissanen's idea to {\em lossy} block coding (vector quantization) of i.i.d. sources with values in $\R^d$ for some finite $d$. Specifically, let $\{X_i\}^\infty_{i=-\infty}$ be an i.i.d. source with the marginal distribution of $X_1$ belonging to some indexed class $\{P_\theta : \theta \in \Theta\}$ of absolutely continuous distributions on $\R^d$, where $\Theta$ is a bounded subset of $\R^k$ for some $k$. For bounded distortion measures, our main result, Theorem~\ref{thm:wu}, states that if the class $\{P_\theta\}$ satisfies certain smoothness and learnability conditions, then there exists a sequence of finite-memory lossy block codes that achieves asymptotically optimal compression of each source in the class and permits asymptotically exact identification of the active source with respect to the {\em variational distance}, defined as $d_V(P,Q) \deq \sup_B |P(B) - Q(B)|$, where the supremum is over all Borel subsets of $\R^d$. The overhead rate and the distortion redundancy of the scheme converge to zero as $O(\log n/n)$ and $O(\sqrt{\log n/n})$, respectively, where $n$ is the block length, while the active source can be identified up to a variational ball of radius $O(\sqrt{\log n/n})$ eventually almost surely. We also describe an extension of our scheme to unbounded distortion measures satisfying a certain moment condition, and present two examples of parametric families satisfying the regularity conditions of Theorem~\ref{thm:wu}.
While most existing schemes for universal lossy coding rely on {\em implicit} identification of the active source (e.g., through topological covering arguments \cite{NeuGraDav75}, Glivenko--Cantelli uniform laws of large numbers \cite{LinLugZeg94}, or nearest-neighbor code clustering \cite{ChoEffGra96}), our code builds an {\em explicit model} of the mechanism responsible for generating the data and then selects an appropriate code for the data on the basis of the model. This ability to simultaneously model and compress the data may prove useful in such applications as {\em media forensics} \cite{MouKoe05}, where the parameter $\theta$ could represent evidence of tampering, and the aim is to compress the data in such a way that the evidence can be later extracted with high fidelity from the compressed version. Another key feature of our approach is the use of Vapnik--Chervonenkis theory \cite{DevLug01} in order to connect universal encodability of a class of sources to the combinatorial ``richness" of a certain collection of decision regions associated with the sources. In a way, Vapnik--Chervonenkis estimates can be thought of as an (imperfect) analogue of the combinatorial method of types for finite alphabets \cite{CsiKor81}.
\section{Preliminaries}
\label{sec:prelims}
Let $\{X_i\}^\infty_{i=-\infty}$ be an i.i.d. source with alphabet ${\cal X}$, such that the marginal distribution of $X_1$ comes from an indexed class $\{P_\theta : \theta \in \Theta\}$. For any $t \in \Z$ and any $m,n \ge 0$, let $X^n_m(t)$ denote the segment $(X_{tn-m+1},X_{tn-m+2},\cdots,X_{tn})$ of $\{X_i\}$, with $X^n_0(t)$ understood to denote an empty string for all $n,t$. We shall abbreviate $X^n_n(t)$ to $X^n(t)$.
Consider coding $\{X_i\}$ into the reproduction process $\{\widehat{X}_i\}$ with alphabet $\widehat{\cal X}$ by means of a stationary lossy code with block length $n$ and memory length $m$ [an $(n,m)$-block code, for brevity]. Such a code consists of an encoder $\map{f}{{\cal X}^n \times {\cal X}^m}{{\cal S}}$ and a decoder $\map{\phi}{{\cal S}}{\widehat{\cal X}^n}$, where ${\cal S}$ is a collection of fixed-length binary strings: $\widehat{X}^n(t) = \phi(f(X^n(t),X^n_m(t-1))), \forall t \in \Z$. That is, the encoding is done in blocks of length $n$, but the encoder is also allowed to observe a fixed finite amount of past data. Abusing notation, we shall denote by $C^{n,m}$ both the composition $\phi \circ f$ and the encoder-decoder pair $(f,\phi)$; when $m=0$, we shall use a more compact notation $C^n$. The number $R(C^{n,m}) \deq n^{-1}\log |{\cal S}|$ is the rate of $C^{n,m}$ in bits per letter. The distortion between the source $n$-block $X^n = (X_1,\cdots,X_n)$ and its reproduction $\widehat{X}^n = (\widehat{X}_1,\cdots,\widehat{X}_n)$ is given by $\rho(X^n,\widehat{X}^n) = \sum^n_{i=1}\rho(X_i,\widehat{X}_i)$, where $\map{\rho}{{\cal X} \times \widehat{\cal X}}{\R^+}$ is a single-letter distortion measure.
Suppose $X_1 \sim P_\theta$ for some $\theta \in \Theta$. Since the source is i.i.d., and the code $C^{n,m}$ does not vary with time, the process $\{(X_i,\widehat{X}_i)\}$ is $n$-stationary, and the average distortion of $C^{n,m}$ is $D_\theta(C^{n,m}) = n^{-1}\E_\theta\big[\rho(X^n(1),\widehat{X}^n(1))\big]$, where $\widehat{X}^n(1) \equiv \phi(f(X^n(1),X^n_m(0)))$. The optimal performance achievable on $P_\theta$ by any finite-memory $n$-block code at rate $R$ is given by the $n$th-order operational distortion-rate function (DRF)
$$
\widehat{D}^{n,*}_\theta(R) \deq \inf_{m \ge 0} \,\, \inf_{C^{n,m}: R(C^{n,m}) \le R} \,\, D_\theta(C^{n,m}),
$$
where the asterisk denotes the fact that we allow any finite memory length. For zero-memory $n$-block codes the corresponding DRF is
$$
\widehat{D}^n_\theta(R) \deq \Inf_{C^n: R(C^n) \le R} D_\theta(C^n).
$$
Clearly, $\widehat{D}^{n,*}_\theta(R) \le \widehat{D}^n_\theta(R)$. Conversely, we can use memoryless minimum-distortion encoders to convert any $(n,m)$-block code into a zero-memory $n$-block code without increasing either distortion or rate, so $\widehat{D}^n_\theta(R) = \widehat{D}^{n,*}_\theta(R)$. Finally, the best performance achievable by any block code, with or without memory, is given by the operational DRF $\widehat{D}_\theta(R) \deq \Inf_{n \ge 1} \widehat{D}^n_\theta(R) = \Lim_{n \to \infty} \widehat{D}^n_\theta(R)$; since the source is i.i.d., $\widehat{D}_\theta(R)$ is equal to the Shannon DRF $D_\theta(R)$ by the source coding theorem and its converse.
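The memory-removal argument, replacing any encoder by the minimum-distortion assignment into the same reproduction codebook, can be sketched numerically (all sizes and names below are hypothetical, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: n = 2 blocks in R^2, a codebook with |S| = 4 codewords,
# so the rate is (log2 4)/2 = 1 bit per letter for either encoder.
codebook = rng.normal(size=(4, 2))
blocks = rng.normal(size=(1000, 2))

# Squared-error distortion from every block to every codeword:
d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)

# An arbitrary encoder, e.g. one whose output depends on side information:
arbitrary_idx = rng.integers(0, 4, size=1000)
# The minimum-distortion (nearest-neighbor) encoder for the same codebook:
nn_idx = d2.argmin(axis=1)

rows = np.arange(1000)
dist_arbitrary = d2[rows, arbitrary_idx].mean()
dist_nn = d2[rows, nn_idx].mean()

# Same codebook (hence same rate), but no larger distortion:
assert dist_nn <= dist_arbitrary
```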
We are interested in sequences of codes that asymptotically achieve optimal performance across the entire class $\{P_\theta \}$. Let $\{C^{n,m}\}^\infty_{n=1}$ be a sequence of $(n,m)$-block codes, where the memory length $m$ may depend on $n$, such that $R(C^{n,m}) \to R$. Then $\{C^{n,m}\}$ is {\em weakly minimax universal} for $\{P_\theta\}$ if the {\em distortion redundancy} $\delta_\theta(C^{n,m}) \deq D_\theta(C^{n,m}) - D_\theta(R)$ converges to zero as $n \to \infty$ for every $\theta \in \Theta$.\footnote{See \cite{NeuGraDav75} for other notions of universality for source codes.} We shall follow Chou {\em et al.} \cite{ChoEffGra96} and split $\delta_\theta(C^{n,m})$ into two terms:
$$
\delta_\theta(C^{n,m}) = \big(D_\theta(C^{n,m}) - \widehat{D}^n_\theta(R)\big) + \big(\widehat{D}^n_\theta(R) - D_\theta(R)\big).
$$
The first term, which we shall call the $n$th-order redundancy and denote by $\delta^n_\theta(C^{n,m})$, is the excess distortion of $C^{n,m}$ relative to the best $n$-block code for $P_\theta$, while the second term gives the extent to which the best $n$-block code falls short of the Shannon optimum. Note that $\delta_\theta(C^{n,m}) \to 0$ if and only if $\delta^n_\theta(C^{n,m}) \to 0$, since $\widehat{D}^n_\theta(R) \to D_\theta(R)$ by the source coding theorem.
\section{Informal description of the system}
\label{sec:system}
As stated in the Introduction, we are after a sequence of lossy block codes that would not only be universally optimal for a given class $\{P_\theta\}$ of i.i.d. sources with values in $\R^d$, but would also permit asymptotically reliable identification of the source parameter $\theta \in \Theta$. We formally state and prove our result in Section~\ref{sec:mainthm}; here we outline the main idea behind it.
Fix the block length $n$, and denote by $X^n$ the current $n$-block $X^n(t)$ and by $Z^n$ the preceding $n$-block $X^n(t-1)$. Let us assume that we can find for each $\theta \in \Theta$ an $n$-block code $C^n_\theta$ at the desired rate $R$ which achieves the $n$th-order DRF for $P_\theta$: $D_\theta(C^n_\theta) = \widehat{D}^n_\theta(R)$. The basic idea is to construct an $(n,n)$-block code $C^{n,n}$ that first estimates the parameter $\theta$ of the active source from $Z^n$ and then codes $X^n$ with the code $C^n_{\wh{\theta}(Z^n)}$, where $\wh{\theta}(Z^n)$ is the estimate of $\theta$, suitably quantized.
Suppose the encoder can use $Z^n$ to identify the active source up to a variational ball of radius $O(\sqrt{\log n/n})$. Next, suppose that the parameters of the estimated source (assumed to belong to a bounded subset of $\R^k$ for some $k$) are quantized to $O(\log n)$ bits in such a way that the variational distance between any two sources whose parameters lie in the same quantizer cell is $O(\sqrt{1/n})$. If $\wh{\theta} = \wh{\theta}(Z^n)$ is the quantized parameter estimate, then the variational distance between $P_{\wh{\theta}}$ and the ``true" source $P_\theta$ is $O(\sqrt{\log n/n})$, which for bounded distortion functions implies an $O(\sqrt{\log n/n})$ upper bound on the distortion redundancy $\delta^n_\theta(C^n_{\wh{\theta}})$.
More formally, let $\map{\td{f}}{{\cal X}^n}{\td{{\cal S}}}$ denote the map that sends each $Z^n$ to the binary representation of $\wh{\theta}(Z^n)$, where $\td{{\cal S}}$ is a collection of fixed-length binary strings with $\log |\td{{\cal S}}| = O(\log n)$, and let $\map{\td{\psi}}{\td{{\cal S}}}{\Theta}$ be the parameter decoder that maps each $\td{s} \in \td{{\cal S}}$ to its reproduction: $\wh{\theta}(Z^n) = \td{\psi}(\td{f}(Z^n))$. Thus, to each $\td{s} \in \td{{\cal S}}$ there corresponds an $n$-block code $C^n_{\td{\psi}(\td{s})}$, which we denote more compactly by $C^n_{\td{s}} = (f_{\td{s}},\phi_{\td{s}})$. Our $(n,n)$-block code $C^{n,n}$ thus has the encoder $f(X^n,Z^n) \deq \td{f}(Z^n)f_{\td{f}(Z^n)}(X^n)$ and the corresponding decoder $\phi(\td{f}(Z^n)f_{\td{f}(Z^n)}(X^n)) \deq \phi_{\td{f}(Z^n)}(f_{\td{f}(Z^n)}(X^n))$. That is, the binary string emitted by the encoder consists of two parts: (a) the {\em header} containing a binary description of the chosen code [equivalently, of the estimated source $P_{\wh{\theta}(Z^n)}$], and (b) the {\em body} containing the binary description of the data $X^n$ using the chosen code at rate $R$. The combined rate of $C^{n,n}$ is $R + n^{-1} \log |\td{{\cal S}}| = R + O(\log n/n)$ bits per letter, while the expected distortion with respect to $P_\theta$ is
\begin{eqnarray}
D_\theta(C^{n,m}) &=& \frac{1}{n}\E_\theta\left[\rho(X^n,\phi(f(X^n,Z^n)))\right] \nonumber \\
&=& \frac{1}{n} \E_\theta\left\{ \E_\theta\big[ \rho(X^n,C^n_{\td{f}(Z^n)}(X^n))\big|Z^n\big]\right\} \nonumber \\
&=& \E_\theta\big[D_\theta(C^n_{\td{f}(Z^n)})\big].
\label{eq:2stdist_mem}
\end{eqnarray}
This scheme is universal because the map $\td{f}$ and the subcodes $C^n_{\td{s}}$ are chosen so that $\E_\theta\big[D_\theta(C^n_{\td{f}(Z^n)})\big] - \wh{D}^n_\theta(R) = O(\sqrt{\log n/n})$ for each $\theta \in \Theta$. Note that the decoder can not only decode the data in a near-optimal fashion, but also identify the active source up to a variational ball of radius $O(\sqrt{\log n/n})$.
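The rate bookkeeping behind the $O(\log n/n)$ overhead can be made concrete. The Python sketch below uses a hypothetical quantizer geometry consistent with the above description: $\Theta = [0,1]^k$ is partitioned into cubes of side about $1/\sqrt{n}$, so that (by a Lipschitz condition on $\theta \mapsto P_\theta$) cell-mates are within $O(1/\sqrt{n})$ in variational distance, and the header needs roughly $(k/2)\log_2 n$ bits:

```python
import math

# Hypothetical quantizer: Theta = [0,1]^k split into ceil(sqrt(n))^k
# cubes; the header indexes one cube, an O(log n / n) rate overhead.
def header_bits(n, k):
    cells_per_axis = math.ceil(math.sqrt(n))
    return math.ceil(k * math.log2(cells_per_axis))

def rate_overhead(n, k):
    return header_bits(n, k) / n

k = 2
assert header_bits(10 ** 6, k) == 20    # 1000 cells per axis -> 2 * 10 bits
# The overhead vanishes like log n / n as the block length grows:
assert rate_overhead(10 ** 6, k) < rate_overhead(10 ** 4, k) < rate_overhead(10 ** 2, k)
```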
We remark that our scheme is a modification of the two-stage code of Chou {\em et al.} \cite{ChoEffGra96}, the difference being that here the subcode $C^n_{\td{s}}$, used to encode the current $n$-block $X^n$, is selected on the basis of the preceding $n$-block $Z^n$. Nonetheless, we shall adopt the terminology of \cite{ChoEffGra96} and refer to $\td{f}$ as the {\em first-stage encoder}. The structure of the encoder and the decoder in our scheme is displayed in Fig.~\ref{fig:twostage}.
\begin{figure*}
\centerline{
\subfigure[Encoder]{\includegraphics[width=2.75in]{encoder}\label{fig:encoder}
}
\hfil
\subfigure[Decoder]{\includegraphics[width=2.75in]{decoder}\label{fig:decoder}
}}
\caption{The two-stage scheme for joint universal lossy source coding and identification.}
\label{fig:twostage}
\end{figure*}
\section{The main theorem}
\label{sec:mainthm}
Before stating and proving our result, let us list our assumptions, as well as fix some auxiliary results and notation.
\noindent{\bf The source models.} Let $\{X_i\}^\infty_{i=-\infty}$ be an i.i.d. source with alphabet ${\cal X}$ and with the marginal distribution of $X_1$ belonging to an indexed class $\{P_\theta : \theta \in \Theta\}$, such that the following conditions are satisfied:
\newcounter{Lcount}
\begin{list}{(S.\arabic{Lcount})}{\usecounter{Lcount}}
\item ${\cal X}$ is a measurable subset of $\R^d$.\label{src:alphabet}
\item Each $P_\theta$ is absolutely continuous with pdf $p_\theta$.
\end{list}
\setcounter{Lcount}{0}
\noindent{\bf Distortion function.} The distortion function $\rho$ is assumed to satisfy the following requirements:
\begin{list}{(D.\arabic{Lcount})}{\usecounter{Lcount}}
\item $\Inf_{\widehat{x} \in \widehat{\cal X}} \rho(x,\widehat{x}) = 0$ for all $x \in {\cal X}$.
\item $\Sup_{x \in {\cal X}, \widehat{x} \in \widehat{\cal X}} \rho(x,\widehat{x}) = K < \infty$.
\end{list}
Under these conditions, it can be proved (in a manner similar to the proof of Thm.~2 of Linder {\em et al.} \cite{LinLugZeg94}) that
\begin{equation}
\widehat{D}^n_\theta(R) = D_\theta(R) + O(\sqrt{\log n/n})
\label{eq:drf_conv}
\end{equation}
for each $\theta \in \Theta$ and for all rates $R$ such that $D_\theta(R) > 0$ (this condition is automatically satisfied in our case since all the $P_\theta$'s are absolutely continuous). The constant implicit in $O(\cdot)$ depends on both $\theta$ and $R$.
\noindent{\bf Vapnik--Chervonenkis theory.} Given a collection ${\cal A}$ of measurable subsets of $\R^d$, its {\em Vapnik--Chervonenkis (VC) dimension} $V({\cal A})$ is defined as the largest integer $n$ for which
\begin{equation}
\max_{x^n \in (\R^d)^n} |\{(1_{\{x_1 \in A\}},\cdots,1_{\{x_n \in A\}}) : A \in {\cal A} \} | = 2^n;
\label{eq:vc}
\end{equation}
if (\ref{eq:vc}) holds for all $n$, then $V({\cal A}) = \infty$. If $V({\cal A}) < \infty$, we say that ${\cal A}$ is a VC class. For any such class, one can give finite-sample bounds on uniform deviations of probabilities of events in that class from their relative frequencies. That is, if $X^n = (X_1,\cdots,X_n)$ is an i.i.d. sample from a distribution $P$, and if ${\cal A}$ is a VC class with $V({\cal A}) \ge 2$, then
$$
\Pr \left\{\sup_{A \in {\cal A}} |P_{X^n}(A) - P(A)| > \epsilon\right\} \le 8n^{V({\cal A})}e^{-n\epsilon^2/32}, \forall \epsilon > 0
$$
and
$$
\E\left\{\sup_{A \in {\cal A}} |P_{X^n}(A) - P(A)| \right\} \le c\sqrt{\log n/n},
$$
where $P_{X^n}$ is the empirical distribution of $X^n$ and the constant $c$ depends on $V({\cal A})$ but not on $P$.\footnote{Using more refined techniques, the $c\sqrt{\log n/n}$ bound can be improved to $c'/\sqrt{n}$, where $c'$ is another constant, but $c'$ is much larger than $c$, so any benefit of the new bound shows up only for ``impractically" large values of $n$.} (See, e.g., \cite{DevLug01} for details.)
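As a minimal illustration of the definition (a standard toy example, not taken from this paper): the class of half-lines $(-\infty,a]$ on the real line shatters any single point but never two points, so its VC dimension is $1$. A brute-force check in Python:

```python
def patterns(points, thresholds):
    # Dichotomies realized on `points` by the half-lines (-inf, a].
    return {tuple(int(x <= a) for x in points) for a in thresholds}

# One point is shattered (both labelings occur), so V >= 1:
assert patterns([0.0], [-1.0, 1.0]) == {(0,), (1,)}

# Two points x1 < x2 are never shattered: the labeling (0, 1) would need
# x1 > a and x2 <= a simultaneously, impossible for any a.  One threshold
# per "gap" between the points realizes every achievable dichotomy:
pats = patterns([0.0, 1.0], [-0.5, 0.5, 1.5])
assert (0, 1) not in pats
assert len(pats) == 3 < 2 ** 2    # Sauer-type count: n + 1 = 3 < 2^n = 4
```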
\begin{theorem}\label{thm:wu} Let $\{X_i\}^\infty_{i=-\infty}$ be an i.i.d. source satisfying Conditions (S.1) and (S.2), and let $\rho$ be a distortion function satisfying Conditions (D.1) and (D.2). Assume the following:
\begin{enumerate}
\item $\Theta$ is a bounded subset of $\R^k$ for some $k$.
\item The map $\theta \mapsto P_\theta$ is {\em uniformly locally Lipschitz}: there exist constants $r,\beta > 0$ such that, for each $\theta \in \Theta$, $d_V(P_\theta,P_\eta) \le \beta \| \theta - \eta \|$ for all $\eta \in B_r(\theta)$, where $\| \cdot \|$ is the Euclidean norm on $\R^k$ and $B_r(\theta)$ is an open ball of radius $r$ centered at $\theta$.
\item The collection ${\cal A}_\Theta$ of all sets of the form $A_{\theta,\eta} = \{x \in {\cal X}: p_\theta(x) > p_\eta(x) \}$ with $\theta \neq \eta$ (the so-called {\em Yatracos class} associated with $\{P_\theta\}$ \cite{Yat85,DevLug96,DevLug97}) is a VC class.
\end{enumerate}
Also, suppose that for each $n,\theta$ there exists an $n$-block code $C^n_\theta = (f_\theta,\phi_\theta)$ at rate $R$ bits per letter achieving the $n$th-order operational DRF for $\theta$: $D_\theta(C^n_\theta) = \widehat{D}^n_\theta(R)$. Then there exists an $(n,n)$-block code $C^{n,n}$ with
\begin{equation}
R(C^{n,n}) = R + O(\log n/n),
\label{eq:rate}
\end{equation}
such that for every $\theta \in \Theta$
\begin{equation}
\delta_\theta(C^{n,n}) = O(\sqrt{\log n/n}).
\label{eq:distred}
\end{equation}
Therefore, the sequence of codes $\{C^{n,n}\}^\infty_{n=1}$ is weakly minimax universal for $\{P_\theta : \theta \in \Theta\}$ at rate $R$. Furthermore, for each $n$ the first-stage encoder $\td{f}$ and the corresponding parameter decoder $\td{\psi}$ are such that
\begin{equation}
d_V(P_\theta,P_{\td{\psi}(\td{f}(X^n))}) = O(\sqrt{\log n/n}) \qquad \mbox{$P_\theta$-a.s.}
\label{eq:srcident}
\end{equation}
The constants implicit in the $O(\cdot)$ notation in (\ref{eq:rate}) and (\ref{eq:srcident}) are independent of $\theta$.
\end{theorem}
\begin{proof} The proof is by construction of a two-stage $(n,n)$-block code as outlined in Sec.~\ref{sec:system}. As before, let $X^n$ denote the current $n$-block $X^n(t)$, and let $Z^n$ be the preceding $n$-block $X^n(t-1)$. We first define our first-stage encoder $\td{f}$, which we shall realize as the composition $\td{g} \circ \td{\theta}$ of a parameter estimator $\td{\theta}$ and a lossy parameter encoder $\td{g}$ (cf.~Fig.~\ref{fig:twostage}). For any $z^n \in {\cal X}^n$ and for any $\theta \in \Theta$, let $\Delta_\theta(z^n) \deq \Sup_{A \in {\cal A}_\Theta} |P_\theta(A) - P_{z^n}(A)|$, where $P_\theta(A) \equiv \int_A p_\theta(x)dx$. Define $\td{\theta}(z^n)$ as any $\theta^* \in \Theta$ such that $\Delta_{\theta^*}(z^n) < \Inf_{\theta \in \Theta}\Delta_\theta(z^n) + 1/n$, where the extra $1/n$ ensures that at least one such $\theta^*$ exists. The map $z^n \mapsto \td{\theta}(z^n)$ is the so-called {\em minimum-distance density estimator} of Devroye and Lugosi \cite{DevLug96, DevLug97}, which satisfies
\begin{equation}
d_V(P_\theta,P_{\td{\theta}(Z^n)}) \le 2\Delta_\theta(Z^n) + 3/(2n).
\label{eq:mindist}
\end{equation}
Since ${\cal A}_\Theta$ is a VC class,
\begin{equation}
\E_\theta[d_V(P_\theta,P_{\td{\theta}(Z^n)})] \le c\sqrt{\log n/n} + 3/(2n),
\label{eq:avmindist}
\end{equation}
for each $\theta \in \Theta$, where $c > 0$ depends on $V({\cal A}_\Theta)$.
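For a concrete feel for the minimum-distance estimator, consider the toy family $\{U[0,1], U[0,2]\}$ of uniform densities, whose Yatracos class consists of just the two sets $[0,1]$ and $(1,2]$. The Python sketch below (our illustration; for a finite family the extra $1/n$ slack is unnecessary, since the infimum is attained) selects the parameter whose model probabilities best match the empirical frequencies over the Yatracos sets:

```python
import random

# Candidate family: uniform densities on [0, theta] for theta in {1, 2}.
THETAS = [1.0, 2.0]

def p_theta(lo, hi, theta):
    """P_theta of the interval (lo, hi] under the uniform density on [0, theta]."""
    a, b = max(lo, 0.0), min(hi, theta)
    return max(b - a, 0.0) / theta

# Yatracos class for this family: {x : p_1 > p_2} = (0, 1] and
# {x : p_2 > p_1} = (1, 2]  (the endpoint x = 0 has measure zero).
YATRACOS = [(0.0, 1.0), (1.0, 2.0)]

def empirical(A, sample):
    lo, hi = A
    return sum(lo < x <= hi for x in sample) / len(sample)

def delta(theta, sample):
    """Delta_theta(z^n): sup over the Yatracos sets of |P_theta(A) - P_emp(A)|."""
    return max(abs(p_theta(lo, hi, theta) - empirical((lo, hi), sample))
               for lo, hi in YATRACOS)

def min_dist_estimate(sample):
    return min(THETAS, key=lambda th: delta(th, sample))

random.seed(0)
sample = [random.uniform(0.0, 2.0) for _ in range(500)]  # truth: theta = 2
print(min_dist_estimate(sample))  # 2.0
```

With 500 samples from $U[0,2]$, roughly half the points land in $(0,1]$, so $\Delta_1 \approx 1/2$ while $\Delta_2$ is small, and the estimator picks the true parameter.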
Next, we construct the lossy encoder $\td{g}$. Since $\Theta$ is bounded, it is contained in some cube $M$ of side $J \in \N$. Let $\{M^{(n)}_1,M^{(n)}_2,\cdots,M^{(n)}_N\}$ be a partitioning of $M$
into contiguous cubes of side $1/\lceil n^{1/2} \rceil$, so
that $N \le (J \lceil n^{1/2} \rceil)^k$. Represent each $M^{(n)}_j$ that intersects
$\Theta$ by a unique fixed-length binary string $\td{s}_j$, and let
$\td{{\cal S}} = \{\td{s}_j\}$. Then if a given $\theta \in \Theta$ is
contained in $M^{(n)}_j$, map it to $\td{s}_j$, $\td{g}(\theta) =
\td{s}_j$; this can be described by a string of no more than $k(\log n^{1/2} + \log J)$
bits. For each $M^{(n)}_j$ that intersects
$\Theta$, choose a reproduction $\wh{\theta}_j \in M^{(n)}_j \cap
\Theta$ and designate $C^n_{\wh{\theta}_j}$ as the corresponding $n$-block code $C^n_{\td{s}_j}$. The parameter decoder
$\map{\td{\psi}}{\td{{\cal S}}}{\Theta}$ is then given by $\td{\psi}(\td{s}_j) = \wh{\theta}_j$.
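The encoder $\td{g}$ and decoder $\td{\psi}$ amount to uniform scalar quantization of each coordinate of $\theta$. The following Python sketch (illustrative only; the lower-corner reproduction point and all constants are our choices) checks the two properties used below, the description length of about $k(\log \lceil n^{1/2}\rceil + \log J)$ bits and the quantization error of at most $\sqrt{k/n}$:

```python
import math

def quantize(theta, n, J):
    """Map theta (a point of the cube [0, J]^k) to its cell index and the
    cell's lower-corner reproduction point, on a grid of side 1/ceil(sqrt(n))."""
    m = math.ceil(math.sqrt(n))          # cells per unit length
    idx = tuple(min(int(t * m), J * m - 1) for t in theta)  # clamp theta = J
    theta_hat = tuple(i / m for i in idx)
    return idx, theta_hat

k, J, n = 3, 2, 1000
theta = (0.31, 1.77, 0.05)
idx, theta_hat = quantize(theta, n, J)

# Description length: one index out of at most (J * ceil(sqrt(n)))^k cells.
bits = k * math.log2(J * math.ceil(math.sqrt(n)))
# Quantization error is at most the cell diameter sqrt(k)/ceil(sqrt(n)) <= sqrt(k/n).
err = math.dist(theta, theta_hat)
assert err <= math.sqrt(k / n)
print(idx, theta_hat, bits, round(err, 4))
```

Dividing the $k \log_2(J\lceil\sqrt{n}\rceil)$ bits by the block length $n$ gives the $O(\log n/n)$ rate overhead of the first stage.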
The rate of the resulting $(n,n)$-block code $C^{n,n}$ does not exceed $R + n^{-1}k(\log n^{1/2} + \log J)$ bits per letter, which proves (\ref{eq:rate}). By (\ref{eq:2stdist_mem}), the average distortion of $C^{n,n}$ on the source
$P_\theta$ is given by $D_\theta(C^{n,n}) = \E_\theta \big[ D_\theta\big(C^n_{\wh{\theta}}
\big) \big]$, where $\wh{\theta} = \wh{\theta}(Z^n) \equiv
\td{\psi}(\td{f}(Z^n))$. From standard quantizer mismatch arguments and the triangle inequality,
$$
\delta^n_\theta(C^n_{\wh{\theta}}) \le 4K[
d_V(P_\theta,P_{\td{\theta}}) +
d_V(P_{\td{\theta}},P_{\wh{\theta}})],
$$
where $K$ is the uniform upper bound on the distortion function $\rho$ guaranteed by Condition (D.2).
Taking
expectations, we get
\begin{equation} \delta^n_\theta(C^{n,n}) \le 4K
\big\{\E_\theta [d_V(P_\theta,P_{\td{\theta}})] + \E_\theta
[d_V(P_{\td{\theta}},P_{\wh{\theta}})]\big\}.
\label{eq:distbound}
\end{equation}
We now estimate separately each term in the curly brackets in (\ref{eq:distbound}). Using (\ref{eq:avmindist}), we can bound the first term by
\begin{equation}
\E_\theta [d_V(P_\theta,P_{\td{\theta}})] \le c \sqrt{\log n/n}
+ 3/(2n).
\label{eq:1st_term}
\end{equation}
The second term involves $\td{\theta} = \td{\theta}(Z^n)$ and its quantized version
$\wh{\theta}$, where $\| \td{\theta} - \wh{\theta} \| \le
\sqrt{k/n}$ by construction of $\td{g},\td{\psi}$. Using the assumption that the map $\theta \mapsto P_\theta$ is uniformly locally Lipschitz, as well as the fact that $d_V(P,Q) \le 1$ for any two distributions $P,Q$, it is not hard to show that there exists a constant $\beta'$ such that
\begin{equation}
d_V(P_{\td{\theta}},P_{\wh{\theta}})
\le \beta' \sqrt{k/n}
\label{eq:2nd_term_noexp}
\end{equation}
and consequently that
\begin{equation}
\E_\theta [d_V(P_{\td{\theta}},P_{\wh{\theta}})] \le \beta'\sqrt{k/n}.
\label{eq:2nd_term}
\end{equation}
Substituting the bounds (\ref{eq:1st_term}) and (\ref{eq:2nd_term})
into (\ref{eq:distbound}) yields
$$
\delta^n_\theta(C^{n,n}) \le K \left(
4c\sqrt{\log n/n} + 6/n + 4\beta' \sqrt{k/n}
\right),
$$
whence it follows that the $n$th-order redundancy $\delta^n_\theta(C^{n,n}) = O(\sqrt{\log n/n})$ for every $\theta \in \Theta$. Then the decomposition
$$
\delta_\theta(C^{n,n}) = \delta^n_\theta(C^{n,n}) + \wh{D}^n_\theta(R) - D_\theta(R)
$$
and (\ref{eq:drf_conv}) imply that (\ref{eq:distred}) holds for every $\theta \in \Theta$.
To prove (\ref{eq:srcident}), fix an $\epsilon > 0$ and note that by (\ref{eq:mindist}), (\ref{eq:2nd_term_noexp}) and the triangle inequality, $d_V(P_\theta,P_{\wh{\theta}}) >
\epsilon$ implies that $2\Delta_\theta(Z^n) + 3/2n + \beta'\sqrt{k/n} > \epsilon$. Hence,
$$
\Pr \left\{ d_V(P_\theta,P_{\wh{\theta}}) > \epsilon \right\} \le \Pr \left\{ \Delta_\theta(Z^n) > \big(\epsilon - \gamma\sqrt{1/n}\big)/2 \right\},
$$
where $\gamma = 3/2 + \beta'\sqrt{k}$. Since ${\cal A}_\Theta$ is a VC class,
\begin{equation}
\Pr \left\{ d_V(P_\theta,P_{\wh{\theta}}) > \epsilon \right\} \le
8n^{V({\cal A}_\Theta)} e^{-n(\epsilon - \gamma\sqrt{1/n})^2/128}.
\label{eq:lgdev}
\end{equation}
If for each $n$ we choose $\epsilon_n = \sqrt{128 (V({\cal A}_\Theta)+2) \ln n /n} + \gamma \sqrt{1/n}$, then the
right-hand side of (\ref{eq:lgdev}) is bounded by $8/n^2$ and hence summable in $n$, so
$d_V(P_\theta,P_{\wh{\theta}(Z^n)}) = O(\sqrt{\log n/n})$ $P_\theta$-a.s. by the
Borel--Cantelli lemma.\end{proof}
The above proof combines techniques of Rissanen \cite{Ris84} (namely, explicit identification of the source parameters) with the parameter-space quantization idea of Chou {\em et al.} \cite{ChoEffGra96}. The VC condition on the Yatracos class ${\cal A}_\Theta$ is needed to control the $L_1$ convergence rate of the density estimators, which bounds the convergence rate of the distortion redundancies. We remark also that the boundedness condition on the distortion function can be relaxed in favor of a uniform moment condition with respect to a reference letter, but at the expense of a quadratic slowdown of the rate at which the distortion redundancy converges to zero (the proof is omitted for lack of space):
\begin{theorem}Let $\{P_\theta
: \theta \in \Theta\}$ be a family of i.i.d. sources satisfying the conditions of Theorem~\ref{thm:wu}, and let $\rho$ be a distortion function for which
there exists a {\em reference letter} $a_* \in \widehat{\cal X}$ such that $\Sup_{\theta \in \Theta} \E_\theta[\rho(X,a_*)^2] < \infty$, and which satisfies Condition (D.1). Then for any rate $R > 0$ satisfying $\Sup_{\theta \in \Theta} D_\theta(R) < \infty$ there exists a sequence $\{C^{n,n}\}^\infty_{n=1}$ of $(n,n)$-block codes with $R(C^{n,n}) = R + O(\log n/n)$ and $\delta_\theta(C^{n,n}) = O(\sqrt[4]{\log n/n})$ for every $\theta \in \Theta$. The source identification performance is the same as in Theorem~\ref{thm:wu}.
\end{theorem}
\section{Examples}
\label{sec:examples}
Here, we present two explicit examples of parametric families satisfying the conditions of Theorem~\ref{thm:wu} and thus admitting joint universal lossy coding and identification schemes.
\noindent{\bf Mixture classes.} Let $p_1,\cdots,p_k$ be fixed pdf's over a measurable domain ${\cal X} \subseteq \R^d$, and let $\Theta$ be the simplex of probability distributions on $\{1,\cdots,k\}$. Then the mixture class defined
by the $p_i$'s consists of all densities of the form $p_\theta(x) = \sum^k_{i=1} \theta_i p_i(x)$, $\theta = (\theta_1,\cdots,\theta_k) \in \Theta$. The parameter space $\Theta$ is compact and thus satisfies Condition 1) of Theorem~\ref{thm:wu}. In order to show that Condition
2) holds, fix any $\theta,\eta \in \Theta$. Then
$$
d_V(P_\theta,P_\eta) \le \frac{1}{2}\sum^k_{i=1}|\theta_i - \eta_i| \le \frac{\sqrt{k}}{2} \|\theta - \eta\|,
$$
where the last inequality follows by concavity of the square root. Therefore, the map $\theta \mapsto P_\theta$ is everywhere Lipschitz with
Lipschitz constant $\sqrt{k}/2$. It remains to show that the Yatracos class ${\cal A}_\Theta$ is VC. To this end, observe that
$x \in A_{\theta,\eta}$ if and only if $\sum_i (\theta_i - \eta_i)p_i(x) > 0$. Thus ${\cal A}_\Theta$ consists of sets of the form $\left\{ x \in {\cal X}: \sum_i \alpha_i p_i(x) > 0, (\alpha_1,\cdots,\alpha_k) \in \R^k \right\}$. Since the functions $p_1,\cdots,p_k$ span a linear space of dimension not larger than $k$, Lemma~4.2 in \cite{DevLug01} guarantees that $V({\cal A}_\Theta) \le k$.
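The total variation bound derived above is easy to verify numerically. The sketch below (our own check, with three simple component densities on $[0,1]$) computes $d_V$ by grid integration and confirms the chain $d_V(P_\theta,P_\eta) \le \frac{1}{2}\sum_i|\theta_i-\eta_i| \le \frac{\sqrt{k}}{2}\|\theta-\eta\|$:

```python
import math

# Fixed component densities on [0, 1]: uniform, linear, quadratic.
components = [lambda x: 1.0, lambda x: 2.0 * x, lambda x: 3.0 * x * x]

def mixture(theta):
    return lambda x: sum(t * p(x) for t, p in zip(theta, components))

def total_variation(p, q, grid=100000):
    """d_V(P, Q) = (1/2) * integral of |p - q|, midpoint rule on [0, 1]."""
    h = 1.0 / grid
    return 0.5 * h * sum(abs(p(h * (i + 0.5)) - q(h * (i + 0.5)))
                         for i in range(grid))

theta = (0.5, 0.3, 0.2)
eta = (0.2, 0.5, 0.3)
dv = total_variation(mixture(theta), mixture(eta))
l1_bound = 0.5 * sum(abs(t - e) for t, e in zip(theta, eta))
l2_bound = (math.sqrt(3) / 2) * math.dist(theta, eta)
assert dv <= l1_bound <= l2_bound
print(round(dv, 4), round(l1_bound, 4), round(l2_bound, 4))
```

For these parameter vectors the actual distance ($\approx 0.088$) is well inside both bounds ($0.3$ and $\approx 0.324$), as expected since the $L_1$ bound discards all cancellation between components.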
\noindent{\bf Exponential families.} Let ${\cal X}$ be a measurable subset of $\R^d$, and let $\Theta$ be a
compact subset of $\R^k$. A family $\{ p_\theta : \theta \in \Theta\}$
of probability densities on ${\cal X}$ is an {\em exponential family} \cite{BarShe91} if there exist a probability density $p$ and $k$ real-valued functions $h_1,\cdots,h_k$ on ${\cal X}$, such that each $p_\theta$ has the form $p_\theta(x) = p(x) e^{\theta \cdot h(x) -
g(\theta)}$, where $h(x) \deq
(h_1(x),\cdots,h_k(x))$, $\theta \cdot h(x) \deq \sum^k_{i=1}
\theta_i h_i(x)$, and $g(\theta) = \ln \int_{\cal X} e^{\theta \cdot h(x)}p(x)dx$ is the normalization constant. Given the densities $p$ and $p_\theta$, let $P$ and
$P_\theta$ denote the corresponding distributions. By the compactness of $\Theta$, Condition 1) of
Theorem~\ref{thm:wu} is satisfied. Next we demonstrate that Conditions 2) and 3) can also be met under certain regularity assumptions.
Namely, suppose that $\{1,h_1,\cdots,h_k\}$ is a linearly
independent set of functions. This guarantees that the map $\theta \mapsto P_\theta$ is one-to-one. We also assume that each $h_i$ is
square-integrable with respect to $P$: $\int_{\cal X} h_i^2 dP < \infty$, $1 \le i \le k$. Then the $(k+1)$-dimensional real linear space ${\cal F} \subset L^2({\cal X},P)$ spanned by $\{1,h_1,\cdots,h_k\}$ can
be equipped with an inner product $\ave{f,g} \deq \int_{\cal X} fg dP$ and the corresponding $L_2$ norm $\| f \|_2 \deq \sqrt{ \ave{f,f} } \equiv \sqrt{\int_{\cal X} f^2 dP}$. Also let $\| f \|_\infty \deq \inf \big\{ M : |f(x)| \le M \mbox{ $P$-a.e.} \big\}$ denote the $L_\infty$ norm of $f$. Since ${\cal F}$ is
finite-dimensional, there exists a constant $A_k > 0$ such that $\| f \|_\infty \le A_k \| f \|_2$. Finally, assume that the logarithms of Radon--Nikodym derivatives $dP/dP_\theta \equiv p/p_\theta$ are uniformly bounded $P$-a.e.: $\Sup_{\theta \in \Theta} \| \ln (p/p_\theta) \|_\infty = L < \infty$. These conditions are satisfied, for example, by truncated Gaussian densities over a compact domain in $\R^d$ with suitably bounded means and covariance matrices.
Let $D(P_\theta \| P_\eta)$ denote the relative entropy (information divergence) between $P_\theta$ and $P_\eta$. With the above conditions in place, we can prove the following result along the lines of Lemma~4 of Barron and Sheu \cite{BarShe91}:
\begin{equation}
D(P_\theta \| P_\eta) \le \frac{1}{2}e^{\| \ln (p/p_\theta)
\|_\infty} e^{2 A_k \| \theta - \eta \|} \| \theta - \eta
\|^2,
\label{eq:divbound}
\end{equation}
where $\| \cdot \|$ is the Euclidean norm on $\R^k$. From Pinsker's inequality $d_V(P_\theta,P_\eta) \le
\sqrt{D(P_\theta \| P_\eta)/2}$ \cite[Lemma~5.2.8]{Gra90a}, (\ref{eq:divbound}) and the uniform boundedness of $\ln p/p_\theta$, we get
\begin{equation}
d_V(P_\theta,P_\eta) \le \beta_0 e^{A_k\|\theta - \eta \|} \| \theta - \eta \|, \qquad
\theta,\eta \in \Theta,
\label{eq:dvbound}
\end{equation}
where $\beta_0 \deq e^{L/2}/2$. If we fix $\theta \in \Theta$, then from (\ref{eq:dvbound}) it
follows that for any $r > 0$, $d_V(P_\theta,P_\eta) \le \beta_0 e^{A_k r} \| \theta - \eta \|$ for all $\eta$ satisfying $\| \eta - \theta \| \le r$. That is,
the family $\{P_\theta : \theta \in \Theta \}$ satisfies the uniform local
Lipschitz condition of Theorem~\ref{thm:wu}, and the magnitude of the Lipschitz constant can be controlled
by tuning $r$.
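The Pinsker step can be sanity-checked in closed form for equal-variance Gaussians, for which both the divergence and the total variation distance are explicit (this illustrates the inequality itself, not the truncated families discussed above):

```python
import math

def tv_gauss(mu1, mu2, sigma=1.0):
    """Total variation distance between N(mu1, sigma^2) and N(mu2, sigma^2).
    The densities cross at the midpoint, giving d_V = erf(|mu1-mu2|/(2*sqrt(2)*sigma))."""
    d = abs(mu1 - mu2)
    return math.erf(d / (2.0 * math.sqrt(2.0) * sigma))

def kl_gauss(mu1, mu2, sigma=1.0):
    """D(N(mu1, sigma^2) || N(mu2, sigma^2)) = (mu1 - mu2)^2 / (2 sigma^2)."""
    return (mu1 - mu2) ** 2 / (2.0 * sigma ** 2)

# Pinsker: d_V <= sqrt(D/2), uniformly over the mean separation.
for d in [0.01, 0.1, 0.5, 1.0, 3.0]:
    dv, kl = tv_gauss(0.0, d), kl_gauss(0.0, d)
    assert dv <= math.sqrt(kl / 2.0)
    print(d, round(dv, 4), round(math.sqrt(kl / 2.0), 4))
```

Note that the Pinsker bound $\sqrt{D/2} = d/2$ grows without limit while $d_V \le 1$, so the bound is informative only for nearby parameters, which is exactly the local regime used in the Lipschitz argument above.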
All we have left to show is that the Yatracos class ${\cal A}_\Theta$ is a VC class. Since $p_\theta(x) > p_\eta(x)$ if and only if
$(\theta - \eta) \cdot h(x) > g(\theta) - g(\eta)$, ${\cal A}_\Theta$ consists of sets of the form
$$
\Big\{x \in {\cal X} : \alpha_0 + \sum_i \alpha_i h_i(x) > 0, (\alpha_0,\alpha_1,\cdots,\alpha_k) \in \R^{k+1}\Big\}.
$$
Since the functions $1,h_1,\cdots,h_k$ span a $(k+1)$-dimensional
linear space, the same argument as that used for mixture classes shows that $V({\cal A}_\Theta) \le k+1$.
\section{Conclusion and future work}
\label{sec:conclusion}
We have presented a constructive proof of the existence of a scheme for joint universal lossy block coding and identification of real i.i.d. vector sources with parametric marginal distributions satisfying certain regularity conditions. Our main motivation was to show that the connection between universal coding and source identification, exhibited by Rissanen for lossless coding of discrete-alphabet sources \cite{Ris84}, carries over to the domain of lossy codes and continuous alphabets. As far as future work is concerned, it would be of both theoretical and practical interest to extend the approach described here to variable-rate codes (thus lifting the boundedness requirement for the parameter space) and to general (not necessarily memoryless) stationary sources.
\section*{Acknowledgment}
The author would like to thank Pierre Moulin and Ioannis Kontoyiannis for useful discussions. This work was supported by the Beckman Institute Postdoctoral Fellowship.
astro-ph/0601334
\section{INTRODUCTION}
It has become clear that massive clusters are not extremely rare at high redshifts ($z>0.8$), and
the presence of these large collapsed structures when the age of the
Universe was less than half its present value
is no longer in conflict with our current understanding
of structure formation, especially in a
$\Lambda$-dominated flat cosmology.
Pursuit of galaxy clusters to higher and higher redshift is important in
the extension of the evolutionary sequences to earlier epochs, when
the effect of the different cosmological frameworks becomes more discriminating.
A great deal of observational effort has been made in the last decade
to enlarge the sample of high-redshift clusters. X-ray surveys have
provided an efficient method of cluster identification and a probe of
cluster properties, because the hot intracluster medium (ICM) within a cluster
generates strong diffuse X-ray emission and is believed to be in quasi-equilibrium
with gravity. However, it is questionable how well clusters
selected by their X-ray excess can provide an unbiased representation of the typical
large scale structure at the cluster redshift. If
the degree of virialization decreases significantly with redshift and
is strongly correlated with X-ray temperature, the
cosmological dimming $\sim (1+z)^{-4}$ can bias our selection
progressively towards higher-mass, relaxed structures.
Among other important approaches to detect high-redshift clusters is a red-cluster-sequence (RCS)
survey using the distinctive spectral feature in cluster ellipticals. This
so-called 4000\AA~break feature is well-captured by
a careful combination of two passbands, and Gladders \& Yee (2005) recently reported
67 candidate clusters at a photometric redshift of $0.9 < z < 1.4$ from the $\sim10$\% subregion
of the total $\sim100~\mbox{deg}^2$ RCS survey field. A related method
but giving a higher contrast of cluster members with respect to the background sources is
to use deep near-infrared (NIR) imaging (e.g., Stanford et al. 1997) for the selection of cluster
candidates. High-redshift clusters identified in these color selection methods are
expected to serve as less biased samples encompassing the lower mass regime
at high redshifts.
In the current paper, we study two $z\sim1.3$ clusters, namely RX J0849+4452
and RX J0848+4453 (hereafter Lynx-E and Lynx-W for brevity),
using the deep F775W and F850LP (hereafter
$i_{775}$ and $z_{850}$, respectively) images obtained with the Advanced Camera for Surveys (ACS)
on the $Hubble$ $Space$ $Telescope$ ($HST$). Interestingly, although these two clusters
are separated by only $\sim4\arcmin$ from each other, they were discovered by
different methods.
Stanford et al. (1997) discovered Lynx-W in a NIR survey as an overdense
region of $J-K > 1.9$ galaxies and spectroscopically confirmed 8 cluster members.
They also analyzed the archival ROSAT-PSPC observation of the region and found
diffuse X-ray emission near the confirmed cluster galaxies. However, they
could not rule out the possibility that the X-ray flux might be coming
from foreground point sources, because the PSPC PSF is too broad
to identify such objects. The subsequent study of the field using the $Chandra$ observations
showed that, although the previous ROSAT-PSPC observation is severely contaminated
by the X-ray point sources adjacent to the cluster, the cluster
is still responsible for some diffuse X-ray emission. Both
the X-ray temperature and luminosity
of the cluster appear to be low ($T_X\sim1.6$~keV and $L_{bol}\sim0.69\times10^{44}
~ \mbox{ergs}~\mbox{s}^{-1}$; Stanford et al. 2001).
Lynx-E was, on the other hand, first discovered in the ROSAT Deep Cluster Survey (RDCS) as
a cluster candidate and follow-up near-infrared imaging showed an excess of
red ($1.8<J-K<2.1$) galaxies around the peak of the X-ray emission (Rosati et al. 1999).
They also showed that five galaxies around the X-ray centroid have redshifts that are consistent
with the cluster redshift at $z=1.26$ using the Keck spectroscopic observations.
From the $Chandra$ data analysis, Stanford et al. (2001) determined the cluster temperature
and luminosity to be $T_X=5.8_{-1.7}^{+2.8}$ keV and $L_{bol}=3.3_{-0.5}^{+0.9}\times10^{44} \mbox{ergs}~\mbox{s}^{-1}$,
respectively.
The rather large difference in the X-ray properties of these two clusters may be viewed as
representing the characteristics of the sample obtained from
different survey methods. Lynx-E, the X-ray selected cluster, has
much higher X-ray temperature and luminosity than Lynx-W, the
NIR-selected cluster. If the stronger X-ray emission
means higher dynamical maturity, the more compact distribution
of the Lynx-E galaxies provides additional support for this
hypothesis. For dynamically relaxed systems the observed X-ray
properties can be easily translated into the mass properties
under the assumption of hydrostatic equilibrium. However,
as we probe into the higher and higher-redshift regime, it is natural
to expect that there will be more frequent occasions when
the equilibrium assumption loses its validity in deriving the
mass properties of the system. In addition, at $z>1$ we expect to
have a growing list of low-mass clusters that are also X-ray dark because
of their elusively low temperatures, as well as the substantial cosmological dimming.
Therefore, it is plausible to suspect
that these two $z\sim1.3$ clusters (especially Lynx-W, the poorer X-ray system)
might lie on a border where the X-ray observations alone start
to become insufficient to infer the mass properties.
Weak-lensing provides an alternative approach to
deriving the mass of a gravitationally bound
system without relying on assumptions about the dynamical state. This
can help us to probe the properties of the high-redshift
clusters in lower mass regimes, where the X-ray measurements
alone may not provide useful physical quantities.
In our particular case, weak-lensing is an important tool to test how
the masses of the two Lynx clusters at $z\sim1.3$
compare with their X-ray measurements. Especially
for Lynx-W, weak-lensing seems to be the unique route for probing
the cluster mass, considering the poor and amorphous X-ray emission.
Another interesting question is whether the low X-ray temperature of Lynx-W arises simply from a low
mass or from a yet poor thermalization of the ICM.
However, the detection of a weak-lensing signal at $z\sim1.3$
is difficult, and much more so if the lens is not very massive.
In our previous investigation of the two $z\sim0.83$ high-redshift
clusters (Jee et al. 2005a, hereafter Paper I; Jee et al. 2005b, hereafter Paper II),
we were able to detect clear lensing signals. They revealed the
complicated dark matter substructure of the clusters in great detail.
The effective source plane (defined by the effective mean redshift of the background galaxies)
in Paper I and II is at $z_{eff}\sim1.3$, corresponding to
the redshift of the lenses targeted in the current paper! Therefore, the
number density of background galaxies decreases substantially
compared to our $z\sim0.8$ studies and, in addition, the higher fraction of
non-background population in our source sample inevitably dilutes the
resulting lensing signal quite severely. Furthermore, the accurate
removal of instrumental artifacts becomes more critical because stronger
signals come from more distant, and thus fainter and smaller, galaxies,
which are more severely affected by the point-spread-function (PSF).
Nevertheless, our
analyses of RDCS 1252-2927 at $z=1.24$ (Lombardi et al. 2005; Jee et al. in
preparation) demonstrate that weak-lensing can still be
applied to clusters even at these redshifts and
reveals the cluster mass distribution with high significance.
Returning to the X-ray properties, the low-energy quantum efficiency (QE) degradation of the $Chandra$ instrument can
cause noticeable biases in cluster temperature
measurements. Although there have been many suggestions regarding this issue,
it was not until recently that a convergent prescription to remedy
the situation has become available from the $Chandra$
X-ray Center\footnote{see http://cxc.harvard.edu/ciao3.0/threads/apply\_acisabs/
or http://cxc.harvard.edu/ciao3.2/releasenotes/}.
Because we suspect that the previous X-ray analyses of the Lynx clusters
suffered from the relatively insufficient understanding of this problem, we have also re-analyzed
the archival $Chandra$ data to enable a fairer
comparison between the weak-lensing and X-ray measurements.
Throughout the paper, we assume a $\Lambda$CDM cosmology favored by the
Wilkinson Microwave Anisotropy Probe (WMAP), where $\Omega_M$, $\Omega_{\Lambda}$, and
$H_0$ are 0.27, 0.73, and 71 $\mbox{km}~\mbox{s}^{-1}~\mbox{Mpc}^{-1}$, respectively.
All the quoted uncertainties are at the 1 $\sigma$ ($\sim68$\%) level.
\section{OBSERVATIONS}
\subsection{ACS Observation \label{subsection_acsobservation}}
Deep ACS/WFC imaging of the Lynx clusters was carried out as part of the ACS Guaranteed Time Observation (GTO) program
during 2004 March in three contiguous pointings, which cover a $\sim9\arcmin\times3\arcmin$ strip.
A slight overlap ($\sim30\arcsec$) was made between the pointings and the strip is oriented in
such a way that the two cluster centers are approximately located near the overlap region.
Each pointing was observed in $i_{775}$ and $z_{850}$
passbands with 3 and 5 orbits of integration, respectively.
We used the ACS GTO pipeline (``Apsis''; Blakeslee et al. 2003) to remove cosmic rays, correct
geometric distortion via the drizzle algorithm (Fruchter and Hook 2002), and register different exposures. Apsis
meets all the requirements of weak lensing analysis (Paper I and II), offering
a precise ($\sim0.015$ pixels) image registration via the ``match'' program (Richmond 2002)
after correcting for geometric distortion (Meurer et al. 2003).
In Figure~\ref{fig_lynx_illustration} we present the pseudo-color
image of the entire ACS field with the blow-ups of the two Lynx clusters.
Lynx-E is well-portrayed by the somewhat compact distribution
of the cluster red sequence around the brightest cluster galaxies (BCGs).
It appears that the cluster has a strongly lensed blue giant arc $\sim4.5\arcsec$
south of the BCGs. The spectroscopic
redshift of this arc candidate has not yet been determined.
The red sequence of Lynx-W looks somewhat scattered and there
seem to be no distinct BCGs characterizing the cluster center though
the excess of the early-type galaxies in the region clearly defines
the cluster locus.
The detection image was created by combining the two passband images using inverse variance
weighting. Objects are detected through the SExtractor program (Bertin \& Arnouts 1996) by searching for
at least five connected pixels above 1.5 times the sky rms. The field contains several
bright stars whose diffraction spikes not only induce a false detection, but
also contaminate the neighboring objects. We manually selected and removed these objects.
The catalog contains a total of 8737 galaxies.
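The inverse-variance weighting used for the detection image can be sketched as follows (a toy illustration with scalar per-image variances; the actual pipeline works on full ACS frames with per-pixel variance maps):

```python
def inv_var_combine(images, variances):
    """Pixelwise inverse-variance weighted mean of co-registered images.
    `images` is a list of equal-length pixel lists; `variances` gives one
    noise variance per image. Weighting by 1/variance maximizes the
    signal-to-noise ratio of the combined image."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    return [sum(w * img[i] for w, img in zip(weights, images)) / wsum
            for i in range(len(images[0]))]

# Toy 1-D "images" in two passbands with different sky noise levels.
i775 = [10.0, 12.0, 50.0, 11.0]
z850 = [9.0, 13.0, 46.0, 10.0]
detection = inv_var_combine([i775, z850], [4.0, 1.0])  # z-band less noisy here
print(detection)  # first pixel: (10/4 + 9/1) / (1/4 + 1) = 9.2
```

The less noisy image dominates the combination, which is the desired behavior when building a detection image from passbands of unequal depth.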
\subsection{Chandra Observation}
We retrieved the $Chandra$ observation of the Lynx field from the
Chandra X-ray Center. The field was observed with the Advanced CCD Imaging
Spectrometer I-array (ACIS-I) in the faint mode at a focal plane temperature of $-120\arcdeg$C.
The observation consists of two exposures: $\sim65$ ks and $\sim125$ ks integrations
on 2000 May 3 and 4, respectively.
The raw X-ray events were processed with the $Chandra$ Interactive Analysis of
Observations (CIAO) software version 3.2 and the Calibration Database (CALDB) version 3.1,
which provide the correction for time-dependent gain variation and the low-energy
quantum efficiency degradation without requiring any external guidance.
We identified and flagged hot pixels and afterglow events using the $acis\_build\_badpix$,
$acis\_classify\_hotpix$ and $acis\_find\_hotpix$ scripts while selecting only the standard
$ASCA$ grades (0, 2, 3, 4, and 6).
Figure~\ref{fig_xrayoverimage} shows the adaptively smoothed $Chandra$ X-ray contours
of the Lynx field overlaid on the ACS image. This adaptive smoothing
is performed using the CIAO CSMOOTH program with a minimum significance of 3 $\sigma$ and
the contours are spaced in square-root scale. Because of the low counts from the two
high-redshift clusters, the 3 $\sigma$ significance condition can only be met
with rather large smoothing kernels. Therefore, the round appearance
of the contours should not be misinterpreted as indicating the relaxed status of the systems.
When the contours are reproduced with a smaller, constant kernel smoothing, Lynx-W
looks much more irregular than Lynx-E.
The X-ray centroids of Lynx-E and W are in good spatial
agreement with the optical light distributions of the clusters.
The foreground cluster RXJ 0849+4456 (Holden et al. 2001) at $z=0.57$
is also a strong X-ray emitter, but it is located
outside the ACS pointings ($\sim5\arcmin$ and $\sim3\arcmin$ apart
from Lynx-E and W, respectively). The multi-wavelength analysis of this
cluster is presented by Holden et al. (2001), who found that
the cluster can be further resolved into two groups at $z=0.57$ and 0.54.
Our subsequent X-ray analyses are confined to the two
high-redshift clusters at $\bar{z}=1.265$ present within the current ACS pointings.
\section{ACS DATA ANALYSIS}
As in our previous investigations (Paper I and II), we measure galaxy shapes
and model the point-spread-function of the observation using shapelets (Bernstein \& Jarvis 2002; Refregier 2003).
Readers are referred to Paper I and II for detailed description of the ellipticity measurements.
\subsection{Cluster Luminosity}
Our current spectroscopic catalog of the ACS Lynx field (B. Holden et al. in preparation) contains
150 objects, 32 of which belong to one of the two high-redshift clusters ($1.24<z<1.28$); 12 galaxies
are at $z>1.31$, and the rest (106 objects) are foreground objects.
We supplemented the cluster member galaxy catalog with
the cluster red sequence (Mei et al. 2005) using
$i_{775}-z_{850}$ colors. In order to minimize the systematics
from internal gradients and the different
PSF sizes (the PSF of $z_{850}$ is $\sim10$\% broader than that of $i_{775}$), the galaxies are deconvolved
with the CLEAN (H\"{o}gbom et al. 1974) algorithm. After an effective radius $R_e$ is determined for each galaxy, we
measured the object colors within a circular aperture defined by $R_e$. When the estimated $R_e$ was less than
three pixels, we used a three pixel aperture instead (the median $R_e$ is $\sim5$~pixels).
At $z\sim1.265$, the 4000\AA~break
is shifted slightly blue-ward of the effective wavelength of the $z_{850}$ filter. Therefore,
this filter combination is less than ideal, but
the red sequence is still visible
down to $z_{850}\sim24$ in the $i_{775}-z_{850}$ versus $z_{850}$ plot (Figure~\ref{fig_cm}).
We visually examined each candidate and discarded the objects
that do not seem to have early-type morphology, or whose
redshifts (if known) are inconsistent with the cluster redshifts.
The final cluster member catalog contains 68 objects.
The rest-frame $B$ band at the cluster redshift is approximately
redshifted to the ACS $z_{850}$ band and we derive the
following photometric transformation from the synthetic photometry
with the Spectral Energy Distribution (SED) templates of Kinney et al. (1996).
\begin{equation}
B_{rest} = z_{850} - (0.70 \pm 0.02) (i_{775}-z_{850}) + (1.08 \pm 0.01) - DM \label{photran},
\end{equation}
\noindent
where DM is the distance modulus of 44.75 at $\bar{z}=1.265$.
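As a cross-check, the quoted distance modulus follows from the flat $\Lambda$CDM parameters adopted in the introduction ($\Omega_M=0.27$, $\Omega_{\Lambda}=0.73$, $H_0=71~\mbox{km}~\mbox{s}^{-1}~\mbox{Mpc}^{-1}$); the short Python sketch below (our own numerical check, not part of the original analysis) evaluates the luminosity distance $D_L=(1+z)\,(c/H_0)\int_0^z dz'/E(z')$ and recovers $DM \approx 44.75$ at $\bar{z}=1.265$:

```python
import math

C_KMS, H0, OM, OL = 299792.458, 71.0, 0.27, 0.73

def E(z):
    """Dimensionless Hubble parameter for a flat LambdaCDM cosmology."""
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def lum_dist_mpc(z, steps=10000):
    """Luminosity distance in Mpc (midpoint-rule comoving-distance integral)."""
    h = z / steps
    dc = (C_KMS / H0) * h * sum(1.0 / E(h * (i + 0.5)) for i in range(steps))
    return (1.0 + z) * dc

def distance_modulus(z):
    """DM = 5 log10(D_L / 10 pc), with D_L converted from Mpc to pc."""
    return 5.0 * math.log10(lum_dist_mpc(z) * 1.0e6 / 10.0)

print(round(distance_modulus(1.265), 2))  # 44.75
```

The agreement confirms that the transformation in equation (\ref{photran}) is tied to the WMAP cosmology stated earlier.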
From the above selection of the cluster galaxies,
we estimate that Lynx-E and W enclose $L_B\sim1.5\times10^{12}$ and
$\sim0.8\times10^{12} L_{B\sun}$, respectively, within a 0.5 Mpc ($\sim60\arcsec$) radius.
Of course, these values
correspond to the lower limits because we neglected the
contribution from the blue galaxies (except for the several spectroscopically confirmed
ones), as well
as the less luminous population ($z_{850}>24$).
However, we do not attempt to determine the correction factors in the current
paper, because the number of galaxies in both our spectroscopic
and red-sequence samples is too small to
support such a statistical derivation.
\subsection{PSF Correction}
ACS/WFC has a time- and position-dependent PSF (Paper I) and the
ability to properly model the PSF pattern in the observed cluster field is critical
in the subsequent galaxy ellipticity analysis. In Paper I and II, we demonstrated that
the WFC PSF sampled from the 47 Tucanae field can be used to describe the
PSF pattern of cluster images in which only a limited number of stars are available;
those few stars can then serve as a diagnostic of the model accuracy.
We selected the stars in the Lynx field via a typical magnitude versus half-light radius plot
(Figure~\ref{fig_starselect}). Figure~\ref{fig_starfield}a shows the WFC PSF pattern in the $i_{775}$ image
of the Lynx field, which is
similar to the ones in our previous cluster weak lensing studies. The PSFs are elongated
in the lower-left to upper-right direction. An analogous pattern is also observed
in the $z_{850}$ band. However, the wings of the $z_{850}$ PSFs
are stretched approximately parallel to the rows of the CCD (telescope V2 axis), and
this feature becomes observable when the wings of the PSFs are more heavily weighted (Heymans et al. 2005).
In our calibration of the ACS (Sirianni et al. 2005),
we also observed the opposite pattern (i.e., with an ellipticity nearly perpendicular to
that in Figure~\ref{fig_starfield}a), but the pattern shown here seems to be the more frequently observed one,
at least in our GTO surveys of $\sim15$ clusters. Because the focus offsets of different HST visits are
likely to vary, one might prefer to find the closest PSF template
for every individual exposure and perform the PSF corrections one by one.
However, we find that in our GTO cluster observations the PSF patterns in different exposures do not
vary considerably. Therefore, we chose a single PSF template for each filter and
created a PSF map for the entire $3\times 1$ mosaic image by placing the template PSFs on each pointing.
In order to minimize the model-data discrepancy due to the slight focus variation, we fine-tuned our model
for each exposure by shearing
the PSF by an amount $\delta \eta$, which can be expressed in shapelet notation as
\begin{equation}
b_{pq}^{\prime} = \mbox{\bf{S}}_{\delta \eta} b_{pq},
\end{equation}
\noindent
where $b_{pq}$ is the shapelet component of the PSF and the evaluation of the matrix elements of the shear operator
$\mbox{\bf{S}} _ {\delta\eta}$ can be found in Bernstein \& Jarvis (2002).
Figure~\ref{fig_starfield}b displays
the residual ellipticities of the same stars in the $i_{775}$ band when the PSF is circularized
with rounding kernels (Fischer \&
Tyson 1997; Kaiser 2000; Bernstein \& Jarvis 2002). The dramatic reduction
of the PSF anisotropy is also evident when the ellipticity components ($e_+$ and $e_{\times}$) before
and after the corrections are compared (Figure~\ref{fig_star_anisotropy}).
This rounding kernel test verifies that our PSF models describe the PSF pattern of the
cluster observation very precisely. Although one can continue with this rounding kernel
method and make a subsequent measurement of the galaxy shapes in these ``rounded'' images (e.g., Fischer \& Tyson 1997),
we prefer to remove the PSF effect through straightforward deconvolution in $shapelets$ because the latter
gives more satisfactory results for very faint galaxies (Paper I; Hirata \& Seljak 2003). Besides,
the $z_{850}$ PSF is rather complicated because of the ellipticity variation between the core and the wings mentioned above,
and this PSF effect can be corrected more efficiently by the deconvolution.
\subsection{Mass Reconstruction \label{section_mass_reconstruction}}
In order to maximize the weak-lensing signal, it is important to select the source population
in such a way that the source sample contains the minimal contamination from cluster and foreground
galaxies. Because only two passband images of the Lynx field are available,
direct determination of reliable photometric redshifts for individual galaxies is impossible.
Therefore,
we chose to select the background galaxies based on their $i_{775}-z_{850}$ colors and $z_{850}$
magnitudes. The redshift distribution of this sample can be indirectly inferred when we
apply the same selection criteria to other deep multi-band HST observations such as the
Ultra Deep Field (UDF; Beckwith et al. 2003) project, for which reliable photometric redshift information is obtainable down
to the limiting magnitude of our cluster observation (D. Coe et al., in preparation).
We selected the $24<z_{850}<28.5$ galaxies whose $i_{775}-z_{850}$ colors are
bluer than those of the cluster red sequence ($i_{775}-z_{850}\lesssim0.7$) as the ``optimal'' background population by examining
the resulting tangential shears around the two $\bar{z}=1.265$ clusters.
This selection yields a total of 6742 galaxies ($\sim204 \mbox{arcmin}^{-2}$).
Assuming that the cosmic variance between the Lynx and UDF is not large,
we estimate that approximately 60 per cent of the selection is behind the Lynx clusters.
Our final ellipticity catalog was created by combining the $i_{775}$ and $z_{850}$ bandpass ellipticities.
Of course, there is a subtlety in this procedure because an object can have intrinsically
different shapes, and thus ellipticities, in different passbands. We adopted the methodology presented by
Bernstein \& Jarvis (2002) to optimally combine the galaxy ellipticities.
In our previous weak-lensing analyses (Papers I and II), we found that this scheme indeed reduced the
scatter in the mass reconstruction compared to the case when only single passband images were used; the improvement
increases as fainter galaxies are included.
As a consistency check, we compared the shapes and lensing signals from the two passband images, and
confirmed that the results are statistically consistent.
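The essence of the two-band combination can be illustrated with a minimal inverse-variance-weighting sketch (a simplification of the full Bernstein \& Jarvis (2002) scheme; the per-band measurement errors below are hypothetical):

```python
def combine_ellipticities(e_bands, sigma_bands):
    """Inverse-variance-weighted combination of per-band ellipticity
    measurements (e1, e2), each with a scalar measurement error sigma."""
    w = [1.0 / s**2 for s in sigma_bands]
    wsum = sum(w)
    e1 = sum(wi * e[0] for wi, e in zip(w, e_bands)) / wsum
    e2 = sum(wi * e[1] for wi, e in zip(w, e_bands)) / wsum
    return e1, e2, (1.0 / wsum) ** 0.5  # combined measurement error

# e.g., hypothetical i775 and z850 measurements of the same galaxy
e1, e2, err = combine_ellipticities([(0.10, -0.05), (0.14, -0.03)], [0.03, 0.06])
```

The better-measured band dominates the combined estimate, which is why the gain is largest for faint galaxies whose single-band errors are large.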
We show the distortion and mass reconstruction of the Lynx field from this combined shape catalog in Figure~\ref{fig_whisker}.
Although the systematic alignments of source galaxies around the
cluster centers are subtle in the whisker plot (left panel), the resulting mass reconstruction (right panel)
clearly shows the dark matter concentration associated with the cluster galaxies.
The mass map is generated using the maximum likelihood algorithm and is
smoothed with a FWHM$\sim40\arcsec$ Gaussian kernel.
We verify that
other methods (e.g., Seitz \& Schneider 1995; Lombardi \& Bertin 1999) also produce
virtually identical results.
The two mass clumps are in good spatial agreement with both the cluster light and X-ray emission.
Within a radius of $1\arcmin$,
both clumps are found to be significant, above the $4 \sigma$ level (determined from bootstrap resampling).
Figure~\ref{fig_high_resolution} shows the high-resolution (smoothed with a FWHM$\sim20\arcsec$ kernel)
version of the mass maps overlaid on the ACS images. The clump associated with Lynx-E is offset $\sim10\arcsec$ from the
BCGs and the Lynx-W clump seems to lie on the western edge of the cluster galaxy distribution.
In Papers I and II, we reported significant mass-galaxy offsets for two clusters at $z\sim0.83$
and discussed the possibility that those offsets may signal merging substructures.
Although it is tempting to interpret the mass-galaxy offsets in the current study as also
implying similar merging activity in the two Lynx clusters, our investigation of
the mass centroid distribution using bootstrap resampling shows that the significance is only marginal
(i.e., the $r\sim10\arcsec$ circle roughly encloses $\sim70$\% of the centroid distribution).
It is encouraging to observe that the foreground cluster at $z\sim0.54$ affects
the distortion of source galaxies and reveals itself
in the weak lensing mass reconstruction (Figure~\ref{fig_whisker}) though
most of its galaxies are outside our ACS field (see Figure~\ref{fig_xrayoverimage}
for the location of the X-ray emission from the foreground cluster).
As shown by this foreground cluster and its manifestation in the mass map, light coming from
background galaxies is perturbed by all the objects lying in their paths to the observer.
Considering the high redshift ($\bar{z}=1.265$) of the Lynx clusters, the
likelihood of such interlopers is high. In addition, if the masses
of the two high-redshift clusters are not very large, even a moderately massive
foreground object can generate a similar lensing signal
because it has a higher lensing efficiency for a fixed source plane (unless
the source plane is located at a substantially higher redshift than $z\sim1.3$).
In an attempt to separate this lower-redshift contribution from our weak lensing mass map
presented in Figure~\ref{fig_whisker}b, we created an alternate source sample
by selecting the brighter ($22<z_{850}<25$) galaxies. This time we did not exclude
the galaxies whose $i_{775}-z_{850}$ colors correspond to those of the cluster red sequence because
they also serve as a well-defined source plane at $z\sim1.3$, and their shapes should be
perturbed by any lower-redshift mass clumps. We present this second
version of the mass reconstruction in Figure~\ref{fig_mass_fore}.
It is remarkable to observe that in this version
the two high-redshift clusters disappear whereas many of the assumed foreground
features (including the cluster at $z=0.54$) still remain.
The comparison of this second mass reconstruction with the previous result
also indicates that some of the foreground mass clumps might affect the
shape of the contours of the high-redshift clusters at large radii;
the mass clump of Lynx-E seems to have a neighboring foreground clump
at its southwestern edge, and the southern edge of the Lynx-W clump
also slightly touches a foreground structure (Figure~\ref{fig_high_resolution}).
However, far fewer galaxies were used for this second version of the mass
reconstruction, and thus the positions of these structures are much less
significant. This apparent substructure in projection may bias our measurements of
the total mass. We discuss this issue in \textsection\ref{section_mass_estimate}.
\subsection{Redshift Distribution of Source Galaxies}
As detailed in Paper I, the redshift distribution of the source galaxies of the Lynx field was inferred
from the photometric redshift catalog of the UDF.
We also used the two photometric catalogs created from the Great Observatories Origins Deep
Survey (GOODS; Giavalisco et al. 2004) and the degraded UDF in order to estimate
the contamination of the cluster members in the source sample for $z_{850}<26$ and $z_{850}>26$, respectively.
Figure~\ref{fig_zdist} shows the magnitude distribution of the source galaxies (top panel) with
the estimated mean redshift (bottom panel) for each magnitude bin. It appears that the number density excess
due to the cluster galaxy contamination is not significant throughout the entire magnitude range. However,
we must remember that the sample contains substantial contamination from foreground galaxies, which dilutes the lensing signal. We
quantify the mean lensing depth of the source sample in terms of the following:
\begin{equation}
\beta = \left < \mbox{max} \left ( 0, \frac{D_{ls}}{D_s} \right ) \right > \label{eqn_beta},
\end{equation}
\noindent
where $D_s$, $D_l$, and $D_{ls}$ are the angular diameter distance from the observer to the source,
from the observer to the lens and from the lens to the source, respectively.
We obtain $\left < \beta \right >=0.155$ for the entire source sample. This value corresponds to a single
source plane at $z_{eff}\simeq1.635$, and the critical surface mass density
($\Sigma_c = c^2 (4 \pi G D_l \beta)^{-1}$) is $\sim6180~M_{\sun}~\mbox{pc}^{-2}$
at the lens redshift $\bar{z}=1.265$.
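As an illustration, $\beta$ and $\Sigma_c$ can be evaluated with a short numerical sketch. We assume a flat $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.3$, which may differ slightly from the adopted cosmology, so the numbers are indicative only:

```python
import math

C_KM_S = 2.998e5           # speed of light [km/s]
H0 = 70.0                  # assumed H0 [km/s/Mpc]
OM, OL = 0.3, 0.7          # assumed flat LCDM parameters

def comoving_dist(z, n=2000):
    """Line-of-sight comoving distance [Mpc] in flat LCDM (trapezoidal rule)."""
    dz = z / n
    invE = [1.0 / math.sqrt(OM * (1 + i * dz)**3 + OL) for i in range(n + 1)]
    return (C_KM_S / H0) * dz * (sum(invE) - 0.5 * (invE[0] + invE[-1]))

def beta(zl, zs):
    """max(0, D_ls/D_s); in a flat universe this is 1 - D_C(zl)/D_C(zs)."""
    if zs <= zl:
        return 0.0
    return 1.0 - comoving_dist(zl) / comoving_dist(zs)

def sigma_crit(zl, zs):
    """Critical surface density Sigma_c = c^2 / (4 pi G D_l beta) [Msun/pc^2]."""
    G = 4.301e-3                                  # [pc Msun^-1 (km/s)^2]
    d_l_pc = comoving_dist(zl) / (1 + zl) * 1e6   # angular diameter distance [pc]
    return C_KM_S**2 / (4 * math.pi * G * d_l_pc * beta(zl, zs))

b = beta(1.265, 1.635)          # ~0.15, close to the quoted <beta> = 0.155
sc = sigma_crit(1.265, 1.635)   # ~6000-6300 Msun/pc^2 under these assumptions
```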
\subsection{Weak-lensing Mass Estimation \label{section_mass_estimate}}
A first guess of the mass can be obtained by fitting an SIS model
to the observed tangential shears around the clusters. We chose
the centroids of the mass clumps in Figure~\ref{fig_whisker}
as the origins of the tangential shear measurements. The neighboring
foreground structures at $z\simeq0.54$, as well as the
proximity of the field boundary, restrict us to the use of
the tangential shears at radii no greater than $\sim80\arcsec$.
In addition, we discarded the measurements at $r<30\arcsec$
in order to minimize possible substructure artifacts and
the contamination of the lensing signal by the cluster members.
Although this precaution leaves us with only a small fraction of
the total measurements, the lensing signal is clearly detected for both clusters
at the $\sim3\sigma$ level in the tangential shear plots (Figure~\ref{fig_tan_shear}).
It is plausible that the severely decreased shears at $r<30\arcsec$ for Lynx-E
might be in part caused by the aforementioned contamination from the cluster members.
We verified that the lensing signal disappeared when the background galaxies were
rotated by 45$\degr$ (null test).
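The tangential and cross (45$\degr$-rotated) shear components used in this fit and in the null test can be sketched as follows; the toy shear values are illustrative only, and sign conventions for the cross component vary in the literature:

```python
import math

def tangential_cross_shear(g1, g2, x, y, x0=0.0, y0=0.0):
    """Tangential and cross (45 deg rotated) shear of a galaxy at (x, y)
    about a chosen center (x0, y0); one common sign convention."""
    phi = math.atan2(y - y0, x - x0)             # position angle of the galaxy
    gt = -(g1 * math.cos(2 * phi) + g2 * math.sin(2 * phi))
    gx = g1 * math.sin(2 * phi) - g2 * math.cos(2 * phi)
    return gt, gx

# Toy purely tangential pattern around the origin: g = -0.05 * exp(2i*phi).
phi0 = math.radians(30.0)
g1, g2 = -0.05 * math.cos(2 * phi0), -0.05 * math.sin(2 * phi0)
gt, gx = tangential_cross_shear(g1, g2, math.cos(phi0), math.sin(phi0))
# gt recovers +0.05 while gx vanishes; a lensing signal should behave this
# way, and rotating sources by 45 deg swaps the roles (the null test).
```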
Note that the uncertainties in Figure~\ref{fig_tan_shear} reflect only the statistical
errors set by the finite number of background galaxies. In Paper II, we demonstrated that the large scale structures
lying in front of and behind the high-redshift cluster MS 1054-0321 ($z\simeq0.83$) were a dominant
source of error in the mass determination, contributing an uncertainty of
approximately 15\% of the total cluster mass. This fractional uncertainty increases
substantially with cluster redshift because lensing by the foreground cosmic structures
becomes more efficient than lensing by clusters whose redshifts approach those of the source galaxies.
However, for the current clusters, we expect that the large statistical errors
still dominate over the cosmic shear effects.
When we repeat the analysis of Paper II for the current clusters, we estimate that
the uncertainties of the Einstein radius from the SIS fit marginally increase from
$\sigma_{er}=0\arcsec.75$ and $0\arcsec.77$ to
$\sigma_{er}=0\arcsec.81$ and $0\arcsec.83$ for Lynx-E and W, respectively.
The Einstein radius of $\theta_E=2\arcsec.45\pm0\arcsec.81$ (with respect to
the effective source plane at $z_{eff}\simeq1.635$)
for Lynx-E
corresponds to a mass of $M(r)=(4.0\pm1.3)\times10^{14} (r/\mbox{Mpc})~M_{\sun}$
and a velocity dispersion of $740_{-134}^{+113}~\mbox{km}~\mbox{s}^{-1}$.
Similar values of $M(r)=(4.2\pm1.4)\times10^{14} (r/\mbox{Mpc})~M_{\sun}$ and
$\sigma_{SIS}=762_{-133}^{+113}~\mbox{km}~\mbox{s}^{-1}$
are obtained for Lynx-W, as implied by its comparable
Einstein radius of $\theta_E=2\arcsec.60\pm0\arcsec.83$.
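For an SIS, these numbers can be cross-checked from the standard relations $\theta_E = 4\pi(\sigma/c)^2 (D_{ls}/D_s)$ and $M_{2D}(<r)=\pi\sigma^2 r/G$; the sketch below uses the quoted effective $\beta=0.155$ and central values only:

```python
import math

C_KM_S = 2.998e5                 # speed of light [km/s]
G = 4.301e-3                     # [pc Msun^-1 (km/s)^2]
ARCSEC = math.pi / (180 * 3600)  # radians per arcsec

def sis_sigma(theta_e_arcsec, beta):
    """SIS velocity dispersion [km/s] from the Einstein radius and beta = D_ls/D_s."""
    return C_KM_S * math.sqrt(theta_e_arcsec * ARCSEC / (4 * math.pi * beta))

def sis_mass_2d(sigma_kms, r_mpc):
    """Projected SIS mass within a cylinder of radius r [Msun]: pi sigma^2 r / G."""
    return math.pi * sigma_kms**2 * (r_mpc * 1e6) / G

sigma = sis_sigma(2.45, 0.155)   # Lynx-E central values -> ~740 km/s
m_1mpc = sis_mass_2d(sigma, 1.0) # -> ~4.0e14 Msun per Mpc, as quoted
```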
As mentioned in \textsection\ref{subsection_acsobservation}, we note that there is
a strongly lensed arc candidate at $r\simeq4.5 \arcsec$ for Lynx-E, which
can provide a useful consistency check.
In general,
the Einstein radius depends on the source redshift, and the relation steepens if the lens
is at a high redshift. If the Einstein radius of the arc is assumed to be $\theta_E=4\arcsec.5$,
this implies that the redshift of the object should lie at $1.8<z<3.2$ in our adopted cosmology
(the uncertainty reflects only the errors of the Einstein radius from the SIS fit result).
Because we have only the $i_{775}$ and $z_{850}$ band images, the photometric redshift estimation
of this arc candidate is unstable. Nevertheless, if we use the HDFN prior and truncate
it below $z=1.2$, the color ($i_{775}-z_{850}=0.098$) of the object is consistent
with the SED of a starburst galaxy at $1.7<z<3.7$.
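Because $\theta_E\propto D_{ls}/D_s$ for a fixed $\sigma$, the central arc redshift can be estimated by solving $\beta(z_s)=\beta_{eff}\times(4.5/2.45)$. The sketch below assumes flat $\Lambda$CDM with $\Omega_m=0.3$ (so the result is indicative only), and the bisection bracket is illustrative:

```python
import math

OM, OL = 0.3, 0.7    # assumed flat LCDM (H0 cancels in the distance ratio)

def comoving_dist(z, n=2000):
    """Comoving distance in units of c/H0 (trapezoidal rule)."""
    dz = z / n
    invE = [1.0 / math.sqrt(OM * (1 + i * dz)**3 + OL) for i in range(n + 1)]
    return dz * (sum(invE) - 0.5 * (invE[0] + invE[-1]))

def beta(zl, zs):
    """D_ls/D_s; in a flat universe this equals 1 - D_C(zl)/D_C(zs)."""
    return 1.0 - comoving_dist(zl) / comoving_dist(zs)

zl = 1.265
target = beta(zl, 1.635) * (4.5 / 2.45)  # theta_E scales with D_ls/D_s at fixed sigma

# Bisection for the source redshift with beta(zl, z) = target.
lo, hi = 1.4, 6.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if beta(zl, mid) < target:
        lo = mid
    else:
        hi = mid
z_arc = 0.5 * (lo + hi)   # central estimate, within the quoted 1.8 < z < 3.2 range
```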
Alternatively, we can estimate the cluster masses
with two parameter-free methods, namely, aperture mass densitometry and the rescaled mass
reconstruction. Although these two parameter-free approaches require some input from the above SIS fitting result
to lift the mass-sheet degeneracy, in general they provide
a more robust methodology: they are less affected by cluster substructure or by deviations
from the assumed radial profile. However, one drawback of this approach is that
the measurement is more severely influenced by the cosmic shear effect than
in the case of the SIS fitting, because aperture mass densitometry uses less
information (i.e., the decreased tangential shears in the outer region).
With the $r=80-90\arcsec$ region as a control annulus for both clusters,
we computed the cluster mass profiles from these two parameter-free methods (Figure~\ref{fig_mass_summary});
from the SIS fit results, we determine the mean mass density in the annulus
to be $\bar{\kappa}=0.014\pm0.004$ and $0.015\pm0.005$ for Lynx-E and W, respectively.
As observed in Papers I and II, the
mass estimation obtained
from the rescaled mass reconstruction (dotted) is in good agreement with the aperture mass densitometry (open
circles). We also note that both methods give masses consistent with the SIS fit results.
Because we used the SIS fit results above to lift the mass-sheet degeneracy, it is useful to
examine how the results change when an NFW profile is assumed instead. Unfortunately, the low lensing signal
in the limited range does not allow us to constrain the two free parameters of the NFW profile simultaneously;
the two parameters trade off with each other without significantly altering the quality of the fit.
Freezing the concentration parameter to $c=4$, nevertheless, yields $r_s=180\pm37$ ($187\pm34$) kpc for
Lynx-E (W), predicting the mean mass density of $\bar{\kappa}=0.015\pm0.020$ ($0.016\pm0.021$) in the control
annulus. Different choices for the concentration parameter $c$ do not change these results substantially
(for instance, the choice of $c=6$ gives $\bar{\kappa}\simeq0.012$ for Lynx-E).
In \textsection\ref{section_mass_reconstruction} we demonstrated that
both clusters might have neighboring foreground mass clumps in projection. Therefore, it is
worthwhile to assess how much these foreground structures affect our mass estimation.
Because the redshift information for the foreground masses is not available, we
cannot subtract their contribution directly from our mass map.
Instead, we attempted to minimize their effects by replacing the mass density
of the region that is occupied by the foreground mass clumps
with the azimuthal average of the remaining regions. Of course, we do not expect this scheme to
yield cluster masses that are completely free from foreground contamination, since
the azimuthal averages taken at other regions might also be biased.
However, this method is still an important test because
a significant difference in resulting mass estimation must be detected if the foreground
contamination is indeed severe.
The mass-sheet lifted mass map is convenient for this type of analysis. We replaced
the southwestern region ($\sim220\degr<\theta<\sim260\degr$; the angle is measured from
the north axis counterclockwise) of the Lynx-E clump and the southern
region ($\sim130\degr<\theta<\sim195\degr$) of the Lynx-W clump with
the azimuthal averages taken at different angles.
The solid lines in Figure~\ref{fig_mass_summary} represent the mass profiles
obtained from this measurement. For both clusters, these new measurements
give slightly lower values, but the change is only marginal.
We estimate from this approach that both Lynx-E and W have
a similar mass of $(2.0\pm0.5) \times 10^{14} M_{\sun}$ within
a 0.5 Mpc ($\sim60\arcsec$) aperture radius.
The uncertainties here are estimated from 5000 bootstrap
resamplings of the source galaxies; we do not include the cosmic
shear effects because they are non-trivial to estimate
for this rescaled mass map approach.
We adopt the conventional definition of the virial radius, within which
the enclosed mean density becomes
200 times the critical density $\rho_c(z)=3H(z)^2/8\pi G$ at the redshift of the cluster.
Although the factor of 200 is most meaningful in a matter-dominated flat universe,
we retain this definition so as to enable
a consistent comparison with the values of other clusters found in the literature.
The assumption of spherical symmetry (SIS)
allows us to estimate $r_{200}\simeq0.75$~Mpc
and $M_{200}\simeq2.0\times 10^{14} M_{\sun}$ for both Lynx clusters.
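These virial values can be reproduced from the SIS velocity dispersion via the standard relations $r_{200}=\sqrt{2}\,\sigma/[10\,H(z)]$ and $M_{200}=(4/3)\pi r_{200}^3 \times 200\rho_c(z)$; the sketch below assumes $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.3$, so small differences from the quoted numbers are expected:

```python
import math

H0, OM, OL = 70.0, 0.3, 0.7   # assumed cosmology; H0 in [km/s/Mpc]
G = 4.301e-9                  # [Mpc Msun^-1 (km/s)^2]

def hubble(z):
    """H(z) [km/s/Mpc] in flat LCDM."""
    return H0 * math.sqrt(OM * (1 + z)**3 + OL)

def sis_virial(sigma_kms, z):
    """(r200 [Mpc], M200 [Msun]) for an SIS with velocity dispersion sigma."""
    hz = hubble(z)
    r200 = math.sqrt(2) * sigma_kms / (10 * hz)   # from 3 sigma^2/(2 pi G r^2) = 200 rho_c
    rho_c = 3 * hz**2 / (8 * math.pi * G)         # critical density [Msun/Mpc^3]
    m200 = (4.0 / 3.0) * math.pi * r200**3 * 200 * rho_c
    return r200, m200

r200, m200 = sis_virial(740.0, 1.265)   # Lynx-E central values
```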
These virial properties are much smaller than those of the clusters at $z\sim0.83$
studied in Papers I and II. We reported in Paper I that CL 0152-1357 has a virial radius of $r_{200}\sim1.1$~Mpc
and a virial mass of $M_{200}\sim4.5\times10^{14}~M_{\sun}$. For
MS 1054-0321, Paper II quoted $r_{200}\simeq1.5$~Mpc and $M_{200}\simeq1.1\times10^{15}~M_{\sun}$.
If we assume that the two Lynx clusters are approaching each other perpendicular to the line of sight at a free-fall speed,
our order-of-magnitude estimation predicts that
the two Lynx clusters will merge into a single cluster whose virial
mass exceeds
$\sim 4.0\times 10^{14} M_{\sun}$ on a time scale of $t\sim2$~Gyr (i.e., by $z\sim0.8$).
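The quoted time scale is consistent with a two-body free-fall estimate, $t\approx(\pi/2)\,d^{3/2}/\sqrt{2GM_{tot}}$; note that the separation $d\approx2$ Mpc used below is a hypothetical value for illustration, not a measurement from this work:

```python
import math

G = 4.301e-9             # [Mpc Msun^-1 (km/s)^2]
KM_PER_MPC = 3.086e19
SEC_PER_GYR = 3.156e16

def freefall_time_gyr(d_mpc, m_total_msun):
    """Two-body radial free-fall time from rest at separation d (point masses),
    t = (pi/2) d^(3/2) / sqrt(2 G M_tot); order-of-magnitude only."""
    v_scale = math.sqrt(2 * G * m_total_msun / d_mpc)      # [km/s]
    t_sec = (math.pi / 2) * (d_mpc * KM_PER_MPC) / v_scale
    return t_sec / SEC_PER_GYR

# two ~2e14 Msun clusters at an assumed (hypothetical) separation of ~2 Mpc
t = freefall_time_gyr(2.0, 4.0e14)   # ~2 Gyr
```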
\section{$CHANDRA$ X-RAY ANALYSIS}
\subsection{Cluster Temperature and Luminosity \label{section_temperature}}
The X-ray spectra of Lynx-E and Lynx-W were extracted from circular regions ($\bar{r}\sim36\arcsec$) positioned
at their approximate X-ray centroids after the point sources (Stern et al. 2002) were
removed. The redistribution matrix file (RMF) and the ancillary response file (ARF) were created using the CIAO tools version 3.2
with the calibration database (CALDB) version 3.1, which
properly accounts for the time-dependent low-energy QE degradation, as well as the charge transfer inefficiency (CTI).
The photon statistics are somewhat poor, mainly because the clusters are at high redshift ($\bar{z}=1.265$) and
thus the cosmological surface brightness dimming [$\propto(1+z)^4$] is severe. In particular, as implied by its low
temperature ($T<2$ keV), the Poisson scatter for Lynx-W is worse. Therefore, we constructed the spectra
for both clusters with a minimum count of 40 per spectral bin. We think that this choice makes the
spectral fitting stable without diluting the overall shape of the photon distribution too much. Because it
is impossible to constrain the iron abundance given the statistics, we fixed the metallicity at 0.36 $Z_{\sun}$.
This assumes that both Lynx clusters possess a metallicity similar to that of RDCS 1252.9-2927 at $z=1.24$ (Rosati et al. 2004).
However, as noted by Stanford et al. (2001), we observed only minor changes even when different values were tried.
The Galactic hydrogen column density was also fixed at $\mbox{n}_H=2.0\times10^{20}~\mbox{cm}^{-2}$ (Dickey \& Lockman 1990).
We used the $\chi^2$ minimization modified by Gehrels (1986; CHI-GEHRELS), who extended the conventional
$\chi^2$ statistics so that it can handle the deviation of the Poisson distribution from the Gaussian in
the low-count limit.
Figure~\ref{fig_spec} shows the best-fit MEKAL plasma spectra (Kaastra \& Mewe 1993; Liedahl, Osterheld, \&
Goldstein 1995) for both clusters. We obtain $T=3.8_{-0.7}^{+1.3}$~keV for Lynx-E with a
reduced $\chi^2$ of 0.79 (19 degrees of freedom). Lynx-W is determined to have $T=1.7_{-0.4}^{+0.7}$~keV
with a reduced $\chi^2$ of 1.16 (7 degrees of freedom).
The observed fluxes are estimated to be $F(0.4-7~\mbox{keV})=1.5_{-0.2}^{+0.3}\times10^{-14}~\mbox{ergs~cm}^{-2}~\mbox{s}^{-1}$ and
$F(0.4-4~\mbox{keV})=7.2_{-0.5}^{+1.4}\times10^{-15}~\mbox{ergs}~\mbox{cm}^{-2}~\mbox{s}^{-1}$, which
can be transformed into rest-frame (and aperture-corrected to $\sim0.5$~Mpc) bolometric (0.01 - 40 keV)
luminosities of $L_X=(2.1\pm0.5)\times10^{44}$
and $(1.5\pm0.8)\times10^{44}~\mbox{ergs}~\mbox{s}^{-1}$ for Lynx-E and Lynx-W, respectively
(note that the shallow surface brightness profile of Lynx-W requires a rather large aperture
correction factor).
\subsection{X-ray Surface Brightness Profile and Mass Determination \label{section_sb}}
The azimuthally averaged radial profiles were created from the exposure-corrected $Chandra$ image.
In Figure~\ref{fig_betafit} we display these radial profiles with the best-fit isothermal beta models
for both clusters. As indicated by their X-ray images and cluster galaxy distributions,
Lynx-E has a higher concentration of the ICM ($\beta=0.71\pm0.12$ and $r_c=13\arcsec.2\pm3\arcsec.2$)
than Lynx-W ($\beta=0.42\pm0.07$ and $r_c=4\arcsec.9\pm2\arcsec.8$).
Together with the cluster temperatures determined in \textsection\ref{section_temperature}, these
structural parameters can be converted to the cluster mass under the assumption of hydrostatic
equilibrium. In general, many authors report cluster masses within a spherical volume rather
than within a cylindrical volume spanning from the observer to the source plane, which, however,
is the preferred and natural choice in weak-lensing measurements. This different
geometry is often a source of confusion and subtlety when comparing masses between
the two approaches. Therefore, in this paper we present our X-ray mass estimates in a cylindrical volume, in order
to ensure a more straightforward comparison with the weak-lensing result, using the following equation (Paper II):
\begin{equation}
M_{ap}(r)= 1.78 \times 10^{14} \beta \left ( \frac{T}{\mbox{keV}} \right )
\left ( \frac{r}{\mbox{Mpc}} \right ) \frac{r/r_c}{\sqrt{1+(r/r_c)^2}} M_{\sun} \label{eqn_xray_mass_2d}
\end{equation}
\noindent
For Lynx-E we obtain $M(r\leq0.5~\mbox{Mpc})=2.3_{-0.4}^{+0.8}\times10^{14} M_{\sun}$, which is in
good agreement with our weak-lensing measurement.
On the other hand, the X-ray mass of Lynx-W ($M(r\leq0.5~\mbox{Mpc})=6.3_{-1.5}^{+2.6}\times10^{13} M_{\sun}$)
is much lower than the weak-lensing estimate. We will discuss a few possible scenarios for this discrepancy
in \textsection\ref{summary}.
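Equation~(\ref{eqn_xray_mass_2d}) is straightforward to evaluate numerically. The sketch below uses the best-fit parameters quoted in this section, converting $r_c$ to Mpc with the $\sim8.4$ kpc arcsec$^{-1}$ scale implied by $13\arcsec.2 \leftrightarrow 111$ kpc (the Lynx-W core radius in kpc is therefore an inferred value):

```python
def xray_aperture_mass(beta_fit, t_kev, r_mpc, rc_mpc):
    """Projected (cylindrical) isothermal beta-model mass [Msun],
    M_ap = 1.78e14 * beta * T * r * (r/rc) / sqrt(1 + (r/rc)^2)."""
    x = r_mpc / rc_mpc
    return 1.78e14 * beta_fit * t_kev * r_mpc * x / (1 + x**2) ** 0.5

# Lynx-E best-fit values: beta = 0.71, T = 3.8 keV, r_c = 111 kpc
m_e = xray_aperture_mass(0.71, 3.8, 0.5, 0.111)   # ~2.3e14 Msun, as quoted
# Lynx-W: beta = 0.42, T = 1.7 keV, r_c ~ 4.9 arcsec (~0.041 Mpc, inferred)
m_w = xray_aperture_mass(0.42, 1.7, 0.5, 0.041)   # ~6.3e13 Msun, as quoted
```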
\section{COMPARISON WITH OTHER STUDIES}
The first attempt to estimate the mass of Lynx-W was made by Stanford et al. (1997)
using the X-ray luminosity from the ROSAT-PSPC observation and
the velocity dispersion obtained from the Keck spectroscopy of 8 galaxies.
They converted the luminosity $L_X\sim1.5\times10^{44} \mbox{ergs}~\mbox{s}^{-1}$ to
$M(r<2.3~ \mbox{Mpc})\sim7.8\times10^{14} M_{\sun}$ assuming $\beta=0.8$. A similar
value of $M(r<2.3 \mbox{Mpc})=5.4_{-2.3}^{+3.1}\times10^{14} M_{\sun}$ was estimated
from the velocity dispersion of $\sigma=700\pm180 \mbox{km}~\mbox{s}^{-1}$ (note
that they adopted $h_{100}=0.65$ and $q_0=0.1$).
Although both masses are consistent with each other,
their X-ray luminosity measurement seems to have suffered severe contamination
from neighboring point sources, which are now identified in the $Chandra$ observation.
In their presentation of the $Chandra$ analysis,
Stanford et al. (2001) did not attempt to estimate the mass of Lynx-W
because of the large uncertainty of the temperature measurement, as well as the apparent
asymmetry of the X-ray emission.
Our predicted velocity dispersion of $\sigma_{SIS}=762_{-133}^{+113}\mbox{km}\mbox{s}^{-1}$
from the SIS fit result is consistent with their most recent determination
of the velocity dispersion $\sigma=650\pm170~\mbox{km}~\mbox{s}^{-1}$ from the spectroscopic
redshifts of the 9 member galaxies. Lynx-W was also selected as one of the 28 X-ray clusters for
the study of the X-ray scaling relation at high redshifts by Ettori et al. (2004). From their
re-analysis of the $Chandra$ data, they obtained $\beta=0.97\pm0.43$, $r_c=163\pm70$~kpc, and $T_X=2.9\pm0.8$~keV, which
predict a projected mass of $M(r\leq 0.5~\mbox{Mpc})= (3.0\pm1.5)\times10^{14} M_{\sun}$ (eqn.~\ref{eqn_xray_mass_2d}).
This mass is consistent with our weak-lensing estimate [$(2.0\pm0.5) \times 10^{14} M_{\sun}$], but
much higher than the value from our re-analysis of the same $Chandra$ data ($6.3_{-1.5}^{+2.6}\times10^{13} M_{\sun}$).
In general, many detailed steps in the $Chandra$ X-ray analysis, such as the QE correction, background
modeling, flare removal, and spectral aperture, affect the final result, and even more so
when the source is faint. Therefore, it is difficult, if not impossible, to trace the exact causes
of the differences. Nevertheless, we note that there is an important difference in the calibration
of the low-energy quantum efficiency correction between the results. Ettori et al. (2004) used
the ACISABS correction method (Chartas \& Getman 2002) to account for the low-energy QE degradation, which is, however,
now deprecated by the $Chandra$ Data Center. We also demonstrated in Paper II that
the use of this ACISABS model causes a difference of $\sim1$~keV in the temperature determination of MS 1054-0321.
We suspect that the effect should be more important for Lynx-W because of
its low temperature and luminosity.
Stanford et al. (2001) obtained an X-ray temperature of $5.8_{-1.7}^{+2.8}$ keV
for Lynx-E. Combined with their determination of $\beta=0.61\pm0.12$ and
$r_c=11\arcsec.14\pm3\arcsec.41$, this gives
a projected mass of $M(r\leq0.5\mbox{Mpc})=3.1_{-0.9}^{+2.4} \times 10^{14} M_{\sun}$
(eqn.~\ref{eqn_xray_mass_2d}), which is slightly higher
than our X-ray re-analysis of the same $Chandra$ data by $\sim35$\% though the error bars from
both results marginally overlap. Vikhlinin et al. (2002) included Lynx-E in their
sample of 22 distant clusters to study the evolution of the X-ray scaling relations.
With the early understanding of the low-energy QE problem of $Chandra$, they obtained
$T_X=4.7\pm1.0$~keV, $r_c=167$~kpc, and $\beta=0.85\pm0.33$. Using the ACISABS correction, Ettori et al. (2004)
reported $T_X=5.2_{-1.1}^{+1.6}$~keV, $r_c=128\pm40$~kpc, and $\beta=0.77\pm0.19$.
The results from these two papers are statistically consistent with, but
slightly higher than, our values ($T=3.8_{-0.7}^{+1.3}$~keV, $r_c=111\pm27$~kpc, and $\beta=0.71\pm0.12$),
which predict the lowest projected mass of $M(r\leq0.5~\mbox{Mpc})=2.3_{-0.4}^{+0.8}\times10^{14} M_{\sun}$.
As already mentioned in the discussion of the Lynx-W temperature above, we suspect that
the difference in temperatures mainly stems from the different correction methods of the low-energy
QE degradation.
Although our understanding of the $Chandra$ instrument is still evolving and this may
necessitate some updates to our results,
it is encouraging to
note that this X-ray mass is closest to
our independent lensing determination of the cluster mass,
$M(r\leq0.5~\mbox{Mpc})=(2.0\pm0.6)\times10^{14} M_{\sun}$, from the SIS fit result.
Our spectroscopic catalog currently provides the redshifts of 11 member galaxies
within an $r=80\arcsec$ radius (B. Holden et al., in preparation). Based on Tukey's biweight estimator,
we obtain a velocity dispersion of $720\pm140~\mbox{km}~\mbox{s}^{-1}$ (without
assuming a Gaussian distribution). This direct measurement agrees with the predicted
velocity dispersion of $740_{-134}^{+113}~\mbox{km}~\mbox{s}^{-1}$ from the lensing
analysis (\textsection\ref{section_mass_estimate}). In addition,
the cluster temperature $T_X=3.8_{-0.7}^{+1.3}$~keV with $\beta=0.71$ translates into
$\sigma_v=662_{-64}^{+106}~\mbox{km}~\mbox{s}^{-1}$ (from $\beta=\mu m_p \sigma_v^2/kT_X$),
in good agreement with both results.
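This conversion can be sketched numerically; the mean molecular weight $\mu\approx0.59$ adopted below is an assumption chosen because it reproduces the quoted central value:

```python
def sigma_from_beta_t(beta_fit, t_kev, mu=0.59):
    """Velocity dispersion [km/s] from beta = mu m_p sigma_v^2 / (k T_X),
    i.e., sigma_v = sqrt(beta k T_X / (mu m_p)); mu ~ 0.59 is assumed."""
    KEV_J = 1.602e-16      # joules per keV
    M_P = 1.673e-27        # proton mass [kg]
    sigma_ms = (beta_fit * t_kev * KEV_J / (mu * M_P)) ** 0.5
    return sigma_ms / 1e3

sigma_v = sigma_from_beta_t(0.71, 3.8)   # ~660 km/s for the Lynx-E central values
```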
\section{DISCUSSION AND CONCLUSIONS \label{summary}}
We have presented a weak-lensing analysis of the two Lynx clusters at $\bar{z}=1.265$
using the deep ACS $i_{775}$ and $z_{850}$ images. Our mass reconstruction
clearly detects the dark matter clumps associated with the two high-redshift
clusters and with other intervening objects within the ACS field, including
the known foreground cluster at $z\simeq0.54$.
In order to verify the significance of the cluster detection and
to separate the high-redshift signal from the low-redshift contributions,
we performed a weak-lensing tomography by selecting an alternate
lower-redshift source plane. This second mass reconstruction does not
show the mass clumps around the high-redshift clusters, while maintaining
most of the other structures seen in the first mass map. This experiment
strongly confirms that the weak-lensing signals observed in the
first mass reconstruction are real and come from the high-redshift Lynx clusters.
Interestingly, both clusters are found to have similar weak-lensing masses
of $\sim 2.0\times 10^{14} M_{\sun}$ within 0.5 Mpc ($\sim60\arcsec$) aperture radius
despite their discrepant X-ray properties. Our re-analysis of the Chandra archival data
with the use of the latest calibration of the low-energy QE degradation
shows that Lynx-E and W have temperatures of $T=3.8_{-0.7}^{+1.3}$ and
$1.7_{-0.4}^{+0.7}$~keV, respectively.
Combined with the X-ray surface brightness profile measurements, the X-ray temperature of Lynx-E
gives a mass estimate in good agreement with the weak-lensing result. On the other hand,
the X-ray mass of Lynx-W is smaller than the weak-lensing estimate by nearly a factor of three.
According to our experiment in \textsection\ref{section_mass_estimate},
it is unlikely that any foreground contamination or cosmic shear effect in
weak-lensing measurement causes this large discrepancy.
Apart from the simple but plausible possibility that Lynx-W might have a filamentary structure extended along the line of sight,
yielding a substantial projected mass but only low-temperature thermal emission, we can also
consider the self-similarity breaking (e.g., Ponman et al. 1999; Tozzi \& Norman 2001; Rosati, Stefano, \& Norman 2002)
typically observed for low-temperature X-ray systems. There have been quite a few suggestions that non-gravitational heating (and thus extra
entropy) might prevent the ICM from collapsing further at the cluster core.
The effect is supposed to be more pronounced
in colder systems whose virial temperature is comparable to the temperature created by this non-gravitational heating,
leading to shallower gas profiles than
those of high-temperature systems (e.g., Balogh et al. 1999; Tozzi \& Norman 2001).
Interestingly, our determination of the surface brightness profile of Lynx-W is much shallower ($\beta=0.42\pm0.07$) than that
of Lynx-E ($\beta=0.71\pm0.12$) (however, Ettori et al. (2004) obtained $\beta=0.97\pm0.43$ for Lynx-W).
The relatively loose distribution of the cluster galaxies in Lynx-W without any apparent BCG defining the cluster center
leads us to consider another possibility that the system might be dynamically young and the ICM
has not fully thermalized within the potential well. If we imagine that
the ICM is not primordial, but has been ejected from the cluster galaxies at some recent epoch,
it is plausible to expect that the X-ray temperature of the ICM might yet under-represent the depth of the cluster
potential well. Tozzi et al. (2003) investigated the iron abundance in the ICM
at $0.3<z<1.3$ and argued that the result was consistent with no evolution of the mean iron abundance
out to $z\simeq1.2$. If we assume that, as they suggested, Type Ia SNe are the dominant sources of
this iron enrichment and have already injected their metals into the ICM by $z\sim1.2$,
a significant fraction of clusters at $z\gtrsim 1.2$ may possess dynamically young ICM.
Recently, Nakata et al. (2005) reported with a photometric redshift technique
the discovery of seven other cluster candidates around
these two Lynx clusters possibly forming a $z\sim1.3$ supercluster. Although
further evidence is needed that the individual clumps are dynamically
bound, the clear enhancement of the red galaxies consistent with the color
at the redshift of the two known Lynx clusters is worthy of our attention.
If they are indeed found to be forming groups/clusters at $z\sim1.3$, but
missed by X-ray observations because of their low X-ray contrast,
detailed studies of these young high-redshift structures will provide a critical benchmark
for testing our understanding of structure formation, as well as of individual galaxy
evolution, in different environments.
Deep two-band ($i_{775}$ and $z_{850}$) $HST$/ACS imaging of five of the seven group/cluster candidates of
Nakata et al. (2005) is scheduled in $HST$ Cycle 14 (Prop. 10574, PI: Mei). Studies
similar to the current investigation will not only test
whether there exist dark matter clumps around the candidate galaxies, but also
quantify the environments for the investigation of the cluster galaxy
color/morphology evolution.
ACS was developed under NASA contract NAS5-32865, and this research was supported
by NASA grant NAG5-7697. We are grateful for an equipment
grant from Sun Microsystems, Inc.
Some of the data presented herein were obtained at the W.M. Keck
Observatory, which is operated as a scientific partnership among the
California Institute of Technology, the University of California and
the National Aeronautics and Space Administration. The Observatory was
made possible by the generous financial support of the W.M. Keck
Foundation. The authors wish to recognize and acknowledge the very
significant cultural role and reverence that the summit of Mauna Kea
has always had within the indigenous Hawaiian community. We are most
fortunate to have the opportunity to conduct observations from this
mountain.
\section{Introduction}
Turbo codes \cite{397441,Vucetic:2000:TCP:352869} and low-density parity-check (LDPC) codes \cite{Gallager63low-densityparity-check} are two important classes of codes that have been adopted in various communications standards. These codes are capable of achieving near-Shannon-limit performance as the blocklength grows large. Spatial coupling brings further performance improvement to these codes. The first spatially-coupled LDPC (SC-LDPC) codes, also known as LDPC convolutional codes, were introduced in \cite{782171}. These codes can be obtained by spreading the edges of the Tanner graph \cite{1056404} of the underlying uncoupled LDPC block codes to several adjacent blocks. The most important property of SC-LDPC codes, observed numerically in \cite{5571910} and proven analytically in \cite{5695130,6589171}, is that their iterative decoding threshold under suboptimal belief propagation (BP) decoding achieves the optimal maximum-a-posteriori (MAP) decoding threshold. Such a phenomenon is known as threshold saturation \cite{5695130}. Another advantage of SC-LDPC codes is that they also preserve the minimum distance growth rate of their underlying uncoupled LDPC block codes \cite{7152893}.
The concept of spatial coupling has also been applied, with much success, to various classes of codes to construct capacity-approaching channel codes. For example, the authors in \cite{Smith12} proposed a class of spatially-coupled product codes called staircase codes, which can operate close to the binary symmetric channel capacity under iterative bounded-distance decoding. In this work, we focus on turbo-like codes, whose factor graphs \cite{910572} have convolutional code trellis constraints. In \cite{8002601}, the authors introduced spatially-coupled turbo-like codes (SC-TCs) by applying spatial coupling on parallel concatenated codes (PCCs) \cite{397441}, serially concatenated codes (SCCs) \cite{669119} and braided convolutional codes (BCCs) \cite{5361461}. It was proven in \cite{8002601} that threshold saturation also occurs for SC-TCs. Further investigations on the trade-off between error floor and waterfall performance of SC-TCs as well as the effects of coupling memory and component code blocklength on decoding performance were conducted in \cite{8631116} and \cite{9448689}, respectively. Despite the capacity-approaching performance of SC-SCCs and SC-BCCs, the thresholds of SC-PCCs (especially when punctured) are (strictly) bounded away from capacity \cite{8002601}. Recently, partially-information coupled turbo codes (PIC-TCs) were proposed in \cite{8368318} to enhance the performance of the hybrid automatic repeat request protocol in LTE \cite{4907407}. The main idea is that each pair of adjacent code blocks shares a fraction of information bits such that these bits are protected by two component turbo codewords. We extended the design of PIC-TCs to a large coupling memory and used density evolution to compute their decoding thresholds in \cite{8989359,PIC2020}.
One benefit of such construction is that the technique of partial coupling can be applied to any systematic linear code such as LDPC codes \cite{8301547} and polar codes \cite{9491085} without changing its encoding and decoding architecture. Both theoretical analysis and simulation results in \cite{PIC2020,9174156} showed that partially-coupled turbo codes outperform SC-PCCs and have comparable performance to SC-SCCs and SC-BCCs. However, threshold saturation was neither observed nor proven for PIC-TCs in \cite{8368318,8989359,PIC2020,9174156}.
Although the above works on SC-TCs have all reported capacity-approaching performance, it remains unclear whether spatial coupling can allow turbo-like codes to eventually achieve capacity. Motivated by the fact that PCCs (or turbo codes) are the standard channel coding scheme in the 4G wireless mobile communication systems, which coexist with 5G systems, we are interested in designing new and powerful coupled codes with PCCs as component codes that can be compatible with the current standard. In this paper, we introduce generalized SC-PCCs (GSC-PCCs), which are constructed by applying spatial coupling on a component PCC, where a fraction of the information bits are repeated $q$ times. The main contributions of this paper are as follows:
\begin{itemize}
\item We introduce the construction and decoding for GSC-PCCs. We emphasize that the proposed codes not only can be seen as a generalization of the conventional SC-PCCs \cite{8002601}, but also exhibit a similar structure to that of PIC-TCs \cite{PIC2020}, as the repeated bits are protected by the component PCC codewords at several time instants. The proposed construction allows GSC-PCCs to inherit all the positive features of both SC-PCCs and PIC-TCs, such as threshold saturation and close-to-capacity performance when punctured.
\item We derive the density evolution (DE) equations for the proposed GSC-PCC ensembles on the binary erasure channel (BEC). To evaluate and compare the ensembles at rates higher than their mother PCCs, we also derive DE equations for the punctured ensembles. In particular, for a given target code rate $R$ and coupling memory $m$, we find the optimal fraction of repeated information bits that gives the largest decoding threshold for various repetition factors $q$. With these DE equations, we compute the MAP decoding threshold by using the area theorem \cite{Measson2006thesis} and observe threshold saturation numerically.
\item We analytically prove that threshold saturation occurs for the proposed GSC-PCC ensembles by using the proof technique based on potential functions \cite{6325197}. By utilizing this property, we then rigorously prove that the proposed GSC-PCC ensembles with rate $R$ and 2-state convolutional component codes achieve at least a fraction $1-\frac{R}{R+q}$ of the BEC capacity and this multiplicative gap vanishes as $q$ tends to infinity, i.e., GSC-PCCs with 2-state convolutional component codes achieve capacity. To the best of our knowledge, this is the first class of turbo-like codes that are proven to be capacity-achieving. We conjecture that GSC-PCC ensembles with any convolutional component codes are also capable of achieving capacity. Furthermore, the connections between the threshold of GSC-PCC ensembles, the repetition factor, and the strength of the underlying component code are established.
\item The error performance of GSC-PCCs under finite blocklength on the BEC and additive white Gaussian
noise (AWGN) channel is investigated via simulation. Both theoretical analysis and simulation results show that the proposed codes significantly outperform existing coupled codes with PCCs as component codes. In addition, we present an effective method for selecting coupled information bits to further enhance the error performance of GSC-PCCs.
\end{itemize}
\section{Generalized Spatially-Coupled Parallel Concatenated Codes}\label{sec2}
In this section, we first introduce the uncoupled PCCs with partial information repetition that will be used to construct GSC-PCCs. Then, we present the encoding and decoding of GSC-PCCs.
\subsection{Parallel Concatenated Codes with Partial Repetition}
Uncoupled PCCs with partial information repetition are similar to the dual-repeat-punctured turbo codes in \cite{5308225}, except that in our case only a fraction of the information bits are repeated. The encoder of an uncoupled PCC with partial repetition is depicted in Fig. \ref{fig:GSC_PCC_ENC}(a). A length-$K$ information sequence $\boldsymbol{u}$ is divided into two sequences, $\boldsymbol{u}_{\text{r}}$ and $\boldsymbol{u}_{\text{o}}$. Then, sequence $\boldsymbol{u}_{\text{r}}$ is repeated $q$ times and combined with $\boldsymbol{u}_{\text{o}}$ to form a length-$K'$ information sequence $[\boldsymbol{u}_{\text{r}},\ldots,\boldsymbol{u}_{\text{r}},\boldsymbol{u}_{\text{o}}]$. The resultant sequence and its reordered copy $\Pi([\boldsymbol{u}_{\text{r}},\ldots,\boldsymbol{u}_{\text{r}},\boldsymbol{u}_{\text{o}}])$, where $\Pi(.)$ denotes the interleaving function, are encoded by the upper and lower convolutional encoders, respectively. We define the repetition ratio $\lambda \triangleq \frac{K'-K}{(q-1)K'} \in [0,1/q]$ as the length of $\boldsymbol{u}_r$ over $K'$. The length of $\boldsymbol{u}_{\text{o}}$ is then given by $(1-q\lambda)K'$. The repetition ratio is an important parameter and its definition and notation are used throughout the rest of the paper. Finally, the codeword is a length-$N $ sequence $\boldsymbol{c}=[\boldsymbol{u}_{\text{r}},\boldsymbol{u}_{\text{o}},\boldsymbol{v}^{\text{U}},\boldsymbol{v}^{\text{L}}]=[\boldsymbol{u},\boldsymbol{v}^{\text{U}},\boldsymbol{v}^{\text{L}}]$, comprising the information sequence before repetition, as well as two length-$\frac{N-K}{2}$ parity sequences generated by the upper and lower convolutional encoders. Note that it is natural to exclude all other $q-1$ replicas of $\boldsymbol{u}_{\text{r}}$ from $\boldsymbol{c}$ as they do not contain new information. 
Given the code rate of the mother PCC, $R_0 = \frac{K'}{N'}$, where $N' = N-K+K'$ is its codeword length, the code rate of the uncoupled PCC with partial repetition is
\begin{align}\label{uncoupled_rate}
R_{\text{uc}} =& \frac{K'(1-(q-1)\lambda)}{N'-K'+K'(1-(q-1)\lambda)}
= \frac{1-(q-1)\lambda}{\frac{1}{R_0}-(q-1)\lambda}
\geq \frac{1}{q(\frac{1}{R_0}-1)+1},
\end{align}
where the last inequality shows that the lowest rate is achieved when $\lambda = 1/q$.
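As a quick numerical sanity check on \eqref{uncoupled_rate}, the rate expression can be evaluated with exact rational arithmetic; the parameter values below are illustrative:

```python
from fractions import Fraction

def rate_uncoupled(R0, q, lam):
    """Rate of an uncoupled PCC with repetition factor q and repetition ratio lam."""
    # R_uc = (1 - (q-1)*lam) / (1/R0 - (q-1)*lam)
    return (1 - (q - 1) * lam) / (1 / R0 - (q - 1) * lam)

R0 = Fraction(1, 3)                                # rate-1/3 mother PCC
print(rate_uncoupled(R0, 2, Fraction(0)))          # lam = 0 recovers the mother rate: 1/3
print(rate_uncoupled(R0, 2, Fraction(1, 2)))       # lam = 1/q gives the lowest rate: 1/5
print(rate_uncoupled(R0, 3, Fraction(1, 3)))       # q = 3 lowest rate 1/(q*(1/R0-1)+1): 1/7
```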
\subsection{Encoding}
\begin{figure}[t!]
\centering
\includegraphics[width=2.7in,clip,keepaspectratio]{GSC_PCC_enc.pdf}
\caption{Encoders of (a) an uncoupled PCC with partial repetition, and (b) a GSC-PCC with $m=1$ at time $t$.}
\label{fig:GSC_PCC_ENC}
\end{figure}
We construct GSC-PCCs by applying spatial coupling to the above PCCs with partial repetition. The block diagram of a GSC-PCC with coupling memory $m=1$ at time instant $t$ is depicted in Fig. \ref{fig:GSC_PCC_ENC}(b).
An information sequence $\boldsymbol{u}$ is divided into $L$ sequences of equal length $K$, which are denoted by $\boldsymbol{u}_t$, $t =1,\ldots,L$. We refer to $L$ as the coupling length. At time $t$, $\boldsymbol{u}_t$ is decomposed into $\boldsymbol{u}_{t,\text{r}}$ and $\boldsymbol{u}_{t,\text{o}}$, where $\boldsymbol{u}_{t,\text{r}}$ is a length-$\lambda K'$ sequence and $\boldsymbol{u}_{t,\text{o}}$ is a length-$K'(1-q\lambda)$ sequence. Sequence $\boldsymbol{u}_{t,\text{r}}$ is repeated $q$ times and combined with $\boldsymbol{u}_{t,\text{o}}$ to form a length-$K'$ information sequence $[\boldsymbol{u}_{t,\text{r}},\ldots,\boldsymbol{u}_{t,\text{r}},\boldsymbol{u}_{t,\text{o}}]$. The resultant sequence is then decomposed into $m+1$ sequences of length $\frac{K'}{m+1}$, denoted by $\boldsymbol{u}^\text{U}_{t,t+j}$, $j=0,\ldots,m$. The information sequence $\boldsymbol{u}^\text{U}_{t,t+j}$ is used as a part of the input of the upper convolutional encoder at time $t+j$. The coupling is performed such that a length-$K'$ information sequence, $[\boldsymbol{u}^\text{U}_{t-m,t},\ldots,\boldsymbol{u}^\text{U}_{t,t}]$, is formed. Meanwhile, the reordered copy of information sequence $[\boldsymbol{u}_{t,\text{r}},\ldots,\boldsymbol{u}_{t,\text{r}},\boldsymbol{u}_{t,\text{o}}]$, i.e., $\Pi([\boldsymbol{u}_{t,\text{r}},\ldots,\boldsymbol{u}_{t,\text{r}},\boldsymbol{u}_{t,\text{o}}])$, is also decomposed into $m+1$ sequences of length $\frac{K'}{m+1}$, i.e., $\boldsymbol{u}^\text{L}_{t,t+j}$, $j=0,\ldots,m$, where $\boldsymbol{u}^\text{L}_{t,t+j}$ is used as a part of the input of the lower convolutional encoder at time $t+j$. With coupling, a length-$K'$ information sequence $[\boldsymbol{u}^\text{L}_{t-m,t},\ldots,\boldsymbol{u}^\text{L}_{t,t}]$ is formed. 
The codeword obtained at time $t$ is a length-$N$ sequence $\boldsymbol{c}_t = [\boldsymbol{u}_t,\boldsymbol{v}^\text{U}_t,\boldsymbol{v}^\text{L}_t]$, where $\boldsymbol{v}^\text{U}_t$ and $\boldsymbol{v}^\text{L}_t$ are two length-$\frac{N-K}{2}$ parity sequences as the result of encoding $\Pi^\text{U}([\boldsymbol{u}^\text{U}_{t-m,t},\ldots,\boldsymbol{u}^\text{U}_{t,t}])$ and $\Pi^\text{L}([\boldsymbol{u}^\text{L}_{t-m,t},\ldots,\boldsymbol{u}^\text{L}_{t,t}])$ at the upper and lower systematic convolutional encoders, respectively, at time $t$. We remark that the three interleavers are crucial for introducing randomness in code structures such that the codes become ensembles for which density evolution can be rigorously applied to analyze the decoding threshold.
To initialize and terminate the coupled chain, we can simply set $\boldsymbol{u}_t$ to $\boldsymbol{0}$ for $t\leq0$ and $t>L$. As a result, the code rate of the GSC-PCC with coupling memory $m$, coupling length $L$, and repetition factor $q$ is
\begin{align}
R_{\text{sc}}&=\frac{KL}{NL+m(N-K)} \nonumber \\
&=\frac{K'L(1-(q-1)\lambda)}{L(N'-K'(q-1)\lambda)+m(N'-K')} \nonumber \\
&=\frac{L(1-(q-1)\lambda)}{L(\frac{1}{R_0}-(q-1)\lambda)+m(\frac{1}{R_0}-1)},
\end{align}
where $R_0=\frac{K'}{N'}$ is the code rate of the mother PCC. When $L \rightarrow \infty$, the code rate of the GSC-PCC approaches $R_{\text{uc}}$ in \eqref{uncoupled_rate}.
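The rate expression and its limiting behavior can be checked numerically; a small sketch with illustrative parameter values:

```python
from fractions import Fraction

def rate_coupled(R0, q, lam, L, m):
    """Rate of a GSC-PCC with coupling length L and coupling memory m."""
    num = L * (1 - (q - 1) * lam)
    den = L * (1 / R0 - (q - 1) * lam) + m * (1 / R0 - 1)
    return num / den

R0, q, lam = Fraction(1, 3), 2, Fraction(1, 4)
# m = 0 removes the termination rate loss entirely (uncoupled rate):
print(rate_coupled(R0, q, lam, 5, 0))        # 3/11
# for m >= 1 the termination rate loss vanishes as L grows toward 3/11:
for L in (10, 100, 10**6):
    print(float(rate_coupled(R0, q, lam, L, 1)))
```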
Due to the partial repetition of information bits, GSC-PCCs have an encoding latency $\frac{1}{1-(q-1)\lambda}$ times that of SC-PCCs. When $q=2$, GSC-PCCs have a similar encoding latency to PIC-TCs because PIC-TCs also have a fraction of information bits repeated twice. However, it is important to note that the encoding of GSC-PCCs can be performed either in parallel, i.e., encoding $L$ information sequences in parallel, or in a serial and streaming fashion, making them still more appealing than block codes.
\subsection{Comparison to Existing Codes}
There are connections between the proposed GSC-PCCs and some existing SC-TCs. First, the proposed codes can be seen as a generalization of the conventional SC-PCCs \cite{8002601}. More precisely, one can obtain the original SC-PCC from a GSC-PCC by setting either $q=1$ or $\lambda = 0$. However, the introduction of partial repetition gives rise to a significant performance improvement, as will be shown in Section \ref{sec:DE} and Section \ref{sec:TS}. In addition, the proposed codes have a more flexible structure as they can reach a code rate as low as $\frac{1}{2q+1}$ when the mother PCC is rate-$1/3$, while the lowest code rate for the conventional SC-PCCs is $1/3$.
The proposed codes also bear similarities to PIC-TCs \cite{PIC2020}, whose coupled information bits are encoded (and protected) by two turbo encoders (four convolutional encoders). In fact, PIC-TCs can be seen as having a fraction of information bits repeated twice and using the copies of those information bits as the input of the turbo encoder at the succeeding time instant. For GSC-PCCs, this can happen when some of the information bits from $\boldsymbol{u}_{t,\text{r}}$ appear in $\boldsymbol{u}^\text{U}_{t,t}$ and $\boldsymbol{u}^\text{L}_{t,t}$, while their copies appear in $\boldsymbol{u}^\text{U}_{t,t+j}$ and $\boldsymbol{u}^\text{L}_{t,t+j}$, $j\in\{1,\ldots,m\}$. In this case, those repeated and coupled information bits can be protected by the component PCC codewords at multiple time instants. However, the coupling of PIC-TCs is at the turbo code level (the information encoded by upper and lower encoders is the same). In contrast, GSC-PCCs are coupled at the convolutional code level (the information encoded by upper and lower encoders is different) such that the proposed codes inherit many nice properties from SC-PCCs, such as threshold saturation (crucial for achieving capacity) and decoding threshold improvement from employing stronger convolutional component codes.
\subsection{Decoding}
The decoding of GSC-PCCs consists of two types of iterations: intra-block iterations and inter-block iterations. Specifically, an intra-block iteration is the exchange of the extrinsic information of information bits between the upper and lower Bahl–Cocke–Jelinek–Raviv (BCJR) \cite{1055186} component decoders at the same time instant. An inter-block iteration is the exchange of extrinsic information of information bits in component codes across $L$ time instants in a forward/backward round trip. Note that the inter-block iteration can also be performed in a sliding window fashion with window size $W$, where $m+1 \leq W \leq L $. To avoid repetition, we focus on the log-likelihood ratio (LLR) updates for information bits since the LLR updates for all parity bits are the same as those for the conventional uncoupled turbo codes.
Let $u^{\text{U}}_{t,k}$ denote the $k$-th information bit at the upper decoder at time $t$, where $k \in \{1,\ldots,K'\}$. Let $L_{\text{C}}(.)$, $L_{\text{E}}(.)$ denote the channel and extrinsic LLRs, respectively. In addition, we denote by $\mathcal{Q}^{\text{U}}_k$ the set of bit positions associated with $u^{\text{U}}_{t,k}$ and its replicas which appear in the upper decoder at the same instant, where $\mathcal{Q}^{\text{U}}_k\subseteq \{1,\ldots,K'\}$ and $|\mathcal{Q}^{\text{U}}_k|\in \{1,\ldots,q\}$. The definition of $\mathcal{Q}^{\text{L}}_k$ is analogous to $\mathcal{Q}^{\text{U}}_k$. Notice that $t$ is not required here because the three interleavers and the selection of information bits to be repeated or coupled are the same for all $t\in\{1,\ldots,L\}$. As an example, $|\mathcal{Q}^{\text{U}}_k|=1$ means that either $u^{\text{U}}_{t,k}$ is not repeated or its replicas are not in the upper decoder at the same time instant. Consequently, there are two types of \emph{a priori} LLRs of each repeated information bit: the \emph{a priori} LLR obtained from its replicas at the upper BCJR decoder at the same time instant, denoted by $L_{\text{A}_1}(.)$; and the \emph{a priori} LLR obtained from its interleaved bit at the lower BCJR decoder at the same time instant, denoted by $L_{\text{A}_2}(.)$. Furthermore, we define $L_{\text{in}}(.)$ and $L_{\text{out}}(.)$ as the input and output LLRs of the BCJR decoder. Due to space limitations, we omit the LLR updates for inter-block decoding as they are similar to those of SC-PCCs \cite{8002601}. The updates for the LLR associated with information bits during intra-block decoding are described as follows.
\emph{\textbf{Step 1 (Input LLR Computation):}} Construct the LLR of $u^{\text{U}}_{t,k}$ for the upper BCJR decoder input as $L_{\text{in}}(u^{\text{U}}_{t,k}) = L_{\text{C}}(u^{\text{U}}_{t,k})+L_{\text{A}_1}(u^{\text{U}}_{t,k})+L_{\text{A}_2}(u^{\text{U}}_{t,k})$, where $L_{\text{A}_1}(u^{\text{U}}_{t,k})$ is computed in Step 4 in the last iteration, and $L_{\text{A}_2}(u^{\text{U}}_{t,k})$ is obtained from the extrinsic LLR of its interleaved bit $u^{\text{L}}_{t,\tilde{k}}$ at the lower BCJR component decoder with a step analogous to Step 4.
\emph{\textbf{Step 2 (BCJR Component Decoding):}} Perform BCJR decoding and obtain the output LLR of $u^{\text{U}}_{t,k}$ as $L_{\text{out}}(u^{\text{U}}_{t,k})$.
\emph{\textbf{Step 3 (Extrinsic Information Computation):}} The extrinsic LLR of $u^{\text{U}}_{t,k}$ is computed as $L_{\text{E}}(u^{\text{U}}_{t,k}) = \sum_{k' \in \mathcal{Q}^{\text{U}}_k} \hat{L}_{\text{E}}(u^{\text{U}}_{t,k'})$, where we define $\hat{L}_{\text{E}}(u^{\text{U}}_{t,k})\triangleq L_{\text{out}}(u^{\text{U}}_{t,k})-L_{\text{in}}(u^{\text{U}}_{t,k})$. Then, for any $k',k''\in \mathcal{Q}^{\text{U}}_k$ with $k'\neq k''$, we have $L_{\text{E}}(u^{\text{U}}_{t,k'}) =L_{\text{E}}(u^{\text{U}}_{t,k''})$.
\emph{\textbf{Step 4 (A priori Information Computation):}} Compute the \emph{a priori} LLR of $u^{\text{U}}_{t,k}$ to be used in the next iteration as $L_{\text{A}_1}(u^{\text{U}}_{t,k})= L_{\text{E}}(u^{\text{U}}_{t,k})-\hat{L}_{\text{E}}(u^{\text{U}}_{t,k})$. The extrinsic LLR of $u^{\text{U}}_{t,k}$ is used as the \emph{a priori} LLR of its interleaved bit $u^{\text{L}}_{t,\tilde{k}}$ at the lower decoder, i.e., $L_{\text{A}_2}(u^{\text{L}}_{t,\tilde{k}}) = L_{\text{E}}(u^{\text{U}}_{t,k})$. For any $\tilde{k}',\tilde{k}'' \in \mathcal{Q}^{\text{L}}_{\tilde{k}}$ with $\tilde{k}'\neq \tilde{k}''$, $L_{\text{A}_2}(u^{\text{L}}_{t,\tilde{k}'}) =L_{\text{A}_2}(u^{\text{L}}_{t,\tilde{k}''})$.
After Step 4, the intra-block decoding proceeds to the lower BCJR component decoding, for which the LLR updates can be easily obtained from the above steps by interchanging the superscripts U and L. Compared to SC-PCCs, the increase in complexity mainly comes from Step 3, where additional computational resources are required to perform the combination of the extrinsic information associated with the repeated bits. Moreover, additional memory is needed to store the bit positions of repeated bits (these do not change with $t$). However, when $q=2$, the complexity of GSC-PCCs is comparable to that of PIC-TCs since PIC-TCs also have a fraction of information bits repeated twice.
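The extrinsic/a-priori combination in Steps 3 and 4 can be sketched in a few lines. BCJR decoding is treated here as a black box that has already produced input and output LLRs; the function and variable names below are illustrative, not the paper's notation:

```python
def combine_repeated(L_out, L_in, Q):
    """Return the shared extrinsic LLR of one repetition class Q and the
    per-position a-priori LLRs to feed back into the next intra-block iteration.

    L_out, L_in: dicts mapping bit position -> BCJR output / input LLR.
    Q: set of positions holding a repeated bit and its replicas.
    """
    # Step 3: per-position extrinsic increments, summed over the whole class
    L_hat = {k: L_out[k] - L_in[k] for k in Q}
    L_E = sum(L_hat.values())                 # identical for every replica in Q
    # Step 4: each position's a-priori LLR excludes its own contribution
    L_A1 = {k: L_E - L_hat[k] for k in Q}
    return L_E, L_A1

# Toy numbers: a bit repeated q = 2 times at positions 3 and 7 of the upper decoder
L_E, L_A1 = combine_repeated({3: 1.5, 7: 0.75}, {3: 0.5, 7: 0.25}, {3, 7})
print(L_E)   # 1.5  (= increments 1.0 + 0.5)
print(L_A1)  # {3: 0.5, 7: 1.0} -- each replica sees only the other's increment
```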
\section{Density Evolution Analysis on the BEC}\label{sec:DE}
In this section, we first look into the graph representation of GSC-PCCs and then derive the exact density evolution equations to characterize their decoding threshold. In this work, we consider a rate $R_0 = 1/3$ mother PCC built from two rate-$1/2$ recursive systematic convolutional codes.
\subsection{Graph Representation}
Turbo-like code ensembles can be represented by a compact graph \cite{8002601}, which simplifies the factor graph representation. The main idea is that each information or parity sequence in a factor graph is represented by a single variable node, while a trellis constraint is represented by a factor node. An interleaver is represented by a line segment that crosses an edge.
\begin{figure}[t!]
\centering
\includegraphics[width=2.4in,clip,keepaspectratio]{graph_v1.pdf}
\caption{Compact graph representation of (a) uncoupled ensembles, and (b) GSC-PCC ensembles at time $t$.}
\label{fig:uc_graph}
\end{figure}
We first look at the compact graph of an uncoupled PCC with partial repetition, which is depicted in Fig. \ref{fig:uc_graph}(a). Compared to the compact graph of a conventional PCC (see \cite[Fig. 4a]{8002601}), the difference is that in our case the information node $\boldsymbol{u}$ is represented by two nodes, $\boldsymbol{u}_{\text{r}}$ and $\boldsymbol{u}_{\text{o}}$.\footnote{With some abuse of language, we sometimes refer to a variable node
representing a sequence as the sequence itself.} Since the information sequence $\boldsymbol{u}_{\text{r}}$ is repeated $q$ times before being encoded by the PCC encoder, node $\boldsymbol{u}_{\text{r}}$ connects the upper and lower factor nodes $f^\text{U}$ and $f^\text{L}$ via $q$ edges, respectively.
The compact graph representation of GSC-PCCs with coupling memory $m$ and at time $t$ is depicted in Fig. \ref{fig:uc_graph}(b). It is similar to the compact graph of SC-PCCs (see \cite[Fig. 5a]{8002601}), except that information node $\boldsymbol{u}_t$ is represented by nodes $\boldsymbol{u}_{t,\text{r}}$ and $\boldsymbol{u}_{t,\text{o}}$, where node $\boldsymbol{u}_{t,\text{r}}$ connects the upper and lower factor nodes via $q$ edges, respectively. Analogous to many turbo-like ensembles, as the lengths of each component codeword and random interleavers go to infinity, the assumptions of decoder symmetry, all-one codeword, concentration, and an asymptotically tree-like computation graph for a fixed number of iterations can be adopted for the graphs of GSC-PCC ensembles \cite[Ch. 6]{Richardson:2008:MCT:1795974}. This allows us to rigorously apply DE to analyze the decoding threshold of GSC-PCC ensembles.
\subsection{Density Evolution}
Since GSC-PCCs are newly proposed, it is natural to study their behavior under a fundamental channel model, i.e., the BEC model, first. For this model, the exact decoding threshold for turbo-like codes can be rigorously analyzed \cite{8002601}. In addition, the results in \cite{6589171,8631116,PIC2020} suggest that several good classes of spatially-coupled codes for the BEC also perform well over other channels. Hence, we focus on the BEC in this work in order to fully understand the behavior of the proposed codes.
Let $\epsilon$ denote the channel erasure probability of the BEC. For a rate-$1/2$ convolutional code, we let $f^\text{U}_\text{s}(x,y)$ and $f^\text{U}_\text{p}(x,y)$ denote the transfer functions of the upper decoder for information and parity bits, respectively, where $x$ and $y$ are the input erasure probabilities for information and parity bits, respectively. Similarly, let $f^\text{L}_\text{s}(x,y)$ and $f^\text{L}_\text{p}(x,y)$ denote the transfer functions of the lower decoder for information and parity bits, respectively. The exact input/output transfer functions of a convolutional code under BCJR decoding \cite{1055186} on the BEC can be explicitly derived by using the methods in \cite{370145,1258535}.
\subsubsection{Uncoupled Ensembles}\label{sec:un_de}
As shown in Fig. \ref{fig:uc_graph}(a), $p^{(\ell)}_{\text{U}}$ and $q^{(\ell)}_{\text{U}}$ represent the output erasure probability of factor node $f^{\text{U}}$ for information and parity bits, respectively, after $\ell$ decoding iterations. Similarly, $p^{(\ell)}_{\text{L}}$ and $q^{(\ell)}_{\text{L}}$ denote the output erasure probability of $f^{\text{L}}$ for information and parity bits, respectively.
The DE update equation for the output erasure probability of the information bits at node $f^\text{U}$ is
\begin{align}\label{eq:un_de1}
p^{(\ell)}_{\text{U}}
=f^\text{U}_{\text{s}}\left(\epsilon q\lambda \left(p^{(\ell-1)}_{\text{U}}\right)^{q-1} \left(p^{(\ell)}_\text{L}\right)^q + \epsilon\left(1-q\lambda\right)p^{(\ell)}_{\text{L}},\epsilon\right),
\end{align}
where $(1-q\lambda)$ and $q\lambda$ are the weights of the erasure probability of $\boldsymbol{u}_{\text{o}}$ and $\boldsymbol{u}_{\text{r}}$, respectively, determined by the ratios of their lengths over $K'$ (the input length of the upper and lower convolutional encoder), $\epsilon q\lambda(p^{(\ell-1)}_{\text{U}})^{q-1} (p^{(\ell)}_\text{L})^q$ is the weighted extrinsic erasure probability from node $\boldsymbol{u}_{\text{r}}$ to node $f^\text{U}$ while the powers on $p^{(\ell-1)}_{\text{U}}$ and $p^{(\ell)}_\text{L}$ are due to the repetition at the upper and lower encoders, $\epsilon(1-q\lambda)p^{(\ell)}_{\text{L}}$ is the weighted extrinsic erasure probability from node $\boldsymbol{u}_{\text{o}}$ to node $f^\text{U}$, and finally the average erasure probability from node $\boldsymbol{v}^\text{U}$ to node $f^\text{U}$ is $\epsilon$.
The DE update equation for the output erasure probability of the parity bits at node $f^\text{U}$, i.e., $q^{(\ell)}_{\text{U}}$, can be obtained by replacing the transfer function $f^\text{U}_{\text{s}}(.)$ by $f^\text{U}_{\text{p}}(.)$. To obtain the DE update equations for $p^{(\ell)}_{\text{L}}$ and $q^{(\ell)}_{\text{L}}$ at node $f^\text{L}$, we can simply interchange $p_{\text{U}}$ and $p_{\text{L}}$ and replace $f^\text{U}(.)$ by $f^\text{L}(.)$ in \eqref{eq:un_de1}.
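To make the fixed-point iteration concrete, the recursion can be sketched numerically. Since the exact BCJR transfer functions are lengthy to derive, the sketch below substitutes a *toy* monotone surrogate with $f(0,y)=0$ and uses a serial update schedule; the computed "threshold" therefore applies only to this surrogate system and merely illustrates the mechanics (and, qualitatively, the benefit of partial repetition):

```python
def f_s_toy(x, y):
    """Toy surrogate transfer function (NOT a real BCJR transfer function)."""
    return 1.0 - (1.0 - x * y) ** 3

def de_uncoupled(eps, q, lam, iters=2000, tol=1e-12):
    """Iterate the information-bit DE update; return the residual erasure prob."""
    pU = pL = 1.0
    for _ in range(iters):
        pU_new = f_s_toy(eps * q * lam * pU ** (q - 1) * pL ** q
                         + eps * (1 - q * lam) * pL, eps)
        pL_new = f_s_toy(eps * q * lam * pL ** (q - 1) * pU_new ** q
                         + eps * (1 - q * lam) * pU_new, eps)
        if abs(pU_new - pU) < tol and abs(pL_new - pL) < tol:
            return max(pU_new, pL_new)
        pU, pL = pU_new, pL_new
    return max(pU, pL)

def threshold(q, lam):
    """Bisect for the largest eps at which the toy DE converges to zero."""
    lo, hi = 0.0, 1.0
    for _ in range(30):
        mid = (lo + hi) / 2
        if de_uncoupled(mid, q, lam) < 1e-6:
            lo = mid
        else:
            hi = mid
    return lo

# Partial repetition (q = 2, lam = 1/4) raises the toy threshold over lam = 0
# (roughly 0.58 -> roughly 0.8 for this surrogate):
print(threshold(2, 0.0), threshold(2, 0.25))
```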
\subsubsection{Coupled Ensembles}
Based on the compact graph in Fig. \ref{fig:uc_graph}(b), we denote by $p^{(\ell)}_{\text{U},t}$ and $q^{(\ell)}_{\text{U},t}$ the output erasure probability of $f^{\text{U}}$ for information and parity bits, respectively, at time $t$ and after $\ell$ decoding iterations. Similarly, $p^{(\ell)}_{\text{L},t}$ and $q^{(\ell)}_{\text{L},t}$ denote the output erasure probability of $f^{\text{L}}$ for information and parity bits, respectively. We also define the average erasure probability from $f^{\text{U}}$ and $f^{\text{L}}$ to $\boldsymbol{u}_t$ as $\bar{p}^{(\ell-1)}_{\text{U},t}$ and $\bar{p}^{(\ell-1)}_{\text{L},t}$, respectively, where
\begin{align}
\bar{p}^{(\ell-1)}_{\text{U},t} = \frac{1}{m+1}\sum_{j=0}^m p^{(\ell-1)}_{\text{U},t+j},\label{eq:sc_de_1}\\
\bar{p}^{(\ell-1)}_{\text{L},t} = \frac{1}{m+1}\sum_{j=0}^m p^{(\ell-1)}_{\text{L},t+j}.\label{eq:sc_de_2}
\end{align}
By using \eqref{eq:sc_de_1} and \eqref{eq:sc_de_2}, as well as taking into account the partial repetition of information bits, we obtain the DE update for the erasure probability of the information bits at $f^{\text{U}}$ as
\begin{align}
p^{(\ell)}_{\text{U},t}
=&f^\text{U}_\text{s}\Bigg(\frac{\epsilon}{m+1}\sum_{k=0}^m \Big(q\lambda \left(\bar{p}^{(\ell-1)}_{\text{L},t-k}\right)^q \left(\bar{p}^{(\ell-1)}_{\text{U},t-k}\right)^{q-1}
+\left(1-q\lambda\right)\bar{p}^{(\ell-1)}_{\text{L},t-k}\Big),\epsilon \Bigg) \nonumber \\
= &f^\text{U}_\text{s}\Bigg(\frac{\epsilon}{m+1}\sum_{k=0}^m \Bigg(q\lambda \left(\frac{1}{m+1}\sum_{j=0}^m p^{(\ell-1)}_{\text{L},t+j-k}\right)^q
\cdot\left(\frac{1}{m+1}\sum_{j=0}^m p^{(\ell-1)}_{\text{U},t+j-k}\right)^{q-1} \nonumber \\
&+\frac{1-q\lambda}{m+1}\sum_{j=0}^m p^{(\ell-1)}_{\text{L},t+j-k}\Bigg),\epsilon \Bigg). \label{eq:sc_de_3}
\end{align}
To avoid repetition, we omit the DE equations for the erasure probability of the parity bits at $f^{\text{U}}$ as well as the DE equations at node $f^\text{L}$ as they can be trivially obtained from \eqref{eq:sc_de_3}.
\subsection{Random Puncturing}
To increase the code rate, we consider random puncturing of parity bits.
Let $\rho\in [0,1]$ denote the fraction of surviving parity bits after puncturing. For such a randomly punctured code sequence transmitted over the BEC with erasure probability $\epsilon$, the erasure probability of the parity sequence becomes $\epsilon_{\rho} = 1-(1-\epsilon)\rho$ \cite[Eq. 4]{7353121}. As a result, the DE equations for the punctured uncoupled and coupled ensembles can be obtained by substituting $\epsilon_{\rho} \rightarrow \epsilon$ for the average erasure probability from node $\boldsymbol{v}^\text{U}$ to node $f^\text{U}$ in \eqref{eq:un_de1} and that from node $\boldsymbol{v}_t^\text{U}$ to node $f^\text{U}$ in \eqref{eq:sc_de_3}, respectively.
After puncturing, the code rates of both uncoupled and coupled ensembles (considering $L\rightarrow \infty$) become
\begin{align}\label{eq:rate_punc}
R =\frac{1-(q-1)\lambda}{\left(\frac{1}{R_0}-1\right)\rho+1-(q-1)\lambda}.
\end{align}
Given $(R_0,R,q,\lambda)$, the surviving fraction $\rho$ is thus uniquely determined.
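The inversion of \eqref{eq:rate_punc} is simple enough to sketch in code. The following Python snippet is our own illustration (the function names are of our choosing, not from any codebase); it solves the rate expression for $\rho$ and also evaluates the punctured-parity erasure probability $\epsilon_\rho$:

```python
def surviving_fraction(R0, R, q, lam):
    """Invert the punctured-rate formula R = A / ((1/R0 - 1)*rho + A),
    with A = 1 - (q - 1)*lam, for the fraction rho of surviving parity bits."""
    A = 1.0 - (q - 1) * lam
    return A * (1.0 / R - 1.0) / (1.0 / R0 - 1.0)

def punctured_rate(R0, q, lam, rho):
    """Forward direction of the same formula: code rate after puncturing."""
    A = 1.0 - (q - 1) * lam
    return A / ((1.0 / R0 - 1.0) * rho + A)

def parity_erasure(eps, rho):
    """Effective erasure probability of the punctured parity stream."""
    return 1.0 - (1.0 - eps) * rho

# Example: mother rate R0 = 1/3, target rate R = 1/2, q = 2, lam = 1/q
rho = surviving_fraction(1/3, 1/2, 2, 0.5)   # -> 0.25
```

For $\lambda=1/q$ this reproduces the closed form $\rho = \frac{R_0(1-R)}{qR(1-R_0)}$ used later in Section \ref{sec:cap_achieve}.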
\subsection{Decoding Thresholds}
We compute the decoding thresholds over the BEC by using the DE equations derived in the previous section. We consider 4-state, rate-$1/2$ convolutional encoders with generator polynomial $(1,5/7)$ in octal notation for both the upper and lower encoders. Given a target code rate $R \in \left[\frac{1}{q\left(\frac{1}{R_0}-1\right)+1},1\right)$ and coupling memory $m$, we optimize the repetition ratio $\lambda$ in order to maximize the iterative decoding threshold for various $q$. The optimized repetition ratios and the corresponding thresholds for the uncoupled ensembles (denoted by $\lambda$ and $\epsilon_{\text{BP}}$, respectively)\footnote{The decoding of turbo-like codes comprises BCJR decoding of the convolutional component codes, while the message exchange between the BCJR component decoders follows the extrinsic message passing rule. Hence, we refer to the threshold under iterative message passing decoding with BCJR component decoding as the BP threshold.} and for the coupled ensembles with coupling memory $m$ (denoted by $\lambda^{(m)}$ and $\epsilon^{(m)}_{\text{BP}}$, respectively) are reported in Table \ref{table0} and Table \ref{table1}, respectively. Note that the optimal $\lambda$ is sometimes reported as a range of values because all values in that range lead to the same decoding threshold up to the fourth decimal place, which we consider to be sufficient accuracy. In Table \ref{table1}, we also report the MAP threshold of the uncoupled ensembles ($\epsilon_{\text{MAP}}$), the minimum coupling memory ($m_{\min}$) for which threshold saturation is observed numerically, and the gap between the decoding threshold $\epsilon^{(m=m_{\min})}_{\text{BP}}$ and the corresponding BEC capacity (denoted by $\Delta_{\text{SH}}=1-R-\epsilon^{(m=m_{\min})}_{\text{BP}}$). 
Since turbo-like code ensembles, including uncoupled PCCs with partial repetition, can be described by using factor graphs, the MAP threshold can be computed by using the area theorem\footnote{Although the MAP threshold given by the area theorem is an upper bound, we opt to drop the term ``upper bound'' for simplicity, as the numerical results show that the thresholds of the coupled ensembles converge to this upper bound.} \cite{Measson2006thesis}
\begin{align}\label{eq:MAP_find}
R = \int_{\epsilon_{\text{MAP}}}^1 h^{\text{BP}}(\epsilon) d \epsilon \overset{(a)}{=} \int_{\epsilon_{\text{MAP}}}^1 \left(R \bar{p}(\epsilon)+(1-R)\bar{q}(\epsilon)\right) d \epsilon,
\end{align}
where $R$ is the target code rate, $h^{\text{BP}}(\epsilon)$ is the BP extrinsic information transfer (EXIT) function, $\bar{p}(\epsilon)$ and $\bar{q}(\epsilon)$ are the average extrinsic erasure probabilities for information bits and parity bits, respectively, and $(a)$ follows from \cite{1523540}. To be specific,
\begin{align}
\bar{p}(\epsilon) =&q\lambda \left(p^{(\infty)}_{\text{U}}\right)^{q} \left(p^{(\infty)}_\text{L}\right)^q + \left(1-q\lambda\right)p^{(\infty)}_{\text{U}}p^{(\infty)}_{\text{L}}, \\
\bar{q}(\epsilon) =& \frac{1}{2}f^\text{U}_{\text{p}}\left(\epsilon q\lambda \left(p^{(\infty)}_{\text{U}}\right)^{q-1} \left(p^{(\infty)}_\text{L}\right)^q + \epsilon\left(1-q\lambda\right)p^{(\infty)}_{\text{L}},1-(1-\epsilon)\rho\right) \nonumber \\
&+\frac{1}{2}f^\text{L}_{\text{p}}\left(\epsilon q\lambda \left(p^{(\infty)}_{\text{L}}\right)^{q-1} \left(p^{(\infty)}_\text{U}\right)^q + \epsilon\left(1-q\lambda\right)p^{(\infty)}_{\text{U}},1-(1-\epsilon)\rho\right).
\end{align}
\begin{table}[t!]
\centering
\caption{Optimal Repetition Ratio of GSC-PCCs}\label{table0}
\begin{tabular}{c c c c c c }
\hline
Rate & $q$ & $\lambda$ & $\lambda^{(m=1)}$ & $\lambda^{(m=3)}$ & $\lambda^{(m=5)}$ \\
\hline
$3/4$ & 2 &$[0.287,0.313]$ & 0.5 & 0.5 & 0.5 \\
$3/4$ & 4 &0.172 & $[0.201,0.206]$ & 0.24 & 0.25 \\
$3/4$ & 6 &0.13 & 0.137 & $[0.152,0.154]$ & $[0.162,0.163]$ \\
\hline
$1/2$ & 2 & $[0.184,0.213]$ & 0.44 & 0.5 & 0.5 \\
$1/2$ & 4 & 0.147 & $[0.187,0.188]$ & 0.23 & 0.25 \\
$1/2$ & 6 & 0.12 & 0.131 & $[0.150,0.151]$ & $[0.156,0.160]$ \\
\hline
$1/3$ & 2 &$[0.088,0.124]$ & $[0.37,0.39]$ & 0.5 & 0.5 \\
$1/3$ & 4 & $[0.107,0.108]$ & $[0.162,0.172]$ & $[0.216,0.229]$ & 0.25 \\
$1/3$ & 6 & $[0.104,0.105]$ & $[0.121,0.122]$ & $[0.138,0.146]$ & $[0.151,0.158]$ \\
\hline
$1/4$ & 2 &$[0.036,0.072]$ & $[0.319,0.353]$ & 0.5 & 0.5 \\
$1/4$ & 4 & $[0.083,0.086]$ & $[0.152,0.162]$ & $[0.216,0.229]$ & 0.24 \\
$1/4$ & 6 & $0.112$ & $[0.112,0.116]$ & $[0.134,0.143]$ & $[0.143,0.158]$ \\
\hline
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption{Iterative Decoding thresholds of GSC-PCCs, SC-PCCs and PIC-TCs}\label{table1}
\begin{tabular}{c c c c c c c c c c}
\hline
Rate & Ensemble & $q$ & $\epsilon_{\text{BP}}$ & $\epsilon^{(m=1)}_{\text{BP}}$ & $\epsilon^{(m=3)}_{\text{BP}}$ & $\epsilon^{(m=5)}_{\text{BP}}$& $\epsilon_{\text{MAP}}$ &$m_{\min}$ & $\Delta_{\text{SH}}$ \\
\hline
&PIC-TC & 2 &- & 0.2307 & 0.2337 & 0.2344& 0.2351 & 1000 & 0.0149\\
\cdashline{2-10}
&SC-PCC & 1 &0.1854 & 0.1876 & 0.1876 & 0.1876 & 0.1876 & 1& 0.0624\\
\cdashline{2-10}
$3/4$ & & 2 &0.2115 & 0.2326 & 0.2352 & 0.2352& 0.2352 & 3 & 0.0148\\
&GSC-PCC & 4 &0.2268 & 0.2380 & 0.2430 & 0.2443 &0.2444 & 6 & 0.0056\\
& & 6 &0.2218 & 0.2406 & 0.2442 & 0.2457 & 0.2466 & 9 & 0.0034\\
\hline
&PIC-TC & 2 &- & 0.4865 & 0.4906 & 0.4920& 0.4934 &1000 & 0.0066\\
\cdashline{2-10}
&SC-PCC & 1 &0.4606 & 0.4689 & 0.4689 & 0.4689 & 0.4689 & 1 & 0.0311\\
\cdashline{2-10}
$1/2$ & & 2 & 0.4698 & 0.4907 & 0.4938 & 0.4938 & 0.4938 & 3 & 0.0062\\
&GSC-PCC & 4 & 0.4849 & 0.4940 & 0.4969 & 0.4978 & 0.4979 & 6 & 0.0021\\
& & 6 & 0.4747 & 0.4952 & 0.4974 & 0.4982 & 0.4988 & 9 & 0.0012 \\
\hline
&PIC-TC & 2 &- & 0.6576 & 0.6615 & 0.6625& 0.6640 & 1000 & 0.0027\\
\cdashline{2-10}
&SC-PCC & 1 &0.6428 & 0.6553 & 0.6553 & 0.6553 & 0.6553 & 1 & 0.0113\\
\cdashline{2-10}
$1/3$ & & 2 & 0.6446 & 0.6627 & 0.6647 & 0.6647 & 0.6647 & 3 & 0.0020\\
&GSC-PCC & 4 & 0.6583 & 0.6642 & 0.6656 & 0.6660 & 0.6661 & 6 & 0.0006\\
& & 6 & 0.6512 & 0.6648 & 0.6658 & 0.6661& 0.6663 & 8 & 0.0004\\
\hline
&PIC-TC & 2 &- & 0.7425 & 0.7459 & 0.7466& 0.7483 & 1000 & 0.0017\\
\cdashline{2-10}
\multirow{2}{*}{$1/4$} & & 2 & 0.7313 & 0.7478 & 0.7491 & 0.7491 & 0.7491 & 3 & 0.0009\\
& GSC-PCC& 4 & 0.7413 & 0.7487 & 0.7495 & 0.7497 & 0.7497 & 5 & 0.0003\\
& & 6 & 0.7406 & 0.7490 & 0.7496 & 0.7497 & 0.7498 & 6 & 0.0002\\
\hline
\end{tabular}
\end{table}
For comparison purposes, we list in Table \ref{table1} the decoding thresholds of SC-PCCs \cite{8002601} and PIC-TCs \cite{PIC2020}, which use the same convolutional encoder as the GSC-PCCs. Except for the rate-$1/3$ SC-PCC, which is already at its lowest rate, all codes require puncturing of the parity bits. Note that SC-PCCs can be seen as a special case of the proposed GSC-PCCs with $q=1$ or $\lambda = 0$, while in PIC-TCs only a fraction of the information bits is repeated twice, i.e., $q=2$. Since PIC-TCs do not exhibit threshold saturation \cite{PIC2020}, their MAP threshold and the BP threshold of the underlying uncoupled ensemble are unknown. Hence, we show their iterative decoding threshold for $m=1000$ in the column $\epsilon_{\text{MAP}}$.
First, it can be observed that the thresholds of GSC-PCCs surpass those of PIC-TCs and SC-PCCs for the same coupling memory and code rate, even for $q=2$ and with puncturing. Although all codes exhibit a larger $\Delta_{\text{SH}}$ as the fraction of punctured bits increases, the proposed GSC-PCCs can close this gap by increasing $q$. In particular, the BP thresholds of GSC-PCCs improve with increasing $q$ for all considered code rates and coupling memories. In contrast, uncoupled PCCs with partial repetition perform worse than the coupled ensembles, and their BP thresholds do not always improve with $q$. Intuitively, the BP threshold improves if the extrinsic information of each convolutional (BCJR) decoder becomes more reliable. However, increasing the repetition factor $q$ does not necessarily lead to more reliable extrinsic information, as a large amount of repetition (without coupling) can introduce bias. Meanwhile, puncturing is required to compensate for the rate loss introduced by repetition, which in turn reduces the BP threshold. For a large enough coupling memory, the threshold saturation effect can be observed for GSC-PCCs. It is also worth noting that the optimal repetition ratio $\lambda$ approaches $1/q$ when $m$ is large in most cases, e.g., $m\geq m_{\min}$, suggesting that choosing $\lambda=1/q$ is sufficient for the proposed codes to universally achieve their MAP thresholds.
We also compare the decoding thresholds of GSC-PCCs and SC-LDPC codes \cite{7152893}. Since both SC-LDPC codes and GSC-PCCs are capacity-achieving (as we will see in Section \ref{sec:cap_achieve}), we are interested in their performance when the rate loss due to termination is taken into account, i.e., under finite coupling length $L$. As an example, we consider two GSC-PCC ensembles with $(1,5/7)$ and $(1,15/13)$ convolutional component codes and $q\in\{2,3\}$, $\lambda \in \{0.5,0.3\}$, and $m \in \{2,3\}$, respectively. Moreover, we consider a $(3,6)$ SC-LDPC ensemble and a $(4,8)$ SC-LDPC ensemble with coupling widths 2 and 3, respectively, as two benchmark codes. We denote by $R_{\text{term}}$ the design rate of a terminated spatially coupled ensemble. The gaps to the BEC capacity ($1-R_{\text{term}}-\epsilon^{(m)}_{\text{BP}}$) versus $L$ for the aforementioned four codes are shown in Fig. \ref{fig:gap_compare}.
\begin{figure}[t!]
\centering
\includegraphics[width=3.1in,clip,keepaspectratio]{Thrs_compare.pdf}
\caption{Gap to the BEC capacity for GSC-PCCs and SC-LDPC codes with target rate $1/2$.}
\label{fig:gap_compare}
\end{figure}
Observe that the proposed GSC-PCC ensembles have a smaller gap to capacity than the SC-LDPC ensembles with the same coupling memory (width). This is because GSC-PCC ensembles have less rate loss and a larger threshold than SC-LDPC ensembles. For example, the GSC-PCC ensemble with $(q,L,m)=(2,50,2)$ has a rate of 0.4950 and a threshold of 0.4936, while the $(3,6,50,2)$ SC-LDPC ensemble has a rate of 0.48 and a threshold of 0.4881. Hence, the proposed GSC-PCCs have rate and threshold advantages over SC-LDPC codes.
\section{Threshold Saturation and Achieving Capacity}\label{sec:TS}
In this section, we first analytically prove that threshold saturation occurs for GSC-PCCs. We then utilize this property to further prove that the proposed codes achieve capacity. Finally, some useful properties in relation to the threshold behavior of GSC-PCCs are presented.
\subsection{Threshold Saturation}
We consider identical upper and lower encoders for simplicity. Thus, for uncoupled PCCs with partial repetition, we can define $f_\text{s}\triangleq f^\text{U}_\text{s} = f^\text{L}_\text{s}$ and $x^{(\ell)}\triangleq p^{(\ell)}_{\text{L}} = p^{(\ell)}_\text{U}$. The DE equation in \eqref{eq:un_de1} can be written as a fixed point recursive equation
\begin{subequations}\label{eq:pot1}
\begin{align}
x^{(\ell)}= &f_\text{s}\bigg(q\epsilon\lambda \left(x^{(\ell-1)}\right)^{2q-1} +\epsilon(1-q\lambda)x^{(\ell-1)},1-(1-\epsilon)\rho \bigg)\label{eq:pot1a} \\
=&f\left( g\left(x^{(\ell-1)}\right);\epsilon \right),\label{eq:pot1b}
\end{align}
\end{subequations}
where \eqref{eq:pot1b} follows from the definitions
\begin{align}
f(x ; \epsilon) &\triangleq f_\text{s}(\epsilon x, 1-(1-\epsilon)\rho), \label{eq:f}\\
g(x)&\triangleq q\lambda x^{2q-1}+(1-q\lambda)x. \label{eq:g}
\end{align}
First, we note that the following properties hold due to \cite[Lemma 1]{8002601} and \cite[Lemma 2]{8002601}:
1) $f(x ; \epsilon)$ is increasing in both arguments $x,\epsilon \in(0,1]$;
2) $f(0;\epsilon)=f(\epsilon;0)=g(0)=0$;
3) $f(x ; \epsilon)$ has continuous second derivatives on $[0,1]$ with respect to all arguments.
Moreover, it is easy to see that $g'(x)>0,\forall x \in (0,1]$, and $g''(x)$ exists and is continuous $\forall x \in[0,1]$. Therefore, the DE recursion in \eqref{eq:pot1} forms a scalar admissible system \cite[Def. 1]{6325197}.
For the above scalar admissible system, the potential function \cite[Def. 2]{6325197} is
\begin{subequations}\label{eq:pot2}
\begin{align}
U(x;\epsilon) =& xg(x)-G(x)-F(g(x);\epsilon) \label{eq:pot2a} \\
=& \left(q-\frac{1}{2}\right)\lambda x^{2q}+\frac{1}{2}(1-q\lambda)x^2
- \int_0^{q\lambda x^{2q-1}+(1-q\lambda)x} f_\text{s}(\epsilon z,1-(1-\epsilon)\rho)dz,\label{eq:pot2b}
\end{align}
\end{subequations}
where \eqref{eq:pot2b} follows from
\begin{align}
F(x;\epsilon) &= \int_0^x f(z;\epsilon) dz = \int_0^x f_\text{s}(\epsilon z, 1-(1-\epsilon)\rho) dz, \\
G(x) &= \int_0^x g(z) dz = \frac{1}{2}\lambda x^{2q}+\frac{1}{2}(1-q\lambda)x^2.
\end{align}
The following definitions are useful in the subsequent analysis.
\begin{definition}\label{def:sst}
The single system threshold of an admissible system is defined as \cite{6325197,6887298}
\begin{align}
\epsilon_\text{s} = \sup\left\{\epsilon \in [0,1]:U'(x;\epsilon)>0,\forall x \in (0,1] \right\}.
\end{align}
\end{definition}
In our case, $\epsilon_\text{s}$ is the BP threshold of the uncoupled ensemble. For $\epsilon <\epsilon_\text{s}$, the recursion in \eqref{eq:pot1} converges to the fixed point $x=0$, whereas it converges to a non-zero fixed point otherwise.
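As an illustration of Definition \ref{def:sst}, the following Python sketch estimates $\epsilon_\text{s}$ for the recursion \eqref{eq:pot1} by bisection. Since the exact BCJR transfer function is lengthy, we use a toy stand-in $f_{\text{toy}}(a,b)=ab(2-ab)$, which satisfies the admissibility properties above but is not the transfer function of any code considered here; the parameters $q=2$, $\lambda=1/4$, $\rho=1$ are likewise arbitrary illustrative choices.

```python
q, lam, rho = 2, 0.25, 1.0          # partial repetition, no puncturing (toy setup)

def g(x):
    # g(x) = q*lam*x^(2q-1) + (1 - q*lam)*x, as in the definition of g
    return q * lam * x ** (2 * q - 1) + (1 - q * lam) * x

def f_toy(a, b):
    # TOY transfer function: increasing in both arguments, f_toy(0, b) = 0
    return a * b * (2.0 - a * b)

def de_limit(eps, iters=3000):
    # run the scalar DE recursion x <- f(g(x); eps) starting from x = 1
    x, b = 1.0, 1.0 - (1.0 - eps) * rho
    for _ in range(iters):
        x = f_toy(eps * g(x), b)
    return x

def bp_threshold(tol=1e-4):
    lo, hi = 0.0, 1.0   # below lo the recursion dies; above hi it survives
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if de_limit(mid) < 1e-8:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this toy system the bisection settles at a nontrivial threshold strictly inside $(0,1)$; with the actual BCJR transfer functions, the same procedure produces the $\epsilon_{\text{BP}}$ values in Table \ref{table1}.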
\begin{definition}\label{def:2}
The potential threshold of an admissible system is defined as \cite{6325197,6887298}
\begin{align}
\epsilon_\text{c} = \sup\left\{\epsilon \in [0,1]:\min_{x\in[u(\epsilon),1]}U(x;\epsilon)\geq 0,u(\epsilon)>0 \right\},
\end{align}
where
\begin{align}
u(\epsilon) = \sup\left\{\tilde{x}\in[0,1]: f( g(x);\epsilon )<x,x\in (0,\tilde{x})\right\},
\end{align}
is the minimum unstable fixed point for $\epsilon>\epsilon_\text{s}$.
\end{definition}
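Definition \ref{def:2} can be evaluated numerically in the same spirit. The sketch below computes the potential function \eqref{eq:pot2} for a toy transfer function $f_{\text{toy}}(a,b)=ab(2-ab)$ (an illustrative stand-in, not a real BCJR transfer function, with arbitrary choices $q=2$, $\lambda=1/4$, $\rho=1$), for which $F(x;\epsilon)$ has the closed form $cx^2-c^2x^3/3$ with $c=\epsilon\,(1-(1-\epsilon)\rho)$, and locates $\epsilon_\text{c}$ by bisection.

```python
q, lam, rho = 2, 0.25, 1.0

def g(x):
    return q * lam * x ** (2 * q - 1) + (1 - q * lam) * x

def f_toy(a, b):
    # TOY transfer function, for illustration only
    return a * b * (2.0 - a * b)

def U(x, eps):
    # potential U(x; eps) = x*g(x) - G(x) - F(g(x); eps)
    b = 1.0 - (1.0 - eps) * rho
    c = eps * b
    gx = g(x)
    G = 0.5 * lam * x ** (2 * q) + 0.5 * (1 - q * lam) * x ** 2
    F = c * gx ** 2 - (c ** 2) * gx ** 3 / 3.0   # exact integral of f_toy
    return x * gx - G - F

def u_min(eps, n=4000):
    # minimum unstable fixed point: first grid x with f(g(x); eps) >= x
    b = 1.0 - (1.0 - eps) * rho
    for i in range(1, n + 1):
        x = i / n
        if f_toy(eps * g(x), b) >= x:
            return x
    return None                      # no non-zero fixed point

def potential_threshold(tol=1e-4):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        u = u_min(mid)
        if u is None:
            ok = True                # recursion has no non-zero fixed point
        else:
            xs = [u + (1.0 - u) * i / 500 for i in range(501)]
            ok = min(U(x, mid) for x in xs) >= 0.0
        if ok:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this toy system, the resulting $\epsilon_\text{c}$ lies above the BP threshold of the same uncoupled recursion, as Definition \ref{def:2} suggests.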
\begin{example}\label{example1}
The potential functions of the rate-$1/2$ uncoupled ensemble built from two $(1,5/7)$ convolutional codes for various $q$ are shown in Fig. \ref{fig:potential_fun_2}. In this example, we set $\lambda = 1/q$. The channel erasure probability $\epsilon$ is set to the values of the potential thresholds, which are shown in the legend of Fig. \ref{fig:potential_fun_2}. It can be seen that the potential thresholds match the MAP thresholds in Table \ref{table1}. \demo
\end{example}
\begin{figure}[t!]
\centering
\includegraphics[width=3.1in,clip,keepaspectratio]{potential_function_with_q_3.pdf}
\caption{Potential functions of the uncoupled PCC ensembles with $\lambda = 1/q$ for rate-$1/2$.}
\label{fig:potential_fun_2}
\end{figure}
For the coupled system, letting $x^{(\ell)}_t\triangleq \bar{p}^{(\ell)}_{\text{L},t} = \bar{p}^{(\ell)}_{\text{U},t}$, we can rewrite the DE equation \eqref{eq:sc_de_3} as
\begin{subequations}\label{eq:pot3}
\begin{align}
x_t^{(\ell)} =& \frac{1}{1+m}\sum_{j=0}^mf_\text{s}\Bigg( \frac{\epsilon}{1+m}\sum_{k=0}^m \bigg(q\lambda \left(x_{t+j-k}^{(\ell-1)}\right)^{2q-1}
+(1-q\lambda)x_{t+j-k}^{(\ell-1)}\bigg) , 1-(1-\epsilon)\rho \Bigg) \\
=&\frac{1}{1+m}\sum_{j=0}^mf\left( \frac{1}{1+m}\sum_{k=0}^m g\left(x_{t+j-k}^{(\ell-1)}\right); \epsilon \right).
\end{align}
\end{subequations}
Then, we have the following theorem.
\begin{theorem}\label{the:ts}
For the spatially-coupled system defined in \eqref{eq:pot3} and any $\epsilon<\epsilon_\text{c}$, where $\epsilon_\text{c}$ is the potential threshold associated with the potential function in \eqref{eq:pot2}, the only fixed point of the recursion in \eqref{eq:pot3} is $\boldsymbol{x} = \boldsymbol{0}$ as $L \rightarrow \infty$, $m \rightarrow \infty$ and $L \gg m$.
\end{theorem}
\begin{IEEEproof}
The proof follows from \cite[Theorem 1]{6325197}.
\end{IEEEproof}
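Theorem \ref{the:ts} can also be checked numerically with modest parameters. The sketch below runs the coupled recursion \eqref{eq:pot3} on a terminated chain ($L=40$, $m=2$, values chosen only for speed) with a toy transfer function $f_{\text{toy}}(a,b)=ab(2-ab)$ (an illustrative stand-in, not a real BCJR transfer function) and compares the resulting threshold with the uncoupled one.

```python
q, lam, rho = 2, 0.25, 1.0
L, m = 40, 2                          # short chain, small coupling memory

def g(x):
    return q * lam * x ** (2 * q - 1) + (1 - q * lam) * x

def f_toy(a, b):
    # TOY transfer function, for illustration only
    return a * b * (2.0 - a * b)

def uncoupled_limit(eps, iters=1500):
    x, b = 1.0, 1.0 - (1.0 - eps) * rho
    for _ in range(iters):
        x = f_toy(eps * g(x), b)
    return x

def coupled_limit(eps, iters=1500):
    # x_t = avg_j f( avg_k g(x_{t+j-k}); eps ), with x = 0 outside the chain
    b = 1.0 - (1.0 - eps) * rho
    x = [1.0] * L
    for _ in range(iters):
        gx = [g(v) for v in x]
        def gpad(s):
            return gx[s] if 0 <= s < L else 0.0
        h = [sum(gpad(s - k) for k in range(m + 1)) / (m + 1)
             for s in range(L + m)]
        x = [sum(f_toy(eps * h[t + j], b) for j in range(m + 1)) / (m + 1)
             for t in range(L)]
    return max(x)

def threshold(limit, tol=1e-3):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if limit(mid) < 1e-6:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In this toy setting the coupled threshold is visibly larger than the uncoupled one, mirroring the gain of $\epsilon^{(m)}_{\text{BP}}$ over $\epsilon_{\text{BP}}$ in Table \ref{table1}.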
Therefore, threshold saturation occurs for the proposed GSC-PCC ensembles. As a result, the BP thresholds of GSC-PCCs, even for very large $q$, can easily be found by computing either the potential thresholds via Definition \ref{def:2} or the MAP thresholds via the area theorem \cite{1523540} as in \eqref{eq:MAP_find}. Consider GSC-PCCs with identical upper and lower 2-state, 4-state, and 8-state component convolutional encoders with generator polynomials $(1,1/3)$, $(1,5/7)$, and $(1,15/13)$, respectively. We report the potential thresholds of the uncoupled ensembles with various $q$ (denoted by $\epsilon^{(q)}_{\text{c}}$) for different rates in Table \ref{MAP1}. Here, we choose $\lambda = 1/q$, as we observe from Tables \ref{table0}-\ref{table1} that this choice allows GSC-PCCs to achieve their respective MAP thresholds as $m$ becomes large.
\begin{table}[t!]
\centering
\caption{Potential thresholds of Uncoupled PCCs with Partial Repetition}\label{MAP1}
\begin{tabular}{c c c c c c c c c c}
\hline
Rate & States & $\epsilon^{(q=1)}_{\text{c}}$ & $\epsilon^{(q=2)}_{\text{c}}$ & $\epsilon^{(q=3)}_{\text{c}}$ & $\epsilon^{(q=4)}_{\text{c}}$ & $\epsilon^{(q=5)}_{\text{c}}$ & $\epsilon^{(q=6)}_{\text{c}}$ & $\epsilon^{(q=50)}_{\text{c}}$ \\ \hline
& 2 &0.0285 & 0.0751 & 0.0846 & 0.0888& 0.0913& 0.0928 & 0.0992\\
$9/10$ & 4 & 0.0582 & 0.0882 & 0.0932 & 0.0952& 0.0963& 0.0970 & 0.0996\\
& 8 & 0.0769& 0.0940 & 0.0966 & 0.0977& 0.0982& 0.0986& 0.0998\\
\hline
& 2 &0.0661 & 0.1582 & 0.1747 & 0.1819& 0.1859& 0.1884 & 0.1987\\
$4/5$ & 4 & 0.1391 & 0.1848 & 0.1915 & 0.1941& 0.1955& 0.1964 & 0.1996\\
& 8 & 0.1698 & 0.1930 & 0.1962 & 0.1975& 0.1981& 0.1985 & 0.1998\\
\hline
& 2 & 0.0895 & 0.2027 & 0.2217 & 0.2298 & 0.2343& 0.2372 & 0.2486\\
$3/4$ & 4 & 0.1876 & 0.2352 & 0.2418 & 0.2444 & 0.2457& 0.2466 & 0.2496\\
& 8 & 0.2204 & 0.2435 & 0.2466 & 0.2477 & 0.2483& 0.2486 & 0.2498\\
\hline
& 2 & 0.1375 & 0.2811 & 0.3027 & 0.3116 & 0.3165& 0.3196 & 0.3318\\
$2/3$ & 4 & 0.2772 & 0.3209 & 0.3266 & 0.3288 & 0.3299& 0.3306 & 0.3330\\
& 8 & 0.3080 & 0.3282 & 0.3307 & 0.3316 & 0.3321& 0.3323 & 0.3332\\
\hline
& 2 & 0.2808 & 0.4520 & 0.4727 & 0.4809 & 0.4854 &0.4881 & 0.4987\\
$1/2$ & 4 & 0.4689 & 0.4938 & 0.4968 & 0.4979 & 0.4985 & 0.4988 & 0.4998\\
& 8 & 0.4863 & 0.4976 & 0.4989 & 0.4993 & 0.4995 & 0.4996 & 0.4999\\
\hline
& 2 & 0.5000 & 0.6352 & 0.6493 & 0.6548 & 0.6576 & 0.6594 &0.6659\\
$1/3$ & 4 & 0.6553 & 0.6647 & 0.6657 & 0.6661 & 0.6662 & 0.6663 &0.6667\\
& 8 & 0.6621 & 0.6659 & 0.6663 & 0.6665 & 0.6665 & 0.6665 &0.6667\\
\hline
\end{tabular}
\end{table}
Table \ref{MAP1} shows that the potential thresholds of uncoupled PCCs with partial repetition improve as $q$ increases. The thresholds also improve as the number of states of the component convolutional codes increases. When $q$ is large, the potential thresholds of all the ensembles approach the BEC capacity for all the considered rates. In particular, even the potential thresholds of the ensembles with 2-state component convolutional codes are within 0.002 of the BEC capacity when $q =50$. This suggests that the BP thresholds of GSC-PCCs can achieve the BEC capacity as $q$ tends to infinity, regardless of the number of states of the component convolutional codes. Hence, one can simply increase the repetition factor $q$ to obtain a GSC-PCC whose decoding threshold is very close to the BEC capacity for any given component convolutional code, while it is difficult for the SC-TCs in \cite{8002601} to further improve their thresholds without changing the component codes. In the next subsection, we prove that the proposed GSC-PCCs can in fact achieve the BEC capacity.
\subsection{Achieving Capacity}\label{sec:cap_achieve}
First, we let $\lambda = 1/q$ as this simple choice suffices to allow GSC-PCCs to achieve the largest threshold as $m$ becomes large. As a result, the potential function in \eqref{eq:pot2} simplifies to
\begin{align}\label{eq:pot_final1}
U(x;\epsilon)= \left(1-\frac{1}{2q}\right) x^{2q}- \int_0^{x^{2q-1}} f_\text{s}(\epsilon z,1-(1-\epsilon)\rho)dz,
\end{align}
where $\rho = \frac{R_0(1-R)}{qR(1-R_0)}$ due to \eqref{eq:rate_punc}. Then, we state the main result of this section in the following.
\begin{theorem}\label{the:cap}
The rate-$R$ GSC-PCC ensemble with $(1,1/3)$ convolutional component codes achieves at least a fraction $1-\frac{R}{R+q}$ of the BEC capacity under BP decoding.
\end{theorem}
\begin{IEEEproof}
See Appendix \ref{app:cap}.
\end{IEEEproof}
Corollary \ref{corollary1} follows immediately from Theorem \ref{the:cap}.
\begin{corollary}\label{corollary1}
The GSC-PCC ensemble with $(1,1/3)$ convolutional component codes achieves the BEC capacity under BP decoding as $q \rightarrow \infty$.
\end{corollary}
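Theorem \ref{the:cap} admits a quick numerical reading; the helper below (a trivial illustration, with a name of our choosing) tabulates the guaranteed fraction of capacity:

```python
def capacity_fraction(R, q):
    # guaranteed fraction of BEC capacity from the theorem: 1 - R/(R + q)
    return 1.0 - R / (R + q)

# At target rate R = 1/2, the guaranteed fraction grows quickly with q:
fractions = {qq: capacity_fraction(0.5, qq) for qq in (1, 2, 4, 8, 16)}
# q = 1 already guarantees 2/3 of capacity; q = 16 guarantees about 97%
```

The fraction is increasing in $q$ and tends to 1, which is precisely the statement of the corollary.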
\begin{remark}
To prove Theorem \ref{the:cap}, we choose to use the potential function as the key tool rather than the area theorem because the potential function only involves the transfer function of the information bits of the component decoder while the area theorem requires the transfer functions of both information and parity bits. It is also interesting to see that the GSC-PCC ensemble constructed from 2-state convolutional component codes has a multiplicative gap to the BEC capacity and the gap vanishes as $q \rightarrow \infty$. Generalizing the result of Theorem \ref{the:cap} to the GSC-PCC ensembles with any component convolutional codes is highly non-trivial because the transfer functions of different component decoders have to be derived separately. In particular, when the number of states is large, the derivation for the transfer function becomes extremely cumbersome and the exact analytical expression would be much more complicated than that of the 2-state code in \eqref{eq:2state} (e.g., \cite[Tables I-II]{1258535}). However, Theorem \ref{the:cap} together with the results of Table \ref{MAP1} strongly suggest that the proposed code ensembles with any given component convolutional codes also achieve capacity.
\end{remark}
Although obtaining an analytical expression for the potential threshold of GSC-PCCs with any given component convolutional codes is difficult, we establish in the next section some useful properties of the proposed codes to allow us to better understand how their decoding thresholds behave.
\subsection{Useful Properties of GSC-PCCs}
In this section, we further investigate some properties of GSC-PCCs by establishing the links between the decoding thresholds of the proposed coupled codes, the strength of the component codes, and the repetition factor (Propositions \ref{lem2}-\ref{lem3} below). Following from the previous analysis, we fix $\lambda = 1/q$.
Since the subsequent analysis only involves the transfer function of the information bits, we simply drop the subscript ``s'' from the transfer function for simplicity. Before proceeding, we present a useful result from \cite[Lemma 7.5]{Measson2006thesis}.
\begin{lemma}
Consider a convolutional code $\mathcal{C}$ with code rate $R_\mathcal{C}\geq 1/2$. Its decoder's transfer function for the information bits satisfies
\begin{align}\label{eq:int1}
\int_0^1f(x,y)dx &= 2-y+\frac{1}{R_\mathcal{C}}(y-1).
\end{align}
\end{lemma}
\begin{IEEEproof}
Please refer to the proof of \cite[Lemma 7.5]{Measson2006thesis}.
\end{IEEEproof}
For a rate-$1/2$ convolutional code, \eqref{eq:int1} simplifies to
\begin{align}\label{eq:int2}
\int_0^1f(x,y)dx = y.
\end{align}
Now, we are ready to present the first property that gives the relationship between the strength of the component convolutional code and the decoding threshold of the corresponding coupled codes.
\begin{proposition}\label{lem2}
Consider two convolutional codes $\mathcal{C}_1$ and $\mathcal{C}_2$ with their decoders' transfer functions for the information bits, denoted by $f_1(x,y)$ and $f_2(x,y)$, respectively, satisfying
\begin{align}\label{conj_eq1}
\left\{ {\begin{array}{*{20}{c}}
f_1(x,y)<f_2(x,y),\forall x\in(z,1)\\
f_1(x,y)>f_2(x,y),\forall x\in(0,z),
\end{array}} \right.
\end{align}
for some $z \in (0,1)$ and any fixed $y\in (0,1)$. The potential thresholds of the coupled systems based on $\mathcal{C}_1$ and $\mathcal{C}_2$, denoted by $\epsilon_\text{c}(\mathcal{C}_1)$ and $\epsilon_\text{c}(\mathcal{C}_2)$, respectively, satisfy the following condition under the same repetition factor $q<\infty$,
\begin{align}\label{lem2_eq1}
\epsilon_\text{c}(\mathcal{C}_2)>\epsilon_\text{c}(\mathcal{C}_1).
\end{align}
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{app2}.
\end{IEEEproof}
\begin{figure}[t!]
\centering
\includegraphics[width=3.3in,clip,keepaspectratio]{trans_diff_2.pdf}
\caption{Transfer functions of information bits for various convolutional codes.}
\label{fig:trans_diff1}
\end{figure}
Proposition \ref{lem2} explains why GSC-PCC ensembles built from convolutional codes with a larger number of states have better decoding thresholds than those with a lower number of states, as reported in Table \ref{MAP1}. This is because a convolutional code with a larger number of states usually achieves a lower bit erasure rate at a low input erasure probability, while achieving a higher bit erasure rate at a high input erasure probability, compared to a convolutional code with a smaller number of states. In Fig. \ref{fig:trans_diff1}, we show the output erasure probability of various transfer functions for $x \in [0,1]$ and $y = 0.66$. One can see that any pair of the considered convolutional codes in the figure satisfies \eqref{conj_eq1}. Thus, when $q$ is fixed and finite, one can use a convolutional code that performs better at a low input erasure probability (not necessarily one with a large number of states) to construct a GSC-PCC ensemble with an improved decoding threshold. Although we only show one value of $y$ in the figure, we have experimentally verified that the relationships in \eqref{conj_eq1} hold for all the considered convolutional codes for several values of $y \in (0,1)$.
\begin{remark}
To prove that the condition in \eqref{conj_eq1} holds for any pair of convolutional codes, one would have to explicitly derive and inspect their decoders' transfer functions. However, we can show that $f_1(x,y)$ and $f_2(x,y)$ intersect at a finite number of points for $x \in (0,1)$. Due to \eqref{eq:int2}, the following holds
\begin{align}
&\int_0^1f_1(x,y)dx = \int_0^1f_2(x,y)dx\\
\Rightarrow &\int_0^1\left(f_1(x,y)-f_2(x,y)\right)dx = 0. \label{eq:contra1}
\end{align}
If $f_1(x,y)$ and $f_2(x,y)$ did not intersect, then either $f_1(x,y)>f_2(x,y)$ or $f_1(x,y)<f_2(x,y)$ would hold $\forall x\in (0,1)$, which contradicts \eqref{eq:contra1}. Hence, $f_1(x,y)$ and $f_2(x,y)$ must intersect. In addition, the equation $f_1(x,y) = f_2(x,y)$ cannot have an infinite number of solutions for $x\in (0,1)$, because the transfer function of a convolutional decoder is a rational function whose numerator and denominator are polynomials of finite degree \cite{1258535}.
\end{remark}
The next property shows the relationship between the decoding threshold of GSC-PCC ensembles and the repetition factor $q$. Specifically, we investigate the conditions under which the threshold improves with $q$.
\begin{proposition}\label{lem3}
Consider a GSC-PCC ensemble constructed from a convolutional code with decoder transfer function $f(x,y)$. The potential threshold $\epsilon_\text{c}$ improves with $q$ if both of the following conditions are satisfied:
1) The fixed point DE equation in \eqref{eq:pot1}, i.e., $f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)=x$, only has two solutions in $x \in (0,1)$ for $\epsilon\in (\epsilon_\text{s},1-R)$, where $\epsilon_\text{s}$ is the BP threshold;
2) The output of the recursive DE equation in \eqref{eq:pot1} with initial condition $x^{(0)}=1$ as $\ell \rightarrow \infty$, i.e., $x^{(\infty)}$, increases with $q$.
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{app3}.
\end{IEEEproof}
\begin{figure}[t!]
\centering
\includegraphics[width=3.3in,clip,keepaspectratio]{trans_func_q_v1.pdf}
\caption{Outputs of the transfer functions of an 8-state convolutional code with various $q$ and $R=4/5$.}
\label{fig:trans_func_q}
\end{figure}
To show that both conditions in Proposition \ref{lem3} hold, we use specific examples. In Fig. \ref{fig:trans_func_q}, we show the values of the function $f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)$ for the convolutional code with generator polynomial $(1,15/13)$ for various $q$. In this example, we set $R = 4/5$ and $\epsilon = 0.1698>\epsilon_\text{s}$. One can see that each curve of the transfer function intersects the line $y=x$ at two points (also known as stationary points according to \cite[Def. 3]{6325197}), and that the value of each intersection point increases with $q$. For the ensemble considered in Example \ref{example1}, it can be observed from Fig. \ref{fig:potential_fun_2} that the stationary points of its potential function in \eqref{eq:pot_final1} also increase with $q$. Hence, we expect that the transfer function of any convolutional decoder satisfies both conditions in Proposition \ref{lem3}. Consequently, the decoding threshold of general GSC-PCC ensembles can be shown to improve with $q$ until reaching capacity, similar to the case considered in Theorem \ref{the:cap}.
\begin{remark}
The potential function in \eqref{eq:pot_final1} is related to that of uncoupled generalized LDPC (GLDPC) codes \cite{1056404}. More precisely, it is associated with the GLDPC codes whose constraint nodes are convolutional codes, e.g., \cite{9174017}. This can be seen by noting that our potential function is a half-iteration shift of the density evolution recursion of an uncoupled GLDPC ensemble by swapping $f$ in \eqref{eq:f} and $g$ in \eqref{eq:g} \cite[Section II-D]{6887298}. Since both coupled systems share many similarities \cite[Lemma 11]{6887298}, the analysis on the potential threshold of our coupled system can be used for the GLDPC counterpart. We also note that the repetition ratio of GSC-PCCs, $\lambda$, can be made irregular, analogous to the irregular variable node degrees of GLDPC codes. However, Tables \ref{table0}-\ref{table1} already show that the BP threshold of GSC-PCCs is close to the corresponding MAP threshold by optimizing $\lambda$ only. Moreover, the analysis in this section demonstrates that regular repetition, i.e., $\lambda = 1/q$, is sufficient to achieve capacity.
\end{remark}
\section{Simulation Results}\label{sec:sim}
In this section, we show the finite-length performance of the proposed codes. Unless specified otherwise, we use random interleaving and random parity puncturing (random for each channel realization) in the simulation. In addition, each error point is obtained by collecting at least 300 decoding errors.
\subsection{Performance on the BEC}
We consider GSC-PCCs with identical upper and lower convolutional encoders of generator polynomial $(1,5/7)$. We set $K=10000$, $L=100$, $m=1$, $q\in\{2,4\}$, and $R\in\{1/3,1/2\}$. The values of $\lambda$ are chosen according to Table \ref{table0}. The bit erasure rate (BER) and the BP thresholds for GSC-PCCs are shown in Figs. \ref{fig:ber} and \ref{fig:ber2}. In the same figures, we also plot the BER and decoding thresholds of SC-PCCs \cite{8002601} and PIC-TCs \cite{PIC2020} for comparison purposes. For a fair comparison, the benchmark codes and GSC-PCCs have the same target code rate, input message length, coupling length, and coupling memory. To show the best possible performance, all codes are decoded over the entire spatially coupled chain and hence have the same decoding latency \cite{7296605}.
\begin{figure}[t!]
\centering
\includegraphics[width=3.3in,clip,keepaspectratio]{GSC_PCC_BER_0p33.pdf}
\caption{BER performance (solid lines) and density evolution thresholds (dashed lines) of GSC-PCCs with target rate $1/3$.}
\label{fig:ber}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=3.3in,clip,keepaspectratio]{GSC_PCC_BER_0p5.pdf}
\caption{BER performance (solid lines) and density evolution thresholds (dashed lines) of GSC-PCCs with target rate $1/2$.}
\label{fig:ber2}
\end{figure}
We observe that for both rates, GSC-PCCs perform better than SC-PCCs and PIC-TCs, and the performance gains are in agreement with the DE results. This also confirms that the optimized design of $\lambda$ is effective. Interestingly, $q=2$ is already sufficient for GSC-PCCs to outperform SC-PCCs and PIC-TCs, while $q=4$ brings a further noticeable performance gain over $q=2$. Although the BER of uncoupled PCCs is not shown in the figure, one can clearly see that the actual performance of GSC-PCCs at a BER of $10^{-5}$ is much better than the BP thresholds of uncoupled PCCs with the same $q$ (see Table \ref{table1}) or without repetition (see \cite[Table II]{8002601}). It should be noted that the BER performance of GSC-PCCs can be further improved by using a larger $q$, according to our analysis in Sections \ref{sec:DE} and \ref{sec:TS}.
\subsection{A Criterion For Coupling Bits Selection}\label{sec:coupling_bits_criteria}
From Sections \ref{sec:DE}-\ref{sec:TS}, we know that the excellent thresholds reported for GSC-PCC ensembles rely on the random selection of coupled information bits that these ensembles naturally assume (due to random interleaving). In contrast, the error performance of a GSC-PCC with a fixed code structure can be affected by the selection of coupled information bits.
When the selection of coupling bits is completely random, it is possible that some of the information bits in $\boldsymbol{u}_{t,\text{r}}$ (i.e., the information bits to be repeated) and their $q-1$ replicas can appear in both $\boldsymbol{u}^{\text{U}}_{t,t+j}$ and $\boldsymbol{u}^{\text{L}}_{t,t+j}$ for some $j \in \{0,\ldots,m\}$. In other words, these bits and their $q-1$ replicas are encoded by the upper and lower convolutional component encoders at the same time instant. In this case, these bits cannot benefit from coupling as no extrinsic information from the component codewords at other time instants can be obtained. To enable the exchange of extrinsic information between coupling blocks via these repeated bits, we introduce a simple criterion of selecting coupled bits. That is, each bit in $\boldsymbol{u}_{t,\text{r}}$ and its $q-1$ replicas should not appear in $\boldsymbol{u}^{\text{U}}_{t,t+j}$ and $\boldsymbol{u}^{\text{L}}_{t,t+j}$ at the same time instant, i.e., the repeated bits spread across different time instants. In what follows, we show that by incorporating this criterion in designing GSC-PCCs, a noticeable gain can be attained compared to totally random selection of coupling bits.
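The criterion can be stated compactly in code. The helper below is our own hypothetical sketch (the function name and data layout are not from the paper): it takes, for one repeated bit, the list of (time instant, component encoder) positions occupied by the bit and its $q-1$ replicas, and checks that no time instant hosts copies in both the upper and the lower encoder.

```python
def satisfies_criterion(placements):
    """placements: iterable of (time_instant, encoder) pairs, encoder in {'U','L'},
    listing where one repeated bit and its q-1 replicas are encoded."""
    encoders_at = {}
    for t, enc in placements:
        encoders_at.setdefault(t, set()).add(enc)
    # the criterion: copies never meet both component encoders at one instant
    return all(len(encs) == 1 for encs in encoders_at.values())

ok = satisfies_criterion([(0, 'U'), (1, 'L')])    # copies spread over time: True
bad = satisfies_criterion([(0, 'U'), (0, 'L')])   # both encoders at t = 0: False
```

An interleaver design can simply resample until every repeated bit passes this check.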
We adopt the same settings as in the simulation for Fig. \ref{fig:ber}, except that the employed random interleavers must ensure that the coupling bits satisfy the aforementioned criterion. The BER performance of the proposed codes under the selected coupling bits (labeled ``Designed CP'') and that under random selection of coupling bits (labeled ``Random CP'') is shown in Fig. \ref{fig:BER_designed_CP}. Observe that for both $q=2$ and $q=4$, the error performance is improved.
\begin{figure}[t!]
\centering
\includegraphics[width=3.3in,clip,keepaspectratio]{BER_designed_CP_v1.pdf}
\caption{BER of GSC-PCCs with target rate $1/3$ and under the proposed criterion.}
\label{fig:BER_designed_CP}
\end{figure}
\subsection{Performance on the AWGN Channel}
In this section, we provide simulation results for the bit error rate (BER) versus the bit signal-to-noise ratio $E_b/N_0$ for GSC-PCCs, PIC-TCs \cite{PIC2020}, and SC-LDPC codes \cite{7152893} on the AWGN channel. We have also simulated the frame error rate (FER). Since the FER curves show trends similar to those of the BER curves, we omit the FER performance due to space limitations.
\begin{table}[t!]
\centering
\caption{Five GSC-PCCs Used For Simulations}\label{GSC_setup}
\begin{tabular}{c c c c c c}
\hline
GSC-PCC & Component Codes & $\lambda$ & Interleaving & Puncturing & $\epsilon^{(m=1)}_{\text{BP}}$ \\ \hline
1 & $(1,5/7)$ & 0.44 & Random & Random & 0.4907\\
2 & $(1,15/13)$ & 0.31 & Random & Random & 0.4935 \\
3 & $(1,15/13)$ & 0.31 & Random & Fixed & 0.4935 \\
4 & $(1,15/13)$ & 0.31 & Fixed & Fixed & 0.4935\\
5 & $(1,15/13)$ & 0.375 & Fixed & Fixed & 0.4928 \\
\hline
\end{tabular}
\end{table}
First, we consider all codes with target rate $R = 1/2$ and coupling length $L=50$. For both GSC-PCCs with $q=2$ and PIC-TCs, we set $K=1000$ and $m=1$. To see the impact of interleaving, puncturing, and the choice of component codes on the finite-length performance of GSC-PCCs, we evaluate the performance of the five GSC-PCCs listed in Table \ref{GSC_setup}. Here, for fixed puncturing, we use a periodic puncturing pattern by following \cite[Section VII-A]{7932507}. To obtain the fixed interleavers, we first randomly generate more than 60 sets of interleavers such that the resultant coupling bits satisfy the criterion in Section \ref{sec:coupling_bits_criteria}. Then, we simulate the BER at an $E_b/N_0$ of 1 dB and select the set of interleavers that leads to the lowest BER. The benchmark PIC-TC uses $(1,5/7)$ convolutional component codes, random interleaving and puncturing, and a coupling ratio following \cite[Table II]{PIC2020}. The benchmark $(3,6,50,2)$ SC-LDPC code is constructed by following \cite{7152893} and has a coupling width of 2 and a lifting factor of 1000. The maximum numbers of intra-block and inter-block decoding iterations for all turbo-like codes are set to 20, while the maximum number of BP decoding iterations for the SC-LDPC code is set to 1000. In addition to the aforementioned codes under full decoding, we also showcase an example of the proposed codes, i.e., GSC-PCC 4 in Table \ref{GSC_setup}, using sliding window decoding with a window size $W=8$. The BER versus $E_b/N_0$ is shown in Fig. \ref{fig:ber_res2}.
\begin{figure}[t!]
\centering
\includegraphics[width=3.3in,clip,keepaspectratio]{BER_AWGN_v1.pdf}
\caption{BER of GSC-PCCs, PIC-TCs and SC-LDPC with target rate $1/2$.}
\label{fig:ber_res2}
\end{figure}
It can be seen that all the GSC-PCCs outperform the benchmark PIC-TC and SC-LDPC code in terms of waterfall performance on the AWGN channel. In particular, the GSC-PCC under windowed decoding with a decoding latency of 8000 bits still performs better in the waterfall region than the SC-LDPC code and the PIC-TC under full decoding with a decoding latency of 50000 bits. In addition, the GSC-PCCs with a larger BEC decoding threshold in Table \ref{GSC_setup} have better waterfall performance than those with a smaller threshold. This means that the excellent performance of the proposed codes on the BEC carries over to the AWGN channel. It is also interesting to note that the GSC-PCCs under fixed interleaving and puncturing achieve better waterfall and error-floor performance than their random counterparts. In fact, one can adopt the interleaver designs for turbo-like codes in the literature (e.g., \cite{8214245,9594196}) to attain further performance improvement. Finally, observe that the GSC-PCC with a large $\lambda$ has a lower error floor than that with a small $\lambda$, and error-floor performance comparable to the benchmark SC-LDPC code. Therefore, with a small $m$ and finite blocklength, $\lambda$ plays a key role in the trade-off between waterfall and error-floor performance. That said, it is expected that when $m$ becomes large, choosing the maximum $\lambda$, i.e., $\lambda = 1/q$, will result in very good waterfall and error-floor performance.
\section{Conclusions}\label{sec:conclude}
We introduced generalized spatially-coupled parallel concatenated codes, which can be seen as a generalization of conventional SC-PCCs and have a structure similar to that of PIC-TCs. We derived the density evolution equations for the proposed codes and found the decoding threshold by optimizing the fraction of repeated information bits. By using the potential function argument, we analytically proved that the proposed codes exhibit threshold saturation. Then, we rigorously proved that the GSC-PCC ensemble with 2-state convolutional component codes achieves capacity as the repetition factor tends to infinity, and numerically showed that the results generalize to GSC-PCC ensembles with other convolutional component codes. To gain more insight into the decoding performance of the proposed codes, we established the relationships between the strength of the component convolutional codes, the decoding threshold of the corresponding GSC-PCCs, and the repetition factor. Finite-blocklength BER simulation results were provided to show that the proposed codes outperform existing classes of spatially-coupled codes constructed from component PCCs (or turbo codes).
\appendices
\section{Proof of Theorem \ref{the:cap}}\label{app:cap}
First, the decoder transfer function for the information bits of $(1,1/3)$ convolutional codes can be derived by following \cite{1258535} as
\begin{align}\label{eq:2state}
f_\text{s}(x,y) = \frac{xy(2-2y+xy)}{(1-y+xy)^2}.
\end{align}
Therefore,
\begin{align}
\int_0^{a} f_\text{s}(x z,y)dz = \frac{xy a^2}{xy a-y+1},
\end{align}
and the potential function becomes
\begin{align}
U(x;\epsilon) &= \left(1-\frac{1}{2q}\right) x^{2q}- \frac{\epsilon(1-(1-\epsilon)\rho)x^{4q-2}}{\epsilon(1-(1-\epsilon)\rho)x^{2q-1}-(1-(1-\epsilon)\rho)+1}.
\end{align}
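As a sanity check, the closed-form integral above can be machine-verified via the fundamental theorem of calculus (a quick SymPy check, not part of the proof):

```python
import sympy as sp

x, y, a = sp.symbols('x y a', positive=True)

# integrand f_s(x*a, y) and the claimed value F(a) of the integral over (0, a)
f_at_a = x * a * y * (2 - 2 * y + x * a * y) / (1 - y + x * a * y) ** 2
F = x * y * a ** 2 / (x * y * a - y + 1)

# F'(a) must equal the integrand, and F(0) must vanish
assert sp.simplify(sp.diff(F, a) - f_at_a) == 0
assert F.subs(a, 0) == 0
```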
Next, we find the necessary condition which $\epsilon \in (0,1)$ has to fulfill such that $U(x;\epsilon)\geq 0, \forall x \in (0,1]$. We have
\begin{subequations}
\begin{align}
U(x;\epsilon)\geq 0\Rightarrow &\;\left(1-\frac{1}{2q}\right) x^{2q}- \frac{\epsilon(1-(1-\epsilon)\rho)x^{4q-2}}{\epsilon(1-(1-\epsilon)\rho)x^{2q-1}-(1-(1-\epsilon)\rho)+1} \geq 0 \\
\Rightarrow &\; \frac{\epsilon(1-(1-\epsilon)\rho)x^{2q-2}}{\epsilon(1-(1-\epsilon)\rho)x^{2q-1}-(1-(1-\epsilon)\rho)+1}\leq \frac{2q-1}{2q} \\
\Rightarrow & \;2q\epsilon(1-\rho+\epsilon\rho)x^{2q-2}-(2q-1)\epsilon(1-\rho+\epsilon\rho)x^{2q-1} \nonumber \\
&+(2q-1)(1-\rho+\epsilon\rho)-(2q-1) \leq 0 \\
\Rightarrow & \;\epsilon(1-\rho+\epsilon\rho)(2qx^{2q-2}-(2q-1)x^{2q-1})-\rho(2q-1)(1-\epsilon) \leq 0 \\
\Rightarrow& \;\epsilon^2x^{2q-2}\left( \frac{2q}{2q-1}-x\right)
+\epsilon\left(x^{2q-2}\left( \frac{2q}{2q-1}-x\right)\frac{1-\rho}{\rho}+1\right)-1 \leq 0 \label{eq:quadratic1}\\
\Rightarrow &\; (\epsilon-\epsilon_1)(\epsilon-\epsilon_2) \leq 0,
\end{align}
\end{subequations}
where $\epsilon_1$ and $\epsilon_2$ are the roots of the quadratic function of \eqref{eq:quadratic1}. Specifically,
\begin{align}
\epsilon_1 &= \frac{-b-\sqrt{b^2-4ac}}{2a} = \frac{-\left(\frac{1-\rho}{\rho}a+1\right)-\sqrt{\left(\frac{1-\rho}{\rho}a+1\right)^2+4a}}{2a}<0, \\
\epsilon_2 &= \frac{-b+\sqrt{b^2-4ac}}{2a} = \frac{-\left(\frac{1-\rho}{\rho}a+1\right)+\sqrt{\left(\frac{1-\rho}{\rho}a+1\right)^2+4a}}{2a} >0 \label{eq:sol2_2s},
\end{align}
where we have used the following definitions for ease of presentation
\begin{align}
&a\triangleq x^{2q-2}\left(\frac{2q}{2q-1}-x\right)>0, \label{notation_a} \\
&b\triangleq x^{2q-2}\left( \frac{2q}{2q-1}-x\right)\frac{1-\rho}{\rho}+1 = \frac{1-\rho}{\rho}a+1>0, \\
&c \triangleq -1.
\end{align}
Since $\epsilon_1<0<\epsilon_2$, it then remains to find $x$ such that $\epsilon_2$ reaches its minimum. This is because we want to ensure that $U(x;\epsilon) \geq 0$ for any $\epsilon<\min_{x\in(0,1]}\epsilon_2$. By taking the following first order partial derivative,
\begin{align}
\frac{\partial\epsilon_2}{\partial x} =& \frac{\partial\epsilon_2}{\partial a}\cdot\frac{\partial a}{\partial x} \nonumber \\
=& \frac{\sqrt{\left( \frac{1-\rho}{\rho}a+1 \right)^2+4a }-\frac{1+\rho}{\rho}a-1}{2a^2\sqrt{\left( \frac{1-\rho}{\rho}a+1 \right)^2+4a }}\cdot\left(\frac{2q(2q-2)}{2q-1}x^{2q-3}-(2q-1)x^{2q-2}\right),
\end{align}
we note that $\epsilon_2$ is strictly decreasing in $x \in \left(0,\frac{4q(q-1)}{(2q-1)^2}\right)$ and strictly increasing in $x \in \left(\frac{4q(q-1)}{(2q-1)^2},1\right]$. This can be seen by first noting that the sign of the partial derivative $\frac{\partial\epsilon_2}{\partial a}$ is determined by its numerator, which satisfies
\begin{align}
\sqrt{\left( \frac{1-\rho}{\rho}a+1 \right)^2+4a }-\frac{1+\rho}{\rho}a-1<0,
\end{align}
due to the fact that
\begin{align}
\left( \frac{1-\rho}{\rho}a+1 \right)^2+4a-\left(\frac{1+\rho}{\rho}a+1\right)^2 = -\frac{4a^2}{\rho} <0.
\end{align}
In addition, it is easy to see that the partial derivative $\frac{\partial a}{\partial x} = \frac{2q(2q-2)}{2q-1}x^{2q-3}-(2q-1)x^{2q-2}$ is strictly positive in $x \in \left(0,\frac{4q(q-1)}{(2q-1)^2}\right)$ and strictly negative in $x \in \left(\frac{4q(q-1)}{(2q-1)^2},1\right]$. Therefore,
\begin{align}\label{eq:s2_proof_2}
x=\argmin_{x\in(0,1]} \epsilon_2 = \frac{4q(q-1)}{(2q-1)^2}.
\end{align}
Note that in order to ensure $x>0$, one should have $q\geq2$ ($q=1$ corresponds to the case of SC-PCCs \cite{8002601}).
The potential threshold can be obtained by substituting \eqref{eq:s2_proof_2} into \eqref{eq:sol2_2s},
\begin{subequations}
\begin{align}
\epsilon_{\text{c}} =\epsilon_2=&\frac{-\left(\frac{1-\rho}{\rho}a+1\right)+\sqrt{\left(\frac{1-\rho}{\rho}a+1\right)^2+4a}}{2a} \\
=& -\left(\frac{1-\rho}{2\rho}+\frac{1}{2a}\right)+\sqrt{\frac{(a+\rho)^2+2a\rho(\rho-a)+a^2\rho^2}{4a^2\rho^2}} \\
\geq& -\left(\frac{1-\rho}{2\rho}+\frac{1}{2a}\right)+\sqrt{\frac{(a+\rho)^2+2a\rho(\rho-a)+a^2\rho^2\left(\frac{\rho-a}{\rho+a}\right)^2}{4a^2\rho^2}} \\
=& -\left(\frac{1-\rho}{2\rho}+\frac{1}{2a}\right)+\sqrt{\frac{\left((a+\rho)+\frac{\rho-a}{\rho+a}a\rho\right)^2}{4a^2\rho^2}} \\
=& -\frac{a+\rho-a\rho}{2a\rho}+\frac{(a+\rho)+\frac{\rho-a}{\rho+a}a\rho}{2a\rho} \\
=& \frac{\rho}{a+\rho} \\
=& (1-R)\left(1-\frac{1}{1+\frac{1}{R(2qa-1)}}\right)\label{eq:rho_use}\\
\geq& (1-R)\left(1-\frac{R}{R+q}\right),\label{eq:state2_final_step}
\end{align}
\end{subequations}
where in \eqref{eq:rho_use} we have used $\rho = \frac{R_0(1-R)}{qR(1-R_0)}=\frac{1-R}{2qR}$ and \eqref{eq:state2_final_step} follows from
\begin{subequations}
\begin{align}
\frac{(q+1)(q-\frac{1}{2})^{4q-2}}{q^{2q+1}(q-1)^{2q-2}} =& \frac{(q+1)(q-1)}{q^2}\left(\frac{(q-\frac{1}{2})^2}{q(q-1)}\right)^{2q-1} \\
=& \left(1-\frac{1}{q^2} \right)\left(1+\frac{1}{4(q^2-q)}\right)^{2q-1} \label{ineq1_before} \\
\geq& 1\label{ineq1},
\end{align}
\end{subequations}
which implies that
\begin{subequations}
\begin{align}
&\frac{2^{4q-2}q^{2q}(q-1)^{2q-2}}{(2q-1)^{4q-2}} \leq \frac{q+1}{q} \\
\Rightarrow & 2qa-1\leq \frac{1}{q} \qquad (\text{by} \; \eqref{notation_a} \;\& \;\eqref{eq:s2_proof_2}),
\end{align}
\end{subequations}
and \eqref{ineq1} holds because by inspecting the derivative of \eqref{ineq1_before}, i.e.,
\begin{align}
\frac{4\left(1+\frac{1}{4(q^2-q)}\right)^{2q}(q-1)^2\left( (2q+2q^2)\ln\left(1+\frac{1}{4(q^2-q)}\right)-1\right)}{\left((2q-1)q\right)^2},
\end{align}
we note that the function \eqref{ineq1_before} is strictly increasing in $q \in [2,2.91486)$ and strictly decreasing in $q \in (2.91486,\infty)$ such that its minimum is achieved when $q \rightarrow \infty$.
Using Theorem \ref{the:ts}, we conclude that the BP threshold of the considered GSC-PCC ensembles with $L \rightarrow \infty$, $m \rightarrow \infty$ and $L \gg m$ is lower bounded by \eqref{eq:state2_final_step}. This completes the proof.
\section{Proof of Proposition \ref{lem2}}\label{app2}
We first show that the following inequality is true.
\begin{align}\label{the_f}
\int_{0}^{\vartheta}f_1(x,y)dx > \int_{0}^{\vartheta}f_2(x,y)dx, \forall \vartheta \in (0,1).
\end{align}
It is immediate that \eqref{the_f} holds for $\vartheta \in (0,z]$ due to the second equality of \eqref{conj_eq1}. As for $\vartheta \in (z,1)$, we have
\begin{align}
\int_{0}^{\vartheta}f_1(x,y)dx =& \int_{0}^{1}f_1(x,y)dx - \int_{\vartheta}^{1}f_1(x,y)dx \nonumber \\
\overset{\eqref{eq:int2}}{=}& \int_{0}^{1}f_2(x,y)dx - \int_{\vartheta}^{1}f_1(x,y)dx \nonumber \\
\overset{\eqref{conj_eq1}}{>}& \int_{0}^{1}f_2(x,y)dx- \int_{\vartheta}^{1}f_2(x,y)dx \nonumber \\
=& \int_{0}^{\vartheta}f_2(x,y)dx.
\end{align}
For the transfer functions satisfying \eqref{the_f}, we can show that the potential functions in relation to $\mathcal{C}_1$ and $\mathcal{C}_2$ satisfy the following condition for any $x \in (0,1]$ and $\epsilon \in (0,1)$.
\begin{align}\label{ineq:pot}
U(x,\epsilon)(\mathcal{C}_2) =& \left(1-\frac{1}{2q}\right) x^{2q}- \int_0^{x^{2q-1}} f_2(\epsilon z,1-(1-\epsilon)\rho)dz \nonumber \\
=& \left(1-\frac{1}{2q}\right) x^{2q}- \frac{1}{\epsilon}\int_0^{\epsilon x^{2q-1}} f_2(z',1-(1-\epsilon)\rho)dz'\qquad \left(z' = \epsilon z\right) \nonumber \\
\overset{\eqref{the_f}}{>}& \left(1-\frac{1}{2q}\right) x^{2q}- \frac{1}{\epsilon}\int_0^{\epsilon x^{2q-1}} f_1(z',1-(1-\epsilon)\rho)dz' \nonumber \\
=& U(x,\epsilon)(\mathcal{C}_1).
\end{align}
The inequality in \eqref{ineq:pot} implies that if $\min_{x \in (0,1]} U(x,\epsilon)(\mathcal{C}_1)= 0$, then there exists $\epsilon'>\epsilon$ such that $\min_{x \in (0,1]}U(x,\epsilon')(\mathcal{C}_2)=0$. As a result, we obtain \eqref{lem2_eq1} by using Definition \ref{def:2}.
\section{Proof of Proposition \ref{lem3}}\label{app3}
Consider that the two solutions, $x_1$ and $x_2$, satisfy $0<x_1<x_2<1$. We first show that the following holds.
\begin{align}\label{conj_eq2}
\left\{ {\begin{array}{*{20}{c}}
&f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)<x,\forall x\in(0,x_1)\cup(x_2,1]\\
&f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)>x,\forall x\in(x_1,x_2)
\end{array}} \right..
\end{align}
Recall that the transfer function of any convolutional decoder is strictly increasing in all its arguments \cite[Lemma 1]{8002601}. Thus, it is easy to see that $f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)$ is also strictly increasing in $x\in (0,1]$. Since $x > f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)$ for $x=1$ and by realizing that $x_2$ is the largest root of $x = f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)$, we have
\begin{align}\label{eq:region1}
x > f(\epsilon x^{2q-1},1-(1-\epsilon)\rho), \forall x \in (x_2,1].
\end{align}
Since $x_1$ is the smallest non-zero root, for all $x \in(0,x_1)$ one must have either $x<f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)$ or $x>f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)$. If the former holds, then the following must be true
\begin{align}\label{eq:former_contra}
&x^{(\ell)} = f(\epsilon (x^{(\ell-1)})^{2q-1},1-(1-\epsilon)\rho)>x^{(\ell-1)}, \\
\Rightarrow & x^{(\infty)} =f(\epsilon (x^{(\infty)})^{2q-1},1-(1-\epsilon)\rho) = x_1>0.
\end{align}
This means that even for an initial condition arbitrarily close to 0, i.e., $x^{(0)}\rightarrow 0$, the iterative system defined by the recursion in \eqref{eq:former_contra} would never converge to 0 as $\ell \rightarrow \infty$, which is impossible. Hence, one can only have the following
\begin{align}\label{eq:region2}
x > f(\epsilon x^{2q-1},1-(1-\epsilon)\rho), \forall x \in (0,x_1).
\end{align}
However, there must exist a region on which $x < f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)$, because the condition $\epsilon > \epsilon_\text{s}$ implies $U'(x;\epsilon)\leq0$ for some $x\in (0,1)$ according to Definition \ref{def:sst}, and $f(\epsilon x^{2q-1},1-(1-\epsilon)\rho)$ is increasing in $\epsilon$. The only possible region for this condition to hold is $x \in (x_1,x_2)$. This leads to \eqref{conj_eq2}.
As for the largest root, the following condition holds due to Definition \ref{def:2} and \eqref{eq:region1}
\begin{align}\label{eq:con_x2}
x_2=\argmin_{x \in (0,1]} U(x;\epsilon)
=x^{(\infty)}=f(\epsilon (x^{(\infty)})^{2q-1},1-(1-\epsilon)\rho), \;x^{(0)}=1.
\end{align}
Using \eqref{conj_eq2} and \eqref{eq:con_x2}, we obtain the following system of equations by letting $\epsilon = \epsilon_{\text{c}}$:
\begin{align}\label{conj_eq3}
\left\{ {\begin{array}{*{20}{c}}
U(x_2,\epsilon_{\text{c}}) = 0\\
U'(x_2,\epsilon_{\text{c}}) = 0
\end{array}} \right. \Rightarrow \left\{ {\begin{array}{*{20}{c}}
\left(1-\frac{1}{2q}\right)x_2^{2q}-\int_0^{x_2^{2q-1}}f(\epsilon_{\text{c}} z,1-(1-\epsilon_{\text{c}})\rho)dz = 0\\
x_2 - f(\epsilon_{\text{c}} x_2^{2q-1},1-(1-\epsilon_{\text{c}})\rho) = 0
\end{array}} \right..
\end{align}
Given a specific transfer function $f$, one can solve for $\epsilon_{\text{c}}$ as a function of $q$ from the above equations. Since the transfer function of a general convolutional decoder does not admit a universal closed form, we instead examine the following derivative
\begin{align}
\frac{\partial \epsilon_{\text{c}}(q)}{\partial q} = \frac{\partial \epsilon_{\text{c}}(x_2,q)}{\partial x_2}\cdot \frac{\partial x_2(\epsilon_{\text{c}},q)}{\partial q},
\end{align}
where $\epsilon_{\text{c}}(x_2,q)$ is the solution of the first equation in \eqref{conj_eq3}, and $ x_2(\epsilon_{\text{c}},q)$ is the solution of the second equation in \eqref{conj_eq3}. Consider $x'_2 \in (x_2,1)$. Then, the following holds
\begin{subequations}
\begin{align}
\eqref{eq:region1}\Rightarrow&U'(x;\epsilon)>0,\forall x\in(x_2,1] \\
\Rightarrow & U(x'_2;\epsilon_{\text{c}}(x_2,q))>U(x_2;\epsilon_{\text{c}}(x_2,q)) = U(x'_2;\epsilon_{\text{c}}(x'_2,q))=0 \\
\Rightarrow & \epsilon_{\text{c}}(x'_2,q)>\epsilon_{\text{c}}(x_2,q)\label{eq:conj4} \\
\Rightarrow & \frac{\partial \epsilon_{\text{c}}(x_2,q)}{\partial x_2}>0,
\end{align}
\end{subequations}
where \eqref{eq:conj4} follows from the fact that $U(x;\epsilon)$ is strictly decreasing in $\epsilon \in (0,1]$ \cite{6325197,6887298}. In addition, with \eqref{eq:con_x2} and condition 2), i.e., that $x^{(\infty)}$ increases with $q$, it is immediate that $\frac{\partial x_2(\epsilon_{\text{c}},q)}{\partial q}>0$. Therefore,
$\frac{\partial \epsilon_{\text{c}}(q)}{\partial q}>0$, which means that the potential threshold $\epsilon_\text{c}$ improves with $q$.
\bibliographystyle{IEEEtran}
\section{Introduction}
\subsubsection{Problem description and motivation}
Signed graphs represent networked systems with interactions classified as
positive or negative, e.g., cooperation or antagonism, promotion or
inhibition, attraction or repulsion. Such graphs naturally arise in diverse
fields, e.g., political science~\cite{MOJ-SN:15}, communication
studies~\cite{JL-DH-JK:10} and biology~\cite{CCL-CHL-CSF-HFJ-HCH:13}. In
sociology~\cite{NEF:98,DE-JK:10}, they are used to represent friendly or
antagonistic relationships, whereby signed edges may be interpreted as
interpersonal sentiment appraisals. In the work by Heider~\cite{FH:46},
each individual appraises all other individuals either positively (friends,
allies) or negatively (enemies, rivals). Heider postulated four famous
axioms: (i) ``the friend of a friend is a friend,'' (ii) ``the enemy of a
friend is an enemy,'' (iii) ``the friend of an enemy is an enemy,'' and
(iv) ``the enemy of an enemy is a friend.'' Violations of these axioms
lead to cognitive tensions and dissonances that the individuals strive to
resolve; in this sense, Heider's axioms are consistent with the general
theory of cognitive dissonance~\cite{LF:1957}. A signed network satisfying
Heider's axioms is called \emph{structurally balanced} and can have
only two possible configurations: either all of its members have positive
relationships with each other and become a unique faction, or there exist
two factions in which members of the same faction are friends but enemies
with every other member in the other faction. We refer
to~\cite{NEF:98,DE-JK:10} for textbook treatment and to~\cite{XZ-DZ-FYW:15}
for a recent comprehensive survey.
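In matrix terms, a complete signed network is structurally balanced exactly when its off-diagonal sign pattern factors as $s s^\top$ for some faction vector $s \in \{\pm 1\}^n$. A minimal checker (our own sketch, not taken from the cited literature):

```python
import numpy as np

def is_structurally_balanced(S):
    # S: symmetric matrix of +1/-1 appraisals on a complete signed graph.
    # Balanced iff the off-diagonal entries agree with s s^T for a sign vector s.
    s = np.sign(S[0]).copy()
    s[0] = 1                          # anchor the first individual in faction +1
    off = ~np.eye(len(S), dtype=bool)
    return bool(np.array_equal(np.outer(s, s)[off], S[off]))

s = np.array([1, 1, -1, -1])          # two antagonistic factions
balanced = is_structurally_balanced(np.outer(s, s))          # True
one_neg = np.array([[1, 1, 1], [1, 1, -1], [1, -1, 1]])      # axiom (ii) violated
unbalanced = is_structurally_balanced(one_neg)               # False
```

The second example is the classic triangle with exactly one negative edge, which violates axiom (ii).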
Whereas Heider's theory describes the qualitative emergence of structural
balance as the result of tension-resolving cognitive mechanisms, it does
not provide a quantitative description of these mechanisms and dynamic
models explaining the emergence of balance. The aim to fill this gap has
given rise to the important research area of \emph{dynamic structural
balance}. The Ku{\l}akowski\xspace et al.~\cite{KK-PG-PG:05} model postulates an
influence process, whereby any individual $i$ updates her appraisal of
individual $j$ based on what others positively or negatively think about
$j$. The Traag et al.~\cite{VAT-PVD-PDL:13} model postulates a homophily
process, whereby any individual $i$ updates her appraisal of $j$ according
to how much she agrees with $j$ on the appraisals of their common
acquaintances. Both models explain convergence to structural balance under
certain assumptions on the initial state (see below for more
information). Remarkably, both models assume the existence of so-called
\emph{self-appraisals} (loops in the signed graph) that strongly influence
the system dynamics. Self-appraisals can be interpreted as individuals'
positive or negative opinions of themselves.
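To make the influence mechanism concrete: in its simplest unbounded form, as analyzed in~\cite{SAM-JK-RDK-SHS:11}, the appraisal matrix evolves as $\dot X = X^2$, i.e., $\dot x_{ij} = \sum_k x_{ik}x_{kj}$. The sketch below (our illustration; the Euler step, parameters, and stopping rule are assumptions) integrates this flow from a generic symmetric initial condition and checks that the sign pattern near the finite escape time is structurally balanced:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
X = (A + A.T) / 2                     # generic symmetric initial appraisals

dt = 1e-4
while np.abs(X).max() < 1e8:          # stop shortly before the finite escape time
    X = X + dt * (X @ X)              # Euler step of the influence flow X' = X^2

s = np.sign(X[0])                     # faction labels read off individual 0's row
balanced = bool(np.array_equal(np.outer(s, s), np.sign(X)))
```

Generically the dominant eigen-direction $v_1$ of $X(0)$ takes over near blow-up, so $\operatorname{sign}(X)\to \operatorname{sign}(v_1)\operatorname{sign}(v_1)^\top$: either one faction or two mutually hostile ones.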
A second line of research, consistent with dissonance theory, has focused
on formulating social balance via appropriate energy functions. The
work~\cite{SAM-SHS-JMK:09} proposes an energy function for binary appraisal
matrices with global minima that represent structurally balanced
configurations; it is argued that a dynamic structural balance model should
aim to navigate through this energy landscape and look for its minima. Some
models (e.g.,~\cite{TA-PLK-SR:05,TA-PLK-SR:06}) were designed precisely to
achieve this task. The work~\cite{GF-GI-CA:11} computes a distance to
balance via a combinatorial optimization problem, inspired by Ising models.
The purpose of this paper is threefold. First, we aim to propose a more
parsimonious model of the influence process establishing structural
balance, that is, a model without self-appraisal weights. Our argument for
dropping these variables is that balance theory axioms do not include
self-appraisals, and the inclusion of such appraisals amounts to an
additional assumption and introduces unnecessary complexities. Second, we
aim to connect the literature on dynamic structural balance with the
literature treating social balance as an optimization problem. Finally, in
comparison with a known limitation of the Ku{\l}akowski\xspace et al.~model, we aim
to emphasize through numerical simulations that our parsimonious model
predicts the emergence of structural balance also from asymmetric initial
configurations.
\subsubsection{Further comments on the state of the art}
We now present a summary of the current literature on dynamic structural
balance. Historically, the first models appeared in the physics
community~\cite{TA-PLK-SR:05,TA-PLK-SR:06,RF-DV-SY-OM:07}. These models
borrowed some concepts from statistical physics and had the
particularity of assuming that the appraisals between individuals are
binary valued (either $+1$ or $-1$). At the same time, they rely on
hard-wired random mechanisms for the asynchronous updates of the
appraisals that lack a sociologically insightful interpretation.
Another type of proposed model is based on discrete- and continuous-time
dynamical systems with real-valued appraisals. The seminal models of
this kind are due to Ku{\l}akowski\xspace et al.~\cite{KK-PG-PG:05} (later
analyzed more formally by~\cite{SAM-JK-RDK-SHS:11}) and Traag et
al.~\cite{VAT-PVD-PDL:13}. Models with real-valued appraisals
capture not only signs, but also magnitudes of positive or negative
sentiments. All these models adopt synchronous updating and stipulate
sociologically meaningful rules for the updating of appraisals, based on
either influence or homophily processes. The following facts are known
about the Ku{\l}akowski\xspace et al.~influence-based and the Traag et
al.~homophily-based models: the set of well-behaved initial conditions that
lead the social network towards social balance for the first model is a
subset of the set of normal matrices, while the second model can work under
generic initial conditions. Similar results are obtained by
\cite{WM-PCV-GC-NEF-FB:17f} for two discrete-time models based on influence
and homophily respectively: influence-based processes do not perform well
under generic initial conditions (in contrast to the homophily-based
processes). Finally, only the models proposed in
\cite{WM-PCV-GC-NEF-FB:17f} and a variation of the model by Ku{\l}akowski\xspace et
al.~proposed in the early work~\cite{KK-PG-PG:05}, have a bounded evolution
of appraisals, whereas the others have finite escape time.
Recent work has also started to focus on dynamic models for other relevant
configurations of signed graphs, e.g., configurations that satisfy only a
subset of Heider's four axioms.
The work~\cite{NEF-AVP-FB:18m} provides a parsimonious model explaining the
emergence of a generalized version of structural balance from any initial
configuration; this model is based on an influence process of positive
contagion whereby influence is accorded only to positively-appraised
individuals. A second model in this area is proposed
by~\cite{PJ-NEF-FB:13n}. Finally, there has been a third type of models
that propose the emergence of structural balance or other generalized
balance structures for undirected graphs from a game theoretical
perspective~\cite{AvdR:11,MM-MF-PJK-HRR-MAS:11,PCV-FB:19g}.
\subsubsection{Contributions}
First of all, we contribute
by proposing two new dynamic models that do not adopt the long-standing assumption of
self-appraisals and describe the evolution of signed networks without
self-loops.
We argue that the introduction of self-weights
is poorly justified and that a model without them
is a more faithful representation of Heider's theory. The first model, called the
\emph{pure-influence model}, is a modification of the classic model by
Ku{\l}akowski\xspace et al. which is obtained by eliminating self-appraisals (and
thus reducing the system's dimension). Analysis of its convergence
properties reduces to the analysis of our second model, which is called
the~\emph{projected pure-influence model} and which arises as a projection
of the first model onto the unit sphere. This second model has a
self-standing interest, since it enjoys bounded evolution of the
appraisals, while the first model shares the finite escape time property of
the classic model by~Ku{\l}akowski\xspace et al.
Our second contribution is to build a bridge between dynamic structural
balance and balance as an optimization problem. We propose an energy
function inspired by~\cite{SAM-SHS-JMK:09}, namely the \emph{dissonance
function}, which measures the degree to which Heider's axioms are
violated among the individuals of a social network. We show that this
energy function has global minima that correspond to signed graphs
satisfying structural balance in the case of real-valued appraisals
(restricted on the unit sphere). Moreover, we show that our (projected)
pure-influence model is the gradient system of the dissonance function in
the case of undirected signed graphs, and hence the critical points of the
dissonance function are the equilibria of our dynamical system. Thus, we
establish a novel connection between dynamic structural balance and the
characterization of structural balance as the minima of an energy function
for real-valued appraisals. Remarkably, our derivations show that this
property of our models is enabled by the elimination of self-appraisals.
Thus, the models contributed in this paper may be considered as both an
interpersonal influence process and an extremum seeking dynamics for the
cognitive dissonance function.
Our third and most detailed contribution is the mathematical analysis of
the projected pure-influence model in the case where the initial appraisal
matrix is symmetric. In particular, we provide a complete characterization
of the critical points of the dissonance function (i.e., the equilibrium
points of the projected pure-influence model). This characterization relies
upon a special submanifold of the Stiefel manifold and its
properties. Along with the characterization of the critical points, we
analyze their local stability properties and provide some results on
convergence towards structural balance.
Our final contribution is a Monte Carlo numerical study of the
convergence of our models to structural balance under generic initial
conditions in both the symmetric and the asymmetric case. For the symmetric
case, our result is comparable to, but stronger than, what has already been
proved for the Ku{\l}akowski\xspace et al.~model: our models converge to structural
balance under generic symmetric initial conditions. One key advantage of
our models, as compared with those by Ku{\l}akowski\xspace et al., is that
convergence to structural balance emerges under generic asymmetric initial
conditions. Based on these numerical results, we formulate relevant
conjectures.
\subsubsection{Paper organization}
Section~\ref{sec:prelim} presents preliminary concepts.
Section~\ref{sec:models} presents our models and shows they are gradient
flows. Section~\ref{sec:classification} and
Section~\ref{sec:convergence-analysis} contain an analysis of equilibria
and important convergence results, respectively.
Section~\ref{sec:simulations} contains numerical results and
conjectures. Finally, Section~\ref{sec:conclusion} contains some concluding
remarks.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Signed weighted digraphs}
Given an $n\times n$ matrix $X=(x_{ij})$ with entries taking values in
$[-\infty,\infty]$, let $G(X)$ denote the signed directed graph where the
directed edge $i\xrightarrow[]{}j$ exists if and only if $x_{ij}\ne 0$, and
$x_{ij}$ represents its signed weight. The directed graph $G(X)$ is
complete if $X$ has no zero entries, except for the main diagonal. $G(X)$
has no self-loops if and only if $X$ has zero diagonal entries. Let
$x_{i*}$ denote the $i$th row of the matrix $X$ and $x_{*i}$ the $i$th
column of the matrix $X$. Let $\operatorname{sign}(X)=(\operatorname{sign} (x_{ij}))$, where
$\map{\operatorname{sign}}{[-\infty,\infty]}{\{-1,0,+1\}}$ is as usual
\[
\operatorname{sign}(x)=
\begin{cases}
-1,\qquad\qquad&\text{if }x<0,\\
0,&\text{if }x=0,\\
+1,&\text{if }x>0.
\end{cases}
\]
Given a sequence $a_1,\ldots,a_n$, let $B=\operatorname{diag}(a_1,\ldots,a_n)$ denote the
diagonal $n\times n$ matrix $(b_{ij})$, where $b_{ii}=a_i$ and $b_{ij}=0$
for $i\ne j$. For an $n\times n$ matrix $X$, define
$\operatorname{diag}(X)=\operatorname{diag}(x_{11},\ldots,x_{nn})$. For a vector $v\in\mathbb{R}^n$, define
$\operatorname{diag}(v)=\operatorname{diag}(v_1,\ldots,v_n)$. Let $\vect{0}_n$ denote the $n\times{1}$
vector of zeros, and $\vect{0}_{n\times{n}}$ the $n\times{n}$ matrix with
zero entries.
Let $\succ$ and $\prec$ denote ``entry-wise greater than'' and ``entry-wise
less than,'' respectively.
A \textit{triad} (if it exists) is a cycle between three nodes in
$G(X)$. The \textit{sign} of a triad is defined as the sign of the product
of the weights composing the triad. For example, the triad $i\to j\to k\to i$
has sign $\operatorname{sign}(x_{ij}x_{jk}x_{ki})$.
A real-valued matrix $Z$ is \emph{irreducible} if its graph $G(Z)$ is
strongly connected (a directed path between every two nodes exists) and
\emph{reducible} otherwise. If $Z$ is reducible, a permutation matrix $P$
exists such that the matrix
\[
PZP^{\top}=
\begin{bmatrix}
Z_1 & * & \ldots & *\\
0 & Z_2& \ldots & *\\
\vdots & & \ddots & \vdots\\
0 & 0 & \ldots & Z_k
\end{bmatrix}
\]
is block upper-triangular with irreducible diagonal blocks $Z_i$ (some of which can be $1\times 1$ matrices).
If $Z=Z^{\top}$, the latter matrix is block-diagonal, $PZP^\top=\operatorname{diag}(Z_1,\dots,Z_k)$, and the
graphs $G(Z_i)$ are the \emph{connected components} of the graph $G(Z)$.
\subsection{Sets of matrices and the Frobenius inner product}
Given two matrices $A,B\in\mathbb{R}^{n\times{n}}$, their Frobenius inner product
is defined by $\Fprod{A}{B}=\operatorname{trace}(B^\top A)$; the induced norm
is $\Fnorm{A}=\sqrt{\Fprod{A}{A}}$. Some
important properties of the trace operator are:
$\operatorname{trace}(A)=\operatorname{trace}(A^\top)$, $\operatorname{trace}(AB)=\operatorname{trace}(BA)$, and, for all
$d\in\natural$, $\operatorname{trace}(A^d)=\sum_{i=1}^n\lambda_i^d$ where
$\lambda_1,\ldots,\lambda_n$ are the eigenvalues
of $A$.
Let $\mathbb{R}^{n\times{n}}_{\textup{zd}}$ be the set of $n\times{n}$ real matrices with zero diagonal
entries, and $\mathbb{R}^{n\times{n}}_{\textup{zd},\textup{symm}}$ be the set of symmetric matrices belonging to
$\mathbb{R}^{n\times{n}}_{\textup{zd}}$. Let $\mathbb{S}^{n\times{n}}$ be the unit sphere in $\mathbb{R}^{n\times n}$, that is
$A\in\mathbb{S}^{n\times{n}}$ if and only if
$A\in\mathbb{R}^{n\times{n}}$ with $\norm{A}_F=1$. Similarly, we define the sets $\mathbb{S}^{n\times{n}}_{\textup{zd}}=\mathbb{R}^{n\times{n}}_{\textup{zd}}\cap\mathbb{S}^{n\times n}$ and
$\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}=\mathbb{R}^{n\times{n}}_{\textup{zd},\textup{symm}}\cap\mathbb{S}^{n\times n}$.
Let $\R^{n\times{n}}_{\textup{diag}}$ be the set of all real diagonal matrices and
$\mathbb{R}^{n\times{n}}_{\textup{sk-symm}}$ be the set of all skew-symmetric
matrices. Then, we have the following orthogonal decomposition of
$\mathbb{R}^{n\times{n}}$ equipped with the Frobenius inner product:
\begin{equation}
\label{eq:F-decomposition}
\mathbb{R}^{n\times{n}}=\mathbb{R}^{n\times{n}}_{\textup{sk-symm}} \oplus \mathbb{R}^{n\times{n}}_{\textup{zd},\textup{symm}} \oplus \R^{n\times{n}}_{\textup{diag}}.
\end{equation}
\subsection{A review on structural balance}
Throughout the paper we deal with social networks composed of $n\geq 3$
individuals, although the definition of structural balance
(Definition~\ref{def:sbal}) is formally applicable to the case of
degenerate networks with $n=1$ or $n=2$ nodes.
\begin{definition}[Appraisal matrix and network]
We let the entry $x_{ij}$ of the matrix $X\in\mathbb{R}^{n\times{n}}$ denote the
appraisal (or qualitative evaluation) held by individual $i$ of
  individual $j$. The sign of $x_{ij}$ indicates whether the relationship is
  positive ($+1$), negative ($-1$), or of indifference ($0$). The magnitude
of $x_{ij}$ indicates the strength of the relationship. $x_{ii}$ can be
interpreted as $i$'s self-appraisal. We call $X$ the \emph{appraisal
matrix}, and $G(X)$ the \emph{appraisal network}.
\end{definition}
\begin{definition}[Heider's axioms and social balance notions]
\label{def:Heider+balance}
  \emph{Heider's axioms} are the following:
\begin{enumerate}[label={H\arabic*)}]
\item\label{H1} A friend of a friend is a friend,
\item\label{H2} An enemy of a friend is an enemy,
\item\label{H3} A friend of an enemy is an enemy,
\item\label{H4} An enemy of an enemy is a friend.
\end{enumerate}
An appraisal network $G(X)$ is \emph{structurally balanced in Heider's
sense}, if it is complete and satisfies axioms \ref{H1}-\ref{H4}.
\end{definition}
Consider a complete appraisal network $G(X)$. We call a \emph{faction} any
group of agents whose members positively appraise each other. We say two
factions are \emph{antagonistic} if every representative of one faction
negatively appraises every representative of the other faction.
It can be shown (\cite{FH:46,FH:53,DC-FH:56}) that Heider's structural
balance condition for $G(X)$ with $n\geq 3$ nodes holds if and only if
either the individuals constitute a single faction or can be partitioned
into two antagonistic factions. The possession of the latter property may
thus be considered as an alternative definition of structural balance (and
is formally applicable to graphs without triads).
\begin{definition}[Structural balance]\label{def:structural-balance}
\label{def:sbal}
  A complete appraisal network $G(X)$ is said to satisfy \emph{structural
    balance} if $G(X)$ is composed of one faction or of two antagonistic
  factions; equivalently, whenever $n\geq 3$, if all triads are
  positive, i.e., $x_{ij}x_{jk}x_{ki}>0$ for any distinct
  $i,j,k\in\until{n}$.
\end{definition}
Notice that a structurally balanced graph is always sign-symmetric: $\operatorname{sign}(x_{ij})=\operatorname{sign}(x_{ji})$ for any $i\ne j$. For simplicity we will say that a matrix $X$ \emph{corresponds to} structural balance whenever $G(X)$ satisfies structural balance.
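For illustration, the triad condition of Definition~\ref{def:sbal} can be checked mechanically. The following sketch (our own NumPy snippet, with an arbitrary two-faction assignment chosen for illustration; not part of the formal development) verifies balance for a two-faction network and detects the imbalance created by flipping a single appraisal:

```python
import itertools
import numpy as np

def is_structurally_balanced(X):
    """Check structural balance of a complete signed network:
    every triad i -> j -> k -> i must have a positive sign product."""
    n = X.shape[0]
    return all(X[i, j] * X[j, k] * X[k, i] > 0
               for i, j, k in itertools.permutations(range(n), 3))

# Two antagonistic factions {0, 1} and {2, 3}: balanced.
s = np.array([1, 1, -1, -1], dtype=float)
X = np.outer(s, s)
np.fill_diagonal(X, 0)
assert is_structurally_balanced(X)

# Flipping one mutual appraisal creates a negative triad: unbalanced.
X[0, 1] = X[1, 0] = -1
assert not is_structurally_balanced(X)
```

Note that any appraisal matrix of the form $X=ss^{\top}$ (off the diagonal), with $s$ a sign vector encoding the faction membership, is balanced by construction, since every triad product is a perfect square.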
\section{Proposed models and representation as gradient flows}
\label{sec:models}
In this section we propose our models, defining them over the set of
symmetric (appraisal) matrices; the general setting is postponed until
Section~\ref{sec:simulations}, along with some numerical results. Finally,
we prove that our models are gradient flows of a sociologically motivated
energy function.
\subsection{Pure-influence model}
\label{sec:a-new-model}
We propose our new dynamic model solely based on interpersonal appraisals.
\begin{definition}[Pure-influence model]
The \emph{pure-influence model} is a system of differential equations on
the set of zero-diagonal matrices $\mathbb{R}^{n\times{n}}_{\textup{zd}}$ defined by
\begin{equation}
\label{inf-dyn}
\dot{x}_{ij}=\sum_{\substack{k=1\\k\neq i,j}}^n x_{ik}x_{kj},
\end{equation}
for any $i,j\in\until{n}$ and $i\neq j$. Here $x_{ij}$, $i\neq j$, are
the off-diagonal entries of a zero-diagonal matrix $X\in\mathbb{R}^{n\times{n}}_{\textup{zd}}$. In
equivalent matrix form, the previous equations read:
\begin{equation}
\label{inf-dyn-m}
\dot{X}=X^2-\operatorname{diag}(X^2), \qquad X(0)\in\mathbb{R}^{n\times{n}}_{\textup{zd}}.
\end{equation}
\end{definition}
We interpret $X$ as the interpersonal appraisal matrix. While
system~\eqref{inf-dyn} does not define the evolution of self-appraisals,
the matrix reformulation~\eqref{inf-dyn-m} ensures $\operatorname{diag}(\dot{X})=0$ and,
since $X(0)\in\mathbb{R}^{n\times{n}}_{\textup{zd}}$ means $\operatorname{diag}(X(0))=\vect{0}_{n\times{n}}$, we have
$\operatorname{diag}(X(t))=\vect{0}_{n\times{n}}$ for all positive times $t$.
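As a numerical illustration (a NumPy sketch; the explicit Euler discretization, step size, and small random initial condition are our own illustrative choices), the invariance of the zero diagonal and of symmetry under~\eqref{inf-dyn-m} can be observed directly:

```python
import numpy as np

def pure_influence_step(X, dt):
    """One explicit Euler step of dX/dt = X^2 - diag(X^2)."""
    X2 = X @ X
    return X + dt * (X2 - np.diag(np.diag(X2)))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
X = 0.1 * (X + X.T) / 2    # small symmetric initial condition
np.fill_diagonal(X, 0)     # no self-appraisals

for _ in range(200):
    X = pure_influence_step(X, dt=1e-3)

# Zero diagonal and symmetry are invariant under the dynamics.
assert np.allclose(np.diag(X), 0)
assert np.allclose(X, X.T)
```

The small initial scale keeps the trajectory well away from the finite escape time that the model shares with the Ku{\l}akowski\xspace et al. dynamics.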
Our model is a modification of the classical model proposed by Ku{\l}akowski\xspace
et al.~\cite{KK-PG-PG:05}, where self-appraisals play a crucial role in the
dynamics of the interpersonal appraisals.
\begin{definition}[Ku{\l}akowski\xspace et al. model]
The \emph{Ku{\l}akowski\xspace et al. model} is a system of differential equations
on the state space $\mathbb{R}^{n\times{n}}$ defined by
\begin{subequations} \label{inf-dynK}
\begin{align}
\label{inf-dynK1}
\dot{x}_{ij}&=\sum_{k=1}^n x_{ik}x_{kj}=x_{ij}(x_{ii}+x_{jj})+\sum_{\substack{k=1\\k\neq i,j}}^n x_{ik}x_{kj},\\
\dot{x}_{ii}&= x_{ii}^2 + \sum_{\substack{k=1\\k\neq i}}^n x_{ik}x_{ki}, \label{inf-dynK2}
\end{align}
\end{subequations}
for any $i\neq j\in\until{n}$. In equivalent matrix form, the previous
equations read: $\dot{X}=X^2$.
\end{definition}
\begin{remark}[The problem with self-appraisals]
The introduction of self-appraisals in model~\eqref{inf-dynK} is
objectionable on several grounds. The first conceptual problem is that
self-appraisals are not considered in any definition of structural
balance in the social sciences. Heider's axioms in
Definition~\ref{def:Heider+balance} do not take into account
self-appraisals: social balance is a function of only interpersonal
appraisals. Moreover, once self-appraisals are introduced, one needs to
postulate why and how self-appraisals affect interpersonal appraisals,
i.e., justify the choice of the first addendum for the right hand side
of~\eqref{inf-dynK1}. Finally, one needs to postulate how they evolve,
i.e., justify the choice for the right hand side of~\eqref{inf-dynK2}.
In summary, the pure influence model~\eqref{inf-dyn} avoids these
difficulties and stays closer to the foundations of structural balance,
in which individuals are attending only to interpersonal appraisals.
Even though $\dot{X}=X^2$ may appear mathematically simpler or more
elegant than $\dot{X}=X^2-\operatorname{diag}(X^2)$, we believe the latter model is
actually more parsimonious, lower dimensional, and more faithful to
  Heider's axioms.
\end{remark}
One easily notices the following important property of the pure-influence
model~\eqref{inf-dyn-m}: the right-hand side is an analytic function of $X$
so that the equation enjoys (local) existence and uniqueness of the
solutions. A second property is that, if $X(0)=X(0)^{\top}$, then
$X(t)=X(t)^{\top}$ for all subsequent times. This implies that the
pure-influence model is well defined over the set of symmetric (zero
diagonal) matrices $\mathbb{R}^{n\times{n}}_{\textup{zd},\textup{symm}}$.
\subsection{Dissonance function}
We introduce and study the properties of a useful \emph{dissonance
function} that summarizes the total amount of cognitive
dissonance~\cite{LF:1957} among the members of a social network due to the
lack of satisfaction of Heider's axioms. Recall that, according to
Definition~\ref{def:structural-balance}, a triad $i\to j\to k\to i$
satisfies the axioms if and only if $x_{ij}x_{jk}x_{ki}>0$.
\begin{definition}[Dissonance function]
The \emph{dissonance function} $\map{\mathcal{D}}{\mathbb{R}^{n\times{n}}_{\textup{zd}}}{\mathbb{R}}$ is
\begin{equation}
\label{beta_hat1}
\mathcal{D}(X) = - \!\!\!\!\!\! \sum_{\substack{i,j,k=1\\ i\neq{j},j\neq{k},k\neq{i} }}^n \!\!\!\!\!\!
x_{ij}x_{jk}x_{ki}= - \operatorname{trace}(X^3)=-\sum_{i=1}^n\lambda_i^3,
\end{equation}
where $\{\lambda_i\}_{i=1}^n$ is the set of eigenvalues of $X$.
\end{definition}
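The three expressions in~\eqref{beta_hat1} can be cross-checked numerically; the following sketch (our own, using NumPy on a random zero-diagonal matrix) evaluates the triple sum, the trace form, and the eigenvalue form:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 6))
np.fill_diagonal(X, 0)
n = X.shape[0]

# Triple sum over distinct (i, j, k), as in the definition.
d_sum = -sum(X[i, j] * X[j, k] * X[k, i]
             for i in range(n) for j in range(n) for k in range(n)
             if i != j and j != k and k != i)

# Trace form; with a zero diagonal, every term of trace(X^3) with a
# repeated index vanishes, so the two expressions agree.
d_trace = -np.trace(X @ X @ X)

# Eigenvalue form (eigenvalues may be complex for asymmetric X,
# but the sum of their cubes equals the real number trace(X^3)).
d_eigs = -np.sum(np.linalg.eigvals(X) ** 3).real

assert np.isclose(d_sum, d_trace)
assert np.isclose(d_sum, d_eigs)
```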
We plot $\mathcal{D}$ in a low-dimensional setting in
Figure~\ref{fig:dissonancefunction}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{sphere_withAllNEW_2.png}
\includegraphics[width=0.48\textwidth]{sphere_sproj_2.png}
\caption{For $n=3$, an arbitrary symmetric unit-norm zero-diagonal
matrix $X\in\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}$ is described by $(x_{12},x_{23},x_{31})$ with
these coordinates living in the sphere
$x_{12}^2+x_{23}^2+x_{31}^2=1$. In the upper figure, we plot this
sphere with a heatmap, with dark blue being the lowest value and
light yellow the largest value, according to the evaluation of the
dissonance function $\mathcal{D}(X)$. The function has four global
minima corresponding to the four possible configurations of $G(X)$
satisfying structural balance, and we can qualitatively appreciate
the convergence of solution trajectories to these minima in the
superimposed vector field on the sphere. The lower figure is a
stereographic projection of the upper figure.}
\label{fig:dissonancefunction}
\end{figure}
Energy landscapes in social balance theory are studied
in~\cite{SAM-SHS-JMK:09,GF-GI-CA:11}.
Our proposed dissonance function is the extension to $\mathbb{R}^{n\times{n}}_{\textup{zd}}$ of the energy
function proposed by~\cite{SAM-SHS-JMK:09} for the setting of binary-valued
symmetric appraisal matrices.
For binary-valued appraisals, the global minima of $\mathcal{D}$ correspond to
networks that satisfy structural balance, since all triads are positive
(see Definition~\ref{def:sbal}). Thus, $\mathcal{D}$ naturally measures the
extent to which Heider's axioms are violated in a (complete) social network.
\begin{lemma}[Properties of the dissonance function]
\label{lem-DH}
Consider the dissonance function $\mathcal{D}$ and pick $X\in\mathbb{R}^{n\times{n}}_{\textup{zd}}$. Then
\begin{enumerate}[label=(\roman*)]
\item $\mathcal{D}$ is analytic and attains its maximum and minimum values on any
compact matrix subset of $\mathbb{R}^{n\times{n}}_{\textup{zd}}$,
\item if $G(X)$ satisfies structural balance, then $\mathcal{D}(X)<0$,
\item $\mathcal{D}(X)=\mathcal{D}(X^\top)$,
\item $\mathcal{D}(X) = -\Fprod{X^2}{X^\top}$
\setcounter{saveenum}{\value{enumi}}
\end{enumerate}
Additionally, if $\norm{X}_F=1$, that is, $X\in\mathbb{S}^{n\times{n}}_{\textup{zd}}$, then
\begin{enumerate}[label=(\roman*)]\setcounter{enumi}{\value{saveenum}}
\item $-1 \leq \mathcal{D}(X) \leq 1$. \label{lastin}
\end{enumerate}
\end{lemma}
\begin{proof}
Here we show only property~\ref{lastin}, since the other properties are
easily verified from the definition of $\mathcal{D}$. The key step is to show that
  $\Fnorm{X}\leq1$ implies $\Fnorm{X^2}\leq1$. The Cauchy--Schwarz
  inequality leads to:
\begin{align*}
    \Fnorm{X^2}^2 &= \sum_{i,j=1}^n (X^2)^2_{ij} = \sum_{i,j=1}^n (x_{i*} x_{*j})^2 \\
    &\leq \sum_{i,j=1}^n \|x_{i*}\|_2^2 \|x_{*j}\|_2^2= \Big(\sum_{i=1}^n \|x_{i*}\|_2^2\Big) \Big( \sum_{j=1}^n\|x_{*j}\|_2^2 \Big)\\
&= \Big(\sum\nolimits_{i,k=1}^n x_{ik}^2\Big)^2 = \Fnorm{X}^2 =1.
\end{align*}
  Since $\mathcal{D}$ is a Frobenius inner product of matrices with at most unit
  norm, it is bounded by $1$ in absolute value.
\end{proof}
\subsection{Transcription on the unit sphere and the projected pure-influence model}
We start by noting a simple fact. Given a trajectory
$\map{X}{\ensuremath{\mathbb{R}}_{\ge 0}}{\mathbb{R}^{n\times{n}}_{\textup{zd}}}$ with $X(t)\neq \vect{0}_{n\times{n}}$ for
all $t$, there exist unique trajectories
$\map{\eta}{\ensuremath{\mathbb{R}}_{\ge 0}}{\ensuremath{\mathbb{R}}_{\ge 0}}$ and
$\map{Z}{\ensuremath{\mathbb{R}}_{\ge 0}}{\mathbb{S}^{n\times{n}}_{\textup{zd}}}$ such that $X(t) = \eta(t) Z(t)$, where
$\eta(t)=\Fnorm{X(t)}$ and $Z(t)=X(t)/\Fnorm{X(t)}$.
\begin{theorem}[Transcription of the pure-influence model]
\label{eg-trans}
The pure-influence model~\eqref{inf-dyn} can be expressed as the
following system of differential equations:
\begin{subequations} \label{eq1o}
\begin{align}
\dot{Z}&=\eta\mathcal{P}_{Z^\perp}(Z^2-\operatorname{diag}(Z^2)) \nonumber \\
&=\eta(Z^2-\operatorname{diag}(Z^2)+\mathcal{D}(Z)Z), \label{eq1o-1} \\
\dot{\eta}&=-\mathcal{D}(Z)\eta^2, \label{eq1o-2}
\end{align}
\end{subequations}
where $\map{\eta}{\ensuremath{\mathbb{R}}_{\ge 0}}{\ensuremath{\mathbb{R}}_{\ge 0}}$ and
$\map{Z}{\ensuremath{\mathbb{R}}_{\ge 0}}{\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}}$. Here $\mathcal{P}_{Z^\perp}$ is
  the orthogonal projection onto $\operatorname{span}\{Z\}^\perp$ in the
  vector space of square matrices with the Frobenius inner product.
\end{theorem}
\begin{proof}
We start by computing
$\dot{X}=\dot{\eta}Z+\eta\dot{Z}=X^2-\operatorname{diag}(X^2)$. Since
$X^2-\operatorname{diag}(X^2)=\eta^2\left(Z^2-\operatorname{diag}(Z^2)\right)$, we know
\begin{equation}
\label{one2}
\dot{\eta}Z+\eta\dot{Z}=\eta^2\left(Z^2-\operatorname{diag}(Z^2)\right).
\end{equation}
Recall that $\Fnorm{Z(t)}=1$ implies $\Fprod{Z(t)}{\dot{Z}(t)}=0$, that is,
$Z(t)\perp\dot{Z}(t)$. Computing the Frobenius inner product with $Z(t)$
on both sides of~\eqref{one2}, we obtain
\begin{equation*}
\begin{split}
\dot{\eta}&=\eta^2\Fprod{Z(t)}{Z^2(t)-\operatorname{diag}(Z^2(t))}
=\eta^2\Fprod{Z(t)}{Z^2(t)}\\
&= -\mathcal{D}(Z(t))\eta^2. \end{split}
\end{equation*}
where we have used the
decomposition~\eqref{eq:F-decomposition}. Substituting this equation into
equation~\eqref{one2}, one arrives at
$\dot{Z}=\eta\left(Z^2-\operatorname{diag}(Z^2)+\mathcal{D}(Z)Z\right)$.
Given $Y\in\mathbb{R}^{n\times{n}}$, let $\mathcal{P}_Z(Y)=\langle Y,Z\rangle Z$,
i.e., $\mathcal{P}_Z$ is the orthogonal projection operator onto the linear
space spanned by $Z$; and let
$\mathcal{P}_{Z^\perp}(Y)=Y-\mathcal{P}_Z(Y)=Y-\langle Y,Z\rangle Z$ be the
orthogonal projection onto the space perpendicular to the linear space
spanned by $Z$. Then, we observe that $\mathcal{P}_{Z^\perp}(Z)=0$ and
$\mathcal{P}_{Z^\perp}(\dot{Z})=\dot{Z}$. Using these results, we apply
$\mathcal{P}_{Z^\perp}$ to both sides of~\eqref{one2} and obtain
$\dot{Z}=\eta\mathcal{P}_{Z^\perp}(Z^2-\operatorname{diag}(Z^2))$. This concludes the
proof of equations~\eqref{eq1o}.
\end{proof}
In what follows, we are primarily interested in the
dynamics~\eqref{eq1o-1}, describing the behavior of the bounded component
$Z(t)$. For our needs, it is convenient to change the time variable
(Lemma~\ref{thaux2}) so as to eliminate $\eta$ and replace~\eqref{eq1o} by
the following dynamical system on the unit sphere.
\begin{definition}[Projected pure-influence model]
The \emph{projected pure-influence model} is a system of differential equations on the manifold $\mathbb{S}^{n\times{n}}_{\textup{zd}}$ defined by
\begin{equation}
\label{def:projected-pure-influence}
\dot Z=Z^2-\operatorname{diag}(Z^2)+\mathcal{D}(Z)Z.
\end{equation}
\end{definition}
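As a sanity check of~\eqref{def:projected-pure-influence} (a NumPy sketch; the explicit Euler scheme with renormalization after each step is our own discretization device, not part of the model), one can verify that the unit Frobenius norm and the zero diagonal are preserved and that the dissonance does not increase along the flow:

```python
import numpy as np

def dissonance(Z):
    return -np.trace(Z @ Z @ Z)

def projected_step(Z, dt):
    """Euler step of dZ/dt = Z^2 - diag(Z^2) + D(Z) Z, followed by a
    renormalization that controls the discretization drift off the sphere."""
    Z2 = Z @ Z
    Z = Z + dt * (Z2 - np.diag(np.diag(Z2)) + dissonance(Z) * Z)
    return Z / np.linalg.norm(Z)   # Frobenius norm

rng = np.random.default_rng(2)
Z = rng.standard_normal((4, 4))
Z = (Z + Z.T) / 2
np.fill_diagonal(Z, 0)
Z /= np.linalg.norm(Z)

d0 = dissonance(Z)
for _ in range(2000):
    Z = projected_step(Z, dt=1e-2)

# Unit Frobenius norm and zero diagonal are maintained, and the
# dissonance decreases along the (gradient) flow.
assert np.isclose(np.linalg.norm(Z), 1.0)
assert np.allclose(np.diag(Z), 0)
assert dissonance(Z) <= d0 + 1e-8
```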
Similarly, projecting onto the unit sphere leads to a new model based on
the Ku{\l}akowski\xspace et al. model.
\begin{definition}[Projected Ku{\l}akowski\xspace et al. model]
The \emph{projected Ku{\l}akowski\xspace et al. model} is a system of differential
  equations on the manifold of symmetric unit-Frobenius-norm matrices
  defined by
\begin{equation}
\label{def:projected-pure-influenceK}
\dot Z(t)=Z^2+\mathcal{D}(Z)Z.
\end{equation}
\end{definition}
\subsection{Pure-influence is the gradient flow of the dissonance function}
In this section we let $\operatorname{grad}\mathcal{D}$ denote the gradient vector field on $\mathbb{R}^{n\times{n}}_{\textup{zd}}$
defined by the dissonance function $\mathcal{D}$. We also let $\mathcal{D}\big|_{\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}}$
denote the restriction of $\mathcal{D}$ onto the set $\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}$. With this notation,
we now present the first of our main results.
\begin{theorem}[The pure-influence models over symmetric matrices are gradient flows]
\label{th-mini-gr}
Consider the pure-influence model~\eqref{inf-dyn} with $X(0)\in\mathbb{R}^{n\times{n}}_{\textup{zd},\textup{symm}}$ and
the projected pure-influence model~\eqref{def:projected-pure-influence}
with $Z(0)\in\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}$. Then
\begin{enumerate}
\item \label{lasym22}$t\mapsto X(t)$ remains in the set $\mathbb{R}^{n\times{n}}_{\textup{zd},\textup{symm}}$ and
\begin{equation}
\label{eq:pureinf=gradient}
\dot{X}=-\tfrac{1}{3}\operatorname{grad} \mathcal{D}(X),
\end{equation}
\item\label{lasym0} $t\mapsto Z(t)$ remains in the set $\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}$ and
\begin{equation}
\label{x1}
\dot{Z} = -\tfrac{1}{3} \mathcal{P}_{Z^\perp}\!\big( \operatorname{grad} \mathcal{D}(Z) \big)
= -\tfrac{1}{3} \operatorname{grad}\mathcal{D}\Big|_{\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}}\!\!\!\!\!\!\!\!\!(Z).
\end{equation}
\end{enumerate}
\end{theorem}
In other words, the projected pure-influence
model~\eqref{def:projected-pure-influence} is, modulo a constant factor,
the gradient flow of the dissonance function $\mathcal{D}$ restricted to the manifold of
zero-diagonal unit-norm symmetric matrices $\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}$.
\begin{proof}[Proof of Theorem~\ref{th-mini-gr}]
The forward invariance of the set of symmetric matrices in both statements
is immediate. To prove equation~\eqref{x1}, we adopt the slight abuse of
notation $$\operatorname{grad}\mathcal{D}(Z) = \operatorname{grad}\mathcal{D}\Big|_{\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}}\!\!\!\!\!\!\!\!\!(Z).$$ With this
notation, note that $Z\mapsto\operatorname{grad}\mathcal{D}(Z)$ is the unique vector field on
$\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}$ satisfying, along any differentiable trajectory $t\mapsto{Z(t)}$,
\begin{equation}\label{eq:fb:blast}
\frac{d}{dt}\mathcal{D}(Z(t)) = \Fprod{\operatorname{grad}\mathcal{D}(Z(t))}{\dot{Z}(t)}.
\end{equation}
Note that, here, both $\operatorname{grad}\mathcal{D}(Z(t))$ and $\dot{Z}(t)$ take value on the
tangent space to the manifold $\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}$.
Now, using the various properties of the trace inner product (e.g.,
$\dot{Z}(t)\perp Z(t)$), we compute
\begin{equation*}
\begin{split}
\dot{\mathcal{D}}(Z(t)) &=
-\big(\operatorname{trace}(\dot{Z}(t)Z(t)Z(t))+\operatorname{trace}(Z(t)\dot{Z}(t)Z(t))\\
&\phantom{ = }\quad+\operatorname{trace}(Z(t)Z(t)\dot{Z}(t))\big) \\
&=-3\operatorname{trace}(\dot{Z}(t)Z^2(t)) = -3\Fprod{\dot{Z}(t)}{Z^2(t)} \\
&= -3\Fprod{\dot{Z}(t)}{Z^2(t)-\operatorname{diag} Z^2(t)+\mathcal{D}(Z(t))Z(t)}.
\end{split}
\end{equation*}
Recalling that $Z^2-\operatorname{diag}{Z^2}+\mathcal{D}(Z)Z \overset{\eqref{eq1o-1}}{=}
\mathcal{P}_{Z^{\perp}}(Z^2-\operatorname{diag} Z^2)$ belongs to the tangent space to the manifold
$\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}$ at the point $Z(t)$, one arrives at the equality
\begin{equation*}
\operatorname{grad}\mathcal{D}(Z) = -3 \Big( Z^2-\operatorname{diag} Z^2+\mathcal{D}(Z)Z \Big).
\end{equation*}
This concludes the proof of statement~\ref{lasym0}. Finally,
equation~\eqref{eq:pureinf=gradient} can be proved in a similar way.
\end{proof}
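The gradient identity~\eqref{eq:pureinf=gradient} can be validated numerically by comparing $\Fprod{\operatorname{grad}\mathcal{D}(X)}{H}$ against a finite-difference directional derivative (a NumPy sketch with a random symmetric zero-diagonal $X$ and direction $H$; the step size and tolerance are our own choices):

```python
import numpy as np

def dissonance(X):
    return -np.trace(X @ X @ X)

rng = np.random.default_rng(3)

def random_zd_symm(n):
    """Random element of the subspace of symmetric zero-diagonal matrices."""
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0)
    return A

X = random_zd_symm(5)
H = random_zd_symm(5)          # direction in the same subspace

# Claimed gradient of D on the subspace: grad D(X) = -3 (X^2 - diag(X^2)).
X2 = X @ X
grad = -3 * (X2 - np.diag(np.diag(X2)))

# Central finite-difference directional derivative of D at X along H.
eps = 1e-6
fd = (dissonance(X + eps * H) - dissonance(X - eps * H)) / (2 * eps)

# It matches the Frobenius inner product <grad D(X), H>.
assert np.isclose(fd, np.sum(grad * H), rtol=1e-5)
```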
\section{Classification of symmetric equilibria}
\label{sec:classification}
In this section we give the complete classification of the symmetric
equilibria in the projected pure-influence
model~\eqref{def:projected-pure-influence}; the classification of general
asymmetric equilibria remains an open problem. Thanks to
Theorem~\ref{th-mini-gr}, all symmetric equilibria of the projected
pure-influence model are critical points of the dissonance function
$\mathcal{D}$. It is useful to write the equilibrium equation:
\begin{equation}
\label{eq.equil}
Z^2+\mathcal{D}(Z)Z-\operatorname{diag}(Z^2)=0, \quad Z\in\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}.
\end{equation}
Note that the equilibria $Z^*$ with $\mathcal{D}(Z^*)=0$ correspond to equilibria of
the original system~\eqref{inf-dyn-m} $X(t)\equiv X^*=\eta(0)Z^*$, whereas
the others with $\mathcal{D}(Z^*)\neq 0$ lead to
\begin{equation*}
X(t)=\eta(t)Z^*, \quad\eta(t)=\frac{\eta(0)}{1+t\eta(0)\mathcal{D}(Z^*)}
\end{equation*}
defined for $t\in[0,\frac{1}{\eta(0)|\mathcal{D}(Z^*)|})$ if $\mathcal{D}(Z^*)<0$ (for which
the solution is unbounded) or for all $t\geq 0$ if $\mathcal{D}(Z^*)>0$.
\subsection{Normalized Stiefel matrices}
To start with, we introduce an important special manifold of non-square
matrices that we will use throughout the paper.
\begin{definition}[Normalized Stiefel matrices]\label{def-unit}
A matrix $V\in\ensuremath{\mathbb{R}}^{n\times k}$, for $k\le n$, is \emph{normalized
Stiefel} (\ensuremath{\textup{nSt}}\xspace), if
\begin{enumerate}
\item\label{eq.gu1} the columns of $V$ are pairwise orthogonal unit
vectors, i.e., $V^{\top}V=I_{k}$;
\item\label{eq.gu2} the norm of each row is the same (obviously, it must
be $\sqrt{k/n}\le 1$): $\operatorname{diag}(VV^{\top})=n^{-1}k I_n$.
\end{enumerate}
Let $\ensuremath{\textup{nSt}}\xspace(n,k)\subseteq\mathbb{R}^{n\times k}$ denote the set of normalized
Stiefel matrices.
\end{definition}
In general, the rows of an \ensuremath{\textup{nSt}}\xspace matrix \emph{need not} be orthogonal. We
recall from~\cite{IMJ-NJH:76} the notion of \emph{compact Stiefel
manifold}, denoted by $\ensuremath{\textup{St}}\xspace(k,n)=\setdef{X\in\mathbb{R}^{n\times{k}}}{X^\top X =
I_k}$.
\begin{lemma}[Characterization of \ensuremath{\textup{nSt}}\xspace matrices]\label{ch-gunit}
The set $\ensuremath{\textup{nSt}}\xspace(n,k)$, $k\leq n$, is a compact and analytic submanifold of
$\mathbb{R}^{n\times k}$ of dimension $(k-1)n+1-k(k+1)/2$, and it is also a
submanifold of the compact Stiefel manifold~(and thus,
$\ensuremath{\textup{nSt}}\xspace(n,k)\subseteq \ensuremath{\textup{St}}\xspace(k,n)$). Moreover,
\begin{enumerate}[label=(\roman*)]
\item \label{st-1}$\ensuremath{\textup{nSt}}\xspace(n,n)$ is the set of orthogonal matrices,
\item \label{st-2}for $k=1$, the matrix $V$ is \ensuremath{\textup{nSt}}\xspace if and only if
\begin{equation}\label{eq.k1-general}
V=\frac{1}{\sqrt{n}}\begin{bmatrix}s_1\\ \vdots \\ s_n
\end{bmatrix},
\end{equation}
for any numbers $s_i\in\{-1,+1\}$, $i\in\until{n}$,
\item \label{st-3}for $k=2$, the matrix $V$ is \ensuremath{\textup{nSt}}\xspace if and only if
\begin{equation}\label{eq.k2-general}
V= \sqrt{\frac{2}{n}}
\begin{bmatrix}
\cos\alpha_1 & \sin\alpha_1\\
\vdots & \vdots\\
\cos\alpha_n & \sin\alpha_n
\end{bmatrix},
\end{equation}
for any set of angles $\alpha_1,\dots,\alpha_n$ satisfying
\begin{equation}
\label{eq.ngon}
\sum_{m=1}^n e^{2\alpha_m \sqrt{-1}}=0.
\end{equation}
\end{enumerate}
\end{lemma}
We postpone the proof of Lemma~\ref{ch-gunit} to
Appendix~\ref{sec:proofs}. We remark that in the case of $n=k=2$, the
constraint~\eqref{eq.ngon} implies that $2\alpha_2=\pi+2\pi s+2\alpha_1$,
where $s\in\ensuremath{\mathbb{Z}}$, that is, $\alpha_2=\pi/2+\pi s+\alpha_1$ and
$\cos\alpha_2=(-1)^{s+1}\sin\alpha_1$,
$\sin\alpha_2=(-1)^{s}\cos\alpha_1$. Thus, the matrices in $\ensuremath{\textup{nSt}}\xspace(2,2)$ are
orthogonal $2\times 2$ matrices (representing rotations or rotations with
reflection):
\[
V=\begin{bmatrix}
\cos\alpha_1 & \sin\alpha_1\\
-\varepsilon\sin\alpha_1 & \varepsilon\cos\alpha_1
\end{bmatrix},\quad \varepsilon\in\{-1,+1\}.
\]
For a general $k$, it is difficult to give a closed-form description of all
matrices from $\ensuremath{\textup{nSt}}\xspace(n,k)$. However, there are simple examples of matrices
from $\ensuremath{\textup{nSt}}\xspace(n,k)$ in the case where $n=2k$, including every matrix of the form
\[
V=\frac{1}{\sqrt{2}}
\begin{bmatrix}
U_1\\
U_2
\end{bmatrix},
\]
where $U_i$ are orthogonal $k\times k$ matrices.
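As a concrete instance of statement~\ref{st-3} of Lemma~\ref{ch-gunit}, the equally spaced angles $\alpha_m=\pi m/n$ satisfy constraint~\eqref{eq.ngon}, since the quantities $e^{2\alpha_m\sqrt{-1}}$ are then the $n$-th roots of unity. A quick NumPy check (our own sketch, with $n=7$ chosen arbitrarily):

```python
import numpy as np

n, k = 7, 2
alpha = np.pi * np.arange(n) / n
V = np.sqrt(2 / n) * np.column_stack([np.cos(alpha), np.sin(alpha)])

assert np.allclose(np.sum(np.exp(2j * alpha)), 0)    # constraint (eq.ngon)
assert np.allclose(V.T @ V, np.eye(k))               # orthonormal columns
assert np.allclose(np.sum(V**2, axis=1), k / n)      # equal row norms
```

Note that the rows of this $V$ are not mutually orthogonal, illustrating the remark above that an \ensuremath{\textup{nSt}}\xspace matrix need not have orthogonal rows.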
\subsection{Technical results}
We here present two technical results proved in Appendix~\ref{sec:proofs}.
\begin{lemma}\label{lem.tech2}
  Suppose that $Z^2-2\alpha Z=\beta I_n$ for some symmetric $n\times n$ matrix $Z$ with $\operatorname{diag} Z=0$. Then $Z$ can be decomposed as
\begin{equation}\label{eq.special1}
Z=pVV^{\top}-qI_n=Z^{\top}
\end{equation}
for some $V\in \ensuremath{\textup{nSt}}\xspace(n,k)$ ($1\le k<n$) and constants $p,q\ge 0$ such that $pk=qn$, $2\alpha=\theta=p-2q$ and $\beta=q(p-q)$. Namely,
$p=2\sqrt{\alpha^2+\beta},\quad q=\sqrt{\alpha^2+\beta}-\alpha$.
\end{lemma}
\begin{corollary}\label{cor.diag}
Given a matrix $Z=Z^{\top}$ with $\operatorname{diag}(Z)=0$, the matrix $Z^2-2\alpha Z$ is
diagonal with $s$ different eigenvalues $\beta_1<\ldots<\beta_s$ of
multiplicities $n_1,\ldots,n_s$ respectively ($n_1+n_2+\ldots+n_s=n$) if
  and only if there exists a permutation matrix $S$ such that
\[
SZS^{-1}=\operatorname{diag}(Z_1,\ldots,Z_s),
\]
where each $Z_i$ is decomposed as~\eqref{eq.special1} with parameters $p_i,q_i,V_i$, where $V_i\in \ensuremath{\textup{nSt}}\xspace(n_i,k_i)$ for some $k_i<n_i$ and
\begin{equation}\label{eq.pq-general1}
p_i=2\sqrt{\alpha^2+\beta_i},\quad q_i=\sqrt{\alpha^2+\beta_i}-\alpha.
\end{equation}
Thus, for \emph{irreducible} $Z=Z^{\top}$ the matrix $Z^2-2\alpha Z$ is diagonal if and only if $Z$ is decomposed as~\eqref{eq.special1} with $p,q\ge 0$.
\end{corollary}
\subsection{Classification of irreducible symmetric equilibria}
\begin{theorem}[Irreducible equilibria for the projected pure-influence model]\label{cor.tech}
For the projected pure-influence model~\eqref{def:projected-pure-influence},
\begin{enumerate}
\item \label{1a0} all irreducible symmetric equilibria are of the form
\begin{equation}
\label{eq.special}
Z^*=pVV^{\top}-qI_n,
\end{equation}
with $V\in \ensuremath{\textup{nSt}}\xspace(n,k)$, $k<n$, and
\begin{equation}\label{eq.pq}
p=\sqrt{\frac{n}{k(n-k)}},\quad q=\sqrt{\frac{k}{n(n-k)}};
\end{equation}
\item \label{la1}$Z^*$ has $k$ positive eigenvalues with value $p-q$ and $n-k$ negative eigenvalues with value $-q$;
\item \label{la2}the dissonance function satisfies
\begin{equation}\label{eqH}
\mathcal{D}(Z^*)=-\frac{n-2k}{\sqrt{kn(n-k)}},
\end{equation}
and the right-hand side is monotonically increasing in $k\in\until{n-1}$
(see Figure~\ref{cos_D_eval}).
\end{enumerate}
\end{theorem}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.6\linewidth]{dissonance-function-eq.pdf}
\caption{For a network with size $n=10$, the dissonance function $\mathcal{D}$
evaluated on all irreducible symmetric equilibria with $k\in\until{9}$
positive eigenvalues, according to equation~\eqref{eqH}.}
\label{cos_D_eval}
\end{figure}
\begin{proof}
We start by proving a technical statement. Pick $V\in \ensuremath{\textup{nSt}}\xspace(n,k)$, $p,q$ real
numbers and $\theta=p-2q$. Then,
the matrix
$Z=pVV^{\top}-qI_n=Z^{\top}$ satisfies the following properties:
\begin{enumerate}[label=(\alph*)]
\item $Z^2-\theta Z=q(p-q)I_n$, and thus $\operatorname{diag}(Z^2)=\theta\operatorname{diag}(Z)+q(p-q)I_n$;\label{in1}
\item for any $p\ne 0$, the matrix $Z$ has two eigenvalues $p-q$ and $(-q)$
whose multiplicities are $k$ and $(n-k)$ respectively;\label{in2}
\item the eigenspaces corresponding to $p-q$ and $-q$ are the image of $V$ and the kernel of $V^{\top}$ respectively; \label{in3}
\item $\operatorname{diag}(Z)=\vect{0}_{n\times{n}}$ if and only if $pk=qn$;
in this situation, $\operatorname{trace}(Z^2)=q(p-q)n$ and $\mathcal{D}(Z)=-\operatorname{trace}(Z^2Z^{\top})=-\theta nq(p-q)$.\label{in4}
\end{enumerate}
\noindent To prove~\ref{in1}, recall that $V^{\top}V=I_k$ and therefore
\begin{align*}
Z^2&=p^2VV^{\top}VV^{\top}+q^2I_n-2pqVV^{\top}=p\theta VV^{\top}+q^2I_n\\
&=\theta Z+(pq-q^2)I_n.
\end{align*}
To prove~\ref{in2} and~\ref{in3}, notice that for any vector $z=Vy$ one has
$VV^{\top}z=V(V^{\top}V)y=Vy=z$, and thus $Zz=(p-q)z$. The space of such
vectors is nothing else than the image of $V$ and has dimension $k$ (recall
that the columns of $V$ are orthogonal, and hence are linearly
independent). If $V^{\top}z=0$, then $Zz=-qz$, and the dimension of
$\ker(V^{\top})$ is $(n-k)$. Since $Z=Z^{\top}$ and $p-q\ne -q$ (except for
the case where $p=q=0$ and $Z=0$, which is trivial), the two eigenspaces
are orthogonal and their sum coincides with $\mathbb{R}^n$. Hence, there are no
other eigenvalues. To prove~\ref{in4}, note first $p\operatorname{diag}
(VV^{\top})=(pk/n) I_n$, and thus $\operatorname{diag}(Z)=0$ if and only if
$pk/n=q$. Using statement~\ref{in1}, one shows that in this situation
$\operatorname{diag}(Z^2)=q(p-q)I_n$ and hence $\operatorname{trace}(Z^2)=q(p-q)n$. Thanks
to~\ref{in1}, $Z^3=\theta Z^2+q(p-q)Z\Longrightarrow
\operatorname{trace}(Z^3)=\theta\operatorname{trace}(Z^2)=\theta nq(p-q)$, which finishes the proof
of~\ref{in4}.
Now, to prove the statement~\ref{1a0} of the theorem, note first that
from~\ref{in1} and equation~\eqref{eq.equil}, it follows from
Corollary~\ref{cor.diag} that every irreducible equilibrium is decomposed
as~\eqref{eq.special} with some $p,q\geq 0$. Moreover, note that
from~\ref{in1} and~\ref{in4}, it also follows that
equation~\eqref{eq.equil} holds if and only if $pk=qn$ (which comes from
$Z$ having zero diagonal entries and so $k<n$) and $pq-q^2=1/n$ (which
comes from $\operatorname{trace}(Z^2)=1$). This implies that $q=\sqrt{\frac{k}{n(n-k)}}$
and $p=\sqrt{\frac{n}{k(n-k)}}$.
Finally, statement~\ref{la1} follows from~\ref{in2}; and~\ref{la2} is
obtained by substituting the values of $p$ and $q$ to the definition of the
dissonance function~\eqref{beta_hat1} and noting that the smooth function
$\kappa\mapsto -\frac{n-2\kappa}{\sqrt{n\kappa(n-\kappa)}}$ has positive
derivative on $(0,n)$.
\end{proof}
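The classification above can be verified numerically for every $k$ at once. The sketch below uses a Sylvester--Hadamard construction of $\ensuremath{\textup{nSt}}\xspace(n,k)$ matrices (one convenient choice among many, not notation from the text) and checks both the equilibrium equation and formula~\eqref{eqH} for $n=8$:

```python
import numpy as np

# Sylvester-Hadamard matrix of order 8: H/sqrt(n) is orthogonal with entries
# of constant magnitude 1/sqrt(n), so any k of its columns lie in nSt(n,k).
H = np.array([[1.0]])
for _ in range(3):
    H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
n = H.shape[0]
H = H / np.sqrt(n)

for k in range(1, n):
    V = H[:, :k]
    p = np.sqrt(n / (k * (n - k)))
    q = np.sqrt(k / (n * (n - k)))
    Z = p * V @ V.T - q * np.eye(n)
    D = -np.trace(Z @ Z @ Z.T)       # dissonance of the symmetric matrix Z
    # equilibrium equation of the projected flow: Z^2 - diag(Z^2) + D(Z) Z = 0
    assert np.allclose(Z @ Z - np.diag(np.diag(Z @ Z)) + D * Z, 0)
    assert np.isclose(D, -(n - 2 * k) / np.sqrt(k * n * (n - k)))  # formula (eqH)
```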
\subsection{Classification of reducible symmetric equilibria}
The next theorem generalizes Theorem~\ref{cor.tech} and characterizes all
symmetric equilibria for the projected pure-influence model; its proof
can be found in Appendix~\ref{sec:proofs}.
\begin{theorem}[All equilibria for the projected pure-influence model]
\label{th_multi}
The matrix $Z^*$ is an equilibrium~\eqref{eq.equil} of the projected pure-influence model if and only if a permutation matrix $S$ exists such that:
\begin{enumerate}[label=(\roman*)]
\item $SZ^*S^{-1}=\operatorname{diag}(Z_1^*,\ldots,Z_s^*)$, $s\ge 1$, $Z_i^*={Z_i^*}^{\top}\in\mathbb{R}^{n_i\times n_i}$;
\item $Z_i^*=p_iVV^{\top}-q_iI_{n_i}$, where $p_i,q_i\ge 0$ and $V\in \ensuremath{\textup{nSt}}\xspace(n_i,k_i)$, $k_i<n_i$;
\item \label{s3}the sign $\varepsilon=\operatorname{sign}(n_i-2k_i)\in\{-1,0,1\}$ is the same for all $i=1,\ldots,s$ such that $Z_i^*\ne \vect{0}_{n_i\times{n_i}}$ and
\item \label{s4}for each block $Z_i^*\ne \vect{0}_{n_i\times{n_i}}$ the coefficients $p_i,q_i$ have the form
\begin{equation}\label{eq.pq-general}
p_i=2\sqrt{\alpha^2+\beta_i},\quad q_i=\sqrt{\alpha^2+\beta_i}-\alpha,
\end{equation}
where
\begin{enumerate}[label=(\alph*)]
\item for $\varepsilon\ne 0$, $\alpha$ and $\beta_i$ are determined from \label{a-s4}
\begin{equation}\label{eq.alpha-beta}
\begin{split}
&\alpha=\varepsilon\Bigg(\sum_{i:Z_i\ne 0}\frac{4k_in_i(n_i-k_i)}{(n_i-2k_i)^2}\Bigg)^{-1/2}\\
&\beta_i=\alpha^2\frac{4n_ik_i-4k_i^2}{(n_i-2k_i)^2};
\end{split}
\end{equation}
\item for $\varepsilon=0$, one has $\alpha=0$, and the $\beta_i$ are chosen in
such a way that $\sum_{i:Z_i\ne 0}\beta_in_i=1$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{remark}
Let $Z^*$ be a reducible equilibrium for the projected pure-influence model
such that $G(Z^*)$ is composed of $m$ (disconnected) subgraphs that satisfy
structural balance.
According to Definition~\ref{def:sbal}, $G(Z^*)$ \emph{does not} satisfy
structural balance since this definition requires $G(Z^*)$ to be complete.
\end{remark}
\subsection{Structural balance and equilibria}
We now characterize the equilibria corresponding to structural balance and
how they minimize the dissonance function.
\begin{corollary}[Balanced equilibria of the projected pure-influence model]\label{thm:bal-eq}
For the projected pure-influence
model~\eqref{def:projected-pure-influence}, let $Z^*\in\mathbb{S}^{n\times{n}}_{\textup{zd}}$ be an
equilibrium point with a single positive eigenvalue. Then,
\begin{enumerate}
\item\label{fact:equilibrium:2} $Z^*$ has the form
\begin{equation} \label{pre_otr}
Z^*=
\left[
\begin{array}{c|c}
Z' & \vect{0}_{n_1\times{n-n_1}}\\
\hline
\vect{0}_{n-n_1\times{n_1}} & \vect{0}_{n-n_1\times{n-n_1}}
\end{array}
\right]
\end{equation}
with $n_1\leq n$ and
\begin{equation} \label{eq:structural-eq0}
Z'=\frac{1}{\sqrt{n_1(n_1-1)}} (s s^\top - I_{n_1}),
\end{equation}
for some $s\in\{-1,+1\}^{n_1}$; and thus, for any fixed $n_1$, there
are only $2^{n_1-1}$ different equilibria (with a single positive
eigenvalue),
\item\label{fact:equilibrium:3}$G(Z')$ satisfies structural balance, with
the binary vector $s$ characterizing the distribution of the
individuals in the single faction or in the two factions, and
\item \label{unss} if $G(Z^*)$ is a connected graph, then $G(Z^*)$
satisfies structural balance, $Z^*$ is a global minimizer to the
optimization problem:
\begin{equation*}
\begin{aligned}
& \underset{Z\in\mathbb{R}^{n\times{n}}}{\textup{minimize}}
& & \mathcal{D}(Z) \\
& \textup{subject to}
& &Z\in\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}
\end{aligned}
\end{equation*}
and satisfies $\mathcal{D}(Z^*)=-\frac{n-2}{\sqrt{n(n-1)}}$.
\end{enumerate}
\end{corollary}
\begin{proof}
Statement~\ref{fact:equilibrium:2} follows immediately from Theorem~\ref{cor.tech} and equation~\eqref{eq.k1-general}. Indeed, from Theorem~\ref{th_multi} we know that $Z'$
must be irreducible.
Regarding statement~\ref{fact:equilibrium:3}, observe that for any different $i$, $j$ and $k$,
\begin{equation*}
z'_{ij}z'_{jk}z'_{ki}=\frac{(s_is_j)(s_js_k)(s_ks_i)}{(n_1(n_1-1))^{3/2}}=\frac{1}{(n_1(n_1-1))^{3/2}}>0.
\end{equation*}
This inequality implies $\operatorname{sign}(z'_{ij})=\operatorname{sign}(z'_{jk}z'_{ki})$ and thus we
know that $Z'$ satisfies structural balance. It is immediate to see that
any $i$ and $j$ such that $s_i=s_j$ correspond to the same faction in the
network $G(Z')$. This completes the proof for~\ref{fact:equilibrium:3}.
Regarding statement~\ref{unss}, we notice that the smooth function
$\eta\mapsto-\frac{\eta-2}{\sqrt{\eta(\eta-1)}}$ has negative derivative
for $\eta>3/2$. Then, if an equilibrium point with a single positive
eigenvalue of the form~\eqref{pre_otr} is a candidate solution to the
optimization problem above, then it must be the case that $n_1=n$, i.e., the
graph associated with such an equilibrium point is complete. Now, let us focus
on the evaluation of $\mathcal{D}$ on the equilibria of the projected pure-influence
model. First, let us have $k_1+\dots+k_s=k$ and $n_1+\dots+n_s=n$ for any
$s\geq 2$, where $k_i$ and $n_i$ are positive integers, and assume that
$k<n/2$ and $k_i<n_i/2$ for any $i\in\until{s}$. Note that the function
$f(\xi)=\xi(1-\xi)/(1-2\xi)^2$ is convex on $(0,1/2)$. Therefore, Jensen's
inequality implies
\begin{multline*}
\frac{1}{n}\sum_{i=1}^s\frac{k_in_i(n_i-k_i)}{(n_i-2k_i)^2}=\sum_{i=1}^s\frac{n_i}{n}f\Big(\frac{k_i}{n_i}\Big)\geq f\Big(\sum_{i=1}^s\frac{k_i}{n}\Big)=f\Big(\frac{k}{n}\Big)=\frac{k(n-k)}{(n-2k)^2},
\end{multline*}
and, in turn,
\begin{equation}
\label{l_ineq_aux}
-\Bigg(\sum_{i=1}^s\frac{k_in_i(n_i-k_i)}{(n_i-2k_i)^2}\Bigg)^{-1/2}\geq -\frac{n-2k}{\sqrt{kn(n-k)}}.
\end{equation}
Now, let $Z^*$ and $Z^{**}$ be two equilibria with $k$ positive eigenvalues
being irreducible (as in Theorem~\ref{cor.tech}) and reducible with $s$
blocks (as in Theorem~\ref{th_multi}) respectively. We immediately see
that, under our previous assumptions, the left-hand side
of~\eqref{l_ineq_aux} corresponds to $\mathcal{D}(Z^{**})<0$ and the right-hand side
corresponds to $\mathcal{D}(Z^*)<0$, so that $\mathcal{D}(Z^{**})\geq\mathcal{D}(Z^*)$. Thus, we only
need to investigate the minimum value of $\mathcal{D}$ in the set of irreducible
equilibria with $k<n/2$ positive eigenvalues in order to solve the
optimization problem, but the solution is already known
by~Theorem~\ref{cor.tech}\ref{la2} to be when $k=1$. This finishes the
proof.
\end{proof}
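Statements~\ref{fact:equilibrium:2}--\ref{unss} admit a direct numerical check (an illustrative sketch; the sign vector below is an arbitrary example of ours):

```python
import numpy as np
from itertools import combinations

n = 7
s = np.array([1, -1, 1, 1, -1, 1, -1], dtype=float)       # arbitrary faction assignment
Z = (np.outer(s, s) - np.eye(n)) / np.sqrt(n * (n - 1))   # eq. (structural-eq0), n_1 = n

# structural balance: the product of the three edges of every triad is positive
assert all(Z[i, j] * Z[j, k] * Z[k, i] > 0
           for i, j, k in combinations(range(n), 3))

D = -np.trace(Z @ Z @ Z.T)
assert np.isclose(D, -(n - 2) / np.sqrt(n * (n - 1)))     # minimum value of D

# among irreducible equilibria, k = 1 gives the smallest dissonance, cf. (eqH)
Dk = [-(n - 2 * k) / np.sqrt(k * n * (n - k)) for k in range(1, n)]
assert np.argmin(Dk) == 0
```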
\begin{remark}
\label{remark_n-1}
Consider an equilibrium point $Z^*$ with one positive eigenvalue.
Then, $-Z^*$ has one negative eigenvalue and $n-1$ positive eigenvalues, and does not correspond to structural balance. Note that all such $-Z^*$ correspond to critical points of $\mathcal{D}$ which are also isolated.
\end{remark}
\subsection{Examples of equilibria with two positive eigenvalues}
Let $Z^*$ be any equilibrium of the projected pure-influence model
parameterized by $\ensuremath{\textup{nSt}}\xspace(n,2)$ matrices, so that it has two positive
eigenvalues. Let us assume first that it is irreducible. Then, another
class of equilibria is found using the
parametrization~\eqref{eq.k2-general}. It can be easily shown that
\begin{equation*}
Z^*=\sqrt{\frac{2}{n(n-2)}}(\theta_{ij})_{i,j=1}^n,\quad \theta_{ij}=
\begin{cases}
0, & i=j\\
\cos(\alpha_i-\alpha_j), & i\ne j.
\end{cases}
\end{equation*}
Here the angles $\alpha_i$ should satisfy the relation~\eqref{eq.ngon}. Interestingly, many such matrices do not correspond to structural balance. Consider, for example, the case where the unit vectors in~\eqref{eq.ngon} constitute a regular $n$-gon: $\alpha_i=\frac{\pi (i-1)}{n}$, $i=1,\ldots,n$. For any pair $i<j$ the entry $z_{ij}$ is negative if $j-i>n/2$, positive if $j-i<n/2$, and zero if $j-i=n/2$ (possible only for even $n$). If $n$ is odd, the graph is complete; otherwise, the pairs of nodes $(i,i+n/2)$ for $i=1,\ldots,n/2$ are not connected. For example, in the smallest dimension $n=3$, by setting $\alpha_1=0$, $\alpha_2=\pi/3$ and $\alpha_3=2\pi/3$, we obtain the equilibrium
\begin{equation*}
Z^*=\frac{1}{\sqrt{6}}\begin{bmatrix}
0 & +1 & -1 \\
+1 & 0 & +1 \\
-1 & +1 & 0
\end{bmatrix}
\end{equation*}
which does not correspond to structural balance. Indeed, in the case where $n=3$ or $n\ge 5$,
the graph always contains imbalanced triads. For instance, for odd $n\ge 5$ the nodes $i=1$, $j=(n-1)/2$ and $\ell=(n+3)/2$ always constitute such a triad: $z_{i\ell}<0$, whereas $z_{ij},z_{j\ell}>0$ (the case $n=3$ is covered by the example above). For an even number $n\ge 6$, one may take $i=1$, $j=n/2$, $\ell=n/2+2$.
In the case $n=4$, the equilibrium $Z^*$ corresponds to an incomplete cyclic graph such that $\mathcal{D}(Z^*)=0$:
\[
Z^*=\frac{1}{2\sqrt{2}}
\begin{bmatrix}
0 & 1 & 0 & -1\\
1 & 0 & 1 & 0\\
0 & 1 & 0 & 1\\
-1 & 0 & 1 & 0
\end{bmatrix}.
\]
In the reducible case, since $Z^*$ has two positive eigenvalues, $G(Z^*)$ contains two disconnected subgraphs that satisfy structural balance, possibly together with other isolated nodes.
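The $n$-gon family can also be generated and tested programmatically (an illustrative sketch; `ngon_equilibrium` is a hypothetical helper name of ours):

```python
import numpy as np

def ngon_equilibrium(n):
    """Equilibrium with two positive eigenvalues from regular n-gon angles."""
    a = np.pi * np.arange(n) / n            # alpha_i = pi*(i-1)/n
    Z = np.sqrt(2.0 / (n * (n - 2))) * np.cos(a[:, None] - a[None, :])
    np.fill_diagonal(Z, 0.0)
    return Z

for n in (3, 4, 5, 6):
    Z = ngon_equilibrium(n)
    D = -np.trace(Z @ Z @ Z.T)
    # equilibrium equation and unit Frobenius norm hold for every n
    assert np.allclose(Z @ Z - np.diag(np.diag(Z @ Z)) + D * Z, 0)
    assert np.isclose(np.trace(Z @ Z), 1.0)
    assert np.isclose(D, -(n - 4) / np.sqrt(2 * n * (n - 2)))  # (eqH) with k = 2
```

In particular, the dissonance vanishes exactly at $n=4$, in agreement with the incomplete cyclic graph above.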
\section{Convergence to balanced equilibria and stability analysis}
\label{sec:convergence-analysis}
We now provide convergence results for our models towards equilibria that
correspond to structural balance. We present a supporting lemma and then our main theorem.
\begin{lemma}\label{prop.aux}
Assume that the solution of~\eqref{inf-dyn} satisfies
$x_{i*}(t_0)=\vect{0}_{1\times{n}}$ at some $t_0\ge 0$, that is, in the
graph $G(X(t_0))$ node $i$ does not communicate to any other node. Then,
$x_{i*}(t)\equiv\vect{0}_{1\times{n}}$ for any $t\ge 0$. The same holds
for the solutions of~\eqref{def:projected-pure-influence}.
\end{lemma}
\begin{proof}
Since the right-hand sides of~\eqref{inf-dyn} and~\eqref{def:projected-pure-influence} are analytic, any solution is a real-analytic function of time. Assuming that $x_{ij}(t_0)=0$ for all $j$, one finds that $\dot x_{ij}(t_0)=0$. Differentiating~\eqref{inf-dyn}, it is easy to show that $\ddot x_{ij}(t_0)=0$, and so on, $x_{ij}^{(m)}(t_0)=0$ for any $m\ge 1$. In view of analyticity, one has $x_{ij}(t)\equiv 0$ for any $t$. Similarly, $z_{ij}(t_0)=0\,\forall j$ entails that $z_{ij}(t)\equiv 0$ for any solution of~\eqref{def:projected-pure-influence}.
\end{proof}
\begin{theorem}[Convergence results and dynamical properties]\label{th-mini}
Consider the pure-influence model~\eqref{inf-dyn} with an initial
condition $X(0)\in\mathbb{R}^{n\times{n}}_{\textup{zd},\textup{symm}}$ and the projected pure-influence
model~\eqref{def:projected-pure-influence} with initial condition
$Z(0)=\frac{X(0)}{\Fnorm{X(0)}}$. Then,
\begin{enumerate}
\item the solution $Z(t)$ converges to a single critical point of the dissonance function $\mathcal{D}$;\label{lasym2}
\item \label{lem-nondec}
the number of negative eigenvalues of $Z(t)$ is non-decreasing.
\setcounter{saveenum}{\value{enumi}}
\end{enumerate}
Moreover, if $X(0)$ has one positive eigenvalue, then
\begin{enumerate}\setcounter{enumi}{\value{saveenum}}
\item\label{one-thmm-mini} $\lim_{t\to+\infty}Z(t)=Z^*$, where $Z^*$ is
as in~\eqref{eq:structural-eq0}, so that $G(Z(t))$ or one of its
connected components (while the rest of nodes are isolated) reaches
structural balance in finite time;
\item\label{two-thmm-mini} $X(t)$ achieves the same sign structure as
$Z^*$ in finite time;
\item\label{three-thmm-mini2} nonzero entries of $X(t)$ diverge to
infinity in finite time.
\end{enumerate}
\end{theorem}
\begin{proof}
For convenience, throughout this proof, let us denote $W(t)=\frac{X(t)}{\Fnorm{X(t)}}$, i.e., $X(t)=\eta(t)W(t)$ with $\eta(t)$ evolving according to~\eqref{eq1o-1} and $W(t)$ evolving according to~\eqref{eq1o-2}. From the construction of the transcription of the pure-influence model in Theorem~\ref{eg-trans}, we have that $\eta(t)=\Fnorm{X(t)}$ and so $\eta(t)>0$ for all well-defined $t\geq 0$. Moreover, Lemma~\ref{thaux2} lets us conclude that $W(t)=Z(\int_{0}^t\eta(s)ds)$ for all $t\geq 0$, and thus the solution $X(t)$ is well defined.
To prove~\ref{lasym2}, recall that~\eqref{def:projected-pure-influence} is a gradient flow dynamics of the analytic function $\mathcal{D}$, and the trajectory $Z(t)$ stays on a compact manifold and, in particular, is bounded. The classical result of \L ojasiewicz~\cite{PAA-RM-BA:05} implies convergence of the trajectory to a single fixed point.
To prove~\ref{lem-nondec}, we enumerate the eigenvalues of $Z(t)$ in descending order $\lambda_1(t)\ge\lambda_2(t)\ge\ldots\ge\lambda_n(t)$ and consider the corresponding orthonormal bases of eigenvectors $v_i(t)$. Since $Z(t)v_i(t)=\lambda_i(t)v_i(t)$ and $v_i(t)^\top v_i(t)=1$, we obtain $\dot{Z}v_i+Z\dot{v}_i=\dot{\lambda}_iv_i+\lambda_i\dot{v}_i$ and $\dot v_i(t)^\top v_i(t)=0$. Therefore,
$$
\dot{\lambda}_i=v_i^\top\dot{Z}v_i+v_i^{\top}Z\dot{v}_i=v_i^\top\dot{Z}v_i+\lambda_iv_i^{\top}\dot{v}_i=v_i^\top\dot{Z}v_i,
$$
entailing the following differential equation
\begin{align}
\label{eig-dif-o}
\dot{\lambda}_i=\lambda_i^2+\mathcal{D}(Z)\lambda_i-v_i^\top\operatorname{diag}(Z^2)v_i.
\end{align}
Notice that all entries of $\operatorname{diag}(Z^2)$ are nonnegative. Now, due
to Lemma~\ref{prop.aux}, if the $i$th row of $X$ is initially the zero
vector, then it remains zero for all times, and the same holds for $Z$;
moreover, $(Z^2)_{ii}=0$ and $Z$ has a zero eigenvalue whose associated
eigenvector has zero entries in all the positions where the entries of
$\operatorname{diag}(Z^2)$ are positive. It then follows immediately
from~\eqref{eig-dif-o} that if $\lambda_{i}(0)=0$ due to $Z(0)$ having a
row equal to the zero vector $\vect{0}_{1\times n}$, then $\dot{\lambda}_i=0$.
Now, let $\mathcal{N}$ be the set of indices $i$ such that
$\operatorname{diag}(Z^2)_{ii}>0$. Thus, for any $i\in\mathcal{N}$, if $\lambda_i$
crosses the real axis at time $t^*$, i.e., $\lambda_i(t^*)=0$, then
\begin{equation}
\label{equin}
\dot{\lambda}_i(t^*)=-(v_i(t^*))^\top\operatorname{diag}{(Z^2(t^*))}v_i(t^*)<0.
\end{equation}
Therefore, if $\lambda_i(t_0)\le 0$ for some $t_0\ge 0$, then $\lambda_i(t)\le 0$
for all $t\ge t_0$. This finishes the proof for~\ref{lem-nondec}.
Notice that since $\operatorname{trace}(Z(t))=0$ and $Z(t)=Z(t)^{\top}\ne \vect{0}_{n\times{n}}$, then $Z(t)$ has at least one positive eigenvalue. Then, equation~\eqref{equin} implies that the set
\begin{equation*}
\Lambda:=\setdef{Z\in\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}}{Z\text{ has only one positive eigenvalue}}
\end{equation*}
\end{equation*}
is forward invariant and, in particular, the limit $Z^*=\lim_{t\to\infty} Z(t)$ (existing in view of statement~\ref{lasym2}) belongs to $\Lambda$. Since $Z^*$ is a critical point of $\mathcal{D}$ (or, in view of Theorem~\ref{th-mini-gr}, the equilibrium of~\eqref{def:projected-pure-influence}), it has the structure described by
Corollary~\ref{thm:bal-eq}.
By continuity of the flow $Z(t)$, there is a finite time $\tau$ such that $G(Z(t))$ has the same sign structure as $G(Z^*)$
for all $t\geq\tau$.
This finishes the proof for~\ref{one-thmm-mini}.
Now we prove the last two statements of the theorem. Knowing the convergence result from~\ref{one-thmm-mini}, Lemma~\ref{thaux2} tells us that introducing the term $\eta$ as in the transcribed system~\eqref{eq1o-1} to the projected pure-influence model has the simple effect of altering the convergence rate properties for $Z(t)$. Therefore, there always exists a finite time $\tau^*\geq 0$ such that, for any $t\geq\tau^*$, $W(t)$ satisfies the sign properties of statement~\ref{one-thmm-mini} regarding structural balance. Moreover, since $X(t)=\eta(t)W(t)$ and $\eta(t)>0$ by construction, statement~\ref{two-thmm-mini} follows immediately.
Now, let $g(t):=-\mathcal{D}(W(t))$, and notice that $g(t)$ is a strictly positive continuous function for all (well-defined) $t\geq\tau^*$. From equation~\eqref{eq1o-2}, we have the system $\dot{\eta}(t)=g(t)\eta^2(t)$, with solution $\eta(t)=\frac{\eta(\tau^*)}{1-\eta(\tau^*)\int_{\tau^*}^tg(s)ds}$ for $t\geq\tau^*$. Then, since $\int_{\tau^*}^tg(s)ds$ is strictly increasing in $t\geq\tau^*$, we have that $\eta(t)\to+\infty$ as $t\to t^*$, where $t^*>\tau^*$ is the finite time such that $\int_{\tau^*}^{t^*}g(s)ds=\frac{1}{\eta(\tau^*)}$ (note that $t^*>\tau^*$ holds from the relationship $W(t)=Z(\int_0^{t}\eta(s)ds)$). Then, we conclude that the solution $\eta(t)$ and the entries of $X(t)$ diverge in the finite time $t^*$,
which proves~\ref{three-thmm-mini2}.
\end{proof}
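A simple forward-Euler integration illustrates statement~\ref{one-thmm-mini} (a numerical sketch; the step size, horizon, and the particular one-positive-eigenvalue initial condition are illustrative choices of ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
s = rng.choice([-1.0, 1.0], size=n)

# symmetric zero-diagonal perturbation of s s^T - I_n (one positive eigenvalue)
P = rng.uniform(-0.1, 0.1, (n, n))
X0 = np.outer(s, s) - np.eye(n) + (P + P.T) / 2
np.fill_diagonal(X0, 0.0)
Z = X0 / np.linalg.norm(X0)                 # unit Frobenius norm

dt = 0.01
for _ in range(50_000):                     # Euler on dZ/dt = Z^2 - diag(Z^2) + D(Z) Z
    ZZ = Z @ Z
    D = -np.trace(ZZ @ Z.T)
    Z = Z + dt * (ZZ - np.diag(np.diag(ZZ)) + D * Z)
    Z = Z / np.linalg.norm(Z)               # keep unit norm against integration drift

Zstar = (np.outer(s, s) - np.eye(n)) / np.sqrt(n * (n - 1))
assert np.allclose(Z, Zstar, atol=1e-6)     # convergence to the balanced equilibrium
```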
\begin{corollary}
Consider the same conditions as in Theorem~\ref{th-mini}, i.e., the
projected pure-influence model with initial condition $Z(0)\in\mathbb{S}^{n\times{n}}_{\textup{zd},\textup{symm}}$
having one positive eigenvalue. If
$\mathcal{D}(Z(0))<-\frac{n-3}{\sqrt{(n-1)(n-2)}}$, then $G(Z(t))$ eventually
reaches structural balance.
\end{corollary}
Theorem~\ref{th-mini} immediately implies that the set of irreducible
equilibria with a single positive eigenvalue is (locally) asymptotically
stable. We present further results on the stability of equilibria.
\begin{lemma}[Further results on stability of the equilibria]\label{l2}
Consider a symmetric equilibrium point $Z^*$ for the projected
pure-influence model~\eqref{def:projected-pure-influence}. Without loss
of generality, assume that $Z^*$ has no row equal to the zero
vector\footnote{If $Z^*$ had a row equal to the zero vector, then, in the
lemma statement, we would replace $n$ by $n_1<n$, where $n_1$ is the
number of rows of $Z^*$ that are not equal to the zero vector.}.
If $\mathcal{D}(Z^*)\geq 0$, then $Z^*$ is an unstable equilibrium point and does
not correspond to structural balance.
\end{lemma}
\begin{proof}
Write the analytic projected influence
system~\eqref{def:projected-pure-influence} as
$\dot{Z}=f(Z):=Z^2-\operatorname{diag}(Z^2)+\mathcal{D}(Z)Z$, thereby defining
$\map{f}{\mathbb{R}^{n\times{n}}}{\mathbb{R}^{n\times{n}}}$, and compute
\begin{align*}
\frac{\partial f_{ij}(Z)}{\partial z_{ij}} &= {\mathcal{D}(Z)+\frac{\partial\mathcal{D}(Z)}{\partial z_{ij}}z_{ij},}\\
\frac{\partial \mathcal{D}(Z^*)}{\partial z_{ij}}&= -3\sum\nolimits_{\substack{k=1\\k\neq i,j}}^n z^*_{ik}z^*_{kj}.
\end{align*}
Now, the Jacobian of $f$, denoted by $\jac{f}$, is an
$(n^2-n)\times(n^2-n)$ matrix (since we do not consider
self-appraisals). Let $\jac{f}(Z^*)$ be the Jacobian evaluated at $Z^*$ and
let $\{\lambda_i\}_{i=1}^{n^2-n}$ be the set of its eigenvalues. Then, we
compute
\begin{align*}
\sum_{i=1}^{n^2-n}\lambda_{i}&=\operatorname{trace}(\jac{f}(Z^*))=\sum_{i=1}^n\sum\nolimits_{\substack{j=1\\j\neq i}}^n\frac{\partial f_{ij}(Z^*)}{\partial z_{ij}}\\
&=(n^2-n)\mathcal{D}(Z^*)+3\mathcal{D}(Z^*)=(n^2-n+3)\mathcal{D}(Z^*).
\end{align*}
Since $n^2-n+3>0$ for $n\geq 3$, we draw the following conclusions for
$\mathcal{D}(Z^*)\geq 0$: (i) $\jac{f}(Z^*)$ contains at least one positive
eigenvalue and so the equilibrium point $Z^*$ is unstable; (ii) at least
one triad in $G(Z^*)$ is unbalanced and so $Z^*$ does not correspond to
structural balance.
\end{proof}
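The trace identity $\operatorname{trace}(\jac{f}(Z))=(n^2-n+3)\mathcal{D}(Z)$ derived above only uses the symmetry and zero diagonal of the evaluation point, so it can be confirmed by central finite differences at a random such matrix (an illustrative check; step size and tolerance are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
Z = rng.uniform(-1, 1, (n, n))
Z = (Z + Z.T) / 2
np.fill_diagonal(Z, 0.0)
Z /= np.linalg.norm(Z)

def f(Z):  # right-hand side of the projected flow, with D(Z) = -trace(Z^T Z^2)
    D = -np.trace(Z @ Z @ Z.T)
    return Z @ Z - np.diag(np.diag(Z @ Z)) + D * Z

# trace of the Jacobian over the n^2 - n off-diagonal coordinates
h, tr = 1e-6, 0.0
for i in range(n):
    for j in range(n):
        if i != j:
            E = np.zeros((n, n))
            E[i, j] = 1.0
            tr += (f(Z + h * E) - f(Z - h * E))[i, j] / (2 * h)

D = -np.trace(Z @ Z @ Z.T)
assert abs(tr - (n * n - n + 3) * D) < 1e-6   # trace(Jf) = (n^2 - n + 3) D(Z)
```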
\section{Simulation results and conjectures}
\label{sec:simulations}
The generic convergence of trajectories to the minima of $\mathcal{D}$ (or,
equivalently, the convergence from almost all initial conditions) is an
open problem. However, we present strong numerical evidence that supports
this claim. We first remark that, from the proof of
Theorem~\ref{eg-trans}, the projected pure-influence
model~\eqref{def:projected-pure-influence} can be generalized to any
asymmetric matrix in $\mathbb{S}^{n\times{n}}_{\textup{zd}}$ by replacing $\mathcal{D}(Z)$ by $-\operatorname{trace}(Z^\top Z^2)$,
and this is the model we refer to throughout this section.
A \emph{generic asymmetric initial condition} $X(0)$ for the pure-influence model~\eqref{inf-dyn}
is a matrix that is generated with each entry independently sampled from a uniform distribution with support $[-100,100]$, and its diagonal entries set to zero. A \emph{generic symmetric initial condition} is similarly constructed by only sampling the upper triangular entries of the matrix.
For the projected pure-influence model,
we say $Z(0)=\frac{X(0)}{\Fnorm{X(0)}}$ is a (non-)symmetric generic initial condition depending on how $X(0)$ was generated.
We immediately see from the proof of Theorem~\ref{th-mini}
that $Z(t)$ converges to structural balance if and only if $X(t)$ does. Indeed, given that $X(t)$ diverges at some finite time $\bar{t}$, we have $Z(\infty)=\frac{X(\bar{t}^-)}{\Fnorm{X(\bar{t}^-)}}$.
For a fixed network size $n$, we use a Monte Carlo
method~\cite{RT-GC-FD:05} to estimate the probability $p$ of the event
``under a generic asymmetric initial condition $Z(0)$, $Z(t)$ converges to
structural balance in finite time". We estimate $p$ by performing $N$
independent simulations (i.e., each simulation generates a new independent
initial condition) and obtaining the proportion $\hat{p}_N$, also known as
the empirical probability, of times that the simulation indeed had $Z(t)$
converging to structural balance in finite time. For any accuracy
$1-\epsilon\in(0,1)$ and confidence level $1-\eta\in(0,1)$ we have that
$|\hat{p}_N-p|<\epsilon$ with probability greater than $1-\eta$ if the
Chernoff bound $N\geq\frac{1}{2\epsilon^2}\log\frac{2}{\eta}$ is
satisfied. For $\epsilon=\eta=0.01$, the bound is satisfied by $N=27000$.
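For concreteness, the sample-size bound can be evaluated in two lines:

```python
import math

eps = eta = 0.01
N_min = math.ceil(math.log(2 / eta) / (2 * eps ** 2))
assert N_min == 26492 and N_min <= 27000   # hence N = 27000 samples suffice
```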
We performed the $N=27000$ independent simulations with $n\in\{5,6\}$, and
found that $\hat{p}_N=1$. Our observations let us conclude that \emph{for
generic asymmetric initial condition $Z(0)$ and $n\in\{5,6\}$, with
$99\%$ confidence level, there is at least $0.99$ probability that $Z(t)$
converges to structural balance in finite time.}
Similarly, we performed the same Monte Carlo analysis for generic symmetric
initial conditions with $n\in\{3,5,6,15\}$, and found that
$\hat{p}_N=1$ for all $n$. Therefore, we conclude that \emph{for any
symmetric generic initial condition $Z(0)$ and $n\in\{3,5,6,15\}$, with
$99\%$ confidence level, there is at least $0.99$ probability that $Z(t)$
converges to structural balance in finite time.}
We report three more observations and then state a resulting conjecture.
First, remarkably, we found that all of our simulations (for any type of
random initial condition) that converged to structural balance in finite
time did so by converging to an equilibrium point having only one positive
eigenvalue inside the set of scale-symmetric matrices, which is a superset
of the set of symmetric matrices (see
Appendix~\ref{sec:AppScaSym}). Second, we did not perform experiments for
larger sizes of $n$ due to computational constraints. Third,
unfortunately, for $n=3$, we did find randomly-generated asymmetric initial
conditions whose numerically-computed solutions do not converge to
structural balance.
\begin{conjecture}[Convergence from generic initial conditions]
Consider the pure-influence model~\eqref{inf-dyn} with some initial
condition $X(0)$, and the projected pure-influence
model~\eqref{def:projected-pure-influence} with initial condition
$Z(0)=\frac{X(0)}{\Fnorm{X(0)}}$. Then,
\begin{enumerate}
\item under generic asymmetric initial conditions, $\lim_{t\to+\infty}Z(t)=Z^*$ for a sufficiently large $n$,
\item under generic symmetric initial conditions, $\lim_{t\to+\infty}Z(t)=Z^*$ for any $n$,\label{opoo}
\end{enumerate}
where $Z^*$ is scale-symmetric (and particularly symmetric for~\ref{opoo})
corresponding to structural balance. Then, $Z(t)$ reaches structural
balance in finite time. Moreover, $X(t)$ reaches structural balance in
finite time with same sign structure as $Z^*$, and also diverges in finite
time.
\end{conjecture}
Similarly, we performed the same simulation analysis for the Ku{\l}akowski\xspace et
al. model~\eqref{def:projected-pure-influenceK}, which converges to
structural balance if and only if the projected Ku{\l}akowski\xspace
model~\eqref{inf-dynK} does. To generate a generic initial condition for
this system, we generated an $n\times{n}$ matrix with each entry
independently sampled from a uniform distribution with support
$[-100,100]$, and then divided it by its Frobenius norm. We performed
$N=27000$ independent simulations with $n\in\{5,6\}$, and found that
\emph{for generic initial condition $Z(0)$ and $n=5$, only $16.94\%$
converged to structural balance, and for $n=6$, only $11.50\%$ converged
to structural balance. }
Also, for $n=3$, not all simulations converged to structural balance. We
remark that not all of the networks for which the system converged
without satisfying structural balance were complete; some of them were networks
with only self-loops, e.g., Figure~\ref{f:sim3}(a). Similarly, we performed
the same Monte Carlo analysis for symmetric initial conditions with
$n\in\{3,5,6,15\}$. Our results show that \emph{for symmetric generic
initial condition, $Z(0)$ did not always converge to structural balance
for $n=3$, but, for $n\in\{5,6,15\}$, with $99\%$ confidence level, there
is at least $0.99$ probability that $Z(t)$ converges to structural
balance in finite time.}
These Monte Carlo results are expected, since it has been formally proved
that the Ku{\l}akowski\xspace et al. model converges to structural balance only
under generic symmetric initial conditions as
$n\to\infty$~\cite{SAM-JK-RDK-SHS:11} and negative results for asymmetric
conditions are given by~\cite{VAT-PVD-PDL:13}.
See Figure~\ref{f:sim1} for a comparison of trajectories of the
pure-influence model in both generic and symmetric generic initial
conditions. Figure~\ref{f:sim2} shows a comparison between our projected
pure-influence model, which does not consider self-appraisals, and the
projected influence model, which considers self-appraisals. Note how not
considering self-appraisals drastically changes the convergence time as well
as the dynamic behavior of the interpersonal appraisals.
\begin{figure}[ht]
\centering
\subfloat[Projected pure-influence model~\eqref{def:projected-pure-influence} with generic asymmetric initial condition]{\label{f:11-a}\includegraphics[width=0.485\linewidth]{fig_1_Z1_gen.pdf}}
\hfil
\subfloat[Projected pure-influence model~\eqref{def:projected-pure-influence} with generic symmetric initial condition]{\label{f:14-a}\includegraphics[width=0.485\linewidth]{fig_1_Z1_sym.pdf}}
\caption{Convergence to structural balance for a network of size $n=10$.
We plot the evolution of all the entries of $Z(t)$.
}
\label{f:sim1}
\end{figure}
\begin{figure}[ht]
\centering
\subfloat[Projected influence model~\eqref{def:projected-pure-influenceK} with generic asymmetric initial condition]{\label{f:11-b}\includegraphics[width=0.485\linewidth]{fig_1_Z1_gen_K.pdf}}
\hfil
\subfloat[Projected pure-influence model~\eqref{def:projected-pure-influence} with generic asymmetric initial condition]{\label{f:14-b}\includegraphics[width=0.485\linewidth]{fig_1_Z1_gen_comp.pdf}}
\caption{Convergence comparison for a network of size $n=7$
(a) with and (b) without the consideration of self-appraisals.
We first generated an $n\times{n}$ random matrix $W$ with each entry independently sampled from a uniform distribution with support $[-100,100]$. Then, for (a), we normalized this matrix to have unit Frobenius norm and used it as the initial condition. For (b), we set the diagonal entries of $W$ to zero, then normalized it to have unit Frobenius norm and used it as the initial condition. In this example, (a) did not converge to structural balance, whereas (b) did.
We plot the evolution of all the entries of the appraisal matrix.
}\label{f:sim2}
\end{figure}
\begin{figure}[ht]
\centering
\subfloat[Projected influence model~\eqref{def:projected-pure-influenceK} with generic asymmetric initial condition]{\label{f:11-c}\includegraphics[width=0.485\linewidth]{fig_1_Z1_gen_K2.pdf}}
\hfil
\subfloat[Projected pure-influence model~\eqref{def:projected-pure-influence} with generic asymmetric initial condition]{\label{f:14-c}\includegraphics[width=0.485\linewidth] {fig_1_Z1_gen_comp2.pdf}}
\caption{Convergence comparison for a network of size $n=7$ (a) with and (b) without the consideration of self-appraisals.
The setting is the same as in Figure~\ref{f:sim2}, but with a different random initial condition. Here, (a) converged to a network with only negative diagonal entries (all interpersonal appraisals go to zero), whereas (b) converged to structural balance.
}
\label{f:sim3}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We propose two new dynamic structural balance models that incorporate more psychologically plausible assumptions than previous models in the literature, based on a modification of the model proposed by Ku{\l}akowski\xspace et al. We have established important convergence properties for these models and, most importantly, shown that they correspond to gradient systems for an energy function that characterizes the violations of Heider's axioms in the symmetric case. We also extended our results to a set of asymmetric matrices called scale-symmetric. Numerical results illustrate that, under generic initial conditions, our models converge to structural balance (for sufficiently large $n$) and thus have better convergence properties than the previous model by Ku{\l}akowski\xspace et al.
As future work, we propose to further study the general case of asymmetric
(and non-scale-symmetric) equilibria and the convergence properties of our
models under arbitrary initial conditions. For example, numerical
simulations of the projected pure-influence model from generic initial
conditions illustrate how this system features transient chaos before
converging towards an equilibrium. A second future direction of work is to
find models with a more sociologically justified transient behavior from
generic initial conditions. Finally, another future direction is to study
the removal of the self-appraisals in other dynamical structural balance
models, like the homophily-based Traag et al.\ model~\cite{VAT-PVD-PDL:13}.
\section*{Acknowledgment}
We are grateful to Prof.\ John Gilbert, Prof.\ Ambuj Singh, and Dr.\ Saber
Jafarpour for insightful discussions.
\bibliographystyle{plainurl+isbn}
1909.11247
\section{Introduction}
This paper concerns a relation between the double affine Hecke algebras of Cherednik and certain algebras associated to the punctured and unpunctured torus that are defined using skein theory. As partial motivation, we first briefly discuss previous results in the $\mathfrak{sl}_2$ case of the Kauffman bracket, and then go on to discuss the conjectures and results of the present paper. We note that double affine Hecke algebras and skein algebras are related to several aspects of mathematical physics, including refined Chern-Simons theory \cite{AS15}; however, the exposition of the present paper will be purely mathematical.
\subsection{The Kauffman bracket}
The \emph{Kauffman bracket skein algebra} $K_s(F)$ of a surface $F$ is spanned by embedded links in the thickening $F \times [0,1]$, modulo the Kauffman bracket skein relations. These are local relations depending on a parameter $s \in \mathbb C^\times$, and which are similar to equation \eqref{eq:skeinrel}. The product is given by stacking links in the $[0,1]$ direction, and this algebra can be viewed as a quantization of the ring of functions on the $\mathrm{SL}_2$ character variety of $F$ \cite{PS00, BFK99}. For the torus and punctured torus these algebras have been described explicitly by Frohman, Gelca, and by Bullock, Przytycki, respectively.
The \emph{double affine Hecke algebra} was defined by Cherednik (see, e.g.\ \cite{Che05} and references therein) using explicit generators and relations, and it depends on two parameters, $q,t \in \mathbb{C}^\times$. In rank 1, its \emph{spherical subalgebra}
$SH_{q,t}$
of the DAHA was described explicitly by Koornwinder and later by Terwilliger.
Combining these explicit descriptions leads to the following theorem.
\begin{theorem}[{\cite{FG00, Ter13, Koo08}}]\label{thm:ktorus}
There is an isomorphism
\[
K_s(T^2) \cong SH_{s,s}
\]
between the Kauffman bracket skein algebra of the torus and the $t=q=s$ specialization\footnote{Technically, Frohman and Gelca showed skein algebra is isomorphic to the $t_{DAHA}=1$, $q_{DAHA}=s_{skein}$ specialization, but the presentations of Koornwinder and Terwilliger show that this spherical subalgebra is isomorphic to the spherical subalgebra in the $t_{DAHA} = q_{DAHA} = s_{skein}$ specialization, which is a nontrivial statement.} of the rank 1 spherical DAHA.
\end{theorem}
Combining the same algebraic theorems with the description of the skein algebra of the punctured torus instead, we obtain the following.
\begin{theorem}[{\cite{BP00, Ter13, Koo08}}]\label{thm:kptorus}
There is a surjective map
\[
K_s(T^2 - D^2) \twoheadrightarrow SH_{q=s,t}
\]
from the skein algebra of the punctured torus to the spherical rank 1 DAHA.
\end{theorem}
We note that the source algebra still only depends on one parameter -- the second parameter $t$ in the target appears in the relations describing the kernel of the map. One rough way of thinking of these results is that the spherical DAHA can be obtained from the skein algebra of the punctured torus using some kind of ``decoration'' at the puncture.
Let us also comment briefly on the importance of two parameters. The Macdonald polynomials are symmetric polynomials depending on the parameters $q$ and $t$ which have been studied intensively, and this has led (at the very least) to very interesting combinatorics, geometry, and algebra. In the $t=q$ specialization, the Macdonald polynomials degenerate to Schur polynomials, which are much better understood. It is therefore desirable to be able to ``see'' both parameters from topology.
\subsection{The elliptic Hall algebra and Homflypt skeins}
This paper deals with the ``infinite rank'' versions of the algebras in the previous section. The Kauffman bracket skein algebras are replaced with the \emph{Homflypt skein algebra} $\sk(T^2)$ of closed links in the thickened torus modulo the Homflypt skein relations. These relations are recalled in equations \eqref{eq:skeinrelintroc} and \eqref{eq:skeinrelintrof}, and they depend on parameters $s,v \in \mathbb{C}^\times$. The spherical DAHA is replaced by the \emph{elliptic Hall algebra} $\mathcal{E}_{\sigma, \bar \sigma}$ defined by Burban and Schiffmann \cite{BS12}. In earlier work we proved the analogue of Theorem \ref{thm:ktorus}:
\begin{theorem*}[{\cite{MS17}}]
There is an isomorphism
\[
\sk(T^2) \cong \mathcal{E}_{s,s}
\]
between the Homflypt skein algebra of the torus and the $\sigma=\bar \sigma = s$ specialization\footnote{To be precise, the presentation of $\sk(T^2)$ does not depend on the parameter $v$, so technically the right hand side of the isomorphism should be $\mathcal{E}_{s,s}\otimes_k \mathbb{C}[v^{\pm 1}]$. Also, see Remark \ref{rmk:params} for a comparison of this specialization to the $q=1$ specialization.} of the elliptic Hall algebra.
\end{theorem*}
We make the following conjecture which is the analogue of Theorem \ref{thm:kptorus}. Note that as in the Kauffman bracket case, the source algebra only depends on one parameter $s$ -- we expect the second parameter in the elliptic Hall algebra to arise in the kernel of the map from $\sk(T^2 - D^2)$. Also, we point out that $\sk(T^2 - D^2)$ is ``much bigger'' than the elliptic Hall algebra: as a vector space it is isomorphic to a polynomial algebra with generators ``conjugacy classes in the free group of rank 2,'' while the elliptic Hall algebra is isomorphic as a vector space to a polynomial algebra with generators indexed by $\mathbb{Z}^2$.
\begin{conjecture}\label{conj:punctoeha}
There is a surjective algebra map $\sk(T^2- D^2) \twoheadrightarrow \mathcal{E}_{\sigma, \bar \sigma}$. This map takes a simple closed curve of homology class ${\mathbf{x}} \in \mathbb{Z}^2$ to the generator $u_{\mathbf{x}} \in \mathcal{E}_{\sigma, \bar \sigma}$ used by Schiffmann and Vasserot.
\end{conjecture}
The currently available proofs of all three of the previous theorems involve giving explicit presentations of the algebras in the statements, and then using these presentations to construct an algebra map by hand. Giving a presentation of $\sk(T^2- D^2)$ seems difficult, so instead of doing this, we give some evidence for this conjecture using other techniques, which we describe in the next subsection. These techniques are closely related to techniques used or mentioned by others -- one reason we make precise statements of our own versions is that they give evidence for the conjecture above.
\subsection{DAHAs for $\mathfrak{gl}_n$}
The elliptic Hall algebra $\mathcal{E}_{\sigma, \bar \sigma}$ is closely related to the double affine Hecke algebras $\ddot{\mathrm{H}}_n$ of type $\mathfrak{gl}_n$, as detailed in the work of Schiffmann and Vasserot \cite{SV11}. We were intrigued by the nature of the presentation of the algebras $\ddot{\mathrm{H}}_n$, which involved Homfly type relations and braids in the torus $T^2$. This led us to speculate on the possibility of constructing some form of skein theoretic model which would incorporate both the algebra $\ddot{\mathrm{H}}_n$, in terms of braids, and our original algebra of closed curves in the thickened torus.
As a start we considered the possibility of a direct skein-based model in terms of $n$-braids in $T^2$ for the double affine Hecke algebra $\ddot{\mathrm{H}}_n$ with parameters $t$ and $q$.
The naive approach of considering $n$-braids in $T^2$ modulo the Homfly relations gives a model that works for one of the parameters, $t$, but only covers the case $q=1$ for the other parameter. A search of the literature came up with a paper by Burella et al., \cite{BWPV14}, suggesting that a model based on framed braids could handle the more general case of $q\ne 1$, where adding a twist to the framing of a braid was reflected in multiplication by $q$. Their model depends on the product of certain braids with explicit framing resulting in a single twist on the framing of one string. We tried without success to follow the diagrammatic views of this product, which appears to us to have the trivial framing on all strings, and not the desired twist. We worked out a uniform way of specifying a framing on the strings of a torus braid, noted in Theorem \ref{thm:dahatoskein} below, and we came to the conclusion that the use of framing alone would not provide a means of incorporating the second parameter $q$ into a geometric model for $\ddot{\mathrm{H}}_n$.
We were still hopeful of making a skein-based geometric model, and we came up instead with one that includes an extra string. Instead of working with $n$-braids in the torus we use $(n+1)$-braids in which one distinguished string, called the \emph{base string}, is fixed throughout. Equivalently our geometric elements are $n$-braids in the once-punctured torus, regarding the fixed base string as determining the puncture. In our model we use linear combinations of these braids. The regular $n$-string braids are allowed to interact as braids by the Homfly relations and the parameter $q=c^{-2}$ is introduced when a regular string is allowed to cross through the base string.
These relations can be summarised in diagrammatic form as \[\Xor -\Yor=(s-s^{-1})\Ior \]
\begin{center}
\labellist\small
\pinlabel{$*$} at 146 740
\endlabellist\Xor$\quad=\quad c^2\ $
\labellist\small
\pinlabel{$*$} at 146 740
\endlabellist\Yor
\end{center}
In Section \ref{braidskein} of this paper we set up our skein model starting from ${\bf Z}[s^{\pm1},c^{\pm1}]$-linear combinations of $n$-braids in the punctured torus, up to equivalence. We give a presentation for this algebra as a quotient of the group algebra of the braid group of $n$-braids in the punctured torus, using an explicit presentation by Bellingeri, \cite[Theorem 1.1]{B04}, for this braid group with generators $\sigma_1,\ldots,\sigma_{n-1}, a, b$. Our emphasis here is on the use of geometric diagrams to represent the elements of the algebra. Such an approach is used elsewhere with oriented framed (banded) curves in a variety of manifolds as the basic ingredients subject to the $3$-term linear relations above. A new addition in our current setting is the use of the base string, and the relation introducing the second parameter $c^2$ when a string is moved across it.
We give diagrammatic illustrations of some useful braids and their interrelations, and show how to interpret Bellingeri's presentation in terms of our braids.
The skein relations can then be included by adding the relations
\[\sigma_i-\sigma_{i}^{-1}=s-s^{-1}\] or \[(\sigma_i-s)(\sigma_i+s^{-1})=0\] in quadratic form, and \[P=c^2,\] where $P$ is the braid taking string $n$ once round the puncture and fixing the other strings. There is a simple formula for $P$ in terms of the generators of the punctured braid group, which we provide.
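The quadratic and linear forms of the relation are equivalent: both say that $\sigma_i$ has eigenvalues $s$ and $-s^{-1}$, and dividing the quadratic relation by $\sigma_i$ recovers the linear one. This is easy to verify numerically; in the sketch below the matrix $T$ is an arbitrary illustrative choice with those eigenvalues, not an element of the skein algebra itself.

```python
import numpy as np

s = 1.7  # a generic nonzero value of the parameter

# Any matrix with eigenvalues s and -1/s models a braid generator sigma_i;
# build one by conjugating the diagonal model by a generic invertible matrix.
P = np.array([[1.0, 2.0], [3.0, 5.0]])
T = P @ np.diag([s, -1.0 / s]) @ np.linalg.inv(P)
I = np.eye(2)

# Quadratic form of the relation: (T - s)(T + s^{-1}) = 0
assert np.allclose((T - s * I) @ (T + I / s), 0)

# Linear (skein) form: T - T^{-1} = (s - s^{-1}) I
assert np.allclose(T - np.linalg.inv(T), (s - 1.0 / s) * I)
```

Since both identities cut out the same quotient of the group algebra, either form may be imposed as the skein relation.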
The outcome is a presentation for the algebra of braids in the punctured torus modulo the skein relations, which is our skein-based model, $\bsk_n(T^2,*)$, see Definition \ref{def:bskein}.
We establish this presentation in Theorem \ref{braidpresentation}, and show how it corresponds exactly to the presentation in \cite{SV11} for the double affine Hecke algebra $\ddot{\mathrm{H}}_n$:
\begin{theorem*}[see Thm.\ \ref{braidpresentation}]
The braid skein algebra $\bsk_n(T^2,*)$ is isomorphic to the $\mathfrak{gl}_n$ double affine Hecke algebra $\ddot H_{n;q,t}$.
\end{theorem*}
\begin{remark}
While this paper was in preparation, D. Jordan and M. Vazirani proved a very similar statement, that the DAHA is a quotient of the group algebra of the braid group of the punctured torus \cite[Prop. 4.1]{JV17}. One difference between their statement and ours is that they impose relations algebraically, while we impose ours topologically, giving a visually appealing interpretation of elements of the algebra and the underlying relations. More precisely, their relations are a subset of ours; it is therefore a-priori possible that we impose strictly more relations than they do, and the content of this theorem is that their relations imply ours. Let us also mention here that Cherednik states in \cite{Che05} that the $\mathfrak{gl}_n$ DAHA is a deformation of the Hecke quotient of the braid group of the \emph{closed} torus. One of our desires was to remove the word ``deformation'' from this statement so that the deformation parameter $q$ has a direct topological meaning.
\end{remark}
In Section \ref{fullskein} we make use of framed $n$-tangles in the full framed Homfly skein of the punctured torus, $\sk_n(T^2,*)$. In this setting an $n$-tangle consists of $n$ framed oriented arcs in the thickened torus, along with a number of framed oriented closed curves. The arcs are no longer restricted to lying as braids in $T^2\times I$. We work with linear combinations of framed tangles and impose the local relation
\begin{equation}\label{eq:skeinrelintroc}
\Xorband\ -\ \Yorband \qquad =\qquad{(s-s^{-1})}\quad\ \Iorband
\end{equation}
between framed tangles, as well as the change of framing relation
\begin{equation}\label{eq:skeinrelintrof}
\pic{rcurlorband.eps} {.60} \qquad=\qquad {v^{-1}}\quad \pic{idorband} {.60}
\end{equation}
using a second parameter $v$. In keeping with the first section we include a base string defining the puncture, and allow a framed string to cross through it at the expense of the scalar $c^2$, with local relation
\begin{center}
\labellist\small
\pinlabel{$*$} at 146 740
\endlabellist\Xor$\quad=\quad c^2\ $
\labellist\small
\pinlabel{$*$} at 146 740
\endlabellist\Yor .
\end{center}
There is a homomorphism from $\bsk_n(T^2,*)$ to $\sk_n(T^2,*)$, since the braids in $\bsk_n(T^2,*)$ can be given a consistent framing using a nonvanishing vector field on the torus so that the relations in $\bsk_n(T^2,*)$ continue to hold in the wider tangle skein.
It is not clear however whether this homomorphism is injective. One point at issue is that, as well as the extra elements introduced, the additional relations between them might have the effect of collapsing the algebra considerably. Nonetheless, we make the following:
\begin{conjecture}\label{conj:mainiso}
The map $\bsk_n(T^2,*) \to \sk_n(T^2,*)$ from the braid skein algebra to the tangle skein algebra is an isomorphism.
\end{conjecture}
\begin{remark}
This may seem surprising, since tangles can contain closed curves, which means that a-priori, the tangle algebra is ``much bigger'' and the map in question should not be surjective. Indeed, for other surfaces the analogous map is not surjective. However, on the torus, an embedded closed curve ``on one side of the puncture'' is isotopic to the same curve ``on the other side of the puncture," and we use this fact in Theorem \ref{thm:dahatoskein} to show that the map in the conjecture is surjective.
\end{remark}
There is an algebra map from the (classical) Homflypt skein module $\sk(T^2- D^2)$ of closed links in the thickened punctured torus to our skein algebra $\sk_n(T^2, *)$ of tangles with a base string, given by ``filling in the puncture with the identity braid and the base string.'' If we assume Conjecture \ref{conj:mainiso}, this gives us an algebra map
$\sk(T^2 - D^2) \to \ddot H_n$ for any $n$. We can compose this map with multiplication by the symmetrizer $\mathbf{e}$ in the finite Hecke algebra to obtain a map $\sk(T^2- D^2) \to \mathbf{e} \ddot H_n \mathbf{e}$ to the so-called \emph{spherical subalgebra}.
Schiffmann and Vasserot showed that the elliptic Hall algebra is the $n \to \infty$ limit of the spherical subalgebras (see Theorem \ref{thm:SVlimit} for a precise statement). Let $\sk^+(T^2 - D^2)$ be the subalgebra generated by ``curves lifted from the closed torus which only cross the $y$-axis positively'' (see Definition \ref{def:posskein} for a precise statement). We then show the following, which we view as evidence for Conjecture \ref{conj:punctoeha}.
\begin{theorem*}[see Thm. \ref{thm:psktoeha}]
Assuming Conjecture \ref{conj:mainiso} holds, there is a surjective algebra map $\sk^+(T^2- D^2) \twoheadrightarrow \mathcal{E}^+_{\sigma, \bar \sigma}$. This map sends the simple closed curve of homology class ${\mathbf{x}} \in \mathbb{Z}^2$ to the generator $u_{\mathbf{x}}$ of the elliptic Hall algebra.
\end{theorem*}
One corollary of this theorem (again assuming Conjecture \ref{conj:mainiso}) is that the generator $u_{\mathbf{x}}$ of the elliptic Hall algebra has a simple interpretation as a sum $W_{\mathbf{x}}$ of certain closed curves on the torus, with homology class ${\mathbf{x}} \in \mathbb{Z}^2 = H_1(T^2 - D^2)$. In fact, these elements are lifts of the exact same elements in $\sk(T^2)$ that were used in \cite{MS17}. The subtle point here is that not all the relations between $W_{\mathbf{x}}$ that were proved in \cite{MS17} hold in the punctured torus, since the proofs of some of these relations used global isotopies on $T^2$ that don't lift to the punctured torus. Roughly, the problem is that some curves ``get caught on the puncture.'' When pushed into our skein algebra of tangles with a base string, these curves can once again be pushed through the puncture, but at the cost of some ``lower order terms'' involving braids, and these lower order terms contribute to the generating series relations in the elliptic Hall algebra. See also Remark \ref{rmk:ptorusrel}.
This suggests one purely algebraic question of possible interest. The Schiffmann-Vasserot elements $u_{\mathbf{x}}$ are in the spherical DAHA $e_n \ddot{\mathrm{H}}_{q,t} e_n$, where $e_n$ is the symmetrizer in the finite Hecke algebra. However, the images of our elements $W_{\mathbf{x}}$ most naturally lie in the centralizer $Z_{\ddot H_n}(H_n)$ of the finite Hecke algebra $H_n$ inside the double affine Hecke algebra. This suggests there may be an interesting limit of these centralizers which would include the elliptic Hall algebra as a subalgebra.
Let us briefly comment on related or future work. In \cite{JV17}, Jordan and Vazirani used factorization homology to construct representations of the braid-skein algebra $\bsk_n(T^2,*)$, and more skein-theoretic techniques to construct representations are being used in work in progress of Vazirani and Walker. We hope that some combination of these approaches could be used to prove Conjecture \ref{conj:mainiso}, but we don't discuss this in the present paper.
We also note that the so-called $A_{q,t}$ algebra introduced by Carlsson and Mellit in \cite{CM18} has a relation that looks like a 3-term version of the skein relation involving the base string. Discussions with Jordan and Mellit indicate that more precise versions of this statement are available, but this will be left to future work.
A summary of the contents of the paper is as follows. In Section 2 we recall algebraic background involving DAHAs and the elliptic Hall algebra. In Section 3 we define the braid skein algebra and show it is isomorphic to the DAHA, and in Section 4 we discuss the tangle skein algebra. In Section 5 we compare the tangle skein algebra and the (classical) skein algebra of closed links in the punctured torus to the elliptic Hall algebra.
\noindent \textbf{Acknowledgements:} This work was initiated during the authors' participation in the Research in Pairs program at Oberwolfach in the spring of 2015, and we gratefully acknowledge their support for our stay there and their excellent working conditions. More work was done at conferences at the Isaac Newton Institute and at BIRS in Banff, and we gratefully acknowledge their support. Parts of the travel of the second author were supported by a Simons Travel Grant. We thank E.\ Gorsky, A.\ Negut, A.\ Oblomkov, O.\ Schiffmann, E.\ Vasserot, M.\ Vazirani, and K.\ Walker for their interest and discussions of this and/or their work over the years. We especially thank D.\ Jordan and A.\ Mellit for many discussions closely related to this paper. We would also like to thank the referees for thoughtful comments and suggestions that helped us improve the exposition and clarity of the paper. We are grateful to M.\ Scharlemann for discussions in connection with our revision of Section \ref{sec:isorels}.
The work of the second author has been partially funded by the ERC grant 637618 and a Simons Foundation Collaboration Grant.
\section{Algebraic background}\label{sec:algebra}
In this section we recall the algebraic definitions and results that we need in the rest of the paper. In particular, we define the elliptic Hall algebra and double affine Hecke algebras (DAHAs), and we recall results of Schiffmann and Vasserot relating the two. In later sections we use their results to relate the skein algebra of the punctured torus to the elliptic Hall algebra.
\subsection{The Elliptic Hall algebra}
Let us recall the definition of the elliptic Hall algebra $\mathcal{E} = \mathcal{E}_{\sigma, \bar \sigma}$ of Burban and Schiffmann \cite{BS12}, using the conventions of \cite{SV11}. It is an algebra over the ring $\mathbb{Q}(\sigma, \bar \sigma)$, and it is generated by elements $u_{\mathbf{x}}$ for ${\mathbf{x}} \in \mathbb{Z}^2$, subject to the following relations:
\begin{enumerate}
\item If ${\mathbf{x}}$ and ${\mathbf{x}}'$ belong to the same line in $\mathbb{Z}^2$, then $[u_{\mathbf{x}},u_{{\mathbf{x}}'}] = 0$.
\item Assume that ${\mathbf{x}}$ is primitive and that the triangle with vertices $0$, ${\mathbf{x}}$, and ${\mathbf{x}}+{\mathbf{y}}$ has no interior lattice points. Then
\begin{equation}\label{eq:hallrel}
[u_{\mathbf{y}}, u_{\mathbf{x}}] = \epsilon_{{\mathbf{x}},{\mathbf{y}}} \frac {\theta_{{\mathbf{x}}+{\mathbf{y}}}}{\alpha_1}
\end{equation}
where the elements $\theta_{\mathbf{z}}$ with ${\mathbf{z}} \in \mathbb{Z}^2$ are obtained by the generating series identity
\[
\sum_{i} \theta_{i {\mathbf{x}}_0} z^i = \exp\left( \sum_{i \geq 1} \alpha_i u_{i {\mathbf{x}}_0} z^i\right)
\]
for ${\mathbf{x}}_0 \in \mathbb{Z}^2$ primitive.
\end{enumerate}
In the above relations we used the constants $\epsilon_{{\mathbf{x}},{\mathbf{y}}} = \mathrm{sign}(\det({\mathbf{x}}\,{\mathbf{y}}))$ and
\[
\alpha_i = (1-\sigma^i)(1-\bar \sigma^i)(1-(\sigma \bar \sigma)^{-i})/i
\]
We also define the following subsets of $\mathbf{Z} := \mathbb{Z}^2$:
\begin{equation}
\mathbf{Z}^> := \{(x,y) \mid x > 0\},\quad \mathbf{Z}^+ := \mathbf{Z}^> \sqcup \{(0,y) \mid y \geq 0\}
\end{equation}
We also use this notation to define subalgebras of $\mathcal{E}$, for example,
\[
\mathcal{E}^+ := \langle u_{\mathbf{x}} \mid {\mathbf{x}} \in \mathbf{Z}^+ \rangle
\]
We will use similar notation for other algebras generated by elements indexed by $\mathbf{Z}$.
Finally, let $d({\mathbf{x}})$ be the greatest common divisor of the entries of ${\mathbf{x}} \in \mathbb{Z}^2$.
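The ingredients of relation~\eqref{eq:hallrel} are all computable: $\epsilon_{{\mathbf{x}},{\mathbf{y}}}$ is a determinant sign, $d({\mathbf{x}})$ is a gcd, and the interior-lattice-point condition can be tested with Pick's theorem, $I = A - B/2 + 1$. The helper functions below are our own illustrative sketch, not notation from \cite{BS12}.

```python
from math import gcd

def d(x):
    """d(x): the greatest common divisor of the entries of x."""
    return gcd(abs(x[0]), abs(x[1]))

def epsilon(x, y):
    """sign(det(x y)), the constant eps_{x,y} appearing in the relation."""
    det = x[0] * y[1] - x[1] * y[0]
    return (det > 0) - (det < 0)

def no_interior_points(x, y):
    """True iff the triangle with vertices 0, x, x+y has no interior
    lattice points.  By Pick's theorem I = A - B/2 + 1, where twice the
    area is |det(x y)| and B = d(x) + d(y) + d(x+y) counts boundary points."""
    twice_area = abs(x[0] * y[1] - x[1] * y[0])
    boundary = d(x) + d(y) + d((x[0] + y[0], x[1] + y[1]))
    return twice_area - boundary + 2 == 0

# The triangle 0, (1,0), (1,1) is empty; the triangle 0, (1,0), (2,3)
# contains the interior lattice point (1,1).
assert no_interior_points((1, 0), (0, 1))
assert not no_interior_points((1, 0), (1, 3))
assert epsilon((1, 0), (0, 1)) == 1 and d((4, 6)) == 2
```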
\subsection{Limits of DAHAs}
We now recall the definition of the double affine Hecke algebra $\ddot{\mathrm{H}}_n$, following the conventions given in \cite{SV11}.
This is an algebra over ${\bf Z}[t^{\pm1/2}, q^{\pm1}]$ with generators \[\{T_i\},1\le i\le n-1, \quad \{X_j\},\{Y_j\}, 1\le j\le n\] and relations
\begin{eqnarray}
(T_i-t^{1/2})(T_i+t^{-1/2})&=&0\\
T_i T_{i+1} T_i&=& T_{i+1}T_i T_{i+1}\\[0mm]
[T_i,T_j]&=&0, |i-j|>1\\[0mm]
[T_i,X_j]=[T_i,Y_j]&=&0, j\ne i,i+1\\[0mm]
[X_i,X_j]=[Y_i,Y_j]&=&0\\
X_{i+1}&=&T_iX_iT_i, \\
Y_{i+1}&=&T_i^{-1}Y_i T_i^{-1}\\
X_1^{-1}Y_2&=&Y_2X_1^{-1}T_1^{-2}\\
Y_1 X_1\cdots X_n&=&q X_1\cdots X_n Y_1
\end{eqnarray}
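For $n=1$ there are no generators $T_i$ and the only surviving relation is $Y_1X_1 = qX_1Y_1$, the quantum torus relation. This can be realized concretely by letting $X$ act on polynomials as multiplication by $x$ and $Y$ as the $q$-shift $f(x)\mapsto f(qx)$; the sketch below (an illustrative representation of our own, not a construction from \cite{SV11}) checks the relation.

```python
q = 3  # a generic value of the parameter

def X(p):
    """Multiplication by x on a polynomial stored as a coefficient list."""
    return [0] + p

def Y(p):
    """The q-shift f(x) -> f(qx): scales the degree-k coefficient by q^k."""
    return [c * q**k for k, c in enumerate(p)]

f = [5, -2, 7]  # the polynomial 5 - 2x + 7x^2

# The relation Y X = q X Y, checked on f:
assert Y(X(f)) == [q * c for c in X(Y(f))]
```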
Let $e_n$ be the symmetrizing idempotent in the finite Hecke algebra (which is generated by the $T_i$'s), which is characterized by $T_je_n = e_n T_j = t^{1/2}e_n$ for all $j$. The spherical DAHA is the subalgebra $\mathrm {S} \daha^n_{q,t} := e_n \ddot{\mathrm{H}}^n_{q,t} e_n$ of $\ddot{\mathrm{H}}^n_{q,t}$, and it is also $\mathbb{Z}^2$-graded. There is an $\mathrm{SL}_2(\mathbb{Z})$ action on the subalgebra $\mathrm {S} \daha_{q,t}^n$ (see the paragraph above Lemma 2.1 in \cite{SV11}).
Following \cite[Sec.\ 2.2]{SV11} (except for the notational change $P\to Q$), for $k > 0$ we define elements
\begin{equation*}
Q^n_{0,k} = e_n \sum_i Y_i^k e_n
\end{equation*}
Elements $Q^n_{\mathbf{x}}$ for ${\mathbf{x}} \in
\mathbb{Z}^2$ are defined using the $\mathrm{SL}_2(\mathbb{Z})$ action. We define $\mathrm {S} \daha_{q,t}^{n,>}$ to be the subalgebra of $\mathrm {S} \daha_{q,t}^n$ generated by $Q_{a,b}^n$ with $a > 0$.
Let us identify parameters $\sigma = q^{-1}$ and $\bar \sigma = t^{-1}$. Then Schiffmann and Vasserot proved the following theorem relating the elliptic Hall algebra and spherical DAHAs.
\begin{theorem}[{\cite[Thm.\ 3.1]{SV11}}]\label{thm:sv}
The assignment
\[
u_{\mathbf{x}} \mapsto \frac 1 {q^{d({\mathbf{x}})} - 1}Q_{\mathbf{x}}^n
\]
extends uniquely to a $\mathbb{Z}^2$-graded $\mathrm{SL}_2(\mathbb{Z})$-equivariant surjective algebra homomorphism
\[
\phi^n: \mathcal{E}_{q,t} \twoheadrightarrow \mathrm {S} \daha_{q,t}^{n}
\]
\end{theorem}
Given the previous theorem, a natural question is whether there is some type of limit one can take as $n \to \infty$. It turns out that there is, but to describe it Schiffmann and Vasserot first had to prove the following theorem.
\begin{theorem}[{\cite[Prop.\ 4.1]{SV13}}]
The assignment $Q^n_{\mathbf{x}} \mapsto Q^{n-1}_{\mathbf{x}}$ for each ${\mathbf{x}} \in \mathbf{Z}^+$ extends to a unique surjective algebra map $\Phi_n: \mathrm {S} \daha_{q,t}^{n,+} \to \mathrm {S} \daha_{q,t}^{n-1,+}$.
\end{theorem}
This theorem allows us to construct a projective limit $\varprojlim \mathrm {S} \daha^{n,+}_{q,t}$. Also, the generators $Q^n_{\mathbf{x}}$ provide elements in this projective limit, and we let $\mathrm {S} \daha^{\infty,+}_{q,t}$ be the subalgebra generated by these elements for ${\mathbf{x}} \in \mathbf{Z}^+$. Theorem \ref{thm:sv} shows that there is a map from the elliptic Hall algebra to $\mathrm {S} \daha_{q,t}^{\infty,+}$.
\begin{theorem}[{\cite[Thm.\ 4.6]{SV13}}]\label{thm:SVlimit}
The induced map $\phi^\infty: \mathcal{E}_{q,t}^+ \to \mathrm {S} \daha_{q,t}^{\infty,+}$ is an isomorphism.
\end{theorem}
Summarizing this work of Schiffmann and Vasserot, we obtain the following corollary which we use below.
\begin{corollary}\label{cor:limitmap}
Suppose $A$ is an algebra generated by elements $a_{\mathbf{x}}$ for ${\mathbf{x}} \in S \subset \mathbf{Z}^+$. Suppose there are algebra maps $A \to \mathrm {S} \daha_{q,t}^{n,+}$ for each $n$ such that $a_{\mathbf{x}} \mapsto Q_{\mathbf{x}}$. Then there is an algebra map $A \to \mathcal{E}_{q,t}^+$ sending $a_{\mathbf{x}} \mapsto (q^{d({\mathbf{x}})}-1) u_{\mathbf{x}}$.
\end{corollary}
\begin{remark}\label{rmk:params}
The elliptic Hall algebra could instead be defined using three constants $q_1, q_2, q_3$ which replace the symbols $\sigma, \bar \sigma, (\sigma \bar \sigma)^{-1}$ (so that the definition above could be recovered by specializing $q_1=\sigma$, $q_2 = \bar \sigma$, and $q_3 = q_1^{-1}q_2^{-1}$). These three parameters $q_i$ then appear symmetrically in the presentation, and can therefore be permuted to give an isomorphic algebra. To the best of our knowledge, this symmetry seems mysterious from every point of view that appears in this paper. For example, in terms of elliptic curves over the finite field of order $q$, there is an identity $\sigma \bar \sigma = \sqrt{q}$, and this identity breaks the symmetry mentioned above. In terms of double affine Hecke algebras, the parameters $q$ and $t$ appear in the relations in very different ways, and there is again no symmetry in the parameters. Finally, at the topological level this symmetry does not seem to be visible, since the parameter $s$ appears as a relation between two strands in a tangle, while the parameter $c$ appears as a relation involving the base string and a strand in a tangle.
Let us also note that the skein relation we impose with the base string makes the base string ``invisible'' when $q=1$. This means that the $q=1$ specialization of the conjectures in the present paper should be compatible with the results of the previous paper \cite{MS17}; however, the previous paper states that the skein algebra of the torus is isomorphic to the $q=t$ specialization of the elliptic Hall algebra.
This apparent conflict is explained by the symmetry of parameters in $\mathcal{E}_{\sigma, \bar \sigma}$ described in the previous paragraph. In particular, this symmetry implies that the three specializations $\mathcal{E}_{q,1}$, $\mathcal{E}_{1,q}$, and $\mathcal{E}_{q,q}$ are all isomorphic. (The reason the $q=t$ specialization was chosen in \cite{MS17} is that this specialization is the one which is compatible with the actions of both algebras on the space of symmetric functions, since it is the specialization taking Macdonald functions to Schur functions.)
\end{remark}
\section{Skeins with a base string}\label{braidskein}
We will describe some skeins which use the framed Homfly relations on oriented framed curves and braids in the thickened torus $T^2\times I$, together with a single fixed base string $\{*\}\times I\subset T^2\times I$.
In this section we define the braid skein algebra $\bsk_n(T^2,*)$ in terms of ${\bf Z}[s^{\pm1},c^{\pm1}]$-linear combinations of braids, and their composites, and prove that
it is isomorphic to the double affine Hecke algebra $\ddot{\mathrm{H}}_n$ (following the conventions in \cite{SV11}). (See Theorem \ref{thm:bskiso}.) The multiplication in the braid skein algebra comes from stacking in the $[0,1]$ direction -- more precisely, it comes from the glue-then-rescale map $[0,1] \sqcup [1,2] \to [0,2] \to [0,1]$.
\subsection{Isotopies of braids in the punctured torus}
We start by considering the group of $n$-braids in the punctured torus $T^2- \{*\}$. We will work with the thickened torus $T^2\times I$ with a single fixed base string $\{*\}\times I$ determined by the puncture $*\in T^2$.
Braids are made up of $n$ strings oriented monotonically from $T^2\times \{0\}$ to $T^2\times \{1\}$ which do not intersect each other or the base string.
Braids are considered equivalent when the strings are isotopic avoiding the base string.
Composition of braids is defined by placing one on top of the other, using the convention that $AB$ means braid $A$ lying below braid $B$.
As in \cite{MS17} we shall regard $T^2$ as given by identifying opposite pairs of sides in the unit square $[0,1]\times [0,1]$.
Take the base point $*$ to be the centre $(1/2,1/2)$ of the square. Fix $n>0$ points in order on the lower part of the diagonal of the square between $(0,0)$ and $*$ as the end points for $n$-string braids in $T^2\times I - \{*\}\times I$.
We can draw the thickened torus in plan view as a square with opposite pairs of edges identified. We show the braid points and the base string position in the figure below, including a line along the diagonal through them as a visual help to keep track of them.
\begin{center}
\Torusbase
\end{center}
We can indicate some simple braids where only one or two of the points move by drawing the path of the moving points on the plan view, rather as in the diagrams in \cite{AM98}. In this view the braid product is given by concatenation of the paths.
For example, write $ x_i$ for the braid in which point $i$ moves uniformly around the $(1,0)$ curve in the torus, and $ y_i$ where point $i$ moves around the $(0,1)$ curve, with all other points remaining fixed. These are shown in plan view as \begin{center} $ x_i = $\xii,$\quad y_i =$\etai
\end{center}
and in a side view in figures \ref{xielevation} and \ref{yielevation}, where the colouring of the edges being identified is consistent with that used in the plan view. Similarly the braid $\sigma_i$ appears in plan view as in figure \ref{plansigmai},
\begin{figure}[ht]
\begin{center} $\sigma_i = $ \labellist\small
\pinlabel {$1$} at 100 337
\pinlabel {$i$} at 122 363
\pinlabel {$n$} at 155 398
\endlabellist \sigmaiplan
\end{center}
\caption{Plan view of $\sigma_i$}\label{plansigmai}
\end{figure}
concentrating only on the region around the braid points.
A side elevation for $ x_i$ viewed in the $(0,1)$ direction is shown in figure \ref{xielevation},
\begin{figure}[ht]
\begin{center} $ x_i\quad = \quad $\labellist\small
\pinlabel {$i$} at 210 335
\pinlabel{$*$} at 273 335
\endlabellist \xiibraid
\end{center}
\caption{Side view of $x_i$}\label{xielevation}
\end{figure}
and $ y_i$ viewed in the $(-1,0)$ direction is seen in elevation in figure \ref{yielevation}.
\begin{figure}[ht]
\begin{center} $ y_i\quad = \quad $\labellist\small
\pinlabel {$i$} at 210 335
\pinlabel{$*$} at 273 335
\endlabellist \etaibraid
\end{center}
\caption{Side view of $y_i$}\label{yielevation}
\end{figure}
Using either of these two elevation views the braids $\sigma_i$ appear in their usual form above, and it is immediate from these views that \begin{eqnarray}\sigma_i^{-1} x_i\sigma_i^{-1}&=& x_{i+1}\\
\sigma_i y_i\sigma_i&=& y_{i+1}.\end{eqnarray}
In a plan view we assume that paths are projections of braid strings which rise monotonically from their initial braid point to their final braid point. The product of two braids corresponds to the concatenation of their paths.
We can see that the braids $\{ x_i\}$ commute among themselves, since their paths in the plan view are disjoint. The same applies to the braids $\{ y_i\}$, and equally the braids $\sigma_i$ commute with $ x_j$ and $ y_j$ when $j\ne i,i+1$.
The relations \[x_1x_2=x_2 x_1,\quad y_1 y_2=y_2 y_1\] become
\begin{eqnarray}
x_1\sigma_1^{-1}x_1\sigma_1^{-1} &=& \sigma_1^{-1}x_1\sigma_1^{-1} x_1,\\
y_1\sigma_1 y_1\sigma_1&=& \sigma_1 y_1\sigma_1 y_1
\end{eqnarray}
in terms of the generators $x_1, y_1$.
We can use the plan view for a braid where two paths cross, taking the usual convention of knot crossings to show which strand lies at a higher level. For example in the plan view of $ x_1 y_2$ the path of point $1$ lies below that of point $2$, giving views of $ x_1 y_2$ and $ y_2 x_1$ in figure \ref{pathproduct}.
\begin{figure}[ht]
\begin{center} $ x_1 y_2 =$\xioneetatwo $\quad y_2 x_1 =$ \etatwoxione
\end{center}
\caption{Plan views of $ x_1 y_2$ and $ y_2 x_1$}\label{pathproduct}
\end{figure}
When two braids are composed there may be a path on the plan view that passes through a braid point at an intermediate stage. The plan can be altered to avoid such intermediate visits by diverting the path slightly away from the braid point.
For example the braid $ x_1 y_1$ starts with a plan view in figure \ref{xy}. When the intermediate visit to braid point $1$ is diverted a plan view for $ x_1 y_1$ is shown in figure \ref{smoothxy} along with a view for $ y_1 x_1$.
\begin{figure}[ht]
\begin{center} \xioneetaone
\end{center}
\caption{Plan view of $ x_1 y_1$}\label{xy}
\end{figure}
\begin{figure}[ht]
\begin{center} $ x_1 y_1\ =\ $ \xioneetaonedivert\ , \quad $ y_1 x_1\ =\ $ \etaonexionedivert
\end{center}
\caption{Smoothed plan view of $ x_1 y_1$ and $ y_1 x_1$}\label{smoothxy}
\end{figure}
With further smoothing we get the plan view of the commutator $ x_1 y_1 x_1^{-1} y_1^{-1}$ as shown in figure \ref{xycommutator}.
\begin{figure}[ht]
\begin{center} $ x_1 y_1 x_1^{-1} y_1^{-1}\ =\ $ \xioneetaonecommutator
\end{center}
\caption{Plan view of $ x_1 y_1 x_1^{-1} y_1^{-1}$}\label{xycommutator}
\end{figure}
From its elevation view in figure \ref{xycommutatorelev}
\begin{figure}[ht]
\begin{center}
\labellist\small
\pinlabel{$*$} at 288 395
\endlabellist
\commutatoronebraid
\end{center}
\caption{Elevation of $ x_1 y_1 x_1^{-1} y_1^{-1}$}\label{xycommutatorelev}
\end{figure}
we can write it as
\[ x_1 y_1 x_1^{-1} y_1^{-1}=\sigma_1\sigma_2\cdots\sigma_{n-1}P\sigma_{n-1}\cdots\sigma_2\sigma_1. \] Here \[P=\Pbraid\] is the braid taking string $n$ once round the base string, with plan view \begin{center} \Pplan\end{center}
This gives an expression
\[P=\sigma_{n-1}^{-1}\cdots\sigma_1^{-1}x_1 y_1 x_1^{-1} y_1^{-1}\sigma_1^{-1}\cdots\sigma_{n-1}^{-1}\]
as a braid in the punctured torus, in terms of the generators $x_1,y_1, \sigma_i$.
As a further help in using the plan view for paths we can alter the view near the projection of one of the braid points, where a path starts out at the lowest level from the braid point and finishes at the highest level. Then another path crossing nearby (with either orientation) can be moved across the braid point as shown locally in figure \ref{braidpointmove}.
\begin{figure}[ht]
\begin{center} \braidplanunder $\quad =\quad $ \braidplanover\end{center}
\caption{Moving an arc past a braidpoint}\label{braidpointmove}
\end{figure}
Apply this to the view of $ y_1 x_2$ by moving the path from braid point $1$ across braid point $2$. This gives \[
y_1 x_2 = \etaonexitwo = \xitwoalphatwo= x_2\alpha_2\] where \[\alpha_2= \alphatwo = \sigma_1^{2} y_1,\] and thus
\begin{eqnarray}
x_2 y_1^{-1}&=& y_1^{-1} x_2\sigma_1^2.
\end{eqnarray}
We can rewrite this equation in terms of the generators $x_1$ and $y_1$ as
\[\sigma_1^{-1}x_1\sigma_1^{-1}y_1^{-1}=y_1^{-1}\sigma_1^{-1}x_1\sigma_1\] and further
\[\sigma_1^{-2}x_1y_2^{-1}=y_2^{-1}x_1.\]
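These rewritings are pure word algebra, so they can be checked mechanically. The following sketch (in our own ad hoc notation: \texttt{a} for $\sigma_1$, \texttt{x} for $x_1$, \texttt{y} for $y_1$) verifies that the relation $x_2y_1^{-1}=y_1^{-1}x_2\sigma_1^2$ and its rewritten form $\sigma_1^{-2}x_1y_2^{-1}=y_2^{-1}x_1$ determine conjugate relators in the free group, and hence impose the same relation.

```python
def reduce(word):
    """Freely reduce a word given as a list of (generator, ±1) pairs."""
    out = []
    for g, e in word:
        if out and out[-1][0] == g and out[-1][1] == -e:
            out.pop()
        else:
            out.append((g, e))
    return out

def inv(word):
    return [(g, -e) for g, e in reversed(word)]

def cyc_reduce(word):
    w = reduce(word)
    while len(w) >= 2 and w[0][0] == w[-1][0] and w[0][1] == -w[-1][1]:
        w = w[1:-1]
    return w

def conjugate_words(u, v):
    """Cyclically reduced words are conjugate iff one is a rotation of the other."""
    u, v = cyc_reduce(u), cyc_reduce(v)
    if len(u) != len(v):
        return False
    return any(v[i:] + v[:i] == u for i in range(max(len(v), 1)))

# Generators: 'a' = sigma_1, 'x' = x_1, 'y' = y_1 (capitals below are inverses).
a, A = ('a', 1), ('a', -1)
x, X = ('x', 1), ('x', -1)
y, Y = ('y', 1), ('y', -1)

x2 = [A, x, A]          # x_2 = sigma_1^{-1} x_1 sigma_1^{-1}
y2 = [a, y, a]          # y_2 = sigma_1 y_1 sigma_1

# Relator of: x_2 y_1^{-1} = y_1^{-1} x_2 sigma_1^2
r1 = reduce(x2 + [Y] + inv([Y] + x2 + [a, a]))
# Relator of: sigma_1^{-2} x_1 y_2^{-1} = y_2^{-1} x_1
r2 = reduce([A, A, x] + inv(y2) + inv(inv(y2) + [x]))

assert conjugate_words(r1, r2)
print("relators are conjugate")
```

Both relators freely reduce to cyclic words of length eight that agree up to rotation, confirming the manipulation above.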
A similar argument, moving one path across braid points $2,\ldots,n$, shows that \[ y_1 x_2 x_3 \cdots x_n = \etaonexitwon= \etaonexitwondivert= x_2 x_3 \cdots x_n \alpha_n\] in the punctured braid group, where \[\alpha_n= \alphan= \beta_n y_1,\quad\text{with}\quad \beta_n=\xioneetaonecommutatorbraid \] as in figure \ref{betan}, giving
\[ y_1 x_2 \cdots x_n= x_2\cdots x_n\beta_n y_1.\]
\begin{figure}[ht]
\begin{center}
\labellist\small
\pinlabel{$*$} at 265 375
\endlabellist
\betanbraid
\end{center}
\caption{Side view of the braid $\beta_n=\sigma_1\sigma_2\cdots\sigma_{n-1}\sigma_{n-1}\cdots\sigma_2\sigma_1$}\label{betan}
\end{figure}
Bellingeri \cite[theorem 1.1]{B04} gives a presentation for the group of $n$-braids in the punctured torus with generators \[\sigma_1,\cdots,\sigma_{n-1}, a,b,\] and relations
\begin{eqnarray}
\sigma_i\sigma_j&=&\sigma_j\sigma_i, |i-j|>1\\
\sigma_i\sigma_{i+1}\sigma_i&=&\sigma_{i+1}\sigma_i\sigma_{i+1}\\
\sigma_i a&=&a\sigma_i, i>1\\
\sigma_i b&=&b\sigma_i, i>1\\
a\sigma_1^{-1}a\sigma_1^{-1}&=&\sigma_1^{-1}a\sigma_1^{-1}a\\
b\sigma_1^{-1}b\sigma_1^{-1}&=&\sigma_1^{-1}b\sigma_1^{-1}b\\
b\sigma_1^{-1}a\sigma_1&=&\sigma_1^{-1}a\sigma_1^{-1}b
\end{eqnarray}
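The first two families of relations are the usual braid relations, which can be spot-checked numerically in any known representation of the braid group. The sketch below uses the standard unreduced Burau matrices (our choice of model, not something used in this paper) to check them for $n=4$; the relations involving $a$ and $b$ are specific to the punctured torus and are not covered by this check.

```python
import numpy as np

def burau(i, n, t):
    """Unreduced Burau matrix of sigma_i acting on C^n."""
    M = np.eye(n)
    M[i-1:i+1, i-1:i+1] = np.array([[1 - t, t], [1.0, 0.0]])
    return M

n, t = 4, 2.0
s1, s2, s3 = (burau(i, n, t) for i in (1, 2, 3))

# sigma_i sigma_j = sigma_j sigma_i for |i - j| > 1
assert np.allclose(s1 @ s3, s3 @ s1)
# sigma_i sigma_{i+1} sigma_i = sigma_{i+1} sigma_i sigma_{i+1}
assert np.allclose(s1 @ s2 @ s1, s2 @ s1 @ s2)
assert np.allclose(s2 @ s3 @ s2, s3 @ s2 @ s3)
print("braid relations hold in the Burau representation")
```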
In our notation this corresponds to a presentation with generators $x_1, y_1,\sigma_i$, taking $a=y_1$, $b=x_1^{-1}$, and $\sigma_i^{-1}$ in place of $\sigma_i$.
Bellingeri's relations involving $a$ and $b$ correspond to the equations
\begin{eqnarray*}
x_1 x_2&=& x_2 x_1\\
y_1 y_2 &=& y_2 y_1\\
x_2 y_1^{-1}&=& y_1^{-1} x_2\sigma_1^2
\end{eqnarray*}
when written in terms of the generators $x_1, y_1, \sigma_1$.
\subsection{A presentation for the algebra $\bsk_n(T^2,*)$}
\begin{definition}\label{def:bskein}
The braid skein algebra $\bsk_n(T^2,*)$ is defined to be ${\bf Z}[s^{\pm1},c^{\pm1}]$-linear combinations of $n$-braids in the punctured torus, up to equivalence, subject to the local relations
\begin{equation}\label{eq:skeinrel}
\Xor -\Yor=(s-s^{-1})\Ior
\end{equation}
and
\begin{equation}\label{eq:puncturecross}
\labellist\small
\pinlabel{$*$} at 75 105
\endlabellist\basecross \quad=\quad c^2\
\labellist\small
\pinlabel{$*$} at 80 100
\endlabellist\baseidentity
\end{equation}
between braids.
\end{definition}
By the term \emph{local relation} in this definition we mean that the braids in the relations only differ as shown inside a $3$-ball. We would like to find a ``small'' generating set for the ideal defined by these relations, which we do in the following three theorems. (To simplify exposition, Theorems \ref{thm:brskein} and \ref{thm:brpuncture} are proved in Subsection \ref{sec:isorels}.)
\begin{theorem}\label{thm:brskein} Suppose that $\alpha,\beta,\gamma$ are three $n$-braids in the punctured torus whose diagrams can be isotoped in $(T^2- \{*\})\times I$, fixing the boundary, so that they differ only inside a ball as
\[
\alpha=\Xor, \
\beta= \Yor, \
\gamma= \Ior.
\]
Then there exists a braid $L$ such that
\[ L\beta=\sigma_1^{-2}L\alpha,\ L\gamma=\sigma_1^{-1}L\alpha.\]
\end{theorem}
\begin{theorem}\label{thm:brpuncture} Suppose that $\delta,\epsilon$ are two $n$-braids in the punctured torus whose diagrams can be isotoped in $(T^2 - D^2)\times I$, fixing the boundary, so that they differ only in a ball as
\[
\delta= \labellist\small
\pinlabel{$*$} at 75 105
\endlabellist\basecross \ ,\
\epsilon= \labellist\small
\pinlabel{$*$} at 80 100
\endlabellist\baseidentity \ .
\]
Then there exists a braid $L$ such that
\[
L\epsilon=P^{-1}L\delta,
\]
where $P$ is the braid taking string $n$ once round the base string, shown here in plan and elevation.
\begin{center} $P$ \quad =\quad \Pplan \quad =\quad \labellist\small
\pinlabel{$*$} at 193 130
\endlabellist
\Pbraid \end{center}
\end{theorem}
\begin{theorem} The ideal generated by \eqref{eq:skeinrel} and \eqref{eq:puncturecross} is the same as the ideal defined by
\begin{equation}
\sigma_1-\sigma_1^{-1}=(s-s^{-1})\label{eq:Hecke}
\end{equation}
\begin{equation}
P=c^2.\label{eq:basecircle}
\end{equation}
\end{theorem}
\begin{proof}
Clearly the equations $\sigma_1-\sigma_1^{-1}=s-s^{-1}$ and $P=c^2$ are special cases of
\eqref{eq:skeinrel} and \eqref{eq:puncturecross}.
Conversely, suppose $\alpha,\beta,\gamma$ are any braids whose diagrams differ in some ball as \[\alpha=\Xor,\quad\beta=\Yor,\quad\gamma=\Ior.\]
By theorem \ref{thm:brskein} we can write \[L(\alpha-\beta) =(1-\sigma_1^{-2})L\alpha=(\sigma_1-\sigma_1^{-1})\sigma_1^{-1}L\alpha. \]
Then equation \eqref{eq:Hecke} shows that \[L(\alpha-\beta) =(s-s^{-1})\sigma_1^{-1}L\alpha=(s-s^{-1})L\gamma.\] Hence $\alpha-\beta=(s-s^{-1})\gamma$, and so $\alpha, \beta$ and $\gamma$ satisfy equation \eqref{eq:skeinrel}.
To deduce equation \eqref{eq:puncturecross} for braids $\delta$ and $\epsilon$ as in theorem \ref{thm:brpuncture} write \[L\epsilon =P^{-1}L\delta\] and apply equation \eqref{eq:basecircle} to get \[L\epsilon =c^{-2}L\delta\] and hence
\[\delta=c^2\epsilon.\]
\end{proof}
We can now adjoin these relations to Bellingeri's presentation for the braid group of the punctured torus to give a presentation of the algebra $\bsk_n(T^2,*)$.
\begin{theorem}\label{braidpresentation}
The algebra $\bsk_n(T^2,*)$ can be presented by the braids
\[\sigma_1,\cdots,\sigma_{n-1},x_1,y_1,\]
with relations
\begin{eqnarray}
\sigma_i\sigma_j&=&\sigma_j\sigma_i, |i-j|>1\label{eq:braidstart}\\
\sigma_i\sigma_{i+1}\sigma_i&=&\sigma_{i+1}\sigma_i\sigma_{i+1}\\
\sigma_i x_1&=&x_1\sigma_i, i>1\\
\sigma_i y_1&=&y_1\sigma_i, i>1\\
x_1\sigma_1^{-1}x_1\sigma_1^{-1}&=&\sigma_1^{-1}x_1\sigma_1^{-1}x_1\\
y_1\sigma_1y_1\sigma_1&=&\sigma_1y_1\sigma_1y_1\\
x_1^{-1}\sigma_1y_1\sigma_1^{-1}&=&\sigma_1y_1\sigma_1x_1^{-1}\label{eq:braidend}\\
(\sigma_1-s)(\sigma_1+s^{-1})&=&0 \label{eq:skeinone}\\
x_1y_1x_1^{-1}y_1^{-1}&=&c^2\sigma_1\sigma_2\cdots\sigma_{n-1}\sigma_{n-1}\cdots\sigma_2\sigma_1 \label{eq:commutator}
\end{eqnarray}
\end{theorem}
\begin{proof}
In our notation Bellingeri's generators are $a=y_1, b=x_1^{-1}$, and our $\sigma_i$ is Bellingeri's $\sigma_i^{-1}$.
Relations \eqref{eq:braidstart} to \eqref{eq:braidend} then present the algebra of $n$-braids in the punctured torus, by \cite{B04}.
Relation \eqref{eq:skeinone} is equivalent to relation \eqref{eq:Hecke}.
Relation \eqref{eq:commutator} is equivalent to the relation \eqref{eq:basecircle}, $P=c^2$, since
\[ x_1 y_1 x_1^{-1} y_1^{-1}=\sigma_1\sigma_2\cdots\sigma_{n-1}P\sigma_{n-1}\cdots\sigma_2\sigma_1. \]
\end{proof}
\begin{remark}
As confirmation that our conventions are consistent with these relations note that with $x_2=\sigma_1^{-1}x_1\sigma_1^{-1}$ and $y_2=\sigma_1y_1\sigma_1$ the relations between the generators $x_1$ and $y_1$ become
$x_1x_2=x_2x_1, y_1y_2=y_2y_1$ and $y_2x_1^{-1}=x_1^{-1}y_2\sigma_1^{-2}$. These relations have already been demonstrated in our illustrations above.
\end{remark}
\begin{theorem}\label{thm:bskiso}
The skein algebra $\bsk_n(T^2,*)$ is isomorphic to the double affine Hecke algebra $\ddot{\mathrm{H}}_n$.
\end{theorem}
\begin{proof}
We construct inverse homomorphisms between the two algebras.
\begin{itemize}
\item
Define a homomorphism from $\bsk_n(T^2,*)$ to $\ddot{\mathrm{H}}_n$ by
sending $x_1,y_1,\sigma_i$ to $X_1,Y_1, T_i^{-1}$ and $s^2,c^2$ to $t,q^{-1}$.
To show that this gives a homomorphism it is enough to check that the relations in the presentation of $\bsk_n(T^2,*)$ hold after the assignment of generators in $\ddot{\mathrm{H}}_n$.
The only relation for which this is not immediately clear is relation \eqref{eq:commutator} in $\bsk_n(T^2,*)$.
Relation \eqref{eq:commutator} can be written \[x_1y_1x_1^{-1}
=c^2 \beta_n y_1. \]
We also know that
\[y_1x_2\cdots x_n=x_2\cdots x_n \beta_n y_1.\]
The relation can then be rewritten as
\[c^{-2}x_1y_1x_1^{-1}=(x_2\cdots x_n )^{-1}y_1x_2\cdots x_n .\]
In our assignment to $\ddot{\mathrm{H}}_n$ we can see that each $x_i$ is sent to $X_i$. It is then enough to check that \[qX_1Y_1X_1^{-1}=(X_2\cdots X_n )^{-1}Y_1X_2\cdots X_n\] in $\ddot{\mathrm{H}}_n$. This follows immediately from the last relation for $\ddot{\mathrm{H}}_n$ and the fact that the elements $X_i$ all commute.
\item
We can define an inverse homomorphism from $\ddot{\mathrm{H}}_n$ to $\bsk_n(T^2,*)$ by sending $X_i, Y_i, T_i$ to $x_i,y_i,\sigma_i^{-1}$ and $t,q$ to $s^2,c^{-2}$.
Our illustrations above confirm that the relations from $\ddot{\mathrm{H}}_n$ hold in $\bsk_n(T^2,*)$ after this assignment.
\end{itemize}
\end{proof}
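As a small bookkeeping check on the parameter assignments in the proof above (the variable names are ours), the forward substitution $s^2\mapsto t$, $c^2\mapsto q^{-1}$ and the inverse substitution $t\mapsto s^2$, $q\mapsto c^{-2}$ should compose to the identity on the coefficient rings:

```python
import sympy as sp

s, c, t, q = sp.symbols('s c t q', positive=True)

forward = {s: sp.sqrt(t), c: 1/sp.sqrt(q)}   # so s**2 -> t and c**2 -> 1/q
inverse = {t: s**2, q: c**(-2)}

# round-trip on a few Laurent monomials in t, q
for expr in (t, q, t*q, t**2/q):
    assert sp.simplify(expr.subs(inverse).subs(forward)) == expr
print("parameter maps are mutually inverse")
```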
\subsection{Isotopies of relations}\label{sec:isorels}
In this section we prove Theorems \ref{thm:brskein} and \ref{thm:brpuncture}. We have reformulated the proofs so as to apply a theorem of Yi Ni, \cite[Theorem 1.1]{Ni11}, about Dehn surgery on a knot in a $3$-manifold $F\times I$ where $F$ is a surface with boundary.
Dehn surgery on a curve $K$ in a $3$-manifold $M$ is the operation of removing a solid torus neighbourhood of $K$ and regluing a solid torus in its place, with a specification, in general by a rational number, of how the meridian of the solid torus is glued back. When the surgery curve $K$ spans a disc in $M$ the manifold resulting from $\pm 1/n$ surgery on $M$ is homeomorphic to the original manifold $M$, and can be viewed as cutting open $M$ along the spanning disc and then regluing after making $n$ full twists on the disc.
Ni shows that when Dehn surgery on a curve $K$ in the product $M=F\times I$ results in a manifold $N$ which is homeomorphic to $M$ in a controlled way, then $K$ must lie in a very simple way in the product.
\begin{proof} [Proof of theorem \ref{thm:brskein}] We are given diagrams for $\alpha, \beta$ and $\gamma$ which differ as shown inside a ball $D$. We say that $\beta$ and $\gamma$ are given by \emph{switching} or \emph{smoothing} $ \alpha$ in $D$.
We can specify the ball by including its oriented equator $K$ in the diagram of $\alpha$, to make a diagram $\alpha \cup K$ which includes $K$ as a closed component. We call $\alpha \cup K$ a \emph{switching configuration} for $\alpha$ and $K$ the \emph{switching curve}, as shown in figure \ref{fig:switchingconfig}.
\begin{figure}[ht]
\begin{center} $\alpha\cup K =\labellist\small
\pinlabel{$K$} at 204 170
\endlabellist\switchconfig $ \end{center}
\caption{A switching configuration for a braid $\alpha$.} \label{fig:switchingconfig}
\end{figure}
Using the switching configuration $\alpha\cup K$ has the advantage that the whole diagram can be manipulated by isotopy, while still allowing us to recover diagrams for $\beta$ and $\gamma$ as follows.
The switching curve $K$ spans an embedded $2$-disc which is crossed by $\alpha$ in just two points, in the same direction. We can retrieve the ball $D$ where the switch and smooth takes place as a neighbourhood of this disc, while the switched and smoothed diagrams $\beta$ and $\gamma$ are given by replacing the two arcs crossing the neighbourhood of the disc with two other arcs as follows.
A switching curve $K$ around $\alpha$ is shown locally here, \begin{center} $ \alpha\cup K \quad =\quad \labellist\small
\pinlabel{$K$} at 150 90
\endlabellist\switchcurve $\ . \end{center} The switched and the smoothed diagrams, in the neighbourhood of the disc spanning $K$, are then
\begin{center} $\beta\quad = \quad \switch, \qquad \gamma \quad = \quad \smooth $.\end{center}
The effect of the switching operation which gives $\beta$ from $\alpha$ can then be described as performing $\pm 1$ Dehn surgery on the switching curve $K$, since it corresponds to making a full twist on the strings crossing the disc which spans $K$. This observation means that a switching curve is an important feature when considering the application of Ni's theorem.
We can isotop a given switching configuration \[\alpha\cup K =\labellist\small
\pinlabel{$j$} at 220 00
\pinlabel{$i$} at 90 00
\pinlabel{$K$} at 204 170
\endlabellist\switchconfig \]\\[2mm] to push the switching curve to the bottom of the braid. To do this easily we first straighten out the braid strings to form the identity braid $\iota$, by looking at
\[(\alpha\cup K)\alpha^{-1} = \iota \cup K'=\labellist\small
\pinlabel{$j$} at 237 0
\pinlabel{$i$} at 95 0
\pinlabel{$K'$} at 300 180
\endlabellist\identityconfig\ .\] We can then redraw \[\alpha\cup K =\switchconfig\quad =\quad \labellist\small
\pinlabel{$j$} at 237 0
\pinlabel{$i$} at 95 0
\pinlabel{$\iota\cup K'$} at 290 125
\pinlabel{$\alpha$} at 275 300
\endlabellist\compositeconfig \qquad=(\iota\cup K')\alpha.\] \\[2mm] In this isotopic switching configuration the switching curve has been moved down to $K'$. After switching we get \[\beta =\beta'\alpha\] where $\beta'$ is the result of switching the identity braid using the switching curve $K'$. It follows that $\beta'=\beta\alpha^{-1}$ is then also a braid.
We can now look at $K'$ in plan view as a knot diagram in $T^2-\{1,\ldots,n,*\}$.
The diagram of $K'$ encircles points $i$ and $j$, where the original switching takes place between strings $i$ and $j$ of $\alpha$.
\begin{remark}
We must have $i\ne j$ here, since otherwise the smoothed curve $\gamma$ would have a closed component, and could not itself be a braid.
\end{remark}
The plan view of $K'$ will then look something like \[\labellist\small
\pinlabel{$j$} at 237 100
\pinlabel{$i$} at 90 100
\pinlabel{$K'$} at 05 60
\endlabellist\switchplan\]
in general. The result of switching, $\beta'$, can be viewed in this projection by taking the point $i$ through the projected spanning disc and moving it out to the curve $K'$, then once round $K'$ and back to the point $i$, while leaving the other strings alone, as indicated here,
\[\labellist\small
\pinlabel{$j$} at 237 100
\pinlabel{$i$} at 110 100
\endlabellist\switcheddiagram.\]
If there are crossing points in the diagram of $K'$, as here, the resulting diagram of $\beta'$ is in general unlikely to be the projection of a braid, since the crossings in the projection of a braid must all appear as undercrossings at their first encounter round $K'$.
Although it is possible that this could be realised after an isotopy, we can
apply Ni's theorem \cite[Theorem 1.1]{Ni11} to prove the following result.
\begin{theorem} \label{th:switchcurve}
If $\iota \cup K'$ is a switching configuration for the identity braid which gives rise to a braid after switching then up to isotopy the diagram of $K'$ in $T^2-\{1,\ldots,n,*\}$ has no crossing points.
\end{theorem}
\begin{proof}
By removing a neighbourhood of each of the braid strings from $\iota$ we can regard $K'$ as a knot in the product manifold $F\times I$ with $F=T^2-(n+1)\ \mathrm{discs}$. We know that $\pm1$ Dehn surgery on $K'$ gives the exterior of the braid $\beta'$. Now any braid exterior is homeomorphic to $F\times I$ by a level-preserving homeomorphism. Theorem 1.1 of \cite{Ni11} shows that $K'$ then has a diagram in $F$ with $0$ or $1$ crossing. The possibility of $1$ crossing can be excluded, since the spanning disc for $K'$ meets two strings of $\iota$ in the same sense. Consequently after isotopy the projection of $K'$ is an embedded curve $C\subset F$ which bounds a $2$-disc in $F$ containing just two braid points.
\end{proof}
It is convenient to associate with any such embedded curve $C\subset F$ a switching configuration $\iota\cup C^+$ for the identity braid, using $C^+=C\times\frac{1}{2}\subset F\times I$ as switch curve. For example the standard curve $C_{1\,2}$ around the braid points $1$ and $2$ gives the basic switching configuration
\[\iota\cup C_{1\,2}^+ \quad = \quad \labellist\small
\pinlabel{$C_{1\,2}^+$} at 170 80
\pinlabel{$1$} at 30 50
\pinlabel{$2$} at 120 50
\endlabellist\standardswitchcurve \]
We know that switching with this particular configuration gives rise to the braid $\sigma_1^{-2}$, and smoothing gives $\sigma_1^{-1}$. Similarly the standard embedded curve $C_{n\, *}$ around the braid point $n$ and the base point $*$ gives rise to the braid $P^{-1}$ after switching.
By Theorem \ref{th:switchcurve} the projection $C$ of $K'$ is an embedded curve in $T^2$, which bounds a $2$-disc in $T^2$ containing two braid points $i$ and $j$, for example
\[C= \labellist\small
\pinlabel{$j$} at 190 100
\pinlabel{$i$} at 90 100
\endlabellist\embeddeddiagram \ ,\] and hence we can write $\iota\cup K' =\iota\cup C^+$.
We can find an isotopy of $C\cup \{1,\ldots,n\}$ in $T^2-\{*\}$ which moves $C$ to the standard curve $C_{1\,2}$ around points $1$ and $2$, and carries the set of braid points to itself. Such an isotopy can be constructed by first moving the points $i$ and $j$ close together within $C$ and then moving the whole disc to standard position, and finally moving the remaining braid points. The reverse of this isotopy provides a braid $L$ with the property that \[(\iota\cup C^+_{1\,2})L =L(\iota\cup C^+)\] up to isotopy, so in this sense $L$ intertwines the curve $C^+$ with the standard curve $C^+_{1\, 2}$.
As a result we have isotopic switching configurations for the braid $L\alpha$,
\[ L(\alpha\cup K)=L(\iota\cup C^+)\alpha=(\iota\cup C^+_{1\,2})L\alpha\ , \]
in which the switching curve $K$ is moved down successively from $\alpha$ through $C$ to become the standard curve $C^+_{1\,2}$ at the bottom.
By switching this configuration we have
\[L\beta =\sigma_1^{-2}L\alpha \]
and by smoothing we get \[ L\gamma =\sigma_1^{-1}L\alpha ,\]
as required for theorem \ref{thm:brskein}.
\end{proof}
\begin{proof} [Proof of theorem \ref{thm:brpuncture}] The braid $\epsilon$ arises from $\delta$ by a switch using a switching curve $K$ around the base string and string $i$. As in the proof of theorem \ref{thm:brskein} we can write $\delta\cup K = (\iota\cup K')\delta$ up to isotopy and apply Theorem \ref{th:switchcurve} to show that $K'$ projects to an embedded curve $C$ in $T^2$ around the two points $i$ and $*$. This time find an isotopy of $T^2$ which carries $C$ to the standard curve $C_{n\, *}$ around the points $n$ and $*$, and use it to provide a braid $L$ such that
\[ L(\delta\cup K)=L(\iota\cup C^+)\delta =(\iota\cup C^+_{n\, *})L\delta \ .\]
In this switching configuration for $L\delta$ use either $K$ or the isotopic curve $C_{n\,*}$ to switch. Switching $\iota\cup C^+_{n\, *}$ gives the braid $P^{-1}$ and switching $\delta\cup K$ gives $\epsilon$. Hence switching the whole configuration gives
\[L\epsilon=P^{-1}L\delta \ ,\] as required for Theorem \ref{thm:brpuncture}.
\end{proof}
%
\section{The tangle skein algebra}
In this section we generalize the definition of the braid skein algebra using framed tangles, and we conjecture that this produces the same algebra as the braid skein algebra. The multiplication in the tangle skein algebra is the same as in the previous section, and comes from stacking in the $[0,1]$ direction. In the next section we use this conjecture to relate the classical skein algebra of closed links in the punctured torus to the elliptic Hall algebra.
In Homflypt skein theory we consider oriented \emph{banded} curves in a $3$-manifold $M$, possibly with marked input and output points on its boundary.
Here are some such pieces
\begin{center}
\pic{idorband} {.60} \qquad \Xorband \qquad \pic{rcurlorband.eps} {.60}
\end{center}
We can think of these as made of flat tape rather than rope.
The only difference from rope is that the tapes can have extra twists in them such as
\begin{center}\pic{idorleftband} {.60} \qquad or \quad \pic{idorrightband} {.60}\end{center}
Twists may be dealt with by drawing little kinks in the diagram, replacing \begin{center}\pic{idorrightband} {.60} \qquad by \quad \pic{rcurlorband.eps} {.60}\end{center} and \begin{center}\pic{idorleftband} {.60} \qquad by \quad \pic{lcurlorband.eps} {.60}\end{center}
When there are boundary points the curves will include oriented arcs joining input to output points. In addition we can have some closed oriented curves.
The general Homflypt skein $\sk(M)$ is defined to be $\mathbb{Z}[s^{\pm1}, v^{\pm1}]$-linear combinations of banded links, up to isotopy, with the basic linear relations
\begin{align*}
\Xorband\ -\ \Yorband \qquad &=\qquad{(s-s^{-1})}\quad\ \Iorband \\
\pic{rcurlorband.eps} {.60} \qquad&=\qquad {v^{-1}}\quad \pic{idorband} {.60}
\end{align*}
between banded links whose diagrams differ only locally as shown.
Special cases of interest to us are where $M=F\times I$ for a surface $F$, with or without boundary. In such cases we write $\sk(F)$ for the skein $\sk(M)$, which has the structure of an algebra, with product induced by stacking curves in the direction of the interval $I$.
In \cite{MS17} we have looked at the case where $F=T^2$, and given a presentation for $\sk(T^2)$.
The case $\mathcal{C}=\sk(A)$, where $A$ is the annulus, is a commutative algebra. It has been widely studied, originally by Turaev \cite{Turaev}, and subsequently by Morton and others.
In our present work we will incorporate the skein of the torus with one hole, $\sk(T^2 - D^2)$, including elements which map to the generators of $\sk(T^2)$ under the homomorphism induced by the inclusion $T^2- D^2 \to T^2$.
Again taking $M=F\times I$, we consider the case where we fix $n$ input points in $F\times \{0\}$, and take the corresponding $n$ output points in $F\times \{1\}$. Stacking in the $I$ direction gives this skein the structure of an algebra over $\mathbb{Z}[s^{\pm1}, v^{\pm1}]$, which we denote by $\sk_n(F)$.
The simplest case of this, when $F=D^2$, gives the algebra $\sk_n(D^2)$. This algebra is a version of the Hecke algebra $H_n(z)$ of type $A$, based on the quadratic relation $\sigma_i^2=z\sigma_i +1$, where $z=s-s^{-1}$.
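The equivalence between the quadratic relation $\sigma_i^2=z\sigma_i+1$ and the skein form $\sigma_i-\sigma_i^{-1}=s-s^{-1}$ can be illustrated symbolically: any invertible element with eigenvalues $s$ and $-s^{-1}$ satisfies both. A minimal sketch, modelling $\sigma$ by a diagonal matrix of our own choosing:

```python
import sympy as sp

s = sp.symbols('s', positive=True)
z = s - 1/s
sigma = sp.diag(s, -1/s)   # an invertible element with eigenvalues s and -s^{-1}

# quadratic (Hecke) form: sigma^2 = z*sigma + 1
assert (sigma**2 - (z*sigma + sp.eye(2))).expand() == sp.zeros(2, 2)
# skein form: sigma - sigma^{-1} = (s - s^{-1})*1
assert (sigma - sigma.inv() - z*sp.eye(2)).expand() == sp.zeros(2, 2)
print("quadratic and skein forms agree")
```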
In anticipation of the next section we are led to consider the skein $\sk_n(T^2 - \{*\})$ of the punctured torus. In order to incorporate our algebra $\bsk_n(T^2,*)$ into this framework we will adjoin the relation
\begin{center}
\labellist\small
\pinlabel{$*$} at 146 740
\endlabellist\Xor$\quad=\quad c^2\ $
\labellist\small
\pinlabel{$*$} at 146 740
\endlabellist\Yor
\end{center}
to allow a string to cross through the fixed string $\{*\}\times I$ in $T^2\times I$ which defines the puncture.
With this extra relation in place we use the notation $\sk_n(T^2,*)$ for the resulting algebra over $\mathbb{Z}[s^{\pm1}, v^{\pm1}, c^{\pm 1}]$.
\begin{theorem}\label{thm:dahatoskein} There is an algebra homomorphism
\[F_n: \bsk_n(T^2,*)\cong \ddot H_n\to \sk_n(T^2,*).\]
\end{theorem}
\begin{proof} The homomorphism $F_n$ is defined on a braid by making a consistent choice of framing for it.
Braids in $T^2$ can be framed by fixing a direction in $T^2$, say the $(1,0)$ direction, and taking the band on each braid string in this direction, which is transverse to the string at all points. This appears to give the framing used in \cite{BWPV14}. Under any braid isotopy the bands are carried along, and the defining relations between braids map to the corresponding skein relations between banded curves.
\end{proof}
In this section and the next we consider the algebra $\sk_n(T^2,*)$, which incorporates general framed tangles along with closed curves, keeping in mind the elliptic Hall algebra and its relation to the algebras $\ddot H_n$. One outcome of this is the following result, which will be established in the next section.
\begin{theorem} The homomorphism \[F_n: \bsk_n(T^2,*)\to \sk_n(T^2,*)\]
is surjective. \label{surjection}
\end{theorem}
\begin{proof}
We must look at elements of the skein $\sk_n(T^2,*)$ and reduce them to combinations of braids. In particular we have to deal with closed curves as well as braids.
Choose a disc $D^2$ in $T^2$ which contains the $n$ braid points and the puncture $*$. A suitable choice is a neighbourhood of the line through the braid points $1,\cdots,n$ and the base point $*$, shown in figure \ref{disc}.
The inclusion of $T^2-D^2$ in $T^2$, combined with the identity braid on the $n$ strings, induces an algebra homomorphism
\[\varphi_n: \sk(T^2-D^2) \to \sk_n(T^2,*).\]
\begin{figure}[ht]
\begin{center}
\Torusbasedisc\end{center}
\caption{A choice for the disc $D^2$ in $T^2$} \label{disc}
\end{figure}
The closed curves we particularly want to use can be described in the skein $\sk(T^2-D^2)$. This skein is an algebra with a homomorphism \[\sk(T^2-D^2)\to \sk(T^2)\] induced by filling in the disc. A presentation of the algebra $\sk(T^2)$ is given in \cite{MS17}, with generators $W_{\mathbf{x}}, {\mathbf{x}}\in \mathbb{Z}^2$, and relations \[[W_{\mathbf{x}},W_{\mathbf{y}}]=(s^m-s^{-m})W_{{\mathbf{x}}+{\mathbf{y}}}\] where $m=\det({\mathbf{x}} {\mathbf{y}})$.
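As a quick consistency check on this presentation (a sketch in our own notation), the bracket $[W_{\mathbf{x}},W_{\mathbf{y}}]=(s^m-s^{-m})W_{{\mathbf{x}}+{\mathbf{y}}}$ must be antisymmetric in ${\mathbf{x}}$ and ${\mathbf{y}}$; since $\det({\mathbf{y}}\,{\mathbf{x}})=-\det({\mathbf{x}}\,{\mathbf{y}})$ and $s^{-m}-s^{m}=-(s^m-s^{-m})$, the coefficient flips sign as required. We can verify this symbolically for a few integer vectors:

```python
import sympy as sp

s = sp.symbols('s', nonzero=True)

def coeff(xv, yv):
    """Coefficient (s^m - s^{-m}) with m = det of the matrix with rows xv, yv."""
    m = sp.Matrix([xv, yv]).det()
    return s**m - s**(-m)

# antisymmetry: coeff(x, y) + coeff(y, x) == 0
for xv, yv in [((1, 0), (0, 1)), ((2, 3), (1, -1)), ((5, 0), (0, 1))]:
    assert sp.simplify(coeff(xv, yv) + coeff(yv, xv)) == 0
print("bracket coefficient is antisymmetric")
```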
When ${\mathbf{z}}\in\mathbb{Z}^2$ is primitive there is an embedded curve $W_{\mathbf{z}}\subset T^2$ which is taken to represent the element $W_{\mathbf{z}}\in\sk(T^2)$. The same curve will give a well-defined element of $\sk(T^2-D^2)$, which we will also write as $W_{\mathbf{z}}$.
Write $A_{\mathbf{z}}\subset T^2-D^2$ for the annulus with $W_{\mathbf{z}}$ as its core. Further elements $W_{\mathbf{x}}$, where ${\mathbf{x}}=m{\mathbf{z}}$, are defined in section \ref{sec:closed} as in \cite{MS17} by suitably chosen combinations of curves in $A_{\mathbf{z}}$, and determine the generators of $\sk(T^2)$ above.
The elements $W_{\mathbf{x}}$ do not generate the whole of the algebra $\sk(T^2-D^2)$, nor do all the commutation relations from \cite{MS17} hold in $\sk(T^2-D^2)$. The main problem is that the disc $D^2$ gets in the way of isotopies that can be made in $T^2$.
\begin{remark}\label{rmk:ptorusrel}
It is worth noting however that if ${\mathbf{y}}$ and ${\mathbf{z}}$ are primitive, and the curves $W_{\mathbf{y}}$ and $W_{\mathbf{z}}$ intersect in just one point, then the commutation relation
\[ [W_{\mathbf{x}},W_{\mathbf{y}}]=(s^m-s^{-m})W_{{\mathbf{x}}+{\mathbf{y}}}\] holds for ${\mathbf{x}}=m{\mathbf{z}}$. This is because the argument from \cite{MS17} only involves curves in the union of the annuli $A_{\mathbf{y}}$ and $A_{\mathbf{z}}$.
Hence in particular we have\footnote{The sign difference between the right hand side of \eqref{eq:demoskein} and the corresponding relation in \cite{MS17} comes from the fact that our convention for the product in the skein algebra here (left element goes below the right element) is the opposite from \cite{MS17}.}
\begin{equation}\label{eq:demoskein}
[W_{(m,0)}, W_{(0,1)}]=-(s^m-s^{-m})W_{(m,1)}
\end{equation}
in the image of $\ddot{\mathrm{H}}_n$ in $\sk_n(T^2,*)$. Note that under the map\footnote{The existence of this map depends on the assumption that Conjecture \ref{conj:mainiso} is true.} $\sk^+(T^2-D^2) \to \mathcal{E}_{q,t}$ described in Theorem \ref{thm:psktoeha}, we have the assignments
\[W_{(m,0)} \mapsto \{m\}_s u_{m,0},\quad \quad W_{(0,1)} \mapsto \{1\}_s u_{0,1},\quad \quad W_{(m,1)} \mapsto \{1\}_s u_{m,1}
\]
where we have used the notation $\{d\}_s := s^d-s^{-d}$. Expanding the relation \eqref{eq:hallrel} in the case ${\mathbf{y}}=(m,0)$ and ${\mathbf{x}} = (0,1)$, we get
\begin{equation}\label{eq:demoeha}
[u_{m,0},u_{0,1}] = -u_{m,1}
\end{equation}
Thus, we see that the relation \eqref{eq:demoskein} in $\sk^+(T^2-D^2)$ gets mapped to the relation \eqref{eq:demoeha} in $\mathcal{E}_{q,t}$ under the algebra map in Theorem \ref{thm:psktoeha} (whose existence depends on Conjecture \ref{conj:mainiso}).
\end{remark}
To prove theorem \ref{surjection} we first show that for each $n$ the image of $\sk(T^2-D^2)$ is represented by braids. In other words we establish
\begin{lemma} \[\varphi_n(\sk(T^2-D^2))\subset F_n(\bsk_n(T^2,*)).\] \label{braidrep}
\end{lemma}
This depends on two results.
\begin{itemize}
\item We can write any curve $C\subset T^2-D^2$ as a polynomial in the elements $\{W_{\mathbf{x}}\}$ plus a linear combination of braids.
\item Each $W_{\mathbf{x}}$ is a linear combination of braids.
\end{itemize}
The proof of theorem \ref{surjection} is completed by showing that a general tangle can be written as a product of braids and closed curves which avoid $D^2$.
\begin{lemma} The algebra $\sk(T^2-D^2)$ is generated by totally ascending single curves $C\subset T^2-D^2$.
\label{ascending}
\end{lemma}
\begin{proof} This is a standard skein theory exercise. We proceed by induction on the number of crossings in a diagram in $T^2-D^2$. Order the components of the diagram and choose a starting point on each component. A totally ascending diagram is one in which each crossing appears first as an undercrossing, when working along the component from the chosen starting point. Go through the components in turn, switching crossings, and using the skein relation, to end up with a diagram $L$ in which the components are totally ascending, along with a linear combination of diagrams with fewer crossings.
The components of $L$ are then stacked one above the other, and so represent the product of single totally ascending curves.
\end{proof}
The ability to alter the starting point of a totally ascending diagram of a curve, using induction on the number of crossings in the diagram, will prove useful in the arguments which follow.
\begin{lemma} Suppose that $C$ and $D$ are two diagrams differing only in the signs of their crossings. Then $D=C+(s-s^{-1})\sum \pm D_\alpha$, where each $D_\alpha$ is a $2$-component diagram with fewer crossings than $C$. \end{lemma}
\begin{proof} This follows immediately, using the skein relation at each crossing of $C$ to be switched.
\end{proof}
\begin{lemma} Let $C$ be a curve in $T^2-D^2$ which is totally ascending from a starting point immediately beside the boundary of $D^2$. The curve $C'$ given by diverting $C$ around the other side of $D^2$, which is also totally ascending and is isotopic to $C$ in $T^2$ but not in $T^2-D^2$, satisfies the relation
\[\varphi_n(q^{\pm1}C'-C)=\pm(s-s^{-1})\sum_{i=1}^n \beta_i(C)\]
where $\beta_i(C)$ are braids. \label{diversion}
\end{lemma}
\begin{proof}
Divert the curve $C$ successively past the braid points to give curves $C=C_0$, $C_1,\ldots, C_n$, and then $C'$, with $C_i$ crossing $D^2$ between points $i$ and $i+1$, interpreting $*$ as point $n+1$.
The skein relation gives $C_i=C_{i-1}\pm (s-s^{-1})\beta_i(C)$ where $\beta_i(C)$ is a braid in which only string $i$ moves, because the curve $C_{i-1}$ is totally ascending at the point of crossing string $i$.
The result follows, given that $C'=q^{\pm 1}C_n$.
\end{proof}
\begin{remark} The sign depends on the direction of the curves $C_i$ across the disc. It will be $+$ if $C$ crosses the line of braid points in the $x$ direction. The curve $C$ represents an element $w_C(x,y)\in \pi_1(T^2-D^2)$ based at the starting point. This fundamental group is the free group on $2$ generators, and the braid $\beta_1(C)$ is then $\beta_1(C)=w_C(x_1,y_1)$, while the successive braids $\beta_i(C)$ satisfy
\[\beta_i(C)=\sigma_{i-1}\cdots \sigma_1\beta_1(C)\sigma_1\cdots\sigma_{i-1}.\]
\end{remark}
We are now in a position to prove that a curve $C\subset T^2-D^2$ can be written in $\sk_n(T^2,*)$ as a polynomial in $\{W_{\mathbf{x}}\}$ modulo ${\mathrm{im}}(F_n)$. Work by induction on the number of crossings in the diagram of $C$. We can assume by lemma \ref{ascending} that $C$ is totally ascending.
Write $\mathcal{C}\in \mathbb{Z}^2$ for the homology class of $C$ in $H_1(T^2)$, and write $\mathcal{C}=m{\mathbf{z}}$ with ${\mathbf{z}}$ primitive. If $\mathcal{C}=(0,0)$ choose any primitive as ${\mathbf{z}}$.
Fix a simple closed curve $Z$ in $T^2$ in the direction of ${\mathbf{z}}$. Now perform a sequence of moves at the expense of braids and other elements $W_{\mathbf{x}}$, at each stage replacing $C$ by $C'$ homologous to $C$. Firstly move strands of $C$ across $D^2$ until there are no strands of $C$ lying between $Z$ and $D^2$ to one side of $Z$. At each move we need to change the starting point to lie on the strand to be moved across $D^2$, using lemma \ref{diversion}.
Now suppose that $Z$ is crossed $\ell$ times by $C$ in the direction away from $D^2$. If $\ell=0$ then $C$ lies entirely in the annulus $A_{\mathbf{z}}$ on the side away from $D^2$. It then represents an element in the skein of this annulus, $\sk(A_{\mathbf{z}})$, which can be written as a polynomial in the commuting elements $\{W_{k{\mathbf{z}}}\}$.
Otherwise $Z$ is also crossed $\ell$ times by $C$ in the opposite sense, because $C$ is homologous to $m{\mathbf{z}}$. Choose a crossing in the direction away from $D^2$ where the next crossing is towards $D^2$, and take this as the starting point for $C$, again using the induction on the number of crossings. The arc of $C$ between the starting crossing and the next one can now be isotoped across $Z$, without crossing $D^2$, to reduce $\ell$.
In equation \eqref{eq:Wasbraid}, it is shown that $W_{(m,0)}$ can be written as a linear combination of braids, and combining this with the $SL_2(\mathbb{Z})$ action shows that any curve in $T^2-D^2$ can be represented in $\sk_n(T^2,*)$ by a linear combination of braids.
This completes the proof of lemma \ref{braidrep}.
The last step in the proof of theorem \ref{surjection} is to deal with a general framed $n$-tangle in $(T^2,*)$. This will consist of $n$ arcs along with some closed curves. We use the plan view, modified slightly to separate the top and bottom points, and work by induction on the number of crossing points in a diagram to show that it represents a polynomial in braids and closed curves avoiding $D^2$.
Make the tangle diagram totally ascending, choosing starting points first at the bottom point of each arc and then on the closed curves. This can be done by the induction. The resulting diagram represents a product of a braid with closed curves, since the arcs are each totally ascending, and the crossings with closed curves all lie above them.
It remains to show that a single totally ascending curve $C$ in the torus which avoids the $n+1$ points $\{1,\cdots,n,*\}$ can be altered at the expense of braids to avoid the line through the $n+1$ points which determines $D^2$.
Suppose that $C$ crosses the connecting line immediately to the right of point $i$. Make $C$ totally ascending at this crossing position, and then move $C$ across $i$ to lie immediately to its left, at the expense of a braid $\beta_i(C)$ as in the proof of lemma \ref{diversion}. Continue moving intersections to the left, and eventually past the point $1$ at the end of the line, to finish by avoiding the connecting line altogether, and hence lying in $T^2-D^2$. Lemma \ref{braidrep} then completes the proof of theorem \ref{surjection}.
\end{proof}
\begin{remark} It is not clear whether the homomorphism $F_n$ is injective.
There remains the question of possible further relations between elements in the image of $\bsk_n(T^2,*)\cong \ddot H_n$, coming from the additional closed curves available in $\sk_n(T^2,*)$.
\end{remark}
Despite the previous remark, we conjecture (see Conjecture \ref{conj:mainiso} in the introduction) that the algebra map $F_n:\ddot H_n\to \sk_n(T^2,*)$ in Theorem \ref{thm:dahatoskein} is an isomorphism.
\section{Relations with the elliptic Hall algebra}\label{fullskein}
In \cite{SV11} the authors relate the double affine Hecke algebras $\ddot{H}_n$ to the elliptic Hall algebra.
As part of their construction they make use of the sums of powers \[
\sum_i X_i^l, \sum_i Y_i^l \in \ddot{H}_n
\]
which have a very useful skein theoretic description, and which led us to try including closed curves in our skein $\bsk_n(T^2,*)$. We will show that the images of these elements in $\sk_n(T^2,*)$ agree with the images of certain natural elements in $\sk(T^2- D^2)$. In Theorem \ref{thm:psktoeha}, we combine this with results of Schiffmann and Vasserot to show that Conjecture \ref{conj:mainiso} implies a weakened version of Conjecture \ref{conj:punctoeha}.
\subsection{Certain closed curves}\label{sec:closed}
For the moment consider the Homflypt skein $\sk_n(A)$ where $A$ is an annulus, using oriented diagrams in the thickened annulus $A\times I$ with $n$ output points on the top $A\times\{1\}$, and $n$ matching input points on $A\times\{0\}$. We also allow closed components in the diagrams.
When restricted to braid diagrams the skein $\bsk_n(A)$ is used by Graham and Lehrer as a model for the affine Hecke algebra $\dot {H}_n$, where composition is again induced by composition of braids.
Write $Z_i$ and $\overline Z_i$ for the elements represented in $\sk_n(A)$ by the diagrams shown here. Take the framing of the closed component as given by the plane of the diagram.
\begin{center} $Z_i\quad = \quad $\labellist\small
\pinlabel {$i$} at 210 335
\endlabellist \xiibraidblue\ , $\quad \overline Z_i\quad = \quad $\labellist\small
\pinlabel {$i$} at 210 335
\endlabellist \etaibraidblue\\[4mm]
\end{center}
It is readily established that
\begin{eqnarray*} \backid \quad -\quad \frontid &=&(s-s^{-1}) \sum Z_i\\
&=&(s-s^{-1}) \sum \overline Z_i
\end{eqnarray*}
There is also a well-known element $P_m$ for each $m$ in the skein $\sk(A)=\mathcal{C}$ of the annulus with no boundary points which satisfies the relation
\begin{eqnarray*} \labellist\small
\pinlabel {$P_m$} at 150 415
\endlabellist\backidblue \quad - \quad \labellist\small
\pinlabel {$P_m$} at 150 415
\endlabellist\frontidblue &=& (s^m-s^{-m})\sum Z_i^m\\
&=&(s^m-s^{-m}) \sum \overline Z_i^m .
\end{eqnarray*}
A detailed account of $P_m$ can be found, for example, in \cite{Mor02b}.
When we embed $A$ into $T^2$ around the $(1,0)$ curve, matching the braid points suitably, the induced homomorphism from $\sk_n(A)$ to $\sk_n(T^2,*)$ gives the equation
\begin{eqnarray*}(s^m-s^{-m})\sum_i x_i^m&=& \labellist\small
\pinlabel {$P_m$} at 160 353
\endlabellist\uxfront \quad-\quad\labellist\small
\pinlabel {$P_m$} at 160 387
\endlabellist\uxback \\
&=&(1-c^{2m}) \ \labellist\small
\pinlabel {$P_m$} at 160 353
\endlabellist\uxfront
\end{eqnarray*}
Similarly, taking $A$ around the $(0,1)$ curve on $T^2$ we get
\begin{eqnarray*}(s^m-s^{-m})\sum_i y_i^m&=& \labellist\small
\pinlabel {$P_m$} at 135 405
\endlabellist\uyfront \quad-\quad\labellist\small
\pinlabel {$P_m$} at 105 405
\endlabellist\uyback \\
&=&(c^{-2m}-1) \ \labellist\small
\pinlabel {$P_m$} at 135 405
\endlabellist\uyback
\end{eqnarray*}
In $\sk_n(A)$, taking account of the crossing signs, we also have
\begin{eqnarray*} \labellist\small
\pinlabel {$P_m$} at 130 415
\endlabellist\backidleftblue \quad - \quad \labellist\small
\pinlabel {$P_m$} at 130 415
\endlabellist\frontidleftblue &=& -(s^m-s^{-m})\sum Z_i^{-m}\\
&=&-(s^m-s^{-m})\sum \overline Z_i^{-m}.\end{eqnarray*}
Placing $A$ along the $(1,0)$ curve then gives
\begin{eqnarray*} -(s^m-s^{-m})\sum x_i^{-m}&=& \labellist\small
\pinlabel {$P_m$} at 160 353
\endlabellist\uxfrontleft \quad-\quad\labellist\small
\pinlabel {$P_m$} at 160 387
\endlabellist\uxbackleft \\
&=&(1-c^{-2m}) \ \labellist\small
\pinlabel {$P_m$} at 160 353
\endlabellist\uxfrontleft\ ,\\
\end{eqnarray*}
while placing $A$ along the $(0,1)$ curve gives
\begin{eqnarray*}-(s^m-s^{-m})\sum_i y_i^{-m}&=& \labellist\small
\pinlabel {$P_m$} at 135 405
\endlabellist\uyfrontdown \quad-\quad\labellist\small
\pinlabel {$P_m$} at 105 405
\endlabellist\uybackdown \\
&=&(c^{2m}-1) \ \labellist\small
\pinlabel {$P_m$} at 105 405
\endlabellist\uybackdown\\
\end{eqnarray*}
In \cite{SV11} there is a description of the elliptic Hall algebra which involves generators $u_{\mathbf{x}}$ for every non-zero ${\mathbf{x}}\in \mathbb{Z}^2$. These elements satisfy certain commutation relations, and the comparison with the algebras $\ddot{H}_n$ requires the prescription of an image for each $u_{\mathbf{x}}$, and a check on their commutation properties.
We can give a version of this comparison by using the skein $\sk_n(T^2,*)$, and the skein $\sk(T^2 -D^2)$.
Fix a disc $D^2$ in $T^2$ which includes the braid points and the base point. A suitable choice for our purposes is a neighbourhood of the diagonal in the square.
In the previous section we introduced the homomorphism
\begin{equation}\label{eq:fillinpunc}
\varphi_n: \sk(T^2-D^2)\to \sk_n(T^2,*)
\end{equation}
defined by taking the banded curves in $T^2 - D^2$ along with the identity $n$-braid in $\sk_n(T^2,*)$, consisting of $n$ vertical strings in $D^2\times I$ and the base string.
Now any oriented embedded curve in $T^2 - D^2$ is determined up to isotopy by a primitive element ${\mathbf{z}}\in \mathbb{Z}^2$, representing the homology class of the curve. This curve, framed by its neighbourhood in $T^2$, defines an element $W_{\mathbf{z}}\in \sk(T^2-D^2)$.
For any other non-zero ${\mathbf{x}}\in \mathbb{Z}^2$ write ${\mathbf{x}}=m{\mathbf{z}}$ with $m>0$ and ${\mathbf{z}}$ primitive, and define $W_{\mathbf{x}}$ to be $W_{\mathbf{z}}$ with the closed curve decorated by the element $P_m$.
We will write $W_{\mathbf{x}}$ also for its image in the skein $\sk_n(T^2,*)$.
We then have plan views of $W_{(\pm m,0)}$ and $W_{(0,\pm m)}$ as
\begin{center} $W_{(m,0)}\ =\ $ \labellist\small
\pinlabel {$P_m$} at 160 353
\endlabellist\uxfront
\ ,\quad $W_{(-m,0)}\ =\ $ \labellist\small
\pinlabel {$P_m$} at 160 353
\endlabellist\uxfrontleft
\ ,
\end{center}
\begin{center} $W_{(0,m)}\ =\ $ \labellist\small
\pinlabel {$P_m$} at 105 405
\endlabellist\uyback\ , \quad $W_{(0,-m)}\ =\ $ \labellist\small
\pinlabel {$P_m$} at 105 405
\endlabellist\uybackdown\ ,
\end{center}
Our equations above show that
\begin{align}
(1-c^{2m})W_{(m,0)}&=(s^m-s^{-m})\sum x_i^m,\label{eq:Wasbraid}\\
(c^{-2m}-1)W_{(-m,0)}&=(s^m-s^{-m})\sum x_i^{-m}\notag\\
(c^{-2m} - 1)W_{(0,m)}&=(s^m-s^{-m})\sum y_i^m\notag \\
(1-c^{2m})W_{(0,-m)}&=(s^m-s^{-m})\sum y_i^{-m}.\notag
\end{align}
\subsection{Comparison with the algebraic approach}
For non-zero ${\mathbf{x}}\in \mathbb{Z}^2$ Schiffmann and Vasserot in \cite{SV11} define elements $Q_{\mathbf{x}}$ in the spherical algebra $S\ddot{\mathrm{H}}_n$, where $S\ddot{\mathrm{H}}_n$ is defined as $e_n\ddot{\mathrm{H}}_n e_n$, with $e_n\in H_n$ being the symmetrizer. They use the elements $Q_{\mathbf{x}}$ in setting up their comparisons with the elliptic Hall algebra.
Using the identification of $\bsk_n(T^2,*)$ with $\ddot{\mathrm{H}}_n$, where $q=c^{-2}, s^2=t$, we show now that our elements $W_{\mathbf{x}}$ are closely related to $Q_{\mathbf{x}}\in S\ddot{\mathrm{H}}_n$, when mapped into the full skein algebra $\sk_n(T^2,*)$.
Before doing this we note the construction of the symmetrizer $e_n\in H_n\subset \ddot{\mathrm{H}}_n$ in the braid skein setting, as used by Aiston and Morton in \cite{AM98}.
We use the model of the Hecke algebra $H_n$ described in \cite{MT90}, and further in \cite{AM98}. The symmetrizer is given there as a multiple of the quasi-idempotent
$a_n=\sum s^{l(\pi )}\omega _{\pi}$, where $\omega _{\pi}$ is the positive permutation braid associated to the permutation $\pi$ with length $l(\pi )$ in the symmetric group. The symmetrizer is then $e_n=\frac{1}{\alpha_n}a_n$ where $\alpha_n$ is given by the equation $a_na_n=\alpha _na_n$ \cite{Luk05, AM98}. Using the quasi-idempotent $b_n=\sum (-s)^{-l(\pi )}\omega _{\pi}$ in a similar way gives the antisymmetrizer.
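For concreteness, the defining property $a_na_n=\alpha_na_n$ can be checked directly in the smallest case $n=2$, where $a_2=1+s\sigma_1$. The sketch below (ours, not taken from the cited references) works in the basis $\{1,\sigma_1\}$ of $H_2$, using the quadratic relation $\sigma_1^2=1+(s-s^{-1})\sigma_1$ that appears later in this paper, and recovers $\alpha_2=1+s^2$:

```python
import sympy as sp

s = sp.symbols('s')

# H_2 has basis {1, sigma} with sigma^2 = 1 + (s - 1/s)*sigma.
# An element a0 + a1*sigma is stored as the pair (a0, a1).
def mul(a, b):
    a0, a1 = a
    b0, b1 = b
    c0 = a0*b0 + a1*b1                    # sigma^2 contributes 1
    c1 = a0*b1 + a1*b0 + a1*b1*(s - 1/s)  # and (s - s^{-1})*sigma
    return (sp.expand(c0), sp.expand(c1))

a2 = (sp.Integer(1), s)  # a_2 = sum over S_2 of s^{l(pi)} omega_pi = 1 + s*sigma
sq = mul(a2, a2)
alpha2 = 1 + s**2
assert sq == (sp.expand(alpha2*a2[0]), sp.expand(alpha2*a2[1]))
print(alpha2)            # so e_2 = a_2/(1 + s^2)
```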
We prefer to avoid the notation $S$ for the symmetrizer, because of conflict with the notation for the symmetric group. In \cite{SV11} the element $a_n$ is denoted by $\tilde S$, and the symmetrizer by $S$.
\begin{theorem} \label{PWcomparison} For ${\mathbf{x}}\in\mathbb{Z}^2$ we have the following equality in $\sk_n(T^2,*)$:
\[
(q^m-1)e_nW_{\mathbf{x}} e_n=(s^m-s^{-m})Q_{\mathbf{x}},
\]
where ${\mathbf{x}}=m{\mathbf{y}}$ with ${\mathbf{y}}$ primitive and $m>0$.
\end{theorem}
\begin{proof} We start from the definition in \cite{SV11} which sets $Q_{(0,m)}:=e_n\sum Y_i^m e_n$ for $m>0$.
Our third equation above proves the theorem for ${\mathbf{x}}=(0,m)$, since \[(q^m-1)e_n W_{(0,m)} e_n= (s^m-s^{-m}) e_n\sum Y_i^me_n = (s^m-s^{-m}) Q_{(0,m)}.\]
When ${\mathbf{x}}= ({\pm m,0}), ({0, -m})$ the values of $Q_{\mathbf{x}}$ are shown in \cite[Eq. 2.16-2.18]{SV11} to be
\begin{eqnarray*} Q_{(-m,0)}&=& e_n\sum X_i^{-m}e_n\\
Q_{(0,-m)}&=& q^me_n\sum Y_i^{-m}e_n\\
Q_{(m,0)}&=& q^me_n \sum X_i^me_n.
\end{eqnarray*}
The theorem follows immediately from \eqref{eq:Wasbraid} in these cases too, since \[(q^m-1)W_{(-m,0)}=(s^m-s^{-m})\sum x_i^{-m},\] giving the case ${\mathbf{x}}=(-m,0)$, while \[(q^m-1)W_{(m,0)}=(s^m-s^{-m})q^m\sum x_i^{m}\] and \[(q^m-1)W_{(0,-m)}=(s^m-s^{-m})q^m\sum y_i^{-m},\] giving the other two cases.
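The scalar manipulation in the last step, using the identification $q=c^{-2}$ noted above, can be verified symbolically; a trivial but reassuring check:

```python
import sympy as sp

c, m = sp.symbols('c m', positive=True)
q = c**-2  # the identification q = c^{-2} used in the comparison

# (1 - c^{2m}) W = R  implies  (q^m - 1) W = q^m R, since q^m = c^{-2m}:
diff = (q**m - 1) - q**m*(1 - c**(2*m))
assert sp.simplify(sp.expand(diff)) == 0
print("(q^m - 1) = q^m (1 - c^{2m})")
```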
We use automorphisms of $\ddot{\mathrm{H}}_n$, and their counterparts in the skein models $\bsk_n(T^2,*)$ and $\sk_n(T^2,*)$, to establish the proof for general ${\mathbf{x}}$.
Firstly, in our skein model, a right-hand Dehn twist about the (unoriented) $(1,0)$ curve in $T^2 - D$ induces an automorphism $\tau_1$ of $\sk(T^2-D^2)$, which carries $W_{{\mathbf{x}}}$ to $W_{\mathbf{y}}$ with
\[
{\mathbf{y}}=
\begin{pmatrix} 1&1\\ 0&1 \end{pmatrix}{\mathbf{x}}.
\]
A left-hand Dehn twist about the unoriented $(0,1)$ curve in $T^2 - D$ induces an automorphism $\tau_2$ of $\sk(T^2-D^2)$, which carries $W_{{\mathbf{x}}}$ to $W_{\mathbf{y}}$ with \[{\mathbf{y}}=\begin{pmatrix} 1&0\cr 1&1 \end{pmatrix}{\mathbf{x}}.\]
These two automorphisms generate all homeomorphisms of $T^2$ which fix $D$, up to isotopy fixing $\partial D$. This group of automorphisms is isomorphic to the braid group $B_3$ with $\tau_1$ and $\tau_2^{-1}$ playing the roles of the usual Artin generators $\sigma_1, \sigma_2$. The kernel of the map to $SL(2,\mathbb{Z})$ is infinite cyclic, generated by $(\tau_1 \tau_2^{-1}\tau_1)^4$, which is the right-hand Dehn twist about $\partial D$.
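The $SL(2,\mathbb{Z})$ side of this statement is easy to confirm directly: the matrix of $\tau_1\tau_2^{-1}\tau_1$ is a rotation of order $4$, so $(\tau_1\tau_2^{-1}\tau_1)^4$ maps to the identity. A minimal sketch:

```python
def mat_mul(A, B):
    """Multiply two 2x2 integer matrices given as nested lists."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T1 = [[1, 1], [0, 1]]       # matrix of tau_1
T2_inv = [[1, 0], [-1, 1]]  # matrix of tau_2^{-1}

theta = mat_mul(mat_mul(T1, T2_inv), T1)
print(theta)  # [[0, 1], [-1, 0]]: rotation by 90 degrees, of order 4

M = [[1, 0], [0, 1]]
for _ in range(4):
    M = mat_mul(M, theta)
print(M)  # [[1, 0], [0, 1]]: (tau_1 tau_2^{-1} tau_1)^4 lies in the kernel
```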
For any ${\mathbf{x}}$ with $d({\mathbf{x}})=m>0$ we can find an automorphism $\gamma$ so that ${\mathbf{x}}=\gamma((0,m))$.
Now the effect of $\tau_1$ on the generators $\sigma_i, x_i, y_i$ of $\bsk_n(T^2,*)$ is
\begin{eqnarray*} \tau_1(\sigma_i)&=& \sigma_i\\
\tau_1(x_i)&=&x_i\\
\tau_1(y_i)&=& \eta_i
\end{eqnarray*} where
\[ \eta_i=y_i x_i \delta_i\] and \[\delta_i= \sigma_{i-1}\ldots \sigma_1 \sigma_1\ldots \sigma_{i-1}.\]
The effect of $\tau_2$ is
\begin{eqnarray*} \tau_2(\sigma_i)&=& \sigma_i\\
\tau_2(x_i)&=&\xi_i\\
\tau_2(y_i)&=& y_i
\end{eqnarray*} where
\[ \xi_i=x_i y_i \delta_i^{-1}.
\]
The automorphisms $\rho_1$ and $\rho_2$ used in \cite{SV11} agree with $\tau_2$ and $\tau_1$, given the correspondence of $x_i, y_i, \sigma_i$ with $X_i, Y_i, T_i^{-1}$ respectively.
Since $Q_{\mathbf{x}}$ is given from $Q_{(0,m)}$ by applying a suitable product of $\rho_1$ and $\rho_2$, the same automorphism will carry $W_{(0,m)}$ to $W_{\mathbf{x}}$ and the theorem will follow.
\end{proof}
\subsection{Without the symmetrizer}
Theorem \ref{PWcomparison}, which refers to elements of $\sk_n(T^2,*)$, suggests that $Q_{\mathbf{x}}$ could be defined
unambiguously from an element $\tilde Q_{\mathbf{x}}$ in $\ddot{\mathrm{H}}_n\cong \bsk_n(T^2,*)$ before passing to $S\ddot{\mathrm{H}}_n$. The kernel of the map from $B_3$ to $SL(2,\mathbb{Z})$ is generated by $(\tau_1 \tau_2^{-1}\tau_1)^4$. In the skein model this is a Dehn twist about the boundary of the disc $D$, and so in this model we expect the following theorem, which we can prove algebraically.
\begin{proposition}
For any $Z\in \ddot{\mathrm{H}}_n\cong \bsk_n(T^2,*)$ we have \[(\tau_1 \tau_2^{-1}\tau_1)^4 Z=\Delta^{-2} Z \Delta^2,\]
where $\Delta^2$ is the full twist braid in the finite Hecke algebra $H_n$.
\end{proposition}
\begin{proof} It is enough to prove this when $Z=x_1$ and $Z=y_1$, since these elements, along with $\sigma_i$, generate $\bsk_n(T^2,*)$. In the case $Z=\sigma_i$ we have $\tau_1(\sigma_i)=\tau_2(\sigma_i)=\sigma_i$, while the full twist $\Delta^2$ commutes with each $\sigma_i$.
We also know that
\begin{eqnarray*}
\tau_1(x_1)&=&x_1\\
\tau_1(y_1)&=& y_1 x_1\\
\tau_2^{-1}(x_1)&=&x_1 y_1^{-1}\\
\tau_2^{-1}(y_1)&=&y_1
\end{eqnarray*}
Writing $\tau_1 \tau_2^{-1}\tau_1=\theta$ we get
\[ \theta(x_1)=y_1^{-1}, \theta (y_1)=y_1 x_1 y_1^{-1}\] so
\[ \theta^2(x_1)=(\theta (y_1))^{-1}=y_1 x_1^{-1} y_1^{-1}=(y_1x_1)x_1^{-1}(y_1x_1)^{-1}\]
\[
\theta^2(y_1)=\theta (y_1)\theta (x_1)(\theta (y_1))^{-1}=(y_1x_1)y_1^{-1}(y_1x_1)^{-1}\]
Finally
\begin{eqnarray*} \theta^4(x_1)&=&\theta^2(y_1x_1)\theta^2(x_1^{-1})(\theta^2(y_1x_1))^{-1}=(y_1 x_1)(y_1^{-1}x_1^{-1})x_1(x_1 y_1)(y_1 x_1)^{-1}\\
&=&[x_1,y_1]^{-1}x_1 [x_1,y_1]\\[2mm]
\theta^4(y_1)&=&[x_1,y_1]^{-1}y_1 [x_1,y_1]
\end{eqnarray*}
Now $[x_1,y_1]=c^2\beta_n$ so \begin{eqnarray*}
\theta^4(x_1)&=&\beta_n^{-1}x_1\beta_n\\
\theta^4(y_1)&=&\beta_n^{-1}y_1\beta_n
\end{eqnarray*}
The result now follows since \[\Delta^2=w(\sigma_2,\cdots,\sigma_{n-1})\sigma_1\sigma_2\cdots\sigma_{n-1}\sigma_{n-1}\cdots\sigma_2\sigma_1 =w\beta_n\] and the braid $w$ commutes with $x_1$ and $y_1$.
\end{proof}
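The identities for $\theta^4$ above are pure free-group computations in $x_1,y_1$ (the central factor $c$ enters only at the final step). A small sketch of ours, encoding $x_1^{\pm1},y_1^{\pm1}$ as signed integers, confirms that $\theta^4$ acts on both generators by conjugation by $[x_1,y_1]=x_1y_1x_1^{-1}y_1^{-1}$:

```python
# Letters: 1 = x, -1 = x^{-1}, 2 = y, -2 = y^{-1}
def freely_reduce(word):
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()          # cancel adjacent inverse letters
        else:
            out.append(g)
    return out

def inv(word):
    return [-g for g in reversed(word)]

def apply_hom(images, word):
    out = []
    for g in word:
        out += images[g] if g > 0 else inv(images[-g])
    return freely_reduce(out)

# theta = tau_1 tau_2^{-1} tau_1 acts by x -> y^{-1}, y -> y x y^{-1}
theta = {1: [-2], 2: [2, 1, -2]}

w_x, w_y = [1], [2]
for _ in range(4):
    w_x = apply_hom(theta, w_x)
    w_y = apply_hom(theta, w_y)

comm = [1, 2, -1, -2]  # [x, y] = x y x^{-1} y^{-1}
assert w_x == freely_reduce(inv(comm) + [1] + comm)
assert w_y == freely_reduce(inv(comm) + [2] + comm)
print(w_x, w_y)
```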
\begin{remark} Simental, \cite[Lemma 2.4.20]{Simmental}, in notes forming part of a seminar series at MIT and Northeastern in 2017, makes a similar observation for the spherical algebra $\mathrm {S} \daha_n$, in order to demonstrate the construction of the elements $Q_{\mathbf{x}}$.
\end{remark}
We can go further and define $\tilde Q_{0,m}$ for $m>0$ by
\[\tilde Q_{0,m}=y_1^m+y_2^m+\cdots+y_n^m.\]
Then \[Q_{(0,m)}=e_n\tilde Q_{0,m} e_n,\] as in \cite{SV11}.
We follow the same procedure as in \cite{SV11} to define $\tilde Q_{\mathbf{x}}$ from $\tilde Q_{0,m}$ by applying an automorphism from $SL(2,\mathbb{Z})$ which takes $(0,m)$ to ${\mathbf{x}}$.
This gives a well-defined element $\tilde Q_{\mathbf{x}}$, provided we can show that $(\tau_1 \tau_2^{-1}\tau_1)^4=\theta^4$ acts trivially on $\tilde Q_{0,m} = \sum y_i^m$.
So we prove
\begin{lemma}
\[\Delta^{-2}(y_1^m+\cdots+y_n^m)\Delta^2 =y_1^m+\cdots+y_n^m.\]
\end{lemma}
\begin{proof}
It is enough to show that $y_1^m+\cdots+y_n^m$ commutes with $\sigma_i$ for all $i$. Now $\sigma_i$ commutes with $y_j$ for $j\ne i, i+1$. So we just need to show that $\sigma_i $ commutes with $y_i^m+y_{i+1}^m$.
This in turn follows once we prove that
\begin{eqnarray*} \sigma_i(y_i +y_{i+1})&=&(y_i +y_{i+1})\sigma_i\\
\sigma_i(y_i y_{i+1})&=&(y_i y_{i+1})\sigma_i
\end{eqnarray*}
Now \begin{eqnarray*} \sigma_i(y_i +y_{i+1})&=&\sigma_i y_i +\sigma_i^2y_i\sigma_i = \sigma_i y_i +y_i\sigma_i
+(s-s^{-1})\sigma_iy_i\sigma_i\\
& =&y_i\sigma_i +\sigma_iy_i\sigma_i^2 = (y_i +y_{i+1})\sigma_i, \\[2mm]
\sigma_i(y_i y_{i+1})&=&\sigma_iy_i\sigma_iy_i\sigma_i =y_{i+1}y_i\sigma_i =(y_i y_{i+1})\sigma_i.
\end{eqnarray*}
This completes the proof.
\end{proof}
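The first of the two commutation identities in the proof above, $\sigma_i(y_i+y_{i+1})=(y_i+y_{i+1})\sigma_i$, uses only the quadratic relation $\sigma_i^2=1+(s-s^{-1})\sigma_i$, so it can be checked mechanically by rewriting words in $\sigma=\sigma_i$ and $y=y_i$ (with $y_{i+1}=\sigma y\sigma$); the product identity additionally needs commutativity of the $y_j$, which this naive rewriting does not see. A sketch of ours for the first identity:

```python
import sympy as sp

s = sp.symbols('s')
delta = s - 1/s  # coefficient in sigma^2 = 1 + (s - s^{-1}) sigma

def reduce_terms(terms):
    """Rewrite 'SS' -> delta*'S' + '' until no word contains 'SS'.
    terms: dict mapping words over {'S', 'y'} to coefficients."""
    done = {}
    work = dict(terms)
    while work:
        word, coeff = work.popitem()
        i = word.find('SS')
        if i == -1:
            done[word] = sp.expand(done.get(word, 0) + coeff)
        else:
            for w, c in ((word[:i] + 'S' + word[i+2:], coeff*delta),
                         (word[:i] + word[i+2:], coeff)):
                work[w] = sp.expand(work.get(w, 0) + c)
    return {w: c for w, c in done.items() if c != 0}

# sigma*(y1 + y2) - (y1 + y2)*sigma, with y1 = 'y' and y2 = 'SyS':
expr = {'Sy': 1, 'SSyS': 1, 'yS': -1, 'SySS': -1}
print(reduce_terms(expr))  # {}: the commutator vanishes
```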
So we have constructed elements $\tilde Q_{\mathbf{x}} \in \ddot{\mathrm{H}}_n$ with $Q_{\mathbf{x}}=e_n \tilde Q_{\mathbf{x}} e_n$, which are related even more directly to the elements $W_{\mathbf{x}}$ in $\sk_n(T^2,*)$, in the following enhancement of theorem \ref{PWcomparison}.
\begin{theorem} For every non-zero ${\mathbf{x}}\in\mathbb{Z}^2$ we have
\[(q^m-1)W_{\mathbf{x}} =(s^m-s^{-m})\tilde Q_{\mathbf{x}},\] where ${\mathbf{x}}=m{\mathbf{y}}$ with ${\mathbf{y}}$ primitive and $m>0$.
\end{theorem}
\subsection{The punctured torus and elliptic Hall algebra}
In this subsection, we use the previous results in this section to show that Conjecture \ref{conj:mainiso} implies a weakened version of Conjecture \ref{conj:punctoeha}. Recall that $\mathbf{Z}^+ \subset \mathbf{Z} = \mathbb{Z}^2$ is defined by
\[
\mathbf{Z}^+ := \{(a,b) \in \mathbb{Z}^2 \mid a > 0\} \sqcup \{(0,b) \mid b \geq 0\}
\]
\begin{definition}\label{def:posskein}
Let $\sk^+(T^2- D^2) $ be the subalgebra of $ \sk(T^2- D^2)$ generated by $W_{\mathbf{x}}$ for ${\mathbf{x}} \in \mathbf{Z}^+ $.
\end{definition}
\begin{theorem}\label{thm:psktoeha}
If Conjecture \ref{conj:mainiso} is true, then there is a surjective algebra map $\sk^+(T^2- D^2) \twoheadrightarrow \mathcal{E}^+_{\sigma, \bar \sigma}$ sending $W_{\mathbf{x}}$ to $(s^{d({\mathbf{x}})} - s^{-d({\mathbf{x}})}) u_{\mathbf{x}}$.
\end{theorem}
\begin{proof}
By Conjecture \ref{conj:mainiso}, the map $F_n:\ddot H_n \to \sk_n(T^2,*)$ is an isomorphism, and we can compose its inverse with the natural map \[\varphi_n:\sk(T^2- D^2) \to \sk_n(T^2,*)\] to obtain a map $\sk(T^2 - D^2) \to \ddot H_n$. By Theorem \ref{PWcomparison}, this map satisfies the following equation:
\begin{equation*}
W_{\mathbf{x}} \mapsto \frac{s^{d({\mathbf{x}})} - s^{-d({\mathbf{x}})}}{q^{d({\mathbf{x}})} - 1} Q_{\mathbf{x}}
\end{equation*}
By Corollary \ref{cor:limitmap}, this proves the existence of the algebra map stated in the theorem, and surjectivity follows immediately from the definition of the subalgebra $\mathcal{E}_{\sigma, \bar \sigma}^+$.
\end{proof}
The simplest relations between the $W_{\mathbf{x}}$ are easy to check in the elliptic Hall algebra independently of Conjecture \ref{conj:mainiso} (see Remark \ref{rmk:ptorusrel}). However, describing all relations between the $W_{\mathbf{x}}$ in the punctured torus is an open problem.
\begin{remark}
It would be desirable to extend the map in Theorem \ref{thm:psktoeha} to a much larger subalgebra of $\sk(T^2-D^2)$, instead of the subalgebra $\sk^+(T^2 - D^2)$ generated by $W_{\mathbf{x}}$ for ${\mathbf{x}} \in \mathbf{Z}^+$.
It seems that the main difficulty is showing compatibility between the Schiffmann-Vasserot projections between spherical DAHAs and the maps from $\sk(T^2-D^2)$ to the spherical DAHAs.
Ideally this would follow from a topological interpretation of the Schiffmann-Vasserot projections as some kind of partial trace, but it is not clear whether such an interpretation exists. We do note that we have defined algebra maps from the entire skein algebra $\sk(T^2 - D^2)$ to the DAHAs (and not just the positive subalgebra). The technical difficulty here is that the Schiffmann-Vasserot projections between spherical DAHAs of different ranks are only defined on the ``positive subalgebras.''
\end{remark}
astro-ph/9804305
\section{Introduction\label{sec:adia-intro}}
ZZ Cetis, also called DAVs, are variable white dwarfs with hydrogen
atmospheres. Their photometric variations are associated with
nonradial gravity-modes (g-modes); for the first conclusive proof, see
Robinson et al.\, (\cite{scaling-robinson82}). These stars have shallow
surface convection zones overlying stably stratified interiors. As a
result of gravitational settling, different elements are well
separated. With increasing depth, the composition changes from
hydrogen to helium, and then in most cases to a mixture of carbon and
oxygen. From center to surface the luminosity is carried first by
electron conduction, then by radiative diffusion, and finally by
convection.
Our aim is to describe the mechanism responsible for the overstability
of g-modes in ZZ Ceti stars. This topic has received attention in the
past. Initial calculations of overstable modes were presented in
Dziembowski \& Koester (\cite{adia-dziem81}), Dolez \& Vauclair
(\cite{adia-dolez81}), and Winget et al.\, (\cite{adia-winget82}). These
were based on the assumption that the convective flux does not respond
to pulsation; this is often referred to as the frozen convection
hypothesis. Because hydrogen is partially ionized in the surface
layers of ZZ Ceti stars, these workers attributed mode excitation to
the $\kappa$-mechanism. In so doing, they ignored the fact that the
thermal time-scale in the layer of partial ionization is many orders
of magnitude smaller than the periods of the overstable modes. Pesnell
(\cite{adia-pesnell87}) pointed out that in calculations such as those
just referred to, mode excitation results from the outward decay of
the perturbed radiative flux at the bottom of the convective
envelope. He coined the term `convective blocking' for this excitation
mechanism.\footnote{This mechanism was described in a general way by
Cox \& Giuli (\cite{adia-cox68}), and explained in more detail by
Goldreich \& Keeley (\cite{adia-goldreich77}).} Although convective
blocking is responsible for mode excitation in the above cited
references, it does not occur in the convective envelopes of ZZ Ceti
stars. This is because the dynamic time-scale for convective
readjustment (i.e., convective turn-over time) in these stars is much
shorter than the g-mode periods. Noting this, Brickhill
(\cite{adia-brick83}, \cite{adia-brick90}, \cite{adia-brick91a},
\cite{adia-brick91b}) assumed that convection responds instantaneously
to the pulsational state. He demonstrated that this leads to a new
type of mode excitation, which he referred to as convective
driving. Brickhill went on to present the first physically consistent
calculations of mode overstability, mode visibility, and instability
strip width. Our investigation supports most of his
conclusions. Additional support for convective driving is provided by
Gautschy et al.\, (\cite{adia-gautschy96}) who found overstable modes in
calculations in which convection is modeled by hydrodynamic
simulation.
In this paper we elucidate the manner in which instantaneous
convective adjustment promotes mode overstability. We adopt the
quasiadiabatic approximation in the radiative interior. We also ignore
the effects of turbulent viscosity in the convection zone. These
simplifications enable us to keep our investigation analytical,
although we appeal to numerically computed stellar models and
eigenfunctions for guidance. The DA white dwarf models we use are
those produced by Bradley (\cite{scaling-bradley96}) for
asteroseismology. Fully nonadiabatic results, which require numerical
computations, will be reported in a subsequent paper. These modify the
details, but not the principal conclusions arrived at in the present
paper.
The plan of our paper is as follows. The linearized wave equation is
derived in \S \ref{sec:adia-prepare}. In \S \ref{sec:adia-perturb}, we
evaluate the perturbations associated with a g-mode in different parts
of the star. We devote \S \ref{sec:adia-driving} to the derivation of
a simple overstability criterion. Relevant time-scales and the
validity of the quasiadiabatic approximation are discussed in \S
\ref{sec:adia-discussion}. The appendix contains derivations of
convenient scaling relations for the dispersion relation, the WKB
eigenfunction, and the amplitude normalization.
\section{Preparations}
\label{sec:adia-prepare}
We adopt standard notation for the thermodynamic variables pressure,
$p$, density, $\rho$, and temperature, $T$. Our specific entropy, $s$,
is dimensionless; we measure it in units of $k_B/m_p$, where $k_B$ is
the Boltzmann constant and $m_p$ the proton mass. We denote the
gravitational acceleration by $g$, the opacity per unit mass by
$\kappa$, the pressure scale height by $H_p
\equiv p/(g\rho)$, and the adiabatic sound speed by $c_s$, where
$c_s^2 \equiv ({\partial p} /{\partial \rho})_s$.
A g-mode in a spherical star is characterized by three eigenvalues
$(n,\ell,m)$; $n$ is the number of radial nodes in the radial
component of the displacement vector, ${\ell}$ is the angular degree,
and $m$ the azimuthal separation parameter.\footnote{We ignore $m$ in
this paper since DAVs are slow rotators.} G-modes detected in DAV
stars have modest radial orders, $1\leq n\leq 25$, and low angular
degrees, $1\leq {\ell}\leq 2$.\footnote{The latter is probably an
observational selection effect.} Their periods fall in the range
between $100 \, \rm s$ and $1200 \, \rm s$. G-mode periods increase with $n$ and
decrease with ${\ell}$ (see \S
\ref{subsec:adia-scaling-disp}). These properties reflect the nature of the
restoring force, namely gravity, which opposes departures of surfaces
of constant density from those of constant gravitational potential.
Gravity waves propagate where $\omega$ is smaller than both the radian
Lamb (acoustic) frequency, $L_\ell$, and the radian Brunt-V{\"a}is{\"a}l{\"a}\,\,
(buoyancy) frequency, $N$. These critical frequencies are defined by
$L_{\ell}^2={{\ell}({\ell}+1)}(c_s/r)^2$, and $N^2\equiv -
g(\partial\ln\rho/\partial s)_p \, ds/dr$. Electron degeneracy
reduces the buoyancy frequency in the deep interior of a white dwarf
(cf. Fig. 1). Consequently, the uppermost radial node of even the
lowest order g-mode is confined to the outer few percent of the
stellar radius. Since all of the relevant physics takes place in the
region above this node, it is convenient to adopt a plane-parallel
approximation with $g$ taken to be constant in both space and time.
In place of the radial coordinate $r$, we use the depth $z$ below the
photosphere. We neglect gravitational perturbations; due to the small
fraction of the stellar mass that a mode samples, its gravitational
perturbation is insignificant. A few order of magnitude relations to
keep in mind are: $p\sim g\rho z$, $H_p\sim z$, $d\ln\rho/dz\sim 1/z$,
and $c_s^2\sim gz$. In the convection zone, $N^2\sim -(v_{\rm cv}/c_s)^2 g/z
\sim - (v_{\rm cv}/z)^2$, where $v_{\rm cv}$ is the convective velocity. In the
upper radiative interior, $N^2\sim g/z \sim (c_s/z)^2$.
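As a purely illustrative sketch (not part of the original analysis), the plane-parallel scalings above can be evaluated numerically; the surface gravity, depth, and density below are assumed round numbers for a DA white dwarf envelope.

```python
# Illustrative check of the plane-parallel order-of-magnitude relations
# p ~ g*rho*z, c_s^2 ~ g*z, and N^2 ~ g/z in the upper radiative interior.
# All input values are assumed for illustration only (cgs units).
g = 1e8        # surface gravity, cm s^-2 (assumed)
z = 1e7        # depth below the photosphere, cm (assumed)
rho = 1e-3     # density at that depth, g cm^-3 (assumed)

p = g * rho * z          # pressure estimate, dyne cm^-2
cs2 = g * z              # adiabatic sound speed squared, cm^2 s^-2
N2 = g / z               # Brunt-Vaisala frequency squared, s^-2

print(f"p   ~ {p:.1e} dyne/cm^2")
print(f"c_s ~ {cs2 ** 0.5:.1e} cm/s")
print(f"N   ~ {N2 ** 0.5:.1e} rad/s")
```

With these assumed inputs the buoyancy frequency comes out of order a few rad/s, comfortably above the observed g-mode frequencies, consistent with the propagation condition $\omega < N$.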
The linearized equations of mass and momentum conservation, augmented
by the linearized, adiabatic equation of state, read\footnote{Readers
are referred to Unno et al.\, (\cite{scaling-unno89}) for detailed
derivations of these equations.}
\begin{eqnarray}
{\delta\rho\over\rho} & = & -ik_h\xi_h-{d\xi_z\over dz},
\label{eq:adia-cont} \\
\omega^2\xi_h & = & ik_h\left[{p\over\rho} \left({\delta p\over
p}\right)-g\xi_z\right], \label{eq:adia-momh} \\
\omega^2\xi_z & = & {p\over\rho}{d\over dz}\left({\delta p\over p}\right)
+g\left({\delta p\over p}-{\delta\rho\over\rho}-{d\xi_z\over dz}\right),
\label{eq:adia-momz} \\
\delta p & = & c_s^2\delta\rho. \label{eq:adia-state}
\end{eqnarray}
In the above equations, $\xi_h$ and $\xi_z$ represent horizontal and
vertical components of the displacement vector, $\delta$ denotes a
Lagrangian perturbation, and $\exp{i(k_h x -\omega t)}$ describes the
horizontal space and time dependence of the perturbations. The
horizontal component of the propagation vector, $k_h$, is related to
the angular degree, ${\ell}$, by
\begin{equation}
k_h^2={{\ell}({\ell}+1)\over R^2}, \label{eq:adia-kh}
\end{equation}
where $R$ is the stellar radius. After some manipulation, equations
\refnew{eq:adia-momh} and
\refnew{eq:adia-momz} yield
\begin{equation}
\xi_h={ig^2k_h\over (gk_h)^2-\omega^4}\left[{p\over g\rho}{d\over
dz}\left({\delta p\over p}\right)+\left(1-{\omega^2p\over
g^2\rho}\right)
\left({\delta p\over p}\right)\right], \label{eq:adia-xih}
\end{equation}
and
\begin{equation}
\xi_z={-g\omega^2\over (gk_h)^2-\omega^4}\left[{p\over g\rho}{d\over
dz}\left({\delta p\over p}\right)+\left(1-{k_h^2p\over\omega^2\rho}
\right)\left({\delta p\over p}\right)\right]. \label{eq:adia-xiz}
\end{equation}
Notice that these two equations are independent of the assumption of
adiabaticity.
The linear, adiabatic wave equation for the fractional, Lagrangian
pressure perturbation, $\delta p/p$, follows from combining equations
\refnew{eq:adia-cont}, \refnew{eq:adia-xih}, and \refnew{eq:adia-xiz},
\begin{equation}
{d^2\over dz^2}\left({\delta p\over p}\right)+{d\over
dz}\left(\ln{p^2\over\rho}\right){d\over dz}\left({\delta p\over
p}\right)+
\left[k_h^2\left({N^2\over\omega^2}-1\right)+\left({\omega\over
c_s}\right)^2 \right]\left({\delta p\over
p}\right)=0. \label{eq:adia-wave}
\end{equation}
The advantages of choosing $\delta p/p$ as the dependent variable will
become apparent as we proceed.
The appendix is devoted to deducing the properties of g-modes from the
wave equation \refnew{eq:adia-wave}. Here we summarize results that
are needed in the main body of the paper. A g-mode's cavity coincides
with the radial interval within which $\omega$ is smaller than both
$N$ and $L_{\ell}$. Inside this cavity the mode's velocity field is
relatively incompressible. The lower boundary of each g-mode's cavity
is set by the condition $\omega=N$. For modes of sufficiently high
frequency, the relation $\omega=L_\ell$ is satisfied in the radiative
interior, and it determines the location of the upper
boundary. Otherwise, the upper boundary falls in the transition layer
between the radiative interior and the convective envelope where
$\omega=N$. In this paper, we restrict consideration to modes which do
not propagate immediately below the convection zone since these
include the overstable modes. To order of magnitude, the condition
$\omega \sim L_{\ell}$ is satisfied at $z_\omega \sim
{\omega^2}/{(gk_h^2)}$. If we denote the depth of the first radial
node of ${\delta p/p}$ as $z_1$, and the depth at the bottom of the
surface convection zone as $z_b$, then $z_1 \sim z_\omega$ provided
$z_\omega > z_b$.
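The estimate $z_\omega \sim \omega^2/(g k_h^2)$ can be illustrated with a short numerical sketch (ours, not the paper's); the stellar radius and surface gravity are assumed round numbers for a DA white dwarf, and the period is chosen within the observed range.

```python
import math

# Illustrative estimate of the depth z_omega ~ omega^2/(g*k_h^2), where
# omega ~ L_ell, for an ell = 1 g-mode. Stellar parameters are assumed.
g = 1e8                      # surface gravity, cm s^-2 (assumed)
R = 9e8                      # stellar radius, cm (assumed)
ell = 1
period = 200.0               # mode period, s (within the 100-1200 s range)

omega = 2 * math.pi / period
kh2 = ell * (ell + 1) / R**2          # eq. for k_h^2
z_omega = omega**2 / (g * kh2)
print(f"z_omega ~ {z_omega:.2e} cm ~ {z_omega / R:.1e} R")
```

For these assumed numbers $z_\omega$ is a few times $10^6\, \rm cm$, i.e., a fraction of a percent of the stellar radius, in line with the statement that the mode cavity is confined to the outer few percent of the star.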
The fractional Lagrangian pressure perturbation, $\delta p/p$, is
almost constant in the convection zone ($z\leq z_b$), and declines
smoothly to zero in the upper evanescent region ($ z_b \leq z \leq
z_1$). Its envelope scales with depth in the cavity so as to maintain
a constant vertical energy flux. An equal amount of energy is stored
between each pair of consecutive radial nodes. For modes having
$z_\omega>z_b$, the photospheric value of the normalized eigenfunction
is given by $(\delta p/p)_{\rm{ph}}\sim 1/(n\tau_\omega L)^{1/2}$.
\begin{figure}
\centerline{\psfig{figure=fig1.ps,width=0.75\hsize}}
\caption[]{The squares of the radian Brunt-V{\"a}is{\"a}l{\"a}\,\, frequency, $N^2$, and
Lamb frequency, $L_{\ell}^2$, in $\, \rm s^{-2}$ as functions of pressure in
$\, \rm dyne\, \rm cm^{-2}$ for Bradley's white dwarf model with $T_{\rm
eff}=12,000\, \rm K$. $N^2$ is negative in the convection zone $(\log p\leq
9.3)$, and drops with increasing depth to near zero in the degenerate
interior $(\log p\geq 23)$. The little bump at $\log p
\sim 19$ is due to the sharp compositional transition from hydrogen to
helium. The long dashed line is our analytic approximation, $N^2 \sim
(g/z)$. Notice that it worsens as degeneracy becomes important.
G-modes with frequency $\omega$ propagate in regions where both
$\omega < N$ and $\omega < L_{\ell}$. These regions are depicted by
horizontal dashed lines for two ${\ell} = 1$ modes having periods of
$134 \, \rm s$ and $1200 \, \rm s$ respectively. The former has its upper cavity
terminated where $\omega = L_{\ell}$, while the latter propagates
until immediately below the convection zone where $\omega =N$.
\label{fig:scaling-bruntplot}}
\end{figure}
\section{Perturbations Associated with G-Mode Pulsations
\label{sec:adia-perturb}}
Here we derive expressions relating various perturbation quantities
associated with g-modes.\footnote{When reading this section, it is
important to bear in mind that all perturbation quantities have an
implicit time and horizontal space dependence of the form $\exp{i(k_h
x-\omega t)}$.} These are used in \S
\ref{sec:adia-driving} to evaluate
the driving and damping in various parts of the star. Since
theoretical details are of limited interest, we summarize selected
results at the end of this section.
We start with the radiative interior and proceed outward through the
convection zone to the photosphere. A new symbol, $\Delta$, is
introduced to denote variations associated with a g-mode at a
particular level, such as the photosphere, or within a particular
layer, such as the convection zone. These variations are not to be
confused with either Lagrangian variations which we denote by
$\delta$, or Eulerian variations.
This section is replete with thermodynamic derivatives. In most
instances we take $\rho$ and $T$ as our independent variables. Unless
specified otherwise, it is implicitly assumed that partial derivatives
with respect to one are taken with the other held constant. We adopt
the shorthand notation: $p_\rho\equiv \partial\ln p/\partial\ln \rho$,
$p_T\equiv \partial \ln p/\partial \ln T$, $s_\rho\equiv\partial
s/\partial\ln\rho $, $s_T\equiv\partial s/\partial\ln T$,
$\kappa_\rho\equiv
\partial\ln\kappa/\partial\ln \rho$, $\kappa_T\equiv \partial \ln
\kappa/\partial \ln T$. However, we use the symbols
$\rho_s\equiv (\partial\ln\rho/\partial s)_p$ and $c_p\equiv
T(\partial s/\partial T)_p$. These thermodynamic quantities evaluated
for a model DA white dwarf are displayed in Fig.
\ref{fig:adia-scaling-sTandsoon}.
\begin{figure}
\centerline{\psfig{figure=fig2.ps,width=0.85\hsize}}
\caption[]{Values of various dimensionless thermodynamic quantities
as a function of pressure in $\, \rm dyne\, \rm cm^{-2}$ for Bradley's white dwarf
model with $T_{\rm eff}=12,300\, \rm K$. The convection zone extends from
the surface down to $\log p \sim 8.5$. Both $s_T$ and $\kappa_T$
change significantly with depth in the upper layers where hydrogen is
partially ionized. The fully ionized region has mean molecular weight
$\mu = 1/2$. Other thermodynamic quantities can be obtained from those
plotted by using the relations: $\rho_s = p_T/(p_T s_\rho - p_\rho
s_T)$, $c_p = (s_T p_\rho - s_\rho p_T)/p_\rho$, and $\Gamma_1 = c_s^2
\rho/p = (p_\rho s_T - p_T s_\rho)/s_T$.
\label{fig:adia-scaling-sTandsoon} }
\end{figure}
\subsection{Radiative Interior\label{subsec:adia-radia}}
The radiative flux in an optically thick region obeys the diffusion
equation,
\begin{equation}
F={4\sigma\over 3\kappa\rho}{dT^4\over dz}, \label{eq:adia-Frad}
\end{equation}
where $\sigma$ is the Stefan-Boltzmann constant. The Lagrangian
perturbation of the flux takes the form
\begin{equation}
{\delta F\over F}=-(1+\kappa_\rho){\delta\rho\over\rho}
+(4-\kappa_T){\delta T\over T} - {d\xi_z\over dz}
+\left({d\ln T\over dz}\right)^{-1}{d\over dz}
\left({\delta T\over T}\right). \label{eq:adia-eqdelF}
\end{equation}
Next we express $\delta F/F$ in terms of $\delta p/p$ within the
quasiadiabatic approximation. To accomplish this, we decompose $\delta
\rho/\rho$ and $\delta T/T$ into adiabatic and nonadiabatic components
by means of the thermodynamic identities
\begin{equation}
{\delta \rho\over \rho}={p\over c_s^2\rho}{\delta p\over p} + \rho_s\delta s
\approx {3\over 5}{\delta p\over p}-{\delta s\over 5},
\label{eq:adia-eqdelrho}
\end{equation}
and
\begin{equation}
{\delta T\over T}=-{s_\rho\over s_T}{p\over c_s^2\rho}{\delta p\over
p} + {\delta s\over c_p}\approx {2\over 5}{\delta p\over p}+{\delta s\over 5},
\label{eq:adia-eqdelT}
\end{equation}
where the coefficients are evaluated as appropriate for fully ionized
hydrogen plasma (cf. Fig. \ref{fig:adia-scaling-sTandsoon}).
Substitution of the adiabatic parts of these expressions into equation
\refnew{eq:adia-eqdelF} is carried out separately for the upper evanescent layer
and the g-mode cavity.
\subsubsection{Upper Evanescent Layer}
Within the region $z_b\leq z \leq z_\omega$, ${\delta p/p}\approx -i
k_h \xi_h$, and it varies on spatial scale $z_\omega$ (see the
appendix). Thus equation (\ref{eq:adia-cont}) leads to
\begin{equation}
{d\xi_z\over dz}\approx {\delta p\over p}-{\delta \rho\over \rho}.
\label{eq:adia-eqdxizdz}
\end{equation}
Moreover, the term
\begin{equation}
\left({d\ln T\over dz}\right)^{-1}{d\over dz}\left({\delta T\over
T}\right) \approx -\left({d\ln T\over dz}\right)^{-1}{d\over
dz}\left({s_\rho\over s_T}{p\over c_s^2\rho}{\delta p\over
p}\right), \label{eq:adia-graddelT}
\end{equation}
is of order $(z/z_\omega)(\delta p/p)$ and can be
discarded.\footnote{As shown in Fig. \ref{fig:adia-scaling-sTandsoon},
the factors $s_\rho\approx -2$, $s_T\approx 3$, and
$p/c_s^2\rho\approx 3/5$ are each nearly constant in the upper part of
the radiative interior.} Making these approximations, we arrive at
\begin{equation}
{\delta F\over F} \approx A {{\delta p}\over p},
\label{eq:adia-eqdelFup}
\end{equation}
where
\begin{equation}
A={\left(3-3\kappa_\rho-2\kappa_T\right)\over 5}.
\label{eq:adia-A}\end{equation}
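As a concrete check (our illustration, not drawn from the model of Fig. 2), the coefficient $A$ can be evaluated for an assumed Kramers-like opacity law, $\kappa\propto \rho T^{-3.5}$, for which $\kappa_\rho=1$ and $\kappa_T=-3.5$.

```python
def A_coefficient(kappa_rho, kappa_T):
    """Quasiadiabatic flux response delta F/F = A * (delta p/p) in the
    upper evanescent layer, assuming fully ionized hydrogen."""
    return (3 - 3 * kappa_rho - 2 * kappa_T) / 5

# Assumed Kramers-like opacity, kappa ~ rho * T**-3.5:
A = A_coefficient(kappa_rho=1.0, kappa_T=-3.5)
print(A)  # positive: the flux rises on adiabatic compression
```

The positive sign of $A$ for a strongly temperature-sensitive opacity is what makes the perturbed flux vary in phase with the pressure perturbation, a point used below in \S \ref{sec:adia-driving}.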
\subsubsection{G-Mode Cavity}
For $z\geq z_\omega$, perturbations vary on the spatial scale of
$1/k_z$, where $k_z\approx (N/\omega)k_h>1/z$ is the vertical
component of the wave vector as given by equation \refnew{eq:adia-DR}.
Here, the dominant contributions to $\delta F/F$ come from the last
two terms in equation (\ref{eq:adia-eqdelF}). The first transforms to
\begin{equation}
{d\xi_z\over dz}\approx -ik_z{\delta p\over g\rho}\sim -ik_z z{\delta p\over p},
\label{eq:adia-eqdxizdzl}
\end{equation}
as most easily seen from equation \refnew{eq:adia-momz}. It then follows
that
\begin{equation}
{\delta F\over F}\approx {ik_z p\over g\rho}{d\ln p\over d\ln
T}\left({\partial \ln T\over\partial\ln p}\biggr\vert_s-{d\ln T\over
d\ln p}\right){\delta p\over p}\approx {ik_z N^2\over g}\left({p\over
g\rho}\right)^2 {d\ln p\over d\ln T}{\delta p\over p},
\label{eq:adia-eqdelFlp}
\end{equation}
where we have set $p_\rho\approx 1$ and $p_T\approx 1$ as is appropriate for
a fully ionized plasma (cf. Fig. \ref{fig:adia-scaling-sTandsoon}).
\subsection{Convective Envelope\label{subsec:adia-convec}}
The net absorption of heat by the convective envelope is given by
\begin{equation}
\Delta Q=\int_{\rm cvz}\, dz\, \rho{k_B\over m_p}T\, \delta s\approx
\Delta s_b\int_{\rm cvz}\, dz\, \rho{k_B\over m_p}T. \label{eq:adia-eqDelQ}
\end{equation}
Here $\Delta s_b$ is the variation of the specific entropy evaluated
at the bottom of the convection zone; it is neither a Lagrangian nor
an Eulerian variation. The chain of argument leading to the second
expression for $\Delta Q$ requires some discussion. At equilibrium (no
pulsation), convection is so efficient that $ds/dz\ll s/z$ except in a
thin superadiabatic layer just below the photosphere; the unperturbed
convection zone is nearly isentropic. Moreover, the rapid response of
the convective motions ensures that $d\delta s/dz\ll
\delta s/z$; the vertical component of the perturbed entropy gradient
is small during pulsation. Because both $ds/dz\approx 0$ and $d\delta
s/dz\approx 0$, $\Delta s_b\approx \delta s$.
To relate $\delta s$ with the flux perturbation, we treat convection
by means of a crude mixing-length model. At equilibrium, the
convective flux is set equal to\footnote{Where the convection is
efficient, $F$ is comparable to the total flux.}
\begin{equation}
F\sim \alpha H_pv_{\rm cv} {\rho k_B T\over m_p}{ds\over dz}, \label{eq:adia-Fcv}
\end{equation}
where $\alpha$ is the ratio of mixing length to the pressure scale height. The
convective velocity, $v_{\rm cv}$, satisfies\footnote{Note that $\rho_s
\equiv p_T/ (p_Ts_\rho-p_\rho s_T) < 0$.}
\begin{equation}
v_{\rm cv}^2\sim -(\alpha H_p)^2 g\rho_s {ds\over
dz}=-\left(\alpha H_p N\right)^2. \label{eq:adia-vcv}
\end{equation}
Eliminating $ds/dz$ and solving for $v_{\rm cv}$, we find
\begin{equation}
F\sim \rho v_{\rm cv}^3.
\label{eq:adia-Fvcv}\end{equation}
Reversing the procedure and solving for $ds/dz$ yields
\begin{equation}
s_b-s_{\rm ph}\equiv \int_{\rm cvz}\, dz{ds\over dz} \approx H_p {ds\over dz}
\approx f\left({-1\over\rho_s\rho p}\right)^{1/3}
\left({m_pF\over k_B T}\right)^{2/3}, \label{eq:adia-eqsmsb}
\end{equation}
where $s_{\rm ph}$ is the entropy at the photosphere. The
dimensionless factor $f\sim \alpha^{-4/3}$ is of order unity. The
right hand side of equation \refnew{eq:adia-eqsmsb} is evaluated at
the photosphere, since the entropy jump is concentrated immediately
below it.
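The mixing-length closure $F\sim\rho v_{\rm cv}^3$ can be turned into a rough velocity estimate; the sketch below is ours, with an assumed photospheric density and an effective temperature typical of the instability strip.

```python
# Illustrative use of F ~ rho * v_cv**3 to estimate the convective
# velocity near the photosphere. Input values are assumed, not taken
# from the text (cgs units).
sigma_SB = 5.67e-5           # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
Teff = 12000.0               # K, typical ZZ Ceti effective temperature
rho_ph = 1e-6                # photospheric density, g cm^-3 (assumed)

F = sigma_SB * Teff**4       # emergent flux, erg cm^-2 s^-1
v_cv = (F / rho_ph) ** (1.0 / 3.0)
print(f"F    ~ {F:.2e} erg/cm^2/s")
print(f"v_cv ~ {v_cv:.1e} cm/s")
```

The resulting $v_{\rm cv}$ of order $10^6\, \rm cm\, \rm s^{-1}$ is far below the sound speed at depth, consistent with the efficient-convection picture used above.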
We assume that the mixing length model applies during pulsation, as is
consistent with the short convective turn-over time. Thus we write
the variation of $s_b-s_{\rm ph}$ associated with a g-mode as
\begin{equation}
{\Delta(s_b-s_{\rm ph})\over (s_b-s_{\rm ph})} = C {\Delta F_{\rm ph}\over F},
\label{eq:adia-eqDelsmsb} \end{equation}
where
\begin{eqnarray}
C & = & {1\over 12}
\biggr\{
6+{{p_T(1-\kappa_\rho) +\kappa_T(1+p_\rho)}\over {p_\rho+\kappa_\rho}}
+ {1\over {p_T(p_\rho s_T-p_Ts_\rho)}}
\nonumber \\
& & \biggr [
p_T(p_{\rho T}s_T+p_\rho s_{\rm TT}-p_Ts_{\rho T})-p_T^2s_{\rho T}
-p_{\rm TT}p_\rho s_T
\nonumber \\
& & +\left({p_T+\kappa_T}\over
{p_\rho+\kappa_\rho}\right)
\left(p_T^2s_{\rho\rho}-p_T(p_{\rho\rho}s_T+p_\rho s_{\rho T})+ p_\rho
p_{\rho T}s_T \right) \biggr ] \biggr\}.
\label{eq:adia-C} \end{eqnarray}
The dimensionless number $C$ is to be evaluated at the photosphere. In
practice, only a few terms in the complicated expression for $C$ make
significant contributions. In arriving at equations \refnew{eq:adia-eqDelsmsb}
and \refnew{eq:adia-C}, we implicitly assume that convection carries the entire
stellar flux, and that $f$ does not vary during the pulsational cycle.
\subsection{Photosphere\label{subsec:adia-photo}}
The photospheric temperature and pressure are determined by
\begin{equation}
T\approx\left({F\over \sigma}\right)^{1/4}, \label{eq:adia-Tphot}
\end{equation}
and
\begin{equation}
p\approx {2\over 3}{g\over \kappa}. \label{eq:adia-pphot}
\end{equation}
Provided most of the flux in the photosphere is carried by radiation,
the temperature gradient is set by the equation of radiative diffusion
(eq. [\ref{eq:adia-Frad}]).\footnote{Applying the diffusion equation
at the photosphere is a crude approximation.} Combining equations
\refnew{eq:adia-Frad}, \refnew{eq:adia-Tphot}, and
\refnew{eq:adia-pphot} yields
\begin{equation}
{d\ln T\over d\ln p}\approx {1\over 8}.
\label{eq:adia-Tgrad} \end{equation}
On the other hand, the adiabatic gradient is
\begin{equation}
{\partial \ln T\over\partial \ln p}\biggr\vert_s
= {s_\rho\over p_T s_\rho - p_\rho s_T}\approx {1\over 10},
\label{eq:adia-Tgradad} \end{equation}
using the numbers in Fig.
\ref{fig:adia-scaling-sTandsoon}. Comparing the photospheric and
adiabatic temperature gradients, we find that the convection zone
extends up to the photosphere for DA white dwarfs which lie inside the
instability strip. This is confirmed by comparison with numerical
models (e.g., Bradley
\cite{scaling-bradley96}, Wu \cite{scaling-phdthesis}). However, because the
photospheric density is low, convection is inefficient in this region, and
radiation carries the bulk of the flux.
Next we relate changes in the thermodynamic variables at the
photosphere to changes in the emergent flux. Thus
\begin{equation}
{\Delta T\over T}\approx {1\over 4}{\Delta F_{\rm ph}\over F}
\label{eq:adia-eqDetT}
\end{equation}
is an immediate consequence of equation (\ref{eq:adia-Tphot}).
Expressions for $\Delta\rho/\rho$ and $\Delta p/p$ follow in a
straightforward manner from the thermodynamic identity,
\begin{equation}
{\Delta p\over p}\approx p_\rho {\Delta\rho\over \rho}+p_T{\Delta T\over T},
\label{eq:adia-idenprhoT}
\end{equation}
and from the perturbed form of equation (\ref{eq:adia-pphot}),
\begin{equation}
{\Delta p\over p}=-\kappa_\rho{\Delta\rho\over \rho} -
\kappa_T{\Delta T\over T}. \label{eq:adia-perphoto}
\end{equation}
Together, these yield
\begin{equation}
{\Delta\rho\over \rho}\approx -{1\over 4}\left({p_T+\kappa_T\over
p_\rho+\kappa_\rho}\right){\Delta F_{\rm ph}\over F}, \label{eq:adia-eqDelrho}
\end{equation}
and
\begin{equation}
{\Delta p\over p}\approx {1\over 4}\left({p_T\kappa_\rho - p_\rho\kappa_T\over
p_\rho+\kappa_\rho}\right) {\Delta F_{\rm ph}\over F}. \label{eq:adia-eqDelp}
\end{equation}
It is then a simple step to show that
\begin{equation}
\Delta s_{\rm ph}\approx B {\Delta F_{\rm ph}\over F},
\label{eq:adia-eqspert}
\end{equation}
where
\begin{equation}
B={1\over 4}\left[s_T-\left({p_T+\kappa_T\over
p_\rho+\kappa_\rho}\right)s_\rho\right].
\label{eq:adia-B}\end{equation}
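The photospheric response relations can be exercised numerically; in the sketch below the thermodynamic and opacity derivatives are assumed round numbers meant to mimic a partially ionized hydrogen photosphere, and are stand-ins rather than values read off Fig. \ref{fig:adia-scaling-sTandsoon}.

```python
# Illustrative evaluation of the photospheric responses per unit
# Delta F_ph/F, using assumed (stand-in) thermodynamic derivatives.
p_rho, p_T = 1.0, 1.0            # assumed
s_rho, s_T = -2.0, 10.0          # assumed; s_T is large where H ionizes
kappa_rho, kappa_T = 0.5, 10.0   # assumed; opacity very T-sensitive

drho = -0.25 * (p_T + kappa_T) / (p_rho + kappa_rho)            # Delta rho/rho
dp = 0.25 * (p_T * kappa_rho - p_rho * kappa_T) / (p_rho + kappa_rho)  # Delta p/p
B = 0.25 * (s_T - (p_T + kappa_T) / (p_rho + kappa_rho) * s_rho)       # Delta s_ph
print(f"Delta rho/rho = {drho:+.3f},  Delta p/p = {dp:+.3f},  B = {B:+.3f}")
```

For any choice with $s_T>0$ and $s_\rho<0$ the coefficient $B$ comes out positive, as asserted below in \S \ref{subsec:adia-together}.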
\subsection{Putting It All Together\label{subsec:adia-together}}
We begin by collecting a few key equations obtained in previous
sections. These are then combined to determine how the entropy and
flux variations in the convection zone and photosphere depend upon the
pressure perturbation associated with the g-mode.
\subsubsection{Key Equations}
The flux perturbation entering the bottom of the
convective envelope from the radiative interior satisfies (eq.
[\ref{eq:adia-eqdelFup}]),
\begin{equation}
{\Delta F_b\over F}\approx A\left({\delta p\over p}\right)_b.
\label{eq:adia-eqA}
\end{equation}
The photospheric entropy varies with the flux perturbation emerging
from the convection zone as (eq. [\ref{eq:adia-eqspert}])
\begin{equation}
\Delta s_{\rm ph} \approx B{\Delta F_{\rm ph}\over F}.
\label{eq:adia-eqDelsph}
\end{equation}
The variation in the superadiabatic entropy jump is related to
this photospheric flux perturbation by (eq. [\ref{eq:adia-eqDelsmsb}])
\begin{equation}
\Delta (s_b-s_{\rm ph})\approx C{\Delta F_{\rm ph}\over F}.
\label{eq:adia-entjump}
\end{equation}
The dimensionless numbers $A$, $B$ and $C$ are all positive; $A$ is evaluated at
the boundary between the radiative interior and the convective envelope, whereas
$B$ and $C$ are computed at the photosphere.
The net heat variation, $\Delta Q$, and specific entropy variation,
$\Delta s_b$, of the convective envelope are connected by equation
(\ref{eq:adia-eqDelQ}),
\begin{equation}
\Delta Q \approx F\tau_b\Delta s_b, \label{eq:adia-eqDelQp}
\end{equation}
where the thermal time constant
\begin{equation}
\tau_b\equiv {1\over F}\int_{\rm cvz}\, dz\, {\rho k_B T\over m_p}\approx
{p_bz_b\over 7F}.
\label{eq:adia-taub}\end{equation}
The set of key equations is completed by the relation
\begin{equation}
{d\Delta Q\over dt}=\Delta F_b-\Delta F_{\rm ph}.
\label{eq:adia-dQdt}\end{equation}
We ignore horizontal heat transport for good reasons: transport by
radiation is completely negligible because $k_h z_b\ll 1$; turbulent
diffusion acts to diminish $\Delta s_b$, but only at the tiny rate
$k_h^2 z\,v_{\rm cv}\ll \omega$.
For compactness of notation, we define a new thermal time constant, $\tau_c$, by
\begin{equation}
\tau_c\equiv (B+C)\tau_b. \label{eq:adia-eqtauc}
\end{equation}
The physical relation between $\tau_c$ and $\tau_b$ is discussed in
\S \ref{sec:adia-discussion}. Here we merely note that $\tau_c$ is
generally an order of magnitude or more larger than $\tau_b$.
\subsubsection{Implications Of Key Equations}
Taken together, the five homogeneous equations (\ref{eq:adia-eqA}),
(\ref{eq:adia-eqDelsph}), (\ref{eq:adia-entjump}),
(\ref{eq:adia-eqDelQp}), and (\ref{eq:adia-dQdt}) enable us to express
the five quantities $\Delta s_b$, $\Delta s_{\rm ph}$, $\Delta F_b/F$,
$\Delta F_{\rm ph}/F$, and $\Delta Q$ in terms of $(\delta p/
p)_b$.\footnote{We include $\Delta F_b/F$ in this list for
completeness although it is already expressed in terms of $(\delta
p/p)_b$ by equation (\ref{eq:adia-eqA}).} The principal results from
this section read:
\begin{equation}
\Delta s_b\approx {A(B+C)\over 1-i\omega\tau_c}\left({\delta p\over p}\right)_b,
\label{eq:adia-Dsb}
\end{equation}
\begin{equation}
\Delta s_{\rm ph}\approx {AB\over 1-i\omega\tau_c}\left({\delta p\over
p}\right)_b,
\label{eq:adia-Dsph}
\end{equation}
\begin{equation}
{\Delta F_b\over F}\approx A\left({\delta p\over p}\right)_b,
\label{eq:adia-DFb}
\end{equation}
\begin{equation}
{\Delta F_{\rm ph}\over F}\approx {A \over 1-i\omega\tau_c}\left({\delta p\over
p}\right)_b, \label{eq:adia-DFph}
\end{equation}
\begin{equation}
\Delta Q \approx {AF\tau_c\over 1-i\omega\tau_c}\left({\delta p\over
p}\right)_b.
\label{eq:adia-DQ}
\end{equation}
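The common factor $1/(1-i\omega\tau_c)$ in these results encodes both the attenuation and the phase shift of the emergent flux; a minimal sketch (with the normalizations $A=1$ and $(\delta p/p)_b=1$ assumed) makes the behavior explicit.

```python
import cmath

def flux_response(omega_tau_c):
    """Delta F_ph / [A * (delta p/p)_b] = 1/(1 - i*omega*tau_c),
    per the expressions for Delta F_ph and Delta s_b above."""
    return 1.0 / (1.0 - 1j * omega_tau_c)

for wt in (0.1, 1.0, 10.0):
    r = flux_response(wt)
    print(f"omega*tau_c={wt:5.1f}  |Delta F_ph|={abs(r):.3f}  "
          f"phase={cmath.phase(r):+.3f} rad")
```

For $\omega\tau_c\ll 1$ the flux perturbation passes through the convection zone essentially unaltered, while for $\omega\tau_c\gg 1$ it is strongly attenuated and phase-shifted toward $+\pi/2$; the convection zone bottles up the heat, which is the essence of convective driving.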
\section{Driving and Damping\label{sec:adia-driving}}
The product of this section is a proof that a g-mode is linearly
overstable in a ZZ Ceti star provided the convective envelope is thick
enough, or more precisely, provided $\omega\tau_c\gtrsim
1$.\footnote{The qualification that $z_\omega\geq z_b$ is necessary
here.}
The time-averaged rate of change in a mode's energy,
\begin{equation}
\gamma\equiv {\omega\over 2\pi}\oint dt {1\over E}{dE\over dt},
\label{eq:adia-work}
\end{equation}
is obtained from the work integral. Useful forms for $\gamma$
are\footnote{These assume that the eigenfunction is
normalized such that $E=1$.}
\begin{equation}
\gamma={2\omega R^2}\oint dt \int^R_0 dz\,\rho\,
{k_B\over m_p}\delta T\, {d\delta s\over dt} = {\omega\over 2\pi}L\oint
dt \int^R_0 dz {\delta T\over T}{d\over dz}\left({\delta F\over
F}\right).\label{eq:adia-workp}
\end{equation}
Regions of driving and damping are associated with positive and
negative values of the above integrand. The evaluation of $\gamma$ for
modes with $z_\omega \geq z_b$ is simplified by the near constancy of
$\delta p/p$ for $z\ll z_\omega$.
\subsection{Radiative Damping\label{subsec:adia-rdamping}}
In the upper evanescent region of the radiative interior, the perturbed flux
varies in phase with the pressure perturbation because adiabatic compression
causes the opacity to decrease.\footnote{$A > 0$ in equation
\refnew{eq:adia-DFb}.} Since $\delta p/p$ declines from close to its
surface value at $z_b$ to zero near $z_\omega$, and then oscillates
with rapidly declining amplitude at greater depth, the upper
evanescent layer loses heat during compression and thus contributes
most to mode damping.
To evaluate the radiative damping rate, it proves convenient to use the second
form for $\gamma$ given by equation (\ref{eq:adia-workp}).
We consider the contributions to $\gamma_{\rm rad}$ from the upper
evanescent layer, $\gamma_{r_u}$, and from the propagating cavity,
$\gamma_{r_l}$, separately.
The contribution from $z_b\leq z\leq z_\omega$, obtained with the aid
of equation (\ref{eq:adia-eqdelT}) for $\delta T/T$ and equation
(\ref{eq:adia-DFb}) for $\delta F/F$, reads
\begin{equation}
\gamma_{r_u}\approx - \left({A\, L\over 10}\right)\left({\delta
p\over p}\right)_b^2+{L\over 10}\int_{z_b}^{z_\omega} dz\left({\delta
p\over p}\right)^2{dA\over dz}. \label{eq:adia-Wdotru}
\end{equation}
The first term on the right hand side generally dominates over the
second one as $\delta p/p$ declines with depth and $A$ does not vary
significantly in this region.
For $z \geq z_\omega$, we again use equation \refnew{eq:adia-eqdelT} for
$\delta T/T$, but now substitute equation (\ref{eq:adia-eqdelFlp}) for
$\Delta F/F$ to arrive at
\begin{equation}
\gamma_{r_l}\approx {-9k_h^2 L\over 125g^3\omega^2}
\int_{z_\omega}^\infty dz N^4 c_s^4 {d\ln p\over d\ln T}
\left({\delta p\over p}\right)^2. \label{eq:adia-Wdotrl}
\end{equation}
Appeal to equation (\ref{eq:adia-Fluxp}) giving the WKB envelope
relation for $\delta p/p$ establishes that the integrand peaks close
to $z_\omega$. Thus,
\begin{equation}
\gamma_{r_l}\sim -L\left({c_s N\over g}\right)_\omega^4
\left({\delta p\over p}\right)_\omega^2\sim -L\left({\delta p\over
p}\right)_\omega^2,
\label{eq:adia-Wdotrlp}
\end{equation}
where in this context $(\delta p/p)_\omega$ represents the magnitude of the WKB
envelope evaluated at $z_\omega$. Since this magnitude is significantly smaller
than $(\delta p/p)_b$, the contribution to $\gamma$ from the g-mode cavity
is negligible provided $z_\omega\geq z_b$.
To a fair approximation, $\gamma_{\rm rad}$ may be set equal to the first
term on the right-hand side of equation (\ref{eq:adia-Wdotru}). Thus
\begin{equation}
\gamma_{\rm rad}\approx - \left({A\, L\over 10}\right)
\left({\delta p\over p}\right)_b^2. \label{eq:adia-Wdotr}
\end{equation}
\subsection{Convective Driving\label{subsec:adia-cdriving}}
The perturbed flux which exits the radiative interior enters the bottom of the
convection zone. There it is almost instantaneously distributed so as to
maintain the vertical entropy gradient near zero. Because the convection zone
gains heat during compression, it is the seat of mode driving.
To evaluate the rate of convective driving we substitute equations
(\ref{eq:adia-eqdelT}) and (\ref{eq:adia-Dsb})
into the first form for $\gamma$ given by equation
(\ref{eq:adia-workp}). It is apparent that the net contribution comes
entirely from the adiabatic part of $\delta T/T$ (see
eq. [\ref{eq:adia-eqdelT}]). Since the integrand is strongly weighted
toward the bottom of the convection zone, we evaluate all quantities there and
arrive at
\begin{equation}
\gamma_{\rm cvz}\approx {(\omega \tau_c)^2 \over 1+ (\omega \tau_c)^2}
\left({A\, L\over 5}\right)\left({\delta p\over p}\right)_b^2.
\label{eq:adia-Wdotcvz}
\end{equation}
Since $A$ is positive, so is $\gamma_{\rm cvz}$.
\subsection{Turbulent Damping\label{subsec:adia-vdamping}}
Damping due to turbulent viscosity acting on the velocity shear in the
convection zone as well as the convective overshoot region is
estimated as
\begin{equation}
\gamma_{\rm visc}\approx -4\pi R^2\omega^2\int_0^{z_b}\, dz\, \rho\, \nu
\left({d\xi_h\over dz}\right)^2. \label{eq:adia-Edot}
\end{equation}
Here, $\nu\sim v_{\rm cv} H_p $ is the turbulent viscosity, and
$-i\omega d\xi_h/dz$ is the dominant component of velocity gradient.
Adiabatic perturbations in an isentropic region are
irrotational. Consequently, the velocity shear in an isentropic
convection zone would be very small and lead to negligible turbulent
damping. However, the mean entropy gradient and the horizontal
gradient of the entropy perturbation represent significant departures
from isentropy. In a future paper we will demonstrate that turbulent viscosity
suppresses the production of velocity gradients in the convection zone to the
extent that its effect on damping is negligible in comparison to
radiative damping. However, turbulent damping in the region of
convective overshoot at the top of the radiative interior stabilizes
long period modes near the red edge of the ZZ Ceti instability strip.
\subsection{Net Driving\label{subsec:adia-net}}
The net driving rate follows from combining equations \refnew{eq:adia-Wdotr} and
\refnew{eq:adia-Wdotcvz};
\begin{equation}
\gamma_{\rm net}\approx {(\omega \tau_c)^2-1\over (\omega \tau_c)^2+1}
\left({A\, L\over 10}\right)\left({\delta p\over p}\right)_b^2.
\end{equation}
Driving exceeds damping by a factor of two in the limit $\omega\tau_c\gg 1$.
Substitution of the normalization relation given by equation
\refnew{eq:adia-scaling-normdrho} for $\delta p/p$ yields the more revealing
form
\begin{equation}
\gamma_{\rm net}\sim {(\omega \tau_c)^2-1\over
(\omega \tau_c)^2+1}\left({1\over n\tau_\omega}\right).
\label{eq:adia-Wdotnetp}
\end{equation}
Overstability occurs if $\omega \tau_c > 1$.\footnote{In the
quasiadiabatic limit for modes with $z_\omega > z_b$.} Following
Brickhill, we refer to this excitation mechanism as convective
driving.
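The sign structure of the net driving rate in equation \refnew{eq:adia-Wdotnetp} is easy to explore numerically. The following is our own minimal sketch (not code from the paper); the radial order $n$ and thermal time $\tau_\omega$ are free inputs:

```python
def gamma_net(omega, tau_c, n, tau_omega):
    """Order-of-magnitude net driving rate of eq. (adia-Wdotnetp):
    gamma_net ~ [(omega*tau_c)^2 - 1] / [(omega*tau_c)^2 + 1] / (n*tau_omega).
    Positive values indicate overstability (omega*tau_c > 1)."""
    x = (omega * tau_c)**2
    return (x - 1.0) / (x + 1.0) / (n * tau_omega)

# the sign flips exactly at omega*tau_c = 1, and the prefactor
# saturates at unity in the strong-driving limit omega*tau_c >> 1
```

The prefactor $[(\omega\tau_c)^2-1]/[(\omega\tau_c)^2+1]$ passes through zero at $\omega\tau_c=1$, reproducing the overstability criterion stated above.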
\section{Discussion\label{sec:adia-discussion}}
\subsection{Time-Scales \label{sec:adia-relevant}}
Three time-scales are relevant for convective driving in DAVs. The
first is the period of an overstable g-mode, $P = 2\pi/\omega$, which
is typically of order a few hundred seconds. The second is the
dynamical time constant, $t_{\rm cv} \sim H_p/v_{\rm cv}$, on which convective
motions respond to perturbations; $t_{\rm cv} \leq 1 \, \rm s$ throughout the
convection zones of even the coolest ZZ Cetis. This is why the
convective motions adjust to the instantaneous pulsational state. The
third is the thermal time constant, $\tau_c$, during which the
convection zone can bottle up flux perturbations that enter it from
below.
Given the central role of $\tau_c$, we elaborate on its relation both
to $t_{\rm cv}$ and to the more conventional definition of thermal time
constant, $\tau_{\rm th}$, at depth $z$. The latter is the heat
capacity of the material above that depth divided by the
luminosity. In a plane parallel, fully ionized atmosphere this is
equivalent to
\begin{equation}
\tau_{\rm th}\equiv {1\over F} \int_0^z dz \, c_p\, {\rho k_B\over m_p T}
\approx {5pz\over 7F}.
\label{eq:adia-tauth}\end{equation}
Appeal to equation \refnew{eq:adia-Fvcv} establishes that inside the
convection zone
\begin{equation}
{t_{\rm cv}\over \tau_{\rm th}}\sim \left({v_{\rm cv}\over c_s}\right)^2\ll 1.
\end{equation}
Now $\tau_c\equiv (B+C)\tau_b$, where $\tau_b$ is defined by equation
\refnew{eq:adia-taub}. To the extent that $c_p\approx 5$ is constant in
the convection zone, $\tau_b\approx \tau_{\rm th}/5$, where the latter
is evaluated at $z_b$.
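The chain of time-scales just described can be made concrete with the representative DAV numbers quoted in the text, $c_p\approx 5$ and $B+C\approx 20$. A small illustrative sketch of ours (all values in seconds; the input $\tau_{\rm th}(z_b)$ is a free parameter):

```python
# Illustrative chain of thermal time-scales at the base of the
# convection zone, using representative DAV numbers from the text.
def tau_b(tau_th_zb, c_p=5.0):
    """tau_b ~ tau_th(z_b)/c_p, valid when c_p is roughly constant
    in the convection zone (c_p ~ 5 for fully ionized hydrogen)."""
    return tau_th_zb / c_p

def tau_c(tau_th_zb, B_plus_C=20.0):
    """tau_c = (B + C) * tau_b, the flux-bottling time constant."""
    return B_plus_C * tau_b(tau_th_zb)
```

For example, $\tau_{\rm th}(z_b)=1500\,\rm s$ would give $\tau_b=300\,\rm s$ and $\tau_c=6000\,\rm s$, illustrating that $\tau_c$ exceeds $\tau_b$ by the large factor $B+C$.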
Next we address the relation between $\tau_c$ and $\tau_b$. Here we
are concerned with the relatively large value of $B+C$, typically
about 20 for DAVs.\footnote{In our models, $B$ and $C$ have comparable
value.} Recall from equations \refnew{eq:adia-Dsb} and
\refnew{eq:adia-DFph} that
\begin{equation}
{\delta F_{\rm{ph}} \over F}\approx {\delta s_b\over B+C}.
\label{eq:adia-dFds}\end{equation}
So the photosphere and superadiabatic layer add an insulating blanket
on top of the convection zone. The large value of $B$ follows because
the photospheres of DAVs are composed of lightly ionized hydrogen. In
this state, the values of $\kappa_T$ and $s_T$ are both large and
positive; typical values in the middle of the instability strip are
$\kappa_T\approx 6$ and $s_T\approx 24$. The large and positive
$\kappa_T$ arises because the population of hydrogen atoms in excited
states which the ambient radiation field can photoionize increases
exponentially with increasing $T$. The large and positive $s_T$ arises
because the ionization fraction increases exponentially with increasing
$T$, and because a free electron contributes much more entropy than a
bound one. The large and positive value of $C$ reflects the
increase in entropy gradient that accompanies an increase in
convective flux. It is obtained from mixing length theory with an
unperturbed mixing length.
\subsection{Validity of the Quasiadiabatic Approximation}
The validity of the quasiadiabatic approximation requires that the
nonadiabatic parts of the expressions for $\delta \rho/\rho$ and
$\delta T/T$, as given by equations \refnew{eq:adia-eqdelrho} and
\refnew{eq:adia-eqdelT}, be small in comparison to the adiabatic
parts. Thus the ratio
\begin{equation}
{\cal R}_{\rm na}\equiv {\delta s\over\delta p/p}
\label{eq:adia-rnonad}\end{equation}
is a quantitative measure of nonadiabaticity. We estimate ${\cal R}_{\rm na}$
for the radiative interior and the convection zone.
We calculate $\delta s$ in the radiative interior from
\begin{equation}
\delta s \approx {iF\over \omega}{m_p\over\rho k_B T}{d\over dz}\left({\delta
F\over F}\right).
\label{eq:adia-delsrad}\end{equation}
In the upper evanescent layer, $z_b<z<z_\omega$, this leads to
\begin{equation}
|{\cal R}_{\rm na}|\sim {1\over \omega\tau_{\rm th}},
\label{eq:adia-ratioevu}\end{equation}
whereas in the propagating cavity, $z>z_\omega$, we find
\begin{equation}
|{\cal R}_{\rm na}|\sim {1\over \omega\tau_{\rm th}}\left({z\over z_\omega}\right).
\label{eq:adia-ratioevl}\end{equation}
Nonadiabatic effects in the radiative interior are maximal at $z=z_b$, where
\begin{equation}
|{\cal R}_{\rm na}|\sim {1\over \omega\tau_b},
\label{eq:adia-ratioevmax}\end{equation}
since $\tau_{\rm th}/z$ increases with depth.
The measure of nonadiabaticity in the convection zone is given by
equation \refnew{eq:adia-Dsb}. Since $\omega\tau_b>1$ is required for
the validity of the quasiadiabatic approximation in the radiative
zone, we restrict consideration to the limiting case $\omega\tau_c\gg
1$. In this limit, equation
\refnew{eq:adia-Dsb} yields
\begin{equation}
|{\cal R}_{\rm na}|\sim {1\over \omega\tau_b},
\label{eq:adia-ratiocvz}\end{equation}
which is identical to the value arrived at for the radiative zone in equation
\refnew{eq:adia-ratioevmax}.
The requirement $\omega\tau_b\gtrsim 1$ for the validity of the quasiadiabatic
approximation severely limits the applicability of the current investigation.
The perturbed flux at the photosphere is related to that at the bottom of the
convection zone by
\begin{equation}
{\Delta F_{\rm{ph}}\over F}\approx {1\over 1-i\omega\tau_c}{\Delta F_b\over F}.
\label{eq:adia-relDelF}\end{equation}
Since $\tau_c$ is at least an order of magnitude larger than $\tau_b$, modes
with $\omega\tau_b\gtrsim 1$ are likely to exhibit small photometric variations.
However, this may not render them undetectable because their horizontal velocity
perturbations pass undiminished through the convection zone.
\subsection{Brickhill's Papers\label{subsec:adia-brickhill}}
Our investigation is closely related to studies of ZZ Cetis by
Brickhill
(\cite{adia-brick83},\cite{adia-brick90},\cite{adia-brick91a}). Brickhill
recognized that the convective flux must respond to the instantaneous
pulsational state. To determine the manner in which the convection
zone changes during a pulsational cycle, he compared equilibrium
stellar models covering a narrow range of effective
temperature. Brickhill provided a physical description of convective
driving and obtained an overstability criterion equivalent to
ours.\footnote{Our time constant $\tau_c$ is equivalent to the
quantity $D$ which Brickhill defined in equation (9) of his 1983
paper.} Moreover, he recognized that the convection zone reduces the
perturbed flux and delays its phase.
Our excuses for revisiting this topic are that Brickhill's papers are not widely
appreciated, that our approach is different from his, and that our paper
provides the foundation for future papers which will examine issues beyond those
he treated.
\begin{appendix}
\section{Scaling Relations for Gravity-Modes\label{sec:adia-loworder}}
The appendix contains derivations of a number of scaling relations appropriate
to g-modes in ZZ Cetis. These relations are applied in the main
text and in our subsequent papers on white dwarf
pulsations. We obtain approximate expressions for dispersion relations,
eigenfunctions, and normalization constants.
\subsection{Properties of Gravity Waves}
Substituting the WKB ansatz
\begin{equation}
{\delta p\over p}={\cal A}\exp\left({i\int^z\, dz\, k_z}\right),
\label{eq:adia-WKB}
\end{equation}
into the wave equation \refnew{eq:adia-wave}, and retaining terms of
leading order in $k_zH_p\gg 1$, we obtain the dispersion relation
\begin{equation}
k_z^2\approx
{\left(N^2-\omega^2\right)\left(k_h^2c_s^2-\omega^2\right)\over
c_s^2\omega^2},
\label{eq:adia-gwavedr}\end{equation}
where $k_hc_s$ is the plane parallel equivalent of the Lamb frequency, $L_\ell$.
Gravity waves propagate in regions where $\omega$ is smaller than both
$N$ and $L_\ell$. Far from turning points, their WKB dispersion
relation simplifies to
\begin{equation}
k_z\approx \pm{N\over \omega}k_h.
\label{eq:adia-DR}\end{equation}
The vertical component of their group velocity is given by
\begin{equation}
v_{gz}\equiv {\partial \omega\over \partial k_z}\approx -{\omega\over
k_z}\approx \pm {\omega^2\over Nk_h}.
\label{eq:adia-vgz} \end{equation}
Terms of next highest order in $k_zH_p$ yield the WKB amplitude
relation for gravity waves;
\begin{equation}
{\cal A}^2\propto {\rho\over k_z p^2}\propto {\rho\over N p^2}.
\label{eq:adia-amp}
\end{equation}
The amplitude relation expresses the conservation of the vertical flux
of wave action. This may be confirmed directly by means of an
alternate derivation. A straightforward manipulation of the linear
perturbation equations yields the quadratic conservation law
\begin{equation}
{\partial\over\partial t}\left({\rho\over
2}\biggr\vert{\partial\mbox{\boldmath $\xi$}\over\partial t}\biggr\vert^2
+{(p^\prime)^2\over 2\rho c_s^2}+ {\rho\over 2}N^2\xi_z^2\right) =
-\mbox{\boldmath $\nabla$}\cdot\left(p^\prime{\partial\mbox{\boldmath $\xi$}\over\partial t}\right),
\label{eq:adia-conserv}\end{equation}
where $p^\prime\equiv \delta p-g\rho\xi_z$ is the Eulerian pressure
perturbation. We identify ${\cal{\bf F}}\equiv
p^\prime(\partial\mbox{\boldmath $\xi$}/\partial t)$ as the quadratic energy flux. The
time-averaged magnitude of the vertical component of ${\cal{\bf F}}$,
computed to lowest order in $(k_z z)^{-1}$ with the aid of equations
\refnew{eq:adia-xiz} and \refnew{eq:adia-DR},
is found to be
\begin{equation}
{\cal F}\approx {\omega^2\over gk_h}{N p^2\over g\rho}\left({\delta
p\over p}\right)^2,
\label{eq:adia-Fz}\end{equation}
which is in accord with the WKB amplitude relation. Yet another way to
look at the energy flux is as the energy density transported by the
group velocity. The energy density may be approximated as
$\omega^2\rho\xi_h^2$, since kinetic energy contributes half the
time-averaged total energy density, and $|\xi_h|\gg
|\xi_z|$. Application of equations \refnew{eq:adia-xih},
\refnew{eq:adia-DR}, and \refnew{eq:adia-vgz} yields
\begin{equation}
{\cal F}\approx \omega^2\rho\xi_h^2 v_{gz}\approx {\omega^2\over
gk_h}{N p^2\over g\rho}\left({\delta p\over p}\right)^2,
\label{eq:adia-Fluxp}\end{equation}
which reproduces the result for ${\cal F}$ given by equation
\refnew{eq:adia-Fz}.
In a spherical star, the conserved quantity is the WKB luminosity,
${\cal L}\equiv 4\pi r^2{\cal F}$.
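The algebra linking the simplified dispersion relation \refnew{eq:adia-DR} to the group velocity \refnew{eq:adia-vgz} can be verified symbolically. A minimal sympy sketch of ours (not part of the paper):

```python
import sympy as sp

# positive symbols so simplifications are unambiguous
kz, kh, N = sp.symbols('k_z k_h N', positive=True)

# simplified gravity-wave dispersion relation, eq. (adia-DR):
# k_z = N k_h / omega  =>  omega = N k_h / k_z
omega = N * kh / kz

# vertical group velocity, eq. (adia-vgz): v_gz = d(omega)/d(k_z)
vgz = sp.diff(omega, kz)

# both claimed forms of v_gz should be recovered identically
check1 = sp.simplify(vgz + omega / kz)          # v_gz = -omega/k_z
check2 = sp.simplify(vgz + omega**2 / (N * kh)) # v_gz = -omega^2/(N k_h)
```

Both residuals vanish, confirming that the two expressions in equation \refnew{eq:adia-vgz} are consistent with the dispersion relation.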
\subsection{Properties of G-Modes}
\subsubsection{Inside the Propagating Cavity\label{subsec:adia-cavity}}
The Lamb frequency, $L_\ell$, decreases with increasing $r$, whereas
the
Brunt-V{\"a}is{\"a}l{\"a}\,\, frequency, $N$, increases until it drops abruptly just below the
convection zone and becomes imaginary within it
(cf. Fig. 1). Compositional transitions may be exceptions to these
trends. The WKB approximation is violated if these transitions are
sharp on the scale of the radial wavelength. When these are located in
regions of propagation, they are best viewed as separating linked
cavities.
We divide a star's g-modes into two classes, high frequency modes and
low frequency modes. The former have cavities bounded from above at
$z_\omega>z_b$ where $\omega = L_\ell \approx k_h c_s$. The latter
propagate until just below the convection zone at $z \approx z_b$
where $\omega=N$. The floor of the propagating cavity is set by
$\omega=N$ for all modes. In this paper we are exclusively concerned
with high frequency modes. Taking $c_s\sim (gz)^{1/2}$, we find
\begin{equation}
z_\omega\equiv {{\omega^2}\over{gk_h^2}}. \label{eq:adia-zone}
\end{equation}
Moreover, $N\sim (g/z)^{1/2}\sim c_s/z$ implies $k_z z_\omega\sim 1$ at
$z=z_\omega$. Thus $z_\omega$ is to be identified with $z_1$, the uppermost node
of $\delta p/p$. More generally,
\begin{equation}
k_z\sim {1\over \left(z_\omega z\right)^{1/2}},
\label{eq:adia-kz}\end{equation}
for $z>z_\omega$.
Expressions for $\xi_h$ and $\xi_z$ are given by equations (\ref{eq:adia-xih})
and (\ref{eq:adia-xiz}). To leading order in $k_z H_p$, these reduce to
\begin{equation}
\xi_h\approx -{k_z\over k_h}{p\over g\rho}\left({\delta p\over p}\right),
\label{eq:adia-xihc}
\end{equation}
and
\begin{equation}
\xi_z\approx {p\over g\rho}\left({\delta p\over p}\right).
\label{eq:adia-xizc}
\end{equation}
Where $N/\omega\gg 1$, we have $k_z/k_h\gg 1$, which implies
$|\xi_h/\xi_z|\gg 1$. In these regions g-modes are characterized by
nearly horizontal motions. Equations (\ref{eq:adia-xihc}) and
(\ref{eq:adia-xizc}) imply that $\delta
\rho/\rho = - ik_h\xi_h-d\xi_z/dz$ vanishes to leading order in
$k_zH_p\gg 1$. Thus g-modes are relatively incompressible within their
cavities. A more precise estimate, obtained from equation
\refnew{eq:adia-xihc}, is $\delta\rho/\rho\sim
ik_h\xi_h(z_\omega/z)^{1/2}$.
\begin{figure}
\centerline{\psfig{figure=fig3.ps,width=0.85\hsize}}
\caption[]{Structure of a mode with $n=9$, ${\ell}=1$, and a period
of $502 \, \rm s$, as a function of pressure in $\, \rm dyne\, \rm cm^{-2}$. The top
panel, which is similar to Fig.
\ref{fig:scaling-bruntplot}, illustrates how the mode cavity, depicted
by the dashed line, is formed. For this mode $z_\omega > z_b$. The
WKB luminosity measured in $\, \rm erg /\, \rm s$ is plotted in the second panel.
It is constant inside the cavity and decays outside it. The
compositional transition between the hydrogen and helium layers has
minimal effect on this mode. The lower two panels display the depth
dependences for the dimensionless $\delta\rho/\rho$ and $\xi_h$
measured in $\, \rm cm$. The numerical values in this figure come from
setting $\xi_h = R$ at the photosphere.}
\label{fig:adia-scaling-eigen}
\end{figure}
\subsubsection{Upper Evanescent Layer\label{subsec:adia-upper}}
For the purpose of this discussion, we pretend that both $\rho\to 0$
and $p\to 0$ as $z\to 0$. Then $z=0$ is a singular point of the wave
equation (\ref{eq:adia-wave}). Since the physical solution is regular
at $z=0$,
\begin{equation}
{d\over dz}\left[\ln\left({\delta p\over p}\right)\right]\sim
-\left({N\over\omega}\right)^2 k_h^2 z. \label{eq:adia-reg}
\end{equation}
In the convection zone, where $N^2\approx -(v_{\rm cv}/z)^2$,
\begin{equation}
{d\over dz}\left[\ln\left({\delta p\over p}\right)\right]\sim
\left({v_{\rm cv}\over c_s}\right)^2{1\over z_\omega}, \label{eq:adia-regcv}
\end{equation}
and at the top of the radiative interior, where $N^2\sim (c_s/z)^2$,
\begin{equation}
{d\over dz}\left[\ln\left({\delta p\over p}\right)\right]\sim
-{1\over z_\omega}. \label{eq:adia-regr}
\end{equation}
We see that $\delta p/p$ is nearly constant for $z\ll z_\omega$ in the
upper evanescent layer.\footnote{That is why we chose to use it as the
dependent variable in the wave equation.}
Taking into account the near constancy of $\delta p/p$, equations
(\ref{eq:adia-xih}) and (\ref{eq:adia-xiz}) imply that to leading
order in $z/z_\omega$,
\begin{equation}
\xi_h\approx {i\over k_h}\left({\delta p\over p}\right),
\label{eq:adia-xihe}
\end{equation}
and
\begin{equation}
\xi_z\approx -{\omega^2\over gk_h^2}\left({\delta p\over p}\right).
\label{eq:adia-xize}
\end{equation}
Thus both components of the displacement vector are nearly constant for $z\ll
z_\omega$. The behavior of $\delta\rho/\rho$ is more subtle; in principle, it
could vary on scale $z$ should $p/(c_s^2\rho)$ do so. However, in
practice, $p/(c_s^2\rho)$ exhibits only mild depth variations. Thus,
equation (\ref{eq:adia-xihe}) yields
\begin{equation}
{\delta\rho\over\rho}\sim -ik_h\xi_h.
\label{eq:adia-incompp}
\end{equation}
Equation (\ref{eq:adia-incompp}) shows that the relative incompressibility that
characterizes propagating g-modes does not extend to their evanescent
tails.
\subsection{Global Dispersion Relation \label{subsec:adia-scaling-disp}}
The global dispersion relation is obtained from
\begin{equation}
n\pi\approx \int_{z_1}^{z_l} dz\, k_z,
\label{eq:adia-globalDR}
\end{equation}
where $z_l$ is the lower boundary of the mode cavity. Carrying out the
integration using $N^2\sim g/z$, $z_1 \sim z_\omega$, and equation
(\ref{eq:adia-DR}), we obtain $\omega^2\sim gk_h/n$. This relation is
a good approximation to the g-mode dispersion relation for a
polytropic atmosphere. The proportionality
\begin{equation}
\omega^2 \propto {{k_h}\over{n}}
\label{eq:adia-scaling-nl}
\end{equation}
provides a satisfactory fit to the high frequency modes in DAV white
dwarfs. However, low frequency modes penetrate deeply into the
interior where the approximation $N \sim (g/z)^{1/2}$ fails because of
electron degeneracy (cf. Fig. \ref{fig:scaling-bruntplot}). As a
result of the steep drop in $N$, $z_l$ is nearly independent of
$\omega$ for $\omega \leq 10^{-2}\, \rm s^{-1}$. This steepens the
dependence of $\omega$ on $n$ such that
\begin{equation}
\omega \propto {{k_h}\over{n}}.
\label{eq:adia-scaling-newnl}
\end{equation}
Numerical results for the dispersion relations are shown in
Fig. \ref{fig:adia-scaling-disp}.
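The high-frequency scaling $\omega^2\propto gk_h/n$ can also be checked by direct quadrature of equation \refnew{eq:adia-globalDR}. A toy sketch of ours, in arbitrary units, with $N=(g/z)^{1/2}$ and the turning points as defined in the text:

```python
import numpy as np
from scipy.integrate import quad

g, kh = 1.0, 1.0  # arbitrary units; this is an illustrative toy model

def radial_order(omega):
    """Evaluate n = (1/pi) * int_{z_1}^{z_l} k_z dz with
    k_z = (N/omega) k_h and N = sqrt(g/z)."""
    z1 = omega**2 / (g * kh**2)   # upper turning point, eq. (adia-zone)
    zl = g / omega**2             # lower boundary, where omega = N
    kz = lambda z: kh * np.sqrt(g / z) / omega
    n_pi, _ = quad(kz, z1, zl)
    return n_pi / np.pi

# for z_1 << z_l the integral gives n ~ 2 g k_h / (pi omega^2),
# i.e. omega^2 proportional to g k_h / n, as in eq. (adia-scaling-nl)
```

With $\omega=0.1$ the computed order satisfies $\omega^2 n\approx 2gk_h/\pi$ to about one percent, consistent with the analytic estimate.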
\begin{figure}
\centerline{\psfig{figure=fig4.ps,width=0.75\hsize}}
\caption[]{Radian frequencies of gravity-modes in $\, \rm s^{-1}$ as
functions of radial order, $n$, and spherical degree, ${\ell}$. These
eigenvalues are computed using Bradley's white dwarf model with
$T_{\rm eff} = 12,000
\, \rm K$. The upper panel is for $n = 1$ modes with various angular degrees,
while the lower panel is for ${\ell}= 1$ modes of different radial
orders. The global dispersion relations given by equations
\refnew{eq:adia-scaling-nl} and
\refnew{eq:adia-scaling-newnl} fit well for modes of low and high order
respectively. We adopt these empirical laws in our analytical studies.
\label{fig:adia-scaling-disp} }
\end{figure}
\subsection{Normalization of Eigenfunctions\label{subsec:adia-normal}}
We conform to standard practice and set
\begin{equation}
{\omega^2\over 2} \int_0^R dr\, r^2 \,\rho\left(\xi_h^2+\xi_z^2\right)=1.
\label{eq:adia-norm}
\end{equation}
The conservation of the WKB energy flux implies that the region
between each pair of neighboring radial nodes contributes an equal
amount to the above integral.
To achieve a simple analytic result, we take advantage of the
following: $|\xi_h| \gg |\xi_z|$ and $\xi_h\sim \xi_h\big\vert_{\rm{ph}}$
for $z\ll z_\omega$, and the envelope of $\rho v_{gz}\xi_h^2\approx$
constant for $z\gg z_\omega$ (cf. eq. [\ref{eq:adia-Fluxp}]). This
enables us to write
\begin{equation}
{\omega^2\over 2} R^2
\xi^2_h\big\vert_{\rm{ph}}\left(\rho v_{gz}\right)_\omega\int
{dz\over v_{gz}} \approx 1, \label{eq:adia-normp}
\end{equation}
where the subscript $\omega$ stands for quantities evaluated at
$z_\omega$. Using
\begin{equation}
\int_0^R {dz\over v_{gz}}\approx {\pi n\over \omega},
\label{eq:adia-scaling-eqn}
\end{equation}
we arrive at
\begin{equation} \left(\rho v_{gz}\right)_\omega=\left({\omega^2\rho\over k_h
N}\right)_\omega\sim \left({\omega^2 pz\over
k_h c_s^3}\right)_\omega \sim {k_h^2
F\tau_\omega\over\omega},
\label{eq:adia-scaleing-coeff}
\end{equation}
where $F$ is the stellar flux, and
\begin{equation}
\tau_\omega\approx\left({pz\over
F}\right)_\omega
\label{eq:adia-scaling-tone}
\end{equation}
is the thermal time scale at $z_\omega$.
Putting these relations together, we obtain
\begin{equation} k_h^2\xi_h^2\big\vert_{\rm{ph}}\sim {1\over n\tau_\omega L},
\label{eq:adia-scaling-normfin} \end{equation}
where $L=4\pi R^2F$ is the stellar luminosity. Equation (\ref{eq:adia-xihe})
then implies
\begin{equation}
\left({{\delta p}\over{p}}\right)^2_{\rm{ph}}\sim
\left({{\delta \rho}\over{\rho}}\right)^2_{\rm{ph}}\sim {1\over
n\tau_\omega L}. \label{eq:adia-scaling-normdrho}
\end{equation}
This is an appropriate normalization formula for modes having
$ z_\omega > z_b $.
\begin{figure}
\centerline{\psfig{figure=fig5.ps,width=0.75\hsize}}
\caption[]{Normalized surface amplitudes as well as driving/damping rates
for g-modes in white dwarfs. We compare numerical values obtained
using Bradley's model with $T_{\rm eff} = 12,000 \, \rm K$ ($\tau_b = 300
\, \rm s$) with analytic estimates. Both properties depend much more
strongly on $n$ than on $\ell$; hence the choice of $n$ as the
abscissa. The upper panel plots as triangles numerical values of
$(\delta\rho/\rho)_{\rm ph}^2$ in $\, \rm gm^{-1}{\rm cm}^{-2}
\, \rm s^2$ for modes with ${\ell} = 1$ to $10$.
The eigenfunctions are normalized according to equation
\refnew{eq:adia-norm}. The solid line represents the analytical estimate from
equation \refnew{eq:adia-scaling-normdrho} for modes with $z_\omega >
z_b$, where $z_\omega$ is chosen to be $3\pi\omega^2/(2gk_h^2)$. The
lower panel displays numerical radiative damping rates as triangles,
together with the analytic estimate from equation
\refnew{eq:adia-Wdotr} as a solid line.
\label{fig:adia-scaling-drhogamma} }
\end{figure}
\end{appendix}
hep-ph/9804362
\section{INTRODUCTION}
In the standard model of elementary particle physics, the $SU{\left( 2
\right)} \times U{\left( 1 \right)}$ symmetry is spontaneously
broken to a residual $U{\left(1\right)}_{EM}$, generating mass for the $W^\pm$
and $Z$ gauge bosons and the matter fields. A possible cause for the
symmetry breaking is the presence of an additional scalar field, the
Higgs field. Although there is as yet no experimental evidence for
the expected Higgs boson, we can still explore the
implications of this symmetry breaking mechanism using radiative corrections
to standard-model processes. For
example, the condition that perturbative calculations be reliable
provides a theoretical upper bound on the mass \mbox{$M_H$}\ of a weakly interacting
Higgs boson
\cite{dicus,lqt,chan,marciano,dawson,passarino,vayonakis,djl,ldurand,durand2,kurt96}.
In a recent paper \cite{kurt96},
Nierste and Riesselmann analyzed one- and two-scale processes involving
the Higgs field with a particular emphasis on the running of the
quartic Higgs coupling $\lambda(\mu)$. They assessed the
reliability of perturbation theory using two criteria: the relative
difference of physical quantities calculated in different
renormalization schemes; and the dependence of $\lambda$ on the
renormalization scale $\mu$.
If perturbation theory is to be reliable,
the choices of the renormalization
scheme and scale should not be important for physical
quantities.
To determine $\lambda(\mu)$ in their analysis,
Nierste and Riesselmann integrated the renormalization group
equation using the three-loop $\beta$ function, and solved the resulting
equation for $\lambda$ iteratively using four different approximation schemes.
The solutions differed significantly for large values of the coupling
or mass scale, and determined one constraint on \mbox{$M_H$}\ in a perturbative theory.
This uncertainty in $\lambda(\mu)$ carries over to physical quantities
such as scattering amplitudes and again affects the ranges of \mbox{$M_H$}\ and $\mu$
over which perturbative calculations are reliable.
We show here that it is possible to explore the regime of large coupling
without the ambiguities that arise from the direct iterative solution for the
coupling. We approach the problem by emphasizing the $\beta$ function
$\beta(\lambda)$, and show that it can be determined reliably to rather
large values of $\lambda$ by using \pade\ approximates
\cite{baker,BendOrsz} to sum the
perturbation series for $\beta(\lambda)$
\cite{yang}. Integration of the renormalization
group equation then gives an implicit equation for $\lambda(\mu)$ that can be
inverted numerically. The results can be used
to study the validity of perturbation theory for
scattering amplitudes in the region of large Higgs-boson masses
and high energies where the running coupling $\lambda(\mu)$
is the natural renormalization group expansion parameter. We will not
pursue those applications here as a number of authors
\cite{marciano,dawson,passarino,vayonakis,djl,ldurand,durand2,kurt96,ghinculov} have considered them in detail.
We first investigate the \pade\ approach in Sec.\ \ref{sec:pade} using the
results for $\beta(\lambda)$ in the minimal subtraction (MS)
renormalization scheme for which the perturbation series for $\beta$
is known to five loops \cite{ms4loop,ms5loop}.
After establishing the effectiveness of the \pade\
approach, we apply it in Sec.\ \ref{sec:applications} to the more physical
on-mass-shell (OMS)
renormalization scheme where $\beta$ is only known to three loops
\cite{kurt96,luscher}. We
find that \pade\ summation of the series apparently
gives a reliable result for $\beta(\lambda)$
for quite large values of the coupling,
$\lambda\leq 10$, and conclude, after inversion of the renormalization group
expression, that $\lambda(\mu)$ is known reliably in the OMS scheme for
$\mu\leq 4$ TeV for $\mbox{$M_H$}\leq 800$ GeV. The region in which $\lambda(\mu)$ is
known well extends to very large mass scales if \mbox{$M_H$}\ is sufficiently small,
for example, to $10^{17}$ GeV for $\mbox{$M_H$}\leq 155$ GeV.
\section{PAD\'{E} SUMMATION OF THE $\beta$ FUNCTION}
\label{sec:pade}
\subsection{Preliminary considerations}
\label{subsec:prelim}
In the following, we deal with the quartic Higgs-boson coupling $\lambda$
defined at tree level in terms of \mbox{$M_H$}\ and the electroweak vacuum expectation
value $v=246$ GeV or the Fermi coupling \gf\ by $\lambda=\mbox{$M_H$}^2/2v^2=\gf\mbox{$M_H$}^2/\sqrt{2}$.
We will work in the interesting
limit of large Higgs-boson masses, corresponding
to the limit of large quartic couplings, and neglect the effects of couplings
with fermions.
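For orientation, the tree-level relation $\lambda=\mbox{$M_H$}^2/2v^2$ is trivial to evaluate; a quick sketch (ours):

```python
# tree-level quartic Higgs coupling, lambda = M_H^2 / (2 v^2),
# with the electroweak vacuum expectation value v = 246 GeV
V = 246.0

def quartic_coupling(mh):
    """Higgs-boson mass M_H in GeV -> dimensionless lambda."""
    return mh**2 / (2.0 * V**2)

# benchmark masses used later in the text:
#   M_H = 155 GeV -> lambda ~ 0.2,   M_H = 800 GeV -> lambda ~ 5.3
```

This makes explicit why the large-$\mbox{$M_H$}$ limit is the large-coupling limit: $\lambda$ grows quadratically with $\mbox{$M_H$}$.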
The running coupling $\lambda(\mu)$ is defined as the solution of the
renormalization group equation
\begin{equation}
\label{RGeqn}
\mu\,\frac{d\lambda(\mu)}{d\mu}=\beta(\lambda)
\end{equation}
at the energy scale $\mu$. The function $\beta(\lambda)$ is given in
perturbation theory as a power series in $\lambda$,
\begin{eqnarray}
\beta(\lambda) &=& \frac{\lambda^2}{16\pi^2}\,\sum_{n=0}\,\beta_n\left(
\frac{\lambda}{16\pi^2}\right)^n \label{beta_series}\\
&=& \beta_0\,\frac{\lambda^2}{16\pi^2}\,\left(1+\sum_{n=1}\,B_n\lambda^n
\right). \label{B_series}
\end{eqnarray}
The coefficients $\beta_n$ are renormalization-scheme dependent beyond two
loops. They are known through three loops in the on-mass-shell renormalization
scheme \cite{kurt96,luscher},
\begin{equation}
{\rm OMS:}\quad \beta_0=24,\quad \beta_1=-312,\quad \beta_2=4238.23,
\label{beta_n_oms}
\end{equation}
and to five loops in the minimal subtraction scheme \cite{ms4loop,ms5loop},
\begin{eqnarray}
{\rm MS:}\quad &\beta_0=24, \quad \beta_1=-312, \quad \beta_2=12022.7
\nonumber\\[1ex]
& \beta_3=-690759, \quad \beta_4=4.91261\times 10^7. \label{beta_n_ms}
\end{eqnarray}
Alternatively, the coefficients $B_n$ are given by
\begin{equation}
{\rm OMS:} \quad B_0=1,\quad B_1=-0.082323,\quad B_2=0.0070816, \label{B_n_oms}
\end{equation}
\begin{eqnarray}
{\rm MS:}\quad & B_0=1,\quad B_1=-0.082323,\quad B_2=0.020089 \nonumber\\[1ex]
& B_3=-0.0073090, \quad B_4=0.0032917. \label{B_n_ms}
\end{eqnarray}
To determine the running coupling, one must integrate the renormalization
group equation, Eq.\ (\ref{RGeqn}), and solve the implicit equation
\begin{equation}
\ln\frac{\mu}{\mu_0}=
\int^{\lambda \left(\mu \right)}_{\lambda \left( \mu_0 \right)}
\frac{dx}{\beta(x)}.
\label{beta:int}
\end{equation}
This equation determines $\lambda(\mu)$ in terms of the initial and final mass
scales $\mu_0$ and $\mu$ and the initial value of the coupling at the scale
$\mu_0$, defined as $\lambda_0=\lambda(\mu_0)$.
Different, typically iterative, methods of solution lead to
different results for $\lambda(\mu)$, with the differences increasing
for large values of \mbox{$M_H$}\ or $\lambda_0$ and for $\mu\gg\mu_0$ \cite{kurt96}.
Since the $\beta$ function is only
known to finite order, the only constraint on this standard approach is that
the different solutions satisfy the renormalization group equation,
Eq.\ (\ref{beta:int}), to that order. However, the
resulting ambiguities for large values of \mbox{$M_H$}\ can compromise tests
of the reliability of perturbation theory, and the determination of limits
on \mbox{$M_H$}\ in a weakly interacting theory. It is therefore useful to approach
the problem differently, and concentrate on the $\beta$ function itself. If
$\beta(\lambda)$ is known accurately for some range of $\lambda$, the
integral in Eq.\ (\ref{beta:int}) will also be accurately determined, and
the equation can be inverted numerically to find $\lambda(\mu)$ in that
region.
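The numerical inversion just described can be sketched in a few lines. This is our own illustration, using the three-loop OMS coefficients quoted in Eq.\ (\ref{beta_n_oms}) below and standard scipy quadrature and root finding; it assumes $\mu\geq\mu_0$ so that $\lambda$ grows along the trajectory:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# three-loop OMS beta function, coefficients beta_n from Eq. (beta_n_oms)
BETA_N = (24.0, -312.0, 4238.23)

def beta(lam):
    x = lam / (16.0 * np.pi**2)
    return lam**2 / (16.0 * np.pi**2) * sum(b * x**n
                                            for n, b in enumerate(BETA_N))

def run_coupling(lam0, mu0, mu, lam_max=50.0):
    """Solve ln(mu/mu0) = int_{lam0}^{lam} dx / beta(x), Eq. (beta:int),
    numerically for lam(mu).  Assumes mu >= mu0."""
    target = np.log(mu / mu0)
    F = lambda lam: quad(lambda x: 1.0 / beta(x), lam0, lam)[0] - target
    return brentq(F, 0.5 * lam0, lam_max)
```

Because the integral of $1/\beta$ is evaluated directly, no iterative expansion of $\lambda(\mu)$ is needed; the only input is the $\beta$ function itself.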
\subsection{Pad\'{e} summation and $\beta(\lambda)$}
\label{subsec:pade}
\pade\ approximates \cite{baker,BendOrsz} give a very useful way of summing or extrapolating
series for which only a finite number of terms are known.
The $[N,\,M]$ \pade\ approximate for a function
$f(z)$ defined by a truncated power series
\begin{equation}
f(z)=\sum_{j=0}^m\,c_jz^j + O(z^{m+1})
\end{equation}
is a ratio of two polynomials,
\begin{equation}
P[N,\,M](z)\equiv\frac{\textstyle{\sum_{n=0}^{N}\,a_nz^n}}
{\textstyle{\sum_{n=0}^{M}\,b_nz^n}},\quad b_0=1,\quad
N+M=m.\label{pade_define}
\end{equation}
The coefficients $a_n,\,b_n$ are determined uniquely by the
requirement that the series expansion of
$P[N,\,M](z)$ agree term-by-term with the series for $f(z)$ through terms of
order $z^m$.
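The linear system fixing $a_n,\,b_n$ is small enough to solve generically; a sketch of ours (not code from the paper):

```python
import numpy as np

def pade_coeffs(c, N, M):
    """Return (a, b), with b[0] = 1, for the [N, M] Pade approximate to
    sum_j c[j] z^j.  Requires len(c) >= N + M + 1."""
    def cc(i):                      # series coefficient, zero for i < 0
        return c[i] if i >= 0 else 0.0
    if M:
        # matching orders z^{N+1} .. z^{N+M} gives M equations for b[1..M]
        A = np.array([[cc(N + j - k) for k in range(1, M + 1)]
                      for j in range(1, M + 1)])
        rhs = -np.array([cc(N + j) for j in range(1, M + 1)])
        b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    else:
        b = np.array([1.0])
    # orders z^0 .. z^N then give the numerator coefficients directly
    a = np.array([sum(b[k] * cc(n - k) for k in range(min(n, M) + 1))
                  for n in range(N + 1)])
    return a, b
```

As a check, the [1, 1] approximate of $e^z$ built from $1+z+z^2/2$ reproduces the classic $(1+z/2)/(1-z/2)$.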
The sequence of \pade\ approximates $P[N,\,M]$ is known to converge to
$f(z)$ as $N,\,M\rightarrow\infty$ with $N-M$ fixed for large classes
of functions \cite{baker,BendOrsz}, but the approximates can also give useful
and rapidly convergent asymptotic approximations for finite $N$ and $M$
even if the sequence and the original series for $f(z)$ do not converge \cite{BendOrsz}.
In the present case, the function in question is $\beta(\lambda)$, known
perturbatively to orders $\lambda^4$ and $\lambda^6$, that is, to three
and five loops, in the OMS and MS renormalization schemes, respectively.
The perturbation series for $\beta$ is not expected to converge,
but a \pade\ summation
of the series may still be useful for $\lambda$ not too large. Because the
perturbative expansion of $\beta(\lambda)$ starts at order $\lambda^2$, we
will extract the leading power explicitly, redefine the \pade\ coefficients,
and define the $[N,\,M]$ approximate for the $n$-loop $\beta$ function as
\footnote{\pade\ summation of $\beta$ was considered by Yang and Ni
\cite{yang}, but without applications to the present problem. Those authors
did not extract the overall factor $\lambda^2$, so use a different
labeling of the approximates, and miss the diagonal approximates used
here.}
\begin{equation}
\beta[N,\,M]=\beta_0\frac{\lambda^2}{16\pi^2}\,\frac{1+a_1\lambda+
a_2\lambda^2+\cdots+a_N\lambda^N}
{1+b_1\lambda+b_2\lambda^2+\cdots+b_M\lambda^M},\quad N+M=n-1.
\label{eq:P_for_beta}
\end{equation}
Note that the approximates \padef{n-1}{0} are just the perturbation series
for $\beta$ carried to $n$ loops.
The series for $\beta(\lambda)$ defined by Eq.\ (\ref{B_series}) are
alternating series in which the ratios of coefficients $B_{n+1}/B_n$
change only slowly in either OMS or MS
renormalization in the range in which the $B$'s are known. This suggests that
the diagonal approximates \padef{N}{N} with $M=N$ or the
subdiagonal approximates with $M=N+1$ may be particularly effective in
estimating the series. In the case of OMS renormalization, the $\beta$'s are
known only to three loops, so $M+N\leq 2$. The possible choices are then
\padef{1}{1} or \padef{0}{2} if we use all the three-loop information,
or \padef{0}{1} if the perturbation series is truncated at two loops.
\padef{2}{0} and \padef{1}{0} are just the three- and two-loop
perturbation series. In the case of MS renormalization, $\beta$ is known to
five loops, $M+N\leq 4$, and we will consider the approximates
\padef{1}{2} at the four-loop level, and \padef{2}{2} at five loops, keeping
$M=N$ or $M=N+1$. The additional five-loop approximates
\padef{1}{3}, \padef{3}{1}, and \padef{0}{4} are members of sequences
two or more steps off the diagonal. These are not expected to converge as
rapidly as the sequences we consider. The coefficients
$a_j$, $b_j$ for these approximates are given in appendix A.
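For the OMS case the matching algebra is short enough to do by hand. The following numbers are our own cross-check (to be compared against appendix A), built from the $B_n$ of Eq.\ (\ref{B_n_oms}):

```python
# OMS series 1 + B1*l + B2*l^2, coefficients from Eq. (B_n_oms)
B1, B2 = -0.082323, 0.0070816

# [1,1]: (1 + a1*l)/(1 + b1*l); matching orders l and l^2 gives
b1 = -B2 / B1          # ~ 0.08602
a1 = B1 + b1           # ~ 0.00370

# [0,2]: 1/(1 + c1*l + c2*l^2); matching gives
c1 = -B1               # ~ 0.08232
c2 = B1**2 - B2        # ~ -0.000305
```

Note that $b_1>0$, so the [1, 1] approximate has a pole at large negative $\lambda$, well away from the physical region.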
\subsection{Tests of \pade\ summation using MS renormalization}
\label{subsec:pade_tests}
The fact that the perturbation series for $\beta$ is known to five loops
gives us the opportunity to test the \pade\ summation procedure using
known results. Having established its reliability, we will then apply the
method in Sec.~\ref{sec:applications} to the more physical OMS renormalization
scheme in which the connection between $\lambda$ and \mbox{$M_H$}\ is known.
\subsubsection{Convergence of the Pad\'{e} sequence}
\label{subsubsec:convergence}
Based upon the general convergence properties of \pade\
approximates and the alternating character of the series at hand,
we expect the sequence \padef{1}{1}, \padef{1}{2}
and \padef{2}{2} to converge as we progress from three to five loops.
We plot these approximates in Fig.~\ref{fig:ms5}
to demonstrate that convergence.
The convergence of the \pade\
sequence is, in fact,
relatively fast. For low values of $\lambda$ there is
excellent agreement. Even for $\lambda=10$, \padef{1}{1} and
\padef{1}{2} differ by less than 10\%, with the diagonal five-loop approximate
\padef{2}{2} lying roughly halfway between the other two.
We interpret the agreement and the pattern of convergence as strong
evidence for the effectiveness of the \padef{N}{N} sequence
in summing the series for $\beta(\lambda)$, and conclude that it is
unlikely that $\beta$ would be found to differ significantly from
\padef{2}{2} in the region shown if higher-loop contributions were
calculated.
In Fig.\ \ref{fig:n0ms}, we look at the problem from the point of view
of the purely perturbative approach, and show the sequence of the
$N$-loop perturbation series \padef{N-1}{0} for $\beta$.
This is not a sequence in which $N$
and $M$ increase together with the difference $N-M$ fixed, so the
standard results on \pade\ convergence do not apply.
The convergence of the sequence is very slow as shown in the figure,
with large differences between successive terms already present for
$\lambda \simeq 3$. For
comparison, we also show the three- and five-loop diagonal approximates
\padef{1}{1} and \padef{2}{2}. These forms
interpolate the perturbative sequence very well, eliminating
the dominance of the last term in the series for $\lambda$ large.
Since \padef{1}{1}, \padef{2}{2}, and the four-loop approximate \padef{1}{2}
differ from each other by less than
5\% for $\lambda<10$, all are effective in extrapolating the perturbation
series. We conclude, in particular, that the three-loop approximate
\padef{1}{1} already gives a reliable extrapolation
for $\beta(\lambda)$, with uncertainties of only a few percent, out to
$\lambda\sim 10$, far beyond the range in which the five-loop perturbation
series is reliable.
\subsubsection{Estimates of unknown coefficients}
\label{subsubsec:coefficients}
\pade\ approximates often converge to the limit function faster than
the power series used to construct them. In that case, the
terms in the expansion of a \pade\ approximate beyond the matched
order may give reasonable estimates for the unknown higher-order
coefficients in the power series. As a simple test of this expectation
in the present case, we can expand the three- and four-loop approximates
\padef{1}{1} and \padef{1}{2} to one order higher in $\lambda$ than the
finite power series used to construct them, and compare the new coefficient
with the known four- and five-loop results. Thus, the expansion
\begin{equation}
\padef{1}{1} = \frac{\lambda^2}{16\pi^2}\,\beta_0\,\left[ 1+B_1\lambda +
B_2\lambda^2 + (B_2^2/B_1)\lambda^3 + (B_2^3/B_1^2)\lambda^4 +\cdots
\right]
\end{equation}
gives the estimates
\begin{equation}
\label{eq:B3_est}
B_3^\prime \equiv B_2^2/B_1, \qquad B_4^\prime \equiv B_2^3/B_1^2,
\end{equation}
for the four- and five-loop coefficients $B_3$ and $B_4$, results equivalent to
\begin{equation}
\label{eq:b3_est}
\beta_3^\prime \equiv\beta_2^2/\beta_1 = -463286,\qquad \beta_4^\prime
\equiv \beta_2^3/\beta_1^2 = 1.785\times 10^7.
\end{equation}
The actual four- and five-loop results are
\begin{equation}
\label{eq:b3_act}
\beta_3 = -690759, \qquad \beta_4=4.913\times 10^7.
\end{equation}
The estimates of $\beta_3$ and $\beta_4$ from the three-loop
coefficients
are therefore about 0.67 and 0.36 of the actual coefficients.
In the case of \padef{1}{2}, we can estimate only $\beta_4$, with the
result
\begin{equation}
\label{eq:b4_est}
B_4^\prime= -\left(B_3^2-2B_1B_2B_3+B_2^3\right)/\left(B_1^2-B_2\right).
\end{equation}
This estimate gives $\beta_4^\prime=3.48\times 10^7$, and a ratio
$\beta_4^\prime/\beta_4=0.71$.
The estimates for the first missing terms in the perturbation series
are too small in both of the cases considered. We can understand this
result qualitatively as resulting from the averaging of an alternating
series by the approximates, with the corresponding tendency to avoid large
higher coefficients in the expansion. We will use this observation below.
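The origin of the estimates in Eq.\ (\ref{eq:B3_est}) is easy to check directly: beyond the matched orders, the expansion of \padef{1}{1} is a geometric series with ratio $B_2/B_1$. A minimal numerical sketch, using arbitrary illustrative values of $B_1$ and $B_2$ rather than the physical coefficients:

```python
from fractions import Fraction

def expand_11(B1, B2, order):
    """Taylor coefficients of the [1,1] approximant
    (1 + a1*x)/(1 + b1*x), with a1 = B1 - B2/B1 and b1 = -B2/B1,
    through x**order; the coefficient of x**0 is 1."""
    B1, B2 = Fraction(B1), Fraction(B2)
    a1, b1 = B1 - B2 / B1, -B2 / B1
    # (1 + a1*x) * sum_k (-b1)^k x^k, term by term
    coeffs = [Fraction(1)]
    for n in range(1, order + 1):
        coeffs.append((-b1) ** n + a1 * (-b1) ** (n - 1))
    return coeffs

# illustrative (non-physical) coefficients of an alternating series
B1, B2 = Fraction(-3), Fraction(7)
c = expand_11(B1, B2, 4)
assert c[1] == B1 and c[2] == B2        # matched orders reproduced
assert c[3] == B2 ** 2 / B1             # B_3' of Eq. (eq:B3_est)
assert c[4] == B2 ** 3 / B1 ** 2        # B_4' of Eq. (eq:B3_est)
```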
The effects of incorrect estimates of $B_3$ on the approximate \padef{1}{2}
are shown in Fig.\ \ref{fig:b3'}. In these calculations, we have taken
$B_3$ to be five and ten times the estimated value, and calculated \padef{1}{2}
using the new value as input. The result is a $<10$\% change in
$\beta$ for $\lambda<10$ despite the very large values of the new coefficient.
\subsection{The running coupling $\lambda(\mu)$ in MS renormalization}
\label{subsec:MSrunning}
The effect of the uncertainty in $\beta(\lambda)$ on the running of
$\lambda(\mu)$ can be studied by integrating the renormalization group
equation, Eq.\ (\ref{RGeqn}), and solving numerically for $\lambda$
as a function of its initial value $\lambda_0$ and the ratio of energy
scales $\mu/\mu_0$.\footnote{In the case of MS renormalization, $\lambda$ is
connected only indirectly to the physical pole mass of the Higgs boson,
so we cannot state the results in terms of $\mu$ and \mbox{$M_H$}\ without
using a separate calculation of the self-energy function.}
We have done this calculation using the approximates
\padef{1}{1} and \padef{1}{2}, choosing initial values $\lambda_0=1,\,3,\,5$.
The results are shown in Fig.\ \ref{fig:MSrunning}. The result for
the optimum five-loop approximate, \padef{2}{2}, lies near the center of the
shaded regions in that figure, as would be expected from the comparison
of the approximates in Fig.\ \ref{fig:ms5}. We believe the estimated range
of uncertainty is quite generous given the rapid convergence of the
sequence shown there toward \padef{2}{2}.
The range of uncertainty in $\lambda(\mu)$ at fixed $\mu/\mu_0$ is quite
small for $\lambda_0=1,\,3$ over the entire range shown, $\mu/\mu_0\leq 6$.
The uncertainty is larger for $\lambda_0=5$, roughly 16\%, at $\mu/\mu_0=3$,
but even then the boundary curves differ from the curve for \padef{2}{2}
by $<8$\%.
The rather small effect of uncertainties in $\beta$ on $\lambda(\mu)$ can be
understood rather simply. The renormalization group equation involves
$1/\beta$ rather than $\beta$. The prefactor $\lambda^2$ in the \pade\
expression in Eq.\ (\ref{eq:P_for_beta}) leads to a rapid decrease in the
integrand, and the value of the integral is determined mainly by the region
near $\lambda_0$, the lower endpoint of the integration. For $\lambda_0$
small, $\beta$ is well determined in the most important region, and the
uncertainty in the integral is small. The uncertainty in the integral,
hence the uncertainty in $\lambda(\mu)$, becomes
large only for renormalization group evolution away from a large
starting value for $\lambda_0$.
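The integration of Eq.\ (\ref{RGeqn}) just described is straightforward to sketch. The fragment below evolves $\lambda(\mu)$ with a fourth-order Runge-Kutta step; as a stand-in for the \pade\ forms (which enter the integration the same way) it uses the one-loop $\beta$ with $\beta_0=24$, so that the closed one-loop solution, Eq.\ (\ref{eq:one_loop_lambda}), is available as a check. The step count and test point are arbitrary choices:

```python
import math

BETA0 = 24.0

def beta_one_loop(lam):
    """One-loop beta function, d(lambda)/d(ln mu) = beta0 lambda^2/(16 pi^2)."""
    return BETA0 * lam ** 2 / (16 * math.pi ** 2)

def run_coupling(lam0, t, beta=beta_one_loop, steps=10000):
    """Integrate d(lambda)/dt = beta(lambda), t = ln(mu/mu0), by RK4."""
    lam, h = lam0, t / steps
    for _ in range(steps):
        k1 = beta(lam)
        k2 = beta(lam + 0.5 * h * k1)
        k3 = beta(lam + 0.5 * h * k2)
        k4 = beta(lam + h * k3)
        lam += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return lam

# check against the closed one-loop form, Eq. (one_loop_lambda)
lam0, t = 3.0, math.log(6.0)          # lambda_0 = 3, mu/mu_0 = 6
exact = lam0 / (1 - BETA0 * lam0 * t / (16 * math.pi ** 2))
assert abs(run_coupling(lam0, t) - exact) < 1e-6
```

Any of the \pade\ approximates can be substituted for \texttt{beta\_one\_loop} without changing the integrator.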
\section{APPLICATIONS: RANGES OF RELIABILITY OF $\beta(\lambda)$ and
$\lambda(\mu)$ IN OMS RENORMALIZATION}
\label{sec:applications}
\subsection{\pade\ approximates for $\beta(\lambda)_{OMS}$}
\label{subsec_beta_OMS}
Having tested the use of \pade\ approximates in the MS scheme, we
consider the implications of \pade\ summation for the OMS scheme.
The most significant difference is the limited order, three loops,
to which the perturbation series for $\beta$ is known.
We are therefore restricted to two approximates that use the full
information available, the diagonal approximate \padef{1}{1} and
the subdiagonal approximate \padef{0}{2}. We can also use \padef{0}{1}
at the two-loop level.
Based upon the convergence of the \pade\ sequence demonstrated for the MS
scheme, and the apparent reduction in the size of the coefficients in the OMS
scheme\footnote{The known value of $\beta_2$ in the OMS scheme is smaller
than that in the MS scheme by roughly a factor of three
\cite{kurt96,luscher}. Nierste and
Riesselmann \cite{kurt96} have found similar reductions in the coefficients in
the expansion of physical amplitudes. We assume that the reductions in the
size of the coefficients persist at higher orders.},
we will assume that these approximates again provide an accurate estimate
for the $\beta$~function, with the diagonal approximate probably the most reliable.
To determine the range of $\lambda$ for which the $\beta$~function\ is reliable, we first
considered the differences among the three-loop functions \padef{1}{1} and
\padef{0}{2}, and the two-loop function
\padef{0}{1}. These approximates can barely be distinguished over
the range of $\lambda$ shown in Fig.~\ref{fig:p11p02} with the scale used
there, so only \padef{1}{1} is shown. This agreement is the result of the
nearly geometric growth of the first coefficients in the perturbation series.
The three-loop approximates \padef{1}{1} and \padef{0}{2} continue to agree
well to much larger values of $\lambda$.
While one is tempted on this basis to conclude that the
OMS $\beta$~function\ is reliably known for $\lambda\leq 10$, the range of current
interest, the geometric character of the low-order perturbation series
may well be accidental. We have therefore
attempted to estimate a wider range of uncertainty in the $\beta$~function\
in a different way by supposing, in agreement with the results of the
MS analysis, that the coefficient $B_3^\prime$ estimated by expanding
\padef{1}{1} is too small, and constructing a new ``four-loop'' approximate \padef{1}{2} using a greatly increased value of $B_3$.
The result obtained using $B_3=5B_3^\prime$
is shown in Fig.~\ref{fig:p11p02}. The change in the extrapolation
of the perturbation series is quite small, with a difference
of less than 2\% between \padef{1}{1} and the modified \padef{1}{2} for $\lambda<10$.
We also show the perturbation series for the $\beta$~function, \padef{2}{0}, in
Fig.~\ref{fig:p11p02} for comparison.
\subsection{The running coupling $\lambda(\mu)$}
\label{sec:running_coupling}
In the OMS renormalization scheme, the parameter
$\lambda$ is defined by the relation
$\lambda=\gf\mbox{$M_H$}^2/\sqrt{2}$ to all orders in perturbation theory
\cite{ldurand,ldkr}. We will choose the starting value $\lambda_0$ of the
running coupling $\lambda(\mu)$ to have this value.
What remains to be decided is the energy scale $\mu_0$
at which this relation should be taken to hold. The natural energy
scale would appear to be $\mu_0=\mbox{$M_H$}$. However, other choices have been
made. Thus, in an early investigation, Sirlin and Zucchini \cite{SirZuch}
calculated the one-loop corrections to the four-point Higgs-boson scattering
amplitude and defined the parameters in the theory so that large
electromagnetic effects appear only in such standard relations as that
between \gf\ and the muon decay rate. With this definition, the high mass
limit of the four-point function gives \cite{SirZuch}
\begin{equation}
h(\mu)=\lambda_0\left[1+\frac{\lambda_0 }{16\pi^2}\left(
24\ln{\frac{\mu}{\mbox{$M_H$}}}+25-3\sqrt{3}\pi \right)\right].
\label{eq:sirlin}
\end{equation}
The logarithm in the expression above is just that which appears in
the expansion of the one-loop expression for $\lambda(\mu)$,
\begin{equation}
\lambda(\mu)=\lambda_0\left(1-\beta_0\frac{\lambda_0}{16\pi^2}\,\ln\frac{\mu}
{\mu_0}\right)^{-1}
\label{eq:one_loop_lambda}
\end{equation}
for $\beta_0=24$ and $\mu_0=\mbox{$M_H$}$. The ambiguity in the choice of $\mu_0$
is in the treatment of the remaining constants in Eq.\ (\ref{eq:sirlin}).
These have been incorporated in the running coupling
by some authors\cite{SirZuch,djl,ldurand}
by redefining $\mu_0$ as $\mu_0=\mbox{$M_H$}\exp[(-25+3\sqrt{3}\pi)/24]$.
However, the constants do not
appear naturally in the expression for the four-point function at two loops
\cite{ldurand}. It is probably most reasonable, therefore, to treat them as
separate ``radiative corrections'' and write $h(\mu)$ to one loop as
$h(\mu)=\lambda(\mu)[1+\delta]$, with $\lambda(\mu)$ the one-loop
running coupling defined above, and $\delta$ incorporating the remaining
scale-independent corrections.
This question has been studied in more detail by Nierste and Riesselmann
\cite{kurt96}, who showed that the convergence of the perturbation series
was improved for several physical amplitudes by adopting the natural
scale $\mu_0=\mbox{$M_H$}$ instead of the choice noted above.
They note, furthermore, that in order
to cancel large logarithmic terms in the perturbative
result when one considers two-scale physical processes such as scattering,
the scale $\mu$ must be related to the energy scale of the interaction
by $\mu=\sqrt{s}$ \cite{kurt96}. We will follow Nierste and Riesselmann
and make the definite, physically motivated choices
$\mu_0=\mbox{$M_H$}$ and $\mu=\sqrt{s}$
in the following analysis. This choice amounts, as already noted, to
a definite specification of the ``radiative corrections'' in perturbatively
calculated amplitudes once the couplings are expressed in terms of
$\lambda(\mu)$.
With $\lambda_0$ and $\mu_0$ specified, and the range of reliability of
the $\beta$~function\ established, it is straightforward to integrate the renormalization
group equation and invert the result numerically to obtain $\lambda(\mu)$.
The uncertainty in $\lambda(\mu)$ can be specified in terms of that in $\beta$.
With this procedure, no further uncertainties such as those illustrated
in \cite{kurt96} are introduced. It is not necessary, for example, to obtain
the solution of the renormalization group equation as a series in $\lambda_0$,
in an iterative approximation. We note in this connection that the ``naive''
and ``consistent'' forms for $\lambda(\mu)$ given in \cite{kurt96}
correspond respectively to the approximates \padef{N}{0}, the perturbation
series for $\beta$, and \padef{0}{N}, the series obtained by expanding
$1/\beta$. Neither sequence is expected to converge well with increasing $N$.
Our results for $\lambda(\mu)$ are shown in Fig.~\ref{fig:lmaxvmu} for
$\mbox{$M_H$}=500$ and 800 GeV and $\mu_0\leq\mu\leq 4$ TeV. We find
for $\mbox{$M_H$}=500$ GeV that all \pade\
approximates, including the perturbation series, agree very well for $\mu<5$ TeV,
a region in which $\lambda_0<5$. The
residual uncertainty in $\lambda(\mu)$ is small enough not to affect
perturbative results for physical processes.
Different \pade\ approximates also give very similar extrapolations
for $\lambda(\mu)$ for $\mbox{$M_H$}=800$ GeV, even when the predicted value of
$\beta_3$ is changed by a large factor. The only significant deviation
involves the perturbation series \padef{2}{0} which we do not believe
is reliable on the basis of our earlier investigation. Even if we restrict
the range of $\lambda$ in which we take $\beta$ as reliable to $\lambda<10$
as in Fig.~\ref{fig:p11p02}, the result for $\lambda(\mu)$ remains
reliable for $\mu<2$ TeV, a value well into the energy region of interest
for experiments at the Large Hadron Collider at CERN.
The rapid growth of $\lambda(\mu)$ for the perturbative
approximate \padef{2}{0} in Fig.~\ref{fig:lmaxvmu} is the result of a
Landau pole at $\mu=2339$ GeV.
A pole can appear in $\lambda(\mu)$ if the
integral of $1/\beta$ converges for $\lambda\rightarrow\infty$, with
the position of the pole in $\mu$ determined by the condition
\begin{equation}
\ln\frac{\mu}{\mu_0}=\lim_{\lambda \rightarrow \infty}
\int_{\lambda_0}^\lambda \frac{d\lambda}{\beta}.
\label{eq:Landau}
\end{equation}
No pole can actually appear when the integration is restricted
to the finite range of $\lambda$ in which $\beta$ is known reliably,
but the likely presence of
a pole would be indicated by very rapid growth of $\lambda(\mu)$ with
increasing $\mu$ in that region. In the present case,
there is no reason to expect the perturbation series \padef{2}{0}
to be accurate for $\lambda$ large. The results in
Fig.~\ref{fig:p11p02} indicate, in fact,
that the perturbative approximation
begins to fail badly for $\lambda\approx 5$, while the starting point for the
evolution of $\lambda(\mu)$ shown in Fig.~\ref{fig:lmaxvmu} is
at $\lambda=5.3$ for $\mbox{$M_H$}=800$ GeV. The remaining approximates do not lead to
poles in the region shown.
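At one loop the condition of Eq.\ (\ref{eq:Landau}) can be evaluated in closed form, $\ln(\mu_{\rm pole}/\mu_0)=16\pi^2/(\beta_0\lambda_0)$, giving a pole near 2.8 TeV for the $M_H=800$ GeV starting values, comparable to the three-loop value quoted above. The sketch below (illustrative, one-loop only) evaluates the integral numerically in a form into which any $\beta$ approximate could be substituted:

```python
import math

BETA0 = 24.0

def beta_one_loop(lam):
    return BETA0 * lam ** 2 / (16 * math.pi ** 2)

def log_mu_ratio(lam0, beta, lam_max=1e8, steps=200_000):
    """ln(mu/mu0) accumulated as lambda runs from lam0 up to lam_max:
    the integral of d(lambda)/beta(lambda), done by the trapezoid rule
    in the variable s = 1/lambda to tame the infinite range."""
    s0, s1 = 1.0 / lam_max, 1.0 / lam0
    h = (s1 - s0) / steps
    g = lambda s: 1.0 / (s ** 2 * beta(1.0 / s))   # d(lambda)/beta = g(s) ds
    total = 0.5 * (g(s0) + g(s1))
    total += sum(g(s0 + i * h) for i in range(1, steps))
    return h * total

mu0, lam0 = 800.0, 5.3            # M_H = 800 GeV starting values
t = log_mu_ratio(lam0, beta_one_loop)
mu_pole = mu0 * math.exp(t)       # roughly 2.8 TeV at one loop
# agrees with the closed form ln(mu/mu0) = 16 pi^2/(beta0 lambda0)
assert abs(t - 16 * math.pi ** 2 / (BETA0 * lam0)) < 1e-3
```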
\subsection{Conclusions}
\label{subsec:conclusions}
We have shown that \pade\ summation of the $\beta$ function
improves the reliability of $\beta$ and the running quartic Higgs coupling.
The method gives a best estimate for $\beta$, and removes much of the
uncertainty associated with different determinations of $\lambda(\mu)$
at the three-loop level \cite{kurt96}.
We have tested the \pade\ method using the
$\beta$~function\ in the MS renormalization scheme, where $\beta$ is known to five
loops in the perturbation expansion. The test results suggest rapid
convergence of the diagonal and subdiagonal \pade\ sequences. Our applications
are to the more physical OMS renormalization scheme, where the first
scheme-dependent coefficient in the OMS expansion is significantly smaller
than in the MS expansion. This more rapid apparent convergence is reflected
in the excellent agreement among the leading \pade\ approximates for
$\beta_{OMS}$ even for rather large values of the first unknown
coefficient, $\beta_3$.
Application of the \pade\ method to the three-loop results for the
OMS $\beta$~function\ leads to a running coupling that appears to be quite reliable
in the region studied, namely a Higgs-boson mass $\mbox{$M_H$}\leq 800$ GeV and
a mass scale $\mu\leq 4$ TeV. We conclude that uncertainties in $\lambda(\mu)$
are not an important source of uncertainty in perturbative results for
physical scattering and decay amplitudes in this interesting region.
\acknowledgments
This work was supported in part by the U.S. Department of Energy under
Grant No.\ DE-FG02-95ER40896.
One of the authors (LD) would like to thank the Aspen Center for Physics
for its hospitality while parts of this work were done.
\makeatother
\newcommand{\Title}[1]{\noindent {\Large #1} \\}
\newcommand{\Author}[2]{\noindent{\large\bf #1}\\[2ex]\noindent{\it #2}\\}
\newcommand{\Abstract}[1]{\vskip 2mm \begin{center}
\parbox{16.4cm}{\small\noindent #1} \end{center}\bigskip}
\newcommand{\foom}[1]{\protect\footnotemark[#1]}
\newcommand{\foox}[2]{\footnotetext[#1]{#2}}
\newcommand{\email}[2]{\footnotetext[#1]{e-mail: #2}}
\newcommand{\sect}[1]{Sec.\,#1}
\newcommand{\ssect}[1]{Subsec.\,#1}
\def\nopagebreak{\nopagebreak}
\def\hspace{-1em}{\hspace{-1em}}
\def\hspace{-2em}{\hspace{-2em}}
\def\hspace{-0.5em}{\hspace{-0.5em}}
\def\hspace{-0.3em}{\hspace{-0.3em}}
\def\hspace{1cm}{\hspace{1cm}}
\def\hspace{1in}{\hspace{1in}}
\newcommand{\sequ}[1]{\setcounter{equation}{#1}}
\newcommand{\Eq}[1]{Eq.\,(\ref{#1})}
\newcommand{\Eqs}[2]{Eqs.\,(\ref{#1})--(\ref{#2})}
\defEq.\,{Eq.\,}
\defEqs.\,{Eqs.\,}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def&\nhq{&\hspace{-0.5em}}
\def\lal{&&\hspace{-2em} {}}
\def\begin{eqnarray} \lal{\begin{eqnarray} \lal}
\def\end{eqnarray}{\end{eqnarray}}
\def\nonumber \end{eqnarray}{\nonumber \end{eqnarray}}
\def\displaystyle{\displaystyle}
\def\textstyle{\textstyle}
\newcommand{\fracd}[2]{{\displaystyle\frac{#1}{#2}}}
\newcommand{\fract}[2]{{\textstyle\frac{#1}{#2}}}
\def\nonumber\\ {}{\nonumber\\ {}}
\def\nonumber\\[5pt] {}{\nonumber\\[5pt] {}}
\def\nonumber\\ \lal {\nonumber\\ \lal }
\def\nonumber\\[5pt] \lal {\nonumber\\[5pt] \lal }
\def\\[5pt]{\\[5pt]}
\def\\[5pt] \lal{\\[5pt] \lal}
\def\al =\al{&\nhq =&\nhq}
\def\stackrel{\rm def}{=}{\stackrel{\rm def}{=}}
\def{\,\rm e}{{\,\rm e}}
\def\partial{\partial}
\def{\,\rm Re\,}{{\,\rm Re\,}}
\def{\,\rm Im\,}{{\,\rm Im\,}}
\def{\,\rm tr\,}{{\,\rm tr\,}}
\def{\,\rm sign\,}{{\,\rm sign\,}}
\def{\,\rm diag\,}{{\,\rm diag\,}}
\def{\,\rm dim\,}{{\,\rm dim\,}}
\def{\rm const}{{\rm const}}
\def{\dst\frac{1}{2}}{{\displaystyle\frac{1}{2}}}
\def{\tst\frac{1}{2}}{{\textstyle\frac{1}{2}}}
\def\ \Rightarrow\ {\ \Rightarrow\ }
\def\ten#1{\mbox{$\cdot 10^{#1}$}}
\newcommand{\aver}[1]{\langle \, #1 \, \rangle \mathstrut}
\def\raisebox{-1.6pt}{\large $\Box$}\,{\raisebox{-1.6pt}{\large $\Box$}\,}
\newcommand{\vars}[1]{\left\{\begin{array}{ll}#1\end{array}\right.}
\newcommand{\lims}[1]{\mathop{#1}\limits}
\def\sum\limits{\sum\limits}
\def\int\limits{\int\limits}
\def\vphantom{\int}{\vphantom{\int}}
\def\vphantom{\intl^a}{\vphantom{\int\limits^a}}
\def\varepsilon{\varepsilon}
\defu_{\max}{u_{\max}}
\defspherically symmetric\ {spherically symmetric\ }
\defwormhole{wormhole}
\defblack hole{black hole}
\defblack holes{black holes}
\defT_{\rm H}{T_{\rm H}}
\def\delta\varphi{\delta\varphi}
\def\delta\alpha{\delta\alpha}
\def\delta\beta{\delta\beta}
\def\delta\gamma{\delta\gamma}
\defperturbation{perturbation}
\defperturbations{perturbations}
\defFig.\,{Fig.\,}
\renewcommand\floatpagefraction{1}
\renewcommand\topfraction{1}
\renewcommand\bottomfraction{1}
\renewcommand\textfraction{0}
\renewcommand\dblfloatpagefraction{1}
\renewcommand\dbltopfraction{1}
\newcommand{\Picture}[3]{
\begin{figure} \centering \unitlength=1mm
\begin{picture}(82.5,#1)
\put(0,0){\line(0,1){#1}}
\put(0,0){\line(1,0){82.5}}
\put(82.5,0){\line(0,1){#1}}
\put(0,#1){\line(1,0){82.5}}
\put(0,0){#2} \end{picture}
\caption{\protect\small #3} \medskip \hrule \end{figure} }
\newcommand{\WPicture}[3]{
\begin{figure*}[!t] \centering \unitlength=1mm
\begin{picture}(174,#1)
\put(0,0){\line(0,1){#1}}
\put(0,0){\line(1,0){174}}
\put(174,0){\line(0,1){#1}}
\put(0,#1){\line(1,0){174}}
\put(0,0){#2} \end{picture}
\caption{\protect\small #3} \medskip \hrule \end{figure*} }
\heads{K.A. Bronnikov, G. Cl\'ement, C.P. Constantinidis and J.C. Fabris}
{Cold Scalar-Tensor Black Holes: Causal Structure, Geodesics, Stability}
\bls{1.05}
\begin{document}
\thispagestyle{empty}
\twocolumn[
\rightline{\bf gr-qc/9804064}
\vspace*{25mm}
\Title{COLD SCALAR-TENSOR BLACK HOLES: \\[5pt]
CAUSAL STRUCTURE, GEODESICS, STABILITY}
\Author{K.A. Bronnikov\foom 1,
G. Cl\'ement\foom 2,
C.P. Constantinidis\foom 3
and J.C. Fabris\foom 4}
{Departamento de F\'{\i}sica, Universidade Federal do Esp\'{\i}rito Santo,
Vit\'oria, Esp\'{\i}rito Santo, Brazil}
\Abstract
{We study the structure and stability of the recently discussed spherically symmetric\
Brans-Dicke black-hole type solutions with an infinite horizon area and zero
Hawking temperature,
existing for negative values of the coupling constant $\omega$. These
solutions split into two classes: B1, whose horizon is reached by an
infalling particle in a finite proper time, and B2, for which this proper
time is infinite. Class B1 metrics are shown to be extendable beyond the
horizon only for discrete values of mass and scalar charge, depending on two
integers $m$ and $n$. In the case of even $m-n$ the space-time is globally
regular; for odd $m$ the metric changes its signature as the horizon is
crossed. Nevertheless, the Lorentzian nature of the space-time is
preserved, and geodesics are smoothly continued across the horizon, but,
when crossing the horizon, for odd $m$ timelike geodesics become spacelike
and {\sl vice versa}. Problems with causality, arising in some cases, are
discussed. Tidal forces are shown to grow infinitely near type B1 horizons.
All vacuum static, spherically symmetric\ solutions of the Brans-Dicke
theory with $\omega<-3/2$ are found to be linearly stable against spherically symmetric\ perturbations.
This result extends to the generic case of the Bergmann-Wagoner class of
scalar-tensor theories of gravity with the coupling function
$\omega(\phi) < -3/2$. }
]
\foox 1 {e-mail: kb@goga.mainet.msk.su;
permanent address: Centre for Gravitation and Fundamental Metrology,
VNIIMS, 3-1 M. Ulyanovoy St., Moscow 117313, Russia.}
\section{Introduction}
In the recent years there has been a renewed interest in scalar-tensor
theories (STT) of gravity as viable alternatives to general relativity
(GR), mostly in connection with their possible role in the early
Universe:
they provide power-law instead of exponential inflation, leading to
more plausible perturbation spectra \cite{spectra}.
Another aspect of interest in STT is the possible existence of
black holes (BHs) different from those well-known in GR.
\foox 2 {e-mail: gecl@ccr.jussieu.fr; permanent address:
Laboratoire de Gravitation et Cosmologie Relativistes, Universit\'e
Pierre et Marie Curie, CNRS/URA769, Tour 22-12, Bo\^{\i}te 142, 4 Place
Jussieu, 75252 Paris Cedex 05, France.}
\email 3 {clisthen@cce.ufes.br}
\email 4 {fabris@cce.ufes.br}
\setcounter{footnote}{4}
Thus, Campanelli and Lousto \cite{lousto} pointed out among the
static, spherically symmetric\ solutions of the Brans-Dicke (BD) theory
a subfamily possessing all BH properties, but (i) existing only for
negative values of the coupling constant $\omega$ and (ii) with
horizons of infinite area. These
authors argued that large negative $\omega$ are compatible with modern
observations and that such BHs may be of astrophysical relevance%
\footnote
{For brevity, we call BHs with infinite horizon areas type B BHs \cite{we},
to distinguish them from the conventional ones,
with finite horizon areas, to be called type A.}.
Indeed, the post-Newtonian parameters, determining the observational
behaviour of STT in the weak field limit, depend crucially on the
absolute value of $\omega$ rather than its sign \cite{73,will}.
In Ref.\,\cite{we} it was shown, in the framework of a general
(Bergmann-Wagoner) class of STT, that nontrivial BH solutions can exist
for the coupling function $\omega(\phi)+3/2 <0$, and that only in
exceptional cases these BHs have a finite horizon area. In the BD theory
($\omega = {\rm const}$) such BHs were indicated explicitly; they have
infinite horizon areas and zero Hawking temperature (``cold BHs''), thus
confirming the conclusions of \cite{lousto}. These BHs in turn split
into two subclasses: B1, where horizons are attained by infalling
particles in a finite proper time $\tau$, and B2, for which $\tau$
is infinite. These results are briefly presented in Sections 2 and 3.
The static region of a type B2 BH is geodesically complete since its
horizon is infinitely remote and actually forms a second spatial
asymptotic. For type B1 BHs the global picture is more complex and is
discussed here in some detail (\sect 4). It turns out that the horizon
is generically singular due to violation of analyticity, despite the
vanishing curvature invariants. Only a discrete set of B1-solutions,
parametrized by two integers $m$ and $n$, admits a Kruskal-like
extension, and, depending on their parity, four different global
structures are distinguished. Two of them, where $m-n$ is even, are
globally regular, in two others the region beyond the horizon contains a
spacelike singularity.
\ssect {4.4} describes the behaviour of geodesics for different cases of
BD BH metrics. It turns out that for odd $m$,
when crossing the horizon, timelike geodesics become spacelike and {\it
vice versa\/}, leading to problems with causality.
Thus, in a family of BHs there appear closed timelike curves, whose
existence may be avoided by assuming a ``helicoidal'' analytic extension
of the space-time manifold. Moreover, it is shown by considering the
geodesic deviation equations that near a horizon with an infinite area
all extended bodies are destroyed (stretched apart) by unbounded tidal
forces.
\sect 5 discusses the stability of STT solutions under spherically symmetric\ perturbations. In
this case the only dynamical degree of freedom is the scalar field, and
the perturbation\ analysis reduces to a single wave equation whose radial
part determines the system behaviour. Under reasonable boundary
conditions, it turns out that the BD solutions with $\omega<-3/2$ are
linearly stable, and this result extends to similar solutions of the
general STT provided the scalar field does not create new singularities
in the static domain. For the case $\omega > -3/2$ the stability
conclusion depends on the boundary condition at a naked singularity,
which is hard to formulate unambiguously.
We can conclude that, in STT with the anomalous sign of the scalar
field energy, vacuum spherically symmetric\ solutions generically describe stable BH-like
configurations. Some of them, satisfying a ``quantization'' condition,
are globally regular and have peculiar global structures. We also
conclude that, due to infinite tidal forces, a horizon with an infinite
area converts any infalling body into true elementary
(pointlike) particles, which afterwards become tachyons.
A short preliminary account of this work has been given in \cite{PLA}.
In our next paper we intend to discuss similar solutions with an
electric charge.
\section {Field equations}
The Lagrangian of the general (Bergmann-Wagoner) class of
STT of gravity in four dimensions is
\begin{equation} \label{L1}
L = \sqrt{-g}\biggr[\phi R
+ \frac{\omega(\phi)}{\phi}\phi_{;\rho}\phi^{;\rho}
+ L_{\rm m}\biggl]
\end{equation}
where $\omega(\phi)$ is an arbitrary function of the scalar field
$\phi$ and $L_{\rm m}$ is the Lagrangian of non-gravitational matter.
This formulation (the so-called {\it Jordan conformal
frame\/}) is commonly considered to be fundamental since just in this
frame the matter energy-momentum tensor $T^\mu_\nu$ obeys the
conventional conservation law $\nabla_{\alpha}T^\alpha_\mu =0$, giving
the usual equations of motion (the so-called atomic system of
measurements). In particular, free particles move along geodesics of
the Jordan-frame metric. Therefore, in what follows we discuss the
geometry, causal structure and geodesics in the Jordan frame.
We consider only scalar-vacuum configurations and put
$L_{\rm m}=0$.
The field equations are easier to deal with in
the {\it Einstein conformal frame\/}, where the transformed scalar
field $\varphi$ is minimally coupled to gravity. Namely, the
conformal mapping
$g_{\mu\nu} = \phi^{-1}\bar g_{\mu\nu}$ transforms Eq.\, (\ref{L1})
(up to a total divergence) to
\begin{eqnarray} \lal \label{L2}
L = \sqrt{-\bar g}\biggr(\bar R + \varepsilon
{\bar g}^{\alpha\beta}\varphi_{;\alpha}\varphi_{;\beta}\biggl),
\\ \lal
\varepsilon = {\,\rm sign\,} (\omega + 3/2), \label{eps}
\hspace{1cm}
\frac{d\varphi}{d\phi} = \biggl|\frac{\omega + 3/2}{\phi^2}\biggr|^{1/2}.
\end{eqnarray}
The field equations are
\begin{equation} \label{eq1}
R_{\mu\nu} = -\varepsilon \varphi_\mu \varphi_\nu, \hspace{1cm}
\nabla^\alpha \nabla_\alpha\varphi =0
\end{equation}
where we have suppressed the bars marking the Einstein frame.
The value $\varepsilon=+1$ corresponds to normal STT, with positive energy
density in the Einstein frame; the choice $\varepsilon=-1$ is anomalous. The
BD theory corresponds to the special case $\omega = {\rm const}$, so
that
\begin{equation}
\phi = \exp (\varphi/\sqrt{|\omega+3/2|}). \label{phi-bd}
\end{equation}
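Equation (\ref{phi-bd}) is just the integral of $d\varphi/d\phi$ in Eq.\ (\ref{eps}) at constant $\omega$; a quick finite-difference check (the sample values of $\omega$ and $\phi$ are arbitrary, avoiding $\omega=-3/2$):

```python
import math

def varphi(phi, omega):
    """Einstein-frame scalar from Eq. (phi-bd), inverted:
    phi = exp(varphi/sqrt|omega+3/2|)  =>  varphi = sqrt|omega+3/2| * ln(phi)."""
    return math.sqrt(abs(omega + 1.5)) * math.log(phi)

def dvarphi_dphi(phi, omega):
    """Right-hand side of the last relation in Eq. (eps)."""
    return math.sqrt(abs((omega + 1.5) / phi ** 2))

# central finite differences reproduce d(varphi)/d(phi)
for omega in (-4.0, -2.0, 10.0):
    for phi in (0.5, 1.0, 3.0):
        h = 1e-6
        num = (varphi(phi + h, omega) - varphi(phi - h, omega)) / (2 * h)
        assert abs(num - dvarphi_dphi(phi, omega)) < 1e-6
```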
Let us consider a spherically symmetric\ field system, with the metric
\begin{eqnarray} \lal \label{met1}
ds_{\rm E}^2 = {\,\rm e}^{2\gamma}dt^2 - {\,\rm e}^{2\alpha}du^2
- {\,\rm e}^{2\beta}d\Omega^2, \nonumber\\ \lal
\hspace{1cm} d\Omega^2 =
d\theta^2 + \sin^2 \theta d\Phi^2,
\end{eqnarray}
where E stands for the Einstein frame, $u$ is the radial
coordinate, $\alpha$, $\beta$, $\gamma$ and the field $\varphi$ are
functions of $u$ and $t$.
Preserving only linear terms with respect to
time derivatives, we can write Eqs.\, (\ref{eq1}) in the following form:
\begin{eqnarray}
{\,\rm e}^{2\alpha}R^0_0 \al =\al \label{00}
{\,\rm e}^{2\alpha-2\gamma}(\ddot\alpha + 2\ddot\beta) \nonumber\\ \lal
-[ \gamma'' +\gamma'(\gamma'-\alpha'+2\beta')] =0; \\
{\,\rm e}^{2\alpha}R^1_1 \al =\al
{\,\rm e}^{2\alpha-2\gamma}\ddot\alpha \nonumber\\ \lal \label{11}
\hspace{-2em}\nqq\hspace{-1em} -[\gamma''+2\beta'' +\gamma'{}^2+2\beta'{}^2
-\alpha'(\gamma'+2\beta')] = {\tst\frac{1}{2}}\varepsilon\varphi'{}^2;\\
{\,\rm e}^{2\alpha}R^2_2 \al =\al
{\,\rm e}^{2\alpha-2\beta} \label{22}
+{\,\rm e}^{2\alpha-2\gamma}\ddot\beta \nonumber\\ \lal
-[\beta''+\beta'(\gamma'-\alpha'+2\beta')] =0;\\
R_{01}\al =\al
2[\dot\beta' + \dot{\beta}\beta' \label{01}
-\dot{\alpha}\beta'-\dot{\beta}\gamma']
= -\varepsilon \dot{\varphi}\varphi'; \\
\sqrt{-g}\raisebox{-1.6pt}{\large $\Box$}\, \varphi\al =\al \label{ephi}
{\,\rm e}^{-\gamma+\alpha+2\beta}\ddot\varphi
-\bigl({\,\rm e}^{\gamma-\alpha+2\beta} \varphi'\bigr)' =0
\end{eqnarray}
where dots and primes denote, respectively, $\partial/\partial t$ and $\partial/\partial u$.
Up to the end of \sect 4 we will
restrict ourselves to static configurations.
\section{Black holes in scalar-tensor theories}
The general static, spherically symmetric\ scalar-vacu\-um solution of the
theory (\ref{L1}) is given by \cite{73,k1}
\begin{eqnarray} \lal \label{ds1}
ds_{\rm J}^2 = \frac{1}{\phi}ds_{\rm E}^2 \nonumber\\ \lal
= \frac{1}{\phi}
\biggl\{{\,\rm e}^{-2bu}dt^2 - \frac{{\,\rm e}^{2bu}}{s^2(k,u)}
\biggr[\frac{du^2}{s^2(k,u)} + d\Omega^2\biggl]
\biggr\},
\\ \lal \label{phi1}
\varphi = Cu + \varphi_0, \hspace{1cm} C, \varphi_0 ={\rm const},
\end{eqnarray}
where J denotes the Jordan frame,
$u$ is the harmonic radial coordinate in the static space-time
in the Einstein frame,
such that $\alpha(u) = 2\beta(u) + \gamma(u)$, and the function
$s(k,u)$ is
\begin{equation} \label{def-s}
s(k,u) \stackrel{\rm def}{=} \vars {
k^{-1}\sinh ku, \ & k > 0, \\
u, \ & k = 0, \\
k^{-1}\sin ku, \ & k < 0, }
\end{equation}
The constants $b$, $k$ and $C$ (the scalar charge) are related by
\begin{equation} \label{r1}
2k^2{\,\rm sign\,} k = 2b^2 + \varepsilon C^2.
\end{equation}
The range of $u$ is $0 < u < u_{\max}$, where $u=0$ corresponds to spatial
infinity, while $u_{\max}$ may be finite or infinite depending on $k$ and
the behaviour of $\phi(\varphi)$ described by (\ref{eps}). In normal
STT ($\varepsilon=+1$), by (\ref{r1}), we have only $k > 0$, while in
anomalous STT $k$ can have either sign.
According to previous studies \cite{73,k1}, all these solutions in
normal STT have naked singularities, with rare exceptions
when the sphere $u=\infty$ is regular and admits an extension of the
static coordinate chart. An example is a conformal scalar field in
GR viewed as a special case of STT, leading to BHs with scalar charge
\cite{70,bek}. Even in this case, such configurations are
unstable due to the blow-up of the effective gravitational coupling
\cite{78}.
In anomalous STT ($\varepsilon=-1$) the solution behaviour is more diverse and
the following cases without naked singularities can be found:
\medskip\noindent
{\bf 1.} $k > 0$.
Possible event horizons have an infinite area (type B BHs),
i.e. $g_{22}\to\infty$ as $r\to 2k$.
In BD theory, after the coordinate transformation
\begin{equation}
{\,\rm e}^{-2ku} = 1 - 2k/r \equiv P(r) \label{def-P}
\end{equation}
the solution takes the form
\begin{eqnarray} \label{ds+}
ds_{\rm J}^2 \al =\al P^{-\xi} ds^2_{\rm E} \nonumber\\ {}
\al =\al P^{-\xi}\Bigl(P^{a }dt^2 - P^{-a }dr^2
- P^{1 - a }r^2 d\Omega^2 \Bigl),\nonumber\\ {}
\phi \al =\al P^\xi
\end{eqnarray}
with the constants related by
\begin{equation}
(2\omega+3) \xi^2 = 1-a^2, \hspace{1cm} a=b/k. \label{int+}
\end{equation}
The allowed range of $a$ and $\xi$, providing
a horizon without a curvature singularity at $r=2k$, is
\begin{equation}
a > 1, \hspace{1cm} a > \xi \geq 2 - a. \label{range+}
\end{equation}
(Eqs.\, (\ref{ds+}),(\ref{int+}) are valid for $\omega>-3/2$ as well, but
then $a<1$, leading to a naked singularity.)
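To see how (\ref{int+}) enforces $a>1$ in the anomalous sector, one may take sample values (chosen here purely for illustration) and verify the range (\ref{range+}) numerically:

```python
import math

# (int+): (2*omega + 3)*xi**2 = 1 - a**2 in the anomalous sector omega < -3/2.
# Sample values (assumption): omega = -7/2, xi = 0.9, a type B1 case (xi < 1).
omega, xi = -3.5, 0.9
a = math.sqrt(1 - (2*omega + 3)*xi**2)   # a**2 = 1 + |2*omega + 3|*xi**2 > 1

assert a > 1            # a horizon without a curvature singularity needs a > 1
assert a > xi           # regularity range (range+), upper bound
assert xi >= 2 - a      # regularity range (range+), lower bound
assert xi < 1           # B1: the horizon is reachable in finite proper time
```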
For $\xi < 1$ particles can arrive at the horizon in a finite proper
time and may eventually (if geodesics can be extended)
cross it, entering the BH interior
(type B1 BHs \cite{we}).
When $\xi\geq 1$, the sphere $r=2k$ is infinitely far and it takes
an infinite proper time for a particle to reach it. Since in the
same limit $g_{22}\to \infty$, this configuration (a type
B2 BH \cite{we}) resembles a wormhole.
\medskip\noindent
{\bf 2.} $k = 0$.
Just as for $k>0$, in a general STT, only type B black holes\ are possible
\cite{we}, with $g_{22}\to\infty$ as $u\to \infty$.
In particular, the BD solution is
\begin{eqnarray} \lal
ds^2 = {\,\rm e}^{-su}\biggr[{\,\rm e}^{-2bu}dt^2 - \frac{{\,\rm e}^{2b u}}{u^2}\biggr(
\frac{du^2}{u^2} + d\Omega^2\biggl)\biggl], \nonumber\\ \lal
\phi = {\,\rm e}^{su}, \hspace{1cm}
s^2(\omega + 3/2) = -2b^2. \label{ds0}
\end{eqnarray}
The allowed range of the integration constants is
$b > 0, \quad 2b > s > -2b$.
This range is again divided into two halves: for $s>0$ we
deal with a type B1 BH, and for $s<0$ with one of type B2
($s=0$ is excluded since it leads to GR).
\medskip\noindent
{\bf 3.} $k < 0$.
In the general STT the metric (\ref{ds1}) describes a wormhole, with
two flat asymptotics at $u=0$ and $u=\pi/|k|$, provided $\phi$ is
regular between them. In exceptional cases
the sphere $u_{\max} = \pi/|k|$ may be an event horizon, namely, if
$\phi \sim 1/\Delta u^2$, $\Delta u \equiv |u - u_{\max}| $. In this case
it has a finite area (a type A black hole) and
\begin{equation} \label{bh-}
\omega(\phi) + 3/2 \to -0 \qquad {\rm as} \qquad u\to u_{\max}.
\end{equation}
The behaviour of $g_{00}$ and $g_{11}$ near the horizon is then similar
to that in the extreme Reissner-Nordstr\"om solution.
In the BD theory we have only the wormhole solution
\begin{eqnarray} \lal
ds^2 = {\,\rm e}^{-su}\biggr[{\,\rm e}^{-2bu}dt^2 \label{ds-}
- \frac{k^2{\,\rm e}^{2bu}}{\sin^2{ku}}\biggr(
\frac{k^2du^2}{\sin^2{ku}} + d\Omega^2\biggl)\biggl],
\nonumber\\ \lal
s^2(\omega + 3/2) = -2k^2 - 2b^2,
\end{eqnarray}
with masses of different signs at the two asymptotics.
\medskip\noindent
For all the BH solutions mentioned, the Hawking temperature is zero.
Their infinite horizon areas may mean that their entropy is infinite as
well; however, a straightforward application of the GR proportionality
between entropy and horizon area is hardly justified here. A
calculation of BH entropy is a separate problem, discussed in a large
number of recent works from various standpoints.
\section{Analytic extension and causal structure of Brans-Dicke
black holes}
\subsection {Extension}
Let us discuss possible Kruskal-like extensions of type B1
BH metrics (\ref{ds+}) ($k>0$) and (\ref{ds0}) ($k=0$) of the BD theory.
For (\ref{ds+}), with $a > 1 > \xi > 2-a$,
we introduce, as usual, the null coordinates $v$ and $w$:
\begin{equation}
v = t + x, \qquad w = t - x, \qquad
x \stackrel{\rm def}{=} \int P^{-a}dr \label{vw}
\end{equation}
where $x \to \infty$ as $r \to \infty$ and $x \to -\infty$
as $r \to 2k$. The asymptotic behaviour of
$x$ as $r \to 2k$ ($P \to 0$) is
$x \propto - P^{1-a}$, and in a finite
neighbourhood of the horizon $P=0$ one can write
\begin{equation} \label{x}
x \equiv {\tst\frac{1}{2}}(v-w) =-{\tst\frac{1}{2}} P^{1-a}f(P)\,,
\end{equation}
where $f(P)$ is an analytic function of $P$, with $f(0)=4k/(a-1)$:
\begin{equation} \label{f}
f(P) = -4k\sum_{q=0}^{\infty} \frac{q+1}{q-a+1}P^q.
\end{equation}
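The series (\ref{f}) can be verified by writing $dr = 2k\,dP/(1-P)^2$ in the integral (\ref{vw}) and integrating term by term. A truncated symbolic check (illustrative only, not part of the paper) in Python/sympy:

```python
import sympy as sp

P, k, a = sp.symbols('P k a', positive=True)
N = 6   # truncation order of the power series (illustration only)

# f(P) from (f), truncated: f = -4k * sum_{q=0}^{N} (q+1)/(q-a+1) P^q
f = -4*k*sum(sp.Integer(q + 1)/(q - a + 1)*P**q for q in range(N + 1))
x = -sp.Rational(1, 2)*P**(1 - a)*f        # tortoise-like coordinate (x)

# with P = 1 - 2k/r one has dr/dP = 2k/(1-P)^2, hence
# P^a (dx/dP)/(2k) = 1/(1-P)^2 = sum_{q>=0} (q+1) P^q
lhs = sp.diff(x, P)*P**a/(2*k)
rhs = sum((q + 1)*P**q for q in range(N + 1))
assert sp.simplify(lhs - rhs) == 0
assert sp.simplify(f.subs(P, 0) - 4*k/(a - 1)) == 0     # f(0) = 4k/(a-1)
```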
Then, let us define new null coordinates $V<0$ and $W>0$
related to $v$ and $w$ by
\begin{equation} \label{VW}
- v = (-V)^{-n-1}, \quad\ w = W^{-n-1}, \quad\ n={\rm const}.
\end{equation}
The mixed coordinate patch $(V,w)$ is defined for $v<0$ ($t<-x$) and
covers the whole past horizon $v=-\infty$. Similarly, the patch $(v,W)$
is defined for $w>0$ ($t>x$) and covers the whole future horizon
$w=+\infty$. So these patches can be used to extend the metric through
one or the other horizon.
Consider the future horizon. As is easily verified, a finite value
of the metric coefficient $g_{vW}$ at $W=0$ is achieved if we take
$n+1 = (a-1)/(1-\xi)$, which is positive for $a>1>\xi$.
In a finite neighbourhood of the horizon,
the metric (\ref{ds+}) can be written in
the coordinates $(v,W)$ as follows:
\begin{eqnarray} \label{ext}
ds^2 \al =\al P^{a-\xi} dv\, dw - P^{1-a-\xi}r^2 d\Omega^2
\nonumber\\ {}
\al =\al -(n+1)f^{\frac{n+2}{n+1}}
\cdot(1-vW^{n+1})^{-\frac{n+2}{n+1}}dv\,dW
\nonumber\\ \lal
-\fracd{4k^2}{(1-P)^2}f^{-\frac m{n+1}}
\cdot(1-vW^{n+1})^{\frac m{n+1}} W^{-m} d\Omega^2\nonumber\\ \lal
\end{eqnarray}
where
\begin{equation}
m = (a-1+\xi)/(1-\xi). \label{def-m}
\end{equation}
The metric (\ref{ext}) can be extended at fixed $v$ from $W>0$ to $W<0$
only if the numbers $n+1$ and $m$ are both integers (since otherwise
the fractional powers of negative numbers violate the analyticity in
the immediate neighbourhood of the horizon). This leads to a discrete
set of values of the integration constants $a$ and $\xi$:
\begin{equation}
a = \frac{m+1}{m-n}, \hspace{1cm} \xi = \frac{m-n-1}{m-n}, \label{qu}
\end{equation}
where, according to the regularity conditions (\ref{range+}),
$m > n \geq 0$. Excluding the Schwarzschild case $m=n+1$ ($\xi =0$), which
by (\ref{int+}) corresponds to $a=1$ ($m=0$), we see
that regular BD BHs correspond to integers $m$ and $n$ such that
\begin{equation}
m-2 \ge n \ge 0. \label{mn}
\end{equation}
An extension through the past horizon can be performed in the
coordinates $(V,w)$ in a similar way and with the same results.
It follows that, although the curvature scalars
vanish on the Killing horizon $P=0$, the metric cannot
be extended beyond it unless the constants $a$ and $\xi$
obey the ``quantization condition'' (\ref{qu});
otherwise the horizon is generically singular. The Killing horizon, which
is at a finite affine distance, is part of the boundary of the
space-time, i.e. geodesics and other possible trajectories terminate
there. Similar properties were obtained in a (2+1)--dimensional model
with exact power--law metric functions \cite{sigma} and in the case of
black $p$--branes \cite{GHT}.
\Picture{60}
{\hspace{-1em} \input bh0.pic}
{Extensions through the future horizon for odd (a) and even (b) values of
$n$. Thick vertical and horizontal arrows show the growth direction of the
$x$ coordinate in the corresponding regions.}
We have thus obtained a discrete family of BH solutions whose parameters
depend on the two integers $m$ and $n$. The corresponding parameters
describing the asymptotic form of the solution, the active gravitational
mass $M$ and the scalar charge $S$, defined by
\[
\hspace{-0.5em} g_{00}= 1 -\frac{2GM}{r} +o\biggl(\frac 1r\biggr),
\qquad \phi = 1 + \frac Sr + o\biggl(\frac 1r\biggr),
\]
where $G$ is the Newtonian gravitational constant,
are ``quantized" according to the relations
\begin{equation}
GM = k\frac{n+2}{m-n}, \hspace{1cm} S = -2k\frac{m-n-1}{m-n}. \label{mass}
\end{equation}
The constant $k$, specifying the length scale of the solution,
remains arbitrary. On the other hand, the coupling constant $\omega$
takes, according to (\ref{int+}), discrete values:
\begin{equation}
|2\omega+3| = \frac{(2m-n+1)(n+1)}{(m-n-1)^2}. \label{om-mn}
\end{equation}
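The relations (\ref{qu}), (\ref{mass}) and (\ref{om-mn}) are mutually consistent; this can be checked symbolically (an illustrative check, not part of the paper), starting from (\ref{qu}) and the asymptotics $g_{00}=P^{a-\xi}$, $\phi=P^{\xi}$ with $P = 1-2k/r$:

```python
import sympy as sp

m, n, k = sp.symbols('m n k', positive=True)

# the "quantized" constants (qu)
a  = (m + 1)/(m - n)
xi = (m - n - 1)/(m - n)

# (int+): 2*omega + 3 = (1 - a^2)/xi^2; its modulus reproduces (om-mn)
two_omega_plus_3 = sp.simplify((1 - a**2)/xi**2)
assert sp.simplify(two_omega_plus_3
                   + (2*m - n + 1)*(n + 1)/(m - n - 1)**2) == 0

# asymptotics: g_00 = P^{a-xi} ~ 1 - 2k(a-xi)/r and phi = P^{xi} ~ 1 - 2k*xi/r
GM = sp.simplify(k*(a - xi))
S  = sp.simplify(-2*k*xi)
assert sp.simplify(GM - k*(n + 2)/(m - n)) == 0        # (mass), first relation
assert sp.simplify(S + 2*k*(m - n - 1)/(m - n)) == 0   # (mass), second relation
```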
The $k = 0$ solution (\ref{ds0}) of the BD theory also has a Killing
horizon ($u \to \infty$) at finite geodesic distance if $s > 0$. However,
this space-time does not admit a Kruskal--like
extension and so is singular. The reason is that in this case the relation
giving the tortoise--like coordinate $x$,
\begin{equation} \label{x0}
x = \int\frac{{\rm e}^{2bu}}{u^2}\,du = \frac{{\rm e}^{2bu}}{2bu^2}F(u)
\end{equation}
(where $F(u)$ is some function such that $F(\infty) = 1$) cannot be
inverted near $u = \infty$ to obtain $u$ as an analytic function of $x$.
\subsection {Geometry and causal structure}
To study the geometry beyond the horizons of the metric (\ref{ds+}), or
(\ref{ext}), let us define the new radial coordinate $\rho$ by
\begin{equation} \label{rho}
P \equiv {\rm e}^{-2ku} \equiv 1 - \frac{2k}{r} \equiv \rho^{m-n}.
\end{equation}
The resulting solution (\ref{ds+}), defined in the
static region I ($P>0$, $\rho > 0$), is
\begin{eqnarray} \label{global}
ds^2 \al =\al \rho^{n+2}\,dt^2 -
\frac{4k^2(m-n)^2}{(1-P)^4}\,\rho^{-n-2}\,d\rho^2 \nonumber\\ \lal
- \frac{4k^2}{(1-P)^2}\,\rho^{-m}\,d\Omega^2,
\qquad
\phi = \rho^{m-n-1}.
\end{eqnarray}
By (\ref{x}), $\rho$ is related to the mixed null coordinates
$(v,W)$ by
\begin{eqnarray} \lal \label{rhoW}
\rho (v,W) = W\,[f(P)]^{1/(n+1)}[1-vW^{n+1}]^{-1/(n+1)}. \nonumber\\ \lal
\end{eqnarray}
This relation and a similar one giving $\rho (V,w)$
show that when the future (past) horizon
is crossed, $\rho$ varies smoothly, changing its sign with $W$ ($V$).
For $\rho < 0$ the metric (\ref{global}) describes
the space-time regions beyond the horizons.
\WPicture{79}
{\qquad \input bh1.pic \hspace{1cm}
\qquad \input bh2.pic}
{The Penrose diagram and the effective potential for geodesics for a BH
with $m$ and $n$ both even. Curve 1 is $V_\eta(\rho)$, curve 2 is
$V_L(\rho)$ and curve 3 is $V(\rho)$ for a nonradial timelike geodesic. The
curves and ``energy levels" E1--E5 correspond to different kinds of
geodesics as described in \ssect 4.4, item 1a.}
To construct the corresponding Penrose diagrams, it is helpful to notice
that by (\ref{x}) the radial coordinate $x$ is related to $\rho$ by
\begin{equation}
x= -{\tst\frac{1}{2}} \rho^{-n-1} f(P), \label{x-rho}
\end{equation}
so that for odd $n$ the horizon as seen from region II ($\rho <0$) also
corresponds to $x\to -\infty$. On the other hand, in the 2-dimensional
metric $ds_2^2 = \rho^{n+2}(dt^2 - dx^2)$, for $\rho <0$ the coordinate
$x$ is timelike, hence in region II beyond the future horizon (with
respect to the original region I) $x\to -\infty$ means ``down". A new
horizon for region II joins the picture at point O --- see Fig.\, 1(a).
For even $n$, when $x$ in region II remains a spatial coordinate
and the coordinate (\ref{x-rho}) changes its sign when crossing the
horizon, a new horizon joins the old one at the future infinity point of
region I --- Fig.\, 1(b). These considerations are easily verified by
introducing null coordinates in region II, similar to $v$ and $w$
previously used in region I. Continuations through the past horizons are
performed in a similar manner.
The resulting causal structures depend on the parities of $m$ and $n$.
\medskip\noindent
{\bf 1a.} Both $m$ and $n$ are even, so $P(\rho)$ is an even function.
The two regions $\rho > 0$ and $\rho < 0$ are isometric
($g_{\mu\nu}(-\rho) = g_{\mu\nu}(\rho)$), and the Penrose diagram is
similar to that for the extreme Kerr space-time,
an infinite tower of alternating regions I and II (Fig.\, 2, left).
All points of the diagram, except the boundary and the
horizons, correspond to usual 2-spheres.
\medskip\noindent
{\bf 1b.} Both $m$ and $n$ are odd; again $P(\rho)$ is an even function,
but regions I and II are
now anti-isometric ($g_{\mu\nu}(-\rho) = -g_{\mu\nu}(\rho)$).
The metric tensor in region II ($\rho < 0$) has the signature ($-+++$)
instead of ($+---$) in region I. Nevertheless, the Lorentzian nature of
the space-time is preserved. The Penrose diagram is shown in Fig.\, 3,
left. In both cases 1a and 1b the maximally extended space-times are
globally regular\footnote{A globally regular extension of an extreme
dilatonic black hole, with the same Penrose diagram as in our case 1a,
was discussed in \cite{GHT}.}.
\medskip\noindent
{\bf 2.} $m-n$ is odd, i.e. $P(\rho)$ is an odd function; moreover,
$P \to -\infty$ ($r \to 0$) as $\rho \to -\infty$, so that the
metric (\ref{global}) is singular on the line $\rho = -\infty$, which is
spacelike. The resulting Penrose
diagrams are similar to those of the Schwarzschild
space-time in case 2a ($n$ odd, $m$ even), and of the extreme ($e^2 =
m^2$) Reissner--Nordstr\"{o}m space-time in case 2b ($n$ even, $m$ odd,
Fig.\, 4). In the latter subcase, however, the 4-dimensional metric
changes its signature when crossing the horizon, similarly to case 1b.
\subsection {Type B2 structure}
Let us briefly consider the case B2: \ $k>0$, $a>\xi>1$. As before,
the metric is transformed according to (\ref{vw})--(\ref{VW}) and at the
future null limit (now infinity rather than a horizon, therefore we
avoid the term ``black hole") we again arrive at (\ref{ext}), where now $W\to
\infty$ as $P\to 0$. The asymptotic form of the metric as $W\to \infty$
is
\begin{equation}
ds^2 = -C_1 dv\,dW - C_2 W^{-m} d\Omega^2 \label{B2vW}
\end{equation}
where $C_{1,2}$ are some positive constants, while
the constant $m$, defined in (\ref{def-m}), is
now negative. A further application of the $v$-transformation
(\ref{VW}) at the same asymptotic, valid for any finite $v<0$, leads to
\begin{eqnarray} \lal \label{B2VW}
ds^2 = -C_1 (-V)^{(a-\xi)/(\xi-1)}dV\,dW - C_2 W^{-m}d\Omega^2.
\nonumber\\ \lal
\end{eqnarray}
If we now introduce new radial ($R$) and time ($T$) coordinates
by $T=V+W$ and $R=V-W$, then in a spacelike section $T={\rm const}$ the limit
$R\to -\infty$ corresponds to simultaneously $V\to -\infty$ and
$W\to +\infty$, with $|V|\sim W$, and the metric (\ref{B2VW}) turns into
\begin{eqnarray}
ds^2 \al =\al 4C_1 (-R)^{(a-\xi)/(\xi-1)}(dT^2 - dR^2) \nonumber\\ \lal \label{B2RT}
\hspace{1in} - C_2 (-R)^{-m}d\Omega^2.
\end{eqnarray}
Evidently, this asymptotic region is a nonflat spatial infinity, with
infinitely growing coordinate spheres and infinitely growing $g_{00}$;
i.e., this infinity repels test particles.
\WPicture{79}
{\input bh3.pic
\hspace{1cm}
\input bh4.pic
}
{The Penrose diagram and the effective potential for geodesics for a BH
with $m$ and $n$ both odd. Curve 1 is $V_\eta(\rho)$, curve 2 is
$V_L(\rho)$ and curve 3 is $V(\rho)$ for a nonradial timelike geodesic. The
curves and ``energy levels" E1--E7 correspond to different kinds of
geodesics as described in \ssect 4.4, item 1b.}
A Penrose diagram of a B2 type configuration coincides with a single
region I in any of the diagrams; all its sides depict null infinities,
its right corner corresponds to the usual spatial infinity and its left
corner to the unusual one, described by the metric (\ref{B2RT}). The
latter has been obtained here by ``moving along" the future null
infinity $W\to\infty$, but the same is evidently achieved starting from
the past side.
\subsection{Geodesics}
Let us now return to type B1 BHs and study test particle motion in
their backgrounds, described by the geodesic equations.
One can verify that all geodesics are continued smoothly across the
horizons, even in cases 1b and 2b when the metric changes its
signature (the geodesic equation depends only on the Christoffel symbols
and is invariant under the anti-isometry $g_{\mu\nu} \to -g_{\mu\nu}$).
We will use the metric (\ref{global}). Then the integrated geodesic
equation for arbitrary motion in the plane $\theta=\pi/2$ reads:
\begin{eqnarray} \lal \label{geo}
\hspace{-1em} \frac{4k^2(m-n)^2}{(1-P)^4}\dot{\rho}^2
+ \eta \rho^{n+2}
+ \frac{L^2}{4k^2}(1-P)^2\rho^{m+n+2} = E^2 \nonumber\\ \lal
\end{eqnarray}
where $\dot{\rho} \equiv d\rho/d\lambda$, $\lambda$ being an affine
parameter such that $ds^2 = \eta d\lambda^2$, with $\eta = +1,\ 0,\ -1$
for timelike, null or spacelike geodesics; $E^2$ and $L^2$ are constants
of motion associated with the timelike and azimuthal Killing vectors and,
correspondingly, with the particle total energy and angular momentum.
Eq.\,(\ref{geo}) has the form of an energy balance equation,
with the effective potential
\begin{equation}
V(\rho)= V_{\eta}(\rho) + V_L(\rho) \label{V}
\end{equation}
where $V_{\eta}$ and $V_L$ are, respectively, the second and third terms
on the left-hand side of (\ref{geo}).
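As a numerical illustration (with sample even integers chosen only for this sketch), one can verify two features of the potential used below: both terms of $V$ vanish at the horizon $\rho=0$, and in case 1a $V$ is even in $\rho$:

```python
import sympy as sp

rho, k, L = sp.symbols('rho k L', real=True)
m, n = 4, 2        # sample even integers with m - 2 >= n >= 0 (case 1a)

P = rho**(m - n)
V_eta = rho**(n + 2)                              # eta = +1 (timelike part)
V_L = L**2/(4*k**2)*(1 - P)**2*rho**(m + n + 2)   # angular-momentum part
V = V_eta + V_L

# both parts vanish at the horizon rho = 0, so there (geo) reduces to
# rho_dot^2 = const * E^2: the horizon is no potential barrier
assert V.subs(rho, 0) == 0
# case 1a: V is even in rho, reflecting the isometry of regions I and II
assert sp.simplify(V.subs(rho, -rho) - V) == 0
```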
One should note that although the coordinate $\rho$ belongs to the
static frame of reference, one can use it (and consequently Eq.\,
(\ref{geo})) to describe geodesics that cross the horizon since
in a close neighbourhood of the horizon $\rho=0$ it coincides (up to a
positive constant factor) with a manifestly well-behaved coordinate $V$
or $W$ and, on the other hand, Eq.\, (\ref{geo}) reads simply
$\dot\rho{}^2 = {\rm const}\cdot E^2$ and is thus also well-behaved.
Let us discuss the four possible cases according to the
parity of $m$ and $n$ and the corresponding signature of the metric
(\ref{global}) for $\rho < 0$:
\medskip\noindent
{\bf 1a:} $m$ even, $n$ even, ($+ - - \,-$). The range of $\rho$ is
$(-1,+1)$. The coefficient $(1-P)^{-1}$ of the kinetic term and the
potential in (\ref{geo}) are both symmetric under the exchange
$\rho \to -\rho$. The potential $V(\rho)$ is shown in
Fig.\,2: curve 1 depicts $V_\eta(\rho)$ for $\eta=1$, i.e., the
potential for radial timelike geodesics; curve 2 shows $V_L$, the
angular momentum dependent part of $V(\rho)$, and curve 3 shows
their sum for certain generic values of the motion parameters.
Depending on the value of $E$, geodesic motion can be symmetrical with
successive horizon crossings from one region to the next isometrical
region without reaching past or future null infinity (E1: see ``energy
levels" in Fig.\,2, right and the corresponding curves in Fig.\,2,
left), or starting from a past timelike infinity in region I and
reaching a future timelike infinity in region II (E4). Some nonradial
timelike (E2, E3) and null (E5) geodesics remain in a single region,
corresponding to bound (E2) or unbound (E3) particle orbits near the BH
or photons passing it by (E5). The existence of nonradial trajectories
like E1 for any value of $L$ is here connected with the infinite value
of ${\,\rm e}^{\beta}$ at the horizon, creating a minimum of $V_L$. Radial null
geodesics ($V(\rho)\equiv 0$) correspond, as always, to straight lines
tilted by $45^{\circ}$ (not shown).
\medskip\noindent
{\bf 1b:} $m$ odd, $n$ odd, ($- + + \,+$).
The one-dimensional dependence $\rho(\lambda)$ is qualitatively the same
for null geodesics ($\eta = 0$). However, the global picture is
drastically different, see Fig.\, 3. In particular, type E1 null
geodesics which periodically cross the horizon necessarily go repeatedly
through all four regions of the Penrose diagram, so that their
projections onto a 2-dimensional plane $\theta= {\rm const}$, $\Phi={\rm const}$
are closed; for some discrete values of the angular momentum $L^2$ they
will be closed even in the full 4-dimensional space-time. Thus
such BH space-times contain closed null geodesics, leading to
causality violation.
\WPicture{81}
{\input bh6.pic \hspace{1cm} \quad
\input bh5.pic}
{The Penrose diagram and the effective potential for geodesics for a BH
with $n$ even and $m$ odd. Curve 1 is $V_\eta(\rho)$, curve 2 is
$V_L(\rho)$ and curve 3 is $V(\rho)$ for a nonradial timelike geodesic.
The curves and ``energy levels" E1--E7 correspond to different kinds of
geodesics as described in \ssect 4.4, item 2b.}
However, this problem can be avoided if we choose, instead of the
simplest, one-sheet maximal analytic extension corresponding to the
planar Penrose diagram of Fig.\, 3, left, a ``helicoidal" analytic
extension constructed in the following way: starting from a given region
I and proceeding with the extension counterclockwise, after 4 steps we
come again to a region I isometric to the original one, but do not
identify these mutually isometric regions and repeat the process
indefinitely. The same process is performed in the clockwise direction.
The resulting Penrose diagram is a Riemann
surface with a countable infinity of sheets similar to that in Fig.\, 3,
left, cut along one of the horizons. Such a structure no longer
exhibits causality violation%
\footnote
{Strictly speaking, such a process might be applied to the Schwarzschild
and Rindler space-times, with possible identification of isometric regions
after a finite number of steps. This is, however, unnecessary since there
a causality problem like ours does not exist.}.
In general, when crossing a horizon, a null geodesic remains null,
but timelike geodesics become spacelike and {\sl vice versa\/},
since the coefficient $\eta$ in Eq.\, (\ref{geo}), being an integral of
motion for a given geodesic, changes its meaning in transitions between
regions I and II: a geodesic with $\eta=1$ is timelike in region I and
spacelike in region II, and conversely for $\eta=-1$.
For nonnull geodesics the potential is now asymmetric: thus, for
trajectories which are timelike in region I ($\eta = +1$), it becomes
attractive for $\rho < 0$ (Fig.\,3, the notations coincide with those of
Fig.\,2). These geodesics become spacelike in region II and
generically extend to spacelike infinity as $\lambda \to \infty$ (E4 in
Fig.\,3). This is true for all radial geodesics and part of nonradial
ones; however, nonradial geodesics with small E (near the minimum of
curve 3 at $\rho=0$) are of the type E1, quite similar to null E1
trajectories just discussed. A new type of tachyonic
motion, as compared with item 1a, is E6, shown in Fig.\, 3.
There are also circular unstable geodesics with $\eta=+1$ in region II,
with $t={\rm const}$ and $\rho=\rho_0 =-[m/(3m-2n)]^{1/(m-n)}$
(points in the Penrose diagram), such that $V(\rho_0) = V'(\rho_0) = 0$,
$E=0$. All these spacelike geodesics have full analogues with $\eta=-1$
in region I.
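The circular-geodesic values quoted above can be checked directly: for sample odd $m$, $n$ (an illustrative choice, not from the paper), the radius $\rho_0=-[m/(3m-2n)]^{1/(m-n)}$ indeed satisfies $V(\rho_0)=V'(\rho_0)=0$ at the $E=0$ level for a suitable $L^2>0$:

```python
import sympy as sp

rho = sp.symbols('rho', real=True)
m, n = 5, 1                  # sample odd integers with m - 2 >= n >= 0

P = rho**(m - n)
rho0 = -sp.Rational(m, 3*m - 2*n)**sp.Rational(1, m - n)  # = -(5/13)**(1/4)

# fix c = L^2/(4 k^2) so that V(rho0) = 0 (the E = 0 level)
c = -1/(((1 - P)**2*rho**m).subs(rho, rho0))
V = rho**(n + 2) + c*(1 - P)**2*rho**(m + n + 2)
dV = sp.diff(V, rho)

assert float(c) > 0                               # admissible: L^2 > 0
assert abs(float(V.subs(rho, rho0))) < 1e-12      # V(rho0) = 0
assert abs(float(dV.subs(rho, rho0))) < 1e-12     # V'(rho0) = 0
```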
The whole space-time possesses full symmetry under an exchange
between regions I and II, corresponding to rotations of the Penrose
diagram of Fig.\, 3 by odd (in addition to even) multiples of the right
angle, accompanied by a change of sign of $\eta$ so that geodesics
keep their timelike or spacelike nature.
\medskip\noindent
{\bf 2a:} $m$ even, $n$ odd, $(- + - \,-)$. The range of $\rho$ is now
$(-\infty, +1)$. Both parts of the effective potential become
negative and monotonic at $\rho < 0$, so that all geodesics
entering the horizon terminate at the spacelike singularity $\rho =
-\infty$, as in the Schwarzschild case. Thus the whole
qualitative picture of test particle motion, as well as the Penrose
diagram, coincide with the Schwarzschild ones.
\medskip\noindent
{\bf 2b:} $m$ odd, $n$ even, $(+ - + \,+)$. The
potential $V_\eta$ ($\eta=1$) is positive-definite (as in the extreme
Reissner--Nordstr\"{o}m case), as shown in Fig.\,4, so that all radial
timelike geodesics avoid the singularity,
crossing the horizon either twice (E6),
or indefinitely (E1). For
nonradial motion, the summed potential $V(\rho)\to -\infty$ as
$\rho\to -\infty$, however small $L^2$ may be; therefore all geodesics that
are timelike in region I (and become spacelike in region II) reach the
spacelike singularity%
\footnote
{Due to the metric signature change, in the Penrose diagram of Fig.\,4, just
as in Fig.\,3 for the case 1b, the time direction is vertical in regions I
and horizontal in regions II.}
$\rho=-\infty$ (levels and curves E4 and E7 in Fig.\,4), except
those with small $E$ depicted as E1.
\medskip
One can conclude that the unusual nature of the metric of type B1 BHs
creates some unusual types of particle motion. Some of them even exhibit
evident causality violation
--- such as an observer receiving messages from his or her own future ---
which can be avoided by assuming a more complicated space-time structure.
Another paradox, also related to causality, arises if we
follow in cases 1b or 2b the fate of a hypothetical (timelike) observer
who has crossed the horizon and finds him(her)self in a region II where
his (her) proper time is now spacelike as viewed by a resident
observer (whose timelike geodesic is entirely contained in region II),
and can be reversed by a simple coordinate change.
However, one may suspect that, the horizon area
being infinite, any extended body, and an observer in particular,
will have been infinitely stretched apart
and destroyed before actually crossing the horizon. To check this,
consider for instance a freely falling observer whose center of mass
follows a radial geodesic in the plane $\theta = \pi/2$. The separation
$n^{\alpha}$ between this geodesic and a neighbouring radial geodesic
varies according to the law of geodesic deviation
\begin{equation} \label{dev-geo}
\frac{D^2n^{\alpha}}{d\lambda^2} +
{R^{\alpha}}_{\beta\gamma\delta}u^{\beta}n^{\gamma}u^{\delta} = 0.
\end{equation}
For the four--velocity of the center of mass we have
$u^0 = E g^{00}$ and $u^0 u_0 + u^1 u_1=1$,
so that near the horizon
$u^{\mu} \simeq (Eg^{00}, E(-g^{00}g^{11})^{1/2}, 0, 0)$. We obtain for
the relative azimuthal acceleration near the horizon
\begin{eqnarray} \lal \label{tide}
\frac{1}{n^3}\frac{D^2 n^3}{d\lambda^2}
= ({R^{30}}_{30}u^0 u_0 + {R^{31}}_{31}u^1 u_1)
\nonumber\\ \lal \qquad
\simeq -E^2 g^{00}({R^{30}}_{30} - {R^{31}}_{31})
= E^2 R''/R,
\end{eqnarray}
and a similar equation for the deviation $n^2$ in the $\theta$
direction. Here $R^2 =|g_{22}|$, $R'' = d^2 R/d\rho^2$, and $\rho$ is a
radial coordinate
such that the Jordan--frame metric functions are related by
$g_{00}g_{11} = {\rm const}$; this condition is valid near the horizon for
our coordinate $\rho$ defined in (\ref{rho}).
The azimuthal geodesic deviation (\ref{tide}) which, due to its
structure, is insensitive to radial boosts and is thus equally applicable
to the static frame of reference and to the one comoving with the
infalling body, agrees with similar relations given by Horowitz and Ross
\cite{HR}. In the case of the Schwarzschild metric, $R''/R = 0$ and the
tidal force (given by the terms that we have neglected) is finite. In
the case of the examples discussed in \cite{HR}, $R''/R$ is negative and
large, i.e. geodesics converge and physical bodies are crushed as they
approach the horizon, as by a naked singularity, hence the name ``naked
black holes'' given to these spacetimes in \cite{HR}. On the contrary,
in the case of the cold black hole metric (\ref{global}), $R''/R \to
+\infty$ (as $\rho^{-2}$), i.e. geodesics diverge; the resulting
infinite tidal forces pull apart all extended objects, e.g. any kind of
clock, approaching the horizon. Only true elementary (pointlike)
particles, resulting from the destruction of the falling body, cross
such a horizon to become tachyons\footnote
{Our cold black holes are thus
counterexamples to the claim made in \cite{HR} that a smooth extension
is not possible when tidal forces diverge on the horizon.}.
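The divergence $R''/R \sim \rho^{-2}$ stated above follows from $R^2 = |g_{22}| \simeq 4k^2\rho^{-m}$ near the horizon; a short symbolic check (illustrative only):

```python
import sympy as sp

rho, k, m = sp.symbols('rho k m', positive=True)

# near the horizon P -> 0, so R^2 = |g_22| = 4k^2 rho^{-m}/(1-P)^2 ~ 4k^2 rho^{-m}
R = 2*k*rho**(-m/2)
tidal = sp.simplify(sp.diff(R, rho, 2)/R)

# R''/R = (m/2)(m/2 + 1)/rho^2 -> +infinity as rho -> 0: diverging geodesics
assert sp.simplify(tidal - (m/2)*(m/2 + 1)/rho**2) == 0
```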
\section{Stability}
Let us now study small (linear) spherically symmetric\ perturbations\ of the above static
solutions (or static regions of the BHs), i.e. consider, instead of
$\varphi(u)$,
\begin{equation}
\varphi(u,t)= \varphi(u)+ \delta\varphi(u,t)
\end{equation}
and similarly for the metric functions $\alpha,\beta,\gamma$, where
$\varphi (u)$, etc., are taken from the static solutions of \sect 2.
We are working in the Einstein conformal frame and use Eqs.\, (\ref{eq1}).
The consideration applies to the whole class of STT (\ref{L1}); its
different members can differ in boundary conditions, to be
discussed below.
In perturbation analysis there is the so-called gauge freedom, i.e. that
of choosing the frame of reference and the coordinates of the perturbed
space-time. The most frequently used frame for studying radial
perturbations has been that characterized by the technically convenient
condition $\delta\beta \equiv 0$ \cite{78,hod,bm}. It was applied, however, to
background configurations where the area function ${\,\rm e}^\beta$ was
monotonic in the whole range of $u$, or was itself used as a coordinate
in the static space-time. Unlike that, in our study the configurations
of utmost interest are type B black holes\ and wormholes, i.e. those having a
minimum of ${\,\rm e}^\beta$ (a throat) at some value of $u$ and infinite
${\,\rm e}^{\beta}$ at both ends of the $u$ range. At the throats, the
equality $\delta\beta\equiv 0$ is no longer a coordinate condition, but a
physical restriction, forcing the throat to be at rest. It can be
explicitly shown that the condition $\delta\beta\equiv 0$ creates poles in the
effective potential for perturbations, so that perturbations exist separately
on the two sides of the throat, i.e. the throat behaves like a wall.
For these reasons, we have to use another gauge,
and we choose it in the form
\begin{equation}
\delta\alpha = 2\delta\beta + \delta\gamma, \label{da}
\end{equation}
extending to perturbations\ the harmonic coordinate condition of the static
system. In this and only in this case the scalar equation (\ref{ephi})
for $\delta\varphi$ decouples from the other perturbation\ equations and reads
\begin{equation}
{\,\rm e}^{4\beta(u)}\delta\ddot\varphi - \delta\varphi''=0. \label{edf}
\end{equation}
Since the scalar field is the only dynamical degree of freedom, this
equation can be used as the master one, while others only express the
metric variables in terms of $\delta\varphi$, provided the whole set of field
equations is consistent. That this is indeed the case can be verified
directly: under the condition (\ref{da}), the four equations
(\ref{00})--(\ref{01}) for perturbations\ take the form
\begin{eqnarray} \lal \label{00p}
{\,\rm e}^{4\beta}(4\delta\beta +\delta\gamma)\ddot{} - \delta\gamma'' =0; \\ \lal
\label{11p}
{\,\rm e}^{4\beta}(2\delta\beta +\delta\gamma)\ddot{} -2\delta\beta'' -\delta\gamma'' \nonumber\\ \lal
\hspace{1cm}
-4(\beta'-\gamma')\delta\beta'+4\beta'\delta\gamma' = 2\varepsilon\varphi'\delta\varphi'; \\ \lal
\label{22p}
{\,\rm e}^{4\beta}\delta\ddot{\beta}
-\delta\beta'' + 2 {\,\rm e}^{2\beta+2\gamma}(\delta\beta+\delta\gamma) =0; \\ \lal
\label{01p}
\delta\dot{\beta}{}' - \beta'(\delta\beta + \delta\gamma)\dot{}
- \gamma'\delta\dot{\beta}
= -{\tst\frac{1}{2}} \varepsilon \varphi'\delta\dot{\varphi},
\end{eqnarray}
where $\alpha,\ \beta,\ \gamma,\ \varphi$ satisfy the static field
equations.
Eq.\,(\ref{01p}) may be integrated in $t$ and further
differentiated in $u$; the result turns out to be proportional to a
linear combination of the remaining Einstein equations
(\ref{00p})--(\ref{22p}). On the other
hand, the quantities $\delta\ddot\varphi$ and $\delta\varphi''$ can be calculated
from (\ref{00p})--(\ref{01p}), resulting in (\ref{edf}). Therefore we
have three independent equations for the three functions $\delta\varphi$, $\delta\beta$
and $\delta\gamma$.
The following stability analysis rests on Eq.\, (\ref{edf}).
The static nature of the background solution makes it possible to
separate the variables,
\begin{equation}
\delta\varphi = \psi(u) {\,\rm e}^{i\omega t}, \label{psi}
\end{equation}
and to reduce the stability problem to a boundary-value problem for
$\psi(u)$. Namely, if there exists a nontrivial solution to (\ref{edf})
with $\omega^2 <0$, satisfying some physically reasonable boundary
conditions at the ends of the range of $u$, then the static background
system is unstable since perturbations\ can exponentially grow with $t$.
Otherwise it is stable in the linear approximation.
Suppose $-\omega^2 = \Omega^2,\ \Omega > 0$. In what follows we
use two forms of the radial equation (\ref{edf}): the one directly
following from (\ref{psi}),
\begin{equation}
\psi'' -\Omega^2 {\,\rm e}^{4\beta(u)}\psi=0, \label{epsi}
\end{equation}
and the normal Liouville (Schr\"odinger-like) form
\begin{eqnarray} \lal
d^2 y/dx^2 - [\Omega^2+V(x)] y(x) =0, \nonumber\\ \lal \hspace{1cm}
V(x) = {\,\rm e}^{-4\beta}(\beta''-\beta'{}^2), \label{ey}
\end{eqnarray}
obtained from (\ref{epsi}) by the transformation
\begin{equation}
\psi(u) = y(x){\,\rm e}^{-\beta},\qquad \label{tx}
x = - \int {\,\rm e}^{2\beta(u)}du.
\end{equation}
Here, as before, a prime denotes $\partial/\partial u$.
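The equivalence of the two forms of the radial equation can be checked symbolically. The following sympy sketch (an independent check, not part of the original derivation) verifies that the substitution (\ref{tx}) maps (\ref{epsi}) into the Liouville form (\ref{ey}) for an arbitrary function $\beta(u)$:

```python
import sympy as sp

u, Omega = sp.symbols('u Omega')
beta = sp.Function('beta')(u)
psi = sp.Function('psi')(u)

# from Eq. (tx): psi = y e^{-beta}, so y along the curve x(u) is
Y = psi * sp.exp(beta)

# dx/du = -e^{2 beta}, hence d/dx = -e^{-2 beta} d/du
def d_dx(expr):
    return -sp.exp(-2*beta) * sp.diff(expr, u)

V = sp.exp(-4*beta) * (sp.diff(beta, u, 2) - sp.diff(beta, u)**2)

# Liouville form (ey) should equal e^{-3 beta} times the radial form (epsi)
diff = d_dx(d_dx(Y)) - (Omega**2 + V)*Y \
       - sp.exp(-3*beta)*(sp.diff(psi, u, 2) - Omega**2*sp.exp(4*beta)*psi)
assert sp.simplify(diff) == 0
print("transformation (tx) maps (epsi) into the Liouville form (ey)")
```

The check shows that the two equations differ only by the overall factor ${\rm e}^{-3\beta}$, so their solution spaces coincide.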
The boundary condition at spatial infinity ($u\to 0$, $x \simeq 1/u \to
+\infty$) is evident: $\delta\varphi\to 0$, or $\psi\to 0$.
For our metric (\ref{ds1}) the effective potential $V(x)$ has the
asymptotic form
\begin{equation}
V(x) \approx 2b/x^3, \hspace{1cm} {\rm as} \hspace{1cm} x\to +\infty,
\end{equation}
hence the general solutions to (\ref{ey}) and (\ref{epsi}) have the
asymptotic form
\begin{eqnarray}
y &\nhq\sim &\nhq c_1{\,\rm e}^{\Omega x} + c_2{\,\rm e}^{-\Omega x}
\qquad (x \rightarrow +\infty), \label{as+} \\
\psi &\nhq \label{aspsy}
\sim &\nhq u \bigl(c_1 {\,\rm e}^{\Omega/u} + c_2{\,\rm e}^{-\Omega/u}\bigr)
\qquad (u \rightarrow 0),
\end{eqnarray}
with arbitrary constants $c_1,\ c_2$. Our boundary condition leads to
$c_1=0$.
For $u\to u_{\max}$, where in many cases the background field $\varphi$
tends to infinity, a formulation of the boundary condition is not so
evident. Refs.\,\cite{hod,bm} and others, dealing with
minimally coupled or dilatonic scalar fields, used the minimal
requirement ensuring the validity of the perturbation\ scheme in the Einstein
frame:
\begin{equation}
|\delta\varphi/\varphi| < \infty. \label{weak}
\end{equation}
In STT, where Jordan-frame and Einstein-frame metrics are related by
$g^{\rm J}_{\mu\nu} = (1/\phi)g^{\rm E}_{\mu\nu}$,
it seems reasonable to require that the perturbed
conformal factor $1/\phi$ behave no worse than the unperturbed one,
i.e.
\begin{equation}
|\delta\phi/\phi| < \infty. \label{strong}
\end{equation}
An explicit form of this requirement depends on the specific STT and
can differ from (\ref{weak}): for example, in the BD theory, where
$\phi$ and $\varphi$ are connected by (\ref{phi-bd}), the requirement
(\ref{strong}) leads to $|\delta\varphi| <\infty$. We will refer to (\ref{weak})
and (\ref{strong}) as the ``weak'' and ``strong'' boundary conditions,
respectively. For configurations with regular $\phi$ and $\varphi$ at
$u\to u_{\max}$ both conditions give $|\delta\varphi|<\infty$.
Let us now discuss different cases of the STT solutions.
We will suppose that the scalar field $\phi$ is regular for
$0<u<u_{\max}$, so that the conformal factor $\phi^{-1}$ in (\ref{ds1})
does not affect the range of the $u$ coordinate.
\medskip\noindent
{\bf 1.} $\varepsilon=+1,\ k>0$. This is the singular solution of normal STT.
As $u \to +\infty$, $\beta \sim (b-k)u \to -\infty$, so that $x$ tends to
a finite limit and with no loss of
generality one can put $x\to 0$. The effective potential $V(x)$ then has a
negative double pole, $V \sim -1/(4x^2)$, and the asymptotic form of the
general solution to (\ref{ey}) leads to
\begin{equation}
\psi(u) \approx y(x)/\sqrt{x} \approx (c_3 + c_4\ln x) \quad
(x \to 0). \label{y1}
\end{equation}
The weak boundary condition leads to the requirement
$|\delta\varphi/\varphi| \approx
|y|/(\sqrt{x}|\ln x|) < \infty$, met by the general solution (\ref{y1})
and consequently by its special solution that joins the allowed
case ($c_1=0$) of the solution (\ref{as+}) at the spatial asymptotic.
We then conclude that the static field configuration is unstable,
in agreement with the previous work \cite{hod}.
As for the strong boundary condition (\ref{strong}),
probably more appropriate in STT, its explicit form
varies from theory to theory, therefore a general conclusion cannot be
made. In the special case of the BD theory the condition (\ref{strong})
means $|\psi|< \infty$ as $u\to +\infty$. Such an asymptotic behavior
is forbidden by Eq.\, (\ref{epsi}), according to which $\psi''/\psi > 0$,
i.e. the perturbation $\psi(u)$ is convex and so cannot be bounded as $u
\to \infty$ for an initial value $\psi(0) = 0$ ($c_1 = 0$). We thus
conclude that the static system is stable.
We see that in this singular case the particular choice of a boundary
condition is crucial for the stability conclusion. In the case of
GR with a minimally coupled scalar field \cite{hod}
there is no reason to ``strengthen" the weak condition that leads to
the instability. In the BD case the strong condition seems more
reasonable, so that the BD singular solution seems stable. For any
other STT the situation must be considered separately.
\medskip\noindent
{\bf 2.} $\varepsilon=-1,\ k>0$. This case includes some singular
solutions and cold black holes, as is exemplified for the BD theory in
(\ref{ds+})--(\ref{range+}).
As $u\to +\infty$, $\beta \sim (b-k)u \to +\infty$, so that $x\to
-\infty$ and $V(x) \approx -1/(4x^2)\to 0$. The general solution to Eq.\,
(\ref{ey}) again has the asymptotic form (\ref{as+}) for $x \to
-\infty$. The weak condition (\ref{weak}) leads, as in the previous
case, to the requirement $|y|/(\sqrt{|x|}\ln|x|) <\infty$, and, applied
to (\ref{as+}), to $c_2=0$. This means that the function $\psi$ must
tend to zero for both $u\to 0$ and $u\to \infty$, which is impossible
due to $\psi''/\psi >0$. Thus the static system is stable. Obviously the
more restrictive strong condition (\ref{strong}) can only lead to the
same conclusion.
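The convexity argument used in cases 1 and 2 can be illustrated numerically. The sketch below integrates Eq.\,(\ref{epsi}) for a sample background $\beta \sim (b-k)u$ with arbitrary illustrative values ($b-k=1$, $\Omega=0.5$, not parameters of any specific solution of the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative sample values only, not from any specific STT solution
b_minus_k, Omega = 1.0, 0.5

def rhs(u, s):
    psi, dpsi = s
    # Eq. (epsi): psi'' = Omega^2 e^{4 beta(u)} psi, with beta = (b-k) u
    return [dpsi, Omega**2 * np.exp(4.0*b_minus_k*u) * psi]

# the allowed branch (c1 = 0) vanishes at spatial infinity, so start near 0
sol = solve_ivp(rhs, (1e-3, 1.5), [0.0, 1.0], rtol=1e-10, atol=1e-12)
psi = sol.y[0]

# psi''/psi > 0: once psi leaves zero it is convex, grows monotonically
# and can never return to zero at the other end of the u range
assert np.all(np.diff(psi) > 0)
print(psi[-1])
```

The monotone growth seen here is the numerical counterpart of the statement that a convex $\psi$ with $\psi(0)=0$ cannot satisfy a second zero (or boundedness) condition.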
\medskip\noindent
{\bf 3.} $\varepsilon=-1,\ k=0$. There are again singular solutions and cold
black holes. As $u \to +\infty$, $x\to -\infty$ and again the potential $V(x)
\approx -1/(4x^2) \to 0$, leading to the same conclusion as in case 2.
\medskip\noindent
{\bf 4.} $\varepsilon=-1,\ k<0$. In the generic case the solution describes a
wormhole, and in the exceptional case (\ref{bh-}) there is a cold
black hole\ with a finite horizon area. In all such cases, as $u\to u_{\max} =
\pi/|k|$, one has $x\to -\infty$ and $V \sim 1/|x|^3 \to 0$, so that the
stability is concluded just as in cases 2 and 3.
Thus, generically, the scalar-vacuum spherically symmetric\ solutions of anomalous STT are
linearly stable against spherically symmetric\ perturbations. Excluded are only the cases when the
field $\phi$ behaves in a singular way inside the coordinate range $0 <u
<u_{\max}$; such cases should be studied individually.
\Acknow
{This work was supported in part by CNPq (Brazil) and CAPES (Brazil).}
\small
hep-th/9804077
\section{Introduction}
\label{int}
I wish to present arguments favoring a scenario in
which gravity can arise as a particular effect
in the framework of the conventional
Yang-Mills gauge theory \cite{YM}
formulated in flat space-time.
The idea was first reported in \cite{IAP}; this
paper presents it in detail.
We will suppose that the Lagrangian of the theory
describes the usual degrees of freedom
of gauge theory, i.e.
gauge bosons interacting with
fermions and scalars,
while there are no geometrical degrees of freedom
on the Lagrangian level.
In particular, the Lagrangian does not contain gravitons.
Our goal is to show that all
geometrical objects
necessary for the gravitational phenomenon
can originate from the gauge degrees of freedom
if a particular nontrivial vacuum
state, which we will call the vacuum with polarized instantons,
develops in the $SO(4)$ gauge theory.
Let us first describe
the main idea in general terms.
To do this let us imagine that
in some conventional gauge theory
there develops a particular
nontrivial vacuum state.
Its nature is discussed below in detail.
At the moment let us only assume that
this nontrivial vacuum has a very strong impact on
the properties of excitations
of the gauge field.
Instead of the usual
spin-1 gauge boson there appears an entirely new
massless spin-2 excitation.
Certainly this is a very peculiar property,
but if it exists, it provides a possibility to identify
this massless mode with the graviton.
Furthermore one can hope to evaluate all manifestations
of gravity from an effective theory which should describe
propagation of this low-energy mode.
In the classical limit
one can hope also that this effective theory should reproduce the
Einstein equations of general relativity.
Developing this line of reasoning
one should keep in mind a sufficiently long list of important
relevant questions.
The most urgent ones are listed below.\\
1. In which gauge group should one look for the effect?\\
2. What is the origin of the necessary nontrivial
vacuum; roughly speaking, from which fields
should it be constructed?\\
3. What is the nature of the order parameter
which governs the nontrivial phase? In particular,
which symmetry should it possess?\\
4. What is the nature of the low-energy excitations in this vacuum?
As said above, this excitation should be
the massless spin-2 mode.\\
5. What is the effective low-energy theory governing classical
propagation of this massless spin-2 mode?\\
6. How important are the
quantum corrections? One should hope
that they neither confine the spin-2 mode
nor supply it with a mass in the infrared region.\\
7. What is the origin of the phase transition
into the necessary vacuum state?\\
8. Which phase transitions separate the
necessary vacuum from conventional vacua of gauge theory?
All these questions center on two
major problems.
One of them is the existence of the
necessary vacuum.
Possible models in which it can arise
have been considered
in recent Refs.\cite{det,PRdet}.
The other problem is
physical manifestations of the nontrivial vacuum,
granted it exists. This latter problem is the focus of this paper.
Several reasons make this study necessary.
Firstly, the physical
nature of the excitations proves to be very interesting.
Secondly, one should know precisely the properties of the
excitations in order to construct and tune a model in which
the necessary ground state appears.
Thirdly, even in such a long-known phase as the QCD vacuum,
the properties of the vacuum state are not fully understood
up to now. Instantons are known to play an important
role there, but verification of this fact remains
an interesting as well as nontrivial problem \cite{shu}.
This paper proposes to
consider a new phase in gauge theory
in which instantons play an important role;
it may also prove to be quite complicated.
To begin such a study one should clearly
understand its purpose.
Having these arguments in mind I address in this paper
some questions from the above list
in detail, while only qualitatively discussing others.
The idea discussed here suggests
a novel approach to the old problem of quantizing
gravity, with several important advantages.
One of them is renormalizability, which is guaranteed
from the very beginning
because on the basic Lagrangian level the theory
is the usual renormalizable gauge theory.
It is also important that the basic
idea is simple, being formulated in terms
of pure field theory: it needs a single
tool, the nontrivial condensate, to explain a
single phenomenon, gravity.
String theory \cite{GrS}, which
for a long time has been considered one of the most
advanced candidates for a description of quantum gravity,
offers a much more complicated approach.
One of the popular modern developments in
quantum gravity originated in \cite{ash},
where it was argued that the variables of gauge theory
can be used to describe the geometrical objects
of Riemannian geometry.
In this approach it is assumed that
the geometrical degrees of freedom do exist on the basic Lagrangian level.
In contrast, in this paper
we suppose that geometry does not manifest
itself on the Lagrangian level,
providing only an effective description
of some particular low-energy degrees of freedom of the gauge field.
Let us now briefly describe the answers
which this paper proposes to some of the questions listed above.
First, the gauge group has to be $SO(4)$, or a bigger one.
Furthermore, the phase considered may be described
in terms of nontrivial topological excitations of the gauge
field. In this sense the condensate
is composed of the $SO(4)$ gauge field.
The necessary phase
can be described in terms of
the BPST instanton \cite{bpst}.
An instanton is known to possess eight degrees
of freedom. Four of them give its position,
one is its radius. The remaining three
describe its orientation
which plays a crucial
role in our consideration.
In the usual phases of gauge theory, for example in the QCD vacuum,
orientations of instantons are arbitrary.
In this paper we discuss a phase in which
instantons have a preferred direction
of orientation. This means that a mean value of
orientation is nonzero.
A possible way to visualize
this vacuum, to find a simple physical analogy,
is to compare it with the usual ferromagnetic
or antiferromagnetic phases in condensed matter physics
in which spins of atoms or electrons have preferred orientation.
The instantons constituting
this vacuum will be called ``polarized instantons''
or ``the condensate of polarized instantons''.
The vacuum itself will be referred to
as ``the polarized vacuum''.
The density of the condensate of polarized instantons
is described by a length parameter which depends on the radii
and separations of the polarized instantons.
This length parameter proves to be equal to the Planck radius,
which is only possible if
the radii of the considered instantons are
comparable with the Planck radius.
The existence of this length parameter permits
the Newton gravitational constant to appear in the theory
as the inverse density of the condensate.
Thus the main gravitational parameter absent in the initial
Lagrangian is introduced into the theory by
the nontrivial vacuum state.
In Section \ref{res} we will return to the questions listed above
and discuss them in more detail.
It is necessary to mention that
the existence of a state with polarized instantons
does not come into contradiction
with a general principle of gauge invariance
which in the context considered is
known as the Elitzur theorem \cite{eli}, see also
Refs.\cite{pol,ham}.
The theorem forbids the existence of any vacuum state
in which local gauge invariance
is broken spontaneously by a
non-gauge-invariant mean value of a field.
The orientation of an instanton is a gauge invariant
parameter. Therefore
polarization of instantons does not spontaneously break
gauge invariance, and the vacuum state
with polarized instantons is not forbidden.
We will mainly restrict our discussion to
the dilute gas approximation for instantons, mainly to secure
simplicity of consideration.
In this approximation the
gases of instantons and antiinstantons can coexist.
We will suppose that both gases of instantons and antiinstantons
are polarized.
The mean values of the
matrices which describe orientations of polarized instantons and
antiinstantons play the role of the order parameter
for the considered phase.
It will be argued that this order parameter
should have a particular symmetry.
Namely,
the orientations of all polarized topological excitations should be described
by one $6\times 6$ matrix which belongs to $SO(3,3)$.
The main result of this paper
can be formulated as the following statement.
Suppose that instantons and antiinstantons
in the vacuum state of the $SO(4)$ gauge theory
have preferred orientations which are described
by a matrix from the $SO(3,3)$ group.
Then long-wave excitations above this vacuum
are the spin-2 massless particles which on the classical
level are described by the Hilbert action
and the Einstein equations of general relativity.
\section{Instantons in the external field}
\label{pair}
Consider the Euclidean formulation of the $SO(4)$ gauge theory.
The gauge algebra of the $SO(4)$ gauge group consists of two $su(2)$
gauge subalgebras, $so(4) = su(2)+su(2)$.
Instantons and antiinstantons can belong to either one
of these two $su(2)$ gauge subalgebras.
It is convenient to choose the generators for one
$su(2)$ gauge subalgebra to be $(- 1/2)\eta^{aij}$ and the
generators for the other one
to be $(-1/2) \bar \eta^{aij}$.
To distinguish between these two subalgebras we will refer to them as
$su(2)\eta$ and $su(2) \bar \eta$.
Symbols $\eta^{aij}, \bar \eta^{aij}$
are the usual 't Hooft symbols, $a =1,2,3,~ i,j = 1, \cdots,4$, which
give a full set of $4\times 4$ matrices antisymmetric in $ij$ \cite{tH}.
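Since the 't Hooft symbols carry the whole algebraic setup used below, it may help to verify their standard properties numerically. The following numpy sketch (an independent check, not part of the paper) constructs $\eta$, $\bar\eta$ and confirms antisymmetry, (anti-)self-duality, the $su(2)$ commutation relations of the generators $-\frac{1}{2}\eta^a$, and that the two subalgebras commute:

```python
import numpy as np
from itertools import permutations

def levi_civita(n):
    e = np.zeros((n,)*n)
    for p in permutations(range(n)):
        e[p] = np.linalg.det(np.eye(n)[list(p)])
    return e

eps3, eps4 = levi_civita(3), levi_civita(4)

# 't Hooft symbols (0-based indices: the value "4" of the text is index 3)
eta, etab = np.zeros((3, 4, 4)), np.zeros((3, 4, 4))
for a in range(3):
    eta[a][:3, :3] = etab[a][:3, :3] = eps3[a]
    eta[a][a, 3], eta[a][3, a] = 1.0, -1.0
    etab[a][a, 3], etab[a][3, a] = -1.0, 1.0

# antisymmetric in the last two indices
assert all(np.allclose(m, -m.T) for m in list(eta) + list(etab))

# eta is self-dual, eta-bar is anti-self-dual
dual = 0.5*np.einsum('mnrs,ars->amn', eps4, eta)
dualb = 0.5*np.einsum('mnrs,ars->amn', eps4, etab)
assert np.allclose(dual, eta) and np.allclose(dualb, -etab)

# generators -eta/2 and -etabar/2 each close into su(2) ...
t, tb = -0.5*eta, -0.5*etab
for a in range(3):
    for b in range(3):
        assert np.allclose(t[a]@t[b] - t[b]@t[a],
                           np.einsum('c,cmn->mn', eps3[a, b], t))
        # ... and the two subalgebras commute: so(4) = su(2) + su(2)
        assert np.allclose(eta[a]@etab[b], etab[b]@eta[a])
print("'t Hooft symbol identities verified")
```

The commuting of the $\eta$ and $\bar\eta$ blocks is what makes the decomposition (\ref{Af}), (\ref{F}) into the two $su(2)$ sectors consistent.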
In this notation
the gauge potential and the strength of the gauge field are
\begin{eqnarray} \label{Af}
A^{ij}_{\mu}&=&-\frac{1}{2} (A^a_{\mu} \eta^{aij} +
\bar A^a_{\mu} \bar \eta^{aij}), \\ \label{F}
F^{ij}_{\mu \nu}&=&-\frac{1}{2} (F^a_{\mu \nu} \eta^{aij} +
\bar F^a_{\mu \nu} \bar \eta^{aij}),
\end{eqnarray}
where $A^a_{\mu}$ and $F^a_{\mu \nu}$ belong to $su(2) \eta$ and
$\bar A^a_\mu,\bar F^a_{\mu\nu}$ belong to $su(2) \bar \eta $.
The Yang-Mills action reads
\begin{equation}\label{yma}
S= \frac{1}{4g^2}\int F^{ij}_{\mu\nu}(x)F^{ij}_{\mu\nu}(x) \, d^4x.
\end{equation}
The Latin indices $i,j=1,\cdots,4$ are
isotopic indices, while the Greek indices $\mu,\nu=1,\cdots,4$
refer to the Euclidean coordinate space.
Remember that we consider the usual
gauge field theory in flat space-time.
For the chosen normalization of generators
the relation between the gauge potential and the field strength reads
\begin{equation} \label{FA}
F_{\mu\nu}^{ij}=\partial_\mu A_{\nu}^{ij}-
\partial_\nu A_{\mu}^{ij}
+A_{\mu}^{ik}A_{\nu}^{kj}- A_{\nu}^{ik}A_{\mu}^{kj}.
\end{equation}
For our purposes it is important to consider
an interaction of nontrivial topological excitations,
instantons and antiinstantons
with an external gauge field which has trivial topological structure.
Consider first a single instanton in an
external gauge field. It is sufficient to assume
that this external field is weak and smooth, i.e.
it is weaker than the field of the instanton inside the instanton
and varies much more smoothly than the instanton field which
sharply decreases outside the instanton radius.
The problem thus formulated was first considered
by Callan, Dashen and Gross in \cite{CDG}, where
it was shown
that the interaction of an instanton
with the external field is described by an effective action
\begin{equation}\label{cdg}
\Delta S =
\frac{2 \pi^2 \rho^2}{g^2}\bar \eta^{a\mu\nu} D^{ab}F^b_{\mu\nu}(x_0).
\end{equation}
Here $\rho$ and $x_0$ are the radius and position of the instanton.
The matrix $D^{ab} \in SO(3)$ describes the orientation of the instanton
in the $su(2)$ gauge subalgebra where the instanton
belongs (suppose for example that it is $su(2)\eta$).
Its definition is given in Appendix A, see (\ref{CU}).
$F^b_{\mu\nu}(x)$
is the gauge field in the subalgebra where the instanton belongs.
This field has to be taken in the singular gauge \cite{tH}.
The interaction of an antiinstanton with an external field
is described similarly. The only distinction
is that it produces the corresponding term with the 't Hooft symbol
$\eta^{a\mu\nu}$ instead of $\bar\eta^{a\mu\nu}$ which stands in
(\ref{cdg}). Notice that we use
$\eta^{a\mu\nu}, \bar \eta^{a\mu\nu}$ as generators of rotations in
the coordinate space, while
$\eta^{aij},\bar \eta^{aij}$ generate rotations in the isotopic space.
These two sets of symbols are defined in different spaces,
but numerically they are identical of course.
It is worth stressing
several interesting and important properties of the action
(\ref{cdg}). First of all,
it has a very unusual structure, being
linear in the external field. At first sight
this fact looks like a paradox because
an instanton provides a minimum of the action and
one could anticipate only terms quadratic in the weak field to appear.
The paradox is resolved
by the fact that the linear term arises from the region of large separations
from the instanton,
where the external field
exceeds the instanton field, resulting in a breakdown of
naive perturbation theory.
This question is discussed in detail in Appendix \ref{A}
which develops the approach of \cite{CDG}
to cover the case of several overlapping instantons.
The action (\ref{cdg}) is derived
starting from the usual Yang-Mills action
in which the external field is considered as a perturbation.
The contribution to the
action linear in the external field
is found as an integral over a sphere which is well
separated from instantons.
An alternative way to derive the action (\ref{cdg})
was suggested in \cite{VZNS} where an
instanton was considered as an effective source for
gauge bosons. In this approach the linear
term in the action arises due to the large probability
of creating small-momenta bosons by an instanton.
Furthermore,
it is important to stress
that the orientation of an instanton is
a gauge invariant parameter. A convenient way to
define the orientation is given by the general $n$-instanton
ADHM solution of Ref.\cite{ADHM}, for a review see
\cite{pras}. This solution is briefly discussed
in Appendix A where Eq.(\ref{CU}) defines the
matrix of the instanton orientation $D^{ab}$ in terms of
variables of the ADHM construction.
The fact that the orientation of an instanton
is a gauge invariant parameter is closely connected
with gauge invariance of the action (\ref{cdg}). This action
is derived from the usual Yang-Mills action,
from which it inherits gauge invariance. The invariance is
guaranteed by the mentioned explicit requirement that the
field $F^a_{\mu\nu}(x_0)$
has to be considered in the singular gauge.
This fixing of the gauge indicates that
there is no spontaneous breaking of the local symmetry.
The last comment addresses the behaviour of several
instantons.
It is found in Appendix \ref{A} that they
interact with the external field as individual
objects. This statement is trivial when the dilute
gas approximation is valid, but surprisingly
it holds for any separation between instantons,
even if they strongly overlap.
In this paper we will not push this argument further,
restricting our consideration to the dilute gas
approximation. Still, the independent interaction
of instantons with the external field gives
hope that the final conclusions of the paper
may be more reliable than the dilute gas approximation
used to derive them.
The discussion
of the action (\ref{cdg}) given above
permits us now to
address the following important
problem. Suppose that there
is a number of instantons and antiinstantons which
belong to either $su(2)\eta$ or $su(2)\bar\eta$ gauge
algebras and satisfy the dilute gas approximation.
Suppose also that there is an
additional gauge field $F^{ij}_{\mu\nu}(x)$
which has a trivial topological
structure, is weaker than the fields of the instantons
and varies smoothly
for the distances which characterize separations and radiuses
of instantons.
In this situation we immediately deduce
that the influence of the external field
on instantons and antiinstantons
can be described by
the effective action which is equal to the sum
of terms of the type of (\ref{cdg}) which describe
independent interaction of instantons and antiinstantons
with the field.
Using definition (\ref{F}) one can present
the corresponding action in the following form
\begin{equation} \label{4gas}
\Delta S = -\frac{\pi^2}{g^2} \sum_k
\eta^{A\mu\nu}\eta^{Bij}
T^{AB}_k \rho^2_k F^{ij}_{\mu\nu}(x_k).
\end{equation}
Here an index $k$ runs over all available instantons and antiinstantons
which have radii $\rho_k$ and coordinates $x_k$.
To simplify notation
the 't Hooft symbols are enumerated
as 6-vectors $\eta^A = (\eta^a,\bar \eta^b),
~A=1,\cdots,6;~
a,b=1,2,3$. More precisely this definition means that
\begin{eqnarray}\label{etaA}
\eta^{Aij} &=& \eta^{aij},~~
\eta^{A\mu\nu} = \eta^{a\mu\nu},~~~{\rm if}~~A=a=1,2,3,
\\ \nonumber
\eta^{Aij} &=& \bar \eta^{bij},~~
\eta^{A\mu\nu} = \bar \eta^{b\mu\nu},
~~~ {\rm if }~~A-3=b=1,2,3.
\end{eqnarray}
To describe the orientation of every (anti)instanton
it is convenient to introduce the $6\times 6$ matrix $T^{AB}_k,~~
A,B=1,\cdots,6$
\begin{equation} \label{CAB}
T_k^{AB} \equiv T_k =
\left( \begin{array}{cc} C_k & D_k \\
\bar D_k & \bar C_k \end{array} \right)
\end{equation}
as a set of four $3\times 3$ matrices $C_k,\bar C_k,D_k,\bar D_k$.
For any given $k$-th (anti)instanton only one of these four matrices
is essential, while the other three are
equal to zero. This nonzero matrix belongs to $SO(3)$ and describes the
orientation of the $k$-th topological object in the gauge algebra to which
it belongs.
For example, if the $k$-th object is
an antiinstanton in the $su(2)\eta$ gauge subalgebra,
then we assume that $C_k\in SO(3)$ describes its orientation
in the $su(2)\eta$ while $\bar C_k,D_k,\bar D_k =0$.
Similarly, $D_k$
describes the orientation if the $k$-th object is
an instanton $\in su(2)\eta$ ($C_k,\bar C_k,\bar D_k =0$),
$\bar D_k$ describes the orientation if the
antiinstanton $\in su(2)\bar \eta$ is considered
($C_k,\bar C_k, D_k =0$), and $\bar C_k$ describes the
orientation if the instanton $\in su(2)\bar\eta$ is considered
($C_k,D_k,\bar D_k =0$).
These definitions can be presented in a short symbolic form
\begin{equation}\label{T}
T_k =
\left( \begin{array}{cc} {\rm antiinstanton} \in su(2)\eta &
{\rm instanton} \in su(2)\eta \\
{\rm antiinstanton} \in su(2)\bar \eta & {\rm instanton}
\in su(2)\bar\eta \end{array} \right).
\end{equation}
Using definitions (\ref{etaA}),(\ref{CAB})
as well as Eq.(\ref{F}) it is easy to verify that
every term in Eq.(\ref{4gas})
describes an interaction of some (anti)instanton with
the external field in accordance with (\ref{cdg}).
Let us address now the question of the behavior of the
ensemble of instantons in the vacuum state assuming that
there exists a weak and smooth topologically trivial gauge field
$F^{ij}_{\mu\nu}(x)$. One can derive the
effective action which describes the
interaction of the vacuum with this field
by averaging Eq.(\ref{4gas}) over
short-range quantum fluctuations in the vacuum.
The result can be presented as the effective action
\begin{equation} \label{f/4}
\Delta S =
-\int \eta^{A\mu\nu}\eta^{Bij}{\cal M}^{AB}(x)F^{ij}_{\mu\nu}(x)
\,d^4 x,
\end{equation}
where the matrix ${\cal M}^{AB}(x)$ is
\begin{equation} \label{M^AB}
{\cal M}^{AB}(x) =
\pi^2 \langle \,
\frac{1}{g^2}
\rho^2 T^{AB} n(\rho, T,x)\,\rangle.
\end{equation}
The brackets $\langle\, \rangle$ here describe
averaging over quantum fluctuations whose
wavelength is shorter than a typical distance
describing variation of the external field.
In the dilute gas approximation for instantons, these fluctuations
should include averaging over the positions,
radii and orientations of the instantons.
In Eq.(\ref{M^AB}) $ n(\rho, T,x)$ is the concentration
of (anti)instantons which have
the radius $\rho$ and the orientation described
by the matrix $T\equiv T^{AB}$. In the usual vacuum
states the concentration of instantons does not depend on
the orientation, $n(\rho, T,x) \equiv n(\rho, x)$. In that case
an averaging over orientations gives the trivial result
${\cal M}^{AB}(x) \equiv 0$, as mentioned in \cite{VZNS}.
The main goal of this paper is to investigate
what happens if the concentration $n(\rho, T,x) $ does depend on the
orientation $T=T^{AB}$ providing the nonzero value for the matrix
${\cal M}^{AB}(x)$.
From (\ref{M^AB}) one can anticipate
that the consequences should be interesting because
an unusual term linear in the gauge field appears in the
action. To clarify this point
we must be more specific about the properties of
the matrix ${\cal M}^{AB}(x)$, assuming that it satisfies
the following three conditions.
First we will assume that
it is a nondegenerate matrix with positive determinant.
It is convenient to present this assumption as a statement
that the matrix ${\cal M}(x)$ is proportional to the unimodular
$6\times 6$ matrix $M(x)$
\begin{equation} \label{M=fM}
{\cal M}^{AB}(x) = \frac{1}{4}\,f\, M^{AB}(x),~~~~\det M(x) =1.
\end{equation}
Second, we suppose that $f$ introduced above is a positive constant
\begin{equation}\label{f>0}
f = const > 0.
\end{equation}
This means that the determinant $\det {\cal M}(x) = (f/4)^6$
is a constant.
Third, we postulate that the matrix $M^{AB}(x)$ possesses a
particular symmetry property, namely that it belongs
to the $SO(3,3)$ group,
\begin{equation} \label{so33}
M^{AB}(x)\in SO_+(3,3).
\end{equation}
This means that $M=M^{AB}(x)$
should satisfy the identity
\begin{equation} \label{MSM}
M \Sigma M^T=
\Sigma.
\end{equation}
Here the matrix $\Sigma^{AB}$ is defined as
\begin{equation}\label{Sig}
\Sigma =
\left( \begin{array}{cr} \hat 1 & \hat 0 \\ \hat 0 & -\hat 1
\end{array} \right),
\end{equation}
where the numbers with hats represent $3\times 3$ diagonal
matrices.
The notation $SO_+(3,3)$ is used in (\ref{so33})
to distinguish the subset of matrices
$M\in SO(3,3)$
which can be continuously connected to the unit matrix ${\bf 1}$
within $SO(3,3)$,
see Appendix \ref{B}.
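As a minimal numerical illustration of the defining relation (\ref{MSM}) (with an arbitrary sample parameter, not tied to any physical value), a hyperbolic rotation mixing one ``$+$'' and one ``$-$'' direction of $\Sigma$ satisfies (\ref{MSM}), is unimodular, and is continuously connected to the unit matrix, hence lies in $SO_+(3,3)$:

```python
import numpy as np

Sigma = np.diag([1., 1., 1., -1., -1., -1.])

def boost(lam, i=0, j=3):
    """One-parameter hyperbolic rotation mixing a (+) and a (-) direction;
    lam -> 0 gives the identity, so it lies in the connected component."""
    M = np.eye(6)
    M[i, i] = M[j, j] = np.cosh(lam)
    M[i, j] = M[j, i] = np.sinh(lam)
    return M

M = boost(0.7)                                 # sample parameter value
assert np.allclose(M @ Sigma @ M.T, Sigma)     # the relation (MSM)
assert np.isclose(np.linalg.det(M), 1.0)       # unimodular
assert np.allclose(boost(0.0), np.eye(6))      # connected to the identity
print("M satisfies the SO(3,3) defining relation")
```

Products of such boosts together with ordinary $SO(3)\times SO(3)$ rotations generate all of $SO_+(3,3)$, so this one-parameter family already probes the structure assumed in (\ref{so33}).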
Conditions (\ref{M=fM}),(\ref{f>0}),(\ref{so33})
are to be considered as
the main {\it assumptions} made in this paper
about properties of
the vacuum state of the $SO(4)$ gauge theory.
Now we can clarify the meaning of
the term ``polarization of instantons'' which was introduced
above intuitively.
We say that instantons in the vacuum state
are polarized if the matrix ${\cal M}^{AB}$
which
defines the mean values
of orientations of (anti)instantons in (\ref{M^AB}) is nonzero.
We will say that the polarization of instantons
has the $SO(3,3)$ symmetry,
or equivalently there is the $SO(3,3)$
polarization of instantons, if
Eqs.(\ref{M=fM}),(\ref{f>0}),(\ref{so33})
are valid.
Let us mention once more that the orientation of an instanton
is a gauge-invariant parameter. Therefore its
nonzero mean value does not contradict
the Elitzur theorem \cite{eli}.
In what follows we postulate
that there is the $SO(3,3)$ polarization of instantons in the vacuum.
At the moment this assumption may look rather bizarre,
but the later development will show its advantages.
Firstly,
in the next Section \ref{eineq} we will verify
that it makes excitations above the vacuum identical
to gravitational waves.
This is exactly what we are looking for.
Then in Section \ref{res} we discuss
what is known about a way to
justify assumptions
(\ref{M=fM}),(\ref{f>0}),(\ref{so33})
in the framework of gauge theory.
In the usual phases of gauge theory
the mean values of the matrices describing the orientations
of (anti)instantons vanish. Therefore one can interpret
Eqs.(\ref{M=fM}),(\ref{f>0}),(\ref{so33})
as the statement that there exists a new nontrivial
phase of the $SO(4)$ gauge theory. The matrix
$M(x)$ plays the role of the order parameter for this phase.
The (anti)instantons which contribute to the
nontrivial value of the matrix $T^{AB}$ in (\ref{M^AB}) can
be regarded as a specific condensate.
The constant $f$ characterizes the density of this condensate.
Its dimension, cm$^{-2}$, supplies the theory
with a dimensional parameter which later on
will be related to
the Newton gravitational constant,
see Eq.(\ref{newt}).
In order to examine the consequences of assumptions
(\ref{M=fM}),(\ref{f>0}),(\ref{so33}) it is very
instructive to rewrite (\ref{f/4}) in another form.
For this purpose we make use of
a relation between matrices belonging
to the $SO(3,3)$ group and matrices
belonging to the $SL(4)$ group.
It states that
for any $M^{AB} \in SO_+(3,3),~A,B=1,\cdots,6$
there exists some matrix $H^{i\mu}\in SL(4),~i,\mu=1,\cdots,4$ satisfying
\begin{equation} \label{hom}
H^{i\mu}H^{j\nu}-H^{i\nu}H^{j\mu} =
\frac{1}{2} \eta^{A\mu\nu}\eta^{Bij}{M}^{AB}.
\end{equation}
This matrix is defined uniquely up
to a sign factor, $\pm H^{i\mu}$.
The converse statement
is also true: for any given $H^{i\mu}\in SL(4)$
Eq.(\ref{hom}) defines
the matrix $M^{AB}$ which belongs to $SO_+(3,3)$.
Moreover, it can be shown that (\ref{hom})
gives an isomorphism $SL(4)/(\pm {\bf 1})\equiv SO_+(3,3)\,$.
All these statements are verified in Appendix \ref{B}.
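A quick consistency check of this correspondence, added here for the reader's convenience, is the matching of the group dimensions,
\[
\dim SL(4) = 4^2 - 1 = 15 = \frac{6\cdot 5}{2} = \dim SO(3,3),
\]
while the sign ambiguity $\pm H^{i\mu}$ reflects the fact that the left-hand side of (\ref{hom}) is quadratic in $H$, so that $H$ and $-H$ define the same $M$.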
Applying (\ref{hom}) at each point to $M^{AB}=M^{AB}(x)$ one finds
$H^{i\mu}=H^{i\mu}(x)\in SL(4)$.
This new matrix is completely determined by the matrix $M^{AB}(x)$
and hence
can be considered as another representation of the
order parameter for polarized instantons.
Substituting $H^{i\mu}(x)$ defined by
(\ref{hom}) into (\ref{M=fM}) one finds that
the action (\ref{f/4})
can be rewritten in the following useful form
\begin{equation} \label{HHF}
\Delta S = -f \int H^{i\mu}(x)H^{j\nu}(x)F^{ij}_{\mu\nu}(x)
\,d^4x.
\end{equation}
Up to now our consideration has been carried out in the
orthogonal coordinates describing the
underlying flat space-time; it is in these coordinates that the
Lagrangian of the gauge theory is formulated.
It is instructive however to present the action
(\ref{HHF}) in arbitrary coordinates.
Under the coordinate transformation
$x^\mu \rightarrow x'^\mu$ the gauge field obviously transforms as
\begin{equation}\label{FF'}
F^{ij}_{\mu\nu}(x) \rightarrow
F'^{ij}_{\mu\nu}(x') =
\frac{ \partial x^\lambda}{\partial x'^\mu}
\frac{ \partial x^\rho}{\partial x'^\nu}
F^{ij}_{\lambda\rho}(x).
\end{equation}
Moreover, a coordinate transformation does not affect the action.
From this one deduces that the matrix $H^{i\mu}(x)$ is transformed
by the coordinate transformation as
\begin{equation}\label{Hh}
H^{i\mu}(x)\rightarrow h^{i\mu}(x') =
\frac{ \partial x'^\mu}{\partial x^\lambda} H^{i\lambda}(x).
\end{equation}
We choose a different notation for the transformed matrix,
calling it $h^{i\mu}(x')$, in order to distinguish it
from the unimodular matrix $H^{i\mu}(x)$. According to
(\ref{Hh}) the determinant of the transformed matrix
depends on the coordinate transformation,
\begin{equation}\label{deth}
\det [h^{i\mu} (x') ]
= \det \left[ \frac{ \partial x'^\mu}{\partial x^\lambda}\right].
\end{equation}
From Eqs.(\ref{FF'}),(\ref{Hh}) one deduces that in arbitrary
coordinates $x$ the action (\ref{HHF}) can be presented in the following
form
\begin{equation} \label{hhF}
\Delta S = -f \int h^{i\mu}(x)h^{j\nu}(x)F^{ij}_{\mu\nu}(x) \det h(x)
\,d^4x.
\end{equation}
It is convenient for further discussion
to introduce the notation in which $h^{i}_\mu(x)$ is understood
as the matrix
inverse to $h^{i\mu}(x)$, i.e.
\begin{equation}\label{h-1}
h^{i}_\mu(x)h^{j\mu}(x) = \delta_{ij}.
\end{equation}
The determinant in Eq.(\ref{hhF}) is defined as the determinant of
this inverse matrix.
Thus the factor $\det h(x)$ in (\ref{hhF})
simply accounts for the variation of the
phase volume under the coordinate transformation
\begin{equation}\label{d^4}
\det h(x') d^4 x'
\equiv \det [ h^i_\mu(x')] d^4 x'= \det
\left[ \frac{ \partial x^\mu}{\partial x'^\nu} \right] d^4 x' = d^4x.
\end{equation}
Here $x$ and $x'$ are the orthogonal and
arbitrary coordinates respectively.
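Eqs.(\ref{deth}) and (\ref{d^4}) are elementary determinant identities; the following pure-Python sketch (our illustration, with an arbitrarily chosen unimodular $H$ and Jacobian $J$) verifies them numerically:

```python
# H plays the role of the unimodular matrix H^{i lam} (det H = 1),
# J plays the role of the Jacobian J^mu_lam = dx'^mu / dx^lam.
H = [[1, 2, 0, 1],
     [0, 1, 3, 0],
     [0, 0, 1, 2],
     [0, 0, 0, 1]]
J = [[2, 1, 0, 0],
     [0, 1, 1, 0],
     [1, 0, 1, 0],
     [0, 0, 0, 3]]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(a):
    return [list(row) for row in zip(*a)]

def det(m):
    # Laplace expansion along the first row (exact for integer entries).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Transformed matrix h^{i mu} = (dx'^mu/dx^lam) H^{i lam}, i.e. h = H J^T.
h = matmul(H, transpose(J))

assert det(H) == 1        # H is unimodular
assert det(h) == det(J)   # Eq. (deth): det[h^{i mu}] = det[dx'/dx]
# The inverse matrix h^i_mu then has det = 1/det J = det[dx/dx'], Eq. (d^4).
```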
Summarizing, it is shown in this Section that
(\ref{hhF}) gives the effective action which describes the
interaction of a weak and smooth gauge field
$F^{ij}_{\mu\nu}(x)$ with polarized instantons.
\section{The Riemann geometry and the Einstein equations}
\label{eineq}
Let us consider in this Section excitations above
the polarized vacuum in the classical approximation.
It is clear that these excitations
should have very interesting properties
because a variation of the gauge field results in a contribution
to the action (\ref{hhF})
which is linear in the field. This is in contrast to
the standard quadratic behavior of the Yang-Mills action (\ref{yma}).
We will assume that the fields considered
vary over macroscopic distances, say $\sim 1$ cm, and that
their magnitude can be roughly estimated as
$|F^{ij}_{\mu\nu}| \sim 1/ {\rm cm^2}$.
We will see below that the constant $f$ which was
defined in Eqs.(\ref{M^AB}),(\ref{M=fM}) is large,
$f \sim 1/r_{\rm P}^2$, where $r_{\rm P}$ is the Planck radius.
This shows that for weak fields the integrand in the
Yang-Mills action (\ref{yma}) is suppressed compared to (\ref{hhF})
by a drastic factor
\begin{equation}\label{frp}
|F^{ij}_{\mu\nu}|/f \sim (r_{\rm P}/1 {\rm cm})^2 = 10^{-64}.
\end{equation}
This estimate demonstrates that
our first priority is to take into account
the action (\ref{hhF}), which describes the interaction
of the weak field with the polarized instantons, neglecting the Yang-Mills
action (\ref{yma}).
We will return to this point in Section \ref{con}.
Thus we suppose that properties of
low-energy excitations of the gauge field
can be described by the action (\ref{hhF}).
The fact that
the field $F^{ij}_{\mu\nu}(x)$ varies on the macroscopic
scale makes it weak and smooth at the microscopic level.
Consequently, the field has a trivial
topological structure at the microscopic level.
In contrast, the matrix $h^{i\mu}(x)$ describes
those degrees of freedom of the gauge field
which are associated with instantons and
therefore have highly nontrivial microscopic topological structure.
Thus $F^{ij}_{\mu\nu}(x)$ and $h^{i\mu}(x)$ describe
quite different topological structures.
Their different topology enables us to consider
them as two sets of independent variables, or modes.
This makes the
action (\ref{hhF}) a functional of these variables
\[ \Delta S =\Delta S( \{A^{ij}_\mu(x)\},\{ h^{i\mu}(x)\}),\]
where $A^{ij}_\mu(x)$ is the vector potential of the
external field $F^{ij}_{\mu\nu}(x)$.
We see that the
classical equations for the functional (\ref{hhF}) read
\begin{eqnarray} \label{dsda}
\frac{\delta( \Delta S)}{ \delta A^{ij}_{\mu}(x)} &=& 0,
\\ \label{dsdh}
\frac{\delta( \Delta S)}{ \delta h^{i \mu}(x)} &=& 0.
\end{eqnarray}
From Eq.(\ref{hhF}) one finds that the first classical equation
(\ref{dsda}) results in
\begin{equation} \label{dA}
\nabla^{ik}_\mu [ \left(h^{k\mu}(x)h^{j\nu}(x)-
h^{k\nu}(x)h^{j\mu}(x)\right) \det h(x)] = 0.
\end{equation}
Here $\nabla^{ij}_\mu=\partial_\mu\delta_{ij}+
A^{ij}_\mu(x)$ is the covariant derivative
in the external gauge field.
The second classical equation which follows from
Eqs.(\ref{hhF}),(\ref{dsdh}) reads
\begin{equation} \label{eint}
h^{j\nu}(x)F^{ij}_{\mu\nu}(x) - \frac{1}{2}h^i_\mu(x)
h^{k\lambda}(x)h^{j\nu}(x)F^{kj}_{\lambda\nu}(x) = 0.
\end{equation}
Here $h^i_\mu(x)$ is defined in Eq.(\ref{h-1}).
In order to present Eqs.(\ref{dA}),(\ref{eint})
in a more convenient form let us define three
quantities, $g_{\mu \nu}(x),
\Gamma^{\lambda}_{\mu \nu}(x)$, and
$R^{\lambda}_{\rho \mu \nu}(x)$:
\begin{eqnarray} \label{ghh}
g_{\mu \nu}(x) &=& h^{i}_{\mu}(x) h^{i}_{\nu}(x),
\\ \label{gam}
\Gamma^{\lambda}_{\mu \nu}(x) &=&
h^{i \lambda}(x)h^{j}_{\mu}(x)A^{ij}_
{\nu}(x) + h^{i \lambda}(x) \partial_{\nu} h^{i}_{\mu}(x),
\\ \label{RF}
R^{\lambda}_{\rho \mu \nu}(x) &=& h^{i \lambda}(x)
h^{j}_{\rho}(x)F^{ij}_
{\mu \nu}(x).
\end{eqnarray}
Remember that space-time under consideration is basically
flat. Therefore
Eqs.(\ref{ghh}),(\ref{gam}),(\ref{RF}) just define
the left-hand sides in terms of $A^{ij}_\mu(x)$ and $h^{i\mu}(x)$.
Remarkably, the classical Eqs.(\ref{dA}),(\ref{eint})
for the gauge field supply these definitions with
an interesting geometrical content.
In order to see this let us notice
that after simple calculations
the first classical
Eq.(\ref{dA}) may be presented in the following form
\begin{equation} \label{Gg}
\Gamma_{\lambda\mu}^{\sigma}(x)=\frac{1}{2}
g^{\sigma\tau}(x) \left[
\partial_\lambda g_{\tau \mu }(x)+
\partial_\mu g_{\lambda \tau }(x)-
\partial_\tau g_{\lambda \mu }(x)\right].
\end{equation}
Here the matrix $g^{\mu\nu}(x)$ is defined as
\begin{equation}\label{gin}
g^{\mu\nu}(x) = h^{i\mu}(x)h^{i\nu}(x),
\end{equation}
which according to (\ref{h-1}) makes it
inverse to $g_{\mu\nu}(x)$
\begin{equation}\label{inve}
g^{\mu\lambda}(x)g_{\lambda\nu}(x) = \delta_{\mu\nu}.
\end{equation}
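In matrix language this is simply the statement that $g^{\mu\nu}$ and $g_{\mu\nu}$ are $h^{\rm T}h$ and its inverse; the following pure-Python sketch (our illustration, with an arbitrary sample vierbein) checks (\ref{h-1}), (\ref{ghh}), (\ref{gin}), and (\ref{inve}) numerically:

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(a):
    return [list(row) for row in zip(*a)]

I4 = [[int(i == j) for j in range(4)] for i in range(4)]

# Sample h^{i mu} = 1 + N with N strictly upper triangular, so that the
# inverse of h^T is the terminating Neumann series 1 - N^T + N^T^2 - N^T^3.
N = [[0, 1, 2, 0],
     [0, 0, 1, 3],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
h = [[I4[i][j] + N[i][j] for j in range(4)] for i in range(4)]

Nt = transpose(N)
Nt2 = matmul(Nt, Nt)
Nt3 = matmul(Nt2, Nt)
# Inverse vierbein h^i_mu of Eq. (h-1); exact, since N^4 = 0.
hinv = [[I4[i][j] - Nt[i][j] + Nt2[i][j] - Nt3[i][j] for j in range(4)]
        for i in range(4)]
assert matmul(hinv, transpose(h)) == I4   # h^i_mu h^{j mu} = delta_ij

g_down = matmul(transpose(hinv), hinv)    # g_{mu nu} = h^i_mu h^i_nu
g_up = matmul(transpose(h), h)            # g^{mu nu} = h^{i mu} h^{i nu}
prod = matmul(g_up, g_down)
assert prod == I4                         # Eq. (inve)
```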
The form of Eq.(\ref{Gg}) is identical to the usual
relation which expresses the Christoffel symbol
in terms of the Riemann metric for some Riemann geometry
\cite{LL}.
Moreover, using
(\ref{Gg}),(\ref{ghh}),(\ref{gam}), and (\ref{RF})
it is easy to verify that the quantity
$R^{\lambda}_{\rho\mu \nu}(x)$ can be presented in
terms of $g_{\mu\nu}(x)$ as well
\begin{equation} \label{RG}
R^{\sigma}_{\lambda\mu\nu}(x)=
\partial_\mu \Gamma_{\lambda\nu}^{\sigma}-
\partial_\nu \Gamma_{\lambda \mu}^{\sigma}+
\Gamma_{\tau \mu}^{\sigma}\Gamma_{\lambda\nu}^{\tau}-
\Gamma_{\tau\nu}^{\sigma}\Gamma_{\lambda\mu}^{\tau}.
\end{equation}
One recognizes in this
relation the usual connection between the Riemann
tensor and the Riemann metric. Furthermore,
it follows from (\ref{Gg}) that
$g_{\mu\nu}(x), \Gamma^\lambda_{\mu\nu}(x)$ and
$R_{\lambda\mu\nu}^{\sigma}(x)$ possess the symmetry properties
which are usual in the Riemann geometry
\begin{eqnarray}\label{gsy}
g_{\mu\nu}(x) &=& g_{\nu\mu}(x), \\ \label{Gsy}
\Gamma^\lambda_{\mu\nu}(x) &=& \Gamma^\lambda_{\nu\mu}(x), \\ \label{Rsy}
R_{\sigma\rho\mu\nu}(x)&=& -R_{\sigma\rho\nu\mu}(x)=-R_{\rho\sigma\mu\nu}(x)=
R_{\mu\nu\sigma\rho}(x),
\end{eqnarray}
where
\[R_{\sigma\rho\mu\nu}(x)=
g_{\sigma\lambda}(x)R^\lambda_{\rho\mu\nu}(x).\]
Eqs.(\ref{Gg}),(\ref{RG}), (\ref{gsy}),(\ref{Gsy}) and (\ref{Rsy})
show that $g_{\mu\nu}(x)$ can be considered as
a metric for some Riemann geometry
with the Christoffel symbol $\Gamma^\lambda_{\mu\nu}(x)$
and the Riemann tensor $R^\lambda_{\rho\mu\nu}(x)$.
Consider now the second classical equation (\ref{eint}).
With the help of Eqs.(\ref{ghh}),(\ref{RF}) it is easy to verify that
it can be presented
in the following form
\begin{equation} \label{einnom}
R_{\mu \nu} - \frac{1}{2} g_{\mu \nu} R = 0.
\end{equation}
Here
\begin{eqnarray}\label{ric}
&&R_{\mu\nu}(x) = R^\lambda_{\mu\lambda\nu}(x), \\ \label{kriv}
&&R(x) = g^{\mu\nu}(x) R_{\mu\nu}(x).
\end{eqnarray}
These definitions show that
$R_{\mu\nu}(x)$ and $R(x)$ are the
Ricci tensor and the curvature of the Riemann geometry based on the
metric $g_{\mu\nu}(x)$.
Hence, Eq.(\ref{einnom}) proves to be identical to the
Einstein equations of general relativity in the absence of matter.
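For completeness we recall a standard consequence: contracting (\ref{einnom}) with the metric $g^{\mu\nu}$ in four dimensions gives
\[
g^{\mu\nu}\left(R_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}R\right) = R - 2R = -R = 0,
\]
so that in the absence of matter the equations are equivalent to $R_{\mu\nu}(x)=0$.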
We thus arrive at an interesting result.
Remember that the matrix $h^{i\mu}(x)$ describes the orientation
of (anti)instantons in the considered nontrivial vacuum,
while $A^{ij}_\mu(x)$ is the potential of the weak gauge field.
Both these quantities describe properties of the gauge field.
We have demonstrated above that the
first classical condition (\ref{dsda})
for the gauge field
can be expressed as a statement that particular combinations
(\ref{ghh}),(\ref{gam}),(\ref{RF})
of $h^{i\mu}(x)$ and $A^{ij}_\mu(x)$
are identical to the Riemann metric,
the Christoffel symbol, and the Riemann tensor for some Riemann space.
The second classical equation (\ref{dsdh})
proves to be identical to the Einstein equations for this Riemann metric.
The Einstein equations imply, in
particular, the existence of excitations called
gravitational waves.
They are massless and carry spin 2,
exactly what we have been looking for.
In the picture considered above these excitations arise
from variables which describe particular degrees
of freedom of the gauge field.
Remember that these variables can be considered as the two modes.
One of them describes a
weak topologically trivial gauge field.
The other one describes polarization of instantons.
The strong interaction of these two modes
(\ref{hhF}) mixes them and
the resulting spin-2 excitation should
be considered as a coherent propagation of
the two interacting modes.
It is instructive to consider
the action (\ref{hhF}) when the first classical Eq.(\ref{dsda})
is valid. It is clear from (\ref{ghh}),(\ref{RF})
that the integrand of the action (\ref{hhF})
proves to be
proportional to the integrand of the usual
Hilbert action of general relativity \cite{LL} which
has the following form in Euclidean formulation
\begin{equation}\label{hil}
S_{H}= -\frac{1}{16\pi k} \int R(x) [\det g(x)]^{1/2}\,d^4x.
\end{equation}
One can identify the two actions
(\ref{hhF}) and (\ref{hil})
if the Newton gravitational constant $k$ is related to
the constant $f$, which characterizes the density of the
polarized condensate of instantons, by
\begin{equation} \label{newt}
k = \frac{1}{16 \pi f}.
\end{equation}
This relation demonstrates that $f=2/r_{\rm P}^2$, thus supporting
the estimate (\ref{frp}). Remember that the constant
$f$ introduced in (\ref{M^AB}),(\ref{M=fM})
depends on the typical
radii and separations of the polarized instantons.
Relation (\ref{newt}) shows that we should assume that
these radii and separations are comparable to the Planck
radius.
The above consideration verifies that
the long-range excitations of the gauge field
prove to be massless spin-2 particles.
Their classical propagation is described by
the Einstein equations of general relativity. The corresponding
effective action turns out to be identical to the
Hilbert action of general relativity.
These facts together permit one to identify the excitations found
with gravitational waves.
This indicates that
gravity arises in the framework of the gauge theory.
It is very important that the dynamics of general relativity,
its action and equations of motion,
originate directly from the dynamics of the gauge field.
All these results follow directly from
assumptions (\ref{M^AB})-(\ref{Sig}) which
were interpreted above as the $SO(3,3)$ polarization of instantons.
The picture developed above
remains valid as long as the gauge
field varies smoothly over the radius of the polarized instantons.
Under this condition the action (\ref{hhF}) applies.
As was mentioned,
the radii of the polarized instantons
are comparable to the Planck radius.
This requires
gravitational waves to have wavelengths
larger than the Planck radius.
For shorter wavelengths the term (\ref{hhF})
does not manifest itself in the action.
This means that
the interaction of short wavelength gauge field
with the polarized instantons is suppressed.
Therefore in this high-energy region
the gauge field is described by the conventional
Yang-Mills action and reveals its usual properties.
In particular its excitations are
spin-1 gauge bosons, while gravitons do not exist.
Thus gravity manifests itself only
for energies below the Planck energy, while for
higher energies it does not exist.
Remember that $h^{i\mu}(x)$ was introduced in Section \ref{eineq}
as an order parameter which describes the polarization
of instantons.
It is interesting that
the Riemann structure defined
in (\ref{ghh}),(\ref{gam}),(\ref{RF})
admits another physical interpretation
of this quantity.
Namely, $h^{i\mu}(x)$ can be identified
with the vierbein, a quantity well known in general relativity
\cite{LL}.
The derivation of the Einstein equations
(\ref{einnom}) from the two classical equations
is similar to the Palatini method, see Ref.\cite{LL},
formulated with the help of the vierbein formalism.
Our consideration reveals however an important subtlety.
In the usual vierbein formulation the physical
nature of the space to which the index
$i$ of the matrix $h^{i\mu}(x)$ belongs
does not play a substantial role. This index can be
considered purely as a label \cite{LL}.
In contrast, in our approach
this index belongs to the isotopic space,
which plays a central role.
In this space gauge transformations of the
considered $SO(4)$ gauge theory are defined.
There is however a price for this
new physical interpretation of the index $i$.
It is important for us to rely on the Euclidean formulation.
An attempt to use the Minkowski formulation
as a starting point
meets a difficulty.
Indeed, in the Euclidean formulation the connection between
the metric and the vierbein can be presented as
\[ g^{\mu\nu}(x)h^i_\mu(x)h^j_\nu(x) = \delta_{ij},\]
where Eqs.(\ref{ghh}),(\ref{inve}) were used.
The delta-symbol in the right hand side
here simply shows that the isotopic space is
the 4D Euclidean space. In contrast, if one attempted to
develop the above approach
starting from Minkowski coordinate space, then the
connection between the metric
and the vierbein would read \cite{LL}
\[ g^{\mu\nu}(x)h^i_\mu(x)h^j_\nu(x) = g^{(0)ij}, \]
where $ g^{(0)} = {\rm diag}(1,1,1,-1)$ is
the Minkowski metric. Thus
the space which in our
approach should be considered as an isotopic space
for some gauge theory acquires
the structure of the $3+1$ Minkowski space.
Gauge transformations in this space become
$SO(3,1)$ transformations
belonging to the noncompact Lorentz group.
As is well known, a gauge theory for a noncompact group
is not unitary and therefore is
poorly defined \cite{gel}.
Thus the gauge formulation discussed above becomes
questionable if Minkowski space were used as
a starting point.
We conclude that
an attempt to continue the vierbein into
real space-time presents a problem.
Fortunately, one can avoid it. One can first use the
Euclidean formulation, deriving
the Einstein equations (\ref{einnom}) as discussed above.
These equations are
formulated entirely in terms of the metric;
neither the vierbein nor any isotopic index
appears in them explicitly.
This fact permits one to continue
the final Einstein equations into real space-time
by continuing the metric, without
reference to the vierbein formalism.
In summary, it is shown in this Section that
the Einstein equations of general relativity
arise from the $SO(4)$ gauge theory.
\section{Discussion of results}
\label{res}
The main result of this paper is a set of conditions
which must be fulfilled in order to derive
the effect of gravity by means of conventional gauge theory.
Let us formulate these results, presenting them as comments on
the list of questions
posed in the Introduction.
1. It is shown above that
one can hope to deduce the effect of gravity from
the gauge theory if the gauge group is $SO(4)$.
2. The necessary vacuum is shown to
include the polarized instantons.
There is a simple physical reason explaining the necessity of
this vacuum state.
Compare the Yang-Mills
action (\ref{yma}) with the Hilbert action (\ref{hil}). The striking
difference between them is the power of the field
in their integrand.
The Yang-Mills action is quadratic in the gauge field strength.
In contrast the Hilbert action (\ref{hil})
can be considered as a quantity which
is linear in the Riemann tensor due to an
obvious relation $R = g^{\mu\nu}R^\lambda_{\mu\lambda\nu}$.
At the same time there is a long-known resemblance
between the basic properties of the gauge field strength and
the Riemann tensor, see for example Ref.\cite{fad}.
This paper proposes to promote this resemblance
to an identity, in the sense explained in (\ref{RF}).
This proposal can only be accomplished
if some phenomenon in gauge theory produces
a term in the action which is linear in the field strength.
Eq.(\ref{cdg}) shows that an instanton
provides just the necessary effect.
Moreover, an ensemble of polarized instantons
results in the action (\ref{hhF}), which proves to be identical
to the Hilbert action. This is what makes polarized instantons
so special for gravity.
3. It is argued above that the role of the order parameter
for the phase with polarized instantons
is played by the mean orientations of instantons and
antiinstantons.
The orientation of an (anti)instanton is described by
a $3\times 3$ matrix.
Therefore in the $SO(4)$
gauge theory there can arise four $3\times 3$ matrices
describing the mean orientations of all available
instantons and antiinstantons.
The main assumption of this paper is that
these four matrices can be described by a single
$6\times 6$ matrix which belongs to $SO(3,3)$.
It is important to mention that one cannot make do with
anything less: the order parameter
in the suggested picture has to be an
$SO(3,3)$ matrix. To see this one can reverse the arguments of
Section \ref{eineq}.
The gravitational field is known to be described by
the vierbein, which can be regarded as an $SL(4)$ matrix.
Therefore, to describe gravity by means of gauge theory,
one needs to express the vierbein, a local $SL(4)$ matrix,
in terms of some variables of the gauge theory.
Eq.(\ref{hom}) provides such a
possibility, provided there exists the $SO(3,3)$ order parameter.
This consideration shows in particular that the
gauge theory should be based on the $SO(4)$ group (or a bigger one),
which possesses a sufficient number of different
(anti)instantons to
develop the necessary $SO(3,3)$ order parameter.
4. Excitations above the considered vacuum
state are found to be identical to gravitational waves.
These excitations are described
in terms of two modes of the gauge field.
One of them is a weak
topologically trivial gauge field $A^{ij}_\mu(x)$. The other mode,
$h^{i\mu}(x)$, describes the orientations of instantons.
Eq.(\ref{hhF}) shows that there is the strong interaction
between these modes which results in their mixing.
Thus gravitational waves should be considered as a
coherent propagation of the
two interacting modes describing two different
degrees of freedom of the gauge field.
5. We verified that the low-energy degrees of freedom
of the gauge field are described by the
variables of the Riemann geometry if the
polarized vacuum exists. The low-energy action (\ref{hhF})
is found to be identical to the Hilbert action of general relativity.
This makes the classical equations identical to
the Einstein equations of general relativity.
Eq.(\ref{newt}) shows that the relevant instantons
should have radii comparable to the Planck radius.
This fact restricts the energies from above.
For energies well below the Planck energy
the long-wavelength action (\ref{hhF}) is valid,
resulting in the existence of gravitational waves.
For energies higher than the Planck limit
Eq.(\ref{hhF}) is no longer applicable,
the weak gauge field no longer interacts
with the instantons,
and one should describe the gauge field
by the usual Yang-Mills action.
This shows that gravitational waves exist
only for low energies, while high energy behavior of
the gauge field should be described by the gauge bosons.
6. We have not considered quantum corrections above.
From experience in QCD one could anticipate that they
might be dangerous for the scenario considered.
Even if the gauge constant is small at
some short distances, quantum corrections
in the QCD vacuum are known to
make it rise at larger distances
\cite{kh,gross,poli}, eventually resulting in confinement.
One should keep in mind, however, that in the
polarized vacuum considered here the role
of quantum corrections differs qualitatively
from that in the QCD vacuum.
Remember that the rise of the gauge coupling
constant in the QCD vacuum
is related to quantum fluctuations
whose wavelength grows with
the distances considered.
The point is that the long-range quantum fluctuations
of the gauge field in the considered vacuum
are strongly affected by
polarized instantons. This makes the quantum corrections
different from the ones in the QCD vacuum.
This is an important issue and it is worth
restating it in terms of the basic excitations.
The perturbation series in the QCD vacuum
is known to become unreliable at large distances
due to the large contribution from virtual long-wavelength gauge
bosons. The divergence of the perturbation theory is usually
considered as an indication that
for large distances the quantum corrections
become very important.
In the polarized vacuum this picture does not work because,
as we verified above,
low-energy spin-1 gauge bosons simply do not exist in
this vacuum.
There are spin-2 gravitational waves instead.
This argument certainly deserves a much more detailed
consideration, which however lies outside the
main scope of this paper.
7. Eq.(\ref{so33}) postulates the
$SO(3,3)$ symmetry for the order parameter.
This proves to be a very demanding assumption.
To illustrate this statement consider
the simplest possible case of zero gravitational field.
Notice that by choosing Galilean
coordinates one can always eliminate the
gravitational field locally. This makes
the case of zero field of interest
for nonzero gravitational fields as well.
In the absence of gravitational field the vierbein
can be chosen to be a constant $SO(4)$ matrix
$h^{i\mu}(x) = H^{i\mu} \in SO(4)$.
Eq.(\ref{hom}) shows that the matrix $M^{AB}(x)$
in this case is a constant $SO(3)\times SO(3)$ matrix
\begin{equation}\label{so3so3}
M =
\left( \begin{array}{cc} C & \hat 0 \\
\hat 0 & \bar C \end{array} \right),
\end{equation}
where $C, \bar C \in SO(3)$. This means, according to (\ref{T}),
that the antiinstantons in the $su(2)\eta$
subalgebra must be polarized; their polarization
is described by the orthogonal matrix $C$.
Similarly, the instantons in the $su(2)\bar\eta$
subalgebra are also polarized; their polarization
is described by the orthogonal matrix $\bar C$.
In contrast, the instantons in the $su(2)\eta$ and
antiinstantons in the $su(2)\bar\eta $ subalgebras
are not polarized.
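One can also check directly (a numerical illustration of ours) that any block matrix of the form (\ref{so3so3}) with $C,\bar C\in SO(3)$ satisfies the $SO(3,3)$ condition (\ref{MSM}):

```python
import math

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(a):
    return [list(row) for row in zip(*a)]

def rot_z(t):  # rotation about the z axis, an element of SO(3)
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):  # rotation about the x axis, an element of SO(3)
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

C, Cbar = rot_z(0.4), rot_x(1.1)

# Block-diagonal M = diag(C, Cbar), Eq. (so3so3).
M = [[0.0] * 6 for _ in range(6)]
for i in range(3):
    for j in range(3):
        M[i][j] = C[i][j]
        M[i + 3][j + 3] = Cbar[i][j]

Sigma = [[(1.0 if i < 3 else -1.0) if i == j else 0.0 for j in range(6)]
         for i in range(6)]

# M Sigma M^T = diag(C C^T, -Cbar Cbar^T) = Sigma, Eq. (MSM).
lhs = matmul(matmul(M, Sigma), transpose(M))
assert all(abs(lhs[i][j] - Sigma[i][j]) < 1e-12
           for i in range(6) for j in range(6))
```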
Thus the polarization of (anti)instantons must be
nontrivial even in the absence of the gravitational field.
To meet the requirement (\ref{so3so3})
one should develop an $SU(2)$ gauge theory model in which
instantons are polarized, while
antiinstantons remain unpolarized.
The candidate for such a model was proposed in
Refs.\cite{det,PRdet}.
In this model there arises an interaction between instantons
which makes their identical orientation more probable.
This interaction originates from a single-fermion loop
correction, when the interaction of fermions with the given
instantons as well as with
particular scalar condensates is taken into account.
There is no interaction of this type between antiinstantons.
The interaction between instantons
provides a possibility for a phase transition
into the state with polarized instantons.
The existence of the polarized phase
was verified in \cite{PRdet} in the mean field approximation.
The order parameter for polarized instantons
has the $SO(3)$ symmetry, as is necessary to satisfy
Eq.(\ref{so3so3}).
Thus the mentioned model gives hope that
the phase described by
the simplest $SO(3)\times SO(3)$ order parameter (\ref{so3so3})
can exist.
To move further one should overcome two
limitations of the model.
Firstly, a way must be found to make the ensemble
of instantons and antiinstantons in the $SO(4)$ gauge
theory develop the condensate governed by the
more sophisticated $SO(3,3)$ order parameter.
Secondly, the model of \cite{det,PRdet}
relies upon the existence of scalar condensates that transform
under the gauge group.
These condensates make the gauge field
acquire a mass through the Higgs mechanism.
Therefore there is a danger that the mass of the gauge
field would generate
a mass for the gravitational waves.
One should find a way to avoid this undesirable property.
Thus, either a development of the model \cite{det,PRdet}
or possibly a new, more sophisticated model is necessary.
The present paper clearly sets the requirements for such a model.
8. The only information available on possible phase transitions
into the state with polarized instantons comes from the
above-mentioned model \cite{det,PRdet}, in which
it was found that in the mean field approximation
there exists a
first order phase transition into the polarized state.
Our last comment is on the dilute gas
approximation for instantons.
It is shown in Appendix \ref{A} that
even closely located instantons interact
with the external field as individual objects.
This fact may be considered as an
indication that the picture developed above
for the gas of instantons may be valid for
the instanton liquid as well.
\section{Conclusion}
\label{con}
The conventional approach to general relativity postulates
that the theory is to be written in geometrical terms.
Postulating then the Hilbert action
one derives the Einstein equations.
In this paper we discuss a possibility for a novel approach.
Space-time is supposed to be basically flat.
In this space-time the $SO(4)$ gauge theory is formulated.
Assuming that in this theory there develops a particular phase,
called the phase of polarized instantons, we find that at low energies
the gauge field can be described in terms of the Riemann geometry.
There are two low-energy modes of the gauge field which play the
important role.
One of them accounts for the topologically
trivial sector of the theory; the other
describes the polarization of instantons.
These two modes strongly
interact. Their interaction originates
from the conventional Yang-Mills action, but
for weak fields it can be presented in the form of the Hilbert
action from which one derives the Einstein equations
describing gravitational waves. Thus the gravitational
wave arises as a mixing of the two modes
of the gauge field.
\ack
I am thankful to
R.Arnowitt,
C.Hamer,
E.Shuryak
for critical comments.
My participation in the
Workshop on Symmetries in the Strong Interaction, Adelaide 1997,
where part of this work was carried out, was helpful.
The support of the Australian Research Council is appreciated.
math/9804132
\section*{Introduction}
In this paper, we propose a class of discrete dynamical systems
associated with affine root systems, by constructing new
representations of affine Weyl groups.
This class of difference systems covers certain types of
discrete Painlev\'e equations,
and is expected also to provide
a general framework to describe the structure of B\"acklund
transformations of differential systems of Painlev\'e type.
\medskip
Through a series of works by K.~Okamoto \cite{O},
it has been known since the 1980s
that the Painlev\'e equations
$P_{\,\text{II}}$, $P_{\,\text{III}}$, $P_{\,\text{IV}}$,
$P_{\,\text{V}}$ and $P_{\,\text{VI}}$
admit the affine Weyl groups of type
$A^{(1)}_{1}$, $C^{(1)}_{2}$, $A^{(1)}_{2}$, $A^{(1)}_{3}$ and $D^{(1)}_{4}$,
respectively, as groups of B\"acklund transformations.
The relationship between the affine Weyl group symmetry
and the structure of classical solutions has been clarified
through the studies of irreducibility of Painlev\'e equations
in the modern sense of H.~Umemura
(see \cite{O},\cite{Mu},\cite{U0},\cite{NO}, for instance).
In a recent work \cite{NY1}, the authors introduced a new representation
(\ref{SP4})
of the fourth Painlev\'e equation $P_{\,\text{IV}}$ from which the structures
of B\"acklund transformations and of special solutions of $P_{\,\text{IV}}$
are understood naturally.
Symmetric forms of this sort can be formulated for the other Painlev\'e
equations as well (see \cite{NY3}).
One important point of symmetric forms is
that the structure of B\"acklund transformations of these Painlev\'e
equations can be described in a unified manner, by introducing a class
of representations of affine Weyl groups inside certain Cremona groups.
Also, with the $\tau$-functions appropriately defined,
the dependent variables of the Painlev\'e equations allow certain
``multiplicative formulas'' in terms of $\tau$-functions.
One remarkable fact about our multiplicative formulas (\ref{f-by-tau})
is that the factors are completely
determined by the Cartan matrix of the corresponding
affine root system.
Similar structures can be found commonly in
various (discrete) integrable systems with
Painlev\'e (singularity confinement) property
(\cite{RGH},\cite{KNS},\cite{KNH}).
\medskip
The main purpose of this paper is to present a new class of
representations of affine Weyl groups which provides
a prototype of affine Weyl group symmetry in nonlinear
differential and difference systems.
In Sections 1 and 2, we introduce a class of representations of the
Coxeter groups
of Kac-Moody type on certain fields of rational functions
(on the levels of $f$-{\em variables} and $\tau$-{\em functions},
respectively).
This class of representations was found as a generalization of
the structure of B\"acklund transformations in the symmetric forms
of Painlev\'e equations $P_{\,\text{IV}}$, $P_{\,\text{V}}$ and
$P_{\,\text{VI}}$ which are the cases of $A^{(1)}_2$, $A^{(1)}_3$ and
$D^{(1)}_4$ respectively.
Our representation in the case of an affine root system
provides naturally a discrete dynamical system from the lattice part
of the affine Weyl group.
We introduce in Section 3 the discrete dynamical
systems associated with affine root systems in this sense.
The case of $A^{(1)}_l$ is discussed in Section 4 in some detail as an example.
One interesting aspect of our system is that
{\em continued fractions} arise naturally in the discrete dynamical system,
with variations depending on the affine root system.
In the final section, we explain how one can apply our discrete
dynamical systems to the problem of symmetry of
nonlinear differential (or difference) systems.
In particular, we present a series of nonlinear ordinary differential systems
which have symmetry under the affine Weyl groups of type $A^{(1)}_l$.
This series of nonlinear equations gives a generalization of
the Painlev\'e equations $P_{\,\text{IV}}$ and $P_{\,\text{V}}$ to higher
orders.
\section{A representation of the Coxeter group $W(A)$}
We fix a {\it generalized Cartan matrix} (or a {\it root datum})
$A=(a_{ij})_{i,j\in I}$ with $I$ being a finite indexing set.
By definition, $A$ is a square matrix with the properties
\par\smallskip
\begin{quote}
(C1) \quad $a_{jj}=2 $ for all $j\in I$,\newline
(C2) \quad $a_{ij}$ is a nonpositive integer if $i\ne j$,\newline
(C3) \quad $a_{ij}=0 \Leftrightarrow a_{ji}=0$ \quad $(i,j\in I)$.
\end{quote}
\par\smallskip\noindent
(See Kac \cite{Kac} for the basic properties of generalized Cartan matrices.
Although we assume that $I$ is finite,
a considerable part of the following argument can be formulated under the
assumption that $A$ is {\em locally finite}, namely, for each $j\in I$,
$a_{ij}=0$ except for a finite number of $i$'s.)
We define the {\em root lattice} $Q=Q(A)$
and the {\em coroot lattice} $Q^\vee$ for $A$ by
\begin{equation}
Q=\bigoplus_{j\in I} {\Bbb Z} \, \alpha_j\quad\text{and}\quad
Q^\vee=\bigoplus_{j\in I} {\Bbb Z}\, \alpha^\vee_j
\end{equation}
respectively, together with the pairing
$\br{\,,\,} : Q^\vee\times Q \to {\Bbb Z}$
such that $\br{\alpha^\vee_i,\alpha_j}=a_{ij}$ for $i,j\in I$.
We denote by $W=W(A)$
the Coxeter group defined by the generators
$s_i$ ($i\in I$) and defining relations
\begin{equation}
s_i^2=1,\quad (s_is_j)^{m_{ij}}=1\quad(i,j\in I, i\ne j),
\end{equation}
where $m_{ij}=2,3,4,6$ or $\infty$ according as
$a_{ij}a_{ji}=0,1,2,3$ or $\ge 4$.
The generators $s_i$ act naturally on $Q$ by reflections
\begin{equation}\label{s-on-a}
s_{i}(\alpha_j)=\alpha_j-\alpha_i\br{\alpha^\vee_i,\alpha_j}
=\alpha_j-\alpha_i a_{ij}
\end{equation}
for $i,j\in I$.
Note that the action of each $s_i$ on $Q$ induces an
automorphism of the field ${\Bbb C}(\alpha)={\Bbb C}(\alpha_i; i\in I )$
of rational functions in $\alpha_i$ $(i\in I)$ so that
${\Bbb C}(\alpha)$ becomes a left $W$-module.
\medskip
Introducing a set of new ``variables'' $f_j$ ($j\in I$), we propose
to extend the representation of $W$ on ${\Bbb C}(\alpha)$ to the
field ${\Bbb C}(\alpha;f)={\Bbb C}(\alpha)(f_j; j\in I)$ of rational
functions in $\alpha_j$ and $f_j$ ($j\in I$).
In order to specify the action of $s_i$ on $f_j$, we fix
a matrix $U=(u_{ij})_{i,j\in I}$ with entries in ${\Bbb C}$ such
that
\par\smallskip
\begin{tabular}{clll}
(0) &$u_{ij}=0$ & if & $i=j$ \ or \ $a_{ij}=0$, \\
(1) &$u_{ij}=-u_{ji}$ &if & $(a_{ij},a_{ji})=(-1,-1)$, \\
(2) &$u_{ij}=-u_{ji}$ or $-2u_{ji}$ & if & $(a_{ij},a_{ji})=(-2,-1)$, \\
(3) &$u_{ij}=-u_{ji},-\frac{3}{2}u_{ji},-2u_{ji}$ or $-3u_{ji}$ & if &
$(a_{ij},a_{ji})=(-3,-1)$.
\end{tabular}
\par\smallskip
\begin{thm}\label{thmA}
Let $A=(a_{ij})_{i,j\in I}$ be a generalized Cartan matrix and
$U=(u_{ij})_{i,j\in I}$ a matrix satisfying the conditions above.
For each $i\in I$, we extend the action of $s_i$ on ${\Bbb C}(\alpha)$ to an
automorphism of ${\Bbb C}(\alpha;f)$ such that
\begin{equation}
s_i(f_j)=f_j+\frac{\alpha_i}{f_i}u_{ij} \quad(j\in I).
\end{equation}
Then the actions of these $s_i$ define a representation of
the Coxeter group $W=W(A)$ $($i.e. a left $W$-module structure$)$
on the field ${\Bbb C}(\alpha;f)$ of rational functions.
\end{thm}
\noindent
We have only to check that the automorphisms $s_i$ on ${\Bbb C}(\alpha;f)$
are involutions ($s_i^2=1$ for all $i\in I$) and that they satisfy
the Coxeter relations $(s_is_j)^{m_{ij}}=1$ when $i\ne j$ and $m_{ij}=2,3,4,6$.
This can be carried out by direct computations since,
for any $i\in I$, the automorphism $s_i$ stabilizes the subfield
${\Bbb C}(\alpha)(f_i,f_k)$ for each $k\in I$ and,
for any $i,j\in I$, both $s_i$ and $s_j$ stabilize
the subfield ${\Bbb C}(\alpha)(f_i,f_j,f_k)$ for each $k\in I$.
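The rank-two computations can also be checked by machine. The following sketch (assuming the Python library sympy; the encoding of each $s_i$ as a substitution map is an illustrative choice, not part of the paper) verifies the involutivity and the braid relation of Theorem \ref{thmA} for the Cartan matrix of finite type $A_2$, i.e. $a_{12}=a_{21}=-1$, with the choice $u_{12}=1=-u_{21}$:

```python
# Theorem A for the Cartan matrix of finite type A2 (a12 = a21 = -1)
# with the choice u12 = 1, u21 = -1 (condition (1): u12 = -u21).
from sympy import symbols, simplify

a1, a2, f1, f2 = symbols('alpha1 alpha2 f1 f2')

# Each s_i acts as a field automorphism, i.e. by simultaneous
# substitution of the images of the generators:
#   s_i(alpha_j) = alpha_j - a_ij alpha_i,  s_i(f_j) = f_j + (alpha_i/f_i) u_ij.
s1 = {a1: -a1, a2: a2 + a1, f1: f1, f2: f2 + a1/f1}
s2 = {a1: a1 + a2, a2: -a2, f1: f1 - a2/f2, f2: f2}

def act(word, expr):
    # w = s_{j_1} ... s_{j_p} acts by applying the rightmost factor first.
    for s in reversed(word):
        expr = expr.subs(s, simultaneous=True)
    return expr

gens = (a1, a2, f1, f2)

# Involutivity: s_i^2 = 1 on every generator.
assert all(simplify(act([s, s], x) - x) == 0 for s in (s1, s2) for x in gens)

# Braid relation s1 s2 s1 = s2 s1 s2 (m12 = 3, since a12 * a21 = 1).
assert all(simplify(act([s1, s2, s1], x) - act([s2, s1, s2], x)) == 0
           for x in gens)
```

For instance, both sides of the braid relation send $f_1$ to $f_1(f_1f_2-\alpha_2)/(f_1f_2+\alpha_1)$; the same script, with the substitution maps adjusted, can be used to check the Coxeter relations for the other rank-two Cartan matrices.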
\medskip
We remark that Theorem \ref{thmA} provides a systematic method to
realize the Coxeter groups of Kac-Moody type nontrivially
inside {\em Cremona groups}
(groups of the birational transformations of affine spaces).
\begin{rem}
An important class of generalized Cartan matrices is that of
{symmetrizable} ones, which includes the matrices of
finite type and of affine type.
Our condition on $U=(u_{ij})_{i,j\in I}$
described above requires that $U$ should be ``almost'' skew-symmetrizable.
The matrix $U$ can be thought of as specifying a sort of {\em orientation} of
the Coxeter graph of $A$.
It is also related to {\em Poisson structures} of dynamical systems.
\end{rem}
\begin{rem}
Practically, it is sometimes necessary to consider the extension
$\widetilde{W}=W\rtimes \Omega$ of
$W=W(A)$ by a group $\Omega$ of diagram automorphisms
of $A$.
Recall that a {\em diagram automorphism} $\omega$ is by definition
a bijection on $I$ such that
$a_{\omega(i) \omega(j)}=a_{ij}$ for all $i,j\in I$;
the commutation relations of each $\omega\in \Omega$
with elements of $W$ are given by
$\omega s_i=s_{\omega(i)} \omega$ for all $i\in I$.
Suppose that the matrix $U$ satisfies in addition
the following compatibility condition with respect to $\Omega$ :
$u_{\omega(i)\omega(j)}=u_{ij}$ for all $i,j\in I$, $\omega\in\Omega.$
Then, together with the automorphisms $\omega$ of ${\Bbb C}(\alpha;f)$
such that
$\omega(\alpha_j)=\alpha_{\omega(j)}$,
$\omega(f_j)=f_{\omega(j)}$ ($j\in I$),
the representation of $W$ in Theorem \ref{thmA} lifts to
a representation of the {\em extended } Coxeter group
$\widetilde{W}=W\rtimes \Omega$ on ${\Bbb C}(\alpha;f)$.
\end{rem}
\section{$\tau$-Functions -- A further extension of the representation}
We now introduce another set of variables $\tau_j$ ($j\in I$), which
we call the ``$\tau$-functions'' for
the $f$-variables $f_j$ ($j\in I$).
Considering the field extension
${\Bbb C}(\alpha;f;\tau)$
$={\Bbb C}(\alpha;f)(\tau_j; j\in I)$,
we propose a way to extend the representation of $W$
of Theorem \ref{thmA} to ${\Bbb C}(\alpha;f;\tau)$.
\begin{thm}\label{thmB}
Let $A$ be a generalized Cartan matrix and $U=(u_{ij})_{i,j\in I}$
a matrix with entries in ${\Bbb C}$ satisfying the conditions
\begin{quote}
$(0)$ \quad $u_{jj}=0$ \quad for all $j\in I$, \newline
$(1)$ \quad $u_{ij}=u_{ji}=0$\quad if \ \ $a_{ij}=a_{ji}=0$, \newline
$(2)$ \quad $u_{ij}=-k u_{ji}$ \quad if \ \ $(a_{ij},a_{ji})=(-k,-1)$ with $k=1,2$ or $3$.
\end{quote}
We extend the action of each generator $s_i$ of $W$ on
${\Bbb C}(\alpha;f)$ to an automorphism of ${\Bbb C}(\alpha;f;\tau)$
by the formulas
\begin{equation}\label{s-on-tau}
s_i(\tau_j)=\tau_j\quad (i\ne j),\quad
s_i(\tau_i)=f_i \ \tau_i\prod_{k\in I} \tau_k^{-a_{ki}}=
f_i\frac{\prod_{k\in I\backslash\{i\}} \tau_k^{|a_{ki}|}}{\tau_i},
\end{equation}
for all $i,j\in I$.
Then these automorphisms define a representation of $W$ on
${\Bbb C}(\alpha;f;\tau)$.
\end{thm}
The formulas (\ref{s-on-tau}) of Theorem \ref{thmB} specify
how the $f$-variables should be expressed in terms of the $\tau$-functions:
\begin{equation} \label{f-by-tau}
f_j=\frac{\tau_j \ s_j(\tau_j)}
{{\prod}_{ i\in I\backslash\{j\}}\, \tau_i^{|a_{ij}|}}
\end{equation}
for all $j\in I$.
We remark that this type of {\em multiplicative formula by $\tau$-functions}
is of a {\em universal\/} nature, as it appears
in various discretized integrable systems such as
$T$-systems, discrete Toda equations and discrete Painlev\'e equations
(see \cite{KNS},\cite{KNH},\cite{RGH},$\ldots$).
In that context, the existence of multiplicative formulas is thought of
as a reflection
of {\em singularity confinement} which is a discrete analogue of the
Painlev\'e property.
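The $\tau$-extension of Theorem \ref{thmB} can be checked in the same symbolic fashion; the following sketch (again assuming sympy, for type $A_2$ with $u_{12}=1=-u_{21}$) verifies the Coxeter relations on the $\tau$-variables and the multiplicative formula (\ref{f-by-tau}):

```python
# The tau-extension of Theorem B, type A2 with u12 = 1, u21 = -1.
from sympy import symbols, simplify

a1, a2, f1, f2, t1, t2 = symbols('alpha1 alpha2 f1 f2 tau1 tau2')

# s_i(tau_i) = f_i * prod_{k != i} tau_k^{|a_ki|} / tau_i; here |a_ki| = 1.
s1 = {a1: -a1, a2: a2 + a1, f1: f1, f2: f2 + a1/f1,
      t1: f1*t2/t1, t2: t2}
s2 = {a1: a1 + a2, a2: -a2, f1: f1 - a2/f2, f2: f2,
      t1: t1, t2: f2*t1/t2}

def act(word, expr):
    # w = s_{j_1} ... s_{j_p} acts by applying the rightmost factor first.
    for s in reversed(word):
        expr = expr.subs(s, simultaneous=True)
    return expr

gens = (a1, a2, f1, f2, t1, t2)

# Involutivity and the braid relation still hold on the tau-variables.
assert all(simplify(act([s, s], x) - x) == 0 for s in (s1, s2) for x in gens)
assert all(simplify(act([s1, s2, s1], x) - act([s2, s1, s2], x)) == 0
           for x in gens)

# The multiplicative formula (f-by-tau): f_1 = tau_1 * s_1(tau_1) / tau_2.
assert simplify(t1 * act([s1], t1) / t2 - f1) == 0
```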
\begin{rem}
If the matrix $U$ is invariant with respect to a group $\Omega$ of
diagram automorphisms, then the action of the extended Coxeter group
$\widetilde{W}=W\rtimes \Omega$ on ${\Bbb C}(\alpha;f)$ extends naturally
to ${\Bbb C}(\alpha;f;\tau)$ by $\omega.\tau_j=\tau_{\omega(j)}$
for all $j\in I$.
\end{rem}
\par\medskip
Theorem \ref{thmB} can be proved essentially by direct computation to
verify the fundamental relations of the Coxeter group
with respect to the action on the $\tau$-functions $\tau_k$ ($k\in I$).
Instead of giving the detail of such a proof,
we will explain some of the ideas behind these multiplicative formulas.
We consider that the $\tau$-functions should correspond to the
{\em fundamental weights} $\Lambda_j$, while the $f$-variables correspond
to the {\em simple roots} $\alpha_i$.
Let us denote by $L=\operatorname{Hom}_{\Bbb Z}(Q^\vee,{\Bbb Z})$ the dual
${\Bbb Z}$-module of the coroot lattice $Q^\vee$, and take the
dual basis $\{\Lambda_j\}_{j\in I}$ of $\{\alpha^\vee_i\}_{i\in I}$
so that $L=\bigoplus_{j\in I} {\Bbb Z} \Lambda_j$.
Note that $L$, being the dual of $Q$, has a natural action of $W$
and that there is a natural $W$-homomorphism $Q\to L$ such that
\begin{equation}\label{a-in-L}
\alpha_j\mapsto\sum_{i\in I} \Lambda_i a_{ij}\quad(j\in I)
\end{equation}
through the pairing $\br{\ ,\ }$.
(The lattice $L$ is in fact the weight lattice modulo the null roots.)
The action of $W$ on $L$ is then described as
\begin{equation}\label{s-on-L}
s_i(\Lambda_j)=\Lambda_j\ (i\ne j), \quad
s_i(\Lambda_i)=\Lambda_i-\sum_{k\in I}\Lambda_k a_{ki}
\end{equation}
for $i,j\in I$.
We remark that formulas (\ref{s-on-tau}) in Theorem \ref{thmB} are
a multiplicative analogue of $(\ref{s-on-L})$ {\em except for the
factor} $f_j$.
Let us introduce the notation of formal exponentials for $\tau$-functions:
\begin{equation}
\tau^\lambda=\prod_{i\in I} \tau_i^{\lambda_i}\quad\text{for each}\quad
\lambda=\sum_{i\in I}\lambda_i \Lambda_i\in L,
\end{equation}
where $\lambda_i=\br{\alpha_i^\vee,\lambda}$.
In order to clarify the meaning of Theorem \ref{thmB},
we consider the action of each element
$w\in W$ on $\tau^\lambda$ for $\lambda\in L$.
Suppose now that the action of $W$ on ${\Bbb C}(\alpha;f)$ can be
extended to ${\Bbb C}(\alpha;f;\tau)$ as described in Theorem \ref{thmB}.
Since formulas (\ref{s-on-tau}) read as
$s_i(\tau^{\Lambda_j})= f_j^{\delta_{ij}} \,\tau^{s_i(\Lambda_j)}$
for $j\in I$,
we have by linearity
\begin{equation}
s_i(\tau^{\lambda})=
f_i^{\lambda_i} \,\tau^{s_i(\lambda)}
\end{equation}
for each $\lambda\in L$.
Hence, for each $w\in W$, we should have rational functions
$\phi_w(\lambda)\in {\Bbb C}(\alpha;f)$ indexed by $\lambda\in L$ such that
\begin{equation}\label{coboundary}
w(\tau^\lambda)=\phi_w(\lambda)\, \tau^{w.\lambda}\quad(w\in W, \lambda\in L).
\end{equation}
Furthermore, these functions $\phi_w(\lambda)$ should satisfy the
following {\em cocycle condition}:
\begin{equation}\label{cocycle}
\phi_{w_1w_2}(\lambda)=w_1(\phi_{w_2}(\lambda))\, \phi_{w_1}({w_2}.\lambda)
\end{equation}
for all $w_1,w_2\in W$ and $\lambda\in L$.
Conversely, if one has a family $(\phi_w(\lambda))_{w\in W, \lambda\in L}$
of rational functions
satisfying the cocycle condition (\ref{cocycle}),
one can define a representation of $W$ on ${\Bbb C}(\alpha;f;\tau)$ by means of
(\ref{coboundary}).
Theorem \ref{thmB} is thus equivalent to the following proposition.
\begin{prop}\label{prop-cocycle}
Under the same assumption of Theorem \ref{thmB}, there exists a
unique cocycle $\phi=(\phi_w(\lambda))_{w\in W, \lambda\in L}$ such that
\begin{equation}
\phi_{1}(\lambda)=1,\quad \phi_{s_i}(\lambda)=f_i^{\br{\alpha_i^\vee,\lambda}}
\quad(\lambda\in L)
\end{equation}
for each $i\in I$.
\end{prop}
\begin{rem}
Any family $\{\phi_w(\lambda)\}_{w\in W, \lambda\in L}$
of rational functions in ${\Bbb C}(\alpha;f)$ can be identified with
a mapping
\begin{equation}\label{cochain}
\phi: W\to \operatorname{Hom}_{\Bbb Z}(L,{\Bbb C}(\alpha;f)^\times):
w \mapsto \phi_w,
\end{equation}
where ${\Bbb C}(\alpha;f)^\times$ stands for the multiplicative group of
${\Bbb C}(\alpha;f)$ regarded as a ${\Bbb Z}$-module.
The cocycle condition (\ref{cocycle}) is then equivalent to
saying that the mapping $\phi$ of (\ref{cochain}) is a
{\em Hochschild 1-cocycle}
of $W$ with respect to the natural $W$-bimodule structure of
$\operatorname{Hom}_{\Bbb Z}(L,{\Bbb C}(\alpha;f)^\times)$.
Furthermore, formula (\ref{coboundary}) means that
this cocycle $\phi$ becomes the coboundary of the 0-cochain
\begin{equation}
\tau \in \operatorname{Hom}_{\Bbb Z}(L,{\Bbb C}(\alpha;f;\tau)^\times):
\lambda\mapsto \tau^\lambda
\end{equation}
after the extension of the $W$-module ${\Bbb C}(\alpha;f)$
to ${\Bbb C}(\alpha;f;\tau)$.
Thus one could say that: {\em The role of $\tau$-functions is to
trivialize the Hochschild 1-cocycle defined by the
$f$-variables.}
From the cocycle condition, it follows that
the cocycle $\phi_w : L\to {\Bbb C}(\alpha;f)^\times$
of Proposition \ref{prop-cocycle}
can be expressed as
\begin{equation}
\phi_w(\lambda)=\prod_{r=1}^p s_{j_1}\cdots s_{j_{r-1}}
(f_{j_r})^{\br{\alpha^\vee_{j_r},\,
s_{j_{r+1}}\ldots s_{j_p}.\lambda}}
\quad(\lambda\in L)
\end{equation}
for any expression $w=s_{j_1}\ldots s_{j_p}$ of $w$ in terms of generators.
\end{rem}
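For instance, for a word of length two, $w=s_is_j$ with $i\ne j$, the cocycle condition (\ref{cocycle}) and Proposition \ref{prop-cocycle} give

```latex
\begin{equation}
\phi_{s_is_j}(\lambda)
 = s_i\big(\phi_{s_j}(\lambda)\big)\,\phi_{s_i}(s_j.\lambda)
 = \Big(f_j+\frac{\alpha_i}{f_i}\,u_{ij}\Big)^{\br{\alpha^\vee_j,\lambda}}
   f_i^{\br{\alpha^\vee_i,\lambda}-a_{ij}\br{\alpha^\vee_j,\lambda}},
\end{equation}
```

since $s_j.\lambda=\lambda-\br{\alpha^\vee_j,\lambda}\alpha_j$ and hence $\br{\alpha^\vee_i,s_j.\lambda}=\br{\alpha^\vee_i,\lambda}-a_{ij}\br{\alpha^\vee_j,\lambda}$, in agreement with the product formula of the preceding remark for $p=2$.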
\medskip
The cocycle $\phi=(\phi_w(\lambda))_{w\in W,\lambda\in L}$
defined above plays a crucial role in application of our
representation to discrete dynamical systems.
One remarkable property of this cocycle is that $\phi$ seems to have
a very strong {\em regularity}, as described in the following conjecture.
\begin{conj}\label{conj-cocycle}
In addition to conditions $(1)$ and $(2)$ of Theorem \ref{thmB},
suppose that the matrix $U=(u_{ij})_{i,j\in I}$ satisfies the condition
\begin{quote}
$(3')$ $\quad u_{ij}a_{ji}+a_{ij}u_{ji}=0$ \quad for all $i,j\in I$.
\end{quote}
Then, for any $k\in I$, the rational functions $\phi_w(\Lambda_k)$ $(w\in W)$
of $(\ref{coboundary})$ are polynomials in
$\alpha_j,f_j$ and $u_{ij}$ $(i,j\in I)$ with coefficients in ${\Bbb Z}$.
\end{conj}
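The smallest instance can be tested symbolically. The sketch below (assuming sympy; finite type $A_2$ with $u_{12}=1=-u_{21}$, which satisfies $(3')$ since $u_{12}a_{21}+a_{12}u_{21}=-1+1=0$) evaluates the cocycle along $w=s_1s_2s_1$ by the product formula of the remark above, and the rational factors indeed cancel to a polynomial:

```python
# The cocycle phi_w(Lambda_1) for w = s1 s2 s1 in finite type A2, with
# u12 = 1, u21 = -1; this U satisfies (3'): u12*a21 + a12*u21 = -1 + 1 = 0.
from sympy import symbols, cancel

a1, a2, f1, f2 = symbols('alpha1 alpha2 f1 f2')
S = {1: {a1: -a1, a2: a2 + a1, f1: f1, f2: f2 + a1/f1},
     2: {a1: a1 + a2, a2: -a2, f1: f1 - a2/f2, f2: f2}}
F = {1: f1, 2: f2}

def act(word, expr):
    # A word [j_1, ..., j_p] acts by applying the rightmost letter first.
    for i in reversed(word):
        expr = expr.subs(S[i], simultaneous=True)
    return expr

def refl(i, lam):
    # Action of s_i on lambda = l1*Lambda_1 + l2*Lambda_2 in L, cf. (s-on-L).
    l1, l2 = lam
    return (-l1, l1 + l2) if i == 1 else (l1 + l2, -l2)

def phi(word, lam):
    # Product formula along the word: the r-th factor is
    # s_{j_1}...s_{j_{r-1}}(f_{j_r}) raised to the power
    # <alpha_{j_r}^vee, s_{j_{r+1}}...s_{j_p}.lambda>.
    res = 1
    for r, i in enumerate(word):
        mu = lam
        for j in reversed(word[r + 1:]):
            mu = refl(j, mu)
        res *= act(word[:r], F[i]) ** mu[i - 1]
    return cancel(res)

# The rational factors cancel: phi_{s1 s2 s1}(Lambda_1) is a polynomial.
p = phi([1, 2, 1], (1, 0))
assert cancel(p - (f1*f2 - a2)) == 0
```

Similarly, $\phi_{s_2s_1}(\Lambda_1)$ evaluates to the same polynomial $f_1f_2-\alpha_2$, while $\phi_{s_1s_2}(\Lambda_1)=f_1$.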
\begin{rem}
In Sections 1 and 2, we presented a nontrivial class of representations
of Coxeter groups $W(A)$ over the fields of $f$-variables and
$\tau$-functions, with $A$ being a generalized Cartan matrix.
This class of representations appears in fact
as {\em B\"acklund transformations}
(or the {\em Schlesinger transformations}) of the
Painlev\'e equations $P_{\,\text{IV}}, P_{\,\text{V}}$ and $P_{\,\text{VI}}$,
which correspond to the cases of the generalized Cartan matrices $A$ of
type $A^{(1)}_2$, $A^{(1)}_3$ and $D^{(1)}_4$, respectively.
As to these Painlev\'e equations, one can define appropriate
$f$-variables and $\tau$-functions
for which the B\"acklund transformations are described as in
Theorems \ref{thmA} and \ref{thmB} (see \cite{NY1}, \cite{NY3}).
(In the cases of $P_{\,\text{II}}$ and $P_{\,\text{III}}$,
which have symmetries of type $A^{(1)}_1$ and $C^{(1)}_2$,
the corresponding representations of the affine Weyl groups
on $f$-variables must be modified appropriately, while the
multiplicative formulas in terms of $\tau$-functions keep the same structure.)
In the context of B\"acklund transformations of Painlev\'e equations,
the functions $\phi_w(\Lambda_k)$, specialized to certain particular
solutions, give rise to the special polynomials, called {\em Umemura
polynomials} (see \cite{U},\cite{NOOU}), which are defined
to be the main factors of $\tau$-functions for algebraic solutions
of the Painlev\'e equations.
For this reason, we expect that the functions $\phi_w(\Lambda_k)$
($w\in W, k\in I$) should supply an ample generalization of Umemura
polynomials in terms of root systems.
\end{rem}
\section{Affine Weyl groups and discrete dynamical systems}
In what follows, we assume that the generalized Cartan matrix $A$ is
indecomposable and is {\em of affine type}.
We use the standard notation for the indexing set,
$I=\{0,1,\ldots,l\}$, so that $\alpha_1,\ldots,\alpha_l$
form a basis of the corresponding finite root system.
Recall that the null root $\delta$ is expressed as
$\delta=a_0\alpha_0+a_1\alpha_1+\cdots+a_l\alpha_l$ with
certain positive integers $a_0,a_1,\ldots,a_l$.
The affine Weyl group $W=W(A)$ is generated by the
fundamental reflections $s_0,s_1,\ldots,s_l$ with respect to the
simple roots $\alpha_0,\alpha_1,\ldots,\alpha_l$:
\begin{equation}
W=W(A)=\br{s_0,\ldots,s_l}.
\end{equation}
One important aspect of the affine case is that the affine Weyl group
$W=W(A)$ has an alternative description as the semi-direct product of
a free ${\Bbb Z}$-submodule $M$ of rank $l$
of ${\overset\circ{\frak h}}_{{\Bbb R}}=\bigoplus_{i=1}^l {\Bbb R} \alpha_i^\vee$,
and
the finite Weyl group $W_0$ acting on $M$:
\begin{equation}\label{MxW0}
W\ \overset{\sim}{\leftarrow}\ M\rtimes W_0 \quad\text{with}\quad W_0
=\br{s_1,\ldots,s_l}.
\end{equation}
For each element $\mu\in M$, we denote by $t_{\mu}$ the corresponding
element of the affine Weyl group $W$, so that $t_{\mu+\nu}=t_\mu t_\nu$
for all $\mu,\nu\in M$.
Note that the structure of the {\em lattice part} $M$ depends on the
type of the affine root system and that, if $A$ is nontwisted, i.e., of type
$X^{(1)}_l$, then $M$ is identified with the coroot lattice
$\overset{\circ}{Q}{}^\vee$
of the finite root system with basis $\{\alpha_1,\ldots,\alpha_l\}$.
(There are descriptions analogous to (\ref{MxW0})
for certain {\em extended} affine Weyl groups
$\widetilde{W}=W\rtimes\Omega$ as well.)
As we already remarked, the field
${\Bbb C}(\alpha)={\Bbb C}(\alpha_0,\alpha_1,\ldots,\alpha_l)$
has a natural structure of $W$-module.
The lattice part $M$ in the decomposition (\ref{MxW0}) acts
on ${\Bbb C}(\alpha)$ by
\begin{equation}
t_\mu(\alpha_j)=\alpha_j-\br{\mu,\alpha_j}\delta\quad(j=0,1,\ldots,l;\mu\in M)
\end{equation}
as shift operators with respect to the simple affine roots.
(The null root $\delta$ is a $W$-invariant element of ${\Bbb C}(\alpha)$.
For this reason, it is sometimes more
convenient to consider $\delta$ to be a nonzero constant
which represents the scaling of the lattice $M$.)
\medskip
Suppose now that one has extended the action of $W$ from
${\Bbb C}(\alpha)$ to ${\Bbb C}(\alpha;f)={\Bbb C}(\alpha)(f_0,f_1,\ldots,f_l)$.
At this point, we may consider an arbitrary extension ${\Bbb C}(\alpha;f)$
as a $W$-module, assuming that each element of $W$ acts on the function
field as an automorphism; the representation of $W$
presented in Sections 1 and 2 provides a choice of such an extension.
For each $\nu\in M$,
we define a family of rational functions
$F_{\nu j}(\alpha;f)\in {\Bbb C}(\alpha; f)$ by
\begin{equation}\label{disc-time-ev}
t_\nu(f_j)=F_{\nu j}(\alpha;f)\quad (j=0,1,\ldots,l).
\end{equation}
Then these formulas can already be considered as a
{\em discrete dynamical system},
defined by a set of commuting discrete time evolutions.
In other words, we obtain a commuting family of rational mappings
on the affine space,
where the $\alpha_j$ and the $f_j$ play the roles of coordinates
for the discrete time variables and for the dependent variables,
respectively.
To make clear the meaning of (\ref{disc-time-ev}) as a difference system,
we set
\begin{equation}
\alpha_j[\mu]=t_\mu(\alpha_j)=\alpha_j-\br{\mu,\alpha_j}\delta,
\quad f_j[\mu]=t_\mu(f_j)\quad
(j=0,\ldots,l)
\end{equation}
for each $\mu\in M$,
and consider them as representing functions on $M$ with initial values
$\alpha_j[0]=\alpha_j, f_j[0]=f_j$ ($j=0,\ldots,l$).
Then formulas (\ref{disc-time-ev}) imply that
\begin{equation}\label{disc-time-ev2}
f_j[\mu+\nu]=
F_{\nu j}(\alpha[\mu];f[\mu])\quad(j=0,1,\ldots,l).
\end{equation}
In this sense, the functions $F_{\nu j}(\alpha;f)$ defined above provide a
difference dynamical system on the lattice $M$.
Since $f_j[\mu]$ is a rational function
in $f_0,\ldots,f_l$ for each $\mu\in M$,
the general solution of the difference system
(\ref{disc-time-ev2}) {\em a priori\/} depends rationally
on initial values $f_0,f_1,\ldots,f_l$.
Note also that the action of the affine Weyl group $W$ on $f_j[\nu]$
is described as
\begin{equation}
(w.f_j)[\mu]=w(f_j[w^{-1}\mu])\quad(j=0,\ldots,l; \mu\in M)
\end{equation}
for all $w\in W$.
In this sense, our difference system admits the action
of the affine Weyl group $W(A)$.
Note that, if one takes the representation of Theorem \ref{thmA},
one has
\begin{equation}
(s_i.f_j)[\mu]
=f_j[\mu]+\frac{\alpha_i[\mu]}{f_i[\mu]}u_{ij}
\end{equation}
for $i,j=0,\ldots,l$.
Suppose that one can extend the action of $W$ further to the $\tau$-functions
as in Theorem \ref{thmB} and set
$\tau_i[\mu]=t_\mu(\tau_i)$, regarding $\tau_i[0]=\tau_i$ as initial
values of the $\tau$-functions.
Then from (\ref{f-by-tau}) we obtain the multiplicative formulas
\begin{equation}
f_j[\mu]=\frac{\tau_j[\mu] \ s_{\alpha_j[\mu]}(\tau_j[\mu])}
{{\prod}_{ i\in I\backslash\{j\} }\, \tau_i[\mu]^{|a_{ij}|}}
\qquad(j=0,\ldots,l)
\end{equation}
for the $f$-variables in terms of $\tau$-functions.
In terms of the cocycle $\phi$, these formulas are rewritten
by (\ref{coboundary}) into
\begin{equation}
f_j[\mu]=\frac{\phi_{t_\mu}(\Lambda_j)\ \phi_{t_\mu s_j}(\Lambda_j)}
{{\prod}_{ i\in I\backslash\{j\}}\, \phi_{t_\mu}(\Lambda_i)^{|a_{ij}|}}
\qquad(j=0,\ldots,l),
\end{equation}
which give a complete description of the general solution of the
difference system (\ref{disc-time-ev}) in terms of the initial
values $f_0,\ldots,f_l$.
In this sense, the cocycle $\phi$ {\em solves}
our difference system (\ref{disc-time-ev}).
It should be noted that all these properties of the difference system
(\ref{disc-time-ev}), or (\ref{disc-time-ev2}) equivalently,
are already guaranteed when we take the representation of the affine
Weyl group $W(A)$ as in Theorem \ref{thmB}.
It would also be meaningful to find other types of representations
of affine Weyl groups which have the properties of Theorem \ref{thmB}.
\medskip
We now take the representation of the affine Weyl group
$W(A)$ on ${\Bbb C}(\alpha;f;\tau)$ introduced in Theorem \ref{thmB}.
One interesting feature of our representation is that
{\em continued fractions} arise naturally in the description
of discrete dynamical systems, and that the structure of continued
fractions is determined by the affine root system.
We assume for simplicity that the generalized Cartan matrix $A$ is
of type $X^{(1)}_l$.
For a given element $w\in W(A)$, take a reduced decomposition
$w=s_{i_1}s_{i_2}\cdots s_{i_p}$ of $w$, and define the affine
roots $\beta_1,\beta_2,\cdots,\beta_p$ by
\begin{equation}
\beta_1=\alpha_{i_1}, \ \ \beta_2=s_{i_1}(\alpha_{i_2}), \ \
\ldots, \ \ \beta_p=s_{i_1}\cdots s_{i_{p-1}}(\alpha_{i_p}).
\end{equation}
Note that these $\beta_r$ ($r=1,\ldots,p$) give precisely the set
of all positive real roots whose reflection hyperplanes separate
the fundamental alcove $C$ and its image $w.C$ by $w$.
Since the action of $s_i$ on $f_j$ is given by
\begin{equation}
s_i(f_j)=f_j+\frac{\alpha_i}{f_i} u_{ij}\qquad(i,j=0,\ldots,l),
\end{equation}
we have inductively
\begin{equation}\label{w-on-f}
w(f_j)= f_j + \frac{\alpha_{i_1}}{f_{i_1}}u_{i_1j}
+s_{i_1}\big(\frac{\alpha_{i_2}}{f_{i_2}}\big)u_{i_2j}
+\cdots+s_{i_1}\cdots s_{i_{p-1}}
\big(\frac{\alpha_{i_p}}{f_{i_p}}\big)u_{i_pj}.
\end{equation}
Each summand of this expression is given by the continued
fraction
\begin{equation}
s_{i_1}\cdots s_{i_{r-1}}\big(\frac{\alpha_{i_r}}{f_{i_r}}\big)
=\cfrac{\beta_r}{f_{i_r}+
u_{i_{r-1}i_{r}}\cfrac{\beta_{r-1}}{f_{i_{r-1}}+
u_{i_{r-2}i_{r-1}}\cfrac{\beta_{r-2}}{\;\ddots\;
+u_{i_1i_2}\cfrac{\beta_{1}}{f_{i_1}}}}}
\end{equation}
along the reduced decomposition $w=s_{i_1}\cdots s_{i_p}$.
Note also that formula (\ref{w-on-f}) for $w(f_j)$ has an alternative
expression
\begin{equation}
w(f_j)=\frac{\phi_{w}(\Lambda_j)\ \phi_{ws_j}(\Lambda_j)}
{\prod_{i\in I\backslash\{j\}} \phi_w(\Lambda_i)^{|a_{ij}|}}
\end{equation}
in terms of the cocycle $\phi$, which is implied by
(\ref{f-by-tau}) and (\ref{coboundary}).
If one takes an element $\nu\in M=\overset{\circ}{Q}{}^\vee$ of the coroot lattice,
the rational functions $t_{\nu}(f_j)=F_{\nu j}(\alpha;f)$
$(j=0,\ldots,l)$ for the
time evolution with respect to $\nu$ are
determined in the form
\begin{equation}
F_{\nu j}(\alpha;f)=f_j+\sum_{r=1}^p
s_{i_1}\cdots s_{i_{r-1}}\big(\frac{\alpha_{i_r}}{f_{i_r}}\big) u_{i_rj}
\end{equation}
as a sum of continued fractions
along the reduced decomposition of $t_{\nu}$,
with positive real roots separating the fundamental alcove $C$ and
its translation $C+\nu$.
We remark that a similar description of the rational functions
$F_{\nu j}(\alpha;f)$ can be given also for the cases of
extended affine Weyl groups $\widetilde{W}=W\rtimes \Omega$.
A series of such discrete dynamical systems
will be given in the next section.
\section{Discrete dynamical system of type $A^{(1)}_l$}
As an example of our discrete dynamical systems associated with
affine root systems, we will give an explicit description of the
case of $A^{(1)}_l$ with $l\ge 2$.
Consider the generalized Cartan matrix
\begin{equation}
A=\begin{pmatrix}
2 & -1 & 0 &\cdots &0 & -1 \\
-1& 2 & -1 &\cdots &0 & 0 \\
0 & -1 & 2 &\cdots &0 & 0 \\
\vdots & \vdots & \vdots &\ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 2 &-1 \\
-1& 0 & 0 &\cdots & -1 & 2
\end{pmatrix}
\end{equation}
of type $A^{(1)}_l$ ($l\ge 2$), and
identify the indexing set $\{0,1,\ldots,l\}$ with ${\Bbb Z}/(l+1){\Bbb Z}$.
We take the following matrix of ``orientation'' to specify our
representation of $W=W(A^{(1)}_l)$:
\begin{equation}\label{AU}
U=\begin{pmatrix}
0 & 1 & 0 &\cdots &0 & -1 \\
-1& 0 & 1 &\cdots &0 & 0 \\
0 & -1 & 0 &\cdots &0 & 0 \\
\vdots & \vdots & \vdots &\ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & 1 \\
1& 0 & 0 &\cdots & -1 & 0
\end{pmatrix}.
\end{equation}
Then the action of the affine Weyl group $W=\br{s_0,\ldots,s_l}$ on the
variables $\alpha_j$, $f_j$ and $\tau_j$ is given explicitly as follows:
\begin{equation}\label{WAl1}
\begin{array}{lllll}
s_i(\alpha_i)=-\alpha_i,
&s_i(\alpha_{j})=\alpha_j+\alpha_i&(j=i\pm 1),
&s_i(\alpha_{j})=\alpha_j&(j\ne i,i\pm1 ) \\
s_i(f_i)=f_i,
&s_i(f_j)=\displaystyle{f_j\pm\frac{\alpha_i}{f_i}}&(j=i\pm 1),
&s_i(f_j)=f_j&(j\ne i,i\pm1) \\
s_i(\tau_i)=\displaystyle{f_i \,\frac{\tau_{i-1}\tau_{i+1}}{\tau_i}},
&s_i(\tau_j)=\tau_j &(j\ne i)
\end{array}
\end{equation}
Note that $U$ is invariant with respect to the diagram rotation
$\pi: i\to i+1$.
Hence this action of $W$ extends to
the extended affine Weyl group
$\widetilde{W}=W\rtimes\{1,\pi,\ldots,\pi^l\}$ by
\begin{equation}\label{WAl2}
\pi(\alpha_j)=\alpha_{j+1},\quad \pi(f_j)=f_{j+1},\quad \pi(\tau_j)=\tau_{j+1}.
\end{equation}
The group $\widetilde{W}$ is now
isomorphic to $\overset{\circ}{P}\rtimes W_0$,
where $\overset{\circ}{P}$ is the weight lattice of the finite root
system of type $A_l$ and $W_0=\br{s_1,\ldots,s_l}\simeq {\frak S}_{l+1}$.
Taking the first fundamental weight
$\varpi_1=(l\alpha_1+(l-1)\alpha_2+\cdots+\alpha_l)/(l+1)$
of the finite root system, we set
\begin{equation}
T_1=t_{\varpi_1},\quad T_i=\pi T_{i-1}\pi^{-1}\quad (i=2,\ldots,{l+1}).
\end{equation}
These {\em shift operators\/} are expressed as
\begin{equation}
T_1=\pi s_ls_{l-1}\cdots s_1, \ \
T_2=s_1\pi s_l\cdots s_2,\ \ \ldots,\ \ T_{l+1}=s_l\cdots s_1 \pi
\end{equation}
in terms of the generators of $\widetilde{W}$.
Note that $T_1\cdots T_{l+1}=1$ and that
$T_1,\ldots,T_l$ form
a basis for the lattice part of $\widetilde{W}$.
The simple affine roots $\alpha_0,\ldots,\alpha_l$ are the dynamical variables
for the shift operators $T_1,\ldots,T_{l}$ such that
\begin{equation}
T_i(\alpha_{i-1})=\alpha_{i-1}+\delta,\quad
T_i(\alpha_{i})=\alpha_i-\delta, \quad
T_i(\alpha_j)=\alpha_j \ \ (j\ne i-1,i).
\end{equation}
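For $l=2$ these formulas can be verified directly; the sketch below (assuming sympy; the realization of each generator as a substitution map is an illustrative choice) realizes $T_1=\pi s_2s_1$ by composing the substitutions (\ref{WAl1}) and (\ref{WAl2}), and checks its action on the simple affine roots and on $f_2$:

```python
# T1 = pi s2 s1 for type A_2^{(1)} (l = 2); indices are taken mod 3.
from sympy import symbols, simplify

a = symbols('alpha0 alpha1 alpha2')
f = symbols('f0 f1 f2')
delta = a[0] + a[1] + a[2]    # the null root delta

def s(i):
    # The reflection s_i of (WAl1), as a substitution dictionary.
    j, k = (i + 1) % 3, (i - 1) % 3
    return {a[i]: -a[i], a[j]: a[j] + a[i], a[k]: a[k] + a[i],
            f[j]: f[j] + a[i]/f[i], f[k]: f[k] - a[i]/f[i]}

# The diagram rotation pi of (WAl2).
pi = {x[i]: x[(i + 1) % 3] for x in (a, f) for i in range(3)}

def act(word, expr):
    # A word g_1 g_2 ... g_p acts by applying the rightmost factor first.
    for g in reversed(word):
        expr = expr.subs(g, simultaneous=True)
    return expr

T1 = [pi, s(2), s(1)]          # T1 = pi s2 s1

# Action on the simple affine roots: alpha_0 -> alpha_0 + delta, etc.
assert simplify(act(T1, a[0]) - (a[0] + delta)) == 0
assert simplify(act(T1, a[1]) - (a[1] - delta)) == 0
assert simplify(act(T1, a[2]) - a[2]) == 0

# Last line of (dAl) for l = 2: T1(f_2) = f_0 + g_{2,1},
# where g_{2,1} = s_0(alpha_2/f_2).
g21 = (a[2] + a[0]) / (f[2] - a[0]/f[0])
assert simplify(act(T1, f[2]) - (f[0] + g21)) == 0
```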
For each $k\in {\Bbb Z}/(l+1){\Bbb Z}$ and $r=0,1,\ldots,l-1$, we define
$g_{k,r}$ to be the continued fraction
\begin{equation}
\begin{aligned}
g_{k,r}&=s_{k+r}s_{k+r-1}\cdots s_{k+1}\big(\frac{\alpha_k}{f_k}\big)\\
&=\contfrac{\alpha_k+\cdots+\alpha_{k+r}}{f_k}-
\contfrac{\alpha_{k+1}+\cdots+\alpha_{k+r}}{f_{k+1}}-
\cdots-\contfrac{\alpha_{k+r}}{f_{k+r}}.
\end{aligned}
\end{equation}
Then the discrete time evolution by $T_1$ is expressed as
\begin{equation}\label{dAl}
\begin{aligned}
T_1(f_0)&=f_1-g_{2,l-1}+g_{0,0},\\
T_1(f_1)&=f_2-g_{3,l-2},\\
T_1(f_2)&=f_3-g_{4,l-3}+g_{2,l-1},\\
&\;\cdots\\
T_1(f_{l-1})&=f_l-g_{0,0}+g_{l-1,2},\\
T_1(f_l)&=f_0+g_{l,1}.
\end{aligned}
\end{equation}
The corresponding formulas for $T_2,\ldots, T_l$ are obtained
from these by applying the diagram rotation $\pi$.
\section{Nonlinear systems with affine Weyl group symmetry}
As we already remarked in Section 2,
the representation of $W(A)$ introduced in Theorems \ref{thmA} and \ref{thmB},
for the cases $A^{(1)}_2$, $A^{(1)}_3$ and $D^{(1)}_4$,
arises in nature as B\"acklund transformations of Painlev\'e equations
$P_{\,\text{IV}}, P_{\,\text{V}}$ and $P_{\,\text{VI}}$,
respectively.
Hence, the Painlev\'e equations
$P_{\,\text{IV}}, P_{\,\text{V}}$ and $P_{\,\text{VI}}$
have the structure of discrete dynamical systems on the lattice
as described in Section 3 with respect to B\"acklund transformations.
As for $P_{\,\text{IV}}$, this point has been discussed in detail in our
previous paper \cite{NY1}.
``Symmetric forms'' of all Painlev\'e equations
$P_{\text{II}},\ldots, P_{\,\text{VI}}$ and their B\"acklund transformations
will be discussed in our forthcoming paper \cite{NY3}.
\medskip
From the viewpoint of nonlinear equations of Painlev\'e type,
an important problem would be the following:
\begin{prob}\label{prob}
For each affine root system $($or for each generalized Cartan matrix $A$,
in general$)$, find a system of differential $($or difference$)$ equations
for which the Coxeter group $W=W(A)$ acts as B\"acklund transformations.
\end{prob}
\noindent
We believe that such differential (or difference) systems
with affine Weyl group symmetry
should provide an intriguing class of dynamical systems with
rich mathematical structures, to be compared to Painlev\'e equations.
We also remark that, if one specifies the representation of $W=W(A)$
in advance as in Theorem \ref{thmB}, then
the problem mentioned above is equivalent to finding such
derivations (or shift operators) on ${\Bbb C}(\alpha;f)$ and
${\Bbb C}(\alpha;f;\tau)$
that commute with the action of $W(A)$.
In this section, we will introduce some examples of type $A^{(1)}_l$
of difference and differential systems with affine Weyl group
symmetry, as well as remarks on the continuum limit
from the difference to the differential systems.
\medskip
We first explain a general idea to construct
difference systems with affine Weyl group symmetry
by means of our discrete dynamical systems associated with
affine root systems.
Consider the
discrete dynamical system defined by an affine root system
as in Section 3.
If we take a sublattice $N\subset M$ of rank $r$,
then the centralizer $Z_{W(A)}(N)$ of $N$ in $W$ gives rise to a
group of B\"acklund transformations of the discrete system
\begin{equation}\label{disc-time-ev-r}
t_\nu(f_j)=F_{\nu j}(\alpha;f)\quad(j=0,\ldots,l)
\end{equation}
on the sublattice $N$ of rank $r$, with $\alpha_j$ ($j=0,\ldots,l)$
regarded as functions on $N$ such that
$t_\nu(\alpha_j)=\alpha_j-\br{\nu,\alpha_j}\delta$.
The centralizer $Z_{W(A)}(N)$ contains in fact
subgroups generated by reflections acting on the quotient $M/N$.
For instance, let $W_{M/N}$ be the group generated by
the reflections $s_\alpha$ with respect to the affine roots $\alpha$
that are perpendicular to the lattice $N$.
Then $W_{M/N}$ is contained in the group of B\"acklund
transformations of the discrete system (\ref{disc-time-ev-r}).
(The group of symmetry thus obtained
may have a different structure from that of our
representations of Sections 1 and 2.)
For example, the difference system (\ref{dAl}) with respect to
the shift operator $T_1$ has symmetry under the affine Weyl
group $W(A^{(1)}_{l-1})=\br{r,s_2,\ldots,s_l}$, where
$r=s_0 s_1 s_0$.
The corresponding simple affine roots are given by
$\alpha_0+\alpha_1, \alpha_2,\ldots,\alpha_l$.
Note that the root $\alpha_0+\alpha_1$ is invariant under $T_1$.
The reflection $r$ acts on the variables $f_j$ as follows:
\begin{equation}
\begin{aligned}
&r(f_1)=f_1+\frac{\alpha_0+\alpha_1}{s_1(f_0)},
\quad
r(f_2)=f_2+\frac{\alpha_0+\alpha_1}{s_0(f_1)},
\\
&r(f_l)=f_l-\frac{\alpha_0+\alpha_1}{s_1(f_0)},
\quad
r(f_0)=f_0-\frac{\alpha_0+\alpha_1}{s_0(f_1)},
\\
&r(f_j)=f_j \qquad(j=3,\ldots,l-1).
\end{aligned}
\end{equation}
We remark
that the two elements $s_1(f_0)$ and $s_0(f_1)= T_1 s_1(f_0)$,
as well as $f_3,\ldots,f_{l-1}$,
are invariant under the action of $r$.
Similarly, the difference system with respect to the commuting
operators $T_1,\ldots, T_k$ has affine Weyl group symmetry under the
subgroup $W(A^{(1)}_{l-k})=\br{r,s_{k+1},\ldots,s_{l}}$ where
$r=s_0s_1\cdots s_{k-1} s_k s_{k-1}\cdots s_0$.
\medskip
If one can take an appropriate continuum limit of
the sublattice $N$ inside $M$, one would possibly obtain
a differential system in $r$ variables
whose group of B\"acklund transformations contains
a reasonable reflection group.
We now show, in some detail, an example in which the idea explained
above works nicely.
Consider the discrete dynamical system of type $A^{(1)}_2$
with extended affine Weyl group $\widetilde{W}=W\rtimes \Omega$
as in Section 4.
We take the element $T=T_1=\pi s_2 s_1\in \widetilde{W}$ which represents
the translation $t_{\varpi_1}$ with respect to the first fundamental
weight $\varpi_1=(2\alpha_1+\alpha_2)/3$ of the finite root system, so that
\begin{equation}
T(\alpha_0)=\alpha_0+\delta,\quad T(\alpha_1)=\alpha_1-\delta,
\quad T(\alpha_2)=\alpha_2.
\end{equation}
Note that $T$ can be considered as a shift operator with respect to
the variable $\alpha_1$.
Our discrete dynamical system for this case is described as follows:
\begin{equation}\label{Tf0}
\begin{aligned}
T(f_0)&=f_1+\frac{\alpha_0}{f_0}-
\contfrac{\alpha_0+\alpha_1}{f_2}-\contfrac{\alpha_0}{f_0},\\
T(f_1)&=f_2-\frac{\alpha_0}{f_0},\\
T(f_2)&=f_0+\contfrac{\alpha_0+\alpha_1}{f_2}-\contfrac{\alpha_0}{f_0}
\end{aligned}
\end{equation}
in terms of continued fractions.
We remark that $T^{-1}(f_0)$ takes a simpler form than $T(f_0)$ above:
\begin{equation}\label{Tf1}
T^{-1}(f_0)=f_2+\frac{\alpha_1}{f_1}.
\end{equation}
Noticing that the element $f_0+f_1+f_2$ is invariant under the
action of $\widetilde{W}$, we set $f_0+f_1+f_2=c$.
Then from (\ref{Tf0}) and (\ref{Tf1})
we obtain the following equivalent form of our difference system:
\begin{equation}\label{dP20}
T^{-1}(f_0)+f_0=c-f_1+\frac{\alpha_1}{f_1},\quad
f_1+T(f_1)=c-f_0-\frac{\alpha_0}{f_0}.
\end{equation}
With the notation $f_i[n]=T^n(f_i)$ for $n\in{\Bbb Z}$, this equation gives rise
to a representation of the {\em second discrete Painlev\'e equation\/}
$dP_{\,\text{II}}$ (\cite{SKGHR},\cite{S}):
\begin{equation}\label{dP2}
\begin{aligned}
f_0[n-1]+f_0[n]&=c-f_1[n]+\frac{\alpha_1-n\delta}{f_1[n]},\\
f_1[n]+f_1[n+1]&=c-f_0[n]-\frac{\alpha_0+n\delta}{f_0[n]}\quad(n\in {\Bbb Z}).
\end{aligned}
\end{equation}
Since the shift operator $T=\pi s_2 s_1$ commutes with the two reflections
$r_0=s_0s_1s_0$ and $r_1=s_2$, we see that the difference system
(\ref{dP20})
or (\ref{dP2})
has symmetry of the affine Weyl group $W(A^{(1)}_1)=\br{r_0,r_1}$.
(The corresponding simple roots are
$\beta_0=\alpha_0+\alpha_1$ and $\beta_1=\alpha_2$.)
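As a quick numerical illustration (an addition, not part of the original text), the $dP_{\,\text{II}}$ recursion above can be iterated directly: given $f_0[n-1]$ and $f_1[n]$, the first relation determines $f_0[n]$ and the second then determines $f_1[n+1]$. The parameter values and seed below are arbitrary illustrative choices.

```python
# Forward iteration of the discrete Painleve II system (dP_II):
#   f0[n-1] + f0[n] = c - f1[n] + (alpha1 - n*delta)/f1[n]
#   f1[n]  + f1[n+1] = c - f0[n] - (alpha0 + n*delta)/f0[n]
# All parameter values below are arbitrary illustrative choices.

def iterate_dp2(f0_prev, f1_0, alpha0, alpha1, c, delta, steps):
    """From (f0[-1], f1[0]) produce the lists f0[0..steps-1], f1[0..steps]."""
    f0s, f1s = [], [f1_0]
    for n in range(steps):
        f1n = f1s[-1]
        # first relation solved for f0[n]
        f0n = c - f1n + (alpha1 - n * delta) / f1n - f0_prev
        # second relation solved for f1[n+1]
        f1next = c - f0n - (alpha0 + n * delta) / f0n - f1n
        f0s.append(f0n)
        f1s.append(f1next)
        f0_prev = f0n
    return f0s, f1s
```

Both defining relations hold along the computed orbit by construction; the iteration only breaks down if some $f_i$ passes through zero.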
The second Painlev\'e equation $P_{\,\text{II}}$
arises as a continuum limit of the difference system (\ref{dP2}),
and the $A^{(1)}_1$-symmetry of (\ref{dP2}) naturally passes to
$P_{\,\text{II}}$.
Introduce a small parameter $\varepsilon$ such that $\delta=\varepsilon^3$,
and set
\begin{equation}
\begin{aligned}
&f_0[n]=1+\varepsilon \psi + \varepsilon^2 \varphi_0,\quad
f_1[n]=1-\varepsilon \psi + \varepsilon^2 \varphi_1,\quad
c=2,\\
&\alpha_0+n \delta=-1+\varepsilon^2 x+\varepsilon^3 a_0,\ \
\alpha_1-n \delta=1-\varepsilon^2 x+\varepsilon^3 a_1,\ \
\alpha_2=\varepsilon^3 b_1.
\end{aligned}
\end{equation}
Then in the limit as $\varepsilon\to 0$, the difference equations (\ref{dP2})
imply the following system of differential equations for $\varphi_0, \varphi_1,\psi$:
\begin{equation}
\begin{aligned}
&\varphi_0' = 2\varphi_0 \psi+a_0-\frac{1}{2},\quad
\varphi_1' = 2\varphi_1 \psi+a_1-\frac{1}{2},\\
&\psi'=2(\varphi_0+\varphi_1)-\psi^2+x.
\end{aligned}
\end{equation}
{}From this we get the second Painlev\'e equation for $\psi$
\begin{equation}
\psi''=2 \psi^3 - 2 x \psi -2 b_1+1,
\end{equation}
and the other dependent variables $\varphi_0, \varphi_1$ are
determined by quadrature from $\psi$.
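The reduction from the first-order system to $P_{\,\text{II}}$ can be checked by elementary algebra: differentiating $\psi'=2(\varphi_0+\varphi_1)-\psi^2+x$ and eliminating $\varphi_0+\varphi_1$ gives a polynomial identity in $\psi,\psi',\varphi_1,a_0,a_1,x$. A minimal numerical sketch (an addition, not from the original text) verifies this identity at a few sample points; the sample values are arbitrary.

```python
# Check that the first-order system for (phi0, phi1, psi) implies
#   psi'' = 2 psi^3 - 2 x psi - 2 b1 + 1,  with b0 = a0 + a1 = 1 - b1.
# The identity is polynomial, so spot checks at generic points suffice.

def ddpsi_from_system(psi, dpsi, phi1, a0, a1, x):
    # recover phi0 from psi' = 2(phi0 + phi1) - psi^2 + x
    phi0 = (dpsi + psi**2 - x) / 2.0 - phi1
    dphi0 = 2 * phi0 * psi + a0 - 0.5
    dphi1 = 2 * phi1 * psi + a1 - 0.5
    # differentiate psi' once: psi'' = 2(phi0' + phi1') - 2 psi psi' + 1
    return 2 * (dphi0 + dphi1) - 2 * psi * dpsi + 1

def p2_rhs(psi, x, b1):
    return 2 * psi**3 - 2 * x * psi - 2 * b1 + 1
```

Note that the result is independent of the auxiliary values $\psi'$ and $\varphi_1$, as it must be.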
At the same time, we obtain
the following B\"acklund transformations $r_0$ and $r_1$ for $\psi$:
\begin{equation}
r_0(\psi)=\psi-\frac{2 b_0}{\psi'-\psi^2+x},\quad
r_1(\psi)=\psi-\frac{2 b_1}{\psi'+\psi^2-x},
\end{equation}
where $b_0=a_0+a_1=1-b_1$.
The parameters $b_0, b_1$ are the simple roots for the $A^{(1)}_{1}$-symmetry
of $P_{\,\text{II}}$.
\medskip
Finally,
we present a series of differential systems with $A^{(1)}_l$-symmetry
$(l\ge 2)$, which give a generalization of the Painlev\'e equations
$P_{\,\text{IV}}$ and $P_{\,\text{V}}$.
In our previous paper \cite{NY1}, we introduced the symmetric form of the
fourth Painlev\'e equation:
\begin{equation}\label{SP4}
\begin{aligned}
&f'_0=f_0(f_1-f_2)+\alpha_0, \\
&f'_1=f_1(f_2-f_0)+\alpha_1, \\
&f'_2=f_2(f_0-f_1)+\alpha_2.
\end{aligned}
\end{equation}
This system defines in fact a derivation $'$ of the field ${\Bbb C}(\alpha;f)$
which commutes with the action of the extended affine Weyl group
$\widetilde{W}$ of type $A^{(1)}_2$ as in (\ref{WAl1}) and (\ref{WAl2}).
(Note that the convention of \cite{NY1} corresponds to the
transposition of $U$ in (\ref{AU}).)
We remark that the sum $f_0+f_1+f_2$ is invariant under $\widetilde{W}$,
and satisfies the equation $(f_0+f_1+f_2)'=\alpha_0+\alpha_1+\alpha_2=\delta$.
Introduce the independent variable $x$ so that $x'=1$, and eliminate one of
the three $f$-variables, noting that $f_0+f_1+f_2$ is a linear function of $x$.
Then the differential system above is rewritten into a system of order 2,
which is equivalent to the Painlev\'e equation $P_{\,\text{IV}}$.
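As a numerical sketch (an addition, not from the original text), the symmetric $P_{\,\text{IV}}$ system can be integrated with a fourth-order Runge-Kutta step; since the right-hand sides sum to the constant $\delta=\alpha_0+\alpha_1+\alpha_2$, the quantity $f_0+f_1+f_2$ must grow exactly linearly in the independent variable. Initial data and parameters are arbitrary.

```python
# Integrate the symmetric P_IV system
#   f0' = f0(f1-f2)+a0,  f1' = f1(f2-f0)+a1,  f2' = f2(f0-f1)+a2
# with classical RK4, and use f0+f1+f2 (slope delta) as a consistency check.

def rhs(f, a):
    f0, f1, f2 = f
    return (f0 * (f1 - f2) + a[0],
            f1 * (f2 - f0) + a[1],
            f2 * (f0 - f1) + a[2])

def rk4(f, a, h, steps):
    for _ in range(steps):
        k1 = rhs(f, a)
        k2 = rhs(tuple(fi + h / 2 * ki for fi, ki in zip(f, k1)), a)
        k3 = rhs(tuple(fi + h / 2 * ki for fi, ki in zip(f, k2)), a)
        k4 = rhs(tuple(fi + h * ki for fi, ki in zip(f, k3)), a)
        f = tuple(fi + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
                  for fi, a1, a2, a3, a4 in zip(f, k1, k2, k3, k4))
    return f
```

Because every RK4 stage sums to $\delta$, the linear invariant is preserved to machine precision regardless of the step size.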
The differential system (\ref{SP4}) has a generalization to higher orders.
For example, when $l=4$, the differential system
\begin{equation}
\begin{aligned}
&f'_0=f_0(f_1-f_2+f_3-f_4)+\alpha_0, \\
&f'_1=f_1(f_2-f_3+f_4-f_0)+\alpha_1, \\
&f'_2=f_2(f_3-f_4+f_0-f_1)+\alpha_2, \\
&f'_3=f_3(f_4-f_0+f_1-f_2)+\alpha_3, \\
&f'_4=f_4(f_0-f_1+f_2-f_3)+\alpha_4
\end{aligned}
\end{equation}
has $A^{(1)}_4$-symmetry.
Note that the sum $f_0+f_1+f_2+f_3+f_4$ is a linear function
of the independent variable $x$ such that $x'=1$
and that the system above is essentially of order 4.
In general, when $l=2n$,
the following differential system (essentially of order $2n$) turns out to
have $A^{(1)}_{2n}$-symmetry with the B\"acklund transformations
defined as in Section 4:
\begin{equation}
f_j'=f_j\sum_{1\le r\le n}\big(f_{j+2r-1}-f_{j+2r}\big)+\alpha_j
\quad (j=0,1,\ldots,2n).
\end{equation}
We remark that this differential system is obtained
as a continuum limit from the difference system with
$A^{(1)}_{2n}$-symmetry which arises from the discrete dynamical
system of type $A^{(1)}_{2n+1}$, in the manner explained above.
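As a small consistency check (an addition, not from the original text), the general $A^{(1)}_{2n}$ system, with the summand read as $f_{j+2r-1}-f_{j+2r}$ and all indices taken mod $2n+1$, reproduces the explicit $l=4$ equations at $n=2$. Sample values are arbitrary.

```python
# General A^{(1)}_{2n} right-hand side:
#   f_j' = f_j * sum_{r=1}^{n} (f_{j+2r-1} - f_{j+2r}) + alpha_j,
# indices mod 2n+1.

def general_rhs(f, alpha, n):
    m = 2 * n + 1
    return [f[j] * sum(f[(j + 2 * r - 1) % m] - f[(j + 2 * r) % m]
                       for r in range(1, n + 1)) + alpha[j]
            for j in range(m)]
```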
We also found a series of differential systems with $A^{(1)}_{2n+1}$-symmetry
($n=1,2,\ldots$) which generalize the fifth Painlev\'e equation $P_{\,\text{V}}$:
\begin{equation}\label{A2n+1}
\begin{aligned}
f_j'&=f_j\Big(
\sum_{ 1\le r\le s\le n } f_{j+2r-1}f_{j+2s}
-\sum_{ 1\le r\le s\le n} f_{j+2r}f_{j+2s+1}
\Big)\\
&\quad+\Big(\frac{\delta}{2}-\sum_{1\le r\le n}\alpha_{j+2r}\Big)f_j +
\alpha_j\Big(\sum_{1\le r\le n}f_{j+2r}\Big)
\quad (j=0,1,\ldots,2n+1),
\end{aligned}
\end{equation}
where $\alpha_0+\cdots+\alpha_{2n+1}=\delta$.
We remark that differential system (\ref{A2n+1}) is
also essentially of order $2n$,
since each of the sums $\sum_{r=0}^n f_{2r}$ and $\sum_{r=0}^n f_{2r+1}$
is determined elementarily.
The Painlev\'e equation $P_{\,\text{V}}$ is covered as the case $n=1$
(see \cite{NY3}):
\begin{equation}
\begin{aligned}
f_0'&=f_0(f_1f_2-f_2f_3)+\Big(\frac{\delta}{2}-\alpha_2\Big)f_0+\alpha_0 f_2,
\\
f_1'&=f_1(f_2f_3-f_3f_0)+\Big(\frac{\delta}{2}-\alpha_3\Big)f_1+\alpha_1 f_3,
\\
f_2'&=f_2(f_3f_0-f_0f_1)+\Big(\frac{\delta}{2}-\alpha_0\Big)f_2+\alpha_2 f_0,
\\
f_3'&=f_3(f_0f_1-f_1f_2)+\Big(\frac{\delta}{2}-\alpha_1\Big)f_3+\alpha_3 f_1,
\end{aligned}
\end{equation}
where $\alpha_0+\alpha_1+\alpha_2+\alpha_3=\delta$.
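Analogously to the $A^{(1)}_{2n}$ case, one can check numerically (an addition, not from the original text) that the general formula (\ref{A2n+1}), with all indices taken mod $2n+2$, specializes at $n=1$ to the four explicit $P_{\,\text{V}}$ equations above. Sample values are arbitrary.

```python
# General A^{(1)}_{2n+1} right-hand side of the P_V-type system:
#   f_j' = f_j ( sum_{1<=r<=s<=n} f_{j+2r-1} f_{j+2s}
#              - sum_{1<=r<=s<=n} f_{j+2r} f_{j+2s+1} )
#        + (delta/2 - sum_{r=1}^n alpha_{j+2r}) f_j
#        + alpha_j sum_{r=1}^n f_{j+2r},     indices mod 2n+2.

def pv_general_rhs(f, alpha, n):
    m = 2 * n + 2
    delta = sum(alpha)
    F = lambda k: f[k % m]
    A = lambda k: alpha[k % m]
    out = []
    for j in range(m):
        quad = (sum(F(j + 2 * r - 1) * F(j + 2 * s)
                    for r in range(1, n + 1) for s in range(r, n + 1))
                - sum(F(j + 2 * r) * F(j + 2 * s + 1)
                      for r in range(1, n + 1) for s in range(r, n + 1)))
        lin = (delta / 2 - sum(A(j + 2 * r) for r in range(1, n + 1))) * f[j]
        out.append(f[j] * quad + lin
                   + alpha[j] * sum(F(j + 2 * r) for r in range(1, n + 1)))
    return out
```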
\medskip
These two series of differential systems with affine Weyl group symmetry
can be considered as variations of the Lotka-Volterra equations and
the Bogoyavlensky lattices, including the
parameters $\alpha_0,\ldots,\alpha_l$.
Also, the structures of their B\"acklund transformations
can be described completely in terms of the discrete dynamical
systems we have introduced in this paper.
(Details will be discussed elsewhere.)
We expect that these systems of differential equations
with affine Weyl group symmetry
deserve to be studied individually from various aspects,
since they already give a candidate for
systematic generalization of
Painlev\'e equations to higher orders.
\vfill\eject
astro-ph/9804251
\section{INTRODUCTION}
The high power output from quasars is usually attributed to
the combination of a supermassive black hole with a surrounding accretion disk.
This belief is supported by a number of observations that indicate the
presence of a deep gravitational potential well or a hot gas disk
at the center of quasars or other active galactic nuclei;
e.g., measurements of stellar velocity dispersion
clearly showed a peculiar increase toward the center
(Young et al. 1978; Sargent et al. 1978;
see also Ford et al. 1994; Harms et al. 1994).
Malkan (1983) found that the optical to UV spectra
are well fitted by the standard-type accretion disk model
(Shakura \& Sunyaev 1973).
Recently, by far the best evidence of a supermassive black hole has been found
by radio observations of nuclear H$_2$O maser
sources in NGC4258 (Miyoshi et al. 1995).
Alternatively, we can infer the presence of a relativistic object from
the asymmetric Fe line profile (Tanaka et al. 1995).
These observational results are all attractive, but the immediate vicinity of
a putative black hole has still not been resolved.
Q2237+0305 (e.g., Huchra et al. 1985) is the first object
in which quasar microlensing events were detected
(Corrigan et al. 1987; Houde \& Racine 1994; see also Ostensen et al. 1996).
These observations suggest that microlensing events
take place roughly once per year.
This rather high frequency is consistent with the microlens
optical depth of $\tau \sim 0.8$ obtained by the realistic simulation
of the lensing galaxy (i.e., Wambsganss \& Paczy\'nski 1994).
We consider, here, specifically the microlensing events of this source
caused by the so-called `caustic crossings'
(see Yonehara et al. 1997 for single-lens calculations).
Several authors have already analyzed this `caustic' case
based on a simple model for quasar accretion disk
(e.g., Wambsganss \& Paczy\'nski 1991;
Jaroszy\'nski, Wambsganss \& Paczy\'nski 1992).
So far, however, only the standard-type disk,
which is too cool to emit X-rays, has been considered,
and thus the property of an X-ray microlensing of quasar, e.g., Q2237+0305,
has not been predicted.
We stress here the significance of X-ray observations to elucidate the physics
of the innermost parts of the disk, since X-rays specifically originate
from a deep potential well.
The observations allow us to assess the extension of hot regions on
scales of several AU and thus to deduce
the mass of the central massive black hole.
In this $Letter$, we propose to investigate quasar central structure
by using X-ray microlensing of Q2237+0305.
In section 2, we describe the method for resolving X-ray emission properties
of the inner disk structure on a scale down to a few AU.
In section 3, we calculate the disk spectra
and light curves during microlensing.
We here use realistic disk models:
the optically-thick, standard disk (Shakura \& Sunyaev 1973) and
the optically-thin, advection-dominated accretion flow
(ADAF, Abramowicz et al. 1995; Narayan \& Yi 1995; see also Ichimaru 1977).
\section{X-RAY MICROLENS}
Using microlensing, it is possible to probe the
structure of quasar accretion disks, and possibly to map them in detail.
Broad band photometry will be able to detect the color changes,
thereby revealing the structure of quasar accretion disks.
Fortunately, such observations do not require very high
time resolution, if the lens is far from the observer.
In the case of Q2237+0305, furthermore,
all the four images are very close, $\sim 1 {\rm arcsec}$,
to the image center of the lensing galaxy (e.g., Irwin et al. 1989).
This situation and the almost symmetric image pattern
permit us to neglect the time delay between images;
e.g., time delay between images A and B is $\sim 3 {\rm hr}$
for $H_{\rm 0}=75 {\rm km~s^{-1} Mpc^{-1}}$ (Wambsganss \& Paczy\'nski 1994).
We can thus easily discriminate intrinsic variability from microlensing;
if only one image exhibits peculiar brightening over several tens of days
superposed on intrinsic variations which the other three also show,
we can conclude that the event is due to microlensing.
On the other hand, variations caused by
superposition of other variable objects
(e.g., supernovae and cataclysmic variables) are distinguishable
in terms of their characteristic light-curve shapes
(e.g., sharp rise, recurrence, etc.).
One important length scale for lensing is the {\it Einstein-ring radius}
on the source plane, $r_{\rm E} \equiv \theta_{\rm E}D_{\rm os}$, where
$\theta_{\rm E}=[(4GM_{\rm lens}/c^2)(D_{\rm ls}/D_{\rm os}D_{\rm ol})]^{1/2}$,
$M_{\rm lens}$ is the typical mass of a lens star,
and $D_{\rm ls}$, $D_{\rm os}$, and $D_{\rm ol}$
represent the angular diameter distances from the lens to the source,
from the observer to the source, and from the observer to the lens,
respectively (see Paczy\'nski 1986). For Q2237+0305 (e.g., Irwin et al. 1989),
the redshifts corresponding to the distances from the observer to the quasar,
from the observer to the lens, and from the lens to the quasar are,
$z_{\rm os} = 1.675$, $z_{\rm ol} = 0.039$,
and $z_{\rm ls} = 1.575$, respectively.
For these values, the Einstein-ring angular extension
and the radius on the source plane are
$\theta_{\rm E} \sim 4 \times 10^{-11} (M_{\rm lens}/M_{\odot})^{1/2} {\rm (radian)}$
and $r_{\rm E} \equiv \theta_{\rm E}D_{\rm os}
\sim 1.5\times 10^{17}(M_{\rm lens}/M_\odot)^{1/2}$cm, respectively,
where we assume an Einstein-de Sitter universe and
Hubble's constant to be $H_{\rm 0} \sim 60 {\rm km \ s^{-1} Mpc^{-1}}$,
according to Kundi\'c et al. (1997).
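As a rough numerical check (an addition, not part of the original text), the quoted Einstein radius follows from standard Einstein-de Sitter angular-diameter distances with $H_0=60\,{\rm km\,s^{-1}Mpc^{-1}}$ and a $1\,M_\odot$ lens; the formula for $D_A(z_1,z_2)$ below is the standard EdS expression, not given in the text.

```python
# Einstein radius of Q2237+0305 for a solar-mass lens, assuming an
# Einstein-de Sitter universe with H0 = 60 km/s/Mpc.
# EdS angular-diameter distance between redshifts z1 < z2:
#   D_A(z1, z2) = (2c/H0) * [1/sqrt(1+z1) - 1/sqrt(1+z2)] / (1+z2)

C_KMS, H0 = 2.998e5, 60.0       # speed of light (km/s), Hubble constant
MPC_CM = 3.086e24               # cm per Mpc
GM_SUN_C2 = 1.477e5             # G*M_sun/c^2 in cm

def d_ang(z1, z2):
    """EdS angular-diameter distance in Mpc."""
    return (2 * C_KMS / H0) * ((1 + z1) ** -0.5 - (1 + z2) ** -0.5) / (1 + z2)

d_os = d_ang(0.0, 1.675)
d_ol = d_ang(0.0, 0.039)
d_ls = d_ang(0.039, 1.675)

theta_e = (4 * GM_SUN_C2 / MPC_CM * d_ls / (d_os * d_ol)) ** 0.5  # radians
r_e = theta_e * d_os * MPC_CM                                     # cm
```

This reproduces $r_{\rm E}\sim1.5\times10^{17}$ cm to within about 10%, and $\theta_{\rm E}$ of a few $\times10^{-11}$ radian, consistent with the order-of-magnitude values quoted above.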
We, however, emphasize that an even more important length is
the {\it caustic crossing scale} on the quasar image plane ($r_{\rm cross}$);
\begin{equation}
r_{\rm cross}
= v_{\rm t}t \frac{D_{\rm os}}{D_{\rm ol}}
\sim 4.1\times 10^{13}
\left(\frac{v_{\rm t}}{600~{\rm km~s}^{-1}}\right)
\left(\frac{t}{1~{\rm d}}\right) {\rm cm},
\end{equation}
where $t$ is the time over which the caustic moves on
the quasar disk plane and $v_{\rm t}$ is the transverse velocity of the lens,
including the transverse velocity of the peculiar motion of
the foreground galaxy relative to the source and the observer.
Surprisingly, this is comparable to the Schwarzschild radius,
$r_{\rm g}\simeq 3\times 10^{13}M_8$cm,
of a black hole with mass $10^8M_8\,M_\odot$, and is much smaller than $r_{\rm E}$.
In other words, by daily monitoring, we can hope to resolve the disk structure
with a resolution of $\sim$ 2.7 AU or $\sim r_{\rm g}M_8^{-1}$,
if a microlensing event takes place.
We can then place a limit on the depth of the potential well
at the center of a quasar
from X-ray observations of the microlensing light curve.
Since an X-ray emitting plasma is within the potential well,
we can conjecture
$kT_{\rm e}/m_{\rm p} < GM/r$ (with $k$ being the Boltzmann constant,
$T_{\rm e}$ being electron temperature
and $m_{\rm p}$ being the proton mass);
this leads to a constraint on the central black hole mass:
\begin{equation}
M > 1.5 \times 10^5 M_\odot
\left(\frac{r_{\rm cross}}{2.5\times 10^{14}{\rm cm}}\right)
\left(\frac{T_{\rm e}}{10^9{\rm K}}\right).
\end{equation}
If we observe every 6 days,
we can achieve an effective spatial resolution
of $2.5\times 10^{14}$cm on the disk plane and thus set a lower mass limit
of $\sim 10^{5}M_\odot$ for $T_{\rm e} \sim 10^9{\rm K}$
in such a compact region.
The maximally attainable resolution is, therefore,
over two orders of magnitude better than
what was obtained by ${\rm H_2O}$ maser observation.
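The two scaling relations above can be encoded directly; the sketch below (an addition, not from the original text) reproduces the 6-day numbers quoted in this paragraph.

```python
# Scaling forms of the caustic-crossing scale and the implied lower
# limit on the central black hole mass, as given in the text.

def r_cross(v_t_kms=600.0, t_days=1.0):
    """Caustic-crossing scale in cm."""
    return 4.1e13 * (v_t_kms / 600.0) * t_days

def mass_limit(r_cross_cm, T_e=1e9):
    """Lower limit on the central mass in solar masses."""
    return 1.5e5 * (r_cross_cm / 2.5e14) * (T_e / 1e9)
```

For a 6-day sampling at $v_{\rm t}=600$ km/s, `r_cross` gives $\simeq2.5\times10^{14}$ cm, and the corresponding mass limit for $T_{\rm e}\sim10^9$ K is $\sim10^5\,M_\odot$, as stated.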
Although X-ray observations alone are very informative,
simultaneous X-ray and optical observations
allow us to obtain even more detailed information
concerning AU-scale disk structure.
We can set good constraints on the model parameters,
such as $v_{\rm t}$, from optical observations,
because the standard-type disk, which emits the optical flux, is well studied.
We demonstrate this below (\S 3.3).
\section{MICROLENS DIAGNOSIS}
\subsection{Simulation Methods}
Let us next calculate expected X-ray variations based on specific disk models.
The observed flux from a part of the disk at ($r_i,\varphi_j$)
during a microlensing event is calculated by
\begin{equation}
\Delta F_{\rm obs}(\nu;r_i,\varphi_j) d \Omega \simeq
A(u) \frac{\Delta L[\nu(1+z_{\rm os}),r_i]}{4\pi D_{\rm os}^2(1+z_{\rm os})^3}
r_i \, dr_i \, d\varphi_j .
\end{equation}
The integral $\int\Delta F_{\rm obs} d\Omega$ over the disk plane gives
the total observed flux.
Here, $\Delta L$ is the luminosity of a part of a disk,
$A(u)$ represents the amplification factor as a function of
the angular separation ($u$) between the source and a caustic, and
$z_{\rm os}$ is the redshift between the observer and the source.
The amplification factor is calculated in the following way.
We consider the idealized situation of a straight caustic
with infinite length passing over the disk.
This is a crude but reasonable approximation,
since the size of the source (i.e., disk) of interest is smaller
than the Einstein-ring radius on the source plane.
Then, the following analytical
approximate formula can be used (see Schneider, Ehlers, \& Falco, 1992):
\begin{equation}
A(u) = \left\{
\begin{array}{@{\,}ll}
u^{-1/2} + A_{\rm 0} & \mbox{(\rm for $u > 0$)} \\
A_{\rm 0} & \mbox{(\rm for $u \le 0$)},
\end{array}
\right.
\label{eq:ampli}
\end{equation}
where $u$ is the angular separation from a caustic in units of
$\theta_{\rm E}$, and $A_{\rm 0}$ represents a constant amplification factor
due to the initial amplification by the caustic
(i.e., when the source lies outside the caustic) and
also due to the effects of other caustics.
In the present study, we set $A_{\rm 0}=6.0$ so as to reproduce
observed microlens amplitudes $\sim 0.5 {\rm mag}$.
The positive (or negative) angular separation, $u>0$ ($u<0$), means that
the source is located inside (outside) the caustic (see figure 1).
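The piecewise amplification law (\ref{eq:ampli}) is trivial to implement; the sketch below (an addition, not from the original text) uses the constant offset $A_0=6.0$ adopted in the text.

```python
# Straight-caustic amplification factor, Eq. (4) in the text:
#   A(u) = u^(-1/2) + A0  for u > 0 (source inside the caustic),
#   A(u) = A0             for u <= 0 (source outside).

def amplification(u, a0=6.0):
    return u ** -0.5 + a0 if u > 0 else a0
```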
To calculate $\Delta L$, we consider a hybrid disk model
which is composed of an optically thick standard disk
and an optically thin ADAF.
Optical-UV flux comes from the optically thick part,
while X-rays originate from the optically thin ADAF part.
For the optically thick part, we assume blackbody radiation
$B_{\nu}(T)$ (with $\nu$ being frequency),
where temperature $T(r)$ is given by
\begin{equation}
T(r) = 2.2 \times 10^5 \left( \frac{\dot{M}}{10^{26}
{\rm g \ s^{-1}}} \right)^{1/4} \left( \frac{M}{10^8 M_{\odot}}
\right)^{1/4} \left( \frac{r}{10^{14} {\rm cm}} \right)^{-3/4}
\left[1 - \left( \frac{r_{\rm in}}{r} \right)^{1/2} \right]^{1/4} {\rm K},
\end{equation}
if relativistic effects, irradiation heating,
and Compton scattering effects are ignored.
Here, $\dot{M}$ is the mass accretion rate, $M$ is the black hole mass, and
the inner edge of the disk ($r_{\rm in}$) is set to be $3r_{\rm g}$.
The fractional luminosity from a tiny portion of the disk at $(r,\varphi)$
with surface element $\Delta S =r\Delta r\Delta\varphi$ is then
$\Delta L(\nu,r) = 8\pi B_{\nu}\left[T(r)\right] \Delta S$.
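The temperature profile above can be written as a small helper (an addition, not from the original text); the fiducial units follow the equation, and the inner edge $r_{\rm in}=3r_{\rm g}=9\times10^{13}$ cm corresponds to $M=10^8M_\odot$.

```python
# Effective temperature of the standard disk (Eq. 5 of the text), in K.
# Fiducial units: mdot in g/s, black hole mass in solar masses, radii in cm.

def disk_temperature(r_cm, mdot_gs=1e26, m_bh_msun=1e8, r_in_cm=9e13):
    return (2.2e5
            * (mdot_gs / 1e26) ** 0.25
            * (m_bh_msun / 1e8) ** 0.25
            * (r_cm / 1e14) ** -0.75
            * (1.0 - (r_in_cm / r_cm) ** 0.5) ** 0.25)
```

Far outside the inner edge the profile approaches the familiar $T\propto r^{-3/4}$ law.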
In the optically thin (ADAF) parts,
radiative cooling is inefficient due to the very low density.
As a result, accreting matter falls into a central object without losing
its internal energy via radiation.
Hence, the disk can be significantly hotter with $T_{\rm e} \lower.5ex\hbox{\gtsima} 10^9$K,
producing high energy (X-$\gamma$ ray) photons.
We use the spectrum calculated by Manmoto, Mineshige, \& Kusunose (1997)
and derive luminosity, $\Delta L(\nu,r)= 2\epsilon_{\nu}(r) \Delta S$,
where $\epsilon_{\nu}$ is emissivity.
Included are synchrotron, bremsstrahlung, and inverse
Comptonization of soft photons created by the former two processes
(see Narayan \& Yi 1995 for details).
Photon energy is thus spread over a large frequency range;
from radio (due to synchrotron) to hard X-$\gamma$ rays (via inverse Compton).
Unlike the optically thick case, radiative cooling is no longer
balanced with viscous heating (and with gravitational energy release).
Consequently, emission from regions within
a few tens to hundreds of $r_{\rm g}$
of the center contributes most of the entire spectrum
(see Yonehara et al. 1997).
In the present study, we set the viscosity parameter to $\alpha=0.55$.
The mass of the central black hole is $M = 10^8 M_{\odot}$
and the mass accretion rate is $\dot{M} =8 \times 10^{24}$ g s$^{-1}$.
We focus our discussion on the relative changes of the radiation flux and
variation timescales, and are not concerned with the absolute flux.
It is easy to confirm that the parameters associated with
the accretion disk only affect non-microlensed flux,
and do not affect the timescale of microlensing events.
The only influential parameter is thus $v_{\rm t}$,
which can be estimated by analysis of the optical light curves,
since the theory of the optically thick disk is relatively well established.
For simplicity, we assume that the accretion disk
is face-on to the observer ($i=0$).
If we change the inclination angle (i.e., $i \ne 0$),
the timescale will be shortened by $\cos i$ because of
the apparently smaller size of the emitting regions,
but the basic properties are not altered significantly.
In any case, a quasar is believed to be viewed more or less face-on.
The inclusion of the relativistic effects
(e.g., Doppler shifts by the disk rotation,
beaming, or gravitational redshifts)
in the disk model could reveal the detailed disk structure
in the real vicinity of a central black hole, which will be discussed
in a future paper.
\subsection{Spectral Variation}
First, we calculate the predicted spectra for the cases
with the angular separation ($d$) between the caustic and the source center
normalized with $\theta_{\rm E}$ at
$d=0.1$, $0.01$, $0.0$, $-0.01$, and $-0.1$, respectively.
The resultant spectral modifications by the lens are shown in figure 2.
For the optically-thick parts, not only disk brightening
but also substantial spectral deformation is expected
for $d \le 0.1$.
The spectral shapes critically depend on whether the emitting region is
inside the caustic or not, since
a large amplification is expected only when
the emitting regions are inside (see Eq.~\ref{eq:ampli}).
Consequently, totally distinct spectra are expected
for the cases with $d=0.01$ and $d=-0.01$,
unlike the single-lens calculations (Yonehara et al. 1997).
In the former case,
higher-energy parts (i.e., {\it U}-band) as well as lower-energy parts
({\it I}-band) are effectively amplified,
giving rise to a sharply modified spectrum.
In the latter, in contrast,
the higher-energy part is not substantially amplified,
since the hotter parts are outside the caustic.
This produces a relatively flat spectrum.
As a result, frequency-dependent, time-asymmetric
microlensing light curves are produced (see \S 3.3).
For the optically thin parts, on the other hand, microlensing produces
no large spectral modifications;
the total flux is amplified strongly over the entire energy range,
although the spectral modifications are a bit smaller in the X-ray range.
This is because in ADAFs the large emissivity is achieved only
at several tens to hundreds of $r_{\rm g}$.
Therefore, timescales for X-ray variation are somewhat shorter.
\subsection{Light Curves}
Such unique spectral effects lead to interesting behaviors in
the microlensing light curves. We continuously change the
angular separation between the caustic and the center of the
accretion disk ($d$) according to $d(t) = v_{\rm t}t/D_{\rm ol}\theta_{\rm E}$
and calculate the flux at each frequency, assuming optical-UV flux coming from
the optically thick parts (standard disk model) and
X-rays from the optically thin parts (ADAF).
Figure 3 shows the light curves of the microlensing events for the
optically thick and thin parts.
There are two noticeable features.
First, the light curves of the optically thick parts show strong
frequency dependence in the sense
that higher-energy radiation (i.e., {\it U}-band) shows more rapid changes
than lower-energy ({\it K}-band).
Second, the shapes of the light curves are different between the two models;
the changes are gradual in the optically thick parts,
on a timescale of a few tens of days, while the variation timescales are
somewhat shorter for the optically thin parts.
This feature reflects the different emissivity distributions of
the two disk models in the radial directions.
That is, we can infer the radial disk structure
(i.e., radial dependence of emission properties) from
the microlensing light curves at different bands.
For this purpose, simultaneous monitoring by ground-based photometry is
indispensable; namely, we can obtain a good guess for the model parameters,
such as the transverse velocity,
by using the standard-disk model for optical emission.
We can then determine the distance between the caustic and the disk center
as a function of the observing time.
On the basis of these data, we can clarify the radial dependence of
X-ray emission properties. Since the standard-disk model is
fairly well established,
and since it predicts that the effective temperature varies as $r^{-3/4}$,
irrespective of the magnitude of viscosity,
it is easy to infer the transverse velocity from the time variation of the
estimated effective temperature of the region brightened by a caustic.
In summary, we have studied microlensing of a quasar disk
by a caustic crossing using realistic models of an accretion disk.
As a result, we have obtained the basic properties of microlens
light curves of a quasar like Q2237+0305.
Future calculations, which incorporate
the effects of caustic curvature and caustic networks,
will tell more precisely how the amplitudes are determined.
\acknowledgements
The authors would like to express their thanks to
Joachim Wambsgan{\ss} for valuable suggestions,
and acknowledge Matt Malkan for helpful comments.
This work was supported in part by the Japan-US Cooperative
Research Program, which is funded by the Japan Society for
the Promotion of Science and the US National Science
Foundation, and by the Grants-in-Aid of the
Ministry of Education, Science, and Culture of Japan
(08640329, 09223212, SM).
1901.11331
\section{Introduction}
$K$-means is one of the most popular clustering methods,
because its algorithm is simple and runs in linear time with respect to the number of data points.
However, the number of clusters must be specified in advance.
Therefore, it is necessary to apply some heuristic or to examine results for multiple numbers of clusters.
To estimate the number of clusters, a learning method for Gaussian mixtures based on the nonparametric Bayes approach was proposed \cite{non-parabayse}, leading to an extension of $K$-means capable of estimating the number of clusters in the limit where the variance of the Gaussian components approaches $0$.
This method is called Dirichlet-Process-means (DP-means) \cite{dpmeans}.
Although clustering methods that can estimate the number of clusters from given data have been proposed so far, they have problems such as long computation time and many parameters to be tuned.
Affinity propagation \cite{affinity} and Mean shift clustering \cite{mean_shift} are difficult to apply to large scale data because they require the squared order of the number of data to execute the algorithms.
Gamma-clust, based on the $\gamma$-divergence, has been proposed as a clustering method that is robust against scattered outliers \cite{gamma-clust}.
It learns means and covariance matrices of $q$-Gaussian mixtures based on $\gamma$-divergence.
It requires four or five hyperparameters to be set.
On the other hand, DP-means retains the advantages of $K$-means.
It can be executed in linear time with respect to the number of data points, is easy to apply to large scale data, and can be implemented as a simple algorithm.
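A minimal sketch of the DP-means algorithm \cite{dpmeans} illustrates this simplicity (the one-dimensional data and the penalty value below are illustrative, not from the text): points are assigned to the nearest center as in $K$-means, but a new cluster is created whenever the smallest squared distance exceeds the penalty parameter $\lambda$.

```python
# DP-means: K-means-style alternation, but a point whose squared distance
# to every existing center exceeds lam opens a new cluster.
# One-dimensional data for brevity.

def dp_means(xs, lam, iters=20):
    centers = [xs[0]]
    labels = []
    for _ in range(iters):
        # assignment step, opening new clusters when needed
        labels = []
        for x in xs:
            d2 = [(x - c) ** 2 for c in centers]
            j = min(range(len(centers)), key=d2.__getitem__)
            if d2[j] > lam:
                centers.append(x)
                j = len(centers) - 1
            labels.append(j)
        # update step: move each center to the mean of its points
        for j in range(len(centers)):
            pts = [x for x, l in zip(xs, labels) if l == j]
            if pts:
                centers[j] = sum(pts) / len(pts)
    return centers, labels
```

On two well-separated groups with a suitable $\lambda$, the algorithm discovers two clusters without the number being specified in advance.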
In an attempt to further speed up DP-means, parallelization can be applied with optimistic concurrency control when a new cluster is created \cite{occ}.
In addition, computation time has been significantly reduced by dividing data into weighted subsets called coresets, although at the expense of accuracy \cite{coreset-dpmeans}.
A variant of DP-means specialized for large-scale genetic data has been devised and shown to be superior to existing methods in terms of both accuracy and efficiency \cite{dace}.
To improve clustering accuracy, methods for avoiding local minima have been studied \cite{split-dp}.
An extension using the Bregman divergence has also been given, which introduces an appropriate distance measure when the data are of a special type such as binary or non-negative integer values \cite{clustering-bregman,dp-bregman}.
From the viewpoint of information theory, the algorithm of DP-means monotonically decreases the average distortion of the training data, whereas the penalty parameter, which controls the number of clusters, has been interpreted as the maximum distortion of the data \cite{mypaper1}.
This motivated us to consider modifying DP-means, where the maximum distortion would be minimized instead of the average distortion.
This problem is also known as the $K$-center problem \cite{k-center}.
When the number of clusters is one, this reduces to the smallest enclosing ball problem \cite{seb}; a method to compute the smallest enclosing Bregman ball, whose radius is measured by the Bregman divergence, has also been studied \cite{bregman-ball}.
DP-means is prone to the influence of outliers because its objective function is the average distortion.
We therefore previously extended the objective function of DP-means and devised two objective functions, each of which bridges the maximum distortion and robust distortion measures, and constructed algorithms to minimize them \cite{mypaper2}.
However, the degree of the robustness against outliers induced by these objective functions has yet to be clarified.
Further, to extend the average distortion, which is linear in the distortion of each data point, to nonlinear distortion measures, $f$-separable distortion measures based on the $f$-mean have been proposed.
For these measures, the rate-distortion function characterizing the limit of lossy compression was elucidated \cite{f-separable}.
In this paper, we build on the above previous work and extend the objective function of DP-means to $f$-separable distortion measures using a monotonically increasing function $f$.
We derive a cluster center update rule for a sufficiently wide class of functions $f$, and show that the objective function monotonically decreases if $f$ is concave.
As concrete examples of the function $f$, we show that two kinds of functions $f$ with a parameter $\beta$ can unify the minimization of robust distortion measures and the minimization of the maximum distortion by adjusting the parameter $\beta$.
Furthermore, we derive the influence function and evaluate the robustness against outliers.
Experiments using real datasets demonstrate that DP-means generalized by the function $f$ improves on the performance of the original DP-means.
The paper is organized as follows.
Section \ref{sec:dpm} introduces DP-means.
Section \ref{sec:fgene} generalizes the objective function of DP-means to $f$-separable distortion measures and explains the behavior of the objective function corresponding to the selection of the function $f$.
In Section \ref{sec:const_algorithm}, the generalized DP-means algorithm is constructed based on the generalized objective function.
In Section \ref{sec:if}, we derive the influence function and evaluate robustness against outliers.
In Section \ref{sec:experiments}, we present the results of numerical experiments using real datasets to demonstrate the effectiveness of the generalized DP-means.
In Section \ref{sec:tbd}, we discuss further modification of the objective function in terms of the pseudo-distance.
Finally, Section \ref{sec:conclusion} concludes this paper.
\section{DP-means \label{sec:dpm}}
DP-means requires data $\upbm{x}{n}=\autonami{\udrbm{x}{1},\dots,\udrbm{x}{n}}$ and penalty parameter $\lambda$ as inputs.
Suppose that each data point is $L$-dimensional, $\udrbm{x}{i}=(\upbracudr{x}{i}{1}, \cdots, \upbracudr{x}{i}{L})^{\rm T}\in \mathbb{R}^L$.
The algorithm of DP-means is essentially the same as that of $K$-means.
Let $\autonami{\udrbm{\theta}{1}, \dots, \udrbm{\theta}{K}}$ be cluster centers.
DP-means alternates between computing the cluster centers and assigning data points to clusters until the following objective function converges:
\begin{align}
L(\autonami{\udrbm{\theta}{k}}_{k=1}^K, \autonami{c(i)}_{i=1}^n) = \sum_{i=1}^n \vecbregman{x}{i}{\theta}{c(i)} + \lambda K. \label{eq:original}
\end{align}
Here, $c\left(i\right) \triangleq \argmin_k {\vecbregman{x}{i}{\theta}{k}}$ denotes the cluster label of the data point $\udrbm{x}{i}$.
However, DP-means differs from $K$-means in the following two points.
First, DP-means is initialized with a single cluster, $K=1$.
Second, when the pseudo-distance between a data point and its nearest cluster center exceeds the penalty parameter $\lambda$, a new cluster is created.
In other words, a new cluster is generated when the following condition is satisfied:
\begin{align}
\vecbregman{x}{i}{\theta}{c(i)} > \lambda . \label{eq:clusterAdd}
\end{align}
Also, the cluster center $\udrbm{\theta}{k}$ is calculated as the average of data points assigned to the $k$-th cluster,
\begin{align*}
\udrbm{\theta}{k} = \frac{\sum_{i=1}^n r_{ik}\bm{x}_i}{\sum_{j=1}^n r_{jk}} .
\end{align*}
Here, $r_{ik}$ is given by
\[
r_{ik} = \begin{cases}
1 & (c(i) = k), \\
0 & (c(i) \neq k) .
\end{cases}
\]
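For illustration, the procedure above can be sketched in a few lines of Python. This is a minimal sketch of ours, not the authors' implementation: it assumes the squared Euclidean distance as the pseudo-distance and, for simplicity, drops clusters that lose all their members.

```python
def sq_dist(x, theta):
    # squared Euclidean distance used as the pseudo-distance
    return sum((a - b) ** 2 for a, b in zip(x, theta))

def dp_means(data, lam, n_iter=20):
    # initialization: a single cluster (K = 1) centered at the global mean
    centers = [[sum(col) / len(data) for col in zip(*data)]]
    labels = [0] * len(data)
    for _ in range(n_iter):
        # assignment step: open a new cluster whenever the nearest
        # center is farther than the penalty parameter lambda
        for i, x in enumerate(data):
            d = [sq_dist(x, th) for th in centers]
            if min(d) > lam:
                centers.append(list(x))
                labels[i] = len(centers) - 1
            else:
                labels[i] = d.index(min(d))
        # update step: each center becomes the mean of its assigned points
        for k in range(len(centers)):
            members = [x for x, c in zip(data, labels) if c == k]
            if members:
                centers[k] = [sum(col) / len(members)
                              for col in zip(*members)]
        # drop clusters that lost all their members
        keep = sorted(set(labels))
        centers = [centers[k] for k in keep]
        labels = [keep.index(c) for c in labels]
    return centers, labels

# two well-separated pairs; a lambda between the within-pair and
# between-pair squared distances yields two clusters
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers, labels = dp_means(data, lam=1.0)
```

Note that the number of clusters emerges from the data and the choice of $\lambda$ alone, in contrast to $K$-means.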
In this paper, we assume the Bregman divergence as the pseudo-distance, which generalizes the squared distance.
Specifically, the Bregman divergence is the pseudo-distance determined from a differentiable strictly convex function $\phi$ as
\begin{align}
d_\phi \autobrac{\bm{x},\bm{\theta}} \triangleq \phi(\bm{x}) - \phi(\bm{\theta}) - \langle \bm{x}-\bm{\theta},\bm{\nabla}\phi(\bm{\theta})\rangle, \label{eq:def_bregman}
\end{align}
where $\bm{\nabla}\phi$ represents the gradient vector of $\phi$ and $\langle \cdot , \cdot \rangle$ is the inner product.
The Bregman divergence satisfies non-negativity and the identity of indiscernibles among the distance axioms.
Moreover, Bregman divergences are in bijective correspondence with probability distributions in the exponential family.
For example, if the data are of a particular type such as binary or non-negative integer values, the Bregman divergences corresponding to the Bernoulli and Poisson distributions are more suitable distance measures than the usual squared distance for real numbers \cite{clustering-bregman,dp-bregman}.
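The definition \eqref{eq:def_bregman} can be evaluated directly. The sketch below is our illustration for scalar data, assuming the generating functions $\phi(z)=z^2$ and $\phi(z)=z\log z - z$; these recover, respectively, the squared distance and the generalized KL divergence associated with the Poisson distribution.

```python
import math

def bregman(phi, dphi, x, theta):
    # d_phi(x, theta) = phi(x) - phi(theta) - <x - theta, grad phi(theta)>
    return phi(x) - phi(theta) - (x - theta) * dphi(theta)

# phi(z) = z^2 yields the squared distance (x - theta)^2
sq = lambda x, t: bregman(lambda z: z * z, lambda z: 2.0 * z, x, t)

# phi(z) = z log z - z yields the generalized KL divergence
# x log(x / theta) - x + theta, matching the Poisson distribution
kl = lambda x, t: bregman(lambda z: z * math.log(z) - z,
                          lambda z: math.log(z), x, t)
```

Both divergences vanish exactly when $x$ equals $\theta$ and are positive otherwise, as the distance axioms above require.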
The average distortion and the maximum distortion are defined by
\begin{align*}
\frac{1}{n} \sum_{i=1}^n \vecbregman{x}{i}{\theta}{c(i)}, \\%\label{eq:aveBregman} \\
\max_{1\leq i \leq n} \vecbregman{x}{i}{\theta}{c(i)}, \nonumber
\end{align*}
respectively.
Note that the minimization of the objective function \eqref{eq:original} with respect to $\udrbm{\theta}{k}$ is equivalent to that of the average distortion.
On the other hand, the penalty parameter $\lambda$, which determines the number of clusters, can be interpreted as the maximum distortion \cite{mypaper1}.
Therefore, we can regard the maximum distortion as the criterion for creating new clusters.
\section{Generalized objective function \label{sec:fgene}}
\subsection{Generalization with $f$-separable distortion measures \label{sec:fmean}}
In this paper, we propose an objective function that generalizes the objective function of DP-means to $f$-separable distortion measures as follows:
\begin{align}
L_f(\autonami{\udrbm{\theta}{k}}_{k=1}^K, \autonami{c(i)}_{i=1}^n) = \sum_{i=1}^n f\autobrac{\vecbregman{x}{i}{\theta}{c(i)}} + f\autobrac{\lambda}K . \label{eq:generalized_f}
\end{align}
As we will discuss in Section \ref{sec:monotonically}, this objective function is guaranteed to decrease monotonically with respect to $\autonami{\udrbm{\theta}{k}}_{k=1}^K$ and $\autonami{c(i)}_{i=1}^n$.
In this paper, we assume that the function $f: \mathbb{R}_+ \rightarrow \mathbb{R}$ is differentiable and monotonically increasing, where $\mathbb{R}_+$ is the set of non-negative real numbers.
Its argument is either the Bregman divergence or the penalty parameter $\lambda$.
In particular, we consider the following three types:
\begin{itemize}
\setlength{\leftskip}{1.0cm}
\item[$\nearrow$ \ :] linear,
\item[\uzarrow \ :] concave,
\item[\szarrow \ :] convex.
\end{itemize}
If the function $f(z)$ is differentiable and strictly monotonically increasing, the inverse function $f^{-1}(z)$ exists, and \eqref{eq:generalized_f} can be normalized into a distortion measure as an $f$-mean \cite{f-mean}.
The $f$-separable distortion measures defined via this inverse function $f^{-1}(z)$ correspond to those in the literature \cite{f-separable}.
The generalized objective function \eqref{eq:generalized_f} can be transformed monotonically by the inverse function $f^{-1}$.
Therefore, minimizing \eqref{eq:generalized_f} is equivalent to minimizing the $f$-mean,
\begin{align*}
f^{-1}\left( \frac{1}{n+K} \left[\sum_{i=1}^n f\autobrac{\vecbregman{x}{i}{\theta}{c(i)}} + f\autobrac{\lambda}K\right] \right).
\end{align*}
Table \ref{table:func_property} summarizes the behavior of the objective function, its monotonic improvement property, and the calculation order required to execute the learning algorithm.
If $f$ is linear ($\nearrow$), it becomes the original objective function \eqref{eq:original}, which corresponds to the average distortion in the original DP-means.
If $f$ is convex (\szarrow), distortions with larger pseudo-distance values are preferentially minimized.
In particular, the faster the function $f$ diverges to infinity, the more closely the objective function approaches the maximum distortion.
Conversely, if $f$ is concave (\uzarrow), distortions with smaller pseudo-distance values are preferentially minimized.
That is, the influence of data points far from the other data points, such as outliers, is weakened.
There is thus a trade-off between minimizing the maximum distortion, i.e., the maximum radius of a cluster, and robustness against outliers.
Robustness against outliers is explained more in detail in Section \ref{sec:if}.
\begin{table}[tbp]\scriptsize
\begin{center}
\caption{Algorithm behavior corresponding to function $f$.}
\begin{tabular}{lllll} \toprule
$f(z)$ & $f^{-1}(z)$ & behavior & monotonic decrease & calculation order\\ \midrule
$\nearrow$ & $\nearrow$ &average distortion minimization & yes & $\bm{O}(Ln)$\\
\uzarrow & \szarrow & robustness against outliers & yes & $\bm{O}(Ln)$\\
\szarrow & \uzarrow & approaches the maximum distortion minimization & *1 & *2 \\ \bottomrule
\end{tabular}
\\ *1: A gradient descent optimization method is required. \\ *2: The order depends on the applied gradient descent optimization.
\label{table:func_property}
\end{center}
\end{table}
Eguchi and Kano generalized the likelihood of a probabilistic model to the ${\rm \Psi}$-likelihood using a convex function, as in \eqref{eq:generalized_f}, and devised the ${\rm \Psi}$-estimator as an estimator robust against outliers \cite{psi-div}.
The ${\rm \Psi}$-estimator focuses on the following two points.
The first is to obtain robustness against outliers.
In contrast, we consider a wider class of functions, bringing not only robustness against outliers but also maximum distortion minimization within our scope.
The second is to guarantee an unbiased estimating equation; for this, a bias correction term, whose calculation is complicated in general, is included in the objective function.
Moreover, the ${\rm \Psi}$-likelihood assumes a probabilistic model, whereas our study assumes only the Bregman divergence.
As we will discuss, the update rule for the cluster center derived from the combination of the function $f$ and the Bregman divergence allows the learning algorithm to run, like the original DP-means, in time linear in the number of training data points.
\subsection{Examples of function $f$}
In this subsection, we show two concrete examples of functions with a parameter $\beta$.
As the parameter $\beta$ is varied, the generalized objective function behaves as the average distortion, the maximum distortion, or a robust distortion measure.
\subsubsection{Power mean objective}
For the function
\begin{align}
f(z)=\frac{1}{\beta}\left[(z+a)^\beta-1\right],\footnotemark \label{eq:f_pow}
\end{align}
the corresponding $f$-mean is given by
\begin{align}
\left[\frac{1}{n}\sum_{i=1}^n \autobrac{z_i+a}^\beta\right]^{\frac{1}{\beta}}-a . \label{eq:pow}
\end{align}
The first term of \eqref{eq:pow} is called the power mean.
The parameters are $\beta \in \mathbb{R}$ and $a\geq0$.
The parameter $\beta$ determines the behavior of the objective function, and the parameter $a$ is introduced to avoid an algorithmic drawback: the cluster center is not updated when it coincides with its nearest data point.
If $a>0$, this problem does not occur; if $a=0$, some algorithmic modification is required.
We will explain the details in Section \ref{sec:problem}.
Table \ref{table:pow} shows the characteristics of the objective function \eqref{eq:pow} and the corresponding function $f$ for different choices of $\beta$.
\footnotetext{
It can also be expressed as $\ln_{1-\beta} (z+a)=\frac{1}{\beta}[(z+a)^\beta-1]$ by using Tsallis $q$-function $\ln_q (z) \triangleq \frac{z^{1-q}-1}{1-q}$, for which $\ln(z) = \lim_{q\to1} \ln_q (z)$ \cite{tsallis_book}.}
As shown in Table \ref{table:pow}, the behavior of the objective function changes around $\beta = 1$: it shows robust characteristics when $\beta<1$, and the smaller the value of $\beta$, the smaller the influence of outliers.
When $\beta>1$, the larger the value of $\beta$, the more closely the objective function approaches the maximum distortion; in particular, $\beta\to\infty$ yields maximum distortion minimization.
When the Bregman divergence is the squared distance, the objective function using \eqref{eq:pow} with $\beta>0$ and $a=0$ corresponds to the objective function derived in the framework of MAP-based Asymptotic Derivations (MAD-Bayes) \cite{mad-bayes}, in which the generalized Gaussian distribution is assumed as the component of the nonparametric mixture model.
The proof is given in \ref{sec:proof_gene_gauss}.
Similarly, assuming a deformed $t$-distribution as the component of the nonparametric mixture model yields the same objective function with $\beta=0$ and $a>0$; the proof is given in \ref{sec:proof_t_dist}.
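The effect of $\beta$ in Table \ref{table:pow} can be checked numerically. The sketch below is our illustration with $a=0$: it evaluates the power mean \eqref{eq:pow} of three distortions, one of which is an outlying value.

```python
import math

def power_mean(zs, beta, a=0.0):
    # f-mean for f(z) = ((z + a)^beta - 1) / beta; beta = 0 is the
    # limiting geometric mean
    n = len(zs)
    if beta == 0.0:
        return math.exp(sum(math.log(z + a) for z in zs) / n) - a
    return (sum((z + a) ** beta for z in zs) / n) ** (1.0 / beta) - a

zs = [1.0, 1.0, 100.0]            # one outlying distortion
m_avg = power_mean(zs, 1.0)       # beta = 1: average distortion (34.0)
m_rob = power_mean(zs, -1.0)      # beta < 1: the outlier is down-weighted
m_max = power_mean(zs, 50.0)      # large beta: approaches max(zs) = 100
```

The three values are ordered as $m_{\mathrm{rob}} < m_{\mathrm{avg}} < m_{\mathrm{max}}$, matching the transition in Table \ref{table:pow} from robustness to maximum distortion minimization as $\beta$ grows.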
\subsubsection{Log-sum-exp objective}
For the function
\begin{align}
f(z)=\frac{1}{\beta-1}\left[\exp\autobrac{(\beta-1)z}-1\right], \footnotemark \label{eq:f_exp}
\end{align}
the corresponding $f$-mean is given by
\begin{align}
\frac{1}{\beta-1}\ln\left[\frac{1}{n} \sum_{i=1}^n \exp\autobrac{(\beta-1)z_i}\right] . \label{eq:lse}
\end{align}
Equation \eqref{eq:lse} is a differentiable approximation of the maximum value function when $\beta=2$, known as the log-sum-exp function \cite{convex}.
As in the case of the power mean, the parameter $\beta \in \mathbb{R}$ determines the characteristics of the objective function.
Table \ref{table:lse} shows the characteristics of the objective function using \eqref{eq:lse} and the corresponding function $f$ for different choices of $\beta$.
Table \ref{table:lse} shows that the behavior of the objective function again changes around $\beta=1$, as in the case of the power mean: it becomes robust when $\beta<1$ and approaches the maximum distortion when $\beta>1$.
In particular, when $\beta\to\infty$, its limit is the maximum distortion.
In addition, \eqref{eq:lse} corresponds to the objective function for the estimation of mixture models in \cite{entropic_risk} when the variance of each component approaches $0$.
\footnotetext{
It can also be expressed as $\ln_{2-\beta} \autobrac{\exp(z)}=\frac{\exp\autobrac{(\beta-1)z}-1}{\beta-1}$.}
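The behavior in Table \ref{table:lse} can likewise be checked numerically. The sketch below is our illustration of the $f$-mean \eqref{eq:lse} for $\beta \neq 1$.

```python
import math

def lse_mean(zs, beta):
    # f-mean for f(z) = (exp((beta - 1) z) - 1) / (beta - 1), beta != 1
    n = len(zs)
    s = sum(math.exp((beta - 1.0) * z) for z in zs) / n
    return math.log(s) / (beta - 1.0)

zs = [1.0, 1.0, 10.0]
near_avg = lse_mean(zs, 1.0001)   # beta near 1: close to the average 4.0
near_max = lse_mean(zs, 11.0)     # beta >> 1: approaches max(zs) = 10
robust = lse_mean(zs, -9.0)       # beta << 1: the outlier barely matters
```

As $\beta$ crosses $1$ the same transition as for the power mean appears: the value moves from near the smaller distortions, through the average, toward the maximum.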
\begin{landscape}
\begin{table*}[tb]
\begin{center}
\caption{Behavior of the power mean objective and corresponding function $f$.}
\scalebox{0.8}[1]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline \hline
$\beta$ & $f^{-1}\autobrac{\frac{1}{n}\sum_{i=1}^n f(z_i)}$ & \multicolumn{2}{|c|}{$f(z)$} & $f{\,'}(z)$ & behavior \\
\hline \hline
$\beta=1$ & $\frac{1}{n} \sum_{i=1}^n z_i$ & $z+a-1$ & $\nearrow$ & 1 & average distortion minimization \\ \hline
$\beta=0$ & $\autobrac{\prod_{i=1}^n (z_i+a)}^{\frac{1}{n}}-a$ &$\ln (z+a)$ & \multirow{3}{*}{\uzarrow} & $\frac{1}{z+a}$ & \multirow{3}{*}{robustness against outliers} \\ \cline{1-3} \cline{5-5}
$-\infty<\beta<0$ & \multirow{3}{*}{$\autobrac{\frac{1}{n}\sum_{i=1}^n (z_i+a)^\beta}^\frac{1}{\beta}-a$} & \multirow{3}{*}{$\frac{1}{\beta}[(z+a)^\beta-1]$} & & \multirow{3}{*}{$(z+a)^{\beta-1}$} & \\ \cline{1-1}
$0<\beta<1$ & & & & & \\ \cline{1-1} \cline{4-4} \cline{6-6}
$1<\beta<\infty$ & & & \szarrow & & approaches the maximum distortion minimization \\ \hline
$\beta\to\infty$ & $\max_{1\leq i \leq n} z_i$ & & & & maximum distortion minimization \\ \hline
\end{tabular}}
\label{table:pow}
\end{center}
\begin{center}
\caption{Behavior of the log-sum-exp objective and corresponding function $f$.}
\scalebox{0.75}[1]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline \hline
$\beta$ & $f^{-1}\autobrac{\frac{1}{n}\sum_{i=1}^n f(z_i)}$ & \multicolumn{2}{|c|}{$f(z)$} & $f{\,'}(z)$ & behavior \\
\hline \hline
$\beta=1$ & $\frac{1}{n} \sum_{i=1}^n z_i$ & $z$ & $\nearrow$ & 1 & average distortion minimization \\ \hline
$-\infty<\beta<1$ & \multirow{2}{*}{$\frac{1}{\beta-1}\ln\autobrac{\frac{1}{n} \sum_{i=1}^n \exp\autobrac{(\beta-1)z_i}}$} & \multirow{2}{*}{$\frac{\exp\autobrac{(\beta-1)z}-1}{\beta-1}$} & \uzarrow & \multirow{2}{*}{$\exp\autobrac{(\beta-1)z}$} & robustness against outliers \\ \cline{1-1} \cline{4-4} \cline{6-6}
$1<\beta<\infty$ & & & \szarrow & & approaches the maximum distortion minimization \\ \hline
$\beta\to\infty$ & $\max_{1\leq i \leq n} z_i$ & & & & maximum distortion minimization \\ \hline
\end{tabular}}
\label{table:lse}
\end{center}
\end{table*}
\end{landscape}
\section{Construction of generalized algorithm \label{sec:const_algorithm}}
In this section, we construct a generalized DP-means algorithm based on the objective function proposed in Section \ref{sec:fgene}.
First, we derive the update rule of the cluster center in Section \ref{sec:update_rule}.
Second, we show that the objective function decreases monotonically with the derived update rule in Section \ref{sec:monotonically}.
Finally, we discuss minor problems in the execution of the algorithm and offer solutions to them in Section \ref{sec:problem}.
The generalized algorithm is constructed from the original DP-means by replacing the update rule for the cluster centers and the objective function used for the convergence check (Algorithm \ref{extAlg}).
Since it differs from the original algorithm only in the update rule for the cluster centers, the computation time remains linear in the number of data points.
\subsection{Derivation of update rules \label{sec:update_rule}}
The update equations for the cluster centers are derived from the stationarity condition that the gradient of the generalized objective function \eqref{eq:generalized_f} with respect to the cluster center $\udrbm{\theta}{k}$ is $\bm{0}$.
Letting $f{\,'}$ denote the derivative of the function $f$, the update rule for the cluster center is
\begin{align}
\udrbm{\theta}{k} = \frac{\sum_{i=1}^n w_{ik} \udrbm{x}{i}}{\sum_{j=1}^n w_{jk}}, \label{eq:update_rule}
\end{align}
where
\begin{align}
w_{ik} = r_{ik}f{\,'}\autobrac{\vecbregman{x}{i}{\theta}{k}} , \label{eq:weight}
\end{align}
which is the weighted mean of $\udrbm{x}{i}$ weighted by $f{\,'}$ with the Bregman divergence as its argument.
However, this update rule \eqref{eq:update_rule} monotonically decreases the objective function only when the function $f$ is concave (or linear).
If the function $f$ is convex, the cluster center is instead updated by a gradient-based optimization method such as steepest descent or Newton's method.
The update rule based on Newton's method is given in \ref{sec:newton_update}.
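For concave $f$, one application of \eqref{eq:update_rule} and \eqref{eq:weight} is simply a weighted mean. The sketch below is our illustration with the squared distance and the concave choice $f(z)=\ln(z+1)$, so that $f{\,'}(z)=1/(z+1)$; iterating the update pulls the center toward the inliers rather than the outlier.

```python
def sq_dist(x, t):
    # squared Euclidean distance as the pseudo-distance
    return sum((a - b) ** 2 for a, b in zip(x, t))

def update_center(points, f_prime, theta):
    # weighted mean of the cluster members, weighted by f'(d(x_i, theta))
    w = [f_prime(sq_dist(x, theta)) for x in points]
    total = sum(w)
    return [sum(wi * x[j] for wi, x in zip(w, points)) / total
            for j in range(len(theta))]

fp = lambda z: 1.0 / (z + 1.0)          # derivative of f(z) = ln(z + 1)
pts = [(0.0,), (1.0,), (100.0,)]        # the last point is an outlier
theta = [sum(p[0] for p in pts) / 3.0]  # plain mean, about 33.7
for _ in range(30):                     # iterate the update to convergence
    theta = update_center(pts, fp, theta)
```

After a few iterations the center settles near the two inliers, far below the plain mean of about $33.7$ that the linear (average distortion) case would give.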
\begin{algorithm}[tb]
\caption{Generalized DP-means for $f$-separable distortion measures}
\label{extAlg}
\setstretch{0.9}
\DontPrintSemicolon
\nl \KwIn{$\upbm{x}{n}=\autonami{\udrbm{x}{1},\dots,\udrbm{x}{n}},\:\: \lambda, \:\:$ generic function $f$}
\nl \KwOut{$\{\bm{\theta}_k\}_{k=1}^K, \:\: \{c(i)\}_{i=1}^n, \:\: K$}\nl$K=1, \:\: \bm{\theta}_1=\frac{1}{n} \sum_{i=1}^{n} \bm{x}_i, \:\: c\left(i\right)=1~(i=1,\dots, n)$\;
\nl\Repeat{the decrease of $\bar{L}_f\left(\bm{\theta}_1\right)$ becomes smaller than the threshold $\delta$}{
\nl calculate $w_{ik}$ by \eqref{eq:weight} ~$({i}=1,\dots, n)$\;
\nl update $\bm{\theta}_1$ by \eqref{eq:update_rule}\;
}
\Repeat{\eqref{eq:generalized_f} converges}{
\nl\For{$i=1$ \KwTo $n$}{
\nl $d_{k} = {d_\phi\left(\bm{x}_i, \bm{\theta}_{k}\right)}~({k}=1,\dots, K)$ \;
\nl \If{$\min_k d_k>\lambda$ }{
\nl $K=K+1$ \;
\nl $c(i)=K, \:\: \bm{\theta}_K=\bm{x}_i$ \;
}
\nl \Else{
\nl $c(i)=\argmin_{k} d_k$ \;
}
}
\nl\For{$k=1$ \KwTo $K$}{
\nl\Repeat{the decrease of $\bar{L}_f\left(\bm{\theta}_k\right)$ becomes smaller than the threshold $\delta$}{
\nl calculate $w_{ik}$ by \eqref{eq:weight} ~$({i}=1,\dots, n)$\;
\nl update $\bm{\theta}_k$ by \eqref{eq:update_rule} \;
}
}
}
\end{algorithm}
\subsection{Guarantee of monotonic decreasing property \label{sec:monotonically}}
The DP-means algorithm alternates between the cluster center update step and the assignment of data points to clusters.
Because the objective function decreases monotonically in each step, the algorithm converges within finitely many iterations.
For the generalized objective function \eqref{eq:generalized_f}, the cluster assignment step is the same as in the original DP-means.
Therefore, only the monotonic decrease of the objective function in the cluster center update step needs to be considered.
In the following, we treat the two cases, concave and convex, separately.
\subsubsection{Concave case}
The following theorem applies when the function $f$ is concave.
\begin{theorem}\label{teiri:monot}
If the function $f$ is concave, the updating of the cluster center using \eqref{eq:update_rule} monotonically decreases the objective function for general Bregman divergence.
\end{theorem}
\begin{proof}
{\rm
We show that the objective function \eqref{eq:generalized_f} monotonically decreases when the $k$-th cluster center $\udrbm{\theta}{k}$ is newly updated to $\tilde{\bm{\theta}}_k$ by \eqref{eq:update_rule}.
More specifically, we prove that $\bar{L}_f(\udrbm{\theta}{k})\geq \bar{L}_f(\tilde{\bm{\theta}}_k)$, where $\bar{L}_f(\udrbm{\theta}{k})$ is the sum of the terms related to $\udrbm{\theta}{k}$ in \eqref{eq:generalized_f}:
\begin{align*}
\bar{L}_f({\bm{\theta}}_k) = \sum_{i=1}^n r_{ik} f\autobrac{\vecbregman{x}{i}{\theta}{k}} .
\end{align*}
Here, the tangent line $y$ of $f(z)$ at the point $a$ is expressed by the following equation:
\begin{align*}
y=f{\,'}(a)(z-a)+f(a) .
\end{align*}
Furthermore, since $y\geq f(z)$ follows from the concavity of the function $f$, the following inequality holds:
\begin{align}
f(a)-f(z)\geq f{\,'}(a)(a-z) . \label{eq:tangent_inquality}
\end{align}
From this inequality, the following holds:
\begin{align}
\MoveEqLeft[1]
\bar{L}_f(\udrbm{\theta}{k})-\bar{L}_f(\tilde{\bm{\theta}}_k) \nonumber
\\={}& \sum_{i=1}^n r_{ik} \left[f\autobrac{\vecbregman{x}{i}{\theta}{k}}-f\autobrac{\tildebregman{x}{i}{\theta}{k}}\right] \nonumber
\\ \geq{}& \sum_{i=1}^n r_{ik} f{\,'}\autobrac{\vecbregman{x}{i}{\theta}{k}} \left[\vecbregman{x}{i}{\theta}{k} - \tildebregman{x}{i}{\theta}{k}\right] \nonumber
\\={}& \sum_{i=1}^n r_{ik} f{\,'}\autobrac{\vecbregman{x}{i}{\theta}{k}} \left[ \phi\autobrac{\tilde{\bm{\theta}}_k} - \phi\autobrac{\bm{\theta}_k} -\bm{\nabla}\phi\autobrac{\bm{\theta}_k}\autobrac{\bm{x}_i-\bm{\theta}_k} + \bm{\nabla}\phi\autobrac{\tilde{\bm{\theta}}_k}\autobrac{\bm{x}_i-\tilde{\bm{\theta}}_k} \right] \label{eq:use_update}
\\={}& d_\phi \autobrac{\tilde{\bm{\theta}}_k, \bm{\theta}_k} \sum_{i=1}^n r_{ik}f{\,'}\autobrac{\vecbregman{x}{i}{\theta}{k}} \geq 0 , \nonumber
\end{align}
where
\begin{align*}
\tilde{\bm{\theta}}_k \sum_{i=1}^n r_{ik} f{\,'}\autobrac{\vecbregman{x}{i}{\theta}{k}} = \sum_{i=1}^n r_{ik} f{\,'}\autobrac{\vecbregman{x}{i}{\theta}{k}} \bm{x}_i
\end{align*}
derived from \eqref{eq:update_rule} and \eqref{eq:weight} was used in \eqref{eq:use_update}.
\qed
}
\end{proof}
The following corollary immediately follows from Theorem \ref{teiri:monot}.
\begin{corollary}
When the objective function is constructed by the power mean \eqref{eq:pow} or the log-sum-exp function \eqref{eq:lse}, the following holds.
When $\beta\leq1$, the updating of the cluster center using \eqref{eq:update_rule} monotonically decreases the objective function for general Bregman divergence.
\end{corollary}
If the function $f$ satisfies $f(0)>-\infty$, the algorithm converges within finite iterations.
Let $t$ be the number of times the $k$-th cluster center has been updated, and denote it by $\bm{\theta}_k^{(t)}$.
The corresponding objective function is $\bar{L}_f\left(\bm{\theta}_k^{(t)}\right)$.
The objective function sequence $\left\{\bar{L}_f\left(\bm{\theta}_k^{(t)}\right)\right\}_{t=0}^\infty$ converges to a finite value because it is monotonically decreasing (Theorem \ref{teiri:monot}) and bounded below, that is, $\bar{L}_f\autobrac{\bm{\theta}_k^{(t)}}\geq \sum_{i=1}^n r_{ik} f(0)>-\infty$.
Therefore, the following holds:
\begin{align*}
\lim_{t\rightarrow\infty} \bar{L}_f\left(\bm{\theta}_k^{(t)}\right)-\bar{L}_f\left(\bm{\theta}_k^{(t+1)}\right)=0.
\end{align*}
In other words,
\begin{align*}\forall\delta>0, \exists t^*\in \mathbb{N}: \bar{L}_f\autobrac{\bm{\theta}_k^{(t^*)}}- \bar{L}_f\autobrac{\bm{\theta}_k^{(t^*+1)}} < \delta.
\end{align*}
That is, for any threshold $\delta$, the algorithm converges within finitely many iterations.
The proof of Theorem \ref{teiri:monot} is a generalization of the monotonic decreasing property of ei-means ($\varepsilon=0$) proposed in \cite{ei-means}, which corresponds to the case where $f(z)=\sqrt{z}$ and $\vecbregman{x}{}{\theta}{}=\|\bm{x}-\bm{\theta}\|^2$.
The proof of Theorem \ref{teiri:monot} can also be interpreted in terms of the majorization-minimization (MM) algorithm \cite{mm-alg}.
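Theorem \ref{teiri:monot} is easy to check numerically. In the sketch below (our illustration with the squared distance and the concave choice $f(z)=\ln(z+1)$), a single application of the update rule strictly decreases $\bar{L}_f$ from a deliberately poor starting center.

```python
import math

sq = lambda x, t: (x - t) ** 2
f = lambda z: math.log(z + 1.0)    # a concave choice of f
fp = lambda z: 1.0 / (z + 1.0)     # its derivative f'

pts = [0.0, 1.0, 10.0]
obj = lambda t: sum(f(sq(x, t)) for x in pts)   # bar L_f for one cluster

theta = 5.0                         # a deliberately poor center
w = [fp(sq(x, theta)) for x in pts]
theta_new = sum(wi * x for wi, x in zip(w, pts)) / sum(w)
```

Here $\bar{L}_f(\theta_{\mathrm{new}}) < \bar{L}_f(\theta)$, as the theorem guarantees for any concave $f$ and any starting center.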
\subsubsection{Convex case}
When the function $f$ is convex, the objective function \eqref{eq:generalized_f} exhibits behavior close to maximum distortion minimization.
If the cluster center is updated by \eqref{eq:update_rule}, the value of the objective function can oscillate and may not decrease monotonically.
Therefore, it is necessary to apply a gradient-based optimization method such as steepest descent or Newton's method.
When the pseudo-distance is a general Bregman divergence, computing the cluster centers is not in general a convex optimization problem, so the gradient does not necessarily point in a descent direction.
However, the monotonic decreasing property is still guaranteed for general Bregman divergences by using an algorithm that always updates in a descent direction, such as a modified Newton's method.
In particular, when the pseudo-distance satisfies symmetry among the distance axioms, as the squared distance does, computing the cluster centers reduces to a convex optimization problem, so the gradient direction is always a descent direction \cite[Section 3.2]{bregman-ball,convex}.
When the function $f$ is convex, the computation time of the steepest descent method is $\bm{O}(Ln)$, and that of Newton's method with Cholesky decomposition is $\bm{O}(L^2n)$, where $L$ is the dimension of the data points.
In either case, the computation remains linear in the number of data points.
\subsection{Problem and solution\label{sec:problem}}
When the objective function is robust against outliers, that is, when the function $f$ is concave, a cluster center that coincides with a data point may never be updated again.
In the assignment step of DP-means, a new cluster center is placed exactly on the data point satisfying \eqref{eq:clusterAdd}, so a data point and a cluster center frequently coincide.
We now give the condition under which the cluster center is not updated and offer a solution.
When the function $f$ is concave and satisfies
\begin{align}
\lim_{z\to0} f{\,'}(z)=\infty , \label{eq:no_update}
\end{align}
the cluster center is never updated once it coincides with a data point.
To see this, suppose that one point in $\upbm{x}{n}=\autonami{\udrbm{x}{1},\dots,\udrbm{x}{n}}$ coincides with the cluster center $\udrbm{\theta}{k}$.
That is, $\udrbm{x}{i^*}=\udrbm{\theta}{k} \iff \vecbregman{x}{i^*}{\theta}{k}=0$.
Here, if the function $f$ satisfies \eqref{eq:no_update},
\begin{align*}
\frac{f{\,'}\autobrac{\vecbregman{x}{i}{\theta}{k}}} {f{\,'}\autobrac{\vecbregman{x}{i^*}{\theta}{k}} } =
\begin{cases}
1 & (i=i^*), \\
0 & (i\neq i^*),
\end{cases}
\end{align*}
holds.
Therefore, we have
\begin{align*}
\udrbm{\theta}{k} = \frac{\sum_{i=1}^n \frac{w_{ik}} {f{\,'}\autobrac{\vecbregman{x}{i^*}{\theta}{k}} } \udrbm{x}{i} }{\sum_{j=1}^n \frac{w_{jk}}{f{\,'}\autobrac{\vecbregman{x}{i^*}{\theta}{k}}}}
=\udrbm{x}{i^*} ,
\end{align*}
which means that the cluster center does not move from the data point.
Next, we examine the case where the function $f$ satisfies \eqref{eq:no_update}, the cluster center coincides with its nearest data point, and new data points are additionally assigned to the cluster.
In this case, updating can be performed by first shifting the cluster center away from the data point, e.g., to the average of the assigned points, and then applying the update rule.
In the case of the power mean \eqref{eq:f_pow} with $a=0$, condition \eqref{eq:no_update} is satisfied, so the cluster center is not updated and the procedure described above is required.
In contrast, with $a>0$, \eqref{eq:no_update} is not satisfied, so the cluster center is updated without the above procedure; however, $a$ must be kept smaller than typical pseudo-distance values.
Similarly, in the case of the log-sum-exp function \eqref{eq:f_exp}, the cluster center can be updated without the above procedure because \eqref{eq:no_update} is not satisfied.
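This stuck-center phenomenon is easy to reproduce numerically. In the sketch below (our illustration), $f(z)=\ln(z+a)$ gives the weights $f{\,'}(d_i)=1/(d_i+a)$; with $a=0$ the weight of the overlapping point dominates as its distortion tends to $0$, so the center stays put, whereas $a>0$ keeps all weights finite and the center moves.

```python
pts = [0.0, 1.0, 2.0]
theta = 0.0                 # the center sits exactly on the first point
eps = 1e-12                 # stands in for the limit d -> 0+

def weighted_mean(a):
    # update rule for f(z) = ln(z + a): weights f'(d_i) = 1 / (d_i + a)
    w = [1.0 / (max((x - theta) ** 2, eps) + a) for x in pts]
    return sum(wi * x for wi, x in zip(w, pts)) / sum(w)

stuck = weighted_mean(0.0)  # a = 0: the overlapping point's weight diverges
moved = weighted_mean(1.0)  # a > 0: all weights are finite, center moves
```

With $a=0$ the updated center remains numerically on the overlapping data point, while with $a=1$ it shifts toward the other cluster members, matching the discussion above.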
\section{Analysis of influence function \label{sec:if}}
The influence function is an indicator of robustness against outliers; it quantifies how much an estimator is affected by contamination with a small number of outliers.
In this section, we derive the influence function and evaluate how robust the generalized objective function is against outliers.
\subsection{Influence function of general function $f$ \label{sec:influence}}
The derivation of the influence function in this subsection follows that of the influence function for the total Bregman divergence \cite{total-bregman,information-geometry} (see also Section \ref{sec:tbd}).
We consider the influence function when an outlier is mixed into the $k$-th cluster.
Since the influence of outliers is independent across clusters, in the subsequent derivation we omit the subscript of $\udrbm{\theta}{k}$ and write $\bm{\theta}$.
Suppose that the $k$-th cluster contains $m$ samples of data $\upbm{x}{m}=\autonami{\udrbm{x}{1}, \dots, \udrbm{x}{m}}$, and the cluster center estimated from $\upbm{x}{m}$ is $\bm{\theta}$.
When an outlier $\upbm{x}{*}$ is mixed into the data $\upbm{x}{m}$, a new cluster center $\tilde{\bm{\theta}}$ is calculated for the data including outliers $\upbm{x}{m+1}=\autonami{\udrbm{x}{1}, \dots, \udrbm{x}{m}, \upbm{x}{*}}$.
Let $\delta\bm{\eta}$ be the difference between the estimator $\tilde{\bm{\theta}}$ computed with the outlier and the estimator $\bm{\theta}$ computed without it:
\begin{align*}
\tilde{\bm{\theta}} -\bm{\theta} = \delta\bm{\eta} .
\end{align*}
The influence function is defined by
\begin{align}
\bm{{\rm IF}}(\upbm{x}{*}) = m\cdot\delta \bm{\eta} . \label{eq:influence}
\end{align}
The influence function \eqref{eq:influence} defined by a finite sample is also specifically called a sensitivity curve in the field of robust statistics \cite{robust}.
Then, the new cluster center $\tilde{\bm{\theta}}$ minimizes
\begin{align}
\frac{1}{m+1} \sum_{i=1}^m f\autobrac{\tildebregman{x}{i}{\theta}{}} + \frac{1}{m+1} f\autobrac{d_\phi \autobrac{\bm{x}^*, \tilde{\bm{\theta}}}} . \label{eq:outlier_loss}
\end{align}
Now we expand \eqref{eq:outlier_loss} in a Taylor series up to second order around the old cluster center $\bm{\theta}$, which minimizes $\sum_{i=1}^m f\autobrac{\vecbregman{x}{i}{\theta}{}}$.
Since $\bm{\theta}$ is a minimizer, the first-order term of the first part of \eqref{eq:outlier_loss} vanishes: $\sum_{i=1}^m \bm{\nabla}f\autobrac{\vecbregman{x}{i}{\theta}{}}=0$.
The second-order term of the second part of \eqref{eq:outlier_loss} is negligible when $m$ is large because
\begin{align*}
&\frac{1}{m+1}\delta\bm{\eta}^{\mathrm{T}} \bm{\nabla}\bm{\nabla} f\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}}\delta\bm{\eta}
\\=& \frac{1}{m+1}\frac{1}{m^2}\bm{{\rm IF}}^{\mathrm{T}}(\upbm{x}{*}) \bm{\nabla}\bm{\nabla}f\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} \bm{{\rm IF}}(\upbm{x}{*})
=O(m^{-3}),
\end{align*}
where \eqref{eq:influence} was used.
Here, $\bm{\nabla}$ expresses the gradient with respect to $\bm{\theta}$.
From the above arguments, the terms involving $\delta\bm{\eta}$ are given as follows:
\begin{align}
&\frac{1}{m+1} \left[\frac{1}{2}\delta\bm{\eta}^{\mathrm{T}} \bm{{\rm G}}\delta\bm{\eta}
+ \bm{\nabla}f\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}}\delta\bm{\eta}\right] , \label{eq:taylor}\\
&\bm{{\rm G}} = \sum_{i=1}^m \bm{\nabla}\bm{\nabla}f\autobrac{\vecbregman{x}{i}{\theta}{}} . \nonumber
\end{align}
The $\delta\bm{\eta}$ that minimizes \eqref{eq:taylor} is obtained by completing the square with respect to $\delta\bm{\eta}$.
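Explicitly, writing $\bm{g}=\bm{\nabla}f\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}}$ for brevity, completing the square gives
\begin{align*}
\frac{1}{2}\delta\bm{\eta}^{\mathrm{T}} \bm{{\rm G}}\delta\bm{\eta} + \bm{g}^{\mathrm{T}}\delta\bm{\eta}
= \frac{1}{2}\left(\delta\bm{\eta} + \bm{{\rm G}}^{-1}\bm{g}\right)^{\mathrm{T}} \bm{{\rm G}} \left(\delta\bm{\eta} + \bm{{\rm G}}^{-1}\bm{g}\right) - \frac{1}{2}\bm{g}^{\mathrm{T}}\bm{{\rm G}}^{-1}\bm{g} ,
\end{align*}
so the minimizer is $\delta\bm{\eta} = -\bm{{\rm G}}^{-1}\bm{g}$.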
Then, the influence function in \eqref{eq:influence} is given by
\begin{align}
\bm{{\rm IF}}(\upbm{x}{*}) = - m\bm{{\rm G}}^{-1}\bm{\nabla}f\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} . \label{eq:full_IF}
\end{align}
Essentially, \eqref{eq:full_IF} is the same as the influence function in M-estimation \cite{robust}.
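As a concrete illustration (a minimal numerical sketch, not part of the derivation; the function name \texttt{f\_mean}, the fixed-point update, and all parameter values are our own assumptions), the finite-sample influence function \eqref{eq:influence} can be computed directly by re-estimating the center with and without an outlier. For the squared distance and the power mean, the stationarity condition gives the weighted-mean update $\theta \leftarrow \sum_i w_i x_i / \sum_i w_i$ with $w_i = f'(d_i)$, which for concave $f$ is a majorization-minimization step:

```python
import numpy as np

def f_mean(x, beta, a=1.0, iters=200):
    """Center minimizing sum_i f(d_i) with f'(z) = (z + a)**(beta - 1)
    (power mean) and squared distance d_i = (x_i - theta)**2.
    Fixed-point update: theta <- weighted mean with weights w_i = f'(d_i);
    for concave f this is a majorization-minimization iteration."""
    theta = x.mean()
    for _ in range(iters):
        w = ((x - theta) ** 2 + a) ** (beta - 1.0)
        theta = np.sum(w * x) / np.sum(w)
    return theta

def sensitivity_curve(x, x_star, estimate):
    """Finite-sample influence function: m * (theta_with_outlier - theta)."""
    return len(x) * (estimate(np.append(x, x_star)) - estimate(x))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200)
if_linear = sensitivity_curve(x, 1e4, np.mean)                      # f linear: unbounded
if_pow = sensitivity_curve(x, 1e4, lambda y: f_mean(y, beta=-1.0))  # beta < 0: redescending
```

With $f$ linear the sensitivity curve grows roughly like $\bm{x}^*$ itself, while with $\beta=-1$ the distant outlier receives a negligible weight and is essentially ignored.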
Since the matrix $\bm{{\rm G}}$ does not depend on the outlier, in order to investigate the boundedness of the influence function, we should evaluate the following term:
\begin{align}
\begin{split}
\MoveEqLeft
-\bm{\nabla}f\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} = -f{\,'}\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} \bm{\nabla}d_\phi \autobrac{\bm{x}^*, \bm{\theta}} \\
= {}&f{\,'}\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} \autobrac{\bm{\nabla}\bm{\nabla}\phi\autobrac{\bm{\theta}}(\bm{x}^* - \bm{\theta})}. \\
\end{split} \label{eq:if}
\end{align}
This shows that the boundedness of the influence function depends on the function $f$ and the strictly convex function $\phi$ constituting the Bregman divergence.
If \eqref{eq:if} is bounded, the estimated cluster center is robust against outliers.
\subsection{Necessary condition for boundedness}
\begin{theorem} \label{th:bound}
The following condition on the function $f$ is necessary for the influence function to be bounded for all $\bm{x}^*$:
\begin{align}\lim_{z\to\infty}f{\,'}(z)=0 . \label{eq:limz}\end{align}
\end{theorem}
\begin{proof}
Since $\|\bm{\theta}\|<\infty$, $d_\phi \autobrac{\bm{x}^*, \bm{\theta}}$ becomes large when $\|\bm{x}^*\|$ is large.
When the function $f(z)$ is linear ($\nearrow$) or convex (\szarrow), $f{\,'}(z)$ is constant or monotonically increasing with respect to $z$.
In that case, the norm of \eqref{eq:if} grows without bound as $\|\bm{x}^*\|$ increases, which means the influence function is not bounded.
To keep the norm of \eqref{eq:if} small when $\|\bm{x}^*\|$ is large, $f{\,'}(z)$ must be monotonically decreasing, and the only type of function $f(z)$ satisfying this is a concave one (\uzarrow).
Moreover, when $f{\,'}(z)$ does not satisfy \eqref{eq:limz}, the norm of \eqref{eq:if} diverges as $\|\bm{x}^*\|\to\infty$, and hence the influence function is not bounded.
Therefore, the function $f(z)$ must satisfy \eqref{eq:limz}.
\qed
\end{proof}
\begin{remark}
In this paper, the function $f$ is assumed to be linear, convex, or concave.
From the proof of this theorem, $f(z)$ is therefore restricted to a subclass of concave functions in order to obtain robustness.
However, even a non-concave function can yield robustness if it satisfies the condition of Theorem \ref{th:bound}.
For example, the sigmoid function is not concave, yet it can induce robustness.
\end{remark}
\begin{remark} \label{rmk:infl}
In some cases, the factor $f{\,'}\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}}$ in \eqref{eq:if} can diverge to infinity around $\bm{x}^*=\bm{\theta}$.
Even in such a case, if we consider the region of $\bm{x}^*$ satisfying $d_\phi \autobrac{\bm{x}^*, \bm{\theta}}>\delta$ for a constant $\delta$, the condition of Theorem \ref{th:bound} provides a necessary condition for the boundedness of the influence function on the region.
\end{remark}
Under the condition of Theorem \ref{th:bound}, the norm of \eqref{eq:if} is bounded.
In the following, we consider whether the norm of the influence function vanishes as $\|\bm{x}^*\|\to\infty$.
In particular, if
\begin{align}
\lim_{\|\bm{x}^*\|\to\infty} \|\bm{{\rm IF}}(\upbm{x}{*})\| = 0 \label{eq:redescending}
\end{align}
holds, the influence function is said to be redescending, and an outlier that is too large is automatically ignored.
The following discussion relies on Assumption \ref{asm:asm}.
\begin{assumption} \label{asm:asm}
The input dimension, the norm of the cluster center $\bm{\theta}$, and that of $\bm{\nabla}\phi$ at $\bm{\theta}$ are finite, that is,
$L < \infty$, $\|\bm{\theta}\| < \infty$, and $\|\bm{\nabla}\phi(\bm{\theta})\|<\infty$.
The Bregman divergence $d_\phi$ satisfies the following for $l\in \{1,\cdots, L\}$:
\begin{align}
&\quad |{x^{*}}^{(l)}| \to \infty \Rightarrow d_\phi\autobrac{\bm{x}^{*},\bm{\theta}} \to \infty, \label{eq:scal_infty} \\
&\tilde{\bm{x}}=(\theta^{(1)}, \cdots, \theta^{(l-1)}, {x^*}^{(l)}, \theta^{(l+1)}, \cdots, \theta^{(L)})^{\rm T} \Rightarrow d_\phi(\tilde{\bm{x}}, \bm{\theta})\leq d_\phi(\bm{x}^*, \bm{\theta}). \label{eq:addtiveBregman}
\end{align}
\end{assumption}
Equations \eqref{eq:scal_infty} and \eqref{eq:addtiveBregman} hold if the Bregman divergence $d_\phi(\bm{x}, \bm{\theta})$ is defined additively over the $L$ dimensions.
Below, we discuss the situation where $\|\bm{x}^*\| \to \infty$ holds under these assumptions.
In some cases, such as the Bregman divergence corresponding to the binomial distribution, $\|\bm{x}^*\| \to \infty$ cannot occur.
Even then, we can investigate the behavior of the influence function for finite $\bm{x}^*$; we illustrate the behavior of \eqref{eq:if} for such a case in \ref{sec:plot_if}.
\subsection{Power mean\label{sec:pow}}
From \eqref{eq:if} and \eqref{eq:f_pow}, evaluating the influence function in the case of the power mean reduces to evaluating the following term:
\begin{align*}
\lim_{\|\bm{x}^*\|\to\infty} \left\| \frac{\bm{\nabla}\bm{\nabla}\phi\autobrac{\bm{\theta}}(\bm{x}^* - \bm{\theta})}{\left[d_\phi \autobrac{\bm{x}^*, \bm{\theta}}+a\right]^{1-\beta}}\right\|.
\end{align*}
\begin{theorem} \label{th:pow}
For the function $f$ of the power mean \eqref{eq:f_pow} with $\beta<0$, the influence function is redescending for general Bregman divergences.
\end{theorem}
The proof of Theorem \ref{th:pow} is in \ref{sec:pow_redec}.
However, when $0\leq\beta<1$, the redescending or boundedness property of the influence function depends on the Bregman divergence.
In the rest of this subsection, we investigate the influence functions for concrete examples of the Bregman divergences.
The $\alpha$-divergence is a subclass of the Bregman divergences, including the Itakura--Saito divergence ($\alpha=0$), the generalized KL divergence ($\alpha=1$), and the squared distance ($\alpha=2$) as special cases \cite{beta-divergence}.\footnotemark
\footnotetext{The $\alpha$-divergence here is usually termed the $\beta$-divergence with parameter $\beta$.
We denote the parameter by $\alpha$ in order to avoid confusion with the $\beta$ in \eqref{eq:f_pow}.}
The $\alpha$-divergence and corresponding convex functions are given by
\begin{align*}
&d_\alpha\autobrac{x, \theta} = \begin{cases}
\frac{x}{\theta}-\ln\autobrac{\frac{x}{\theta}}-1 & (\alpha=0) , \\
x\ln\autobrac{\frac{x}{\theta}}-(x-\theta) & (\alpha=1) , \\
\frac{x^\alpha+(\alpha-1)\theta^\alpha-\alpha x\theta^{\alpha-1}}{\alpha(\alpha-1)} & ({\rm otherwise}) , \\
\end{cases} \\
&\phi_\alpha(x) = \begin{cases}
-\ln x + x -1 & (\alpha = 0) ,\\
x\ln x - x + 1 & (\alpha = 1) ,\\
\frac{{x}^\alpha}{\alpha(\alpha-1)} - \frac{x}{\alpha-1} + \frac{1}{\alpha} & ({\rm otherwise}) ,
\end{cases}
\end{align*}
respectively.
The domain of the convex function is $\mathbb{R}$ when the parameter $\alpha$ is a positive even integer, and $\mathbb{R}_{+}\setminus\{0\}$ otherwise.
If the data are multidimensional, the Bregman divergence and the corresponding convex function are defined additively over the dimensions as follows:
\begin{align*}
&d_\phi\autobrac{\bm{x}, \bm{\theta}} = \sum_{l=1}^L d_\alpha(x^{(l)}, \theta^{(l)}) ,\\
&\phi(\bm{x}) = \sum_{l=1}^L \phi_\alpha(x^{(l)}).
\end{align*}
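As a reference implementation (a minimal sketch; the function name \texttt{d\_alpha} is ours), the scalar $\alpha$-divergence above can be coded directly, and for multidimensional data it is summed over the coordinates:

```python
import math

def d_alpha(x, theta, alpha):
    """Scalar alpha-divergence (usually called the beta-divergence).
    Arguments must be positive, except that a positive even integer
    alpha also allows arguments on all of R."""
    if alpha == 0:                       # Itakura-Saito divergence
        r = x / theta
        return r - math.log(r) - 1.0
    if alpha == 1:                       # generalized KL divergence
        return x * math.log(x / theta) - (x - theta)
    # generic case, alpha not in {0, 1}
    return (x ** alpha + (alpha - 1) * theta ** alpha
            - alpha * x * theta ** (alpha - 1)) / (alpha * (alpha - 1))
```

For $\alpha=2$ this reduces to half the squared distance; for instance, `d_alpha(3.0, 1.0, 2)` equals $\frac{1}{2}(3-1)^2 = 2$.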
We calculated the influence function for the $\alpha$-divergence and found that it is classified as divergent, bounded, or redescending depending on $\alpha$ and $\beta$ according to the following conditions (the proof is in \ref{sec:beta_proof}):
\begin{align*}
\alpha<1 &:& \begin{cases}
\beta>0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm divergent}, \\
\beta = 0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm bounded}, \\
\beta <0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm redescending}, \\
\end{cases}\\
\alpha=1 &:& \begin{cases}
\beta>0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm divergent}, \\
\beta \leq 0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm redescending},
\end{cases}\\
\alpha>1 &:& \begin{cases}
\beta>1-\frac{1}{\alpha} & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm divergent}, \\
\beta = 1-\frac{1}{\alpha} &\Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm bounded}, \\
\beta <1-\frac{1}{\alpha} & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm redescending} . \\
\end{cases}
\end{align*}
Figure \ref{fig:beta-div} shows the regions of $(\alpha, \beta)$ corresponding to the three types.
In particular, the boundedness property holds on the boundary line between the divergent and redescending regions of the influence function.
This boundary line is continuous except at the point $\alpha=1$ and gradually approaches $\beta=1$ as $\alpha\to\infty$.
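The classification can be checked numerically. The following sketch (our own illustration, assuming the one-dimensional $\alpha=2$ case, where $\phi''=1$ and $d_\phi(x^*,\theta)=(x^*-\theta)^2/2$, and a small shift $a>0$) evaluates the norm of \eqref{eq:if} up to the constant matrix $\bm{{\rm G}}^{-1}$:

```python
def if_norm(x_star, beta, theta=0.0, a=1e-6):
    """|f'(d) * (x* - theta)| with the power-mean derivative
    f'(z) = (z + a)**(beta - 1) and the alpha = 2 divergence
    d = (x* - theta)**2 / 2 (so that phi'' = 1)."""
    d = 0.5 * (x_star - theta) ** 2
    return abs((d + a) ** (beta - 1.0) * (x_star - theta))

# boundary for alpha = 2 is beta = 1 - 1/alpha = 1/2
grow = if_norm(1e6, 0.8) > if_norm(1e3, 0.8)    # beta > 1/2: divergent
decay = if_norm(1e6, 0.2) < if_norm(1e3, 0.2)   # beta < 1/2: redescending
limit = if_norm(1e6, 0.5)                       # beta = 1/2: bounded
```

At the boundary $\beta = 1/2$ the term approaches the constant $\sqrt{2}$, consistent with the bounded (but not redescending) classification.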
The case of another divergence, the exp-loss, is shown in \ref{sec:exp-loss}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.7\textwidth]{./Fig1_alpha-divergence.pdf}
\end{center}
\caption{Robustness of the $\alpha$-divergence in the case of generalization with the power mean.}
\label{fig:beta-div}
\end{figure}
\subsection{Log-sum-exp\label{sec:lse}}
From \eqref{eq:if} and \eqref{eq:f_exp}, evaluating the influence function in the case of the log-sum-exp function reduces to evaluating the following term:
\begin{align*}
\lim_{\|\bm{x}^*\|\to\infty} \left\|\frac{{\nabla\nabla\phi(\bm{\theta})(\upbm{x}{*}-\bm{\theta})}}{\exp\left((1-\beta)d_\phi \autobrac{\bm{x}^*, \bm{\theta}}\right)}\right\|. \end{align*}
\begin{theorem} \label{th:lse}
For the function $f$ of the log-sum-exp \eqref{eq:f_exp} with $\beta<1$, the influence function is redescending for general Bregman divergences.
\end{theorem}
Theorem \ref{th:lse} shows that whenever the estimated cluster center is robust against outliers, the redescending property holds for any Bregman divergence.
(The proof of Theorem \ref{th:lse} is in \ref{sec:lse_redec}.)
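A quick numerical check (a minimal sketch of our own, again for the one-dimensional squared-distance case $d=(x^*-\theta)^2/2$): for any $\beta<1$ the factor $\exp\autobrac{-(1-\beta)d}$ decays faster than $|x^*-\theta|$ grows, so the term in \eqref{eq:if} vanishes:

```python
import math

def if_term(x_star, beta, theta=0.0):
    """exp(-(1 - beta) * d) * (x* - theta) with d = (x* - theta)**2 / 2,
    the IF term for the log-sum-exp generalization."""
    d = 0.5 * (x_star - theta) ** 2
    return math.exp(-(1.0 - beta) * d) * (x_star - theta)

# beta = 1/2 < 1: the term decreases rapidly as the outlier moves away
vals = [abs(if_term(x, beta=0.5)) for x in (1.0, 5.0, 10.0)]
```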
From the above examples, it can be seen that the robustness of the $f$-mean strongly depends on the function $f$.
\subsection{Discussion on the influence function}
In Section \ref{sec:if}, we have discussed the boundedness of the influence function when the norm of the outlier $\left\|\bm{x}^*\right\|$ goes to infinity.
By construction of the DP-means algorithm, a data point satisfying \eqref{eq:clusterAdd} generates a new cluster.
Hence, the update of an existing cluster center is not affected by such a data point.
However, data points which satisfy
\begin{align}d_\phi(\bm{x}, \bm{\theta})\leq \lambda \label{eq:no_outlier}\end{align} may effectively act as outliers.
For a fixed penalty parameter $\lambda$, clustering performance may be badly degraded by data points satisfying \eqref{eq:no_outlier}, depending on the dataset.
In fact, the clustering performance of DP-means depends on the choice of the function $f$, as we examine numerically in Section \ref{sec:experiments}.
Therefore, it is important to consider robustness against outliers.
On the other hand, efficiency and robustness are in a trade-off: if priority is given to efficiency, lower robustness may be preferable.
In such a case, one may choose a concave function close to the linear function, which corresponds to an unbounded influence function.
In any case, examining the influence function is important, and it can serve as a criterion for designing the function $f$ and the penalty parameter $\lambda$ for a given dataset.
\section{Experiments \label{sec:experiments}}
\subsection{UCI experiment \label{sec:uci_experiment}}
We conducted experiments with benchmark datasets from the UCI Machine Learning Repository\footnote{https://archive.ics.uci.edu/ml/datasets.html} to investigate the effect of the objective function generalized by the monotonically increasing function $f$.
We focused on the power mean \eqref{eq:f_pow} and the log-sum-exp function \eqref{eq:f_exp}.
\subsubsection{Dataset}
Table \ref{table:uci_data} summarizes the datasets used in the experiment, where $n$, $K$, and $L$ denote the number of data points, the number of clusters, and the number of dimensions, respectively; links to the specific data files are given as footnotes when a repository entry contains multiple datasets.
Datasets for classification problems were used, with the classes regarded as the true clusters.
The Heart dataset consists of data on heart disease with five clusters: four clusters represent heart disease and one represents no heart disease.
The HeartK2 dataset was made from the Heart dataset by coarsening the cluster labels into two clusters, with and without heart disease.
We deleted data points with missing values beforehand.
\begin{table}[t]
\begin{center}
\caption{UCI datasets. }
\begin{tabular}{crrr} \toprule
dataset & $n$ & $K$ & $L$ \\ \midrule
Breast Cancer Wisconsin\footnotemark[7] & $683$ & $2$ & $9$ \\
Heart\footnotemark[8] & $297$ & $5$ & $13$ \\
HeartK2 & $297$ & $2$ & $13$ \\
HTRU2 & $17898$ & $2$ & $8$ \\
Iris & $150$ & $3$ & $4$ \\
Mice Protein Expression & $552$ & $8$ & $77$ \\
Pima & $768$ & $2$ & $8$ \\
Seeds & $210$ & $3$ & $7$ \\
Thyroid\footnotemark[9] & $215$ & $3$ & $5$ \\
Wine & $178$ & $3$ & $13$ \\
Yeast & $1484$ & $10$ & $8$ \\ \bottomrule
\end{tabular}
\label{table:uci_data}
\end{center}
\end{table}
\footnotetext[7]{https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data}
\footnotetext[8]{https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data}
\footnotetext[9]{http://archive.ics.uci.edu/ml/machine-learning-databases/thyroid-disease/new-thyroid.data}
\subsubsection{Evaluation criteria}
We used the number of clusters and the normalized mutual information (NMI) as evaluation criteria to investigate the effect of the penalty parameter $\lambda$ and the control parameter $\beta$ of the objective function.
To confirm the behavior of the objective function, we also examined the behavior of the maximum distortion against changes of $\beta$.
NMI is a criterion for evaluating a clustering result and takes values from $0$ to $1$; the closer the NMI is to $1$, the better the result.
NMI is defined by the following equation:
\begin{align*}
{\rm NMI}(C, A) = \frac{I(C, A)}{\sqrt{H(C)H(A)}}
\end{align*}
for the label set $C$ of the clustering result and the label set $A$ of the correct cluster.
Here $I(\cdot,\cdot)$ and $H(\cdot)$ represent mutual information and entropy, respectively.
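For reference, NMI can be computed from contingency counts as follows (a minimal sketch of our own; it assumes at least two distinct labels in each labeling so that the entropies are nonzero):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H of a label sequence (natural log)."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def mutual_info(c, a):
    """Mutual information I(C, A) from the joint and marginal counts."""
    n = len(c)
    joint, pc, pa = Counter(zip(c, a)), Counter(c), Counter(a)
    return sum((m / n) * math.log((m / n) / ((pc[i] / n) * (pa[j] / n)))
               for (i, j), m in joint.items())

def nmi(c, a):
    # NMI(C, A) = I(C, A) / sqrt(H(C) * H(A))
    return mutual_info(c, a) / math.sqrt(entropy(c) * entropy(a))
```

Note that the cluster names themselves do not matter: a labeling that merely permutes the cluster names still has NMI equal to $1$.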
\subsubsection{Method}
For preprocessing, we standardized the data so that each dimension is transformed as $x_i^{(l)}\leftarrow \frac{x_i^{(l)}}{\sqrt{\frac{1}{n}\sum_{i=1}^n (x_i^{(l)})^2}}$.
We chose the squared distance $d_\phi\autobrac{\bm{x}, \bm{\theta}}=\frac{1}{L} \|\bm{x}-\bm{\theta}\|^2$ as the distortion measure, which is averaged with respect to the dimensions.
In the experiment, we investigated the change of each evaluation criterion when varying the parameter $\beta$ for the power mean \eqref{eq:pow} with $a=0$ and for the log-sum-exp function \eqref{eq:lse}.
The range of $\beta$ examined was $[-2,5]$.
DP-means returns a local minimum that depends on the order of the data.
Hence, the order of the data was shuffled $100$ times, and each evaluation criterion was averaged over the shuffles to produce the result.
The concrete procedure is shown below.
\begin{enumerate}[Step 1.]
\item When the number of clusters is 1, find the maximum value of the maximum distortion $\hat{d}_m$ in the range of $\beta\in[-2, 5]$.
\item Randomly rearrange the sequence of data. \label{item:first}
\item Change $\beta$ from $-2$ to $5$ in $0.1$ increments. \label{item:second}
\begin{enumerate}[Step 3-1.]
\item Set $\lambda^{(t)} = \begin{cases} \hat{d}_m & (t=0), \\ \lambda^{(t-1)}/1.01 & (t=1, 2, \cdots). \end{cases}$ \label{item:lambda}
\item The algorithm is executed with the penalty parameter $\lambda^{(t)}$. \label{item:exe}
\item Repeat Step 3-1 and Step 3-2 until $\lambda^{(t)}$ falls below the threshold.
\end{enumerate}
\item Repeat Step 2 and Step 3 100 times.
\item Average the evaluation criteria over $100$ rearrangements of data for each $\beta$ and $\lambda$, and use them as the result.
\end{enumerate}
The threshold was set so that the number of clusters was within about three times the number of correct clusters.
Note that when the Bregman divergence is defined as the average with respect to the dimension $L$, in the case of the log-sum-exp function, the effective value of the parameter $\beta$ depends on $L$.
When the parameter to be given is $\beta^*$, the effective value of the parameter is $\beta=\frac{\beta^*-1}{L}+1$.
Thus, the range of $\beta$ is $[\frac{-3}{L}+1, \frac{4}{L}+1]$ and the step size is $\frac{0.1}{L}$.
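The clustering step executed in Step 3-2 can be sketched as follows (a minimal illustration of our own with squared distance and the ordinary mean update, i.e. $\beta=1$; the generalized version replaces the mean in the center update by the $f$-mean):

```python
import numpy as np

def dp_means(X, lam, iters=20):
    """DP-means sketch: assign each point to its nearest center, creating a
    new cluster whenever the minimum squared distance exceeds lam, then
    update each center as the mean of its points; empty clusters are dropped."""
    centers = [X.mean(axis=0)]
    for _ in range(iters):
        assign = []
        for x in X:
            d = [np.sum((x - c) ** 2) for c in centers]
            k = int(np.argmin(d))
            if d[k] > lam:            # condition for spawning a new cluster
                centers.append(x.copy())
                k = len(centers) - 1
            assign.append(k)
        assign = np.array(assign)
        centers = [X[assign == k].mean(axis=0)
                   for k in range(len(centers)) if np.any(assign == k)]
    return centers

X = np.array([[0.0], [0.1], [-0.1], [10.0], [10.1], [9.9]])
centers = dp_means(X, lam=4.0)  # two well-separated groups -> two clusters
```

With $\lambda=4$ the two well-separated groups are recovered as two clusters centered near $0$ and $10$.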
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./breast-cancer-wisconsin_delete_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./breast-cancer-wisconsin_delete_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./breast-cancer-wisconsin_delete_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Breast Cancer Wisconsin, power mean \label{fig:bcw_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./breast-cancer-wisconsin_delete_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./breast-cancer-wisconsin_delete_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./breast-cancer-wisconsin_delete_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Breast Cancer Wisconsin, log-sum-exp \label{fig:bcw_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Heart_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Heart_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Heart_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Heart, power mean \label{fig:heart_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Heart_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Heart_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Heart_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Heart, log-sum-exp \label{fig:Heart_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./HeartK2_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./HeartK2_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./HeartK2_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{HeartK2, power mean \label{fig:heartK2_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./HeartK2_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./HeartK2_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./HeartK2_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{HeartK2, log-sum-exp \label{fig:HeartK2_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./HTRU2_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./HTRU2_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./HTRU2_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{HTRU2, power mean \label{fig:htru2_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./HTRU2_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./HTRU2_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./HTRU2_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{HTRU2, log-sum-exp \label{fig:htru2_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Iris_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Iris_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Iris_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Iris, power mean \label{fig:iris_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Iris_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Iris_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Iris_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Iris, log-sum-exp \label{fig:iris_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./MPE_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./MPE_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=48mm]{./MPE_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Mice Protein Expression, power mean \label{fig:mpe_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./MPE_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=48mm]{./MPE_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./MPE_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Mice Protein Expression, log-sum-exp \label{fig:mpe_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Pima_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Pima_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Pima_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Pima, power mean \label{fig:pima_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Pima_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=48mm]{./Pima_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Pima_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Pima, log-sum-exp \label{fig:pima_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Seeds_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Seeds_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Seeds_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Seeds, power mean \label{fig:seeds_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Seeds_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Seeds_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Seeds_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Seeds, log-sum-exp \label{fig:seeds_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Thyroid_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=48mm]{./Thyroid_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Thyroid_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Thyroid, power mean \label{fig:thyroid_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Thyroid_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=48mm]{./Thyroid_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Thyroid_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Thyroid, log-sum-exp \label{fig:thyroid_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./wine_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./wine_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./wine_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Wine, power mean \label{fig:wine_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./wine_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./wine_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./wine_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Wine, log-sum-exp \label{fig:wine_exp} }
\vspace{-10pt}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Yeast_K_2_pow_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Yeast_K_2_pow_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Yeast_K_2_pow_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Yeast, power mean \label{fig:yeast_pow} }
\vspace{-10pt}
\centering
\subfloat[Number of clusters \label{fig:under_lim_min_ave}]{
\includegraphics[width=47mm]{./Yeast_K_2_exp_K_compressed.pdf}
}
\subfloat[NMI \label{fig:under_lim_min_max}]{
\includegraphics[width=48mm]{./Yeast_K_2_exp_NMI_compressed.pdf}
}
\subfloat[Maximum distortion \label{fig:under_lim_min_max}]{
\includegraphics[width=47mm]{./Yeast_K_2_exp_maxD_compressed.pdf}
}
\vspace{-10pt}
\caption{Yeast, log-sum-exp \label{fig:yeast_exp} }
\vspace{-10pt}
\end{figure*}
\subsubsection{Result}
Figures \ref{fig:bcw_pow} through \ref{fig:yeast_exp} show the number of clusters, NMI, and maximum distortion as heat maps for the 11 datasets.
We can see that the number of clusters, NMI, and maximum distortion are correlated.
For some datasets, NMI tends to improve when the value of $\beta$ is lowered below $1$.
The tendency is remarkable in ``Breast Cancer Wisconsin'' (Figures \ref{fig:bcw_pow}(b) and \ref{fig:bcw_exp}(b)) and ``HTRU2'' (Figures \ref{fig:htru2_pow}(b) and \ref{fig:htru2_exp}(b)), where NMI increases monotonically as $\beta$ is lowered.
However, NMI worsens if $\beta$ is lowered too much.
Since the number of clusters and the maximum distortion are in a trade-off relationship, comparing the maximum distortion at a fixed number of clusters shows that it tends to decrease as $\beta$ increases.
Theoretically, the larger $\beta$ is, the closer the generalized DP-means is to maximum distortion minimization, and the smaller $\beta$ is, the more robust it is against outliers.
The above results support this fact.
\subsection{Image compression task}
We conducted an experiment to see whether the generalized DP-means is more effective than the original DP-means ($\beta = 1$) through an application of vector quantization to an image compression task.
We used the generalized DP-means with the power mean \eqref{eq:pow} ($a=0$).
In particular, we considered the case where the minimization measure approaches maximum distortion minimization, that is, $\beta$ is sufficiently large.
Although in this experiment, image compression was handled, the purpose was to examine the performance of the generalized DP-means.
Tipping and Sch{\"o}lkopf compared the maximum distortion minimization and the average distortion minimization in an image compression task by using a clustering method called the kernel vector quantization \cite{kvq}.
In this experiment, the same comparison was carried out using the same image (Figure \ref{fig:original}), a color image of size $384 \times 256$.
We obtained data points by dividing it into $8 \times 8$ blocks.
Each data point consisted of $8 \times 8 \times 3 = 192$ dimensions from the block size and the color information.
The uncompressed image is thus represented by a dataset of $384 \times 256/64 = 1536$ points.
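This preprocessing can be sketched as follows (a minimal NumPy illustration of our own; the height-by-width array layout is an assumption, not stated in the text):

```python
import numpy as np

def image_to_blocks(img, block=8):
    """Split an H x W x C image into non-overlapping block x block patches,
    each flattened into a (block * block * C)-dimensional data point."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    x = img.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)            # group the two block axes together
    return x.reshape(-1, block * block * c)

img = np.zeros((256, 384, 3))                 # the 384 x 256 color image (assumed layout)
points = image_to_blocks(img)
print(points.shape)                           # (1536, 192)
```

Each of the $1536$ rows is one $192$-dimensional data point fed to the clustering algorithm.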
Image compression was performed by clustering with Algorithm \ref{extAlg}, using $f$ given by \eqref{eq:f_pow}, while increasing the penalty parameter; $\beta=1$ served as the average distortion minimization and $\beta=200$ as an approximation of the maximum distortion minimization.
As the penalty parameter increases, the number of clusters decreases.
The preprocessing for clustering was the same as the previous experiment.
We chose the squared distance, $d_\phi(\bm{x}, \bm{\theta})={\lVert \bm{x} - \bm{\theta} \rVert}^2$ as the distance measure.
Newton's method, whose convergence is second order, was used to compute the cluster centers.
The value $\beta=200$ was a relatively large value of $\beta$ that did not cause divergence in the computation.
Thus, $\beta=200$ was regarded as an approximation of the maximum distortion minimization.
In this experiment, we focused on whether or not the letter string of the license plate in Figure \ref{fig:original} was recognizable \cite{kvq}.
Figure \ref{fig:limitImage} shows an image compressed to the limit at which the license plate letter string can be read for each of average distortion minimization and maximum distortion minimization.
Further, when the letter string of the license plate can only be read partly, comparison under the same compression ratio is shown in Figure \ref{fig:underLimitImage}.
Figure \ref{fig:limitImage} shows that the maximum distortion minimization achieved a better compression ratio than the average distortion minimization while keeping the whole letter string readable.
In Figure \ref{fig:underLimitImage}, it is possible to read several characters of the license plate in the case of the maximum distortion minimization, while it is almost impossible to read in the case of the average distortion minimization.
In this image compression task focusing on the letter string in the image, the generalized DP-means with a large value of $\beta$, which approaches the maximum distortion minimization, thus performed better than the original DP-means.
The reason why the generalized objective function with a large $\beta$ was effective may be as follows.
Under the average distortion minimization, the license plate, which consists of a small number of patterns within the entire image, tended to have a large distortion from the cluster center to which it belongs; the maximum distortion minimization instead reduced the distortion of the blocks belonging to the license plate.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=60mm]{./Fig2_yellowcar.pdf}
\caption{No compression, compression ratio: 100\%, number of clusters 1536 \protect\cite{kvq}. \label{fig:original}}
\end{center}
\vspace{-25pt}
\end{figure}
\begin{figure*}[htbp]
\begin{center}
\subfloat[Average distortion minimization, compression rate: 5.61\%, number of clusters 86 \label{fig:lim_min_ave}]{
\includegraphics[width=60mm]{./Fig3a_averageDistortionMinimization_K86_beta_1.pdf}
}
\subfloat[Maximum distortion minimization, compression rate: 4.82\%, number of clusters 74 \label{fig:lim_min_max}]{
\includegraphics[width=60mm]{./Fig3b_maxDistortionMinimization_K74_beta_200.pdf}
}
\end{center}
\vspace{-10pt}
\caption{Limited compression that can read the license plate. \label{fig:limitImage} }
\vspace{-10pt}
\begin{center}
\subfloat[Average distortion minimization\label{fig:under_lim_min_ave}]{
\includegraphics[width=60mm]{./Fig4a_string_submarginal_averageDistortionMinimization_K60_beta_1.pdf}
}
\subfloat[Maximum distortion minimization\label{fig:under_lim_min_max}]{
\includegraphics[width=60mm]{./Fig4b_string_submarginal_maxDistortionMinimization_K60_beta_200.pdf}
}
\end{center}
\vspace{-10pt}
\caption{Compression below limit, compression rate: 3.91\%, number of clusters 60. \label{fig:underLimitImage} }
\vspace{-10pt}
\end{figure*}
\section{Discussion: total Bregman divergence \label{sec:tbd}}
So far, our discussion has assumed the Bregman divergence as the pseudo-distance.
The same discussion applies when the total Bregman divergence is used as the pseudo-distance.
The total Bregman divergence is invariant to the rotation of the coordinate axes, and the cluster center obtained by minimizing the average distortion has been shown to be robust to outliers \cite{total-bregman}.
The Bregman divergence is known to have a bijective relationship with the exponential family, whereas the total Bregman divergence corresponds to the lifted exponential family \cite{lifted_bregman}.
The total Bregman divergence is defined by
\begin{align*}
{\rm tBD}\autobrac{\bm{x}, \bm{\theta}} \triangleq \frac{d_\phi(\bm{\theta}, \bm{x})}{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x})\|^2}},
\end{align*}
where $c> 0$ \cite{total-bregman,information-geometry}.
Note that the arguments of $d_\phi$ in the numerator are reversed compared to the $d_\phi\autobrac{\bm{x}, \bm{\theta}}$ in \eqref{eq:def_bregman}.
In the case of $c=1$, it coincides with the definition in \cite{total-bregman}, and when $c=0$, it coincides with the case of the reversed Bregman divergence.
When the total Bregman divergence is used for the pseudo-distance, as in Section \ref{sec:update_rule}, the update rule of the cluster center can be obtained as
\begin{align}
\udrbm{\theta}{k} = \autobrac{\bm{\nabla}\phi}^{-1}\autobrac{\frac{\sum_{i=1}^n w_{ik}\bm{\nabla}\phi(\bm{x}_i)}{\sum_{j=1}^n w_{jk}}}, \label{eq:tbd_update_rule}
\end{align}
\begin{align}
w_{ik} = \frac{r_{ik}f{\,'}\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k}}}{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x}_i)\|^2}}. \label{eq:tbd_weight}
\end{align}
Here, $\autobrac{\bm{\nabla}\phi}^{-1}$ denotes the inverse function of $\bm{\nabla}\phi$.
As in Section \ref{sec:monotonically}, when the function $f$ is concave, Theorem \ref{teiri:tbd_monot} (\ref{sec:tbd_monoto}) holds, which claims the monotonic decreasing property of the objective function.
When the function $f$ is convex, the problem of updating the cluster center is a convex optimization problem, so gradient descent always proceeds in a descent direction toward the global minimum.
The update rules with Newton's method are summarized in \ref{sec:newton_update}.
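To make the update concrete, the following is a minimal sketch (our own illustration, not the authors' implementation) of one update of a cluster center via \eqref{eq:tbd_update_rule} and \eqref{eq:tbd_weight} for the squared distance, taking $\phi(\bm{x})=\|\bm{x}\|^2$ so that $\bm{\nabla}\phi(\bm{x})=2\bm{x}$ and $\autobrac{\bm{\nabla}\phi}^{-1}(\bm{y})=\bm{y}/2$, with the power-mean derivative $f{\,'}(z)=(z+a)^{\beta-1}$ consistent with \eqref{eq:f_pow}:

```python
import numpy as np

def tbd_update(X, theta, r, beta=0.5, a=1.0, c=1.0):
    """One update of a cluster center under the total Bregman divergence
    for phi(x) = ||x||^2.  X: (n, L) data, theta: (L,) current center,
    r: (n,) 0/1 cluster assignments."""
    grad_phi = 2.0 * X                                   # nabla phi(x) = 2x
    denom = np.sqrt(1.0 + c**2 * np.sum(grad_phi**2, axis=1))
    # tBD(x, theta) = d_phi(theta, x) / sqrt(1 + c^2 ||nabla phi(x)||^2);
    # for phi = ||.||^2, d_phi(theta, x) = ||x - theta||^2
    tbd = np.sum((X - theta) ** 2, axis=1) / denom
    f_prime = (tbd + a) ** (beta - 1.0)                  # power-mean f'(z) = (z+a)^(beta-1)
    w = r * f_prime / denom                              # eq. (tbd_weight)
    # (nabla phi)^{-1}( sum_i w_i nabla phi(x_i) / sum_j w_j ) = weighted mean
    return (w @ grad_phi) / (2.0 * w.sum())
```

With $c=0$, $\beta=1$, and $a=0$, the weights reduce to the assignments and the update is the ordinary mean of the assigned points, recovering the original DP-means step.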
The influence function is derived in the same manner as in Section \ref{sec:influence}. As for the boundedness of the influence function, the following theorem holds.
\begin{theorem}\label{thm:tbd_theorem}
The following Condition 1 on the function $f$ is a necessary and sufficient condition for the influence function to be bounded for all $\bm{x}^*$, and Condition 2 is a necessary and sufficient condition for it to be redescending:
\begin{enumerate}
\item $f{\,'}(z)$ is a monotonically decreasing function $\iff$ $f(z)$ is a concave or linear function,
\item \begin{align}\lim_{z\to\infty}f{\,'}(z)=0 . \label{eq:nec_suf_tbd}\end{align}
\end{enumerate}
\end{theorem}
The proof of this theorem is in \ref{sec:tbd_theorem}.
(See Remark \ref{rmk:infl} for the discussion when $f{\,'}\autobrac{{\rm tBD} \autobrac{\bm{x}^*, \bm{\theta}}}\to\infty$ around $\bm{x}^*=\bm{\theta}$.)
Recall that Condition 2 is a necessary condition for the boundedness of the influence function when the standard Bregman divergence $d_\phi\autobrac{\bm{x}, \bm{\theta}}$ is used as the pseudo-distance (Theorem \ref{th:bound}).
\section{Conclusion \label{sec:conclusion}}
In this paper, we generalized the average distortion of DP-means to $f$-separable distortion measures by using a monotonically increasing function $f$.
If the function $f$ has an inverse function $f^{-1}$, the $f$-separable distortion measure can be expressed by $f$-mean.
We classified the function $f$ into three types: linear, convex, and concave.
These three types correspond to the original average distortion, distortion measures approaching the maximum distortion, and distortion measures robust against outliers, respectively.
We presented two families of such functions, each including the parameter $\beta$.
The objective function constituted by these functions can change the characteristics according to the value of the parameter $\beta$.
Furthermore, based on this generalized objective function, an algorithm with guaranteed convergence was constructed.
Like the original DP-means, this algorithm has computational complexity linear in the number of data points.
In order to evaluate the robustness against outliers, we derived the influence function for a general function $f$ and showed the necessary condition for the influence function to be bounded.
For each concrete example of the function $f$, we examined the condition under which the boundedness of the influence function holds.
We proved that the log-sum-exp function yields robustness against outliers regardless of the choice of the Bregman divergence.
Although the above discussion assumes the Bregman divergence as the pseudo-distance, we also showed that the same argument holds true for the total Bregman divergence.
In addition, experiments using real datasets demonstrated that the generalized DP-means improves the performance of the original DP-means.
Our future research will include analysis of the generalization error consisting of the bias and variance in the estimation of the cluster centers.
This will lead to a principled design of a combination of the function $f$ and the Bregman divergence (or pseudo-distance) by investigating the trade-off between the generalization error and the robustness.
\section{Derivation of objective functions through MAD-Bayes}
\subsection{Generalized Gaussian distribution \label{sec:proof_gene_gauss}}
We focus on the objective function with the function $f$ in \eqref{eq:f_pow} ($\beta>0$, $a=0$) and the Bregman divergence as the squared distance $\|\bm{x}-\bm{\theta}\|^2$.
We prove that the objective function of this case is derived from the framework of MAD-Bayes \cite{mad-bayes} when the generalized Gaussian distribution is assumed as a component.
The generalized Gaussian distribution is given by
\begin{align*}
p(\bm{x}|\bm{\theta}, \alpha, \beta)=\frac{1}{C} \exp\autobrac{-\autobrac{ \frac{\|\bm{x}-\bm{\theta}\|^2}{\alpha}}^\beta},
\end{align*}
where the normalization constant is
\begin{align*}
C = \frac{\pi^{\frac{L}{2}}\Gamma(\frac{L}{2\beta})\alpha^{\frac{L}{2}}}{\beta\Gamma(\frac{L}{2})},
\end{align*}
and $\Gamma(\cdot)$ is the gamma function and $L$ is the dimension of the data.
The parameters are $\alpha>0$ and $\beta>0$.
It includes the Laplace distribution ($\beta=\frac{1}{2}$), the Gaussian distribution ($\beta=1$), and the uniform distribution ($\beta\to\infty$) as special cases.
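As a numerical sanity check of this density in one dimension ($L=1$), the following sketch compares a quadrature of the unnormalized density with a closed-form normalizer; the closed form used below, $C=\Gamma\autobrac{\frac{1}{2\beta}}\sqrt{\alpha}/\beta$, is our own computation and should be read as an assumption rather than a quotation of the text:

```python
import math
import numpy as np

def gg_normalizer_1d(alpha, beta):
    # Assumed closed form (our computation) of C = \int_R exp(-(x^2/alpha)^beta) dx
    return math.gamma(1.0 / (2.0 * beta)) * math.sqrt(alpha) / beta

def gg_normalizer_numeric(alpha, beta, lim=50.0, n=400_001):
    # Trapezoidal quadrature of the unnormalized density on [-lim, lim]
    x = np.linspace(-lim, lim, n)
    f = np.exp(-((x**2 / alpha) ** beta))
    dx = x[1] - x[0]
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

# beta = 1 recovers the Gaussian normalizer sqrt(pi * alpha);
# beta = 1/2 recovers the Laplace normalizer 2 * sqrt(alpha)
for alpha, beta in [(1.0, 1.0), (2.0, 0.5), (1.0, 2.0)]:
    assert abs(gg_normalizer_1d(alpha, beta) - gg_normalizer_numeric(alpha, beta)) < 1e-5
```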
The likelihood is given by
\begin{align*}
p(\upbm{x}{n} | \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K) = \prod_{k=1}^{K} \prod_{i:r_{ik}=1} p(\bm{x}_i|\bm{\theta}_k, \alpha, \beta).
\end{align*}
The Chinese restaurant process, the partition distribution induced by the Dirichlet process, is given by
\begin{align*}
p(\bm{r}) = \tau^{K-1} \frac{\Gamma(\tau+1)}{\Gamma(\tau+n)} \prod_{k=1}^{K} (S_{n,k}-1)!,
\end{align*}
where $S_{n,k}=\sum_{i=1}^{n}r_{ik}$ and $\tau>0$ is the hyperparameter \cite{non-parabayse}.
When an arbitrary prior distribution generating the cluster centers is denoted by $p(\bm{\theta}_k)$, the joint distribution of the centers is expressed by $p(\autonami{\udrbm{\theta}{k}}_{k=1}^K)$.
The joint distribution of the data, cluster assignments, and cluster centers is expressed by the following equation:
\begin{align*}
\begin{split}
\MoveEqLeft[1]
p(\upbm{x}{n}, \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K)
= p(\upbm{x}{n} | \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K)p(\bm{r})p(\autonami{\udrbm{\theta}{k}}_{k=1}^K)
\\={}& \prod_{k=1}^{K} \prod_{i:r_{ik}=1} \frac{1}{C} \exp\autobrac{-\autobrac{ \frac{\|\bm{x}_i-\bm{\theta}_k\|^2}{\alpha}}^\beta}
\cdot{} \tau^{K-1} \frac{\Gamma(\tau+1)}{\Gamma(\tau+n)} \prod_{k=1}^{K} (S_{n,k}-1)!
\cdot{} \prod_{k=1}^K p(\theta_k) .
\end{split}
\end{align*}
Then, setting $\tau = \exp\autobrac{-\frac{\lambda^\beta}{\alpha^\beta}}$, we consider the limit $\alpha\to0$.
We have
\begin{align*}
-\ln p(\upbm{x}{n}, \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K) = \sum_{k=1}^K \sum_{i:r_{ik}=1} \left[ O(\ln(\alpha))+\autobrac{ \frac{\|\bm{x}_i-\bm{\theta}_k\|^{2\beta}}{\alpha^\beta}} \right]
+ (K-1)\frac{\lambda^\beta}{\alpha^\beta} + O(1) .
\end{align*}
It follows that
\begin{align*}
-\alpha^\beta\ln p(\upbm{x}{n}, \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K) = \sum_{k=1}^K \sum_{i:r_{ik}=1} \|\bm{x}_i-\bm{\theta}_k\|^{2\beta}
+ (K-1)\lambda^\beta + \alpha^\beta O(\ln(\alpha)).
\end{align*}
Because $\alpha^\beta O(\ln(\alpha))\to0$ as $\alpha\to0$, we obtain the objective function as follows:
\begin{align*}
\sum_{k=1}^K \sum_{i:r_{ik}=1} \|\bm{x}_i-\bm{\theta}_k\|^{2\beta}
+ (K-1)\lambda^\beta .
\end{align*}
This objective function is equivalent to the objective function \eqref{eq:generalized_f} with the function $f$ in \eqref{eq:f_pow} ($\beta\neq0$, $a=0$) as follows:
\begin{align*}
\sum_{i=1}^n \|\bm{x}_i-\bm{\theta}_{c(i)}\|^{2\beta}+ \lambda^\beta K +O(1).
\end{align*}
Note, however, that $\beta$ must be positive in the generalized Gaussian distribution.
\subsection{Deformed $t$-distribution \label{sec:proof_t_dist}}
We consider the same objective function as that in \ref{sec:proof_gene_gauss} except that $\beta=0$ and $a>0$ instead of $\beta \neq 0$.
We prove that the objective function of this case is derived from the framework of MAD-Bayes when the deformed $t$-distribution is assumed as a component.
The $t$-distribution is
\begin{align*}
p(\bm{x}|\bm{\theta}, \nu) = \frac{\Gamma(\frac{\nu}{2}+\frac{L}{2})}{\Gamma(\frac{\nu}{2})(\nu\pi)^\frac{L}{2}}\left[1+\frac{\|\bm{x}-\bm{\theta}\|^2}{\nu}\right]^{-\frac{\nu+L}{2}},
\end{align*}
where $\nu>0$ is the degree of freedom and $L$ is the dimension of the data.
It includes the Cauchy distribution ($\nu=1$) and the Gaussian distribution ($\nu\to\infty$) as special cases.
Here, we use the following distribution obtained by transforming this $t$-distribution:
\begin{align*}
p(\bm{x}|\bm{\theta}, \nu, \sigma^2) = \frac{1}{C}\left[1+\frac{\|\bm{x}-\bm{\theta}\|^2}{\nu}\right]^{-\frac{\nu+L}{2\sigma^2}},
\end{align*}
where the normalization constant is
\begin{align*}
C = \frac{\Gamma(\frac{\nu+L}{2\sigma^2}-\frac{L}{2})(\nu\pi)^\frac{L}{2}}{\Gamma(\frac{\nu+L}{2\sigma^2})}.
\end{align*}
The likelihood is given by
\begin{align*}
p(\upbm{x}{n} | \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K) = \prod_{k=1}^{K} \prod_{i:r_{ik}=1} p(\bm{x}_i|\bm{\theta}_k, \nu, \sigma^2) .
\end{align*}
As in \ref{sec:proof_gene_gauss}, the joint distribution of the data, cluster assignments, and cluster centers is expressed by the following equation:
\begin{align*}
\begin{split}
\MoveEqLeft[1]
p(\upbm{x}{n}, \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K)
= p(\upbm{x}{n} | \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K)p(\bm{r})p(\autonami{\udrbm{\theta}{k}}_{k=1}^K)
\\={}& \prod_{k=1}^{K} \prod_{i:r_{ik}=1} p(\bm{x}_i|\bm{\theta}_k, \nu, \sigma^2)
\cdot{} \tau^{K-1} \frac{\Gamma(\tau+1)}{\Gamma(\tau+n)} \prod_{k=1}^{K} (S_{n,k}-1)!
\cdot{} \prod_{k=1}^K p(\theta_k) .
\end{split}
\end{align*}
Then, setting $\tau = (\nu+\lambda)^{-\frac{\nu+L}{2\sigma^2}}$, we consider the limit $\sigma^2\to0$.
We have
\begin{align*}
\begin{split}
\MoveEqLeft[1]
-\ln p(\upbm{x}{n}, \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K)
\\={}& \sum_{k=1}^K \sum_{i:r_{ik}=1} \left[ \ln C +\frac{\nu+L}{2\sigma^2}\ln\autobrac{1+\frac{\|\bm{x}_i-\bm{\theta}_k\|^{2}}{\nu}} \right]
+ \frac{\nu+L}{2\sigma^2}(K-1)\ln(\nu+\lambda) + O(1) .
\end{split}
\end{align*}
It follows that
\begin{align*}
\begin{split}
\MoveEqLeft[1]
-\frac{2\sigma^2}{\nu+L} \ln p(\upbm{x}{n}, \bm{r}, \autonami{\udrbm{\theta}{k}}_{k=1}^K)
\\={}& \sum_{k=1}^K \sum_{i:r_{ik}=1} \left[ \frac{2\sigma^2}{\nu+L} \ln C +\ln\autobrac{1+\frac{\|\bm{x}_i-\bm{\theta}_k\|^{2}}{\nu}} \right]
+ (K-1)\ln(\nu+\lambda)
\\={}& \sum_{k=1}^K \sum_{i:r_{ik}=1} \left[ \frac{2\sigma^2}{\nu+L} \ln C +\ln\frac{1}{\nu}+ \ln\autobrac{\nu+\|\bm{x}_i-\bm{\theta}_k\|^{2}} \right]
+ (K-1)\ln(\nu+\lambda).
\end{split}
\end{align*}
Because $\sigma^2\ln C$ converges to a constant as $\sigma^2\to0$, we obtain the following objective function:
\begin{align*}
\sum_{k=1}^K \sum_{i:r_{ik}=1} \ln\autobrac{\nu+\|\bm{x}_i-\bm{\theta}_k\|^{2}}
+ (K-1)\ln(\nu+\lambda) .
\end{align*}
We put $\nu=a$. This objective function is equivalent to the objective function \eqref{eq:generalized_f} with the function $f$ in \eqref{eq:f_pow} ($\beta=0$, $a\geq0$) as follows:
\begin{align*}
\sum_{i=1}^n \ln\autobrac{\|\bm{x}_i-\bm{\theta}_{c(i)}\|^{2}+a}+ \ln\autobrac{\lambda+a} K .
\end{align*}
Note, however, that $a$ must not be equal to 0 in the deformed $t$-distribution.
\section{Update rules with Newton's method \label{sec:newton_update}}
The cluster center is updated with Newton's method as follows:
\begin{align}
\bm{\theta}_k = \bm{\theta}_k - \left[\bm{\nabla\nabla}\bar{L}_f(\bm{\theta}_k)\right]^{-1}\bm{\nabla}\bar{L}_f(\bm{\theta}_k), \label{eq:newton_rule}
\end{align}
where the gradient vector and the Hessian matrix are given by
\begin{align}
&\bm{\nabla}\bar{L}_f(\bm{\theta}_k) = \sum_{i=1}^n f{\,'}\left(d_\phi \left(\bm{x}_i, \bm{\theta}_k\right)\right) \bm{\nabla}d_\phi\left(\bm{x}_i, \bm{\theta}_k\right), \label{eq:loss_grad}\\
\begin{split}
\MoveEqLeft
\bm{\nabla\nabla}\bar{L}_f(\bm{\theta}_k) = \sum_{i=1}^n f{\,''}\left(d_\phi \left(\bm{x}_i, \bm{\theta}_k\right)\right)\bm{\nabla}d_\phi\left(\bm{x}_i, \bm{\theta}_k\right)\left[\bm{\nabla}d_\phi\left(\bm{x}_i, \bm{\theta}_k\right)\right]^{\rm T}\\
&+\sum_{i=1}^n f{\,'}\left(d_\phi \left(\bm{x}_i, \bm{\theta}_k\right)\right)\bm{\nabla\nabla}d_\phi\left(\bm{x}_i, \bm{\theta}_k\right),
\end{split} \label{eq:loss_hess}
\end{align}
respectively.
The gradient vector and the Hessian matrix of the Bregman divergence are given by
\begin{align}
\bm{\nabla}d_\phi\left(\bm{x}, \bm{\theta}\right) = -\bm{\nabla\nabla}\phi(\bm{\theta})\left(\bm{x} - \bm{\theta}\right), \label{eq:breg_grad} \\
\bm{\nabla\nabla}d_\phi\left(\bm{x}, \bm{\theta}\right) = \bm{\nabla\nabla}\phi(\bm{\theta})-\bm{\nabla\nabla\nabla}\phi(\bm{\theta})(\bm{x}-\bm{\theta}) \label{eq:breg_hess} .
\end{align}
If the Bregman divergence is additive with respect to the dimension of data points, $d_\phi(\bm{x}, \bm{\theta}) = \sum_{l=1}^L d_\phi\left(x^{(l)}, \theta^{(l)}\right)
$, the gradient \eqref{eq:breg_grad} and hessian matrix \eqref{eq:breg_hess} are expressed simply.
The $l$-th element of the $L$-dimensional column vector \eqref{eq:breg_grad} is given by
\begin{align*}
\frac{\partial d_\phi(\bm{x}, \bm{\theta})}{\partial \theta^{(l)}} = -\phi{\,''}\left(\theta^{(l)}\right)\left(x^{(l)}-\theta^{(l)}\right).
\end{align*}
The Hessian matrix \eqref{eq:breg_hess} is a diagonal matrix and its $ll$-th element is given by
\begin{align*}
\frac{\partial^2 d_\phi(\bm{x}, \bm{\theta})}{\partial {\theta^{(l)}}^2} = -\phi{\,'''}\left(\theta^{(l)}\right)\left(x^{(l)}-\theta^{(l)}\right)+\phi{\,''}\left(\theta^{(l)}\right) .
\end{align*}
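For concreteness, the Newton step \eqref{eq:newton_rule} can be sketched in one dimension for the squared distance, $\phi(\theta)=\theta^2$, so that $d_\phi(x,\theta)=(x-\theta)^2$ with gradient $-2(x-\theta)$ and Hessian $2$; this is our own illustration, using the power-mean derivatives $f{\,'}(z)=(z+a)^{\beta-1}$ and $f{\,''}(z)=(\beta-1)(z+a)^{\beta-2}$ consistent with \eqref{eq:f_pow}:

```python
import numpy as np

def newton_step(x, theta, beta=0.5, a=1.0):
    """One Newton update of a scalar cluster center for d(x, th) = (x - th)^2
    with the power-mean generator (f'(z) = (z + a)^(beta - 1))."""
    d = (x - theta) ** 2
    fp = (d + a) ** (beta - 1.0)                  # f'(z)
    fpp = (beta - 1.0) * (d + a) ** (beta - 2.0)  # f''(z)
    gd = -2.0 * (x - theta)                       # gradient of d w.r.t. theta
    hd = 2.0                                      # Hessian of d w.r.t. theta
    grad = np.sum(fp * gd)                        # eq. (loss_grad)
    hess = np.sum(fpp * gd**2 + fp * hd)          # eq. (loss_hess)
    return theta - grad / hess
```

With $\beta=1$ and $a=0$ (the average distortion), the Hessian is constant and a single Newton step lands exactly on the sample mean (taking the initial $\theta$ distinct from the data points, to avoid a $0^{-1}$ in the $f{\,''}$ term).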
In the case of the total Bregman divergence, $d_\phi(\bm{x},\bm{\theta})$ in \eqref{eq:loss_grad} and \eqref{eq:loss_hess} is replaced with ${\rm tBD}(\bm{x}, \bm{\theta})$ and the cluster center is updated by \eqref{eq:newton_rule}.
The gradient vector and the Hessian matrix of the total Bregman divergence are given by
\begin{align}
\bm{\nabla}{\rm tBD}(\bm{x}, \bm{\theta}) = \frac{\bm{\nabla}\phi(\bm{\theta})-\bm{\nabla}\phi(\bm{x})}{\sqrt{1+c^2\left\|\bm{\nabla}\phi(\bm{x})\right\|^2}}, \label{eq:tbd_grad} \\
\bm{\nabla\nabla}{\rm tBD}(\bm{x}, \bm{\theta}) = \frac{\bm{\nabla\nabla}\phi(\bm{\theta})}{\sqrt{1+c^2\left\|\bm{\nabla}\phi(\bm{x})\right\|^2}}. \label{eq:tbd_hess}
\end{align}
Similarly, if the total Bregman divergence is additive with respect to the dimension of data points, \eqref{eq:tbd_grad} and \eqref{eq:tbd_hess} are expressed simply.
The $l$-th element of the $L$-dimensional column vector \eqref{eq:tbd_grad} is given by
\begin{align*}
\frac{\partial {\rm tBD}(\bm{x}, \bm{\theta})}{\partial \theta^{(l)}} =\frac{\phi{\,'}(\theta^{(l)}) - \phi{\,'}(x^{(l)})}{\sqrt{1+c^2\left\|\bm{\nabla}\phi(\bm{x})\right\|^2}}.
\end{align*}
The Hessian matrix \eqref{eq:tbd_hess} is a diagonal matrix and its $ll$-th element is given by
\begin{align*}
\frac{\partial^2 {\rm tBD}(\bm{x}, \bm{\theta})}{\partial {\theta^{(l)}}^2} = \frac{\phi{\,''}(\theta^{(l)})}{\sqrt{1+c^2\left\|\bm{\nabla}\phi(\bm{x})\right\|^2}} .
\end{align*}
\section{Plots of influence functions \label{sec:plot_if}}
The Bregman divergence corresponding to the binomial distribution is given by
\begin{align}
d_\phi(x,\theta) = x\ln \autobrac{\frac{x}{\theta}}+(N-x)\ln \autobrac{\frac{N-x}{N-\theta}} \label{eq:binomial}
\end{align}
where $N$ is a non-negative integer and $x\in\{0,1,\ldots,N\}$ \cite{clustering-bregman}.
In this section, we call equation \eqref{eq:binomial} the ``binomial-loss''.
In the following, \eqref{eq:if} in the one-dimensional case is illustrated as a function of $x^*$ for each Bregman divergence, using the power mean and the log-sum-exp.
It is 0 at $x^*=\theta$.
The tendencies of the influence functions as discussed in Section \ref{sec:pow} and Section \ref{sec:lse} for different $f$ and Bregman divergences can be seen.
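These tendencies can also be checked numerically. For the one-dimensional squared distance ($\phi{\,''}=2$), \eqref{eq:if} reduces, up to a constant, to $f{\,'}\autobrac{(x^*-\theta)^2}\cdot2(x^*-\theta)$; the following sketch (our own, with the power-mean derivative $f{\,'}(z)=(z+a)^{\beta-1}$) illustrates the divergent, bounded, and redescending regimes around the threshold $\beta=\frac{1}{2}$, i.e.\ $1-\frac{1}{\alpha}$ with $\alpha=2$:

```python
import numpy as np

def influence_1d(x_star, theta=0.0, beta=0.0, a=1.0):
    """|IF| (up to a constant) for the squared distance in one dimension,
    with the power-mean derivative f'(z) = (z + a)^(beta - 1)."""
    d = (x_star - theta) ** 2
    return np.abs((d + a) ** (beta - 1.0) * 2.0 * (x_star - theta))

x = np.array([1e2, 1e4, 1e6])
print(influence_1d(x, beta=0.75))  # divergent: grows without bound
print(influence_1d(x, beta=0.50))  # bounded: approaches 2
print(influence_1d(x, beta=0.00))  # redescending: decays to 0
```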
\begin{figure*}[htbp]
\begin{center}
\subfloat[$\theta=50$, $a=1$, $N=100$]{
\includegraphics[width=6cm]{Fig5a_binomial_a=1.pdf}
}
\subfloat[$\theta=50$, $a=0$, $N=100$]{
\includegraphics[width=6cm]{Fig5b_binomial_a=0.pdf}
}
\end{center}
\caption{Power mean and binomial-loss.}
\begin{center}
\subfloat[$\theta=0$, $a=1$]{
\includegraphics[width=6cm]{Fig6a_squared_a=1.pdf}
}
\subfloat[$\theta=0$, $a=0$]{
\includegraphics[width=6cm]{Fig6b_squared_a=0.pdf}
}
\end{center}
\caption{Power mean and squared distance.}
\begin{center}
\subfloat[$\theta=100$, $a=1$]{
\includegraphics[width=6cm]{Fig7a_kl_a=1.pdf}
}
\subfloat[$\theta=100$, $a=0$]{
\includegraphics[width=6cm]{Fig7b_kl_a=0.pdf}
}
\end{center}
\caption{Power mean and generalized KL divergence.}
\end{figure*}
\begin{figure*}
\begin{center}
\subfloat[$\theta=1000$, $a=1$]{
\includegraphics[width=6cm]{Fig8a_itakura_a=1.pdf}
}
\subfloat[$\theta=1000$, $a=0.1$]{
\includegraphics[width=6cm]{Fig8b_itakura_a=0.pdf}
}
\end{center}
\caption{Power mean and Itakura-Saito divergence.}
\end{figure*}
\begin{figure*}
\begin{center}
\subfloat[$\theta=0$, $a=1$]{
\includegraphics[width=6cm]{Fig9a_exp_loss_a=1.pdf}
}
\subfloat[$\theta=0$, $a=0$]{
\includegraphics[width=6cm]{Fig9b_exp_loss_a=0.pdf}
}
\end{center}
\caption{Power mean and exp-loss.}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\subfloat[Binomial-loss, $\theta=50$, $N=100$]{
\includegraphics[width=6cm]{Fig10a_expType_binomial.pdf}
}
\subfloat[Squared distance, $\theta=0$]{
\includegraphics[width=6cm]{Fig10b_expType_squared.pdf}
}
\end{center}
\begin{center}
\subfloat[Generalized KL divergence, $\theta=100$]{
\includegraphics[width=6cm]{Fig10c_expType_kl.pdf}
}
\subfloat[Itakura-Saito divergence, $\theta=1000$]{
\includegraphics[width=6cm]{Fig10d_expType_itakura.pdf}
}
\end{center}
\begin{center}
\subfloat[Exp-loss, $\theta=0$]{
\includegraphics[width=6cm]{Fig10e_expType_exp_loss.pdf}
}
\end{center}
\vspace{-10pt}
\caption{Log-sum-exp.}
\vspace{-10pt}
\end{figure*}
\section{Proof of bounded influence function}
\subsection{Proof of Lemma \ref{lem:lem}}
Under Assumption \ref{asm:asm}, the following lemma holds.
\begin{lemma} \label{lem:lem}
For $\tilde{\bm{x}}=(\theta^{(1)}, \cdots, \theta^{(l-1)}, {x^*}^{(l)}, \theta^{(l+1)}, \cdots, \theta^{(L)})^{\rm T}$, let
\begin{align}
\tilde{{\rm IF}}_l = \left| \lim_{|{x^*}^{(l)}|\to\infty} f{\,'}\autobrac{d_\phi \autobrac{\tilde{\bm{x}}, \bm{\theta}}}({x^*}^{(l)}-\theta^{(l)}) \right|, \label{eq:lemma}
\end{align}
be the influence function of the $l$-th dimension.
Then, it holds that
\begin{align*}
\lim_{\|\bm{x}^*\|\to\infty} \|\bm{{\rm IF}}(\upbm{x}{*})\|=
\begin{cases}
\infty & \text{{\rm if} $\exists l$, $\tilde{{\rm IF}}_l$ {\rm is divergent}} , \\
{\rm constant} & \text{{\rm if} $\forall l$, $\tilde{{\rm IF}}_l$ {\rm is bounded}}, \\
0 & \text{{\rm if} $\forall l$, $\tilde{{\rm IF}}_l$ {\rm is} $0$} .
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
Let the $ij$-th component of the matrix $\bm{\nabla}\bm{\nabla}\phi\autobrac{\bm{\theta}}$ be $b_{ij}$, and $\bm{\nabla}\bm{\nabla}\phi\autobrac{\bm{\theta}}(\bm{x}^* - \bm{\theta})=\bm{u}=(u^{(1)}, \cdots, u^{(L)})^{\rm T}\in \mathbb{R}^L$.
It follows that
\begin{align*}
\left(
\begin{array}{ccc}
b_{11} & \ldots & b_{1L} \\
\vdots & \ddots & \vdots \\
b_{L1} & \ldots & b_{LL}
\end{array}
\right)
\left(
\begin{array}{c}
{x^{*}}^{(1)}-\theta^{(1)} \\
\vdots \\
{x^{*}}^{(L)}-\theta^{(L)} \end{array}
\right)
=
\left(
\begin{array}{c}
u^{(1)} \\
\vdots \\
u^{(L)} \end{array}
\right),
\end{align*}
where
\begin{align*}
u^{(j)} = \sum_{l=1}^L b_{jl}({x^{*}}^{(l)}-\theta^{(l)}) .
\end{align*}
If the norm of \eqref{eq:if} is bounded, the norm of the influence function is bounded.
Here, the norm of \eqref{eq:if} is
\begin{align*}
\begin{split}
\MoveEqLeft
\left\|f{\,'}\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} \bm{\nabla}\bm{\nabla}\phi\autobrac{\bm{\theta}}(\bm{x}^* - \bm{\theta})\right\|
= \sqrt{\sum_{j=1}^L \left|f{\,'}\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} u^{(j)} \right|^2} \\
= {}&\sqrt{\sum_{j=1}^L \left|\sum_{l=1}^L b_{jl} f{\,'}\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} ({x^{*}}^{(l)}-\theta^{(l)}) \right|^2} .
\end{split}
\end{align*}
Hence, we have
\begin{align*}
\lim_{\|\bm{x}^*\|\to\infty} \left\|f{\,'}\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} \bm{\nabla}\bm{\nabla}\phi\autobrac{\bm{\theta}}(\bm{x}^* - \bm{\theta})\right\| \\
=\sqrt{\sum_{j=1}^L \left|\sum_{l=1}^L b_{jl} \lim_{\|\bm{x}^*\|\to\infty} f{\,'}\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} ({x^{*}}^{(l)}-\theta^{(l)}) \right|^2}
\end{align*}
when $\|\bm{x}^*\|\to\infty$.
If the function $f$ satisfies \eqref{eq:limz} of Theorem \ref{th:bound}, which implies the concavity of $f$, the following holds:
\begin{align*}
\left| f{\,'}\autobrac{d_\phi \autobrac{\tilde{\bm{x}}, \bm{\theta}}} ({x^{*}}^{(l)}-\theta^{(l)}) \right|
\geq
\left| f{\,'}\autobrac{d_\phi \autobrac{\bm{x}^*, \bm{\theta}}} ({x^{*}}^{(l)}-\theta^{(l)}) \right| .
\end{align*}
Thus, the bounded and redescending properties of the left hand side as $|{x^*}^{(l)}|\to\infty$ imply those of the right hand side as $\|\bm{x}^*\|\to\infty$, respectively.
This means that if $\tilde{{\rm IF}}_l$ is bounded or converging to 0 for all $l$, so is $\lim_{\|\bm{x}^*\|\to\infty} \|\bm{{\rm IF}}(\upbm{x}{*})\|$.
If $\tilde{{\rm IF}}_l = \infty$ for some $l$, putting $\bm{x}^*=\tilde{\bm{x}}$ and taking the limit $|{x^*}^{(l)}|\to\infty$, we have $\lim_{\|\bm{x}^*\|\to\infty} \|\bm{{\rm IF}}(\upbm{x}{*})\|=\infty$.
\qed
\end{proof}
\subsection{Power mean}
\subsubsection{Proof of Theorem \ref{th:pow}: redescending property for power mean \label{sec:pow_redec}}
We evaluate expression \eqref{eq:lemma} of Lemma \ref{lem:lem} for the function $f$ in \eqref{eq:f_pow} ($\beta<0$).
It follows from l'H\^{o}pital's rule that
\begin{align*}
\begin{split}
\MoveEqLeft
\tilde{{\rm IF}}_l=
\left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{{x^*}^{(l)} - \theta^{(l)}}{\left[d_\phi \autobrac{\tilde{\bm{x}}, \bm{\theta}}+a\right]^{1-\beta}} \right| \\
= {}& \left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{1}{(1-\beta)\left[d_\phi \autobrac{\tilde{\bm{x}}, \bm{\theta}}+a\right]^{-\beta} \autobrac{\frac{\partial \phi(\tilde{\bm{x}})}{\partial {x^{*}}^{(l)}}-\frac{\partial \phi(\bm{\theta})}{\partial {\theta}^{(l)}}}} \right|= 0 .
\end{split}
\end{align*}
Therefore, from Lemma \ref{lem:lem}, the redescending property holds.
\subsubsection{$\alpha$-divergence \label{sec:beta_proof}}
Here, since the $\alpha$-divergence is additively defined, it holds that $d_\phi\autobrac{\tilde{\bm{x}}, \bm{\theta}} = d_\alpha({x^*}^{(l)}, \theta^{(l)})$.
Then, the $\alpha$-divergence is expressed as
\begin{align*}
d_\alpha(x^{(l)}, \theta^{(l)}) =
\begin{cases}
x^{(l)} + O({x^{(l)}}^{\alpha}) & \alpha<1\\
{x^{(l)}}\ln(x^{(l)}) + O(x^{(l)}) & \alpha=1\\
{x^{(l)}}^\alpha + O(x^{(l)}) & \alpha>1 .
\end{cases}
\end{align*}
We evaluate expression \eqref{eq:lemma} of Lemma \ref{lem:lem} for the function $f$ in \eqref{eq:f_pow}:
\begin{align}
\MoveEqLeft
\tilde{{\rm IF}}_l= \left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{{x^*}^{(l)} - \theta^{(l)}}{\left[d_\alpha({x^*}^{(l)}, \theta^{(l)})+a\right]^{1-\beta}} \right| \nonumber \\
= {}& \left| \left[ \lim_{|{x^*}^{(l)}| \to \infty} \frac{\autobrac{{x^*}^{(l)} - \theta^{(l)}}^{\frac{1}{1-\beta}} } {d_\alpha({x^*}^{(l)}, \theta^{(l)})+a}\right]^{1-\beta}\right|. \label{eq:beta_eval}
\end{align}
\paragraph{1. $\alpha<1$} $\\$ \noindent
It follows from \eqref{eq:beta_eval} that
\begin{align*}
\MoveEqLeft
\tilde{{\rm IF}}_l=\left| \left[ \lim_{|{x^*}^{(l)}| \to \infty} \frac{\alpha(\alpha-1) \autobrac{{x^*}^{(l)} - \theta^{(l)}}^{\frac{1}{1-\beta}} } { {{x^*}^{(l)}}+O({{{x^*}^{(l)}}}^\alpha)}\right]^{1-\beta}\right| \\
= {}& \left| \left[ \alpha(\alpha-1) \lim_{|{x^*}^{(l)}| \to \infty} \frac{\autobrac{{x^*}^{(l)} - \theta^{(l)}}^{\frac{1}{1-\beta}} } { {x^*}^{(l)}} \lim_{|{x^*}^{(l)}| \to \infty} \frac{1} {1+ O({{{x^*}^{(l)}}}^{(\alpha-1)})}\right]^{1-\beta}\right| \\
= {}& \left| \left[ \alpha(\alpha-1) \lim_{|{x^*}^{(l)}| \to \infty} \left[\frac{{x^*}^{(l)} - \theta^{(l)} } { {{x^*}^{(l)}}^{(1-\beta)}}\right]^{\frac{1}{1-\beta}} \right]^{1-\beta}\right| .
\end{align*}
Therefore, it holds that
\begin{align*}
&\begin{cases}
1>1-\beta & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm divergent}, \\
1=1-\beta & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm bounded}, \\
1<1-\beta & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm redescending},
\end{cases}
\\ \iff
&\begin{cases}
\beta>0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm divergent}, \\
\beta = 0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm bounded}, \\
\beta <0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm redescending} .
\end{cases}
\end{align*}
\paragraph{2. $\alpha=1$ (generalized Kullback-Leibler divergence)} $\\$ \noindent
It follows from \eqref{eq:beta_eval} that
\begin{align*}
\MoveEqLeft
\tilde{{\rm IF}}_l=\left| \left[ \lim_{|{x^*}^{(l)}| \to \infty} \frac{ \autobrac{{x^*}^{(l)} - \theta^{(l)}}^{\frac{1}{1-\beta}} } { {x^*}^{(l)} \ln({x^*}^{(l)}) + O({x^*}^{(l)})}\right]^{1-\beta}\right| \\
= {}& \left| \left[ \lim_{|{x^*}^{(l)}| \to \infty} \frac{\autobrac{{x^*}^{(l)} - \theta^{(l)}}^{\frac{1}{1-\beta}} } { {x^*}^{(l)} \ln({x^*}^{(l)})} \lim_{|{x^*}^{(l)}| \to \infty} \frac{1} { 1+O(\ln({{x^*}^{(l)}})^{-1})}\right]^{1-\beta}\right| \\
= {}& \left| \left[ \lim_{|{x^*}^{(l)}| \to \infty} \left[\frac{{x^*}^{(l)} - \theta^{(l)} } { {{x^*}^{(l)}}^{1-\beta} {\ln({x^*}^{(l)})}^{1-\beta} }\right]^{\frac{1}{1-\beta}} \right]^{1-\beta}\right| .
\end{align*}
Therefore, it holds that
\begin{align*}
\begin{cases}
\beta >0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm divergent}, \\
\beta \leq 0 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm redescending} .
\end{cases}
\end{align*}
\paragraph{3. $\alpha>1$} $\\$ \noindent
It follows from \eqref{eq:beta_eval} that
\begin{align*}
\MoveEqLeft
\tilde{{\rm IF}}_l=\left| \left[ \lim_{|{x^*}^{(l)}| \to \infty} \frac{\alpha(\alpha-1) \autobrac{{x^*}^{(l)} - \theta^{(l)}}^{\frac{1}{1-\beta}} } { {{x^*}^{(l)}}^\alpha+O({{x^*}^{(l)}})}\right]^{1-\beta}\right| \\
= {}& \left| \left[ \alpha(\alpha-1) \lim_{|{x^*}^{(l)}| \to \infty} \frac{\autobrac{{x^*}^{(l)} - \theta^{(l)}}^{\frac{1}{1-\beta}} } { {{x^*}^{(l)}}^\alpha} \lim_{|{x^*}^{(l)}| \to \infty} \frac{1} { 1+ O({{{x^*}^{(l)}}}^{(1-\alpha)})}\right]^{1-\beta}\right| \\
= {}& \left| \left[ \alpha(\alpha-1) \lim_{|{x^*}^{(l)}| \to \infty} \left[\frac{{x^*}^{(l)} - \theta^{(l)} } { {{x^*}^{(l)}}^{\alpha(1-\beta)}}\right]^{\frac{1}{1-\beta}} \right]^{1-\beta}\right| .
\end{align*}
Therefore, it holds that
\begin{align*}
&\begin{cases}
\alpha(1-\beta)<1 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm divergent}, \\
\alpha(1-\beta)=1 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm bounded}, \\
\alpha(1-\beta)>1 & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm redescending},
\end{cases}
\\ \iff
&\begin{cases}
\beta>1-\frac{1}{\alpha} & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm divergent}, \\
\beta = 1-\frac{1}{\alpha} &\Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm bounded}, \\
\beta <1-\frac{1}{\alpha} & \Rightarrow \|\bm{{\rm IF}}(\upbm{x}{*})\| \; {\rm is} \; {\rm redescending} .
\end{cases}
\end{align*}
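As a numerical sanity check (not part of the proof; the values of $\alpha$, $\beta$, and $\theta$ are arbitrary), the inner ratio $({x^*}^{(l)} - \theta^{(l)})/{{x^*}^{(l)}}^{\alpha(1-\beta)}$ from the last display can be evaluated for growing $x$ to see the three regimes:

```python
# Illustrative check of the three regimes governed by alpha*(1 - beta):
# the inner ratio |x - theta| / x**(alpha*(1 - beta)) for growing x.
def scaled_if(x, theta=1.0, alpha=2.0, beta=0.5):
    return abs(x - theta) / x ** (alpha * (1.0 - beta))

xs = [10.0 ** k for k in range(2, 6)]
vals_bounded = [scaled_if(x, alpha=2.0, beta=0.5) for x in xs]  # alpha(1-beta) = 1:  bounded
vals_redesc  = [scaled_if(x, alpha=3.0, beta=0.5) for x in xs]  # alpha(1-beta) > 1:  redescending
vals_diverg  = [scaled_if(x, alpha=2.0, beta=0.6) for x in xs]  # alpha(1-beta) < 1:  divergent
```

The bounded sequence approaches a constant, the redescending one decays toward zero, and the divergent one grows without bound, consistent with the case analysis above.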
\subsubsection{Exp-loss \label{sec:exp-loss}}
We investigate the boundedness of the influence function when the function $f$ is given by \eqref{eq:f_pow} and the convex function generating the Bregman divergence is the exponential function.
For the convex function
\begin{align*}
\phi(x)=\exp(x),
\end{align*}
the corresponding Bregman divergence is given by \cite{clustering-bregman},
\begin{align*}
d_\phi\autobrac{x, \theta} = \exp(x)-\exp(\theta)-\exp(\theta)(x-\theta) .
\end{align*}
For multidimensional data, we additively define the divergence as follows:
\begin{align}
d_\phi\autobrac{\bm{x}, \bm{\theta}} = \sum_{l=1}^L d_\phi\autobrac{x^{(l)}, \theta^{(l)}} , \label{eq:addtive_loss_func}\\
\phi(\bm{x})=\sum_{l=1}^L \phi(x^{(l)}) \nonumber.
\end{align}
Here, since \eqref{eq:addtive_loss_func} is defined additively, only the $l$-th term of $d_\phi\autobrac{\tilde{\bm{x}}, \bm{\theta}}$ depends on the outlying coordinate, so in the limit it behaves as $d_\phi({x^*}^{(l)}, \theta^{(l)})$.
We evaluate expression \eqref{eq:lemma} of Lemma \ref{lem:lem} for the function $f$ in \eqref{eq:f_pow} as follows:
\begin{align}
\MoveEqLeft
\tilde{{\rm IF}}_l=\left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{{x^*}^{(l)} - \theta^{(l)}}{\left[d_\phi\autobrac{\tilde{\bm{x}}, \bm{\theta}}+a\right]^{1-\beta}} \right| \nonumber \\
= {}& \left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{1}{(1-\beta)\left[d_\phi\autobrac{\tilde{\bm{x}}, \bm{\theta}}+a\right]^{-\beta} \autobrac{\exp({x^{*}}^{(l)})-\exp({\theta}^{(l)})}} \right| \label{eq:exp_bregman}\\
= {}& \left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{\beta\left[d_\phi\autobrac{\tilde{\bm{x}}, \bm{\theta}}+a\right]^{\beta-1}\autobrac{\exp({x^{*}}^{(l)})-\exp({\theta}^{(l)})}}{(1-\beta) \autobrac{\exp({x^{*}}^{(l)})-\exp({\theta}^{(l)})}} \right| \nonumber \\
= {}& \left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{\beta\left[d_\phi\autobrac{\tilde{\bm{x}}, \bm{\theta}}+a\right]^{\beta-1}}{(1-\beta)} \right| \nonumber \\
= {}& \left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{\beta}{(1-\beta)\left[d_\phi\autobrac{\tilde{\bm{x}}, \bm{\theta}}+a\right]^{1-\beta}} \right| \label{eq:exp_bregman0} .
\end{align}
When $\beta<1$ and $\beta\neq 0$, \eqref{eq:exp_bregman0} equals $0$; hence, by Lemma \ref{lem:lem}, the redescending property holds.
When $\beta=0$, it follows from \eqref{eq:exp_bregman} that:
\begin{align*}
\frac{1}{\exp({x^{*}}^{(l)})-\exp({\theta}^{(l)})}
=\begin{cases}
0 & {\rm if} \; {x^*}^{(l)} \to \infty, \\
-\exp(-{\theta}^{(l)}) & {\rm if} \; {x^*}^{(l)} \to -\infty . \\
\end{cases}
\end{align*}
That is, when $\beta=0$ and at least one element of $\bm{x}^*$ satisfies ${x^*}^{(l)} \to -\infty$, the influence function is bounded but not redescending.
If there is no such element, the influence function satisfies the redescending property.
The results are summarized as follows:
\begin{align*}
\begin{cases}
\beta=0 & \begin{cases}{\rm redescending} & {\rm if} \not\exists l, {x^*}^{(l)} \to -\infty, \\
{\rm bounded} & {\rm if} \quad \exists l, {x^*}^{(l)} \to -\infty, \\ \end{cases} \\
0<\beta<1 & {\rm redescending}, \\
\beta<0 & {\rm redescending} . \\
\end{cases}
\end{align*}
\subsection{Proof of Theorem \ref{th:lse}: redescending property for log-sum-exp \label{sec:lse_redec}}
We evaluate expression \eqref{eq:lemma} of Lemma \ref{lem:lem} for the function $f$ in \eqref{eq:f_exp} ($\beta<1$).
It follows from l'H\^{o}pital's rule that
\begin{align*}
\begin{split}
\MoveEqLeft
\tilde{{\rm IF}}_l=
\left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{{{x^*}^{(l)} - \theta^{(l)}}}{\exp\left( (1-\beta)d_\phi \autobrac{\tilde{\bm{x}}, \bm{\theta}}\right)} \right| \\
= {}& \left| \lim_{|{x^*}^{(l)}| \to \infty} \frac{1}{(1-\beta)\exp\left((1-\beta)d_\phi \autobrac{\tilde{\bm{x}}, \bm{\theta}}\right) \autobrac{\frac{\partial \phi(\tilde{\bm{x}})}{\partial {x^{*}}^{(l)}}-\frac{\partial \phi(\bm{\theta})}{\partial {\theta}^{(l)}}}} \right| = 0 .
\end{split}
\end{align*}
Therefore, from Lemma \ref{lem:lem}, the redescending property holds.
\section{Properties of total Bregman divergence}
\subsection{Guarantee of monotonic decreasing property \label{sec:tbd_monoto}}
\begin{theorem}\label{teiri:tbd_monot}
If the function $f$ is concave, the updating of the cluster center using \eqref{eq:tbd_update_rule} monotonically decreases the objective function for general total Bregman divergence.
\end{theorem}
\begin{proof}
The flow of the proof is almost the same as the proof of Theorem \ref{teiri:monot}.
We show that the objective function \eqref{eq:generalized_f} monotonically decreases when the $k$-th cluster center $\udrbm{\theta}{k}$ is newly updated to $\tilde{\bm{\theta}}_k$ by \eqref{eq:tbd_update_rule}.
More specifically, we prove that, $\bar{L}_f(\udrbm{\theta}{k})\geq \bar{L}_f(\tilde{\bm{\theta}}_k) $, where $\bar{L}_f(\udrbm{\theta}{k})$ is the sum of the terms related to $\udrbm{\theta}{k}$ in \eqref{eq:generalized_f}, that is,
\begin{align*}
\bar{L}_f({\bm{\theta}}_k) = \sum_{i=1}^n r_{ik} f\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k}} .
\end{align*}
From the inequality in \eqref{eq:tangent_inquality}, the following holds:
\begin{align}
\MoveEqLeft[1]
\bar{L}_f(\udrbm{\theta}{k})-\bar{L}_f(\tilde{\bm{\theta}}_k) \nonumber
\\={}& \sum_{i=1}^n r_{ik} \left[f\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k}}-f\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \tilde{\bm{\theta}}_k}}\right] \nonumber
\\ \geq{}& \sum_{i=1}^n r_{ik} f{\,'}\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k}} \left[{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k} - {\rm tBD}\autobrac{\bm{x}_i, \tilde{\bm{\theta}}_k}\right] \nonumber
\\={}& \sum_{i=1}^n r_{ik} \frac{f{\,'}\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k}}}{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x}_i)\|^2}}\left[d_\phi\left(\bm{\theta}_k, \bm{x}_i\right) - d_\phi\left(\tilde{\bm{\theta}}_k, \bm{x}_i \right) \right] \nonumber
\\={}& \sum_{i=1}^n r_{ik} \frac{f{\,'}\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k}}}{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x}_i)\|^2}}\left[ \phi\left(\bm{\theta}_k\right) - \phi\left(\tilde{\bm{\theta}}_k\right) - \bm{\nabla}\phi\left(\bm{x}_i\right) \left(\bm{\theta}_k - \tilde{\bm{\theta}}_k\right) \right] \label{eq:use_update_tbd}
\\={}& d_\phi \autobrac{\bm{\theta}_k, \tilde{\bm{\theta}}_k} \sum_{i=1}^n r_{ik} \frac{f{\,'}\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k}}}{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x}_i)\|^2}} \geq 0 , \nonumber
\end{align}
where
\begin{align*}
\bm{\nabla}\phi\left(\tilde{\bm{\theta}}_k \right)\sum_{i=1}^n r_{ik} \frac{f{\,'}\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k}}}{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x}_i)\|^2}} = \sum_{i=1}^n r_{ik} \frac{f{\,'}\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}_k}}}{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x}_i)\|^2}}\bm{\nabla}\phi\left(\bm{x}_i\right) ,
\end{align*}
which is derived from \eqref{eq:tbd_update_rule} and \eqref{eq:tbd_weight}, was used in \eqref{eq:use_update_tbd}.
\qed
\end{proof}
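A minimal numerical sketch of Theorem \ref{teiri:tbd_monot}, assuming $\phi(\bm{x})=\|\bm{x}\|^2$ (so $\bm{\nabla}\phi(\bm{x})=2\bm{x}$ and $d_\phi(\bm{x},\bm{\theta})=\|\bm{x}-\bm{\theta}\|^2$), the concave transform $f(z)=\sqrt{z+1}$, hard assignments $r_{ik}=1$, and hypothetical data points:

```python
import math

# One tBD cluster-center update for phi(x) = ||x||^2, f(z) = sqrt(z + 1)
# (concave, f' > 0), c = 0.5.  All choices here are illustrative.
c = 0.5

def norm2(v):
    return sum(vi * vi for vi in v)

def tbd(x, t):
    # total Bregman divergence: d_phi / sqrt(1 + c^2 ||grad phi(x)||^2)
    return norm2([xi - ti for xi, ti in zip(x, t)]) / math.sqrt(
        1.0 + c * c * norm2([2.0 * xi for xi in x]))

def f(z):
    return math.sqrt(z + 1.0)

def fprime(z):
    return 0.5 / math.sqrt(z + 1.0)

def objective(xs, t):
    return sum(f(tbd(x, t)) for x in xs)

def update(xs, t):
    # Solving grad phi(theta_new) * sum_i w_i = sum_i w_i grad phi(x_i)
    # with w_i = f'(tBD(x_i, theta)) / sqrt(1 + c^2 ||grad phi(x_i)||^2)
    # gives a weighted mean for phi(x) = ||x||^2.
    ws = [fprime(tbd(x, t)) / math.sqrt(1.0 + c * c * norm2([2.0 * xi for xi in x]))
          for x in xs]
    wsum = sum(ws)
    return [sum(w * x[d] for w, x in zip(ws, xs)) / wsum for d in range(len(t))]

xs = [[0.0, 0.0], [1.0, 2.0], [4.0, 1.0], [10.0, 10.0]]
theta = [5.0, 5.0]
theta_new = update(xs, theta)
```

By the theorem, `objective(xs, theta_new)` never exceeds `objective(xs, theta)`, and iterating the update keeps decreasing the objective.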
The following corollary immediately follows from Theorem \ref{teiri:tbd_monot}.
\begin{corollary}
When the objective function is constructed from the power mean \eqref{eq:pow} or the log-sum-exp function \eqref{eq:lse}, the following holds:
when $\beta\leq1$, the update of the cluster center using \eqref{eq:tbd_update_rule} monotonically decreases the objective function for a general total Bregman divergence.
\end{corollary}
\subsection{Influence function}
When we derive the influence function as in Section \ref{sec:influence}, it is given by
\begin{align*}
&\bm{{\rm IF}}(\upbm{x}{*}) = - m\bm{{\rm G}}^{-1}\bm{\nabla}f\autobrac{{\rm tBD} \autobrac{\bm{x}^*, \bm{\theta}}} , \\
&\bm{{\rm G}} = \sum_{i=1}^m \bm{\nabla}\bm{\nabla}f\autobrac{{\rm tBD}\autobrac{\bm{x}_i, \bm{\theta}}}.
\end{align*}
Because the matrix $\bm{{\rm G}}$ does not depend on $\upbm{x}{*}$, the robustness against outliers is evaluated by
\begin{align}
\begin{split}
\MoveEqLeft
-\bm{\nabla}f\autobrac{{\rm tBD} \autobrac{\bm{x}^*, \bm{\theta}}} = -f{\,'}\autobrac{{\rm tBD} \autobrac{\bm{x}^*, \bm{\theta}}} \bm{\nabla}{\rm tBD} \autobrac{\bm{x}^*, \bm{\theta}} \\
= {}&f{\,'}\autobrac{{\rm tBD} \autobrac{\bm{x}^*, \bm{\theta}}} \frac{\bm{\nabla}\phi(\bm{x}^*)-\bm{\nabla}\phi(\bm{\theta})}{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x}^*)\|^2}}. \\
\end{split} \label{eq:if_tbd}
\end{align}
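To illustrate \eqref{eq:if_tbd}, the sketch below compares the IF magnitude for a concave and a convex $f$ in one dimension with $\phi(x)=x^2$ and $c=1$ (hypothetical choices, not from the paper):

```python
import math

# IF magnitude f'(tBD) * |grad phi(x*) - grad phi(theta)| / sqrt(1 + c^2 grad phi(x*)^2)
# for phi(x) = x^2 in 1-D (grad phi(x) = 2x), c = 1.
def tbd(x, t, c=1.0):
    return (x - t) ** 2 / math.sqrt(1.0 + c * c * (2.0 * x) ** 2)

def if_mag(x, t, fp, c=1.0):
    return fp(tbd(x, t, c)) * abs(2.0 * x - 2.0 * t) / math.sqrt(1.0 + c * c * (2.0 * x) ** 2)

concave = lambda z: 0.5 / math.sqrt(z + 1.0)   # f(z) = sqrt(z + 1): f' decreasing
convex  = lambda z: z                          # f(z) = z^2 / 2:     f' increasing
```

For a far outlier the concave choice drives the IF magnitude toward zero, while the convex choice lets it grow, matching the discussion of Conditions 1 and 2 below.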
\subsection{Proof of Theorem \ref{thm:tbd_theorem} \label{sec:tbd_theorem}}
Consider the following expression, the norm of \eqref{eq:if_tbd}:
\begin{align}
f{\,'}\autobrac{{\rm tBD} \autobrac{\bm{x}^*, \bm{\theta}}} \frac{\| \bm{\nabla}\phi(\bm{x}^*)-\bm{\nabla}\phi(\bm{\theta})\| }{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x}^*)\|^2}}. \label{eq:norm_total_breg}
\end{align}
For any value of $\|\bm{x}^*\|$, the factor $\frac{\| \bm{\nabla}\phi(\bm{x}^*)-\bm{\nabla}\phi(\bm{\theta})\| }{\sqrt{1+c^2\|\bm{\nabla}\phi(\bm{x}^*)\|^2}}$ is bounded \cite{total-bregman}.
Therefore, the influence function is bounded if and only if $f{\,'}(z)$ is bounded for all $z$.
Since $\|\bm{\theta}\|<\infty$, ${\rm tBD} \autobrac{\bm{x}^*, \bm{\theta}}$ becomes large when $\|\bm{x}^*\|$ is large.
When the function $f(z)$ is convex, $f{\,'}(z)$ is monotonically increasing; hence, as $\|\bm{x}^*\|$ increases, \eqref{eq:norm_total_breg} grows unboundedly and the influence function is not bounded.
In order to keep \eqref{eq:norm_total_breg} small when $\|\bm{x}^*\|$ is large, $f{\,'}(z)$ must be monotonically decreasing.
The function $f(z)$ satisfying this condition is concave or linear (Condition 1).
Since $\|\bm{\theta}\|<\infty$, the above factor does not tend to $0$ as $\|\bm{x}^*\|\to\infty$.
Therefore, the necessary and sufficient condition for satisfying the redescending property \eqref{eq:redescending} is \eqref{eq:nec_suf_tbd} (Condition 2).
\section{}
\bibliographystyle{elsarticle-num}
\section{Introduction}
\quad \quad The trapped-ion system is one of the most attractive candidates for quantum computing\cite{Klimov2003Qutrit}, offering long coherence times and high-fidelity quantum operations\cite{ballance2016high}. The quantum
charge-coupled device (QCCD)\cite{kielpinski2002architecture} is one of the major schemes for the scalability of ion-trap-based quantum information processing\cite{Monroe2013Scaling}. In this scheme, the surface ion trap plays an important role since it completely eliminates the difficulties in the assembly of macroscopic devices such as blade traps and four-rod traps\cite{N2000Investigating,cha2000interface}. The manufacturing of surface ion traps is much simpler\cite{sterling2014fabrication,Chiaverini2005Surface,Stahl2005A,hellwig2010fabrication} using micro/nanofabrication technologies with recent processes. The trapping performance depends entirely on the electrode geometry of the surface ion trap. The symmetric five-wire (FW) geometry trap is widely used in trapped-ion scaling\cite{Wright2013Reliable,moehring2011design,Mokhberi2017Optimised}, due to the simplicity of its geometry and fabrication process.
However, with the symmetric FW trap, it is impossible to realize effective ion cooling in the direction perpendicular to the trap surface, because the wavevectors of the cooling laser beams have to be parallel to the trap surface. Effective ion cooling can be achieved by rotating the principal axes of the trap or by providing optical access through a hole in the substrate. Since it is difficult to etch a hole in substrate materials such as silica or sapphire, rotating the principal axes is the easier route to effective ion cooling and is widely used in surface ion traps. The principal-axes rotation can be achieved by an elaborate design of the radio frequency (RF) electrodes\cite{Stick2010Demonstration,Allcock2010Implementation,Doret2012Controlling,Amini2009Scalable,Wright2013Reliable,Leibrandt2009Demonstration,Allcock2013A,Niedermayr2014Cryogenic}. Currently, there are several designs that rotate the principal axes of the trap, realized in many trap geometries: the four-wire geometry\cite{Britton2006A,Seidelin2006Microfabricated}, the asymmetric geometry\cite{Narayanan2011Electric,Daniilidis2011Fabrication,Niedermayr2014Cryogenic,Labaziewicz2008Suppression}, the six-wire geometry\cite{Allcock2010Implementation,Stick2010Demonstration}, and so on. However, with the four-wire or six-wire geometry (different voltages are applied on the two center DC electrodes), the ions trapped above the electrode gap suffer from the gap potential\cite{Schmied2010electrostatics} and from stray charge on the substrate\cite{Chiaverini2005Surface}, which influence the trapping stability. Besides, it is difficult to realize segmented control of the DC electrodes\cite{Chiaverini2005Surface} in the four-wire trap. The asymmetric design leads to more capacitive coupling and higher RF loss\cite{Allcock2010Implementation}. Furthermore, the downsides of these geometries are aggravated when trapped-ion systems are scaled up.
For example, the asymmetric geometry trap suffers from high RF losses, which are proportional to the scale and to the width ratio of the two RF electrodes\cite{Amini2009Scalable}. Other designs require additional special electrodes\cite{Bermudez2017Assessing} to rotate the principal axes to an appropriate angle. These designs have an extremely complex and difficult fabrication process, involving multi-layer structures and buried-wire technology.
To avoid the problems mentioned above, we propose a novel surface ion trap that has innately rotated principal axes without extra electrodes or asymmetric geometry and enables large-scale trapping of parallel 2D ion chains. The chip consists of a seven-wire (SW) geometry trap, a symmetric FW geometry trap, and a fork junction. The SW trap simultaneously generates double wells separated by 200 $\mu$m, with their principal axes rotated by $\pm$35 degrees. This not only enables trapping two parallel ion chains but also allows effective cooling of ions along all principal axes in the manipulation zone. We design a symmetric FW trap as the loading zone, which produces a single well located 100 $\mu$m above the trap surface. A fork junction is designed to shuttle ions from the loading zone to the manipulation zone after pre-cooling. The shuttling path splits in the transportation zone, forming a ``fork junction''. The junction geometry is optimized by a combination of the ant colony algorithm and a multi-objective function.
Our design can be applied in many interesting research fields. In the manipulation zone, two species of ions can be trapped independently in the double wells. After the ordered transport of ions from the double wells to the symmetric FW trap, mixed and ordered two-species ion chains can be obtained. Thus, the trap can be used as an ion mixer\cite{Bermudez2017Assessing}, offering a flexible scheme for sympathetic cooling of ions\cite{larson1986sympathetic}. In addition, we can split an ion chain in the loading zone into two chains trapped in the double wells. This splitting ability (like an ion beam splitter) should be useful in studies of 2D ion crystals\cite{britton2012engineered,porras2006quantum} and in industrial applications as a guide for quantum microscopy, such as ion etching\cite{tachi1988low} and electronic imaging\cite{Hoffrogge2011Microwave,Jakob2015Microwave,hammer2015microwave}. The effective spin-spin interaction between two ion chains can also be studied with our trap\cite{wilson2014tunable,welzel2011designing}.
\section{Design of linear electrode zones}
\quad \quad A symmetric FW trap has been designed, fabricated, and tested in our lab\cite{Xie2017Creating,Ou2016Optimization}. To further improve the confinement performance, realize effective cooling of ions, and extend the applications, a versatile surface ion trap is designed as shown in Figure 1. There are three parts in the trap: two linear electrode zones (the loading zone and the manipulation zone) and a fork junction. The symmetric FW trap serves as the loading zone, where ions are trapped at a height of $h = 100$ $\mu$m above the surface, and ions can be transported through the junction into the SW trap (the distance $d$ between the double wells is 200 $\mu$m). The advantage of our design is that it provides effective ion cooling without redundant (DC or RF) electrodes and realizes trapping of two parallel ion chains, which can confine twice as many ions as the symmetric FW trap.
\begin{figure}
\centering
\includegraphics[width=8cm]{figure1.eps}
\caption{\label{vid:PRSTPER.4.010101}%
The trap with different quantum zones, including a SW trap as the manipulation zone to generate double wells with rotated principal axes for effective cooling of ions, a symmetric FW trap as the loading zone to load and pre-cool ions, and a fork junction as the transportation zone to shuttle ions.
}%
\end{figure}
\subsection{Design method}
\quad \quad The surface ion trap has two main components: the metal electrodes (DC and RF) and the insulating substrate\cite{Seidelin2006Microfabricated,Britton2006A}. When RF voltages are applied to the electrodes, RF losses are inevitably generated in the dielectric substrate. Large RF voltages are usually applied to the RF electrodes to confine ions in the strong-binding regime, but a higher RF voltage generally produces a greater loss in the substrate\cite{lee2003design}. In general, the RF loss in the dielectric substrate is mostly transformed into thermal energy, which results in a temperature rise of the trap that significantly affects the trapping stability. One simple way to remedy this problem is to reduce the RF voltage as much as possible while keeping the ions trapped in the strong-binding regime.
Our design method is different from general designs\cite{Stick2010Demonstration,Allcock2010Implementation,Doret2012Controlling,Amini2009Scalable,Wright2013Reliable,Leibrandt2009Demonstration,Allcock2013A,Niedermayr2014Cryogenic}. Instead of directly optimizing the trapping height, depth, frequency, and the $q$ parameter of the Mathieu differential equation\cite{Leibfried2003Quantum}, we use the pseudopotential curvature as the optimization parameter\cite{Schmied2009Optimal} to obtain a deep trapping depth and a high trapping frequency for a given RF source $U_{RF}$. We consider ions of mass $m$ and charge $q$ that are confined at height $y$ by the pseudopotential generated by the RF electric field with amplitude $\vec E(\vec r)$ at angular frequency $\Omega_{RF}$. The pseudopotential is
\begin{equation}\label{1}
\centering
\qquad \qquad \quad \Phi(\vec r)=\frac{q^{2}\|\vec E(\vec r)\|^{2}}{4m\Omega^{2}_{RF}}.
\end{equation}
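The pseudopotential formula above can be evaluated numerically; the sketch below assumes $^{40}$Ca$^{+}$ and the 25 MHz drive used later in the paper, with an illustrative field amplitude (not a simulation result):

```python
import math

# Numeric sketch of Phi = q^2 |E|^2 / (4 m Omega_RF^2) for a 40Ca+ ion.
# Dividing the result (in joules) by the elementary charge gives eV,
# which is why q_e appears once in the numerator below.
q_e = 1.602176634e-19        # elementary charge, C
m = 40 * 1.66053906660e-27   # mass of 40Ca+, kg
Omega = 2 * math.pi * 25e6   # RF drive angular frequency, rad/s

def pseudopotential_eV(E_amp):
    """Pseudopotential in eV for an RF field amplitude E_amp in V/m."""
    return q_e * E_amp ** 2 / (4 * m * Omega ** 2)
```

For a field amplitude on the order of $10^5$ V/m this gives a pseudopotential on the 0.1 eV scale, the order of the trap depths reported in this paper.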
At arbitrary positions $\vec r$ above the trap, the pseudopotential curvature tensor is proportional to the square of the electric potential curvature tensor\cite{Schmied2009Optimal}, i.e. $\Phi^{(2)}_{\alpha\beta}(\vec r)=\partial_{\alpha}\partial_{\beta}\Phi(\vec r)$. According to the Laplace equation, the trace of this tensor is zero, i.e. ${\rm Tr}\,\Phi^{(2)}(\vec r)=0$, and the elements of this tensor depend on the trap geometry. In the linear Paul surface ion trap, the pseudopotential curvature tensor matrix can be written as
\begin{center}
\begin{equation}\label{3}
\qquad \qquad \Phi^{(2)}(\vec r)=\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 0 \\
\end{array}
\right).
\end{equation}
\end{center}
The matrix as the boundary conditions can be used to design the RF electrodes' geometry in terms of the spatial position of ions. In this process, the pseudopotential curvature tensor is applied globally to all ions. According to Ref. \cite{Schmied2009Optimal}, the dimensionless curvature $\kappa$ can quantify the trap strength as the following expression
\begin{equation}\label{4}
\qquad \qquad \kappa=|\det\Phi^{(2)}(\vec r)|^{\frac{1}{3}}\,\frac{y^{2}}{U_{RF}}.
\end{equation}
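The dimensionless curvature $\kappa$ can be computed with a small helper; the curvature matrix, ion height, and RF amplitude below are illustrative placeholders, not optimized trap values:

```python
# Helper sketch for kappa = |det Phi^(2)|^(1/3) * y^2 / U_RF,
# with a hand-written 3x3 determinant (no external dependencies).
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def kappa(phi2, y, U_rf):
    """y in metres, U_rf in volts; phi2 is the curvature tensor."""
    return abs(det3(phi2)) ** (1.0 / 3.0) * y ** 2 / U_rf

phi2 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -2.0]]  # trace-free example
k = kappa(phi2, y=1e-4, U_rf=100.0)  # 100 um ion height, 100 V drive
```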
Subject to the Mathieu stability requirements, it is preferable to optimize the trap geometry so that the parameter $\kappa$ is maximized under the given constraints. Furthermore, a higher trapping frequency can be achieved with a larger $\kappa$ when the same RF voltage is applied. A higher secular frequency is desired because it not only allows tighter confinement, faster ion transportation\cite{Yeo2007On}, more effective cooling, and less sensitivity to external noise\cite{Zhu2006Trapped}, but also enables multi-qubit entangling gate operations using motional modes\cite{Cirac1995Quantum}.
\subsection{The manipulation zone design}
\quad \quad Based on the above method, we designed and optimized the symmetric SW geometry trap. The RF electrode geometry is determined by the two parallel linear ion chains. Based on the spatial positions of the ions, the electric potential is solved by the boundary element method (BEM)\cite{chen1992boundary} to obtain the electrode geometry for the expected $\kappa$ value. The potential generated by the electrodes is analytically calculated in the gapless plane approximation\cite{Wesenberg2008Electrostatics} using the SurfacePattern software package\cite{SurfacePattern}. Here, the desired trapping height (RF null) is 100 $\mu$m above the trap surface, and the two ion chains are juxtaposed at a distance of 200 $\mu$m. Finally, the RF electrodes of the SW trap comprise one $RF_{Centre}$ and two $RF_{Side}$ electrodes, whose widths are 120 $\mu$m and 167 $\mu$m, respectively. Both ground electrodes are 83 $\mu$m wide, as shown in Figure 2(a). As can be seen from the pseudopotential distribution generated by the RF electrodes in Figure 2(b), there is an innate $\pm35$-degree angle between the trap principal axes and the $y$ direction perpendicular to the trap surface. A laser-cooling beam parallel to the trap surface can therefore cool ions along all principal axes, which is expected to achieve effective cooling of the ions to their vibrational ground state.
\begin{figure}
\centering
\includegraphics[height=5cm]{figure2a}
\quad
\includegraphics[width=5.4cm,height=4.4cm]{figure2b}
\includegraphics[width=2.5cm,height=4.1cm]{figure2c}
\caption{\label{vid:PRSTPER.4.010101}%
Parameters of the SW trap and the pseudopotential distribution.
(a) Diagram of the trap layout and dimensions. The trap consists of an $RF_{Centre}$ electrode of width a = 120 $\mu$m, a pair of $RF_{Side}$ electrodes of width b = 167 $\mu$m, two ground electrodes of width c = 83 $\mu$m, and the segmented DC control electrodes with 150 $\mu$m width.
(b) Radial contour plot of total pseudopotential generated by (a). There are double wells in the $x$ direction. The pseudopotential decreases from red to blue.
(c) 1D plot of the pseudopotential along the line through the null and saddle points. The trap depth is about 0.129 eV, and the trapping height and the escape point are approximately 98 $\mu$m and 201 $\mu$m above the trap surface, respectively.
}
\end{figure}
The trapping height of the $^{40}Ca^{+}$ ions is about 98 $\mu$m, as simulated by the BEM with a 100-V-amplitude RF source at a 25 MHz frequency. Figure 2(c) shows that the escape point (saddle point) is approximately 201 $\mu$m above the surface, and the trap depth is 0.129 eV, which is five times deeper than that in Ref. \cite{Hong2016Guidelines} (see Table 1) obtained with 155 V. The secular frequencies in the radial $x$ and $y$ directions are approximately 2.731 MHz and 2.487 MHz, respectively. We can achieve a deeper trap depth and a higher trap frequency by optimizing the electrode geometry. The distance between the two RF nulls is 200 $\mu$m, as shown in Figure 2(b), which allows trapping two parallel linear ion chains along the RF electrodes. The distance between the RF nulls can be controlled by the RF voltages applied to the $RF_{Centre}$ and $RF_{Side}$ electrodes. However, the phase difference between the RF electrodes needs to be overcome, since it causes excess micromotion. In addition, the $RF_{Centre}$ electrode requires a DC voltage offset to achieve stable ion trapping by spatially overlapping the RF null lines with the minimum of the electrostatic potential.
\subsection{The loading zone design}
\quad \quad A separate loading zone can alleviate contamination of the other zones\cite{Daniilidis2011Fabrication}, which can prolong the ions' lifetime. According to recent reports\cite{Sage2012Loading,Boldin2018Measuring,Sedlacek2018Distance,Tanaka2012Micromotion,Wright2013Reliable,moehring2011design,Mokhberi2017Optimised}, it is more convenient to use a symmetric FW trap for the pre-cooling and loading of ions. In addition, the loading zone should be capable of fast loading and efficient shuttling of ions. When ions are transported from the loading zone to the manipulation zone, the ions' parameters (e.g., trapping height, depth, and frequency) should remain as constant as possible to ensure that the motional modes of the ions are not excited.
The RF geometry of the symmetric FW trap is solved for a 1D linear ion chain at 100 $\mu$m above the trap surface. The calculation process is the same as for the double wells. The resulting trap is composed of two RF electrodes of 130 $\mu$m width and a ground electrode of 108 $\mu$m width, as shown in Figure 3(a). The width ratio between the RF electrode and the ground electrode is about 1.2, the same as in Ref. \cite{house2008analytic} for the symmetric FW trap. The performance of the trap is simulated using the BEM for trapped $^{40}Ca^{+}$ ions with an applied 100 V RF voltage at 25 MHz frequency. The pseudopotential distribution of this design is shown in Figure 3(b). The trapping height and the escape point are approximately 102 $\mu$m and 190 $\mu$m above the trap surface, respectively. The trapping depth is approximately 0.122 eV. In the loading zone, a deeper trap improves the stability of trapping ions, which can shorten the working time of the atom oven and thus reduce contamination of the trap surface. In addition, it is better to ensure the same trapping depth in different zones, since we will shuttle ions between them. The secular frequencies in the radial $x$ and $y$ directions are 2.845 MHz and 2.822 MHz, respectively. Here, we achieve a higher frequency using a relatively low RF voltage. The Mathieu parameter $q$ along $x$ is 0.229.
\begin{figure}
\centering
\includegraphics[width=5.1cm,height=5cm]{figure3a}
\quad
\includegraphics[width=5.4cm,height=4.4cm]{figure3b}
\includegraphics[width=2.5cm,height=4.1cm]{figure3c}
\caption{\label{vid:PRSTPER.4.010101}%
Parameters of the FW trap and the pseudopotential distribution.
(a) Diagram of the trap layout and dimensions. A ground electrode of width a = 108 $\mu$m, a pair of RF electrodes of width b = 130 $\mu$m, and the segmented DC control electrodes with 150 $\mu$m width.
(b) Radial contour plot of total pseudopotential generated by (a). The pseudopotential decreases from red to blue.
(c) 1D plot of the pseudopotential along the $y$ direction ($x = z = 0$). The trap depth is approximately 0.122 eV, and the trapping height and the escape point are approximately 102 $\mu$m and 190 $\mu$m above the trap surface, respectively.
}
\end{figure}
\section{Multi-objective optimization of fork junction}
\quad \quad A fork junction is designed to shuttle ions between the loading zone and the manipulation zone (as described in sections 2.2 and 2.3), which allows orderly merging of two parallel linear ion chains from the double wells into the single well by sequential control of the voltages on the DC electrodes, and vice versa. For ions near their motional ground states used as quantum bits, it is important that shuttling between different zones is accomplished with a high success probability and a low motional-energy gain. For this purpose, we use a multi-objective function and the ant colony optimization (ACO) algorithm\cite{Guo2012Ant} to optimize the junction geometry.
\subsection{Analysis of objective functions}
\quad \quad The optimal objectives include equating the minimum values of the pseudopotential, minimizing the pseudopotential gradient, minimizing the trapping height fluctuation, and unifying the shape of the pseudopotential tube (the equipotential surface with a certain radius to the RF null line) along the shuttling path.
The minimum value of the pseudopotential should ideally be equal along the shuttling paths. However, there are pseudopotential barriers along the shuttling paths (unlike linear electrodes, which generate a uniformly distributed pseudopotential along the RF electrodes) produced by the initial junction electrodes. The pseudopotential barriers on the shuttling paths reduce the trap depth and weaken the trapping ability. Eliminating the pseudopotential barriers is the first objective.
In the shuttling process, the heating rate derives mainly from the pseudopotential gradient, i.e., the first derivative of the pseudopotential\cite{Blakestad2009High}, since the pseudopotential gradient excites the motional modes. Therefore, the second objective is to find a junction geometry that maintains a low pseudopotential gradient.
The fluctuation of the trapping height should be as small as possible, since a large fluctuation affects the stability and repeatability of ion shuttling. If the minimum of the potential generated by the time-dependent DC voltages does not overlap with the RF null line at all times, micromotion and heating of the ions are aggravated. Furthermore, trapping-height fluctuations lead to misalignment between the ions and the laser beam. The third objective is therefore to ensure a consistent trapping height throughout the shuttling process.
The secular-frequency difference between the linear zone and the junction zone changes the energy of the ions in the shuttling process, in proportion to the number of shuttled ions. To keep the secular frequency as stable as possible, it is necessary to make the confinement above the junction electrodes as strong as in the symmetric FW and SW traps. Thus, unifying the shape of the pseudopotential tube is the last objective.
\subsection{Multi-objective optimization}
\quad \quad The terms of the multi-objective function are described in Table 1, where $F_{i}\ (i = 1, 2, 3, 4)$ denotes the different objective functions. The pseudopotential barriers are obtained by finding the pseudopotential extrema along the RF null line $y|_{\Phi_{min}}$ (where $\Phi_{min}$ is the pseudopotential corresponding to the RF null line); $l_{min}$ and $l_{max}$ are the start and end points of the ion-shuttling path, respectively, and $l$ denotes the whole path. $N_{l}$ and $\Delta$ denote the number of segments on the path and the radius of the pseudopotential tube, which is the distance from the RF null line to the $\Phi_{const}= 10$ meV equipotential.
\begin{table*}
\small
\caption{\label{tab:table3}
Mathematical expression and description of the multi-objective function.
}
\begin{tabular}{lp{10.5cm}lp{8cm}}
\br
\multicolumn{1}{l}{Function}&
\multicolumn{1}{l}{\quad Description}\\
\hline
$F_{1}=\int^{l_{max}}_{l_{min}}\Phi(y|_{\Phi_{min}},l)dl$ & The average of the pseudopotential along the trapped path (the pseudopotential minimum).\\
\mr
$ F_{2}=\int^{l_{max}}_{l_{min}}(\frac{\partial|E\cdot\vec l|}{\partial l})dl$ & Average slope of the electric field along the shuttling path (a measure of heating).\\
\mr
$F_{3}=\int^{l_{max}}_{l_{min}}\left|y|_{\Phi_{min}}-h\right|dl$ & Deviation of the trapping height from $h=100~\mu$m along the trapped path.\\
\mr
$F_{4}=\sqrt{\frac{1}{N_{l}}\sum_{l}{((y|_{\Phi_{const}}-y|_{\Phi_{min}})-\Delta)^{2}}}$ & The 10 meV contour (a measure of trapping potential shape and secular frequency compared to the linear region).\\
\br
\end{tabular}
\end{table*}
To optimize effectively over the multiple objective functions, we combine them into a single multi-objective function using weight factors and a normalization operation:
\begin{equation}\label{5}
\quad F_{multi-objective}=\sum^{4}_{i=1}\omega_{i}(\sigma_{i}\cdot F_{i}),\quad \sigma_{i}=\frac{1}{F^{min}_{i}}.
\end{equation}
Here $\omega_{i}$ is the weight factor of objective function $F_{i}$, which determines the contribution of the corresponding objective to the optimization, and $\sigma_{i}$ is a normalization factor equal to the reciprocal of the minimum value obtained when the objective function is optimized independently. Each $\omega_{i}$ ranges between 0 and 1 and is adjusted according to the objective. A small $F_{multi-objective}$ is desired, since it indicates better optimization.
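As a sketch of Eq. (5), the weighted normalized aggregation can be written in a few lines; the variable names and sample values below are ours and purely illustrative, not simulation outputs.

```python
def aggregate(objectives, minima, weights):
    """Weighted sum of normalized objectives, in the spirit of Eq. (5):
    F = sum_i omega_i * (F_i / F_i^min), with sigma_i = 1 / F_i^min."""
    return sum(w * (f / f_min)
               for f, f_min, w in zip(objectives, minima, weights))

# Illustrative values only: four objectives, each normalized by the
# minimum found when that objective is optimized on its own.
F = [2.0, 4.0, 30.0, 6.0]
F_min = [1.0, 2.0, 10.0, 3.0]
omega = [0.25, 0.25, 0.25, 0.25]
total = aggregate(F, F_min, omega)  # smaller is better
```

Normalization makes each term equal to 1 at its standalone minimum, so the weights compare objectives on a common scale.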
\subsection{Optimization results}
\quad \quad Initially, we simply connected the RF electrodes of the FW trap and the SW trap by the initial junction, as shown in Figure 4(a). The electrode geometry (gray part) of the FW trap and the SW trap is kept fixed from the above optimization; only the initial geometry of the junction (yellow part) is optimized by the method discussed above. The optimal junction geometry is obtained by adjusting the 38 control points shown in Figure 4(b), where points $P1 \ldots P19$ and $P20 \ldots P38$ lie on the two side edges of the ground electrode. Thus, we have 38 degrees of freedom, each adjustable only in the $x$ direction within the yellow area. The locations of these points are modified using an ACO algorithm and the multi-objective function.
\begin{figure}
\includegraphics[width=4cm,height=3.8cm]{figure4a}%
\includegraphics[width=4cm,height=3.8cm]{figure4b}
\caption{\label{vid:PRSTPER.4.010101}%
Diagram of the local electrodes.
(a) Layout of symmetric FW and SW traps (gray) and initial geometry of the junction (yellow).
(b) Half of the optimized junction layout (the junction is symmetric). The 38 dots denote the variables, which move independently in the $x$ direction.
}%
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=7.45cm,height=6.3cm]{figure5a}
\quad
\includegraphics[width=7.4cm,height=6.5cm]{figure5b}\quad
\includegraphics[width=7.25cm,height=6.45cm]{figure5c}
\quad
\includegraphics[width=7.55cm,height=6.3cm]{figure5d}
\caption{\label{vid:PRSTPER.4.010101}%
Results of the multi-objective optimization for the junction (red and blue lines denote initial and optimized results). (a) Fluctuation of trapped height. (b) Pseudopotential barriers of the initial and optimized junction. (c) Derivative of the pseudopotential barriers in (b). (d) Scale ($y$ direction) of the RF pseudopotential tube at 10 meV from the RF null line. Here, $\sigma$ and $\Delta\sigma$ are the standard deviation and the maximum deviation of the pseudopotential tube radius in the shuttling path. $\Delta f_{max}$ is the secular-frequency difference between the junction zone and linear zone.
}%
\end{figure*}
During optimization, the value of the multi-objective function in Eq. (5) decreases from 219 to 5.39 after just one iteration, showing that our function converges rapidly. Figures 5(a)--(d) show the results before and after optimization. In the initial junction, the trapping height fluctuates by approximately 65 $\mu$m. After optimization, the height fluctuation is reduced by roughly a factor of six, to just 10.4 $\mu$m. The maximum pseudopotential barrier is approximately 1 meV before optimization, about $0.7\%$ of the trap depth. After optimization, the largest barrier is about 0.221 meV, which is not only less than a quarter of the non-optimized value but also three orders of magnitude smaller than the trap depth. The maximum derivative of the pseudopotential along the shuttling path is $2.15\times10^{-5}$ eV/$\mu$m, as shown in Figure 5(c), approximately three times smaller than the initial value. These improvements will drastically reduce the heating rate.
The comparison of optimization with and without function $F_4$ is shown in Figure 5(d). The gray area represents the pseudopotential distribution above the junction, and the other areas correspond to the pseudopotential above the linear electrodes. The effect of unifying the shape of the pseudopotential tube is obvious when comparing the results optimized with all objective functions to those obtained without $F_4$. The standard deviation and maximum deviation of the tube radius differ markedly with and without $F_4$: $\sigma=2.97$, $\Delta\sigma_{max}=8.38$ versus $\sigma=13.94$, $\Delta\sigma_{max}=35.27$, respectively. The secular frequency is simulated for trapped $^{40}$Ca$^{+}$. The maximum secular-frequency differences $\Delta f_{max}$ are $-2.31$ MHz (without $F_4$) and $-0.66$ MHz (with $F_4$) in the transportation zone, i.e., lower than in the linear zones. These results illustrate the importance of $F_4$ for the optimization.
To validate the optimization, the results before and after are compared by FEM simulations. For trapping $^{40}$Ca$^{+}$ with a 100 V RF source at 25 MHz, the corresponding pseudopotential profiles of the initial and optimized junctions are shown in Figure 6. The pseudopotential distribution on the confinement surface (spanned by the shuttling path and the $y$ direction perpendicular to the trap surface) is significantly improved after optimization. The results are presented as contour lines: the pseudopotential generated by the initial junction suffers a discontinuity along the shuttling path, as shown in Figure 6(a), whereas the pseudopotential distribution after optimization is more continuous and smooth (see Figure 6(b)). The improvement is evident from comparing the two single-value contour lines (the black lines mark the pseudopotential $\Phi=10$ meV). The initial junction generates a large potential barrier, shown in Figure 6(a), which prevents stable trapping and shuttling of ions. In contrast, a homogeneous pseudopotential tube is achieved by the optimization, enabling ion shuttling with a relatively low heating rate.
\begin{figure}
\includegraphics[totalheight=3.7cm]{figure6}
\caption{\label{vid:PRSTPER.4.010101}%
Pseudopotential distribution above the junction. The black solid line denotes the pseudopotential value of 10 meV. (a) and (b) are contour maps of the pseudopotential distribution generated by the initial and the optimized junction.
}%
\end{figure}
\section{Conclusion}
\quad \quad A versatile surface ion trap is proposed, which comprises a SW trap, a FW trap, and a fork junction bridging the different quantum zones. The novel SW trap enables the generation of double wells with innately rotated principal axes. Our proposed design can be used to scale up trapped-ion-based quantum systems because a pair of parallel ion chains can be trapped at a distance of 200 $\mu$m. Moreover, the distance between the double wells can be adjusted by controlling the balance of the RF voltages applied to the $RF_{Centre}$ and $RF_{Side}$ electrodes\cite{Tanaka2014Design}. The symmetric FW trap for loading ions alleviates pollution of the other zones during the loading process, and the fork junction optimized by the multi-objective function transports ions between zones with a low heating rate. Furthermore, our trap can act as an ion mixer, generating a mixed, ordered ion chain from the two-species ions trapped in the double wells, which is expected to be useful in sympathetic cooling schemes. The trap can also be applied as a beam splitter for electrons, which is very useful in electron imaging technology\cite{hammer2015microwave}, and can likewise be used to rapidly split ion chains.
The trap not only solves technical problems in cooling experiments through its rotated principal axes, but also provides an effective way to increase the number of trapped-ion qubits. In our design, ions no longer form a single linear chain in the radial $x$ direction\cite{Ou2016Optimization,Xie2017Creating}, but multiple ion chains spread over the trap surface, forming a quantum network. In future studies, it should be possible to increase the number of RF electrodes, for example with interdigital electrodes\cite{Zhang2016Sniffing}, to produce many parallel ion chains over the trap surface. There will then be spin-spin interactions between the different ion chains in the axial direction. By controlling the distance between the ion chains, it may be possible to store and exchange information, as well as to apply quantum logic operations between adjacent chains. The results of our study may provide additional theoretical concepts and experimental breakthroughs for quantum computation, precision measurement, and quantum metrology.
\section*{Acknowledgments}
\quad \quad We would like to thank Wei Liu for fruitful discussions. This work is supported by the National Basic Research Program of China under Grant No. 2016YFA0301903; the National Natural Science Foundation of China under Grant Nos. 11174370, 11304387, 61632021, 11305262, 61205108, and 11574398, and the Research Plan Project of the National University of Defense Technology under Grant Nos. ZK18-03-04 and ZK16-03-04.
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction}
A general choice of $k$ hypersurfaces in $\mathbb{P}^r$ will intersect in a locus of dimension $r-k$. Equivalently, a general choice of $k$ homogeneous polynomials in $K[X_0,\ldots,X_r]$ forms a regular sequence. It is a closed condition for a sequence of hypersurfaces not to intersect properly, and the purpose of this paper is to address basic questions about this locus.
In the second part of the paper, we will give two applications that are of independent interest. Specifically, we will look at the locus of hypersurfaces with positive dimensional singular locus and the locus of hypersurfaces with more lines through a point than expected, improving on previously known results.
\subsection{Problem statement}
A first natural question would be the following:
\begin{Pro}
\label{PROBA}
Let $Z\subset \prod_{i=1}^{k}\mathbb{A}^{\binom{r+d_i}{d_i}}$ parameterize the $k$-tuples $(F_1,\ldots,F_{k})$ of homogeneous forms of degrees $d_1,\ldots,d_k$ for which $\dim(V(F_1,\ldots,F_k))$ exceeds $r-k$. What is the dimension of $Z$, and what are its components of maximal dimension?
\end{Pro}
\begin{comment}
For each component $\mathcal{H}$ of the Hilbert scheme of $r-k+1$-dimensional subvarieties of $\mathbb{P}^r$, there is a corresponding closed locus in $Z$ consisting of tuples $(F_1,\ldots,F_k)$ of forms that all vanish on some subvariety parameterized by $\mathcal{H}$. The challenge to answering Problem \ref{PROBA} is determining which component $\mathcal{H}$ yields the largest component of $Z$.
It is fewer conditions for $F_1,\ldots,F_k$ to each contain a given subvariety of $\mathbb{P}^r$ with small Hilbert function, but subvarieties with large Hilbert functions can vary within a larger family. In other words, the dimension of $\mathcal{H}$ and Hilbert function of the subvarieties parameterized by $\mathcal{H}$ are working against each other, not to mention that the Hilbert function can also vary within $\mathcal{H}$.
\end{comment}
For example, one might naively expect that the largest component of $Z$ is the locus of $k$-tuples of forms all containing the same linear space of dimension $r-k+1$, because linear spaces are the simplest $r-k+1$ dimensional subvarieties of projective space.
\begin{Pro}
\label{PROBB}
Does $Z$ have a unique component of maximal dimension, consisting of tuples $(F_1,\ldots,F_k)$ of hypersurfaces all containing the same $r-k+1$ dimensional linear space?
\end{Pro}
\begin{comment}
The answer the Problem 2 is negative as it stands. For exa
\end{comment}
The answer to Problem \ref{PROBB} is negative as it stands. For example, if $r=3$ and the degrees are $d_1=2$, $d_2=2$, and $d_3=100$, then the locus of 3-tuples of hypersurfaces all containing the same line has codimension 103, while the second quadric will be equal to the first quadric in codimension 9. Even if the degrees are all equal, we can let $r=4$, $k=2$, and the degrees be $d_1=2$, $d_2=2$; then the two quadrics will contain the same 3-plane in codimension 16, but are equal in codimension 14.
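These codimension counts can be verified mechanically with binomial coefficients; in the $r=4$ case the relevant linear space is the $r-k+1=3$-dimensional one from Problem \ref{PROBB}. A minimal sketch (function names are ours, for illustration only):

```python
from math import comb

def forms(r, d):
    # dimension of the space of degree-d forms in r + 1 variables
    return comb(r + d, d)

# r = 3, degrees (2, 2, 100): all three containing a common line costs
# (d + 1) conditions per form, minus dim G(1, 3) = 4 for the line
line_codim = (2 + 1) + (2 + 1) + (100 + 1) - 4

# ... versus the second quadric being proportional to the first
equal_codim_r3 = forms(3, 2) - 1

# r = 4, two quadrics: containing the same 3-plane (a hyperplane) costs
# (dim of quadrics on P^3) per form, minus the 4-dimensional choice of hyperplane
plane_codim = 2 * forms(3, 2) - 4
equal_codim_r4 = forms(4, 2) - 1
```

This recovers the four numbers 103, 9, 16, and 14 quoted in the text.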
\subsection{Results and applications}
In spite of the simple counterexamples above, we will show that Problem \ref{PROBB} has a positive answer in many cases when $k=r$, as stated in Theorem \ref{NR}.
\begin{Thm}
\label{NR}
If $2\leq d_1\leq d_2\leq \cdots\leq d_r$ and $d_i\leq d_1+\binom{d_1}{2}(i-1)$, then the locus $Z\subset \prod_{i=1}^{r}\mathbb{A}^{\binom{r+d_i}{d_i}}$ has codimension
\begin{align*}
-2(r-1)+\sum_{i=1}^{r}{(d_i+1)}.
\end{align*}
Furthermore, the unique component of maximal dimension consists of $r$-tuples of hypersurfaces all containing the same line.
\end{Thm}
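As a sanity check, the codimension in Theorem \ref{NR} coincides with the naive count for the locus of $r$-tuples containing a common line: vanishing on a fixed line imposes $d_i+1$ conditions on a degree-$d_i$ form, and lines in $\mathbb{P}^r$ form the $2(r-1)$-dimensional Grassmannian $\mathbb{G}(1,r)$. A minimal sketch (the sample degrees are illustrative only):

```python
def codim_theorem(degrees, r):
    # codimension stated in the theorem: -2(r-1) + sum_i (d_i + 1)
    return -2 * (r - 1) + sum(d + 1 for d in degrees)

def codim_line_locus(degrees, r):
    # naive count for r-tuples containing a common line:
    # (d_i + 1) conditions per form, minus dim G(1, r) = 2(r - 1)
    return sum(d + 1 for d in degrees) - 2 * (r - 1)

r = 4
degrees = [2, 3, 4, 5]  # satisfies d_i <= d_1 + C(d_1, 2)(i - 1) = 2 + (i - 1)
```

The two counts agree term by term, reflecting that the line locus is the claimed maximal component.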
Figure \ref{F1} depicts the restriction on the degrees in Theorem \ref{NR}.
In fact, it is no harder to prove the analogous result in the more general case where $Z$ is the locus of $k$-tuples of hypersurfaces having positive dimensional intersection for $k\geq r$. This is stated in Theorem \ref{slope}, proving the result claimed in the abstract.
\begin{figure}[h]
\begin{tikzpicture}
\begin{axis}[xmin=-.5, xmax=10.5,
ymin=0, ymax= 35, yticklabels={,,},
xticklabels={,,}, ticks=none]
\addplot[name path=f,domain=0:10,black, dashed] {3*x+3};
\addplot[name path=g,domain=0:10,black, dashed] {3};
\path[name path=axis] (axis cs:0,0) -- (axis cs:10,0);
\addplot [
thick,
color=black,
fill=black,
fill opacity=0.05
]
fill between[
of=f and g,
soft clip={domain=0:10},
];
\node [rotate=43] at (axis cs: 5, 21) {Slope $\binom{d_1}{2}$};
\node at (axis cs: 0, 1.5) {$d_1$};
\node at (axis cs: 1, 2.5) {$d_2$};
\node at (axis cs: 10, 25.5) {$d_r$};
\addplot[mark=none, domain=0:1,black]{
x+3
};
\addplot[mark=none, domain=1:2,black]{
3*(x-1)+4
};
\addplot[mark=none, domain=2:5,black]{
7
};
\addplot[mark=none, domain=5:6,black]{
14*(x-5)+7
};
\addplot[mark=none, domain=6:8,black]{
1*(x-6)+21
};
\addplot[mark=none, domain=8:10,black]{
2*(x-8)+23
};
\end{axis}
\end{tikzpicture}
\caption{Allowable range of degrees in Theorem \ref{NR}}
\label{F1}
\end{figure}
\begin{comment}
\begin{Rem}
In fact, Theorem \ref{slope} implies that we can take any choice of degrees $d_1\leq d_2\leq \cdots\leq d_r$ with $d_i\leq d_1+\binom{d_1}{2}(i-1)$, increase any of the degrees $d_i$ arbitrarily up to a total increase of $r-2$, and the conclusion of Theorem \ref{NR} would still hold.
\end{Rem}
\end{comment}
We give two applications.
\subsubsection{Singular hypersurfaces}
Consider the following conjecture studied by Slavov \cite{Kaloyan}:
\begin{Conj}
\label{CONJA}
Among hypersurfaces of degree $\ell$ in $\mathbb{P}^r$ with singular locus of dimension at least $b$, the unique component of maximal dimension consists of hypersurfaces singular along a $b$-dimensional linear space.
\end{Conj}
The obstruction to solving Conjecture \ref{CONJA} is similar to the obstruction to solving Problem \ref{PROBA} outlined above. One would expect that it is easier to be singular along a linear space than a more complicated variety, but a more complicated variety might vary in a family with more moduli.
As progress towards Conjecture \ref{CONJA}, Slavov showed:
\begin{Thm} [{\cite[Theorem 1.1]{Kaloyan}}]
\label{KaloyanT}
Given $r, b, p$, there exists an effectively computable integer $\ell_0=\ell_0(r,b,p)$ such that for all $\ell\geq \ell_0$, Conjecture \ref{CONJA} holds over a field of characteristic $p$, where $p$ is possibly 0.
\end{Thm}
We can give a much better bound for $\ell_0$ in characteristic 0 when $b=1$. We will show
\begin{Thm}
\label{TsengT}
In characteristic 0 or 2, if $\ell\geq 7$ or $\ell=5$, among hypersurfaces with positive dimensional singular locus, the unique component of maximal dimension is the locus of hypersurfaces singular along a line.
\end{Thm}
We will prove Theorem \ref{TsengT} by applying the trick first given by Poonen \cite{Poonen} to decouple the partial derivatives of a homogeneous form in positive characteristic, which was used to prove Theorem \ref{KaloyanT}. Specifically, we will use Slavov's argument \cite{Kaloyan} to reduce the problem to understanding the locus of $(r+1)$-tuples $(F_1,\ldots,F_{r+1})$ of hypersurfaces in $\mathbb{P}^r$ of the same degree for which $V(F_1, \ldots,F_{r+1})$ is positive dimensional, in characteristic 2.
\subsubsection{Lines on hypersurfaces}
The second application concerns lines on hypersurfaces, in particular, smooth hypersurfaces with unexpectedly large dimensions of lines through a given point. These points are called \emph{1-sharp} points in \cite{RY}. We are able to find the maximal dimensional components of the space of hypersurfaces with a 1-sharp point in \emph{all} dimensions and degrees.
We would expect an $r-d-1$ dimensional family of lines through a point on a smooth hypersurface of degree $d$ in $\mathbb{P}^r$. The case $d\geq r$ is trivial, since we are not expecting any lines through a given point, and the case $d\leq r-2$ can be deduced from the crude degeneration given in the proof of \cite[Theorem 2.1]{LowDegree}. Therefore, the last open case is $d=r-1$, and we show
\begin{Thm}
\label{EVMI}
Let $U\subset \mathbb{P}^{\binom{2r-1}{r-1}-1}$ be the open set of smooth hypersurfaces of degree $r-1\geq 3$ in $\mathbb{P}^r$. Let $Z\subset U$ be the closed subset of hypersurfaces $X$ that contain a positive dimensional family of lines through a point. Then, there are two components of $Z$ of maximal dimension for $r=5$ and a unique component of maximal dimension for $r\neq 5$. The components of maximal dimension are:
\begin{enumerate}
\item the locus of hypersurfaces where the second fundamental form vanishes at a point, for $r\leq 5$;
\item the locus of hypersurfaces containing a 2-plane, for $r\geq 5$.
\end{enumerate}
\end{Thm}
For a more detailed description of what happens in all degrees and dimensions, see Theorem \ref{EVM} below.
Lines through points on hypersurfaces are relevant to the following conjecture by Coskun, Harris, and Starr:
\begin{Conj}
[{\cite[Conjecture 1.3]{cubic}}]
\label{CONJB}
Let $X\subset\mathbb{P}^r$ be a general Fano hypersurface of degree $d\geq 3$ and let $R_e(X)$ denote the closure of the locus in the Hilbert scheme parameterizing smooth degree $e$ rational curves on $X$. Then, $R_e(X)$ is irreducible of dimension $e(r+1-d)+r-4$.
\end{Conj}
For work on Conjecture \ref{CONJB}, see \cite{LowDegree, Beheshti, RY}. For the related problem regarding rational curves on an arbitrary smooth hypersurface of very low degree, see \cite{cubic, TDBrowning}.
The best-known bound toward Conjecture \ref{CONJB} was given by Riedl--Yang \cite{RY}, where the authors proved Conjecture \ref{CONJB} for $d\leq r-2$ by applying bend and break to rational curves within a complete family of hypersurfaces to reduce to the case of lines, so it was crucial for them to know the codimension of the locus of hypersurfaces with more lines through points than expected. We hope that Theorem \ref{EVMI} will be useful in proving the $d=r-1$ case of Conjecture \ref{CONJB}. An attempt at using Theorem \ref{EVMI} to prove the $d=r-1$ case that works only when $e$ is at most roughly $\frac{2-\sqrt{2}}{2}n$ is given in \cite{Tseng2}.
\subsection{Previous work}
The naive incidence correspondence between $k$-tuples of hypersurfaces and possible $(r-k+1)$-dimensional schemes contained in the intersection quickly runs into issues regarding the dimension of the Hilbert scheme and bounds for the Hilbert function, neither of which is well understood.
To the author's knowledge, the only known method in the literature to approach Problem \ref{PROBA} is through a crude degeneration. Namely, in order to bound the locus where $(F_1,\ldots,F_i)$ form a complete intersection but $F_{i+1}$ contains a component of $V(F_1, \cdots, F_i)$, we can linearly degenerate each component of $V(F_1, \cdots,F_i)$ to lie in a linear space and forget the scheme-theoretic structure. This method was used in \cite[Theorem 2.1]{LowDegree}.
Similarly, we can linearly degenerate each component of $V(F_1, \cdots, F_i)$ to a union of linear spaces \cite[Lemma 4.5]{Kaloyan}. However, applying this requires a separate incidence correspondence to deal with the case where $V(F_1, \cdots, F_k)$ contains a $r-k+1$ dimensional variety of low degree, making it difficult to obtain quantitative bounds.
To illustrate the limitations of the known methods, the solution to Problem \ref{PROBA} was unknown to the author even in the case $k=r$ and $d_1=\cdots=d_r=2$, where we are intersecting $r$ quadrics in $\mathbb{P}^r$.
\subsubsection{A qualitative answer}
We note that Problem \ref{PROBB} also has an affirmative answer after a large enough twist.
\begin{Thm} [{\cite[Corollary 1.3]{Tseng}}]
\label{HYP}
Given degrees $d_1,\ldots,d_k$, there exists $N_0$ such that for $N\geq N_0$, the unique component of maximal dimension of the locus
\begin{align*}
Z\subset \prod_{i=1}^{k}\mathbb{A}^{\binom{r+d_i+N}{d_i+N}}
\end{align*}
of $k$-tuples of hypersurfaces of degrees $d_1+N,\ldots,d_k+N$ that fail to intersect properly is the locus where the hypersurfaces all contain the same codimension $k-1$ linear space.
\end{Thm}
In fact, the author has shown that Theorem \ref{HYP} holds in much greater generality, where we replace $\mathbb{P}^r$ with an arbitrary variety $X$ and replace the choice of $k$ hypersurfaces by a section of a vector bundle $V$ \cite[Theorem 1.2]{Tseng}. However, the key difference is that the method in this paper gives effective bounds, which is required for the applications. As an analogy, both Theorems \ref{HYP} and \ref{KaloyanT} require a large twist, and Theorems \ref{NR} and \ref{TsengT} are effective versions.
\subsection{Proof idea}
\label{PI}
The proof is elementary, and the only input is a lower bound for the Hilbert function of a nondegenerate variety. This is stated in \cite[Theorem 1.3]{Park}, for example.
\begin{Lem}
\label{ARBH}
If $X\subset \mathbb{P}^r$ is a nondegenerate variety of dimension $a$, then the Hilbert function $h_X$ of $X$ is bounded below by
\begin{align*}
h_{X}(d) \geq (r-a)\binom{d+a-1}{d-1}+\binom{d+a}{d}.
\end{align*}
\end{Lem}
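The bound of Lemma \ref{ARBH} is straightforward to evaluate; a minimal sketch, whose sample values are exactly those that reappear for the nondegenerate curve, surface, and 3-fold in $\mathbb{P}^4$ in Section \ref{KEX}:

```python
from math import comb

def hilbert_lower_bound(r, a, d):
    # Lemma bound: h_X(d) >= (r - a) * C(d + a - 1, d - 1) + C(d + a, d)
    # for a nondegenerate a-dimensional variety X in P^r
    return (r - a) * comb(d + a - 1, d - 1) + comb(d + a, d)

# In P^4: curve evaluated at d = 6, surface at d = 5, 3-fold at d = 4
curve_bound = hilbert_lower_bound(4, 1, 6)      # rational normal curve: 6*4 + 1
surface_bound = hilbert_lower_bound(4, 2, 5)
threefold_bound = hilbert_lower_bound(4, 3, 4)
```

For $a=1$ the bound simplifies to $rd+1$, the Hilbert function of the rational normal curve.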
Note that Lemma \ref{ARBH} is also elementary, as it can be proven by repeatedly cutting $X$ with a general hyperplane and applying \cite[Lemma 3.1]{Montreal}. The varieties $X$ for which equality is achieved are the varieties of minimal degree \cite[Remark 1.6 (1)]{Park}.
To see the proof worked out completely in an example, see Section \ref{KEX}. In this section, we give a feeling for the argument by briefly describing how it might arise naturally. Suppose for convenience that $k=r$, and we try to take the homogeneous forms $F_1,F_2,\ldots,F_r$ one at a time. For each $1\leq i\leq r-1$, it suffices to consider each case where
\begin{enumerate}
\item $V(F_1,\ldots,F_i)\subset \mathbb{P}^r$ has the expected dimension $r-i$
\item $V(F_1,\ldots,F_{i+1})$ is also dimension $r-i$,
\end{enumerate}
which means $V(F_{i+1})$ contains one of the components of $V(F_1,\ldots,F_i)$. By the definition of the Hilbert function, for a component $Z$ of $V(F_1,\ldots,F_i)$ , $V(F_{i+1})$ will contain $Z$ in codimension $h_Z(d_{i+1})$, where $h_Z$ is the Hilbert function of $Z$.
At this point, the obstacle is that $Z$ could have a small Hilbert function, and it won't be enough to simply bound $Z$ below by the Hilbert function of a linear subspace. The next idea is to further divide up our case work in terms of the dimension of the \emph{linear span} of $Z$.
For instance, if $Z$ spans all of $\mathbb{P}^r$, then we can bound $h_Z$ using Lemma \ref{ARBH}. If $Z$ spans a proper subspace $\Lambda\subset \mathbb{P}^r$, then we have a worse bound for the Hilbert function of $Z$. However, if we restrict $F_1,\ldots,F_{i+1}$ to $\Lambda\cong\mathbb{P}^b$ for $b<r$, we find that $F_1|_{\Lambda},\ldots,F_{i+1}|_{\Lambda}$ are homogeneous forms on $\mathbb{P}^b$ with intersection dimension
\begin{align*}
\dim(V(F_1,\ldots,F_{i+1}))=r-i=(b-i)+(r-b),
\end{align*}
which is now $(r-b)+1$ more than expected. This means that, if we restrict our attention to $\Lambda$ and take the forms $F_1,\ldots,F_{i+1}$ one at a time, then there will be $(r-b)+1$ separate instances where intersecting with the next form won't drop the dimension, which will contribute $(r-b)+1$ times to the codimension instead of just once.
Finally, in the final argument, it is cleaner to divide up the tuples of forms $F_1,\ldots,F_r$ for which $\dim(V(F_1,\ldots,F_r))\geq 1$ by the dimension $b$ of the spans of the curves they contain \emph{first}, and then in each case intersect the forms restricted to $b$-dimensional linear spaces $\Lambda \subset \mathbb{P}^r$ one at a time.
\subsection{Funding}
This material is based upon work supported by the National Science Foundation Graduate
Research Fellowship Program under Grant No. 1144152. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
\subsection{Acknowledgements}
The author would like to thank Joe Harris, Anand Patel, Bjorn Poonen, and Kaloyan Slavov for helpful conversations. The author would also like to thank the referees for helpful comments and patience with the author's inexperience.
\section{Example}
\label{KEX}
The reader is encouraged to read the following example before the main argument because it will convey all of the main ideas. Throughout the example, we will be informal in our definitions and our justifications in the interest of clarity. We work over an algebraically closed field of arbitrary characteristic.
We consider the following case of our general problem.
\begin{Pro}
What is the dimension of the space of 4-tuples of homogeneous forms in 5 variables with degree sequence (3,4,5,6), such that the common vanishing locus in $\mathbb{P}^4$ is positive dimensional, and what is a description of the component(s) of maximal dimension?
\end{Pro}
More precisely, let $W_d$ be the vector space of homogeneous degree $d$ polynomials in $X_0,\ldots,X_4$. Define the closed subset $\Phi\subset W_3\times W_4\times W_5\times W_6$ to be the set of tuples $(F_3,F_4,F_5,F_6)$ where $F_i$ is homogeneous of degree $i$ and $F_3,F_4,F_5,F_6$ all vanish on some curve. Note that we are allowing the possibility that $F_i$ can be chosen to be zero.
Since there are many types of curves that could be contained in $V(F_3,F_4,F_5,F_6)$, we set up the incidence correspondence
\begin{center}
\begin{tikzcd}
& \widetilde{\Phi} \arrow[swap, ld, "\pi_1"] \arrow[dr, "\pi_2"] &\\
\Phi & & {\rm Hilb}
\end{tikzcd}
\end{center}
where ${\rm Hilb}$ is the open subset of the Hilbert scheme of curves parameterizing integral curves, and $\widetilde{\Phi}\subset \Phi\times {\rm Hilb}$ is the locus of pairs $((F_3,F_4,F_5,F_6),[C])$ such that $C$ is a curve contained in $V(F_3,F_4,F_5,F_6)$.
\begin{comment}
A first guess is that the largest component of $\Phi$ is given by $\pi_1(\pi_2^{-1}(\{\text{lines}\}))$, the tuples $(F_3,F_4,F_5,F_6)$ of hypersurfaces all containing a line. Comparing this with $\pi_1(\pi_2^{-1}(\{\text{conics}\}))$, it is more conditions for a hypersurface to contain a fixed conic, but on the other hand conics have more moduli. It is possible to do the computation and see $\pi_1(\pi_2^{-1}(\{\text{lines}\}))$ has greater dimension than $\pi_1(\pi_2^{-1}(\{\text{conics}\}))$, but it isn't clear how to carry this out in general, as we would need to understand the relation between the dimension of a component of the Hilbert scheme and the Hilbert function of the curves it parameterizes.
Furthermore, there are curves that will not be in the intersection of $F_3,F_4,F_5,F_6$ without an accompanying surface, in which case we also need to also account for the moduli of the curves on that surface. For instance, in our present example, we cannot have a degree 7 plane curve be contained in $V(F_3, F_4, F_5,F_6)$ without the 2-plane spanned by that curve also being in the intersection. In what follows, we will show that $\pi_1(\pi_2^{-1}(\{\text{lines}\}))$ is in fact the largest component of $\Phi$ in spite of the issues outlined above.
\end{comment}
\subsection{Filtration by span}
We write $\Phi=\Phi(1)\cup \Phi(2)\cup \Phi(3) \cup \Phi(4)$, where $\Phi(i)$ is the locus where $V(F_3,F_4,F_5,F_6)$ contains some integral curve $C$ spanning an $i$-dimensional plane. Even though the sets $\Phi(i)$ are in general only constructible sets, it still makes sense to talk about their dimensions, for example by taking the dimension of their closures.
\subsubsection{Case of lines}
The set $\Phi(1)$ is by definition $\pi_1(\pi_2^{-1}(\{\text{lines}\}))$, or the locus where $V(F_3,F_4,F_5,F_6)$ contains some line. One can check the codimension of $\Phi(1)$ in $ W_3\times W_4\times W_5\times W_6$ is
\begin{align*}
4+5+6+7-\dim(\mathbb{G}(1,4))=16.
\end{align*}
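The displayed count can be checked directly: vanishing on a fixed line imposes $d+1$ conditions on a degree-$d$ form, and $\dim(\mathbb{G}(1,4))=6$. A minimal sketch:

```python
# Codimension of the locus of tuples (F_3, F_4, F_5, F_6) in P^4 all
# vanishing on some line: (d + 1) conditions per degree-d form,
# minus dim G(1, 4) = 2 * (4 - 1) = 6 for the choice of line.
degrees = [3, 4, 5, 6]
dim_G14 = 2 * (4 - 1)
codim_phi1 = sum(d + 1 for d in degrees) - dim_G14
```

This reproduces the codimension 16 computed above.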
\subsubsection{Case of nondegenerate curves}
Next, we look at $\Phi(4)$. Since we will need to keep track of more data in the following discussion, we let $\Phi^{\text{curve}}_{3,4,5,6}(4):=\Phi(4)$. Here, the subscripts are the degrees of the homogeneous forms $F_3,F_4,F_5,F_6$, and the superscript indicates that we are restricting to the case where $V(F_3,F_4,F_5,F_6)$ contains a curve that spans a 4-dimensional plane, i.e., all of $\mathbb{P}^4$.
If $(F_3,F_4,F_5,F_6)\in \Phi^{\text{curve}}_{3,4,5,6}(4)$, then in particular $V(F_3,F_4,F_5)$ contains a nondegenerate integral curve. Under our new notation, this means forgetting the last homogeneous form defines a map $\pi: \Phi^{\text{curve}}_{3,4,5,6}(4)\rightarrow \Phi^{\text{curve}}_{3,4,5}(4)$. Let $\Phi^{\text{surface}}_{3,4,5}(4)\subset \Phi^{\text{curve}}_{3,4,5}(4)$ be the locus of $(F_3,F_4,F_5)$ that all vanish on a nondegenerate integral surface.
Since
\begin{align*}
\dim(\Phi^{\text{curve}}_{3,4,5,6}(4))=\max\{\dim(\pi^{-1}(\Phi^{\text{surface}}_{3,4,5}(4))),\dim(\pi^{-1}( \Phi^{\text{curve}}_{3,4,5}(4)\backslash \Phi^{\text{surface}}_{3,4,5}(4)))\},
\end{align*}
it suffices to bound $\pi^{-1}(\Phi^{\text{surface}}_{3,4,5}(4))$ and $\pi^{-1}( \Phi^{\text{curve}}_{3,4,5}(4)\backslash \Phi^{\text{surface}}_{3,4,5}(4))$ separately. Trivially,
\begin{align*}
\dim(\pi^{-1}(\Phi^{\text{surface}}_{3,4,5}(4)))\leq \dim(\Phi^{\text{surface}}_{3,4,5}(4))+\dim(W_6).
\end{align*}
To bound $\pi^{-1}( \Phi^{\text{curve}}_{3,4,5}(4)\backslash \Phi^{\text{surface}}_{3,4,5}(4))$, we need to bound the dimension of the fiber of
\begin{align*}
\pi: \pi^{-1}( \Phi^{\text{curve}}_{3,4,5}(4)\backslash \Phi^{\text{surface}}_{3,4,5}(4))\rightarrow \Phi^{\text{curve}}_{3,4,5}(4)\backslash \Phi^{\text{surface}}_{3,4,5}(4)
\end{align*}
from below. Suppose $(F_3,F_4,F_5,F_6)\in \pi^{-1}( \Phi^{\text{curve}}_{3,4,5}(4)\backslash \Phi^{\text{surface}}_{3,4,5}(4))$. Then, $V(F_3, F_4, F_5)$ contains no nondegenerate surfaces, but it does contain a nondegenerate curve $C$. We see that the number of conditions for $F_6$ to contain $C$ set theoretically is at least $6\cdot 4+1=25$, which is the Hilbert function of the rational normal curve in $\mathbb{P}^4$ evaluated at 6.
To summarize what we have so far, it is easier to use codimensions instead of dimensions. Let ${\rm codim}(\Phi^{\text{curve}}_{3,4,5,6}(4))$ denote the codimension in $W_3\times W_4\times W_5 \times W_6$. Similarly, ${\rm codim}(\Phi^{\text{curve}}_{3,4,5}(4))$ and ${\rm codim}(\Phi^{\text{surface}}_{3,4,5}(4))$ denote the codimension in $W_3\times W_4\times W_5$. So far, we have shown
\begin{align*}
{\rm codim}(\Phi^{\text{curve}}_{3,4,5,6}(4))&\geq \min\{{\rm codim}(\Phi^{\text{surface}}_{3,4,5}(4)),{\rm codim}(\Phi^{\text{curve}}_{3,4,5}(4))+25\}\\
&=\min\{{\rm codim}(\Phi^{\text{surface}}_{3,4,5}(4)),25\}.
\end{align*}
Repeating the same argument above for $\Phi^{\text{surface}}_{3,4,5}(4)$, we see
\begin{align*}
{\rm codim}(\Phi^{\text{surface}}_{3,4,5}(4))\geq \min\{{\rm codim}(\Phi^{\text{3-fold}}_{3,4}(4)),51\},
\end{align*}
where $51=2\binom{2+5-1}{5-1}+\binom{2+5}{5}$ is the Hilbert function of a surface of minimal degree evaluated at 5. Continuing the process, we find
\begin{align*}
{\rm codim}(\Phi^{\text{3-fold}}_{3,4}(4))&\geq \min\{{\rm codim}(\Phi^{\text{4-fold}}_{3}(4)),55\}.
\end{align*}
As a possible point of confusion, the number of conditions for a degree 4 hypersurface to contain a degree 3 hypersurface is $\binom{4+4}{4}-\binom{4+1}{1}=65$, which is greater than 55. However, to keep our arguments consistent, we instead use the weaker bound given by the Hilbert function of a quadric hypersurface, which is also a variety of minimal degree. Finally,
\begin{align*}
{\rm codim}(\Phi^{\text{4-fold}}_{3}(4))&=35,
\end{align*}
where $\Phi^{\text{4-fold}}_{3}(4)$ is just the single point in $W_3$ corresponding to the zero homogeneous form. Putting everything together, we find
\begin{align*}
{\rm codim}(\Phi(4))={\rm codim}(\Phi^{\text{curve}}_{3,4,5,6}(4))\geq \min\{25,51,55,35\}=25.
\end{align*}
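As a quick sanity check (not part of the proof), the binomial arithmetic above can be verified in a few lines of Python; here $h(r,a,d)$ is the Hilbert function of a variety of minimal degree of dimension $a$ in $\mathbb{P}^r$, evaluated at $d$ (the function $h_{r,a}$ introduced in Section \ref{MainArgument}).

```python
from math import comb

# Hilbert function of a dimension-a variety of minimal degree in P^r at d.
def h(r, a, d):
    return (r - a) * comb(d + a - 1, d - 1) + comb(d + a, d)

# Rational normal curve in P^4 at degree 6: the 25 conditions on F_6.
assert h(4, 1, 6) == 6 * 4 + 1 == 25
# Surface of minimal degree in P^4 at degree 5: the 51 conditions on F_5.
assert h(4, 2, 5) == 2 * comb(2 + 5 - 1, 5 - 1) + comb(2 + 5, 5) == 51
# Quadric hypersurface in P^4 at degree 4: the 55 conditions on F_4.
assert h(4, 3, 4) == comb(4 + 4, 4) - comb(4 + 2, 4) == 55
# Containing a fixed cubic hypersurface would impose 65 > 55 conditions.
assert comb(4 + 4, 4) - comb(4 + 1, 1) == 65
# The zero cubic form is a point of codimension dim(W_3) = 35.
assert comb(3 + 4, 4) == 35
# The chain of inequalities then gives codim(Phi(4)) >= 25.
assert min(25, 51, 55, 35) == 25
```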
\subsubsection{Case of curves spanning a 3-plane}
\label{ICEX}
In order to bound the codimension of $\Phi(3)$ in $W_3\times W_4\times W_5\times W_6$, we use the same argument as we used to bound the codimension of $\Phi(4)$ with one small extra step. Consider the incidence correspondence
\begin{center}
\begin{tikzcd}
& \widetilde{\Phi}(3) \arrow[swap, ld, "\pi_1"] \arrow[dr, "\pi_2"] &\\
\Phi(3) & & \mathbb{G}(3,4)=(\mathbb{P}^4)^{*}
\end{tikzcd}
\end{center}
where $\widetilde{\Phi}(3)\subset \Phi(3)\times (\mathbb{P}^4)^{*}$ consists of pairs $((F_3,F_4,F_5,F_6),H)$ where $H$ is a hyperplane such that $V(F_3, F_4, F_5, F_6)$ contains an integral curve that spans $H$. To bound the codimension of $\Phi(3)$ from below, it suffices to bound the codimension of the fibers of $\pi_2$ in $W_3\times W_4\times W_5\times W_6$ from below.
By restricting $F_3,F_4,F_5,F_6$ to $H$, we see that the codimension of each fiber of $\pi_2$ is precisely the number of conditions for homogeneous forms of degrees $(3,4,5,6)$ in $\mathbb{P}^3$ to vanish on some nondegenerate curve in $\mathbb{P}^3$. Writing this in symbols, the codimension of the fibers of $\pi_2$ is the codimension of $\Phi^{\mathbb{P}^3,\text{curve}}_{3,4,5,6}(3)\subset W_{3,3}\times W_{3,4}\times W_{3,5}\times W_{3,6}$, where $W_{r,d}\cong \mathbb{A}^{\binom{r+d}{d}}$ is the vector space of homogeneous polynomials of degree $d$ in $r+1$ variables, and $\Phi^{\mathbb{P}^3,\text{curve}}_{3,4,5,6}(3)$ is the locus of tuples of homogeneous forms $(F_3',F_4',F_5',F_6')$ vanishing on some nondegenerate integral curve in $\mathbb{P}^3$. The extra superscript is to remind ourselves that we are working in $\mathbb{P}^3$ now.
Repeating the same argument as we did to bound ${\rm codim}(\Phi(4))$ above, we compute
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^3,\text{curve}}_{3,4,5,6}(3))&\geq \min\{{\rm codim}(\Phi^{\mathbb{P}^3,\text{surface}}_{3,4,5}(3)),19+{\rm codim}(\Phi^{\mathbb{P}^3,\text{curve}}_{3,4,5}(3))\}\\
{\rm codim}(\Phi^{\mathbb{P}^3,\text{curve}}_{3,4,5}(3))&\geq \min\{{\rm codim}(\Phi^{\mathbb{P}^3,\text{surface}}_{3,4}(3)),16\}\\
{\rm codim}(\Phi^{\mathbb{P}^3,\text{surface}}_{3,4,5}(3))&\geq \min\{{\rm codim}(\Phi^{\mathbb{P}^3,\text{3-fold}}_{3,4}(3)),{\rm codim}(\Phi^{\mathbb{P}^3,\text{surface}}_{3,4}(3))+36\}\\
{\rm codim}(\Phi^{\mathbb{P}^3,\text{surface}}_{3,4}(3))&\geq \min\{{\rm codim}(\Phi^{\mathbb{P}^3,\text{3-fold}}_{3}(3)),25\}\\
{\rm codim}(\Phi^{\mathbb{P}^3,\text{3-fold}}_{3,4}(3))&={\rm codim}(\Phi^{\mathbb{P}^3,\text{3-fold}}_{3}(3))+35\\
{\rm codim}(\Phi^{\mathbb{P}^3,\text{3-fold}}_{3}(3))&=20.
\end{align*}
Putting everything together, we see
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^3,\text{curve}}_{3,4,5,6}(3))&\geq \min\{19+16,19+25,19+20,36+25,36+20,35+20\}=35
\end{align*}
Finally, we conclude
\begin{align*}
{\rm codim}(\Phi(3))&\geq -\dim(\mathbb{G}(3,4))+{\rm codim}(\Phi^{\mathbb{P}^3,\text{curve}}_{3,4,5,6}(3))\geq 31.
\end{align*}
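As before, the arithmetic in this chain can be checked mechanically; this is an illustrative check, with $h(r,a,d)$ the minimal-degree Hilbert function bound used throughout.

```python
from math import comb

# h(r, a, d): Hilbert function of a dimension-a variety of minimal degree
# in P^r, evaluated at d.
def h(r, a, d):
    return (r - a) * comb(d + a - 1, d - 1) + comb(d + a, d)

assert h(3, 1, 6) == 19   # twisted cubic in P^3 at degree 6
assert h(3, 1, 5) == 16   # twisted cubic at degree 5
assert h(3, 2, 5) == 36   # quadric surface in P^3 at degree 5
assert h(3, 2, 4) == 25   # quadric surface at degree 4
assert comb(3 + 4, 3) == 35  # dim of the space of quartic forms on P^3
assert comb(3 + 3, 3) == 20  # dim of the space of cubic forms on P^3
# Combining the chain of inequalities:
assert min(19 + 16, 19 + 25, 19 + 20, 36 + 25, 36 + 20, 35 + 20) == 35
# Subtracting dim G(3,4) = 4 for the choice of the hyperplane H:
assert 35 - 4 == 31
```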
\subsubsection{Case of plane curves}
The case of plane curves can be done directly since we can completely classify them. However, if we apply the same argument as above, the bound we get is that ${\rm codim}(\Phi(2))$ is at least
\begin{align*}
-\dim(\mathbb{G}(2,4))+\min\{13+11+9,13+11+10,13+15+10,21+15+10\}=27.
\end{align*}
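Here the numbers entering the minimum can be read off as before (an illustrative check, not part of the argument): $13,11,9$ are the values $2d+1$ of the Hilbert function of a conic, the minimal-degree nondegenerate curve in $\mathbb{P}^2$, at $d=6,5,4$, while $10,15,21$ are the dimensions of the spaces of forms of degrees $3,4,5$ on $\mathbb{P}^2$.

```python
from math import comb

def h(r, a, d):  # minimal-degree Hilbert function bound h_{r,a}(d)
    return (r - a) * comb(d + a - 1, d - 1) + comb(d + a, d)

# Conic in P^2: h(2, 1, d) = 2d + 1.
assert [h(2, 1, d) for d in (6, 5, 4)] == [13, 11, 9]
# Dimensions of the spaces of degree-d forms in three variables.
assert [comb(d + 2, 2) for d in (3, 4, 5)] == [10, 15, 21]
# dim G(2,4) = 3 * (4 - 2) = 6, so the bound is 33 - 6 = 27.
assert -6 + min(13 + 11 + 9, 13 + 11 + 10, 13 + 15 + 10, 21 + 15 + 10) == 27
```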
\subsubsection{Combining the cases}
Combining the cases, we get
\begin{align*}
{\rm codim}(\Phi^1)&=16,\ {\rm codim}(\Phi^2)\geq 27,\ {\rm codim}(\Phi^3)\geq 31,\ {\rm codim}(\Phi^4)\geq 25.
\end{align*}
Summarizing, we now know that the codimension of $\Phi$ in $W_3\times W_4\times W_5\times W_6$ is exactly 16, the component of maximal dimension of $\Phi$ corresponds to tuples of homogeneous forms vanishing on some line, and a component of second largest dimension has codimension at least 25.
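The value ${\rm codim}(\Phi^1)=16$ is consistent with the usual incidence count for lines: a form of degree $d$ vanishing on a fixed line imposes $d+1$ conditions, while the line itself moves in the $6$-dimensional Grassmannian $\mathbb{G}(1,4)$. A one-line check:

```python
# Conditions to contain a fixed line, summed over degrees 3..6, minus
# dim G(1,4) = 2 * (4 - 1) = 6 for the moving line.
assert sum(d + 1 for d in (3, 4, 5, 6)) - 6 == 16
# The line locus therefore dominates the other cases:
assert min(16, 27, 31, 25) == 16
```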
\section{General Argument}
\label{MainArgument}
We now implement the argument given in Section \ref{KEX} in the general case. The notation is heavy, so we have included a list of conventions and a chart of the definitions introduced for the argument in this section for reference.
Conventions:
\begin{enumerate}
\item the base field is an algebraically closed field $K$ of arbitrary characteristic
\item the ambient projective space is $\mathbb{P}^r$
\item $X$ is a subvariety of $\mathbb{P}^r$
\item $A$ is a constructible set
\item $\mathcal{X}\to S$ is a family over a finite type $K$-scheme $S$
\item $F_i$ is a homogeneous form of degree $d_i$
\item $D$ is an integer at least $d_1\cdots d_k$
\end{enumerate}
\pagebreak
Notations:
\begin{center}
\begin{longtable}{l | p{11cm} | r}
Symbol & Informal Meaning & Definition \\\hline
${\rm Hilb}_X$ & Hilbert scheme of $X$ & \ref{HilbD}\\
${\rm Hilb}^b_X$ & ${\rm Hilb}_X$ restricted to subschemes of dimension $b$ & \ref{HilbD}\\
$\widetilde{{\rm Hilb}}_X$ & ${\rm Hilb}_X$ restricted to integral subschemes & \ref{RHilbD}\\
$\widetilde{{\rm Hilb}}^b_X$ & $\widetilde{{\rm Hilb}}_X\cap {\rm Hilb}^b_X$ & \ref{RHilbD}\\
${\rm Hilb}_X^{b,\leq D}$ & ${\rm Hilb}^b_X$ restricted to schemes of degree at most $D$ & \ref{HilbRDD}\\
${\rm Hilb}_{\mathcal{X}/S}$ & relative version of ${\rm Hilb}_X$ & \ref{HilbS}\\
${\rm Hilb}^b_{\mathcal{X}/S}$ & relative version of ${\rm Hilb}^b_X$ & \ref{HilbS}\\
$\widetilde{{\rm Hilb}}_{\mathcal{X}/S}$ & relative version of $\widetilde{{\rm Hilb}}_X$ & \ref{HilbS}\\
$\widetilde{{\rm Hilb}}^b_{\mathcal{X}/S}$ & relative version of $\widetilde{{\rm Hilb}}^b_X$ & \ref{HilbS}\\
${\rm Hilb}_{\mathcal{X}/S}^{b,\leq D}$ & relative version of ${\rm Hilb}_X^{b,\leq D}$ & \ref{HilbRDD}\\
$W_{r,d}$ & vector space of degree $d$ homogeneous forms in $r+1$ variables & \ref{WDEF}\\
$\mathcal{F}_{r,d}$ & universal hypersurface $\mathcal{F}_{r,d}\subset \mathbb{P}^r\times W_{r,d}$ & \ref{FDEF}\\
$\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ & parameterizes forms $F_1,\ldots,F_k$ whose vanishing locus in $X\subset \mathbb{P}^r$ has dimension $a$ more than expected & \ref{PX}\\
$\Phi^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)$ & relative version of $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ & \ref{PXA}\\
$\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ & parameterizes forms $F_1,\ldots,F_k$ in $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ together with a choice of integral subscheme $Z\subset X\cap V(F_1,\ldots,F_k)$ of dimension $a$ more than expected & \ref{PXT}\\
$\widetilde{\Phi}^{\mathbb{P}^r_S,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)$ & relative version of $\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ & \ref{PXAT}\\
$\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)$ & $\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ restricted to choices $((F_1,\ldots,F_k),Z)$ where $Z\in A$ ($A\subset {\rm Hilb}_{\mathbb{P}^r}$) & \ref{IMA}\\
$\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)$ & image of $\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)$ under the map forgetting $Z$ & \ref{IMA}\\
$\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S,A)$ & relative version of $\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)$ & \ref{IMAS}\\
$\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S,A)$ & relative version of $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)$ & \ref{IMAS}\\
$h_A$ & $h_A(d)$ is the minimum of the Hilbert functions $h_Z(d)$ for $[Z]\in A\subset \widetilde{{\rm Hilb}}_X$ & \ref{minhD}\\
${\rm Contain}(A)$ & all $[Y]\in {\rm Hilb}_X$ where $Y\supset Z$ for $[Z]\in A$ & \ref{containD}\\
${\rm Span}(r,b)$ & subset of ${\rm Hilb}_{\mathbb{P}^r}$ of schemes $Z$ that span a $b$-dimensional linear space & \ref{spanD}\\
\end{longtable}
\end{center}
\subsection{Definitions}
Here, we fix the notation for the objects of study.
\subsubsection{Hilbert scheme and relative Hilbert scheme}
We won't use the geometry of the Hilbert scheme, but we will use its existence.
\begin{Def}
\label{HilbD}
If $X\subset \mathbb{P}^r$ is a projective scheme, let ${\rm Hilb}_X$ denote the Hilbert scheme of subschemes of $X$. Let ${\rm Hilb}_X^b\subset {\rm Hilb}_X$ denote the connected components corresponding to subschemes of dimension $b$.
\end{Def}
\begin{Def}
\label{RHilbD}
Let $\widetilde{{\rm Hilb}}_X\subset {\rm Hilb}_X$ denote the subset of the Hilbert scheme corresponding to geometrically integral subschemes. Recall $\widetilde{{\rm Hilb}}_X$ is open in ${\rm Hilb}_X$ \cite[IV 12.1.1 (x)]{EGA}. Let $\widetilde{{\rm Hilb}}_X^b\subset \widetilde{{\rm Hilb}}_X$ correspond to subschemes of dimension $b$.
\end{Def}
\begin{Def}
\label{HilbS}
Suppose $S$ is a finite type $K$-scheme and $\mathcal{X}\subset \mathbb{P}^r_S$ is a closed subscheme. Let ${\rm Hilb}_{\mathcal{X}/S}$ denote the relative Hilbert scheme of the family $\mathcal{X}\rightarrow S$. Similar to above, we let $\widetilde{{\rm Hilb}}_{\mathcal{X}/S}\subset {\rm Hilb}_{\mathcal{X}/S}$ denote the open locus parameterizing geometrically integral subschemes, and $\widetilde{{\rm Hilb}}_{\mathcal{X}/S}^b\subset \widetilde{{\rm Hilb}}_{\mathcal{X}/S}$ denote the restriction to $b$-dimensional subschemes.
\end{Def}
\subsubsection{Constructible sets}
\label{CONSET}
We will need to work with dimensions and maps of constructible sets. Chevalley's theorem implies constructible subsets remain constructible after taking images \cite[Tag 054K]{stacks-project}. Since we are only interested in their dimensions, it suffices to think of them as subsets of an ambient space with a notion of dimension.
\begin{Def}
If $X$ is a scheme, then a constructible set $A\subset X$ is a finite union of locally closed subsets.
\end{Def}
To take the dimension of a constructible set, it suffices to either look at the generic points or take the closure.
\begin{Def}
\label{DEFDIM}
If $A\subset X$ is a constructible set, then $\dim(A):=\dim(\overline{A})$. The dimension of the empty set is $-\infty$. The codimension of the empty set in a nonempty constructible set is $\infty$.
\end{Def}
By applying the usual theorem on fiber dimension to $\overline{A}$ and $\overline{B}$ below, we have:
\begin{Lem}
\label{FY6}
If $f:X\rightarrow Y$ is a morphism of finite type schemes over a field, $A\subset X$ and $B=f(A)\subset Y$ are constructible sets, and $\dim(f^{-1}(b)\cap A)\leq c$ for all $b\in B$, then
\begin{align*}
\dim(A)\leq \dim(B)+c.
\end{align*}
If $\dim(f^{-1}(b)\cap A)=c$ for all $b\in B$, then equality holds.
\end{Lem}
For notational convenience, we define the pullback of a constructible set.
\begin{Def}
If $X\rightarrow Z \leftarrow Y$ are morphisms and $A\subset X$ is a constructible set, then define $A\times_Z Y$ to be the preimage of $A$ in $X\times_Z Y$ under $X\times_Z Y\rightarrow X$.
\end{Def}
\subsubsection{Locus of tuples of hypersurfaces not intersecting properly}
We parameterize our space of homogeneous forms with affine spaces.
\begin{Def}
\label{WDEF}
Given positive integers $r,d$, let $W_{r,d}\cong \mathbb{A}^{\binom{r+d}{d}}$ be the affine space whose underlying vector space is $H^0(\mathbb{P}^r,\mathscr{O}_{\mathbb{P}^r}(d))$, the space of degree $d$ homogeneous forms on $\mathbb{P}^r$.
\end{Def}
\begin{Def}
\label{FDEF}
Over $W_{r,d}$, let $\mathcal{F}_{r,d}\subset \mathbb{P}^r\times W_{r,d}$ be the universal family $\mathcal{F}_{r,d}\rightarrow W_{r,d}$.
\end{Def}
\begin{Def}
\label{PX}
If $X\subset \mathbb{P}^r$ is a projective scheme, $a$ a nonnegative integer, and $(d_1,\ldots,d_k)$ is a tuple of positive integers, define $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)\subset \prod_{i=1}^{k}W_{r,d_i}$ to be the closed subset of tuples $(F_1,\ldots,F_k)$ of homogeneous forms of degrees $(d_1,\ldots,d_k)$ such that
\begin{align*}
\dim(X\cap V(F_1,\ldots,F_k))\geq \dim(X)-k+a.
\end{align*}
\end{Def}
In particular, $\Phi^{\mathbb{P}^r,0}_{d_1,\ldots,d_k}(X)$ is all of $\prod_{i=1}^{k}W_{r,d_i}$, and $\Phi^{\mathbb{P}^r,1}_{d_1,\ldots,d_k}(\mathbb{P}^r)$ is the locus of tuples of hypersurfaces failing to intersect in a complete intersection. Similarly, given a family $\mathcal{X}\subset \mathbb{P}^r_S$, we can define a relative version.
In Definition \ref{PX} it is useful to keep in mind that, even though we are mostly interested in the case of hypersurfaces failing to be a complete intersection $\Phi^{\mathbb{P}^r,1}_{d_1,\ldots,d_k}(\mathbb{P}^r)$, we will need to let $X$ and $a$ vary.
\begin{Def}
\label{PXA}
Given a finite type $K$-scheme $S$ and a closed subscheme $\mathcal{X}\subset \mathbb{P}^r_S$, let $\Phi^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)\subset S\times_K \prod_{i=1}^{k}{W_{r,d_i}}$ denote the closed subset such that for all $s\in S$ with residue field $k(s)$, the fiber of $\Phi^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)\rightarrow S$ over $s$ is $\Phi^{\mathbb{P}_{k(s)}^r,a}_{d_1,\ldots,d_k}(\mathcal{X}|_s)$.
\end{Def}
Both $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ and $\Phi^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)$ can be constructed by applying upper semicontinuity of dimension to the respective families over $\prod_{i=1}^{k}{W_{r,d_i}}$ and $S\times \prod_{i=1}^{k}{W_{r,d_i}}$.
\subsection{Incidence correspondence}
We will want to break up $\Phi^{\mathbb{P}^r,1}_{d_1,\ldots,d_k}(\mathbb{P}^r)$ into constructible sets based on the types of schemes contained in the common vanishing loci of the $k$-tuples of homogenous forms. To formalize this, we define the following.
\begin{Def}
\label{PXT}
Given $X\subset \mathbb{P}^r$ projective, let $\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)\subset \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)\times {\rm Hilb}_X^{\dim(X)-k+a}$ denote the closed subset corresponding to pairs $((F_1,\ldots,F_k),[V])$, where $(F_1,\ldots,F_k)$ is in $\prod_{i=1}^{k}{W_{r,d_i}}$ and $[V]$ is in ${\rm Hilb}_X^{\dim(X)-k+a}$, such that $V\subset X\cap V(F_1,\ldots,F_k)$.
\end{Def}
Similarly, we have the relative version.
\begin{Def}
\label{PXAT}
Given a finite type $K$-scheme $S$ and a subscheme $\mathcal{X}\subset \mathbb{P}^r_S$ where each fiber of $\mathcal{X}/S$ has the same dimension $b$, let $\widetilde{\Phi}^{\mathbb{P}^r_S,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)\subset \Phi^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)\times {\rm Hilb}_{\mathcal{X}/S}^{b-k+a}$ denote the closed subset such that, for each $s\in S$ with residue field $k(s)$, the restriction of $\widetilde{\Phi}^{\mathbb{P}^r_S,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)\rightarrow S$ to the fiber over $s$ is $\widetilde{\Phi}^{\mathbb{P}_{k(s)}^r,a}_{d_1,\ldots,d_k}(\mathcal{X}|_{s})$.
\end{Def}
To construct both $\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ and $\widetilde{\Phi}^{\mathbb{P}^r_S,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)$, we can use:
\begin{Lem}
[{\cite[Lemma 7.1]{ACGH2}}]
\label{CFF}
Given a scheme $A$ and two closed subschemes $B,C\subset \mathbb{P}^r_A$ with $B$ flat over $A$, there exists a closed subscheme $D\subset A$ such that any morphism $T\rightarrow A$ factors through $D$ if and only if $B\times_A T$ is a subscheme of $C\times_A T$.
\end{Lem}
In particular, $\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ and $\widetilde{\Phi}^{\mathbb{P}^r_S,a}_{d_1,\ldots,d_k}(\mathcal{X}/S)$ actually have a canonical scheme theoretic structure, though we will not make use of it.
To find the maximal dimensional components of $\Phi^{\mathbb{P}^r,1}_{d_1,\ldots,d_k}(\mathbb{P}^r)$, our general plan is to cover ${\rm Hilb}_{\mathbb{P}^r}^{r-k+1}$ with constructible sets $A_i$, and bound the dimensions of $\pi_1(\pi_2^{-1}(A_i))$ for each $i$, where $\pi_1$ and $\pi_2$ are given by the following diagram in the case $X=\mathbb{P}^r$ and $a=1$.
\begin{center}
\begin{tikzcd}
& \widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X) \arrow[swap, ld, "\pi_1"] \arrow[dr, "\pi_2"] &\\
\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X) & & {\rm Hilb}_{X}^{\dim(X)-k+a}
\end{tikzcd}
\end{center}
There is a minor technical issue: we would like to restrict ourselves to finitely many connected components of ${\rm Hilb}_{X}^{\dim(X)-k+a}$. Otherwise, for example, if we take the subset $A\subset {\rm Hilb}_{X}^{\dim(X)-k+a}$ of nondegenerate varieties, then $\pi_1(\pi_2^{-1}(A))$ is a priori only a countable union of constructible sets, obtained by applying Chevalley's theorem to $A$ restricted to each connected component of ${\rm Hilb}_{X}^{\dim(X)-k+a}$.
Therefore, we define
\begin{Def}
\label{HilbRDD}
Let ${\rm Hilb}_X^{b,\leq D}\subset {\rm Hilb}_X^b$ denote the connected components parameterizing subschemes of degree at most $D$. We similarly define ${\rm Hilb}_{\mathcal{X}/S}^{b,\leq D}\subset {\rm Hilb}_{\mathcal{X}/S}^{b}$ for $\mathcal{X}\subset \mathbb{P}^r_S$.
\end{Def}
From Chow's finiteness theorem \cite[Exercise I.3.28 and Theorem I.6.3]{Kollar}, we see ${\rm Hilb}_X^{b,\leq D}$ and ${\rm Hilb}_{\mathcal{X}/S}^{b,\leq D}$ have only finitely many connected components. From the refined B\'ezout theorem \cite[Example 12.3.1]{Fulton}, for $D=d_1\cdots d_k$, $\pi_1(\pi_2^{-1}({\rm Hilb}_X^{\dim(X)-k+a,\leq D}))$ is all of $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$. In general, we will always choose $D$ such that $D\geq d_1\cdots d_k$.
\begin{Rem}
The applications of Chow's finiteness theorem and the refined B\'ezout theorem are unnecessary and entirely for notational convenience. Very generally, given any finite dimensional Noetherian scheme $S$ covered by a family of constructible sets, there exists a finite subcover, by looking at the generic points of $S$ and applying induction on dimension. Applying this to $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$ shows that there is a union $A$ of finitely many connected components of ${\rm Hilb}_{\mathbb{P}^r}^{r-k+1}$ such that $\pi_1(\pi_2^{-1}(A))=\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$, so we can just restrict our incidence correspondence to $A$ instead of ${\rm Hilb}_X^{\dim(X)-k+a,\leq D}$.
\end{Rem}
Motivated by the above, we make the following definition.
\begin{Def}
\label{IMA}
For $A\subset {\rm Hilb}_{\mathbb{P}^r}$ a constructible subset and $D\geq d_1\cdots d_k$ an integer, let $\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)=\pi_2^{-1}(A\cap {\rm Hilb}_X^{\dim(X)-k+a,\leq D})$ and $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)=\pi_1(\pi_2^{-1}(A\cap {\rm Hilb}_X^{\dim(X)-k+a,\leq D}))$.
\end{Def}
Similarly, we have the relative version.
\begin{Def}
\label{IMAS}
Let $S$ be a finite type $K$-scheme, $\mathcal{X}\subset \mathbb{P}^r_S$ a family such that $\mathcal{X}\rightarrow S$ has $b$-dimensional fibers, $\mathcal{A}\subset S\times {\rm Hilb}_{\mathbb{P}^r}$ a constructible subset. For $D\geq d_1\cdots d_k$, let $\widetilde{\Phi}^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A})=\pi_2^{-1}(\mathcal{A}\cap {\rm Hilb}_{\mathcal{X}/S}^{b-k+a,\leq D})$ and $\Phi^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A})=\pi_1(\pi_2^{-1}(\mathcal{A}\cap {\rm Hilb}_{\mathcal{X}/S}^{b-k+a,\leq D}))$ for $\pi_1$ and $\pi_2$ defined as below.
\begin{center}
\begin{tikzcd}
& \widetilde{\Phi}^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S) \arrow[swap, ld, "\pi_1"] \arrow[dr, "\pi_2"] &\\
\Phi^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S) & & {\rm Hilb}_{\mathcal{X}/S}^{b-k+a,\leq D}
\end{tikzcd}
\end{center}
\end{Def}
\subsection{Inducting on the number of hypersurfaces}
We now present Lemma \ref{kind}, which bounds the dimension of $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)$.
\subsubsection{Preliminary definitions}
Instead of working with dimensions, it is easier to work with codimensions.
\begin{Def}
Let ${\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X))$ and ${\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A))$ be the codimension in $\prod_{i=1}^{k}W_{r,d_i}$.
We similarly let ${\rm codim}(\Phi^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S))$ and ${\rm codim}(\Phi^{\mathbb{P}_S^r,a}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A}))$ mean the codimension in $S\times\prod_{i=1}^{k}W_{r,d_i}$.
\end{Def}
We will need to refer to the minimum of the Hilbert function over a constructible set of the Hilbert scheme.
\begin{Def}
\label{minhD}
For $A\subset {\rm Hilb}_X$ a constructible set, let
\begin{align*}
h_A(d):=\min\{h_Z(d):[Z]\in A\},
\end{align*}
where $h_Z$ refers to the Hilbert function of a projective scheme $Z$. This is well-defined since $Z\subset X\subset \mathbb{P}^r$.
\end{Def}
We will need notation for the locus of subschemes containing a member of a given constructible subset of the Hilbert scheme.
\begin{Def}
\label{containD}
Given $A\subset {\rm Hilb}^{\leq D}_{X}$ a constructible subset, let ${\rm Contain}(A)$ denote the constructible subset consisting of all $[Y]\in {\rm Hilb}^{\leq D}_X$ such that $Y$ contains some $Z$ for $[Z]\in A$.
\end{Def}
To see ${\rm Contain}(A)$ is constructible, let $\mathcal{Z}\subset {\rm Hilb}^{\leq D}_{X}\times {\rm Hilb}^{\leq D}_{X}$ denote the subset corresponding to pairs $([Y],[Z])$ such that $Z\subset Y$. Lemma \ref{CFF} shows $\mathcal{Z}$ is closed. From the incidence correspondence
\begin{center}
\begin{tikzcd}
& \mathcal{Z} \arrow[swap, ld, "\pi_1"] \arrow[dr, "\pi_2"] &\\
{\rm Hilb}^{\leq D}_{X}& & {\rm Hilb}^{\leq D}_{X}
\end{tikzcd}
\end{center}
we see ${\rm Contain}(A)=\pi_1(\pi_2^{-1}(A))$ is constructible by Chevalley's theorem \cite[Tag 054K]{stacks-project}.
\subsubsection{Forgetting a hypersurface}
\begin{Lem}
\label{kind}
For $X\subset \mathbb{P}^r$ a projective scheme and $A\subset {\rm Hilb}^{\dim(X)-k+a,\leq D}_X$ constructible,
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A))\geq \min\{{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1)),{\rm codim}(\Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A))+h_A(d_k)\},
\end{align*}
where $A_1={\rm Contain}(A)\cap {\rm Hilb}_X^{\dim(X)-k+a+1,\leq D}$.
\end{Lem}
In words, the idea of Lemma \ref{kind} is intuitive. If $(F_1,\ldots,F_k)$ are homogeneous forms that vanish on $Z$ for some $[Z]\in A$, then at least one of the following holds:
\begin{enumerate}
\item
$(F_1,\ldots,F_{k-1})$ vanish on some $Z'\supset Z$ with $\dim(Z')=\dim(Z)+1$
\item
$\dim(V(F_1,\ldots,F_{k-1}))=\dim(V(F_1,\ldots,F_k))$.
\end{enumerate}
The first case is captured by the codimension of $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1)$, and the second case has codimension at least ${\rm codim}(\Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A))+h_A(d_k)$, since $F_k$ must vanish on one of the components of $V(F_1,\ldots,F_{k-1})$, which happens in codimension at least $h_A(d_k)$.
\begin{proof}
Let the constructible set $B$ be the image of the projection $\pi: \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)\rightarrow \prod_{i=1}^{k-1}{W_{r,d_i}}$ forgetting the last hypersurface. By definition, we see $B\subset \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1)\cup \Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A)$.
Lemma \ref{FY6} shows
\begin{align*}
{\rm codim}(\pi^{-1}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1)))\geq {\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1)).
\end{align*}
To bound $\pi^{-1}(\Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A)\backslash \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1))$, let
\begin{align*}
(F_1,\ldots,F_{k-1})\in \Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A)\backslash \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1)
\end{align*}
be a closed point and $V_1,\ldots,V_\ell$ be the $\dim(X)-k+a$ dimensional components of $X\cap V(F_1,\ldots, F_{k-1})$ that are in $A$. Here, we are using $(F_1,\ldots, F_{k-1})\notin \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1)$ to see that $X\cap V(F_1,\ldots,F_{k-1})$ does not contain any component $V$ of dimension greater than $\dim(X)-k+a$ containing some $Z$ with $[Z]\in A$.
The locus of forms $F_k$ that contain $V_i$ is a linear subspace of $W_{r,d_k}$ of codimension $h_{V_i}(d_k)\geq h_A(d_k)$, so the codimension of each fiber of $\pi$ in $W_{r,d_k}$ is at least $h_A(d_k)$. Applying Lemma \ref{FY6}, we conclude
\begin{align*}
{\rm codim}(\pi^{-1}(\Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A)\backslash \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1)))&\geq {\rm codim}(\Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A)\backslash \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1))+h_A(d_k)\\
&\geq {\rm codim}(\Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A))+h_A(d_k).
\end{align*}
\end{proof}
\begin{comment}
\begin{Rem}
This issue is well-disguised in the proof, but, if our base field $K$ is not algebraically closed, then we need to take geometric points instead of closed points when bounding $\pi^{-1}(\Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A_2)\backslash \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1))$. The issue is that, if
\begin{align*}
p\in \Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(X,A_2)\backslash \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(X,A_1)
\end{align*}
has residue field $k(p)$, then we cannot determine $B\times_K k(p)$ by just taking $k(p)$-points, as $X_{k(p)}\cap V(F_1, \ldots F_{k-1})$ might have geometric components not defined over $k(p)$. However, since $B\times_K \overline{k(p)}$ is over an algebraically closed field, knowing that the set of $\overline{k(p)}$-points is a union of linear spaces is enough to determine $B\times_K \overline{k(p)}$ topologically.
\end{Rem}
\end{comment}
\subsection{Varieties contained in a family}
Now, we want to partition by span as outlined in Sections \ref{PI} and \ref{KEX}. Since we will need to vary a linear space $\Lambda \subset \mathbb{P}^r$, we will need relative versions of $\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$, where we will eventually set $X=\Lambda$ and let $\Lambda$ vary. Even though we only need a very special case, it simplifies the notation to introduce the definitions more generally.
\begin{Lem}
\label{IC}
Suppose $S$ is a finite type $K$-scheme, $X\subset \mathbb{P}^r_K$ is a projective scheme, $\mathcal{X}\subset X\times_K S$ is a closed subscheme such that each fiber of $\mathcal{X}\rightarrow S$ has codimension $e>0$ in $X$, $\mathcal{A}\subset {\rm Hilb}_{\mathcal{X}/S}^{\dim(X)-k+a,\leq D}$ is a constructible subset, the image of $\mathcal{A}$ in ${\rm Hilb}_X^{\dim(X)-k+a}$ under ${\rm Hilb}_{\mathcal{X}/S}^{\dim(X)-k+a}\rightarrow {\rm Hilb}_X^{\dim(X)-k+a}$ is $A$, and the fiber dimension of $\mathcal{A}\rightarrow A$ is at least $c$ at all points of $A$. Then,
\begin{align}
{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A))\geq {\rm codim}(\Phi^{\mathbb{P}_S^r,a+e}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A}))-\dim(S)+c, \label{ICeq}
\end{align}
and equality holds if the fiber dimension of $\mathcal{A}\rightarrow A$ is exactly $c$ for all points of $A$, $\widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)\rightarrow \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A)$ has generically finite fibers, and $\Phi^{\mathbb{P}_S^r,a+e}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A})$ is irreducible.
\end{Lem}
Like Lemma \ref{kind}, the idea of Lemma \ref{IC} is simple, but the notation obscures the statement. In words, suppose $A\subset {\rm Hilb}_X^{\dim(X)-k+a}$ is a constructible subset such that every element $[Z]\in A$ is contained in a member of the family $\mathcal{X}\to S$. Let $\mathcal{X}|_s$ be a member of the family. Since $\mathcal{X}|_s\subset X$ has codimension $e>0$, the tuples of forms $(F_1,\ldots,F_k)$ that vanish on some $Z\subset \mathcal{X}|_s$ with $[Z]\in A$ have zero locus of dimension $e+a$ more than expected when restricted to $\mathcal{X}|_s$, instead of just $a$ more than expected when regarded on $X$. This yields a better bound when we apply Lemma \ref{kind} repeatedly.
The price is that we need to consider all choices of $s\in S$, which is why $-\dim(S)$ appears on the right side of \eqref{ICeq}. The role of $c$ and of the equality case is just so that we will not have to treat separately the case where $A$ consists of linear spaces. If $A$ is the Grassmannian of linear spaces, $\mathcal{X}\to S$ is the universal family over the Grassmannian, and $X=\mathbb{P}^r$, then the content of Lemma \ref{IC} reduces to the usual incidence correspondence parameterizing choices of $((F_1,\ldots,F_k),\Lambda)$, where $F_1,\ldots,F_k$ are homogeneous forms vanishing on $\Lambda$.
\begin{proof}
To summarize all the objects involved, consider the commutative diagram
\begin{center}
\begin{tikzcd}
& \Phi^{\mathbb{P}^r,a+e}_{d_1,\ldots,d_k}(\mathcal{X}/S) \arrow[r] \arrow[dl] & \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X) \arrow[hook,dr]& \\
S &\widetilde{\Phi}^{\mathbb{P}^r,a+e}_{d_1,\ldots,d_k}(\mathcal{X}/S) \arrow[r] \arrow[u] \arrow[d] \arrow[l] & \widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X) \arrow[u] \arrow[d] \arrow[r]& \prod_{i=1}^{k}{W_{r,d_i}}\\
&{\rm Hilb}_{\mathcal{X}/S}^{\dim(X)-k+a} \arrow[r] \arrow[ul] & {\rm Hilb}_X^{\dim(X)-k+a} &
\end{tikzcd}
\end{center}
Restricting to $\mathcal{A}$ yields
\begin{center}
\begin{tikzcd}
& \Phi^{\mathbb{P}^r,a+e}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A}) \arrow[r, "\phi_1", two heads] \arrow[dl] & \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A) \arrow[hook,dr]& \\
S &\widetilde{\Phi}^{\mathbb{P}^r,a+e}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A}) \arrow[r, "\phi_2", two heads] \arrow[u,two heads, "\widetilde{\pi}_1" ] \arrow[d,two heads, "\widetilde{\pi}_2"] \arrow[l] & \widetilde{\Phi}^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X,A) \arrow[u, two heads, "\pi_1"] \arrow[d, two heads, "\pi_2"] \arrow[r]& \prod_{i=1}^{k}{W_{r,d_i}}\\
&\mathcal{A} \arrow[r, "\phi_3", two heads] \arrow[ul] & A &
\end{tikzcd}
\end{center}
We claim $\phi_1$ is surjective. Indeed, $\widetilde{\pi}_1$, $\pi_1$ and $\phi_3$ are surjective by definition. The maps $\widetilde{\pi}_2$ and $\pi_2$ are surjections since the affine spaces $W_{r,d_i}$ contain the zero homogeneous form. The map $\phi_2$ is surjective because the fiber of $\phi_2$ over a pair $((F_1,\ldots,F_k),[Z])$ corresponding to a closed point of $\prod_{i=1}^{k}W_{r,d_i}\times {\rm Hilb}_X^{\dim(X)-k+a}$ is $\phi_3^{-1}([Z])$, and $\phi_1$ is surjective because each fiber of $\phi_1$ is a union of fibers of $\phi_2$. Indeed, take a closed point $(F_1,\ldots,F_k)\in \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$. Let $B:={\rm Hilb}_{V(F_1, \ldots, F_k)\cap X}^{\dim(X)-k+a}\cap A\subset A$ be the constructible set of all $[Z]\in A$ such that $Z\subset X\cap V(F_1,\ldots, F_{k})$. The fiber of $\phi_1$ over $(F_1,\ldots,F_k)$ is $\bigcup_{a\in B}{\phi_3^{-1}(a)}$.
To show the inequality, it suffices to show that $\phi_1$ has fiber dimension at least $c$. This is true as $\bigcup_{a\in B}{\phi_3^{-1}(a)}$ has dimension at least $c$, since each fiber of $\phi_3$ has dimension at least $c$.
To show the equality case, it suffices by the irreducibility of $\Phi^{\mathbb{P}_S^r,a+e}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A})$ to show that $\phi_1$ has generic fiber dimension $c$, so we pick a general $(F_1,\ldots,F_k)\in \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(X)$. Then, $B$ is finite as $\pi_1$ is generically finite, so $\bigcup_{[Z]\in B}{\phi_3^{-1}([Z])}$ has dimension exactly $c$ if $\phi_3^{-1}([Z])$ has dimension exactly $c$ for all $[Z]\in B$.
\end{proof}
\section{Partitioning by Span}
In order to apply Lemmas \ref{kind} and \ref{IC}, we need to partition ${\rm Hilb}_{\mathbb{P}^r}$ into constructible sets. We will partition by span.
\begin{Def}
\label{spanD}
Given a positive integer $b$, let ${\rm Span}(r,b)\subset {\rm Hilb}^{\leq D}_{\mathbb{P}^r}$ denote the locally closed subset parameterizing geometrically integral schemes $Z\subset \mathbb{P}^r$ of degree at most $D$ that span a plane of dimension $b$.
\end{Def}
We claim ${\rm Span}(r,b)$ is locally closed. Indeed, $\bigcup_{i=0}^{b}{{\rm Span}(r,i)}$ is closed because upper semicontinuity of dimension implies the locus $\mathcal{Z}\subset {\rm Hilb}_{\mathbb{P}^r}\times \mathbb{G}(b,r)$ of pairs $([Z],P)$ with $Z\subset P$ is closed. Also, it is easy to see the following.
\begin{Lem}
We have ${\rm Contain}({\rm Span}(r,r))={\rm Span}(r,r)$.
\end{Lem}
\begin{Def}
Define
\begin{align*}
h_{r,a}(d):=(r-a)\binom{d+a-1}{d-1}+\binom{d+a}{d}.
\end{align*}
\end{Def}
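As a point of reference for later computations, two special cases of this definition recur below:
\begin{align*}
h_{r,1}(d)&=(r-1)\binom{d}{d-1}+\binom{d+1}{d}=rd+1, & h_{r,r}(d)&=\binom{d+r}{d}=\dim W_{r,d};
\end{align*}
the former is, for example, the Hilbert function of a rational normal curve in $\mathbb{P}^r$, and the latter is the dimension of the space of degree $d$ forms on $\mathbb{P}^r$.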
By iterating Lemma \ref{kind} and applying Lemma \ref{ARBH}, we get
\begin{Lem}
\label{KIA}
The codimension ${\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r,r)))$ is at least
\begin{align*}
&\min\{\sum_{j=1}^{a}h_{r,r-i_j+j}(d_{i_j}):1\leq i_1<\cdots <i_a\leq k\}.
\end{align*}
Here $a\geq 0$, $k>0$ and $d_i>0$ for each $i$.
\end{Lem}
\begin{proof}
We will prove this by induction on $k$. If $k=1$, then
\begin{enumerate}
\item
If $a=0$, then ${\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1}(\mathbb{P}^r,{\rm Span}(r,r)))=0$, which is at least 0 (the empty sum).
\item
If $a=1$, then $\Phi^{\mathbb{P}^r,a}_{d_1}(\mathbb{P}^r,{\rm Span}(r,r))$ consists of just the zero homogeneous form in $W_{r,d_1}$, which has codimension
\begin{align*}
\binom{r+d_1}{d_1}=h_{r,r}(d_1).
\end{align*}
\item
If $a>1$, then we are taking the minimum over the empty set, as we cannot choose a subset $S\subset \{1\}$ with cardinality $a$. The minimum of the empty set is $\infty$. This is equal to the codimension of $\Phi^{\mathbb{P}^r,a}_{d_1}(\mathbb{P}^r,{\rm Span}(r,r))=\emptyset$ by our convention (Definition \ref{DEFDIM}).
\end{enumerate}
Now, suppose $k>1$. By Lemma \ref{kind}, we know ${\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r,r)))$ is at least
\begin{align*}
\min\{{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(\mathbb{P}^r,{\rm Span}(r,r))),{\rm codim}(\Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(\mathbb{P}^r,{\rm Span}(r,r)))+h_{{\rm Span}(r,r)}(d_k)\}
\end{align*}
By Lemma \ref{ARBH}, $h_{{\rm Span}(r,r)}(d_k)\geq h_{r,r-k+a}(d_k)$. By induction, we know ${\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_{k-1}}(\mathbb{P}^r,{\rm Span}(r,r)))$ is at least
\begin{align*}
\min\{\sum_{j=1}^{a}h_{r,r-i_j+j}(d_{i_j}):1\leq i_1<\cdots<i_a\leq k-1\},
\end{align*}
and ${\rm codim}(\Phi^{\mathbb{P}^r,a-1}_{d_1,\ldots,d_{k-1}}(\mathbb{P}^r,{\rm Span}(r,r)))+h_{r,r-k+a}(d_k)$ is at least
\begin{align*}
\min\{\sum_{j=1}^{a-1}h_{r,r-i_j+j}(d_{i_j}):1\leq i_1<\cdots<i_{a-1}\leq k-1\}+h_{r,r-k+a}(d_k),
\end{align*}
and taking the minimum over the two yields
\begin{align*}
\min\{\sum_{j=1}^{a}h_{r,r-i_j+j}(d_{i_j}):1\leq i_1<\cdots<i_a\leq k\}.
\end{align*}
\end{proof}
The bound in Lemma \ref{KIA} depends on the order of the $d_i$'s, and it is best to order them so that $d_1\leq d_2\leq \cdots\leq d_k$. We could have similarly bounded ${\rm codim}(\Phi^{r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r,b)))$, but this is better dealt with using Lemma \ref{IC}.
\begin{Lem}
\label{ICA}
We have
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r,b)))&\geq {\rm codim}(\Phi^{\mathbb{P}^b,a+(r-b)}_{d_1,\ldots,d_k}(\mathbb{P}^b,{\rm Span}(b,b)))-\dim(\mathbb{G}(b,r)),
\end{align*}
and equality holds when $b=a$.
\end{Lem}
\begin{proof}
We apply Lemma \ref{IC} in the case $S=\mathbb{G}(b,r)$, $\mathcal{X}$ is the universal family of $b$-planes over the Grassmannian, $\mathcal{A}$ is $({\rm Span}(r,b)\times_{K} S)\cap {\rm Hilb}_{\mathcal{X}/S}^{b-k+a,\leq D}$, $Y=\mathbb{P}^r$, and $c=0$ to get
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r,b)))&\geq {\rm codim}(\Phi^{\mathbb{P}_S^r,a+(r-b)}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A}))-\dim(\mathbb{G}(b,r)).
\end{align*}
Let $P\subset \mathbb{P}^r_K$ correspond to a closed point $s\in S$. Then, the fiber of $\Phi^{\mathbb{P}^r_S,a+(r-b)}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A})\rightarrow S$ over $s$ is isomorphic to
\begin{align*}
\Phi^{\mathbb{P}^b,a+(r-b)}_{d_1,\ldots,d_k}(\mathbb{P}^b,{\rm Span}(b,b))\times \prod_{i=1}^{k}{W_{r,d_i}/W_{b,d_i}}
\end{align*}
by restricting to $P\cong \mathbb{P}^b$.
We now show equality in Lemma \ref{IC} holds when $b=a$. It suffices to show
\begin{align*}
\widetilde{\Phi}^{r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r,a))\rightarrow \Phi^{r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r,a))
\end{align*}
has generically finite fibers. Therefore, it suffices to show there exists a choice $(F_1,\ldots,F_k)$ of hypersurfaces such that $V(F_1,\ldots,F_{k})$ contains some $(r-k+a)$-dimensional plane $P$ and $V(F_1,\ldots,F_{k-a})$ has dimension $r-k+a$, as this means $V(F_1,\ldots,F_{k-a})$ cannot contain a positive-dimensional family of $(r-k+a)$-planes.
We will construct $(F_1,\ldots,F_{k-a})$ inductively. Fix an $(r-k+a)$-dimensional plane $P$, and suppose we have found $F_1,\ldots,F_i$ restricting to zero on $P$ such that $\dim(V(F_1,\ldots,F_i))=r-i$ for some $1\leq i<k-a$. Let the components of $V(F_1,\ldots, F_i)$ be $X_1,\ldots,X_{\ell}$. We want to find a homogeneous form $F_{i+1}$ that vanishes on $P$ but not on any $X_j$ for $1\leq j\leq \ell$. Let $V_j\subset H^{0}(\mathbb{P}^r,\mathscr{O}_{\mathbb{P}^r}(d_{i+1}))$ be the vector space of degree $d_{i+1}$ forms vanishing on $X_j$, and let $V\subset H^{0}(\mathbb{P}^r,\mathscr{O}_{\mathbb{P}^r}(d_{i+1}))$ be the vector space of degree $d_{i+1}$ forms vanishing on $P$.
Let $h$ denote the Hilbert function. Since $\dim(X_j)>\dim(P)$ and $P$ is a linear space, $h_{P}(d_{i+1})< h_{X_j}(d_{i+1})$, so $V_j\cap V$ is a proper subspace of $V$. Therefore, $V\backslash \bigcup_{j=1}^{\ell}{V_j}$ is nonempty, and we can pick any $F_{i+1}$ in $V\backslash \bigcup_{j=1}^{\ell}{V_j}$. Finally, we pick $F_{k-a+1},\ldots,F_k$ to be any homogeneous forms restricting to zero on $P$.
\end{proof}
\begin{Def}
Given $r,a$ and degrees $d_1,\ldots,d_k$, let
\begin{align*}
F_{r,a}(d_1,\ldots,d_k)&:=\min\{\sum_{j=1}^{a}h_{r,r-i_j+j}(d_{i_j}):1\leq i_1<\cdots<i_a\leq k\}
\end{align*}
\end{Def}
\begin{Def}
Given $r,a$, degrees $d_1,\ldots,d_k$ and $r-k+a\leq b\leq r$, let
\begin{align*}
G_{r,a,b}(d_1,\ldots,d_k)&:= F_{b,a+(r-b)}(d_1,\ldots,d_k)-\dim(\mathbb{G}(b,r)).
\end{align*}
\end{Def}
Combining Lemmas \ref{KIA} and \ref{ICA}, we have
\begin{Thm}
\label{FML}
For $d_1\leq d_2\leq \cdots\leq d_k$, we have
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r, {\rm Span}(r-k+a)))&=G_{r,a,r-k+a}(d_1,\ldots,d_k)\\
&=-(r-k+a+1)(k-a)+\sum_{i=1}^{k}\binom{d_i+r-k+a}{r-k+a},
\end{align*}
and
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r)\backslash \Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r-k+a)))&\geq \min_{b=r-k+a+1}^{r}{G_{r,a,b}(d_1,\ldots,d_k)}
\end{align*}
\end{Thm}
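As an illustration of the equality case, take $r=3$, $k=2$, $a=0$, and a pair of surfaces of degrees $d_1\leq d_2$ in $\mathbb{P}^3$ whose intersection is required to contain a line. The first formula of Theorem \ref{FML} gives
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^3,0}_{d_1,d_2}(\mathbb{P}^3,{\rm Span}(1)))&=-(1+1)(2-0)+\binom{d_1+1}{1}+\binom{d_2+1}{1}=d_1+d_2-2,
\end{align*}
which agrees with the naive count: vanishing on a fixed line imposes $d_i+1$ conditions on a degree $d_i$ form, and lines move in the $4$-dimensional Grassmannian $\mathbb{G}(1,3)$.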
\begin{comment}
\begin{Rem}
The key example in Section \ref{KEX} outlines in detail the argument that goes into the proof of Theorem \ref{FML}.
\end{Rem}
\end{comment}
\subsection{Hypersurfaces containing a curve}
We specialize Theorem \ref{FML} to the case of hypersurfaces containing some curve in order to get cleaner results. At this point, we have reduced our problem to combinatorics.
In our case, what really matters is not the exact codimension, but the difference between the codimension of the locus of $k$-tuples of homogeneous forms all vanishing on a common line and the codimension of the locus of $k$-tuples of homogeneous forms all vanishing on some common curve other than a line.
\begin{Def}
\label{HrabD}
Let $H_{r,a,b}(d_1,\ldots,d_k):=G_{r,a,b}(d_1,\ldots,d_k)-G_{r,a,r-k+a}(d_1,\ldots,d_k)$.
\end{Def}
Our goal is to prove
\begin{Thm}
\label{slope}
Suppose $r-k+a=1$ and $2\leq d_1\leq d_2\leq \cdots\leq d_k$. If $d_i\leq d_1+(i-1)\binom{d_1}{2}$ for each $i$, then
\begin{align*}
{\rm codim}(\Phi^{r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r)\backslash \Phi^{r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r-k+a)))-{\rm codim}(\Phi^{r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r))
\end{align*}
is at least
\begin{align*}
\begin{cases}
r-1 &\text{ if $k=r$}\\
r-1 + (d_1-2)(r-2)+d_1(k-r)& \text{ if $k>r$}.
\end{cases}
\end{align*}
\end{Thm}
The reason applying Theorem \ref{FML} is not completely straightforward is that the definition of $F_{r,a}$ requires us to take a minimum over a large collection of indices.
\begin{Def}
We say that the minimum for $F_{r,a}(d_1,\ldots,d_k)$ is achieved at the indices $1\leq i_1<\cdots<i_a \leq k$ if
\begin{align*}
F_{r,a}(d_1,\ldots,d_k)&=\sum_{j=1}^{a}h_{r,r-i_j+j}(d_{i_j})
\end{align*}
Note that the choice of $i_1<\cdots<i_a$ is not necessarily unique.
\end{Def}
\begin{Def}
Similarly, we say that the minimum for $G_{r,a,b}(d_1,\ldots,d_k)$ is achieved at $1\leq i_1<\cdots<i_{a+r-b}\leq k$ if the minimum for $F_{b,a+(r-b)}(d_1,\ldots,d_k)$ is achieved at $i_1<\cdots < i_{a+r-b}$.
\end{Def}
Direct computation gives the following easy facts.
\begin{Lem}
\label{US2}
We have
\begin{align*}
h_{r,a}(d)-h_{r,a}(d-1)&=(r-a)\binom{d+a-2}{d-1}+\binom{d+a-1}{d}.
\end{align*}
\end{Lem}
\begin{Lem}
\label{US1}
We have
\begin{align*}
h_{r,a+1}(d)-h_{r,a}(d)&= (r-a)\binom{d+a-1}{a+1}.
\end{align*}
\end{Lem}
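Both identities follow from Pascal's rule; for instance, for Lemma \ref{US1}, writing $h_{r,a}(d)=(r-a)\binom{d+a-1}{a}+\binom{d+a}{a}$,
\begin{align*}
h_{r,a+1}(d)-h_{r,a}(d)&=(r-a-1)\binom{d+a}{a+1}+\binom{d+a+1}{a+1}-(r-a)\binom{d+a-1}{a}-\binom{d+a}{a}\\
&=(r-a)\left(\binom{d+a}{a+1}-\binom{d+a-1}{a}\right)=(r-a)\binom{d+a-1}{a+1},
\end{align*}
using $\binom{d+a+1}{a+1}=\binom{d+a}{a+1}+\binom{d+a}{a}$ and $\binom{d+a}{a+1}=\binom{d+a-1}{a+1}+\binom{d+a-1}{a}$.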
Lemma \ref{KCL} is the key combinatorial lemma in this section.
\begin{Lem}
\label{KCL}
For $r-k+a=1$, we have
\begin{align*}
\min\{H_{r,a,b}(d_1,\ldots,d_k):\ d=d_1\leq d_2\leq \cdots\leq d_k, d_i\leq d+(i-1)\binom{d}{2}\}=H_{r,a,b}(d,\ldots,d)
\end{align*}
\end{Lem}
\begin{proof}
Suppose we are given $d= d_1\leq d_2\leq \cdots\leq d_k$. We want to show
\begin{align*}
H_{r,a,b}(d_1,\ldots,d_k)&\geq H_{r,a,b}(d,\ldots,d).
\end{align*}
We will use induction on $d_1+\cdots+d_k$, so it suffices to find $d_1',\ldots,d_k'$ with $d_1'+\cdots+d_k'<d_1+\cdots+d_k$ such that $H_{r,a,b}(d_1',\ldots,d_k')\leq H_{r,a,b}(d_1,\ldots,d_k)$. The proof is structured as an induction, but it may be more intuitive to think of it as taking $(d_1,\ldots,d_k)$ and altering the degrees bit by bit until they reach $(d,\ldots,d)$, never increasing the value of $H_{r,a,b}(d_1,\ldots,d_k)$ along the way.
Suppose the minimum of $G_{r,a,b}(d_1,\ldots,d_k)$ is achieved at $i_1<\cdots<i_{a+r-b}$. Suppose $d_{i_1}>d$.
Define
\begin{align*}
d_i'&=
\begin{cases}
\max\{d,d_i-1\} &\text{if $i\leq i_1$}\\
d_i & \text{if $i>i_1$}.
\end{cases}
\end{align*}
We claim that $H_{r,a,b}(d_1,\ldots,d_k)\geq H_{r,a,b}(d_1',\ldots,d_k')$. To see this,
\begin{align*}
H_{r,a,b}(d_1,\ldots,d_k)-H_{r,a,b}(d_1',\ldots,d_k')&=\\
G_{r,a,b}(d_1,\ldots,d_k)-G_{r,a,b}(d_1',\ldots,d_k')-G_{r,a,1}(d_1,\ldots,d_k)+G_{r,a,1}(d_1',\ldots,d_k')&\geq\\
h_{b,b-i_1+1}(d_{i_1})-h_{b,b-i_1+1}(d_{i_1}-1)-i_1&\geq\\
(i_1-1)\binom{d_{i_1}+b-i_1-1}{d_{i_1}-1}+\binom{d_{i_1}+b-i_1}{d_{i_1}}-i_1&,
\end{align*}
where we applied Lemma \ref{US2} in the last step. Since $r-k+a=1$, $i_1\leq b$. Therefore, we see the quantity above is at least $(i_1-1)+1-i_1=0$. Therefore, if $d_{i_1}>d$, we are done by induction. Otherwise, we can assume $d=d_1=\cdots=d_{i_1}$.
Suppose $j\geq 2$ is the minimum index for which $i_j-i_{j-1}>1$. If there is no such index, let $j=a+r-b+1$. First, we reduce to the case where $d_{i_1}=\cdots=d_{i_{j-1}}=d$. Let $1\leq \ell< j$ be the minimum index such that $d_{i_\ell}>d$. We can use the same trick as before. If we again define
\begin{align*}
d_i'&=
\begin{cases}
d & \text{if $i<i_\ell$}\\
d_{i_\ell}-1&\text{if $i=i_\ell$}\\
d_i & \text{if $i>i_\ell$},
\end{cases}
\end{align*}
then we again see that $H_{r,a,b}(d_1,\ldots,d_k)\geq H_{r,a,b}(d_1',\ldots,d_k')$, as
\begin{align*}
H_{r,a,b}(d_1,\ldots,d_k)-H_{r,a,b}(d_1',\ldots,d_k')&=\\
G_{r,a,b}(d_1,\ldots,d_k)-G_{r,a,b}(d_1',\ldots,d_k')-G_{r,a,1}(d_1,\ldots,d_k)+G_{r,a,1}(d_1',\ldots,d_k')&\geq\\
h_{b,b-i_\ell+\ell}(d_{i_{\ell}})-h_{b,b-i_\ell+\ell}(d_{i_{\ell}}-1)-1&=\\
(i_\ell-\ell)\binom{d_{i_\ell}+b-i_\ell+\ell-2}{d_{i_\ell}-1}+\binom{d_{i_\ell}+b-i_\ell+\ell-1}{d_{i_\ell}}-1&\geq 0.
\end{align*}
Again, we are using $i_\ell-\ell\leq b-1$. Therefore, if $d_{i_\ell}>d$, we are done by induction. Otherwise, we may now assume $d_{i_1}=\cdots=d_{i_{j-1}}=d$. Recall from above that we also have $d_i=d$ for $i\leq i_{j-1}$ and $i_2=i_1+1,\ldots,i_{j-1}=i_{j-2}+1$.
If $i_1=b$, then $i_1,\ldots,i_{a+(r-b)}$ is precisely $b,\ldots,k$, so $d_i=d$ for all $1\leq i\leq k$, in which case we are done. Therefore, we can assume $i_1<b$.
Suppose $i_1<b$, so in particular $j\leq a$ and $i_{j}>i_{j-1}+1$. Note $d_{i_{j-1}+1}>d$, because otherwise replacing $i_{j-1}$ by $i_{j-1}+1$ would decrease $\sum_{\ell=1}^{a+(r-b)}{h_{b,b-i_\ell+\ell}(d_{i_\ell})}$, contradicting the assumption that the minimum of $G_{r,a,b}(d_1,\ldots,d_k)$ is achieved at $i_1<\cdots<i_{a+(r-b)}$. Let
\begin{align*}
d_i'&=
\begin{cases}
d & \text{if $i\leq i_{j-1}+1$}\\
d_i & \text{if $i>i_{j-1}+1$}.
\end{cases}
\end{align*}
We claim $H_{r,a,b}(d_1,\ldots,d_k)\geq H_{r,a,b}(d_1',\ldots,d_k')$. To see this, let
\begin{align*}
i_\ell' &=
\begin{cases}
i_\ell+1 &\text{if $\ell<j$}\\
i_\ell &\text{if $\ell\geq j$}
\end{cases}
\end{align*}
we see
\begin{align*}
H_{r,a,b}(d_1,\ldots,d_k)-H_{r,a,b}(d_1',\ldots,d_k')&=\\
G_{r,a,b}(d_1,\ldots,d_k)-G_{r,a,b}(d_1',\ldots,d_k')-G_{r,a,1}(d_1,\ldots,d_k)+G_{r,a,1}(d_1',\ldots,d_k')&\geq\\
\sum_{\ell=1}^{a+(r-b)}{h_{b,b-i_\ell+\ell}(d_{i_\ell})}-\sum_{\ell=1}^{a+(r-b)}{h_{b,b-i_\ell'+\ell}(d'_{i_\ell'})}-(d_{i_{j-1}+1}-d)&\geq\\
(j-1)(h_{b,b-i_1+1}(d)-h_{b,b-i_1}(d))-\binom{d}{2}i_{j-1}&=\\
(j-1)i_1\binom{d+b-i_1-1}{b-i_1+1}-\binom{d}{2}(i_1+j-2)&.
\end{align*}
Since $i_1<b$, we know $b-i_1\geq 1$. So, this is at least
\begin{align*}
(j-1)i_1\binom{d}{2}-\binom{d}{2}(i_1+j-2)=\binom{d}{2}((j-1)i_1-(i_1+j-2)).
\end{align*}
Since $i_1\geq 1$ and $j>1$, $(j-1)i_1-(i_1+j-2)\geq 0$. Therefore, $H_{r,a,b}(d_1,\ldots,d_k)$ is at least $H_{r,a,b}(d_1',\ldots,d_k')$, so we are again done by induction.
\end{proof}
Now we prove Theorem \ref{slope}.
\begin{proof}
From Lemma \ref{KCL}, it suffices to consider the case $d_1=\cdots=d_k=d$. Then, we see
\begin{align*}
G_{r,a,b}(d_1,\ldots,d_k) &= -\dim(\mathbb{G}(b,r))+(a+(r-b))(db+1).
\end{align*}
This is a quadratic in $b$ with leading coefficient $1-d<0$, so it suffices to show
\begin{align*}
\begin{tabular}{c}
$H_{r,a,2}(d_1,\ldots,d_k),$\\
$H_{r,a,r}(d_1,\ldots,d_k)$
\end{tabular}
\geq
\begin{cases}
r-1 &\text{ if $k=r$}\\
r-1 + (d-2)(r-2)+d(k-r)& \text{ if $k>r$}.
\end{cases}
\end{align*}
We see
\begin{align}
H_{r,a,2}(d_1,\ldots,d_k)&=-3(r-2)+(k-1)(2d+1)-k(d+1)+2(r-1)\nonumber\\
&=d (k - 2) - r + 3=r-1+d(k-2)-(2r-4)\nonumber\\
&=r-1+(d-2)(r-2)+d(k-r).\label{Hra2}
\end{align}
and
\begin{align}
H_{r,a,r}(d_1,\ldots,d_k) &= (k-r+1)(rd+1)-k(d+1)+2(r-1)\nonumber\\
&=d (k r - k - r^2 + r) + r - 1=d(k-r)(r-1)+(r-1).\label{Hrar}
\end{align}
To finish, we need to check $H_{r,a,2}(d_1,\ldots,d_k)\geq H_{r,a,r}(d_1,\ldots,d_k)$ for $k=r$ and that $H_{r,a,2}(d_1,\ldots,d_k)\leq H_{r,a,r}(d_1,\ldots,d_k)$ for $k>r$. We calculate
\begin{align}
H_{r,a,r}(d_1,\ldots,d_k)-H_{r,a,2}(d_1,\ldots,d_k)&=d(k-r)(r-2)-(d-2)(r-2)\nonumber\\
&=(r-2)(d(k-r)-(d-2)), \nonumber
\end{align}
and $d(k-r)-(d-2)$ is positive if $k>r$ and nonpositive if $k=r$ and $d\geq 2$.
\end{proof}
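As a quick numerical check of \eqref{Hra2} and \eqref{Hrar}: for $r=3$ and $d_1=\cdots=d_k=2$, the case $k=3$ (so $a=1$) gives $H_{3,1,2}=H_{3,1,3}=r-1=2$, while the case $k=4$ (so $a=2$) gives $H_{3,2,2}=4$ and $H_{3,2,3}=6$, so the bound of Theorem \ref{slope} is $\min(4,6)=4=r-1+(d-2)(r-2)+d(k-r)$.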
\section{Application: Lines in a hypersurface through a point}
Let $\mathcal{F}_{r,d}\rightarrow W_{r,d}$ be the universal hypersurface. Let $F_{1}(\mathcal{F}_{r,d}/W_{r,d})$ denote the lines on the universal hypersurface, or the relative Hilbert scheme of lines in the family $\mathcal{F}_{r,d}\rightarrow W_{r,d}$. The universal family over $F_{1}(\mathcal{F}_{r,d}/W_{r,d})$ is a $\mathbb{P}^1$-bundle $F_{0,1}(\mathcal{F}_{r,d}/W_{r,d})\rightarrow F_{1}(\mathcal{F}_{r,d}/W_{r,d})$ corresponding to a choice of a line and a point on that line.
There is an evaluation map $F_{0,1}(\mathcal{F}_{r,d}/W_{r,d})\rightarrow \mathcal{F}_{r,d}$, and the expected fiber dimension is $r-1-d$. We are interested in when the fiber dimension jumps. In the statement of Theorem \ref{EVM} below, we will need to refer to \emph{Eckardt points}. In the notation of \cite{Eckardt}, the $0$-Eckardt points of a smooth variety $X\subset\mathbb{P}^r$ are the points $x\in X$ at which the second fundamental form vanishes.
More concretely, if $X=V(F)$ for a nonzero homogeneous form $F$ of degree $d$ and $x$ is the origin in an affine chart of $\mathbb{P}^r$, and we expand $F$ around $x$ as $F=F_1+\cdots+F_d$, where $F_i$ is the degree $i$ part of $F$ after dehomogenization, then $x$ is an Eckardt point if and only if $F_1$ divides $F_2$. Therefore, we make the following definition.
\begin{Def}
Let $F(X_0,\ldots,X_r)$ be a homogeneous form of degree $d$ that vanishes at $p=[0:\cdots:0:1]\in \mathbb{P}^r$, and write
\begin{align*}
\frac{1}{X_r^d}F(X_0,\ldots,X_r)&= 0+F_1(\frac{X_0}{X_r},\ldots,\frac{X_{r-1}}{X_r})+\cdots+F_d(\frac{X_0}{X_r},\ldots,\frac{X_{r-1}}{X_r}),
\end{align*}
where $F_i$ is homogeneous of degree $i$. Then $p$ is an Eckardt point of $F$ if $F_1$ divides $F_2$. For an arbitrary point $p\in V(F)$, we take a $PGL_{r+1}$ translate $\phi: \mathbb{P}^r\to \mathbb{P}^r$ with $\phi([0:\cdots:0:1])=p$ and apply the definition above to $\phi^{*}F$.
\end{Def}
In particular, every point is an Eckardt point of the zero form. Similarly, we say that a homogeneous form $F$ on $\mathbb{P}^r$ is smooth if $V(F,\partial_{X_0}F,\ldots,\partial_{X_r}F)$ is empty.
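For example, if $F=X_0X_r^{d-1}+X_0X_1X_r^{d-2}+G$ with $G$ of degree at least $3$ in $X_0,\ldots,X_{r-1}$, then dehomogenizing at $p=[0:\cdots:0:1]$ gives $F_1=\frac{X_0}{X_r}$ and $F_2=\frac{X_0}{X_r}\cdot\frac{X_1}{X_r}$, so $F_1$ divides $F_2$ and $p$ is an Eckardt point of $F$. Classically, the Eckardt points of a smooth cubic surface in $\mathbb{P}^3$ are the points at which three of the $27$ lines meet.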
\begin{Thm}
\label{EVM}
Let $U\subset W_{r,d}$ be the open subset of smooth homogeneous forms $F$ of degree $d\geq 3$ in $\mathbb{P}^r$ for $r\geq 2$. Let $Z\subset U$ be the closed subset of homogeneous forms $F$ for which the evaluation map $F_{0,1}(X)\rightarrow X$ has a fiber of dimension greater than $r-1-d$, where $X=V(F)$.
Then, $Z$ has a unique component of maximal dimension, except in the case $d=4$, $r=5$, where there are two. The component(s) of $Z$ of maximal dimension are as follows:
\begin{enumerate}[(1)]
\item
the forms $F$ with an Eckardt point, for $d\leq r-2$, or for $d=r-1$ and $r\leq 5$;
\item
the forms $F$ vanishing on a 2-plane, for $d=r-1$ and $r\geq 5$;
\item
the forms $F$ vanishing on a line, for $d\geq r$.
\end{enumerate}
\end{Thm}
\subsection{Application of filtration by span}
To prove Theorem \ref{EVM}, we first need to understand what happens on the universal hypersurface, which will be given by Proposition \ref{UHS} in this subsection.
\begin{Prop}
\label{2r1}
The unique component of $\Phi^{\mathbb{P}^r,1}_{2,\ldots,r+1}(\mathbb{P}^r)$ of maximal dimension is the component parameterizing $r$-tuples of hypersurfaces whose common vanishing locus contains a line, and that component is of codimension $\frac{r^2+r+4}{2}$.
For $r\geq 2$, the unique component of second largest dimension of $\Phi^{\mathbb{P}^r,1}_{2,\ldots,r+1}(\mathbb{P}^r)$ is the locus of tuples of hypersurfaces $(F_1,\ldots,F_r)$ where $F_1$ is identically zero, which has codimension $\binom{r+2}{2}$.
\end{Prop}
\begin{proof}
The component of $\Phi^{\mathbb{P}^r,1}_{2,\ldots,r+1}(\mathbb{P}^r)$ of maximal dimension can be identified by applying Theorem \ref{slope}. To find the second largest component in terms of dimensions, we see that Theorem \ref{slope} also says the difference between the dimensions of the largest and second largest components of $\Phi^{\mathbb{P}^r,1}_{2,\ldots,r+1}(\mathbb{P}^r)$ is at least $r-1$. Since
\begin{align*}
\binom{r+2}{2}-\frac{r^2+r+4}{2}=r-1,
\end{align*}
we see that the locus of tuples of hypersurfaces $(F_1,\ldots,F_r)$ where $F_1$ is identically zero is a component of second largest dimension. To finish, we need to show uniqueness. Let $Z\subset \Phi^{r,1}_{2,\ldots,r+1}(\mathbb{P}^r)$ be a component of second highest dimension. From the proof of Theorem \ref{slope}, we know that if
\begin{align*}
\dim(Z) = \dim(Z\cap \Phi^{\mathbb{P}^r,1}_{2,\ldots,r+1}(\mathbb{P}^r,{\rm Span}(b))),
\end{align*}
then $b=2$ or $b=r$. More precisely, Lemma \ref{ICA} says
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,1}_{2,2,\ldots,2}(\mathbb{P}^r,{\rm Span}(b)))&\geq G_{r,1,b}(2,2,\ldots,2)\\
{\rm codim}(\Phi^{\mathbb{P}^r,1}_{2,3,\ldots,r+1}(\mathbb{P}^r,{\rm Span}(b)))&\geq G_{r,1,b}(2,3,\ldots,r+1)\\
{\rm codim}(\Phi^{\mathbb{P}^r,1}_{2,2,\ldots,2}(\mathbb{P}^r,{\rm Span}(1)))&= G_{r,1,1}(2,2,\ldots,2)\\
{\rm codim}(\Phi^{\mathbb{P}^r,1}_{2,3,\ldots,r+1}(\mathbb{P}^r,{\rm Span}(1)))&= G_{r,1,1}(2,3,\ldots,r+1).
\end{align*}
Lemma \ref{KCL} says that
\begin{align*}
H_{r,1,b}(2,3,\ldots,r+1)\geq H_{r,1,b}(2,2,\ldots,2),
\end{align*}
so by definition of $H_{r,1,b}$ (Definition \ref{HrabD})
\begin{align*}
G_{r,1,b}(2,3,\ldots,r+1)-G_{r,1,1}(2,3,\ldots,r+1)&\geq G_{r,1,b}(2,2,\ldots,2)-G_{r,1,1}(2,2,\ldots,2).
\end{align*}
Since
\begin{align*}
G_{r,1,b}(2,2,\ldots,2) &= -\dim(\mathbb{G}(b,r))+(1+(r-b))(2b+1)
\end{align*}
is quadratic in $b$ with negative leading coefficient, we see that $G_{r,1,b}(2,2,\ldots,2)-G_{r,1,1}(2,2,\ldots,2)=H_{r,1,b}(2,2,\ldots,2)$ is minimized over $b\in \{2,\ldots,r\}$ when $b=2$ or $b=r$. Also, \eqref{Hra2} and \eqref{Hrar} from the proof of Theorem \ref{slope} yields
\begin{align*}
H_{r,1,2}(2,\ldots,2)=H_{r,1,r}(2,\ldots,2)=r-1.
\end{align*}
Therefore, the only two values of $b$ for which $H_{r,1,b}(2,3,\ldots,r+1)$ could possibly equal $r-1$ are $b=2$ and $b=r$.
If $r>2$, we can rule out the case $b=2$ directly, as Lemma \ref{ICA} implies ${\rm codim}( \Phi^{r,1}_{2,\ldots,r+1}(\mathbb{P}^r,{\rm Span}(2)))\geq G_{r,1,2}(2,\ldots,r+1)$, and it is easy to check the minimum in $G_{r,1,2}(2,\ldots,r+1)$ is achieved at the choice of indices $1<3<4<\cdots<r$, so
\begin{align*}
{\rm codim}( \Phi^{\mathbb{P}^r,1}_{2,\ldots,r+1}(\mathbb{P}^r,{\rm Span}(2)))\geq 6+ \sum_{i=3}^{r}(2i+3)-3(r-2)=r^2+r,
\end{align*}
which is greater than $\binom{r+2}{2}$ for $r>2$. To finish, it suffices to show that
which is greater than $\binom{r+2}{2}$ for $r>2$. To finish, it suffices to show that
\begin{align*}
h_{r,r-i+1}(1+i)=(i-1)\binom{r+1}{i}+\binom{r+2}{i+1}
\end{align*}
achieves its unique minimum at $i=1$ over $1\leq i\leq r$. For $2\leq i\leq r-1$, the second term is at least $\binom{r+2}{2}$ and the first term is positive, so it suffices to compare the cases $i=1$ and $i=r$, and we see
\begin{align*}
(r-1)(r+1)+(r+2)-\binom{r+2}{2}=\frac{1}{2}(r^2-r)
\end{align*}
which is greater than zero when $r>1$.
\end{proof}
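As a check of Proposition \ref{2r1} in the smallest case $r=2$: pairs $(F_1,F_2)$ of a conic and a cubic in $\mathbb{P}^2$ containing a common line form a locus of codimension $3+4-\dim(\mathbb{G}(1,2))=5=\frac{r^2+r+4}{2}$, while the locus with $F_1=0$ has codimension $\dim W_{2,2}=\binom{4}{2}=6=5+(r-1)$.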
Proposition \ref{2r} is an example where we apply Theorem \ref{FML} to a case where the hypersurfaces all contain the same surface rather than a curve. This will be easier than Proposition \ref{2r1}, as we can make cruder approximations.
\begin{Prop}
\label{2r}
The unique component of $\Phi^{\mathbb{P}^r,1}_{2,\ldots,r}(\mathbb{P}^r)$ of largest dimension is the locus of tuples $(F_1,\ldots,F_{r-1})$ of degrees $(2,\ldots, r)$ such that $F_1=0$.
\end{Prop}
\begin{proof}
Applying Lemma \ref{E1} and Lemma \ref{ICA}, we find
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,1}_{2,\ldots,r}(\mathbb{P}^r,{\rm Span}(b)))&\geq \sum_{i=b+2}^{r+2}{\binom{i}{2}}-(b+1)(r-b)\\
&\geq \binom{r+3}{3}-\binom{b+2}{3}-(b+1)(r-b),
\end{align*}
and equality holds when $b=r$. Let $A(b,r):=\binom{r+3}{3}-\binom{b+2}{3}-(b+1)(r-b)$. Taking the difference $A(b,r)-A(r,r)$, we get
\begin{align*}
-\frac{1}{6} (b - r) (b^2 + b r - 3 b + r^2 + 3 r - 4).
\end{align*}
Since $b^2 + b r - 3 b + r^2 + 3 r - 4>0$ for $b<r$, this difference is positive whenever $b<r$, so we only have to deal with the case $b=r$. In this case, we see that $h_{r,r-i+1}(1+i)$ achieves its unique minimum at $i=1$ over $1\leq i\leq r-1$, as in the proof of Proposition \ref{2r1}.
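For a quick sanity check of this factorization, take $r=3$ and $b=1$: then
\begin{align*}
A(1,3)=\binom{6}{3}-\binom{3}{3}-2\cdot 2=15,\qquad A(3,3)=\binom{6}{3}-\binom{5}{3}-0=10,
\end{align*}
and indeed $A(1,3)-A(3,3)=5=-\frac{1}{6}(1-3)(1+3-3+9+9-4)$.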
\end{proof}
\begin{Lem}
\label{E1}
We have
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,a}_{2,\ldots,r+a-1}(\mathbb{P}^r,{\rm Span}(r)))\geq \binom{r+2}{2}+\binom{r+3}{2}+\cdots+\binom{r+a+1}{2}.
\end{align*}
\end{Lem}
\begin{proof}
We want to apply Lemma \ref{KIA}. However, instead of trying to determine the minimum exactly, we can crudely bound
\begin{align*}
F_{r,a}(2,\ldots,r+a-1) =\min\Bigg\{&\sum_{j=1}^{a}(i_j-j)\binom{(i_j+1)+(r-i_j+j)-1}{i_j}+\binom{(i_j+1)+(r-i_j+j)}{i_j+1}\\
&: S=\{i_1,i_2,\ldots,i_a\}\subset \{1,\ldots,r+a-2\},\ i_1<\cdots<i_a\Bigg\}
\end{align*}
from below (the first summand is nonnegative since $i_j\geq j$) by
\begin{align*}
\min\Bigg\{&\sum_{j=1}^{a}\binom{(i_j+1)+(r-i_j+j)}{i_j+1}: S=\{i_1,i_2,\ldots,i_a\}\subset \{1,\ldots,r+a-2\},\ i_1<\cdots<i_a\Bigg\}\\
&=\min\Bigg\{\sum_{j=1}^{a}\binom{1+r+j}{i_j+1}: S=\{i_1,i_2,\ldots,i_a\}\subset \{1,\ldots,r+a-2\},\ i_1<\cdots<i_a\Bigg\}.
\end{align*}
Since $j\leq i_j\leq (r+a-2)-(a-j)=r+j-2$, we have $2\leq i_j+1\leq (1+r+j)-2$, so $\binom{1+r+j}{i_j+1}\geq \binom{1+r+j}{2}$. Therefore, the sum is bounded below by
\begin{align*}
\binom{r+2}{2}+\binom{r+3}{2}+\cdots+\binom{r+a+1}{2}.
\end{align*}
\end{proof}
\begin{Prop}
\label{UHS}
Suppose $r\geq d+1$ and $d\geq 2$. Let $\mathcal{F}_{r,d}\rightarrow W_{r,d}$ be the universal hypersurface, where $W_{r,d}\cong \mathbb{A}^{\binom{r+d}{r}}$ parameterizes hypersurfaces of degree $d$ in $\mathbb{P}^r$. Let $\mathcal{Z}\subset \mathcal{F}_{r,d}$ be the locus where the fiber of $F_{0,1}(\mathcal{F}_{r,d}/W_{r,d})\rightarrow \mathcal{F}_{r,d}$ has dimension greater than $r-1-d$. The unique component of largest dimension of $\mathcal{Z}$ is the locus of points $(X,p)\in \mathcal{F}_{r,d}\subset W_{r,d}\times\mathbb{P}^r$ where $p$ is a singular point of $X$. More importantly,
\begin{enumerate}
\item
for $d<r-1$, the unique component of second largest dimension is the points $(X,p)$ where $p$ is an Eckardt point of $X$
\item
for $d=r-1$, the unique components of second and third largest dimension are
\begin{enumerate}
\item
the points $(X,p)$ where $X$ contains a plane through $p$
\item
the points $(X,p)$ where $p$ is an Eckardt point of $X$
\end{enumerate}
\end{enumerate}
\end{Prop}
\begin{proof}
Fix $p\in \mathbb{P}^r$ and note the fiber over $p$ of the projection $\pi: \mathcal{F}_{r,d}\rightarrow \mathbb{P}^r$ is a hyperplane $\pi^{-1}(p)\subset W_{r,d}$. Let $Z_p:=\pi^{-1}(p)\cap \mathcal{Z}$, which is a closed subset of $W_{r,d}$. Given a hypersurface $X\subset\mathbb{P}^r$ of degree $d$ through $p$ defined by a homogeneous polynomial $F$ of degree $d$, we can take an affine chart $\mathbb{A}^r\subset\mathbb{P}^r$ in which $p$ is the origin, and expand $F=F_1+\cdots+F_d$ around $p$. Here, $F_i$ is the degree $i$ part of $F$ once we restrict to $\mathbb{A}^r$. Lines through $p$ in $\mathbb{P}^r$ are parameterized by $\mathbb{P}^{r-1}$, and the lines through $p$ in $X$ are given by $\{F_1=\cdots=F_d=0\}$ in $\mathbb{P}^{r-1}$. See the proof of Theorem 2.1 in \cite{LowDegree} for more details and an approach that behaves better as we vary $p$.
Since specifying the Taylor expansion $(F_1,\ldots,F_d)$ of $F$ around a point $p$ is equivalent to specifying $F$, $Z_p\cong \Phi^{r-1,1}_{1,\ldots,d}(\mathbb{P}^{r-1})$. The locus where $F_1$ is identically zero corresponds to a choice of hypersurface $X$ through $p$ that is singular at $p$, and this happens in codimension $r$. If we assume $F_1$ is not zero, then we want to restrict to the hyperplane cut out by $F_1$. Take the open subset $U\subset \Phi^{r-1,1}_{1,\ldots,d}(\mathbb{P}^{r-1})$ of tuples $(F_1,\ldots,F_d)$ where $F_1\neq 0$. There is a map $U\rightarrow (\mathbb{P}^{r-1})^{*}$ given by $(F_1,\ldots,F_d)$ mapping to $F_1$. Each fiber is isomorphic to
\begin{align*}
\Phi^{r-2,1}_{2,\ldots,d}(\mathbb{P}^{r-1})\times \prod_{i=2}^{d}{W_{r-1,i}/W_{r-2,i}}.
\end{align*}
If $d<r-1$, by Proposition \ref{2r}, we find the unique component of largest dimension of $\Phi^{r-2,1}_{2,\ldots,d}(\mathbb{P}^{r-1})$ is the locus where the quadric is identically zero, which corresponds to $F_2$ restricting to zero on $\{F_1=0\}$; equivalently, to $p$ being an Eckardt point of $X$.
If $d=r-1$, by Proposition \ref{2r1}, we find the unique component of largest dimension of $\Phi^{r-2,1}_{2,\ldots,d}(\mathbb{P}^{r-1})$ is when $(F_2,\ldots,F_d)$ all contain a line lying in $F_1$, which is equivalent to $X$ containing a plane through $p$. By Proposition \ref{2r1}, the unique component of second largest dimension of $\Phi^{r-2,1}_{2,\ldots,d}(\mathbb{P}^{r-1})$ is when the quadric is zero, which corresponds to the case where $p$ is an Eckardt point of $X$.
\end{proof}
\subsection{Facts about general hypersurfaces}
To derive Theorem \ref{EVM} from Proposition \ref{UHS}, we require facts about hypersurfaces that are tedious but easy to check. In characteristic 0, many of these statements are immediate, as smooth hypersurfaces all have finitely many Eckardt points (see the discussion under Corollary 2.2 in \cite{cubic}) and the Fermat hypersurface $\{X_0^d+\cdots+X_r^d=0\}\subset \mathbb{P}^r$ contains Eckardt points and planes, and is smooth when the characteristic does not divide $d$.
\begin{Lem}
\label{BP1}
The following hold independently of the characteristic of our algebraically closed base field $K$.
\begin{enumerate}
\item
There exists a smooth hypersurface of degree $d>1$ in $\mathbb{P}^r$ containing a 2-plane if and only if $r\geq 5$.
\item
For $r\geq 2$ and $d\geq 2$, there exists a smooth hypersurface $X$ of degree $d$ in $\mathbb{P}^r$ and a hyperplane $H$ such that $X\cap H$ is a cone in $H\cong \mathbb{P}^{r-1}$.
\end{enumerate}
\end{Lem}
\begin{proof}
We know that if a smooth hypersurface $X\subset\mathbb{P}^r$ of degree $d>1$ contains a linear space $\Lambda$ of dimension $m$, then $r\geq 2m+1$; see, for example, Proposition 1 in the Appendix of \cite{APP}.
\begin{comment}
For example, suppose we wanted to prove (1). Let $P\subset \mathbb{P}^r$ be the plane $\{x_3=\cdots=x_r=0\}$. Let $V\subset W_{r,d}$ be the linear subspace of hypersurfaces containing $P$. Let $\mathcal{X}\subset V\times\mathbb{P}^r$ be the incidence correspondence of pairs $(X,p)$ where $X$ is a hypersurface containing $P$ and singular at $p$. Here, our convention is that the zero homogenous form is singular at every point.
Consider the fiber of the map $\pi: \mathcal{X}\rightarrow \mathbb{P}^r$. If $p\notin P$, then we can assume $p=[0:\cdots:0:1]$ and $\pi^{-1}(p)$ is codimension $r+1$ in $V$. If $p\in P$, then we can assume $p=[1:0:\cdots:0]$ and $\pi^{-1}(p)$ is codimension $r-2$ in $V$. Therefore, $\dim(\mathcal{X})=\max\{\dim(V)-1,\dim(V)-r+4\}$, so when $r\geq 5$, $\mathcal{X}\rightarrow V$ cannot be surjective.
\end{comment}
To prove (2), let $V\subset W_{r,d}$ be the linear subspace of forms whose expansion around $[1:0:\cdots:0]$ in the affine chart $X_{0}\neq 0$ is of the form $f_0+f_1+\cdots+f_d$, where $f_0=0$ and $x_1\mid f_i$ for $i=1,2,\ldots,d-1$. Here, $x_1:=\frac{X_1}{X_0}$ is one of the coordinates after dehomogenization. Let $\mathcal{X}\subset V\times \mathbb{P}^r$ be the incidence correspondence of pairs $(F,p)$, where $F\in V$ and $p\in \{F=0\}$ is a singular point. Here, our convention is that the zero homogeneous form is singular at every point.
Consider the fiber of the map $\pi: \mathcal{X}\rightarrow \mathbb{P}^r$. If $p\notin \{X_1=0\}$, then we can assume that $p=[0:1:0:\cdots:0]$. We want to check that the $r+1$ conditions that being singular at $p$ imposes on $W_{r,d}$ also impose $r+1$ independent conditions on $V$. If we let $I$ denote a multi-index, a general element in $W_{r,d}$ can be written as $\sum_{I}c_I X^I$, and $V$ is cut out by the conditions that $c_I=0$ for all monomials $X^I$ divisible by $X_0$ but not $X_1$. Being singular at $p$ imposes the conditions that $c_I=0$ whenever $X_1^{d-1}$ divides $X^I$. Since $d>1$, these impose $r+1$ independent conditions on $V$.
Suppose now $p\in \{X_1=0\}$ but $p\neq [1:0:\cdots:0]$. Then we can assume $p=[0:\cdots:0:1]$, in which case being singular at $p$ imposes $r$ conditions on $V$. If $p=[1:0:\cdots:0]$, then being singular at $p$ imposes 1 condition on $V$. Combining the three cases, we see that $\dim(\mathcal{X})=\max\{\dim(V)-1,\dim(V)-1, \dim(V)-1\}=\dim(V)-1$, so the projection $\mathcal{X}\rightarrow V$ cannot be surjective.
\end{proof}
\begin{Lem}
\label{BP2}
The following hold independently of the characteristic of our algebraically closed base field $K$.
\begin{enumerate}
\item
If $\binom{d+2}{2}>3(r-2)$, then a general hypersurface $X\subset\mathbb{P}^r$ of degree $d$ does not contain a 2-plane, and a general hypersurface containing a 2-plane contains exactly one 2-plane.
\item
If $d\geq 3$ and $r\geq 3$, then a general hypersurface $X\subset\mathbb{P}^r$ of degree $d$ containing an Eckardt point contains only one Eckardt point.
\end{enumerate}
\end{Lem}
\begin{proof}
The proof strategy is similar to the proof of Lemma \ref{BP1}. For example, suppose we wanted to prove (2). We can consider the incidence correspondence $\mathcal{I}\subset W_{r,d}\times(\mathbb{P}^r)^{*}\times \mathbb{P}^r$ consisting of triples $(F,H,p)$ such that $p\in H$ and $F$ restricted to $H$ vanishes at $p$ up to third order. By considering the projection to $(\mathbb{P}^r)^{*}\times \mathbb{P}^r$, we see
\begin{align*}
\dim(\mathcal{I})=\dim(W_{r,d})-\binom{r+1}{2}+(2r-1).
\end{align*}
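To unpack this count: the pair $(H,p)$ with $p\in H$ moves in a $(2r-1)$-dimensional family, while vanishing up to third order at $p$ inside $H\cong\mathbb{P}^{r-1}$ imposes
\begin{align*}
1+(r-1)+\binom{r}{2}=\binom{r+1}{2}
\end{align*}
linear conditions on $F$, one for each monomial of degree at most $2$ in local coordinates on $H$ centered at $p$.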
We can also consider the incidence correspondence $\mathcal{J}\subset W_{r,d}\times(\mathbb{P}^r)^{*}\times \mathbb{P}^r\times (\mathbb{P}^r)^{*} \times \mathbb{P}^r$ consisting of tuples $(F,H_1,p_1,H_2,p_2)$ such that $p_1\neq p_2$, $p_i\in H_i$, and $F$ restricted to $H_i$ vanishes at $p_i$ up to third order.
To see the projection $\mathcal{J}\rightarrow \mathcal{I}$ is not surjective, it suffices to show $\dim(\mathcal{J})<\dim(\mathcal{I})$. This type of analysis is also described at the beginning of the proof of Theorem 1.3 in \cite{Eckardt}. They assume characteristic zero throughout the paper, but the assumption on characteristic is not used here. The idea is that the image of $\mathcal{J}$ in $(\mathbb{P}^r)^{*}\times \mathbb{P}^r\times (\mathbb{P}^r)^{*} \times \mathbb{P}^r$ decomposes into the following $\mathbb{P}GL(r+1)$-orbits:
\begin{enumerate}
\item $p_1\notin H_2, p_2\notin H_1$
\item $p_1\in H_2, p_2\notin H_1$ (and similarly the locus obtained by interchanging the indices 1 and 2)
\item $p_1\in H_2, p_2\in H_1$ but $H_1\neq H_2$
\item $H_1=H_2$, $p_1\neq p_2$.
\end{enumerate}
We will do case (3), because it seemed the most worrisome to us. The proofs of the other cases are similar. Without loss of generality, we can assume $p_1=[1:0:\cdots:0]$, $H_1=\{X_1=0\}$, $p_2=[0:0:1:0:\cdots:0]$, $H_2=\{X_3=0\}$. Then, the fiber of $\mathcal{J}$ over $(p_1,H_1,p_2,H_2)$ consists of the polynomials $\sum_{I}c_I X^I$ such that if
\begin{enumerate}[(a)]
\item
$X_0^{d-2}$ divides $X^I$ but $X_1$ does not or
\item
$X_2^{d-2}$ divides $X^I$ but $X_3$ does not
\end{enumerate}
then $c_I=0$. Each case gives $\binom{r+1}{2}$ conditions, but there might be overlapping conditions. The number of overlapping conditions is maximized for $d=3$, where it is $r-1$. So the locus of points $(F,p_1,H_1,p_2,H_2)$ in $\mathcal{J}$ where $(p_1,H_1,p_2,H_2)$ satisfy the conditions of case (3) has dimension
\begin{align*}
\dim(W_{r,d})-\left(2\binom{r+1}{2}-(r-1)\right)+\left(2r+2(r-2)\right).
\end{align*}
Subtracting this from $\dim(\mathcal{I})$ yields $\binom{r}{2}-2r+4=\frac{1}{2}(r^2-5r+8)$, which is positive for $r\geq 2$.
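For the reader's convenience, after canceling $\dim(W_{r,d})$, the subtraction unwinds as
\begin{align*}
\left(-\binom{r+1}{2}+(2r-1)\right)-\left(-2\binom{r+1}{2}+(r-1)+4r-4\right)=\binom{r+1}{2}-3r+4=\binom{r}{2}-2r+4.
\end{align*}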
The condition that $r\geq 3$ comes from the part of $\mathcal{J}$ lying over case (1), and it is clearly necessary, as the case $r=2$ corresponds to plane curves, where Eckardt points are flex points.
\end{proof}
\subsection{Completion of proof of Theorem \ref{EVM}}
Now we apply Proposition \ref{UHS} and Lemmas \ref{BP1} and \ref{BP2} to prove Theorem \ref{EVM}.
\begin{proof}
If we let $\mathcal{Z}\subset \mathcal{F}_{r,d}$ be the locus where the fiber of $F_{0,1}(\mathcal{F}_{r,d}/W_{r,d})\rightarrow \mathcal{F}_{r,d}$ has dimension greater than $r-d-1$ and let $\pi: \mathcal{F}_{r,d}\rightarrow W_{r,d}$ be the projection, then $Z=\pi(\mathcal{Z})\cap U$. The case $d\geq r$ is trivial, as $Z$ is then precisely the locus of hypersurfaces containing a line, so we only consider the case $d<r$.
Our strategy will be as follows:
\begin{enumerate}
\item
Use Proposition \ref{UHS} to find the largest component(s) of $\mathcal{Z}$
\item
Use Lemmas \ref{BP1} and \ref{BP2} to find their generic fiber dimensions under the map $\mathcal{Z}\to W_{r,d}$.
\end{enumerate}
Let $\mathcal{F}_{r,d}^{\circ}\subset \mathcal{F}_{r,d}$ denote the open subset of pairs $(F,p)$ where $p\notin V(F,\partial_{X_0}F,\ldots,\partial_{X_r}F)$. If $d<r-1$, then part (1) of Proposition \ref{UHS} shows the unique component $\mathcal{C}$ of largest dimension of $\mathcal{F}_{r,d}^{\circ}\cap \mathcal{Z}$ consists of pairs $(X,p)$ where $p$ is an Eckardt point of $X$. Part (2) of Lemma \ref{BP2} shows $\mathcal{C}$ is generically injective onto its image under $\pi$. Part (2) of Lemma \ref{BP1} shows $\pi(\mathcal{C})\cap U$ is nonempty, so $\pi(\mathcal{C})\cap U$ is also the unique component of largest dimension of $Z$.
If $d=r-1$, then part (2) of Proposition \ref{UHS} shows the unique largest and second largest components $\mathcal{C}_1$ and $\mathcal{C}_2$ of $\mathcal{F}_{r,d}^{\circ}\cap \mathcal{Z}$ in terms of dimensions are respectively the points $(X,p)$ such that $X$ contains a 2-plane containing $p$ and the points $(X,p)$ where $p$ is an Eckardt point of $X$. Furthermore, we can directly compute $\dim(\mathcal{C}_1)-\dim(\mathcal{C}_2)=r-3$. As before, $\mathcal{C}_2$ is generically injective onto its image under $\pi$ and $\pi(\mathcal{C}_2)\cap U$ is nonempty. Part (1) of Lemma \ref{BP2} shows $\mathcal{C}_1$ maps onto its image with 2-dimensional fibers and part (1) of Lemma \ref{BP1} shows $\pi(\mathcal{C}_1)\cap U$ is nonempty for $r\geq 5$.
\end{proof}
\section{Application: Hypersurfaces singular along a curve}
\label{singh}
We want to show that, among the hypersurfaces with positive-dimensional singular locus, the unique component of largest dimension consists of the hypersurfaces singular along a line. To prove this in characteristic 0, it suffices to prove it in characteristic $p$ for one choice of $p$, by an application of upper semicontinuity. We will choose $p=2$ because it gives us the best bounds.
The obstacle to directly applying our general argument to the problem at hand is that the partial derivatives $\partial_{X_i}$ of a degree $\ell$ form $F$ do not vary independently as we vary $F$ in $W_{r,\ell}$. However, the key trick to resolve this problem is given in \cite{Poonen} and used in \cite{Kaloyan}. Let $K$ have characteristic 2 and, for simplicity, suppose $\ell=2d+1$ is odd. Then, when choosing our degree $\ell$ form $F$, we can add independent fudge factors $G_0,\ldots,G_r$, which are forms of degree $d$, and take the sum
\begin{align*}
F+X_0 G_0^2+\cdots+X_r G_r^2,
\end{align*}
so $\partial_{X_i}(F+X_0 G_0^2+\cdots+X_r G_r^2)=\partial_{X_i}F+G_i^2$. At least optically, it looks like the partial derivatives are more independent, and we will reproduce the same argument Slavov used in \cite{Kaloyan} to reduce the problem of when $F+X_0 G_0^2+\cdots+X_r G_r^2$ is singular along a curve to the problem of when the fudge factors $G_i$ all contain the same curve. As a technical remark, we need to consider the case of hypersurfaces singular along a rational normal curve separately because the bounds given by Theorem \ref{FML} were slightly too weak. Once we remove the locus of all the $G_i$'s containing a rational normal curve, we can repeat the proof of Theorem \ref{FML} to get slightly better bounds that will suffice.
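To spell out the derivative computation behind this identity in characteristic 2: since $\partial_{X_i}(G_j^2)=2G_j\,\partial_{X_i}G_j=0$, the product rule gives
\begin{align*}
\partial_{X_i}\left(F+\sum_{j=0}^{r}X_j G_j^2\right)=\partial_{X_i}F+\sum_{j=0}^{r}\left(\delta_{ij}G_j^2+X_j\,\partial_{X_i}(G_j^2)\right)=\partial_{X_i}F+G_i^2,
\end{align*}
where $\delta_{ij}$ is the Kronecker delta.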
\subsection{Case of plane curves}
The case $r=2$ of plane curves is easy because everything can be computed explicitly. In the proofs of Theorems \ref{oddl} and \ref{evenl}, our bounds will improve with increasing $r$, so it is helpful to be able to assume $r\geq 3$. Also, Claim \ref{ST2} below requires $r\geq 3$. It is an easy dimension computation to see:
\begin{Prop}
\label{psing}
For curves in $\mathbb{P}^2$, $\dim(\mathcal{S}_{1,K}^1)>\dim(\mathcal{S}_{1,K}\backslash \mathcal{S}_{1,K}^1)$ for all fields $K$ and all degrees $\ell$.
\end{Prop}
\begin{comment}
\begin{proof}
We can assume $\ell\geq 4$ because the only curves of degree less than 4 doubled along a curve must be doubled along a line. We will explicitly compute the dimension of degree $\ell$ curves singular along a degree $d$ curve for $1\leq d\leq \frac{\ell}{2}$. For each $d$, let $W_d=H^{0}(\mathbb{P}^2,\mathscr{O}_{\mathbb{P}^2}(d))$. Consider the incidence correspondence $X_d\subset W_\ell\times \mathbb{P}W_d$ consisting of pairs $(F,G)$ such that $G^2$ divides $F$. Since the projection $X_d\rightarrow W_\ell$ is finite, it suffices to find $\dim(X_d)$ to find the dimension of the curves with a doubled component of degree $d$.
Consider the second projection $X_d\rightarrow \mathbb{P}W_d$, we find $\dim(X_d)$ is
\begin{align*}
\binom{d+2}{2}+\binom{\ell-2d+2}{2}-1,
\end{align*}
which is quadratic in $d$ positive coefficient in front of $d^2$, so it suffices to check the cases where $d=1$ and $d=\lfloor\frac{\ell}{2}\rfloor$. If $\ell$ is even, then we see
\begin{align*}
\binom{1+2}{2}+\binom{\ell}{2}>\binom{\frac{\ell}{2}+2}{2}+\binom{0+2}{2},
\end{align*}
as $\binom{\frac{\ell}{2}+2}{2}\leq \binom{\ell}{2}$, $\binom{0+2}{2}<\binom{1+2}{2}$. If $\ell$ is odd, then we see
\begin{align*}
\binom{1+2}{2}+\binom{\ell}{2}>\binom{\frac{\ell-1}{2}+2}{2}+\binom{1+2}{2},
\end{align*}
as $\binom{\frac{\ell-1}{2}+2}{2}< \binom{\ell}{2}$ and $\binom{1+2}{2}=\binom{1+2}{2}$.
\end{proof}
\end{comment}
\subsection{Reduction to characteristic 2}
\label{SS61}
We will introduce an incidence correspondence over ${\rm Spec}(\mathbb{Z})$ as in \cite[Section 3.1]{Kaloyan}. Fix a degree $\ell\geq 3$ and dimension $r\geq 2$. Let $W_{\mathbb{Z}}:=\mathbb{Z}[X_0,\ldots,X_r]_{\ell}$. The notation $W_{r,\ell,\mathbb{Z}}$ would be more consistent with Definition \ref{WDEF}, but we drop $\ell$ and $r$ from the notation because they are fixed. Over the ${\rm Spec}(\mathbb{Z})$-scheme $W_{\mathbb{Z}}\cong \mathbb{A}^{\binom{r+\ell}{\ell}}_{\mathbb{Z}}$, we can construct a scheme $\mathscr{S} \subset W_{\mathbb{Z}}\times_{{\rm Spec}(\mathbb{Z})}\mathbb{P}^r_{\mathbb{Z}}=\mathbb{P}^r_{W_{\mathbb{Z}}}$, where over each point ${\rm Spec}(K)\rightarrow W_{\mathbb{Z}}$ corresponding to a homogeneous polynomial $F$ over $K$, the fiber $\mathscr{S}\times_{W_{\mathbb{Z}}}{\rm Spec}(K)$ is the subscheme of $\mathbb{P}^r_K$ cut out by $(F,\partial_{X_0}F,\ldots,\partial_{X_r}F)$.
By upper semicontinuity of fiber dimension, we can filter
\begin{align*}
W_{\mathbb{Z}}=\mathcal{S}_{-1}\supset \mathcal{S}_0\supset \mathcal{S}_1\supset\cdots \supset \mathcal{S}_i\supset\cdots ,
\end{align*}
where $\mathcal{S}_i$ is the closed subset over which the fiber of $\mathscr{S}$ has dimension at least $i$. Put another way, $\mathcal{S}_i$ is the locus of hypersurfaces that are singular along a subvariety of dimension at least $i$.
\begin{Def}
Given $\ell$ and $r$ as above, let $\mathcal{S}_i\subset W_{\mathbb{Z}}$ be the locus of hypersurfaces that have a singular locus of dimension at least $i$. Let $\mathcal{S}_{i,K}$ denote $\mathcal{S}_i\times_{{\rm Spec}(\mathbb{Z})}{\rm Spec}(K)$ for a point ${\rm Spec}(K)\rightarrow {\rm Spec}(\mathbb{Z})$.
\end{Def}
\begin{Def}
Let $\mathcal{S}^1_{i}\subset \mathcal{S}_{i}$ be the locus of hypersurfaces singular along an $i$-dimensional plane. As before, we let $\mathcal{S}_{i,K}^1$ denote the base change of $\mathcal{S}^1_{i}$ to a field $K$.
\end{Def}
Recall:
\begin{Thm} [{\cite[Theorem 1.1]{Kaloyan}}]
\label{SlavovT}
Fix $i, r, p$. Then, there is an effectively computable $\ell_0$ in terms of $i,r,p$ such that $\dim(\mathcal{S}_{i,K}^1)>\dim(\mathcal{S}_{i,K}\backslash \mathcal{S}_{i,K}^1)$ for $\ell>\ell_0$ and all fields $K$ of characteristic $p$.
\end{Thm}
Roughly, the proof of Theorem \ref{SlavovT} bounds $\mathcal{S}_{i,K}$ by stratifying based on the degree of the variety contained in the singular locus and has a separate argument for the case of low degree and the case of high degree. We will use the argument from the high degree case for hypersurfaces singular along any curve other than a rational normal curve (which includes lines).
In order to keep our statements clean, we will restrict ourselves to the case $i=1$ and the case where our base field has characteristic 0. We want to show $\dim(\mathcal{S}_{1,K}^1)>\dim(\mathcal{S}_{1,K}\backslash \mathcal{S}_{1,K}^1)$. Recall:
\begin{Prop}
[{\cite[Lemma 5.1]{Kaloyan}}]
\label{S1d}
We have ${\rm codim}(\mathcal{S}_{1,K}^1)=\ell r - 2 r + 3$ for all fields $K$.
\end{Prop}
Now, we want to apply upper semicontinuity to show that $\dim(\mathcal{S}_{1,\overline{\mathbb{F}_2}}^1)>\dim(\mathcal{S}_{1,\overline{\mathbb{F}_2}}\backslash \mathcal{S}_{1,\overline{\mathbb{F}_2}}^1)$ implies $\dim(\mathcal{S}_{1,\overline{\mathbb{Q}}}^1)>\dim(\mathcal{S}_{1,\overline{\mathbb{Q}}}\backslash \mathcal{S}_{1,\overline{\mathbb{Q}}}^1)$. In fact, if we just wanted $\dim(\mathcal{S}_{1,\overline{\mathbb{Q}}}^1)\geq \dim(\mathcal{S}_{1,\overline{\mathbb{Q}}}\backslash \mathcal{S}_{1,\overline{\mathbb{Q}}}^1)$, this would follow from upper semicontinuity of fiber dimension applied to $\overline{\mathcal{S}_{1}\backslash \mathcal{S}_{1}^1}$. As it stands, we have to worry about the case where a component of $\overline{\mathcal{S}_{1}\backslash \mathcal{S}_{1}^1}$ is distinct from $\mathcal{S}_{1}^1$ over the generic fiber, but limits to $\mathcal{S}_{1,\overline{\mathbb{F}_2}}^1$ over the prime $2$.
\begin{Lem}
\label{RPC}
If $p$ is a prime and $\dim(\mathcal{S}_{1,\overline{\mathbb{F}_p}}^1)>\dim(\mathcal{S}_{1,\overline{\mathbb{F}_p}}\backslash \mathcal{S}_{1,\overline{\mathbb{F}_p}}^1)$, then we also have $\dim(\mathcal{S}_{1,K}^1)>\dim(\mathcal{S}_{1,K}\backslash \mathcal{S}_{1,K}^1)$ for algebraically closed fields of almost all characteristics, including characteristic zero.
\end{Lem}
\begin{proof}
Consider the incidence correspondence
\begin{center}
\begin{tikzcd}
& \tilde{\mathcal{S}_1} \arrow[swap, ld, "\pi_1"] \arrow[dr, "\pi_2"] &\\
\mathcal{S}_1 & & \overline{{\rm Hilb}^1_{\mathbb{P}_{\mathbb{Z}}^r}}
\end{tikzcd}
\end{center}
where ${\rm Hilb}^1_{\mathbb{P}_{\mathbb{Z}}^r}$ is the Hilbert scheme of curves in $\mathbb{P}^r_{\mathbb{Z}}$, and $\overline{{\rm Hilb}^1_{\mathbb{P}_{\mathbb{Z}}^r}}$ is the closure of the open sublocus consisting of integral curves. Here, $\tilde{\mathcal{S}_1}$ consists of pairs $(F,[C])$ of a degree $\ell$ form $F$ and a curve $C$ such that the partial derivatives of $F$ vanish on $C$. More precisely, we can apply Lemma \ref{CFF} to the universal family over $\overline{{\rm Hilb}^1_{\mathbb{P}_{\mathbb{Z}}^r}}$ and to the family $\mathcal{S}_1\rightarrow W_{\mathbb{Z}}$.
By definition, $\mathcal{S}^1_1=\pi_1(\pi_2^{-1}(\mathbb{G}(1,r)))$. Let $\mathcal{S}^2_1:=\pi_1(\pi_2^{-1}(\overline{{\rm Hilb}^1_{\mathbb{P}_{\mathbb{Z}}^r}}\backslash\mathbb{G}(1,r)))$ be the degree $\ell$ forms whose corresponding hypersurfaces are singular along a curve of degree greater than 1. Crucially, $\mathcal{S}^2_1$ contains, for example, hypersurfaces singular along a scheme supported on a line with multiplicity 2. Since $\mathcal{S}_1^2$ is closed, $\mathcal{S}^2_1\supset \overline{\mathcal{S}_{1}\backslash \mathcal{S}_{1}^1}$.
\begin{Claim}
\label{STZ}
For any field $K$, $\mathcal{S}^2_{1,K}:=\mathcal{S}^2_1\times_{{\rm Spec}(\mathbb{Z})}{\rm Spec}(K)$ does not contain $\mathcal{S}^1_{1,K}$.
\end{Claim}
First, if we assume Claim \ref{STZ}, then Lemma \ref{RPC} follows from upper semicontinuity applied to $\mathcal{S}^2_1\rightarrow {\rm Spec}(\mathbb{Z})$ as the fiber dimensions of $\mathcal{S}^1_1\rightarrow {\rm Spec}(\mathbb{Z})$ are constant by Proposition \ref{S1d} and $\mathcal{S}^1_{1,K}$ is irreducible for all $K$. To show Claim \ref{STZ}, it suffices to find a single polynomial $F\in K[X_0,\ldots,X_r]$ of degree $\ell$ such that the ideal generated by $(\partial_{X_0} F,\ldots,\partial_{X_r} F)$ scheme theoretically cuts out a curve in $\mathbb{P}^r$ of degree 1, so Claim \ref{ST1} suffices.
\begin{Claim}
\label{ST1}
Fix a line $L\subset \mathbb{P}_K^r$ and let $V\subset W_{\mathbb{Z}}\times_{{\rm Spec}(\mathbb{Z})}{\rm Spec}(K)$ be the linear subspace of degree $\ell$ forms over $K$ that are singular along $L$. Then, there is a dense open $U\subset V$ consisting of degree $\ell$ forms $F$ for which $(F,\partial_{X_0} F,\ldots,\partial_{X_r} F)$ scheme-theoretically cuts out a curve of degree 1.
\end{Claim}
To see Claim \ref{ST1}, we first note Lemma 7.4 in \cite{Kaloyan} shows there is a dense open subset $U_1\subset V$ consisting of forms $F$ whose partial derivatives cut out $L$ set-theoretically. (Lemma 7.4 in \cite{Kaloyan} assumes $\ell\geq 3$, though the case $\ell=2$ is also true, for example from the proof of Claim \ref{ST2} below.) We now focus our attention around $L$. Let $\mathcal{I}_L$ be the ideal sheaf of the line $L$ and $L'\supset L$ be the scheme cut out by $\mathcal{I}_L^2$. Let $X\rightarrow V$ be the family $X\subset \mathbb{P}^r\times V$, where each fiber of $X$ over $[F]\in V$ is the scheme cut out in $\mathbb{P}^r_K$ by the partials $(F,\partial_{X_0} F,\ldots,\partial_{X_r} F)$. (In the notation at the beginning of section \ref{SS61}, $X$ is $(\mathscr{S}\times_{{\rm Spec}(\mathbb{Z})}{\rm Spec}(K))|_{V}$.)
We consider the intersection $X\cap (L'\times_K V)$ and apply upper semicontinuity of degree to the family $X\cap (L'\times_K V)\rightarrow V$ to see that the locus $U_2\subset V$ over which each fiber of $X\cap (L'\times_K V)\rightarrow V$ has degree 1 is open in $V$. To get upper semicontinuity of degree for $X\cap (L'\times_K V)\rightarrow V$, we are using that each fiber has the same dimension. To prove it in our case, for $p\in V$ we slice $X\cap (L'\times_K V)$ by a general hyperplane $H\subset \mathbb{P}^r$ such that $X|_p\cap H$ has length equal to $\deg(X|_p)$. Then, since $H$ cannot contain the support of $L$, $X\cap (H\cap L'\times_K V)\rightarrow V$ is a finite morphism, and we can apply upper semicontinuity of the rank of a coherent sheaf to the pushforward of the structure sheaf of $X\cap (H\cap L'\times_K V)$ to $V$ to conclude.
Finally, if we knew $U_2$ were nonempty, then $U_1\cap U_2$ would satisfy the conditions of Claim \ref{ST1}.
\begin{Claim}
\label{ST2}
The set $U_2\subset V$ is nonempty.
\end{Claim}
Without loss of generality, suppose the ideal sheaf of $L$ is generated by $(X_0,\ldots,X_{r-2})$. Since $r\geq 3$, we can let $Q$ be a degree 2 form in $X_0,\ldots,X_{r-2}$ that cuts out a smooth quadric in $\mathbb{P}^{r-2}$. Here, we are using the fact that we can assume $K$ is algebraically closed. Since the partial derivatives of $Q$ are linear, the partial derivatives of $Q$ generate $(X_0,\ldots,X_{r-2})$ exactly. If we consider $Q$ as a form in $X_0,\ldots,X_r$ that ignores the last two variables, the partial derivatives of $Q$ generate exactly the ideal sheaf of a line.
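For a concrete instance, valid in any characteristic, take $r=3$ and $Q=X_0X_1$, which cuts out a smooth quadric (two distinct points) in $\mathbb{P}^1$. Viewing $Q$ as a form in $X_0,\ldots,X_3$, its partial derivatives are
\begin{align*}
\partial_{X_0}Q=X_1,\qquad \partial_{X_1}Q=X_0,\qquad \partial_{X_2}Q=\partial_{X_3}Q=0,
\end{align*}
which generate exactly $(X_0,X_1)$, the ideal of the line $L=\{X_0=X_1=0\}$.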
If $\ell=2$, then we are done. Otherwise, pick general linear forms $H_1,\ldots,H_{\ell-2}$ that all intersect $L$ properly. Then, the product $Q H_1\cdots H_{\ell-2}$ is in $U_2$.
Note that the proof uses $r>2$: if $r=2$ and the characteristic is 2, then for $G$ a degree $\ell-2$ form, the singular locus of $V(X_0^2 G)$ contains $V(X_0)$. Specifically, the proof above fails when $r=2$ since we cannot pick a smooth quadric in only one variable.
\end{proof}
From Theorems \ref{oddl} and \ref{evenl}, we have:
\begin{Thm}
\label{alll}
For $\ell\geq 7$ or $\ell=5$, $\dim(\mathcal{S}_{1,\overline{\mathbb{F}_2}}^1)>\dim(\mathcal{S}_{1,\overline{\mathbb{F}_2}}\backslash \mathcal{S}_{1,\overline{\mathbb{F}_2}}^1)$, so in particular $\dim(\mathcal{S}_{1,K}^1)>\dim(\mathcal{S}_{1,K}\backslash \mathcal{S}_{1,K}^1)$ for algebraically closed fields $K$ of characteristic 0 or of characteristic $p$ for all but finitely many $p$.
\end{Thm}
\subsection{Counting in characteristic $p$}
We will use a clever trick first given in \cite{Poonen} and then used in \cite{Kaloyan}. To apply this trick, we need the Lang-Weil estimate to relate counting rational points in characteristic $p$ to dimension. Recall:
\begin{Thm}
[{\cite[Theorem 1]{LW}}]
\label{LWb}
Suppose $Z\subset \mathbb{P}^r$ is an irreducible projective variety defined over $\mathbb{F}_q$. Then, if $\# Z(\mathbb{F}_{q^c})$ is the number of $\mathbb{F}_{q^c}$-rational points of $Z$,
\begin{align*}
\left| \# Z(\mathbb{F}_{q^c})-q^{c\dim(Z)}\right|\leq \delta q^{c\left(\dim(Z)-\frac{1}{2}\right)}+A(r,\dim(Z),\deg(Z))\,q^{c(\dim(Z)-1)},
\end{align*}
where $\delta=(\deg(Z)-1)(\deg(Z)-2)$ and $A$ is a function of $r$, $\dim(Z)$, and $\deg(Z)$.
\end{Thm}
We rephrase Theorem \ref{LWb} in a weaker form that is easier to apply.
\begin{Lem}
\label{LWb2}
If $X$ is a quasiprojective variety defined over $\mathbb{F}_q$ and $Y\subset X$ is a constructible set, then the codimension of $Y$ in $X$ is greater than $A$ if and only if
\begin{align*}
\lim_{c\rightarrow\infty}q^{cA}{\rm Prob}(x\in Y(\mathbb{F}_{q^c})):=\lim_{c\rightarrow\infty}q^{cA}\frac{\# Y(\mathbb{F}_{q^c})}{\# X(\mathbb{F}_{q^c})}=0.
\end{align*}
\end{Lem}
\begin{proof}
Let $X\subset \mathbb{P}^r$ be a locally closed embedding and $\overline{X}$ be its closure in projective space. Then, $\# X(\mathbb{F}_{q^c})=\Theta(q^{c\dim(X)})$, as $\dim(\overline{X}\backslash X)<\dim(X)$ and we can apply Theorem \ref{LWb} to every irreducible component of $\overline{X}$ and of $\overline{X}\backslash X$.
Let $Y^{\circ}\subset Y\subset \overline{Y}$, where $Y^{\circ}$ is a quasiprojective variety contained in $Y$ and $\overline{Y}$ is the closure of $Y$ in $\overline{X}$. By applying Theorem \ref{LWb} to each irreducible component of $\overline{Y}$ and to each irreducible component of $\overline{Y}\backslash Y^{\circ}$, we find $\# Y(\mathbb{F}_{q^{c}})=\Theta(q^{c\dim(Y)})$.
\end{proof}
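As a toy illustration of Lemma \ref{LWb2}, take $X=\mathbb{A}^n$ and let $Y$ be a hyperplane, so ${\rm codim}(Y)=1$. Then
\begin{align*}
q^{cA}\,\frac{\# Y(\mathbb{F}_{q^c})}{\# X(\mathbb{F}_{q^c})}=q^{cA}\cdot\frac{q^{c(n-1)}}{q^{cn}}=q^{c(A-1)},
\end{align*}
which tends to $0$ as $c\rightarrow\infty$ exactly when $A<1$, i.e., when ${\rm codim}(Y)>A$.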
See Lemma 6.3 in \cite{Kaloyan} for a slightly more general version of Lemmas \ref{lp1} and \ref{lp2}.
\begin{Lem}
\label{lp1}
Let $\ell$ be odd. Fix a reduced scheme $Z\subset \mathbb{P}^r$ defined over $\mathbb{F}_q$ for $q$ a power of $2$. If we fix $G\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\ell-1}$ and pick $G_0\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell-1}{2}}$ uniformly at random, then
\begin{align*}
{\rm Prob}(V(G+G_0^2)\supset Z)\leq {\rm Prob}(V(G_0)\supset Z).
\end{align*}
\end{Lem}
\begin{comment}
\begin{proof}
We will show equality holds if ${\rm Prob}(G+G_0^2\supset V)>0$. Suppose ${\rm Prob}(G+G_0^2\supset V)>0$. Fix $G_1\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell-1}{2}}$ such that $G+G_1^2\supset V$. Then, for every other choice $G_0\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell-1}{2}}$ such that $G+G_0\supset V$, we see that $G_0^2-G_1^2=(G_0-G_1)^2$ contains $V$. Since $V$ is reduced $G_0-G_1\supset V$. Thus, the map
\begin{align*}
\{G_0\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell-1}{2}}\mid G+G_0^2\supset V\}\rightarrow \{G_0\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell-1}{2}}\mid G_0\supset V\}
\end{align*}
that sends $G_0$ to $G_0-G_1$ is a bijection. Put another way, the first set is a torsor under the action of the second set under addition.
\end{proof}
\end{comment}
\begin{Lem}
\label{lp2}
Let $\ell$ be even. Fix a reduced scheme $Z\subset \mathbb{P}^r$ defined over $\mathbb{F}_q$ for $q$ a power of 2 and with no component contained in the hyperplane $\{X_0=0\}$. If we fix $G\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\ell-1}$ and pick $G_0\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell}{2}-1}$ randomly then
\begin{align*}
{\rm Prob}(V(G+X_0G_0^2)\supset Z)\leq {\rm Prob}(V(G_0)\supset Z).
\end{align*}
\end{Lem}
\begin{proof}
The proofs of Lemma \ref{lp1} and Lemma \ref{lp2} are essentially identical, so we only prove Lemma \ref{lp2}. We will show equality holds if ${\rm Prob}(V(G+X_0 G_0^2)\supset Z)>0$. Suppose ${\rm Prob}(V(G+X_0G_0^2)\supset Z)>0$. Fix $G_1\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell}{2}-1}$ such that $V(G+X_0G_1^2)\supset Z$. Then, for every other choice $G_0\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell}{2}-1}$ such that $V(G+X_0G_0^2)\supset Z$, we see that $X_0G_0^2-X_0G_1^2=X_0(G_0-G_1)^2$ vanishes on $Z$. Since $Z$ is reduced and $\{X_0=0\}$ does not contain a component of $Z$, $X_0$ restricts to a nonzerodivisor on $Z$, so $V(G_0-G_1)\supset Z$. Thus, the map
\begin{align*}
\{G_0\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell}{2}-1}\mid V(G+X_0 G_0^2)\supset Z\}\rightarrow \{G_0\in \mathbb{F}_{q}[X_0,\ldots,X_r]_{\frac{\ell}{2}-1}\mid V(G_0)\supset Z\}
\end{align*}
that sends $G_0$ to $G_0-G_1$ is a bijection. Put another way, the first set is a torsor under the action of the second set under addition.
\end{proof}
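The torsor argument above can be sanity-checked by brute force in the smallest case. The sketch below is an illustration only; the instance $r=1$, $\ell=3$, $q=2$ and the choice of $Z$ as the three rational points of $\mathbb{P}^1$ are ours, not taken from the text. It enumerates all $G$ and $G_0$ and verifies the inequality of Lemma \ref{lp1}:

```python
from itertools import product

# Toy instance of Lemma lp1: q = 2, r = 1, l = 3, so deg(G) = 2, deg(G_0) = 1.
# Z = all three rational points of P^1 over F_2 (a reduced subscheme).
Z = [(1, 0), (0, 1), (1, 1)]

def ev1(c, p):  # linear form a*x0 + b*x1 over F_2
    return (c[0]*p[0] + c[1]*p[1]) % 2

def ev2(c, p):  # quadratic form a*x0^2 + b*x0*x1 + d*x1^2; x^2 = x on F_2-points
    return (c[0]*p[0] + c[1]*p[0]*p[1] + c[2]*p[1]) % 2

G0_all = list(product(range(2), repeat=2))
prob_G0 = sum(all(ev1(g0, p) == 0 for p in Z) for g0 in G0_all)/len(G0_all)

for G in product(range(2), repeat=3):  # every fixed G of degree l - 1 = 2
    # G + G_0^2 vanishes at p iff G(p) + G_0(p)^2 = G(p) + G_0(p) = 0 in F_2
    hits = sum(all((ev2(G, p) + ev1(g0, p)) % 2 == 0 for p in Z)
               for g0 in G0_all)
    assert hits/len(G0_all) <= prob_G0  # the inequality of Lemma lp1
```

In this tiny case the left-hand probability is either $0$ or exactly ${\rm Prob}(V(G_0)\supset Z)$, matching the torsor description in the proof.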
\subsection{Hypersurfaces singular along a rational normal curve}
We will need to know the number of conditions imposed on a degree $\ell$ hypersurface by requiring it to be singular along a fixed rational normal curve of degree $r$ in $\mathbb{P}^r$. We will use:
\begin{Lem}
[{\cite[Proposition 8]{Conca}}]
\label{Rs}
If $\ell\geq 3$, $C\cong \mathbb{P}^1\hookrightarrow \mathbb{P}^r$ is a fixed rational normal curve of degree $r$, and $V \subset H^{0}(\mathbb{P}^r,\mathscr{O}_{\mathbb{P}^r}(\ell))$ is the vector space of degree $\ell$ forms singular along $C$, then $V$ is of codimension $r^2(\ell+1)-2(r^2-1)$.
\end{Lem}
\begin{comment}
\begin{Rem}
Even though Conca \cite{Conca} assumes $r\geq 3$ throughout, the case $r=1$, $2$ can be done directly. The codimension of $V$ is bounded by the Hilbert polynomial of $\mathscr{O}_{\mathbb{P}^r}/\mathcal{I}_C^2$, where $\mathcal{I}_C$ is the ideal sheaf of $C$. To see this, we consider the short exact sequence
\begin{align*}
0\rightarrow \mathcal{I}_C/\mathcal{I}_C^2\rightarrow \mathscr{O}_{\mathbb{P}^{r}}/\mathcal{I}_C^2\rightarrow \mathscr{O}_{\mathbb{P}^{r}}/\mathcal{I}_C\rightarrow 0.
\end{align*}
We see that $H^{1}(\mathscr{O}_{\mathbb{P}^{r}}/\mathcal{I}_C^2(\ell))$ is 0 as $H^{1}(\mathscr{O}_{\mathbb{P}^{r}}/\mathcal{I}_C(\ell))\cong H^{1}(\mathscr{O}_{\mathbb{P}^1}(r\ell))=0$ and $H^1(\mathcal{I}_C/\mathcal{I}_C^2(\ell))=H^1(\mathscr{O}_{\mathbb{P}^1}(r\ell - r -2 ))^{r-1}=0$.
The content of Lemma \ref{Rs} is that for $\ell\geq 3$, the maximum is achieved.
\end{Rem}
\end{comment}
\begin{comment}
\begin{proof}
This is Proposition 8 in \cite{Conca}. Even though Conca in \cite{Conca} has the assumption that $r\geq 3$ throughout the paper, the case $r=1,2$ can be done directly. The maximum the codimension of $V$ can be is the Hilbert polynomial of $\mathscr{O}_{\mathbb{P}^r}/\mathcal{I}_C^2$, where $\mathcal{I}_C$ is the ideal sheaf of $C$. To see this, we consider the short exact sequence
\begin{align*}
0\rightarrow \mathcal{I}_C/\mathcal{I}_C^2\rightarrow \mathscr{O}_{\mathbb{P}^{r}}/\mathcal{I}_C^2\rightarrow \mathscr{O}_{\mathbb{P}^{r}}/\mathcal{I}_C\rightarrow 0.
\end{align*}
We see that $H^{1}(\mathscr{O}_{\mathbb{P}^{r}}/\mathcal{I}_C^2(\ell))$ is 0 as $H^{1}(\mathscr{O}_{\mathbb{P}^{r}}/\mathcal{I}_C(\ell))\cong H^{1}(\mathscr{O}_{\mathbb{P}^1}(r\ell))=0$ and $H^1(\mathcal{I}_C/\mathcal{I}_C^2(\ell))=H^1(\mathscr{O}_{\mathbb{P}^1}(r\ell - r -2 ))^{r-1}=0$.
The content of Lemma \ref{Rs} is that for $\ell\geq 3$, the maximum is achieved.
For future reference, Proposition 8 computes the Hilbert function of the second symbolic power of the ideal of the rational normal curve. The second symbolic power of the ideal sheaf consists precisely of the functions whose germs vanish to order 2 at every point of $C$ (see for example the introduction to \cite{Symb2}). In the case of a local complete intersection curve, squaring the ideal sheaf and taking the second symbolic power coincide. If there is doubt as to what Proposition 8 in \cite{Conca} is computing, the abstract of the same paper says that if $I_C\subset k[X_0,\ldots,X_n]$ is the homogeneous ideal associated to $\mathcal{I}_C$, then the second symbolic power, $I_C^{(2)}$, coincides with the saturation of $I_C^2$, so computing the Hilbert function of $I_C^{(2)}$ is equivalent to computing the Hilbert function of $\mathcal{I}_C^2$, which coincides with the second symbolic power $\mathcal{I}_C^{(2)}$, which is what we wanted.
\end{proof}
\end{comment}
\begin{Lem}
\label{Rs2}
If $\ell\geq 3$, $C\cong \mathbb{P}^1\hookrightarrow \mathbb{P}^r$ is a fixed rational normal curve of degree $r\geq 3$, $\mathbb{P}^r\subset \mathbb{P}^{r+a}$ is embedded as a linear subspace, and $V\subset H^{0}(\mathbb{P}^{r+a},\mathscr{O}_{\mathbb{P}^{r+a}}(\ell))$ is the vector space of degree $\ell$ forms singular along $C$, then $V$ is of codimension $r^2(\ell+1)-2(r^2-1)+a(r(\ell-1)+1)$.
\end{Lem}
\begin{proof}
Since $C$ is a local complete intersection, from the introduction of \cite{Symb2}, it suffices to find the Hilbert function of $\mathscr{O}_{\mathbb{P}^{r+a}}/\mathcal{I}_C^2$. First, we see that $\mathscr{O}_{\mathbb{P}^{r+a}}/\mathcal{I}_C^2(\ell)$ has no higher cohomology from the exact sequence
\begin{align*}
0\rightarrow \mathcal{I}_C/\mathcal{I}_C^2\rightarrow \mathscr{O}_{\mathbb{P}^{r+a}}/\mathcal{I}_C^2\rightarrow \mathscr{O}_{\mathbb{P}^{r+a}}/\mathcal{I}_C\rightarrow 0.
\end{align*}
Indeed, $H^{1}(\mathscr{O}_{\mathbb{P}^{r+a}}/\mathcal{I}_C^2(\ell))$ is 0 as $H^{1}(\mathscr{O}_{\mathbb{P}^{r+a}}/\mathcal{I}_C(\ell))\cong H^{1}(\mathscr{O}_{\mathbb{P}^1}(r\ell))=0$ and $H^1(\mathcal{I}_C/\mathcal{I}_C^2(\ell))=H^1(\mathscr{O}_{\mathbb{P}^1}(r\ell - r -2 ))^{r-1}\oplus H^1(\mathscr{O}_{\mathbb{P}^1}(r\ell -1))^a=0$ \cite[Example 3.4]{normal}.
Let $V'\subset H^{0}(\mathbb{P}^{r+a},\mathscr{O}_{\mathbb{P}^{r+a}}(\ell))$ be the subspace of forms vanishing on $C$. There is an induced map $V'\rightarrow H^{0}(\mathcal{I}_C/\mathcal{I}_C^2(\ell))$. Since $C$ is projectively normal, it suffices to show that the map $V'\rightarrow H^{0}(\mathcal{I}_C/\mathcal{I}_C^2(\ell))=H^{0}(N_{C/\mathbb{P}^r}^{\vee}(\ell))\oplus H^{0}(\mathscr{O}_C(\ell-1))^a$ is surjective.
Given a form $F(X_0,\ldots,X_{r+a})$ vanishing on $C$, we can write it as
\begin{align*}
F=G(X_0,\ldots,X_r)+G_1(X_0,\ldots,X_{r+a})X_{r+1}+\cdots+G_a(X_0,\ldots,X_{r+a})X_{r+a}.
\end{align*}
The map $V'\rightarrow H^{0}(\mathcal{I}_C/\mathcal{I}_C^2(\ell))=H^{0}(N_{C/\mathbb{P}^r}^{\vee}(\ell))\oplus H^{0}(\mathscr{O}_C(\ell-1))^a$ sends $F$ to
\begin{align*}
(G|_{N_{C/\mathbb{P}^r}^{\vee}(\ell)},G_1|_{\mathscr{O}_C(\ell-1)},\ldots,G_a|_{\mathscr{O}_C(\ell-1)}).
\end{align*}
Since the map $V'\rightarrow H^{0}(N_{C/\mathbb{P}^r}^{\vee}(\ell))\cong H^{0}(\mathscr{O}_{\mathbb{P}^1}(-r-2+\ell r))^{r-1}$ is surjective from Lemma \ref{Rs}, and the $G_i$'s can be chosen independently, we see that
\begin{align*}
V'\rightarrow H^{0}(\mathcal{I}_C/\mathcal{I}_C^2(\ell))=H^{0}(N_{C/\mathbb{P}^r}^{\vee}(\ell))\oplus H^{0}(\mathscr{O}_C(\ell-1))^a
\end{align*}
is surjective.
\end{proof}
\begin{Lem}
\label{srnci}
Let $T_i\subset H^{0}(\mathbb{P}^{r},\mathscr{O}_{\mathbb{P}^{r}}(\ell))$ be the locus of hypersurfaces singular along some degree $i$ rational normal curve. Then, ${\rm codim}(T_i)\geq i (\ell r - 2 r - 2) + 5$ and equality holds when $i=1$.
\end{Lem}
\begin{proof}
From Lemma \ref{Rs2}, the number of conditions imposed by requiring singularity along a fixed rational normal curve of degree $i$ is
\begin{align*}
i^2(\ell+1)-2(i^2-1)+(r-i)(i(\ell-1)+1).
\end{align*}
The space of rational normal curves of degree $i$ in $\mathbb{P}^r$ has dimension $(i+3)(i-1)+(i+1)(r-i)$, so
\begin{align*}
{\rm codim}(T_i)&\geq i^2(\ell+1)-2(i^2-1)+(r-i)(i(\ell-1)+1)-(i+3)(i-1)-(i+1)(r-i)\\
{\rm codim}(T_i)&\geq i (\ell r - 2 r - 2) + 5.
\end{align*}
Equality holds when $i=1$ from Lemma 5.1 in \cite{Kaloyan}.
\end{proof}
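Since both sides of the simplification above are polynomials of low degree in $i$, $r$, and $\ell$, the algebra can be spot-checked on a grid of integer points; the following sketch (an illustration, not part of the proof) does so:

```python
# Check the codimension count in the proof above: the parameter count
#   i^2(l+1) - 2(i^2-1) + (r-i)(i(l-1)+1) - (i+3)(i-1) - (i+1)(r-i)
# simplifies exactly to i*(l*r - 2*r - 2) + 5.  Agreement on a grid of
# integer points suffices because both sides are low-degree polynomials.
for i in range(6):
    for r in range(6):
        for l in range(6):
            lhs = (i**2*(l + 1) - 2*(i**2 - 1) + (r - i)*(i*(l - 1) + 1)
                   - (i + 3)*(i - 1) - (i + 1)*(r - i))
            assert lhs == i*(l*r - 2*r - 2) + 5
```

In particular, the two displayed bounds agree exactly; the inequality comes only from the dimension count itself.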
\subsection{Hypersurfaces containing a nondegenerate curve}
We need analogues of Lemmas \ref{KIA} and \ref{ICA} in which the rational normal curves are removed from our set of curves.
\begin{Def}
Let ${\rm Span}(b)'\subset {\rm Hilb}_{\mathbb{P}^r}^1$ denote the set of integral curves whose span is exactly a $b$-dimensional plane, excluding the rational normal curves of degree $b$.
\end{Def}
From a special case of \cite[Theorem 4.5]{Park}, we have
\begin{Lem}
\label{ABP}
We have
\begin{align*}
h_{{\rm Span}(b)'\cap {\rm Hilb}^1_{\mathbb{P}^r}}(d)\geq (b+1)d.
\end{align*}
\end{Lem}
Iterating Lemma \ref{kind} and applying Lemma \ref{ABP}, we find
\begin{Lem}
\label{KIAP}
For $r-k+a=1$, we have ${\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(r)'))\geq a(r+1)d$ for $d_1=\cdots=d_k=d$.
\end{Lem}
\begin{proof}
It suffices to show $h_{{\rm Span}(r)'\cap {\rm Hilb}^1_{\mathbb{P}^r}}(d)\leq h_{{\rm Span}(r)\cap {\rm Hilb}_{\mathbb{P}^r}^2}(d)$ for all $d$. Equivalently,
\begin{align*}
(r-2)\binom{d+1}{2}+\binom{d+2}{2}&\geq (r+1)d=(r-1)\binom{d}{1}+\binom{d+1}{1}+(d-1)\\
(r-1)\binom{d}{2}&\geq d-1\\
(r-1)\frac{d}{2}(d-1) &\geq d-1,
\end{align*}
which is true when $d\geq 2$.
\end{proof}
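The displayed inequality, which after cancellation reads $(r-1)\binom{d}{2}\geq d-1$, can likewise be spot-checked numerically over small ranges (a sanity check of the arithmetic, not a proof):

```python
from math import comb

# Inequality from the proof of Lemma KIAP, for r >= 2 and d >= 2:
#   (r-2)*C(d+1,2) + C(d+2,2) >= (r+1)*d,
# equivalently (r-1)*C(d,2) >= d - 1.
for r in range(2, 12):
    for d in range(2, 12):
        assert (r - 2)*comb(d + 1, 2) + comb(d + 2, 2) >= (r + 1)*d
        assert (r - 1)*comb(d, 2) >= d - 1
```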
Similarly, applying Lemma \ref{IC} and following the same proof as Lemma \ref{ICA}, we get
\begin{Lem}
\label{ICAP}
For $r-k+a=1$,
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(b)'))&\geq {\rm codim}(\Phi^{\mathbb{P}^b,a+(r-b)}_{d_1,\ldots,d_k}(\mathbb{P}^b,{\rm Span}(b)'))-\dim(\mathbb{G}(b,r)).
\end{align*}
\end{Lem}
\begin{comment}
\begin{proof}
We apply Lemma \ref{IC} in the case $S=\mathbb{G}(b,r)$, $\mathcal{X}$ is the universal family of $b$-planes over the Grassmannian, $\mathcal{A}$ is $(A(b)'\times_{K} S)\cap {\rm Hilb}_{\mathcal{X}/S}^{1,\leq D}$, $Y=\mathbb{P}^r$, and $c=0$, we get
\begin{align*}
{\rm codim}(\Phi^{r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,A(b)'))&\geq {\rm codim}(\Phi^{b,a+(r-b)}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A}))-\dim(\mathbb{G}(b,r)).
\end{align*}
Next, let ${\rm Spec}(L)\rightarrow S$ be any point corresponding to a plane $P\subset \mathbb{P}^r_L$. Then, $\Phi^{b,a+(r-b)}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A})\times_S {\rm Spec}(L)$ is the closed subset in $\prod_{i=1}^{k}{W_{r,d_i}}\times_K L$ corresponding to hypersurfaces $F_1,\ldots,F_k$, where the intersection $P\cap F_1\cap \cdots\cap F_k$ contains a curve spanning $P$ other than a rational normal curve of degree $b$. Therefore, the codimension of $\Phi^{b,a+(r-b)}_{d_1,\ldots,d_k}(\mathcal{X}/S,\mathcal{A})\times_S {\rm Spec}(L)$ in $\prod_{i=1}^{k}{W_{r,d_i}}\times_K L$ is the same as ${\rm codim}(\Phi^{b,a+(r-b)}_{d_1,\ldots,d_k}(\mathbb{P}^b,A(b)'))$.
\end{proof}
\end{comment}
Combining Lemmas \ref{KIAP} and \ref{ICAP}, we get
\begin{Lem}
\label{klpc}
For $r-k+a=1$, we have
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,a}_{d_1,\ldots,d_k}(\mathbb{P}^r,{\rm Span}(b)'))\geq (a+r-b)(b+1)d-\dim(\mathbb{G}(b,r))
\end{align*} for $d_1=\cdots=d_k=d$.
\end{Lem}
\subsection{Case of odd degree}
The argument is the same in both cases, but it is cleaner in the case where $\ell$ is odd, giving slightly better bounds.
\begin{Thm}
\label{oddl}
For odd $\ell\geq 5$, $\dim(\mathcal{S}_{1,\overline{\mathbb{F}_2}}^1)>\dim(\mathcal{S}_{1,\overline{\mathbb{F}_2}}\backslash \mathcal{S}_{1,\overline{\mathbb{F}_2}}^1)$.
\end{Thm}
\begin{proof}
From Proposition \ref{psing}, we can assume $r\geq 3$. We cover $\mathcal{S}_{1,\overline{\mathbb{F}_2}}$ by constructible sets:
\begin{enumerate}
\item
$T_1=\mathcal{S}_{1,\overline{\mathbb{F}_2}}^1$
\item
$T_i\subset \mathcal{S}_{1,\overline{\mathbb{F}_2}}$, the set of hypersurfaces singular along a degree $i$ rational normal curve
\item
$T_i'\subset \mathcal{S}_{1,\overline{\mathbb{F}_2}}$, the set of hypersurfaces singular along some integral curve that
\begin{enumerate}[(i)]
\item
spans an $i$-dimensional plane
\item
is not a degree $i$ rational normal curve.
\end{enumerate}
\end{enumerate}
We will bound the codimensions of the sets $T_i$, $T_i'$ for $2\leq i\leq r$ separately.
Let $A={\rm codim}(T_1)=\ell r - 2 r + 3$. First, ${\rm codim}(T_i)>{\rm codim}(T_1)$ for $i>1$, from Lemma \ref{srnci} as $r,\ell\geq 3$ implies $\ell r - 2 r - 2>0$.
We bound ${\rm codim}(T_i')$ for $2\leq i\leq r$. From Lemma \ref{LWb2}, it suffices to show
\begin{align*}
\lim_{c\rightarrow\infty}2^{cA}{\rm Prob}(F\in T_i')=0,
\end{align*}
where for each $c$ we select a hypersurface $F$ randomly from $\mathbb{F}_{2^c}[X_0,\ldots,X_r]_\ell$. Note that selecting $F$ randomly from $\mathbb{F}_{2^c}[X_0,\ldots,X_r]_\ell$ is equivalent to picking $(G,G_0,\ldots,G_r)$ from $\mathbb{F}_{2^c}[X_0,\ldots,X_r]_\ell\times (\mathbb{F}_{2^c}[X_0,\ldots,X_r]_{\frac{\ell-1}{2}})^{r+1}$ randomly and letting $F=G+X_0G_0^2+\cdots+X_r G_r^2$.
Then, $\partial_mF = \partial_m G + G_m^2$ for each $0\leq m\leq r$. Let $E_i$ be the condition that $ \{\partial_m G + G_m^2\mid 0\leq m\leq r \}$ all vanish on some integral curve spanning an $i$-dimensional plane other than a degree $i$ rational normal curve. It suffices to show
\begin{align*}
\lim_{c\rightarrow\infty}2^{cA}{\rm Prob}(E_i)=0.
\end{align*}
Let $E_i'$ be the condition that $V(G_0),\ldots,V(G_r)$ all contain an integral curve spanning an $i$-dimensional plane other than a degree $i$ rational normal curve. From Lemma \ref{lp1}, it suffices to show
\begin{align*}
\lim_{c\rightarrow\infty}2^{cA}{\rm Prob}(E_i')=0.
\end{align*}
Applying Lemma \ref{LWb2} again, it suffices to show
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(i)'))>A
\end{align*}
for $d_1=\cdots=d_{r+1}=d$.
Applying Lemma \ref{klpc}, we see
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(i)'))&\geq -(i+1)(r-i)+(r-i+2)((i+1)d)\\
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(i)'))&\geq (1 - d) i^2 + i (d r + d - r + 1) + d r + 2 d - r
\end{align*}
for $d_1=\cdots=d_{r+1}=d$. Since this is quadratic in $i$ with negative leading coefficient, it suffices to check the cases $i=2$ and $i=r$. We see for $i=2$,
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(2)'))&\geq -3(r-2)+r(3d)\\
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(2)'))-A&\geq d r - 2 r + 3,
\end{align*}
so $d\geq 2$ suffices. In the case $i=r$,
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(r)'))&\geq 2(r+1)d\\
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(r)'))-A&\geq r+2 d - 3,
\end{align*}
so $d\geq 2$ suffices.
\end{proof}
\subsection{Case of even degree}
\begin{Thm}
\label{evenl}
For even $\ell\geq 8$, $\dim(\mathcal{S}_{1,\overline{\mathbb{F}_2}}^1)>\dim(\mathcal{S}_{1,\overline{\mathbb{F}_2}}\backslash \mathcal{S}_{1,\overline{\mathbb{F}_2}}^1)$.
\end{Thm}
\begin{proof}
From Proposition \ref{psing}, we can assume $r\geq 3$. Since
$$\bigcap_{j\neq j'}{\{X_jX_{j'}=0\}}=\{[1:0:\cdots:0],[0:1:\cdots:0],\ldots,[0:0:\cdots:1]\}$$
is finite, it cannot contain any curves. We thus cover $\mathcal{S}_{1,\overline{\mathbb{F}_2}}$ by constructible sets:
\begin{enumerate}
\item
$T_1=\mathcal{S}_{1,\overline{\mathbb{F}_2}}^1$
\item
$T_i\subset \mathcal{S}_{1,\overline{\mathbb{F}_2}}$, the set of hypersurfaces singular along a degree $i$ rational normal curve
\item
$T_{i,j,j'}\subset \mathcal{S}_{1,\overline{\mathbb{F}_2}}$, the set of hypersurfaces singular along some integral curve that
\begin{enumerate}[(i)]
\item spans an $i$-dimensional plane
\item is not a degree $i$ rational normal curve
\item is not contained in $\{X_jX_{j'}=0\}$.
\end{enumerate}
\end{enumerate}
Let $A={\rm codim}(T_1)=\ell r - 2 r + 3$.
First, ${\rm codim}(T_i)>{\rm codim}(T_1)$ for $i>1$, from Lemma \ref{srnci} as $r,\ell\geq 3$ implies $\ell r - 2 r - 2>0$.
We bound ${\rm codim}(T_{i,j,j'})$ for $2\leq i\leq r$. By symmetry, it suffices to assume $j=0, j'=1$. Let $d=\frac{\ell}{2}-1$. From Lemma \ref{LWb2}, it suffices to show
\begin{align*}
\lim_{c\rightarrow\infty}2^{cA}{\rm Prob}(F\in T_{i,j,j'})=0,
\end{align*}
where $F$ is selected randomly from $\mathbb{F}_{2^c}[X_0,\ldots,X_r]_\ell$ for each $c$.
Note that we can achieve the same, uniform distribution if we pick $(G,G_0,\ldots,G_r)$ from $\mathbb{F}_{2^c}[X_0,\ldots,X_r]_\ell\times (\mathbb{F}_{2^c}[X_0,\ldots,X_r]_{d})^{r+1}$ randomly and let $F=G+X_0X_1 G_0^2 + X_0(X_1G_1^2+\cdots+X_r G_r^2)$.
Then, $\partial_mF = \partial_m G + X_0 G_m^2$ for $m\neq 0$ and $\partial_0 F = \partial_0 G + X_1 G_0^2$. Let $E_{i,0,1}$ be the condition that $ \{\partial_mF\mid 0\leq m\leq r \}$ all vanish along some integral curve that spans an $i$-dimensional plane, is not contained in $\{X_0X_1=0\}$, and is not a rational normal curve of degree $i$.
It suffices to show
\begin{align*}
\lim_{c\rightarrow\infty}2^{cA}{\rm Prob}(E_{i,0,1})=0.
\end{align*}
Suppose we choose $G_0,\ldots,G_r$ from $(\mathbb{F}_{2^c}[X_0,\ldots,X_r]_{d})^{r+1}$ uniformly. Let $E_i'$ be the condition that $V(G_0),\ldots,V(G_r)$ all contain some integral curve $C$ that spans an $i$-dimensional plane but is not a degree $i$ rational normal curve. By applying Lemma \ref{lp2}, it suffices to show that
\begin{align*}
\lim_{c\rightarrow\infty}2^{cA}{\rm Prob}(E_i')=0.
\end{align*}
Applying Lemma \ref{LWb2} again, we see it suffices to show ${\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(i)'))>A$. Applying Lemma \ref{klpc}, we see
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(i)'))&\geq -(i+1)(r-i)+(r-i+2)((i+1)d)\\
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(i)'))&\geq (1 - d) i^2 + i (d r + d - r + 1) + d r + 2 d - r
\end{align*}
for $d_1=\cdots=d_{r+1}=d$. Since this is quadratic in $i$ with negative leading coefficient, it suffices to check the cases $i=2$ and $i=r$. We see for $i=2$,
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(2)'))&\geq -3(r-2)+r(3d)\\
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(2)'))-A&\geq d r - 3 r + 3,
\end{align*}
so $d\geq 3$ suffices. In the case $i=r$,
\begin{align*}
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(r)'))&\geq 2(r+1)d\\
{\rm codim}(\Phi^{\mathbb{P}^r,2}_{d_1,\ldots,d_{r+1}}(\mathbb{P}^r,{\rm Span}(r)'))-A&\geq 2 d - 3,
\end{align*}
so $d\geq 2$ suffices.
Putting everything together, we see that we need $d\geq 3$, so $\ell\geq 8$ suffices.
\end{proof}
\bibliographystyle{plain}
% arXiv:1006.3866
\section{Introduction}
The Monte Carlo simulation method has developed into one of the standard tools for
the investigation of statistical mechanical systems undergoing first-order or
continuous phase transitions \cite{binder:book2}. While the formulation of the
Metropolis-Hastings algorithm \cite{metropolis:53a,hastings:70}, which is the basic
workhorse of the method up to this very day, dates back more than half a century ago,
its initial practical value was limited. This was partially due to the fact that
computers for the implementation of such simulations were not widely available and
the computing power of those at hand was very limited compared to today's
standards. Hence, the method was not yet competitive, e.g., for studying critical
phenomena, compared to more traditional approaches such as the $\epsilon$ expansion or series
expansions \cite{zinn-justin}. That the situation has changed rather drastically in
favor of the numerical approaches since those times is not only owed to the dramatic
increase in available computational resources, but probably even more importantly to
the invention and refinement of a number of advanced techniques of simulation and
data analysis. These include (but are not limited to) the introduction of the concept
of finite-size scaling \cite{barber:domb}, which turned the apparent drawback of
finite system sizes in simulations into a powerful tool for extracting the asymptotic
scaling behavior, the invention of cluster algorithms \cite{swendsen:86,wolff:89a}
beating the critical slowing down close to continuous transitions, the
(re-)introduction of histogram reweighting methods
\cite{ferrenberg:88a,ferrenberg:89a} allowing for the extraction of a continuous
family of estimators of thermal averages from a single simulation run, and the
utilization of a growing family of generalized-ensemble simulation techniques such as
the multicanonical method \cite{berg:92b}, which allow one to overcome barriers in the
free-energy landscape and enable us to probe highly suppressed transition states.
In a general setup for a Monte Carlo simulation, the microscopic states of the system
appear with frequencies according to a probability distribution
$p_\mathrm{sim}(\{s_i\})$, which is an expression of the chosen prescription of
picking states and hence specific to the used simulation algorithm. Here, having a
spin system in mind, we label the states with the set $\{s_i\}$, $i=1,\ldots,V$, of
variables. In thermal equilibrium, on the other hand, microscopic states are
populated according to the Boltzmann distribution for the case of the canonical
ensemble,
\begin{equation}
\label{eq:canonical_distr}
p_\mathrm{eq}(\{s_i\}) = \frac{1}{Z_\beta}e^{-\beta{\cal H}(\{s_i\})},
\end{equation}
where ${\cal H}(\{s_i\})$ denotes the energy of the configuration $\{s_i\}$ and
$Z_\beta$ is the partition function at inverse temperature $\beta = 1/T$. Therefore,
any sampling prescription with non-zero probabilities for all possible microscopic
states $\{s_i\}$ in principle\footnote{If the samples are generated by a Markov chain
there are, of course, additional caveats. In particular, any two states must be
connected by a finite sequence of transitions of positive probability, i.e., the
Markov chain must be ergodic.} allows one to estimate thermal averages of any
observable ${\cal O}(\{s_i\})$ from a time series $\{s_i^{(t)}\}$, $t=1,\ldots,N$, of
observations,
\begin{equation}
\label{eq:general_estimate}
\hat{O} = \frac{\displaystyle\sum_{t=1}^N {\cal O}(\{s_i^{(t)}\})\frac{\displaystyle
p_\mathrm{eq}(\{s_i^{(t)}\})}{\displaystyle p_\mathrm{sim}(\{s_i^{(t)}\})}}
{\displaystyle\sum_{t=1}^N \frac{\displaystyle p_\mathrm{eq}(\{s_i^{(t)}\})}{\displaystyle p_\mathrm{sim}(\{s_i^{(t)}\})}},
\end{equation}
such that $O\equiv\langle {\cal O}\rangle = \langle \hat{O}\rangle$ at least in the limit
$N\rightarrow\infty$ of an infinite observation time. For a finite number of samples,
however, the estimate (\ref{eq:general_estimate}) becomes unstable as soon as the
simulated and intended probability distributions are too dissimilar, such that only a
vanishing number of simulated events are representative of the equilibrium
distribution. This is illustrated in Fig.~\ref{fig:importance_sampling}, where the
distribution $p_\mathrm{sim}(\{s_i\}) = \mathrm{const}.$ of purely random or simple sampling
is compared to the canonical distribution (\ref{eq:canonical_distr}) for a finite
temperature $T$. Since the Boltzmann distribution (\ref{eq:canonical_distr}) only
depends on the energy $E = {\cal H}(\{s_i\})$, it is here useful to compare the
one-dimensional densities $p_\mathrm{sim}(E)$ and $p_\mathrm{eq}(E)$ instead of the
high-dimensional distributions in state space. For importance sampling, on the other
hand, the simulated probability density is identical to the equilibrium distribution
which can be achieved, e.g., by proposing local updates $s_i\rightarrow s_i'$ (spin
flips) at random and accepting them according to the Metropolis rule
\begin{equation}
\label{eq:metropolis}
T(\{s'_i\}|\{s_i\})=\min[1,p_\mathrm{eq}(\{s'_i\})/p_\mathrm{eq}(\{s_i\})]
\end{equation}
for the transition probability $T(\{s'_i\}|\{s_i\})$.
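To make the general estimator (\ref{eq:general_estimate}) concrete, the following sketch applies simple sampling, $p_\mathrm{sim}=\mathrm{const}$, with Boltzmann reweighting to a small open Ising chain (a toy stand-in chosen for this illustration; system size, coupling, and sample count are arbitrary) and compares the result with exact enumeration:

```python
import itertools
import math
import random

N, J, beta = 8, 1.0, 0.4              # illustrative parameter choices

def energy(s):                        # H = -J * sum_i s_i s_{i+1}, open chain
    return -J*sum(s[i]*s[i + 1] for i in range(N - 1))

# Simple sampling: draw uniform random states, then reweight each sample by
# p_eq/p_sim ~ exp(-beta*H) as in the general estimator.
random.seed(1)
num = den = 0.0
for _ in range(50000):
    s = [random.choice((-1, 1)) for _ in range(N)]
    e = energy(s)
    w = math.exp(-beta*e)
    num += e*w
    den += w
estimate = num/den

# Exact canonical average <E> by enumerating all 2^N states.
states = list(itertools.product((-1, 1), repeat=N))
Z = sum(math.exp(-beta*energy(s)) for s in states)
exact = sum(energy(s)*math.exp(-beta*energy(s)) for s in states)/Z

assert abs(estimate - exact) < 0.25   # agreement within sampling error
```

For this small system the overlap of the two distributions is still large enough for the reweighted estimate to be stable; for larger systems the estimator degrades exactly as described above.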
\begin{figure}[t]
\centering
\scalebox{0.9}{
\input{importance_samplingT}
}
\vspace*{-0.5cm}
\caption
{ Probability distributions of reduced energies $E\equiv {\cal H}/J$ for the
example of a $16\times 16$ $2$-state Potts model at coupling $\beta J = 0.4$,
cf.~Eq.~(\ref{eq:potts_hamiltonian}). For the case of simple sampling, the
overlap of $p_\mathrm{sim}(E)$ and $p_\mathrm{eq}(E)$ is vanishingly small,
whereas the two distributions coincide exactly for the case of importance
sampling.
\label{fig:importance_sampling}
}
\end{figure}
While importance sampling is optimal in that the simulated and intended probability
densities coincide, this benefit comes at the expense of introducing correlations
between successive samples. Under these circumstances, the autocorrelation function
of an observable ${\cal O}$ is expected to decay exponentially,
\begin{equation}
\label{eq:tau_def}
C_{\cal O}(t) \equiv \langle {\cal O}_0 {\cal O}_t \rangle -\langle {\cal O}_0 \rangle\langle {\cal O}_t\rangle \sim C_{\cal O}(0) e^{-t/\tau({\cal O})},
\end{equation}
defining the associated {\em autocorrelation time\/} $\tau({\cal
O})$. Autocorrelations are a direct limiting factor for the amount of information
that can be extracted from a time series of a given length for estimating thermal
averages. This is most clearly seen by inspecting an alternative definition of
autocorrelation time involving an integral or sum of the autocorrelation function,
\begin{equation}
\tau_{{\rm int}}({\cal O}) \equiv
\frac{1}{2}+\sum_{t=1}^{\infty}C_{\cal O}(t)/C_{\cal O}(0).
\label{tau_int_def}
\end{equation}
The resulting {\em integrated autocorrelation time\/} determines the precision of an
estimate $\hat{O}$ for the thermal average $\langle{\cal O}\rangle$ from the time series
\cite{sokal:92a},
\begin{equation}
\label{eq:precision_autocorrelations}
\sigma^2(\hat{O}) \approx \frac{2\tau_\mathrm{int}({\cal O})}{N}\,\sigma^2({\cal O}).
\end{equation}
The presence of autocorrelations hence effectively reduces the number of independent
measurements by a factor $1/[2\tau_\mathrm{int}({\cal O})]$. Generically, autocorrelation
times are moderate and the problem is thus easily circumvented by adapting the number
of measurement sweeps according to the value of $\tau_\mathrm{int}$. The problem
turns out to be much more severe, however, in the vicinity of phase transition
points. Close to a critical point, where clusters of pure phase states of all sizes
constitute the typical configurations, one observes {\em critical slowing
down\/}\footnote{The exponential autocorrelation time $\tau$ of
Eq.~(\ref{eq:tau_def}) and the integrated autocorrelation time $\tau_\mathrm{int}$
of Eq.~(\ref{tau_int_def}) are found to exhibit the same asymptotic scaling
behavior, such that we do not distinguish them in this respect.},
\begin{equation}
\tau \sim \min(\xi,L)^z,
\label{dyn_exp}
\end{equation}
where the {\em dynamical critical exponent\/} $z$ is found to be close to $z=2$ for
conventional local updating moves. Since the correlation length $\xi$ diverges as the
transition is approached, the same holds true for the autocorrelation time. This
problem is most elegantly solved by the introduction of cluster algorithms, which
involve updating collective variables that happen to show incipient percolation right
at the ordering transition of the spin system and, in addition, must exhibit
geometrical properties which are commensurate with the intrinsic geometry of the
underlying critical system. Such approaches are discussed for the case of the Potts
model in the context of the random-cluster representation in Section
\ref{sec:random_cluster} below.
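A direct estimator of $\tau_\mathrm{int}$ from Eq.~(\ref{tau_int_def}) can be sketched as follows. This is an illustration only: a fixed summation window replaces the self-consistent cutoff used in practice, and the AR(1) test chain is a stand-in for real simulation data, chosen because its exact value $\tau_\mathrm{int}=\frac{1}{2}\frac{1+a}{1-a}$ is known:

```python
import random

def tau_int(x, window=100):
    """Estimate 1/2 + sum_{t>=1} C(t)/C(0), truncated at a fixed window."""
    n = len(x)
    mean = sum(x)/n
    y = [v - mean for v in x]
    c0 = sum(v*v for v in y)/n
    tau = 0.5
    for t in range(1, window):
        ct = sum(y[i]*y[i + t] for i in range(n - t))/(n - t)
        tau += ct/c0
    return tau

random.seed(2)
iid = [random.gauss(0, 1) for _ in range(20000)]   # uncorrelated data
t_iid = tau_int(iid)                               # expect ~ 0.5

a, x, ar = 0.8, 0.0, []                            # AR(1) chain, exact tau = 4.5
for _ in range(50000):
    x = a*x + random.gauss(0, 1)
    ar.append(x)
t_ar = tau_int(ar)

assert abs(t_iid - 0.5) < 0.35
assert 2.5 < t_ar < 6.5
```

The effective number of independent measurements is then $N/2\tau_\mathrm{int}({\cal O})$, as in Eq.~(\ref{eq:precision_autocorrelations}).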
\begin{figure}[t]
\centering
\scalebox{0.9}{
\input{firstorderT}
}
\vspace*{-0.5cm}
\caption
{ Sampled probability distribution of the internal energy $E$ of the $q=20$-state
Potts model on a $16\times 16$ square lattice at the transition coupling
$\beta J=1.699669\ldots$
\label{fig:first_order}
}
\end{figure}
Even more dramatic correlation effects are seen for the case of first-order phase
transitions. There, transition states connecting the pure phases coexisting at the
transition are highly suppressed, leading to the phenomenon of {\em metastability\/},
where the phase of higher free energy persists in some region below the transition
point for macroscopic times due to the free-energy barrier that needs to be overcome
to effect the ordering of the system. This is illustrated in
Fig.~\ref{fig:first_order} displaying the energy distribution of a $q=20$-state
Potts model. The mixed phase states connecting the peaks of the pure phases
correspond to configurations containing interfaces and consequently carry an extra
excitation energy $\sim 2\sigma L^{d-1}$, where $L$ denotes the linear size
of the system and $\sigma$ is the surface free-energy per unit area associated to
interfaces between the pure phases. The thermally activated dynamics in overcoming
this additional energy barrier leads to exponentially divergent autocorrelation
times,
$$
\tau \sim \exp(2\beta\sigma L^{d-1}),
$$
sometimes referred to as hypercritical slowing down. Due to the finite correlation
length at the transition point, cluster updates are of no use here but, instead,
techniques for overcoming energy barriers are required. These can be provided
(besides other means) by generalized-ensemble simulations discussed below in Section
\ref{sec:generalized_ensembles}. Similar problems are encountered in simulations of
disordered systems, where a multitude of metastable states separated by barriers is
found \cite{holovatch:07}.
The Potts model is a natural extension of the Ising model of a ferromagnet to a
system with $q$-state spins and Hamiltonian
\begin{equation}
\label{eq:potts_hamiltonian}
{\cal H} = -J \sum_{\langle i,j\rangle}\delta(\sigma_i,\sigma_j),
\;\;\;\;\sigma_i = 1,\ldots,q.
\end{equation}
It is well known that the Potts model undergoes a continuous phase transition for
$q\le 4$ in two dimensions and $q < 3$ in three dimensions, while the transition
becomes discontinuous for a larger number of states \cite{wu:82a}. The $q=2$ Potts
model is equivalent to the well-known Ising model. In the random-cluster
representation introduced below in Section \ref{sec:random_cluster}, the definition
of the Potts model is naturally extended to all real values of $q > 0$, and it turns
out that the (bond) percolation problem then corresponds to the limit $q\rightarrow
1$, while for $q\rightarrow 0$ the model describes random resistor networks. Due to
these properties, the Potts model serves as a versatile playground for the study of
phase transitions with applications ranging from condensed matter to high-energy
physics.
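For concreteness, the Hamiltonian (\ref{eq:potts_hamiltonian}) and the Metropolis rule (\ref{eq:metropolis}) combine into the following single-spin-flip sketch on a small periodic square lattice (lattice size, $q$, coupling, and sweep count are arbitrary illustrative choices):

```python
import math
import random

L, q, J = 8, 2, 1.0                    # toy lattice and coupling

def potts_energy(s):
    """H = -J * sum over nearest-neighbour pairs of delta(s_i, s_j)."""
    e = 0
    for x in range(L):
        for y in range(L):
            e += (s[x][y] == s[(x + 1) % L][y]) + (s[x][y] == s[x][(y + 1) % L])
    return -J*e

def metropolis_sweep(s, beta):
    for _ in range(L*L):
        x, y = random.randrange(L), random.randrange(L)
        new = random.randrange(1, q + 1)
        nbrs = [s[(x + 1) % L][y], s[(x - 1) % L][y],
                s[x][(y + 1) % L], s[x][(y - 1) % L]]
        dE = -J*(sum(n == new for n in nbrs) - sum(n == s[x][y] for n in nbrs))
        if dE <= 0 or random.random() < math.exp(-beta*dE):
            s[x][y] = new              # Metropolis acceptance min(1, e^{-beta*dE})

random.seed(3)
spins = [[random.randrange(1, q + 1) for _ in range(L)] for _ in range(L)]
for _ in range(100):
    metropolis_sweep(spins, beta=0.4)
```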
\section{Using histograms}
When studying phase transitions with Markov chain Monte Carlo simulations, one
encounters another generic problem independent of the presence of
autocorrelations. In the standard approach, estimators of the type
(\ref{eq:general_estimate}) need to be used for each of a series of independent
simulations at different values of the temperature (or other external parameters) to
extract the temperature dependence of the observable at hand. This turns out to be
problematic when studying phase transitions, where certain observables (such as,
e.g., the specific heat) develop peaks which narrow down to the location of
the transition point as successively larger system sizes are considered. Locating
such maxima to high precision then requires performing a large number of independent
simulation runs.
\subsection{Energy and magnetization histograms}
This problem is avoided by realizing that each time series from a simulation run at
fixed temperature can be used to estimate thermal averages for nearby temperatures as
well. The concept of {\em histogram reweighting\/}
\cite{ferrenberg:88a,ferrenberg:89a} follows directly from the general relation
(\ref{eq:general_estimate}) connecting simulated and target probability densities. If
an importance sampling simulation is performed at coupling $K_0 = \beta_0 J$,
i.e., $p_\mathrm{sim}\sim \exp(-K_0 E)$, estimators for canonical expectation
values at a different coupling $K$ are found from Eq.~(\ref{eq:general_estimate})
with $p_\mathrm{eq}\sim\exp(-K E)$,
\begin{equation}
\hat{O}_K = \frac{\sum_{t=1}^N {\cal
O}(\{s_i^{(t)}\})e^{-(K-K_0){\cal H}(\{s_i^{(t)}\})/J}}
{\sum_{t=1}^Ne^{-(K-K_0){\cal H}(\{s_i^{(t)}\})/J} }.
\label{eq:basic_reweighting}
\end{equation}
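As an illustration (not part of the original derivation), the estimator (\ref{eq:basic_reweighting}) can be sketched in a few lines of Python; the time series \texttt{energies} and \texttt{observables} as well as the couplings are placeholder inputs:

```python
import numpy as np

def reweight(energies, observables, K0, K):
    """Estimate <O> at coupling K from a time series sampled at K0,
    following the single-histogram reweighting formula above.
    Energies are measured in units of J, i.e., E = H/J."""
    E = np.asarray(energies, dtype=float)
    O = np.asarray(observables, dtype=float)
    # subtract the minimal energy before exponentiating for stability
    w = np.exp(-(K - K0) * (E - E.min()))
    return np.sum(O * w) / np.sum(w)
```

For $K = K_0$ the weights cancel and the plain sample mean is recovered; for $|K-K_0|$ too large, the sums are dominated by a few configurations, reflecting the overlap problem discussed in the text.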
While this is conceptually perfectly general, it is clear that --- quite similar to
the case of simple sampling discussed above --- reliable estimates can only be
produced if the simulated and target distributions have significant overlap (cf.\
Figure \ref{fig:importance_sampling}). This is most clearly seen when switching over
to a formulation involving histograms as estimates of the considered probability
densities. If $\hat{H}_{K_0}(E)$ is a sampled energy histogram at coupling $K_0$ and
the observable ${\cal O}$ only depends on the configuration $\{s_i\}$ via the energy
$E$, the estimate (\ref{eq:basic_reweighting}) becomes
\begin{equation}
\hat{O}_K = \frac{\sum_E{\cal O}(E)\hat{H}_{K_0}(E)e^{-(K-K_0)E}}
{\sum_E\hat{H}_{K_0}(E)e^{-(K-K_0)E}}.
\label{eq:histogram_reweight}
\end{equation}
It is useful to realize, then, that in sampling the histogram $\hat{H}_{K_0}(E)$ one
is, in fact, estimating the {\em density of states\/} $\Omega(E)$,
\begin{equation}
\langle \hat{H}_{K_0}(E)/N \rangle = p_{K_0}(E) = \frac{1}{Z_{K_0}}\Omega(E)e^{-K_0E}
\label{eq:helper17}
\end{equation}
i.e., the number of microstates of energy $E$ via
\begin{equation}
\hat{\Omega}(E) = Z_{K_0}\,\hat{H}_{K_0}(E)/N\times e^{K_0 E},
\label{eq:single_histogram_dos}
\end{equation}
where $Z_{K_0}$ denotes the partition function at coupling $K_0$. Inserting this
expression into the canonical expectation value, one indeed arrives back at the
reweighting estimate (\ref{eq:histogram_reweight}),
\begin{equation}
\hat{O}_K = \frac{1}{Z_K} \sum_E \hat{\Omega}(E) e^{-KE} {\cal O}(E) = \frac{Z_{K_0}}{Z_K}\sum_E
\hat{H}_{K_0}(E)/N \times e^{-(K-K_0)E}{\cal O}(E)
\end{equation}
with
\begin{equation}
\widehat{\frac{Z_{K_0}}{Z_K}} =
\frac{\sum_E\hat{H}_{K_0}(E)/N\times e^{-(K_0-K_0)E}}{\sum_E\hat{H}_{K_0}(E)/N\times e^{-(K-K_0)E}}.
\end{equation}
It should be clear that the density of states is a rather universal quantity in that
its complete knowledge allows one to determine any thermal average related to the
energy for arbitrary temperatures. The limitation in the allowable reweighting range,
$|K-K_0|<\epsilon$, then translates into a window of energies for which the density
of states $\Omega(E)$ can be reliably estimated from a single canonical
simulation. This is illustrated in the left panel of Figure \ref{fig:beale} for the
$2$-state Potts model, where the density-of-states estimate of
Eq.~(\ref{eq:single_histogram_dos}) from a single simulation at coupling $K=0.4$ is
compared to the exact result. Note that from the estimator
(\ref{eq:single_histogram_dos}) $\Omega(E)$ can only be determined up to the unknown
normalization constant $Z_{K_0}$. This is irrelevant for thermal averages of the type
(\ref{eq:histogram_reweight}), but precludes the determination of free energies.
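To make the estimator (\ref{eq:single_histogram_dos}) concrete, the following Python fragment (a sketch with hypothetical input data) turns an energy time series into an estimate of $\ln\Omega(E)$ up to the unknown additive constant $\ln Z_{K_0}$:

```python
import numpy as np

def log_dos_estimate(energies, K0):
    """ln Omega(E) + const from an energy time series sampled at K0;
    the unknown factor Z_{K0} only shifts the result by a constant."""
    E, counts = np.unique(np.asarray(energies, dtype=float),
                          return_counts=True)
    log_H = np.log(counts / counts.sum())     # ln[H_{K0}(E)/N]
    log_omega = log_H + K0 * E                # ln Omega(E) - ln Z_{K0}
    return E, log_omega - log_omega.max()     # fix an arbitrary gauge
```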
\begin{figure}[t]
\centering
\scalebox{0.9}{
\input{q2-0c4-densityT}
}
\scalebox{0.9}{
\input{q2-density-normalizedT}
}
\vspace*{-0.5cm}
\caption
{ Density-of-states estimate $\hat{\Omega}(E)$ for the $2$-state Potts model on a
$16\times 16$ square lattice from importance-sampling simulations at coupling
$K=0.4$ (left panel) and from a series of simulations ranging from $K=0.1$ up to
$K=0.9$ (right panel). The solid lines show the exact density of states
calculated according to Ref.~\cite{beale:96a}. From the estimate
(\ref{eq:single_histogram_dos}) and the corresponding multi-histogram analogue
(\ref{eq:optimal_edos}), $\Omega(E)$ can only be determined up to an unknown
normalization.
\label{fig:beale}
}
\end{figure}
If more than a tiny patch of the domain of the density of states is to be determined,
several simulations at different couplings $K$ need to be combined. Since the average
internal energy increases monotonically with temperature, a systematic series of
simulations can cover the relevant range of energies. A combination of several
estimates of the form (\ref{eq:single_histogram_dos}) for $\Omega(E)$ is problematic,
however, since each estimate has a different, unknown scale factor $Z_{K_i} = e^{{\cal
F}_i}$. This dilemma can be solved by the iterative solution of a system of
equations. A convex linear combination of density-of-states estimates
$\hat{\Omega}_i(E)$ from independent simulations at couplings $K_i$, $i=1,\ldots,n$,
$$
\hat{\Omega}(E) \equiv \sum_{i=1}^n\alpha_i(E)\hat{\Omega}_i(E),
$$
with $\sum_i\alpha_i(E) = 1$ is of minimal variance for \cite{weigel:09}
$$
\alpha_i(E) = \frac{1/\sigma^2[\hat{\Omega}_i(E)]}{\sum_j 1/\sigma^2[\hat{\Omega}_j(E)]}.
$$
In view of Eq.~(\ref{eq:single_histogram_dos}), and ignoring the variance of the
scale factors $e^{{\cal F}_i}$, one estimates
$$
\sigma^2[\hat{\Omega}_i(E)] \approx e^{2{\cal F}_i}\hat{H}_{K_i}(E)/N^2\times e^{2K_i E},
$$
such that
\begin{equation}
\hat{\Omega}(E) = \frac{\sum_ie^{-{\cal F}_i-K_i E}}{\sum_i e^{-2{\cal F}_i-2K_iE}[\hat{H}_{K_i}(E)/N]^{-1}}.
\label{eq:optimal_edos}
\end{equation}
From Eq.~(\ref{eq:single_histogram_dos}) one obtains the normalization condition
\begin{equation}
\label{eq:iteration2}
e^{{\cal F}_i} = \sum_E\hat{\Omega}(E) e^{-K_i E},
\end{equation}
which needs to be solved iteratively together with Eq.~(\ref{eq:optimal_edos}) to
simultaneously yield the optimized estimate $\hat{\Omega}(E)$ and the scale
factors ${\cal F}_i$. An initial estimate can be deduced from thermodynamic
integration \cite{ferrenberg:89a,weigel:02a,binder:book2}. Here, again, it is a
crucial condition that the energy histograms to be combined have sufficient
overlap. Otherwise, the iterative solution of Eqs.~(\ref{eq:optimal_edos}) and
(\ref{eq:iteration2}) cannot converge. By combining an appropriately chosen series of
simulations, this {\em multi-histogram\/} approach yields a reliable estimate of the
full density of states, as is illustrated in the right panel of
Figure \ref{fig:beale} for the case of the $2$-state Potts model. (The states to the
right of the maximum in $\Omega(E)$ belong to the {\em antiferromagnetic\/} Potts
model and thus are not seen in the simulations.) If the full range of admissible
energies has been sampled, an {\em absolute\/} normalization of $\Omega(E)$ also
becomes possible, by matching $\hat{\Omega}(E)$ to reproduce the number $q$ of ground
states or the number $q^{\cal N}$ of different states in total, where ${\cal N}$
denotes the number of Potts spins.
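The iterative solution of Eqs.~(\ref{eq:optimal_edos}) and (\ref{eq:iteration2}) can be sketched as follows; this is a minimal Python implementation assuming normalized histograms $\hat{H}_{K_i}(E)/N$ on a common energy grid, not production code:

```python
import numpy as np

def multi_histogram(E, H, K, n_iter=200):
    """Combine normalized histograms H[i] (= H_{K_i}(E)/N) sampled at
    couplings K[i] into one density-of-states estimate by iterating
    the two self-consistency equations of the multi-histogram method."""
    E, H, K = (np.asarray(a, dtype=float) for a in (E, H, K))
    F = np.zeros(len(K))                       # free-energy factors F_i
    for _ in range(n_iter):
        num = np.exp(-F[:, None] - K[:, None] * E).sum(axis=0)
        safe_H = np.where(H > 0, H, np.inf)    # unvisited bins drop out
        den = (np.exp(-2 * F[:, None] - 2 * K[:, None] * E)
               / safe_H).sum(axis=0)
        Omega = num / den                      # optimal combination
        F = np.log((Omega * np.exp(-np.outer(K, E))).sum(axis=1))
    return Omega, F
```

As in the text, $\hat{\Omega}(E)$ is only determined up to a common scale, and the iteration converges only if neighboring histograms overlap.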
For estimating thermal averages of observables that do not depend on the energy only,
the outlined framework can be easily generalized by replacing the measurements ${\cal
O}(E)$ in Eq.~(\ref{eq:histogram_reweight}) by the corresponding microcanonical
averages $\langle{\cal O}\rangle_E$ at energy $E$,
$$
\hat{O}_K = \frac{\sum_E\langle{\cal O}\rangle_E\hat{H}_{K_0}(E)e^{-(K-K_0)E}}
{\sum_E\hat{H}_{K_0}(E)e^{-(K-K_0)E}}.
$$
In the context of spin models, for instance, it can be useful to sample joint
histograms of energy and magnetization and also define the corresponding
two-dimensional density of states \cite{tsai:unpublished}. For the Potts model,
however, it appears even more natural to consider the density of states occurring
in the random-cluster representation, which is also the natural language for the
formulation of cluster algorithms. This will be discussed in the next section.
\subsection{Random cluster histograms\label{sec:random_cluster}}
As was first noted by Fortuin and Kasteleyn \cite{fortuin:72a}, the partition
function of the zero-field Potts model on a general graph ${\cal G}$ with ${\cal N}$
vertices and ${\cal E}$ edges can be rewritten as
\begin{equation}
Z_{K,q} \equiv \sum_{\{\sigma_i\}} e^{K\sum_{\langle i,j\rangle}\delta(\sigma_i,\sigma_j)}
= \sum_{{\cal G}' \subseteq {\cal G}} (e^K-1)^{b({\cal G}')}\,q^{n({\cal G}')},
\label{eq:random_cluster}
\end{equation}
where the sum runs over all bond configurations ${\cal G}'$ on the graph
(subgraphs). Note that the formulation (\ref{eq:random_cluster}) in contrast to that
of Eq.~(\ref{eq:potts_hamiltonian}) allows for a natural continuation of the model to
{\em non-integer\/} values of $q$. This expression can be interpreted as a
bond-correlated percolation model with percolation probability $p = 1-e^{-K}$:
\begin{equation}
Z_{p,q} = e^{K{\cal E}} \sum_{{\cal G}' \subseteq {\cal G}}
p^{b({\cal G}')}(1-p)^{{\cal E}-b({\cal G}')}\,q^{n({\cal G}')}
= e^{K{\cal E}} \sum_{b=0}^{{\cal E}}
\sum_{n=1}^{{\cal N}}g(b,n)\,p^b\,(1-p)^{{\cal E}-b}\,q^n,
\label{eq:partition_bcpm}
\end{equation}
where $g(b,n)$ denotes the number of subgraphs of ${\cal G}$ with $b$ activated bonds
and $n$ resulting clusters. This purely combinatorial quantity corresponds to the
density of states of the random-cluster model.
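The identity (\ref{eq:random_cluster}) is easy to verify by brute force on a small graph; the following Python sketch (with a hypothetical four-site cycle as the graph) compares the spin sum with the subgraph sum, the latter of which remains well defined for non-integer $q$:

```python
import itertools, math

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-site cycle as toy graph
NV = 4

def n_clusters(active_bonds):
    """Number of connected components induced by the active bonds."""
    parent = list(range(NV))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for i, j in active_bonds:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return len({find(v) for v in range(NV)})

def Z_spin(K, q):
    """Left-hand side: sum over all q^NV spin configurations."""
    return sum(math.exp(K * sum(s[i] == s[j] for i, j in edges))
               for s in itertools.product(range(q), repeat=NV))

def Z_rc(K, q):
    """Right-hand side: sum over all bond subgraphs; q may be real."""
    return sum((math.exp(K) - 1) ** len(sub) * q ** n_clusters(sub)
               for b in range(len(edges) + 1)
               for sub in itertools.combinations(edges, b))
```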
It is this formulation of the model which is exploited by the cluster algorithms
\cite{swendsen:86,wolff:89a} mentioned above. Since the Potts model is {\em
equivalent\/} to a (correlated) percolation model, it follows (almost)
automatically that the thus defined clusters percolate at the ordering transition and
have the necessary fractal properties. This deep connection between spin model and
percolation problem results in cluster algorithms for the Potts model dramatically
reducing, and in some cases completely removing, the effect of critical slowing down
\cite{wj:chem}. It thus appears desirable to combine this extraordinarily successful
approach with the idea of reweighting to obtain continuous families of
estimates. In particular, one would want to reweight in the temperature as well as
the now continuous parameter $q$, for instance to determine the tricritical value
$q_c$ at which the transition becomes first order. In contrast to previous attempts
in this direction \cite{lee:91a}, which use the language of energy and magnetization
and thereby incur certain systematic errors, such reweighting is very naturally
possible in the random-cluster representation. By construction, a cluster-update simulation of
the $q_0$-state Potts model at coupling $K_0$ produces bond configurations with the
probability distribution
\begin{equation}
p_{p_0,q_0}(b,n) = W_{p_0,q_0}^{-1}\,g(b,n)\,p_0^b\,(1-p_0)^{{\cal
E}-b}\,q_0^n,
\label{eq:rc_probability}
\end{equation}
where $p_0=1-e^{-K_0}$ and $W_{p_0,q_0} \equiv Z_{p_0,q_0} e^{-K_0{\cal
E}}$. Therefore, if a histogram $\hat{H}_{p_0,q_0}(b,n)$ of bond and cluster
numbers is sampled, one has $p_{p_0,q_0}(b,n) =
\langle\hat{H}_{p_0,q_0}(b,n)/N\rangle$, and an estimate of $g(b,n)$ follows as
\cite{weigel:02a}
\begin{equation}
\hat{g}(b,n) = W_{p_0,q_0} \frac{\hat{H}_{p_0,q_0}(b,n)}{p_0^b\,(1-p_0)^{{\cal
E}-b}\,q_0^n\,N},
\end{equation}
which, analogous to the estimate (\ref{eq:single_histogram_dos}), contains an
(unknown) normalization factor, $W_{p_0,q_0}$. The required cluster decomposition of
the lattice is a by-product of the Swendsen-Wang update and hence its determination
does not entail any additional computational effort.
In this way, cluster-update simulations with largely reduced critical slowing down
can be used for a systematic study of the model for arbitrary temperatures and
(non-integer) numbers of states. Thermal averages of observables ${\cal O}(b,n)$
can be easily deduced from $\hat{g}(b,n)$,
\begin{equation}
\hat{\cal O}(p,q) \equiv \left[{\cal O}\right]_{p,q} = \frac{\displaystyle\sum_{b=0}^{{\cal E}} \sum_{n=1}^{{\cal N}}
\hat{H}_{p_0,q_0}(b,n)
\left(\frac{p}{p_0}\right)^b\left(\frac{1-p}{1-p_0}\right)^{{\cal
E}-b}\left(\frac{q}{q_0}\right)^n {\cal O}(b,n)}{\displaystyle\sum_{b=0}^{{\cal E}} \sum_{n=1}^{{\cal N}}
\hat{H}_{p_0,q_0}(b,n)
\left(\frac{p}{p_0}\right)^b\left(\frac{1-p}{1-p_0}\right)^{{\cal
E}-b}\left(\frac{q}{q_0}\right)^n}.
\end{equation}
Relating expressions in the $(b,n)$ and $(E,M)$ languages, we have,
\begin{eqnarray}
\hat{u} & = & -\frac{1}{p{\cal N}}\,[b]_{p,q}, \nonumber \\
\hat{c}_v & = & \frac{K^2}{p^2{\cal N}}\,\left(\left[(b-[b]_{p,q})^2\right]_{p,q}
-(1-p)[b]_{p,q}\right), \nonumber
\end{eqnarray}
where $u$ denotes the internal energy per spin and $c_v$ is the specific heat. For
magnetic observables, an additional distinction between percolating and finite
clusters is necessary \cite{weigel:02a}.
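As a sketch (assuming bond-number samples from a Swendsen-Wang run at the simulation point itself), the two relations for $u$ and $c_v$ translate into Python as:

```python
import numpy as np

def energy_observables(b_samples, K, n_spins):
    """Internal energy per spin u and specific heat c_v from the bond
    number b of a random-cluster simulation at coupling K, using the
    (b, n)-language relations quoted above (p = 1 - exp(-K))."""
    p = 1.0 - np.exp(-K)
    b = np.asarray(b_samples, dtype=float)
    b_mean = b.mean()
    b_var = np.mean((b - b_mean) ** 2)
    u = -b_mean / (p * n_spins)
    cv = K ** 2 / (p ** 2 * n_spins) * (b_var - (1.0 - p) * b_mean)
    return u, cv
```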
\begin{figure}[t]
\centering
\vspace*{-0.8cm}
\hspace*{0.3cm}
\scalebox{0.9}{
\input{support1T}
}
\scalebox{0.9}{
\input{support2T}
}
\vspace*{-0.5cm}
\caption
{ Random-cluster density of states $g(b,n)$ on the $16\times 16$ square lattice as
estimated from a Swendsen-Wang cluster-update simulation of the $q=2$ Potts model
at $K=0.8$ (left panel) and the $q=2$ (brighter part, below) as well as $q=10$
(darker part, on top) models at a range of different couplings (right
panel). Brighter colors correspond to larger values of $\hat{g}(b,n)$. The white
areas correspond to $(b,n)$ values not visited in the simulations.
\label{fig:support}
}
\end{figure}
For averages at general values of $p$ and $q$, we run into the by now familiar
problem of vanishing overlap of histograms as we move too far from the simulated
$(p_0, q_0)$ point. This is illustrated in Figure \ref{fig:support}, showing the
support of the density-of-states estimate $\hat{g}$ in the $(b,n)$ plane. For a
single canonical simulation, only a small patch of the $(b,n)$ plane is sampled (left
panel). To improve on this, a multi-histogramming approach analogous to the technique
discussed in the previous section is required. The relations corresponding to
Eqs.~(\ref{eq:optimal_edos}) and (\ref{eq:iteration2}) are here \cite{weigel:02a}
\begin{equation}
\hat{g}(b,n) = \frac{\sum_i N_i\,e^{-{\cal F}_i}\,p_i^b\,(1-p_i)^{{\cal
E}-b}\,q_i^n}
{\sum_j N_j^2\,e^{-2{\cal F}_j}\,p_j^{2b}\,(1-p_j)^{2({\cal E}-b)}\,
q_j^{2n}[\hat{H}_{p_i,q_i}(b,n)]^{-1}},
\label{selfcon_1}
\end{equation}
and the following self-consistency equation for the free-energy factors ${\cal F}_i$:
\begin{equation}
e^{{\cal F}_i} = \sum_{b=0}^{\cal E}\sum_{n=1}^{\cal N}
\hat{g}(b,n)\,p_i^b\,(1-p_i)^{{\cal E}-b}\,q_i^n.
\label{selfcon_2}
\end{equation}
Combining a number of simulations at different temperatures and $q$ values, a more
significant patch of the density of states $g(b,n)$ can thus be sampled, cf.\ the
right panel of Figure \ref{fig:support}. Note that in the $(b,n)$ plane, moving from
$b=0$ to $b={\cal E}$ corresponds to moving from infinite to zero temperature,
whereas increasing the number of states $q$ moves the histograms up along the $n$
axis, corresponding to the fact that the presence of more states will tend to produce
configurations broken down into smaller (and thus more) clusters.
\section{Generalized ensembles\label{sec:generalized_ensembles}}
Two problems arise for an estimate of the total random-cluster density of states
$g(b,n)$ with the multi-histogram approach outlined above: (i) while simulations at
sufficiently small $q$ profit from the application of cluster algorithms in that
critical slowing down is strongly reduced, in the first-order regime of large $q$
cluster algorithms are not useful for tackling the hypercritical slowing down
observed there and (ii) as the system size is increased, histograms from simulations
at different (integer) values of $q$ cease to overlap, such that the set
(\ref{selfcon_1}) and (\ref{selfcon_2}) of multi-histogram equations eventually
breaks down. While the second problem could, in principle, be avoided by using the
cluster algorithm suggested in Ref.~\cite{chayes:98a} for general, non-integer $q$
values, we find it more convenient to tackle both issues simultaneously by moving
away entirely from the concept of {\em canonical\/} simulations which, as it turns
out, entails further advantages for the sampling problem.
The idea of multicanonical \cite{berg:92b} (or, less specifically, generalized
ensemble) simulations is motivated by the problem of dynamically tunneling through
the region of (exponentially) low probability between the coexisting phases at a
first-order transition, cf.\ Figure \ref{fig:first_order}. Instead of simulating the
canonical distribution (\ref{eq:canonical_distr}), consider importance sampling
according to a generalized probability density,
\begin{equation}
\label{eq:muca}
p_\mathrm{muca}(E) = \frac{\Omega(E)/W(E)}{Z_\mathrm{muca}} =
\frac{e^{S(E)-\omega(E)}}{Z_\mathrm{muca}},
\end{equation}
where $W(E)$ denotes a suitably chosen weight factor, $\omega(E) = \ln W(E)$ its
logarithm, and $S(E) = \ln\Omega(E)$ is the microcanonical entropy. To overcome barriers, the
sampling distribution should be {\em broadened\/} with respect to the canonical one,
in the extremal case to become completely flat, $p_\mathrm{muca}(E) = \mathrm{const}$. For
this case Eq.~(\ref{eq:muca}) tells us that
$$
W(E) = \Omega(E),\;\;\;\text{i.e.,}\;\;\;\omega(E) = S(E).
$$
Hence, we arrive back at the task of estimating the density of states of the system!
Since $\Omega(E)$ is not known {\em a priori\/}, one needs to revert to a recursive
solution, where starting out, e.g., with the initial guess $W_0(E) = 1$
(corresponding to a canonical simulation at infinite temperature) one produces an
estimate $\hat{\Omega}_1(E)$ of the density of states according to
Eq.~(\ref{eq:single_histogram_dos}) and sets $W_1(E) = \hat{\Omega}_1(E)$. Repeating
this process, eventually a reliable estimate for $\Omega(E)$ over the full range of
energies can be produced\footnote{In practice it is, of course, more reasonable to
combine the information from {\em all\/} previous simulations to form the current
best guess for the weight function \cite{berg:96}.}. Note that with the help of the
general relation (\ref{eq:general_estimate}) we can come back to estimating canonical
averages at any time during the multicanonical iteration. An alternative, rather
efficient, approach for arriving at a working estimate of $\Omega(E)$ was suggested
in Ref.~\cite{wang:01a}, where the weights $\omega(E)$ are {\em continuously\/}
updated $\omega(E) \rightarrow \omega(E) + \phi$ after visits of the energy $E$, and
the constant $\phi$ is gradually reduced to zero after the relevant energy range has
been sufficiently sampled. Although such a prescription ceases to form an equilibrium
Monte Carlo simulation, convergence to the correct density of states can be shown
under rather general circumstances \cite{zhou:05}.
Some combinations of the successful concepts of cluster algorithms/representations
and generalized-ensemble simulations have been suggested before, most notably the
multibondic algorithm of Ref.~\cite{wj:95a}, which attaches generalized weights to
the bond distribution function only (see also Ref.~\cite{hartmann:05}). Although it
appears most natural, it seems that it has not been noticed before that
multicanonical weights can be attached, instead, to the full random-cluster
probability density (\ref{eq:rc_probability}) to directly estimate the geometrical
density of states $g(b,n)$. In this ``multi-bondic-and-clusteric'' version one writes
\begin{equation}
\label{eq:p_mubocl}
p_\mathrm{mubocl}(b,n) = W_\mathrm{mubocl}^{-1}\,g(b,n)\,e^{-\gamma(b,n)},
\end{equation}
such that the generalized weights $\exp[-\gamma(b,n)]$ lead to a completely flat
histogram for $\gamma(b,n) = \ln g(b,n)$. At this point, it is crucial to observe
that, since $g(b,n)$ is a purely combinatorial quantity describing the number of
decompositions of the lattice through a given number of activated links, it is no
longer necessary to simulate the underlying spin model and, instead, one can consider
the corresponding percolation problem directly. This approach proceeds by simulating
subgraphs ${\cal G}'$ with local updates: assume that the current subgraph consists
of $b$ active bonds resulting in a decomposition of the graph into $n$
clusters. Picking a bond of the graph ${\cal G}$ at random, two local moves are
possible:
\begin{enumerate}
\item If the chosen bond is not active, try to activate it. Then either
\begin{enumerate}
\item activating the bond does not change the cluster number $n$ ({\em internal bond}),
leading to a transition $(b,n) \rightarrow (b+1,n)$,
\item or activating the bond joins two previously disjoint clusters ({\em
coordinating bond\/}), such that $(b,n) \rightarrow (b+1,n-1)$.
\end{enumerate}
\item If the chosen bond is already active, try to deactivate or delete it. Then
either
\begin{enumerate}
\item deleting the bond does not change the cluster number $n$ (internal bond),
resulting in the transition $(b,n) \rightarrow (b-1,n)$,
\item or deleting the bond breaks a cluster apart into two pieces (coordinating bond),
such that $(b,n) \rightarrow (b-1,n+1)$.
\end{enumerate}
\end{enumerate}
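As a minimal illustration of these moves (a sketch only: every flip simply recounts the clusters with a fresh union-find pass, rather than using the efficient dynamic data structures discussed in the text):

```python
class BondConfig:
    """Subgraph of a fixed graph with the four local moves (1a)-(2b).
    Sketch only: each flip recounts the clusters from scratch."""

    def __init__(self, n_vertices, edges):
        self.nv = n_vertices
        self.edges = list(edges)
        self.active = set()

    def n_clusters(self):
        parent = list(range(self.nv))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x
        for i, j in self.active:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
        return len({find(v) for v in range(self.nv)})

    def flip(self, edge):
        """Toggle one bond and return the new pair (b, n)."""
        self.active.symmetric_difference_update({edge})
        return len(self.active), self.n_clusters()
```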
While with a naive approach (such as the application of the Hoshen-Kopelman algorithm
\cite{hoshen:79}) most of these moves would be very expensive computationally, this
is not the case for a clever choice of data structures and algorithms. We use
so-called ``union-find algorithms'' with additional improvements known as balanced
trees and path compression \cite{newman:01a}. With these structures, the
computational effort for identifying whether an inactive bond is internal or
coordinating and, for case (1b), the amalgamation of two clusters are operations in
constant running time, irrespective of the size of the graph (up to logarithmic
correction terms). The decision whether an active bond is internal or coordinating,
although an operation with ${\cal O}({\cal E})$ complexity in the worst case, can be
implemented very efficiently with interleaved breadth-first searches. Only the
operation (2b) of actually decomposing a cluster can be potentially expensive, but
this is only a problem directly at the percolation threshold. These local steps are
used for a generalized-ensemble simulation, for instance using the iteration
suggested by Wang and Landau \cite{wang:01a} to arrive at an estimate $\hat{g}(b,n)$
for the random-cluster density of states (additional speedups can be achieved
employing interpolation schemes for yet unvisited $(b,n)$ bins).
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{lnt}
\vspace*{-0.5cm}
\caption
{ Logarithm of the cluster density of states $g(b,n)$ of the $q$-state Potts model
on a $16\times 16$ square lattice.
\label{fig:density}
}
\end{figure}
The estimated $g(b,n)$ can then be used either for directly estimating thermal
averages via the relation (\ref{eq:partition_bcpm}) or as weight function for a
multi-bondic-and-clusteric simulation to yield estimates of arbitrary observables via
the general relation (\ref{eq:general_estimate}). Note that, by construction, the
approach does not suffer from any (hyper-)critical slowing down, since it is based
entirely on simulating a non-interacting percolation model. Figure \ref{fig:density}
shows the (logarithm of the) density of states $g(b,n)$ sampled with this approach on
a $16\times 16$ square lattice. While the $\hat{g}(b,n)$ resulting from this approach
is still determined only up to an unknown normalization constant, the random-cluster
approach has the advantage that there exist ${\cal E}$ independent normalization conditions
\begin{equation}
g(b) = \sum_{n} g(b,n) \stackrel{!}{=} {{\cal E} \choose b}.
\end{equation}
It is easily shown that the estimates from this approach reproduce the known results,
e.g., for the internal/free energy and specific heat \cite{ferdinand:69a} or the
(energy) density of states of the Ising model \cite{beale:96a}. Beyond that, it is
easy from this approach to study Potts model properties as a continuous function of
$q$, or to study equilibrium distributions of Potts models with a large number of
states without the problem of hypercritical slowing down. This is illustrated in
Figure \ref{fig:potts}.
\begin{figure}[t]
\centering
\vspace*{0.3cm}
\includegraphics[width=7.75cm]{free}
\includegraphics[width=7.75cm]{heat}
\vspace*{-0.5cm}
\caption
{ Absolute free energy (left) and specific heat (right) of the $q=2$, $5$, $10$,
$20$ and $50$ states Potts model on a $16\times 16$ square lattice as estimated
from the density of states $\hat{g}(b,n)$ resulting from a
``multi-bondic-and-clusteric'' simulation described in the main text.
\label{fig:potts}
}
\end{figure}
\section{Conclusions}
While importance sampling Monte Carlo simulations according to the
Metropolis-Hastings scheme appear to be the universally optimal solution to the
problem of estimating equilibrium thermal averages, a number of complications are
encountered in practical applications, which result (a) from the requirement of
computing estimates as continuous functions of external parameters and (b) from the
Markovian nature of the algorithm, entailing autocorrelations that can lead to dynamic
ergodicity breaking. I have outlined how a number of techniques such as histogram
reweighting, cluster algorithms and generalized-ensemble simulations can provide
(partial) solutions to these problems. It turns out that all of these techniques are
closely related to the problem of estimating the density of states of the model at
hand, which emerges as a central quantity for the understanding of advanced
simulation techniques. For the prototypic case of the Potts model, it is shown how a
combination of the random-cluster representation underlying the concept of cluster
algorithms and multicanonical simulations makes it possible to reduce the simulation to a purely
geometric cluster-counting problem that can be efficiently solved, e.g., with the
Wang-Landau sampling scheme to yield arbitrary thermal averages as continuous
functions of both the temperature and the (general, non-integer) number of states
$q$. Possible applications are investigations of the tricritical point in the $(T,q)$
plane, estimates of critical exponents as continuous functions of $q$, or the
investigation of transition states in the first-order regime, to name only a few of
the problems that immediately come to mind.
\section*{Acknowledgments}
The author acknowledges support by the DFG through the Emmy Noether Programme under
contract No.\ WE4425/1-1.
2009.09277
\section{Introduction}
Polar codes can achieve the capacity of any binary-input symmetric channel with low-complexity encoding and \ac{sc} decoding algorithms when the code length tends towards infinity \cite{arikan}. These codes were recently adopted in the control channel of the enhanced mobile broadband (eMBB) scenario of the fifth-generation (5G) mobile-communication standard, which requires codes with short block lengths \cite{3gpp_polar}. Since \ac{sc} decoding does not result in a satisfactory error-correction performance for short-block-length polar codes, \ac{sc} decoding variants, such as \mbox{\ac{scl}} decoding concatenated with a \mbox{\ac{crc} \cite{tal_list}}, are used to decode short-block-length polar codes.
\ac{sc} decoding and its variants are sequential-decoding algorithms that progress bit by bit. The polar-code encoding process divides the encoder input bits into two sets based on the underlying synthetic channels' reliability. One set of the input bits, corresponding to the more reliable synthetic channels, is assigned to carry information bits. The other set, corresponding to the less reliable synthetic channels, carries predefined values known to the decoder. The problem of finding the synthetic channels' reliabilities and dividing the input bits into the two sets is called code construction. Polar-code construction thus aims to provide the best error-correction performance for a specific transmission channel and its associated sequential decoder.
Several techniques have been proposed to construct polar codes with \ac{sc} decoding. The Bhattacharyya parameter was first used in \cite{arikan} to construct polar codes. Density evolution \cite{mori1,mori2}, Gaussian approximation of density evolution \cite{trifonov_GA}, and upgrading/downgrading of channels \cite{tal_construction,pedarsani} were also used to construct polar codes with SC decoding. A universal partial order based upon the reliability of synthetic channels was found in \cite{bardet,schurch}, and it was shown in \cite{mondelli_complexity} that by using these universal partial orders, the complexity of polar-code construction is sublinear with the block length. Moreover, $\beta$-expansion was used in \cite{beta} to construct polar codes for different channels.
All the aforementioned polar-code-construction techniques are for use with \ac{sc} decoding. However, polar-code construction with \ac{sc} decoding does not necessarily result in the best error-correction performance under variants of \ac{sc} decoding such as \ac{scl} decoding \cite{hashemi_part}. To address this issue, heuristic methods such as Monte-Carlo simulations \cite{sun_MC,qin_MC} were used to construct polar codes for other decoding algorithms. Artificial-intelligence techniques have evolved as promising candidates to construct polar codes. In particular, genetic algorithms and machine learning were used to construct codes for specific decoders in \cite{elkelesh_GA,huang_AI}, respectively.
Different from most existing polar-code construction techniques that sort bit channels by reliability and then pick the most reliable ones, this paper explores the sequential-decoding process and approaches polar-code construction from a novel perspective. In particular, this paper proposes a technique whereby polar-code construction is mapped to a game, in which the agent is trained to traverse a maze. The connection between the maze-traversing game and \ac{sc}-based decoding is detailed, showing that maximizing the game's expected return minimizes the \ac{fer}. The \ac{rl} algorithm SARSA$(\lambda)$ \cite{sutton1998introduction} is adopted to solve the game. Simulation results show that with a moderate amount of training, the game-based polar-code constructions can match current standard constructions for \ac{sc} decoding, and outperform the standard construction with \ac{scl} decoding. Moreover, we show that the \ac{fer} gap between the game-based constructions and the standard constructions increases with the list size in \ac{scl} decoding.
The remainder of this paper is organized as follows: Section~\ref{sec:prel} reviews the preliminaries. Section~\ref{sec:game} details the proposed polar-code construction game. Section~\ref{sec:rl} explains the \ac{rl} algorithm that solves the game. Section~\ref{sec:simu} presents simulation results. Finally, Section~\ref{sec:conc} concludes the paper.
\section{Preliminaries}
\label{sec:prel}
\subsection{Polar Codes}
A polar code $\mathcal{P}(N, K)$ of length $N = 2^n$ is constructed by applying a linear transformation to input bit vector $\bm{u} = (u_0, u_1, \ldots, u_{N - 1})$ to obtain codeword $\bm{x} = \bm{u} \mathbf{G}^{\otimes n} = (x_0, x_1, \ldots, x_{N - 1})$, where $\mathbf{G}^{\otimes n}$ is the \mbox{$n$-th} Kronecker power of the polarizing kernel matrix $\mathbf{G} = \left[\begin{smallmatrix}
1 & 0 \\
1 & 1
\end{smallmatrix}\right]$. $\bm{x}$ is then modulated and transmitted through the channel. The input $\bm{u}$ contains a set $\mathcal{I}$ of $K$ nonfrozen bits, which need to be recovered, and a set $\mathcal{F}$ of $N - K$ frozen bits, whose positions and values are known to both the encoder and the decoder. The $K$ nonfrozen bits are divided into $A$ information bits and $P=K-A$ \ac{crc} bits. If no \ac{crc} is used, then $A = K$. The \emph{construction} of a polar code $\mathcal{P}(N, K)$ refers to the selection of $K$ nonfrozen bit positions.
This paper only considers \ac{bpsk} modulation for an \ac{awgn} channel. The values of the frozen bits are fixed as zero.
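The encoding step can be sketched in a few lines of Python (the Kronecker power is built explicitly, which is only sensible for short toy lengths, and no bit-reversal permutation is applied, matching the definition above):

```python
import numpy as np

def polar_encode(u, n):
    """x = u G^{(x)n} over GF(2) for a length-2^n polar code."""
    G = np.array([[1, 0], [1, 1]])
    G_n = np.array([[1]])
    for _ in range(n):
        G_n = np.kron(G_n, G)          # n-th Kronecker power of G
    return np.asarray(u) @ G_n % 2
```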
\subsection{\ac{sc}-Based Decoding}
\begin{figure}
\centering
\input{Figures/sc-dec}
\caption{Binary tree representation for the \ac{sc} decoding of $\mathcal{P}(8,4)$ code.}
\vspace{-1em}
\label{fig:sc_binary}
\end{figure}
\ac{sc} decoding and its variants decode the $k$-th input bit based on the received signal $\bm{y}$ and the previously decoded bits $\hat{\bm{u}}^{k - 1} = (\hat{u}_0,\hat{u}_1,\ldots,\hat{u}_{k-1})$ via the conditional \ac{llr} value $\mbox{LLR}(u_k | \bm{y}, \hat{\bm{u}}^{k - 1})$ \cite{arikan}. Fig.~\ref{fig:sc_binary} illustrates the calculation of the conditional \ac{llr} as a binary tree search. Each node in layer $m$ corresponds to $2^m$ bits. The soft messages $\alpha$ that contain the \ac{llr} values are passed from a parent node to its child nodes, while the hard-bit estimates $\beta$ are passed upwards from a child node to its parent. The messages flow through a node in the following order: get $\alpha$ from its parent; send $\alpha^{\rm left}$ to its left child; get back $\beta^{\rm left}$; send $\alpha^{\rm right}$ to its right child; get back $\beta^{\rm right}$; and finally send $\beta$ back to its parent.
The $i$-th entries of $\alpha^{\rm left}, \alpha^{\rm right} \in \mathbb{R}^{2^{m - 1}}$, sent from a node in layer $m$ to its left and right children, respectively, are calculated as
\begin{equation}
\alpha_i^{\rm left} = \mathrm{sgn} (\alpha_i \alpha_{i + 2^{m - 1}}) \cdot \min (|\alpha_i|, |\alpha_{i + 2^{m - 1}}|),
\end{equation}
\begin{equation}
\alpha_i^{\rm right} = \alpha_{i + 2^{m - 1}} + (1 - 2\beta_i^{\rm left}) \alpha_i,
\end{equation}
and the initial $\alpha^{\rm left}$ sent from the root node contains the \ac{llr} values of the received $N$ symbols. The $i$-th entry in $\beta \in \{0,1\}^{2^m}$ to be returned to the node's parent is given by
\begin{equation}
\beta_i = \begin{cases}
\beta_i^{\rm left} \oplus \beta_i^{\rm right}, & \mbox{if } i < 2^{m - 1}, \\
\beta_{i - 2^{m-1}}^{\rm right}, & \mbox{otherwise},
\end{cases}
\end{equation}
where $\oplus$ denotes modulo-2 addition.
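The three node updates above can be sketched directly in code. The following is a minimal illustration (variable and function names are ours); note that the left-child rule is the min-sum approximation stated in the equations:

```python
import numpy as np

def alpha_left(alpha):
    """Message to the left child: sign product and magnitude minimum (min-sum)."""
    half = len(alpha) // 2
    a, b = alpha[:half], alpha[half:]
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def alpha_right(alpha, beta_left):
    """Message to the right child, conditioned on the left child's hard bits."""
    half = len(alpha) // 2
    a, b = alpha[:half], alpha[half:]
    return b + (1 - 2 * beta_left) * a

def beta_up(beta_left, beta_right):
    """Hard estimates returned to the parent: (b_left XOR b_right, b_right)."""
    return np.concatenate([beta_left ^ beta_right, beta_right])
```

For example, with $\alpha = (2, -1, 3, 0.5)$ the left message is $(2, -0.5)$, and with $\beta^{\rm left} = (0, 1)$ the right message is $(5, 1.5)$.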
When the $k$-th leaf node receives $\alpha$ from its parent, the decoder decodes the $k$-th bit $\hat{u}_k$ as
\begin{equation}\label{eq:sc_dec}
\beta = \hat{u}_k = \begin{cases}
0, & \mbox{if } k \in \mathcal{F} \mbox{ or } \alpha \ge 0, \\
1, & \mbox{otherwise}.
\end{cases}
\end{equation}
The \ac{scl} decoder improves on the decoding performance of the \ac{sc} decoder by keeping up to $L$ of the most likely decoding paths in parallel. Whenever a nonfrozen bit is encountered, both possible values, $0$ and $1$, are considered. A \ac{pm} is used to evaluate each decoding path. In particular, at the $k$-th leaf node, the \ac{pm} for the $l$-th path with $\hat{\bm{u}}^{k, (l)} = (\hat{u}_0^{(l)}, \ldots, \hat{u}_k^{(l)})$ is
\begin{equation}
\mathrm{PM}_k^{(l)} = \begin{cases}
\mathrm{PM}_{k - 1}^{(l)}, & \mbox{if } \hat{u}_k^{(l)} = \frac{1 - \mathrm{sgn}(\alpha^{(l)})}{2}, \\
\mathrm{PM}_{k - 1}^{(l)} + |\alpha^{(l)}|, & \mbox{otherwise},
\end{cases}
\end{equation}
where $\alpha^{(l)}$ is the soft message passed to the leaf node through the $l$-th path. After computing the \ac{pm}s for all possible paths, the $L$ paths with the lowest \ac{pm}s survive, and the others are dropped.
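The path-metric rule above penalizes a decision that contradicts the sign of the leaf LLR by $|\alpha^{(l)}|$ and is free otherwise. A minimal sketch (helper name ours):

```python
def pm_update(pm_prev, llr, u_hat):
    """Path-metric update for one path at a leaf node.
    The decision agreeing with the LLR sign (llr >= 0 -> bit 0) costs nothing;
    the opposite decision is penalized by |llr|."""
    hard = 0 if llr >= 0 else 1   # (1 - sgn(llr)) / 2
    return pm_prev if u_hat == hard else pm_prev + abs(llr)

print(pm_update(0.0, 2.5, 1))  # deciding against a +2.5 LLR -> 2.5
```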
When the decoding process ends, one codeword needs to be selected from the list. There are several ways to select the final decoder result:
\begin{enumerate}
\item {\bf Pure \ac{scl}}: the decoder selects the codeword with the smallest \ac{pm} from the list.
\item {\bf\ac{cscl}}: the decoder decodes to the codeword with the smallest \ac{pm} among the candidates that pass the \ac{crc} check.
\item {\bf \ac{scl}-Genie}: the decoder decodes to the correct codeword as long as it is in the list.
\end{enumerate}
\begin{figure}[t!]
\centering
\includegraphics[width=.85\linewidth]{Figures/scl_crc_vs_genie.eps}
\caption{\ac{fer} comparison between \ac{scl}-Genie decoding of $\mathcal{P}(128,64)$ and \ac{cscl} decoding of $\mathcal{P}(128, 56+8)$. \ac{scl}$L$ denotes the \ac{scl}-based decoder with a list size $L$.}
\vspace{-1em}
\label{fig:genie}
\end{figure}
Although the \ac{scl}-Genie decoder cannot be implemented in practice, it is adopted during training, where the correct codewords are known for the training samples. The main reasons are as follows. First, for a given construction, the \ac{cscl} decoder for $\mathcal{P}(N,A+P)$ yields decoding performance almost identical to that of the \ac{scl}-Genie decoder for $\mathcal{P}(N,K)$ with $K = A+P$, as long as $P$ is moderately large. This is verified in Fig.~\ref{fig:genie}, which shows the performance of the \ac{cscl} decoder for $\mathcal{P}(128, 56+8)$ and the corresponding performance of the \ac{scl}-Genie decoder for $\mathcal{P}(128, 64)$. Second, the design of a good \ac{crc} given the construction of $\mathcal{P}(N, K)$ is a separate problem, which is beyond the scope of this work.
\section{Viewing Polar Code Construction as a Game}
\label{sec:game}
\subsection{Polar Code Construction Game}
The construction of $\mathcal{P}(N,K)$ is the selection of $K$ nonfrozen bit positions out of the $N$ available positions. The selection procedure is equivalent to the following maze-traversing game:
\begin{itemize}
\item \textbf{Environment}: A maze with height $N - K + 1$ and width $K + 1$. The states of the environment are defined as the cells $({\rm row}, {\rm col})$. The upper left cell, indexed by $(0,0)$, is the start cell; the bottom right cell, indexed by $(N - K, K)$, is the terminal cell.
\item \textbf{Rule}: Each game starts from the start cell. At each step, the agent takes an action $a$ that is either ``move \emph{down}'' $(a = 0)$ or ``move \emph{right}'' $(a = 1)$, and it is not allowed to leave the maze. The game ends when the agent reaches the terminal cell or when it receives a nonzero reward from the environment.
\item \textbf{Reward}: The reward is associated with the \ac{scl}-Genie decoding process, which is further explained in Section~\ref{subsection:reward}.
\item \textbf{Goal}: The agent attempts to find the best path that yields the highest expected return throughout the game.
\end{itemize}
Each possible path that the agent can choose in the maze corresponds to a possible construction of $\mathcal{P}(N,K)$. In particular, the $k$-th bit is set as a frozen bit if the agent chooses the \emph{down} action at the $k$-th step; and it is set as an information bit if the agent chooses the \emph{right} action instead. Fig.~\ref{fig:polar_maze} is an example of the maze associated with the construction of $\mathcal{P}(8,5)$. Both the bit positions in the polar code and the steps in the game are indexed from $0$ to $N - 1$.
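The mapping from an action sequence to a construction is direct: step $k$'s action decides the status of bit $k$. A one-line sketch (helper name ours), using the action list of the $\mathcal{P}(8,5)$ maze example:

```python
def actions_to_nonfrozen(actions):
    """Action a_k = 1 ('right') marks bit k nonfrozen; a_k = 0 ('down') freezes it."""
    return [k for k, a in enumerate(actions) if a == 1]

# down, down, right, right, down, right, right, right
print(actions_to_nonfrozen([0, 0, 1, 1, 0, 1, 1, 1]))  # -> [2, 3, 5, 6, 7]
```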
\begin{figure}[t!]
\centering
\includegraphics[width=.54\linewidth]{Figures/polar_maze.eps}
\vspace{-.5em}
\caption{An example of a polar code construction game for $\mathcal{P}(8,5)$. The action list is $\{\text{down}, \text{down}, \text{right}, \text{right}, \text{down}, $ $\text{right}, \text{right}, \text{right}\}$. The corresponding nonfrozen bit positions are $\{2, 3, 5, 6, 7\}$.}
\vspace{-1.5em}
\label{fig:polar_maze}
\end{figure}
\subsection{Reward Generation via \ac{scl}-Genie Decoding}
\label{subsection:reward}
The design of the instant reward at each step needs to satisfy the following requirements: (1) the reward needs to reflect how good the current action is in the short term; (2) the path with high expected return at the game's end should reflect a good polar-code construction, i.e., a construction that gives low \ac{fer}.
In the \ac{scl}-Genie decoding process, the bits are decoded sequentially: the decoding of the $k$-th bit does not depend on how the frozen and nonfrozen bits are distributed after it. Moreover, given the \ac{pm}s and the surviving paths at the $(k-1)$-th bit, the evolution of the surviving paths and \ac{pm}s at the $k$-th bit depends only on whether the $k$-th bit itself is frozen. This sequential nature of the \ac{scl}-Genie decoder suggests that each step's instant reward can be generated along with the decoding process. The detailed reward-generation process is given in Algorithm~\ref{alg:reward}.
In the reward-generation process, the transmitted codeword is fixed to the all-zero codeword because it is the only valid codeword for all possible polar-code constructions. Algorithm~\ref{alg:reward} shows that the selected actions are penalized when the \ac{scl}-Genie decoder fails to decode the correct codeword, i.e., when the \ac{scl}-Genie decoder drops the correct codeword from the list during the decoding process. As such, by selecting a maze path with a high expected return, or equivalently, a small expected penalty, the agent implicitly chooses a polar-code construction with low expected \ac{fer}. The \ac{sc}-based decoders only append new bits after the already decoded bit stream, and no previous decisions will be altered. Therefore, once the correct codeword is dropped from the list at step $k$, a frame error must occur, and the actions after step $k$ cannot repair that result. In other words, it is unreasonable to prefer any actions over others after dropping the correct codeword at step $k$. Therefore, the game is designed to be terminated after the decoder drops the correct codeword, or equivalently, after the agent receives nonzero reward.
\begin{algorithm}[t]
\caption{Reward generation at step $k$\label{alg:reward}}
\hspace*{\algorithmicindent} \textbf{Input:} step $k$, action $a$ \\
\hspace*{\algorithmicindent} \textbf{Global Variable:} PM list, survival path list \\
\hspace*{\algorithmicindent} \textbf{Output:} reward $r$, termination flag $F$
\begin{algorithmic}[1]
\If {$k = 0$}
\State Transmit all-zero codeword through the channel
\State Initialize the \ac{scl}-Genie decoder with the received \ac{llr}
\EndIf
\If {$a = 0$}
\State Decode the $k$-th bit as if it is a frozen bit
\State Update the \ac{pm} list and the survival paths
\Else
\State Decode the $k$-th bit as if it is a nonfrozen bit
\State Update the \ac{pm} list and the survival paths
\State Check if the all-zero codeword survives
\If {all-zero codeword is dropped from the list}
\State Set $r = -1$, $F = \mathrm{True}$
\State Clear PM list, survival path list
\State \Return $(r, F)$
\EndIf
\EndIf
\State Set $r = 0$
\If {$k = N-1$}
\State Set $F = \mathrm{True}$
\State Clear PM list, survival path list
\Else
\State Set $F = \mathrm{False}$
\EndIf
\State \Return $(r, F)$
\end{algorithmic}
\end{algorithm}
\section{Reinforcement Learning Algorithm}\label{sec:rl}
Here, an \ac{rl} technique is applied to solve the polar-code construction problem. This section introduces tabular \ac{rl} systems and then describes the SARSA$(\lambda)$ algorithm used for agent training.
\subsection{Reinforcement Learning Basics}
A typical tabular \ac{rl} system contains an environment and an agent. At a given time $t$, the environment is at state $s_t \in \mathcal{S}$, and the agent takes an \emph{action} $a_t \in \mathcal{A}$ according to a \emph{policy} $\pi: \mathcal{S} \to \mathcal{A}$ to interact with the environment. The environment, stimulated by the agent's action, changes its state to $s_{t+1}$ and provides a \emph{reward} $r_{t+1}$ to the agent. The agent accumulates the rewards as this interaction continues. The \emph{return} $R = \sum_{t = 0}^{T} \gamma^{t} r_{t+1}$ is the accumulated reward that the agent receives throughout the game, in which $\gamma \in (0,1]$ is the discount rate that describes how much weight the agent places on future rewards, and $T$ denotes the game's termination time. The agent's goal is to optimize its policy $\pi$ to maximize the expected return $\mathbb{E}[R]$.
The agent's policy $\pi$ is commonly derived from a value function $Q: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, which approximates the expected return when taking action $a$ from state $s$ and then following policy $\pi$. The value function of any state-action pair $(s,a) \in \mathcal{S} \times \mathcal{A}$ under policy $\pi$ is defined as
\begin{equation}
Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[\left.\sum_{\tau = 0}^{T-t-1}\gamma^{\tau} r_{t+\tau+1} \right\vert s_t = s, a_t = a \right],
\end{equation}
and it satisfies the dynamics
\begin{equation}
Q^{\pi}(s,a) = \mathbb{E}\left[r_{t+1} + \gamma Q^{\pi}(s_{t+1}, \pi (s_{t+1}))|s_t = s, a_t = a\right].
\end{equation}
An $\epsilon$-greedy policy according to a value function $Q$ is defined as
\begin{equation}
\pi(s) = \begin{cases}
\displaystyle\arg \max_{a \in \mathcal{A}} Q(s, a), & \mbox{ w.p. } 1 - \epsilon, \\
\mbox{random } a \in \mathcal{A}, & \mbox{ w.p. } \epsilon.
\end{cases}
\end{equation}
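The $\epsilon$-greedy rule above translates into a few lines of code. The sketch below (names ours) stores $Q$ as a dictionary keyed by (state, action), with missing entries defaulting to zero:

```python
import random

def epsilon_greedy(Q, state, actions, eps):
    """With prob. 1 - eps pick argmax_a Q[(state, a)]; else a uniformly random action."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```

With $\epsilon = 0$ the rule is purely greedy; during training $\epsilon$ is typically decayed toward zero.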
\iffalse
The key to maximizing the expected return is the value function $Q: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, which approximates the expected return when taking action $a$ from state $s$ and then following policy $\pi$. The value function satisfies
\begin{equation}
Q(s_t, a_t) = r_{t+1} + \gamma Q(s_{t+1}, a_{t+1}).
\end{equation}
With an accurate estimate of the value function, the optimal policy is
\begin{equation}
a_t^* = \arg\max_{a \in \mathcal{A}} Q(s_t, a).
\end{equation}
\fi
\subsection{SARSA$(\lambda)$ with Eligibility Trace}
To learn a good policy, the agent updates the value function according to the rewards it receives from the environment. In this work, the agent uses the SARSA$(\lambda)$ algorithm with eligibility trace \cite{sutton1998introduction} to update the value function.
An eligibility trace captures the current game's historical trace and assigns credit to every state-action pair. In particular, the eligibility trace initializes as
\begin{equation}
E_0(s,a) = 0,~\forall s \in \mathcal{S}, a \in \mathcal{A},
\end{equation}
and it evolves as
\begin{equation}
E_t(s,a) = \gamma \lambda E_{t - 1} (s,a) + \mathds{1} (s_t = s, a_t = a),
\end{equation}
where $\mathds{1}(\cdot)$ is the indicator function, and the parameter $\lambda \in [0,1]$ controls how strongly a reward received later in the game updates the value functions of earlier states. A large $\lambda$ means that the agent traces back deeply and updates the value functions of many historical states at each step; a small $\lambda$ means that only the values of the most recent states are updated with the newly received reward.
During training, the agent maintains a table of value functions $Q$, and uses the $\epsilon$-greedy policy, where the exploration rate $\epsilon$ decreases over training episodes. At time $t$ in one episode, the agent is at state $s_t$ and takes action $a_t$ according to the $\epsilon$-greedy policy based on the current value functions. Upon receiving the reward $r_{t + 1}$, the \ac{td} error is defined as
\begin{equation}
\delta_t = r_{t + 1} + \gamma Q(s_{t+1}, a') - Q(s_t, a_t),
\end{equation}
in which $s_{t+1}$ is the next state after taking action $a_t$ at time $t$, and action $a'$ is selected according to the agent's current \mbox{$\epsilon$-greedy} policy from state $s_{t+1}$. The \ac{td} error indicates how much the current value estimate deviates from the bootstrapped target, i.e., the received reward plus the discounted value estimate of the successor state-action pair. The value function is then updated as
\begin{equation}
Q(s, a) \leftarrow Q(s,a) + \rho \delta_t E_t(s, a),~\forall s,a,
\end{equation}
where $\rho$ is the learning rate. The agent then selects and takes action $a_{t+1}$ according to the $\epsilon$-greedy policy based on the updated value functions from state $s_{t + 1}$, and proceeds to the next step.
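One training step of the update rules above can be sketched as follows (a minimal tabular implementation with our own names; $Q$ and $E$ are dictionaries keyed by (state, action), defaulting to zero):

```python
def sarsa_lambda_step(Q, E, s, a, r, s_next, a_next, gamma, lam, rho, terminal):
    """One tabular SARSA(lambda) step with accumulating eligibility traces."""
    q_next = 0.0 if terminal else Q.get((s_next, a_next), 0.0)
    delta = r + gamma * q_next - Q.get((s, a), 0.0)   # TD error
    for key in list(E):                               # E_t = gamma*lam*E_{t-1} ...
        E[key] *= gamma * lam
    E[(s, a)] = E.get((s, a), 0.0) + 1.0              # ... + indicator(s_t, a_t)
    for key, trace in E.items():
        Q[key] = Q.get(key, 0.0) + rho * delta * trace  # Q <- Q + rho*delta*E
    return delta
```

For instance, on a terminal step with $r = -1$, $\gamma = 1$ and $\rho = 0.1$ starting from zero estimates, the TD error is $-1$ and the visited pair's value moves to $-0.1$.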
\subsection{Equivalence to Polar Code Construction Problem}
The value function of the state-action pair $(s,a)$, by definition, is the agent's expected return after taking action $a$ at state $s$. The reward generating process described in Section~\ref{subsection:reward} indicates that the expected return after $(s,a)$ is
\begin{equation}\nonumber
\begin{split}
\mathbb{E}[R] &= 0 \times \Pr(\mbox{correct codeword survives}) \\
&\quad + (-1) \times \Pr(\mbox{correct codeword dropped afterwards}) \\
&= - \Pr(\mbox{correct codeword dropped afterwards}) \\
&= - \Pr(\mbox{frame error} \mid \mbox{correct decoding up to state }s).
\end{split}
\end{equation}
Therefore, by learning the strategy that maximizes the expected return at the start state $s=(0,0)$, the agent is in fact learning to construct the polar code in the optimal way that minimizes the \ac{fer} under the \ac{scl}-Genie decoder.
Recall that as shown in Fig.~\ref{fig:genie}, the \ac{fer} of $\mathcal{P}(N, K)$ under the \ac{scl}-Genie decoder is almost identical to the \ac{fer} of $\mathcal{P}(N, A+P)$ under the \ac{cscl} decoder when $K=A+P$. Therefore, the learned code construction from the game is nearly optimal for $\mathcal{P}(N, A+P)$ under the \ac{cscl} decoder.
\iffalse
\begin{figure}[t!]
\centering
\includegraphics[width=.8\linewidth]{Figures/value_fun.eps}
\caption{Value function along the selected path for the $\mathcal{P}(16, 8)$ code construction.}
\label{fig:value_fun}
\end{figure}
\fi
\section{Simulation Results}
\label{sec:simu}
To illustrate the performance of the proposed game-based polar-code construction method, the game's learned code constructions are evaluated under either the pure \ac{scl} decoder or the \ac{cscl} decoder. For each evaluation case, the number of simulated transmissions is such that the number of observed frame errors is at least $500$. The \ac{fer} performance of the learned code constructions is compared to the constructions given by the method in \cite{tal_construction}. In particular, since the polar-code constructions depend on the channel condition, the reported \ac{fer} performance uses the code construction that is either designed (for the method in \cite{tal_construction}) or trained (for the game-based method) at the given \ac{snr} level.
The parameters of the SARSA$(\lambda)$ algorithm are selected as follows. The discount rate is $\gamma = 1$, since the agent only cares about the \ac{fer} at the end of the game, and having the \ac{scl} decoder drop the correct codeword at any step matters equally to the agent. The eligibility decay factor is $\lambda = 0.3$ for constructing the $\mathcal{P}(16,8)$ code, and $\lambda = 0.75$ for constructing the $\mathcal{P}(128,56+8)$ code.
\iffalse
$\lambda = 0$ for the \ac{sc} decoder; $\lambda = 0.3$ for the \ac{scl} and \ac{cscl} decoders with a list size of $2$; and $\lambda = 0.5$ for the \ac{scl} and \ac{cscl} decoders with a list size of $4$. The parameter $\lambda$ increases with the list size because it indicates how much the agent penalizes the historical actions for the current reward or penalty. For the \ac{sc} decoder, dropping the correct codeword at step $k$ is independent of how the previous $k-1$ actions are selected. For the \ac{scl} decoder, however, dropping the correct codeword at step $k$ indicates that there are already many decoding paths that are evolving in parallel, and some of them are competitive with the correct one. Therefore, the previous actions need to be penalized for this situation.
\fi
\subsection{\ac{fer} performance}
\begin{figure}[t!]
\centering
\includegraphics[width=.8\linewidth]{Figures/16_8_2_code_sc_scl.eps}
\caption{Comparison of \ac{fer} for $\mathcal{P}(16,8)$ code under \ac{sc} and pure \ac{scl} decoder.}
\label{fig:16_8_code_pure}
\end{figure}%
\begin{figure}[t!]
\centering
\begin{subfigure}{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{Figures/polar_maze_16_8_1.eps}
\caption{}
\label{fig:16_8_1_maze}
\end{subfigure}
\begin{subfigure}{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{Figures/polar_maze_16_8_2.eps}
\caption{}
\label{fig:16_8_2_maze}
\end{subfigure}
\caption{Path selected for $\mathcal{P}(16, 8)$ code at ${\rm SNR}=0$ dB for (a) \ac{sc} decoder; (b) \ac{scl} decoder with a list size of 2.}
\vspace{-1em}
\end{figure}
Fig.~\ref{fig:16_8_code_pure} shows the \ac{fer} performance of the learned $\mathcal{P}(16,8)$ code compared to the standard construction given in \cite{tal_construction} under the \ac{sc} decoder as well as the pure \ac{scl} decoders with list sizes of $2$ and $4$. Under the \ac{sc} decoder, the game-based constructions at each evaluated \ac{snr} level match the construction given by the method in \cite{tal_construction}. Fig.~\ref{fig:16_8_1_maze} shows the learned policy of the polar construction game under the \ac{sc} decoder at ${\rm SNR} = 0$~dB. In particular, the arrow in each cell shows the best action when the agent is in that cell. The highlighted cells are the selected path from the start cell $(0,0)$ to the terminal cell $(8,8)$. It can be seen from the figure that the action ``move right'' is selected at steps $\{7,9,10,11,12,13,14,15\}$, which are known to be the most reliable $8$ bit-channels for a polar code with a codeword length of $16$.
When the \ac{scl} decoders are used, the game-based constructions do not match the constructions of \cite{tal_construction}. Fig.~\ref{fig:16_8_2_maze} shows the selected path under the \ac{scl}-Genie decoder with a list size of $2$ at ${\rm SNR} = 0$~dB. The corresponding selected nonfrozen positions in this case are $\{3,7,10,11,12,13,14,15\}$. The \ac{fer} of the learned code constructions under pure \ac{scl} decoders without \ac{crc} is slightly better than that of the method in \cite{tal_construction} at every evaluated \ac{snr} level. The observation that the agent learns better code constructions than the standard construction under \ac{scl} decoders verifies that ranking and picking the most reliable $K$ bit-channels is no longer optimal for \ac{scl} decoders.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Figures/128_60_code_new_new.eps}
\caption{\ac{fer} performance comparison of $\mathcal{P}(128, 60+4)$ with \ac{scl} decoding.}
\label{fig:128_56_code}
\vspace{-1em}
\end{figure}%
\iffalse
\begin{figure}[t!]
\centering
\includegraphics[width=.8\linewidth]{Figures/128_56_constructions.eps}
\caption{Learned polar-code constructions of $\mathcal{P}(128, 56+8)$ code at ${\rm SNR} = 4$~dB. On $y$-axis, $\mathcal{I}$ denotes nonfrozen positions, and $\mathcal{F}$ denotes frozen positions.}
\label{fig:128_56_construction}
\end{figure}
\fi
The proposed game-based construction's advantage becomes more apparent for long codewords and when the \ac{crc} is used together with the selection of nonfrozen bit positions. Fig.~\ref{fig:128_56_code} illustrates the performance of the learned $\mathcal{P}(128, 60+4)$ codes. The game-based constructions match the standard constructions in \cite{tal_construction} under \ac{sc} decoding. This is expected since the standard constructions in \cite{tal_construction} are optimized for SC decoding. For the \ac{scl} decoder with list size $2$, the game-based constructions achieve almost the same performance as the standard ones. However, when the list size of \ac{scl} decoders is increased to $4$ and $8$,
the game-based constructions outperform the ones in \cite{tal_construction} over the entire range of evaluated \ac{snr}. When the \ac{crc} protects the correct codeword in the \ac{scl} decoder's final candidate list, the game-based construction method shows a clear gain over the standard construction method. This is expected because the game optimizes the probability of keeping the correct codeword in the decoding list. A reliable \ac{crc} ensures that the decoder actually finds the correct codeword with high probability as long as it is in the final list.
\iffalse
\textcolor{blue}{Fig.~\ref{fig:128_56_construction} compares the learned polar-code constructions of the $\mathcal{P}(128, 56+8)$ code for the \ac{sc} decoder and the \ac{scl} with a list size 2 at ${\rm SNR} = 4$ dB with the standard construction in \cite{tal_construction} at the same \ac{snr} level. The learned constructions }\hl{Need to talk about the new figure.}
\fi
\subsection{Efficiency of the Game-Based Construction Method}
Unlike the conventional construction methods that need to evaluate and rank the bit-channels, the proposed game-based construction method selects the combination of nonfrozen bit positions altogether, without explicitly estimating the bit error rate on each bit-channel. The advantage of selecting the combination jointly in the polar-code-construction game is twofold: (1) as shown in the \ac{fer} comparisons, selecting the nonfrozen bit positions according to their ranking is not the optimal method under \ac{scl} decoders; (2) it avoids the need to run large-volume Monte-Carlo simulations on every bit-channel, which is so far the most accurate way to obtain a reliable bit-channel ranking, especially for long codewords. These Monte-Carlo simulations can be prohibitively expensive because the bit error rates on the bit-channels can be very small and close to each other.
The SARSA$(\lambda)$ algorithm updates the value functions at every state along the selected path for each training sample fed into the system. By doing so, about $N$ value functions are updated with a single pass of one training sample. Most of the suboptimal paths are eliminated quickly at the beginning of the training, and the rest of the training distinguishes between several candidate paths that yield similar expected returns. The training process is highly efficient in terms of the number of training samples. To construct $\mathcal{P}(16, 8)$, only $2000$ samples are used during training, and each training sample is used only once. To construct the $\mathcal{P}(128, 60+4)$ codes evaluated in Fig.~\ref{fig:128_56_code}, the training needs no more than $200{,}000$ training samples, with only one pass of each sample, and the training is observed to usually converge before $80{,}000$ training samples have been fed into the system. As a comparison, the methods described in \cite{huang_AI} converge in about $10{,}000$ iterations but need to run Monte-Carlo simulations to estimate the \ac{fer} at each iteration, which requires at least $1/{\rm FER}$ samples. Therefore, the proposed game-based method is highly efficient compared to the methods in \cite{huang_AI} in terms of the number of training samples.
\section{Summary}
\label{sec:conc}
This paper formulated the polar-code construction problem for the \ac{scl} decoder as a maze-traversing game, in which the game's expected return indicates the \ac{fer} of the selected polar-code construction. The tabular \ac{rl} algorithm SARSA$(\lambda)$ was adopted to solve the game, and the inherent equivalence between the polar-code construction problem and the game was revealed. Simulation results showed that the game-based constructions matched the standard polar-code constructions under the \ac{sc} decoder for short codes. For short codes under the \ac{scl} decoder and longer codes under both the \ac{sc} and \ac{scl} decoders, the game-based method was able to find polar-code constructions that significantly outperform the standard constructions. Moreover, the game-based method is very efficient during training in terms of the number of required training samples. Future work includes (1) evaluating the effect of channel mismatch between training and evaluation; (2) using neural networks or other deep learning techniques to approximate the value function, improving the memory efficiency for the construction of longer codes; and (3) incorporating \ac{crc} verification in the reward process during training to further improve the constructions.
\section*{Acknowledgments}
This work is supported in part by ONR grant N00014-18-1-2191. S.~A.~Hashemi is supported by a Postdoctoral Fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC) and by Huawei.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Polar plumes are bright structures observed in the polar coronal holes of the Sun, which extend from the solar surface far into the corona, and with a geometry similar to open magnetic field lines (see, e.g., \citealt{DeForest_2001b}). Several studies published in the literature have established that plumes are relatively slowly expanding structures, characterized by higher densities and lower temperatures than the surrounding interplume plasma. The present status of knowledge about plumes has been reviewed by \cite{Wilhelm2011a} and \cite{Poletto2015a}; a comprehensive review of the physics of coronal holes and of their sub--structures is given by \cite{Cranmer2009a}.
The role of plumes and interplumes as contributors to the fast solar wind has been greatly debated, both structures being recognized in the past as possible fast wind sources \citep{Gabriel2003a, Gabriel2005a}. Some authors assign a marginal role to plumes, giving more importance to the interplume plasma as the origin of the fast wind. The analysis of observations from the SOHO/UVCS (SOlar and Heliospheric Observatory/UltraViolet Coronagraphic Spectrometer) and SOHO/SUMER (Solar UV Measurements of Emitted Radiation) spectrometers \citep{Kohl1997a, Cranmer1999a,wilhelm_k_1995a}, and the application of the Doppler Dimming technique (see, e.g., \citealt{Noci1987a}), allowed us to estimate the outflow speed in plumes and interplumes in the intermediate corona. From an analysis of UVCS observations at 1.70~R$_{\odot}$\, of heliocentric distance, \cite{Giordano2000a} concluded that plume data are well reproduced by a static model, while interplume plasma is dynamically expanding. Moreover, in \cite{Teriaca2003a}, the analysis of a combination of SUMER and UVCS data led to the determination of plume dynamics from 1.05 up to 2.0~R$_{\odot}$, and to the deduction that plumes are essentially static structures. From stereoscopic reconstructions based on STEREO/SECCHI (Solar TErrestrial RElations Observatory/Sun Earth Connection Coronal and Heliospheric Investigation) observations, \cite{Feng_2009a} investigated the 3D geometry of plumes and, combining it with SUMER data, from a Doppler analysis deduced a maximum outflow speed for the O~{\sc{vi}} ions on the order of 10~km/s, at distances not higher than 200~Mm from the solar limb, corresponding to about 1.3~R$_{\odot}$.
However, different authors came to different conclusions. For example, from Doppler shift measurements taken by Hinode/EIS (Extreme-ultraviolet Imaging Spectrometer), \cite{Fu2014a} deduced a steady acceleration in plumes at heights lower than 1.05~R$_{\odot}$, and concluded that their contribution to the fast wind must not be negligible. A scenario of dynamically expanding plasma in plumes is also implied by \cite{Raouafi2007a}, who found that the shape of the observed line profiles and the line intensities are better reproduced in plumes by a low-speed regime at low altitudes, increasing with height, and reaching interplume values above 3-4~R$_{\odot}$. For these reasons we could say that, up to now, a definite consensus on fast wind sources seems not to have been reached.
As we have seen, estimates of the outflow speed of plumes in the intermediate corona (see, e.g., \citealt{Wilhelm2011a}, \citealt{Fu2014a}) are scarce and sometimes in disagreement. Hence it is of interest to obtain additional measurements in this interval of heliocentric distances. A good opportunity is offered by the analysis of UVCS spectroscopic data, acquired during a campaign of observations of polar plumes in the north polar coronal hole carried out in 1996, during the minimum of solar activity (see Sect.~\ref{sec:data} for a detailed description). Although partial results from the data set considered in the present study have already been published in \cite{Giordano2000a}, that analysis was limited to data at 1.7~R$_{\odot}$. Hence the full range of altitudes still has to be explored, and our purpose is to extend the Doppler Dimming analysis over a larger range of heliocentric distances. Our approach is to find models for plumes and interplumes capable of reproducing the H~{\sc{i}}-Lyman $\alpha$ and O~{\sc{vi}} line intensities from UVCS observations.
The plume lifetime covers a time interval from hours to several days, and it has been pointed out that during their life plumes can disappear and then show up again at the same position \citep{Lamy_P_1997a, DeForest2001a}. Moreover, plumes show variability with time on timescales that depend on the observational spatial scales \citep{DeForest1997a}. In particular, plumes appear stationary for at least one day when observed at scales larger than $10^{\prime\prime}$ in the low corona.
In the present study, we assume that plumes are homogeneous, although over the past years evidence has grown suggesting that they might not be. This has been demonstrated, for example, by \cite{Woo_r_2006a}, who showed how the structures observable in coronal holes have a filamentary aspect on spatial scales down to $\sim 31$ km.
We will consider possible effects of plume variability with time and inhomogeneity in space, when discussing the results from our analysis of the data in the present paper. Our observations have been acquired during an overall time interval of a couple of days, and the observing time at a single altitude in the corona is on the order of a few hours (see Sect. \ref{sec:data}), leading to the problem of plume identification when comparing results obtained at different heliocentric distances.
The paper is organized as follows: in Sect.~\ref{sec:data} we describe the data, and in Sect.~\ref{sec:analysis} we summarize the procedure by which we inferred the physical parameters of plumes and interplumes. Section~\ref{sec:results} describes the outcomes of the Doppler Dimming analysis, which are then discussed in Sect.~\ref{sec:discussion}, where a summary of the main results is given and conclusions are drawn.
\section{Data}
\label{sec:data}
\subsection{SoHO/UVCS data and data analysis}
\label{sec:uvcs_data}
We analyzed SoHO/UVCS (\citealt{Kohl1995a}) coronal spectra acquired above the north polar coronal hole over the time interval 6-9 April 1996, taken in the UVCS spectral channels of the O~{\sc{vi}} doublet lines at 1031.9 and 1037.6~\AA\ and the H~{\sc{i}}~Ly$\alpha$ line at 1215.7~\AA. The spectral and spatial information was binned over two-by-two detector pixels, corresponding to a spectral resolution of $0.2$~\AA\ and $0.28$~\AA\ per bin for the O~{\sc{vi}} and H~{\sc{i}}~Ly$\alpha$ channels, respectively, and to a spatial resolution of $14^{\prime\prime}$ per bin in both channels. In the plane of the sky, $14^{\prime\prime}$ correspond to about 0.014~R$_{\odot}$.
The coronal instantaneous field of view imaged onto the entrance slits of the O~{\sc{vi}} and H~{\sc{i}}~Ly$\alpha$ channels covered different heliocentric distances, from 1.38 to 2.53~R$_{\odot}$, along the radial through the north pole. We refer to the altitude in the corona of the instantaneous field of view as the slit position or altitude. Slit widths of $49~{\rm \mu m}$ were used in both channels to have a good spectral resolution and photon flux, corresponding to an instantaneous field of view $14^{\prime\prime}$ wide, normal to the slit direction.
The exposure time of each acquired frame was 600~s, over the total 72~hours of observations. Because of the faint emission from coronal holes, we limited our analysis to slit positions below 1.9~R$_{\odot}$, and increased the signal-to-noise ratio by grouping the observations of three nearby altitudes and summing the photon counts of the corresponding spectral and spatial bins, thus averaging the data over $42^{\prime\prime}$ in the direction perpendicular to the slit. A further spatial rebinning along the slit resulted in a spatial resolution of $28^{\prime\prime}$ per bin in that direction. The total exposure times, mean observation altitudes, and dates of observation of the resulting spectra are summarized in Table~\ref{tab:01}.
\begin{table}
\caption{Parameters of the analyzed spectral data. These were derived from UVCS observations of the plume study campaign in the period 6-9 April 1996.}
\centering
\begin{tabular}{l l l c c}
\hline\hline
ID & Exposures & $r_{mirror}$ & Total exp. time & Date \\
~ & ~ & $(R_\odot)$ & (s) & ~ \\
\hline
\hline
0 & 0-19 & 1.375 & 12000 & 6 Apr.\\
1 & 20-44 & 1.442 & 15000 & 6 Apr. \\
2 & 45-74 & 1.524 & 18000 & 6 Apr. \\
3 & 75-109 & 1.620 & 21000 & 6 Apr. \\
4 & 110-149 & 1.730 & 24000 & 7 Apr. \\
5 & 150-194 & 1.855 & 27000 & 7 Apr. \\
\hline
\end{tabular}
\label{tab:01}
\end{table}
We studied the spectral lines of the O~{\sc{vi}} doublet and the redundant H~{\sc{i}}~Ly$\alpha$, which are recorded by the O~{\sc{vi}} detector and cover the same instantaneous field of view in the corona. Data were analyzed using the UVCS Data Analysis Software, which performs the wavelength and radiometric calibration. The UVCS radiometric calibration and its evolution in time are discussed by \cite{Gardner2002a}. The stray-light contribution has been subtracted according to the results obtained by \cite{tesi_phd_silvio}.
\subsection{Detector bias response correction}
The identification of plumes and interplumes in UVCS data could be affected by instrumental biases introduced by the detector. We constructed a flat field describing the uneven response of the detector by averaging a large number of coronal spectra at different latitudes and distances, on the hypothesis that individual coronal structures are smoothed out by the sum of many different and unrelated contributions. Suitable observations are those of the synoptic program, which consists of observations routinely covering the intermediate corona at different position angles and heliocentric distances. No synoptic run was taken during our observations, hence we had to rely on the synoptic data acquired during the three days preceding them. The resulting intensity profiles along the slit for the two UVCS spectroscopic channels have been corrected for the intensity decrease of the corona within the instantaneous field of view, where different heliocentric distances are sampled; this correction was done by normalizing each profile to a spatially smoothed copy of itself.
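The flat-field construction described above (averaging many unrelated profiles and then normalizing by a spatially smoothed copy) can be sketched as follows. The profiles here are synthetic stand-ins for the UVCS synoptic spectra, and the ripple amplitude and period are only illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_spectra = 60, 200

# Hypothetical fixed-pattern detector response to be recovered: a ~10%
# ripple with a period of ~9 pixels, similar to what was found for the
# O VI channel (illustrative numbers only).
true_flat = 1.0 + 0.10 * np.sin(2 * np.pi * np.arange(n_bins) / 9.0)

# Synthetic "synoptic" profiles: smooth large-scale gradients times
# unrelated coronal structures, all multiplied by the detector response.
spectra = []
for _ in range(n_spectra):
    gradient = np.linspace(rng.uniform(0.5, 2.0), rng.uniform(0.5, 2.0), n_bins)
    structure = np.clip(1.0 + 0.3 * rng.standard_normal(n_bins), 0.1, None)
    spectra.append(gradient * structure * true_flat)
spectra = np.array(spectra)

# Averaging many unrelated profiles smooths out individual structures;
# dividing by a spatially smoothed copy removes the large-scale gradient,
# leaving the small-scale detector ripple as the flat field estimate.
mean_profile = spectra.mean(axis=0)
smooth = np.convolve(mean_profile, np.ones(9) / 9.0, mode="same")
flat = mean_profile / smooth

# Any observed profile is then corrected by dividing out the flat field.
corrected = spectra[0] / flat
```

Away from the convolution edges, `flat` tracks the injected ripple closely, because a boxcar as wide as the ripple period averages the ripple out of the smoothed reference.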
The analysis described above revealed a periodic spatial variation in the response of the detector of the O~{\sc{vi}} channel, with an amplitude of about 10\% and a period of eight to ten pixels, as shown in Fig.~\ref{fig:uvcs_flat}, in the two parts where the primary and redundant spectra are sampled.
\begin{figure}
\centering
\includegraphics[width=4.5cm]{ff_OVI.eps}\includegraphics[width=4.5cm]{ff_lyaR.eps}\\
\includegraphics[width=4.5cm]{plumes_lya_radial_corr.eps}
\caption{Flat field response of the O~{\sc{vi}} channel along the UVCS slit in the O~{\sc{vi}} doublet (top-left panel) and redundant H~{\sc{i}}~Ly$\alpha$ (top-right panel) spectral intervals. An example of bias-corrected data is shown in the bottom panel, where the original data are represented by a black curve and the corrected data by a red curve. In this figure only, for graphical reasons, the data corrected for the detector flat field response have also been normalized to the value measured at the pole, in order to account for the coronal intensity decrease when moving from the slit center to parts of the corona at larger distances (see text for a description of the correction).}
\label{fig:uvcs_flat}
\end{figure}
The intensity profiles along the slit after correction for the detector bias are shown in Fig. \ref{fig:uvcs_data} in the O~{\sc{vi}} 1031.9, 1037.6~\AA, and H~{\sc{i}}~Ly$\alpha$ 1215.7~\AA\, lines at six heliocentric distances.
\begin{figure*}
\centering
\includegraphics[width=5.5cm]{plumes_cut137.eps}
\includegraphics[width=5.5cm]{plumes_cut144.eps}
\includegraphics[width=5.5cm]{plumes_cut152.eps}
\includegraphics[width=5.5cm]{plumes_cut162.eps}
\includegraphics[width=5.5cm]{plumes_cut173.eps}
\includegraphics[width=5.5cm]{plumes_cut185.eps}
\caption{Intensity profiles in the north polar coronal hole, along the UVCS slit, for six heliocentric distances. Intensity profiles of the redundant H~{\sc{i}}~Ly$\alpha$ are shown as black solid lines. The O~{\sc{vi}} 1031.9 and 1037.6~\AA\ doublet intensities are represented by red and blue solid lines, respectively. We limited our data sample to the observations performed on 6 and 7 April, with the slit center covering heliocentric distances up to 1.85~R$_{\odot}$. The abscissa reports the polar angle in degrees (upper axis), measured counterclockwise from the north pole, and the corresponding detector bins (bottom axis).}
\label{fig:uvcs_data}
\end{figure*}
\subsection{SoHO LASCO-C2 and EIT context images}
\label{sec:context}
We used observations obtained by the SoHO instruments LASCO (Large Angle and Spectrometric COronagraph) C2 coronagraph (\citealt{Brueckner1995a}), and EIT 195\AA\ (Extreme ultraviolet Imaging Telescope; \citealt{Delaboudiniere1995a}) as context data, helping to discriminate the counterparts of plumes and interplumes in the UVCS spectra.
The three instruments have different fields of view, hence a single plume or interplume must be identified at different coronal altitudes, where it is imaged at different latitudes. Our starting point for plume identification is the occurrence of elongated bright emission regions in the polar caps in LASCO/C2 images. We assumed that plumes follow the profiles of the local magnetic field lines emerging from the coronal hole, and identified plumes across the field of view of the three instruments by tracing back model magnetic field lines from LASCO/C2 to the UVCS and EIT fields of view (see Section \ref{sec:ident}).
\subsection{Identification of plumes and interplumes}
\label{sec:ident}
The UVCS intensity profiles of Figure \ref{fig:uvcs_data} show a system of four candidate plume and interplume regions, which can be identified both in O~{\sc{vi}} 1031.9~\AA\ and in H~{\sc{i}}~Ly$\alpha$ 1215.7~\AA\ within roughly $\pm 14$~deg of the radial through the north pole. The positions of these candidates in the field of view of UVCS are listed in Table~\ref{tab:02}; the four plumes are labeled PL0, PL1, PL2, and PL3, and the interplumes IP0, IP1, IP2, and IP3. Radial and latitudinal positions of each identified structure are given at six different altitudes, $h$, of the UVCS slit center in the corona.
We adopted the magnetic field model proposed by \cite{Banaszkiewicz1998a} to reproduce the plume profiles across the fields of view of LASCO, UVCS, and EIT. This model consists of an axisymmetric combination of a dipole and a quadrupole, with an azimuthal current sheet in the equatorial plane. Different geometries can be explored by changing the parameter $Q$ (see Eqs. 1 and 2 in \citealt{Banaszkiewicz1998a}), which controls the topology of the magnetic field lines: for $Q=0$ the model describes a pure dipole plus a current sheet, while for $Q>0$ a quadrupole contribution is added, increasing with $Q$.
The magnetic field lines were fitted to the plume profiles by minimizing the mean plane-of-sky distance between a sample of coordinate points along the observed plume profiles, derived from the three instruments above, and a model field line, obtained by varying the latitude of the footpoint and the $Q$ parameter. The results of the fit are summarized in Table \ref{tab:03}. The typical mean distance of the modeled field lines from the plume profiles turned out to be on the order of $24^{\prime\prime}$ on the plane of the sky. We obtained values of the $Q$ parameter, and hence of the magnetic field geometry, ranging from a configuration characterized by a pure dipole plus a current sheet to configurations with an additional quadrupole contribution. The spread in the fitted values of $Q$, together with the small number of plumes analyzed, prevents us from inferring precise information about the real magnetic field geometry in the coronal hole on the basis of the model by \cite{Banaszkiewicz1998a}. We hypothesize that the dispersion of the fitted values of $Q$ could be ascribed to a projection effect: different longitude angles of the plume footpoints imply different projections on the plane of the sky of profiles that are otherwise similar. The fits to the plume profiles in LASCO/C2, UVCS, and EIT data are illustrated in Figures \ref{fig:eit_uvcs_lasco}a and b, showing two composite images from the three instruments, taken on 6 and 7 April 1996.
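The fitting scheme, minimizing the mean plane-of-sky distance of observed plume points to a two-parameter model field line, can be illustrated with a toy curve standing in for the Banaszkiewicz field lines. The curve geometry below is a placeholder, not the actual model:

```python
import numpy as np
from scipy.optimize import minimize

def model_curve(lat0_deg, q):
    """Placeholder 'field line' on the plane of the sky: a line from a
    footpoint at polar angle lat0_deg whose angle drifts with distance at
    a rate q. This is NOT the Banaszkiewicz model, just a two-parameter
    stand-in playing the same role as (footpoint latitude, Q)."""
    s = np.linspace(1.0, 2.5, 40)              # heliocentric distance, Rsun
    ang = np.deg2rad(lat0_deg) + q * (s - 1.0)
    return np.column_stack([s * np.sin(ang), s * np.cos(ang)])

def mean_distance(params, pts):
    """Mean plane-of-sky distance of the observed points from the curve."""
    curve = model_curve(*params)
    d = np.linalg.norm(pts[:, None, :] - curve[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Synthetic "observed" plume profile with known parameters plus jitter.
rng = np.random.default_rng(1)
pts = model_curve(8.0, 0.10)[::4] + 0.01 * rng.standard_normal((10, 2))

# Fit footpoint latitude and curvature by minimizing the mean distance.
res = minimize(mean_distance, x0=[5.0, 0.05], args=(pts,),
               method="Nelder-Mead")
fit_lat0, fit_q = res.x
```

With the real model, `model_curve` would be replaced by the field-line integrator, and the objective would be evaluated against points collected from the LASCO, UVCS, and EIT images.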
\begin{figure*}[t]
\includegraphics[width=8.5cm]{composite_06_polar.eps}
\includegraphics[width=8.5cm]{composite_07_polar.eps}
\caption{Composite UVCS, LASCO/C2, and EIT 195~\AA\ images of the solar corona, taken on 6 and 7 April (left and right panels, respectively). The plume identification has been made easier by stretching the contrast in the UVCS, LASCO, and EIT images and by mapping them in polar coordinates, as the spatial scales of the three instruments are different. We point out that the UVCS image in the O~{\sc{vi}}~1031.9~\AA\ line has been obtained by juxtaposing the intensity profiles along the slit, without interpolation, as the coronal fields covered by the observations turned out to be at contiguous altitudes. The fits of the magnetic field lines of the \cite{Banaszkiewicz1998a} model to the plumes we identified in the LASCO, UVCS, and EIT images (see Table \ref{tab:02}) are shown as dashed lines. In the left panel, plume PL0 appears to split into two components in the LASCO and EIT images, and the fitted field line of the east-side component is shown as a dotted line. In the UVCS data the PL0 east-side component has not been considered owing to the low signal.
}
\label{fig:eit_uvcs_lasco}
\end{figure*}
The positions of the plumes appear to be stable in time, and they can be recognized at the same latitudes during the two days of observations. Plume PL0 is an exception, as it tends to move away from the pole on 7 April and to split into two parts. From the EIT image we can clearly identify PL0, PL1, and PL3 in the low corona, and their positions are reasonably well traced back by the model magnetic field lines starting from the LASCO images. On 7 April, PL2 shows up as a faint structure, but it is still clearly identifiable in the LASCO images. On the contrary, PL3 strengthens and widens on 7 April. Interplume lanes are well defined during the two days of observations. Interplume IP2 becomes visible from slit position 3 onward, perhaps because of the appearance of a background plume within the IP1 lane.
\begin{table*}
\centering
\caption{Identification of plumes and interplumes, labeled PL and IP,
respectively, at different heliocentric distances, as observed in the
redundant Lyman~$\alpha$ line. For each region we report its extension in
spatial bins along the UVCS slit.
}
\begin{tabular}{c c c c c c c c c c}
\hline\hline
slit & h & PL0 & PL1 & PL2 & PL3 & IP0 & IP1 & IP2 & IP3 \\
\hline
\hline
0 & 1.375 & 15-18 & 21-25 & 30-35 & 38-43 & - & 26-28 & - & 36-37 \\
1 & 1.442 & 14-17 & 21-25 & 29-35 & 39-44 & 18-19 & 26-28 & - & 36-38 \\
2 & 1.524 & 14-17 & 20-24 & 30-36 & 40-46 & 18-19 & 26-27 & - & 37-38 \\
3 & 1.620 & 13-16 & 19-24 & 31-37 & 40-46 & 17-18 & 26-27 & 29-30 & 38-39 \\
4 & 1.730 & 4-10 & 17-23 & 34-37 & 41-46 & 12-14 & 26-31 & - & 38-39 \\
5 & 1.855 & 2-9 & 17-23 & 32-33 & 43-49 & 12-13 & 26-30 & - & 39-40 \\
\hline
\end{tabular}
\label{tab:02}
\end{table*}
The effect of solar rotation on the plume orientation can be considered negligible, particularly for the data acquired up to 2~R$_{\odot}$, which cover a time interval of 1.4 days from the first to the last exposure. However, polar plumes tend to appear in the same location, with a typical lifetime of about one day (\citealt{Lamy_P_1997a}, \citealt{DeForest2001a}); hence, when comparing observations of two successive days, we cannot know whether a plume that appears stable at a given position is actually the same plume or a recurrent one.
From the data in Table~\ref{tab:02} we can estimate the typical size of the observed plumes projected on the plane of the sky in the field of view of UVCS. At the coronal altitude of 1.4~R$_{\odot}$, plume widths span from four to seven spatial bins along the slit (the spatial bin is $28\arcsec$ wide along the slit), corresponding to a linear size perpendicular to the plume axis between $5\times 10^9~{\rm cm}$ and $1\times 10^{10}~{\rm cm}$.
\begin{table}
\caption{Footpoint latitudes and $Q$ parameters obtained by fitting the magnetic field model of \cite{Banaszkiewicz1998a} to the observed plume profiles from LASCO/C2, UVCS, and EIT data. The footpoint latitudes are given in degrees of polar angle. The values are given for the east and west sides of the four plumes we studied, and refer to 6 and 7 April 1996.}
\centering
\begin{tabular}{l r r r r}
\hline\hline
6 April, 1996 & ~ & ~ & ~ & ~\\
\hline
plume & ${\rm Lat_{east}}$ & ${\rm Q_{east}}$ & ${\rm Lat_{west}}$ & ${\rm Q_{west}}$ \\
~ & (deg) & ~ & (deg) & ~ \\
\hline
PL0 & 15.0 & 0.0 & 12.1 & 0.0 \\
PL1 & 8.9 & 0.5 & 5.1 & 1.2 \\
PL2 & 1.8 & 1.5 & -2.9 & 1.5 \\
PL3 & -4.6 & 1.5 & -8.3 & 0.3 \\
\hline
7 April, 1996 & ~ & ~ & ~ & ~\\
\hline
plume & ${\rm Lat_{east}}$ & ${\rm Q_{east}}$ & ${\rm Lat_{west}}$ & ${\rm Q_{west}}$ \\
~ & (deg) & ~ & (deg) & ~ \\
\hline
PL0 & 17.4 & 0.3 & 16.5 & 0.0 \\
PL1 & 10.8 & 0.0 & 5.8 & 0.0 \\
PL2 & 0.5 & 1.5 & -2.9 & 1.5 \\
PL3 & -5.0 & 1.5 & -7.0 & 1.5 \\
\hline
\end{tabular}
\label{tab:03}
\end{table}
The expansion of plumes with altitude in the corona can be described by means of their expansion factor, which is defined as
\begin{equation}
f(r,\theta)=\frac{A(r,\theta)}{A(R_\odot,\theta_0)}
\left({
\frac{R_\odot}{r}}\right)^2
,\end{equation}
\noindent where $A(r,\theta)$ is the cross section of the plume at distance $r$ from the Sun and latitude $\theta$, and $\theta_0$ is the latitude of the footpoint of the field line on the solar surface. From the requirement of magnetic flux conservation, $B(r,\theta)A(r,\theta)=B(R_\odot,\theta_0)A(R_\odot,\theta_0)$, we have
\begin{equation}
f(r,\theta)=
\frac{B(R_\odot,\theta_0)}
{B(r,\theta)}
\left({
\frac{R_\odot}{r}
}\right)^2
\label{eq:expansion}
,\end{equation}
\noindent where $B(r,\theta)$ is, in our case, the intensity of the magnetic field given by the model of \cite{Banaszkiewicz1998a}. An estimate of the expansion factor of plumes has been made, for example, by \cite{DeForest1997a}, who derived $f\sim 3$ at 4-5~R$_{\odot}$. The expansion factor predicted by the magnetic field model we adopted, at a heliocentric distance of 5~R$_{\odot}$, is $f=2.7$ for $Q=0$, in good agreement with the observations by \cite{DeForest1997a}; however, for $Q=1.5$ the predicted value is $f=8.1$.
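As a numerical illustration of Eq.~(\ref{eq:expansion}), the sketch below evaluates the expansion factor for a stand-in field profile, a pure on-axis dipole $B\propto r^{-3}$. This is not the Banaszkiewicz model: there, the current sheet keeps the field stronger at large distances, which is why the model predicts $f=2.7$ at 5~R$_{\odot}$ for $Q=0$ rather than the dipole value computed here:

```python
def expansion_factor(r_rsun, b_profile):
    """Super-radial expansion factor from magnetic flux conservation,
    f(r) = B(Rsun) / B(r) * (Rsun / r)**2, with r in solar radii."""
    return b_profile(1.0) / b_profile(r_rsun) * (1.0 / r_rsun) ** 2

# Stand-in field profile: pure on-axis dipole, B ~ r**-3 (NOT the
# dipole + quadrupole + current-sheet model used in the paper).
dipole = lambda r: r ** -3

# For a pure dipole the two factors combine to f(r) = r, so f = 5 at 5 Rsun.
f5 = expansion_factor(5.0, dipole)
```

Substituting the Banaszkiewicz field strength for `dipole` would reproduce the values quoted in the text as a function of $Q$.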
We can compare the typical size of plumes obtained from UVCS data with observations in the low corona (see, e.g., \citealt{Wilhelm2011a}, \citealt{Poletto2015a}, and references therein), which suggest a plume cross section of $\sim 30$~Mm. If we extrapolate this value by means of our model up to 1.4~R$_{\odot}$, we obtain $5\times10^9~{\rm cm}$, in agreement with the cross section of the plumes as seen by UVCS at this altitude.
\subsection{Filling factor of the plumes}
The filling factor of the plumes in the coronal hole we are studying, that is, the fraction of the coronal hole volume occupied by plumes, can be estimated under the assumptions that we are observing all the plumes present in the coronal hole, projected on the plane of the sky, and that they can be described as cylindrically shaped structures, as the only information we have is their appearance on the plane of the sky. We used the data in Table~\ref{tab:02} to evaluate the cross section of each plume and to calculate the total volume occupied by plumes, compared to the volume of the portion of the coronal hole we considered. We derived filling factors at different heliocentric distances, and the resulting values are shown in Figure~\ref{fig:filling}. The plume filling factors decrease with distance, from 0.12 at 1.375~R$_{\odot}$ to 0.07 at 2.194~R$_{\odot}$, and are consistent with the commonly accepted estimate of 0.1 (see \citealt{Wilhelm2011a}).
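The geometric estimate can be sketched as follows. The plume widths and coronal hole diameter below are hypothetical round numbers chosen for illustration, not the measured values of Table~\ref{tab:02}:

```python
import numpy as np

def plume_filling_factor(widths_bins, hole_diameter_bins):
    """Fraction of the coronal-hole cross section occupied by plumes,
    assuming cylindrical plumes of the observed projected widths inside
    a circular coronal-hole cross section (the simplifying assumption
    made in the text). All sizes are in slit bins; the bin size cancels
    out of the ratio."""
    widths = np.asarray(widths_bins, dtype=float)
    plume_area = np.pi * (widths / 2.0) ** 2
    hole_area = np.pi * (hole_diameter_bins / 2.0) ** 2
    return plume_area.sum() / hole_area

# Hypothetical example: four plumes, 4-6 bins wide, in a hole ~30 bins
# across; the result is of the same order as the measured filling factors.
ff = plume_filling_factor([4, 5, 6, 6], 30.0)
```

Repeating the calculation with the measured widths at each slit altitude yields the radial profile shown in Figure~\ref{fig:filling}.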
\begin{figure}[t]
\centering
\includegraphics[width=8.0cm]{filling.eps}
\caption{Fraction of coronal hole cross section covered by plumes, at
different UVCS slit altitudes.}
\label{fig:filling}
\end{figure}
A weakness of this method of evaluating the filling factor is that, owing to the integration along the line of sight, we do not know how many background or foreground plumes are hidden or unresolved. In this regard, we add that, on the basis of SoHO/EIT observations, \cite{Gabriel2009a} proposed the existence of two distinct classes of plumes: beam plumes, which have a cylindrical structure, and network plumes, which are curtain-shaped, become visible when they are aligned along the LOS, and whose extension can therefore be underestimated. We point out that the analysis by \cite{Gabriel2009a} refers to altitudes in the corona in the range $1.02-1.08~{\rm R_\odot}$, hence we do not know whether this distinction is relevant for our study. Another difficulty arises with the definition of the filling factor itself, as distinguishing between coronal plasma in plumes and in interplumes could arbitrarily simplify a more complex scenario of coronal structures not resolved in space and time, as demonstrated by \cite{Feldman_1997a}. However, counting visible plumes and measuring their size is a relatively straightforward approach, providing at least a lower limit to the contribution of plumes to the polar wind.
\subsection{Intensity ratio of O~{\sc{vi}} doublet}
\label{sec:doublet}
The intensity ratio of the O~{\sc{vi}} doublet, $R=I_{1032}/I_{1038}$, can be used to discriminate between low and high outflow speed regimes in the coronal plasma. A comprehensive description of the use of $R$ as a diagnostic tool can be found in \cite{Noci1987a}. Owing to the dimming of the radiative component of the O~{\sc{vi}} doublet lines, the value of $R$ decreases with increasing outflow speed: broadly, values of $R=3-4$ correspond to an almost static corona, while $R=2$ is attained for an outflow speed of about $100~{\rm km/s}$. Short descriptions of the formation of UV lines in the corona and of the Doppler Dimming effect are given in Sects.~\ref{sec:l_form} and \ref{sec:dd_analysis}, respectively. For our purposes, the intensity ratio $R$ can be considered independent of the choice of a particular coronal model, as it is only slightly affected by uncertainties in the chromospheric excitation radiation, the electron density, and the electron temperature.
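The qualitative behavior of $R$ can be reproduced with a toy model in which the collisional components of the doublet keep their 2:1 ratio, the radiative components keep their 4:1 ratio, and the latter are dimmed by an assumed Gaussian factor. All constants are illustrative, and the C~{\sc{ii}} pumping of the 1037.6~\AA\ line is neglected:

```python
import numpy as np

def doublet_ratio(w_kms, c_coll=1.0, r_rad0=1.0, w0=80.0):
    """Toy O VI doublet ratio R = I_1032 / I_1038: collisional parts in a
    2:1 ratio, radiative parts in a 4:1 ratio, with the radiative part
    dimmed by an assumed Gaussian factor of width w0 (the real dimming
    follows from the overlap of the coronal absorption profile with the
    chromospheric line; C II pumping is neglected here)."""
    rad = r_rad0 * np.exp(-(w_kms / w0) ** 2)
    return (2.0 * c_coll + 4.0 * rad) / (c_coll + rad)

r_static = doublet_ratio(0.0)    # near-static corona: R = 3 for equal C and R
r_fast = doublet_ratio(300.0)    # strong dimming: R tends to the collisional 2
```

Even this crude sketch shows why $R$ falls from the 3-4 range toward 2 as the outflow speed suppresses the resonantly scattered component.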
\begin{figure*}
\centering
\includegraphics[width=20.0cm]{ratio_multi.eps}
\caption{Intensity ratio of the O~{\sc{vi}} doublet lines for the plumes and interplumes identified in Table~\ref{tab:02}. The heliocentric distances of the data points are calculated where a plume or an interplume crosses the UVCS slit. The errors in the ratio depend on the intensity of the corona and on the exposure time, which is larger at high heliocentric distances (see Table~\ref{tab:01}).}
\label{fig:ratio}
\end{figure*}
The profiles of $R$ in the plumes and interplumes we identified in UVCS data are shown in Figure \ref{fig:ratio}. The four interplumes are characterized by a linear decrease of $R$ with distance, pointing to a constant plasma acceleration with altitude in the corona. The behavior of plumes is different: their profiles exhibit a less evident decrease with distance, which becomes more pronounced at coronal altitudes higher than 1.7-1.8~R$_{\odot}$, suggesting that plumes are characterized by a low outflow speed regime in the interval of distances 1.5-1.8~R$_{\odot}$, beyond which they undergo an increasing acceleration.
\subsection{Line profile analysis}
\label{sec:line_profiles}
We performed an analysis of the line profiles by fitting the observed spectra with a Voigt function, which includes the instrumental function, the slit contribution, and the intrinsic line width (see \citealt{tesi_phd_silvio}). The intrinsic line widths of the O~{\sc{vi}}~1031.9~\AA\ and H~{\sc{i}}~Ly$\alpha$ lines are shown as most probable speeds in Figures~\ref{fig:most_prob_speed}a and b. In the oxygen ion observations we can see a clear distinction between plume and interplume line widths, with the plumes attaining lower values on average, corresponding to lower kinetic temperatures in plumes than in interplumes, in agreement with \cite{Giordano2000a}, who found ion speed distributions broader in interplumes than in plumes. An exception is represented by plume PL1, which shows a radial profile similar to that of interplumes for distances higher than 1.6~R$_{\odot}$. The difference between plumes and interplumes is less evident in H~{\sc{i}}~Ly$\alpha$, whose line profiles appear almost indistinguishable within errors (see Figure \ref{fig:most_prob_speed}b). These results agree with what was expected on the basis of previous studies (see, e.g., \citealt{Wilhelm2011a}, \citealt{Poletto2015a}). In particular, \cite{Hassler1997a} showed that for heliocentric distances higher than 1.1-1.3~R$_{\odot}$ plumes may have kinetic temperatures lower than interplumes (see also \citealt{Kohl1997a}), whereas close to the coronal base the plume temperatures are higher than those of interplumes (see also \citealt{Walker_1993a}).
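The extraction of a most probable speed from a line width can be sketched with a pure Gaussian fit; the instrumental and slit contributions handled by the actual Voigt fit are omitted, and the synthetic line parameters are arbitrary:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light, km/s

def gauss(wl, amp, wl0, dl_e):
    """Gaussian line profile; dl_e is the 1/e half-width in Angstrom."""
    return amp * np.exp(-((wl - wl0) / dl_e) ** 2)

# Synthetic O VI 1031.9 A line with a known most probable speed of
# 150 km/s (Gaussian only: the instrumental function and slit
# contribution of the actual Voigt fit are not modeled here).
wl0_true, w_true = 1031.9, 150.0
dl_true = wl0_true * w_true / C_KMS      # 1/e half-width in Angstrom
wl = np.arange(1030.0, 1034.0, 0.1)      # ~0.1 A spectral sampling
rng = np.random.default_rng(2)
prof = gauss(wl, 1.0, wl0_true, dl_true) + 0.02 * rng.standard_normal(wl.size)

# Fit the profile and convert the 1/e half-width to a most probable
# speed: w = c * dl_e / wl0.
popt, _ = curve_fit(gauss, wl, prof, p0=[1.0, 1032.0, 0.3])
w_fit = C_KMS * abs(popt[2]) / popt[1]   # km/s
```

In the real analysis the same conversion is applied to the intrinsic width left after deconvolving the instrumental and slit contributions from the Voigt fit.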
\begin{figure*}
\centering
\includegraphics[width=16.0cm]{most_probable_speed.eps}
\caption{Line widths of O~{\sc{vi}} and H~{\sc{i}}~Ly$\alpha$, given as the most probable speed of the ions, that is, the speed at $1/e$ of the line profile height. The widths are corrected for the instrumental function and for the slit width.}
\label{fig:most_prob_speed}
\end{figure*}
\section{Data analysis}
\label{sec:analysis}
\subsection{Formation of emission lines in the intermediate corona}
\label{sec:l_form}
The coronal O~{\sc{vi}} doublet and H~{\sc{i}}~Ly$\alpha$ lines form by collisional excitation of ions and by resonance scattering of the radiation coming from the chromosphere. The emissivity of the collisional component, $j_{\rm coll}$, is a function of the electron density, $n_{\rm e}$, of the electron temperature, $T_{\rm e}$, through the ion fraction $R_{\rm ion}(T_{\rm e})=n(X^{+m})/n(X)$, and of the element abundance relative to hydrogen, $A_{\rm el}=n(X)/n_{\rm H}$. The emissivity of the resonantly scattered component, $j_{\rm res}$, also depends on the outflow speed of the plasma element in the corona, $w$, and on the absorption profile of the coronal ions, which is determined by the local ion kinetic temperature, $T_{\rm ion}$. The outflow speed is responsible for a reduction of the resonant scattering efficiency; hence, the radiative component of a coronal line can be used as a diagnostic tool, called Doppler dimming (DD) analysis, to infer $w$ (see, e.g., \citealt{Hyder1970a}, \citealt{Noci1987a}).
In the case of H~{\sc{i}}~Ly$\alpha$, resonant excitation accounts for 99\% of the total intensity (see \citealt{Gabriel1971a}). A detailed account of the formation mechanism of these lines can be found in \cite{Withbroe1982a}, \cite{Kohl1982a}, \cite{Noci1987a}, and \cite{Noci1999a}.
The total line intensity, $I_{\rm line}$, is the sum of the collisional and radiative contributions integrated over the frequency and along the line of sight:
\begin{eqnarray}
\nonumber
I_{\rm line}(n_{\rm e},w,A_{\rm el},T_{\rm e},T_{\rm ion})&=&I_{\rm coll}+I_{\rm res}\\
&=&\int_{\rm LOS}\int_\nu
\left({
j_{\rm coll}+j_{\rm res}
}\right)
d\nu dx
\label{eq:tot}
.\end{eqnarray}
\subsection{Doppler Dimming analysis}
\label{sec:dd_analysis}
We inferred the radial outflow speed and the electron density of the plasma in plumes and interplumes by means of the DD analysis applied to the O~{\sc{vi}} doublet and H~{\sc{i}}~Ly$\alpha$ lines. We make use of a method that separates the collisional and radiative components of the observed intensities, determining self-consistently the electron densities and the outflow speeds of the coronal plasma. In this section we only give a summary of the method; a detailed description can be found in \cite{Zangrilli2002a}.
The separation of the radiative and collisional components can be obtained from the inversion of Eq.~(\ref{eq:tot}), by writing the O~{\sc{vi}} doublet intensities $I_{1031.9}$ and $I_{1037.6}$, as
\begin{eqnarray}
\nonumber
I_{1031.9}
& = &
2{\cal C}(n_{\rm e})+4{\cal R}(n_{\rm e},w_{\rm OVI})\\
I_{1037.6}
& = &
{\cal C}(n_{\rm e})+{\cal R}(n_{\rm e},w_{\rm OVI})+{\cal P}(n_{\rm e},
w_{\rm OVI})
\label{sys:int}
,\end{eqnarray}
where ${\cal C}(n_{\rm e})$ is the intensity of the collisional component of the O~{\sc{vi}}~$\lambda$1037.6 line, and ${\cal R}(n_{\rm e},w_{\rm OVI})$ and ${\cal P}(n_{\rm e},w_{\rm OVI})$ are the radiative components of the O~{\sc{vi}}~$\lambda$1037.6 line, excited by the corresponding chromospheric O~{\sc{vi}} line and by the chromospheric C~{\sc{ii}} pumping lines, respectively. We note that ${\cal C}$ is a function of the electron density squared only, while ${\cal R}$ and ${\cal P}$ depend linearly on $n_{\rm e}$ and also on the outflow speed, $w$. The components ${\cal C}$, ${\cal R}$, and ${\cal P}$ are uniquely determined by a pair of values $\left({n_{\rm e},w_{\rm OVI}}\right)$ if all the other model parameters are known; hence the solution of the system of equations (\ref{sys:int}) is uniquely determined by the two observed quantities, $I_{1031.9}$ and $I_{1037.6}$. \cite{Zangrilli2002a} discuss the existence and uniqueness of the solutions.
Assuming that the H~{\sc{i}}~Ly$\alpha$ line is essentially formed by resonant scattering of the chromospheric radiation, and given the $n_{\rm e}$ derived from the DD analysis applied to the O~{\sc{vi}} doublet, we can then calculate the proton outflow speed.
The search for a solution of equations (\ref{sys:int}) starts from a first-guess coronal model; the predicted intensities ${\cal C}$, ${\cal R}$, and ${\cal P}$ are then used to derive new values of $n_{\rm e}$ and $w$, and the procedure is iterated until the observed intensities $I_{1031.9}$ and $I_{1037.6}$ are reproduced.
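A toy version of this inversion, with schematic scalings ${\cal C}\propto n_{\rm e}^2$ and ${\cal R}\propto n_{\rm e}\,D(w)$ standing in for the full line-of-sight integrals and with the ${\cal P}$ term neglected, shows how the two observed intensities fix the two unknowns. In this simplified form the inversion is even closed-form, whereas the real analysis must iterate; all constants are illustrative, not UVCS calibration values:

```python
import numpy as np

# Schematic scalings (illustrative constants): collisional component
# C = A * n_e**2, radiative component R = B * n_e * D(w), with an
# assumed Gaussian dimming factor D(w); the C II pumping term is omitted.
A, B, W0 = 1e-12, 1e-6, 80.0

def forward(n_e, w):
    """Toy O VI doublet intensities: I_1032 = 2C + 4R, I_1038 = C + R,
    mirroring the system in the text without the P term."""
    c = A * n_e ** 2
    r = B * n_e * np.exp(-(w / W0) ** 2)
    return 2.0 * c + 4.0 * r, c + r

def invert(i1032, i1038):
    """Separate the components, then recover (n_e, w) from C and R."""
    r = (i1032 - 2.0 * i1038) / 2.0   # radiative component of the 1038 line
    c = (4.0 * i1038 - i1032) / 2.0   # collisional component of the 1038 line
    n_e = np.sqrt(c / A)
    w = W0 * np.sqrt(np.log(B * n_e / r))
    return n_e, w

# Round trip: the pair of observed intensities determines both unknowns.
ne_rec, w_rec = invert(*forward(1.0e6, 60.0))  # hypothetical cm^-3, km/s
```

Because the real ${\cal C}$, ${\cal R}$, and ${\cal P}$ are integrals over the line of sight, the actual procedure replaces this algebra with the iteration described above, but the counting argument (two observables, two unknowns) is the same.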
We briefly recall the essential physical parameters of the coronal model needed for the DD analysis. The intensities of the UV coronal lines depend primarily on the plasma density and outflow speed, which we find by means of the DD analysis, and on the element abundance, which we determine from the constraint of mass flux conservation (see Sect.~\ref{sec:mass_flux}). A somewhat secondary role is played by the electron temperature. We assume the electron temperature radial profile of \cite{Cranmer1999a}, which is typical of a polar coronal hole. A further assumption is that the oxygen ion populations in coronal holes freeze in at an electron temperature of $T_{\rm e}=1.1~{\rm MK}$, as demonstrated by \cite{von_Steiger2000a} (see also \citealt{Ko1999b}). In our model, the electron temperature value $T_{\rm e}=1.1~{\rm MK}$ is attained at 2.35~R$_{\odot}$, hence for higher heliocentric distances the O~{\sc{vi}} abundance is frozen in. Neutral hydrogen is not assumed to be frozen in.
A summary of plume and interplume measurements of $T_{\rm e}$ is given by \cite{Wilhelm2011a} (see Table 4 therein) for heliocentric distances up to 1.13~R$_{\odot}$, from which we deduce that typical electron temperatures of plumes are about 0.8~MK, almost constant with height, while those of interplumes tend to increase with distance and to be 10 to 30\% higher than in plumes. Unfortunately, we have no information at altitudes higher than 1.13~R$_{\odot}$. Given the uncertainties in the determinations of $T_{\rm e}$ and the lack of information in the interval of distances covered by the UVCS observations, we chose not to distinguish between plume and interplume electron temperatures, and the radial profile we adopted is representative of the mean conditions in a polar coronal hole; we will discuss the impact of this choice on the results. The ion kinetic temperature radial profiles included in the model are obtained by fitting the most probable speeds of the line profiles, as discussed in Sect.~\ref{sec:line_profiles}.
The value of the solar chromospheric intensity of H~{\sc{i}}~Ly$\alpha$ has been taken from the UARS/SOLSTICE (Upper Atmosphere Research Satellite/SOLar STellar Irradiance Comparison Experiment) measurements (see \citealt{Woods2000a}). In the case of the O~{\sc{vi}} doublet lines there are no routine measurements, hence we used the values of 387.0 and $199.5~{\rm erg~s^{-1}~cm^{-2}~sr^{-1}}$ for the 1031.9 and 1037.6~\AA\ lines, respectively, measured from UVCS disk observations taken on 4 December 1996 (see \citealt{Zangrilli2002a}). The adopted chromospheric line widths are taken from \cite{Warren1997a}.
The degree of anisotropy of the ion kinetic temperatures along and across the magnetic field direction in the intermediate corona has been debated in the literature since the beginning of the SoHO mission. The profile widths of the oxygen lines observed by UVCS, larger than previously expected, have been interpreted as strong evidence for preferential ion heating perpendicular to the magnetic field direction (see, e.g., \citealt{Kohl1997a}). A different explanation for the observed large profiles has been proposed by \cite{Raouafi2004b} and \cite{Raouafi2006a}. The objections raised by \cite{Raouafi2006a} have been discussed by \cite{Cranmer2008a}, who concluded that the most probable temperature anisotropy ratios correspond to $T_{i\perp}/T_{i\parallel}\approx 6$; we thus adopt this value in our coronal model.
\subsection{Mass flux conservation}
\label{sec:mass_flux}
From the DD analysis (see Section \ref{sec:dd_analysis}) we derived the plasma outflow speeds and electron densities of plumes at different heliocentric distances. Given a geometrical configuration of the polar plumes, these results should satisfy the constraint of mass flux conservation for protons and oxygen ions separately. In general, the mass flux conservation for a generic atom in the corona can be written as
\begin{equation}
w(r,\theta)=\frac{F(1~{\rm AU},\theta)\,(215R_\odot)^2}{n(r,\theta)\,r^2}
\frac{f(1~{\rm AU},\theta)}{f(r,\theta)}
\label{eq:mass_cons}
,\end{equation}
where $w(r,\theta)$ and $n(r,\theta)$ are the atom outflow speed and density at distance $r$ and latitude $\theta$, respectively, $F(1~{\rm AU},\theta)$ is the atom mass flux measured at $1~{\rm AU}$, which is assumed as a boundary condition, and $f(r,\theta)$ is the expansion factor, which describes the super-radial expansion of the plumes. A commonly accepted value for the fast wind proton flux is $F(1~{\rm AU},\theta)\equiv F_p(1~{\rm AU})=2\times 10^8~{\rm cm^{-2}~s^{-1}}$, although higher values are also given in the literature (see \citealt{Goldstein1996a}, \citealt{McComas2000a}); we implicitly assume that this flux depends only on the radial coordinate $r$. The outflow speed of oxygen atoms at $1~{\rm AU}$ turns out to be about 5\%-10\% higher than that of protons (e.g., Hefti et al. 1998). If we assume that, for our purposes, the two speeds are equal at $1~{\rm AU}$, the condition of mass conservation (\ref{eq:mass_cons}) for oxygen and protons yields
\begin{equation}
w_{O^{+5}}(r,\theta)=w_p(r,\theta)\frac{Abb_{oxy}(1~{\rm AU})}{Abb_{oxy}(r,\theta)}
\label{eq:oxygen_cons}
,\end{equation}
where $Abb_{oxy}=n_{oxy}/n_p$ is the abundance of oxygen with respect to hydrogen. Condition (\ref{eq:oxygen_cons}) does not depend upon knowledge of the expansion factor and ensures the mass flux conservation of oxygen atoms in the wind, provided that the proton mass flux is conserved. The latter requirement can be verified after the DD analysis (see Section \ref{sec:results_geom}).
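The two conservation relations above can be sketched numerically. The following minimal illustration (not part of the paper's actual pipeline) assumes $n$ in ${\rm cm^{-3}}$, $r$ in units of R$_\odot$, and speeds in ${\rm km~s^{-1}}$; the example density and abundance values in the usage lines are hypothetical.

```python
# Sketch of Eq. (mass_cons) and Eq. (oxygen_cons); illustrative only.

F_P_1AU = 2e8  # proton flux at 1 AU [cm^-2 s^-1], value quoted in the text


def outflow_speed(n, r, f_ratio=1.0):
    """Eq. (mass_cons): w = F(1 AU) (215 R_sun)^2 / (n r^2) * f(1 AU)/f(r).

    n : local number density [cm^-3]; r : heliocentric distance [R_sun];
    f_ratio : f(1 AU)/f(r), the inverse super-radial expansion factor
    (f_ratio = 1 corresponds to purely radial expansion).
    Returns the outflow speed in km/s.
    """
    w_cm_s = F_P_1AU * (215.0 / r) ** 2 / n * f_ratio
    return w_cm_s * 1e-5  # cm/s -> km/s


def oxygen_speed(w_p, abb_1au, abb_local):
    """Eq. (oxygen_cons): w_O = w_p * Abb_oxy(1 AU) / Abb_oxy(r, theta)."""
    return w_p * abb_1au / abb_local


# Hypothetical example: n_p = 1e5 cm^-3 at r = 1.7 R_sun, radial expansion,
# and a (made-up) factor-of-5 local oxygen abundance enhancement.
w_p = outflow_speed(1e5, 1.7)
w_o = oxygen_speed(w_p, abb_1au=1.0e-4, abb_local=5.0e-4)
```

Note that, as stated in the text, the oxygen relation needs only the abundance ratio, not the expansion factor.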
Knowledge of the plume geometry, described by the expansion factor $f(r,\theta)$, does not enter the DD analysis; the plume profiles derived from LASCO images, and traced back to the UVCS and EIT fields of view by means of the \cite{Banaszkiewicz1998a} model, have been used to identify the plumes in UVCS data (see Section \ref{sec:ident}).
\subsection{Distribution of plumes and interplumes along the line of sight}
\label{ssec:ipl_pl_models}
The intensities of the UV spectral lines we derived from the UVCS observations are the sum of all coronal contributions along the LOS. When observing plumes, we therefore measure the sum of contributions coming from both plumes and interplumes. We first derive an interplume model, based on the hypothesis that in the observed interplume regions the possible contribution of background plumes is negligible, and we identify the interplume plasma with the coronal hole medium surrounding the plumes, assuming spherical symmetry. We then model plumes by confining the emitting plasma within a single cylindrically shaped structure located in the plane of the sky and embedded in the interplume medium.
\section{Results}
\label{sec:results}
\subsection{Plasma outflow speeds and electron densities}
\label{sec:results_dd}
The results obtained from the DD analysis described in Sect.~\ref{sec:analysis} are illustrated in Figs.~\ref{fig:ip_ne} and \ref{fig:cons}. The main error source in the derived electron densities and outflow speeds is the statistical uncertainty in the measured intensities of the O~{\sc{vi}} doublet lines, which can be estimated by means of Poissonian statistics on the integrated counts at detector level. In the case of H~{\sc{i}}~Lyman~$\alpha$ the error given by Poissonian statistics is negligible.
Further uncertainty is related to the radial profile of the electron temperature in our model. The derivation of the O~{\sc{vi}} outflow speed by means of the DD technique mainly depends on the intensity ratio of the two O~{\sc{vi}} doublet lines, which is mostly independent of the electron temperature. However, the derived electron densities depend on $T_{\rm e}$ through the ionized fraction $R_{\rm ion}(T_{\rm e})=n(X^{+m})/n(X)$: the resonant component of the coronal UV line intensities is proportional to the product $R_{\rm ion}(T_{\rm e})A_{\rm el}n_{\rm e}$, and the collisional component to $R_{\rm ion}(T_{\rm e})A_{\rm el}n_{\rm e}^2$, where $A_{\rm el}$ is the element abundance. A rough idea of the effect of our choice for $T_{\rm e}$ can be obtained by assuming $T_{\rm e}=0.8~{\rm MK}$ as a lower limit for the electron temperature in plumes, which results in a decrease in the derived electron density of about 30\%. This is because at those temperatures the ionization balance is a decreasing function of temperature, both for O\textsuperscript{+5} and for protons. For the same reason, the uncertainty in $T_{\rm e}$ does not much affect the calculation of the proton outflow speed, as a variation in the electron density is compensated for by an opposite variation in the ionized fraction.
The electron densities in interplumes derived from the DD analysis are shown in Figure~\ref{fig:ip_ne}, where they are compared with the fitted interplume density profiles given by \cite{Doyle1999b} and \cite{Esser1999a}. To establish a model for the interplumes, we fitted the derived electron densities with the following power law:
\begin{equation}
n_{\rm e}(r)=1\times 10^5\times\left({\frac{435.98}{r^{7.31}}+\frac{2.50}{r^{2.22}}}\right)
\label{eq:neipfit}
.\end{equation}
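For convenience, the fitted profile can be evaluated directly. A minimal sketch follows, assuming, as the comparison with the earlier interplume profiles suggests, $r$ in units of R$_\odot$ and $n_{\rm e}$ in ${\rm cm^{-3}}$ (the units are not stated explicitly with the fit):

```python
# Sketch: evaluate the interplume electron-density fit of Eq. (neipfit).
# Assumed units: r in solar radii, n_e in cm^-3.


def ne_interplume(r):
    """n_e(r) = 1e5 * (435.98 / r^7.31 + 2.50 / r^2.22)."""
    return 1e5 * (435.98 / r**7.31 + 2.50 / r**2.22)
```

The steep first term dominates close to the Sun, while the shallower second term controls the decline at larger heliocentric distances.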
\begin{figure}[t]
\centering
\includegraphics[width=8.0cm]{ne_ip.eps}
\caption{Electron density of interplume plasma from UVCS data. A comparison is made with $n_{\rm e}$ radial profiles taken from previous works.}
\label{fig:ip_ne}
\end{figure}
In the DD analysis of plumes, a combined plume-interplume emissivity along the LOS has been considered, where a single plume in the plane of the sky is embedded in the interplume medium (see Section \ref{ssec:ipl_pl_models}). The electron densities and outflow speeds for protons and O\textsuperscript{+5} ions are shown in Figs.~\ref{fig:density_multi} and \ref{fig:ovi_speed_multi}, where the first row shows the results for interplumes and the second row those for plumes. We immediately see that plumes are substantially denser than interplumes: the density contrast turns out to be about 3-5, depending on the heliocentric distance, as expected on the basis of previous studies (see \citealt{Wilhelm2011a}, \citealt{Poletto2015a}).
As far as the outflow speeds are concerned, the basic result is that plumes have non-negligible outflow speeds, within the estimated errors; hence they cannot be considered static structures, at least at distances beyond 1.6R$_{\odot}$. The speed profiles of O~{\sc{vi}} ions in PL0 and PL1 are similar to the profiles typical of interplumes, while those of PL2 and PL3 differ, showing a lower acceleration with distance. In general, both in plumes and interplumes the proton speed profiles attain lower values than those of O~{\sc{vi}} ions. The considerations drawn above for O~{\sc{vi}} when comparing plumes and interplumes also apply to protons.
We stress that the electron density values derived from the DD analysis must be considered mean values, since plumes show time variability on timescales shorter than the integration times of our UVCS data, and substructures on spatial scales well below the spatial resolution of UVCS (see Section \ref{sec:intro}). However, the outflow speeds of O~{\sc{vi}} ions derived from the DD analysis are not sensitive to uncertainties in the absolute value of the electron density, as they mainly rely on the intensity ratio of doublet lines from the same ion. The case of neutral hydrogen is different, and the derived outflow speeds do depend on the electron density. If the micro-scale filling factor within plumes is less than unity, that is, if substructures or inhomogeneities are present and not resolved by UVCS, we could underestimate the electron density, and hence underestimate the outflow speed of protons from the DD analysis. As stated above, however, the derivation of the O~{\sc{vi}} outflow speed is not affected by a possible micro-scale filling factor less than unity, unless the plasma inhomogeneities move at different speeds.
\begin{figure*}
\centering \includegraphics[width=16.0cm]{ne_multi.eps}
\caption{Electron density in interplumes (upper row) and plumes (lower row), for a model with a single cylindrically shaped plume in the plane of the sky. The red lines represent, for comparison, the interplume electron density profile obtained by fitting all data with a power law (see Eq. \ref{eq:neipfit}).}
\label{fig:density_multi}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=16.0cm]{vovi_multi.eps}\\
\includegraphics[width=16.0cm]{vh_multi.eps}
\caption{Outflow speed of O\textsuperscript{+5} in interplumes (first row from the top) and plumes (second row), in the case of a single-plume model. The analogous results for protons are shown in the third and fourth rows. The red lines represent, for comparison, the interplume O\textsuperscript{+5} and proton outflow speed profiles (see Eqs. \ref{eq:voviipfit} and \ref{eq:vpipfit}, respectively).}
\label{fig:ovi_speed_multi}
\end{figure*}
\subsection{Plume contribution to the polar wind}
\label{sec:results_contr}
The contribution of plumes to the polar wind with respect to interplumes can be estimated as the ratio of the mass flux due to plumes to the mass flux due to interplumes,
\begin{equation}
C(r)=\frac{n_{\rm e,pl}(r)w_{pl}(r)ff(r)}{n_{\rm e,ip}(r)w_{ip}(r)}
,\end{equation}
where $ff$ is the filling factor of a single plume at the coronal altitude $r$. We implicitly assume that the parts of the coronal hole not recognized as plumes are to be considered interplumes.
To calculate the interplume plasma flux in the polar wind, we used the following linear fits to the interplume outflow speeds of O~{\sc{vi}} and protons (with $w$ in ${\rm km~s^{-1}}$ and $r$ in units of R$_\odot$). We expect the validity of these fits to be restricted to the heliocentric distances explored in the present study:
\begin{equation}
w_{\rm O^{+5}}(r)=-344.02+254.88r
\label{eq:voviipfit}
,\end{equation}
\begin{equation}
w_{\rm H~Ly\alpha}(r)=-139.48+124.145r
\label{eq:vpipfit}
.\end{equation}
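The contribution estimate can be sketched by combining the linear speed fits above with the ratio $C(r)$. In the sketch below the plume density, speed, and filling-factor values of the usage example are hypothetical, chosen only to give a contribution of the same order as the tabulated ones:

```python
# Sketch of the per-plume contribution C(r); illustrative only.


def w_ovi_ip(r):
    """Interplume O^5+ outflow speed fit, Eq. (voviipfit) [km/s], r in R_sun."""
    return -344.02 + 254.88 * r


def w_p_ip(r):
    """Interplume proton outflow speed fit, Eq. (vpipfit) [km/s], r in R_sun."""
    return -139.48 + 124.145 * r


def plume_contribution(ne_pl, w_pl, ne_ip, w_ip, ff):
    """C(r) = n_pl(r) w_pl(r) ff(r) / (n_ip(r) w_ip(r)).

    ff is the filling factor of a single plume at altitude r.
    """
    return ne_pl * w_pl * ff / (ne_ip * w_ip)


# Hypothetical example: density contrast 4, plume speed half the interplume
# value, filling factor 5% -> C = 4 * 0.5 * 0.05 = 0.10, i.e. about 10%.
c = plume_contribution(4e5, 50.0, 1e5, 100.0, 0.05)
```

The denser plasma of a plume is partly offset by its lower speed and small filling factor, which is why each single plume contributes only a few percent.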
\begin{table}
\caption{Polar wind percentage contribution for each single plume, estimated
from the DD analysis on O~{\sc{vi}} and H~{\sc{i}} Lyman~$\alpha$
intensities.}
\centering
\begin{tabular}{l r r}
\hline\hline
$r$ [R$_\odot$] & \% O~{\sc{vi}} & \% H~{\sc{i}} Ly$\alpha$ \\
\hline
Plume 0 \\
\hline
1.45 & $ 5.4^{+4.0}_{-0.6}$ & $ 3.3^{+2.2}_{-1.9}$ \\
1.52 & $ 2.1^{+1.7}_{-0.9}$ & $ 3.2^{+1.7}_{-1.4}$ \\
1.60 & $ 5.9^{+2.3}_{-1.7}$ & $ 3.8^{+1.3}_{-1.2}$ \\
1.70 & $ 5.2^{+2.3}_{-1.5}$ & $ 3.7^{+1.2}_{-1.1}$ \\
1.88 & $12.1^{+4.4}_{-3.0}$ & $10.0^{+2.6}_{-2.2}$ \\
2.21 & $ 2.9^{+2.2}_{-1.3}$ & $ 4.5^{+0.9}_{-2.8}$ \\
\hline
Plume 1 \\
\hline
1.41 & $ 2.8^{+4.8}_{-2.8}$ & $ 1.5^{+2.4}_{-1.4}$ \\
1.47 & $ 1.9^{+2.0}_{-1.9}$ & $ 1.6^{+1.6}_{-1.6}$ \\
1.56 & $ 3.7^{+1.7}_{-1.5}$ & $ 3.7^{+1.4}_{-1.4}$ \\
1.65 & $ 3.3^{+1.5}_{-1.5}$ & $ 3.5^{+1.6}_{-1.5}$ \\
1.77 & $ 7.8^{+1.5}_{-1.5}$ & $ 7.5^{+1.2}_{-1.2}$ \\
\hline
Plume 2 \\
\hline
1.53 & $ 5.1^{+2.3}_{-2.3}$ & $ 2.7^{+2.1}_{-2.0}$ \\
1.63 & $ 7.9^{+2.6}_{-2.4}$ & $ 8.2^{+2.2}_{-2.4}$ \\
1.74 & $ 3.6^{+2.3}_{-2.2}$ & $ 3.8^{+2.3}_{-2.3}$ \\
1.86 & $ 8.4^{+6.4}_{-4.6}$ & $ 8.5^{+5.2}_{-4.5}$ \\
\hline
Plume 3 \\
\hline
1.66 & $ 5.5^{+2.3}_{-2.5}$ & $ 5.1^{+2.0}_{-2.3}$ \\
1.77 & $ 4.1^{+2.0}_{-2.4}$ & $ 3.5^{+1.8}_{-2.0}$ \\
1.90 & $ 4.1^{+1.9}_{-1.8}$ & $ 3.9^{+1.9}_{-1.7}$ \\
\hline
\end{tabular}
\label{tab:04}
\end{table}
The results for each plume are given in Table~\ref{tab:04}, where the percentage contributions at different heliocentric distances are reported as mean values with errors, for O~{\sc{vi}} and protons.
A possible way of estimating the total plume contribution to the polar wind, $C_{\rm tot}$, is to sum the mean contributions of all the plumes given in Table~\ref{tab:04}. In the case of O~{\sc{vi}} the result is $C_{\rm tot}=20\substack{+10\\-7}$ percent, and for protons it is $C_{\rm tot}=18\substack{+8\\-7}$ percent; the two values are fairly compatible within uncertainties.
\subsection{Geometry of plumes and mass flux conservation}
\label{sec:results_geom}
The consistency of the DD analysis with proton flux conservation can be verified by comparing the proton flux $w_p(r)n_p(r)$, as derived from the DD analysis, with the flux expected from Equation~(\ref{eq:mass_cons}) on the basis of the plume geometry, with the flux measured at $1~{\rm AU}$ as boundary condition (see Section \ref{sec:mass_flux}):
\begin{equation}
\nonumber
F_p(r)=\frac{F_p(1~{\rm AU})(215R_{\odot})^2}{r^2}\frac{f(1~{\rm AU},\theta)}{f(r,\theta)}
.\end{equation}
The comparison is shown in Figure~\ref{fig:cons}, where the profiles expected for radial expansion, and for the super-radial expansion described by the magnetic field model of \cite{Banaszkiewicz1998a}, are shown for three different values of the parameter $Q$. Although the uncertainties from the DD analysis are large, and fitting a model to the plume profiles does not allow us to establish a well defined geometry, most of the experimental points are compatible with the set of plume profile geometries we inferred from the LASCO, UVCS, and EIT images, and with the condition of mass flux conservation, as in the case of plumes PL2 and PL3. In the case of plumes PL0 and PL1, the results apparently contradict mass flux conservation for any value of the expansion factor.
\begin{figure*}
\centering
\includegraphics[width=16.0cm]{cons_pl1.eps}
\caption{Proton number flux for the single plume model. Solid lines represent the mass flux conservation for a radial expansion; dotted, short-dashed, and long-dashed profiles describe the super-radial expansion for the magnetic field model of \cite{Banaszkiewicz1998a} with $Q=0.3$, $Q=1.0,$ and $Q=1.5$, respectively.}
\label{fig:cons}
\end{figure*}
\section{Discussion and conclusions}
\label{sec:discussion}
The purpose of the present paper was to contribute to the discussion on whether plumes are to be considered dynamic structures and, in the affirmative case, to help quantify their contribution to the solar polar wind. The results we obtained confirm that plumes are substantially denser than interplumes, with a density contrast of about 3-5, depending on the altitude in the corona. The outflow speeds suggest that plumes are not static structures, at least at distances beyond about 1.6R$_{\odot}$. In some cases, for example for plumes PL0 and PL1, the speed profiles appear similar to the profiles typical of interplumes. In other cases, such as plumes PL2 and PL3, the outflow speeds differ significantly from those of interplumes, being characterized by a lower acceleration with distance. The electron density and outflow speed values in plumes derived from the DD analysis, together with the estimated plume filling factor in the polar coronal hole, led us to evaluate the contribution of plumes to the polar wind at about 20\% of the mass flux; this result should be considered a lower limit (see discussion in Section \ref{sec:results_contr}).
The results we obtained for different plumes are not uniform: sometimes we derive low speed values, as we would expect on the basis of previous results published in the literature (see \citealt{Wilhelm2011a}, \citealt{Poletto2015a}), while sometimes the outflow speeds do not differ much from the interplume values, and we would like to comment upon this. Signatures of outflows in plumes have been found in the low corona from observations taken with SoHO/SUMER and Hinode/EIS (see \citealt{Wilhelm2000a}, \citealt{Tian2010a}). More recently, a study based on SDO/AIA (Solar Dynamics Observatory/Atmospheric Imaging Assembly) data published by \cite{Pucci2014a} analyzed the propagation of disturbances in a polar coronal hole in the low corona, during an entire plume lifetime of about 40 hours. They concluded that these disturbances represent high-speed outflowing events, with speed distributions peaking at $167~{\rm km~s^{-1}}$ and $100~{\rm km~s^{-1}}$ for plumes and interplumes, respectively.
On the basis of the model of polar plume formation proposed by \cite{Wang1995a}, in which plumes form through the reconnection of closed bipolar regions with open unipolar magnetic flux tubes in polar coronal holes, the results obtained in the low corona suggest a scenario in which the outflow in a coronal hole consists of many episodes, which are more frequent in connection with plumes and evolve during the lifetime of a plume. Hence, in plumes in the intermediate corona we could expect to observe an episodic outflow speed regime, which cannot be resolved in time by UVCS observations, and the results of the DD analysis should be regarded as representative of an average (see Section \ref{sec:results_dd}). We may also speculate that the high and low speed regimes we observed in different plumes correspond to different stages in the evolution of a plume. Moreover, a variable and evolving outflow speed in plumes could also explain why, in some cases, the mass flux conservation constraint along plumes is apparently not respected by the flux estimates obtained at different altitudes with the DD technique (see discussion in Section \ref{sec:results_geom}).
The analysis presented in this paper is based on a single case study, hence we do not know how representative it is of the real dynamics in plumes. The analysis of a larger sample of UVCS observations will be the subject of a forthcoming study, in which we plan to investigate the effects of a possible evolution of the speed regime during a plume lifetime.
A new opportunity will become available with the Metis coronagraph on board the Solar Orbiter spacecraft (see \citealt{Antonucci2017a}; \citealt{antonucci2019a}). Metis will provide simultaneous observations of the solar corona in visible light and H~{\sc{i}}~Lyman~$\alpha$, with a reasonably high cadence of 20 minutes, over time intervals of ten days during each perihelion. By means of the polarimetric analysis of the visible light emission and the DD technique applied to H~{\sc{i}}~Lyman~$\alpha$, it will be possible to investigate in detail the evolution of individual polar plumes and to establish more firmly their contribution to the slow solar wind.
\begin{acknowledgements}
The authors would like to thank Giannina Poletto for having read the manuscript and for the many helpful comments and suggestions, and the anonymous referee for the review of our paper.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Today, cloud computing is already omnipresent and is pervading all aspects of our lives, both in the private and in the business domain. The annual market value related to cloud computing is estimated to be in the region of USD 150 billion and is expected to grow to around USD 200 billion by 2018~\cite{TRA1,PRW1}.
The European Commission (EC) promotes in its strategy ``Digital Agenda for Europe / Europe 2020'' the rapid adoption of cloud computing in all sectors of the economy to boost productivity. Furthermore, the EC concludes that ``cloud computing has the potential to slash users' IT expenditure and to enable many new services to be developed. Using the cloud, even the smallest firms can reach out to ever larger markets while governments can make their services more attractive and efficient even while reining in spending.''~\cite{STRAT1}.\\[-1.2em]
However, besides these advantages of cloud computing, many new problems arise which are not yet sufficiently solved, especially with respect to information security and privacy.
The fundamental concept of the cloud is storage and processing by a third party, i.e., the cloud or service provider, which invalidates the traditional view of a perimeter in IT security. In fact, the third party becomes part of the company's own computation and storage IT infrastructure, albeit not being under its full control. This situation is very problematic, and recent incidents show that the economic incentives and legal tools used to increase trust in the service provider, e.g., Service Level Agreements, are by far not sufficient to guard personal data and trade secrets against illegal interception, insider threats, or vulnerabilities exposing data in the cloud to unauthorized parties. While being processed by a provider, data is typically neither adequately protected against unauthorized read access, nor against unwanted modification or loss of authenticity. Consequently, in the most prominent cloud deployment model today -- the public cloud -- the cloud service provider necessarily needs to be trusted. Security guarantees with respect to user data can only be given on a contractual basis and rest to a considerable extent on organisational (besides technical) precautions. Hence, outsourcing IT tasks to an external shared infrastructure builds upon a problematic trust model. This situation inhibits many companies in the high-assurance and high-security area from benefiting from external cloud offerings: for them confidentiality, integrity, and availability are of such major importance that adequate technical measures are required---but state-of-the-art ICT currently cannot provide them. Moreover, individuals using public cloud services face a considerable privacy threat too, since they typically expose more information to services than required to perform the task. In the end, the cloud user remains responsible for his or her data, and outsourcing sensitive tasks to an external entity does not remove this burden.
Therefore, novel security and privacy preserving methods need to be developed to facilitate cloud usage even for organisations dealing with sensitive information.
In this work we present a new approach towards cloud security which is developed by the \textsc{Prismacloud}\xspace consortium within the EU Horizon 2020 research framework. The vision of \textsc{Prismacloud}\xspace is to develop the next generation of cryptographically secured cloud services with security and privacy built in by design. For us, the only reasonable way to achieve the required security properties for outsourced data storage and processing is to adopt suitable cryptographic mechanisms. Through the development of next-generation secure cloud services, \textsc{Prismacloud}\xspace aims $(i)$~to achieve beneficial impact on society, industry, and research in Europe; $(ii)$~to remove a major inhibitor against cloud adoption in security-relevant domains; $(iii)$~to develop cloud applications that preserve more privacy for citizens; $(iv)$~to deliver input to, and strengthen the position of, European industries; and $(v)$~to strengthen European research in a field with high research competition.
\section{A New Take on Cloud Security}
\label{sec:goals}
\subsection{Relevance and Project Objectives}
The importance of security for cloud computing is now widely accepted \cite{CSA2,ENISA1,NIST1} and security research for cloud computing is gaining tremendous momentum. It is of major importance for security research to catch up with the rapid developments in cloud computing before the ongoing transition to the cloud reaches critical areas. Future solutions for cloud deployment must be secure by design and provide end-to-end security for users to the best extent possible.
In the long term, cloud computing will influence many important aspects of our lives and will likely have an enormous impact on our society. The European Commission has already recognised the potential future impact of cloud computing for all of us and has issued a cloud computing strategy \cite{STRAT1} to protect European citizens from potential threats, while simultaneously unleashing the potential of cloud computing for industry and the public sector as well as for individuals. Many initiatives with these goals are either ongoing or about to be formed in the near future, and a substantial effort is put into cloud security research through the EU research framework. However, there are still many unsolved problems and there is ample room for innovation. We will contribute innovations to this field and are targeting the development of next-generation secure and trustworthy cloud environments. The ambition of \textsc{Prismacloud}\xspace is to build trustworthy cloud services on top of untrusted infrastructure through the development and application of adequate cryptography and methodologies.
The main objectives of \textsc{Prismacloud}\xspace are: $(i)$~to develop next-generation cryptographically secured services for the cloud, including novel cryptographic tools, mechanisms, and techniques ready to be used in a cloud environment to protect the security of data over its lifecycle and to protect the privacy of the users, with security based on ``by design'' principles; $(ii)$~to assess and validate the project results by fully developing and implementing three realistic use case scenarios in the areas of e-government, healthcare, and smart city services; and $(iii)$~to conduct a thorough analysis of the security of the final systems, their usability, as well as legal and information governance aspects of the new services.
\subsection{State of the Art}
Ongoing research activities like SECCRIT, Cumulus, and PASSIVE\footnote{EU-FP7: \url{http://www.seccrit.eu/}, \url{http://www.cumulus-project.eu/}, \url{http://ict-passive.eu/}} are extremely valuable and will be setting the standards and guidelines for secure cloud computing in the coming years. However, these approaches consider the cloud infrastructure provider as trustworthy, in the sense that no information of the customers, i.e., tenants, will be leaked, nor will their data be tampered with. The cloud infrastructure provider, however, has unrestricted access to all physical and virtual resources and thus absolute control over all tenants' data and resources. The underlying assumption is that if the cloud provider performs malicious actions against its customers, it will in the long run be put out of business -- if such doings are revealed. This assumption is very strong, however, especially considering the ongoing revelations of intelligence agencies' data gathering activities. Data disclosure may even be legally enforced in a way completely undetectable by the cloud provider's customers.
Through auditing and monitoring of cloud services, some of the malicious behaviour of outsiders and insiders (e.g. disgruntled employees with administrator privileges) may be detectable \emph{ex post}; however, that does not help a specific victim to prevent or survive such an attack. Moreover, advanced cyber-attacks directly targeting a specific victim can barely be detected and prevented with cloud auditing mechanisms or anomaly detection solutions. These methods are more efficient for detecting large-scale threats and problems and for making the infrastructure itself resilient, while keeping an acceptable level of service.
Other projects, like TClouds and PRACTICE\footnote{EU-FP7: \url{http://www.tclouds-project.eu}, \url{http://www.practice-project.eu/}} take cloud security a step further: TClouds already considers the impact of malicious provider behaviour and tries to protect users. However, it does not strongly focus on the comprehensive integration of cryptography up to the level of end-to-end security. PRACTICE, in contrast, is well aligned with our idea of secure services; however, it focuses mainly on preserving data confidentiality during processing outsourced to the cloud, which is achieved by means of secure multiparty computation and concepts from fully homomorphic encryption. \textsc{Prismacloud}\xspace is complementary to these concepts and enhances them with cryptographic primitives for the verification of outsourced computation and other relevant functionality to be carried out on the data in the untrusted cloud.
Research activities in context of privacy in cloud computing were and are currently conducted by various projects like ABC4Trust, A4Cloud and AU2EU\footnote{EU-FP7: \url{https://abc4trust.eu}, \url{http://www.a4cloud.eu}, \url{http://www.au2eu.eu}}. \textsc{Prismacloud}\xspace complements these efforts by relying on and further developing privacy-enhancing technologies for the use in cloud based environments.
\subsection{Main Goal and Innovations}
The main goal of \textsc{Prismacloud}\xspace is to enable the deployment of highly critical data to the cloud. The required security levels for such a move shall be achieved by means of novel security enabled cloud services, pushing the boundary of cryptographic data protection in the cloud further ahead. \textsc{Prismacloud}\xspace core innovations include:
$(i)$~Techniques for outsourcing computation with verifiable correctness and authenticity preservation, allowing secure delegation of computations to cloud providers.
$(ii)$~The provision of cryptographic techniques for the verification of claims about the secure connection and configuration of the virtualized cloud infrastructures.
$(iii)$~Addressing user privacy issues by cryptographic data minimization and anonymization technologies.
$(iv)$~Improving anonymization techniques for very large data sets in terms of performance and utility.
$(v)$~A distributed multi-cloud data storage architecture for sharing data among several cloud providers and thus improving data security and availability. Dynamically updating distributed data by means of novel techniques shall avoid vendor lock-in and promote a dynamic cloud provider market, while preserving data authenticity and facilitating long term data privacy with proactive secret sharing.
$(vi)$~Advanced format and order preserving encryption and tokenisation schemes for seamless integration of encryption into existing cloud services.
$(vii)$~The \textsc{Prismacloud}\xspace work program is complemented with activities addressing secure user interfaces, secure service composition, secure implementation in software and hardware, security certification, and an impact analysis from an end-user view. In order to converge with the European Cloud Computing Strategy, a strategy for the dissemination of results into standards is developed.
$(viii)$~As feasibility proof, three use cases from the fields of SmartCity, e-Government, and e-Health will be fully implemented and evaluated by the project participants.
\section{Verifiability of Data, Processing and Infrastructure}
\label{sec:verifiablity}
\subsection{Verifiable and Authenticity Preserving Data Processing}
Verifiable computing aims at outsourcing computations to one or more untrusted processing units in such a way that the result of a computation can be efficiently checked for validity. General purpose constructions for verifiable computation have made significant progress over the last years \cite{DBLP:journals/cacm/WalfishB15}; various implemented systems can already be deemed nearly practical, but they are not yet ready for real-world deployment. Besides general purpose systems, there are other approaches that are optimized for specific (limited) classes of computations or particular settings, e.g., \cite{DBLP:conf/ccs/BackesFR13,DBLP:conf/ccs/FioreGP14,DBLP:conf/asiacrypt/CatalanoMP14}.
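As a concrete, if classical, illustration of the kind of guarantee verifiable computing provides -- checking an outsourced result far more cheaply than recomputing it -- consider Freivalds' probabilistic verification of a matrix product. This is a textbook example, not one of the \textsc{Prismacloud}\xspace primitives:

```python
# Freivalds' check: verify a claimed product C = A*B in O(n^2) per round,
# versus O(n^3) for naive recomputation. Illustrative only.
import random


def matmul(A, B):
    """Naive product of two n x n matrices (what the untrusted party computes)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]


def matvec(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]


def freivalds_check(A, B, C, rounds=20):
    """Accept C as A*B; a wrong C is accepted with probability <= 2^-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A*(B*r) with C*r: three cheap matrix-vector products.
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False
    return True
```

The verifier never forms the full product itself; a correct result always passes, while a tampered one is rejected except with negligible probability.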
In addition to verifiability of computations, another interesting aspect is to preserve the authenticity of data that is manipulated by computations. Tools for preserving authenticity under admissible modifications are (fully) homomorphic signatures (or message authentication codes) \cite{DBLP:conf/scn/Catalano14}.
Besides this general tool, there are signatures with more restricted capabilities, like redactable signatures, introduced in~\cite{johnson2002_RSS,Steinfeld2002_RSS}, which have recently been shown to offer interesting applications~\cite{poehlsACNS2014updatableRSS,Hanser2013BlankDigitalSignatures}. These and other functional and malleable signatures will be developed further within \textsc{Prismacloud}\xspace to meet the requirements set by cloud applications.
By combining these cryptographic concepts, \textsc{Prismacloud}\xspace aims at providing tools for realizing processes (potentially involving several entities) that preserve the authenticity of the data involved and render the associated computations verifiable.
\subsection{Integrity and Certification of Virtualized Infrastructure}
The area of structural integrity and certification of virtualized infrastructures bridges three areas: attestation of component integrity, security assurance of cloud topologies, and graph signatures \cite{Gros2014,Gros2015}, which connect the former two. Attestation is the process in which a trusted component asserts the state of a physical or virtual component of the virtualized infrastructure, on all of its layers.
Cloud security assurance refers to a research area and a line of cloud security tools in which recent proposals include the analysis of cloud topologies for security properties.
Graph signatures, that is, signatures on committed graphs, are a new primitive we investigate within \textsc{Prismacloud}\xspace. Such a signature scheme allows a recipient to commit to a hidden graph and have an issuer sign the recipient-committed graph while joining it with an issuer-committed graph. The resulting signature allows the recipient to prove properties of the graph, such as connectivity or isolation, in zero-knowledge.
Within \textsc{Prismacloud}\xspace we develop and optimize the use of graph signatures for practical use in virtualized infrastructures. Their application allows an auditor to analyse the configuration of a cloud and issue a signature on its topology. The signature encodes the topology as a graph in such a way that the cloud provider can use it to prove, in zero-knowledge, high-level security properties, such as isolation of tenants, to verifiers such as the tenants themselves, without disclosing secret information.
Further, we will bridge cloud security assurance and verification methodology with certification by establishing a framework that issues signatures and proves security properties based on standard graph models of cloud topologies and on security goals expressed in a formal language (VALID).
\section{User Privacy Protection and Usability}
\label{sec:privacy}
\subsection{Privacy Preserving Service Usage}
For many services in the cloud it is important that users are given means to prove their authorisation to perform or delegate a certain task. However, it is not always necessary that users reveal their full identity to the cloud; it may suffice to prove by some means that they are authorised, e.g., possess certain rights. The main obstacle in this context is that the cloud provider must still be cryptographically reassured that the user is authorised.
Attribute-based anonymous credential (ABC) systems have proved to be an important concept for privacy-preserving applications, as they allow users to authenticate in an anonymous way, revealing no more information than is absolutely necessary for authentication at a service; there are strong efforts to bring them to practice\footnote{e.g., ABC4Trust: \url{https://abc4trust.eu/}}. Well-known ABC systems are, for instance, the multi-show system Idemix \cite{DBLP:conf/ccs/CamenischH02} and the one-show system U-Prove \cite{uprove-spec}. Recently, alternative approaches to ABC systems based on malleable signature schemes \cite{Canard2013,DBLP:conf/csfw/ChaseKLM14} and on a variant of structure-preserving signatures \cite{DBLP:conf/asiacrypt/HanserS14} have also been proposed.
In \textsc{Prismacloud}\xspace we aim at improving the state of the art in anonymous credential systems and group signature schemes, with a focus on their application in cloud computing services. Besides traditional applications such as anonymous authentication and authorization, we will also investigate their application to privacy-preserving billing \cite{DBLP:conf/ih/DanezisKR11,Sl11} for cloud storage and computing services.
\subsection{Big Data Anonymization}
Anonymizing data sets is a problem often encountered when providing data for processing in cloud applications such that a certain degree of privacy is guaranteed. However, achieving, e.g., optimal $k$-anonymity is known to be an NP-hard problem. Typically, researchers have focused on achieving $k$-anonymity with minimum data loss, thus maximizing the utility of the anonymised results; all of these techniques, however, assume that the dataset to be anonymised is relatively small and fits into computer memory. In the last few years, several attempts have been made to tackle the problem of anonymising large datasets.
In \textsc{Prismacloud}\xspace, we aim to improve existing anonymisation techniques in terms of both performance and utility (minimizing information loss) for very large data sets. We strive to overcome deficiencies in current mechanisms, e.g. size limitations, speed, assumptions about quasi-identifiers, or existence of total ordering, and implement a solution suitable for very large data sets. In addition, we propose to address issues related to distribution of very large data sets.
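To make the notion concrete, the following toy sketch checks $k$-anonymity of a record set and applies a simple generalisation step (the field names and bucket width are illustrative assumptions, not part of the \textsc{Prismacloud}\xspace design; real anonymisation must additionally minimise information loss, which is NP-hard in general):

```python
# Minimal k-anonymity check over a list of dict records (illustrative
# sketch; attribute names are hypothetical).
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True iff every quasi-identifier value combination occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

def generalise_age(record, width=10):
    """Coarsen the `age` attribute into buckets of `width` years."""
    r = dict(record)
    r["age"] = (r["age"] // width) * width
    return r
```

Generalisation trades utility for anonymity: coarsening the quasi-identifiers merges records into larger indistinguishability groups, which is exactly the information loss the text above seeks to minimise.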
\section{Securing Data at Rest}
\label{sec:storage}
\subsection{Confidentiality and Integrity for Unstructured Data}
Protecting customer data managed in the cloud from unauthorised access by the cloud provider itself should be one of the most basic and essential functionalities of a cloud system. However, the vast majority of current cloud offerings do not provide such functionality. One reason for this situation is that current cryptographic solutions cannot easily be integrated without drastically limiting the capabilities of the storage service.
In \textsc{Prismacloud}\xspace, we aim at researching and developing novel secure storage solutions with increased flexibility based on secret sharing. Secret sharing was invented in the late 1970s and has become a vital primitive in many cryptographic tasks. In a distributed setting, it can also provide confidentiality for data at rest with strong security and without key management. Various systems have been proposed in recent years, but most of them follow rather naive single-user approaches and require a trusted proxy. First approaches supporting multiple users combine secret sharing with quorum-based metadata management, but still rely on passive storage nodes. Recently, a new type of system was proposed that uses active nodes to fully delegate secure multi-user storage to the cloud; it combines efficient Byzantine protocols with various types of secret sharing protocols to cope with different adversary settings in a flexible way. However, many desired features, as well as a trustworthy distributed access control mechanism, are still missing.
Our goal is to develop efficient and flexible secret sharing based storage solutions for dynamic environments, like the cloud, supporting different adversary models (active, passive, mixed). The research will focus on the design of multi-user storage systems in a distributed fashion, without single-point-of-trust and single-point-of-failure and how they can be extended with access privacy.
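For readers unfamiliar with the primitive, a minimal sketch of threshold secret sharing in the style of Shamir is given below (illustrative only, not a hardened implementation of any \textsc{Prismacloud}\xspace component; the field modulus is an assumption for the example):

```python
# Toy Shamir secret sharing over GF(P): the secret is the constant term
# of a random degree-(t-1) polynomial; any t evaluations reconstruct it,
# while fewer reveal nothing. Illustrative sketch only.
import random

P = 2**127 - 1  # Mersenne prime serving as field modulus (assumption)

def split(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

Stored across $n$ independent cloud providers with threshold $t$, such shares give key-less confidentiality (fewer than $t$ colluding providers learn nothing) and availability (any $t$ providers suffice for reconstruction), which is the property the storage architecture above builds on.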
\subsection{Long-term Security Aspects and Everlasting Privacy}
To provide protection goals such as integrity, authenticity, and confidentiality in the long term, classic cryptographic primitives like digital signatures and encryption schemes are not sufficient: they become insecure once their security properties are defeated by advances in computing power or in cryptanalytic techniques.
The only approach known to address long-term confidentiality is proactive secret sharing, e.g., \cite{Gupta:2007:GVI:1338446.1338890}.
In this approach, the data is split into several shares that are stored in different locations and are renewed from time to time.
Although secret sharing is needed to provide long-term confidentiality, no existing approach considers performing publicly or privately verifiable computations, or integrity-preserving modifications, on secret shares.
Besides the distributed storage of data, providing everlasting privacy (or confidentiality) for data processed in a publicly verifiable manner requires the information published for auditing to be information-theoretically secure.
Only a few solutions address this and only for specific problems, such as verifiable anonymisation of data \cite{DBLP:conf/fc/BuchmannDG13} and verifiable tallying of votes, e.g. \cite{DBLP:journals/tissec/MoranN10}.
No generally applicable solution exists, nor do existing approaches show how authenticated data can be processed in a publicly verifiable way.
Therefore, we aim at providing solutions for proactive secret sharing of authenticated data and techniques that allow for privately and publicly verifiable computations.
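The renewal step mentioned above can be sketched as follows for additive shares: adding a jointly generated sharing of zero changes every share while preserving the secret, so shares leaked in different epochs cannot be combined (a toy example under assumed parameters, not a complete protocol; a real scheme additionally needs verifiability and secure channels):

```python
# Toy proactive refresh of additive secret shares (illustrative only).
import random

P = 2**61 - 1  # field modulus chosen for the example (assumption)

def additive_split(secret, n):
    """Additively share `secret` among n parties: shares sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def refresh(shares):
    """Add a fresh sharing of zero: new shares, same secret."""
    zero_shares = additive_split(0, len(shares))
    return [(s + z) % P for s, z in zip(shares, zero_shares)]

def reconstruct(shares):
    return sum(shares) % P
```

An adversary who compromises some locations before a refresh and others after it holds shares from incompatible epochs, which is what makes the confidentiality guarantee proactive rather than static.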
\subsection{Cryptography for Seamless Service Integration}
Considering existing applications in the cloud, it may be impossible to add security features transparently later on: for example, encrypted data may have to be stored in the same database table as unencrypted data, and applications running on the database may be unable to use the encrypted data, causing them to crash or to output incorrect values. Standard encryption schemes are designed for bit strings of a fixed length and can therefore significantly alter the data format, which may cause disruptions both in storing and in using the data.
To address this problem, techniques like Format-Preserving Encryption (FPE), Order-Preserving Encryption (OPE), and tokenisation have emerged as the most useful tools. In FPE schemes the ciphertexts have the same format as the messages, i.e., they can be used directly without adapting the application itself. OPE schemes, on the other hand, maintain the order between messages of the original domain, thus allowing the execution of range queries on encrypted data.
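As a concrete illustration of format preservation, the following toy sketch encrypts even-length decimal strings so that ciphertexts are again decimal strings of the same length (a hypothetical example using a SHA-256-keyed Feistel network, not a scheme developed in \textsc{Prismacloud}\xspace; deployments would use standardised constructions such as NIST FF1):

```python
# Toy FPE for even-length decimal strings via a balanced Feistel network
# keyed with SHA-256 (illustrative only; far below deployment quality).
import hashlib

def _round(key, value, i):
    """Hypothetical round function: hash of key, round index, and half."""
    digest = hashlib.sha256(f"{key}:{i}:{value}".encode()).hexdigest()
    return int(digest, 16)

def fpe_encrypt(key, digits, rounds=8):
    """Ciphertext is a decimal string of the same length as `digits`."""
    n = len(digits) // 2
    mod = 10 ** n
    left, right = int(digits[:n]), int(digits[n:])
    for i in range(rounds):
        left, right = right, (left + _round(key, right, i)) % mod
    return f"{left:0{n}d}{right:0{n}d}"

def fpe_decrypt(key, digits, rounds=8):
    """Run the Feistel rounds in reverse to invert fpe_encrypt."""
    n = len(digits) // 2
    mod = 10 ** n
    left, right = int(digits[:n]), int(digits[n:])
    for i in reversed(range(rounds)):
        left, right = (right - _round(key, left, i)) % mod, left
    return f"{left:0{n}d}{right:0{n}d}"
```

Because the ciphertext is again a fixed-length decimal string, it fits the same database column and validation rules as the plaintext, which is precisely the seamless-integration property discussed above.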
In \textsc{Prismacloud}\xspace we aim to address the shortcomings of existing FPE and OPE schemes. Existing FPE schemes for general formats, e.g., names or addresses, are inefficient, offer unsatisfactory security levels, and provide no clear way of defining formats, making them practically unusable.
We propose to address both issues (security and efficiency) and to develop an FPE scheme for general formats that: $(i)$ is more efficient; $(ii)$ provides an acceptable security guarantee; $(iii)$ supports complex format definitions; $(iv)$ can be employed to solve practical problems, e.g., data sharing within a cluster of private clouds. For OPE we aim to further analyze the existing approaches from both security and performance perspectives (and/or develop our own technique) and to implement the option that provides a suitable security-functionality trade-off for a given set of applications/use cases.
\section{Methodology, Tools and Guidelines for Fast Adoption}
\label{sec:sechci}
\subsection{Holistic Security Models}
The paradigm of service orientation~\cite{Erl06} has increasingly been adopted as one of the main approaches for building complex distributed systems out of re-usable components called services. We want to use the potential benefits of this software engineering approach rather than build yet another semi-automated or automated technique for service composition. However, combining the building blocks of \textsc{Prismacloud}\xspace correctly requires developers to have a solid understanding of their cryptographic strength.
To allow composing the building blocks into secure higher level services, we will identify which existing models for the security of compositions are adequate to deal with the complexity and heterogeneity.
\textsc{Prismacloud}\xspace will adopt working and established solutions and assumes that the established practice of composing services can serve as a basis for secure composition. When each service is described using standard description languages, composition languages~\cite{BBG06} can be extended to provide further capabilities, e.g., orchestration, security, and transactions, to service-oriented solutions~\cite{PLS08}. In \textsc{Prismacloud}\xspace we want to reduce the complexity further, much as mashups~\cite{LHPB09} of web APIs recently provided means for non-experts to define simple workflows.
Within \textsc{Prismacloud}\xspace we will develop a description not only of the functionality of each cryptographic building block but also of its limitations and composability.
\subsection{Human Computer Interaction (HCI) Concepts}
Cryptographic concepts such as secret sharing, verifiable computation, and anonymous credentials are fundamental technologies for secure cloud services and for preserving end users' privacy by enforcing data minimization. End users are still unfamiliar with such data minimization technologies, which are counterintuitive to them and for which no obvious real-world analogies exist. Previous HCI studies have shown that users therefore have difficulties developing correct mental models for data minimisation techniques such as anonymous credentials \cite{DBLP:conf/ifip11-4/WastlundAF11} or the new German identity card \cite{harbach2013acceptance}. Moreover, end users often do not trust the claim that such privacy-enhancing technologies will really protect their privacy \cite{andersson2005trust}. Similarly, users may not trust claims about the authenticity and verifiability functionality of malleable and functional signature schemes.
In our earlier research work, we explored different ways in which comprehensive mental models of the data minimization property of anonymous credentials can be evoked in end users \cite{DBLP:conf/ifip11-4/WastlundAF11}. \textsc{Prismacloud}\xspace extends this work by conducting research on suitable metaphors for evoking correct mental models for the other privacy-enhancing protocols and cryptographic schemes used in \textsc{Prismacloud}\xspace.
Besides, it researches which social trust factors can establish trust in \textsc{Prismacloud}\xspace technology and how these can be reflected in the user interfaces.
\subsection{Secure Cloud Usage for End-Users}
The cloud computing services market is currently soaring and already in the range of USD 100 billion, with an outlook onto a bright economic future with projected steep annual growth rates varying between 10 and 20\% \cite{TRA1,PRW1}. The huge interest in influencing market development and securing a proper share of the market is also visible in the manifold industry-driven standardization efforts\footnote{several consortia are listed at \url{http://cloud-standards.org}}. Following \emph{Key Action 1 ``Cutting through the Jungle of Standards''} of the European Commission's Cloud Computing Strategy \cite{STRAT1}, we will take a cautious approach with regard to a potential introduction of project results into a standardization process.
The crucial role of data security in cloud applications is widely recognized.
Previous studies have shown the vulnerability of information and communication technology systems, and especially of cloud systems, to illegal and criminal activities \cite{SGH1}. We will take a critical look at the secure cloud systems proposed in \textsc{Prismacloud}\xspace and analyze whether they live up to their security promises in practical applications. We will indicate, for individuals as well as for corporate and institutional security managers, what it means in practice to entrust sensitive data in specific use cases to systems claiming to implement, e.g., ``everlasting privacy'' \cite{MQU1}.
Besides licit use, we will assess the impact of potential criminal use and misuse of secure cloud infrastructures for fostering, enhancing, and promoting cybercrime. We want to anticipate threats resulting from misuse, deception, hijacking, or misappropriation by licit entities.
We will map implications originating in technical details and in the operation or usage of systems in specific environments, to high level security objectives which can be understood and relied on in high-level security management practice \cite{LAE1}.
\section{Conclusion}
\label{sec:conclusion}
Given the importance of the project goals, i.e., enabling secure and dependable cloud solutions, \textsc{Prismacloud}\xspace will have a significant impact in many areas. On a European level, the disruptive potential of \textsc{Prismacloud}\xspace's results lies in providing a basis for the actual implementation and deployment of security-enabled cloud services. Jointly developed by European scientists and industrial experts, the technology can act as an enabling technology in many sectors, such as health care, e-government, and smart cities. It may stimulate the increasing adoption of cloud services, with all its positive impact on productivity and the creation of jobs. On a societal level, \textsc{Prismacloud}\xspace potentially removes a major roadblock to the adoption of efficient cloud solutions, to the benefit of end-users. Through the use of privacy-preserving data minimization functionalities and depersonalization features, the amount of data collected about end-users may be reduced effectively, while the full functionality of the services is maintained. We will explicitly analyse potential negative consequences and potential misuses (cybercrime) of secure cloud services. The potential impact for European industry is huge: \textsc{Prismacloud}\xspace results may help pull some of the business currently concentrated in the United States of America to Europe and create sustainable business opportunities for companies in Europe. Equally important is the potential impact of \textsc{Prismacloud}\xspace on the European scientific community, as its results will be at the cutting edge of scientific research.
\section{Introduction and Outline}\label{intro}
This paper is motivated by a remarkable physical observation. When two distinct 2-dimensional materials with favorable crystalline structures are joined along an edge, there exist propagating modes, {\it e.g.} electronic or photonic, whose energy remains localized in a neighborhood of the edge without spreading into the ``bulk''. Furthermore, these modes and their properties persist in the presence of arbitrary local, even large, perturbations of the edge. An understanding of such ``protected edge states'' in periodic structures
has so far mainly been obtained by analyzing discrete ``tight-binding'' models
and from numerical simulations. In this paper we prove that edge states arise from the Schr\"odinger equation for a class of potentials that have many (though not all) features in common with the relevant experiments.
%
A central role is played by
a spectral ``no-fold'' condition. In the case of small amplitude (low-contrast) honeycomb potentials, this reduces to a sign condition on a particular Fourier coefficient of the potential.
%
%
A combination of numerical simulation and heuristic argument suggests
that if the ``no-fold'' condition fails, then edge
states need not be topologically protected. Let us explain these ideas in more detail.
Wave transport in periodic structures with honeycomb symmetry has been an area of intense activity catalyzed by the study of graphene, a single atomic layer two-dimensional honeycomb structure of carbon atoms. The remarkable electronic properties exhibited by graphene \cites{geim2007rise, RMP-Graphene:09, Katsnelson:12, zhang2005experimental} have inspired the study of waves in general honeycomb structures or ``artificial graphene'' in electronic \cites{artificial-graphene} and photonic \cites{HR:07,RH:08,Chen-etal:09,bahat2008symmetry,lu2014topological,BKMM_prl:13} contexts.
One such property, observed in electronic and photonic systems with honeycomb symmetry, is the existence of
topologically protected {\it edge states}. Edge states are modes which are
(i) pseudo-periodic (plane-wave-like or propagating) parallel to a line-defect, and (ii) localized transverse to the line-defect; see Figure \ref{fig:mode_schematic}.
{\it Topological protection} refers to the persistence of these modes and their properties, even when the line-defect is subjected to strong local or random perturbations. In applications, edge states are of great interest due to their potential as robust vehicles for channeling energy.
The extensive physics literature on topologically robust edge states goes back to investigations of the quantum Hall effect; see, for example, \cites{H:82, TKNN:82, Hatsugai:93, wen1995topological} and the rigorous mathematical articles \cites{Macris-Martin-Pule:99,EG:02, EGS:05,Taarabt:14}.
In \cites{HR:07, RH:08} a proposal was made for realizing {\it photonic edge states} in periodic electromagnetic structures which exhibit the magneto-optic effect. In this case, the edge is realized via a domain wall across which
the Faraday axis is reversed.
Since the magneto-optic effect breaks time-reversal symmetry, as does the magnetic field in the Hall effect, the resulting edge states are unidirectional.
Other realizations of edges in photonic and electromagnetic systems, {\it e.g.} between periodic dielectric and conducting structures, between periodic structures and free-space, have been explored through experiment and numerical simulation; see, for example \cites{Soljacic-etal:08,Fan-etal:08,Rechtsman-etal:13a,Shvets-PTI:13,Shvets:14}.
%
In the context of tight-binding models, the existence and robustness of edge states has been related to topological invariants (Chern index or Berry / Zak phase \cites{delplace2011zak}) associated with the ``bulk'' (infinite periodic honeycomb) band-structure.
We are interested in exploring these phenomena in general energy-conserving wave equations in continuous media. We consider the case of the Schr\"odinger equation on $\mathbb{R}^2$, $i\partial_t\psi=H\psi$, and study the existence and robustness of edge states of time-harmonic form: $\psi=e^{-iEt}\Psi$. Our model consists of a honeycomb background potential, the ``bulk'' structure, and a perturbing ``edge-potential''. The edge-potential interpolates between two distinct asymptotic periodic structures, via a {\it domain wall} which varies transverse to a specified line-defect (``edge'') in the direction
of some element of the period lattice, $\Lambda_h$.
In the context of honeycomb structures, the most frequently studied edges are the ``zigzag'' and ``armchair'' edges; see Figure \ref{fig:edges}.
Our model of an edge is motivated by the domain-wall construction of \cites{HR:07, RH:08}. A difference is that
we break spatial-inversion symmetry, while preserving time-reversal symmetry. Hence, the edge states -- though topologically robust -- may travel in either direction along the edge. In \cites{FLW-PNAS:14,FLW-MAMS:15} we proved that a one-dimensional variant of such edge-potentials gives rise to topologically protected edge states in periodic structures with symmetry-induced linear band crossings, the analogue in one space dimension of Dirac points (see below). We explore a photonic realization of such states in coupled waveguide arrays in \cites{Thorp-etal:15}.
Our goal is to clarify the underlying mechanisms for the existence of topologically protected edge states. In Theorem \ref{thm-edgestate} we give general conditions for a topologically protected bifurcation of edge states from {\it Dirac points} of the background (bulk) honeycomb structure.
The bifurcation is seeded by the robust zero mode of a one-dimensional effective Dirac equation.
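For orientation, this zero mode is of the well-known Jackiw--Rebbi type (our notation here is illustrative; the precise operator and coefficients are those specified later in the text): for a domain-wall function $\kappa(\zeta)$ with $\kappa(\pm\infty)=\pm\kappa_\infty$, $\kappa_\infty>0$, the one-dimensional Dirac operator
\[
\mathcal{D} \;=\; i\sigma_3\partial_\zeta + \kappa(\zeta)\,\sigma_1
\]
has the exponentially localized zero-energy state
\[
\alpha_\star(\zeta)\;=\;e^{-\int_0^\zeta\kappa(s)\,ds}\,\gamma_\star,\qquad \sigma_2\,\gamma_\star=\gamma_\star,
\]
which persists under perturbations of $\kappa$ that preserve its asymptotic values; this robustness seeds the topological protection of the bifurcating edge states.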
A key hypothesis is a {\it spectral no-fold condition} for the prescribed edge, assumed to be a {\it rational edge}.
In one-dimensional continuum models \cites{FLW-MAMS:15}, this condition is a consequence of monotonicity properties of dispersion curves. For continuous $d$-dimensional structures, with $d\ge2$, the spectral no-fold condition may or may not hold; see Section \ref{zz-gap}. Moreover, by varying a parameter, such as the lattice scale of a periodic structure, one can continuously tune between cases where the condition holds or does not hold; see Appendix \ref{V11-section}.
In Theorem \ref{SGC!} and Theorem \ref{Hepsdelta-edgestates} we verify the spectral no-fold condition for the zigzag edge, for a family of Hamiltonians with weak (low-contrast) potentials, and obtain the existence of {\it zigzag edge states} in this setting.
In a forthcoming article \cites{FLW-sb:16}, we study the strong binding regime (deep potentials) for a large class of honeycomb Schr\"odinger operators. We prove
that the two lowest energy dispersion surfaces, after a rescaling by the potential well's depth, converge uniformly to those of the celebrated tight-binding model of graphite introduced by Wallace in 1947
\cites{Wallace:47}. A corollary of this result is that the spectral no-fold condition, as stated in the present article,
is satisfied for sufficiently deep potentials (high contrast) for a very large class of edge directions in $\Lambda_h$ (including the zigzag edge). In fact, we believe that the analysis of the present article can be extended and, together with \cites{FLW-sb:16}, will yield the existence of edge states
which are localized, transverse to {\it arbitrary} edge directions ${\bm{\mathfrak{v}}}_1\in\Lambda_h$. This is work in progress.
For a detailed discussion of examples and motivating numerical simulations, see \cites{FLW-2d_materials:15}.
The types of edge states which exist for edges generated by domain walls stand in contrast to those which exist in the case of ``hard edges'', {\it i.e.} edges defined by the tight-binding bulk Hamiltonian on one side of an edge with Dirichlet (zero) boundary condition imposed on the edge; see parenthetical remark in Figure \ref{fig:edges}. In this case, it is well-known that
zigzag (hard) edges support edge states, while armchair (hard) edges do not; see, for example, \cites{Graf-Porta:13}.
Finally, we believe that failure of the spectral no-fold condition implies that there are no topologically protected edge states, although there is evidence that there are meta-stable edge states, which are localized near the edge for a long time; see Section \ref{meta-stable?}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{Localized_mode.pdf}
\caption{\footnotesize
Edge state -- propagating (plane-wave like) parallel to a zigzag edge ($\mathbb{R}{\bf v}_1$) and localized transverse to the edge.
\label{fig:mode_schematic}
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{zigzag_vs_armchair.pdf}
\caption{\footnotesize
Bulk honeycomb structure, ${\bf H} = ({\bf A } + \Lambda_h) \cup ( {\bf B} + \Lambda_h)$.
{\bf Top panel}: Zigzag edge (blue line), $\mathbb{R}{\bf v}_1 = \{{\bf x} : {\bf k}_2\cdot{\bf x}=0\}$.
Shaded region is the fundamental domain of the cylinder, $\Sigma_{ZZ}$, corresponding to the zigzag edge.
{\bf Bottom panel}: Armchair edge (blue line), $\mathbb{R}\left({\bf v}_1+{\bf v}_2\right) = \{{\bf x} : ({\bf k}_1-{\bf k}_2)\cdot{\bf x}=0\}$.
Fundamental domain of the cylinder, $\Sigma_{AC}$, corresponding to the armchair edge, also indicated.
(Darkened vertices are sites at which zero-boundary conditions are imposed in tight-binding models of
``hard'' edges.)
\label{fig:edges}
}
\end{figure}
\subsection{Detailed discussion of main results}\label{detailed-intro}
Let $\Lambda_h = \mathbb{Z}{\bf v}_1\oplus \mathbb{Z}{\bf v}_2$ denote the regular (equilateral) triangular lattice
and $\Lambda_h^* = \mathbb{Z}{\bf k}_1\oplus \mathbb{Z}{\bf k}_2$ denote the associated dual lattice, with relations ${\bf k}_l\cdot{\bf v}_m=2\pi \delta_{lm},\ l,m=1,2$. The expressions for ${\bf k}_l$ and ${\bf v}_m$ are displayed in Section \ref{sec:honeycomb}. The honeycomb structure, ${\bf H}$, is the union of two interpenetrating triangular lattices: ${\bf A} + \Lambda_h$ and ${\bf B} + \Lambda_h$; see Figures \ref{fig:edges} and \ref{fig:lattices}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{honeycomb_lattices.pdf}
\caption{\footnotesize
{\bf Left panel:} ${\bf A}=(0,0)$, ${\bf B}=(\frac{1}{\sqrt3},0)$.
The honeycomb structure, ${\bf H}$ is the union of two interpenetrating sublattices: $\Lambda_{\bf A}={\bf A}+\Lambda_h$ (blue)
and $\Lambda_{\bf B}={\bf B}+\Lambda_h$ (red). The lattice vectors $\{{\bf v}_1,{\bf v}_2\}$ generate $\Lambda_h$.
Colors designate sublattices; in graphene the atoms occupying $\Lambda_{\bf A}-$ and $\Lambda_{\bf B}-$ sites are identical.
{\bf Right panel:}
Brillouin zone, ${\mathcal{B}}_h$, and dual basis $\{{\bf k}_1,{\bf k}_2\}$. ${\bf K}$ and ${\bf K}'$ are labeled. Other vertices of ${\mathcal{B}}_h$ obtained via application of $R$, a rotation by $2\pi/3$.
\label{fig:lattices}
}
\end{figure}
A {\it honeycomb lattice potential}, $V({\bf x})$, is a real-valued, smooth function, which is $\Lambda_h-$ periodic and, relative to some origin of coordinates, inversion symmetric (even) and invariant under a $2\pi/3$ rotation; see Definition \ref{honeyV}. A choice of period cell is $\Omega_h$, the parallelogram in $\mathbb{R}^2$ spanned by $\{{\bf v}_1, {\bf v}_2\}$.
We begin with the Hamiltonian for the unperturbed honeycomb structure:
%
%
\begin{align*}
H^{(0)} &= -\Delta +V({\bf x}). \label{H0}
\end{align*}
The {\it band structure} of the $\Lambda_h-$ periodic Schr\"odinger operator, $H^{(0)}$, is obtained by considering the
family of eigenvalue problems, parametrized by ${\bf k}\in\mathcal{B}_h$, the Brillouin zone:
$(H^{(0)}-E)\Psi=0,\ \Psi({\bf x}+{\bf v})=e^{i{\bf k}\cdot{\bf v}}\Psi({\bf x}),\ \ {\bf x}\in\mathbb{R}^2,\ {\bf v}\in\Lambda_h$.
Equivalently, $\psi({\bf x})=e^{-i{\bf k}\cdot{\bf x}}\Psi({\bf x})$, satisfies the periodic eigenvalue problem:
$ \left(H^{(0)}({\bf k})-E({\bf k})\right)\psi=0$ and $\psi({\bf x}+{\bf v})=\psi({\bf x})$ for all ${\bf x}\in\mathbb{R}^2$ and
${\bf v}\in\Lambda_h$, where $H^{(0)}({\bf k})=-(\nabla+i{\bf k})^2+V({\bf x})$.
For each ${\bf k}\in{\mathcal{B}}_h$, the spectrum is real and consists of discrete eigenvalues $E_b({\bf k}),\ b\ge1$, where $E_b({\bf k})\le E_{b+1}({\bf k})$. The maps ${\bf k}\mapsto E_b({\bf k})\in\mathbb{R}$ are called the dispersion surfaces of $H^{(0)}$. The collection of these surfaces constitutes the {\it band structure} of $H^{(0)}$. As ${\bf k}$ varies over $\mathcal{B}_h$, each map ${\bf k}\mapsto E_b({\bf k})$ is Lipschitz continuous and sweeps out a closed interval in $\mathbb{R}$. The union of these intervals is the $L^2(\mathbb{R}^2)-$ spectrum of $H^{(0)}$.
A more detailed discussion is presented in Section \ref{honeycomb_basics}.
A central role is played by the {\it Dirac points} of $H^{(0)}$.
These are quasi-momentum / energy pairs, $({\bf K}_\star,E_\star)$, in the band structure of $H^{(0)}$ at which neighboring dispersion surfaces touch conically at a point \cites{RMP-Graphene:09,Katsnelson:12,FW:12}. The existence of Dirac points, located at the six vertices of the Brillouin zone, $\mathcal{B}_h$ (regular hexagonal dual period cell) for generic honeycomb structures was proved in \cites{FW:12,FLW-MAMS:15}; see also \cites{Grushin:09,berkolaiko-comech:15}.
The quasi-momenta of Dirac points partition into two equivalence classes: the ${\bf K}-$points, consisting of ${\bf K}, R{\bf K}$, and $R^2{\bf K}$, where $R$ is a rotation by $2\pi/3$, and the ${\bf K}'-$points, consisting of ${\bf K}'=-{\bf K}, R{\bf K}'$, and $R^2{\bf K}'$.
The time evolution of a wavepacket, with data spectrally localized near a Dirac point, is governed by a massless two-dimensional Dirac system \cites{FW:14}.
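Schematically, such an effective system has the form (our notation here is illustrative; see \cites{FW:14} for the precise statement and the definition of the velocity parameter $\lambda_\sharp$)
\[
 i\partial_t
 \begin{pmatrix}\alpha_1\\ \alpha_2\end{pmatrix}
 \;=\; \lambda_\sharp\left(\sigma_1\,\frac{1}{i}\partial_{X_1}+\sigma_2\,\frac{1}{i}\partial_{X_2}\right)
 \begin{pmatrix}\alpha_1\\ \alpha_2\end{pmatrix},
\]
where $\sigma_1,\sigma_2$ denote the standard Pauli matrices and $\alpha_1,\alpha_2$ are slowly varying envelope amplitudes of the degenerate Dirac-point modes $\Phi_1,\Phi_2$ introduced below.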
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{E_vs_k_three_surfaces.pdf}
\caption{\footnotesize
Lowest three dispersion surfaces ${\bf k}\equiv(k^{(1)},k^{(2)})\in\mathcal{B}_h\mapsto E({\bf k})$ of the band structure of $H^{(0)}\equiv -\Delta + V({\bf x})$, where
$V$ is the honeycomb potential: $V({\bf x}) = 10 \left(\cos({\bf k}_1 \cdot{\bf x})+\cos({\bf k}_2 \cdot{\bf x})+\cos(({\bf k}_1+{\bf k}_2)\cdot{\bf x})\right)$.
Dirac points occur at the intersection of the lower two dispersion surfaces, at the six vertices of the Brillouin zone, $\mathcal{B}_h$.
\label{fig:E_mesh_3}
}
\end{figure}
Figure \ref{fig:E_mesh_3} displays the first three dispersion surfaces of $H^{(0)}$ for a honeycomb potential. The lowest two of these surfaces touch conically at the six vertices of $\mathcal{B}_h$ (inset). Associated with the Dirac point $({\bf K}_\star,E_\star)$
is a two-dimensional eigenspace of ${\bf K}_\star-$ pseudo-periodic states,\ ${\rm span}\{\Phi_1,\Phi_2\}$:
\[ H^{(0)}\Phi_j({\bf x})=E_\star \Phi_j({\bf x}),\ {\bf x}\in\mathbb{R}^2,\ \ j=1,2\ ,\ \textrm{ where}\ \ \Phi_j({\bf x}+{\bf v})=e^{i{\bf K}_\star\cdot{\bf v}}\Phi_j({\bf x}),\ \ {\bf v}\in\Lambda_h ;\]
see Definition \ref{dirac-pt-defn}.
It is also shown in \cites{FW:12} that a $\Lambda_h-$ periodic perturbation of $V({\bf x})$ which breaks inversion or time-reversal
symmetry lifts the eigenvalue degeneracy; a (local) gap is opened about the Dirac points and the perturbed dispersion surfaces are locally smooth. The perturbation of $H^{(0)}$ by an edge potential (see \eqref{schro-domain}) exploits this instability of Dirac points with respect to symmetry-breaking perturbations.
To construct our Hamiltonian, perturbed by an edge-potential, we first choose a vector ${\bm{\mathfrak{v}}}_1\in\Lambda_h$, the period lattice, and consider the line
$\mathbb{R}{\bm{\mathfrak{v}}}_1$, the ``edge''. Choose ${\bm{\mathfrak{v}}}_2$ such that $\Lambda_h=\mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2$. Also introduce dual basis vectors, ${\bm{\mathfrak{K}}}_1$ and $ {\bm{\mathfrak{K}}}_2$, satisfying ${\bm{\mathfrak{K}}}_l\cdot{\bm{\mathfrak{v}}}_m=2\pi\delta_{lm},\ l,m=1,2$; see Section \ref{ds-slices} for a detailed discussion.
The choice ${\bm{\mathfrak{v}}}_1={\bf v}_1$ (or equivalently ${\bf v}_2$) is a {\it zigzag edge} and the choice ${\bm{\mathfrak{v}}}_1={\bf v}_1+{\bf v}_2$ is an {\it armchair edge}; see Figure \ref{fig:edges}.
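Concretely, a companion vector ${\bm{\mathfrak{v}}}_2$ can be produced from Bézout coefficients of the coprime integer pair defining ${\bm{\mathfrak{v}}}_1$ (so that the change of basis is unimodular), and the dual vectors ${\bm{\mathfrak{K}}}_1, {\bm{\mathfrak{K}}}_2$ from a $2\times2$ matrix inverse. A minimal Python sketch of this standard recipe; the concrete lattice vectors are an illustrative choice.

```python
import numpy as np

v1 = np.array([np.sqrt(3)/2,  0.5])   # illustrative triangular-lattice basis
v2 = np.array([np.sqrt(3)/2, -0.5])

def egcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b)*y)

def edge_bases(a1, b1):
    """For coprime (a1, b1): v_frak_1 = a1 v1 + b1 v2 (the edge direction),
    a companion v_frak_2 with Z v_frak_1 + Z v_frak_2 = Lambda_h, and duals
    K_frak_1, K_frak_2 with K_frak_l . v_frak_m = 2 pi delta_{lm}."""
    g, x, y = egcd(a1, b1)
    assert g == 1                     # coprimality, as required for an edge
    c, d = -y, x                      # then a1*d - b1*c = 1 (unimodular)
    fv1, fv2 = a1*v1 + b1*v2, c*v1 + d*v2
    fK1, fK2 = 2*np.pi*np.linalg.inv(np.column_stack([fv1, fv2]))
    return fv1, fv2, fK1, fK2

# zigzag edge: (a1, b1) = (1, 0); armchair edge: (a1, b1) = (1, 1)
fv1, fv2, fK1, fK2 = edge_bases(1, 1)
print(fK1 @ fv1, fK1 @ fv2, fK2 @ fv1, fK2 @ fv2)   # approx (2*pi, 0, 0, 2*pi)
```

For the zigzag choice $(a_1,b_1)=(1,0)$ this returns ${\bm{\mathfrak{v}}}_2={\bf v}_2$, as in the text; other unimodular completions are equally valid.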
Introduce the perturbed Hamiltonian:
\begin{equation}
H^{(\delta)} \equiv -\Delta + V({\bf x}) + \delta\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x})W({\bf x}) \ =\ H^{(0)} + \delta\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x})W({\bf x}) .
\label{schro-domain}
\end{equation}
Here, $\delta$ is real and will be taken to be sufficiently small, and $W({\bf x})$ is $\Lambda_h-$ periodic
and odd. The function $\kappa$ defines a {\it domain wall}. We choose $\kappa$ to be sufficiently smooth and to satisfy $\kappa(0)=0$ and $\kappa(\zeta)\to\pm\kappa_\infty\ne0$
as $\zeta\to\pm\infty$. Without loss of generality, we assume $\kappa_\infty>0$; {\it e.g.} $\kappa(\zeta)=\tanh(\zeta)$. We refer to the line $\mathbb{R}{\bm{\mathfrak{v}}}_1$ as a ${\bm{\mathfrak{v}}}_1-$ edge.
Note that $H^{(\delta)}$ is invariant under translations parallel to the ${\bm{\mathfrak{v}}}_1-$ edge, ${\bf x}\mapsto{\bf x}+{\bm{\mathfrak{v}}}_1$, and hence there is a well-defined {\it parallel quasi-momentum}, denoted ${k_{\parallel}}$. Furthermore,
$H^{(\delta)}$ transitions adiabatically from the asymptotic Hamiltonian $H_-^{(\delta)}=H^{(0)}\ -\ \delta\kappa_\infty W({\bf x})$ as ${\bm{\mathfrak{K}}}_2\cdot{\bf x}\to-\infty$ to the asymptotic Hamiltonian $H_+^{(\delta)}=H^{(0)}\ +\ \delta\kappa_\infty W({\bf x})$
as ${\bm{\mathfrak{K}}}_2\cdot{\bf x}\to\infty$. In the case where $\kappa$ changes sign once across $\zeta=0$, the domain wall modulation of $W({\bf x})$ realizes a phase-defect
across the edge (line-defect) $\mathbb{R}{\bm{\mathfrak{v}}}_1$. A variant of this construction was used in \cites{FLW-MAMS:15} to insert a phase defect between asymptotic dimer periodic potentials.
\bigskip
\bigskip
Suppose $H^{(0)}$ has a Dirac point at $({\bf K}_\star,E_\star)$. It is important to note that while $H^{(0)}$ is inversion symmetric, $H^{(\delta)}_\pm$ is not.
For $\delta\ne0$, $H^{(\delta)}_\pm$ does not have Dirac points; its dispersion surfaces are locally smooth and, for quasi-momenta ${\bf k}$ with $|{\bf k}-{\bf K}_\star|$ sufficiently small, there is an open neighborhood of $E_\star$ not contained in the $L^2(\mathbb{R}^2/\Lambda_h)-$ spectrum of $H^{(\delta)}_\pm({\bf k})$. This ``spectral gap'' about $E=E_\star$ may, however, be only local about ${\bf K}_\star$ \cites{FW:12}. If there is a real open neighborhood of $E_\star$ not contained in the spectrum of $H^{(\delta)}_\pm({\bf k})=-(\nabla+i{\bf k})^2+V\pm\delta\kappa_\infty W$ for \emph{all} ${\bf k}\in\mathcal{B}_h$, then
$H_\pm^{(\delta)}$ is said to have a (global) omni-directional spectral gap about $E=E_\star$.
We shall see, in our discussion of the {\it spectral no-fold condition},
that it is a ``directional spectral gap'' that plays a key role in the existence of edge states;
see Section \ref{intro-nofold} and Definition \ref{SGC}.
%
Under suitable hypotheses, we shall construct {\it ${\bm{\mathfrak{v}}}_1-$ edge states} of $H^{(\delta)}$, which are spectrally localized near the Dirac point, $({\bf K}_\star,E_\star)$.
These are non-trivial solutions $\Psi$, with energies $E\approx E_\star$, of the ${k_{\parallel}}-$ eigenvalue problem:
\begin{align}
H^{(\delta)}\Psi\ &=\ E\Psi,\label{edge-evp}\\
\ \Psi({\bf x}+{\bm{\mathfrak{v}}}_1)\ &=\ e^{i{k_{\parallel}}}\Psi({\bf x})\ (\textrm{propagation parallel to}\ \mathbb{R}{\bm{\mathfrak{v}}}_1), \label{edge-bc1}\\
|\Psi({\bf x})|\ &\to\ 0,\ \ {\rm as}\ \ |{\bm{\mathfrak{K}}}_2\cdot{\bf x}|\to\infty\quad (\textrm{localization transverse to}\ \mathbb{R}{\bm{\mathfrak{v}}}_1), \label{edge-bc2}
\end{align}
for ${k_{\parallel}}\approx{\bf K}_\star\cdot{\bm{\mathfrak{v}}}_1$.
To formulate the eigenvalue problem in an appropriate Hilbert space, we introduce the cylinder $\Sigma\equiv \mathbb{R}^2/ \mathbb{Z}{\bm{\mathfrak{v}}}_1$. If $f({\bf x})$ satisfies the pseudo-periodic boundary condition \eqref{edge-bc1}, then $f({\bf x})e^{-i\frac{{k_{\parallel}}}{2\pi}{\bm{\mathfrak{K}}}_1\cdot{\bf x}}$ is well-defined on the cylinder $\Sigma$. Denote by $H^s(\Sigma),\ s\ge0$, the Sobolev spaces of functions defined on $\Sigma$. The pseudo-periodicity and decay conditions \eqref{edge-bc1}-\eqref{edge-bc2} are encoded by requiring $ \Psi \in H^s_{k_{\parallel}}(\Sigma)$, for some $s\ge0$, where
\begin{equation*}
H^s_{k_{\parallel}}=H^s_{k_{\parallel}}(\Sigma)\ \equiv \ \left\{f : f({\bf x})e^{-i\frac{{k_{\parallel}}}{2\pi}{\bm{\mathfrak{K}}}_1\cdot{\bf x}}\in H^s(\Sigma) \right\} .\label{Hs-kpar}
\end{equation*}
Thus we formulate the EVP \eqref{edge-evp}-\eqref{edge-bc2} as:
\begin{equation}
H^{(\delta)}\Psi\ =\ E\Psi,\ \ \Psi\in H^2_{{k_{\parallel}}}(\Sigma).
\label{EVP}\end{equation}
\begin{remark}[Symmetry relation among ${\bf K}-$ and ${\bf K}'-$ points]
Note that if $\Psi({\bf x})=e^{i{\bf K}\cdot{\bf x}}Z({\bf x})$ is a solution of the eigenvalue problem \eqref{EVP}, then $\psi_{{\bf K}}({\bf x},t)=e^{-i(E t-{\bf K}\cdot{\bf x})} Z({\bf x}),$
where $Z({\bf x}+{\bm{\mathfrak{v}}}_1)=Z({\bf x})$ and $Z({\bf x})\rightarrow0$ as $|{\bm{\mathfrak{K}}}_2\cdot{\bf x}| \rightarrow \infty$,
is a propagating edge state of the time-dependent Schr\"odinger equation:
$ i\partial_t\psi({\bf x},t) = H^{(\delta)} \psi({\bf x},t)$
with parallel quasi-momentum ${k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1$.
Since the time-dependent Schr\"odinger equation has the invariance $\psi({\bf x},t)\mapsto\overline{\psi({\bf x},-t)}$, it follows that
\[ \overline{\psi_{\bf K}({\bf x},-t)} = e^{-i(Et+{\bf K}\cdot{\bf x})} \overline{Z({\bf x})} = e^{-i(Et-{\bf K}'\cdot{\bf x})} \overline{Z({\bf x})} = \psi_{{\bf K}'}({\bf x},t) . \]
Thus $\psi_{{\bf K}'}({\bf x},t)$ is a counterpropagating edge state with parallel quasi-momentum, $k_\parallel={\bf K}'\cdot{\bm{\mathfrak{v}}}_1=-{\bf K}\cdot{\bm{\mathfrak{v}}}_1$.
Due to these symmetry considerations and the equivalence of ${\bf K}-$ points: $\{{\bf K},R{\bf K},R^2{\bf K}\}$, without loss of generality, we henceforth restrict our attention to the Dirac point $({\bf K},E_\star)$.
\end{remark}
\subsection{Summary of main results}\label{results-summary}
\subsubsection{General conditions for the existence of topologically protected edge states; Theorem \ref{thm-edgestate} and Corollary \ref{vary_k_parallel}}
In {\bf Theorem \ref{thm-edgestate}} we formulate hypotheses on the honeycomb potential, $V$, domain wall function, $\kappa(\zeta)$, and asymptotic periodic structure, $W({\bf x})$, which
imply the existence of topologically protected ${\bm{\mathfrak{v}}}_1-$ edge states, constructed as non-trivial eigenpairs $\delta\mapsto (\Psi^\delta, E^\delta)$ of \eqref{EVP} with ${k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1$,
defined for all $|\delta|$ sufficiently small. This branch of non-trivial states bifurcates from the trivial solution branch $E\mapsto(\Psi\equiv0,E)$ at $E=E_\star$, the energy of the Dirac point.
Key among the hypotheses is the spectral no-fold condition, discussed below in Section \ref{intro-nofold}.
At leading order in $\delta$, the edge state, $\Psi^\delta({\bf x})$, is a slow modulation of the degenerate nullspace of $H^{(0)}-E_\star$:
\begin{align}
\Psi^\delta({\bf x}) &\approx \alpha_{\star,+}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_+({\bf x}) + \alpha_{\star,-}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_-({\bf x}) \ \ \text{in} \ \ H_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1}^2(\Sigma) , \label{multiscale-formal0} \\
E^\delta &= E_\star + \mathcal{O}(\delta^2),\ \ 0<|\delta|\ll1,\label{multiscale-formal1}
\end{align}
where $\Phi_+$ and $\Phi_-$ are the appropriate linear combinations of $\Phi_1$ and $\Phi_2$,
defined in \eqref{Phi_pm-def}.
The envelope amplitude-vector, $\alpha_\star(\zeta)=(\alpha_{\star,+}(\zeta),\alpha_{\star,-}(\zeta))^T$, is a zero-energy eigenstate, $\mathcal{D}\alpha_\star=0$, of the one-dimensional Dirac operator (see also \eqref{multi-dirac-op}):
\[ \mathcal{D} \equiv -i|\lambda_\sharp||{\bm{\mathfrak{K}}}_2|\sigma_3\partial_\zeta + \vartheta_\sharp\kappa(\zeta)\sigma_1,\]
where the Pauli matrices $\sigma_j$ are displayed in \eqref{Pauli-sigma}.
Here $\lambda_\sharp\in\mathbb{C}$ (see \eqref{lambda-sharp2}) depends on the unperturbed honeycomb potential, $V$, and is non-zero for generic $V$. The constant
$\vartheta_\sharp\equiv\left\langle \Phi_1,W\Phi_1\right\rangle_{L^2(\Omega_h)}$ is real and is also generically nonzero.
%
$\mathcal{D}$ has a spatially localized zero-energy eigenstate for any $\kappa(\zeta)$ having asymptotic limits of opposite sign at $\pm\infty$. Therefore, the zero-energy eigenstate, which seeds the bifurcation, persists for {\it localized} perturbations of $\kappa(\zeta)$.
In this sense, the bifurcating branch of edge states is topologically protected against a class of local perturbations of the edge.
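For $\kappa(\zeta)=\tanh(\zeta)$ the zero-energy eigenstate is explicit: $\alpha_\star(\zeta)=\operatorname{sech}^{\vartheta_\sharp/(|\lambda_\sharp||{\bm{\mathfrak{K}}}_2|)}(\zeta)\,\gamma$ with $\sigma_2\gamma=-\gamma$ (taking $\vartheta_\sharp>0$); for general admissible $\kappa$ one has $\alpha_\star\propto\exp\big(-\tfrac{\vartheta_\sharp}{|\lambda_\sharp||{\bm{\mathfrak{K}}}_2|}\int_0^\zeta\kappa\big)\gamma$, which is localized whenever $\kappa$ has limits of opposite sign. The Python sketch below checks the $\tanh$ case numerically; the values standing in for $|\lambda_\sharp||{\bm{\mathfrak{K}}}_2|$ and $\vartheta_\sharp$ are illustrative, not computed from any particular $V$, $W$.

```python
import numpy as np

# Pauli matrices sigma_1, sigma_3 (as in the text)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

a_coef = 1.0      # stand-in for |lambda_sharp| |K_frak_2|  (illustrative)
theta  = 1.5      # stand-in for vartheta_sharp             (illustrative)

L, n = 30.0, 4001
z = np.linspace(-L, L, n)
h = z[1] - z[0]

# Explicit zero mode for kappa = tanh:
#   alpha(z) = sech(z)^(theta/a) * gamma,  where sigma_2 gamma = -gamma
gamma = np.array([1.0, -1.0j]) / np.sqrt(2)
f = np.cosh(z) ** (-theta / a_coef)
alpha = np.outer(f, gamma)                    # rows: alpha(z_j), shape (n, 2)

# Apply D = -i a sigma_3 d/dz + theta kappa(z) sigma_1 by centered differences
dalpha = np.zeros_like(alpha)
dalpha[1:-1] = (alpha[2:] - alpha[:-2]) / (2*h)
Dalpha = -1j*a_coef*(dalpha @ s3.T) + theta*(np.tanh(z)[:, None]*alpha) @ s1.T
res = np.linalg.norm(Dalpha[1:-1]) / np.linalg.norm(alpha)
print(res)    # small residual: alpha is (numerically) a zero-energy state of D
```

The residual is limited only by the finite-difference error, and the state decays rapidly at the grid ends, consistent with transverse localization.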
Section \ref{formal-multiscale} gives an account of a formal multiple scale expansion, to any order in the small parameter $\delta$, of a solution of the eigenvalue problem \eqref{EVP}. The expression in \eqref{multiscale-formal0} is the leading order term in this expansion. Our methods can be used to prove the validity of the multiple scale expansion at any finite order.
{\bf Corollary \ref{vary_k_parallel}} ensures, under the conditions of Theorem \ref{thm-edgestate}, the existence of edge states,
$\Psi({\bf x};{k_{\parallel}})\in H^2_{{k_{\parallel}}}(\Sigma)$ for all ${k_{\parallel}}$ in a neighborhood of ${k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1$, and by symmetry (see Remark \ref{kpar-symmetry}) for all ${k_{\parallel}}$ in a neighborhood of ${k_{\parallel}}=-{\bf K}\cdot{\bm{\mathfrak{v}}}_1={\bf K}'\cdot{\bm{\mathfrak{v}}}_1$.
Thus,
by taking a continuous superposition of states given by Corollary \ref{vary_k_parallel}, one obtains states that remain localized about (and dispersing along) the ${\bm{\mathfrak{v}}}_1-$ edge for all time.
\begin{remark}\label{key-hyp}
A key hypothesis in Theorem \ref{thm-edgestate} is a {\it spectral no-fold} condition at $({\bf K},E_\star)$ for the ${\bm{\mathfrak{v}}}_1-$ edge of the band-structure of $-\Delta+V$. This (essentially) ensures the existence of an $L^2_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1}(\Sigma)-$ spectral gap containing $E_\star$ for the perturbed Hamiltonian, $H^{(\delta)}$; see Definition \ref{SGC} and the discussion in Section \ref{intro-nofold}.
\end{remark}
\subsubsection{Theorem \ref{Hepsdelta-edgestates}; Existence of topologically protected zigzag edge states}\label{zigzag-summary}
We consider the case of zigzag edges corresponding to the choice ${\bm{\mathfrak{v}}}_1={\bf v}_1$, ${\bm{\mathfrak{v}}}_2={\bf v}_2$, and ${\bm{\mathfrak{K}}}_1={\bf k}_1$, ${\bm{\mathfrak{K}}}_2={\bf k}_2$. Recall that $\Lambda_h=\mathbb{Z}{\bf v}_1\oplus\mathbb{Z}{\bf v}_2$. The choice ${\bm{\mathfrak{v}}}_1={\bf v}_2$ would lead to equivalent results.
We consider the zigzag edge state eigenvalue problem
\begin{equation}
H^{(\varepsilon,\delta)}\Psi\ =\ E\Psi, \quad \Psi\in H^2_{{k_{\parallel}}}(\Sigma) \qquad \text{(see also \eqref{EVP})} ,
\label{EVP-1}\end{equation}
with Hamiltonian
\begin{equation}
H^{(\varepsilon,\delta)} \equiv -\Delta + \varepsilon V({\bf x}) + \delta\kappa(\delta{\bf k}_2\cdot {\bf x})W({\bf x}) \ =\ H^{(\varepsilon)} + \delta\kappa(\delta{\bf k}_2\cdot {\bf x})W({\bf x}) .
\label{Ham-ZZ}
\end{equation}
Here, $\varepsilon$ and $\delta$ are chosen to satisfy
\begin{equation}
0<|\delta|\lesssim \varepsilon^2 \ll1 .
\label{small-eps-delta}\end{equation}
There are two cases, which are delineated by the sign of the distinguished Fourier coefficient, $\varepsilon V_{1,1}$, of the unperturbed (bulk) honeycomb potential, $\varepsilon V({\bf x})$. Here,
\begin{equation*}
V_{1,1}\ \equiv\
\frac{1}{|\Omega_h|} \int_{\Omega_h} e^{-i({\bf k}_1+{\bf k}_2)\cdot{\bf y}}\ V({\bf y})\ d{\bf y},
\label{V11eq0-intro}
\end{equation*}
is assumed to be non-zero. We designate these cases:
\[ \textrm{\bf Case (1)}\qquad \varepsilon V_{1,1}>0\ \ \ \textrm{ and}\ \ \ \textrm{\bf Case (2)}\qquad \varepsilon V_{1,1}<0.\]
In Appendix \ref{V11-section} we give two explicit families of potentials, superpositions of ``bump functions'' concentrated, respectively, on a triangular lattice, $\Lambda_h^{(a)}$, and on a honeycomb structure, ${\bf H}$, which can be tuned between these two cases by varying a lattice scale parameter.
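The coefficient $V_{1,1}$ is easily evaluated by quadrature over the period cell: writing ${\bf y}=s{\bf v}_1+t{\bf v}_2$ gives ${\bf k}_1\cdot{\bf y}=2\pi s$ and ${\bf k}_2\cdot{\bf y}=2\pi t$, the Jacobian cancels the factor $1/|\Omega_h|$, and $V_{1,1}=\int_0^1\!\int_0^1 e^{-2\pi i(s+t)}\,V(s{\bf v}_1+t{\bf v}_2)\,ds\,dt$. The Python sketch below does this for the trigonometric potential appearing later in \eqref{VW-numerics}; the concrete lattice vectors are an illustrative choice, and for this $V$ one finds $V_{1,1}=1/2$, so Case (1) corresponds to $\varepsilon>0$.

```python
import numpy as np

v1 = np.array([np.sqrt(3)/2,  0.5])          # illustrative lattice frame
v2 = np.array([np.sqrt(3)/2, -0.5])
k1, k2 = 2*np.pi*np.linalg.inv(np.column_stack([v1, v2]))

c, s = np.cos(2*np.pi/3), np.sin(2*np.pi/3)
R = np.array([[c, -s], [s, c]])              # rotation by 2*pi/3

def V(x):                                    # V(x) = sum_j cos(R^j k1 . x)
    return sum(np.cos(x @ (np.linalg.matrix_power(R, j) @ k1)) for j in range(3))

# Rectangle rule on the period cell; exact (to machine precision) for
# trigonometric polynomials of low degree.
M = 64
grid = np.arange(M) / M
S, T = np.meshgrid(grid, grid, indexing='ij')
X = S[..., None]*v1 + T[..., None]*v2        # points s*v1 + t*v2 of the cell
V11 = np.mean(np.exp(-2j*np.pi*(S + T)) * V(X))
print(V11.real)                              # ~0.5: Case (1) when eps > 0
```

Only the $\cos(({\bf k}_1+{\bf k}_2)\cdot{\bf x})$ component of $V$ contributes, giving the value $1/2$.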
Under the condition $\varepsilon V_{1,1}>0$ (Case (1)) and \eqref{small-eps-delta}, we verify the spectral no-fold condition for the zigzag edge in {\bf Theorem \ref{SGC!}}. The existence of zigzag edge states
({\bf Theorem \ref{Hepsdelta-edgestates}}) then follows from Theorem \ref{thm-edgestate} and Corollary \ref{vary_k_parallel}. In particular, for all $\varepsilon$ and $\delta$ satisfying \eqref{small-eps-delta} and for each ${k_{\parallel}}$ near ${\bf K}\cdot{\bf v}_1=2\pi/3$,
the zigzag edge state eigenvalue problem \eqref{EVP-1}
has topologically protected edge states with energies sweeping out a neighborhood of $E_\star^\varepsilon$, where $({\bf K},E_\star^\varepsilon)$ is a Dirac point.
\begin{remark}[Directional versus omnidirectional spectral gaps]\label{BS-conj}
While the regime of weak potentials, implied by \eqref{small-eps-delta}, would at first seem to be a simplifying assumption, we wish to remark on a subtlety for $H^{(\varepsilon,\delta)}_\pm = -\Delta+\varepsilon V \pm \delta\kappa_\infty W$ ($\varepsilon, \delta$ small), which arises precisely in this regime. It is well-known that for sufficiently weak periodic potentials on $\mathbb{R}^d, d\ge2$, there are no spectral gaps; this is related to the
``Bethe-Sommerfeld conjecture'' \cites{Bethe-Sommerfeld:33,Skriganov:79,Dahlberg-Trubowitz:82}. Nevertheless,
if $\varepsilon V_{1,1}>0$, and $\varepsilon$ and $\delta$ are related as in \eqref{small-eps-delta}, then a {\it directional} spectral gap, {\it i.e.} an $L_{k_{\parallel}}^2(\Sigma)-$ spectral gap exists; see Theorem \ref{delta-gap} and Section \ref{intro-nofold}.
\end{remark}
Figure \ref{fig:spectra_vary_delta}
and Figure \ref{fig:k_parallel3} are illustrative of Cases (1) and (2).
The simulations were done for the Hamiltonian
$H^{(\varepsilon,\delta)}$ with $\varepsilon=\pm10$ and $0\le\delta\le10$:
\begin{equation}
\label{VW-numerics}
\begin{split}
H^{(\varepsilon,\delta)}&= -\Delta +\varepsilon V({\bf x}) + \delta\kappa(\delta{\bf k}_2\cdot{\bf x})W({\bf x}),\ \ \kappa(\zeta)=\tanh(\zeta),\\
V({\bf x})&= \sum_{j=0}^2\cos(R^j{\bf k}_1 \cdot{\bf x}),\ \
W({\bf x})= \sum_{j=0}^2(-1)^{\delta_{j2}}\sin(R^j{\bf k}_1 \cdot{\bf x}).
\end{split}
\end{equation}
Here, $R$ is the $2\pi/3-$ rotation matrix displayed in \eqref{Rdef}.
Figure \ref{fig:spectra_vary_delta} displays, for fixed $\varepsilon$, the $L^2_{{\kpar=2\pi/3}}(\Sigma)-$ spectra (plotted horizontally) of $H^{(\varepsilon,\delta)}$ corresponding to a range of $\delta$ values (strength/scale of the domain-wall perturbation) for
Cases (1) $\varepsilon V_{1,1}>0$ (top panel) and (2) $\varepsilon V_{1,1}<0$ (middle and bottom panels).
Figure \ref{fig:k_parallel3} displays, for these cases, the $L^2_{{k_{\parallel}}}(\Sigma)-$ spectra (plotted vertically)
for a range of the parallel quasi-momentum, ${k_{\parallel}}$.
\begin{remark}[Symmetries of ${k_{\parallel}}\mapsto E({k_{\parallel}})$] \label{kpar-symmetry}
Figure \ref{fig:k_parallel3} exhibits some elementary symmetries.
Since the boundary condition for the EVP \eqref{EVP-1}, $\Psi({\bf x}+{\bf v}_1)=e^{i{k_{\parallel}}}\Psi({\bf x})$,
is $2\pi-$ periodic in ${k_{\parallel}}$, the mapping $k_\parallel\mapsto E(k_\parallel)$ is $2\pi-$ periodic. Furthermore, invariance under complex conjugation implies symmetry of $k_\parallel\mapsto E(k_\parallel)$ about ${k_{\parallel}}=0$ and ${k_{\parallel}}=\pi$.
\end{remark}
\subsubsection{Non-topologically protected bifurcations of edge states}\label{unprotected}
In Case (2), where $\varepsilon V_{1,1}<0$, {\bf Theorem \ref{NO-directional-gap!}} implies that the spectral no-fold condition fails and we do not obtain a bifurcation from the Dirac point.
However, through a combination of formal asymptotic analysis and numerical computations, we do find bifurcating branches of edge states. These branches do not emanate from Dirac points (the no-fold condition fails), but rather from a spectral band edge. Moreover, as we discuss below, these states are \underline{not} topologically protected; they may be destroyed by an appropriate localized perturbation of the edge. Case (2) ($\varepsilon V_{1,1}<0)$ is illustrated by Figures \ref{fig:spectra_vary_delta} (middle and bottom panels) and Figure \ref{fig:k_parallel3} (bottom panel).
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{E_delta.pdf}
\caption{\footnotesize
$L^2_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1}(\Sigma)-$ spectra, where ${\bf K}\cdot{\bf v}_1=\frac{2}{3}\pi$,
of the Hamiltonian $H^{(\varepsilon,\delta)}$ (\eqref{VW-numerics}) for the zigzag edge ($\mathbb{R}{\bf v}_1$).
{\bf Top panel:} Case (1) $\varepsilon V_{1,1}>0$. Topologically protected bifurcation of edge states, described by Theorem \ref{SGC!} (dotted red curve), is seeded by zero-energy mode of a Dirac operator \eqref{multi-dirac-op}. The branch of edge states emanates from intersection of first and second bands ($B_1$ and $B_2$) at $E=E_\star^\varepsilon$ for $\delta=0$; see discussion in Section \ref{zigzag-summary}.
{\bf Middle panel:} Case (2) $\varepsilon V_{1,1}<0$ with domain wall function $\kappa$. Spectral no-fold condition does not hold.
Bifurcation of zigzag edge states from upper endpoint, $E=\widetilde{E}^\varepsilon$, of the first spectral band. This bifurcation is seeded by a bound state of a Schr\"odinger operator \eqref{effective-schroedinger} with effective mass $m_{\rm eff}<0$
and effective potential $Q_{\rm eff}(\zeta)$ (displayed in the inset) and is \emph{not} topologically protected; see discussion in Section \ref{unprotected}.
{\bf Bottom panel:} Case (2) $\varepsilon V_{1,1}<0$ with domain wall function $\kappa_\natural$. Bifurcation from upper endpoint of $B_1$ is destroyed. Bound states bifurcate from the lower edges of the first two spectral bands.
\label{fig:spectra_vary_delta}
}
\end{figure}
In particular, Dirac points occur at the intersection of the second and third spectral bands of $H^{(\varepsilon,0)}=-\Delta+\varepsilon V({\bf x})$ (see Theorem \ref{diracpt-small-thm}), and the failure of the spectral no-fold condition implies that an $L^2_{k_{\parallel}}-$ spectral gap does not open about $E=E_\star^\varepsilon$ for $\delta\ne0$ and small. However, for $\varepsilon V_{1,1}<0$ there is a spectral gap between the first and second spectral bands of $H^{(\varepsilon,0)}$. For the choice of edge-potential
displayed in \eqref{VW-numerics} with $\varepsilon=-10$, a family of nontrivial edge states bifurcates, for $0<|\delta|$ sufficiently small, from the upper edge of the first (lowest) $L^2_{{\kpar=2\pi/3}}-$ spectral band into the spectral gap (dotted blue curve); see middle panel of Figure \ref{fig:spectra_vary_delta}. A bifurcation of a similar nature is discussed in \cites{plotnik2013observation}.
A formal multiple scale analysis clarifies this latter bifurcation.
For ${\bf k}\in{\mathcal{B}}_h$, let $(\widetilde{E}^\varepsilon({\bf k}),\widetilde{\Phi}^\varepsilon({\bf x};{\bf k}))$ denote the eigenpair associated with the lowest spectral band.
In \cites{FLW-2d_materials:15}, we calculate that the edge state bifurcation is seeded by a discrete eigenvalue of an effective Schr\"odinger operator:
\begin{equation}
H^\varepsilon_{\rm eff}= -\frac{1}{2m^\varepsilon_{\rm eff}}\ \frac{\partial^2}{\partial\zeta^2}\ +\ Q^\varepsilon_{\rm eff}(\zeta;\kappa),\ \ {\rm where}\ \ \frac{1}{m^\varepsilon_{\rm eff}}=\ \sum_{i,j=1,2} [ D^2\widetilde{E}^\varepsilon({\bf K})]_{ij} \ {\mathfrak{K}}_2^i\ {\mathfrak{K}}_2^j,\label{effective-schroedinger}\end{equation}
and $Q^\varepsilon_{\rm eff}(\zeta;\kappa) = a\ \kappa'(\zeta) + b\ \left(\kappa^2_\infty-\kappa^2(\zeta) \right)$ is a spatially localized effective potential, depending on $\kappa(\zeta)$ and on constants $a$ and $b$, with $b>0$, which depend on $V$, $W$ and $\widetilde{\Phi}^\varepsilon$.
For the above choice of the zigzag edge-potential (middle panel of Figure \ref{fig:spectra_vary_delta}), we have $m^\varepsilon_{\rm eff}<0$ and the effective potential $Q^\varepsilon_{\rm eff}$, displayed in the figure inset, induces a bifurcation into the gap above the first band.
Now, we can construct domain wall functions, $\kappa_{_\natural}(\zeta)$, for which the corresponding $H^\varepsilon_{\rm eff}$ has no point eigenvalues in a neighborhood of the right (upper) edge of the first spectral band; see bottom panel of Figure \ref{fig:spectra_vary_delta}.
If $\kappa(\zeta)$ is chosen as above, then
$Q_{\rm eff}(\zeta; (1-\theta)\kappa+\theta\kappa_{_\natural})$, $0\leq\theta\leq1$, provides a smooth homotopy from a Schr\"odinger Hamiltonian for which there is a bifurcation of edge states ($H^{(\varepsilon,\delta)}$ with domain wall $\kappa$) to one for which the branch of edge states does not exist ($H^{(\varepsilon,\delta)}$ with domain wall $\kappa_\natural$). Therefore, this type of bifurcation is not topologically protected; see \cites{FLW-2d_materials:15} for a more detailed discussion.
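This mechanism can be reproduced in a model computation: for $\kappa=\tanh$ one has $Q^\varepsilon_{\rm eff}(\zeta)=(a+b)\operatorname{sech}^2(\zeta)$, and with $m^\varepsilon_{\rm eff}<0$ a discrete eigenvalue of $H^\varepsilon_{\rm eff}$ sits above the essential spectrum $(-\infty,0]$. In the Python sketch below the values of $m_{\rm eff}$, $a$, $b$ are illustrative stand-ins (chosen so the model is an exactly solvable well of P\"oschl-Teller type), not values computed from any particular $V$, $W$.

```python
import numpy as np

# Illustrative stand-ins: m_eff < 0 and a, b > 0 are NOT computed from V, W;
# with these choices Q = 2 sech^2 and the top eigenvalue is exactly 1 in the
# continuum (Poschl-Teller, ground state sech(z)).
m_eff = -0.5
a_c, b_c = 1.0, 1.0
k_inf = 1.0

L, n = 20.0, 1001
z = np.linspace(-L, L, n)
h = z[1] - z[0]
sech2 = 1/np.cosh(z)**2
Q = a_c*sech2 + b_c*(k_inf**2 - np.tanh(z)**2)   # = (a+b) sech^2 for kappa = tanh

# H_eff = -(1/(2 m_eff)) d^2/dz^2 + Q, discretized by centered differences
D2 = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
      + np.diag(np.ones(n-1), -1)) / h**2
H = -D2/(2*m_eff) + np.diag(Q)
E, U = np.linalg.eigh(H)
E_top, u = E[-1], U[:, -1]   # isolated eigenvalue ABOVE essential spectrum (-inf, 0]
print(E_top)                 # close to 1 for these parameters
```

Replacing $\kappa$ by a domain wall $\kappa_\natural$ changes $Q_{\rm eff}$ and can remove this eigenvalue, consistent with the non-protected character of the bifurcation described above.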
This contrast between topologically protected and non-protected states is explained, and explored numerically, in a one-dimensional setting in \cites{Thorp-etal:15}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{E_kpar.pdf}
\caption{\footnotesize
{\bf Top panel:} $L^2_{{k_{\parallel}}}(\Sigma)-$ spectrum of protected states of $H^{(\varepsilon,\delta)}$, for the case $\varepsilon V_{1,1}>0$. {\bf Bottom panel:} $L^2_{{k_{\parallel}}}(\Sigma)-$ spectrum of non-protected states of $H^{(\varepsilon,\delta)}$ for the case $\varepsilon V_{1,1}<0$. $V$, $W$ and $\kappa$ are chosen as in \eqref{VW-numerics}.
For each fixed ${k_{\parallel}}$, edge states shown in the top panel ($\varepsilon V_{1,1}>0$) arise due to a protected bifurcation from a Dirac point displayed in the top panel of Figure \ref{fig:spectra_vary_delta}. Those edge states indicated in the bottom panel ($\varepsilon V_{1,1}<0$) arise via an edge bifurcation of the type shown in the middle and bottom panels of Figure \ref{fig:spectra_vary_delta}. The band edge energies from which this latter bifurcation takes place are well-separated
from the energy of
the Dirac point which, when $\varepsilon V_{1,1}<0$, lies within the overlap of the second and third spectral bands.
}
\label{fig:k_parallel3}
\end{figure}
\subsection{Remarks on the spectral no-fold condition}\label{intro-nofold}
The spectral no-fold hypothesis of Theorem \ref{thm-edgestate} requires that the dispersion curves obtained by slicing the band structure (situated in $\mathbb{R}^2_{\bf k}\times\mathbb{R}_E$) with a plane through the Dirac point $({\bf K},E_\star)$ containing the direction ${\bm{\mathfrak{K}}}_2$ (dual direction to the ${\bm{\mathfrak{v}}}_1-$ edge) do not fold over
and fill out energies arbitrarily near $E_\star$.
This essentially ensures that a small perturbation which breaks inversion symmetry (as we induce with $H^{(\delta)}=-\Delta+V({\bf x})+\delta\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})$ for $\delta\ne0$) opens an $L^2_{k_{\parallel}}(\Sigma)-$ spectral gap about $E_\star$.
Figure \ref{fig:eps_V11_neg} is illustrative.
%
%
In the first row of plots in Figure \ref{fig:eps_V11_neg}, we consider whether the spectral no-fold condition holds at the Dirac point $({\bf K},E_\star^\varepsilon)$
for the zigzag edge, in the two cases: (1) $\varepsilon V_{1,1}>0$ and (2) $\varepsilon V_{1,1}<0$, as well as for the armchair edge.
The energy level $E=E_\star^\varepsilon$ is indicated with the dotted line. In the left panel we see that for the zigzag edge, the spectral no-fold condition holds if $\varepsilon V_{1,1}>0$. In this case, there is a topologically protected branch of edge states. In the center panel we see that the spectral no-fold condition fails if $\varepsilon V_{1,1}<0$.
Finally, in the right panel we see that it also fails for the armchair slice.
The second row of plots in Figure \ref{fig:eps_V11_neg} illustrates that the spectral no-fold condition controls whether a full $L^2_{k_{\parallel}}-$ spectral gap opens when breaking inversion symmetry.
In particular, for $\delta>0$, $H^{(\varepsilon,\delta)}$ is no longer inversion symmetric. For $\varepsilon V_{1,1}>0$, a spectral gap opens about the Dirac point, between the first and second spectral bands (see Theorem \ref{diracpt-small-thm}). For the zigzag edge with $\varepsilon V_{1,1}<0$ there is no spectral gap about the Dirac point.
(Note, however, that there is a spectral gap between the first and second spectral bands; see the discussion above in Section \ref{unprotected}.)
Similarly, for the armchair edge (right panel) there is no spectral gap for $\delta>0$.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{spectral_no_fold.pdf}
\caption{\footnotesize
Zigzag and armchair slices at the Dirac point $({\bf K},E^\varepsilon_\star)$ of the band structure of $-\Delta+\varepsilon V+\delta\kappa_\infty W$ for $\delta=0$ ({\bf first row}) and $\delta>0$ ({\bf second row}). Insets indicate zigzag and armchair quasi-momentum segments (one-dimensional Brillouin zones) parametrized by $\lambda$, for $0\leq\lambda\leq1$. See discussion of Section \ref{meta-stable?} and Theorem \ref{fourier-edge}.
\label{fig:eps_V11_neg}
}
\end{figure}
\subsection{Are there meta-stable edge states?}\label{meta-stable?}
Consider the Hamiltonian
$ H^{(\delta)} = -\Delta_{\bf x} + V({\bf x}) + \delta\kappa\left(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}\right) W({\bf x})$ (as in \eqref{schro-domain}), corresponding to an {\it arbitrary} rational edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$, {\it i.e.} ${\bm{\mathfrak{v}}}_1=a_1{\bf v}_1+b_1{\bf v}_2$, $a_1$ and $b_1$ co-prime integers, as introduced in the discussion leading up to \eqref{schro-domain}; see also Section \ref{ds-slices}.
Irrespective of whether the spectral no-fold condition holds for the ${\bm{\mathfrak{v}}}_1-$ edge (see Section \ref{intro-nofold} and Definition \ref{SGC}), the multiple scale expansion of Section \ref{formal-multiscale} produces a formal edge state to {\it any finite order} in the small parameter $\delta$.
\noindent {\it But is this formal expansion the expansion of a true edge state?}
We believe the answer is no, if the spectral no-fold condition fails.
Indeed, from Theorem \ref{fourier-edge}, we have that any ${\bm{\mathfrak{v}}}_1-$ edge state, $\Psi\in L^2_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1}$, is a superposition of Floquet-Bloch modes of $H^{(0)}=-\Delta+V$ along the quasimomentum segment:
${\bf K}+\lambda{\bm{\mathfrak{K}}}_2,\ |\lambda|\le1/2$.
The formal expansion of Section \ref{formal-multiscale}, however, is spectrally concentrated on Floquet-Bloch components along this segment which are near the Dirac point, corresponding to $|\lambda|\ll1$. If the spectral no-fold condition fails, the expansion does not capture the effect of resonant coupling to quasi-momenta along this segment ``far from ${\bf K}$'' (corresponding to $\lambda$ bounded away from $\lambda=0$ in Figure \ref{fig:eps_V11_neg}).
\medskip
\noindent{\bf Conjecture:}
{\sl Suppose the spectral no-fold condition fails for the ${\bm{\mathfrak{v}}}_1-$ edge $\mathbb{R}{\bm{\mathfrak{v}}}_1$. Then, $H^{(\delta)}$
has topologically protected long-lived (meta-stable) edge quasi-modes, $\Psi\in H^2_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1,{\rm loc}}(\Sigma)$, but generically has no topologically protected edge states.}
\subsection{Outline}
{\ }
In {\bf Section \ref{honeycomb_basics}} we review spectral theory for two-dimensional periodic Schr\"odinger operators, and introduce the triangular lattice, the honeycomb structure and honeycomb lattice potentials.
In {\bf Section \ref{sec:dirac-pts}} we define Dirac points and review the results on the existence of Dirac points for generic honeycomb potentials from \cites{FW:12,FW:14}.
In {\bf Section \ref{ds-slices}} we introduce the notion of an edge or line defect in a bulk (unperturbed) honeycomb structure. Honeycomb structures with edges parallel to a period lattice direction have a translation invariance.
Thus, an important tool is the Fourier decomposition of states which are $L^2$ (localized) in the unbounded direction, transverse to the edge, and propagating (plane-wave like) parallel to the edge.
In {\bf Section \ref{zigzag-edges}} we introduce our class of Hamiltonians, consisting of a bulk honeycomb potential, perturbed by a general line-defect / ${\bm{\mathfrak{v}}}_1-$ edge potential.
In {\bf Section \ref{formal-multiscale}} we give a formal multiple scale construction of edge states to any finite order in the small parameter $\delta$.
In {\bf Section \ref{thm-edge-state}} we formulate general hypotheses which imply the existence of a branch of topologically protected ${\bm{\mathfrak{v}}}_1-$ edge states, bifurcating from the Dirac point. The proof uses a Lyapunov-Schmidt reduction strategy, applied to a system for the Floquet-Bloch amplitudes which is equivalent to the eigenvalue problem. Such a strategy was implemented in a 1D setting in \cites{FLW-MAMS:15}. First, the edge-state eigenvalue problem is formulated in (quasi-) momentum space as an infinite system for the Floquet-Bloch mode amplitudes. We view this system as consisting of two coupled subsystems; one is for the quasi-momentum / energy components ``near'' the Dirac point, $({\bf K},E_\star)$, and the second governs
the components which are ``far'' from the Dirac point. We next solve for the far-energy components as a functional of the near-energy components and thereby obtain a reduction to a closed system for the near-energy components. The construction of this map requires that the spectral no-fold condition holds.
In {\bf Section \ref{zz-gap}} we consider the Hamiltonian, introduced in Section \ref{thm-edge-state}, in the weak-potential (low-contrast) regime and prove the existence of topologically protected {\it zigzag} edge states, under the condition $\varepsilon V_{1,1}>0$.
In {\bf Appendix \ref{V11-section}} we give two families of honeycomb potentials, depending on the lattice scale parameter, $a$, where we can tune between Case (1) $\varepsilon V_{1,1}>0$ and Case (2) $\varepsilon V_{1,1}<0$ by continuously varying the lattice scale parameter.
In a number of places, the proofs of certain assertions are very similar to those of corresponding assertions in \cites{FLW-MAMS:15}. In such cases, rather than repeating a variation on the argument, we refer to the specific proposition or lemma in \cites{FLW-MAMS:15}.
\subsection{Notation\label{subsec:notation}}
\begin{enumerate}[(1)]
\item ${\bf v}_j,\ j=1,2$ are basis vectors of the triangular lattice in $\mathbb{R}^2$, $\Lambda_h$.
${\bf k}_\ell,\ \ell=1,2$ are dual basis vectors of $\Lambda_h^*$, which satisfy ${\bf k}_\ell\cdot{\bf v}_j=2\pi\delta_{\ell j}$.
\item For ${\bf m}=(m_1,m_2)\in\mathbb{Z}^2$, ${\bf m}\vec{\bf k}=m_1{\bf k}_1+m_2{\bf k}_2$.
\item ${\bm{\mathfrak{v}}}_1=a_1{\bf v}_1+b_1{\bf v}_2\in\Lambda_h$,\ $a_1, b_1$ co-prime integers. The ${\bm{\mathfrak{v}}}_1-$ edge is $\mathbb{R}{\bm{\mathfrak{v}}}_1$.
${\bm{\mathfrak{v}}}_j,\ j=1,2$, is an alternate basis for $\Lambda_h$ with corresponding dual basis, ${\bm{\mathfrak{K}}}_\ell, \ell=1,2$, satisfying ${\bm{\mathfrak{K}}}_\ell\cdot{\bm{\mathfrak{v}}}_j=2\pi\delta_{\ell j}$.
\item ${\bm{\mathfrak{K}}}=(\mathfrak{K}^{(1)},\mathfrak{K}^{(2)})$, $\mathfrak{z}\equiv\mathfrak{K}^{(1)} + i \mathfrak{K}^{(2)}$, $|\mathfrak{z}|=|{\bm{\mathfrak{K}}}|$.
\item $\mathcal{B}$ denotes the Brillouin Zone, associated with $\Lambda_h$, shown in the right panel of Figure \ref{fig:lattices}.
\item $\inner{f,g} = \int\overline{f}g$.
\item $x\lesssim y$ if and only if there exists $C>0$ such that $x \leq Cy$. $x \approx y$ if and only if $x \lesssim y$ and $y \lesssim x$.
\item $L^{p,s}(\mathbb{R})$ is the space of functions $F:\mathbb{R}\rightarrow\mathbb{R}$ such that $(1+\abs{\cdot}^2)^{s/2}F\in L^p(\mathbb{R})$, endowed with the norm
\[\norm{F}_{L^{p,s}(\mathbb{R})} \equiv \norm{(1+\abs{\cdot}^2)^{s/2}F}_{L^p(\mathbb{R})} \approx
\sum_{j=0}^{s}\norm{\abs{\cdot}^jF}_{L^p(\mathbb{R})} < \infty,~~~ 1\leq p\leq \infty.\]
\item For $f,g\in L^2(\mathbb{R}^d)$, the Fourier transform and its inverse are given by
{\small
\begin{equation*}
\mathcal{F}\{f\}(\xi)\equiv\widehat{f}(\xi)=\frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}e^{-iX\cdot\xi}f(X)dX,~~~
\mathcal{F}^{-1}\{g\}(X)\equiv\check{g}(X)=\int_{\mathbb{R}^d}e^{ iX\cdot\xi}g(\xi)d\xi.
\label{FT-def}
\end{equation*}
}
The Plancherel relation states:
$\int_{\mathbb{R}^d} f(x)\overline{g(x)} dx = (2\pi)^d\ \int_{\mathbb{R}^d} \widehat{f}(\xi)\overline{\widehat{g}(\xi)} d\xi .$
\item $\sigma_j$, $j=1,2,3$, denote the Pauli matrices, where
\begin{equation}\label{Pauli-sigma}
\sigma_1 = \begin{pmatrix}0&1\\1&0\end{pmatrix},~~
\sigma_2 = \begin{pmatrix}0&-i\\i&0\end{pmatrix},~~\text{and}~~
\sigma_3 = \begin{pmatrix}1&0\\0&-1\end{pmatrix}.
\end{equation}
\end{enumerate}
\subsection{Acknowledgements}
We would like to thank I. Aleiner, A. Millis, J. Liu and M. Rechtsman for stimulating discussions.
\section{Floquet-Bloch Theory and Honeycomb Lattice Potentials}\label{honeycomb_basics}
We begin with a review of Floquet-Bloch theory; see, for example, \cites{Eastham:74, RS4, kuchment2012floquet, kuchment2016overview}.
\subsection{Fourier analysis on $L^2(\mathbb{R}/\Lambda)$ and $L^2(\Sigma)$}\label{fourier-analysis}
Let $\{{\bm{\mathfrak{v}}}_1,{\bm{\mathfrak{v}}}_2\}$ be a linearly independent set in $\mathbb{R}^2$ and introduce the
\begin{align}
&\text{\bf Lattice: } \Lambda = \mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2 = \{m_1{\bm{\mathfrak{v}}}_1 + m_2{\bm{\mathfrak{v}}}_2 \ : \ m_1,m_2 \in \mathbb{Z} \} ; \nonumber \\%\label{lattice-def} \\
&\text{\bf Fundamental period cell: } \Omega = \{\theta_1{\bm{\mathfrak{v}}}_1 + \theta_2{\bm{\mathfrak{v}}}_2 \ : \ 0\leq\theta_j\leq1,\ j=1,2\} ; \label{Omega-def} \\
&\text{\bf Dual lattice: }
\Lambda^\ast = \mathbb{Z}{\bm{\mathfrak{K}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{K}}}_2 = \{ {\bf m}\vec{\bm{\mathfrak{K}}}=m_1{\bm{\mathfrak{K}}}_1 + m_2{\bm{\mathfrak{K}}}_2 : m_1,m_2 \in \mathbb{Z} \} , \nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad {\bm{\mathfrak{K}}}_i\cdot{\bm{\mathfrak{v}}}_j = 2\pi\delta_{ij},\ 1\leq i,j \leq 2; \nonumber\\%\label{dual-lattice-def}\\
&\text{\bf Brillouin zone: } \mathcal{B},\ \textrm{a choice of fundamental dual cell} ; \nonumber \\
&\text{\bf Cylinder: } \Sigma\equiv \mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1 ; \nonumber \\%\label{cylinder-def}\\
&\text{\bf Fundamental domain for $\Sigma$: } \Omega_\Sigma\equiv \{\tau_1{\bm{\mathfrak{v}}}_1 + \tau_2{\bm{\mathfrak{v}}}_2 : 0\leq\tau_1\leq1, \tau_2\in\mathbb{R}\} .\label{Omega-Sigma-def}
\end{align}
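As a sanity check (illustration only, not part of the analysis), the dual basis can be computed numerically from the biorthogonality relation ${\bm{\mathfrak{K}}}_i\cdot{\bm{\mathfrak{v}}}_j = 2\pi\delta_{ij}$: if the basis vectors are the columns of a matrix, the dual basis vectors are the columns of $2\pi$ times the inverse transpose. The sample basis below is an arbitrary choice.

```python
import numpy as np

# Columns of Vmat are the lattice basis vectors v1, v2
# (any linearly independent pair; this one is an arbitrary sample).
Vmat = np.array([[1.0, 0.5],
                 [0.0, 1.5]])

# Biorthogonality K_i . v_j = 2*pi*delta_ij  <=>  Kmat^T @ Vmat = 2*pi*I,
# so the dual basis vectors are the columns of 2*pi*inv(Vmat).T.
Kmat = 2 * np.pi * np.linalg.inv(Vmat).T

assert np.allclose(Kmat.T @ Vmat, 2 * np.pi * np.eye(2))
```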
\noindent We denote by $L^2(\Omega)$ and $L^2(\Omega_\Sigma)$ the standard
$L^2$ spaces on the domains $\Omega$ and $\Omega_\Sigma$, respectively.
\begin{definition}[The spaces $L^2(\mathbb{R}^2/\Lambda)$ and $L^2_{\bf k}$]\label{L2-spaces}
\begin{enumerate}
\item[(a)] $L^2(\mathbb{R}^2/\Lambda)$ denotes the space of $L^2_{loc}$ functions which are $\Lambda-$ periodic:
$ f\in L^2(\mathbb{R}^2/\Lambda)$ if and only if $f({\bf x}+{\bm{\mathfrak{v}}})=f({\bf x})$ for all $ {\bf x}\in\mathbb{R}^2,\ \ {\bm{\mathfrak{v}}}\in\Lambda$ and
$f\in L^2(\Omega)$.
\item[(b)] $L^2_{\bf k}$ denotes the space of $L^2_{loc}$ functions which satisfy a pseudo-periodic boundary condition: $ f({\bf x}+{\bm{\mathfrak{v}}})=e^{i{\bf k}\cdot{\bm{\mathfrak{v}}}}f({\bf x})$ for all ${\bf x}\in\mathbb{R}^2,\ \ {\bm{\mathfrak{v}}}\in\Lambda$ and $e^{-i{\bf k}\cdot{\bf x}}f({\bf x})\in L^2(\mathbb{R}^2/\Lambda)$.
For $f$ and $g$ in $L^2_{\bf k}$, $\overline{f}g$ is in $L^1(\mathbb{R}^2/\Lambda)$ and we define their inner product by
\begin{equation*}
\label{L2k_inner_def}
\inner{f,g}_{L^2_{\bf k}} = \int_\Omega \overline{f({\bf x})} g({\bf x}) d{\bf x}.
\end{equation*}
\end{enumerate}
\end{definition}
%
\begin{definition}[The spaces $L^2(\Sigma)$ and $L^2_{k_\parallel}$]\label{L2Sigma-spaces}
%
\begin{enumerate}
\item [(a)] $L^2(\Sigma)=L^2(\mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1)$ denotes the space of $L^2_{loc}$ functions, which are periodic in the direction of ${\bm{\mathfrak{v}}}_1$: $f({\bf x}+{\bm{\mathfrak{v}}}_1)=f({\bf x}), \text{\ \ for\ all\ } {\bf x}\in\mathbb{R}^2$ and such that $f\in L^2(\Omega_\Sigma)$,
where $\Omega_\Sigma$ is the fundamental domain for $\Sigma$; see \eqref{Omega-Sigma-def}.
\item [(b)] $L^2_{{k_{\parallel}}}(\Sigma)=L^2_{{k_{\parallel}}}$ denotes the space of $L^2_{loc}$ functions:
\begin{enumerate}
\item [(1)] which are ${k_{\parallel}}-$ pseudo-periodic in the direction ${\bm{\mathfrak{v}}}_1$:
\begin{equation*}
f({\bf x}+{\bm{\mathfrak{v}}}_1)=e^{i{k_{\parallel}}}f({\bf x}), \text{\ \ for } {\bf x}\in\mathbb{R}^2, \ \ \text{and}
\end{equation*}
\item [(2)] such that $e^{-i (1/2\pi ) k_{\parallel}{\bm{\mathfrak{K}}}_1\cdot{\bf x}}f({\bf x})$, which is defined on $\Sigma$, is in $L^2(\Omega_\Sigma)$.
\end{enumerate}
For $f$ and $g$ in $L^2_{k_{\parallel}}(\Sigma)$, $\overline{f}g$ is in $L^1(\Sigma)$ and we define their inner product by
\begin{equation*}
\label{L2kpar_inner_def}
\inner{f,g}_{L^2_{k_{\parallel}}} = \int_{\Omega_\Sigma} \overline{f({\bf x})} g({\bf x}) d{\bf x}.
\end{equation*}
\end{enumerate}
%
%
The respective Sobolev spaces $H^s(\mathbb{R}^2/\Lambda)$, $H^s_{\bf k}$, $H^s(\Sigma)$ and $H^s_{{k_{\parallel}}}(\Sigma)=H^s_{{k_{\parallel}}}$ are defined in a natural way.
\noindent {\bf Simplified notational convention:} We shall do many calculations requiring us to explicitly write out inner products like $\inner{f,g}_{L^2(\Sigma)}$ and $\inner{f,g}_{L^2_{k_{\parallel}}(\Sigma)}$. We shall write these as $ \int_\Sigma \overline{f({\bf x})}g({\bf x})\ d{\bf x}$ rather than as
$ \int_{\Omega_{_\Sigma}} \overline{f({\bf x})}g({\bf x})\ d{\bf x}$.
\end{definition}
If $f\in L^2(\mathbb{R}^2/\Lambda)$, then it can be expanded in a Fourier series:
\begin{equation}
\label{Omega-fourier}
f({\bf x}) = \sum_{{\bf m}\in\mathbb{Z}^2} f_{{\bf m}} e^{i{\bf m}\vec{\bm{\mathfrak{K}}}\cdot{\bf x}}, \quad f_{{\bf m}} = \frac{1}{|\Omega|}\int_\Omega e^{-i{\bf m}\vec{\bm{\mathfrak{K}}}\cdot{\bf y}}f({\bf y})d{\bf y} \ , \ \ {\bf m}\vec{\bm{\mathfrak{K}}}=m_1{\bm{\mathfrak{K}}}_1+m_2{\bm{\mathfrak{K}}}_2 \ ,
\end{equation}
where $|\Omega|$ denotes the area of the fundamental cell, $\Omega$.
In Section \ref{Fourier-edge}, we show that,
if $g\in L^2(\Sigma)$, then it can be expanded in a Fourier series in ${\bm{\mathfrak{v}}}_1\cdot{\bf x}$ and Fourier transform in ${\bm{\mathfrak{v}}}_2\cdot{\bf x}$:
\begin{align*}
g({\bf x}) &= 2\pi\ \sum_{n\in\mathbb{Z}} e^{in{\bm{\mathfrak{K}}}_1\cdot{\bf x}} \int_\mathbb{R} \widehat{g}_n(2\pi\xi)\, e^{i\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\, d\xi\ , \\
2\pi\ \widehat{g}_n(2\pi\xi) &= \frac{1}{\left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right| } \int_\Sigma e^{-i\xi{\bm{\mathfrak{K}}}_2\cdot{\bf y}}\, e^{-in{\bm{\mathfrak{K}}}_1\cdot{\bf y}}\, g({\bf y})\, d{\bf y} \ .
\end{align*}
\subsection{Floquet-Bloch Theory}\label{flo-bl-theory}
Let $Q({\bf x})$ denote a real-valued potential which is periodic with respect to $\Lambda$. We shall assume throughout this paper that $Q\in C^\infty(\mathbb{R}^2/\Lambda)$, although we expect that this condition can be relaxed without much extra work.
Introduce the Schr\"odinger Hamiltonian
$H \equiv -\Delta + Q({\bf x})$.
For each ${\bf k}\in\mathbb{R}^2$, we study the {\it Floquet-Bloch eigenvalue problem} on $L^2_{\bf k}$:
\begin{align}
\label{fl-bl-evp}
&H \Phi({\bf x};{\bf k}) = E({\bf k}) \Phi({\bf x};{\bf k}), \ \ {\bf x}\in\mathbb{R}^2, \\
&\Phi({\bf x}+{\bm{\mathfrak{v}}};{\bf k})=e^{i{\bf k}\cdot{\bm{\mathfrak{v}}}}\Phi({\bf x};{\bf k}), \ \ \forall {\bm{\mathfrak{v}}}\in\Lambda \nonumber .
\end{align}
An $L^2_{\bf k}$ solution of \eqref{fl-bl-evp} is called a {\it Floquet-Bloch} state.
Since the ${\bf k}-$ pseudo-periodic boundary condition in \eqref{fl-bl-evp} is invariant under translations in the dual period lattice, $\Lambda^\ast$, it suffices to restrict our attention to ${\bf k}\in\mathcal{B}$, where $\mathcal{B}$, the {\it Brillouin Zone}, is a fundamental cell in ${\bf k}-$ space.
An equivalent formulation to \eqref{fl-bl-evp} is obtained by setting $\Phi({\bf x};{\bf k})=e^{i{\bf k}\cdot{\bf x}}p({\bf x};{\bf k})$. Then,
\begin{equation}
\label{fl-bl-evp-per}
H({\bf k}) p({\bf x};{\bf k}) = E({\bf k}) p({\bf x};{\bf k}), \ {\bf x}\in\mathbb{R}^2, \quad p({\bf x}+{\bm{\mathfrak{v}}};{\bf k})=p({\bf x};{\bf k}),\ \ {\bm{\mathfrak{v}}}\in\Lambda,
\end{equation}
where
$ H({\bf k}) \equiv -(\nabla+i{\bf k})^2 + Q({\bf x})$
is a self-adjoint operator on $L^2(\mathbb{R}^2/\Lambda)$.
The eigenvalue problem \eqref{fl-bl-evp-per} has a discrete set of eigenvalues
$E_1({\bf k})\leq E_2({\bf k})\leq \cdots \leq E_b({\bf k})\leq \cdots$,
with $L^2(\mathbb{R}^2/\Lambda)-$ eigenfunctions $p_b({\bf x};{\bf k}),\ b=1,2,3,\ldots$.
The maps ${\bf k}\in\mathcal{B}\mapsto E_j({\bf k})$ are, in general, Lipschitz continuous functions; see, for example, Appendix A of \cites{FW:14}. For each ${\bf k}\in\mathcal{B}$, the set $\{p_j({\bf x};{\bf k})\}_{j\geq1}$ can be taken to be a complete orthonormal basis for $L^2(\mathbb{R}^2/\Lambda)$.
As ${\bf k}$ varies over $\mathcal{B}$, $E_b({\bf k})$ sweeps out a closed real interval. The union over $b\ge1$ of these closed intervals is exactly the $L^2(\mathbb{R}^2)-$ spectrum of $H=-\Delta+Q({\bf x})$:
$
\text{spec} \left(H\right) = \bigcup_{{\bf k}\in\mathcal{B}} \text{spec} \left(H({\bf k})\right).
$
Furthermore, the set $\{\Phi_b({\bf x};{\bf k})\}_{b\geq1,{\bf k}\in\mathcal{B}}$ is complete in $L^2(\mathbb{R}^2)$:
\begin{equation*}
\label{fb-R2-completeness}
f({\bf x}) = \sum_{b\geq1} \int_\mathcal{B} \inner{\Phi_b(\cdot;{\bf k}),f(\cdot)}_{L^2(\mathbb{R}^2)} \Phi_b({\bf x};{\bf k}) d{\bf k}
\equiv \sum_{b\geq1} \int_\mathcal{B} \widetilde{f}_b({\bf k}) \Phi_b({\bf x};{\bf k}) d{\bf k} ,
\end{equation*}
where the sum converges in the $L^2$ norm.
\subsection{The honeycomb period lattice, $\Lambda_h$, and its dual, $\Lambda_h^*$}\label{sec:honeycomb}
Consider $\Lambda_h=\mathbb{Z}{\bf v}_1 \oplus \mathbb{Z}{\bf v}_2$, the equilateral triangular lattice generated by the basis vectors: ${\bf v}_1=(\frac{\sqrt{3}}{2} , \frac{1}{2})^T$, ${\bf v}_2=(\frac{\sqrt{3}}{2} , -\frac{1}{2})^T$; see Figure \ref{fig:lattices}, left panel.
The dual lattice $\Lambda_h^* =\ \mathbb{Z} {\bf k}_1\oplus \mathbb{Z}{\bf k}_2$ is spanned by the dual basis vectors: ${\bf k}_1= q( \frac{1}{2} , \frac{\sqrt{3}}{2} )^T$, ${\bf k}_2= q( \frac{1}{2} , -\frac{\sqrt{3}}{2} )^T$,
where $ q\equiv \frac{4\pi}{\sqrt{3}}$,
with the biorthonormality relations ${\bf k}_{i}\cdot {\bf v}_{{j}}=2\pi\delta_{ij}$. Other useful relations are:
$|{\bf v}_1|=|{\bf v}_2|=1$, ${\bf v}_1\cdot{\bf v}_2=\frac{1}{2}$, $|{\bf k}_1|=|{\bf k}_2|=q$ and
${\bf k}_1\cdot{\bf k}_2=-\frac{1}{2}q^2$.
The Brillouin zone, ${\mathcal{B}}_h$, is a regular hexagon in $\mathbb{R}^2$. Denote by ${\bf K}$ and ${\bf K'}$ its top and bottom vertices (see right panel of Figure \ref{fig:lattices}) given by:\
${\bf K}\equiv\frac{1}{3}\left({\bf k}_1-{\bf k}_2\right),\ \ {\bf K'}\equiv-{\bf K}=\frac{1}{3}\left({\bf k}_2-{\bf k}_1\right)$.
All six vertices of ${\mathcal{B}}_h$ can be generated by application of the matrix $R$,
which rotates a vector in $\mathbb{R}^2$ clockwise by $2\pi/3$:
\begin{equation}
R\ =\ \begin{pmatrix}
-\frac{1}{2} & \frac{\sqrt{3}}{2}\\[4pt]
-\frac{\sqrt{3}}{2} & -\frac{1}{2}
\end{pmatrix}\ .
\label{Rdef}\end{equation}
The vertices of ${\mathcal{B}}_h$ fall into two groups, generated by the action of $R$ on ${\bf K}$ and ${\bf K}'$:
${\bf K}-$ type-points: $ {\bf K},\ R{\bf K}={\bf K}+{\bf k}_2,\ R^2{\bf K}={\bf K}-{\bf k}_1$, and
${\bf K'}-$ type-points: $ {\bf K'},\ R{\bf K'}={\bf K}'-{\bf k}_2,\ R^2{\bf K'}={\bf K'}+{\bf k}_1$.
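The relations above among the lattice data and the vertices of ${\mathcal{B}}_h$ can be confirmed numerically; the following short sketch (illustration only) checks the biorthogonality, the stated inner products, and the action of $R$ on ${\bf K}$.

```python
import numpy as np

# Honeycomb basis vectors and duals.
v1 = np.array([np.sqrt(3)/2,  1/2])
v2 = np.array([np.sqrt(3)/2, -1/2])
q  = 4*np.pi/np.sqrt(3)
k1 = q*np.array([1/2,  np.sqrt(3)/2])
k2 = q*np.array([1/2, -np.sqrt(3)/2])

# Biorthogonality and the stated inner-product relations.
assert np.allclose([k1@v1, k1@v2, k2@v1, k2@v2], [2*np.pi, 0, 0, 2*np.pi])
assert np.isclose(v1@v2, 1/2) and np.isclose(k1@k2, -q**2/2)

# Clockwise rotation by 2*pi/3 and the vertex K of the Brillouin zone.
R = np.array([[-1/2,           np.sqrt(3)/2],
              [-np.sqrt(3)/2, -1/2        ]])
K = (k1 - k2)/3

# R generates the K-type vertices: R K = K + k2, R^2 K = K - k1.
assert np.allclose(R @ K, K + k2)
assert np.allclose(R @ R @ K, K - k1)
```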
Functions which are periodic on $\mathbb{R}^2$ with respect to the lattice $\Lambda_h$ may be viewed as functions on the torus, $\mathbb{R}^2/\Lambda_h$.
As a fundamental period cell, we choose the parallelogram spanned by ${\bf v}_1$ and ${\bf v}_2$, denoted
$\Omega_h$.
\begin{remark}[Symmetry Reduction]\label{symmetry-reduction}
Let $(\Phi({\bf x};{\bf k}), E({\bf k}))$ denote a Floquet-Bloch eigenpair for the eigenvalue problem \eqref{fl-bl-evp} with quasi-momentum ${\bf k}$. Since the potential is real,
$(\tilde{\Phi}({\bf x};{\bf k})\equiv\overline{\Phi({\bf x};{\bf k})}, E({\bf k}))$ is a Floquet-Bloch eigenpair for the eigenvalue problem with quasi-momentum $-{\bf k}$. The above relations among the vertices of ${\mathcal{B}}_h$ and the $\Lambda_h^*$- periodicity of ${\bf k}\mapsto E({\bf k})$ and ${\bf k}\mapsto \Phi({\bf x};{\bf k})$ imply that the local character of the dispersion surfaces in a neighborhood of any vertex of ${\mathcal{B}}_h$ is determined by its character about any other vertex of ${\mathcal{B}}_h$.
\end{remark}
\subsection{Honeycomb potentials}\label{honeycomb-potentials}
\begin{definition}[Honeycomb potentials]\label{honeyV}
Let $V$ be real-valued and $V\in C^\infty(\mathbb{R}^2)$.
$V$ is a \underline{honeycomb potential}
if there exists ${\bf x}_0\in\mathbb{R}^2$ such that $\tilde{V}({\bf x})=V({\bf x}-{\bf x}_0)$ has the following properties:
\begin{enumerate}
\item[(V1)] $\tilde{V}$ is $\Lambda_h-$ periodic, {\it i.e.} $\tilde{V}({\bf x}+{\bf v})=\tilde{V}({\bf x})$ for all ${\bf x}\in\mathbb{R}^2$ and ${\bf v}\in\Lambda_h$.
\item[(V2)] $\tilde{V}$ is even or inversion-symmetric, {\it i.e.} $\tilde{V}(-{\bf x})=\tilde{V}({\bf x})$.
\item[(V3)] $\tilde{V}$ is $\mathcal{R}$- invariant, {\it i.e.}
$ \mathcal{R}[\tilde{V}]({\bf x})\ \equiv\ \tilde{V}(R^*{\bf x})\ =\ \tilde{V}({\bf x}),$
where, $R^*$ is the counter-clockwise rotation matrix by $2\pi/3$, {\it i.e.} $R^*=R^{-1}$, where $R$ is given by \eqref{Rdef}.
\end{enumerate}
N.B. Throughout this paper, we shall omit the tildes on $V$ and choose coordinates with ${\bf x}_0=0$.
\end{definition}
Introduce the mapping $\widetilde{R}:\mathbb{Z}^2\to\mathbb{Z}^2$ which acts on the indices of the Fourier coefficients of $V$:
$ \widetilde{R} (m_1, m_2) = (-m_2, m_1-m_2)$ and therefore
$ \widetilde{R}^2 (m_1, m_2) = (m_2-m_1,-m_1)$, and $\widetilde{R}^3(m_1,m_2) = (m_1,m_2)$.
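These algebraic identities, together with the compatibility $R\,({\bf m}\vec{\bf k})=(\widetilde{R}{\bf m})\vec{\bf k}$ (our reading of how $\widetilde{R}$ mirrors the clockwise rotation $R$ on the dual lattice, not a formula displayed in the text), can be checked numerically:

```python
import numpy as np

q  = 4*np.pi/np.sqrt(3)
k1 = q*np.array([1/2,  np.sqrt(3)/2])
k2 = q*np.array([1/2, -np.sqrt(3)/2])
R  = np.array([[-1/2,           np.sqrt(3)/2],
               [-np.sqrt(3)/2, -1/2        ]])

def Rt(m):
    # Rtilde (m1, m2) = (-m2, m1 - m2)
    return (-m[1], m[0] - m[1])

for m in [(1, 0), (0, 1), (2, -3), (5, 7)]:
    # three-cycle property: Rtilde^3 = identity
    assert Rt(Rt(Rt(m))) == m
    # compatibility with the clockwise rotation R on dual-lattice points
    mk = m[0]*k1 + m[1]*k2
    rm = Rt(m)
    assert np.allclose(R @ mk, rm[0]*k1 + rm[1]*k2)
```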
Any ${\bf m}\neq0$ lies on an $\widetilde{R}-$ orbit of length exactly three \cites{FW:12}. We say that ${\bf m}$ and ${\bf n}$ are in the same equivalence class if ${\bf m}$ and ${\bf n}$ lie on the same $3-$ cycle. Let $\widetilde{S}$ denote a set consisting of exactly one representative from each equivalence class. Honeycomb lattice potentials have the following Fourier series characterization \cites{FW:12}:
\begin{proposition}
\label{honey-cosine}
Let $V({\bf x})$ denote a honeycomb lattice potential.
Then,
\begin{align*}
V({\bf x}) &= v_{\bf 0} + \sum_{{\bf m}\in\widetilde{S}} \ v_{\bf m} \
\left[ \cos({\bf m}\vec{\bf k}\cdot{\bf x}) + \cos((\widetilde{R}{\bf m})\vec{\bf k}\cdot{\bf x}) + \cos((\widetilde{R}^2{\bf m})\vec{\bf k}\cdot{\bf x}) \right] ,
\end{align*}
where ${\bf m}\vec{\bf k} = m_1{\bf k}_1 + m_2{\bf k}_2$ and the $v_{\bf m}$ are real.
\end{proposition}
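As an illustration of Proposition \ref{honey-cosine}, take the single representative ${\bf m}=(1,0)$ with $v_{\bf m}=1$, giving $V({\bf x})=\cos({\bf k}_1\cdot{\bf x})+\cos({\bf k}_2\cdot{\bf x})+\cos(({\bf k}_1+{\bf k}_2)\cdot{\bf x})$, and verify properties (V1)-(V3) at random sample points (a sketch, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
q  = 4*np.pi/np.sqrt(3)
k1 = q*np.array([1/2,  np.sqrt(3)/2])
k2 = q*np.array([1/2, -np.sqrt(3)/2])
v1 = np.array([np.sqrt(3)/2,  1/2])
v2 = np.array([np.sqrt(3)/2, -1/2])
Rstar = np.array([[-1/2,         -np.sqrt(3)/2],
                  [ np.sqrt(3)/2, -1/2        ]])  # counter-clockwise by 2*pi/3

def V(x):
    # one Rtilde-orbit of cosines: (1,0) -> k1, (0,1) -> k2, (-1,-1) -> -(k1+k2)
    return np.cos(k1@x) + np.cos(k2@x) + np.cos((k1+k2)@x)

for _ in range(20):
    x = rng.standard_normal(2)
    assert np.isclose(V(x + v1), V(x))      # (V1) periodicity in v1
    assert np.isclose(V(x + v2), V(x))      # (V1) periodicity in v2
    assert np.isclose(V(-x), V(x))          # (V2) inversion symmetry
    assert np.isclose(V(Rstar @ x), V(x))   # (V3) rotation invariance
```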
\section{Dirac Points}\label{sec:dirac-pts}
In this section we summarize results of \cites{FW:12} on {\it Dirac points}. These are conical singularities in the dispersion surfaces of $H_V=-\Delta+V({\bf x})$, where $V$ is a honeycomb lattice potential.
Let ${\bf K}_\star$ denote any vertex of ${\mathcal{B}}_h$, and recall that $L^2_{{\bf K}_\star}$ is the space of ${\bf K}_\star-$ pseudo-periodic functions.
A key property of honeycomb lattice potentials, $V$, is that $H_V$ and $\mathcal{R}$, defined in (V3) of Definition \ref{honeyV}, leave a dense subspace of $L^2_{{\bf K}_\star}$ invariant. Furthermore, restricted to this dense subspace of $L^2_{{\bf K}_\star}$, $H_{V}$ commutes with $\mathcal{R}$: $\left[\mathcal{R},H_{V}\right] = 0$.
Since $\mathcal{R}$ has eigenvalues $1,\tau$ and $\overline{\tau}$, it is natural to split $L^2_{{\bf K}_\star}$ into the direct sum:
\begin{equation*}
L^2_{{\bf K}_\star}\ =\ L^2_{{\bf K}_\star,1}\oplus L^2_{{\bf K}_\star,\tau}\oplus L^2_{{\bf K}_\star,\overline\tau}.\label{L2-directsum}
\end{equation*}
Here, $L^2_{{\bf K}_\star,\sigma}$, where $\sigma=1,\tau,\overline{\tau}$ and $\tau=\exp(2\pi i/3)$, denote the invariant eigenspaces of $\mathcal{R}$:
\begin{equation*}
L^2_{{\bf K}_\star,\sigma}\ =\ \Big\{g\in L^2_{{\bf K}_\star}: \mathcal{R}g=\sigma g\Big\}\ .
\label{L2Ksigma} \end{equation*}
We next give a precise definition of a Dirac point.
\begin{definition}\label{dirac-pt-defn}
Let $V({\bf x})$ be a smooth, real-valued, even (inversion symmetric) and periodic potential on $\mathbb{R}^2$.
Denote by ${\mathcal{B}}_h$, the Brillouin zone. Let ${\bf K}\in{\mathcal{B}}_h$.
The energy / quasi-momentum pair $({\bf K},E_\star)\in{\mathcal{B}}_h\times\mathbb{R}$ is called a {\it Dirac point} if there exists $b_\star\ge1$ such that:
\begin{enumerate}
\item $E_\star$ is an $L^2_{\bf K}-$ eigenvalue of $H_V$ of multiplicity two.
\item $\textrm{Nullspace}\Big(H_V-E_\star I\Big)\ =\
{\rm span}\Big\{ \Phi_1({\bf x}) , \Phi_2({\bf x})\Big\}$,
where $\Phi_1\in L^2_{{\bf K},\tau}\ (\mathcal{R}\Phi_1=\tau\Phi_1)$ and
$\Phi_2({\bf x}) = \left(\mathcal{C}\circ\mathcal{I}\right)[\Phi_1]({\bf x}) = \overline{\Phi_1(-{\bf x})}\in L^2_{{\bf K},\bar\tau}\ (\mathcal{R}\Phi_2=\overline{\tau}\Phi_2)$,
and $\inner{\Phi_a, \Phi_b}_{L^2_{\bf K}(\Omega)} = \delta_{ab}$, $a,b=1,2$.
\item There exist $\lambda_{\sharp}\ne0$, $\zeta_0>0$, and Floquet-Bloch eigenpairs
\[ {\bf k}\mapsto (\Phi_{b_\star+1}({\bf x};{\bf k}),E_{b_\star+1}({\bf k}))\ \ {\rm and}\ \ {\bf k}\mapsto (\Phi_{b_\star}({\bf x};{\bf k}),E_{b_\star}({\bf k})),\]
and Lipschitz functions $e_j({\bf k}),\ j=b_\star, b_\star+1$, where $e_j({\bf K})=0$, defined for $|{\bf k}-{\bf K}|<\zeta_0$ such that
\begin{align}
E_{b_\star+1}({\bf k})-E_\star\ &=\ + |\lambda_\sharp|\
\left| {\bf k}-{\bf K} \right|\
\left( 1\ +\ e_{b_\star+1}({\bf k}) \right),\nonumber\\
E_{b_\star}({\bf k})-E_\star\ &=\ - |\lambda_\sharp|\
\left| {\bf k}-{\bf K} \right|\
\left( 1\ +\ e_{b_\star}({\bf k}) \right),\label{cones}
\end{align}
where $|e_j({\bf k})|\le C|{\bf k}-{\bf K}|,\ j=b_\star, b_\star+1$, for some $C>0$.
\end{enumerate}
\end{definition}
In \cites{FW:12}, the authors prove the following
\begin{proposition}\label{lambda-is=|lambdasharp|}
Suppose conditions $1$ and $2$ of Definition \ref{dirac-pt-defn} hold and let
$\{c({\bf m})\}_{{\bf m}\in\mathcal S}$ denote the sequence of $L^2_{{\bf K},\tau}-$ Fourier-coefficients of $\Phi_1({\bf x})$ normalized as in \cites{FW:12}.
Define the sum
\begin{equation}
\lambda_\sharp\ \equiv\ \sum_{{\bf m}\in\mathcal{S}} c({\bf m})^2\ \left(\begin{array}{c}1\\ i\end{array}\right)\cdot \left({\bf K}+{\bf m}\vec{\bf k}\right)\ .
\label{lambda-sharp}
\end{equation}
Here, $\mathcal{S}\subset\mathbb{Z}^2$ is defined in \cites{FW:12}.
If $\lambda_\sharp\ne0$, then condition
$3$ of Definition \ref{dirac-pt-defn} holds (see \eqref{cones}).
\end{proposition}
\noindent Therefore Dirac points are found by verifying conditions $1$ and $2$ of Definition \ref{dirac-pt-defn} and the additional (non-degeneracy) condition: $\lambda_\sharp\ne0$.
Furthermore, Theorem 4.1 of \cites{FW:12} and Theorem 3.2 of \cites{FW:14} imply the following local behavior of Floquet-Bloch modes near the Dirac point:
\footnote{The factor $\frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}$ in \eqref{Phib_star+1}-\eqref{Phib_star} corrects
a typographical error in equation (3.13) of \cites{FW:14}.}
\begin{corollary}\label{Phibstar-bstar+1}
\begin{align}
\Phi_{b_\star+1}({\bf x};{\bf k})\ &=\ \frac{1}{\sqrt2}\ \Big[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\
\frac{({\bf k}-{\bf K})^{(1)}+i({\bf k}-{\bf K})^{(2)}}{|{\bf k}-{\bf K}|}\ \Phi_1({\bf x})\ +\ \Phi_2({\bf x})\ \Big] + \Phi_{b_\star+1}^{(1)}({\bf x};{\bf k}), \label{Phib_star+1}\\
\Phi_{b_\star}({\bf x};{\bf k})\ &=\ \frac{1}{\sqrt2}\ \Big[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\
\frac{({\bf k}-{\bf K})^{(1)}+i({\bf k}-{\bf K})^{(2)}}{|{\bf k}-{\bf K}|}\ \Phi_1({\bf x})\ -\ \Phi_2({\bf x})\ \Big] + \Phi_{b_\star}^{(1)}({\bf x};{\bf k}) ,
\label{Phib_star}\end{align}
where $ \Phi_{j}^{(1)}(\cdot;{\bf k}) = \mathcal{O}(|{\bf k}-{\bf K}|)$ in $H^2(\Omega_h)$ as $|{\bf k}-{\bf K}|\to0$.
\end{corollary}
In the next section we discuss the result of \cites{FW:12}, that $-\Delta +\varepsilon V$ has Dirac points for generic $\varepsilon$.
\subsection{Dirac points of $-\Delta+\varepsilon V({\bf x})$, $\varepsilon$ generic} \label{dpts-generic-eps}
The strategy used in \cites{FW:12} to produce Dirac points is based on a bifurcation theory for the operator $-\Delta + \varepsilon V({\bf x})$ acting in $L^2_{\bf K}$, from the $\varepsilon=0$ limit. We describe the setup here, since we shall make detailed use of it.
Consider $-\Delta$ acting on $L^2_{\bf K}$. We note that $E^0_\star\equiv |{\bf K}|^2$ is an eigenvalue with multiplicity three, since the three vertices of the regular hexagon, $\mathcal{B}_h$: ${\bf K}, R{\bf K}$ and $R^2{\bf K}$ are equidistant from the origin.
The corresponding three-dimensional eigenspace has an orthonormal basis consisting of the functions: $\Phi_\sigma({\bf x}) = e^{i{\bf K}\cdot{\bf x}}p_\sigma({\bf x}) \in L^2_{{\bf K},\sigma},\ \sigma=1,\tau,\overline{\tau}$, defined by
\begin{align}
\Phi_\sigma ({\bf x}) &= e^{i{\bf K}\cdot{\bf x}} p_\sigma({\bf x}) , \qquad ( \sigma=1,\tau,\overline{\tau} )\nonumber\\
&= \frac{1}{\sqrt{3|\Omega|}}\ \Big[\ e^{i{\bf K}\cdot{\bf x}} + \overline{\sigma} e^{iR{\bf K}\cdot{\bf x}}+ \sigma e^{iR^2{\bf K}\cdot{\bf x}}\ \Big] \nonumber \\
&= \frac{1}{\sqrt{3|\Omega|}}\ e^{i{\bf K}\cdot{\bf x}} \Big[\ 1 + \overline{\sigma} e^{i{\bf k}_2\cdot{\bf x}} + \sigma e^{-i{\bf k}_1\cdot{\bf x}}\ \Big]\ .
\label{p_sigma}
\end{align}
We note that
\begin{equation}
\inner{\Phi_\sigma,\Phi_{\tilde{\sigma}}}_{L^2_{\bf K}}=
\inner{p_\sigma, p_{\tilde{\sigma}}}_{L^2(\mathbb{R}^2/\Lambda_h)}= \delta_{\sigma,{\tilde{\sigma}}} \ .
\label{orthon}\end{equation}
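The orthonormality relation \eqref{orthon} for the explicit modes \eqref{p_sigma} can be confirmed by quadrature on a uniform grid over the fundamental cell, which is exact here because the integrand is a trigonometric polynomial of low degree. In cell coordinates ${\bf x}=\theta_1{\bf v}_1+\theta_2{\bf v}_2$ one has ${\bf k}_1\cdot{\bf x}=2\pi\theta_1$ and ${\bf k}_2\cdot{\bf x}=2\pi\theta_2$. A minimal sketch (not part of the proof):

```python
import numpy as np

tau  = np.exp(2j*np.pi/3)
area = np.sqrt(3)/2                       # |Omega_h| = |v1 ^ v2|

def p(sigma, t1, t2):
    # p_sigma in cell coordinates x = t1*v1 + t2*v2,
    # where k1.x = 2*pi*t1 and k2.x = 2*pi*t2
    return (1 + np.conj(sigma)*np.exp(2j*np.pi*t2)
              + sigma*np.exp(-2j*np.pi*t1)) / np.sqrt(3*area)

N = 8
t1, t2 = np.meshgrid(np.arange(N)/N, np.arange(N)/N)
sigmas = [1, tau, np.conj(tau)]
for a, s in enumerate(sigmas):
    for b, sp in enumerate(sigmas):
        # grid average times |Omega| is the exact cell integral here
        inner = area * np.mean(np.conj(p(s, t1, t2)) * p(sp, t1, t2))
        assert np.isclose(inner, 1.0 if a == b else 0.0)
```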
%
In Theorem 5.1 of \cites{FW:12}, the authors proved that for real, small and non-zero $\varepsilon$
and under the assumption that
$V$ satisfies the non-degeneracy condition:
\begin{equation}
V_{1,1}\ \equiv\
\frac{1}{|\Omega_h|} \int_{\Omega_h} e^{-i({\bf k}_1+{\bf k}_2)\cdot{\bf y}}\ V({\bf y})\ d{\bf y}\ne0,
\label{V11eq0}
\end{equation}
the multiplicity three eigenvalue, $E^0_\star=|{\bf K}|^2$, splits into\\
(A) a multiplicity two eigenvalue, $E^\varepsilon_\star$, with two-dimensional $L^2_{{\bf K},\tau}\oplus L^2_{{\bf K},\overline{\tau}}-$ eigenspace structure, and\\
(B) a simple eigenvalue, $\widetilde{E}^\varepsilon$, with one-dimensional eigenspace, a subspace of $L^2_{{\bf K},1}$.
For all $\varepsilon$ sufficiently small, the quasi-momentum pairs $({\bf K},E^\varepsilon_\star)$ are Dirac points in the sense of Definition \ref{dirac-pt-defn}.
Furthermore,
a continuation argument is then used to extend this result from the regime of sufficiently small $\varepsilon$ to the regime of arbitrary $\varepsilon$ outside of a possible discrete set; see \cites{FW:12}
and the refinement concerning the possible exceptional set of $\varepsilon$ values in Appendix D of \cites{FLW-MAMS:15}.
We first state the result for arbitrarily large and generic $\varepsilon$, and then the more refined picture for $|\varepsilon|>0$ and sufficiently small.
\begin{theorem}[Generic $\varepsilon$]\label{diracpt-thm}
Let $V({\bf x})$ be a honeycomb lattice potential and consider the
parameter family of Schr\"odinger operators:
\begin{equation*}
H^{(\varepsilon)}\ \equiv\ -\Delta + \varepsilon\ V({\bf x}),
\label{Heps-def}
\end{equation*}
where $V$ satisfies the non-degeneracy condition \eqref{V11eq0}.
Then, there exists $\varepsilon_0>0$, such that for all real and nonzero $\varepsilon$, outside of a possible discrete subset of $\mathbb{R}\setminus(-\varepsilon_0,\varepsilon_0)$, $H^{(\varepsilon)}$ has Dirac points $({\bf K},E^\varepsilon_\star)$ in the sense of Definition \ref{dirac-pt-defn}.
Specifically, for all such $\varepsilon$,
there exists $b_\star\ge1$ such that $E^\varepsilon_\star\equiv E^\varepsilon_{b_\star}({\bf K})=E^\varepsilon_{b_\star+1}({\bf K})$ is a ${\bf K}-$ pseudo-periodic eigenvalue of multiplicity two
where
\begin{enumerate}
\item
(a) $E^\varepsilon_\star$ is an $L^2_{{\bf K},\tau}-$ eigenvalue of $H^{(\varepsilon)}$ of multiplicity one, with corresponding eigenfunction,
$\Phi^\varepsilon_1({\bf x})$.\\
(b) $E^\varepsilon_\star$ is an $L^2_{{\bf K},\bar\tau}-$ eigenvalue of $H^{(\varepsilon)}$ of multiplicity one, with corresponding eigenfunction, $\Phi^\varepsilon_2({\bf x})=\overline{\Phi^\varepsilon_1(-{\bf x})}$.\\
(c) $E^\varepsilon_\star$ is \underline{not} an $L^2_{{\bf K},1}-$ eigenvalue of $H^{(\varepsilon)}$.
\item There exist $\delta_\varepsilon>0,\ C_\varepsilon>0$
and Floquet-Bloch eigenpairs: $(\Phi_j^\varepsilon({\bf x};{\bf k}), E_j^\varepsilon({\bf k}))$
and Lipschitz continuous functions, $e^\varepsilon_j({\bf k})$, $j=b_\star, b_\star+1$,
defined for $|{\bf k}-{\bf K}|<\delta_\varepsilon$, such that
\begin{align}
E^\varepsilon_{b_\star+1}({\bf k})-E^\varepsilon_\star\ &=\ +\ |\lambda^\varepsilon_\sharp|\
\left| {\bf k}-{\bf K} \right|\
\left( 1\ +\ e^\varepsilon_{b_\star+1}({\bf k}) \right)\ \ {\rm and}\nonumber\\
E^\varepsilon_{b_\star}({\bf k})-E^\varepsilon_\star\ &=\ -\ |\lambda^\varepsilon_\sharp|\
\left| {\bf k}-{\bf K} \right|\
\left( 1\ +\ e^\varepsilon_{b_\star}({\bf k}) \right),\label{conical}
\end{align}
and where
\begin{equation}
\lambda_\sharp^\varepsilon\ \equiv\ \sum_{{\bf m}\in\mathcal{S}} c({\bf m},E_\star^\varepsilon,\varepsilon)^2\ \left(\begin{array}{c}1\\ i\end{array}\right)\cdot \left({\bf K}+{\bf m}\vec{\bf k}\right) \ \ne\ 0
\label{lambda-sharp2}
\end{equation}
is given in terms of $\{c({\bf m},E_\star^\varepsilon,\varepsilon)\}_{{\bf m}\in\mathcal{S}}$, the $L^2_{{\bf K},\tau}-$ Fourier coefficients of $\Phi_1^\varepsilon({\bf x};{\bf K})$.
Furthermore, $|e_j^\varepsilon({\bf k})| \le C_\varepsilon |{\bf k}-{\bf K}|$, $j=b_\star, b_\star+1$.
Thus, in a neighborhood of the point $({\bf k},E)=({\bf K},E_\star^\varepsilon)\in \mathbb{R}^3 $, the dispersion surface is closely approximated by a circular {\it cone}.
\end{enumerate}
\end{theorem}
\subsection{Dirac points of $-\Delta+\varepsilon V({\bf x})$, $\varepsilon$ small} \label{dpts-small-eps}
In this section we collect explicit information on Dirac points for the weak potential regime.
\begin{theorem}[Small $\varepsilon$]\label{diracpt-small-thm}
There exists $\varepsilon_0>0$, such that for all $\varepsilon\in I_{\varepsilon_0}\equiv(-\varepsilon_0,\varepsilon_0)\setminus\{0\}$ the following holds:
\begin{enumerate}
\item For $\varepsilon\in I_{\varepsilon_0}$, $-\Delta +\varepsilon V({\bf x})$ has
\begin{enumerate}[(a)]
\item a multiplicity two $L^2_{{\bf K}}$- eigenvalue $E^\varepsilon_\star$, where $\textrm{ker}(-\Delta + \varepsilon V - E^\varepsilon_\star I)\subset L^2_{{\bf K},\tau}\oplus
L^2_{{\bf K},\overline{\tau}}$, and
\item a multiplicity one $L^2_{{\bf K}}$- eigenvalue $\widetilde{E}_\star^\varepsilon$, where $\textrm{ker}(-\Delta + \varepsilon V - \widetilde{E}_\star^\varepsilon I)\subset L^2_{{\bf K},1}$.
\end{enumerate}
\item The maps $\varepsilon\mapsto E^\varepsilon_\star$ and $\varepsilon\mapsto \widetilde{E}^\varepsilon_\star$ are well defined for all $\varepsilon$ in the deleted neighborhood of zero, $I_{\varepsilon_0}$. They are constructed via perturbation theory of a simple eigenvalue in
$L^2_{{\bf K},\tau}$ and in $L^2_{{\bf K},1}$, respectively. Therefore, $E^\varepsilon_\star$ and $\widetilde{E}^\varepsilon_\star$ are real-analytic functions of $\varepsilon\in I_{\varepsilon_0}$. Moreover, they have the expansions:
\begin{align}
E^\varepsilon_\star\ &= |{\bf K}|^2 + \varepsilon(V_{0,0}-V_{1,1})+\mathcal{O}(\varepsilon^2), \label{E*expand}\\
\widetilde{E}^\varepsilon_\star &=|{\bf K}|^2 + \varepsilon(V_{0,0}+2V_{1,1})+\mathcal{O}(\varepsilon^2). \label{tildeE*expand}
\end{align}
\item If $\varepsilon V_{1,1}>0$, then conical intersections occur between the $1^{st}$ and $2^{nd}$ dispersion surfaces at the vertices of $\mathcal{B}_h$. Specifically, \eqref{conical} holds with $b_\star=1$.
\item If $\varepsilon V_{1,1}<0$,
then conical intersections occur between the $2^{nd}$ and $3^{rd}$ dispersion surfaces at the vertices of $\mathcal{B}_h$. Specifically, \eqref{conical} holds with $b_\star=2$.
For $\varepsilon\in I_{\varepsilon_0}$,
\begin{equation}
| \lambda_\sharp^\varepsilon| = 4\pi |\Omega_h| + \mathcal{O}(\varepsilon) = 4\pi|{\bf v}_1\wedge{\bf v}_2|+ \mathcal{O}(\varepsilon).
\label{lambda-sharp-expand}
\end{equation}
\end{enumerate}
\end{theorem}
\noindent The expansions \eqref{E*expand}, \eqref{tildeE*expand} and \eqref{lambda-sharp-expand} are displayed in equations (6.22), (6.25) and (6.30) of \cites{FW:12}.
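Subtracting the expansion \eqref{E*expand} from \eqref{tildeE*expand} makes the role of the sign of $\varepsilon V_{1,1}$ explicit:

```latex
\widetilde{E}^\varepsilon_\star - E^\varepsilon_\star
  \;=\; 3\,\varepsilon V_{1,1} \;+\; \mathcal{O}(\varepsilon^2) .
```

Thus, for $\varepsilon V_{1,1}>0$ the two-fold degenerate level $E^\varepsilon_\star$ lies below the simple level $\widetilde{E}^\varepsilon_\star$, so the conical intersection involves the first and second dispersion surfaces ($b_\star=1$); for $\varepsilon V_{1,1}<0$ the ordering is reversed and $b_\star=2$.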
\noindent The intersections of the first two dispersion surfaces for $\varepsilon V_{1,1}>0$, and of the second and third dispersion surfaces for $\varepsilon V_{1,1}<0$, are illustrated in the first two panels of Figure \ref{fig:eps_V11_neg} along a dispersion slice corresponding to the zigzag edge.
\section{Edges and dual slices}\label{ds-slices}
Edge states are solutions of an eigenvalue equation on $\mathbb{R}^2$, which are spatially localized transverse to a line-defect or ``edge'' and propagating (plane-wave like or pseudo-periodic) parallel to the edge. Recall that $\Lambda_h=\mathbb{Z}{\bf v}_1\oplus\mathbb{Z}{\bf v}_2$ and $\Lambda_h^*=\mathbb{Z}{\bf k}_1\oplus\mathbb{Z}{\bf k}_2$. We consider edges which are lines
of the form $\mathbb{R} (a_1{\bf v}_1+b_1{\bf v}_2)$, where $(a_1,b_1)=1$, {\it i.e.} $a_1$ and $b_1$ are relatively prime.
We fix an edge by choosing ${\bm{\mathfrak{v}}}_1=a_1{\bf v}_1+b_1{\bf v}_2$. Since
$a_1, b_1$ are relatively prime, there exists a pair of integers $a_2, b_2$ such that $a_1b_2-a_2b_1=1$.
Set ${\bm{\mathfrak{v}}}_2 = a_2 {\bf v}_1 + b_2 {\bf v}_2$.
%
It follows that $\mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2=\mathbb{Z}{\bf v}_1\oplus\mathbb{Z}{\bf v}_2=\Lambda_h$.
Since $a_1b_2-a_2b_1=1$, we have dual lattice vectors ${\bm{\mathfrak{K}}}_1, {\bm{\mathfrak{K}}}_2\in\Lambda_h^*$, given by
\[ {\bm{\mathfrak{K}}}_1=b_2{\bf k}_1-a_2{\bf k}_2,\ \ {\bm{\mathfrak{K}}}_2=-b_1{\bf k}_1+a_1{\bf k}_2, \]
which satisfy
\begin{equation*} {\bm{\mathfrak{K}}}_\ell \cdot {\bm{\mathfrak{v}}}_{\ell'} = 2\pi \delta_{\ell, \ell'},\ \ 1\leq \ell, \ell' \leq 2.\label{ktilde-orthog}
\end{equation*}
Note that
$\mathbb{Z}{\bm{\mathfrak{K}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{K}}}_2=\mathbb{Z}{\bf k}_1\oplus\mathbb{Z}{\bf k}_2=\Lambda^*_h$.
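The construction above is elementary to verify numerically. The following Python sketch is our illustration; the equilateral lattice vectors and the edge indices $(a_1,b_1)=(2,1)$ are arbitrary choices, not fixed by the text. It produces a Bézout pair $(a_2,b_2)$ by the extended Euclidean algorithm and checks the duality relations ${\bm{\mathfrak{K}}}_\ell\cdot{\bm{\mathfrak{v}}}_{\ell'}=2\pi\delta_{\ell,\ell'}$, as well as the invariance of the cell area, $|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2|=|{\bf v}_1\wedge{\bf v}_2|$, which follows from $a_1b_2-a_2b_1=1$.

```python
import math

# Illustrative equilateral-triangular lattice vectors (our choice):
v1 = (math.sqrt(3) / 2, 1 / 2)
v2 = (math.sqrt(3) / 2, -1 / 2)

# Dual basis with k_i . v_j = 2*pi*delta_ij, via the inverse transpose:
det = v1[0] * v2[1] - v1[1] * v2[0]
k1 = (2 * math.pi * v2[1] / det, -2 * math.pi * v2[0] / det)
k2 = (-2 * math.pi * v1[1] / det, 2 * math.pi * v1[0] / det)

def bezout(a1, b1):
    """Return (a2, b2) with a1*b2 - a2*b1 == 1, assuming gcd(a1, b1) == 1."""
    # extended Euclid: find x, y with a1*x + b1*y == 1; then b2 = x, a2 = -y
    old_r, r = a1, b1
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return (-old_y, old_x)

def lin(c1, c2, u, w):  # the combination c1*u + c2*w of 2-vectors
    return (c1 * u[0] + c2 * w[0], c1 * u[1] + c2 * w[1])

a1, b1 = 2, 1                      # an edge direction with gcd(a1, b1) = 1
a2, b2 = bezout(a1, b1)
assert a1 * b2 - a2 * b1 == 1

V1 = lin(a1, b1, v1, v2)           # v-frak_1 = a1 v1 + b1 v2
V2 = lin(a2, b2, v1, v2)           # v-frak_2 = a2 v1 + b2 v2
K1 = lin(b2, -a2, k1, k2)          # K-frak_1 = b2 k1 - a2 k2
K2 = lin(-b1, a1, k1, k2)          # K-frak_2 = -b1 k1 + a1 k2

dot = lambda u, w: u[0] * w[0] + u[1] * w[1]
for i, K in enumerate((K1, K2)):
    for j, V in enumerate((V1, V2)):
        target = 2 * math.pi if i == j else 0.0
        assert abs(dot(K, V) - target) < 1e-12   # K-frak_l . v-frak_l' = 2 pi delta

# Cell area is preserved: |v-frak_1 ^ v-frak_2| = |a1 b2 - a2 b1| |v1 ^ v2|.
wedge = lambda u, w: u[0] * w[1] - u[1] * w[0]
assert abs(abs(wedge(V1, V2)) - abs(wedge(v1, v2))) < 1e-12
```

The same check with $(a_1,b_1)=(1,1)$ reproduces the armchair example below.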
Fix an edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$. In our construction of edge states, an important role is played by the ``quasi-momentum slice'' of the band structure through the Dirac point and ``dual'' to the given edge.
\begin{definition}\label{dual-slice}
For the edge $\mathbb{R}{\bm{\mathfrak{v}}}_1$, {\it the band structure slice at quasi-momentum ${\bf K}$, dual to the edge $\mathbb{R}{\bm{\mathfrak{v}}}_1$},
is defined to be the locus given by the union of curves:
\begin{equation*}
\lambda\mapsto E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\ \ |\lambda|\le 1/2,\ \ b\ge1.
\label{tv-slice}
\end{equation*}
\end{definition}
We give two examples:
\begin{enumerate}
\item Zigzag: ${\bm{\mathfrak{v}}}_1={\bf v}_1$, ${\bm{\mathfrak{v}}}_2={\bf v}_2$, ${\bm{\mathfrak{K}}}_1={\bf k}_1$, and ${\bm{\mathfrak{K}}}_2={\bf k}_2$.
\\ In this case, we shall refer to the {\sl zigzag slice}.
\item Armchair: ${\bm{\mathfrak{v}}}_1={\bf v}_1+{\bf v}_2$, ${\bm{\mathfrak{v}}}_2={\bf v}_2$, ${\bm{\mathfrak{K}}}_1={\bf k}_1$, and ${\bm{\mathfrak{K}}}_2={\bf k}_2-{\bf k}_1$. \\ In this case, we shall refer to the {\sl armchair slice}.
\end{enumerate}
Figure \ref{fig:eps_V11_neg} (top row) displays three cases, for $-\Delta+\varepsilon V$, where $V$ is a honeycomb lattice potential. Shown are the curves
$\lambda\mapsto E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)$, $b=1,2,3$ for (i) $\varepsilon V_{1,1}>0$ and the zigzag slice (left panel), (ii) $\varepsilon V_{1,1}<0$ and the zigzag slice (middle panel), and (iii) the armchair slice (right panel). As discussed in the introduction, of these three examples, case (i) is the one for which the spectral no-fold condition of Definition \ref{SGC} holds.
\subsection{Completeness of Floquet-Bloch modes on $L^2(\Sigma)$}\label{Fourier-edge}
For ${\bm{\mathfrak{v}}}_1\in\Lambda_h$, introduce the cylinder $\Sigma= \mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1$. Consider the family of states $\Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\ b\ge1$ for $\lambda\in[0,1]$ (or equivalently $|\lambda|\le1/2$) corresponding to quasi-momenta along a line segment within $\mathcal{B}_h$ connecting ${\bf K}$ to ${\bf K}+{\bm{\mathfrak{K}}}_2$.
Since ${\bm{\mathfrak{K}}}_2\cdot{\bm{\mathfrak{v}}}_1=0$, all along this segment we have ${\bf K}\cdot{\bm{\mathfrak{v}}}_1-$ pseudo-periodicity:
\begin{equation}
\Phi_b({\bf x}+{\bm{\mathfrak{v}}}_1;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)= e^{i({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\cdot{\bm{\mathfrak{v}}}_1} \Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)= e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}_1} \Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) \ .
\nonumber\end{equation}
The main result of this subsection is that any $f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)$ is a superposition of these modes.
\begin{theorem}\label{fourier-edge}
Let $f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)= L_{\kpar=\bK\cdot\vtilde_1}^2(\mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1)$. Then,
\begin{enumerate}
\item $f$ can be represented as a superposition of Floquet-Bloch modes of $-\Delta+V$ with quasimomenta in $\mathcal{B}$ located on the segment
$\Big\{{\bf k}={\bf K}+\lambda{\bm{\mathfrak{K}}}_2: |\lambda|\le\frac{1}{2}\Big\}:$
\begin{align}
f({\bf x})
&=\sum_{b\ge1}\int_{-\frac12}^{\frac12} \widetilde{f}_b(\lambda) \Phi_b({\bf x};{\bf K}+\lambda {\bm{\mathfrak{K}}}_2) d\lambda \nonumber \\
&= e^{i{\bf K}\cdot{\bf x}} \sum_{b\ge1}\int_{-\frac12}^{\frac12} e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\widetilde{f}_b(\lambda) p_b({\bf x};{\bf K}+\lambda {\bm{\mathfrak{K}}}_2) d\lambda,\qquad {\rm where} \label{fb-Sigma} \\
\widetilde{f}_b(\lambda) \ &=\ \left\langle \Phi_b(\cdot,{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),f(\cdot)\right\rangle_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)} . \nonumber
\end{align}
Here, the sum in \eqref{fb-Sigma}, which represents $e^{-i{\bf K}\cdot{\bf x}}f({\bf x})$, converges in the $L^2(\Sigma)$ norm.
\item In the special case where $V\equiv0$:
\begin{align*}
f({\bf x}) = \sum_{{\bf m}=(m_1,m_2)\in\mathbb{Z}^2}e^{ i({\bf K}+m_1{\bm{\mathfrak{K}}}_1+m_2{\bm{\mathfrak{K}}}_2)\cdot{\bf x} } \int_{-\frac12}^{\frac12}
\widehat{f}_{\bf m}(\lambda)e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}} d\lambda\ .
\end{align*}
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{fourier-edge}]
We introduce the parameterizations of the fundamental period cell $\Omega$ of $V({\bf x})$:
\begin{align}
\label{omega-parameterization}
{\bf x}\in\Omega:\quad &{\bf x} = \tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\quad 0\le\tau_1, \tau_2\le1 , \quad
{\bm{\mathfrak{K}}}_i\cdot{\bf x} = 2\pi\tau_i , \nonumber \\
& dx_1\ dx_2\ = \left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right|\ d\tau_1\ d\tau_2\ =\ |\Omega|\ d\tau_1\ d\tau_2;
\end{align}
and of the cylinder $\Sigma=\mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1$:
\begin{align}
\label{sigma-parameterization}
{\bf x}\in\Sigma:\quad &{\bf x} = \tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\ \ 0\le\tau_1\le1,\ \tau_2\in\mathbb{R} , \quad
{\bm{\mathfrak{K}}}_1\cdot{\bf x} = 2\pi\tau_1 , \ {\bm{\mathfrak{K}}}_2\cdot{\bf x} = 2\pi\tau_2 , \nonumber \\
&dx_1\ dx_2\ = \left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right|\ d\tau_1\ d\tau_2\ =\ |\Omega|\ d\tau_1\ d\tau_2.
\end{align}
Let $f\in L^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma)$ be such that $g({\bf x})= e^{-i{\bf K}\cdot{\bf x}}f({\bf x})$ is defined and smooth on $\Sigma$, and rapidly decreasing. It suffices to prove the result for such $f$, and then pass to all $L^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma)$ by standard arguments.
The function $g({\bf x})$ has the Fourier representation
\begin{align}
\label{g-fourier}
g({\bf x}) &= 2\pi\ \sum_{n\in\mathbb{Z}} e^{in{\bm{\mathfrak{K}}}_1\cdot{\bf x}} \int_\mathbb{R} \widehat{g}_n(2\pi\xi) e^{i\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ d\xi, \\
2\pi\ \widehat{g}_n(2\pi\xi) &= \frac{1}{\left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right| } \int_\Sigma e^{-i\xi{\bm{\mathfrak{K}}}_2\cdot{\bf y}} e^{-in{\bm{\mathfrak{K}}}_1\cdot{\bf y}} g({\bf y}) d{\bf y} \ . \nonumber
\end{align}
The relation \eqref{g-fourier} is obtained by noting that $G(\tau_1,\tau_2)=g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2)$ is $1-$ periodic in $\tau_1$ and in $L^2(\mathbb{R}; d\tau_2)$, and applying the standard Fourier representations.
Introduce the Gelfand-Bloch transform
\begin{equation}
\label{g-FB-tilde}
\widetilde{g}({\bf x};\lambda) = 2\pi \sum_{(m_1,m_2)\in\mathbb{Z}^2}\widehat{g}_{m_1}\left(2\pi(m_2+\lambda)\right)e^{i(m_1{\bm{\mathfrak{K}}}_1+m_2{\bm{\mathfrak{K}}}_2)\cdot{\bf x}},\quad |\lambda|\leq 1/2 \ .
\end{equation}
Note that ${\bf x}\mapsto \widetilde{g}({\bf x};\lambda)$ is $\Lambda_h-$ periodic and $\lambda\mapsto \widetilde{g}({\bf x};\lambda)$ is $1-$ periodic.
Using \eqref{g-FB-tilde} and \eqref{g-fourier}, it is straightforward to check that
\begin{equation}
\label{g-FB}
g({\bf x}) = \int_{-\frac12}^{\frac12} e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \widetilde{g}({\bf x};\lambda)\ d\lambda \ .
\end{equation}
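In a one-dimensional analogue (the periodic direction suppressed and ${\bm{\mathfrak{K}}}_2\cdot{\bf x}$ replaced by $2\pi x$), the inversion identity \eqref{g-FB} can be confirmed numerically. The sketch below is our illustration, using the Gaussian $g(t)=e^{-\pi t^2}$, for which $2\pi\widehat{g}(2\pi\xi)=e^{-\pi\xi^2}$ under the Fourier convention implicit in \eqref{g-fourier}.

```python
import cmath
import math

def g(t):                 # smooth, rapidly decreasing test function (our choice)
    return math.exp(-math.pi * t * t)

def ghat2pi(xi):          # value of 2*pi*ghat(2*pi*xi) for this Gaussian
    return math.exp(-math.pi * xi * xi)

def g_tilde(x, lam, M=12):
    # 1D analogue of the Gelfand-Bloch transform:
    #   g_tilde(x; lam) = 2*pi * sum_m ghat(2*pi*(m+lam)) e^{2 pi i m x}
    return sum(ghat2pi(m + lam) * cmath.exp(2j * math.pi * m * x)
               for m in range(-M, M + 1))

def reconstruct(x, N=64):
    # midpoint rule for int_{-1/2}^{1/2} e^{2 pi i lam x} g_tilde(x; lam) d lam;
    # the integrand is smooth and 1-periodic in lam (a periodization of a
    # Schwartz function), so the midpoint rule converges spectrally fast
    total = 0j
    for j in range(N):
        lam = -0.5 + (j + 0.5) / N
        total += cmath.exp(2j * math.pi * lam * x) * g_tilde(x, lam)
    return total / N

assert abs(reconstruct(0.3) - g(0.3)) < 1e-10   # recovers g pointwise
```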
\begin{remark}\label{pb-Omega}
For any fixed $|\lambda|\le1/2$, the mapping ${\bf x}\mapsto \widetilde{g}({\bf x};\lambda)$ is $\Lambda_h-$ periodic.
We wish to expand ${\bf x}\mapsto \widetilde{g}({\bf x};\lambda)$ in terms of a basis for $L^2(\Omega)$,
where $\Omega$ denotes our choice of period cell (parallelogram) for $\mathbb{R}^2/\Lambda$ with $\Lambda=\mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2$;
see \eqref{Omega-def}. Now the eigenvalue problem $H({\bf k})p^\Omega=E^\Omega p^\Omega$
on $\Omega$ with periodic boundary conditions has a discrete sequence of eigenvalues, $E_j^\Omega({\bf k}),\ j\ge1$ with corresponding eigenfunctions $p^\Omega_j({\bf x};{\bf k}),\ j\ge 1$, which can be taken to be a complete orthonormal sequence. Recall $p^{\Omega_h}_b({\bf x};{\bf k}),\ b\ge1$, with corresponding eigenvalues, $E_b({\bf k})$, the complete set of eigenfunctions of $H({\bf k})$ with periodic boundary conditions on $\Omega_h$, the elementary period parallelogram spanned by $\{{\bf v}_1,{\bf v}_2\}$; see Section \ref{flo-bl-theory}. By periodicity, $p^{\Omega_h}_b({\bf x};{\bf k}),\ b\ge1,$ (initially defined on $\Omega_h$) and $p_j^\Omega({\bf x};{\bf k}),\ j\ge1$, (initially defined on $\Omega$)
can be extended to all $\mathbb{R}^2$ as periodic functions. We continue to denote these extensions by: $p_j^\Omega({\bf x};{\bf k})$ and $p^{\Omega_h}_b({\bf x};{\bf k})$, respectively. Since $\Lambda=\mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2=\mathbb{Z}{\bf v}_1\oplus\mathbb{Z}{\bf v}_2=\Lambda_h$, both sequences of eigenfunctions are $\Lambda_h-$ periodic. Thus, in a natural way, we can take $p^\Omega_b({\bf x};{\bf k})=A_b\ p^{\Omega_h}_b({\bf x};{\bf k}),\ b\ge1$, where $A_b$ is a normalization constant. Abusing notation, we henceforth drop the explicit dependence on $\Omega$, and simply write $p_b({\bf x};{\bf k})$ for $p_b^{\Omega}({\bf x};{\bf k})$.
\end{remark}
In view of Remark \ref{pb-Omega} we expand $\widetilde{g}({\bf x};\lambda)$ in terms of the states $\{p_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\},\ b\ge1$:
\begin{equation}
\label{g-FB-p-expansion}
\widetilde{g}({\bf x};\lambda) = \sum_{b\geq1}\inner{p_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)}\ p_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) .
\end{equation}
Recall that $f({\bf x})= e^{i{\bf K}\cdot{\bf x}}g({\bf x})$. We claim (and prove below) that
\begin{equation}
\label{g-FB-claim}
\inner{p_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)}
= \inner{\Phi_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),f(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)} \ .
\end{equation}
The assertions of Theorem \ref{fourier-edge} then follow from \eqref{g-FB},
\eqref{g-FB-p-expansion} and the claim \eqref{g-FB-claim}:
\begin{align*}
f({\bf x}) &= e^{i{\bf K}\cdot{\bf x}} g({\bf x})
= e^{i{\bf K}\cdot{\bf x}} \int_{-\frac12}^{\frac12} e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \widetilde{g}({\bf x};\lambda) d\lambda \\
& = e^{i{\bf K}\cdot{\bf x}} \sum_{b\geq1} \int_{-\frac12}^{\frac12} e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \inner{p_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)} p_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) d\lambda \\
&= \sum_{b\geq1} \int_{-\frac12}^{\frac12} \inner{\Phi_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),f(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)} \Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) d\lambda \ ,
\end{align*}
where, in the final line, we have used that $\Phi_b({\bf y};\lambda)=p_b({\bf y};\lambda)e^{i({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\cdot{\bf y}}$. Therefore, it remains to prove claim \eqref{g-FB-claim}. We shall employ the one-dimensional Poisson summation formula:
\begin{equation}
\label{poisson-sum-formula}
2\pi\sum_{n\in\mathbb{Z}} \widehat{f}\left(2\pi(n+\lambda)\right) e^{2\pi iny} = \sum_{n\in\mathbb{Z}}f(y+n)e^{-2\pi i \lambda(y+n)} \ .
\end{equation}
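The identity \eqref{poisson-sum-formula} can be sanity-checked numerically. The following sketch (our illustration) uses the Gaussian $f(t)=e^{-\pi t^2}$, for which $2\pi\widehat f(2\pi\xi)=e^{-\pi\xi^2}$ under the convention $\widehat f(\xi)=\frac{1}{2\pi}\int_\mathbb{R} f(t)e^{-i\xi t}\,dt$ implicit in \eqref{g-fourier}.

```python
import cmath
import math

def f(t):                 # Gaussian test function (our choice)
    return math.exp(-math.pi * t * t)

def fhat2pi(xi):          # value of 2*pi*fhat(2*pi*xi) for this Gaussian
    return math.exp(-math.pi * xi * xi)

# Both sides of the Poisson summation identity, truncated at |n| <= M;
# the Gaussian tails beyond M = 15 are far below machine precision.
lam, y, M = 0.3, 0.4, 15
lhs = sum(fhat2pi(n + lam) * cmath.exp(2j * math.pi * n * y)
          for n in range(-M, M + 1))
rhs = sum(f(y + n) * cmath.exp(-2j * math.pi * lam * (y + n))
          for n in range(-M, M + 1))
assert abs(lhs - rhs) < 1e-12
```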
In the following calculation we use the abbreviated notation:
\[p_b({\bf y};\lambda)\equiv p_b({\bf y};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\quad \text{and} \quad \ \Phi_b({\bf y};\lambda)\equiv \Phi_b({\bf y};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2).\]
Substituting \eqref{g-fourier}-\eqref{g-FB-tilde} into the left hand side of \eqref{g-FB-claim} and applying \eqref{poisson-sum-formula} gives
{\footnotesize
\begin{align*}
&\inner{p_b(\cdot;\lambda),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)}
= \int_{\Omega} \overline{p_b({\bf y};\lambda)} \widetilde{g}({\bf y},\lambda) d{\bf y} \nonumber \\
&\ = |\Omega| \int_0^1 \int_0^1 \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2;\lambda)} 2\pi \sum_{{\bf m}\in\mathbb{Z}^2} \widehat{g}_{m_1} \left(2\pi(m_2+\lambda)\right) e^{2\pi i(m_1\tau_1+m_2\tau_2)} d\tau_1 d\tau_2 \ \ ( \eqref{omega-parameterization}) \nonumber \\
&\ = |\Omega| \int_0^1 \int_0^1 \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2;\lambda)} \sum_{m_1\in\mathbb{Z}} \left [2\pi \sum_{m_2\in\mathbb{Z}} \widehat{g}_{m_1}\left(2\pi(m_2+\lambda)\right) e^{2\pi im_2\tau_2} \right] e^{2\pi im_1\tau_1} d\tau_1 d\tau_2 \nonumber \\
&\ = |\Omega| \int_0^1 \int_0^1 \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2;\lambda)} \sum_{m_1\in\mathbb{Z}} \left [\sum_{m_2\in\mathbb{Z}} g_{m_1}(\tau_2+m_2) e^{-2\pi i \lambda(\tau_2+m_2)} \right] e^{2\pi im_1\tau_1} d\tau_1 d\tau_2 \ \ (\eqref{poisson-sum-formula}) \nonumber \\
&\ = |\Omega| \int_0^1 \int_\mathbb{R} \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+s{\bm{\mathfrak{v}}}_2;\lambda)} e^{-2\pi i \lambda s} \sum_{m_1\in\mathbb{Z}} g_{m_1}(s) e^{2\pi im_1\tau_1} d\tau_1 ds \ (\tau_2+m_2=s) \nonumber \\
&\ = |\Omega| \int_0^1 \int_\mathbb{R} \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+s{\bm{\mathfrak{v}}}_2;\lambda)} e^{-2\pi i \lambda s} G(\tau_1,s) d\tau_1 ds \nonumber
\ \ ( G(\tau_1,\tau_2)=g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2) \text{ and } \eqref{g-fourier}) \\
&\ = \int_\Sigma \overline{p_b({\bf y};\lambda)} e^{-i \lambda {\bm{\mathfrak{K}}}_2\cdot{\bf y}} g({\bf y}) d{\bf y} \ \ (\text{by } \eqref{sigma-parameterization}) \nonumber .
\end{align*}
}
Finally, recalling that $g({\bf y})= e^{-i{\bf K}\cdot{\bf y}}f({\bf y})$ and $\overline{p_b({\bf y};\lambda)}e^{-i({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\cdot{\bf y}}=\overline{\Phi_b({\bf y};\lambda)}$, we obtain that
\begin{align*}
\inner{p_b(\cdot;\lambda),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)}
&= \int_\Sigma \overline{p_b({\bf y};\lambda)} \ e^{-i \lambda {\bm{\mathfrak{K}}}_2\cdot{\bf y}} e^{-i{\bf K}\cdot{\bf y}} \ f({\bf y}) \ d{\bf y} \nonumber \\
&= \int_{\Sigma} \overline{\Phi_b({\bf y};\lambda)} f({\bf y}) \ d{\bf y} \nonumber
=\inner{\Phi_b(\cdot;\lambda),f(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)}. \nonumber
\end{align*}
This completes the proof of claim \eqref{g-FB-claim} and part 1 for the case where $f$ is smooth and rapidly decreasing.
Passing to arbitrary $f\in L^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma)$ is standard.
In the case where $V\equiv0$, the Schr\"odinger operator reduces to the Laplacian $-\Delta$. In this case the Floquet-Bloch coefficients of $f\in L^2_{\kpar=\bK\cdot\vtilde_1}$ are simply its Fourier coefficients: $\widetilde{f}(\lambda)=\widehat{f}(\lambda)$. Part 2 therefore follows from part 1, completing the proof of Theorem \ref{fourier-edge}.
\end{proof}
Sobolev regularity can be measured in terms of the Floquet-Bloch coefficients. Indeed, as in
Lemma 2.1 in \cites{FLW-MAMS:15}, by the $2D-$ Weyl law $E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\approx b$ for all $\lambda\in[-1/2,1/2],\ b\gg1$, we have
\begin{corollary}\label{fourier-edge-norms}
$L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)$ and $H_{\kpar=\bK\cdot\vtilde_1}^s(\Sigma),\ s\in\mathbb{N}$, norms can be expressed in terms of the Floquet-Bloch coefficients $\widetilde{f}_b(\lambda)$, $b\ge1$. For $f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)= L_{\kpar=\bK\cdot\vtilde_1}^2(\mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1)$:
\begin{align*}
\|f\|_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)}^2\ &\sim\ \sum_{b\ge1}\int_{-\frac12}^{\frac12}\ |\widetilde{f}_b(\lambda) |^2 d\lambda\ ,\\
\|f\|_{H_{\kpar=\bK\cdot\vtilde_1}^{^s}(\Sigma)}^2\ &\sim\ \sum_{b\ge1} (1+b)^s\ \int_{-\frac12}^{\frac12} |\widetilde{f}_b(\lambda) |^2 d\lambda\ .
\end{align*}
\end{corollary}
\subsection{Expansion of ${\bf k}\mapsto E_b({\bf k})$ along a quasi-momentum slice}\label{bloch-near-dirac}
Let $({\bf K},E_\star)$ denote a Dirac point as in Definition \ref{dirac-pt-defn}. In a neighborhood of
the Dirac point, the eigenvalues $E_{b_\star}({\bf k})$ and $E_{b_\star+1}({\bf k})$ are Lipschitz continuous functions and the corresponding normalized eigenmodes, $\Phi_{b_\star}({\bf x};{\bf k})$
and $\Phi_{b_\star+1}({\bf x};{\bf k})$ are discontinuous functions of ${\bf k}$; see \cites{FW:14}. Note, however, that what is relevant to our construction of ${\bm{\mathfrak{v}}}_1-$ edge states is the family of Floquet-Bloch modes along the quasi-momentum line ${\bf K}+\lambda{\bm{\mathfrak{K}}}_2$, $|\lambda|\leq1/2$. The following proposition gives a smooth parametrization of these modes along this line.
\begin{proposition}\label{directional-bloch}
Let $({\bf K},E_\star)$ denote a Dirac point in the sense of Definition \ref{dirac-pt-defn}. Let $\{ \Phi_1({\bf x}), \Phi_2({\bf x}) \}$ denote the basis of the
$L^2_{{\bf K}}=L^2_{{\bf K},\tau}\oplus L^2_{{\bf K},\overline\tau}-$ nullspace of $H_V - E_\star I$ in Definition \ref{dirac-pt-defn}. Introduce the $\Lambda_h-$ periodic functions
\begin{equation}
P_1({\bf x})=e^{-i{\bf K}\cdot {\bf x}}\Phi_1({\bf x}),\ \ P_2({\bf x})=e^{-i{\bf K}\cdot {\bf x}}\Phi_2({\bf x}).
\label{PhipmKlam}
\end{equation}
For each $|\lambda|\le1/2$, there exist $L^2_{{\bf K}+\lambda{\bm{\mathfrak{K}}}_2}-$ eigenpairs $(\Phi_\pm({\bf x};\lambda),E_\pm(\lambda))$, real analytic in $\lambda$, such that
$\left\langle \Phi_a(\cdot;\lambda),\Phi_b(\cdot;\lambda)\right\rangle=\delta_{ab}$ and
\[ \textrm{span}\ \{\Phi_{-}({\bf x};\lambda), \Phi_{+}({\bf x};\lambda)\}
=\ \textrm{span}\ \{\Phi_{b_\star}({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2), \Phi_{b_\star+1}({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\}\ .
\]
Introduce $\Lambda_h-$ periodic functions $p_\pm({\bf x};\lambda)$ by
\begin{equation}
\Phi_\pm({\bf x};\lambda)\ =\ e^{i({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\cdot{\bf x}}\ p_\pm({\bf x};\lambda),\ \ \left\langle p_a(\cdot;\lambda),p_b(\cdot;\lambda)\right\rangle=\delta_{ab} ,\ \
a,b\in\{+,-\}.
\label{p_pm-def}
\end{equation}
%
There is a constant $\zeta_0>0$ such that for $|\lambda|<\zeta_0$ the following holds:
\begin{enumerate}
\item The mapping $\lambda\mapsto E_\pm(\lambda)$ is real analytic in $\lambda$ with expansion
\begin{equation}
E_\pm(\lambda) = E_\star \pm |\lambda_\sharp|\ |{\bm{\mathfrak{K}}}_2|\ \lambda + E_{2,\pm}(\lambda)\lambda^2 \ ,
\label{EpmKlam}
\end{equation}
where $\lambda_\sharp\in\mathbb{C}$ is given by \eqref{lambda-sharp}, and $|E_{2,\pm}(\lambda)|\leq C$, with $C$ a positive constant independent of $\lambda$.
\item Let
$ \mathfrak{z}_2={\mathfrak{K}}_2^{(1)}+i{\mathfrak{K}}_2^{(2)},\ |\mathfrak{z}_2|=|{\bm{\mathfrak{K}}}_2|$. The $\Lambda_h-$ periodic functions, $p_\pm({\bf x};\lambda)$, can be chosen to depend real analytically on $\lambda$ and so that
\footnote{The factor of $\frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}$ in \eqref{ppmKlam} corrects
a typographical error in equation (3.13) of \cites{FW:14}.}
\begin{align}
p_\pm({\bf x};\lambda) &=
P_\pm({\bf x})\ +\ \varphi_{\pm}({\bf x},\lambda) \ \in L^2_{\bf K}(\mathbb{R}^2/\Lambda_h),
\label{ppmKlam}
\end{align}
where $p_\pm({\bf x};0)=P_\pm({\bf x})$ is given by
\begin{equation*}
P_\pm({\bf x})\ \equiv\ \frac{1}{\sqrt{2}}\ \Big[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{\mathfrak{z}_2}{|\mathfrak{z}_2|}\ P_1({\bf x})
\pm P_2({\bf x})\ \Big]\ ,
\label{Ppm-def}\end{equation*}
and $\Phi_\pm({\bf x};0)=\Phi_\pm({\bf x})$ is given by
\begin{equation}
\Phi_\pm({\bf x})\ \equiv\ \frac{1}{\sqrt{2}}\ \Big[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{\mathfrak{z}_2}{|\mathfrak{z}_2|}\ \Phi_1({\bf x})
\pm \Phi_2({\bf x})\ \Big]\ .
\label{Phi_pm-def}\end{equation}
Finally, $\lambda \mapsto\varphi_{\pm}({\bf x};\lambda)$ are real analytic and satisfy the bound
$|\partial_{\bf x}^\aleph\varphi_\pm({\bf x};\lambda)|\leq C' |\lambda|$ for all ${\bf x}\in\mathbb{R}^2$, where $\aleph=(\aleph_1,\aleph_2),\ |\aleph|\le2$.
\end{enumerate}
\end{proposition}
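Since the coefficient $\frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\frac{\mathfrak z_2}{|\mathfrak z_2|}$ in \eqref{Phi_pm-def} has modulus one, the passage from $\{\Phi_1,\Phi_2\}$ to $\{\Phi_+,\Phi_-\}$ is a unitary change of basis, so orthonormality is preserved. A quick numerical check, with illustrative values of $\lambda_\sharp$ and $\mathfrak z_2$ of our choosing:

```python
# Coefficients of Phi_± relative to the orthonormal pair {Phi_1, Phi_2};
# lam_sharp and z2 below are illustrative nonzero values, not from the analysis.
lam_sharp = 0.7 - 1.2j
z2 = -0.4 + 0.9j

phase = (lam_sharp.conjugate() / abs(lam_sharp)) * (z2 / abs(z2))  # |phase| = 1
s = 1 / 2 ** 0.5
c_plus = (s * phase, s)     # Phi_+ = (phase Phi_1 + Phi_2) / sqrt(2)
c_minus = (s * phase, -s)   # Phi_- = (phase Phi_1 - Phi_2) / sqrt(2)

def herm(u, w):  # Hermitian inner product of coefficient vectors
    return u[0].conjugate() * w[0] + u[1].conjugate() * w[1]

assert abs(herm(c_plus, c_plus) - 1) < 1e-12    # Phi_+ is normalized
assert abs(herm(c_minus, c_minus) - 1) < 1e-12  # Phi_- is normalized
assert abs(herm(c_plus, c_minus)) < 1e-12       # Phi_+ and Phi_- are orthogonal
```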
\noindent {\bf N.B.} The subscripts $\pm$ have a different meaning here than in \cites{FW:12,FW:14}. In \cites{FW:12,FW:14},
$E_\pm({\bf k})$ denote ordered eigenvalues, $E_-({\bf k})\le E_+({\bf k})$ (Lipschitz continuous) with corresponding eigenstates $\Phi_\pm({\bf x};{\bf k})$ (discontinuous at ${\bf k}={\bf K}$);
see Definition \ref{dirac-pt-defn} and Corollary \ref{Phibstar-bstar+1}.
In Proposition \ref{directional-bloch} and throughout this paper $E_\pm(\lambda)$ and $\Phi_\pm({\bf x};\lambda)$ refer to smooth parametrizations in $\lambda$ of Floquet-Bloch eigenvalues and eigenfunctions of the spectral bands, which intersect at energy $E_\star$.
\begin{proof}[Proof of Proposition \ref{directional-bloch}]
We present a proof along the lines of Theorem 3.2 in \cites{FW:14}; see also \cites{Friedrichs:65,kato1995perturbation}.
The ${\bf k}-$ pseudo-periodic Floquet-Bloch modes can be expressed in the form
$\Phi({\bf x};{\bf k})=e^{i{\bf k}\cdot{\bf x}}p({\bf x};{\bf k})$,
where $p({\bf x};{\bf k})$ is $\Lambda_h-$ periodic. For ${\bf k}={\bf K}+\lambda{\bm{\mathfrak{K}}}_2$,
consider the family of eigenvalue problems, parametrized by $|\lambda|\le1/2$:
\begin{align}
& H_V({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\ p({\bf x};\lambda)\ =\ E(\lambda)\ p({\bf x};\lambda)\ ,\label{Hk-evp}\\
&p({\bf x}+{\bf v};\lambda)=p({\bf x};\lambda),\ \ \textrm{for all}\ {\bf v}\in \Lambda_h\ ,
\label{psi-per}\end{align}
where $H_V({\bf k})\ \equiv\ -\left(\nabla_{\bf x} + i{\bf k}\right)^2\ +\ V({\bf x})$.
Degenerate perturbation theory of the double eigenvalue $E_\star$ of $H_V({\bf K})$ yields eigenvalues: $E_\pm(\lambda) =\ E_\star\ +\ E_\pm^{(1)}(\lambda)$, where
\begin{align}
\ E_\pm^{(1)}(\lambda)\ \equiv\ \pm\ |\lambda_\sharp|\ |{\bm{\mathfrak{K}}}_2|\ \lambda +\ \mathcal{O}(\lambda^2) ;\ \ \text{see\ \cites{FW:12}.}
\label{mu-exp}
\end{align}
Denote by $Q_\perp$ the projection onto the orthogonal complement of ${\rm span}\{P_1,P_2\}$.
Then,
\[ R_{\bf K}(E_\star)\ \equiv\ \left( H_V({\bf K})\ - E_\star\ I \right)^{-1}:\ Q_\perp L^2(\mathbb{R}^2/\Lambda_h)\to Q_\perp L^2(\mathbb{R}^2/\Lambda_h)\]
is bounded.
Furthermore, via Lyapunov-Schmidt reduction analysis of the periodic eigenvalue problem
\eqref{Hk-evp}-\eqref{psi-per} we obtain, corresponding to the eigenvalues \eqref{mu-exp}, the $\Lambda_h-$ periodic eigenstates:
\begin{align*}
p_\pm({\bf x};\lambda)\
&=\
\left( I +
R_{\bf K}(E_\star) Q_\perp
\left(2i\lambda\ {\bm{\mathfrak{K}}}_2\cdot\left(\nabla+i{\bf K}\right)\right) \right) \times \\
&\quad \left( \alpha(\lambda)\ P_1({\bf x})\ +\ \beta(\lambda)\ P_2({\bf x}) \right) \
+\ \mathcal{O}_{H^2(\mathbb{R}^2/\Lambda_h)}\left(\lambda(|\alpha|^2+|\beta|^2)^{\frac{1}{2}}\right) .
\end{align*}
Here, the pair $\alpha(\lambda), \beta(\lambda)$ satisfies the homogeneous system:
\begin{align*}
\mathcal{M}(E^{(1)},\lambda)\ \left(\begin{array}{c} \alpha \\ { }\\ \beta\end{array}\right)\ &=\ 0\ , \text{ \ \ where } \\
%
\mathcal{M}(E^{(1)},\lambda) &\equiv\
\left(\begin{array}{cc}
E^{(1)} + \mathcal{O}\left( \lambda^2\right)&
-\overline{\lambda_\sharp}\ \lambda\ \mathfrak{z}_2\ + \mathcal{O}\left(\lambda^2\right) \\
&\nonumber\\
-\lambda_\sharp\ \lambda\ \overline{\mathfrak{z}_2} +
\mathcal{O}\left(\lambda^2\right)
&
E^{(1)} + \mathcal{O}\left(\lambda^2\right)
\end{array}\right) ;
\end{align*}
see \cites{FW:12}.
For $E^{(1)}=E^{(1)}_j(\lambda),\ j=\pm$, normalized solutions, $p_j({\bf x};\lambda),\ j=\pm$, are obtained by choosing:
{\footnotesize{
\begin{align*}
\left(\begin{array}{c} \alpha_+(\lambda) \\ \\ \beta_+(\lambda)\end{array}\right) &=
\left(\begin{array}{c} \frac{1}{\sqrt{2}}\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\
\frac{ \mathfrak{z}_2}{| \mathfrak{z}_2|} +
\mathcal{O}\left(\lambda\right)\\ \\
+ \frac{1}{\sqrt{2}}\ +\ \mathcal{O}\left(\lambda\right)
\end{array}\right) ,\quad
\left(\begin{array}{c} \alpha_-(\lambda) \\ \\ \beta_-(\lambda)\end{array}\right)\ =\
\left(\begin{array}{c} \frac{1}{\sqrt{2}}\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\
\frac{ \mathfrak{z}_2}{| \mathfrak{z}_2|} +
\mathcal{O}\left(\lambda\right)\\ \\
- \frac{1}{\sqrt{2}} + \mathcal{O}\left(\lambda\right)
\end{array}\right) .
\end{align*}
}}
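One can check directly that, with the $\mathcal{O}(\lambda^2)$ entries of $\mathcal M$ dropped, the leading parts of these vectors lie in the kernel of $\mathcal{M}(E^{(1)}_\pm,\lambda)$ for $E^{(1)}_\pm=\pm|\lambda_\sharp||\mathfrak z_2|\lambda$. A numerical sketch, with illustrative parameter values of our choosing:

```python
# Truncated (leading order in lam) Lyapunov-Schmidt matrix; the parameter
# values below are illustrative, not drawn from the text.
lam_sharp = 0.7 - 1.2j
z2 = -0.4 + 0.9j
lam = 0.05

phase = (lam_sharp.conjugate() / abs(lam_sharp)) * (z2 / abs(z2))
s = 1 / 2 ** 0.5

for sign in (+1, -1):
    E1 = sign * abs(lam_sharp) * abs(z2) * lam    # E^(1)_pm = pm |lam_sharp||z2| lam
    alpha, beta = s * phase, sign * s             # leading part of (alpha_pm, beta_pm)
    # rows of M(E^(1), lam) applied to (alpha, beta), O(lam^2) terms dropped:
    r1 = E1 * alpha - lam_sharp.conjugate() * lam * z2 * beta
    r2 = -lam_sharp * lam * z2.conjugate() * alpha + E1 * beta
    assert abs(r1) < 1e-12 and abs(r2) < 1e-12    # kernel vector at leading order
```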
%
Finally, we note that $\mathcal{M}(E^{(1)},\lambda)$ is analytic in the parameter $\lambda$. Therefore the eigenvalues $E^{(1)}_\pm(\lambda)$ and eigenvectors $( \alpha_\pm(\lambda), \beta_\pm(\lambda) )^T$ are analytic functions of $\lambda$; see, for example, \cites{Friedrichs:65,kato1995perturbation}. It follows that
$E_\pm(\lambda)$ and $p_\pm({\bf x};\lambda)$ are bounded, real analytic functions of $\lambda\in\mathbb{R}$. This completes the proof of Proposition \ref{directional-bloch}.
\end{proof}
\section{Model of a honeycomb structure with an edge}\label{zigzag-edges}
Let $V({\bf x})$ denote a honeycomb potential in the sense of Definition \ref{honeyV}. In this section we introduce a model of an edge in a honeycomb structure. A one-dimensional variant of this model was introduced and studied in
\cites{FLW-PNAS:14,FLW-MAMS:15,Thorp-etal:15}.
Let $W\in C^\infty(\mathbb{R}^2)$ be real-valued and satisfy the following properties:
\begin{enumerate}
\item[(W1)] ${W}$ is $\Lambda_h-$ periodic, {\it i.e.} ${W}({\bf x}+{\bf v})={W}({\bf x})$ for all ${\bf x}\in\mathbb{R}^2$ and ${\bf v}\in\Lambda_h$.
\item[(W2)] ${W}$ is odd, {\it i.e.} ${W}(-{\bf x})=-{W}({\bf x})$.
\item[(W3)] $\vartheta_\sharp\equiv \left\langle \Phi_1,W\Phi_1\right\rangle_{L^2(\Omega_h)}\ne0$, with $\Phi_1$ as in Definition \ref{dirac-pt-defn}.
\end{enumerate}
The non-degeneracy condition (W3) arises in the multiple scale perturbation theory of Section \ref{formal-multiscale}.
\noindent Our model of a honeycomb structure with an edge is a smooth and slow interpolation between the Schr\"odinger Hamiltonians
$H^{(\delta)}_{-\infty} = -\Delta_{\bf x} + V({\bf x}) - \delta\kappa_\infty W({\bf x})$
and
$H^{(\delta)}_{+\infty} = -\Delta_{\bf x} + V({\bf x}) + \delta\kappa_\infty W({\bf x})$,
with the interpolation taking place transverse to a lattice direction, say ${\bm{\mathfrak{v}}}_1$. Here, $\kappa_\infty$ is a positive constant.
This interpolation is effected by a {\it domain wall function}.
\begin{definition} \label{domain-wall-defn}
We call $\kappa(\zeta)\in C^{\infty}(\mathbb{R})$ a domain wall function if $\kappa(\zeta)$ tends to $\pm\kappa_\infty$ as $\zeta\to\pm\infty$, and
$
\Upsilon_1(\zeta)=\kappa^2(\zeta)-\kappa_\infty^2$ and
$ \Upsilon_2(\zeta)=\kappa'(\zeta)
$
satisfy:
{\small
\begin{equation}
\int_\mathbb{R} (1+|\zeta|)^a |\Upsilon_\ell(\zeta)| d\zeta<\infty\ \ \textrm{for some $a>5/2$ \ and }\ \int_\mathbb{R} |\partial_\zeta\Upsilon_\ell(\zeta)|
d\zeta<\infty, \ \ \ell=1,2.
\label{kappa-hypotheses}
\end{equation}
}
Without loss of generality, we assume $\kappa_\infty>0$.
\end{definition}
\begin{remark}
\label{kappa-description}
The technical hypotheses in \eqref{kappa-hypotheses} are required for the boundedness of {\it wave operators} used in the proof of Proposition \ref{solve4beta}. See also Section 6.6 of \cites{FLW-MAMS:15} and, in particular, the application of Theorem 6.15.
\end{remark}
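A standard example of a domain wall function, ours and not prescribed by the text, is $\kappa(\zeta)=\kappa_\infty\tanh\zeta$: then $\Upsilon_1(\zeta)=-\kappa_\infty^2\,\mathrm{sech}^2\zeta$ and $\Upsilon_2(\zeta)=\kappa_\infty\,\mathrm{sech}^2\zeta$ decay exponentially, so \eqref{kappa-hypotheses} holds for every $a$. A crude quadrature confirms the weighted integrability for $a=3>5/2$:

```python
import math

kappa_inf = 1.0
kappa = lambda z: kappa_inf * math.tanh(z)

# Upsilon_1 = kappa^2 - kappa_inf^2 = -kappa_inf^2 sech^2(z);  Upsilon_2 = kappa'
Y1 = lambda z: kappa(z) ** 2 - kappa_inf ** 2
Y2 = lambda z: kappa_inf / math.cosh(z) ** 2

def weighted_l1(Y, a, R, h=0.002):
    """Midpoint rule for the integral of (1+|z|)^a |Y(z)| over [-R, R]."""
    N = int(round(2 * R / h))
    total = 0.0
    for j in range(N):
        z = -R + (j + 0.5) * h
        total += (1 + abs(z)) ** a * abs(Y(z))
    return total * h

# Both weighted integrals are finite and stable as R grows, since the
# integrands decay like e^{-2|z|}.
I1, I2 = weighted_l1(Y1, 3.0, 40.0), weighted_l1(Y2, 3.0, 40.0)
assert abs(I1 - weighted_l1(Y1, 3.0, 60.0)) < 1e-8
assert abs(I2 - weighted_l1(Y2, 3.0, 60.0)) < 1e-8
```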
Our model of a honeycomb structure with an edge is the domain-wall modulated Hamiltonian:
\begin{equation*}
\label{perturbed-ham}
H^{(\delta)} = -\Delta_{\bf x} + V({\bf x}) + \delta\kappa\left(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}\right) W({\bf x}) \ ,
\end{equation*}
where $\kappa(\zeta)$ is a domain wall function.
Suppose $\kappa(\zeta)$ has a single zero at $\zeta=0$. The ``edge'' is then given by
$\mathbb{R}{\bm{\mathfrak{v}}}_1=\{{\bf x}:{\bm{\mathfrak{K}}}_2\cdot{\bf x}=0\}$.
We shall seek solutions of the eigenvalue problem
\begin{align}
&H^{(\delta)} \Psi = E\Psi, \label{schro-evp}\\
&\Psi({\bf x}+{\bm{\mathfrak{v}}}_1)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}_1}\Psi({\bf x})\qquad \textrm{(propagation parallel to the edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$)},\label{pseudo-per}\\
&\Psi({\bf x}) \to 0\ \ {\rm as}\ \ |{\bf x}\cdot{\bm{\mathfrak{K}}}_2|\to\infty \qquad \textrm{(localization transverse to the edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$)}\label{localized} .
\end{align}
\noindent In the next section we present a formal asymptotic expansion of ${\bm{\mathfrak{v}}}_1-$ edge states
and in Section \ref{thm-edge-state} we formulate a rigorous theory.
\section{Multiple scales and effective Dirac equations}\label{formal-multiscale}
We re-express the eigenvalue problem \eqref{schro-evp}-\eqref{localized} in terms of an unknown function $\Psi=\Psi({\bf x},\zeta)$, depending on fast (${\bf x}$) and slow ($\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}$)
spatial scales:
\begin{align}
&\Big[\ -\left(\nabla_{\bf x}+\delta{\bm{\mathfrak{K}}}_2\ \partial_\zeta\right)^2+V({\bf x})\Big]\Psi
\ +\ \delta\kappa(\zeta)W({\bf x})\Psi = E\Psi, \label{multi-schroedinger} \\
&\Psi({\bf x}+{\bm{\mathfrak{v}}}_1,\zeta)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}_1}\Psi({\bf x},\zeta), \ \ {\rm and} \ \
\Psi({\bf x},\zeta) \to 0\ \ {\rm as}\ \ \zeta\to\pm\infty. \label{multi-schroedinger-bc}
\end{align}
We seek a solution to \eqref{multi-schroedinger}-\eqref{multi-schroedinger-bc} in the form:
\begin{align}
E^{\delta}&=E^{(0)}+\delta E^{(1)}+\delta^2E^{(2)}+\ldots, \label{formal-E}\\
\Psi^{\delta}&= \psi^{(0)}({\bf x},\zeta)+\delta\psi^{(1)}({\bf x},\zeta)+\delta^2\psi^{(2)}({\bf x},\zeta)+\ldots\ \label{formal-psi}.
\end{align}
The conditions \eqref{pseudo-per}, \eqref{localized} are encoded by requiring, for $j\ge0$, that
\begin{align*}
&\psi^{(j)}({\bf x}+{\bm{\mathfrak{v}}},\cdot)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}}\psi^{(j)}({\bf x},\cdot) \ \ \ \forall\ {\bm{\mathfrak{v}}}\in\Lambda_h , \\
&\zeta\mapsto\psi^{(j)}({\bf x},\zeta)\in L^2(\mathbb{R}_\zeta).\end{align*}
Substituting the expansions \eqref{formal-E}-\eqref{formal-psi} in \eqref{multi-schroedinger} yields
\begin{align*}
&\left[-\left(\Delta_{{\bf x}}+2\delta\ {\bm{\mathfrak{K}}}_2\cdot \nabla_{{\bf x}}\ \partial_\zeta+\delta^2\ |{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2+\ldots\right)
+\left(V({\bf x})+\delta\kappa(\zeta)W({\bf x})\right) \right. \\
&\quad \left.-\left(E^{(0)}+\delta E^{(1)}+\delta^2E^{(2)}+\ldots\right)\right]
\left(\psi^{(0)}+\delta\psi^{(1)}+\delta^2\psi^{(2)}+\ldots\right)=0.
\end{align*}
Equating terms of equal order in $\delta^j,\ j\ge0$, yields a hierarchy of equations governing $\psi^{(j)}({\bf x},\zeta)$.
At order $\delta^0$ we have that $(E^{(0)},\psi^{(0)})$ satisfy
\begin{equation}
\label{perturbed_schro_delta0}
\begin{split}
&\left(-\Delta_{{\bf x}} +V({\bf x})-E^{(0)}\right)\psi^{(0)} = 0, \\
&\psi^{(0)}({\bf x}+{\bm{\mathfrak{v}}},\cdot)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}}\psi^{(0)}({\bf x},\cdot) \ \ \ \forall\ {\bm{\mathfrak{v}}}\in\Lambda_h .
\end{split}
\end{equation}
Equation \eqref{perturbed_schro_delta0} may be solved in terms of the orthonormal basis of the $L^2_{\bf K}(\Omega)-$ nullspace of $H_V-E_{\star}$ in Definition \ref{dirac-pt-defn}, namely $\{\Phi_1,\Phi_2\}$.
Expansion \eqref{ppmKlam} in Proposition \ref{directional-bloch} suggests that a particularly natural orthonormal basis of the $L^2_{{\bf K}}(\Omega)-$ nullspace of $H_V-E_{\star}$ is given by $\{\Phi_+,\Phi_-\}$, where
\begin{equation}
\label{correct-basis}
\Phi_\pm({\bf x}) \equiv \frac{1}{\sqrt{2}}\left[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{ \mathfrak{z}_2}{| \mathfrak{z}_2|}\ \Phi_1({\bf x})
\pm \Phi_2({\bf x})\ \right].
\end{equation}
Here $\lambda_\sharp$ is given in \eqref{lambda-sharp}, $ \mathfrak{z}_2={\mathfrak{K}}_2^{(1)}+i{\mathfrak{K}}_2^{(2)}$ and $| \mathfrak{z}_2|=|{\bm{\mathfrak{K}}}_2|$.
We therefore solve \eqref{perturbed_schro_delta0} with
\begin{equation}
E^{(0)}=E_\star,\qquad \psi^{(0)}({\bf x},\zeta)=\alpha_+(\zeta)\Phi_+({\bf x})
+\alpha_-(\zeta)\Phi_-({\bf x}).
\label{psi0-soln}
\end{equation}
Proceeding to order $\delta^1$ we find that $(E^{(1)},\psi^{(1)})$ satisfies
\begin{equation}
\label{psi1-eqn}
\begin{split}
&\left(-\Delta_{{\bf x}}+V({\bf x})-E_{\star}\right)\psi^{(1)}({\bf x},\zeta)=G^{(1)}({\bf x},\zeta;\psi^{(0)}) +
E^{(1)} \psi^{(0)}, \\
&\psi^{(1)}({\bf x}+{\bm{\mathfrak{v}}},\cdot)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}}\psi^{(1)}({\bf x},\cdot) \ \ \ \forall\ {\bm{\mathfrak{v}}}\in\Lambda_h ,
\end{split}
\end{equation}
where
\begin{align*}
&G^{(1)}({\bf x},\zeta;\psi^{(0)}) = G^{(1)}({\bf x},\zeta;\alpha_+,\alpha_-)\ \nonumber \\
&\quad \equiv\
2\partial_\zeta\alpha_+\ {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_++ 2\partial_\zeta\alpha_-\ {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_-
-\kappa(\zeta)W({\bf x}) \left(\alpha_+\Phi_++ \alpha_-\Phi_- \right) .
\end{align*}
Viewed as an equation in ${\bf x}$, \eqref{psi1-eqn} is solvable if and only if its right hand side is $L^2_{\bf K}(\Omega;d{\bf x})-$ orthogonal to the nullspace of $H_V-E_{\star}$. This is expressible in terms of the two orthogonality conditions:
\begin{align}
-E^{(1)}\alpha_j&=2\left\langle\Phi_j,\ {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_+\right\rangle \partial_\zeta\alpha_+\ +
2\left\langle\Phi_j,{\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_-\right\rangle\ \partial_\zeta\alpha_-\nonumber\\
&\qquad -\kappa(\zeta)\ \left[ \left\langle\Phi_j,W\Phi_+\right\rangle\alpha_++\left\langle\Phi_j,W\Phi_-\right\rangle\alpha_-\right],
\qquad j=\pm .
\label{alpha-j} \end{align}
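In more detail: since $\{\Phi_+,\Phi_-\}$ is orthonormal and $\psi^{(0)}=\alpha_+\Phi_++\alpha_-\Phi_-$, we have $\left\langle\Phi_j,\psi^{(0)}(\cdot,\zeta)\right\rangle_{L^2_{\bf K}(\Omega)}=\alpha_j(\zeta)$, and \eqref{alpha-j} is simply the expansion of the solvability conditions
\begin{align*}
\left\langle\Phi_j\ ,\ G^{(1)}(\cdot,\zeta;\psi^{(0)})\ +\ E^{(1)}\psi^{(0)}(\cdot,\zeta)\right\rangle_{L^2_{\bf K}(\Omega)}\ =\ 0\ ,\qquad j=\pm\ .
\end{align*}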
We evaluate the inner products in \eqref{alpha-j} using the following two propositions.
\begin{proposition}\label{inner-prods-sharp}
\begin{align}
\left\langle\Phi_+, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_-\right\rangle_{L^2_{\bf K}(\Omega)}
&= 0 \label{ip12} \ , \\
\left\langle\Phi_-, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_+\right\rangle_{L^2_{\bf K}(\Omega)}
&= 0 \label{ip21} \ , \\
2\left\langle\Phi_+, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_+\right\rangle_{L^2_{\bf K}(\Omega)}&=\
+ i|\lambda_\sharp|\ |{\bm{\mathfrak{K}}}_2|\ ,
\label{ip-aa+}\\
2 \left\langle\Phi_-, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_-\right\rangle_{L^2_{\bf K}(\Omega)}&=\ - i|\lambda_\sharp|\ |{\bm{\mathfrak{K}}}_2|\ .
\label{ip-aa-}\end{align}
The constant $\lambda_\sharp\in\mathbb{C}$ is generically non-zero; see Theorem \ref{diracpt-thm}.
\end{proposition}
\begin{proof}
Let $\mathfrak{z}_2={\mathfrak{K}}_2^{(1)}+i{\mathfrak{K}}_2^{(2)}$.
By (7.28)-(7.29) of \cites{FW:14} (see also \cites{FW:12}) we have:
\begin{align}
\left\langle\Phi_1, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)}
&= \frac{i}{2}\ \overline{\lambda}_\sharp\ \mathfrak{z}_2\ ,\ \label{ip12_1} \\
\left\langle\Phi_2, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)}
&= \frac{i}{2}\ \lambda_\sharp\ \overline{\mathfrak{z}_2}\ \label{ip21_2} \ , \\
\left\langle\Phi_b,\ {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_b\right\rangle_{L^2_{\bf K}(\Omega)}&=0,\ \ b=1,2.
\label{ip-aa_1}\end{align}
Relations \eqref{ip12}-\eqref{ip-aa-} follow from the expressions for $\Phi_\pm$ in \eqref{correct-basis} and relations \eqref{ip12_1}-\eqref{ip-aa_1}.
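For example, writing $c\equiv \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\,\frac{\mathfrak{z}_2}{|\mathfrak{z}_2|}$, so that $|c|=1$ and, by \eqref{correct-basis}, $\Phi_\pm=\frac{1}{\sqrt{2}}\left(c\,\Phi_1\pm\Phi_2\right)$, relation \eqref{ip-aa+} follows from \eqref{ip12_1}-\eqref{ip-aa_1} via
\begin{align*}
2\left\langle\Phi_+, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_+\right\rangle_{L^2_{\bf K}(\Omega)}
&=\ \overline{c}\,\left\langle\Phi_1, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_2\right\rangle\ +\ c\,\left\langle\Phi_2, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_1\right\rangle\\
&=\ \overline{c}\cdot\frac{i}{2}\,\overline{\lambda}_\sharp\,\mathfrak{z}_2\ +\ c\cdot\frac{i}{2}\,\lambda_\sharp\,\overline{\mathfrak{z}_2}
\ =\ i\,|\lambda_\sharp|\,|{\bm{\mathfrak{K}}}_2|\ ;
\end{align*}
relations \eqref{ip12}, \eqref{ip21} and \eqref{ip-aa-} follow by the same expansion.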
\end{proof}
\begin{proposition}\label{inner-prods-W}
Assume that $W({\bf x})$ is real-valued, odd and $\Lambda_h-$ periodic.
Let $\vartheta_\sharp\equiv \left\langle \Phi_1,W\Phi_1\right\rangle_{L^2_{{\bf K}}(\Omega)}$. Then,
$ \vartheta_\sharp\in\mathbb{R}$ and
\begin{align}
\vartheta_\sharp\ =\ \left\langle \Phi_+, W\Phi_-\right\rangle_{L^2_{\bf K}(\Omega)}\ &=\ \left\langle \Phi_-, W\Phi_+\right\rangle_{L^2_{\bf K}(\Omega)} \label{ip-1W1} , \\
\left\langle \Phi_+, W\Phi_+\right\rangle_{L^2_{\bf K}(\Omega)}\ &=\ \left\langle \Phi_-, W\Phi_-\right\rangle_{L^2_{\bf K}(\Omega)}=0\label{ip-1W2} \ .
\end{align}
Note that since $\Phi_+({\bf x})=e^{i{\bf K}_\star\cdot{\bf x}}P_+({\bf x})$ and $\Phi_-({\bf x})=e^{i{\bf K}_\star\cdot{\bf x}}P_-({\bf x})$,
relations \eqref{ip-1W1} and \eqref{ip-1W2} hold with $\Phi_+$ and $\Phi_-$ replaced, respectively, by $P_+({\bf x})$
and $P_-({\bf x})$.
\end{proposition}
\begin{proof}
Equations \eqref{ip-1W1}-\eqref{ip-1W2} follow from the relations
\begin{align}
\vartheta_\sharp \equiv \left\langle \Phi_1, W\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)}\ &=\ -\left\langle \Phi_2, W\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)} \label{ip-1W1_1} , \\
\left\langle \Phi_1, W\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)}\ &=\ \left\langle \Phi_2, W\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)}=0\label{ip-1W2_1} \ .
\end{align}
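Indeed, granting \eqref{ip-1W1_1}-\eqref{ip-1W2_1} and writing $\Phi_\pm=\frac{1}{\sqrt{2}}\left(c\,\Phi_1\pm\Phi_2\right)$ with $|c|=1$, as in \eqref{correct-basis}, we compute, for instance,
\begin{align*}
\left\langle \Phi_+, W\Phi_-\right\rangle_{L^2_{\bf K}(\Omega)}
\ =\ \frac{1}{2}\left[\,|c|^2\left\langle\Phi_1,W\Phi_1\right\rangle\ -\ \left\langle\Phi_2,W\Phi_2\right\rangle\,\right]
\ =\ \frac{1}{2}\left[\,\vartheta_\sharp+\vartheta_\sharp\,\right]\ =\ \vartheta_\sharp\ ,
\end{align*}
the cross terms vanishing by \eqref{ip-1W2_1}; the remaining identities in \eqref{ip-1W1}-\eqref{ip-1W2} follow similarly.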
%
To prove \eqref{ip-1W1_1} and \eqref{ip-1W2_1}, we begin by recalling that $\Phi_2({\bf x})=\overline{\Phi_1(-{\bf x})}$, $W$ is real-valued and $W(-{\bf x})=-W({\bf x})$.
Since $W$ is real-valued, it is clear that $\vartheta_\sharp\in\mathbb{R}$.
Furthermore,
\begin{align*}
\left\langle\Phi_2,W\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)}&=
\int_\Omega\overline{\Phi_2({\bf x})}W({\bf x})\Phi_2({\bf x})d{\bf x}
= \int_\Omega \Phi_1(-{\bf x})W({\bf x})\overline{\Phi_1(-{\bf x})}d{\bf x}\\
&= \int_\Omega \Phi_1({\bf x})W(-{\bf x})\overline{\Phi_1({\bf x})}d{\bf x} = -\vartheta_\sharp \ .
\end{align*}
This proves \eqref{ip-1W1_1}. To prove \eqref{ip-1W2_1}, observe that
\begin{align*}
\left\langle\Phi_2,W\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)}&=\int_\Omega\overline{\Phi_2({\bf x})}W({\bf x})\Phi_1({\bf x})d{\bf x}
=\int_\Omega\Phi_1(-{\bf x})W({\bf x})\Phi_1({\bf x})d{\bf x}\\
&=\int_\Omega\Phi_1({\bf x})W(-{\bf x})\Phi_1(-{\bf x})d{\bf x}
= -\int_\Omega\Phi_1({\bf x})W({\bf x})\Phi_1(-{\bf x})d{\bf x}\\
&= -\overline{\left\langle\Phi_1,W\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)}} =- \left\langle\Phi_2,W\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)} \ .
\end{align*}
This completes the proof of Proposition \ref{inner-prods-W}.
\end{proof}
Propositions \ref{inner-prods-sharp} and \ref{inner-prods-W} imply that the orthogonality conditions \eqref{alpha-j} reduce to the following eigenvalue problem for $\alpha(\zeta)=(\alpha_+(\zeta), \alpha_-(\zeta))^T$:
\begin{align}
\left( \mathcal{D} - E^{(1)} \right) \alpha = 0 , \quad \alpha\in L^2(\mathbb{R}) \ . \label{m-dirac-eqn}
\end{align}
Here, $\mathcal{D}$ denotes the 1D Dirac operator:
\begin{equation}
\label{multi-dirac-op}
\mathcal{D} = -i|\lambda_\sharp| |{\bm{\mathfrak{K}}}_2| \sigma_3 \partial_\zeta + \vartheta_\sharp \kappa(\zeta) \sigma_1,
\quad \text{where} \quad \lambda_{\sharp}\,\vartheta_{\sharp} \neq 0 \ .
\end{equation}
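Written out componentwise, the eigenvalue problem \eqref{m-dirac-eqn} is the coupled system
\begin{align*}
-i|\lambda_\sharp|\,|{\bm{\mathfrak{K}}}_2|\ \partial_\zeta\alpha_+\ +\ \vartheta_\sharp\,\kappa(\zeta)\,\alpha_-\ &=\ E^{(1)}\alpha_+\ ,\\
i|\lambda_\sharp|\,|{\bm{\mathfrak{K}}}_2|\ \partial_\zeta\alpha_-\ +\ \vartheta_\sharp\,\kappa(\zeta)\,\alpha_+\ &=\ E^{(1)}\alpha_-\ ,
\end{align*}
where $\sigma_1$ and $\sigma_3$ denote the standard Pauli matrices.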
In Section \ref{dirac-solns} we prove that the eigenvalue problem \eqref{m-dirac-eqn} has an
exponentially localized eigenfunction $\alpha_{\star}(\zeta)$ with corresponding (mid-gap) zero-energy eigenvalue $E^{(1)}=0$.
Moreover, this eigenvalue has multiplicity one. We impose the normalization: $\|\alpha_\star\|_{L^2(\mathbb{R})}=1$.
Fix $(E^{(1)},\alpha)=(0,\alpha_\star)$. Then $\alpha_\star\in L^2(\mathbb{R})$, $\psi^{(0)}({\bf x}, \zeta)$ is completely determined (up to normalization) and the solvability conditions \eqref{alpha-j} are satisfied. Therefore, the right hand side of \eqref{psi1-eqn} lies in the range of $H_V-E_\star: H^2_{\bf K}\to L^2_{\bf K}$, and we may invert $(H_V-E_{\star})$ obtaining
\begin{align}
\psi^{(1)}({\bf x},\zeta) &= \left(R(E_{\star})G^{(1)}\right)({\bf x},\zeta) + \psi^{(1)}_h({\bf x},\zeta)
\equiv \psi^{(1)}_p({\bf x},\zeta) + \psi^{(1)}_h({\bf x},\zeta) ,\label{psi1p-def}
\end{align}
where
\begin{equation*}
R(E_\star) = \left(H_V-E_\star\right)^{-1}:P_\perp L^2_{\bf K}\to P_\perp H^2_{\bf K}\
\end{equation*}
and $P_\perp$ is the $L^2_{\bf K}(\Omega)-$ projection onto the orthogonal complement of the kernel of $H_V-E_\star$; this kernel equals ${\rm span}\{\Phi_+,\Phi_-\}$.
Here, $\psi^{(1)}_p$ denotes a particular solution, and
\begin{equation*}
\psi^{(1)}_h({\bf x},\zeta) = \alpha^{(1)}_+(\zeta)\Phi_+({\bf x})+\alpha^{(1)}_-(\zeta)\Phi_-({\bf x})
\end{equation*} is a homogeneous solution.
Note that by exploiting the degrees of freedom coming from the $L^2_{\bf K}-$ kernel of $H_V-E_\star$, we can continue the formal expansion to any order in $\delta$. Indeed, at $\mathcal{O}(\delta^\ell)$ for $\ell\geq2$, we have
\begin{align}
\label{psi2-eqn}
& \left(-\Delta_{\bf x} +V({\bf x})-E_\star\right)\psi^{(\ell)}({\bf x},\zeta)\\
&\quad = \left(\ 2 ({\bm{\mathfrak{K}}}_2\cdot \nabla_{{\bf x}})\ \partial_\zeta
-\kappa(\zeta)W({\bf x})\right)\psi_h^{(\ell-1)}({\bf x},\zeta) +E^{(\ell)}\psi^{(0)}({\bf x},\zeta) \nonumber\\
&\qquad + G^{(\ell)}\left({\bf x},\zeta;\psi^{(0)},\ldots,\psi^{(\ell-2)},\psi_p^{(\ell-1)},E^{(1)},\ldots,E^{(\ell-1)}\right) , \nonumber\\
&\psi^{(\ell)}({\bf x}+{\bm{\mathfrak{v}}},\cdot)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}}\psi^{(\ell)}({\bf x},\cdot) \ \ \ \forall\ {\bm{\mathfrak{v}}}\in\Lambda_h , \nonumber
\end{align}
where, for the case $\ell=2$,
\begin{equation}
G^{(2)}({\bf x},\zeta;\psi^{(0)},\psi_p^{(1)})
= \left(\ 2 ({\bm{\mathfrak{K}}}_2\cdot \nabla_{{\bf x}})\ \partial_\zeta
-\kappa(\zeta)W({\bf x})\ \right)\psi_p^{(1)}({\bf x},\zeta) + |{\bm{\mathfrak{K}}}_2|^2\ \partial^2_\zeta \psi^{(0)}({\bf x},\zeta)\ . \label{G2def}
\end{equation}
As before, \eqref{psi2-eqn} has a solution if and only if the right hand side is $L^2_{\bf K}(\Omega;d{\bf x})$-orthogonal to the functions $\Phi_j({\bf x})$, $j=\pm$.
This solvability condition reduces to the inhomogeneous system:
\begin{equation}
\mathcal{D} \alpha^{(\ell-1)}(\zeta) =
\mathcal{G}^{(\ell)}\left(\zeta\right)+E^{(\ell)}\alpha_{\star}(\zeta), \quad \alpha^{(\ell-1)}\in L^2(\mathbb{R}) , \ \ {\rm where}
\label{solvability_cond}
\end{equation}
\begin{equation}
\mathcal{G}^{(\ell)}(\zeta) =
\left( \begin{array}{c}
\left\langle \Phi_+(\cdot),G^{(\ell)}(\cdot,\zeta;\psi^{(0)},\ldots,\psi^{(\ell-2)},\psi_p^{(\ell-1)},E^{(1)},\ldots,E^{(\ell-1)}) \right\rangle_{L^2_{\bf K}(\Omega)} \\
\left\langle \Phi_-(\cdot),G^{(\ell)}(\cdot,\zeta;\psi^{(0)},\ldots,\psi^{(\ell-2)},\psi_p^{(\ell-1)},E^{(1)},\ldots,E^{(\ell-1)}) \right\rangle_{L^2_{\bf K}(\Omega)}
\end{array} \right) . \label{ipG}
\end{equation}
Solvability of the non-homogeneous Dirac system \eqref{solvability_cond} in $L^2(\mathbb{R})$ is ensured by imposing $L^2(\mathbb{R})-$ orthogonality
of the right hand side of \eqref{solvability_cond} to $\alpha_\star(\zeta)$. This yields:
\begin{equation}
\label{solvability_cond_E2}
E^{(\ell)} = -\inner{\alpha_{\star},\mathcal{G}^{(\ell)}}_{L^2(\mathbb{R})}.
\end{equation}
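The formula \eqref{solvability_cond_E2} is the standard Fredholm-alternative computation: since $\mathcal{D}$ is self-adjoint on $L^2(\mathbb{R})$ and $\mathcal{D}\alpha_\star=0$,
\begin{align*}
0\ =\ \inner{\mathcal{D}\alpha_\star,\alpha^{(\ell-1)}}_{L^2(\mathbb{R})}\ =\ \inner{\alpha_\star,\mathcal{D}\alpha^{(\ell-1)}}_{L^2(\mathbb{R})}\ =\ \inner{\alpha_\star,\mathcal{G}^{(\ell)}}_{L^2(\mathbb{R})}\ +\ E^{(\ell)}\,\norm{\alpha_\star}_{L^2(\mathbb{R})}^2\ ,
\end{align*}
and $\norm{\alpha_\star}_{L^2(\mathbb{R})}=1$.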
Thus we obtain, at $\mathcal{O}(\delta^\ell)$, that $\psi^{(\ell)}=\psi_p^{(\ell)}+\psi_h^{(\ell)}$, where $\psi_p^{(\ell)}$ is a particular solution
of \eqref{psi2-eqn} and
$\psi^{(\ell)}_h({\bf x},\zeta) = \alpha^{(\ell)}_+(\zeta)\Phi_+({\bf x})+\alpha^{(\ell)}_-(\zeta)\Phi_-({\bf x})$
is a homogeneous solution.
\noindent {\bf Summary:} Given a zero-energy $L^2(\mathbb{R})-$ eigenstate of the Dirac operator, $\mathcal{D}$ (see Section \ref{dirac-solns}),
we can, to any polynomial order in $\delta$, construct a formal solution of the eigenvalue problem $H^{(\delta)}\psi=E\psi,\ \psi\in L^2_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1}$.
\subsection{Zero-energy eigenstate of the Dirac operator, $\mathcal{D}$\label{dirac-solns}}
\begin{proposition}\label{zero-mode}
Let $\kappa(\zeta)$ be a domain wall function (Definition \ref{domain-wall-defn}) and assume, without loss of generality, that $\vartheta_\sharp>0$. Then,
\begin{enumerate}
\item The Dirac operator, $\mathcal{D}$, has a zero-energy eigenvalue, $E^{(1)}=0$, with exponentially localized solution given by:
\begin{align}
\alpha_\star(\zeta)\ &= \begin{pmatrix}\alpha_{\star,+}(\zeta) \\ \alpha_{\star,-}(\zeta) \end{pmatrix} =
\ \gamma \begin{pmatrix} 1 \\ -i \end{pmatrix}
e^{-\frac{{\vartheta_{\sharp}}}{|\lambda_\sharp| | \mathfrak{z}_2|}\ \int_0^\zeta \kappa(s) ds}\label{Fstar} \ .
\end{align}
Here, $\gamma\in\mathbb{C}$ is any constant for which $\|\alpha_\star\|_{L^2}=1$.
\item The solution \eqref{Fstar}, $\alpha_\star$, generates a leading order approximate (two-scale) edge state:
\begin{align}
&\Psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\nonumber\\
&\qquad = \alpha_{\star,+}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_+({\bf x}) +
\alpha_{\star,-}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_-({\bf x}) \label{Psi0-a}\\
&\qquad = e^{i{\bf K}\cdot{\bf x}}\ \frac{\gamma\,(-1-i)}{\sqrt{2}}\ \left[\ i \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{ \mathfrak{z}_2}{| \mathfrak{z}_2|} P_1({\bf x}) -
P_2({\bf x})\ \right]\
e^{-\frac{\vartheta_\sharp}{|\lambda_\sharp| | \mathfrak{z}_2|}\int_0^{\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\kappa(s)ds} \ .
\label{Psi0-d}\end{align}
$\Psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ propagates in the ${\bm{\mathfrak{v}}}_1$ direction with parallel quasimomentum ${\kpar=\bK\cdot\vtilde_1}$, and decays exponentially in the transverse direction, as ${\bm{\mathfrak{K}}}_2\cdot{\bf x}\to\pm\infty$.
\end{enumerate}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{zero-mode}]
The system \eqref{m-dirac-eqn} with energy $E^{(1)}=0$ may be written as:
\begin{align*}
\partial_\zeta \alpha\ &=\ \frac{-i{\vartheta_{\sharp}}}{|\lambda_\sharp|| \mathfrak{z}_2|}\ \kappa(\zeta)\ \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}\ \alpha,\quad \
\alpha = \begin{pmatrix} \alpha_+\\ \alpha_-\end{pmatrix},
\end{align*}
and has solutions:
\begin{align*}
\beta_1(\zeta)&=
\begin{pmatrix} 1\\ i\end{pmatrix} \
e^{\frac{{\vartheta_{\sharp}}}{|\lambda_\sharp| | \mathfrak{z}_2|}\ \int_0^\zeta \kappa(s) ds} \ , \quad \text{and} \quad
\beta_2(\zeta)=
\begin{pmatrix} 1\\ -i\end{pmatrix} \
e^{\frac{-{\vartheta_{\sharp}}}{|\lambda_\sharp| | \mathfrak{z}_2|}\ \int_0^\zeta \kappa(s) ds} \ .
\end{align*}
Since ${\vartheta_{\sharp}}>0$ and $\kappa(\zeta)\to\pm \kappa_\infty$ as $\zeta\to\pm\infty$, with $\kappa_\infty>0$, the solution $\beta_2(\zeta)$
decays as $\zeta\to\pm\infty$. Thus we set $\alpha_\star(\zeta)=\gamma \beta_2(\zeta)$, with constant $\gamma\in\mathbb{C}$ chosen so that $\norm{\alpha_\star}_{L^2(\mathbb{R})}=1$. This yields the expression for $\Psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ in \eqref{Psi0-a}-\eqref{Psi0-d}, and completes the proof of Proposition \ref{zero-mode}.
\end{proof}
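To illustrate Proposition \ref{zero-mode}, consider the model domain wall $\kappa(\zeta)=\kappa_\infty\tanh\zeta$, $\kappa_\infty>0$, which has the asymptotic behavior $\kappa(\zeta)\to\pm\kappa_\infty$ required of a domain wall function (Definition \ref{domain-wall-defn}). Since $\int_0^\zeta\kappa(s)\,ds=\kappa_\infty\ln\cosh\zeta$, the zero mode \eqref{Fstar} takes the explicit form
\begin{align*}
\alpha_\star(\zeta)\ =\ \gamma\begin{pmatrix} 1\\ -i\end{pmatrix}\ \left(\cosh\zeta\right)^{-\frac{\vartheta_\sharp\,\kappa_\infty}{|\lambda_\sharp|\,|\mathfrak{z}_2|}}\ ,
\end{align*}
which decays at the exponential rate $\vartheta_\sharp\kappa_\infty/\left(|\lambda_\sharp|\,|\mathfrak{z}_2|\right)$ as $\zeta\to\pm\infty$.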
\begin{remark}[Topological Stability]
The zero-energy eigenpair, \eqref{Fstar}, is ``topologically stable'' or
``topologically protected'' in the sense that
it (and hence the bifurcation of edge states, which it seeds) persists for any localized
perturbation of $\kappa(\zeta)$. Such perturbations may be large but do not change the asymptotic behavior of $\kappa(\zeta)$ as
$\zeta\to\pm\infty$.
\end{remark}
\section{Existence of edge states localized along an edge}\label{thm-edge-state}
In this section we prove the existence of edge states for the eigenvalue problem:
\begin{align}
\label{EVP_2}
&H^{(\delta)} \Psi\ =\ E\ \Psi \ ,\ \ \Psi\in H_{{\kpar=\bK\cdot\vtilde_1}}^2(\Sigma) \ , \quad \text{where} \\
&H^{(\delta)} \equiv-\Delta+V({\bf x})+\delta\kappa(\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x}) \ . \nonumber
\end{align}
We make the following assumptions:
\begin{enumerate}
\item[(A1)] $V$ is a honeycomb potential in the sense of Definition \ref{honeyV} and $-\Delta+V$ has a Dirac point at $({\bf K},E_\star)$; see Definition \ref{dirac-pt-defn} and
the conclusions of Theorem \ref{diracpt-thm}. In particular,
the degenerate subspace of $H^{(0)}-E_\star$ has orthonormal basis of Floquet-Bloch modes $\{\Phi_1({\bf x})\ ,\ \Phi_2({\bf x})\}$ and
\begin{equation*}
\lambda_\sharp\ \equiv\ \sum_{{\bf m}\in\mathcal{S}} c({\bf m})^2\ \left(\begin{array}{c}1\\ i\end{array}\right)\cdot \left({\bf K}+{\bf m}\vec{\bf k}\right) \ \ne\ 0 ; \ \textrm{see \eqref{lambda-sharp2}.}
\end{equation*}
\item[(A2)] $W$ is real-valued and $\Lambda_h-$ periodic, odd and non-degenerate; {\it i.e.} (W1), (W2) and (W3) of Section \ref{zigzag-edges} hold. In particular,
\begin{equation*}
{\vartheta_{\sharp}}\equiv\inner{\Phi_1,W\Phi_1}_{L_{\bf K}^2}\ =\ \inner{\Phi_+,W\Phi_-}_{L_{\bf K}^2} \ \ne\ 0\ .
\end{equation*}
\item[(A3)] $\kappa(\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x})$ is a domain wall function in the sense of Definition \ref{domain-wall-defn}.
\end{enumerate}
\noindent The following {\it spectral no-fold condition} plays a central role.
\begin{definition}[Spectral no-fold condition]\label{SGC}
Let $H_V=-\Delta+V({\bf x})$, where $V$ is a honeycomb potential in the sense of Definition \ref{honeyV}. Further, let $({\bf K},E_\star)$ be a Dirac point for $H_V$ in the sense of Definition \ref{dirac-pt-defn}, in which we use the convention of labeling the dispersion maps by:
${\bf k}\mapsto E_b({\bf k})$, where the indices of the two bands which touch at the Dirac point, $b_\star$ and $b_{\star}+1$, are relabeled $-$ and $+$, respectively; thus $b$ ranges over
$\{-,+\}\cup\{b\ge1: b\ne b_\star,\ b_{\star}+1\}$.
To the ${\bm{\mathfrak{v}}}_1-$ edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$, we associate the ``${\bm{\mathfrak{K}}}_2 -$ slice at quasi-momentum ${\bf K}$'', given by the union over all $b\in \{-,+\}\cup\{b\ge1: b\ne -, +\}$ of the curves
$ \{({\bf K}+\lambda{\bm{\mathfrak{K}}}_2\ ,\ E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2) : \ |\lambda|\le\frac12\}$.
We say the band structure of $H_V$ satisfies the spectral no-fold condition for the ${\bm{\mathfrak{v}}}_1-$ edge (equivalently, at the Dirac point and along the ${\bm{\mathfrak{K}}}_2 -$ slice), with constants $c_1, c_2, \mathfrak{a}_0$, and $\nu\in (0,1)$, if the following holds:
There is a ``modulus'', $\omega(\mathfrak{a})$, which is continuous, non-negative and increasing on $0\le\mathfrak{a}<\mathfrak{a}_0$, satisfying $\omega(0)=0$ and \[ \omega(\mathfrak{a}^\nu)/\mathfrak{a}\to\infty\ \ {\rm as}\ \ \mathfrak{a}\to0,\]
%
such that for all $0\le\mathfrak{a}< \mathfrak{a}_0$:
\begin{align}
\mathfrak{a}^\nu \le |\lambda|\le\frac12\quad &\implies\quad \Big|\ E_\pm({\bf K}+\lambda{\bm{\mathfrak{K}}}_2) - E_\star\ \Big|\ \ge\ c_1\ \omega(\mathfrak{a}^\nu) \ ,
\label{no-fold-over} \\
b\ne\pm, \ |\lambda|\le1/2 \quad &\implies \quad \Big| E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)-E_\star \Big|\ \ge\ c_2\ (1+|b|) \ .
\label{no-fold-over-b}
\end{align}
\end{definition}
Our final assumption is
\begin{enumerate}
\item[(A4)]
$-\Delta+V$ satisfies the spectral no-fold condition at quasimomentum ${\bf K}$ along the ${\bm{\mathfrak{K}}}_2 -$ slice; see Definition \ref{SGC}.
\end{enumerate}
\begin{remark}\label{control-in-1d}
\begin{enumerate}
\item Conditions \eqref{no-fold-over}-\eqref{no-fold-over-b} ensure that, restricted to the quasi-momentum slice $\lambda\mapsto {\bf K}+\lambda{\bm{\mathfrak{K}}}_2\in\mathcal{B}_h$, the dispersion curves which touch at the Dirac point $({\bf K},E_\star)$
do not ``fold over''; they remain at a distance of at least $c_1\,\omega(\mathfrak{a}^\nu)$ from the energy $E_\star$ for quasimomenta bounded away from ${\bf K}$.
\item Dispersion curves of periodic Schr\"odinger operators on $\mathbb{R}^1$ (Hill's operators,\ $H=-\partial_x^2+Q(x)$, where $Q(x+1)=Q(x)$) with ``Dirac points'' (see \cites{FLW-PNAS:14,FLW-MAMS:15}) always satisfy the natural 1D analogue of the spectral no-fold condition with $\omega(\mathfrak{a})=\mathfrak{a}$. Dirac points occur at quasi-momentum $k=\pm\pi$ and ODE arguments ensure that dispersion curves are monotone functions of $k$ away from $k=0,\pm\pi$.
\item In Section \ref{zz-gap}
we prove that $H_{\varepsilon V}=-\Delta+\varepsilon V$, where $V$ is a honeycomb potential, satisfies the no-fold condition along the zigzag slice (${\bm{\mathfrak{v}}}_1={\bf v}_1$) with modulus $\omega(\mathfrak{a})=\mathfrak{a}^2$, under the assumption that $\varepsilon V_{1,1}>0$ and $\varepsilon$ is sufficiently small.
\end{enumerate}
\end{remark}
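To illustrate the modulus condition in Definition \ref{SGC}: for $\omega(\mathfrak{a})=\mathfrak{a}^2$, as arises for the zigzag slice in part 3 of Remark \ref{control-in-1d},
\[ \frac{\omega(\mathfrak{a}^\nu)}{\mathfrak{a}}\ =\ \mathfrak{a}^{2\nu-1}\ \to\ \infty\quad {\rm as}\ \ \mathfrak{a}\to0 \]
for any $\nu\in(0,1/2)$, while for $\omega(\mathfrak{a})=\mathfrak{a}$, as in the 1D setting of part 2 of Remark \ref{control-in-1d}, any $\nu\in(0,1)$ suffices.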
We now state a key result of this paper, giving sufficient conditions for the existence
of ${\bm{\mathfrak{v}}}_1-$ edge states of $H^{(\delta)}$, for ${\bm{\mathfrak{v}}}_1\in\Lambda_h$.
\begin{theorem}\label{thm-edgestate}
Consider the ${\bm{\mathfrak{v}}}_1-$ edge state eigenvalue problem,
\eqref{EVP_2},
where $V({\bf x})$, $W({\bf x})$ and $\kappa(\zeta)$ satisfy assumptions (A1)-(A4).
Then, there exist positive constants $\delta_0, c_0$ and a branch of solutions of \eqref{EVP_2},
\[|\delta|\in(0,\delta_0)\longmapsto (E^\delta,\Psi^\delta)\in (E_\star-c_0\ \delta_0\ ,\ E_\star+c_0\ \delta_0)\times H^2_{{\kpar=\bK\cdot\vtilde_1}}(\Sigma) , \]
such that the following holds:
\begin{enumerate}
\item $\Psi^\delta$ is well-approximated by a slow modulation of a linear combination of
degenerate Floquet-Bloch modes $\Phi_+$ and $\Phi_-$ (\eqref{correct-basis}),
which is decaying transverse to the edge, $\mathbb{Z}{\bm{\mathfrak{v}}}_1$:
\begin{align}
&\left\|\ \Psi^\delta(\cdot)\ -\
\left[\alpha_{\star,+}(\delta{\bm{\mathfrak{K}}}_2 \cdot)\Phi_+(\cdot)+\alpha_{\star,-}(\delta{\bm{\mathfrak{K}}}_2 \cdot )\Phi_-(\cdot)\right]\
\right\|_{H^2_{{\kpar=\bK\cdot\vtilde_1}}}\
\lesssim\ \delta^{\frac12}\ , \label{TM_Psi-error}\\
& E^\delta = E_\star\ +\ E^{(2)}\delta^2\ +\ o(\delta^2), \label{TM_E-error}
\end{align}
where $E^{(2)}$ is obtained directly from \eqref{solvability_cond_E2}, \eqref{ipG} and \eqref{G2def}. The implied constant in \eqref{TM_Psi-error} depends on $V$, $W$ and $\kappa$, but is independent of $\delta$.
\item The amplitude vector, $\alpha_\star(\zeta)=\left(\alpha_{\star,+}(\zeta),\alpha_{\star,-}(\zeta)\right)$, is an $L^2(\mathbb{R}_\zeta)- $
normalized, topologically protected zero-energy eigenstate of the Dirac operator $\mathcal{D}$ of \eqref{multi-dirac-op}: $\mathcal{D}\alpha_\star=0$ (see Proposition \ref{zero-mode}).
\end{enumerate}
\end{theorem}
Perturbation theory for ${k_{\parallel}}$ near ${\bf K}\cdot{\bm{\mathfrak{v}}}_1$ can be used to show the persistence of edge states
for parallel quasi-momenta near ${\bf K}\cdot{\bm{\mathfrak{v}}}_1$.
\begin{corollary}
\label{vary_k_parallel}
Fix $V$, $W$, $\kappa$ and $\delta$ as in Theorem \ref{thm-edgestate}. Then there exists $\eta_0\ll\delta$ such that for all ${k_{\parallel}}$ satisfying $|{k_{\parallel}} - {\bf K}\cdot{\bm{\mathfrak{v}}}_1| <\eta_0$, there exists an $H^2_{k_{\parallel}}(\Sigma)-$ eigenfunction with eigenvalue $\mu(\delta,{k_{\parallel}})=\ E^\delta\ + \ \mu^{\delta}\ ({k_{\parallel}}-{\bf K}\cdot{\bm{\mathfrak{v}}}_1)\ +\mathcal{O}(|{k_{\parallel}}-{\bf K}\cdot{\bm{\mathfrak{v}}}_1|^2)$, where $E^{\delta}$ is given in \eqref{TM_E-error}, and $\mu^\delta$ is a constant independent of ${k_{\parallel}}$.
\end{corollary}
\noindent Zigzag edge states for ${k_{\parallel}}$ in a neighborhood of ${k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1=2\pi/3\ ({\bm{\mathfrak{v}}}_1={\bf v}_1)$ and, by symmetry, in a neighborhood of ${k_{\parallel}}=4\pi/3$ are indicated in Figure \ref{fig:k_parallel3}.
\subsection{Corrector equation}\label{corrector-equation}
We seek a solution of the eigenvalue problem \eqref{EVP_2}, $\Psi^\delta$, in the form
\begin{align}
\Psi^\delta &\equiv \psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})+\delta\psi_p^{(1)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})+\delta\eta^\delta({\bf x}) , \label{eta-def}\\
E^\delta &\equiv E_\star + \delta^2\mu^\delta \label{mu-def} .
\end{align}
Here,
$\psi^{(0)}$ and $\psi_p^{(1)}$ are given by their respective multiple scale expressions \eqref{psi0-soln} and \eqref{psi1p-def}:
\begin{align*}
\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) &= \alpha_{\star,+}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_+({\bf x}) + \alpha_{\star,-}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_-({\bf x}) , \nonumber \\
\psi_p^{(1)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) &= \left(R(E_\star)G^{(1)}\right)({\bf x}, \delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) ,
\end{align*}
and $(\mu^\delta,\eta^\delta({\bf x}))$ is the corrector, to be constructed. We may assume throughout that $\delta\ge0$.
\begin{remark}
\label{regularity}
We shall make frequent use of the regularity of $\Phi_+({\bf x})$, $\Phi_-({\bf x})$
and $\alpha_{\star}(\zeta)\equiv(\alpha_{\star,+}(\zeta),\alpha_{\star,-}(\zeta))^T$.
In particular, $V \in C^{\infty}(\mathbb{R}^2/\Lambda)$ and elliptic
regularity theory imply that $e^{-i{\bf K}\cdot{\bf x}}\Phi_\pm$ is $ C^{\infty}(\mathbb{R}^2/\Lambda)$, and
by Proposition \ref{zero-mode}, $\alpha_\star(\zeta)$ and its derivatives with respect to $\zeta$ are all exponentially decaying as $|\zeta|\to\infty$.
\end{remark}
The following proposition lists useful bounds on $\psi^{(0)}$ and $\psi_p^{(1)}$.
\begin{proposition}[$H_{\kpar=\bK\cdot\vtilde_1}^s(\Sigma_{\bf x})$ bounds on $\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ and $\psi_p^{(1)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$]
\label{lemma:psi_bounds} For all $s=1,2,\dots$, there exists $\delta_0>0$, such that if $0<|\delta|<\delta_0$,
then the leading order expansion terms $\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ and $\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ displayed in \eqref{psi0-soln} and
\eqref{psi1p-def} satisfy the bounds:
\begin{align*}
\norm{\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{H_{\kpar=\bK\cdot\vtilde_1}^s} + \norm{\partial_\zeta^2\psi^{(0)}({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L^2_{\kpar=\bK\cdot\vtilde_1}}\ &\approx |\delta|^{-1/2},\\
\norm{\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{H_{\kpar=\bK\cdot\vtilde_1}^s} &\lesssim |\delta|^{-1/2}, \\
\norm{\partial_\zeta^2\psi^{(1)}_p({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2}\ +\
\norm{\partial_{\bf x}\partial_\zeta\psi^{(1)}_p({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2} &\lesssim |\delta|^{-1/2}.
\end{align*}
It follows that
$\|\psi^{(0)}\|_{H^2_{\kpar=\bK\cdot\vtilde_1}} \approx \delta^{-1/2} \ \gg\
\|\delta \psi^{(1)}_p(\cdot,\delta\cdot) \|_{H^2_{\kpar=\bK\cdot\vtilde_1}} =\mathcal{O}( \delta^{1/2})$.
\end{proposition}
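The $|\delta|^{-1/2}$ scaling in Proposition \ref{lemma:psi_bounds} is a consequence of the two-scale structure of $\psi^{(0)}$ and $\psi^{(1)}_p$: heuristically, for $g\in L^2(\mathbb{R})$ and $0<\delta<1$, the change of variables $t\mapsto\delta t$ gives
\[ \int_{\mathbb{R}}\left|g(\delta t)\right|^2 dt\ =\ \frac{1}{\delta}\ \norm{g}_{L^2(\mathbb{R})}^2\ , \]
so each slowly varying amplitude $\alpha_{\star,\pm}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ contributes a factor $|\delta|^{-1/2}$ to norms taken over the unbounded cylinder $\Sigma$.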
The proof of Proposition \ref{lemma:psi_bounds} follows the approach taken in the proof of
Lemma 6.1 in Appendix G of \cites{FLW-MAMS:15}. We omit the details but make two key technical remarks that facilitate this proof.
\noindent {\it Bound on $\|\Phi_b\|_{H^s}$:} Recall $\Phi_b({\bf x};{\bf k})= e^{i{\bf k}\cdot{\bf x}}p_b({\bf x};{\bf k})$, where
$\|\Phi_b\|_{L^2(\Omega)}=\|p_b\|_{L^2(\Omega)}=1$. Now $p_b({\bf x};{\bf k})$ satisfies
$-\Delta p_b= 2i{\bf k}\cdot\nabla p_b - |{\bf k}|^2 p_b - V p_b+E_b({\bf k}) p_b$, where $V$ is bounded and smooth.
By 2D Weyl asymptotics $|E_b({\bf k})|\approx(1+|b|), \ b\gg1$ and therefore we have
$\| \Delta p_b\|_{H^{s-1}}\le C_s (1+b)\ \|p_b\|_{H^s}$. Hence, by elliptic theory
$\| p_b\|_{H^{s+1}}\le C_s (1+b)\ \|p_b\|_{H^s}$ and induction on $s\ge0$ yields
$\| p_b\|_{H^{s}}\le C_s (1+b)^s\ $.
\noindent {\it Rapid decay of $\inner{\Phi_b(\cdot;{\bf K}),f(\cdot)}_{L^2(\Omega)}$:}
Using $H^{(0)}\Phi_b({\bf x};{\bf K}) = E_b({\bf K}) \Phi_b({\bf x};{\bf K})$, for sufficiently smooth $f$ we have:
$ \inner{\Phi_b(\cdot;{\bf K}),f(\cdot)}_{L^2(\Omega)} = (E_b({\bf K}))^{-M} \inner{\Phi_b(\cdot;{\bf K}),[H^{(0)}]^M f(\cdot)}_{L^2(\Omega)}.$ Hence, for any $M\ge0$, $ | \inner{\Phi_b(\cdot;{\bf K}),f(\cdot)}_{L^2(\Omega)} |\ \le\ C_M (1+b)^{-M}$.
\smallskip
It remains to construct and bound the corrector $(\mu,\eta({\bf x}))$. Substitution of the expansion \eqref{eta-def} into the eigenvalue problem \eqref{EVP_2} yields an equation for $\eta({\bf x}) \in H^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma) $, which depends on $\mu$ and the small parameter $\delta$:
{\small
\begin{align}
& \left(-\Delta_{\bf x} + V({\bf x}) - E_\star\right)\eta({\bf x}) + \delta\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) W({\bf x})\eta({\bf x}) - \delta^2\mu\ \eta({\bf x}) \nonumber \\
&\quad = \delta \Big(2{\bm{\mathfrak{K}}}_2\cdot \nabla_{\bf x}\ \partial_\zeta-\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})\Big)\psi_p^{(1)}({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ +\ \delta\mu\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) \nonumber\\
&\quad\qquad
+ \delta |{\bm{\mathfrak{K}}}_2|^2 \partial_\zeta^2\psi^{(0)}({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}
+\delta^2\mu\psi_p^{(1)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) + \delta^2|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\ \psi_p^{(1)}({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}} .
\label{corrector-eqn1}
\end{align}
}
To prove Theorem \ref{thm-edgestate}, we shall prove that \eqref{corrector-eqn1} has a solution $(\mu(\delta),\eta^\delta)$, with $\eta^\delta \in H^2_{\kpar=\bK\cdot\vtilde_1}$ satisfying the bound
\begin{equation*}
\|\delta \eta^\delta \|_{H^2_{\kpar=\bK\cdot\vtilde_1}} \leq C\delta^{1/2} \ .
\end{equation*}
\subsection{Decomposition of corrector, $\eta$, into near and far energy components}\label{rough-strategy}
Introduce the abbreviated notation, for $|\lambda|\le1/2$:
\begin{align}\label{Eb_of_lambda}
E_b(\lambda)\ =\
\begin{cases}
E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2) & b\notin\{b_\star, b_\star+1\} , \\
E_-(\lambda) & b=b_\star , \\
E_+(\lambda) & b=b_\star+1 ,
\end{cases}
\end{align}
and
\begin{align}\label{Phib_of_lambda}
\Phi_b({\bf x};\lambda)\ =\
\begin{cases}
\Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) & b\notin\{b_\star, b_\star+1\} , \\
\Phi_-({\bf x};\lambda) & b=b_\star , \\
\Phi_+({\bf x};\lambda) & b=b_\star+1 .
\end{cases}
\end{align}
Define $\widetilde{f}_b(\lambda)=\inner{\Phi_b(\cdot, \lambda), f(\cdot)}_{L^2_{{\kpar=\bK\cdot\vtilde_1}}}$.
By Theorem \ref{fourier-edge}, any $\eta\in H^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma) $ has the representation
\begin{equation}
\label{eta-expansion}
\eta({\bf x}) = \sum_{b\ge1}\ \int_{|\lambda|\le1/2}\ \Phi_b({\bf x};\lambda)\ \widetilde{\eta}_b(\lambda)\ d\lambda \ .
\end{equation}
Our strategy is as follows: we first derive a system of equations governing $\{\widetilde{\eta}_b(\lambda)\}_{b\geq1}$, which is formally equivalent to system \eqref{corrector-eqn1}; we then prove that this system has a solution, which is used to construct $\eta({\bf x})$.
Take the inner product of \eqref{corrector-eqn1} with $\Phi_b({\bf x};\lambda)$, for $ b\ge1$, to obtain
\begin{align}
b\ge1:\quad &\left(\ E_b(\lambda)\ -\ E_\star\ \right) \widetilde{\eta}_b(\lambda) \nonumber \\
&\qquad\qquad +
\delta \left\langle \Phi_b(\cdot;\lambda) ,
\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) W(\cdot) \eta(\cdot) \right\rangle_{L^2_{{\kpar=\bK\cdot\vtilde_1}}}\label{eta-b-system}\\
& \qquad = \delta\widetilde{F}_b[\mu,\delta](\lambda)
+ \delta^2\ \mu\ \widetilde{\eta}_b(\lambda) \ ,\ |\lambda|\le1/2. \nonumber
\end{align}
Here, $\widetilde{F}_b[\mu,\delta](\lambda),\ b\ge1$, is given by:
\begin{align}
& \widetilde{F}_b[\mu,\delta](\lambda)\equiv
\widetilde{F}^{1,\delta}_b(\lambda)\ + \mu \widetilde{F}^{2,\delta}_b(\lambda)\ +\ \delta\mu \widetilde{F}^{3,\delta}_b(\lambda)\ +\ \widetilde{F}^{4,\delta}_b(\lambda)\ +\ \delta \widetilde{F}^{5,\delta}_b(\lambda),
\label{Fb-def}
\end{align}
where
{\small
\begin{align}
\widetilde{F}^{1,\delta}_b(\lambda) & \equiv
\inner{\Phi_b({\bf x},\lambda), (2{\bm{\mathfrak{K}}}_2\cdot \nabla_{\bf x}\ \partial_\zeta -\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x}))
\psi^{(1)}_p({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2} ,\nonumber\\
\widetilde{F}^{2,\delta}_b(\lambda)& \equiv \inner{\Phi_b({\bf x},\lambda),\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2} , \nonumber\\
\widetilde{F}^{3,\delta}_b(\lambda) & \equiv \inner{\Phi_b({\bf x},\lambda),\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2} , \label{Fdef}\\
\widetilde{F}^{4,\delta}_b(\lambda) & \equiv \inner{\Phi_b({\bf x},\lambda), \left. |{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2 \psi^{(0)}({\bf x},\zeta)\right|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2} , \nonumber\\
\widetilde{F}^{5,\delta}_b(\lambda) & \equiv \inner{\Phi_b({\bf x},\lambda), \left. |{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2 \psi^{(1)}_p({\bf x},\zeta)\right|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2} . \nonumber
\end{align}
}
Recall that the spectral no-fold condition furnishes a modulus $\omega$ and an exponent $\nu>0$ such that $\delta/\omega(\delta^\nu) \to 0$ as $\delta\to0$. We next decompose $\eta({\bf x})$ into its components with energies ``near'' and ``far'' from the Dirac point:
\begin{align*}
\eta({\bf x})\ &=\ \eta_{\rm near}({\bf x})\ +\ \eta_{\rm far}({\bf x}), \ \ {\rm where}
\end{align*}
\begin{align}
\eta_{\rm near}({\bf x})\ &\equiv\
\sum_{b=\pm}\ \int_{|\lambda|\le1/2}\ \Phi_b({\bf x};\lambda)\ \widetilde{\eta}_{b,{\rm near}}(\lambda)\ d\lambda \label{eta-near} , \\
\eta_{\rm far}({\bf x})\ &\equiv\ \sum_{b\geq1}\ \int_{|\lambda|\le1/2}\ \Phi_b({\bf x};\lambda)\ \widetilde{\eta}_{b,{\rm far}}(\lambda)\ d\lambda,\qquad {\rm and} \label{eta-far}
\end{align}
\begin{align*}
\widetilde{\eta}_{\pm,{\rm near}}(\lambda)\ &\equiv\ \chi\left(|\lambda|\le\delta^\nu\right)\
\widetilde{\eta}_{\pm}(\lambda) , \\
\widetilde{\eta}_{b,{\rm far}}(\lambda)\ &\equiv\ \chi\left((\delta_{b,+}+\delta_{b,-})\delta^\nu\le |\lambda|\le \frac12 \right)\
\widetilde{\eta}_b(\lambda),\ b\geq1 ;
\end{align*}
$\delta_{b,+}$ and $\delta_{b,-}$ are Kronecker delta symbols: $\delta_{b,\pm}=1$ if $b=\pm$, and $\delta_{b,\pm}=0$ otherwise.
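Note that, for each $b$, these cutoffs are complementary: for $b=\pm$ they partition $\{|\lambda|\le1/2\}$ up to the measure-zero set $\{|\lambda|=\delta^\nu\}$, while for $b\ne\pm$ the far cutoff is identically one:
\begin{equation*}
b=\pm:\quad \chi\left(|\lambda|\le\delta^\nu\right)+\chi\left(\delta^\nu\le|\lambda|\le\tfrac12\right)\ =\ 1\ \ {\rm a.e.},\qquad
b\ne\pm:\quad \chi\left(0\le|\lambda|\le\tfrac12\right)\ =\ 1 .
\end{equation*}
Hence $\eta=\eta_{\rm near}+\eta_{\rm far}$ recovers the full expansion \eqref{eta-expansion}.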
%
We rewrite system \eqref{eta-b-system} as two coupled subsystems: a pair of equations governing the {\bf near energy components}:
\begin{align}
&\left(E_{+}(\lambda)-E_{\star}\right)\widetilde{\eta}_{+,\rm near}(\lambda) \nonumber \\
&\qquad+ \delta\chi\Big(\abs{\lambda}\leq\delta^{\nu}\Big)\inner{\Phi_{+}(\cdot,\lambda),\kappa(\delta{\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\left[\eta_{\rm near}(\cdot)+\eta_{\rm far}(\cdot)\right]}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)}
\label{near_cpt_1} \\
&\quad =\delta\chi\Big(\abs{\lambda}\leq\delta^{\nu}\Big)
\widetilde{F}_{+}[\mu,\delta](\lambda) + \delta^2\mu\ \widetilde{\eta}_{+,{\rm near}}(\lambda), \nonumber \\
&\left(E_{-}(\lambda)-E_{\star}\right)\widetilde{\eta}_{-,\rm near}(\lambda) \nonumber \\
&\qquad+\delta\chi\left(\abs{\lambda}\leq\delta^{\nu}\right)\inner{\Phi_{-}(\cdot,\lambda),\kappa(\delta{\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\left[\eta_{\rm near}(\cdot)+ \eta_{\rm far}(\cdot)\right]}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)}
\label{near_cpt_2} \\
&\quad =\delta\chi(\abs{\lambda}\leq\delta^{\nu})
\widetilde{F}_{-}[\mu,\delta](\lambda) + \delta^2\mu\ \widetilde{\eta}_{-,{\rm near}}(\lambda), \nonumber
\end{align}
coupled to an infinite system governing the {\bf far energy components}:
\begin{align}
&\left(E_b(\lambda)-E_{\star}\right)\widetilde{\eta}_{b,\rm far}(\lambda)
+\delta\chi\Big(1/2\ge\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu}\Big) \times \nonumber \\
&\qquad
\left\langle\Phi_{b}(\cdot,\lambda),\kappa(\delta{\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\left[\eta_{\rm near}(\cdot)+\eta_{\rm far}
(\cdot)\right] \right\rangle_{L^2_{{\kpar=\bK\cdot\vtilde_1}}(\Sigma)} \label{far_cpts} \\
&\quad =\delta\chi\Big(1/2\ge\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu}\Big)
\widetilde{F}_{b}[\mu,\delta](\lambda) + \delta^2\mu\ \widetilde{\eta}_{b,\rm far}(\lambda),\ \ b\geq1. \nonumber
\end{align}
We now systematically manipulate \eqref{near_cpt_1}-\eqref{far_cpts} into the form of a band-limited Dirac system; see Proposition \ref{near_freq_compact}. This latter equation is then solved in Proposition \ref{solve4beta}. Since all steps are reversible, this yields a solution $(\mu^\delta , \{\widetilde{\eta}^\delta_b(\lambda)\}_{b\geq1})$ of \eqref{near_cpt_1}-\eqref{far_cpts}. Finally, $\eta^\delta\in H^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma)$, the solution of corrector equation \eqref{corrector-eqn1}, is reconstructed from the amplitudes $\{\widetilde{\eta}^\delta_b(\lambda)\}_{b\geq1}$ using \eqref{eta-expansion}.
\subsection{Construction of $\eta_{\rm far}=\eta_{\text{far}}[\eta_{\text{near}},\mu,\delta]$ and derivation of a closed system for $\eta_{\rm near}$}
We solve \eqref{far_cpts} for $\eta_{\rm far}$ as a functional of $\eta_{\rm near}$, and the parameters $\mu$ and $\delta$.
We then study the {\it closed} equation for $\eta_{\rm near}$ obtained by substitution of $\eta_{\rm far}$ into \eqref{near_cpt_1} and \eqref{near_cpt_2}.
\noindent {\it It is in the construction of this map that we use assumption (A4), the spectral no-fold condition along the ${\bm{\mathfrak{K}}}_2 -$ slice; see Definition \ref{SGC}.
We apply it in the following form: there exist a modulus, $\omega(\mathfrak{a})$, and positive constants $\nu$, $c_1$ and $c_2$, depending on $V$, such that for all sufficiently small $\delta\ne0$:}
%
\begin{align}
\delta^\nu \le |\lambda|\le\frac12\quad &\implies\quad \Big|\ E_\pm(\lambda) - E_\star\ \Big|\ \ge\ c_1\ \omega(\delta^{\nu}),
\label{no-fold-over-A} \\
b\ne\pm:\ \ \ |\lambda|\le1/2 \quad &\implies \quad \Big| E_b(\lambda)-E_\star \Big|\ \ge\ c_2\ (1+|b|) \ .
\label{no-fold-over-b-A}
\end{align}
\noindent The far energy system \eqref{far_cpts} may be written as
a fixed point system for $\widetilde\eta_{\rm far}=\{\widetilde{\eta}_{b,\rm far}(\lambda)\}_{b\ge1}$:
\begin{equation}
\widetilde{\mathcal{E}}_b[\widetilde{\eta}_{\rm far};\eta_{\rm near},\mu,\delta]\ =\
\widetilde{\eta}_{b,\rm far}\ ,\qquad b\ge1,
\label{fixed-pt1}
\end{equation}
where the mapping $\widetilde{\mathcal{E}}_b$ is given by
\begin{align*}
\widetilde{\mathcal{E}}_b[\phi;\psi,\mu,\delta](\lambda)&\equiv
\delta^2\mu\ \frac{\widetilde{\phi}_{b,\textrm{far}}(\lambda)}{{E_b(\lambda)-E_{\star}}}
+\frac{\delta\ \chi\Big(1/2\ge\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu}\Big)}{E_b(\lambda)-E_{\star}} \times \\
& \quad \left(-\inner{\Phi_{b}(\cdot,\lambda),\kappa(\delta {\bm{\mathfrak{K}}}_2 \cdot)W(\cdot)\left[\psi(\cdot)+ \phi
(\cdot)\right]}_{L_{\kpar=\bK\cdot\vtilde_1}^2} + \widetilde{F}_{b}[\mu,\delta](\lambda) \right) ,
\end{align*}
and
\begin{align*}
\phi({\bf x})&= \sum_{b\ge1} \int_{|\lambda| \leq 1/2} \chi\Big(\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu}\Big)\
\widetilde{\phi}_b(\lambda) \Phi_b({\bf x};\lambda)\ d\lambda \\
& =\
\sum_{b\ge1} \int_{|\lambda| \leq 1/2} \widetilde{\phi}_{b,{\rm far}}(\lambda) \Phi_b({\bf x};\lambda)\ d\lambda\ .
\end{align*}
Equivalently,
\begin{equation}
\mathcal{E}[\eta_{\rm far};\eta_{\rm near},\mu,\delta]\ =\ \eta_{\rm far}\ .
\label{fixed-pt-notilde}
\end{equation}
For fixed $\mu$, $\delta$ and band-limited $\eta_{\rm near}$:
\begin{equation}
\widetilde{\eta}_{\pm,\rm near}(\lambda)\ =\ \chi\left(\abs{\lambda}\leq \delta^\nu\right)
\widetilde{\eta}_{\pm,\rm near}(\lambda)
, \label{near-def}\end{equation}
we seek a solution $\{\widetilde{\eta}_{b,\rm far}(\lambda)\}_{b\ge1}$, supported at energies
bounded away from $E_\star$:
\begin{equation}
\widetilde{\eta}_{b,\rm far}(\lambda)\ =\
\chi\left(\abs{\lambda}\ge(\delta_{b,-}+\delta_{b,+})\delta^\nu\right) \widetilde{\eta}_{b,\rm far}(\lambda)
, ~~~b\ge1 .\label{far-def}\end{equation}
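Spelled out, \eqref{near-def}-\eqref{far-def} are the support conditions
\begin{equation*}
b=\pm:\ \ {\rm supp}\, \widetilde{\eta}_{b,\rm far}\subset\left\{\delta^\nu\le|\lambda|\le\tfrac12\right\},
\qquad
b\ne\pm:\ \ {\rm supp}\, \widetilde{\eta}_{b,\rm far}\subset\left\{|\lambda|\le\tfrac12\right\};
\end{equation*}
only the two bands which touch at the Dirac point require excision of a neighborhood of $\lambda=0$, since for $b\ne\pm$ the energies $E_b(\lambda)$ are uniformly bounded away from $E_\star$ by \eqref{no-fold-over-b-A}.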
\noindent Introduce the Banach spaces of functions limited to ``far'' and ``near'' energy regimes:
\begin{align*}
L^2_{{\rm near}, \delta^\nu}(\Sigma) &\equiv\
\left\{ f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma) : \widetilde{f}_b(\lambda)\ \textrm{satisfies \eqref{near-def}}\right\} , \\
L^2_{{\rm far}, \delta^\nu}(\Sigma) &\equiv\
\left\{ f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma) : \widetilde{f}_b(\lambda)\ \textrm{satisfies \eqref{far-def}}\right\} .
\end{align*}
Near- and far-energy Sobolev spaces $H^s_{\rm near}(\Sigma)$ and
$H^s_{\rm far}(\Sigma)$ are defined analogously.
The corresponding open balls of radius $\rho$ are given by:
\begin{align*}
B_{{\rm near},\delta^\nu}(\rho) &\equiv\
\left\{ f\in L^2_{{\rm near}, \delta^\nu} : \|f\|_{L_{\kpar=\bK\cdot\vtilde_1}^2}<\rho \right\} , \\
B_{{\rm far},\delta^\nu}(\rho) &\equiv\
\left\{ f\in L^2_{{\rm far}, \delta^\nu} : \|f\|_{L_{\kpar=\bK\cdot\vtilde_1}^2}<\rho \right\} .
\end{align*}
\noindent Using assumption (A4), that $H^{(0)}=-\Delta+V$ satisfies the no-fold condition for the ${\bm{\mathfrak{v}}}_1 -$ edge, we deduce:
\begin{proposition}\label{fixed-pt}
\begin{enumerate}
\item For any fixed $M>0$ and $R>0$, there exists a positive number $\delta_0\le1$ such that for all $0<\delta<\delta_0$,
equation \eqref{fixed-pt-notilde}, or equivalently the system \eqref{fixed-pt1}, has a unique solution, defining a map
\begin{align*}
&(\eta_{\text{near}},\mu,\delta)\in B_{{\rm near},\delta^\nu}(R)\times\{|\mu|<M\}\times\{0<\delta<\delta_0\}
\nonumber\\
&\qquad\qquad \mapsto\ \eta_{\rm far}(\cdot;\eta_{\rm near},\mu,\delta)=
\mathcal{T}^{-1}\widetilde{\eta}_{\text{far}}\in B_{{\rm far},\delta^\nu}(\rho_\delta) ,\quad
\rho_\delta=\mathcal{O}\left(\frac{\delta^{\frac12}}{\omega(\delta^\nu)}\right).
\end{align*}
\item The mapping $(\eta_{\text{near}},\mu,\delta)\mapsto \eta_{\rm far}(\cdot;\eta_{\rm near},\mu,\delta)\in H_{\kpar=\bK\cdot\vtilde_1}^2$ is
Lipschitz in $(\eta_{\rm near},\mu)$ with:
\begin{align}
&\left\|\ \eta_{\text{far}}[\psi_1,\mu_1,\delta] - \eta_{\text{far}}[\psi_2,\mu_2,\delta]\ \right\|_{H_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)} \nonumber \\
&\qquad \leq \ C' \ \frac{\delta}{\omega(\delta^\nu)} \ \Big(\norm{\psi_1-\psi_2}_{H_{\kpar=\bK\cdot\vtilde_1}^2} + \abs{\mu_1-\mu_2} \Big), \nonumber \\
&\norm{\ \eta_{\text{far}}[\eta_{\text{near}};\mu,\delta]\ }_{H_{\kpar=\bK\cdot\vtilde_1}^2}
\le\ C''\left(\
\frac{\delta}{\omega(\delta^\nu)}\norm{\eta_{\text{near}}}_{H_{\kpar=\bK\cdot\vtilde_1}^2}+\frac{\delta^{\frac12}}{\omega(\delta^\nu)} \right)\ . \label{eta-far-bound}
\end{align}
The constants $C'$ and $C''$ depend only on $M, R$ and $\nu$.
\item
The mapping $(\eta_{\text{near}},\mu,\delta)\mapsto\eta_{\text{far}}[\eta_{\text{near}},\mu,\delta]$
satisfies:
\begin{equation}
\label{eta_far_affine}
\eta_{\text{far}}[\eta_{\text{near}},\mu,\delta]({\bf x}) = [A\eta_{\text{near}}]({\bf x};\mu,\delta) + \mu
B({\bf x};\delta) + C({\bf x};\delta).
\end{equation}
For $\eta_{\text{near}}\in B_{{\rm near},\delta^\nu}(R)$ we have:
\begin{align*}
&\left\| [A\eta_{\text{near}}](\cdot,\mu_1,\delta) - [A\eta_{\text{near}}](\cdot,\mu_2,\delta)\right\|_{H_{\kpar=\bK\cdot\vtilde_1}^2}
\le \ C'_{M,R}\ \frac{\delta}{\omega(\delta^\nu)}\ |\mu_1-\mu_2|, \\
&\norm{[A\eta_{\text{near}}](\cdot;\mu,\delta)}_{H_{\kpar=\bK\cdot\vtilde_1}^2}
\leq \frac{\delta}{\omega(\delta^\nu)}
\norm{\eta_{\text{near}}}_{H_{\kpar=\bK\cdot\vtilde_1}^2}, \\
&\norm{B(\cdot;\delta)}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \leq\frac{\delta^\frac12}{\omega(\delta^\nu)}, \ \ \text{and} \ \
\norm{C(\cdot;\delta)}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \leq \frac{\delta^\frac12}{\omega(\delta^\nu)}.
\end{align*}
\item We may extend $\eta_{\rm far}[\eta_{\rm near},\mu,\delta]$ to the half-open interval
$\delta\in[0,\delta_0)$
by defining
$\eta_{\rm far}[\eta_{\rm near},\mu,\delta=0]=0$. Then, by \eqref{eta-far-bound},
$\eta_{\rm far}[\eta_{\rm near},\mu,\delta]$ is continuous at $\delta=0$.
\end{enumerate}
\end{proposition}
\begin{remark}[Remarks on the proof of Proposition \ref{fixed-pt}]\label{frak-e-remark}
The proof follows that of Corollary 6.4 in \cites{FLW-MAMS:15}, with changes that we now discuss.
\begin{enumerate}
\item[(a)] The fixed point equation \eqref{fixed-pt1} for $\eta_{\rm far}$ is of the form:
\begin{align}
\widetilde{\eta}_{b,\rm far}(\lambda)\ &=\ \widetilde{\left[\ \mathcal{Q}_\delta\, \eta_{\rm far}\ \right]}_b(\lambda)\ + \ \frac{\delta\ \chi(\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu})}{E_b(\lambda)-E_{\star}} \ \times \nonumber \\
&\qquad\qquad \left( - \inner{\Phi_{b}(\cdot,\lambda),\kappa(\delta {\bm{\mathfrak{K}}}_2 \cdot)W(\cdot)\ \eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2}
+\widetilde{F}_{b}[\mu,\delta](\lambda) \right) ,\quad b\ge1,\label{fixed-pt2}
\end{align}
where $\mathcal{Q}_\delta$ is bounded and linear on $H_{\kpar=\bK\cdot\vtilde_1}^2$ and defined by:
\begin{align}
\widetilde{\left[\ \mathcal{Q}_\delta \phi\ \right]}_b(\lambda)\ &\equiv\ -\delta\
\frac{ \chi(\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu})}{E_b(\lambda)-E_{\star}}
\inner{\Phi_{b}(\cdot,\lambda),\kappa(\delta {\bm{\mathfrak{K}}}_2 \cdot)W(\cdot)\ \phi
(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2}\nonumber\\
&\qquad + \delta^2\mu\ \frac{\widetilde{\phi}_b(\lambda)}{{E_b(\lambda)-E_{\star}}}\ .
\label{tQ-def} \end{align}
To construct the mapping $(\eta_{\rm near},\mu,\delta)\mapsto\eta_{\rm far}[\eta_{\rm near},\mu,\delta]$ and obtain the conclusions of Proposition \ref{fixed-pt} it is convenient to solve \eqref{fixed-pt2} via the contraction mapping principle.
Thus we need to bound the operator norm of $\mathcal{Q}_\delta$ and we find from \eqref{tQ-def}
that $\mathcal{Q}_\delta$ maps $L^2_{{\rm far},\delta^\nu}$ to $H^2_{{\rm far},\delta^\nu}$
with norm bounded by $ {\rm constant}\times\mathfrak{e}(\delta)$, where
\begin{align}
\mathfrak{e}(\delta)\equiv \sup_{b=\pm}\ \ \sup_{|\delta|^\nu\le|\lambda|\le\frac12}\
\frac{|\delta|}{|E_b(\lambda)-E_{\star}|}\ +\ \sup_{ b\ge1,\ b\ne\pm}\ (\ 1+|b|\ ) \sup_{0\le|\lambda|\le\frac12}
\frac{|\delta|}{|E_b(\lambda)-E_{\star}|} .
\label{frak-e-def} \end{align}
The spectral no-fold condition hypothesis \eqref{no-fold-over-A}-\eqref{no-fold-over-b-A}
%
implies that
\begin{equation}
\mathfrak{e}(\delta) \lesssim\ \frac{|\delta|}{\omega(\delta^{\nu})\ c_1(V)}\ +\ \frac{|\delta|}{ c_2(V)} \ ,
\label{frak-e-bound}
\end{equation}
which tends to zero as $\delta$ tends to zero.
Hence, the contraction mapping principle can be applied on the ball
$B_{{\rm far},\delta^\nu}(\rho_\delta),\ \rho_\delta=\mathcal{O}({\delta^\frac12}/{\omega(\delta^\nu)})$.
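In outline (with constants suppressed): write \eqref{fixed-pt2} as $\eta_{\rm far}=\mathcal{Q}_\delta\,\eta_{\rm far}+f$, where $f=f[\eta_{\rm near},\mu,\delta]$ denotes the inhomogeneous (second) term on the right-hand side of \eqref{fixed-pt2}. The smallness $\|\mathcal{Q}_\delta\|\lesssim\mathfrak{e}(\delta)$ allows inversion by a Neumann series:
\begin{equation*}
\eta_{\rm far}\ =\ \left(I-\mathcal{Q}_\delta\right)^{-1} f\ =\ \sum_{j\ge0}\ \mathcal{Q}_\delta^{\,j} f,
\qquad
\left\|\eta_{\rm far}\right\|_{H^2_{\kpar=\bK\cdot\vtilde_1}}\ \le\ \frac{\left\|f\right\|_{H^2_{\kpar=\bK\cdot\vtilde_1}}}{1-C\,\mathfrak{e}(\delta)}\ ,
\end{equation*}
and the bounds on $f$, obtained via \eqref{no-fold-over-A}-\eqref{no-fold-over-b-A} together with estimates on $\widetilde{F}_b[\mu,\delta]$, yield the right-hand side of \eqref{eta-far-bound}.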
%
%
\item[(b)] We note that although $\Sigma$ is a two-dimensional region, since $\Sigma$ is unbounded in only one direction, estimates on $H^2_{k_{\parallel}}(\Sigma)$ have the same scaling behavior in the parameter $\delta$ as in the 1D study \cites{FLW-MAMS:15}.
\end{enumerate}
\end{remark}
\subsection{Analysis of the closed system for $\eta_{\rm near}$\label{subsec:near_freq}}
Substitution of $\eta_{\text{far}}[\eta_{\text{near}},\mu,\delta]$ into the system \eqref{near_cpt_1}-\eqref{near_cpt_2} yields a closed system for $(\eta_{\text{near}},\mu)$, which depends on the parameter $\delta\in[0,\delta_0)$.
In this section we show, by careful rescaling and expansion of terms, that the equation for $\eta_{\rm near}$ may be rewritten as a Dirac-type system. We then solve this system in Section \ref{analysis-blDirac}.
Recall the abbreviated notation: $E_b(\lambda)$ and $\Phi_b({\bf x};\lambda)$, introduced in \eqref{Eb_of_lambda}-\eqref{Phib_of_lambda}.
Since both the spectral support of $\eta_{\text{near}}$ (parametrized by ${\bf K}+\lambda{\bm{\mathfrak{K}}}_2$, with $|\lambda|\le\delta^\nu$) and the size of the domain wall perturbation, $\mathcal{O}(\delta)$,
tend to zero as $\delta\to0$, it is natural to rescale so as to obtain an order one limit.
We begin by introducing $\xi$, a scaling of the quasi-momentum parameter, $\lambda$,
and $\widehat{\eta}_{\pm,\rm near}\left(\xi\right)$, an expression for
$\widetilde{\eta}_{\pm,\rm near}(\lambda)$ as a standard Fourier transform on $\mathbb{R}$:
\begin{equation}
\label{amplitude-rescaled}
\widehat{\eta}_{\pm,\rm near}\left(\xi\right)\ \equiv\ \widetilde{\eta}_{\pm,\rm near}(\lambda),\quad {\rm where}\quad \xi\equiv \frac{\lambda}{\delta}.
\end{equation}
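Note the effect of \eqref{amplitude-rescaled} on norms: by the change of variables $\lambda=\delta\xi$,
\begin{equation*}
\int_{|\lambda|\le\delta^{\nu}}\left|\widetilde{\eta}_{\pm,\rm near}(\lambda)\right|^2 d\lambda\ =\ \delta\ \int_{|\xi|\le\delta^{\nu-1}}\left|\widehat{\eta}_{\pm,\rm near}(\xi)\right|^2 d\xi\ ,
\end{equation*}
so the two normalizations differ by an overall factor of $\delta^{\frac12}$.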
\noindent By Proposition \ref{directional-bloch}:
$E_{\pm}(\lambda)-E_{\star} =
\pm\abs{\lambda_\sharp}\ \abs{{\bm{\mathfrak{K}}}_2}\ \delta\xi + E_{2,\pm}(\delta\xi)\ (\delta\xi)^2$,
where $\abs{E_{2,\pm}(\delta\xi)} \lesssim 1$, for all $\xi$; see \eqref{EpmKlam}.
Substitution of this expansion and the rescaling \eqref{amplitude-rescaled} into \eqref{near_cpt_1}-\eqref{near_cpt_2}, and then canceling a factor of $\delta$ yields:
{\small
\begin{align}
&+|{\lambda_{\sharp}}|\abs{ {\bm{\mathfrak{K}}}_2}\ \xi\ \widehat{\eta}_{+,\rm near}(\xi)
+\chi(\abs{\xi}\leq\delta^{\nu-1})\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta {\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \nonumber\\
&\qquad=\chi(\abs{\xi}\leq\delta^{\nu-1}) \widetilde{F}_{+}[\mu,\delta](\delta\xi) + \delta\mu\ \widehat{\eta}_{+,{\rm
near}}(\xi) - \delta E_{2,+}(\delta\xi)\xi^2\widehat{\eta}_{+,\rm near}(\xi) \label{near5} \\
&\qquad\qquad - \chi(\abs{\xi}\leq\delta^{\nu-1})\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta {\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\eta_{\rm far}[\eta_{\rm near},\mu,\delta](\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} ,\nonumber\\
&\nonumber\\
&-|{\lambda_{\sharp}}|\abs{{\bm{\mathfrak{K}}}_2}\ \xi\ \widehat{\eta}_{-,\rm near}(\xi)
+\chi(\abs{\xi}\leq\delta^{\nu-1})\inner{\Phi_{-}(\cdot,\delta\xi),\kappa(\delta {\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \nonumber\\
&\qquad=\chi(\abs{\xi}\leq\delta^{\nu-1}) \widetilde{F}_{-}[\mu,\delta](\delta\xi) + \delta\mu\ \widehat{\eta}_{-,{\rm
near}}(\xi) -\delta E_{2,-}(\delta\xi)\xi^2\widehat{\eta}_{-,\rm near }(\xi) \label{near6} \\
&\qquad\qquad - \chi(\abs{\xi}\leq\delta^{\nu-1})\inner{\Phi_{-}(\cdot,\delta\xi),\kappa(\delta {\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\eta_{\rm far}[\eta_{\rm near},\mu,\delta](\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \nonumber .
\end{align}}
We next extract the dominant behavior, for $\delta$ small, of the inner products involving $\eta_{\rm near}$ by first expanding $\eta_{\rm near}$ in terms of its spectral components near energy $E_\star=E_\pm(\lambda=0)$ plus a correction. To this end we apply Proposition \ref{directional-bloch} to expand $p_\pm({\bf x},\lambda)$ for $\lambda=\delta\xi$ small:
\begin{equation*}
p_{\pm}({\bf x},\delta\xi) = P_{\pm}({\bf x})\ +\ \varphi_\pm({\bf x},\delta\xi),\ \
P_{\pm}({\bf x}) \equiv \frac{1}{\sqrt{2}} \Big[\frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{\mathfrak{z}_2}{|\mathfrak{z}_2|} P_1({\bf x}) \pm P_2({\bf x}) \Big]\ \ {\rm where}
\end{equation*}
\begin{equation}
\label{deltapbdd}
\abs{\varphi_\pm({\bf x},\delta\xi)} \leq \underset{{\bf x}\in\Sigma, ~ \abs{\omega}\leq\delta^{\nu}}{\sup}
\abs{\varphi_\pm({\bf x},\omega)} \leq \delta^{\nu},\ \ \abs{\xi}\leq\delta^{\nu-1}\ .
\end{equation}
Thus, using \eqref{eta-near} and that $\Phi_\pm({\bf x};\lambda) = e^{i({\bf K}+\lambda {\bm{\mathfrak{K}}}_2)\cdot{\bf x}}\ p_\pm({\bf x};\lambda)$ (see \eqref{p_pm-def}), we obtain
{\small
\begin{align}
\eta_{\text{near}}({\bf x}) &= \int_{\abs{\lambda}\leq\delta^{\nu}}
\Phi_{+}({\bf x},\lambda)\widetilde{\eta}_{+,\text{near}}(\lambda) d\lambda
+ \int_{\abs{\lambda}\leq\delta^{\nu}}
\Phi_{-}({\bf x},\lambda)\widetilde{\eta}_{-,\text{near}}(\lambda)d\lambda \nonumber\\
&= \int_{\abs{\lambda}\leq\delta^{\nu}}e^{i{\bf K}\cdot{\bf x}}e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}}
p_{+}({\bf x},\lambda)\widehat{\eta}_{+,\text{near}}\left(\frac{\lambda}{\delta}\right)d\lambda \nonumber \\
&\quad+ \int_{\abs{\lambda}\leq\delta^{\nu}}e^{i{\bf K}\cdot{\bf x}}e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}}
p_{-}({\bf x},\lambda)\widehat{\eta}_{-,\text{near}}\left(\frac{\lambda}{\delta}\right)d\lambda \nonumber\\
&= \delta e^{i{\bf K}\cdot{\bf x}} P_+({\bf x}) \int_{\abs{\xi}\leq\delta^{\nu-1}}
e^{i\delta\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\widehat{\eta}_{+,\text{near}}(\xi)d\xi
+ \delta e^{i{\bf K}\cdot{\bf x}} \rho_{+}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x})\nonumber\\
&\quad+ \delta e^{i{\bf K}\cdot{\bf x}} P_-({\bf x}) \int_{\abs{\xi}\leq\delta^{\nu-1}}
e^{i\delta\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\widehat{\eta}_{-,\text{near}}(\xi)d\xi
+ \delta e^{i{\bf K}\cdot{\bf x}} \rho_{-}({\bf x},\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x})\nonumber\\
&=\delta e^{i{\bf K}\cdot{\bf x}}\left[ P_+({\bf x})\ \eta_{+,\text{near}}(\delta {\bm{\mathfrak{K}}}_2\cdot {\bf x})
+ P_-({\bf x})\ \eta_{-,\text{near}}(\delta {\bm{\mathfrak{K}}}_2\cdot {\bf x}) + \sum_{b=\pm}\rho_{b}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x})\right] ,\label{nearinxi}\\
\label{rhodefn}
&{\rm where}\qquad\ \eta_{\pm,\text{near}}(\zeta) = \int_{\abs{\xi}\leq\delta^{\nu-1}}e^{i\xi \zeta}\ \widehat{\eta}_{\pm,\text{near}}(\xi)\ d\xi
\quad {\rm and}\quad \rho_{\pm}({\bf x},\zeta) = \int_{\abs{\xi}\leq\delta^{\nu-1}}e^{i\xi \zeta} \varphi_\pm({\bf x},\delta\xi)\
\widehat{\eta}_{\pm,\text{near}}(\xi)d\xi.
\end{align}
}
\noindent We now expand the inner product in \eqref{near5}; the corresponding term in \eqref{near6} is treated similarly. Substituting
\eqref{nearinxi} into the inner product in \eqref{near5} yields (using $\Phi_\pm({\bf x};{\bf k})=e^{i{\bf k}\cdot{\bf x}}p_\pm({\bf x};{\bf k})$)
\begin{align}
&\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
\eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \nonumber \\
&\equiv\ \inner{\Phi_{+}(\cdot,{\bf K}+\delta\xi{\bm{\mathfrak{K}}}_2),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
\eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2}\label{full_inner} \\
&=\ \delta \inner{e^{i\delta\xi {\bm{\mathfrak{K}}}_2\cdot}p_+(\cdot,\delta\xi),
P_+(\cdot)\
W(\cdot)\ \kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)\ {\eta}_{+,\rm near}(\delta {\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \label{inner1}\\
&\qquad+ \delta \inner{e^{i\delta\xi {\bm{\mathfrak{K}}}_2\cdot}p_{+}(\cdot,\delta\xi),
P_-(\cdot)\
W(\cdot)\ \kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)\ {\eta}_{-,\rm near}(\delta {\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \label{inner2}\\
&\qquad+ \delta \sum_{b=\pm}\ \inner{e^{i\delta\xi {\bm{\mathfrak{K}}}_2\cdot}\ p_{+}(\cdot,\delta\xi),
W(\cdot)\ \kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)\ \rho_{b}(\cdot,\delta {\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \ .\label{inner3}
\end{align}
The inner product terms in \eqref{inner1}-\eqref{inner3} are each of the form:
\begin{equation}
\label{G_inner_def}
\mathcal{G}(\delta; \xi)
\equiv \delta \int_\Sigma e^{-i\xi\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}} g({\bf x},\delta\xi)\Gamma({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) d{\bf x},\ \ {\rm where}
\end{equation}
\begin{enumerate}
\item [(IP1)] $g({\bf x} ,y)$ is a smooth function of $({\bf x},y)\in \mathbb{R}^2/\Lambda_h\times\mathbb{R}$ and
\item [(IP2)] ${\bf x}\mapsto\Gamma({\bf x},\zeta)$ is $\Lambda_h-$ periodic and $H^2(\Omega)$ with values in $L^2(\mathbb{R}_\zeta)$, {\it i.e.}
\begin{align}
\label{Gamma-conditions1}
&\Gamma({\bf x}+{\bf v},\zeta)\ =\ \Gamma({\bf x},\zeta), \quad \text{for all } {\bf v}\in\Lambda_h, \\
&\sum_{j=0}^2\ \sum_{{\bf |c}|=j}\int_\Omega\ \left\|\partial_{\bf x}^{\bf c}\Gamma({\bf x},\zeta)\right\|_{L^2(\mathbb{R}_\zeta)}^2\ d{\bf x}\ <\ \infty.
\label{Gamma-conditions2}
\end{align}
We denote this Hilbert space of functions by $\mathbb{H}^2$ with norm-squared, $\|\cdot\|_{\mathbb{H}^2}^2$, given in \eqref{Gamma-conditions2}.
It is easy to check that conditions \eqref{Gamma-conditions1}-\eqref{Gamma-conditions2} are satisfied for the cases $\Gamma=\Gamma(\zeta)=\kappa(\zeta)\eta_{\pm,{\rm near}}(\zeta)$ and $\Gamma=\Gamma({\bf x},\zeta)=\kappa(\zeta)\rho_{\pm}({\bf x},\zeta)$, where $\rho_\pm$ is defined in \eqref{rhodefn}.
\end{enumerate}
\noindent To expand expressions of the form $\mathcal{G}(\delta;\xi)$, we use:
\begin{lemma}
\label{poisson_exp}
Let $g({\bf x} ,y)$ and $\Gamma({\bf x},\zeta)$ satisfy conditions (IP1) and (IP2), respectively.
Denote by $\widehat{\Gamma}({\bf x},\omega)$ the Fourier transform of $\Gamma({\bf x},\zeta)$ with respect to the $\zeta-$ variable,
given by
\begin{equation}
\widehat{\Gamma}({\bf x},\omega)\ \equiv\ \lim_{N\uparrow\infty}\ \frac{1}{2\pi}\int_{|\zeta|\le N}e^{- i\omega \zeta}\Gamma({\bf x},\zeta) d\zeta ,
\label{Gammahat-def}
\end{equation}
where the limit is taken in $L^2(\Omega\times\mathbb{R}_\omega;d{\bf x} d\omega)$. Then,
\begin{equation}
\label{poisson_app}
\mathcal{G}(\delta; \xi) = \
\sum_{n\in\mathbb{Z}} \int_\Omega e^{in {\bm{\mathfrak{K}}}_2\cdot{\bf x}} \widehat{\Gamma}\left({\bf x},\frac{n}{\delta}+\xi\right) g({\bf x},\delta\xi) d{\bf x},
\end{equation}
with equality holding in $L^2_{\rm loc}([-\xi_{\rm max},\xi_{\rm max}];d\xi)$, for any fixed $\xi_{\rm max}>0$.
\end{lemma}
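Isolating the $n=0$ term in \eqref{poisson_app} displays the structure to be exploited below:
\begin{equation*}
\mathcal{G}(\delta;\xi)\ =\ \int_\Omega \widehat{\Gamma}\left({\bf x},\xi\right) g({\bf x},\delta\xi)\ d{\bf x}\ +\ \sum_{n\ne0}\ \int_\Omega e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \widehat{\Gamma}\left({\bf x},\frac{n}{\delta}+\xi\right) g({\bf x},\delta\xi)\ d{\bf x}\ ;
\end{equation*}
the $n\ne0$ terms sample $\widehat{\Gamma}$ at frequencies of size at least $\frac{1}{\delta}-|\xi|$, and are therefore lower order whenever $\widehat{\Gamma}({\bf x},\cdot)$ decays.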
\noindent We adapt the proof of Lemma 6.5 in \cites{FLW-MAMS:15}, given there in the 1D setting. We require the following variant of the Poisson summation formula in $L^2_{\rm loc}$.
\begin{theorem}\label{psum-L2}
Let $\Gamma({\bf x},\zeta)$ satisfy (IP2). Denote by $\widehat{\Gamma}({\bf x},\omega)$ the Fourier transform of $\Gamma({\bf x},\zeta)$ with respect to the variable, $\zeta$; see \eqref{Gammahat-def}.
Fix an arbitrary $y_{\rm max}>0$, and introduce the parameterization of the cylinder $\Sigma$: ${\bf x}=\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, \ 0\le\tau_1\le1,\ \tau_2\in\mathbb{R}$. Then,
\begin{align*}
&\sum_{n\in\mathbb{Z}} e^{- iy(\tau_2+n)}\Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\tau_2+n)
= 2\pi\ \sum_{n\in\mathbb{Z}} e^{2\pi i n \tau_2}\widehat{\Gamma}\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,2\pi n+y \right)
\end{align*}
in $L^2\left([0,1]^2\times[-y_{\rm max},y_{\rm max}];d\tau_1d\tau_2\cdot dy\right)$.
\end{theorem}
The 1D analogue of Theorem \ref{psum-L2} was proved in Appendix A of \cites{FLW-MAMS:15}.
Since the proof is very similar, we omit it.
We also require
\begin{lemma}\label{interchange}
Let $F({\bf x},y)$ and $F_N({\bf x},y),\ N=1,2,\dots$, belong to $L^2(\Sigma\times[-y_{\rm max},y_{\rm max}];d{\bf x} dy)$. Assume that
\begin{equation*}
\left\| F_N-F \right\|_{L^2(\Sigma\times[-y_{\rm max},y_{\rm max}];d{\bf x} dy)}\ \to\ 0,\ \ {\rm as}\ \ N\to\infty\ .
\end{equation*}
Let $G\in L^2([-y_{\rm max}, y_{\rm max}];d y)$. Then, in the $L^2(\Sigma;d{\bf x})$ sense, we have:
\begin{align*}
\lim_{N\to\infty}\int^{y_{\rm max}}_{-y_{\rm max}}F_N({\bf x},y)G(y)\ dy\ &=\
\int^{y_{\rm max}}_{-y_{\rm max}}\lim_{N\to\infty}F_N({\bf x},y)G(y)\ dy\ \\
&=\ \int^{y_{\rm max}}_{-y_{\rm max}} F({\bf x},y)G(y)\ dy .
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{interchange}]
Square the difference, apply Cauchy-Schwarz and then integrate $d{\bf x}$ over $\Sigma$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{poisson_exp}]
Recall the parameterization of the cylinder, $\Sigma$:
\begin{align*}
{\bf x}\in\Sigma:\qquad &{\bf x} = \tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\ \ 0\le\tau_1\le1,\ \tau_2\in\mathbb{R} \ , \quad
{\bm{\mathfrak{K}}}_1\cdot{\bf x} = 2\pi\tau_1 , \ {\bm{\mathfrak{K}}}_2\cdot{\bf x} = 2\pi\tau_2 , \nonumber \\
&dx_1\ dx_2\ = \left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right|\ d\tau_1\ d\tau_2\ \equiv\ |\Omega|\ d\tau_1\ d\tau_2 .
\end{align*}
Using that $g({\bf x}, \delta\xi) = g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, \delta \xi)$ and $\Gamma({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) = \Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta \tau_2)$ are both appropriately $1$-periodic, we expand $\mathcal{G}(\delta;\xi)$ defined in \eqref{G_inner_def}. By Lemma \ref{interchange}:
{\footnotesize{
\begin{align}
\mathcal{G}(\delta;\xi)\ &=
\delta\ |\Omega|\ \int_0^1d\tau_1\ \int_{-\infty}^{\infty} \
e^{-2\pi i\delta\xi\tau_2}g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)\ \Gamma\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta\tau_2\right)\ d\tau_2\nonumber\\
&= \delta\ |\Omega|\ \int_0^1d\tau_1\ \lim_{N\to\infty} \sum_{n=-N}^{N}\int_n^{n+1}\ e^{-2\pi i\delta\xi \tau_2}g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)\ \Gamma\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta \tau_2\right)\ d\tau_2\nonumber\\
&= \delta\ |\Omega|\ \int_0^1d\tau_1\ \lim_{N\to\infty} \sum_{n=-N}^{N} \int_0^1\ e^{-2\pi i\delta\xi (\tau_2+n)}g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)\ \Gamma\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta(\tau_2+n)\right)\ d\tau_2\nonumber\\
&= \delta\ |\Omega|\ \int_0^1d\tau_1\ \int_0^1\ g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)\
\Big[\ \sum_{n\in\mathbb{Z}}\ e^{-2\pi i\xi\delta(\tau_2+n)}
\ \ \Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta(\tau_2+n))\ \Big]\ d\tau_2 \ . \nonumber
\end{align}
}}
\noindent By Theorem \ref{psum-L2}, applied to the rescaled function $\zeta\mapsto\Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta\zeta)$ with $y=2\pi\delta\xi$ (the rescaling in $\zeta$ produces the factors $1/\delta$ and $n/\delta$ below), we have
\begin{align*}
\sum_{n\in\mathbb{Z}}\ e^{-2\pi i\delta\xi(\tau_2+n)}\ \Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2
, 2\pi\delta (\tau_2+n))
&=\ \frac{1}{\delta} \sum_{n\in\mathbb{Z}} e^{2\pi i n \tau_2} \widehat{\Gamma}\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2 {\bm{\mathfrak{v}}}_2, \frac{n}{\delta}+\xi\right) ,
\end{align*}
with equality holding in $L^2\left( [0,1]^2\times [-\xi_{\rm max}, \xi_{\rm max}]; d\tau_1 d\tau_2\cdot d\xi\right)$. Again using Lemma \ref{interchange} we may interchange the sum and integral to obtain\begin{align}
\mathcal{G}(\delta;\xi) & = \frac{\delta}{\delta}
|\Omega| \int_0^1d\tau_1 \int_0^1 g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)
\sum_{n\in\mathbb{Z}} e^{2\pi i n \tau_2} \widehat{\Gamma}\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, \frac{n}{\delta}+\xi\right) d\tau_2 \nonumber\\
&= \sum_{n\in\mathbb{Z}} \int_\Omega e^{i n {\bm{\mathfrak{K}}}_2\cdot{\bf x}} \widehat{\Gamma}\left({\bf x}, \frac{n}{\delta}+\xi\right) g({\bf x},\delta\xi) d{\bf x} \ .
\end{align}
This completes the proof of Lemma \ref{poisson_exp}.
\end{proof}
We next apply Lemma \ref{poisson_exp} to each of the inner products \eqref{inner1}-\eqref{inner3}.
\noindent \underline{\it Expansion of inner product \eqref{inner1}:}
\noindent Let
$g({\bf x},\delta\xi)=\overline{p_+({\bf x},\delta\xi)}P_+({\bf x})W({\bf x})$
and $\Gamma({\bf x},\zeta)=\kappa(\zeta) \eta_{+,\rm near}(\zeta)$.\ By Lemma \ref{poisson_exp},
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi), P_+(\cdot)
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) {\eta}_{+,\rm near}(\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \\
&=
\sum_{n\in\mathbb{Z}}\int_\Omega e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\mathcal{F}_{\zeta}[\kappa\eta_{+,\rm near}]\left(\frac{n}{\delta}+\xi \right)
\overline{p_{+}({\bf x},\delta\xi)}P_+({\bf x}) W({\bf x})d{\bf x}.
\end{align*}
Since $p_\pm({\bf x},\lambda=\delta\xi)=P_\pm({\bf x})+\varphi_\pm({\bf x},\delta\xi)$, where $\varphi_\pm({\bf x},\delta\xi)$ satisfies the
bound \eqref{deltapbdd}, we have
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi), P_+(\cdot)
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) {\eta}_{+,\rm near}(\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \\
&\equiv I_+^1(\xi;\eta_{+,\rm near}) + I_+^2(\xi;\eta_{+,\rm near}),
\end{align*}
where
\begin{align}
I_+^1(\xi;\eta_{+,\rm near}) &=
\sum_{n\in\mathbb{Z}} \mathcal{F}_{\zeta}[\kappa\eta_{+,\rm near}]\left(\frac{n}{\delta}+\xi\right)
\int_\Omega e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \abs{P_{+}({\bf x})}^2 W({\bf x})d{\bf x} \label{I_1} , \\
I_+^2(\xi;\eta_{+,\rm near}) &= \sum_{n\in\mathbb{Z}}
\mathcal{F}_{\zeta}[\kappa\eta_{+,\rm near}]\left(\frac{n}{\delta}+\xi\right) \int_\Omega
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{\varphi_+({\bf x},\delta\xi)}P_{+}({\bf x}) W({\bf x})d{\bf x} . \nonumber
\end{align}
From Proposition \ref{inner-prods-W} and Assumption (W3) we have
\begin{equation}
\label{zeroth-terms}
\int_\Omega \abs{P_{+}({\bf x})}^2 W({\bf x})d{\bf x} = 0 \quad \text{and} \quad
\int_\Omega \overline{P_{+}({\bf x})}P_{-}({\bf x}) W({\bf x})d{\bf x} = {\vartheta_{\sharp}} \neq 0 .
\end{equation}
Therefore, the $n=0$ term in the summation of $I_+^1(\xi;\eta_{+,\rm near})$ in \eqref{I_1} is zero and we may write:
\begin{equation*}
I_+^1(\xi;\eta_{+,\rm near}) =\
\sum_{\abs{n}\geq1} \mathcal{F}_{\zeta}[\kappa\eta_{+,\rm near}]\left(\frac{n}{\delta}+\xi\right)
\int_\Omega e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \abs{P_{+}({\bf x})}^2 W({\bf x})d{\bf x} .
\end{equation*}
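A rough indication that the surviving terms are lower order (a standard estimate, recorded here for orientation and not part of the formal argument at this stage): if $f\equiv\kappa\,\eta_{+,\rm near}$ lies in $H^s(\mathbb{R}_\zeta)$ with $s>1$, then for $|n|\ge1$,
\begin{equation*}
\int_{|\xi|\le\delta^{\nu-1}}\left|\mathcal{F}_\zeta[f]\left(\frac{n}{\delta}+\xi\right)\right|^2 d\xi\ \le\ \sup_{|\omega|\ge\frac{|n|}{2\delta}}\left(1+\omega^2\right)^{-s}\ \int_{\mathbb{R}}\left(1+\omega^2\right)^{s}\left|\mathcal{F}_\zeta[f](\omega)\right|^2 d\omega\ \lesssim\ \left(\frac{\delta}{|n|}\right)^{2s}\left\|f\right\|^2_{H^s},
\end{equation*}
since $\frac{|n|}{\delta}-\delta^{\nu-1}\ge\frac{|n|}{2\delta}$ once $\delta^\nu\le\frac12$. Summing over $|n|\ge1$ then gives an $L^2_\xi$ contribution of size $\mathcal{O}(\delta^{s})$.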
\noindent \underline{\it Expansion of the inner product \eqref{inner2}:}
\noindent Similarly, with $g({\bf x},\delta\xi)=\overline{p_+({\bf x},\delta\xi)}
P_-({\bf x})W({\bf x})$ and $\Gamma({\bf x},\zeta) = \kappa(\zeta) \eta_{-,\rm near}(\zeta)$, we have
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi), P_-(\cdot)
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) {\eta}_{-,\rm near}(\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \\
& \equiv \tilde{I_+^3}(\xi;\eta_{-,\rm near}) + I_+^4(\xi;\eta_{-,\rm near}),
\end{align*}
where (noting, by \eqref{zeroth-terms}, that the $n=0$ contribution is nonzero)
\begin{align*}
\tilde{I_+^3}(\xi;\eta_{-,\rm near}) &= {\vartheta_{\sharp}} \widehat{\kappa\eta}_{-,\rm near}(\xi) \ +\ {I_+^3}(\xi;\eta_{-,\rm near}),\ {\rm where}\nonumber\\
{I_+^3}(\xi;\eta_{-,\rm near}) &\equiv
\sum_{\abs{n}\geq1} \mathcal{F}_{\zeta}[\kappa\eta_{-,\rm near}]\left(\frac{n}{\delta}+\xi\right)
\int_\Omega e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{P_{+}({\bf x})} P_{-}({\bf x}) W({\bf x})d{\bf x} ,\ \ {\rm and} \\
I_+^4(\xi;\eta_{-,\rm near}) &= \sum_{n\in\mathbb{Z}}
\mathcal{F}_{\zeta}[\kappa\eta_{-,\rm near}]\left(\frac{n}{\delta}+\xi\right) \int_\Omega
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{\varphi_+({\bf x},\delta\xi)}P_{-}({\bf x}) W({\bf x})d{\bf x} .
\end{align*}
\noindent \underline{\it Expansion of inner products \eqref{inner3}:}
\noindent Consider the $b=+$ term in \eqref{inner3}.
Let $g({\bf x},\delta\xi)=\overline{p_+({\bf x},\delta\xi)}W({\bf x})$ and $\Gamma({\bf x},\zeta) = \kappa(\zeta) \rho_+({\bf x},\zeta)$.
By Lemma \ref{poisson_exp} and the expansion of $p_+({\bf x},\delta\xi)$ about $P_+({\bf x})$ in \eqref{deltapbdd} we have:
%
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi),
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) \rho_+(\cdot,\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \equiv
I_+^5(\xi;\eta_{+,\rm near}) + I_+^6(\xi;\eta_{+,\rm near}),
\end{align*}
where
\begin{align*}
I_+^5(\xi;\eta_{+,\rm near}) &=
\sum_{n\in\mathbb{Z}} \int_\Omega \mathcal{F}_{\zeta}[\kappa\rho_+]\left({\bf x},\frac{n}{\delta}+\xi\right)
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{P_{+}({\bf x})} W({\bf x})d{\bf x} , \\
I_+^6(\xi;\eta_{+,\rm near}) &= \sum_{n\in\mathbb{Z}} \int_\Omega
\mathcal{F}_{\zeta}[\kappa\rho_+]\left({\bf x},\frac{n}{\delta}+\xi\right)
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{\varphi_+({\bf x},\delta\xi)} W({\bf x})d{\bf x} .
\end{align*}
\noindent For the $b=-$ term in \eqref{inner3} we have
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi),
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) \rho_-(\cdot,\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \equiv
I_+^7(\xi;\eta_{-,\rm near}) + I_+^8(\xi;\eta_{-,\rm near}),
\end{align*}
where
\begin{align*}
I_+^7(\xi;\eta_{-,\rm near}) &=
\sum_{n\in\mathbb{Z}} \int_\Omega \mathcal{F}_{\zeta}[\kappa\rho_-]\left({\bf x},\frac{n}{\delta}+\xi\right)
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{P_{+}({\bf x})} W({\bf x})d{\bf x} , \\
I_+^8(\xi;\eta_{-,\rm near}) &= \sum_{n\in\mathbb{Z}} \int_\Omega
\mathcal{F}_{\zeta}[\kappa\rho_-]\left({\bf x},\frac{n}{\delta}+\xi\right)
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{\varphi_+({\bf x},\delta\xi)} W({\bf x})d{\bf x} .
\end{align*}
Assembling the above expansions, we find that the full inner product, \eqref{full_inner}, may be expressed as:
\begin{equation}
\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
\eta_{\rm near}(\cdot)}_{L^2(\Sigma)} = {\vartheta_{\sharp}}\widehat{\kappa\eta}_{{\rm near},-}(\xi) +
\sum_{j=1}^8I_{+}^j(\xi;\eta_{\rm near}) . \label{full-ipplus}
\end{equation}
A similar calculation yields:
\begin{equation}
\inner{\Phi_{-}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
\eta_{\rm near}(\cdot)}_{L^2(\Sigma)} = {\vartheta_{\sharp}}\widehat{\kappa\eta}_{{\rm near},+}(\xi) +
\sum_{j=1}^8I_{-}^j(\xi;\eta_{\rm near}),\ \label{full-ipminus}
\end{equation}
where the terms $I_{-}^j(\xi;\eta_{\rm near})$ are defined analogously to $I_{+}^j(\xi;\eta_{\rm near})$. We now substitute our results \eqref{full-ipplus}-\eqref{full-ipminus}, \eqref{Fb-def}-\eqref{Fdef} and \eqref{eta_far_affine} into \eqref{near5}-\eqref{near6} to obtain the following:
\begin{proposition}
\label{near_freq_compact}
Let $\widehat{\beta}(\xi) = (\widehat{\eta}_{+,\rm near}(\xi) , \widehat{\eta}_{-,\rm near}(\xi))^T$.
Equations \eqref{near5}-\eqref{near6}, the closed system governing the near energy components, $\eta_{\rm near}$, of the corrector, $\eta$, are of the form:
\begin{equation}
\label{compacterroreqn}
\left(\widehat{\mathcal{D}}^{\delta}+\widehat{\mathcal{L}}^{\delta}(\mu) -\delta \mu\right)\widehat{\beta}(\xi) =
\mu\widehat{\mathcal{M}}(\xi;\delta) + \widehat{\mathcal{N}}(\xi;\delta) .
\end{equation}
Here, $\widehat{\mathcal{D}}^{\delta}$ denotes the band-limited Dirac operator defined by
\begin{equation}
\widehat{\mathcal{D}}^{\delta}\widehat{\beta}(\xi)\ \equiv\ |{\lambda_{\sharp}}|\ |{\bm{\mathfrak{K}}}_2|\ \sigma_3\ \xi\ \widehat{\beta}(\xi)\
+\ {\vartheta_{\sharp}}\ \chi\left(\abs{\xi}\leq\delta^{\nu-1}\right)\ \sigma_1\ \widehat{\kappa\beta}(\xi).
\label{bl-dirac-op}
\end{equation}
The linear operator, $\widehat{\mathcal{L}}^{\delta}(\mu)$, acting on $\widehat{\beta}$, and the source terms $\widehat{\mathcal{M}}(\xi;\delta)$ and $\widehat{\mathcal{N}}(\xi;\delta)$ are defined by:
{\small
\begin{equation}
\label{L_op}
\widehat{\mathcal{L}}^{\delta}(\mu)\widehat{\beta}(\xi) \equiv
\chi\left(\abs{\xi}\leq\delta^{\nu-1}\right) \sum_{j=1}^3 \widehat{\mathcal{L}}^{\delta}_j(\mu) \widehat{\beta}(\xi) , \quad \text{where}
\end{equation}
\begin{align*}
\widehat{\mathcal{L}}^{\delta}_1(\mu) \widehat{\beta}(\xi) & \equiv \delta \xi^2
\begin{pmatrix}
E_{2,+}(\delta\xi)\ \widehat{\eta}_{+,\rm near}(\xi) \\E_{2,-}(\delta\xi)\ \widehat{\eta}_{-,\rm near}(\xi)
\end{pmatrix} , \quad
\widehat{\mathcal{L}}^{\delta}_2(\mu) \widehat{\beta}(\xi) \equiv
\sum_{j=1}^8
\begin{pmatrix}
I_{+}^j(\xi;\widehat{\eta}_{\pm,\rm near}(\xi))\\ I_{-}^j(\xi;\widehat{\eta}_{\pm,\rm near}(\xi))
\end{pmatrix} , \\
\widehat{\mathcal{L}}^{\delta}_3(\mu) \widehat{\beta}(\xi) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
[A\eta_{\rm near}](\cdot;\mu,\delta)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \\
\inner{\Phi_{-}(\cdot,\delta\xi),
\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)[A\eta_{\rm near}](\cdot;\mu,\delta)}_{L_{\kpar=\bK\cdot\vtilde_1}^2}
\end{pmatrix},
\end{align*}
\begin{equation}
\label{M_op}
\widehat{\mathcal{M}}(\xi;\delta) \equiv
\chi\left(\abs{\xi}\leq\delta^{\nu-1}\right) \sum_{j=1}^3\widehat{\mathcal{M}}_j(\xi;\delta) , \quad \text{where (inner products over ${L_{\kpar=\bK\cdot\vtilde_1}^2}$)}
\end{equation}
\begin{align*}
\widehat{\mathcal{M}}_1(\xi;\delta) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}(\cdot,\delta\xi),\psi^{(0)}(\cdot,\delta\cdot)} \\
\inner{\Phi_{-}(\cdot,\delta\xi),\psi^{(0)}(\cdot,\delta\cdot)}
\end{pmatrix} , \quad
\widehat{\mathcal{M}}_2(\xi;\delta) \equiv
\delta
\begin{pmatrix}
\inner{\Phi_{+}(\cdot,\delta\xi),\psi^{(1)}_p(\cdot,\delta\cdot)}\\
\inner{\Phi_{-}(\cdot,\delta\xi),\psi^{(1)}_p(\cdot,\delta\cdot)}
\end{pmatrix} , \nonumber \\
\widehat{\mathcal{M}}_3(\xi;\delta) & \equiv -
\begin{pmatrix}
\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)B(\cdot;\delta)}\\
\inner{\Phi_{-}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)B(\cdot;\delta)}
\end{pmatrix}, \nonumber
\end{align*}
\begin{equation}
\label{N_op}
\widehat{\mathcal{N}}(\xi;\delta) \equiv
\chi\left(\abs{\xi}\leq\delta^{\nu-1}\right) \sum_{j=1}^4\widehat{\mathcal{N}}_j(\xi;\delta) , \quad \text{where (inner products over ${L_{\kpar=\bK\cdot\vtilde_1}^2}$)}
\end{equation}
\begin{align*}
\widehat{\mathcal{N}}_1(\xi;\delta) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi), \left(2{\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\ \partial_\zeta-\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})\right)\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})} \\
\inner{\Phi_{-}({\bf x},\delta\xi), \left(2{\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\ \partial_\zeta-\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})\right)\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}
\end{pmatrix} , \\
\widehat{\mathcal{N}}_2(\xi;\delta) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi),|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi^{(0)}({\bf x},\zeta)\Big|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x}}}\\
\inner{\Phi_{-}({\bf x},\delta\xi),|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi^{(0)}({\bf x},\zeta)\Big|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot {\bf x}}}
\end{pmatrix}, \\
\widehat{\mathcal{N}}_3(\xi;\delta) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi),|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi_p^{(1)}({\bf x},\zeta)\Big|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x}}} \\
\inner{\Phi_{-}({\bf x},\delta\xi),|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi_p^{(1)}({\bf x},\zeta)\Big|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot {\bf x}}}
\end{pmatrix} , \nonumber \\
\widehat{\mathcal{N}}_4(\xi;\delta) & \equiv -
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})C({\bf x};\delta)} \\
\inner{\Phi_{-}({\bf x},\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})C({\bf x};\delta)}
\end{pmatrix}. \nonumber
\end{align*}
}
\end{proposition}
We conclude this section with the assertion that from an appropriate solution
$\left(\widehat{\beta}^\delta(\xi),\mu(\delta)\right)$
of the band-limited Dirac system \eqref{compacterroreqn} one can construct a bound state $\left(\Psi^\delta,E^\delta\right)$ of the Schr\"odinger eigenvalue
problem \eqref{EVP_2}. \
We say $f\in L^{2,1}(\mathbb{R})$ if $\|f\|_{L^{2,1}(\mathbb{R})}^2\ \equiv\ \int (1+|\xi|^2)\,|f(\xi)|^2\,d\xi<\infty$.
\begin{proposition}\label{needtoshow}
Suppose, for $0<\delta<\delta_0$, the band-limited Dirac system \eqref{compacterroreqn} has a solution
$\left(\widehat{\beta}^\delta(\xi),\mu(\delta)\right)$, $\widehat{\beta}^\delta=(\widehat{\beta}^\delta_+,\widehat{\beta}_-^\delta)^T$, where $\text{supp}\widehat{\beta}^\delta\subset \{|\xi|\le\delta^{\nu-1}\}$, satisfying:
\begin{align*}
&\norm{\widehat{\beta}^\delta(\cdot;\mu,\delta)}_{L^{2,1}(\mathbb{R})} \lesssim \delta^{-1},\ 0<\delta<\delta_0
\quad (\textrm{to be verified in Proposition~\ref{solve4beta}}), \\
&\mu(\delta) \text{ bounded and } \mu(\delta) - \mu_0 \to 0 \text{ as } \delta\to0 \quad
(\textrm{to be verified in Proposition~\ref{proposition3}}).
\end{align*}
Define
\begin{equation}
\widehat{\eta}^{\delta}_{\rm near,+}(\xi) = \widehat{\beta}^\delta_+(\xi),\ \ \ \widehat{\eta}^{\delta}_{\rm near,-}(\xi)=\widehat{\beta}^\delta_-(\xi) ,
\label{hat-eta}\end{equation}
and construct $\eta^\delta\equiv \eta^\delta_{\rm near} + \eta^\delta_{\rm far}$ as follows:
\begin{align}
\eta^\delta_{\rm near}({\bf x})&= \sum_{b=\pm}\int_{|\lambda|\le \delta^\nu} \widehat{\eta}^\delta_{{\rm
near},b}\left(\frac{\lambda}{\delta}\right) \Phi_b({\bf x};\lambda) d\lambda , \label{eta_def_beta1} \\
\widetilde{\eta}^\delta_{{\rm far},b}(\lambda)&=
\widetilde{\eta}_{\rm far,b}[\eta_{\rm near},\mu,\delta](\lambda),\ \ b\ge1 ;
\quad \textrm{(see Proposition \ref{fixed-pt})} , \nonumber \\
\eta^\delta_{\rm far}({\bf x})&= \sum_{b=\pm} \int_{\delta^\nu \le |\lambda |\le 1/2 }
\widetilde{\eta}^\delta_{{\rm far},b}\left(\lambda\right) \Phi_b({\bf x};\lambda) d\lambda
+ \sum_{b\ne\pm} \int_{|\lambda| \leq 1/2}
\widetilde{\eta}^\delta_{{\rm far},b}\left(\lambda\right) \Phi_b({\bf x};\lambda) d\lambda . \nonumber \\
\eta^\delta({\bf x})&\equiv \eta^\delta_{\rm near}({\bf x}) + \eta^\delta_{\rm far}({\bf x}),\ \
E^\delta\equiv E_\star+\delta^2\mu(\delta),\ \
0<\delta<\delta_0. \nonumber
\end{align}
Then, for all $0<|\delta|<\delta_0$, the following holds:
\begin{enumerate}
\item[(a)] $ \eta^\delta({\bf x})\in H_{\kpar=\bK\cdot\vtilde_1}^{2}(\Sigma)$.
\item[(b)] $\left(\eta^\delta,\mu(\delta)\right)$ solves the corrector equation \eqref{corrector-eqn1}.
\item[(c)] Theorem \ref{thm-edgestate} holds. The pair $(\Psi^\delta,E^\delta)$, defined by (see also
\eqref{eta-def}-\eqref{mu-def})
\begin{equation}
\label{main_result_ansatz1}
\begin{split}
\Psi^\delta({\bf x}) &= \psi^{(0)}({\bf x},{\bf X})+\delta\psi^{(1)}_p({\bf x},{\bf X})+\delta\eta^\delta({\bf x}),\ \ {\bf X}=\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x},\\
E^\delta &= E_\star+\delta^2 \mu_0 +o(\delta^2) ,
\end{split}
\end{equation}
is a solution of the eigenvalue problem \eqref{EVP_2} with corrector estimates asserted in the statement of Theorem \ref{thm-edgestate}.
\end{enumerate}
\end{proposition}
To prove Proposition \ref{needtoshow} we use the following lemma.
\begin{lemma}
\label{beta_vs_eta}
There exists a $\delta_0>0$ such that, for all $0<\delta<\delta_0$, the following holds:
Assume $\beta \in L^2(\mathbb{R})$ and let $\eta^\delta_{\rm near}({\bf x})$ be defined by \eqref{hat-eta}-\eqref{eta_def_beta1}.
Then,
\begin{equation*}
\norm{\eta^\delta_{\rm near}}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \lesssim \delta^{1/2} \norm{\beta}_{L^2(\mathbb{R})}.
\end{equation*}
\end{lemma}
\noindent The proof of Lemma \ref{beta_vs_eta} parallels that of Lemma 6.9 in \cites{FLW-MAMS:15}, and is not reproduced here.
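For orientation, the $\delta^{1/2}$ scaling in Lemma \ref{beta_vs_eta} can be seen, at the level of the $L^2$ norm, from the change of variables $\lambda=\delta\xi$ in \eqref{eta_def_beta1}; using (approximate) orthonormality of the family $\lambda\mapsto\Phi_b(\cdot;\lambda)$, one has, schematically,
\begin{equation*}
\norm{\eta^\delta_{\rm near}}_{L^2}^2 \,\approx\, \sum_{b=\pm}\int_{|\lambda|\le\delta^\nu} \abs{\widehat{\eta}^\delta_{{\rm near},b}\left(\frac{\lambda}{\delta}\right)}^2 d\lambda
\,=\, \delta \sum_{b=\pm}\int_{|\xi|\le\delta^{\nu-1}} \abs{\widehat{\beta}^\delta_b(\xi)}^2 d\xi
\,\le\, \delta\, \norm{\beta}_{L^2(\mathbb{R})}^2 .
\end{equation*}
The full $H^2$ bound requires, in addition, control of the quasi-momentum weights, as in \cites{FLW-MAMS:15}.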
\begin{proof}[Proof of Proposition \ref{needtoshow}]
From $\widehat{\beta}$ we construct
$\eta^\delta_{\rm near}$, such that:
$ \norm{\eta_{\rm near}}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \lesssim \delta^{1/2} \norm{\beta}_{L^2(\mathbb{R})}$ (Lemma
\ref{beta_vs_eta}).
Next, part 2 of Proposition \ref{fixed-pt}, \eqref{eta-far-bound}, gives a bound on
$\eta_{\rm far}$:
$\norm{\eta_{\rm far}[\eta_{\rm near};\mu,\delta]}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \le\ C''\left(\
\frac{\delta}{\omega(\delta^\nu)}\norm{\eta_{\rm near}}_{L_{\kpar=\bK\cdot\vtilde_1}^2}+\frac{\delta^\frac12}{\omega(\delta^\nu)} \right)$.
These two bounds give the desired $H_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)$ bound on $\eta^{\delta}$.
Note that all steps in our derivation of the band-limited Dirac
system \eqref{compacterroreqn} are reversible, in particular our application of the Poisson summation formula in
$L^2_{\rm loc}$. Therefore, $(\Psi^\delta,E^\delta)$, given by \eqref{main_result_ansatz1}, is an $H_{\kpar=\bK\cdot\vtilde_1}^2$ eigenpair of \eqref{EVP_2}.
\end{proof}
\noindent We focus then on constructing and estimating the solution of the band-limited Dirac system \eqref{compacterroreqn}.
\subsection{Analysis of the band-limited Dirac system}\label{analysis-blDirac}
The formal $\delta\downarrow0$ limit of the band-limited operator $\widehat{\mathcal{D}}^\delta$, displayed in \eqref{bl-dirac-op}, is a 1D Dirac operator $\mathcal{D}$, given on the Fourier transform side by:
\begin{equation}
\label{diraclimit}
\widehat{\mathcal{D}}\ \widehat{\beta}(\xi)\ \equiv\ |{\lambda_{\sharp}}|\ |{\bm{\mathfrak{K}}}_2|\ \sigma_3\ \xi \widehat{\beta}(\xi)\ +\ {\vartheta_{\sharp}}\ \sigma_1\ \widehat{\kappa\beta}(\xi).
\end{equation}
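For orientation (a standard Jackiw--Rebbi computation; sign and Fourier conventions here are illustrative), the operator \eqref{diraclimit} corresponds in physical space to
\begin{equation*}
\mathcal{D} \,=\, -i\,|{\lambda_{\sharp}}|\,|{\bm{\mathfrak{K}}}_2|\,\sigma_3\,\partial_\zeta \,+\, {\vartheta_{\sharp}}\,\kappa(\zeta)\,\sigma_1 .
\end{equation*}
If $\kappa$ is a domain wall and ${\vartheta_{\sharp}}/(|{\lambda_{\sharp}}||{\bm{\mathfrak{K}}}_2|)>0$, then $\mathcal{D}$ has the exponentially localized zero mode
\begin{equation*}
\alpha_{\star}(\zeta) \,=\, \exp\left(-\frac{{\vartheta_{\sharp}}}{|{\lambda_{\sharp}}|\,|{\bm{\mathfrak{K}}}_2|}\int_0^\zeta \kappa(s)\,ds\right)\begin{pmatrix}1\\ -i\end{pmatrix} ;
\end{equation*}
for ${\vartheta_{\sharp}}/(|{\lambda_{\sharp}}||{\bm{\mathfrak{K}}}_2|)<0$ the spinor $(1,i)^T$ is used instead. For $\kappa(\zeta)=\tanh\zeta$, the profile reduces to $(\cosh\zeta)^{-{\vartheta_{\sharp}}/(|{\lambda_{\sharp}}||{\bm{\mathfrak{K}}}_2|)}$ times the spinor.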
Our goal is to solve the system \eqref{compacterroreqn}.
We therefore rewrite the linear operator in equation \eqref{compacterroreqn} as a perturbation of $\widehat{\mathcal{D}}$ \eqref{diraclimit}, and seek $\widehat{\beta}$ as a solution to:
\begin{equation}
\label{erroreqnfactored}
\widehat{\mathcal{D}}\widehat{\beta}(\xi) + \left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}} +
\widehat{\mathcal{L}}^{\delta}(\mu) -\delta \mu\right)\widehat{\beta}(\xi) =
\mu\widehat{\mathcal{M}}(\xi;\delta)
+ \widehat{\mathcal{N}}(\xi;\delta).
\end{equation}
We next solve \eqref{erroreqnfactored} using a Lyapunov-Schmidt reduction strategy.
By Proposition \ref{zero-mode}, the null space of $\widehat{\mathcal{D}}$ is spanned by $\widehat{\alpha}_{\star}(\xi)$, the Fourier transform of the zero energy eigenstate \eqref{Fstar}. Since $\alpha_{\star}(\zeta)$ is Schwartz class, so too is $\widehat{\alpha}_{\star}(\xi)$ and $\widehat{\alpha}_{\star}(\xi)\in H^s(\mathbb{R})$ for any $s\ge1$.
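The existence of the zero mode asserted in Proposition \ref{zero-mode} can be illustrated numerically. The sketch below (not part of the proof) discretizes the 1D Dirac operator $-iv\sigma_3\partial_\zeta+\theta\,\kappa(\zeta)\sigma_1$, a physical-space realization of $\widehat{\mathcal{D}}$ up to Fourier conventions, with arbitrary values $v,\theta$ standing in for $|{\lambda_{\sharp}}||{\bm{\mathfrak{K}}}_2|$ and ${\vartheta_{\sharp}}$, and $\kappa=\tanh$ as a model domain wall; it checks that the Jackiw--Rebbi profile $(\cosh\zeta)^{-\theta/v}(1,-i)^T$ is an approximate zero mode:

```python
import numpy as np

# Finite-difference sketch (illustrative only): the 1D Dirac operator
#   D = -i v sigma_3 d/dzeta + theta kappa(zeta) sigma_1,  kappa = tanh,
# annihilates psi(zeta) = cosh(zeta)^(-theta/v) (1, -i)^T for v, theta > 0.
# v and theta are arbitrary stand-ins; the grid parameters are also arbitrary.
v, theta = 1.0, 0.8
N, L = 801, 40.0
z = np.linspace(-L / 2, L / 2, N)
h = z[1] - z[0]

# centered first-derivative matrix; it is antisymmetric, so -i*v*(s3 kron D1)
# is Hermitian
D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
s3 = np.diag([1.0, -1.0])
s1 = np.array([[0.0, 1.0], [1.0, 0.0]])
D = -1j * v * np.kron(s3, D1) + theta * np.kron(s1, np.diag(np.tanh(z)))

g = np.cosh(z) ** (-theta / v)        # exponentially localized profile
psi = np.concatenate([g, -1j * g])    # spinor components stacked as (+, -)
rel_residual = np.linalg.norm(D @ psi) / np.linalg.norm(psi)
assert rel_residual < 1e-2            # psi is an approximate zero mode
```

The residual is of size $O(h^2)$, the truncation error of the centered difference; refining the grid drives it further toward zero.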
For any $f\in L^2{(\mathbb{R})}$, introduce the orthogonal projection operators,
\begin{equation*}
\widehat{P}_{\parallel}f =
\inner{\widehat{\alpha}_{\star},f}_{L^2(\mathbb{R})}\widehat{\alpha}_{\star},~~~\text{and}~~~\widehat{P}_{\perp}f =
(I-\widehat{P}_{\parallel})f.
\end{equation*}
Since $\widehat{P}_{\parallel}\widehat{\mathcal{D}}\widehat{\beta}(\xi)=0$ and
$\widehat{P}_{\perp}\widehat{\mathcal{D}}\widehat{\beta}(\xi)=\widehat{\mathcal{D}}\widehat{\beta}(\xi)$,
equation \eqref{erroreqnfactored} is equivalent to the system
\begin{align}
&\widehat{P}_{\parallel}\left\{\left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}} +
\widehat{\mathcal{L}}^{\delta}(\mu) -\delta
\mu\right)\widehat{\beta}(\xi) - \mu\widehat{\mathcal{M}}(\xi;\delta) -
\widehat{\mathcal{N}}(\xi;\delta)\right\} = 0, \label{pplleqn}\\
&\widehat{\mathcal{D}}\widehat{\beta}(\xi) +
\widehat{P}_{\perp}\left\{\left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}}
+ \widehat{\mathcal{L}}^{\delta}(\mu) -\delta \mu\right)\widehat{\beta}(\xi)\right\} =
\widehat{P}_{\perp}\left\{\mu\widehat{\mathcal{M}}(\xi;\delta) +
\widehat{\mathcal{N}}(\xi;\delta)\right\}.
\label{pperpeqn}
\end{align} Our strategy will be to first solve \eqref{pperpeqn} for $\widehat{\beta}=
\widehat{\beta}[\mu,\delta]$, for $\delta>0$ sufficiently small.
We then substitute
$\widehat{\beta}[\mu,\delta]$ into \eqref{pplleqn} to obtain a closed scalar equation. This equation is then solved
for $\mu=\mu(\delta)$ for $\delta$ small.
The first step in this strategy is accomplished in
\begin{proposition}\label{solve4beta}
Fix $M>0$. There exists $\delta_0>0$ and a mapping
$(\mu,\delta)\in R_{M,\delta_0}\equiv \{|\mu|<M\}\times (0,\delta_0)\mapsto \widehat{\beta}(\cdot;\mu,\delta)\in
L^{2,1}(\mathbb{R})$
which is Lipschitz in $\mu$, such that $\widehat{\beta}(\cdot;\mu,\delta)$ solves
\eqref{pperpeqn} for $(\mu,\delta)\in R_{M,\delta_0}$. Furthermore, we have the bound
\begin{equation*}
\norm{\widehat{\beta}(\cdot;\mu,\delta)}_{L^{2,1}(\mathbb{R})} \lesssim \delta^{-1},\ 0<\delta<\delta_0.
\end{equation*}
\end{proposition}
The details of the proof of Proposition \ref{solve4beta} are similar to those in the proof of
Proposition 6.10 in \cites{FLW-MAMS:15}; equation \eqref{pperpeqn} is expressed as $(I+C^\delta(\mu))\widehat{\beta}(\xi;\mu,\delta) = \widehat{\mathcal{D}}^{-1} \widehat{P}_{\perp}\left\{\mu\widehat{\mathcal{M}}(\xi;\delta) +
\widehat{\mathcal{N}}(\xi;\delta)\right\}$, and the operator $C^\delta(\mu)$ is proved to be bounded on $L^{2,1}(\mathbb{R})$ and of norm less than one for all $0<\delta<\delta_0$, with $\delta_0$ sufficiently small. In bounding $C^\delta(\mu)$ on $L^{2,1}(\mathbb{R})$, we require $H^1(\mathbb{R})$ bounds for wave operators associated with the Dirac operator, $\mathcal{D}$. These can be derived from corresponding results for scalar Schr\"odinger operators, under the assumptions implied by $\kappa(\zeta)$ being a domain wall function in the sense of Definition \ref{domain-wall-defn}.
\subsection{Final reduction to an equation for $\mu=\mu(\delta)$ and its solution}\label{final-reduction}
Substituting the solution $\widehat{\beta}(\xi;\mu,\delta)$ (Proposition \ref{solve4beta}) into \eqref{pplleqn} yields the equation $\mathcal{J}_+[\mu,\delta]=0$, relating $\mu$ and $\delta$.
Here, $\mathcal{J}_{+}[\mu;\delta] $ is given by:
\begin{align*}
\mathcal{J}_{+}[\mu;\delta] &\equiv
\mu\ \delta\inner{\widehat{\alpha}_{\star}(\cdot),\widehat{\mathcal{M}}(\cdot;\delta)}_{L^2(\mathbb{R})}
+ \delta\inner{\widehat{\alpha}_{\star}(\cdot),\widehat{\mathcal{N}}(\cdot;\delta)}_{L^2(\mathbb{R})} \\
&~~~ -\delta\inner{\widehat{\alpha}_{\star}(\cdot),\left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}}\right)
\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}
-\delta\inner{\widehat{\alpha}_{\star}(\cdot),\widehat{\mathcal{L}}^{\delta}(\mu)
\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}\nonumber\\
&~~~+\delta^2 \mu\inner{\widehat{\alpha}_{\star}(\cdot),\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}. \nonumber
\end{align*}
The mapping $(\mu,\delta)\in\{|\mu|<M,\ \delta\in(0,\delta_0)\}\mapsto\mathcal{J}_{+}[\mu,\delta]$ is well defined and Lipschitz continuous with respect to $\mu$. In the following proposition, we note that $\mathcal{J}_{+}[\mu,\delta]$ can
be extended to a continuous function on the half-open interval $[0,\delta_0)$.
\begin{proposition}
\label{proposition3}
Let $\delta_0>0$ be as above. Define
\begin{equation*}
\mathcal{J}[\mu,\delta] \equiv \left\{
\begin{array}{cl}
\mathcal{J}_{+}[\mu,\delta] & ~~~\text{for}~~ 0<\delta<\delta_0,\\
\mu-\mu_0 & ~~~\text{for}~~ \delta=0\ ,
\end{array} \right.\
\end{equation*}
where
$ \mu_0 \equiv -\inner{\alpha_{\star},\mathcal{G}^{(2)}}_{L^2(\mathbb{R})} = E^{(2)}$,
and $\mathcal{G}^{(2)}$ is given in \eqref{ipG}; see also \eqref{G2def} and \eqref{solvability_cond_E2}.
Fix $M = \max\{2\abs{\mu_0},1\}$.
Then, $(\mu,\delta)\in\{|\mu|<M,\ 0\le\delta<\delta_0\}\mapsto\mathcal{J}[\mu,\delta]$
is well-defined and continuous.
\end{proposition}
{\it Proof:} The proof parallels that of Proposition 6.16 of \cites{FLW-MAMS:15}.
The key is to establish the following asymptotic relations, for all $0<\delta<\delta_0$ with $\delta_0$
sufficiently small:
\begin{align}
\lim_{\delta\rightarrow0}\delta\inner{\widehat{\alpha}_{\star}(\cdot),
\widehat{\mathcal{M}}(\cdot;\delta)}_{L^2(\mathbb{R})} &=1; \label{sketch_limit1} \\
\lim_{\delta\rightarrow0}\delta\inner{\widehat{\alpha}_{\star}(\cdot),
\widehat{\mathcal{N}}(\cdot;\delta)}_{L^2(\mathbb{R})} &= -\mu_0; \label{sketch_limit2}
\end{align}
and the following bounds hold for some constant $C_M$:
\begin{align}
\abs{\delta\inner{\widehat{\alpha}_{\star}(\cdot), \left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}}\right)
\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}} &\le C_M \delta^{1-\nu}; \label{sketch_bound1} \\
\abs{\delta\inner{\widehat{\alpha}_{\star}(\cdot), \widehat{\mathcal{L}}^{\delta}(\mu)
\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}} &\le C_M \delta^{\nu}; \label{sketch_bound2} \\
\abs{\delta^2 \mu\inner{\widehat{\alpha}_{\star}(\cdot), \widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}}
&\le C_M \delta. \label{sketch_bound3}
\end{align}
The detailed verification of \eqref{sketch_limit1}-\eqref{sketch_bound3} follows
the approach taken in Appendix H of \cites{FLW-MAMS:15}. We make a few remarks on the calculations. Each of the expressions in \eqref{sketch_limit1}-\eqref{sketch_limit2}
consists of inner products of the form:
{\small
\begin{equation}
\mathfrak{J}(\delta) \equiv \delta \left\langle \widehat{\alpha}_{\star}(\xi) , \chi\left(\abs{\xi}\leq\delta^{\nu-1}\right)
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi), J({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \\
\inner{\Phi_{-}({\bf x},\delta\xi ), J({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2}
\end{pmatrix} \right\rangle_{L^2(\mathbb{R}_\xi)} .\label{sample-ip}
\end{equation}}
Here, $J({\bf x},\zeta)=e^{i{\bf K}\cdot{\bf x}}\mathcal{K}({\bf x},\zeta)$, where
${\bf x}\mapsto\mathcal{K}({\bf x},\zeta)$ is $\Lambda_h-$ periodic and $\zeta\mapsto \mathcal{K}({\bf x},\zeta)$ is smooth and rapidly decaying on $\mathbb{R}$. Consider, for example, the inner product
$\inner{\Phi_{+}({\bf x},\delta\xi), J({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2}$. This may be rewritten and expanded, using Lemma \ref{poisson_exp}:
\begin{align}
&\delta\ \int_\Sigma e^{-i\delta\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \overline{p_+({\bf x},\delta\xi)}\ \mathcal{K}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\ d{\bf x}\ =\ \sum_{n\in\mathbb{Z}} \int_\Omega e^{in {\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \overline{p_+({\bf x},\delta\xi)}\ \widehat{\mathcal{K}}\left({\bf x},\frac{n}{\delta}+\xi\right)\ d{\bf x}
\nonumber\\
&\ = \int_\Omega \overline{p_+({\bf x},\delta\xi)}\ \widehat{\mathcal{K}}\left({\bf x},\xi\right)\ d{\bf x}\ +\ \sum_{|n|\ge1} \int_\Omega e^{in {\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \overline{p_+({\bf x},\delta\xi )}\ \widehat{\mathcal{K}}\left({\bf x},\frac{n}{\delta}+\xi\right)\ d{\bf x} . \nonumber
\end{align}
Since in \eqref{sample-ip} $\xi$ is localized to the set where $|\xi|\le\delta^{\nu-1}$, $\nu>0$, we have $n/\delta+\xi\approx n/\delta$ for $|n|\ge1$, and the decay of $\zeta\mapsto\widehat{\mathcal{K}}({\bf x},\zeta)$ can be used to show that, as $\delta$ tends to zero, the sum over $|n|\ge1$ tends to zero in $L^2(d\xi)$. It can also be shown, using the localization of $\xi$, that the $n=0$ contribution to the sum tends to
$\left\langle P_+({\bf x}), \widehat{\mathcal{K}}\left({\bf x},\xi\right)\ \right\rangle_{L^2(\Omega)}$.
Therefore, uniformly in $|\xi|\le\delta^{\nu-1}$, we have
\begin{equation}
\lim_{\delta\to0}\ \delta\ \inner{\Phi_{+}({\bf x},\delta\xi), J({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2}\ =\ \left\langle P_+({\bf x}), \widehat{\mathcal{K}}\left({\bf x},\xi\right)\ \right\rangle_{L^2(\Omega)} .
\nonumber\end{equation}
Therefore,
\begin{align}
\lim_{\delta\to0}\mathfrak{J}(\delta)\ & =
\int_\mathbb{R}\ d\xi\ \left[\
\overline{\widehat{\alpha}_{\star,+}(\xi)}\ \left\langle P_+({\bf x}), \widehat{ \mathcal{K} }\left({\bf x},\xi\right)\ \right\rangle_{L^2(\Omega)} \right. \nonumber \\
&\qquad\qquad\qquad \left. + \overline{\widehat{\alpha}_{\star,-}(\xi)}\
\left\langle P_-({\bf x}), \widehat{ \mathcal{K} }\left({\bf x},\xi\right)\ \right\rangle_{L^2(\Omega)}
\ \right] . \label{help-limit}
\end{align}
The principal contribution to the limit in \eqref{sketch_limit1} comes from the $\widehat{\mathcal{M}}_1(\xi;\delta)$ term in \eqref{M_op}.
We apply \eqref{help-limit} with the choice
$J=J_{\mathcal{M}}({\bf x},\zeta)= \psi^{(0)}({\bf x},\zeta)$ and
\[ \mathcal{K}_{\mathcal{M}}({\bf x},\zeta)\equiv e^{-i{\bf K}\cdot{\bf x}}J_{\mathcal{M}}({\bf x},\zeta)=\alpha_{\star,+}(\zeta)P_+({\bf x})+\alpha_{\star,-}(\zeta)P_-({\bf x})\ .\]
The principal contribution to the limit in \eqref{sketch_limit2} comes from the $\widehat{\mathcal{N}}_1(\xi;\delta)$ and $\widehat{\mathcal{N}}_2(\xi;\delta)$ terms in
\eqref{N_op}. We
apply \eqref{help-limit} with the choice
\[J=J_{\mathcal{N}}({\bf x},\zeta)= \left(2{\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\ \partial_\zeta-\kappa(\zeta)W({\bf x})\right)\psi^{(1)}_p({\bf x},\zeta)+|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi^{(0)}({\bf x},\zeta) \]
and $\mathcal{K}_{\mathcal{N}}({\bf x},\zeta)\equiv
e^{-i{\bf K}\cdot{\bf x}}J_{\mathcal{N}}({\bf x},\zeta)$. The detailed computations are omitted since they
are similar to those in \cites{FLW-MAMS:15}.
By \eqref{sketch_limit1}-\eqref{sketch_bound3}, it follows that
$\mathcal{J}_+[\mu,\delta]=\mu-\mu_0+o(1)$ as $\delta\rightarrow0$, uniformly for
$|\mu|\leq M$.
Therefore, $\mathcal{J}[\mu,\delta]$ is well-defined on $ \{(\mu,\delta)\ :\ |\mu|<M,\ 0\leq\delta<\delta_0\}$ and continuous at $\delta=0$, and Proposition \ref{proposition3} is proved.
Summarizing: given $\widehat{\beta}(\cdot,\mu,\delta)$, constructed in Proposition \ref{solve4beta}, to complete our construction of a solution to \eqref{pplleqn}-\eqref{pperpeqn} it suffices to solve \eqref{pplleqn} for
$\mu=\mu(\delta)$. Furthermore, we have just shown that \eqref{pplleqn} holds if and only if $\mu=\mu(\delta)$ is a solution of $\mathcal{J}[\mu,\delta]=0$.
From Proposition \ref{proposition3} it follows that $\mathcal{J}[\mu,\delta]=0$ has a transverse zero, $\mu=\mu(\delta)$, for all $\delta>0$ sufficiently small. The details are presented in Proposition 6.17
of \cites{FLW-MAMS:15}:
\begin{proposition}\label{solveJeq0}
There exists $\delta_0>0$, and a function $\delta\mapsto\mu(\delta)$, defined for $0\le\delta<\delta_0$ such that:
$|\mu(\delta)|\le M$, $\lim_{\delta\to0}\mu(\delta)=\mu(0)=\mu_0 \equiv E^{(2)}$ and
$\mathcal{J}[\mu(\delta),\delta]=0$ for all $0\le\delta<\delta_0$.
\end{proposition}
\noindent We have constructed a solution pair $\left(\widehat{\beta}^\delta(\xi),\mu(\delta)\right)$, with
$\widehat{\beta}^\delta\in L^{2,1}(\mathbb{R};d\xi)$, of the band-limited Dirac system \eqref{compacterroreqn}. Now apply Proposition \ref{needtoshow} and the proof of Theorem \ref{thm-edgestate} is complete.
\section{Edge states for weak potentials and the no-fold condition for the zigzag slice} \label{zz-gap}
In Section \ref{thm-edge-state} we fixed an arbitrary edge, ${\bm{\mathfrak{v}}}_1=a_1{\bf v}_1+a_2{\bf v}_2$, and proved the existence of topologically protected ${\bm{\mathfrak{v}}}_1-$ edge states under the spectral no-fold condition. In this section, we consider the special case of the zigzag edge, corresponding to the choice ${\bm{\mathfrak{v}}}_1={\bf v}_1$. We prove that the spectral no-fold condition holds in the weak potential regime, provided $\varepsilon V_{1,1}>0$; this implies the existence of a topologically protected family of zigzag edge states.
We proceed in this section to prove the following:
\begin{enumerate}
\item Theorem \ref{SGC!}: The operator $-\Delta+\varepsilon V$ satisfies the no-fold condition along the zigzag (${\bf k}_2$) slice at the Dirac point $({\bf K},E^\varepsilon_\star)$; see Definition \ref{SGC}.
\item Theorem \ref{delta-gap}: $-\Delta+\varepsilon V({\bf x})+\delta W({\bf x})$ acting in $L^2(\Sigma_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1})$ has a spectral gap
about the energy $E=E_\star^\varepsilon$.
\item Theorem \ref{NO-directional-gap!}: If $\varepsilon V_{1,1}<0$, then the spectral no-fold condition for the zigzag slice does not hold.
\item Theorem \ref{Hepsdelta-edgestates}: For $0<|\delta|\ll\varepsilon^2$ and $\varepsilon$ sufficiently small, the zigzag edge state eigenvalue problem for $H^{(\varepsilon,\delta)} =-\Delta+\varepsilon V({\bf x})+\delta\kappa(\delta{\bf k}_2\cdot{\bf x}) W({\bf x})$ has topologically protected edge states.
\end{enumerate}
\noindent We begin by stating our detailed
assumptions on $V({\bf x})$ and $W({\bf x})$. There exists ${\bf x}_0\in\mathbb{R}^2$ such that $\tilde{V}({\bf x})=V({\bf x}-{\bf x}_0)$
and $\tilde{W}({\bf x})=W({\bf x}-{\bf x}_0)$ satisfy the following:
\begin{equation}
\text{\rm{\bf Assumptions (V)}}
\label{V-assumptions}\end{equation}
\begin{enumerate}
\item[(V1)] $\Lambda_h$- periodicity:\ \ $\tilde{V}({\bf x}+{\bf v})= \tilde{V}({\bf x})$ for all ${\bf v}\in\Lambda_h$.
\item[(V2)] Inversion symmetry:\ \ $\tilde{V}({\bf x})=\tilde{V}(-{\bf x})$.
\item[(V3)] $2\pi/3$-rotational invariance:\ $\tilde{V}(R^*{\bf x})=\tilde{V}({\bf x})$.
\item[(V4)] Positivity of Fourier coefficient of $\varepsilon V$, $\varepsilon V_{1,1}$: $\varepsilon \tilde{V}_{1,1}>0$, \\
where
$\tilde{V}_{1,1}= \frac{1}{|\Omega|}\int_\Omega e^{-i({\bf k}_1+{\bf k}_2)\cdot{\bf y}}\tilde{V}({\bf y}) d{\bf y}$;
see \eqref{V11eq0} and \eqref{Omega-fourier}.
\end{enumerate}
\begin{equation}
\text{\rm{\bf Assumptions (W)}}
\label{W-assumptions}
\end{equation}
\begin{enumerate}
\item[(W1)~] $\Lambda_h$- periodicity:\ \ $\tilde{W}({\bf x}+{\bf v})= \tilde{W}({\bf x})$ for all ${\bf v}\in\Lambda_h$.
\item[(W2)~] Anti-symmetry:\ \ $\tilde{W}(-{\bf x})= -\tilde{W}({\bf x})$.
\item[(W3$^*$)] Uniform nondegeneracy of $\tilde{W}$:\ \ Let $\Phi^\varepsilon_j({\bf x}),\ j=1,2$ denote the $L^2_{{\bf K},\tau}$, respectively, $L^2_{{\bf K},\overline\tau}$, modes of the degenerate $L^2_{\bf K}-$ eigenspace of $H^{(\varepsilon,0)}=-\Delta+\varepsilon V$. Then, there exists $\theta_0>0$, independent of $\varepsilon$, such that for all $\varepsilon$ sufficiently small:
\begin{equation}
\left|\ \vartheta^\varepsilon_\sharp\ \right|\ \equiv \left|\inner{\Phi_1^\varepsilon,\tilde{W}\Phi_1^\varepsilon}_{L^2_{\bf K}} \right|\ge\theta_0>0\ .\label{uniform-nondegen}
\end{equation}
\end{enumerate}
\noindent {N.B.\ \ Consistent with our earlier convention, in the following discussion, we shall drop the ``tildes'' on both $V$ and $W$. It will be understood
that we have chosen coordinates with ${\bf x}_0=0$.}
\begin{remark}\label{on-theta-sharp}
We claim that (W3$^*$), the uniform non-degeneracy of $W$ (see \eqref{uniform-nondegen}), is equivalent to the assumption:
\begin{equation}
\tag{W3*}
{W}_{0,1} + {W}_{1,0} - {W}_{1,1} \ne0 ,
\label{uniform-nondegen-W}
\end{equation}
where $\{W_{\bf m}\}_{ {\bf m}\in\mathbb{Z}^2}$ denote the Fourier coefficients of $W({\bf x})$.
To see this, note by
Proposition 3.1 of \cites{FW:12}, that for sufficiently small $\varepsilon$,
\[\Phi^\varepsilon_1({\bf x}) = \frac{1}{\sqrt{3|\Omega|}} e^{i{\bf K}\cdot{\bf x}} \left[1+\overline{\tau}e^{i{\bf k}_2\cdot{\bf x}} + \tau e^{-i{\bf k}_1\cdot{\bf x}}\right] +\mathcal{O}(\varepsilon); \]
see also \eqref{p_sigma}.
Evaluation of $\vartheta^\varepsilon_\sharp$ gives
\begin{align*}
\vartheta^\varepsilon_{\sharp} &= \frac{1}{3|\Omega|} \int_\Omega \abs{1+\overline{\tau}e^{i{\bf k}_2\cdot{\bf x}} +\tau e^{-i{\bf k}_1\cdot{\bf x}}}^2 W({\bf x})\ d{\bf x}+\mathcal{O}(\varepsilon) \nonumber\\
&= \frac{1}{3} \left[ ({W}_{0,1} + {W}_{1,0})\ (\tau - \overline{\tau}) + {W}_{1,1}(\tau^2 - \overline{\tau}^2) \right] + \mathcal{O}(\varepsilon) \nonumber \\
&=i\frac{\sqrt3}3\ \left[{W}_{0,1} + {W}_{1,0} - {W}_{1,1} \right] + \mathcal{O}(\varepsilon) ,
\end{align*}
which is nonzero if \eqref{uniform-nondegen-W} holds and $\varepsilon$ is sufficiently small.
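In the second step we have used, recalling $\tau=e^{2\pi i/3}$ (so that $\tau^2=\overline{\tau}$), the elementary identities
\begin{equation*}
\tau-\overline{\tau}\ =\ 2i\sin\left(\tfrac{2\pi}{3}\right)\ =\ i\sqrt{3},\qquad
\tau^2-\overline{\tau}^2\ =\ \overline{\tau}-\tau\ =\ -i\sqrt{3}\ .
\end{equation*}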
\end{remark}
Let $({\bf K},E_\star^\varepsilon)$ denote a Dirac point of $H^{(\varepsilon,0)}=-\Delta + \varepsilon V({\bf x})$,
guaranteed to exist by Theorem \ref{diracpt-small-thm} for all $0<|\varepsilon|<\varepsilon_0$ and assume that $V({\bf x})$ and $W({\bf x})$ satisfy Assumptions (V), (W); see \eqref{V-assumptions}-\eqref{W-assumptions}.
\noindent In our next result, we verify the spectral no-fold condition for the zigzag edge. This is central to applying Theorem \ref{thm-edgestate} to prove our result (Theorem \ref{Hepsdelta-edgestates}) on the existence of a family of zigzag edge states.
\begin{theorem}\label{SGC!}\ \
There exists a positive constant, $\varepsilon_2\le\varepsilon_0$, such that for any $0<|\varepsilon|<\varepsilon_2$, $H^{(\varepsilon,0)}=-\Delta+\varepsilon V$ satisfies the spectral no-fold condition at quasi-momentum ${\bf K}$ along the zigzag slice; see Definition \ref{SGC}.\\ By Assumptions (V), $E_-(\lambda)\le
E_+(\lambda)\le E_b(\lambda),\ b\ge3$. For any $\mathfrak{a}$ sufficiently small:
\begin{align*}
&b=\pm:\ \ \mathfrak{a} \le |\lambda|\le\frac12\ \implies\ \Big|\ E^{\varepsilon,0}_b(\lambda) - E^\varepsilon_\star\ \Big|\ \ge\ \frac{q^4}2\ |V_{1,1}\ \varepsilon|\ \lambda^2\ge c_1\ |\varepsilon|\ \mathfrak{a}^2 ,
\\
&b\ge3:\ \ |\lambda|\le1/2 \ \implies \ \Big| E^{\varepsilon,0}_b(\lambda)-E^{\varepsilon}_\star \Big|\ \ge\ c_2\ |\varepsilon|\ (1+|b|)\ .
\end{align*}
\end{theorem}
\noindent Theorem \ref{SGC!} controls the zigzag slice of the band structure at ${\bf K}$, globally and outside a neighborhood of $({\bf K},E^\varepsilon_\star)$. Since a small perturbation of $V$ which breaks inversion symmetry opens a gap, locally about ${\bf K}$ (see \cites{FW:12}), it can be shown that $H^{(\varepsilon,\delta)}$ has a $L^2_{{k_{\parallel}}=2\pi/3}-$ spectral gap about $E=E_\star^\varepsilon$:
\begin{theorem}\label{delta-gap}
Assume $V({\bf x})$ and $W({\bf x})$ satisfy Assumptions (V) and (W).
Let $\varepsilon_2$ be as in Theorem \ref{SGC!}. Then, there exists $c_\flat>0$ such that for all
$0<|\varepsilon|<\varepsilon_2$ and $0<\delta\le c_\flat\ \varepsilon^2$,
the operator $-\Delta+\varepsilon V({\bf x})+\delta W({\bf x})$ has a non-trivial $L^2_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1=2\pi/3}-$ spectral gap about the energy $E=E^\varepsilon_\star$.
\end{theorem}
\noindent In the case where (V4) does not hold, and instead the Fourier coefficient of $\varepsilon V$, $\varepsilon V_{1,1}$, is negative: $\varepsilon \tilde{V}_{1,1}<0$, we prove that the no-fold condition fails:
\begin{theorem}[Conditions for non-existence of a zigzag spectral gap]\label{NO-directional-gap!}
Assume hypotheses (V1)-(V3) but, instead of hypothesis (V4), assume
$\varepsilon V_{1,1}<0$.
Then, for any $\varepsilon$ sufficiently small, the no-fold condition of Definition \ref{SGC} does not hold for the zigzag slice.
\end{theorem}
\noindent The proofs of Theorems \ref{SGC!} and \ref{delta-gap} are presented below. Section \ref{LSreduction} discusses a reduction to the lowest three spectral bands. The general strategy, based on an analysis of a $3\times3$ determinant, is given in Section \ref{sec:strategy}. Theorem \ref{NO-directional-gap!} is proved in Section \ref{no-directionalgap} as a consequence of Proposition \ref{Deq0}.
We prove the following theorem on zigzag edge states using results of Section \ref{thm-edge-state}.
\begin{theorem}\label{Hepsdelta-edgestates}
Let
$ H^{(\varepsilon,\delta)} =-\Delta+\varepsilon V({\bf x})+\delta\kappa(\delta{\bf k}_2\cdot{\bf x}) W({\bf x})$,
where $V({\bf x})$ and $W({\bf x})$ satisfy Assumptions (V) and (W), and $\kappa({\bf X})$ is a domain wall function in the sense of Definition \ref{domain-wall-defn}.
Let $\varepsilon_2>0$ and $c_\flat>0$ be as in Theorem \ref{SGC!} and assume
$0<|\varepsilon|<\varepsilon_2$ and $0<|\delta|\le c_\flat \varepsilon^2$. Then,
there exist edge states, $\Psi({\bf x};k_\parallel)\in L^2_{k_\parallel}(\Sigma)$, with
$|k_\parallel-2\pi/3|$ sufficiently small.
Furthermore, continuous superposition in ${k_{\parallel}}$ yields wave-packets which are concentrated along the zigzag edge.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{Hepsdelta-edgestates}]
We claim that the theorem is an immediate consequence of the spectral no-fold condition for $-\Delta +\varepsilon V$ for the zigzag edge, stated in Theorem \ref{SGC!}. This follows by an application of Theorem \ref{thm-edgestate} (and Corollary \ref{vary_k_parallel}).
Since the details of the proof of Theorem \ref{thm-edgestate} are carried out for the case of $H^{(\varepsilon,\delta)}$ with $\varepsilon=1$,
we wish to point out how the proof applies with $\varepsilon$ and $\delta$ varying as in the statement of Theorem \ref{Hepsdelta-edgestates}.
The proof of Theorem \ref{thm-edgestate} uses a Lyapunov-Schmidt reduction strategy where the eigenvalue problem is reduced to an equivalent eigenvalue problem (nonlinear in the eigenvalue parameter, $E$)
for the Floquet-Bloch spectral components of the bound states in a neighborhood of the Dirac point $({\bf K},E^\varepsilon_\star)$. Stated for the relevant case of the zigzag edge, this reduction step requires the invertibility of an operator $(I-\mathcal{Q}_\delta)$ acting on $H_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1}^2(\Sigma)$, where $\mathcal{Q}_\delta$ is defined in terms of Floquet-Bloch components in \eqref{tQ-def} for ${\bm{\mathfrak{K}}}_2={\bf k}_2$.
It suffices to show that the $H_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1}^2(\Sigma)$ norm of $\mathcal{Q}_\delta$ is $o(1)$ as $\delta\downarrow0$. From \eqref{frak-e-bound} in Remark \ref{frak-e-remark}, the operator norm of $\mathcal{Q}_\delta$ is bounded by $\mathfrak{e}(\delta)$, given by:
\begin{equation*}
\mathfrak{e}(\delta) \lesssim\ \frac{|\delta|}{\omega(\delta^{\nu})\ c_1(V)}\ +\ \frac{|\delta|}{(1+|b|)\ c_2(V)} \
\leq \frac{|\delta|}{\omega(\delta^{\nu})\ c_1(V)}\ +\ \frac{|\delta|}{c_2(V)}.
\end{equation*}
By Theorem \ref{SGC!}, the no-fold condition holds with modulus $\omega(\mathfrak{a})=\mathfrak{a}^2$ and constants
\[ c_1(V)=\tilde{c}_1\frac{q^4}{2} |V_{1,1}\varepsilon|\ ,\ \ \ \ c_2(V)=\tilde{c}_2\,|\varepsilon|.\]
Therefore, with $\mathfrak{a}=\delta^\nu$,
\begin{equation*}
\mathfrak{e}(\delta) \lesssim\ \frac{|\delta|^{1-2\nu}}{c_1(V)}\ +\ \frac{|\delta|}{c_2(V)} \
\lesssim\ \frac{2}{q^4\tilde{c}_1 |V_{1,1}|}\cdot \frac{|\delta|^{1-2\nu}}{|\varepsilon|}\ +\ \frac{1}{\tilde{c}_2}\cdot \frac{|\delta|}{|\varepsilon|}.
\end{equation*}
By hypothesis, $|\delta|\le c_\flat\varepsilon^2$. Hence, $0\le \mathfrak{e}(\delta)
\lesssim |\varepsilon|^{1-4\nu}+|\varepsilon|\to0$ as $\varepsilon\to0$ if we choose $\nu\in(0,1/4)$.
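In more detail, substituting the hypothesis $|\delta|\le c_\flat\varepsilon^2$ into each term gives
\begin{equation*}
\frac{|\delta|^{1-2\nu}}{|\varepsilon|}\ \le\ \frac{\left(c_\flat\varepsilon^2\right)^{1-2\nu}}{|\varepsilon|}\ =\ c_\flat^{1-2\nu}\ |\varepsilon|^{1-4\nu}\ ,
\qquad
\frac{|\delta|}{|\varepsilon|}\ \le\ c_\flat\ |\varepsilon|\ .
\end{equation*}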
This completes the proof of Theorem \ref{Hepsdelta-edgestates}.
\end{proof}
To prove Theorems \ref{SGC!}--\ref{NO-directional-gap!} we introduce a tool to localize the analysis about the lowest three bands. Throughout the analysis we take, without loss of generality, $\varepsilon>0$; the cases $\varepsilon V_{1,1}>0$ and $\varepsilon V_{1,1}<0$ are then covered by varying the sign of $V_{1,1}$.
\subsection{Reduction to the lowest three bands}\label{LSreduction}
In this subsection we show, via a Lyapunov-Schmidt reduction argument, that the proofs of Theorems \ref{SGC!} and \ref{delta-gap} can be reduced to the study of the lowest three spectral bands. To achieve this, we consider several parameter space regimes separately.
We start by considering $\varepsilon=\delta=\lambda=0$. In this case, $H^{(0,0,0)}({\bf K})=-\left(\nabla+i{\bf K}\right)^2$ has a triple eigenvalue, $E^0_\star=|{\bf K}|^2$, with corresponding $3$-dimensional $L^2(\mathbb{R}^2/\Lambda_h)-$ eigenspace spanned by the eigenfunctions $p_\sigma$, for $\sigma=1,\tau,\overline{\tau}$; see Section \ref{dpts-generic-eps}.
Next, we turn on $\varepsilon$, keeping $\delta=\lambda=0$.
From Theorem \ref{diracpt-small-thm}, there exists $\varepsilon_0>0$ such that for $\varepsilon\in I_{\varepsilon_0}\equiv (-\varepsilon_0,\varepsilon_0)\setminus\{0\}$, the operator $H^{(\varepsilon,0,0)}({\bf K})=-\left(\nabla+i{\bf K}\right)^2+\varepsilon V({\bf x})$ has a double $L^2(\mathbb{R}^2/\Lambda_h)$ eigenvalue, $E^\varepsilon_\star$.
Recall $E^0_\star=|{\bf K}|^2$.
The maps $\varepsilon\mapsto E^\varepsilon_\star$ and $\varepsilon\mapsto \widetilde{E}^\varepsilon_\star$ are real analytic for $ \varepsilon\in I_{\varepsilon_0}$
with expansions \eqref{E*expand}, \eqref{tildeE*expand}:
\begin{align*}
E^\varepsilon_\star\ &= E_\star^0 + \varepsilon(V_{0,0}-V_{1,1})+\mathcal{O}(\varepsilon^2),\\
\widetilde{E}^\varepsilon_\star &= E_\star^0 + \varepsilon(V_{0,0}+2V_{1,1})+\mathcal{O}(\varepsilon^2).
\end{align*}
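Subtracting these expansions yields the splitting between the double eigenvalue and the nearby simple eigenvalue:
\begin{equation*}
\widetilde{E}^\varepsilon_\star\ -\ E^\varepsilon_\star\ =\ 3\varepsilon V_{1,1}\ +\ \mathcal{O}(\varepsilon^2)\ ,
\end{equation*}
which is of definite sign and of order $\varepsilon$ for small $\varepsilon\ne0$, provided $V_{1,1}\ne0$; compare \eqref{mutilde-expand}.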
More generally, we may study the eigenvalue problem
\begin{equation}
(H^{(\varepsilon,\delta,\lambda)}-E)p=0,\ \ p\in L^2(\mathbb{R}^2/\Lambda_h) , \text{\ \ with\ \ }
E\ =\ E_\star^\varepsilon + \mu ,
\label{evp-eps-delta-lam}
\end{equation}
and seek, via Lyapunov-Schmidt reduction, to localize \eqref{evp-eps-delta-lam} about the three lowest zigzag slices.
Written out, the eigenvalue problem has the form:
\begin{equation}
\Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu\ +\ \varepsilon V\ +\delta W\ \Big]p({\bf x}) = 0,\ \ p\in L^2(\mathbb{R}^2/\Lambda_h) .
\label{evp-p}
\end{equation}
Since $\varepsilon$ and $\delta$ will be chosen to be small, we shall expand $p({\bf x})$ relative to the natural basis of the $L^2(\mathbb{R}^2/\Lambda_h)-$ eigenspace of the free operator, $H^{(0,0,0)}=-(\nabla+i{\bf K})^2$, displayed explicitly in \eqref{p_sigma}. Let $P^\parallel$ denote the projection onto
${\rm span}\left\{p_\sigma:\sigma=1,\tau,\overline{\tau}\right\}$ and $P^\perp=I-P^\parallel$.
We seek a solution of \eqref{evp-p} in the form $p({\bf x}) = p_\parallel({\bf x}) + p_\perp({\bf x})$,
where
\begin{align*}
p_\parallel\in{\rm span}\left\{p_\sigma:\sigma=1,\tau,\overline{\tau}\right\},\ \
p_\perp\in \Big[\ {\rm span}\left\{p_\sigma:\sigma=1,\tau,\overline{\tau}\right\}\ \Big]^\perp .
\end{align*}
Then, we have that \eqref{evp-p} is equivalent to the coupled system of equations:
\begin{align}
&\Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu \Big]p_\parallel \label{p-parallel} \\
&\quad +
\Big[ \varepsilon P^\parallel V + \delta P^\parallel W \Big]p_\parallel +
\Big[ \varepsilon P^\parallel V + \delta P^\parallel W \Big]p_\perp = 0, \nonumber \\
&\Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu \Big]p_\perp \label{p-perp} \\
&\quad + \Big[ \varepsilon P^\perp V + \delta P^\perp W \Big]p_\perp +
\Big[ \varepsilon P^\perp V + \delta P^\perp W \Big]p_\parallel = 0 . \nonumber
\end{align}
\begin{proposition}\label{invert-on-Pperp}
There exists a constant $c>0$ such that if $|\varepsilon|+|\mu|<c$ then
\begin{equation*}
H^{(\varepsilon,0,\lambda)}-\mu\ \equiv\ \Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu \Big]\ \ \textrm{is invertible on}\ P^\perp L^2(\mathbb{R}^2/\Lambda_h) .
\end{equation*}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{invert-on-Pperp}]
This follows from the lower bound:
\begin{equation*}
0<g_0 \equiv \inf_{|\lambda |\le1/2}\ \inf_{ {\bf m}\notin \{ (0,0),(0,1),(-1,0) \} }
\Big|\ |{\bf K}+{\bf m}\vec{{\bf k}}+\lambda{\bf k}_2|^2-|{\bf K}|^2\ \Big| ,
\end{equation*}
where ${\bf m}\vec{{\bf k}}=m_1{\bf k}_1+m_2{\bf k}_2$.
\end{proof}
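To spell out the step left implicit in the proof: on $P^\perp L^2(\mathbb{R}^2/\Lambda_h)$ the operator $H^{(\varepsilon,0,\lambda)}-\mu$ acts as the Fourier multiplier with symbol $|{\bf K}+{\bf m}\vec{{\bf k}}+\lambda{\bf k}_2|^2-E^\varepsilon_\star-\mu$, where ${\bf m}\notin\{(0,0),(0,1),(-1,0)\}$. Since $E^\varepsilon_\star=|{\bf K}|^2+\mathcal{O}(\varepsilon)$, each symbol value is bounded below in absolute value:
\begin{equation*}
\Big|\ |{\bf K}+{\bf m}\vec{{\bf k}}+\lambda{\bf k}_2|^2-E^\varepsilon_\star-\mu\ \Big|\ \ge\ g_0\ -\ \big|E^\varepsilon_\star-|{\bf K}|^2\big|\ -\ |\mu|\ \ge\ \frac{g_0}{2}\ ,
\end{equation*}
for $|\varepsilon|+|\mu|<c$ with $c$ sufficiently small; hence the inverse exists, with operator norm at most $2/g_0$.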
By Proposition \ref{invert-on-Pperp}, for all $\varepsilon$ and $\mu$ sufficiently small equation \eqref{p-perp} is equivalent to:
\begin{align*}
&\Big[\ I\ +\ \left[H^{(\varepsilon,0,\lambda)}-\mu \right]^{-1}\
\left[ \varepsilon P^\perp V + \delta P^\perp W \right]\ \Big] p_\perp \\
&\qquad =\ -\left[H^{(\varepsilon,0,\lambda)}-\mu\right]^{-1}\ \left[ \varepsilon P^\perp V + \delta P^\perp W \right]p_\parallel .
\end{align*}
Suppose now $|\varepsilon|+|\delta|+|\mu|<d_1$, where $0<d_1\le c$ is chosen sufficiently small. Then we have
\begin{align}
p_\perp\
&= -\Big[\ I\ +\ \left[H^{(\varepsilon,0,\lambda)}-\mu \right]^{-1}\
\left[ \varepsilon P^\perp V + \delta P^\perp W \right]\ \Big]^{-1} \times \nonumber \\
&\qquad\qquad \left[H^{(\varepsilon,0,\lambda)}-\mu\right]^{-1}\ \left[ \varepsilon P^\perp V + \delta P^\perp W \right]p_\parallel
=\ \mathcal{P}(\varepsilon,\delta,\lambda,\mu)\ p_\parallel . \label{p-perp2}
\end{align}
%
Equation \eqref{p-perp2} defines a bounded linear mapping from $Ran\left(P^\parallel\right)$ to $Ran\left(P^\perp\right)$ with operator norm $\lesssim 1$:
\[ p_\parallel\mapsto p_\perp[\varepsilon,\delta,\lambda;p_\parallel]=\mathcal{P}(\varepsilon,\delta,\lambda,\mu)\ p_\parallel\ , \]
which is analytic in $\varepsilon, \delta, \lambda$ and $\mu$ for $|\varepsilon|+|\delta|+|\mu|<d_1$ and $|\lambda|<1/2$.
%
Consequently, equation \eqref{p-parallel} becomes a closed equation for $p_\parallel$
which we write as:
\begin{equation*}
\mathcal{M}(\varepsilon,\delta,\lambda,\mu) p =\ 0 ,
\end{equation*}
where
\begin{align}
\mathcal{M}(\varepsilon,\delta,\lambda,\mu) &\equiv
P^\parallel\Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu
+ \varepsilon V + \delta W \nonumber\\
&\quad
+ ( \varepsilon V + \delta W ) \mathcal{P}(\varepsilon,\delta,\lambda,\mu)
\Big] P^\parallel \label{calMdef1}\\
&=
P^\parallel \Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu + \varepsilon V + \delta W\nonumber\\
& \quad
+ ( \varepsilon V + \delta W ) P^\perp \left[ H^{(\varepsilon,0,\lambda)}-\mu+ \varepsilon V + \delta W \right]^{-1}P^\perp ( \varepsilon V + \delta W ) \Big] P^\parallel \ . \label{calMdef2}
\end{align}
The operator $\mathcal{M}(\varepsilon,\delta,\lambda,\mu)$ acts on the three-dimensional
space $P^\parallel L^2(\mathbb{R}^2/\Lambda_h)={\rm span}\{p_\sigma:\sigma=1,\tau,\overline{\tau}\}$.
Moreover, from the expression \eqref{calMdef2} it is clear that for $\varepsilon,\delta,\lambda,\mu$ real, $\mathcal{M}(\varepsilon,\delta,\lambda,\mu)$ is self-adjoint.
We shall study it via its matrix representation relative to the basis $\{p_\sigma:\sigma=1,\tau,\overline{\tau}\}$:
\begin{equation}
M_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,\mu) \equiv \left\langle p_\sigma,\mathcal{M}(\varepsilon,\delta,\lambda,\mu) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)},\ \ \sigma=1,\tau,\overline{\tau}\ .
\label{Msts}\end{equation}
Clearly, $M(\varepsilon,\delta,\lambda,\mu)$ is Hermitian for $\varepsilon,\delta,\lambda,\mu$ real:
$ M_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,\mu) = \overline{M_{{\tilde{\sigma}},\sigma}(\varepsilon,\delta,\lambda,\mu)}$.
Summarizing the above discussion we have the following:
\begin{proposition}\label{Msigtsig}
\begin{enumerate}
\item The entries of the $3\times3$
matrix $M(\varepsilon,\delta,\lambda,\mu)$ are analytic functions of complex
$\varepsilon, \delta, \lambda$ and $\mu$ for $|\varepsilon|+|\delta|+|\mu|<d_1$ and $|\lambda|<1/2$.
\item For $|\varepsilon|+|\delta|+|\mu|<d_1$ and $|\lambda|<1/2$, we have that
$E=E^\varepsilon_\star+\mu$ is an $L^2(\mathbb{R}^2/\Lambda_h)-$ eigenvalue of $H^{(\varepsilon,\delta,\lambda)}$
if and only if
$\det M(\varepsilon,\delta,\lambda,\mu)=0$.
\end{enumerate}
%
\end{proposition}
We now study the roots of $\det M(\varepsilon,\delta,\lambda,\mu)=0$ (eigenvalues of
$H^{(\varepsilon,\delta,\lambda)})$ for $\varepsilon$ and $\delta$ small.
First let $\varepsilon=\delta=0$. By the formulae \eqref{psig-nablaK-tpsig} and \eqref{J-matrix-computed}, derived and also applied in Section \ref{Mexpansion}, we have:
\begin{align*}
M(0,0,\lambda,\mu) &= \Big(\ \left\langle p_\sigma\ ,\ \mathcal{M}(0,0,\lambda,\mu) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\ \Big)_{ \sigma,{\tilde{\sigma}}=1,\tau,\overline\tau}\ ,
\nonumber \\
&= \Big(\ \left\langle p_\sigma\ ,\ \left(-(\nabla+i({\bf K}+\lambda{\bf k}_2))^2\right) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\ -\ (E_\star^0+\mu)\delta_{\sigma,{\tilde{\sigma}}}\ \Big)_{ \sigma,{\tilde{\sigma}}=1,\tau,\overline\tau}\ ,
\nonumber \\
&= \begin{pmatrix}
\lambda^2q^2-\mu & \alpha\lambda & \overline{\alpha}\lambda\\
\overline{\alpha}\lambda &\lambda^2q^2-\mu & \alpha\lambda
\\ \alpha\lambda & \overline{\alpha}\lambda & \lambda^2q^2-\mu
\end{pmatrix} , \ \ \alpha= \frac{q^2}{\sqrt3}\ i\tau.
\end{align*}
%
%
Thus, $M(0,0,\lambda,\mu)$ is singular if $\mu$ is a root of the polynomial:
\begin{align*}
&\det M(0,0,\lambda,\mu) \\
&\quad = -\mu^3 + \mu^2 (3q^2\lambda^2) + \mu(3\lambda^2|\alpha|^2-3\lambda^4q^4)
+\lambda^6q^6 -3\lambda^4q^2|\alpha|^2+2\lambda^3\Re(\alpha^3) \nonumber \\
&\quad = -\mu^3 + \mu^2(3\lambda^2q^2) + \mu(\lambda^2q^4-3\lambda^4q^4) + \lambda^6q^6 - \lambda^4q^6,
\end{align*}
where we have used that $|\alpha|^2=\frac{q^4}{3}$ and $\Re(\alpha^3)=0$.
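Equivalently, the cubic factors explicitly as
\begin{equation*}
\det M(0,0,\lambda,\mu)\ =\ -\left(\mu-q^2\lambda^2\right)\ \left(\mu-q^2\lambda(\lambda-1)\right)\ \left(\mu-q^2\lambda(\lambda+1)\right),
\end{equation*}
as may be verified by comparing the elementary symmetric functions of the three roots with the coefficients of the cubic.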
The roots, $\mu^{(0)}_j(\lambda),\ j=1,2,3$, defined by the ordering
$\mu^{(0)}_1(\lambda)\le\mu^{(0)}_2(\lambda)\le\mu^{(0)}_3(\lambda)$,
listed with multiplicity, are given by:
\begin{align}
{\bf 0\le\lambda\le\frac12:}\
&\mu^{(0)}_1(\lambda)=q^2\lambda (\lambda-1),\ \mu^{(0)}_2(\lambda)=q^2\lambda^2,\ \mu^{(0)}_3(\lambda)=q^2\lambda (\lambda+1) , \label{mulam>0}\\
{\bf -\frac12\le\lambda\le0:}\
&\mu^{(0)}_1(\lambda)=q^2\lambda (\lambda+1),\ \mu^{(0)}_2(\lambda)=q^2\lambda^2,\ \mu^{(0)}_3(\lambda)=q^2\lambda (\lambda-1) . \label{mulam<0}
\end{align}
The roots $\mu^{(0)}_j(\lambda), j=1,2,3$, are eigenvalues of $-\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^0$. They are uniformly bounded away from all other $L^2(\mathbb{R}^2/\Lambda_h)-$ spectrum; the distance is bounded below by
\begin{equation*}
g_0 \equiv \inf_{|\lambda |\le1/2}\ \inf_{ {\bf m}\notin \{ (0,0),(0,1),(-1,0) \} }
\Big|\ |{\bf K}+{\bf m} \vec{{\bf k}}+\lambda{\bf k}_2|^2-|{\bf K}|^2\ \Big|\ > 0 ,\ \ (E^0_\star=|{\bf K}|^2),
\end{equation*}
where ${\bf m} \vec{{\bf k}}=m_1{\bf k}_1+m_2{\bf k}_2$.
Note that $\det M(0,0,0,\mu)=-\mu^3$ has a root of multiplicity three: $\mu=0$. By Proposition \ref{Msigtsig} and analyticity of $\det M(\varepsilon,\delta,\lambda,\mu)$, we have that for $\varepsilon, \delta$ and $\lambda$ in a small neighborhood of the origin in $\mathbb{C}^3$, there are three solutions of
$\det M(\varepsilon,\delta,\lambda,\mu)=0$. We label them $\mu_j(\varepsilon,\delta,\lambda),\ j=1,2,3$. By self-adjointness of $M(\varepsilon,\delta,\lambda,\mu)$ for real parameters, these roots are real for real values of $\varepsilon$, $\delta$ and $\lambda$. Correspondingly,
for small and real $\varepsilon, \delta$ and $\lambda$ there are three real $L^2_{k_\parallel=2\pi/3}-$ eigenvalues of $H^{(\varepsilon,\delta,\lambda)}$, denoted
\[ E_j^{(\varepsilon,\delta)}(\lambda)\equiv E_j^{(\varepsilon,\delta)}({\bf K}+\lambda{\bf k}_2)\equiv E^\varepsilon_\star+\mu_j(\varepsilon,\delta,\lambda),\ \ \ j=1,2,3.\]
The ordering of the $E_j$ implies
\[ \mu_1(\varepsilon,\delta,\lambda)\le \mu_2(\varepsilon,\delta,\lambda)\le \mu_3(\varepsilon,\delta,\lambda) .\]
A mild extension of Proposition \ref{directional-bloch} yields
\begin{proposition}\label{roots-delta0}
For each $|\lambda|\le1/2$, there exist orthonormal $L^2_{{\bf K}+\lambda{\bf k}_2}-$ eigenpairs $(\Phi_\pm({\bf x};\lambda),E_\pm(\lambda))$ and $(\widetilde{\Phi}({\bf x};\lambda),\widetilde{E}(\lambda))$, real analytic in $\lambda$, such that
\[ \textrm{span}\ \{\Phi_{-}({\bf x};\lambda), \Phi_{+}({\bf x};\lambda), \widetilde{\Phi}({\bf x};\lambda)\}
=\ \textrm{span}\ \{\Phi_j({\bf x};{\bf K}+\lambda{\bf k}_2) : j=1,2,3 \}\ .
\]
For fixed $\varepsilon$ small and $\lambda$ tending to $0$ we have:
\begin{enumerate}
\item
\begin{align}
E_\pm^{\varepsilon,\delta=0}(\lambda)\ &=\ E_\star^\varepsilon\ \pm\ |\lambda^\varepsilon_\sharp|\ |z_2|\ \lambda\ +\ \lambda^2\ e_\pm(\lambda,\varepsilon) , \label{Epm-expand}\\
\widetilde{E}^{\varepsilon,\delta=0}(\lambda)\ &=\ \widetilde{E}_\star^\varepsilon\ +\ \lambda^2\ \widetilde{e}(\lambda,\varepsilon) , \label{Etilde-expand}
\end{align}
where $e_\pm(\lambda,\varepsilon),\ \widetilde{e}(\lambda,\varepsilon)\ =\ \mathcal{O}(1)$ as $\lambda, \varepsilon\to0$, and $z_2=k_2^{(1)}+ik_2^{(2)}$. These functions are given by convergent power series in $\varepsilon$ and $\lambda$ in a fixed $\mathbb{C}^2$ neighborhood of the origin. Furthermore, $e_\pm(\lambda,\varepsilon),\ \widetilde{e}(\lambda,\varepsilon)$ are
real-valued for real $\lambda$ and $\varepsilon$.
\item Therefore, for all small $\varepsilon$ and $\lambda$, the three roots of $\det M(\varepsilon,\delta=0,\lambda,\mu)$ may be labeled:
\begin{align}
\mu_+(\lambda,\varepsilon)\ &=\ |\lambda^\varepsilon_\sharp|\ |z_2|\ \lambda\ +\ \lambda^2\ e_+(\lambda,\varepsilon)\label{mu+expand}\\
\mu_-(\lambda,\varepsilon)\ &= -|\lambda^\varepsilon_\sharp|\ |z_2|\ \lambda\ +\ \lambda^2\ e_-(\lambda,\varepsilon)\label{mu-expand}\\
\tilde{\mu}(\lambda,\varepsilon)\ &=\ \widetilde{E}_\star^\varepsilon-E_\star^\varepsilon\ +\ \lambda^2\ \widetilde{e}(\lambda,\varepsilon)\ ,\ \ {\rm where}\ \widetilde{E}_\star^\varepsilon-E_\star^\varepsilon= 3\varepsilon V_{1,1}\ +\ \mathcal{O}(\varepsilon^2) . \label{mutilde-expand}
\end{align}
\end{enumerate}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{roots-delta0}]
Part 1 can be proved as follows. The expansion \eqref{Epm-expand} and analyticity follow by perturbation theory as in Proposition \ref{directional-bloch}; see also \cites{Friedrichs:65,kato1995perturbation}. The expansion \eqref{mutilde-expand} also follows by perturbation theory of the simple eigenvalue $\widetilde{E}^\varepsilon_\star $ for $\lambda=0$.
It is easy to see that
\begin{equation}
\widetilde{E}^{\varepsilon,\delta=0}(\lambda)\
=\ \widetilde{E}_\star^\varepsilon\ +\
\lambda\ \times\left(\ -2i{\bf k}_2\cdot \left\langle\widetilde{\Phi}^\varepsilon,\nabla_{\bf x}\widetilde{\Phi}^\varepsilon
\right\rangle_{L^2_{\bf K}}\ \right)\ +\ \lambda^2\ \widetilde{e}(\lambda,\varepsilon).
\end{equation}
We claim that $\left\langle\widetilde{\Phi}^\varepsilon,\nabla_{\bf x}\widetilde{\Phi}^\varepsilon
\right\rangle_{L^2_{\bf K}}=0$. This follows since $\widetilde{\Phi}^\varepsilon\in L^2_{{\bf K},1}$ as in the proof of Proposition 4.1 of \cites{FW:12}. This proves the expansion \eqref{Etilde-expand}.
\end{proof}
Finally we discuss a useful symmetry of $\det M(\varepsilon,\delta,\lambda,\mu=0)$.
\begin{proposition}\label{symmetry-detM}
Assume $V$ satisfies Assumptions (V) and $W$ satisfies Assumptions (W).
Recall $M(\varepsilon,\delta,\lambda,0)$, defined in \eqref{calMdef1}-\eqref{Msts}.
Then, $\det M(\varepsilon,\delta,\lambda,0)$ is real-valued for real $\varepsilon, \delta, \lambda$ and analytic in a small neighborhood of the origin in $\mathbb{C}^3$. Furthermore,
\begin{align}
\det M(\varepsilon,\delta,\lambda,0)&=\det M(\varepsilon,-\delta,\lambda,0)
\label{delta-minusdelta}
\end{align}
and therefore $\det M(\varepsilon,\delta,\lambda,0)$ is real analytic in $\varepsilon$, $\delta^2$ and $\lambda$; we write
\begin{align*}
D(\varepsilon,\delta^2,\lambda)\equiv \det M(\varepsilon,\delta,\lambda,0)\ .
\end{align*}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{symmetry-detM}]
Let $\mathcal{I}[f]({\bf x})=f(-{\bf x})$ and $\mathcal{C}[f]({\bf x})=\overline{f({\bf x})}$. Using that $\varepsilon, \delta$ and $\lambda$ are real, $V({\bf x})$ is even and $W({\bf x})$ is odd, one can check directly that
\begin{align}
\mathcal{C}\circ\mathcal{I}\circ H^{\varepsilon,-\delta,\lambda} &= H^{\varepsilon,\delta,\lambda}\circ
\mathcal{I}\circ\mathcal{C}\ .\label{sym1}
\end{align}
%
Furthermore, note that
\begin{align}
&(\mathcal{C}\circ\mathcal{I}) p_1 = p_1,\quad
(\mathcal{C}\circ\mathcal{I}) p_\tau = p_{\overline\tau},\quad
(\mathcal{C}\circ\mathcal{I}) p_{\overline\tau} = p_\tau.
\label{CIp} \end{align}
It follows that
\begin{align}
\mathcal{C}\circ\mathcal{I}\circ P^\parallel &= P^\parallel\circ \mathcal{C}\circ\mathcal{I} .
\label{sym2}\end{align}
Using the symmetry relations \eqref{sym1}-\eqref{sym2}
to rewrite $M(\varepsilon,-\delta,\lambda,0)$ in terms of $H^{\varepsilon,\delta,\lambda}$,
we find that $M(\varepsilon,-\delta,\lambda,0)$ can be transformed into
$M(\varepsilon,\delta,\lambda,0)$ by interchanging its second and third rows, and then interchanging its second and third columns.
Therefore, $\det M(\varepsilon,\delta,\lambda,0)=\det M(\varepsilon,-\delta,\lambda,0)$ and the proof of Proposition \ref{symmetry-detM} is complete.
\end{proof}
\subsection{Strategy for analysis of $\det M(\varepsilon,\delta,\lambda,\mu)$ in the case $\varepsilon V_{1,1}>0$} \label{sec:strategy}
We first observe that there exist positive constants $d_1$ and $c_4$ such that if $|\varepsilon|+|\delta|<d_1$ then
\begin{equation}
|E_j^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ \ge c_4(1+|j|),\ \ j\ge4 .
\label{outer-lams-bbig}\end{equation}
Indeed, by the discussion following Proposition \ref{Msigtsig}, we have that there exists $d_2>0$ such that for $j\geq4$,
$|\mu_j(\varepsilon,\delta,\lambda)|\equiv |E_j^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ge d_2$;
the lower bound \eqref{outer-lams-bbig} now follows from the Weyl asymptotics for eigenvalues of second order elliptic operators in two space dimensions.
Hence, we restrict our attention to $E_j^{(\varepsilon,\delta)}(\lambda)=E_\star^\varepsilon+\mu_j(\varepsilon,\delta,\lambda),\ j=1,2,3$, which we study by a detailed analysis of $\det M(\varepsilon,\delta,\lambda,\mu=0)$. The analysis consists of two steps, which we now outline.
\noindent {\bf Step 1:}\
Fix $c_\flat>0$ arbitrary. We will prove that there exist constants $C_\flat>0$, $\varepsilon_1>0$ and $c_3>0$, depending on $V$ and $W$, such that
for all $0<|\varepsilon|<\varepsilon_1$ and $0\le|\delta|\le c_\flat\ \varepsilon^2$:
\begin{align}
C_\flat\sqrt{\varepsilon}\le |\lambda|\le\frac12\ \implies \ |E_j^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ & \ge c_3 \varepsilon,\ \ j=1,2,3 \label{outer-lams} .
\end{align}
%
\noindent Furthermore, by \eqref{outer-lams} and \eqref{outer-lams-bbig}, it follows that
\begin{align}
& C_\flat\sqrt{\varepsilon}\le |\lambda|\le\frac12\ \implies \ L^2(\mathbb{R}^2/\Lambda_h)-{\rm spec}( H^{(\varepsilon,\delta,\lambda)})\ \cap \Big[E_\star^\varepsilon-c\varepsilon,E_\star^\varepsilon+c\varepsilon\Big]\ \ \textrm{is empty, for some constant $c>0$}. \label{Step1}
\end{align}
\noindent {\bf Step 2:}\ Let $c_\flat$ and $C_\flat$ be as in Step 1. We will prove that there exists $0<\varepsilon_2\le \varepsilon_1$, such that if $(\lambda,\delta)$ are in the set:
\begin{equation}
|\lambda|\le C_\flat\ \varepsilon^{\frac12}\ \ \textrm{and}\ \ 0\le |\delta|\ \le c_\flat\ \varepsilon^2,\ \ {\rm where} \ 0<|\varepsilon|<\varepsilon_2,\ \ {\rm then}\label{smiley}
\end{equation}
\begin{align}
-\det M(\varepsilon,\delta,\lambda,0)\ =\
\left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right)\times\left(\ 1+o(1)\ \right).
\label{Dlower}
\end{align}
A simple lower bound on the three roots $|\mu_j(\varepsilon,\delta,\lambda)|=|E_j^{(\varepsilon,\delta)}(\lambda)-E^\varepsilon_\star|$ is then obtained as follows.
By \eqref{Dlower}, for some positive constants: $C_1$ and $C_2$, we have
{\small{
\begin{align*}
&C_1(\lambda^2+\varepsilon)\cdot (\lambda^2+\delta^2)\ \le\ \left| \det M(\varepsilon,\delta,\lambda,0) \right| \\
&= \left| \det M(\varepsilon,\delta,\lambda,0) - \det M (\varepsilon,\delta,\lambda,\mu_j(\varepsilon,\delta,\lambda))\right|\ \le C_2\ |0-\mu_j(\varepsilon,\delta,\lambda)| = C_2\ |\mu_j(\varepsilon,\delta,\lambda)|,
\end{align*}
}}
for $j=1,2,3$.
Therefore, with $C_3=C_1/C_2$,
\begin{equation}
\left| E_j^{\varepsilon,\delta}(\lambda)-E_\star^\varepsilon\right| = |\mu_j(\varepsilon,\delta,\lambda)| \geq C_3\ \varepsilon\ (\lambda^2+\delta^2) \ \ge\ C_3\ \varepsilon\ \delta^2,\ \ j=1,2,3.
\label{Ediff-lb}\end{equation}
It follows from \eqref{Ediff-lb} and \eqref{outer-lams-bbig} that
\begin{equation}
\textrm{the $L^2(\mathbb{R}^2/\Lambda_h)-$\ spec( $H^{(\varepsilon,\delta,\lambda)}$ )$\ \cap\ [E_\star^\varepsilon-\eta,E_\star^\varepsilon+\eta]$\ \ is empty}
\nonumber\end{equation}
with $\eta= \frac{1}{2}C_3\ \varepsilon\ \delta^2$, whenever $\varepsilon$, $(\lambda,\delta)$ satisfy the constraints \eqref{smiley}.
Theorem \ref{SGC!} and Theorem \ref{delta-gap} are immediate consequences of \eqref{Step1} (Step 1) and \eqref{Ediff-lb} (Step 2) and the representation:
\begin{equation*} \left. H^{(\varepsilon,\delta)}\ \right|_{L^2(\Sigma_{k_\parallel=2\pi/3})}\ =\
\int^{\oplus}_{|\lambda|\le\frac12}\ \left.H^{(\varepsilon,\delta,\lambda)}\ \right|_{L^2(\mathbb{R}^2/\Lambda_h)}\ d\lambda .
\end{equation*}
Hence, we now turn to their proofs. We first carry out Step 1 by a simple perturbation analysis about the free Hamiltonian, $H^{(0,0,\lambda)}$. We then turn to Step 2, which is much more involved.
{\bf Verification of \eqref{outer-lams} from Step 1.}
Let $C_\flat$ denote a positive constant, which we will specify shortly, and consider the range $C_\flat\sqrt{\varepsilon}\le\lambda\le1/2$. Using the expressions for $\mu^{(0)}_j(\lambda),\ j=1,2,3$, in \eqref{mulam>0} we have that if $\varepsilon\le \varepsilon'(C_\flat)\equiv(4C_\flat^2)^{-1}$, then
$ |E_1^{(0)}(\lambda)-E_\star^0|= q^2|\lambda|\ |1-\lambda|\ge C_\flat q^2 \sqrt{\varepsilon}/2$,\
$|E_2^{(0)}(\lambda)-E_\star^0|= q^2|\lambda|^2\ \ge C_\flat^2 q^2 \varepsilon$,\
and $|E_3^{(0)}(\lambda)-E_\star^0|= q^2|\lambda|\ |1+\lambda|\ge C_\flat q^2 \sqrt{\varepsilon}/2$.
Note that the eigenvalues $(\varepsilon,\delta,\lambda)\mapsto E_j^{(\varepsilon,\delta)}(\lambda)$ are Lipschitz continuous functions;
see Chapter XII of \cites{RS4} or Appendix A of \cites{FW:14}.
Therefore, we have
\begin{equation*}
|E_2^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ \ge C_\flat^2\ q^2\ \varepsilon\ -\ |\varepsilon|\ \|V\|_\infty\ -\ |\delta|\ \|W\|_\infty\ge \frac{1}{2}\ C_\flat^2\ q^2\ \varepsilon ,
\end{equation*}
for some $C_\flat$ positive, finite and sufficiently large. With this choice of $C_\flat$, we also have
\begin{align*}
|E_1^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|&\ \ge C_\flat\ q^2\ \sqrt{\varepsilon}/2\ -\ |\varepsilon|\ \|V\|_\infty\ -\ |\delta|\ \|W\|_\infty , \nonumber\\
|E_3^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|&\ \ge C_\flat\ q^2\ \sqrt{\varepsilon}/2\ -\ |\varepsilon|\ \|V\|_\infty\ -\ |\delta|\ \|W\|_\infty .
\end{align*}
Therefore, there exists $\varepsilon_1>0$, such that for all $0<|\varepsilon|<\varepsilon_1$ we have
\begin{equation*}
|E_j^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ \ge\ C_\flat\ q^2\ \sqrt{\varepsilon}/4\ ,\ \ \ j=1,3\ .
\end{equation*}
This completes the proof of the assertions in Step 1.
\subsection{Expansion of $M(\varepsilon,\delta,\lambda,0)$ and its determinant for $\varepsilon V_{1,1} \neq 0$}
\label{Mexpansion}
The key to verifying Step 2 and proving Theorem \ref{SGC!} is the following:
\begin{proposition}\label{detM-expansion}
Let $C_\flat$ be as chosen in Step 1; see \eqref{Step1}.
Then, there exist constants $\varepsilon_2>0$ and $ c>0$ such that for all $0\le\varepsilon<\varepsilon_2$, if
\begin{equation}
0\le|\lambda|\ \le\ C_\flat\ \varepsilon^{1/2}\ \ {\rm and}\ \ 0\le |\delta| \le c\ \varepsilon^2,\ \ {\rm then}
\label{smiley3}
\end{equation}
\begin{align}
-\det M(\varepsilon,\delta,\lambda,0)\
&=
\pi(\varepsilon,\delta^2,\lambda) \ + \ o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right) ,
\label{detM-exp}
\end{align}
where
\begin{align}
\pi(\varepsilon,\delta^2,\lambda)&\equiv \left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right).
\label{pi-def}\end{align}
Here, $W_{\bf m}$, $ {\bf m}\in\mathbb{Z}^2$, denote Fourier coefficients of $W({\bf x})$ and, by \eqref{smiley3}, the correction term in \eqref{detM-exp} divided by $(\lambda^2+\varepsilon)(\lambda^2+\delta^2)$ tends to zero as $\varepsilon$ tends to zero.
Thus,
\begin{align*}
\varepsilon V_{1,1}>0\ \implies\ -\det M(\varepsilon,\delta,\lambda,0)\
&=
\pi(\varepsilon,\delta^2,\lambda)\ \left(1 + o(1) \right)\ \textrm{in the region \eqref{smiley3}.}
\end{align*}
\end{proposition}
\noindent We now embark on an expansion of $\det M(\varepsilon,\delta,\lambda,0)$ and the proof of Proposition \ref{detM-expansion}. The matrix $M(\varepsilon,\delta,\lambda,0)$, see \eqref{calMdef1}-\eqref{Msts},
may be written as the sum of four matrices:
\begin{equation}
M(\varepsilon,\delta,\lambda,0)=\ \left[\ M^0+M^V+M^W+M^{\mathcal{P}}\ \right](\varepsilon,\delta,\lambda,0),
\label{Msum}
\end{equation}
where the $\sigma,{\tilde{\sigma}}=1,\tau,\overline{\tau}$ entries are given by:
\begin{align}
M^0_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)\ &=\ \left\langle p_\sigma,\left(\ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon \ \right) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} , \label{M0}\\
M^V_{\sigma,{\tilde{\sigma}}}(\varepsilon)\ &=\ \varepsilon \left\langle p_\sigma, V p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} , \nonumber \\
M^W_{\sigma,{\tilde{\sigma}}}(\delta) &=\ \delta \left\langle p_\sigma, W p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} , \nonumber \\
M^{\mathcal{P}}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)\ &=\ \left\langle p_\sigma,(\varepsilon V + \delta W)\
\mathcal{P}(\varepsilon,\delta,\lambda,0)\ p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} . \nonumber
\end{align}
For $\varepsilon$ and $\delta$ small,
%
\begin{align}
&M^0_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)\ =\
M^{0,approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\lambda)\ +\ \mathcal{O}(\varepsilon^2), \label{M0-M0app}
\end{align}
where (inner products over $L^2(\mathbb{R}^2/\Lambda_h)$)
{\small{
\begin{align}
&M^{0,approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\lambda) =\left\langle p_\sigma,\left( -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2- [E_\star^0 + \varepsilon(V_{0,0}-V_{1,1})] \right) p_{\tilde{\sigma}}\right\rangle, \ {\rm and} \label{M0-expanded}\\
& M^{\mathcal{P}}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)
\label{MP-expanded}\\
&\quad =
\left\langle (\varepsilon V + \delta W) p_\sigma , P^\perp\left(-(\nabla+i[{\bf K}+\lambda{\bf k}_2])^2-E_\star^\varepsilon +\varepsilon V +\delta W\right)^{-1}P^\perp (\varepsilon V + \delta W) p_{\tilde{\sigma}}\right\rangle \nonumber \\
&\quad = \left\langle (\varepsilon V + \delta W) p_\sigma , P^\perp\left(-(\nabla+i[{\bf K}+\lambda{\bf k}_2])^2-E_\star^\varepsilon \right)^{-1}P^\perp (\varepsilon V + \delta W) p_{\tilde{\sigma}}\right\rangle \nonumber \\
&\quad\qquad +\mathcal{O}(\delta^3 + \delta^2\varepsilon + \delta \varepsilon^2 + \varepsilon^3) = \mathcal{O}(\varepsilon^2 + \varepsilon\delta + \delta^2) .
\nonumber
\end{align}}}
\noindent We next explain that to calculate the determinant of $ M(\varepsilon,\delta,\lambda,0)$ to the desired order in the region \eqref{smiley}, it suffices to calculate the determinant of the approximate matrix:
\begin{equation}
M^{approx}(\varepsilon,\delta,\lambda)\ \equiv \ M^{0,approx}(\varepsilon,\lambda)+M^V(\varepsilon)+M^W(\delta) . \label{detMapprox}
\end{equation}
That is, we show that the omitted terms in $M(\varepsilon,\delta,\lambda,0)-M^{approx}(\varepsilon,\delta,\lambda)$ contribute negligibly to the determinant of $ M(\varepsilon,\delta,\lambda,0)$, when compared with the polynomial, $\pi(\varepsilon,\delta^2,\lambda)$, in \eqref{pi-def}, provided $\lambda$ and $\delta$ are in the region:
\begin{equation}
|\lambda|\ \le C_\flat\ \varepsilon^{\frac12}\ \ \textrm{and}\ \ |\delta|\le c_\flat\ \varepsilon^2 ,
\label{smiley2}
\end{equation}
where $C_\flat$ and $c_\flat$ are appropriately chosen constants.
Recall that $D(\varepsilon,\delta^2=0,\lambda=0)= \det M (\varepsilon,0,0,0) =0$ for all $\varepsilon$, since $\mu=0$ corresponds to $E=E^\varepsilon_\star$, which is an eigenvalue of $H^{(\varepsilon,\delta=0,\lambda=0)}=-\Delta+\varepsilon V$.
Thus, $D(\varepsilon,\delta^2,\lambda)=\det M(\varepsilon,\delta,\lambda,0)$ is a convergent power series in $\varepsilon$, $\delta^2$ and $\lambda$ with no ``pure $\varepsilon$'' terms. On the other hand,
the entries of the matrix $M(\varepsilon,\delta,\lambda,0)$ are convergent power series in $\varepsilon, \delta$ and $\lambda$.
\begin{proposition}\label{ld2}
For $(\lambda,\delta)$ in the region \eqref{smiley2} we have
\[ |\lambda|\ \delta^2 = o \left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right) .\]
Therefore we may drop the $\mathcal{O}(\lambda\delta^2)$ terms, a further simplification.
%
\end{proposition}
\begin{proof}[Proof of Proposition \ref{ld2}]
Consider separately the two regimes: (a) $|\lambda|\le \varepsilon^{1.1}$ and (b) $|\lambda|\ge \varepsilon^{1.1}$. For $|\lambda|\le \varepsilon^{1.1}$, we have $|\lambda|\delta^2\le \varepsilon^{1.1}\delta^2=
\varepsilon^{0.1}\varepsilon\delta^2\le \varepsilon^{0.1}(\lambda^2+\varepsilon)\cdot (\lambda^2+\delta^2)$. And for $|\lambda|\ge \varepsilon^{1.1}$, note that $(\lambda^2+\varepsilon)\cdot (\lambda^2+\delta^2)\ge\varepsilon\lambda^2\ge\varepsilon\cdot\varepsilon^{2.2}=\varepsilon^{3.2}$. On the other hand,
$|\lambda|\delta^2\le \frac12\delta^2\lesssim\varepsilon^4\le \varepsilon^{0.8}\cdot (\lambda^2+\varepsilon)\cdot (\lambda^2+\delta^2)$. This completes the proof of Proposition \ref{ld2}.
\end{proof}
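The two-regime argument in the proof can be illustrated numerically; the check below (not part of the proof; $C$ and $c$ are arbitrary illustrative constants) computes the worst-case ratio on the region and confirms that it shrinks with $\varepsilon$:

```python
import numpy as np

# Illustration of Proposition ld2: on |lambda| <= C*sqrt(eps),
# |delta| <= c*eps^2, the worst-case ratio
# |lambda|*delta^2 / ((lambda^2+eps)*(lambda^2+delta^2)) tends to 0.
C, c = 2.0, 1.0

def worst_ratio(eps):
    # delta^2/(lambda^2+delta^2) is increasing in delta, so the worst
    # delta is the boundary value c*eps^2; a log grid in lambda resolves
    # the near-optimal scale lambda ~ eps^2.
    lam = np.geomspace(1e-6 * eps**2, C * np.sqrt(eps), 2000)
    delta = c * eps**2
    return np.max(lam * delta**2 / ((lam**2 + eps) * (lam**2 + delta**2)))

r2, r4 = worst_ratio(1e-2), worst_ratio(1e-4)
assert r4 < r2 < 1e-2   # ratios shrink as eps -> 0
```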
Let us now attach ``weight'' $1$ to the variable $\varepsilon$ and ``weight'' $1/2$ to the variables $\delta$ and $\lambda$. A monomial of the form $\varepsilon^a \lambda^b \delta^c$ carries the weight $a+b/2+c/2$, where $a,b,c\in\mathbb{N}$. In Proposition \ref{neglect} we show that all terms in the power series of $\det M(\varepsilon,\delta,\lambda,0)$ which are of weight strictly larger than two introduce negligible corrections
%
%
to $\det M^{approx}(\varepsilon,\delta,\lambda)$ for $\lambda$ and $\delta$ in the region \eqref{smiley}.
Recalling that there are no pure $\varepsilon$ terms, we see that a monomial in the power series of $D(\varepsilon,\delta^2,\lambda)$ of weight larger than $2$ must have one of the following two forms ($a,b,c\in\mathbb{N}$):
\begin{align*}
& (I)\quad \lambda\times \varepsilon^a\lambda^{b} (\delta^{2})^c,\quad {\rm with}\quad a+b/2+c>3/2\implies 2a+b+2c\ge4 \\% \label{form1}
& (II)\quad \delta^2\times \varepsilon^a\lambda^b(\delta^2)^c,\quad {\rm with}\quad a+b/2+c>1\implies 2a+b+2c\ge3
\end{align*}
\begin{proposition}\label{neglect} Terms of form (I) and form (II) that may appear in $D(\varepsilon, \delta^2, \lambda)$ are $o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ as $\varepsilon\to0$,
for $(\lambda,\delta)$ in the region \eqref{smiley2}:
$
|\lambda|\le C_\flat\ \varepsilon^{\frac12}\ \ \textrm{and}\ \ |\delta|\lesssim\varepsilon^2.
$
%
We may therefore neglect all terms in the power series of $D(\varepsilon,\delta^2,\lambda)$ which are of weight strictly larger than $2$ for $(\lambda,\delta)$ in the region \eqref{smiley2}.
\end{proposition}
\noindent We prove Proposition \ref{neglect} by estimating all terms of the form (I) or (II), and we begin with the following lemma, which is a consequence of part 2 of Proposition \ref{roots-delta0}:
\begin{lemma}\label{det-delta0}
Let $\mu_-(\varepsilon,\lambda),\ \mu_+(\varepsilon,\lambda)$ and $ \widetilde{\mu}(\varepsilon,\lambda)$ denote the three roots of $\det M(\varepsilon,\delta=0,\lambda,\mu)$, defined and analytic in $(\varepsilon,\lambda)$ in a $\mathbb{C}^2$ neighborhood of the origin and displayed in \eqref{mu+expand}-\eqref{mutilde-expand}. Then,
\begin{align*}
\det M(\varepsilon,\delta=0,\lambda,\mu)= (\mu-\mu_-(\varepsilon,\lambda)) \times\ (\mu-\mu_+(\varepsilon,\lambda))\times (\mu-\widetilde{\mu}(\varepsilon,\lambda))\times\Omega(\varepsilon,\lambda,\mu),
\end{align*}
where $\Omega(\varepsilon,\lambda,\mu)$ is bounded. In particular, for all $\varepsilon$ such that $0<|\varepsilon|<\varepsilon_0$ there exists a constant $C_\varepsilon$ such that
\begin{align}\label{bound-det}
\left| \det M(\varepsilon,\delta=0,\lambda,0) \right| &\le C_\varepsilon\ |\lambda|^2 .
\end{align}
\end{lemma}
\noindent {\it Proof:} For fixed $\varepsilon$ and $\lambda$, the mapping $\mu\mapsto\det M(\varepsilon,\delta=0,\lambda,\mu)$ is analytic for $|\mu|<\mu_0$ with zeros at $\mu_\pm(\varepsilon,\lambda), \widetilde{\mu}(\varepsilon,\lambda)$; see Proposition \ref{roots-delta0}. Fix $\varepsilon'$ and $\lambda'$ small such that if $|\varepsilon|<\varepsilon'$ and $|\lambda|<\lambda'$,
the roots all satisfy $|\mu_+(\varepsilon,\lambda)|$, $ |\mu_-(\varepsilon,\lambda)|$,
$|\widetilde{\mu}(\varepsilon,\lambda)|<\mu_0/2$. We claim that for such $\varepsilon$ and $\lambda$,
\[ \Omega(\varepsilon,\lambda,\mu)\equiv
\frac{ \det M(\varepsilon,\delta=0,\lambda,\mu)}
{(\mu-\mu_-(\varepsilon,\lambda)) \times\ (\mu-\mu_+(\varepsilon,\lambda))\times (\mu-\widetilde{\mu}(\varepsilon,\lambda))}\]
is uniformly bounded for all $|\mu|\le\mu_0$, $|\varepsilon|\le\varepsilon'$ and $|\lambda|\le\lambda'$. Indeed, since the roots are bounded in magnitude by $\mu_0/2$, we have
$\max_{|\mu|=\mu_0} \left| \Omega(\varepsilon,\lambda,\mu) \right|\le (2/\mu_0)^3\ \max_{|\mu|=\mu_0}\left|\det M(\varepsilon,\delta=0,\lambda,\mu)\right|$. Applying the maximum principle we have
\[ \max_{|\mu|\le\mu_0}\left| \Omega(\varepsilon,\lambda,\mu) \right|\le (2/\mu_0)^3\ \max_{|\mu|=\mu_0}\left|\det M(\varepsilon,\delta=0,\lambda,\mu)\right|\le (2/\mu_0)^3 C(\mu_0,\varepsilon',\lambda'),\] where $C(\mu_0,\varepsilon',\lambda')$ is a constant. The bound \eqref{bound-det} now follows
from the expansions of the roots.
\noindent Proof of Proposition \ref{neglect}:\\
{\bf (I) Terms of the form $\lambda\times \varepsilon^a\lambda^{b} (\delta^{2})^c, \ {\rm with}\ 2a+b+2c\ge4,\ a,b,c\in\mathbb{N}$:}
\noindent (i) Suppose first that $c=0$. Then, we consider $\lambda\times\varepsilon^a\lambda^b$ with $2a+b\ge4$. By Lemma \ref{det-delta0} we must have $b\ge1$.
Thus, $\lambda\times\varepsilon^a\lambda^b=\lambda^2\varepsilon^a \lambda^{b-1}$. If $a\ge2$, then
$\lambda\times\varepsilon^a\lambda^b=\lambda^2\varepsilon^2 \varepsilon^{a-2} \lambda^{b-1}\lesssim
(\lambda^2+\delta^2) \times (\lambda^2+\varepsilon)^2= o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}. Otherwise, $a=0$ or $a=1$. If $a=0$, then $b\ge4$ and
$\lambda\times \varepsilon^a\lambda^b=\lambda\cdot \lambda^2\times\lambda^2\cdot\lambda^{b-4}
\lesssim\lambda\cdot (\lambda^2+\varepsilon)\times(\lambda^2+\delta^2)=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}. Now suppose $a=1$. Then, $b\ge2$ and we have
$\lambda\times\varepsilon^a\lambda^b=\lambda\cdot \lambda^2 \times\varepsilon \cdot\lambda^{b-2}
\lesssim \lambda\cdot (\lambda^2+\delta^2)\times(\lambda^2+\varepsilon) =
o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}.
\noindent (ii) Suppose now that $c\ge1$. If $b=0$, then
$\lambda\times \varepsilon^a\lambda^{b} (\delta^{2})^c= \lambda\delta^2 \varepsilon^a(\delta^2)^{c-1}$
with $2a+2c\ge4$, which is $o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}, by Proposition \ref{ld2}. Finally, if $b\ge1$, then $\lambda\times\varepsilon^a\lambda^b(\delta^2)^c= \lambda\delta^2\cdot \varepsilon^a\lambda^{b}(\delta^2)^{c-1}=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}.
{\bf (II) Terms of the form\ $\delta^2\times \varepsilon^a\lambda^{b} (\delta^{2})^c, \quad {\rm with}\
2a+b+2c\ge3,\ a,b,c\in\mathbb{N}$:}
\noindent (i) Suppose first that $a=0$. Then, $\delta^2\times \varepsilon^a\lambda^{b} (\delta^{2})^c=\delta^2\times \lambda^{b} (\delta^{2})^c,\ b+2c\ge3$. If $b\ge1$, then
we rewrite this as $\delta^2\lambda \times \lambda^{b-1} (\delta^{2})^c=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}, by Proposition \ref{ld2}. And if $b=0$, then
$\delta^2\times \varepsilon^a\lambda^{b} (\delta^{2})^c=(\delta^2)^{c+1}$, with $c\ge2$,
which is $\lesssim \delta^2\delta^2 (\delta^2)^{c-1}\lesssim\delta^2\varepsilon\cdot\varepsilon^3(\delta^2)^{c-1}
\lesssim(\lambda^2+\delta^2)(\lambda^2+\varepsilon)\cdot\varepsilon^3(\delta^2)^{c-1}=
o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}.
\noindent (ii) If now $a\ge1$, then $\delta^2\times \varepsilon^a\lambda^{b} (\delta^{2})^c=\delta^2\varepsilon\cdot \varepsilon^{a-1}\lambda^b(\delta^2)^c\le(\lambda^2+\delta^2)\cdot(\lambda^2+\varepsilon)\cdot \varepsilon^{a-1}\lambda^b(\delta^2)^c=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}. This completes the proof of Proposition \ref{neglect}.
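The integer bookkeeping above, and the $o(\cdot)$ claims for the small-exponent monomials of forms (I) and (II), admit a direct numerical illustration (not part of the proof; $C=2$ and $\delta=\varepsilon^2$ are illustrative choices):

```python
import numpy as np
from itertools import product

# Check the weight-to-exponent implications used to classify forms (I), (II).
for a, b, c in product(range(6), repeat=3):
    assert (a + b / 2 + c > 3 / 2) == (2 * a + b + 2 * c >= 4)   # form (I)
    assert (a + b / 2 + c > 1) == (2 * a + b + 2 * c >= 3)       # form (II)

C = 2.0

def worst(prefactor, a, b, c, eps):
    # worst-case of prefactor * eps^a lam^b (dlt^2)^c over the region
    # |lam| <= C*sqrt(eps), with dlt at its boundary value eps^2
    lam = np.geomspace(1e-6 * eps**2, C * np.sqrt(eps), 1500)
    dlt = eps**2
    mono = prefactor(lam, dlt) * eps**a * lam**b * dlt**(2 * c)
    return np.max(mono / ((lam**2 + eps) * (lam**2 + dlt**2)))

for a, b, c in product(range(4), repeat=3):
    if 2 * a + b + 2 * c >= 4 and not (b == 0 and c == 0):
        # form (I): lambda * eps^a lam^b (dlt^2)^c; pure lambda*eps^a terms
        # (b = c = 0) are excluded by Lemma det-delta0, which forces b >= 1.
        assert worst(lambda l, d: l, a, b, c, 1e-4) \
            < worst(lambda l, d: l, a, b, c, 1e-2) < 1.0
    if 2 * a + b + 2 * c >= 3:
        # form (II): dlt^2 * eps^a lam^b (dlt^2)^c
        assert worst(lambda l, d: d**2, a, b, c, 1e-4) \
            < worst(lambda l, d: d**2, a, b, c, 1e-2) < 1.0
```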
%
%
Now each entry in the $3\times3$ matrix $M(\varepsilon,\delta,\lambda,0)$ is a sum of terms of weight $\ge1/2$; this is a consequence of the expansion \eqref{Msum}, \eqref{M0}, \eqref{M0-M0app}, \eqref{p-perp2} and the explicit expansion of $M^{approx}$ displayed in \eqref{M0app-expanded}. If we change any one entry by a term of weight $\ge3/2$, then the effect on the $3\times3$ determinant $D(\varepsilon,\delta^2,\lambda)$ will be a sum of terms of weight $\ge 3/2+1/2+1/2>2$. By Propositions \ref{ld2} and \ref{neglect}, such terms are $o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ in the region \eqref{smiley2}. Therefore, we may compute each entry of $M(\varepsilon,\delta,\lambda,0)$, retaining only terms of weight strictly smaller than $3/2$ and discarding the rest. The resulting determinant will differ from $D(\varepsilon,\delta^2,\lambda)$ by
terms which are $o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ in the region \eqref{smiley2}.
We next study the power series of the $3\times3$ matrix $M(\varepsilon,\delta,\lambda,0)$, keeping in mind that the relevant monomials are those of weight $\ge1/2$ but strictly less than $3/2$. The complete list of such monomials is: $\varepsilon, \delta, \lambda, \lambda^2, \delta^2$ and $ \lambda\delta$.
Before proceeding further we show that, in fact, a monomial of type $\delta^2$ can be neglected. Indeed, the weight $\le2$ contributions of such a monomial to $D(\varepsilon,\delta^2,\lambda)=\det M(\varepsilon,\delta,\lambda,0)$ will be a sum of monomials of the type: (i) $\delta^2\times \delta\cdot\delta$, (ii) $ \delta^2\times \lambda\cdot\lambda$ and
(iii) $\delta^2\times \lambda\cdot\delta$. Terms of type (ii) and (iii) are clearly
$o(\ \lambda\delta^2 )=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ as $\varepsilon\to0$ for $(\lambda,\delta)$ in the region \eqref{smiley2}, by Proposition \ref{ld2}. For the type (i) term we have, for $(\lambda,\delta)$ in the region \eqref{smiley2},
$\delta^2\times \delta\cdot\delta\lesssim \delta^2\ \varepsilon\ \varepsilon\delta\lesssim (\lambda^2+\delta^2)\cdot (\lambda^2+\varepsilon)\ \varepsilon\delta=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ as $\varepsilon\to0$. Hence, we may strike $\delta^2$ from our list. In particular, we may neglect the contribution from the matrix $M^{\mathcal{P}}$; see \eqref{MP-expanded}.
\noindent{\bf Relevant monomials in the expansion of
$M(\varepsilon,\delta,\lambda,0)$ :}\ We shall call the monomials: $\varepsilon, \delta, \lambda, \lambda^2$ and $ \lambda\delta$ \underline{relevant}. All others are called \underline{irrelevant}.
\noindent Stepping back, we have shown above that
$M=M^{0}+M^V+M^W+M^{\mathcal{P}}$, see \eqref{Msum},
where $M^{0}=M^{0,approx}+\mathcal{O}(\varepsilon^2)$, see \eqref{M0-M0app}, and
$M^{\mathcal{P}}=\mathcal{O}(\varepsilon^2+\varepsilon\delta+\delta^2)$, see \eqref{MP-expanded}. We have further shown that the relevant monomial contributions to the calculation of $\det M(\varepsilon,\delta,\lambda,0)$ are all contained in
$M^{approx}(\varepsilon,\delta,\lambda,0)\equiv M^{0,approx}(\varepsilon,\lambda)+M^V(\varepsilon)+M^W(\delta)$. We consider each of these matrices individually, and explicitly extract the relevant terms in each; see Proposition
\ref{Det-approx} below.
%
\noindent{\bf Expansion of $M^{0, approx}$:} The entries of $M^{0,approx}$ are
\begin{align}
&M^{0,approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\lambda) \nonumber \\
&\quad \equiv\ \left\langle p_\sigma, \left(\ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2- [E_\star^0 + \varepsilon(V_{0,0}-V_{1,1})]\ \right) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} \nonumber\\
&\quad = \left\langle p_\sigma,-\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2 p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}
-[E_\star^0 + \varepsilon(V_{0,0}-V_{1,1})]\ \delta_{\sigma,{\tilde{\sigma}}}\ ,
\label{M0approx1}\end{align}
where we have used that $\left\langle p_\sigma,p_{\tilde{\sigma}}\right\rangle=\delta_{\sigma,{\tilde{\sigma}}}$.
The first term in \eqref{M0approx1} may be written, using \eqref{p_sigma}-\eqref{orthon}, $E_\star^0=|{\bf K}|^2$ and $|{\bf k}_2|^2=q^2$ as:
\begin{align}
&\left\langle p_\sigma,-\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2 p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} \label{psig-nablaK-tpsig}\\
&\quad = \left\langle p_\sigma,-(\nabla+i{\bf K})^2 p_{\tilde{\sigma}}\right\rangle
\ -\ 2i\lambda{\bf k}_2\cdot \left\langle p_\sigma,(\nabla+i{\bf K}) p_{\tilde{\sigma}}\right\rangle\
+\ \left\langle p_\sigma,\lambda^2q^2 p_{\tilde{\sigma}}\right\rangle\nonumber\\
&\quad = \left(E_\star^0\ +\ \lambda^2q^2 \right)\delta_{\sigma,{\tilde{\sigma}}} + \lambda\ J_{\sigma,{\tilde{\sigma}}} .\nonumber
\end{align}
Consider now the matrix
\begin{align}
J_{\sigma,{\tilde{\sigma}}} &= -2i{\bf k}_2\cdot\left\langle \Phi_\sigma,\nabla \Phi_{\tilde{\sigma}}\right\rangle_{L^2_{\bf K}} =-2i{\bf k}_2\cdot\int_\Omega\ \overline{\Phi_\sigma}\ \nabla\Phi_{\tilde{\sigma}} d{\bf x}\nonumber\\
&=2\ {\bf k}_2\cdot \frac13\left(I+\sigma\overline{{\tilde{\sigma}}}R+\overline{\sigma}{\tilde{\sigma}} R^2\right){\bf K} . \label{J-matrix}
\end{align}
We pause to collect some properties that will enable the evaluation of $J_{\sigma,{\tilde{\sigma}}}$;
see also \cites{FW:12}.
Recall that $R$ has eigenpairs $(\tau,\ \zeta)$
and $(\overline{\tau},\ \overline{\zeta})$, where $\zeta=\frac{1}{\sqrt2}(1,i)^T$. Then, $\overline{\tau}R$ has eigenpairs $(1,\ \zeta)$ and $(\tau,\ \overline{\zeta})$. Furthermore,
$\tau R$ has eigenpairs $(1,\ \overline{\zeta})$ and $(\overline{\tau},\ \zeta)$, and
\begin{align*}
\frac13\left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]\zeta=\zeta,\ \
\ \left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]\overline{\zeta}=0, \nonumber\\
\frac13\left[\ I + \tau R+(\tau R)^2 \right]\overline{\zeta}=\overline{\zeta},\ \
\left[\ I +\tau R+(\tau R)^2 \right]\zeta=0 .
\end{align*}
Hence, $\frac13\left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]$ and $ \frac13\left[\ I + \tau R+(\tau R)^2 \right]$ are, respectively, projections onto $\rm{span}\{\ \zeta\ \}$ and $\rm{span}\{\ \overline{\zeta}\ \}$.
For any ${\bf w}\in\mathbb{C}^2$, we have ${\bf w}=\left\langle\zeta,{\bf w}\right\rangle_{\mathbb{C}^2}\zeta +
\left\langle\overline{\zeta},{\bf w}\right\rangle_{\mathbb{C}^2}\overline{\zeta}$, where
$\left\langle {\bf x},{\bf y}\right\rangle_{\mathbb{C}^2}=\overline{{\bf x}}\cdot{\bf y}$. Therefore
\begin{equation}
\frac13\left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]{\bf w} = \left\langle\zeta,{\bf w}\right\rangle\zeta,\quad\
\frac13\left[\ I + \tau R+(\tau R)^2 \right]{\bf w} = \left\langle\overline{\zeta},{\bf w}\right\rangle\overline{\zeta} .\label{Rprojections}
\end{equation}
Also, $(I-R)(I + R+R^2)=I-R^3=0$; since $1$ is not an eigenvalue of $R$, the factor $I-R$ is invertible, and therefore
\begin{equation}
I + R + R^2\ =\ 0 .
\label{Rsum0}\end{equation}
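The identities \eqref{Rprojections}-\eqref{Rsum0} are easily confirmed numerically. In the sketch below (an illustration, not part of the argument) we assume $R$ is the $2\times2$ rotation by $2\pi/3$ oriented so that $R\zeta=\tau\zeta$, consistent with the stated eigenpairs:

```python
import numpy as np

# R: rotation by 2*pi/3, oriented (clockwise) so that R*zeta = tau*zeta.
th = 2 * np.pi / 3
tau = np.exp(2j * np.pi / 3)
R = np.array([[np.cos(th), np.sin(th)],
              [-np.sin(th), np.cos(th)]])
zeta = np.array([1, 1j]) / np.sqrt(2)

assert np.allclose(R @ zeta, tau * zeta)                      # eigenpair (tau, zeta)
assert np.allclose(R @ zeta.conj(), tau.conjugate() * zeta.conj())
assert np.allclose(np.eye(2) + R + R @ R, 0)                  # I + R + R^2 = 0

# (1/3)(I + conj(tau) R + (conj(tau) R)^2) projects onto span{zeta}:
A = tau.conjugate() * R
P = (np.eye(2) + A + A @ A) / 3
assert np.allclose(P, np.outer(zeta, zeta.conj()))            # P w = <zeta, w> zeta
```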
We next calculate $J_{\sigma,{\tilde{\sigma}}} $ using \eqref{Rprojections}-\eqref{Rsum0}.
Note that $J$ is Hermitian, and by
\eqref{Rsum0} its diagonal elements $J_{\sigma,\sigma} $ all vanish:
$J_{\sigma,{\tilde{\sigma}}} = \overline{J_{{\tilde{\sigma}},\sigma}},\quad J_{\sigma,\sigma}=0,\quad \sigma=1,\tau,\overline{\tau}$.
It suffices therefore to compute the three entries $J_{1,\tau},\ J_{1,\overline{\tau}}$ and
$J_{\tau,\overline{\tau}}$:
\begin{align*}
J_{1,\tau}=2\ \frac13\left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]{\bf K}\cdot{\bf k}_2=
2\ (\overline{\zeta}\cdot{\bf K})\ (\zeta\cdot{\bf k}_2)\equiv \ \alpha , \\
J_{1,\overline\tau}=2\ \frac13\left[\ I +\tau R+(\tau R)^2 \right]{\bf K}\cdot{\bf k}_2=
2\ (\zeta\cdot{\bf K})\ (\overline{\zeta}\cdot{\bf k}_2)=\ \overline{\alpha} , \\
J_{\tau,\overline\tau}=2\ \frac13\left[\ I +\overline\tau R+(\overline\tau R)^2 \right]{\bf K}\cdot{\bf k}_2= 2\ (\overline\zeta\cdot{\bf K})\ (\zeta\cdot{\bf k}_2)=\ \alpha .
\end{align*}
Thus,
\begin{equation}
J\ =\ \left(\ J_{\sigma,{\tilde{\sigma}}}\ \right)\ =\
\begin{pmatrix}
0&\alpha&\overline{\alpha}\\
\overline{\alpha}&0&\alpha\\
\alpha&\overline{\alpha}&0
\end{pmatrix},\ \ \ {\rm where}\ \ \alpha=2\ (\overline{\zeta}\cdot{\bf K})\ (\zeta\cdot{\bf k}_2) .
\label{J-matrix-computed}
\end{equation}
It follows that
\begin{equation}
M^{0,approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\lambda)
=\ \left(\ - \varepsilon(V_{0,0}-V_{1,1}) + \lambda^2q^2 \ \right)\delta_{\sigma,{\tilde{\sigma}}}\
+\ \lambda\ J_{\sigma,{\tilde{\sigma}}} . \label{M0app-expanded}
\end{equation}
\noindent{\bf Expansion of $ M^V(\varepsilon)$:}
$M_{\sigma,{\tilde{\sigma}}} ^V(\varepsilon)= \varepsilon\ \left\langle p_\sigma, V\ p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\ =\ \varepsilon\ \mathcal{V}_{\sigma,{\tilde{\sigma}}},$
where
\begin{align}
\mathcal{V}_{\sigma,{\tilde{\sigma}}}& = \left\langle p_\sigma, V p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\nonumber\\
&= \frac{1}{3} \Big[ (1+\sigma \overline{{\tilde{\sigma}}}+\overline\sigma {\tilde{\sigma}}) V_{0,0}\ +\ \sigma V_{0,1}
\ +\ \overline{\tilde{\sigma}} V_{0,-1}\ +\ {\tilde{\sigma}} V_{1,0}\ \nonumber\\
&\qquad\qquad + \ \overline{\sigma} V_{-1,0} \ +\ \sigma{\tilde{\sigma}} V_{1,1}\ +\ \overline{\sigma {\tilde{\sigma}}} V_{-1,-1} \Big] \nonumber .
\end{align}
Since $V$ is real-valued and even, it follows that $V_{-{\bf m}}=V_{{\bf m}}$. Furthermore, $V$ is also $R$-invariant and therefore $V_{0,1}=V_{1,0}=V_{1,1}$. Hence
%
\begin{align*}
\mathcal{V}_{\sigma,{\tilde{\sigma}}}&= \frac13
(1+\sigma\ \overline{{\tilde{\sigma}}}+\overline\sigma\ {\tilde{\sigma}})\ V_{0,0}\ +\ \frac13
(\sigma+\overline\sigma + {\tilde{\sigma}}+ \overline{\tilde{\sigma}} + \sigma{\tilde{\sigma}}+ \overline{\sigma\ {\tilde{\sigma}}})\ V_{1,1} .
\end{align*}
$\mathcal{V}$ is clearly symmetric and using that $1+\tau+\tau^2=1+\tau+\overline\tau=0$, we obtain $M^V(\varepsilon)=\varepsilon\ \mathcal{V}$, where
\begin{equation*}
\mathcal{V}\ =\
\begin{pmatrix}
V_{0,0}\ +\ 2\ V_{1,1}&0&0\\
0&V_{0,0}-V_{1,1}&0\\
0&0&V_{0,0}-V_{1,1}
\end{pmatrix} .
\end{equation*}
\noindent{\bf Expansion of $M^W(\delta)$:}
$M^W_{\sigma,{\tilde{\sigma}}}(\delta) = \delta\ \left\langle p_\sigma, W\ p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\ =\ \delta\ \mathcal{W}_{\sigma,{\tilde{\sigma}}},$
where
\begin{equation}
\label{MW-step1}
\mathcal{W}_{\sigma,{\tilde{\sigma}}} =
\frac13\Big[\sigma W_{0,1}\ +\ \overline{\tilde{\sigma}} W_{0,-1}\ +\ \overline\sigma W_{-1,0}
+{\tilde{\sigma}} W_{1,0}\ +\ \sigma {\tilde{\sigma}} W_{1,1}\ +\ \overline{\sigma {\tilde{\sigma}}} W_{-1,-1} \Big] .
\end{equation}
Since $W$ is real and odd, we have that
$W_{-{\bf m}}= -W_{{\bf m}}$ and $W_{\bf m}$ is purely imaginary.
Therefore,
\begin{equation*}
\mathcal{W}_{\sigma,{\tilde{\sigma}}} = \ \frac{1}3\
\ \Big[\ (\sigma-\overline{\tilde{\sigma}})\ {W}_{0,1}\ +\ ({\tilde{\sigma}}-\overline\sigma)\ {W}_{1,0}\
+\ (\sigma\ {\tilde{\sigma}}\ -\ \overline{\sigma\ {\tilde{\sigma}}})\ {W}_{1,1}\ \Big] \ ,\ \sigma,{\tilde{\sigma}}=1,\tau,\overline{\tau}.
\end{equation*}
It follows that $M^W(\delta)=\delta\ \mathcal{W}$, where
\begin{equation*}
\mathcal{W} = w_{01} \begin{pmatrix} 0& \tau & -\overline{\tau}\\ \overline{\tau}&-1&0\\ -\tau&0&1\end{pmatrix} +
w_{10} \begin{pmatrix} 0& \overline{\tau}& -\tau\\ \tau &-1&0\\ -\overline{\tau}&0&1\end{pmatrix} +
w_{11} \begin{pmatrix} 0&-1&1\\-1&1&0\\ 1&0&-1\end{pmatrix} ,
\end{equation*}
and $w_{ij}\equiv -i\ {W}_{i,j} /\sqrt3 \in \mathbb{R}$.
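The decomposition of $\mathcal{W}$ into these three matrices can likewise be checked numerically; in the sketch below the purely imaginary Fourier coefficients are arbitrary illustrative values:

```python
import numpy as np

# Check that W_cal, built from the reduced formula, equals
# w01*M1 + w10*M2 + w11*M3 with w_ij = -i*W_ij/sqrt(3).
tau = np.exp(2j * np.pi / 3)
sigmas = [1, tau, np.conj(tau)]
W01, W10, W11 = 0.4j, -1.1j, 0.25j   # purely imaginary, illustrative
w01, w10, w11 = (-1j * W / np.sqrt(3) for W in (W01, W10, W11))

Wcal = np.array([[((s - np.conj(t)) * W01 + (t - np.conj(s)) * W10
                   + (s * t - np.conj(s * t)) * W11) / 3
                  for t in sigmas] for s in sigmas])

M1 = np.array([[0, tau, -np.conj(tau)], [np.conj(tau), -1, 0], [-tau, 0, 1]])
M2 = np.array([[0, np.conj(tau), -tau], [tau, -1, 0], [-np.conj(tau), 0, 1]])
M3 = np.array([[0, -1, 1], [-1, 1, 0], [1, 0, -1]])
assert np.allclose(Wcal, w01 * M1 + w10 * M2 + w11 * M3)
```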
Now assembling all relevant terms (weights $\ge1/2$ and less than $3/2$) we obtain
\begin{proposition}\label{Det-approx} For $\sigma,{\tilde{\sigma}} = 1,\tau,\overline{\tau} $,
\begin{align*}
M_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0) &\approx\ M^{approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)\nonumber\\
&\approx\ \left(\ - \varepsilon(V_{0,0}-V_{1,1}) + \lambda^2q^2 \ \right)\delta_{\sigma,{\tilde{\sigma}}}\
+\ \lambda\ J_{\sigma,{\tilde{\sigma}}}\ + \varepsilon\ \mathcal{V}_{\sigma,{\tilde{\sigma}}}\ +\
\delta\ \mathcal{W}_{\sigma,{\tilde{\sigma}}}.
\end{align*}
Here, $A_{\sigma,{\tilde{\sigma}}}\approx B_{\sigma,{\tilde{\sigma}}}$ means that their difference is a matrix with entries having weight $\ge 3/2$. The contribution of such terms to the determinant therefore consists of terms of weight strictly larger than $2$, which, for $(\lambda,\delta)$ in the region \eqref{smiley2}, can be neglected by Proposition \ref{neglect}.
\end{proposition}
\noindent So the calculation of $\det M(\varepsilon,\delta,\lambda,0)$ boils down to the calculation of
$\det M^{approx}(\varepsilon,\delta,\lambda,0)$.
\noindent {\bf Calculation of $\det M^{approx}(\varepsilon,\delta,\lambda,0)$:}
Assembling the above computations, we have that
{\small
\begin{equation*}
M^{approx} =
\begin{pmatrix}
\lambda^2q^2 + 3 \varepsilon V_{1,1} &
\alpha \lambda - \delta\widetilde{w} &
\overline{\alpha} \lambda + \delta\overline{\widetilde{w}} \\
\overline{\alpha} \lambda - \delta\overline{\widetilde{w}} &
\lambda^2q^2 - \delta (w_{01} +w_{10} - w_{11}) &
\alpha \lambda \\
\alpha \lambda + \delta\widetilde{w} &
\overline{\alpha} \lambda &
\lambda^2q^2 + \delta (w_{01} +w_{10} - w_{11}) \\
\end{pmatrix} ,
\end{equation*}}
where
$\widetilde{w} = w_{11} -w_{01}\tau -w_{10}\overline{\tau}$ and $\alpha=2(\overline\zeta\cdot{\bf K})\ (\zeta\cdot{\bf k}_2)$. Note that:
\begin{equation}
\label{alpha-quantities}
\alpha=\frac{q^2}{\sqrt3}\ i\tau,\ \ \Re(\alpha)=-\frac{q^2}2,\ \ \Re(\alpha^3)=0 .
\end{equation}
Calculating the determinant of $M^{approx}$, and using \eqref{alpha-quantities} and that
$w_{ij}= -i W_{i,j}/\sqrt3$ yields:
\begin{align}
&\det M^{approx} (\varepsilon, \delta, \lambda, 0) \ =\
-\left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + 3\delta^2 (w_{01}^2+w_{10}^2+w_{11}^2) \right) \nonumber \\
&\qquad +6 \varepsilon V_{1,1} \delta^2 (w_{11}w_{01}+w_{10}w_{11}-w_{01}w_{10})
+\ \mathcal{O}( \lambda \delta^2) + \mathcal{O}(\varepsilon\lambda^4) + \mathcal{O}(\lambda^6) \nonumber \\
&\quad\ = -\left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left(\ q^4\lambda^2 + 3\delta^2 (w_{01}+w_{10}-w_{11})^2\ \right) \nonumber \\
&\qquad \ \ \
+\ \mathcal{O}( \lambda^2\delta^2)\ +\ \mathcal{O}( \lambda \delta^2) + \mathcal{O}(\varepsilon\lambda^4) + \mathcal{O}(\lambda^6)\ ,
\nonumber\\
&\quad\ =\ -\left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + \delta^2 \left| W_{0,1}+W_{1,0}-W_{1,1}\ \right|^2 \right) \nonumber \\
&\qquad\ \
+\ \mathcal{O}( \lambda^2\delta^2)\ +\ \mathcal{O}( \lambda \delta^2) + \mathcal{O}(\varepsilon\lambda^4) + \mathcal{O}(\lambda^6)\nonumber\\
&\quad\ =\ -\pi(\varepsilon,\delta^2,\lambda)\ +\ o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right) ,
\label{detMapprox2}
\end{align}
for $(\lambda,\delta)$ in the region \eqref{smiley2}.
This completes the proof of Proposition \ref{detM-expansion}.
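As a consistency check on the computation leading to \eqref{detMapprox2} (an illustration, not a substitute for the argument above), the determinant of $M^{approx}$ can be expanded symbolically: subtracting the product $-\left(q^2\lambda^2+\varepsilon V_{1,1}\right)\left(q^4\lambda^2+3\delta^2(w_{01}+w_{10}-w_{11})^2\right)$ should leave only remainder monomials of the listed types. A sympy sketch (our symbol names):

```python
import sympy as sp

lam, dlt, eps, q, V11, w01, w10, w11 = sp.symbols(
    'lam dlt eps q V11 w01 w10 w11', real=True)
tau = sp.Rational(-1, 2) + sp.sqrt(3) * sp.I / 2       # exp(2*pi*I/3)
alpha = sp.I * tau * q**2 / sp.sqrt(3)
assert sp.simplify(sp.re(alpha) + q**2 / 2) == 0       # Re(alpha) = -q^2/2
assert sp.simplify(sp.re(alpha**3)) == 0               # Re(alpha^3) = 0

wt = w11 - tau * w01 - sp.conjugate(tau) * w10         # \tilde{w}
w = w01 + w10 - w11
M = sp.Matrix([
    [lam**2 * q**2 + 3 * eps * V11,
     alpha * lam - dlt * wt,
     sp.conjugate(alpha) * lam + dlt * sp.conjugate(wt)],
    [sp.conjugate(alpha) * lam - dlt * sp.conjugate(wt),
     lam**2 * q**2 - dlt * w,
     alpha * lam],
    [alpha * lam + dlt * wt,
     sp.conjugate(alpha) * lam,
     lam**2 * q**2 + dlt * w]])
main = -(q**2 * lam**2 + eps * V11) * (q**4 * lam**2 + 3 * dlt**2 * w**2)
rem = sp.expand(M.det() - main)
# every surviving monomial is O(lam*dlt^2), O(lam^2*dlt^2),
# O(eps*lam^4) or O(lam^6):
for term in sp.Add.make_args(rem):
    a, b, c = (sp.degree(term, v) for v in (eps, lam, dlt))
    assert (b >= 1 and c >= 2) or (a >= 1 and b >= 4) or b >= 6
```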
\subsection{If $\varepsilon V_{1,1}<0$, the zigzag slice does not satisfy the no-fold condition}\label{no-directionalgap}
Recall that in the case $\varepsilon V_{1,1}<0$ we may assume, without loss of generality, that $\varepsilon>0$ and $V_{1,1}<0$. Theorem \ref{NO-directional-gap!} follows from:
\begin{proposition}\label{Deq0}
Assume
\begin{equation}
0<|\varepsilon|<\varepsilon_2,\ \ {\rm and}\ \ 0\le\delta\le c_\flat\ \varepsilon^2.
\label{smiley4}
\end{equation}
There exists $\theta_0>0$ such that, for all $\varepsilon$ sufficiently small, there is a $\lambda_\varepsilon>0$ satisfying $\varepsilon<\lambda_{\varepsilon}<\theta_0\sqrt{\varepsilon}$ for which
\begin{equation*}
\det M(\varepsilon,\delta,\lambda_\varepsilon,\mu=0) = 0.
\end{equation*}
Thus, $E_\star^\varepsilon$ is an interior point of the $L^2_{k_\parallel}(\Sigma)$-spectrum of $H^{(\varepsilon,\delta)}$.
\end{proposition}
\noindent It follows that for $\varepsilon V_{1,1}<0$, the operator $H^{(\varepsilon,\delta)}$ does not have a spectral gap about $E=E_\star^\varepsilon$ along the zigzag slice. Referring to the middle panel of Figure \ref{fig:eps_V11_neg}, we see that, for $\delta\ne0$ and small, a {\it local in $\lambda$} gap opens, for $\lambda$ small, about the energy $E=E_\star^\varepsilon$. But since the no-fold property is not satisfied (by Proposition \ref{Deq0}), this is not a true (global in $\lambda\in[-1/2,1/2]$) spectral gap.
\begin{proof}[Proof of Proposition \ref{Deq0}]
Let $C_\flat$ denote the constant in Proposition
\ref{detM-expansion}. Recall that $C_\flat$ was chosen sufficiently large in the proof of Proposition \ref{detM-expansion}; it may be taken so that $C_\flat>\theta_0$, where $\theta_0$ is defined by $\theta_0^2=2|V_{1,1}|/q^2$. Also, choose a constant $\zeta_0$ such that $\zeta_0^2= |V_{1,1}|/2q^2$, and note that $\zeta_0<\theta_0$;
below we shall see why we make these choices.
For $(\lambda,\delta)$ in the region \eqref{smiley4} we have:
%
\begin{equation}-\det M(\varepsilon,\delta,\lambda,0) \ =\ \pi(\varepsilon,\delta^2,\lambda) + o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)=\pi(\varepsilon,\delta^2,\lambda) + o\left( \varepsilon^2 \right) , \label{-detMonsmiley}
\end{equation}
where
\begin{align*}
\pi(\varepsilon,\delta^2,\lambda)&\equiv \left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right).
\end{align*}
We now show that there exists $\lambda^{\varepsilon,\delta}\in (\zeta_0\sqrt\varepsilon,\theta_0\sqrt{\varepsilon})$ such that
$\pi(\varepsilon,\delta^2,\lambda^{\varepsilon,\delta}) = 0$.
Note first that $\varepsilon V_{1,1}<0$, $\varepsilon^2\ll\varepsilon$ and the choice of $\zeta_0$ imply, upon evaluation of $\pi(\varepsilon,\delta^2,\lambda)$ at $\lambda=\zeta_0\sqrt\varepsilon$, that:
\begin{align*}
\pi(\varepsilon,\delta^2,\zeta_0\sqrt\varepsilon)
& =\left(q^2\zeta_0^2\varepsilon + \varepsilon V_{1,1}\right) \left( q^4\zeta_0^2\varepsilon + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right)<0
\end{align*}
and hence, by \eqref{-detMonsmiley}, $-\det M(\varepsilon,\delta,\zeta_0\sqrt\varepsilon,0)<0$.
On the other hand, the choice of $\theta_0$ implies, upon evaluation at $\lambda=\theta_0\sqrt{\varepsilon}$, that:
\begin{align*}
\pi(\varepsilon,\delta^2,\theta_0\sqrt{\varepsilon}) &=
\left(q^2\theta_0^2\varepsilon + \varepsilon V_{1,1}\right) \left( q^4\theta_0^2\varepsilon + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right)>0
\end{align*}
and hence, by \eqref{-detMonsmiley}, $-\det M(\varepsilon,\delta,\theta_0\sqrt\varepsilon,0)>0$.
Now $\det M(\varepsilon,\delta,\lambda,0)$ is, for all $0<\varepsilon<\varepsilon_1$, a continuous function of $\lambda$. Hence, there exists $\lambda^{\varepsilon,\delta}\in (\zeta_0\sqrt\varepsilon,\theta_0\sqrt{\varepsilon})$ such that $\det M(\varepsilon,\delta,\lambda^{\varepsilon,\delta},\mu=0)=0$. Therefore, $E^{\varepsilon,\delta}(\lambda^{\varepsilon,\delta})=E_\star^\varepsilon
\in\ L^2_{{\kpar=2\pi/3}}-\ {\rm spec}(H^{(\varepsilon,\delta)})$.
This completes the proof of Proposition \ref{Deq0}.
\end{proof}
\section{Introduction and Outline}\label{intro}
This paper is motivated by a remarkable physical observation. When two distinct 2-dimensional materials with favorable crystalline structures are joined along an edge, there exist propagating modes, {\it e.g.} electronic or photonic, whose energy remains localized in a neighborhood of the edge without spreading into the ``bulk''. Furthermore, these modes and their properties persist in the presence of arbitrary local, even large, perturbations of the edge. An understanding of such ``protected edge states'' in periodic structures
has so far mainly been obtained by analyzing discrete ``tight-binding'' models
and from numerical simulations. In this paper we prove that edge states arise from the Schr\"odinger equation for a class of potentials that have many (though not all) features in common with the relevant experiments.
%
A central role is played by
a spectral ``no-fold'' condition. In the case of small amplitude (low-contrast) honeycomb potentials, this reduces to a sign condition on a particular Fourier coefficient of the potential.
%
%
A combination of numerical simulation and heuristic argument suggests
that if the ``no-fold'' condition fails, then edge
states need not be topologically protected. Let us explain these ideas in more detail.
Wave transport in periodic structures with honeycomb symmetry has been an area of intense activity catalyzed by the study of graphene, a single atomic layer of carbon atoms arranged in a two-dimensional honeycomb structure. The remarkable electronic properties exhibited by graphene \cites{geim2007rise, RMP-Graphene:09, Katsnelson:12, zhang2005experimental} have inspired the study of waves in general honeycomb structures or ``artificial graphene'' in electronic \cites{artificial-graphene} and photonic \cites{HR:07,RH:08,Chen-etal:09,bahat2008symmetry,lu2014topological,BKMM_prl:13} contexts.
One such property, observed in electronic and photonic systems with honeycomb symmetry, is the existence of
topologically protected {\it edge states}. Edge states are modes which are
(i) pseudo-periodic (plane-wave-like or propagating) parallel to a line-defect, and (ii) localized transverse to the line-defect; see Figure \ref{fig:mode_schematic}.
{\it Topological protection} refers to the persistence of these modes and their properties, even when the line-defect is subjected to strong local or random perturbations. In applications, edge states are of great interest due to their potential as robust vehicles for channeling energy.
The extensive physics literature on topologically robust edge states goes back to investigations of the quantum Hall effect; see, for example, \cites{H:82, TKNN:82, Hatsugai:93, wen1995topological} and the rigorous mathematical articles \cites{Macris-Martin-Pule:99,EG:02, EGS:05,Taarabt:14}.
In \cites{HR:07, RH:08} a proposal was made for realizing {\it photonic edge states} in periodic electromagnetic structures which exhibit the magneto-optic effect. In this case, the edge is realized via a domain wall across which
the Faraday axis is reversed.
Since the magneto-optic effect breaks time-reversal symmetry, as does the magnetic field in the Hall effect, the resulting edge states are unidirectional.
Other realizations of edges in photonic and electromagnetic systems, {\it e.g.} between periodic dielectric and conducting structures, between periodic structures and free-space, have been explored through experiment and numerical simulation; see, for example, \cites{Soljacic-etal:08,Fan-etal:08,Rechtsman-etal:13a,Shvets-PTI:13,Shvets:14}.
%
In the context of tight-binding models, the existence and robustness of edge states has been related to topological invariants (Chern index or Berry / Zak phase \cites{delplace2011zak}) associated with the ``bulk'' (infinite periodic honeycomb) band-structure.
We are interested in exploring these phenomena in general energy-conserving wave equations in continuous media. We consider the case of the Schr\"odinger equation on $\mathbb{R}^2$, $i\partial_t\psi=H\psi$, and study the existence and robustness of edge states of time-harmonic form: $\psi=e^{-iEt}\Psi$. Our model consists of a honeycomb background potential, the ``bulk'' structure, and a perturbing ``edge-potential''. The edge-potential interpolates between two distinct asymptotic periodic structures, via a {\it domain wall} which varies transverse to a specified line-defect (``edge'') in the direction
of some element of the period lattice, $\Lambda_h$.
In the context of honeycomb structures, the most frequently studied edges are the ``zigzag'' and ``armchair'' edges; see Figure \ref{fig:edges}.
Our model of an edge is motivated by the domain-wall construction of \cites{HR:07, RH:08}. A difference is that
we break spatial-inversion symmetry, while preserving time-reversal symmetry. Hence, the edge states -- though topologically robust -- may travel in either direction along the edge. In \cites{FLW-PNAS:14,FLW-MAMS:15} we proved that a one-dimensional variant of such edge-potentials gives rise to topologically protected edge states in periodic structures with symmetry-induced linear band crossings, the analogue in one space dimension of Dirac points (see below). We explore a photonic realization of such states in coupled waveguide arrays in \cites{Thorp-etal:15}.
Our goal is to clarify the underlying mechanisms for the existence of topologically protected edge states. In Theorem \ref{thm-edgestate} we give general conditions for a topologically protected bifurcation of edge states from {\it Dirac points} of the background (bulk) honeycomb structure.
The bifurcation is seeded by the robust zero mode of a one-dimensional effective Dirac equation.
A key hypothesis is a {\it spectral no-fold condition} for the prescribed edge, assumed to be a {\it rational edge}.
In one-dimensional continuum models \cites{FLW-MAMS:15}, this condition is a consequence of monotonicity properties of dispersion curves. For continuous $d$-dimensional structures, with $d\ge2$, the spectral no-fold condition may or may not hold; see Section \ref{zz-gap}. Moreover, by varying a parameter, such as the lattice scale of a periodic structure, one can continuously tune between cases where the condition holds or does not hold; see Appendix \ref{V11-section}.
In Theorem \ref{SGC!} and Theorem \ref{Hepsdelta-edgestates} we verify the spectral no-fold condition for the zigzag edge, for a family of Hamiltonians with weak (low-contrast) potentials, and obtain the existence of {\it zigzag edge states} in this setting.
In a forthcoming article \cites{FLW-sb:16}, we study the strong binding regime (deep potentials) for a large class of honeycomb Schr\"odinger operators. We prove
that the two lowest energy dispersion surfaces, after a rescaling by the potential well's depth, converge uniformly to those of the celebrated Wallace (1947)
\cites{Wallace:47} tight-binding model of graphite. A corollary of this result is that the spectral no-fold condition, as stated in the present article,
is satisfied for sufficiently deep potentials (high contrast) for a very large class of edge directions in $\Lambda_h$ (including the zigzag edge). In fact, we believe that the analysis of the present article can be extended and, together with \cites{FLW-sb:16}, will yield the existence of edge states
which are localized, transverse to {\it arbitrary} edge directions ${\bm{\mathfrak{v}}}_1\in\Lambda_h$. This is work in progress.
For a detailed discussion of examples and motivating numerical simulations, see \cites{FLW-2d_materials:15}.
The types of edge states which exist for edges generated by domain walls stand in contrast to those which exist in the case of ``hard edges'', {\it i.e.} edges obtained by restricting the tight-binding bulk Hamiltonian to one side of a line, with a Dirichlet (zero) boundary condition imposed along that line; see parenthetical remark in Figure \ref{fig:edges}. In this case, it is well-known that
zigzag (hard) edges support edge states, while armchair (hard) edges do not; see, for example, \cites{Graf-Porta:13}.
Finally, we believe that failure of the spectral no-fold condition implies that there are no topologically protected edge states, although there is evidence that there are meta-stable edge states, which are localized near the edge for a long time; see Section \ref{meta-stable?}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{Localized_mode.pdf}
\caption{\footnotesize
Edge state -- propagating (plane-wave like) parallel to a zigzag edge ($\mathbb{R}{\bf v}_1$) and localized transverse to the edge.
\label{fig:mode_schematic}
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{zigzag_vs_armchair.pdf}
\caption{\footnotesize
Bulk honeycomb structure, ${\bf H} = ({\bf A } + \Lambda_h) \cup ( {\bf B} + \Lambda_h)$.
{\bf Top panel}: Zigzag edge (blue line), $\mathbb{R}{\bf v}_1 = \{{\bf x} : {\bf k}_2\cdot{\bf x}=0\}$.
Shaded region is the fundamental domain of the cylinder, $\Sigma_{ZZ}$, corresponding to the zigzag edge.
{\bf Bottom panel}: Armchair edge (blue line), $\mathbb{R}\left({\bf v}_1+{\bf v}_2\right) = \{{\bf x} : ({\bf k}_1-{\bf k}_2)\cdot{\bf x}=0\}$.
Fundamental domain of the cylinder, $\Sigma_{AC}$, corresponding to the armchair edge, also indicated.
(Darkened vertices are sites at which zero-boundary conditions are imposed in tight-binding models of
``hard'' edges.)
\label{fig:edges}
}
\end{figure}
\subsection{Detailed discussion of main results}\label{detailed-intro}
Let $\Lambda_h = \mathbb{Z}{\bf v}_1\oplus \mathbb{Z}{\bf v}_2$ denote the regular (equilateral) triangular lattice
and $\Lambda_h^* = \mathbb{Z}{\bf k}_1\oplus \mathbb{Z}{\bf k}_2$ denote the associated dual lattice, with relations ${\bf k}_l\cdot{\bf v}_m=2\pi \delta_{lm},\ l,m=1,2$. The expressions for ${\bf k}_l$ and ${\bf v}_m$ are displayed in Section \ref{sec:honeycomb}. The honeycomb structure, ${\bf H}$, is the union of two interpenetrating triangular lattices: ${\bf A} + \Lambda_h$ and ${\bf B} + \Lambda_h$; see Figures \ref{fig:edges} and \ref{fig:lattices}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{honeycomb_lattices.pdf}
\caption{\footnotesize
{\bf Left panel:} ${\bf A}=(0,0)$, ${\bf B}=(\frac{1}{\sqrt3},0)$.
The honeycomb structure, ${\bf H}$ is the union of two interpenetrating sublattices: $\Lambda_{\bf A}={\bf A}+\Lambda_h$ (blue)
and $\Lambda_{\bf B}={\bf B}+\Lambda_h$ (red). The lattice vectors $\{{\bf v}_1,{\bf v}_2\}$ generate $\Lambda_h$.
Colors designate sublattices; in graphene the atoms occupying $\Lambda_{\bf A}-$ and $\Lambda_{\bf B}-$ sites are identical.
{\bf Right panel:}
Brillouin zone, ${\mathcal{B}}_h$, and dual basis $\{{\bf k}_1,{\bf k}_2\}$. ${\bf K}$ and ${\bf K}'$ are labeled. Other vertices of ${\mathcal{B}}_h$ obtained via application of $R$, a rotation by $2\pi/3$.
\label{fig:lattices}
}
\end{figure}
A {\it honeycomb lattice potential}, $V({\bf x})$, is a real-valued, smooth function, which is $\Lambda_h-$ periodic and, relative to some origin of coordinates, inversion symmetric (even) and invariant under a $2\pi/3$ rotation; see Definition \ref{honeyV}. A choice of period cell is $\Omega_h$, the parallelogram in $\mathbb{R}^2$ spanned by $\{{\bf v}_1, {\bf v}_2\}$.
We begin with the Hamiltonian for the unperturbed honeycomb structure:
%
%
\begin{align}
H^{(0)} &= -\Delta +V({\bf x}). \label{H0}
\end{align}
The {\it band structure} of the $\Lambda_h-$ periodic Schr\"odinger operator, $H^{(0)}$, is obtained by considering the
family of eigenvalue problems, parametrized by ${\bf k}\in\mathcal{B}_h$, the Brillouin zone:
$(H^{(0)}-E)\Psi=0,\ \Psi({\bf x}+{\bf v})=e^{i{\bf k}\cdot{\bf v}}\Psi({\bf x}),\ \ {\bf x}\in\mathbb{R}^2,\ {\bf v}\in\Lambda_h$.
Equivalently, $\psi({\bf x})=e^{-i{\bf k}\cdot{\bf x}}\Psi({\bf x})$, satisfies the periodic eigenvalue problem:
$ \left(H^{(0)}({\bf k})-E({\bf k})\right)\psi=0$ and $\psi({\bf x}+{\bf v})=\psi({\bf x})$ for all ${\bf x}\in\mathbb{R}^2$ and
${\bf v}\in\Lambda_h$, where $H^{(0)}({\bf k})=-(\nabla+i{\bf k})^2+V({\bf x})$.
For each ${\bf k}\in{\mathcal{B}}_h$, the spectrum is real and consists of discrete eigenvalues $E_b({\bf k}),\ b\ge1$, where $E_b({\bf k})\le E_{b+1}({\bf k})$. The maps ${\bf k}\mapsto E_b({\bf k})\in\mathbb{R}$ are called the dispersion surfaces of $H^{(0)}$. The collection of these surfaces constitutes the {\it band structure} of $H^{(0)}$. As ${\bf k}$ varies over $\mathcal{B}_h$, each map ${\bf k}\mapsto E_b({\bf k})$ is Lipschitz continuous and sweeps out a closed interval in $\mathbb{R}$. The union of these intervals is the $L^2(\mathbb{R}^2)-$ spectrum of $H^{(0)}$.
A more detailed discussion is presented in Section \ref{honeycomb_basics}.
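As a simple illustration (the free case, included here only for orientation), for $V\equiv0$ the Floquet-Bloch eigenpairs of $H^{(0)}({\bf k})=-(\nabla+i{\bf k})^2$ are explicit:
\begin{equation*}
\psi_{\bf m}({\bf x})\ =\ e^{i{\bf m}\cdot{\bf x}},\qquad E_{\bf m}({\bf k})\ =\ |{\bf k}+{\bf m}|^2,\qquad {\bf m}\in\Lambda_h^* ,
\end{equation*}
and the dispersion surfaces $E_b({\bf k})$ are obtained by ordering the values $\{|{\bf k}+{\bf m}|^2\}_{{\bf m}\in\Lambda_h^*}$ for each fixed ${\bf k}$. The resulting bands overlap; the $L^2(\mathbb{R}^2)-$ spectrum is $[0,\infty)$ and there are no spectral gaps, a point we return to in Remark \ref{BS-conj}.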
A central role is played by the {\it Dirac points} of $H^{(0)}$.
These are quasi-momentum / energy pairs, $({\bf K}_\star,E_\star)$, in the band structure of $H^{(0)}$ at which neighboring dispersion surfaces touch conically at a point \cites{RMP-Graphene:09,Katsnelson:12,FW:12}. The existence of Dirac points, located at the six vertices of the Brillouin zone, $\mathcal{B}_h$ (regular hexagonal dual period cell) for generic honeycomb structures was proved in \cites{FW:12,FLW-MAMS:15}; see also \cites{Grushin:09,berkolaiko-comech:15}.
The quasi-momenta of Dirac points partition into two equivalence classes: the ${\bf K}-$ points, consisting of ${\bf K}, R{\bf K}$ and $R^2{\bf K}$, where $R$ is a rotation by $2\pi/3$, and the ${\bf K}'-$ points, consisting of ${\bf K}'=-{\bf K}, R{\bf K}'$ and $R^2{\bf K}'$.
The time evolution of a wavepacket, with data spectrally localized near a Dirac point, is governed by a massless two-dimensional Dirac system \cites{FW:14}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{E_vs_k_three_surfaces.pdf}
\caption{\footnotesize
Lowest three dispersion surfaces ${\bf k}\equiv(k^{(1)},k^{(2)})\in\mathcal{B}_h\mapsto E({\bf k})$ of the band structure of $H^{(0)}\equiv -\Delta + V({\bf x})$, where
$V$ is the honeycomb potential: $V({\bf x}) = 10 \left(\cos({\bf k}_1 \cdot{\bf x})+\cos({\bf k}_2 \cdot{\bf x})+\cos(({\bf k}_1+{\bf k}_2)\cdot{\bf x})\right)$.
Dirac points occur at the intersection of the lower two dispersion surfaces, at the six vertices of the Brillouin zone, $\mathcal{B}_h$.
\label{fig:E_mesh_3}
}
\end{figure}
Figure \ref{fig:E_mesh_3} displays the first three dispersion surfaces of $H^{(0)}$ for a honeycomb potential. The lowest two of these surfaces touch conically at the six vertices of $\mathcal{B}_h$ (inset). Associated with the Dirac point $({\bf K}_\star,E_\star)$
is a two-dimensional eigenspace of ${\bf K}_\star-$ pseudo-periodic states,\ ${\rm span}\{\Phi_1,\Phi_2\}$:
\[ H^{(0)}\Phi_j({\bf x})=E_\star \Phi_j({\bf x}),\ {\bf x}\in\mathbb{R}^2,\ \ j=1,2\ ,\ \textrm{ where}\ \ \Phi_j({\bf x}+{\bf v})=e^{i{\bf K}_\star\cdot{\bf v}}\Phi_j({\bf x}),\ \ {\bf v}\in\Lambda_h ;\]
see Definition \ref{dirac-pt-defn}.
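Near a Dirac point, the two intersecting dispersion surfaces are, to leading order, conical \cites{FW:12}:
\begin{equation*}
E_\pm({\bf k})\ -\ E_\star\ \approx\ \pm\,|\lambda_\sharp|\,\left|{\bf k}-{\bf K}_\star\right|,\qquad |{\bf k}-{\bf K}_\star|\ \textrm{small},
\end{equation*}
where $\lambda_\sharp\ne0$ is the constant appearing in \eqref{lambda-sharp2} below.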
It is also shown in \cites{FW:12} that a $\Lambda_h-$ periodic perturbation of $V({\bf x})$ which breaks inversion or time-reversal
symmetry lifts the eigenvalue degeneracy; a (local) gap opens about the Dirac points and the perturbed dispersion surfaces are locally smooth. The perturbation of $H^{(0)}$ by an edge potential (see \eqref{schro-domain}) takes advantage of this instability of Dirac points with respect to symmetry-breaking perturbations.
To construct our Hamiltonian, perturbed by an edge-potential, we first choose a vector ${\bm{\mathfrak{v}}}_1\in\Lambda_h$, the period lattice, and consider the line
$\mathbb{R}{\bm{\mathfrak{v}}}_1$, the ``edge''. Choose ${\bm{\mathfrak{v}}}_2$ such that $\Lambda_h=\mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2$. Also introduce dual basis vectors, ${\bm{\mathfrak{K}}}_1$ and $ {\bm{\mathfrak{K}}}_2$, satisfying ${\bm{\mathfrak{K}}}_l\cdot{\bm{\mathfrak{v}}}_m=2\pi\delta_{lm},\ l,m=1,2$; see Section \ref{ds-slices} for a detailed discussion.
The choice ${\bm{\mathfrak{v}}}_1={\bf v}_1$ (or equivalently ${\bf v}_2$) is a {\it zigzag edge} and the choice ${\bm{\mathfrak{v}}}_1={\bf v}_1+{\bf v}_2$ is an {\it armchair edge}; see Figure \ref{fig:edges}.
Introduce the perturbed Hamiltonian:
\begin{equation}
H^{(\delta)} \equiv -\Delta + V({\bf x}) + \delta\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x})W({\bf x}) \ =\ H^{(0)} + \delta\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x})W({\bf x}) .
\label{schro-domain}
\end{equation}
Here, $\delta$ is real and will be taken to be sufficiently small, and $W({\bf x})$ is $\Lambda_h-$ periodic
and odd. The function $\kappa$ defines a {\it domain wall}. We choose $\kappa$ to be sufficiently smooth and to satisfy $\kappa(0)=0$ and $\kappa(\zeta)\to\pm\kappa_\infty\ne0$
as $\zeta\to\pm\infty$. Without loss of generality, we assume $\kappa_\infty>0$, {\it e.g.} $\kappa(\zeta)=\tanh(\zeta)$. We refer to the line $\mathbb{R}{\bm{\mathfrak{v}}}_1$ as a ${\bm{\mathfrak{v}}}_1-$ edge.
Note that $H^{(\delta)}$ is invariant under translations parallel to the ${\bm{\mathfrak{v}}}_1-$ edge, ${\bf x}\mapsto{\bf x}+{\bm{\mathfrak{v}}}_1$, and hence there is a well-defined {\it parallel quasi-momentum}, denoted ${k_{\parallel}}$. Furthermore,
$H^{(\delta)}$ transitions adiabatically from the asymptotic Hamiltonian $H_-^{(\delta)}=H^{(0)}\ -\ \delta\kappa_\infty W({\bf x})$ as ${\bm{\mathfrak{K}}}_2\cdot{\bf x}\to-\infty$ to the asymptotic Hamiltonian $H_+^{(\delta)}=H^{(0)}\ +\ \delta\kappa_\infty W({\bf x})$
as ${\bm{\mathfrak{K}}}_2\cdot{\bf x}\to\infty$. In the case where $\kappa$ changes sign once, across $\zeta=0$, the domain wall modulation of $W({\bf x})$ realizes a phase-defect
across the edge (line-defect) $\mathbb{R}{\bm{\mathfrak{v}}}_1$. A variant of this construction was used in \cites{FLW-MAMS:15} to insert a phase defect between asymptotic dimer periodic potentials.
\bigskip
\bigskip
Suppose $H^{(0)}$ has a Dirac point at $({\bf K}_\star,E_\star)$. It is important to note that while $H^{(0)}$ is inversion symmetric, $H^{(\delta)}_\pm$ is not.
For $\delta\ne0$, $H^{(\delta)}_\pm$ does not have Dirac points; its dispersion surfaces are locally smooth and, for quasi-momenta ${\bf k}$ with $|{\bf k}-{\bf K}_\star|$ sufficiently small, there is an open neighborhood of $E_\star$ not contained in the $L^2(\mathbb{R}^2/\Lambda_h)-$ spectrum of $H^{(\delta)}_\pm({\bf k})$. This ``spectral gap'' about $E=E_\star$ may however only be local about ${\bf K}_\star$ \cites{FW:12}. If there is a real open neighborhood of $E_\star$, not contained in the spectrum of $H^{(\delta)}_\pm({\bf k})=-(\nabla+i{\bf k})^2+V\pm\delta\kappa_\infty W$ for \emph{all} ${\bf k}\in\mathcal{B}_h$, then
$H_\pm^{(\delta)}$ is said to have a (global) omni-directional spectral gap about $E=E_\star$.
We shall see, in our discussion of the {\it spectral no-fold condition},
that it is a ``directional spectral gap'' that plays a key role in the existence of edge states;
see Section \ref{intro-nofold} and Definition \ref{SGC}.
%
Under suitable hypotheses, we shall construct {\it ${\bm{\mathfrak{v}}}_1-$ edge states} of $H^{(\delta)}$, which are spectrally localized near the Dirac point, $({\bf K}_\star,E_\star)$.
These are non-trivial solutions $\Psi$, with energies $E\approx E_\star$, of the ${k_{\parallel}}-$ eigenvalue problem:
\begin{align}
H^{(\delta)}\Psi\ &=\ E\Psi,\label{edge-evp}\\
\ \Psi({\bf x}+{\bm{\mathfrak{v}}}_1)\ &=\ e^{i{k_{\parallel}}}\Psi({\bf x})\ (\textrm{propagation parallel to}\ \mathbb{R}{\bm{\mathfrak{v}}}_1), \label{edge-bc1}\\
|\Psi({\bf x})|\ &\to\ 0,\ \ {\rm as}\ \ |{\bm{\mathfrak{K}}}_2\cdot{\bf x}|\to\infty\quad (\textrm{localization transverse to}\ \mathbb{R}{\bm{\mathfrak{v}}}_1), \label{edge-bc2}
\end{align}
for ${k_{\parallel}}\approx{\bf K}_\star\cdot{\bm{\mathfrak{v}}}_1$.
To formulate the eigenvalue problem in an appropriate Hilbert space, we introduce the cylinder $\Sigma\equiv \mathbb{R}^2/ \mathbb{Z}{\bm{\mathfrak{v}}}_1$. If $f({\bf x})$ satisfies the pseudo-periodic boundary condition \eqref{edge-bc1}, then $f({\bf x})e^{-i\frac{{k_{\parallel}}}{2\pi}{\bm{\mathfrak{K}}}_1\cdot{\bf x}}$ is well-defined on the cylinder $\Sigma$. Denote by $H^s(\Sigma),\ s\ge0$, the Sobolev spaces of functions defined on $\Sigma$. The pseudo-periodicity and decay conditions \eqref{edge-bc1}-\eqref{edge-bc2} are encoded by requiring $ \Psi \in H^s_{k_{\parallel}}(\Sigma)$, for some $s\ge0$, where
\begin{equation*}
H^s_{k_{\parallel}}=H^s_{k_{\parallel}}(\Sigma)\ \equiv \ \left\{f : f({\bf x})e^{-i\frac{{k_{\parallel}}}{2\pi}{\bm{\mathfrak{K}}}_1\cdot{\bf x}}\in H^s(\Sigma) \right\} .\label{Hs-kpar}
\end{equation*}
Thus we formulate the EVP \eqref{edge-evp}-\eqref{edge-bc2} as:
\begin{equation}
H^{(\delta)}\Psi\ =\ E\Psi,\ \ \Psi\in H^2_{{k_{\parallel}}}(\Sigma).
\label{EVP}\end{equation}
\begin{remark}[Symmetry relation among ${\bf K}-$ and ${\bf K}'-$ points]
Note that if $\Psi({\bf x})=e^{i{\bf K}\cdot{\bf x}}Z({\bf x})$ is a solution of the eigenvalue problem \eqref{EVP}, then $\psi_{{\bf K}}=e^{-i(E t-{\bf K}\cdot{\bf x})} Z({\bf x}),$
where $Z({\bf x}+{\bm{\mathfrak{v}}}_1)=Z({\bf x})$ and $Z({\bf x})\rightarrow0$ as $|{\bm{\mathfrak{K}}}_2\cdot{\bf x}| \rightarrow \infty$,
is a propagating edge state of the time-dependent Schr\"odinger equation:
$ i\partial_t\psi({\bf x},t) = H^{(\delta)} \psi({\bf x},t)$
with parallel quasi-momentum ${k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1$.
Since the time-dependent Schr\"odinger equation has the invariance $\psi({\bf x},t)\mapsto\overline{\psi({\bf x},-t)}$, it follows that
\[ \overline{\psi_{\bf K}({\bf x},-t)} = e^{-i(Et+{\bf K}\cdot{\bf x})} \overline{Z({\bf x})} = e^{-i(Et-{\bf K}'\cdot{\bf x})} \overline{Z({\bf x})} = \psi_{{\bf K}'}({\bf x},t) . \]
Thus $\psi_{{\bf K}'}({\bf x},t)$ is a counterpropagating edge state with parallel quasi-momentum, $k_\parallel={\bf K}'\cdot{\bm{\mathfrak{v}}}_1=-{\bf K}\cdot{\bm{\mathfrak{v}}}_1$.
Due to these symmetry considerations and the equivalence of ${\bf K}-$ points: $\{{\bf K},R{\bf K},R^2{\bf K}\}$, without loss of generality, we henceforth restrict our attention to the Dirac point $({\bf K},E_\star)$.
\end{remark}
\subsection{Summary of main results}\label{results-summary}
\subsubsection{General conditions for the existence of topologically protected edge states; Theorem \ref{thm-edgestate} and Corollary \ref{vary_k_parallel}}
In {\bf Theorem \ref{thm-edgestate}} we formulate hypotheses on the honeycomb potential, $V$, domain wall function, $\kappa(\zeta)$, and asymptotic periodic structure, $W({\bf x})$, which
imply the existence of topologically protected ${\bm{\mathfrak{v}}}_1-$ edge states, constructed as non-trivial eigenpairs $\delta\mapsto (\Psi^\delta, E^\delta)$ of \eqref{EVP} with ${k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1$,
defined for all $|\delta|$ sufficiently small. This branch of non-trivial states bifurcates from the trivial solution branch $E\mapsto(\Psi\equiv0,E)$ at $E=E_\star$, the energy of the Dirac point.
Key among the hypotheses is the spectral no-fold condition, discussed below in Section \ref{intro-nofold}.
At leading order in $\delta$, the edge state, $\Psi^\delta({\bf x})$, is a slow modulation of the degenerate nullspace of $H^{(0)}-E_\star$:
\begin{align}
\Psi^\delta({\bf x}) &\approx \alpha_{\star,+}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_+({\bf x}) + \alpha_{\star,-}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_-({\bf x}) \ \ \text{in} \ \ H_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1}^2(\Sigma) , \label{multiscale-formal0} \\
E^\delta &= E_\star + \mathcal{O}(\delta^2),\ \ 0<|\delta|\ll1,\label{multiscale-formal1}
\end{align}
where $\Phi_+$ and $\Phi_-$ are the appropriate linear combinations of $\Phi_1$ and $\Phi_2$,
defined in \eqref{Phi_pm-def}.
The envelope amplitude-vector, $\alpha_\star(\zeta)=(\alpha_{\star,+}(\zeta),\alpha_{\star,-}(\zeta))^T$, is a zero-energy eigenstate, $\mathcal{D}\alpha_\star=0$, of the one-dimensional Dirac operator (see also \eqref{multi-dirac-op}):
\[ \mathcal{D} \equiv -i|\lambda_\sharp||{\bm{\mathfrak{K}}}_2|\sigma_3\partial_\zeta + \vartheta_\sharp\kappa(\zeta)\sigma_1,\]
where the Pauli matrices $\sigma_j$ are displayed in \eqref{Pauli-sigma}.
Here $\lambda_\sharp\in\mathbb{C}$ (see \eqref{lambda-sharp2}) depends on the unperturbed honeycomb potential, $V$, and is non-zero for generic $V$. The constant
$\vartheta_\sharp\equiv\left\langle \Phi_1,W\Phi_1\right\rangle_{L^2(\Omega_h)}$ is real and is also generically nonzero.
%
$\mathcal{D}$ has a spatially localized zero-energy eigenstate for any $\kappa(\zeta)$ having asymptotic limits of opposite sign at $\pm\infty$. Therefore, the zero-energy eigenstate, which seeds the bifurcation, persists for {\it localized} perturbations of $\kappa(\zeta)$.
In this sense, the bifurcating branch of edge states is topologically protected against a class of local perturbations of the edge.
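The zero mode can be exhibited explicitly; the following is a direct componentwise computation, assuming the standard Pauli conventions displayed in \eqref{Pauli-sigma}, for the case $\vartheta_\sharp>0$ (for $\vartheta_\sharp<0$, replace $(1,-i)^T$ by $(1,i)^T$). For any $\gamma\ne0$:
\begin{equation*}
\alpha_\star(\zeta)\ =\ \gamma\,\exp\left(-\frac{\vartheta_\sharp}{|\lambda_\sharp|\,|{\bm{\mathfrak{K}}}_2|}\int_0^\zeta \kappa(s)\,ds\right)\begin{pmatrix}1\\ -i\end{pmatrix} .
\end{equation*}
Since $\int_0^\zeta\kappa(s)\,ds\sim\kappa_\infty|\zeta|$ as $\zeta\to\pm\infty$, $\alpha_\star$ decays exponentially. For the model domain wall $\kappa(\zeta)=\tanh(\zeta)$, one obtains $\alpha_\star(\zeta)=\gamma\,(\cosh\zeta)^{-\vartheta_\sharp/(|\lambda_\sharp||{\bm{\mathfrak{K}}}_2|)}(1,-i)^T$.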
Section \ref{formal-multiscale} gives an account of a formal multiple scale expansion, to any order in the small parameter, $\delta$, of a solution to the eigenvalue problem \eqref{EVP}. The expression in \eqref{multiscale-formal0} is the leading order term in this expansion. Our methods can be used to prove the validity of the multiple scale expansion, at any finite order.
{\bf Corollary \ref{vary_k_parallel}} ensures, under the conditions of Theorem \ref{thm-edgestate}, the existence of edge states,
$\Psi({\bf x};{k_{\parallel}})\in H^2_{{k_{\parallel}}}(\Sigma)$ for all ${k_{\parallel}}$ in a neighborhood of ${k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1$, and by symmetry (see Remark \ref{kpar-symmetry}) for all ${k_{\parallel}}$ in a neighborhood of ${k_{\parallel}}=-{\bf K}\cdot{\bm{\mathfrak{v}}}_1={\bf K}'\cdot{\bm{\mathfrak{v}}}_1$.
Thus,
by taking a continuous superposition of states given by Corollary \ref{vary_k_parallel}, one obtains states that remain localized about, and disperse along, the edge for all time.
\begin{remark}\label{key-hyp}
A key hypothesis in Theorem \ref{thm-edgestate} is a {\it spectral no-fold} condition at $({\bf K},E_\star)$ for the ${\bm{\mathfrak{v}}}_1-$ edge of the band-structure of $-\Delta+V$. This (essentially) ensures the existence of a $L^2_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1}(\Sigma)-$ spectral gap containing $E_\star$ for the perturbed Hamiltonian, $H^{(\delta)}$; see Definition \ref{SGC} and the discussion in Section \ref{intro-nofold}.
\end{remark}
\subsubsection{Existence of topologically protected zigzag edge states; Theorem \ref{Hepsdelta-edgestates}}\label{zigzag-summary}
We consider the case of zigzag edges corresponding to the choice ${\bm{\mathfrak{v}}}_1={\bf v}_1$, ${\bm{\mathfrak{v}}}_2={\bf v}_2$, and ${\bm{\mathfrak{K}}}_1={\bf k}_1$, ${\bm{\mathfrak{K}}}_2={\bf k}_2$. Recall that $\Lambda_h=\mathbb{Z}{\bf v}_1\oplus\mathbb{Z}{\bf v}_2$. The choice ${\bm{\mathfrak{v}}}_1={\bf v}_2$ would lead to equivalent results.
We consider the zigzag edge state eigenvalue problem
\begin{equation}
H^{(\varepsilon,\delta)}\Psi\ =\ E\Psi, \quad \Psi\in H^2_{{k_{\parallel}}}(\Sigma) \qquad \text{(see also \eqref{EVP})} ,
\label{EVP-1}\end{equation}
with Hamiltonian
\begin{equation}
H^{(\varepsilon,\delta)} \equiv -\Delta + \varepsilon V({\bf x}) + \delta\kappa(\delta{\bf k}_2\cdot {\bf x})W({\bf x}) \ =\ H^{(\varepsilon)} + \delta\kappa(\delta{\bf k}_2\cdot {\bf x})W({\bf x}) .
\label{Ham-ZZ}
\end{equation}
Here, $\varepsilon$ and $\delta$ are chosen to satisfy
\begin{equation}
0<|\delta|\lesssim \varepsilon^2 \ll1 .
\label{small-eps-delta}\end{equation}
There are two cases, which are delineated by the sign of the distinguished Fourier coefficient, $\varepsilon V_{1,1}$, of the unperturbed (bulk) honeycomb potential, $\varepsilon V({\bf x})$. Here,
\begin{equation*}
V_{1,1}\ \equiv\
\frac{1}{|\Omega_h|} \int_{\Omega_h} e^{-i({\bf k}_1+{\bf k}_2)\cdot{\bf y}}\ V({\bf y})\ d{\bf y},
\label{V11eq0-intro}
\end{equation*}
is assumed to be non-zero. We designate these cases:
\[ \textrm{\bf Case (1)}\qquad \varepsilon V_{1,1}>0\ \ \ \textrm{ and}\ \ \ \textrm{\bf Case (2)}\qquad \varepsilon V_{1,1}<0.\]
In Appendix \ref{V11-section} we give two explicit families of potentials, superpositions of ``bump functions'' concentrated, respectively, on a triangular lattice, $\Lambda_h^{(a)}$, and on a honeycomb structure, ${\bf H}$, which can be tuned between these two cases
by variation of a lattice scale parameter.
Under the condition $\varepsilon V_{1,1}>0$ (Case (1)) and \eqref{small-eps-delta}, we verify the spectral no-fold condition for the zigzag edge in {\bf Theorem \ref{SGC!}}. The existence of zigzag edge states
({\bf Theorem \ref{Hepsdelta-edgestates}}) then follows from Theorem \ref{thm-edgestate} and Corollary \ref{vary_k_parallel}. In particular, for all $\varepsilon$ and $\delta$ satisfying \eqref{small-eps-delta} and for each ${k_{\parallel}}$ near ${\bf K}\cdot{\bf v}_1=2\pi/3$,
the zigzag edge state eigenvalue problem \eqref{EVP-1}
has topologically protected edge states with energies sweeping out a neighborhood of $E_\star^\varepsilon$, where $({\bf K},E_\star^\varepsilon)$ is a Dirac point.
\begin{remark}[Directional versus omnidirectional spectral gaps]\label{BS-conj}
While the regime of weak potentials, implied by \eqref{small-eps-delta}, would at first seem to be a simplifying assumption, we wish to remark on a subtlety for $H^{(\varepsilon,\delta)}_\pm = -\Delta+\varepsilon V \pm \delta\kappa_\infty W$ ($\varepsilon, \delta$ small), which arises precisely in this regime. It is well-known that, for sufficiently weak periodic potentials on $\mathbb{R}^d$, $d\ge2$, there are no spectral gaps; this is related to the
``Bethe-Sommerfeld conjecture'' \cites{Bethe-Sommerfeld:33,Skriganov:79,Dahlberg-Trubowitz:82}. Nevertheless,
if $\varepsilon V_{1,1}>0$, and $\varepsilon$ and $\delta$ are related as in \eqref{small-eps-delta}, then a {\it directional} spectral gap, {\it i.e.} an $L_{k_{\parallel}}^2(\Sigma)-$ spectral gap exists; see Theorem \ref{delta-gap} and Section \ref{intro-nofold}.
\end{remark}
Figure \ref{fig:spectra_vary_delta}
and Figure \ref{fig:k_parallel3} are illustrative of Cases (1) and (2).
The simulations were done for the Hamiltonian
$H^{(\varepsilon,\delta)}$ with $\varepsilon=\pm10$ and $0\le\delta\le10$:
\begin{equation}
\label{VW-numerics}
\begin{split}
H^{(\varepsilon,\delta)}&= -\Delta +\varepsilon V({\bf x}) + \delta\kappa(\delta{\bf k}_2\cdot{\bf x})W({\bf x}),\ \ \kappa(\zeta)=\tanh(\zeta),\\
V({\bf x})&= \sum_{j=0}^2\cos(R^j{\bf k}_1 \cdot{\bf x}),\ \
W({\bf x})= \sum_{j=0}^2(-1)^{\delta_{j2}}\sin(R^j{\bf k}_1 \cdot{\bf x}).
\end{split}
\end{equation}
Here, $R$ is the $2\pi/3-$ rotation matrix displayed in \eqref{Rdef}.
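For this choice of $V$, the distinguished Fourier coefficient may be evaluated directly. Since $R$ is a $2\pi/3-$ rotation, $\{{\bf k}_1,R{\bf k}_1,R^2{\bf k}_1\}=\{{\bf k}_1,{\bf k}_2,-({\bf k}_1+{\bf k}_2)\}$ and, cosine being even,
\begin{equation*}
V({\bf x})\ =\ \cos({\bf k}_1\cdot{\bf x})+\cos({\bf k}_2\cdot{\bf x})+\cos(({\bf k}_1+{\bf k}_2)\cdot{\bf x}),\qquad \textrm{whence}\qquad V_{1,1}=\frac12 ,
\end{equation*}
only the last summand contributing $\tfrac12 e^{i({\bf k}_1+{\bf k}_2)\cdot{\bf x}}$. Thus $\varepsilon V_{1,1}=\varepsilon/2$, and the choices $\varepsilon=+10$ and $\varepsilon=-10$ in the simulations realize Cases (1) and (2), respectively.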
Figure \ref{fig:spectra_vary_delta} displays, for fixed $\varepsilon$, the $L^2_{{\kpar=2\pi/3}}(\Sigma)-$ spectra (plotted horizontally) of $H^{(\varepsilon,\delta)}$ corresponding to a range of $\delta$ values (strength / scale of the domain-wall perturbation) for
Cases (1) $\varepsilon V_{1,1}>0$ (top panel) and (2) $\varepsilon V_{1,1}<0$ (middle and bottom panels).
Figure \ref{fig:k_parallel3} displays, for these cases, the $L^2_{{k_{\parallel}}}(\Sigma)-$ spectra (plotted vertically)
for a range of parallel-quasi-momentum, ${k_{\parallel}}$.
\begin{remark}[Symmetries of ${k_{\parallel}}\mapsto E({k_{\parallel}})$] \label{kpar-symmetry}
Figure \ref{fig:k_parallel3} exhibits some elementary symmetries.
Since the boundary condition for the EVP \eqref{EVP-1}, $\Psi({\bf x}+{\bf v}_1)=e^{i{k_{\parallel}}}\Psi({\bf x})$,
is $2\pi-$ periodic in ${k_{\parallel}}$, the mapping $k_\parallel\mapsto E(k_\parallel)$ is $2\pi-$ periodic. Furthermore, invariance under complex conjugation implies symmetry of $k_\parallel\mapsto E(k_\parallel)$ about ${k_{\parallel}}=0$ and ${k_{\parallel}}=\pi$.
\end{remark}
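The conjugation symmetry noted in Remark \ref{kpar-symmetry} follows from a standard argument, sketched here. Since $H^{(\varepsilon,\delta)}$ has real coefficients,
\begin{equation*}
H^{(\varepsilon,\delta)}\Psi=E\Psi,\ \ \Psi({\bf x}+{\bf v}_1)=e^{ik_\parallel}\Psi({\bf x})
\quad\Longrightarrow\quad
H^{(\varepsilon,\delta)}\overline{\Psi}=E\overline{\Psi},\ \ \overline{\Psi}({\bf x}+{\bf v}_1)=e^{-ik_\parallel}\overline{\Psi}({\bf x}).
\end{equation*}
Hence $E(-k_\parallel)=E(k_\parallel)$ and, combining with $2\pi-$ periodicity, $E(\pi+s)=E(\pi-s)$ for all $s$, which is the symmetry about $k_\parallel=\pi$.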
\subsubsection{Non-topologically protected bifurcations of edge states}\label{unprotected}
In Case (2), where $\varepsilon V_{1,1}<0$, {\bf Theorem \ref{NO-directional-gap!}} implies that the spectral no-fold condition fails and we do not obtain a bifurcation from the Dirac point.
However, through a combination of formal asymptotic analysis and numerical computations, we do find bifurcating branches of edge states. These branches do not emanate from Dirac points (the no-fold condition fails), but rather from a spectral band edge. Moreover, as we discuss below, these states are \underline{not} topologically protected; they may be destroyed by an appropriate localized perturbation of the edge. Case (2) ($\varepsilon V_{1,1}<0$) is illustrated by Figures \ref{fig:spectra_vary_delta} (middle and bottom panels) and Figure \ref{fig:k_parallel3} (bottom panel).
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{E_delta.pdf}
\caption{\footnotesize
$L^2_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1}(\Sigma)-$ spectra, where ${\bf K}\cdot{\bf v}_1=\frac{2}{3}\pi$,
of the Hamiltonian $H^{(\varepsilon,\delta)}$ of \eqref{VW-numerics} for the zigzag edge ($\mathbb{R}{\bf v}_1$).
{\bf Top panel:} Case (1) $\varepsilon V_{1,1}>0$. Topologically protected bifurcation of edge states, described by Theorem \ref{SGC!} (dotted red curve), is seeded by zero-energy mode of a Dirac operator \eqref{multi-dirac-op}. The branch of edge states emanates from intersection of first and second bands ($B_1$ and $B_2$) at $E=E_\star^\varepsilon$ for $\delta=0$; see discussion in Section \ref{zigzag-summary}.
{\bf Middle panel:} Case (2) $\varepsilon V_{1,1}<0$ with domain wall function $\kappa$. Spectral no-fold condition does not hold.
Bifurcation of zigzag edge states from upper endpoint, $E=\widetilde{E}^\varepsilon$, of the first spectral band. This bifurcation is seeded by a bound state of a Schr\"odinger operator \eqref{effective-schroedinger} with effective mass $m_{\rm eff}<0$
and effective potential $Q_{\rm eff}(\zeta)$ (displayed in the inset) and is \emph{not} topologically protected; see discussion in Section \ref{unprotected}.
{\bf Bottom panel:} Case (2) $\varepsilon V_{1,1}<0$ with domain wall function $\kappa_\natural$. Bifurcation from upper endpoint of $B_1$ is destroyed. Bound states bifurcate from the lower edges of the first two spectral bands.
\label{fig:spectra_vary_delta}
}
\end{figure}
In particular, Dirac points occur at the intersection of the second and third spectral bands of $H^{(\varepsilon,0)}=-\Delta+\varepsilon V({\bf x})$ (see Theorem \ref{diracpt-small-thm}), and the failure of the spectral no-fold condition implies that an $L^2_{k_{\parallel}}-$ spectral gap does not open about $E=E_\star^\varepsilon$ for $\delta\ne0$ and small. However, for $\varepsilon V_{1,1}<0$ there is a spectral gap between the first and second spectral bands of $H^{(\varepsilon,0)}$. For the choice of edge-potential
displayed in \eqref{VW-numerics} with $\varepsilon=-10$, a family of nontrivial edge states bifurcates, for $0<|\delta|$ sufficiently small, from the upper edge of the first (lowest) $L^2_{{\kpar=2\pi/3}}-$ spectral band into the spectral gap (dotted blue curve); see middle panel of Figure \ref{fig:spectra_vary_delta}. A bifurcation of a similar nature is discussed in \cites{plotnik2013observation}.
A formal multiple scale analysis clarifies this latter bifurcation.
For ${\bf k}\in{\mathcal{B}}_h$, let $(\widetilde{E}^\varepsilon({\bf k}),\widetilde{\Phi}^\varepsilon({\bf x};{\bf k}))$ denote the eigenpair associated with the lowest spectral band.
In \cites{FLW-2d_materials:15}, we calculate that the edge state bifurcation is seeded by a discrete eigenvalue of an effective Schr\"odinger operator:
\begin{equation}
H^\varepsilon_{\rm eff}= -\frac{1}{2m^\varepsilon_{\rm eff}}\ \frac{\partial^2}{\partial\zeta^2}\ +\ Q^\varepsilon_{\rm eff}(\zeta;\kappa),\ \ {\rm where}\ \ \frac{1}{m^\varepsilon_{\rm eff}}=\ \sum_{i,j=1,2} [ D^2\widetilde{E}^\varepsilon({\bf K})]_{ij} \ {\mathfrak{K}}_2^i\ {\mathfrak{K}}_2^j,\label{effective-schroedinger}\end{equation}
and $Q^\varepsilon_{\rm eff}(\zeta;\kappa) = a\ \kappa'(\zeta) + b\ \left(\kappa^2_\infty-\kappa^2(\zeta) \right)$ is a spatially localized effective potential, depending on $\kappa(\zeta)$ and on constants $a$ and $b$, with $b>0$, which depend on $V$, $W$ and $\widetilde{\Phi}^\varepsilon$.
For the above choice of the zigzag edge-potential (middle panel of Figure \ref{fig:spectra_vary_delta}), we have $m^\varepsilon_{\rm eff}<0$ and the effective potential $Q^\varepsilon_{\rm eff}$, displayed in the figure inset, induces a bifurcation into the gap above the first band.
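For the domain wall $\kappa(\zeta)=\tanh(\zeta)$ of \eqref{VW-numerics} (so $\kappa_\infty=1$), the effective potential takes an explicit form; the following is a sketch:
\begin{equation*}
\kappa'(\zeta)=\operatorname{sech}^2(\zeta),\qquad \kappa_\infty^2-\kappa^2(\zeta)=\operatorname{sech}^2(\zeta),
\qquad\text{so}\qquad Q^\varepsilon_{\rm eff}(\zeta;\kappa)=(a+b)\operatorname{sech}^2(\zeta).
\end{equation*}
If $a+b>0$ (which we do not verify here for the specific parameters), then $-\partial_\zeta^2-2|m^\varepsilon_{\rm eff}|(a+b)\operatorname{sech}^2(\zeta)$ has a negative eigenvalue, since any attractive one-dimensional $\operatorname{sech}^2$ well binds a state; because $m^\varepsilon_{\rm eff}<0$, this corresponds to an eigenvalue of $H^\varepsilon_{\rm eff}$ lying above the band-edge energy, consistent with a bifurcation into the gap above the first band.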
Now, we can construct domain wall functions, $\kappa_{_\natural}(\zeta)$, for which the corresponding $H^\varepsilon_{\rm eff}$ has no point eigenvalues in a neighborhood of the right (upper) edge of the first spectral band; see bottom panel of Figure \ref{fig:spectra_vary_delta}.
If $\kappa(\zeta)$ is chosen as above, then
$Q_{\rm eff}(\zeta; (1-\theta)\kappa+\theta\kappa_{_\natural})$, $0\leq\theta\leq1$, provides a smooth homotopy from a Schr\"odinger Hamiltonian for which there is a bifurcation of edge states ($H^{(\varepsilon,\delta)}$ with domain wall $\kappa$) to one for which the branch of edge states does not exist ($H^{(\varepsilon,\delta)}$ with domain wall $\kappa_\natural$). Therefore, this type of bifurcation is not topologically protected; see \cites{FLW-2d_materials:15} for a more detailed discussion.
This contrast between topologically protected states and non-protected states is explained
and explored numerically, in a one-dimensional setting in \cites{Thorp-etal:15}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{E_kpar.pdf}
\caption{\footnotesize
{\bf Top panel:} $L^2_{{k_{\parallel}}}(\Sigma)-$ spectrum of protected states of $H^{(\varepsilon,\delta)}$, for the case $\varepsilon V_{1,1}>0$. {\bf Bottom panel:} $L^2_{{k_{\parallel}}}(\Sigma)-$ spectrum of non-protected states of $H^{(\varepsilon,\delta)}$ for the case $\varepsilon V_{1,1}<0$. $V$, $W$ and $\kappa$ are chosen as in \eqref{VW-numerics}.
For each fixed ${k_{\parallel}}$, edge states shown in the top panel ($\varepsilon V_{1,1}>0$) arise due to a protected bifurcation from a Dirac point displayed in the top panel of Figure \ref{fig:spectra_vary_delta}. Those edge states indicated in the bottom panel ($\varepsilon V_{1,1}<0$) arise via an edge bifurcation of the type shown in the middle and bottom panels of Figure \ref{fig:spectra_vary_delta}. The band edge energies from which this latter bifurcation takes place are well-separated
from the energy of
the Dirac point which, when $\varepsilon V_{1,1}<0$, lies within the overlap of the second and third spectral bands.
}
\label{fig:k_parallel3}
\end{figure}
\subsection{Remarks on the spectral no-fold condition}\label{intro-nofold}
The spectral no-fold hypothesis of Theorem \ref{thm-edgestate} requires that the dispersion curves obtained by slicing the band structure (situated in $\mathbb{R}^2_{\bf k}\times\mathbb{R}_E$) with a plane through the Dirac point $({\bf K},E_\star)$ containing the direction ${\bm{\mathfrak{K}}}_2$ (the dual direction to the ${\bm{\mathfrak{v}}}_1-$ edge) do not fold over
and thereby fill out energies arbitrarily near $E_\star$.
This essentially implies that via a small perturbation which breaks inversion symmetry (as we do with $H^{(\delta)}=-\Delta+V({\bf x})+\delta\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})$ for $\delta\ne0$) we open an $L^2_{k_{\parallel}}(\Sigma)-$ spectral gap about $E_\star$.
Figure \ref{fig:eps_V11_neg} is illustrative.
%
%
In the first row of plots in Figure \ref{fig:eps_V11_neg}, we consider whether the spectral no-fold condition holds at the Dirac point $({\bf K},E_\star^\varepsilon)$
for the zigzag edge, in the two cases: (1) $\varepsilon V_{1,1}>0$ and (2) $\varepsilon V_{1,1}<0$, as well as for the armchair edge.
The energy level $E=E_\star^\varepsilon$ is indicated with the dotted line. In the left panel we see that for the zigzag edge, the spectral no-fold condition holds if $\varepsilon V_{1,1}>0$. In this case, there is a topologically protected branch of edge states. In the center panel we see that the spectral no-fold condition fails if $\varepsilon V_{1,1}<0$.
Finally, in the right panel we see that it also fails for the armchair slice.
The second row of plots in Figure \ref{fig:eps_V11_neg} illustrates that the spectral no-fold condition controls whether a full $L^2_{k_{\parallel}}-$ spectral gap opens when breaking inversion symmetry.
In particular, for $\delta>0$, $H^{(\varepsilon,\delta)}$ is no longer inversion symmetric. For $\varepsilon V_{1,1}>0$, a spectral gap opens about the Dirac point, between the first and second spectral bands (see Theorem \ref{diracpt-small-thm}). For the zigzag edge with $\varepsilon V_{1,1}<0$ there is no spectral gap about the Dirac point.
(Note, however, that there is a spectral gap between the first and second spectral bands; see the discussion above in Section \ref{unprotected}.)
Similarly, for the armchair edge (right panel) there is no spectral gap for $\delta>0$.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{spectral_no_fold.pdf}
\caption{\footnotesize
Zigzag and armchair slices at the Dirac point $({\bf K},E^\varepsilon_\star)$ of the band structure of $-\Delta+\varepsilon V+\delta\kappa_\infty W$ for $\delta=0$ ({\bf first row}) and $\delta>0$ ({\bf second row}). Insets indicate zigzag and armchair quasi-momentum segments (one-dimensional Brillouin zones) parametrized by $\lambda$, $0\leq\lambda\leq1$. See the discussion in Section \ref{meta-stable?} and Theorem \ref{fourier-edge}.
\label{fig:eps_V11_neg}
}
\end{figure}
\subsection{Are there meta-stable edge states?}\label{meta-stable?}
Consider the Hamiltonian
$ H^{(\delta)} = -\Delta_{\bf x} + V({\bf x}) + \delta\kappa\left(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}\right) W({\bf x})$ (as in \eqref{schro-domain}), corresponding to an {\it arbitrary} rational edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$, {\it i.e.} ${\bm{\mathfrak{v}}}_1=a_1{\bf v}_1+b_1{\bf v}_2$, $a_1$ and $b_1$ co-prime integers, as introduced in the discussion leading up to \eqref{schro-domain}; see also Section \ref{ds-slices}.
Irrespective of whether the spectral no-fold condition holds for the ${\bm{\mathfrak{v}}}_1-$ edge (see Section \ref{intro-nofold} and Definition \ref{SGC}), the multiple scale expansion of Section \ref{formal-multiscale} produces a formal edge state to {\it any finite order} in the small parameter $\delta$.
\noindent {\it But is this formal expansion the expansion of a true edge state?}
We believe the answer is no, if the spectral no-fold condition fails.
Indeed, from Theorem \ref{fourier-edge}, we have that any ${\bm{\mathfrak{v}}}_1-$ edge state, $\Psi\in L^2_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1}$, is a superposition of Floquet-Bloch modes of $H^{(0)}=-\Delta+V$ along the quasimomentum segment:
${\bf K}+\lambda{\bm{\mathfrak{K}}}_2,\ |\lambda|\le1/2$.
The formal expansion of Section \ref{formal-multiscale}, however, is spectrally concentrated on Floquet-Bloch components along this segment which are near the Dirac point, corresponding to $|\lambda|\ll1$. If the spectral no-fold condition fails, the expansion does not capture the effect of resonant coupling to quasi-momenta along this segment ``far from ${\bf K}$'' (corresponding to $\lambda$ bounded away from $\lambda=0$ in Figure \ref{fig:eps_V11_neg}).
\medskip
\noindent{\bf Conjecture:}
{\sl Suppose the spectral no-fold condition fails for the ${\bm{\mathfrak{v}}}_1-$ edge $\mathbb{R}{\bm{\mathfrak{v}}}_1$. Then, $H^{(\delta)}$
has topologically protected long-lived (meta-stable) edge quasi-modes, $\Psi\in H^2_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1,{\rm loc}}(\Sigma)$, but generically has no topologically protected edge states.}
\subsection{Outline}
{\ }
In {\bf Section \ref{honeycomb_basics}} we review spectral theory for two-dimensional periodic Schr\"odinger operators, introduce the triangular lattice, the honeycomb structure and honeycomb lattice potentials.
In {\bf Section \ref{sec:dirac-pts}} we define Dirac points and review the results on the existence of Dirac points for generic honeycomb potentials from \cites{FW:12,FW:14}.
In {\bf Section \ref{ds-slices}} we introduce the notion of an edge or line defect in a bulk (unperturbed) honeycomb structure. Honeycomb structures with edges parallel to a period lattice direction have a translation invariance.
Thus, an important tool is the Fourier decomposition of states which are $L^2$ (localized) in the unbounded direction, transverse to the edge, and propagating (plane-wave like) parallel to the edge.
In {\bf Section \ref{zigzag-edges}} we introduce our class of Hamiltonians, consisting of a bulk honeycomb potential, perturbed by a general line-defect / ${\bm{\mathfrak{v}}}_1-$ edge potential.
In {\bf Section \ref{formal-multiscale}} we give a formal multiple scale construction of edge states to any finite order in the small parameter $\delta$.
In {\bf Section \ref{thm-edge-state}} we formulate general hypotheses which imply the existence of a branch of topologically protected ${\bm{\mathfrak{v}}}_1-$ edge states, bifurcating from the Dirac point. The proof uses a Lyapunov-Schmidt reduction strategy, applied to a system for the Floquet-Bloch amplitudes which is equivalent to the eigenvalue problem. Such a strategy was implemented in a 1D setting in \cites{FLW-MAMS:15}. First, the edge-state eigenvalue problem is formulated in (quasi-) momentum space as an infinite system for the Floquet-Bloch mode amplitudes. We view this system as consisting of two coupled subsystems; one is for the quasi-momentum / energy components ``near'' the Dirac point, $({\bf K},E_\star)$, and the second governs
the components which are ``far'' from the Dirac point. We next solve for the far-energy components as a functional of the near-energy components and thereby obtain a reduction to a closed system for the near-energy components. The construction of this map requires that the spectral no-fold condition holds.
In {\bf Section \ref{zz-gap}} we consider the Hamiltonian, introduced in Section \ref{thm-edge-state}, in the weak-potential (low-contrast) regime and prove the existence of topologically protected {\it zigzag} edge states, under the condition $\varepsilon V_{1,1}>0$.
In {\bf Appendix \ref{V11-section}} we give two families of honeycomb potentials, depending on the lattice scale parameter, $a$, where we can tune between Case (1) $\varepsilon V_{1,1}>0$ and Case (2) $\varepsilon V_{1,1}<0$ by continuously varying the lattice scale parameter.
In a number of places, the proofs of certain assertions are very similar to those of corresponding assertions in \cites{FLW-MAMS:15}. In such cases, we do not repeat the argument, but rather refer to the specific proposition or lemma in \cites{FLW-MAMS:15}.
\subsection{Notation\label{subsec:notation}}
\begin{enumerate}[(1)]
\item ${\bf v}_j,\ j=1,2$ are basis vectors of the triangular lattice in $\mathbb{R}^2$, $\Lambda_h$.
${\bf k}_\ell,\ \ell=1,2$ are dual basis vectors of $\Lambda_h^*$, which satisfy ${\bf k}_\ell\cdot{\bf v}_j=2\pi\delta_{\ell j}$.
\item For ${\bf m}=(m_1,m_2)\in\mathbb{Z}^2$, ${\bf m}\vec{\bf k}=m_1{\bf k}_1+m_2{\bf k}_2$.
\item ${\bm{\mathfrak{v}}}_1=a_1{\bf v}_1+a_2{\bf v}_2\in\Lambda_h$,\ $a_1, a_2$ co-prime integers. The ${\bm{\mathfrak{v}}}_1-$ edge is $\mathbb{R}{\bm{\mathfrak{v}}}_1$.
${\bm{\mathfrak{v}}}_j,\ j=1,2$, is an alternate basis for $\Lambda_h$ with corresponding dual basis, ${\bm{\mathfrak{K}}}_\ell, \ell=1,2$, satisfying ${\bm{\mathfrak{K}}}_\ell\cdot{\bm{\mathfrak{v}}}_j=2\pi\delta_{\ell j}$.
\item ${\bm{\mathfrak{K}}}=(\mathfrak{K}^{(1)},\mathfrak{K}^{(2)})$, $\mathfrak{z}\equiv\mathfrak{K}^{(1)} + i \mathfrak{K}^{(2)}$, $|\mathfrak{z}|=|{\bm{\mathfrak{K}}}|$.
\item $\mathcal{B}$ denotes the Brillouin Zone, associated with $\Lambda_h$, shown in the right panel of Figure \ref{fig:lattices}.
\item $\inner{f,g} = \int\overline{f}g$.
\item $x\lesssim y$ if and only if there exists $C>0$ such that $x \leq Cy$. $x \approx y$ if and only if $x \lesssim y$ and $y \lesssim x$.
\item $L^{p,s}(\mathbb{R})$ is the space of functions $F:\mathbb{R}\rightarrow\mathbb{R}$ such that $(1+\abs{\cdot}^2)^{s/2}F\in L^p(\mathbb{R})$, endowed with the norm
\[\norm{F}_{L^{p,s}(\mathbb{R})} \equiv \norm{(1+\abs{\cdot}^2)^{s/2}F}_{L^p(\mathbb{R})} \approx
\sum_{j=0}^{s}\norm{\abs{\cdot}^jF}_{L^p(\mathbb{R})} < \infty,~~~ 1\leq p\leq \infty.\]
\item For $f,g\in L^2(\mathbb{R}^d)$, the Fourier transform and its inverse are given by
{\small
\begin{equation*}
\mathcal{F}\{f\}(\xi)\equiv\widehat{f}(\xi)=\frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}e^{-iX\cdot\xi}f(X)dX,~~~
\mathcal{F}^{-1}\{g\}(X)\equiv\check{g}(X)=\int_{\mathbb{R}^d}e^{ iX\cdot\xi}g(\xi)d\xi.
\label{FT-def}
\end{equation*}
}
The Plancherel relation states:
$\int_{\mathbb{R}^d} f(x)\overline{g(x)} dx = (2\pi)^d\ \int_{\mathbb{R}^d} \widehat{f}(\xi)\overline{\widehat{g}(\xi)} d\xi .$
\item $\sigma_j$, $j=1,2,3$, denote the Pauli matrices, where
\begin{equation}\label{Pauli-sigma}
\sigma_1 = \begin{pmatrix}0&1\\1&0\end{pmatrix},~~
\sigma_2 = \begin{pmatrix}0&-i\\i&0\end{pmatrix},~~\text{and}~~
\sigma_3 = \begin{pmatrix}1&0\\0&-1\end{pmatrix}.
\end{equation}
\end{enumerate}
\subsection{Acknowledgements}
We would like to thank I. Aleiner, A. Millis, J. Liu and M. Rechtsman for stimulating discussions.
\section{Floquet-Bloch Theory and Honeycomb Lattice Potentials}\label{honeycomb_basics}
We begin with a review of Floquet-Bloch theory; see, for example, \cites{Eastham:74, RS4, kuchment2012floquet, kuchment2016overview}.
\subsection{Fourier analysis on $L^2(\mathbb{R}/\Lambda)$ and $L^2(\Sigma)$}\label{fourier-analysis}
Let $\{{\bm{\mathfrak{v}}}_1,{\bm{\mathfrak{v}}}_2\}$ be a linearly independent set in $\mathbb{R}^2$ and introduce the
\begin{align}
&\text{\bf Lattice: } \Lambda = \mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2 = \{m_1{\bm{\mathfrak{v}}}_1 + m_2{\bm{\mathfrak{v}}}_2 \ : \ m_1,m_2 \in \mathbb{Z} \} ; \nonumber \\%\label{lattice-def} \\
&\text{\bf Fundamental period cell: } \Omega = \{\theta_1{\bm{\mathfrak{v}}}_1 + \theta_2{\bm{\mathfrak{v}}}_2 \ : \ 0\leq\theta_j\leq1,\ j=1,2\} ; \label{Omega-def} \\
&\text{\bf Dual lattice: }
\Lambda^\ast = \mathbb{Z}{\bm{\mathfrak{K}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{K}}}_2 = \{ {\bf m}\vec{\bm{\mathfrak{K}}}=m_1{\bm{\mathfrak{K}}}_1 + m_2{\bm{\mathfrak{K}}}_2 : m_1,m_2 \in \mathbb{Z} \} , \nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad {\bm{\mathfrak{K}}}_i\cdot{\bm{\mathfrak{v}}}_j = 2\pi\delta_{ij},\ 1\leq i,j \leq 2; \nonumber\\%\label{dual-lattice-def}\\
&\text{\bf Brillouin zone: } \mathcal{B},\ \textrm{a choice of fundamental dual cell} ; \nonumber \\
&\text{\bf Cylinder: } \Sigma\equiv \mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1 ; \nonumber \\%\label{cylinder-def}\\
&\text{\bf Fundamental domain for $\Sigma$: } \Omega_\Sigma\equiv \{\tau_1{\bm{\mathfrak{v}}}_1 + \tau_2{\bm{\mathfrak{v}}}_2 : 0\leq\tau_1\leq1, \tau_2\in\mathbb{R}\} .\label{Omega-Sigma-def}
\end{align}
\noindent We denote by $L^2(\Omega)$ and $L^2(\Omega_\Sigma)$ the standard
$L^2$ spaces on the domains $\Omega$ and $\Omega_\Sigma$, respectively.
\begin{definition}[The spaces $L^2(\mathbb{R}^2/\Lambda)$ and $L^2_{\bf k}$]\label{L2-spaces}
\begin{enumerate}
\item[(a)] $L^2(\mathbb{R}^2/\Lambda)$ denotes the space of $L^2_{loc}$ functions which are $\Lambda-$ periodic:
$ f\in L^2(\mathbb{R}^2/\Lambda)$ if and only if $f({\bf x}+{\bm{\mathfrak{v}}})=f({\bf x})$ for all $ {\bf x}\in\mathbb{R}^2,\ \ {\bm{\mathfrak{v}}}\in\Lambda$ and
$f\in L^2(\Omega)$.
\item[(b)] $L^2_{\bf k}$ denotes the space of $L^2_{loc}$ functions which satisfy a pseudo-periodic boundary condition: $ f({\bf x}+{\bm{\mathfrak{v}}})=e^{i{\bf k}\cdot{\bm{\mathfrak{v}}}}f({\bf x})$ for all ${\bf x}\in\mathbb{R}^2,\ \ {\bm{\mathfrak{v}}}\in\Lambda$ and $e^{-i{\bf k}\cdot{\bf x}}f({\bf x})\in L^2(\mathbb{R}^2/\Lambda)$.
For $f$ and $g$ in $L^2_{\bf k}$, $\overline{f}g$ is in $L^1(\mathbb{R}^2/\Lambda)$ and we define their inner product by
\begin{equation*}
\label{L2k_inner_def}
\inner{f,g}_{L^2_{\bf k}} = \int_\Omega \overline{f({\bf x})} g({\bf x}) d{\bf x}.
\end{equation*}
\end{enumerate}
\end{definition}
%
\begin{definition}[The spaces $L^2(\Sigma)$ and $L^2_{k_\parallel}$]\label{L2Sigma-spaces}
%
\begin{enumerate}
\item [(a)] $L^2(\Sigma)=L^2(\mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1)$ denotes the space of $L^2_{loc}$ functions which are periodic in the direction of ${\bm{\mathfrak{v}}}_1$: $f({\bf x}+{\bm{\mathfrak{v}}}_1)=f({\bf x}), \text{\ \ for\ all\ } {\bf x}\in\mathbb{R}^2$ and such that $f\in L^2(\Omega_\Sigma)$,
where $\Omega_\Sigma$ is the fundamental domain for $\Sigma$; see \eqref{Omega-Sigma-def}.
\item [(b)] $L^2_{{k_{\parallel}}}(\Sigma)=L^2_{{k_{\parallel}}}$ denotes the space of $L^2_{loc}$ functions:
\begin{enumerate}
\item [(1)] which are ${k_{\parallel}}-$ pseudo-periodic in the direction ${\bm{\mathfrak{v}}}_1$:
\begin{equation*}
f({\bf x}+{\bm{\mathfrak{v}}}_1)=e^{i{k_{\parallel}}}f({\bf x}), \text{\ \ for } {\bf x}\in\mathbb{R}^2, \ \ \text{and}
\end{equation*}
\item [(2)] such that $e^{-i (1/2\pi ) k_{\parallel}{\bm{\mathfrak{K}}}_1\cdot{\bf x}}f({\bf x})$, which is defined on $\Sigma$, is in $L^2(\Omega_\Sigma)$.
\end{enumerate}
For $f$ and $g$ in $L^2_{k_{\parallel}}(\Sigma)$, $\overline{f}g$ is in $L^1(\Sigma)$ and we define their inner product by
\begin{equation*}
\label{L2kpar_inner_def}
\inner{f,g}_{L^2_{k_{\parallel}}} = \int_{\Omega_\Sigma} \overline{f({\bf x})} g({\bf x}) d{\bf x}.
\end{equation*}
\end{enumerate}
%
%
The respective Sobolev spaces $H^s(\mathbb{R}^2/\Lambda)$, $H^s_{\bf k}$, $H^s(\Sigma)$ and $H^s_{{k_{\parallel}}}(\Sigma)=H^s_{{k_{\parallel}}}$ are defined in a natural way.
\noindent {\bf Simplified notational convention:} We shall do many calculations requiring us to explicitly write out inner products like $\inner{f,g}_{L^2(\Sigma)}$ and $\inner{f,g}_{L^2_{k_{\parallel}}(\Sigma)}$. We shall write these as $ \int_\Sigma \overline{f({\bf x})}g({\bf x})\ d{\bf x}$ rather than as
$ \int_{\Omega_{_\Sigma}} \overline{f({\bf x})}g({\bf x})\ d{\bf x}$.
\end{definition}
If $f\in L^2(\mathbb{R}^2/\Lambda)$, then it can be expanded in a Fourier series:
\begin{equation}
\label{Omega-fourier}
f({\bf x}) = \sum_{{\bf m}\in\mathbb{Z}^2} f_{{\bf m}} e^{i{\bf m}\vec{\bm{\mathfrak{K}}}\cdot{\bf x}}, \quad f_{{\bf m}} = \frac{1}{|\Omega|}\int_\Omega e^{-i{\bf m}\vec{\bm{\mathfrak{K}}}\cdot{\bf y}}f({\bf y})d{\bf y} \ , \ \ {\bf m}\vec{\bm{\mathfrak{K}}}=m_1{\bm{\mathfrak{K}}}_1+m_2{\bm{\mathfrak{K}}}_2 \ ,
\end{equation}
where $|\Omega|$ denotes the area of the fundamental cell, $\Omega$.
In Section \ref{Fourier-edge} we show that
if $g\in L^2(\Sigma)$, then it can be expanded in a Fourier series in ${\bm{\mathfrak{K}}}_1\cdot{\bf x}$ and a Fourier transform in ${\bm{\mathfrak{K}}}_2\cdot{\bf x}$:
\begin{align*}
g({\bf x}) &= 2\pi\ \sum_{n\in\mathbb{Z}}\int_\mathbb{R} \widehat{g}_n(2\pi\xi) e^{i\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}} d\xi e^{in{\bm{\mathfrak{K}}}_1\cdot{\bf x}}\ , \\
2\pi\ \widehat{g}_n(2\pi\xi) &= \frac{1}{\left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right| } \int_\Sigma e^{-i\xi{\bm{\mathfrak{K}}}_2\cdot{\bf y}} e^{-in{\bm{\mathfrak{K}}}_1\cdot{\bf y}} g({\bf y}) d{\bf y} \ .
\end{align*}
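Written in the lattice coordinates ${\bf x}=\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2$ (so that ${\bm{\mathfrak{K}}}_1\cdot{\bf x}=2\pi\tau_1$ and ${\bm{\mathfrak{K}}}_2\cdot{\bf x}=2\pi\tau_2$), this decomposition is simply a Fourier series in the periodic variable $\tau_1$ composed with a Fourier transform in $\tau_2$; a sketch of the change of variables, using $d{\bf y}=|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2|\,d\tau_1 d\tau_2$, gives
\begin{equation*}
2\pi\,\widehat{g}_n(2\pi\xi)=\int_0^1\!\!\int_{\mathbb{R}} e^{-2\pi i\xi\tau_2}\,e^{-2\pi in\tau_1}\,g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2)\,d\tau_2\,d\tau_1\ ,
\end{equation*}
which is the $n^{\rm th}$ Fourier coefficient in $\tau_1$, followed by the one-dimensional Fourier transform, with the normalization of Section \ref{subsec:notation}, evaluated at $2\pi\xi$.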
\subsection{Floquet-Bloch Theory}\label{flo-bl-theory}
Let $Q({\bf x})$ denote a real-valued potential which is periodic with respect to $\Lambda$. We shall assume throughout this paper that $Q\in C^\infty(\mathbb{R}^2/\Lambda)$, although we expect that this condition can be relaxed without much extra work.
Introduce the Schr\"odinger Hamiltonian
$H \equiv -\Delta + Q({\bf x})$.
For each ${\bf k}\in\mathbb{R}^2$, we study the {\it Floquet-Bloch eigenvalue problem} on $L^2_{\bf k}$:
\begin{align}
\label{fl-bl-evp}
&H \Phi({\bf x};{\bf k}) = E({\bf k}) \Phi({\bf x};{\bf k}), \ \ {\bf x}\in\mathbb{R}^2, \\
&\Phi({\bf x}+{\bm{\mathfrak{v}}};{\bf k})=e^{i{\bf k}\cdot{\bm{\mathfrak{v}}}}\Phi({\bf x};{\bf k}), \ \ \forall {\bm{\mathfrak{v}}}\in\Lambda \nonumber .
\end{align}
An $L^2_{\bf k}$ solution of \eqref{fl-bl-evp} is called a {\it Floquet-Bloch} state.
Since the ${\bf k}-$ pseudo-periodic boundary condition in \eqref{fl-bl-evp} is invariant under translations in the dual period lattice, $\Lambda^\ast$, it suffices to restrict our attention to ${\bf k}\in\mathcal{B}$, where $\mathcal{B}$, the {\it Brillouin Zone}, is a fundamental cell in ${\bf k}-$ space.
An equivalent formulation to \eqref{fl-bl-evp} is obtained by setting $\Phi({\bf x};{\bf k})=e^{i{\bf k}\cdot{\bf x}}p({\bf x};{\bf k})$. Then,
\begin{equation}
\label{fl-bl-evp-per}
H({\bf k}) p({\bf x};{\bf k}) = E({\bf k}) p({\bf x};{\bf k}), \ {\bf x}\in\mathbb{R}^2, \quad p({\bf x}+{\bm{\mathfrak{v}}};{\bf k})=p({\bf x};{\bf k}),\ \ {\bm{\mathfrak{v}}}\in\Lambda,
\end{equation}
where
$ H({\bf k}) \equiv -(\nabla+i{\bf k})^2 + Q({\bf x})$
is a self-adjoint operator on $L^2(\mathbb{R}^2/\Lambda)$.
The eigenvalue problem \eqref{fl-bl-evp-per} has a discrete set of eigenvalues
$E_1({\bf k})\leq E_2({\bf k})\leq \cdots \leq E_b({\bf k})\leq \cdots$,
with $L^2(\mathbb{R}^2/\Lambda)-$ eigenfunctions $p_b({\bf x};{\bf k}),\ b=1,2,3,\ldots$.
The maps ${\bf k}\in\mathcal{B}\mapsto E_j({\bf k})$ are, in general, Lipschitz continuous functions; see, for example, Appendix A of \cites{FW:14}. For each ${\bf k}\in\mathcal{B}$, the set $\{p_j({\bf x};{\bf k})\}_{j\geq1}$ can be taken to be a complete orthonormal basis for $L^2(\mathbb{R}^2/\Lambda)$.
As ${\bf k}$ varies over $\mathcal{B}$, $E_b({\bf k})$ sweeps out a closed real interval. The union over $b\ge1$ of these closed intervals is exactly the $L^2(\mathbb{R}^2)-$ spectrum of $H=-\Delta+Q({\bf x})$:
$
\text{spec} \left(H\right) = \bigcup_{{\bf k}\in\mathcal{B}} \text{spec} \left(H({\bf k})\right).
$
Furthermore, the set of Floquet-Bloch modes $\{\Phi_b({\bf x};{\bf k})=e^{i{\bf k}\cdot{\bf x}}p_b({\bf x};{\bf k})\}_{b\geq1,{\bf k}\in\mathcal{B}}$ is complete in $L^2(\mathbb{R}^2)$:
\begin{equation*}
\label{fb-R2-completeness}
f({\bf x}) = \sum_{b\geq1} \int_\mathcal{B} \inner{\Phi_b(\cdot;{\bf k}),f(\cdot)}_{L^2(\mathbb{R}^2)} \Phi_b({\bf x};{\bf k}) d{\bf k}
\equiv \sum_{b\geq1} \int_\mathcal{B} \widetilde{f}_b({\bf k}) \Phi_b({\bf x};{\bf k}) d{\bf k} ,
\end{equation*}
where the sum converges in the $L^2$ norm.
\subsection{The honeycomb period lattice, $\Lambda_h$, and its dual, $\Lambda_h^*$}\label{sec:honeycomb}
Consider $\Lambda_h=\mathbb{Z}{\bf v}_1 \oplus \mathbb{Z}{\bf v}_2$, the equilateral triangular lattice generated by the basis vectors: ${\bf v}_1=(\frac{\sqrt{3}}{2} , \frac{1}{2})^T$, ${\bf v}_2=(\frac{\sqrt{3}}{2} , -\frac{1}{2})^T$; see Figure \ref{fig:lattices}, left panel.
The dual lattice $\Lambda_h^* =\ \mathbb{Z} {\bf k}_1\oplus \mathbb{Z}{\bf k}_2$ is spanned by the dual basis vectors: ${\bf k}_1= q( \frac{1}{2} , \frac{\sqrt{3}}{2} )^T$, ${\bf k}_2= q( \frac{1}{2} , -\frac{\sqrt{3}}{2} )^T$,
where $ q\equiv \frac{4\pi}{\sqrt{3}}$,
with the biorthonormality relations ${\bf k}_{i}\cdot {\bf v}_{{j}}=2\pi\delta_{ij}$. Other useful relations are:
$|{\bf v}_1|=|{\bf v}_2|=1$, ${\bf v}_1\cdot{\bf v}_2=\frac{1}{2}$, $|{\bf k}_1|=|{\bf k}_2|=q$ and
${\bf k}_1\cdot{\bf k}_2=-\frac{1}{2}q^2$.
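These relations follow directly from the definitions; for instance,
\begin{equation*}
{\bf k}_1\cdot{\bf v}_1=q\Big(\tfrac12\cdot\tfrac{\sqrt3}{2}+\tfrac{\sqrt3}{2}\cdot\tfrac12\Big)=\tfrac{\sqrt3}{2}\,q=2\pi,\qquad
{\bf k}_1\cdot{\bf k}_2=q^2\Big(\tfrac14-\tfrac34\Big)=-\tfrac12 q^2 .
\end{equation*}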
The Brillouin zone, ${\mathcal{B}}_h$, is a regular hexagon in $\mathbb{R}^2$. Denote by ${\bf K}$ and ${\bf K'}$ its top and bottom vertices (see right panel of Figure \ref{fig:lattices}) given by:\
${\bf K}\equiv\frac{1}{3}\left({\bf k}_1-{\bf k}_2\right),\ \ {\bf K'}\equiv-{\bf K}=\frac{1}{3}\left({\bf k}_2-{\bf k}_1\right)$.
All six vertices of ${\mathcal{B}}_h$ can be generated by application of the matrix $R$,
which rotates a vector in $\mathbb{R}^2$ clockwise by $2\pi/3$:
\begin{equation}
R\ =\ \left(
\begin{array}{cc}
-\frac{1}{2} & \frac{\sqrt{3}}{2}\\
{} & {}\\
-\frac{\sqrt{3}}{2} & -\frac{1}{2}
\end{array}\right)\ .
\label{Rdef}\end{equation}
The vertices of ${\mathcal{B}}_h$ fall into two groups, generated by the action of $R$ on ${\bf K}$ and ${\bf K}'$:
${\bf K}-$ type-points: $ {\bf K},\ R{\bf K}={\bf K}+{\bf k}_2,\ R^2{\bf K}={\bf K}-{\bf k}_1$, and
${\bf K'}-$ type-points: $ {\bf K'},\ R{\bf K'}={\bf K}'-{\bf k}_2,\ R^2{\bf K'}={\bf K'}+{\bf k}_1$.
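For example, the relation $R{\bf K}={\bf K}+{\bf k}_2$ may be checked directly:
\begin{equation*}
{\bf K}=\tfrac13({\bf k}_1-{\bf k}_2)=\Big(0,\tfrac{q}{\sqrt3}\Big)^T,\qquad
R{\bf K}=\Big(\tfrac{q}{2},-\tfrac{q}{2\sqrt3}\Big)^T={\bf K}+{\bf k}_2\ ,
\end{equation*}
and the remaining relations follow similarly, using $R^3=I$.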
Functions which are periodic on $\mathbb{R}^2$ with respect to the lattice $\Lambda_h$ may be viewed as functions on the torus, $\mathbb{R}^2/\Lambda_h$.
As a fundamental period cell, we choose the parallelogram spanned by ${\bf v}_1$ and ${\bf v}_2$, denoted
$\Omega_h$.
\begin{remark}[Symmetry Reduction]\label{symmetry-reduction}
Let $(\Phi({\bf x};{\bf k}), E({\bf k}))$ denote a Floquet-Bloch eigenpair for the eigenvalue problem \eqref{fl-bl-evp} with quasi-momentum ${\bf k}$. Since the potential is real-valued,
$(\tilde{\Phi}({\bf x};{\bf k})\equiv\overline{\Phi({\bf x};{\bf k})}, E({\bf k}))$ is a Floquet-Bloch eigenpair for the eigenvalue problem with quasi-momentum $-{\bf k}$. The above relations among the vertices of ${\mathcal{B}}_h$ and the $\Lambda_h^*$- periodicity of: ${\bf k}\mapsto E({\bf k})$ and ${\bf k}\mapsto \Phi({\bf x};{\bf k})$ imply that the local character of the dispersion surfaces in a neighborhood of any vertex of ${\mathcal{B}}_h$ is determined by its character about any other vertex of ${\mathcal{B}}_h$.
\end{remark}
\subsection{Honeycomb potentials}\label{honeycomb-potentials}
\begin{definition}[Honeycomb potentials]\label{honeyV}
Let $V$ be real-valued and $V\in C^\infty(\mathbb{R}^2)$.
$V$ is a \underline{honeycomb potential}
if there exists ${\bf x}_0\in\mathbb{R}^2$ such that $\tilde{V}({\bf x})=V({\bf x}-{\bf x}_0)$ has the following properties:
\begin{enumerate}
\item[(V1)] $\tilde{V}$ is $\Lambda_h-$ periodic, {\it i.e.} $\tilde{V}({\bf x}+{\bf v})=\tilde{V}({\bf x})$ for all ${\bf x}\in\mathbb{R}^2$ and ${\bf v}\in\Lambda_h$.
\item[(V2)] $\tilde{V}$ is even or inversion-symmetric, {\it i.e.} $\tilde{V}(-{\bf x})=\tilde{V}({\bf x})$.
\item[(V3)] $\tilde{V}$ is $\mathcal{R}$- invariant, {\it i.e.}
$ \mathcal{R}[\tilde{V}]({\bf x})\ \equiv\ \tilde{V}(R^*{\bf x})\ =\ \tilde{V}({\bf x}),$
where, $R^*$ is the counter-clockwise rotation matrix by $2\pi/3$, {\it i.e.} $R^*=R^{-1}$, where $R$ is given by \eqref{Rdef}.
\end{enumerate}
N.B. Throughout this paper, we shall omit the tildes on $V$ and choose coordinates with ${\bf x}_0=0$.
\end{definition}
Introduce the mapping $\widetilde{R}:\mathbb{Z}^2\to\mathbb{Z}^2$ which acts on the indices of the Fourier coefficients of $V$:
$ \widetilde{R} (m_1, m_2) = (-m_2, m_1-m_2)$ and therefore
$ \widetilde{R}^2 (m_1, m_2) = (m_2-m_1,-m_1)$, and $\widetilde{R}^3(m_1,m_2) = (m_1,m_2)$.
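The mapping $\widetilde{R}$ is the action induced on Fourier indices by the rotation $\mathcal{R}$; a brief sketch, using $R{\bf k}_1={\bf k}_2$ and $R{\bf k}_2=-({\bf k}_1+{\bf k}_2)$ (which one computes from \eqref{Rdef}), gives
\begin{equation*}
e^{i{\bf m}\vec{\bf k}\cdot R^*{\bf x}} = e^{i(R\,{\bf m}\vec{\bf k})\cdot{\bf x}},\qquad
R\,(m_1{\bf k}_1+m_2{\bf k}_2)= -m_2{\bf k}_1+(m_1-m_2){\bf k}_2=(\widetilde{R}{\bf m})\vec{\bf k}\ ,
\end{equation*}
so that $\mathcal{R}-$ invariance of $V$ is equivalent to constancy of the Fourier coefficients $v_{\bf m}$ along $\widetilde{R}-$ orbits.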
Any ${\bf m}\neq0$ lies on an $\widetilde{R}-$ orbit of length exactly three \cites{FW:12}. We say that ${\bf m}$ and ${\bf n}$ are in the same equivalence class if ${\bf m}$ and ${\bf n}$ lie on the same $3-$ cycle. Let $\widetilde{S}$ denote a set consisting of exactly one representative from each equivalence class. Honeycomb lattice potentials have the following Fourier series characterization \cites{FW:12}:
\begin{proposition}
\label{honey-cosine}
Let $V({\bf x})$ denote a honeycomb lattice potential.
Then,
\begin{align*}
V({\bf x}) &= v_{\bf 0} + \sum_{{\bf m}\in\widetilde{S}} \ v_{\bf m} \
\left[ \cos({\bf m}\vec{\bf k}\cdot{\bf x}) + \cos((\widetilde{R}{\bf m})\vec{\bf k}\cdot{\bf x}) + \cos((\widetilde{R}^2{\bf m})\vec{\bf k}\cdot{\bf x}) \right] ,
\end{align*}
where ${\bf m}\vec{\bf k} = m_1{\bf k}_1 + m_2{\bf k}_2$ and the $v_{\bf m}$ are real.
\end{proposition}
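As a concrete illustration (a standard example, not singled out in \cites{FW:12}), take $v_{\bf 0}=0$ and retain only the representative ${\bf m}=(1,0)$ with $v_{(1,0)}=1$. Since $\widetilde{R}(1,0)=(0,1)$ and $\widetilde{R}^2(1,0)=(-1,-1)$, and cosine is even, this yields the lowest-harmonic honeycomb potential
\begin{equation*}
V({\bf x})\ =\ \cos({\bf k}_1\cdot{\bf x})\ +\ \cos({\bf k}_2\cdot{\bf x})\ +\ \cos\left(({\bf k}_1+{\bf k}_2)\cdot{\bf x}\right).
\end{equation*}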
\section{Dirac Points}\label{sec:dirac-pts}
In this section we summarize results of \cites{FW:12} on {\it Dirac points}. These are conical singularities in the dispersion surfaces of $H_V=-\Delta+V({\bf x})$, where $V$ is a honeycomb lattice potential.
Let ${\bf K}_\star$ denote any vertex of ${\mathcal{B}}_h$, and recall that $L^2_{{\bf K}_\star}$ is the space of ${\bf K}_\star-$ pseudo-periodic functions.
A key property of honeycomb lattice potentials, $V$, is that $H_V$ and $\mathcal{R}$, defined in (V3) of Definition \ref{honeyV}, leave a dense subspace of $L^2_{{\bf K}_\star}$ invariant. Furthermore, restricted to this dense subspace of $L^2_{{\bf K}_\star}$, $H_{V}$ commutes with $\mathcal{R}$: $\left[\mathcal{R},H_{V}\right] = 0$.
Since $\mathcal{R}$ has eigenvalues $1,\tau$ and $\overline{\tau}$, it is natural to split $L^2_{{\bf K}_\star}$ into the direct sum:
\begin{equation*}
L^2_{{\bf K}_\star}\ =\ L^2_{{\bf K}_\star,1}\oplus L^2_{{\bf K}_\star,\tau}\oplus L^2_{{\bf K}_\star,\overline\tau}.\label{L2-directsum}
\end{equation*}
Here, $L^2_{{\bf K}_\star,\sigma}$, where $\sigma=1,\tau,\overline{\tau}$ and $\tau=\exp(2\pi i/3)$, denote the invariant eigenspaces of $\mathcal{R}$:
\begin{equation*}
L^2_{{\bf K}_\star,\sigma}\ =\ \Big\{g\in L^2_{{\bf K}_\star}: \mathcal{R}g=\sigma g\Big\}\ .
\label{L2Ksigma} \end{equation*}
We next give a precise definition of a Dirac point.
\begin{definition}\label{dirac-pt-defn}
Let $V({\bf x})$ be a smooth, real-valued, even (inversion symmetric) and periodic potential on $\mathbb{R}^2$.
Denote by ${\mathcal{B}}_h$, the Brillouin zone. Let ${\bf K}\in{\mathcal{B}}_h$.
The energy / quasi-momentum pair $({\bf K},E_\star)\in{\mathcal{B}}_h\times\mathbb{R}$ is called a {\it Dirac point} if there exists $b_\star\ge1$ such that:
\begin{enumerate}
\item $E_\star$ is an $L^2_{\bf K}-$ eigenvalue of $H_V$ of multiplicity two.
\item $\textrm{Nullspace}\Big(H_V-E_\star I\Big)\ =\
{\rm span}\Big\{ \Phi_1({\bf x}) , \Phi_2({\bf x})\Big\}$,
where $\Phi_1\in L^2_{{\bf K},\tau}\ (\mathcal{R}\Phi_1=\tau\Phi_1)$ and
$\Phi_2({\bf x}) = \left(\mathcal{C}\circ\mathcal{I}\right)[\Phi_1]({\bf x}) = \overline{\Phi_1(-{\bf x})}\in L^2_{{\bf K},\bar\tau}\ (\mathcal{R}\Phi_2=\overline{\tau}\Phi_2)$,
and $\inner{\Phi_a, \Phi_b}_{L^2_{\bf K}(\Omega)} = \delta_{ab}$, $a,b=1,2$.
\item There exist $\lambda_{\sharp}\ne0$, $\zeta_0>0$, and Floquet-Bloch eigenpairs
\[ {\bf k}\mapsto (\Phi_{b_\star+1}({\bf x};{\bf k}),E_{b_\star+1}({\bf k}))\ \ {\rm and}\ \ {\bf k}\mapsto (\Phi_{b_\star}({\bf x};{\bf k}),E_{b_\star}({\bf k})),\]
and Lipschitz functions $e_j({\bf k}),\ j=b_\star, b_\star+1$, where $e_j({\bf K})=0$, defined for $|{\bf k}-{\bf K}|<\zeta_0$ such that
\begin{align}
E_{b_\star+1}({\bf k})-E_\star\ &=\ + |\lambda_\sharp|\
\left| {\bf k}-{\bf K} \right|\
\left( 1\ +\ e_{b_\star+1}({\bf k}) \right),\nonumber\\
E_{b_\star}({\bf k})-E_\star\ &=\ - |\lambda_\sharp|\
\left| {\bf k}-{\bf K} \right|\
\left( 1\ +\ e_{b_\star}({\bf k}) \right),\label{cones}
\end{align}
where $|e_j({\bf k})|\le C|{\bf k}-{\bf K}|,\ j=b_\star, b_\star+1$, for some $C>0$.
\end{enumerate}
\end{definition}
In \cites{FW:12}, the authors prove the following
\begin{proposition}\label{lambda-is=|lambdasharp|}
Suppose conditions $1$ and $2$ of Definition \ref{dirac-pt-defn} hold and let
$\{c({\bf m})\}_{{\bf m}\in\mathcal S}$ denote the sequence of $L^2_{{\bf K},\tau}-$ Fourier-coefficients of $\Phi_1({\bf x})$ normalized as in \cites{FW:12}.
Define the sum
\begin{equation}
\lambda_\sharp\ \equiv\ \sum_{{\bf m}\in\mathcal{S}} c({\bf m})^2\ \left(\begin{array}{c}1\\ i\end{array}\right)\cdot \left({\bf K}+{\bf m}\vec{\bf k}\right)\ .
\label{lambda-sharp}
\end{equation}
Here, $\mathcal{S}\subset\mathbb{Z}^2$ is defined in \cites{FW:12}.
If $\lambda_\sharp\ne0$, then condition
$3$ of Definition \ref{dirac-pt-defn} holds (see \eqref{cones}).
\end{proposition}
\noindent Therefore Dirac points are found by verifying conditions $1$ and $2$ of Definition \ref{dirac-pt-defn} and the additional (non-degeneracy) condition: $\lambda_\sharp\ne0$.
Furthermore, Theorem 4.1 of \cites{FW:12} and Theorem 3.2 of \cites{FW:14} imply the following local behavior of Floquet-Bloch modes near the Dirac point:
\footnote{The factor $\frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}$ in \eqref{Phib_star+1}-\eqref{Phib_star} corrects
a typographical error in equation (3.13) of \cites{FW:14}.}
\begin{corollary}\label{Phibstar-bstar+1}
\begin{align}
\Phi_{b_\star+1}({\bf x};{\bf k})\ &=\ \frac{1}{\sqrt2}\ \Big[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\
\frac{({\bf k}-{\bf K})^{(1)}+i({\bf k}-{\bf K})^{(2)}}{|{\bf k}-{\bf K}|}\ \Phi_1({\bf x})\ +\ \Phi_2({\bf x})\ \Big] + \Phi_{b_\star+1}^{(1)}({\bf x};{\bf k}), \label{Phib_star+1}\\
\Phi_{b_\star}({\bf x};{\bf k})\ &=\ \frac{1}{\sqrt2}\ \Big[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\
\frac{({\bf k}-{\bf K})^{(1)}+i({\bf k}-{\bf K})^{(2)}}{|{\bf k}-{\bf K}|}\ \Phi_1({\bf x})\ -\ \Phi_2({\bf x})\ \Big] + \Phi_{b_\star}^{(1)}({\bf x};{\bf k}) ,
\label{Phib_star}\end{align}
where $ \Phi_{j}^{(1)}(\cdot;{\bf k}) = \mathcal{O}(|{\bf k}-{\bf K}|)$ in $H^2(\Omega_h)$ as $|{\bf k}-{\bf K}|\to0$.
\end{corollary}
In the next section we discuss the result of \cites{FW:12}, that $-\Delta +\varepsilon V$ has Dirac points for generic $\varepsilon$.
\subsection{Dirac points of $-\Delta+\varepsilon V({\bf x})$, $\varepsilon$ generic} \label{dpts-generic-eps}
The strategy used in \cites{FW:12} to produce Dirac points is based on a bifurcation theory for the operator $-\Delta + \varepsilon V({\bf x})$ acting in $L^2_{\bf K}$, from the $\varepsilon=0$ limit. We describe the setup here, since we shall make detailed use of it.
Consider $-\Delta$ acting on $L^2_{\bf K}$. We note that $E^0_\star\equiv |{\bf K}|^2$ is an eigenvalue with multiplicity three, since the three vertices ${\bf K}, R{\bf K}$ and $R^2{\bf K}$ of the regular hexagon ${\mathcal{B}}_h$ are equidistant from the origin.
The corresponding three-dimensional eigenspace has an orthonormal basis consisting of the functions: $\Phi_\sigma({\bf x}) = e^{i{\bf K}\cdot{\bf x}}p_\sigma({\bf x}) \in L^2_{{\bf K},\sigma},\ \sigma=1,\tau,\overline{\tau}$, defined by
\begin{align}
\Phi_\sigma ({\bf x}) &= e^{i{\bf K}\cdot{\bf x}} p_\sigma({\bf x}) , \qquad ( \sigma=1,\tau,\overline{\tau} )\nonumber\\
&= \frac{1}{\sqrt{3|\Omega|}}\ \Big[\ e^{i{\bf K}\cdot{\bf x}} + \overline{\sigma} e^{iR{\bf K}\cdot{\bf x}}+ \sigma e^{iR^2{\bf K}\cdot{\bf x}}\ \Big] \nonumber \\
&= \frac{1}{\sqrt{3|\Omega|}}\ e^{i{\bf K}\cdot{\bf x}} \Big[\ 1 + \overline{\sigma} e^{i{\bf k}_2\cdot{\bf x}} + \sigma e^{-i{\bf k}_1\cdot{\bf x}}\ \Big]\ .
\label{p_sigma}
\end{align}
We note that
\begin{equation}
\inner{\Phi_\sigma,\Phi_{\tilde{\sigma}}}_{L^2_{\bf K}}=
\inner{p_\sigma, p_{\tilde{\sigma}}}_{L^2(\mathbb{R}^2/\Lambda_h)}= \delta_{\sigma,{\tilde{\sigma}}} \ .
\label{orthon}\end{equation}
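That $\Phi_\sigma\in L^2_{{\bf K},\sigma}$ may be verified directly: since $R^{*\,T}=R$, we have ${\bf K}\cdot R^*{\bf x}=R{\bf K}\cdot{\bf x}$, and hence, using $R^3{\bf K}={\bf K}$ and $\sigma^3=1$,
\begin{equation*}
\mathcal{R}[\Phi_\sigma]({\bf x})\ =\ \Phi_\sigma(R^*{\bf x})\ =\ \frac{1}{\sqrt{3|\Omega|}}\ \Big[\ e^{iR{\bf K}\cdot{\bf x}} + \overline{\sigma}\ e^{iR^2{\bf K}\cdot{\bf x}} + \sigma\ e^{i{\bf K}\cdot{\bf x}}\ \Big]\ =\ \sigma\,\Phi_\sigma({\bf x}).
\end{equation*}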
%
In Theorem 5.1 of \cites{FW:12}, the authors proved that for real, small and non-zero $\varepsilon$
and under the assumption that
$V$ satisfies the non-degeneracy condition:
\begin{equation}
V_{1,1}\ \equiv\
\frac{1}{|\Omega_h|} \int_{\Omega_h} e^{-i({\bf k}_1+{\bf k}_2)\cdot{\bf y}}\ V({\bf y})\ d{\bf y}\ne0,
\label{V11eq0}
\end{equation}
the multiplicity three eigenvalue, $E^0_\star=|{\bf K}|^2$, splits into\\
(A) a multiplicity two eigenvalue, $E^\varepsilon_\star$, with two-dimensional $L^2_{{\bf K},\tau}\oplus L^2_{{\bf K},\overline{\tau}}-$ eigenspace structure, and\\
(B) a simple eigenvalue, $\widetilde{E}^\varepsilon$, with one-dimensional eigenspace, a subspace of $L^2_{{\bf K},1}$.
For all $\varepsilon$ sufficiently small, the energy/quasi-momentum pairs $({\bf K},E^\varepsilon_\star)$ are Dirac points in the sense of Definition \ref{dirac-pt-defn}.
Furthermore,
a continuation argument is then used to extend this result from the regime of sufficiently small $\varepsilon$ to the regime of arbitrary $\varepsilon$ outside of a possible discrete set; see \cites{FW:12}
and the refinement concerning the possible exceptional set of $\varepsilon$ values in Appendix D of \cites{FLW-MAMS:15}.
We first state the result for arbitrarily large and generic $\varepsilon$, and then the more refined picture for $|\varepsilon|>0$ and sufficiently small.
\begin{theorem}[Generic $\varepsilon$]\label{diracpt-thm}
Let $V({\bf x})$ be a honeycomb lattice potential and consider the
parameter family of Schr\"odinger operators:
\begin{equation*}
H^{(\varepsilon)}\ \equiv\ -\Delta + \varepsilon\ V({\bf x}),
\label{Heps-def}
\end{equation*}
where $V$ satisfies the non-degeneracy condition \eqref{V11eq0}.
Then, there exists $\varepsilon_0>0$, such that for all real and nonzero $\varepsilon$, outside of a possible discrete subset of $\mathbb{R}\setminus(-\varepsilon_0,\varepsilon_0)$, $H^{(\varepsilon)}$ has Dirac points $({\bf K},E^\varepsilon_\star)$ in the sense of Definition \ref{dirac-pt-defn}.
Specifically, for all such $\varepsilon$,
there exists $b_\star\ge1$ such that $E^\varepsilon_\star\equiv E^\varepsilon_{b_\star}({\bf K})=E^\varepsilon_{b_\star+1}({\bf K})$ is a ${\bf K}-$ pseudo-periodic eigenvalue of multiplicity two,
where
\begin{enumerate}
\item
(a) $E^\varepsilon_\star$ is an $L^2_{{\bf K},\tau}-$ eigenvalue of $H^{(\varepsilon)}$ of multiplicity one, with corresponding eigenfunction,
$\Phi^\varepsilon_1({\bf x})$.\\
(b) $E^\varepsilon_\star$ is an $L^2_{{\bf K},\bar\tau}-$ eigenvalue of $H^{(\varepsilon)}$ of multiplicity one, with corresponding eigenfunction, $\Phi^\varepsilon_2({\bf x})=\overline{\Phi^\varepsilon_1(-{\bf x})}$.\\
(c) $E^\varepsilon_\star$ is \underline{not} an $L^2_{{\bf K},1}-$ eigenvalue of $H^{(\varepsilon)}$.
\item There exist $\delta_\varepsilon>0,\ C_\varepsilon>0$
and Floquet-Bloch eigenpairs: $(\Phi_j^\varepsilon({\bf x};{\bf k}), E_j^\varepsilon({\bf k}))$
and Lipschitz continuous functions, $e^\varepsilon_j({\bf k})$, $j=b_\star, b_\star+1$,
defined for $|{\bf k}-{\bf K}|<\delta_\varepsilon$, such that
\begin{align}
E^\varepsilon_{b_\star+1}({\bf k})-E^\varepsilon_\star\ &=\ +\ |\lambda^\varepsilon_\sharp|\
\left| {\bf k}-{\bf K} \right|\
\left( 1\ +\ e^\varepsilon_{b_\star+1}({\bf k}) \right)\ \ {\rm and}\nonumber\\
E^\varepsilon_{b_\star}({\bf k})-E^\varepsilon_\star\ &=\ -\ |\lambda^\varepsilon_\sharp|\
\left| {\bf k}-{\bf K} \right|\
\left( 1\ +\ e^\varepsilon_{b_\star}({\bf k}) \right),\label{conical}
\end{align}
and where
\begin{equation}
\lambda_\sharp^\varepsilon\ \equiv\ \sum_{{\bf m}\in\mathcal{S}} c({\bf m},E_\star^\varepsilon,\varepsilon)^2\ \left(\begin{array}{c}1\\ i\end{array}\right)\cdot \left({\bf K}+{\bf m}\vec{\bf k}\right) \ \ne\ 0
\label{lambda-sharp2}
\end{equation}
is given in terms of $\{c({\bf m},E_\star^\varepsilon,\varepsilon)\}_{{\bf m}\in\mathcal{S}}$, the $L^2_{{\bf K},\tau}-$ Fourier coefficients of $\Phi_1^\varepsilon({\bf x};{\bf K})$.
Furthermore, $|e_j^\varepsilon({\bf k})| \le C_\varepsilon |{\bf k}-{\bf K}|$, $j=b_\star, b_\star+1$.
Thus, in a neighborhood of the point $({\bf k},E)=({\bf K},E_\star^\varepsilon)\in \mathbb{R}^3 $, the dispersion surface is closely approximated by a circular {\it cone}.
\end{enumerate}
\end{theorem}
\subsection{Dirac points of $-\Delta+\varepsilon V({\bf x})$, $\varepsilon$ small} \label{dpts-small-eps}
In this section we collect explicit information on Dirac points for the weak potential regime.
\begin{theorem}[Small $\varepsilon$]\label{diracpt-small-thm}
There exists $\varepsilon_0>0$, such that for all $\varepsilon\in I_{\varepsilon_0}\equiv(-\varepsilon_0,\varepsilon_0)\setminus\{0\}$ the following holds:
\begin{enumerate}
\item For $\varepsilon\in I_{\varepsilon_0}$, $-\Delta +\varepsilon V({\bf x})$ has
\begin{enumerate}[(a)]
\item a multiplicity two $L^2_{{\bf K}}$- eigenvalue $E^\varepsilon_\star$, where $\textrm{ker}(-\Delta + \varepsilon V - E^\varepsilon_\star I)\subset L^2_{{\bf K},\tau}\oplus
L^2_{{\bf K},\overline{\tau}}$, and
\item a multiplicity one $L^2_{{\bf K}}$- eigenvalue $\widetilde{E}_\star^\varepsilon$, where $\textrm{ker}(-\Delta + \varepsilon V - \widetilde{E}_\star^\varepsilon I)\subset L^2_{{\bf K},1}$.
\end{enumerate}
\item The maps $\varepsilon\mapsto E^\varepsilon_\star$ and $\varepsilon\mapsto \widetilde{E}^\varepsilon_\star$ are well defined for all $\varepsilon$ in the deleted neighborhood of zero, $I_{\varepsilon_0}$. They are constructed via perturbation theory of a simple eigenvalue in
$L^2_{{\bf K},\tau}$ and in $L^2_{{\bf K},1}$, respectively. Therefore, $E^\varepsilon_\star$ and $\widetilde{E}^\varepsilon_\star$ are real-analytic functions of $\varepsilon\in I_{\varepsilon_0}$. Moreover, they have the expansions:
\begin{align}
E^\varepsilon_\star\ &= |{\bf K}|^2 + \varepsilon(V_{0,0}-V_{1,1})+\mathcal{O}(\varepsilon^2), \label{E*expand}\\
\widetilde{E}^\varepsilon_\star &=|{\bf K}|^2 + \varepsilon(V_{0,0}+2V_{1,1})+\mathcal{O}(\varepsilon^2). \label{tildeE*expand}
\end{align}
\item If $\varepsilon V_{1,1}>0$, then conical intersections occur between the $1^{st}$ and $2^{nd}$ dispersion surfaces at the vertices of $\mathcal{B}_h$. Specifically, \eqref{conical} holds with $b_\star=1$.
\item If $\varepsilon V_{1,1}<0$,
then conical intersections occur between the $2^{nd}$ and $3^{rd}$ dispersion surfaces at the vertices of $\mathcal{B}_h$. Specifically, \eqref{conical} holds with $b_\star=2$.
For $\varepsilon\in I_{\varepsilon_0}$,
\begin{equation}
| \lambda_\sharp^\varepsilon| = 4\pi |\Omega_h| + \mathcal{O}(\varepsilon) = 4\pi|{\bf v}_1\wedge{\bf v}_2|+ \mathcal{O}(\varepsilon).
\label{lambda-sharp-expand}
\end{equation}
\end{enumerate}
\end{theorem}
\noindent The expansions \eqref{E*expand}, \eqref{tildeE*expand} and \eqref{lambda-sharp-expand} are displayed in equations (6.22), (6.25) and (6.30) of \cites{FW:12}.
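To illustrate the non-degeneracy condition, consider the lowest-harmonic honeycomb potential $V({\bf x})=\cos({\bf k}_1\cdot{\bf x})+\cos({\bf k}_2\cdot{\bf x})+\cos(({\bf k}_1+{\bf k}_2)\cdot{\bf x})$ (an example for illustration, not drawn from \cites{FW:12}). Only the last term contributes to \eqref{V11eq0}:
\begin{equation*}
V_{1,1}\ =\ \frac{1}{|\Omega_h|}\int_{\Omega_h} e^{-i({\bf k}_1+{\bf k}_2)\cdot{\bf y}}\ \cos\left(({\bf k}_1+{\bf k}_2)\cdot{\bf y}\right)\ d{\bf y}\ =\ \frac12\ \ne\ 0.
\end{equation*}
Hence, by parts 3 and 4 of Theorem \ref{diracpt-small-thm}, for this potential and $\varepsilon$ small, conical intersections occur between the first and second dispersion surfaces when $\varepsilon>0$, and between the second and third when $\varepsilon<0$.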
\noindent The intersections of the first two dispersion surfaces for $\varepsilon V_{1,1}>0$, and of the second and third dispersion surfaces for $\varepsilon V_{1,1}<0$, are illustrated in the first two panels of Figure \ref{fig:eps_V11_neg} along a dispersion slice corresponding to the zigzag edge.
\section{Edges and dual slices}\label{ds-slices}
Edge states are solutions of an eigenvalue equation on $\mathbb{R}^2$, which are spatially localized transverse to a line-defect or ``edge'' and propagating (plane-wave like or pseudo-periodic) parallel to the edge. Recall that $\Lambda_h=\mathbb{Z}{\bf v}_1\oplus\mathbb{Z}{\bf v}_2$ and $\Lambda_h^*=\mathbb{Z}{\bf k}_1\oplus\mathbb{Z}{\bf k}_2$. We consider edges which are lines
of the form $\mathbb{R} (a_1{\bf v}_1+b_1{\bf v}_2)$, where $(a_1,b_1)=1$, {\it i.e.} $a_1$ and $b_1$ are relatively prime.
We fix an edge by choosing ${\bm{\mathfrak{v}}}_1=a_1{\bf v}_1+b_1{\bf v}_2$, where $(a_1, b_1)=1$. Since
$a_1, b_1$ are relatively prime, there exists a relatively prime pair of integers: $a_2,b_2$ such that $a_1b_2-a_2b_1=1$.
Set ${\bm{\mathfrak{v}}}_2 = a_2 {\bf v}_1 + b_2 {\bf v}_2$.
%
It follows that $\mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2=\mathbb{Z}{\bf v}_1\oplus\mathbb{Z}{\bf v}_2=\Lambda_h$.
Since $a_1b_2-a_2b_1=1$, we have dual lattice vectors ${\bm{\mathfrak{K}}}_1, {\bm{\mathfrak{K}}}_2\in\Lambda_h^*$, given by
\[ {\bm{\mathfrak{K}}}_1=b_2{\bf k}_1-a_2{\bf k}_2,\ \ {\bm{\mathfrak{K}}}_2=-b_1{\bf k}_1+a_1{\bf k}_2, \]
which satisfy
\begin{equation*} {\bm{\mathfrak{K}}}_\ell \cdot {\bm{\mathfrak{v}}}_{\ell'} = 2\pi \delta_{\ell, \ell'},\ \ 1\leq \ell, \ell' \leq 2.\label{ktilde-orthog}
\end{equation*}
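Indeed, these relations follow directly from ${\bf k}_\ell\cdot{\bf v}_{\ell'}=2\pi\delta_{\ell,\ell'}$; for example,
\begin{equation*}
{\bm{\mathfrak{K}}}_1\cdot{\bm{\mathfrak{v}}}_1 = (b_2{\bf k}_1-a_2{\bf k}_2)\cdot(a_1{\bf v}_1+b_1{\bf v}_2) = 2\pi(a_1b_2-a_2b_1) = 2\pi,
\qquad
{\bm{\mathfrak{K}}}_1\cdot{\bm{\mathfrak{v}}}_2 = 2\pi(a_2b_2-a_2b_2) = 0,
\end{equation*}
and similarly for ${\bm{\mathfrak{K}}}_2$.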
Note that
$\mathbb{Z}{\bm{\mathfrak{K}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{K}}}_2=\mathbb{Z}{\bf k}_1\oplus\mathbb{Z}{\bf k}_2=\Lambda^*_h$.
Fix an edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$. In our construction of edge states, an important role is played by the ``quasi-momentum slice'' of the band structure through the Dirac point and ``dual'' to the given edge.
\begin{definition}\label{dual-slice}
For the edge $\mathbb{R}{\bm{\mathfrak{v}}}_1$, {\it the band structure slice at quasi-momentum ${\bf K}$, dual to the edge $\mathbb{R}{\bm{\mathfrak{v}}}_1$},
is defined to be the locus given by the union of curves:
\begin{equation*}
\lambda\mapsto E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\ \ |\lambda|\le 1/2,\ \ b\ge1.
\label{tv-slice}
\end{equation*}
\end{definition}
We give two examples:
\begin{enumerate}
\item Zigzag: ${\bm{\mathfrak{v}}}_1={\bf v}_1$, ${\bm{\mathfrak{v}}}_2={\bf v}_2$ and ${\bm{\mathfrak{K}}}_1={\bf k}_1$ and ${\bm{\mathfrak{K}}}_2={\bf k}_2$.
\\ In this case, we shall refer to the {\sl zigzag slice}.
\item Armchair: ${\bm{\mathfrak{v}}}_1={\bf v}_1+{\bf v}_2$, ${\bm{\mathfrak{v}}}_2={\bf v}_2$ and ${\bm{\mathfrak{K}}}_1={\bf k}_1$ and ${\bm{\mathfrak{K}}}_2={\bf k}_2-{\bf k}_1$. \\ In this case, we shall refer to the {\sl armchair slice}.
\end{enumerate}
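The armchair data may be checked against the general construction: ${\bm{\mathfrak{v}}}_1={\bf v}_1+{\bf v}_2$ corresponds to $(a_1,b_1)=(1,1)$; choosing $(a_2,b_2)=(0,1)$ gives $a_1b_2-a_2b_1=1$ and ${\bm{\mathfrak{v}}}_2={\bf v}_2$, and therefore
\begin{equation*}
{\bm{\mathfrak{K}}}_1=b_2{\bf k}_1-a_2{\bf k}_2={\bf k}_1,\qquad {\bm{\mathfrak{K}}}_2=-b_1{\bf k}_1+a_1{\bf k}_2={\bf k}_2-{\bf k}_1.
\end{equation*}
In particular, ${\bm{\mathfrak{K}}}_2\cdot{\bm{\mathfrak{v}}}_1=({\bf k}_2-{\bf k}_1)\cdot({\bf v}_1+{\bf v}_2)=2\pi-2\pi=0$, as required.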
Figure \ref{fig:eps_V11_neg} (top row) displays three cases, for $-\Delta+\varepsilon V$, where $V$ is a honeycomb lattice potential. Shown are the curves
$\lambda\mapsto E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)$, $b=1,2,3$ for (i) $\varepsilon V_{1,1}>0$ and the zigzag slice (left panel), (ii) $\varepsilon V_{1,1}<0$ and the zigzag slice (middle panel), and (iii) the armchair slice (right panel). As discussed in the introduction, of these three examples, case (i) is the one for which the spectral no-fold condition of Definition \ref{SGC} holds.
\subsection{Completeness of Floquet-Bloch modes on $L^2(\Sigma)$}\label{Fourier-edge}
For ${\bm{\mathfrak{v}}}_1\in\Lambda_h$, introduce the cylinder $\Sigma= \mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1$. Consider the family of states $\Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\ b\ge1$ for $\lambda\in[0,1]$ (or equivalently $|\lambda|\le1/2$) corresponding to quasi-momenta along a line segment within $\mathcal{B}_h$ connecting ${\bf K}$ to ${\bf K}+{\bm{\mathfrak{K}}}_2$.
Since ${\bm{\mathfrak{K}}}_2\cdot{\bm{\mathfrak{v}}}_1=0$, all along this segment we have ${\bf K}\cdot{\bm{\mathfrak{v}}}_1-$ pseudo-periodicity:
\begin{equation}
\Phi_b({\bf x}+{\bm{\mathfrak{v}}}_1;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)= e^{i({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\cdot{\bm{\mathfrak{v}}}_1} \Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)= e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}_1} \Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) \ .
\nonumber\end{equation}
The main result of this subsection is that any $f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)$ is a superposition of these modes.
\begin{theorem}\label{fourier-edge}
Let $f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)= L_{\kpar=\bK\cdot\vtilde_1}^2(\mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1)$. Then,
\begin{enumerate}
\item $f$ can be represented as a superposition of Floquet-Bloch modes of $-\Delta+V$ with quasimomenta in $\mathcal{B}$ located on the segment
$\Big\{{\bf k}={\bf K}+\lambda{\bm{\mathfrak{K}}}_2: |\lambda|\le\frac{1}{2}\Big\}:$
\begin{align}
f({\bf x})
&=\sum_{b\ge1}\int_{-\frac12}^{\frac12} \widetilde{f}_b(\lambda) \Phi_b({\bf x};{\bf K}+\lambda {\bm{\mathfrak{K}}}_2) d\lambda \nonumber \\
&= e^{i{\bf K}\cdot{\bf x}} \sum_{b\ge1}\int_{-\frac12}^{\frac12} e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\widetilde{f}_b(\lambda) p_b({\bf x};{\bf K}+\lambda {\bm{\mathfrak{K}}}_2) d\lambda,\qquad {\rm where} \label{fb-Sigma} \\
\widetilde{f}_b(\lambda) \ &=\ \left\langle \Phi_b(\cdot,{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),f(\cdot)\right\rangle_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)} . \nonumber
\end{align}
Here, the sum representing $e^{-i{\bf K}\cdot{\bf x}}f({\bf x})$, in \eqref{fb-Sigma} converges in the $L^2(\Sigma)$ norm.
\item In the special case where $V\equiv0$:
\begin{align*}
f({\bf x}) = \sum_{{{\bf m}}\in\mathbb{Z}^2}e^{ i({\bf K}+{\bf m}\vec{\bm{\mathfrak{K}}})\cdot{\bf x} } \int_{-\frac12}^{\frac12}
\widehat{f}_{\bf m}(\lambda)e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}} d\lambda\ ,
\end{align*}
where ${\bf m}\vec{\bm{\mathfrak{K}}} = m_1{\bm{\mathfrak{K}}}_1+m_2{\bm{\mathfrak{K}}}_2$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{fourier-edge}]
We introduce the parameterizations of the fundamental period cell $\Omega$ of $V({\bf x})$:
\begin{align}
\label{omega-parameterization}
{\bf x}\in\Omega:\quad &{\bf x} = \tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\quad 0\le\tau_1, \tau_2\le1 , \quad
{\bm{\mathfrak{K}}}_i\cdot{\bf x} = 2\pi\tau_i , \nonumber \\
& dx_1\ dx_2\ = \left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right|\ d\tau_1\ d\tau_2\ =\ |\Omega|\ d\tau_1\ d\tau_2;
\end{align}
and of the cylinder $\Sigma=\mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1$:
\begin{align}
\label{sigma-parameterization}
{\bf x}\in\Sigma:\quad &{\bf x} = \tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\ \ 0\le\tau_1\le1,\ \tau_2\in\mathbb{R} , \quad
{\bm{\mathfrak{K}}}_1\cdot{\bf x} = 2\pi\tau_1 , \ {\bm{\mathfrak{K}}}_2\cdot{\bf x} = 2\pi\tau_2 , \nonumber \\
&dx_1\ dx_2\ = \left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right|\ d\tau_1\ d\tau_2\ =\ |\Omega|\ d\tau_1\ d\tau_2.
\end{align}
Let $f\in L^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma)$ be such that $g({\bf x})= e^{-i{\bf K}\cdot{\bf x}}f({\bf x})$ is defined and smooth on $\Sigma$, and rapidly decreasing. It suffices to prove the result for such $f$, and then pass to all $L^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma)$ by standard arguments.
The function $g({\bf x})$ has the Fourier representation
\begin{align}
\label{g-fourier}
g({\bf x}) &= 2\pi\ \sum_{n\in\mathbb{Z}}\int_\mathbb{R} \widehat{g}_n(2\pi\xi) e^{i\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}} d\xi e^{in{\bm{\mathfrak{K}}}_1\cdot{\bf x}}, \\
2\pi\ \widehat{g}_n(2\pi\xi) &= \frac{1}{\left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right| } \int_\Sigma e^{-i\xi{\bm{\mathfrak{K}}}_2\cdot{\bf y}} e^{-in{\bm{\mathfrak{K}}}_1\cdot{\bf y}} g({\bf y}) d{\bf y} \ . \nonumber
\end{align}
The relation \eqref{g-fourier} is obtained by noting that $G(\tau_1,\tau_2)=g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2)$ is $1-$ periodic in $\tau_1$ and in $L^2(\mathbb{R}; d\tau_2)$, and applying the standard Fourier representations.
Introduce the Gelfand-Bloch transform
\begin{equation}
\label{g-FB-tilde}
\widetilde{g}({\bf x};\lambda) = 2\pi \sum_{(m_1,m_2)\in\mathbb{Z}^2}\widehat{g}_{m_1}\left(2\pi(m_2+\lambda)\right)e^{i(m_1{\bm{\mathfrak{K}}}_1+m_2{\bm{\mathfrak{K}}}_2)\cdot{\bf x}},\quad |\lambda|\leq 1/2 \ .
\end{equation}
Note that ${\bf x}\mapsto \widetilde{g}({\bf x};\lambda)$ is $\Lambda_h-$ periodic and $\lambda\mapsto \widetilde{g}({\bf x};\lambda)$ is $1-$ periodic.
Using \eqref{g-FB-tilde} and \eqref{g-fourier}, it is straightforward to check that
\begin{equation}
\label{g-FB}
g({\bf x}) = \int_{-\frac12}^{\frac12} e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \widetilde{g}({\bf x};\lambda)\ d\lambda \ .
\end{equation}
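For completeness, here is the check: substituting \eqref{g-FB-tilde} into the right hand side of \eqref{g-FB} and setting $\xi=m_2+\lambda$, the sum over $m_2\in\mathbb{Z}$ of integrals over $|\lambda|\le1/2$ assembles into an integral over $\xi\in\mathbb{R}$:
\begin{equation*}
\int_{-\frac12}^{\frac12} e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \widetilde{g}({\bf x};\lambda)\ d\lambda\ =\ 2\pi \sum_{m_1\in\mathbb{Z}} \int_{\mathbb{R}} \widehat{g}_{m_1}(2\pi\xi)\ e^{i\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ d\xi\ e^{im_1{\bm{\mathfrak{K}}}_1\cdot{\bf x}}\ =\ g({\bf x}),
\end{equation*}
by \eqref{g-fourier}.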
\begin{remark}\label{pb-Omega}
For any fixed $|\lambda|\le1/2$, the mapping ${\bf x}\mapsto \widetilde{g}({\bf x};\lambda)$ is $\Lambda_h-$ periodic.
We wish to expand ${\bf x}\mapsto \widetilde{g}({\bf x};\lambda)$ in terms of a basis for $L^2(\Omega)$,
where $\Omega$ denotes our choice of period cell (parallelogram) for $\mathbb{R}^2/\Lambda$ with $\Lambda=\mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2$;
see \eqref{Omega-def}. Now the eigenvalue problem $H({\bf k})p^\Omega=E^\Omega p^\Omega$
on $\Omega$ with periodic boundary conditions has a discrete sequence of eigenvalues, $E_j^\Omega({\bf k}),\ j\ge1$ with corresponding eigenfunctions $p^\Omega_j({\bf x};{\bf k}),\ j\ge 1$, which can be taken to be a complete orthonormal sequence. Recall $p^{\Omega_h}_b({\bf x};{\bf k}),\ b\ge1$, with corresponding eigenvalues, $E_b({\bf k})$, the complete set of eigenfunctions of $H({\bf k})$ with periodic boundary conditions on $\Omega_h$, the elementary period parallelogram spanned by $\{{\bf v}_1,{\bf v}_2\}$; see Section \ref{flo-bl-theory}. By periodicity, $p^{\Omega_h}_b({\bf x};{\bf k}),\ b\ge1,$ (initially defined on $\Omega_h$) and $p_j^\Omega({\bf x};{\bf k}),\ j\ge1$, (initially defined on $\Omega$)
can be extended to all $\mathbb{R}^2$ as periodic functions. We continue to denote these extensions by: $p_j^\Omega({\bf x};{\bf k})$ and $p^{\Omega_h}_b({\bf x};{\bf k})$, respectively. Since $\Lambda=\mathbb{Z}{\bm{\mathfrak{v}}}_1\oplus\mathbb{Z}{\bm{\mathfrak{v}}}_2=\mathbb{Z}{\bf v}_1\oplus\mathbb{Z}{\bf v}_2=\Lambda_h$, both sequences of eigenfunctions are $\Lambda_h-$ periodic. Thus, in a natural way, we can take $p^\Omega_b({\bf x};{\bf k})=A_b\ p^{\Omega_h}_b({\bf x};{\bf k}),\ b\ge1$, where $A_b$ is a normalization constant. Abusing notation, we henceforth drop the explicit dependence on $\Omega$, and simply write $p_b({\bf x};{\bf k})$ for $p_b^{\Omega}({\bf x};{\bf k})$.
\end{remark}
In view of Remark \ref{pb-Omega} we expand $\widetilde{g}({\bf x};\lambda)$ in terms of the states $\{p_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\},\ b\ge1$:
\begin{equation}
\label{g-FB-p-expansion}
\widetilde{g}({\bf x};\lambda) = \sum_{b\geq1}\inner{p_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)}\ p_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) .
\end{equation}
Recall that $f({\bf x})= e^{i{\bf K}\cdot{\bf x}}g({\bf x})$. We claim (and prove below) that
\begin{equation}
\label{g-FB-claim}
\inner{p_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)}
= \inner{\Phi_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),f(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)} \ .
\end{equation}
The assertions of Theorem \ref{fourier-edge} then follow from \eqref{g-FB},
\eqref{g-FB-p-expansion} and the claim \eqref{g-FB-claim}:
\begin{align*}
f({\bf x}) &= e^{i{\bf K}\cdot{\bf x}} g({\bf x})
= e^{i{\bf K}\cdot{\bf x}} \int_{-\frac12}^{\frac12} e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \widetilde{g}({\bf x};\lambda) d\lambda \\
& = e^{i{\bf K}\cdot{\bf x}} \sum_{b\geq1} \int_{-\frac12}^{\frac12} e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \inner{p_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)} p_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) d\lambda \\
&= \sum_{b\geq1} \int_{-\frac12}^{\frac12} \inner{\Phi_b(\cdot;{\bf K}+\lambda{\bm{\mathfrak{K}}}_2),f(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)} \Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) d\lambda \ ,
\end{align*}
where, in the final line we have used that $\Phi_b({\bf y};\lambda)=p_b({\bf y};\lambda)e^{i({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\cdot{\bf y}}$. Therefore, it remains to prove claim \eqref{g-FB-claim}. We shall employ the one-dimensional Poisson summation formula:
\begin{equation}
\label{poisson-sum-formula}
2\pi\sum_{n\in\mathbb{Z}} \widehat{f}\left(2\pi(n+\lambda)\right) e^{2\pi iny} = \sum_{n\in\mathbb{Z}}f(y+n)e^{-2\pi i \lambda(y+n)} \ .
\end{equation}
In the following calculation we use the abbreviated notation:
\[p_b({\bf y};\lambda)\equiv p_b({\bf y};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\quad \text{and} \quad \ \Phi_b({\bf y};\lambda)\equiv \Phi_b({\bf y};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2).\]
Substituting \eqref{g-fourier}-\eqref{g-FB-tilde} into the left hand side of \eqref{g-FB-claim} and applying \eqref{poisson-sum-formula} gives
{\footnotesize
\begin{align*}
&\inner{p_b(\cdot;\lambda),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)}
= \int_{\Omega} \overline{p_b({\bf y};\lambda)} \widetilde{g}({\bf y},\lambda) d{\bf y} \nonumber \\
&\ = |\Omega| \int_0^1 \int_0^1 \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2;\lambda)} 2\pi \sum_{{\bf m}\in\mathbb{Z}^2} \widehat{g}_{m_1} \left(2\pi(m_2+\lambda)\right) e^{2\pi i(m_1\tau_1+m_2\tau_2)} d\tau_1 d\tau_2 \ \ ( \eqref{omega-parameterization}) \nonumber \\
&\ = |\Omega| \int_0^1 \int_0^1 \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2;\lambda)} \sum_{m_1\in\mathbb{Z}} \left [2\pi \sum_{m_2\in\mathbb{Z}} \widehat{g}_{m_1}\left(2\pi(m_2+\lambda)\right) e^{2\pi im_2\tau_2} \right] e^{2\pi im_1\tau_1} d\tau_1 d\tau_2 \nonumber \\
&\ = |\Omega| \int_0^1 \int_0^1 \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2;\lambda)} \sum_{m_1\in\mathbb{Z}} \left [\sum_{m_2\in\mathbb{Z}} g_{m_1}(\tau_2+m_2) e^{-2\pi i \lambda(\tau_2+m_2)} \right] e^{2\pi im_1\tau_1} d\tau_1 d\tau_2 \ \ (\eqref{poisson-sum-formula}) \nonumber \\
&\ = |\Omega| \int_0^1 \int_\mathbb{R} \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+s{\bm{\mathfrak{v}}}_2;\lambda)} e^{-2\pi i \lambda s} \sum_{m_1\in\mathbb{Z}} g_{m_1}(s) e^{2\pi im_1\tau_1} d\tau_1 ds \ (\tau_2+m_2=s) \nonumber \\
&\ = |\Omega| \int_0^1 \int_\mathbb{R} \overline{p_b(\tau_1{\bm{\mathfrak{v}}}_1+s{\bm{\mathfrak{v}}}_2;\lambda)} e^{-2\pi i \lambda s} G(\tau_1,s) d\tau_1 ds \nonumber
\ \ ( G(\tau_1,\tau_2)=g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2) \text{ and } \eqref{g-fourier}) \\
&\ = \int_\Sigma \overline{p_b({\bf y};\lambda)} e^{-i \lambda {\bm{\mathfrak{K}}}_2\cdot{\bf y}} g({\bf y}) d{\bf y} \ \ (\text{by } \eqref{sigma-parameterization}) \nonumber .
\end{align*}
}
Finally, recalling that $g({\bf y})= e^{-i{\bf K}\cdot{\bf y}}f({\bf y})$ and $\overline{p_b({\bf y};\lambda)}e^{-i({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\cdot{\bf y}}=\overline{\Phi_b({\bf y};\lambda)}$, we obtain that
\begin{align*}
\inner{p_b(\cdot;\lambda),\widetilde{g}(\cdot,\lambda)}_{L^2(\Omega)}
&= \int_\Sigma \overline{p_b({\bf y};\lambda)} \ e^{-i \lambda {\bm{\mathfrak{K}}}_2\cdot{\bf y}} e^{-i{\bf K}\cdot{\bf y}} \ f({\bf y}) \ d{\bf y} \nonumber \\
&= \int_{\Sigma} \overline{\Phi_b({\bf y};\lambda)} f({\bf y}) \ d{\bf y} \nonumber
=\inner{\Phi_b(\cdot;\lambda),f(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)}. \nonumber
\end{align*}
This completes the proof of claim \eqref{g-FB-claim} and part 1 for the case where $f$ is smooth and rapidly decreasing.
Passing to arbitrary $f\in L^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma)$ is standard.
In the case where $V\equiv0$, the Schr\"odinger operator reduces to the Laplacian $-\Delta$. In this case the Floquet-Bloch coefficients of $f\in L^2_{\kpar=\bK\cdot\vtilde_1}$ are simply its Fourier coefficients: $\widetilde{f}(\lambda)=\widehat{f}(\lambda)$. Part 2 therefore follows from part 1, completing the proof of Theorem \ref{fourier-edge}.
\end{proof}
Sobolev regularity can be measured in terms of the Floquet-Bloch coefficients. Indeed, as in
Lemma 2.1 in \cites{FLW-MAMS:15}, by the $2D-$ Weyl law $E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\approx b$ for all $\lambda\in[-1/2,1/2],\ b\gg1$, we have
\begin{corollary}\label{fourier-edge-norms}
$L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)$ and $H_{\kpar=\bK\cdot\vtilde_1}^s(\Sigma),\ s\in\mathbb{N}$, norms can be expressed in terms of the Floquet-Bloch coefficients $\widetilde{f}_b(\lambda)$, $b\ge1$. For $f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)= L_{\kpar=\bK\cdot\vtilde_1}^2(\mathbb{R}^2/\mathbb{Z}{\bm{\mathfrak{v}}}_1)$:
\begin{align*}
\|f\|_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)}^2\ &\sim\ \sum_{b\ge1}\int_{-\frac12}^{\frac12}\ |\widetilde{f}_b(\lambda) |^2 d\lambda\ ,\\
\|f\|_{H_{\kpar=\bK\cdot\vtilde_1}^{^s}(\Sigma)}^2\ &\sim\ \sum_{b\ge1} (1+b)^s\ \int_{-\frac12}^{\frac12} |\widetilde{f}_b(\lambda) |^2 d\lambda\ .
\end{align*}
\end{corollary}
\subsection{Expansion of ${\bf k}\mapsto E_b({\bf k})$ along a quasi-momentum slice}\label{bloch-near-dirac}
Let $({\bf K},E_\star)$ denote a Dirac point as in Definition \ref{dirac-pt-defn}. In a neighborhood of
the Dirac point, the eigenvalues $E_{b_\star}({\bf k})$ and $E_{b_\star+1}({\bf k})$ are Lipschitz continuous functions and the corresponding normalized eigenmodes, $\Phi_{b_\star}({\bf x};{\bf k})$
and $\Phi_{b_\star+1}({\bf x};{\bf k})$ are discontinuous functions of ${\bf k}$; see \cites{FW:14}. Note however that what is relevant to our construction of ${\bm{\mathfrak{v}}}_1-$ edge states are Floquet-Bloch modes along the quasi-momentum line ${\bf K}+\lambda{\bm{\mathfrak{K}}}_2$, $|\lambda|\leq1/2$. The following proposition gives a smooth parametrization of these modes along this quasi-momentum line.
\begin{proposition}\label{directional-bloch}
Let $({\bf K},E_\star)$ denote a Dirac point in the sense of Definition \ref{dirac-pt-defn}. Let $\{ \Phi_1({\bf x}), \Phi_2({\bf x}) \}$ denote the basis of the
$L^2_{{\bf K}}=L^2_{{\bf K},\tau}\oplus L^2_{{\bf K},\overline\tau}-$ nullspace of $H_V - E_\star I$ in Definition \ref{dirac-pt-defn}. Introduce the $\Lambda_h-$ periodic functions
\begin{equation}
P_1({\bf x})=e^{-i{\bf K}\cdot {\bf x}}\Phi_1({\bf x}),\ \ P_2({\bf x})=e^{-i{\bf K}\cdot {\bf x}}\Phi_2({\bf x}).
\label{PhipmKlam}
\end{equation}
For each $|\lambda|\le1/2$, there exist $L^2_{{\bf K}+\lambda{\bm{\mathfrak{K}}}_2}-$ eigenpairs $(\Phi_\pm({\bf x};\lambda),E_\pm(\lambda))$, real analytic in $\lambda$, such that
$\left\langle \Phi_a(\cdot;\lambda),\Phi_b(\cdot;\lambda)\right\rangle=\delta_{ab}$ and
\[ \textrm{span}\ \{\Phi_{-}({\bf x};\lambda), \Phi_{+}({\bf x};\lambda)\}
=\ \textrm{span}\ \{\Phi_{b_\star}({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2), \Phi_{b_\star+1}({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\}\ .
\]
Introduce $\Lambda_h-$ periodic functions $p_\pm({\bf x};\lambda)$ by
\begin{equation}
\Phi_\pm({\bf x};\lambda)\ =\ e^{i({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\cdot{\bf x}}\ p_\pm({\bf x};\lambda),\ \ \left\langle p_a(\cdot;\lambda),p_b(\cdot;\lambda)\right\rangle=\delta_{ab} ,\ \
a,b\in\{+,-\}.
\label{p_pm-def}
\end{equation}
%
There is a constant $\zeta_0>0$ such that for $|\lambda|<\zeta_0$ the following holds:
\begin{enumerate}
\item The mapping $\lambda\mapsto E_\pm(\lambda)$ is real analytic in $\lambda$ with expansion
\begin{equation}
E_\pm(\lambda) = E_\star \pm |\lambda_\sharp|\ |{\bm{\mathfrak{K}}}_2|\ \lambda + E_{2,\pm}(\lambda)\lambda^2 \ ,
\label{EpmKlam}
\end{equation}
where $\lambda_\sharp\in\mathbb{C}$ is given by \eqref{lambda-sharp} and $|E_{2,\pm}(\lambda)|\leq C$, with $C$ a positive constant independent of $\lambda$.
\item Let
$ \mathfrak{z}_2={\mathfrak{K}}_2^{(1)}+i{\mathfrak{K}}_2^{(2)},\ |\mathfrak{z}_2|=|{\bm{\mathfrak{K}}}_2|$. The $\Lambda_h-$ periodic functions, $p_\pm({\bf x};\lambda)$, can be chosen to depend real analytically on $\lambda$ and so that
\footnote{The factor of $\frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}$ in \eqref{ppmKlam} corrects
a typographical error in equation (3.13) of \cites{FW:14}.}
\begin{align}
p_\pm({\bf x};\lambda) &=
P_\pm({\bf x})\ +\ \varphi_{\pm}({\bf x},\lambda) \ \in L^2_{\bf K}(\mathbb{R}^2/\Lambda_h),
\label{ppmKlam}
\end{align}
where $p_\pm({\bf x};0)=P_\pm({\bf x})$ is given by
\begin{equation*}
P_\pm({\bf x})\ \equiv\ \frac{1}{\sqrt{2}}\ \Big[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{\mathfrak{z}_2}{|\mathfrak{z}_2|}\ P_1({\bf x})
\pm P_2({\bf x})\ \Big]\ ,
\label{Ppm-def}\end{equation*}
and $\Phi_\pm({\bf x};0)=\Phi_\pm({\bf x})$ is given by
\begin{equation}
\Phi_\pm({\bf x})\ \equiv\ \frac{1}{\sqrt{2}}\ \Big[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{\mathfrak{z}_2}{|\mathfrak{z}_2|}\ \Phi_1({\bf x})
\pm \Phi_2({\bf x})\ \Big]\ .
\label{Phi_pm-def}\end{equation}
Finally, $\lambda \mapsto\varphi_{\pm}({\bf x};\lambda)$ are real analytic and satisfy the bound
$|\partial_{\bf x}^\aleph\varphi_\pm({\bf x};\lambda)|\leq C' |\lambda|$ for all ${\bf x}\in\mathbb{R}^2$, where $\aleph=(\aleph_1,\aleph_2),\ |\aleph|\le2$.
\end{enumerate}
\end{proposition}
\noindent{\bf N.B.} The subscripts $\pm$ have a different meaning here than in \cites{FW:12,FW:14}. In \cites{FW:12,FW:14},
$E_\pm({\bf k})$ denote ordered eigenvalues, $E_-({\bf k})\le E_+({\bf k})$ (Lipschitz continuous) with corresponding eigenstates $\Phi_\pm({\bf x};{\bf k})$ (discontinuous at ${\bf k}={\bf K}$);
see Definition \ref{dirac-pt-defn} and Corollary \ref{Phibstar-bstar+1}.
In Proposition \ref{directional-bloch} and throughout this paper $E_\pm(\lambda)$ and $\Phi_\pm({\bf x};\lambda)$ refer to smooth parametrizations in $\lambda$ of Floquet-Bloch eigenvalues and eigenfunctions of the spectral bands, which intersect at energy $E_\star$.
\begin{proof}[Proof of Proposition \ref{directional-bloch}]
We present a proof along the lines of Theorem 3.2 in \cites{FW:14}; see also \cites{Friedrichs:65,kato1995perturbation}.
The ${\bf k}-$ pseudo-periodic Floquet-Bloch modes can be expressed in the form
$\Phi({\bf x};{\bf k})=e^{i{\bf k}\cdot{\bf x}}p({\bf x};{\bf k})$,
where $p({\bf x};{\bf k})$ is $\Lambda_h-$ periodic. For ${\bf k}={\bf K}+\lambda{\bm{\mathfrak{K}}}_2$,
consider the family of eigenvalue problems, parametrized by $|\lambda|\le1/2$:
\begin{align}
& H_V({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)\ p({\bf x};\lambda)\ =\ E(\lambda)\ p({\bf x};\lambda)\ ,\label{Hk-evp}\\
&p({\bf x}+{\bf v};\lambda)=p({\bf x};\lambda),\ \ \textrm{for all}\ {\bf v}\in \Lambda_h\ ,
\label{psi-per}\end{align}
where $H_V({\bf k})\ \equiv\ -\left(\nabla_{\bf x} + i{\bf k}\right)^2\ +\ V({\bf x})$.
Degenerate perturbation theory of the double eigenvalue $E_\star$ of $H_V({\bf K})$ yields eigenvalues $E_\pm(\lambda) =\ E_\star\ +\ E_\pm^{(1)}(\lambda)$, where
\begin{align}
\ E_\pm^{(1)}(\lambda)\ \equiv\ \pm\ |\lambda_\sharp|\ |{\bm{\mathfrak{K}}}_2|\ \lambda +\ \mathcal{O}(\lambda^2) ;\ \ \text{see\ \cites{FW:12}.}
\label{mu-exp}
\end{align}
Denote by $Q_\perp$ the projection onto the orthogonal complement of ${\rm span}\{P_1,P_2\}$.
Then,
\[ R_{\bf K}(E_\star)\ \equiv\ \left( H_V({\bf K})\ - E_\star\ I \right)^{-1}:\ Q_\perp L^2(\mathbb{R}^2/\Lambda_h)\to Q_\perp L^2(\mathbb{R}^2/\Lambda_h)\]
is bounded.
Furthermore, via Lyapunov-Schmidt reduction analysis of the periodic eigenvalue problem
\eqref{Hk-evp}-\eqref{psi-per} we obtain, corresponding to the eigenvalues \eqref{mu-exp}, the $\Lambda_h-$ periodic eigenstates:
\begin{align*}
p_\pm({\bf x};\lambda)\
&=\
\left( I +
R_{\bf K}(E_\star) Q_\perp
\left(2i\lambda\ {\bm{\mathfrak{K}}}_2\cdot\left(\nabla+i{\bf K}\right)\right) \right) \times \\
&\quad \left( \alpha(\lambda)\ P_1({\bf x})\ +\ \beta(\lambda)\ P_2({\bf x}) \right) \
+\ \mathcal{O}_{H^2(\mathbb{R}^2/\Lambda_h)}\left(\lambda(|\alpha|^2+|\beta|^2)^{\frac{1}{2}}\right)
\end{align*}
Here, the pair $\alpha(\lambda), \beta(\lambda)$ satisfies the homogeneous system:
\begin{align*}
\mathcal{M}(E^{(1)},\lambda)\ \left(\begin{array}{c} \alpha \\ { }\\ \beta\end{array}\right)\ &=\ 0\ , \text{ \ \ where } \\
%
\mathcal{M}(E^{(1)},\lambda) &\equiv\
\left(\begin{array}{cc}
E^{(1)} + \mathcal{O}\left( \lambda^2\right)&
-\overline{\lambda_\sharp}\ \lambda\ \mathfrak{z}_2\ + \mathcal{O}\left(\lambda^2\right) \\
&\nonumber\\
-\lambda_\sharp\ \lambda\ \overline{\mathfrak{z}_2} +
\mathcal{O}\left(\lambda^2\right)
&
E^{(1)} + \mathcal{O}\left(\lambda^2\right)
\end{array}\right) ;
\end{align*}
see \cites{FW:12}.
For $E^{(1)}=E^{(1)}_j(\lambda),\ j=\pm$, normalized solutions, $p_j({\bf x};\lambda),\ j=\pm$, are obtained by choosing:
{\footnotesize{
\begin{align*}
\left(\begin{array}{c} \alpha_+(\lambda) \\ \\ \beta_+(\lambda)\end{array}\right) &=
\left(\begin{array}{c} \frac{1}{\sqrt{2}}\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\
\frac{ \mathfrak{z}_2}{| \mathfrak{z}_2|} +
\mathcal{O}\left(\lambda\right)\\ \\
+ \frac{1}{\sqrt{2}}\ +\ \mathcal{O}\left(\lambda\right)
\end{array}\right) ,\quad
\left(\begin{array}{c} \alpha_-(\lambda) \\ \\ \beta_-(\lambda)\end{array}\right)\ =\
\left(\begin{array}{c} \frac{1}{\sqrt{2}}\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|}\
\frac{ \mathfrak{z}_2}{| \mathfrak{z}_2|} +
\mathcal{O}\left(\lambda\right)\\ \\
- \frac{1}{\sqrt{2}} + \mathcal{O}\left(\lambda\right)
\end{array}\right) .
\end{align*}
}}
%
Finally, we note that $\mathcal{M}(E^{(1)},\lambda)$ is analytic in the parameter $\lambda$. Therefore the eigenvalues $E^{(1)}_\pm(\lambda)$ and eigenvectors $( \alpha_\pm(\lambda), \beta_\pm(\lambda) )^T$ are analytic functions of $\lambda$; see, for example, \cites{Friedrichs:65,kato1995perturbation}. It follows that
$E_\pm(\lambda)$ and $p_\pm({\bf x};\lambda)$ are bounded, real analytic functions of $\lambda\in\mathbb{R}$. This completes the proof of Proposition \ref{directional-bloch}.
\end{proof}
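As an illustrative check on the leading-order behavior in \eqref{EpmKlam}, one can verify numerically that the $\mathcal{O}(\lambda)$ part of $\mathcal{M}(E^{(1)},\lambda)$ is Hermitian with eigenvalues $\pm|\lambda_\sharp|\,|\mathfrak{z}_2|\,\lambda$. The values of $\lambda_\sharp$ and $\mathfrak{z}_2$ below are arbitrary placeholders, not derived from any particular honeycomb potential:

```python
import numpy as np

# Placeholder values (not computed from an actual potential):
lam_sharp = 1.3 - 0.7j    # lambda_sharp, assumed nonzero
z2 = 0.5 + 2.0j           # frak{z}_2 = K2^(1) + i*K2^(2)
lam = 0.01                # quasi-momentum offset lambda > 0

# O(lambda) coupling matrix A, so that M(E, lam) = E*I - A + O(lambda^2)
A = lam * np.array([[0.0, np.conj(lam_sharp) * z2],
                    [lam_sharp * np.conj(z2), 0.0]])

evals = np.linalg.eigvalsh(A)              # A is Hermitian
pred = abs(lam_sharp) * abs(z2) * lam      # predicted slope |lambda_sharp||z2|*lambda
print(np.allclose(evals, [-pred, pred]))   # True
```

The off-diagonal entries are complex conjugates of one another, so the nontrivial kernel of $\mathcal{M}$ occurs exactly at $E^{(1)}=\pm|\lambda_\sharp||\mathfrak{z}_2|\lambda$ to leading order.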
\section{Model of a honeycomb structure with an edge}\label{zigzag-edges}
Let $V({\bf x})$ denote a honeycomb potential in the sense of Definition \ref{honeyV}. In this section we introduce a model of an edge in a honeycomb structure. A one-dimensional variant of this model was introduced and studied in
\cites{FLW-PNAS:14,FLW-MAMS:15,Thorp-etal:15}.
Let $W\in C^\infty(\mathbb{R}^2)$ be real-valued and satisfy the following properties:
\begin{enumerate}
\item[(W1)] ${W}$ is $\Lambda_h-$ periodic, {\it i.e.} ${W}({\bf x}+{\bf v})={W}({\bf x})$ for all ${\bf x}\in\mathbb{R}^2$ and ${\bf v}\in\Lambda_h$.
\item[(W2)] ${W}$ is odd, {\it i.e.} ${W}(-{\bf x})=-{W}({\bf x})$.
\item[(W3)] $\vartheta_\sharp\equiv \left\langle \Phi_1,W\Phi_1\right\rangle_{L^2(\Omega_h)}\ne0$, with $\Phi_1$ as in Definition \ref{dirac-pt-defn}.
\end{enumerate}
The non-degeneracy condition (W3) arises in the multiple scale perturbation theory of Section \ref{formal-multiscale}.
\noindent Our model of a honeycomb structure with an edge is a smooth and slow interpolation between the Schr\"odinger Hamiltonians
$H^{(\delta)}_{-\infty} = -\Delta_{\bf x} + V({\bf x}) - \delta\kappa_\infty W({\bf x})$
and
$H^{(\delta)}_{+\infty} = -\Delta_{\bf x} + V({\bf x}) + \delta\kappa_\infty W({\bf x})$,
which is transverse to a lattice direction, say ${\bm{\mathfrak{v}}}_1$. Here, $\kappa_\infty$ is a positive constant.
This interpolation is effected by a {\it domain wall function}.
\begin{definition} \label{domain-wall-defn}
We call $\kappa(\zeta)\in C^{\infty}(\mathbb{R})$ a domain wall function if $\kappa(\zeta)$ tends to $\pm\kappa_\infty$ as $\zeta\to\pm\infty$, and
$
\Upsilon_1(\zeta)=\kappa^2(\zeta)-\kappa_\infty^2$ and
$ \Upsilon_2(\zeta)=\kappa'(\zeta)
$
satisfy:
{\small
\begin{equation}
\int_\mathbb{R} (1+|\zeta|)^a |\Upsilon_\ell(\zeta)| d\zeta<\infty\ \ \textrm{for some $a>5/2$ \ and }\ \int_\mathbb{R} |\partial_\zeta\Upsilon_\ell(\zeta)|
d\zeta<\infty, \ \ \ell=1,2.
\label{kappa-hypotheses}
\end{equation}
}
Without loss of generality, we assume $\kappa_\infty>0$.
\end{definition}
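The hypotheses \eqref{kappa-hypotheses} are readily verified for concrete profiles. The following minimal numerical sketch (purely illustrative; the profile $\kappa(\zeta)=\kappa_\infty\tanh\zeta$ is a standard example and is not required by the analysis) checks the weighted integrability with $a=3>5/2$:

```python
import numpy as np
from scipy.integrate import quad

kappa_inf = 1.0
a = 3.0  # any exponent a > 5/2 suffices for this profile

# kappa(z) = kappa_inf * tanh(z), a standard domain-wall profile:
#   Upsilon_1 = kappa^2 - kappa_inf^2 = -kappa_inf^2 * sech^2(z)
#   Upsilon_2 = kappa'(z)             =  kappa_inf   * sech^2(z)
Upsilon1 = lambda z: kappa_inf**2 * (np.tanh(z)**2 - 1.0)
Upsilon2 = lambda z: kappa_inf / np.cosh(z)**2

vals = []
for U in (Upsilon1, Upsilon2):
    val, _ = quad(lambda z, U=U: (1 + abs(z))**a * abs(U(z)), -np.inf, np.inf)
    vals.append(val)
print(vals)  # both finite, since sech^2 decays exponentially
```

Both weighted integrals converge because $\Upsilon_1,\Upsilon_2$ decay like $e^{-2|\zeta|}$, which dominates any polynomial weight.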
\begin{remark}
\label{kappa-description}
The technical hypotheses in \eqref{kappa-hypotheses} are required for the boundedness of {\it wave operators} used in the proof of Proposition \ref{solve4beta}. See also Section 6.6 of \cites{FLW-MAMS:15} and, in particular, the application of Theorem 6.15.
\end{remark}
Our model of a honeycomb structure with an edge is the domain-wall modulated Hamiltonian:
\begin{equation*}
\label{perturbed-ham}
H^{(\delta)} = -\Delta_{\bf x} + V({\bf x}) + \delta\kappa\left(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}\right) W({\bf x}) \ ,
\end{equation*}
where $\kappa(\zeta)$ is a domain wall function.
Suppose $\kappa(\zeta)$ has a single zero at $\zeta=0$. The ``edge'' is then given by
$\mathbb{R}{\bm{\mathfrak{v}}}_1=\{{\bf x}:{\bm{\mathfrak{K}}}_2\cdot{\bf x}=0\}$.
We shall seek solutions of the eigenvalue problem
\begin{align}
&H^{(\delta)} \Psi = E\Psi, \label{schro-evp}\\
&\Psi({\bf x}+{\bm{\mathfrak{v}}}_1)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}_1}\Psi({\bf x})\qquad \textrm{(propagation parallel to the edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$)},\label{pseudo-per}\\
&\Psi({\bf x}) \to 0\ \ {\rm as}\ \ |{\bf x}\cdot{\bm{\mathfrak{K}}}_2|\to\infty \qquad \textrm{(localization transverse to the edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$)}\label{localized} .
\end{align}
\noindent In the next section we present a formal asymptotic expansion of ${\bm{\mathfrak{v}}}_1-$ edge states
and in Section \ref{thm-edge-state} we formulate a rigorous theory.
\section{Multiple scales and effective Dirac equations}\label{formal-multiscale}
We re-express the eigenvalue problem \eqref{schro-evp}-\eqref{localized} in terms of an unknown function $\Psi=\Psi({\bf x},\zeta)$, depending on fast (${\bf x}$) and slow ($\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}$)
spatial scales:
\begin{align}
&\Big[\ -\left(\nabla_{\bf x}+\delta{\bm{\mathfrak{K}}}_2\ \partial_\zeta\right)^2+V({\bf x})\Big]\Psi
\ +\ \delta\kappa(\zeta)W({\bf x})\Psi = E\Psi, \label{multi-schroedinger} \\
&\Psi({\bf x}+{\bm{\mathfrak{v}}}_1,\zeta)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}_1}\Psi({\bf x},\zeta), \ \ {\rm and} \ \
\Psi({\bf x},\zeta) \to 0\ \ {\rm as}\ \ \zeta\to\pm\infty. \label{multi-schroedinger-bc}
\end{align}
We seek a solution to \eqref{multi-schroedinger}-\eqref{multi-schroedinger-bc} in the form:
\begin{align}
E^{\delta}&=E^{(0)}+\delta E^{(1)}+\delta^2E^{(2)}+\ldots, \label{formal-E}\\
\Psi^{\delta}&= \psi^{(0)}({\bf x},\zeta)+\delta\psi^{(1)}({\bf x},\zeta)+\delta^2\psi^{(2)}({\bf x},\zeta)+\ldots\ \label{formal-psi}.
\end{align}
The conditions \eqref{pseudo-per}, \eqref{localized} are encoded by requiring, for $i\ge0$, that
\begin{align*}
&\psi^{(i)}({\bf x}+{\bm{\mathfrak{v}}},\cdot)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}}\psi^{(i)}({\bf x},\cdot) \ \ \ \forall\ {\bm{\mathfrak{v}}}\in\Lambda_h , \\
&\zeta\mapsto\psi^{(i)}({\bf x},\zeta)\in L^2(\mathbb{R}_\zeta).\end{align*}
Substituting the expansions \eqref{formal-E}-\eqref{formal-psi} in \eqref{multi-schroedinger} yields
\begin{align*}
&\left[-\left(\Delta_{{\bf x}}+2\delta\ {\bm{\mathfrak{K}}}_2\cdot \nabla_{{\bf x}}\ \partial_\zeta+\delta^2\ |{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2+\ldots\right)
+\left(V({\bf x})+\delta\kappa(\zeta)W({\bf x})\right) \right. \\
&\quad \left.-\left(E^{(0)}+\delta E^{(1)}+\delta^2E^{(2)}+\ldots\right)\right]
\left(\psi^{(0)}+\delta\psi^{(1)}+\delta^2\psi^{(2)}+\ldots\right)=0.
\end{align*}
Equating terms of equal order in $\delta^j,\ j\ge0$, yields a hierarchy of equations governing $\psi^{(j)}({\bf x},\zeta)$.
At order $\delta^0$ we have that $(E^{(0)},\psi^{(0)})$ satisfy
\begin{equation}
\label{perturbed_schro_delta0}
\begin{split}
&\left(-\Delta_{{\bf x}} +V({\bf x})-E^{(0)}\right)\psi^{(0)} = 0, \\
&\psi^{(0)}({\bf x}+{\bm{\mathfrak{v}}},\cdot)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}}\psi^{(0)}({\bf x},\cdot) \ \ \ \forall\ {\bm{\mathfrak{v}}}\in\Lambda_h .
\end{split}
\end{equation}
Equation \eqref{perturbed_schro_delta0} may be solved in terms of the orthonormal basis of the $L^2_{\bf K}(\Omega)-$ nullspace of $H_V-E_{\star}$ in Definition \ref{dirac-pt-defn}, namely $\{\Phi_1,\Phi_2\}$.
Expansion \eqref{ppmKlam} in Proposition \ref{directional-bloch} suggests that a particularly natural orthonormal basis of the $L^2_{{\bf K}}(\Omega)-$ nullspace of $H_V-E_{\star}$ is given by $\{\Phi_+,\Phi_-\}$, where
\begin{equation}
\label{correct-basis}
\Phi_\pm({\bf x}) \equiv \frac{1}{\sqrt{2}}\left[\ \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{ \mathfrak{z}_2}{| \mathfrak{z}_2|}\ \Phi_1({\bf x})
\pm \Phi_2({\bf x})\ \right].
\end{equation}
Here $\lambda_\sharp$ is given in \eqref{lambda-sharp}, $ \mathfrak{z}_2={\mathfrak{K}}_2^{(1)}+i{\mathfrak{K}}_2^{(2)}$ and $| \mathfrak{z}_2|=|{\bm{\mathfrak{K}}}_2|$.
We therefore solve \eqref{perturbed_schro_delta0} with
\begin{equation}
E^{(0)}=E_\star,\qquad \psi^{(0)}({\bf x},\zeta)=\alpha_+(\zeta)\Phi_+({\bf x})
+\alpha_-(\zeta)\Phi_-({\bf x}).
\label{psi0-soln}
\end{equation}
Proceeding to order $\delta^1$ we find that $(E^{(1)},\psi^{(1)})$ satisfies
\begin{equation}
\label{psi1-eqn}
\begin{split}
&\left(-\Delta_{{\bf x}}+V({\bf x})-E_{\star}\right)\psi^{(1)}({\bf x},\zeta)=G^{(1)}({\bf x},\zeta;\psi^{(0)}) +
E^{(1)} \psi^{(0)}, \\
&\psi^{(1)}({\bf x}+{\bm{\mathfrak{v}}},\cdot)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}}\psi^{(1)}({\bf x},\cdot) \ \ \ \forall\ {\bm{\mathfrak{v}}}\in\Lambda_h ,
\end{split}
\end{equation}
where
\begin{align*}
&G^{(1)}({\bf x},\zeta;\psi^{(0)}) = G^{(1)}({\bf x},\zeta;\alpha_+,\alpha_-)\ \nonumber \\
&\quad \equiv\
2\partial_\zeta\alpha_+\ {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_++ 2\partial_\zeta\alpha_-\ {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_-
-\kappa(\zeta)W({\bf x}) \left(\alpha_+\Phi_++ \alpha_-\Phi_- \right) .
\end{align*}
Viewed as an equation in ${\bf x}$, \eqref{psi1-eqn} is solvable if and only if its right hand side is $L^2_{\bf K}(\Omega;d{\bf x})-$ orthogonal to the nullspace of $H_V-E_{\star}$. This is expressible in terms of the two orthogonality conditions:
\begin{align}
-E^{(1)}\alpha_j&=2\left\langle\Phi_j,\ {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_+\right\rangle \partial_\zeta\alpha_+\ +
2\left\langle\Phi_j,{\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_-\right\rangle\ \partial_\zeta\alpha_-\nonumber\\
&\qquad -\kappa(\zeta)\ \left[ \left\langle\Phi_j,W\Phi_+\right\rangle\alpha_++\left\langle\Phi_j,W\Phi_-\right\rangle\alpha_-\right],
\qquad j=\pm .
\label{alpha-j} \end{align}
We evaluate the inner products in \eqref{alpha-j} using the following two propositions.
\begin{proposition}\label{inner-prods-sharp}
\begin{align}
\left\langle\Phi_+, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_-\right\rangle_{L^2_{\bf K}(\Omega)}
&= 0 \label{ip12} \ , \\
\left\langle\Phi_-, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_+\right\rangle_{L^2_{\bf K}(\Omega)}
&= 0 \label{ip21} \ , \\
2\left\langle\Phi_+, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_+\right\rangle_{L^2_{\bf K}(\Omega)}&=\
+ i|\lambda_\sharp|\ |{\bm{\mathfrak{K}}}_2|\ ,
\label{ip-aa+}\\
2 \left\langle\Phi_-, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_-\right\rangle_{L^2_{\bf K}(\Omega)}&=\ - i|\lambda_\sharp|\ |{\bm{\mathfrak{K}}}_2|\ .
\label{ip-aa-}\end{align}
The constant $\lambda_\sharp\in\mathbb{C}$ is generically non-zero; see Theorem \ref{diracpt-thm}.
\end{proposition}
\begin{proof}
Let $\mathfrak{z}_2={\mathfrak{K}}_2^{(1)}+i{\mathfrak{K}}_2^{(2)}$.
By (7.28)-(7.29) of \cites{FW:14} (see also \cites{FW:12}) we have:
\begin{align}
\left\langle\Phi_1, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)}
&= \frac{i}{2}\ \overline{\lambda}_\sharp\ \mathfrak{z}_2\ ,\ \label{ip12_1} \\
\left\langle\Phi_2, {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)}
&= \frac{i}{2}\ \lambda_\sharp\ \overline{\mathfrak{z}_2}\ \label{ip21_2} \ , \\
\left\langle\Phi_b,\ {\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\Phi_b\right\rangle_{L^2_{\bf K}(\Omega)}&=0,\ \ b=1,2.
\label{ip-aa_1}\end{align}
Relations \eqref{ip12}-\eqref{ip-aa-} follow from the expressions for $\Phi_\pm$ in \eqref{correct-basis} and relations \eqref{ip12_1}-\eqref{ip-aa_1}.
\end{proof}
\begin{proposition}\label{inner-prods-W}
Assume that $W({\bf x})$ is real-valued, odd and $\Lambda_h-$ periodic.
Let $\vartheta_\sharp\equiv \left\langle \Phi_1,W\Phi_1\right\rangle_{L^2_{{\bf K}}(\Omega)}$. Then,
$ \vartheta_\sharp\in\mathbb{R}$ and
\begin{align}
\vartheta_\sharp\ =\ \left\langle \Phi_+, W\Phi_-\right\rangle_{L^2_{\bf K}(\Omega)}\ &=\ \left\langle \Phi_-, W\Phi_+\right\rangle_{L^2_{\bf K}(\Omega)} \label{ip-1W1} , \\
\left\langle \Phi_+, W\Phi_+\right\rangle_{L^2_{\bf K}(\Omega)}\ &=\ \left\langle \Phi_-, W\Phi_-\right\rangle_{L^2_{\bf K}(\Omega)}=0\label{ip-1W2} \ .
\end{align}
Note that since $\Phi_+({\bf x})=e^{i{\bf K}\cdot{\bf x}}P_+({\bf x})$ and $\Phi_-({\bf x})=e^{i{\bf K}\cdot{\bf x}}P_-({\bf x})$,
relations \eqref{ip-1W1} and \eqref{ip-1W2} hold with $\Phi_+$ and $\Phi_-$ replaced, respectively, by $P_+({\bf x})$
and $P_-({\bf x})$.
\end{proposition}
\begin{proof}
Equations \eqref{ip-1W1}-\eqref{ip-1W2} follow from the relations
\begin{align}
\vartheta_\sharp \equiv \left\langle \Phi_1, W\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)}\ &=\ -\left\langle \Phi_2, W\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)} \label{ip-1W1_1} , \\
\left\langle \Phi_1, W\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)}\ &=\ \left\langle \Phi_2, W\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)}=0\label{ip-1W2_1} \ .
\end{align}
%
To prove \eqref{ip-1W1_1} and \eqref{ip-1W2_1}, we begin by recalling that $\Phi_2({\bf x})=\overline{\Phi_1(-{\bf x})}$, $W$ is real-valued and $W(-{\bf x})=-W({\bf x})$.
Since $W$ is real-valued, it is clear that $\vartheta_\sharp\in\mathbb{R}$.
Furthermore,
\begin{align*}
\left\langle\Phi_2,W\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)}&=
\int_\Omega\overline{\Phi_2({\bf x})}W({\bf x})\Phi_2({\bf x})d{\bf x}
= \int_\Omega \Phi_1(-{\bf x})W({\bf x})\overline{\Phi_1(-{\bf x})}d{\bf x}\\
&= \int_\Omega \Phi_1({\bf x})W(-{\bf x})\overline{\Phi_1({\bf x})}d{\bf x} = -\vartheta_\sharp \ .
\end{align*}
This proves \eqref{ip-1W1_1}. To prove \eqref{ip-1W2_1}, observe that
\begin{align*}
\left\langle\Phi_2,W\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)}&=\int_\Omega\overline{\Phi_2({\bf x})}W({\bf x})\Phi_1({\bf x})d{\bf x}
=\int_\Omega\Phi_1(-{\bf x})W({\bf x})\Phi_1({\bf x})d{\bf x}\\
&=\int_\Omega\Phi_1({\bf x})W(-{\bf x})\Phi_1(-{\bf x})d{\bf x}
= -\int_\Omega\Phi_1({\bf x})W({\bf x})\Phi_1(-{\bf x})d{\bf x}\\
&= -\overline{\left\langle\Phi_1,W\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)}} =- \left\langle\Phi_2,W\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)} \ .
\end{align*}
Hence $\left\langle\Phi_2,W\Phi_1\right\rangle_{L^2_{\bf K}(\Omega)}=0$ and, by an identical computation, $\left\langle\Phi_1,W\Phi_2\right\rangle_{L^2_{\bf K}(\Omega)}=0$. This completes the proof of Proposition \ref{inner-prods-W}.
\end{proof}
Propositions \ref{inner-prods-sharp} and \ref{inner-prods-W} imply that the orthogonality conditions \eqref{alpha-j} reduce to the following eigenvalue problem for $\alpha(\zeta)=(\alpha_+(\zeta), \alpha_-(\zeta))^T$:
\begin{align}
\left( \mathcal{D} - E^{(1)} \right) \alpha = 0 , \quad \alpha\in L^2(\mathbb{R}) \ . \label{m-dirac-eqn}
\end{align}
Here, $\mathcal{D}$ denotes the 1D Dirac operator:
\begin{equation}
\label{multi-dirac-op}
\mathcal{D} = -i|\lambda_\sharp| |{\bm{\mathfrak{K}}}_2| \sigma_3 \partial_\zeta + \vartheta_\sharp \kappa(\zeta) \sigma_1,
\quad \text{with} \quad \lambda_{\sharp} \neq 0\ \ \text{and}\ \ \vartheta_{\sharp} \neq 0 \ .
\end{equation}
In Section \ref{dirac-solns} we prove that the eigenvalue problem \eqref{m-dirac-eqn} has an
exponentially localized eigenfunction $\alpha_{\star}(\zeta)$ with corresponding (mid-gap) zero-energy eigenvalue $E^{(1)}=0$.
Moreover, this eigenvalue has multiplicity one. We impose the normalization: $\|\alpha_\star\|_{L^2(\mathbb{R})}=1$.
Fix $(E^{(1)},\alpha)=(0,\alpha_\star)$. Then $\alpha_\star\in L^2(\mathbb{R})$, $\psi^{(0)}({\bf x}, \zeta)$ is completely determined (up to normalization) and the solvability conditions \eqref{alpha-j} are satisfied. Therefore, the right hand side of \eqref{psi1-eqn} lies in the range of $H_V-E_\star: H^2_{\bf K}\to L^2_{\bf K}$, and we may invert $(H_V-E_{\star})$ obtaining
\begin{align}
\psi^{(1)}({\bf x},\zeta) &= \left(R(E_{\star})G^{(1)}\right)({\bf x},\zeta) + \psi^{(1)}_h({\bf x},\zeta)
\equiv \psi^{(1)}_p({\bf x},\zeta) + \psi^{(1)}_h({\bf x},\zeta) ,\label{psi1p-def}
\end{align}
where
\begin{equation*}
R(E_\star) = \left(H_V-E_\star\right)^{-1}:P_\perp L^2_{\bf K}\to P_\perp H^2_{\bf K}\
\end{equation*}
and $P_\perp$ is the $L^2_{\bf K}(\Omega)-$ projection on to the orthogonal complement of the kernel of $H_V-E_\star$, equal to ${\rm span}\{\Phi_+,\Phi_-\}$.
Here, $\psi^{(1)}_p$ denotes a particular solution, and
\begin{equation*}
\psi^{(1)}_h({\bf x},\zeta) = \alpha^{(1)}_+(\zeta)\Phi_+({\bf x})+\alpha^{(1)}_-(\zeta)\Phi_-({\bf x})
\end{equation*} is a homogeneous solution.
Note that by exploiting the degrees of freedom coming from the $L^2_{\bf K}-$ kernel of $H_V-E_\star$, we can continue the formal expansion to any order in $\delta$. Indeed, at $\mathcal{O}(\delta^\ell)$ for $\ell\geq2$, we have
\begin{align}
\label{psi2-eqn}
& \left(-\Delta_{\bf x} +V({\bf x})-E_\star\right)\psi^{(\ell)}({\bf x},\zeta)\\
&\quad = \left(\ 2 ({\bm{\mathfrak{K}}}_2\cdot \nabla_{{\bf x}})\ \partial_\zeta
-\kappa(\zeta)W({\bf x})\right)\psi_h^{(\ell-1)}({\bf x},\zeta) +E^{(\ell)}\psi^{(0)}({\bf x},\zeta) \nonumber\\
&\qquad + G^{(\ell)}\left({\bf x},\zeta;\psi^{(0)},\ldots,\psi^{(\ell-2)},\psi_p^{(\ell-1)},E^{(1)},\ldots,E^{(\ell-1)}\right) , \nonumber\\
&\psi^{(\ell)}({\bf x}+{\bm{\mathfrak{v}}},\cdot)=e^{i{\bf K}\cdot{\bm{\mathfrak{v}}}}\psi^{(\ell)}({\bf x},\cdot) \ \ \ \forall\ {\bm{\mathfrak{v}}}\in\Lambda_h , \nonumber
\end{align}
where, for the case $\ell=2$,
\begin{equation}
G^{(2)}({\bf x},\zeta;\psi^{(0)},\psi_p^{(1)})
= \left(\ 2 ({\bm{\mathfrak{K}}}_2\cdot \nabla_{{\bf x}})\ \partial_\zeta
-\kappa(\zeta)W({\bf x})\ \right)\psi_p^{(1)}({\bf x},\zeta) + |{\bm{\mathfrak{K}}}_2|^2\ \partial^2_\zeta \psi^{(0)}({\bf x},\zeta)\ . \label{G2def}
\end{equation}
As before, \eqref{psi2-eqn} has a solution if and only if the right hand side is $L^2_{\bf K}(\Omega;d{\bf x})$-orthogonal to the functions $\Phi_j({\bf x})$, $j=\pm$.
This solvability condition reduces to the inhomogeneous system:
\begin{equation}
\mathcal{D} \alpha^{(\ell-1)}(\zeta) =
\mathcal{G}^{(\ell)}\left(\zeta\right)+E^{(\ell)}\alpha_{\star}(\zeta), \quad \alpha^{(\ell-1)}\in L^2(\mathbb{R}) , \ \ {\rm where}
\label{solvability_cond}
\end{equation}
\begin{equation}
\mathcal{G}^{(\ell)}(\zeta) =
\left( \begin{array}{c}
\left\langle \Phi_+(\cdot),G^{(\ell)}(\cdot,\zeta;\psi^{(0)},\ldots,\psi^{(\ell-2)},\psi_p^{(\ell-1)},E^{(1)},\ldots,E^{(\ell-1)}) \right\rangle_{L^2_{\bf K}(\Omega)} \\
\left\langle \Phi_-(\cdot),G^{(\ell)}(\cdot,\zeta;\psi^{(0)},\ldots,\psi^{(\ell-2)},\psi_p^{(\ell-1)},E^{(1)},\ldots,E^{(\ell-1)}) \right\rangle_{L^2_{\bf K}(\Omega)}
\end{array} \right) . \label{ipG}
\end{equation}
Solvability of the non-homogeneous Dirac system \eqref{solvability_cond} in $L^2(\mathbb{R})$ is ensured by imposing $L^2(\mathbb{R})-$ orthogonality
of the right hand side of \eqref{solvability_cond} to $\alpha_\star(\zeta)$. This yields:
\begin{equation}
\label{solvability_cond_E2}
E^{(\ell)} = -\inner{\alpha_{\star},\mathcal{G}^{(\ell)}}_{L^2(\mathbb{R})}.
\end{equation}
Thus we obtain, at $\mathcal{O}(\delta^\ell)$, that $\psi^{(\ell)}=\psi_p^{(\ell)}+\psi_h^{(\ell)}$, where $\psi_p^{(\ell)}$ is a particular solution
of \eqref{psi2-eqn} and
$\psi^{(\ell)}_h({\bf x},\zeta) = \alpha^{(\ell)}_+(\zeta)\Phi_+({\bf x})+\alpha^{(\ell)}_-(\zeta)\Phi_-({\bf x})$
is a homogeneous solution.
\noindent {\bf Summary:} Given a zero-energy $L^2(\mathbb{R})-$ eigenstate of the Dirac operator, $\mathcal{D}$ (see Section \ref{dirac-solns}),
we can, to any polynomial order in $\delta$, construct a formal solution of the eigenvalue problem $H^{(\delta)}\psi=E\psi,\ \psi\in L^2_{{k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1}$.
\subsection{Zero-energy eigenstate of the Dirac operator, $\mathcal{D}$\label{dirac-solns}}
\begin{proposition}\label{zero-mode}
Let $\kappa(\zeta)$ be a domain wall function (Definition \ref{domain-wall-defn}) and assume, without loss of generality, that $\vartheta_\sharp>0$. Then,
\begin{enumerate}
\item The Dirac operator, $\mathcal{D}$, has a zero-energy eigenvalue, $E^{(1)}=0$, with exponentially localized solution given by:
\begin{align}
\alpha_\star(\zeta)\ &= \begin{pmatrix}\alpha_{\star,+}(\zeta) \\ \alpha_{\star,-}(\zeta) \end{pmatrix} =
\ \gamma \begin{pmatrix} 1 \\ -i \end{pmatrix}
e^{-\frac{{\vartheta_{\sharp}}}{|\lambda_\sharp| | \mathfrak{z}_2|}\ \int_0^\zeta \kappa(s) ds}\label{Fstar} \ .
\end{align}
Here, $\gamma\in\mathbb{C}$ is any constant for which $\|\alpha_\star\|_{L^2}=1$.
\item The solution \eqref{Fstar}, $\alpha_\star$, generates a leading order approximate (two-scale) edge state:
\begin{align}
&\Psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\nonumber\\
&\qquad = \alpha_{\star,+}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_+({\bf x}) +
\alpha_{\star,-}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_-({\bf x}) \label{Psi0-a}\\
&\qquad = e^{i{\bf K}\cdot{\bf x}}\ \frac{\gamma}{\sqrt{2}} \ (-1-i) \left[\ i \frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{ \mathfrak{z}_2}{| \mathfrak{z}_2|} P_1({\bf x}) -
P_2({\bf x})\ \right]\
e^{-\frac{\vartheta_\sharp}{|\lambda_\sharp| | \mathfrak{z}_2|}\int_0^{\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\kappa(s)ds} \ .
\label{Psi0-d}\end{align}
$\Psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ propagates in the ${\bm{\mathfrak{v}}}_1$ direction with parallel quasimomentum ${\kpar=\bK\cdot\vtilde_1}$, and decays exponentially as ${\bm{\mathfrak{K}}}_2\cdot{\bf x}\to\pm\infty$, {\it i.e.} transverse to the edge.
\end{enumerate}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{zero-mode}]
The system \eqref{m-dirac-eqn} with energy $E^{(1)}=0$ may be written as:
\begin{align*}
\partial_\zeta \alpha\ &=\ \frac{-i{\vartheta_{\sharp}}}{|\lambda_\sharp|| \mathfrak{z}_2|}\ \kappa(\zeta)\ \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}\ \alpha,\quad \
\alpha = \begin{pmatrix} \alpha_+\\ \alpha_-\end{pmatrix},
\end{align*}
and has solutions:
\begin{align*}
\beta_1(\zeta)&=
\begin{pmatrix} 1\\ i\end{pmatrix} \
e^{\frac{{\vartheta_{\sharp}}}{|\lambda_\sharp| | \mathfrak{z}_2|}\ \int_0^\zeta \kappa(s) ds} \ , \quad \text{and} \quad
\beta_2(\zeta)=
\begin{pmatrix} 1\\ -i\end{pmatrix} \
e^{\frac{-{\vartheta_{\sharp}}}{|\lambda_\sharp| | \mathfrak{z}_2|}\ \int_0^\zeta \kappa(s) ds} \ .
\end{align*}
Since ${\vartheta_{\sharp}}>0$ and $\kappa(\zeta)\to\pm \kappa_\infty$ as $\zeta\to\pm\infty$, with $\kappa_\infty>0$, the solution $\beta_2(\zeta)$
decays as $\zeta\to\pm\infty$. Thus we set $\alpha_\star(\zeta)=\gamma \beta_2(\zeta)$, with constant $\gamma\in\mathbb{C}$ chosen so that $\norm{\alpha_\star}_{L^2(\mathbb{R})}=1$. This yields the expression for $\Psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ in \eqref{Psi0-a}-\eqref{Psi0-d}, and completes the proof of Proposition \ref{zero-mode}.
\end{proof}
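With the illustrative normalization $|\lambda_\sharp|\,|\mathfrak{z}_2|=\vartheta_\sharp=1$ and $\kappa(\zeta)=\tanh\zeta$ (placeholder choices, not derived from any specific potential), one has $e^{-\int_0^\zeta\kappa(s)ds}={\rm sech}\,\zeta$, and the zero mode \eqref{Fstar} can be checked directly against the Dirac system \eqref{m-dirac-eqn}:

```python
import numpy as np

# Verify that alpha(z) = (1, -i)^T * sech(z) is annihilated by
# D = -i*sigma3*d/dz + kappa(z)*sigma1, with kappa = tanh and all
# constants normalized to 1 (illustrative choices only).
z = np.linspace(-10.0, 10.0, 2001)
h = z[1] - z[0]
sech = 1.0 / np.cosh(z)
alpha = np.array([sech + 0j, -1j * sech])

dalpha = np.gradient(alpha, h, axis=1)   # d/dz, second-order accurate
sigma1 = np.array([[0, 1], [1, 0]])
sigma3 = np.array([[1, 0], [0, -1]])
residual = -1j * (sigma3 @ dalpha) + np.tanh(z) * (sigma1 @ alpha)
print(np.max(np.abs(residual)))          # O(h^2): numerically zero
```

The residual is of size $\mathcal{O}(h^2)$ from the finite-difference derivative, consistent with $\mathcal{D}\alpha_\star=0$ exactly.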
\begin{remark}[Topological Stability]
The zero-energy eigenpair, \eqref{Fstar}, is ``topologically stable'' or
``topologically protected'' in the sense that
it (and hence the bifurcation of edge states, which it seeds) persists for any localized
perturbation of $\kappa(\zeta)$. Such perturbations may be large but do not change the asymptotic behavior of $\kappa(\zeta)$ as
$\zeta\to\pm\infty$.
\end{remark}
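This persistence can be illustrated numerically for a hypothetical profile: a large but localized bump is added to $\kappa(\zeta)=\tanh\zeta$ (all constants normalized to $1$ purely for illustration), and the explicit zero-mode profile $e^{-\int_0^\zeta\kappa}$ remains exponentially decaying at both ends:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

z = np.linspace(-30.0, 30.0, 4001)
bump = 5.0 * np.exp(-z**2)          # large but localized perturbation
kappa = np.tanh(z) + bump           # asymptotics at +/- infinity unchanged

phase = cumulative_trapezoid(kappa, z, initial=0.0)
phase -= phase[len(z) // 2]         # shift so phase = integral from 0 (z[2000] = 0)
alpha = np.exp(-phase)              # unnormalized zero-mode profile

print(alpha[0] < 1e-8, alpha[-1] < 1e-8)  # True True: still localized
```

Only the limits $\kappa(\zeta)\to\pm\kappa_\infty$ enter the decay rates at $\pm\infty$, so the mode stays in $L^2(\mathbb{R})$ for any such localized perturbation.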
\section{Existence of edge states localized along an edge}\label{thm-edge-state}
In this section we prove the existence of edge states for the eigenvalue problem:
\begin{align}
\label{EVP_2}
&H^{(\delta)} \Psi\ =\ E\ \Psi \ ,\ \ \Psi\in H_{{\kpar=\bK\cdot\vtilde_1}}^2(\Sigma) \ , \quad \text{where} \\
&H^{(\delta)} \equiv-\Delta+V({\bf x})+\delta\kappa(\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x}) \ . \nonumber
\end{align}
We make the following assumptions:
\begin{enumerate}
\item[(A1)] $V$ is a honeycomb potential in the sense of Definition \ref{honeyV} and $-\Delta+V$ has a Dirac point at $({\bf K},E_\star)$; see Definition \ref{dirac-pt-defn} and
the conclusions of Theorem \ref{diracpt-thm}. In particular,
the degenerate subspace of $H^{(0)}-E_\star$ has orthonormal basis of Floquet-Bloch modes $\{\Phi_1({\bf x})\ ,\ \Phi_2({\bf x})\}$ and
\begin{equation*}
\lambda_\sharp\ \equiv\ \sum_{{\bf m}\in\mathcal{S}} c({\bf m})^2\ \left(\begin{array}{c}1\\ i\end{array}\right)\cdot \left({\bf K}+{\bf m}\vec{\bf k}\right) \ \ne\ 0 ; \ \textrm{see \eqref{lambda-sharp2}.}
\end{equation*}
\item[(A2)] $W$ is real-valued and $\Lambda_h-$ periodic, odd and non-degenerate; {\it i.e.} (W1), (W2) and (W3) of Section \ref{zigzag-edges} hold. In particular,
\begin{equation*}
{\vartheta_{\sharp}}\equiv\inner{\Phi_1,W\Phi_1}_{L_{\bf K}^2}\ =\ \inner{\Phi_+,W\Phi_-}_{L_{\bf K}^2} \ \ne\ 0\ .
\end{equation*}
\item[(A3)] $\kappa(\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x})$ is a domain wall function in the sense of Definition \ref{domain-wall-defn}.
\end{enumerate}
\noindent The following {\it spectral no-fold condition} plays a central role.
\begin{definition}[Spectral no-fold condition]\label{SGC}
Let $H_V=-\Delta+V({\bf x})$, where $V$ is a honeycomb potential in the sense of Definition \ref{honeyV}. Further, let $({\bf K},E_\star)$ be a Dirac point for $H_V$ in the sense of Definition \ref{dirac-pt-defn}, in which we use the convention of labeling the dispersion maps by:
${\bf k}\mapsto E_b({\bf k})$, where
$b\in\{b_\star,b_{\star}+1\}\cup\{b\ge1: b\ne b_\star,\ b_{\star}+1\}$
$\equiv \{-,+\}\cup\{b\ge1: b\ne -, +\}$.
To the ${\bm{\mathfrak{v}}}_1-$ edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$, we associate the ``${\bm{\mathfrak{K}}}_2 -$ slice at quasi-momentum ${\bf K}$'', given by the union over all $b\in \{-,+\}\cup\{b\ge1: b\ne -, +\}$ of the curves
$ \{({\bf K}+\lambda{\bm{\mathfrak{K}}}_2\ ,\ E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)) : \ |\lambda|\le\frac12\}$.
We say the band structure of $H_V$ satisfies the spectral no-fold condition for the ${\bm{\mathfrak{v}}}_1-$ edge or, equivalently at the Dirac point and along the ${\bm{\mathfrak{K}}}_2 -$ slice, with constants $c_1, c_2, \mathfrak{a}_0$, and $\nu\in (0,1)$ if the following holds:
There is a ``modulus'', $\omega(\mathfrak{a})$, which is continuous, non-negative and increasing on $0\le\mathfrak{a}<\mathfrak{a}_0$, satisfying $\omega(0)=0$ and \[ \omega(\mathfrak{a}^\nu)/\mathfrak{a}\to\infty\ \ {\rm as}\ \ \mathfrak{a}\to0,\]
%
such that for all $0\le\mathfrak{a}< \mathfrak{a}_0$:
\begin{align}
\mathfrak{a}^\nu \le |\lambda|\le\frac12\quad &\implies\quad \Big|\ E_\pm({\bf K}+\lambda{\bm{\mathfrak{K}}}_2) - E_\star\ \Big|\ \ge\ c_1\ \omega(\mathfrak{a}^\nu) \ ,
\label{no-fold-over} \\
b\ne\pm, \ |\lambda|\le1/2 \quad &\implies \quad \Big| E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2)-E_\star \Big|\ \ge\ c_2\ (1+|b|) \ .
\label{no-fold-over-b}
\end{align}
\end{definition}
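For orientation (a direct calculation, not needed in the sequel): if the modulus is a power law, $\omega(\mathfrak{a})=\mathfrak{a}^p$ with $p\ge1$, then
\begin{equation*}
\frac{\omega(\mathfrak{a}^\nu)}{\mathfrak{a}}\ =\ \mathfrak{a}^{p\nu-1}\ \to\ \infty\ \ {\rm as}\ \ \mathfrak{a}\to0
\quad\Longleftrightarrow\quad \nu\ <\ \frac1p\ .
\end{equation*}
Thus $\omega(\mathfrak{a})=\mathfrak{a}$ is compatible with any $\nu\in(0,1)$, while $\omega(\mathfrak{a})=\mathfrak{a}^2$ requires $\nu\in(0,\tfrac12)$.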
Our final assumption is
\begin{enumerate}
\item[(A4)]
$-\Delta+V$ satisfies the spectral no-fold condition at quasimomentum ${\bf K}$ along the ${\bm{\mathfrak{K}}}_2 -$ slice; see Definition \ref{SGC}.
\end{enumerate}
\begin{remark}\label{control-in-1d}
\begin{enumerate}
\item Conditions \eqref{no-fold-over}-\eqref{no-fold-over-b} ensure that, restricted to the quasi-momentum slice $\lambda\mapsto {\bf K}+\lambda{\bm{\mathfrak{K}}}_2\in\mathcal{B}_h$, the dispersion curves which touch at the Dirac point $({\bf K},E_\star)$
do not ``fold over'' and return to energies within $c_1\cdot\omega(\mathfrak{a}^\nu)$ of $E_\star$ at quasimomenta bounded away from ${\bf K}$.
\item Dispersion curves of periodic Schr\"odinger operators on $\mathbb{R}^1$ (Hill's operators,\ $H=-\partial_x^2+Q(x)$, where $Q(x+1)=Q(x)$) with ``Dirac points'' (see \cites{FLW-PNAS:14,FLW-MAMS:15}) always satisfy the natural 1D analogue of the spectral no-fold condition with $\omega(\mathfrak{a})=\mathfrak{a}$. Dirac points occur at quasi-momentum $k=\pm\pi$ and ODE arguments ensure that dispersion curves are monotone functions of $k$ away from $k=0,\pm\pi$.
\item In Section \ref{zz-gap}
we prove that $H_{\varepsilon V}=-\Delta+\varepsilon V$, where $V$ is a honeycomb potential, satisfies the no-fold condition along the zigzag slice (${\bm{\mathfrak{v}}}_1={\bf v}_1$) with modulus $\omega(\mathfrak{a})=\mathfrak{a}^2$, under the assumption that $\varepsilon V_{1,1}>0$ and $\varepsilon$ is sufficiently small.
\end{enumerate}
\end{remark}
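A concrete, if degenerate, illustration of the 1D analogue in Remark \ref{control-in-1d} is the free Hill's operator, $H=-\partial_x^2$, regarded as $1-$ periodic (for $Q\equiv0$ every spectral gap is closed, so this is a borderline case, but the linear band crossing is explicit). Its Floquet-Bloch dispersion relations are $E_n(k)=(k+2\pi n)^2$, $n\in\mathbb{Z}$, and the two lowest curves, $k\mapsto k^2$ and $k\mapsto(k-2\pi)^2$, touch linearly at $k=\pi$ with $E_\star=\pi^2$. For $|k-\pi|\le\frac12$,
\begin{equation*}
\left|k^2-\pi^2\right|\ =\ |k-\pi|\,(k+\pi)\ \ge\ \pi\,|k-\pi|\ ,\qquad
\left|(k-2\pi)^2-\pi^2\right|\ =\ |k-\pi|\,|k-3\pi|\ \ge\ \pi\,|k-\pi|\ ,
\end{equation*}
so the natural 1D no-fold condition holds with modulus $\omega(\mathfrak{a})=\mathfrak{a}$; the remaining curves are separated from $E_\star$ by distances growing in $n$, in analogy with \eqref{no-fold-over-b}.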
We now state a key result of this paper, giving sufficient conditions for the existence
of ${\bm{\mathfrak{v}}}_1-$ edge states of $H^{(\delta)}$, for ${\bm{\mathfrak{v}}}_1\in\Lambda_h$.
\begin{theorem}\label{thm-edgestate}
Consider the ${\bm{\mathfrak{v}}}_1-$ edge state eigenvalue problem,
\eqref{EVP_2},
where $V({\bf x})$, $W({\bf x})$ and $\kappa(\zeta)$ satisfy assumptions (A1)-(A4).
Then, there exist positive constants $\delta_0, c_0$ and a branch of solutions of \eqref{EVP_2},
\[|\delta|\in(0,\delta_0)\longmapsto (E^\delta,\Psi^\delta)\in (E_\star-c_0\ \delta_0\ ,\ E_\star+c_0\ \delta_0)\times H^2_{{\kpar=\bK\cdot\vtilde_1}}(\Sigma) , \]
such that the following holds:
\begin{enumerate}
\item $\Psi^\delta$ is well-approximated by a slow modulation of a linear combination of
degenerate Floquet-Bloch modes $\Phi_+$ and $\Phi_-$ (\eqref{correct-basis}),
which is decaying transverse to the edge, $\mathbb{R}{\bm{\mathfrak{v}}}_1$:
\begin{align}
&\left\|\ \Psi^\delta(\cdot)\ -\
\left[\alpha_{\star,+}(\delta{\bm{\mathfrak{K}}}_2 \cdot)\Phi_+(\cdot)+\alpha_{\star,-}(\delta{\bm{\mathfrak{K}}}_2 \cdot )\Phi_-(\cdot)\right]\
\right\|_{H^2_{{\kpar=\bK\cdot\vtilde_1}}}\
\lesssim\ \delta^{\frac12}\ , \label{TM_Psi-error}\\
& E^\delta = E_\star\ +\ E^{(2)}\delta^2\ +\ o(\delta^2), \label{TM_E-error}
\end{align}
where $E^{(2)}$ is obtained directly from \eqref{solvability_cond_E2}, \eqref{ipG} and \eqref{G2def}. The implied constant in \eqref{TM_Psi-error} depends on $V$, $W$ and $\kappa$, but is independent of $\delta$.
\item The amplitude vector, $\alpha_\star(\zeta)=\left(\alpha_{\star,+}(\zeta),\alpha_{\star,-}(\zeta)\right)$, is an $L^2(\mathbb{R}_\zeta)- $
normalized, topologically protected zero-energy eigenstate of the Dirac system \eqref{multi-dirac-op}: $\mathcal{D}\alpha_\star=0$ (see Proposition \ref{zero-mode}).
\end{enumerate}
\end{theorem}
Perturbation theory for ${k_{\parallel}}$ near ${\bf K}\cdot{\bm{\mathfrak{v}}}_1$ can be used to show the persistence of edge states
for parallel quasi-momenta near ${\bf K}\cdot{\bm{\mathfrak{v}}}_1$.
\begin{corollary}
\label{vary_k_parallel}
Fix $V$, $W$, $\kappa$ and $\delta$ as in Theorem \ref{thm-edgestate}. Then there exists $\eta_0\ll\delta$ such that for all ${k_{\parallel}}$ satisfying $|{k_{\parallel}} - {\bf K}\cdot{\bm{\mathfrak{v}}}_1| <\eta_0$, there exists an $H^2_{k_{\parallel}}(\Sigma)-$ eigenfunction with eigenvalue $\mu(\delta,{k_{\parallel}})=\ E^\delta\ + \ \mu^{\delta}\ ({k_{\parallel}}-{\bf K}\cdot{\bm{\mathfrak{v}}}_1)\ +\mathcal{O}(|{k_{\parallel}}-{\bf K}\cdot{\bm{\mathfrak{v}}}_1|^2)$, where $E^{\delta}$ is given in \eqref{TM_E-error}, and $\mu^\delta$ is a constant, which is independent of ${k_{\parallel}}$.
\end{corollary}
\noindent Zigzag edge states for ${k_{\parallel}}$ in a neighborhood of ${k_{\parallel}}={\bf K}\cdot{\bm{\mathfrak{v}}}_1=2\pi/3\ ({\bm{\mathfrak{v}}}_1={\bf v}_1)$ and, by symmetry, in a neighborhood of ${k_{\parallel}}=4\pi/3$ are indicated in Figure \ref{fig:k_parallel3}.
\subsection{Corrector equation}\label{corrector-equation}
We seek a solution of the eigenvalue problem \eqref{EVP_2}, $\Psi^\delta$, in the form
\begin{align}
\Psi^\delta &\equiv \psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})+\delta\psi_p^{(1)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})+\delta\eta^\delta({\bf x}) , \label{eta-def}\\
E^\delta &\equiv E_\star + \delta^2\mu^\delta \label{mu-def} .
\end{align}
Here,
$\psi^{(0)}$ and $\psi_p^{(1)}$ are given by their respective multiple scale expressions \eqref{psi0-soln} and \eqref{psi1p-def}:
\begin{align*}
\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) &= \alpha_{\star,+}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_+({\bf x}) + \alpha_{\star,-}(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\Phi_-({\bf x}) , \nonumber \\
\psi_p^{(1)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) &= \left(R(E_\star)G^{(1)}\right)({\bf x}, \delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) ,
\end{align*}
and $(\mu^\delta,\eta^\delta({\bf x}))$ is the corrector, to be constructed. We may assume throughout that $\delta\ge0$.
\begin{remark}
\label{regularity}
We shall make frequent use of the regularity of $\Phi_+({\bf x})$, $\Phi_-({\bf x})$
and $\alpha_{\star}(\zeta)\equiv(\alpha_{\star,+}(\zeta),\alpha_{\star,-}(\zeta))^T$.
In particular, $V \in C^{\infty}(\mathbb{R}^2/\Lambda)$ and elliptic
regularity theory imply that $e^{-i{\bf K}\cdot{\bf x}}\Phi_\pm$ is $ C^{\infty}(\mathbb{R}^2/\Lambda)$, and
by Proposition \ref{zero-mode}, $\alpha_\star(\zeta)$ and its derivatives with respect to $\zeta$ are all exponentially decaying as $|\zeta|\to\infty$.
\end{remark}
The following proposition lists useful bounds on $\psi^{(0)}$ and $\psi_p^{(1)}$.
\begin{proposition}[$H_{\kpar=\bK\cdot\vtilde_1}^s(\Sigma_{\bf x})$ bounds on $\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ and $\psi_p^{(1)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$]
\label{lemma:psi_bounds} For all $s=1,2,\dots$, there exists $\delta_0>0$, such that if $0<|\delta|<\delta_0$,
then the leading order expansion terms $\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ and $\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})$ displayed in \eqref{psi0-soln} and
\eqref{psi1p-def} satisfy the bounds:
\begin{align*}
\norm{\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{H_{\kpar=\bK\cdot\vtilde_1}^s} + \norm{\partial_\zeta^2\psi^{(0)}({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L^2_{\kpar=\bK\cdot\vtilde_1}}\ &\approx |\delta|^{-1/2},\\
\norm{\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{H_{\kpar=\bK\cdot\vtilde_1}^s} &\lesssim |\delta|^{-1/2}, \\
\norm{\partial_\zeta^2\psi^{(1)}_p({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2}\ +\
\norm{\partial_{\bf x}\partial_\zeta\psi^{(1)}_p({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2} &\lesssim |\delta|^{-1/2}.
\end{align*}
It follows that
$\|\psi^{(0)}\|_{H^2_{\kpar=\bK\cdot\vtilde_1}} \approx \delta^{-1/2} \ \gg\
\|\delta \psi^{(1)}_p(\cdot,\delta\cdot) \|_{H^2_{\kpar=\bK\cdot\vtilde_1}} =\mathcal{O}( \delta^{1/2})$.
\end{proposition}
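The $|\delta|^{-1/2}$ scaling in Proposition \ref{lemma:psi_bounds} originates in the slow modulation. For $\alpha\in L^2(\mathbb{R})$ and $\delta>0$, the substitution $t=\delta s$ gives
\begin{equation*}
\int_{\mathbb{R}}\left|\alpha(\delta s)\right|^2 ds\ =\ \delta^{-1}\int_{\mathbb{R}}\left|\alpha(t)\right|^2 dt\ =\ \delta^{-1}\,\left\|\alpha\right\|_{L^2(\mathbb{R})}^2\ .
\end{equation*}
Applied along the unbounded direction of the cylinder $\Sigma$, with $\alpha=\alpha_{\star,\pm}$ (which, together with its derivatives, decays exponentially; see Remark \ref{regularity}), this yields $\|\psi^{(0)}(\cdot,\delta{\bm{\mathfrak{K}}}_2\cdot)\|_{L^2_{\kpar=\bK\cdot\vtilde_1}}\approx|\delta|^{-1/2}$. Derivatives $\partial_\zeta$ taken before setting $\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}$ bring no additional factors of $\delta$, which accounts for the identical scaling of the $\partial_\zeta^2$ terms.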
The proof of Proposition \ref{lemma:psi_bounds} follows the approach taken in the proof of
Lemma 6.1 in Appendix G of \cites{FLW-MAMS:15}. We omit the details but make two key technical remarks that facilitate the proof.
\noindent {\it Bound on $\|\Phi_b\|_{H^s}$:} Recall $\Phi_b({\bf x};{\bf k})= e^{i{\bf k}\cdot{\bf x}}p_b({\bf x};{\bf k})$, where
$\|\Phi_b\|_{L^2(\Omega)}=\|p_b\|_{L^2(\Omega)}=1$. Now $p_b({\bf x};{\bf k})$ satisfies
$-\Delta p_b= 2i{\bf k}\cdot\nabla p_b - |{\bf k}|^2 p_b - V p_b+E_b({\bf k}) p_b$, where $V$ is bounded and smooth.
By 2D Weyl asymptotics $|E_b({\bf k})|\approx(1+|b|), \ b\gg1$ and therefore we have
$\| \Delta p_b\|_{H^{s-1}}\le C_s (1+b)\ \|p_b\|_{H^s}$. Hence, by elliptic theory
$\| p_b\|_{H^{s+1}}\le C_s (1+b)\ \|p_b\|_{H^s}$ and induction on $s\ge0$ yields
$\| p_b\|_{H^{s}}\le C_s (1+b)^s\ $.
\noindent {\it Rapid decay of $\inner{\Phi_b(\cdot;{\bf K}),f(\cdot)}_{L^2(\Omega)}$:}
Using $H^{(0)}\Phi_b({\bf x};{\bf K}) = E_b({\bf K}) \Phi_b({\bf x};{\bf K})$, for sufficiently smooth $f$ we have:
$ \inner{\Phi_b(\cdot;{\bf K}),f(\cdot)}_{L^2(\Omega)} = (E_b({\bf K}))^{-M} \inner{\Phi_b(\cdot;{\bf K}),[H^{(0)}]^M f(\cdot)}_{L^2(\Omega)}.$ Hence, for any $M\ge0$, $ | \inner{\Phi_b(\cdot;{\bf K}),f(\cdot)}_{L^2(\Omega)} |\ \le\ C_M (1+b)^{-M}$.
\smallskip
It remains to construct and bound the corrector $(\mu,\eta({\bf x}))$. Substitution of the expansion \eqref{eta-def} into the eigenvalue problem \eqref{EVP_2} yields an equation for $\eta({\bf x}) \in H^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma) $, which depends on $\mu$ and the small parameter $\delta$:
{\small
\begin{align}
& \left(-\Delta_{\bf x} + V({\bf x}) - E_\star\right)\eta({\bf x}) + \delta\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) W({\bf x})\eta({\bf x}) - \delta^2\mu\ \eta({\bf x}) \nonumber \\
&\quad = \delta \Big(2{\bm{\mathfrak{K}}}_2\cdot \nabla_{\bf x}\ \partial_\zeta-\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})\Big)\psi_p^{(1)}({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ +\ \delta\mu\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) \nonumber\\
&\quad\qquad
+ \delta |{\bm{\mathfrak{K}}}_2|^2 \partial_\zeta^2\psi^{(0)}({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}
+\delta^2\mu\psi_p^{(1)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) + \delta^2|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\ \psi_p^{(1)}({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}} .
\label{corrector-eqn1}
\end{align}
}
To prove Theorem \ref{thm-edgestate}, we shall prove that \eqref{corrector-eqn1} has a solution $(\mu(\delta),\eta^\delta)$, with $\eta^\delta \in H^2_{\kpar=\bK\cdot\vtilde_1}$ satisfying the bound
\begin{equation*}
\|\delta \eta^\delta \|_{H^2_{\kpar=\bK\cdot\vtilde_1}} \leq C\delta^{1/2} \ .
\end{equation*}
\subsection{Decomposition of corrector, $\eta$, into near and far energy components}\label{rough-strategy}
Introduce the abbreviated notation, for $|\lambda|\le1/2$:
\begin{align}\label{Eb_of_lambda}
E_b(\lambda)\ =\
\begin{cases}
E_b({\bf K}+\lambda{\bm{\mathfrak{K}}}_2) & b\notin\{b_\star, b_\star+1\} , \\
E_-(\lambda) & b=b_\star , \\
E_+(\lambda) & b=b_\star+1 ,
\end{cases}
\end{align}
and
\begin{align}\label{Phib_of_lambda}
\Phi_b({\bf x};\lambda)\ =\
\begin{cases}
\Phi_b({\bf x};{\bf K}+\lambda{\bm{\mathfrak{K}}}_2) & b\notin\{b_\star, b_\star+1\} , \\
\Phi_-({\bf x};\lambda) & b=b_\star , \\
\Phi_+({\bf x};\lambda) & b=b_\star+1 .
\end{cases}
\end{align}
Define $\widetilde{f}_b(\lambda)=\inner{\Phi_b(\cdot, \lambda), f(\cdot)}_{L^2_{{\kpar=\bK\cdot\vtilde_1}}}$.
By Theorem \ref{fourier-edge}, any $\eta\in H^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma) $ has the representation
\begin{equation}
\label{eta-expansion}
\eta({\bf x}) = \sum_{b\ge1}\ \int_{|\lambda|\le1/2}\ \Phi_b({\bf x};\lambda)\ \widetilde{\eta}_b(\lambda)\ d\lambda \ .
\end{equation}
We next derive a system of equations governing $\{\widetilde{\eta}_b(\lambda)\}_{b\geq1}$, which is formally equivalent to the system \eqref{corrector-eqn1}. We then prove that this system has a solution, which is used to construct $\eta({\bf x})$.
Take the inner product of \eqref{corrector-eqn1} with $\Phi_b({\bf x};\lambda)$, for $ b\ge1$, to obtain
\begin{align}
b\ge1:\quad &\left(\ E_b(\lambda)\ -\ E_\star\ \right) \widetilde{\eta}_b(\lambda) \nonumber \\
&\qquad\qquad +
\delta \left\langle \Phi_b(\cdot;\lambda) ,
\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) W(\cdot) \eta(\cdot) \right\rangle_{L^2_{{\kpar=\bK\cdot\vtilde_1}}}\label{eta-b-system}\\
& \qquad = \delta\widetilde{F}_b[\mu,\delta](\lambda)
+ \delta^2\ \mu\ \widetilde{\eta}_b(\lambda) \ ,\ |\lambda|\le1/2. \nonumber
\end{align}
Here, $\widetilde{F}_b[\mu,\delta](\lambda),\ b\ge1$, is given by:
\begin{align}
& \widetilde{F}_b[\mu,\delta](\lambda)\equiv
\widetilde{F}^{1,\delta}_b(\lambda)\ + \mu \widetilde{F}^{2,\delta}_b(\lambda)\ +\ \delta\mu \widetilde{F}^{3,\delta}_b(\lambda)\ +\ \widetilde{F}^{4,\delta}_b(\lambda)\ +\ \delta \widetilde{F}^{5,\delta}_b(\lambda),
\label{Fb-def}
\end{align}
where
{\small
\begin{align}
\widetilde{F}^{1,\delta}_b(\lambda) & \equiv
\inner{\Phi_b({\bf x},\lambda), (2{\bm{\mathfrak{K}}}_2\cdot \nabla_{\bf x}\ \partial_\zeta -\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x}))
\psi^{(1)}_p({\bf x},\zeta)\Big|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2} ,\nonumber\\
\widetilde{F}^{2,\delta}_b(\lambda)& \equiv \inner{\Phi_b({\bf x},\lambda),\psi^{(0)}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2} , \nonumber\\
\widetilde{F}^{3,\delta}_b(\lambda) & \equiv \inner{\Phi_b({\bf x},\lambda),\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2} , \label{Fdef}\\
\widetilde{F}^{4,\delta}_b(\lambda) & \equiv \inner{\Phi_b({\bf x},\lambda), \left. |{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2 \psi^{(0)}({\bf x},\zeta)\right|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2} , \nonumber\\
\widetilde{F}^{5,\delta}_b(\lambda) & \equiv \inner{\Phi_b({\bf x},\lambda), \left. |{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2 \psi^{(1)}_p({\bf x},\zeta)\right|_{\zeta=\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}}}_{L_{\kpar=\bK\cdot\vtilde_1}^2} . \nonumber
\end{align}
}
Recall that the modulus property in the spectral no-fold condition ensures that $\delta/\omega(\delta^\nu) \to 0$ as $\delta\to0$, where $\nu\in(0,1)$. We next decompose $\eta({\bf x})$ into its components with energies ``near'' and ``far'' from the Dirac point:
\begin{align*}
\eta({\bf x})\ &=\ \eta_{\rm near}({\bf x})\ +\ \eta_{\rm far}({\bf x}), \ \ {\rm where}
\end{align*}
\begin{align}
\eta_{\rm near}({\bf x})\ &\equiv\
\sum_{b=\pm}\ \int_{|\lambda|\le1/2}\ \Phi_b({\bf x};\lambda)\ \widetilde{\eta}_{b,{\rm near}}(\lambda)\ d\lambda \label{eta-near} , \\
\eta_{\rm far}({\bf x})\ &\equiv\ \sum_{b\geq1}\ \int_{|\lambda|\le1/2}\ \Phi_b({\bf x};\lambda)\ \widetilde{\eta}_{b,{\rm far}}(\lambda)\ d\lambda,\qquad {\rm and} \label{eta-far}
\end{align}
\begin{align*}
\widetilde{\eta}_{\pm,{\rm near}}(\lambda)\ &\equiv\ \chi\left(|\lambda|\le\delta^\nu\right)\
\widetilde{\eta}_{\pm}(\lambda) , \\
\widetilde{\eta}_{b,{\rm far}}(\lambda)\ &\equiv\ \chi\left((\delta_{b,+}+\delta_{b,-})\delta^\nu\le |\lambda|\le \frac12 \right)\
\widetilde{\eta}_b(\lambda),\ b\geq1 ;
\end{align*}
$\delta_{b,+}$ and $\delta_{b,-}$ are Kronecker delta symbols.
%
We rewrite system \eqref{eta-b-system} as two coupled subsystems:\
a pair of equations, which governs the {\bf near energy components}:
\begin{align}
&\left(E_{+}(\lambda)-E_{\star}\right)\widetilde{\eta}_{+,\rm near}(\lambda) \nonumber \\
&\qquad+ \delta\chi\Big(\abs{\lambda}\leq\delta^{\nu}\Big)\inner{\Phi_{+}(\cdot,\lambda),\kappa(\delta{\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\left[\eta_{\rm near}(\cdot)+\eta_{\rm far}(\cdot)\right]}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)}
\label{near_cpt_1} \\
&\quad =\delta\chi\Big(\abs{\lambda}\leq\delta^{\nu}\Big)
\widetilde{F}_{+}[\mu,\delta](\lambda) + \delta^2\mu\ \widetilde{\eta}_{+,{\rm near}}(\lambda), \nonumber \\
&\left(E_{-}(\lambda)-E_{\star}\right)\widetilde{\eta}_{-,\rm near}(\lambda) \nonumber \\
&\qquad+\delta\chi\left(\abs{\lambda}\leq\delta^{\nu}\right)\inner{\Phi_{-}(\cdot,\lambda),\kappa(\delta{\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\left[\eta_{\rm near}(\cdot)+ \eta_{\rm far}(\cdot)\right]}_{L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)}
\label{near_cpt_2} \\
&\quad =\delta\chi(\abs{\lambda}\leq\delta^{\nu})
\widetilde{F}_{-}[\mu,\delta](\lambda) + \delta^2\mu\ \widetilde{\eta}_{-,{\rm near}}(\lambda), \nonumber
\end{align}
coupled to an infinite system governing the {\bf far energy components}:
\begin{align}
&\left(E_b(\lambda)-E_{\star}\right)\widetilde{\eta}_{b,\rm far}(\lambda)
+\delta\chi\Big(1/2\ge\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu}\Big) \times \nonumber \\
&\qquad
\left\langle\Phi_{b}(\cdot,\lambda),\kappa(\delta{\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\left[\eta_{\rm near}(\cdot)+\eta_{\rm far}
(\cdot)\right] \right\rangle_{L^2_{{\kpar=\bK\cdot\vtilde_1}}(\Sigma)} \label{far_cpts} \\
&\quad =\delta\chi\Big(1/2\ge\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu}\Big)
\widetilde{F}_{b}[\mu,\delta](\lambda) + \delta^2\mu\ \widetilde{\eta}_{b,\rm far}(\lambda),\ \ b\geq1. \nonumber
\end{align}
We now systematically manipulate \eqref{near_cpt_1}-\eqref{far_cpts} into the form of a band-limited Dirac system; see Proposition \ref{near_freq_compact}. This latter equation is then solved in Proposition \ref{solve4beta}. Since all steps are reversible, this yields a solution $(\mu^\delta , \{\widetilde{\eta}^\delta_b(\lambda)\}_{b\geq1})$ of \eqref{near_cpt_1}-\eqref{far_cpts}. Finally, $\eta^\delta\in H^2_{\kpar=\bK\cdot\vtilde_1}(\Sigma)$, the solution of corrector equation \eqref{corrector-eqn1}, is reconstructed from the amplitudes $\{\widetilde{\eta}^\delta_b(\lambda)\}_{b\geq1}$ using \eqref{eta-expansion}.
\subsection{Construction of $\eta_{\rm far}=\eta_{\text{far}}[\eta_{\text{near}},\mu,\delta]$ and derivation of a closed system for $\eta_{\rm near}$}
We solve \eqref{far_cpts} for $\eta_{\rm far}$ as a functional of $\eta_{\rm near}$, and the parameters $\mu$ and $\delta$.
We then study the {\it closed} equation for $\eta_{\rm near}$ obtained by substitution of $\eta_{\rm far}$ into \eqref{near_cpt_1} and \eqref{near_cpt_2}.
\noindent {\it It is in the construction of this map that we use assumption (A4), the spectral no-fold condition along the ${\bm{\mathfrak{K}}}_2 -$ slice; Definition \ref{SGC}.
We apply it in the form: There exists a modulus, $\omega(\mathfrak{a})$, and positive constants $\nu$, $c_1$ and $c_2$, depending on $V$, such that for all $\delta\ne0$ and sufficiently small:}
%
\begin{align}
\delta^\nu \le |\lambda|\le\frac12\quad &\implies\quad \Big|\ E_\pm(\lambda) - E_\star\ \Big|\ \ge\ c_1\ \omega(\delta^{\nu}),
\label{no-fold-over-A} \\
b\ne\pm:\ \ \ |\lambda|\le1/2 \quad &\implies \quad \Big| E_b(\lambda)-E_\star \Big|\ \ge\ c_2\ (1+|b|) \ .
\label{no-fold-over-b-A}
\end{align}
\noindent The far energy system \eqref{far_cpts} may be written as
a fixed point system for $\widetilde\eta_{\rm far}=\{\widetilde{\eta}_{b,\rm far}(\lambda)\}_{b\ge1}$:
\begin{equation}
\widetilde{\mathcal{E}}_b[\widetilde{\eta}_{\rm far};\eta_{\rm near},\mu,\delta]\ =\
\widetilde{\eta}_{b,\rm far}\ ,\qquad b\ge1,
\label{fixed-pt1}
\end{equation}
where the mapping $\widetilde{\mathcal{E}}_b$ is given by
\begin{align*}
\widetilde{\mathcal{E}}_b[\phi;\psi,\mu,\delta](\lambda)&\equiv
\delta^2\mu\ \frac{\widetilde{\phi}_{b,\textrm{far}}(\lambda)}{{E_b(\lambda)-E_{\star}}}
+\frac{\delta\ \chi\Big(1/2\ge\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu}\Big)}{E_b(\lambda)-E_{\star}} \times \\
& \quad \left(-\inner{\Phi_{b}(\cdot,\lambda),\kappa(\delta {\bm{\mathfrak{K}}}_2 \cdot)W(\cdot)\left[\psi(\cdot)+ \phi
(\cdot)\right]}_{L_{\kpar=\bK\cdot\vtilde_1}^2} + \widetilde{F}_{b}[\mu,\delta](\lambda) \right) ,
\end{align*}
and
\begin{align*}
\phi({\bf x})&= \sum_{b\ge1} \int_{|\lambda| \leq 1/2} \chi\Big(\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu}\Big)\
\widetilde{\phi}_b(\lambda) \Phi_b({\bf x};\lambda)\ d\lambda \\
& =\
\sum_{b\ge1} \int_{|\lambda| \leq 1/2} \widetilde{\phi}_{b,{\rm far}}(\lambda) \Phi_b({\bf x};\lambda)\ d\lambda\ .
\end{align*}
Equivalently,
\begin{equation}
\mathcal{E}[\eta_{\rm far};\eta_{\rm near},\mu,\delta]\ =\ \eta_{\rm far}\ .
\label{fixed-pt-notilde}
\end{equation}
For fixed $\mu$, $\delta$ and band-limited $\eta_{\rm near}$:
\begin{equation}
\widetilde{\eta}_{\pm,\rm near}(\lambda)\ =\ \chi\left(\abs{\lambda}\leq \delta^\nu\right)
\widetilde{\eta}_{\pm,\rm near}(\lambda)
, \label{near-def}\end{equation}
we seek a solution $\{\widetilde{\eta}_{b,\rm far}(\lambda)\}_{b\ge1}$, supported at energies
bounded away from $E_\star$:
\begin{equation}
\widetilde{\eta}_{b,\rm far}(\lambda)\ =\
\chi\left(\abs{\lambda}\ge(\delta_{b,-}+\delta_{b,+})\delta^\nu\right) \widetilde{\eta}_{b,\rm far}(\lambda)
, ~~~b\ge1 .\label{far-def}\end{equation}
\noindent Introduce the Banach spaces of functions limited to ``far'' and ``near'' energy regimes:
\begin{align*}
L^2_{{\rm near}, \delta^\nu}(\Sigma) &\equiv\
\left\{ f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma) : \widetilde{f}_b(\lambda)\ \textrm{satisfies \eqref{near-def}}\right\} , \\
L^2_{{\rm far}, \delta^\nu}(\Sigma) &\equiv\
\left\{ f\in L_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma) : \widetilde{f}_b(\lambda)\ \textrm{satisfies \eqref{far-def}}\right\} .
\end{align*}
Near- and far- energy Sobolev spaces $H^s_{\rm far}(\Sigma) $ and
$H^s_{\rm near}(\Sigma) $ are analogously defined.
The corresponding open balls of radius $\rho$ are given by:
\begin{align*}
B_{{\rm near},\delta^\nu}(\rho) &\equiv\
\left\{ f\in L^2_{{\rm near}, \delta^\nu} : \|f\|_{L_{\kpar=\bK\cdot\vtilde_1}^2}<\rho \right\} , \\
B_{{\rm far},\delta^\nu}(\rho) &\equiv\
\left\{ f\in L^2_{{\rm far}, \delta^\nu} : \|f\|_{L_{\kpar=\bK\cdot\vtilde_1}^2}<\rho \right\} .
\end{align*}
\noindent Using (A4) that $H^{(0)}=-\Delta+V$ satisfies the no-fold condition for the ${\bm{\mathfrak{v}}}_1 -$ edge, we deduce:
\begin{proposition}\label{fixed-pt}
\begin{enumerate}
\item For any fixed $M>0, R>0$, there exists a positive number, $\delta_0\le1$, such that for all $0<\delta<\delta_0$,
equation \eqref{fixed-pt-notilde}, or equivalently, the system \eqref{fixed-pt1}, has a unique solution
\begin{align*}
&(\eta_{\text{near}},\mu,\delta)\in B_{{\rm near},\delta^\nu}(R)\times\{|\mu|<M\}\times\{0<\delta<\delta_0\}
\nonumber\\
&\qquad\qquad \mapsto\ \eta_{\rm far}(\cdot;\eta_{\rm near},\mu,\delta)=
\mathcal{T}^{-1}\widetilde{\eta}_{\text{far}}\in B_{{\rm far},\delta^\nu}(\rho_\delta) ,\quad
\rho_\delta=\mathcal{O}\left(\frac{\delta^{\frac12}}{\omega(\delta^\nu)}\right).
\end{align*}
\item The mapping $(\eta_{\text{near}},\mu,\delta)\mapsto \eta_{\rm far}(\cdot;\eta_{\rm near},\mu,\delta)\in H_{\kpar=\bK\cdot\vtilde_1}^2$ is
Lipschitz in $(\eta_{\rm near},\mu)$ with:
\begin{align}
&\left\|\ \eta_{\text{far}}[\psi_1,\mu_1,\delta] - \eta_{\text{far}}[\psi_2,\mu_2,\delta]\ \right\|_{H_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)} \nonumber \\
&\qquad \leq \ C' \ \frac{\delta}{\omega(\delta^\nu)} \ \Big(\norm{\psi_1-\psi_2}_{H_{\kpar=\bK\cdot\vtilde_1}^2} + \abs{\mu_1-\mu_2} \Big), \nonumber \\
&\norm{\ \eta_{\text{far}}[\eta_{\text{near}};\mu,\delta]\ }_{H_{\kpar=\bK\cdot\vtilde_1}^2}
\le\ C''\left(\
\frac{\delta}{\omega(\delta^\nu)}\norm{\eta_{\text{near}}}_{H_{\kpar=\bK\cdot\vtilde_1}^2}+\frac{\delta^{\frac12}}{\omega(\delta^\nu)} \right)\ . \label{eta-far-bound}
\end{align}
The constants $C'$ and $C''$ depend only on $M, R$ and $\nu$.
\item
The mapping $(\eta_{\text{near}},\mu,\delta)\mapsto\eta_{\text{far}}[\eta_{\text{near}},\mu,\delta]$
satisfies:
\begin{equation}
\label{eta_far_affine}
\eta_{\text{far}}[\eta_{\text{near}},\mu,\delta]({\bf x}) = [A\eta_{\text{near}}]({\bf x};\mu,\delta) + \mu
B({\bf x};\delta) + C({\bf x};\delta).
\end{equation}
For $\eta_{\text{near}}\in B_{{\rm near},\delta^\nu}(R)$ we have:
\begin{align*}
&\left\| [A\eta_{\text{near}}](\cdot,\mu_1,\delta) - [A\eta_{\text{near}}](\cdot,\mu_2,\delta)\right\|_{H_{\kpar=\bK\cdot\vtilde_1}^2}
\le \ C'_{M,R}\ \frac{\delta}{\omega(\delta^\nu)}\ |\mu_1-\mu_2|, \\
&\norm{[A\eta_{\text{near}}](\cdot;\mu,\delta)}_{H_{\kpar=\bK\cdot\vtilde_1}^2}
\leq \frac{\delta}{\omega(\delta^\nu)}
\norm{\eta_{\text{near}}}_{H_{\kpar=\bK\cdot\vtilde_1}^2}, \\
&\norm{B(\cdot;\delta)}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \leq\frac{\delta^\frac12}{\omega(\delta^\nu)}, \ \ \text{and} \ \
\norm{C(\cdot;\delta)}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \leq \frac{\delta^\frac12}{\omega(\delta^\nu)}.
\end{align*}
\item We may extend $\eta_{\rm far}[\cdot;\eta_{\rm near},\mu,\delta]$ to be defined on the half-open interval
$\delta\in[0,\delta_0)$
by defining
$\eta_{\rm far}[\eta_{\rm near},\mu,\delta=0]=0$. Then, by \eqref{eta-far-bound}
$\eta_{\rm far}[\eta_{\rm near},\mu,\delta]$ is continuous at $\delta=0$.
\end{enumerate}
\end{proposition}
\begin{remark}[Remarks on the proof of Proposition \ref{fixed-pt}]\label{frak-e-remark}
The proof follows that of Corollary 6.4 in \cites{FLW-MAMS:15}, with changes that we now discuss.
\begin{enumerate}
\item[(a)] The fixed point equation \eqref{fixed-pt1} for $\eta_{\rm far}$ is of the form:
\begin{align}
\eta_{\rm far}\ &=\ \mathcal{Q}_\delta \eta_{\rm far}\ + \ \frac{\delta\ \chi(\abs{\lambda}\geq(\delta_{b,-}+\delta_{b,+})\delta^{\nu})}{E_b(\lambda)-E_{\star}} \ \times \nonumber \\
&\qquad\qquad \left( - \inner{\Phi_{b}(\cdot,\lambda),\kappa(\delta {\bm{\mathfrak{K}}}_2 \cdot)W(\cdot)\ \eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2}
+\widetilde{F}_{b}[\mu,\delta](\lambda) \right) ,\label{fixed-pt2}
\end{align}
where $\mathcal{Q}_\delta$ is bounded and linear on $H_{\kpar=\bK\cdot\vtilde_1}^2$ and defined by:
\begin{align}
\widetilde{\left[\ \mathcal{Q}_\delta \phi\ \right]}_b(\lambda)\ &\equiv\ -\delta\
\frac{ \chi(\abs{\lambda}\geq(\delta_{b,b_{\star}}+\delta_{b,b_{\star}+1})\delta^{\nu})}{E_b(\lambda)-E_{\star}}
\inner{\Phi_{b}(\cdot,\lambda),\kappa(\delta {\bm{\mathfrak{K}}}_2 \cdot)W(\cdot)\ \phi
(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2}\nonumber\\
&\qquad + \delta^2\mu\ \frac{\widetilde{\phi}_b(\lambda)}{{E_b(\lambda)-E_{\star}}}\ .
\label{tQ-def} \end{align}
To construct the mapping $(\eta_{\rm near},\mu,\delta)\mapsto\eta_{\rm far}[\eta_{\rm near},\mu,\delta]$ and obtain the conclusions of Proposition \ref{fixed-pt} it is convenient to solve \eqref{fixed-pt2} via the contraction mapping principle.
Thus we need to bound the operator norm of $\mathcal{Q}_\delta$ and we find from \eqref{tQ-def}
that $\mathcal{Q}_\delta$ maps $L^2_{{\rm far},\delta^\nu}$ to $H^2_{{\rm far},\delta^\nu}$
with norm bounded by $ {\rm constant}\times\mathfrak{e}(\delta)$, where
\begin{align}
\mathfrak{e}(\delta)\equiv \sup_{b=\pm}\ \ \sup_{|\delta|^\nu\le|\lambda|\le\frac12}\
\frac{|\delta|}{|E_b(\lambda)-E_{\star}|}\ +\ \sup_{ b\ge1,\ b\ne\pm}\ (\ 1+|b|\ ) \sup_{0\le|\lambda|\le\frac12}
\frac{|\delta|}{|E_b(\lambda)-E_{\star}|} .
\label{frak-e-def} \end{align}
The spectral no-fold condition hypothesis \eqref{no-fold-over-A}-\eqref{no-fold-over-b-A}
%
implies that
\begin{equation}
\mathfrak{e}(\delta) \lesssim\ \frac{|\delta|}{\omega(\delta^{\nu})\ c_1(V)}\ +\ \frac{|\delta|}{ c_2(V)} \ ,
\label{frak-e-bound}
\end{equation}
which tends to zero as $\delta$ tends to zero, since $\omega(\delta^\nu)/\delta\to\infty$ by the defining property of the modulus $\omega$.
Hence, the contraction mapping principle can be applied on the ball
$B_{{\rm far},\delta^\nu}(\rho_\delta),\ \rho_\delta=\mathcal{O}({\delta^\frac12}/{\omega(\delta^\nu)})$.
%
%
\item[(b)] We note that although $\Sigma$ is a two-dimensional region, since $\Sigma$ is unbounded in only one direction, estimates on $H^2_{k_{\parallel}}(\Sigma)$ have the same scaling behavior in the parameter $\delta$ as in the 1D study \cites{FLW-MAMS:15}.
\end{enumerate}
\end{remark}
\subsection{Analysis of the closed system for $\eta_{\rm near}$\label{subsec:near_freq}}
Substitution of $\eta_{\text{far}}[\eta_{\text{near}},\mu,\delta]$ into the system \eqref{near_cpt_1}-\eqref{near_cpt_2} yields a closed system for $(\eta_{\text{near}},\mu)$, which depends on the parameter $\delta\in[0,\delta_0)$.
In this section we show, by careful rescaling and expansion of terms, that the equation for $\eta_{\rm near}$ may be rewritten as a Dirac-type system. We then solve this system in Section \ref{analysis-blDirac}.
Recall the abbreviated notation: $E_b(\lambda)$ and $\Phi_b({\bf x};\lambda)$, introduced in \eqref{Eb_of_lambda}-\eqref{Phib_of_lambda}.
Since both the spectral support of $\eta_{\text{near}}$ (parametrized by ${\bf K}+\lambda{\bm{\mathfrak{K}}}_2$, with $|\lambda|\le\delta^\nu$) and the size of the domain wall perturbation, $\mathcal{O}(\delta)$, tend to zero as $\delta\to0$, it is natural to rescale so as to obtain an order-one limit.
We begin by introducing $\xi$, a scaling of the quasi-momentum parameter, $\lambda$,
and $\widehat{\eta}_{\pm,\rm near}\left(\xi\right)$, an expression for
$\widetilde{\eta}_{\pm,\rm near}(\lambda)$ as a standard Fourier transform on $\mathbb{R}$:
\begin{equation}
\label{amplitude-rescaled}
\widehat{\eta}_{\pm,\rm near}\left(\xi\right)\ \equiv\ \widetilde{\eta}_{\pm,\rm near}(\lambda),\quad {\rm where}\quad \xi\equiv \frac{\lambda}{\delta}.
\end{equation}
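Note that the rescaling \eqref{amplitude-rescaled} redistributes $L^2$ mass: since $d\lambda=\delta\, d\xi$,
\begin{equation*}
\int_{|\lambda|\le\delta^\nu}\left|\widetilde{\eta}_{\pm,\rm near}(\lambda)\right|^2 d\lambda\ =\ \delta\int_{|\xi|\le\delta^{\nu-1}}\left|\widehat{\eta}_{\pm,\rm near}(\xi)\right|^2 d\xi\ ,
\end{equation*}
so amplitudes of order one in the $\xi$ variable correspond to $L^2(d\lambda)$ norms of order $\delta^{1/2}$.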
\noindent By Proposition \ref{directional-bloch}:
$E_{\pm}(\lambda)-E_{\star} =
\pm\abs{\lambda_\sharp}\ \abs{{\bm{\mathfrak{K}}}_2}\ \delta\xi + E_{2,\pm}(\delta\xi)\ (\delta\xi)^2$,
where $\abs{E_{2,\pm}(\delta\xi)} \lesssim 1$, for all $\xi$; see \eqref{EpmKlam}.
Substitution of this expansion and the rescaling \eqref{amplitude-rescaled} into \eqref{near_cpt_1}-\eqref{near_cpt_2}, and then canceling a factor of $\delta$ yields:
{\small
\begin{align}
&+|{\lambda_{\sharp}}|\abs{ {\bm{\mathfrak{K}}}_2}\ \xi\ \widehat{\eta}_{+,\rm near}(\xi)
+\chi(\abs{\xi}\leq\delta^{\nu-1})\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta {\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \nonumber\\
&\qquad=\chi(\abs{\xi}\leq\delta^{\nu-1}) \widetilde{F}_{+}[\mu,\delta](\delta\xi) + \delta\mu\ \widehat{\eta}_{+,{\rm
near}}(\xi) - \delta E_{2,+}(\delta\xi)\xi^2\widehat{\eta}_{+,\rm near}(\xi) \label{near5} \\
&\qquad\qquad - \chi(\abs{\xi}\leq\delta^{\nu-1})\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta {\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\eta_{\rm far}[\eta_{\rm near},\mu,\delta](\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} ,\nonumber\\
&\nonumber\\
&-|{\lambda_{\sharp}}|\abs{{\bm{\mathfrak{K}}}_2}\ \xi\ \widehat{\eta}_{-,\rm near}(\xi)
+\chi(\abs{\xi}\leq\delta^{\nu-1})\inner{\Phi_{-}(\cdot,\delta\xi),\kappa(\delta {\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \nonumber\\
&\qquad=\chi(\abs{\xi}\leq\delta^{\nu-1}) \widetilde{F}_{-}[\mu,\delta](\delta\xi) + \delta\mu\ \widehat{\eta}_{-,{\rm
near}}(\xi) -\delta E_{2,-}(\delta\xi)\xi^2\widehat{\eta}_{-,\rm near }(\xi) \label{near6} \\
&\qquad\qquad - \chi(\abs{\xi}\leq\delta^{\nu-1})\inner{\Phi_{-}(\cdot,\delta\xi),\kappa(\delta {\bm{\mathfrak{K}}}_2
\cdot)W(\cdot)\eta_{\rm far}[\eta_{\rm near},\mu,\delta](\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \nonumber .
\end{align}}
We next extract the dominant behavior, for $\delta$ small, of the inner products involving $\eta_{\rm near}$ by first expanding $\eta_{\rm near}$ in terms of its spectral components near energy $E_\star=E_\pm(\lambda=0)$ plus a correction. To this end we apply Proposition \ref{directional-bloch} to expand $p_\pm({\bf x},\lambda)$ for $\lambda=\delta\xi$ small:
\begin{equation*}
p_{\pm}({\bf x},\lambda) = P_{\pm}({\bf x})\ +\ \varphi_\pm({\bf x},\delta\xi),\ \
P_{\pm}({\bf x}) \equiv \frac{1}{\sqrt{2}} \Big[\frac{\overline{\lambda_\sharp}}{|\lambda_\sharp|} \frac{\mathfrak{z}_2}{|\mathfrak{z}_2|} P_1({\bf x}) \pm P_2({\bf x}) \Big]\ \ {\rm where}
\end{equation*}
\begin{equation}
\label{deltapbdd}
\abs{\varphi_\pm({\bf x},\delta\xi)} \leq \underset{{\bf x}\in\Sigma, ~ \abs{\omega}\leq\delta^{\nu}}{\sup}
\abs{\varphi_\pm({\bf x},\omega)} \leq \delta^{\nu},\ \ \abs{\xi}\leq\delta^{\nu-1}\ .
\end{equation}
Thus, using \eqref{eta-near} and that $\Phi_\pm({\bf x};\lambda) = e^{i({\bf K}+\lambda {\bm{\mathfrak{K}}}_2)\cdot{\bf x}}\ p_\pm({\bf x};\lambda)$ (see \eqref{p_pm-def}), we obtain
{\small
\begin{align}
\eta_{\text{near}}({\bf x}) &= \int_{\abs{\lambda}\leq\delta^{\nu}}
\Phi_{+}({\bf x},\lambda)\widetilde{\eta}_{+,\text{near}}(\lambda) d\lambda
+ \int_{\abs{\lambda}\leq\delta^{\nu}}
\Phi_{-}({\bf x},\lambda)\widetilde{\eta}_{-,\text{near}}(\lambda)d\lambda \nonumber\\
&= \int_{\abs{\lambda}\leq\delta^{\nu}}e^{i{\bf K}\cdot{\bf x}}e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}}
p_{+}({\bf x},\lambda)\widehat{\eta}_{+,\text{near}}\left(\frac{\lambda}{\delta}\right)d\lambda \nonumber \\
&\quad+ \int_{\abs{\lambda}\leq\delta^{\nu}}e^{i{\bf K}\cdot{\bf x}}e^{i\lambda{\bm{\mathfrak{K}}}_2\cdot{\bf x}}
p_{-}({\bf x},\lambda)\widehat{\eta}_{-,\text{near}}\left(\frac{\lambda}{\delta}\right)d\lambda \nonumber\\
&= \delta e^{i{\bf K}\cdot{\bf x}} P_+({\bf x}) \int_{\abs{\xi}\leq\delta^{\nu-1}}
e^{i\delta\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\widehat{\eta}_{+,\text{near}}(\xi)d\xi
+ \delta e^{i{\bf K}\cdot{\bf x}} \rho_{+}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x})\nonumber\\
&\quad+ \delta e^{i{\bf K}\cdot{\bf x}} P_-({\bf x}) \int_{\abs{\xi}\leq\delta^{\nu-1}}
e^{i\delta\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\widehat{\eta}_{-,\text{near}}(\xi)d\xi
+ \delta e^{i{\bf K}\cdot{\bf x}} \rho_{-}({\bf x},\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x})\nonumber\\
&=\delta e^{i{\bf K}\cdot{\bf x}}\left[ P_+({\bf x})\ \eta_{+,\text{near}}(\delta {\bm{\mathfrak{K}}}_2\cdot {\bf x})
+ P_-({\bf x})\ \eta_{-,\text{near}}(\delta {\bm{\mathfrak{K}}}_2\cdot {\bf x}) + \sum_{b=\pm}\rho_{b}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x})\right] ,\label{nearinxi}\\
\label{rhodefn}
&{\rm where}\qquad\ \rho_{\pm}({\bf x},\zeta) = \int_{\abs{\xi}\leq\delta^{\nu-1}}e^{i\xi \zeta} \varphi_\pm({\bf x},\delta\xi)
\widehat{\eta}_{\pm,\text{near}}(\xi)d\xi.
\end{align}
}
\noindent We now expand the inner product in \eqref{near5}; the corresponding term in \eqref{near6} is treated similarly. Substituting
\eqref{nearinxi} into the inner product in \eqref{near5} yields (using $\Phi_\pm({\bf x};{\bf k})=e^{i{\bf k}\cdot{\bf x}}p_\pm({\bf x};{\bf k})$)
\begin{align}
&\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
\eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \nonumber \\
&\equiv\ \inner{\Phi_{+}(\cdot,{\bf K}+\delta\xi{\bm{\mathfrak{K}}}_2),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
\eta_{\rm near}(\cdot)}_{L_{\kpar=\bK\cdot\vtilde_1}^2}\label{full_inner} \\
&=\ \delta \inner{e^{i\delta\xi {\bm{\mathfrak{K}}}_2\cdot}p_+(\cdot,\delta\xi),
P_+(\cdot)\
W(\cdot)\ \kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)\ {\eta}_{+,\rm near}(\delta {\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \label{inner1}\\
&\qquad+ \delta \inner{e^{i\delta\xi {\bm{\mathfrak{K}}}_2\cdot}p_{+}(\cdot,\delta\xi),
P_-(\cdot)\
W(\cdot)\ \kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)\ {\eta}_{-,\rm near}(\delta {\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \label{inner2}\\
&\qquad+ \delta \sum_{b=\pm}\ \inner{e^{i\delta\xi {\bm{\mathfrak{K}}}_2\cdot}\ p_{+}(\cdot,\delta\xi),
W(\cdot)\ \kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)\ \rho_{b}(\cdot,\delta {\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \ .\label{inner3}
\end{align}
The inner product terms in \eqref{inner1}-\eqref{inner3} are each of the form:
\begin{equation}
\label{G_inner_def}
\mathcal{G}(\delta; \xi)
\equiv \delta \int_\Sigma e^{-i\xi\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}} g({\bf x},\delta\xi)\Gamma({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) d{\bf x},\ \ {\rm where}
\end{equation}
\begin{enumerate}
\item [(IP1)] $g({\bf x} ,y)$ is a smooth function of $({\bf x},y)\in \mathbb{R}^2/\Lambda_h\times\mathbb{R}$ and
\item [(IP2)] ${\bf x}\mapsto\Gamma({\bf x},\zeta)$ is $\Lambda_h$-periodic and $H^2(\Omega)$ with values in $L^2(\mathbb{R}_\zeta)$, {\it i.e.}
\begin{align}
\label{Gamma-conditions1}
&\Gamma({\bf x}+{\bf v},\zeta)\ =\ \Gamma({\bf x},\zeta), \quad \text{for all } {\bf v}\in\Lambda_h, \\
&\sum_{j=0}^2\ \sum_{{\bf |c}|=j}\int_\Omega\ \left\|\partial_{\bf x}^{\bf c}\Gamma({\bf x},\zeta)\right\|_{L^2(\mathbb{R}_\zeta)}^2\ d{\bf x}\ <\ \infty.
\label{Gamma-conditions2}
\end{align}
We denote this Hilbert space of functions by $\mathbb{H}^2$ with norm-squared, $\|\cdot\|_{\mathbb{H}^2}^2$, given in \eqref{Gamma-conditions2}.
It is easy to check that conditions \eqref{Gamma-conditions1}-\eqref{Gamma-conditions2} are satisfied for the cases $\Gamma=\Gamma(\zeta)=\kappa(\zeta)\eta_{\pm,{\rm near}}(\zeta)$ and $\Gamma=\Gamma({\bf x},\zeta)=\kappa(\zeta)\rho_{\pm}({\bf x},\zeta)$, where $\rho_\pm$ is defined in \eqref{rhodefn}.
\end{enumerate}
\noindent To expand expressions of the form $\mathcal{G}(\delta;\xi)$, we use:
\begin{lemma}
\label{poisson_exp}
Let $g({\bf x} ,y)$ and $\Gamma({\bf x},\zeta)$ satisfy conditions (IP1) and (IP2), respectively.
Denote by $\widehat{\Gamma}({\bf x},\omega)$ the Fourier transform of $\Gamma({\bf x},\zeta)$ with respect to the $\zeta-$ variable,
given by
\begin{equation}
\widehat{\Gamma}({\bf x},\omega)\ \equiv\ \lim_{N\uparrow\infty}\ \frac{1}{2\pi}\int_{|\zeta|\le N}e^{- i\omega \zeta}\Gamma({\bf x},\zeta) d\zeta ,
\label{Gammahat-def}
\end{equation}
where the limit is taken in $L^2(\Omega\times\mathbb{R}_\omega;d{\bf x} d\omega)$. Then,
\begin{equation}
\label{poisson_app}
\mathcal{G}(\delta; \xi) = \
\sum_{n\in\mathbb{Z}} \int_\Omega e^{in {\bm{\mathfrak{K}}}_2\cdot{\bf x}} \widehat{\Gamma}\left({\bf x},\frac{n}{\delta}+\xi\right) g({\bf x},\delta\xi) d{\bf x},
\end{equation}
with equality holding in $L^2_{\rm loc}([-\xi_{\rm max},\xi_{\rm max}];d\xi)$, for any fixed $\xi_{\rm max}>0$.
\end{lemma}
\noindent We adapt the proof of Lemma 6.5 of \cites{FLW-MAMS:15}, given there in the 1D setting. We require the following variant of the Poisson summation formula in $L^2_{\rm loc}$.
\begin{theorem}\label{psum-L2}
Let $\Gamma({\bf x},\zeta)$ satisfy (IP2). Denote by $\widehat{\Gamma}({\bf x},\omega)$ the Fourier transform of $\Gamma({\bf x},\zeta)$ with respect to the variable, $\zeta$; see \eqref{Gammahat-def}.
Fix an arbitrary $y_{\rm max}>0$, and introduce the parameterization of the cylinder $\Sigma$: ${\bf x}=\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, \ 0\le\tau_1\le1,\ \tau_2\in\mathbb{R}$. Then,
\begin{align*}
&\sum_{n\in\mathbb{Z}} e^{- iy(\tau_2+n)}\Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\tau_2+n)
= 2\pi\ \sum_{n\in\mathbb{Z}} e^{2\pi i n \tau_2}\widehat{\Gamma}\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,2\pi n+y \right)
\end{align*}
in $L^2\left([0,1]^2\times[-y_{\rm max},y_{\rm max}];d\tau_1d\tau_2\cdot dy\right)$.
\end{theorem}
The 1D analogue of Theorem \ref{psum-L2} was proved in Appendix A of \cites{FLW-MAMS:15}.
Since the proof is very similar, we omit it.
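The scalar identity underlying Theorem \ref{psum-L2} can be checked numerically. The following sketch (our own illustration; function names are ours) verifies it for a Gaussian, using the Fourier convention \eqref{Gammahat-def}, under which $\Gamma(\zeta)=e^{-\zeta^2/2}$ has transform $\widehat{\Gamma}(\omega)=e^{-\omega^2/2}/\sqrt{2\pi}$:

```python
import cmath, math

# Sanity check of the scalar (1D) Poisson-summation identity behind
# Theorem psum-L2, with the paper's convention
#   Gamma_hat(w) = (1/2pi) * int e^{-i w z} Gamma(z) dz.

def Gamma(z):
    return math.exp(-z * z / 2.0)

def Gamma_hat(w):
    return math.exp(-w * w / 2.0) / math.sqrt(2.0 * math.pi)

tau, y, Nmax = 0.3, 0.7, 60

# Left-hand side: sum_n e^{-i y (tau+n)} Gamma(tau+n)
lhs = sum(cmath.exp(-1j * y * (tau + n)) * Gamma(tau + n)
          for n in range(-Nmax, Nmax + 1))

# Right-hand side: 2*pi * sum_n e^{2*pi*i*n*tau} Gamma_hat(2*pi*n + y)
rhs = 2.0 * math.pi * sum(cmath.exp(2j * math.pi * n * tau)
                          * Gamma_hat(2.0 * math.pi * n + y)
                          for n in range(-Nmax, Nmax + 1))

print(abs(lhs - rhs))
```

Both sums converge rapidly because of the Gaussian decay on each side; the truncation at $|n|\le 60$ is far beyond machine precision.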
We also require
\begin{lemma}\label{interchange}
Let $F({\bf x},y)$ and $F_N({\bf x},y),\ N=1,2,\dots$, belong to $L^2(\Sigma\times[-y_{\rm max},y_{\rm max}];d{\bf x} dy)$. Assume that
\begin{equation*}
\left\| F_N-F \right\|_{L^2(\Sigma\times[-y_{\rm max},y_{\rm max}];d{\bf x} dy)}\ \to\ 0,\ \ {\rm as}\ \ N\to\infty\ .
\end{equation*}
Let $G\in L^2([-y_{\rm max}, y_{\rm max}];dy)$. Then, in the $L^2(\Sigma;d{\bf x})$ sense, we have:
\begin{align*}
\lim_{N\to\infty}\int^{y_{\rm max}}_{-y_{\rm max}}F_N({\bf x},y)G(y)\ dy\ &=\
\int^{y_{\rm max}}_{-y_{\rm max}}\lim_{N\to\infty}F_N({\bf x},y)G(y)\ dy\ \\
&=\ \int^{y_{\rm max}}_{-y_{\rm max}} F({\bf x},y)G(y)\ dy .
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{interchange}]
Square the difference, apply Cauchy-Schwarz and then integrate $d{\bf x}$ over $\Sigma$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{poisson_exp}]
Recall the parameterization of the cylinder, $\Sigma$:
\begin{align*}
{\bf x}\in\Sigma:\qquad &{\bf x} = \tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\ \ 0\le\tau_1\le1,\ \tau_2\in\mathbb{R} \ , \quad
{\bm{\mathfrak{K}}}_1\cdot{\bf x} = 2\pi\tau_1 , \ {\bm{\mathfrak{K}}}_2\cdot{\bf x} = 2\pi\tau_2 , \nonumber \\
&dx_1\ dx_2\ = \left|{\bm{\mathfrak{v}}}_1\wedge{\bm{\mathfrak{v}}}_2\right|\ d\tau_1\ d\tau_2\ \equiv\ |\Omega|\ d\tau_1\ d\tau_2 .
\end{align*}
Using that $g({\bf x}, \delta\xi) = g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, \delta \xi)$ and $\Gamma({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x}) = \Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta \tau_2)$ are both appropriately $1$-periodic, we expand $\mathcal{G}(\delta;\xi)$ defined in \eqref{G_inner_def}. By Lemma \ref{interchange}:
{\footnotesize{
\begin{align}
\mathcal{G}(\delta;\xi)\ &=
\delta\ |\Omega|\ \int_0^1d\tau_1\ \int_{-\infty}^{\infty} \
e^{-2\pi i\delta\xi\tau_2}g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)\ \Gamma\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta\tau_2\right)\ d\tau_2\nonumber\\
&= \delta\ |\Omega|\ \int_0^1d\tau_1\ \lim_{N\to\infty} \sum_{n=-N}^{N}\int_n^{n+1}\ e^{-2\pi i\delta\xi \tau_2}g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)\ \Gamma\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta \tau_2\right)\ d\tau_2\nonumber\\
&= \delta\ |\Omega|\ \int_0^1d\tau_1\ \lim_{N\to\infty} \sum_{n=-N}^{N} \int_0^1\ e^{-2\pi i\delta\xi (\tau_2+n)}g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)\ \Gamma\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta(\tau_2+n)\right)\ d\tau_2\nonumber\\
&= \delta\ |\Omega|\ \int_0^1d\tau_1\ \int_0^1\ g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)\
\Big[\ \sum_{n\in\mathbb{Z}}\ e^{-2\pi i\xi\delta(\tau_2+n)}
\ \ \Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta(\tau_2+n))\ \Big]\ d\tau_2 \ . \nonumber
\end{align}
}}
\noindent By Theorem \ref{psum-L2}, applied to $\zeta\mapsto\Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta\zeta)$ with $y=2\pi\delta \xi$, we have
\begin{align*}
\sum_{n\in\mathbb{Z}}\ e^{-2\pi i\delta\xi(\tau_2+n)}\ \Gamma(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, 2\pi\delta(\tau_2+n))
&=\ \frac{1}{\delta} \sum_{n\in\mathbb{Z}} e^{2\pi i n \tau_2} \widehat{\Gamma}\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2 {\bm{\mathfrak{v}}}_2, \frac{n}{\delta}+\xi\right) ,
\end{align*}
with equality holding in $L^2\left( [0,1]^2\times [-\xi_{\rm max}, \xi_{\rm max}]; d\tau_1 d\tau_2\cdot d\xi\right)$. Again using Lemma \ref{interchange} we may interchange the sum and integral to obtain (the prefactor $\delta$ canceling the factor $1/\delta$ above):
\begin{align}
\mathcal{G}(\delta;\xi) & =
|\Omega| \int_0^1d\tau_1 \int_0^1 g(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2,\delta\xi)
\sum_{n\in\mathbb{Z}} e^{2\pi i n \tau_2} \widehat{\Gamma}\left(\tau_1{\bm{\mathfrak{v}}}_1+\tau_2{\bm{\mathfrak{v}}}_2, \frac{n}{\delta}+\xi\right) d\tau_2 \nonumber\\
&= \sum_{n\in\mathbb{Z}} \int_\Omega e^{i n {\bm{\mathfrak{K}}}_2\cdot{\bf x}} \widehat{\Gamma}\left({\bf x}, \frac{n}{\delta}+\xi\right) g({\bf x},\delta\xi) d{\bf x} \ .
\end{align}
This completes the proof of Lemma \ref{poisson_exp}.
\end{proof}
We next apply Lemma \ref{poisson_exp} to each of the inner products \eqref{inner1}-\eqref{inner3}.
\noindent \underline{\it Expansion of inner product \eqref{inner1}:}
\noindent Let
$g({\bf x},\delta\xi)=\overline{p_+({\bf x},\delta\xi)}P_+({\bf x})W({\bf x})$
and $\Gamma({\bf x},\zeta)=\kappa(\zeta) \eta_{+,\rm near}(\zeta)$.\ By Lemma \ref{poisson_exp},
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi), P_+(\cdot)
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) {\eta}_{+,\rm near}(\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \\
&=
\sum_{n\in\mathbb{Z}}\int_\Omega e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\mathcal{F}_{\zeta}[\kappa\eta_{+,\rm near}]\left(\frac{n}{\delta}+\xi \right)
\overline{p_{+}({\bf x},\delta\xi)}P_+({\bf x}) W({\bf x})d{\bf x}.
\end{align*}
Since $p_\pm({\bf x},\lambda=\delta\xi)=P_\pm({\bf x})+\varphi_\pm({\bf x},\delta\xi)$, where $\varphi_\pm({\bf x},\delta\xi)$ satisfies the
bound \eqref{deltapbdd}, we have
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi), P_+(\cdot)
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) {\eta}_{+,\rm near}(\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \\
&\equiv I_+^1(\xi;\eta_{+,\rm near}) + I_+^2(\xi;\eta_{+,\rm near}),
\end{align*}
where
\begin{align}
I_+^1(\xi;\eta_{+,\rm near}) &=
\sum_{n\in\mathbb{Z}} \mathcal{F}_{\zeta}[\kappa\eta_{+,\rm near}]\left(\frac{n}{\delta}+\xi\right)
\int_\Omega e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \abs{P_{+}({\bf x})}^2 W({\bf x})d{\bf x} \label{I_1} , \\
I_+^2(\xi;\eta_{+,\rm near}) &= \sum_{n\in\mathbb{Z}}
\mathcal{F}_{\zeta}[\kappa\eta_{+,\rm near}]\left(\frac{n}{\delta}+\xi\right) \int_\Omega
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{\varphi_+({\bf x},\delta\xi)}P_{+}({\bf x}) W({\bf x})d{\bf x} . \nonumber
\end{align}
From Proposition \ref{inner-prods-W} and Assumption (W3) we have
\begin{equation}
\label{zeroth-terms}
\int_\Omega \abs{P_{+}({\bf x})}^2 W({\bf x})d{\bf x} = 0 \quad \text{and} \quad
\int_\Omega \overline{P_{+}({\bf x})}P_{-}({\bf x}) W({\bf x})d{\bf x} = {\vartheta_{\sharp}} \neq 0 .
\end{equation}
Therefore, the $n=0$ term in the summation of $I_+^1(\xi;\eta_{+,\rm near})$ in \eqref{I_1} is zero and we may write:
\begin{equation*}
I_+^1(\xi;\eta_{+,\rm near}) =\
\sum_{\abs{n}\geq1} \mathcal{F}_{\zeta}[\kappa\eta_{+,\rm near}]\left(\frac{n}{\delta}+\xi\right)
\int_\Omega e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \abs{P_{+}({\bf x})}^2 W({\bf x})d{\bf x} .
\end{equation*}
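The surviving $|n|\ge1$ terms are expected to enter at higher order in $\delta$. Heuristically (a sketch only, assuming $0<\nu<1$ and that $\kappa\eta_{+,\rm near}$ has two derivatives in $L^1(\mathbb{R})$):

```latex
% On the support |\xi| <= \delta^{\nu-1}, the frequency arguments are large:
\Big|\frac{n}{\delta}+\xi\Big| \;\ge\; \frac{|n|}{\delta}-\delta^{\nu-1}
  \;\ge\; \frac{|n|}{2\delta}, \qquad |n|\ge1,\ \ 0<\delta<2^{-1/\nu},
% so two integrations by parts in \zeta give
\Big|\,\mathcal{F}_{\zeta}[\kappa\eta_{+,\rm near}]\Big(\frac{n}{\delta}+\xi\Big)\Big|
  \;\lesssim\; \frac{\delta^2}{n^2}\,
  \big\|\partial_\zeta^2\big(\kappa\,\eta_{+,\rm near}\big)\big\|_{L^1(\mathbb{R})} .
```

Summing over $|n|\ge1$ then costs only a convergent factor $\sum_{n\ge1}n^{-2}$; the rigorous accounting of such terms is carried out in the operator bounds below.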
\noindent \underline{\it Expansion of the inner product \eqref{inner2}:}
\noindent Similarly, with $g({\bf x},\delta\xi)=\overline{p_+({\bf x},\delta\xi)}
P_-({\bf x})W({\bf x})$ and $\Gamma({\bf x},\zeta) = \kappa(\zeta) \eta_{-,\rm near}(\zeta)$, we have
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi), P_-(\cdot)
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) {\eta}_{-,\rm near}(\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \\
& \equiv \tilde{I_+^3}(\xi;\eta_{-,\rm near}) + I_+^4(\xi;\eta_{-,\rm near}),
\end{align*}
where (noting, by \eqref{zeroth-terms}, that the $n=0$ contribution is nonzero)
\begin{align*}
\tilde{I_+^3}(\xi;\eta_{-,\rm near}) &= {\vartheta_{\sharp}} \widehat{\kappa\eta}_{-,\rm near}(\xi) \ +\ {I_+^3}(\xi;\eta_{-,\rm near}),\ {\rm where}\nonumber\\
{I_+^3}(\xi;\eta_{-,\rm near}) &\equiv
\sum_{\abs{n}\geq1} \mathcal{F}_{\zeta}[\kappa\eta_{-,\rm near}]\left(\frac{n}{\delta}+\xi\right)
\int_\Omega e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{P_{+}({\bf x})} P_{-}({\bf x}) W({\bf x})d{\bf x} ,\ \ {\rm and} \\
I_+^4(\xi;\eta_{-,\rm near}) &= \sum_{n\in\mathbb{Z}}
\mathcal{F}_{\zeta}[\kappa\eta_{-,\rm near}]\left(\frac{n}{\delta}+\xi\right) \int_\Omega
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{\varphi_+({\bf x},\delta\xi)}P_{-}({\bf x}) W({\bf x})d{\bf x} .
\end{align*}
\noindent \underline{\it Expansion of inner products \eqref{inner3}:}
\noindent Consider the $b=+$ term in \eqref{inner3}.
Let $g({\bf x},\delta\xi)=\overline{p_+({\bf x},\delta\xi)}W({\bf x})$ and $\Gamma({\bf x},\zeta) = \kappa(\zeta) \rho_+({\bf x},\zeta)$.
By Lemma \ref{poisson_exp} and the expansion of $p_+({\bf x},\delta\xi)$ about $P_+({\bf x})$ in \eqref{deltapbdd} we have:
%
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi),
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) \rho_+(\cdot,\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \equiv
I_+^5(\xi;\eta_{+,\rm near}) + I_+^6(\xi;\eta_{+,\rm near}),
\end{align*}
where
\begin{align*}
I_+^5(\xi;\eta_{+,\rm near}) &=
\sum_{n\in\mathbb{Z}} \int_\Omega \mathcal{F}_{\zeta}[\kappa\rho_+]\left({\bf x},\frac{n}{\delta}+\xi\right)
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{P_{+}({\bf x})} W({\bf x})d{\bf x} , \\
I_+^6(\xi;\eta_{+,\rm near}) &= \sum_{n\in\mathbb{Z}} \int_\Omega
\mathcal{F}_{\zeta}[\kappa\rho_+]\left({\bf x},\frac{n}{\delta}+\xi\right)
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{\varphi_+({\bf x},\delta\xi)} W({\bf x})d{\bf x} .
\end{align*}
\noindent For the $b=-$ term in \eqref{inner3} we have
\begin{align*}
&\delta\inner{e^{i\xi\delta{\bm{\mathfrak{K}}}_2 \cdot}p_+(\cdot,\delta\xi),
W(\cdot)\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot) \rho_-(\cdot,\delta{\bm{\mathfrak{K}}}_2\cdot)}_{L^2(\Sigma)} \equiv
I_+^7(\xi;\eta_{-,\rm near}) + I_+^8(\xi;\eta_{-,\rm near}),
\end{align*}
where
\begin{align*}
I_+^7(\xi;\eta_{-,\rm near}) &=
\sum_{n\in\mathbb{Z}} \int_\Omega \mathcal{F}_{\zeta}[\kappa\rho_-]\left({\bf x},\frac{n}{\delta}+\xi\right)
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{P_{+}({\bf x})} W({\bf x})d{\bf x} , \\
I_+^8(\xi;\eta_{-,\rm near}) &= \sum_{n\in\mathbb{Z}} \int_\Omega
\mathcal{F}_{\zeta}[\kappa\rho_-]\left({\bf x},\frac{n}{\delta}+\xi\right)
e^{in{\bm{\mathfrak{K}}}_2\cdot{\bf x}} \overline{\varphi_+({\bf x},\delta\xi)} W({\bf x})d{\bf x} .
\end{align*}
Assembling the above expansions, we find that the full inner product, \eqref{full_inner}, may be expressed as:
\begin{equation}
\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
\eta_{\rm near}(\cdot)}_{L^2(\Sigma)} = {\vartheta_{\sharp}}\widehat{\kappa\eta}_{-,\rm near}(\xi) +
\sum_{j=1}^8I_{+}^j(\xi;\eta_{\rm near}) .\label{full-ipplus}
\end{equation}
A similar calculation yields:
\begin{equation}
\inner{\Phi_{-}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
\eta_{\rm near}(\cdot)}_{L^2(\Sigma)} = {\vartheta_{\sharp}}\widehat{\kappa\eta}_{+,\rm near}(\xi) +
\sum_{j=1}^8I_{-}^j(\xi;\eta_{\rm near}),\ \label{full-ipminus}
\end{equation}
where the terms $I_{-}^j(\xi;\eta_{\rm near})$ are defined analogously to $I_{+}^j(\xi;\eta_{\rm near})$. We now substitute our results \eqref{full-ipplus}-\eqref{full-ipminus}, \eqref{Fb-def}-\eqref{Fdef} and \eqref{eta_far_affine} into \eqref{near5}-\eqref{near6} to obtain the following:
\begin{proposition}
\label{near_freq_compact}
Let $\widehat{\beta}(\xi) = (\widehat{\eta}_{+,\rm near}(\xi) , \widehat{\eta}_{-,\rm near}(\xi))^T$.
Equations \eqref{near5}-\eqref{near6}, the closed system governing the near-energy components, $\eta_{\rm near}$, of the corrector $\eta$, take the form:
\begin{equation}
\label{compacterroreqn}
\left(\widehat{\mathcal{D}}^{\delta}+\widehat{\mathcal{L}}^{\delta}(\mu) -\delta \mu\right)\widehat{\beta}(\xi) =
\mu\widehat{\mathcal{M}}(\xi;\delta) + \widehat{\mathcal{N}}(\xi;\delta) .
\end{equation}
Here, $\widehat{\mathcal{D}}^{\delta}$ denotes the band-limited Dirac operator defined by
\begin{equation}
\widehat{\mathcal{D}}^{\delta}\widehat{\beta}(\xi)\ \equiv\ |{\lambda_{\sharp}}|\ |{\bm{\mathfrak{K}}}_2|\ \sigma_3\ \xi\ \widehat{\beta}(\xi)\
+\ {\vartheta_{\sharp}}\ \chi\left(\abs{\xi}\leq\delta^{\nu-1}\right)\ \sigma_1\ \widehat{\kappa\beta}(\xi).
\label{bl-dirac-op}
\end{equation}
The linear operator, $\widehat{\mathcal{L}}^{\delta}(\mu)$, acting on $\widehat{\beta}$, and the source terms $\widehat{\mathcal{M}}(\xi;\delta)$ and $\widehat{\mathcal{N}}(\xi;\delta)$ are defined by:
{\small
\begin{equation}
\label{L_op}
\widehat{\mathcal{L}}^{\delta}(\mu)\widehat{\beta}(\xi) \equiv
\chi\left(\abs{\xi}\leq\delta^{\nu-1}\right) \sum_{j=1}^3 \widehat{\mathcal{L}}^{\delta}_j(\mu) \widehat{\beta}(\xi) , \quad \text{where}
\end{equation}
\begin{align*}
\widehat{\mathcal{L}}^{\delta}_1(\mu) \widehat{\beta}(\xi) & \equiv \delta \xi^2
\begin{pmatrix}
E_{2,+}(\delta\xi)\ \widehat{\eta}_{+,\rm near}(\xi) \\E_{2,-}(\delta\xi)\ \widehat{\eta}_{-,\rm near}(\xi)
\end{pmatrix} , \quad
\widehat{\mathcal{L}}^{\delta}_2(\mu) \widehat{\beta}(\xi) \equiv
\sum_{j=1}^8
\begin{pmatrix}
I_{+}^j(\xi;\widehat{\eta}_{\pm,\rm near}(\xi))\\ I_{-}^j(\xi;\widehat{\eta}_{\pm,\rm near}(\xi))
\end{pmatrix} , \\
\widehat{\mathcal{L}}^{\delta}_3(\mu) \widehat{\beta}(\xi) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)
[A\eta_{\rm near}](\cdot;\mu,\delta)}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \\
\inner{\Phi_{-}(\cdot,\delta\xi),
\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)[A\eta_{\rm near}](\cdot;\mu,\delta)}_{L_{\kpar=\bK\cdot\vtilde_1}^2}
\end{pmatrix},
\end{align*}
\begin{equation}
\label{M_op}
\widehat{\mathcal{M}}(\xi;\delta) \equiv
\chi\left(\abs{\xi}\leq\delta^{\nu-1}\right) \sum_{j=1}^3\widehat{\mathcal{M}}_j(\xi;\delta) , \quad \text{where (inner products over ${L_{\kpar=\bK\cdot\vtilde_1}^2}$)}
\end{equation}
\begin{align*}
\widehat{\mathcal{M}}_1(\xi;\delta) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}(\cdot,\delta\xi),\psi^{(0)}(\cdot,\delta\cdot)} \\
\inner{\Phi_{-}(\cdot,\delta\xi),\psi^{(0)}(\cdot,\delta\cdot)}
\end{pmatrix} , \quad
\widehat{\mathcal{M}}_2(\xi;\delta) \equiv
\delta
\begin{pmatrix}
\inner{\Phi_{+}(\cdot,\delta\xi),\psi^{(1)}_p(\cdot,\delta\cdot)}\\
\inner{\Phi_{-}(\cdot,\delta\xi),\psi^{(1)}_p(\cdot,\delta\cdot)}
\end{pmatrix} , \nonumber \\
\widehat{\mathcal{M}}_3(\xi;\delta) & \equiv -
\begin{pmatrix}
\inner{\Phi_{+}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)B(\cdot;\delta)}\\
\inner{\Phi_{-}(\cdot,\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot)W(\cdot)B(\cdot;\delta)}
\end{pmatrix}, \nonumber
\end{align*}
\begin{equation}
\label{N_op}
\widehat{\mathcal{N}}(\xi;\delta) \equiv
\chi\left(\abs{\xi}\leq\delta^{\nu-1}\right) \sum_{j=1}^4\widehat{\mathcal{N}}_j(\xi;\delta) , \quad \text{where (inner products over ${L_{\kpar=\bK\cdot\vtilde_1}^2}$)}
\end{equation}
\begin{align*}
\widehat{\mathcal{N}}_1(\xi;\delta) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi), \left(2{\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\ \partial_\zeta-\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})\right)\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})} \\
\inner{\Phi_{-}({\bf x},\delta\xi), \left(2{\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\ \partial_\zeta-\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})\right)\psi^{(1)}_p({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}
\end{pmatrix} , \\
\widehat{\mathcal{N}}_2(\xi;\delta) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi),|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi^{(0)}({\bf x},\zeta)\Big|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x}}}\\
\inner{\Phi_{-}({\bf x},\delta\xi),|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi^{(0)}({\bf x},\zeta)\Big|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot {\bf x}}}
\end{pmatrix}, \\
\widehat{\mathcal{N}}_3(\xi;\delta) & \equiv
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi),|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi_p^{(1)}({\bf x},\zeta)\Big|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot{\bf x}}} \\
\inner{\Phi_{-}({\bf x},\delta\xi),|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi_p^{(1)}({\bf x},\zeta)\Big|_{\zeta=\delta {\bm{\mathfrak{K}}}_2\cdot {\bf x}}}
\end{pmatrix} , \nonumber \\
\widehat{\mathcal{N}}_4(\xi;\delta) & \equiv -
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})C({\bf x};\delta)} \\
\inner{\Phi_{-}({\bf x},\delta\xi),\kappa(\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})W({\bf x})C({\bf x};\delta)}
\end{pmatrix}. \nonumber
\end{align*}
}
\end{proposition}
We conclude this section with the assertion that from an appropriate solution
$\left(\widehat{\beta}^\delta(\xi),\mu(\delta)\right)$
of the band-limited Dirac system \eqref{compacterroreqn} one can construct a bound state $\left(\Psi^\delta,E^\delta\right)$ of the Schr\"odinger eigenvalue
problem \eqref{EVP_2}.
We say $f\in L^{2,1}(\mathbb{R})$ if $\|f\|_{L^{2,1}(\mathbb{R})}^2\ \equiv\ \int_{\mathbb{R}} (1+|\xi|^2)\,\abs{f(\xi)}^2\,d\xi<\infty$.
\begin{proposition}\label{needtoshow}
Suppose, for $0<\delta<\delta_0$, the band-limited Dirac system \eqref{compacterroreqn} has a solution
$\left(\widehat{\beta}^\delta(\xi),\mu(\delta)\right)$, $\widehat{\beta}^\delta=(\widehat{\beta}^\delta_+,\widehat{\beta}_-^\delta)^T$, where $\text{supp}\widehat{\beta}^\delta\subset \{|\xi|\le\delta^{\nu-1}\}$, satisfying:
\begin{align*}
&\norm{\widehat{\beta}^\delta(\cdot;\mu,\delta)}_{L^{2,1}(\mathbb{R})} \lesssim \delta^{-1},\ 0<\delta<\delta_0
\quad (\textrm{to be verified in Proposition~\ref{solve4beta}}), \\
&\mu(\delta) \text{ bounded and } \mu(\delta) - \mu_0 \to 0 \text{ as } \delta\to0 \quad
(\textrm{to be verified in Proposition~\ref{proposition3}}).
\end{align*}
Define
\begin{equation}
\widehat{\eta}^{\delta}_{\rm near,+}(\xi) = \widehat{\beta}^\delta_+(\xi),\ \ \ \widehat{\eta}^{\delta}_{\rm near,-}(\xi)=\widehat{\beta}^\delta_-(\xi) ,
\label{hat-eta}\end{equation}
and construct $\eta^\delta\equiv \eta^\delta_{\rm near} + \eta^\delta_{\rm far}$ as follows:
\begin{align}
\eta^\delta_{\rm near}({\bf x})&= \sum_{b=\pm}\int_{|\lambda|\le \delta^\nu} \widehat{\eta}^\delta_{{\rm
near},b}\left(\frac{\lambda}{\delta}\right) \Phi_b({\bf x};\lambda) d\lambda , \label{eta_def_beta1} \\
\widetilde{\eta}^\delta_{{\rm far},b}(\lambda)&=
\widetilde{\eta}_{\rm far,b}[\eta_{\rm near},\mu,\delta](\lambda),\ \ b\ge1 ;
\quad \textrm{(see Proposition \ref{fixed-pt})} , \nonumber \\
\eta^\delta_{\rm far}({\bf x})&= \sum_{b=\pm} \int_{\delta^\nu \le |\lambda |\le 1/2 }
\widetilde{\eta}^\delta_{{\rm far},b}\left(\lambda\right) \Phi_b({\bf x};\lambda) d\lambda
+ \sum_{b\ne\pm} \int_{|\lambda| \leq 1/2}
\widetilde{\eta}^\delta_{{\rm far},b}\left(\lambda\right) \Phi_b({\bf x};\lambda) d\lambda . \nonumber \\
\eta^\delta({\bf x})&\equiv \eta^\delta_{\rm near}({\bf x}) + \eta^\delta_{\rm far}({\bf x}),\ \
E^\delta\equiv E_\star+\delta^2\mu(\delta),\ \
0<\delta<\delta_0. \nonumber
\end{align}
Then, for all $0<\delta<\delta_0$, the following holds:
\begin{enumerate}
\item[(a)] $ \eta^\delta({\bf x})\in H_{\kpar=\bK\cdot\vtilde_1}^{2}(\Sigma)$.
\item[(b)] $\left(\eta^\delta,\mu(\delta)\right)$ solves the corrector equation \eqref{corrector-eqn1}.
\item[(c)] Theorem \ref{thm-edgestate} holds. The pair $(\Psi^\delta,E^\delta)$, defined by (see also
\eqref{eta-def}-\eqref{mu-def})
\begin{equation}
\label{main_result_ansatz1}
\begin{split}
\Psi^\delta({\bf x}) &= \psi^{(0)}({\bf x},{\bf X})+\delta\psi^{(1)}_p({\bf x},{\bf X})+\delta\eta^\delta({\bf x}),\ \ {\bf X}=\delta{\bm{\mathfrak{K}}}_2\cdot {\bf x},\\
E^\delta &= E_\star+\delta^2 \mu_0 +o(\delta^2) ,
\end{split}
\end{equation}
is a solution of the eigenvalue problem \eqref{EVP_2} with corrector estimates asserted in the statement of Theorem \ref{thm-edgestate}.
\end{enumerate}
\end{proposition}
To prove Proposition \ref{needtoshow} we use the following lemma.
\begin{lemma}
\label{beta_vs_eta}
There exists a $\delta_0>0$ such that, for all $0<\delta<\delta_0$, the following holds:
Assume $\beta \in L^2(\mathbb{R})$ and let $\eta^\delta_{\rm near}({\bf x})$ be defined by \eqref{hat-eta}-\eqref{eta_def_beta1}.
Then,
\begin{equation*}
\norm{\eta^\delta_{\rm near}}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \lesssim \delta^{1/2} \norm{\beta}_{L^2(\mathbb{R})}.
\end{equation*}
\end{lemma}
\noindent The proof of Lemma \ref{beta_vs_eta} parallels that of Lemma 6.9 in \cites{FLW-MAMS:15}, and is not reproduced here.
\begin{proof}[Proof of Proposition \ref{needtoshow}]
From $\widehat{\beta}$ we construct
$\eta^\delta_{\rm near}$, such that:
$ \norm{\eta^\delta_{\rm near}}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \lesssim \delta^{1/2} \norm{\beta}_{L^2(\mathbb{R})}$ (Lemma
\ref{beta_vs_eta}).
Next, part 2 of Proposition \ref{fixed-pt}, \eqref{eta-far-bound}, gives a bound on
$\eta_{\rm far}$:
$\norm{\eta_{\rm far}[\eta_{\rm near};\mu,\delta]}_{H_{\kpar=\bK\cdot\vtilde_1}^2} \le\ C''\left(\
\frac{\delta}{\omega(\delta^\nu)}\norm{\eta_{\rm near}}_{L_{\kpar=\bK\cdot\vtilde_1}^2}+\frac{\delta^\frac12}{\omega(\delta^\nu)} \right)$.
These two bounds give the desired $H_{\kpar=\bK\cdot\vtilde_1}^2(\Sigma)$ bound on $\eta^{\delta}$.
Note that all steps in our derivation of the band-limited Dirac
system \eqref{compacterroreqn} are reversible, in particular our application of the Poisson summation formula in
$L^2_{\rm loc}$. Therefore, $(\Psi^\delta,E^\delta)$, given by \eqref{main_result_ansatz1}, is an $H_{\kpar=\bK\cdot\vtilde_1}^2$ eigenpair of \eqref{EVP_2}.
\end{proof}
\noindent We focus then on constructing and estimating the solution of the band-limited Dirac system \eqref{compacterroreqn}.
\subsection{Analysis of the band-limited Dirac system}\label{analysis-blDirac}
The formal $\delta\downarrow0$ limit of the band-limited operator $\widehat{\mathcal{D}}^\delta$, displayed in \eqref{bl-dirac-op}, is the Fourier representation, $\widehat{\mathcal{D}}$, of a 1D Dirac operator $\mathcal{D}$, defined via:
\begin{equation}
\label{diraclimit}
\widehat{\mathcal{D}}\ \widehat{\beta}(\xi)\ \equiv\ |{\lambda_{\sharp}}|\ |{\bm{\mathfrak{K}}}_2|\ \sigma_3\ \xi \widehat{\beta}(\xi)\ +\ {\vartheta_{\sharp}}\ \sigma_1\ \widehat{\kappa\beta}(\xi).
\end{equation}
Our goal is to solve the system \eqref{compacterroreqn}.
We therefore rewrite the linear operator in equation \eqref{compacterroreqn} as a perturbation of $\widehat{\mathcal{D}}$ \eqref{diraclimit}, and seek $\widehat{\beta}$ as a solution to:
\begin{equation}
\label{erroreqnfactored}
\widehat{\mathcal{D}}\widehat{\beta}(\xi) + \left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}} +
\widehat{\mathcal{L}}^{\delta}(\mu) -\delta \mu\right)\widehat{\beta}(\xi) =
\mu\widehat{\mathcal{M}}(\xi;\delta)
+ \widehat{\mathcal{N}}(\xi;\delta).
\end{equation}
We next solve \eqref{erroreqnfactored} using a Lyapunov-Schmidt reduction strategy.
By Proposition \ref{zero-mode}, the null space of $\widehat{\mathcal{D}}$ is spanned by $\widehat{\alpha}_{\star}(\xi)$, the Fourier transform of the zero energy eigenstate \eqref{Fstar}. Since $\alpha_{\star}(\zeta)$ is Schwartz class, so too is $\widehat{\alpha}_{\star}(\xi)$; in particular, $\widehat{\alpha}_{\star}\in H^s(\mathbb{R})$ for every $s\ge1$.
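For orientation, the zero mode has the classical Jackiw--Rebbi form. As a sketch (assuming, for simplicity of exposition, that $\vartheta_\sharp>0$; the precise normalization and phase are those of \eqref{Fstar}, which we do not reproduce), pass to physical space, where multiplication by $\xi$ corresponds to $\frac{1}{i}\partial_\zeta$:

```latex
% Physical-space form of the limiting Dirac operator, with A = |\lambda_\sharp||K_2|:
\mathcal{D} \;=\; A\,\sigma_3\,\frac{1}{i}\partial_\zeta \;+\; \vartheta_\sharp\,\sigma_1\,\kappa(\zeta),
\qquad A\equiv|\lambda_\sharp|\,\abs{{\bm{\mathfrak{K}}}_2} .
% The equation \mathcal{D}\alpha = 0 is equivalent to
\partial_\zeta\alpha \;=\; \frac{\vartheta_\sharp}{A}\,\kappa(\zeta)\,\sigma_2\,\alpha ,
% and choosing the \sigma_2-eigenvector (1,-i)^T (eigenvalue -1) yields the decaying branch
\alpha_\star(\zeta) \;\propto\; \exp\!\Big(-\frac{\vartheta_\sharp}{A}\int_0^\zeta \kappa(s)\,ds\Big)
\begin{pmatrix} 1 \\ -i \end{pmatrix},
% localized at the domain wall: \int_0^\zeta \kappa(s)ds \to +\infty as \zeta\to\pm\infty.
```

The exponential localization visible here is the source of the Schwartz-class regularity of $\alpha_\star$ invoked above.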
For any $f\in L^2{(\mathbb{R})}$, introduce the orthogonal projection operators,
\begin{equation*}
\widehat{P}_{\parallel}f =
\inner{\widehat{\alpha}_{\star},f}_{L^2(\mathbb{R})}\widehat{\alpha}_{\star},~~~\text{and}~~~\widehat{P}_{\perp}f =
(I-\widehat{P}_{\parallel})f.
\end{equation*}
Since $\widehat{P}_{\parallel}\widehat{\mathcal{D}}\widehat{\beta}(\xi)=0$ and
$\widehat{P}_{\perp}\widehat{\mathcal{D}}\widehat{\beta}(\xi)=\widehat{\mathcal{D}}\widehat{\beta}(\xi)$,
equation \eqref{erroreqnfactored} is equivalent to the system
\begin{align}
&\widehat{P}_{\parallel}\left\{\left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}} +
\widehat{\mathcal{L}}^{\delta}(\mu) -\delta
\mu\right)\widehat{\beta}(\xi) - \mu\widehat{\mathcal{M}}(\xi;\delta) -
\widehat{\mathcal{N}}(\xi;\delta)\right\} = 0, \label{pplleqn}\\
&\widehat{\mathcal{D}}\widehat{\beta}(\xi) +
\widehat{P}_{\perp}\left\{\left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}}
+ \widehat{\mathcal{L}}^{\delta}(\mu) -\delta \mu\right)\widehat{\beta}(\xi)\right\} =
\widehat{P}_{\perp}\left\{\mu\widehat{\mathcal{M}}(\xi;\delta) +
\widehat{\mathcal{N}}(\xi;\delta)\right\}.
\label{pperpeqn}
\end{align}
Our strategy is first to solve \eqref{pperpeqn} for $\widehat{\beta}=\widehat{\beta}[\mu,\delta]$, for $\delta>0$ sufficiently small.
We then substitute $\widehat{\beta}[\mu,\delta]$ into \eqref{pplleqn} to obtain a closed scalar equation, which is then solved
for $\mu=\mu(\delta)$ for $\delta$ small.
The first step in this strategy is accomplished in the following proposition.
\begin{proposition}\label{solve4beta}
Fix $M>0$. There exists $\delta_0>0$ and a mapping
$(\mu,\delta)\in R_{M,\delta_0}\equiv \{|\mu|<M\}\times (0,\delta_0)\mapsto \widehat{\beta}(\cdot;\mu,\delta)\in
L^{2,1}(\mathbb{R})$
which is Lipschitz in $\mu$, such that $\widehat{\beta}(\cdot;\mu,\delta)$ solves
\eqref{pperpeqn} for $(\mu,\delta)\in R_{M,\delta_0}$. Furthermore, we have the bound
\begin{equation*}
\norm{\widehat{\beta}(\cdot;\mu,\delta)}_{L^{2,1}(\mathbb{R})} \lesssim \delta^{-1},\ 0<\delta<\delta_0.
\end{equation*}
\end{proposition}
The details of the proof of Proposition \ref{solve4beta} are similar to those in the proof of
Proposition 6.10 in \cites{FLW-MAMS:15}; equation \eqref{pperpeqn} is expressed as $(I+C^\delta(\mu))\widehat{\beta}(\xi;\mu,\delta) = \widehat{\mathcal{D}}^{-1} \widehat{P}_{\perp}\left\{\mu\widehat{\mathcal{M}}(\xi;\delta) +
\widehat{\mathcal{N}}(\xi;\delta)\right\}$, and the operator $C^\delta(\mu)$ is proved to be bounded on $L^{2,1}(\mathbb{R})$ and of norm less than one for all $0<\delta<\delta_0$, with $\delta_0$ sufficiently small. In bounding $C^\delta(\mu)$ on $L^{2,1}(\mathbb{R})$, we require $H^1(\mathbb{R})$ bounds for wave operators associated with the Dirac operator, $\mathcal{D}$. These can be derived from corresponding results for scalar Schr\"odinger operators, under the assumptions implied by $\kappa(\zeta)$ being a domain wall function in the sense of Definition \ref{domain-wall-defn}.
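Granting that $\norm{C^\delta(\mu)}_{L^{2,1}(\mathbb{R})\to L^{2,1}(\mathbb{R})}<1$, the solution of \eqref{pperpeqn} is then given by the convergent Neumann series:
\begin{equation*}
\widehat{\beta}(\xi;\mu,\delta)\ =\ \left(I+C^\delta(\mu)\right)^{-1}\widehat{\mathcal{D}}^{-1}\widehat{P}_{\perp}\left\{\mu\widehat{\mathcal{M}}(\xi;\delta)+\widehat{\mathcal{N}}(\xi;\delta)\right\},\qquad
\left(I+C^\delta(\mu)\right)^{-1}=\sum_{k\ge0}\left(-C^\delta(\mu)\right)^k\ ,
\end{equation*}
with $\norm{\left(I+C^\delta(\mu)\right)^{-1}}\le\left(1-\norm{C^\delta(\mu)}\right)^{-1}$.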
\subsection{Final reduction to an equation for $\mu=\mu(\delta)$ and its solution}\label{final-reduction}
Substituting the solution $\widehat{\beta}(\xi;\mu,\delta)$ of Proposition \ref{solve4beta} into \eqref{pplleqn} yields the equation $\mathcal{J}_+[\mu,\delta]=0$, relating $\mu$ and $\delta$.
Here, $\mathcal{J}_{+}[\mu,\delta]$ is given by:
\begin{align*}
\mathcal{J}_{+}[\mu,\delta] &\equiv
\mu\ \delta\inner{\widehat{\alpha}_{\star}(\cdot),\widehat{\mathcal{M}}(\cdot;\delta)}_{L^2(\mathbb{R})}
+ \delta\inner{\widehat{\alpha}_{\star}(\cdot),\widehat{\mathcal{N}}(\cdot;\delta)}_{L^2(\mathbb{R})} \\
&~~~ -\delta\inner{\widehat{\alpha}_{\star}(\cdot),\left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}}\right)
\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}
-\delta\inner{\widehat{\alpha}_{\star}(\cdot),\widehat{\mathcal{L}}^{\delta}(\mu)
\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}\nonumber\\
&~~~+\delta^2 \mu\inner{\widehat{\alpha}_{\star}(\cdot),\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}. \nonumber
\end{align*}
The mapping $(\mu,\delta)\in\{|\mu|<M,\ \delta\in(0,\delta_0)\}\mapsto\mathcal{J}_{+}[\mu,\delta]$ is well defined and Lipschitz continuous with respect to $\mu$. In the following proposition, we note that $\mathcal{J}_{+}[\mu,\delta]$ extends,
as a function of $\delta$, continuously to the half-open interval $[0,\delta_0)$.
\begin{proposition}
\label{proposition3}
Let $\delta_0>0$ be as above. Define
\begin{equation*}
\mathcal{J}[\mu,\delta] \equiv \left\{
\begin{array}{cl}
\mathcal{J}_{+}[\mu,\delta] & ~~~\text{for}~~ 0<\delta<\delta_0,\\
\mu-\mu_0 & ~~~\text{for}~~ \delta=0\ ,
\end{array} \right.\
\end{equation*}
where
$ \mu_0 \equiv -\inner{\alpha_{\star},\mathcal{G}^{(2)}}_{L^2(\mathbb{R})} = E^{(2)}$,
and $\mathcal{G}^{(2)}$ is given in \eqref{ipG}; see also \eqref{G2def} and \eqref{solvability_cond_E2}.
Fix $M = \max\{2\abs{\mu_0},1\}$.
Then, $(\mu,\delta)\in\{|\mu|<M,\ 0\le\delta<\delta_0\}\mapsto\mathcal{J}(\mu,\delta)$
is well-defined and continuous.
\end{proposition}
{\it Proof:} The proof parallels that of Proposition 6.16 of \cites{FLW-MAMS:15}.
The key is to establish the following asymptotic relations, for all $0<\delta<\delta_0$ with $\delta_0$
sufficiently small:
\begin{align}
\lim_{\delta\rightarrow0}\delta\inner{\widehat{\alpha}_{\star}(\cdot),
\widehat{\mathcal{M}}(\cdot;\delta)}_{L^2(\mathbb{R})} &=1; \label{sketch_limit1} \\
\lim_{\delta\rightarrow0}\delta\inner{\widehat{\alpha}_{\star}(\cdot),
\widehat{\mathcal{N}}(\cdot;\delta)}_{L^2(\mathbb{R})} &= -\mu_0; \label{sketch_limit2}
\end{align}
and the following bounds hold for some constant $C_M$:
\begin{align}
\abs{\delta\inner{\widehat{\alpha}_{\star}(\cdot), \left(\widehat{\mathcal{D}}^{\delta}-\widehat{\mathcal{D}}\right)
\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}} &\le C_M \delta^{1-\nu}; \label{sketch_bound1} \\
\abs{\delta\inner{\widehat{\alpha}_{\star}(\cdot), \widehat{\mathcal{L}}^{\delta}(\mu)
\widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}} &\le C_M \delta^{\nu}; \label{sketch_bound2} \\
\abs{\delta^2 \mu\inner{\widehat{\alpha}_{\star}(\cdot), \widehat{\beta}(\cdot;\mu,\delta)}_{L^2(\mathbb{R})}}
&\le C_M \delta. \label{sketch_bound3}
\end{align}
The detailed verification of \eqref{sketch_limit1}-\eqref{sketch_bound3} follows
the approach taken in Appendix H of \cites{FLW-MAMS:15}. We make a few remarks on the calculations. Each of the expressions in \eqref{sketch_limit1}-\eqref{sketch_limit2}
consists of inner products of the form:
{\small
\begin{equation}
\mathfrak{J}(\delta) \equiv \delta \left\langle \widehat{\alpha}_{\star}(\xi) , \chi\left(\abs{\xi}\leq\delta^{\nu-1}\right)
\begin{pmatrix}
\inner{\Phi_{+}({\bf x},\delta\xi), J({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2} \\
\inner{\Phi_{-}({\bf x},\delta\xi ), J({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2}
\end{pmatrix} \right\rangle_{L^2(\mathbb{R}_\xi)} .\label{sample-ip}
\end{equation}}
Here, $J({\bf x},\zeta)=e^{i{\bf K}\cdot{\bf x}}\mathcal{K}({\bf x},\zeta)$, where
${\bf x}\mapsto\mathcal{K}({\bf x},\zeta)$ is $\Lambda_h-$ periodic and $\zeta\mapsto \mathcal{K}({\bf x},\zeta)$ is smooth and rapidly decaying on $\mathbb{R}$. Consider, for example, the inner product
$\inner{\Phi_{+}({\bf x},\delta\xi), J({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2}$. This may be rewritten and expanded using Lemma \ref{poisson_exp}:
\begin{align}
&\delta\ \int_\Sigma e^{-i\delta\xi{\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \overline{p_+({\bf x},\delta\xi)}\ \mathcal{K}({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})\ d{\bf x}\ =\ \sum_{n\in\mathbb{Z}} \int_\Omega e^{in {\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \overline{p_+({\bf x},\delta\xi)}\ \widehat{\mathcal{K}}\left({\bf x},\frac{n}{\delta}+\xi\right)\ d{\bf x}
\nonumber\\
&\ = \int_\Omega \overline{p_+({\bf x},\delta\xi)}\ \widehat{\mathcal{K}}\left({\bf x},\xi\right)\ d{\bf x}\ +\ \sum_{|n|\ge1} \int_\Omega e^{in {\bm{\mathfrak{K}}}_2\cdot{\bf x}}\ \overline{p_+({\bf x},\delta\xi )}\ \widehat{\mathcal{K}}\left({\bf x},\frac{n}{\delta}+\xi\right)\ d{\bf x} . \nonumber
\end{align}
Since in \eqref{sample-ip} $\xi$ is localized to the set where $|\xi|\le\delta^{\nu-1}$, $\nu>0$, we have, for $|n|\ge1$, that $n/\delta+\xi\approx n/\delta$, and the decay of $\zeta\mapsto\widehat{\mathcal{K}}({\bf x},\zeta)$ can be used to show that, as $\delta$ tends to zero, the sum over $|n|\ge1$ tends to zero in $L^2(d\xi)$. It can also be shown, using the localization of $\xi$, that the $n=0$ contribution to the sum tends to
$\left\langle P_+({\bf x}), \widehat{\mathcal{K}}\left({\bf x},\xi\right)\ \right\rangle_{L^2(\Omega)}$.
Therefore, uniformly in $|\xi|\le\delta^{\nu-1}$, we have
\begin{equation}
\lim_{\delta\to0}\ \delta\ \inner{\Phi_{+}({\bf x},\delta\xi), J({\bf x},\delta{\bm{\mathfrak{K}}}_2\cdot{\bf x})}_{L_{\kpar=\bK\cdot\vtilde_1}^2}\ =\ \left\langle P_+({\bf x}), \widehat{\mathcal{K}}\left({\bf x},\xi\right)\ \right\rangle_{L^2(\Omega)} .
\nonumber\end{equation}
Consequently,
\begin{align}
\lim_{\delta\to0}\mathfrak{J}(\delta)\ & =
\int_\mathbb{R}\ d\xi\ \left[\
\overline{\widehat{\alpha}_{\star,+}(\xi)}\ \left\langle P_+({\bf x}), \widehat{ \mathcal{K} }\left({\bf x},\xi\right)\ \right\rangle_{L^2(\Omega)} \right. \nonumber \\
&\qquad\qquad\qquad \left. + \overline{\widehat{\alpha}_{\star,-}(\xi)}\
\left\langle P_-({\bf x}), \widehat{ \mathcal{K} }\left({\bf x},\xi\right)\ \right\rangle_{L^2(\Omega)}
\ \right] . \label{help-limit}
\end{align}
The principal contribution to the limit in \eqref{sketch_limit1} comes from the $\widehat{\mathcal{M}}_1(\xi;\delta)$ term in \eqref{M_op}.
We apply \eqref{help-limit} with the choice
$J=J_{\mathcal{M}}({\bf x},\zeta)= \psi^{(0)}({\bf x},\zeta)$ and
\[ \mathcal{K}_{\mathcal{M}}({\bf x},\zeta)\equiv e^{-i{\bf K}\cdot{\bf x}}J_{\mathcal{M}}({\bf x},\zeta)=\alpha_{\star,+}(\zeta)P_+({\bf x})+\alpha_{\star,-}(\zeta)P_-({\bf x})\ .\]
The principal contribution to the limit in \eqref{sketch_limit2} comes from the $\widehat{\mathcal{N}}_1(\xi;\delta)$ and $\widehat{\mathcal{N}}_2(\xi;\delta)$ terms in
\eqref{N_op}. We
apply \eqref{help-limit} with the choice
\[J=J_{\mathcal{N}}({\bf x},\zeta)= \left(2{\bm{\mathfrak{K}}}_2\cdot\nabla_{\bf x}\ \partial_\zeta-\kappa(\zeta)W({\bf x})\right)\psi^{(1)}_p({\bf x},\zeta)+|{\bm{\mathfrak{K}}}_2|^2\ \partial_\zeta^2\psi^{(0)}({\bf x},\zeta) \]
and $\mathcal{K}_{\mathcal{N}}({\bf x},\zeta)\equiv
e^{-i{\bf K}\cdot{\bf x}}J_{\mathcal{N}}({\bf x},\zeta)$. The detailed computations are omitted since they
are similar to those in \cites{FLW-MAMS:15}.
By \eqref{sketch_limit1}-\eqref{sketch_bound3}, it follows that
$\mathcal{J}_+[\mu,\delta]=\mu-\mu_0+o(1)$ as $\delta\rightarrow0$, uniformly for
$|\mu|\leq M$.
Therefore, $\mathcal{J}[\mu,\delta]$ is well-defined on $\{(\mu,\delta)\ :\ |\mu|<M,\ 0\leq\delta<\delta_0\}$ and continuous at $\delta=0$, and Proposition \ref{proposition3} is proved.
Summarizing: given $\widehat{\beta}(\cdot;\mu,\delta)$, constructed in Proposition \ref{solve4beta}, it remains, in order to complete the construction of a solution to the system \eqref{pplleqn}-\eqref{pperpeqn}, to solve \eqref{pplleqn} for
$\mu=\mu(\delta)$. Furthermore, we have just shown that \eqref{pplleqn} holds if and only if $\mu=\mu(\delta)$ is a solution of $\mathcal{J}[\mu,\delta]=0$.
From Proposition \ref{proposition3} it follows that $\mathcal{J}[\mu,\delta]=0$ has a transverse zero, $\mu=\mu(\delta)$, for all $\delta$ sufficiently small. The details are presented in Proposition 6.17
of \cites{FLW-MAMS:15}:
\begin{proposition}\label{solveJeq0}
There exists $\delta_0>0$, and a function $\delta\mapsto\mu(\delta)$, defined for $0\le\delta<\delta_0$ such that:
$|\mu(\delta)|\le M$, $\lim_{\delta\to0}\mu(\delta)=\mu(0)=\mu_0 \equiv E^{(2)}$ and
$\mathcal{J}[\mu(\delta),\delta]=0$ for all $0\le\delta<\delta_0$.
\end{proposition}
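The existence of $\mu(\delta)$ follows from the intermediate value theorem. Since $\mathcal{J}[\mu,\delta]=\mu-\mu_0+o(1)$ uniformly for $|\mu|\le M$ as $\delta\to0$, for any fixed $0<\rho<M-|\mu_0|$ and all $\delta$ sufficiently small,
\begin{equation*}
\mathcal{J}[\mu_0-\rho,\delta]\ <\ 0\ <\ \mathcal{J}[\mu_0+\rho,\delta]\ ,
\end{equation*}
and, since $\mu\mapsto\mathcal{J}[\mu,\delta]$ is continuous, there is a zero $\mu(\delta)\in(\mu_0-\rho,\mu_0+\rho)$; letting $\rho=\rho(\delta)\to0$ appropriately as $\delta\to0$ yields $\mu(\delta)\to\mu_0$.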
\noindent We have constructed a solution pair $\left(\widehat{\beta}^\delta(\xi),\mu(\delta)\right)$, with
$\widehat{\beta}^\delta\in L^{2,1}(\mathbb{R};d\xi)$, of the band-limited Dirac system \eqref{compacterroreqn}. Now apply Proposition \ref{needtoshow} and the proof of Theorem \ref{thm-edgestate} is complete.
\section{Edge states for weak potentials and the no-fold condition for the zigzag slice} \label{zz-gap}
In Section \ref{thm-edge-state} we fixed an arbitrary edge, ${\bm{\mathfrak{v}}}_1=a_1{\bf v}_1+a_2{\bf v}_2$, and proved the existence of topologically protected ${\bm{\mathfrak{v}}}_1-$ edge states under the spectral no-fold condition. In this section, we consider the special case of the zigzag edge, corresponding to the choice ${\bm{\mathfrak{v}}}_1={\bf v}_1$. We prove that the spectral no-fold condition holds in the weak potential regime, provided $\varepsilon V_{1,1}>0$; this implies the existence of a topologically protected family of zigzag edge states.
We proceed in this section to prove the following:
\begin{enumerate}
\item Theorem \ref{SGC!}: The operator $-\Delta+\varepsilon V$ satisfies the no-fold condition along the zigzag (${\bf k}_2$) slice at the Dirac point $({\bf K},E^\varepsilon_\star)$; see Definition \ref{SGC}.
\item Theorem \ref{delta-gap}: $-\Delta+\varepsilon V({\bf x})+\delta W({\bf x})$ acting in $L^2(\Sigma_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1})$ has a spectral gap
about the energy $E=E_\star^\varepsilon$.
\item Theorem \ref{NO-directional-gap!}: If $\varepsilon V_{1,1}<0$, then the spectral no-fold condition for the zigzag slice does not hold.
\item Theorem \ref{Hepsdelta-edgestates}: For $0<|\delta|\ll\varepsilon^2$ and $\varepsilon$ sufficiently small, the zigzag edge state eigenvalue problem for $H^{(\varepsilon,\delta)} =-\Delta+\varepsilon V({\bf x})+\delta\kappa(\delta{\bf k}_2\cdot{\bf x}) W({\bf x})$ has topologically protected edge states.
\end{enumerate}
\noindent We begin by stating our detailed
assumptions on $V({\bf x})$ and $W({\bf x})$. There exists ${\bf x}_0\in\mathbb{R}^2$ such that $\tilde{V}({\bf x})=V({\bf x}-{\bf x}_0)$
and $\tilde{W}({\bf x})=W({\bf x}-{\bf x}_0)$ satisfy the following:
\begin{equation}
\text{\rm{\bf Assumptions (V)}}
\label{V-assumptions}\end{equation}
\begin{enumerate}
\item[(V1)] $\Lambda_h$- periodicity:\ \ $\tilde{V}({\bf x}+{\bf v})= \tilde{V}({\bf x})$ for all ${\bf v}\in\Lambda_h$.
\item[(V2)] Inversion symmetry:\ \ $\tilde{V}({\bf x})=\tilde{V}(-{\bf x})$.
\item[(V3)] $2\pi/3$-rotational invariance:\ $\tilde{V}(R^*{\bf x})=\tilde{V}({\bf x})$.
\item[(V4)] Positivity of Fourier coefficient of $\varepsilon V$, $\varepsilon V_{1,1}$: $\varepsilon \tilde{V}_{1,1}>0$, \\
where
$\tilde{V}_{1,1}= \frac{1}{|\Omega|}\int_\Omega e^{-i({\bf k}_1+{\bf k}_2)\cdot{\bf y}}\tilde{V}({\bf y}) d{\bf y}$;
see \eqref{V11eq0} and \eqref{Omega-fourier}.
\end{enumerate}
\begin{equation}
\text{\rm{\bf Assumptions (W)}}
\label{W-assumptions}
\end{equation}
\begin{enumerate}
\item[(W1)~] $\Lambda_h$- periodicity:\ \ $\tilde{W}({\bf x}+{\bf v})= \tilde{W}({\bf x})$ for all ${\bf v}\in\Lambda_h$.
\item[(W2)~] Anti-symmetry:\ \ $\tilde{W}(-{\bf x})= -\tilde{W}({\bf x})$.
\item[(W3$^*$)] Uniform nondegeneracy of $\tilde{W}$:\ \ Let $\Phi^\varepsilon_j({\bf x}),\ j=1,2$ denote the $L^2_{{\bf K},\tau}$, respectively, $L^2_{{\bf K},\overline\tau}$, modes of the degenerate $L^2_{\bf K}-$ eigenspace of $H^{(\varepsilon,0)}=-\Delta+\varepsilon V$. Then, there exists $\theta_0>0$, independent of $\varepsilon$, such that for all $\varepsilon$ sufficiently small:
\begin{equation}
\left|\ \vartheta^\varepsilon_\sharp\ \right|\ \equiv \left|\inner{\Phi_1^\varepsilon,\tilde{W}\Phi_1^\varepsilon}_{L^2_{\bf K}} \right|\ge\theta_0>0\ .\label{uniform-nondegen}
\end{equation}
\end{enumerate}
\noindent {N.B.\ \ Consistent with our earlier convention, in the following discussion, we shall drop the ``tildes'' on both $V$ and $W$. It will be understood
that we have chosen coordinates with ${\bf x}_0=0$.}
\begin{remark}\label{on-theta-sharp}
We claim that the uniform non-degeneracy condition (W3$^*$) on $W$ (see \eqref{uniform-nondegen}) is equivalent to the assumption:
\begin{equation}
\tag{W3*}
{W}_{0,1} + {W}_{1,0} - {W}_{1,1} \ne0 ,
\label{uniform-nondegen-W}
\end{equation}
where $\{W_{\bf m}\}_{ {\bf m}\in\mathbb{Z}^2}$ denote the Fourier coefficients of $W({\bf x})$.
To see this, note by
Proposition 3.1 of \cites{FW:12}, that for sufficiently small $\varepsilon$,
\[\Phi^\varepsilon_1({\bf x}) = \frac{1}{\sqrt{3|\Omega|}} e^{i{\bf K}\cdot{\bf x}} \left[1+\overline{\tau}e^{i{\bf k}_2\cdot{\bf x}} + \tau e^{-i{\bf k}_1\cdot{\bf x}}\right] +\mathcal{O}(\varepsilon); \]
see also \eqref{p_sigma}.
Evaluation of $\vartheta^\varepsilon_\sharp$ gives
\begin{align*}
\vartheta^\varepsilon_{\sharp} &= \frac{1}{3|\Omega|} \int_\Omega \abs{1+\overline{\tau}e^{i{\bf k}_2\cdot{\bf x}} +\tau e^{-i{\bf k}_1\cdot{\bf x}}}^2 W({\bf x})\ d{\bf x}+\mathcal{O}(\varepsilon) \nonumber\\
&= \frac{1}{3} \left[ ({W}_{0,1} + {W}_{1,0})\ (\tau - \overline{\tau}) + {W}_{1,1}(\tau^2 - \overline{\tau}^2) \right] + \mathcal{O}(\varepsilon) \nonumber \\
&=i\frac{\sqrt3}3\ \left[{W}_{0,1} + {W}_{1,0} - {W}_{1,1} \right] + \mathcal{O}(\varepsilon) ,
\end{align*}
which is nonzero if \eqref{uniform-nondegen-W} holds and $\varepsilon$ is sufficiently small.
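In the last step of the computation we used the value $\tau=e^{2\pi i/3}$, for which
\begin{equation*}
\tau-\overline{\tau} = 2i\sin\left(\tfrac{2\pi}{3}\right)=i\sqrt{3},\qquad \tau^2=\overline{\tau},\qquad \tau^2-\overline{\tau}^2 = \overline{\tau}-\tau = -i\sqrt{3}\ ,
\end{equation*}
so that the $W_{0,1}+W_{1,0}$ and $W_{1,1}$ terms combine with the common factor $i\sqrt{3}$.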
\end{remark}
Let $({\bf K},E_\star^\varepsilon)$ denote a Dirac point of $H^{(\varepsilon,0)}=-\Delta + \varepsilon V({\bf x})$,
guaranteed to exist by Theorem \ref{diracpt-small-thm} for all $0<|\varepsilon|<\varepsilon_0$ and assume that $V({\bf x})$ and $W({\bf x})$ satisfy Assumptions (V), (W); see \eqref{V-assumptions}-\eqref{W-assumptions}.
\noindent In our next result, we verify the spectral no-fold condition for the zigzag edge. This is central to applying Theorem \ref{thm-edgestate} to prove our result (Theorem \ref{Hepsdelta-edgestates}) on the existence of a family of zigzag edge states.
\begin{theorem}\label{SGC!}\ \
There exists a positive constant, $\varepsilon_2\le\varepsilon_0$, such that for any $0<|\varepsilon|<\varepsilon_2$, $H^{(\varepsilon,0)}=-\Delta+\varepsilon V$ satisfies the spectral no-fold condition at quasi-momentum ${\bf K}$ along the zigzag slice; see Definition \ref{SGC}.\\ By Assumptions (V), $E_-(\lambda)\le
E_+(\lambda)\le E_b(\lambda),\ b\ge3$. For any $\mathfrak{a}$ sufficiently small:
\begin{align*}
&b=\pm:\ \ \mathfrak{a} \le |\lambda|\le\frac12\ \implies\ \Big|\ E^{\varepsilon,0}_b(\lambda) - E^\varepsilon_\star\ \Big|\ \ge\ \frac{q^4}2\ |V_{1,1}\ \varepsilon|\ \lambda^2\ge c_1\ |\varepsilon|\ \mathfrak{a}^2 ,
\\
&b\ge3:\ \ |\lambda|\le1/2 \ \implies \ \Big| E^{\varepsilon,0}_b(\lambda)-E^{\varepsilon}_\star \Big|\ \ge\ c_2\ |\varepsilon|\ (1+|b|)\ .
\end{align*}
\end{theorem}
\noindent Theorem \ref{SGC!} controls the zigzag slice of the band structure at ${\bf K}$, globally and outside a neighborhood of $({\bf K},E^\varepsilon_\star)$. Since a small perturbation of $V$ which breaks inversion symmetry opens a gap locally about ${\bf K}$ (see \cites{FW:12}), it can be shown that $H^{(\varepsilon,\delta)}$ has an $L^2_{{k_{\parallel}}=2\pi/3}-$ spectral gap about $E=E_\star^\varepsilon$:
\begin{theorem}\label{delta-gap}
Assume $V({\bf x})$ and $W({\bf x})$ satisfy Assumptions (V) and (W).
Let $\varepsilon_2$ be as in Theorem \ref{SGC!}. Then, there exists $c_\flat>0$ such that for all
$0<|\varepsilon|<\varepsilon_2$ and $0<\delta\le c_\flat\ \varepsilon^2$,
the operator $-\Delta+\varepsilon V({\bf x})+\delta W({\bf x})$ has a non-trivial $L^2_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1=2\pi/3}-$ spectral gap about the energy $E=E^\varepsilon_\star$.
\end{theorem}
\noindent In the case where (V4) does not hold, and instead the Fourier coefficient of $\varepsilon V$, $\varepsilon V_{1,1}$, is negative ($\varepsilon \tilde{V}_{1,1}<0$), we prove that the no-fold condition does not hold:
\begin{theorem}[Conditions for non-existence of a zigzag spectral gap]\label{NO-directional-gap!}
Assume hypotheses (V1)-(V3) but, instead of hypothesis (V4), assume
$\varepsilon V_{1,1}<0$.
Then, for any $\varepsilon$ sufficiently small, the no-fold condition of Definition \ref{SGC} does not hold for the zigzag slice.
\end{theorem}
\noindent The proofs of Theorems \ref{SGC!} and \ref{delta-gap} are presented below. Section \ref{LSreduction} discusses a reduction to the lowest three spectral bands. The general strategy, based on an analysis of a $3\times3$ determinant, is given in Section \ref{sec:strategy}. Theorem \ref{NO-directional-gap!} is proved in Section \ref{no-directionalgap} as a consequence of Proposition \ref{Deq0}.
We prove the following theorem on zigzag edge states using results of Section \ref{thm-edge-state}.
\begin{theorem}\label{Hepsdelta-edgestates}
Let
$ H^{(\varepsilon,\delta)} =-\Delta+\varepsilon V({\bf x})+\delta\kappa(\delta{\bf k}_2\cdot{\bf x}) W({\bf x})$,
where $V({\bf x})$ and $W({\bf x})$ satisfy Assumptions (V) and (W), and $\kappa({\bf X})$ is a domain wall function in the sense of Definition \ref{domain-wall-defn}.
Let $\varepsilon_2>0$ and $c_\flat>0$ be as in Theorem \ref{SGC!} and assume
$0<|\varepsilon|<\varepsilon_2$ and $0<|\delta|\le c_\flat \varepsilon^2$. Then,
there exist edge states, $\Psi({\bf x};k_\parallel)\in L^2_{k_\parallel}(\Sigma)$, with
$|k_\parallel-2\pi/3|$ sufficiently small.
Furthermore, continuous superposition in ${k_{\parallel}}$ yields wave-packets which are concentrated along the zigzag edge.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{Hepsdelta-edgestates}]
We claim that the theorem is an immediate consequence of the spectral no-fold condition for $-\Delta +\varepsilon V$ for the zigzag edge, stated in Theorem \ref{SGC!}. This follows by an application of Theorem \ref{thm-edgestate} (and Corollary \ref{vary_k_parallel}).
Since the details of the proof of Theorem \ref{thm-edgestate} are carried out for the case of $H^{(\varepsilon,\delta)}$ with $\varepsilon=1$,
we wish to point out how the proof applies with $\varepsilon$ and $\delta$ varying as in the statement of Theorem \ref{Hepsdelta-edgestates}.
The proof of Theorem \ref{thm-edgestate} uses a Lyapunov-Schmidt reduction strategy where the eigenvalue problem is reduced to an equivalent eigenvalue problem (nonlinear in the eigenvalue parameter, $E$)
for the Floquet-Bloch spectral components of the bound states in a neighborhood of the Dirac point $({\bf K},E^\varepsilon_\star)$. Stated for the relevant case of the zigzag edge, this reduction step requires the invertibility of an operator $(I-\mathcal{Q}_\delta)$ acting on $H_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1}^2(\Sigma)$, where $\mathcal{Q}_\delta$ is defined in terms of Floquet-Bloch components in \eqref{tQ-def} for ${\bm{\mathfrak{K}}}_2={\bf k}_2$.
It suffices to show that the $H_{{k_{\parallel}}={\bf K}\cdot{\bf v}_1}^2(\Sigma)$ norm of $\mathcal{Q}_\delta$ is $o(1)$ as $\delta\downarrow0$. From \eqref{frak-e-bound} in Remark \ref{frak-e-remark}, the operator norm of $\mathcal{Q}_\delta$ is bounded by $\mathfrak{e}(\delta)$, given by:
\begin{equation*}
\mathfrak{e}(\delta) \lesssim\ \frac{|\delta|}{\omega(\delta^{\nu})c_1(V)}\ +\ \frac{|\delta|}{(1+|b|)c_2(V)} \
\leq \frac{|\delta|}{\omega(\delta^{\nu}) c_1(V)}\ +\ \frac{|\delta|}{c_2(V)}.
\end{equation*}
By Theorem \ref{SGC!}, the no-fold condition holds with modulus $\omega(\mathfrak{a})=\mathfrak{a}^2$ and constants
\[ c_1(V)=\tilde{c}_1\frac{q^4}{2} |V_{1,1}\varepsilon|\ ,\ \ \ \ c_2(V)=\tilde{c_2}|\varepsilon|.\]
Therefore, with $\mathfrak{a}=\delta^\nu$,
\begin{equation*}
\mathfrak{e}(\delta) \lesssim\ \frac{|\delta|^{1-2\nu}}{c_1(V)}\ +\ \frac{|\delta|}{c_2(V)} \
\lesssim\ \frac{2}{q^4\tilde{c}_1 |V_{1,1}|}\cdot \frac{|\delta|^{1-2\nu}}{|\varepsilon|}\ +\ \frac{1}{\tilde{c_2}}\cdot \frac{|\delta|}{|\varepsilon|}.
\end{equation*}
By hypothesis, $|\delta|\le c_\flat\varepsilon^2$. Hence, $0\le \mathfrak{e}(\delta)
\lesssim |\varepsilon|^{1-4\nu}+|\varepsilon|\to0$ as $\varepsilon\to0$ if we choose $\nu\in(0,1/4)$.
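Explicitly, using $|\delta|\le c_\flat\varepsilon^2$,
\begin{equation*}
\frac{|\delta|^{1-2\nu}}{|\varepsilon|}\ \le\ \frac{\left(c_\flat\ \varepsilon^2\right)^{1-2\nu}}{|\varepsilon|}\ =\ c_\flat^{1-2\nu}\ |\varepsilon|^{1-4\nu}\ ,\qquad
\frac{|\delta|}{|\varepsilon|}\ \le\ c_\flat\ |\varepsilon|\ ,
\end{equation*}
and the exponent $1-4\nu$ is positive precisely for $\nu\in(0,1/4)$.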
This completes the proof of Theorem \ref{Hepsdelta-edgestates}.
\end{proof}
To prove Theorems \ref{SGC!} - \ref{NO-directional-gap!} we introduce a tool to localize the analysis about the lowest three bands. Throughout the analysis, without loss of generality, we take $\varepsilon>0$; the two cases $\varepsilon V_{1,1}>0$ and $\varepsilon V_{1,1}<0$ are realized by varying the sign of $V_{1,1}$.
\subsection{Reduction to the lowest three bands}\label{LSreduction}
In this subsection we show, via a Lyapunov-Schmidt reduction argument, that the proofs of Theorems \ref{SGC!} and \ref{delta-gap} can be reduced to the study of the lowest three spectral bands. To achieve this, we consider several parameter space regimes separately.
We start by considering $\varepsilon=\delta=\lambda=0$. In this case, $H^{(0,0,0)}({\bf K})=-\left(\nabla+i{\bf K}\right)^2$ has a triple eigenvalue, $E^0_\star=|{\bf K}|^2$, with corresponding $3-$ dimensional $L^2(\mathbb{R}^2/\Lambda_h)-$ eigenspace spanned by the eigenfunctions $p_\sigma$, for $\sigma=1,\tau,\overline{\tau}$; see Section \ref{dpts-generic-eps}.
Next, we turn on $\varepsilon$, keeping $\delta=\lambda=0$.
From Theorem \ref{diracpt-small-thm}, there exists $\varepsilon_0>0$ such that for $\varepsilon\in I_{\varepsilon_0}\equiv (-\varepsilon_0,\varepsilon_0)\setminus\{0\}$, the operator $H^{(\varepsilon,0,0)}({\bf K})=-\left(\nabla+i{\bf K}\right)^2+\varepsilon V({\bf x})$ has a double $L^2(\mathbb{R}^2/\Lambda_h)$ eigenvalue, $E^\varepsilon_\star$; the triple eigenvalue $E^0_\star=|{\bf K}|^2$ splits into $E^\varepsilon_\star$ and a third eigenvalue, $\widetilde{E}^\varepsilon_\star$.
The maps $\varepsilon\mapsto E^\varepsilon_\star$ and $\varepsilon\mapsto \widetilde{E}^\varepsilon_\star$ are real analytic for $\varepsilon\in I_{\varepsilon_0}$
with expansions \eqref{E*expand}, \eqref{tildeE*expand}:
\begin{align*}
E^\varepsilon_\star\ &= E_\star^0 + \varepsilon(V_{0,0}-V_{1,1})+\mathcal{O}(\varepsilon^2),\\
\widetilde{E}^\varepsilon_\star &= E_\star^0 + \varepsilon(V_{0,0}+2V_{1,1})+\mathcal{O}(\varepsilon^2).
\end{align*}
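Subtracting these expansions quantifies the splitting of the triple eigenvalue $E^0_\star$ at order $\varepsilon$:
\begin{equation*}
\widetilde{E}^\varepsilon_\star - E^\varepsilon_\star\ =\ 3\ \varepsilon V_{1,1} + \mathcal{O}(\varepsilon^2)\ ,
\end{equation*}
which is positive, for $\varepsilon$ sufficiently small, exactly when Assumption (V4) ($\varepsilon V_{1,1}>0$) holds.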
More generally, we may study the eigenvalue problem
\begin{equation}
(H^{(\varepsilon,\delta,\lambda)}-E)p=0,\ \ p\in L^2(\mathbb{R}^2/\Lambda_h) , \text{\ \ with\ \ }
E\ =\ E_\star^\varepsilon + \mu ,
\label{evp-eps-delta-lam}
\end{equation}
and seek, via Lyapunov-Schmidt reduction, to localize \eqref{evp-eps-delta-lam} about the lowest three bands of the zigzag slice.
Written out, the eigenvalue problem has the form:
\begin{equation}
\Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu\ +\ \varepsilon V\ +\delta W\ \Big]p({\bf x}) = 0,\ \ p\in L^2(\mathbb{R}^2/\Lambda_h) .
\label{evp-p}
\end{equation}
Since $\varepsilon$ and $\delta$ will be chosen to be small, we shall expand $p({\bf x})$ relative to the natural basis of the $L^2(\mathbb{R}^2/\Lambda_h)-$ eigenspace of the free operator, $H^{(0,0,0)}=-(\nabla+i{\bf K})^2$, displayed explicitly in \eqref{p_sigma}. Let $P^\parallel$ denote the projection onto
${\rm span}\left\{p_\sigma:\sigma=1,\tau,\overline{\tau}\right\}$ and $P^\perp=I-P^\parallel$.
We seek a solution of \eqref{evp-p} in the form $p({\bf x}) = p_\parallel({\bf x}) + p_\perp({\bf x})$,
where
\begin{align*}
p_\parallel\in{\rm span}\left\{p_\sigma:\sigma=1,\tau,\overline{\tau}\right\},\ \
p_\perp\in \Big[\ {\rm span}\left\{p_\sigma:\sigma=1,\tau,\overline{\tau}\right\}\ \Big]^\perp .
\end{align*}
Then, we have that \eqref{evp-p} is equivalent to the coupled system of equations:
\begin{align}
&\Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu \Big]p_\parallel \label{p-parallel} \\
&\quad +
\Big[ \varepsilon P^\parallel V + \delta P^\parallel W \Big]p_\parallel +
\Big[ \varepsilon P^\parallel V + \delta P^\parallel W \Big]p_\perp = 0, \nonumber \\
&\Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu \Big]p_\perp \label{p-perp} \\
&\quad + \Big[ \varepsilon P^\perp V + \delta P^\perp W \Big]p_\perp +
\Big[ \varepsilon P^\perp V + \delta P^\perp W \Big]p_\parallel = 0 . \nonumber
\end{align}
\begin{proposition}\label{invert-on-Pperp}
There exists a constant $c>0$ such that if $|\varepsilon|+|\mu|<c$ then
\begin{equation*}
H^{(\varepsilon,0,\lambda)}-\mu\ \equiv\ \Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu \Big]\ \ \textrm{is invertible on}\ P^\perp L^2(\mathbb{R}^2/\Lambda_h) .
\end{equation*}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{invert-on-Pperp}]
This follows from the lower bound:
\begin{equation*}
0<g_0 \equiv \inf_{|\lambda |\le1/2}\ \inf_{ {\bf m}\notin \{ (0,0),(0,1),(-1,0) \} }
\Big|\ |{\bf K}+{\bf m}\vec{{\bf k}}+\lambda{\bf k}_2|^2-|{\bf K}|^2\ \Big| ,
\end{equation*}
where ${\bf m}\vec{{\bf k}}=m_1{\bf k}_1+m_2{\bf k}_2$. Indeed, on $P^\perp L^2(\mathbb{R}^2/\Lambda_h)$ the spectrum of $-\left(\nabla+i[{\bf K}+\lambda{\bf k}_2]\right)^2$ consists of the values $|{\bf K}+{\bf m}\vec{{\bf k}}+\lambda{\bf k}_2|^2$ with ${\bf m}\notin\{(0,0),(0,1),(-1,0)\}$, and since $E^\varepsilon_\star+\mu = |{\bf K}|^2+\mathcal{O}(|\varepsilon|)+\mu$, this spectrum remains at distance at least $g_0/2$ from $E^\varepsilon_\star+\mu$ once $|\varepsilon|+|\mu|<c$ with $c$ sufficiently small.
\end{proof}
By Proposition \ref{invert-on-Pperp}, for all $\varepsilon$ and $\mu$ sufficiently small equation \eqref{p-perp} is equivalent to:
\begin{align*}
&\Big[\ I\ +\ \left[H^{(\varepsilon,0,\lambda)}-\mu \right]^{-1}\
\left[ \varepsilon P^\perp V + \delta P^\perp W \right]\ \Big] p_\perp \\
&\qquad =\ -\left[H^{(\varepsilon,0,\lambda)}-\mu\right]^{-1}\ \left[ \varepsilon P^\perp V + \delta P^\perp W \right]p_\parallel .
\end{align*}
Suppose now $|\varepsilon|+|\delta|+|\mu|<d_1$, where $0<d_1\le c$ is chosen sufficiently small. Then we have
\begin{align}
p_\perp\
&= -\Big[\ I\ +\ \left[H^{(\varepsilon,0,\lambda)}-\mu \right]^{-1}\
\left[ \varepsilon P^\perp V + \delta P^\perp W \right]\ \Big]^{-1} \times \nonumber \\
&\qquad\qquad \left[H^{(\varepsilon,0,\lambda)}-\mu\right]^{-1}\ \left[ \varepsilon P^\perp V + \delta P^\perp W \right]p_\parallel
=\ \mathcal{P}(\varepsilon,\delta,\lambda,\mu)\ p_\parallel . \label{p-perp2}
\end{align}
%
Equation \eqref{p-perp2} defines a bounded linear mapping on $P^\parallel L^2(\mathbb{R}^2/\Lambda_h)$ with operator norm $\lesssim 1$:
\[ p_\parallel\mapsto p_\perp[\varepsilon,\delta,\lambda,\mu;p_\parallel]=\mathcal{P}(\varepsilon,\delta,\lambda,\mu)\ p_\parallel :\ Ran\left(P^\parallel\right)\to Ran\left(P^\perp\right) , \]
which is analytic in $\varepsilon, \delta, \lambda$ and $\mu$ for $|\varepsilon|+|\delta|+|\mu|<d_1$ and $|\lambda|<1/2$.
%
Consequently, equation \eqref{p-parallel} becomes a closed equation for $p_\parallel$
which we write as:
\begin{equation*}
\mathcal{M}(\varepsilon,\delta,\lambda,\mu)\ p_\parallel\ =\ 0 ,
\end{equation*}
where
\begin{align}
\mathcal{M}(\varepsilon,\delta,\lambda,\mu) &\equiv
P^\parallel\Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu
+ \varepsilon V + \delta W \nonumber\\
&\quad
+ ( \varepsilon V + \delta W ) \mathcal{P}(\varepsilon,\delta,\lambda,\mu)
\Big] P^\parallel \label{calMdef1}\\
&=
P^\parallel \Big[ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon - \mu + \varepsilon V + \delta W\nonumber\\
& \quad
+ ( \varepsilon V + \delta W ) P^\perp \left[ H^{(\varepsilon,0,\lambda)}-\mu+ \varepsilon V + \delta W \right]^{-1}P^\perp ( \varepsilon V + \delta W ) \Big] P^\parallel \ . \label{calMdef2}
\end{align}
The operator $\mathcal{M}(\varepsilon,\delta,\lambda,\mu)$ acts on the three-dimensional
space $P^\parallel L^2(\mathbb{R}^2/\Lambda_h)={\rm span}\{p_\sigma:\sigma=1,\tau,\overline{\tau}\}$.
Moreover, from the expression \eqref{calMdef2} it is clear that $\mathcal{M}(\varepsilon,\delta,\lambda,\mu)$ is self-adjoint for $\varepsilon,\delta,\lambda,\mu$ real.
We shall study it via its matrix representation relative to the basis $\{p_\sigma:\sigma=1,\tau,\overline{\tau}\}$:
\begin{equation}
M_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,\mu) \equiv \left\langle p_\sigma,\mathcal{M}(\varepsilon,\delta,\lambda,\mu) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)},\ \ \sigma=1,\tau,\overline{\tau}\ .
\label{Msts}\end{equation}
Clearly, $M(\varepsilon,\delta,\lambda,\mu)$ is Hermitian for $\varepsilon,\delta,\lambda,\mu$ real:
$ M_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,\mu) = \overline{M_{{\tilde{\sigma}},\sigma}(\varepsilon,\delta,\lambda,\mu)}$.
Summarizing the above discussion we have the following:
\begin{proposition}\label{Msigtsig}
\begin{enumerate}
\item The entries of the $3\times3$
matrix $M(\varepsilon,\delta,\lambda,\mu)$ are analytic functions of complex
$\varepsilon, \delta, \lambda$ and $\mu$ for $|\varepsilon|+|\delta|+|\mu|<d_1$ and $|\lambda|<1/2$.
\item For $|\varepsilon|+|\delta|+|\mu|<d_1$ and $|\lambda|<1/2$, we have that
$E=E^\varepsilon_\star+\mu$ is an $L^2(\mathbb{R}^2/\Lambda_h)-$ eigenvalue of $H^{(\varepsilon,\delta,\lambda)}$
if and only if
$\det M(\varepsilon,\delta,\lambda,\mu)=0$.
\end{enumerate}
%
\end{proposition}
We now study the roots of $\det M(\varepsilon,\delta,\lambda,\mu)=0$ (eigenvalues of
$H^{(\varepsilon,\delta,\lambda)}$) for $\varepsilon$ and $\delta$ small.
First let $\varepsilon=\delta=0$. By the formulae \eqref{psig-nablaK-tpsig} and \eqref{J-matrix-computed}, derived and also applied in Section \ref{Mexpansion}, we have:
\begin{align*}
M(0,0,\lambda,\mu) &= \Big(\ \left\langle p_\sigma\ ,\ \mathcal{M}(0,0,\lambda,\mu) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\ \Big)_{ \sigma,{\tilde{\sigma}}=1,\tau,\overline\tau}\ ,
\nonumber \\
&= \Big(\ \left\langle p_\sigma\ ,\ \left(-(\nabla+i({\bf K}+\lambda{\bf k}_2))^2\right) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\ -\ (E_\star^0+\mu)\delta_{\sigma,{\tilde{\sigma}}}\ \Big)_{ \sigma,{\tilde{\sigma}}=1,\tau,\overline\tau}\ ,
\nonumber \\
&= \begin{pmatrix}
\lambda^2q^2-\mu & \alpha\lambda & \overline{\alpha}\lambda\\
\overline{\alpha}\lambda &\lambda^2q^2-\mu & \alpha\lambda
\\ \alpha\lambda & \overline{\alpha}\lambda & \lambda^2q^2-\mu
\end{pmatrix} , \ \ \alpha= \frac{q^2}{\sqrt3}\ i\tau.
\end{align*}
%
%
Thus, $M(0,0,\lambda,\mu)$ is singular if $\mu$ is a root of the polynomial:
\begin{align*}
&\det M(0,0,\lambda,\mu) \\
&\quad = -\mu^3 + \mu^2 (3q^2\lambda^2) + \mu(3\lambda^2|\alpha|^2-3\lambda^4q^4)
+\lambda^6q^6 -3\lambda^4q^2|\alpha|^2+2\lambda^3\Re(\alpha^3) \nonumber \\
&\quad = -\mu^3 + \mu^2(3\lambda^2q^2) + \mu(\lambda^2q^4-3\lambda^4q^4) + \lambda^6q^6 - \lambda^4q^6,
\end{align*}
where we have used that $|\alpha|^2=\frac{q^4}{3}$ and $\Re(\alpha^3)=0$.
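The expansion above is easy to derail by a single sign; the following standalone Python sketch (an independent numerical cross-check, not part of the argument, with $q$, $\lambda$, $\mu$ chosen arbitrarily) compares a direct evaluation of the $3\times3$ circulant determinant with the expanded cubic, whose constant term $\lambda^6q^6-\lambda^4q^6$ agrees with the product of the roots listed next.

```python
import cmath
import math

def det_M0(q, lam, mu):
    """Determinant of M(0,0,lam,mu): a circulant [[a,b,c],[c,a,b],[b,c,a]]
    with a = lam^2 q^2 - mu, b = alpha*lam, c = conj(alpha)*lam, where
    alpha = (q^2/sqrt(3)) * i * tau and tau = exp(2*pi*i/3).
    For such a circulant, det = a^3 + b^3 + c^3 - 3abc."""
    tau = cmath.exp(2j * math.pi / 3)
    alpha = (q**2 / math.sqrt(3)) * 1j * tau
    a = lam**2 * q**2 - mu
    b, c = alpha * lam, alpha.conjugate() * lam
    return a**3 + b**3 + c**3 - 3 * a * b * c

def cubic(q, lam, mu):
    """Expanded cubic in mu, with constant term q^6 lam^6 - q^6 lam^4."""
    return (-mu**3 + 3 * q**2 * lam**2 * mu**2
            + mu * (q**4 * lam**2 - 3 * q**4 * lam**4)
            + q**6 * lam**6 - q**6 * lam**4)

q, lam = 1.7, 0.3
for mu in (0.0, 0.11, -0.4, 2.5):
    assert abs(det_M0(q, lam, mu) - cubic(q, lam, mu)) < 1e-9
```

The check uses only that $b^3+c^3=2\lambda^3\,\Re(\alpha^3)=0$ and $bc=\lambda^2|\alpha|^2=\lambda^2 q^4/3$, exactly the identities invoked in the text.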
The roots, $\mu^{(0)}_j(\lambda),\ j=1,2,3$, defined by the ordering
$\mu^{(0)}_1(\lambda)\le\mu^{(0)}_2(\lambda)\le\mu^{(0)}_3(\lambda)$,
listed with multiplicity, are given by:
\begin{align}
{\bf 0\le\lambda\le\frac12:}\
&\mu^{(0)}_1(\lambda)=q^2\lambda (\lambda-1),\ \mu^{(0)}_2(\lambda)=q^2\lambda^2,\ \mu^{(0)}_3(\lambda)=q^2\lambda (\lambda+1) , \label{mulam>0}\\
{\bf -\frac12\le\lambda\le0:}\
&\mu^{(0)}_1(\lambda)=q^2\lambda (\lambda+1),\ \mu^{(0)}_2(\lambda)=q^2\lambda^2,\ \mu^{(0)}_3(\lambda)=q^2\lambda (\lambda-1) . \label{mulam<0}
\end{align}
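The displayed roots and their ordering can also be confirmed numerically. In the sketch below (a standalone check with an arbitrary value of $q$), each claimed root annihilates $\det M(0,0,\lambda,\cdot)$, and sorting the three values reproduces the ordering in \eqref{mulam>0}-\eqref{mulam<0} on each half-interval.

```python
import cmath
import math

def det_M0(q, lam, mu):
    """det M(0,0,lam,mu), evaluated directly from the circulant entries."""
    tau = cmath.exp(2j * math.pi / 3)
    alpha = (q**2 / math.sqrt(3)) * 1j * tau
    a = lam**2 * q**2 - mu
    b, c = alpha * lam, alpha.conjugate() * lam
    return a**3 + b**3 + c**3 - 3 * a * b * c

q = 1.3
for lam in (0.25, -0.25, 0.4, -0.4, 0.5):
    candidates = [q**2 * lam * (lam - 1), q**2 * lam**2, q**2 * lam * (lam + 1)]
    # each claimed root annihilates the determinant ...
    for mu in candidates:
        assert abs(det_M0(q, lam, mu)) < 1e-9
    # ... and the stated ordering holds on each half-interval
    srt = sorted(candidates)
    if 0 <= lam <= 0.5:
        assert srt == [q**2 * lam * (lam - 1), q**2 * lam**2, q**2 * lam * (lam + 1)]
    else:
        assert srt == [q**2 * lam * (lam + 1), q**2 * lam**2, q**2 * lam * (lam - 1)]
```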
The roots $\mu^{(0)}_j(\lambda), j=1,2,3$, are eigenvalues of $-\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^0$. They are uniformly bounded away from all other $L^2(\mathbb{R}^2/\Lambda_h)-$ spectrum. This distance is given by
\begin{equation*}
g_0 \equiv \inf_{|\lambda |\le1/2}\ \inf_{ {\bf m}\notin \{ (0,0),(0,1),(-1,0) \} }
\Big|\ |{\bf K}+{\bf m} \vec{{\bf k}}+\lambda{\bf k}_2|^2-|{\bf K}|^2\ \Big|\ > 0 ,\ \ (E^0_\star=|{\bf K}|^2),
\end{equation*}
where ${\bf m} \vec{{\bf k}}=m_1{\bf k}_1+m_2{\bf k}_2$.
Note that $\det M(0,0,0,\mu)=-\mu^3$ has a root of multiplicity three: $\mu=0$. By Proposition \ref{Msigtsig} and analyticity of $\det M(\varepsilon,\delta,\lambda,\mu)$, we have that for $\varepsilon, \delta$ and $\lambda$ in a small neighborhood of the origin in $\mathbb{C}^3$, there are three solutions of
$\det M(\varepsilon,\delta,\lambda,\mu)=0$. We label them $\mu_j(\varepsilon,\delta,\lambda),\ j=1,2,3$. By self-adjointness of $M(\varepsilon,\delta,\lambda,\mu)$, these roots are real for real values of $\varepsilon$ and $\delta$. Correspondingly,
for small and real $\varepsilon, \delta$ and $\lambda$ there are three real $L^2_{k_\parallel=2\pi/3}-$ eigenvalues of $H^{(\varepsilon,\delta,\lambda)}$, denoted
\[ E_j^{(\varepsilon,\delta)}(\lambda)\equiv E_j^{(\varepsilon,\delta)}({\bf K}+\lambda{\bf k}_2)\equiv E^\varepsilon_\star+\mu_j(\varepsilon,\delta,\lambda),\ \ \ j=1,2,3.\]
The ordering of the $E_j$ implies
\[ \mu_1(\varepsilon,\delta,\lambda)\le \mu_2(\varepsilon,\delta,\lambda)\le \mu_3(\varepsilon,\delta,\lambda) .\]
A mild extension of Proposition \ref{directional-bloch} yields
\begin{proposition}\label{roots-delta0}
For each $|\lambda|\le1/2$, there exist orthonormal $L^2_{{\bf K}+\lambda{\bf k}_2}-$ eigenpairs $(\Phi_\pm({\bf x};\lambda),E_\pm(\lambda))$ and $(\widetilde{\Phi}({\bf x};\lambda),\widetilde{E}(\lambda))$, real analytic in $\lambda$, such that
\[ \textrm{span}\ \{\Phi_{-}({\bf x};\lambda), \Phi_{+}({\bf x};\lambda), \widetilde{\Phi}({\bf x};\lambda)\}
=\ \textrm{span}\ \{\Phi_j({\bf x};{\bf K}+\lambda{\bf k}_2) : j=1,2,3 \}\ .
\]
For fixed $\varepsilon$ small and $\lambda$ tending to $0$ we have:
\begin{enumerate}
\item
\begin{align}
E_\pm^{\varepsilon,\delta=0}(\lambda)\ &=\ E_\star^\varepsilon\ \pm\ |\lambda^\varepsilon_\sharp|\ |z_2|\ \lambda\ +\ \lambda^2\ e_\pm(\lambda,\varepsilon) , \label{Epm-expand}\\
\widetilde{E}^{\varepsilon,\delta=0}(\lambda)\ &=\ \widetilde{E}_\star^\varepsilon\ +\ \lambda^2\ \widetilde{e}(\lambda,\varepsilon) , \label{Etilde-expand}
\end{align}
where $e_\pm(\lambda,\varepsilon),\ \widetilde{e}(\lambda,\varepsilon)\ =\ \mathcal{O}(1)$ as $\lambda, \varepsilon\to0$, and $z_2=k_2^{(1)}+ik_2^{(2)}$. These functions admit convergent power series representations in $\varepsilon$ and $\lambda$ in a fixed neighborhood of the origin in $\mathbb{C}^2$. Furthermore, $e_\pm(\lambda,\varepsilon)$ and $\widetilde{e}(\lambda,\varepsilon)$ are
real-valued for real $\lambda$ and $\varepsilon$.
\item Therefore, for all small $\varepsilon$ and $\lambda$, the three roots of $\det M(\varepsilon,\delta=0,\lambda,\mu)$ may be labeled:
\begin{align}
\mu_+(\lambda,\varepsilon)\ &=\ |\lambda^\varepsilon_\sharp|\ |z_2|\ \lambda\ +\ \lambda^2\ e_+(\lambda,\varepsilon)\label{mu+expand}\\
\mu_-(\lambda,\varepsilon)\ &= -|\lambda^\varepsilon_\sharp|\ |z_2|\ \lambda\ +\ \lambda^2\ e_-(\lambda,\varepsilon)\label{mu-expand}\\
\tilde{\mu}(\lambda,\varepsilon)\ &=\ \widetilde{E}_\star^\varepsilon-E_\star^\varepsilon\ +\ \lambda^2\ \widetilde{e}(\lambda,\varepsilon)\ ,\ \ {\rm where}\ \widetilde{E}_\star^\varepsilon-E_\star^\varepsilon= 3\varepsilon V_{1,1}\ +\ \mathcal{O}(\varepsilon^2) . \label{mutilde-expand}
\end{align}
\end{enumerate}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{roots-delta0}]
Part 1 can be proved as follows. The expansion \eqref{Epm-expand} and analyticity follow by perturbation theory as in Proposition \ref{directional-bloch}; see also \cites{Friedrichs:65,kato1995perturbation}. The expansion \eqref{mutilde-expand} also follows by perturbation theory of the simple eigenvalue $\widetilde{E}^\varepsilon_\star $ for $\lambda=0$.
It is easy to see that
\begin{equation}
\widetilde{E}^{\varepsilon,\delta=0}(\lambda)\
=\ \widetilde{E}_\star^\varepsilon\ +\
\lambda\ \times\left(\ -2i{\bf k}_2\cdot \left\langle\widetilde{\Phi}^\varepsilon,\nabla_{\bf x}\widetilde{\Phi}^\varepsilon
\right\rangle_{L^2_{\bf K}}\ \right)\ +\ \lambda^2\ \widetilde{e}(\lambda,\varepsilon).
\end{equation}
We claim that $\left\langle\widetilde{\Phi}^\varepsilon,\nabla_{\bf x}\widetilde{\Phi}^\varepsilon
\right\rangle_{L^2_{\bf K}}=0$. This follows since $\widetilde{\Phi}^\varepsilon\in L^2_{{\bf K},1}$ as in the proof of Proposition 4.1 of \cites{FW:12}. This proves the expansion \eqref{Etilde-expand}.
\end{proof}
Finally we discuss a useful symmetry of $\det M(\varepsilon,\delta,\lambda,\mu=0)$.
\begin{proposition}\label{symmetry-detM}
Assume $V$ satisfies Assumptions (V) and $W$ satisfies Assumptions (W).
Recall $M(\varepsilon,\delta,\lambda,0)$, defined in \eqref{calMdef1}-\eqref{Msts}.
Then, $\det M(\varepsilon,\delta,\lambda,0)$ is real-valued for real $\varepsilon, \delta, \lambda$ and analytic in a small neighborhood of the origin in $\mathbb{C}^3$. Furthermore,
\begin{align}
\det M(\varepsilon,\delta,\lambda,0)&=\det M(\varepsilon,-\delta,\lambda,0)
\label{delta-minusdelta}
\end{align}
and therefore $\det M(\varepsilon,\delta,\lambda,0)$ is real analytic in $\varepsilon$, $\delta^2$ and $\lambda$, and we write
\begin{align*}
D(\varepsilon,\delta^2,\lambda)\equiv \det M(\varepsilon,\delta,\lambda,0) .
\end{align*}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{symmetry-detM}]
Let $\mathcal{I}[f]({\bf x})=f(-{\bf x})$ and $\mathcal{C}[f]({\bf x})=\overline{f({\bf x})}$. Using that $\varepsilon, \delta$ and $\lambda$ are real, $V({\bf x})$ is even and $W({\bf x})$ is odd, one can check directly that
\begin{align}
\mathcal{C}\circ\mathcal{I}\circ H^{\varepsilon,-\delta,\lambda} &= H^{\varepsilon,\delta,\lambda}\circ
\mathcal{I}\circ\mathcal{C}\label{sym1}
\end{align}
%
Furthermore, note that
\begin{align}
&(\mathcal{C}\circ\mathcal{I}) p_1 = p_1,\quad
(\mathcal{C}\circ\mathcal{I}) p_\tau = p_{\overline\tau},\quad
(\mathcal{C}\circ\mathcal{I}) p_{\overline\tau} = p_\tau.
\label{CIp} \end{align}
It follows that
\begin{align}
\mathcal{C}\circ\mathcal{I}\circ P^\parallel &= P^\parallel\circ \mathcal{C}\circ\mathcal{I} .
\label{sym2}\end{align}
Using the symmetry relations \eqref{sym1}-\eqref{sym2}
to rewrite $M(\varepsilon,-\delta,\lambda,0)$ in terms of $H^{\varepsilon,\delta,\lambda}$,
we find that $M(\varepsilon,-\delta,\lambda,0)$ can be transformed into
$M(\varepsilon,\delta,\lambda,0)$ by interchanging its second and third rows, and then interchanging its second and third columns.
Therefore, $\det M(\varepsilon,\delta,\lambda,0)=\det M(\varepsilon,-\delta,\lambda,0)$ and the proof of Proposition \ref{symmetry-detM} is complete.
\end{proof}
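The final step of the proof rests only on the elementary fact that interchanging the second and third rows and then the second and third columns leaves a $3\times3$ determinant unchanged, since each swap contributes a factor $-1$. A minimal standalone sketch of this step on random matrices:

```python
import random

def det3(m):
    """Cofactor expansion of a 3x3 determinant."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def swap_rows_cols_23(m):
    """Interchange rows 2 and 3, then columns 2 and 3."""
    m = [m[0], m[2], m[1]]
    return [[row[0], row[2], row[1]] for row in m]

random.seed(7)
for _ in range(100):
    M = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(3)]
    # each single swap flips the sign; doing both leaves det unchanged
    assert abs(det3(swap_rows_cols_23(M)) - det3(M)) < 1e-9
```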
\subsection{Strategy for analysis of $\det M(\varepsilon,\delta,\lambda,\mu)$ in the case $\varepsilon V_{1,1}>0$} \label{sec:strategy}
We first observe that there exist positive constants $d_1$ and $c_4$ such that if $|\varepsilon|+|\delta|<d_1$, then
\begin{equation}
|E_j^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ \ge c_4(1+|j|),\ \ j\ge4 .
\label{outer-lams-bbig}\end{equation}
Indeed, by the discussion following Proposition \ref{Msigtsig}, we have that there exists $d_2>0$ such that for $j\geq4$,
$|\mu_j(\varepsilon,\delta,\lambda)|\equiv |E_j^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ge d_2$;
the lower bound \eqref{outer-lams-bbig} now follows from the Weyl asymptotics for eigenvalues of second order elliptic operators in two space dimensions.
Hence, we restrict our attention to $E_j^{(\varepsilon,\delta)}(\lambda)=E_\star^\varepsilon+\mu_j(\varepsilon,\delta,\lambda),\ j=1,2,3$, which we study by a detailed analysis of $\det M(\varepsilon,\delta,\lambda,\mu=0)$. The analysis consists of two steps, which we now outline.
\noindent {\bf Step 1:}\
Fix $c_\flat>0$ arbitrary. We will prove that there exists $C_\flat>0$ such that the following holds: there exist $\varepsilon_1>0$ and a constant $c_3>0$, depending on $V$ and $W$, such that
for all $0<|\varepsilon|<\varepsilon_1$ and $0\le|\delta|\le c_\flat\ \varepsilon^2$:
\begin{align}
C_\flat\sqrt{\varepsilon}\le |\lambda|\le\frac12\ \implies \ |E_j^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ & \ge c_3 \varepsilon,\ \ j=1,2,3 \label{outer-lams} .
\end{align}
%
\noindent Furthermore, by \eqref{outer-lams} and \eqref{outer-lams-bbig}, it follows that
\begin{align}
& C_\flat\sqrt{\varepsilon}\le |\lambda|\le\frac12\ \implies \ L^2(\mathbb{R}^2/\Lambda_h)-{\rm spec}( H^{(\varepsilon,\delta,\lambda)})\ \cap \Big[E_\star^\varepsilon-c_3\varepsilon,E_\star^\varepsilon+c_3\varepsilon\Big]\ \ \textrm{is empty}. \label{Step1}
\end{align}
\noindent {\bf Step 2:}\ Let $c_\flat$ and $C_\flat$ be as in Step 1. We will prove that there exists $0<\varepsilon_2\le \varepsilon_1$, such that if $(\lambda,\delta)$ are in the set:
\begin{equation}
|\lambda|\le C_\flat\ \varepsilon^{\frac12}\ \ \textrm{and}\ \ 0\le |\delta|\ \le c_\flat\ \varepsilon^2,\ \ {\rm where} \ 0<|\varepsilon|<\varepsilon_2,\ \ {\rm then}\label{smiley}
\end{equation}
\begin{align}
-\det M(\varepsilon,\delta,\lambda,0)\ =\
\left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right)\times\left(\ 1+o(1)\ \right).
\label{Dlower}
\end{align}
A simple lower bound on $|\mu_j(\varepsilon,\delta,\lambda)|=|E_j^{(\varepsilon,\delta)}(\lambda)-E^\varepsilon_\star|$, $j=1,2,3$, is then obtained as follows.
By \eqref{Dlower}, for some positive constants: $C_1$ and $C_2$, we have
{\small{
\begin{align*}
&C_1(\lambda^2+\varepsilon)\cdot (\lambda^2+\delta^2)\ \le\ \left| \det M(\varepsilon,\delta,\lambda,0) \right| \\
&= \left| \det M(\varepsilon,\delta,\lambda,0) - \det M (\varepsilon,\delta,\lambda,\mu_j(\varepsilon,\delta,\lambda))\right|\ \le C_2\ |0-\mu_j(\varepsilon,\delta,\lambda)| = C_2\ |\mu_j(\varepsilon,\delta,\lambda)|,
\end{align*}
}}
for $j=1,2,3$.
Therefore, with $C_3=C_1/C_2$,
\begin{equation}
\left| E_j^{\varepsilon,\delta}(\lambda)-E_\star^\varepsilon\right| = |\mu_j(\varepsilon,\delta,\lambda)| \geq C_3\ \varepsilon\ (\lambda^2+\delta^2) \ \ge\ C_3\ \varepsilon\ \delta^2,\ \ j=1,2,3.
\label{Ediff-lb}\end{equation}
It follows from \eqref{Ediff-lb} and \eqref{outer-lams-bbig} that
\begin{equation}
L^2(\mathbb{R}^2/\Lambda_h)-{\rm spec}\left( H^{(\varepsilon,\delta,\lambda)} \right)\ \cap\ \Big[E_\star^\varepsilon-\eta,E_\star^\varepsilon+\eta\Big]\ \ \textrm{is empty},
\nonumber\end{equation}
with $\eta= \frac{1}{2}C_3\ \varepsilon\ \delta^2$, whenever $\varepsilon$, $(\lambda,\delta)$ satisfy the constraints \eqref{smiley}.
Theorem \ref{SGC!} and Theorem \ref{delta-gap} are immediate consequences of \eqref{Step1} (Step 1) and \eqref{Ediff-lb} (Step 2) and the representation:
\begin{equation*} \left. H^{(\varepsilon,\delta)}\ \right|_{L^2(\Sigma_{k_\parallel=2\pi/3})}\ =\
\oplus\ \int_{|\lambda|\le\frac12}\ \left.H^{(\varepsilon,\delta,\lambda)}\ \right|_{L^2(\mathbb{R}^2/\Lambda_h)}\ d\lambda .
\end{equation*}
Hence, we now turn to their proofs. We first carry out Step 1 by a simple perturbation analysis about the free Hamiltonian, $H^{(0,0,\lambda)}$. We then turn to Step 2, which is much more involved.
{\bf Verification of \eqref{outer-lams} from Step 1.}
Let $C_\flat$ denote a positive constant, which we will specify shortly, and consider the range $C_\flat\sqrt{\varepsilon}\le|\lambda|\le1/2$. Using the expressions for $\mu^{(0)}_j(\lambda),\ j=1,2,3$, in \eqref{mulam>0} we have that if $\varepsilon\le \varepsilon'(C_\flat)\equiv(4C_\flat^2)^{-1}$, then
$ |E_1^{(0)}(\lambda)-E_\star^0|= q^2|\lambda|\ |1-\lambda|\ge C_\flat q^2 \sqrt{\varepsilon}/2$,\
$|E_2^{(0)}(\lambda)-E_\star^0|= q^2|\lambda|^2\ \ge C_\flat^2 q^2 \varepsilon$,\
and $|E_3^{(0)}(\lambda)-E_\star^0|= q^2|\lambda|\ |1+\lambda|\ge C_\flat q^2 \sqrt{\varepsilon}/2$.
Note that the eigenvalues $(\varepsilon,\delta,\lambda)\mapsto E^{\varepsilon,\delta}(\lambda)$ are Lipschitz continuous functions;
see Chapter XII of \cites{RS4} or Appendix A of \cites{FW:14}.
Therefore, we have
\begin{equation*}
|E_2^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ \ge C_\flat^2\ q^2\ \varepsilon\ -\ |\varepsilon|\ \|V\|_\infty\ -\ |\delta|\ \|W\|_\infty\ge \frac{1}{2}\ C_\flat^2\ q^2\ \varepsilon ,
\end{equation*}
provided $C_\flat$ is chosen positive, finite and sufficiently large. With this choice of $C_\flat$, we also have
\begin{align*}
|E_1^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|&\ \ge C_\flat\ q^2\ \sqrt{\varepsilon}/2\ -\ |\varepsilon|\ \|V\|_\infty\ -\ |\delta|\ \|W\|_\infty , \nonumber\\
|E_3^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|&\ \ge C_\flat\ q^2\ \sqrt{\varepsilon}/2\ -\ |\varepsilon|\ \|V\|_\infty\ -\ |\delta|\ \|W\|_\infty .
\end{align*}
Therefore, there exists $\varepsilon_1>0$, such that for all $0<|\varepsilon|<\varepsilon_1$ we have
\begin{equation*}
|E_1^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ ,\ \ |E_3^{(\varepsilon,\delta)}(\lambda)-E_\star^\varepsilon|\ \ge\ C_\flat\ q^2\ \sqrt{\varepsilon}/4 .
\end{equation*}
This completes the proof of the assertions in Step 1.
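The chain of inequalities for the unperturbed branches can be checked on a grid. The sketch below is an independent numerical illustration; $q=1$ and $C_\flat=2$ are placeholder values, not constants fixed by the paper.

```python
import math

# Illustrative parameters: q and C_flat are placeholders, not values from the paper.
q, C_flat = 1.0, 2.0

for eps in (1e-2, 1e-3, 1e-4):
    assert eps <= 1.0 / (4 * C_flat**2)   # guarantees C_flat*sqrt(eps) <= 1/2
    lo = C_flat * math.sqrt(eps)
    n = 400
    for k in range(n + 1):
        lam = lo + (0.5 - lo) * k / n     # grid over [C_flat*sqrt(eps), 1/2]
        mu1 = q**2 * lam * (lam - 1)
        mu2 = q**2 * lam**2
        mu3 = q**2 * lam * (lam + 1)
        # claimed lower bounds on the eps = 0 branches (tiny slack for rounding)
        assert abs(mu1) >= C_flat * q**2 * math.sqrt(eps) / 2 * (1 - 1e-9)
        assert abs(mu2) >= C_flat**2 * q**2 * eps * (1 - 1e-9)
        assert abs(mu3) >= C_flat * q**2 * math.sqrt(eps) / 2 * (1 - 1e-9)
```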
\subsection{Expansion of $M(\varepsilon,\delta,\lambda,0)$ and its determinant for $\varepsilon V_{1,1} \neq 0$}
\label{Mexpansion}
The key to verifying Step 2 and proving Theorem \ref{SGC!} is the following:
\begin{proposition}\label{detM-expansion}
Let $C_\flat$ be as chosen in Step 1; see \eqref{Step1}.
Then, there exist constants $\varepsilon_2>0$ and $ c>0$ such that for all $0\le\varepsilon<\varepsilon_2$, if
\begin{equation}
0\le|\lambda|\ \le\ C_\flat\ \varepsilon^{1/2}\ \ {\rm and}\ \ 0\le |\delta| \le c\ \varepsilon^2,\ \ {\rm then}
\label{smiley3}
\end{equation}
\begin{align}
-\det M(\varepsilon,\delta,\lambda,0)\
&=
\pi(\varepsilon,\delta^2,\lambda) \ + \ o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right) ,
\label{detM-exp}
\end{align}
where
\begin{align}
\pi(\varepsilon,\delta^2,\lambda)&\equiv \left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right).
\label{pi-def}\end{align}
Here, $W_{\bf m}$, $ {\bf m}\in\mathbb{Z}^2$, denote Fourier coefficients of $W({\bf x})$ and, by \eqref{smiley3}, the correction term in \eqref{detM-exp} divided by $(\lambda^2+\varepsilon)(\lambda^2+\delta^2)$ tends to zero as $\varepsilon$ tends to zero.
Thus,
\begin{align*}
\varepsilon V_{1,1}>0\ \implies\ -\det M(\varepsilon,\delta,\lambda,0)\
&=
\pi(\varepsilon,\delta^2,\lambda)\ \left(1 + o(1) \right)\ \textrm{in the region \eqref{smiley3}.}
\end{align*}
\end{proposition}
\noindent We now embark on an expansion of $\det M(\varepsilon,\delta,\lambda,0)$ and the proof of Proposition \ref{detM-expansion}. The matrix $M(\varepsilon,\delta,\lambda,0)$, see \eqref{calMdef1}-\eqref{Msts},
may be written as a sum of matrices:
\begin{equation}
M(\varepsilon,\delta,\lambda,0)=\ \left[\ M^0+M^V+M^W+M^{\mathcal{P}}\ \right](\varepsilon,\delta,\lambda,0),
\label{Msum}
\end{equation}
where the $\sigma,{\tilde{\sigma}}=1,\tau,\overline{\tau}$ entries are given by:
\begin{align}
M^0_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)\ &=\ \left\langle p_\sigma,\left(\ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2-E_\star^\varepsilon \ \right) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} , \label{M0}\\
M^V_{\sigma,{\tilde{\sigma}}}(\varepsilon)\ &=\ \varepsilon \left\langle p_\sigma, V p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} , \nonumber \\
M^W_{\sigma,{\tilde{\sigma}}}(\delta) &=\ \delta \left\langle p_\sigma, W p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} , \nonumber \\
M^{\mathcal{P}}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)\ &=\ \left\langle p_\sigma,(\varepsilon V + \delta W)\
\mathcal{P}(\varepsilon,\delta,\lambda,0)\ p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} . \nonumber
\end{align}
For $\varepsilon$ and $\delta$ small,
%
\begin{align}
&M^0_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)\ =\
M^{0,approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\lambda)\ +\ \mathcal{O}(\varepsilon^2), \label{M0-M0app}
\end{align}
where (inner products over $L^2(\mathbb{R}^2/\Lambda_h)$)
{\small{
\begin{align}
&M^{0,approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\lambda) =\left\langle p_\sigma,\left( -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2- [E_\star^0 + \varepsilon(V_{0,0}-V_{1,1})] \right) p_{\tilde{\sigma}}\right\rangle, \ {\rm and} \label{M0-expanded}\\
& M^{\mathcal{P}}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)
\label{MP-expanded}\\
&\quad =
\left\langle (\varepsilon V + \delta W) p_\sigma , P^\perp\left(-(\nabla+i[{\bf K}+\lambda{\bf k}_2])^2-E_\star^\varepsilon +\varepsilon V +\delta W\right)^{-1}P^\perp (\varepsilon V + \delta W) p_{\tilde{\sigma}}\right\rangle \nonumber \\
&\quad = \left\langle (\varepsilon V + \delta W) p_\sigma , P^\perp\left(-(\nabla+i[{\bf K}+\lambda{\bf k}_2])^2-E_\star^\varepsilon \right)^{-1}P^\perp (\varepsilon V + \delta W) p_{\tilde{\sigma}}\right\rangle \nonumber \\
&\quad\qquad +\mathcal{O}(\delta^3 + \delta^2\varepsilon + \delta \varepsilon^2 + \varepsilon^3) = \mathcal{O}(\varepsilon^2 + \varepsilon\delta + \delta^2) .
\nonumber
\end{align}}}
\noindent We next explain that to calculate the determinant of $ M(\varepsilon,\delta,\lambda,0)$ to the desired order in the region \eqref{smiley}, it suffices to calculate the determinant of the approximate matrix:
\begin{equation}
M^{approx}(\varepsilon,\delta,\lambda)\ \equiv \ M^{0,approx}(\varepsilon,\lambda)+M^V(\varepsilon)+M^W(\delta) . \label{detMapprox}
\end{equation}
That is, we show that the terms in $M(\varepsilon,\delta,\lambda,0)-M^{approx}(\varepsilon,\delta,\lambda)$ contribute negligibly to the determinant of $ M(\varepsilon,\delta,\lambda,0)$, when compared with the polynomial, $\pi(\varepsilon,\delta^2,\lambda)$, in \eqref{pi-def}, provided $\lambda$ and $\delta$ are in the region \eqref{smiley}:
\begin{equation}
|\lambda|\ \le C_\flat\ \varepsilon^{\frac12}\ \ \textrm{and}\ \ |\delta|\le c_\flat\ \varepsilon^2 ,
\label{smiley2}
\end{equation}
where $C_\flat$ and $c_\flat$ are appropriately chosen constants.
Recall that $D(\varepsilon,\delta^2=0,\lambda=0)= \det M (\varepsilon,0,0,0) =0$ for all $\varepsilon$, since $\mu=0$ corresponds to $E=E^\varepsilon_\star$, which is an eigenvalue of $H^{(\varepsilon,\delta=0,\lambda=0)}=-\Delta+\varepsilon V$.
Thus, $D(\varepsilon,\delta^2,\lambda)=\det M(\varepsilon,\delta,\lambda,0)$ is a convergent power series in $\varepsilon$, $\delta^2$ and $\lambda$ with no ``pure $\varepsilon$'' terms. On the other hand,
the entries of the matrix $M(\varepsilon,\delta,\lambda,0)$ are convergent power series in $\varepsilon, \delta$ and $\lambda$.
\begin{proposition}\label{ld2}
For $(\lambda,\delta)$ in the region \eqref{smiley2} we have
\[ |\lambda|\ \delta^2 = o \left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right) .\]
Therefore we may drop the $\mathcal{O}(\lambda\delta^2)$ terms, a further simplification.
%
\end{proposition}
\begin{proof}[Proof of Proposition \ref{ld2}]
Consider separately the two regimes: (a) $|\lambda|\le \varepsilon^{1.1}$ and (b) $|\lambda|\ge \varepsilon^{1.1}$. For $|\lambda|\le \varepsilon^{1.1}$, we have $|\lambda|\delta^2\le \varepsilon^{1.1}\delta^2=
\varepsilon^{0.1}\varepsilon\delta^2\le \varepsilon^{0.1}(\lambda^2+\varepsilon)\cdot (\lambda^2+\delta^2)$. And for $|\lambda|\ge \varepsilon^{1.1}$, note that $(\lambda^2+\varepsilon)\cdot (\lambda^2+\delta^2)\ge\varepsilon\lambda^2\ge\varepsilon\cdot\varepsilon^{2.2}=\varepsilon^{3.2}$. On the other hand,
$|\lambda|\delta^2\le \frac12\delta^2\lesssim\varepsilon^4\le \varepsilon^{.8}\cdot (\lambda^2+\varepsilon)\cdot (\lambda^2+\delta^2)$. This completes the proof of Proposition \ref{ld2}.
\end{proof}
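The two-regime argument in the proof can be cross-checked numerically. The sketch below (with $c_\flat=1$ and $C_\flat=2$ as illustrative placeholders) uses the elementary bound $\lambda/(\lambda^2+\delta^2)\le 1/(2\delta)$, which gives ratio $\le \delta/(2\varepsilon)\le c_\flat\,\varepsilon/2$ on the region, so the grid maximum of the ratio must vanish with $\varepsilon$.

```python
import math

def max_ratio(eps, c_flat=1.0, C_flat=2.0, n=200):
    """Grid maximum of lam*delta^2 / ((lam^2+eps)(lam^2+delta^2)) over the
    region 0 < lam <= C_flat*sqrt(eps), 0 < delta <= c_flat*eps^2."""
    best = 0.0
    for i in range(1, n + 1):
        lam = C_flat * math.sqrt(eps) * i / n
        for j in range(1, n + 1):
            delta = c_flat * eps**2 * j / n
            r = lam * delta**2 / ((lam**2 + eps) * (lam**2 + delta**2))
            best = max(best, r)
    return best

# AM-GM: lam^2 + delta^2 >= 2*lam*delta, hence ratio <= delta/(2*eps) <= eps/2
# (with c_flat = 1), so the grid maximum is o(1) as eps -> 0.
for eps in (1e-1, 1e-2, 1e-3):
    assert max_ratio(eps) <= eps / 2 * (1 + 1e-9)
```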
Let us now attach ``weight'' $1$ to the variable $\varepsilon$ and ``weight'' $1/2$ to the variables $\delta$ and $\lambda$. A monomial of the form $\varepsilon^a \lambda^b \delta^c$ carries the weight $a+b/2+c/2$, where $a,b,c\in\mathbb{N}$. In Proposition \ref{neglect} we show that all terms in the power series of $\det M(\varepsilon,\delta,\lambda,0)$ which are of weight strictly larger than two introduce negligible corrections
%
%
to $\det M^{approx}(\varepsilon,\delta,\lambda)$ for $\lambda$ and $\delta$ in the region \eqref{smiley}.
Recalling that there are no pure $\varepsilon$ terms, we see that a monomial in the power series of $D(\varepsilon,\delta^2,\lambda)$ of weight larger than $2$ must have one of the following two forms ($a,b,c\in\mathbb{N}$):
\begin{align*}
& (I)\quad \lambda\times \varepsilon^a\lambda^{b} (\delta^{2})^c,\quad {\rm with}\quad a+b/2+c>3/2\implies 2a+b+2c\ge4 \\% \label{form1}
& (II)\quad \delta^2\times \varepsilon^a\lambda^b(\delta^2)^c,\quad {\rm with}\quad a+b/2+c>1\implies 2a+b+2c\ge3
\end{align*}
\begin{proposition}\label{neglect} Terms of form (I) and form (II) that may appear in $D(\varepsilon, \delta^2, \lambda)$ are $o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ as $\varepsilon\to0$,
for $(\lambda,\delta)$ in the region \eqref{smiley2}:
$
|\lambda|\le C_\flat\ \varepsilon^{\frac12}\ \ \textrm{and}\ \ |\delta|\lesssim\varepsilon^2.
$
%
We may therefore neglect all terms in the power series of $D(\varepsilon,\delta^2,\lambda)$ which are of weight strictly larger than $2$ for $(\lambda,\delta)$ in the region \eqref{smiley2}.
\end{proposition}
\noindent We prove Proposition \ref{neglect} by estimating all terms of the form (I) or (II), and we begin with the following lemma, which is a consequence of part 2 of Proposition \ref{roots-delta0}:
\begin{lemma}\label{det-delta0}
Let $\mu_-(\varepsilon,\lambda),\ \mu_+(\varepsilon,\lambda)$ and $ \widetilde{\mu}(\varepsilon,\lambda)$ denote the three roots of $\det M(\varepsilon,\delta=0,\lambda,\mu)$, defined and analytic in $(\varepsilon,\lambda)$ in a $\mathbb{C}^2$ neighborhood of the origin and displayed in \eqref{mu+expand}-\eqref{mutilde-expand}. Then,
\begin{align*}
\det M(\varepsilon,\delta=0,\lambda,\mu)= (\mu-\mu_-(\varepsilon,\lambda)) \times\ (\mu-\mu_+(\varepsilon,\lambda))\times (\mu-\widetilde{\mu}(\varepsilon,\lambda))\times\Omega(\varepsilon,\lambda,\mu),
\end{align*}
where $\Omega(\varepsilon,\lambda,\mu)$ is bounded. In particular, for all $\varepsilon$ such that $0<|\varepsilon|<\varepsilon_0$ there exists a constant $C_\varepsilon$ such that
\begin{align}\label{bound-det}
\left| \det M(\varepsilon,\delta=0,\lambda,0) \right| &\le C_\varepsilon\ |\lambda|^2 .
\end{align}
\end{lemma}
\noindent {\it Proof:} For fixed $\varepsilon$ and $\lambda$, the mapping $\mu\mapsto\det M(\varepsilon,\delta=0,\lambda,\mu)$ is analytic for $|\mu|<\mu_0$ with zeros at $\mu_\pm(\varepsilon,\lambda), \widetilde{\mu}(\varepsilon,\lambda)$; see Proposition \ref{roots-delta0}. Fix $\varepsilon'$ and $\lambda'$ small such that if $|\varepsilon|<\varepsilon'$ and $|\lambda|<\lambda'$,
the roots all satisfy $|\mu_+(\varepsilon,\lambda)|$, $ |\mu_-(\varepsilon,\lambda)|$,
$|\widetilde{\mu}(\varepsilon,\lambda)|<\mu_0/2$. We claim that for such $\varepsilon$ and $\lambda$,
\[ \Omega(\varepsilon,\lambda,\mu)\equiv
\frac{ \det M(\varepsilon,\delta=0,\lambda,\mu)}
{(\mu-\mu_-(\varepsilon,\lambda)) \times\ (\mu-\mu_+(\varepsilon,\lambda))\times (\mu-\widetilde{\mu}(\varepsilon,\lambda))}\]
is uniformly bounded for all $|\mu|\le\mu_0$, $|\varepsilon|\le\varepsilon'$ and $|\lambda|\le\lambda'$. Indeed, since the roots are bounded in magnitude by $\mu_0/2$, we have
$\max_{|\mu|=\mu_0} \left| \Omega(\varepsilon,\lambda,\mu) \right|\le (2/\mu_0)^3\ \max_{|\mu|=\mu_0}\left|\det M(\varepsilon,\delta=0,\lambda,\mu)\right|$. Applying the maximum principle we have
\[ \max_{|\mu|\le\mu_0}\left| \Omega(\varepsilon,\lambda,\mu) \right|\le (2/\mu_0)^3\ \max_{|\mu|=\mu_0}\left|\det M(\varepsilon,\delta=0,\lambda,\mu)\right|\le (2/\mu_0)^3 C(\mu_0,\varepsilon',\lambda'),\] where $C(\mu_0,\varepsilon',\lambda')$ is a constant. The bound \eqref{bound-det} now follows
from the expansions of the roots.
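At $\varepsilon=0$ the factorization and the quadratic bound \eqref{bound-det} can be verified explicitly, since the roots are known in closed form and the cubic has leading coefficient $-1$ in $\mu$, so the bounded factor reduces to $-1$ there. A standalone numerical check (with $q$ arbitrary):

```python
import cmath
import math

def det_M0(q, lam, mu):
    """det M(0,0,lam,mu), evaluated directly from the circulant entries."""
    tau = cmath.exp(2j * math.pi / 3)
    alpha = (q**2 / math.sqrt(3)) * 1j * tau
    a = lam**2 * q**2 - mu
    b, c = alpha * lam, alpha.conjugate() * lam
    return a**3 + b**3 + c**3 - 3 * a * b * c

q = 1.3
for k in range(-20, 21):
    lam = 0.5 * k / 20
    # at eps = 0 the cubic factors as -(mu-mu1)(mu-mu2)(mu-mu3)
    roots = (q**2 * lam * (lam - 1), q**2 * lam**2, q**2 * lam * (lam + 1))
    for mu in (0.0, 0.37, -1.2):
        prod = -1.0
        for r in roots:
            prod *= (mu - r)
        assert abs(det_M0(q, lam, mu) - prod) < 1e-9
    # quadratic bound at mu = 0: |det| = q^6 lam^4 |lam^2-1| <= q^6 lam^2
    assert abs(det_M0(q, lam, 0.0)) <= q**6 * lam**2 + 1e-12
```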
\noindent Proof of Proposition \ref{neglect}:\\
{\bf (I) Terms of the form $\lambda\times \varepsilon^a\lambda^{b} (\delta^{2})^c, \ {\rm with}\ 2a+b+2c\ge4,\ a,b,c\in\mathbb{N}$:}
\noindent (i) Suppose first that $c=0$. Then, we consider $\lambda\times\varepsilon^a\lambda^b$ with $2a+b\ge4$. By Lemma \ref{det-delta0} we must have $b\ge1$.
Thus, $\lambda\times\varepsilon^a\lambda^b=\lambda^2\varepsilon^a \lambda^{b-1}$. If $a\ge2$, then
$\lambda\times\varepsilon^a\lambda^b=\lambda^2\varepsilon^2 \varepsilon^{a-2} \lambda^{b-1}\lesssim
(\lambda^2+\delta^2) \times (\lambda^2+\varepsilon)^2= o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}. Otherwise, $a=0$ or $a=1$. If $a=0$, then $b\ge4$ and
$\lambda\times \varepsilon^a\lambda^b=\lambda\cdot \lambda^2\times\lambda^2\cdot\lambda^{b-4}
\lesssim\lambda\cdot (\lambda^2+\varepsilon)\times(\lambda^2+\delta^2)=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}. Now suppose $a=1$. Then, $b\ge2$ and we have
$\lambda\times\varepsilon^a\lambda^b=\lambda\cdot \lambda^2 \times\varepsilon \cdot\lambda^{b-2}
\lesssim \lambda\cdot (\lambda^2+\delta^2)\times(\lambda^2+\varepsilon) =
o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}.
\noindent (ii) Suppose now that $c\ge1$. If $b=0$, then
$\lambda\times \varepsilon^a\lambda^{b} (\delta^{2})^c= \lambda\delta^2 \varepsilon^a(\delta^2)^{c-1}$
with $2a+2c\ge4$, which is $o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}, by Proposition \ref{ld2}. Finally, if $b\ge1$, then $\lambda\times\varepsilon^a\lambda^b(\delta^2)^c= \lambda\delta^2\cdot \varepsilon^a\lambda^{b}(\delta^2)^{c-1}=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}.
{\bf (II) Terms of the form\ $\delta^2\times \varepsilon^a\lambda^{b} (\delta^{2})^c, \quad {\rm with}\
2a+b+2c\ge3,\ a,b,c\in\mathbb{N}$:}
\noindent (i) Suppose first that $a=0$. Then, $\delta^2\times \varepsilon^a\lambda^{b} (\delta^{2})^c=\delta^2\times \lambda^{b} (\delta^{2})^c,\ b+2c\ge3$. If $b\ge1$, then
we rewrite this as $\delta^2\lambda \times \lambda^{b-1} (\delta^{2})^c=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}, by Proposition \ref{ld2}. And if $b=0$, then
$\delta^2\times \varepsilon^a\lambda^{b} (\delta^{2})^c=(\delta^2)^{c+1}$, with $c\ge2$,
which is $\lesssim \delta^2\delta^2 (\delta^2)^{c-1}\lesssim\delta^2\varepsilon\cdot\varepsilon^3(\delta^2)^{c-1}
\lesssim(\lambda^2+\delta^2)(\lambda^2+\varepsilon)\cdot\varepsilon^3(\delta^2)^{c-1}=
o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}.
\noindent (ii) If now $a\ge1$, then $\delta^2\times \varepsilon^a\lambda^{b} (\delta^{2})^c=\delta^2\varepsilon\cdot \varepsilon^{a-1}\lambda^b(\delta^2)^c\le(\lambda^2+\delta^2)\cdot(\lambda^2+\varepsilon)\cdot \varepsilon^{a-1}\lambda^b(\delta^2)^c=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ for $(\lambda,\delta)$ in the region \eqref{smiley2}. This completes the proof of Proposition \ref{neglect}.
%
%
Now each entry in the $3\times3$ matrix, $M(\varepsilon,\delta,\lambda,0)$ is a sum of terms of weight $\ge1/2$; this is a consequence of the expansion \eqref{Msum}, \eqref{M0}, \eqref{M0-M0app}, \eqref{p-perp2} and the explicit expansion of $M^{approx}$ displayed in \eqref{M0app-expanded}. If we change any one entry by a term of weight $\ge3/2$, then the effect on the $3\times3$ determinant $D(\varepsilon,\delta^2,\lambda)$ will be a sum of terms of weight $\ge 3/2+1/2+1/2>2$. By Propositions \ref{ld2} and \ref{neglect}, such terms are $o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ in the region \eqref{smiley2}. Therefore, we may compute each entry of $M(\varepsilon,\delta,\lambda,0)$, retaining only terms of weight strictly smaller than $3/2$ and discarding the rest. The resulting determinant will differ from $D(\varepsilon,\delta^2,\lambda)$ by
terms which are $o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ in the region \eqref{smiley2}.
We next study the power series of the $3\times3$ matrix, $M(\varepsilon,\delta,\lambda,0)$, keeping in mind that the relevant monomials are those of weight $\ge1/2$ but strictly less than $3/2$. The complete list of such monomials is: $\varepsilon, \delta, \lambda, \lambda^2, \delta^2$ and $ \lambda\delta$.
Before proceeding further we show that, in fact, a monomial of type $\delta^2$ can be neglected. Indeed, the weight $\le2$ contributions of such a monomial to $D(\varepsilon,\delta^2,\lambda)=\det M(\varepsilon,\delta,\lambda,0)$ will be a sum of monomials of the type: (i) $\delta^2\times \delta\cdot\delta$, (ii) $ \delta^2\times \lambda\cdot\lambda$ and
(iii) $\delta^2\times \lambda\cdot\delta$. Terms of type (ii) and (iii) are clearly
$o(\ \lambda\delta^2 )=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ as $\varepsilon\to0$ for $(\lambda,\delta)$ in the region \eqref{smiley2}, by Proposition \ref{ld2}. For the type (i) term we have, for $(\lambda,\delta)$ in the region \eqref{smiley2},
$\delta^2\times \delta\cdot\delta\lesssim \delta^2\ \varepsilon\ \varepsilon\delta\lesssim (\lambda^2+\delta^2)\cdot (\lambda^2+\varepsilon)\ \varepsilon\delta=o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)$ as $\varepsilon\to0$. Hence, we may strike $\delta^2$ from our list. In particular, we may neglect the contribution from the matrix $M^{\mathcal{P}}$; see \eqref{MP-expanded}.
\noindent{\bf Relevant monomials in the expansion of
$M(\varepsilon,\delta,\lambda,0)$ :}\ We shall call the monomials: $\varepsilon, \delta, \lambda, \lambda^2$ and $ \lambda\delta$ \underline{relevant}. All others are called \underline{irrelevant}.
\noindent Stepping back, we have shown above that
$M=M^{0}+M^V+M^W+M^{\mathcal{P}}$ (see \eqref{Msum}),
where $M^{0}=M^{0,approx}+\mathcal{O}(\varepsilon^2)$ (see \eqref{M0-M0app}) and
$M^{\mathcal{P}}=\mathcal{O}(\varepsilon^2+\varepsilon\delta+\delta^2)$ (see \eqref{MP-expanded}). We have further shown that the relevant monomial contributions for the calculation of $\det M(\varepsilon,\delta,\lambda,0)$ are all contained in
$M^{approx}(\varepsilon,\delta,\lambda)\equiv M^{0,approx}(\varepsilon,\lambda)+M^V(\varepsilon)+M^W(\delta)$. We consider each of these matrices individually, and explicitly extract the relevant terms in each; see Proposition
\ref{Det-approx} below.
%
\noindent{\bf Expansion of $M^{0, approx}$:} The entries of $M^{0,approx}$ are
\begin{align}
&M^{0,approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0) \nonumber \\
&\quad \equiv\ \left\langle p_\sigma, \left(\ -\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2- [E_\star^0 + \varepsilon(V_{0,0}-V_{1,1})]\ \right) p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} \nonumber\\
&\quad = \left\langle p_\sigma,-\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2 p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}
-[E_\star^0 + \varepsilon(V_{0,0}-V_{1,1})]\ \delta_{\sigma,{\tilde{\sigma}}}\ ,
\label{M0approx1}\end{align}
where we have used that $\left\langle p_\sigma,p_{\tilde{\sigma}}\right\rangle=\delta_{\sigma,{\tilde{\sigma}}}$.
The first term in \eqref{M0approx1} may be written, using \eqref{p_sigma}-\eqref{orthon}, $E_\star^0=|{\bf K}|^2$ and $|{\bf k}_2|^2=q^2$ as:
\begin{align}
&\left\langle p_\sigma,-\left(\nabla+i[{\bf K}+\lambda{\bf k}_2] \right)^2 p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)} \label{psig-nablaK-tpsig}\\
&\quad = \left\langle p_\sigma,-(\nabla+i{\bf K})^2 p_{\tilde{\sigma}}\right\rangle
\ -\ 2i\lambda{\bf k}_2\cdot \left\langle p_\sigma,(\nabla+i{\bf K}) p_{\tilde{\sigma}}\right\rangle\
+\ \left\langle p_\sigma,\lambda^2q^2 p_{\tilde{\sigma}}\right\rangle\nonumber\\
&\quad = \left(E_\star^0\ +\ \lambda^2q^2 \right)\delta_{\sigma,{\tilde{\sigma}}} + \lambda\ J_{\sigma,{\tilde{\sigma}}} .\nonumber
\end{align}
Consider now the matrix
\begin{align}
J_{\sigma,{\tilde{\sigma}}} &= -2i{\bf k}_2\cdot\left\langle \Phi_\sigma,\nabla \Phi_{\tilde{\sigma}}\right\rangle_{L^2_{\bf K}} =-2i{\bf k}_2\cdot\int_\Omega\ \overline{\Phi_\sigma}\ \nabla\Phi_{\tilde{\sigma}} d{\bf x}\nonumber\\
&=2\ {\bf k}_2\cdot \frac13\left(I+\sigma\overline{{\tilde{\sigma}}}R+\overline{\sigma}{\tilde{\sigma}} R^2\right){\bf K} . \label{J-matrix}
\end{align}
We pause to collect some properties that will enable the evaluation of $J_{\sigma,{\tilde{\sigma}}}$;
see also \cites{FW:12}.
Recall that $R$ has eigenpairs: $(\tau,\ \zeta)$
and $(\overline{\tau},\ \overline{\zeta}),$ where $\zeta=\frac{1}{\sqrt2}(1,i)^T$. Then, $\overline{\tau}R$ has eigenpairs: $[1,\zeta]$ and $[\tau,\overline{\zeta}]$. Furthermore,
$\tau R$ has eigenpairs $[1,\overline{\zeta}]$ and $[\overline{\tau},\zeta]$, and
\begin{align*}
\frac13\left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]\zeta=\zeta,\ \
\ \left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]\overline{\zeta}=0, \nonumber\\
\frac13\left[\ I + \tau R+(\tau R)^2 \right]\overline{\zeta}=\overline{\zeta},\ \
\left[\ I +\tau R+(\tau R)^2 \right]\zeta=0 .
\end{align*}
Hence, $\frac13\left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]$ and $ \frac13\left[\ I + \tau R+(\tau R)^2 \right]$ are, respectively, projections onto $\rm{span}\{\ \zeta\ \}$ and $\rm{span}\{\ \overline{\zeta}\ \}$.
For any ${\bf w}\in\mathbb{C}^2$, we have ${\bf w}=\left\langle\zeta,{\bf w}\right\rangle_{\mathbb{C}^2}\zeta +
\left\langle\overline{\zeta},{\bf w}\right\rangle_{\mathbb{C}^2}\overline{\zeta}$, where
$\left\langle {\bf x},{\bf y}\right\rangle_{\mathbb{C}^2}=\overline{{\bf x}}\cdot{\bf y}$. Therefore
\begin{equation}
\frac13\left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]{\bf w} = \left\langle\zeta,{\bf w}\right\rangle\zeta,\quad\
\frac13\left[\ I + \tau R+(\tau R)^2 \right]{\bf w} = \left\langle\overline{\zeta},{\bf w}\right\rangle\overline{\zeta} .\label{Rprojections}
\end{equation}
Also, $(I-R)(I + R+R^2)=I-R^3=0$ and therefore
\begin{equation}
I + R + R^2\ =\ 0 .
\label{Rsum0}\end{equation}
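These projection identities are straightforward to confirm numerically. The following sketch adopts the convention $\tau=e^{2\pi i/3}$ and takes $R$ to be the planar rotation for which $R\zeta=\tau\zeta$ (the clockwise rotation by $2\pi/3$); the test vector is purely illustrative:

```python
import numpy as np

tau = np.exp(2j*np.pi/3)
zeta = np.array([1, 1j])/np.sqrt(2)
th = -2*np.pi/3                      # sign chosen so that R zeta = tau zeta
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])

# eigenpairs of R: (tau, zeta) and (conj(tau), conj(zeta))
assert np.allclose(R @ zeta, tau*zeta)
assert np.allclose(R @ zeta.conj(), np.conj(tau)*zeta.conj())

# (1/3)[I + conj(tau) R + (conj(tau) R)^2] projects onto span{zeta}:
A = np.conj(tau)*R
P = (np.eye(2) + A + A @ A)/3
w = np.array([0.7, -1.2])            # arbitrary test vector
assert np.allclose(P @ w, (zeta.conj() @ w)*zeta)

# and I + R + R^2 = 0
assert np.allclose(np.eye(2) + R + R @ R, 0)
```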
We next calculate $J_{\sigma,{\tilde{\sigma}}} $ using \eqref{Rprojections}-\eqref{Rsum0}.
Note that $J$ is Hermitian, and by
\eqref{Rsum0} its diagonal elements $J_{\sigma,\sigma} $ all vanish:
$ J_{\sigma,{\tilde{\sigma}}} \ =\ \overline{J_{{\tilde{\sigma}},\sigma}},\ J_{\sigma,\sigma}=0,\ \sigma=1,\tau,\overline{\tau}$ .
It suffices therefore to compute the three entries $J_{1,\tau},\ J_{1,\overline{\tau}}$ and
$J_{\tau,\overline{\tau}}$:
\begin{align*}
J_{1,\tau}=2\ \frac13\left[\ I +\overline{\tau}R+(\overline{\tau}R)^2 \right]{\bf K}\cdot{\bf k}_2=
2\ (\overline{\zeta}\cdot{\bf K})\ (\zeta\cdot{\bf k}_2)\equiv \ \alpha , \\
J_{1,\overline\tau}=2\ \frac13\left[\ I +\tau R+(\tau R)^2 \right]{\bf K}\cdot{\bf k}_2=
2\ (\zeta\cdot{\bf K})\ (\overline{\zeta}\cdot{\bf k}_2)=\ \overline{\alpha} , \\
J_{\tau,\overline\tau}=2\ \frac13\left[\ I +\overline\tau R+(\overline\tau R)^2 \right]{\bf K}\cdot{\bf k}_2= 2\ (\overline\zeta\cdot{\bf K})\ (\zeta\cdot{\bf k}_2)=\ \alpha .
\end{align*}
Thus,
\begin{equation}
J\ =\ \left(\ J_{\sigma,{\tilde{\sigma}}}\ \right)\ =\
\begin{pmatrix}
0&\alpha&\overline{\alpha}\\
\overline{\alpha}&0&\alpha\\
\alpha&\overline{\alpha}&0
\end{pmatrix},\ \ \ {\rm where}\ \ \alpha=2\ (\overline{\zeta}\cdot{\bf K})\ (\zeta\cdot{\bf k}_2) .
\label{J-matrix-computed}
\end{equation}
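As a consistency check on \eqref{J-matrix} and \eqref{J-matrix-computed}, the entries $J_{\sigma,{\tilde{\sigma}}}$ may be evaluated for generic real vectors ${\bf K}$ and ${\bf k}_2$; the algebraic structure of $J$ does not depend on the specific honeycomb values of these vectors. A sketch with randomly chosen vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = np.exp(2j*np.pi/3)
zeta = np.array([1, 1j])/np.sqrt(2)
th = -2*np.pi/3                        # convention: R zeta = tau zeta
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

K, k2 = rng.standard_normal(2), rng.standard_normal(2)   # generic real vectors
alpha = 2*(zeta.conj() @ K)*(zeta @ k2)

def J_entry(s, t):
    # J_{s,t} = 2 k2 . (1/3)(I + s conj(t) R + conj(s) t R^2) K
    P = (np.eye(2) + s*np.conj(t)*R + np.conj(s)*t*(R @ R))/3
    return 2*k2 @ (P @ K)

sig = (1, tau, np.conj(tau))
J = np.array([[J_entry(s, t) for t in sig] for s in sig])
assert np.allclose(np.diag(J), 0)      # I + R + R^2 = 0
assert np.allclose(J, J.conj().T)      # Hermitian
assert np.allclose(J[0, 1], alpha) and np.allclose(J[1, 2], alpha)
assert np.allclose(J[0, 2], np.conj(alpha))
```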
It follows that
\begin{equation}
M^{0,approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)
=\ \left(\ - \varepsilon(V_{0,0}-V_{1,1}) + \lambda^2q^2 \ \right)\delta_{\sigma,{\tilde{\sigma}}}\
+\ \lambda\ J_{\sigma,{\tilde{\sigma}}} . \label{M0app-expanded}
\end{equation}
\noindent{\bf Expansion of $ M^V(\varepsilon)$:}
$M_{\sigma,{\tilde{\sigma}}} ^V(\varepsilon)= \varepsilon\ \left\langle p_\sigma, V\ p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\ =\ \varepsilon\ \mathcal{V}_{\sigma,{\tilde{\sigma}}},$
where
\begin{align}
\mathcal{V}_{\sigma,{\tilde{\sigma}}}& = \left\langle p_\sigma, V p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\nonumber\\
&= \frac{1}{3} \Big[ (1+\sigma \overline{{\tilde{\sigma}}}+\overline\sigma {\tilde{\sigma}}) V_{0,0}\ +\ \sigma V_{0,1}
\ +\ \overline{\tilde{\sigma}} V_{0,-1}\ +\ {\tilde{\sigma}} V_{1,0}\ \nonumber\\
&\qquad\qquad + \ \overline{\sigma} V_{-1,0} \ +\ \sigma{\tilde{\sigma}} V_{1,1}\ +\ \overline{\sigma {\tilde{\sigma}}} V_{-1,-1} \Big] \nonumber .
\end{align}
Since $V$ is real-valued and even, it follows that $V_{-{\bf m}}=V_{{\bf m}}$. Furthermore, $V$ is also $R-$ invariant and therefore $V_{0,1}=V_{1,0}=V_{1,1}$. Hence
%
\begin{align*}
\mathcal{V}_{\sigma,{\tilde{\sigma}}}&= \frac13
(1+\sigma\ \overline{{\tilde{\sigma}}}+\overline\sigma\ {\tilde{\sigma}})\ V_{0,0}\ +\ \frac13
(\sigma+\overline\sigma + {\tilde{\sigma}}+ \overline{\tilde{\sigma}} + \sigma{\tilde{\sigma}}+ \overline{\sigma\ {\tilde{\sigma}}})\ V_{1,1} .
\end{align*}
$\mathcal{V}$ is clearly symmetric and using that $1+\tau+\tau^2=1+\tau+\overline\tau=0$, we obtain $M^V(\varepsilon)=\varepsilon\ \mathcal{V}$, where
\begin{equation*}
\mathcal{V}\ =\
\begin{pmatrix}
V_{0,0}\ +\ 2\ V_{1,1}&0&0\\
0&V_{0,0}-V_{1,1}&0\\
0&0&V_{0,0}-V_{1,1}
\end{pmatrix} .
\end{equation*}
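The diagonalization of $\mathcal{V}$ rests only on the identity $1+\tau+\overline\tau=0$; a quick numerical sketch with illustrative Fourier coefficients $V_{0,0}$, $V_{1,1}$:

```python
import numpy as np

tau = np.exp(2j*np.pi/3)
sig = (1, tau, np.conj(tau))
V00, V11 = 0.7, -0.3                  # illustrative real Fourier coefficients

def Ventry(s, t):
    return ((1 + s*np.conj(t) + np.conj(s)*t)*V00
            + (s + np.conj(s) + t + np.conj(t) + s*t + np.conj(s*t))*V11)/3

V = np.array([[Ventry(s, t) for t in sig] for s in sig])
expected = np.diag([V00 + 2*V11, V00 - V11, V00 - V11])
assert np.allclose(V, expected)       # diagonal, by 1 + tau + conj(tau) = 0
```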
\noindent{\bf Expansion of $M^W(\delta)$:}
$M^W_{\sigma,{\tilde{\sigma}}}(\delta) = \delta\ \left\langle p_\sigma, W\ p_{\tilde{\sigma}}\right\rangle_{L^2(\mathbb{R}^2/\Lambda_h)}\ =\ \delta\ \mathcal{W}_{\sigma,{\tilde{\sigma}}},$
where
\begin{equation}
\label{MW-step1}
\mathcal{W}_{\sigma,{\tilde{\sigma}}} =
\frac13\Big[\sigma W_{0,1}\ +\ \overline{\tilde{\sigma}} W_{0,-1}\ +\ \overline\sigma W_{-1,0}
+{\tilde{\sigma}} W_{1,0}\ +\ \sigma {\tilde{\sigma}} W_{1,1}\ +\ \overline{\sigma {\tilde{\sigma}}} W_{-1,-1} \Big] .
\end{equation}
Since $W$ is real and odd, we have that
$W_{-{\bf m}}= -W_{{\bf m}}$ and $W_{\bf m}$ is purely imaginary.
Therefore,
\begin{equation*}
\mathcal{W}_{\sigma,{\tilde{\sigma}}} = \ \frac{1}3\
\ \Big[\ (\sigma-\overline{\tilde{\sigma}})\ {W}_{0,1}\ +\ ({\tilde{\sigma}}-\overline\sigma)\ {W}_{1,0}\
+\ (\sigma\ {\tilde{\sigma}}\ -\ \overline{\sigma\ {\tilde{\sigma}}})\ {W}_{1,1}\ \Big] \ ,\ \sigma,{\tilde{\sigma}}=1,\tau,\overline{\tau}.
\end{equation*}
It follows that $M^W(\delta)=\delta\ \mathcal{W}$, where
\begin{equation*}
\mathcal{W} = w_{01} \begin{pmatrix} 0& \tau & -\overline{\tau}\\ \overline{\tau}&-1&0\\ -\tau&0&1\end{pmatrix} +
w_{10} \begin{pmatrix} 0& \overline{\tau}& -\tau\\ \tau &-1&0\\ -\overline{\tau}&0&1\end{pmatrix} +
w_{11} \begin{pmatrix} 0&-1&1\\-1&1&0\\ 1&0&-1\end{pmatrix} ,
\end{equation*}
and $w_{ij}\equiv -i\ {W}_{i,j} /\sqrt3 \in \mathbb{R}$.
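The decomposition of $\mathcal{W}$ into the three matrices above can likewise be checked numerically, with illustrative purely imaginary coefficients $W_{0,1},W_{1,0},W_{1,1}$ and the convention $\tau=e^{2\pi i/3}$:

```python
import numpy as np

tau = np.exp(2j*np.pi/3)
sig = (1, tau, np.conj(tau))
W01, W10, W11 = 0.4j, -1.1j, 0.9j     # purely imaginary (illustrative)

def Wentry(s, t):
    return ((s - np.conj(t))*W01 + (t - np.conj(s))*W10
            + (s*t - np.conj(s*t))*W11)/3

W = np.array([[Wentry(s, t) for t in sig] for s in sig])
w01, w10, w11 = (-1j*x/np.sqrt(3) for x in (W01, W10, W11))
M1 = np.array([[0, tau, -np.conj(tau)], [np.conj(tau), -1, 0], [-tau, 0, 1]])
M2 = np.array([[0, np.conj(tau), -tau], [tau, -1, 0], [-np.conj(tau), 0, 1]])
M3 = np.array([[0, -1, 1], [-1, 1, 0], [1, 0, -1]])
assert np.allclose(W, w01*M1 + w10*M2 + w11*M3)
assert np.allclose(W, W.conj().T)     # W is Hermitian
```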
Now, assembling all relevant terms (those of weight $\ge1/2$ and less than $3/2$), we obtain
\begin{proposition}\label{Det-approx} For $\sigma,{\tilde{\sigma}} = 1,\tau,\overline{\tau} $,
\begin{align*}
M_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0) &\approx\ M^{approx}_{\sigma,{\tilde{\sigma}}}(\varepsilon,\delta,\lambda,0)\nonumber\\
&\approx\ \left(\ - \varepsilon(V_{0,0}-V_{1,1}) + \lambda^2q^2 \ \right)\delta_{\sigma,{\tilde{\sigma}}}\
+\ \lambda\ J_{\sigma,{\tilde{\sigma}}}\ + \varepsilon\ \mathcal{V}_{\sigma,{\tilde{\sigma}}}\ +\
\delta\ \mathcal{W}_{\sigma,{\tilde{\sigma}}}.
\end{align*}
Here, $A_{\sigma,{\tilde{\sigma}}}\approx B_{\sigma,{\tilde{\sigma}}}$ means that their difference is a matrix whose entries have weight $\ge 3/2$. The contribution of such terms to the determinant therefore consists of terms of weight strictly larger than $2$, for $(\lambda,\delta)$ in the region \eqref{smiley2}, and these terms can be neglected by Proposition \ref{neglect}.
\end{proposition}
\noindent So the calculation of $\det M(\varepsilon,\delta,\lambda,0)$ boils down to the calculation of
$\det M^{approx}(\varepsilon,\delta,\lambda,0)$.
\noindent {\bf Calculation of $\det M^{approx}(\varepsilon,\delta,\lambda,0)$:}
Assembling the above computations, we have that
{\small
\begin{equation*}
M^{approx} =
\begin{pmatrix}
\lambda^2q^2 + 3 \varepsilon V_{1,1} &
\alpha \lambda - \delta\widetilde{w} &
\overline{\alpha} \lambda + \delta\overline{\widetilde{w}} \\
\overline{\alpha} \lambda - \delta\overline{\widetilde{w}} &
\lambda^2q^2 - \delta (w_{01} +w_{10} - w_{11}) &
\alpha \lambda \\
\alpha \lambda + \delta\widetilde{w} &
\overline{\alpha} \lambda &
\lambda^2q^2 + \delta (w_{01} +w_{10} - w_{11}) \\
\end{pmatrix} ,
\end{equation*}}
where
$\widetilde{w} = w_{11} -w_{01}\tau -w_{10}\overline{\tau}$ and $\alpha=2(\overline\zeta\cdot{\bf K})\ (\zeta\cdot{\bf k}_2)$. Note that:
\begin{equation}
\label{alpha-quantities}
\alpha=\frac{q^2}{\sqrt3}\ i\tau,\ \ \Re(\alpha)=-\frac{q^2}2,\ \ \Re(\alpha^3)=0 .
\end{equation}
Calculating the determinant of $M^{approx}$, and using \eqref{alpha-quantities} and that
$w_{ij}= -i W_{i,j}/\sqrt3$ yields:
\begin{align}
&\det M^{approx} (\varepsilon, \delta, \lambda, 0) \ =\
-\left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + 3\delta^2 (w_{01}^2+w_{10}^2+w_{11}^2) \right) \nonumber \\
&\qquad +6 \varepsilon V_{1,1} \delta^2 (w_{11}w_{01}+w_{10}w_{11}-w_{01}w_{10})
+\ \mathcal{O}( \lambda \delta^2) + \mathcal{O}(\varepsilon\lambda^4) + \mathcal{O}(\lambda^6) \nonumber \\
&\quad\ = -\left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left(\ q^4\lambda^2 + 3\delta^2 (w_{01}+w_{10}-w_{11})^2\ \right) \nonumber \\
&\qquad \ \ \
+\ \mathcal{O}( \lambda^2\delta^2)\ +\ \mathcal{O}( \lambda \delta^2) + \mathcal{O}(\varepsilon\lambda^4) + \mathcal{O}(\lambda^6)\ ,
\nonumber\\
&\quad\ =\ -\left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + \delta^2 \left| W_{0,1}+W_{1,0}-W_{1,1}\ \right|^2 \right) \nonumber \\
&\qquad\ \
+\ \mathcal{O}( \lambda^2\delta^2)\ +\ \mathcal{O}( \lambda \delta^2) + \mathcal{O}(\varepsilon\lambda^4) + \mathcal{O}(\lambda^6)\nonumber\\
&\quad\ =\ -\pi(\varepsilon,\delta^2,\lambda)\ +\ o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right) ,
\label{detMapprox2}
\end{align}
for $(\lambda,\delta)$ in the region \eqref{smiley2}.
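The expansion \eqref{detMapprox2} can be probed numerically: building $M^{approx}$ from the displayed matrix with illustrative values of $q$, $V_{1,1}$ and purely imaginary $W_{i,j}$ (all chosen here for illustration only, with $\alpha=q^2 i\tau/\sqrt3$ as in \eqref{alpha-quantities}), the ratio $\det M^{approx}/(-\pi)$ tends to $1$ along the scaling $\varepsilon=t$, $\delta=t$, $\lambda=\sqrt t$:

```python
import numpy as np

tau = np.exp(2j*np.pi/3)
q, V11 = 1.3, 0.8                      # illustrative values
W01, W10, W11 = 0.4j, -1.1j, 0.9j      # purely imaginary (illustrative)
w01, w10, w11 = (-1j*x/np.sqrt(3) for x in (W01, W10, W11))
alpha = q**2*1j*tau/np.sqrt(3)         # see (alpha-quantities)
wt = w11 - w01*tau - w10*np.conj(tau)
d0 = w01 + w10 - w11

def detM(eps, dlt, lam):
    a, ac = alpha*lam, np.conj(alpha)*lam
    M = np.array([
        [lam**2*q**2 + 3*eps*V11, a - dlt*wt,           ac + dlt*np.conj(wt)],
        [ac - dlt*np.conj(wt),    lam**2*q**2 - dlt*d0, a],
        [a + dlt*wt,              ac,                   lam**2*q**2 + dlt*d0]])
    return np.linalg.det(M).real       # M is Hermitian, so det is real

def pi_poly(eps, dlt, lam):
    return (q**2*lam**2 + eps*V11)*(q**4*lam**2 + dlt**2*abs(W01 + W10 - W11)**2)

for t in (1e-3, 1e-5):
    ratio = detM(t, t, np.sqrt(t))/(-pi_poly(t, t, np.sqrt(t)))
    assert abs(ratio - 1) < 10*np.sqrt(t)   # corrections vanish relative to the leading term
```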
This completes the proof of Proposition \ref{detM-expansion}.
\subsection{If $\varepsilon V_{1,1}<0$, the zigzag slice does not satisfy the no-fold condition}\label{no-directionalgap}
Recall that in the case $\varepsilon V_{1,1}<0$ we may assume, without loss of generality, that $\varepsilon>0$ and $V_{1,1}<0$. Theorem \ref{NO-directional-gap!} follows from:
\begin{proposition}\label{Deq0}
Assume
\begin{equation}
0<|\varepsilon|<\varepsilon_2,\ \ {\rm and}\ \ 0\le\delta\le c_\flat\ \varepsilon^2.
\label{smiley4}
\end{equation}
There exists $\theta_0>0$ such that, for all $\varepsilon$ sufficiently small, there is $\lambda_\varepsilon>0$ satisfying $\varepsilon<\lambda_{\varepsilon}<\theta_0\sqrt{\varepsilon}$ for which
\begin{equation*}
\det M(\varepsilon,\delta,\lambda_\varepsilon,\mu=0) = 0.
\end{equation*}
Thus, $E_\star^\varepsilon$ is an interior point of the $L^2_{k_\parallel}(\Sigma)-$ spectrum of $H^{(\varepsilon,\delta)}$.
\end{proposition}
\noindent It follows that for $\varepsilon V_{1,1}<0$, the operator $H^{(\varepsilon,\delta)}$ does not have a spectral gap about $E=E_\star^\varepsilon$ along the zigzag slice. Referring to the middle panel of Figure \ref{fig:eps_V11_neg}, we see that, for $\delta\ne0$ and small, a gap, {\it local in $\lambda$}, opens about the energy $E=E_\star^\varepsilon$. But since the no-fold property is not satisfied (by Proposition \ref{Deq0}), this is not a true (global in $\lambda\in[-1/2,1/2]$) spectral gap.
\begin{proof}[Proof of Proposition \ref{Deq0}]
Let $C_\flat$ denote the constant in Proposition
\ref{detM-expansion}. In the proof of Proposition \ref{detM-expansion}, $C_\flat$ was chosen sufficiently large, and it may be taken so that $C_\flat>\theta_0$, where $\theta_0$ is defined by $\theta_0^2=2|V_{1,1}|/q^2$. Also, choose a constant $\zeta_0$ such that $\zeta_0^2= |V_{1,1}|/2q^2$. Note that $\zeta_0<\theta_0$;
the reason for these choices will become clear below.
For $(\lambda,\delta)$ in the region \eqref{smiley4} we have:
%
\begin{equation}-\det M(\varepsilon,\delta,\lambda,0) \ =\ \pi(\varepsilon,\delta^2,\lambda) + o\left( (\lambda^2+\eps)(\lambda^2+\delta^2) \right)=\pi(\varepsilon,\delta^2,\lambda) + o\left( \varepsilon^2 \right) , \label{-detMonsmiley}
\end{equation}
where
\begin{align*}
\pi(\varepsilon,\delta^2,\lambda)&\equiv \left(q^2\lambda^2 + \varepsilon V_{1,1}\right) \left( q^4\lambda^2 + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right).
\end{align*}
We now show that there exists $\lambda^{\varepsilon,\delta}\in (\zeta_0\sqrt\varepsilon,\theta_0\sqrt{\varepsilon})$ such that
$\pi(\varepsilon,\delta^2,\lambda^{\varepsilon,\delta}) = 0$.
Note first that $\varepsilon V_{1,1}<0$, $\varepsilon^2\ll\varepsilon$ and the choice of $\zeta_0$ together imply, upon evaluation of $\pi(\varepsilon,\delta^2,\lambda)$ at $\lambda=\zeta_0\sqrt\varepsilon$, that:
\begin{align*}
\pi(\varepsilon,\delta^2,\zeta_0\sqrt\varepsilon)
& =\left(q^2\zeta_0^2\varepsilon + \varepsilon V_{1,1}\right) \left( q^4\zeta_0^2\varepsilon + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right)<0
\end{align*}
and by \eqref{-detMonsmiley} $-\det M(\varepsilon,\delta,\zeta_0\sqrt\varepsilon,0)<0$.
On the other hand, the choice of $\theta_0$ implies, upon evaluation at $\lambda=\theta_0\sqrt{\varepsilon}$ that:
\begin{align*}
\pi(\varepsilon,\delta^2,\theta_0\sqrt{\varepsilon}) &=
\left(q^2\theta_0^2\varepsilon + \varepsilon V_{1,1}\right) \left( q^4\theta_0^2\varepsilon + \delta^2\ \left|W_{0,1}+W_{1,0}-W_{1,1} \right|^2 \right)>0
\end{align*}
and hence, by \eqref{-detMonsmiley} $-\det M(\varepsilon,\delta,\theta_0\sqrt\varepsilon,0)>0$.
Now $\det M(\varepsilon,\delta,\lambda,0)$ is, for all $0<\varepsilon<\varepsilon_1$, a continuous function of $\lambda$. Hence, there exists $\lambda^{\varepsilon,\delta}\in (\zeta_0\sqrt\varepsilon,\theta_0\sqrt{\varepsilon})$ such that $\det M(\varepsilon,\delta,\lambda^{\varepsilon,\delta},\mu=0)=0$. Therefore, $E^{\varepsilon,\delta}(\lambda^{\varepsilon,\delta})=E_\star^\varepsilon
\in\ L^2_{{\kpar=2\pi/3}}-\ {\rm spec}(H^{(\varepsilon,\delta)})$.
This completes the proof of Proposition \ref{Deq0}.
\end{proof}
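The intermediate value argument above is elementary to reproduce numerically; in the sketch below, the values of $q$, $V_{1,1}$ (with $\varepsilon V_{1,1}<0$), $\delta$ and $|W_{0,1}+W_{1,0}-W_{1,1}|^2$ are illustrative:

```python
import numpy as np

q, V11 = 1.3, -0.8                     # illustrative; eps*V11 < 0 with eps > 0
eps, dlt = 1e-3, 1e-7                  # dlt of size O(eps^2) (illustrative)
Wsum2 = 2.5                            # |W01 + W10 - W11|^2 (illustrative)
theta0 = np.sqrt(2*abs(V11))/q
zeta0 = np.sqrt(abs(V11)/2)/q

def pi_(lam):
    return (q**2*lam**2 + eps*V11)*(q**4*lam**2 + dlt**2*Wsum2)

lo, hi = zeta0*np.sqrt(eps), theta0*np.sqrt(eps)
assert pi_(lo) < 0 < pi_(hi)           # sign change => a root in (lo, hi)
for _ in range(80):                    # bisection for the root lambda_eps
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if pi_(mid) < 0 else (lo, mid)
lam_eps = 0.5*(lo + hi)
assert zeta0*np.sqrt(eps) < lam_eps < theta0*np.sqrt(eps)
assert abs(pi_(lam_eps)) < 1e-12
```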
\section{Introduction}
Chiral multifold fermions have received much attention in condensed matter research in recent years \cite{armitage18rmp,bradlyn16sc,isobe16prb,tang17prl,boettcher20prl,rao19n,sanchez19n,schroter19np,takane19prl,lv19prb,Schroter20sc}. These fermions possess a topologically protected degenerate point in momentum space (Fig.~\ref{fig:cmp}), with the low-energy theory exhibiting an effective spin-momentum locking. Such a structure generically hosts nontrivial topological features in the eigenstates, which originate from the Berry monopole at the degenerate point. Chiral multifold fermions potentially possess a variety of novel characteristics, including exotic superconductivity with unconventional pairing and/or dramatic enhancement from flat bands \cite{lin18prb,sim19ax,lin20prr,link20prb}, as well as novel optical responses \cite{flicker18prb,sanchezmartinez19prb}. The study of materials with prototypical spin-$1/2$ chiral multifold fermions, known as Weyl semimetals, has developed into one of the mainstream topics of modern condensed matter research over the past decade \cite{armitage18rmp}. Meanwhile, realizations with higher spins, including spin-$1$ and $3/2$ fermions, have also been proposed and uncovered in solid-state materials \cite{bradlyn16sc,isobe16prb,tang17prl,rao19n,sanchez19n,schroter19np,takane19prl,lv19prb,Schroter20sc}. Moreover, the realm of higher-spin systems can potentially also be explored with ultracold atomic systems through the coupling of atomic hyperfine states \cite{hu18prl,zhu17pra}.
Compared to the extensively studied topological properties of eigenstates, the `geometric' aspects have not been explored in depth. For an eigenstate dependent on a set of adiabatic parameters, the variation in parameter space manifests in both the phase and the state \cite{berry89}. The phase variation is known as the Berry phase \cite{berry84rspa}, which contributes to the topological properties of the eigenstate \cite{xiao2010rmp}. Meanwhile, the state variation reflects the `quantum distance' between the initial and final states under the parameter change, and is captured by the quantum metric \cite{provost80cmp,page87pra,anandan90prl}. Such a feature can be experimentally probed \cite{bleu18prb,ozawa18prb,asteria19np,klees20prl,yu19nsr,tan19prl,gianfrate20n} by, for example, a periodic drive measurement. The quantum metric manifests in a variety of contexts, including the finite spread of Wannier functions \cite{marzari97prb,matsuura10prb,marzari12rmp}, the anomalous superfluid stiffness on flat bands \cite{peotta15nc,liang17prb,hu19prl,xie20prl}, the realization of fractional Chern insulators \cite{roy14prb,claassen15prl}, the geometric contribution to orbital susceptibility \cite{gao15prb,piechon16prb}, the current noise \cite{neupert13prb}, the indication of phase transition \cite{zanardi07prl,ma10prb,kolodrubetz13prb}, and the measurement of tensor monopoles \cite{palumbo18prl}. Despite the broad range of proposed applications, the inherent properties of the quantum metric have not been investigated thoroughly.
\begin{figure}[b]
\centering
\includegraphics[scale = 1]{FigBand.pdf}
\caption{\label{fig:cmp} Energy dispersions of chiral multifold fermions with spins $s=1/2$ (left), $1$ (center), and $3/2$ (right).}
\end{figure}
In this Letter, we aim to understand the inherent properties of quantum metric for the chiral multifold fermions. We show that a dual Haldane sphere \cite{haldane83prl} problem emerges in the computation of the trace of quantum metric owing to the Berry monopole in momentum space. A quantized geometric invariant can be defined through a surface integration, and together with the Chern number it establishes a sum rule. We further demonstrate the potential manifestations of such quantized band geometry in the measurable physical observables. A lower bound is derived for the finite spread of Wannier functions, which can trigger anomalous phase coherence in the flat band superconductivity. We briefly comment on the stability of these results under perturbations. Potential probes of quantum metric in the experimental systems are also discussed.
We begin by introducing a minimal model of chiral multifold fermions (CMF) in three-dimensional (3D) systems. For a spin-$s$ fermion with integer or half-integer spin $s=1/2,1,3/2,\dots$, the Hamiltonian reads
\begin{equation}
\label{eq:ham0}
\mathcal H_{\text{CMF},\mathbf k}=v\mathbf k\cdot\mathbf S.
\end{equation}
Here $v$ is the effective velocity, $\mathbf k=k\mathbf{\hat k}$ denotes the momentum with magnitude $k$ along the direction of the unit vector $\mathbf{\hat k}$, and $\mathbf S$ represents the vector of spin-$s$ operators $S^a$ with $a=x,y,z$. The Hamiltonian exhibits $2s+1$ eigenstates $\ket{u^{sn}_{\mathbf k}}$ with energies $\varepsilon^{sn}_{\mathbf k}=vkn$, $n=-s,-s+1,\dots,s$ (Fig.~\ref{fig:cmp}). These eigenstates are nondegenerate except at the degenerate point $\mathbf k=\mbf0$. Nontrivial topological properties are generically manifest in these eigenstates. For the $n$-th eigenstate $\ket{u^{sn}_{\mathbf k}}$, the calculation of the Berry flux $\mathbf B^{sn}_{\mathbf k}=i\sum_{m\neq n}\braket{u^{sm}_{\mathbf k}}{\boldsymbol\nabla_{\mathbf k}\mathcal H_{\text{CMF},\mathbf k}}{u^{sn}_{\mathbf k}}^*\times\braket{u^{sm}_{\mathbf k}}{\boldsymbol\nabla_{\mathbf k}\mathcal H_{\text{CMF},\mathbf k}}{u^{sn}_{\mathbf k}}/(\varepsilon^{sm}_{\mathbf k}-\varepsilon^{sn}_{\mathbf k})^2$ yields $\mathbf B^{sn}_{\mathbf k}=-(n/k^2)\mathbf{\hat k}$ \cite{berry84rspa}. An integration over any closed surface around the degenerate point leads to an integer Chern number $C^{sn}=(1/2\pi)\oint d\mathbf S_{\mathbf k}\cdot\mathbf B^{sn}_{\mathbf k}=-2n$ \cite{xiao2010rmp}. The integer Chern number is a topological invariant of the eigenstate $\ket{u^{sn}_{\mathbf k}}$. Such an integer corresponds to a quantized monopole charge $q^{sn}=C^{sn}/2=-n$ of the Berry flux $\mathbf B^{sn}_{\mathbf k}$ (Fig.~\ref{fig:magmnp}), which is a momentum-space analogue of Dirac's quantized magnetic monopole \cite{dirac31rspa}. The wavefunction of the eigenstate $\ket{u^{sn}_{\mathbf k}}$ can be identified based on symmetry. Since the Hamiltonian (\ref{eq:ham0}) manifests a spin-orbit-coupled rotation symmetry, the wavefunction follows the same symmetry and takes the form $\ket{u^{sn}_{\mathbf k}}=\sqrt{4\pi}\sum_{m=-s}^s\innp{ss;-mm}{00}Y_{-q^{sn}s-m}(\mathbf{\hat k})\ket{sm}$.
Here $\innp{ls;m_lm_s}{jm_j}$ is the Clebsch-Gordan coefficient with orbital, spin, and total angular momenta $l,s,j$ and corresponding axial components $m_{l,s,j}$. $Y_{qlm}(\mathbf{\hat k})$ denotes the monopole harmonics in the monopole-$q$ sector \cite{wu76npb,wu77prd}, and $\ket{sm}$ stands for the $S^z$ eigenstate. Given $\innp{ss;-mm}{00}=(-1)^{m+s}/\sqrt{2s+1}$, the wavefunction can be expressed in the more convenient form $\ket{u^{sn}_{\mathbf k}}=\sqrt{4\pi/(2s+1)}\sum_{m=-s}^sY_{q^{sn}sm}^*(\mathbf{\hat k})\ket{sm}$. With the wavefunction in hand, the Berry connection $\mathbf A^{sn}_{\mathbf k}=\braket{u^{sn}_{\mathbf k}}{i\boldsymbol\nabla_{\mathbf k}}{u^{sn}_{\mathbf k}}$ can be derived, yielding a Berry flux $\mathbf B^{sn}_{\mathbf k}=\boldsymbol\nabla_{\mathbf k}\times\mathbf A^{sn}_{\mathbf k}$ and Chern number $C^{sn}=2q^{sn}=-2n$ consistent with the previous discussion.
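The monopole structure of the Berry flux can be verified numerically from the band projectors of the model (\ref{eq:ham0}) (with $v=1$). The sketch below uses the gauge-invariant formula $\Omega_{ab}=i\,\mathrm{Tr}[P[\partial_{k_a}P,\partial_{k_b}P]]$ for a rank-1 band projector $P$, and checks $\mathbf B^{sn}_{\mathbf k}=-(n/k^2)\mathbf{\hat k}$ for spin $s=1$ at an arbitrary illustrative momentum:

```python
import numpy as np

def spin_matrices(s):
    # spin-s operators S^x, S^y, S^z in the S^z eigenbasis (m = s, ..., -s)
    d = int(2*s + 1)
    m = np.arange(s, -s - 1, -1)
    Sp = np.zeros((d, d), dtype=complex)
    for i in range(1, d):
        Sp[i-1, i] = np.sqrt(s*(s+1) - m[i]*(m[i]+1))   # <m+1|S+|m>
    return (Sp + Sp.conj().T)/2, (Sp - Sp.conj().T)/2j, np.diag(m).astype(complex)

def projector(k, S, idx):
    # band projector |u><u| of H = k.S; eigh sorts bands as n = -s, ..., s
    _, v = np.linalg.eigh(k[0]*S[0] + k[1]*S[1] + k[2]*S[2])
    u = v[:, idx]
    return np.outer(u, u.conj())

def berry_flux(k, S, idx, h=1e-5):
    # Omega_ab = i Tr[P [d_aP, d_bP]];  B = (Omega_yz, Omega_zx, Omega_xy)
    P = projector(k, S, idx)
    dP = []
    for a in range(3):
        dk = np.zeros(3); dk[a] = h
        dP.append((projector(k + dk, S, idx) - projector(k - dk, S, idx))/(2*h))
    Om = lambda a, b: (1j*np.trace(P @ (dP[a] @ dP[b] - dP[b] @ dP[a]))).real
    return np.array([Om(1, 2), Om(2, 0), Om(0, 1)])

s = 1.0
S = spin_matrices(s)
k = np.array([0.6, -0.2, 0.7]); k2 = k @ k
for idx, n in enumerate((-1.0, 0.0, 1.0)):
    assert np.allclose(berry_flux(k, S, idx), -(n/k2)*k/np.sqrt(k2), atol=1e-4)
```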
\begin{figure}[t]
\centering
\includegraphics[scale = 0.2]{MagneticMonopole.pdf}
\caption{\label{fig:magmnp} A monopole emits radial fluxes which are perpendicular to the spherical shells around it. Such configuration illustrates the Berry monopole charge and Berry fluxes for the chiral multifold fermions, as well as the Dirac magnetic monopole and magnetic fields in the Haldane sphere.}
\end{figure}
For the eigenstate $\ket{u^{sn}_{\mathbf k}}$ of the chiral multifold fermion model (\ref{eq:ham0}), the variation under momentum change is measured by the quantum geometric tensor $T^{sn}_{ab\mathbf k}=\braket{\partial_{k_a}u^{sn}_{\mathbf k}}{(1-\ket{u^{sn}_{\mathbf k}}\bra{u^{sn}_{\mathbf k}})}{\partial_{k_b}u^{sn}_{\mathbf k}}$ \cite{berry89}. While the imaginary part of the tensor corresponds to the Berry flux $B^{sn}_{a\mathbf k}=-\varepsilon_{abc}\text{Im}[T^{sn}_{bc\mathbf k}]$, our interest lies in the real part $g^{sn}_{ab\mathbf k}=\text{Re}[T^{sn}_{ab\mathbf k}]$, known as the quantum or Fubini-Study metric \cite{provost80cmp,page87pra,anandan90prl}. This quantum metric takes the form
\begin{equation}\begin{aligned}
\label{eq:qm}
g^{sn}_{ab\mathbf k}
&=\frac{1}{2}(\innp{\partial_{k_a}u^{sn}_{\mathbf k}}{\partial_{k_b}u^{sn}_{\mathbf k}}+\innp{\partial_{k_b}u^{sn}_{\mathbf k}}{\partial_{k_a}u^{sn}_{\mathbf k}})\\
&\quad+\innp{u^{sn}_{\mathbf k}}{\partial_{k_a}u^{sn}_{\mathbf k}}\innp{u^{sn}_{\mathbf k}}{\partial_{k_b}u^{sn}_{\mathbf k}}
\end{aligned}\end{equation}
and measures the `quantum distance' $1-|\innp{u^{sn}_{\mathbf k}}{u^{sn}_{\mathbf k+d\mathbf k}}|^2=g^{sn}_{ab\mathbf k}dk_adk_b$ in the Hilbert space. Remarkably, we discover a `quantized' trace
\begin{equation}
\label{eq:trqm}
\mathrm{Tr} g^{sn}_{\mathbf k}=\frac{1}{k^2}[s(s+1)-(q^{sn})^2]
\end{equation}
for the quantum metric of chiral multifold fermions. This is the main result of this Letter. The quantization is determined by both the angular momentum $s$ and the monopole charge $q^{sn}$, and is independent of orientation owing to the rotation symmetry. Furthermore, the $k^{-2}$ dependence implies a quantized invariant
\begin{equation}
\label{eq:geominv}
G^{sn}=\frac{1}{2\pi}\oint d\mathbf S_{\mathbf k}\cdot\mathbf{\hat k}\mathrm{Tr} g^{sn}_{\mathbf k}=2[s(s+1)-(q^{sn})^2],
\end{equation}
provided the integral domain encloses the degenerate point. The quantization of $G^{sn}$ originates from the monopole harmonics wavefunction $\ket{u^{sn}_{\mathbf k}}$, which is protected by the spin-orbit-coupled rotation symmetry around the degenerate point. We thus uncover a `symmetry-protected geometric invariant' $G^{sn}$ in the chiral multifold fermion model (\ref{eq:ham0}). This geometric invariant is different from the Chern number $C^{sn}$, which is a well-recognized topological invariant. Despite the difference, a `sum rule' of these two invariants can be established from the quantization rule
\begin{equation}
\label{eq:sumrule}
G^{sn}+\frac{(C^{sn})^2}{2}=2s(s+1),
\end{equation}
where the angular momentum $s$ sets a measure of the sum. Note that the spin-$s$ chiral multifold fermions exhibit the maximal Chern number $|C^{s\pm s}|=2s$. The sum rule then yields the lower bound $G^{sn}\geq|C^{sn}|$ for the geometric invariant, as has been identified in general from the positive semidefiniteness of the quantum geometric tensor \cite{peotta15nc}.
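The quantized trace (\ref{eq:trqm}) can be confirmed numerically from finite differences of the band projectors, using the rank-1 identity $g_{ab}=\frac12\mathrm{Tr}[\partial_{k_a}P\,\partial_{k_b}P]$; the momentum below is an arbitrary illustrative point:

```python
import numpy as np

def spin_matrices(s):
    # spin-s operators in the S^z eigenbasis (m = s, ..., -s)
    d = int(2*s + 1)
    m = np.arange(s, -s - 1, -1)
    Sp = np.zeros((d, d), dtype=complex)
    for i in range(1, d):
        Sp[i-1, i] = np.sqrt(s*(s+1) - m[i]*(m[i]+1))
    return (Sp + Sp.conj().T)/2, (Sp - Sp.conj().T)/2j, np.diag(m).astype(complex)

def projector(k, S, idx):
    _, v = np.linalg.eigh(k[0]*S[0] + k[1]*S[1] + k[2]*S[2])
    u = v[:, idx]                      # bands ascending: n = -s, ..., s
    return np.outer(u, u.conj())

def trace_metric(k, S, idx, h=1e-5):
    # for a rank-1 band projector, g_ab = (1/2) Tr[d_aP d_bP]
    tr = 0.0
    for a in range(3):
        dk = np.zeros(3); dk[a] = h
        dP = (projector(k + dk, S, idx) - projector(k - dk, S, idx))/(2*h)
        tr += 0.5*np.trace(dP @ dP).real
    return tr

k = np.array([0.3, -0.4, 0.5]); k2 = k @ k
for s in (0.5, 1.0, 1.5):
    S = spin_matrices(s)
    for idx in range(int(2*s + 1)):
        n = -s + idx                   # monopole charge q = -n, so q^2 = n^2
        assert abs(trace_metric(k, S, idx) - (s*(s+1) - n**2)/k2) < 1e-3
```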
To understand how the chiral multifold fermions acquire the quantized trace of quantum metric (\ref{eq:trqm}) and the geometric invariant $G^{sn}$ (\ref{eq:geominv}), we consider an alternative expression of the quantum metric (\ref{eq:qm})
\begin{equation}
g^{sn}_{ab\mathbf k}=\frac{1}{2}\braket{u^{sn}_{\mathbf k}}{\{r^{sn}_a,r^{sn}_b\}}{u^{sn}_{\mathbf k}}.
\end{equation}
Here the position $\mathbf r^{sn}=i\boldsymbol\nabla_{\mathbf k}-\mathbf A^{sn}_{\mathbf k}$ corresponds to the covariant derivative in momentum space, where the Berry connection is involved \cite{claassen15prl}. Notably, the trace of quantum metric $\mathrm{Tr} g^{sn}_{\mathbf k}=\braket{u^{sn}_{\mathbf k}}{|\mathbf r^{sn}|^2}{u^{sn}_{\mathbf k}}$ manifests the expectation value of momentum-space Laplacian $|\mathbf r^{sn}|^2$. This suggests a `duality' between the trace of quantum metric and the kinetic energy, where the roles of position and momentum are exchanged. Since the eigenstates are composed of monopole harmonics $Y_{q^{sn}sm}^*(\mathbf{\hat k})$, the radial part of Laplacian $(\mathbf{\hat k}\cdot\mathbf r^{sn})^2$ does not contribute, leaving only the angular part $|\mathbf r^{sn}|^2_\perp=|\mathbf r^{sn}|^2-(\mathbf{\hat k}\cdot\mathbf r^{sn})^2$ in the trace of quantum metric. The angular part of Laplacian can be related to the `dynamical angular momentum' $\mathbf \Lambda^{sn}=\mathbf r^{sn}\times\mathbf k$ through $|\mathbf \Lambda^{sn}|^2=k^2|\mathbf r^{sn}|^2_\perp$. This leads to an alternative form of the trace of quantum metric
\begin{equation}
\label{eq:trqmangmomt}
\mathrm{Tr} g^{sn}_{\mathbf k}=\Braket{u^{sn}_{\mathbf k}}{\frac{|\mathbf \Lambda^{sn}|^2}{k^2}}{u^{sn}_{\mathbf k}},
\end{equation}
which captures the `dual energy' from the dynamical angular momentum in the eigenstate $\ket{u^{sn}_{\mathbf k}}$.
Remarkably, the trace of quantum metric in chiral multifold fermions is algebraically equivalent to the energy of an electron moving on a sphere enclosing a Dirac magnetic monopole. This equivalence can be observed from the common structure in the two problems, where a rotationally symmetric system around a monopole is defined. When a Dirac magnetic monopole is present at the center of a sphere (Fig.~\ref{fig:magmnp}), the two-dimensional (2D) electron gas experiences a uniform perpendicular magnetic field and thereby manifests the quantum Hall effect. This implies a Landau level quantization for the eigenstates in the energy spectrum \cite{haldane83prl}. The Hamiltonian describing this `Haldane sphere problem' is $H=|\mathbf\Lambda|^2/2m_eR^2$, where $\mathbf\Lambda=\mathbf R\times(-i\boldsymbol\nabla+e\mathbf A)$ is the dynamical angular momentum, $\mathbf R=R\mathbf{\hat R}$ is the position at constant radius $R$ along the direction of unit vector $\mathbf{\hat R}$, $m_e$ and $-e$ are the electronic mass and charge, and $\mathbf A$ is the electromagnetic gauge field. Rotation symmetry enforces the angular momentum $l$ and its axial component $m$ as the good quantum numbers. These quantities then determine the quantized Landau level energy $E_{qlm}=[l(l+1)-q^2]/2m_eR^2$ in the presence of monopole charge $q$ \cite{jainbook,hsiao20prb}. The result immediately suggests an analogous quantization for the trace of quantum metric (\ref{eq:trqmangmomt}) in the chiral multifold fermion model (\ref{eq:ham0}). As `dual Haldane spheres' in momentum space, the chiral multifold fermions manifest the `dual Landau level quantization' (\ref{eq:trqm}), consistent with our previous observation from direct calculation. Note that the eigenstates in the Haldane sphere exhibit the monopole harmonics wavefunction $\psi_{qlm}(\mathbf R)=Y_{qlm}(\mathbf{\hat R})$ \cite{jainbook}.
This feature again elucidates the duality between Haldane sphere and chiral multifold fermions, where the monopole harmonics wavefunctions are also manifest in the eigenstates.
Having dualized the chiral multifold fermions onto the Haldane spheres, we now illustrate explicitly how the trace of quantum metric (\ref{eq:trqmangmomt}) acquires the quantization (\ref{eq:trqm}) for the chiral multifold fermions. The essential point is to uncover the relation between the dynamical angular momentum $\mathbf\Lambda^{sn}$ and the actual angular momentum $\mathbf L^{sn}$ under rotation symmetry \cite{haldane83prl}. This can be achieved by examining the commutation relations and composing the one which satisfies the $\text{SU}(2)$ Lie algebra. The analysis starts by calculating the commutation relation of dynamical angular momentum, yielding $[\Lambda^{sn}_a,\Lambda^{sn}_b]=i\varepsilon_{abc}(\Lambda^{sn}_c-q^{sn}\hat k_c)$. This result motivates the derivation of another commutation relation $[\Lambda^{sn}_a,\hat k_b]=i\varepsilon_{abc}\hat k_c$. Based on these two relations, we identify the angular momentum as $\mathbf L^{sn}=\mathbf \Lambda^{sn}+q^{sn}\mathbf{\hat k}$, whose commutation relation manifests the $\text{SU}(2)$ Lie algebra $[L^{sn}_a,L^{sn}_b]=i\varepsilon_{abc}L^{sn}_c$. A correspondence between the angular momentum and the good quantum number in the model (\ref{eq:ham0}) is then established: $|\mathbf L^{sn}|^2=s(s+1)$. To calculate the trace of quantum metric (\ref{eq:trqmangmomt}) in terms of the good quantum numbers $s$ and $q$, we utilize the expression $|\mathbf\Lambda^{sn}|^2=|\mathbf L^{sn}-q^{sn}\mathbf{\hat k}|^2$ and note that $\mathbf \Lambda^{sn}\cdot\mathbf{\hat k}=\mathbf{\hat k}\cdot\mathbf \Lambda^{sn}=0$. The calculation confirms the dual Landau level quantization for the trace of quantum metric (\ref{eq:trqm}). This further justifies the validity of the geometric invariant $G^{sn}$ (\ref{eq:geominv}) and the sum rule (\ref{eq:sumrule}) along with the Chern number $C^{sn}$. While the result was initially obtained by direct calculation, the dual Haldane sphere provides a rigorous and concise derivation that solidifies it.
The quantized trace of quantum metric (\ref{eq:trqm}) and the corresponding geometric invariant (\ref{eq:geominv}) can have interesting effects on various measurable physical quantities. To study these manifestations, we consider a general 3D multiorbital system which exhibits a multiband structure and hosts the chiral multifold fermions. The model (\ref{eq:ham0}) is realized at a band crossing point $\mathbf K$ in the Brillouin zone (BZ), with the eligible region $\mathcal R_\text{CMF}$ defined by a radial momentum cutoff $\Lambda_k$. Note that the spin-orbit-coupled rotation symmetry serves as an approximate symmetry in this low-energy theory. For a given band involved in the band crossing, the Bloch state $\ket{u^n_{\mathbf k}}$ is described by the eigenstate $\ket{u^n_{\mathbf k}}=\ket{u^{sn}_{\mathbf k}}$ of the model (\ref{eq:ham0}) in the eligible region $\mathcal R_\text{CMF}$. On the linearly dispersing bands $n\neq0$, the Fermi surfaces at finite doping away from the band crossing are spherical shells, located at radius $k_F=|\mu|/v|n|$, where $\mu\neq0$ is the chemical potential relative to the band crossing. Meanwhile, flat bands $n=0$ can occur in the integer spin models $s=1,2,\dots$. The corresponding Fermi surfaces at $\mu=0$ are solid spheres of radius $\Lambda_k$, which cover the whole eligible region $\mathcal R_\text{CMF}$ of the low-energy theory.
An important basis for the manifestations of quantum metric lies in the spread of Wannier functions \cite{marzari97prb,matsuura10prb,marzari12rmp}. Wannier functions are the localized representations of electronic states in real space. For a band $\ket{u^n_{\mathbf k}}$ in the multiband structure, the Wannier function $\ket{\mathbf Rn}$ at lattice vector $\mathbf R$ is constructed by a Fourier transform of the Bloch states $\ket{\mathbf Rn}=(\mathcal V_0/\mathcal V)\sumv{k}\intv{r}\ket{\mathbf r}\innp{\mathbf r}{u^n_{\mathbf k}}e^{i\mathbf k\cdot(\mathbf r-\mathbf R)}$. Here $\mathcal V_0$ and $\mathcal V$ denote the volumes of the primitive unit cell and the whole system, respectively. The availability of exponentially localized Wannier functions is usually expected for a single isolated band \cite{marzari12rmp}. However, such exponential localization may be lost for a single band in a set of composite bands, a phenomenon known as the Wannier obstruction. To capture the localization of the Wannier functions, the `spread functional' $\Omega^n=\braket{\mbf0n}{\mathbf r^2}{\mbf0n}-\braket{\mbf0n}{\mathbf r}{\mbf0n}^2$ was defined as a quantitative measure \cite{marzari97prb}. This functional contains a gauge invariant part $\Omega^n_I$ as a lower bound $\Omega^n\geq\Omega^n_I$, which is invariant under any gauge transformation $\ket{u^n_{\mathbf k}}\rightarrow e^{i\phi^n_{\mathbf k}}\ket{u^n_{\mathbf k}}$. Significantly, the contribution from band geometry is encoded in the gauge invariant part of the spread functional $\Omega^n_I=(\mathcal V_0/\mathcal V)\sumv{k}\mathrm{Tr} g^n_{\mathbf k}$. As nontrivial band geometry arises from the $\mathbf k$-dependent orbital composition, a single band may become insufficient for exponential localization, leading to a finite spread of the Wannier function.
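As a concrete numerical illustration of the geometric ingredient $\mathrm{Tr}\,g^n_{\mathbf k}$ entering the spread functional, the sketch below evaluates the trace of the quantum metric by finite differences for the simplest two-band ($s=1/2$ Weyl) member of the family, $H=\mathbf k\cdot\boldsymbol\sigma$, for which the known closed form is $\mathrm{Tr}\,g_{\mathbf k}=1/2k^2$. The model choice, function names, and tolerances here are ours, for illustration only:

```python
import numpy as np

def lower_band(k):
    # lower eigenstate of H = k . sigma (spin-1/2 Weyl node)
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    H = k[0] * sx + k[1] * sy + k[2] * sz
    _, v = np.linalg.eigh(H)
    return v[:, 0]

def trace_metric(k, eps=1e-5):
    # Tr g = sum_a Re[ <d_a u|d_a u> - <d_a u|u><u|d_a u> ], central differences
    u0 = lower_band(k)
    tr = 0.0
    for a in range(3):
        dk = np.zeros(3)
        dk[a] = eps
        up, um = lower_band(k + dk), lower_band(k - dk)
        # fix the arbitrary eigh phase by aligning with the reference state u0
        up = up * np.exp(-1j * np.angle(np.vdot(u0, up)))
        um = um * np.exp(-1j * np.angle(np.vdot(u0, um)))
        du = (up - um) / (2 * eps)
        tr += (np.vdot(du, du) - abs(np.vdot(u0, du)) ** 2).real
    return tr
```

For $\mathbf k=(0.3,-0.4,0.5)$ one has $k^2=1/2$, so the routine should return approximately $1.0$, matching $\mathrm{Tr}\,g=1/2k^2$.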
For the bands involved in the band crossing (\ref{eq:ham0}), the contributions from the chiral multifold fermions can be further distinguished, $\Omega^{sn}_{\text{CMF},I}=(\mathcal V_0/\mathcal V)\sum_{\mathcal R_\text{CMF},\mathbf k}\mathrm{Tr} g^{sn}_{\mathbf k}$. Note that this sets a lower bound on the spread functional $\Omega^{sn}_I\geq\Omega^{sn}_{\text{CMF},I}$, since the trace of quantum metric is positive semidefinite at every momentum. We thus identify a lower bound on the spread functional solely from the band geometry of the chiral multifold fermions. With an integration over the eligible region $\mathcal R_\text{CMF}$, we determine this lower bound from the geometric invariant $G^{sn}$ (\ref{eq:geominv})
\begin{equation}
\Omega^{sn}_I\geq\frac{\mathcal V_0\Lambda_k}{4\pi^2}G^{sn}.
\end{equation}
Notably, the bands with smaller monopole charge (such as the flat trivial bands with $q^{s0}=0$) exhibit larger lower bounds for the finite spread. This feature differs remarkably from the usual understanding of Wannier obstruction, which expects a higher degree of obstruction on a band with more nontrivial topology.
As a direct consequence of the Wannier obstruction from band geometry, the chiral multifold fermions can support superconductivity even if the bands are (nearly) flat \cite{lin20prr}. When a superconducting band is flattened, the Cooper pairs become nondispersive and well localized. This may lead to the loss of interpair communication, thereby suppressing the phase coherence of superconductivity. A quantitative measure of phase coherence is provided by the superfluid stiffness $D^S_{ab}$, which captures the response of a supercurrent $j^S_a$ to an electromagnetic gauge field $\mathbf A_b$. The scaling $D^S_{ab}\sim v_F^2$ with respect to the Fermi velocity $v_F$ confirms the loss of phase coherence $D^S_{ab}\rar0$ in the flat band limit $v_F\rar0$. Due to the absence of global phase coherence, an obstruction to superconductivity is usually expected on flat bands. Nevertheless, `anomalous phase coherence' may arise and support superconductivity on a single flat band in a set of composite bands \cite{peotta15nc}. Despite the localization of Cooper pairs, the wavefunction overlaps from the Wannier obstruction can still mediate the phase coherence. This effect is reflected in the anomalous superfluid stiffness \cite{peotta15nc,liang17prb,hu19prl,xie20prl,lin20prr}
\begin{equation}
\label{eq:sfstf}
D^{S,n}_{\text{geom},ab}(T)=\frac{1}{\mathcal V}\sumv{k}\frac{2|\Delta_{\mathbf k}|^2}{E^n_{\mathbf k}}\tanh\frac{E^n_{\mathbf k}}{2T}g^n_{ab\mathbf k},
\end{equation}
where $\Delta_{\mathbf k}$ is the superconducting gap function, $E^n_{\mathbf k}$ is the quasiparticle energy, and $T$ is the temperature. As the simplest illustration for the chiral multifold fermions, we calculate the anomalous superfluid stiffness of a uniform superconductivity $\Delta_{\mathbf k}=\Delta(T)$ on the flat bands $n=0$ \cite{lin20prr}. With the quasiparticle energy $E^n_{\mathbf k}=|\Delta|$, a proportionality to the gap function is established at zero temperature $T=0$
\begin{equation}
\mathrm{Tr} D^{S,sn}_{\text{geom}}(0)=\frac{\Lambda_k}{2\pi^2}|\Delta(0)|G^{sn}.
\end{equation}
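This follows from Eq.~(\ref{eq:sfstf}) in two short steps (a sketch, assuming the normalization of $G^{sn}$ implied by the Wannier-spread bound above): with $E^n_{\mathbf k}=|\Delta|$ on the flat band, the thermal factor $\tanh(E^n_{\mathbf k}/2T)\rightarrow1$ as $T\rightarrow0$, so

```latex
\mathrm{Tr}\, D^{S,sn}_{\text{geom}}(0)
  = 2|\Delta(0)| \int_{\mathcal R_\text{CMF}} \frac{d^3k}{(2\pi)^3}\,
    \mathrm{Tr}\, g^{sn}_{\mathbf k}
  = \frac{\Lambda_k}{2\pi^2}\,|\Delta(0)|\, G^{sn},
```

where the momentum integral is the same geometric integral that sets the lower bound on the Wannier spread.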
This result is also valid for the linearly dispersing bands $n\neq0$ in the flat band limit $v\rar0$. Note that the geometric invariant $G^{sn}$ (\ref{eq:geominv}) serves as an important measure of the anomalous superfluid stiffness. While previous works adopted the general relation $G^{sn}\geq|C^{sn}|$ and determined a lower bound from the Chern number \cite{peotta15nc,liang17prb,hu19prl,xie20prl}, our analysis uncovers a more precise `geometric dependence' particularly for the chiral multifold fermions. Interestingly, the bands with smaller Chern numbers manifest larger anomalous superfluid stiffness, which differs significantly from the usual expectations.
With the anomalous superfluid stiffness derived, we can further estimate the critical temperature $T_c\sim\bar D^{S,sn}_{\text{geom}}\xi$ for the flat band superconductivity \cite{emery95n,hazra19prx}. Here $\bar D^{S,sn}_{\text{geom}}=[\prod_aD^{S,sn}_{\text{geom},aa}(0)]^{1/3}=(\Lambda_k/6\pi^2)|\Delta(0)|G^{sn}$ follows from the rotation symmetry $D^{S,sn}_{\text{geom},aa}=\mathrm{Tr} D^{S,sn}_{\text{geom}}/3$, and an estimate of the coherence length $\xi\sim\Lambda_k^{-1}$ is used \cite{hazra19prx,lin20prr}. Note that the flat band pairing leads to a dramatic enhancement $|\Delta(0)|\sim Vn^{sn}$, where $-V<0$ is the attraction and $n^{sn}$ is the number of states in the eligible region $\mathcal R_\text{CMF}$ per unit volume \cite{lin18prb,lin20prr}. The resulting critical temperature manifests a linear scaling in the interaction strength
\begin{equation}
T_c\sim Vn^{sn}G^{sn},
\end{equation}
which is much higher than the conventional exponential scaling. Such dramatic enhancement is available solely because the chiral multifold fermions host nontrivial band geometry that supports anomalous phase coherence for flat band superconductivity.
Our analysis has focused on the minimal $\mathbf k\cdot\mathbf S$ model (\ref{eq:ham0}) of chiral multifold fermions. In general, the low-energy theory can experience various types of perturbations, which may become relevant away from the band crossing or close to a topological phase transition of the eigenstates \cite{boettcher20prl}. How stable the quantized band geometry is against these perturbations is thus an important question. When the perturbation respects the spin-orbit-coupled rotation symmetry, such as $\delta H\sim(\mathbf k\cdot\mathbf S)^2$ or the irreducible representation with $(l,s,j)=(2,2,0)$ \cite{lin18prb,lin20prr}, the eigenstates are unchanged. The quantized trace of quantum metric (\ref{eq:trqm}) and the geometric invariant (\ref{eq:geominv}) are still manifest, as guaranteed by the symmetry. On the other hand, the exact quantization may be obstructed when the perturbation breaks the spin-orbit-coupled rotation symmetry. Nevertheless, the original quantization can still set the characteristic scales of the nonquantized results. The surface integration $G^{sn}$ may remain close to the originally quantized value (\ref{eq:geominv}), until the structures of the chiral multifold fermions disappear away from the band crossing or divergences occur at the topological phase transitions of the eigenstates \cite{ma10prb}. A systematic study of how the band geometry varies under such perturbations is an interesting direction for future work.
Several experimental methods have been proposed and realized to probe the quantum metric \cite{neupert13prb,bleu18prb,ozawa18prb,asteria19np,klees20prl,yu19nsr,tan19prl,gianfrate20n}. One of the main methods is based on periodically driving the system \cite{ozawa18prb}. Consider a linear shake $\delta H_a=2E\cos(\omega t)r_a$ with amplitude $E$, frequency $\omega$, and time $t$ applied to a multiband system. The initial state is prepared as a Bloch state at momentum $\mathbf k$ in the lowest-energy band, which may be realized by adiabatically loading a corresponding wave packet. Significantly, the diagonal components of the quantum metric can be obtained from the integrated transmission rate $\Gamma^\text{int}_a=2\pi E^2g_{aa\mathbf k}$ under the periodic drive. Here the integrated rate reads $\Gamma^\text{int}_a=\int d\omega\,\Gamma(\omega)$, where $\Gamma(\omega)$ is the time-averaged transmission probability to all of the higher-energy bands. The trace of quantum metric can thus be determined by applying the shake along each direction $a=x,y,z$ and measuring the transmission rates. Other methods have also been proposed, such as measurement by detecting the current noise \cite{neupert13prb}. Note that all of these methods probe the single-band quantum metric only for the lowest-energy band. Targeting a particular band may serve as an important goal in the future development of experimental probes.
In summary, we derive an exact quantization rule for the trace of quantum metric of the chiral multifold fermions of any spin. The derivation is achieved by dualizing the computation onto a Haldane sphere in momentum space. A surface integration around the degenerate point leads to a quantized geometric invariant. The quantized band geometry may physically manifest itself in the finite spread of Wannier functions, as well as in the associated anomalous phase coherence of flat band superconductivity. The quantization remains valid under the spin-orbit-coupled rotation symmetry, and may set the characteristic scales of nonquantized results under symmetry-breaking perturbations. Experimental probes with periodic drive may be adopted to obtain the quantum metric. Potential applications to other physical observables may serve as interesting topics for future work. Meanwhile, our paradigmatic analysis may also be generalized to other gapless topological materials, such as the nodal-loop semimetals \cite{carter12prb,weng15prx,nandkishore16prb}, systems with 3D $Z_2$ monopoles \cite{zhao17prl,ahn18prl}, and topological spintronic devices \cite{PhysRevB.95.014519}. Our analysis indicates a new framework for how novel properties can originate from band geometry, thereby opening a new route toward the understanding of unconventional states of matter.
\begin{acknowledgments}
The authors especially thank Dam Thanh Son and Rahul Nandkishore for encouragement and feedback on the manuscript. YPL was sponsored by the Army Research Office under Grant No. W911NF-17-1-0482. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. WHH was supported by a Simons Investigator Grant from the Simons Foundation.
\end{acknowledgments}
\section{Introduction}\label{sec:intro}
The availability of electronic information is essential to the
functioning of our communities. Increasingly often, data needs to be shared between
parties without complete mutual trust. Naturally, this raises important privacy concerns
with respect to the disclosure and the long-term safety of sensitive content.
One interesting problem occurs whenever two or more entities
need to evaluate the similarity of their datasets, but are
reluctant to openly disclose their data.
This task faces three important technical challenges: (1) how to identify a meaningful metric
to estimate similarity, (2) how to compute a measure thereof such that no
private information is revealed during the process, and (3) how to do so efficiently.
We address such challenges by introducing a cryptographic primitive called {\bf\em EsPRESSo -- Privacy-Preserving Evaluation of Sample Set Similarity}. Among others, this construction is appealing in
a few relevant applications, presented below. %
\paragraph{Document similarity:} Two parties need to estimate the similarity
of their documents, or collections thereof.
In many settings, documents contain sensitive information and parties may be unwilling,
or simply forbidden, to reveal their content. For instance, the program chairs of a conference may want
to verify that none of the submitted papers is also under review at other conferences or journals,
but, obviously, they are not allowed to disclose papers in submission.
Likewise, two law enforcement authorities (e.g., the FBI and local police), or two investigation teams
with different clearance levels, might need to share documents pertaining to suspected terrorists,
but they can do so only given a clear indication that the content is relevant to the same investigation.
\paragraph{Iris Matching:} Biometric identification and authentication are increasingly used
due to fast and inexpensive devices that can extract biometric information from a multitude of sources, e.g., voice, fingerprints, iris, and so on. Clearly, given its utmost sensitivity, biometric data must
be protected from arbitrary disclosure.
Consider, for instance, an agency that needs to determine whether a given biometric appears on a government watch-list.
As agencies may have different clearance levels, the privacy of the biometric's owner needs to be preserved if no match is found, but, at the same time, unrestricted access to the watch-list cannot be granted.
\paragraph{Multimedia File Similarity:}
Digital media, e.g., images, audio, video,
are increasingly relevant in today's computing ecosystems.
Consider two parties that wish to evaluate similarity of their media files, e.g.,
for plagiarism detection: sensitivity of possibly unreleased material (or copyright issues)
may prevent parties from revealing actual content.
\bigskip
EsPRESSo is not only appealing in the examples above, but is also relevant to a wide
spectrum of applications, for instance, in the context of privacy-preserving sharing of information and/or recommender systems, e.g., to privately assess the similarity of genomic information~\cite{baldi2011countering}, social network profiles~\cite{pino}, attackers' information~\cite{katti2005collaborating}, etc.
\subsection{Technical Roadmap \& Contributions}
Our first step is to identify a {\em metric} for effectively evaluating
similarity of sample sets. Several similarity measures are available and commonly
used in different contexts,
such as Cosine, Euclidean, Manhattan, Minkowski similarity, or Hamming and Levenshtein distances.
In this paper, we focus on a well-known metric, namely, the \emph{Jaccard Similarity Index}~\cite{jaccard1901etude}, which quantifies the similarity of {\em any} two sets $A$ and $B$. It is expressed as
a rational number between 0 and 1, and, as shown in~\cite{broder1997resemblance}, it effectively captures the informal notion of ``roughly the same''. The Jaccard index can be used, e.g.,
to find near duplicate records~\cite{xiao2008efficient}
and similar documents~\cite{broder1997resemblance},
for web-page clustering~\cite{strehl2000impact},
data mining~\cite{tan2006introduction},
and genetic tests~\cite{genodroid,dombek2000use,popescu2006fuzzy}.
Also note that, as sample sets can be relatively large, in distributed settings
an approximation of the index is oftentimes preferred to its exact calculation.
To this end, \emph{MinHash} techniques~\cite{broder1997resemblance} are often used to estimate
the Jaccard index, with remarkably lower computation and communication costs (see Section~\ref{subsec:minhash}).
We define and instantiate a cryptographic primitive for efficient privacy-preserving evaluation of sample set similarity (or EsPRESSo, for short). We present two instantiations that
allow two interacting parties to compute and/or approximate the Jaccard similarity of their private sets,
without reciprocally disclosing any information about their content (or, at most, their size).
Our main cryptographic building block is {\em Private Set Intersection
Cardinality} (PSI-CA)~\cite{freedman2004efficient}, which we review in Section~\ref{subsec:psi-ca}.
Specifically, we use PSI-CA to privately compute the magnitude of set intersection and union,
and we then derive the value of the Jaccard index.
As fast (linear-complexity) PSI-CA protocols become available (e.g.,~\cite{ePrint}), %
this can be done efficiently, even on large sets.
Nonetheless, our work shows that, using MinHash approximations,
one can obtain an estimate of the Jaccard index with remarkably increased efficiency,
by reducing the size of input sets (thus, the number of underlying cryptographic operations).
Privacy-preserving evaluation of sample set similarity is appealing in many scenarios.
We focus on document and multimedia similarity as well as iris matching,
and show that privacy is attainable with low overhead.
Experiments demonstrate that our generic technique -- while not bound to any specific application --
is appreciably more efficient than state-of-the-art protocols that only focus
on one specific scenario, while maintaining comparable accuracy.
Finally, in the process of reviewing related work, we identify limits and flaws of some prior results.
\paragraph{Organization.} The rest of this paper is organized as follows.
The next section introduces building blocks; then
Section~\ref{sec:new} presents our construction for secure computation of Jaccard index
and an even more efficient technique to (privately) approximate it.
Then, Sections~\ref{sec:docsimilarity}, \ref{sec:iris}, and~\ref{sec:multimedia} present
our constructions for privacy-preserving similarity evaluation of, respectively, documents, irises,
and multimedia content.
Finally, Section~\ref{sec:approx-psica} sketches a very efficient
protocol that privately approximates set intersection cardinality, additionally hiding input set sizes,
while the paper concludes in Section~\ref{sec:conclusion}. Appendix~\ref{app:minhash} presents some more details
on MinHash, and Appendix~\ref{app:flaw} shows a flaw in the protocol for secure document similarity in~\cite{JiSa}.
\section{Preliminaries}\label{sec:preliminaries}
This section provides some relevant background information on Jaccard index, MinHash techniques, and our main cryptographic building blocks.
\subsection{Jaccard Similarity Index and MinHash Techniques}
\paragraph{Jaccard Index. }\label{subsec:jaccard}
One of the most common metrics for assessing the similarity of two sets $A$ and $B$ (hence, of
data they represent) is the Jaccard index~\cite{jaccard1901etude}, defined as $J(A,B)=|A\cap B|/|A\cup B|$.
Values close to 1 suggest that two sets are
very similar, whereas, those closer to 0 indicate that $A$ and $B$ are almost disjoint.
Note that the Jaccard index of $A$ and $B$ can be rewritten as a mere function of the {\em intersection}:
$J(A,B)=|A\cap B|/(|A|+|B|-|A\cap B|)$.
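As a quick sanity check of this identity, a minimal Python sketch (the function names are ours, for illustration):

```python
def jaccard(A, B):
    # J(A,B) = |A intersect B| / |A union B|
    return len(A & B) / len(A | B)

def jaccard_from_intersection(A, B):
    # the equivalent rewrite using only the intersection size
    inter = len(A & B)
    return inter / (len(A) + len(B) - inter)
```

Both functions return $0.5$ on, e.g., $A=\{1,2,3\}$ and $B=\{2,3,4\}$, where $|A\cap B|=2$ and $|A\cup B|=4$.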
\paragraph{MinHash Techniques.}\label{subsec:minhash}
Clearly, computing the Jaccard index incurs a complexity linear in set sizes.
Thus, in the context of a large number of big sets, its computation might be relatively expensive.
In fact, for each pair of sets, the Jaccard index must be computed from scratch,
i.e., no information used to calculate $J(A,B)$ can be re-used for $J(A,C)$.
(That is, similarity is not a transitive measure.)
As a result, an {\em approximation} of the Jaccard index is often preferred,
as it can be obtained at a significantly lower cost, e.g., using
MinHash~\cite{broder1997resemblance}.
Informally, MinHash techniques extract a small representation $h_{k}(S)$ of a set $S$ through deterministic
(salted) sampling. This representation has a constant size $k$, independent of $|S|$, and can be used to compute an approximation of the Jaccard index. The parameter $k$ also defines %
the expected error with respect to the exact Jaccard index. Intuitively, larger values of $k$
yield smaller approximation errors.
The computation of $h_{k}(S)$ also incurs a complexity linear in the set size; however,
it must be performed {\em only once} per set, for {\em any} number of comparisons.
Thus, with MinHash techniques, evaluating the similarity of any two sets requires only a constant number of comparisons.
Similarly, the bandwidth used by two interacting parties to approximate the Jaccard index
of their respective sets is also constant (i.e., $O(k)$).
There are two strategies to realize MinHashes: one employs multiple hash functions,
while the other is built from a single hash function. This paper focuses on the former
technique. Thus, we defer the description of the latter to Appendix~\ref{app:minhash}, which
also overviews possible MinHash instantiations.
\paragraph{MinHash with many hash functions.}
Let $\mathcal{F}$ be a family of hash functions that map items from set $U$ to distinct $\tau$-bit integers.
Select $k$ different functions $h^{(1)}(\cdot),\ldots, h^{(k)}(\cdot)$ from $\mathcal{F}$;
for any set $S\subseteq U$, let $h^{(i)}_{min}(S)$ be the item $s \in S$ with the smallest value $h^{(i)}(s)$.
The MinHash representation $h_k(S)$ of set $S$ is a vector $h_k(S) = \{h_{min}^{(i)}(S)\}_{i=1}^k$.
The Jaccard index $J(A,B)$ is estimated by counting the number of indices $i$ such that
$h_{min}^{(i)}(A) = h_{min}^{(i)}(B)$, and dividing this count by $k$. Observe that $h^{(i)}_{min}(A)=h^{(i)}_{min}(B)$ iff the minimum hash value of %
$A\cup B$ lies in $A\cap B$.
This measure can be obtained by computing the cardinality of the intersection of $h_k(A)$ and $h_k(B)$ as follows. Each element $a_i$ of the vector $h_k(A)$ is encoded as $\langle a_i,i\rangle$. Similarly, $\langle b_i,i\rangle$ represents the $i$-th element of vector $h_k(B)$. An unbiased estimate of the Jaccard index of $A$ and $B$ is given by: %
\begin{equation}%
sim(A,B) = \dfrac{\left|\{\langle a_i,i\rangle\}_{i=1}^k \cap \{\langle b_i,i\rangle\}_{i=1}^k\right|}{k}%
\end{equation}
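A compact sketch of this estimator, using salted SHA-256 as a stand-in for the min-wise independent family (the concrete family and all names here are our illustrative assumptions, not prescribed by the text):

```python
import hashlib

def h(i, x):
    # i-th "hash function": SHA-256 salted with the function index i
    return int.from_bytes(hashlib.sha256(f"{i}:{x}".encode()).digest(), "big")

def minhash(S, k):
    # h_k(S): for each of the k functions, keep the item minimizing h^(i)
    return [min(S, key=lambda x: h(i, x)) for i in range(k)]

def sim(A, B, k=256):
    # fraction of indices i with h_min^(i)(A) == h_min^(i)(B)
    matches = sum(a == b for a, b in zip(minhash(A, k), minhash(B, k)))
    return matches / k
```

For $A=\{0,\dots,99\}$ and $B=\{50,\dots,149\}$ the exact index is $J(A,B)=50/150=1/3$, and the estimate should fall within the $O(1/\sqrt{k})$ error band.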
As discussed in~\cite{broder1998min}, if $\mathcal{F}$ is a family of min-wise independent hash functions,
then each value of a fixed set $A$ has the same probability to be the element with the smallest hash value, for all functions in $\mathcal{F}$.
Specifically, for each min-wise independent hash function $h^{(i)}(\cdot)$ and for any set $S$,
we have that, for any $s_j,s_l\in S$, $\Pr[s_j = h^{(i)}_{min}(S)] = \Pr[s_l = h^{(i)}_{min}(S)]$.
Thus, we also obtain that $Pr[h^{(i)}_{min}(A)=h^{(i)}_{min}(B)]=J(A,B)$.
In other words, if $r$ is a random variable that is 1 when $h^{(i)}_{min}(A) = h^{(i)}_{min}(B)$ and 0 otherwise,
then $r$ is an unbiased estimator of $J(A,B)$; however, in order to reduce its variance, this random variable must be sampled several times, i.e., $k\gg1$ hash values must be used.
In particular, by Chernoff bounds~\cite{chernoff1952measure}, the expected error of this estimate is $O(1/\sqrt{k})$~\cite{broder1997resemblance}.
\paragraph{Approximating (Jaccard) Similarity of Vectors without MinHash.}
If one needs to approximate the Jaccard index of two fixed-length {\em vectors} (rather than sets),
one could use other (more efficient) techniques similar to MinHash.
Observe that a vector $\overrightarrow S$ can be represented as a
set $S=\{\langle s_i,i\rangle\}$, where $s_i$ is simply the $i$-th element of $\overrightarrow S$.
We now discuss a new efficient strategy to approximate the Jaccard index of two vectors $A=\{\langle a_i,i\rangle\}_{i=1}^n$, $B=\{\langle b_i,i\rangle\}_{i=1}^n$ of length $n$, without using MinHash.
Our approach incurs constant ($O(k)$) computational and communication complexity, i.e., it is independent of the length of the vectors being compared.
First, select $k$ random values $(r_1,\ldots,r_k)$, for $r_i$ uniformly distributed in $[1,n]$, and compute $A_k = \{\langle a_{r_i},r_i\rangle\}_{i=1}^k$ and $B_k = \{\langle b_{r_i},r_i\rangle\}_{i=1}^k$, respectively.
The value $\delta=|A_k\cap B_k|/k$ can then be used to assess the similarity of $A$ and $B$. %
We argue that $\delta$ is an unbiased estimate of $J(A,B)$: for each $\alpha \in (A_k \cup B_k)$ we have that $\Pr[\alpha \in (A_k \cap B_k)] = \Pr[\alpha \in (A \cap B)]$ since $\alpha \in (A \cap B) \wedge \alpha \in (A_k \cup B_k)\Leftrightarrow \alpha \in (A_k \cap B_k)$. We also have $\Pr[\alpha \in (A \cap B)] = J(A,B)$, thus, $\delta$
is indeed an unbiased estimate of $J(A,B)$.
The above algorithm implements a perfect min-wise permutation for this setting: since elements $(r_1, \ldots, r_k)$ are uniformly distributed, for each $i\in [1,k]$ any element in $A$ and $B$ has the same probability of being selected.
As such, similar to MinHash with many hash functions, the expected error is also $O(1/\sqrt{k})$.
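A sketch of this coordinate-sampling estimator (function and variable names are ours). Each sampled pair $\langle a_{r_i},r_i\rangle$ equals $\langle b_{r_i},r_i\rangle$ exactly when the two vectors agree at position $r_i$, so $\delta$ concentrates around the fraction of agreeing coordinates:

```python
import random

def sampled_delta(vecA, vecB, k, seed=0):
    # delta = |A_k intersect B_k| / k, with A_k, B_k built
    # from k uniformly sampled positions r_1, ..., r_k
    n = len(vecA)
    rnd = random.Random(seed)
    positions = [rnd.randrange(n) for _ in range(k)]
    matches = sum(vecA[r] == vecB[r] for r in positions)
    return matches / k
```

For two length-1000 vectors agreeing on exactly half of their positions, the estimate with $k=500$ samples lies close to $0.5$, within the $O(1/\sqrt{k})$ error.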
\subsection{Cryptography Background}\label{subsec:psi-ca}
\paragraph{Private Set Intersection Cardinality (PSI-CA).}
PSI-CA is a cryptographic protocol involving two parties: $\ensuremath{{\sf Alice}}$, on input $A \hspace*{-0.05cm}=\hspace*{-0.05cm} \{a_1,\ldots,a_v\}$,
and $\ensuremath{{\sf Bob}}$, on input $B \hspace*{-0.05cm}=\hspace*{-0.05cm} \{b_1,\ldots,b_w\}$, such that %
$\ensuremath{{\sf Alice}}$ outputs $|A\cap B|$, while $\ensuremath{{\sf Bob}}$ has no output.
In the last few years, several PSI-CA protocols
have been proposed, including~\cite{ePrint,freedman2004efficient,kissner2005,vaidya2005}.
\paragraph{De Cristofaro et al.'s PSI-CA~\cite{ePrint}.}
Throughout this paper, we will use the PSI-CA construction presented by De Cristofaro, Gasti, and Tsudik in~\cite{ePrint}, denoted as {\sf DGT12}\ in the rest of the paper,
as it offers the best communication and computation complexities (linear in set sizes).
{\sf DGT12}\ is secure, in the presence of semi-honest adversaries, under the Decisional Diffie-Hellman assumption (DDH) in the Random Oracle Model (ROM).
The {\sf DGT12}\ protocol is illustrated in Fig.~\ref{fig:psi-ca}.
First, $\ensuremath{{\sf Alice}}$ hashes and masks its set items ($a_i$-s) with a random exponent ($R_a$) and sends resulting values ($\alpha_i$-s) to $\ensuremath{{\sf Bob}}$, which ``blindly'' exponentiates them with its own random value $R_b$. $\ensuremath{{\sf Bob}}$ then shuffles the resulting values ($\alpha'_i$-s) and sends them to $\ensuremath{{\sf Alice}}$.
Then, $\ensuremath{{\sf Bob}}$ sends $\ensuremath{{\sf Alice}}$ the output of a one-way function, $\mathrm{H}'(\cdot)$, computed over the exponentiations of $\ensuremath{{\sf Bob}}$'s (hashed and shuffled) items ($b_j$-s) to randomness $R_b$.
Finally, $\ensuremath{{\sf Alice}}$ tries to match one-way
function outputs received from $\ensuremath{{\sf Bob}}$ with one-way function outputs computed over
the shuffled ($\alpha'_i$-s) values, stripped of the initial randomness $R_a$. $\ensuremath{{\sf Alice}}$ learns the set intersection cardinality (and nothing else) by counting the number of such matches. Unless they correspond to items in the intersection, one-way function outputs received from $\ensuremath{{\sf Bob}}$ cannot be used by $\ensuremath{{\sf Alice}}$ to learn related items in $\ensuremath{{\sf Bob}}$'s set (under the DDH assumption). Also, $\ensuremath{{\sf Alice}}$ does not learn {\em which} items are in the intersection as the matching occurs using {\em shuffled} $\alpha_i$ values.
{\sf DGT12}\ requires $O(|A|+|B|)$ {\em offline} and $O(|A|)$ {\em online} modular exponentiations in $\mathbb{Z}_p$ with exponents from subgroup $\mathbb{Z}_q$. (Offline operations are computed only once, for any number of interactions and any number of interacting parties).
Communication overhead amounts to $O(|A|)$ elements in $\mathbb{Z}_p$ and $O(|B|)$ elements in $\mathbb{Z}_q$.
(Assuming an 80-bit security parameter, $|q|=160$ bits and $|p|=1024$ bits.)
\begin{figure*}[t!]
\centering
{\fbox{\footnotesize
\begin{minipage}{0.72\columnwidth}
\begin{tabular}{lcl}
\textbf{\underline{Alice}}, on input
&& \textbf{\underline{Bob}}, on input\vspace{0.1cm}\\
\hspace{0.25cm} $A=\{a_1,\ldots,a_v\},|B|$ & & \hspace{0.25cm} $B=\{b_1,\ldots, b_w\},|A|$ \vspace{-0.18cm}\\
\multicolumn{3}{l}{\hspace{-0.3cm}\line(1,0){344.5}} \vspace{0.1cm}\\
$R_a \leftarrow \mathbb{Z}_q $\vspace{0.1cm} && $(\hat b_1,\ldots, \hat b_w) \leftarrow \Pi (B)$\\ %
$\forall i\mbox{ } 1\leq i\leq v:$ && $\forall j\mbox{ } 1\leq j\leq w: hb_j = \mathrm{H}(\hat{b}_j)$\\
$\hspace{0.25cm} ha_i = \mathrm{H}(a_i); \; $ && \vspace{-0.45cm}\\
$\hspace{0.25cm} \alpha_i=({ha_i})^{R_a}$ &\hspace{0.0cm} $\xymatrix@1@=100pt{\ar[r]^*+<4pt>{\{\alpha_1,\ldots,\alpha_v\}}&}$\vspace{-0.3cm}\\
& & $R_b \leftarrow \mathbb{Z}_{q}$\vspace{0.1cm}\\
&& $\forall i\mbox{ } 1\leq i\leq v:\mbox{ } \alpha_i' =(\alpha_i)^{R_b} $\vspace{0.1cm}\\
&& $(\alpha'_{\ell_1}, \ldots, \alpha'_{\ell_v}) = \Pi'(\alpha'_{1},\ldots,\alpha'_{v})$\vspace{0.1cm}\\
&& $\forall j\mbox{ } 1\leq j\leq w:\mbox{ } \beta_{j} ={(hb_{j})}^{R_b} $\vspace{0.1cm}\\
&& $\forall j\mbox{ } 1\leq j\leq w:\mbox{ } tb_{j} =\mathrm{H}'(\beta_{j})$\vspace{-0.6cm}\\
$\forall i\mbox{ } 1\leq i\leq v$: &\hspace{-0.2cm} $\xymatrix@1@=100pt{& \ar[l]^*+<4pt>{\{tb_{1},\ldots,tb_{w} \}}_*+<4pt>{\{\alpha'_{\ell_1},\ldots,\alpha'_{\ell_v}\}}}$\vspace{-0.4cm}\\
\hspace{0.25cm}$\beta'_{i} = (\alpha'_{\ell_i})^{1/R_a \bmod q}$\vspace{0.1cm}\\
$\forall i\mbox{ } 1\leq i\leq v$:\vspace{0.1cm}\\
\hspace{0.25cm}$ta_{i} =\mathrm{H}'(\beta'_i)$\vspace{0.3cm}\\
\end{tabular}
{\color{white}P} {\bf Output}: $\left| \{tb_{1},\ldots,tb_{w}\} \cap \{ta_1,\ldots,ta_v\} \right|$
\end{minipage}
}
}%
\vspace{-0.1cm}
\caption{\small PSI-CA protocol from~\cite{ePrint}, denoted as {\sf DGT12}, executed on common input of two primes $p$ and
$q$ (such that $q|p-1$), a generator $g$ of a subgroup of size $q$ and two hash functions, $\mathrm{H}$ and $\mathrm{H}'$, modeled as random oracles. $\Pi(\cdot)$ and $\Pi'(\cdot)$ denote random permutations. {\em All computation is mod $p$}.}
\label{fig:psi-ca}
\end{figure*}
Protocol correctness is easily verifiable: for any $a_i$ held by \ensuremath{{\sf Alice}}\
and $b_j$ held by \ensuremath{{\sf Bob}}, if $a_i=b_j$ (hence, $ha_i=hb_j$), then, letting $m$ denote the position for which $\ell_m=i$, we obtain:\vspace{-0.2cm}
$$
ta_{m} = \mathrm{H}'(\beta'_{m}) =
\mathrm{H}'\big(({\alpha'_{\ell_m}})^{1/R_a}\big) =
\mathrm{H}'\big(({ha_i}^{R_a R_b})^{1/R_a}\big) =
\mathrm{H}'({ha_i}^{R_b}) =
\mathrm{H}'({hb_j}^{R_b}) =
\mathrm{H}'(\beta_j) =
tb_{j}
$$
\noindent Hence, \ensuremath{{\sf Alice}}\ learns set intersection cardinality by counting the number
of matching pairs $(ta_{\ell_i},tb_j)$.
\paragraph{Adversarial Model.}
We use standard security models for secure two-party computation, which assume
adversaries to be either semi-honest or malicious.\footnote{Hereafter,
the term {\em adversary} refers to protocol participants. Outside adversaries
are not considered, since their actions can be mitigated via standard network security techniques.}
As per definitions in~\cite{Goldreich}, protocols secure in the presence of
{\em semi-honest adversaries} assume that parties faithfully follow
all protocol specifications and do not misrepresent any information related to their inputs,
e.g., size and content. However, during or after protocol execution, any party might
(passively) attempt to infer additional information about the other party's input.
In contrast, security in the presence of {\em malicious parties} allows arbitrary deviations
from the protocol. %
Security arguments in this paper are made with respect to \textbf{\em semi-honest} participants.
\section{Privacy-preserving Sample Set Similarity}\label{sec:new}
This section presents and analyzes our protocols for privacy-preserving computation
of sample set similarity, via secure computation
of the Jaccard Similarity index.
\subsection{Protocols Description}\label{sec:pj}
We propose two protocols, both based on Private Set Intersection Cardinality (PSI-CA). The
first provides secure and exact computation of the Jaccard index, while the second
securely approximates it using MinHash techniques, incurring significantly lower communication
and computation overhead.
\paragraph{Privacy-Preserving Computation of Jaccard Index.} Fig.~\ref{fig:sjacc} illustrates
our first protocol construction for securely computing the Jaccard index. It involves two
parties, $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$, on input sets $A$ and $B$, respectively, that wish to compute $J(A,B)$
in a privacy-preserving manner, i.e., in such a way that nothing is revealed about their input
sets -- besides their size and joint Jaccard index.
Given $|A|$, $|B|$ and $J(A,B)$, the size of the intersection between $A$ and $B$ can be easily
computed as $|A\cap B| = J(A,B) \cdot (|A| + |B|) / (J(A,B) + 1)$. In other words, knowledge of $(|A|, |B|, J(A,B))$ is equivalent to knowledge of $(|A|, |B|, |A\cap B|)$. Therefore, in our protocol \ensuremath{{\sf Alice}}\ computes $|A\cap B|$ and uses it -- together with her input -- to derive $J(A,B)$. As is customary in secure-computation
protocols, the size of each party's input is available to the counterpart, thus, it is included as
part of the protocol input. The protocol does not assume any specific secure PSI-CA instantiation.
\begin{figure}[th]
\begin{center}
\fbox{
\begin{minipage}{0.645\columnwidth}
{\textbf{Privacy-preserving computation of ${J(A,B)}$}}\vspace{0.02cm}\\
{\small \sf (Run by $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$, on input, resp., $(A,|B|),(B,|A|)$)}\vspace{-0.2cm}\\
\hspace{-0.35cm}\line(1,0){300} \vspace{-0.35cm}\\
{\small
\begin{compactenum}
\item $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ execute PSI-CA on input, respectively, $(A,|B|)$ and $(B,|A|)$
\item $\ensuremath{{\sf Alice}}$ learns $c = |A\cap B|$
\item $\ensuremath{{\sf Alice}}$ computes $u=|A\cup B| = |A|+|B|-c$
\item $\ensuremath{{\sf Alice}}$ outputs $J(A,B)=c/u$
\end{compactenum}}
\end{minipage}}
\vspace{-0.2cm}
\caption{Proposed protocol for privacy-preserving computation of set similarity.}
\label{fig:sjacc}
\vspace{-0.3cm}
\end{center}
\end{figure}
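The local post-processing in steps 2--4 is straightforward; a minimal sketch follows, in which a plain intersection stands in for the PSI-CA output that Alice would receive:

```python
from fractions import Fraction

def jaccard_from_cardinality(c, size_a, size_b):
    # Steps 3-4: Alice derives the Jaccard index from the PSI-CA output c.
    u = size_a + size_b - c              # |A u B| by inclusion-exclusion
    return Fraction(c, u)

A, B = {1, 2, 3, 4}, {3, 4, 5}
c = len(A & B)                           # stand-in for the PSI-CA output
J = jaccard_from_cardinality(c, len(A), len(B))
print(J)                                 # -> 2/5

# Knowledge of (|A|, |B|, J) is equivalent to knowledge of |A n B|:
assert J * (len(A) + len(B)) / (J + 1) == c
```

The final assertion checks the equivalence $|A\cap B| = J(A,B)\cdot(|A|+|B|)/(J(A,B)+1)$ stated above.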
\paragraph{Privacy-Preserving Approximation of Jaccard Index using MinHash.} The computation of the Jaccard index, with or without privacy, can be relatively expensive when (1)
sample sets are very large, or (2) each set must be compared with a large number
of other sets -- since for each comparison, all computation must be re-executed {\em from scratch}.
Thus, MinHash is often used to
estimate the Jaccard index, trading a (bounded) error for appreciably faster computation.
In order to jointly and privately approximate $J(A,B)$ as
$sim(A,B)$, \ensuremath{{\sf Alice}}\ and \ensuremath{{\sf Bob}}\ agree on $k$ and select a random subset of their sets using the MinHash
technique in Section~\ref{subsec:minhash}. In particular, \ensuremath{{\sf Alice}}\ computes
$\{\langle a_i,i\rangle\}_{i=1}^k$ where $a_i = h_{min}^{(i)}(A)$, and \ensuremath{{\sf Bob}}\ computes
$\{\langle b_i,i\rangle\}_{i=1}^k$ for $b_i = h_{min}^{(i)}(B)$.
Similarity of two sample sets is then computed as $sim(A,B) = |\{\langle a_i,i\rangle\}_{i=1}^k
\cap \{\langle b_i,i\rangle\}_{i=1}^k|/k$ using any secure instantiation of PSI-CA.
Therefore, privacy-preserving estimation of the Jaccard index, using multi-hash MinHash, can be
reduced to securely computing cardinality of set intersection.
The resulting protocol is presented in Fig.~\ref{fig:spmin} below.
\begin{figure}[ht]
\begin{center}
\fbox{
\begin{minipage}{0.73\columnwidth}
{\textbf{Private Jaccard index estimation ${sim(A,B)}$}}\vspace{0.02cm}\\
{\small{ Run by $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$, on common input $k$ and private input $\{\langle a_i,i\rangle\}_{i=1}^k$ (for \ensuremath{{\sf Alice}}) and $\{\langle b_i,i\rangle\}_{i=1}^k$} (for \ensuremath{{\sf Bob}})}\vspace{-0.2cm}\\
\hspace{-0.35cm}\line(1,0){343} \vspace{-0.35cm}\\
{\small
\begin{compactenum}
\item $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ execute PSI-CA on input, resp., $(\{\langle a_i,i\rangle\}_{i=1}^k, k)$ and
$(\{\langle b_i,i\rangle\}_{i=1}^k,k)$
\item $\ensuremath{{\sf Alice}}$ learns $\delta=|\{\langle a_i,i\rangle\}_{i=1}^k\cap\{\langle b_i,i\rangle\}_{i=1}^k|$
\item $\ensuremath{{\sf Alice}}$ outputs $sim(A,B)= \delta/k$
\end{compactenum}}
\end{minipage}}
\vspace{-0.2cm}
\caption{Proposed protocol for privacy-preserving approximation of set similarity.\vspace{0.1cm}}
\label{fig:spmin}
\vspace{-0.2cm}
\end{center}
\end{figure}
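The input construction and the resulting estimator can be sketched as follows. In this sketch, SHA-256 instantiates the hash family $h^{(i)}$ and a plain intersection stands in for the PSI-CA call; both are illustrative assumptions, not the paper's implementation.

```python
import hashlib

def h(i, x):
    # i-th member of the hash family, derived from SHA-256 (an assumption)
    return hashlib.sha256(f"{i}|{x}".encode()).digest()

def minhash_input(S, k):
    # {<h_min^{(i)}(S), i>} for i = 1..k: each party computes this locally
    return {(min(S, key=lambda x: h(i, x)), i) for i in range(1, k + 1)}

def sim(A, B, k):
    # delta is what PSI-CA would return on the two k-element inputs
    delta = len(minhash_input(A, k) & minhash_input(B, k))
    return delta / k

A = set(range(100))
B = set(range(50, 150))        # exact Jaccard index: 50/150 = 1/3
print(sim(A, B, k=400))        # close to 0.33, error roughly 1/sqrt(k)
```

Pairing each minimum with its index $i$ is what lets plain set-intersection cardinality count the per-hash-function matches $h_{min}^{(i)}(A)=h_{min}^{(i)}(B)$.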
It is easy to observe that, compared to the Jaccard index computation (Fig.~\ref{fig:sjacc}), the use of MinHash leads
to executing PSI-CA on smaller sets, as $k \ll \min(|A|, |B|)$. %
Communication and computation overhead only depend on $k$, since inputs to PSI-CA are now sets of $k$ items.
\subsection{Security Analysis}
\paragraph{Security of Privacy-Preserving Computation of Jaccard Index.}
Informally, by running the protocol in Fig.~\ref{fig:sjacc}, parties do not reciprocally
disclose the content of their private sets. \ensuremath{{\sf Alice}}\ learns similarity computed as the
Jaccard index, while both parties learn the size of the other party's input.
The security of the protocol in Fig.~\ref{fig:sjacc} relies on the security of the
instantiation of the underlying PSI-CA protocol. In particular, we assume that, in the
semi-honest model, PSI-CA only reveals $|A\cap B|$ to \ensuremath{{\sf Alice}}, while \ensuremath{{\sf Bob}}\ learns nothing besides $|A|$.
$\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ do not exchange any information besides messages related to the PSI-CA protocol.
For this reason, a secure implementation of the underlying PSI-CA guarantees that neither $\ensuremath{{\sf Alice}}$
nor $\ensuremath{{\sf Bob}}$ learn additional information about the other party's set.
Since knowledge of $(|A|, |B|, J(A,B))$ is equivalent to knowledge of $(|A|, |B|, |A\cap B|)$,
the protocol in Fig.~\ref{fig:sjacc} is secure in the semi-honest setting.
\paragraph{Security of Privacy-Preserving Approximation of Jaccard Index.} Similar to the
protocol in Fig.~\ref{fig:sjacc}, the security of protocol in Fig.~\ref{fig:spmin}
relies on the security of the underlying PSI-CA construction. In particular, $sim(A,B)$ is
defined as the size of the intersection between $\{\langle a_i,i\rangle\}_{i=1}^k$ and
$\{\langle b_i,i\rangle\}_{i=1}^k$, divided by a (public) constant $k$. Therefore, any
information \ensuremath{{\sf Alice}}\ and \ensuremath{{\sf Bob}}\ learn about the other party's input can also be used to break the
underlying PSI-CA protocol. Since the PSI-CA protocol is assumed to be secure, \ensuremath{{\sf Alice}}\ and \ensuremath{{\sf Bob}}\ do
not learn additional information.
Since $k$ is selected independently of $|A|$ and $|B|$, it does not reveal any
information about the two sets. PSI-CA, together with the way the input is constructed, conceals
the relationship between $k$ and $|A|,|B|$, since it does not disclose for how many pairs
$i\neq j$ it holds that $a_i=a_j$ or $b_i=b_j$ in the parties' inputs.
Therefore, the protocol does not disclose the size of \ensuremath{{\sf Alice}}'s and \ensuremath{{\sf Bob}}'s inputs.
\paragraph{Extension of Privacy-Preserving Approximation of Jaccard Index.}
In the previous protocol, $\ensuremath{{\sf Alice}}$ learns some additional information compared to the protocol in
Fig.~\ref{fig:sjacc}. In particular, rather than computing the similarity -- and therefore the
size of the intersection -- of sets $A$ and $B$, she determines how many elements from a
particular subset of $A$ (constructed using MinHash) also appear in the subset selected from
$B$. We now provide a brief overview of how this issue can be fixed efficiently.
$\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ can construct their input sets (i.e., $\{\langle a_i,i\rangle\}_{i=1}^k$ and $\{\langle b_i,i\rangle\}_{i=1}^k$) using a set of Oblivious Pseudo-Random Function (OPRF) evaluations\footnote{An Oblivious Pseudo-Random Function (OPRF) is a two-party protocol, involving one party, on input $key$, and another, on input $s$. At the end of the interaction, the former learns nothing, while the latter obtains $f_{key}(s)$, where $f$ is a pseudo-random function.} rather than a set of hash functions: $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ engage in a two-party protocol where $\ensuremath{{\sf Alice}}$ inputs her set $A = \{a_1,\ldots,a_v\}$ and learns a random permutation of OPRF$_{key_j}(a_1),\ldots,$OPRF$_{key_j}(a_v)$ for $1\leq j \leq k$. $\ensuremath{{\sf Alice}}$ constructs her input by selecting the smallest value OPRF$_{key_j}(a_i)$ for each $j$, while $\ensuremath{{\sf Bob}}$, who holds the keys, constructs his input without interacting with $\ensuremath{{\sf Alice}}$.
While the cost of this protocol is linear in the size of the input sets, it is significantly higher than that of the protocol in Fig.~\ref{fig:spmin}.
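The input construction alone can be illustrated as follows. HMAC-SHA256 stands in for the PRF, and the oblivious evaluation itself is elided: here Alice directly sees a shuffled list of PRF outputs, which is all she would learn after the OPRF interaction. Everything below is a sketch under these assumptions.

```python
import hashlib
import hmac
import random

K = 5                                             # number of MinHash functions k
keys = [bytes([j]) * 16 for j in range(K)]        # PRF keys, held by Bob

def prf(key, x):
    # Stand-in PRF; in the actual extension Alice evaluates it obliviously,
    # without learning the key.
    return hmac.new(key, str(x).encode(), hashlib.sha256).digest()

def alice_input(A):
    out = set()
    for j, kj in enumerate(keys):
        vals = [prf(kj, a) for a in A]
        random.shuffle(vals)                      # Alice only sees a permutation
        out.add((min(vals), j))                   # smallest OPRF value per j
    return out

def bob_input(B):
    # Bob knows the keys, so he computes his input without interaction
    return {(min(prf(kj, b) for b in B), j) for j, kj in enumerate(keys)}

# On identical sets, the two inputs coincide in all K slots:
assert len(alice_input({1, 2, 3}) & bob_input({1, 2, 3})) == K
```

Because the PRF values reach Alice shuffled, she learns the $k$ minima but not which element of $A$ produced each of them, which is the point of the extension.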
\subsection{Performance Evaluation}
\subsubsection{Privacy-Preserving Computation of Jaccard Index.}
The cost of the protocol in Fig.~\ref{fig:sjacc} is dominated by that of the underlying PSI-CA protocol. While it could be instantiated using {\em any} PSI-CA construction, we choose {\sf DGT12}\ in order to maximize efficiency. This protocol, reviewed in Section~\ref{subsec:psi-ca}, incurs linear communication and computational complexities, thus, the overall complexities of the protocol in Fig.~\ref{fig:sjacc} are also linear in the size of the sets. If we were to compute the Jaccard index without privacy,
asymptotic complexities would be the same as those of our privacy-preserving protocol -- i.e., linear.
However, given the lack of cryptographic operations,
constants hidden by the big $O(\cdot)$ notation would be much smaller. %
To assess the real-world practicality of the resulting construction, we implemented the protocol in Fig.~\ref{fig:sjacc} in C (with the OpenSSL and GMP libraries), using 160-bit random exponents and 1024-bit moduli to obtain 80-bit security.
We run experiments on sets such that ${|A|=|B|=1000}$. Recall that, in {\sf DGT12}, items are always hashed ({\sf DGT12}\ assumes ROM), so their size does not affect performance. We select SHA-1 as the hash function, thus, hashed items are 160 bits long.
In this setting, protocol in Fig.~\ref{fig:sjacc} incurs (i) $\mathbf{0.5s}$ total computation time
on a single Intel Xeon E5420 core running at 2.50GHz
and (ii) $\mathbf{276}$KB in bandwidth. We omit running times for larger sets since, as complexities are linear,
one can easily derive a meaningful estimate of time/bandwidth for virtually any size.
We also implement an optimized prototype that further improves total running time
by (1) pipelining computation and transmission and (2) parallelizing computation on two cores.
We test the prototype by running
$\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ on two PCs equipped with 2 quad-core Intel Xeon E5420 processors
running at 2.50GHz, however, we always use (at most) 2 cores.
On a conservative stance, we do not allow parties to perform any pre-computation offline.
We simulate a 9Mbps link, since, according to~\cite{49},
it represents the current average Internet bandwidth in the US and Canada.
In this setting, and again considering $|A|=|B|=1000$,
total running time of protocol in Fig.~\ref{fig:sjacc} amounts to $\mathbf{0.23s}$.
Whereas, the computation of Jaccard index {\em without privacy}
takes $\mathbf{0.018s}$.
Therefore, we conclude that privacy protection, in our experiments,
only introduces a (roughly) \textbf{12-fold slowdown}, independent of set sizes.
\paragraph{Comparison to Prior Work.}
The performance evaluation above does not include any prior solutions, since,
to the best of our knowledge, there is no comparable cryptographic primitive
for privacy-preserving computation of the Jaccard index.
The work in~\cite{singh2009privacy} is somewhat related:
it targets private computation of the Jaccard index using Private Equality Testing (PET)~\cite{lipmaa2003verifiable}
and deterministic encryption. However, it introduces the need for a non-colluding semi-honest {\em third party}, which violates our design model. Also, it incurs an impractical number of public-key operations,
i.e., {\em quadratic} in the size of sample sets (as opposed to linear in our case).
Finally, additional (only vaguely) related techniques include: (i) work on privately approximating dot product of two vectors, such as,~\cite{rulemining,PSDM04}, and (ii) probabilistic/approximated private set operations based on Bloom filters, such as,~\cite{rulemining,kerschbaum2012outsourced}. (None of these techniques, however, can be used to
solve the problems considered in this paper.)
\subsubsection{Privacy-Preserving Estimation of Jaccard Index with MinHash}
We also tested the performance of our construction for privacy-preserving {\em approximation} of Jaccard similarity, again, using {\sf DGT12}, i.e., the PSI-CA from~\cite{ePrint}.
Once again, we select sets with 1000 items, 1024-bit moduli, and 160-bit random exponents, and run experiments on two PCs with 2.5GHz CPUs and a 9Mbps link. We select $\mathbf{k=400}$ in order to have an estimated error of about $5\%$. (Recall that, as mentioned in Section~\ref{subsec:minhash}, the error is approximated as $1/\sqrt{k}$.)
In this setting, the total running time of the protocol in Fig.~\ref{fig:spmin} amounts to $\mathbf{0.09s}$ -- less than half that of the protocol in Fig.~\ref{fig:sjacc}.
In the same setting, the approximation of the Jaccard index {\em without privacy} takes $\mathbf{0.007s}$. Thus,
the slow-down factor introduced by the privacy-protecting layer (as with the protocol proposed in Section~\ref{sec:pj}) is {\bf 12-fold}.
Again, note that times for different values of $k$ can be easily estimated since the complexity of the protocol is linear.
\paragraph{Comparison to Prior Work.}
The estimation of set similarity through MinHash -- whether privacy-preserving
or not -- requires counting the number of times for which it holds that
$h_{min}^{(i)}(A)=h_{min}^{(i)}(B)$, with $i=1,\ldots,k$. We have denoted this number as $\delta$.
The protocol in Fig.~\ref{fig:spmin} above attains secure computation of $\delta$ through privacy-preserving
set intersection cardinality.
However, it appears somewhat more intuitive to do so by using the approach proposed
by~\cite{pino} in the context of social-network friends discovery.
Specifically, in~\cite{pino}, $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ compute, resp., $\{a_i\}_{i=1}^k$ and $\{b_i\}_{i=1}^k$, just like in our protocol.
Then, $\ensuremath{{\sf Alice}}$ generates a public-private keypair $(pk,sk)$ for Paillier's additively homomorphic encryption
cryptosystem~\cite{Pa99} and sends $\ensuremath{{\sf Bob}}$ $\{z_i = Enc_{pk}(a_i)\}_{i=1}^k$.
$\ensuremath{{\sf Bob}}$ computes $\{(z_i\cdot Enc_{pk}(-b_i))^{r_i}\}_{i=1}^k$ for random $r_i$'s and
returns the resulting vector of ciphertexts after shuffling it. Upon decryption, $\ensuremath{{\sf Alice}}$ learns
$\delta$ by counting the number of $0$'s. %
Nonetheless, the technique proposed by~\cite{pino} actually incurs
an increased complexity, compared to our protocol in Fig.~\ref{fig:spmin}
(instantiated with {\sf DGT12}).
Assuming 80-bit security parameters, thus, 1024-bit moduli and
160-bit subgroups, and 2048-bit Paillier moduli,
and using \mbox{\textsf{\em m}}\ to denote a multiplication of 1024-bit numbers,
multiplications of 2048-bit numbers
count for 4\mbox{\textsf{\em m}}. Using square-and-multiply, exponentiations with $q$-bit exponents modulo 1024-bit count for $(1.5|q|)\mbox{\textsf{\em m}}$.
In~\cite{pino}, $\ensuremath{{\sf Alice}}$ performs $k$ Paillier encryptions (i.e., $2k$ exponentiations and $k$ multiplications) and $k$ decryptions (i.e., $k$ exponentiations and $k$ multiplications), while $\ensuremath{{\sf Bob}}$ computes $3k$ exponentiations and $2k$ multiplications. Therefore, the total computation complexity amounts to $(6\cdot4\cdot1.5\cdot1024+4\cdot4)k\mbox{\textsf{\em m}} = 36,880\mbox{\textsf{\em m}} k$.
Whereas, our approach (even without pre-computation)
requires $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ to perform, in total, $4k$ exponentiations of 160-bit numbers modulo 1024-bit moduli
and $2k$ multiplications, i.e., $(4\cdot1.5\cdot160+2)k\mbox{\textsf{\em m}} = 962 \mbox{\textsf{\em m}} k$, thus, our protocol
achieves a 38-fold efficiency improvement.
Communication overhead is also higher in~\cite{pino}: it amounts to $(2\cdot2048)k$ bits; whereas,
using PSI-CA, we need to transfer $(2\cdot1024+160)k$ bits, i.e., slightly more than half the traffic.
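The operation counts above can be checked mechanically; the sketch below expresses all costs in the paper's unit \mbox{\textsf{\em m}}\ (one 1024-bit modular multiplication):

```python
# Cost unit: m = one multiplication of 1024-bit numbers.
M2048 = 4                                  # one 2048-bit multiplication = 4m

def exp_cost(exp_bits, mult_unit):
    return 1.5 * exp_bits * mult_unit      # square-and-multiply

# [pino]: per item, 6 exponentiations (1024-bit exponents mod 2048-bit)
# and 4 multiplications of 2048-bit numbers.
pino = 6 * exp_cost(1024, M2048) + 4 * M2048
assert pino == 36880                       # i.e., 36,880 k m in total

# Ours: per item, 4 exponentiations (160-bit exponents mod 1024-bit)
# and 2 multiplications.
ours = 4 * exp_cost(160, 1) + 2
assert ours == 962                         # i.e., 962 k m in total

print(round(pino / ours, 1))               # -> 38.3 (the ~38-fold improvement)

# Communication per item, in bits: ours is slightly more than half
assert (2 * 1024 + 160) / (2 * 2048) < 0.55
```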
\section{Privacy-Preserving Document Similarity}\label{sec:docsimilarity}
After building efficient (linear-complexity) primitives for privacy-preserving
computation/approximation of Jaccard index, we now explore their applications
to a few compelling problems. %
We start with evaluating the similarity of two documents, which is relevant in
many common applications, including copyright protection, file management, plagiarism prevention,
duplicate submission detection, and law enforcement.
In the last few years, the security community has started investigating privacy-preserving techniques to enable
detection of similar documents without disclosing documents' actual content. Below, we first review prior work and,
then, present our technique for efficient privacy-preserving document similarity.
\subsection{Prior Work}
The work in~\cite{jiang2008similar} (later extended in~\cite{MurugesanVLDB2010})
is the first to realize {\em privacy-preserving document similarity}.
It provides secure computation of the \emph{cosine similarity}
of vectors representing the documents, i.e., each document is represented
as the list of words appearing in it, along with the normalized number of occurrences.
Recently, Jiang and Samanthula~\cite{JiSa} have proposed a novel technique
relying on the Jaccard index and $N$-gram based document representation~\cite{manber1994finding}.
(Given any string, an $N$-gram is a substring of size $N$).
According to~\cite{JiSa}, the $N$-gram based technique presents several advantages over
cosine similarity: (1) it improves on finding {\em local similarity}, e.g., overlapping of
pieces of texts, (2) it is language-independent, (3) it requires a much simpler
representation, and (4) it is less sensitive to document modification. We overview it below.
\paragraph{Documents as sets of {\em N}-grams.} A document can be represented as a set of $N$-grams contained in it.
To obtain such a representation, one needs to remove spaces and punctuation
and build the set of successive $N$-grams in the document.
An example of a sentence, along with its $N$-gram representation (for $N=3$), is illustrated in Fig.~\ref{fig:fox}.
The similarity of two documents can then be estimated as the {\em Jaccard index} of the two
corresponding sets of $N$-grams.
In the context of document similarity, $N=3$ is considered a good
choice~\cite{broder1997resemblance}.
\begin{figure}[ht]
\centering
\fbox{
\begin{tabular}{c}
{\em the quick brown fox jumps over the lazy dog}\\
$\downarrow$\\
\{azy, bro, ckb, dog, ela, equ, ert, fox, hel, heq, ick,
jum, kbr, laz, mps, nfo,\\
ove, own, oxj, pso, qui, row,
rth, sov, the, uic, ump, ver, wnf, xju, ydo, zyd\}
\end{tabular}
}
\vspace{-0.2cm}
\caption{Tri-gram representation.}\label{fig:fox}
\end{figure}
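The construction of the tri-gram set can be sketched as follows (assuming, for illustration, that documents are normalized to lower-case letters and digits):

```python
import re

def trigram_set(doc, n=3):
    # Remove spaces/punctuation, keep lower-case letters and digits,
    # then collect the set of all successive n-grams.
    s = re.sub(r"[^a-z0-9]", "", doc.lower())
    return {s[i:i + n] for i in range(len(s) - n + 1)}

T = trigram_set("the quick brown fox jumps over the lazy dog")
assert {"azy", "heq", "fox", "zyd"} <= T
assert len(T) == 32          # 33 successive tri-grams; 'the' occurs twice

def jaccard(A, B):           # similarity of two tri-gram sets
    return len(A & B) / len(A | B)
```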
To enable privacy-preserving computation of Jaccard index, and therefore %
estimation of document similarity, Jiang and Samanthula~\cite{JiSa} propose a two-stage protocol based
on {\em Paillier}'s additively homomorphic encryption~\cite{Pa99}.
Suppose $\ensuremath{{\sf Alice}}$ wants to privately evaluate the similarity of her document $D_A$ against
a list of $n$ documents held by $\ensuremath{{\sf Bob}}$, i.e., $D_{B:1}, \ldots, D_{B:n}$.
First, $\ensuremath{{\sf Bob}}$ generates a global space, $S$, of tri-grams based on his document collection.
This way, $D_A$, as well as each of $\ensuremath{{\sf Bob}}$'s documents, $D_{B:i}$, for $i=1,\ldots,n$, can be represented as binary vectors
in the global space of tri-grams: each component is 1 if the corresponding tri-gram
is included in the document and 0 otherwise. In the following, we denote with $A$ the representation of $D_A$ and
with $B_i$ that of $D_{B:i}$.
Then, $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ respectively compute random shares
$a$ and $b_i$ such that $a+b_i = |A\cap B_i|$. Next, they set $c = |A| - a$ and $d_i = |B_i|-b_i$.
Finally, $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$, on input $(a,c)$ and
$(b_i,d_i)$, resp., execute a Secure Division protocol (e.g.,~\cite{bunn2007secure,AtallahBLFT04})
to obtain $(a+b_i)/(c+d_i) = |A\cap B_i|/|A\cup B_i| = J(A,B_i)$.
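The share arithmetic underlying this step can be checked on a toy instance; the sketch below computes the shares in the clear, whereas the actual protocol derives them via secure computation:

```python
import random

A, Bi = {1, 2, 3, 4, 5}, {4, 5, 6}
inter = len(A & Bi)                 # secret-shared in the actual protocol

b = random.randrange(-100, 100)     # Bob's random share
a = inter - b                       # Alice's share: a + b = |A n B_i|

c = len(A) - a                      # Alice's share of the union size
d = len(Bi) - b                     # Bob's share: c + d = |A u B_i|

# The secure-division step then outputs exactly the Jaccard index:
assert (a + b) / (c + d) == len(A & Bi) / len(A | Bi)
```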
The computational complexity of the protocol in~\cite{JiSa}
amounts to $O(|S|)$ Paillier encryptions performed by $\ensuremath{{\sf Alice}}$, and $O(n\cdot |S|)$ modular multiplications
-- by $\ensuremath{{\sf Bob}}$. Whereas, communication overhead amounts to $O(n\cdot |S|)$ Paillier ciphertexts. %
\paragraph{Flaw in~\cite{JiSa}.} Unfortunately, the protocol in~\cite{JiSa} {\em is not secure}, since
$\ensuremath{{\sf Bob}}$ has to disclose his global space of tri-grams (i.e., the set of all tri-grams appearing in his document
collection). Therefore, $\ensuremath{{\sf Alice}}$ can passively check whether or not a word appears in $\ensuremath{{\sf Bob}}$'s document
collection. Actually, $\ensuremath{{\sf Alice}}$ can learn much more, as we show in Appendix~\ref{app:flaw}.
We argue that this flaw could be fixed by considering the global space of tri-grams
as the set of all possible tri-grams, thus, avoiding the disclosure of $\ensuremath{{\sf Bob}}$'s tri-grams set.
Assuming that documents are stripped of any symbol and contain
only lower-cased letters and digits, we obtain $S=\{a,b,\ldots,z, 0,1,\ldots,9\}^3$.
Unfortunately, this modification would tremendously increase computation and communication
overhead. %
\subsection{Our Construction}
As discussed in Section~\ref{sec:new}, we can realize privacy-preserving
computation of the Jaccard index using PSI-CA.
To privately evaluate the similarity of documents $D_A$ and any document $D_{B:i}$, $\ensuremath{{\sf Alice}}$ and
$\ensuremath{{\sf Bob}}$ execute the protocol in Fig.~\ref{fig:ppds}. Function $\mbox{Tri-Gram}(\cdot)$
denotes the representation of a document as the set of tri-grams appearing in it.
\begin{figure}[bth]
\centering
\hspace{1cm}
\fbox{\small
\begin{minipage}{0.61\columnwidth}
\begin{tabular}{lcl}
\hspace{-0.05cm}$\ensuremath{{\sf Alice}}$ $(D_A)$ & &\hspace{0.3cm} $\ensuremath{{\sf Bob}}$ $(D_{B:i})$ \vspace{-0.2cm}\\
\multicolumn{3}{c}{\hspace{-0.3cm}\line(1,0){293.5} \vspace{0.1cm}}\\
\hspace{-0.05cm}$A\leftarrow\mbox{Tri-Gram}(D_A)$ & & \hspace{0.3cm} $B_i\leftarrow\mbox{Tri-Gram}(D_{B:i})$\hspace{0.2cm}\\
\hspace{3.99cm} \Big\{ \Big. \vspace{-0.69cm} & & \hspace{-0.55cm} \Big. \Big\}\\
&\hspace{-0.62cm}$\xymatrix@1@=45pt{\ar[r]^*+<4pt>{}&}$\vspace{0.02cm}\\
&\hspace{-0.62cm}$\xymatrix@1@=45pt{& \ar[l]^* +<4pt>{}_*+<4pt>{}}$ & \vspace{-0.2cm}\\
\\
\hspace{-0.05cm}$|A\cap B_i| \leftarrow \mbox{PSI-CA}(A,B_i)$\vspace{-0.05cm}\\
\end{tabular}
\vspace{0.2cm}
\hspace{0.14cm}{\bf Output Similarity as} %
$J(A,B_i)=\dfrac{|A\cap B_i|}{|A|+|B_i|-|A\cap B_i|}$ %
\end{minipage}
}
\caption{Privacy-preserving {\em evaluation} of document similarity of documents $D_A$ and $D_{B:i}$.}
\label{fig:ppds}
\end{figure}
\paragraph{Complexity.} The complexity of the protocol in Fig.~\ref{fig:ppds}
is bounded by that of the underlying PSI-CA construction.
Using {\sf DGT12}, computational complexity amounts to $O(|A|+|B_i|)$ modular
exponentiations, whereas, communication overhead -- to $O(|A|+|B_i|)$. %
Observe that, in the setting where $\ensuremath{{\sf Alice}}$ holds one document and $\ensuremath{{\sf Bob}}$ a collection of $n$ documents,
complexities should be amended to $O(n|A|+\sum_{i=1}^n |B_i|)$.
However, due to the nature of {\sf DGT12}, $\ensuremath{{\sf Bob}}$ can perform $O(\sum_{i=1}^n |B_i|)$ computation {\em off-line}, ahead of time.
Hence, total {\em online} computation amounts to $O(n|A|)$.
\paragraph{More efficient computation using MinHash.} As discussed in Section~\ref{subsec:minhash}, one can approximate
the Jaccard index by using MinHash techniques, thus trading off accuracy for a significant improvement in protocol
complexity. The resulting construction is similar to the one presented above and is illustrated
in Fig.~\ref{fig:app-ppds}. It adds an intermediate step between the tri-gram representation and the
execution of PSI-CA: $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ apply MinHash to sets $A$ and $B_i$, respectively, and obtain $h_k(A)$ and $h_k(B_i)$.
The main advantage results from the fact that PSI-CA is now executed on smaller sets, of constant size $k$,
thus, achieving significantly improved communication and computational complexities.
Again, note that the error is bounded by $O(1/\sqrt{k})$.
\begin{figure}[t!]
\centering
\hspace{1cm}
\fbox{\small
\begin{minipage}{0.62\columnwidth}
\begin{tabular}{lcl}
\hspace{-0.05cm}$\ensuremath{{\sf Alice}}$ $(D_A)$ & &\hspace{-3.8cm} $\ensuremath{{\sf Bob}}$ $(D_{B:i})$ \vspace{-0.15cm}\\
\multicolumn{3}{c}{\hspace{-0.5cm}\line(1,0){299} \vspace{0.1cm}}\\
\hspace{-0.05cm}$A\leftarrow\mbox{Tri-Gram}(D_A)$ & & \hspace{-3.8cm} $B_i\leftarrow\mbox{Tri-Gram}(D_{B:i})$\vspace{0.1cm}\\
\hspace{-0.05cm}$h_k(A)\leftarrow\mbox{MinHash}(A)$ & & \hspace{-3.8cm} $h_k(B_i)\leftarrow\mbox{MinHash}(B_i)$\hspace{-0.35cm}\mbox{ }\vspace{0.1cm}\\
\hspace{3.95cm} \Big\{ \Big. \vspace{-0.7cm} & & \hspace{-4.55cm} \Big. \Big\}\\
&\hspace{-9.6cm}$\xymatrix@1@=45pt{\ar[r]^*+<4pt>{}&}$\vspace{-0.0cm}\\
&\hspace{-9.6cm}$\xymatrix@1@=45pt{& \ar[l]^*+<4pt>{}_*+<4pt>{}}$ & \vspace{-0.25cm}\\
\hspace{-0.05cm}$|h_k(A)\cap h_k(B_i)|\leftarrow$\vspace{0.05cm}\\
$\;\;\;\;\;\mbox{PSI-CA}(h_k(A),h_k(B_i))$\vspace{0.15cm}\\
\hspace{-0.05cm}{\bf Output Similarity Approximation as}: %
$sim(A,B_i)=\dfrac{|h_k(A)\cap h_k(B_i)|}{k}$ %
\end{tabular}
\end{minipage}
}
\vspace{-0.2cm}
\caption{Privacy-preserving {\em approximation} of document similarity of documents $D_A$ and $D_{B:i}$.}
\label{fig:app-ppds}
\vspace{-0.2cm}
\end{figure}
\begin{table}[b!]
{\small
\centerline{
\begin{tabular}{|c|c|c|c|c|} \hline
\multirow{2}[2]{*} {$n$} &
\multirow{2}[2]{*} {\cite{JiSa}} &
\multirow{2}[2]{*} {Fig.~\ref{fig:ppds}} &
\multicolumn{2}{c|}{Fig.~\ref{fig:app-ppds}} \\ \cline{4-5}
& & & $k=100$ & $k=40$\\ \hline
$10$ & 9.5 mins & 6.3 secs & 0.05 secs & 0.05 secs\\
$10^2$ & 9.9 mins & 63 secs & 1.9 secs & 1.9 secs\\
$10^3$ & 12.7 mins & 10.4 mins & 48 secs & 19.2 secs\\
$10^4$ & 40.7 mins & 1.74 hours & 8 mins & 3.2 mins\\
$10^5$ & 5.3 hours & 17.4 hours & 1.2 hours & 32 mins \\
\hline
\end{tabular}
\vspace{-0.2cm}
}
\caption{Computation time of privacy-preserving document similarity. $n$ denotes the number of documents.}
\label{tab:expp}
}
\end{table}
\paragraph{Performance Evaluation.}
We now compare the performance of our constructions to the most efficient prior technique, i.e., the protocol
in~\cite{JiSa} (which, unfortunately, is insecure).
We consider the setting of~\cite{JiSa}, where $\ensuremath{{\sf Bob}}$ maintains a collection of $n$ documents.
Recall that our constructions use {\sf DGT12}, i.e., the PSI-CA in~\cite{ePrint}. Assuming 80-bit security parameters,
we select 1024-bit moduli and 160-bit random exponents. As \cite{JiSa} relies on Paillier encryption,
it uses 2048-bit moduli and 1024-bit exponents. %
In the following, let \mbox{\textsf{\em m}}\ denote a multiplication of 1024-bit numbers. Multiplications of 2048-bit numbers
count for 4\mbox{\textsf{\em m}}. Modular exponentiations with $q$-bit exponents modulo 1024-bit count for $(1.5|q|)\mbox{\textsf{\em m}}$.
The protocol in~\cite{JiSa} requires $O(|S|)$ Paillier encryptions and $O(n\cdot|S|)$ modular multiplications.
We need $|S|=36^3=46,656$ as we consider 3-grams and 26 alphabet letters plus [0--9] digits.
Therefore, the total complexity amounts to $(4\cdot 1.5\cdot 1024 + 4n)|S|\mbox{\textsf{\em m}} = (6144+4n)|S|\mbox{\textsf{\em m}} \approx
(2.9\cdot 10^8 + 1.9\cdot 10^5n)\mbox{\textsf{\em m}}$.
Our construction above requires $(2\cdot1.5\cdot160\cdot n\cdot\max(|A|,\max_{i}|B_i|))\mbox{\textsf{\em m}}$
for the computation of the Jaccard index and $(1.5\cdot160\cdot nk)\mbox{\textsf{\em m}}$ for its approximation.
Thus, to compare performance of our protocol to that of~\cite{JiSa}, we need to take into account the dimensions of $A$, $B_i$, as well as $n$ and $k$.
To this end, we collected 393 scientific papers from the KDDcup dataset of scientific papers published
in ArXiv between 1996 and 2003~\cite{kdd}. The average number of different tri-grams
appearing in each paper is 1307.
Therefore, the cost of our two techniques can be estimated as $(2\cdot1.5\cdot160\cdot1307\cdot n)\mbox{\textsf{\em m}}$
and $(1.5\cdot160\cdot nk)\mbox{\textsf{\em m}}$, respectively.
Thus, our technique for privacy-preserving document similarity is faster than~\cite{JiSa}
for $n<2000$. Furthermore, the MinHash-based variant is always faster
(by at least one order of magnitude), for both $k=40$ and $k=100$.
Also, recall that, as opposed to ours, the protocol in~\cite{JiSa} is {\bf\em not} secure.
Assuming that it takes about 1$\mu s$ to perform a modular multiplication of 1024-bit integers (as per our experiments
on a single Intel Xeon E5420 core running at 2.50GHz),
we report estimated running times in Table~\ref{tab:expp}
for increasing values of $n$ (i.e., the number of $\ensuremath{{\sf Bob}}$'s documents).
We also performed a statistical analysis to determine the actual magnitude of the error
introduced by MinHash, compared to the exact Jaccard index.
Our analysis is based on the tri-grams from documents in the KDDcup dataset~\cite{kdd}, and confirms that the average error is within the expected bounds: for $k=40$, we obtained an average error of 14\%, while for $k=100$ the average error was 9\%.
This is acceptable, considering that the Jaccard index actually provides a normalized {\em estimate} of the similarity
between two sets, not a definite metric.
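The error analysis above can be reproduced on any pair of documents with a short, non-private sketch: compute tri-gram sets, the exact Jaccard index, and a multi-hash MinHash estimate with $k$ independent salted hash functions. The sample documents and function names below are illustrative assumptions, not part of the protocol.

```python
import random

def trigrams(text):
    """Set of character tri-grams of a document."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

def jaccard(a, b):
    """Exact Jaccard index of two sets."""
    return len(a & b) / len(a | b)

def minhash_sig(s, k, seed=0):
    """Multi-hash MinHash signature: the minimum of k independent salted hashes."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(k)]
    return [min(hash((salt, x)) for x in s) for salt in salts]

def minhash_sim(a, b, k=100):
    """Fraction of matching signature entries; estimates the Jaccard index."""
    sa, sb = minhash_sig(a, k), minhash_sig(b, k)
    return sum(x == y for x, y in zip(sa, sb)) / k

doc1 = "privacy preserving sample set similarity via minhash"
doc2 = "privacy preserving document similarity via minhash"
A, B = trigrams(doc1), trigrams(doc2)
print(jaccard(A, B), minhash_sim(A, B, k=100))
```

The gap between the two printed values stays within the expected $O(1/\sqrt{k})$ error bound.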
\section{Privacy-Preserving Iris Matching}\label{sec:iris}
Advances in biometric recognition enable the use of biometric data
as a practical means of authentication and identification. Today,
several governmental agencies around the world perform large-scale collections of different biometric features.
As an example, the US Department of Homeland Security (DHS) collects face, fingerprint, and iris images
from visitors within its US-VISIT program~\cite{usvisit}. Similarly, the United Arab Emirates (UAE)
Ministry of Interior collects iris images from all foreigners, as well as fingerprints and photographs
from certain types of travelers~\cite{uaei}.
While biometry serves as an excellent mechanism for
identification of individuals, biometric data is, undeniably,
extremely sensitive and must be subject to minimal exposure.
As a result, such data cannot be disclosed arbitrarily. %
Nonetheless, there are many legitimate scenarios where biometric data
should be shared, in a controlled way, between different entities.
For instance, an agency may need to determine whether a given biometric appears on a government watch-list.
As agencies may have different clearance levels, the privacy of the biometric's owner should be preserved if no match is found; at the same time, unrestricted access to the watch-list cannot be granted.
\subsection{Prior Work}
As biometric identification techniques are increasingly employed,
related privacy concerns have been investigated by the research community~\cite{CimatoGPSS09}.
A number of recent results address the problem of
privacy-preserving face recognition. The work in~\cite{erk09} is the first to present
a secure protocol, based on Eigenfaces, later improved by~\cite{sad09}.
Next,~\cite{scifi} designs a new privacy-preserving face recognition algorithm, called SCiFI. %
Furthermore, the protocol in~\cite{bar10} realizes privacy-preserving fingerprint identification, using FingerCodes~\cite{jai00}. FingerCodes use texture information from a fingerprint to compare two biometrics. The algorithm is not as discriminative as traditional fingerprint matching techniques based on the location of minutiae points, but it is adopted
in~\cite{bar10} given its suitability for efficient {\em privacy-preserving} realization. %
Among all biometric techniques, this paper focuses on {\em iris-based} identification.
The problem of privacy-preserving iris matching was introduced by Blanton and Gasti
in~\cite{Blanton11esorics}, who propose an approach
based on a combination of garbled circuits~\cite{yao86} and homomorphic encryption.
\subsection{Our Construction}
A (human) iris can be digitized as an $n$-bit string $S=s_1s_2\cdots s_n$ with an $n$-bit mask
$M_S = ms_1ms_2\cdots ms_n$. The mask indicates which bits of $S$ have been read reliably.
In particular, the $i$-th bit of $S$ should be used for matching only if the $i$-th bit of $M_S$ is set to 1.
A common value for $n$, which we use in our experiments, is 2048.
As the subject may rotate their head during an iris scan, a right or left shift can occur
in the iris representation, depending on the direction of the rotation.
Therefore, the distance between two irises $A$ and $B$ is computed as the minimum distance between all rotations of $A$ and the representation of $B$.
In practice, it is reasonable to assume that the rotation is limited to a shift of at most 5 positions towards left/right \cite{Blanton11esorics}.
The matching between two irises, $A$ and $B$, is computed via the {\em Weighted Hamming Distance} ($\mathrm{WHD}$)
of the samples.
Let $M = (M_A \wedge M_B)$; $\mathrm{WHD}$ is computed as:\vspace*{-0.1cm}
\begin{equation} \label{eq:hd}
\mathrm{WHD}(A, B, M) = \frac{HD(A \wedge M, B \wedge M) } {\|M\|}
\vspace{-0.1cm}
\end{equation}
where $\| \cdot \|$ denotes Hamming weight, i.e., the number of string bits set to 1.
Given a threshold $t$, if $\mathrm{WHD}(A, B, M)<t$, we say that irises $A$ and $B$ are
{\em matching}. (Assuming a maximum rotation of 5 positions, the distance must be computed 11 times.)
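The (non-private) computation just described can be sketched in a few lines; bit strings are represented as Python lists of 0/1, and the function names are ours. The sketch assumes the combined mask has at least one bit set.

```python
def whd(a, b, mask_a, mask_b):
    """Weighted Hamming Distance of equal-length bit lists with validity masks."""
    m = [x & y for x, y in zip(mask_a, mask_b)]   # combined mask M = M_A AND M_B
    valid = sum(m)                                # ||M||, assumed > 0
    diff = sum(mi & (x ^ y) for mi, x, y in zip(m, a, b))
    return diff / valid

def rotated_whd(a, b, mask_a, mask_b, max_shift=5):
    """Minimum WHD over left/right rotations of A by up to max_shift positions
    (11 evaluations for max_shift = 5)."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        ra = a[s:] + a[:s]                        # rotate the iris code ...
        rm = mask_a[s:] + mask_a[:s]              # ... together with its mask
        best = min(best, whd(ra, b, rm, mask_b))
    return best
```

Two identical irises (or two irises that differ only by a rotation within the allowed range) yield a distance of 0, and are declared matching for any threshold $t>0$.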
In the following, we propose a probabilistic technique for privately
estimating $\mathrm{WHD}(A,B,M)$, which relies on the construction for privacy-preserving estimation of the Jaccard index based on MinHash (introduced in Section~\ref{sec:pj}).
The error of the approximation is bounded by the MinHash parameter $k$.
The proposed protocol is illustrated in Fig.~\ref{fig:app-iris}. %
Given any two $n$-bit strings $X = \{x_1, \ldots, x_n\}$ and $Y = \{y_1, \ldots, y_n\}$ and a list of $k$ values $R=(r_1, \ldots, r_k)$,
with $r_i \in [1,n]$, $r_i \neq r_j$ for all $i\neq j$, we define probabilistic function $\mathrm{Extract}_{R}:\{0,1\}^n \times \{0,1\}^n \rightarrow (\{0,1\} \times \{1,\ldots,n\})^k$ as:
$$\mathrm{Extract}_{R}(X,Y) = \{w_{r_1},\ldots,w_{r_k}\}, \mbox{where }
w_{r_i} = \left\{
\begin{array}{ll}
\langle x_{r_i}, r_i \rangle & \mbox{if } y_{r_i}=1 \\
\langle r, r_i \rangle \mathrm{\ with\ } r\leftarrow\{0,1\}^\tau & \mbox{otherwise}
\end{array}
\right.\vspace{0.1cm}
$$
where $x \leftarrow Y$ represents uniform random sampling of element $x$ from set $Y$.
Intuitively, for each value $r_i$ in $R$, $\mathrm{Extract}_{R}(X,Y)$ selects the $r_i$-th bit of $X$ and encodes it with $r_i$ (e.g., concatenates the two), if the $r_i$-th bit of $Y$ is 1. If the $r_i$-th bit of $Y$ is 0, the function selects a random value and encodes it with $r_i$.
Given $A, M_A, B, M_B$, $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ privately estimate $\mathrm{WHD}(A, B, M_A \wedge M_B)$ as follows:
\begin{itemize}
\item[$\bullet$] $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ negotiate $R$.
\item[$\bullet$] $\ensuremath{{\sf Alice}}$ computes $C_M=\mathrm{Extract}_R(M_A, M_A)$; $\ensuremath{{\sf Bob}}$ computes $S_M=\mathrm{Extract}_R(M_B,M_B)$.
\item[$\bullet$] $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ engage in a PSI-CA protocol where their inputs are $C_M$ and $S_M$ and $\ensuremath{{\sf Alice}}$ learns the output $c_1$ of PSI-CA. $c_1$ corresponds to the number of bits set to 1 in both $M_A$ and $M_B$ at indices specified by $R$.
\item[$\bullet$] $\ensuremath{{\sf Alice}}$ computes $C = $ Extract$_R(A,M_A)$; similarly, $\ensuremath{{\sf Bob}}$ computes $S = $ Extract$_R(B,M_B)$.
\item[$\bullet$] $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ interact in a PSI-CA protocol with input $C$ and $S$ respectively; at the end of the protocol,
$\ensuremath{{\sf Alice}}$ learns $c_2$, i.e., the size of the intersection of the subsets of $A$ and $B$ defined by $R$.
\item[$\bullet$] Biometric $A$ matches $B$ iff $(c_1-c_2)/c_1<t$.
\end{itemize}
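The steps above can be simulated without the cryptographic layer, replacing the PSI-CA executions by plain set intersections. This is an illustrative, non-private sketch (the function names are ours): among the $c_1$ sampled positions that are valid in both masks, $c_2$ bits agree, so the fraction $(c_1-c_2)/c_1$ of differing valid bits estimates the WHD; the random fill-in values collide only with probability about $2^{-\tau}$.

```python
import random

def extract(x, y, R, tau=64):
    """Extract_R(X, Y): pair x[r] with its index r where y[r] = 1,
    and a fresh random tau-bit value with r otherwise."""
    out = set()
    for r in R:
        if y[r] == 1:
            out.add((x[r], r))
        else:
            out.add((random.getrandbits(tau), r))  # matches only w/ prob ~2^-tau
    return out

def estimate_whd(a, mask_a, b, mask_b, k=25):
    """Non-private sketch: PSI-CA replaced by a plain set intersection.
    Assumes at least one sampled position is valid in both masks (c1 > 0)."""
    n = len(a)
    R = random.sample(range(n), k)  # negotiated list of distinct indices
    c1 = len(extract(mask_a, mask_a, R) & extract(mask_b, mask_b, R))
    c2 = len(extract(a, mask_a, R) & extract(b, mask_b, R))
    return (c1 - c2) / c1           # differing bits / valid bits in the sample
```

For two identical iris codes with full masks the estimate is exactly 0, and for complementary codes it is exactly 1, as expected.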
\begin{figure}[ttt!]
\centering
\fbox{\small
\centering
\begin{minipage}{0.625\columnwidth}
\begin{tabular}{lcl}
\hspace{0.02cm}$\ensuremath{{\sf Alice}}$ $(A)$ & &\hspace{0.1cm} $\ensuremath{{\sf Bob}}$ $(B)$ \vspace{-0.2cm}\\
\multicolumn{3}{c}{\hspace{-0.3cm}\line(1,0){300} \vspace{0.1cm}}\\
\hspace{0.02cm}$A, M_A$ (iris, mask) & & \hspace{0.1cm} $B,M_B$ (iris, mask)\vspace{0.1cm}\\
\hspace{0.02cm}$R={(r_1, \ldots, r_k)}$ & & \hspace{0.1cm} $R={(r_1, \ldots, r_k)}$ \vspace{0.1cm}\\
\hspace{0.02cm}$C_M = \mathrm{Extract}_R(M_A, M_A)$ & & \hspace{0.1cm} $S_M=\mathrm{Extract}_R(M_B, M_B)$\hspace{-0.35cm}\\
\hspace{3.9cm} \Big\{ \Big. \vspace{-0.7cm} & & \hspace{-0.55cm} \Big. \Big\}\\
&\hspace{-0.6cm}$\xymatrix@1@=45pt{\ar[r]^*+<4pt>{}&}$\vspace{-0.0cm}\\
\hspace{0.02cm}$c_1 \leftarrow \;\mbox{PSI-CA}(C_M,S_M)$ &\hspace{-0.6cm}$\xymatrix@1@=45pt{& \ar[l]^*+<4pt>{}_*+<4pt>{}}$ & \vspace{0.15cm}\\
\vspace{0.4cm}\\
\hspace{0.02cm}$C=\mathrm{Extract}_R(A,M_A)$ & & \hspace{0.1cm} $S=\mathrm{Extract}_R(B,M_B)$\hspace{-0.35cm}\mbox{ }\vspace{0.1cm}\\
\hspace{3.9cm} \Big\{ \Big. \vspace{-0.7cm} & & \hspace{-0.55cm} \Big. \Big\}\\
&\hspace{-0.6cm}$\xymatrix@1@=45pt{\ar[r]^*+<4pt>{}&}$\vspace{-0.0cm}\\
\hspace{0.02cm}$c_2 \leftarrow \;\mbox{PSI-CA}(C,S)$ &\hspace{-0.6cm}$\xymatrix@1@=45pt{& \ar[l]^*+<4pt>{}_*+<4pt>{}}$ & \vspace{0.35cm}\\
\multicolumn{3}{l}{\hspace{0.02cm}{\bf Output match as $(c_1-c_2)/c_1<t$}} %
\end{tabular}
\end{minipage}
}
\vspace{-0.15cm}
\caption{Privacy-preserving iris matching of biometric $A$ and $B$.}%
\label{fig:app-iris}
\end{figure}
\subsection{Comparison to prior work}
We now compare our technique for privacy-preserving iris matching to prior work, namely the
protocol in~\cite{Blanton11esorics}, which, at the time of writing, provides the best performance for privacy-preserving comparison of iris codes.
First, observe that the protocol in Fig.~\ref{fig:app-iris} estimates the Weighted Hamming Distance
with bounded error, whereas the construction in~\cite{Blanton11esorics} computes it exactly.
However, as we discuss below, the error incurred by our technique is low enough for practical use, while the computational complexity is reduced.
In fact, our probabilistic protocol could be used to perform a fast, preliminary test: if differences between two irises are significant, then there is no need for further tests. Otherwise, the two parties can engage in the protocol in \cite{Blanton11esorics} to obtain (in a privacy-preserving way) a precise result. %
Next, as opposed to the technique in \cite{Blanton11esorics}, $\ensuremath{{\sf Alice}}$ also learns an estimate of the number of bits set to 1 in the combined mask $M_A \wedge M_B$, but not their positions.
However, this information is not sensitive; thus, it does not leak any information about the iris sampled by $\ensuremath{{\sf Alice}}$ or $\ensuremath{{\sf Bob}}$.
\paragraph{Optimization.} As discussed above, it is reasonable to assume that $\ensuremath{{\sf Bob}}$ (e.g., the Department of Homeland Security) holds a database with a large number of biometric samples, whereas, $\ensuremath{{\sf Alice}}$ (e.g., the Transportation Security Administration) has only one or few samples that she is searching
in $\ensuremath{{\sf Bob}}$'s database. To this end, we now show how the protocol in Fig.~\ref{fig:app-iris} can be optimized,
by pre-computing several expensive operations offline, for such a scenario.
Note that $\ensuremath{{\sf Bob}}$ can perform the offline phase of {\sf DGT12}\ (see Fig.~\ref{fig:psi-ca}) on all bits
of his biometric samples: unlike the protocol in~\cite{Blanton11esorics}, this is required {\em
only once}, independently of the number of interactions between $\ensuremath{{\sf Bob}}$ and any user.
After negotiating with $\ensuremath{{\sf Bob}}$ the values $R=(r_1, \ldots, r_k)$, and before receiving her input, $\ensuremath{{\sf Alice}}$ pre-computes $k$ pairs $\langle\alpha_{0,i}=H(\langle 0,r_i\rangle)^{R'_c}, \alpha_{1,i}= H(\langle 1,r_i\rangle)^{R'_c}\rangle$. (This assumes the use of {\sf DGT12}.)
Once $\ensuremath{{\sf Alice}}$'s mask becomes available, she constructs the corresponding encrypted representation by simply selecting the appropriate element from each pair.
Similarly, she computes $k$ triples $(\alpha_{0,i}, \alpha_{1,i}, \alpha_{\rho,i})$ where $\alpha_{0,i}, \alpha_{1,i}, \alpha_{\rho,i}$ represent 0, 1 and a random element in ${\{0,1\}^\tau}$, respectively. $\ensuremath{{\sf Alice}}$ later uses such triples to represent each bit $\beta_i$ of her iris sample as $\alpha_{i}=\alpha_{\beta,i}$ if the corresponding bit in the mask is 1, else, as $\alpha_{i}=\alpha_{\rho, i}$.
During the online phase, $\ensuremath{{\sf Alice}}$ selects the appropriate pre-computed values to match the mask and the iris bits. Similarly, $\ensuremath{{\sf Bob}}$ inputs the selected bits of each record's mask and iris into the PSI-CA protocol.
\paragraph{Performance Comparison.}
In Table~\ref{tab:perf}, we report running times from implementations of our protocol in
Fig.~\ref{fig:app-iris} and of the technique in~\cite{Blanton11esorics}.
We assume that about 75\% of the bits in the mask are set to 1 (like
in~\cite{Blanton11esorics}). We set the length of each iris and mask to 2048 bits and the database size to 320 irises, which is the number used in prior work. All tests are run on a single Intel Xeon E5420 core running at 2.50GHz. We set $k=25$, thus, obtaining an expected error in the order of $1/\sqrt{25}$, i.e., 20\%.
\begin{table}[t]
\vspace{-0.2cm}
{
\setlength{\tabcolsep}{0.5ex}
\centerline{\resizebox{0.8\columnwidth}{!}{\small
\begin{tabular}{|c|c|c|c|} \hline
\multicolumn{4}{|c|}{\bf \em Protocol in Fig.~\ref{fig:app-iris}}\\ \hline
\multicolumn{2}{|c|}{} & {\bf Offline} & {\bf Online} \\ \hline
$\ensuremath{{\sf Bob}}$ & $\pm$ 5-bit rot. & 0.13 ms + 5.8 s/rec & 71.5 ms/rec \\ \cline{2-4}
& no rot. & 0.13 ms + 530 ms/rec & 6.5 ms/rec \\\cline{1-4}
$\ensuremath{{\sf Alice}}$ & $\pm$ 5-bit rot. & 71.63 ms & 71.5 ms/rec \\ \cline{2-4}
& no rot. & 6.63 ms & 6.5 ms/rec \\
\hline
\end{tabular}
\begin{tabular}{|c|c|} \hline
\multicolumn{2}{|c|}{{\bf \em Protocol in }\cite{Blanton11esorics}}\\ \hline
{\bf Offline} & {\bf Online} \\ \hline
2.8 s + 71.55 ms/rec & 97.2 ms + 134.28 ms/rec \\ \hline
2.6 s + 6.48 ms/rec & 97.2 ms + 12.33 ms/rec \\\hline
12.2 s + 3 ms/rec & 20.34 ms/rec \\ \hline
11.7 s + 0.27 ms/rec & 1.8 ms/rec \\
\hline
\end{tabular}}}
\vspace{-0.25cm}
\caption{Computation overhead of our randomized iris matching technique in Fig.~\ref{fig:app-iris}
and that of \cite{Blanton11esorics}. Experiments are performed with 5-bit left/right rotation and with no rotation of the iris sample. ``rot'' abbreviates ``rotation'' and ``rec'' -- ``record''.}
\label{tab:perf}
}
\vspace{-0.05cm}
\end{table}
Observe that the {\em online} cost incurred by $\ensuremath{{\sf Bob}}$ with our technique is significantly lower than that of the protocol in~\cite{Blanton11esorics}, whereas it is higher for $\ensuremath{{\sf Alice}}$. Nonetheless, summing the computation overhead incurred by both $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$, our protocol is always faster than the one in~\cite{Blanton11esorics} for the online computation.
The {\em offline} cost imposed on $\ensuremath{{\sf Bob}}$ is about twice as high as its counterpart in the protocol from \cite{Blanton11esorics}. However, in our protocol, the offline part is performed once, for all possible interactions, independently of their number, whereas, in \cite{Blanton11esorics}, the offline computation must be performed anew for {\em every} interaction. In settings where $\ensuremath{{\sf Bob}}$ interacts frequently with multiple entities, this may significantly affect the protocol's overall efficiency. Furthermore, the offline cost imposed on $\ensuremath{{\sf Alice}}$ is markedly lower (by several orders of magnitude) with our technique.
We conclude that the protocol in Fig.~\ref{fig:app-iris} improves, in many settings, overall efficiency compared to the state of the art. However, it introduces a maximum error of about 20\%, whereas the scheme in \cite{Blanton11esorics} computes the exact -- rather than approximate -- outcome of an iris comparison.
Thus, a good practice is to use the scheme in Fig.~\ref{fig:app-iris} to perform an initial selection of relevant biometric samples, using a threshold $t'>t$ in order to compensate for the error. The final matching on the selected samples can then be done, in a privacy-preserving manner, using the protocol in \cite{Blanton11esorics}.
\section{Privacy-Preserving Multimedia File Similarity}\label{sec:multimedia}
Amid widespread availability of digital cameras, digital audio recorders, and media-enabled smartphones, users generate a staggering amount of {\em multimedia} content.
As a result, secure online storage (and management) of large volumes of multimedia data
becomes increasingly desirable. %
According to \cite{youtubeupload11}, YouTube received more than 13 million hours of video in 2010, and
48 hours of new content are uploaded {\em every minute} (i.e., 8 years of video each day).
Similarly, Flickr users upload about 60 photos every second.
Such an enormous amount of multimedia data prevents manual content curation -- e.g., manually
{\em tagging} all uploaded content to allow textual searches.
For this reason, in recent years research has focused on automated tools for feature
extraction and analysis on multimedia content.
A prominent example is Content-Based Image Retrieval (CBIR)~\cite{smeulders2000content}. It
allows automatic extraction of features from an image, a video, an audio file or any other
multimedia content. These features can then be compared across different files, establishing for example {\em how similar} two documents are.
There are several available techniques to implement CBIR,
including search techniques based on color histograms~\cite{smith96},
bin similarity coefficients \cite{niblack93}, texture for image characterization \cite{ma96},
shape features \cite{sclaroff97}, edge directions \cite{jain96}, and matching of shape components such as corners, line segments or circular arcs \cite{cohen97}.
In this section we design a generic privacy-preserving technique for comparing multimedia
documents by comparing their features. Our technique is based on Jaccard and MinHash, and is
oblivious to the specific type of features used to perform the comparison. We implement a small
prototype, which we use to evaluate the performance of our approach.
\paragraph{Prior Work.}
Motivated by the potential sensitivity of multimedia data, the research community has begun to
develop mechanisms for secure signal processing.
For instance, the authors of~\cite{erkin2007} were the first to investigate secure signal processing for
multimedia documents.
Then, the work in~\cite{lu09-2,lu09} introduces two protocols to search over encrypted multimedia databases.
Specifically, 256 visual features are extracted from each image. Then, files are encrypted in a distance-preserving fashion,
so that encrypted features can be directly compared for similarity evaluation. Similarity is computed using the Jaccard index
between the visual features of the searched image and those of the images in the database.
However, the security of the scheme relies on order-preserving encryption (used to mask frequencies of recurring visual features), which is known to provide only a limited level of security \cite{boldyreva-ope}.
\paragraph{Our Approach.}
We use the Jaccard index to assess the similarity of multimedia files.
As shown in Section~\ref{sec:new}, we can do so, in a privacy-preserving way,
using the protocol in Fig.~\ref{fig:sjacc}. Our privacy-preserving protocol for the evaluation of multimedia file similarity is presented in Fig.~\ref{fig:ppmfs}. We denote a multimedia file owned by \ensuremath{{\sf Alice}}\ as $F_A$, and a file owned by $\ensuremath{{\sf Bob}}$ as $F_{B:i}$.
\begin{figure}[h]
\centering
\hspace{1cm}
\fbox{\small
\begin{minipage}{0.61\columnwidth}
\begin{tabular}{lcl}
\hspace{-0.05cm}$\ensuremath{{\sf Alice}}$ $(F_A)$ & &\hspace{0.3cm} $\ensuremath{{\sf Bob}}$ $(F_{B:i})$ \vspace{-0.2cm}\\
\multicolumn{3}{c}{\hspace{-0.3cm}\line(1,0){293.8} \vspace{0.1cm}}\\
\hspace{-0.05cm}$A\leftarrow\mbox{Extract}(F_A)$ & & \hspace{0.3cm} $B_i\leftarrow\mbox{Extract}(F_{B:i})$\hspace{0.2cm}\\
\hspace{3.99cm} \Big\{ \Big. \vspace{-0.69cm} & & \hspace{-0.55cm} \Big. \Big\}\\
&\hspace{-0.62cm}$\xymatrix@1@=45pt{\ar[r]^*+<4pt>{}&}$\vspace{0.02cm}\\
&\hspace{-0.62cm}$\xymatrix@1@=45pt{& \ar[l]^* +<4pt>{}_*+<4pt>{}}$ & \vspace{-0.2cm}\\
\\
\hspace{-0.05cm}$|A\cap B_i| \leftarrow \mbox{PSI-CA}(A,B_i)$\vspace{-0.05cm}\\
\end{tabular}
\vspace{0.2cm}
\hspace{0.14cm}{\bf Output Similarity as} %
$J(A,B_i)=\dfrac{|A\cap B_i|}{|A|+|B_i|-|A\cap B_i|}$ %
\end{minipage}
}
\caption{Privacy-preserving evaluation of multimedia file similarity.}
\label{fig:ppmfs}
\end{figure}
Our approach is independent of the underlying feature extraction algorithm, even though the protocol's accuracy naturally depends on the quality of the feature extraction phase.
Once features have been extracted, our privacy-preserving protocols reveal only their similarity,
without disclosing the features themselves.
As an example, we instantiate our techniques for privacy-preserving {\em image} similarity.
Our approach for feature extraction is based on \cite{lu09}, which uses color histograms in the
Hue, Saturation, and Value (HSV) color space, since its accuracy is reasonable enough for real-world use.
Thus, our scheme achieves the same accuracy as~\cite{lu09}, in terms of precision and recall.
Once again, to obtain improved efficiency, similarity can be approximated using MinHash techniques,
as per the protocol in Fig.~\ref{fig:ppmfs-mh}. In this case, protocol performance and accuracy depend on the MinHash parameter $k$.
\begin{figure}[h]
\centering
\hspace{1cm}
\fbox{\small
\begin{minipage}{0.62\columnwidth}
\begin{tabular}{lcl}
\hspace{-0.05cm}$\ensuremath{{\sf Alice}}$ $(F_A)$ & &\hspace{-3.8cm} $\ensuremath{{\sf Bob}}$ $(F_{B:i})$ \vspace{-0.15cm}\\
\multicolumn{3}{c}{\hspace{-0.5cm}\line(1,0){299} \vspace{0.1cm}}\\
\hspace{-0.05cm}$A\leftarrow\mbox{Extract}(F_A)$ & & \hspace{-3.8cm} $B_i\leftarrow\mbox{Extract}(F_{B:i})$\vspace{0.1cm}\\
\hspace{-0.05cm}$h_k(A)\leftarrow\mbox{MinHash}(A)$ & & \hspace{-3.8cm} $h_k(B_i)\leftarrow\mbox{MinHash}(B_i)$\hspace{-0.35cm}\mbox{ }\vspace{0.1cm}\\
\hspace{3.95cm} \Big\{ \Big. \vspace{-0.7cm} & & \hspace{-4.55cm} \Big. \Big\}\\
&\hspace{-9.6cm}$\xymatrix@1@=45pt{\ar[r]^*+<4pt>{}&}$\vspace{-0.0cm}\\
&\hspace{-9.6cm}$\xymatrix@1@=45pt{& \ar[l]^*+<4pt>{}_*+<4pt>{}}$ & \vspace{-0.25cm}\\
\hspace{-0.05cm}$|h_k(A)\cap h_k(B_i)|\leftarrow$\vspace{0.05cm}\\
$\;\;\;\;\;\mbox{PSI-CA}(h_k(A),h_k(B_i))$\vspace{0.15cm}\\
\hspace{-0.05cm}{\bf Output Similarity Approximation as}: %
$sim(A,B_i)=\dfrac{|h_k(A)\cap h_k(B_i)|}{k}$ %
\end{tabular}
\end{minipage}
}
\vspace{-0.2cm}
\caption{Privacy-preserving {\em approximation} of multimedia file similarity.}
\label{fig:ppmfs-mh}
\vspace{-0.2cm}
\end{figure}
\paragraph{Performance Evaluation.} We test our technique with the same dataset used by \cite{lu09}, i.e., 1000 images from the standard Corel dataset.
We extract 256 features from each image, for a total of 256,000 features for the whole database.
We envision a user, $\ensuremath{{\sf Alice}}$, willing to assess similarity of an image against an image database, held by $\ensuremath{{\sf Bob}}$.
We run our protocol for privately computing the Jaccard index (``Exact'' rows in Table~\ref{tab:multimedia}) and for estimated similarity, using MinHash with $k=100$ (``Approximate'' row). Table~\ref{tab:multimedia} summarizes our experiments. All tests are run on a single Intel Xeon E5420 core running at 2.50GHz and show that privacy protection
is attainable at a very limited cost.
Note that a thorough performance comparison between our protocol and related work is out of the scope of this paper, since the main effort of prior work has been achieving high accuracy in similarity detection, rather than improving efficiency.
Thus, we defer it to future work. Nonetheless, the authors of~\cite{lu09} report
that the running time of their protocol is in the order of 1 second per image,
on a hardware comparable to our testbed (a dual-core 3GHz PC with 4GB of RAM).
Therefore, it is safe to assume that our protocol for privacy-preserving multimedia file similarity is about one order of magnitude faster than available techniques, even without considering pre-computation.
\begin{table}[t]
{
\setlength{\tabcolsep}{1ex}
\centerline{{\small
\begin{tabular}{|c|c|c|c|} \hline
& & {\bf Offline} & {\bf Online} \\ \hline
$\ensuremath{{\sf Bob}}$ & Exact & 0.13 ms + 33.28 ms/record & 33.28 ms/record \\ \cline{2-4}
 & Approximate & 0.13 ms + 13 ms/record & 13 ms/record \\\cline{1-4}
$\ensuremath{{\sf Alice}}$ & Exact & 66.69 ms & 33.28 ms/record \\ \cline{2-4}
& Approximate & 26.13 ms & 13 ms/record \\
\hline
\end{tabular}}}
\vspace{-0.3cm}
\caption{Computation cost of our multimedia documents similarity protocol.}
\label{tab:multimedia}
}
\vspace{-0.05cm}
\end{table}
\section{Faster and Size-Hiding (Approximated) PSI-CA}
\label{sec:approx-psica}
Privacy-preserving computation of set intersection cardinality has been investigated
quite extensively by the research community~\cite{ePrint,freedman2004efficient,kissner2005,vaidya2005},
motivated by several interesting applications, including:
privacy-preserving authentication and key exchange protocols~\cite{ateniese2007secret},
data and association rule mining~\cite{vaidya2005},
genomic applications~\cite{baldi2011countering}, healthcare~\cite{kissner2005},
policy-based information sharing~\cite{ePrint}, anonymous routing~\cite{zhang2005anonymous},
and -- as argued by this paper -- sample set similarity.
In many of the application scenarios, however, it may be enough to obtain an estimate, rather than the exact value,
of the set intersection cardinality. For instance, if PSI-CA is used to privately quantify the
number of common social-network friends (e.g., to assess profile similarity)~\cite{li2011findu}, then
one may want to trade a bounded accuracy loss for
a significant improvement in protocol overhead (without sacrificing the attained level of privacy protection).
Clearly, such an improved construction is particularly appealing whenever participants' input sets are very large.
Using MinHash techniques, we can construct a protocol for privacy-preserving estimation of set
intersection cardinality with (constant) computation and communication
complexities that depend only on the MinHash parameter -- i.e., $O(k)$.
The proposed construction is illustrated in Fig.~\ref{fig:csi}.
While we tolerate a bounded accuracy loss, depending on the MinHash parameter
$k$, i.e., $O(1/\sqrt{k})$, observe that our protocol achieves the same, provably-secure, privacy guarantees as if we ran PSI-CA
on the whole sets.
\begin{figure}[htb]
\vspace{0.2cm}
\begin{center}
\fbox{\small
\begin{minipage}{0.72\columnwidth}
\underline{\textbf{Privacy-preserving approximation of $|A\cap B|$}}\vspace{0.05cm}\\
{\small {\sf Run by $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$, on input, resp., $A$ and $B$}}\vspace{-0.1cm}
\begin{enumerate}
\item $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ compute $\{\langle a_i,i\rangle\}_{i=1}^k$ and
$\{\langle b_i,i\rangle\}_{i=1}^k$, resp., using multi-hash\\ MinHash
where: $a_i\stackrel{\text{\tiny def}}{=}h^{(i)}_{min}(A)$ and $b_i\stackrel{\text{\tiny def}}{=}h^{(i)}_{min}(B)$
\item $\ensuremath{{\sf Alice}}$ and $\ensuremath{{\sf Bob}}$ execute PSI-CA on input, resp., $(\{\langle a_i,i\rangle\}_{i=1}^k, k)$ and
$(\{\langle b_i,i\rangle\}_{i=1}^k,k)$
\item $\ensuremath{{\sf Alice}}$ learns $\delta=|\{\langle a_i,i\rangle\}_{i=1}^k\cap\{\langle b_i,i\rangle\}_{i=1}^k|$
\item $\ensuremath{{\sf Bob}}$ sends $w=|B|$ to $\ensuremath{{\sf Alice}}$
\item $\ensuremath{{\sf Alice}}$, who knows $v=|A|$, outputs $\delta \cdot (v+w)/(k+\delta)$
\end{enumerate}
\end{minipage}}
\vspace{-0.25cm}
\caption{Our technique for Approximated Private Set Intersection Cardinality.}
\label{fig:csi}
\end{center}
\vspace{-0.35cm}
\end{figure}
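The estimator in the last step can be checked with a non-private simulation, replacing PSI-CA by a plain intersection of the labeled MinHash pairs. In this sketch (function names are ours), $v=|A|$ and $w=|B|$; since $\delta/k$ estimates the Jaccard index $J$ and $|A\cap B|=J\cdot(v+w)/(1+J)$, the output simplifies to $\delta\cdot(v+w)/(k+\delta)$.

```python
import random

def minhash_pairs(S, k, seed=0):
    """Multi-hash MinHash: the k labeled pairs <h^(i)_min(S), i>."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(k)]
    return {(min(hash((salt, x)) for x in S), i) for i, salt in enumerate(salts)}

def approx_cardinality(A, B, k=400):
    """Estimate |A ∩ B|; the PSI-CA step is simulated by a plain intersection."""
    delta = len(minhash_pairs(A, k) & minhash_pairs(B, k))  # PSI-CA output
    v, w = len(A), len(B)
    # delta/k estimates J, and |A ∩ B| = J*(v+w)/(1+J) = delta*(v+w)/(k+delta)
    return delta * (v + w) / (k + delta)

A = set(range(0, 150))
B = set(range(50, 200))   # true |A ∩ B| = 100
print(approx_cardinality(A, B))
```

As a sanity check, when $A=B$ we have $\delta=k$ and the estimator returns exactly $|A|$; for the overlapping ranges above, the estimate concentrates around the true value 100 within the $O(1/\sqrt{k})$ error bound.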
\paragraph{Size-Hiding.} Another factor motivating the use of MinHash techniques for PSI-CA is related to input size secrecy.
Available PSI-CA protocols always disclose, from the execution, at least an upper bound on input
set sizes, whereas the protocol in Fig.~\ref{fig:csi} conceals -- unconditionally -- $\ensuremath{{\sf Alice}}$'s set size, thus
achieving {\em Size-Hiding} Private Set Intersection Cardinality. With this protocol, \ensuremath{{\sf Alice}}\ does not need to disclose $|A|$. Rather, public protocol input includes $k$, which is {\bf\em independent of $|A|$ and $|B|$}. Because secure PSI-CA, used as a building block for the protocol in Fig.~\ref{fig:csi}, does not leak information about the input sets, neither party learns information about the ratio between $k$ and $|A|$ or $|B|$.
Considering recent results motivating
the need for size-hiding features in private set operations (see~\cite{ateniese2011if}),
this additional feature is particularly valuable.
\paragraph{Note:} While we leave a thorough experimental performance evaluation to future work, observe that PSI-CA and the approximated, size-hiding protocol (using MinHash) are essentially the same protocol: the latter simply runs on smaller, constant-size input ($k$ elements). Since the protocols have linear complexities, it is straightforward to assess the performance gain.
\section{Conclusion}\label{sec:conclusion}
This paper introduced the first efficient construction for privacy-preserving evaluation
of sample set similarity, relying on the Jaccard index measure.
We also presented an efficient randomized protocol that approximates,
with bounded error, this similarity index.
Our techniques are generic and practical enough to be used as a basic building block for a wide
array of different privacy-preserving functionalities, including
document and multimedia file similarity, biometric matching, genomic testing, similarity of
social profiles, and so on.
Experimental analyses support our efficiency claims and demonstrate improvements over prior results.
As part of future work, we plan to study privacy-preserving computation of other similarity measures,
as well as to further investigate additional applications and extensions.
\descr{Acknowledgments.} The work of Carlo Blundo has been supported, in part, by
the Italian Ministry of Research (MIUR), under project n.~2010RTFWBH ``Data-Driven Genomic Computing (GenData 2020).''
We gratefully acknowledge Elaine Shi
for providing us with the idea of genetic paternity testing
as one of our motivating examples. Finally, we thank the Journal of Computer Security's (anonymous) reviewers,
whose comments helped us improve the paper's content and presentation.
{\small
\bibliographystyle{abbrv}
\section[Introduction]{Introduction}
Let $C$ be a fixed compact convex shape, and let $X_n$ be a
random sample of $n$ points chosen uniformly and
independently from $C$. Let $Z_n$ denote the number of
vertices of the convex hull of $X_n$. R{\'e}nyi and Sulanke
\cite{rs-udkhv-63} showed that $E[Z_n] = O(k \log{n})$, when
$C$ is a convex polygon with $k$ vertices in the plane.
Raynaud \cite{r-slcdn-70} showed that the expected number of
facets of the convex hull is $O(n^{(d-1)/(d+1)})$, where $C$
is a ball in $\Re^d$; thus, $E[Z_n] = O(n^{1/3})$ when $C$ is a
disk in the plane. Raynaud \cite{r-slcdn-70} also showed that
the expected number of facets of ${\mathop{\mathrm{CH}}}(X_n)$, the convex hull of $X_n$, is $O
\pth{(\log n)^{(d-1)/2}}$, where the points are chosen from
$\Re^d$ according to a $d$-dimensional normal distribution. See
\cite{ww-sg-93} for a survey of related results.
All these bounds are essentially derived by computing or
estimating integrals that quantify the probability that two
specific points of $X_n$ form an edge of the convex hull
(multiplying this probability by $\binom{n}{2}$ gives
$E[Z_n]$). Those integrals are fairly complicated to
analyze, and the resulting proofs are rather long,
counter-intuitive, and not elementary.
Efron \cite{e-chrsp-65} showed that instead of arguing about
the expected number of vertices directly, one can argue
about the expected area/volume of the convex hull, and this
in turn implies a bound on the expected number of vertices
of the convex hull. In this paper, we present a new argument
on the expected area/volume of the convex hull (this method
can be interpreted as a discrete approximation to the
integral methods). The argument
goes as follows: Decompose $C$ into smaller shapes
(called tiles). Using the topology of the tiling and the
underlying type of convexity, we argue about the expected
number of tiles that are exposed by the random convex hull,
where a tile is exposed if it does not lie completely in the
interior of the random convex hull. This results in a lower
bound on the area/volume of the random convex hull. We
apply this technique to the standard case, and also to
more exotic types of convexity.
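The $O(n^{1/3})$ behavior for the disk is easy to observe empirically. The following brief Monte Carlo sketch is illustrative only and is not part of the argument above; it computes hulls with Andrew's monotone chain algorithm and checks that the vertex count scales like $n^{1/3}$.

```python
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the last two points and p do not make a strict left turn
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def disk_sample(n, rng):
    """n points uniform in the unit disk, by rejection from the bounding square."""
    out = []
    while len(out) < n:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            out.append((x, y))
    return out

rng = random.Random(1)
for n in (1000, 8000):
    z = sum(len(convex_hull(disk_sample(n, rng))) for _ in range(20)) / 20
    print(n, z, z / n ** (1 / 3))   # the last ratio should stay roughly constant
```

Doubling $n$ three times (from 1000 to 8000) roughly doubles the average number of hull vertices, in line with the $n^{1/3}$ growth.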
In Section \ref{sec:random:ch}, we give rather simple and
elementary proofs of the aforementioned bounds:
$E[Z_n]=O(n^{1/3})$ for $C$ a disk, and $E[Z_n]=O \pth{ k
\log{n}}$ for $C$ a convex $k$-gon. We believe that
these new elementary proofs are indeed simpler and more
intuitive\footnote{Preparata and Shamos \cite[pp.
152]{ps-cgi-85} comment on the older proof for the case
of a disk: ``Because the circle has no corners, the
expected number of hull vertices is comparatively high,
although we know of no elementary explanation of the
$n^{1/3}$ phenomenon in the planar case.'' It is the
author's belief that the proof given here remedies this
situation.} than the previous integral-based proofs.
The question of the expected complexity of the convex hull
remains valid even if we change the underlying type of convexity. In
Section \ref{sec:genrelized:convex}, we define a generalized
notion of convexity induced by ${\cal D}$, a given set of
directions. This extends both rectilinear convexity, and
standard convexity. We prove that the expected complexity
of the ${\cal D}$-convex hull of a set of $n$ points, chosen
uniformly and independently from a disk, is $O \pth {n^{1/3}
+ \sqrt{n\alpha({\cal D})}}$, where $\alpha({\cal D})$ is the largest
angle between two consecutive vectors in ${\cal D}$. This result
extends the known bounds for the cases of rectilinear and
standard convexity.
Finally, in Section \ref{sec:hcube}, we deal with another
type of convexity, which extends the generalized convexity
mentioned above to higher dimensions, where the set of
directions is the standard orthonormal basis of $\Re^d$. We
prove that the expected number of points that lie on the
boundary of the quadrant hull of $n$ points, chosen
uniformly and independently from the axis-parallel unit
hypercube in $\Re^d$, is $O(\log^{d-1} n)$. This readily
implies an $O(\log^{d-1} n)$ bound on the expected number of
maxima and on the expected number of vertices of the convex
hull of such a point set. Those bounds are known
\cite{bkst-anmsv-78}, but we believe the new proof is
simpler and more intuitive.
\section[On the Complexity of the Convex Hull of a Random Point
Set]{On the Complexity of the Convex Hull of a Random Point
Set}
\label{sec:random:ch}
In this section, we show that the expected number of
vertices of the convex hull of $n$ points, chosen uniformly
and independently from a disk, is $O(n^{1/3})$. Applying the
same technique to a convex polygon with $k$ sides, we prove
that the expected number of vertices of the convex hull is
$O( k \log{n})$.\footnote{As already noted, these results are well
known (\cite{rs-udkhv-63,r-slcdn-70,ps-cgi-85}), but we
believe that the elementary proofs given here are simpler
and more intuitive.} The following lemma shows that the
larger the expected area outside the random convex hull, the
larger the resulting bound on the expected number of
vertices of the convex hull.
\begin{lemma}
Let $C$ be a bounded convex set in the plane, such that
the expected area of the convex hull of $n$ points,
chosen uniformly and independently from $C$, is at least
$\pth{1-f(n)}Area(C)$, where $1 \geq f(n) \geq 0$, for
$n \geq 0$. Then the expected number of vertices of the
convex hull is at most $n f(n/2)$.
\label{lemma:area:to:vertices}
\end{lemma}
\begin{proof}
Let $N$ be a random sample of $n$ points, chosen
uniformly and independently from $C$. Let $N_1$ (resp.
$N_2$) denote the set of the first (resp. last) $n/2$
points of $N$. Let $V_1$ (resp. $V_2$) denote the
number of vertices of $H = {\mathop{CH}}(N_1 \cup N_2)$ that
belong to $N_1$ (resp. $N_2$), where ${\mathop{CH}}(N_1 \cup
N_2) = {\mathrm{ConvexHull}}( N_1 \cup N_2 )$.
Clearly, the expected number of vertices of $H$ is
$E[V_1] + E[V_2]$. On the other hand,
\[
E \pbrc{V_1 \sep{N_2}} \leq \frac{n}{2} \pth{\frac{Area(C) -
Area({\mathop{CH}}(N_2))}{Area(C)}},
\]
since, conditioned on $N_2$, each point of $N_1$ that is a
vertex of $H$ must fall outside ${\mathop{CH}}(N_2)$, and
each point of $N_1$ falls outside ${\mathop{CH}}(N_2)$ with
probability $\pth{Area(C) - Area({\mathop{CH}}(N_2))}/Area(C)$.
We have
\begin{eqnarray*}
E[V_1] &=& E_{N_2} \pbrc{ E[V_1|N_2] } \leq E
\pbrc{\frac{n}{2}
\pth{\frac{Area(C) - Area({\mathop{CH}}(N_2))}{Area(C)}}}\\
&\leq& \frac{n}{2}f(n/2),
\end{eqnarray*}
since $E[X]=E_Y[E[X|Y]]$ for any two random variables $X,Y$. Thus,
the expected number of vertices of $H$ is $E[V_1] + E[V_2] \leq
n f(n/2)$.
\end{proof}
\begin{remark}
Lemma \ref{lemma:area:to:vertices} is known as {\em Efron's
Theorem}. See \cite{e-chrsp-65}.
\end{remark}
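As an aside, Efron's identity is easy to check numerically. The following Python sketch (a Monte Carlo illustration only, not used in any proof; the seed, sample sizes, and the monotone-chain hull routine are arbitrary choices) compares the expected number of hull vertices of $n$ points in the unit disk with $n$ times the expected area fraction of the disk missed by the hull of $n-1$ points.

```python
import math
import random

def cross(o, a, b):
    # Cross product of vectors o->a and o->b; positive iff o, a, b is a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns the hull vertices only.
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(poly):
    # Shoelace formula for the area of a convex polygon.
    n = len(poly)
    return abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1]
                   for i in range(n))) / 2.0

def random_disk_point():
    # Uniform point in the unit disk (sqrt trick for the radius).
    t = 2.0 * math.pi * random.random()
    r = math.sqrt(random.random())
    return (r * math.cos(t), r * math.sin(t))

random.seed(0)
n, trials = 200, 300
# Left side of Efron's identity: expected number of hull vertices of n points.
lhs = sum(len(convex_hull([random_disk_point() for _ in range(n)]))
          for _ in range(trials)) / trials
# Right side: n times the expected fraction of the disk missed by the hull
# of n - 1 points.
rhs = sum(n * (math.pi - hull_area(convex_hull(
              [random_disk_point() for _ in range(n - 1)]))) / math.pi
          for _ in range(trials)) / trials
```

For $n=200$ the two estimates agree to within sampling error, both being roughly $3.4\,n^{1/3}$, consistent with Efron's theorem.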
\begin{theorem}
The expected number of vertices of the convex hull of $n$ points,
chosen uniformly and independently from the unit disk, is
$O(n^{1/3})$.
\label{theorem:area}
\end{theorem}
\begin{proof}
We claim that the expected area of the convex hull of
$n$ points, chosen uniformly and independently from the
unit disk, is at least $\pi - O \pth{ n^{-2/3}}$.
Indeed, let $D$ denote the unit disk, and assume,
without loss of generality, that $n=m^3$, where $m$ is a
positive integer. Partition $D$ into $m$ sectors,
${\cal S}_1, \ldots, {\cal S}_m$, by placing $m$ equally spaced
points on the boundary of $D$ and connecting them to the
origin. Let $D_1, \ldots, D_{m^2}$ denote the $m^2$
disks centered at the origin, such that (i) $D_{1} = D$,
and (ii) $Area(D_{i-1}) - Area(D_{i}) = \pi/m^2$, for
$i=2, \ldots, m^2$. Let $r_i$ denote the radius of
$D_i$, for $i=1, \ldots, m^2$.
Let $S_{i,j} = (D_{i} \setminus D_{i+1}) \cap {\cal S}_j$,
and $S_{m^2,j}=D_{m^2} \cap {\cal S}_j$, for $i=1, \ldots,
m^2-1$, $j=1, \ldots, m$. The set $S_{i,j}$ is called
the $i$-th {\em tile} of the sector ${\cal S}_j$, and its
area is $\pi/n$, for $i=1,\ldots,m^2$, $j=1, \ldots,m$.
Let $N$ be a random sample of $n$ points chosen
uniformly and independently from $D$. Let $X_j$ denote
the first index $i$ such that $N \cap S_{i, j} \ne
\emptyset$, for $j=1, \ldots, m$. For a fixed $j \in
\brc{1,\ldots,m}$, the probability that
$X_j = k$ is upper-bounded by the probability that the
tiles $S_{1, j}
, \ldots, S_{(k-1),j}$ do not contain any
point of $N$; namely, by $\pth{1-\frac{k-1}{n}}^{n}$.
Thus, $P[X_j = k] \leq \pth{1-\frac{k-1}{n}}^{n} \leq
e^{-(k-1)}$, since $1-x\leq e^{-x}$, for $x \geq 0$.
Thus,
\[
E \pbrc{ X_j } = \sum_{k=1}^{m^2} k P[X_j = k ] \leq
\sum_{k=1}^{m^2} k e^{-(k-1)} = O(1),
\]
for $j=1, \ldots, m$.
Let $K_o$ denote the convex hull of $N \cup \brc{o}$,
where $o$ is the origin. The tile $S_{i,j}$ is {\em
exposed} by a set $K$, if $S_{i,j} \setminus K \ne
\emptyset$. We claim that at most $X_{j-1} + X_{j+1} +
O(1)$ tiles are exposed by $K_o$ in the sector ${\cal S}_j$,
for $j=1,\ldots, m$ (where we put $X_0 = X_m$, $X_{m+1}
= X_1$).
\begin{figure}
\centerline{
\Ipe{figs/expose.ipe}
}
\caption{Illustrating the proof that bounds the number of tiles
exposed by $T$ inside ${\cal S}_j$}
\label{fig:slice}
\end{figure}
Indeed, let $w=w(N,j) = \max(X_{j-1},X_{j+1})$, and let $p,q$ be
the two points in $S_{w,j-1}, S_{w,j+1}$, respectively,
such that the number of tiles exposed by the triangle $T
= \triangle{opq}$, in the sector ${\cal S}_j$, is maximal.
Both $p$ and $q$ lie on ${\partial}{D_{w+1}}$ and on the
external radii bounding ${\cal S}_{j-1}$ and ${\cal S}_{j+1}$, as
shown in Figure \ref{fig:slice}. Clearly, any tile which
is exposed in ${\cal S}_j$ by $K_o$ is also exposed by $T$.
Let $s$ denote the segment connecting the middle of the
base of $T$ to its closest point on ${\partial}{D_w}$. The
number of tiles in ${\cal S}_j$ exposed by $T$ is bounded by
$\max \pth{X_{j-1}, X_{j+1}}$, plus the number of tiles
intersecting the segment $s$. The length of $s$ is
\[
|oq| - |oq| \cos \pth{ \frac{3}{2} \cdot \frac{2\pi}{m}}
\leq 1 - \cos \pth{ \frac{3}{2} \cdot \frac{2\pi}{m}}
\leq \frac{1}{2} \pth{ \frac{3\pi}{m}}^2 =
\frac{4.5\pi^2}{m^2},
\]
since $\cos(x) \geq 1-x^2/2$, for $x \geq 0$.
On the other hand, $r_{i} - r_{i+1} \geq r_{i-1} - r_{i}
\geq 1/(2m^2)$, for $i=2,\ldots,m^2$. Thus, the segment
$s$ intersects at most $\ceil{||s||/(1/(2m^2))} \leq
\ceil{9\pi^2} = 89$ tiles, and we have that the number
of tiles exposed in the sector ${\cal S}_j$ by $K_o$ is at most
$\max \pth{X_{j-1}, X_{j+1}} + 89 \leq X_{j-1} + X_{j+1}
+ 89$, for $j=1, \ldots, m$.
Thus, the expected number of tiles exposed by $K_o$ is at most
\[
E \pbrc{ \sum_{j=1}^{m} \pth{ X_{j-1} + X_{j+1} + 89 } } =
O(m).
\]
The area of $K ={\mathop{CH}}(N)$ is bounded from below by the
area of the tiles which are not exposed by $K$. The
probability that $K \subsetneq K_o$ (namely, the origin
is not inside $K$, or, equivalently, all points of $N$
lie in some semidisk) is at most $n/2^{n-1}$, as easily
verified. Hence,
\[
E[ Area( K ) ] \geq E[ Area( K_o ) ] - P \pbrc{ K_o \ne K
} \pi \geq \pi - O(m)\frac{\pi}{n} - \frac{n}{2^{n-1}}\pi
= \pi -O \pth{n^{-2/3}}.
\]
The assertion of the theorem now follows from Lemma
\ref{lemma:area:to:vertices}.
\end{proof}
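The $n^{1/3}$ growth rate can also be observed empirically. The following Python sketch (a Monte Carlo illustration with arbitrarily chosen sample sizes and seed, not part of the argument) increases $n$ by a factor of $64$, so the average number of hull vertices should grow by a factor close to $64^{1/3} = 4$.

```python
import math
import random

def cross(o, a, b):
    # Positive iff o, a, b make a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    # Andrew's monotone chain algorithm; returns hull vertices only.
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def random_disk_point():
    # Uniform point in the unit disk.
    t = 2.0 * math.pi * random.random()
    r = math.sqrt(random.random())
    return (r * math.cos(t), r * math.sin(t))

def avg_hull_vertices(n, trials):
    # Average number of hull vertices over independent samples of n points.
    return sum(len(convex_hull([random_disk_point() for _ in range(n)]))
               for _ in range(trials)) / trials

random.seed(1)
small = avg_hull_vertices(64, 40)       # n = 64, so n^(1/3) = 4
large = avg_hull_vertices(4096, 40)     # n = 4096, so n^(1/3) = 16
growth = large / small                  # expected to be close to 4
```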
\begin{lemma}
The expected number of vertices of the convex hull of
$n$ points, chosen uniformly and independently from the
unit square, is $O(\log{n})$.
\label{lemma:vertices:square}
\end{lemma}
\begin{figure}
\centerline{ \Ipe{figs/sq_expose.ipe} }
\caption{Illustrating the proof that bounds the number of tiles
exposed by ${\mathop{CH}}(N)$ inside the $j$-th column, by
using a non-uniform tiling of the strips to the left
and to the right of the $j$-th column. The area of
such a larger tile is at least $1/n$.}
\label{fig:column:ch}
\end{figure}
\begin{proof}
We claim that the expected area of the convex hull of
$n$ points, chosen uniformly and independently from the
unit square, is at least $1 - O \pth{ \log(n)/ n}$.
Let $S$ denote the unit square. Partition $S$ into $n$
rows and $n$ columns, such that $S$ is partitioned into
$n^2$ identical squares. Let $S_{i,j} = [(i-1)/n,i/n]
\times [(j-1)/n,j/n]$ denote the $j$-th square in the
$i$-th column, for $1 \leq i,j \leq n$. Let ${\cal S}_i =
\cup_{j=1}^{n} S_{i,j}$ denote the $i$-th column of $S$,
for $i=1,\ldots,n$, and let ${\cal S}(l,k) = \cup_{i=l}^{k}
{\cal S}_i$, for $1 \leq l \leq k \leq n$.
Let $N$ be a random sample of $n$ points chosen
uniformly and independently from $S$. Let $X_j$ denote
the first index $i$ such that $N \cap (\cup_{l=1}^{j-1}
S_{l, i}) \ne \emptyset$, for $j=2, \ldots, n-1$;
namely, $X_j$ is the index of the first row in
${\cal S}(1,j-1)$ that contains a point from $N$.
Symmetrically, let $X_{j}'$ be the index of the first
row in ${\cal S}(j+1,n)$ that contains a point of $N$.
Clearly, $E[X_j] = E[X_{n-j+1}']$, for $j=2,\ldots,
n-1$.
Let $Z_j$ denote the number of squares $S_{j,i}$ in the
bottom of the $j$-th column that are exposed by
${\mathop{CH}}(N)$, for $j=2,\ldots, n-1$. Arguing as in the
proof of Theorem \ref{theorem:area}, we have that $Z_j
\leq \max ( X_j, X_j' ) \leq X_j + X_j'$. Thus, in order
to bound $E[Z_j]$, we first bound $E[X_j]$ by covering
the strips ${\cal S}(1,j-1), {\cal S}(j+1,n)$ by tiles of area
$\geq 1/n$. In particular, let $h(l) = \ceil{n/(l-1)}$,
and let $R_j(m) = [0, (j-1)/n] \times [h(j)(m-1)/n,
h(j)m/n ]$, and let $R_j'(m) = [(j+1)/n, 1] \times
[h(n-j+1)(m-1)/n, h(n-j+1)m/n ]$, for $j=2, \ldots, n-1$.
See Figure \ref{fig:column:ch}.
Let $Y_j$ denote the minimal index $i$ such that $R_j(i)
\cap N \ne \emptyset$. The area of $R_{j}(i)$ is at
least $1/n$, for any $i$ and $j$. Arguing as in the
proof of Theorem \ref{theorem:area}, it follows that $E[
Y_j ] = O(1)$. On the other hand, $E[ X_j] \leq h(j)
E[Y_j] = O( n/(j-1))$. Symmetrically, $E[X_j'] =
O(n/(n-j))$.
Thus, by applying the above argument to the four
directions (top, bottom, left, right), we have that the
expected number of squares $S_{i,j}$ exposed by
${\mathop{CH}}(N)$ is bounded by
\[
4n - 4 + 4 { \sum_{j=2}^{n-1} E[Z_j]} < 4n + 4 {
\sum_{j=2}^{n-1} (E[X_j] + E[X_j'])} = 4n + 8 {
\sum_{j=2}^{n-1} O\pth{\frac{n}{j-1}} } =
O(n\log{n}),
\]
where $4n - 4$ is the number of squares adjacent to the
boundary of $S$.
Since the area of each square is $1/n^2$, it follows
that the expected area of ${\mathop{CH}}(N)$ is at least $1 -
O(\log(n)/n)$.
By Lemma \ref{lemma:area:to:vertices}, the expected
number of vertices of the convex hull is $O(\log n)$.
\end{proof}
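Again, this logarithmic behavior is easy to observe numerically. The Python sketch below (an illustration only; the seed, sample sizes, and the monotone-chain hull routine are arbitrary choices) estimates the average number of hull vertices for $n = 2000$ uniform points in the unit square; the average stays in the low twenties, in line with the classical $\approx (8/3)\ln n$ asymptotics of R{\'e}nyi and Sulanke, and far below the $\Theta(n^{1/3})$ behavior of the disk.

```python
import random

def cross(o, a, b):
    # Positive iff o, a, b make a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    # Andrew's monotone chain algorithm; returns hull vertices only.
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

random.seed(2)
n, trials = 2000, 40
# Average number of hull vertices for n uniform points in the unit square.
avg = sum(len(convex_hull([(random.random(), random.random())
                           for _ in range(n)]))
          for _ in range(trials)) / trials
```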
\begin{lemma}
The expected number of vertices of the convex hull of
$n$ points, chosen uniformly and independently from a
triangle, is $O(\log{n})$.
\label{lemma:triangle}
\end{lemma}
\begin{proof}
We claim that the expected area of the convex hull of
$n$ points, chosen uniformly and independently from a
triangle $T$, is at least $(1 - O \pth{ \log(n)/ n})
Area(T)$. We adapt the tiling used in Lemma
\ref{lemma:vertices:square} to a triangle. Namely, we
partition $T$ into $n$ equal-area triangles, by segments
emanating from a fixed vertex, each of which is then
partitioned into $n$ equal-area trapezoids by segments
parallel to the opposite side, such that each resulting
trapezoid has area $Area(T)/n^2$. See Figure
\ref{fig:triangle:tiling}.
Notice that this tiling has identical topology to the
tiling used in Lemma \ref{lemma:vertices:square}. Thus,
the proof of Lemma \ref{lemma:vertices:square} can be
applied directly to this case, repeating the tiling
process three times, once for each vertex of $T$. This
readily implies the asserted bound.
\end{proof}
\begin{figure}
\centerline{
\Ipe{figs/tri_expose.ipe}
}
\caption{Illustrating the proof of Lemma
\ref{lemma:vertices:square} for the case of a
triangle.}
\label{fig:triangle:tiling}
\end{figure}
\begin{theorem}
The expected number of vertices of the convex hull of
$n$ points, chosen uniformly and independently from a
polygon $P$ having $k$ sides, is $O(k \log{n})$.
\end{theorem}
\begin{proof}
We triangulate $P$ in an arbitrary manner into $k$
triangles $T_1, \ldots, T_k$. Let $N$ be a random sample
of $n$ points, chosen uniformly and independently from
$P$. Let $Y_i = |T_i \cap N|$, $N_i = T_i \cap N$, and
$Z_i = |{\mathop{CH}}(N_i)|$, for $i=1, \ldots, k$. Notice that
the distribution of the points of $N_i$ inside $T_i$ is
identical to the distribution of $Y_i$ points chosen
uniformly and independently from $T_i$. In particular,
$E[Z_i | Y_i] = O( \log{Y_i})$, by Lemma
\ref{lemma:triangle}, and $E[ Z_i ] = E_{Y_i}[ E[ Z_i |
Y_i ] ] = O( \log{n } )$, for $i=1, \ldots, k$.
Thus, $E[ |{\mathop{CH}}(N)| ] \leq E \pbrc{ \sum_{i=1}^{k}
|{\mathop{CH}}(N_i)| } \leq \sum_{i=1}^{k} E[ Z_i] = O( k
\log{n} )$.
\end{proof}
\section{On the Expected Complexity of a
Generalized Convex Hull Inside a Disk}
\label{sec:genrelized:convex}
In this section, we derive a bound on the expected
complexity of a generalized convex hull of a set of points,
chosen uniformly and independently from the unit disk. The
new bound matches the known bounds for the cases of standard
convexity and maxima.
The bound follows by extending the proof of Theorem
\ref{theorem:area}.
We begin with some terminology and some initial
observations, most of them taken or adapted from
\cite{mp-ofsch-97}. A set ${\cal D}$ of vectors in the plane is a
{\em set of directions}, if the length of all the vectors in
${\cal D}$ is $1$, and if $v \in {\cal D}$ then $-v \in {\cal D}$. Let
${\cal D}_{\Re}$ denote the set of all possible directions. A set
$C$ is {\em ${\cal D}$-convex} if the intersection of $C$ with any
line with a direction in ${\cal D}$ is connected. By definition,
a set $C$ is convex (in the standard sense), if and only if
it is ${\cal D}_{\Re}$-convex.
For a set $C$ in the plane, we denote by ${\mathop{\cal{CH}}}_{{\cal D}}(C)$ the
{\em ${\cal D}$-convex hull} of $C$; that is, the smallest
${\cal D}$-convex set that contains $C$. While this seems like a
reasonable extension of the regular notion of convexity, its
behavior is counterintuitive. For example, let ${\cal D}_Q$ denote
the set of all rational directions (the slopes of the
directions are rational numbers). Since ${\cal D}_Q$ is dense in
${\cal D}_{\Re}$, one would expect that ${\mathop{\cal{CH}}}_{{\cal D}_Q}(C) =
{\mathop{\cal{CH}}}_{{\cal D}_\Re}(C) = {\mathop{\mathrm{CH}}}(C)$. However, if $C$ is a set of
points such that the slope of any line connecting a pair of
points of $C$ is irrational, then ${\mathop{\cal{CH}}}_{{\cal D}_Q}(C) = C$. See
\cite{osw-dcrch-85, rw-cgro-88, rw-ocfoc-87} for further
discussion of this type of convexity.
\begin{definition}
Let $f$ be a real function defined on a ${\cal D}$-convex set
$C$. We say that $f$ is {\em ${\cal D}$-convex} if, for any $x
\in C$ and any $v \in {\cal D}$, the function $g(t)=f(x+tv)$
is a convex function of the real variable $t$. (The
domain of $g$ is an interval in $\Re$, as $C$ is assumed
to be ${\cal D}$-convex.)
Clearly, any convex function, in the standard sense,
defined over the whole plane satisfies this condition.
\end{definition}
\begin{definition}
Let $C \subseteq \Re^2$. The set ${\mathop{\cal{CH}}}^{\cal D}(C)$, called
the {\em functional ${\cal D}$-convex hull of $C$}, is defined
as
\[
{\mathop{\cal{CH}}}^{\cal D}(C) = \brc{ x\in \Re^2 \sep{ f(x) \leq \sup_{y\in
C}f(y) \text{ for all ${\cal D}$-convex } f:\Re^2
\rightarrow \Re}}
\]
A set $C$ is {\em functionally ${\cal D}$-convex} if
$C={\mathop{\cal{CH}}}^{{\cal D}}(C)$.
\end{definition}
\begin{definition}
Let ${\cal D}$ be a set of directions. A pair of vectors
$v_1,v_2 \in {\cal D}$, is a {\em ${\D\text{-pair}}$}, if $v_2$ is
counterclockwise from $v_1$, and there is no vector in
${\cal D}$ between $v_1$ and $v_2$. Let $\DPAIRS{{\cal D}}$ denote
the set of all ${\D\text{-pair}}$s. Let $\mathop{pspan}(u_1,u_2)$ denote
the portion of the plane that can be represented as a
{\em positive} linear combination of $u_1, u_2 \in {\cal D}$.
Thus $\mathop{pspan}( u_1,u_2)$ is the {\em open} wedge bounded
by the rays emanating from the origin in directions
$u_1, u_2$. We define by $(v_1,v_2)_L = \mathop{pspan}( -v_1,
v_2)$ and $(v_1,v_2)_R = \mathop{pspan}( v_1, -v_2)$: these are
two of the four quadrants of the plane induced by the
lines containing $v_1$ and $v_2$. Similarly, for $v \in
{\cal D}$ we denote by $\HL{v}$ and $\HR{v}$ the two open
half-planes defined by the line passing through $v$.
Let
\[
{\cal{Q}}({\cal D}) = \brc{ \HL{v}, \HR{v} \sep{ v \in {\cal D}}} \cup
\brc{ (v_1,v_2)_R, (v_1,v_2)_L \sep{ (v_1, v_2) \in
\DPAIRS{{\cal D}} }}.
\]
\end{definition}
\begin{definition}
For a set $S \subseteq \Re^2$ we denote by $T(S)$ the
set of translations of $S$ in the plane, that is $T(S) =
\brc{ S + p \sep{ p \in \Re^2 }}$. Given a set of
directions ${\cal D}$, let ${\cal T}({\cal D}) = \bigcup_{Q \in {\cal{Q}}({\cal D})}
T(Q)$.
\end{definition}
For ${\cal D}_\Re$, the set ${\cal T}({\cal D}_\Re)$ is the set of all open
half-planes. The standard convex hull of a planar point set
$S$ can be defined as follows: start from the whole plane,
and remove from it all the open half-planes $H^+$ such that
$H^+ \cap S = \emptyset$. We extend this definition to
handle ${\cal D}$-convexity for an arbitrary set of directions
${\cal D}$, as follows:
\[
\DCH{{\cal D}}(S) = \Re^2 \setminus \pth{ \bigcup_{I \in {\cal T}({\cal D}), I
\cap S = \emptyset} I };
\]
that is, we remove from the plane all the translations of
quadrants and halfplanes in ${\cal{Q}}({\cal D})$ that do not contain a
point of $S$. See Figures \ref{fig:dch:example},
\ref{fig:dch:example:ext}.
\begin{figure}
\centerline{ \Ipe{figs/set-of-dir.ipe} }
\caption{(a) A set of directions ${\cal D}$, (b) the set of
quadrants ${\cal{Q}}({\cal D})$ induced by ${\cal D}$, and (c) the
$\DCH{{\cal D}}$ of three points.}
\label{fig:dch:example}
\end{figure}
\begin{figure}
\centerline{ \Ipe{figs/set-of-dir-discon.ipe} }
\caption{(a) A set of directions ${\cal D}$, such that $\alpha({\cal D}) > \pi/2$,
(b) the set of quadrants ${\cal{Q}}({\cal D})$ induced by ${\cal D}$,
and (c) the $\DCH{{\cal D}}$ of a set of points which is
not connected.}
\label{fig:dch:example:ext}
\end{figure}
For the case ${\cal D}_{xy} = \brc{ (0,1), (1,0), (0,-1), (-1,0)
}$, Matou{\v s}ek{} and Plech{\' a}{\v c}{} \cite{mp-ofsch-97} showed
that if $\DCH{{\cal D}_{xy}}(S)$ is connected, then
${\mathop{\cal{CH}}}^{{\cal D}_{xy}}(S) = \DCH{{\cal D}_{xy}}(S)$.
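For the special case ${\cal D}_{xy}$ and a finite point set $S$, membership in $\DCH{{\cal D}_{xy}}(S)$ has a simple restatement of the definition: a point $p$ belongs to the hull if and only if each of the four closed axis-parallel quadrants anchored at $p$ contains a point of $S$ (by finiteness of $S$, an empty closed quadrant at $p$ can be shifted slightly to an empty translate of ${\cal{Q}}({\cal D}_{xy})$ covering $p$). The following Python sketch (our own illustration of the definition) implements this test; the last example shows the counterintuitive behavior discussed above, where the hull of two ``diagonal'' points contains no point between them.

```python
def in_quadrant_hull(p, points):
    # p belongs to the D_xy-hull of a finite point set iff every closed
    # axis-parallel quadrant anchored at p contains a point of the set;
    # otherwise a slightly shifted empty quadrant of Q(D_xy) covers p.
    px, py = p
    ne = any(x >= px and y >= py for (x, y) in points)
    nw = any(x <= px and y >= py for (x, y) in points)
    se = any(x >= px and y <= py for (x, y) in points)
    sw = any(x <= px and y <= py for (x, y) in points)
    return ne and nw and se and sw

# The four corners of a square: their D_xy-hull is the full square.
corners = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
inside = in_quadrant_hull((1.0, 1.0), corners)  # True

# Two diagonal points: their D_xy-hull is just the two points themselves.
degenerate = in_quadrant_hull((1.0, 1.0), [(0.0, 0.0), (2.0, 2.0)])  # False
```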
\begin{definition}
For a set of directions ${\cal D}$, we define the
{\em density} of ${\cal D}$ to be
\[
\alpha({\cal D}) = \max_{(v_1,v_2) \in \DPAIRS{{\cal D}}}
\alpha(v_1,v_2),
\]
where $\alpha(v_1,v_2)$ denotes the counterclockwise
angle from $v_1$ to $v_2$.
\end{definition}
See Figure \ref{fig:dch:example:ext}, for an example of a
set of directions with density larger than $\pi/2$.
\begin{corollary}
Let ${\cal D}$ be a set of directions in the plane. Then:
\begin{itemize}
\item The set $\DCH{{\cal D}}(A)$ is ${\cal D}$-convex, for any
$A \subseteq \Re^2$.
\item For any $A \subseteq B \subseteq \Re^2$, one
has $\DCH{{\cal D}}(A) \subseteq \DCH{{\cal D}}(B)$.
\item For two sets of directions ${\cal D}_1 \subseteq
{\cal D}_2$ we have $\DCH{{\cal D}_1}(S) \subseteq
\DCH{{\cal D}_2}(S)$, for any $S \subseteq \Re^2$.
\item Let $S$ be a bounded set in the plane, and let
${\cal D}_1 \subseteq {\cal D}_2 \subseteq {\cal D}_3 \cdots$ be a
sequence of sets of directions, such that
$\lim_{i\rightarrow \infty} \alpha({\cal D}_i) = 0$. Then,
$\mathop{int}{{\mathop{\mathrm{CH}}}(S)} \subseteq \lim_{i \rightarrow
\infty} \DCH{{\cal D}_i}(S) \subseteq {\mathop{\mathrm{CH}}}(S)$.
\remove{
\item Let $S$ be a finite set of points in general
position in the plane, and let ${\cal D}_1 \supseteq {\cal D}_2
\supseteq {\cal D}_3 \cdots$ be a sequence of sets of
directions, such that $\lim_{i\rightarrow \infty}
\alpha(D_i) = \pi$. Then, $\lim_{i \rightarrow
\infty} \DCH{{\cal D}_i}(S) = S$.}
\end{itemize}
\end{corollary}
\begin{lemma}
Let ${\cal D}$ be a set of directions, and let $S$ be a finite
set of points in the plane. Then $C = \DCH{{\cal D}}(S)$ is a
polygonal set whose complexity is $O(|S \cap {\partial}{C}|)$.
\label{lemma:on:boundary}
\end{lemma}
\begin{proof}
It is easy to show that $C$ is polygonal. We charge
each vertex of $C$ to some point of $S' = S \cap{\partial}{C}$.
Let $C'$ be a connected component of $C$. If $C'$ is a
single point, then this is a point of $S'$. Otherwise,
let $e$ be an edge of $C'$, and let $I$ be a set in
${\cal T}({\cal D})$ such that $e \subseteq {\partial}{I}$, and $I \cap S =
\emptyset$.
Since $e$ is an edge of $C'$, there is no $q \in \Re^2$
such that $e \subseteq q + I$, and $(q+I)\cap S =
\emptyset$. This implies that there must be a point $p$
of $S$ on ${\partial}{I} \cap l_e$, where $l_e$ is the line
passing through $e$. However, $C$ is a ${\cal D}$-convex set,
and the direction of $e$ belongs to ${\cal D}$. It follows
that $l_e$ intersects $C$ along a connected set (i.e.,
the segment $e$), and $p \in l_e \cap C = e$. We charge
the edge $e$ to $p$. We claim that a point $p$ of $S'$
can be charged at most 4 times. Indeed, for each edge
$e'$ of $C$ incident to $p$, there is a supporting set
in ${\cal T}({\cal D})$, such that $p$ and $e'$ lie on its boundary.
Only two of those sets can have angle less than $\pi/2$
at $p$ (because such a set corresponds to a
${\D\text{-pair}}(v_1,v_2)$ with $\alpha(v_1,v_2) > \pi/2$). Thus,
a point of $S'$ is charged at most $\max( 2\pi/(\pi/2),
\pi/(\pi/2) + 2) = 4$ times.
\end{proof}
\begin{lemma}
Let ${\cal D}$ be a set of directions, and let $K$ be a
bounded convex body in the plane, such that the expected
area of $\DCH{{\cal D}}(N)$ of a set $N$ of $n$ points, chosen
uniformly and independently from $K$, is at least
$\pth{1-f(n)}Area(K)$, where $1 \geq f(n) \geq 0$, for
$n \geq 1$. Then, the expected number of vertices of $C
= \DCH{{\cal D}}(N)$ is $O(n f(n/2))$.
\label{lemma:area:to:vertices:ext}
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:on:boundary}, the complexity of $C$
is proportional to the number of points of $N$ on the
boundary of $C$. Using this observation, it is easy to
verify that the proof of Lemma
\ref{lemma:area:to:vertices} can be extended to this
case.
\end{proof}
We would like to apply the proof of Theorem
\ref{theorem:area} to bound the
expected complexity of a random ${\cal D}$-convex hull inside a
disk. Unfortunately, if we try to concentrate only on three
consecutive sectors (as in Figure \ref{fig:slice}), it might
be that there is a quadrant $I$ of ${\cal T}({\cal D})$ that intersects
the middle sector from the side (i.e., through the
two adjacent sectors). This, of course, cannot happen when
working with regular convexity. Thus, we would first
like to decompose the unit disk into ``safe'' regions, where
we can apply an analysis similar to the regular case, and
``unsafe'' regions. To do so, we will first show that, with
high probability, the $\DCH{{\cal D}}$ of a random point set
inside a disk contains a ``large'' disk in its interior.
Next, we argue that this implies that the random $\DCH{{\cal D}}$
covers almost the whole disk, and the desired bound then
readily follows from Lemma \ref{lemma:area:to:vertices:ext}.
\begin{definition}
For $r \geq 0$, let $B_r$ denote the disk of
radius $r$ centered at the origin.
\end{definition}
\begin{lemma}
Let ${\cal D}$ be a set of directions, such that $0 \leq
\alpha({\cal D}) \leq \pi / 2$. Let $N$ be a set of $n$ points
chosen uniformly and independently from the unit disk.
Then, with probability at least $1-n^{-10}$, the set $\DCH{{\cal D}}(N)$
contains $B_r$ in its interior, where $r = 1 - c
\sqrt{(\log{n})/n}$, for an appropriate constant $c$.
\label{lemma:big:disk}
\end{lemma}
\begin{proof}
Let $r'=1 - c \sqrt{(\log{n})/n}$, where $c$ is a
constant to be specified shortly. Let $q$ be any point
of $B_{r'}$. We bound the probability that $q$ lies
outside $C = \DCH{{\cal D}}(N)$ as follows: Draw $8$ rays
around $q$, such that the angle between any two
consecutive rays is $\pi/4$. This partitions $q +
B_{r''}$, where $r'' = c \sqrt{(\log{n})/n}$, into eight
portions $R_1, \ldots, R_8$, each having area $\pi c^2
\log{n}/(8n)$. Moreover, $R_i \subseteq q + B_{r''}
\subseteq B_1$, for $i=1, \ldots, 8$. The probability of
a point of $N$ to lie outside $R_i$ is $1 - c^2
\log{n}/(8n)$. Thus, the probability that all the points
of $N$ lie outside $R_i$ is
\[
P \pbrc{ N \cap R_i = \emptyset } \leq \pth{ 1 - \frac{
c^2 \log{n}}{8n}}^n \leq e^{-(c^2\log{n})/8} =
n^{-c^2/8},
\]
since $1-x \leq e^{-x}$, for $x \geq 0$. Thus, the
probability that one of the $R_i$'s does not contain a
point of $N$ is bounded by $8 n^{-c^2/8}$. We claim that
if $R_i \cap N \ne \emptyset$, for every $i=1,\ldots,
8$, then $q \in C$. Indeed, if $q \notin C$ then there
exists a set $Q \in {\cal{Q}}({\cal D})$, such that $(q + Q) \cap N
= \emptyset$. Since $\alpha({\cal D}) \leq \pi/2$ there exists
an $i$, $1 \leq i \leq 8$, such that $R_i \subseteq q +
Q$; see Figure \ref{fig:quadrants}. This is a
contradiction, since $R_i \cap N \ne \emptyset$. Thus,
the probability that $q$ lies outside $C$ is $\leq
8n^{-c^2/8}$.
\begin{figure}
\centerline{
\Ipe{figs/quadrants.ipe}
}
\caption{Since $\alpha({\cal D}) \leq \pi/2$, any quadrant
$Q \in {\cal{Q}}({\cal D})$, when translated by $q$, must
contain one of the $R_i$'s.}
\label{fig:quadrants}
\end{figure}
Let $N'$ denote a set of $n^{10}$ points spread
uniformly on the boundary of $B_{r'}$. By the above
analysis, all the points of $N'$ lie inside $C$ with
probability at least $1-8n^{10-c^2/8}$. Furthermore,
arguing as above, we conclude that $B_{r} \subseteq
\DCH{{\cal D}}(N')$, where $r = 1 - 2 c \sqrt{(\log{n})/n}$.
Hence, with probability at least $1 - 8n^{10-c^2/8}$,
$\DCH{{\cal D}}(N)$ contains $B_r$. The lemma now follows by
setting $c=20$, say.
\end{proof}
Since the set of directions may contain large gaps, there
are points in $B_1 \setminus B_r$ that are ``unsafe'', in
the following sense:
\begin{definition}
Let ${\cal D}$ be a set of directions, and let $0 \leq r \leq
1$ be a prescribed constant, such that $0 \leq
\alpha({\cal D}) \leq \pi / 2$. A point $p$ in $B_1$ is {\em
safe}, relative to $B_r$, if $op \subseteq
\DCH{{\cal D}}(B_r \cup \brc{p})$.
\end{definition}
See Figure \ref{fig:unsafe} for an example of how the unsafe
areas look. The behavior of the $\DCH{{\cal D}}$ inside the
unsafe areas is somewhat unpredictable. Fortunately, those
areas are relatively small.
\begin{figure}
\centerline{
\Ipe{figs/unsafe1.ipe}
}
\caption{The dark areas are the unsafe areas for a
consecutive pair of directions
$v_1, v_2 \in {\cal D}$.}
\label{fig:unsafe}
\end{figure}
\begin{lemma}
Let ${\cal D}$ be a set of directions, such that $0 \leq
\alpha({\cal D}) \leq \pi / 2$, and let $r= 1 - O \pth{
\sqrt{(\log{n})/n}}$. The unsafe area in $B_1$,
relative to $B_r$, can be covered by a union of $O(1)$ caps.
Furthermore, the length of the base of such a cap is
$O(((\log{n})/n)^{1/4})$, and its height is
$O\pth{\sqrt{(\log{n})/n}}$.
\label{lamma:bad:bad:bad:caps}
\end{lemma}
\begin{proof}
Let $p$ be an unsafe point of $B_1$. Let
$\overrightarrow{v_1},\overrightarrow{v_2}$ be the
consecutive pair of vectors in ${\cal D}$, such that the
vector $\overrightarrow{po}$ lies between them. If
$\mathop{ray}(p,\overrightarrow{v_1}) \cap B_r \ne \emptyset$,
and $\mathop{ray}(p,\overrightarrow{v_2}) \cap B_r \ne
\emptyset$ then $po \subseteq {\mathop{\mathrm{CH}}} \pth{ \brc{ p, o, p_1,
p_2 } } \subseteq \DCH{{\cal D}}(B_r \cup \brc{p})$, for
any pair of points $p_1 \in B_r \cap
\mathop{ray}(p,\overrightarrow{v_1}), p_2 \in B_r \cap
\mathop{ray}(p,\overrightarrow{v_2})$. Thus, $p$ is unsafe only
if one of those two rays misses $B_r$. Since $p$ is close
to $B_r$, the angle between the two tangents to $B_r$
emanating from $p$ is close to $\pi$. This implies that
the angle between $\overrightarrow{v_1}$ and
$\overrightarrow{v_2}$ is at least $\pi/4$ (provided $n$
is at least some sufficiently large constant), and the
number of such pairs is at most $8$.
The region of the plane that sees $o$ in a direction
between $\overrightarrow{v_1}$ and
$\overrightarrow{v_2}$ is a quadrant $Q$ of the plane.
The area in $Q$ which is safe is a parallelogram
$T$. Thus, the unsafe area in $B_1$ induced by the
pair $\overrightarrow{v_1}$ and $\overrightarrow{v_2}$
is $(B_1 \cap Q) \setminus T$. Since $\alpha({\cal D}) \leq
\pi/2$, this set can be covered by two caps of $B_1$ with
their base lying on the boundary of $B_r$. See Figure
\ref{fig:unsafe}.
The height of such a cap is $1-r =
O\pth{\sqrt{(\log{n})/n}}$, and the length of the base of
such a cap is $2\sqrt{ 1- r^2 } \leq 2\sqrt{2(1-r)} =
O\pth{ ((\log{n})/n)^{1/4} }$, since $1 - r^2 =
(1-r)(1+r) \leq 2(1-r)$.
\end{proof}
The proof of Lemma \ref{lamma:bad:bad:bad:caps} is where our
assumption that $\alpha({\cal D}) \leq \pi/2$ plays a critical
role. Indeed, if $\alpha({\cal D}) > \pi/2$, then the unsafe areas
in $B_1 \setminus B_r$ become much larger, as indicated by
the proof.
\begin{theorem}
Let ${\cal D}$ be a set of directions, such that $0 \leq
\alpha({\cal D}) \leq \pi / 2$. The expected number of
vertices of $\DCH{{\cal D}}(N)$, where $N$ is a set of $n$
points, chosen uniformly and independently from the unit
disk, is $O\pth{n^{1/3} + \sqrt{n\alpha({\cal D})}}$.
\label{theorem:area:x}
\end{theorem}
\begin{proof}
We claim that the expected area of $\DCH{{\cal D}}(N)$ is at
least $\pi - O \pth{ n^{-2/3} + \sqrt{\alpha/n} }$,
where $\alpha = \alpha({\cal D})$. The theorem will then
follow from Lemma \ref{lemma:area:to:vertices:ext}.
Indeed, let $m$ be an integer to be specified later, and
assume, without loss of generality, that $m$ divides
$n$. Partition $B_1$ into $m$ congruent sectors, ${\cal S}_1,
\ldots, {\cal S}_m$. Let $B^1, \ldots, B^{\mu}$ denote the
$\mu = n/m$ disks centered at the origin, such that (i)
$B^{1} = B_1$, and (ii) $Area(B^{i-1}) - Area(B^{i}) =
\pi/\mu$, for $i=2, \ldots, \mu$. Let $r_i$ denote the
radius of $B^i$, for $i=1, \ldots, \mu$. Note\footnote{
$Area(B^1) - Area(B^2) = \pi(1-r_2^2) = \pi/\mu$,
thus $r_2^2 = 1 - 1/\mu$. We have $r_2 \leq 1
-1/(2\mu)$, and $r_1 - r_2 \geq 1 - (1-1/(2\mu)) =
1/(2\mu)$.}, that $r_{i} - r_{i+1} \geq r_{i-1} -
r_{i}\geq 1/(2\mu)$, for $i=2, \ldots, \mu-1$.
Let $r= 1 - O\pth{\sqrt{(\log{n})/n}}$, and let $U$ be
the set of sectors that either intersect an unsafe area
of $B_1$ relative to $B_r$, or whose neighboring sectors
intersect such an unsafe area. By Lemma
\ref{lamma:bad:bad:bad:caps}, the number of sectors in
$U$ is $O(1) \cdot O \pth{\frac{ ((\log{n})/n)^{1/4}}
{(2\pi/m)}} = O\pth{m((\log{n})/n)^{1/4}}$.
Let $S_{i,j} = (B^{i} \setminus B^{i+1}) \cap {\cal S}_j$,
and $S_{\mu,j}=B^{\mu} \cap {\cal S}_j$, for $i=1, \ldots,
\mu-1$, and $j=1, \ldots, m$. The set $S_{i,j}$ is
called the $i$-th {\em tile} of the sector ${\cal S}_j$, and
its area is $\pi/n$, for $i=1,\ldots,\mu$, and $j=1,
\ldots,m$.
Let $X_j$ denote the first index $i$ such that $N \cap
S_{i, j} \ne \emptyset$, for $j=1, \ldots, m$. The
probability that $X_j = k$ is upper-bounded by the
probability that the tiles $S_{1, j}, \ldots,
S_{(k-1),j}$ do not contain any point of $N$; namely, by
$\pth{1-\frac{k-1}{n}}^{n}$. Thus, $P[X_j = k] \leq
\pth{1-\frac{k-1}{n}}^{n} \leq e^{-(k-1)}$. Hence,
\[
E \pbrc{ X_j } = \sum_{k=1}^{\mu} k P[X_j = k ] \leq
\sum_{k=1}^{\mu} k e^{-(k-1)} = O(1),
\]
for $j=1, \ldots, m$.
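As a quick numeric sanity check (ours, not part of the proof), the bounding series $\sum_{k\geq 1} k e^{-(k-1)}$ indeed sums to a constant:

```python
import math

# Sanity check (not part of the proof): the series bounding E[X_j],
# sum_{k>=1} k * e^{-(k-1)}, converges to the constant (e/(e-1))^2.
partial = sum(k * math.exp(-(k - 1)) for k in range(1, 200))
closed_form = (math.e / (math.e - 1)) ** 2  # roughly 2.5
print(partial, closed_form)
```

The closed form follows from $\sum_{k\geq 1} k x^{k-1} = 1/(1-x)^2$ at $x = 1/e$.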
Let $C$ denote the set $\DCH{{\cal D}}(N \cup B_r)$. The tile
$S_{i,j}$ is {\em exposed} by a set $K$, if $S_{i,j}
\setminus K \ne \emptyset$.
We claim that the expected number of tiles exposed by
$C$ in a sector ${\cal S}_j \notin U$ is at most $X_{j-1} +
X_{j+1} + O(\mu/m^2 + \alpha\mu/m)$, for $j=1,\ldots, m$
(where we put $X_0 = X_m$, $X_{m+1} = X_1$).
Indeed, let $w=\max(X_{j-1},X_{j+1})$, and let $p,q$ be
the two points in $S_{j-1,w}, S_{j+1,w}$, respectively,
such that the number of sets exposed by the triangle $T
= \triangle{opq}$, in the sector ${\cal S}_j$, is maximal.
Both $p$ and $q$ lie on ${\partial}{B^{w+1}}$ and on the
external radii bounding ${\cal S}_{j-1}$ and ${\cal S}_{j+1}$, as
shown in Figure \ref{fig:slice}. Let $s$ denote the
segment connecting the midpoint $\rho$ of the base of
$T$ to its closest point on ${\partial}{B^w}$. The number of
tiles in ${\cal S}_j$ exposed by $T$ is bounded by $w$, plus
the number of tiles intersecting the segment $s$. The
length of $s$ is
\[
|oq| - |oq| \cos \pth{ \frac{3}{2} \cdot \frac{2\pi}{m}}
\leq 1 - \cos \pth{ \frac{3\pi}{m}} \leq \frac{1}{2}
\pth{ \frac{3\pi}{m}}^2 = \frac{4.5\pi^2}{m^2},
\]
since $\cos{x} \geq 1-x^2/2$, for $x \geq 0$.
On the other hand, the segment $s$ intersects at most
$\ceil{|s|/(1/(2\mu))} = O(\mu/m^2)$ tiles, and we
have that the number of tiles exposed in the sector
${\cal S}_j$ by $T$ is at most $w + O(\mu/m^2)$, for $j=1,
\ldots, m$.
Since ${\cal S}_j \notin U$, the points $p,q$ are safe, and
$op, oq \subseteq C$. This implies that the only
additional tiles that might be exposed in ${\cal S}_j$ by
$C$, are exposed by the portion of the boundary of $C$
between $p$ and $q$ that lies inside $T$. Let $V$ be the
circular cap consisting of the points in $T$ lying
between $pq$ and a circular arc $\gamma \subseteq T$,
connecting $p$ to $q$, such that for any point $p' \in
\gamma$ one has $\angle{pp'q} = \pi - \alpha$. See
Figure \ref{fig:circ:arc}.
\begin{figure}
\centerline{
\Ipe{figs/attack.ipe} }
\caption{The portion of $T$ that can be removed by a quadrant
$Q$ of ${\cal T}({\cal D})$, is covered by the darkly-shaded
circular cap, such that any point on its bounding
arc creates an angle $\pi-\alpha$ with $p$ and
$q$.}
\label{fig:circ:arc}
\end{figure}
Let $Q \in {\cal T}({\cal D})$ be any quadrant of the plane induced
by ${\cal D}$, such that $Q \cap N = \emptyset$ (i.e. $C\cap Q
= \emptyset$), and $Q \cap T \ne \emptyset$. Then, $Q
\cap op = \emptyset, Q \cap oq =\emptyset$ since $p$ and
$q$ are safe. Moreover, the angle of $Q$ is at least
$\pi - \alpha$, which implies that $Q \cap T \subseteq
V$. See Figure \ref{fig:circ:arc}.
Let $s'$ be the segment $o\rho \cap V$, where $\rho$ is
as above, the midpoint of $pq$. The length of $s'$ is
\[
|s'| \leq \sin \pth{ \frac{3}{2} \cdot \frac{2\pi}{m}}
\tan{ \frac{\alpha}{2} } \leq \frac{3\pi}{m}
\frac{\sqrt{2}\alpha}{2} \leq \frac{3\pi\alpha}{m},
\]
since $\sin{x} \leq x$, for $x \geq 0$, and $1/\sqrt{2}
\leq \cos{(\alpha/2)}$ (because $0 \leq \alpha \leq
\pi/2$).
Thus, the expected number of tiles exposed by $C$, in a
sector ${\cal S}_j \notin U$, is bounded by
\[
X_{j-1} + X_{j+1} + O \pth{ \frac{\mu}{m^2}} + O \pth{
\frac{3\pi\alpha/m}{1/(2\mu)} } = X_{j-1} + X_{j+1} +
O \pth{ \frac{\mu}{m^2}} + O \pth{ \frac{\alpha
\mu}{m}}.
\]
Thus, the expected number of tiles exposed by $C$, in
sectors that do not belong to $U$, is at most
\[
E \pbrc{ \sum_{j=1}^{m} \pth{ X_{j-1} + X_{j+1} + O
\pth{ \frac{\mu}{m^2}} + O \pth{ \frac{\alpha
\mu}{m}} } } = O \pth{ m + \frac{\mu}{m} +
\alpha \mu}.
\]
Adding all the tiles that lie outside $B_r$ in the
sectors that belong to $U$, it follows that the expected
number of tiles exposed by $C$ is at most
\begin{eqnarray*}
O&&\hspace{-0.75cm} \pth{ m + \frac{\mu}{m} + \alpha
\mu + |U|\cdot \frac{1-r}{1/2\mu} }= O \pth{ m +
\frac{\mu}{m} + \alpha \mu + m
\pth{\frac{\log{n}}{n}}^{1/4} \cdot
\mu \sqrt{\pth{\frac{\log{n}}{n}}}} \\
&=& O \pth{ m + \frac{n}{m^2} + \frac{\alpha n}{m} +
n\pth{\frac{\log{n}}{n}}^{3/4} } = O \pth{ m +
\frac{n}{m^2} + \frac{\alpha n}{m} +
n^{1/4}\log^{3/4}{n} }.
\end{eqnarray*}
Setting $m=\max{\pth{n^{1/3}, \sqrt{n\alpha}}}$, we
conclude that the expected number of tiles exposed by
$C$ is $O\pth{n^{1/3} + \sqrt{n\alpha}}$.
The area of $C' =\DCH{{\cal D}}(N)$ is bounded from below by
the area of the tiles which are not exposed by $C'$. The
probability that $C' \ne C$ (namely, that the disk $B_r$
is not inside $C'$) is at most $n^{-10}$, by Lemma
\ref{lemma:big:disk}. Hence the expected area of $C'$
is at least
\[
E[ Area( C ) ] - Prob \pbrc{ C \ne C' } \pi \geq \pi -
O\pth{ n^{1/3} + \sqrt{n\alpha}}\frac{\pi}{n} -
n^{-10}\pi = \pi -O \pth{n^{-2/3} +
\sqrt{\frac{\alpha}{n}} \;}.
\]
The assertion of the theorem now follows from Lemma
\ref{lemma:area:to:vertices:ext}.
\end{proof}
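For $\alpha({\cal D})=0$ the bound specializes to the classical $O(n^{1/3})$ expected complexity of the convex hull of uniform points in a disk. The growth rate is easy to observe empirically; the following Monte Carlo sketch (ours, using Andrew's monotone chain, not anything from the paper) illustrates it:

```python
import random

def convex_hull(points):
    """Andrew's monotone chain; returns the hull vertices in order."""
    pts = sorted(points)

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain[:-1]

    # lower hull (ascending order) plus upper hull (descending order)
    return half(pts) + half(pts[::-1])

def disk_point(rng):
    while True:  # rejection sampling from the unit disk
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            return (x, y)

rng = random.Random(0)
n = 20000
hull = convex_hull([disk_point(rng) for _ in range(n)])
print(len(hull), n ** (1 / 3))  # hull size is a small multiple of n^{1/3}
```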
The expected complexity of the $\DCH{{\cal D}_{xy}}$ of $n$
points, chosen uniformly and independently from the unit
square, is $O(\log{n})$ (Lemma \ref{lemma:vertices:square}).
Unfortunately, this is a degenerate case for a set of
directions with $\alpha({\cal D}) = \pi/2$, as the following
corollary testifies:
\begin{corollary}
Let ${\cal D}_{xy}'$ be the set of directions resulting from
rotating ${\cal D}_{xy}$ by 45 degrees. Let $N$ be a set of
$n$ points, chosen independently and uniformly from the
unit square ${S'}$. The expected complexity of
$\DCH{{\cal D}_{xy}'}(N)$ is $\Omega \pth{\sqrt{n}}$.
\end{corollary}
\begin{proof}
Without loss of generality, assume that $n=m^2$ for some
integer $m$. Tile ${S'}$ with $n$ translated copies of
a square of area $1/n$. Let ${\cal S}_1, \ldots, {\cal S}_m$ denote
the squares in the top row of this tiling, from left to
right. Let $A_j$ denote the event that ${\cal S}_j$ contains a
point of $N$, and neither of the two adjacent squares
${\cal S}_{j-1}, {\cal S}_{j+1}$ contains a point of $N$, for $j=2,
\ldots, m-1$.
We have
\[
Prob\pbrc{A_j} = Prob \pbrc{ {\cal S}_{j+1} \cap N = \emptyset
\text{ and }{\cal S}_{j-1} \cap N = \emptyset } - Prob
\pbrc{ ({\cal S}_{j-1} \cup {\cal S}_j \cup {\cal S}_{j+1}) \cap N =
\emptyset },
\]
for $j=2, \ldots, m-1$. Hence,
\[
\lim_{n\rightarrow \infty} Prob\pbrc{A_j} =
\lim_{n\rightarrow \infty } \pth{ \pth{1 -
\frac{2}{n}}^{n} - \pth{1 - \frac{3}{n}}^{n} } =
e^{-2} - e^{-3} \approx 0.0855.
\]
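The limit is easy to check numerically (a sanity check of ours, not part of the proof):

```python
import math

# Sanity check of the limit (1 - 2/n)^n - (1 - 3/n)^n -> e^{-2} - e^{-3}.
limit = math.exp(-2) - math.exp(-3)
for n in (10**3, 10**5, 10**7):
    print(n, (1 - 2 / n) ** n - (1 - 3 / n) ** n)
print(limit)  # approximately 0.0855, matching the text
```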
\begin{figure}
\centerline{
\Ipe{figs/squares.ipe}
}
\caption{If $A_j$ happens, then the squares
${\cal S}_{j-1}, {\cal S}_{j+1}$ do not
contain a point of $N$. Thus, if $q$ is the
highest point in ${\cal S}_j$, then $q+Q_{top}$ can not
contain a point of $N$, and $q$ is a vertex of
$\DCH{{\cal D}_{xy}'}(N)$.}
\label{fig:squares}
\end{figure}
This implies that, for $n$ large enough, $Prob
\pbrc{A_j} \geq 0.01$. Thus, the expected value of $Y$
is $\Omega(m) = \Omega \pth{ \sqrt{n}}$, where $Y$ is
the number of $A_j$'s that have occurred, for
j=2,\ldots, m-1$. However, if $A_j$ occurs, then $C
=\DCH{{\cal D}_{xy}'}(N)$ must have a vertex in ${\cal S}_j$.
Indeed, let $Q_{top}$ denote the quadrant of
${\cal T}({\cal D}_{xy}')$ that contains the positive $y$-axis. If
we translate $Q_{top}$ to the highest point in ${\cal S}_j \cap
N$, then it does not contain a point of $N$ in its
interior, implying that this point is a vertex of $C$,
see Figure \ref{fig:squares}.
This implies that the expected complexity of
$\DCH{{\cal D}_{xy}'}(N)$ is $\Omega \pth{\sqrt{n}}$.
\end{proof}
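The argument can be watched in action: in the rotated coordinates $(u,v)=(x+y,\,y-x)$ the quadrant $Q_{top}$ becomes the standard open positive quadrant, so the vertices exposed by $Q_{top}$ are exactly the staircase maxima of the rotated point set. A Monte Carlo sketch (ours, not from the paper):

```python
import random

# Count vertices of DCH_{D'_xy}(N) exposed by the rotated "top" quadrant
# Q_top = {(x, y) : y > |x|}.  In coordinates (u, v) = (x + y, y - x),
# Q_top is the open positive quadrant, so these vertices are exactly the
# staircase maxima of the rotated point set.
def count_top_vertices(points):
    uv = sorted((x + y, y - x) for x, y in points)
    count, best_v = 0, float("-inf")
    for _, v in reversed(uv):  # scan by decreasing u
        if v > best_v:         # no point dominates (u, v) in both coordinates
            count += 1
            best_v = v
    return count

rng = random.Random(1)
n = 10000
pts = [(rng.random(), rng.random()) for _ in range(n)]
num_vertices = count_top_vertices(pts)
print(num_vertices, n ** 0.5)  # grows like sqrt(n), as the corollary predicts
```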
\section[Result]{On the Expected Number of Points on the Boundary of the
Quadrant Hull Inside a Hypercube}
\label{sec:hcube}
In this section, we show that the expected number of points
on the boundary of the quadrant hull of a set $S$ of $n$
points, chosen uniformly and independently from the unit
hypercube, is $O(\log^{d-1}n)$. These bounds are known
\cite{bkst-anmsv-78}, but we believe the new proof is
simpler.
\begin{definition}[\cite{mp-ofsch-97}]
Let ${\cal Q}$ be a family of subsets of $\Re^d$. For a set $A
\subseteq \Re^d$, we define the ${\cal Q}$-hull of $A$ as
\[
\QHull{{\cal Q}}(A) = \bigcap\brc{ Q \in {\cal Q} \sep{ A \subseteq
Q }}.
\]
\end{definition}
\begin{definition}[\cite{mp-ofsch-97}]
For a sign vector $s \in \brc{-1, +1}^d$, define
\[
q_s = \brc{ x \in \Re^d \sep{ \mathop{\mathrm{sign}}(x_i ) = s_i,
\text{ for } i=1, \ldots, d }},
\]
and for $a \in \Re^d$, let $q_s(a) =q_s + a$. We set
${\cal Q}_{sc} = \brc{ \Re^d \setminus q_s(a) \sep{ a \in
\Re^d, s \in \brc{-1,+1}^d}}$. We shall refer to
$\QHull{{\cal Q}_{sc}}(A)$ as the {\em quadrant hull} of $A$. It
consists of all points which cannot be separated from $A$
by any open orthant in space (i.e., a quadrant in the plane).
\end{definition}
\begin{definition}
Given a set of points $S \subseteq \Re^d$, a point $p
\in \Re^d$ is {\em ${\cal Q}_{sc}$-exposed}, if there is $s \in
\brc{-1,+1}^d$, such that $q_s(p) \cap S = \emptyset$. A
set $C$ is {\em ${\cal Q}_{sc}$-exposed}, if there exists a
point $p\in C$ which is ${\cal Q}_{sc}$-exposed.
\end{definition}
\begin{definition}
For a set $S \subseteq \Re^d$, let $n_{sc}(S)$ denote
the number of points of $S$ on the boundary of $\QHull{{\cal Q}_{sc}}(S)$.
\end{definition}
\begin{theorem}
Let ${\cal C}$ be a unit axis-parallel hypercube in $\Re^d$,
and let $S$ be a set of $n$ points chosen uniformly and
independently from ${\cal C}$. Then, the expected number of
points of $S$ on the boundary of $H = \QHull{{\cal Q}_{sc}}(S)$ is
$O(\log^{d-1}(n))$.
\label{theorem:main}
\end{theorem}
\begin{proof}
We partition ${\cal C}$ into equal size tiles, of volume
$1/n^d$; that is $C(i_1, i_2, \ldots, i_d) = [(i_1-1)/n,
i_1/n] \times \cdots \times [(i_d - 1)/n, i_d/n]$, for
$1 \leq i_1, i_2, \ldots, i_d \leq n$.
We claim that the expected number of tiles in our
partition of ${\cal C}$ which are exposed by $S$ is $O(n^{d-1}
\log^{d-1}n)$.
Indeed, let $q = q_{(-1, -1, \ldots, -1)}$ be the
``negative'' quadrant of $\Re^d$. Let $X(i_2, \ldots,
i_d)$ be the maximal integer $k$, for which $C(k,i_2,
\ldots, i_d)$ is exposed by $q$. The probability that
$X(i_2, \ldots, i_d) \geq k$ is bounded by the
probability that the cubes $C(l_1, l_2, \ldots, l_d)$
do not contain a point of $S$, where $l_1 < k, l_2<
i_2, \ldots, l_d < i_d$. Thus,
\begin{eqnarray*}
\Pr \pbrc{ X(i_2, \ldots, i_d) \geq k} &\leq& \pth{
1 - \frac{(k-1)(i_2
-1) \cdots (i_d-1)}{n^d}}^n \\
&\leq& \exp \pth{ -\frac{{(k-1)(i_2 - 1) \cdots
(i_d-1)}}{n^{d-1}}},
\end{eqnarray*}
since $1-x \leq e^{-x}$, for $x \geq 0$.
Hence, $\Pr\pbrc{X(i_2, \ldots,
i_d) \geq i\cdot m + 1} \leq e^{-i}$, where\\ $m =
\ceil{\frac{n^{d-1}}{(i_2-1) \cdots (i_d-1)}}$. Thus,
\begin{eqnarray*}
E \pbrc{ X(i_2, \ldots, i_d) } &=&
\sum_{i=1}^{\infty} i \Pr\pbrc{X(i_2, \ldots, i_d) =
i } = \sum_{i=0}^{\infty} \sum_{j=im+1}^{(i+1)m}
j \Pr\pbrc{X(i_2, \ldots, i_d) = j }\\
&\leq& \sum_{i=0}^{\infty} (i+1)m \Pr \pbrc{ X(i_2,
\ldots, i_d) \geq im + 1} \leq
\sum_{i=0}^{\infty} (i+1)me^{-i} = O(m).
\end{eqnarray*}
Let $r$ denote the expected number of tiles exposed by
$q$ in ${\cal C}$. If $C(i_1, \ldots, i_d)$ is exposed by
$q$, then $X(i_2, \ldots, i_d) \geq i_1$. Thus, one can
bound $r$ by the number of tiles on the boundary of
${\cal C}$, plus the sum of the expectations of the variables
$X(i_2, \ldots, i_d)$. We have
\begin{eqnarray*}
r &=& O(n^{d-1}) + \sum_{i_2=2}^{n-1}
\sum_{i_3=2}^{n-1} \cdots \sum_{i_d=2}^{n-1} O \pth{
\frac{n^{d-1}}{(i_2 - 1)(i_3 -1 )
\cdots(i_d - 1)}} \\
&=& O \pth{ n^{d-1}} \sum_{i_2=2}^{n-1}
\frac{1}{i_2-1}\sum_{i_3=2}^{n-1} \frac{1}{i_3-1}
\cdots \sum_{i_d=2}^{n-1} \frac{1}{i_d-1} = O \pth{
n^{d-1} \log^{d-1}{n}}.
\end{eqnarray*}
The set ${\cal Q}_{sc}$ consists of the complements of all
translates of the $2^d$ different quadrants. This implies, by symmetry, that
the expected number of tiles exposed in ${\cal C}$ by $S$ is
$O \pth{ 2^dn^{d-1} \log^{d-1}{n}} = O \pth{n^{d-1}
\log^{d-1}{n}}$. However, if a tile is not exposed by
any $q_s$, for $s \in \brc{-1,+1}^d$, then it lies in
the interior of $H$, implying that the expected volume
of $H$ is at least
\[
\frac{n^d - O\pth{n^{d-1} \log^{d-1}{n}}}{n^d} = 1 -
O\pth{\frac{\log^{d-1} n}{n}}.
\]
We now apply an argument similar to the one used in
Lemma \ref{lemma:area:to:vertices} (Efron's Theorem),
and the theorem follows.
\end{proof}
\remove{
We are now in position to apply an argument similar to
the one used in Lemma \ref{lemma:area:to:vertices}:
Partition $S$ into two equal size sets $S_1, S_2$. We
have that $n_{sc}(S)$ is bounded by the number of points
of $S_1$ falling outside $H_2 = \QHull{{\cal Q}_{sc}}(S_2)$, plus the
number of points of $S_2$ falling outside $H_1 =
\QHull{{\cal Q}_{sc}}(S_1)$, where $n_{sc}(S)$ is the number of points of
$S$ on the boundary of $\QHull{{\cal Q}_{sc}}(S)$. We conclude,
\begin{eqnarray*}
E[ n_{sc}(S) ] &\leq& E \pbrc{ |S_1 \setminus H_2 |
} + E \pbrc{ |S_2 \setminus H_1|} = E\pbrc{
\rule{0cm}{0.5cm} E \pbrc{ |S_1 \setminus H_2|
\sep{ S_2 } }} + E \pbrc{ \rule{0cm}{0.5cm} E
\pbrc{ |S_2
\setminus H_1| \sep{ S_1 } } }\\
&=& E\pbrc{ \frac{n}{2}\pth{1 - \mathop{\mathrm{Vol}}(H_2)} +
\frac{n}{2}\pth{1 - \mathop{\mathrm{Vol}}(H_1)}} = O \pth{ n \cdot
\frac{\log^{d-1} n}{n}} = O( \log^{d-1} n),
\end{eqnarray*}
and the theorem follows.
}
\begin{remark}
A point $p$ of $S$ is a {\em maxima}, if there is no
point $p'$ in $S$, such that $p_i \leq p'_i$, for
$i=1,\ldots, d$. Clearly, a point which is a maxima is
also on the boundary of $\QHull{{\cal Q}_{sc}}(S)$. By Theorem
\ref{theorem:main}, the expected number of maxima in a
set of $n$ points chosen independently and uniformly
from the unit hypercube in $\Re^d$ is $O(\log^{d-1} n)$.
This was also proved in \cite{bkst-anmsv-78}, but we
believe that our new proof is simpler.
Also, as noted in \cite{bkst-anmsv-78}, a vertex of the
convex hull of $S$ is a point of $S$ lying on the
boundary of $\QHull{{\cal Q}_{sc}}(S)$. Hence, the expected
number of vertices of the convex hull of a set of $n$
points chosen uniformly and independently from a
hypercube in $\Re^d$ is $O(\log^{d-1} n)$.
\end{remark}
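The $O(\log^{d-1} n)$ count of maxima is easy to observe empirically; a brute-force sketch (ours), for $d=3$:

```python
import math
import random

# Brute-force count of the maxima of n random points in [0,1]^d.  The
# theorem above predicts an expected count of order log^{d-1}(n),
# which is tiny compared with n.
def count_maxima(points):
    def dominated(p):
        return any(all(qi > pi for qi, pi in zip(q, p)) for q in points if q != p)
    return sum(not dominated(p) for p in points)

rng = random.Random(0)
n, d = 1000, 3
pts = [tuple(rng.random() for _ in range(d)) for _ in range(n)]
num_maxima = count_maxima(pts)
print(num_maxima, math.log(n) ** (d - 1))
```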
\subsection*{Acknowledgments}
I wish to thank my thesis advisor, Micha Sharir, for his
help in preparing this manuscript. I also wish to thank
Pankaj Agarwal, and Imre B{\'a}r{\'a}ny for helpful
discussions concerning this and related problems.
\bibliographystyle{salpha}
\section{Introduction}
In geometric group theory, weak amenability is an approximation property which is satisfied by a large class of groups (for example free or even hyperbolic groups \cite{ozawa2007weak}) but is strong enough to give interesting properties for the von Neumann algebras associated to these groups (for example related to deformation/rigidity techniques \cite{ozawa2010class}, \cite{ozawa2010classbis}). The stability of this property under free products is still an open question. However, using a very general version of the Khintchine inequality, E. Ricard and Q. Xu were able to prove in \cite{ricard2005khintchine} that if $(G_{i})$ is a family of weakly amenable discrete groups \emph{with Cowling-Haagerup constant equal to $1$}, then their free product is again weakly amenable, and its Cowling-Haagerup constant is also equal to $1$. The proof uses a classical characterization of those bounded functions on a group giving rise to completely bounded multipliers.
This characterization has recently been generalized to arbitrary locally compact quantum groups by M. Daws in \cite{daws2011multipliers}. Using it, we will prove an analogue of Ricard and Xu's result in the setting of discrete quantum groups and give some new examples of non-commutative and non-cocommutative discrete quantum groups having Cowling-Haagerup constant equal to $1$. Before that, we give a brief survey on the notion of weak amenability for discrete quantum groups. Along the way, we prove Theorem \ref{thm:quantumwa} which is a slight generalization of \cite[Thm 5.14]{kraus1999approximation}.
\section{Preliminaries}
\subsection{Notations}
All scalar products will be taken to be \emph{left-linear}. For two Hilbert spaces $H$ and $K$, $\mathcal{B}(H, K)$ will denote the set of bounded linear maps from $H$ to $K$ and $\mathcal{B}(H):=\mathcal{B}(H, H)$. In the same way we use the notations $\mathcal{K}(H, K)$ and $\mathcal{K}(H)$ for compact linear maps. We will denote by $\mathcal{B}(H)_{*}$ the predual of $\mathcal{B}(H)$, i.e. the Banach space of all normal linear forms on $\mathcal{B}(H)$. On any tensor product $A\otimes B$, we define the flip operator
\begin{equation*}
\Sigma : \left\{\begin{array}{ccc}
A\otimes B & \rightarrow & B\otimes A \\
x\otimes y & \mapsto & y\otimes x
\end{array}\right.
\end{equation*}
We will use the usual leg-numbering notations: for an operator $X$ acting on a tensor product we set $X_{12}:=X\otimes1$, $X_{23}:=1\otimes X$ and $X_{13}:=(\Sigma\otimes 1)(1\otimes X)(\Sigma\otimes 1)$. The identity map of a C*-algebra $A$ will be denoted $\i_{A}$ or simply $\i$ if there is no possible confusion. For a subset $B$ of a topological vector space $C$, $\spa B$ will denote the \emph{closed linear span} of $B$ in $C$. The symbol $\otimes$ will denote the \emph{minimal} (or spatial) tensor product of C*-algebras or the topological tensor product of Hilbert spaces.
\subsection{Compact and discrete quantum groups}
Discrete quantum groups will be seen as duals of compact quantum groups in the sense of Woronowicz. We briefly present the basic theory of compact quantum groups as introduced in \cite{woronowicz1995compact}. Another survey, encompassing the non-separable case, can be found in \cite{maes1998notes}.
\begin{de}
A \emph{compact quantum group} $\mathbb{G}$ is a pair $(C(\mathbb{G}), \Delta)$ where $C(\mathbb{G})$ is a unital C*-algebra and $\Delta : C(\mathbb{G})\rightarrow C(\mathbb{G})\otimes C(\mathbb{G})$ is a unital $*$-homomorphism such that
\begin{equation*}
(\Delta\otimes \i)\circ\Delta = (\i\otimes\Delta)\circ\Delta\text{ and }\overline{\Delta(C(\mathbb{G}))(1\otimes C(\mathbb{G}))} = C(\mathbb{G})\otimes C(\mathbb{G}) = \overline{\Delta(C(\mathbb{G}))(C(\mathbb{G})\otimes 1)}.
\end{equation*}
\end{de}
The main feature of compact quantum groups is the existence of a Haar measure, which happens to be both left and right invariant (see \cite[Thm 1.3]{woronowicz1995compact} or \cite[Thm 4.4]{maes1998notes}).
\begin{prop}
Let $\mathbb{G}$ be a compact quantum group, there is a unique Haar state on $\mathbb{G}$, that is to say a state $h$ on $C(\mathbb{G})$ such that for all $a\in C(\mathbb{G})$,
\begin{eqnarray*}
(\i\otimes h)\circ \Delta(a) & = & h(a)\cdot 1, \\
(h\otimes \i)\circ \Delta(a) & = & h(a)\cdot 1.
\end{eqnarray*}
\end{prop}
Let $(L^{2}(\mathbb{G}), \xi_{h})$ be the associated GNS construction and let $C_{\text{red}}(\mathbb{G})$ be the image of $C(\mathbb{G})$ under the GNS map, called the \emph{reduced form} of $\mathbb{G}$. Let $W$ be the unique unitary operator on $L^{2}(\mathbb{G})\otimes L^{2}(\mathbb{G})$ such that
\begin{equation*}
W^{*}(\xi\otimes a\xi_{h}) = \Delta(a)(\xi\otimes \xi_{h})
\end{equation*}
for $\xi \in L^{2}(\mathbb{G})$ and $a\in C(\mathbb{G})$ and let $\widehat{W} := \Sigma W^{*}\Sigma$. Then $W$ is a \emph{multiplicative unitary} in the sense of \cite{baaj1993unitaires}, i.e. $W_{12}W_{13}W_{23} = W_{23}W_{12}$, and we have the following equalities:
\begin{eqnarray*}
C_{\text{red}}(\mathbb{G}) & = & \overline{\rm{span}}(\i\otimes \mathcal{B}(L^{2}(\mathbb{G}))_{*})(W) \\
\Delta(x) & = & W^{*}(1\otimes x)W
\end{eqnarray*}
Moreover, we can define the \emph{dual discrete quantum group} $\widehat{\mathbb{G}} = (C_{0}(\widehat{\mathbb{G}}), \widehat{\Delta})$ by
\begin{eqnarray*}
C_{0}(\widehat{\mathbb{G}}) & = & \overline{\rm{span}}(\mathcal{B}(L^{2}(\mathbb{G}))_{*}\otimes\i)(W) \\
\widehat{\Delta}(x) & = & \Sigma W(x\otimes 1)W^{*}\Sigma
\end{eqnarray*}
This data defines a \emph{discrete quantum group}. One can prove that $W$ is in fact a multiplier of $C(\mathbb{G})\otimes C_{0}(\widehat{\mathbb{G}})$. We define two von Neumann algebras associated to these quantum groups as $L^{\infty}(\mathbb{G}) = C(\mathbb{G})''$ and $\ell^{\infty}(\widehat{\mathbb{G}})=C_{0}(\widehat{\mathbb{G}})''$ in $\mathcal{B}(L^{2}(\mathbb{G}))$. We will need a few facts about the representation theory of compact quantum groups.
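For a finite group $G$ the multiplicative unitary can be written down explicitly (a standard fact, not spelled out in the text): on $\ell^{2}(G)\otimes\ell^{2}(G)$ one may take $W(\delta_g\otimes\delta_h)=\delta_g\otimes\delta_{gh}$. The following pure-Python sketch (ours; the helper names are ours) verifies the pentagon equation for $G=\mathbb{Z}/3\mathbb{Z}$, building $W_{13}$ from the flip $\Sigma$ exactly as in the leg-numbering convention above:

```python
# Toy check of the pentagon equation for the multiplicative unitary of
# G = Z/3Z, acting by W(e_g (x) e_h) = e_g (x) e_{g+h}.  All helper
# names (kron, matmul, swap) are ours.
d = 3

def kron(A, B):
    return [
        [A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
        for i in range(len(A)) for k in range(len(B))
    ]

def matmul(A, B):
    return [
        [sum(A[i][t] * B[t][j] for t in range(len(B))) for j in range(len(B[0]))]
        for i in range(len(A))
    ]

def swap(n):
    # flip operator Sigma on C^n (x) C^n: e_i (x) e_j -> e_j (x) e_i
    S = [[0] * (n * n) for _ in range(n * n)]
    for i in range(n):
        for j in range(n):
            S[i * n + j][j * n + i] = 1
    return S

I = [[1 if i == j else 0 for j in range(d)] for i in range(d)]
W = [[0] * (d * d) for _ in range(d * d)]
for g in range(d):
    for h in range(d):
        W[d * g + (g + h) % d][d * g + h] = 1

S = swap(d)
W12 = kron(W, I)
W23 = kron(I, W)
W13 = matmul(matmul(kron(S, I), kron(I, W)), kron(S, I))  # leg numbering
lhs = matmul(matmul(W12, W13), W23)
rhs = matmul(W23, W12)
print(lhs == rhs)  # pentagon equation W12 W13 W23 = W23 W12 holds
```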
\begin{de}
A \emph{finite-dimensional representation} of a compact quantum group $\mathbb{G}$ is an element $u=(u_{i, j})\in M_{n}(C(\mathbb{G}))$ such that
\begin{equation*}
\Delta(u_{i, j}) = \sum_{k}u_{i, k}\otimes u_{k, j}.
\end{equation*}
The elements $u_{i, j}$ are called the \emph{coefficients} of the representation $u$. Such a representation will often be seen as an element of $M_{n}(\mathbb{C})\otimes C(\mathbb{G})$.
\end{de}
The following generalization of the classical Peter-Weyl theorem holds (see \cite[Section 6]{woronowicz1995compact}).
\begin{thm}[Woronowicz]
Every irreducible representation of a compact quantum group is finite dimensional and every unitary representation is unitarily equivalent to a sum of irreducible ones. Moreover, the linear span of the coefficients of all irreducible representations is a dense Hopf $*$-algebra denoted $\mathcal{C}(\mathbb{G})$.
\end{thm}
Let $\Ir(\mathbb{G})$ be the set of isomorphism classes of irreducible representations of $\mathbb{G}$. If $\alpha\in \Ir(\mathbb{G})$, we will denote by $u^{\alpha}$ a representative and $H_{\alpha}$ the finite dimensional Hilbert space on which it acts. There are isomorphisms
\begin{equation*}
C_{0}(\widehat{\mathbb{G}}) = \oplus_{\alpha\in \Ir(\mathbb{G})}\mathcal{B}(H_{\alpha})\text{ and }\ell^{\infty}(\widehat{\mathbb{G}}) = \prod_{\alpha\in \Ir(\mathbb{G})}\mathcal{B}(H_{\alpha}).
\end{equation*}
The minimal central projection in $\ell^{\infty}(\widehat{\mathbb{G}})$ corresponding to $\alpha$ will be denoted $p_{\alpha}$.
\section{Weak amenability}
We now study the notion of weak amenability for discrete quantum groups. In the classical case, this notion is defined by the existence of some bounded functions on the group giving rise to completely bounded multipliers. The notion of completely bounded multipliers for locally compact quantum groups has been studied extensively; see for instance \cite{hu2011completely, junge2009representation, kraus1999approximation, daws2011completely, daws2011multipliers}. For the sake of completeness, and because it is much simpler in the context of discrete quantum groups, we give a brief description of the main result.
If $G$ is a discrete group, one way to construct useful multipliers for its reduced C*-algebra is to start with a bounded function $\varphi : G\rightarrow \mathbb{C}$. Its associated multiplier is the operator $m_{\varphi}$ defined on the linear span of $\{\lambda(g)\}$ ($\lambda$ being the left regular representation) by $m_{\varphi}(\lambda(g))=\varphi(g)\lambda(g)$. One then looks for some criterion on $\varphi$ ensuring that $m_{\varphi}$ extends to a (completely) bounded map on $C^{*}_{r}(G)$.
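For a finite cyclic group this classical construction is completely concrete: $\lambda(g)$ has $(s,t)$ entry $1$ precisely when $st^{-1}=g$, so $m_{\varphi}$ acts on $x=\sum_g c_g\lambda(g)$ as Schur (entrywise) multiplication by the matrix $[\varphi(st^{-1})]_{s,t}$. A toy sketch (ours) for $G=\mathbb{Z}/5\mathbb{Z}$:

```python
# Toy illustration for G = Z/5Z: the multiplier m_phi acts on
# x = sum_g c_g lambda(g) as Schur (entrywise) multiplication by the
# matrix [phi(s - t)]_{s,t}.  All names here are ours.
n = 5

def lam(g):
    # left regular representation: lambda(g)[s][t] = 1 iff s - t = g (mod n)
    return [[1 if (s - t) % n == g else 0 for t in range(n)] for s in range(n)]

phi = [1.0, 0.5, 0.25, 0.25, 0.5]    # an arbitrary symbol phi : G -> C
c = [2.0, -1.0, 0.0, 3.0, 1.5]       # arbitrary coefficients c_g

x = [[sum(c[g] * lam(g)[s][t] for g in range(n)) for t in range(n)]
     for s in range(n)]
m_phi_x = [[sum(phi[g] * c[g] * lam(g)[s][t] for g in range(n))
            for t in range(n)] for s in range(n)]
schur = [[phi[(s - t) % n] * x[s][t] for t in range(n)] for s in range(n)]
print(m_phi_x == schur)  # the two descriptions of m_phi agree
```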
\begin{de}\label{de:quantummultiplier}
Let $\widehat{\mathbb{G}}$ be a discrete quantum group and $a\in \ell^{\infty}(\widehat{\mathbb{G}})$. The \emph{left multiplier} associated to $a$ is the map $m_{a} : \mathcal{C}(\mathbb{G}) \rightarrow \mathcal{C}(\mathbb{G})$ defined by
\begin{equation*}
(m_{a}\otimes \i)(u^{\alpha}) = (1\otimes ap_{\alpha})u^{\alpha},
\end{equation*}
for any irreducible representation $\alpha$ of $\mathbb{G}$.
\end{de}
\begin{rem}
This definition can be rephrased simply as
\begin{equation*}
(m_{a}\otimes \i)(W) = (1\otimes a)W.
\end{equation*}
This means that for any $\omega\in \mathcal{B}(L^{2}(\mathbb{G}))_{*}$, one has
\begin{equation*}
m_{a}((\i\otimes \omega)(W)) = (\i\otimes \omega)((1\otimes a)W) = (\i\otimes \omega a)(W),
\end{equation*}
which is the usual definition of multipliers for locally compact quantum groups.
\end{rem}
\begin{rem}
Let us assume that there exists $\omega_{a}\in \mathcal{B}(L^{2}(\mathbb{G}))_{*}$ such that $(\omega_{a}\otimes \i)(W) = a$, then $m_{a} = (\omega_{a}\otimes \i)\circ\Delta$. Indeed,
\begin{eqnarray*}
(m_{a}\otimes\i)(W) & = & (1\otimes a)W \\
& = & (\omega_{a}\otimes \i\otimes \i)(W_{13})W \\
& = & (\omega_{a}\otimes \i\otimes \i)(W_{13}W_{23}) \\
& = & (\omega_{a}\otimes \i\otimes \i)\circ(\Delta\otimes \i)(W) \\
& = & ([(\omega_{a}\otimes\i)\circ\Delta]\otimes \i)(W).
\end{eqnarray*}
This links definition \ref{de:quantummultiplier} with the ``convolution operators'' used by M. Brannan in \cite{brannan2011approximation} to study the Haagerup property for some particular discrete quantum groups.
\end{rem}
\begin{de}
A net $(a_{i})$ of elements of $\ell^{\infty}(\widehat{\mathbb{G}})$ is said to \emph{converge pointwise} to $a\in \ell^{\infty}(\widehat{\mathbb{G}})$ if
\begin{equation*}
a_{i}p_{\alpha} \rightarrow ap_{\alpha}
\end{equation*}
for any irreducible representation $\alpha$ of $\mathbb{G}$. An element $a\in \ell^{\infty}(\widehat{\mathbb{G}})$ is said to have \emph{finite support} if $ap_{\alpha}$ is non-zero only for a finite number of irreducible representations $\alpha$.
\end{de}
\begin{rem}
If $(a_{\lambda})$ is a net in $\ell^{\infty}(\widehat{\mathbb{G}})$ converging pointwise to $a$, then $\widehat{\varepsilon}(a_{\lambda}) \rightarrow \widehat{\varepsilon}(a)$, since $xp_{\varepsilon} = \widehat{\varepsilon}(x)p_{\varepsilon}$.
\end{rem}
Before defining weak amenability for discrete quantum groups, we need an intrinsic characterization of those bounded functions giving rise to completely bounded multipliers. For a discrete group $G$, it is known that a bounded function $\varphi : G\rightarrow \mathbb{C}$ gives rise to a completely bounded multiplier if and only if there exists a Hilbert space $K$ and two families $(\xi_{s})_{s\in G}$ and $(\eta_{t})_{t\in G}$ of vectors in $K$ such that for all $s, t\in G$, $\varphi(s) = \langle \eta_{t}, \xi_{st}\rangle$ (which is usually written $\varphi(st^{-1}) = \langle \eta_{t}, \xi_{s}\rangle$). Such a characterization has been generalized to the setting of locally compact quantum groups by M. Daws \cite[Prop 4.1 and Thm 4.2]{daws2011multipliers}. Due to its generality, Daws' proof is quite complicated and subtle, but restricting to the discrete case enables us to drop many technicalities and keep only the heart of the proof, which we give here for the sake of completeness.
\begin{thm}[Daws]\label{theorem:quantumgilbert}
Let $\widehat{\mathbb{G}}$ be a discrete quantum group and $a\in \ell^{\infty}(\widehat{\mathbb{G}})$. Then $m_{a}$ extends to a completely bounded multiplier on $\mathcal{B}(L^{2}(\mathbb{G}))$ if and only if there exists a Hilbert space $K$ and two maps $\alpha, \beta \in \mathcal{B}(L^{2}(\mathbb{G}), L^{2}(\mathbb{G})\otimes K)$ such that $\|\alpha\|\|\beta\| = \|m_{a}\|_{cb}$ and
\begin{equation}\label{eq:quantumgilbert}
(1\otimes \beta)^{*}\widehat{W}_{12}^{*}(1\otimes \alpha)\widehat{W} = a\otimes 1.
\end{equation}
Moreover, we then have $m_{a}(x) = \beta^{*}(x\otimes 1)\alpha$.
\end{thm}
\begin{proof}
We only consider the case when $m_{a}$ extends to a completely contractive map on $\mathcal{B}(L^{2}(\mathbb{G}))$ still denoted $m_{a}$. By Wittstock's factorization theorem (see for example \cite[Thm B.7]{brown2008finite}), there is a representation $\pi : \mathcal{B}(L^{2}(\mathbb{G})) \rightarrow \mathcal{B}(K)$ and two isometries $P, Q \in \mathcal{B}(L^{2}(\mathbb{G}), K)$ such that for all $x\in \mathcal{B}(L^{2}(\mathbb{G}))$, $m_{a}(x) = Q^{*}\pi(x)P$. Set $U = (\i \otimes \pi)(\widehat{W})$ and define two maps $\alpha$ and $\beta$ by
\begin{equation*}
\left\{\begin{array}{ccc}
\alpha & = & U^{*}(1\otimes P)\widehat{W}(1\otimes \xi_{h}) \\
\beta & = & U^{*}(1\otimes Q)\widehat{W}(1\otimes \xi_{h}) \\
\end{array}\right.
\end{equation*}
These are contractive linear maps from $L^{2}(\mathbb{G})$ to $L^{2}(\mathbb{G})\otimes K$. If we set
\begin{equation*}
X = (1\otimes \beta)^{*}\widehat{W}_{12}^{*}(1\otimes \alpha)\widehat{W},
\end{equation*}
then the pentagon equation gives
\begin{eqnarray*}
X & = & (1\otimes 1 \otimes \xi_{h})^{*}\widehat{W}_{23}^{*}(1\otimes 1\otimes Q^{*})U_{23}\widehat{W}_{12}^{*}U_{23}^{*} \\
& & (1\otimes 1\otimes P)\widehat{W}_{23}(1\otimes 1\otimes \xi_{h})\mathbf{\widehat{W}} \\
& = & (1\otimes 1 \otimes \xi_{h})^{*}\widehat{W}_{23}^{*}(1\otimes 1\otimes Q^{*})U_{23}\widehat{W}_{12}^{*}U_{23}^{*} \\
& & (1\otimes 1\otimes P)\mathbf{\widehat{W}_{23}\widehat{W}_{12}}(1\otimes 1\otimes \xi_{h}) \\
& = & (1\otimes 1 \otimes \xi_{h})^{*}\widehat{W}_{23}^{*}(1\otimes 1\otimes Q^{*})U_{23}\widehat{W}_{12}^{*}U_{23}^{*} \\
& & (1\otimes 1\otimes P)\mathbf{\widehat{W}_{12}\widehat{W}_{13}\widehat{W}_{23}}(1\otimes 1\otimes \xi_{h}) \\
& = & (1\otimes 1 \otimes \xi_{h})^{*}\widehat{W}_{23}^{*}(1\otimes 1\otimes Q^{*})U_{23}\widehat{W}_{12}^{*}U_{23}^{*}\mathbf{\widehat{W}_{12}} \\
& & (1\otimes 1\otimes P)\widehat{W}_{13}\widehat{W}_{23}(1\otimes 1\otimes \xi_{h}). \\
\end{eqnarray*}
Using the fact that $\widehat{W}_{12}^{*}U_{23}^{*}\widehat{W}_{12} = U_{23}^{*}U_{13}^{*}$, we get
\begin{eqnarray*}
X
& = & (1\otimes 1 \otimes \xi_{h})^{*}\widehat{W}_{23}^{*}(1\otimes 1\otimes Q^{*})\mathbf{U_{13}^{*}} \\
& & (1\otimes 1\otimes P)\widehat{W}_{13}\widehat{W}_{23}(1\otimes 1\otimes \xi_{h}) \\
& = & (1\otimes 1 \otimes \xi_{h})^{*}\widehat{W}_{23}^{*}\mathbf{(1\otimes 1\otimes Q^{*})U_{13}^{*}(1\otimes 1\otimes P)}\widehat{W}_{13}\widehat{W}_{23}(1\otimes 1\otimes \xi_{h}) \\
& = & (1\otimes 1 \otimes \xi_{h})^{*}\widehat{W}_{23}^{*}(\i\otimes m_{a})(\widehat{W}^{*})_{13}\widehat{W}_{13}\widehat{W}_{23}(1\otimes 1\otimes \xi_{h}).
\end{eqnarray*}
Now we observe that
\begin{eqnarray*}
(\i\otimes m_{a})(\widehat{W}^{*})_{13} & = & (\i\otimes m_{a})(\Sigma W\Sigma)_{13} \\
& = & \Sigma_{13}(m_{a}\otimes \i)(W)_{13}\Sigma_{13} \\
& = & \Sigma_{13}((1\otimes a)W)_{13}\Sigma_{13} \\
& = & (a\otimes 1\otimes 1)\Sigma_{13}W_{13}\Sigma_{13} \\
& = & (a\otimes 1\otimes 1)\widehat{W}_{13}^{*}.
\end{eqnarray*}
Thus, $X = (1\otimes 1 \otimes \xi_{h})^{*}\widehat{W}_{23}^{*}(a\otimes 1\otimes 1)\widehat{W}_{13}^{*}\widehat{W}_{13}\widehat{W}_{23}(1\otimes 1\otimes \xi_{h}) = a\otimes 1$.
Assume now that there exist $\alpha$ and $\beta$ satisfying equation (\ref{eq:quantumgilbert}); then
\begin{eqnarray*}
(m_{a}\otimes\i)(W) & = & (1\otimes a)W \\
& = & \Sigma (a\otimes 1)\widehat{W}^{*}\Sigma \\
& = & \Sigma (1\otimes \beta^{*})\widehat{W}_{12}^{*}(1\otimes \alpha)\widehat{W}\widehat{W}^{*}\Sigma \\
& = & \Sigma (1\otimes \beta^{*})\widehat{W}_{12}^{*}(1\otimes \alpha)\Sigma \\
& = & \Sigma (1\otimes \beta^{*})\Sigma_{12}W_{12}\Sigma_{12}(1\otimes \alpha)\Sigma \\
& = & (\beta^{*}\otimes 1)\Sigma_{23} W_{12}\Sigma_{23}(\alpha\otimes 1) \\
& = & (\beta^{*}\otimes 1)W_{13}(\alpha\otimes 1).
\end{eqnarray*}
Thus, $x\mapsto \beta^{*}(x\otimes 1)\alpha$ is a completely bounded extension of $m_{a}$ to $\mathcal{B}(L^{2}(\mathbb{G}))$.
\end{proof}
From this we may deduce a result which is well-known in the classical case.
\begin{cor}
Let $\widehat{\mathbb{G}}$ be a discrete quantum group and let $a\in \ell^{\infty}(\widehat{\mathbb{G}})$. The following are equivalent
\begin{enumerate}
\item $m_{a}$ extends to a completely bounded map on $\mathcal{B}(L^{2}(\mathbb{G}))$.
\item $m_{a}$ extends to a completely bounded map on $C_{\text{red}}(\mathbb{G})$.
\item $m_{a}$ extends to a completely bounded map on $L^{\infty}(\mathbb{G})$.
\end{enumerate}
Moreover, the completely bounded norms of these maps are all equal.
\end{cor}
\begin{proof}
We only prove the equivalence of $(1)$ and $(2)$; the same argument applies to the equivalence of $(1)$ and $(3)$.
Assume $(1)$. Then, as $m_{a}$ maps $\mathcal{C}(\mathbb{G})$ into itself, its completely bounded extension restricts to a completely bounded map on $C_{\text{red}}(\mathbb{G})$ with norm at most that of the extension.
Assume $(2)$. Then $\alpha$ and $\beta$ can still be defined if $m_{a}$ is a completely bounded map on $C_{\text{red}}(\mathbb{G})$ ($\pi$ being then a representation of $C_{\text{red}}(\mathbb{G})$) and the formula
\begin{equation*}
x \mapsto \beta^{*}(x\otimes 1)\alpha
\end{equation*}
defines a completely bounded map on all of $\mathcal{B}(L^{2}(\mathbb{G}))$ with norm less than $\|\alpha\|\|\beta\|$.
\end{proof}
We are now able to give a definition of weak amenability for discrete quantum groups.
\begin{de}\label{de:quantumwa}
A discrete quantum group $\widehat{\mathbb{G}}$ is said to be \emph{weakly amenable} if there exists a net $(a_{\lambda})$ of elements of $\ell^{\infty}(\widehat{\mathbb{G}})$ such that
\begin{itemize}
\item $a_{\lambda}$ has finite support for all $\lambda$.
\item $(a_{\lambda})$ converges pointwise to $1$.
\item $K:=\limsup_{\lambda} \|m_{a_{\lambda}}\|_{cb}$ is finite.
\end{itemize}
The lower bound of the constants $K$ for all nets satisfying these properties is denoted $\Lambda_{cb}(\widehat{\mathbb{G}})$ and called the \emph{Cowling-Haagerup constant} of $\widehat{\mathbb{G}}$. By convention, $\Lambda_{cb}(\widehat{\mathbb{G}})=\infty$ if $\widehat{\mathbb{G}}$ is not weakly amenable.
\end{de}
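To fix ideas, let us record what this definition amounts to in the classical case: if $G$ is a discrete group, so that $C_{\text{red}}(\mathbb{G})$ is the reduced group C*-algebra $C^{*}_{\text{red}}(G)$, the multiplier associated to $a\in \ell^{\infty}(G)$ acts on the generators by
\begin{equation*}
m_{a}(\lambda_{g}) = a(g)\lambda_{g},
\end{equation*}
i.e. $m_{a}$ is the Herz-Schur multiplier associated to the function $a$, and Definition \ref{de:quantumwa} asks for a net of finitely supported functions converging pointwise to $1$ and inducing uniformly completely bounded multipliers.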
It is clear from the definition that a discrete group $G$ is weakly amenable in the classical sense if and only if the commutative discrete quantum group $(C_{0}(G), \Delta_{G})$ is weakly amenable (and the constants are the same). We recall the following notions of weak amenability for operator algebras.
\begin{de}
A C*-algebra $A$ is said to be \emph{weakly amenable} if there exists a net $(T_{\lambda})$ of linear maps from $A$ to itself such that
\begin{itemize}
\item $T_{\lambda}$ has finite rank for all $\lambda$.
\item $\|T_{\lambda}(x)-x\|\rightarrow 0$ for all $x\in A$.
\item $K :=\limsup_{\lambda}\|T_{\lambda}\|_{cb}$ is finite.
\end{itemize}
The lower bound of the constants $K$ for all nets satisfying these properties is denoted $\Lambda_{cb}(A)$ and called the \emph{Cowling-Haagerup constant} of $A$. By convention, $\Lambda_{cb}(A) = \infty$ if the C*-algebra $A$ is not weakly amenable.
A von Neumann algebra $N$ is said to be \emph{weakly amenable} if there exists a net $(T_{\lambda})$ of normal linear maps from $N$ to itself such that
\begin{itemize}
\item $T_{\lambda}$ has finite rank for all $\lambda$.
\item $T_{\lambda}(x)-x\rightarrow 0$ ultraweakly for all $x\in N$.
\item $K :=\limsup_{i}\|T_{\lambda}\|_{cb}$ is finite.
\end{itemize}
The lower bound of the constants $K$ for all nets satisfying these properties is denoted $\Lambda_{cb}(N)$ and called the \emph{Cowling-Haagerup constant} of $N$. By convention, $\Lambda_{cb}(N) = \infty$ if the von Neumann algebra $N$ is not weakly amenable.
\end{de}
\begin{rem}
Note that given a von Neumann algebra $N$, its Cowling-Haagerup constant as a C*-algebra need not be equal to its Cowling-Haagerup constant as a von Neumann algebra. For instance, $\mathcal{B}(\ell^{2}(\mathbb{N}))$ is weakly amenable as a von Neumann algebra (it is even amenable) but not as a C*-algebra (it is not even exact). Unless otherwise stated, $\Lambda_{cb}(N)$ will always denote the Cowling-Haagerup constant of $N$ as a von Neumann algebra.
\end{rem}
The following theorem was first proved by J. Kraus and Z.-J. Ruan in \cite{kraus1999approximation} for discrete quantum groups \emph{of Kac type} (i.e. with the Haar state on $\mathbb{G}$ being tracial). We give here a proof which works for any discrete quantum group.
\begin{thm}\label{thm:quantumwa}
Let $\widehat{\mathbb{G}}$ be a discrete quantum group, then
\begin{equation*}
\Lambda_{cb}(\widehat{\mathbb{G}}) = \Lambda_{cb}(C_{\text{red}}(\mathbb{G})) = \Lambda_{cb}(L^{\infty}(\mathbb{G})).
\end{equation*}
\end{thm}
\begin{proof}
Let $(a_{\lambda})$ be a net satisfying the hypothesis of Definition \ref{de:quantumwa}, then the maps $m_{a_{\lambda}}$ are unital normal finite rank uniformly completely bounded maps and converge pointwise to the identity, giving
\begin{equation*}
\Lambda_{cb}(L^{\infty}(\mathbb{G}))\leqslant \Lambda_{cb}(\widehat{\mathbb{G}})\text{ and }\Lambda_{cb}(C_{\text{red}}(\mathbb{G}))\leqslant \Lambda_{cb}(\widehat{\mathbb{G}}).
\end{equation*}
Let now $(T_{\lambda})$ be a net of unital finite rank uniformly completely bounded maps on $L^{\infty}(\mathbb{G})$ converging pointwise to the identity and set
\begin{equation*}
a_{\lambda} := (h\otimes \i)((T_{\lambda}\otimes \i)(W)W^{*})\in \ell^{\infty}(\widehat{\mathbb{G}}).
\end{equation*}
These are elements converging to $1$ pointwise. Let us prove that we can perturb the maps $T_{\lambda}$ so that the $a_{\lambda}$'s have finite support. Let $(\xi_{1}, \dots, \xi_{n})$ be an orthonormal basis of the image of $T_{\lambda}$ and set $\omega_{i}(x) := \langle \xi_{i}, T_{\lambda}(x)\rangle$. Then $\omega_{i}$ is a normal linear form on $L^{\infty}(\mathbb{G})$ and for all $x\in L^{\infty}(\mathbb{G})$, $T_{\lambda}(x) = \sum_{i=1}^{n}\omega_{i}(x)\xi_{i}$. Let $\eta > 0$ and choose elements $\zeta_{i}\in \mathcal{C}(\mathbb{G})$ such that
\begin{equation*}
\|\zeta_{i} - \xi_{i}\|\leqslant \frac{\eta}{n\sup_{i}\|\omega_{i}\|}.
\end{equation*}
Then the equation $\tilde{T}_{\lambda}(x) = \sum \omega_{i}(x)\zeta_{i}$ defines a completely bounded finite rank map on $L^{\infty}(\mathbb{G})$ such that $\|T_{\lambda} - \tilde{T}_{\lambda}\|_{cb}\leqslant \eta$ and giving rise to an element of $\ell^{\infty}(\widehat{\mathbb{G}})$ with finite support.

To study the completely bounded norm of $m_{a_{\lambda}}$, first remark that the coproduct $\Delta$ is an isometry from $\mathcal{C}(\mathbb{G})$ onto its image with respect to the scalar product $\langle a, b\rangle = h(a b^{*})$. Hence, we can extend it to an isometry from $L^{2}(\mathbb{G})$ to $L^{2}(\mathbb{G})\otimes L^{2}(\mathbb{G})$ and then consider its adjoint operator $\Delta^{*}$. With these considerations, the following formula holds:
\begin{equation}\label{eq:quantummultiplier}
m_{a_{\lambda}} = \Delta^{*}\circ(T_{\lambda}\otimes \i)\circ \Delta.
\end{equation}
To prove this equality, let $\alpha$ and $\beta$ be two irreducible representations of $\mathbb{G}$ and recall (see e.g. \cite[Prop 5.3.8]{timmermann2008invitation}) that
\begin{equation*}
h(u^{\alpha}_{i, j}(u^{\beta}_{l, m})^{*}) = \delta_{\alpha, \beta}\delta_{i, l}\frac{F^{\alpha}_{j, m}}{\dim_{q}(\alpha)}
\end{equation*}
for some coefficients $F^{\alpha}_{j, m}$ which are equal to $\delta_{j, m}$ if and only if the group is of Kac type. We compute on the one hand
\begin{eqnarray*}
\langle \Delta^{*}\circ(T_{\lambda}\otimes \i)\circ\Delta(u^{\alpha}_{i, j}), u^{\beta}_{l, m}\rangle & = & \sum_{k}\langle (T_{\lambda}(u^{\alpha}_{i, k})\otimes u^{\alpha}_{k, j}), \Delta(u^{\beta}_{l, m})\rangle \\
& = & \sum_{k, t}h(T_{\lambda}(u^{\alpha}_{i, k})(u^{\beta}_{l, t})^{*})h(u^{\alpha}_{k, j}(u^{\beta}_{t, m})^{*}) \\
& = & \sum_{k}h(T_{\lambda}(u^{\alpha}_{i, k})(u^{\alpha}_{l, k})^{*})\delta_{\alpha, \beta}\frac{F_{j, m}^{\alpha}}{\dim_{q}(\alpha)} \\
& = & \delta_{\alpha, \beta}\frac{F_{j, m}^{\alpha}}{\dim_{q}(\alpha)}\sum_{k}h((1\otimes e_{i}^{*})(T_{\lambda}\otimes \i)(u^{\alpha})(1\otimes e_{k}) \\
& & (1\otimes e_{k}^{*})(u^{\alpha})^{*}(1\otimes e_{l})) \\
& = & \delta_{\alpha, \beta}\frac{F_{j, m}^{\alpha}}{\dim_{q}(\alpha)}h((1\otimes e_{i}^{*})(T_{\lambda}\otimes \i)(u^{\alpha})(u^{\alpha})^{*}(1\otimes e_{l})) \\
& = & \delta_{\alpha, \beta}\frac{F_{j, m}^{\alpha}}{\dim_{q}(\alpha)}h((1\otimes e_{i}^{*})(1\otimes a_{\lambda}p_{\alpha})(1\otimes e_{l})) \\
& = & \delta_{\alpha, \beta}\frac{F_{j, m}^{\alpha}}{\dim_{q}(\alpha)}\langle e_{i}, a_{\lambda}p_{\alpha}e_{l}\rangle
\end{eqnarray*}
and on the other hand
\begin{eqnarray*}
\langle m_{a_{\lambda}}(u^{\alpha}_{i, j}), u^{\beta}_{l, m}\rangle & = & h(m_{a_{\lambda}}(u^{\alpha}_{i, j})(u^{\beta}_{l, m})^{*}) \\
& = & h((1\otimes e_{i}^{*})(1\otimes a_{\lambda}p_{\alpha})u^{\alpha}(1\otimes e_{j})(u^{\beta}_{l, m})^{*}) \\
& = & \sum_{k} h((1\otimes e_{i}^{*})(1\otimes a_{\lambda}p_{\alpha})(1\otimes e_{k})(1\otimes e_{k}^{*})u^{\alpha}(1\otimes e_{j})(u^{\beta}_{l, m})^{*}) \\
& = & \sum_{k} \langle a_{\lambda}^{*}p_{\alpha}e_{i}, e_{k}\rangle h(u^{\alpha}_{k, j}(u^{\beta}_{l, m})^{*})\\
& = & \langle e_{i}, a_{\lambda}p_{\alpha}e_{l}\rangle \delta_{\alpha, \beta}\frac{F_{j, m}^{\alpha}}{\dim_{q}(\alpha)}.
\end{eqnarray*}
Equation (\ref{eq:quantummultiplier}) implies that $\|m_{a_{\lambda}}\|_{cb} \leqslant \|T_{\lambda}\|_{cb}$, yielding
\begin{equation*}
\Lambda_{cb}(\widehat{\mathbb{G}}) \leqslant \Lambda_{cb}(L^{\infty}(\mathbb{G}))\text{ and }\Lambda_{cb}(\widehat{\mathbb{G}}) \leqslant \Lambda_{cb}(C_{\text{red}}(\mathbb{G})).
\end{equation*}
\end{proof}
\begin{cor}
Let $\widehat{\mathbb{G}}$ be a weakly amenable discrete quantum group, then $\widehat{\mathbb{G}}$ is exact.
\end{cor}
\begin{proof}
Any weakly amenable C*-algebra is exact according to \cite{kirchberg1999exact}. Combining this with the fact from \cite[Prop. 4.1]{blanchard2001remarks} that a discrete quantum group $\widehat{\mathbb{G}}$ is exact if and only if $C_{\text{red}}(\mathbb{G})$ is an exact C*-algebra yields the result.
\end{proof}
As a consequence of Theorem \ref{thm:quantumwa}, we can deduce a few permanence properties. First of all, we can consider discrete quantum subgroups in the following sense: let $\mathbb{G}$ be a compact quantum group and let $C(\H)$ be a unital C*-subalgebra of $C(\mathbb{G})$ which becomes a compact quantum group when endowed with the restriction of the coproduct of $\mathbb{G}$ (this means in particular that $\Delta(C(\H))\subset C(\H)\otimes C(\H)$), then $\widehat{\H}$ will be called a \emph{discrete quantum subgroup} of $\widehat{\mathbb{G}}$. According to \cite[Lemma 2.2]{vergnioux2004k}, there is a unique conditional expectation from $C(\mathbb{G})$ to $C(\H)$. Thus we have $\Lambda_{cb}(\widehat{\H})\leqslant \Lambda_{cb}(\widehat{\mathbb{G}})$.
For any two reduced compact quantum groups $\mathbb{G}$ and $\H$, there is a unique compact quantum group structure on the spatial tensor product $C(\mathbb{G})\otimes C(\H)$ turning $C(\mathbb{G})$ and $C(\H)$ into sub-Hopf-C*-algebras under the canonical inclusions, which is defined in \cite{wang1995tensor}. By analogy with the classical case, the dual of this compact quantum group will be called the \emph{direct product} of $\widehat{\mathbb{G}}$ and $\widehat{\H}$ and denoted $\widehat{\mathbb{G}}\times\widehat{\H}$.
\begin{cor}
Let $\widehat{\mathbb{G}}$ and $\widehat{\H}$ be two discrete quantum groups, then
\begin{equation*}
\Lambda_{cb}(\widehat{\H}\times\widehat{\mathbb{G}})=\Lambda_{cb}(\widehat{\H})\Lambda_{cb}(\widehat{\mathbb{G}}).
\end{equation*}
\end{cor}
\begin{proof}
It is true for any two C*-algebras $A$ and $B$ that $\Lambda_{cb}(A\otimes B) = \Lambda_{cb}(A)\Lambda_{cb}(B)$ (see \cite[Thm 12.3.13]{brown2008finite}). This, combined with Theorem \ref{thm:quantumwa}, gives the result.
\end{proof}
Let $(\widehat{\mathbb{G}}_{i})$ be a family of discrete quantum groups together with $*$-homomorphisms
\begin{equation*}
\pi_{i, j} : C(\mathbb{G}_{i})\rightarrow C(\mathbb{G}_{j})
\end{equation*}
intertwining the comultiplications and satisfying $\pi_{j, k}\circ\pi_{i, j} = \pi_{i, k}$. We will call the data $(\widehat{\mathbb{G}}_{i}, \pi_{i, j})$ an \emph{inductive system of discrete quantum groups}. The inductive limit C*-algebra $C(\mathbb{G})$ of this system can be endowed with a natural compact quantum group structure, see for example \cite[Lemma 1.1]{bhowmick2011quantum}. We will say that $\widehat{\mathbb{G}}$ is the \emph{inductive limit} of the system $(\widehat{\mathbb{G}}_{i}, \pi_{i, j})$.
Assume the maps $\pi_{i, j}$ to be injective, then we can identify the $C(\mathbb{G}_{i})$'s with sub-Hopf-C*-algebras of $C(\mathbb{G})$ satisfying
\begin{equation*}
\overline{\bigcup C(\mathbb{G}_{i})}=C(\mathbb{G}).
\end{equation*}
This implies that any irreducible representation of some $\mathbb{G}_{i}$ yields an irreducible representation of $\mathbb{G}$. Moreover,
\begin{equation*}
\mathcal{A} := \bigcup_{i}\mathcal{C}(\mathbb{G}_{i})
\end{equation*}
is a dense Hopf-$*$-subalgebra of $C(\mathbb{G})$ spanned by coefficients of irreducible representations. Because of Schur's orthogonality relations, this implies that the coefficients of all irreducible representations of $\mathbb{G}$ are in $\mathcal{A}$, i.e. $\mathcal{A} = \mathcal{C}(\mathbb{G})$. This means that any irreducible representation of $\mathbb{G}$ comes from an irreducible representation of some $\mathbb{G}_{i}$.
\begin{cor}
Let $(\widehat{\mathbb{G}}_{i}, \pi_{i, j})$ be an inductive system of discrete quantum groups with inductive limit $\widehat{\mathbb{G}}$ and limit maps $\pi_{i} : C(\mathbb{G}_{i})\rightarrow C(\mathbb{G})$, then
\begin{equation*}
\sup_{i}\Lambda_{cb}(\pi_{i}(C_{\text{red}}(\mathbb{G}_{i}))) = \Lambda_{cb}(\widehat{\mathbb{G}}).
\end{equation*}
In particular, if all the connecting maps are injective, the inductive limit is weakly amenable if and only if the quantum groups are all weakly amenable with uniformly bounded Cowling-Haagerup constant.
\end{cor}
\begin{proof}
The inequality $\sup_{i}\Lambda_{cb}(\pi_{i}(C_{\text{red}}(\mathbb{G}_{i}))) \leqslant \Lambda_{cb}(\widehat{\mathbb{G}})$ is straightforward from the fact that $(\pi_{i}(C_{\text{red}}(\mathbb{G}_{i})), \Delta)$ is the dual of a discrete quantum subgroup of $\widehat{\mathbb{G}}$. The converse inequality can be seen as a consequence of the following more general result.
\end{proof}
\begin{prop}
Let $(A_{i}, \pi_{i, j})$ be a direct system of C*-algebras such that for each $i$, there is a conditional expectation $\mathbb{E}_{i}$ from the inductive limit $A$ onto the subalgebra $\pi_{i}(A_{i})$. Then $\Lambda_{cb}(A)\leqslant \sup_{i}\Lambda_{cb}(\pi_{i}(A_{i}))$.
\end{prop}
\begin{proof}
The proof is certainly well-known, but we give it for completeness. Let $\varepsilon > 0$ and $\mathcal{F}\subset A$ be a finite subset, set $\Lambda = \sup_{i}\Lambda_{cb}(\pi_{i}(A_{i}))$ and
\begin{equation*}
\eta = \frac{\sqrt{(2+\Lambda)^{2} + 4\varepsilon}-(2+\Lambda)}{2}.
\end{equation*}
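This choice of $\eta$ ensures that $\eta(2+\Lambda+\eta) = \varepsilon$: squaring the equality
\begin{equation*}
2\eta + (2+\Lambda) = \sqrt{(2+\Lambda)^{2} + 4\varepsilon}
\end{equation*}
yields $4\eta^{2} + 4(2+\Lambda)\eta + (2+\Lambda)^{2} = (2+\Lambda)^{2} + 4\varepsilon$, hence $\eta^{2} + (2+\Lambda)\eta = \varepsilon$, which is exactly the bound needed at the end of the proof.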
We can see $A$ as the closure of the union of the $\pi_{i}(A_{i})$ (see for example \cite[Prop 6.2.4]{rordam2000introduction}), thus there is an index $i_{0}$ and a finite subset $\mathcal{G}$ of $\pi_{i_{0}}(A_{i_{0}})$ such that $d(\mathcal{F}, \mathcal{G})\leqslant \eta$. Let $T$ be a finite rank linear map from $\pi_{i_{0}}(A_{i_{0}})$ to itself approximating the identity up to $\eta$ on $\mathcal{G}$ and with
\begin{equation*}
\|T\|_{cb}\leqslant \Lambda + \eta.
\end{equation*}
Set $T_{\mathcal{F}, \varepsilon} = T\circ\mathbb{E}_{i_{0}}$. This is a finite rank linear map from $A$ to itself with $\|T_{\mathcal{F}, \varepsilon}\|_{cb}\leqslant \Lambda + \eta$. Moreover, for any $x\in \mathcal{F}$, if $y$ is an element of $\mathcal{G}$ such that $\|x-y\|\leqslant \eta$, one has
\begin{eqnarray*}
\|T_{\mathcal{F}, \varepsilon}(x)-x\| & = & \|T\circ\mathbb{E}_{i_{0}}(x) - x\| \\
& \leqslant & \|T\circ\mathbb{E}_{i_{0}}(y) - y\| + \|T\circ\mathbb{E}_{i_{0}}(x-y) - (x-y)\| \\
& \leqslant & \eta + \|T\circ\mathbb{E}_{i_{0}}\|\|x-y\| + \|x-y\| \\
& \leqslant & \eta + (\Lambda + \eta)\eta + \eta \\
& = & \eta(2+\Lambda +\eta) \\
& \leqslant & \varepsilon.
\end{eqnarray*}
As $\eta$ tends to $0$ when $\varepsilon$ tends to $0$, we get the desired result.
\end{proof}
\section{Free products}
Recall that given two discrete quantum groups $\widehat{\mathbb{G}}$ and $\widehat{\mathbb{H}}$, there is a unique compact quantum group structure on the reduced free product $C(\mathbb{G})\ast C(\mathbb{H})$ with respect to the Haar states turning $C(\mathbb{G})$ and $C(\H)$ into sub-Hopf-C*-algebras under the canonical inclusions, which is defined in \cite{wang1995free}. By analogy with the classical case, the dual of this compact quantum group will be called the \emph{reduced free product} of $\widehat{\mathbb{G}}$ and $\widehat{\mathbb{H}}$ and denoted $\widehat{\mathbb{G}}\ast\widehat{\mathbb{H}}$.
The fact that a free product of amenable groups has Cowling-Haagerup constant equal to $1$ (though it may not be amenable) was first proved by M. Bo\.{z}ejko and M.A. Picardello in \cite{bozejko1993weakly} (even allowing amalgamation over a finite subgroup). But it can also be recovered as an easy consequence of the following theorem \cite[Thm 4.3]{ricard2005khintchine}.
\begin{thm}[Ricard, Xu]
Let $(A_{i}, \varphi_{i})_{i\in I}$ be C*-algebras with distinguished states $(\varphi_{i})$ having faithful GNS construction. Assume that for each $i$, there is a net of finite rank unital completely positive maps $(V_{i, j})$ on $A_{i}$ converging to the identity pointwise and preserving the state (i.e. $\varphi_{i}$ is \emph{cp-approximable} in the sense of \cite[Def 1.1]{eckhardt2010free}). Then, the reduced free product of the family $(A_{i}, \varphi_{i})$ has Cowling-Haagerup constant equal to $1$.
\end{thm}
Thus we only need to find such a net of unital completely positive maps which leaves the Haar states invariant when the discrete quantum groups are amen\-able (i.e. when the reduced forms of their duals admit a bounded counit). This is given by the following characterization of amenability \cite[Thm 3.8]{tomatsu2006amenable}.
\begin{thm}[Tomatsu]\label{thm:quantumamenable}
A discrete quantum group $\widehat{\mathbb{G}}$ is amenable if and only if there is a net $(\omega_{j})$ of states on $C_{\text{red}}(\mathbb{G})$ such that the nets of completely positive maps $((\omega_{j}\otimes\i)\circ\Delta)$ and $((\i\otimes \omega_{j})\circ\Delta)$ converge pointwise to the identity.
\end{thm}
The $h$-invariance of these maps is given by the left and right invariance of the Haar state on compact quantum groups. Thus we have the following:
\begin{cor}
Let $(\widehat{\mathbb{G}}_{i})_{i\in I}$ be a family of amenable discrete quantum groups, then
\begin{equation*}
\Lambda_{cb}(\ast_{i\in I}\widehat{\mathbb{G}}_{i})=1.
\end{equation*}
\end{cor}
\begin{example}
Let $(G_{i})_{i\in I}$ be any family of compact groups, then their duals in the sense of quantum groups are amenable. Thus $\ast_{i\in I}(C(G_{i}))$ is the dual of a non-classical (non-commutative and non-cocommutative) discrete quantum group with Cowling-Haagerup constant equal to $1$.
\end{example}
We will now prove that a free product of weakly amenable discrete quantum groups with Cowling-Haagerup constant equal to $1$ has Cowling-Haagerup constant equal to $1$. This result has been proved in the classical case by E. Ricard and Q. Xu \cite[Thm 4.3]{ricard2005khintchine} using the following key result \cite[Prop 4.11]{ricard2005khintchine}.
\begin{thm}[Ricard, Xu]\label{thm:freecmap}
Let $(B_{i}, \psi_{i})_{i\in I}$ be unital C*-algebras with distinguished states $(\psi_{i})$ having faithful GNS constructions. Let $A_{i}\subset B_{i}$ be unital C*-subalgebras such that the states $\varphi_{i} =\psi_{i\vert A_{i}}$ also have faithful GNS construction. Assume that for each $i$, there is a net of finite rank maps $(V_{i, j})$ on $A_{i}$ converging to the identity pointwise, preserving the state and such that $\limsup_{j}\|V_{i, j}\|_{cb}=1$. Assume moreover that for each pair $(i, j)$, there is a completely positive unital map $U_{i, j} : A_{i} \rightarrow B_{i}$ preserving the state and such that
\begin{equation*}
\| V_{i, j} - U_{i, j}\|_{cb} + \| V_{i, j}-U_{i, j}\|_{\mathcal{B}(L^{2}(A_{i}, \varphi_{i}), L^{2}(B_{i}, \psi_{i}))} + \| V_{i, j} - U_{i, j}\|_{\mathcal{B}(L^{2}(A_{i}, \varphi_{i})^{op}, L^{2}(B_{i}, \psi_{i})^{op})} \rightarrow 0.
\end{equation*}
Then, the reduced free product of the family $(A_{i}, \varphi_{i})$ has Cowling-Haagerup constant equal to $1$.
\end{thm}
Having a characterization of weak amenability in terms of approximation of the identity on the group is the main ingredient to apply this theorem. Thanks to Theorem \ref{theorem:quantumgilbert}, we can prove a quantum version.
\begin{thm}\label{thm:quantumfreeproduct}
Let $(\widehat{\mathbb{G}}_{i})_{i\in I}$ be a family of discrete quantum groups with Cowling-Haage\-rup constant equal to $1$, then $\Lambda_{cb}(\ast_{i\in I}\widehat{\mathbb{G}}_{i})=1$.
\end{thm}
\begin{proof} Let $\widehat{\mathbb{G}}$ be a discrete quantum group, let $0 < \eta < 1$ and let $a\in \ell^{\infty}(\widehat{\mathbb{G}})$ be such that $\|m_{a}\|_{cb}\leqslant 1+\eta$. Note that the $\alpha$ and $\beta$ given by Theorem \ref{theorem:quantumgilbert} can be both chosen to have norm less than $\sqrt{1+\eta}$. We set $\gamma = (\alpha + \beta)/2$ and $\delta = (\alpha-\beta)/2$.
Observing that $m_{a}(1) = (m_{a}\otimes\i)(u^{\varepsilon}) = (1\otimes ap_{\varepsilon})u^{\varepsilon} = \widehat{\varepsilon}(a).1$ and assuming $\widehat{\varepsilon}(a)$ to be non-zero, we can divide $a$ by it so that $m_{a}$ becomes unital (this ensures that $m_{a}$ preserves the vector state but we will prove it later on). We also know from \cite[Prop 2.6]{kraus1999approximation} that for any $x\in C_{\text{red}}(\mathbb{G})$,
\begin{equation*}
\alpha^{*}(x\otimes 1)\beta = m_{a}(x^{*})^{*} = m_{\widehat{S}(a)^{*}}(x).
\end{equation*}
Thus we can, up to replacing $a$ by $\frac{1}{2}(a+\widehat{S}(a)^{*})$ and using the fact that $\widehat{S}\circ \ast\circ \widehat{S}\circ\ast =\i$, assume that
\begin{equation*}
m_{a}(x) = \frac{1}{2}(m_{a}(x) + m_{\widehat{S}(a)^{*}}(x)) = \frac{1}{2}((\beta^{*}(x\otimes 1)\alpha + \alpha^{*}(x\otimes 1)\beta)) = M_{\gamma}(x) - M_{\delta}(x),
\end{equation*}
where $M_{\gamma}(x) = \gamma^{*}(x\otimes 1)\gamma$ and $M_{\delta}(x) = \delta^{*}(x\otimes 1)\delta$. The maps $M_{\gamma}$ and $M_{\delta}$ are completely positive thus $\|M_{\gamma}\|_{cb} = \|\gamma\|^{2}\leqslant 1+\eta$ and evaluating at $1$ gives $\|1 + \delta^{*}\delta\| \leqslant 1+\eta$, i.e. $\|M_{\delta}\|_{cb} = \|\delta^{*}\delta\|\leqslant \eta$.
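Spelling out the evaluation at the unit, we have
\begin{equation*}
1 = m_{a}(1) = M_{\gamma}(1) - M_{\delta}(1) = \gamma^{*}\gamma - \delta^{*}\delta,
\end{equation*}
so that $\gamma^{*}\gamma = 1 + \delta^{*}\delta$ and, $\delta^{*}\delta$ being positive, $\|\delta^{*}\delta\| = \|1+\delta^{*}\delta\| - 1 \leqslant \eta$.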
We now want to perturb $M_{\gamma}$ into a \emph{unital} completely positive map. To do this, first note that
\begin{equation*}
\|1 - \gamma^{*}\gamma\| = \|\delta^{*}\delta\|\leqslant \eta <1,
\end{equation*}
which implies that $\gamma^{*}\gamma$ is invertible, and set $\tilde{\gamma} = \gamma\vert \gamma\vert^{-1}$ where $\vert \gamma \vert = (\gamma^{*}\gamma)^{1/2}$. Note that $\|\tilde{\gamma} - \gamma\|\leqslant \eta$.
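Let us check this last estimate. Since $\|1-\gamma^{*}\gamma\|\leqslant \eta$, the inequality $\vert 1-t\vert\leqslant \vert 1-t^{2}\vert$ (valid for $t\geqslant 0$) applied to $\vert\gamma\vert$ through functional calculus gives $\|1-\vert\gamma\vert\|\leqslant \|1-\gamma^{*}\gamma\|\leqslant \eta$. As $\gamma\vert\gamma\vert^{-1}$ is an isometry, we conclude
\begin{equation*}
\|\tilde{\gamma} - \gamma\| = \|\gamma\vert\gamma\vert^{-1}(1 - \vert\gamma\vert)\| \leqslant \|1-\vert\gamma\vert\| \leqslant \eta.
\end{equation*}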
Thus, $M_{\tilde{\gamma}}$ is a unital completely positive map and
\begin{eqnarray*}
\|M_{\tilde{\gamma}} - M_{\gamma}\|_{cb} & = & \|M_{\gamma + (\tilde{\gamma}- \gamma)} - M_{\gamma}\|_{cb} \\
& \leqslant & \|\tilde{\gamma}-\gamma\|\|\gamma\| + \|\tilde{\gamma}-\gamma\|\|\gamma\| + \|\tilde{\gamma}-\gamma\|\|\tilde{\gamma}-\gamma\| \\
& \leqslant & \eta(2+3\eta)\leqslant 5\eta.
\end{eqnarray*}
This proves that $M_{\tilde{\gamma}}$ is a unital completely positive map approximating $m_{a}$ on $C(\mathbb{G})$ up to $6\eta$ in completely bounded norm.
We now have to prove that $M_{\tilde{\gamma}}$ also approximates $m_{a}$ on $\mathcal{B}(L^{2}(\mathbb{G}))$. Let us denote by $\tau$ the vector state on $\mathcal{B}(L^{2}(\mathbb{G}))$ associated to $\xi_{h}$:
\begin{equation*}
\tau(x) = \langle x(\xi_{h}), \xi_{h}\rangle.
\end{equation*}
\begin{lem}\label{lem:technical}
Let $T$ be any bounded linear operator on $K$ and set
\begin{equation*}
A(T) = (\i\otimes\pi)(\widehat{W})^{*}(1\otimes T)\widehat{W}(\i\otimes \xi_{h})\in \mathcal{B}(L^{2}(\mathbb{G})).
\end{equation*}
Then $\|A(T)\| \leqslant \|T\|$, $M_{A(T)}$ is a bounded operator on $(\mathcal{B}(L^{2}(\mathbb{G})), \tau)$ of norm less than $\|T\|^{2}$ and $\tau(M_{A(T)}(x^{*}x))\leqslant \|T\|^{2}\tau(x^{*}x)$. If moreover $A(T)^{*}A(T)$ is invertible, then $M_{A(T)\vert A(T)\vert^{-1}}$ is $\tau$-invariant.
\end{lem}
\begin{proof}
The inequality $\|A(T)\| \leqslant \|T\|$ is obvious since $\widehat{W}$ and $(\i\otimes\pi)(\widehat{W})$ are unitary operators. Note that since by definition $W(\xi\otimes \xi_{h}) = \xi\otimes \xi_{h}$ for any $\xi\in H$, we also have $\widehat{W}(\xi_{h}\otimes \xi) = \xi_{h}\otimes \xi$. Let us prove that $(\i\otimes \pi)(\widehat{W})(\xi_{h}\otimes \zeta) = \xi_{h}\otimes \zeta$ for any $\zeta\in K$. First, for any $\theta_{1}, \theta_{2}, \xi\in L^{2}(\mathbb{G})$,
\begin{eqnarray*}
\langle (\i \otimes \omega_{\theta_{1}, \theta_{2}})(\widehat{W})\xi_{h}, \xi\rangle & = & \langle \widehat{W}(\xi_{h}\otimes\theta_{1}), \xi\otimes \theta_{2}\rangle \\
& = & \langle \xi_{h}, \xi\rangle \langle \theta_{1}, \theta_{2}\rangle \\
& = & \omega_{\theta_{1}, \theta_{2}}(1)\langle \xi_{h}, \xi\rangle.
\end{eqnarray*}
Thus by density, we have $\langle (\i\otimes \omega)(\widehat{W})\xi_{h}, \xi\rangle = \omega(1)\langle \xi_{h}, \xi\rangle$ for any $\omega\in \mathcal{B}(L^{2}(\mathbb{G}))_{*}$. Secondly, let $\zeta_{1}, \zeta_{2}\in K$ and $\xi\in L^{2}(\mathbb{G})$, then
\begin{eqnarray*}
\langle (\i\otimes \pi)(\widehat{W})(\xi_{h}\otimes \zeta_{1}), \xi\otimes \zeta_{2}\rangle & = & \langle (\i\otimes \omega_{\zeta_{1}, \zeta_{2}}\circ\pi)(\widehat{W})\xi_{h}, \xi\rangle \\
& = & \omega_{\zeta_{1}, \zeta_{2}}(\pi(1))\langle \xi_{h}, \xi\rangle = \langle \xi_{h}\otimes \zeta_{1}, \xi\otimes \zeta_{2}\rangle.
\end{eqnarray*}
Now we can compute
\begin{eqnarray*}
A(T)\xi_{h} & = & (\i\otimes\pi)(\widehat{W})^{*}(1\otimes T)\widehat{W}(\xi_{h}\otimes \xi_{h}) \\
& = & (\i\otimes\pi)(\widehat{W})^{*}(1\otimes T)(\xi_{h}\otimes \xi_{h}) \\
& = & (\i\otimes\pi)(\widehat{W})^{*}(\xi_{h}\otimes T(\xi_{h})) \\
& = & \xi_{h}\otimes T(\xi_{h}).
\end{eqnarray*}
Thus, $\langle A(T)^{*}(x\otimes 1)A(T)\xi_{h}, \xi_{h}\rangle = \langle (x\otimes 1)A(T)(\xi_{h}), A(T)\xi_{h} \rangle = \langle x(\xi_{h}), \xi_{h}\rangle \|T(\xi_{h})\|^{2}$ and using Kadison's inequality we get
\begin{eqnarray*}
\tau(M_{A(T)}(x)^{*}M_{A(T)}(x)) & \leqslant & \|A(T)\|^{2} \tau(M_{A(T)}(x^{*}x)) \\
& \leqslant & \|T\|^{2}\|A(T)\|^{2} \tau(x^{*}x) \\
& \leqslant & \|T\|^{4} \tau(x^{*}x).
\end{eqnarray*}
Let us now turn to $A(T)^{*}A(T)$. First,
\begin{eqnarray*}
A(T)^{*}A(T)\xi_{h} & = & (\i\otimes \xi_{h}^{*})\widehat{W}^{*}(1\otimes T^{*})(\i\otimes \pi)(\widehat{W})(\xi_{h}\otimes T(\xi_{h})) \\
& = & (\i\otimes \xi_{h}^{*})\widehat{W}^{*}(1\otimes T^{*})(\xi_{h}\otimes T(\xi_{h})) \\
& = & (\i\otimes \xi_{h}^{*})\widehat{W}^{*}(\xi_{h}\otimes T^{*}T(\xi_{h})) \\
& = & (\i\otimes \xi_{h}^{*})(\xi_{h}\otimes T^{*}T(\xi_{h})) \\
& = & \langle \xi_{h}, T^{*}T(\xi_{h})\rangle \xi_{h} \\
& = & \|T(\xi_{h})\|^{2}\xi_{h}
\end{eqnarray*}
and $\xi_{h}$ is an eigenvector of $A(T)^{*}A(T)$ with eigenvalue $\|T(\xi_{h})\|^{2}$. If $A(T)^{*}A(T)$ is invertible, then
\begin{equation*}
(A(T)^{*}A(T))^{-1/2}\xi_{h} = \|T(\xi_{h})\|^{-1}\xi_{h}.
\end{equation*}
Thus $A(T)\vert A(T)\vert^{-1}\xi_{h} = \xi_{h}\otimes \|T(\xi_{h})\|^{-1}T(\xi_{h})$ and
\begin{eqnarray*}
\tau(M_{A(T)\vert A(T)\vert^{-1}} x) & = & \langle (x\otimes 1)A(T)\vert A(T)\vert^{-1}\xi_{h}, A(T)\vert A(T)\vert^{-1}\xi_{h}\rangle \\
& = & \langle x(\xi_{h}), \xi_{h}\rangle \\
& = & \tau(x).
\end{eqnarray*}
\end{proof}
For $x\in \mathcal{B}(L^{2}(\mathbb{G}))$, we set $\|x\|_{2} = \tau(x^{*}x)^{1/2}$. Using Lemma \ref{lem:technical} with $\delta = A((P-Q)/2)$ and the bound $\|\delta\|^{2} = \|\delta^{*}\delta\|\leqslant \eta$, we obtain
\begin{equation*}
\|(m_{a} - M_{\gamma})(x)\|_{2}^{2} = \|M_{\delta}(x)\|_{2}^{2} = \tau(M_{\delta}(x)^{*}M_{\delta}(x))\leqslant \|\delta\|^{4}\|x\|_{2}^{2} \leqslant \eta^{2}\|x\|_{2}^{2},
\end{equation*}
i.e. $\|(m_{a} - M_{\gamma})(x)\|_{2}\leqslant \eta\|x\|_{2}$. We also have $\gamma = A((P+Q)/2)$, thus, setting $T=(P+Q)/2$ and observing that
\begin{equation*}
\left\|\left(T\xi_{h}- \frac{1}{\|T\xi_{h}\|}T\xi_{h}\right)\right\| = \left\|\xi_{h}\otimes\left(T\xi_{h}- \frac{1}{\|T\xi_{h}\|}T\xi_{h}\right)\right\| = \|(\gamma - \tilde{\gamma})\xi_{h}\|\leqslant \|\gamma - \tilde{\gamma}\| \leqslant \eta,
\end{equation*}
we can compute again with Lemma \ref{lem:technical}
\begin{eqnarray*}
\tau(M_{\gamma - \tilde{\gamma}}(x^{*}x)) & \leqslant & \langle (x^{*}x\otimes 1)(\gamma - \tilde{\gamma})\xi_{h}, (\gamma - \tilde{\gamma})\xi_{h}\rangle \\
& = & \left\langle (x^{*}x\otimes 1)\left(\xi_{h}\otimes \left(T\xi_{h}- \frac{1}{\|T\xi_{h}\|}T\xi_{h}\right)\right), \xi_{h}\otimes \left(T\xi_{h} - \frac{1}{\|T\xi_{h}\|}T\xi_{h}\right)\right\rangle \\
& = & \langle (x^{*}x)\xi_{h}, \xi_{h}\rangle \left\|T\xi_{h}- \frac{1}{\|T\xi_{h}\|}T\xi_{h}\right\|^{2} \\
& \leqslant & \eta^{2}\tau(x^{*}x) \\
& \leqslant & \eta^{2}\|x\|_{2}^{2},
\end{eqnarray*}
thus $\|M_{\gamma - \tilde{\gamma}}(x)\|_{2}^{2} = \tau(M_{\gamma - \tilde{\gamma}}(x)^{*}M_{\gamma - \tilde{\gamma}}(x)) \leqslant \|\gamma - \tilde{\gamma}\|^{2}\tau(M_{\gamma - \tilde{\gamma}}(x^{*}x)) \leqslant \eta^{4}\|x\|^{2}_{2}$. Now, for all $x\in \mathcal{B}(L^{2}(\mathbb{G}))$, we have
\begin{eqnarray*}
\|M_{\gamma}(x) - M_{\tilde{\gamma}}(x)\|_{2} & = & \|M_{\gamma}(x) - M_{\gamma + (\tilde{\gamma} -\gamma)}(x)\|_{2} \\
& \leqslant & \|M_{\tilde{\gamma} - \gamma}(x)\|_{2} + \tau((\tilde{\gamma} - \gamma)^{*}(x^{*}\otimes 1)\gamma\gamma^{*}(x\otimes 1)(\tilde{\gamma} - \gamma))^{1/2} \\
& & +\: \tau(\gamma^{*}(x^{*}\otimes 1)(\tilde{\gamma} - \gamma)(\tilde{\gamma} - \gamma)^{*}(x\otimes 1)\gamma)^{1/2} \\
& \leqslant & \eta^{2}\|x\|_{2} + (1+\eta)\tau((\tilde{\gamma} - \gamma)^{*}(x^{*}\otimes 1)(x\otimes 1)(\tilde{\gamma} - \gamma))^{1/2} \\
& & +\: \eta\tau(\gamma^{*}(x^{*}\otimes 1)(x\otimes 1)\gamma)^{1/2} \\
& \leqslant & \eta^{2}\|x\|_{2} + (1+\eta)\tau(M_{\tilde{\gamma} - \gamma}(x^{*}x))^{1/2} + \eta\tau(M_{\gamma}(x^{*}x))^{1/2} \\
& \leqslant & \eta^{2}\|x\|_{2} + (1+\eta)\eta\|x\|_{2} + \eta(1+\eta)\|x\|_{2} \\
& \leqslant & 5\eta\|x\|_{2}.
\end{eqnarray*}
Finally, $\|(m_{a} - M_{\tilde{\gamma}})(x)\|_{2}\leqslant 6\eta\|x\|_{2}$. Moreover, by the last part of Lemma \ref{lem:technical}, $M_{\tilde{\gamma}}$ preserves $\tau$. We also get that $m_{a}$ is state preserving since
\begin{eqnarray*}
\tau(m_{a}(x)) & = & \langle \beta^{*}(x\otimes 1)\alpha(\xi_{h}), \xi_{h}\rangle \\
& = & \langle (x\otimes 1)\alpha(\xi_{h}), \beta(\xi_{h})\rangle \\
& = & \langle (x\otimes 1)A(P)(\xi_{h}), A(Q)(\xi_{h})\rangle \\
& = & \langle (x\otimes 1)(\xi_{h}\otimes P(\xi_{h})), \xi_{h}\otimes Q(\xi_{h})\rangle \\
& = & \langle x(\xi_{h}), \xi_{h}\rangle \langle P(\xi_{h}), Q(\xi_{h})\rangle \\
& = & \tau(x)\tau(m_{a}(1))
\end{eqnarray*}
and we have assumed that $m_{a}(1)=1$.
We can now prove the theorem. For each $i$, set $A_{i} = C_{\text{red}}(\mathbb{G}_{i})$ and $B_{i} = \mathcal{B}(L^{2}(\mathbb{G}_{i}))$. Consider a net $(a_{i, j})_{j}$ of finitely supported elements in $\ell^{\infty}(\widehat{\mathbb{G}}_{i})$ converging pointwise to $1$ and such that $\limsup_{j} \|m_{a_{i, j}}\|_{cb}\leqslant 1$. Note that since $\widehat{\varepsilon}(a_{i, j})\rightarrow 1$ (because of the pointwise convergence assumption), we can, up to passing to a subnet, assume it to be non-zero. For any $0 < \eta < 1$, there is a $j(\eta)$ such that $\|m_{a_{i, j(\eta)}}\|_{cb}\leqslant 1+\eta$ (the same being automatically true for $m_{\widehat{S}(a_{i, j(\eta)})^{*}}$). The procedure above then yields a unital completely positive map approximating $m_{a_{i, j(\eta)}}$ up to $6\eta$ in completely bounded norm and in $L^{2}$-operator norm. Applying Theorem \ref{thm:freecmap} proves that $\Lambda_{cb}(\ast_{i}A_{i}) = 1$ and Theorem \ref{thm:quantumwa} gives the desired conclusion.
\end{proof}
\begin{example}
The free orthogonal quantum groups $\widehat{A_{o}(F)}$ are amenable for any $F$ in $GL(2, \mathbb{C})$ such that $F\overline{F}\in \mathbb{R}\cdot\mathrm{Id}$, thus their Cowling-Haagerup constant is equal to $1$. Moreover, $\widehat{A_{u}(F)}$ can be seen as a subgroup of $\mathbb{Z}\ast \widehat{A_{o}(F)}$. Thus $\Lambda_{cb}(\widehat{A_{u}(F)})=1$ and any free product of some $2$-dimensional free quantum groups with duals of compact groups has Cowling-Haagerup constant equal to $1$.
\end{example}
\section*{Acknowledgments}
We are very grateful to P. Fima for many useful discussions and advice and to E. Blanchard and R. Vergnioux for their careful proof-reading of an early version. We also thank M. Brannan for pointing us to the paper \cite{daws2011multipliers}.
\bibliographystyle{amsalpha}
|
2109.09610
|
\section*{Other titles in \nowfnt@journaltitle}}
\subsection{Editor-in-Chief}
\journalchiefeditor{Yonina Eldar}{Weizmann Institute\\Israel}
\subsection{Editors}
\begin{multicols}{3}\raggedcolumns
\journaleditor{Pao-Chi Chang}{National Central University}
\journaleditor{Pamela Cosman}{University of California, San Diego}
\journaleditor{Michelle Effros}{California Institute of Technology}
\journaleditor{Yariv Ephraim}{George Mason University}
\journaleditor{Alfonso Farina}{Selex ES}
\journaleditor{Sadaoki Furui}{Tokyo Institute of Technology}
\journaleditor{Georgios Giannakis}{University of Minnesota}
\journaleditor{Vivek Goyal}{Boston University}
\journaleditor{Sinan Gunturk}{Courant Institute}
\journaleditor{Christine Guillemot}{INRIA}
\journaleditor{Robert W. Heath, Jr.}{The University of Texas at Austin}
\journaleditor{Sheila Hemami}{Northeastern University}
\journaleditor{Lina Karam}{Arizona State University}
\journaleditor{Nick Kingsbury}{University of Cambridge}
\journaleditor{Alex Kot}{Nanyang Technical University}
\journaleditor{Jelena Kovacevic}{New York University}
\journaleditor{Geert Leus}{TU Delft}
\journaleditor{Jia Li}{Pennsylvania State University}
\journaleditor{Henrique Malvar}{Microsoft Research}
\journaleditor{B.S. Manjunath}{University of California, Santa Barbara}
\journaleditor{Urbashi Mitra}{University of Southern California}
\journaleditor{Bj\"{o}rn Ottersten}{KTH Stockholm}
\journaleditor{Vincent Poor}{Princeton University}
\journaleditor{Anna Scaglione}{University of California, Davis}
\journaleditor{Mihaela van der Shaar}{University of California, Los Angeles}
\journaleditor{Nicholas D. Sidiropoulos}{Technical University of Crete}
\journaleditor{Michael Unser}{EPFL}
\journaleditor{P.P. Vaidyanathan}{California Institute of Technology}
\journaleditor{Ami Wiesel}{The Hebrew University of Jerusalem}
\journaleditor{Min Wu}{University of Maryland}
\journaleditor{Josiane Zerubia}{INRIA}
\end{multicols}
\end{editorialboard}
\section*{Abstract}
\input{sections/s01,abs}
\input{sections/s10,intro,background}
\input{sections/s20,imagerecon}
\input{sections/s30,hpo}
\input{sections/s40,gd,lower}
\input{sections/s50,gd,upper}
\input{sections/s60,prevresults}
\input{sections/s70,discussion,conc}
\section*{Acknowledgements}
\input{sections/acknowledgements}
\chapter{Introduction}
\label{chap: intro}
Methods for image recovery aim to estimate a good-quality image
from noisy, incomplete, or indirect measurements.
Such methods are also known as computational imaging. %
For example,
image denoising and image deconvolution
attempt to recover a clean image
from a noisy and/or blurry input image,
and
image inpainting tries to complete missing measurements from an image.
Medical image reconstruction aims
to recover
images that humans can interpret
from the indirect measurements
recorded by a system
like an \ac{MRI} or \ac{CT} scanner.
Such image reconstruction applications
are a type of inverse problem
\cite{engl:96}.
New methods for image reconstruction
attempt to lower complexity,
decrease data requirements,
or
improve image quality for a given input data quality.
For example,
in CT
one goal is to provide doctors with information
to help their patients while reducing radiation exposure
\cite{mccollough:17:ldc}.
To achieve these lower radiation doses,
the CT system must collect data with lower beam intensity
or fewer views.
Similarly, in MRI collecting fewer k-space samples
can reduce scan times.
Such \dquotes{undersampling} leads to an under-determined problem,
with fewer knowns (measurements from a scanner) than unknowns (pixels in the reconstructed image),
requiring advanced image reconstruction methods.
Existing reconstruction methods make different assumptions
about the characteristics of the images being recovered.
Historically, the assumptions are based on easily observed
(or assumed)
characteristics of the desired output image,
such as a tendency to have smooth regions with few edges
or to have some form of sparsity
\cite{eldar:12:cs}.
More recent machine learning approaches use training data to discover image characteristics.
These learning-based methods often outperform traditional methods,
and are gaining popularity
in part because of increased availability of training data and computational resources
\citep{wang:16:apo,hammernik:2020:machinelearningimage}.
There are many design decisions in
learning-based reconstruction methods.
How many parameters should be learned?
What makes a set of parameters \dquotes{good?}
How can one learn these good parameters?
Using a bilevel methodology
is one systematic way to address these questions.
Bilevel methods are so named because they involve two \dquotes{levels}
of optimization:
an upper-level loss function that defines
a goal or measure of goodness (equivalently, badness)
for the learnable parameters
and a lower-level cost function that
uses the learnable parameters,
typically as part of a regularizer.
The main benefits of bilevel methods are
learning task-based hyperparameters in a principled approach
and
connecting machine learning techniques
with image reconstruction methods
that are defined in terms of
optimizing a cost function,
often called model-based image reconstruction methods.
Conversely, the main challenge with bilevel methods
is the computational complexity.
However, like with neural networks,
that complexity is highest during the training process,
whereas deployment has lower complexity
because it requires only the lower-level problem.
The methods in this review are broadly applicable to bilevel problems,
but we focus on formulations and applications where the lower-level problem
is an image reconstruction cost function
that uses regularization based on analysis sparsity.
The application of bilevel methods to image reconstruction problems
is relatively new \citep{hammernik:2020:machinelearningimage},
but there are a growing number of promising research efforts in this direction.
We hope this review serves as a primer
and unifying treatment %
for readers who may already be familiar with image reconstruction problems
and traditional regularization
approaches
but who have not yet delved into bilevel methods.
This review lies at the intersection of
a specific machine learning method, bilevel,
and a specific application, filter learning for image reconstruction.
For a broad overview of machine learning in image reconstruction,
see \citep{hammernik:2020:machinelearningimage}.
For an overview of image reconstruction methods,
including classical, variational, and learning-based methods,
see \citep{mccann:2019:biomedicalimagereconstruction}.
Finally, for a historical overview of bilevel optimization
and perspective on its use in a wide variety of fields,
see \citep{dempe:2020:bileveloptimizationadvances}.
Within the image recovery field,
bilevel methods have also been used, \eg,
in learning synthesis dictionaries \citep{mairal:2012:taskdrivendictionarylearning, chambolle:2021:learningconsistentdiscretizations}.
The structure of this review is as follows.
The remainder of the introduction
defines our notation
and presents a running example
bilevel problem.
\cref{chap: image recon}
provides background information
on the lower-level image reconstruction
cost function
and analysis regularizers.
\cref{chap: hpo}
provides background information
on the upper-level loss function,
specifically loss function design
and hyperparameter optimization strategies.
These background sections
are not meant to be exhaustive overviews
of these broad topics.
\cref{chap: ift and unrolled}
presents building blocks for optimizing a bilevel problem.
\cref{chap: bilevel methods}
uses these building blocks to discuss optimization methods for the upper-level loss function.
\cref{chap: applications}
discusses previous applications of the bilevel method
in image recovery problems,
including signal denoising, image inpainting, and medical image reconstruction.
It also overviews bilevel formulations for blind learning
and learning space-varying tuning parameters.
Finally, \cref{chap: conclusion} offers
summarizing commentary on the benefits and drawbacks
of bilevel methods for computational imaging
and
our thoughts for future directions for the field.
\section{Notation}
This review considers
continuous-valued,
discrete space signals.
Some papers,
\eg, \citep{calatroni:2017:bilevelapproacheslearning,delosreyes:2017:bilevelparameterlearning},
analyze signals in function space,
arguing that
the goal of high resolution imagery is to approximate a continuous space reality
and
that analysis in continuous domain can yield insights and optimization algorithms
that are resolution independent.
However,
the majority of bilevel methods are motivated and described
in discrete space.
The review does not include
discrete-valued settings,
such as image segmentation \citep{ochs:2015:bileveloptimizationnonsmooth};
those problems often require different techniques
to optimize the lower-level cost function.
The literature is inconsistent in how it refers to variables in machine learning problems.
For consistency within this document,
we define the following terms:
\begin{itemize}[noitemsep,topsep=0pt]
\item \textbf{Hyperparameters}:
Any adjustable %
parameters that are part of a model.
Tuning parameters and model parameters are both sub-types of hyperparameters.
This document uses \params to denote a vector of hyperparameters.
\item \textbf{Tuning parameters}:
Scalar parameters that weight terms in a cost function
to determine the relative importance of each term.
This review uses $\beta$ to denote individual tuning parameters.
\item \textbf{Model parameters}:
Parameters, generally in vector or matrix form,
that are used in the structure of an cost or loss function,
typically as part of the regularization term.
In the running example in the next section,
the model parameters are typically filter coefficients,
denoted \vc.
\end{itemize}
We write vectors as column vectors
and
use bold to denote
matrices (uppercase letters)
and vectors (lowercase letters).
Subscripts index vector elements,
so $x_i$ is the $i$th element in \vx.
For functions that are applied element-wise to vectors,
we use notation following the Julia programming language
\cite{bezanson:17:jaf},
where
$f.(\vx)$
denotes the function $f$ applied element wise
to its argument:
\[
\vx \in \F^\sdim
\implies
f.(\vx) = \begin{bmatrix}
f(x_1) \\
\vdots \\
f(x_\sdim)
\end{bmatrix}
\in \F^\sdim
.\]
The field \F can be either \R or \C,
depending on the application.
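For readers more at home in NumPy than Julia, the same elementwise semantics correspond to NumPy's ufunc broadcasting; the following sketch (our illustration, not part of the review's notation) mirrors $f.(\vx)$:

```python
import numpy as np

# Julia's f.(x) applies the scalar function f entrywise;
# NumPy ufuncs such as np.abs do the same thing implicitly.
x = np.array([-2.0, 0.0, 3.0])
fx = np.abs(x)                 # plays the role of f.(x) with f = |.|
assert fx.shape == x.shape
assert np.allclose(fx, [2.0, 0.0, 3.0])
```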
\begin{table}[hb!]
\ifloadeps
\includegraphics[]{TablesAndAlgs/epsfiles/tab,variablelookup.eps}
\else
\input{TablesAndAlgs/tab,variablelookup}
\fi
\iffigsatend \tabletag{1.1} \fi
\caption{ Overview of frequently used symbols in the review.}
\label{tab: variable lookup}
\end{table}
Convolution between a vector, \vx,
and a filter, \vc,
is denoted as
$\vc \conv \vx$.
This review assumes all convolutions
use circular boundary conditions.
Thus,
the same convolution is equivalent to
multiplication with a square, circulant matrix:
\[
\vc \conv \vx = \mC \vx
.\]
The conjugate mirror reversal of \vc
is denoted as $\Tilde{\vc}$ and
its application is equivalent to
multiplying with the adjoint of \mC:
\[
\Tilde{\vc} \conv \vx = \mC' \vx
.\]
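Both identities can be verified numerically. The NumPy sketch below (a hypothetical random filter and signal, with circular boundary conditions implemented via the FFT) checks that $\vc \conv \vx = \mC \vx$ and that convolving with the conjugate mirror reversal implements $\mC' \vx$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
c = rng.standard_normal(3)                    # short filter
x = rng.standard_normal(N)

# zero-pad the filter to length N; circular convolution via the DFT
c_pad = np.zeros(N)
c_pad[:c.size] = c
Cf = np.fft.fft(c_pad)
conv = np.real(np.fft.ifft(Cf * np.fft.fft(x)))

# circulant matrix C: column k is c_pad circularly shifted by k
C = np.column_stack([np.roll(c_pad, k) for k in range(N)])
assert np.allclose(C @ x, conv)               # c * x == C x

# conjugate mirror reversal: c~[k] = conj(c_pad[-k mod N]) implements C' x
c_rev = np.conj(np.roll(c_pad[::-1], 1))
adj = np.real(np.fft.ifft(np.fft.fft(c_rev) * np.fft.fft(x)))
assert np.allclose(C.conj().T @ x, adj)       # c~ * x == C' x
```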
Finally,
for partial derivatives,
we use the notation that
\begin{align}
\nabla_\vx f(\vx,\vy) &= \frac{\partial f(\vx,\vy)}{\partial \vx} \in \F^\sdim,
\nonumber \\
\nabla_{\vx \vy} f(\vx,\vy) &= \left[ \frac{\partial^2 f(\vx,\vy)}{\partial x_i \partial y_j} \right]
\in \F^{\sdim \times \ydim},
\label{eq:nabla-x-y}
\\
\nabla_{\vx \vy} f(\xhat,\vy) &= \nabla_{\vx\vy} f(\vx,\vy) \evalat_{\vx=\xhat}.
\nonumber
\end{align}
\trefs{tab: variable lookup}{tab: function lookup}
summarize our frequently used notation
for variables and functions.
\begin{table}[ht]
\centering
\ifloadeps
\includegraphics[]{TablesAndAlgs/epsfiles/tab,variablelookupfunctions.eps}
\else
\input{TablesAndAlgs/tab,variablelookupfunctions}
\fi
\iffigsatend \tabletag{1.2} \fi
\caption{ Overview of frequently used functions in the review. }
\label{tab: function lookup}
\end{table}
\section{Bilevel Method: Set-up and Running Example \label{sec: bilevel set-up}}
This section introduces a generic bilevel problem
as well as
a specific bilevel problem
that serves as a running example
throughout the review.
Later sections discuss
many of the ideas presented here more thoroughly.
Our hope is that an early introduction to the formal problem
motivates readers
and that
this section acts as a quick-reference guide to our notation.
This review considers the image reconstruction problem
where the goal is to form an estimate
$\xhat \in \F^\sdim$
of a (vectorized) latent image,
given a set of measurements
$\vy \in \F^\ydim$.
For denoising and inpainting problems, $\sdim=\ydim$,
but the two dimensions may differ significantly in
more general image reconstruction problems.
The forward operator, $\mA \in \F^{\ydim \by \sdim}$
models the physics of the system
such that one would expect $\vy = \mA \vx$
in an ideal (noiseless) system. %
We focus on linear imaging systems here,
but the concepts generalize readily to nonlinear forward models.
When known (in a supervised training setting),
we denote the true, underlying signal
as $\xtrue \in \F^\sdim$.
We focus on model-based image reconstruction methods
where the goal is to estimate \vx
from \vy
by solving an optimization problem
of the form
\begin{equation}
\xhat
= \xhat(\params)
= \argmin_{\vx \in \F^\sdim} \ofcn(\vx \, ; \params, \vy)
\label{eq: xhat definition}.
\end{equation}
To simplify notation,
we drop \vy from the list of \ofcn arguments
except where needed for clarity.
The quality of the estimate \xhat
can depend greatly
on the choice of the hyperparameters \params.
Historically
there have been numerous approaches pursued
for choosing \params,
such as
cross validation
\cite{stone:78:cva},
generalized cross validation
\cite{golub:79:gcv},
the discrepancy principle
\cite{phillips:62:atf}
and Bayesian methods
\cite{saquib:98:mpe},
among others.
Bilevel methods provide a framework for choosing hyperparameters.
Without further ado,
a bilevel problem
for learning hyperparameters
\params
has the following \dquotes{double minimization} form:
\begin{align}
\paramh =
\argmin_{\params \in \F^\paramsdim}
& \underbrace{\lfcn (\params \,;\, \xhat(\params))}_{\lfcn(\params)}
\text{ where } \label{eq: generic bilevel upper-level}
\tag{UL}\\
&\xhat(\params) = \argmin_{\vx \in \F^\sdim} \ofcn(\vx \, ; \params) \label{eq: generic bilevel lower-level}
\tag{LL}.
\end{align}
\fref{fig: generic bilevel} depicts a generic bilevel problem
for image reconstruction.
The upper-level (UL), or outer, loss function,
$\lfcn : \R^\paramsdim \times \F^\sdim \mapsto \R$,
quantifies how good (or bad) a vector \params of learnable parameters is;
it depends on the solution to the
lower-level (LL) cost function, \ofcn, which depends on \params.
We will also write the upper-level loss function with a single parameter as
$\lfcn(\params)$.
We write the lower-level cost
as an optimization problem with \dquotes{argmin}
and thus implicitly assume %
that \ofcn has a unique minimizer
with respect to \vx.
(See \cref{chap: ift and unrolled} for more discussion of this point).
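To make the (UL)/(LL) structure concrete, the sketch below solves a deliberately tiny bilevel problem of our own construction (not from the review): a scalar tuning parameter $\beta$ with the quadratic lower-level cost $\onehalf\|\vx-\vy\|^2 + \onehalf\beta\|\vx\|^2$, whose minimizer has the closed form $\xhat(\beta) = \vy/(1+\beta)$, so the upper-level gradient is available analytically:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.ones(50)
y = x_true + 0.5 * rng.standard_normal(50)

# Lower level: xhat(beta) = argmin_x 0.5||x - y||^2 + 0.5*beta*||x||^2
#              = y / (1 + beta)   (closed form, so no inner iterations needed)
xhat = lambda beta: y / (1 + beta)

# Upper level: minimize l(beta) = 0.5||xhat(beta) - x_true||^2 by gradient
# descent, using d xhat / d beta = -y / (1 + beta)^2.
beta = 0.0
for _ in range(200):
    grad = (xhat(beta) - x_true) @ (-y / (1 + beta) ** 2)
    beta -= 0.05 * grad

loss = lambda b: 0.5 * np.sum((xhat(b) - x_true) ** 2)
assert beta > 0                    # some shrinkage is learned from the data
assert loss(beta) < loss(0.0)      # the learned beta beats no regularization
```

In realistic problems the lower-level minimizer has no closed form; replacing this analytic derivative is exactly what the implicit-differentiation and unrolling machinery discussed later addresses.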
\begin{figure}[htb]
\centering
\ifloadepsorpdf
\includegraphics[]{bilevel,bigpic}
\else
\input{Figures/tikz,bilevel,bigpic}
\fi
\iffigsatend \figuretag{1.1} \fi
\caption{
Depiction of a typical bilevel problem
for image reconstruction,
illustrated using XCAT phantom from \citep{segars:10:4xp}.
The upper box represents the training process,
with the upper-level loss
and lower-level cost function.
During training, one minimizes the upper-level loss
with respect to a vector of parameters, \params,
that are used in the image reconstruction task.
Once learned, \paramh
is typically deployed in the same
image reconstruction task,
shown in the lower box.
}
\label{fig: generic bilevel}
\end{figure}
Bilevel methods typically use training data.
Specifically,
one often assumes
that a given set of
\Ntrue
good quality images
\(
\xtrue_1, \ldots, \xtrue_{\Ntrue}
\in \F^\sdim
\)
are representative
of the images of interest
in a given application.
(For simplicity of notation
we assume the training images
have the same size,
but they can have different sizes in practice.) %
We generate corresponding simulated measurements
for each training image
using the imaging system model:
\begin{equation}
\vy_\xmath{j}
= \mA \xtrue_\xmath{j} + \vn_\xmath{j}
,\quad
\xmath{j} = 1,\ldots, \Ntrue
, \label{eq: y=Ax+n}
\end{equation}
where
$\vn_\xmath{j} \in \F^\ydim$
denotes
an appropriate random noise realization%
\footnote{
A more general system model allows the noise to depend on the data
and system model,
\ie, $\vn_j(\mA, \vx_j)$.
This generality is needed for applications with certain noise distributions
such as Poisson noise.
}.
In \eqref{eq: y=Ax+n},
we add one noise realization
to each of the \Ntrue images;
in practice one could add multiple noise realizations
to each $\xtrue_\xmath{j}$
to augment the training data.
We then use the training pairs
$ (\xtrue_\xmath{j}, \vy_\xmath{j}) $
to learn a good value of \params.
After those parameters are learned,
subsequent test images are reconstructed
using \eqref{eq: xhat definition}
with the learned hyperparameters,
\paramh.
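A minimal sketch of this training-data pipeline, assuming a hypothetical inpainting-style mask for \mA and i.i.d.\ Gaussian noise (all sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, J = 64, 5                      # signal length, number of training signals

# hypothetical "true" training signals (smooth random walks)
X_true = np.cumsum(rng.standard_normal((J, N)), axis=1)

# inpainting-style forward model: diagonal 0/1 mask keeping ~75% of samples
keep = (rng.random(N) < 0.75).astype(float)
A = np.diag(keep)

# y_j = A x_j + n_j, one noise realization per training signal
sigma = 0.1
Y = X_true @ A.T + sigma * rng.standard_normal((J, N))

assert Y.shape == (J, N)
assert np.allclose(Y[:, keep == 0].mean(), 0, atol=0.2)  # masked samples carry only noise
```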
An alternative
to \eqref{eq: generic bilevel upper-level}
is the following
stochastic formulation
of bilevel learning:
\begin{align}
\paramh =
\argmin_{\params \in \F^\paramsdim}
& \underbrace{\E{\lfcn(\params)}}_{
\approx \frac{1}{\Ntrue} \sum_{\xmath{j}=1}^\Ntrue \lfcn (\params \,;\, \xhat_\xmath{j}(\params))}
\text{ where } \label{eq: stochastic bilevel upper-level}
\\
&\xhat_\xmath{j}(\params) = \argmin_{\vx \in \F^\sdim} \ofcn(\vx \, ; \params, \vy_j) \label{eq: stochastic bilevel lower-level}.
\end{align}
The expectation,
taken with respect to the training data and noise distributions,
is typically approximated as a sum over the training data.
To offer a concrete example,
this review will frequently refer to the following
running example (Ex),
a filter learning bilevel problem:
\begin{align}
\paramh = &\argmin_{\params \in \F^\paramsdim} \onehalf \normrsq{\xhat(\params) - \xtrue}_2
, \text{ where } \nonumber \\
\xhat(\params) &= \argmin_{\vx \in \F^\sdim} \onehalf \norm{\mA \vx-\vy}^2_2
+ \ebeta{0} \sum_{k=1}^K \ebeta{k} \mat{1}' \sparsefcn.(\hk \conv \vx; \epsilon),
\label{eq: bilevel for analysis filters}
\tag{Ex}
\end{align}
where $\params \in \F^\paramsdim$ contains all variables
that we wish to learn:
the filter coefficients
$\hk \in \F^\filterdim$
and tuning parameters
$\beta_k \in \R $
for all $k \in [1,K]$.
We include an auxiliary tuning parameter,
$\beta_0 \in \R$,
for easier comparison to other models.
\fref{fig: running example for bilevel}
depicts the running example.
In \eqref{eq: bilevel for analysis filters},
we drop the sum over \Ntrue training images %
for simplicity;
the methods easily extend to multiple training signals.
In general,
the filters may be of different lengths
with minimal impact on the methods presented in this review.
However, for ease of notation,
we will consider \hk to be of length \filterdim for all $k$,
\eg, a 2D filter might be $\sqrt{\filterdim} \by \sqrt{\filterdim}$.
\begin{figure}
\centering
\iffigsatend \figuretag{1.2} \fi
\ifloadepsorpdf
\includegraphics[]{bilevelrunningexample}
\else
\input{Figures/tikz,bilevelrunningexample}
\fi
\caption{Example bilevel problem in \eqref{eq: bilevel for analysis filters}.
The vector of learnable hyperparameters, \params,
includes the tuning parameters, $\beta_k$,
and the filter coefficients, \hk,
shown as example filters.
Although this review will generally
consider learning filters of a single size,
the figure depicts how the framework easily
extends to 2D filters of different sizes.}
\label{fig: running example for bilevel}
\end{figure}
The function \sparsefcn
in \eqref{eq: bilevel for analysis filters}
is a sparsity-promoting function.
If we were to choose
$\phi(z) = |z|$,
then the regularizer would involve 1-norm terms
of the type common in compressed sensing formulations:
\[
\mat{1}' \sparsefcn.(\hk \conv \vx)
= \norm{\hk \conv \vx}_1 %
.\]
However,
to satisfy differentiability assumptions
(see \cref{chap: ift and unrolled}),
this review will often consider
\sparsefcn
to denote the following ``corner rounded'' 1-norm
having the shape of a hyperbola
with the corresponding first and second derivatives:
\begin{align}
\sparsefcn(z) &= \sqrt{z^2 + \epsilon^2} \label{eq: corner rounded 1-norm}
\tag{CR1N}
\\ %
\dsparsefcn(z) &= \frac{z}{\sqrt{z^2 + \epsilon^2}} \in (-1,1) \nonumber \\ %
\ddsparsefcn(z) &= \frac{\epsilon^2}{\left( z^2 + \epsilon^2 \right)^{3/2}} \in (0,\frac{1}{\epsilon}] \nonumber %
,\end{align}
where $\epsilon$ is a parameter,
small relative to the expected range of $z$,
that controls the amount of corner rounding.
(Here, we use a dot over the function
rather than $\nabla$
to indicate a derivative
because \sparsefcn is a scalar function.)
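These formulas translate directly into code; the short check below (our illustration) confirms the stated derivative ranges and that \sparsefcn approaches the absolute value as $\epsilon \to 0$:

```python
import numpy as np

def phi(z, eps):    # corner-rounded 1-norm (CR1N)
    return np.sqrt(z ** 2 + eps ** 2)

def dphi(z, eps):   # first derivative, bounded by 1 in magnitude
    return z / np.sqrt(z ** 2 + eps ** 2)

def ddphi(z, eps):  # second derivative, in (0, 1/eps]
    return eps ** 2 / (z ** 2 + eps ** 2) ** 1.5

z, eps = np.linspace(-5, 5, 1001), 0.1
assert np.all(np.abs(dphi(z, eps)) < 1)
assert np.all(ddphi(z, eps) > 0) and np.isclose(ddphi(0.0, eps), 1 / eps)
assert np.allclose(phi(z, eps), np.abs(z), atol=eps)   # phi -> |.| as eps -> 0
```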
\section{Conclusion}
Bilevel methods for selecting hyperparameters
offer many benefits.
Previous papers motivate them as
a principled way to approach hyperparameter optimization
\citep{holler:2018:bilevelapproachparameter,dempe:2020:bileveloptimizationadvances},
as a task-based approach to learning
\citep{peyre:2011:learninganalysissparsity,haber:2003:learningregularizationfunctionals,delosreyes:2017:bilevelparameterlearning},
and/or
as a way to combine the data-driven improvements from learning methods
with the theoretical guarantees and explainability
provided by cost function-based approaches
\cite{chen:2020:learnabledescentalgorithm,calatroni:2017:bilevelapproacheslearning,kobler:2020:totaldeepvariation}.
A corresponding drawback of bilevel methods are their computational cost;
see \crefs{chap: ift and unrolled}{chap: bilevel methods}
for further discussion.
The task-based nature of bilevel methods
is a particularly important advantage;
\sref{sec: hpo filter learning} exemplifies why by
comparing the bilevel problem
to single-level, non-task-based
approaches for learning sparsifying filters.
Task-based refers to the hyperparameters being learned
based on how well they work in the lower-level cost function--%
the image reconstruction task in our running example.
The learned hyperparameters also adapt
to the training dataset and noise characteristics.
The task-based nature yields other benefits,
such as making constraints or regularizers
on the hyperparameters generally unnecessary;
\sref{sec: prev results loss function}
presents some exceptions
and
\citep{dempe:2020:bileveloptimizationadvances}
discusses further bilevel methods for
applications with constraints.
There are three main elements to a bilevel approach.
First,
the lower-level cost function
in a bilevel problem
defines a goal,
such as image reconstruction,
including what hyperparameters
can be learned,
such as filters for a sparsifying regularizer.
\cref{chap: image recon}
provides background on this element
specifically for
image reconstruction tasks,
such as the one in \eqref{eq: bilevel for analysis filters}.
\sref{sec: prev results lower level}
reviews example tasks
used in bilevel methods.
Second, the upper-level loss function
determines how the hyperparameters should be learned.
While the mean square error
(MSE)
loss function in the running example
is a common choice,
\cref{chap: hpo}
discusses other loss functions
based on supervised and unsupervised image quality metrics.
\sref{sec: prev results loss function}
then reviews example loss functions
used in bilevel methods.
While not apparent in the written optimization problem,
the third main element for a bilevel problem
is the optimization approach.
\sref{sec: hyperparameter search strategies} briefly discusses various
hyperparameter optimization strategies,
then
\crefs{chap: ift and unrolled}{chap: bilevel methods}
present multiple gradient-based bilevel
optimization strategies.
Throughout the review,
we refer to the running example
to show how the bilevel optimization strategies apply.
\chapter{Background: Image Reconstruction}
\label{chap: image recon}
This review focuses on bilevel problems
having image reconstruction as the lower-level problem.
Image reconstruction involves
undoing any transformations inherent
in an imaging system,
\eg, a camera or CT scanner,
and
removing measurement noise,
\eg, thermal and shot noise,
to realize an image that captures an underlying object of interest,
\eg, a patient's anatomy.
\fref{fig: image recon pipline}
shows an example image reconstruction pipeline for CT data.
The following sections
formally define image reconstruction,
discuss why regularization is important,
and overview common approaches
to regularization.
\begin{figure}[hb]
\centering
\ifloadepsorpdf
\includegraphics[]{imagereconpipeline}
\else
\input{Figures/tikz,imagereconpipeline}
\fi
\iffigsatend \figuretag{2.1} \fi
\caption{Example image reconstruction pipeline,
illustrated using XCAT phantom from \citep{segars:10:4xp}.
Here $\mathcal{A}$ denotes the actual physical mapping
of the imaging system
and \mA denotes the numerical system matrix
used for reconstruction.
}
\label{fig: image recon pipline}
\end{figure}
\section{Image Reconstruction \label{sec: image recon background}}
Although the true object is in continuous space,
image reconstruction is almost always performed on sampled, discretized signals
\cite{lewitt:03:oom}.
Without going into detail of the discretization process,
we define $\xtrue \in \F^\sdim$ as the \dquotes{true,} discrete signal.
The goal of image reconstruction is to recover
an estimate
$\xhat \approx \xtrue$
given corrupted measurements $\vy \in \F^\ydim$.
Although we define the signal as a one-dimensional vector for notational convenience,
the mathematics generalize to arbitrary dimensions.
To find \xhat,
image reconstruction involves minimizing a cost function,
$\ofcnargs$,
with two terms:
\begin{align}
\xhat = \argmin_{\vx \in \F^\sdim}
\underbrace{\overbrace{\dfcnargs}^{\text{Data-fit}} + \;\;\; \beta
\overbrace{\regfcn(\vx \, ; \params)}^{\text{Regularizer}}}_{\ofcnargs}
\label{eq: general data-fit plus reg}
\end{align}
The first term,
\dfcnargs,
is a data-fit term
that captures the physics of the ideal (noiseless) system
using the matrix $\mA \in \F^{\ydim \by \sdim}$;
that matrix
models the physical system
such that we expect an observation, \vy, to be
$\vy \approx \mA \vx$.
The most common data-fit term
penalizes the square Euclidean norm
of the \dquotes{measurement error,}
$\dfcnargs = \normsq{\mA \vx - \vy}_2$.
This intuitive data-fit term can be derived from a
maximum likelihood perspective,
assuming a white Gaussian noise distribution \citep{elad:07:avs}.
Many image reconstruction problems have linear
system models.
In image denoising problems, one takes $\mA=\I$.
For image inpainting, \mA is a diagonal matrix of 1's and 0's,
where the 0's correspond to sample indices of missing data
\cite{guillemot:14:iio}.
In MRI, the system matrix is often approximated as a diagonal matrix
times a discrete Fourier transform matrix,
though more accurate models are often needed
\cite{fessler:10:mbi}.
In some settings one can learn \mA
\cite{golub:80:aao},
or at least parts of \mA
\cite{ying:07:jir},
as part of the estimation process.
Although the bilevel method generalizes to learning \mA,
the majority of papers in the field
assume \mA is known;
\cref{chap: applications}
discusses a few exceptions.
Using the system model
\eqref{eq: y=Ax+n},
if \vn were known and \mA were invertible,
we could simply compute $\xhat = \xtrue = \mA^{-1}(\vy-\vn)$.
However, \vn is random and,
while we may be able to model its characteristics,
we never know it exactly.
Further, the system matrix, \mA,
is often not invertible because the reconstruction problem is frequently under-determined,
with fewer knowns than unknowns ($\ydim < \sdim$).
Therefore, we must include prior assumptions about
\xtrue
to make the problem feasible.
These assumptions about \xtrue are captured in the second,
regularization term in \eqref{eq: general data-fit plus reg},
which depends on \params.
The following section further discusses regularizers.
In sum,
image reconstruction involves finding
\xhat that matches the collected data \textit{and}
satisfies a set of prior assumptions.
The data-fit term encourages
\xhat to be a good match for the data;
without this term, there would be no need to collect data.
The regularization term encourages \xhat to match the prior assumptions.
Finally, the tuning parameter, $\beta$,
controls the relative importance of the two terms.
The cost function can be minimized using different optimization techniques
depending on the form of each term.
This section is a very short overview of image reconstruction methods.
See \citep{mccann:2019:biomedicalimagereconstruction}
for a more thorough review of biomedical image reconstruction.
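As one concrete optimization technique: when the regularizer uses a smooth \sparsefcn such as the corner-rounded 1-norm \eqref{eq: corner rounded 1-norm}, the whole cost is differentiable and plain gradient descent applies. The sketch below denoises ($\mA = \I$) with a single finite-difference filter; the filter, tuning parameters, and step size are our illustrative choices, not values from the review:

```python
import numpy as np

rng = np.random.default_rng(3)
x_true = np.repeat([0.0, 2.0, 1.0], 32)        # piecewise-constant signal
y = x_true + 0.3 * rng.standard_normal(x_true.size)

beta, eps = 0.5, 0.1
c = np.array([1.0, -1.0])                      # finite-difference filter
c_pad = np.concatenate([c, np.zeros(y.size - c.size)])
Cf = np.fft.fft(c_pad)                         # circular convolution via DFT

def grad(x):
    """Gradient of 0.5||x - y||^2 + beta * 1'phi.(c * x; eps)."""
    z = np.real(np.fft.ifft(Cf * np.fft.fft(x)))                # c * x
    dphi = z / np.sqrt(z ** 2 + eps ** 2)                       # phi'.(z)
    reg = np.real(np.fft.ifft(np.conj(Cf) * np.fft.fft(dphi)))  # adjoint: C' phi'.(z)
    return (x - y) + beta * reg

x = y.copy()
for _ in range(500):
    x -= 0.05 * grad(x)   # step < 2/L, with L <= 1 + beta*||C||^2/eps = 21

# the regularized estimate is closer to the truth than the noisy input
assert np.sum((x - x_true) ** 2) < np.sum((y - x_true) ** 2)
```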
\section{Sparsity-Based Regularizers}
The regularization, or prior assumption,
term in \eqref{eq: general data-fit plus reg}
often involves assumptions about sparsity
\cite{eldar:12:cs}.
The basic idea behind sparsity-based regularization
is that the true signal is sparse in some representation,
while the noise or corruption is not.
Thus, one can use the representation to separate the noise and signal,
and then keep only the sparse signal component.
The design problem therefore becomes determining
what representation best sparsifies the signal.
There are two main types of sparsity-based regularizers
corresponding to two representational assumptions: synthesis and analysis
\cite{elad:07:avs,ravishankar:20:irf};
\fref{fig: analysis vs synthesis} depicts both.
While both are popular,
this review concentrates on analysis regularizers,
which are more widely represented in the bilevel image reconstruction literature.
This section briefly compares the analysis and synthesis formulations.
Here we simplify the formulas by considering $\mA=\I$;
the discussion generalizes to reconstruction by including \mA.
For more thorough discussions of analysis and synthesis regularizers,
see
\citep{elad:07:avs,nam:2013:cosparseanalysismodel,ravishankar:20:irf}.
\begin{figure}[h]
\centering
\ifloadepsorpdf
\includegraphics[]{avss}
\else
\input{Figures/tikz,avss}
\fi
\iffigsatend \figuretag{2.2} \fi
\caption{Depiction of synthesis and analysis sparsity.
Under the synthesis model of sparsity
(left),
\vx is a linear combination of a few
dictionary atoms.
The dictionary, \mD, is typically wide,
with more atoms (columns) than elements in \vx.
Under the analysis model of sparsity
(right),
\vx is orthogonal to many filters.
The filter matrix, \mOmega, is typically tall,
with more filters (rows) than elements in \vx.
}
\label{fig: analysis vs synthesis}
\end{figure}
\subsection{Synthesis Regularizers}
Synthesis regularizers model a signal
being composed of building blocks,
or \dquotes{atoms.}
Small subsets of the atoms
span a low dimensional subspace
and the sparsity assumption is that the signal requires using only a few of the atoms.
More formally, the synthesis model is
$\vy = \vx + \vn$,
where the signal
$\vx = \mD \vz$
and \vz is a sparse vector.
The columns of
$\mD \in \F^{\sdim \by K}$
contain the $K$ dictionary atoms
and form a low dimensional subspace for the signal.
If \mD is a wide matrix ($\sdim < K$),
the dictionary is over-complete
and it is easier to represent a wide range of signals
with a given number of dictionary atoms.
The dictionary is complete when \mD is square
(and full rank)
and under-complete if \mD is tall
(an uncommon choice).
Assuming one knows or has already learned \mD,
one can use the sparsity synthesis assumption to denoise a noisy signal \vy
by optimizing
\begin{align}
\xhat = \mD \cdot \parenr{ \underbrace{\argmin_\vz \onehalf \normsq{\vy - \mD \vz} + \sparsefcn(\vz)}_{\hat{\vz}} }
. \label{eq: strict synthesis}
\end{align}
The estimation procedure involves finding the sparse codes,
$\hat{\vz}$,
from which the image is synthesized
via
$\xhat = \mD \hat{\vz}$.
Common sparsity-inducing functions, \sparsefcn,
are the 0-norm and 1-norm.
The 2-norm is occasionally used as \sparsefcn,
but it does not yield true sparse codes and
it over-penalizes large values
\cite{elad:10}.
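To make the strict synthesis model concrete, the following is a minimal numerical sketch (our illustration, not from the works cited above) of solving \eqref{eq: strict synthesis} with the 1-norm as \sparsefcn, using the iterative soft-thresholding algorithm (ISTA); the dictionary and all parameter values are illustrative assumptions.

```python
import numpy as np

def ista_synthesis_denoise(y, D, lam, n_iter=2000):
    """Sketch of strict-synthesis denoising:
    z_hat = argmin_z 0.5*||y - D z||^2 + lam*||z||_1  (via ISTA),
    then the denoised signal is x_hat = D @ z_hat."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the quadratic term
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - D.T @ (D @ z - y) / L    # gradient step on 0.5*||y - D z||^2
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return D @ z, z

# Toy over-complete dictionary and a 2-sparse ground-truth code.
np.random.seed(0)
D = np.random.randn(20, 50)
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
z0 = np.zeros(50); z0[[3, 17]] = [1.5, -2.0]
y = D @ z0 + 0.01 * np.random.randn(20)  # noisy measurement
xhat, zhat = ista_synthesis_denoise(y, D, lam=0.05)
```

With an incoherent dictionary and a sufficiently sparse code, the recovered \xhat is close to the clean signal $\mD \vz_0$; the 1-norm introduces a small shrinkage bias proportional to `lam`.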
As written in \eqref{eq: strict synthesis},
the synthesis formulation constrains the signal, \vx,
to be in the range of \mD.
This \dquotes{strict synthesis} model
can be undesirable in some applications,
\eg, when one is not confident in the quality of the dictionary.
An alternative formulation is
\begin{align}
\xhat &= \argmin_\vx \onehalf \normsq{\vy - \vx} + \beta \regfcn(\vx),
\nonumber \\
\regfcn(\vx) &= \min_\vz \onehalf \normsq{\vx - \mD \vz} + \sparsefcn(\vz),
\label{eq:R-like-lasso}
\end{align}
which no longer constrains \vx to be exactly in the range of \mD.
One can also learn \mD while solving
\eqref{eq:R-like-lasso}
\cite{peyre:11:aro}. %
Both synthesis denoising forms have equivalent sparsity-constrained versions;
one can replace \sparsefcn with a characteristic function
that is 0 within some desired set and infinite outside it,
\eg,
\begin{align}
\sparsefcn(\vz) =
\begin{cases}
0 & \text{ if } \norm{\vz}_0 \leq \alpha \\
\infty & \text{ else, }
\label{eq: 0-norm characteristic function}
\end{cases}
\end{align}
for some sparsity constraint given by the hyperparameter
$\alpha \in \mathbb{N}$.
See \citep{candes:2006:robustuncertaintyprinciples,elad:10}
for discussions of when the synthesis model
can guarantee accurate recovery of signals.
The minimization problem in
\eqref{eq:R-like-lasso}
is called sparse coding
and is closely related to the LASSO problem
\citep{tibshirani:1996:regressionshrinkageselection}.
One can think of the entire dictionary \mD
as a hyperparameter
that can be learned
with a bilevel method
\cite{zhou:17:bmb}. %
\subsection{Analysis Regularizers}
Analysis regularizers model a signal
as being sparsified
when mapped into another vector space
by a linear transformation,
often represented by a set of filters.
More formally,
an analysis model
assumes the signal satisfies
$\mOmega \vx = \vz$
for a sparse coefficient vector \vz.
Often the rows of the matrix
$\mOmega \in \F^{K \by \sdim}$
are thought of as filters %
and the rows of \mOmega
where
$[\mOmega \vx]_k = 0$
span a subspace
to which \vx is orthogonal.
The analysis operator is called over-complete
if \mOmega is tall ($\sdim < K$),
complete if \mOmega is square (and full rank),
and under-complete if \mOmega is wide.
A particularly common analysis regularizer is based
on a discretized version of total variation (TV)
\cite{rudin:92:ntv},
and uses finite difference filters
(or, more generally, filters that approximate higher-order derivatives).
The finite difference filters sparsify piece-wise constant (flat)
regions in the signal,
leaving only the edges, which are approximately sparse in natural images.
Other common analysis regularizers include the
discrete Fourier transform (DFT), %
curvelets, and wavelet transforms
\citep{candes:2011:compressedsensingcoherent}.
The literature is less consistent in analysis regularizer vocabulary,
and \mOmega has been called
an analysis dictionary,
an analysis operator,
a filter matrix,
and
a cosparse operator.
The term \dquotes{cosparse}
comes from the sparsity holding in the codomain of the transformation
\mbox{$T\{\vx\} = \mOmega \vx$}.
The cosparsity of \vx with respect to \mOmega is
the number of zeros in $\mOmega \vx$ or
$K - \norm{\mOmega \vx}_0$ \citep{nam:2013:cosparseanalysismodel}.
Correspondingly, \dquotes{cosupport} describes the indices $k$ of the rows where
$[\mOmega \vx]_k = 0$.
We find the phrase \dquotes{analysis operator}
intuitive for general \mOmega's
and \dquotes{filter matrix} more descriptive
when referring to the specific (common) case when
the rows of \mOmega are dictated by a set of convolutional filters.
Assuming one knows, or has already learned, \mOmega,
one can use the analysis sparsity assumption
to denoise a noisy signal, \vy, by optimizing
\begin{equation}
\xhat = \argmin_\vx \onehalf \normsq{\vy - \vx} + \beta \sparsefcn(\mOmega \vx)
. \label{eq:analysis-with-one-term}
\end{equation}
An alternative version %
is
\begin{align}
\xhat = \argmin_\vx \onehalf \normsq{\vy - \vx} + \beta \regfcn(\vx) \label{eq: analysis opt function} \\
\regfcn(\vx) = \min_\vz \onehalf \normsq{\mOmega \vx - \vz} + \sparsefcn(\vz). \nonumber
\end{align}
As in the synthesis case,
both analysis formulations have equivalent sparsity constrained forms
using a \sparsefcn as in \eqref{eq: 0-norm characteristic function}.
See \citep{candes:2011:compressedsensingcoherent}
for an %
error bound on the estimated signal \xhat when using a 1-norm as the sparsity function.
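As a small worked example (ours, with illustrative parameter choices), one can minimize \eqref{eq:analysis-with-one-term} by gradient descent when \sparsefcn is a smooth corner-rounded 1-norm, $\sparsefcn(t) = \sum_k \sqrt{t_k^2 + \epsilon}$; the first-difference operator below is the simplest filter-matrix instance.

```python
import numpy as np

def finite_diff_matrix(n):
    """1D first-difference analysis operator: (n-1) rows, n columns."""
    return np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

def analysis_denoise(y, Omega, beta, eps=0.1, n_iter=2000):
    """Gradient descent on 0.5*||y - x||^2 + beta * sum_k sqrt((Omega x)_k^2 + eps).
    The step size comes from a (loose) Lipschitz bound on the gradient."""
    L = 1.0 + beta * np.linalg.norm(Omega, 2) ** 2 / np.sqrt(eps)
    x = y.astype(float).copy()
    for _ in range(n_iter):
        t = Omega @ x
        grad = (x - y) + beta * Omega.T @ (t / np.sqrt(t ** 2 + eps))
        x -= grad / L
    return x

# Denoise a noisy piece-wise constant signal (TV-like regularization).
np.random.seed(1)
x_true = np.concatenate([np.zeros(20), np.ones(20)])
y = x_true + 0.1 * np.random.randn(40)
xhat = analysis_denoise(y, finite_diff_matrix(40), beta=0.5)
```

The first differences of the piece-wise constant signal are sparse, so the penalty suppresses the noise in the flat regions while largely preserving the single jump.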
\subsection{Comparing Analysis and Synthesis Approaches}
The analysis and synthesis models are
equivalent when the dictionary and analysis operator are invertible,
with $\mD = \mOmega^{-1}$ \citep{elad:07:avs}.
Furthermore,
in the denoising scenario where the system matrix \mA is identity,
the two are almost equivalent in the under-complete case
\cite{elad:07:avs},
with the lack of full equivalence stemming from the analysis form
not constraining \vx to be in the range space of \mD.
However, the two models diverge in the more common over-complete case.
Whether analysis-based or synthesis-based regularizers are preferable
is an open question,
and the answer likely depends on the application
and the relative importance of
reconstruction accuracy and speed
\citep{elad:07:avs}.
Synthesis regularization is perhaps easier to interpret
because of its constructive nature,
and
the synthesis approach
used to be \dquotes{widely considered to provide superior results} \citep[950]{elad:07:avs}.
However, \citep{elad:07:avs} goes on to show that an analysis regularizer
produced more accurate reconstructed images
in experiments on real images.
Later analysis-based results also show competitive, if not superior,
quality results when compared to similar synthesis models
\citep{hawe:13:aol, ravishankar:2013:learningsparsifyingtransforms}.
See \cite{fessler:20:omf}
for a survey of optimization methods for MRI reconstruction
and a comparison of the computational challenges
for cost functions with synthesis and analysis-based regularizers.
The analysis and synthesis regularizers in
\eqref{eq: strict synthesis} and \eqref{eq: analysis opt function}
quickly yield infeasibly large operators as the signal size increases.
In practice,
both approaches are usually
implemented with patch-based formulations.
For the synthesis approach,
the patches typically overlap and there is an averaging effect. %
Analysis regularizers that have rows corresponding to filters,
called the convolutional analysis model,
extend very naturally to a global image regularizer.
For example,
in the lower-level cost function
of our running filter learning example \eqref{eq: bilevel for analysis filters},
we can define an analysis regularizer matrix as follows:
\[
\mOmega =
\begin{bmatrix}
\mC_1 \\
\vdots \\
\mC_K
\end{bmatrix}
\in \F^{K \sdim \by \sdim }
.\]
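As a concrete numerical sketch of this stacking (our illustration, assuming 1D circular convolution with periodic boundary conditions), each $\mC_k$ is a circulant matrix built from a filter $\vh_k$:

```python
import numpy as np

def circulant(c):
    """Circulant matrix with first column c, so that circulant(c) @ x
    equals the circular convolution of c and x."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def filter_matrix(filters, n):
    """Stack one circulant convolution matrix C_k per filter h_k,
    giving a (K*n)-by-n analysis operator as in the displayed equation."""
    blocks = []
    for h in filters:
        c = np.zeros(n)
        c[: len(h)] = h
        blocks.append(circulant(c))
    return np.vstack(blocks)

# Two small filters: first difference and a centered difference.
filters = [np.array([1.0, -1.0]), np.array([1.0, 0.0, -1.0])]
Omega = filter_matrix(filters, 8)   # shape (2*8, 8)
```

Applying `Omega` to a signal computes all $K$ filter responses at once, which is why the convolutional analysis model scales to full images without patch extraction.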
See
\cite{chen:2014:insightsanalysisoperator} and \cite{pfister:2019:learningfilterbank}
for discussion of the connections
between global models
and patch-based models for analysis regularizers.
This survey focuses on bilevel learning
of analysis regularizers.
\section{Brief History of Analysis Regularizer Learning \label{sec: filter learning history}}
In 2003,
\citet{haber:2003:learningregularizationfunctionals}
proposed using bilevel methods
to learn part of the regularizer
in inverse problems. %
The authors motivate the use of bilevel methods
through their task-based nature,
noting that
\dquotes{the choice of good regularization operators strongly depends on the forward problem.} %
They consider learning tuning parameters,
space-varying weights,
and regularization operators
(comparable to defining \sparsefcn),
all for regularizers based on penalizing
the energy in the derivatives of the reconstructed image. %
Their framework is general enough to handle learning filters.
Ref. \citep{haber:2003:learningregularizationfunctionals}
was published a few years earlier than the other bilevel methods
we consider in this review
and was not cited in most other early works;
\citep{afkham:2021:learningregularizationparameters}
calls it a
\dquotes{groundbreaking, but often overlooked publication.}
In 2005, \citet{roth:2005:fieldsexpertsframework}
proposed the Field of Experts (FoE) model to learn filters.
Many papers on bilevel methods for filter learning
cite FoE as a starting or comparison point.
The FoE model is a translation-invariant analysis operator model,
built on convolutional filters.
It is motivated by the local operators and
presented as a Markov random field model,
with the order of the field determined by the filter size.
Under the FoE model, the negative log%
\footnote{
By taking the log of the probability model %
in \citep{roth:2005:fieldsexpertsframework},
the connection between the FoE and
the regularization term in the lower-level
of the running filter learning example
\eqref{eq: bilevel for analysis filters} is more evident.
}%
of the probability of a full image, \vx, is proportional to
\begin{align}
\sum_k \beta_k \; \sparsefcn.(\hk \conv \vx) \text{ where }
\sparsefcn(z) = \log\left( 1 + \onehalf z^2 \right). \label{eq: FoE}
\end{align}
This (non-convex) choice of sparsity function $\phi$
stems from the Student-t distribution.
Ref. \citep{roth:2005:fieldsexpertsframework}
learns the filters and filter-dependent tuning parameters
such that the model distribution
is as close as possible (defined using Kullback-Leibler divergence)
to the training data distribution.
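As a small illustration (ours, not from \citep{roth:2005:fieldsexpertsframework}), evaluating the FoE penalty \eqref{eq: FoE} for a 1D signal amounts to filtering and summing the Student-t penalty over all responses:

```python
import numpy as np

def foe_penalty(x, filters, betas):
    """Evaluate the FoE negative log-prior:
    sum_k beta_k * sum_i phi((h_k * x)_i), with phi(z) = log(1 + z^2 / 2)."""
    total = 0.0
    for h, beta in zip(filters, betas):
        z = np.convolve(x, h, mode="valid")   # filter responses h_k * x
        total += beta * np.sum(np.log1p(0.5 * z ** 2))
    return total
```

A constant signal has zero response to a difference filter, so its penalty is zero; noisy or textured signals are penalized more, as the prior intends.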
In 2007,
\citet{tappen:2007:learninggaussianconditional}
proposed a different model based on convolutional filters:
the Gaussian Conditional Random Field (GCRF) model.
Rather than using a sparsity promoting regularizer,
the GCRF uses a quadratic function for \sparsefcn.
The authors introduce space-varying weights, \mW,
so that the quadratic model does not overly penalize sharp features in the image.
The general idea behind $\mW$ is to use the given (noisy) image
to guess where edges occur,
and correspondingly penalize those areas less to avoid blurring edges.
The likelihood for GCRF model is thus
(to within a proportionality constant and monotonic function transformations):
\begin{align*}
\sum_k \normsq{\hk \conv \vx - e_k\{\vx\}}_{\mW_k},
\end{align*}
where the term
$e_k\{\vx\}$
captures the estimated value of the filtered image.
For example,
\citep{tappen:2007:learninggaussianconditional}
used one averaging filter and
multiple differencing filters for the \hk's.
The corresponding estimated values are
\vx for the averaging filter %
and
zero for the differencing filters.
The filters, \hk, are pre-determined in the GCRF model;
the learned element is how to form
the weights as a function of image features.
Specifically, each $\mW_k$ is formed as a
linear combination of the
(absolute) responses to a set of edge-detecting filters,
with the linear combination coefficients learned from training data.
Rather than maximizing the likelihood of training data
as in \citep{roth:2005:fieldsexpertsframework},
\citep{tappen:2007:learninggaussianconditional}
learns these coefficients to minimize
the (corner-rounded) $l_1$ norm of the error of the predicted image,
which is a form of bilevel learning
even though not described with that terminology.
Apparently,
one of the first papers
to explicitly propose using bilevel methods to learn filters
appeared in 2009:
\textcite{samuel:2009:learningoptimizedmap}
considered a bilevel formulation
where the upper-level loss was mean square error on training data
and the lower-level cost was a denoising task based on filter sparsity
equivalent to \eqref{eq: bilevel for analysis filters}.
The method builds on the FoE model,
using the same \sparsefcn as in \citep{roth:2005:fieldsexpertsframework},
but now learns the filters using a bilevel formulation
rather than by maximizing a likelihood.
In 2011,
\textcite{peyre:2011:learninganalysissparsity}
proposed a similar bilevel method to learn analysis regularizers.
The authors generalized the denoising task to use
an analysis operator matrix and a wider
class of sparsifying functions.
Their results concentrate on the convolutional filter case
with a corner-rounded 1-norm for \sparsefcn.
Both \citep{samuel:2009:learningoptimizedmap} and \citep{peyre:2011:learninganalysissparsity}
focus on introducing the bilevel method for analysis regularizer learning,
with denoising or inpainting as illustrations.
\cref{chap: ift and unrolled} further discusses the methodology of both papers.
Many of the bilevel-based papers in this review build on one or both of these efforts.
The rest of the review summarizes other bilevel-based papers;
here, we continue to highlight some of the papers in the
non-bilevel thread of the literature for context and comparison.
\citet{ophir:2011:sequentialminimaleigenvalues}
proposed another approach to learning an analysis operator.
The method learns the operator
one row at a time by searching for vectors orthogonal to the training signals.
Algorithm parameters were chosen empirically
without an upper-level loss function as a guide.
Between 2011 \citep{yaghoobi:2011:analysisoperatorlearning}
and 2013 \citep{yaghoobi:2013:constrainedovercompleteanalysis},
\setmaxcitenames{10}
\citeauthor{yaghoobi:2011:analysisoperatorlearning}
\setmaxcitenames{3} %
were among the first to formally present
analysis operator learning as an optimization problem.
Their %
conference paper \citep{yaghoobi:2011:analysisoperatorlearning}
considered noiseless training data
and proposed learning an analysis operator as
\begin{equation}
\argmin_\mOmega \normr{\mOmega \mXtrue}_1 \text{ s.t. } \mOmega \in \S \label{eq: noiseless AOL yaghoobi}
\end{equation}
for some constraint set \S.
Each column of \mXtrue contains a training sample.
The authors
discussed various options for \S,
including row-norm, full-rank, and tight-frame constraints.
Without any constraint on \mOmega,
the trivial solution to \eqref{eq: noiseless AOL yaghoobi}
would be to learn the zero matrix,
which is not informative for any problem such as image denoising.
\sref{sec: filter constraints} discusses in more detail
the need for constraints
and the various options proposed for filter learning.
Ref. \citep{yaghoobi:2013:constrainedovercompleteanalysis}
extends \eqref{eq: noiseless AOL yaghoobi}
to the noisy case where one does not have access to \mXtrue.
The proposed cost function is
\begin{equation}
\argmin_{\mOmega, \, \mX} \norm{\mOmega \mX}_1 + \frac{\beta}{2} \normsq{\mY - \mX}
\text{ s.t. } \mOmega \in \S, \label{eq: AOL yaghoobi}
\end{equation}
where each column of \mY contains a noisy data vector.
Ref. \citep{yaghoobi:2013:constrainedovercompleteanalysis} minimized \eqref{eq: AOL yaghoobi}
by alternately updating \mX,
using the alternating direction method of multipliers (ADMM),
and \mOmega,
using a projected subgradient method
for various constraint sets \S,
especially Parseval tight frames.
Also in 2013,
\citet{ravishankar:2013:learningsparsifyingtransforms}
made a distinction between the analysis model,
where one models $\vy = \vx + \vn$ with $\vz = \mOmega \vx$ being sparse,
and the transform model,
in which $\mOmega \vy = \vz + \vn$ with \vz sparse.
The analysis version models the measurement
as being a cosparse signal plus noise;
the transform version models the measurement
as being approximately cosparse.
Another perspective on the distinction is that,
if there is no noise,
the analysis model constrains \vy to be in the range space of \mOmega,
while there is no such constraint on the transform model.
The corresponding transform learning problem is
\begin{align}
\argmin_{\mOmega} \min_\mZ \normsq{\mOmega \mY - \mZ}_2 + &\regfcn(\mOmega)
\quad \text{ s.t. } \norm{\mZ_i}_0 \leq \alpha \;\forall i, \label{eq: transform learning}
\end{align}
where $i$ indexes the columns of \mZ.
Ref. \citep{ravishankar:2013:learningsparsifyingtransforms}
considers only square matrices \mOmega.
The regularizer, \regfcn, promotes diversity in the rows of \mOmega to avoid trivial solutions,
similar to the set constraint in \eqref{eq: AOL yaghoobi}.
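The inner minimization over \mZ in \eqref{eq: transform learning} has a simple closed form: for each column, keep the $\alpha$ largest-magnitude entries of $\mOmega \vy_i$ and zero the rest. A sketch (our own, with illustrative names; assumes $1 \le \alpha \le K$):

```python
import numpy as np

def hard_keep(v, alpha):
    """Project v onto {z : ||z||_0 <= alpha}: keep the alpha
    largest-magnitude entries, which solves min_z ||v - z||_2^2."""
    z = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-alpha:]   # indices of alpha largest magnitudes
    z[keep] = v[keep]
    return z

def transform_sparse_code(Omega, Y, alpha):
    """Column-wise sparse coding step of transform learning:
    Z_i = hard_keep(Omega @ y_i, alpha) for each training sample y_i."""
    return np.column_stack(
        [hard_keep(Omega @ Y[:, i], alpha) for i in range(Y.shape[1])]
    )
```

This exact closed-form update is a computational advantage of the transform model over the analysis model, whose corresponding subproblem is generally NP-hard.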
A more recent development is directly modeling %
the convolutional structure during the learning process.
In 2020,
\citep{chun:2020:convolutionalanalysisoperator} proposed Convolutional Analysis Operator Learning (CAOL)
to learn convolutional filters without patches.
The CAOL cost function is
\begin{align}
\argmin_{[\vc_1, \ldots, \vc_K]} \sum_{k=1}^K \min_\vz
\onehalf \normsq{\hk \conv \vx - \vz}_2 + \beta \norm{\vz}_0
\text{ s.t. } [\vc_1 \ldots \vc_K] \in \S \label{eq: CAOL}.
\end{align}
Unlike the previous cost functions,
which typically require patches,
CAOL can easily handle full-sized training images \vx
due to the nature of the convolutional operator.
While model-based methods were being developed
in the signal processing literature,
convolutional neural network (CNN) models
were being advanced and trained
in the machine learning and computer vision literature
\cite{haykin:96:nne,hwang:97:tpp,lucas:18:udn}.
The filters used in CNN models like U-Nets
\cite{ronneberger:15:unc}
can be thought of as having analysis roles
in the earlier layers,
and synthesis roles in the final layers
\cite{ye:18:dcf}.
See also
\cite{wen:20:tlf}
for further connections between analysis and transform models
within CNN models.
CNN training is usually supervised,
and the supervised approach of bilevel learning of filters
strengthens the relationships
between the two approaches.
A key distinction is that CNN models are generally feed-forward computations,
whereas bilevel methods of the form
\eqref{eq: generic bilevel lower-level}
have a cost function formulation.
See \sref{sec: connections}
for further discussion of the parallels
between CNNs and bilevel methods.
\section{Summary}
This background section focused on the lower-level problem:
image reconstruction with a sparsity-based regularizer.
After defining the problem and the need for regularization,
\sref{sec: filter learning history}
reviewed the history of analysis regularizer learning
and included many examples of methods
to learn hyperparameters.
Bilevel methods are just one (task-based) way to learn
such hyperparameters.
\sref{sec: filter constraints} further expands
on this point,
but we can already see benefits of the task-based nature of bilevel methods.
Without the bilevel approach,
filters are learned so that they best sparsify training data.
These sparsifying filters can then be used in a prior for image reconstruction tasks.
However, they are learned to \textit{sparsify},
not necessarily to best \textit{reconstruct}.
In contrast, the bilevel approach aims to learn filters that best reconstruct images
(or whatever other task is desired), %
even if those filters are not the ones that best sparsify.
Although this may seem like a subtle distinction,
\citep{chambolle:2021:learningconsistentdiscretizations}
shows that different (synthesis) filters work better for
image denoising versus image inpainting.
Having provided some background on the lower-level cost function
and motivated bilevel methods,
this review now turns to defining the upper-level loss function
and surveying methods of hyperparameter optimization.
\chapter{Background: Hyperparameter Optimization}
\label{chap: hpo}
Most inverse problems involve at least one hyperparameter.
For example,
the general reconstruction cost function
\eref{eq: general data-fit plus reg}
requires choosing the tuning parameter $\beta$
that trades-off the influence of the data-fit and regularization terms.
The field of hyperparameter optimization is quite large
and can include categorical hyperparameters, such as which optimizer to use;
conditional hyperparameters, where certain hyperparameters are relevant
only if others take on certain values;
and integer or real-valued hyperparameters
\citep{feurer:2019:chapterhyperparameteroptimization}.
Here, we focus on learning real-valued, continuous hyperparameters.
\begin{figure}[h]
\centering
\ifloadepsorpdf
\includegraphics[]{betaisimportant}
\else
\input{tikz,betaisimportant}
\fi
\iffigsatend \figuretag{3.1} \fi
\caption{
Example reconstructed simulated MRI images that
demonstrate the importance of tuning parameters.
(a) The original image, $\xtrue \in \xmath{R}^\sdim$, is a Shepp-Logan phantom
\cite{shepp:74:tfr}
and $N$ is the number of pixels. %
(b) A simplistic reconstruction
$\frac{1}{N} \mA'\vy$
of the noisy, undersampled data, \vy.
This image is used as initialization, $\vx^{(0)}$,
for the following reconstructions.
(c-e) Reconstructed images, found by optimizing
$\argmin_\vx \onehalf \normrsq{\mA \vx - \vy}_2 + 10^\beta N \sparsefcn(\mC \vx)$,
where \mC is an operator that takes vertical and horizontal finite differences.
The reconstructed images correspond to
(c) $\beta=-6$, resulting in an image that contains ringing artifacts,
(d) $\beta=-3$, resulting in a visually appealing \xhat,
and
(e) $\beta=1$, resulting in a blurred image.
The demonstration code
and more details about the reconstruction set-up
are available on GitHub
\citep{fessler:2020:mirtdemo01recon}.
}
\label{fig: rmse vs beta}
\end{figure}
A hyperparameter's value can greatly influence the properties of the minimizer
and a tuned hyperparameter typically improves over a default setting
\citep{feurer:2019:chapterhyperparameteroptimization}.
\fref{fig: rmse vs beta} illustrates
how changing a tuning parameter can dramatically impact
the visual quality of the reconstructed image.
If $\beta$ is too low,
not enough weight is on the regularization term,
and the minimizer is likely to be corrupted by noise in the measurements.
If $\beta$ is too high, the regularization term dominates,
and the minimizer will not align with the measurements.
Generalizing to an arbitrary learning problem
that could have multiple hyperparameters,
the goal of hyperparameter optimization
is to find the ``best'' set of hyperparameters,
$\hat{\params}$, to meet a goal described by a loss function \lfcn.
Specifically, we wish to solve
\begin{equation}
\hat{\params} = \argmin_{\params \in \Gamma}
\E{ \lfcn(\params) }, \label{eq: general hyperparameter opt}
\end{equation}
where $\Gamma$
is the set of all possible hyperparameters
and the expectation is taken with respect to the distribution of the input data.
If evaluating \lfcn uses the output of another optimization problem,
\eg, \xhat, then \eqref{eq: general hyperparameter opt} is a bilevel problem
as defined in \eqref{eq: generic bilevel upper-level}.
There are two key tasks in hyperparameter optimization.
The first is to quantify
how good a hyperparameter is;
this step is equivalent to defining \lfcn in \eqref{eq: general hyperparameter opt}.
The second step is finding a good hyperparameter,
which is equivalent to designing an optimization algorithm to minimize
\eqref{eq: general hyperparameter opt}.
The next two sections address each of these tasks in turn.
\section{Image Quality Metrics \label{sec: loss function design}}
This section concentrates on the part of the upper-level loss function
that compares the reconstructed image, \xhatp,
to the true image, \xtrue.
As mentioned in \cref{chap: intro},
bilevel methods rarely require additional regularization
for \params,
but it is simple to add a regularization term to
any of the loss functions if useful for a specific application.
To discuss only the portion of the loss function
that measures image quality,
we use the notation
$\lfcnargs = \l(\xhat, \, \xtrue)$.
Picking a loss function is part of the engineering design process.
No single loss function is likely to work in all scenarios;
users must decide on the loss function that best fits their system, data, and goals.
Consequently, there are a wide variety of loss functions proposed in the literature
and some approaches combine multiple loss functions \citep{you:2018:structuresensitivemultiscaledeep,hammernik:2020:machinelearningimage}.
One important decision criterion when selecting a loss function
is the end purpose of the image.
Much of the image quality assessment (IQA) literature
focuses on metrics for images of natural scenes
and is often motivated by applications where human enjoyment is the end-goal
\citep{wang:04:iqa,wang:2011:reducednoreferenceimage}.
In contrast, in the medical image reconstruction field, image quality is not the end-goal,
but rather a means to achieving a correct diagnosis.
Thus, the perceptual quality is less important than the information content.
There are two major classes of image quality metrics
in the IQA literature, called full reference and no reference IQA%
\footnote{There are also reduced-reference image quality metrics,
but we will not consider those here.}. %
The principles are somewhat analogous
to supervised and unsupervised approaches
in the machine learning literature.
This section discusses some of the most common
full reference and no reference loss functions;
see \citep{zhang:2012:comprehensiveevaluationfull}
for a comparison of
11 full-reference IQA metrics
and
\citep{zhang:2020:blindimagequality}
for additional no-reference IQA metrics.
Perhaps surprisingly,
the bilevel filter learning literature
contains few examples of loss functions other than MSE or slight variants
(see \sref{sec: prev results loss function}).
While this is likely at least partially due to the computational requirements of bilevel methods
(see \cref{chap: ift and unrolled} and \ref{chap: bilevel methods}),
exploring additional loss functions is an interesting future direction for bilevel research.
\subsection{Full Reference IQA \label{sec: hpo supervised}}
Full reference IQA metrics
assume that one has a noiseless image, \xtrue, for comparison.
Some of the simplest (and most common) full reference loss functions are:
\begin{itemize}[noitemsep,topsep=0pt]
\item Mean square error (or $\ell_2$ error):
$\l_{\mathrm{MSE}}(\xhat,\xtrue) = \frac{1}{\sdim} \normsq{\xhat-\xtrue}_2$
\item Mean absolute error (or $\ell_1$ error):
$\l_{\mathrm{MAE}}(\xhat,\xtrue) = \frac{1}{\sdim} \norm{\xhat-\xtrue}_1$
\item Signal to Noise Ratio (SNR, commonly expressed in dB): \\
$\l_{\mathrm{SNR}}(\xhat,\xtrue) = 10 \log_{10}{\frac{\normsq{\xtrue}_2}{\normsq{\xhat-\xtrue}_2}}$
\item Peak SNR (\ac{PSNR}, in dB): $\l_{\mathrm{PSNR}}(\xhat,\xtrue)
= 10 \log_{10}{\frac{\sdim \norm{\xtrue}_{\infty}^2}{\normsq{\xhat-\xtrue}_2}}$.
\end{itemize}
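These four losses translate directly into code; a minimal sketch (ours), using base-10 logarithms for the dB quantities and the squared peak value in PSNR:

```python
import numpy as np

def mse(xhat, xtrue):
    """Mean square error."""
    return np.mean(np.abs(xhat - xtrue) ** 2)

def mae(xhat, xtrue):
    """Mean absolute error."""
    return np.mean(np.abs(xhat - xtrue))

def snr_db(xhat, xtrue):
    """Signal-to-noise ratio in dB."""
    return 10 * np.log10(np.sum(np.abs(xtrue) ** 2)
                         / np.sum(np.abs(xhat - xtrue) ** 2))

def psnr_db(xhat, xtrue):
    """Peak SNR in dB, using the squared peak value of the true image."""
    peak2 = np.max(np.abs(xtrue)) ** 2
    return 10 * np.log10(xtrue.size * peak2
                         / np.sum(np.abs(xhat - xtrue) ** 2))
```

All four operate point-wise on the error image and are differentiable almost everywhere, which is one reason they dominate as upper-level loss functions.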
The Euclidean norm is %
also frequently used as the data-fit term for reconstruction. %
\ac{MSE}
(and the related metrics SNR and PSNR)
are common in the signal processing field;
they are intuitive and easy to use
because they are differentiable and operate point-wise.
However, these measures do not align well with human perceptions of image quality \cite{mason:2020:comparisonobjectiveimage, zhang:2012:comprehensiveevaluationfull}.
For example,
scaling an image by 2 leaves the visual quality unchanged
but yields a normalized error of 100\%.
\fref{fig: rmse for various distortions}
shows a clean image
and five images with different degradations.
All five degraded images have almost equivalent \ac{MSE}s,
but humans judge their qualities as very different.
\begin{figure}
\centering
\ifloadepsorpdf
\includegraphics[]{distortions}
\else
\input{tikz,distortions}
\fi
\iffigsatend \figuretag{3.2} \fi
\caption{
Example distortions that yield images with identical
normalized MSE values:
$\norm{\xtrue - \vx}/\norm{\xtrue} = 0.17$.
(a) The original image, \xtrue, is a Shepp-Logan phantom \cite{shepp:74:tfr}.
The remaining images are displayed with the same colormap
and have the following distortions:
(b) blurred with an averaging filter,
(c) additive, white Gaussian noise,
(d) salt and pepper noise,
and
(e) a constant value added to every pixel.
%
}
\label{fig: rmse for various distortions}
\end{figure}
Tuning parameters using MSE
tends to lead to images that are overly-smoothed,
sacrificing high frequency information
\cite{gholizadehansari:20:dlf,seif:18:ebl}.
High frequency details are particularly important for perceptual quality
as they correspond to edges in images.
Therefore, some authors use the MSE on edge-enhanced versions of images
to discourage solutions that blur edges.
For example, \citep{ravishankar:2011:mrimagereconstruction}
used a \dquotes{high frequency error norm} metric
consisting of the MSE of the difference of \xhat and \xtrue
after applying a Laplacian of Gaussian (LoG) filter.
Another common full-reference IQA metric is the Structural SIMilarity (\ac{SSIM}) index
\citep{wang:2004:imagequalityassessment},
which attempts to address the issues with \ac{MSE} discussed above.
SSIM is defined in terms of the local
luminance, contrast, and structure
in images.
\comment{
They first define the common image statistics:
$\mu_x$ and $\mu_y$ as the mean values of \vx and \vy respectively,
$\sigma_x$ and $\sigma_y$ as the standard deviations, and
$\sigma_{xy}$ as the correlation coefficient between \vx and \vy.
The \ac{SSIM} metric \citep{wang:2004:imagequalityassessment} is then:
\begin{align*}
\text{SSIM}(\vx,\vy) &= [l(\vx,\vy)]^\alpha [c(\vx,\vy)]^\beta [s(\vx,\vy)]^\gamma, \text{ where } \\
l(\vx,\vy) &= \frac{2\mu_x \mu_y + C_1}{\mu_x^2+\mu_y^2+C_1} \text{ considers the luminance,} \\
c(\vx,\vy) &= \frac{2\sigma_x\sigma_y+C_2}{\sigma_x^2+\sigma_y^2+C_2} \text{ considers the contrast, and } \\
s(\vx,\vy) &= \frac{\sigma_{xy}+C_3}{\sigma_x \sigma_y + C_3} \text{ considers the structure.}
\end{align*}
Here, $C_1, C_2,$ and $C_3$ are small coefficients for numerical stability
and $\alpha, \beta,$ and $\gamma$ are tuning parameters.
}
A multiscale extension of \ac{SSIM}, called MS-SSIM,
considers information at different resolutions
\cite{wang:2003:multiscalestructuralsimilarity}.
The method computes the contrast and structure measures of SSIM
for downsampled versions of the input images
and then defines
MS-SSIM as the product of the luminance at the original scale
and the contrast and structure measures at each scale.
However,
SSIM and MS-SSIM
may not correlate well with human observer performance
on radiological tasks
\cite{renieblas:17:ssi}.
Recent works,
\eg, \citep{bosse:2018:deepneuralnetworks,zhang:2020:blindimagequality},
consider using (deep) CNN models
for IQA.
CNN methods are increasingly popular
and their use as a model for the human visual system
\citep{lindsay:2020:convolutionalneuralnetworks}
makes them an attractive tool for assessing images.
For example,
\citep{bosse:2018:deepneuralnetworks}
proposed a CNN with convolutional and pooling layers for feature extraction
and
fully connected layers for regression.
They used
VGG \citep{simonyan:2015:verydeepconvolutional},
a frequently-cited CNN design with $3 \by 3$ convolutional kernels,
as the basis of the feature extraction portion of their network.
Ref. \citep{bosse:2018:deepneuralnetworks} showed that deeper networks with more
learnable parameters were able to better predict image quality.
However,
datasets of images with quality labels
remain relatively scarce,
making it difficult to train deep networks.
\subsection{No Reference IQA}
No reference, or unsupervised, IQA metrics
attempt to quantify an image's quality without access to a noiseless version of the image.
These metrics rely on modeling statistical characteristics of images
or noise.
Many no reference IQA metrics assume the noise distribution is known.
The discrepancy principle is a classic example
that uses an assumed noise distribution to characterize
the expected relation between the reconstructed image
and the noisy data.
For additive zero-mean white Gaussian noise
with known variance $\sigma^2$,
the discrepancy principle
uses the fact that the
expected MSE in the data space is the noise variance
\cite{phillips:62:atf}:
\[
\E{\frac{1}{\ydim}\normsq{\mA \xhat(\params) - \vy}_2} = \sigma^2 %
.\]
The discrepancy principle
can be used as a stopping criterion in machine learning methods
or as a loss function,
\eg,
\[
\lfcnargs = \left(\frac{1}{\ydim}\normsq{\mA \xhat(\params) - \vy}_2 - \sigma^2 \right)^2
.\]
However,
images of varying quality
can yield the same noise estimate,
as seen in
\fref{fig: rmse for various distortions}.
Related methods have been developed for Poisson noise as well
\cite{hebert:92:sbm}.
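As a concrete illustration of the discrepancy principle, the sketch below tunes the regularization weight of a simple shrinkage denoiser (an illustrative choice with $\mA = \I$ and a closed-form minimizer) so that the data-space RMSE matches the known noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
x_true = rng.standard_normal(10_000)                    # stand-in "image"
y = x_true + sigma * rng.standard_normal(x_true.shape)

# Shrinkage denoiser: x_hat(lam) = y / (1 + lam) minimizes
# 0.5*||x - y||^2 + 0.5*lam*||x||^2, i.e., the A = I case.
residual_rms = lambda lam: np.sqrt(np.mean((y / (1.0 + lam) - y) ** 2))

# Discrepancy principle: choose lam so the residual RMSE equals sigma.
# residual_rms is monotone increasing in lam, so bisection suffices.
lo, hi = 0.0, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual_rms(mid) < sigma else (lo, mid)
lam_dp = 0.5 * (lo + hi)
```

The bisection here plays the role of minimizing the squared-discrepancy loss displayed above; any root-finding method would do.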
Paralleling MSE's popularity among supervised loss metrics,
Stein's Unbiased Risk Estimator (SURE)
\citep{stein:1981:estimationmeanmultivariate}
is an unbiased estimate of MSE that does not require noiseless images.
Let $\vy = \xtrue + \vn$
denote a signal plus noise measurement
where \vn is, as above, Gaussian noise
with known variance $\sigma^2$.
The SURE estimate of the MSE
of a denoised signal, \xhat,
is
\begin{align}
\frac{1}{\sdim} \normsq{\vy - \xhat(\vy)}_2 - \sigma^2 + \frac{2 \sigma^2}{\sdim} \nabla_\vy \xhat(\vy) \label{eq: SURE},
\end{align}
where we write \xhat as a function of \vy to emphasize the dependence.
For large signal dimensions, \sdim,
as is common in image reconstruction problems,
the law of large numbers suggests SURE is a fairly accurate approximation of the true MSE.
However, it is often impractical to evaluate the divergence term in \eqref{eq: SURE},
due to computational limitations
or
not knowing the form of $\xhat(\vy)$.
A Monte-Carlo approach to estimating the divergence
\cite{ramani:2008:montecarlosureblackbox}
uses the following key equation:
\begin{align}
\nabla_\vy \xhat(\vy) = \lim_{\epsilon \rightarrow 0} \E{\vb' \cdot \frac{\xhat(\vy + \epsilon \vb) - \xhat(\vy)}{\epsilon}}, \label{eq: monte carlo sure}
\end{align}
where \vb is an independent and identically distributed (i.i.d.) random vector
with zero mean, unit variance, and bounded higher order moments.
Theoretical and empirical arguments
show that a single noise vector can well-approximate the divergence
\citep{ramani:2008:montecarlosureblackbox},
so only two calls to the lower-level solver $\xhat(\vy)$ are required.
This method treats the lower-level problem like a blackbox,
thus allowing one to estimate the divergence of complicated functions,
including those that may not be differentiable.
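A minimal sketch of Monte-Carlo SURE along these lines, treating the denoiser as a black box (the Rademacher probe distribution and the step size `eps` are illustrative choices):

```python
import numpy as np

def monte_carlo_sure(y, denoiser, sigma, eps=1e-3, seed=0):
    """Monte-Carlo SURE sketch: approximate the divergence term with a
    single random probe vector b, then assemble the SURE expression."""
    rng = np.random.default_rng(seed)
    n = y.size
    xhat = denoiser(y)
    b = rng.choice([-1.0, 1.0], size=y.shape)  # i.i.d., zero mean, unit variance
    div = b.ravel() @ ((denoiser(y + eps * b) - xhat).ravel()) / eps
    return np.mean((y - xhat) ** 2) - sigma**2 + 2.0 * sigma**2 * div / n
```

For a linear denoiser the probe-based divergence estimate is exact, so the result closely tracks the oracle MSE computed with the (normally unavailable) noiseless signal.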
See \citep{soltanayev:2018:trainingdeeplearning,kim:20:uto,zhussip:19:tdl}
for examples of
applying the Monte-Carlo estimation of SURE to train deep neural networks,
and \citep{zhang:2020:bilevelnestedsparse,deledalle:2014:steinunbiasedgradient}
for two examples of learning a tuning parameter
using a bilevel approach with SURE as the upper-level loss function.
For extensions to inverse problems
and noise from exponential families,
see
\citep{eldar:2009:generalizedsureexponential}.
While SURE and the discrepancy principle are popular no-reference metrics
in the signal processing literature,
there are many additional no-reference metrics in the image quality assessment literature.
These metrics typically depend on modeling one (or more) of three things \citep{wang:2011:reducednoreferenceimage}:
\begin{itemize}[noitemsep,topsep=0pt]
\item image source characteristics,
\item image distortion characteristics, \eg, blocking artifact from JPEG compression, and/or
\item human visual system perceptual characteristics.
\end{itemize}
As an example of a strategy that can capture
both image source and human visual system characteristics,
natural scene%
\footnote{
Natural scenes are those captured by optical cameras
(not created by computer graphics or other artificial processes)
and are not limited to outdoor scenes.
}
statistics characterize the distribution of various features in natural scenes,
typically using some filters \citep{mittal:2013:makingcompletelyblind,wang:2011:reducednoreferenceimage}.
If a feature reliably follows a specific statistical pattern in natural images
but has a noticeably different distribution in distorted images,
one can use that feature to assign quality scores to images.
Some IQA metrics attempt to first identify the type of distortion and
measure features specific to that distortion,
while others use the same features for all images.
In addition to their use in full-reference IQA,
CNN models have been trained
to perform no-reference IQA
\citep{kang:2014:convolutionalneuralnetworks,bosse:2018:deepneuralnetworks}.
For example, \citet{kang:2014:convolutionalneuralnetworks}
propose a CNN model that
extracts small (32 \by 32) patches from images,
estimates quality for each one,
and averages the scores over all patches to get the quality score for the entire image.
Briefly, their method involves local contrast normalization for each patch,
applying (learned) convolutional filters to extract features,
maximum and minimum pooling,
and fully connected layers with rectified linear units (ReLUs).
As with most no reference IQAs,
\citep{kang:2014:convolutionalneuralnetworks}
trained their CNN on a dataset of (human encoded) image quality scores
(see \citep{laboratoryforimagevideoengineering::imagevideoquality}
for a commonly used collection of publicly available test images with quality scores).
Unlike most other IQA approaches,
\citep{kang:2014:convolutionalneuralnetworks} used backpropagation to learn all the CNN weights
rather than learning a transformation from (handcrafted) features to quality scores.
Interestingly,
some of the no-reference IQA metrics
\citep{mittal:2013:makingcompletelyblind,kang:2014:convolutionalneuralnetworks, wang:2011:reducednoreferenceimage}
approach the performance of the full-reference IQAs
in terms of their ability to match human judgements of image quality.
This observation suggests that there is room to improve full-reference IQA metrics
and that assessing image quality is a very challenging problem!
\section{Parameter Search Strategies}
\label{sec: hyperparameter search strategies}
After selecting a metric to measure how good a hyperparameter is,
the next task is devising a strategy to find the best hyperparameter
according to that metric.
Search strategies fall into three main categories:
(i) model-free, \lfcn-only;
(ii) model-based, \lfcn-only;
and
(iii) gradient-based,
using both \lfcn and $\nabla \lfcn$.
Model-free strategies
do not assume any information
about the hyperparameter landscape,
whereas model-based strategies
use historical \lfcn evaluations
to predict the loss function at
untested hyperparameter values.
The following sections describe
common model-free and model-based
hyperparameter search strategies
that only use \lfcn.
See \cite[Ch.~13 and Ch.~20.6]{dempe:2020:bileveloptimizationadvances}
for discussion of additional gradient-free methods for bilevel problems,
\eg, population-based evolutionary algorithms,
and \citep{larson:2019:derivativefreeoptimizationmethods}
for a general discussion of derivative-free optimization methods.
The third class of hyperparameter optimization schemes
comprises approaches based on gradient descent for the bilevel problem.
The high-level strategy in bilevel approaches is to
calculate the gradient of the upper-level loss function, \lfcn,
with respect to \params and then use any gradient descent method to minimize \lfcn.
Although this approach can be computationally challenging,
it generalizes well to a large number of hyperparameters.
\cref{chap: ift and unrolled} and \cref{chap: bilevel methods}
discuss this point further and go into depth on different methods for computing this gradient.
\subsection{Model-free}
The most common search strategy
is probably an empirical search,
where a researcher tries different hyperparameter combinations manually.
A punny, but often accurate,
term for this manual search is
GSD: grad[uate] student descent
\citep{gencoglu:2019:harksidedeep}.
\citet{bergstra:12:rsf} hypothesize that manual search is common
because
it provides some insight as the user must evaluate each option,
it requires no overhead for implementation,
and
it can perform reliably in very low dimensional hyperparameter spaces.
Grid search is a more systematic alternative to manual search.
When there are only one or two continuous hyperparameters,
or the possible set of hyperparameters, $\Params$, is small,
a grid search (or exhaustive search) strategy
may suffice to find the optimal value, $\paramshat$,
to within the grid spacing.
However, the complexity of grid search grows exponentially
with the number of hyperparameters.
Regularizers frequently have many hyperparameters,
so one must use a more sophisticated search strategy.
One popular approach is random search,
which \citet{bergstra:12:rsf} show is superior to a grid search,
especially when some hyperparameters are more important
than others.
There are also variations on random search, such as using Poisson disk sampling theory to explore the hyperparameter space \citep{muniraju:18:crs}.
The simplicity of random search makes it popular,
and,
even if one uses a more complicated search strategy,
random search can provide a useful baseline or an initialization strategy.
However, random search, like grid search,
suffers from the curse of dimensionality,
and is less effective as the hyperparameter space grows.
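As a small illustration, the sketch below applies random search to a toy bilevel problem: the lower level is ridge regression with a single hyperparameter (chosen because it has a closed-form minimizer), and the upper level is validation MSE. All sizes and the log-uniform sampling range are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bilevel problem: lower level is ridge regression in lam,
# upper level is mean-squared error on held-out data.
A_tr = rng.standard_normal((50, 10))
x_star = rng.standard_normal(10)
y_tr = A_tr @ x_star + 0.5 * rng.standard_normal(50)
A_va = rng.standard_normal((50, 10))
y_va = A_va @ x_star + 0.5 * rng.standard_normal(50)

def x_hat(lam):  # lower-level minimizer (closed form for ridge)
    return np.linalg.solve(A_tr.T @ A_tr + lam * np.eye(10), A_tr.T @ y_tr)

def upper_loss(lam):  # upper-level loss: validation MSE
    return np.mean((A_va @ x_hat(lam) - y_va) ** 2)

# Random search: draw hyperparameters log-uniformly and keep the best.
candidates = 10.0 ** rng.uniform(-4, 2, size=50)
lam_best = min(candidates, key=upper_loss)
```

Sampling on a log scale is the usual choice for regularization weights, whose useful values span orders of magnitude.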
Another group of model-free blackbox strategies are population-based methods
such as evolutionary algorithms.
A popular population-based method is the covariance matrix adaption evolutionary strategy (CMA-ES)
\citep{beyer:2001:theoryevolutionstrategies}.
In short,
at every iteration
CMA-ES samples a multivariate normal distribution
to create a number of \dquotes{offspring} samples.
Mimicking natural selection,
these offspring are judged according to some fitness function,
a parallel to the upper-level loss function.
The fittest offspring determine the update to the normal distribution
and thus \dquotes{pass on} their good characteristics
to the next generation.
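The sample-select-recenter loop can be sketched as follows. This is a deliberately simplified evolution strategy in the spirit of CMA-ES: it uses an isotropic covariance with a fixed decay schedule rather than full covariance adaptation, and all constants (population size, elite count, decay rate) are illustrative.

```python
import numpy as np

def simple_es(f, theta0, sigma=0.5, pop=20, iters=100, elite=5, seed=0):
    """Simplified evolution strategy: sample offspring from a Gaussian,
    keep the fittest, and recenter the distribution on their mean."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        offspring = theta + sigma * rng.standard_normal((pop, theta.size))
        fitness = np.array([f(o) for o in offspring])       # lower is fitter
        theta = offspring[np.argsort(fitness)[:elite]].mean(axis=0)
        sigma *= 0.97                                       # narrow the search
    return theta
```

Full CMA-ES additionally adapts the covariance matrix of the sampling distribution using the selected offspring, which is what makes it effective on ill-conditioned landscapes.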
\subsection{Model-based}
Model-based search strategies
assume a model (or prior) for the hyperparameter space
and use only loss function evaluations (no gradients).
This section discusses two common model-based strategies:
Bayesian approaches
and
trust region methods.
\subsubsection{Bayesian Approaches}
Bayesian approaches
fit previous hyperparameter trials' results to a model to select
the hyperparameters that appear most promising to evaluate next \cite{klein:17:fbh}.
For example, a common choice is to model the loss as a function of the hyperparameters with a Gaussian Process prior.
Given a few hyperparameter and cost function points,
a Bayesian method involves
the following steps.
\begin{enumerate}
\item Find the mean and covariance functions for the Gaussian Process.
The mean function will generally interpolate the sampled points.
The covariance function is generally expressed as a kernel function,
often using squared exponential functions \citep{frazier:2018:tutorialbayesianoptimization}.
\item Create an acquisition function.
The acquisition function captures how desirable it is
to sample (\dquotes{acquire}) a hyperparameter setting.
Thus, it should be large (desirable) for hyperparameter values that are predicted
to yield small loss function values
or that have high enough uncertainty that they may yield low losses.
The design of the acquisition function thus trades off between exploring new areas of the hyperparameter landscape with high uncertainty and a more locally focused exploitation of the current best hyperparameter settings. See \citep{frazier:2018:tutorialbayesianoptimization}
for a discussion of specific acquisition function designs.
\item Maximize the acquisition function
(typically designed to be easy to optimize)
to determine which hyperparameter point to sample next.
\item Evaluate the loss function at the new hyperparameter candidate.
\end{enumerate}
These steps repeat for a given amount of time or until convergence.
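A minimal one-dimensional sketch of these steps, using a zero-mean Gaussian process with a squared-exponential kernel and a lower-confidence-bound acquisition optimized over a grid. The kernel length scale, grid resolution, and acquisition design are all illustrative choices.

```python
import numpy as np

def gp_posterior(X, yv, Xs, length=0.3, noise=1e-6):
    """GP posterior mean/std at points Xs (squared-exponential kernel)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    sol = np.linalg.solve(K, np.column_stack([yv, Ks]))
    mu = Ks.T @ sol[:, 0]
    var = 1.0 - np.einsum('ij,ij->j', Ks, sol[:, 1:])
    return mu, np.sqrt(np.maximum(var, 1e-12))

def bayes_opt(f, bounds=(0.0, 1.0), iters=15, kappa=2.0, seed=0):
    """Steps 1-4 above: fit the GP, build a lower-confidence-bound
    acquisition, optimize it over a grid, and evaluate f there."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(bounds[0], bounds[1], 201)
    X = list(rng.uniform(bounds[0], bounds[1], size=5))
    Y = [f(x) for x in X]
    for _ in range(iters):
        mu, sd = gp_posterior(np.array(X), np.array(Y), grid)
        x_next = grid[np.argmin(mu - kappa * sd)]  # small LCB = promising
        X.append(x_next)
        Y.append(f(x_next))
    return X[int(np.argmin(Y))]
```

The parameter `kappa` controls the exploration/exploitation trade-off described above: large values favor points with high posterior uncertainty.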
\subsubsection{Trust Region Method}
Another derivative-free optimization
method that uses only loss function evaluations
is a trust-region method.
This section describes the specific trust-region method
as presented in
\cite{ehrhardt:2021:inexactderivativefreeoptimization}
(see references therein for previous, similar methods).
The derivative-free, trust-region method (TRM)
\citep{conn:2000:trustregionmethods}
is similar to Bayesian optimization
in that it involves fitting
an easier to optimize function
to the loss function of interest,
\lfcn,
and then minimizing the easier, surrogate function
(the \dquotes{model}).
Thus,
TRM requires only function evaluations,
not gradients,
to construct and then minimize the model.
However,
unlike most Bayesian optimization-based approaches,
TRM uses a local (often quadratic) model for \lfcn
around the current iterate,
rather than a surrogate that fits all previous points.
In taking a step
based on this local information,
TRM is more similar to gradient-based approaches.
The trust-region captures how well the model
matches the observed \lfcn values and
determines the maximum
step at every iteration.
The \dquotes{goodness} of the model
is typically quantified as the ratio
of the actual decrease in \lfcn
(based on observed function evaluations)
to the predicted decrease
(based on the model).
If this ratio is relatively large (close to one),
then the model is a good approximation of \lfcn
and the trust-region grows for the next iteration.
If this ratio is close to zero,
then the observed decrease is much less than predicted
and the trust-region shrinks.
Recall that evaluating \lfcn is typically expensive
in bilevel problems
as
each upper-level function evaluation
involves optimizing the lower-level cost.
Thus,
even constructing the model for a TRM
can be expensive.
To mitigate this computational complexity,
\citep{ehrhardt:2021:inexactderivativefreeoptimization}
incorporated a dynamic accuracy component,
with the accuracy for the lower-level cost initially set relatively loose
(leading to rough estimates of \lfcn)
but increasing with the upper-level iterations
(leading to refined estimates of \lfcn
as the algorithm nears a stationary point).
One can use any optimization method for the lower-level cost;
\citep{ehrhardt:2021:inexactderivativefreeoptimization}
used a gradient method for the lower-level optimization method
with well-known convergence results
to facilitate establishing convergence
and computational complexity results.
The upper-level loss function
considered in
\citep{ehrhardt:2021:inexactderivativefreeoptimization}
is additively separable and quadratic:
\begin{align*}
\lfcn(\params) &= \frac{1}{\Ntrue} \sum_{\xmath{j}=1}^\Ntrue \lfcn(\params \, ; \xhat_\xmath{j}(\params))
= \frac{1}{\Ntrue} \sum_{\xmath{j}=1}^\Ntrue \underbrace{\left(
\lfcnquad(\params \, ; \xhat_\xmath{j}(\params))
\right)^2}_{\text{Equivalently, } \lfcnquad_\xmath{j}(\params)^2},
\end{align*}
where \lfcnquad is typically the residual norm %
$\norm{\xhat_\xmath{j}(\params) - \xtrue_\xmath{j}}_2$.
(Although we define the sum to be over the number of training samples,
this expression easily generalizes to include a regularization term on \params
by defining an additional
$\lfcnquad_{\Ntrue+1}$ term.)
Given a current value for the hyperparameters, \paramsset,
the TRM models the local upper-level loss function
by creating a linear model such that
\[
\lfcnquad_\xmath{j}(\paramsset + \bmath{\delta})
\approx
m_\xmath{j}(\bmath{\delta})
\defeq \lfcnquad_\xmath{j}(\paramsset) + \vg_\xmath{j}' \bmath{\delta},
\]
where $\vg_\xmath{j} \in \F^\paramsdim$ approximates $\nabla \lfcnquad_\xmath{j}(\paramsset)$.
Then,
the overall model for the upper-level problem
is quadratic:
\begin{align}
\lfcn(\paramsset + \bmath{\delta}) &\approx \frac{1}{\Ntrue} \sum_\xmath{j} m^2_{\xmath{j}}(\bmath{\delta})
= \lfcn(\paramsset)
+ \frac{1}{\Ntrue} \sum_\xmath{j} \left(
2 \lfcnquad_\xmath{j}(\paramsset) \vg_\xmath{j}' \bmath{\delta}
+ (\vg_\xmath{j}' \bmath{\delta})^2
\right).
\label{eq: DFO model} %
\end{align}
One can estimate the gradients, $\vg_\xmath{j}$,
by interpolating a set of \paramsdim samples
(recall $\params \in \F^\paramsdim$)
of the upper-level loss function.
This process involves
choosing a set of interpolating points,
$\{ \paramsset + \bmath{\delta}^{(1)}, \ldots, \paramsset + \bmath{\delta}^{(\paramsdim)} \}$,
(approximately) evaluating \lfcnquad at each one,
then solving
\begin{align*}
\underbrace{
\begin{bmatrix}
\lfcnquad_\xmath{j}(\paramsset + \bmath{\delta}^{(1)}) - \lfcnquad_\xmath{j}(\paramsset) \\
\vdots \\
\lfcnquad_\xmath{j}(\paramsset + \bmath{\delta}^{(\paramsdim)}) - \lfcnquad_\xmath{j}(\paramsset)
\end{bmatrix}
}_{\paramsdim}
=
\underbrace{
\begin{bmatrix}
\paren{\bmath{\delta}^{(1)}}' \\
\vdots \\
\paren{\bmath{\delta}^{(\paramsdim)}}'
\end{bmatrix}
}_{\paramsdim \by \paramsdim}
\vg_\xmath{j}
\end{align*}
for $\xmath{j} \in [1 \ldots \Ntrue]$.
After forming the quadratic model for the upper-level loss,
the TRM minimizes the model \eqref{eq: DFO model}
within some trust region,
which is a simple convex-constrained quadratic problem
whose minimizer is the candidate step $\hat{\bmath{\delta}}$.
After computing $\hat{\bmath{\delta}}$,
the TRM accepts the step and updates the hyperparameters
($\params^{(i+1)} = \params^{(i)} + \hat{\bmath{\delta}}$) if
the actual reduction (based on the estimated loss function values)
to predicted reduction (based on the quadratic model of the loss function)
ratio
is large enough.
Otherwise, if the ratio is low,
the update step is rejected
and
the trust region shrinks.
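The model-build, step, and accept/reject logic above can be sketched as follows for a least-squares upper-level loss. Here the per-sample gradients are estimated with forward differences, a simple stand-in for the interpolation scheme of \citep{ehrhardt:2021:inexactderivativefreeoptimization}, and all constants (initial radius, acceptance threshold, growth/shrink factors) are illustrative.

```python
import numpy as np

def dfo_trust_region(r, theta0, delta0=1.0, iters=50, eta=0.1):
    """Derivative-free trust-region sketch for L(theta) = sum(r(theta)**2),
    where r returns a residual vector."""
    theta = np.asarray(theta0, dtype=float)
    delta = delta0
    L = lambda t: float(np.sum(r(t) ** 2))
    for _ in range(iters):
        r0 = r(theta)
        # Linear model r(theta + d) ~ r0 + G @ d (finite-difference Jacobian)
        G = np.column_stack([
            (r(theta + 1e-6 * e) - r0) / 1e-6 for e in np.eye(theta.size)
        ])
        # Gauss-Newton step for the quadratic model, clipped to the region
        step = np.linalg.lstsq(G, -r0, rcond=None)[0]
        if np.linalg.norm(step) > delta:
            step *= delta / np.linalg.norm(step)
        predicted = L(theta) - float(np.sum((r0 + G @ step) ** 2))
        actual = L(theta) - L(theta + step)
        rho = actual / predicted if predicted > 0 else -1.0
        if rho > eta:                    # accept the step
            theta = theta + step
            if rho > 0.75:               # model is good: grow the region
                delta = min(2.0 * delta, 10.0)
        else:                            # reject the step, shrink the region
            delta *= 0.5
    return theta
```

In the bilevel setting each call to `r` requires solving the lower-level problem, which is why the dynamic-accuracy refinement discussed next matters.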
While the TRM appears to involve \paramsdim
upper-level function evaluations every iteration
to construct the gradient estimates,
after an initialization,
one can generally reuse samples,
gradually replacing old samples with
the samples at new hyperparameter iterates.
Ref. \citep{ehrhardt:2021:inexactderivativefreeoptimization}
discusses requirements on the interpolation set
to guarantee a good geometry
and conditions for re-setting the interpolation sample
if the model is not sufficiently accurate.
A main result from \citep{ehrhardt:2021:inexactderivativefreeoptimization}
is a bound on the number of iterations to reach an $\epsilon$-optimal point
(defined as
$\min_\upperiter \normr{\nabla_\params \lfcn(\iter{\params})} < \epsilon$,
where \upperiter indexes the upper-level iterates).
The bound derivation assumes
(i)
\ofcn is differentiable in \vx,
(ii)
\ofcn is $\mu$-strongly convex, \ie,
$\ofcn(\vx) - \frac{\mu}{2}\norm{\vx}^2$ is convex,
(iii)
the derivative of \ofcn is Lipschitz continuous,
and
(iv)
the first and second derivative of the lower-level cost
with respect to \vx exist and are continuous.
These requirements are satisfied
by the example filter learning problem \eqref{eq: bilevel for analysis filters}, %
when \mA has full column rank,
and more generally
when there are certain constraints on the hyperparameters.
The iteration bound is a function of the following:
\begin{itemize}[noitemsep,topsep=0pt]
\item the tolerance $\epsilon$,
\item the trust region parameters
(parameters that control the increase and decrease in trust region size
based on the actual to predicted reduction,
the starting trust region size,
and
the minimum possible trust region size),
\item the initialization for \params, and
\item the maximum possible error between
the gradient of the upper-level loss function and
the gradient of the model for the upper-level loss
within a trust region
(when the gradient of \lfcn is Lipschitz continuous,
this bound is the corresponding Lipschitz constant).
%
\end{itemize}
The number of iterations required to reach
such an $\epsilon$-optimal point is
\order{\frac{1}{\epsilon^2}}
\citep{ehrhardt:2021:inexactderivativefreeoptimization}
and the number of required upper-level loss function
evaluations
depends more than linearly on \paramsdim
\citep{roberts:2021:inexactdfobilevel}.
That growth with \paramsdim
impedes its use
in problems with many hyperparameters,
though new techniques such as \citep{cartis:2021:scalablesubspacemethods}
may be able to decrease or remove the dependency.
\section{Summary}
Turning from the discussion
of the lower-level problem in
\cref{chap: image recon},
this section concentrated on
the other two aspects of bilevel problems:
the upper-level loss function and the optimization strategy.
The loss function defines
what a \dquotes{good} hyperparameter is,
typically using a metric of image quality
to compare \xhatp to
a clean, training image, \xtrue.
MSE is the most common upper-level loss function. %
It is well known from statistical estimation
that the estimator that minimizes MSE is the conditional mean,
$\xhat(\vy) = \E{\vx|\vy}$.
Thus, if MSE is the true metric of interest,
then lower-level problems should
be designed to try to approximate $\E{\vx|\vy}$ closely.
Yet lower-level formulations in most bilevel papers
are not described as conditional mean estimators or approximations thereof.
\sref{sec: loss function design}
discussed many other full reference and no reference options,
including ones motivated by human judgements of perceptual quality,
from the image quality assessment literature;
\sref{sec: prev results loss function}
gives examples of bilevel methods
that use some of these other loss functions.
The second half of this section concentrated on
model-free and model-based
hyperparameter search strategies.
The grid search, CMA-ES, and trust region
methods described above
all scale at least linearly with the number of hyperparameters.
Similarly,
Bayesian optimization
is best-suited for small hyperparameter dimensions;
\citep{frazier:2018:tutorialbayesianoptimization} suggests
it is typically used for problems with 20 or fewer hyperparameters.
The remainder of this review
considers gradient-based strategies
for hyperparameter optimization.
The main benefit of gradient-based methods
is that they can scale to the large number of hyperparameters
that are commonly used in machine learning applications.
Correspondingly,
the main drawback of a gradient-based method
over the methods discussed in this section
is the implementation complexity
and the differentiability requirement.
\crefs{chap: ift and unrolled}{chap: bilevel methods}
discuss multiple options for
gradient-based methods.
\chapter{Gradient Based Bilevel Methodology: The Groundwork \label{chap: ift and unrolled}}
When
the lower-level optimization problem
\eqref{eq: generic bilevel lower-level}
has a closed-form solution, \xhat,
one can substitute that solution into the upper-level loss function
\eqref{eq: generic bilevel upper-level}.
In this case, the bilevel problem is equivalent to a single level problem
and one can use classic single-level optimization methods
to minimize the upper-level loss.
(See \citep{kunisch:2013:bileveloptimizationapproach}
for analysis and discussion of some simple
bilevel problems with closed-form solutions for \xhat.)
This review focuses on the more typical
bilevel problems
that lack a closed-form solution for \xhat.
Although there are a wide variety of optimization methods
for this challenging category of bilevel problems,
many methods are %
built on gradient descent
of the upper-level loss.
The primary challenge with gradient-based methods
is that the gradient of the upper-level function
depends on a variable that is itself the solution to an optimization problem involving the hyperparameters of interest.
This section describes two main approaches for overcoming this challenge.
The first approach uses the fact that the gradient of the lower-level cost function is zero at the minimizer
to compute an exact gradient at the exact minimizer.
The second approach uses knowledge of the update scheme for the lower-level cost function
to calculate the exact gradient for an approximation to the minimizer
after a specific number of lower-level optimization
steps.
With this (approximation of the) gradient of the
lower-level optimization variable with respect to the hyperparameters,
one can compute
the gradient of the upper-level loss function
with respect to the hyperparameters, \params.
\cref{chap: bilevel methods}
uses the building blocks from this section
to explain various bilevel methods based on this gradient.
\section{Set-up}
Recall from \sref{sec: bilevel set-up} that a generic bilevel problem is
\begin{align}
\argmin_\params \; &\lfcnargs \text{ where }
\nonumber\\
&\xhat(\params) = \argmin_{\vx} \ofcnargs.
\label{eq:lower-repeat}
\end{align}
The way
\eqref{eq:lower-repeat}
expresses the lower-level problem
implies a unique minimizer, \xhat.
This is satisfied when $\ofcn$ is a strictly convex function of $\vx$,
which is typically the case when we use a strictly convex regularizer.
However, the following derivations also hold
in the case of non-unique or local minimizers.
For more discussion,
see \citep{holler:2018:bilevelapproachparameter},
which defines optimistic and pessimistic versions
of the bilevel problem
for the case of multiple lower-level solutions.
For simplicity,
hereafter we focus on the case $\F = \R$.
Using the chain rule, the gradient of the upper-level loss function with respect to the hyperparameters is
\begin{align}
\uppergrad
&= \dParams{\lfcnargs} + \left( \dParams{\xhatargs} \right) ' \dx{\lfcnargs}
, \label{eq: bilevel first chain rule}
\end{align}
where on the right hand side
$\dParams$
and $\dx$
denote partial derivatives
w.r.t. the first and second arguments of $\lfcnparamsvx$,
respectively.
We typically select the loss function such that it is easy to compute
these partials.
For example, if \lfcn is the MSE training loss,
\ie,
$\lfcnargs = \frac{1}{2} %
\normsq{\xhatargs - \xtrue}_2$,
then
\begin{equation*}
\dParams{\lfcnargs} = 0
\text{ and }
\dx{\lfcnargs} = \xhatargs - \xtrue.
\end{equation*}
The following sections survey methods to
find \dParams{\xhatargs},
which is the more challenging piece in
\eqref{eq: bilevel first chain rule}.
\section{Minimizer Approach}
\label{sec: minimizer approach}
The first approach finds
$\dParams{\xhatargs}$
using the fact that the gradient of \ofcn at the minimizer is zero.
There are two ways to arrive at the final expression:
the
implicit function theorem (\ac{IFT}) perspective
(as in \cite{samuel:2009:learningoptimizedmap, gould:2016:differentiatingparameterizedargmin})
and the Lagrangian/KKT transformation perspective
(as in \cite{chen:2014:insightsanalysisoperator, holler:2018:bilevelapproachparameter}).
This section presents both perspectives in sequence.
The end of the section summarizes the required assumptions
and discusses computational complexity and memory requirements.
The first step in both perspectives is to assume
we have computed
\xhatargs %
and that the lower-level problem
\eqref{eq:lower-repeat}
is unconstrained
(\eg, no non-negativity or box constraints).
Therefore, the gradient of \ofcn with respect to \vx,
evaluated at \xhat, must be zero:
\begin{align}
\dx{\ofcnargs}\evalat_{\vx=\xhatargs} =
\dx{\ofcn(\xhat \, ; \params)}
= \bmath{0} \label{eq:dPhi}.
\end{align}
After this point,
the two perspectives diverge.
\subsection{Implicit Function Theorem Perspective}
In the IFT perspective,
we apply the IFT
(\cf. \cite{fessler:96:mav})
to define a function $h$ such that $\xhatargs = \hfunc$.
If we could write $h$ explicitly,
then the bilevel problem could be converted to an equivalent single-level problem.
However, per the \ac{IFT}, we do not need an explicit expression for $h$;
we only need that such an $h$ exists.
Combining this definition with \eqref{eq:dPhi} yields
\begin{align}
\bmath{0} &= \nabla_{\vx} \ofcnargsh \label{eq:dPhi hfunc} .
\end{align}
Using the chain rule, we differentiate both sides of \eqref{eq:dPhi hfunc} with respect to \params.
The \I in the equation below
follows from the chain rule because
$\nabla_\params \params =\I$.
We then rearrange terms to solve for the desired quantity, noting that
$\nabla_\params \xhatargs = \nabla_\params \hfunc$.
Thus,
evaluating all terms at \xhat
leads to a gradient expression
for the lower-level problem:
\begin{align}
0 =& \nabla_{\vx \vx} \ofcnargsh \dParams{\hfunc} +
\I \cdot \nabla_{\vx \params} \ofcnargsh \nonumber \\
\dParams{\hfunc} =& -[\nabla_{\vx \vx} \ofcnargsh]^{-1} \cdot \nabla_{\vx \params} \ofcnargsh \nonumber \\
\dParams{\xhatargs} =& -[\nabla_{\vx \vx} \ofcn(\xhat; \params)]^{-1} \cdot \nabla_{\vx \params} \ofcn(\xhat; \params) \label{eq: dhdgamma IFT}.
\end{align}
When \ofcn is strictly convex,
the Hessian of \ofcn is positive definite
and
$\nabla_{\vx \vx} \ofcn(\xhat; \params)$ is invertible.
Substituting \eqref{eq: dhdgamma IFT}
into \eqref{eq: bilevel first chain rule}
yields the following expression
for the gradient of the upper-level
loss function with respect to \params:
\begin{align*}
\uppergrad %
&=
\dParams{\lfcnargs} -
\left(\nabla_{\vx \params} \ofcn(\xhat; \params)\right)' %
\Hinv %
\dx{\lfcn(\params \, ; \xhat)}
\nonumber %
.\end{align*}
Ref. \cite{gould:2016:differentiatingparameterizedargmin} provides simple example bilevel problems
that verify that the \ac{IFT} gradient
agrees with
the analytic gradient obtained by substituting the closed-form solution of the lower-level problem
into the upper-level problem and differentiating directly. %
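In the same spirit, the IFT gradient \eqref{eq: dhdgamma IFT} can be checked numerically on a ridge lower-level problem with a scalar hyperparameter, where $\ofcn(\vx \,; \lambda) = \tfrac{1}{2}\normsq{\mA\vx - \vy}_2 + \tfrac{\lambda}{2}\normsq{\vx}_2$, so the Hessian is $\mA'\mA + \lambda\I$ and the mixed derivative $\nabla_{\vx \params} \ofcn$ is simply $\xhat$. The sizes and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 8))
x_star = rng.standard_normal(8)
y = A @ x_star + 0.1 * rng.standard_normal(30)

# Lower level: Phi(x; lam) = 0.5*||Ax - y||^2 + 0.5*lam*||x||^2,
# so grad_xx Phi = A'A + lam*I and grad_{x,lam} Phi = x.
x_hat = lambda lam: np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ y)
loss = lambda lam: 0.5 * np.sum((x_hat(lam) - x_star) ** 2)  # upper level

lam = 0.7
xh = x_hat(lam)
H = A.T @ A + lam * np.eye(8)
# IFT gradient: -(grad_{x,lam} Phi)' H^{-1} grad_x loss, all at x_hat
ift_grad = -xh @ np.linalg.solve(H, xh - x_star)
```

A centered finite difference of the upper-level loss in the hyperparameter matches this value to high precision.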
\subsection{KKT Conditions \label{sec: minimizer via kkt}}
In the Lagrangian perspective,
\eqref{eq:dPhi}
is treated as a constraint on the upper-level problem,
creating a single-level problem with $\sdim$ equality constraints%
\footnote{
The KKT transformation
\eqref{eq: Lagrange set-up for bilevel}
relates bilevel optimization
to mathematical programs with equilibrium constraints (MPEC).
Some authors use the terms interchangeably;
see \cite[Ch.~12]{dempe:2020:bileveloptimizationadvances}.
}%
:
\begin{align}
&\argmin_\params \lfcn(\params \, ; \vx) \text{ subject to } %
\dx{\ofcnargs} = \mat{0}_\sdim. \label{eq: Lagrange set-up for bilevel}
\end{align}
The corresponding Lagrangian is
\begin{align}
L(\vx, \params, \vnu) %
&= \lfcn(\params \, ; \vx) + \vnu^T \dx{\ofcnargs} \nonumber
\end{align}
where $\vnu \in \F^\sdim$ is a vector of Lagrange multipliers
associated with the $\sdim$ equality constraints in \eqref{eq: Lagrange set-up for bilevel}.
The Lagrange reformulation is generally
well-posed because many bilevel problems,
such as \eqref{eq: bilevel for analysis filters},
satisfy the
linear independence constraint qualification
(LICQ)
\citep{dempe:2003:annotatedbibliographybilevel,scholtes:2001:howstringentlinear}.
LICQ requires that
the matrix of derivatives of the constraint
have full row rank
\citep{scholtes:2001:howstringentlinear},
\ie
\begin{equation*}
\text{rank}
\paren{
\begin{bmatrix}
\nabla_{\vx \params} \ofcnargs & \nabla_{\vx \vx} \ofcnargs
\end{bmatrix}
}
= \sdim
.\end{equation*}
Strict convexity of \ofcnargs
is therefore a sufficient condition
for LICQ to hold.
(Note the similarity
to the IFT perspective,
where strict convexity
is sufficient for the Hessian to be invertible.)
The first \ac{KKT} condition states
that,
at the optimal point,
the gradient of the Lagrangian with respect to \vx must be \mat{0}.
We can use this fact to solve for
the optimal Lagrangian multiplier, $\hat{\vnu}$:
\begin{align}
\dx{ L(\xhat, \params, \hat{\vnu})}
&= \dx{\lfcn(\params \, ; \xhat)} + \nabla_{\vx \vx} \ofcn(\xhat \,; \params) \hat{\vnu} = \bmath{0}
\nonumber \\
\hat{\vnu} &= -\Hinv \dx{\lfcn(\params \, ; \xhat)}. \nonumber
\end{align}
Substituting the expression for $\hat{\vnu}$ into
the
gradient of the Lagrangian with respect to \params
yields
\begin{align}
\nabla_\params L(\xhat, \params, \hat{\vnu}) &=
\dParams{\lfcn(\params \, ; \xhat)} +
\left(
\nabla_{\vx \params} \ofcn(\xhat \, ; \params)
\right)'
\hat{\vnu} \nonumber \\
&=
\dParams{\lfcn(\params \, ; \xhat)} -
\left(\nabla_{\vx \params} \ofcn(\xhat \,; \params) \right)'
\Hinv \dx{\lfcn(\params \, ; \xhat)}
, \nonumber
\end{align}
which is equivalent
to \eqref{eq: IFT final gradient dldparams}.
Ref. \citep{holler:2018:bilevelapproachparameter}
generalized the Lagrangian
approach
to the case
where the forward model
is defined only implicitly,
\eg,
as the solution to
a differential equation.
The authors write the lower-level problem as
\begin{equation}
\xhat%
= \argmin_{\vx} \min_{\tilde{\vy}}
\normsq{\vy - \tilde{\vy}}_2 + \regfcn(\vx)
\text{ s.t. } e(\tilde{\vy}, \vx) = 0, \label{eq: holler lower level}
\end{equation}
where the constraint function, $e$,
incorporates the implicit system model.
For example,
when the forward model is linear
($\mA\vx$),
taking
\mbox{$e(\tilde{\vy}, \vx) = \normsq{\mA\vx - \tilde{\vy}}_2$}
shows the equivalence
of the approach here
to the one in \citep{holler:2018:bilevelapproachparameter}.
\subsection{Summary of Minimizer Approach}
In summary,
the upper-level gradient expression
for the minimizer approach
(\ie, when one ``exactly'' minimizes the lower-level cost function)
is
\begin{align}
\uppergrad %
&= \dParams{\lfcn(\params \, ; \xhat)} -
\left(\nabla_{\vx \params} \ofcn(\xhat; \params)\right)'
\Hinv
\dx{\lfcn(\params \, ; \xhat)}.
\label{eq: IFT final gradient dldparams}
\end{align}
Thus,
for a given loss function and cost function,
calculating the gradient of the
upper-level loss function
(with respect to \params)
requires the following components
all evaluated at $\vx = \xhat$:
$\dParams{\lfcn(\params \, ; \vx)} \in \F^{\paramsdim}$,
$\nabla_{\vx \params} \ofcnargs \in \F^{\sdim \by \paramsdim}$,
$\nabla_{\vx \vx} \ofcnargs \in \F^{\sdim \by \sdim}$,
and
$\nabla_\vx \lfcn(\params \, ; \vx) \in \F^{\sdim}$.
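To make these components concrete, consider the following Python sketch. It uses a hypothetical toy problem (a single scalar tuning parameter $\theta$ with regularizer $\frac{e^\theta}{2} \normsq{\vx}_2$, not the filter-learning example), chosen so that the lower-level minimizer has a closed form and the minimizer-approach gradient \eqref{eq: IFT final gradient dldparams} can be checked against finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 12, 8
A = rng.standard_normal((M, N))
xtrue = rng.standard_normal(N)
y = A @ xtrue + 0.1 * rng.standard_normal(M)

def xhat(theta):
    # minimizer of Phi(x; theta) = 0.5||Ax - y||^2 + 0.5 exp(theta) ||x||^2
    return np.linalg.solve(A.T @ A + np.exp(theta) * np.eye(N), A.T @ y)

def hypergrad(theta):
    # minimizer-approach gradient: -(grad_{x,theta} Phi)' H^{-1} grad_x loss
    x = xhat(theta)
    H = A.T @ A + np.exp(theta) * np.eye(N)  # lower-level Hessian (SPD)
    mixed = np.exp(theta) * x                # grad_{x,theta} Phi; one column here
    return -mixed @ np.linalg.solve(H, x - xtrue)

def loss(theta):
    # upper-level loss 0.5||xhat(theta) - xtrue||^2
    return 0.5 * np.sum((xhat(theta) - xtrue) ** 2)

theta0, eps = 0.3, 1e-6
fd = (loss(theta0 + eps) - loss(theta0 - eps)) / (2 * eps)
```

Because $\theta$ is scalar here, the mixed gradient $\nabla_{\vx \theta}$ is a single column, $e^\theta \xhat$; with $\paramsdim$ parameters it would be an $\sdim \by \paramsdim$ matrix as above.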
Continuing the specific example
of learning filter coefficients and tuning parameters
\eqref{eq: bilevel for analysis filters},
the components are:
\begin{align}
\nabla_{\vx} \ofcn(\xhat \,; \params) &= \mA' (\mA \xhat - \vy)
+ \ebeta{0} \sum_{k=1}^K \ebeta{k} \hktil \conv \dsparsefcn.(\hk \conv \xhat; \epsilon) \nonumber
\\
\nabla_{\vx \beta_k} \ofcn(\xhat \,; \params) &=
\ebetazerok \hktil \conv \dsparsefcn.(\hk \conv \xhat)
\nonumber \\
\nabla_{\vx c_{k,s}} \ofcn(\xhat \, ; \params) &=
\ebetazerok \paren{ \dsparsefcn.(\circshift{(\hk \conv \xhat)}{\vs})
+ \hktil \conv \left( \ddsparsefcn.(\hk \conv \xhat) \odot \circshift{\xhat}{\vs} \right) }
\nonumber \\
\nabla_{\vx \vx} \ofcn(\xhat \, ; \params) &= \mA'\mA + \ebeta{0} \sum_k \ebeta{k} \mC_k' \diag{\ddsparsefcn.(\hk \conv \xhat)} \mC_k \nonumber \\
\dParams{\lfcn(\params \,; \vx)} &= 0 %
\nonumber \\
\nabla_\vx \lfcn(\params \, ; \xhat) &= \xhatargs - \xtrue.
\label{eq: nablas for filter learning}
\end{align}
Here, the notation \circshift{\vx}{\vi} means circularly shifting the vector \vx
by \vi elements,
and $c_{k,\vs}$ denotes the $\vs$th element
of the $k$th filter \hk,
where
$\vs$ is a tuple that indexes each dimension of \hk.
\apref{sec: dh of htilde conv f(h conv x)}
gives examples of using the \circshift{\vx}{\vi} notation
and derives
$\nabla_{\hks} \paren{ \htilde_k \conv f.(\hk \conv \vx)}$,
which is the key step to expressing
$\nabla_{\vx c_{k,\vs}} \ofcn(\xhat \, ; \params)$.
The other gradients
easily follow from
$\nabla_{\vx} \ofcn(\xhat \,; \params)$
using standard gradient tools for matrix expressions
\citep{petersen:2012:matrixcookbook}.
The minimizer approach to finding \uppergrad
uses the following assumptions:
\begin{enumerate}[noitemsep]
\item Both the upper and lower optimization problems have no inequality constraints.
\item \xhat is the minimizer to the lower-level cost function, not an approximation of the minimizer.
This constraint ensures that \eqref{eq:dPhi} holds.
\item The cost function \ofcn is twice differentiable with respect to \vx, and its gradient \dx{\ofcnargs} is differentiable with respect to \params.
This condition disqualifies the 0-norm and 1-norm as choices for the function \sparsefcn.
\item The Hessian of the lower-level cost function, $\nabla_{\vx \vx}\ofcnargs$, is invertible;
this is guaranteed when \ofcn is strongly convex.
\comment{This condition equivalently guarantees that strong duality holds in the Lagrangian approach.} %
\end{enumerate}
The first condition technically excludes applications like \ac{CT} imaging,
where the image is typically constrained to be non-negative.
However, non-negativity constraints are rarely required when good regularizers are used,
so the resulting unconstrained image can still be useful in practice \cite{fessler:96:mav}.
The second constraint is often the most challenging
since the lower-level problem typically uses an iterative algorithm
that runs for a certain number of iterations
or until a given convergence criterion is met.
As previously noted,
if there were a closed-form solution for \xhat,
then we would not have needed to use the \ac{IFT} or Lagrangian
to find the partial derivative of \xhat with respect to \params.
Since one usually does not reach the exact minimizer,
the calculated gradient will have some error in it,
depending on how close the final iterate is to the true minimizer \xhat.
Thus, the practical application of this method is more accurately called
Approximate Implicit Differentiation (AID)
\citep{ji:2021:bileveloptimizationconvergence,grazzi:2020:iterationcomplexityhypergradient}.
\sref{sec: ift unrolled comparison} further discusses gradient accuracy.
\subsection{Computational Costs \label{sec: ift complexity} }
The largest cost in computing the gradient
of the upper-level loss using
\eqref{eq: IFT final gradient dldparams}
is often finding
(an approximation of)
\xhat.
However,
this cost is difficult to quantify,
as the IFT approach is agnostic
to the lower-level optimization methodology.
To compare the bilevel gradient methods,
we will later assume the cost is comparable to the gradient descent
calculations used in the unrolled approach
(described in the following section).
However, this overestimates the cost,
as the IFT approach is not constrained to smooth lower-level updates,
and one can use optimization methods with,
\eg, warm starts and restarts
to reduce this cost.
When the lower-level problem satisfies
the assumptions above,
and assuming one has already found \xhat,
a straightforward approach to computing
the gradient \eqref{eq: IFT final gradient dldparams}
would be dominated by the
$\order{\sdim^3}$
operations required to compute the Hessian's inverse.
For many problems,
\sdim is large, and that matrix inversion is infeasible
due to computation or memory requirements.
Instead,
as described in \citep{do:2007:efficientmultiplehyperparameter},
one can use a conjugate gradient (\ac{CG}) method
to compute the matrix-vector product
\begin{equation}
\Hinv \dx{\lfcn(\params \,; \xhat)} \label{eq: Hinv step for CG}
\end{equation}
because the Hessian is symmetric and positive definite
(see assumption \#4 in the previous section).
For a generic \mA,
each CG iteration requires multiplying the Hessian by a vector, which is
\order{\sdim^2}.
CG takes at most \sdim iterations to converge fully
(ignoring finite numerical precision),
so the final complexity is still \order{\sdim^3} in general.
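As a minimal sketch of this matrix-free strategy (with a random symmetric positive definite matrix standing in for the Hessian, rather than the actual filter-learning Hessian), one can wrap the Hessian-vector product in a SciPy \texttt{LinearOperator} and hand it to CG, so the Hessian is applied to vectors but never formed, stored densely, or inverted:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
N = 50
B = rng.standard_normal((N, N))
H = B.T @ B + np.eye(N)        # SPD stand-in for the lower-level Hessian
g = rng.standard_normal(N)     # stand-in for grad_x of the loss at xhat

# matrix-free Hessian application; for the filter-learning example this
# would apply A'(A v) and the filtering terms without forming H
Hop = LinearOperator((N, N), matvec=lambda v: H @ v)

sol, info = cg(Hop, g, maxiter=5 * N)   # info == 0 on convergence
```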
However, the Hessian often has a special structure
that %
simplifies computing
the matrix-vector product.
Consider the running example of learning filters
per \eqref{eq: bilevel for analysis filters}.
The Hessian, as given in \eqref{eq: nablas for filter learning},
multiplied with any vector
$\vv \in \F^N$
is
\begin{align}
\nabla_{\vx \vx} \ofcn(\xhat; \params, \vy) \cdot \vv =&
\underbrace{\mA' (\mA \vv) }_{2 \sdim^2}
+ \nonumber \\
&\ebeta{0} \sum_k \ebeta{k}
\underbrace{ \Ck' \cdot }_{\sdim \filterdim}
\overbrace{
\diag{\ddsparsefcn.(\underbrace{\hk \conv \xhat}_{\sdim \filterdim})}
\cdot
}^{\sdim}
\underbrace{(\mC_k \vv)}_{\sdim \filterdim}
. \label{eq: Hessian with computational cost}
\end{align}
The annotations show the multiplications required
for each component,
where
we used the simplifying assumption that
the number of measurements matches the number of unknowns
($\ydim=\sdim$).
As written,
\eqref{eq: Hessian with computational cost} does not make any assumptions about \mA, so the first
term is still computationally expensive.
If \mA is the identity matrix (as in denoising),
then the $\sdim^2$ term vanishes.
If $\mA' \mA$ is circulant, \eg,
if \mA is an MRI sampling matrix that can be written in terms of a discrete Fourier transform,
then the cost is $\sdim \log{\sdim}$.
More generally, the computational cost of each of the (up to \sdim) CG iterations is
\order{\cA \sdim},
where $\cA \in [0,\sdim]$ is a factor
that depends on the structure of \mA.
For the second addend in \eqref{eq: Hessian with computational cost},
we assume that
$\filterdim \ll \sdim$,
so direct convolution is most efficient
and the matrix-vector product
requires \order{\sdim \filterdim} multiplies.
When the filters are relatively large,
one can use Fourier transforms for the filtering,
and the cost is \order{\sdim \log \sdim}.
The final cost
of the Hessian-vector product
for \eqref{eq: bilevel for analysis filters}
is \order{\cA \sdim + \paramsdim \sdim}.
This cost includes a multiplication by $K$
to account for the sum over all filters,
which simplifies since $\filterdim K$ is%
\footnote{
The full parameter dimension includes the filters and tuning parameters,
so $\paramsdim = \filterdim (K+1) + 1$.
}
\order{\paramsdim}.
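The circulant case above can be verified numerically. The following sketch (toy sizes, unrelated to any specific \mA) checks that multiplying by a dense circulant matrix agrees with pointwise multiplication in the FFT domain, which is what reduces the cost from \order{\sdim^2} to \order{\sdim \log \sdim}:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
c = rng.standard_normal(N)     # first column of a circulant matrix C
v = rng.standard_normal(N)

# dense circulant matrix: column j is c circularly shifted by j
C = np.column_stack([np.roll(c, j) for j in range(N)])
direct = C @ v                 # O(N^2) matrix-vector product

# FFT route: C = F^{-1} diag(F c) F, so the product costs O(N log N)
fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)).real
```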
\section{Translation to a Single Level \label{sec: translation to a single level}}
Before discussing the other widely used approach
to calculating the gradient of the upper-level loss,
we %
summarize %
a specialized approach for 1-norm regularizers.
Like the minimizer approach described above,
this approach
assumes we have computed an (almost) exact minimizer
of the lower-level cost function.
It writes the minimizer as an
(almost everywhere) %
differentiable function
in terms of that
\xhat,
then substitutes this expression into the upper-level loss
to create a
single-level optimization problem
that is suitable
for one hyperparameter update step.
Ref. \citep{sprechmann:2013:supervisedsparseanalysis} proposed the
translation to a single-level approach
to solve a bilevel problem
with both synthesis and analysis operators.
\citet{mccann:2020:supervisedlearningsparsitypromoting}
recently presented a version
specific to analysis operators.
Here, we summarize the approach in \citep{mccann:2020:supervisedlearningsparsitypromoting}
that avoids
an auxiliary variable
from splitting the one-norm sparsity penalty into two terms in \citep{sprechmann:2013:supervisedsparseanalysis}.
The specific bilevel problem considered in \citep{mccann:2020:supervisedlearningsparsitypromoting} is:
\begin{align}
&\argmin_{\params} \sum_\xmath{j} \onehalf \normrsq{\xhat_\xmath{j}(\params) - \xtrue_\xmath{j}}_2
\nonumber\\
\hat{\vx}_j(\params) = &\argmin_{\vx} \onehalf \normsq{\mA \vx - \vy_j}_2 + \beta \norm{\mOmega_\params \vx}_1,
\label{eq:mccann-lower}
\end{align}
where $\mOmega_\params$ is a matrix constructed based on \params,
\eg, as a convolutional matrix as in our running example.
(In the results, \citep{mccann:2020:supervisedlearningsparsitypromoting}
parameterizes $\mOmega_\params$ to have $3\by3$ convolutional filters as the learnable parameters.)
We write \mOmega in the following discussion to simplify notation.
Eqn.~\eqref{eq:mccann-lower} slightly generalizes
\cite{mccann:2020:supervisedlearningsparsitypromoting}
by considering an inverse problem,
rather than just denoising
where $\mA = \I$.
As in the minimizer approach,
the first step is to compute
\xhatp
for the current guess of \params,
\eg,
using a fast proximal gradient method
\cite{kim:18:aro}
or ADMM
(as used in
\citep{mccann:2020:supervisedlearningsparsitypromoting}).
One can then re-write the 1-norm
in \eqref{eq:mccann-lower}
based on the sign pattern of
$\mOmega \xhatp$
since
\[
\norm{\mOmega \xhatp}_1 =
\sum_{i \in \cI_+(\params)} [\mOmega \xhatp]_i -
\sum_{i \in \cI_-(\params)} [\mOmega \xhatp]_i
,\]
where
$\cI_+(\params)$
and
$\cI_-(\params)$
denote the set of indices where
$\mOmega \xhatp$ is positive
and negative, respectively.
Ref. \citep{mccann:2020:supervisedlearningsparsitypromoting}
defines a diagonal sign matrix,
$
\mS(\params) = \diag{ \sign{\mOmega \xhatp} }
$,
having positive and negative diagonal elements
at the
appropriate
indices.
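A quick numerical check of this sign-pattern identity, with a random matrix and vector standing in for $\mOmega$ and \xhatp:

```python
import numpy as np

rng = np.random.default_rng(3)
Omega = rng.standard_normal((20, 10))   # stand-in for the learned Omega_theta
x = rng.standard_normal(10)             # stand-in for xhat(theta)

z = Omega @ x
S = np.diag(np.sign(z))                 # S(theta) = diag(sign(Omega xhat))

one_norm = np.abs(z).sum()              # ||Omega x||_1
signed_form = np.ones(len(z)) @ S @ z   # 1' S(theta) Omega x
```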
The lower-level problem
\eqref{eq:mccann-lower}
is thus equivalent to
\begin{align}
\hat{\vx}_j(\params) =
\argmin_{\vx} \onehalf \normsq{\mA \vx - \vy_j}_2
+ \beta \mat{1}' \mS(\params) \mOmega \vx ,
\text{ s.t. } [\mOmega \vx]_{\cI_0(\params)} = \mat{0},
\label{eq: mccann rewritten}
\end{align}
where
$\cI_0(\params)$ denotes
the set of indices where $[\mOmega \xhat(\params)]_i = 0$.
The solution to the rewritten optimization problem
\eqref{eq: mccann rewritten}
is the
same as the original version
\eqref{eq:mccann-lower}
with the 1-norm.
(The rewriting process requires \xhatp,
so one cannot use the equivalence to solve the lower-level problem.)
However, the rewritten problem has a simple structure:
it is a quadratic cost function
with a linear equality constraint.
Therefore,
using KKT conditions,
if $\mA'\mA$ is positive definite,
then
the minimizer of \eqref{eq: mccann rewritten}
is the solution to a
linear system of equations
\[
\begin{bmatrix}
\mA'\mA & \mOmega_0'(\params) \\ \mOmega_0(\params) & \mat{0}
\end{bmatrix}
\begin{bmatrix}
\xhatp
\\
\vnu
\end{bmatrix}
= \begin{bmatrix}
\mA'\vy - \beta \mOmega' \mS'(\params) \mat{1} \\ \bmath{0}
\end{bmatrix}
,\]
where $\mOmega_0$ denotes the rows of \mOmega
corresponding to the indices
$\cI_0(\params)$
and \vnu is a vector of Lagrange multipliers.
See \cite[Sect.~10.1.1]{boyd:04}
for discussion of existence and uniqueness
of solutions to this linear system.
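The following sketch builds and solves this block system for invented data (the sign pattern and zero set $\cI_0$ are chosen arbitrarily here, whereas in the actual method they come from $\mOmega \xhatp$), then verifies the stationarity and feasibility conditions; $\mA'\mA$ is made positive definite and $\mOmega_0$ has full row rank, so the system has a unique solution:

```python
import numpy as np

rng = np.random.default_rng(6)
N, R, beta = 8, 5, 0.1
A = rng.standard_normal((N, N)) + 3 * np.eye(N)   # makes A'A positive definite
y = rng.standard_normal(N)
Omega = rng.standard_normal((R, N))                # stand-in for Omega_theta

I0 = [1, 3]                                        # hypothetical zero set I_0
s = np.sign(rng.standard_normal(R))                # hypothetical sign pattern
s[I0] = 0.0                                        # S is zero on the I_0 rows
Omega0 = Omega[I0, :]                              # rows of Omega indexed by I_0

# KKT system for the equality-constrained quadratic problem
K = np.block([[A.T @ A, Omega0.T],
              [Omega0, np.zeros((len(I0), len(I0)))]])
rhs = np.concatenate([A.T @ y - beta * Omega.T @ s, np.zeros(len(I0))])
sol = np.linalg.solve(K, rhs)
x, nu = sol[:N], sol[N:]

feas = np.linalg.norm(Omega0 @ x)      # constraint residual: [Omega x]_{I_0} = 0
stat = np.linalg.norm(A.T @ (A @ x - y) + beta * Omega.T @ s
                      + Omega0.T @ nu)  # Lagrangian stationarity residual
```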
Because of the discontinuity of the sign function
in \eqref{eq: mccann rewritten},
\xhatp is not everywhere differentiable in \params.
However,
\citep{mccann:2020:supervisedlearningsparsitypromoting}
states %
that \xhatp is differentiable everywhere
except a set of measure zero
when $\mA = \I$
and when $\mOmega_0$ has full row rank.
Then
\citep{mccann:2020:supervisedlearningsparsitypromoting}
uses matrix calculus
to derive
a gradient expression for \xhatp
that is
suitable for
standard automatic differentiation software tools.
In summary,
the translation to a single level approach
involves computing \xhat,
creating a closed-form expression for \xhat,
and then differentiating the closed-form
expression to compute
the desired gradient,
$\nabla_\params \xhat(\params)$.
In terms of computation,
the approach
requires optimizing the lower-level cost
and then two calls to a CG method
as part of calculating the gradient,
making the cost similar to the minimizer approach.
\section{Unrolled Approaches \label{sec: unrolled}}
Another popular approach to finding
$\dParams{\xhatargs}$
is to
assume that the lower-level cost function is approximately minimized
by applying $T$ iterations of some (sub)differentiable optimization algorithm,
where we write the update step at iteration
$t \in [1 \ldots T]$
as
\begin{equation*}
\vx^{(t)} = \optalgstep (\vx^{(t-1)} \,; \params),
\end{equation*}
for some mapping
$\optalgstep: \F^N \mapsto \F^N$
that should have the fixed-point property
$\optalgstep(\xhatp \,; \params) = \xhatp$.
For example, GD has
\(
\optalgstep(\vx \,; \params) = \vx - \alpha \dx{\ofcnargs}
\)
for some step size $\alpha$.
(Algorithms with momentum depend on earlier iterates like
$\vx^{(t-2)}$ as well,
but we omit that dependence
to simplify notation.)
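For a generic strongly convex quadratic (a stand-in for the lower-level cost, not the filter-learning example), the following sketch verifies the fixed-point property of the GD mapping \optalgstep and that repeated application approaches the minimizer:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 6
B = rng.standard_normal((N, N))
Q = B.T @ B + np.eye(N)            # SPD, so 0.5 x'Qx - b'x is strongly convex
b = rng.standard_normal(N)
xstar = np.linalg.solve(Q, b)      # exact minimizer

alpha = 1.0 / np.linalg.norm(Q, 2) # step size 1/L

def step(x):
    # GD mapping Psi(x) = x - alpha * gradient of 0.5 x'Qx - b'x
    return x - alpha * (Q @ x - b)

# fixed-point property: Psi(xhat) = xhat
fp_error = np.linalg.norm(step(xstar) - xstar)

# T unrolled iterations approach the minimizer
x = np.zeros(N)
for _ in range(2000):
    x = step(x)
iter_error = np.linalg.norm(x - xstar)
```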
In contrast to the two approaches described above,
this ``unrolled'' approach no longer assumes the solution to the lower-level problem
is an exact minimizer.
Instead, the unrolled approach reformulates the bilevel problem \eqref{eq: generic bilevel lower-level} as
\begin{align}
\argmin_\params
&\underbrace{\lfcn \left(\params \, ; \, \vx^{(T)}(\params) \right)}_{
\lfcn(\params)}
\text{ s.t. }
\label{eq: unrolled upper-level} \\
&\vx^{(t)}(\params) = \optalgstep(\vx^{(t-1)} \,; \params)
,\quad \forall t \in [1 \ldots T] \nonumber,
\end{align}
where
$\vx^{(0)}$ is an initialization,
\eg, the pseudo-inverse of \mA times \vy.
One can then take the (sub)gradient of a finite number $T$ of iterations
of \optalgstep,
hoping that
$\vx^{(T)}$
approximately minimizes the lower-level function, \ofcn.
The chain rule for derivatives is the foundation of the unrolled method.
The gradient of interest, \uppergrad,
depends on %
the gradient of the optimization algorithm step
with respect to \vx and \params.
For readability, define the following
gradient matrices for the $t$th unrolled iteration
\begin{align}
\mH_{t} &\defeq \franA{t-1} \in \F^{\sdim \by \sdim}
\text{ and }
\mJ_{t} \defeq \franB{t-1} \in \F^{\sdim \by \paramsdim}
\nonumber
,\end{align}
for $t \in [1,T]$.
We use these letters because,
when using gradient descent as the optimization algorithm,
$\nabla_\vx \optalgstep(\vx \, ; \params)$ is
closely related to %
the Hessian of \ofcn
and
$\nabla_\params \optalgstep(\vx \, ; \params)$
is the Jacobian of the gradient.
Thus, when \optalgstep corresponds to GD,
an unrolled approach involves computing the same quantities
as required by the IFT approach
\eqref{eq: IFT final gradient dldparams}.
\renewcommand{\franA}[1] {\xmath{\mH_{#1}}}
\renewcommand{\franB}[1] {\xmath{\mJ_{#1}}}
By the chain rule,
the gradient of \eqref{eq: unrolled upper-level} is
\begin{align}
\uppergrad
=&
\nabla_\params \lfcn(\params \, ; \vx^{(T)}) +
\left( \sum_{t=1}^T \left(\franA{T} \cdots \franA{t+1} \right) \franB{t} \right)'
\finalterm \in \F^{\paramsdim}.
\label{eq: generic lower-level chain rule}
\end{align}
One can derive this gradient expression
\comment{%
Ref.
\citep{franceschi:2017:forwardreversegradientbased}
uses row vectors, so the formulas appear different.
}%
using a reverse or forward perspective,
with parallels to
back-propagation through time
and
real-time recurrent learning
respectively
\citep{franceschi:2017:forwardreversegradientbased}.
We present the update here in terms of \vx for simplicity, but
the idea easily extends to updates in terms of a state vector %
that allows one to include momentum terms, weights, and other accessory variables
in \params
\citep{franceschi:2017:forwardreversegradientbased}.
Unlike the minimizer approach,
where the goal is to run the lower-level
optimization until (close to) convergence,
most unrolling methods set $T$ in advance.
This set number of lower-level iterations
mimics the set depth of CNNs and
allows a precise estimate of how much computational effort
each lower-level optimization takes.
However,
if $T$ is relatively small,
the lower-level optimization may not be near convergence,
and the unrolled method is only loosely tied
to the original bilevel optimization problem.
To maintain a strong connection to the bilevel problem
while avoiding setting $T$ larger than necessary for convergence,
\citep{antil:2020:bileveloptimizationdeep}
used a convergence criterion to determine
the number of \optalgstep iterations
rather than pre-specifying a number of iterations.
\sref{sec: connections unrolled}
further discusses the relation between
bilevel problems
and
unrolling methods in the broader literature
\cite{monga:21:aui}.
\subsection{Computational and Memory Complexity \label{sec: unrolled complexity}}
To compare the forward and reverse approaches
to gradient computation for unrolled methods,
we introduce notation for
an ordered product of matrices.
We indicate
the arrangement of the multiplications
by the set endpoints,
$s \in [ s_1 \leftrightarrow s_2 ]$
with the left endpoint, $s_1$,
corresponding to the index for the left-most matrix in the product
and the right endpoint, $s_2$,
corresponding to the right-most matrix.
Thus, for any sequence of square matrices
$\{\mA\}_i$:
\comment{
\begin{align*}
\left( \prod_{s \in \left[ T \leftrightarrow t \right]} \mA_s \right)'
\defeq
\left(\mA_T \mA_{T-1} \cdots \mA_{t}\right)'
=
\mA_{t}' \mA_{t+1}' \cdots \mA_T'
=
\prod_{s \in \left[ t \leftrightarrow T \right]} \mA_s'
.
\end{align*}
}
\begin{align*}
\prod_{s \in \left[ t \leftrightarrow T \right]} \mA_s
\defeq
\mA_{t} \mA_{t+1} \cdots \mA_T
=
\left(\mA_T' \mA_{T-1}' \cdots \mA_{t}' \right)'
=
\left( \prod_{s \in \left[ T \leftrightarrow t \right]} \mA_s' \right)'
.
\end{align*}
The above double arrow notation does not indicate order of operations.
In the following notation
the arrow direction
does not affect the product result
(ignoring finite precision effects),
but rather signifies the direction (order) of calculation:
\begin{align*}
\prod_{s \in \left[ T \leftarrow t \right]} \mA_s
&\defeq \mA_T \left( \mA_{T-1} \cdots \left( \mA_{t+1} \left( \mA_{t} \right) \right) \right)
\\
\prod_{s \in \left[ T \rightarrow t \right]} \mA_s
&\defeq \left( \left( \left( \mA_T \mA_{T-1} \right) \cdots \right) \mA_{t+1} \right) \mA_{t}
.\end{align*}
We use a similar arrow notation to denote the order that terms are computed for sums;
as above, the order is only important for computational considerations
and does not affect the final result.
\begin{figure}[htb]
\centering
\ifloadepsorpdf
\includegraphics[]{unrolledreverse}
\else
\input{tikz,unrolledreverse}
\fi
\iffigsatend \figuretag{4.1} \fi
\caption{
Reverse mode computation of the unrolled gradient from
\eqref{eq: reverse mode}.
The first gradient computation requires
$\vx^{(T)}$,
so all computations occur after
the lower-level optimization algorithm is complete.
The final gradient is
$\uppergrad = \nabla_\params \lfcn(\params \, ; \vx^{(T)}) + \vr$.
}
\label{fig: unrolled reverse-mode}
\end{figure}
Using this notation,
the reverse gradient calculation of
\eqref{eq: generic lower-level chain rule}
is
\begin{align}
\nabla_\params \lfcn(\params \, ; \vx^{(T)}) +
\sum_{t \in [T \rightarrow 1]} \franB{t}{}'
\left( \prod_{s \in [(t+1) \leftarrow T]} \franA{s}' \right)
\finalterm. \label{eq: reverse mode}
\end{align}
This expression requires
$\prod_{s \in [(T+1) \leftarrow T]} \franA{s}' = \I$,
because \franA{T+1} is not defined.
For example,
for $T=3$, we have
\[
\nabla_\params \lfcn(\params \, ; \vx^{(3)}) +
\underbrace{\franB{3}' (\I) \vg}_{t=3} +
\underbrace{\franB{2}' \paren{\franA{3}'} \vg}_{t=2} +
\underbrace{\franB{1}' \paren{\franA{2}'\franA{3}'}\vg}_{t=1}
,\]
where
\vg is shorthand for \finalterm here.
This version is called reverse mode because all computations (arrows) begin at the final iteration, $T$.
The primary benefit of the reverse mode comes from the ability to group
\finalterm with the right-most \franA{T},
such that all products are matrix-vector products,
as seen in \fref{fig: unrolled reverse-mode}.
Further,
one can save the matrix-vector products
for use during the next iteration
and avoid duplicating the computation.
Continuing the example for $T=3$, we have
\[
\nabla_\params \lfcn(\params \, ; \vx^{(3)}) +
\underbrace{\franB{3}' (\I) \vg}_{t=3} +
\underbrace{\franB{2}' (\overbrace{\franA{3}' \vg}^{\bmath{\Delta}})}_{t=2} +
\underbrace{\franB{1}' (\franA{2}' \overbrace{\paren{\franA{3}'\vg}}^{\bmath{\Delta}} ) }_{t=1}
,\]
where one only needs to compute $\bmath{\Delta}$ once.
This ability to rearrange the parentheses
to compute matrix-vector products
greatly decreases the computational requirement
compared to matrix-matrix products.
Excluding the costs of the optimization algorithm steps
and forming the \franA{s} and \franB{t} matrices
(these costs will be the same in the forward mode computation),
reverse mode requires
$\order{T}$
Hessian-vector multiplies %
and
$\order{T \sdim \paramsdim}$
additional multiplies.
The trade-off is that
reverse mode requires storing all $T$ iterates,
$\vx^{(t)}$,
so that one can compute
the corresponding Hessians and Jacobians
from them as needed,
and thus has a memory complexity
$\order{T \sdim}$.
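The reverse-mode computation can be sketched in Python for a toy lower-level cost $\onehalf \normsq{\mA \vx - \vy}_2 + \frac{e^\theta}{2} \normsq{\vx}_2$ with a scalar parameter $\theta$ (a hypothetical stand-in for \params). The forward pass stores all $T$ iterates; the reverse pass accumulates $\bmath{\Delta}$ using only matrix-vector products, and the result is checked against finite differences of the unrolled loss:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, T = 12, 8, 400
A = rng.standard_normal((M, N))
xtrue = rng.standard_normal(N)
y = A @ xtrue
theta = 0.2

Q = A.T @ A + np.exp(theta) * np.eye(N)   # lower-level Hessian
alpha = 1.0 / np.linalg.norm(Q, 2)        # GD step size

def step(x, th):
    # GD mapping Psi(x; theta) for 0.5||Ax - y||^2 + 0.5 exp(theta)||x||^2
    return x - alpha * (A.T @ (A @ x - y) + np.exp(th) * x)

# forward pass: store all T iterates (the O(T N) memory cost of reverse mode)
xs = [np.zeros(N)]
for _ in range(T):
    xs.append(step(xs[-1], theta))

# reverse pass: delta accumulates H_T' ... H_{t+1}' g via matrix-vector products
H = np.eye(N) - alpha * Q                    # H_t = grad_x Psi (constant here)
delta = xs[-1] - xtrue                       # g = grad_x loss at x^(T)
grad = 0.0
for t in range(T, 0, -1):
    Jt = -alpha * np.exp(theta) * xs[t - 1]  # J_t = grad_theta Psi at x^(t-1)
    grad += Jt @ delta
    delta = H.T @ delta

# finite-difference check on the unrolled loss
def unrolled_loss(th):
    x = np.zeros(N)
    for _ in range(T):
        x = step(x, th)
    return 0.5 * np.sum((x - xtrue) ** 2)

eps = 1e-6
fd = (unrolled_loss(theta + eps) - unrolled_loss(theta - eps)) / (2 * eps)
```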
\begin{figure}[htb]
\centering
\ifloadepsorpdf
\includegraphics[]{unrolledforward}
\else
\input{tikz,unrolledforward}
\fi
\iffigsatend \figuretag{4.2} \fi
\caption{Forward mode computation of the unrolled gradient
from \eqref{eq: forward mode}.
The intermediate computation matrix, \mZ,
is initialized to zero ($\mZ_0 = \bmath{0}$)
then updated every iteration.
The final gradient is
$\uppergrad = \nabla_\params \lfcn(\params \, ; \vx^{(T)}) + \mZ_T' \finalterm$.
}
\label{fig: unrolled forward-mode}
\end{figure}
The forward mode calculation of
\eqref{eq: generic lower-level chain rule},
depicted in \fref{fig: unrolled forward-mode},
has all computations (arrows) starting at the earlier iterate:
\begin{align}
\nabla_\params \lfcn(\params \, ; \vx^{(T)})
+
\left( \sum_{t\in [1\rightarrow T]}
\left( \prod_{s \in [T \leftarrow (t+1)] } \franA{s} \right) \franB{t} \right)' \finalterm.
\label{eq: forward mode}
\end{align}
As before, \franA{T+1} is not defined,
so we take
$\prod_{s \in [T \leftarrow (T+1)] } \franA{s} = \I$.
For example,
for $T=3$ we have
\begin{align*}
\nabla_\params& \lfcn(\params \, ; \vx^{(T)}) +
\paren{
\underbrace{\paren{(\franA{3} \franA{2} ) \franB{1}}' }_{t=1} +
\underbrace{\paren{(\franA{3} ) \franB{2}}' }_{t=2} +
\underbrace{\paren{(\I ) \franB{3}}' }_{t=3}
}
\vg
.\end{align*}
How the forward mode
avoids storing \vx iterates
is evident after
rearranging the parentheses to avoid
duplicate calculations,
as illustrated in \fref{fig: unrolled forward-mode}.
Continuing the example for $T=3$,
we have
\[
\nabla_\params \lfcn(\params \, ; \vx^{(T)}) +
\left[
\underbrace{\franA{3}
\paren{
\overbrace{
\franA{2}
\underbrace{\paren{
\franA{1} \cdot \bmath{0} + \franB{1}
}}_{\mZ_1}
+ \franB{2}
}^{\mZ_2}
}
+ \franB{3}
}_{\mZ_3}
\right]' \vg
,\]
where
$\mZ_{s} = \franA{s} \mZ_{s-1} + \franB{s} \in \F^{\sdim \by \paramsdim}$
stores the intermediate calculations.
The above formula
also illustrates why \franA{1}
is not needed in \eqref{eq: unrolled upper-level};
\dParams{\vx^{(0)}} = \bmath{0}
is the last element from applying the chain rule.
There is no way to rearrange the terms
in the forward mode formula
to achieve matrix-vector products
(while preserving the computation order).
Therefore, the computation requirement is much higher at
\order{T \paramsdim}
Hessian-vector multiplications.
The corresponding benefit of the
forward mode method is
that it does not require storing iterates,
thus decreasing
(in the common case when $T > \paramsdim$)
the memory requirement to \order{\sdim \paramsdim}
for storing the intermediate matrix
$\mZ_s$
during calculation.
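For the same toy problem as in the reverse-mode case (scalar $\theta$, so each $\mZ_t$ is a single column), the forward-mode recursion $\mZ_t = \mH_t \mZ_{t-1} + \mJ_t$ can be sketched as follows; note that no \vx iterates are stored:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, T = 12, 8, 400
A = rng.standard_normal((M, N))
xtrue = rng.standard_normal(N)
y = A @ xtrue
theta = 0.2

Q = A.T @ A + np.exp(theta) * np.eye(N)
alpha = 1.0 / np.linalg.norm(Q, 2)

def step(x, th):
    # GD mapping for 0.5||Ax - y||^2 + 0.5 exp(theta)||x||^2
    return x - alpha * (A.T @ (A @ x - y) + np.exp(th) * x)

H = np.eye(N) - alpha * Q          # grad_x of the step (constant for this cost)

# forward accumulation runs alongside the x updates: no iterates are stored
x = np.zeros(N)
Z = np.zeros(N)                    # Z_0 = 0; a single column since theta is scalar
for _ in range(T):
    J = -alpha * np.exp(theta) * x # grad_theta of the step at the current iterate
    Z = H @ Z + J                  # Z_t = H_t Z_{t-1} + J_t
    x = step(x, theta)
grad = Z @ (x - xtrue)             # Z_T' times the final term g

# finite-difference check on the unrolled loss
def unrolled_loss(th):
    xx = np.zeros(N)
    for _ in range(T):
        xx = step(xx, th)
    return 0.5 * np.sum((xx - xtrue) ** 2)

eps = 1e-6
fd = (unrolled_loss(theta + eps) - unrolled_loss(theta - eps)) / (2 * eps)
```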
As with the IFT/Lagrangian approach,
the computational complexity of the unrolled approach
is lower than the generic bound
when we consider the specific example of
learning convolutional filters according to
\eqref{eq: bilevel for analysis filters}.
Nevertheless,
the general comparison that reverse mode takes more
memory but less computation holds true.
See \tref{tab: ift and unrolled complexities}
for a full comparison of the
computational and memory complexities.
Neural network training methods
commonly use the reverse method,
backpropagation,
to compute \eqref{eq: generic lower-level chain rule}
because it requires less computation
than forward mode.
However,
in some problems with large dimensions,
storing all the iterates is infeasible.
A strategy to trade-off the
memory and computation requirements is
checkpointing,
which stores \vx every few iterations.
Checkpointing is an active research area;
see
\citep{dauvergne:2006:dataflowequationscheckpointing}
for an overview.
\section{Summary \label{sec: ift unrolled comparison}}
This section focused on computing
\uppergrad,
the gradient of the upper-level
loss function with respect to the learnable parameters.
\cref{chap: bilevel methods}
builds on this foundation to
consider optimization methods for bilevel problems.
Many of those optimization methods
are agnostic to whether one uses
the minimizer,
translation to a single level,
or unrolled approaches
to compute \uppergrad.
Thus, how one selects an approach may depend on
the structure of the specific bilevel problem,
how closely tied one wishes to be to the original bilevel problem,
computational cost,
and/or
gradient accuracy.
The translation to a single level
approach is tailored to a
specific bilevel problem.
A benefit of the translation approach
is the ability to use the 1-norm
(without any corner rounding)
in the lower-level cost function.
However, the corresponding drawback
is the (current) lack of generality of the translation approach;
the closed-form expression derived in
\citep{sprechmann:2013:supervisedsparseanalysis,mccann:2020:supervisedlearningsparsitypromoting}
is specific to using the 1-norm as \sparsefcn
and applications where \mA has full column rank,
like denoising problems,
rather than inverse problems,
like compressed sensing,
where \mA is wide.
Expanding this approach to regularizers
other than the 1-norm is a possible
avenue for future work.
One difference among the methods
is whether they depend on %
the lower-level optimization algorithm;
while the unrolled approach depends on the specific optimization algorithm,
the minimizer approach
and the translation to a single level approach do not.
A resulting downside of unrolling is that
one cannot use techniques such as
warm starts and non-differentiable restarts,
so $\vx^{(T)}$ may be farther from the minimizer
than the approximation from a similar number of iterations
of a more sophisticated, non-differentiable update method.
However, the unrolled method's dependence on \optalgstep is also a benefit,
as an unrolled method
can be applied to non-smooth cost functions,
as long as the resulting update mapping
\optalgstep
is smooth.
Another advantage of unrolling
is that one can run a set number of iterations of the optimization algorithm,
without having to reach convergence,
and still calculate a valid gradient,
thus avoiding the issue of the second constraint for the minimizer approach.
Particularly in image reconstruction problems,
where finding \xhat exactly can be time intensive,
the benefit of a more flexible run-time could outweigh the disadvantages.
However,
the corresponding downside of unrolling is that
the learned hyperparameters
are less clearly tied to the original cost function
than when one uses the minimizer approach.
\sref{sec: connections unrolled}
further discusses this point
in connection to how unrolling
for bilevel methods can differ
from (deep) learnable optimization algorithms.
Gradient accuracy and
computational cost are, unsurprisingly,
trade-offs.
\tref{tab: ift and unrolled complexities}
summarizes the cost
of the minimizer and unrolled approaches,
derived in \sref{sec: ift complexity}
and \sref{sec: unrolled complexity} respectively,
but
the total computation will depend on the
required gradient accuracy.
By accuracy,
we mean error from the true bilevel gradient
\[
\lVert \underbrace{\xmath{\hat{\nabla}}_T \lfcn(\params)}_{\substack{\text{Estimated} \\ \text{ gradient}}}
- \underbrace{\nabla \lfcn(\params)}_{\substack{ \text{True bilevel} \\ \text{gradient}}} \rVert
,\]
where $T$ is the number of lower-level optimization steps.
\begin{table}[tbh]
\ifloadeps
\includegraphics[]{TablesAndAlgs/epsfiles/tab,iftunrolledcomplexity.eps}
\else
\input{TablesAndAlgs/tab,iftunrolledcomplexity}
\fi
\iffigsatend \tabletag{4.1} \fi
\caption{
Memory and computational complexity of the minimizer approach
\eqref{eq: IFT final gradient dldparams},
reverse-mode unrolled approach \eqref{eq: reverse mode}, and
forward-mode unrolled approach \eqref{eq: forward mode} to computing \dParams{\lfcnargs}.
Computational costs do not include running the optimization algorithm
(typically expensive but often comparable across methods),
computing \finalterm (typically cheap),
or computing \dParams{\lfcn(\params \, ; \vx)} (frequently zero).
Memory requirements do not include
storing a single copy of \vx, \mA, \params, \franA, and \franB{},
because all methods require those.
Recall $\vx \in \F^\sdim$, $\params \in \F^\paramsdim$, %
and there are $T$ iterations of the lower-level optimization algorithm for the unrolled method. Hessian-vector products (first row) and Hessian-inverse-vector products (middle row) are listed
separately from all other multiplications
(last row)
as the computational cost of Hessian operations
can vary widely;
see discussion in \sref{sec: ift complexity}.
}
\label{tab: ift and unrolled complexities}
\end{table}
The unrolled gradient is always accurate for the
unrolled mapping, %
but not for the original bilevel problem.
Therefore,
unrolling may be more computationally feasible
when one cannot run a sufficient number of lower-level optimization steps
to reach close enough to a minimizer
to assume the gradient is approximately zero
\citep{kobler:2020:totaldeepvariation}.
When using the unrolling approach,
one must decide between the forward-mode and reverse-mode
gradient calculation.
Most approaches use the reverse-mode
due to the smaller computational burden,
but unrolling with reverse mode differentiation may have
prohibitively high memory requirements
if one plans to run many lower-level iterates
or if the training dataset includes large images
\citep{chambolle:2021:learningconsistentdiscretizations}.
A check-pointing strategy,
which is a hybrid of the reverse-mode and forward-mode
gradient calculations,
or the use of reversible network layers
\citep{kellman:2020:memoryefficientlearninglargescale},
can trade off
the memory and computational requirements.
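As a minimal sketch of the check-pointing idea, the Python snippet below unrolls $T = 64$ gradient steps on a hypothetical scalar ridge problem (illustrative names, not the text's notation), stores only every $K$-th iterate during the forward pass, and recomputes each $K$-step segment from its checkpoint during the reverse pass.

```python
# Sketch of checkpointed reverse-mode unrolling on a hypothetical scalar
# ridge-denoising problem (illustrative names and constants):
#   lower-level GD step: x <- x - alpha*((1 + theta)*x - y)
y, x_true, theta, alpha = 2.0, 1.0, 0.5, 0.3
T, K = 64, 8                       # T unrolled steps, checkpoint every K steps

def step(x):
    return x - alpha * ((1 + theta) * x - y)

# Forward pass: store only every K-th iterate instead of all T iterates.
ckpt, x = {0: 0.0}, 0.0
for t in range(T):
    x = step(x)
    if (t + 1) % K == 0:
        ckpt[t + 1] = x

# Reverse pass: recompute each K-step segment from its checkpoint, then
# backpropagate; d(step)/dx = 1 - alpha*(1+theta), d(step)/dtheta = -alpha*x.
bar_x, bar_theta = x - x_true, 0.0          # seed: d loss / d x_T
for s in range(T - K, -1, -K):
    seg = [ckpt[s]]
    for _ in range(K - 1):
        seg.append(step(seg[-1]))           # recomputed x_s, ..., x_{s+K-1}
    for x_t in reversed(seg):
        bar_theta += bar_x * (-alpha * x_t)
        bar_x *= 1 - alpha * (1 + theta)
print(f"checkpoints stored: {len(ckpt)} of {T + 1} iterates; gradient {bar_theta:.6f}")
```

Here the stored state drops from $T+1$ iterates to $T/K + 1$ checkpoints, at the price of roughly one extra forward sweep of recomputation.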
In all of the approaches considered,
the accuracy of the estimated hyperparameter gradient
in turn
depends on the solution accuracy or number of unrolled iterations
of the lower-level cost function.
Ref.
\citep{mccann:2020:supervisedlearningsparsitypromoting}
notes that their approach
of translating the bilevel problem to a single level
failed when they did not
optimize the lower-level problem
to sufficient accuracy.
However, \citep{sprechmann:2013:supervisedsparseanalysis,mccann:2020:supervisedlearningsparsitypromoting}
did not investigate
how the solution accuracy of the lower-level problem
impacts the gradient estimation.
Comparing the minimizer and unrolled approaches,
\citep{grazzi:2020:iterationcomplexityhypergradient,ji:2021:bileveloptimizationconvergence}
found that the gradient estimate from the
minimizer approach converges
to the true gradient faster
(in terms of computation)
than the estimate from the unrolled approach.
To state the bounds,
\citep{grazzi:2020:iterationcomplexityhypergradient,ji:2021:bileveloptimizationconvergence}
assert conditions on the structure of the bilevel problem.
They assume that
\xhatp is the unique minimizer of the lower-level cost function,
the Hessian of the lower-level is invertible,
the Hessian and Jacobian of \ofcn
are Lipschitz continuous with respect to \vx,
the gradients of the upper-level loss
are Lipschitz continuous with respect to \vx,
the norm of \vx is bounded, %
and the lower-level cost is strongly convex and Lipschitz smooth
for every \params value.
\sref{sec: double-loop complexity analysis}
discusses similar investigations
that use these conditions,
how easy or hard they are to satisfy,
and how they apply to
\eqref{eq: bilevel for analysis filters}.
Ref. \citep{grazzi:2020:iterationcomplexityhypergradient}
initializes the lower-level iterates for
both the unrolled and minimizer approach
with the zero vector,
\ie, $\vx^{(0)} = \bmath{0}$.
Under their assumptions,
\citep{grazzi:2020:iterationcomplexityhypergradient}
prove that both the unrolled and minimizer gradients converge linearly
in the number of lower-level iterations
when the lower-level optimization algorithm
and conjugate gradient algorithm for the minimizer approach
converge linearly.
Although both approaches converge linearly,
the minimizer approach converges at a faster linear rate
and \citep{grazzi:2020:iterationcomplexityhypergradient}
generally recommends the minimizer approach,
though they found empirically
that the unrolled approach may be more reliable
when the strong convexity and Lipschitz smooth assumptions
on the lower-level cost do not hold.
Ref. \citep{ji:2021:bileveloptimizationconvergence}
extended the analysis from \citep{grazzi:2020:iterationcomplexityhypergradient}
to consider a warm start initialization for the lower-level
optimization algorithm.
They
similarly find that the minimizer approach
has a lower complexity than the unrolled approach.
\srefs{sec: double-loop complexity analysis}{sec: single-loop complexity}
further discuss complexity results
after introducing
specific bilevel optimization algorithms.
\chapter{Gradient-Based Bilevel Optimization Methods}
\label{chap: bilevel methods}
The previous section discussed different approaches
to finding
\uppergrad,
the gradient
of the upper-level loss function
with respect to the learnable parameters.
Building on those results,
this section considers
approaches for optimizing
the bilevel problem.
In particular,
this section concentrates on gradient-based algorithms
for optimizing the hyperparameters. %
While there is some overlap with single-level optimization methods,
this section focuses on the challenges
due to the bilevel structure.
Therefore,
this section does not discuss the lower-level optimization algorithms in detail;
for overviews of lower-level optimization,
see, \eg,
\cite{palomar:11:coi}.
Gradient-based methods for bilevel problems
are an alternative to the approaches described in
\sref{chap: hpo},
\eg,
grid or random search,
Bayesian optimization,
and
trust region methods.
By incorporating gradient information,
the methods presented in this section
can scale to problems having many hyperparameters.
In fact,
the second section reviews papers that
provide bounds on the number of upper-level
gradient descent iterations
required to reach a point within some user-defined
tolerance of a solution.
While the bounds depend on the regularity
of the upper-level loss and lower-level cost functions,
they do not
depend on the number of hyperparameters nor the signal dimension.
Although having more hyperparameters will increase computation per iteration,
using a gradient-descent approach
means the number of iterations
need not
scale with the number of hyperparameters,
\paramsdim.
Bilevel gradient methods fall into two broad categories.
Most gradient-based approaches to the bilevel problem
fall under the first category:
double-loop algorithms.
These methods involve
(i) optimizing the lower-level cost,
either to some convergence tolerance if using a minimizer approach
or for a certain number of iterations if using an unrolled approach,
(ii) calculating \uppergrad, %
(iii) taking a gradient step in \params,
and
(iv) iterating.
The first step is itself an optimization algorithm
and may involve many inner iterations,
thus the categorization as a \dquotes{double-loop algorithm.}
The second category, \dquotes{single-loop} algorithms,
involves a single loop,
with each iteration containing
one gradient step for both the lower-level optimization variable, \vx,
and the upper-level optimization variable, \params.
Single-loop algorithms may alternate updates
or update the variables simultaneously.
\cref{chap: ift and unrolled} used \xmath{t}
to denote the lower-level iteration counter;
this section introduces \upperiter as the
iteration counter for the upper-level iterations
and as the single iteration counter
for single-loop algorithms.
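The two categories can be contrasted with a schematic sketch. The Python snippet below solves a hypothetical scalar ridge bilevel problem (illustrative names and constants; the upper-level minimizer is $\theta = 1$) once with a double loop and once with alternating single-loop updates.

```python
# Schematic contrast of double-loop vs. single-loop updates on a hypothetical
# scalar problem (illustrative, not from the text):
#   lower level: psi(x; theta) = 0.5*(x - y)^2 + 0.5*theta*x^2, xhat = y/(1+theta)
#   upper level: loss = 0.5*(xhat(theta) - x_true)^2, minimized at theta = y/x_true - 1
y, x_true, alpha = 2.0, 1.0, 0.3

def lower_grad(x, theta):
    return (1 + theta) * x - y

def hyper_grad(x, theta):                   # minimizer (IFT) formula at current x
    return -x * (x - x_true) / (1 + theta)

# Double loop: T inner GD steps per outer hyperparameter step.
theta_dl, T, beta = 0.5, 50, 2.0
for _ in range(100):                        # outer loop (counter i in the text)
    x = 0.0
    for _ in range(T):                      # inner loop (counter t in the text)
        x -= alpha * lower_grad(x, theta_dl)
    theta_dl -= beta * hyper_grad(x, theta_dl)

# Single loop: one x step and one theta step per iteration (smaller theta step).
x_sl, theta_sl, beta_sl = 0.0, 0.5, 0.05
for _ in range(2000):
    x_sl -= alpha * lower_grad(x_sl, theta_sl)
    theta_sl -= beta_sl * hyper_grad(x_sl, theta_sl)

print(f"double loop theta: {theta_dl:.4f}, single loop theta: {theta_sl:.4f}")
```

Both reach the same solution in this well-conditioned toy; the single-loop version uses a smaller hyperparameter step because its gradient is evaluated at an iterate \vx that lags behind the current minimizer.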
\section{Double-Loop Algorithms}
After using one of the approaches in \cref{chap: ift and unrolled}
to compute the hyperparameter gradient
\uppergrad,
typical double-loop algorithms for bilevel problems
run some type of gradient descent on the upper-level loss.
\aref{alg: hoag}
shows an example double-loop algorithm
\cite{pedregosa:2016:hyperparameteroptimizationapproximate}.
The final iterate of a lower-level optimizer
is only an approximation
of the lower-level minimizer.
However,
the minimizer approach to calculating the
upper-level gradient,
\uppergrad
(\sref{sec: minimizer approach}),
assumes
$\nabla_\vx \ofcn(\xhat \, ; \params) = \bmath{0}$.
Any error stemming from not being at an exact critical point
can be magnified in the full calculation \eqref{eq: IFT final gradient dldparams},
and
the resulting hyperparameter gradient
will be an approximation
of the true gradient,
as illustrated in \fref{fig: grad accuracy example}.
Alternatively,
if one uses the unrolled approach with a set number of iterations \eqref{eq: unrolled upper-level},
the gradient is accurate for that specific number of iterations,
but the lower-level optimization sequence may not have converged
and the overall method may not accurately approximate the
original bilevel problem.
\begin{figure}
\centering
\ifloadepsorpdf
\includegraphics[]{graderror}
\else
\input{tikz,graderror}
\fi
\iffigsatend \figuretag{5.1} \fi
\caption{
Error in the upper-level gradient, \uppergrad,
for various convergence thresholds for the lower-level optimizer.
The bilevel problem is \eqref{eq: bilevel for analysis filters}
with a single filter,
$\vc = \begin{bmatrix} c_0 & c_1 \end{bmatrix}$,
$\ebeta{0}=0$, $\ebeta{1}=-5$, and
$\sparsefcn(z) = z^2$ so there is an analytic solution for \uppergrad.
The training data is piece-wise constant 1d signals
and the learnable hyperparameters are the filter coefficients.
(a) Upper-level loss function, $\lfcn(\params)$.
The cost function is low (dark)
where $c_1 \approx c_0$,
corresponding to approximate finite differences.
The star indicates the minimum.
(b-d) Error in the estimated gradient angle
using the minimizer approach \eqref{eq: IFT final gradient dldparams},
defined as the angle between
$\xmath{\hat{\nabla}} \lfcn(\params)$ and \uppergrad,
when the lower-level optimization is run until
$\norm{\nabla \ofcnargs} < \epsilon$.
}
\label{fig: grad accuracy example}
\end{figure}
Due to such inevitable inexactness
when computing
\uppergrad,
one may wonder about the convergence
of double-loop algorithms for bilevel problems. %
Considering the unrolled method
of computing \uppergrad,
\citep{franceschi:2018:bilevelprogramminghyperparameter}
showed that %
the sequence of hyperparameter values
in a double-loop algorithm,
\iter{\params},
converges as the number of unrolled iterations increases.
To prove this result,
\citep{franceschi:2018:bilevelprogramminghyperparameter} assumed
the hyperparameters were constrained to a compact set%
\comment{
A compact set is one that is closed and bounded.
For example, imposing a box constraint on the hyperparameters yields a compact set.
},
\lfcnparamsvx and \ofcnargs are jointly continuous,
there is a unique solution, \xhatp,
to the lower-level cost for all \params;
and
\xhatp is bounded for all \params.
These conditions are easily satisfied
for problems with strictly convex lower-level cost functions
and suitable box constraints on \params.
\sref{sec: double-loop complexity analysis}
further discusses convergence results
for double-loop algorithms.
\begin{algorithm}[t!]
\caption{Hyperparameter optimization with approximate gradient (HOAG) from \cite{pedregosa:2016:hyperparameteroptimizationapproximate}.
As written below, the HOAG algorithm is impractical because
it uses $\xhat(\iter{\params})$
in the convergence criterion;
however, for strongly convex lower-level problems,
the convergence criterion,
$\normr{\xhat(\iter{\params}) - \lliter (\iter{\params}) }$,
is easily upper-bounded.
}
\label{alg: hoag}
\input{TablesAndAlgs/alg,hoag}
\end{algorithm}
\citet{pedregosa:2016:hyperparameteroptimizationapproximate}
proved
a similar result for the minimizer formula
\eqref{eq: dhdgamma IFT}
using CG to compute
\eqref{eq: Hinv step for CG}.
Specifically, \citep{pedregosa:2016:hyperparameteroptimizationapproximate}
showed that the hyperparameter sequence
converges to a stationary point
if the sequence of positive tolerances,
$\{\iter{\epsilon}, \upperiter=1,2,\ldots\}$ in \aref{alg: hoag},
is summable.
The convergence results
are for the algorithm version
shown in \aref{alg: hoag}
that uses a Lipschitz constant
of $\lfcn(\params)$,
which is generally unknown.
Although \citep{pedregosa:2016:hyperparameteroptimizationapproximate}
discusses various empirical strategies
for setting the step size,
the convergence theory does not consider those variations.
Thus, the double-loop algorithm
\citep{pedregosa:2016:hyperparameteroptimizationapproximate}
requires multiple design decisions.
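A minimal sketch of this tolerance-based strategy, on a hypothetical scalar ridge problem (illustrative names and constants; this is not the full HOAG algorithm, which also adapts the step size):

```python
# Sketch of a HOAG-style double loop with a summable tolerance sequence
# eps_i = eps0 / i^2, on a hypothetical scalar ridge problem:
#   lower level: psi(x; theta) = 0.5*(x - y)^2 + 0.5*theta*x^2
#   upper level: loss = 0.5*(xhat(theta) - x_true)^2, minimized at theta = 1
y, x_true, alpha, beta = 2.0, 1.0, 0.3, 1.0
theta, x, eps0 = 0.5, 0.0, 0.1
inner_total = 0

for i in range(1, 60):                      # upper-level iterations
    eps_i = eps0 / i ** 2                   # summable: sum_i eps_i < infinity
    while abs((1 + theta) * x - y) > eps_i: # solve lower level to tolerance eps_i
        x -= alpha * ((1 + theta) * x - y)  # (warm-started from previous iterate)
        inner_total += 1
    g = -x * (x - x_true) / (1 + theta)     # minimizer-formula hypergradient
    theta -= beta * g                       # fixed step for simplicity

print(f"theta = {theta:.4f} after {inner_total} total inner iterations")
```

Tightening the tolerance as the upper-level iterations proceed means the early (cheap, inexact) hypergradients carry the iterate most of the way, while the late (accurate) ones refine it.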
There are %
four key design decisions
for double-loop algorithms:
\begin{enumerate}
\item How accurately should one solve the lower-level problem?
\item What upper-level gradient descent algorithm should one use?
\item %
How does one pick the step size for the upper-level descent step?
\item What stopping criteria should one use for the upper-level iterations?
\end{enumerate}
This section first reviews some
(largely heuristic)
approaches to these design decisions
and presents example bilevel gradient descent methods
with no (or few)
assumptions beyond those made in
\cref{chap: ift and unrolled}.
Without any further assumptions,
the answers to the questions above
are based on heuristics,
with few theoretical guarantees
but often providing good experimental results.
The following section discusses recent methods with
stricter assumptions on the bilevel problem
and their theory-backed answers to the above questions.
The first step in a double-loop algorithm
is to optimize the lower-level cost,
for which there are many optimization approaches.
The only restriction %
is computability of %
the gradient of the upper-level loss,
\uppergrad,
which typically includes a smoothness assumption
(see \cref{chap: ift and unrolled} for discussion). %
Many bilevel methods papers
use a standard optimizer for the lower-level problem,
although others propose new variants, \eg,
\citep{chen:2020:learnabledescentalgorithm}.
The \textbf{first design decision}
(how accurately to solve the lower-level problem)
involves a trade-off
between computational complexity and accuracy.
Example convergence criteria are standard in the optimization literature,
\eg,
the Euclidean norm of %
the lower-level gradient
\citep{hintermuller:2020:dualizationautomaticdistributed, chen:2014:insightsanalysisoperator}
or
the normalized change in the estimate \vx %
\citep{sixou:2020:adaptativeregularizationparameter}
being less than some threshold.
For example,
\citep{chen:2014:insightsanalysisoperator}
used a convergence criterion of
$\normr{\nabla_\vx \ofcn(\lliter \, ; \params)}_2 \leq 10^{\neg 3}$
(where the image scale is 0-255).
As mentioned above,
\citep{pedregosa:2016:hyperparameteroptimizationapproximate}
uses a sequence of convergence tolerances
so that the lower-level cost function
is optimized more accurately as
the upper-level iterations continue.
Many publications do not report a specific threshold
or %
discuss how %
they chose a convergence criterion
or number of lower-level iterations.
However,
a few note the importance of such decisions.
For example, \citep{mccann:2020:supervisedlearningsparsitypromoting} found
that their learning method fails
if the lower-level optimizer
is insufficiently close to the minimizer
and
\citep{chen:2014:insightsanalysisoperator}
stated their results are \dquotes{significantly better} than \citep{samuel:2009:learningoptimizedmap}
because they solve the lower-level problem \dquotes{with high[er] accuracy.}
After selecting a level of accuracy, %
finding (an approximation of) \xhat,
and calculating %
$\nabla_\params \lfcnargs$
using one of the approaches from \cref{chap: ift and unrolled},
one must make the \textbf{second design decision}:
which gradient-based method to use for the upper-level problem.
Many bilevel methods suggest a
simple gradient-based method
such as
plain gradient descent (GD) %
\citep{peyre:2011:learninganalysissparsity},
GD with a line search (see the third design decision),
projected GD \citep{antil:2020:bileveloptimizationdeep}, %
or
stochastic GD \citep{mccann:2020:supervisedlearningsparsitypromoting}. %
These methods update \params based on only the current upper-level gradient;
they do not have memory
of previous gradients %
nor require/estimate any second-order information.
Methods that incorporate some second-order information
use more memory and computation per iteration,
but may converge faster
than basic GD methods.
For example,
Broyden-Fletcher-Goldfarb-Shanno (BFGS)
and L-BFGS (the low-memory version of BFGS)
\cite{byrd:95:alm}
are quasi-Newton algorithms that
store and update an approximate Hessian matrix
that serves as a preconditioner for the gradient.
The
$\paramsdim \by \paramsdim$ size of the Hessian
grows as the number of hyperparameters increases,
but quasi-Newton methods like L-BFGS
use practical rank-1 updates with storage \order{\paramsdim}.
Adam \citep{kingma:2015:adammethodstochastic}
is a popular GD method,
especially in the machine learning community,
that tailors the step size
(equivalently the learning rate)
for each hyperparameter
based on moments of the gradient.
Although Adam requires its own parameters,
the parameters are relatively easy to set
and
the default settings often perform adequately.
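For reference, the Adam update itself is short. The sketch below applies it to a made-up, poorly scaled quadratic stand-in for an upper-level loss, where the per-coordinate step sizes matter; the loss and all constants are illustrative.

```python
# Minimal sketch of the Adam update for a hyperparameter vector.
# The poorly scaled quadratic below is a hypothetical stand-in for an
# upper-level loss; its minimizer is (3, -2).
import math

def grad(theta):           # gradient of 0.5*(10*(t0 - 3)^2 + 0.1*(t1 + 2)^2)
    return [10.0 * (theta[0] - 3.0), 0.1 * (theta[1] + 2.0)]

theta = [0.0, 0.0]
m = [0.0, 0.0]; v = [0.0, 0.0]                 # first/second moment estimates
lr, b1, b2, eps = 0.01, 0.9, 0.999, 1e-8
for k in range(1, 5001):
    g = grad(theta)
    for j in range(2):
        m[j] = b1 * m[j] + (1 - b1) * g[j]
        v[j] = b2 * v[j] + (1 - b2) * g[j] ** 2
        mhat = m[j] / (1 - b1 ** k)            # bias-corrected moments
        vhat = v[j] / (1 - b2 ** k)
        theta[j] -= lr * mhat / (math.sqrt(vhat) + eps)
print(f"theta = ({theta[0]:.3f}, {theta[1]:.3f})")
```

Despite the 100:1 curvature ratio between the two coordinates, both make progress at comparable speeds because the update normalizes each coordinate by its own gradient magnitude.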
Example bilevel papers using methods with second-order information
include those that use
BFGS \citep{holler:2018:bilevelapproachparameter},
L-BFGS \citep{chen:2014:insightsanalysisoperator},
Gauss-Newton
\citep{fehrenbach:2015:bilevelimagedenoising},
and Adam \citep{chen:2020:learnabledescentalgorithm}.
Many gradient-based methods require selecting a step size parameter,
\eg,
one must choose a step size $\alpha$ in classical GD:
\begin{equation*}
\params^{(\upperiter+1)} = \params^{(\upperiter)}
- \alpha \, \uppergradu
.\end{equation*}
This choice is the \textbf{third design decision}.
Bilevel problems are generally non-convex%
\comment{ %
Although there are exceptions for simple functions,
for common upper and lower-level functions,
non-convexity is easily verified by
plotting a cross-section of the cost function.
},
and typically a Lipschitz constant is unavailable,
so line search strategies initially appear appealing.
However, any line search strategy that involves attempting multiple values
quickly becomes computationally intractable for large-scale problems.
The upper-level loss function in bilevel problems
is particularly expensive to evaluate
because
it requires optimizing the lower-level cost!
Further, recall that
the upper-level loss
is typically an expectation over multiple training samples \eqref{eq: generic bilevel upper-level},
so evaluating a single option for $\alpha$ involves optimizing the lower-level cost
\Ntrue times
(or using a stochastic approach and selecting a batch size).
Despite these challenges, %
a line search strategy may be viable
if it rarely requires multiple attempts.
For example,
the backtracking line search
in \cite{hintermuller:2020:dualizationautomaticdistributed}
that used
the Armijo–Goldstein condition
required
57-59 lower-level evaluations
(per training example)
over 40 upper-level gradient descent steps,
so most upper-level steps required only one lower-level evaluation. %
Other bilevel papers
that used backtracking with
Armijo-type conditions
include
\citep{holler:2018:bilevelapproachparameter, sixou:2020:adaptativeregularizationparameter,calatroni:2017:bilevelapproacheslearning},
whereas
\citep{lecouat:2020:flexibleframeworkdesigning}
used the
Barzilai-Borwein method for picking an adaptive step size.
Other approaches to determining the step size
are:
(i)
normalize the gradient by the dimension of the data
and pick a fixed step size \citep{mccann:2020:supervisedlearningsparsitypromoting},
(ii)
pick a value that is small enough
based on %
experience \citep{peyre:2011:learninganalysissparsity},
or
(iii)
adapt the step size based on the decrease
from the previous iteration
\citep{pedregosa:2016:hyperparameteroptimizationapproximate}.
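A minimal sketch of Armijo backtracking for the upper-level step, on a hypothetical scalar problem where the \dquotes{lower-level solve} is available in closed form (illustrative; in practice each call to the loss would cost a full lower-level optimization):

```python
# Sketch of Armijo backtracking for the upper-level step on a hypothetical
# scalar ridge problem; the counter tracks how many (expensive, in general)
# lower-level solves the line search consumes.
y, x_true = 2.0, 1.0
lower_solves = 0

def loss(theta):                   # each call stands in for a lower-level solve
    global lower_solves
    lower_solves += 1
    xhat = y / (1 + theta)
    return 0.5 * (xhat - x_true) ** 2

def grad(theta):                   # exact hypergradient (closed form here)
    xhat = y / (1 + theta)
    return (xhat - x_true) * (-y / (1 + theta) ** 2)

theta, c = 0.5, 1e-4
for _ in range(40):
    g, f0 = grad(theta), loss(theta)
    step = 1.0                      # optimistic initial step, then halve
    while loss(theta - step * g) > f0 - c * step * g * g:
        step *= 0.5
    theta -= step * g

print(f"theta = {theta:.4f} using {lower_solves} lower-level solves")
```

Every trial step inside the `while` loop costs one more lower-level solve (per training sample), which is why a line search is viable only when it rarely backtracks.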
The \textbf{fourth design decision}
is the convergence criterion for the upper-level iterations.
As with the lower-level convergence criteria,
few publications include a specific threshold,
but most bilevel methods tend to use traditional convergence criteria
such as
the norm of the hyperparameter gradient falling below some threshold \citep{holler:2018:bilevelapproachparameter},
the norm of the change in parameters falling below some threshold \citep{chen:2014:insightsanalysisoperator},
and/or reaching a maximum iteration count (many papers).
One specific example is to terminate
when the normalized change in learned parameters,
\(
\normr{\iter{\params}{+1} - \iter{\params}} / \normr{\iter{\params}},
\)
is below 0.01 \citep{sixou:2020:adaptativeregularizationparameter}.
The normalized change bound is convenient
because it is unitless
and thus invariant to scaling of
\params.
\subsection{Complexity Analysis \label{sec: double-loop complexity analysis}}
A series of recent papers
established finite-time sample complexity bounds
for stochastic bilevel optimization methods
based on gradient descent
for the upper-level loss and lower-level cost.
Refs.
\citep{ghadimi:2018:approximationmethodsbilevel,%
ji:2021:bileveloptimizationconvergence}
use double-loop approaches,
whereas
\citep{hong:2020:twotimescaleframeworkbilevel,%
chen:2021:singletimescalestochasticbilevel}
use single-loop algorithms
(see \sref{sec: single-loop complexity}).
Unlike most of the methods discussed in the previous section,
these papers make additional assumptions
about the upper and lower-level functions
and then choose
the upper and lower-level step sizes to ensure convergence.
In these works,
``finite-time sample complexity'' refers to
big-O bounds
on a number of iterations
that ensures %
one reaches a minimizer
to within some desired tolerance.
In contrast to asymptotic convergence analysis,
finite-time bounds
provide information about the estimated hyperparameters,
\iter{\params},
after a finite number of upper-level iterations.
These bounds %
depend on
problem-specific quantities,
such as Lipschitz constants,
but not on the hyperparameter or signal dimensions.
To summarize the results,
this section returns to the notation from
the introduction
where the upper-level loss may
be deterministic or stochastic,
\eg,
the bilevel problem is
\begin{align}
\paramh &= \argmin_\params \lfcn(\params) \text{ with }
\lfcn(\params) = \begin{cases}
\lfcn(\params, \xhat(\params)) & \text{deterministic} \\
\E{\lfcn(\params, \xhat(\params))} & \text{ stochastic. }
\end{cases} %
\label{eq:bilevel,E}
\end{align}
This section reviews results for double-loop methods
\citep{ghadimi:2018:approximationmethodsbilevel,ji:2021:bileveloptimizationconvergence}
for the deterministic setting
and then the stochastic case.
The expectation
in \eqref{eq:bilevel,E}
can have different meanings depending on the setting.
When one has $J$ training images
with one noise realization per image,
one often picks a random subset (``minibatch'') of those $J$ images
for each update of \params,
corresponding to stochastic gradient descent
of the upper-level loss.
In this setting,
the randomness is a property of the algorithm,
not of the upper-level loss,
and the expectation reduces to the deterministic case.
An interesting opportunity for a stochastic formulation
is to add different noise realizations
in \eqref{eq: y=Ax+n},
providing an uncountable ensemble
of $(\vx,\vy)$ training tuples,
where the expectation above
is over the distribution of noise realizations.
Yet another possibility
is to have a truly random set of training images
\xtrue
drawn from some distribution.
For example,
\cite{jin:17:dcn}
trained a CNN-based CT reconstruction method
using an ensemble of images %
consisting of
randomly generated
ellipses.
Other variations,
such as random rotations or warps,
have also been used for data augmentation
\cite{shorten:19:aso}.
One could combine such a random ensemble of images
with a random ensemble of noise realizations,
in which case the expectation
in \eqref{eq:bilevel,E}
would be taken over both the image and noise distributions.
We are unaware of any bilevel methods for imaging
that exploit this full generality.
Future literature on stochastic methods
should clearly state what expectation is used
and
may consider exploiting a more general
definition of randomness.
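As an illustration of this more general expectation, the sketch below samples training tuples where both the piecewise-constant signal and the noise realization are random; the signal model and constants are made up, loosely echoing the 1D running example.

```python
# Sketch of sampling training tuples (x, y = x + noise) where BOTH the
# piecewise-constant signal and the noise realization are random, so the
# upper-level expectation is over both distributions.  Hypothetical setup
# (denoising, so A = I); all constants are illustrative.
import random

random.seed(0)
N = 32

def sample_tuple():
    # random piecewise-constant signal: two pieces, random levels and breakpoint
    brk = random.randrange(4, N - 4)
    levels = (random.uniform(-1, 1), random.uniform(-1, 1))
    x = [levels[0]] * brk + [levels[1]] * (N - brk)
    noise = [random.gauss(0.0, 0.1) for _ in range(N)]   # fresh realization
    y = [xi + ni for xi, ni in zip(x, noise)]
    return x, y

batch = [sample_tuple() for _ in range(4)]   # a minibatch for one SGD step
```

Each call draws a fresh $(\vx, \vy)$ pair, so a stochastic upper-level gradient never revisits the same noise realization, in contrast to the common fixed-dataset setting.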
The complexity results
(summarized in \tref{tab: deterministic bilevel complexity} and \tref{tab: stochastic complexity summaries})
are all in terms of finding \xmath{\params_\epsilon},
defined as
an $\epsilon$-optimal solution.
In the
(atypical)
setting where $\lfcn(\params)$ is convex, %
\xmath{\params_\epsilon} is an $\epsilon$-optimal solution
if it
satisfies either
$\lfcn(\xmath{\params_\epsilon}) - \lfcn(\paramh) \leq \epsilon$
\citep{ghadimi:2018:approximationmethodsbilevel, ji:2021:bileveloptimizationconvergence,hong:2020:twotimescaleframeworkbilevel}
or
$\normsq{\paramh - \xmath{\params_\epsilon}} \leq \epsilon$
\citep{chen:2021:singletimescalestochasticbilevel}.
(These conditions are equivalent if \lfcn is strongly convex in \params,
but can differ otherwise.) %
In the
(common)
non-convex setting,
\xmath{\params_\epsilon} is typically called an $\epsilon$-stationary point
if it satisfies %
$\normsq{\nabla \lfcn(\xmath{\params_\epsilon})} \leq \epsilon$
\citep{ghadimi:2018:approximationmethodsbilevel,%
ji:2021:bileveloptimizationconvergence,%
chen:2021:singletimescalestochasticbilevel}.
In the stochastic setting,
\xmath{\params_\epsilon} must
satisfy these conditions in expectation.
References
\citep{ghadimi:2018:approximationmethodsbilevel,%
ji:2021:bileveloptimizationconvergence,%
hong:2020:twotimescaleframeworkbilevel,%
chen:2021:singletimescalestochasticbilevel}
all make similar assumptions about \lfcn and \ofcn
to derive theoretical results
for their proposed bilevel optimization methods.
We first summarize the set of assumptions from
\citep{ghadimi:2018:approximationmethodsbilevel},
and later note any additional assumptions %
used by the other methods.
The conditions in
\citep{ghadimi:2018:approximationmethodsbilevel}
on the upper-level function, \lfcnparamsvx,
are:
\begin{enumerate}[label=A\lfcn\!\arabic*.,leftmargin=2cm,itemsep=0pt,ref=A\lfcn\!\arabic*]
\item
$\forall \params \in \F^\paramsdim$,
$\nabla_\params \lfcn(\params, \vx)$ and
$\nabla_\vx \lfcn(\params, \vx)$
are Lipschitz continuous with respect to \vx,
with corresponding Lipschitz constants
\xmath{L_{\vx,\nabla_\params \lfcn}} and \xmath{L_{\vx,\nabla_\vx \lfcn}}.
(These constants are independent of \vx and \params.)
\label{BA assumption upper-level 1}
\item The gradient with respect to \vx
is bounded, \ie,
\\
$\norm{\nabla_\vx \lfcn(\paramsset, \vx)} \leq \xmath{C_{\nabla_\vx \lfcn}}
,\ \forall \vx \in \F^\sdim
$.
\label{BA assumption upper-level 2}
\item
$\forall \vx \in \F^\sdim$,
$\nabla_\vx \lfcn(\params, \vx)$ is Lipschitz continuous with respect to \params,
with corresponding Lipschitz constant
\xmath{L_{\params,\nabla_\vx \lfcn}}.
\label{BA assumption upper-level end}
\end{enumerate}
\comment{We state these as universal constants; \citep{ghadimi:2018:approximationmethodsbilevel} is ambiguous on this point, and the stricter reading is the safer one.} %
The conditions
in \citep{ghadimi:2018:approximationmethodsbilevel}
on the lower-level function, \ofcnargs,
are:
\begin{enumerate}[label=A\ofcn\!\!\arabic*.,leftmargin=2cm,itemsep=0pt,ref=A\ofcn\!\!\arabic*]
\item \ofcn is continuously twice differentiable in \params and \vx.
\label{BA assumption lower-level 1}
\item
$\forall \params \in \F^\paramsdim$,
$\nabla_\vx \ofcnargs$ is Lipschitz continuous with respect to \vx
with corresponding constant \xmath{L_{\vx,\nabla_\vx \ofcn}}.
\label{BA assumption lower-level 2}
\item
$\forall \params \in \F^\paramsdim$,
$\ofcnargs$ is strongly convex with respect to \vx, \ie,
$\xmath{\mu_{\vx,\ofcn}} \I \preceq \nabla_\vx^2 \ofcn(\paramsset\, ; \vx)$,
for some $\xmath{\mu_{\vx,\ofcn}} > 0$.
\label{BA assumption lower-level 3}
\item
$\forall \params \in \F^\paramsdim$,
$\nabla_{\vx \vx} \ofcnargs$ and $\nabla_{\params \vx} \ofcnargs$
are Lipschitz continuous with respect to \vx with Lipschitz constants \xmath{L_{\vx,\nabla_{\vx\vx}\ofcn}} and \xmath{L_{\vx,\nabla_{\params\vx}\ofcn}}.
\label{BA assumption lower-level 4}
\item The mixed second gradient of \ofcn is bounded, \ie,
\\
$\norm{\nabla_{\params \vx} \ofcnargs} \leq \xmath{C_{\nabla_{\params\vx}\ofcn}},
\quad \forall \params, \vx$.
\label{BA assumption lower-level 5}
\item
$\forall \vx \in \F^\sdim$,
$\nabla_{\params \vx} \ofcnargs$ and $\nabla_{\vx \vx} \ofcnargs$
are Lipschitz continuous with respect to \params
with Lipschitz constants
$\xmath{L_{\params,\nabla_{\params\vx}\ofcn}}$
and
$\xmath{L_{\params,\nabla_{\vx\vx}\ofcn}}$.
\label{BA assumption lower-level end}
\end{enumerate}
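When the lower-level cost is quadratic, the strong convexity and Lipschitz constants in \ref{BA assumption lower-level 2} and \ref{BA assumption lower-level 3} are simply the extreme eigenvalues of the Hessian. The sketch below computes $\mu$, $L$, and the condition number for a hypothetical cost $\frac{1}{2}\|\mA\vx - \vy\|^2 + \frac{\theta}{2}\|\vx\|^2$ with a fixed $2 \times 2$ matrix; the numbers are illustrative.

```python
import math

# Hypothetical quadratic lower level psi(x; theta) = 0.5*||A x - y||^2
# + 0.5*theta*||x||^2 with a fixed 2x2 A; its Hessian A'A + theta*I makes
# the strong convexity (mu) and Lipschitz (L) constants explicit.
theta = 0.5
AtA = [[2.0, 1.0], [1.0, 1.0]]           # A'A for A = [[1, 0], [1, 1]]

# eigenvalues of a symmetric 2x2 matrix via trace and determinant
tr = AtA[0][0] + AtA[1][1]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
lam_max = (tr + math.sqrt(tr ** 2 - 4 * det)) / 2
lam_min = (tr - math.sqrt(tr ** 2 - 4 * det)) / 2

mu = lam_min + theta                      # strong convexity constant (> 0)
L = lam_max + theta                       # Lipschitz constant of the gradient
Q = L / mu                                # "condition number" of the lower level
ss_lower = 2 / (L + mu)                   # lower-level GD step size from the analysis
print(f"mu={mu:.3f}, L={L:.3f}, Q={Q:.3f}, step={ss_lower:.3f}")
```

The regularization strength $\theta$ directly improves the conditioning here: increasing $\theta$ raises $\mu$ faster (proportionally) than $L$, shrinking $Q$.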
To consider how the
complexity analysis bounds
may apply in practice,
\apref{sec: ghadimi bounds applied} examines
how each of these conditions
applies to the running filter learning example
\eqref{eq: bilevel for analysis filters}.
Although a few of the conditions are easily satisfied,
most are not.
\apref{sec: ghadimi bounds applied}
shows that the conditions are met
if one invokes box constraints on the variables
\vx and \params.
Although imposing box constraints %
requires modifying the %
algorithms,
\eg, by including a projection step,
the iterates remain unchanged
if the constraints are sufficiently generous.
However,
such generous box constraints
are likely to yield
large Lipschitz constants and bounds,
leading to overly-conservative predicted convergence rates.
Further,
any differentiable
upper-level loss and lower-level cost function
would meet the conditions above
with such box constraints. %
Generalizing the following complexity analysis
for looser conditions
is an important avenue for future work.
\subsubsection{Deterministic}
\citet{ghadimi:2018:approximationmethodsbilevel}
were the first to provide a
finite-time analysis
of the bilevel problem.
The authors
proposed and analyzed the Bilevel Approximation (BA) method
(see \aref{alg: ba}).
BA uses two nested loops
and
is very similar to the gradient-based methods
described in the previous section.
The inner loop minimizes the lower-level cost to some accuracy,
determined by the number of lower-level iterations;
the more inner iterations,
the more accurate the gradient will be,
but at the cost of more computation and time.
The outer loop is (inexact) projected gradient steps on \lfcn.
Ref. \citep{ghadimi:2018:approximationmethodsbilevel}
used the minimizer result \eqref{eq: IFT final gradient dldparams}
(with the IFT perspective for the derivation)
to estimate the upper-level gradient.
\begin{algorithm}[t!]
\caption{
Bilevel Approximation (BA) Method from \citep{ghadimi:2018:approximationmethodsbilevel}.
The differences for the AID-BiO and ITD-BiO methods from \citep{ji:2021:bileveloptimizationconvergence} are:
(1) when $\upperiter > 0$,
the BiO methods replace \lref{alg: BA line: lower level init}
with \mbox{$\vx^{(0)} = \vx^{(T_{\upperiter-1})}$},
(2) $T_i$ does not vary with upper-level iteration,
(3) the upper-level gradient calculation in \lref{alg: BA line: hypergradient calc}
can use the minimizer approach \eqref{eq: IFT final gradient dldparams}
or backpropagation \eqref{eq: reverse mode},
and
(4) the hyperparameter update is standard gradient descent,
so \lref{alg: BA line: hyperparameter update} becomes
\mbox{$\params^{(i+1)} = \params^{(i)} - \ssupper \vg $}.
}
\label{alg: ba}
\input{TablesAndAlgs/alg,ba}
\end{algorithm}
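A stripped-down sketch of the BA structure on a hypothetical scalar ridge problem (illustrative; it keeps the growing inner iteration counts, the zero initialization of each inner loop, and the projected hyperparameter step, but uses fixed illustrative step sizes rather than the schedules from the analysis):

```python
# Sketch of a BA-style double loop on a hypothetical scalar ridge problem:
#   lower level: psi(x; theta) = 0.5*(x - y)^2 + 0.5*theta*x^2
#   upper level: loss = 0.5*(xhat(theta) - x_true)^2, minimized at theta = 1
y, x_true, alpha, beta = 2.0, 1.0, 0.3, 1.0
lo, hi = 0.01, 10.0                         # box constraint on theta
theta = 0.5

for i in range(60):                         # outer (upper-level) iterations
    T_i = 5 + i                             # more inner iterations as i grows
    x = 0.0                                 # BA restarts the lower level each time
    for _ in range(T_i):
        x -= alpha * ((1 + theta) * x - y)
    g = -x * (x - x_true) / (1 + theta)     # minimizer-formula gradient at x_T
    theta = min(max(theta - beta * g, lo), hi)   # projected gradient step

print(f"theta = {theta:.4f}")
```

The growing schedule $T_i$ means the early hypergradients are cheap but biased, while the later ones are nearly exact, mirroring the accuracy/cost trade-off quantified in the BA analysis.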
\newcommand{\nowidth}[2] {\makebox[0pt][#1]{#2}} %
\newcommand{\textnw}[1] {\text{\nowidth{c}{#1}}}
\newcommand{\textnwl}[1] {\text{\nowidth{l}{#1}}}
\newcommand{\textnwr}[1] {\text{\nowidth{r}{#1}}}
To bound the complexity of BA,
\citep{ghadimi:2018:approximationmethodsbilevel}
first related the error in the lower-level solution
to the error in the upper-level gradient estimate as
\begin{align*}
\lVert %
\underbrace{\xmath{\hat{\nabla}}_\params \lfcn (\params, \vx)}_{\textnw{Estimated gradient}} -
\underbrace{\nabla_\params \lfcn (\params, \xhat(\params))}_{\text{True gradient}}
\rVert %
\leq \Cgw
\underbrace{\norm{\vx - \xhat(\params)}}_{\textnw{Error in lower-level}},
\end{align*}
where \Cgw is a constant that depends
on many of the %
bounds defined in the assumptions above
\citep{ghadimi:2018:approximationmethodsbilevel}.
Combining the above error bound with
known
gradient descent bounds for the accuracy of the lower-level problem
yields bounds on the accuracy of the upper-level gradient.
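The error bound above can be checked numerically. The sketch below, a toy quadratic problem of our own construction (not from the cited paper), verifies that the hypergradient error is controlled by the lower-level error, with the constant reducing to the spectral norm of the illustrative matrix $A$:

```python
import numpy as np

rng = np.random.default_rng(8)
N, P = 6, 3
A = rng.standard_normal((N, P))
x_star = rng.standard_normal(N)
theta = rng.standard_normal(P)

# Toy problem: g(x; theta) = 0.5||x - A theta||^2, f = 0.5||x - x_star||^2.
# IFT hypergradient evaluated at an (inexact) lower-level iterate x:
def hypergrad_at(x):
    return A.T @ (x - x_star)   # -(grad_{x theta} g)' (grad_{xx} g)^{-1} grad_x f

xhat = A @ theta                # exact lower-level minimizer for this toy
C = np.linalg.norm(A, 2)        # the constant reduces to ||A||_2 here

for _ in range(100):            # random inexact lower-level solutions
    x = xhat + rng.standard_normal(N)
    err = np.linalg.norm(hypergrad_at(x) - hypergrad_at(xhat))
    assert err <= C * np.linalg.norm(x - xhat) + 1e-12
print("bound holds on all trials")
```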
The standard lower-level bounds
can vary by the specific algorithm
(\citep{ghadimi:2018:approximationmethodsbilevel} uses plain GD),
but are in terms of $Q_\ofcn = \frac{\xmath{L_{\vx,\nabla_\vx \ofcn}}}{\xmath{\mu_{\vx,\ofcn}}}$
(the
\dquotes{condition number}
for the strongly convex lower-level function)
and the distance between the initialization and the minimizer.
Ref. \citep{ghadimi:2018:approximationmethodsbilevel}
shows that $\xhat(\params)$
is Lipschitz continuous in \params
under the above assumptions,
which intuitively states that the lower-level minimizer
does not change too rapidly with changes in the hyperparameters.
Further, $\uppergrad$
is Lipschitz continuous in \params
with a Lipschitz constant, \xmath{L_{\params,\nabla_\params \lfcn}},
that depends on
many of the constants given above.
The main theorems
from \citep{ghadimi:2018:approximationmethodsbilevel}
hold when
the lower-level GD step size is
$\sslower = \frac{2}{\xmath{L_{\vx,\nabla_\vx \ofcn}} + \xmath{\mu_{\vx,\ofcn}}}$ and
the upper-level step size satisfies
$\ssupper \leq \frac{1}{\xmath{L_{\params,\nabla_\params \lfcn}}}$.
Then,
the distance between the $\upperiter$th loss function value
and the minimum loss function value,
$\lfcn(\iter{\params}, \xhat(\iter{\params})) - \lfcn(\paramh, \xhat(\paramh))$,
is bounded by a constant that depends on
the starting distance from a minimizer
(dependent on the initialization of \params and \vx),
$Q_\ofcn$,
\Cgw,
the number of inner iterations,
and the upper-level step size.
The bound differs for strongly convex, convex, and possibly non-convex
upper-level loss functions.
\tref{tab: BA deterministic bilevel complexity convexity scale}
summarizes the sample complexity required to reach an
$\epsilon$-optimal point in each of these scenarios.
\begin{table}[htbp]
\centering
\ifloadeps
\includegraphics[]{TablesAndAlgs/epsfiles/tab,BAsamplecomplexities.eps}
\else
\input{TablesAndAlgs/tab,BAsamplecomplexities}
\fi
\iffigsatend \tabletag{5.1} \fi
\caption{
Sample complexity
to reach an $\epsilon$-optimal solution
of the deterministic bilevel problem
using BA \citep{ghadimi:2018:approximationmethodsbilevel},
for various assumptions on the upper-level loss function.
Usually $\lfcn(\params)$ is non-convex
and that case has the worst-case order results.
The complexities show the total number of partial gradients
of the upper-level loss
(equal to the number of lower-level Hessians needed
for estimating \uppergrad using \eqref{eq: IFT final gradient dldparams})
and the partial gradients of the lower-level.
The convex results use the accelerated BA method,
which uses acceleration techniques similar to Nesterov's method
\cite{nesterov:83:amo}
applied to the upper-level gradient step in \aref{alg: ba}.
}
\label{tab: BA deterministic bilevel complexity convexity scale}
\end{table}
\citet{ji:2021:bileveloptimizationconvergence}
proposed two methods for Bilevel Optimization
that improve on the sample complexities from
\citep{ghadimi:2018:approximationmethodsbilevel}
for non-convex loss functions
under similar assumptions.
The first,
ITD-BiO,
uses the unrolled method for calculating the
upper-level gradient (see \sref{sec: unrolled}).
The second, AID-BiO,
uses the minimizer method with the implicit function theory perspective
(see \sref{sec: minimizer approach}).
\tref{tab: deterministic bilevel complexity}
summarizes the sample complexities
\citep{ji:2021:bileveloptimizationconvergence}.
Much of the computational advantage of
ITD-BiO and AID-BiO is in improving the
iteration complexity with respect to the %
condition number.
One of the main computational advantages of
the AID-BiO and ITD-BiO methods in \citep{ji:2021:bileveloptimizationconvergence}
over the BA algorithm (\aref{alg: ba})
is a warm restart for the lower-level optimization.
Although the hyperparameters change every outer iteration,
the change is generally small enough
that the stopping point of the previous lower-level descent
is a better initialization than the noisy data
(recall that \citep{ghadimi:2018:approximationmethodsbilevel}
showed the lower-level minimizer is Lipschitz continuous in \params).
One can account for this warm restart when using
automatic differentiation tools (backpropagation)
\citep{ji:2021:bileveloptimizationconvergence}.
The caption for \aref{alg: ba} summarizes the other differences
between BA and the BiO methods.
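The benefit of warm restarts is easy to see numerically. The following sketch (an illustration of the idea, not the authors' code) counts how many plain GD iterations a lower-level solve needs after a small hyperparameter update, starting either from scratch or from the previous solution:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 20, 4
A = rng.standard_normal((N, P))

def iters_to_tol(theta, x0, tol=1e-6, step=0.5):
    """Plain GD on g(x; theta) = 0.5||x - A theta||^2 until ||x - xhat|| <= tol."""
    xhat = A @ theta          # closed-form minimizer, used only to measure error
    x, k = x0, 0
    while np.linalg.norm(x - xhat) > tol:
        x = x - step * (x - xhat)   # grad_x g = x - A theta
        k += 1
    return k, x

theta0 = rng.standard_normal(P)
_, x_prev = iters_to_tol(theta0, np.zeros(N))        # solve at the old theta

theta1 = theta0 + 1e-3 * rng.standard_normal(P)      # small hyperparameter update
cold, _ = iters_to_tol(theta1, np.zeros(N))          # restart from scratch
warm, _ = iters_to_tol(theta1, x_prev)               # warm restart (BiO-style)
print(cold, warm)        # warm restart needs far fewer inner iterations
```

Because the minimizer is Lipschitz continuous in the hyperparameters, a small change in $\theta$ moves the minimizer only slightly, so the warm start begins much closer to it.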
\begin{table}[htbp]
\centering
\ifloadeps
\includegraphics[]{TablesAndAlgs/epsfiles/tab,BAvsBiOsamplecomplexity.eps}
\else
\input{TablesAndAlgs/tab,BAvsBiOsamplecomplexity}
\fi
\iffigsatend \tabletag{5.2} \fi
\caption{
A comparison of the
finite-time sample complexity
to reach an $\epsilon$-optimal solution
of the deterministic bilevel problem
when the upper-level loss function is non-convex using
BA \citep{ghadimi:2018:approximationmethodsbilevel},
AID-BiO \citep{ji:2021:bileveloptimizationconvergence},
and ITD-BiO \citep{ji:2021:bileveloptimizationconvergence}.
* = order omits any $\log \epsilon^{\neg1}$ term.
}
\label{tab: deterministic bilevel complexity}
\end{table}
\subsubsection{Stochastic}
Discussing optimization algorithms for the stochastic bilevel problem
requires additional assumptions about the stochastic estimates.
The stochastic methods discussed here
are all based on the minimizer approach
to finding the upper-level gradient.
Therefore, the methods require estimates for
$\nabla_\params \lfcnparamsvx$,
$\nabla_\vx \lfcnparamsvx$,
$\nabla_\vx \ofcnargs$,
$\nabla_{\params,\vx} \ofcnargs$,
and
$\nabla_{\vx,\vx}\ofcnargs$.
We denote the estimates of these gradients using hats,
\eg,
$\xmath{\hat{\nabla}}_\params \lfcnparamsvx$.
Following
\eqref{eq: IFT final gradient dldparams},
the resulting estimate of the upper-level gradient is
\begin{align}
\uppergradhat
&= \xmath{\hat{\nabla}}_\params \lfcn(\params, \vx)
- \parenr{\xmath{\hat{\nabla}}_{\vx \params} \ofcn(\vx \, ; \params)}' %
\parenr{\xmath{\hat{\nabla}}_{\vx \vx} \ofcn(\vx \, ; \params)}^{\neg 1} %
\xmath{\hat{\nabla}}_\vx \lfcn(\params, \vx). \nonumber %
\end{align}
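As an illustration of how such stochastic estimates are assembled, the following Python sketch (a toy construction of ours, not from the cited papers) forms the minibatch hypergradient for a per-sample quadratic lower-level cost whose Hessian is exactly the identity, so the estimator happens to be unbiased in this special case:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, n = 6, 3, 500
B = rng.standard_normal((n, N, P))       # per-sample coupling matrices (toy data)
x_star = rng.standard_normal(N)

# Per-sample lower-level cost g_i(x; theta) = 0.5||x - B_i theta||^2, so the
# full Hessian is the identity and the mixed partial grad_{x theta} g_i = -B_i.
# Upper-level loss f = 0.5||x - x_star||^2 (deterministic, no direct theta term).

def hypergrad_estimate(x, batch):
    """Minibatch ("hatted") version of the IFT hypergradient formula."""
    g_xtheta_hat = -B[batch].mean(axis=0)   # unbiased estimate of grad_{x theta} g
    g_xx_hat = np.eye(N)                    # Hessian is exact in this toy problem
    return -g_xtheta_hat.T @ np.linalg.solve(g_xx_hat, x - x_star)

x = rng.standard_normal(N)                  # some (inexact) lower-level iterate

full = B.mean(axis=0).T @ (x - x_star)      # full-batch hypergradient
ests = [hypergrad_estimate(x, rng.choice(n, 32)) for _ in range(2000)]
print(np.linalg.norm(np.mean(ests, axis=0) - full))   # small: unbiased here
```

In general the Hessian inverse makes the combined estimator nonlinear in the samples, so unbiasedness of the pieces does not carry over; that issue is discussed below.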
As in the deterministic case,
we start with the assumptions and results from
\citep{ghadimi:2018:approximationmethodsbilevel}.
In addition to the assumptions above on
\lfcn
(assumptions \ref{BA assumption upper-level 1}-\ref{BA assumption upper-level end})
and \ofcn
(assumptions \ref{BA assumption lower-level 1}-\ref{BA assumption lower-level end}),
the stochastic-specific assumptions are that
(i)
all estimated gradients are unbiased
and
(ii)
the variance of the estimation errors
is bounded by
\xmath{\sigma^2_{\nabla_\params \lfcn}}, \xmath{\sigma^2_{\nabla_\vx \lfcn}}, \xmath{\sigma^2_{\nabla_{\vx} \ofcn}}, \xmath{\sigma^2_{\nabla_{\params\vx} \ofcn}}, and \xmath{\sigma^2_{\nabla_{\vx\vx} \ofcn}}.
For example,
\citep{ghadimi:2018:approximationmethodsbilevel}
assumes
\begin{equation*}
\E{\normrsq{\nabla_\params \lfcnparamsvx - \xmath{\hat{\nabla}}_\params \lfcnparamsvx} }
\leq \xmath{\sigma^2_{\nabla_\params \lfcn}}
\quad \forall \vx, \params
.
\end{equation*}
Here, the expectation is taken with respect to
the data and noise distributions.
The Bilevel Stochastic Approximation (BSA) method
replaces the lower-level update in BA (see \aref{alg: ba})
with standard stochastic gradient descent.
The corresponding upper-level step in BSA is
a projected gradient step
with stochastic estimates of all gradients.
Another difference in
the stochastic versions of
the BA \citep{ghadimi:2018:approximationmethodsbilevel}
and BiO \citep{ji:2021:bileveloptimizationconvergence}
methods
is that they
approximate the Hessian inverse
using a truncated Neumann series expansion.
Ref. \citep{ji:2021:bileveloptimizationconvergence}
simplifies the inverse Hessian calculation
to replace expensive matrix-matrix multiplications
with matrix-vector multiplications.
This same strategy makes
backpropagation more computationally efficient
than the forward mode computation for the unrolled gradient;
see \sref{sec: unrolled}.
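The Neumann-series idea can be sketched in a few lines: for $0 < \alpha < 1/\lambda_{\max}(H)$ and $H \succ 0$, one has $H^{-1} v = \alpha \sum_{k \geq 0} (I - \alpha H)^k v$, and each term of a truncated sum needs only a Hessian-vector product. A minimal Python illustration (our own; the matrix is formed explicitly only to check the answer):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
M = rng.standard_normal((N, N))
H = M @ M.T + N * np.eye(N)          # symmetric positive definite "Hessian"
v = rng.standard_normal(N)

def neumann_inverse_vec(hess_vec, v, alpha, K):
    """Approximate H^{-1} v by alpha * sum_{k=0}^{K-1} (I - alpha H)^k v,
    using only Hessian-vector products (no matrix-matrix work)."""
    term = v.copy()
    total = v.copy()
    for _ in range(K - 1):
        term = term - alpha * hess_vec(term)   # apply (I - alpha H)
        total = total + term
    return alpha * total

alpha = 1.0 / np.linalg.norm(H, 2)             # ensures ||I - alpha H||_2 < 1
approx = neumann_inverse_vec(lambda u: H @ u, v, alpha, K=500)
exact = np.linalg.solve(H, v)
print(np.linalg.norm(approx - exact))          # tiny truncation error
```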
\section{Single-Loop Algorithms}
Unlike the double-loop algorithms,
single-loop algorithms take a gradient step
in \params
without optimizing the lower-level problem
each step.
Two early bilevel method papers
\citep{haber:2003:learningregularizationfunctionals,kunisch:2013:bileveloptimizationapproach}
proposed single-loop approaches
based on solving the system of equations
that arises from the Lagrangian.
More recently,
\citep{hong:2020:twotimescaleframeworkbilevel,chen:2021:singletimescalestochasticbilevel}
extended the complexity analysis of
\citep{ghadimi:2018:approximationmethodsbilevel,ji:2021:bileveloptimizationconvergence}
to single-loop algorithms that alternate gradient steps in \vx and \params.
This section discusses the two approaches
in turn.
The system of equations approach in
\citep{haber:2003:learningregularizationfunctionals,kunisch:2013:bileveloptimizationapproach}
closely follows
the KKT perspective on the minimizer
approach in \sref{sec: minimizer via kkt}.
Recall that the gradient of the lower-level problem
is zero at a minimizer, \xhat,
and one can use this equality
as a constraint on the upper-level loss function.
The corresponding Lagrangian is
\begin{align}
L(\vx, \params, \vnu)
&= \lfcn(\params \, ; \vx) + \vnu' \dx{\ofcnargs} \label{eq: lagrangian}
,\end{align}
where \vnu is a vector of Lagrange multipliers.
For the filter learning example
\eqref{eq: bilevel for analysis filters},
the Lagrangian is
\begin{align}
L(\vx, \params, \vnu)
&= \onehalf \normrsq{\vx - \xtrue}_2 + \nonumber \\
& \, \, \vnu' \left(
\mA' (\mA \vx - \vy)
+ \ebeta{0} \sum_{k=1}^K \ebeta{k} \hktil \conv \sparsefcn.(\hk \conv \vx; \epsilon)
\right)
\nonumber
.\end{align}
We again consider derivatives of the Lagrangian
with respect to the Lagrange multipliers, \vx, and \params.
Here are the general expressions
and the specific equations
for the filter learning example
\eqref{eq: bilevel for analysis filters},
when%
\footnote{
The column of the gradient expression
$\nabla_{\vx \params} \ofcn(\vx \,; \params)$
corresponding to an element of \params
that is a filter coefficient
is %
given in \eqref{eq: nablas for filter learning}.
}
considering the element of
\params corresponding to $\beta_k$:
\begin{align*}
\nabla_\vnu L(\vx, \params, \vnu) &= \dx{\ofcnargs} \\
&= \mA' (\mA \vx - \vy)
+ \ebeta{0} \sum_{k=1}^K \ebeta{k} \hktil \conv \sparsefcn.(\hk \conv \vx; \epsilon) \\
\dx{ L(\vx, \params, \vnu)}
&= \dx{\lfcn(\params \, ; \vx)} + \nabla_{\vx \vx} \ofcn(\vx \,; \params) \vnu \\
&= \vx - \xtrue + \nabla_{\vx \vx} \ofcn(\vx \,; \params) \vnu \\
\dParams{L(\vx, \params, \vnu)} &= \dParams{\lfcnparamsvx} + \vnu' \nabla_{\vx \params} \ofcn(\vx \,; \params)
\\
\dbetak{L(\vx, \params, \vnu)}
&= \vnu' \left( \ebeta{0} \ebeta{k} \hktil \conv \dsparsefcn.(\hk \conv \xhat) \right)
.\end{align*}
These expressions are equivalent
to the primal, adjoint, and optimality conditions
respectively
in \citep{kunisch:2013:bileveloptimizationapproach}.
Here, the minimizer and single-loop approaches diverge.
\sref{sec: minimizer via kkt} used
the above Lagrangian gradients to solve for $\hat{\vnu}$,
substitute $\hat{\vnu}$ into the
gradient of the Lagrangian with respect to \params,
and thus find the minimizer expression for \uppergrad.
The single-loop approach
instead considers solving the system of gradient equations
directly:
\[
\mG(\vx, \params, \vnu)
= \begin{bmatrix}
\nabla_\vnu L(\vx, \params, \vnu) \\
\dx{ L(\vx, \params, \vnu)} \\
\dParams{ L(\vx, \params, \vnu)}
\end{bmatrix} = \bmath{0}
.\]
For example,
\citep{kunisch:2013:bileveloptimizationapproach}
proposed a Newton algorithm
using
the Jacobian of the gradient matrix \mG.
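To illustrate the system-of-equations approach, here is a small Python sketch of our own (not the cited implementation) that applies Newton's method to the stacked Lagrangian gradients $\mG = \bmath{0}$ for a scalar toy problem with lower-level cost $g(x; \theta) = x^4/4 - \theta x$ (so its gradient is $x^3 - \theta$) and upper-level loss $f(x) = (x-1)^2/2$:

```python
import numpy as np

# Lagrangian: L = 0.5*(x - 1)^2 + nu*(x^3 - theta).
# The KKT system G(x, theta, nu) = 0 stacks the three Lagrangian gradients.

def G(z):
    x, theta, nu = z
    return np.array([
        x**3 - theta,           # dL/dnu    (primal feasibility)
        (x - 1) + 3*nu*x**2,    # dL/dx     (adjoint equation)
        -nu,                    # dL/dtheta (optimality in the hyperparameter)
    ])

def J(z):
    """Jacobian of G, used by the Newton solver."""
    x, theta, nu = z
    return np.array([
        [3*x**2,      -1.0, 0.0   ],
        [1 + 6*nu*x,   0.0, 3*x**2],
        [0.0,          0.0, -1.0  ],
    ])

z = np.array([1.5, 2.0, 0.1])          # (x, theta, nu) initial guess
for _ in range(20):                    # Newton iterations on G(z) = 0
    z = z - np.linalg.solve(J(z), G(z))

print(z)   # -> approximately [1, 1, 0]: xhat = 1, theta = 1, nu = 0
```

Note that this lower-level cost is only locally well behaved; the sketch is meant to show the structure of the Newton iteration, not to satisfy the strong-convexity assumptions used in the complexity analyses.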
\subsection{Complexity Analysis}
\label{sec: single-loop complexity}
Rather than working on the lower-level problem
over multiple iterations before calculating
and taking a step along the gradient of the upper-level loss function,
the two-timescale stochastic approximation (TTSA) method
\citep{hong:2020:twotimescaleframeworkbilevel}
and the
Single Timescale stochAstic BiLevEl optimization (STABLE)
method
\citep{chen:2021:singletimescalestochasticbilevel}
alternate between
one gradient step for the lower-level cost
and one gradient step for the upper-level problem.
There are two main challenges in designing such a
single-loop algorithm for bilevel optimization.
Because both TTSA and STABLE use the minimizer approach
\eqref{eq: IFT final gradient dldparams}
to finding the upper-level gradient,
the first challenge is
ensuring the current lower-level iterate
is close enough to the minimizer
to calculate a useful upper-level gradient.
TTSA addresses this challenge by taking
larger steps for the lower-level problem
while STABLE addresses this using a
lower-level update
that better predicts
the next lower-level minimizer,
$\xhat(\iter{\params}{+1})$.
The second main challenge is in estimating
the upper-level gradient,
even given stochastic estimates of
$\nabla_{\vx \vx} \ofcn$
and
$\nabla_{\vx \params} \ofcn$,
because the minimizer equation \eqref{eq: IFT final gradient dldparams}
is nonlinear.
The theoretical results for TTSA
account for the fact that the upper-level gradient estimate
is biased due to this nonlinearity.
In contrast, STABLE uses recursion
to update estimates of the
gradients and thus reduce variance.
The remainder of this section goes into more detail
about both algorithms.
\aref{alg: ttsa}
summarizes
the single-loop algorithm TTSA
\citep{hong:2020:twotimescaleframeworkbilevel}.
The analysis of TTSA
uses the same lower-level cost function
assumptions as mentioned above for
BSA \citep{ghadimi:2018:approximationmethodsbilevel}
and one additional upper-level assumption:
that \lfcn is weakly convex with parameter $\mu_\lfcn$, \ie,
\[
\lfcn(\params+\bmath{\delta}) \geq \lfcn(\params) + \langle \nabla \lfcn(\params), \bmath{\delta} \rangle + \mu_\lfcn \normsq{\bmath{\delta}}
,\quad \forall \params, \bmath{\delta} \in \R^\paramsdim. %
\]
TTSA assumes the lower-level gradient estimate is still unbiased
and that its variance is now bounded as
\[
\E{\normrsq{\nabla_\vx \ofcn(\vx, \params) - \xmath{\tilde{\nabla}}_\vx \ofcn(\vx, \params) }}
\leq \xmath{\sigma^2_{\nabla_{\vx} \ofcn}} \, (1 + \normsq{\nabla_\vx \ofcn(\vx, \params)}).
\]
Further,
the stochastic upper-level gradient estimate,
$\xmath{\tilde{\nabla}}_\params \lfcn(\iter{\params},\iter{\vx}{+1})$,
includes a bias
$B_\upperiter \leq b_\upperiter$,
that stems from the nonlinear dependence on the
lower-level Hessian.
This bias decreases as the batch size increases.
\begin{algorithm}[t!]
\caption{Two-Timescale Stochastic Approximation (TTSA) Method from \citep{hong:2020:twotimescaleframeworkbilevel}.
TTSA includes a possible projection of the hyperparameter after each gradient step onto a constraint set,
not shown here.
The tildes denote stochastic approximations for the corresponding gradients.
}
\label{alg: ttsa}
\input{TablesAndAlgs/alg,ttsa}
\end{algorithm}
The \dquotes{two-timescale} part of TTSA
comes from using different upper and lower step size
sequences.
The lower-level step size is larger and
bounds the tracking error
(the distance between \xhat and the \vx iterate)
as the hyperparameters change
(at the upper-level loss's relatively slower rate).
Thus,
\citep{hong:2020:twotimescaleframeworkbilevel}
chose step-sizes such that
\mbox{$\ssupper(\upperiter) /\sslower(\upperiter) \rightarrow 0 $}.
Specifically,
if \lfcn is strongly convex, then
\ssupper is $\order{\upperiter^{\neg1}}$ and
\sslower is $\order{\upperiter^{\neg2/3}}$.
If \lfcn is convex, then
\ssupper is $\order{\upperiter^{\neg3/4}}$ and
\sslower is $\order{\upperiter^{\neg1/2}}$.
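These schedules can be plugged into a minimal single-loop sketch. The Python below (a deterministic toy of our own; TTSA itself uses stochastic gradients and a projection) alternates one lower-level step and one upper-level step with the strongly convex schedules above, so that the step-size ratio decays like $k^{-1/3}$:

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 5, 3
A, _ = np.linalg.qr(rng.standard_normal((N, P)))   # orthonormal columns: A'A = I
x_star = rng.standard_normal(N)

# Single-loop, TTSA-style alternation on the toy problem
# g(x; theta) = 0.5||x - A theta||^2, f(theta) = 0.5||A theta - x_star||^2.
theta, x = np.zeros(P), np.zeros(N)
for k in range(1, 10000):
    beta = min(0.5, k ** (-2 / 3))     # lower-level step: larger, O(k^{-2/3})
    alpha = min(0.05, 1.0 / k)         # upper-level step: smaller, O(k^{-1})
    x = x - beta * (x - A @ theta)                 # one lower-level GD step
    theta = theta - alpha * (A.T @ (x - x_star))   # one upper step at current x

theta_opt = A.T @ x_star               # minimizer of the upper-level loss
print(np.linalg.norm(theta - theta_opt))
```

Because the hyperparameters move at the slower timescale, the single lower-level step per iteration suffices to keep $x$ tracking $\hat{x}(\theta)$.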
\citet{chen:2021:singletimescalestochasticbilevel}
improved the sample complexity of TTSA.
By using a single timescale,
their algorithm, STABLE, achieves the
\dquotes{same order of sample complexity as the stochastic
gradient descent method for the single-level stochastic optimization}
\citep{chen:2021:singletimescalestochasticbilevel}. %
However, the improved sample complexity comes at the cost of
additional computation per iteration, as STABLE
can no longer trade a matrix inversion (of size $\paramsdim \by \paramsdim$)
for matrix-vector products, as done in \citep{ji:2021:bileveloptimizationconvergence}.
Ref. \citep{chen:2021:singletimescalestochasticbilevel} therefore
recommended STABLE when sampling is more costly than computation
or when \paramsdim is relatively small.
The analysis of STABLE uses
the same upper-level loss and lower-level cost function
assumptions as listed above for BSA.
Additionally, STABLE assumes that,
for all \vx,
\xmath{\nabla_\params \lfcnparamsvx} is
Lipschitz continuous
in \params.
This condition is easily satisfied
as many upper-level loss functions do not regularize \params.
Further, those that do often use a squared 2-norm,
\ie, Tikhonov-style regularization,
which has a Lipschitz continuous gradient.
Additionally,
rather than bounding the gradient norms as in
assumptions \ref{BA assumption upper-level 2} and \ref{BA assumption lower-level 5},
\citep{chen:2021:singletimescalestochasticbilevel}
assumes the following moments are bounded:
\begin{itemize}[noitemsep,topsep=0pt]
\item the second and fourth moment of $\nabla_\params \lfcnparamsvx$ and $\nabla_\vx \lfcnparamsvx$
and
\item the second moment of $\nabla_{\params \vx} \ofcnargs$ and $\nabla_{\vx \vx} \ofcnargs$,
\end{itemize}
ensuring that the upper-level gradient is Lipschitz continuous.
Like the previous algorithms discussed,
STABLE evaluates the minimizer result
\eqref{eq: IFT final gradient dldparams}
at non-minimizer lower-level iterates, \iter{\vx},
to estimate the hyperparameter gradient.
However, it differs in how it
estimates and uses the gradients.
STABLE replaces the upper-level gradient
in TTSA \lref{alg: ttsa line: upper-level gradient calc}
with
\begin{align}
\vg = \nabla_\params \iter{\lfcn}
-
\underbrace{\parenr{ \iter{\recursivegrad_{\vx \params}} }'}_{\mathllap{
\text{Prev. } %
\xmath{\tilde{\nabla}}_{\vx \params} \iter{\ofcn} \!\!\!\!\!\!\!\!\!\!\!}
}
\underbrace{\parenr{ \iter{\recursivegrad_{\vx \vx}} }^{\neg 1}}_{\mathrlap{\!\!\!\!\!\!\!\!\!\!\! \text{Prev. }
\xmath{\tilde{\nabla}}_{\vx \vx} \iter{\ofcn} }} %
\nabla_\vx \iter{\lfcn}. \label{eq: stable gradient}
\end{align}
The newly defined gradient matrices
are recursively updated as
\begin{align*}
\iter{\recursivegrad_{\vx \params}} &=
\mathcal{P}_{ \norm{\recursivegrad} \leq \xmath{C_{\nabla_{\params\vx}\ofcn}} } \left(
(1-\tau_\upperiter)
\underbrace{( \iter{\recursivegrad_{\vx \params}}{-1} - \xmath{\tilde{\nabla}}_{\vx \params} \iter{\ofcn}{-1})}_{\text{Recursive update}}
+
\underbrace{\xmath{\tilde{\nabla}}_{\vx \params} \iter{\ofcn}}_{\text{New estimate}}
\right) \\
\iter{\recursivegrad_{\vx \vx}} &=
\mathcal{P}_{\recursivegrad \psd \xmath{\mu_{\vx,\ofcn}} \I } \left(
(1-\tau_\upperiter)
\overbrace{(\iter{\recursivegrad_{\vx \vx}}{-1}-\xmath{\tilde{\nabla}}_{\vx \vx} \iter{\ofcn}{-1})}^{\text{Recursive update}}
+
\overbrace{\xmath{\tilde{\nabla}}_{\vx \vx} \iter{\ofcn}}^{\text{New estimate}}
\right)
.\end{align*}
In the
$\iter{\recursivegrad_{\vx \params}}$
update,
the projection onto the set of matrices with a maximum norm
helps ensure stability
by not allowing the gradient to get too large.
The projection in the
$\iter{\recursivegrad_{\vx \vx}}$
update
is an eigenvalue truncation
that ensures positive definiteness of the estimated Hessian
in this Newton-based method.
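The two projections can be sketched directly. In the Python below (our illustration; the spectral norm in the first projection is an assumption, as the paper's projection may use a different norm), one projection rescales a matrix into a norm ball and the other truncates eigenvalues to maintain positive definiteness:

```python
import numpy as np

rng = np.random.default_rng(5)
N, mu, C_norm = 6, 0.5, 10.0

def project_psd(H, mu):
    """Eigenvalue truncation: clip eigenvalues below mu so the projected
    Hessian estimate satisfies H >= mu * I and is safely invertible."""
    H = 0.5 * (H + H.T)                       # symmetrize the noisy estimate
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.maximum(w, mu)) @ V.T

def project_norm_ball(M, C):
    """Scale M into a spectral-norm ball of radius C (norm choice assumed)."""
    s = np.linalg.norm(M, 2)
    return M if s <= C else M * (C / s)

H_noisy = rng.standard_normal((N, N))         # indefinite stochastic estimate
H_proj = project_psd(H_noisy, mu)
M_proj = project_norm_ball(5.0 * rng.standard_normal((N, 4)), C_norm)

print(np.linalg.eigvalsh(H_proj).min(), np.linalg.norm(M_proj, 2))
```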
After computing the gradient \vg \eqref{eq: stable gradient},
the upper-level update is a standard descent step
as in \aref{alg: ttsa} \lref{alg: ttsa line: upper-level step}.
STABLE also uses the recursively estimated gradient matrices
in the lower-level cost function descent.
It replaces the standard gradient descent step in
\aref{alg: ttsa} \lref{alg: ttsa line: lower-level update}
with one that uses second order information:
\begin{align*}
\iter{\vx}{+1} &= \iter{\vx} -
\underbrace{\sslower(\upperiter) \xmath{\tilde{\nabla}}_\vx \ofcn(\iter{\vx}; \iter{\params})}_{\text{Standard GD step}}
- \underbrace{(\iter{\xmath{\tilde{\nabla}}}_{\vx \vx})^{\neg 1} (\iter{\xmath{\tilde{\nabla}}}_{\params \vx})'(\iter{\params}{+1} - \iter{\params})}_{\text{New term}}.
\end{align*}
With these changes,
STABLE is able to reduce the iteration complexity
relative to TTSA as summarized in
\tref{tab: stochastic complexity summaries}. %
\begin{table}[htb]
\ifloadeps
\includegraphics[]{TablesAndAlgs/epsfiles/tab,stochasticbilevelcomplexities.eps}
\else
\input{TablesAndAlgs/tab,stochasticbilevelcomplexities}
\fi
\iffigsatend \tabletag{5.3} \fi
\caption{
Finite-time sample complexities
for the stochastic bilevel problem
in the common scenario where \lfcn is non-convex
when using
BA \citep{ghadimi:2018:approximationmethodsbilevel},
stocBiO \citep{ji:2021:bileveloptimizationconvergence},
TTSA \citep{hong:2020:twotimescaleframeworkbilevel},
and
STABLE \citep{chen:2021:singletimescalestochasticbilevel}.
When \lfcn is strongly convex,
the sample complexity of STABLE
is $\order{\epsilon^{\neg1}}$
(for the upper- and lower-level gradients),
which is the same as single level stochastic gradient algorithms.
See cited papers for other complexity
results when \lfcn is strongly convex.
}
\label{tab: stochastic complexity summaries}
\end{table}
\section{Summary of Methods}
There are many variations
of gradient-based methods
for optimizing bilevel problems,
especially when one considers that
many of the upper-level descent strategies
can work with either the minimizer or unrolled approach
discussed in \cref{chap: ift and unrolled}.
There is no clear single \dquotes{best} algorithm
for all applications;
each algorithm involves trade-offs.
Building on the minimizer and unrolled
methods for finding the upper-level gradient
with respect to the hyperparameters,
\uppergrad,
double-loop algorithms
are an intuitive approach.
Although optimizing the lower-level problem
every time one takes a gradient
step in \params
is computationally expensive,
the lower-level problem
is embarrassingly parallelizable across samples.
Specifically,
one can optimize the lower-level cost for each
training sample independently
before averaging the resulting gradients
to take an upper-level gradient step.
In the typical scenario when training is performed offline,
training wall-time can therefore be dramatically reduced
by using multiple processors.
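This per-sample independence is easy to exploit with any parallel map. A minimal Python sketch (our illustration, using a toy closed-form lower-level solve) averages per-sample hypergradients computed in a thread pool:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(6)
N, P, S = 5, 3, 8                       # S independent training samples
A = rng.standard_normal((N, P))
targets = rng.standard_normal((S, N))   # per-sample clean signals

# Each sample's lower-level solve and hypergradient are independent, so they
# can run in parallel before averaging for one upper-level step.
def per_sample_hypergrad(theta, x_j):
    xhat = A @ theta                    # closed-form lower-level minimizer here
    return A.T @ (xhat - x_j)           # IFT hypergradient for this sample

theta = np.zeros(P)
with ThreadPoolExecutor() as pool:
    grads = list(pool.map(lambda xj: per_sample_hypergrad(theta, xj), targets))
g = np.mean(grads, axis=0)              # averaged upper-level gradient

g_serial = np.mean([per_sample_hypergrad(theta, xj) for xj in targets], axis=0)
print(np.allclose(g, g_serial))         # parallel and serial averages agree
```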
Single-loop algorithms
remove the need to optimize the lower-level cost function
multiple times.
The single-loop algorithms
that consider a system of equations
often accelerate convergence
using Newton solvers
\citep{kunisch:2013:bileveloptimizationapproach,calatroni:2017:bilevelapproacheslearning}.
However, the optimality system grows quickly
when there are multiple training images,
and may become too computationally expensive
as \Ntrue increases
\citep{chen:2014:insightsanalysisoperator}.
The methods that alternate gradient steps in \vx and \params
\citep{hong:2020:twotimescaleframeworkbilevel,chen:2021:singletimescalestochasticbilevel}
seem particularly promising for bilevel problems
with large training datasets
and many hyperparameters.
Both methods improve on the finite-time sample complexity
of the double-loop algorithms proposed in
\citep{ghadimi:2018:approximationmethodsbilevel,ji:2021:bileveloptimizationconvergence}.
These alternating gradient step methods
are relatively recent
and have yet to appear in
many other papers.
This section organized algorithms
based on the number of for-loops;
double-loop algorithms have two loops
while single-loop algorithms have one.
However, not all methods fall
cleanly into one group.
One such example is
the Penalty method
\citep{mehra:2020:penaltymethodinversionfree}.
The Penalty method forms a single-level,
constrained optimization problem,
with the constraint that the
gradient of the lower-level cost function should be zero,
$\nabla_\vx \ofcnargs = \bmath{0}$.
(This step is similar to
the derivation of
the minimizer approach via KKT conditions;
see \sref{sec: minimizer via kkt}.)
Rather than forming the Lagrangian
as in \eqref{eq: lagrangian},
\citep{mehra:2020:penaltymethodinversionfree}
penalizes the norm of the gradient,
with increasing penalties as the upper iterations increase.
Thus, the Penalty cost function%
\footnote{
This is a simplification;
\citep{mehra:2020:penaltymethodinversionfree}
allows for constraints on \vx and \params.
}
at iteration \upperiter is
\begin{equation*}
p(\params \, , \vx) = \lfcnargs + \iter{\lambda} \normsq{\nabla_\vx \ofcnargs}
.\end{equation*}
The penalty variable sequence,
${\iter{\lambda}}$,
must be positive, non-decreasing, and divergent
($\iter{\lambda} \rightarrow \infty$).
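A minimal Python sketch of this penalty strategy (our own construction; the cost, schedule, and step sizes are illustrative assumptions, and the cited method also allows constraints) alternates inner GD steps on the penalized cost with increases of the penalty variable:

```python
# Toy problem: f(theta, x) = 0.5(x-1)^2 + 0.05 theta^2,
# lower-level g(x; theta) = 0.5(x - theta)^2, so ||grad_x g||^2 = (x - theta)^2
# and the penalized cost is p = f + lam * (x - theta)^2.
def grad_p(x, theta, lam):
    gx = (x - 1) + 2 * lam * (x - theta)
    gtheta = 0.1 * theta - 2 * lam * (x - theta)
    return gx, gtheta

x, theta = 0.0, 0.0
for i in range(60):                       # outer iterations
    lam = 1.0 + 0.5 * i                   # positive, non-decreasing, divergent
    step = 0.4 / (1 + 2 * lam)            # keep GD stable as lam grows
    for _ in range(200):                  # inner GD steps on p at this lam
        gx, gtheta = grad_p(x, theta, lam)
        x, theta = x - step * gx, theta - step * gtheta

# As lam grows the constraint x = theta binds; the limit minimizes
# 0.5(t-1)^2 + 0.05 t^2, whose minimizer is t = 1/1.1.
print(x, theta)
```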
Penalty \citep{mehra:2020:penaltymethodinversionfree}
incorporates elements of both double-loop and single-loop algorithms.
Similar to the double-loop algorithms,
Penalty takes multiple gradient descent
steps in the lower-level optimization variable, \vx,
before calculating and updating
the hyperparameters.
However,
Penalty forms a single-level optimization problem
that could be optimized using techniques
such as those used in the single-loop algorithms.
The finite-time complexity analyses
\citep{ghadimi:2018:approximationmethodsbilevel,ji:2021:bileveloptimizationconvergence,hong:2020:twotimescaleframeworkbilevel,chen:2021:singletimescalestochasticbilevel}
justify the use of gradient-based
bilevel methods for problems with many hyperparameters,
as none of the sample complexity bounds
involve the number of hyperparameters.
This is in stark contrast with
the hyperparameter optimization strategies
in \cref{chap: hpo}.
However,
the per-iteration cost for bilevel methods
is still large and grows with the hyperparameter dimension.
Further,
the conditions on the lower-level cost function
\ref{BA assumption lower-level 1}-\ref{BA assumption lower-level end}
seem restrictive
and may not be satisfied in practice.
Complexity analysis
based on more relaxed conditions
could be very valuable.
Gradient-based and other hyperparameter optimization methods
are active research areas,
and the trade-offs
continue to evolve.
Although it currently seems that
gradient-based bilevel
methods make sense for problems
with many hyperparameters,
new
methods
may overtake or combine with what is presented here.
For example,
most bilevel methods
(and convergence analyses thereof)
use classical gradient descent for the
lower-level optimization algorithm,
whereas \citep{kim:2017:convergenceanalysisoptimized}
showed that the Optimized Gradient Method (OGM)
has better convergence guarantees
and is optimal among first-order methods
for smooth convex problems
\cite{drori:17:tei}.
These advances provide opportunities
for further acceleration of bilevel methods.
\chapter{Survey of Applications}
\label{chap: applications}
Bilevel methods have been used in
many image reconstruction applications,
including
1D signal denoising \citep{peyre:2011:learninganalysissparsity},
image denoising (see following sections),
compressed sensing \citep{chen:2020:learnabledescentalgorithm},
spectral CT image reconstruction
\citep{sixou:2020:adaptativeregularizationparameter},
and
MRI image reconstruction
\citep{chen:2020:learnabledescentalgorithm}.
This chapter discusses trends
and highlights specific applications
to provide concrete examples
of bilevel methods
for image reconstruction.
Many papers present or analyze bilevel optimization methods
for general upper-level loss functions
and lower-level cost functions,
under some set of assumptions about each level.
\crefs{chap: ift and unrolled}{chap: bilevel methods}
summarized many of these methods.
Although there are cases when
the choice of a loss function and/or cost
impacts the optimization strategy,
many bilevel problems could
use any optimization method.
Thus, this chapter concentrates on
the specific applications, %
rather than methodology.
This chapter is split into
a discussion of lower-level cost
and upper-level loss functions.
(Lower-level cost functions
that involve CNNs
are discussed separately;
see \sref{sec: connections unrolled}.)
The conclusion section discusses
examples where the loss function
is tightly connected to the cost function.
\section{Lower-level Cost Function Design}
\label{sec: prev results lower level}
Once a bilevel problem is optimized
to find \paramh,
the learned parameters
are typically deployed
in the same lower-level problem
as used during training
but with new, testing data.
Thus, it is the lower-level cost function
that specifies the application
of the bilevel problem,
\eg, %
CT image reconstruction
or image deblurring.
Denoising applications
consider the case where
the forward model is an identity operator
($\mA=\I$).
This case has the simplest possible
data-fit term in
the cost function
and requires the least
amount of computation
when computing gradients or evaluating \ofcn.
Because bilevel methods are generally already
computationally expensive,
it is unsurprising that
many papers
focus on denoising,
even if only as a starting point towards applying
the proposed bilevel method to other applications.
More general image reconstruction problems consider
non-identity forward models.
Few papers learn parameters for image reconstruction
in the fully task-based manner
described in \eqref{eq: generic bilevel upper-level},
likely due to the additional computational cost.
Some papers,
\eg, \cite{kobler:2020:totaldeepvariation,chen:2014:insightsanalysisoperator,chambolle:2021:learningconsistentdiscretizations}
consider learning parameters for denoising,
and then apply \paramh in a
reconstruction problem
with the same regularizer but with the new \mA in the data-fit term.
These \dquotes{crossover experiments}
\citep{chambolle:2021:learningconsistentdiscretizations} %
test the generalizability of the learned parameters,
but they sacrifice the specific task-based nature of the bilevel method.
Many bilevel methods,
especially in image denoising
\citep{peyre:2011:learninganalysissparsity,fehrenbach:2015:bilevelimagedenoising,samuel:2009:learningoptimizedmap,kunisch:2013:bileveloptimizationapproach,chen:2014:insightsanalysisoperator},
but also in image reconstruction \citep{holler:2018:bilevelapproachparameter},
use the same or a very similar lower-level cost
as the running example in this review.
From \sref{sec: bilevel set-up},
the running example cost function is:
\begin{equation}
\xhat(\params, \vy) = \argmin_\vx \overbrace{\onehalf \norm{\mA \vx-\vy}^2_2 + \ebeta{0}
\underbrace{\sum_{k=1}^K \ebeta{k} \mat{1}' \sparsefcn(\hk \conv \vx; \epsilon)}_{R(\vx \, ; \params)}
}^{\ofcnargs}
\label{eq: lower-level repeat 2}
.\end{equation}
The learned hyperparameters,
\params,
include the tuning parameters,
$\beta_k$ %
and/or the filter coefficients,
\hk.
The image reconstruction example
in \citep{holler:2018:bilevelapproachparameter}
generalized \eqref{eq: lower-level repeat 2}
for implicitly defined forward models
by using a different data-fit term,
as given in
\eqref{eq: holler lower level}.
Their two examples learn
parameters to estimate
the diffusion coefficient
or forcing function in a second-order elliptic
partial differential equation.
Two common variations among applications using \eqref{eq: lower-level repeat 2}
are (1) the choice of which tuning parameters to learn
and (2) what sparsifying function, \sparsefcn, to use.
Some methods
\citep{kunisch:2013:bileveloptimizationapproach,fehrenbach:2015:bilevelimagedenoising,holler:2018:bilevelapproachparameter}
learn only the tuning parameters;
these methods typically use finite differencing filters or
discrete cosine transform (DCT) filters
(excluding the DC filter) as the \hk's.
Other methods learn only filter coefficients \citep{peyre:2011:learninganalysissparsity}.
A slight variation for learning the filter coefficients
is to learn coefficients
for a linear combination of filter basis elements
\citep{samuel:2009:learningoptimizedmap,chen:2014:insightsanalysisoperator},
\ie,
learning $a_{k,i}$ where
\[
\hk = \sum_i a_{k,i} \vb_i
,\]
for some set of basis filter elements, $\vb_i$.
One benefit of imposing a filter basis
is the ability to ensure the filters
lie %
in a given subspace.
For example,
\citep{samuel:2009:learningoptimizedmap,chen:2014:insightsanalysisoperator}
use the DCT as a basis
and remove the constant filter
so that all learned filters are guaranteed to have zero-mean.
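This zero-mean construction is easy to sketch in Python; the coefficient values below are random placeholders, not learned:

```python
import numpy as np

def dct_basis_2d(p):
    # orthonormal 1D DCT-II vectors, combined into p*p separable 2D filters
    n = np.arange(p)
    C = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / p)
    C[0, :] *= np.sqrt(1.0 / p)
    C[1:, :] *= np.sqrt(2.0 / p)
    return [np.outer(C[i], C[j]) for i in range(p) for j in range(p)]

basis = dct_basis_2d(7)[1:]   # drop the constant (DC) filter

rng = np.random.default_rng(0)
a = rng.standard_normal(len(basis))           # hypothetical coefficients a_{k,i}
h = sum(ai * bi for ai, bi in zip(a, basis))  # h_k = sum_i a_{k,i} b_i
```

Because every non-DC basis filter sums to zero, any linear combination of them is guaranteed to be zero-mean, regardless of the learned coefficients.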
In terms of sparsifying functions,
\citep{peyre:2011:learninganalysissparsity,fehrenbach:2015:bilevelimagedenoising}
used the same corner rounded 1-norm as in \eqref{eq: corner rounded 1-norm},
\citep{samuel:2009:learningoptimizedmap} used
$\sparsefcn = \log(1+z^2)$
to relate their method to the Field of Experts framework \citep{roth:2005:fieldsexpertsframework},
\citep{holler:2018:bilevelapproachparameter} used a quadratic penalty,
and
\citep{kunisch:2013:bileveloptimizationapproach,chen:2014:insightsanalysisoperator}
both consider multiple \sparsefcn options to examine the impact of non-convexity in \sparsefcn.
Ref. \citep{kunisch:2013:bileveloptimizationapproach} compared $p$-norms,
$\norm{\hk \conv \vx}_p^p$,
for $p \in \{\onehalft, 1, 2\}$,
where the $p=\onehalft$ and $p=1$ cases are corner-rounded
to ensure \sparsefcn is smooth.
(The $p=\onehalft$ case is non-convex.)
Ref. \citep{chen:2014:insightsanalysisoperator}
compared the convex corner-rounded 1-norm in \eqref{eq: corner rounded 1-norm}
with two non-convex choices:
the log-sum penalty $\log(1+z^2)$,
and the Student-t function $\log(10\epsilon + \sqrt{z^2+\epsilon^2})$.
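For intuition, these choices of \sparsefcn can be compared numerically; the sketch below uses an illustrative smoothing parameter value:

```python
import numpy as np

eps = 0.01
z = np.linspace(-3.0, 3.0, 601)

phi_cr1  = np.sqrt(z**2 + eps**2) - eps               # convex corner-rounded 1-norm
phi_log  = np.log(1 + z**2)                           # non-convex log-sum penalty
phi_stud = np.log(10 * eps + np.sqrt(z**2 + eps**2))  # non-convex Student-t-type

# the non-convex penalties flatten out for large |z|, so strong filter
# responses (e.g., at edges) are penalized less than under the convex choice
```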
Both \citep{kunisch:2013:bileveloptimizationapproach,chen:2014:insightsanalysisoperator}
found that non-convex penalty functions led to denoised images
with better (higher) PSNR.
They hypothesize that the improvement
is due to the non-convex penalty functions
better matching the heavy-tailed distributions
in natural images.
As further evidence of the importance of non-convexity,
\citep{chen:2014:insightsanalysisoperator}
found that untrained $7 \by 7$ DCT filters
(excluding the constant filter)
with learned tuning parameters
and a non-convex \sparsefcn
outperformed
learned filter coefficients
with a convex \sparsefcn,
despite the increased data adaptability
when learning filter coefficients. %
The trade-off for using non-convex penalty functions
is %
the possibility of local minimizers of the lower-level cost.
\citet{chen:2014:insightsanalysisoperator}
also investigated how the number of learned filters
and the size of the filters
impacted denoising PSNR.
They concluded that increasing the number of filters
to achieve an over-complete filter set
may not be worth the increased computational expense
and that increasing the filter size
past $11 \by 11$ is unlikely to improve PSNR.
Using 48 filters of size $7 \by 7$
and the log-sum penalty function,
\citep{chen:2014:insightsanalysisoperator}
achieved
denoising results on natural images
comparable to algorithms such as BM3D
\citep{dabov:2007:imagedenoisingsparse}. %
Although results will vary between
applications and training data sets,
the results from \citep{chen:2014:insightsanalysisoperator}
provide
motivation for filter learning
and an initial guide for designing bilevel methods.
In addition to variations on
the running example for \ofcn \eqref{eq: lower-level repeat 2},
a common regularizer for the lower-level cost is
Total Generalized Variation (TGV) \citep{bredies:2010:totalgeneralizedvariation}.
Whereas
TV encourages images to be piece-wise constant,
TGV is a
generalization of TV
designed for piece-wise linear images. %
Another generalization of TV
for piece-wise linear images
is Infimal Convolution Total Variation (ICTV) \citep{chambolle:1997:imagerecoverytotal}.
Bilevel papers that investigate ICTV include \citep{delosreyes:2017:bilevelparameterlearning,calatroni:2017:bilevelapproacheslearning};
these papers also investigate TGV.
See \citep{benning:2013:higherordertvmethods}
for a comparison of the two.
TGV cost functions are
typically expressed in the continuous domain,
at least initially,
but then discretized for implementation,
\eg,
\cite{knoll:11:sot,setzer:11:icr}.
One discrete approximation
of the TGV regularizer is:
\begin{align*}
R_{\mathrm{TGV}}(\vx) = \min_{\vz} \ebeta{1} \norm{\xmath{\vh_{\text{TV}}} \conv \vx - \vz}_1 + \ebeta{2} \norm{\partial \vz}_1
,\end{align*}
where
\xmath{\vh_{\text{TV}}} is a filter that takes finite differences
and
$\partial$ is a filter that approximates a symmetrized gradient.
In TV, one usually thinks of the difference vector $\xmath{\vh_{\text{TV}}} \conv \vx$ as sparse;
here \vz is a vector whose finite differences are sparse,
so \vz is approximately piece-wise constant.
Encouraging \vz to be piece-wise constant
in turn makes \vx approximately piece-wise linear,
since $\xmath{\vh_{\text{TV}}} \conv \vx \approx \vz$
from the first term.
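A short numerical sketch illustrates why this construction suits piece-wise linear signals (1D here, with \texttt{np.diff} playing the roles of both \xmath{\vh_{\text{TV}}} and $\partial$):

```python
import numpy as np

t = np.arange(20, dtype=float)
x = np.where(t < 10, 0.5 * t, 5.0 - (t - 10.0))  # two linear pieces: slope 0.5, then -1

d1 = np.diff(x)    # finite differences (h_TV conv x): piece-wise constant
d2 = np.diff(d1)   # differences of differences (the role of "partial z"): sparse

# TV pays a per-sample cost wherever the slope is nonzero; TGV's inner
# minimization can set z close to d1 and pay mainly at the slope change
```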
Bilevel methods for learning $\beta_1$ and $\beta_2$ for the TGV regularizer
include \citep{delosreyes:2017:bilevelparameterlearning,calatroni:2017:bilevelapproacheslearning}.
An extension to the TGV regularizer model
is to learn a space-varying tuning parameter
\citep{hintermuller:2020:dualizationautomaticdistributed}.
As an example of how the regularizer should be chosen based on the application,
\citep{hintermuller:2020:dualizationautomaticdistributed}
found that standard TV with a learned tuning parameter performed best
(in terms of SSIM)
for approximately piece-wise constant images
while TGV with learned tuning parameters performed best
for approximately piece-wise linear images.
\section{Upper-Level Loss Function Design \label{sec: prev results loss function}}
From some of the earliest bilevel methods,
\eg, \citep{haber:2003:learningregularizationfunctionals,peyre:2011:learninganalysissparsity},
to some of the most recent bilevel methods,
\eg, \citep{kobler:2020:totaldeepvariation,antil:2020:bileveloptimizationdeep},
mean square error (MSE) remains the most common
upper-level loss function.
In the unsupervised setting,
\citep{zhang:2020:bilevelnestedsparse,deledalle:2014:steinunbiasedgradient}
used SURE
(an estimate of the MSE, see \sref{sec: loss function design})
as the upper-level loss function.
Unlike many perceptually motivated image quality measures,
MSE is convex in \vx
and it is easy to find \dx{\lfcnargs}.
However, MSE does not capture perceptual quality nor image utility
(see \sref{sec: loss function design}).
This section discusses a few bilevel methods
that used
different loss functions.
Ref. \citep{delosreyes:2017:bilevelparameterlearning}
compared a MSE upper-level loss function with a
Huber (corner rounded 1-norm) loss function.
The corresponding lower-level problem
was a denoising problem
with a standard 2-norm data-fit term
and three different options for a regularizer:
TV, TGV, and ICTV.
The authors learned tuning parameters
for a natural image dataset
using both upper-level loss function options
for each of the lower-level regularizers.
Since SNR is a monotone function of MSE,
the MSE loss will always perform best
according to any SNR metric
(assuming the bilevel model is properly trained).
However, \citep{delosreyes:2017:bilevelparameterlearning}
found the tuning parameters learned using the Huber loss
yielded denoised images with
better qualitative properties
and
better SSIM,
especially at low noise levels.
Like MSE, the Huber loss operates point-wise
and is easy to differentiate.
Thus, the authors conclude that
the Huber loss is a good trade-off
between
tractability and %
improving on MSE as an image quality measure.
The loss functions in
\citep{fehrenbach:2015:bilevelimagedenoising,
hintermuller:2020:dualizationautomaticdistributed,
sixou:2020:adaptativeregularizationparameter}
address the unsupervised or \dquotes{blind} bilevel setting,
where one wishes to reconstruct an image without clean samples.
Therefore, rather than using an image quality metric that compares
a reconstructed image, \xhat, to some true image, \xtrue,
these loss functions consider the estimated residual, %
\[
\hat{\vn}
= \hat{\vn}(\params)
= \mA \xhat(\params) - \vy,
\]
where \params is learned using only noisy data.
Unsupervised bilevel methods may be beneficial when
there is no clean data and
one has more knowledge of noise properties
than of expected image content.
All three methods
\citep{fehrenbach:2015:bilevelimagedenoising,
hintermuller:2020:dualizationautomaticdistributed,
sixou:2020:adaptativeregularizationparameter}
assume the noise variance,
$\sigma^2$, is known.
The earliest example
\citep{fehrenbach:2015:bilevelimagedenoising},
learned tuning parameters \params
such that $\hat{\vn}$
matched the second moment of the assumed %
Gaussian distribution for the noise.
Their lower-level cost is comparable to
\eqref{eq: bilevel for analysis filters},
but re-written in terms of \vn and with
pre-defined finite differencing or
$5 \by 5$ DCT filters,
\ie, they learn only the tuning parameters, $\beta_k$.
Their upper-level loss
encourages the empirical variances
of the noise
in different frequency bands
to match the expected variances:
\begin{align*}
\lfcn(\params \, ; \vn(\params)) = \onehalf \sum_i \frac{\left( \normsq{\vf_i \conv \vn}_{2} - \mu_i \right)^2 }{v_i} \\
\mu_i = \E{\normsq{\vf_i \conv \vn}_2} \text{ and } v_i = \text{Var}\left[\normsq{\vf_i \conv \vn}_2 \right],
\end{align*}
where $\vf_i$ are predetermined
filters that select specific frequency components.
By using bandpass filters that partition Fourier space,
the corresponding means and variances
of the second moments of the filtered noise
are easily computed,
with
\begin{align*}
\mu_i = \sdim \sigma^2 \normsq{\vf_i}
\quad \text{ and } \quad
v_i = \sdim \sigma^4 \norm{\vf_i}^4
.\end{align*}
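The formula for $\mu_i$ can be verified with a quick Monte Carlo sketch (an arbitrary stand-in filter and circular convolution; not the actual bandpass partition used in \citep{fehrenbach:2015:bilevelimagedenoising}):

```python
import numpy as np

rng = np.random.default_rng(0)
sdim, sigma = 64, 0.5
f = rng.standard_normal(8)   # stand-in filter (not a true bandpass partition)

def circ_conv(f, n):
    # circular convolution via FFT, so the filter matrix is circulant
    return np.fft.ifft(np.fft.fft(f, sdim) * np.fft.fft(n)).real

trials = 5000
vals = [np.sum(circ_conv(f, sigma * rng.standard_normal(sdim)) ** 2)
        for _ in range(trials)]
mu_hat = np.mean(vals)
mu_theory = sdim * sigma**2 * np.sum(f**2)   # mu_i = sdim * sigma^2 * ||f_i||^2
```

The empirical second moment of the filtered white noise matches the closed form, which is what lets the upper-level loss be evaluated without clean training images.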
Although the experimental results are promising,
\citep{fehrenbach:2015:bilevelimagedenoising}
does not claim state-of-the-art results
since their lower-level denoiser is relatively simple.
As an alternative to
the Gaussian-inspired approach
in \citep{fehrenbach:2015:bilevelimagedenoising},
\citep{hintermuller:2020:dualizationautomaticdistributed}
and
\citep{sixou:2020:adaptativeregularizationparameter}
use loss functions that penalize noise outside a
specified \dquotes{noise corridor.}
Both methods learn space-varying tuning parameters,
and the upper-level loss consists of a data-fit term
(that measures noise properties)
and a regularizer on \params.
The data-fit term in the upper-level loss function
in \citep{hintermuller:2020:dualizationautomaticdistributed}
defines the noise corridor
between a maximum variance, $\Bar{\sigma}^2$,
and a minimum variance, $\underline{\sigma}^2$:
\begin{align}
\bmath{1}' &F.\left(\vw \odot (\vn(\params) \odot \vn(\params))\right) \text{ for } \nonumber \\
&F(n) =
\onehalf \max(n - \Bar{\sigma}^2, 0)^2
+
\onehalf \min(n - \underline{\sigma}^2, 0)^2 \label{eq: noise corridor}
,\end{align}
where \vw is a predetermined weighting vector.
The noise corridor function, $F(n)$,
penalizes any noise outside of the expected range
as shown in \fref{fig: noise corridor plot}.
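A direct transcription of $F(n)$, using the illustrative corridor bounds from \fref{fig: noise corridor plot}:

```python
import numpy as np

sig2_min, sig2_max = 0.35, 0.65   # corridor bounds (illustrative values)

def F(n):
    # zero inside [sig2_min, sig2_max]; quadratic penalty outside the corridor
    return (0.5 * np.maximum(n - sig2_max, 0.0) ** 2
            + 0.5 * np.minimum(n - sig2_min, 0.0) ** 2)
```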
Ref. \citep{sixou:2020:adaptativeregularizationparameter}
uses the same noise corridor function,
but extends the bilevel method to images with Poisson noise;
\citep{sixou:2020:adaptativeregularizationparameter} thus measures fidelity to the noisy data
using the Kullback-Leibler distance.
In addition to the noise corridor function
as the data-fit component of the upper-level loss function,
\citep{hintermuller:2020:dualizationautomaticdistributed,sixou:2020:adaptativeregularizationparameter}
include
a smoothness-promoting regularizer
on \params,
which is a spatially varying tuning parameter vector in both methods.
\begin{figure}
\ifloadeps
\includegraphics[]{Figures/epsfiles/noisecorridor.eps}
\else
\begin{tikzpicture}
\begin{axis}[
xlabel=$n$,
ylabel={$F(n)$},
domain=0:1,
xtick = {0},
ytick = {0},
extra x ticks={0.35,0.65},
extra x tick labels={$\underline{\sigma}^2$,$\Bar{\sigma}^2$},
width=6cm,
height=4.5cm
]
\addplot[mark=none] {max(x-0.65,0)^2 + min(x-0.35,0)^2};
\end{axis}
\end{tikzpicture}
\fi
\iffigsatend \figuretag{6.2} \fi
\caption{Noise corridor function \eqref{eq: noise corridor}
used as part of the upper-level loss function
for the unsupervised bilevel method in
\citep{hintermuller:2020:dualizationautomaticdistributed}.}
\label{fig: noise corridor plot}
\end{figure}
The task-based nature of bilevel methods
typically makes regularizers or constraints
on \params unnecessary
(see \sref{sec: filter constraints} for common options
for other forms of learning).
However, there are two general cases
where a regularizer on \params
is useful
in the upper-level loss function.
First,
a regularizer can help avoid over-fitting when the amount of training
data is insufficient for the number of learnable hyperparameters.
This is often the case when learning space-varying parameters
that have similar dimensions as the input data,
\eg,
\citep{haber:2003:learningregularizationfunctionals,delia:2020:bilevelparameteroptimization, hintermuller:2020:dualizationautomaticdistributed, sixou:2020:adaptativeregularizationparameter}.
In such cases,
the regularization often takes the form of a squared 2-norm on the
learned hyperparameters, $\normsq{\params}_2$.
Second,
some problems require application-specific constraints.
Many such constraints do not require a regularization term.
For example, non-negativity constraints on tuning parameters
are easily handled by redefining the tuning parameter
in terms of an exponential,
as in \eqref{eq: bilevel for analysis filters},
and
box constraints are common and easy to incorporate
with a projection step if using a gradient-based method.
Constraints that require sparsity
on the learned parameters
may benefit from regularization
in the upper-level loss function.
An example of an application-specific constraint
is found in \citep{ehrhardt:2021:inexactderivativefreeoptimization,sherry:2020:learningsamplingpattern}.
The application in
both papers
is MRI reconstruction
with a data-fit term and a variational regularizer.
Both papers
extend the bilevel model
in \eqref{eq: bilevel for analysis filters}
to include part of the forward model in the learnable parameters, \params.
Specifically,
\citep{ehrhardt:2021:inexactderivativefreeoptimization,sherry:2020:learningsamplingpattern}
learned the sparse sampling matrix for MRI.
(Ref.~\citep{sherry:2020:learningsamplingpattern}
additionally learns tuning parameters for predetermined filters,
whereas
\citep{ehrhardt:2021:inexactderivativefreeoptimization}
sets %
the tuning parameters and filters
and learns only the sampling matrix.)
Here,
the forward model is
\[
\mA =
\mathrm{diag}\Big( \underbrace{s_1, s_2, \ldots , s_\ydim }_{\vs(\params)} \Big)
\mF
,\]
where \mF is the
DFT matrix
and $s_i$ are learned binary values that specify
whether a frequency location should be sampled.
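This sampling forward model is easy to sketch numerically (the pattern values below are hypothetical; a unitary DFT normalization is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
ydim = 32
s = (rng.random(ydim) < 0.4).astype(float)       # hypothetical binary pattern
F = np.fft.fft(np.eye(ydim)) / np.sqrt(ydim)     # unitary DFT matrix
A = np.diag(s) @ F                               # A = diag(s) F

x = rng.standard_normal(ydim)
y = A @ x   # k-space measurements; rows with s_i = 0 contribute nothing
```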
The motivation for learning a sparse sampling matrix
comes from the lower-level
MRI reconstruction problem;
designing more effective sparse sampling patterns in MRI
can decrease scan time and thus
improve patient experience,
decrease cost,
and decrease artifacts from patient movement.
This goal requires the learned parameters,
$s_i$,
to be binary,
which in turn influences the upper-level loss function design.
Thus,
\citep{ehrhardt:2021:inexactderivativefreeoptimization,sherry:2020:learningsamplingpattern}
include regularization in the upper-level to encourage \vs to be sparse,
\eg,
\citep{sherry:2020:learningsamplingpattern}
uses an upper-level loss with a MSE term and
regularizer on \vs:
\begin{equation}
\lfcnargs = \normrsq{\xhatp - \xtrue}_2 + \lambda \sum_i \left(s_i + s_i (1-s_i) \right)
\label{eq:binary-s-regularizer}
,\end{equation}
where $\lambda$ is an upper-level tuning parameter
that one must set manually.
(In experiments, they thresholded the learned $s_i$
values to be exactly binary.)
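A minimal transcription of this per-entry regularizer (the function name is illustrative):

```python
def binary_sparsity_penalty(s):
    # per-entry term from the upper-level regularizer: s + s*(1 - s);
    # the first term rewards s = 0 (fewer samples), while s*(1 - s)
    # vanishes only at s = 0 or s = 1, discouraging fractional values
    return s + s * (1.0 - s)
```

Evaluating it at a few points shows the intended behavior: unsampled locations are free, sampled locations pay the sparsity cost, and fractional values pay extra.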
An alternative approach
is to constrain the number of samples
\cite{gozcu:18:lbc},
though that formulation requires other optimization methods.
\section{Conclusion}
This section
split the discussion of lower-level cost and upper-level loss functions
to discuss trends in both areas.
However, when designing a bilevel problem,
design decisions can impact both levels.
For example,
the unsupervised nature of
\citep{fehrenbach:2015:bilevelimagedenoising,sixou:2020:adaptativeregularizationparameter}
clearly impacted their choice of upper-level loss function
to use noise statistics rather than MSE
calculated with ground-truth data.
Since it can be challenging to learn
many good parameters from noisy training data,
the unsupervised nature also likely impacted
the authors' decision to learn only tuning parameters
and set the filters manually.
Another example
of coupling between lower-level and upper-level design
is when one enforces
application-specific constraints
on the learned parameters,
\eg,
using a regularizer
like \eqref{eq:binary-s-regularizer}
in the upper-level loss
to promote sparsity of the MRI sampling matrix
\citep{ehrhardt:2021:inexactderivativefreeoptimization,sherry:2020:learningsamplingpattern}.
In addition to design decisions influencing
both levels,
bilevel methods may adopt common
techniques for the upper-level loss function
and lower-level cost function.
For example,
a common theme
is the tendency to use smooth functions,
such as replacing the 1-norm with a corner-rounded 1-norm.
This approach requires setting a smoothing parameter,
\eg, $\epsilon$ in \eqref{eq: corner rounded 1-norm},
which in turn impacts the Lipschitz constant
and optimization speed.
More accurate approximations generally
lead to larger Lipschitz constants
and slower convergence.
One approach to trading-off the accuracy of the smoothing
with optimization speed is
to use a graduated approach and
approximate the non-smooth term more and more closely
as the optimization progresses
\citep{chen:2020:learnabledescentalgorithm}.
The prevalence of smoothing
is unsurprising considering
that this review focuses on gradient-based bilevel methods.
A rare exception is \citep{mccann:2020:supervisedlearningsparsitypromoting},
which used the (not corner-rounded) one-norm
to define \sparsefcn
to learn convolutional filters
using the translation to a single level approach
described in \sref{sec: translation to a single level}.
The impact of smoothing
and how accurately one should approximate a non-differentiable point
remains an open question.
From an image quality perspective,
ideally one would design independently
the lower-level cost function
and upper-level training loss.
The lower-level cost
would depend on the imaging physics
and would incorporate regularizers that are expected to provide
excellent image quality when tuned appropriately,
and the upper-level loss
would use terms that are meaningful
for the imaging tasks of interest.
As we have seen,
in practice one often makes compromises
to facilitate optimization
and reduce computation time.
\chapter{Connections and Future Directions \label{chap: conclusion} \label{sec: connections}}
This final section
connects bilevel methods
with related approaches
and mentions some additional future directions
beyond those already described in previous sections.
Shlezinger \textit{et al.} \citep{shlezinger:2020:modelbaseddeeplearning}
recently proposed a framework,
summarized in \fref{fig: model-based to learning spectrum},
for categorizing learning-based approaches
that combine inferences, or prior knowledge%
\footnote{
Ref. \citep{shlezinger:2020:modelbaseddeeplearning}
uses the term \dquotes{model-based},
but this review uses \dquotes{inferences}
to differentiate from other definitions
of model-based learning in the literature.
},
and deep learning.
Inferences can include
information about the structure of the
forward model, \mA,
or about the object \vx being imaged.
For example,
any known statistical properties
of the object of interest
could be used to design a regularizer
that encourages the minimizer \xhat
to be compatible with that prior information.
At one extreme,
inference-based approaches rely on
a relatively small number of handcrafted regularizers
with a few, if any,
tuning parameters learned from training data.
At the other extreme,
fully learned approaches
assume no information about the application or data
and learn all hyperparameters from training data.
\begin{figure}
\centering
\ifloadepsorpdf
\includegraphics[]{methodsspectrum}
\else
\input{tikz,methodsspectrum}
\fi
\iffigsatend \figuretag{7.1} \fi
\caption{
Spectrum of learning to inference-based
methods from \citep{shlezinger:2020:modelbaseddeeplearning}.}
\label{fig: model-based to learning spectrum}
\end{figure}
Ref. \citep{shlezinger:2020:modelbaseddeeplearning}
proposes two general categories for methods
that mix elements
of inference-based and learning-based methods.
The first category,
inference-aided networks,
includes DNNs with
architectures based on
an inference-based method.
For example, in deep unrolling,
one starts with a fixed number of iterations
of an optimization algorithm derived from a cost function
and then learns parameters that may vary between iterations, or \dquotes{layers},
or may be shared across such iterations.
\sref{sec: connections unrolled}
further discusses unrolling,
which is a common inference-aided network
design strategy,
and the connection to the bilevel unrolling method
described in \sref{sec: unrolled}.
The second general category
is DNN-aided inference methods
\citep{shlezinger:2020:modelbaseddeeplearning}.
These methods incorporate a deep learning component
into traditional inference-based techniques
(typically a cost function in image reconstruction).
The learned DNN component(s)
can be trained separately for each iteration
or end-to-end.
Because prior knowledge
takes a larger role than in the inference-aided networks,
these methods typically require smaller training datasets,
with the amount of training data required varying
with the number of hyperparameters.
\sref{sec: connections plug and play}
discusses how bilevel methods compare to
Plug-and-Play,
which is an example DNN-aided inference model.
While \citep{shlezinger:2020:modelbaseddeeplearning}
focused on DNNs
due to their highly expressive nature
and the abundance of interest in
them,
the idea of trading off prior knowledge and learning components
applies to machine learning more broadly.
\srefs{sec: connections unrolled}{sec: connections plug and play}
describe how bilevel methods
fit into the framework from
\citep{shlezinger:2020:modelbaseddeeplearning}
and relate bilevel methods to other methods
in the framework.
Although not covered in the above framework,
\sref{sec: connections single-level}
also compares bilevel methods to a third general category:
\dquotes{single-level} hyperparameter learning methods.
Like bilevel methods,
single-level methods learn hyperparameters
in a supervised manner.
However, they generally learn parameters that
sparsify the training images, $\{\xtrue_j\}$,
and do not use the noisy data,
$\{\vy_j\}$.
This last comparison
demonstrates the benefit of task-based approaches.
Of course,
there is variety among bilevel methods;
this discussion is meant to provide perspective
and general relations to increase understanding,
rather than to narrow the definition or application of any method.
\section{Connection: Learnable Optimization Algorithms \label{sec: connections unrolled} }
Learning parameters in
unrolled optimization algorithms
to create an inference-aided network,
often called a
Learnable Optimization Algorithm (LOA),
is a quickly growing area of research
\citep{monga:2021:algorithmunrollinginterpretable}.
The first such instance was a learned version of the
Iterative Shrinkage and Thresholding Algorithm (ISTA),
called LISTA
\cite{gregor:10:lfa}.
Similar to the bilevel unrolling method,
a LOA typically starts from
a traditional, inference-based optimization algorithm,
unrolls multiple iterations,
and then learns parameters using
end-to-end training.
There are many other unrolled methods for image reconstruction
\citep{monga:2021:algorithmunrollinginterpretable}.
Two examples that explicitly state the bilevel connection are
\citep{chen:2020:learnabledescentalgorithm,bian:2020:deepparallelmri};
both set-up a bilevel problem
with a DNN as a regularizer
and then allow the parameters to vary by iteration,
\ie, learning $\lliter{\hk}$
where $t$ denotes the lower-level iteration.
Ref. \citep{bian:2020:deepparallelmri}
motivated the use of an unrolled DNN
over more inference-based methods
by the lack of an accurate forward model, %
specifically coil sensitivity maps,
for MRI reconstruction.
Other examples of unrolled networks are
\citep{hammernik:2018:learningvariationalnetwork},
which unrolls the Field of Experts model \citep{roth:2005:fieldsexpertsframework}
(see \srefs{sec: filter learning history}{sec: prev results lower level}
for how the Field of Experts model has inspired many bilevel methods);
\citep{lim:2020:improvedlowcountquantitative},
which unrolls the convolutional analysis operator model \citep{chun:2020:convolutionalanalysisoperator}
(see \sref{sec: filter learning history});
and \citep{franceschi:2018:bilevelprogramminghyperparameter},
which discusses the connection to meta-learning.
Unlike the unrolled approach to bilevel learning
described in \sref{sec: unrolled},
many LOAs depart from their
base cost function
and
\dquotes{only superficially resemble the steps of optimization algorithms} %
\citep{chen:2020:learnabledescentalgorithm}.
For example,
unrolled algorithms may
\dquotes{untie} the gradient from the original cost function,
\eg,
using
$\widetilde{\mA}' (\mA \vx - \vy)$,
instead of
$\mA' (\mA \vx - \vy)$
for the gradient of the common 2-norm data-fit term,
where $\widetilde{\mA}'$ is learned %
or otherwise differs from the adjoint of \mA.
LOAs that allow the learned parameters %
to vary every unrolled iteration
or learn step size and momentum parameters
further depart from a cost function perspective.
In addition to selecting which variables to learn,
one must decide how many iterations to unroll
for both bilevel unrolled approaches and LOAs.
Most methods pick a set number of iterations
in advance,
perhaps based on previous experience,
initial trials,
or the available computational resources.
Using a set number of iterations
yields an algorithm with predictable run times
and allows the learned parameters to adapt
to the given number of iterations.
Further,
picking a small number of iterations
can act as implicit regularization,
comparable to early stopping in machine learning,
which may be helpful when the amount of training data
is small relative to the number of hyperparameters
in the unrolled algorithm
\citep{franceschi:2018:bilevelprogramminghyperparameter}.
One can also
use a convergence criterion
to determine the number of iterations
to evaluate,
rather than selecting a number in advance
\cite{antil:2020:bileveloptimizationdeep}.
This convergence-based method
more closely follows
classic inference-based %
optimization algorithms.
A benefit of
running the lower-level optimization algorithm until convergence
is that one could switch optimization algorithms
between training and testing,
especially for strictly convex lower-level cost functions,
and still expect the learned parameters to
perform similarly.
This ability to switch optimization algorithms
means one could use faster,
but not differentiable,
algorithms at test-time,
such as accelerated gradient descent methods with adaptive restart
\cite{kim:18:aro}.
We are unaware of any bilevel methods
that have exploited this possibility.
A subtle point in unrolling gradient-based methods
for the lower-level cost function
is that the Lipschitz constant
of its gradient
is a function of the hyperparameters,
so the step size range that ensures convergence
cannot be pre-specified.
Most LOA methods
use some fixed step size
and allow the learned parameters
to adapt to it or learn the step size.
An alternative approach is to
compute a new step-size
as a function of the current parameters,
\iter{\params},
every upper-level iteration.
For example, given a set value of the
tuning parameters and filter coefficients,
a Lipschitz constant of the lower-level gradient for
\eqref{eq: bilevel for analysis filters} is
\begin{equation}
L = \sigma^2_1(\mA) + \ebeta{0} L_\sparsefcn \sum_k \ebeta{k} \normsq{\hk}
\label{eq: bilevel for caol L equation}
,
\end{equation}
so a reasonable step size for the classical gradient descent
method would be $1/L$.
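The following sketch applies this step-size rule to the denoising special case (identity forward model, a single finite-difference filter, hypothetical tuning-parameter values; circular convolution keeps the adjoint exact). The Lipschitz bound is computed from the formula above with $L_\sparsefcn = 1/\epsilon$ for the corner-rounded 1-norm:

```python
import numpy as np

rng = np.random.default_rng(0)
sdim, eps = 32, 0.1
b0, b1 = -2.0, -2.0                      # hypothetical tuning parameters
h = np.array([1.0, -1.0])                # single finite-difference filter
y = np.cumsum(rng.standard_normal(sdim)) * 0.3 + 0.1 * rng.standard_normal(sdim)

H = np.fft.fft(h, sdim)
conv  = lambda x: np.fft.ifft(H * np.fft.fft(x)).real            # h conv x (circular)
convT = lambda z: np.fft.ifft(np.conj(H) * np.fft.fft(z)).real   # adjoint

phi  = lambda z: np.sqrt(z**2 + eps**2) - eps
dphi = lambda z: z / np.sqrt(z**2 + eps**2)

def cost(x):   # denoising: A = I
    return 0.5 * np.sum((x - y) ** 2) + np.exp(b0) * np.exp(b1) * phi(conv(x)).sum()

def grad(x):
    return (x - y) + np.exp(b0) * np.exp(b1) * convT(dphi(conv(x)))

# Lipschitz bound: sigma_1^2(I) = 1, L_phi = 1/eps for the corner-rounded 1-norm
L = 1.0 + np.exp(b0) * (1.0 / eps) * np.exp(b1) * np.sum(h**2)

x = y.copy()
costs = [cost(x)]
for _ in range(200):
    x = x - (1.0 / L) * grad(x)
    costs.append(cost(x))
```

Because $L$ depends on the current $\beta$ values, recomputing it each upper-level iteration keeps the $1/L$ step valid as the hyperparameters change.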
The adaptive approach to setting the step size
ensures that any theoretical guarantees
of the lower-level optimizer hold.
This approach may be beneficial
when using a convergence criterion
for the lower-level optimization algorithm
or when running sufficiently many lower-level iterations
to essentially converge.
However,
updating the step-size every upper-level iteration is
incompatible with fixing
the number of unrolled iterations.
To illustrate,
consider an upper-level iteration
where the tuning parameters increase,
leading to a larger $L$ and a smaller step size.
In a fixed number of iterations,
the smaller step size
means the lower-level optimization algorithm
will be further from convergence,
and the estimated minimizer,
$\xhat(\iter{\params}{+1})$,
may be worse
(as judged by the upper-level loss function)
than $\xhat(\iter{\params})$,
even if the updated hyperparameters
are better when evaluated with
the previous (larger) step-size
or more lower-level iterations.
Even within the unrolling methodology,
one must make several design decisions.
To remain most closely tied to the
original optimization algorithm,
an unrolled method might
fix a large number of iterations
or run the optimization algorithm until convergence,
use the same parameters every layer,
and calculate the step size
based on the Lipschitz constant every upper-level iteration.
Like all design decisions,
there are trade-offs
and the literature shows many successful
methods that benefit from the increased generality
of designing LOAs that are further removed
from their cost function roots
\citep{monga:2021:algorithmunrollinginterpretable}.
Echoing the ideas from \citep{shlezinger:2020:modelbaseddeeplearning},
the design should be based on the specific application
and relative availability, reliability, and importance of
prior knowledge and training data.
This survey focuses on unrolled methods
that are closely tied to the original bilevel formulation;
\citep{monga:2021:algorithmunrollinginterpretable}
reviews LOAs more broadly.
A benefit of maintaining the connection
to the original cost function and optimization algorithm
is that,
once trained,
the lower-level problem in an unrolled bilevel method
inherits any theoretical and convergence results
from the corresponding optimization method.
The corresponding benefit for LOAs
is increased flexibility in network architecture.
\section{Connection: Plug-and-play Priors \label{sec: connections plug and play}}
The Plug-and-Play (PNP) framework
\citep{venkatakrishnan:2013:plugandplaypriorsmodel}
is an example of a DNN-aided inference method.
It is similar to bilevel methods
in its dependence on the forward model.
However,
unlike bilevel methods,
the PNP framework
need not be connected to a
specific lower-level cost function
and
it leverages pre-trained denoisers
rather than training them
for a specific task.
As a brief overview of the PNP framework,
consider rewriting the generic data-fit plus regularizer
optimization problem
\eqref{eq: general data-fit plus reg}
with an auxiliary variable:
\begin{align}
\xhat = \argmin_{\vx \in \F^\sdim}
\underbrace{\overbrace{\dfcnargs}^{\text{Data-fit}} + \;\;\; \beta
\overbrace{\regfcn(\vz \, ; \params)}^{\text{Regularizer}}}_{\ofcnargs}
\quad \text{ s.t. } \vx = \vz
\label{eq: data-fit plus reg split}
.\end{align}
Using ADMM
\cite{eckstein:92:otd}
to solve this constrained optimization problem
and rearranging variables
yields the following iterative optimization approach for
\eqref{eq: data-fit plus reg split}:
\begin{align*}
\iter{\vx}{+1} &= \argmin_\vx \dfcnargs + \frac{\lambda}{2}
\normrsq{ %
\vx - \underbrace{(\iter{\vz}-\iter{\vu})}_{\tilde{\vx}}
}_2
&&= \text{prox}_{\frac{1}{\lambda} \dfcnargs}(\tilde{\vx})
\\
\iter{\vz}{+1} &= \argmin_\vz \beta \regfcn(\vz \, ; \params) + \frac{\lambda}{2}
\normrsq{
\vz - \underbrace{(\iter{\vx}{+1}+\iter{\vu})}_{\tilde{\vz}}
}_2
&&= \text{prox}_{\frac{\beta}{\lambda} \regfcn(\vz \, ; \params) }(\tilde{\vz})
\\
\iter{\vu}{+1} &= \iter{\vu} + (\iter{\vx}{+1} - \iter{\vz}{+1}), &&
\end{align*}
where $\lambda$ is an ADMM penalty parameter
that affects the convergence rate
(but not the limit, for convex problems).
The first step is a proximal update for \vx
that uses the forward model
but does not depend on the regularizer.
Conversely,
the second step is a proximal update for the split variable \vz
that depends on the regularizer, but is agnostic of the forward model.
This step acts as a denoiser.
The final step is the dual variable update and encourages
$\iter{\vx} \approx \iter{\vz}$
as
$\upperiter \rightarrow \infty$.
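These updates can be sketched in a few lines of numpy for the common case $\dfcnargs = \onehalf \normsq{\mA \vx - \vy}_2$, whose proximal map is a linear solve; \texttt{denoise} stands for the plugged-in denoiser, and the interface is our own:

```python
import numpy as np

def pnp_admm(y, A, denoise, lam=1.0, n_iter=100):
    """PNP-ADMM sketch: the x-update is the proximal map of the
    data-fit 0.5*||Ax - y||^2, the z-update is any plugged-in
    denoiser, and the u-update is the dual ascent step."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + lam * np.eye(n)      # normal equations for the x-update
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(M, Aty + lam * (z - u))   # data-fit proximal step
        z = denoise(x + u)                            # "plug in" a denoiser
        u = u + (x - z)                               # dual update
    return x
```

With $\mA = \I$ and a soft-thresholding denoiser, the iterates converge to the corresponding $\ell_1$-regularized minimizer, matching the original ADMM interpretation.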
The key insight from \citep{venkatakrishnan:2013:plugandplaypriorsmodel}
is that the above update equations
separate the forward model and denoiser.
Thus, one can substitute, or \dquotes{plug in,}
a wide range of denoisers for the \vz update,
in place of its proximal update,
while keeping the data-fit update independent.
Whereas in the original ADMM approach,
the parameter $\lambda$
has no effect on the final image
for convex cost functions,
in the PNP framework
that parameter does affect image quality.
Thus,
one could also use training data
to tune the $\lambda$ in a bilevel manner.
Although PNP allows one to substitute
a pre-trained denoiser,
one could additionally tune the parameters in the denoiser.
Ref.~\citep{he:2019:optimizingparameterizedplugandplay}
provides one such example
of starting from a PNP framework
then learning
denoising parameters and $\lambda$
that vary by iteration.
A large motivation for the PNP framework
is the abundance of advanced denoising methods,
including ones that are not associated with
an optimization problem such as BM3D
\citep{dabov:2007:imagedenoisingsparse}.
However,
using existing denoisers sacrifices
the ability to learn parameters
to work well with the specific forward model,
as is done in task-based methods.
As a simple example of how learned parameters
may differ when \mA changes,
\citet{chambolle:2021:learningconsistentdiscretizations}
found that
different (synthesis) filters worked better for
image denoising versus image inpainting.
A more complicated example is
using bilevel methods to learn some aspect of \mA
alongside
some aspect of the regularizer,
\eg,
\citet{sherry:2020:learningsamplingpattern}
learned a sparse sampling matrix and tuning parameter for MRI
that are %
adaptive to the regularization
for the image reconstruction problem.
\section{Connection: Single-Level Parameter Learning}
\label{sec: hpo filter learning}
\label{sec: filter constraints}
\label{sec: connections single-level}
\sref{sec: filter learning history}
briefly discussed some approaches
to learning analysis operators.
This section
further motivates the task-based bilevel set-up
by discussing
the filter learning constraints imposed in
single-level hyperparameter learning methods.
As summarized in \sref{sec: filter learning history},
the earliest methods
for learning analysis regularizers
had no constraints on the analysis operators.
Those approaches learned filters from training data
to make a prior distribution match the observed data distribution.
In contrast,
more recent approaches to filter learning
minimize a cost function %
that requires either a penalty function or constraint on the operators to ensure filter diversity.
For reference,
the cost functions mentioned in \sref{sec: filter learning history} were:
\begin{align*}
\text{AOL}: & \argmin_{\mOmega,\, \mX}
\norm{\mOmega \mX}_1 + \frac{\beta}{2} \normsq{\mY - \mX}
\text{ s.t. } \mOmega \in \S
, \nonumber \\
\text{TL}: & \argmin_{\mOmega \in \F^{\filterdim \by \filterdim},\, \mX}
\normsq{\mOmega \mY - \mX}_2 + \regfcn(\mOmega)
\text{ s.t. } \norm{\mX_i}_0 \leq \alpha \;\forall i
, \nonumber \\
\text{CAOL}: &\argmin_{[\vc_1, \ldots, \vc_K]} \min_{\{\vz_k\}}
\sum_{k=1}^K \onehalf \normsq{\vc_k \conv \vx - \vz_k}_2 + \beta \norm{\vz_k}_0
\text{ s.t. } [\vc_1, \ldots, \vc_K] \in \S,
\end{align*}
where AOL is analysis operator learning \citep{yaghoobi:2013:constrainedovercompleteanalysis},
TL is transform learning \citep{ravishankar:2013:learningsparsifyingtransforms},
and CAOL is convolutional analysis operator learning \citep{chun:2020:convolutionalanalysisoperator}.
In the following discussion of constraint sets,
the equivalent filter matrix for CAOL has
the convolutional kernels as rows:
\[
\mOmega_{\mathrm{CAOL}} =
\begin{bmatrix}
\vc_1' \\
\vdots \\
\vc_K'
\end{bmatrix}
.\]
While there are many other proposed cost functions in the literature,
using different norms or including additional variables,
these three examples capture the most common structures
for filter learning.
In all the above cost functions,
if one removed the constraint or regularizer,
then the trivial solution
would be to learn
zero filters for \mOmega. %
Furthermore,
a simple row norm constraint on \mOmega %
would be insufficient,
as then the minimizer would contain a single filter
that is repeated many times.
(In contrast,
a unit norm constraint typically suffices for dictionary learning.)
A row norm constraint
plus a full rank constraint is also insufficient
because \mOmega
can have full rank
while being arbitrarily close to the rank-1 case of having a single repeated row.
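A small numpy experiment (our own construction) makes this degeneracy concrete: under only a unit-norm row constraint, replicating the single filter with the smallest sparsity cost never increases $\norm{\mOmega \mX}_1$ compared to a diverse filter set:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 100))                 # toy training patches, one per column
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
Omega_diverse = Q.T                           # four distinct unit-norm filters (rows)
l1 = lambda Om: np.abs(Om @ X).sum()          # the sparsity term ||Omega X||_1
best_row = Omega_diverse[np.argmin(np.abs(Omega_diverse @ X).sum(axis=1))]
Omega_repeated = np.tile(best_row, (4, 1))    # unit-norm rows, but rank 1
assert l1(Omega_repeated) <= l1(Omega_diverse)  # repetition is never penalized
```

Because the objective sums over rows, $K$ copies of the best row cost at most what any $K$ distinct rows cost, so a row-norm constraint alone cannot rule out this minimizer.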
\comment{ %
Note that the matrix
\begin{equation}
\begin{bmatrix}
1 & 0 \\
1 & \epsilon
\end{bmatrix} \label{eq: ill conditioned matrix}
\end{equation}
has full rank whenever $\epsilon \neq 0$,
but the two rows can be arbitrarily close to each other
as $\epsilon \rightarrow 0$.
If we replace the one in the lower left with a zero,
then the example similarly shows how a matrix can be full rank
while having a row that is arbitrarily close to the zero vector.
}
The choice of constraint set $\S$ is important
in single-level learning.
Many methods constrain analysis operators to satisfy a tight frame constraint.
A matrix $\mA$ is a tight frame if
there is a positive constant, $\alpha$,
such that
\begin{equation*}
\normsq{\mA' \vx}_2 = \sum_{i} \abs{\langle \va_i, \vx \rangle}^2 = \alpha \normsq{\vx}_2,
\; \forall \vx %
\end{equation*}
where $\va_i$ is the $i$th column of \mA.
This tight frame condition is equivalent to
$\mA \mA' = \alpha \I$
for some positive constant $\alpha$.
Most analysis operators are defined with filters in their rows,
so a tight frame requirement on the filters appears as the constraint
$\mOmega'\mOmega = \alpha \I$.
Under the tight frame constraint for the filters,
\mOmega must be square or tall, so the filters are complete or over-complete.
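As a concrete numerical check of this condition (our own illustration, using the standard orthonormal DCT-II construction), the $K = N^2$ separable 2D DCT filters, stacked as rows of \mOmega, satisfy $\mOmega'\mOmega = \I$ and have unit-norm rows:

```python
import numpy as np

def dct_matrix(N):
    # orthonormal DCT-II matrix; each row is a 1D DCT filter
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] = 1.0 / np.sqrt(N)
    return C

C = dct_matrix(3)
Omega = np.kron(C, C)            # 9 separable 3x3 filters, one per row
assert np.allclose(Omega.T @ Omega, np.eye(9))          # tight frame, alpha = 1
assert np.allclose(np.linalg.norm(Omega, axis=1), 1.0)  # unit-norm rows
```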
However,
\citet{yaghoobi:2013:constrainedovercompleteanalysis}
found that the frame constraint was insufficient when learning over-complete operators,
as the \dquotes{excess} rows past full-rank tended to be all zeros.
Therefore, \citet{yaghoobi:2013:constrainedovercompleteanalysis}
imposed a uniformly-normalized tight frame constraint:
each row of the \mOmega had to have unit norm and the filters had to form a tight frame.
Ref. \citep{hawe:13:aol} similarly constrained \mOmega to have unit-norm rows
with the filters forming a frame (though not tight).
Such loosening of the tight frame constraint to a frame constraint
could lead to the problem of learning almost identical rows,
as discussed above.
To prevent this issue,
\citet{hawe:13:aol} additionally included a penalty
that encourages distinct rows:
\begin{equation}
- \sum_k \sum_{\tilde{k} < k} \log\paren{1 - (\vomega_{\tilde{k}}' \vomega_k)^2} %
\label{eq: encourage distinct filters via correlation}
.
\end{equation}
One possible concern with a tight frame constraint
is that it requires the filters to span all of $\F^\sdim$,
so every spatial frequency can pass through at least one filter.
However,
most images are not zero-mean and have piece-wise constant regions,
so the zero frequency component is not sparse.
Ref. \citep{yaghoobi:2013:constrainedovercompleteanalysis}
modified the tight-frame constraint to require \mOmega to span some space
(\eg, the space orthogonal to the zero frequency term).
Likewise, \citet{crockett:2019:incorporatinghandcraftedfilters}
extended the CAOL algorithm to include handcrafted filters,
such as a zero frequency term,
that can then be used or discarded when reconstructing images.
In the bilevel literature,
\citet{samuel:2009:learningoptimizedmap,chen:2014:insightsanalysisoperator} similarly
ensured that learned filters had no zero frequency component
by learning coefficients for a linear combination of filter basis vectors,
rather than learning the filters directly; %
see \sref{sec: prev results lower level}.
As an alternative to imposing a strict constraint on the filters,
one can penalize \mOmega
to encourage filter diversity,
as in \eqref{eq: encourage distinct filters via correlation}.
Using a penalty
has the advantage of being able to learn any size (under- or over-complete) \mOmega
and not \textit{requiring} the filters to represent all frequencies.
For example,
as an alternative to the tight frame constraint,
\citet{chun:2020:convolutionalanalysisoperator}
proposed a version of CAOL
using the following regularizer (to within scaling constants)
\begin{align}
\regfcn(\mOmega) = \beta \normsq{\mOmega \mOmega' - \I} \nonumber
\end{align}
and a unit norm constraint on the filters.
Ref.~\citep{pfister:2019:learningfilterbank}
included a similar penalty
to \eqref{eq: encourage distinct filters via correlation},
but with the inner product being divided by the norm of the filters
as the filters were not constrained to unit norm.
All such variations on this penalty are to encourage filter diversity.
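The following snippet (our own, to within the scaling conventions of the cited papers) evaluates both styles of diversity penalty and shows that near-duplicate filters are penalized heavily:

```python
import numpy as np

def diversity_penalties(Omega):
    """Log-correlation penalty (rows normalized first, as in the
    variant without a unit-norm constraint) and a CAOL-style
    Frobenius penalty on the row Gram matrix."""
    K = Omega.shape[0]
    W = Omega / np.linalg.norm(Omega, axis=1, keepdims=True)
    G = W @ W.T                                   # row-wise inner products
    iu = np.triu_indices(K, k=1)
    log_pen = -np.sum(np.log(1.0 - G[iu] ** 2))   # -> infinity as rows align
    frob_pen = np.sum((G - np.eye(K)) ** 2)       # encourages orthonormal rows
    return log_pen, frob_pen

orthonormal = np.eye(3)
near_duplicate = np.array([[1.0, 0.0, 0.0],
                           [0.99, 0.01, 0.0],
                           [0.0, 0.0, 1.0]])
assert diversity_penalties(orthonormal)[0] < diversity_penalties(near_duplicate)[0]
```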
To ensure a square \mOmega is full rank,
while also encouraging it to be well-conditioned,
\citet{ravishankar:2013:learningsparsifyingtransforms}
used a regularizer that includes a term of the form
\begin{equation}
\regfcn(\mOmega) = -\beta_1 \log{|\mOmega|} \nonumber
.
\end{equation}
The log determinant term is known as a log barrier;
it forces \mOmega to have full rank because
of the asymptote of the log function.
Ref.~\citep{pfister:2019:learningfilterbank}
includes a similar log barrier regularization term
in terms of the eigenvalues of \mOmega
to ensure it is left-invertible.
As another example of a filter penalty regularizer,
both
\citep{ravishankar:2013:learningsparsifyingtransforms}
and
\citep{pfister:2019:learningfilterbank},
include the following regularization term
\begin{equation}
\regfcn(\mOmega) = \beta_2 \norm{\mOmega}_F^2 \nonumber
,\end{equation}
rather than constraining the norm of the filters.
This Frobenius norm addresses the scale ambiguity
in the analysis and transform formulations
and ensures the filter coefficients do not grow too large in magnitude.
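A sketch of this combined regularizer (function name and scaling constants are ours):

```python
import numpy as np

def tl_filter_reg(Omega, b1=1.0, b2=0.1):
    # log-determinant barrier (forces Omega to stay full rank) plus a
    # Frobenius term (keeps the filter coefficients from growing)
    _, logabsdet = np.linalg.slogdet(Omega)       # log |det Omega|, stably
    return -b1 * logabsdet + b2 * np.sum(Omega ** 2)

# a nearly singular transform is penalized far more than an orthonormal one
assert tl_filter_reg(np.diag([1.0, 1.0, 1e-8])) > tl_filter_reg(np.eye(3))
```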
Yet another approach to encouraging filter diversity is to
consider the frequency response of the set of filters.
\citet{pfister:2019:learningfilterbank} discusses
different constraint options for filter banks based on
convolution strides to ensure perfect reconstruction.
When the stride is one and one considers circular boundary conditions,
the filters can perfectly reconstruct any signal as long as they pass the
$\sdim$ %
discrete Fourier transform frequencies.
Tight frames satisfy this constraint,
but the constraint is more relaxed than a tight frame constraint.
\cref{chap: applications} discussed some
(relatively rare) bilevel problems
with penalties on the learned hyperparameters,
but,
notably,
there are no constraints nor penalties on the filters
in the bilevel method \eqref{eq: bilevel for analysis filters}!
Because of its task-based nature,
filters learned via the bilevel method
should be those that are best for image reconstruction.
Thus, one should not have to worry about redundant filters,
zero filters, or filters with excessively large coefficients.
This property is one of the key benefits of bilevel methods. %
\section{Future Directions}
Throughout this review,
we mentioned a few areas for future work
on bilevel methods.
This section highlights some of the avenues
that we think are particularly promising.
Advancing upper-level loss function design
is identified as future work in many bilevel papers.
Despite the abundance of research on image quality metrics
(see \sref{sec: loss function design}),
most bilevel methods
use mean square error for the upper-level loss function
(see \sref{sec: prev results loss function} for exceptions).
Using loss functions that better match the
end-application of the images
is a clear future direction for bilevel methods
that nicely aligns with their task-based nature.
For example, in the medical imaging field
there is a large literature
on objective measures of image quality
\cite{barrett:90:oao},
often based on mathematical observers
designed to emulate human performance
on signal detection tasks,
\eg, for situations where a lesion's location is unknown
\cite{yendiki:07:aoo}.
To our knowledge,
there has been little if any work to date
on using such mathematical observers
to define loss functions
for bilevel methods
or for training CNN models,
though there has been work
on CNN-based observers
\cite{kopp:18:cam}.
Using task-based metrics
for bilevel methods
and CNN training
is a natural direction for future work
that could bridge the extensive literature on such metrics
with the image reconstruction field.
Unsupervised bilevel problems are exceptions
to the trend of using MSE for the upper-level loss function.
\sref{sec: prev results lower level}
considered a few %
unsupervised bilevel methods
that use noise statistics to estimate the quality of the reconstructed images,
\eg,
\citep{fehrenbach:2015:bilevelimagedenoising, hintermuller:2020:dualizationautomaticdistributed, sixou:2020:adaptativeregularizationparameter}
\citep{zhang:2020:bilevelnestedsparse,deledalle:2014:steinunbiasedgradient}. %
One extension to the unsupervised setting
is the semi-supervised setting,
where one might have access to a few clean training samples
and additional, noisy training samples.
An alternative extension would be to consider
a two-stage bilevel problem,
where perhaps filters and initial tuning parameters
are learned offline from a training dataset
and then the tuning parameters are further adjusted
during the reconstruction
in an unsupervised manner
to account for possible differences between
the specific image being reconstructed
and the training dataset.
Just as considering more advanced image quality metrics for the upper-level loss function is a promising area for future work,
bilevel methods can likely be improved by
using more advanced lower-level cost functions.
Perhaps due to the already challenging and non-convex nature of bilevel problems,
most methods consider convex lower-level cost functions.
Papers that examine non-convex regularizers,
\eg, \citep{kunisch:2013:bileveloptimizationapproach,chen:2014:insightsanalysisoperator},
conclude that non-convex regularizers
lead to more accurate image reconstructions,
likely due to better matching the statistics of natural images.
This observation
aligns with the simple denoising experimental results in
\citep{crockett:2021:motivatingbilevelapproaches},
where learned filters with \eqref{eq: corner rounded 1-norm} as the regularizer
denoised signals less accurately than
a hand-crafted filter with the non-convex 0-norm regularizer.
In other words, the structure of the regularizer matters
in addition to how one learns the filters.
In addition to non-convexity,
future bilevel methods could consider non-smooth cost functions.
Many bilevel methods require the lower-level cost
to be smooth.
Exceptions include the translation to a single level approach
(\sref{sec: translation to a single level}),
which uses the 1-norm as the lower-level regularizer
and
unrolled methods
(\sref{sec: unrolled}),
which can be applied to non-smooth cost functions
as long as the optimization algorithm has smooth updates.
As an example of a non-smooth cost with a smooth update,
\citet{christof:2020:gradientbasedsolutionalgorithms}
proved that a cost function
with a 1-norm plus a $p$-norm regularizer
has a smooth proximal operator
for $p \in (1,2)$,
despite including a non-smooth 1-norm.
The impact of smoothing the cost function
on the perceptual quality of the reconstructed image
is largely unknown.
Another avenue for future work is
based on the fact that
\xtrue is really a continuous-space function.
A few methods,
\eg, \citep{calatroni:2017:bilevelapproacheslearning,delosreyes:2017:bilevelparameterlearning},
develop bilevel methods in continuous-space.
However, the majority of methods
use discretized forward models
without considering the impact of this simplification
(as done in this review paper).
Future investigations of bilevel methods
should strive to avoid
the ``inverse crime''
\cite{kaipioa:07:sip}
implicit in
\eqref{eq: y=Ax+n}
where the data is synthesized
using the same discretization
assumed by the reconstruction method.
\comment{ %
Finally,
a possibly undervalued aspect of bilevel methods
is that,
by maintaining a strong connection with the original lower-level cost function,
one could use different lower-level optimization algorithms
during training and testing.
\sref{sec: connections unrolled} discussed that
this generality
is likely to be useful
only if one optimizes the lower-level
cost function to some convergence threshold.
The main benefit of this possibility
is that optimization algorithms used for testing
could involve non-differentiable steps,
such as in adaptive restart,
and thus could be faster
than the differentiable ones selected during training
for the unrolled bilevel method.
}
\section{Summary of Advantages and Disadvantages}
Like the methods described in
\citep{shlezinger:2020:modelbaseddeeplearning},
bilevel methods for computational imaging
involve mixing inference-based
optimization approaches with learning-based approaches
to leverage benefits of both techniques.
Inference-based approaches use prior knowledge,
usually in the form of a forward model
and an object model,
to reconstruct images.
Typically the forward model, \mA, is under-determined,
so some form of regularization based on the object model is essential.
Regularizers always involve some number of adjustable parameters;
traditionally
inference-based methods
select such parameters empirically
or using basic image properties
like resolution and noise
\cite{fessler:96:srp,fessler:96:mav}.
The regularization parameters may also be learned
from training
to maximize SNR
\cite{qi:06:pml}
or detection task performance
\cite{yang:14:rdi}
in a bilevel manner
(often using a grid or random search
due to the relatively small number of learnable parameters).
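As a minimal illustration of this grid-search style of bilevel tuning (a hypothetical 1D denoising setup of our own), the upper level evaluates the training loss at a few candidate regularization parameters while the lower level has a closed-form solution:

```python
import numpy as np

rng = np.random.default_rng(2)
x_true = np.sin(np.linspace(0, 3, 64))            # smooth "training" signal
y = x_true + 0.3 * rng.normal(size=64)            # noisy data
D = np.diff(np.eye(64), axis=0)                   # finite-difference operator

def reconstruct(y, beta):
    # lower level: min_x 0.5||x - y||^2 + 0.5*beta*||Dx||^2 (closed form)
    return np.linalg.solve(np.eye(64) + beta * (D.T @ D), y)

# upper level: grid search over beta using the training pair (x_true, y)
betas = [0.01, 0.1, 1.0, 10.0, 100.0]
losses = [np.sum((reconstruct(y, b) - x_true) ** 2) for b in betas]
beta_hat = betas[int(np.argmin(losses))]
assert min(losses) < np.sum((y - x_true) ** 2)    # beats no regularization
```

This exhaustive search is feasible only because a single scalar is being learned; gradient-based bilevel methods are what make many-parameter versions of the same idea tractable.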
When the forward model and object model are well-known
and easy to incorporate in a cost function,
inference-based methods can yield accurate reconstructions
without the need for large datasets of clean training data.
Learning-based approaches use training datasets
to learn a prior.
Recently,
learning-based approaches have achieved
remarkable reconstruction accuracy in practice,
largely due to
the increased availability of computational resources
and larger, more accessible training datasets
\citep{wang:16:apo,hammernik:2020:machinelearningimage}.
However,
many (deep) learning methods lack theoretical guarantees and explainability
and finding sufficient training data is still
challenging in many applications.
Both of these challenges
may impede
adoption of learning-based methods
in clinical practice for some applications,
such as medical image reconstruction
\citep{sahiner:18:dli}.
Some deep learning methods
for CT image reconstruction
were approved for clinical use
in 2019
\cite{fda:19:ge-dlir}; %
early studies have shown such methods
can significantly reduce noise
but may also compromise low-contrast spatial resolution
\cite{solomon:20:nas}.
Combining inference-based and learning-based approaches
allows the integration of learning from training data
while using smaller training datasets
by incorporating prior knowledge.
Such mixed methods often maintain interpretability
from the inference-based roots
while using learning to provide adaptive regularization.
Thus, the benefits of bilevel methods
in this review's introduction
are generally shared among the methods described in
\citep{shlezinger:2020:modelbaseddeeplearning}:
theoretical guarantees, %
competitive performance in terms of reconstruction accuracy, %
and similar performance to learned networks with a fraction of the free parameters,
\eg,
\citep{chen:2020:learnabledescentalgorithm, kobler:2020:totaldeepvariation}.
What distinguishes bilevel methods
from the other methods in the inference-based to learning-based
spectrum in
\fref{fig: model-based to learning spectrum}?
While one can argue that the conventional CNN and deep learning approach
is always bilevel in the sense that the hyperparameters
are trained to minimize a loss function,
this review considered bilevel methods
with the cost function structure
\eqref{eq: generic bilevel lower-level}.
The regularization term in
\eqref{eq: generic bilevel lower-level}
could be based on a DNN
\citep{chen:2020:learnabledescentalgorithm},
but we followed the bilevel literature
that focuses on priors/regularizers,
such as in \eqref{eq: bilevel for analysis filters},
maintaining a stronger connection
to traditional cost function design.
Another lens for understanding bilevel methods
is extending single-level hyperparameter
optimization approaches to be task-based, bilevel approaches.
Single-level approaches to image reconstruction,
such as
those using dictionary learning
\cite{ravishankar:2011:mrimagereconstruction},
convolutional analysis operator learning
\citep{chun:2020:convolutionalanalysisoperator},
and convolutional dictionary learning
\citep{garcia-cardona:2018:convolutionaldictionarylearning,chun:18:cdl},
generally aim to learn
characteristics of a training dataset,
with the idea that these characteristics
can then be used in a prior for an image reconstruction task.
While such an approach may
learn more general information,
\citet{crockett:2021:motivatingbilevelapproaches,mccann:2020:supervisedlearningsparsitypromoting}
showed that a common single-level optimization strategy
resulted in learning a regularizer that
was suboptimal for the simple task of image denoising.
As further evidence of the benefit of task-based learning,
\citet{mccann:2020:supervisedlearningsparsitypromoting}
found that the lack of constraints in the bilevel filter learning problem is important;
the learned filters used the flexibility of the model
and were not orthonormal,
whereas orthonormality
is a constraint
often imposed in single-level models
(see \sref{sec: filter constraints}).
Ref. \citep{kunisch:2013:bileveloptimizationapproach}
showed how the task-based nature adapts to training data;
total variation based regularization works well
for piece-wise constant images
but less so for natural images.
Beyond adapting to the training dataset,
bilevel methods are task-based in terms of adapting
to the level of noise;
\citet{ehrhardt:2021:inexactderivativefreeoptimization}
found the learned tuning parameters for image denoising go to $0$
as the noise level goes to $0$,
since no regularization is needed
in the absence of noise
for well-determined problems.
A primary disadvantage cited for most bilevel methods
is the computational cost
compared to single-level hyperparameter optimization methods
or other methods with a smaller learning component.
In turn,
the main driver behind the large computational cost of
gradient-descent based bilevel optimization methods is that
one typically has to optimize the lower-level cost function many times,
either to some tolerance or for a certain number of iterations.
The computational cost involves a trade-off
because
how accurately one optimizes the lower-level problem
can impact
the quality of the learned parameters.
For example,
\citet{kunisch:2013:bileveloptimizationapproach, chen:2014:insightsanalysisoperator}
both claim better denoising accuracy than
\citep{samuel:2009:learningoptimizedmap}
because they optimize the lower-level problem more accurately.
Similarly,
\citet{mccann:2020:supervisedlearningsparsitypromoting}
notes that learning will fail if the lower-level cost is not optimized to sufficient accuracy.
There are various strategies
to decrease the computational cost
for bilevel methods.
Some are relatively intuitive and applicable
to a wide range of problems in machine learning.
For example,
\citet{mccann:2020:supervisedlearningsparsitypromoting}
used larger batch sizes as the iterations continued,
\citet{calatroni:2017:bilevelapproacheslearning}
increased the batch size
if a gradient step in \params
did not sufficiently improve the loss function,
and
\citet{ehrhardt:2021:inexactderivativefreeoptimization}
tightened the accuracy requirement for the gradient estimation over iterations.
These strategies all save computation
by starting with rougher approximations near the beginning
of the optimization method,
when \iter{\params} is likely far from \paramh,
while using a relatively accurate solution
by the end of the algorithm.
\comment{
Other work in bilevel methods,
\eg, \citep{ghadimi:2018:approximationmethodsbilevel,ji:2020:bileveloptimizationnonasymptotic,hong:2020:twotimescaleframeworkbilevel,chen:2021:singletimescalestochasticbilevel},
investigates theoretical bounds on sample complexity
based on specific conditions for the upper-level loss and lower-level cost functions;
see \srefs{sec: double-loop complexity analysis}{sec: all-at-once complexity}.
} %
Another disadvantage of bilevel methods
is that,
while the optimization algorithm
for the lower-level problem
often has theoretical convergence guarantees,
and the lower-level cost
is often designed to be strictly convex,
the full bilevel problem
\eqref{eq: generic bilevel upper-level}
is usually non-convex,
so the quality of the learned hyperparameters
can depend on initialization.
Thus, in practice,
one needs a strategy for initializing \params.
For example,
for \eqref{eq: bilevel for analysis filters},
one may decide to
use a single-level filter learning technique
such as the Field of Experts \citep{roth:2005:fieldsexpertsframework}
to initialize the hyperparameters.
Or, one can use a handcrafted set of filters,
such as the DCT filters
(or a subset thereof).
Other hyperparameters often have
similar warm start options.
Despite the non-convexity,
papers that tested multiple initializations
generally found similarly good solutions surprisingly often,
\eg,
\citep{chen:2014:insightsanalysisoperator,ehrhardt:2021:inexactderivativefreeoptimization,hintermuller:2020:dualizationautomaticdistributed}.
There is no one correct answer
for how much a method should use
prior information
or learning techniques,
and it is unlikely that any single approach
can be the best for all image reconstruction applications.
Like most engineering problems,
the trade-off is application-dependent.
One should (minimally) consider
the amount of training data available,
how representative the training data is
of the test data,
how under-determined the forward model is
(\ie, how strong the regularization must be),
how well-known the object model is,
the importance of theoretical guarantees
and explainability,
and
the available computational resources at training time and at test time.
Bilevel methods show particular promise
for applications where training data is limited and/or explainability is highly valued,
such as in medical imaging.
\chapter{Additional Running Example Results}
\setcounter{section}{0}
\renewcommand{\thesection}{A.\arabic{section}}
\renewcommand*{\theHsection}{appB.\the\value{section}}
This appendix derives some results
that are relevant to the running example
used throughout the survey.
\section{Derivatives for Convolutional Filters}
\label{sec: dh of htilde conv f(h conv x)}
{ %
\newcommand{\is}{\xmath{i_1,\ldots,i_N}} %
\renewcommand{\vh}{\vc}
This section proves the result
\begin{align}
\nabla_{\xmath{\h_{s}}} \paren{ \htilde_k \conv f.(\hk \conv \vx)}
=
f.(\circshift{\hk \conv \vx}{\vs}) + \hktil \conv \paren{\dot{f}.(\hk \conv \vx) \odot \circshift{\vx}{-\vs}} \label{eq: bilevel caol pd htilde conv f(h conv x) pd hs}
,\end{align}
when considering $\F=\R$.
This equation is key
to finding derivatives
of the lower-level cost function in
\eqref{eq: bilevel for analysis filters}
with respect to the filter coefficients.
To simplify notation,
we drop the indexing over $k$,
so \vh is a single filter
and \xmath{\h_{s}} denotes the $\vs$th element in the filter
for $\vs \in \ints^D$.
Here, $\vs$ indexes every dimension of \vh,
\eg, for a two-dimensional filter,
we could equivalently write $\vs$ as
$\langle s_1, s_2 \rangle$. %
Recall that the notation \htilde signifies
a reversed version of \vh,
as needed for the adjoint of convolution.
Define the notation $\circshift{\vx}{\vi}$
as the vector \vx circularly shifted according
to the index $\vi$.
Thus,
if \vx is 1-indexed and we use circular indexing,
\[
\parenr{\circshift{\vx}{\vs}}_\vi = \vx_{\vs+\vi}
.\]
As two concrete examples,
\[
\circshift{\vx}{1} = \begin{bmatrix} x_2 \\ x_3 \\ \vdots \\ x_N \\ x_1 \end{bmatrix}
,\]
and, in two dimensions,
if $\mX \in \F^{M \by N}$
\[
\circshift{\mX}{-1,-2} =
\begin{bmatrix}
x_{M, N-1} & x_{M, N} & x_{M, 1} & \ldots & x_{M, 3} \\
x_{1, N-1} & x_{1, N} & x_{1,1} & \ldots & x_{1, 3} \\
x_{2, N-1} & x_{2, N} & x_{2,1} & \ldots & x_{2, 3} \\
\vdots & & \ddots & & \vdots \\
x_{M-1,N-1} & x_{M-1,N} & x_{M-1,1} & \ldots & x_{M-1,3}
\end{bmatrix}
.\]
Notice the element $x_{1,1}$ appears
in the second row and third column of
\circshift{\mX}{-1,-2},
since
\[
\underbrace{\langle -1, -2 \rangle}_{\vs} +
\underbrace{\langle 2, 3 \rangle}_{\vi}
= \langle 1, 1 \rangle
.\]
This circular shift notation
is useful in the derivation and statement of
the desired gradient.
Define
$\vz = \vh \conv \vx$,
where \vh and \vx are both $N$-dimensional.
By the definition of convolution,
\vz is given by
\begin{equation*}
\vz = \sum_{i_1} \cdots \sum_{i_N} h_{\is} \circshift{\vx}{\xmath{\neg i_1,\ldots,\neg i_N}}
\defeq \sum_{\is} h_{\is} \circshift{\vx}{\neg \vi}
,\end{equation*}
where, for each sum,
the indexing variable $i_n$
iterates over the size of \vh in the $n$th dimension
and we simplify the index for circularly shifting vectors,
\is, as simply $\langle \vi \rangle$.
This expression shows that the derivative
of $\vh \conv \vx$
with respect to the $\vs$th filter coefficient
is the $- \vs$th coefficient in \vx,
\ie,
\begin{equation}
\nabla_{\xmath{\h_{s}}} (\vh \conv \vx) = \circshift{\vx}{-\vs} \label{eq: bilevel caol pd z pd hs}
.\end{equation}
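Equation \eqref{eq: bilevel caol pd z pd hs} is easy to sanity check numerically; the sketch below (function and variable names are ours) implements circular convolution with the FFT and compares a finite difference against the circularly shifted signal:

```python
import numpy as np

def cconv(h, x):
    # circular convolution: z_j = sum_i h_i x_{(j - i) % N}
    return np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))

rng = np.random.default_rng(0)
N, s, eps = 8, 3, 1e-6
h, x = rng.standard_normal(N), rng.standard_normal(N)

e = np.zeros(N); e[s] = eps
fd = (cconv(h + e, x) - cconv(h, x)) / eps  # finite-difference derivative
analytic = np.roll(x, s)                    # circshift(x, -s): entry j is x_{j-s}
assert np.allclose(fd, analytic, atol=1e-5)
```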
We can now find the gradient of interest:
\begin{align*}
\htilde \conv f.(\vz) &=
\sum_{\is} (\htilde)_{\is} \circshift{f.(\vz)}{\neg \vi}
&& \text{ by the convolution formula }
\\
&= \sum_{\is} (\htilde)_{\is} f.\paren{\circshift{\vz}{\neg \vi}}
&& \text{ since $f$ operates point-wise}
\\
&= \sum_{\is} \h_{\xmath{\neg i_1,\ldots,\neg i_N}} f.\paren{\circshift{\vz}{\neg \vi}}
&& \text{ by definition of } \htilde
\\
&= \sum_{\is} \h_{\is} f.\paren{\circshift{\vz}{\vi}}
&& \text{ reverse summation order.}
\end{align*}
Recall that \vz is a function of \xmath{\h_{s}}.
Therefore,
using the chain rule to take the derivative,
\begin{align*}
\nabla_{\xmath{\h_{s}}} &\paren{\htilde \conv f.(\vz)} \\
&= f.(\circshift{\vz}{\vs}) + \sum_{i_1} \cdots \sum_{i_N} \h_{\is} \dot{f}.(\circshift{\vz}{\vi}) \odot \nabla_{\xmath{\h_{s}}} \paren{\circshift{\vz}{\vi}} \\
&= f.(\circshift{\vz}{\vs}) + \sum_{i_1} \cdots \sum_{i_N} \h_{\is} \dot{f}.(\circshift{\vz}{\vi}) \odot \circshift{\vx}{\vi-\vs}
,\end{align*}
where the second equality follows
from \eqref{eq: bilevel caol pd z pd hs}.
Recognizing the convolution formula
in the second summand,
the expression can be simplified to
\[
f.(\circshift{\vz}{\vs}) + \htilde \conv \paren{\dot{f}.(\vz) \odot \circshift{\vx}{-\vs}}
.\]
This proves the claim.
Note that the provided formula
is for a single element in \vh.
One can concatenate the result
for each value of $\vs$
to get the gradient for all elements in \vh.
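As a numerical sanity check of the element-wise formula (a sketch with our own names; $f = \tanh$ stands in for a generic point-wise function):

```python
import numpy as np

def cconv(h, x):      # circular convolution h * x
    return np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))

def cconv_adj(h, u):  # htilde * u, the adjoint of x -> h * x
    return np.real(np.fft.ifft(np.conj(np.fft.fft(h)) * np.fft.fft(u)))

f = np.tanh
fdot = lambda z: 1 - np.tanh(z) ** 2

rng = np.random.default_rng(1)
N, s, eps = 8, 2, 1e-6
h, x = rng.standard_normal(N), rng.standard_normal(N)
z = cconv(h, x)

# claimed gradient: f.(circshift(z, s)) + htilde * (fdot.(z) . circshift(x, -s))
claimed = f(np.roll(z, -s)) + cconv_adj(h, fdot(z) * np.roll(x, s))

e = np.zeros(N); e[s] = eps
fd = (cconv_adj(h + e, f(cconv(h + e, x))) - cconv_adj(h, f(z))) / eps
assert np.allclose(fd, claimed, atol=1e-4)
```

The finite difference perturbs $h_s$ in both the outer adjoint filter and the inner convolution, matching the two chain-rule terms.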
} %
\section{Evaluating Conditions for the Running Example \label{sec: ghadimi bounds applied}}
To better understand the conditions
in \sref{sec: double-loop complexity analysis},
this section illustrates how they apply
to the filter learning example
\eqref{eq: bilevel for analysis filters}.
\subsection{Upper-level Loss Conditions}
Recall the upper-level loss function is
mean square error:
\begin{align}
\lfcnparamsvx = \onehalf \normr{\vx - \xtrue}^2_2
\label{eq: loss function repeat}
,\end{align}
where \lfcn is typically evaluated at
$\vx=\xhat(\params)$.
The loss function \eqref{eq: loss function repeat}
satisfies \ref{BA assumption upper-level 1}.
Because there is no dependence on \params in the upper-level,
$\xmath{L_{\vx,\nabla_\params \lfcn}}=0$.
The gradient with respect to \vx is
$\nabla_\vx \lfcnparamsvx = \vx - \xtrue$,
so $\xmath{L_{\vx,\nabla_\vx \lfcn}}=1$.
The norm of the gradient with respect to \vx,
$\norm{\nabla_\vx \lfcnparamsvx} = \norm{\vx - \xtrue}$,
can grow arbitrarily large,
so condition
\ref{BA assumption upper-level 2}
is not met in general.
However, in most applications,
one can assume an upper bound
(possibly quite large)
on the elements of \xtrue
and impose that bound
as a box constraint when computing \xhat.
Then the triangle inequality
provides a bound
on
\norm{\vx - \xtrue}
for all \vx within the constraint box.
\comment{
one can set a reasonable bound on the reconstruction error
and the bound is likely met in practice.
|x - xtrue| <= |x| + |xtrue| <= 2 sqrt(N) * xmax
For example,
if
all elements in \vx
are within $c \sigma^2$ of
the corresponding element in \xtrue
for some constant $c>0$,
\ie,
$\abs{(\vx-\xtrue)_i} \leq c \sigma^2 \, \forall i$,
then $\xmath{C_{\nabla_\vx \lfcn}} = c\sigma^2 \sqrt{\sdim}$.
}%
Finally,
\ref{BA assumption upper-level end}
is met by any loss function,
including \eqref{eq: loss function repeat},
that lacks cross terms between \vx and \params.
We are unaware of any bilevel method papers
using such cross terms.
\subsection{Lower-level Cost Conditions}
One property used %
below
in many of the bounds
for the lower-level cost function
is that
\begin{equation}
\sigma_1(\Ck) \leq \norm{\hk}_1 \label{eq: sigma1 Ck}
,\end{equation}
where
$\sigma_1(\cdot)$
denotes the largest singular value of its matrix argument.
To derive this bound,
we use the circular boundary assumption on the convolutional filters.
By the DFT convolution property,
multiplication by the convolutional matrix \Ck
is equivalent to multiplication by the unitary discrete Fourier transform (DFT) matrix, $\mF'$,
filtering in the Fourier transform domain,
and multiplication with the inverse DFT matrix.
This identity provides an eigenvalue decomposition for the convolutional matrix,
$\Ck = \mF \mLambda \mF'$,
where the eigenvalues of \Ck are simply
the DFT of the first column of \Ck:
\[
\text{eig}(\Ck) = \left\{ \sum_{n=0}^{\sdim-1} (\hk)_n e^{-i 2 \pi n m/\sdim} \right\}
\text { for } m \in \{0,\ldots,\sdim-1\}
.\]
The largest singular value of $\Ck'\Ck$
is the largest squared magnitude of the eigenvalues of \Ck.
The triangle inequality and
the fact that every complex exponential has unit magnitude
yield the bound
\begin{align*}
\sigma_1(\Ck'\Ck) &=
\max_{m \in \{0,\ldots,\sdim-1\} } \abs{ \sum_{n=0}^{\sdim-1} (\hk)_n e^{-i 2 \pi n m/\sdim} }^2 \\
&\leq \left(\sum_{n=0}^{\sdim-1} \abs{(\hk)_n} \right)^2 = \normsq{\hk}_1
.\end{align*}
Because $\sigma_1(\Ck'\Ck) = \sigma_1^2(\Ck)$,
taking square roots establishes \eqref{eq: sigma1 Ck}.
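A quick numerical confirmation of the circulant eigenvalue identity and of the bound $\sigma_1(\Ck) \leq \norm{\hk}_1$ (the setup is ours; \texttt{np.roll} builds the circulant matrix column by column):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
h = np.zeros(N); h[:5] = rng.standard_normal(5)  # short filter, zero-padded

# circulant matrix whose first column is h; C @ x is circular convolution
C = np.column_stack([np.roll(h, j) for j in range(N)])

sigma1 = np.linalg.svd(C, compute_uv=False)[0]
assert np.isclose(sigma1, np.abs(np.fft.fft(h)).max())  # eigenvalues = DFT of h
assert sigma1 <= np.linalg.norm(h, 1) + 1e-12           # spectral norm <= l1 norm
```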
As with the upper-level conditions,
\eqref{eq: bilevel for analysis filters}
meets most of the conditions
if we impose additional constraints
on the maximum norm of variables.
In addition to bounding the elements in \vx,
as we did to ensure \ref{BA assumption upper-level 2},
imposing bounds on
$\norm{\hk}$
and $\abs{\beta_k}$
is sufficient
to meet
all the lower-level conditions
\ref{BA assumption lower-level 1}-\ref{BA assumption lower-level end}.
We now examine each condition individually.
Recall from
\eqref{eq: bilevel for analysis filters}
that the example lower-level cost function is
\begin{align}
\xhat(\params) &= \argmin_{\vx \in \F^\sdim} \onehalf \norm{\mA \vx-\vy}^2_2
+ \ebeta{0} \sum_{k=1}^K \ebeta{k} \mat{1}' \sparsefcn.(\hk \conv \vx; \epsilon)
\nonumber
,\end{align}
where \sparsefcn is a corner-rounded 1-norm
\eqref{eq: corner rounded 1-norm}.
As described in \sref{sec: minimizer approach},
the minimizer approach requires \ofcn to be twice differentiable.
Thus, \ofcn satisfies
\ref{BA assumption lower-level 1}.
This condition limits the choices of \sparsefcn
to twice differentiable functions.
Considering \ref{BA assumption lower-level 2},
the gradient of \ofcn with respect to \vx
is Lipschitz continuous in \vx if the norm of the Hessian,
$\norm{\nabla_{\vx\vx} \ofcnargs}_2$,
is bounded.
Using \eqref{eq: nablas for filter learning}
and assuming the Lipschitz constant of the derivative of \sparsefcn
is $\xmath{L_{\dot{\sparsefcn}}}\xspace$
(for \eqref{eq: corner rounded 1-norm},
$\xmath{L_{\dot{\sparsefcn}}}\xspace=\tfrac{1}{\epsilon}$),
a Lipschitz constant
for $\nabla_\vx \ofcn$ is
\begin{align*}
\xmath{L_{\vx,\nabla_\vx \ofcn}} &= \sigma_1^2(\mA) + L_{\dot{\sparsefcn}} \ebeta{0} \sum_k \ebeta{k} \sigma_1(\Ck'\Ck) \\
&\leq \sigma_1^2(\mA) + L_{\dot{\sparsefcn}} \ebeta{0} \sum_k \ebeta{k} \normsq{\hk}_1 \text{ by \eqref{eq: sigma1 Ck}}
.\end{align*}
The Lipschitz constant \xmath{L_{\vx,\nabla_\vx \ofcn}} depends on the values in \params
and therefore does not strictly satisfy \ref{BA assumption lower-level 2}.
Here if $\beta_0$, $\beta_k$, and $\vc_k$ have upper bounds,
then one can upper bound \xmath{L_{\vx,\nabla_\vx \ofcn}}.
All of the bounds below have similar considerations.
To consider the strong convexity condition in
\ref{BA assumption lower-level 3},
we again consider the Hessian,
\begin{equation}
\nabla_{\vx \vx} \ofcn(\vx \, ; \params) =
\underbrace{\mA'\mA}_{\text{from data-fit term}} +
\underbrace{\ebeta{0} \sum_k \ebeta{k} \mC_k' \diag{\ddsparsefcn.(\hk \conv \vx)} \mC_k}_{\text{from regularizer}} \label{eq: Hessian repeat}
.\end{equation}
We assume that
$\ddsparsefcn(z) \geq 0 \, \forall z$,
as is the case for the corner rounded 1-norm.
If $\mA'\mA$ is positive-definite
with $\sigma_\sdim(\mA'\mA) > 0$
(this is equivalent to \mA having full column rank),
then the Hessian is positive-definite
and $\xmath{\mu_{\vx,\ofcn}}=\sigma_\sdim^2(\mA)$
suffices as a strong convexity parameter.
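This claim can be checked numerically. The sketch below (our own setup) assumes the corner-rounded 1-norm takes the common form $\sqrt{z^2 + \epsilon^2}$, which may differ from \eqref{eq: corner rounded 1-norm} by a constant; its second derivative is nonnegative, so the Hessian \eqref{eq: Hessian repeat} dominates $\mA'\mA$:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, eps = 12, 8, 0.1
A = rng.standard_normal((M, N))  # full column rank with probability 1
h = rng.standard_normal(N)
C = np.column_stack([np.roll(h, j) for j in range(N)])  # circulant for filter h
x = rng.standard_normal(N)
beta0, beta1 = -1.0, 0.5

phi_ddot = lambda z: eps**2 / (z**2 + eps**2) ** 1.5  # second derivative, >= 0
H = A.T @ A + np.exp(beta0 + beta1) * C.T @ np.diag(phi_ddot(C @ x)) @ C

eig_min = np.linalg.eigvalsh(H).min()
sigmaN2 = np.linalg.svd(A, compute_uv=False)[-1] ** 2  # sigma_N^2(A)
assert eig_min >= sigmaN2 - 1e-10  # strong convexity parameter holds
```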
In applications like compressed sensing,
\mA does not have full column rank.
In such cases,
$\sigma_\sdim(\mA'\mA) = 0$
and the regularizer term vanishes as $e^{\beta_0} \rightarrow 0$,
so no universal
$\xmath{\mu_{\vx,\ofcn}} > 0$
exists for all $\params \in \F^\paramsdim$,
and the strong convexity condition \ref{BA assumption lower-level 3}
is not satisfied.
However,
the condition is likely to hold in practice
for \dquotes{good}
values of \params.
Specifically, the condition will hold for any value of \params
such that
the null-space of the regularization term
intersects the null-space of \mA
only at the zero vector
and the regularization parameters
are sufficiently large.
To interpret this condition,
recall that regularization helps
compensate for the under-determined nature of \mA
(\sref{sec: image recon background}).
Values of \params that do not sufficiently
\dquotes{fill-in} the null-space of \mA
will leave the lower-level cost function under-determined;
the task-based nature of the bilevel problem
should discourage these \dquotes{bad} values.
Nevertheless,
how to adapt the complexity theory
to rigorously address these subtleties
is an open question.
The fourth condition, %
\ref{BA assumption lower-level 4},
is that
$\nabla_{\vx \vx} \ofcnargs$
and
$\nabla_{\params \vx} \ofcnargs$
are Lipschitz continuous with respect to \vx for all \params.
For the first part,
a Lipschitz constant results from bounding the difference
in the Hessian evaluated at two points,
$\vx^{(1)}$ and $\vx^{(2)}$:
\begin{align*}
&\norm{\nabla_{\vx \vx} \ofcn(\vx^{(1)} \, ; \params) - \nabla_{\vx \vx} \ofcn(\vx^{(2)} \, ; \params) }_2 \\
&\quad\quad = \norm{ \ebeta{0} \sum_k \ebeta{k} \mC_k'
\diag{\ddsparsefcn.(\hk \conv \vx^{(1)}) - \ddsparsefcn(\hk \conv \vx^{(2)})}
\mC_k}_2
.\end{align*}
Since every value of \ddsparsefcn
lies between $0$ and $L_{\dot{\sparsefcn}}$,
the difference between any two evaluations of \ddsparsefcn
is at most $L_{\dot{\sparsefcn}}$.
Thus
\begin{align*}
\norm{\nabla_{\vx \vx} \ofcn(\vx^{(1)} \, ; \params) - \nabla_{\vx \vx} \ofcn(\vx^{(2)} \, ; \params) }_2
& \leq
\ebeta{0} L_{\dot{\sparsefcn}} \sum_k \ebeta{k} \norm{\mC_k'\mC_k}_2 \\
&\leq \ebeta{0} L_{\dot{\sparsefcn}} \sum_k \ebeta{k} \normsq{\hk}_1
.\end{align*}
The final simplification again uses \eqref{eq: sigma1 Ck}.
Thus,
$\xmath{L_{\vx,\nabla_{\vx\vx}\ofcn}} = \ebeta{0} \xmath{L_{\dot{\sparsefcn}}}\xspace \sum_k \ebeta{k} \normsq{\hk}_1$.
For the second part of \ref{BA assumption lower-level 4},
we must look at the tuning parameters and filter coefficients separately.
When considering learning a tuning parameter, $\beta_k$,
\begin{align*}
\nabla_{\beta_k \vx} \ofcnargs &=
\ebetazerok \Ck' \dsparsefcn.(\Ck \vx)
.\end{align*}
To find a Lipschitz constant,
consider the gradient with respect to \vx:
\begin{align*}
\nabla_\vx \left( \nabla_{\beta_k \vx} \ofcnargs \right) &=
\ebetazerok \Ck' \diag{\ddsparsefcn.(\Ck \vx)} \Ck
.\end{align*}
A Lipschitz constant of $\nabla_{\beta_k \vx} \ofcnargs$
is given by the bound on the norm of this matrix
(we chose to use the matrix 2-norm,
also called the spectral norm).
Using similar steps as above to simplify the expression,
$L_{\vx,\nabla_{\beta_k \vx}\ofcn} = \ebetazerok \xmath{L_{\dot{\sparsefcn}}}\xspace \normsq{\hk}_1$.
When considering learning the $s$th element of the $k$th filter,
\begin{align*}
\nabla_{c_{k,s} \vx} \ofcn(\vx \, ; \params) &=
\ebetazerok \left( \dsparsefcn.(\circshift{(\Ck \vx)}{s})
+ \Ck' \left( \ddsparsefcn.(\Ck \vx) \odot \circshift{\vx}{s} \right) \right) \\
&= \ebetazerok \left( \underbrace{\dsparsefcn.(\mR \Ck \vx)}_{\text{Expression 1}}
+ \underbrace{\Ck' \left( \ddsparsefcn.(\Ck \vx) \odot \mR \vx \right)}_{\text{Expressions 2-3}} \right)
\in \F^\sdim
,\end{align*}
where \mR is a circular-shift (permutation) matrix that depends on $s$
such that $\mR \vx = \circshift{\vx}{s}$.
For taking the gradient,
it is convenient to note that the last term
can be expressed in multiple ways:
\[
\ddsparsefcn.(\Ck \vx) \odot \circshift{\vx}{s}
=
\underbrace{\diag{\ddsparsefcn.(\Ck \vx)} \circshift{\vx}{s}}_{\mathrm{Expression \, 2}}
=
\underbrace{\diag{\circshift{\vx}{s}} \ddsparsefcn.(\Ck \vx)}_{\mathrm{Expression \, 3}}
.\]
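These identities are the standard interchange between point-wise products and diagonal matrices; a one-line numerical check (names ours):

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = rng.standard_normal(5), rng.standard_normal(5)

# a (.) b  ==  diag(a) b  ==  diag(b) a
assert np.allclose(a * b, np.diag(a) @ b)
assert np.allclose(a * b, np.diag(b) @ a)
```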
Using the alternate expressions
to perform the chain rule with respect to the \vx
that is not in the $\diag{\cdot}$ statement,
the gradient with respect to \vx is:
\begin{align*}
\nabla_\vx \left( \nabla_{c_{k,s} \vx} \ofcnargs \right) =
\ebetazerok (
&\underbrace{\Ck' \mR' \diag{\ddsparsefcn.(\mR \Ck \vx)}}_{\mathrm{Expression \, 1}} \\
&+ \underbrace{\Ck' \diag{\ddsparsefcn.(\Ck \vx)}}_{\mathrm{Expression \, 2}} \\
&+ \underbrace{\Ck' \diag{\xmath{\dddot{\phi}}\xspace(\Ck \vx)} \diag{\mR \vx}' \Ck}_{\mathrm{Expression \, 3}}
)
.\end{align*}
The spectral norms of the
first and second expressions
are both bounded by
$\sigma_1(\Ck) \xmath{L_{\dot{\sparsefcn}}}\xspace$
because, for any $\vz \in \F^\sdim$,
\[
\norm{\diag{\ddsparsefcn.(\vz)}}_2 \leq
\max_z \abs{\ddsparsefcn(z)}
= \xmath{L_{\dot{\sparsefcn}}}\xspace
.\]
The third expression is bounded by
$\sigma_1^2(\Ck) \norm{\vx} \xmath{L_{\ddot{\sparsefcn}}}\xspace$,
which requires a
bound on the norm of \vx,
similar to \ref{BA assumption upper-level 2}.
Summing the three expressions
and including the tuning parameters gives the final
Lipschitz constant
\begin{equation}
L_{\vx,\nabla_{\hks \vx}\ofcn} = \ebetazerok
\sigma_1(\Ck)
( 2\xmath{L_{\dot{\sparsefcn}}}\xspace + \sigma_1(\Ck) \xmath{L_{\ddot{\sparsefcn}}}\xspace \norm{\vx} )
\label{eq: L for hks for ghadimi LL4}
.\end{equation}
The fifth assumption,
\ref{BA assumption lower-level 5},
states that the mixed second gradient of \ofcn is bounded.
For the tuning parameters,
the mixed second gradient is given in \eqref{eq: nablas for filter learning} as
\[
\nabla_{\beta_k \vx} \ofcn(\xhat \,; \params) = \ebeta{0} \ebeta{k} \hktil \conv \dsparsefcn.(\hk \conv \xhat)
.\]
The bound given in \sref{sec: double-loop complexity analysis}
follows easily by considering that
\[
\norm{\diag{\dsparsefcn.(\hk \conv \xhat)}}_2 \leq \max_z \abs{\dsparsefcn(z)} = \xmath{L_{\sparsefcn}}\xspace
.\]
For a filter coefficient,
the mixed second gradient is more complicated:
\[
\nabla_{c_{k,s} \vx} \ofcn(\xhat \, ; \params) =
\ebetazerok \left( \underbrace{\dsparsefcn.(\circshift{(\hk \conv \xhat)}{s})}_{\text{Bounded by $\xmath{L_{\sparsefcn}}\xspace$}}
+ \hktil \conv \left( \underbrace{\ddsparsefcn.(\hk \conv \xhat)}_{\text{Bounded by $\xmath{L_{\dot{\sparsefcn}}}\xspace$}}
\odot \circshift{\xhat}{s} \right) \right)
.\]
Under the assumption that the bounds
$\xmath{L_{\sparsefcn}}\xspace$ and
$\xmath{L_{\dot{\sparsefcn}}}\xspace$
exist
(they are $1$ and $\frac{1}{\epsilon}$
respectively for \eqref{eq: corner rounded 1-norm}),
a bound on the norm of the mixed gradient is
\begin{align*}
\norm{\nabla_{c_{k,s} \vx} \ofcn(\xhat \, ; \params)}
&\leq
\ebetazerok \paren{\xmath{L_{\sparsefcn}}\xspace + \xmath{L_{\dot{\sparsefcn}}}\xspace \norm{\hk} \norm{\vx}}
.\end{align*}
\comment{we could just define bounds on \dsparsefcn and \xmath{\dddot{\phi}}\xspace,
but i already had Lipshitz constants elsewhere...} %
The sixth assumption,
\ref{BA assumption lower-level end},
is that
$\xmath{L_{\params,\nabla_{\params\vx}\ofcn}}$ and
$\xmath{L_{\params,\nabla_{\vx\vx}\ofcn}}$ exist.
Lipschitz constants %
for the tuning parameters
are
\[
L_{\beta_k, \nabla_{\beta_k \vx} \ofcn} = \ebetazerok \norm{\hk}_1 \xmath{L_{\sparsefcn}}\xspace
\text{ and }
L_{\beta_k, \nabla_{\vx \vx} \ofcn} = \ebetazerok \normsq{\hk}_1 \xmath{L_{\dot{\sparsefcn}}}\xspace
.\]
Using similar derivations as shown above,
corresponding Lipschitz constants for the filter coefficients are
\begin{align*}
L_{\hks, \nabla_{\hks \vx} \ofcn} &= \ebetazerok \paren{
\xmath{L_{\sparsefcn}}\xspace + \norm{\vx}
\paren{
\xmath{L_{\dot{\sparsefcn}}}\xspace +
\xmath{L_{\ddot{\sparsefcn}}}\xspace \norm{\hk} \norm{\vx}
}
} \\
L_{\hks, \nabla_{\vx \vx} \ofcn} &= \ebetazerok \paren{
2 \xmath{L_{\dot{\sparsefcn}}}\xspace \norm{\hk} + \xmath{L_{\ddot{\sparsefcn}}}\xspace \norm{\hk}^2 \norm{\vx}
}
.\end{align*}
This is the last lower-level condition.
\section{Introduction}\label{intro}
Let $\mathsf{G}$ be a finite group and $\mathsf{Irr}(\mathsf{G}) = \{\chi_0,\chi_1, \ldots, \chi_\ell\}$ be the set of ordinary (complex) irreducible characters of $\mathsf{G}$. Fix a faithful (not necessarily irreducible)
character $\alpha$ and generate a Markov chain on $\mathsf{Irr}(\mathsf{G})$ as follows. For $\chi \in \mathsf{Irr}(\mathsf{G})$, let $\alpha \chi = \sum_{i=0}^\ell a_i \chi_i$, where $a_i$ is the multiplicity of $\chi_i$ as a constituent of the tensor product $\alpha \chi$. Pick an irreducible constituent $\chi'$ from the right-hand side with probability proportional to its
multiplicity times its dimension. Thus, the chance $\mathsf{K}(\chi,\chi')$ of moving from $\chi$ to $\chi'$ is
\begin{equation}\label{eq:Mchain} \mathsf{K}(\chi, \chi') = \frac{ \langle \alpha \chi, \chi' \rangle \chi'(1)}{\alpha(1) \chi(1)}, \end{equation}
where $\langle \chi, \psi \rangle = |\mathsf{G}|^{-1} \sum_{g \in \mathsf{G}} \chi(g) \overbar{\psi(g)}$ is the usual Hermitian inner product on class functions $\chi,\psi$ of $\mathsf{G}$.
These tensor product Markov chains were introduced by Fulman in \cite{F3}, and have been studied by the hypergroup community, by Fulman for use with Stein's method \cite{F2}, \cite{F3}, and implicitly by algebraic geometry
and group theory communities in connection with the McKay Correspondence. A detailed literature review is given in Section \ref{litrev}. One feature is that the construction
allows a complete diagonalization. The following theorem is implicit in Steinberg \cite{St} and explicit in Fulman \cite{F3}.
\begin{thm}\label{T:measure} ({\rm \cite{F3}}) Let $\alpha$ be a faithful complex character of a finite group $\mathsf{G}$. Then the Markov chain $\mathsf{K}$ in \eqref{eq:Mchain} has
as stationary distribution the Plancherel measure
$$\pi(\chi) = \frac{\chi(1)^2}{|\mathsf{G}|}\;\;(\chi \in \mathsf{Irr}(\mathsf{G})).$$
The eigenvalues of $\mathsf{K}$ are $\alpha(c)/\alpha(1)$ as $c$ runs over a set $\mathcal C$ of conjugacy class representatives of $\mathsf{G}$. The corresponding right
(left) eigenvectors have as their $\chi$th-coordinates:
$$\mathsf{r}_c(\chi) = \frac{\chi(c)}{\chi(1)}, \qquad \mathsf{\ell}_c(\chi) = \frac{\chi(1)\overbar{\chi(c)}} {|\mathsf{C}_\mathsf{G}(c)|} = |c^G|\, \pi(\chi)\overbar{\mathsf{r}_c(\chi)},$$
where $|c^G|$ is the size of the conjugacy class of $c$, and $\mathsf{C}_\mathsf{G}(c)$ is the centralizer subgroup of $c$ in $\mathsf{G}$.
The chain is reversible if and only if $\alpha$ is real.
\end{thm}
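Theorem \ref{T:measure} is easy to verify numerically on a small example; the sketch below (our own encoding) hard-codes the character table of the symmetric group $S_3$, takes $\alpha$ to be the faithful two-dimensional character, and checks stochasticity, the stationary distribution, and the eigenvalues:

```python
import numpy as np

# Character table of S_3; columns index the classes {e, (12), (123)}
chars = np.array([[1.0, 1, 1],    # trivial
                  [1, -1, 1],     # sign
                  [2, 0, -1]])    # standard (faithful)
sizes = np.array([1, 3, 2])       # conjugacy class sizes; |G| = 6
dims = chars[:, 0]
alpha = chars[2]

def inner(psi, phi):  # <psi, phi> = |G|^{-1} sum_c |c| psi(c) conj(phi(c))
    return (sizes * psi * np.conj(phi)).sum() / 6

K = np.array([[inner(alpha * chars[i], chars[j]) * dims[j] / (alpha[0] * dims[i])
               for j in range(3)] for i in range(3)])

assert np.allclose(K.sum(axis=1), 1)      # K is a stochastic matrix
pi = dims**2 / 6                          # Plancherel measure
assert np.allclose(pi @ K, pi)            # stationarity
eigs = sorted(np.linalg.eigvals(K).real)  # should be the ratios alpha(c)/alpha(1)
assert np.allclose(eigs, sorted(alpha / alpha[0]))
```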
We study a natural extension to the modular case, where $p$ divides $|\mathsf{G}|$ for $p$ a prime, and work over an algebraically closed field $\mathbb{k}$ of characteristic $p$. Let $\varrho_0,\varrho_1,\ldots, \varrho_r$ be (representatives of equivalence classes of) the irreducible $p$-mo\-du\-lar representations of $\mathsf{G}$, with corresponding Brauer characters $\chi_0,\chi_1,\ldots,\chi_r$, and let $\alpha$ be a faithful $p$-modular representation. The tensor product $\varrho_i \otimes \alpha$ need not decompose as a direct sum of irreducible summands, but we can still choose an irreducible composition factor with probability
proportional to its multiplicity times its dimension. We find that a parallel result holds (see Proposition \ref{basicone}).
It turns out that the stationary distribution is
$$\pi(\chi) = \frac{\mathsf{p}_{\chi}(1) \, \chi(1)}{| \mathsf{G} |},$$
where $\mathsf{p}_{\chi}$ is the Brauer character of the projective indecomposable module associated to the irreducible Brauer character $\chi$. Moreover, the eigenvalues are the
Brauer character ratios $\alpha(c)/\alpha(1)$, where now $c$ runs through the conjugacy class representatives of $p$-regular elements of $\mathsf{G}$. The chain is usually not reversible; the
right eigenvectors come from the irreducible Brauer characters, and the left eigenvectors come from the associated projective characters.
A tutorial on the necessary representation theory is included in Appendix II (Section \ref{append2}); we also include a tutorial on basic Markov chain theory in Appendix I (Section \ref{append1}).
Here are four motivations for the present study:
\medskip
\noindent (a) \emph{Construction of irreducibles}. \ Given a group $\mathsf{G}$ it is not at all clear how to construct its character table. Indeed, for many groups
this is a provably intractable problem. For example, for the symmetric group on $n$ letters, deciding if an irreducible character at a general conjugacy class is zero or not is NP-complete (by reduction to a knapsack problem in \cite{PP}). A classical theorem of Burnside-Brauer \cite{Bu, Br} (see \cite[19.10]{JL}) gives a frequently used route: \ Take a faithful character $\alpha$ of $\mathsf{G}$. Then all irreducible characters appear in the tensor powers $\alpha^k$, where $1 \leq k \leq \upsilon$ (or $0 \leq k \leq \upsilon-1$, alternatively) and $\upsilon$ can be taken as the number of distinct character values
$\alpha(g)$. This is exploited in \cite{U}, which contains the most frequently used algorithm for computing character tables and is a basic tool of computational group theory.
Theorem \ref{T:measure} above refines this description by showing what proportion of
times each irreducible occurs. Further, the analytic estimates available can substantially decrease the maximum number of tensor powers needed. For example,
if $\mathsf{G} = \mathsf{PGL}_n(q)$ with $q$ fixed and $n$ large, and $\alpha$ is the permutation character of the group action on lines, then $\alpha$ takes at least the order of $n^{q-1}/((q-1)!)^2$
distinct values, whereas Fulman \cite[Thm. 5.1]{F3} shows that the Markov chain is close to stationary in $n$ steps. In \cite{BM}, Benkart and Moon use
tensor walks to determine information about the centralizer algebras and invariants of tensor powers $\alpha^k$ of faithful characters $\alpha$ of a finite group.
\medskip
\noindent (b) \emph{Natural Markov chains}. \ Sometimes the Markov chains resulting from tensor products are of independent interest, and their explicit diagonalization (due to the
availability of group theory) reveals sharp rates of convergence to stationarity. A striking example occurs in one of the first appearances of tensor product chains in this context,
the Eymard-Roynette walk on $\mathrm{SU}_2(\mathbb{C})$ \cite{ER}. The tensor product Markov chains make sense for compact groups (and well beyond).
The ordinary irreducible representations for $\mathrm{SU}_2(\mathbb{C})$ are indexed by $ \mathbb N \cup \{0\} =\{0,1,2, \ldots\}$, where
the corresponding dimensions of the irreducibles are $1,2,3, \ldots$ . Tensoring with the two-dimensional representation
gives a Markov chain on $\mathbb N \cup \{0\}$ with transition kernel
\begin{equation}\label{eq:SU2} \mathsf{K}(i,i-1) = \half\left(1 - \frac{1}{i+1}\right) \ \ (i \ge 1), \quad \mathsf{K}(i,i+1) = \half\left(1 + \frac{1}{i+1}\right) \ \ (i \ge 0). \end{equation}
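This kernel is exactly the dimension-weighted choice among the Clebsch-Gordan summands of $(i+1)$-dimensional $\otimes$ $2$-dimensional, which have dimensions $i$ and $i+2$; a quick check (function name ours):

```python
# From state i (an (i+1)-dimensional irreducible), tensoring with the
# 2-dimensional irreducible gives summands of dimensions i and i+2;
# choosing one with probability dim/(2(i+1)) reproduces the kernel.
def K(i, j):
    if j == i + 1:
        return (i + 2) / (2 * (i + 1))
    if j == i - 1 and i >= 1:
        return i / (2 * (i + 1))
    return 0.0

for i in range(1, 20):
    assert abs(K(i, i - 1) - 0.5 * (1 - 1 / (i + 1))) < 1e-12
    assert abs(K(i, i + 1) - 0.5 * (1 + 1 / (i + 1))) < 1e-12
    assert abs(K(i, i - 1) + K(i, i + 1) - 1) < 1e-12
assert K(0, 1) == 1.0  # the chain reflects at 0
```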
This birth/death chain arises in several contexts. Eymard-Roynette \cite{ER} use the group analysis to show results such as the following: there exists a constant $\textsl{\footnotesize C}$ such that, as $n\to \infty$,
\begin{equation}\label{eq:probER} \it{p}\left\{\frac{X_n}{\sqrt{\textsl{\footnotesize C}n}}\le x\right \} \sim \sqrt{\frac{2}{\pi}}\int_{0}^x y^2 \mathsf{e}^{-y^2/2} dy, \end{equation}
where $X_n$ denotes the state at time $n$ of the tensor product chain started from $0$. The hypergroup community
has substantially extended these results. See \cite{GR}, \cite{Bl}, \cite{RV} for pointers. Further details are in our Section \ref{2c}.
In a different direction, the Markov chain \eqref{eq:SU2} was discovered by Pitman \cite{P} in his work on the $2M-X$ theorem. A splendid account is in \cite{Legal}.
Briefly, consider a simple symmetric random walk on $\mathbb{Z}$ starting at 1. The conditional distribution of this walk, conditioned not to hit 0, is precisely
\eqref{eq:SU2}. Rescaling space by $1/\sqrt{n}$ and time by $1/n$, the random walk converges to Brownian motion, and the Markov chain \eqref{eq:SU2}
converges to a Bessel(3) process (radial part of 3-dimensional Brownian motion). Pitman's construction gives a probabilistic proof of results of Williams:
Brownian motion conditioned never to hit zero is distributed as a Bessel(3) process. This work has spectacular extensions to higher dimensions in the
work of Biane-Bougerol-O'Connell (\cite{BiBO1}, \cite{BiBO2}). See \cite[final chapter]{GKR} for earlier work on tensor walks, and references \cite{Bi1}, \cite{Bi2} for the relation to `quantum random walks'. Connections to fusion coefficients can be found in \cite{deF}, and extensions to random walks on root systems appear in \cite{LLP} for affine root systems and in \cite{BdF} for more general Kac-Moody root systems.
The literature on related topics is extensive.
In Section \ref{3b}, we show how finite versions of these walks arise from the modular representations of $\mathsf{SL}_2(p)$. Section \ref{quant} shows how they
arise from quantum groups at roots of unity. The finite cases offer many extensions and suggest myriad new research areas. These sections have their
own introductions, which can be read now for further motivation.
All of this illustrates our theme: \ \emph{Sometimes tensor walks are of independent interest.}
\medskip
\noindent (c) \emph{New analytic insight}. \ Use of representation theory to give sharp analysis of random walks on groups has many successes. It led
to the study of cut-off phenomena \cite{DSh}. The study of `nice walks' and comparison theory \cite{DSa1}
allows careful study of
`real walks'. The attendant analysis of character ratios has widespread use for other group theory problems (see for example \cite{BLST}, \cite{LST}). The present walks yield a collection of
fresh examples. The detailed analysis of Sections \ref{basics1}--\ref{sl3psec} highlights new behavior; remarkable cancellation occurs, calling for a detailed handle on the eigenstructure.
In the quantum group case covered in Section \ref{quant}, the Markov chains are not diagonalizable, $\underline{\text {but}}$ the Jordan blocks of the transition matrix have bounded size, and an analysis using
generalized eigenvectors is available. This is the first natural example we have seen with these ingredients.
\medskip
\noindent (d) \emph{Interdisciplinary opportunities}. \ Modular representation theory is an extremely deep subject with applications within group theory,
number theory, and topology. We do not know applications outside those areas and are pleased to see its use in probability. We hope the present project
and its successors provide an opportunity for probabilists and analysts to learn some representation theory (and conversely).
\medskip
The outline of this paper follows: Section \ref{litrev} gives a literature review. Section \ref{basics1} presents a modular version of Theorem \ref{T:measure} and the first example
$\mathsf{SL}_2(p)$.
Section \ref{sl2p2sec} treats $\mathsf{SL}_2(p^2)$, Section \ref{sl22nsec} features $\mathsf{SL}_2(2^n)$, and Section \ref{sl3psec} considers $\mathsf{SL}_3(p)$. In Section \ref{quant}, we examine the case of quantum $\mathsf{SL}_2$ at a root of unity. Finally, two appendices (Sections \ref{append1} and \ref{append2}) provide introductory information about Markov chains and modular representations.
\subsubsection*{Acknowledgments} We acknowledge the support of the National Science Foundation under Grant No. DMS-1440140 while in residence at the Mathematical Sciences Research Institute (MSRI) in Berkeley, California, during the Spring 2018 semester. The first author acknowledges the support of the NSF grant DMS-1208775, and the fourth author acknowledges the support of the NSF
grant DMS-1840702. We also thank Phillipe Bougerol, Valentin Buciumas, Daniel Bump, David Craven, Manon deFosseux, Marty Isaacs, Sasha Kleshchev, Gabriel Navarro, Neil O'Connell, and Aner Shalev for helpful discussions.
Kay Magaard worked with all of us during our term at MSRI, and we will miss his enthusiasm and insights.
\medskip
\section{Literature review and related results} \label{litrev}
This section reviews connections between tensor walks and (a) the McKay Correspondence, (b) hypergroup random walks, (c) chip firing, and (d) the distribution of character ratios.
\medskip
\subsection{McKay Correspondence}\label{2a} \ We begin with a well-known example.
\begin{example}{\rm For $n\ge 2$ let $\mathsf{BD}_n$ denote the {\it binary dihedral} group
\[
\mathsf{BD}_n = \langle a,x\;|\;a^{2n}=1, x^2=a^n, x^{-1}ax = a^{-1} \rangle
\]
of order $4n$.
This group has $n+3$ conjugacy classes, with representatives $1,x^2,x,xa$ and $a^j\,(1\le j\le n-1)$. It has 4 linear characters and $n-1$ irreducible characters of degree 2; the character table appears in Table \ref{chBD}.
\begin{table}[h]
\caption{Character table of $\mathsf{BD}_n$}
\label{chBD}
\[ {\small \begin{tabular}[t]{|c||c|c|c|c|c|}
\hline
& $1$ & $x^2$ & $a^j\ (1\le j\le n-1)$ & $x$ & $xa$ \\
\hline \hline
$\lambda_{1}$ & $\,1$ &$\;1$&$1$&$\;1$&$\;1$ \\ \hline
$\lambda_{2}$ & $1$&$\;1$&$1$&$-1$&$-1$ \\ \hline
$\lambda_{3}\,(n \hbox{ even})$ & $\,1$&$\;1$&$(-1)^j$&$\;1$&$-1$ \\ \hline
$\lambda_{4}\,(n \hbox{ even})$ & $\,1$&$\;1$&$(-1)^j$&$-1$&$\;1$ \\ \hline
$\lambda_{3}\,(n \hbox{ odd})$ & $\,1$&$-1$&$(-1)^j$&$\;i$&$-i$ \\ \hline
$\lambda_{4}\,(n \hbox{ odd})$ & $\,1$&$-1$&$(-1)^j$&$-i$&$\; i$\\ \hline
$\chi_r\,(1\le r\le n-1)$ & $\,2$ & $2\,(-1)^r$ & $2\cos\left(\frac{\pi jr}{n}\right)$ & $\;0$ & $\;0$ \\
\hline
\end{tabular}}\]
\end{table}
Consider the random walk (\ref{eq:Mchain}) given by tensoring with the faithful character $\chi_1$. Routine computations give
\[
\begin{array}{l}
\lambda_1\chi_1 = \lambda_2\chi_1 = \chi_1,\; \; \lambda_3\chi_1 = \lambda_4\chi_1 = \chi_{n-1}, \\
\chi_r\chi_1 = \chi_{r-1}+\chi_{r+1}\quad (2\le r \le n-2), \\
\chi_1^2 = \chi_2 + \lambda_1+\lambda_2, \\
\chi_{n-1}\chi_1 = \chi_{n-2} + \lambda_3+\lambda_4.
\end{array}
\]
Thus, the Markov chain (\ref{eq:Mchain}) can be seen as a simple random walk on the following graph (weighted as in (\ref{eq:Mchain})),
where nodes designated with a prime ${}^\prime$ correspond to the characters $\lambda_j$, $j=1,2,3,4$,
and the other nodes label the characters $\chi_r$ ($1\le r \le n-1$).
\medskip
\tikzstyle{rep}=[circle,
thick,
minimum size=.6cm, inner sep=0pt,
draw= black,
fill=black!15]
\tikzstyle{norep}=[circle,
thick,
minimum size=.6cm, inner sep=0pt,
draw= white,
fill=white]
\begin{figure}[h]
\label{BDn-graph}
$$\begin{tikzpicture}[scale=1,line width=1pt]
\path (5.5,.88) node[norep] (D0) {\bf {\color{magenta} (1${}^\prime$)}};
\path (7,.88) node[norep] (D1) {\bf{\color{magenta} (1)}};
\path (8.5,.88) node[norep] (D2) {\bf{\color{magenta} (2)}};
\path (10,.88) node[norep] (Dd) {$\cdots$};
\path (11.5,.88) node[norep] (Dn2){{\color{magenta}(\bf{n--2})}};
\path (13,.88) node[norep] (Dn1) {{\color{magenta} (\bf{n--1})}};
\path (14.5,.88) node[norep] (Dn){\bf {\color{magenta} (4${}^\prime$)}};
\path (7,-.62) node[norep](D0p) {\bf {\color{magenta} (2${}^\prime$)}};
\path (13,-.62)node[norep](Dnp){\bf {\color{magenta} (3${}^\prime$)}}; \path
(D0) edge[thick] (D1)
(D1) edge[thick] (D2)
(D2) edge[thick] (Dd)
(Dd) edge[thick] (Dn2)
(Dn2) edge[thick] (Dn1)
(Dn1) edge[thick] (Dn)
(D1) edge[thick] (D0p)
(Dn1) edge[thick] (Dnp);
\end{tikzpicture}$$
\caption{McKay graph for the binary dihedral group $\mathsf{BD}_n$}
\end{figure}
\bigskip
For example, when $n = 4$, the transition matrix is
\[\bordermatrix{&\lambda_1&\lambda_2&\chi_1 &\chi_2&\chi_3&\lambda_3&\lambda_4\cr
\lambda_1 &0&0&1&0 &0&0&0 \cr
\lambda_2 &0&0&1&0 &0&0&0 \cr
\chi_1& \frac{1}{4} & \frac{1}{4} &0&\half&0&0&0\cr
\chi_2& 0 &0 &\half&0 &\half&0&0\cr
\chi_3& 0 &0 &0 &\half &0&\frac{1}{4}&\frac{1}{4} \cr
\lambda_3 &0&0&0&0&1&0 &0 \cr
\lambda_4 &0&0&0&0&1&0 &0
}\]
}
\end{example}
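The $n=4$ transition matrix above can be checked mechanically. The short sketch below (illustrative only, not part of the development) verifies that each row is a probability distribution and that the Plancherel measure $\pi(\chi)=\chi(1)^2/|\mathsf{BD}_4|$ is stationary.

```python
from fractions import Fraction as F

# Transition matrix for BD_4 from the bordermatrix above, with states ordered
# (lam1, lam2, chi1, chi2, chi3, lam3, lam4).
h, q = F(1, 2), F(1, 4)
K = [
    [0, 0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0],
    [q, q, 0, h, 0, 0, 0],
    [0, 0, h, 0, h, 0, 0],
    [0, 0, 0, h, 0, q, q],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
]
deg = [1, 1, 2, 2, 2, 1, 1]              # character degrees; |BD_4| = 16
pi = [F(d * d, 16) for d in deg]         # Plancherel measure chi(1)^2 / |G|

assert all(sum(row) == 1 for row in K)   # each row is a probability distribution
piK = [sum(pi[i] * K[i][j] for i in range(7)) for j in range(7)]
assert piK == pi                         # the Plancherel measure is stationary
```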
\medskip
The fact that the above graph is the affine Dynkin diagram of type $\mathsf{D}_{n+2}$ is a particular instance of the celebrated McKay correspondence. The correspondence begins with a faithful character $\alpha$ of
a finite group $\mathsf{G}$. Let $k$ be the number of irreducible characters of $\mathsf{G}$, and define a $k\times k$ matrix $\mathsf{M}$ (the McKay matrix) indexed by the ordinary irreducible characters $\chi_i$ of $\mathsf{G}$ by setting
\begin{equation}\label{eq:McKay} \mathsf{M}_{ij} = \langle \alpha \chi_i, \chi_j\rangle \qquad \text{(the multiplicity of \ $\chi_j$ in $\alpha \chi_i$)}.\end{equation}
The matrix $\mathsf{M}$ can be regarded as the adjacency matrix of a quiver having nodes indexed by the irreducible characters of $\mathsf{G}$ and $\mathsf{M}_{ij}$ arrows
from node $i$ to node $j$. When there is an arrow between $i$ and $j$ in both directions, it is replaced by a single edge (with no arrows). In particular, when $\mathsf{M}$ is symmetric,
the result is a graph. John McKay \cite{M} found that the graphs associated to these matrices, when $\alpha$ is the
natural two-dimensional character of a finite subgroup
of $\mathsf{SU}_2(\mathbb{C})$, are exactly the affine Dynkin diagrams of types $\mathrm {A,D,E}$. The Wikipedia page for `McKay Correspondence' will lead the reader to the widespread
developments from this observation; see in particular \cite{St}, \cite{R}, \cite{B} and the references therein.
There is a simple connection with the tensor walk \eqref{eq:Mchain}.
\begin{lemma}\label{L:KMrel} Let $\alpha$ be a faithful character of a finite group $\mathsf{G}$.
\begin{itemize}
\item[{\rm(a)}] The Markov chain $\mathsf{K}$ of \eqref{eq:Mchain} and the McKay quiver matrix $\mathsf{M}$
of \eqref{eq:McKay} are related by
\begin{equation}\label{eq:KMrel}\mathsf{K} = \frac{1}{\alpha(1)} \mathsf{D}^{-1}\mathsf{M} \mathsf{D}\end{equation}
where $\mathsf{D}$ is a diagonal matrix having the irreducible character degrees $\chi_i(1)$ as diagonal entries.
\item[{\rm (b)}] If $v$ is a right eigenvector of $\mathsf{M}$ corresponding to the eigenvalue $\lambda$, then $\mathsf{D} ^{-1}v$ is a right eigenvector of $\mathsf{K}$ with
corresponding eigenvalue $\frac{1}{\alpha(1)}\lambda$.
\item[{\rm (c)}] If $w$ is a left eigenvector of $\mathsf{M}$ corresponding to the eigenvalue $\lambda$, then $w\mathsf{D}$ is a left eigenvector of $\mathsf{K}$ with
corresponding eigenvalue $\frac{1}{\alpha(1)}\lambda$.
\end{itemize} \end{lemma}
Parts (b) and (c) show that the eigenvalues and eigenvectors of $\mathsf{K}$ and $\mathsf{M}$ are simple functions of each other. In particular, Theorem \ref{T:measure} is implicit in
Steinberg \cite{St}. Of course, our interests are different; we would like to bound the rate of convergence of the Markov chain $\mathsf{K}$ to its stationary distribution $\pi$.
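Relation \eqref{eq:KMrel} can be verified directly in the $\mathsf{BD}_4$ example: undoing the conjugation recovers a symmetric matrix of nonnegative integers, the adjacency matrix of the McKay graph above. A minimal sketch (illustrative only):

```python
from fractions import Fraction as F

h, q = F(1, 2), F(1, 4)
# K for BD_4, states ordered (lam1, lam2, chi1, chi2, chi3, lam3, lam4).
K = [[0,0,1,0,0,0,0], [0,0,1,0,0,0,0], [q,q,0,h,0,0,0], [0,0,h,0,h,0,0],
     [0,0,0,h,0,q,q], [0,0,0,0,1,0,0], [0,0,0,0,1,0,0]]
K = [[F(x) for x in row] for row in K]
deg = [1, 1, 2, 2, 2, 1, 1]     # D = diagonal matrix of character degrees
a1 = 2                          # alpha = chi_1 has degree alpha(1) = 2

# Invert (eq:KMrel):  M = alpha(1) * D K D^{-1}
M = [[a1 * deg[i] * K[i][j] / deg[j] for j in range(7)] for i in range(7)]

assert all(m.denominator == 1 and m >= 0 for row in M for m in row)
assert all(M[i][j] == M[j][i] for i in range(7) for j in range(7))  # a graph
# Degree-weighted row sums recover alpha(1)*deg, as tensoring multiplies dimensions.
assert all(sum(M[i][j] * deg[j] for j in range(7)) == a1 * deg[i] for i in range(7))
```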
In the $\mathrm{BD}_{n}$ example, the `naive' walk using $\mathsf{K}$ has a parity problem. However, if the `lazy' walk is used instead, where at each step the walk stays in place
with probability $\half$ and moves according to $\chi_1$ with probability $\half$, then that problem is solved. Letting $\overbar{\mathsf{K}}$
be the transition matrix for the lazy walk, we prove
\begin{thm}\label{T:dihedral} For the lazy version $\overbar{\mathsf{K}}$ of the Markov chain on $\mathsf{Irr}(\mathrm{BD}_{n})$ starting from
the trivial character $\mathbb{1}=\lambda_1$, multiplying by $\chi_1$ with probability $\half$
and staying in place with probability $\half$ at each step, there are positive universal constants $B,B'$ such that
$$ B \mathsf{e}^{{-2\pi^2 \ell}/{n^2}} \leq \parallel \overbar{\mathsf{K}}^\ell - \pi \parallel_{{}_{\mathsf{TV}}} \leq B' \mathsf{e}^{{-2\pi^2 \ell}/{n^2}}.$$
\end{thm}
In this theorem, $||\overbar{\mathsf{K}}^\ell - \pi ||_{{}_{\mathsf{TV}}} = \half \sum_{\chi \in \mathsf{Irr}(\mathsf{BD}_{n})} \vert \overbar{\mathsf{K}}^\ell(\mathbb{1},\chi)-\pi(\chi)\vert$ is the total variation
distance (see Appendix I, Section \ref{append1}). The result shows that order $n^2$ steps are necessary and sufficient to reach stationarity. The proof can be found in
Appendix I, Section \ref{append1}.
\subsection{Hypergroup walks}\label{2b} \ A {\it hypergroup} is a set $\mathcal{X}$ with an associative product $\chi \ast \psi$ such that $\chi \ast \psi$ is a probability distribution on $\mathcal{X}$ (there are a few other axioms; see \cite{Bl} for example). Given $\alpha \in \mathcal{X}$, a Markov chain on $\mathcal{X}$ can be defined: from $\chi \in \mathcal{X}$, move to $\psi$ chosen from the distribution $\alpha \ast \chi$. As shown below, this notion includes our tensor chains.
Aside from groups, examples of hypergroups include the set of conjugacy classes of a finite group $\mathsf{G}$: if a conjugacy class $\mathcal{C}$ of $\mathsf{G}$ is identified with the corresponding sum $\sum_{c \in \mathcal{C}}c$ in the group algebra, then the product of two conjugacy classes is a nonnegative integer combination of conjugacy classes, and the coefficients can be scaled to be a probability. In a similar way, double coset spaces form a hypergroup. The irreducible representations of a finite group also form a hypergroup under tensor product. Indeed,
let $\mathcal{X} = \mathsf{Irr}(\mathsf{G})$, and consider the normalized characters $\bar \chi = \frac{1}{\chi(1)} \chi$ for $\chi \in \mathcal{X}$. If $\alpha$ is any character, and $\alpha \chi = \sum_{\psi\in \mathcal{X}} \, a_{\psi} \,\psi$ (with $a_{\psi}$ the
multiplicity), then
$$\alpha(1) \chi(1) \overbar{\alpha \chi} = \sum_{\psi \in \mathcal{X}} \,a_{\psi}\, \psi =\sum_{\psi \in \mathcal{X}}\, a_{\psi} \psi(1) \overbar{\psi}$$
so
$$ \overbar{\alpha \chi} = \sum_{\psi \in \mathcal{X}} \frac{\,a_{\psi}\, \psi(1)}{\alpha(1)\chi(1)} \overbar\psi = \sum_{\psi \in \mathcal{X}} \mathsf{K}(\chi,\psi)\,\overbar\psi,$$
yielding the Markov chain \eqref{eq:Mchain}.
Of course, there is work to do in computing the decomposition of tensor products and in doing the analysis required for the asymptotics of high convolution powers. The tensor walk on $\mathsf{SU}_2(\mathbb{C})$ was
pioneered by Eymard-Roynette \cite{ER}, with follow-ups by Gallardo and Reis \cite{GR}, Gallardo \cite{Ga}, and Voit \cite{Vo}, who
proved iterated log fluctuations for the Eymard-Roynette walk. Impressive recent work on higher rank double coset walks is in the paper \cite{RV} by R\"osler and Voit. The treatise of Bloom and Heyer \cite{Bl} contains much further development.
Usually, this community works with infinite hypergroups and natural questions revolve around recurrence/transience and asymptotic behavior. There has been
some work on walks derived from finite hypergroups (see Ross-Xu \cite{RX1,RX2}, Vinh \cite{V}). The present paper shows there is still much to do.
\subsection{Chip firing and the critical group of a graph}\label{2c} \ A marvelous development linking graph theory, classical Riemann surface theory, and topics in number theory
arises by considering certain chip-firing games on a graph. Roughly, each vertex $v$ of a finite, connected simple graph
carries an integer number $f(v)$ of chips ($f(v)$ may be negative). `Firing vertex $v$' means adding $1$ to $f(w)$ for each neighbor $w$ of $v$ and subtracting $\mathsf{deg}(v)$ from $f(v)$. The chip-firing model is a discrete dynamical system classically modeling the distribution
of a discrete commodity on a network. Chip-firing dynamics and the long-term behavior of the
model have been related to many different subjects such as economic models, energy minimization, neuron firing, traffic flow, and so forth.
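In code, a single firing step is the following (the graph and chip counts below are made up purely for illustration):

```python
# One chip-firing step on a finite simple graph given as an adjacency list.
# Firing v sends one chip along each incident edge: f(v) drops by deg(v) and
# every neighbor gains one chip.  (Graph and chip counts are illustrative.)
def fire(adj, f, v):
    g = dict(f)
    g[v] -= len(adj[v])
    for w in adj[v]:
        g[w] += 1
    return g

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # a 4-cycle
f = {0: 3, 1: 0, 2: -1, 3: 1}                        # negative values allowed
g = fire(adj, f, 0)
assert g == {0: 1, 1: 1, 2: -1, 3: 2}
assert sum(g.values()) == sum(f.values())            # chips are conserved
```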
Baker and Norine \cite{BaN} develop
a parallel with the classical theory of compact Riemann surfaces, formulating an appropriate analog of the Riemann-Roch and Abel-Jacobi Theorems for graphs. An excellent textbook introduction to chip firing is the recent \cite{CPe}. A splendid
resource for these developments is the forthcoming book of Levine-Peres \cite{LeP}. See M. Matchett Wood \cite{W} for connections to number theory.
A central object in this development is the {\it critical group} of the graph. This is a finite abelian group which can be identified as $\mathbb{Z}^{|V|-1}/\mathrm{im}(\widetilde{L})$, the cokernel of the reduced graph Laplacian $\widetilde{L}$ (delete a row and matching column from the Laplacian matrix), with $|V|$ the number of vertices.
Baker-Norine identify the critical group as the Jacobian of the graph.
Finding `nice graphs' where the critical group is explicitly describable is a natural activity. In \cite{BKR}, Benkart, Klivans, and Reiner work
with what they term the `McKay-Cartan' matrix $\mathsf{C} = \alpha(1) \mathrm{I}-\mathsf{M}$ rather than the Laplacian, where
$\mathsf{M}$ is the McKay matrix determined by the irreducible characters $\mathsf{Irr}(\mathsf{G})$ of a finite group $\mathsf{G}$,
and $\alpha$ is a distinguished character.
They exactly identify the associated critical group and show that the reduced matrix $\widetilde{\mathsf{C}}$ obtained by deleting the row and column
corresponding to the trivial character is always avalanche finite (chip firing stops).
In the special case that the graph is a (finite) Dynkin diagram, the reduced matrix $\widetilde{\mathsf{C}}$ is the corresponding Cartan matrix, and the various chip-firing notions have nice interpretations as Lie theory concepts. See also \cite{G} for further information about the critical group in this setting.
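For the $\mathsf{BD}_4$ example above, the reduced McKay-Cartan matrix can be computed and tested. The sketch below uses the standard characterization of nonsingular M-matrices by positive leading principal minors as a stand-in for avalanche finiteness; that criterion, and the determinant value $4$ for the type $\mathsf{D}$ Cartan matrix, are assumptions of this illustration, not statements from \cite{BKR}.

```python
# McKay-Cartan matrix C = alpha(1)*I - M for BD_4 with alpha = chi_1, in the
# ordering (lam1, lam2, chi1, chi2, chi3, lam3, lam4); M is the adjacency
# matrix of the McKay graph for BD_4 given earlier.
M = [[0,0,1,0,0,0,0], [0,0,1,0,0,0,0], [1,1,0,1,0,0,0], [0,0,1,0,1,0,0],
     [0,0,0,1,0,1,1], [0,0,0,0,1,0,0], [0,0,0,0,1,0,0]]
C = [[2 * (i == j) - M[i][j] for j in range(7)] for i in range(7)]
Ct = [row[1:] for row in C[1:]]   # delete trivial-character row and column

def det(A):                       # Laplace expansion; fine at this size
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

# Positivity of all leading principal minors characterizes nonsingular
# M-matrices among Z-matrices -- used here as a proxy for avalanche finiteness.
minors = [det([r[:k] for r in Ct[:k]]) for k in range(1, 7)]
assert all(x > 0 for x in minors)
assert minors[-1] == 4            # det of the D_6 Cartan matrix
```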
An extension of this work by Grinberg, Huang, and Reiner \cite{GHR}
is particularly relevant to the present paper. They consider modular representations of a finite group
$\mathsf{G}$, where the characteristic is $p$ and $p$ divides $|\mathsf{G}|$, defining an analog of the McKay matrix (and the McKay-Cartan matrix $\mathsf{C}$) using composition factors, just as
we do in Section \ref{basics1}. They extend considerations to finite-dimensional Hopf algebras such as restricted enveloping algebras and finite quantum groups.
In a natural way, our results in Section \ref{quant} on quantum groups at roots of unity answer some questions they pose. Their primary interest is in the associated
critical group. The dynamical Markov problems we study go in an entirely different direction. They show that the Brauer characters
(both simple and projective) yield eigenvalues and left and right eigenvectors (see Proposition \ref{basicone}). Our version of the theory is developed from first principles in
Section \ref{basics1}.
Pavel Etingof has suggested modular tensor categories or the $\mathbb{Z}_+$-modules of \cite[Chap.~3]{EGNO} as a natural further generalization, but
we do not explore that direction here.
\subsection{Distribution of character ratios}\label{2d} \ Fulman \cite{F3} developed the Markov chain \eqref{eq:Mchain}
on $\mathsf{Irr}(\mathsf{G})$ for yet different purposes, namely, probabilistic combinatorics. One way to understand a set of objects is to pick one at random and study its properties. For $\mathsf{G}
= \mathsf{S}_n$, the symmetric group on $n$ letters, Fulman studied
`pick $\chi \in \mathsf{Irr}(\mathsf{G})$ from the Plancherel measure'. Kerov had shown that for a fixed conjugacy class representative $c \ne 1$ in
$\mathsf{S}_n$, $\chi(c)/\chi(1)$ has an approximate
normal distribution -- indeed, a multivariate normal distribution when several fixed conjugacy classes are considered. A wonderful exposition of this work is
in Ivanov-Olshanski \cite{IO}. The authors proved normality by computing moments. However, this does not lead to error estimates.
Fulman used `Stein's method' (see \cite{CGS}), which calls for an exchangeable pair $(\chi,\chi')$ marginally distributed as Plancherel measure.
Equivalently, choose $\chi$ from Plancherel measure and then $\chi'$ from a Markov kernel $\mathsf{K}(\chi,\chi')$ with Plancherel measure a stationary distribution.
This led to \eqref{eq:Mchain}. The explicit diagonalization was crucial in deriving the estimates needed for Stein's method.
Along the way, `just for fun,' Fulman gave sharp bounds for two examples of rates of convergence: \ tensoring the irreducible characters $\mathsf{Irr}(\Sf_n)$ with the $n$-dimensional permutation representation and tensoring the irreducible representations of
$\mathsf{SL}_n\left(p\right)$ with the permutation representation on lines. In each case he found the cut-off phenomenon with explicit constants.
In retrospect, any of the Markov chains in this paper could be used with Stein's method to study Brauer character analogs. There is work to
do, but a clear path is available.
\medskip
\noindent \emph{Final remarks}. \ The decomposition of tensor products is a well-known difficult subject, even for ordinary characters of the symmetric
group (the Kronecker problem). A very different set of problems about the asymptotics of decomposing tensor products is considered in Benson and Symonds \cite{BSy}.
For the fascinating difficulties of decomposing tensor products of tilting modules (even for $\mathsf{SL}_3(\mathbb{k})$), see Lusztig-Williamson \cite{LuW1, LuW2}.
\section{Basic setup and first examples}\label{basics1}
In this section we prove some basic results for tensor product Markov chains in the modular case, and work out sharp rates of convergence for the groups $\mathsf{SL}_2(p)$ with respect to tensoring with the natural two-dimensional module and also with the Steinberg module. Several analogous chains where the same techniques apply are laid out in Sections \ref{sl2p2sec}--\ref{sl3psec}. Some basic background material on Markov chains can be found in Appendix I (Section \ref{append1}), and on modular representations in Appendix II (Section \ref{append2}).
\subsection{Basic setup}\label{3a}
Let $\mathsf{G}$ be a finite group, and let $\mathbb{k}$ be an algebraically closed field of characteristic $p$.
Denote by $\mathsf{G}_{p'}$ the set of $p$-regular elements of $\mathsf{G}$, and by $\mathcal{C}$ a set of representatives of the $p$-regular conjugacy classes in $\mathsf{G}$. Let
$\mathsf{IBr}(\mathsf{G})$ be the set of irreducible Brauer characters of $\mathsf{G}$ over $\mathbb{k}$.
We shall abuse notation by denoting the irreducible $\mathbb{k}\mathsf{G}$-module with Brauer character $\chi$ also by $\chi$.
For $\chi \in \mathsf{IBr}(\mathsf{G}) $, and a $\mathbb{k}\mathsf{G}$-module with Brauer character $\varrho$, let $\langle \chi, \varrho\rangle$ denote the multiplicity of $\chi$ as a composition factor of $\varrho$. Let $\mathsf{p}_{\chi}$ be the Brauer character of the projective indecomposable cover of $\chi$.
Then if $\chi \in \mathsf{IBr}(\mathsf{G})$ and $\varrho$ is the Brauer character of any finite-dimensional $\mathbb k\mathsf{G}$-module,
\begin{equation*}
\langle\chi,\varrho\rangle = \frac{1}{|\mathsf{G}|} \sum_{g\in \mathsf{G}_{p'}}\mathsf{p}_{\chi}(g)\overbar{\varrho(g)} = \frac{1}{|\mathsf{G}|} \sum_{g\in \mathsf{G}_{p'}}\overbar{\mathsf{p}_{\chi}(g)}\varrho(g).
\end{equation*}
The orthogonality relations (see \cite[pp.~201, 203]{Webb}) say for $\varrho \in \mathsf{IBr}(\mathsf{G})$, $g \in \mathsf{G}_{p'}$,
and $c$ a $p$-regular element that
\begin{equation}\label{row} \langle \chi,\varrho \rangle = \begin{cases} 0 & \quad \text{ if } \ \ \chi \not \cong \varrho, \\
1 & \quad \text{ if } \ \ \chi \cong \varrho. \end{cases} \end{equation}
\begin{equation}\label{col}
\sum_{\chi \in \mathsf{IBr}(\mathsf{G})}\mathsf{p}_{\chi}(g)\overbar{\chi (c)} = \begin{cases} 0 & \quad \text{ if }\ \ g \not \in c^G, \\
|\mathsf{C}_\mathsf{G}(c)| & \quad \text{ if }\ \ g \in c^G,
\end{cases}
\end{equation}
where $c^G$ is the conjugacy class of $c$, and $\mathsf{C}_\mathsf{G}(c)$ is the centralizer of $c$ in $\mathsf{G}$.
Fix a faithful $\mathbb{k}\mathsf{G}$-module with Brauer character $\alpha$, and define a Markov chain on $\mathsf{IBr}(\mathsf{G})$ by moving from $\chi$ to $\chi'$ with probability proportional to the product of $\chi'(1)$ with the multiplicity of $\chi'$ in $\chi \otimes \alpha$, that is,
\begin{equation}\label{eq:Mchain2}
\mathsf{K}(\chi, \chi') = \frac{ \langle \chi',\chi \otimes \alpha \rangle \chi'(1)}{\alpha(1) \chi(1)}.
\end{equation}
As usual, denote by $\mathsf{K}^\ell$ the transition matrix of this Markov chain after $\ell$ steps.
\begin{prop}\label{basicone} For the Markov chain in $(\ref{eq:Mchain2})$, the following hold.
\begin{itemize}
\item[{\rm (i)}] The stationary distribution is
\[
\pi(\chi) = \ \frac{\mathsf{p}_{\chi}(1) \chi(1)} {|\mathsf{G}|} \quad \left(\chi \in \mathsf{IBr}(\mathsf{G})\right).
\]
\item[{\rm (ii)}] The eigenvalues are $\alpha(c)/\alpha(1)$, where $c$ ranges over a set ${\mathcal C}$ of representatives of the $p$-regular conjugacy classes of $\mathsf{G}$.
\item[{\rm (iii)}] The right eigenfunctions are $\mathsf{r}_c$ ($c \in {\mathcal C}$), where for $\chi \in \mathsf{IBr}(\mathsf{G})$,
\[
\mathsf{r}_c(\chi) = \frac{\chi(c)}{\chi(1)}.
\]
\item[{\rm (iv)}] The left eigenfunctions are $\ell_c$ ($c \in {\mathcal C}$), where for $\chi \in \mathsf{IBr}(\mathsf{G})$,
\[
\ell_c(\chi) = \frac{\overbar{\mathsf{p}_{\chi}(c)}\chi(1)}{|\mathsf{C}_\mathsf{G}(c)|}.
\]
Moreover, \ $\ell_{1}(\chi) = \pi(\chi)$,\, $\mathsf{r}_{1}(\chi) = 1$, and for $c,c' \in \mathcal{C}$,
\[
\sum_{\chi \in \mathsf{IBr}(\mathsf{G})}\ell_c(\chi)\overbar{\mathsf{r}_{c'}(\chi)} = \delta_{c,c'}.
\]
\item[{\rm (v)}] For $\ell \ge 1$,
\[
\mathsf{K}^\ell(\chi,\chi') = \sum_{c\in {\mathcal C}} \left(\frac{\overbar{\alpha(c)}}{\alpha(1)}\right)^\ell \overbar{\mathsf{r}_c(\chi)}\,\ell_c(\chi').
\]
In particular, for the trivial character $\mathbb{1}$ of $\mathsf{G}$,
\[
\frac{\mathsf{K}^\ell(\mathbb{1},\chi')}{\pi(\chi')}-1 = \sum_{c\ne 1} \left(\frac{\overbar {\alpha(c)}}{\alpha(1)}\right)^\ell
\frac{\overbar{\mathsf{p}_{\chi'}(c)}}{\mathsf{p}_{\chi'}(1)}\, |c^G|.
\]
\end{itemize}
\end{prop}
\begin{proof} (i) \ Define $\pi$ as in the statement. Then summing over $\chi \in \mathsf{IBr}(\mathsf{G})$ gives
\begin{align*}
\sum_\chi \pi(\chi)\mathsf{K}(\chi,\chi') & = \frac{1}{|\mathsf{G}|}\sum_\chi \frac{\mathsf{p}_{\chi}(1)\,\chi(1)\,\langle \chi', \chi \otimes \alpha \rangle\chi'(1)}{\chi(1)\alpha(1)}\\
&= \frac{\chi'(1)}{|\mathsf{G}|\,\alpha(1)}\sum_\chi \mathsf{p}_{\chi}(1)\langle\chi',\chi \otimes \alpha\rangle \\
& = \frac{\chi'(1)}{|\mathsf{G}|\,\alpha(1)} \langle \chi',\left(\textstyle{\sum_\chi} \mathsf{p}_{\chi}(1)\chi\right)\otimes \alpha\rangle \\
& = \frac{\chi'(1)}{|\mathsf{G}|\,\alpha(1)} \langle\chi', \mathbb{k}\mathsf{G}\otimes \alpha\rangle\quad \text{ as } \ \ \mathsf{p}_{\chi}(1) = \langle \chi,\mathbb{k}\mathsf{G}\rangle \\
& = \frac{\chi'(1)}{|\mathsf{G}|\,\alpha(1)}\, \alpha(1)\langle\chi', \mathbb{k}\mathsf{G} \rangle \quad \text{ as } \ \ \mathbb{k}\mathsf{G} \otimes \alpha \cong (\mathbb{k}\mathsf{G})^{\oplus \alpha(1)} \\
& = \frac{\chi'(1)\mathsf{p}_{\chi'}(1)}{|\mathsf{G}|} = \pi(\chi'). \end{align*}
This proves (i).
(ii) and (iii) \ Define $\mathsf{r}_c$ as in (iii).
Summing over $\chi' \in \mathsf{IBr}(\mathsf{G})$ and using the orthogonality relations (\ref{row}), (\ref{col}), we have
\begin{align*}
\sum_{\chi'} \mathsf{K}(\chi,\chi')\mathsf{r}_c(\chi') & = \frac{1}{\chi(1)\alpha(1)} \sum_{\chi'} \chi'(c) \langle\chi',\chi\otimes \alpha\rangle \\
& = \frac{1}{\chi(1)\alpha(1)} \sum_{\chi'} \chi'(c) \frac{1}{|\mathsf{G}|}\sum_{g\in \mathsf{G}_{p'}} \mathsf{p}_{\chi'}(g)\overbar{\chi(g)}\,\overbar{\alpha(g)} \\
& = \frac{1}{\chi(1)\alpha(1)|\mathsf{G}|} \sum_g \overbar {\chi(g)}\,\overbar{\alpha(g)} \sum_{\chi'}\mathsf{p}_{\chi'}(g) \overbar{\chi'(c^{-1})} \\
& = \frac{1}{\chi(1)\alpha(1)|\mathsf{G}|} |\mathsf{C_G}(c)| \sum_{g^{-1}\in c^G} \overbar{\chi(g)}\,\overbar{\alpha(g)} \quad \text{by }(\ref{col}) \\
& = \frac{1}{\chi(1)\alpha(1)}\chi(c)\alpha(c) \\
& = \frac{\alpha(c)}{\alpha(1)}\mathsf{r}_c(\chi).
\end{align*}
This proves (ii) and (iii).
(iv) Define $\ell_c$ as in (iv), and sum over $\chi \in \mathsf{IBr}(\mathsf{G})$:
\begin{align*}
\sum_{\chi} \ell_c(\chi) \mathsf{K}(\chi,\chi') & = \frac{\chi'(1)}{\alpha(1)|\mathsf{C_G}(c)|} \sum_{\chi} \overbar{\mathsf{p}_{\chi}(c)} \langle\chi',\chi\otimes \alpha\rangle \\
& = \frac{\chi'(1)}{\alpha(1)|\mathsf{C_G}(c)|} \sum_{\chi} \overbar{\mathsf{p}_{\chi}(c)} \frac{1}{|\mathsf{G}|}\sum_{g\in \mathsf{G}_{p'}} \mathsf{p}_{\chi'}(g)\overbar {\chi(g)}\,\overbar{\alpha(g)} \\
& = \frac{\chi'(1)}{\alpha(1) |\mathsf{C_G}(c)||\mathsf{G}|} \sum_g \mathsf{p}_{\chi'}(g)\overbar{\alpha(g)} \overbar{\sum_{\chi}\mathsf{p}_{\chi}(c)\, \overbar{{\chi(g^{-1})}}} \\
& = \frac{\chi'(1)}{\alpha(1)|\mathsf{G}|} \sum_{g^{-1}\in c^G}\mathsf{p}_{\chi'}(g)\overbar{\alpha(g)} \ \ \text{by }(\ref{col}) \\
& = \frac{\alpha(c)}{\alpha(1)|\mathsf{G}|}\overbar{\mathsf{p}_{\chi'}(c)}\chi'(1)|c^G| = \frac{\alpha(c)}{\alpha(1)}\frac{\overbar{\mathsf{p}_{\chi'}(c)}\chi'(1)}{|\mathsf{C_G}(c)|} \\
& = \frac{\alpha(c)}{\alpha(1)}\ell_c(\chi').
\end{align*}
The relations $\ell_1(\chi) = \pi(\chi)$ and $\mathsf{r}_1(\chi) = 1$ follow from the definitions, and the
fact that $\sum_{\chi \in \mathsf{IBr}(\mathsf{G})}\ell_c(\chi)\overbar{\mathsf{r}_{c'}(\chi)} = \delta_{c,c'}$ for $c,c' \in \mathcal C$ is a direct consequence of \eqref{col}. This proves (iv).
\vspace{2mm}
(v) For any function $f: \mathsf{IBr}(\mathsf{G}) \to \mathbb{C}$, we have $f(\chi') = \sum_{c \in {\mathcal C}}a_c \ell_c(\chi')$ with $a_c = \sum_{\chi'}f(\chi')\overbar{\mathsf{r}_c(\chi')}$ by (iv). For fixed $\chi$, apply this to $\mathsf{K}^\ell(\chi,\chi')$ as a function of $\chi'$, to see that
$\mathsf{K}^\ell(\chi,\chi') = \sum_c a_c \ell_c(\chi')$, where
\[
a_c = \sum_{\chi'}\mathsf{K}^\ell(\chi,\chi')\overbar{\mathsf{r}_c(\chi')} = \left(\frac{\overbar{\alpha(c)}}{\alpha(1)}\right)^\ell \overbar{\mathsf{r}_c(\chi)}.
\]
The first assertion in (v) follows, and the second follows by setting $\chi=\mathbb{1}$ and using (i)--(iii).
\end{proof}
\noindent {\bf Remark.} The second formula in part (v) will be the workhorse in our examples, in the following form:
\begin{align}\label{eq:horse}\begin{split}
\parallel\mathsf{K}^\ell(\mathbb{1},\cdot)-\pi\parallel_{{}_{\mathsf {TV}}} & = \frac{1}{2}\sum_{\chi'}|\mathsf{K}^\ell(\mathbb{1},\chi')-\pi(\chi')|\\
& = \frac{1}{2}\sum_{\chi'}\left|\frac{\mathsf{K}^\ell(\mathbb{1},\chi')}{\pi(\chi')}-1\right|\pi(\chi') \\
& \le \frac{1}{2}{\rm max}_{\chi'}\left|\frac{\mathsf{K}^\ell(\mathbb{1},\chi')}{\pi(\chi')}-1\right|.
\end{split}
\end{align}
\subsection{$\mathsf{SL}_2(p)$}\label{3b}
Let $p$ be an odd prime, and let $\mathsf{G} = \mathsf{SL}_2(p)$ of order $p(p^2-1)$. The $p$-modular representation theory of $\mathsf{G}$ is expounded in \cite{Al}: writing $\mathbb k$ for the
algebraic closure of $\mathbb{F}_p$, we have that the irreducible $\mathbb{k}\mathsf{G}$-modules are labelled $\mathsf{V}(a)$ ($0\le a\le p-1$), where $\mathsf{V}(0)$ is the trivial module, $\mathsf{V}(1)$ is the natural two-dimensional module, and $\mathsf{V}(a) = \mathsf{S}^a(\mathsf{V}(1))$, the $a^{th}$ symmetric power, of dimension $a+1$. Denote by $\chi_a$ the Brauer character of $\mathsf{V}(a)$, and by $\mathsf{p}_a :=\mathsf{p}_{\chi_a}$ the Brauer character of the projective indecomposable cover of $\mathsf{V}(a)$. The $p$-regular classes of $\mathsf{G}$ have representatives $\mathbf{1}$, $-\mathbf{1}$, $x^r\,(1\le r\le \frac{p-3}{2})$ and $y^s\,(1\le s\le \frac{p-1}{2})$, where $\mathbf{1}$ is the $2 \times 2$ identity matrix, $x$ and $y$ are fixed elements of $\mathsf{G}$ of orders $p-1$ and $p+1$, respectively; the corresponding centralizers in $\mathsf{G}$ have orders $|\mathsf{G}|$, $|\mathsf{G}|$, $p-1$ and $p+1$. The values of the characters $\chi_a$ and $\mathsf{p}_{a}$ are given in Tables \ref{ci}
and \ref{pi}. In particular, we have $\mathsf{p}_{a}(\mathbf{1}) = p$ for $a=0$ or $p-1$, and $\mathsf{p}_{a}(\mathbf{1})=2p$ for other values of $a$.
Hence by Proposition \ref{basicone}(i), for any faithful $\mathbb{k}\mathsf{G}$-module $\alpha$, the stationary distribution for the Markov chain given by (\ref{eq:Mchain2}) is
\begin{equation}\label{eq:statio}
\pi(\chi_a) =\begin{cases}
\frac{1}{p^2-1} & \quad \text{if} \ \ a=0, \\
\frac{2(a+1)}{p^2-1} & \quad \text{if} \ \ 1\le a\le p-2, \\
\frac{p}{p^2-1}& \quad \text{if} \ \ a=p-1.
\end{cases} \end{equation}
\begin{table}[h]
\caption{Brauer character table of $\mathsf{SL}_2(p)$}
\label{ci}
{\small \begin{tabular}[t]{|c||c|c|c|c|}
\hline
& $\mathbf{1}$ &$ -\mathbf{1}$ & $x^r$ $(1\le r \le \frac{p-3}{2})$ & $y^s$ $(1\le s \le \frac{p-1}{2})$\\
\hline \hline
$\chi_0$ & 1&1&1&1 \\
\hline
$\chi_1$ & $2$ & $-2$ & $2\cos\left(\frac{2\pi r}{p-1}\right)$& $2\cos\left(\frac{2\pi s}{p+1}\right)$ \\ \hline
$\begin{array}{c} \chi_\ell \ (\ell \, \text{\footnotesize even}) \\
\ell \ne {\footnotesize 0,p-1}\end{array}$ &$\ell+1$ &$ \ell+1$ & $1+2\sum_{j=1}^{\frac{\ell}{2}}\cos\left(\frac{4j\pi r}{p-1}\right)$ &
$1+2\sum_{j=1}^{\frac{\ell}{2}} \cos\left( \frac{4j\pi s}{p+1}\right)$ \\
\hline
$\begin{array}{c} \chi_k \ (k \, \text{\footnotesize odd} ) \\
k \ne 1\end{array}$ & $k+1$ & $-(k+1)$ & $2\sum_{j=0}^{\frac{k-1}{2}}\cos\left(\frac{(4j+2)\pi r}{p-1}\right)$ &
$2\sum_{j=0}^{\frac{k-1}{2}}\cos \left(\frac{(4j+2)\pi s}{p+1}\right)$ \\ \hline
$\chi_{p-1}$ & $p$&$p$&$1$&$-1$ \\
\hline
\end{tabular}}
\end{table}
\begin{table}[h]
\caption{Characters of projective indecomposables for $\mathsf{SL}_2(p)$}
\label{pi}
\vspace{-.5cm}
$${\small \begin{tabular}[t]{|c||c|c|c|c|}
\hline
& $\mathbf{1}$ &$ -\mathbf{1}$ & $x^r$ $(1\le r \le \frac{p-3}{2})$ & $y^s$ $(1\le s \le \frac{p-1}{2})$ \\
\hline
\hline
$\mathsf{p}_{0}$ &$p$&$p$&1&$1-2\cos\left(\frac{4\pi s}{p+1}\right)$ \\
\hline
$\mathsf{p}_{1}$ & $2p$& $-2p$ & $2\cos\left(\frac{2\pi r}{p-1}\right)$ & $-2\cos\left(\frac{6\pi s}{p+1}\right)$ \\
\hline
$\mathsf{p}_{2}$ & $2p$& $2p$ & $2\cos\left(\frac{4\pi r}{p-1}\right)$ & $-2\cos\left( \frac{8\pi s}{p+1}\right)$ \\
\hline
$\mathsf{p}_{k} \ (3 \le k\le p-2)$ & $2p$ & $(-1)^k\,2p$ & $2\cos\left(\frac{2k\pi r}{p-1}\right)$ & $-2\cos\left(\frac{(2k+4)\pi s}{p+1}\right)$ \\
\hline
$\mathsf{p}_{{p-1}}$ & $p$&$p$&$1$&$-1$ \\
\hline
\end{tabular}}$$
\end{table}
We shall consider two walks: tensoring with the two-dimensional module $\mathsf{V}(1)$, and tensoring with the Steinberg module $\mathsf{V}(p-1)$. In both cases the walk has a parity problem: starting from 0, the walk is at an even position after an even number of steps, and hence does not converge to stationarity. This can be fixed by considering instead the `lazy' version $\frac{1}{2}\mathsf{K} + \frac{1}{2}\,\mathrm{I}$: probabilistically, this means that at each step, with probability $\frac{1}{2}$ we remain in the same place, and with probability $\frac{1}{2}$ we transition according to the matrix $\mathsf{K}$.
\subsubsection{Tensoring with $\mathsf{V}(1)$}\label{3c}
As we shall justify below, the rule for decomposing tensor products is as follows, writing just
$a$ for the module $\mathsf{V}(a)$ as a shorthand:
\begin{align}\label{tensl2p}\begin{split}
a \otimes 1 = \begin{cases}
1 &\quad \text{if} \ \ a=0, \\
(a+1)/(a-1) &\quad \text{if} \ \ 1\le a\le p-2, \\
(p-2)^2/1&\quad \text{if} \ \ a=p-1.
\end{cases}
\end{split}
\end{align}
\begin{remark} {\rm The notation here and elsewhere in the paper records the
composition factors of the tensor product, and their multiplicities;
so the $a=p-1$ line indicates that the tensor product $(p-1)\otimes 1$ has composition factors $\mathsf{V}(p-2)$ with multiplicity 2, and $\mathsf{V}(1)$ with multiplicity 1 (the order in which the factors are listed is not significant).} \end{remark}
We now justify (\ref{tensl2p}). Consider the algebraic group $\mathsf{SL}_2(\mathbb{k})$, and let $\mathsf{T}$ be the subgroup consisting of diagonal matrices $t_\lambda = \hbox{diag}(\lambda,\lambda^{-1})$ for $\lambda \in\mathbb{k}^*$. For $1\le a \le p-1$, the element
$t_\lambda$ acts on $\mathsf{V}(a)$ with eigenvalues $\lambda^{a},\lambda^{a-2},\ldots ,\lambda^{-(a-2)},\lambda^{-a}$, and we call the exponents
\[
a,a-2,\ldots ,-(a-2),-a
\]
the {\it weights} of $\mathsf{V}(a)$.
The weights of the tensor product $\mathsf{V}(a)\otimes \mathsf{V}(1)$ are then
\[
a+1,(a-1)^2,\ldots,-(a-1)^2,-(a+1),
\]
where the superscripts indicate multiplicities (since the eigenvalues of $t_\lambda$ on the tensor product are the products of the eigenvalues on the factors $\mathsf{V}(a)$ and $\mathsf{V}(1)$). For $a<p-1$ these weights can only match up with the weights of a module with composition factors $\mathsf{V}(a+1), \mathsf{V}(a-1)$. However, for $a=p-1$ the weights $\pm(a+1) = \pm p$ are the weights of $\mathsf{V}(1)^{(p)}$, the Frobenius twist of $\mathsf{V}(1)$ by the $p^{th}$-power field automorphism. On restriction to
$\mathsf{G} = \mathsf{SL}_2(p)$, this module is just $\mathsf{V}(1)$, and hence the composition factors of $\mathsf{V}(p-1) \otimes \mathsf{V}(1)$ are as indicated in the third line of (\ref{tensl2p}).
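The decomposition \eqref{tensl2p} can also be confirmed at the level of Brauer characters, since Brauer characters add over composition factors: the values from Table \ref{ci} must satisfy $\chi_a\chi_1 = \chi_{a+1}+\chi_{a-1}$ for $1\le a\le p-2$ and $\chi_{p-1}\chi_1 = 2\chi_{p-2}+\chi_1$ on every $p$-regular class. A floating-point sketch for $p=5$:

```python
import math

p = 5
# Brauer character values chi_a(c) for p = 5, from Table ci, at the p-regular
# class representatives 1, -1, x (order p-1, r = 1), y (order p+1, s = 1).
def chi(a):
    if a == 0:
        return [1, 1, 1, 1]
    if a == p - 1:
        return [p, p, 1, -1]
    if a % 2 == 0:
        vx = 1 + 2 * sum(math.cos(4*j*math.pi/(p-1)) for j in range(1, a//2 + 1))
        vy = 1 + 2 * sum(math.cos(4*j*math.pi/(p+1)) for j in range(1, a//2 + 1))
        return [a + 1, a + 1, vx, vy]
    vx = 2 * sum(math.cos((4*j+2)*math.pi/(p-1)) for j in range((a+1)//2))
    vy = 2 * sum(math.cos((4*j+2)*math.pi/(p+1)) for j in range((a+1)//2))
    return [a + 1, -(a + 1), vx, vy]

# Middle line of (tensl2p): chi_a * chi_1 = chi_{a+1} + chi_{a-1}.
for a in range(1, p - 1):
    lhs = [u * v for u, v in zip(chi(a), chi(1))]
    rhs = [u + v for u, v in zip(chi(a + 1), chi(a - 1))]
    assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
# Third line: chi_{p-1} * chi_1 = 2*chi_{p-2} + chi_1.
lhs = [u * v for u, v in zip(chi(p - 1), chi(1))]
rhs = [2 * u + v for u, v in zip(chi(p - 2), chi(1))]
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```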
From (\ref{tensl2p}), the Markov chain corresponding to tensoring with $\mathsf{V}(1)$ has transition matrix $\mathsf{K}$, where
\begin{align}\label{transp}\begin{split}
\mathsf{K}(a,a+1) = \frac{1}{2}\left(1+\frac{1}{a+1}\right), &\;\; \,\mathsf{K}(a,a-1) = \frac{1}{2}\left(1-\frac{1}{a+1}\right)\,(0\le a\le p-2), \\
\hspace{-1cm} \mathsf{K}(p-1,p-2) = 1-\frac{1}{p}, &\;\;\, \mathsf{K}(p-1,1) = \frac{1}{p},
\end{split}
\end{align}
and all other entries are 0.
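For a concrete check (here with $p=5$, though the code runs for any odd prime), the matrix \eqref{transp} can be assembled exactly and tested against the stationary distribution \eqref{eq:statio}:

```python
from fractions import Fraction as F

p = 5   # any odd prime works here
# Transition matrix (transp) for tensoring with V(1), states 0, ..., p-1.
K = [[F(0)] * p for _ in range(p)]
for a in range(p - 1):
    K[a][a + 1] = F(1, 2) * (1 + F(1, a + 1))
    if a >= 1:
        K[a][a - 1] = F(1, 2) * (1 - F(1, a + 1))
K[p - 1][p - 2] = 1 - F(1, p)
K[p - 1][1] = F(1, p)

# Stationary distribution (eq:statio).
pi = [F(1, p**2 - 1)] + [F(2 * (a + 1), p**2 - 1) for a in range(1, p - 1)] \
     + [F(p, p**2 - 1)]

assert all(sum(row) == 1 for row in K)
assert sum(pi) == 1
piK = [sum(pi[a] * K[a][b] for a in range(p)) for b in range(p)]
assert piK == pi     # pi is stationary, exactly
```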
\medskip
\noindent {\bf Remark.} Note that, except for transitions out of $p-1$, this Markov chain is exactly the truncation of the chain on $\{0,1,2,3,\ldots\}$ derived from tensoring with the two-dimensional irreducible module for $\mathsf{SU}_2(\mathbb{C})$ (see (\ref{eq:SU2})). It thus inherits the nice connections to Bessel processes and Pitman's $2M-X$ theorem described in (b) of Section \ref{intro} above. As shown in Section \ref{quant}, the obvious analogue on $\{0,1,\ldots,n-1\}$ in the quantum group case has a somewhat different spectrum
that creates new phenomena. The `big jump' from $p-1$ to 1 is strongly reminiscent of the `snakes and ladders' chain studied in (\cite{DD}, \cite{DSa}) and the Nash inequality techniques developed there provide another route to analyzing rates of convergence. The next theorem shows that order $p^2$ steps are necessary and sufficient for convergence.
\begin{thm}\label{mainsl2p} Let $\mathsf{K}$ be the Markov chain on $\{0,1,\ldots,p-1\}$ given by $(\ref{transp})$ starting at $0$, and let $\overbar{\mathsf{K}} = \frac{1}{2}\mathsf{K} + \frac{1}{2}\,\mathrm{I}$ be the corresponding lazy walk.
Then with $\pi$ as in $(\ref{eq:statio})$, there are universal positive constants $A,A'$ such that
\begin{itemize}
\item[{\rm (i)}] $\parallel\overbar{\mathsf{K}}^\ell-\pi\parallel_{{}_{\mathsf{TV}}} \ge A\mathsf{e}^{-\pi^2 \ell/p^2}$ for all $\ell \ge 1$, and
\item[{\rm (ii)}] $\parallel\overbar{\mathsf{K}}^\ell-\pi\parallel_{{}_{\mathsf{TV}}}\le A'\mathsf{e}^{- \pi^2 \ell/p^2}$ for all $\ell \ge p^2$.
\end{itemize}
\end{thm}
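Before the proof, here is a numerical spot check of the spectral data for $p=5$ (illustrative only): by Proposition \ref{basicone}(iii), the vector $\mathsf{r}_y(\chi_a)=\chi_a(y)/\chi_a(1)$, with values read from Table \ref{ci} at the class of $y$ (so $s=1$), should be a right eigenvector of $\mathsf{K}$ with eigenvalue $\cos(2\pi/(p+1))=\frac12$.

```python
from fractions import Fraction as F

p = 5
K = [[F(0)] * p for _ in range(p)]        # transition matrix (transp)
for a in range(p - 1):
    K[a][a + 1] = F(1, 2) * (1 + F(1, a + 1))
    if a >= 1:
        K[a][a - 1] = F(1, 2) * (1 - F(1, a + 1))
K[p - 1][p - 2] = 1 - F(1, p)
K[p - 1][1] = F(1, p)

# r_y(chi_a) = chi_a(y)/chi_a(1) for the class of y (order p+1 = 6, s = 1):
# Table ci gives chi_a(y) = 1, 1, 0, -1, -1 and chi_a(1) = 1, 2, 3, 4, 5.
r = [F(1), F(1, 2), F(0), F(-1, 4), F(-1, 5)]

Kr = [sum(K[a][b] * r[b] for b in range(p)) for a in range(p)]
lam = F(1, 2)          # eigenvalue alpha(y)/alpha(1) = cos(2*pi/6) = 1/2
assert Kr == [lam * x for x in r]         # K r_y = (1/2) r_y, exactly
```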
\begin{proof}
By Proposition \ref{basicone}, the eigenvalues of $\overbar{\mathsf{K}}$ are $0$ and $1$ together with
\[
\begin{array}{l}
\frac{1}{2}+\frac{1}{2}\cos \left(\frac{2k \pi }{p-1}\right) \;\;(1\le k\leq \frac{p-3}{2}), \\
\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2j\pi }{p+1}\right)\;\;(1\le j\leq \frac{p-1}{2}).
\end{array}
\]
To establish the lower bound in part (i), we use the fact that
$||\overbar{\mathsf{K}}^\ell-\pi ||_{{}_{\mathsf{TV}}} = \frac{1}{2}{\rm sup}_{|| f ||_\infty\le 1}|\overbar{\mathsf{K}}^\ell(f)-\pi(f)|$
(see (\ref{eq:TV}) in Appendix I). Choose $f = \mathsf{r}_x$, the right eigenfunction corresponding to the class representative
$x \in \mathsf{G}$ of order $p-1$. Then $\mathsf{r}_x(\chi) = \frac{\chi(x)}{\chi(1)}$ for $\chi \in \mathsf{IBr}(\mathsf{G})$. Clearly $||\mathsf{r}_x||_\infty = 1$, and from the orthogonality relation (\ref{col}),
\[
\pi(\mathsf{r}_x) = \sum_\chi \pi(\chi)\mathsf{r}_x(\chi) = \frac{1}{|\mathsf{G}|}\sum_\chi \mathsf{p}_{\chi}(1)\chi(x) = 0.
\]
From Table \ref{ci}, the eigenvalue corresponding to $\mathsf{r}_x$ is $\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi}{p-1}\right)$, and so
\[
\overbar \mathsf{K}^\ell(\mathsf{r}_x) = \textstyle{\left(\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi}{p-1}\right)\right)^\ell \mathsf{r}_x(0) =
\left(\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi}{p-1}\right)\right)^\ell.}
\]
It follows that
\[
\parallel\overbar\mathsf{K}^\ell-\pi\parallel_{{}_{\mathsf{TV}}} \ge \frac{1}{2} \textstyle{\left(\frac{1}{2}+\frac{1}{2}\cos \frac{2\pi}{p-1}\right)^\ell =
\frac{1}{2} \left(1-\frac{\pi^2}{p^2}+O\left(\frac{1}{p^3}\right)\right)^\ell}.
\]
This yields the lower bound (i), with $A = \frac{1}{2}+o(1)$.
Now we prove the upper bound (ii). Here we use the bound
\[
\parallel\overbar\mathsf{K}^\ell-\pi\parallel_{{}_{\mathsf{TV}}} \le \frac{1}{2}{\rm max}_{\chi}\left|\frac{\overbar \mathsf{K}^\ell(\mathbb{1},\chi)}{\pi(\chi)}-1\right|
\]
given by \eqref{eq:horse}. Using the shorthand
$\overbar\mathsf{K}^\ell(0,a) =\overbar\mathsf{K}^\ell(\chi_0,\chi_a)$, where $\chi_0 = \mathbb{1}$, and
Proposition \ref{basicone}(v), we show in the $\mathsf{SL}_2(p)$ case that
{\small
\begin{equation}\label{spec}
\frac{\overbar\mathsf{K}^\ell(0,a)}{\pi(a)}-1 = \left\{ \begin{array}{l} (p+1)\sum_{r=1}^{\frac{p-3}{2}} \left(\frac{1}{2}+\frac{1}{2}\cos \left(\frac{2\pi r}{p-1}\right)\right)^\ell \cos\left( \frac{2a \pi r}{p-1}\right) \\
- (p-1)\sum_{s=1}^{\frac{p-1}{2}}\, \left(\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi s}{p+1}\right)\right)^\ell \hspace{-.15cm} \cos\left(\frac{(2a+4)\pi s}{p+1}\right) \, (1\le a \le p-2), \\
(p+1)\sum_{r=1}^{\frac{p-3}{2}} \left(\frac{1}{2}+\frac{1}{2}\cos \left(\frac{2\pi r}{p-1}\right)\right)^\ell \\
- (p-1)\sum_{s=1}^{\frac{p-1}{2}}\, \left(\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi s}{p+1}\right)\right)^\ell \; \,
(a= p-1), \\
(p+1)\sum_{r=1}^{\frac{p-3}{2}} \left(\frac{1}{2}+\frac{1}{2}\cos \left(\frac{2\pi r}{p-1}\right)\right)^\ell \\
+ (p-1)\sum_{s=1}^{\frac{p-1}{2}}\, \left(\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi s}{p+1}\right)\right)^\ell
\left(1-2\cos\left(\frac{4\pi s}{p+1}\right)\right) \; \, (a=0).
\end{array}
\right.
\end{equation}
}
\noindent To derive an upper bound, on the right-hand side we pair terms in the two sums for $1\le r=s\le p^{\frac{1}{2}}$. Terms with $r,s \ge p^{\frac{1}{2}}$ are shown to be exponentially small. The argument is most easily seen when $a=0$.
In this case, the terms in the sums in the formula (\ref{spec}) are approximated as follows. First assume $r,s \le p^{\frac{1}{2}}$. Then we claim that
\begin{itemize}
\item[{\rm (a)}] $\left(\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi r}{p-1}\right)\right)^\ell = \mathsf{e}^{-\frac{\pi^2r^2\ell}{p^2} + O\left(\frac{r^2\ell}{p^3}\right)} = \mathsf{e}^{-\frac{\pi^2r^2\ell}{p^2}}\left(1+O(\frac{1}{p})\right)$;
\item[{\rm (b)}] $\left(\frac{1}{2}+\frac{1}{2}\cos\left( \frac{2\pi s}{p+1}\right)\right)^\ell= \mathsf{e}^{-\frac{\pi^2s^2\ell}{p^2} + O\left(\frac{s^2\ell}{p^3}\right)} = \mathsf{e}^{-\frac{\pi^2s^2\ell}{p^2}}\left(1+O\big(\frac{1}{p}\big)\right)$;
\item[{\rm (c)}] $1-2\cos\left(\frac{4\pi s}{p+1}\right) = -1 +\frac{16\pi^2s^2}{p^2} + O\left(\frac{s^2}{p^3}\right)$.
\end{itemize}
The justification of the claim is as follows. For (a), observe that
\[
\begin{array}{ll}
\frac{1}{2}+\frac{1}{2}\cos \left(\frac{2\pi r}{p-1}\right)& = \frac{1}{2}+\frac{1}{2}\left(1-\frac{1}{2}\left(\frac{2\pi r}{p-1}\right)^2+ O\left(\frac{r^4}{p^4}\right)\right) = 1-\frac{\pi^2r^2}{(p-1)^2}+ O\left(\frac{r^4}{p^4}\right) \\
& = 1-\frac{\pi^2r^2}{p^2}\left(1+\frac{2}{p}+ O\left(\frac{1}{p^2}\right) + O\left(\frac{r^4}{p^4}\right) \right) \\
& = 1-\frac{\pi^2r^2}{p^2}+ O\left(\frac{r^2}{p^3}\right) + O\left(\frac{r^4}{p^4}\right) \\
& = 1-\frac{\pi^2r^2}{p^2}+ O\left(\frac{r^2}{p^3}\right)\;\;\;(\hbox{as }r^2\le p).
\end{array}
\]
Hence,
\[\textstyle{
\left(\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi r}{p-1}\right)\right)^\ell= \mathsf{e}^{\ell \log \left(1-\frac{\pi^2r^2}{p^2}+ O\left(\frac{r^2}{p^3}\right)\right)} = \mathsf{e}^{-\frac{\pi^2r^2\ell}{p^2} + O\left(\frac{r^2\ell}{p^3}\right)},}
\]
giving (a).
Part (b) follows in a similar way. Finally, for (c),
\[
\begin{array}{ll}
1-2\cos\left(\frac{4\pi s}{p+1}\right) & = 1-2\left(1-\frac{8\pi^2s^2}{(p+1)^2} + O\left(\frac{s^4}{p^4}\right)\right)
= -1 + \frac{16\pi^2s^2}{(p+1)^2}+O\left(\frac{s^4}{p^4}\right) \\
&= -1 + \frac{16\pi^2s^2}{p^2}\left(1+O\left(\frac{1}{p}\right)\right)+O\big(\frac{s^4}{p^4}\big)\\
& = -1 +\frac{16\pi^2s^2}{p^2} + O\big(\frac{s^2}{p^3}\big).
\end{array}
\]
This completes the proof of claims (a)-(c). Note that all the error terms hold uniformly in $\ell,p,r,s$ for $r,s\le p^{\frac{1}{2}}$.
Combining terms, we see that the summands with $r=s < p^{\frac{1}{2}}$ in (\ref{spec}) (with $a=0$) contribute
\[
\begin{array}{l}
(p+1)\mathsf{e}^{-\frac{\pi^2r^2\ell}{p^2}}\left(1+O\big(\frac{1}{p}\big)\right) + (p-1) \mathsf{e}^{-\frac{\pi^2r^2\ell}{p^2}}\left(1+O\big(\frac{1}{p}\big)\right)\left(-1+O\big(\frac{r^2}{p^2}\big)\right) \\
\qquad \qquad \qquad = \mathsf{e}^{-\frac{\pi^2r^2\ell}{p^2}}(2+O(1)).
\end{array}
\]
The sum over $1\le r < \infty$ of this expression is bounded above by a constant times $\mathsf{e}^{-\frac{\pi^2\ell}{p^2}}$, provided $\ell\ge p^2$.
For $p^{\frac{1}{2}} \le b \le \frac{p-1}{2}$ we have $\bigl|\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi b}{p\pm 1}\right)\bigr| \le 1-\frac{1}{p}$, so the total contribution of the terms with $r,s\ge p^{\frac{1}{2}}$ to the right-hand side of (\ref{spec}) is bounded above by $p^2\mathsf{e}^{-\frac{\ell}{p}}$, which is negligible for $\ell \ge p^2$.
This completes the argument for $a=0$ and shows
\[
\textstyle{\left|\frac{\overbar\mathsf{K}^\ell(0,0)}{\pi(0)}-1\right| \le A\mathsf{e}^{-\frac{\pi^2\ell}{p^2}}.}
\]
At the other end, for the Steinberg module $\mathsf{V}(p-1)$, a similar but easier analysis of the spectral formula (\ref{spec})
with $a=p-1$ gives the same conclusion.
Consider finally $0<a<p-1$ in (\ref{spec}). To get the cancellation for $r^2,s^2 \le p$, use a Taylor series expansion to write
\[
\textstyle{\cos\left(\frac{(2a+4)\pi s}{p+1}\right) = \cos\left(\frac{2a\pi s}{p+1}\right) - \frac{4\pi s}{p+1} \sin\left(\frac{2a\pi s}{p+1}\right) + O\left(\frac{s^2}{p^2}\right).}
\]
Then
\[
\textstyle{(p+1)\cos\left(\frac{2a\pi r}{p-1}\right) - (p-1)\cos\left(\frac{(2a+4)\pi r}{p+1}\right) = O(r)}
\]
and we obtain
\[
\sum_{1\le r \le \sqrt{p}}\mathsf{e}^{-\frac{\pi^2r^2\ell}{p^2}}\,r \le A\mathsf{e}^{-\frac{\pi^2\ell}{p^2}}
\]
as before. We omit further details.
\end{proof}
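The spectral input to Theorem \ref{mainsl2p} can be checked numerically for small primes. The sketch below (Python with NumPy; the transition matrix is reconstructed from the walk description above, so it is an illustration rather than part of the proof) builds the non-lazy chain and compares its spectrum with the predicted eigenvalues $\pm 1$, $\cos\left(\frac{2\pi k}{p-1}\right)$, $\cos\left(\frac{2\pi j}{p+1}\right)$; the lazy walk $\overbar{\mathsf{K}}$ sends each eigenvalue $t$ to $\frac{1}{2}(1+t)$, recovering the list used in the proof.

```python
import numpy as np

def transition_matrix(p):
    # Tensor-with-V(1) walk on {0,...,p-1}, reconstructed from the rules
    # described in the text: for a < p-1 move to a-1 with probability
    # a/(2(a+1)) and to a+1 with probability (a+2)/(2(a+1)); from p-1
    # move to p-2 with probability 1-1/p and make the `big jump' to 1
    # with probability 1/p.
    K = np.zeros((p, p))
    for a in range(p - 1):
        if a > 0:
            K[a, a - 1] = a / (2 * (a + 1))
        K[a, a + 1] = (a + 2) / (2 * (a + 1))
    K[p - 1, p - 2] = 1 - 1 / p
    K[p - 1, 1] += 1 / p   # += so that p = 3 (where p-2 = 1) also works
    return K

p = 11
K = transition_matrix(p)
evals = np.sort(np.linalg.eigvals(K).real)

# Predicted spectrum of the non-lazy chain: 1 and -1 (classes +-1 of G)
# together with cos(2*pi*k/(p-1)) and cos(2*pi*j/(p+1)).
pred = [1.0, -1.0]
pred += [np.cos(2 * np.pi * k / (p - 1)) for k in range(1, (p - 1) // 2)]
pred += [np.cos(2 * np.pi * j / (p + 1)) for j in range(1, (p + 1) // 2)]
pred = np.sort(np.array(pred))

print(np.allclose(evals, pred))
```

For $p=11$ the two spectra agree to machine precision.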
\subsubsection{Tensoring with the Steinberg module $\mathsf{V}(p-1)$} \label{3d}
The Steinberg module $\mathsf{V}(p-1)$ of dimension $p$ is the irreducible $\mathsf{SL}_2(p)$-module of largest dimension, and intuition suggests that the walk induced by tensoring with it should approach the stationary distribution \eqref{eq:statio} much more rapidly than the $\mathsf{V}(1)$ walk analyzed in the previous subsection. The argument below shows that for a natural implementation, order $\log p$ steps are necessary and sufficient. One problem to be addressed is that the Steinberg representation is not faithful, as $-\mathbf{1}$ is in the kernel. There are two simple ways to fix this:
\medskip \noindent {\bf Sum Chain}: \ \ Let $\mathsf{K}_s$ be the Markov chain starting from $\mathsf{V}(0)$ and tensoring with $\mathsf{V}(1)\oplus \mathsf{V}(p-1)$.
\medskip \noindent {\bf Mixed Chain}: \ \ Let $\mathsf{K}_m$ be the Markov chain starting from $\mathsf{V}(0)$ and defined by `at each step, with probability $\frac{1}{2}$ tensor with $\mathsf{V}(p-1)$ and with probability $\frac{1}{2}$ tensor with $\mathsf{V}(1)$.'
\medskip \noindent {\bf Remark } Because the two chains involved in $\mathsf{K}_s$ and $\mathsf{K}_m$ are simultaneously diagonalizable (all tensor chains have the same eigenvectors by Proposition \ref{basicone}), the eigenvalues of $\mathsf{K}_s, \mathsf{K}_m$ are as in Table \ref{kevals}.
\begin{table}[h]
\caption{Eigenvalues of $\mathsf{K}_s$ and $\mathsf{K}_m$}\label{kevals}
\begin{center}{\small \begin{tabular}[t]{|c||c|c|c|c|}
\hline
class &$\mathbf{1}$ & $-\mathbf{1}$ & $x^r$ $(1\le r \le \frac{p-3}{2})$ & $y^s$ $(1\le s \le \frac{p-1}{2})$ \\ \hline
\hline
$\mathsf{K}_s$ & $1$ & $\frac{1}{p+2}(p-2)$ & $\frac{1}{p+2}\left(1+2\cos\left(\frac{2\pi r}{p-1}\right)\right)$ & $\frac{1}{p+2}\left(2\cos\left(\frac{2\pi s}{p+1}\right)-1\right)$ \\
\hline
$\mathsf{K}_m$ &1 & 0 & $\frac{1}{2}\left(\frac{1}{p} + \cos\left(\frac{2\pi r}{p-1}\right)\right)$ &
$\frac{1}{2}\left(\cos \left(\frac{2\pi s}{p+1} \right) - \frac{1}{p}\right)$ \\
\hline \end{tabular}}
\end{center}
\end{table}
\bigskip\noindent {\it Sum Chain}: The following considerations show that the sum walk $\mathsf{K}_s$ is `slow': it takes order $p$ steps to converge. From Table \ref{kevals}, the right eigenfunction for the second eigenvalue $1-\frac{4}{p+2}$ is $\mathsf{r}_{-\mathbf{1}}$, where $\mathsf{r}_{-\mathbf{1}}(\chi) = \frac{\chi(-\mathbf{1})}{\chi(\mathbf{1})}$.
Let $X_\ell$ be the position of the walk after $\ell$ steps, and let $E_s$ denote expectation, starting from the trivial representation. Then $E_s(\mathsf{r}_{-\mathbf{1}}(X_\ell)) = \left(1-\frac{4}{p+2}\right)^\ell$, while in stationarity $E_s (\mathsf{r}_{-\mathbf{1}}(X))=0$. By the same lower bounding technique as in the proof of Theorem \ref{mainsl2p}, the bound $\parallel\mathsf{K}_s^\ell-\pi\parallel_{{}_{\mathsf{TV}}} \ge \frac{1}{2}\left(1-\frac{4}{p+2}\right)^\ell$ shows that $\ell$ must be at least of order $p$ to get to stationarity. In fact, order $p$ steps are sufficient, even in the $\ell_\infty$ distance (see \eqref{eq:inf}), but we will not prove this here, and we will not analyze the sum chain any further.
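The slow/fast contrast between $\mathsf{K}_s$ and $\mathsf{K}_m$ can also be read off numerically from Table \ref{kevals}. A minimal sketch (Python with NumPy; the eigenvalue formulas are taken from the table, everything else is illustrative):

```python
import numpy as np

def second_eigenvalues(p):
    # Largest nontrivial eigenvalue (in absolute value) of the sum chain
    # K_s and of the mixed chain K_m, from the formulas in Table kevals.
    r = np.arange(1, (p - 1) // 2)   # 1 <= r <= (p-3)/2
    s = np.arange(1, (p + 1) // 2)   # 1 <= s <= (p-1)/2
    ks = [(p - 2) / (p + 2)]
    ks += list((1 + 2 * np.cos(2 * np.pi * r / (p - 1))) / (p + 2))
    ks += list((2 * np.cos(2 * np.pi * s / (p + 1)) - 1) / (p + 2))
    km = [0.0]
    km += list((np.cos(2 * np.pi * r / (p - 1)) + 1 / p) / 2)
    km += list((np.cos(2 * np.pi * s / (p + 1)) - 1 / p) / 2)
    return max(abs(v) for v in ks), max(abs(v) for v in km)

for p in (11, 101, 1009):
    l2_sum, l2_mix = second_eigenvalues(p)
    print(p, 1 - l2_sum, 1 - l2_mix)
```

The sum chain's spectral gap is exactly $\frac{4}{p+2}$ and vanishes as $p$ grows, while the mixed chain's gap stays near $\frac{1}{2}$, consistent with order $p$ versus order $\log p$ mixing.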
\bigskip \noindent {\it Mixed Chain}: We now analyze $\mathsf{K}_m$. Arguing with weights as for tensoring with $\mathsf{V}(1)$ in
(\ref{tensl2p}), we see that tensor products with $\mathsf{V}(p-1)$ decompose as follows:
\begin{table}[h]
\caption{Decomposition of $\mathsf{V}(a) \otimes \mathsf{V}(p-1)$ for $\mathsf{SL_2}(p)$}\label{tabSL2(p)}
\begin{center}{\small \begin{tabular}[t]{|c||c|}
\hline
$a$ & $a \otimes (p-1)$ \\
\hline \hline
$0$ & $p-1$ \\ \hline
$1$ & $(p-2)^2/1$ \\ \hline
$2$ & $(p-1)/(p-3)^2/2/0$ \\ \hline
$a\ge 3$ \hbox{ odd} & $(p-2)^2/(p-4)^2/\cdots /(p-a-1)^2/a/(a-2)^2/\cdots /1^2$ \\ \hline
$a \ge 4$ \hbox{ even} & $(p-1)/(p-3)^2/\cdots /(p-a-1)^2/a/(a-2)^2/\cdots /2^2/0$ \\
\hline \end{tabular}}
\end{center}
\end{table}
\noindent Note that when $a\ge \frac{p-1}{2}$, some of the terms $a,a-2,\ldots$ can equal terms $p-1,p-2,\ldots$, giving rise to some higher multiplicities -- for example,
\[
\begin{array}{l}
(p-2)\otimes (p-1) = (p-2)^3/(p-4)^4/\cdots /1^4, \\
(p-1)\otimes (p-1) = (p-1)^2/(p-3)^4/\cdots /2^4/0^3.
\end{array}
\]
These decompositions explain the `tensor with $\mathsf{V}(p-1)$' walk: starting at $\mathsf{V}(0)$, the walk moves to $\mathsf{V}(p-1)$ at the first step. It then moves to an even position with essentially the correct stationary distribution (except for $\mathsf{V}(0)$). Thus, the
tensor with $\mathsf{V}(p-1)$ walk is close to stationary after 2 steps, conditioned on being even. Mixing in $\mathsf{V}(1)$ allows moving from even to odd. The following theorem makes this precise, showing that order $\log p$ steps are necessary and sufficient, with respect to the $\ell_\infty$ norm.
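The decomposition rules above can be sanity-checked by comparing dimensions: the composition factors of $\mathsf{V}(a)\otimes\mathsf{V}(p-1)$ must have total dimension $(a+1)p$. A short sketch (plain Python; the encoding of the table rows, including the overlaps for large $a$, is our own transcription):

```python
def steinberg_tensor_factors(a, p):
    # Composition factors of V(a) tensor V(p-1), following the table above;
    # returns {weight: multiplicity}.  Overlapping weights for large a
    # accumulate automatically, reproducing the higher multiplicities.
    out = {}
    def add(w, m=1):
        out[w] = out.get(w, 0) + m
    if a == 0:
        add(p - 1)
    elif a == 1:
        add(p - 2, 2); add(1)
    elif a == 2:
        add(p - 1); add(p - 3, 2); add(2); add(0)
    elif a % 2 == 1:                       # a >= 3 odd
        for w in range(p - 2, p - a - 2, -2):
            add(w, 2)
        add(a)
        for w in range(a - 2, 0, -2):
            add(w, 2)
    else:                                  # a >= 4 even
        add(p - 1)
        for w in range(p - 3, p - a - 2, -2):
            add(w, 2)
        add(a)
        for w in range(a - 2, 1, -2):
            add(w, 2)
        add(0)
    return out

p = 11
for a in range(p):
    total = sum(m * (w + 1) for w, m in steinberg_tensor_factors(a, p).items())
    assert total == (a + 1) * p            # dim V(a) * dim V(p-1)
print(steinberg_tensor_factors(p - 2, p))
```

For $a = p-2$ this prints multiplicities $3,4,\ldots,4$, matching the display $(p-2)^3/(p-4)^4/\cdots/1^4$ above.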
\begin{thm}\label{steinby} For the mixed walk $\mathsf{K}_m$ defined above, starting at $\mathsf{V}(0)$, we have for all $p \geq 23$ and $\ell \geq 1$ that
\begin{itemize}
\item[{\rm (i)}] $\parallel\mathsf{K}_m^\ell-\pi\parallel_{\infty} \ge \mathsf{e}^{-(2\log 2)(\ell+1) +(4/3)\log p}$, and
\item[{\rm (ii)}] $\parallel\mathsf{K}_m^\ell-\pi\parallel_{\infty} \le \mathsf{e}^{-\ell/4 + 2\log p}$.
\end{itemize}
In fact, the mixed walks $\mathsf{K}_m$ have a cutoff at time
$\log_2(p^2)$ as $p$ tends to $\infty$.
\end{thm}
\noindent \begin{proof}
Using Proposition \ref{basicone}(v) together with Table \ref{pi}, we see that the values of
$\frac{\mathsf{K}_m^\ell(0,a)}{\pi(a)}-1$ are as displayed below.
\begin{table}[h]
\caption{Values of \ $\frac{\mathsf{K}_m^\ell(0,a)}{\pi(a)}-1$ \ for \ $\mathsf{SL_2}(p)$} \label{Kvals}
\begin{center}{\begin{tabular}[t]{|c||c|}
\hline
$a$ & $\frac{\mathsf{K}_m^\ell(0,a)}{\pi(a)}-1$\\
\hline \hline
$0$ & $(p+1)\sum_{r=1}^{\frac{p-3}{2}} \left(\frac{1}{2}\left(\cos\left(\frac{2\pi r}{p-1}\right)+\frac{1}{p}\right)\right)^\ell$ \\
& $\qquad \quad + (p-1)\sum_{s=1}^{\frac{p-1}{2}} \left(\frac{1}{2}\left(\cos\left(\frac{2\pi s}{p+1}\right)-\frac{1}{p}\right)\right)^\ell$ \\ \hline
$1\le a \le p-2$ &$(p+1)\sum_{r=1}^{\frac{p-3}{2}} \left(\frac{1}{2}\left(\cos\left(\frac{2\pi r}{p-1}\right)+\frac{1}{p}\right)\right)^\ell \cos\left(\frac{2a\pi r}{p-1}\right)$ \\
& \qquad \quad $ - (p-1)\sum_{s=1}^{\frac{p-1}{2}} \left(\frac{1}{2}\left(\cos\left(\frac{2\pi s}{p+1}\right)-\frac{1}{p}\right)\right)^\ell
\cos\left(\frac{(2a+4)\pi s}{p+1}\right)$ \\
\hline
$p-1$ & $(p+1)\sum_{r=1}^{\frac{p-3}{2}} \left(\frac{1}{2}\left(\cos\left(\frac{2\pi r}{p-1}\right)+\frac{1}{p}\right)\right)^\ell$ \\
& \qquad \quad $- (p-1)\sum_{s=1}^{\frac{p-1}{2}} \left(\frac{1}{2}\left(\cos\left(\frac{2\pi s}{p+1}\right)-\frac{1}{p}\right)\right)^\ell$ \\
\hline \end{tabular}}
\end{center}
\end{table}
For the upper bound, observe that if $p \geq 23$, then
\begin{align*}
\left|\frac{\mathsf{K}_m^\ell(0,a)}{\pi(a)}-1\right| & \le \frac{p+1}{2^\ell}\sum_{r=1}^{\frac{p-3}{2}} \left(1+\frac{1}{p}\right)^\ell
+ \frac{p-1}{2^\ell}\sum_{s=1}^{\frac{p-1}{2}} \left(1+\frac{1}{p}\right)^\ell \\
& < \frac{p^2}{2^\ell}\left(1+\frac{1}{p}\right)^\ell < \mathsf{e}^{-\ell (\log 2-1/p) + 2 \log p} < \mathsf{e}^{-\ell/4 + 2\log p}.
\end{align*}
This implies the upper bound (ii) in the conclusion. Moreover, if we let $p \to \infty$ and take $\ell \approx (1+\epsilon)\log_2(p^2)$ with
$0 < \epsilon < 1$ fixed, then $\ell/p$ is bounded from above, and so
\begin{equation}\label{cutoff1}
\left|\frac{\mathsf{K}_m^\ell(0,a)}{\pi(a)}-1\right| < \frac{p^2}{2^\ell}\left(1+\frac{1}{p}\right)^\ell <
\frac{\mathsf{e}^{\ell/p}}{p^{2\epsilon}}
\end{equation}
tends to zero.
For the lower bound (i), we use the monotonicity property \eqref{monotone} and choose $\ell_0 \in \{\ell,\ell+1\}$ to be {\it even}. Observe that
if $1 \leq r \leq (p-1)/6$, then $\cos\left(\frac{2\pi r}{p-1}\right) \geq 1/2$. As $\lfloor (p-1)/6 \rfloor \geq (p-5)/6$, it follows that
$$\left|\frac{\mathsf{K}_m^{\ell_0}(0,0)}{\pi(0)}-1\right| \geq \frac{(p+1)(p-5)}{6}2^{-2\ell_0} > \mathsf{e}^{-(2\log 2)\ell_0 + (4/3)\log p}$$
when $p \geq 23$. Now the lower bound follows by \eqref{eq:inf}.
To establish the cutoff, we again let $p \to \infty$ and consider {\it even} integers
$$\ell \approx (1-\epsilon)\log_2p^2$$
with $0 < \epsilon < 1$ fixed. Note that when $0 \leq x \leq \sqrt{\log 2}$, then
$$\cos(x) \geq 1-x^2/2 \geq \mathsf{e}^{-x^2}.$$
Hence, there are absolute constants $C_1,C_2 > 0$ such that when $1 \leq r \leq \lceil C_1(p/\sqrt{\log p})\rceil$ we have
$$\cos\left(\frac{2\pi r}{p-1}\right)+1/p \geq \mathsf{e}^{-4\pi^2r^2/(p-1)^2} \geq \mathsf{e}^{-C_2/(\log p)},$$
and so
$$\left(\cos\left(\frac{2\pi r}{p-1}\right)+1/p\right)^\ell \geq \mathsf{e}^{-C_2\ell/(\log p)} \geq \mathsf{e}^{-2C_2}.$$
It follows that
$$\left|\frac{\mathsf{K}_m^{\ell}(0,0)}{\pi(0)}-1\right| > \frac{C_1\mathsf{e}^{-2C_2}p^2}{2^\ell\sqrt{\log p}} \approx
\frac{C_1\mathsf{e}^{-2C_2}p^{2\epsilon}}{\sqrt{\log p}},$$
which tends to $\infty$. Together with \eqref{cutoff1}, this proves the cutoff at $\log_2(p^2)$.
\end{proof}
\medskip
\noindent {\bf Remark. }The above result uses the $\ell_\infty$ distance. We conjecture that any number of steps tending to infinity with $p$ suffices to send the total variation distance to zero. In principle, this can be attacked directly from the spectral representation of $\mathsf{K}_m^\ell(0,a)$, but the details seem difficult.
\section{$\mathsf{SL}_2(q)$, $q = p^2$}\label{sl2p2sec}
\subsection{Introduction}\label{4a} The nice connections between the tensor walk on $\mathsf{SL}_2(p)$ and probability suggest that closely related walks may give rise to
interesting Markov chains. In this section, we work with $\mathsf{SL}_2(q)$ over a field of $q = p^2$ elements. Throughout, $\mathbb k$ is an algebraically closed field
of characteristic $p >0$. We present some background representation theory in Section \ref{4b}.
In Section \ref{4c}, we will be tensoring with the usual (natural) two-dimensional representation $\mathsf{V}$. In Section \ref{4d}, the 4-dimensional module $\mathsf{V} \otimes \mathsf{V}^{(p)}$ will be considered.
We now describe the irreducible modules for $\mathsf{G}=\mathsf{SL}_2(p^2)$ over $\mathbb k$. As in Section \ref{3b}, let $\mathsf{V}(0)$ denote the trivial module, $\mathsf{V}(1)$ the natural 2-dimensional module, and for $1\le a\le p-1$, let $\mathsf{V}(a) = \mathsf{S}^a(\mathsf{V}(1))$, the $a^{th}$ symmetric power of $\mathsf{V}(1)$ (of dimension $a+1$). Denote by $\mathsf{V}(a)^{(p)}$ the Frobenius twist of $\mathsf{V}(a)$ by the field automorphism of $\mathsf{G}$ raising matrix entries to the $p^{th}$ power.
Then by the Steinberg tensor product theorem (see for example \cite[\S 16.2]{MT}), the irreducible
$\mathbb k\mathsf{G}$-modules are the $p^2$ modules $\mathsf{V}(a)\otimes \mathsf{V}(b)^{(p)}$, where $0 \le a,b \le p-1$
(note that the weights of the diagonal subgroup $\mathsf T$ on these modules are given in (\ref{wtst}) below). Denote this module by the pair $(a,b)$. In particular, the trivial representation corresponds to $(0,0)$ and the Steinberg representation is indexed by $(p-1,p-1)$. The
natural two-dimensional representation corresponds to $(1,0)$. For $p=5$, the tensor walk using
$(1,0)$ is pictured in Figure \ref{p5wk}.
The exact probabilities depend
on $(a,b)$ and are given in (\ref{eq:tensruleq=p2}) below.
Thus, from a position $(0,b)$ on the left-hand wall of the display, the walk must move one to the right. At an interior $(a,b)$, the walk moves
one horizontally to $(a-1,b)$ or $(a+1,b)$. At a point $(p-1,b)$ on the right-hand wall, the walk can move left one horizontally
(indeed, it does so with probability $1-\frac{1}{p}$) or it makes a big jump to $(0,b-1)$ or to $(0,b+1)$ if $b \ne p-1$ and
a big jump to $(0,p-2)$ or to $(1,0)$ when $b=p-1$. The walk has a drift to the right, and a drift upward.
\emph{Throughout this article, double-headed arrows in displays indicate that the module pointed to occurs twice in the tensor product decomposition.}
\begin{figure}[h]
\label{p5wk}
$$\begin{tikzpicture}[scale=1.5,line width=1pt]
\tikzstyle{Trep}=[circle,
draw= black,
fill=blue!60]
\matrix[row sep=.4cm,column sep=.4cm] {
&& \node(V0){\color{magenta}\bf (0,4)}{}; &&\node(V1){\color{magenta}\bf(1,4)}{}; && \node(V2){\color{magenta}(\bf2,4)}{}; && \node(V3){\color{magenta}\bf(3,4)}{}; && \node(V4){\color{magenta}\bf(4,4)}{}; \\
&&&&&&&&&&\\
&&&&&&&&&&\\
&& \node(V5){\color{magenta}\bf(0,3)}{}; &&\node(V6){\color{magenta}\bf(1,3)}{}; && \node(V7){\color{magenta}\bf(2,3)}{}; && \node(V8){\color{magenta}\bf(3,3)}{}; && \node(V9){\color{magenta}\bf(4,3)}{}; \\
&&&&&&&&&&\\
&&&&&&&&&&\\
&& \node(V10){\color{magenta}\bf(0,2)}{}; &&\node(V11){\color{magenta}\bf(1,2)}{}; && \node(V12){\color{magenta}\bf(2,2)}{}; && \node(V13){\color{magenta}\bf(3,2)}{}; && \node(V14){\color{magenta}\bf(4,2)}{}; \\
&&&&&&&&&&\\
&&&&&&&&&&\\
&& \node(V15){\color{magenta}\bf(0,1)}{}; &&\node(V16){\color{magenta}\bf(1,1)}{}; && \node(V17){\color{magenta}\bf(2,1)}{}; && \node(V18){\color{magenta}\bf(3,1)}{}; && \node(V19){\color{magenta}\bf(4,1)}{}; \\
&&&&&&&&&&\\
&&&&&&&&&&\\
&& \node(V20){\color{magenta}\bf(0,0)}{}; &&\node(V21){\color{magenta}\bf(1,0)}{}; && \node(V22){\color{magenta}\bf(2,0)}{}; && \node(V23){\color{magenta}\bf(3,0)}{}; && \node(V24){\color{magenta}\bf(4,0)}{}; \\
};
\path
(V0)edge[black,thick,<->] (V1)
(V1) edge[black,thick,<->] (V2)
(V2) edge[black,thick,<->] (V3)
(V3) edge[black,thick,<<->] (V4)
(V5)edge[black,thick,<->] (V6)
(V6)edge[black,thick,<->] (V7)
(V7) edge[black,thick,<->] (V8)
(V8) edge[black,thick,<<->] (V9)
(V10) edge[black,thick,<->] (V11)
(V11) edge[black,thick,<->] (V12)
(V12) edge[black,thick,<->] (V13)
(V13) edge[black,thick,<<->] (V14)
(V15) edge[black,thick,<->] (V16)
(V16) edge[black,thick,<->] (V17)
(V17) edge[black,thick,<->] (V18)
(V18) edge[black,thick,<<->] (V19)
(V20) edge[black,thick,<->] (V21)
(V21) edge[black,thick,<->] (V22)
(V22) edge[black,thick,<->] (V23)
(V23) edge[black,thick,<<->] (V24)
(V9) edge[black,thick,->] (V0)
(V9) edge[black,thick,->] (V10)
(V14) edge[black,thick,->] (V5)
(V14) edge[black,thick,->] (V15)
(V19) edge[black,thick,->] (V10)
(V19) edge[black,thick,->] (V20)
(V24) edge[black,thick,->] (V15)
(V4) edge[black,thick,->>] (V5)
(V4) edge[black,thick,->] (V21)
;
;
\end{tikzpicture}$$
\caption{Tensor walk on irreducibles of $\mathsf{SL}_2(p^2),\,p=5$}
\end{figure}
Heuristically, the walk moves back and forth at a fixed horizontal level just like the $\mathsf{SL}_2(p)$-walk of Section \ref{3c}. As in that section, it takes order $p^2$ steps to go across. Once it hits the right-hand wall, it usually bounces back, but with small probability (order $\frac{1}{p}$), it jumps up or down by one to $(0,b\pm 1)$ (to $(0,p-2)$ or $(1,0)$ when $b= p-1$). There need to be order $p^2$ of these vertical shifts for the vertical
coordinate to equilibrate. All of this suggests that the walk will take order $p^4$ steps to equilibrate totally. As shown below, analysis
yields that $p^4$ steps are necessary and sufficient; again the cancellation required is surprisingly delicate.
\subsection{Background on modular representations of $\mathsf{SL}_2(p^2)$.} \label{4b}
Throughout this discussion, $p$ is an odd
prime and $\mathsf{G} = \mathsf{SL}_2(p^2)$. The irreducible $\mathbb k\mathsf{G}$-modules are as described above, and the projective indecomposables are given in \cite{Sr}.
The irreducible Brauer characters $\chi_{(a,b)} = \chi_a\,\chi_{b^{(p)}} \in \mathsf{IBr}\big(\mathsf{SL}_2(p^2)\big)$ are indexed by pairs $(a,b)$, $0 \le a,b \le p-1$, where `$a$' stands for the usual symmetric power representation of $\mathsf{SL}_2(p^2)$ of
dimension $a+1$, and `$b^{(p)}$' stands for the Frobenius twist of the $b$th symmetric power representation of dimension $b+1$ where
the representing matrices on the $b$th symmetric power have their entries raised to the $p$th power. Thus $\chi_{(a,b)}$ has degree $(a+1)(b+1)$. The $p$-regular conjugacy classes of $\mathsf{G} =\mathsf{SL}_2(p^2)$, and the values of the Brauer character $\chi_{(1,0)}$ of the natural module are displayed in Table \ref{eq:tabq=p2}, where $x$ and $y$ are fixed elements of orders $p^2-1$ and $p^2+1$, respectively.
\begin{table}[h]
\caption{Values of the Brauer character $\chi_{(1,0)}$ for $\mathsf{SL}_2(p^2)$}\label{eq:tabq=p2}
\[
\begin{array}{|r||c|c|c|c|}
\hline
\text{\small class rep.} \ c & \mathbf{1} & -\mathbf{1} & x^r\,(1\le r<\frac{p^2-1}{2}) & y^s\,(1\le s < \frac{p^2+1}{2}) \\
\hline \hline
|\mathsf{C}_{\mathsf{G}}(c)| & |\mathsf{G}| & |\mathsf{G}| & p^2-1 & p^2+1 \\
\hline
\chi_{(1,0)}(c) & 2 & -2 & 2\cos\left(\frac{2\pi r}{p^2-1}\right) & 2\cos\left(\frac{2\pi s}{p^2+1}\right) \\
\hline
\end{array}
\]
\end{table}
We will also need the character $\mathsf{p}_{(a,b)}$ of the projective indecomposable
module $\mathsf{P}(a,b)$ indexed by $(a,b)$, that is, the projective cover of $\chi_{(a,b)}$. The values of these characters are given in Table \ref{eq:pims}; the relevant centralizer orders appear in Table \ref{eq:tabq=p2}.
\begin{table}[h]
\caption{Characters of projective indecomposables for $\mathsf{SL}_2(p^2)$}\label{eq:pims}
{\small \begin{tabular}[t]{|c||c|c|c|c|}
\hline
&$\mathbf{1}$ & $-\mathbf{1}$ &$x^r\; (1\le r<\frac{p^2-1}{2})$ & $y^s\; (1\le s < \frac{p^2+1}{2})$ \\ \hline
\hline
$\mathsf{p}_{(0,0)}$ & $3p^2$ & $3p^2$ & $4 \cos^2\left(\frac{2\pi r}{p+1}\right) - 1$ & $\begin{matrix}1-\Big(4 \cos\left(\frac{2(p-1)\pi s}{p^2+1}\right) \times \\
\qquad \quad \cos\left(\frac{2(p+1)\pi s}{p^2+1}\right)\Big)\end{matrix}$ \\ \hline
$\begin{matrix}\mathsf{p}_{(a,b)}\\ {}_{(a,b < p-1)} \end{matrix}$ & $4p^2$ &$(-1)^{a+b}\,4p^2$ & $\begin{matrix} 4 \cos\left(\frac{2(p-1-a)\pi r}{p^2-1}\right) \times \\
\cos\left(\frac{2(p(b+1)-1)\pi r}{p^2-1}\right)\end{matrix}$ & $\begin{matrix} -4 \cos\left(\frac{2(p-1-a)\pi s}{p^2+1}\right) \times \\
\qquad \cos\left(\frac{2(p(b+1)+1)\pi s}{p^2+1}\right)\end{matrix}$
\\ \hline
$\begin{matrix}\mathsf{p}_{(p-1,b)}\\ {}_{(b < p-1)} \end{matrix}$ & $2p^2$ & $(-1)^b\,2p^2$ & $2\cos\left(\frac{2(p(b+1)-1 )\pi r}{p^2-1}\right)$ & $-2\cos\left(\frac{2(p(b+1)+1)\pi s}{p^2+1}\right)$ \\
\hline
$\begin{matrix}\mathsf{p}_{(a,p-1)}\\ {}_{(a < p-1)} \end{matrix}$ & $2p^2$ & $(-1)^a\,2p^2$ & $2\cos\left(\frac{2(p-1-a)\pi r}{p^2-1}\right)$ & $-2\cos\left(\frac{2(p-1-a)\pi s}{p^2+1}\right)$ \\
\hline
$\mathsf{p}_{(p-1,p-1)}$ & $p^2$ & $p^2$ & $1$ & $-1$ \\
\hline
\end{tabular}}
\end{table}
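As a consistency check on Table \ref{eq:pims}, the Brauer orthogonality relations $\langle \mathsf{p}_{(a,b)},\chi_{(a',b')}\rangle = \delta_{(a,b),(a',b')}$ can be verified numerically for a small prime. The sketch below (Python with NumPy) does this for $p=3$; the evaluation $\chi_a = \sin((a+1)\theta)/\sin\theta$ of symmetric-power characters is standard, while the transcription of the table entries is our own.

```python
import numpy as np

p, q = 3, 9                       # smallest odd case, q = p^2
G = q * (q**2 - 1)                # |SL_2(q)|

# p-regular classes: 1, -1, x^r (|x| = q-1), y^s (|y| = q+1), stored as
# (kind, parameter, centralizer order).
classes = ([('pm', 1, G), ('pm', -1, G)]
           + [('x', r, q - 1) for r in range(1, (q - 1) // 2)]
           + [('y', s, q + 1) for s in range(1, (q + 1) // 2)])

def sym(a, t):
    # Character of the a-th symmetric power of the natural module at an
    # element with eigenvalues e^{+-it}: sin((a+1)t)/sin(t).
    return np.sin((a + 1) * t) / np.sin(t)

def chi(a, b, c):
    # Brauer character chi_{(a,b)} = chi_a * (Frobenius twist of chi_b).
    kind, u, _ = c
    if kind == 'pm':
        return (u ** (a + b)) * (a + 1) * (b + 1)
    t = 2 * np.pi * u / (q - 1 if kind == 'x' else q + 1)
    return sym(a, t) * sym(b, p * t)

def pim(a, b, c):
    # Values of the projective indecomposable character p_{(a,b)},
    # transcribed from the table above.
    kind, u, _ = c
    tx = 2 * np.pi * u / (q - 1)
    ty = 2 * np.pi * u / (q + 1)
    if (a, b) == (0, 0):
        if kind == 'pm': return 3 * q
        if kind == 'x': return 4 * np.cos(2 * np.pi * u / (p + 1))**2 - 1
        return 1 - 4 * np.cos((p - 1) * ty) * np.cos((p + 1) * ty)
    if a < p - 1 and b < p - 1:
        if kind == 'pm': return (u ** (a + b)) * 4 * q
        if kind == 'x':
            return 4 * np.cos((p - 1 - a) * tx) * np.cos((p * (b + 1) - 1) * tx)
        return -4 * np.cos((p - 1 - a) * ty) * np.cos((p * (b + 1) + 1) * ty)
    if a == p - 1 and b < p - 1:
        if kind == 'pm': return (u ** b) * 2 * q
        if kind == 'x': return 2 * np.cos((p * (b + 1) - 1) * tx)
        return -2 * np.cos((p * (b + 1) + 1) * ty)
    if a < p - 1 and b == p - 1:
        if kind == 'pm': return (u ** a) * 2 * q
        if kind == 'x': return 2 * np.cos((p - 1 - a) * tx)
        return -2 * np.cos((p - 1 - a) * ty)
    if kind == 'pm': return q     # Steinberg (p-1, p-1)
    return 1 if kind == 'x' else -1

pairs = [(a, b) for a in range(p) for b in range(p)]
gram = np.array([[sum(pim(*m, c) * chi(*n, c) / c[2] for c in classes)
                  for n in pairs] for m in pairs])
print(np.allclose(gram, np.eye(q)))
```

The printed value confirms the table entries for $p=3$; in particular, it pins down $4\cos^2\left(\frac{2\pi r}{p+1}\right)-1$ as the value of $\mathsf{p}_{(0,0)}$ on $x^r$.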
The order of $\mathsf{G} = \mathsf{SL}_2(p^2)$ is $p^2(p^4-1)$, and by Proposition \ref{basicone}(i), the stationary distribution $\pi$ is roughly a product measure linearly increasing in each variable. Explicitly, the values of $\pi$ are:
\begin{equation}\label{eq:statdis}
\begin{array}{|c|c|}
\hline
(a,b) & \pi(a,b) \\
\hline \hline
(0,0) & \frac{3}{p^4-1} \\
\hline
a,b<p-1 & \frac{4(a+1)(b+1)}{p^4-1} \\
\hline
(p-1,b),\,b<p-1 & \frac{2p(b+1)}{p^4-1} \\
\hline
(a,p-1),\,a<p-1 & \frac{2p(a+1)}{p^4-1} \\
\hline
(p-1,p-1) & \frac{p^2}{p^4-1} \\
\hline
\end{array}
\end{equation}
\subsection{Tensoring with $(1,0)$}\label{4c}
In this section we consider the Markov chain given by tensoring with the natural module $(1,0)$.
The transition probabilities are determined as usual: from $(a,b)$ tensor with $(1,0)$, and pick a
composition factor with
probability proportional to its multiplicity times its dimension.
The composition factors of the tensor product $(a,b) \otimes (1,0)$ can be determined using weights, as in
Section \ref{3c}. Note first that the weights of the diagonal subgroup $\mathsf T$ on $(a,b)$ are
\begin{equation}\label{wtst}
(a-2i)+p(b-2j)\;\;(0\le i\le a,\; 0\le j \le b).
\end{equation}
The tensor product $(a,b) \otimes (1,0)$ takes the form
\begin{equation}\label{abp}
\mathsf{V}(a) \otimes \mathsf{V}(b)^{(p)} \otimes \mathsf{V}(1).
\end{equation}
For $a<p-1$, we see as in Section \ref{3c} that $\mathsf{V}(a)\otimes \mathsf{V}(1)$ has composition factors $\mathsf{V}(a+1)$ and $\mathsf{V}(a-1)$, so the tensor product is $(a-1,b)/(a+1,b)$ (with only the second term if $a=0$). For $a=p-1$, a weight calculation gives
$\mathsf{V}(p-1) \otimes \mathsf{V}(1) = \mathsf{V}(p-2)^2/\mathsf{V}(1)^{(p)}$, so if $b<p-1$ the tensor product (\ref{abp}) has composition factors
$(p-2,b)^2 / (0,b-1) / (0,b+1)$. If $b=p-1$, then $\mathsf{V}(1)^{(p)} \otimes \mathsf{V}(b)^{(p)}$ has composition factors
$\mathsf{V}(p-2)^{(p)}$ (twice) and $\mathsf{V}(1)^{(p^2)}$, and for $\mathsf{G}=\mathsf{SL}_2(p^2)$, the latter is just the trivial module $\mathsf{V}(0)$. We conclude that in all cases the composition factors of $(a,b) \otimes (1,0)$ are
{\small \begin{equation}\label{eq:tensruleq=p2}
(a,b) \otimes (1,0) \ = \ \begin{cases} (1,b) \, & \ \, a = 0, \\
(a-1,b) / (a+1,b) \, & \ \, 1 \le a < p-1, \\
(p-2,b)^2 / (0,b-1) / (0,b+1) \, & \ \, a = p-1,\, b < p-1, \\
(p-2,p-1)^2 / (0,p-2)^2 / (1,0) \, & \ \, a = b= p-1. \end{cases}
\end{equation}}
Translating into probabilities, for $0 \le a < p-1$ and any $b$, the walk from $(a,b)$
moves to $(a-1,b)$ or $(a+1,b)$ with probability
\begin{equation}\label{eq:tabq=p2prob1}
\begin{tabular}[t]{|c||c|c|}
\hline
& $(a-1,b)$ & $(a+1,b)$ \\
\hline
$\mathsf{K}\big((a,b), \cdot)$ & $\frac{a}{2(a+1)}$ & $\frac{a+2}{2(a+1)}$ \\
\hline
\end{tabular}
\end{equation}
For these values of $a$ and $b$, the chain thus moves exactly like the $\mathsf{SL}_2(p)$-walk. For $(p-1,b)$ with $b < p-1$ on the right-hand wall, the walk moves back left to $(p-2,b)$
with probability $1 - \frac{1}{p}$, to $(0,b-1)$ with probability $\frac{b}{2p(b+1)}$, or to $(0,b+1)$ with probability
$\frac{b+2}{2p(b+1)}$. The Steinberg module $(p-1,p-1)$ is
the unique irreducible module for $\mathsf{SL}_2(p^2)$ that is also projective. Tensoring with $(1,0)$ sends $(p-1,p-1)$ to
$(p-2,p-1)$ with probability $1 - \frac{1}{p}$, to $(0,p-2)$ with probability $\frac{p-1}{p^2}$, or to $(1,0)$ with probability $\frac{1}{p^2}$.
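The transition rule just described can be cross-checked against the stationary distribution $(\ref{eq:statdis})$. The sketch below (Python with NumPy; illustrative only) assembles the full transition matrix from $(\ref{eq:tensruleq=p2})$ for a small prime and verifies that every row is a probability vector and that $\pi$ is stationary.

```python
import numpy as np

def walk_and_pi(p):
    # Tensor-with-(1,0) walk of SL_2(p^2) on the p^2 pairs (a,b), together
    # with the stationary distribution of (eq:statdis).
    n = p * p
    idx = lambda a, b: a + p * b
    K = np.zeros((n, n))
    for b in range(p):
        K[idx(0, b), idx(1, b)] = 1.0
        for a in range(1, p - 1):
            K[idx(a, b), idx(a - 1, b)] = a / (2 * (a + 1))
            K[idx(a, b), idx(a + 1, b)] = (a + 2) / (2 * (a + 1))
    for b in range(p - 1):                 # right-hand wall, b < p-1
        K[idx(p - 1, b), idx(p - 2, b)] = 1 - 1 / p
        if b > 0:
            K[idx(p - 1, b), idx(0, b - 1)] = b / (2 * p * (b + 1))
        K[idx(p - 1, b), idx(0, b + 1)] = (b + 2) / (2 * p * (b + 1))
    # Steinberg corner (p-1, p-1)
    K[idx(p - 1, p - 1), idx(p - 2, p - 1)] = 1 - 1 / p
    K[idx(p - 1, p - 1), idx(0, p - 2)] = (p - 1) / p**2
    K[idx(p - 1, p - 1), idx(1, 0)] = 1 / p**2
    pi = np.zeros(n)
    for a in range(p):
        for b in range(p):
            if a == b == 0:
                pi[idx(a, b)] = 3
            elif a < p - 1 and b < p - 1:
                pi[idx(a, b)] = 4 * (a + 1) * (b + 1)
            elif a == p - 1 and b < p - 1:
                pi[idx(a, b)] = 2 * p * (b + 1)
            elif a < p - 1:                # b = p-1
                pi[idx(a, b)] = 2 * p * (a + 1)
            else:                          # Steinberg
                pi[idx(a, b)] = p * p
    return K, pi / (p**4 - 1)

K, pi = walk_and_pi(7)
print(np.allclose(K.sum(axis=1), 1.0))   # each row is a probability vector
print(abs(pi.sum() - 1.0) < 1e-12)       # pi is a probability distribution
print(np.allclose(pi @ K, pi))           # pi K = pi
```

That $\pi\mathsf{K} = \pi$ holds exactly reflects the general fact that $\pi(m) \propto \dim m \cdot \dim \mathsf{P}(m)$ is stationary for tensor chains.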
The main result of this section shows that order $p^4$ steps are necessary and sufficient for convergence. As before, the walk has a parity problem: starting at $(0,0)$, after an even number of steps the walk is always at $(a,b)$ with $a+b$ even.
As usual we sidestep this by considering the lazy version.
\begin{thm}\label{T:SL2(p^2)} Let $\mathsf{G} = \mathsf{SL}_2(p^2)$, and let $\mathsf{K}$ be the Markov chain on $\mathsf{IBr}(\mathsf{G})$ given by
tensoring with $(1,0)$ with probability $\frac{1}{2}$, and with $(0,0)$ with probability $\frac{1}{2}$ (starting at $(0,0)$).
Then the stationary distribution $\pi$ is given by $(\ref{eq:statdis})$, and there are universal positive constants $A,A'$ such that
\begin{itemize}
\item[{\rm (i)}] $\parallel\mathsf{K}^\ell-\pi\parallel_{{}_{\mathsf{TV}}} \ge A\mathsf{e}^{-\frac{\pi^2\ell}{p^4}}$ for all $\ell\ge 1$, and
\item[{\rm (ii)}] $\parallel\mathsf{K}^\ell-\pi\parallel_{{}_{\mathsf{TV}}} \le A'\mathsf{e}^{-\frac{\pi^2\ell}{p^4}}$ for all $\ell\ge p^4$.
\end{itemize}
\end{thm}
\noindent {\it Proof. }
For the lower bound, we use the fact that $f_r(a,b) := \frac{\chi_{(a,b)}(x^r)}{\chi_{(a,b)}(1)}$ is a right eigenfunction with eigenvalue $\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi r}{p^2-1}\right)$. Clearly $|f_r(a,b)|\le 1$ for all $a,b,r$. Using the fact that
$\sum_{a,b} f_r(a,b)\pi(a,b) = 0$ for $r\ne 0$, we have (see \eqref{eq:TV} in Appendix I)
\[
\begin{array}{ll}
\parallel\mathsf{K}^\ell-\pi\parallel_{{}_{\mathsf{TV}}} & = \frac{1}{2}\sup_{\parallel f \parallel_\infty \le 1} |\mathsf{K}^\ell(f)-\pi(f)| \\
& \ge \frac{1}{2}|\mathsf{K}^\ell(f_r)| \\
& = \frac{1}{2}\left(\frac{1}{2}+\frac{1}{2}\cos \left(\frac{2\pi r}{p^2-1}\right)\right)^\ell.
\end{array}
\]
Taking $r=1$, we have
\[
\begin{array}{ll}
\left(\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi }{p^2-1}\right)\right)^\ell &
= \left(1-\frac{\pi^2}{(p^2-1)^2}+O\left(\frac{1}{p^8}\right)\right)^\ell \\
& = \mathsf{e}^{-\frac{\pi^2\ell}{(p^2-1)^2}}\left(1+O\left(\frac{\ell}{p^8}\right)\right).
\end{array}
\]
This proves the lower bound.
For the upper bound, we use Proposition \ref{basicone}(v) to see that for all $(a,b)$,
\begin{equation}\label{eq1}
\begin{array}{ll}
\frac{\mathsf{K}^\ell\left((0,0), (a,b)\right)}{\pi(a,b)}-1 = & p^2(p^2+1)\sum_{r=1}^{\frac{p^2-1}{2}} \left(\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2\pi r}{p^2-1}\right)\right)^\ell \frac{\mathsf{p}_{(a,b)}(x^r)}{\mathsf{p}_{(a,b)}(1)} \\
& \ \ + p^2(p^2-1)\sum_{s=1}^{\frac{p^2+1}{2}} \left(\frac{1}{2}+\frac{1}{2}\cos \frac{2\pi s}{p^2+1}\right)^\ell \frac{\mathsf{p}_{(a,b)}(y^s)}{\mathsf{p}_{(a,b)}(1)}.
\end{array}
\end{equation}
The terms in the two sums are now paired with $r=s$ for $1\le r,s \le p$ as in the proof of Theorem \ref{mainsl2p}.
The cancellation is easiest to see at $(a,b)=(0,0)$. Then
\[
\begin{array}{l}
\mathsf{p}_{(0,0)}(1)=3p^2,\quad \mathsf{p}_{(0,0)}(x^r) = 4\cos^2\left(\frac{2\pi r}{p+1}\right)-1, \\
\mathsf{p}_{(0,0)}(y^s) = 1-4\cos \left(\frac{2(p-1)\pi s}{p^2+1}\right) \cos\left( \frac{2(p+1)\pi s}{p^2+1}\right).
\end{array}
\]
We now use the estimates
\[
\begin{array}{l}
4\cos^2\left(\frac{2\pi r}{p+1}\right)-1 = 3 - \frac{16\pi^2r^2}{p^2}+O\left(\frac{r^2}{p^3}\right), \\
1-4\cos \left(\frac{2(p-1)\pi s}{p^2+1}\right) \cos\left( \frac{2(p+1)\pi s}{p^2+1}\right) = -3 + \frac{16\pi^2s^2}{p^2}+O\left(\frac{s^2}{p^3}\right).
\end{array}
\]
It follows that the $r=s$ terms of the right-hand side of (\ref{eq1}) pair to give
\[
\begin{array}{ll}
& p^2(p^2+1) \left(\frac{1}{2} +\frac{1}{2}\cos\left(\frac{2\pi s}{p^2-1}\right)\right)^\ell \left(3 - \frac{16\pi^2s^2}{p^2}+
O\left(\frac{s^2}{p^3}\right)\right)\frac{1}{p^2} \\
& \qquad + p^2(p^2-1) \left(\frac{1}{2}+\frac{1}{2}\cos \frac{2\pi s}{p^2+1}\right)^\ell
\left( -3 + \frac{16\pi^2s^2}{p^2}+O\left(\frac{s^2}{p^3}\right)\right)\frac{1}{p^2} \\
&\ \ = \ \mathsf{e}^{-\frac{\pi^2s^2\ell}{p^2}}\cdot O\left(\frac{s^2}{p}\right).
\end{array}
\]
The sum of this over $1\le s \le p$ is dominated by the lead term
$\mathsf{e}^{-\frac{\pi^2\ell}{p^2}}$ up to multiplication by a universal constant. As in the proof of Theorem \ref{mainsl2p}, the terms for other $r,s$ are negligible (even without pairing). This completes the upper bound argument
for $(a,b)=(0,0)$. Other $(a,b)$ terms are similar (see the argument for $\mathsf{SL}_2(p)$), and we omit the details. \hspace{1.5cm} $\Box$
\medskip
\noindent {\bf Remark.} For large $p$, the above $\mathsf{SL}_2(p^2)$ walk is essentially a one-dimensional walk which shows Bessel(3) fluctuations. A genuinely two-dimensional process can be constructed by tensoring with the 4-dimensional module $(1,1) = \mathsf{V}(1) \otimes \mathsf{V}(1)^{(p)}$. We analyze this next.
\subsection{Tensoring with $(1,1)$}\label{4d}
The values of the Brauer character $\chi_{(1,1)}$ are:
\begin{equation*}\label{eq:tabq=p2prob1}
\begin{tabular}[t]{|c|c|c|c|}
\hline
$ \mathbf{1}$ & $-\mathbf{1}$ & $x^r\; (1\le r<\frac{p^2-1}{2})$ & $y^s\; (1\le s < \frac{p^2+1}{2})$ \\
\hline
\hline
4 & 4 & $2\cos\left(\frac{2\pi r}{p-1}\right)+2\cos \left(\frac{2\pi r}{p+1}\right)$
& $2\cos \left(\frac{2(p+1)\pi s}{p^2+1}\right)+2\cos \left(\frac{2(p-1)\pi s}{p^2+1}\right)$ \\
\hline
\end{tabular}
\end{equation*}
and the rules for tensoring with $(1,1)$ are given in Table \ref{11tens} -- these are justified in similar fashion to (\ref{eq:tensruleq=p2}).
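The character values in the table follow from Steinberg's tensor product theorem together with the product-to-sum identity for cosines; the following short numerical check (our own illustrative sketch, with an arbitrary choice of $p$) confirms the entries at the classes $x^r$.

```python
import math

# Check: chi_{(1,1)}(x^r) = chi_{V(1)}(x^r) * chi_{V(1)^{(p)}}(x^r), i.e.
#   2cos(2*pi*r/(p^2-1)) * 2cos(2*pi*p*r/(p^2-1))
#     = 2cos(2*pi*r/(p-1)) + 2cos(2*pi*r/(p+1)),
# the product-to-sum identity 2cosA * 2cosB = 2cos(B+A) + 2cos(B-A).
p = 101  # any odd prime works here
for r in range(1, (p * p - 1) // 2):
    t = 2 * math.pi * r / (p * p - 1)
    lhs = 2 * math.cos(t) * 2 * math.cos(p * t)
    rhs = (2 * math.cos(2 * math.pi * r / (p - 1))
           + 2 * math.cos(2 * math.pi * r / (p + 1)))
    assert abs(lhs - rhs) < 1e-9
print("character values of (1,1) at the x^r confirmed")
```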
Thus, apart from behavior at the boundaries, the walk moves from $(a,b)$ one step diagonally, with a drift upward and to the right: for $a,b<p-1$ the transition probabilities are
\begin{equation}\label{gener}
\begin{tabular}[t]{|c||c|c|c|c|}
\hline
& $(a-1,b-1)$ & $(a-1,b+1)$ & $(a+1,b-1)$ & $(a+1,b+1)$ \\
\hline \hline
$\mathsf{K}((a,b),\cdot)$ & $\frac{ab}{4(a+1)(b+1)}$ & $\frac{a(b+2)}{4(a+1)(b+1)}$ & $\frac{(a+2)b}{4(a+1)(b+1)}$ &
$\frac{(a+2)(b+2)}{4(a+1)(b+1)}$ \\
\hline
\end{tabular}
\end{equation}
\smallskip
\noindent At the boundaries, the probabilities change: for example, $\mathsf{K}((0,0),(1,1)) = 1$ and for the Steinberg module $\mathsf{St} = (p-1,p-1)$,
{\small \[
\begin{tabular}[t]{|c||c|c|c|c|c|c|}
\hline \hline
&$(p-2,p-2)$ & $(p-3,0)$ & $(p-1,0)$ & $(0,p-3)$ & $(0,p-1)$ & $(1,1)$ \\
\hline
$\mathsf{K}(\mathsf{St},\cdot)$ & $\frac{4(p-1)^2}{4p^2}$ &$\frac{2(p-2)}{4p^2}$ &$\frac{2p}{4p^2}$ &$\frac{2(p-2)}{4p^2}$ &$\frac{2p}{4p^2}$ &$\frac{4}{4p^2}$ \\
\hline
\end{tabular}
\]}
\begin{table}[h]
\caption{ Tensoring with $(1,1)$} \label{11tens}
\begin{tabular}[t]{|c||c|}
\hline
& $(a,b) \otimes (1,1)$ \\
\hline \hline
$\small{\ a,b< p-1}$ & $\small{(a-1,b-1)/(a-1,b+1)/
(a+1,b-1)/(a+1,b+1)}$ \\
\hline
${\small a=p-1,}$ & \\
${\small b <p-2}$ & $\small{(p-2,b-1)^2/(p-2,b+1)^2/ (0,b)^2/(0,b-2)/(0,b+2)}$ \\
\hline ${\small a=p-1,}$ & \\
${\small b=p-2}$ & $\small{(p-2,p-3)^2/(p-2,p-1)^2/ (0,p-2)^2/(1,0)}$ \\
\hline
${\small a=b=p-1}$ & $\hspace{-.8cm} {\small (p-2,p-2)^4/(p-3,0)^2/(p-1,0)^2/ }$ \\
& $\qquad \quad {\small (0,p-3)^2/(0,p-1)^2/(1,1)}$ \\
\hline
\end{tabular}
\end{table}
Heuristically, this is a local walk with a slight drift, and intuition suggests that it should behave roughly like the simple random walk on a $p\times p$ grid (with a uniform stationary distribution) -- namely, order $p^2$ steps should be necessary and sufficient. The next result makes this intuition precise.
We need to make one adjustment, as the representation $(1,1)$ is not faithful. We patch this here with the `mixed chain' construction of Section \ref{3d}. Namely, let $\mathsf{K}$ be defined by `at each step, with probability $\frac{1}{2}$ tensor with $(1,1)$ and with probability $\frac{1}{2}$ tensor with $(1,0)$'.
\begin{thm}\label{T:(1,1)} Let $\mathsf{K}$ be the mixed Markov chain on $\mathsf{IBr}(\mathsf{SL}_2(p^2))$ defined above, started at the trivial module $(0,0)$. Then there are universal positive constants $A,A'$ such that for all $\ell \ge 1$,
\[
A\mathsf{e}^{-\frac{\pi^2\ell}{p^2}} \le \parallel\mathsf{K}^\ell-\pi\parallel_{{}_{\mathsf{TV}}} \le A' \mathsf{e}^{-\frac{\pi^2\ell}{p^2}}.
\]
\end{thm}
\begin{proof} The lower bound follows as in the proof of Theorem \ref{T:SL2(p^2)} using the same right eigenfunction as a test function. For the upper bound, use formula (\ref{eq1}), replacing the eigenvalues there by
\[
\begin{array}{l}
\beta_{x^r} =\half+\frac{1}{4}\left(\cos\left(\frac{2\pi r}{p-1}\right)+\cos \left(\frac{2\pi r}{p+1}\right) \right) = 1-\frac{\pi^2r^2}{p^2}+O\left(\frac{r^2}{p^3}\right) \\
\beta_{y^s} =\half+\frac{1}{4}\left(\cos\left( \frac{2\pi s(p+1)}{p^2+1}\right)+\cos \left(\frac{2\pi s(p-1)}{p^2+1}\right) \right) = 1-\frac{\pi^2s^2}{p^2}+O\left(\frac{s^2}{p^3}\right).
\end{array}
\]
Now the same approximations to $\mathsf{p}_{(a,b)}(x^r), \mathsf{p}_{(a,b)}(y^s)$ work in the same way to give the stated result. We omit further details.
\end{proof}
\begin{remark} \ {\rm For the walk just treated (tensoring with $(1,1)$ for $\mathsf{SL}_2(p^2)$), the generic behavior away from the boundary is given in (\ref{gener}) above. Note that this exactly factors into the product of two one-dimensional steps of the walk on $\mathsf{SL}_2(p)$ studied in Section \ref{3c}: $\mathsf{K}\left((a,b),(a',b')\right) = \mathsf{K}(a,a')\mathsf{K}(b,b')$.
In the large $p$ limit, this becomes the walk on $\left(\mathbb{N}\cup\{0\}\right) \times \left(\mathbb{N}\cup\{0\}\right)$ arising from $\mathrm{SU}_2(\mathbb{C}) \times \mathrm{SU}_2(\mathbb{C})$ by tensoring with the 4-dimensional module $1\otimes 1$. Rescaling space by $\frac{1}{\sqrt{n}}$ and time by $\frac{1}{n}$, we have that the Markov chain on $\mathsf{SL}_2(p^2)$ converges to the product of two Bessel processes, as discussed in the Introduction.} \end{remark}
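The factorization noted in the remark can be verified mechanically. The sketch below is our own illustration; it assumes the generic one-dimensional kernel of the $\mathsf{SL}_2(p)$ walk of Section \ref{3c}, namely $\mathsf{K}(a,a-1)=\frac{a}{2(a+1)}$ and $\mathsf{K}(a,a+1)=\frac{a+2}{2(a+1)}$ away from the boundary.

```python
from fractions import Fraction as F

def K1(a):
    # generic step of the SL_2(p) walk (Section 3c), away from the boundary:
    # V(a) (x) V(1) = V(a-1) + V(a+1), weighted by dimension
    return {a - 1: F(a, 2 * (a + 1)), a + 1: F(a + 2, 2 * (a + 1))}

def K2(a, b):
    # generic step (gener) of the SL_2(p^2) walk tensoring with (1,1)
    d = 4 * (a + 1) * (b + 1)
    return {(a - 1, b - 1): F(a * b, d), (a - 1, b + 1): F(a * (b + 2), d),
            (a + 1, b - 1): F((a + 2) * b, d), (a + 1, b + 1): F((a + 2) * (b + 2), d)}

for a in range(1, 8):
    for b in range(1, 8):
        product = {(x, y): px * py for x, px in K1(a).items()
                                   for y, py in K1(b).items()}
        assert product == K2(a, b)
print("K((a,b),(a',b')) = K(a,a') K(b,b') away from the boundary")
```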
\section{$\mathsf{SL}_2(2^n)$} \label{sl22nsec}
\subsection{Introduction} \label{2nint}
Let $\mathsf{G} = \mathsf{SL}_2(2^n)$, $q=2^n$, and $\mathbb k$ be an algebraically closed field of characteristic 2. The irreducible $\mathbb k\mathsf{G}$-modules are described as follows: let $\mathsf{V}_1$ denote the natural 2-dimensional module, and for $2\le i\le n$, let $\mathsf{V}_i$ be the Frobenius twist of $\mathsf{V}_1$ by the field automorphism $\alpha \mapsto \alpha^{2^{i-1}}$. Set $N = \{1,\ldots,n\}$, and for $I = \{i_1<i_2 <\ldots <i_k\} \subseteq N$ define
$\mathsf{V}_I = \mathsf{V}_{i_1} \otimes \mathsf{V}_{i_2} \otimes \cdots \otimes \mathsf{V}_{i_k}$.
By Steinberg's tensor product theorem (\cite[\S 16.2]{MT}), the $2^n$ modules $\mathsf{V}_I$ form a complete set of inequivalent irreducible $\mathbb k\mathsf{G}$-modules.
Their Brauer characters and projective indecomposable covers will be described in Section \ref{2nreps}.
Consider now the Markov chain arising from tensoring with the module $\mathsf{V}_1$. Denoting $\mathsf{V}_I$ by the corresponding binary $n$-tuple $\underline x = \underline x_I$ (with 1's in the positions in $I$ and 0's elsewhere), the walk moves as follows:
\begin{equation}\label{eq:walkmoves} \end{equation}
\vspace{-1cm}
\begin{itemize}
\item[(1)] from $\underline x = (0,\,*)$ go to $(1,\,*)$;
\item[(2)] if $\underline x$ begins with $i$ 1's, say $\underline x = (1^i,0,*)$, where $1\le i\le n-1$, flip fair coins until the first head occurs at time $k$: then
\begin{itemize}
\item[] if $1\le k\le i$, change the first $k$ 1's to 0's
\item[] if $k>i$, change the first $i$ 1's to 0's, and put 1 in position $i+1$;
\end{itemize}
\item[(3)] if $\underline x=(1,\ldots,1)$, proceed as in (2), but if $k>n$, change all 1's to 0's and put a 1 in position 1.
\end{itemize}
Pictured in Figure \ref{p2^3wk} is the walk for tensoring with $\mathsf{V}_1$ for $\mathsf{SL}_2(2^3).$ We remind the reader that a double-headed
arrow means that the module pointed to occurs with multiplicity 2.
\begin{figure}[h]
\label{p2^3wk}
$$\begin{tikzpicture}[scale=2,line width=1pt]
\tikzstyle{Trep}=[circle,
minimum size=.01mm,
draw= black,
fill=magenta!55]
\tikzstyle{norep}=[circle,
thick,
minimum size=1.25cm,
draw= white,
fill=white]
\matrix[row sep=.4cm,column sep=.4cm] {
& & \node(V0)[norep]{}; && && \node (V6)[norep]{}; \\
&&&&&&\\
\node (V2)[norep]{}; &&&&&\node(V7)[norep]{}; \\
&&&&&&& \\
&&&&&&& \\
&& \node (V3)[norep]{}; &&&& \node (V4)[norep]{}; \\
&&&&&&\\
\node (V5)[norep]{}; &&&&&\node (V8)[norep]{}; \\
};
\path (1.32,1.63) node(V61) {};
\path (-1.73,-1.6) node(V51) {};
\path (1.44,1.57) node(V62) {};
\path (.7,-1.5)node(V81) {};
\path
(V61)edge[black,thick,->>] (V51)
(V0) edge[black,thick,->] (V6)
(V6) edge[black,thick,->>] (V0)
(V6) edge[black,thick,->>] (V2)
(V62)edge[black,thick,->] (V81)
(V4) edge[black,thick,->] (V2)
(V2) edge[black,thick,->] (V7)
(V2) edge[cyan,dashed] (V0)
(V7) edge[black,thick,->>] (V2)
(V7) edge[black,thick,->] (V0)
(V3) edge[thick,->] (V4)
(V4) edge[black,thick,->>] (V3)
(V4) edge[black,thick,->>] (V5)
(V6) edge[cyan,dashed] (V4)
(V6) edge[cyan,dashed] (V7)
(V8) edge[black,thick,->>] (V5)
(V8) edge[black,thick,->] (V3)
(V5) edge[black,thick,->] (V8)
(V4) edge[cyan,dashed] (V8)
(V7) edge[cyan,dashed] (V8)
(V5) edge[cyan,dashed] (V3)
(V5) edge[cyan,dashed] (V2)
(V3) edge[cyan,dashed] (V0);
\draw (V5) node[black] {\small \color{magenta}\bf(0,0,0)};
\draw (V3) node[black] {\small \color{magenta}\bf \ \ \, (0,1,0)};
\draw (V7) node[black] { \small \color{magenta} \bf \ (1,0,1)};
\draw (V2) node[black] {\small \color{magenta} \bf (0,0,1)};
\draw (V4) node[black] {\small \color{magenta} \bf (1,1,0)};
\draw (V0) node[black]{\small \color{magenta} \bf(0,1,1)};
\draw (V6) node[black]{\small \color{magenta} \bf(1,1,1)};
\draw (V8) node[black] {\small \color{magenta} \bf(1,0,0)};
;
\end{tikzpicture}$$
\caption{Tensor walk on irreducibles of $\mathsf{SL}_2(2^3)$}
\end{figure}
We shall justify this description and analyze this walk in Section \ref{2nmark1}.
The walk generated by tensoring with $\mathsf{V}_j$ has the same dynamics, but starting at the $j^{th}$ coordinate of $x$ and proceeding cyclically. We shall see that all of these walks have the same stationary distribution, namely,
\begin{equation}\label{eq:2nstat} \displaystyle{ \pi(\underline{x}) = \begin{cases} \frac{q}{q^2-1} & \quad \text{if} \ \ \underline{x} \neq \underline{0}\\
\frac{1}{q+1} & \quad \text{if} \ \ \underline{x} = \underline{0}.\\
\end{cases}}
\end{equation}
Note that, perhaps surprisingly, this is essentially the uniform distribution for $q$ large.
Section \ref{2nreps} contains the necessary representation theory for $\mathsf{G}$, and in Sections \ref{2nmark1} and \ref{2nmarkj} we shall analyze the random walks generated by tensoring with $\mathsf{V}_1$ and with a randomly chosen $\mathsf{V}_j$.
\subsection{Representation theory for $\mathsf{SL}_2(2^n)$}\label{2nreps}
Fix elements $x,y \in \mathsf{G} = \mathsf{SL}_2(q)$ ($q=2^n$) of orders $q-1$ and $q+1$, respectively. The 2-regular classes of $\mathsf{G}$ have representatives $\mathbf{1}$ (the $2 \times 2$ identity matrix), $x^r$ ($1\le r\le \frac{q}{2}-1$) and $y^s$ ($1\le s\le \frac{q}{2}$). Define $\mathsf{V}_i$ and $\mathsf{V}_I$ ($I \subseteq N = \{1,\ldots,n\}$) as above, and let $\chi_i$, $\chi_I$ be the corresponding Brauer characters. Their values are given in Table \ref{br2n}.
\begin{table}[h]
\caption{Brauer characters of $\mathsf{SL}_2(q)$,\, $q=2^n$} \label{br2n}
\begin{tabular}[t]{|c||c|c|c|}
\hline
& $\mathbf{1}$ & $x^r \; \, (1 \le r \le\frac{q}{2} -1)$ & $y^s \; \, (1 \le s \le \frac{q}{2})$
\\
\hline \hline
$|\mathsf{C}_\mathsf{G}(c)|$ &
$q(q^2-1)$ & $q-1$ & $q+1$
\\ \hline
$\chi_{i}$ &
$2$ & ${2\cos\left(\frac{2^i \pi r}{q-1}\right)}$ & ${2\cos\left(\frac{2^i \pi s}{q+1}\right)}$
\\ \hline
$\chi_{I}$ &
$2^k$ & ${2^k\prod_{a=1}^k\cos\left(\frac{2^{i_a} \pi r}{q-1}\right)}$ & $2^k{\prod_{b=1}^k\cos\left(\frac{2^{i_b} \pi s}{q+1}\right)}$\\
$I = \{i_1,\ldots,i_k\}$& & &
\\ \hline
$\chi_{N}$ &
$2^n$ & $1$ & $-1$
\\ \hline \end{tabular}
\end{table}
The projective indecomposable modules are described as follows (see \cite{Al2}). Let $I = \{i_1,\ldots,i_k\} \subset N$, with $I \ne \emptyset, N$, and let $\bar I$ be the complement of $I$. Then the projective indecomposable cover $\mathsf{P}_{\bar I}$ of the irreducible module $\mathsf{V}_{\bar I}$
has character $\mathsf{p}_{\bar I} = \chi_I \otimes \chi_N$. The other projective indecomposables $\mathsf{P}_N$ and $\mathsf{P}_\emptyset$ are the covers of
the Steinberg module $\mathsf{V}_N$ and the trivial module $\mathsf{V}_\emptyset$, and their characters are
\[
\mathsf{p}_N = \chi_N,\quad \mathsf{p}_0 = \chi_N^2-\chi_N.
\]
The values of the Brauer characters of all the projectives are displayed in Table \ref{proj2n}.
\begin{table}[h]
\caption{Projective indecomposable characters of $\mathsf{SL}_2(q),\,q=2^n$} \label{proj2n}
\[
\begin{tabular}{|c||c|c|c|}
\hline
& $\mathbf{1}$ & $x^r \; \,(1 \le r \le\frac{q}{2} -1)$ & $y^s \; \, (1 \le s \le \frac{q}{2})$\\
\hline \hline
$\mathsf{p}_{\bar I},\, I \subset N$ & $2^k q$& $2^k\prod_{a=1}^k \cos \frac{2^{i_a}\pi r}{q-1}$ &
$-2^k\prod_{b=1}^k \cos \frac{2^{i_b}\pi s}{q+1}$ \\
$I = \{i_1,\ldots ,i_k\}$ &&& \\
\hline
$\mathsf{p}_N$ & $2^n$ & $1$ & $-1$ \\
\hline
$\mathsf{p}_0$ & $q^2-q$ & $0$ & $2$ \\
\hline
\end{tabular}
\]
\end{table}
From Tables \ref{br2n} and \ref{proj2n}, we see that the stationary distribution is as claimed in \eqref{eq:2nstat}:
\begin{align*} \pi(I) & = \frac{\mathsf{p}_I(\mathbf{1})\,\chi_{I}(\mathbf{1})}{|\mathsf{G}|} = \frac{2^{n- | I | + n+ |I|}}{q (q^2-1)} = \frac{q}{q^2-1} \quad \text{for} \ \ I \ne \emptyset, \\
\pi(\emptyset) &= \frac{q^2 - q}{q(q^2-1)} = \frac{1}{q+1}. \end{align*}
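The computation above can be confirmed exactly: the two masses sum to $1$, and their ratio is $\frac{q}{q-1}$, which is the sense in which $\pi$ is nearly uniform. A minimal check (illustrative only):

```python
from fractions import Fraction as F

# pi(0) = 1/(q+1), and each of the 2^n - 1 nonzero tuples carries q/(q^2-1)
for n in range(1, 16):
    q = 2 ** n
    assert F(1, q + 1) + (q - 1) * F(q, q * q - 1) == 1
    # the two masses differ only by a factor q/(q-1), so pi is nearly uniform
    assert F(q, q * q - 1) * (q + 1) == F(q, q - 1)
print("pi is a probability distribution for all tested n")
```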
Next we give the rules for decomposing the tensor product of an irreducible module $\mathsf{V}_I$ with $\mathsf{V}_1$.
These are proved using simple weight arguments, as in Sections \ref{3c} and \ref{4c}.
Suppose $I \ne \emptyset, N$, and let $i$ be maximal such that $\{1,2,\ldots, i\} \subseteq I$ (so $0\le i\le n-1$). Let $\underline x = \underline x_I$ be the corresponding binary $n$-tuple, so that $\underline x = (1^i,0,*)$ (starting with $i$ 1's). Then
\[
\mathsf{V}_I \otimes \mathsf{V}_1 = (0,1^{i-1},0,*)^2 / (0^21^{i-2},0,*)^2 /\cdots /(0^i,0,*)^2/(0^i,1,*).
\]
And for $I = \emptyset, N$, the rules are $\mathsf{V}_\emptyset \otimes \mathsf{V}_1 = \mathsf{V}_1$ and
\[
\mathsf{V}_N \otimes \mathsf{V}_1 = (0,1^{n-1})^2 / (0^21^{n-2})^2 /\cdots /(0^n)^2/(1,0^{n-1}).
\]
These rules justify the description of the Markov chain arising from tensoring with $\mathsf{V}_1$ given in \eqref{eq:walkmoves}.
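As a dimension count supporting these rules: $\mathsf{dim}\,\mathsf{V}_I = 2^{|I|}$, so the right-hand sides must account for $2^{|I|+1}$. The following sketch (our own bookkeeping, not part of the proof) checks this for all $I$:

```python
# Dimension bookkeeping for V_I (x) V_1: with dim V_I = 2^{|I|}, the summands
# (0^j 1^{i-j}, 0, *)^2 (1 <= j <= i) and (0^i, 1, *) must account for 2^{|I|+1}.
def wt(x):
    return sum(x)

def rhs_dim(x):
    n = len(x)
    i = next((j for j in range(n) if x[j] == 0), n)  # i maximal with {1..i} in I
    total = sum(2 * 2 ** wt((0,) * k + x[k:]) for k in range(1, i + 1))
    tail = ((0,) * i + (1,) + x[i + 1:]) if i < n else (1,) + (0,) * (n - 1)
    return total + 2 ** wt(tail)

for n in (3, 4, 5, 6):
    for m in range(2 ** n):
        x = tuple((m >> j) & 1 for j in range(n))
        assert rhs_dim(x) == 2 * 2 ** wt(x)
print("tensor rules are dimension-consistent")
```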
\subsection{Tensoring with $\mathsf{V}_1$: the Markov chain}\label{2nmark1}
In this section, we show that for the Markov chain arising from tensoring with $\mathsf{V}_1$, order $q^2$ steps are necessary and sufficient to reach stationarity. As explained above, the chain can be viewed as evolving on the $n$-dimensional hypercube according to the coin-tossing dynamics described in Section \ref{2nint}. Beginning at $\underline x=\underline 0$,
the chain slowly moves 1's to the right. The following theorem resembles the corresponding result for $\mathsf{SL}_2(p)$ (Theorem \ref{mainsl2p}), but the dynamics are very different.
\begin{thm}\label{mainsl2n}
Let $\mathsf{K}$ be the Markov chain on $\mathsf{IBr}(\mathsf{SL}_2(q))$ ($q=2^n$) defined by tensoring with the natural module $\mathsf{V}_1$, starting at the trivial module. Then
\begin{itemize}
\item[{\rm (a)}] for any $\ell\ge 1$,
\[
\parallel\mathsf{K}^\ell-\pi\parallel_{{}_\mathsf{TV}} \ge \frac{1}{2}\left(\cos\left(\frac{2\pi}{q-1}\right)\right)^\ell = \frac{1}{2}\left(1-\frac{2\pi^2}{q^2}+O\left(\frac{1}{q^3}\right)\right)^\ell
\]
\item[{\rm (b)}] there is a universal constant $A$ such that for any $\ell \ge q^2$,
\[
\parallel\mathsf{K}^\ell-\pi\parallel_{{}_\mathsf{TV}} \le A\mathsf{e}^{-\frac{\pi^2\ell}{q^2}}.
\]
\end{itemize}
\end{thm}
\begin{proof} From Proposition \ref{basicone}, the eigenvalues of $\mathsf{K}$ are indexed by the 2-regular class representatives, $\mathbf{1}$, $x^r$, $y^s$ of Section \ref{2nreps}. They are
$$\beta_{\mathbf{1}} = 1, \ \; \beta_{x^r} = \cos\left(\frac{2 \pi r}{q-1}\right) \ \, (1 \le r \le \frac{q}{2}-1), \;\; \beta_{y^s} = \cos\left(\frac{2 \pi s}{q+1}\right) \ \, (1 \le s \le \frac{q}{2}).$$
To determine a lower bound, use as a test function the right eigenfunction corresponding to the eigenvalue $\beta_{x} = \cos\left(\frac{2\pi}{q-1}\right)$,
which is defined on $\underline{x} = (x{(1)}, x{(2)}, \dots, x{(n)})$ by
$$f(\underline{x}) = \prod_{j=1}^n \cos \left( \frac{x{(j)}\, 2^{j} \pi}{q-1}\right).$$
(Here as in Section \ref{2nint}, we are identifying a subset $I$ of $N$ with its corresponding binary $n$-tuple
$\underline{x} = (x{(1)}, x{(2)}, \dots, x{(n)})$ having $1$'s in the positions of $I$ and 0's everywhere else.
Characters will carry $n$-tuple labels also, and we will write $\mathsf{K}(\underline{x}, \underline{y})$
rather than the cumbersome
$\mathsf{K}(\chi_{\underline{x}}, \chi_{\underline{y}})$.)
Clearly, $|| f ||_\infty \leq 1$. Further, the orthogonality relations (\ref{row}), (\ref{col}) for Brauer characters imply
$$\pi(f) = \sum_{\underline{x}} f(\underline{x}) \pi(\underline{x}) = \sum_{\underline{x}} \frac{\mathsf{p}_{\underline x}(\mathbf{1}) \chi_{\underline x}(\mathbf{1})}{|\mathsf{G}|}
\frac{ \chi_{\underline x}({\underline{x}})}{ \chi_{\underline x}(\mathbf{1}) } = 0,$$
where $\mathsf{p}_{\underline x}$ is the character of the projective indecomposable module indexed by $\underline{x}$. Then \eqref{eq:TV} in Appendix I implies
$$|| \mathsf{K}^\ell - \pi ||_{{}_\mathsf{TV}} \ge \half \,| \mathsf{K}^\ell(f) - \pi(f) |
= \half \left(\cos\left(\frac{2\pi}{q-1}\right) \right)^\ell.$$
This proves (a).
To prove the upper bound in (b), use Proposition \ref{basicone} (v):
\begin{equation}\label{eq:upb} \frac{\mathsf{K}^\ell(\underline 0,\underline y)}{\pi(\underline y)} - 1 = \sum_{c \neq \mathbf{1}} \beta_{c}^\ell \ \frac{\mathsf{p}_{\underline y}(c)}{\mathsf{p}_{\underline y}(\mathbf{1})}\
|c^G|,
\end{equation}
where the sum is over $p$-regular class representatives $c \ne \mathbf{1}$, and $|c^G|$ is the size of the class of $c$.
We bound the right-hand side of this for each $\underline y$. There are three different basic cases:
(i) $\underline{y} = \underline{0}$ (all $0$'s tuple corresponding to $\emptyset$), (ii) $\underline{y} = \underline{1}$ (all $1$'s tuple corresponding to $N$), and
(iii) $\underline{y} \ne \underline{0}, \underline{1}$:
\begin{align*} {\rm (i) }\; \frac{\mathsf{K}^\ell(\underline 0,\underline 0)}{\pi(\underline 0)} - 1 & = 2 \sum_{s=1}^{q/2} \cos^\ell\left(\frac{2\pi s}{q+1}\right), \\
{\rm (ii) }\; \frac{\mathsf{K}^\ell(\underline 0,\underline 1)}{\pi(\underline 1)} - 1 & = (q+1) \sum_{r=1}^{q/2-1}\cos^\ell\left(\frac{2\pi r}{q-1}\right) - (q-1)\sum_{s=1}^{q/2} \cos^\ell\left(\frac{2\pi s}{q+1}\right), \\
{\rm (iii) }\; \frac{\mathsf{K}^\ell(\underline 0,\underline y)}{\pi(\underline y)} - 1 & = (q+1) \sum_{r=1}^{q/2-1} \cos^\ell\left(\frac{2\pi r}{q-1}\right)
\prod_{a=1}^k \cos\left(\frac{2^{i_a}\pi r}{q-1}\right) \\
& \hspace{1.8cm} - (q-1)\sum_{s=1}^{q/2} \cos^\ell\left(\frac{2\pi s}{q+1}\right) \prod_{b=1}^k \cos\left(\frac{2^{i_b}\pi s}{q+1}\right), \end{align*}
where $\underline y$ has zeros in positions $i_1,i_2, \dots, i_k$. These formulas follow from \eqref{eq:upb} by using the sizes of the 2-regular classes from Table \ref{br2n} and the expressions for the projective characters in Table \ref{proj2n} (recall that the projective cover of $\mathsf{V}_{\bar I}$ has character $\chi_I \otimes \chi_N$). For example, when $\underline{y} = \underline{0}$, then
from Table \ref{proj2n}, $\mathsf{p}_{\underline 0}(x^r) = 0$ and $\mathsf{p}_{\underline 0}(y^s) = 2$, while $\mathsf{p}_{\underline 0}(\mathbf{1}) = q^2-q$, and the order of the class
of $y^s$ is $|c^G| = q(q-1)$. The other cases are similar.
The sum (i) (when $\underline{y} = \underline{0}$) is exactly the sum bounded for a simple random walk on $\mathbb{Z}/(q+1)\mathbb{Z}$; the work in \cite[Chap. 3]{Diacbk} shows
it is exponentially small when $\ell >> (q+1)^2$. The sum (ii) (corresponding to $\underline{y} = \underline{1}$) is just what was bounded in
proving Theorem \ref{mainsl2p}. Those bounds do not use the primality of $p$, and again $\ell >> q^2$ suffices.
For the sum in (iii) (general $\underline{y} \neq \underline{0}$ or $\underline{1}$), note that the products of the terms (for $r$ and $s$) are essentially the same and are at most 1 in
absolute value. It follows that the same pair-matching cancellation argument used for $\underline{y} = \underline{1}$ works to give the same bound.
Combining these arguments, the result is proved.
\end{proof}
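For a small case the spectral data in the proof can be checked directly. The sketch below (our own illustrative code) builds the $8 \times 8$ transition matrix for $q=8$ from the rules \eqref{eq:walkmoves} and compares its spectrum and stationary vector with the cosine eigenvalues above and with \eqref{eq:2nstat}.

```python
import math
import numpy as np
from fractions import Fraction as F

def step_dist(x):
    """One step of the walk (eq:walkmoves) from the binary tuple x."""
    n = len(x)
    if x[0] == 0:
        return {(1,) + x[1:]: F(1)}
    i = next((j for j in range(n) if x[j] == 0), n)   # leading 1's
    out = {}
    for k in range(1, i + 1):
        y = (0,) * k + x[k:]
        out[y] = out.get(y, F(0)) + F(1, 2 ** k)
    y = ((0,) * i + (1,) + x[i + 1:]) if i < n else (1,) + (0,) * (n - 1)
    out[y] = out.get(y, F(0)) + F(1, 2 ** i)
    return out

n = 3; q = 2 ** n
states = [tuple((m >> j) & 1 for j in range(n)) for m in range(q)]
K = np.array([[float(step_dist(x).get(y, 0)) for y in states] for x in states])

# eigenvalues predicted above: 1, cos(2 pi r/(q-1)), cos(2 pi s/(q+1))
predicted = sorted([1.0]
                   + [math.cos(2 * math.pi * r / (q - 1)) for r in range(1, q // 2)]
                   + [math.cos(2 * math.pi * s / (q + 1)) for s in range(1, q // 2 + 1)])
assert np.allclose(sorted(np.linalg.eigvals(K).real), predicted)

# the stationary distribution of (eq:2nstat)
pi = np.array([1 / (q + 1) if sum(x) == 0 else q / (q ** 2 - 1) for x in states])
assert np.allclose(pi @ K, pi)
print("spectrum and stationarity verified for q = 8")
```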
\subsection{Tensoring with a uniformly chosen $\mathsf{V}_j$.}\label{2nmarkj}
As motivation recall that the classical Ehrenfest urn can be realized as a simple random walk on the hypercube of binary $n$-tuples.
From an $n$-tuple $\underline{x}$ pick a coordinate at random, and change it to its opposite. Results of
\cite{DSh2} show that this walk takes $\frac{1}{4} n \log n + {\textsl{\footnotesize C}}\,n$ steps to converge, and there is a cutoff as ${\textsl{\footnotesize C}}$ varies.
We conjecture similar behavior for the walk derived from tensoring with a uniformly chosen simple $\mathsf{V}_j, \ 1 \leq j \le n$. As in \eqref{eq:upb},
\begin{equation}\label{asine} \frac{\mathsf{K}^\ell(\underline{0},\underline y)}{\pi(\underline y)} - 1 = \sum_{c \neq \mathbf{1}} \beta_c^\ell \,\frac{\mathsf{p}_{\underline y}(c)}{\mathsf{p}_{\underline y}(\mathbf{1})}\,
|c^G|
\end{equation}
and the eigenvalues $\beta_c$ are
$$\begin{gathered} \beta_\mathbf{1} =1, \quad \beta_{x^r} \, = \,\frac{1}{n} \sum_{i=0}^{n-1} \cos\left(\frac{2\pi 2^i r}{q-1}\right) \quad 1 \le r \le \frac{q}{2} -1, \\
\beta_{y^s}\, = \,
\frac{1}{n} \sum_{i=0}^{n-1} \cos\left(\frac{2\pi 2^i s}{q+1}\right) \quad 1 \le s \le \frac{q}{2}.\end{gathered}$$
Consider the eigenvalues closest to 1, which are $\beta_{x^r}$ with $r=1$ and $\beta_{y^s}$ with $s = 1$. It is easy to see that as $n$ goes to $\infty$,
$$\textstyle{\beta_x = 1 - \frac{\gamma}{n}\left(1+o(1)\right) \quad \text{with} \quad \gamma= \sum_{i=1}^\infty \left( 1- \cos\left(\frac{2 \pi}{2^i}\right)\right).}$$
Note further that the eigenvalues $\beta_{x^r}$ have multiplicities:
expressing $r$ as a binary number with $n$ digits, any cyclic permutation of these digits gives a value $r'$ for which $\beta_{x^r} = \beta_{x^{r'}}$. Hence, the multiplicity of $\beta_{x^r}$ is the number of different values $r'$ obtained in this way, and the number of distinct such eigenvalues is equal to the number of orbits of the cyclic group $\mathsf{Z}_n$ acting on $\mathsf{Z}_2^n$ by permuting coordinates cyclically. The number of orbits can be counted
by classical P\'olya theory: \, there are $\frac{1}{n}\sum_{d | n} \phi(d)\, 2^{n/d}$ of them, where $\phi$ is the Euler phi function.
Similarly, the eigenvalues $\beta_{y^s}$ have multiplicities. For example, $\beta_{y}$ has multiplicity $n$.
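The cyclic-orbit count can be checked against a brute-force enumeration. A small illustrative script (ours):

```python
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def orbit_count(n):
    # Burnside/Polya count of rotation orbits on binary n-tuples
    return sum(phi(d) * 2 ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def brute_force(n):
    seen, count = set(), 0
    for m in range(2 ** n):
        x = tuple((m >> j) & 1 for j in range(n))
        if x not in seen:
            count += 1
            seen.update(x[i:] + x[:i] for i in range(n))
    return count

for n in range(1, 13):
    assert orbit_count(n) == brute_force(n)
print([orbit_count(n) for n in range(1, 9)])  # prints [2, 3, 4, 6, 8, 14, 20, 36]
```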
Turning back to our walk, take $\underline{y} =\underline{0}$ in (\ref{asine}).
Then, because $\mathsf{p}_{\underline 0}(x^r)=0$, $$\frac{\mathsf{K}^\ell( \underline{0},\underline{0})}{\pi(\underline{0})} -1 = 2 \sum_{s = 1}^{q/2} \beta_{y^s}^\ell,$$
and the eigenvalue closest to 1 occurs when $s=1$ and $\beta_{y}$ has multiplicity $n$. The dominant term in this sum is thus $2n\big(1-\gamma(1+o(1))/n\big)^\ell$.
This takes $\ell = n\log n + {\textsl{\footnotesize C}}n$ to get to $\mathsf{e}^{-{\textsl{\footnotesize C}}}$. We have not carried out further details but remark that very similar sums are considered
by Hough \cite{Ho} where he finds a cutoff for the walk on the cyclic group $\mathsf{Z}_p$ by adding
$\pm 2^i$, for $0 \le i \le m = \lfloor \log_2p \rfloor$, chosen uniformly with probability $\frac{1}{2m}$.
\section{$\mathsf{SL}_3(p)$}\label{sl3psec}
\subsection{Introduction} This section treats a random walk on the irreducible modules for the group $\mathsf{SL}_3(p)$ over an algebraically closed field $\mathbb k$ of characteristic $p$. The walk is generated by repeatedly tensoring with the 3-dimensional natural module. The irreducible Brauer characters and projective indecomposables are given by Humphreys in \cite{H}; the theory is quite a bit more complicated than that of $\mathsf{SL}_2(p)$.
The irreducible modules are indexed by pairs $(a,b)$ with $0\le a,b \le p-1$. For example, $(0,0)$ is the trivial module, $(1,0)$ is a natural 3-dimensional module, and $(p-1,p-1)$ is the Steinberg module of dimension $p^3$. The Markov chain is given by tensoring with $(1,0)$. Here is a rough description of the walk; details will follow. Away from the boundary, for $1<a,b<p-1$, the walk is local, and $(a,b)$ transitions only to $(a-1,b+1)$, $(a+1,b)$ or $(a,b-1)$. The transition probabilities $\mathsf{K}((a,b), (a',b'))$ show a drift towards the diagonal $a=b$, and on the diagonal, a drift diagonally upward. Furthermore, there is a kind of discontinuity at the line $a+b=p-1$: for $a+b\le p-2$, the transition probabilities (away from the boundary) are:
\begin{equation}\label{eq:tran1}
\begin{tabular}{|c||c|}
\hline
$(c,d)$ & $\mathsf{K}((a,b),(c,d))$ \\
\hline \hline
$(a-1,b+1)$ & $ \frac{1}{3}\left(1-\frac{1}{a+1}\right)\left(1+\frac{1}{b+1}\right)$ \\
\hline
$(a+1,b)$ & $\frac{1}{3}\left(1+\frac{1}{a+1}\right)\left(1+\frac{1}{a+b+2}\right)$ \\
\hline
$(a,b-1)$ & $\frac{1}{3}\left(1-\frac{1}{b+1}\right)\left(1-\frac{1}{a+b+2}\right)$ \\
\hline
\end{tabular}
\end{equation}
whereas for $a+b\ge p$ they are as follows, writing $f(x,y) = \frac{1}{2}xy(x+y)$:
\begin{equation}\label{eq:tran2}
\begin{tabular}{|c||c|}
\hline
$(c,d)$ & $\mathsf{K}((a,b),(c,d))$ \\
\hline \hline
$(a-1,b+1)$ & $ \frac{1}{3}\left(\frac{f(a,b+2)-f(p-a,p-b-2)}{f(a+1,b+1)-f(p-a-1,p-b-1)}\right)$ \\ \hline
$(a+1,b)$ & $\frac{1}{3}\left(\frac{f(a+2,b+1)-f(p-a-2,p-b-1)}{f(a+1,b+1)-f(p-a-1,p-b-1)}\right)$ \\ \hline
$(a,b-1)$ & $\frac{1}{3}\left(\frac{f(a+1,b)-f(p-a-1,p-b)}{f(a+1,b+1)-f(p-a-1,p-b-1)}\right)$\\
\hline
\end{tabular}
\end{equation}
The stationary distribution $\pi$ can be found in Table \ref{sl3stat}.
As a local walk with a stationary distribution of polynomial growth, results of Diaconis--Saloff-Coste \cite{DSa}
show that
(diameter)$^2$ steps are necessary and sufficient for convergence to stationarity. The analytic expressions below confirm this (up to logarithmic terms).
Section \ref{6a} describes the $p$-regular classes and the irreducible and projective indecomposable Brauer characters, following Humphreys \cite{H}, and also the decomposition of tensor products $(a,b) \otimes (1,0)$. These results are translated into Markov chain language in Section \ref{6b}, where a complete description of the transition kernel and stationary distribution appears, and the convergence analysis is carried out.
\subsection{$p$-modular representations of $\mathsf{SL}_3(p)$}\label{6a}
For ease of presentation, we shall assume throughout that $p$ is a prime congruent to 2 modulo 3 (so that $\mathsf{SL}_3(p) = \mathsf{PSL}_3(p)$). For $p\equiv 1\hbox{ mod }3$, the theory is very similar, with minor notational adjustments. The material here largely follows from the information given in \cite[Section 1]{H}.
\subsubsection*{(a) \ $p$-regular classes}\label{pregsl3}
Let $\mathsf{G} = \mathsf{SL}_3(p)$, of order $p^3(p^3-1)(p^2-1)$, and assume $x,y \in \mathsf{G}$ are fixed elements of orders $p^2+p+1$, $p^2-1$, respectively.
Let $\mathbf{1}$ be the $3 \times 3$ identity matrix. Assume $J$ and $K$ are sets of representatives of the nontrivial orbits of the $p^{th}$-power map on the cyclic groups $\langle x\rangle$ and $\langle y\rangle$, respectively. Also, for $\zeta,\eta \in \mathbb{F}_p^*$, let $z_{\zeta,\eta}$ be the diagonal matrix ${\rm diag}(\zeta,\eta,\zeta^{-1}\eta^{-1}) \in \mathsf{G}$. Then the representatives and centralizer orders of the $p$-regular classes of $\mathsf{G}$ are as follows:
\[
\begin{array}{|c|c|c|}
\hline
\hbox{representatives} & \hbox{ no. of classes} & \hbox{centralizer order} \\
\hline
\hline
\mathbf{1} & 1 & |\mathsf{G}| \\
\hline
x^r \in J & \frac{p^2+p}{3} & p^2+p+1 \\
\hline
y^s \in K & \frac{p^2-p}{2} & p^2-1 \\
\hline
z_{\zeta,\zeta}\ (\zeta\in \mathbb{F}_p^*, \ \zeta \ne 1) & p-2 & p(p^2-1)(p-1) \\
\hline
z_{\zeta,\eta}\,(\zeta,\eta,\zeta^{-1}\eta^{-1} \hbox{ distinct}) & \frac{(p-2)(p-3)}{6} & (p-1)^2 \\
\hline
\end{array}
\]
\subsubsection*{(b) Irreducible modules and dimensions}\label{irrsl3}
As mentioned above, the irreducible $\mathbb k\mathsf{G}$-modules are indexed by pairs $(a,b)$ for $0\le a,b\le p-1$. Denote by $\mathsf{V}(a,b)$ or just $(a,b)$ the corresponding irreducible module. The dimension of $\mathsf{V}(a,b)$ is given in Table \ref{dimab}, expressed in terms of the function $f(x,y) = \frac{1}{2}xy(x+y)$.
\begin{table}[h]
\caption{Dimensions of irreducible $\mathsf{SL}_3(p)$-modules with $f(x,y) = \frac{1}{2}xy(x+y)$}
\label{dimab}
\[
\begin{array}{|c|c|}
\hline
(a,b) & \, \mathsf{dim}(\mathsf{V}(a,b)) \\
\hline
\hline
(a,0),\,(0,a) & f(a+1,1) \\
\hline
(p-1,a),\,(a,p-1) & f(a+1,p) \\
\hline
(a,b),\,a+b\le p-2 & f(a+1,b+1) \\
\hline
(a,b),\,a+b\ge p-1,& f(a+1,b+1)-f(p-a-1,p-b-1) \\
1\le a,b\le p-2 & \\
\hline
\end{array}
\]
\end{table}
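The rows of Table \ref{dimab} fit together consistently: the boundary rows agree with the truncated formula of the last row, and the Steinberg module has dimension $p^3$. A short check (illustrative, with an arbitrary choice of $p$):

```python
def f(x, y):
    # f(x,y) = xy(x+y)/2, always an integer
    return x * y * (x + y) // 2

def dim(p, a, b):
    """Dimension of the irreducible SL_3(p)-module (a,b), per Table dimab."""
    if a + b <= p - 2:
        return f(a + 1, b + 1)
    return f(a + 1, b + 1) - f(p - a - 1, p - b - 1)

p = 11
assert dim(p, p - 1, p - 1) == p ** 3            # Steinberg module
for a in range(p):                               # boundary rows agree with the
    assert dim(p, p - 1, a) == f(a + 1, p)       # generic truncated formula
    assert dim(p, a, p - 1) == f(a + 1, p)
assert dim(p, 1, 0) == dim(p, 0, 1) == 3         # natural module and its dual
print("dimension table checks out for p =", p)
```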
The Steinberg module $\mathsf{St} = (p-1,p-1)$ has Brauer character
\begin{equation}\label{stsl3}
\begin{array}{|c||c|c|c|c|c|}
\hline
& \mathbf{1} & x^r & y^s & z_{\zeta,\zeta} & z_{\zeta,\eta} \\
\hline
\mathsf{St} & p^3 & 1 & -1 & p & 1 \\
\hline
\end{array}
\end{equation}
\subsubsection*{(c) \ Projective indecomposables}\label{projsl3}
Denote by $\mathsf{p}_{(a,b)}$ the Brauer character of the projective indecomposable cover of the irreducible $(a,b)$. To describe these, we need to introduce some notation. For any $r,j,\ell,m$ define
\begin{align}\label{eq:tuv}
&\mathsf{t}_r = q_1^r+q_1^{pr}+ q_1^{p^2r} &\; \text{where} \ \ &q_1 = \mathsf{e}^{2\pi i/(p^2+p+1)},\nonumber \\
&\mathsf{u}_j = q_2^j+q_2^{pj} &\text{where} \; \ &q_2 = \mathsf{e}^{2\pi i/(p^2-1)},\\
&\mathsf{u}_j' = q_2^j+q_2^{pj} + q_2^{-j(p+1)} &\,\text{where} \ \ &q_2 = \mathsf{e}^{2\pi i/(p^2-1)}, \nonumber\\
&\mathsf{v}_{\ell,m} =q_3^\ell+q_3^m+q_3^{-\ell-m} &\,\text{where} \ \ &q_3 = \mathsf{e}^{2\pi i/(p-1)}.\nonumber
\end{align}
Now for $0\le a,b \le p-1$, define the function $\mathsf{s}(a,b)$ on the $p$-regular classes of $\mathsf{G}$ as in Table \ref{sab}.
Then the projective indecomposable characters $\mathsf{p}_{(a,b)}$ are as in Table \ref{rabs}.
\begin{table}[h]
\caption{The function $\mathsf{s}(a,b)$}\label{sab}
{\small \[
\begin{tabular}{|c||c|c|c|c|c|}
\hline
& $\mathbf{1}$ & $x^r $& $y^s$ & $z_{\zeta^k,\zeta^k}$ &$z_{\zeta^\ell,\zeta^m}\,(\ell\ne m)$ \\
\hline \hline
$\mathsf{s}(0,0)$ & 1&1&1&1&1 \\
\hline
$\mathsf{s}(a,0)$ & 3 & $\mathsf{t}_{ar}$ &$\mathsf{u}_{as}'$ &$\mathsf{v}_{ak,ak}$ &$\mathsf{v}_{a\ell,am}$ \\
$a\ne 0$ &&&&& \\
\hline
$\mathsf{s}(0,b)$& 3 & $\mathsf{t}_{-br}$ & $\mathsf{u}_{-bs}'$ & $\mathsf{v}_{-bk,-bk}$ & $\mathsf{v}_{-b\ell,-bm}$ \\
$b\ne 0$ &&&&& \\
\hline
$\mathsf{s}(a,b)$ & 6 & $\mathsf{t}_{r(a-bp)}$ &
$\mathsf{u}_{s(a+b+bp)}$ & $2\mathsf{v}_{k(a+2b),k(a-b)}$ &$\mathsf{v}_{\ell(a+b)+mb,-\ell b+ma}$\\
$ab\ne 0$ && $+ \mathsf{t}_{r(ap-b)}$& $+\mathsf{u}_{s(a-bp)}$ & & $+\mathsf{v}_{\ell b+m(a+b),-\ell a-mb}$ \\
&&& $+\mathsf{u}_{s(-a(1+p)-b)}$ && \\
\hline
\end{tabular}
\]}
\end{table}
Table \ref{rabs} displays the projective characters. There, $\mathsf{St}$ stands for the character of
the (irreducible and projective) Steinberg module $(p-1,p-1)$ (see \eqref{stsl3}) and $\mathsf{s}(a,b)$ is the function in Table \ref{sab}.
\begin{table}[h]
\caption{Projective indecomposable Brauer characters $\mathsf{p}_{(a,b)}$ for $\mathsf{SL}_3(p)$}\label{rabs}
\[
\begin{tabular}{|c|c|c|}
\hline
$(a,b)$ & $\mathsf{p}_{(a,b)}$ & $\text{dimension}$\\
\hline \hline
$(p-1,p-1)$ & $\mathsf{St}$ &$p^3$\\
\hline
$(p-1,0)$ & $\left(\mathsf{s}(p-1,0)-\mathsf{s}(0,0)\right)\,\mathsf{St}$ & $2p^3$ \\
\hline
$(p-2,0)$ & $\left(\mathsf{s}(p-1,1)-\mathsf{s}(0,1)\right)$\,$\mathsf{St}$ & $3p^3$ \\
\hline
$(0,0)$ & $\big(\mathsf{s}(p-1,p-1)+\mathsf{s}(1,1)+\mathsf{s}(0,0)$ & $7p^3$ \\
& $-\mathsf{s}(p-1,0)-\mathsf{s}(0,p-1)\big)\,\mathsf{St}$ &\\
\hline
$(a,0)$ &$\big(\mathsf{s}(p-1,p-a-1)+\mathsf{s}(a+1,1)$ & $9p^3$ \\
$0<a<p-2$ &$-\mathsf{s}(0,p-a-1)\big)\,\mathsf{St}$ & \\
\hline
$(a,b),\,ab\ne 0$ &$\mathsf{s}(p-b-1,p-a-1)\,\mathsf{St}$ &$6p^3$ \ ($3p^3$ if $a$ or $b$ is $p-1$)\\
$a+b\ge p-2$ & & \\ \hline
$(a,b),\,ab\ne 0$ &$\big(\mathsf{s}(p-b-1,p-a-1)$ & $12p^3$ \\
$a+b< p-2$ &
$+\mathsf{s}(a+1,b+1)\big)\,\mathsf{St}$ & \\
\hline
\end{tabular}
\]
\end{table}
\subsubsection*{(d) \ 3-dimensional Brauer character}\label{3dsl3}
The Brauer character of the irreducible 3-dimensional representation $\alpha =\chi_{(1,0)}$ is:
\begin{equation}\label{br3}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
& $\mathbf{1}$ & $x^r$ & $y^s$ & $z_{\zeta^k,\zeta^k}$ & $z_{\zeta^\ell,\zeta^m}$ \\
\hline
$\alpha$ & 3& $\mathsf{t}_r$& $\mathsf{u}_{s}'$ & $\mathsf{v}_{k,k}$ &$\mathsf{v}_{\ell,m}$ \\
\hline
\end{tabular}
\end{equation}
where $\zeta$ is a fixed element of $\mathbb{F}_p^*$, $\zeta \neq 1$.
\subsubsection*{(e) Tensor products with $(1,0)$}\label{10sl3}
The basic rule for tensoring an irreducible $\mathsf{SL}_3(p)$-module $(a,b)$ with $(1,0)$ is
\[
(a,b) \otimes (1,0) = (a-1,b+1) / (a+1,b) / (a,b-1),
\]
but there are many tweaks to this rule at the boundaries (i.e., when $a$ or $b$ is $0$, $1$, or $p-1$), and also when $a+b=p-2$. The complete information is given in Table \ref{tens10}.
\begin{table}[h]
\caption{Tensor products with $(1,0)$}\label{tens10}
{\small \[
\begin{tabular}{|c|c|}
\hline \hline
$(a,b)$ & $(a,b) \otimes (1,0)$ \\
\hline \hline
$ab\ne 0,\,a+b\le p-3$ & $(a-1,b+1) / (a+1,b) / (a,b-1)$ \\
$\text{or }a+b\ge p-1,\,2\le a,b\le p-2$ & \\
\hline \hline
$ab\ne 0,\,a+b=p-2$ & $(a-1,b+1) / (a+1,b) / (a,b-1)^2$ \\
\hline \hline
$(a,0),\,a\le p-2$ & $(a-1,1)/(a+1,0)$ \\
\hline
$(p-1,0)$ & $(p-2,1)^2/(p-3,0)/(1,0)$ \\
\hline \hline
$(0,b),\,b\le p-3$ & $(1,b)/(0,b-1)$ \\ \hline
$(0,p-2)$ & $(1,p-2)/(0,p-3)^2$ \\ \hline
$(0,p-1)$ & $(1,p-1)/(0,p-2)$ \\
\hline \hline
$(1,p-1)$ & $(1,p-2)^2/(2,p-1)/(0,p-3)/(0,1)$ \\ \hline
$(1,p-2)$ & $(2,p-2)/(0,p-1)$ \\ \hline
$(p-1,1)$ & $(p-2,2)^2/(p-1,0)/(p-4,0)/(1,1)/(0,0)$ \\ \hline
$(p-2,1)$ & $(p-3,2)/(p-1,1)$ \\ \hline
\hline
$(p-1,b),\,2\le b\le p-3$ & $(p-2,b+1)^2/(p-1,b-1)/(p-3-b,0)/$ \\
& $(1,b)/(0,b-1)$ \\ \hline
$(a,p-1),\,2\le a\le p-2$ & $(a,p-2)^2/(a+1,p-1)/(a-1,1)/$ \\
& $(a-2,0)/(0,p-a-2)$ \\ \hline
$(p-1,p-2)$ & $(p-2,p-1)^2/(0,p-3)^2/(p-1,p-3)/(1,p-2)$ \\ \hline
$(p-1,p-1)$ & $(p-1,p-2)^3/(p-2,1)^2/(1,p-1)/$ \\
& $(p-3,0)^4/(0,p-2)$ \\
\hline
\end{tabular}
\]}
\end{table}
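The dimension bookkeeping in Table \ref{tens10} can be spot-checked numerically: on each row, $3\,\mathsf{dim}(a,b)$ must equal the sum of the dimensions of the listed composition factors. The following Python sketch is an illustration only (not part of the argument); it assumes the dimension formula for the restricted irreducibles of $\mathsf{SL}_3(p)$ recorded in Table \ref{dimab}, namely $f(a+1,b+1)$ with $f(x,y)=\frac{1}{2}xy(x+y)$, truncated by $-f(p-1-a,p-1-b)$ when $a+b\ge p-1$.

```python
def f(x, y):
    # f(x, y) = xy(x + y)/2, the dimension polynomial used throughout
    return x * y * (x + y) // 2

def dim(a, b, p):
    # dimension of the restricted irreducible (a, b) of SL_3(p):
    # f(a+1, b+1), truncated when a + b >= p - 1
    if a + b <= p - 2:
        return f(a + 1, b + 1)
    return f(a + 1, b + 1) - f(p - 1 - a, p - 1 - b)

p = 11

# generic row of Table tens10: ab != 0, a + b <= p - 3
for (a, b) in [(2, 3), (4, 4), (1, 5)]:
    assert 3 * dim(a, b, p) == dim(a - 1, b + 1, p) + dim(a + 1, b, p) + dim(a, b - 1, p)

# two boundary rows: (p-1, 0) and the Steinberg row (p-1, p-1)
assert 3 * dim(p - 1, 0, p) == 2 * dim(p - 2, 1, p) + dim(p - 3, 0, p) + dim(1, 0, p)
assert 3 * dim(p - 1, p - 1, p) == (3 * dim(p - 1, p - 2, p) + 2 * dim(p - 2, 1, p)
                                    + dim(1, p - 1, p) + 4 * dim(p - 3, 0, p)
                                    + dim(0, p - 2, p))
```

The same comparison goes through for every row of the table and for other primes $p \ge 5$.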
We shall need the following estimates.
\begin{lemma}\label{angles}
Let $n \geq 7$ be an integer, and let $L := \{ 2\pi j/n \mid j \in \mathbb{Z}\}$.
\begin{enumerate}
\item[\rm(i)] If $0 \leq x \leq \pi/3$ then $\sin(x) \geq x/2$ and $\cos(x) \leq 1-x^2/4$.
\item[\rm (ii)] Suppose $x \in L \smallsetminus 2\pi\mathbb{Z}$. Then $\cos(x) \leq 1 - \pi^2/n^2$. Furthermore,
$$|2 + \cos(x)| \leq 3-\pi^2/n^2,~|1 + 2\cos(x)| \leq 3-2\pi^2/n^2.$$
\item[\rm(iii)] Suppose that $x,y,z \in L$ with $x+y+z \in 2\pi\mathbb{Z}$ but at least one of
$x,y,z$ is not in $2\pi\mathbb{Z}$. Then $|\cos(x)+\cos(y)+\cos(z)| \leq 3-2\pi^2/n^2$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Note that if $f(x) := \sin(x)-x/2$ then $f'(x) = \cos(x)-1/2 \geq 0$ on $[0,\pi/3]$, whence $f(x) \geq f(0) = 0$ on
the same interval.
Next, for $g(x):= (1-x^2/4)-\cos(x)$ we have $g'(x) = f(x)$, whence $g(x) \geq g(0) = 0$ for
$0 \leq x \leq \pi/3$.
(ii) Replacing $x$ by $2\pi k \pm x$ for a suitable $k \in \mathbb{Z}$, we may assume that $2\pi/n \leq x \leq \pi$. If moreover
$x \geq \pi/3$, then $\cos(x) \leq 1/2 < 1-\pi^2/n^2$ as $n \geq 5$. On the other hand, if $2\pi/n \leq x \leq \pi/3$, then
by (i) we have $\cos(x) \leq 1-x^2/4 \leq 1-\pi^2/n^2$, proving the first claim. Now
$$1 \leq 2 +\cos(x) \leq 3-\pi^2/n^2,~-1 \leq 1 +2\cos(x) \leq 3-2\pi^2/n^2$$
establishing the second claim.
(iii) Subtracting multiples of $2\pi$ from $x,y,z$ we may assume that $0 \leq x,y,z < 2\pi$ and
$x+y+z \in \{2\pi,4\pi\}$. If moreover one of them equals $0$, say $x = 0$, then $0 < y < 2\pi$ and
$$|\cos(x)+\cos(y)+\cos(z)| = |1 + 2\cos(y)| \leq 3-2\pi^2/n^2$$
by (ii). So we may assume $0 < x \leq y \leq z < 2\pi$. This implies by (ii) that
$$\cos(x) + \cos(y) + \cos(z) \leq 3-3\pi^2/n^2.$$
If moreover $x \leq 2\pi/3$, then $\cos(x) \geq -1/2$ and so
\begin{equation}\label{cos-1}
\cos(x) + \cos(y)+\cos(z) \geq -5/2 > -(3-2\pi^2/n^2)
\end{equation}
as $n \geq 7$, and we are done. Consider the remaining case $x > 2\pi/3$; in particular,
$x+y+z = 4\pi$. It follows that $4\pi/3 \leq z < 2\pi$, so $\cos(z) \geq -1/2$, whence
\eqref{cos-1} holds and we are done again.
\end{proof}
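The inequalities of Lemma \ref{angles}(ii),(iii) are also easy to confirm numerically over a range of $n$; the following Python sketch (an illustration, independent of the proof above) enumerates all of $L$ modulo $2\pi$:

```python
import math

# Check Lemma angles for n = 7, ..., 39:
# (ii)  cos(x) <= 1 - pi^2/n^2 for x in L not a multiple of 2*pi;
# (iii) |cos x + cos y + cos z| <= 3 - 2*pi^2/n^2 whenever x + y + z
#       is a multiple of 2*pi and not all of x, y, z are.
for n in range(7, 40):
    for j in range(1, n):
        x = 2 * math.pi * j / n
        assert math.cos(x) <= 1 - math.pi**2 / n**2 + 1e-12
    for j in range(n):
        for k in range(n):
            l = (-j - k) % n          # forces x + y + z into 2*pi*Z
            if j == k == l == 0:
                continue
            s = (math.cos(2 * math.pi * j / n) + math.cos(2 * math.pi * k / n)
                 + math.cos(2 * math.pi * l / n))
            assert abs(s) <= 3 - 2 * math.pi**2 / n**2 + 1e-12
```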
\subsection{The Markov chain}\label{6b}
Consider now the Markov chain on $\mathsf{IBr}(\mathsf{SL}_3(p))$ given by tensoring with $(1,0)$. The transition matrix has entries
\[
\mathsf{K}((a,b), (a',b')) = \frac{\langle (a',b'),\,(a,b)\otimes (1,0)\rangle \,\, \mathsf{dim}(a',b')}{3\, \mathsf{dim}(a,b)},
\]
and from the information in Tables \ref{dimab} and \ref{tens10}, we see that away from the boundaries (i.e., for $a,b \ne 0,1,p-1$) the transition probabilities are as in \eqref{eq:tran1} and \eqref{eq:tran2}. The probabilities at the boundaries also follow from these tables, but are less clean to write down.
The stationary distribution $\pi$ is given by Proposition \ref{basicone}(i), hence follows from Tables \ref{dimab} and \ref{rabs}. We have written this down in Table \ref{sl3stat}. Notice that on the diagonal
\[
\pi(a,a)\cdot (p^3-1)(p^2-1) = \begin{cases}
7 & \quad \text{ if }\ a=0, \\
12(a+1)^3 & \quad \text{ if } \ 1\le a \le \frac{p-3}{2}, \\
6\left((a+1)^3- (p-a-1)^3\right)& \quad \text{ if } \ \frac{p-1}{2}\le a <p-1, \\
p^3 & \quad \text{ if }\ a=p-1.
\end{cases}
\]
In particular, $\pi(a,a)$ grows cubically on $[0,\frac{p-3}{2}]$ and on $[\frac{p-1}{2},p-1]$, and drops from cubic to quadratic order in $p$ between $a=(p-3)/2$ and $a=(p-1)/2$.
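The piecewise formula for the diagonal can be checked against the rows of Table \ref{sl3stat} directly; a short Python sketch (illustrative only, with the sample value $p=11$):

```python
def f(x, y):
    # f(x, y) = xy(x + y)/2 as in Table sl3stat
    return x * y * (x + y) // 2

p = 11  # any odd prime works here

def table_entry_diag(a):
    # diagonal entries (a, a) of Table sl3stat, times (p^3-1)(p^2-1)
    if a == 0:
        return 7
    if a == p - 1:
        return p**3
    if 2 * a < p - 2:                 # row: ab != 0, a + b < p - 2
        return 12 * f(a + 1, a + 1)
    # row: a, b not in {0, p-1} and a + b >= p - 1 (2a = p - 2 cannot occur, p odd)
    return 6 * (f(a + 1, a + 1) - f(p - a - 1, p - a - 1))

def displayed(a):
    # the piecewise formula for pi(a, a) * (p^3-1)(p^2-1) displayed above
    if a == 0:
        return 7
    if a <= (p - 3) // 2:
        return 12 * (a + 1)**3
    if a < p - 1:
        return 6 * ((a + 1)**3 - (p - a - 1)**3)
    return p**3

assert all(table_entry_diag(a) == displayed(a) for a in range(p))
```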
\begin{table}[h]
\caption{Stationary distribution for $\mathsf{SL}_3(p)$ with $f(x,y)=\frac{1}{2}xy(x+y)$}\label{sl3stat}
\[{\small
\begin{tabular}{|c|c|}
\hline
$(a,b)$ & $\pi(a,b)\cdot (p^3-1)(p^2-1)$ \\
\hline
\hline
(0,0) & 7 \\
\hline
$(p-1,0),\,(0,p-1)$ &$ 2f(p,1)$ \\ \hline
$(p-2,0),(0,p-2)$ & $3f(p-1,1)$ \\ \hline
$(a,0),(0,a)\,(0<a<p-2)$ & $9f(a+1,1)$ \\ \hline
$ab\ne 0,\,a+b<p-2$ & $12f(a+1,b+1)$ \\ \hline
$ab\ne 0,\,a+b=p-2$ & $6f(a+1,b+1)$ \\ \hline
$a,b\notin\{0,p-1\} \ \,\text{and} \ \, a+b\ge p-1$ & $6\left(f(a+1,b+1)-f(p-a-1,p-b-1)\right)$ \\
\hline
$(a,p-1),(p-1,a)\,(a\ne 0,p-1)$ & $3f(a+1,p)$ \\
\hline
$(p-1,p-1)$ & $p^3$ \\
\hline
\end{tabular}
}\]
\end{table}
From Proposition \ref{basicone}(ii) and (\ref{br3}), we see in the notation of \eqref{eq:tuv} that the eigenvalues are
\begin{equation}\label{evals}
\begin{array}{l}
\beta_\mathbf{1} = 1, \\
\beta_{x^r} = \frac{1}{3}\mathsf{t}_r,\\
\beta_{y^s} = \frac{1}{3}\mathsf{u}_s', \\
\beta_{z_{\zeta^k,\zeta^k}} = \frac{1}{3}\mathsf{v}_{k,k}, \\
\beta_{z_{\zeta^\ell,\zeta^m}} = \frac{1}{3} \mathsf{v}_{\ell,m}.
\end{array}
\end{equation}
Now Proposition \ref{basicone}(v) gives
\begin{equation}\label{estim}
\frac{\mathsf{K}^\ell((0,0),(a,b))}{\pi(a,b)}-1 = \sum_{c \neq \mathbf{1}} \, \beta_c^\ell \, \frac{\mathsf{p}_{(a,b)}(c)}{\mathsf{p}_{(a,b)}(\mathbf{1})}|c^G|,
\end{equation}
where the sum is over representatives $c$ of the nontrivial $p$-regular classes.
We shall show below (for $p \geq 11$) that
\begin{equation}\label{univ}
\beta_c \le 1-\frac{3}{p^2}
\end{equation}
for all representatives $c \ne \mathbf{1}$. Given this, (\ref{estim}) implies
\[
\parallel\mathsf{K}^\ell((0,0),\cdot)-\pi(\cdot)\parallel_{{}_{\mathsf{TV}}} \le p^8\left(1-\frac{3}{p^2}\right)^\ell.
\]
This is small for $\ell$ of order $p^2\log p$. More delicate analysis allows the removal of the $\log p$ term, but we will not pursue this further.
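To illustrate the scale of this bound, a two-line computation (with the illustrative value $p = 101$) shows that a modest multiple of $p^2\log p$ steps already drives the right-hand side below $10^{-6}$, while $\ell = p^2$ steps do not suffice:

```python
p = 101
bound = lambda l: p**8 * (1 - 3 / p**2)**l

bound_few = bound(p * p)         # l = p^2: the bound is still astronomically large
bound_many = bound(20 * p * p)   # l = 20 p^2, about 4.3 p^2 log p: the bound is tiny
assert bound_few > 1.0
assert bound_many < 1e-6
```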
{\blue It remains to establish the bound \eqref{univ}. First, if $c = z_{\zeta^k,\zeta^k}$ with $1 \leq k \leq p-2$, then we can
apply Lemma \ref{angles}(ii) to $\beta_c = \frac{1}{3}\mathsf{v}_{k,k}$.
In all other cases,
$\beta_c = (\cos(x)+\cos(y)+\cos(z))/3$ with $x,y,z \in (2\pi/n)\mathbb{Z}$, $x+y +z \in 2\pi\mathbb{Z}$, and at least one of $x,y,z$ not in
$2\pi\mathbb{Z}$, where $n \in \{p-1,p^2-1,p^2+p+1\}$. Now the bound follows by applying Lemma \ref{angles}(iii).
\medskip
\noindent{\bf Summary.} \ In this section we have analyzed the Markov chain on $\mathsf{IBr}(\mathsf{SL}_3(p))$ given by tensoring with the natural 3-dimensional module $(1,0)$. We have computed the transition probabilities \eqref{eq:tran1}, \eqref{eq:tran2}, the stationary distribution (Table \ref{sl3stat}), and shown that order $p^2\log p$ steps suffice for stationarity.
\section{Quantum groups at roots of unity} \label{quant}
\subsection{Introduction} \label{quanta}
The tensor walks considered above can be studied in any context where `tensoring' makes sense: tensor categories, Hopf algebras, or the
$\mathbb{Z}_{+}$-modules of \cite{EGNO}. Questions abound: Will the explicit spectral theory of Theorems \ref{T:dihedral}, \ref{mainsl2p}, \ref{T:SL2(p^2)}, \ref{T:(1,1)}, and \ref{mainsl2n} still hold? Can the rules for tensor products be found? Are there examples that anyone (other than the authors) will care about? This section makes a start on these
problems by studying the tensor walk on the (restricted) quantum group $\mathfrak{u}_\xi(\fsl_2)$ at a root of unity $\xi$ (described below). It turns out
that there $\underline{\text {is}}$ a reasonable spectral theory, though not as nice as the previous ones. The walks are not diagonalizable and generalized
spectral theory (Jordan blocks) must be used. This answers a question of Grinberg, Huang, and Reiner \cite[Question 3.12]{GHR}. Some tensor product decompositions $\underline{\text{are}}$ available using
years of work by the representation theory community, $\underline{\text{and}}$ the walks that emerge are of independent interest. Let us begin with this last point.
Consider the Markov chain on the irreducible modules of $\mathsf{SL}_2(p)$ studied in Section \ref{3b}. This chain arises in Pitman's study of Gamblers' Ruin and leads to his
$2M-X$ theorem and a host of generalizations of current interest in both probability and Lie theory. The nice spectral theory of Section 3 depends on $p$ being
a prime. On the other hand, the chain makes perfect sense with $p$ replaced by $n$. A special case of the Markov chains studied in this section handles
these examples.
\begin{example}\label{Ex:quantumchain} {\rm Fix $n$ odd, $n \ge 3$ and define a Markov chain on $ \{0,1,\dots, n-1\}$ by $\mathsf{K}(0,1) = 1$ and \begin{align}\begin{split}\label{eq:quantumMarkov} &\mathsf{K}(a,a-1) = \half\left(1-\frac{1}{a+1}\right) \quad 1 \le a \le n-2, \\ &\mathsf{K}(a,a+1) =\half\left(1+\frac{1}{a+1}\right) \quad 0 \leq a \le n-2, \\
&\mathsf{K}(n-1,n-2) = 1 - \frac{1}{n}, \qquad \mathsf{K}(n-1,0) = \frac{1}{n}. \end{split} \end{align}
Thus, when $n = 9$, the transition matrix is
\[\mathsf{K} \ \,= \ \,\bordermatrix{&0&1&2&3 &4&5&6&7&8\cr
0 &0&1&0&0 &0 &0&0&0&0 \cr
1 &\frac{1}{4}&0&\frac{3}{4}&0 &0&0&0&0&0 \cr
2 & 0& \frac{2}{6}&0&\frac{4}{6}&0 &0&0&0&0 \cr
3 &0 & 0 & \frac{3}{8}&0&\frac{5}{8}&0 &0&0&0 \cr
4 &0 & 0 & 0 & \frac{4}{10}&0&\frac{6}{10}&0 &0&0 \cr
5 &0 & 0 & 0 & 0& \frac{5}{12}&0&\frac{7}{12}&0 &0 \cr
6& 0 &0 & 0 & 0 & 0& \frac{6}{14}&0&\frac{8}{14}&0 \cr
7 & 0& 0 &0 & 0 & 0 & 0& \frac{7}{16}&0&\frac{9}{16} \cr
8& \frac{2}{18} &0 & 0 & 0 & 0& 0&0&\frac{16}{18}&0 \cr
}\]
The entries have been left as unreduced fractions to make the pattern readily apparent. The first and last rows are different, but for the other rows,
the sub-diagonal entries have numerators $1,2, \dots, n-2$ and denominators $4,6,\dots, 2(n-1)$. This is a non-reversible chain. The theory developed
below shows that
\begin{itemize}
\item the stationary distribution is \
\begin{equation}\label{eq:quantumpi}{ \pi(j) = \textstyle{\frac{2(j+1)}{n^2}}, \ \ 0 \leq j \le n-2, \quad \pi(n-1) = \frac{1}{n}}; \end{equation}
\item the eigenvalues for the transition matrix $\mathsf{K}$ are $1$ and
\begin{equation}\label{eq:quantumevalues} \textstyle{\lambda_j = \mathsf{cos}\left(\frac{2\pi j}{n}\right), \quad 1 \leq j \leq (n-1)/2;}\end{equation}
\item a right eigenvector corresponding to the eigenvalue $\lambda_j$
is
\begin{equation}\label{eq:quantumrightev}{\textsl{\footnotesize R}}_j = \textstyle{ \left[\sin\left(\frac{2\pi j}{n}\right), \frac{1}{2}\sin\left(\frac{4\pi j}{n}\right), \ldots, \frac{1}{n-1}\sin\left(\frac{2(n-1)\pi j}{n}\right), 0\right]^{\tt T},} \end{equation}
where $\tt T$ denotes the transpose;
\item a left eigenvector corresponding to the eigenvalue $\lambda_j$
is
\begin{equation}\label{eq:quantumleftev}{\textsl{\footnotesize L}}_j = \textstyle{\left[\cos\left(\frac{2\pi j}{n}\right), 2\cos\left(\frac{4\pi j}{n}\right), \ldots, (n-1)\cos\left(\frac{2(n-1) \pi j}{n}\right), \frac{n}{2}\right]}; \end{equation}
\end{itemize}
Note that the above accounts for only half of the spectrum. Each of the eigenvalues $\lambda_j, 1 \le j \leq \half(n-1)$, is associated with a $2 \times 2$ Jordan block of the form
$\left(\begin{smallmatrix}\lambda_j & 1\\0 & \lambda_j\end{smallmatrix}\right)$, giving rise to a set of generalized eigenvectors ${\textsl{\footnotesize R}}_j', {\textsl{\footnotesize L}}_j'$ with
\begin{equation}\label{eq:genev4K} \mathsf{K}^\ell {\textsl{\footnotesize R}}_j' = \lambda_j^\ell\,{\textsl{\footnotesize R}}_j' + \ell \lambda_j^{\ell-1}{\textsl{\footnotesize R}}_j \qquad {\textsl{\footnotesize L}}_j' \mathsf{K}^\ell = \lambda_j^\ell {\textsl{\footnotesize L}}_j' + \ell \lambda_j^{\ell-1} {\textsl{\footnotesize L}}_j \end{equation}
for all $\ell \ge 1$. The vectors ${\textsl{\footnotesize R}}_j'$ and ${\textsl{\footnotesize L}}_j'$ can be determined explicitly from
the expressions for the generalized eigenvectors ${\textsl{\footnotesize X}}_j'$ and ${\textsl{\footnotesize Y}}_j'$ for $\mathsf{M}$ given in
Proposition \ref{P:Mvs}.
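These claims are easy to check numerically. A minimal sketch in Python with NumPy (illustrative only; $n = 9$ as in the displayed matrix, with $R_j$ and $L_j$ as in \eqref{eq:quantumrightev} and \eqref{eq:quantumleftev}):

```python
import numpy as np

n = 9
K = np.zeros((n, n))
K[0, 1] = 1.0
for a in range(1, n - 1):
    K[a, a - 1] = 0.5 * (1 - 1 / (a + 1))
    K[a, a + 1] = 0.5 * (1 + 1 / (a + 1))
K[n - 1, n - 2] = 1 - 1 / n
K[n - 1, 0] = 1 / n

# stationary distribution (eq:quantumpi)
pi = np.array([2 * (j + 1) / n**2 for j in range(n - 1)] + [1 / n])
assert np.isclose(pi.sum(), 1.0)
assert np.allclose(pi @ K, pi)

# eigenvalues cos(2*pi*j/n) with the stated right/left eigenvectors
for j in range(1, (n - 1) // 2 + 1):
    lam = np.cos(2 * np.pi * j / n)
    R = np.array([np.sin(2 * np.pi * (i + 1) * j / n) / (i + 1)
                  for i in range(n - 1)] + [0.0])
    L = np.array([(i + 1) * np.cos(2 * np.pi * (i + 1) * j / n)
                  for i in range(n - 1)] + [n / 2])
    assert np.allclose(K @ R, lam * R)
    assert np.allclose(L @ K, lam * L)
```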
Using these ingredients a reasonably sharp analysis of mixing times follows.
\medskip
Our aim is to show that the following result holds for the quantum group $\mathfrak{u}_\xi(\fsl_2)$ at a primitive $n$th root of unity $\xi$, with $n$ odd.
\begin{thm}\label{T:quantum} For $n$ odd, $n \ge 3$, tensoring with the two-dimensional irreducible representation of $\mathfrak{u}_\xi(\fsl_2)$ yields the Markov chain $\mathsf{K}$ of
\eqref{eq:quantumMarkov} with the stationary distribution $\pi$ in \eqref{eq:quantumpi}. Moreover, there exist explicit continuous functions $f_1$, $f_2$ from $[0,\infty)$ to $[0,\infty)$ with
$f_1(\ell /n^2) \ge ||\mathsf{K}^\ell -\pi ||_{{}_{\mathsf{TV}}}$ for all $\ell$, and $||\mathsf{K}^\ell - \pi ||_{{}_{\mathsf{TV}}} \le f_2(\ell/n^2)$ for all $\ell \ge n^2$. Here $f_1(x)$ is monotone increasing and strictly positive at $x=0$, and $f_2(x)$ is positive, strictly decreasing, and tends to 0 as $x$ tends to infinity.
\end{thm}
}
\end{example}
Section \ref{quantb} introduces $\mathfrak{u}_\xi(\mathfrak{sl}_2)$ and gives a description of its irreducible, Weyl, and Verma modules. Section \ref{quantc} describes tensor products with
the natural 2-dimensional irreducible $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-module $\mathsf{V}_1$,
and Section \ref{quantcd} focuses on projective indecomposable modules and the result of tensoring $\mathsf{V}_1$ with the Steinberg module.
Analytic facts about the generalized eigenvectors of the related Markov chains, along with a derivation of \eqref{eq:quantumMarkov}-\eqref{eq:quantumleftev}, are in Section \ref{quantd}.
Theorem \ref{T:quantum} is proved in Section \ref{quante}. Some further developments (e.g. results on tensoring with the Steinberg module)
form the content of Section \ref{quantf}.
We will use \cite{CP} as our main reference in this section, but other incarnations of quantum $\mathsf{SL}_2$ exist (see, for example, Sec VI.5 of \cite{Kas} and the many
references in Sec.~VI.7 of that volume or Sections 6.4 and 11.1 of the book \cite{CPr} by Chari and Pressley, which contains a wealth of material on quantum groups and a host of related topics.) The graduate text \cite{Jan} by Jantzen is a wonderful introduction to basic material on quantum groups, but does not treat the roots of unity case.
\subsection{Quantum $\fsl_2$ and its Weyl and Verma modules} \label{quantb}
Let $\xi = \mathsf{e}^{2\pi i/n} \in \mathbb C$, where $n$ is odd and $n \ge 3$.
The quantum group $\mathfrak{u}_\xi(\fsl_2)$ is an
$n^3$-dimensional Hopf algebra over $\mathbb{C}$ with generators $e,f,k$ satisfying the relations
$$\begin{gathered} e^n = 0, \ \ f^n = 0, \ \ k^{n} = 1 \\
kek^{-1} = \xi^2 e, \quad k f k^{-1} = \xi^{-2} f, \quad [e,f] = ef-fe = \frac{k - k^{-1}}{\xi-\xi^{-1}}. \end{gathered}$$
The coproduct $\Delta$, counit $\varepsilon$, and antipode $S$ of $\mathfrak{u}_\xi(\fsl_2)$ are defined by their action on the generators:
$$\begin{gathered} \Delta(e) = e \otimes k + 1 \otimes e, \quad \Delta(f) = f \otimes 1 + k^{-1} \otimes f, \quad \Delta(k) = k \otimes k, \\
\varepsilon(e) = 0 = \varepsilon(f), \ \ \ \varepsilon(k) = 1, \qquad S(e) = -ek^{-1}, \ \ S(f) = -fk, \ \ S(k) = k^{-1}. \end{gathered}$$
The coproduct is particularly relevant here, as it affords the action of $\mathfrak{u}_\xi(\mathfrak{sl}_2)$ on tensor products.
Chari and Premet have determined the indecomposable modules for $\mathfrak{u}_\xi(\mathfrak{sl}_2)$ in \cite{CP}, where this algebra
is denoted $U_\epsilon^{red}$. We adopt results from their paper using somewhat different notation
and add material needed here on tensor products.
For $r$ a nonnegative integer, the {\it Weyl module} $\mathsf{V}_r$ has a basis $\{v_0,v_1,\dots, v_r\}$ and $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-action is given by
\begin{equation}\label{eq:qslacts} k v_j = \xi^{r-2j} v_j, \qquad ev_j = [r-j+1] v_{j-1}, \qquad f v_j = [j+1] v_{j+1}, \end{equation}
where $v_s = 0$ if $s \not \in \{0,1,\dots, r\}$ and $[m] = \frac{ \xi^{m} - \xi^{-m}}{\xi-\xi^{-1}}.$ In what follows, $[0]! = 1$ and $[m]! = [m][m-1] \cdots [2][1]$
for $m \ge 1$.
The modules $\mathsf{V}_r$ for $0\le r \le n-1$ are irreducible and constitute a complete set of irreducible $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-modules up to isomorphism.
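The action \eqref{eq:qslacts} can be checked directly against the defining relations of $\mathfrak{u}_\xi(\fsl_2)$; a short NumPy sketch (illustrative sample values $n = 9$, $r = 5$):

```python
import numpy as np

n, r = 9, 5                       # sample values: n odd, 0 <= r <= n - 1
xi = np.exp(2j * np.pi / n)

def qint(m):
    # quantum integer [m] = (xi^m - xi^-m)/(xi - xi^-1)
    return (xi**m - xi**(-m)) / (xi - xi**(-1))

d = r + 1
E = np.zeros((d, d), dtype=complex)
F = np.zeros((d, d), dtype=complex)
Kd = np.diag([xi**(r - 2 * j) for j in range(d)])
for j in range(1, d):
    E[j - 1, j] = qint(r - j + 1)   # e.v_j = [r - j + 1] v_{j-1}
for j in range(d - 1):
    F[j + 1, j] = qint(j + 1)       # f.v_j = [j + 1] v_{j+1}

Kinv = np.linalg.inv(Kd)
assert np.allclose(Kd @ E @ Kinv, xi**2 * E)
assert np.allclose(Kd @ F @ Kinv, xi**(-2) * F)
assert np.allclose(E @ F - F @ E, (Kd - Kinv) / (xi - xi**(-1)))
assert np.allclose(np.linalg.matrix_power(E, n), np.zeros((d, d)))
assert np.allclose(np.linalg.matrix_power(F, n), np.zeros((d, d)))
assert np.allclose(np.linalg.matrix_power(Kd, n), np.eye(d))
```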
For $0 \leq r \le n-1$, the {\it Verma module} $\mathsf{M}_r$ is the quotient of $\mathfrak{u}_\xi(\mathfrak{sl}_2)$ by the left ideal generated by $e$ and $k - \xi^r$.
It has dimension $n$ and is indecomposable. Any module generated by a vector $v_0$ with $ev_0 = 0$ and $kv_0 = \xi^r v_0$ is isomorphic
to a quotient of $\mathsf{M}_r$. When $0 \le r < n-1$, $\mathsf{V}_r$ is the unique irreducible quotient of $\mathsf{M}_r$, and there is a non-split exact sequence
\begin{equation}\label{eq:Verma} (0) \rightarrow \mathsf{V}_{n-r-2} \rightarrow \mathsf{M}_r \rightarrow \mathsf{V}_r \rightarrow (0).\end{equation}
When $r=n-1$, $\mathsf{M}_{n-1} \cong \mathsf{V}_{n-1}$,
the Steinberg module, which has dimension $n$.
We consider the two-dimensional $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-module $\mathsf{V}_1$, and to distinguish it from the others, we use $u_0, u_1$ for its basis. Then relative
to that basis, the generators $e,f,k$ are represented by the following matrices
$$e \rightarrow \left(\begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix}\right), \qquad f \rightarrow \left(\begin{matrix} 0 & 0 \\ 1 & 0 \end{matrix}\right),
\qquad k \rightarrow \left(\begin{matrix} \xi & 0 \\ 0 & \xi^{-1} \end{matrix}\right) .$$
\subsection{Tensoring with $\mathsf{V}_1$} \label{quantc}
The following result describes the result of tensoring an irreducible $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-module $\mathsf{V}_r$ for $r \ne n-1$ with $\mathsf{V}_1$. In the next section,
we describe the projective indecomposable $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-modules and treat the case $r = n-1$.
\smallskip
\begin{prop}\label{P:quant} Assume $\mathsf{V}_1 = \, \mathsf{span}_\mathbb{C}\{u_0,u_1\}$ and $\mathsf{V}_r = \, \mathsf{span}_\mathbb{C}\{v_0,v_1,\dots,v_r\}$ for $0 \le r < n-1$. \begin{itemize} \item [{\rm(i)}] The $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-submodule of
$\mathsf{V}_1 \otimes \mathsf{V}_r$ generated by $u_0 \otimes v_0$ is isomorphic to $\mathsf{V}_{r+1}$.
\item [{\rm (ii)}] $\mathsf{V}_0 \otimes \mathsf{V}_1 \cong \mathsf{V}_1$, and $\mathsf{V}_1 \otimes \mathsf{V}_r \cong \mathsf{V}_{r+1} \oplus \mathsf{V}_{r-1}$ when $1 \le r < n-1$.
\end{itemize}
\end{prop}
\smallskip
\begin{proof} (i) Let $w_0 = u_0 \otimes v_0$, and for $j \ge 1$ set
$$ w_j := \xi^{-j} u_0 \otimes v_j + u_1 \otimes v_{j-1}.$$
Note that $w_j = 0$ when $j > r+1$. Then it can be argued by induction on $j$ that the following hold:
\begin{align} \label{eq:Waction} e w_0 &= 0, \qquad e w_j = [r+1-j+1]w_{j-1} =[r+2-j] w_{j-1} \ \ \ (j \ge 1)\nonumber \\
k w_j &= \xi^{r+1 - 2j} w_j \\
f w_j &= [j+1] w_{j+1} \ \, (\text{in particular}, \ w_j = \frac{f^j (u_0 \otimes v_0)}{[j]!} \ \text{for} \ 0 \le j < n-1 ). \nonumber
\end{align}
Thus, $\mathsf{W}: = \mathsf{span}_\mathbb{C}\{w_0,w_1,\dots, w_{r+1}\}$ is a submodule of $\mathsf{V}_1 \otimes \mathsf{V}_r$ isomorphic to
$\mathsf{V}_{r+1}$.
(ii) When $r < n-1$, \ $\mathsf{W} \cong \mathsf{V}_{r+1}$ is irreducible. In this case, set $$y_0 := \xi^r u_0 \otimes v_1 - [r] u_1 \otimes v_0,$$
and let $\mathsf{Y}$ be the $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-submodule of $\mathsf{V}_1 \otimes \mathsf{V}_r$ generated by $y_0$.
It is easy to check that $k y_0 = \xi^{r-1} y_0$ and $e y_0 = 0$.
As $\mathsf{Y}$ is a homomorphic image of the Verma module $\mathsf{M}_{r-1}$, $\mathsf{Y}$ is isomorphic to either $\mathsf{V}_{r-1}$ or
$\mathsf{M}_{r-1}$. In either event, the only possible candidates for vectors in $\mathsf{Y}$ sent to 0 by $e$ have eigenvalue $\xi^{r-1}$ or $\xi^{n-r-1}$
relative to $k$. Neither of those values can equal $\xi^{r+1}$, since $\xi$ is an odd root of 1 and $r \ne n-1$. Thus, $\mathsf{Y}$ cannot contain
$w_0$, and since $\mathsf{W}$ is irreducible, $\mathsf{W} \cap \mathsf{Y} = (0)$.
Then $\, \mathsf{dim}(\mathsf{W}) + \, \mathsf{dim}(\mathsf{Y})
= r+2+ \, \mathsf{dim}(\mathsf{Y}) \le 2(r+1)$ forces $\mathsf{Y} \cong \mathsf{V}_{r-1}$ and $\mathsf{V}_1 \otimes \mathsf{V}_r \cong \mathsf{V}_{r+1} \oplus \mathsf{V}_{r-1}$. \end{proof}
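The relations \eqref{eq:Waction} and the highest-weight vector $y_0$ can be verified numerically by realizing the action on $\mathsf{V}_1 \otimes \mathsf{V}_r$ through the coproduct $\Delta$; the sketch below (illustrative sample values $n = 9$, $r = 5$) encodes $u_i \otimes v_j$ as Kronecker products of coordinate vectors:

```python
import numpy as np

n, r = 9, 5
xi = np.exp(2j * np.pi / n)
qint = lambda m: (xi**m - xi**(-m)) / (xi - xi**(-1))

def weyl(s):
    # matrices of e, f, k on the Weyl module V_s, from (eq:qslacts)
    d = s + 1
    E = np.zeros((d, d), dtype=complex)
    F = np.zeros((d, d), dtype=complex)
    for j in range(1, d):
        E[j - 1, j] = qint(s - j + 1)
    for j in range(d - 1):
        F[j + 1, j] = qint(j + 1)
    return E, F, np.diag([xi**(s - 2 * j) for j in range(d)])

E1, F1, K1 = weyl(1)
Er, Fr, Kr = weyl(r)
I2, Ir = np.eye(2), np.eye(r + 1)

# coproduct action on V_1 (x) V_r:
# Delta(e) = e(x)k + 1(x)e,  Delta(f) = f(x)1 + k^{-1}(x)f,  Delta(k) = k(x)k
E = np.kron(E1, Kr) + np.kron(I2, Er)
F = np.kron(F1, Ir) + np.kron(np.linalg.inv(K1), Fr)
K = np.kron(K1, Kr)

u, v = np.eye(2), np.eye(r + 1)
w = [np.kron(u[0], v[0]).astype(complex)]      # w_0 = u_0 (x) v_0
for j in range(1, r + 2):
    wj = np.kron(u[1], v[j - 1]).astype(complex)
    if j <= r:                                  # v_{r+1} = 0 in V_r
        wj += xi**(-j) * np.kron(u[0], v[j])
    w.append(wj)

zero = np.zeros_like(w[0])
for j in range(r + 2):                          # the relations (eq:Waction)
    assert np.allclose(K @ w[j], xi**(r + 1 - 2 * j) * w[j])
    assert np.allclose(E @ w[j], qint(r + 2 - j) * w[j - 1] if j >= 1 else zero)
    assert np.allclose(F @ w[j], qint(j + 1) * w[j + 1] if j <= r else zero)

# the complementary highest-weight vector y_0 from part (ii)
y0 = xi**r * np.kron(u[0], v[1]) - qint(r) * np.kron(u[1], v[0])
assert np.allclose(E @ y0, zero)
assert np.allclose(K @ y0, xi**(r - 1) * y0)
```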
\subsection{Projective indecomposable modules for $\mathfrak{u}_\xi(\mathfrak{sl}_2)$ and $\mathsf{V}_1 \otimes \mathsf{V}_{n-1}$.}\label{quantcd}
Chari and Premet \cite{CP} have described the indecomposable projective covers $\mathsf{P}_r$ of the irreducible $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-modules $\mathsf{V}_r$.
The Steinberg module $\mathsf{V}_{n-1}$ being both irreducible and projective is its own cover, $\mathsf{P}_{n-1} = \mathsf{V}_{n-1}$. For $0 \le r < n-1$, the following results
are shown to hold for $\mathsf{P}_r$ in \cite[Prop., Sec.~3.8]{CP}:
\begin{align*}
&\text{{\rm (i)} \ $[\mathsf{P}_r:\mathsf{M}_j] = \begin{cases} 1 & \qquad \text{if} \ \ j = r \, \ \text{or} \ \, n-2-r \\
0 & \qquad \text{otherwise} \end{cases}.$} \\
&\text{{\rm(ii)} $\, \mathsf{dim}(\mathsf{P}_r) = 2n$.} \\
&\text{{\rm(iii)} The socle of $\mathsf{P}_r$ (the sum of all its irreducible submodules) is isomorphic to $\mathsf{V}_r$.} \\
&\text{{\rm(iv)} There is a non-split short exact sequence} \end{align*}
\begin{equation}\label{eq:Xact} (0) \rightarrow \mathsf{M}_{n-r-2} \rightarrow \mathsf{P}_r \rightarrow \mathsf{M}_r \rightarrow (0).\end{equation}
Using these facts we prove
\begin{prop}\label{P:stein} For $\mathfrak{u}_\xi(\mathfrak{sl}_2)$ with $\xi$ a primitive $n$th root of unity, $n$ odd, $n \ge 3$,
$\mathsf{V}_1 \otimes \mathsf{V}_{n-1}$ is isomorphic to $\mathsf{P}_{n-2}$. Thus,
$$[\mathsf{V}_1 \otimes \mathsf{V}_{n-1}:\mathsf{V}_{n-2}] = 2 = [\mathsf{V}_1 \otimes \mathsf{V}_{n-1}:\mathsf{V}_{0}].$$
\end{prop}
\begin{proof} We know from the above calculations that $\mathsf{V}_1 \otimes \mathsf{V}_{n-1}$ contains a submodule $\mathsf{W}$ which is isomorphic to $\mathsf{V}_{n}$
and has a basis $w_0,w_1,\dots, w_{n}$ with $w_0 = u_0 \otimes v_0$ and
$$ w_j := \xi^{-j} u_0 \otimes v_j + u_1 \otimes v_{j-1} \ \ \text{for} \ \ 1 \le j \le n.$$
It is a consequence of \eqref{eq:Waction} that
\begin{align*} e w_1 &= [n-1+2-1] w_0 = [n]\, w_0 = 0, & f w_0 &= w_1, \\
f w_{n-1} &= [n]\, w_n = 0, & e w_n &= [n-1+2-n] w_{n-1} = w_{n-1}.
\end{align*}
It is helpful to visualize the submodule $\mathsf{W}$ as follows, where the images under $e$ and $f$ are shown up to scalar multiples:
\tikzstyle{noorep}=[circle,
thick,
minimum size=.5cm,
draw= white,
fill=white]
\begin{figure}[h]
\hspace{-.42cm}$\begin{tikzpicture}[scale=.9,line width=1pt]
\path (-9.7,1.32) node[noorep] (V0){};
\path (-9.55,1.32) node[noorep] (V01){};
\path (-8.3,1.32) node[noorep] (V1) {};
\path (-7,.5) node[noorep] (V2) {};
\path (-8.3,-.32) node[noorep] (V21) {};
\path (-5.35,.5) node[noorep] (V3) {};
\path (-3.7,.5) node[noorep] (V35) {};
\path (-3,.5) node[noorep] (V4) {$\ldots$};
\path (-2.5,.5) node[noorep] (V5) {};
\path (-.9,.5) node[noorep] (V6) {};
\path (-.5,.5) node[noorep] (V61) {};
\path (-.28,.5) node[noorep] (V62) {};
\path (1.32,.5) node[noorep] (V7) {};
\path (1.5,.5) node[noorep] (V8) {};
\path (2.82,1.32) node[noorep] (V9) {};
\path (2.82,-.32) node[noorep] (V91) {};
\path (4.18,1.32) node[noorep] (V10) {};
\path (4.18,1.32) node[noorep] (V11) {};
\path
(V1) edge[thick,->] (V0)
(V1) edge[thick,->] (V2)
(V2) edge[thick,->] (V21)
(V2) edge[thick,->] (V3)
(V3) edge[thick,->] (V2)
(V3) edge[thick,->] (V35)
(V35) edge[thick,->] (V3)
(V5) edge[thick,->] (V6)
(V6) edge[thick,->] (V5)
(V62) edge[thick,->] (V7)
(V7) edge[thick,->] (V62)
(V9) edge[thick,->] (V8)
(V9) edge[thick,->] (V10)
(V8) edge[thick,->] (V91);
\draw(V01) node[magenta]{$0$};
\draw(V0) node[black,right=.38cm,above=.01cm]{${{}_e}$};
\draw(V1) node[magenta]{$w_0$};
\draw(V2) node[black,right=.38cm,above=.01cm]{${{}_e}$};
\draw(V2) node[magenta]{$w_1$};
\draw(V2) node[black,left=.32cm,above=.16cm]{${{}_f}$};
\draw(V21) node[magenta]{$0$};
\draw(V21) node[black,right=.28cm,above=.16cm]{${{}_e}$};
\draw(V35) node[black,left=.32cm,above=.01cm]{${{}_f}$};
\draw(V3) node[magenta]{$w_2$};
\draw(V3) node[black,left=.34cm,above=.007cm]{${{}_f}$};
\draw(V3) node[black,right=.38cm,above=.01cm]{${{}_e}$};
\draw(V5) node[black,right=.38cm,above=.01cm]{${{}_e}$};
\draw(V8) node[magenta]{$w_{n-1}$};
\draw(V8) node[black,right=.28cm,above=.16cm]{${{}_e}$};
\draw(V9) node[magenta]{$w_n$};
\draw(V6) node[black,left=.32cm,above=.01cm]{${{}_f}$};
\draw(V61) node[magenta]{$w_{n-2}$};
\draw(V62) node[black,right=.4cm,above=.01cm]{${{}_e}$};
\draw(V7) node[black,left=.32cm,above=.01cm]{${{}_f}$};
\draw(V11) node[magenta]{$0$};
\draw(V91) node[magenta]{$0$};
\draw(V10) node[black,left=.3cm,above=.01cm]{${{}_f}$};
\draw(V9) node[black,left=.25cm,below=.83cm]{${{}_f}$};
\end{tikzpicture}$
\vspace{-.9cm}
\caption{The submodule $\mathsf{W}$ of $\mathsf{V}_1 \otimes \mathsf{V}_{n-1}$}\label{Wmod}
\end{figure}
Now since $e w_1 = 0$ and $k w_1 = \xi^{n-2} w_1$, there is a $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-module homomorphism $\mathsf{V}_{n-2} \to \mathsf{W}': =\mathsf{span}_\mathbb{C}\{w_1, \dots, w_{n-1}\}$ mapping the basis $\tilde v_0, \tilde v_1, \dots, \tilde v_{n-2}$ of $\mathsf{V}_{n-2}$ according to the rule $\tilde v_0 \mapsto w_1$, $\tilde v_j = \frac{f^j \tilde v_0}{[j]!} \mapsto \frac{f^j w_1}{[j]!} \in \mathsf{W}'$.
As $\mathsf{V}_{n-2}$ is irreducible, this is an isomorphism. From the above considerations, we see that $\mathsf{W}/\mathsf{W}'$ is isomorphic to a direct sum of
two copies of the one-dimensional $\mathfrak{u}_\xi(\mathfrak{sl}_2)$-module $\mathsf{V}_0$. (In fact, $\, \mathsf{span}_{\mathbb{C}}\{w_1,\dots, w_{n-1}, w_n\} \cong \mathsf{M}_0$.)
Because $\mathsf{V}_{n-1}$ is projective, the tensor product $\mathsf{V}_1 \otimes \mathsf{V}_{n-1}$ decomposes into a direct sum of projective indecomposable summands $\mathsf{P}_{r}$. But
$\mathsf{V}_1 \otimes \mathsf{V}_{n-1}$ contains a copy of the irreducible module $\mathsf{V}_{n-2}$, so one of those summands must be $\mathsf{P}_{n-2}$ (the unique projective indecomposable module
with an irreducible submodule $\mathsf{V}_{n-2}$).
Since $\, \mathsf{dim}(\mathsf{P}_{n-2}) = 2n = \, \mathsf{dim}(\mathsf{V}_1 \otimes \mathsf{V}_{n-1})$, it must be that $\mathsf{V}_1 \otimes \mathsf{V}_{n-1} \cong \mathsf{P}_{n-2}$.
The assertion $[\mathsf{V}_1 \otimes \mathsf{V}_{n-1}:\mathsf{V}_{n-2}] = 2 = [\mathsf{V}_1 \otimes \mathsf{V}_{n-1}:\mathsf{V}_{0}]$ follows directly from
the short exact sequence $(0) \rightarrow \mathsf{M}_{0} \rightarrow \mathsf{P}_{n-2} \rightarrow \mathsf{M}_{n-2} \rightarrow (0)$ (as in \eqref{eq:Xact} with $r=n-2$)
and the fact that $[\mathsf{M}_j:\mathsf{V}_0] = 1 = [\mathsf{M}_j:\mathsf{V}_{n-2}]$ for $j = 0,n-2$.
\end{proof}
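The key vanishings used in this proof ($e w_1 = 0$, $f w_{n-1} = 0$, $e w_n = w_{n-1}$), and the behavior of the vector $x_0 = u_0 \otimes v_1$ outside $\mathsf{W}$, can be confirmed numerically in the same coproduct model as before; a sketch with the illustrative value $n = 9$:

```python
import numpy as np

n = 9                        # n odd; r = n - 1 gives the Steinberg module
r = n - 1
xi = np.exp(2j * np.pi / n)
qint = lambda m: (xi**m - xi**(-m)) / (xi - xi**(-1))

def weyl(s):
    # matrices of e, f, k on the Weyl module V_s, from (eq:qslacts)
    d = s + 1
    E = np.zeros((d, d), dtype=complex)
    F = np.zeros((d, d), dtype=complex)
    for j in range(1, d):
        E[j - 1, j] = qint(s - j + 1)
    for j in range(d - 1):
        F[j + 1, j] = qint(j + 1)
    return E, F, np.diag([xi**(s - 2 * j) for j in range(d)])

E1, F1, K1 = weyl(1)
Er, Fr, Kr = weyl(r)
E = np.kron(E1, Kr) + np.kron(np.eye(2), Er)
F = np.kron(F1, np.eye(r + 1)) + np.kron(np.linalg.inv(K1), Fr)
K = np.kron(K1, Kr)

u, v = np.eye(2), np.eye(r + 1)
w = [np.kron(u[0], v[0]).astype(complex)]    # w_0, ..., w_n as in the proof
for j in range(1, n + 1):
    wj = np.kron(u[1], v[j - 1]).astype(complex)
    if j <= r:
        wj += xi**(-j) * np.kron(u[0], v[j])
    w.append(wj)

zero = np.zeros_like(w[0])
assert np.allclose(E @ w[1], zero)           # e w_1 = [n] w_0 = 0
assert np.allclose(F @ w[n - 1], zero)       # f w_{n-1} = [n] w_n = 0
assert np.allclose(E @ w[n], w[n - 1])       # e w_n = w_{n-1}

# x_0 = u_0 (x) v_1 has k-eigenvalue xi^{n-2} and e x_0 = [n-1] w_0 = -w_0
x0 = np.kron(u[0], v[1]).astype(complex)
assert np.allclose(E @ x0, -w[0])
assert np.allclose(K @ x0, xi**(n - 2) * x0)
```

The last two assertions are exactly the computation used in Remark (i) below to identify $\left(\mathsf{V}_1 \otimes \mathsf{V}_{n-1}\right)/\mathsf{W}$ with $\mathsf{V}_{n-2}$.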
In Figure 5, we display the tensor chain graph resulting from Propositions \ref{P:quant} and \ref{P:stein}.
\tikzstyle{Trep}= [circle,
minimum size=.001cm,
draw= black,
fill=magenta!50]
\tikzstyle{norep}=[circle,
thick,
minimum size=1.25cm,
draw= white,
fill=white]
\begin{figure}[h]
$$\begin{tikzpicture}[scale=1,line width=1pt]
\path (0,2) node (V0) [Trep] {};
\path (1.17,1.62) node (V1)[Trep]{};
\path (1.90,0.61) node[Trep] (V2) {};
\path (1.90,-0.62) node[Trep] (V3) {};
\path (1.17,-1.61) node[Trep] (V4) {};
\path (0,-2) node[Trep] (V5) {};
\path (-1.17,1.62) node[Trep] (Vm1) {};
\path (-1.90,0.61) node[Trep] (Vm2) {};
\path (-1.90,-0.62) node[Trep] (Vm3) {};
\path (-1.17,-1.61) node[Trep] (Vm4) {};
\path
(Vm1) edge[thick, ->>] (V0)
(V0) edge[thick, ->] (V1)
(Vm1) edge[thick, ->>] (Vm2)
(Vm2) edge[thick, ->] (Vm1)
(V1) edge[thick, ->] (V0)
(V1) edge[thick,->] (V2)
(V2) edge[thick,->] (V1)
(V2) edge[thick,->] (V3)
(V3) edge[thick,->] (V2)
(V3) edge[thick,->] (V4)
(V4) edge[thick,->] (V3)
(V5) edge[thick,dashed,<->] (V4)
(V5) edge[thick,dashed,<->] (Vm4)
(Vm4) edge[thick,->] (Vm3)
(Vm3) edge[thick,->] (Vm4)
(Vm3) edge[thick,->] (Vm2)
(Vm2) edge[thick,->] (Vm3)
(Vm2) edge[thick,->] (Vm1)
(Vm1) edge[thick] (V0);
\draw (V0) node[black,above=0.2cm]{\color{magenta} $\mathbf{0}$};
\draw (V1) node[black,right=0.2cm]{\color{magenta} $\mathbf{1}$};
\draw (Vm1) node[black,left=0.2cm]{\color{magenta} $\mathbf{n-1}$};
\end{tikzpicture}$$
\caption{Tensor walk on irreducibles of $\mathfrak{u}_\xi(\mathfrak{sl}_2)$}\label{quantwk}
\end{figure}
\medskip
\noindent \begin{remarks}{\rm (i) Proposition \ref{P:stein} shows that $\mathsf{V}_1 \otimes \mathsf{V}_{n-1} \cong \mathsf{P}_{n-2}$. Had we been interested only in
proving that $[\mathsf{V}_1 \otimes \mathsf{V}_{n-1}:\mathsf{V}_0] = 2 = [\mathsf{V}_1 \otimes \mathsf{V}_{n-1}:\mathsf{V}_{n-2}]$, we could have avoided using projective
covers by arguing that the vector $x_0 = u_0 \otimes v_1 \not \in \mathsf{W}$ is such that $k x_0 = \xi^{n-2} x_0$ and $ex_0 = -w_0$. Thus, $\left(\mathsf{V}_1 \otimes \mathsf{V}_{n-1}\right)/\mathsf{W}$
is a homomorphic image of $\mathsf{M}_{n-2}$, but since $\left(\mathsf{V}_1 \otimes \mathsf{V}_{n-1}\right)/\mathsf{W}$ has dimension $n-1$,
$\left(\mathsf{V}_1 \otimes \mathsf{V}_{n-1}\right)/\mathsf{W} \cong \mathsf{V}_{n-2}$. From that fact and the structure of $\mathsf{W}$, we can deduce that
$[\mathsf{V}_1 \otimes \mathsf{V}_{n-1}:\mathsf{V}_0] = 2 = [\mathsf{V}_1 \otimes \mathsf{V}_{n-1}:\mathsf{V}_{n-2}]$. The projective covers will reappear in Section \ref{quantf} when we consider
tensoring with the Steinberg module $\mathsf{V}_{n-1}$.
(ii) The probabilistic description of the Markov chain in \eqref{eq:quantumMarkov} will follow from these two propositions.
It is interesting to note that even when $n = p$ is prime, the tensor chain for $\mathfrak{u}_\xi(\mathfrak{sl}_2)$ differs slightly from that of $\mathsf{SL}_2(p)$, and its spectral analysis is more complicated (as will be apparent in the next section). In the group case (see Table \ref{tabSL2(p)}), when tensoring the natural two-dimensional
module $\mathsf{V}(1)$ with the Steinberg module $\mathsf{V}(p-1)$, the module $\mathsf{V}(1)$ occurs with multiplicity 1 and $\mathsf{V}(p-2)$ with multiplicity 2. But in the quantum case,
$\mathsf{V}_1 \otimes \mathsf{V}_{p-1}$ has composition factors $\mathsf{V}_0, \mathsf{V}_{p-2}$, each with multiplicity 2 by Proposition \ref{P:stein}.
(iii) The quantum considerations above most closely resemble tensor chains for the Lie algebra $\fsl_2$ over an algebraically closed field $\mathbb k$ of characteristic $p \ge 3$. The
restricted irreducible $\fsl_2$-representations are $\mathsf{V}_0,\mathsf{V}_1,\dots, \mathsf{V}_{p-1}$ where $\, \mathsf{dim}(\mathsf{V}_j) = j+1$. The tensor products of them with $\mathsf{V}_1$ exactly follow the results in Proposition \ref{P:quant}
and \ref{P:stein} with $n = p$. (For further details, consult \cite{Po}, \cite{BO}, \cite{Ru}, and \cite{Pr}.)}
\end{remarks}
\subsection{Generalized spectral analysis}\label{quantd}
Consider the matrix $\mathsf{K}$ in \eqref{eq:quantumMarkov}. As a stochastic matrix, $\mathsf{K}$ has $[1,1, \dots, 1]^{\tt T}$ as a right eigenvector with eigenvalue 1.
It is easy to verify directly that $\pi:= [\pi(0),\pi(1), \dots, \pi(n-1)]$, where $\pi(j)$ is as in \eqref{eq:quantumpi}, is a left eigenvector with eigenvalue 1.
In this section, we determine the other eigenvectors of $\mathsf{K}$. A small example will serve as motivation for the calculations to follow.
\begin{example}\label{ex:quantn=3} For $n = 3$,
\begin{itemize}\item the transition matrix is
$$\mathsf{K} = \left(\begin{matrix} 0 & 1& 0 \\ \frac{1}{4} & 0 & \frac{3}{4} \\
\frac{1}{3} & \frac{2}{3} & 0 \end{matrix}\right),$$
and the stationary distribution is $\pi(j) = \frac{2(j+1)}{n^2} \, (j = 0,1), \ \pi(2) = \frac{1}{3}$ so that
$$\pi = \textstyle{\left[\frac{2}{9}, \frac{4}{9}, \frac{1}{3}\right];}$$
\item the eigenvalues are $\lambda_j = \cos(\frac{2 \pi j}{3}), 0 \le j \le 1$, with $\lambda_1$ occurring in a block of size 2, so
$$\textstyle{(\lambda_0, \lambda_1) = (1, -\half);}$$
\item the right eigenvectors ${\textsl{\footnotesize R}}_0,{\textsl{\footnotesize R}}_1$ in \eqref{eq:quantumrightev} are
$${\textsl{\footnotesize R}}_0 = [1,1,1]^{\tt T }, \qquad {\textsl{\footnotesize R}}_1
= \textstyle{\left[\sin(\frac{2\pi}{3}),\half\sin(\frac{4\pi}{3}),0\right]}^{\tt T}
=\textstyle{\big[\frac{\sqrt{3}}{2},-\frac{\sqrt{3}}{4},0\big]}^{\tt T};
$$
\item the generalized right eigenvector ${\textsl{\footnotesize R}}_1'$ for the eigenvalue $-1/2$ is
$$ {\textsl{\footnotesize R}}_1' = \textstyle{\left[0,\frac{\sqrt{3}}{2}, -\frac{2}{\sqrt{3}}\right]}^{\tt T};$$
\item the left eigenvectors ${\textsl{\footnotesize L}}_0, {\textsl{\footnotesize L}}_1$ in \eqref{eq:quantumleftev} are
$${\textsl{\footnotesize L}}_0 = \pi, \qquad {\textsl{\footnotesize L}}_1
= \textstyle{\left[\cos(\frac{2\pi}{3}),2\cos(\frac{4\pi}{3}),\frac{3}{2}\right]} = \textstyle{\left[-\half,-1,\frac{3}{2}\right]};
$$
\item the generalized left eigenvector ${\textsl{\footnotesize L}}_1'$ for the eigenvalue $-1/2$ is
$$ {\textsl{\footnotesize L}}_1' = \textstyle{\left[-2,2,0\right]}.$$
\end{itemize}
Note that ${\textsl{\footnotesize L}}_1 {\textsl{\footnotesize R}}_1 = 0$, \ \
$ {\textsl{\footnotesize L}}_1{\textsl{\footnotesize R}}_1' = {\textsl{\footnotesize L}}_1' {\textsl{\footnotesize R}}_1\big(= -\frac{3\sqrt{3}}{2}\big)$
in accordance with Lemma \ref{L:simple} below.
\end{example}
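All of the assertions in Example \ref{ex:quantn=3} can be confirmed numerically. The following sketch (assuming \texttt{numpy} is available; it is a check, not part of the mathematical development) verifies the stationary distribution, the (generalized) eigenvector relations, and the products quoted at the end of the example.

```python
import numpy as np

# Numerical sanity check of the n = 3 example.
K = np.array([[0.0,  1.0, 0.0],
              [0.25, 0.0, 0.75],
              [1/3,  2/3, 0.0]])
pi = np.array([2/9, 4/9, 1/3])
s3 = np.sqrt(3.0)
R1,  R1p = np.array([s3/2, -s3/4, 0.0]), np.array([0.0, s3/2, -2/s3])
L1,  L1p = np.array([-0.5, -1.0, 1.5]),  np.array([-2.0, 2.0, 0.0])

assert np.allclose(pi @ K, pi)                # pi is stationary
assert np.allclose(K @ R1, -0.5 * R1)         # right eigenvector, eigenvalue -1/2
assert np.allclose(K @ R1p, -0.5 * R1p + R1)  # generalized right eigenvector
assert np.allclose(L1 @ K, -0.5 * L1)         # left eigenvector
assert np.allclose(L1p @ K, -0.5 * L1p + L1)  # generalized left eigenvector
assert np.isclose(L1 @ R1, 0.0)
assert np.isclose(L1 @ R1p, -1.5 * s3)        # = L1' R1 = -3 sqrt(3)/2
assert np.isclose(L1p @ R1, -1.5 * s3)
```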
Now in the general case, we know that $\mathsf{K}$ has $[1,1, \dots, 1]^{\tt T}$ as a right eigenvector
and $\pi= [\pi(0),\pi(1), \dots, \pi(n-1)]$ as a left eigenvector corresponding to the eigenvalue 1.
Next, we determine the other eigenvalues and eigenvectors of $\mathsf{K}$.
To accomplish this, conjugate the matrix $\mathsf{K}$ with the diagonal matrix $\mathsf{D}$ having $1,2, \dots, n$ down the diagonal (the dimensions
of the irreducible $\mathfrak{u}_\xi(\fsl_2)$-representations), and multiply by 2 (the dimension of $\mathsf{V}_1$) to get
\begin{equation}\label{eq:Mdef} 2\, \mathsf{D}\mathsf{K} \mathsf{D}^{-1} = \mathsf{M} = \left(\begin{matrix} 0 & 1 & 0 & 0 & \ldots & 0 & 0 & 0 \\
1 & \ 0 & 1 & 0 & \ldots & 0 & 0 & 0 \\
\ 0 & 1 & 0 & 1 & \ldots & 0 & 0 & 0 \\
\vdots & \vdots & \ \ddots & \ddots & \ddots & \vdots & \ \vdots & \ \vdots \\
0 & 0 & \ldots & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & \ldots & 0 & 1 & 0 & 1 & 0 \\
0 & 0 &\ldots & 0 & 0 & 1 & 0 & 1 \\
2 & 0&\ldots & 0 & 0 & 0 & 2 & 0
\end{matrix}\right),\end{equation}
a matrix that, except for the bottom row, has ones on its sub- and superdiagonals and zeros elsewhere. The bottom row has $2$'s in its $(n,1)$ and $(n,n-1)$
entries and zeros everywhere else. In fact, $\mathsf{M}$ is precisely the McKay matrix of the Markov chain determined by tensoring with $\mathsf{V}_1$ in the
$\mathfrak{u}_\xi(\mathfrak{sl}_2)$ case as in Propositions \ref{P:quant} and \ref{P:stein}. A cofactor (Laplace) expansion shows that this last matrix has the same characteristic polynomial as the circulant
matrix with first row [0,1,0, \ldots, 0, 1], that is
\begin{equation} \left(\begin{matrix} 0 & 1 & 0 & 0 & \ldots & 0 & 0 & 1 \\
1 & \ 0 & 1 & 0 & \ldots & 0 & 0 & 0 \\
\ 0 & 1 & 0 & 1 & \ldots & 0 & 0 & 0 \\
\vdots & \vdots & \ \ddots & \ddots & \ddots & \vdots & \ \vdots & \ \vdots \\
0 & 0 & \ldots & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & \ldots & 0 & 1 & 0 & 1 & 0 \\
0 & 0 &\ldots & 0 & 0 & 1 & 0 & 1 \\
1 & 0&\ldots & 0 & 0 & 0 & 1 & 0
\end{matrix}\right).\end{equation}
As is well known \cite{Dav}, this circulant matrix has eigenvalues $2 \cos(\frac{2\pi j}{n}), \ 0 \le j \le n-1$. Dividing by 2 gives
\eqref{eq:quantumevalues}.
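Both the conjugation relation $\mathsf{M} = 2\,\mathsf{D}\mathsf{K}\mathsf{D}^{-1}$ and the agreement of the spectrum with that of the circulant can be confirmed numerically. A small sketch for $n = 9$ (assuming \texttt{numpy}; the sample points for the characteristic polynomials are arbitrary):

```python
import numpy as np

n = 9
# M from (eq:Mdef): ones on the sub/superdiagonals; last row has 2's in columns 0, n-2.
M = np.zeros((n, n))
for a in range(n - 1):
    M[a, a + 1] = M[a + 1, a] = 1.0
M[n - 1] = 0.0
M[n - 1, 0] = M[n - 1, n - 2] = 2.0

# Circulant with first row [0,1,0,...,0,1].
C = np.zeros((n, n))
for a in range(n):
    C[a, (a + 1) % n] = C[a, (a - 1) % n] = 1.0

# Same characteristic polynomial: compare det(x I - A) at a few sample points.
for x in (0.3, -1.7, 2.5):
    assert np.isclose(np.linalg.det(x * np.eye(n) - M),
                      np.linalg.det(x * np.eye(n) - C))

# Eigenvalues 2 cos(2 pi j / n); loose tolerance since M has Jordan blocks.
ev = np.sort(2 * np.cos(2 * np.pi * np.arange(n) / n))
assert np.allclose(np.sort(np.linalg.eigvals(M).real), ev, atol=1e-4)

# K = (1/2) D^{-1} M D is stochastic with stationary distribution pi.
d = np.arange(1, n + 1.0)
K = 0.5 * M * d[None, :] / d[:, None]
pi = np.append(2 * np.arange(1, n) / n**2, 1.0 / n)
assert np.allclose(K.sum(axis=1), 1.0)
assert np.allclose(pi @ K, pi)
```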
Determining the eigenvectors in \eqref{eq:quantumrightev}--\eqref{eq:quantumleftev} is a straightforward exercise, but here are a few details.
Rather than working with $\mathsf{K}$, we first identify (generalized) eigenvectors for $\mathsf{M}$ (see Corollary \ref{C:Kvs}). Since $\mathsf{M} = 2\mathsf{D}\mathsf{K} \mathsf{D}^{-1}$,
a right eigenvector $v$ (resp.~left eigenvector $w$) of $\mathsf{M}$ with eigenvalue $\lambda$ yields a right eigenvector $\mathsf{D}^{-1}v$ (resp. left eigenvector~$w\mathsf{D}$) for $\mathsf{K}$ with eigenvalue $\half \lambda$,
just as in Lemma \ref{L:KMrel}. Similarly, if $v',w'$ are generalized
eigenvectors for $\mathsf{M}$ with $\mathsf{M} v' = \lambda v' + v$ and $w' \mathsf{M} = \lambda w' + w$, then
$\mathsf{K} \mathsf{D}^{-1} v' = \half \lambda\ \mathsf{D}^{-1}v' + \half\mathsf{D}^{-1}v$ and $w' \mathsf{D}\, \mathsf{K} =\half \lambda\,w' \mathsf{D}+ \half w\mathsf{D}.$
\begin{prop}\label{P:Mvs} For the matrix $\mathsf{M}$ defined in \eqref{eq:Mdef},
corresponding to its eigenvalue $2\cos(\frac{2\pi j}{n}) = \xi^j+ \xi^{-j}$, $j = 1,2,\dots, m = \half(n-1)$, we have the following:
\begin{itemize}
\item[{\rm (a)}] Let $\textsl{\footnotesize X}_j = [\textsl{\footnotesize X}_j(0), \textsl{\footnotesize X}_j(1), \ldots, \textsl{\footnotesize X}_j(n-1)]^{\tt T}$, where $\textsl{\footnotesize X}_j(a) = \xi^{(a+1)j} - \xi^{-(a+1)j}$ for $0 \leq a \le n-1$. Then
\begin{equation}\label{eq:Rev4M} \textsl{\footnotesize X}_j = [\xi^{j}-\xi^{-j}, \xi^{2j}-\xi^{-2j}, \ldots, \xi^{(n-1)j}-\xi^{-(n-1)j},0]^{\tt T},\end{equation}
and $\textsl{\footnotesize X}_j$ is a right eigenvector for $\mathsf{M}$.
\item[{\rm (b)}] Let $\textsl{\footnotesize Y}_j = [\textsl{\footnotesize Y}_j(0), \textsl{\footnotesize Y}_j(1), \ldots, \textsl{\footnotesize Y}_j(n-1)]^{\tt T}$, where $\textsl{\footnotesize Y}_j(a) = \xi^{(a+1)j} + \xi^{-(a+1)j}$ for $0 \leq a \le n-2$
and $\textsl{\footnotesize Y}_j(n-1) = 1$. Then
\begin{equation}\label{eq:Lev4M}{\textsl{\footnotesize Y}}_j = [\xi^{j}+\xi^{-j}, \xi^{2j}+\xi^{-2j}, \ldots, \xi^{(n-1)j}+\xi^{-(n-1)j},1],\end{equation}
and $\textsl{\footnotesize Y}_j$ is a left eigenvector for $\mathsf{M}$.
\item[{\rm (c)}] Set $\eta_a= \xi^{ja} - \xi^{-ja}$ for $0 \le a \le n-1$, so that $\eta_0 = 0$, and $\eta_{n-a} = -\eta_a$ for $a = 1,\dots, m$.
The vector $\textsl{\footnotesize X}_j' = [\textsl{\footnotesize X}_j'(0),\textsl{\footnotesize X}_j'(1),\ldots,\textsl{\footnotesize X}_j'(n-1) ]^{\tt T}$ with
\begin{equation}\label{eq:Rprimecoord}\textsl{\footnotesize X}_j'(a) \ = \ a \eta_a + (a-2)\eta_{a-2} + \cdots + \left(a-2\lfloor \textstyle{\frac{a}{2}}\rfloor\right) \eta_{a-2\lfloor\frac{a}{2}\rfloor}.
\end{equation}
for $0 \le a \le n-1$ satisfies
\begin{equation}\label{eq:Rjprime} \mathsf{M} \textsl{\footnotesize X}_j' = 2\,\textstyle{ \cos(\frac{2\pi j}{n})}\textsl{\footnotesize X}_j' + {\textsl{\footnotesize X}}_j
= (\xi^j+\xi^{-j})\textsl{\footnotesize X}_j' + {\textsl{\footnotesize X}}_j. \end{equation}
\item[{\rm(d)}] Let $\gamma_0 = 1$, and for $1\le a \le n-1$, set $\gamma_a = \xi^{ja}+\xi^{-ja}$. Let $\delta_0 = 1$, and for $1 \le b \le m$, set
\begin{equation}\label{eq:defnzeta} \delta_b \ = \ \gamma_{b-1} + \gamma_{b-3} + \cdots + \gamma_{b-1 - 2 \lfloor \frac{b-1}{2}\rfloor}.\end{equation}
If $\textsl{\footnotesize Y}_j' =[\textsl{\footnotesize Y}_j'(0), \textsl{\footnotesize Y}_j'(1), \ldots, \textsl{\footnotesize Y}_j'(n-1)],$
where
$$\textsl{\footnotesize Y}_j'(a) = \begin{cases} (a+1-n) \delta_{a+1} & \quad \text{if} \ \ 0 \le a \le m-1, \\
(n-1-a) \delta_{n-1-a} & \quad \text{if} \ \ m \le a \le n-1, \end{cases}$$
then \begin{equation}\hspace{-.7cm}\label{eq:Lgenev4M}\textsl{\footnotesize Y}_j' =
{\small [(1-n)\delta_1,(2-n)\delta_2, \ldots, (m-n)\delta_m \mid m \delta_{m}, (m-1)\delta_{m-1},\, \dots, \delta_1, \, 0]} \end{equation}
and $\textsl{\footnotesize Y}_j' \mathsf{M} = 2\cos(\frac{2\pi j}{n}) \textsl{\footnotesize Y}_j' + \textsl{\footnotesize Y}_j$.
\end{itemize}
\end{prop}
\begin{proof} (a) \ Recall that the eigenvalues of $\mathsf{M}$ are $2 \cos(\frac{2\pi j}{n}) = \xi^{j} + \xi^{-j}$, so there are only $\half(n+1)$ distinct eigenvalues (including the eigenvalue $2$, which corresponds to $j = 0$).
To show that ${\textsl{\footnotesize X}}_j$ is a right eigenvector of $\mathsf{M}$ for $j = 1,\dots,m = \half(n-1)$, note that
$\xi^{2j} - \xi^{-2j} = ( \xi^{j} + \xi^{-j})( \xi^{j} - \xi^{-j})$. This confirms that multiplying row 0 of $\mathsf{M}$ by the vector ${\textsl{\footnotesize X}}_j$ in \eqref{eq:Rev4M}
correctly gives $( \xi^{j} + \xi^{-j}){\textsl{\footnotesize X}}_j(0)$. For rows $a = 1, 2, \dots, n-2$, use
\[\xi^{(a-1)j}-\xi^{-(a-1)j} + \xi^{(a+1)j}-\xi^{-(a+1)j} = ( \xi^{j} + \xi^{-j}) ( \xi^{aj}-\xi^{-aj}).\]
Lastly, for row $n-1$ we have
\[2\xi^{j}-2\xi^{-j} + 2\xi^{(n-1)j}-2\xi^{-(n-1)j} = 2\xi^{j}-2\xi^{-j}+2\xi^{-j}-2\xi^{j} = 0 = ( \xi^{j} + \xi^{-j}) \cdot 0.\]
\noindent (b)\ The argument for the left eigenvectors is completely analogous. Multiply the vector ${\textsl{\footnotesize Y}}_j$ in \eqref{eq:Lev4M}
on the right by column 0 of $\mathsf{M}$. The result is
$\xi^{2j}+\xi^{-2j}+2 = ( \xi^{j} + \xi^{-j})( \xi^{j} +\xi^{-j})$, which is $( \xi^{j} + \xi^{-j}){\textsl{\footnotesize Y}}_j(0)$. For $a=1,2,\dots,n-2$, entry $a$
of $( \xi^{j} + \xi^{-j}){\textsl{\footnotesize Y}}_j$ is $\xi^{aj}+\xi^{-aj} + \xi^{(a+2)j}+\xi^{-(a+2)j} = (\xi^{j} + \xi^{-j}) ( \xi^{(a+1)j} +\xi^{-(a+1)j}) = (\xi^{j} + \xi^{-j}){\textsl{\footnotesize Y}}_j(a).$ Finally, entry $n-1$ of $( \xi^{j} + \xi^{-j}){\textsl{\footnotesize Y}}_j$
is $\xi^{(n-1)j}+\xi^{-(n-1)j} = (\xi^j + \xi^{-j})\cdot 1 = (\xi^j + \xi^{-j}){\textsl{\footnotesize Y}}_j(n-1)$.
\medskip
\noindent (c)\ The vector $\textsl{\footnotesize X}_j' = [\textsl{\footnotesize X}_j'(0),\textsl{\footnotesize X}_j'(1),\ldots, \textsl{\footnotesize X}_j'(n-1) ]^{\tt T}$ in this part has components given in terms
of the values $\eta_a= \xi^{ja} - \xi^{-ja}$ for $0 \le a \le n-1$ in
\eqref{eq:Rprimecoord}. For example, when $n = 7$ and $1 \le j \le 3$,
\[ \textsl{\footnotesize X}_j' = \left[0, \ \eta_1, \ 2 \eta_2,\ 3 \eta_3+\eta_1,\ 4\eta_4 + 2 \eta_2,\ 5 \eta_5 + 3 \eta_3 + \eta_1,\
6\eta_6 + 4\eta_4 + 2 \eta_2\right]^{\tt T}.\]
To verify that $\mathsf{M} \textsl{\footnotesize X}_j' = 2\,\textstyle{ \cos(\frac{2\pi j}{n})}\textsl{\footnotesize X}_j' + {\textsl{\footnotesize X}}_j$, use the fact that $\eta_{n-a} = -\eta_a$ and
\begin{equation}\label{eq:betaeq}\textstyle{2\cos(\frac{2\pi j}{n})}\eta_a = (\xi^j + \xi^{-j}) \eta_a= \eta_{a-1}+\eta_{a+1} \quad \text{for all $1 \le a \le n-1$}.\end{equation} In this notation,
$\textsl{\footnotesize X}_j = [\eta_1, \eta_2, \dots, \eta_{n-1},0]^{\tt T}$ and ${\textsl{\footnotesize X}}_{n-j} = -{\textsl{\footnotesize X}}_j$.
Checking that (c) holds just amounts to computing both sides and using \eqref{eq:betaeq}. Thus, $\mathsf{span}_\mathbb{C}\{\textsl{\footnotesize X}_j', {\textsl{\footnotesize X}}_j\}$ for $j=1,\dots,m$ forms a two-dimensional generalized eigenspace corresponding to
a $2 \times 2$ Jordan block with $\xi^j + \xi^{-j} = 2\cos(\frac{2\pi j}{n})$ on the diagonal.
\medskip
\noindent (d)\ Set $\gamma_a = \xi^{ja}+\xi^{-ja}$ for $a = 1,2, \dots, n-1$. Then $\gamma_1 = 2 \cos(\frac{2\pi j}{n})$ and
\begin{equation}\label{eq:gammarels}\gamma_1^2 = \gamma_2 + 2, \qquad \gamma_1 \gamma_a = \gamma_{a+1} + \gamma_{a-1} \ \ \text{for $a \ge 2$}. \end{equation}
From \eqref{eq:Lev4M}, a left eigenvector of $\mathsf{M}$ corresponding to the eigenvalue $2 \cos(\frac{2\pi j}{n})$ is
$\textsl{\footnotesize Y}_j = [\gamma_1, \gamma_2, \ldots,\gamma_m,\gamma_m,\gamma_{m-1}, \ldots,\gamma_1,1].$
We want to demonstrate that the vector $\textsl{\footnotesize Y}_j'$ in \eqref{eq:Lgenev4M} satisfies
$\textsl{\footnotesize Y}_j'\, \mathsf{M} = 2 \cos(\frac{2\pi j}{n})\textsl{\footnotesize Y}_j' + \textsl{\footnotesize Y}_j.$
An example to keep in mind is the following one for $n=9$ (a vertical line is included only to make the pattern more evident):
\vspace{-.25cm}
$$\textsl{\footnotesize Y}_j' \, = \, [-8,-7\gamma_1,-6(\gamma_2+1), -5(\gamma_3+\gamma_1)\, \mid \, 4(\gamma_3+\gamma_1), 3(\gamma_2+1), 2\gamma_1, 1,0].$$
More generally, assume $\gamma_0 = 1$, and for $b =1,2,\dots,m$, \, let
$ \delta_b \ = \ \gamma_{b-1} + \gamma_{b-3} + \cdots + \gamma_{b-1 - 2 \lfloor \frac{b-1}{2}\rfloor}$,
as in \eqref{eq:defnzeta}. Thus, $\delta_1 = \gamma_0 = 1$, \ $\delta_2 = \gamma_1$, \ $\delta_3 = \gamma_2 + \gamma_0 = \gamma_2 + 1$,
$\delta_4 = \gamma_3 + \gamma_1$, \ $\delta_5 = \gamma_4+\gamma_2 + 1$, etc.
Recall from \eqref{eq:Lgenev4M} that
\begin{equation*}\hspace{-.2cm} \textsl{\footnotesize Y}_j' \, = \, {\small [(1-n)\delta_1,(2-n)\delta_2, \ldots, (m-n)\delta_m \mid m\delta_{m}, (m-1)\delta_{m-1},\, \dots, \delta_1, \, 0]} \end{equation*}
Verifying that $\textsl{\footnotesize Y}_j' \,\mathsf{M} = \gamma_1 \textsl{\footnotesize Y}_j' + \textsl{\footnotesize Y}_j$ uses \eqref{eq:gammarels} and the fact that
$$1 + \gamma_1 + \gamma_2 + \cdots + \gamma_m = 0.\qquad \qquad \qedhere $$ \end{proof}
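All four parts of Proposition \ref{P:Mvs} can be checked numerically by building the vectors exactly from the formulas above. A sketch for $n = 7$ (assuming \texttt{numpy}; not part of the proof):

```python
import numpy as np

n, m = 7, 3
M = np.zeros((n, n), dtype=complex)
for a in range(n - 1):
    M[a, a + 1] = M[a + 1, a] = 1
M[n - 1] = 0
M[n - 1, 0] = M[n - 1, n - 2] = 2

xi = np.exp(2j * np.pi / n)
for j in range(1, m + 1):
    lam = (xi**j + xi**(-j)).real                   # 2 cos(2 pi j / n)
    eta = np.array([xi**(j*a) - xi**(-j*a) for a in range(n + 1)])
    gam = np.array([xi**(j*a) + xi**(-j*a) for a in range(n)])
    gam[0] = 1                                      # convention gamma_0 = 1 from (d)

    X = np.append(eta[1:n], 0)                                       # (a)
    Y = np.append([xi**(j*a) + xi**(-j*a) for a in range(1, n)], 1)  # (b)
    assert np.allclose(M @ X, lam * X)
    assert np.allclose(Y @ M, lam * Y)

    # (c): generalized right eigenvector X_j' built from (eq:Rprimecoord).
    Xp = np.array([sum((a - 2*k) * eta[a - 2*k] for k in range(a // 2 + 1))
                   for a in range(n)])
    assert np.allclose(M @ Xp, lam * Xp + X)

    # (d): delta_b from (eq:defnzeta) and the generalized left eigenvector Y_j'.
    delta = {b: sum(gam[b - 1 - 2*k] for k in range((b - 1) // 2 + 1))
             for b in range(1, m + 1)}
    Yp = np.array([(a + 1 - n) * delta[a + 1] for a in range(m)]
                  + [(n - 1 - a) * delta[n - 1 - a] for a in range(m, n - 1)]
                  + [0])
    assert np.allclose(Yp @ M, lam * Yp + Y)
```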
\medskip
Assume now that $\mathsf{D}$ is the $n \times n$ diagonal matrix $\mathsf{D} = \mathsf{diag}\{1,2,\dots, n\}$ having the dimensions of the simple $\mathfrak{u}_\xi(\fsl_2)$-modules
down its diagonal. We know that $1$ is an eigenvalue of the matrix $\mathsf{K}$ with right eigenvector $[1,1,\dots,1]^{\tt T}$
and corresponding left eigenvector the stationary distribution vector $\pi = [\pi(0),\dots, \pi(n-1)]$. As a consequence of Proposition \ref{P:Mvs} and the relation $\mathsf{K} = \half \mathsf{D}^{-1}\mathsf{M} \mathsf{D}$, we have the following result.
\begin{cor}\label{C:Kvs} Suppose $\theta_j = \frac{2\pi j}{n}$ for $j = 1,\dots,m = \half(n-1)$ and $i = \sqrt{-1}$. Set
$$\textsl{\footnotesize R}_j = \textstyle{\frac{1}{2i}} \mathsf{D}^{-1} \textsl{\footnotesize X}_j, \qquad \textsl{\footnotesize L}_j = \half \textsl{\footnotesize Y}_j \mathsf{D},
\qquad \textsl{\footnotesize R}_j' = \textstyle{\frac{1}{i}} \mathsf{D}^{-1} \textsl{\footnotesize X}_j', \qquad \textsl{\footnotesize L}_j' = \textsl{\footnotesize Y}_j'\mathsf{D},$$
where $\textsl{\footnotesize X}_j, \ \textsl{\footnotesize Y}_j, \ \textsl{\footnotesize X}_j'$, and $\textsl{\footnotesize Y}_j',$ are as in Proposition \ref{P:Mvs}.
Then corresponding to the eigenvalue $\cos(\frac{2\pi j}{n})$,
\begin{itemize}
\item[{\rm (a)}] $\textsl{\footnotesize R}_j = [\sin(\theta_j), \half \sin(2\theta_j), \dots, \frac{1}{n-1}\sin((n-1)\theta_j), 0]^{\tt T}$ is a right eigenvector for $\mathsf{K}$;
\item[{\rm (b)}] $\textsl{\footnotesize L}_j = [\cos(\theta_j), 2 \cos(2\theta_j), \dots, (n-1)\cos((n-1)\theta_j), \frac{n}{2}]$ is a left eigenvector for $\mathsf{K}$;
\item[{\rm (c)}] if $\textsl{\footnotesize R}_j' = [\textsl{\footnotesize R}_j'(0),\textsl{\footnotesize R}_j'(1),\dots,\textsl{\footnotesize R}_j'(n-1)]^{\tt T}$, where
$\textsl{\footnotesize R}_j'(a)= \frac{1}{(a+1)i} \, \textsl{\footnotesize X}_j'(a) = -\frac{i}{a+1} \, \textsl{\footnotesize X}_j'(a)$ and $\textsl{\footnotesize X}_j'(a)$ is the $a$th coordinate
of $\textsl{\footnotesize X}_j'$ given in \eqref{eq:Rprimecoord}, then \begin{center}{$\mathsf{K} \textsl{\footnotesize R}_j' = \cos(\frac{2 \pi j}{n})\textsl{\footnotesize R}_j' + \textsl{\footnotesize R}_j$}
\end{center}
\item[{\rm (d)}] if $\textsl{\footnotesize L}_j' = [\textsl{\footnotesize L}_j'(0),\textsl{\footnotesize L}_j'(1),\dots,\textsl{\footnotesize L}_j'(n-1)]$, where
$\textsl{\footnotesize L}_j'(a) = (a+1) \, \textsl{\footnotesize Y}_j'(a)$ and $\textsl{\footnotesize Y}_j'(a)$ is the $a$th coordinate
of $\textsl{\footnotesize Y}_j'$ given in \eqref{eq:Lgenev4M}, then $ \textsl{\footnotesize L}_j'\mathsf{K} = \cos(\frac{2 \pi j}{n})\textsl{\footnotesize L}_j' + \textsl{\footnotesize L}_j.$
\end{itemize}
\end{cor}
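The corollary can likewise be spot-checked. The sketch below (assuming \texttt{numpy}) builds $\mathsf{K}$ for $n = 7$ and verifies the eigenvector relations with the normalizations used in Example \ref{ex:quantn=3}:

```python
import numpy as np

n, m = 7, 3
M = np.zeros((n, n))
for a in range(n - 1):
    M[a, a + 1] = M[a + 1, a] = 1.0
M[n - 1] = 0.0
M[n - 1, 0] = M[n - 1, n - 2] = 2.0
dims = np.arange(1, n + 1.0)
K = 0.5 * M * dims[None, :] / dims[:, None]      # K = (1/2) D^{-1} M D

for j in range(1, m + 1):
    th = 2 * np.pi * j / n
    lam = np.cos(th)
    a = np.arange(1, n)
    R = np.append(np.sin(a * th) / a, 0.0)       # (a) right eigenvector
    L = np.append(a * np.cos(a * th), n / 2)     # (b) left eigenvector
    assert np.allclose(K @ R, lam * R)
    assert np.allclose(L @ K, lam * L)

    # (c): R_j'(a) = -i X_j'(a)/(a+1); since eta_b = 2i sin(b th), this is real.
    Rp = np.array([2 / (idx + 1) * sum((idx - 2*k) * np.sin((idx - 2*k) * th)
                   for k in range(idx // 2 + 1)) for idx in range(n)])
    assert np.allclose(K @ Rp, lam * Rp + R)

    # (d): L_j' = Y_j' D with delta_b from (eq:defnzeta), gamma_0 = 1.
    gam = 2 * np.cos(th * np.arange(n))
    gam[0] = 1.0
    delta = {b: sum(gam[b - 1 - 2*k] for k in range((b - 1) // 2 + 1))
             for b in range(1, m + 1)}
    Yp = np.array([(t + 1 - n) * delta[t + 1] for t in range(m)]
                  + [(n - 1 - t) * delta[n - 1 - t] for t in range(m, n - 1)]
                  + [0.0])
    Lp = dims * Yp
    assert np.allclose(Lp @ K, lam * Lp + L)
```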
For the results in the next section, we will need to know various products such as $\textsl{\footnotesize L}_j \, \textsl{\footnotesize R}_j'$
and $\textsl{\footnotesize L}_j' \,\textsl{\footnotesize R}_j.$ These two expressions are equal, as
the following simple lemma explains. Compare \eqref{eq:evecrels}.
\begin{lemma} \label{L:simple} Let $\mathsf{A}$ be an $n \times n$ matrix over some field $\mathbb K$. Assume $\textsl{\footnotesize L}$ (resp. $\textsl{\footnotesize R}$) is a left (resp. right)
eigenvector of $\mathsf{A}$ corresponding to an eigenvalue $\lambda$. Let $\textsl{\footnotesize L}'$ (resp. $\textsl{\footnotesize R}'$) be a $1 \times n$ (resp. $n \times 1$) matrix over
$\mathbb K$ such that
$$\textsl{\footnotesize L}' \mathsf{A} = \lambda \textsl{\footnotesize L}' + \textsl{\footnotesize L} \quad \text{and} \quad \mathsf{A}\textsl{\footnotesize R}' = \lambda\textsl{\footnotesize R}' + \textsl{\footnotesize R}$$
so that $\textsl{\footnotesize L}'$ and $\textsl{\footnotesize R}'$ are generalized eigenvectors corresponding to $\lambda$. Then
$$\textsl{\footnotesize L} \, \textsl{\footnotesize R}' \ = \ \textsl{\footnotesize L}' \,\textsl{\footnotesize R}.$$
\end{lemma}
\begin{proof} \ This is apparent from computing $\textsl{\footnotesize L}' \mathsf{A}\textsl{\footnotesize R}'$ two different ways:
\begin{align*} \textsl{\footnotesize L}' \,\mathsf{A} \textsl{\footnotesize R}' & = (\textsl{\footnotesize L}' \mathsf{A})\textsl{\footnotesize R}'= (\lambda \textsl{\footnotesize L}' + \textsl{\footnotesize L})\textsl{\footnotesize R}' = \lambda\textsl{\footnotesize L}'\textsl{\footnotesize R}' +\textsl{\footnotesize L}\,\textsl{\footnotesize R}'\\
& =\textsl{\footnotesize L}'
(\mathsf{A}\textsl{\footnotesize R}') =\textsl{\footnotesize L}'
(\lambda \textsl{\footnotesize R}' + \textsl{\footnotesize R})= \lambda \textsl{\footnotesize L}'
\textsl{\footnotesize R}'+ \textsl{\footnotesize L}' \textsl{\footnotesize R}. \qedhere \end{align*}
\end{proof}
To undertake detailed analysis of convergence, the inner products $d_j = \textsl{\footnotesize L}_j \, \textsl{\footnotesize R}_j' \ = \ \textsl{\footnotesize L}_j' \,\textsl{\footnotesize R}_j$
and $d_j' = \textsl{\footnotesize L}_j' \, \textsl{\footnotesize R}_j'$, \ $1 \le j \le (n-1)/2$ are needed. We were surprised to see that $d_j$ came out so neatly.
\begin{lemma} \label{L:dj} For $ \textsl{\footnotesize L}_j'$ and $\textsl{\footnotesize R}_j$ as in Corollary \ref{C:Kvs},
$$d_j = \sum_{k=0}^{n-1} \textsl{\footnotesize L}_j'(k) \textsl{\footnotesize R}_j(k) = -\frac{n^2}{4\sin(\theta_j)}, \quad
\text{where} \;\, \theta_j = {\frac{2\pi j}{n}}.$$ \end{lemma}
\begin{proof} \ Recall that $\textsl{\footnotesize L}_j' = \textsl{\footnotesize Y}_j' \mathsf{D}$ and $\textsl{\footnotesize R}_j = \frac{1}{2i} \mathsf{D}^{-1}\textsl{\footnotesize X}_j$, where $i = \sqrt{-1}$, $\mathsf{D}$ is the diagonal $n \times n$ matrix with $1,2,\dots,n$ down its main diagonal, and
$\textsl{\footnotesize Y}_j'$ and $\textsl{\footnotesize X}_j$ are as in Proposition \ref{P:Mvs}. Therefore
$$d_j = \textsl{\footnotesize L}_j' \,\textsl{\footnotesize R}_j = \left( \textsl{\footnotesize Y}_j' \mathsf{D}\right) \left( \frac{1}{2i} \mathsf{D}^{-1}\textsl{\footnotesize X}_j \right)
= \frac{1}{2i} \textsl{\footnotesize Y}_j' \,\textsl{\footnotesize X}_j,$$
so it suffices to compute $ \textsl{\footnotesize Y}_j' \,\textsl{\footnotesize X}_j = \sum_{k=0}^{n-1} \textsl{\footnotesize Y}_j'(k)\textsl{\footnotesize X}_j(k)$.
With $m = \half(n-1)$ and $\xi = \mathsf{e}^{\frac{2\pi i}{n}}$, we have from \eqref{eq:Lgenev4M} and Corollary \ref{C:Kvs} that
\[ \textsl{\footnotesize Y}_j' \, = \, {\small [(1-n)\delta_1,(2-n)\delta_2, \ldots, (m-n)\delta_m \mid m\delta_{m}, (m-1)\delta_{m-1},\, \dots, \delta_1, \, 0]} \]
with $ \delta_b \ = \ \gamma_{b-1} + \gamma_{b-3} + \cdots + \gamma_{b-1 - 2 \lfloor \frac{b-1}{2}\rfloor}$ and $\gamma_a = \xi^{ja}+\xi^{-ja} = \textstyle{2\cos(\frac{2\pi ja}{n})};$
\[ \textsl{\footnotesize X}_j \, = \, {\small [\eta_1,\eta_2, \dots, \eta_m, -\eta_m, \dots, -\eta_1,0]^{\tt T}}, \]
with $\eta_b = \xi^{bj}-\xi^{-bj} = \mathsf{e}^{\frac{2\pi i\,jb}{n}}-\mathsf{e}^{-\frac{2\pi i\,jb}{n}} = -\eta_{n-b}$.
Then $\eta_0 = \eta_n = 0$, \; $\gamma_a \eta_b =
\eta_{a+b} + \eta_{b-a}$ for $1 \le b \le m$, and
\begin{align*} \textsl{\footnotesize Y}_j' \,\textsl{\footnotesize X}_j &= -n \sum_{b=1}^m \delta_b \eta_b = -n \sum_{b=1}^m \left(\gamma_{b-1}+\gamma_{b-3} + \cdots
+ \gamma_{b-1 - 2 \lfloor \frac{b-1}{2}\rfloor}\right)\eta_b \\
&= -n \left(m \eta_1 + (m-1) \eta_3 + \cdots + 2 \eta_{2m-3} + \eta_{2m-1}\right) \\
& = -2ni \Big(m \sin(\theta_j) + (m-1) \sin(3\theta_j) + \; \, \cdots \\
& \hspace{3.15cm} + 2 \sin((2m-3)\theta_j) + \sin((2m-1)\theta_j) \Big).
\end{align*}
The argument continues by summing the (almost) geometric series using
\[
\sum_{a=1}^m (m+1-a) \xi^{2a-1} = \frac{\xi}{\left(\xi^2-1\right)^2} \Bigg (\big(\xi^{2(m+1)}-1\big) - (m+1)\big(\xi^2 -1 \big) \Bigg ).
\]
As a result, writing $\zeta = \xi^j = \mathsf{e}^{i\theta_j}$ (so that $\zeta^{2(m+1)} = \zeta^{n+1} = \zeta$),
\[ \textsl{\footnotesize Y}_j' \,\textsl{\footnotesize X}_j = -n\big(S(\zeta) - S(\zeta^{-1})\big), \qquad \text{where} \ \
S(z) = \frac{z}{(z^2-1)^2}\Big((z-1) - (m+1)(z^2-1)\Big). \]
Since $S$ has real coefficients and $\zeta^{-1} = \bar{\zeta}$, we have $S(\zeta^{-1}) = \overline{S(\zeta)}$, so that
$\textsl{\footnotesize Y}_j' \,\textsl{\footnotesize X}_j = -2ni\, \mathsf{Im}\, S(\zeta)$. Now
\[ \frac{\zeta(\zeta-1)}{(\zeta^2-1)^2} = \frac{\zeta}{(\zeta-1)(\zeta+1)^2} = \frac{-i\,\mathsf{e}^{-i\theta_j/2}}{8\sin(\frac{\theta_j}{2})\cos^2(\frac{\theta_j}{2})}
\qquad \text{and} \qquad \frac{(m+1)\zeta}{\zeta^2-1} = \frac{-i\,(m+1)}{2\sin(\theta_j)}, \]
whose imaginary parts are $-\frac{1}{4\sin(\theta_j)}$ and $-\frac{m+1}{2\sin(\theta_j)}$, respectively. Therefore
\[ \mathsf{Im}\, S(\zeta) = -\frac{1}{4\sin(\theta_j)} + \frac{m+1}{2\sin(\theta_j)} = \frac{2m+1}{4\sin(\theta_j)} = \frac{n}{4\sin(\theta_j)}, \]
and hence
\[\textsl{\footnotesize Y}_j' \,\textsl{\footnotesize X}_j = -\frac{n^2\, i}{2\sin(\theta_j)} \; \; \text{and} \; \;
d_j = \textsl{\footnotesize L}_j' \,\textsl{\footnotesize R}_j = \frac{1}{2i}\,\textsl{\footnotesize Y}_j' \,\textsl{\footnotesize X}_j = -\frac{n^2}{4\sin(\theta_j)}. \qedhere \]
\end{proof}
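As a sanity check, the closed form $d_j = -\frac{n^2}{4\sin(\theta_j)}$ can be compared with a direct evaluation of the inner product; at $n = 3$, $j = 1$ it gives $-\frac{3\sqrt{3}}{2}$, matching the product quoted in Example \ref{ex:quantn=3}. A sketch (assuming \texttt{numpy}):

```python
import numpy as np

for n in (3, 5, 7, 9):
    m = (n - 1) // 2
    xi = np.exp(2j * np.pi / n)
    for j in range(1, m + 1):
        th = 2 * np.pi * j / n
        eta = np.array([xi**(j*a) - xi**(-j*a) for a in range(n)])
        gam = np.array([xi**(j*a) + xi**(-j*a) for a in range(n)])
        gam[0] = 1                       # convention gamma_0 = 1
        delta = {b: sum(gam[b - 1 - 2*k] for k in range((b - 1) // 2 + 1))
                 for b in range(1, m + 1)}
        Yp = ([(a + 1 - n) * delta[a + 1] for a in range(m)]
              + [(n - 1 - a) * delta[n - 1 - a] for a in range(m, n - 1)] + [0])
        X = np.append(eta[1:n], 0)
        dj = (np.dot(Yp, X) / 2j).real   # d_j = (1/(2i)) Y_j' X_j
        assert np.isclose(dj, -n**2 / (4 * np.sin(th)))
```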
\begin{remark}{\rm We have not been as successful at understanding $d_j'$. This is less crucial, as $d_j'$ appears in the numerator of various terms, so
upper bounds suffice. We content ourselves with the following.}\end{remark}
\begin{prop} \label{P:dj'} For $\textsl{\footnotesize L}_j'$ and $\textsl{\footnotesize R}_j'$ defined in Corollary \ref{C:Kvs}, the inner product
$d_j' = \textsl{\footnotesize L}_j'\textsl{\footnotesize R}_j'$ satisfies $|d_j'| \le A n^5$ for a universal positive constant $A$ independent of $j$. \end{prop}
\begin{proof} Since $d_j' = \frac{1}{i} \textsl{\footnotesize Y}_j'\textsl{\footnotesize X}_j'$,
we can work instead with the vectors
\[ \textsl{\footnotesize Y}_j' \, = \, {\small [(1-n)\delta_1,(2-n)\delta_2, \ldots, (m-n)\delta_m, m\delta_{m}, (m-1)\delta_{m-1},\, \dots, \delta_1, \, 0]} \]
\[ \textsl{\footnotesize X}_j' \, = \, {\small [0,\eta_1,2\eta_2, 3\eta_3+\eta_1, 4\eta_4+2\eta_2, \ldots, (n-1)\eta_{n-1}+(n-3)\eta_{n-3}+\cdots+2\eta_2]^{\tt T}.} \]
Since $|\delta_a| \leq 2a$ and $|\eta_b| \le 2$, the inner product $|d_j'|$ is bounded above by a constant multiple of
\[ \sum_{a=1}^m (n-a)a \cdot a^2 + \sum_{b=1}^m b^2 (n-b)^2 \ \le \ A' n^5. \qedhere \]
\end{proof}
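The order of growth can also be examined empirically. The sketch below (assuming \texttt{numpy}) computes $d_j' = \frac{1}{i}\textsl{\footnotesize Y}_j'\textsl{\footnotesize X}_j'$ and checks the $O(n^5)$ bound with the generous, purely illustrative constant $100$; the scalar normalization of $d_j'$ affects only the constant.

```python
import numpy as np

for n in (5, 9, 13, 17):
    m = (n - 1) // 2
    xi = np.exp(2j * np.pi / n)
    for j in range(1, m + 1):
        eta = np.array([xi**(j*a) - xi**(-j*a) for a in range(n)])
        gam = np.array([xi**(j*a) + xi**(-j*a) for a in range(n)])
        gam[0] = 1
        delta = {b: sum(gam[b - 1 - 2*k] for k in range((b - 1) // 2 + 1))
                 for b in range(1, m + 1)}
        Yp = np.array([(a + 1 - n) * delta[a + 1] for a in range(m)]
                      + [(n - 1 - a) * delta[n - 1 - a] for a in range(m, n - 1)]
                      + [0])
        Xp = np.array([sum((a - 2*k) * eta[a - 2*k] for k in range(a // 2 + 1))
                       for a in range(n)])
        djp = (np.dot(Yp, Xp) / 1j).real   # d_j' is real; Y' is real, X' imaginary
        assert abs(djp) <= 100 * n**5
```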
\subsection{Proof of Theorem \ref{T:quantum}}\label{quante}
We need to prove that
\begin{equation}\label{eq:quantumTV} f_1(\ell/n^2) \le \parallel \mathsf{K}^\ell - \pi \parallel_{{}_{\mathsf{TV}}} \le f_2(\ell/n^2). \end{equation}
For the lower bound, a first step analysis for the Markov chain $\mathsf{K}(i,j)$, started at $0$, shows that it has high probability of not hitting $(n-1)/2$
after $\ell = \textsl{\footnotesize C}n^2$ steps for $\textsl{\footnotesize C}$ small. On the other hand,
$$\pi\left(\left\{\frac{n-1}{2}, \ldots ,n-1\right\}\right) \sim \frac{3}{4}.$$
This shows
$$\parallel \mathsf{K}^\ell - \pi \parallel_{{}_{\mathsf{TV}}} \ge f_1(\ell/n^2)$$
for $f_1(x)$ strictly positive as $x$ tends to $0$. See \cite{KT} for background on first step analysis.
$\underline{\text{Note}}$: \ Curiously, the `usual lower bound argument' applied in all of our previous theorems breaks down in the $\mathsf{SL}_2$ quantum case. Here the
largest eigenvalue $\ne 1$ for $\mathsf{K}$ is $\cos(\frac{2\pi}{n})$, and $f = \textsl{\footnotesize R}_1$ is an eigenfunction with $|| f ||_\infty \le 1$. Thus,
\[
|\mathsf{K}_0^\ell(f) - \pi(f)| \ge \cos^\ell\left(\frac{2\pi}{n}\right)f(0).
\]
Alas, $f(0) = \sin(\frac{2\pi}{n}) \sim \frac{2\pi}{n}$, so this bound is useless.
From Appendix I (Section \ref{append1}), equation \eqref{eq:Kleq} gives, for any $y$,
\begin{equation} \frac{\mathsf{K}^\ell(0,y)}{\pi(y)}-1 = \frac{1}{\pi(y)} \left(a_1 {\textsl{\footnotesize L}}_1(y) + a_1'{\textsl{\footnotesize L}}_1'(y) + \cdots +
a_m {\textsl{\footnotesize L}}_m(y) + a_m'{\textsl{\footnotesize L}}_m'(y)\right), \end{equation}
with $\pi(y)$, ${\textsl{\footnotesize L}}_j$, ${\textsl{\footnotesize L}}_j'$ given in \eqref{eq:quantumpi}, Corollary \ref{C:Kvs} (b),(d), respectively,
and with $a_j'$, $a_j$ given in \eqref{eq:ajs} by the expressions
\begin{align*} a_j' &= \frac{\lambda_j^\ell {\textsl{\footnotesize R}}_j(0)}{d_j} =
\frac{\lambda_j^\ell \sin(\theta_j)}{d_j}, \\
a_j &= \frac{\lambda_j^\ell {\textsl{\footnotesize R}}_j(0)}{d_j}\left(\frac{\ell}{\lambda_j} - \frac{d_j'}{d_j}\right)
= \frac{\lambda_j^\ell\sin(\theta_j)}{d_j}\left(\frac{\ell}{\lambda_j} - \frac{d_j'}{d_j}\right),\end{align*}
where $\theta_j = \frac{2\pi j}{n}$ and $\lambda_j = \cos(\theta_j)$.
Now from Lemma \ref{L:dj},
\[ \frac{\sin(\theta_j)}{d_j} = -\frac{4 \sin^2(\theta_j)}{n^2},\]
with no error term at all. Therefore,
\begin{align*} |a_j'| &= |\cos(\theta_j)|^\ell\, \frac{4 \sin^2(\theta_j)}{n^2}, \\
|a_j| &\le |\cos(\theta_j)|^\ell\, \frac{4 \sin^2(\theta_j)}{n^2} \left(\frac{\ell}{|\cos(\theta_j)|} + O\left(n^3\sin(\theta_j)\right)\right),
\end{align*}
where the $O(n^3 \sin(\theta_j))$ term bounds $|d_j'/d_j|$, by Lemma \ref{L:dj} and Proposition \ref{P:dj'}.
Consider first the case that $y = 0$. Then ${\textsl{\footnotesize L}}_j(0) = \cos(\theta_j)$, ${\textsl{\footnotesize L}}_j'(0) = 1-n$, and
$\pi(0) = \frac{2}{n^2}$. The terms $\frac{1}{\pi(0)} a_j' {\textsl{\footnotesize L}}_j'(0)$ can be bounded using the inequalities
\begin{align*} & \cos(z) \le \mathsf{e}^{\frac{-z^2}{2}} \; \; (0 \le z \le \frac{\pi}{2}), \qquad |\sin(z)| \leq |z|, \\
& \frac{n^2}{2}\, n \sum_{j=1}^{\lfloor m/2 \rfloor} \mathsf{e}^{-\theta_j^2 \frac{\ell}{2}}\,
\frac{4\, \theta_j^2}{n^2} \ = \ \frac{8\pi^2}{n} \sum_{j=1}^{\lfloor m/2 \rfloor} j^2\,\mathsf{e}^{-\theta_j^2 \frac{\ell}{2}}.
\end{align*}
Writing $\ell = {\textsl{\footnotesize C}} n^2$ and $f({\textsl{\footnotesize C}}) = \sum_{j=1}^{\infty}j^2 \mathsf{e}^{-{\textsl{\footnotesize C}}(2\pi j)^2/2} $, observe that $f({\textsl{\footnotesize C}})$ tends to $0$ as ${\textsl{\footnotesize C}}$ increases, and the sum of the primed
terms up to $\lfloor m/2 \rfloor $ is at most $\frac{8 \pi^2 f({\textsl{\footnotesize C}})}{n}$. The terms from $\lfloor m/2 \rfloor+1$ to $m$
are dealt with below.
The unprimed terms can be similarly bounded by
\[ \frac{n^2}{2} \sum_{j=1}^{\lfloor (m-1)/2 \rfloor} \mathsf{e}^{-\theta_j^2 \frac{\ell}{2}}
\left(\frac{4 \, (2\pi j)^2}{n^4}\right)\left( \ell + O(n^2 j)\right).\]
Again when $\ell = {\textsl{\footnotesize C}}n^2$, this is at most a constant times $f_1({\textsl{\footnotesize C}})$, with $$f_1({\textsl{\footnotesize C}})
= \sum_{j=1}^\infty \left({\textsl{\footnotesize C}}\,j^2 + j^3\right)\mathsf{e}^{-{\textsl{\footnotesize C}}(2\pi j)^2/2},$$ which also tends to $0$ as ${\textsl{\footnotesize C}}$ increases.
For the sum from $\lfloor m/2 \rfloor$ to $m$ use $\cos(\pi - z) = - \cos(z)$ and $\sin(\pi - z) = \sin(z)$ to write \, $\cos\left(\frac{2\pi (m-j)}{n}\right) = - \cos(\frac{2\pi}{n}(j+\half))$,
and $\sin\left(\frac{2\pi (m-j)}{n}\right)= \sin(\frac{2\pi}{n}(j+\half))$. With trivial modification, the same bounds now hold for the upper tail sum. Combining bounds
gives $\frac{\mathsf{K}^\ell(0,0)}{\pi(0)} -1\le f({\textsl{\footnotesize C}})$ when $\ell = {\textsl{\footnotesize C}}n^2$ for an explicit $f({\textsl{\footnotesize C}})$ going to 0 from above
as ${\textsl{\footnotesize C}}$ increases to infinity.
Consider next the case that $y = n-1$. Then $\pi(n-1) = \frac{1}{n}$, \; ${\textsl{\footnotesize L}}_j'(n-1) = 0$ (Hooray!), and ${\textsl{\footnotesize L}}_j(n-1) = \frac{n}{2}$ for $j=1,\dots,m$. Essentially the same arguments show that order $n^2$ steps suffice. The argument for intermediate $y$ is similar, and further
details are omitted. \qed
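The two-sided behavior in \eqref{eq:quantumTV} is easy to observe empirically. A simulation sketch (assuming \texttt{numpy}; the cutoffs $2$ and $6n^2$ are illustrative choices, not optimized constants):

```python
import numpy as np

n = 9
M = np.zeros((n, n))
for a in range(n - 1):
    M[a, a + 1] = M[a + 1, a] = 1.0
M[n - 1] = 0.0
M[n - 1, 0] = M[n - 1, n - 2] = 2.0
d = np.arange(1, n + 1.0)
K = 0.5 * M * d[None, :] / d[:, None]
pi = np.append(2 * np.arange(1, n) / n**2, 1.0 / n)

def tv_from_zero(steps):
    # Total variation distance between K^steps(0, .) and pi.
    P = np.linalg.matrix_power(K, steps)
    return 0.5 * np.abs(P[0] - pi).sum()

assert tv_from_zero(2) > 0.5          # far from stationarity after O(1) steps
assert tv_from_zero(6 * n**2) < 1e-6  # order n^2 steps suffice
```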
\subsection{Tensoring with $\mathsf{V}_{n-1}$}\label{quantf}
This section examines the
tensor walk obtained by tensoring irreducible modules for $\mathfrak{u}_\xi(\mathfrak{sl}_2)$ with the Steinberg module $\mathsf{V}_{n-1}$.
The short exact sequences \eqref{eq:Verma} and \eqref{eq:Xact} imply that
the projective indecomposable module
$\mathsf{P}_{r}$, $0 \leq r \leq n-2$, has the following structure: $\mathsf{P}_r/\mathsf{M}_{n-2-r} \cong \mathsf{M}_r$, where $\mathsf{M}_j/\mathsf{V}_{n-2-j} \cong \mathsf{V}_j$
for $j=r,n-2-r$.
Thus, $[\mathsf{P}_r:\mathsf{V}_j] = 0$ unless $j = r$ or $j=n-2-r$, in which case $[\mathsf{P}_r:\mathsf{V}_j] = 2$.
In \cite{BO}, tensor products of irreducible modules and their projective covers are considered for the Lie algebra $\fsl_2$ over a field of characteristic $p \ge 3$. Identical arguments
can be applied in the quantum case; we omit the details. The rules for tensoring with the Steinberg module $\mathsf{V}_{n-1}$ for $\mathfrak{u}_\xi(\mathfrak{sl}_2)$ are displayed below, and the ones for $\fsl_2$ can be read from these
by specializing $n$ to $p$.
\begin{align} \label{eq:jtens} \begin{split}
& \mathsf{V}_0 \otimes \mathsf{V}_{n-1} \cong \mathsf{V}_{n-1} \\
& \mathsf{V}_r \otimes \mathsf{V}_{n-1} \cong \mathsf{P}_{n-1-r} \oplus \mathsf{P}_{n+1-r} \oplus \cdots \oplus \begin{cases} \mathsf{P}_{n-3} \oplus \mathsf{V}_{n-1} & \quad \text{if $r$ is even,} \\
\mathsf{P}_{n-2}
& \quad \text{if $r$ is odd}. \end{cases}
\end{split}
\end{align}
The expression for $\mathsf{V}_r \otimes \mathsf{V}_{n-1}$ holds when $1\le r \le n-1$, and the subscripts on the projective summands in that line increase by 2 from left to right.
The right-hand side of \eqref{eq:jtens} when $r =1$ says that $\mathsf{V}_1 \otimes \mathsf{V}_{n-1} \cong \mathsf{P}_{n-2}$ (compare Proposition \ref{P:stein}).
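As a quick consistency check on \eqref{eq:jtens}, the following sketch (an illustration of ours, not from the paper) compares dimensions on both sides, assuming the standard facts that $\dim \mathsf{V}_j = j+1$ and $\dim \mathsf{P}_j = 2n$ for $0 \le j \le n-2$:

```python
# Consistency check (ours, not from the paper) of the dimension count in
# (eq:jtens), assuming the standard facts dim V_j = j + 1 and
# dim P_j = 2n for 0 <= j <= n - 2.

def rhs_dimension(r, n):
    """Total dimension of the right-hand side of V_r tensor V_{n-1}."""
    if r == 0:
        return n                     # V_0 tensor V_{n-1} = V_{n-1}
    if r % 2 == 0:
        # P_{n-1-r}, P_{n+1-r}, ..., P_{n-3} (subscripts step by 2), plus V_{n-1}
        return sum(2 * n for _ in range(n - 1 - r, n - 2, 2)) + n
    # r odd: P_{n-1-r}, P_{n+1-r}, ..., P_{n-2}
    return sum(2 * n for _ in range(n - 1 - r, n - 1, 2))

# dim(V_r tensor V_{n-1}) = (r + 1) * n must match the right-hand side
for n in (3, 5, 7, 9, 11):
    for r in range(n):
        assert (r + 1) * n == rhs_dimension(r, n), (n, r)
print("dimension counts agree for n = 3, 5, 7, 9, 11")
```

In particular, for $r = n-1$ the check recovers $\dim(\mathsf{V}_{n-1} \otimes \mathsf{V}_{n-1}) = n^2$.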
The McKay matrix $\mathsf{M}$ for the tensor chain is displayed below for $n = 3,5,7$.
$$\left(\begin{matrix} 0 & 0 & 1 \\ 2 & 2 & 0 \\ 2 & 2 & 1 \end{matrix}\right) \qquad \quad
\left(\begin{matrix} 0 & 0 & 0 & 0& 1 \\ 2 & 0 & 0 & 2 & 0 \\ 0 & 2 & 2 & 0 & 1 \\
2 & 2 & 2 & 2 & 0 \\ 2 & 2 & 2 & 2 & 1 \end{matrix}\right) \qquad \quad \left(\begin{matrix} 0 & 0 & 0 & 0 & 0 & 0& 1 \\ 2 & 0 & 0 & 0 & 0 & 2 & 0 \\ 0 & 2 & 0 & 0 & 2 & 0 & 1 \\
2 & 0 & 2 & 2 & 0 & 2 & 0\\
0 & 2 & 2 & 2 & 2 & 0 & 1 \\ 2 & 2 & 2 & 2 & 2 & 2 & 0 \\ 2 & 2 & 2 & 2 & 2 & 2 & 1 \end{matrix}\right)$$
The following results hold for all odd $n \ge 3$:
\begin{itemize}
\item The vector $\mathsf{r}_0: =[1, 2, 3, \dots, n-1,n]^{\tt T}$ of dimensions of the irreducible modules is a right eigenvector corresponding to the eigenvalue $n$.
\item The vector $\ell_0: =[2,2,2, \dots, 2,1]$ of dimensions of the projective covers (times $\frac{1}{n}$) is a left eigenvector corresponding to the eigenvalue $n$.
\item The $\frac{n-1}{2}$ vectors displayed in \eqref{eq:sttensr} are right eigenvectors of $\mathsf{M}$ corresponding to the eigenvalue $0$:
\begin{align}\begin{split}\label{eq:sttensr} \mathsf{r}_1 & =[1,0,0, \ \ldots \ 0,0, -1, 0 ]^{\tt T} \\
\mathsf{r}_2 & =[0,1, 0, \, \ldots \,0,-1,0, 0 ]^{\tt T} \\
\vdots \ & \qquad \qquad \vdots \\
\mathsf{r}_{j+1}& =[0,\ldots, 0,\underbrace{1}_j,0, \ldots, 0,\underbrace{-1}_{n-2-j},0, \ldots, 0 ]^{\tt T}, \\
\vdots \ & \qquad \qquad \vdots \\
\mathsf{r}_{\frac{n-1}{2}} & =[0, 0, \ldots, \underbrace{1,-1}_{\frac{n-3}{2},\frac{n-1}{2}\ \text{slots}}, 0, \ldots, 0]^{\tt T}.\end{split}\end{align}
(Recall that the rows and columns of $\mathsf{M}$ are numbered $0,1,\dots, n-1$ corresponding
to the labels of the irreducible modules.)
That the vectors in \eqref{eq:sttensr} are right eigenvectors for the eigenvalue 0 can be seen from a direct computation, and it also follows from the structure of the projective covers and \eqref{eq:jtens}. Indeed, if $\mathsf{P}_j$ is a summand of
$\mathsf{V}_i \otimes \mathsf{V}_{n-1}$ for $j=0,1,\dots,\frac{n-3}{2}$, then since $[\mathsf{P}_j:\mathsf{V}_j] = 2 = [\mathsf{P}_j:\mathsf{V}_{n-2-j}]$, there is a $2$ as the $(i,j)$ and $(i,n-2-j)$ entries of row $i$.
Therefore, $\mathsf{M} \mathsf{r}_{j+1} = 0$.
\item When $n = 3$ and $\mathsf{r_1}' = [-1,-1,4]^{\tt T}$, then $\mathsf{M} \mathsf{r}_1' = 4 \mathsf{r}_1$. Therefore, $ \mathsf{r}_1, \frac{1}{4}\mathsf{r}_1'$ give a
$2 \times 2$ Jordan block $\mathsf{J} =\left(\begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix}\right)$ corresponding to the eigenvalue 0, and $\mathsf{M}$ is conjugate to the matrix
$$\left(\begin{matrix} 3 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{matrix}\right).$$
\item When $n > 3$, define
\begin{align}\begin{split}\label{eq:sttensr'} \mathsf{r}_1' & =[0,0,0, \ \ldots, \ 0, -1, 0,2 ]^{\tt T} \\
\mathsf{r}_2' & =[0, 0, \, \ldots \,0, -1,0, 1,0 ]^{\tt T} \\
\vdots \ & \qquad \qquad \qquad \vdots \\
\mathsf{r}_{j+1}' & =[0,\ldots, 0,\underbrace{-1}_{n-j-2}, 0,\underbrace{1}_{n-j},0, \ldots 0]^{\tt T} \quad \text{for} \ j=2,\dots,\textstyle{\frac{n-3}{2}} \\
\vdots \ & \qquad \qquad \qquad \vdots \\
\mathsf{r}_{\frac{n-1}{2}}' & =[0, 0, \ldots, \underbrace{-1}_{\frac{n-3}{2}}, 0, \underbrace{1}_{\frac{n+1}{2}}, 0, \ldots 0]^{\tt T}.\
\end{split}\end{align}
The vectors $\mathsf{r}_j$, $\half \mathsf{r}_j'$ correspond to the $2 \times 2$ Jordan block $\mathsf{J}$ above. Using the
basis $\mathsf{r}_0,\mathsf{r}_1,\half \mathsf{r}_1', \ldots, \mathsf{r}_{\frac{n-1}{2}},\half \mathsf{r}_{\frac{n-1}{2}}'$, we see that $\mathsf{M}$ is conjugate to the matrix
$$\left(\begin{matrix} n & 0 & & \ldots & & 0 \\ 0 & \mathsf{J} & 0 &\ldots & & 0 \\ 0 & 0 & \mathsf{J} & 0 & & 0 \\
0 & 0 & & \ddots & & 0 \\
0 & 0 & &\ldots && \mathsf{J} \end{matrix}\right).$$ \item The characteristic polynomial of $\mathsf{M}$ is $x^n - n x^{n-1} = x^{n-1}(x - n).$
\item The vectors $\ell_j $ for $j = 1,2,\dots, \frac{n-1}{2}$ displayed in \eqref{eq:tensl} are left eigenvectors for $\mathsf{M}$ corresponding to the eigenvalue $0$, where
\begin{align}\begin{split}\label{eq:tensl} \ell_1 & =[1,0,0, \; \; \ldots, \; \; 0, 0, 1,-1 ] \\
\ell_2 & =[0,1, 0, \; \ \ldots, \; \ 0,1,0, -1 ] \\
\vdots \ & \qquad \qquad \qquad \vdots \\
\mathsf{\ell}_{j}& =[0,\ldots, 0,\underbrace{1}_{j-1},0, \ldots, 0,\underbrace{1}_{n-1-j},0, \ldots, 0,-1 ], \\
\vdots \ & \qquad \qquad \qquad \vdots \\
\ell_{\frac{n-1}{2}} & =[0, 0,\ \ldots, \underbrace{1,1}_{\frac{n-3}{2},\frac{n-1}{2}}, 0,\ldots, -1].\end{split}\end{align}
\item Let
\begin{align}\begin{split}\label{eq:sttensl'} \mathsf{\ell}_1' & =[-2,1,0, \; \; \ldots, \;\; 0, 0 ] \\
\mathsf{\ell}_2' & =[-3, 0, 1, 0, \; \ldots,\; 0,0, 0 ] \\
\mathsf{\ell}_3' & =[-2, -1, 0, 1, 0, \ldots, 0,0, 0 ] \\
\vdots \ & \qquad \qquad \qquad \vdots \\
\mathsf{\ell}_{j}' & =[-2,0,\ldots, 0,\underbrace{-1}_{j-2},0,\underbrace{1}_{j},0, \ldots 0] \quad \text{for} \ j=3,\dots,\textstyle{\frac{n-3}{2}}\\
\vdots \ & \qquad \qquad \qquad \vdots \\
\ell_{\frac{n-1}{2}}' & =[0, 0, \ldots, \underbrace{-1}_{\frac{n-5}{2}}, 0, \underbrace{1}_{\frac{n-1}{2}}, 0,\ldots, -1].\end{split}\end{align}
(The underbrace in these definitions indicates the slot position.) Then \\
$\left(\half{\ell_j'}\right) \mathsf{M} = \ell_j$ for $j = 1,2,\ldots, \frac{n-1}{2}$.
\end{itemize}
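The eigen-structure claims in the bullets above can be spot-checked numerically. The sketch below (ours, not part of the paper) does this for the $n=5$ McKay matrix displayed earlier:

```python
import numpy as np

# Numerical check (ours) of the eigen-structure claims for the n = 5
# McKay matrix displayed earlier.
n = 5
M = np.array([[0, 0, 0, 0, 1],
              [2, 0, 0, 2, 0],
              [0, 2, 2, 0, 1],
              [2, 2, 2, 2, 0],
              [2, 2, 2, 2, 1]], dtype=float)

r0 = np.array([1, 2, 3, 4, 5], dtype=float)   # dimensions of the irreducibles
l0 = np.array([2, 2, 2, 2, 1], dtype=float)   # dims of projective covers / n
assert np.allclose(M @ r0, n * r0)            # right eigenvector, eigenvalue n
assert np.allclose(l0 @ M, n * l0)            # left eigenvector, eigenvalue n

# eigenvalue-0 right eigenvectors r_1, r_2 from (eq:sttensr):
# 1 in slot j, -1 in slot n - 2 - j
r1 = np.array([1, 0, 0, -1, 0], dtype=float)
r2 = np.array([0, 1, -1, 0, 0], dtype=float)
assert np.allclose(M @ r1, 0) and np.allclose(M @ r2, 0)

# Jordan chain: M (1/2 r_1') = r_1 with r_1' = [0, 0, -1, 0, 2]^T
r1p = np.array([0, 0, -1, 0, 2], dtype=float)
assert np.allclose(M @ (0.5 * r1p), r1)

# The claimed Jordan form diag(n, J, J) with nilpotent 2x2 blocks J gives
# minimal polynomial x^2 (x - n), hence characteristic polynomial x^{n-1}(x - n)
assert np.allclose(M @ M @ (M - n * np.eye(5)), 0)
print("all eigen-structure claims verified for n = 5")
```

The last assertion checks the minimal polynomial $x^2(x-n)$ implied by the block-diagonal form with one eigenvalue $n$ and $\frac{n-1}{2}$ nilpotent $2 \times 2$ Jordan blocks.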
We have not carried out the convergence analysis for the Markov chain coming from tensoring with the Steinberg module for $\mathfrak{u}_\xi(\mathfrak{sl}_2)$, but we conjecture that a
bounded number of steps is necessary and sufficient for total variation convergence.
\section{Appendix I. \, Background on Markov chains}\label{append1}
Markov chains are a classical topic of elementary probability theory and are treated in many introductory accounts. We recommend \cite{Fe}, \cite{KS}, \cite{KT}, \cite{LeP}
for introductions.
Let $\mathcal{X}$ be a finite set. A matrix with $\mathsf{K}(x,y) \ge 0$ for all $x,y \in \mathcal{X}$, and $\sum_{y \in \mathcal{X}} \mathsf{K}(x,y) = 1$ for all $x \in \mathcal{X}$ gives a Markov chain on $\mathcal{X}$: From $x$, the probability
of moving to $y$ in one step is $\mathsf{K}(x,y)$. Then inductively, $\mathsf{K}^\ell(x,y) = \sum_{z} \mathsf{K}(x,z)\mathsf{K}^{\ell-1}(z,y)$ is the probability of moving from $x$ to $y$ in $\ell$ steps. Say $\mathsf{K}$
has \emph{stationary distribution} $\pi$ if $\pi(y) \ge 0$, \ $\sum_{y \in \mathcal{X}} \pi(y) = 1$, and $\sum_{x \in \mathcal{X}} \pi(x) \mathsf{K}(x,y) = \pi(y)$ for all $y \in \mathcal{X}$. Thus, $\pi$ is a left eigenvector
with eigenvalue 1 and having coordinates $\pi(y), y \in \mathcal{X}$. Under mild conditions, the Perron-Frobenius Theorem says that Markov chains are \emph{ergodic}, that is to say, they have a
unique stationary distribution and $\mathsf{K}^\ell(x,y) \buildrel {\color{black}{\ell \rightarrow \infty}} \over \longrightarrow \pi(y)$ for all starting states $x$.
The rate of convergence is measured in various metrics. Write $\mathsf{K}^\ell_x = \mathsf{K}^\ell(x, \cdot)$. Then
\begin{align}
|| \mathsf{K}_x^\ell - \pi ||_{{}_{\mathsf{TV}}} &= \mathsf{max}_{\displaystyle{\mathcal{Y} \subseteq \mathcal{X}}}\ \, \vert \mathsf{K}^\ell(x,\mathcal Y)-\pi(\mathcal Y) \vert
= \half \displaystyle{\sum_{y \in \mathcal{X}}} \vert \mathsf{K}^\ell(x,y)- \pi(y) \vert \nonumber \\
&= \half \mathsf{sup}_{\vert \vert f \vert \vert_\infty \le 1} \vert \mathsf{K}^\ell(f)(x) - \pi(f) \vert \;\, \text{with} \;\, \vert\vert f\vert\vert_\infty \ = \ \mathsf{max}_y \vert f(y) \vert, \label{eq:TV} \\
\text{where} \; \mathsf{K}^\ell(f)(x) = &\sum_{y \in \mathcal{X}} \mathsf{K}^\ell(x,y) f(y), \; \pi(f) = \sum_{y \in \mathcal{X}} \pi(y)f(y)\,\text{for a test function $f$, and} \nonumber \\
|| \mathsf{K}_x^\ell - \pi||_{\infty} &= \mathsf{max}_{y \in \mathcal{X}} \ \, \left | \frac{\mathsf{K}^\ell(x,y)}{\pi(y)} - 1\right | . \label{eq:inf} \end{align}
Clearly, $|| \mathsf{K}_x^\ell - \pi ||_{{}_{\mathsf{TV}}} = \half \sum_{y \in \mathcal{X}}\ \left | \frac{\mathsf{K}^\ell(x,y)}{\pi(y)} - 1\right | \ \pi(y) \le \half ||\mathsf{K}^\ell_x - \pi ||_{\infty}$.
Throughout, this is the route taken to determine upper bounds, while \eqref{eq:TV} gives $|| \mathsf{K}_x^\ell - \pi ||_{{}_{\mathsf{TV}}} \ge \half \vert \mathsf{K}^\ell(f)(x) - \pi(f) \vert$
for any test function $f$ with
$\vert\vert f\vert\vert_\infty \le 1$ (usually $f$ is taken as the eigenfunction for the second largest eigenvalue).
The $\ell_{\infty}$ distance satisfies a useful monotonicity property, namely,
\begin{equation}\label{monotone}
\parallel \mathsf{K}^\ell - \pi \parallel_{\infty} \text{ is monotone non-increasing}.
\end{equation}
\noindent
Indeed, fix $x \in \mathcal{X}$ and consider the Markov chain $\mathsf{K}(x,y)$ with stationary distribution
$\pi(y)$, so $\mathsf{K}^\ell(x,y) = \sum_{z \in \mathcal{X}}\mathsf{K}^{\ell-1}(x,z)\mathsf{K}(z,y)$. As
$\pi(y)=\sum_{z \in \mathcal{X}}\pi(z)\mathsf{K}(z,y)$, we have by \eqref{eq:inf} for any $y \in \mathcal{X}$ that\\
$$\begin{aligned}
|\mathsf{K}^\ell(x,y) - \pi(y)| & = \biggl|\sum_{z \in \mathcal{X}}\left(\mathsf{K}^{\ell-1}(x,z)-\pi(z)\right)\mathsf{K}(z,y)\biggr|\\
& \leq \sum_{z \in \mathcal{X}}\left|\mathsf{K}^{\ell-1}(x,z)-\pi(z)\right|\mathsf{K}(z,y)\\
& \leq\;\parallel \mathsf{K}^{\ell-1} - \pi \parallel_{\infty}\cdot\sum_{z \in \mathcal{X}}\pi(z)\mathsf{K}(z,y)\\
& =\;\parallel \mathsf{K}^{\ell-1} - \pi \parallel_{\infty}\cdot\pi(y).
\end{aligned}$$
Now \eqref{monotone} follows by taking the supremum over $y \in \mathcal{X}$ and applying \eqref{eq:inf} again.
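A small numerical illustration (ours, not the paper's) of \eqref{monotone}: for a toy ergodic chain, the $\ell_\infty$ distance of \eqref{eq:inf} decreases monotonically in $\ell$, and it dominates twice the total variation distance, as in the inequality above.

```python
import numpy as np

# Illustration (ours) of (monotone) for a toy reversible ergodic chain:
# the l_infinity distance of (eq:inf) is monotone non-increasing in l,
# and TV distance is at most half the l_infinity distance.
K = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# stationary distribution: left eigenvector of K for eigenvalue 1
w, V = np.linalg.eig(K.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
assert np.allclose(pi @ K, pi)

def linf(P):
    """sup over starting states x of the l_infinity distance (eq:inf)."""
    return np.max(np.abs(P / pi - 1))

def tv(P):
    """sup over starting states x of the total variation distance to pi."""
    return 0.5 * np.max(np.sum(np.abs(P - pi), axis=1))

P = K.copy()
prev = linf(P)
for _ in range(50):
    P = P @ K
    cur = linf(P)
    assert cur <= prev + 1e-12               # (monotone)
    assert tv(P) <= 0.5 * linf(P) + 1e-12    # TV <= (1/2) l_infinity
    prev = cur
print("monotonicity of the l_infinity distance verified")
```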
Suppose now that $\mathsf{K}$ is the Markov chain on the irreducible characters $\mathsf{Irr}(\mathsf{G})$ of a finite group $\mathsf{G}$ using the character $\alpha$.
The matrix $\mathsf{K}$ has eigenvalues $\beta_c = \alpha(c)/\alpha(1)$, where $c$ is a representative for a conjugacy class of $\mathsf{G}$, and
there is an orthonormal basis of (right) eigenfunctions $f_c \in L^2(\pi)$ (see \cite[Prop. 2.3]{F0}) defined by
$$f_c(\chi) = \frac{|c^G|^{\half} \, \chi(c)}{\chi(1)},$$
where $|c^G|$ is the size of the class of $c$. Using these ingredients, we have as in \cite[Lemma 2.2]{F5},
\begin{align}\begin{split} \label{eq:kf} \mathsf{K}^\ell(\chi,\varrho) &= \sum_{c} \beta_c^\ell \, f_c(\chi) \, f_c(\varrho) \, \pi(\varrho) \\
&= \sum_c \left( \frac{\alpha(c)}{\alpha(1)}\right)^\ell |c^G| \, \frac{\chi(c)}{\chi(1)}\, \frac{\varrho(c)}{\varrho(1)} \, \frac{\varrho(1)^2}{|\mathsf{G}|} \\
& = \frac{\varrho(1)}{\alpha(1)^\ell \chi(1) |\mathsf{G}|} \sum_c \alpha(c)^\ell |c^G| \chi(c) \varrho(c).
\end{split}\end{align}
In particular, $\mathsf{K}^\ell(\mathbb{1},\varrho) = \frac{\varrho(1)}{\alpha(1)^\ell |\mathsf{G}|} \sum_c \alpha(c)^\ell\, |c^G| \,\varrho(c)$, for the trivial character $\mathbb{1}$ of $\mathsf{G}$.
An alternate general formula can be found, for example, in \cite[Lemma 3.2]{F3}:
$$\mathsf{K}^\ell(\mathbb{1}, \varrho) = \frac{\varrho(1)}{\alpha(1)^\ell} \langle \alpha^\ell, \varrho \rangle,$$
where $\langle \alpha^\ell,\varrho \rangle$ is the multiplicity of $\varrho$ in $\alpha^\ell$.
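To illustrate \eqref{eq:kf} concretely, here is a hypothetical worked example (ours, not in the paper) for $\mathsf{G} = S_3$ with $\alpha$ the two-dimensional irreducible character; it checks the spectral formula against direct matrix powers of $\mathsf{K}$:

```python
import numpy as np
from itertools import product

# Hypothetical worked example (not from the paper): the character walk on
# Irr(S_3) driven by the two-dimensional character alpha, comparing the
# spectral formula (eq:kf) with direct matrix powers of K.
G = 6
class_sizes = np.array([1, 3, 2])            # classes: e, transpositions, 3-cycles
chars = {
    "triv": np.array([1.0, 1.0, 1.0]),       # character table of S_3
    "sgn":  np.array([1.0, -1.0, 1.0]),
    "std":  np.array([2.0, 0.0, -1.0]),
}
names = list(chars)
alpha = chars["std"]
dim = lambda chi: chi[0]

# Direct definition: K(chi, rho) = <alpha * chi, rho> * rho(1) / (alpha(1) chi(1))
K = np.zeros((3, 3))
for i, j in product(range(3), repeat=2):
    chi, rho = chars[names[i]], chars[names[j]]
    mult = np.sum(class_sizes * alpha * chi * rho) / G
    K[i, j] = mult * dim(rho) / (dim(alpha) * dim(chi))
assert np.allclose(K.sum(axis=1), 1.0)       # K is a stochastic matrix

# Spectral formula (eq:kf): K^l(chi, rho) = sum_c beta_c^l f_c(chi) f_c(rho) pi(rho)
def K_power(l):
    out = np.zeros((3, 3))
    for i, j in product(range(3), repeat=2):
        chi, rho = chars[names[i]], chars[names[j]]
        out[i, j] = sum((alpha[c] / dim(alpha)) ** l * class_sizes[c]
                        * (chi[c] / dim(chi)) * (rho[c] / dim(rho))
                        * dim(rho) ** 2 / G
                        for c in range(3))
    return out

for l in (1, 2, 5):
    assert np.allclose(K_power(l), np.linalg.matrix_power(K, l))
print("spectral formula (eq:kf) matches matrix powers of K")
```

The multiplicities are computed with the usual inner product of characters, which is valid here since all characters of $S_3$ are real.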
\subsection*{The binary dihedral case -- proof of Theorem \ref{T:dihedral}}
To illustrate these formulas, here is a proof of Theorem \ref{T:dihedral}. Recall that $\mathsf{K}$ is the Markov chain on the binary dihedral graph in
Figure \ref{BDn-graph} starting at $0$ and tensoring with $\chi_1$, and $\overbar \mathsf{K} = \frac{1}{2}\mathsf{K} + \frac{1}{2}\,\mathrm{I}$ is the corresponding lazy walk.
For the lower bound, we use \eqref{eq:TV} to see that
$||\overbar{\mathsf{K}}^\ell - \pi ||_{{}_{\mathsf{TV}}} \ge \half \vert \overbar{\mathsf{K}}^\ell(f)(\mathbb 1)- \pi(f) \vert$ with $f(\chi) = \chi(c)/\chi(1)$ for some
conjugacy class representative $c \ne 1$ in $\mathsf{BD}_n$. Clearly,
$|| f ||_\infty \le 1$, and from Theorem
\ref{T:measure} or \eqref{eq:kf} above, we see that $f$ is a right eigenfunction for the lazy Markov chain $\overbar \mathsf{K}$ with eigenvalue $\half +\half \cos\left(\frac{2\pi}{n}\right)$.
Since $f$ is orthogonal to the constant functions, $\pi(f) = 0$, so the lower bound becomes
$||\overbar{\mathsf{K}}^\ell-\pi ||_{{}_{\mathsf{TV}}} \ge \left(\half +\half \cos\left(\frac{2\pi}{n}\right)\right)^\ell$. Since
$ \cos\left(\frac{2\pi}{n}\right) \ge 1 - \frac{2\pi^2}{n^2}$, we get
$||\overbar{\mathsf{K}}^\ell-\pi ||_{{}_{\mathsf{TV}}} \ge \left(1 - \frac{2\pi^2}{n^2}\right)^\ell$, and the result,
$||\overbar \mathsf{K}^\ell-\pi ||_{{}_{\mathsf{TV}}} \ge Be^{-2\pi^2 \ell/n^2}$ for some positive constant $B$, holds for all $\ell \ge 1$.
For the upper bound, \eqref{eq:kf} and the character values from Table \ref{chBD} give explicit formulas for the transition probabilities. For example,
for $1 \le r \le n-1$,
$$\frac{\overbar{\mathsf{K}}^\ell(\mathbb 1, \chi_r)}{\pi(\chi_r)} -1 = 4 \sum_{j=1}^{r-1} \left(\half +\half \cos\left(\frac{2\pi j}{n}\right)\right)^\ell \cos\left(\frac{2\pi j}{n}\right).$$
Now standard bounds for the simple random walk show that the right side is at most $B'e^{-2\pi^2 \ell/n^2}$ for some positive constant $B'$; for details, see
\cite[Chap.~3]{Diacbk}. The same argument works for the one-dimensional characters $\lambda_{1'},\lambda_{2'},\lambda_{3'},\lambda_{4'}$, yielding
$\parallel\overbar{\mathsf{K}}^\ell-\pi\parallel_\infty \le B'e^{-2\pi^2 \ell/n^2}$ and proving the upper bound in Theorem \ref{T:dihedral}. \qed
\subsection*{Generalized spectral analysis using Jordan blocks}
The present paper uses the Jordan block decomposition of the matrix $\mathsf{K}$ in the quantum $\mathsf{SL}_2$ case to give a generalized spectral analysis. We have not seen this classical tool of matrix theory used in
quite the same way and pause here to include some details.
\medskip
For $\mathsf{K}$ as above, the Jordan decomposition provides an invertible matrix $\mathsf{A}$ such that $\mathsf{A}^{-1} \mathsf{K} \mathsf{A} = \mathsf{J}$, with $\mathsf{J}$
a block diagonal matrix with blocks
$$\mathsf{B} = \mathsf{B}(\lambda) = \left(\begin{matrix} \lambda & 1 & 0 & \ldots & 0 & 0 \\
0 & \lambda & 1 & \ldots & 0 & 0 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & 0 & \ldots & \lambda & 1 & 0 \\
0 & 0 & \ldots & 0 & \lambda & 1 \\
0 & 0 & \ldots & 0 & 0 & \lambda \end{matrix} \right)$$
of various sizes. If $\mathsf{B}$ is $h \times h$, then
$$\mathsf{B}^\ell = \small{\left(\begin{matrix} \lambda^\ell & \ell \lambda^{\ell-1} &{\ell \choose 2} \lambda^{\ell-2} & \ldots & \ldots &{\ell \choose h-1} \lambda^{\ell-h+1} \\
0 & \lambda^\ell & \ell \lambda^{\ell-1}& \ldots & &{\ell \choose h-2} \lambda^{\ell-h+2} \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & 0 & \ldots & & \lambda^\ell & \ell \lambda^{\ell-1} \\
0 & 0 & \ldots & 0 & 0 & \lambda^\ell \end{matrix} \right)}.$$
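The binomial pattern in $\mathsf{B}^\ell$, with $(i,j)$ entry ${\ell \choose j-i}\lambda^{\ell-(j-i)}$, can be verified numerically; a short sketch of ours:

```python
import numpy as np
from math import comb

# Check (ours) of the formula above: the l-th power of an h x h Jordan
# block B(lambda) has (i, j) entry C(l, j - i) * lambda^(l - (j - i)).
def jordan_block(lam, h):
    return lam * np.eye(h) + np.diag(np.ones(h - 1), k=1)

def jordan_power(lam, h, l):
    B_l = np.zeros((h, h))
    for i in range(h):
        for j in range(i, h):
            k = j - i
            if k <= l:
                B_l[i, j] = comb(l, k) * lam ** (l - k)
    return B_l

for lam, h, l in [(0.5, 2, 7), (2.0, 3, 5), (-1.0, 4, 6)]:
    B = jordan_block(lam, h)
    assert np.allclose(np.linalg.matrix_power(B, l), jordan_power(lam, h, l))
print("binomial formula for Jordan block powers verified")
```

This follows from expanding $(\lambda \mathrm{I} + \mathsf{N})^\ell$ binomially, where $\mathsf{N}$ is the nilpotent superdiagonal shift.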
Since $\mathsf{K} \mathsf{A} = \mathsf{A\, J}$, we may think of $\mathsf{A}$ as a matrix of generalized right eigenvectors for $\mathsf{K}$. Each block
of $\mathsf{J}$ contributes one actual eigenvector.
Since $\mathsf{A}^{-1} \mathsf{K} = \mathsf{J}\, \mathsf{A}^{-1}$, then $\mathsf{A}^{-1}$ may be regarded as a matrix of generalized left eigenvectors.
Denote the rows of $\mathsf{A}^{-1}$ by $\mathsf{b}_0, \mathsf{b}_1, \dots, \mathsf{b}_{|\mathcal{X}|-1}$ and the columns of $\mathsf{A}$ by $\mathsf{c}_0, \mathsf{c}_1, \dots,
\mathsf{c}_{|\mathcal{X}|-1}$. Then from $\mathsf{A}^{-1} \mathsf{A} = \mathrm{I}$, it follows that $\sum_{x\in \mathcal{X}} \mathsf{b}_i(x) \mathsf{c}_j(x) = \delta_{i,j}$.
Throughout, we take $\mathsf{b}_0(x) = \pi(x)$ and $\mathsf{c}_0(x) = 1$ for all $x \in \mathcal{X}$. For an ergodic Markov chain (the only kind considered in this paper), the Jordan block corresponding
to the eigenvalue $1$ is a $1 \times 1$ matrix with entry $1$.
In the next result, we consider a special type of Jordan decomposition, where one block has size one, and the rest have size two. Of course, the motivation
for this special decomposition comes from the quantum case in Section \ref{quant}.
\begin{prop} \label{P: eigenrels} Suppose
$\mathsf{A}^{-1} \mathsf{K} \mathsf{A} = \mathsf{J}$, where
$$\mathsf{J} = \left(\begin{matrix} 1 & 0 & 0& \ldots & & 0 \\ 0 & \mathsf{B}(\lambda_1) & 0 &\ldots & & 0 \\ 0 & 0 & \mathsf{B}(\lambda_2) & 0 & & 0 \\
\vdots & \vdots &&&& \vdots \\
0 & 0 & & \ddots & \ddots & 0 \\
0 & 0 & \ldots &&0& \mathsf{B}(\lambda_m) \end{matrix}\right),$$
and for each $j = 1,\dots,m$,
$$\mathsf{B}(\lambda_j) = \left(\begin{matrix} \lambda_j & 1 \\ 0 & \lambda_j \end{matrix}\right).$$
Let $\tilde{\textsl{\footnotesize R}}_0$ be column 0 of $\mathsf{A}$, and for $j=1,\dots, m$, let
$\tilde{\textsl{\footnotesize R}}_j$,$\tilde{\textsl{\footnotesize R}}_j'$ be columns $2j-1$ and $2j$ respectively of $\mathsf{A}$.
Let $\tilde{\textsl{\footnotesize L}}_0$ be row 0 of $\mathsf{A}^{-1}$, and for $i=1,\dots, m$, let
$\tilde{\textsl{\footnotesize L}}_i$,$\tilde{\textsl{\footnotesize L}}_i'$ be rows $2i$ and $2i-1$ respectively of $\mathsf{A}^{-1}$.
Then the following relations hold for all $1 \le i,j \le m$:
\begin{align}\begin{split}\label{eq:evecrels}
&\mathsf{K} \tilde{\textsl{\footnotesize R}}_0 = \tilde{\textsl{\footnotesize R}}_0, \qquad \qquad \mathsf{K} \tilde{\textsl{\footnotesize R}}_j = \lambda_j \tilde{\textsl{\footnotesize R}}_j, \qquad \quad \quad \mathsf{K} \tilde{\textsl{\footnotesize R}}_j' = \lambda_j\tilde{\textsl{\footnotesize R}}_j' +
\tilde{\textsl{\footnotesize R}}_j, \\
& \tilde{\textsl{\footnotesize L}}_0 \mathsf{K} = \tilde{\textsl{\footnotesize L}}_0, \qquad \qquad \tilde{\textsl{\footnotesize L}}_j \mathsf{K} = \lambda_j \tilde{\textsl{\footnotesize L}}_j, \qquad \quad \quad \; \tilde{\textsl{\footnotesize L}}_j' \mathsf{K} = \lambda_j\tilde{\textsl{\footnotesize L}}_j' +
\tilde{\textsl{\footnotesize L}}_j, \\
& \tilde{\textsl{\footnotesize L}}_0 \tilde{\textsl{\footnotesize R}}_0 = 1, \qquad \qquad
\tilde{\textsl{\footnotesize L}}_0 \tilde{\textsl{\footnotesize R}}_j = 0 = \tilde{\textsl{\footnotesize L}}_0 \tilde{\textsl{\footnotesize R}}_j',
\quad \quad \; \tilde{\textsl{\footnotesize L}}_i \tilde{\textsl{\footnotesize R}}_0 = 0 = \tilde{\textsl{\footnotesize L}}_i' \tilde{\textsl{\footnotesize R}}_0,\\
& \tilde{\textsl{\footnotesize L}}_i \tilde{\textsl{\footnotesize R}}_j = 0 = \tilde{\textsl{\footnotesize L}}_i' \tilde{\textsl{\footnotesize R}}_j', \\
&\tilde{\textsl{\footnotesize L}}_i \tilde{\textsl{\footnotesize R}}_j ' = \tilde{\textsl{\footnotesize L}}_i' \tilde{\textsl{\footnotesize R}}_j = \delta_{i,j}.
\end{split}\end{align} \end{prop}
\begin{proof} For $j \ge 1$,
the right-hand side of the expression $\mathsf{K} \mathsf{A}=\mathsf{A} \mathsf{J}$ has
column $2j-1$ of $\mathsf{A}$ multiplied by $\lambda_j$.
Column $2j$ is multiplied by $\lambda_j$ and column $2j-1$ is added to it because of the diagonal block $\mathsf{B}(\lambda_j)$ of $\mathsf{J}$. Thus,
the columns of $\mathsf{A}$ are (generalized) right eigenvectors $\tilde{\textsl{\footnotesize R}}_0,\tilde{\textsl{\footnotesize R}}_1, \tilde{\textsl{\footnotesize R}}_1', \ldots,
\tilde{\textsl{\footnotesize R}}_m, \tilde{\textsl{\footnotesize R}}_m'$ for $\mathsf{K}$ as described in the first line of \eqref{eq:evecrels}.
Similarly, on the right-hand side of the expression
$\mathsf{A}^{-1}\,\mathsf{K} =\mathsf{J}\,\mathsf{A}^{-1}$,
row $2i$ of $\mathsf{A}^{-1}$ is multiplied by $\lambda_i$, and row $2i-1$ is $\lambda_i$ times row $2i-1$ plus row $2i$ for all $i \ge 1$. Therefore,
the rows of $\mathsf{A}^{-1}$ are (generalized) left eigenvectors $\tilde{\textsl{\footnotesize L}}_0, \tilde{\textsl{\footnotesize L}}_1',\ldots, \tilde{\textsl{\footnotesize L}}_1,
\tilde{\textsl{\footnotesize L}}_m', \tilde{\textsl{\footnotesize L}}_m$ of $\mathsf{K}$ (in that order) to give the second line. The other relations in \eqref{eq:evecrels} follow from
$\mathsf{A}^{-1} \mathsf{A} = \mathrm{I}$. \end{proof}
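The relations \eqref{eq:evecrels} are easy to test numerically. The sketch below (an illustration of ours, not from the paper) builds $\mathsf{K} = \mathsf{A}\,\mathsf{J}\,\mathsf{A}^{-1}$ for a random invertible $\mathsf{A}$ and $m = 2$ blocks; this $\mathsf{K}$ is a generic matrix rather than a transition matrix, which suffices since the proposition is pure linear algebra:

```python
import numpy as np

rng = np.random.default_rng(0)

# Numerical illustration (ours) of Proposition (P: eigenrels): build
# K = A J A^{-1} with one 1x1 block (eigenvalue 1) and m = 2 blocks
# B(lambda_j), then check the generalized-eigenvector relations.
lams = [0.6, -0.3]
J = np.zeros((5, 5))
J[0, 0] = 1.0
for j, lam in enumerate(lams):
    k = 1 + 2 * j
    J[k:k+2, k:k+2] = [[lam, 1.0], [0.0, lam]]

A = rng.normal(size=(5, 5))
while abs(np.linalg.det(A)) < 1e-3:          # ensure A is invertible
    A = rng.normal(size=(5, 5))
K = A @ J @ np.linalg.inv(A)
Ainv = np.linalg.inv(A)

R = {j: A[:, 2 * j - 1] for j in (1, 2)}     # columns 2j - 1 of A
Rp = {j: A[:, 2 * j] for j in (1, 2)}        # columns 2j of A
L = {i: Ainv[2 * i, :] for i in (1, 2)}      # rows 2i of A^{-1}
Lp = {i: Ainv[2 * i - 1, :] for i in (1, 2)} # rows 2i - 1 of A^{-1}

for j, lam in zip((1, 2), lams):
    assert np.allclose(K @ R[j], lam * R[j])
    assert np.allclose(K @ Rp[j], lam * Rp[j] + R[j])
    assert np.allclose(L[j] @ K, lam * L[j])
    assert np.allclose(Lp[j] @ K, lam * Lp[j] + L[j])
for i in (1, 2):
    for j in (1, 2):
        assert np.allclose(L[i] @ R[j], 0) and np.allclose(Lp[i] @ Rp[j], 0)
        assert np.isclose(L[i] @ Rp[j], float(i == j))
        assert np.isclose(Lp[i] @ R[j], float(i == j))
print("all relations in (eq:evecrels) verified")
```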
\subsection*{Summary of application of these results to the quantum case}\label{S:summaryquantum}
In Section \ref{quant}, we explicitly constructed left and right (generalized) eigenvectors
${\textsl{\footnotesize L}}_0 = \pi$ (the stationary distribution), ${\textsl{\footnotesize L}}_1, {\textsl{\footnotesize L}}_1', \dots, {\textsl{\footnotesize L}}_m,
{\textsl{\footnotesize L}}_m', {\textsl{\footnotesize R}}_0, {\textsl{\footnotesize R}}_1,{\textsl{\footnotesize R}}_1', \ldots, {\textsl{\footnotesize R}}_m, {\textsl{\footnotesize R}}_m'$
for the tensor chain resulting from tensoring with the two-dimensional natural module $\mathsf{V}_1$ for $\mathfrak{u}_\xi(\mathfrak{sl}_2)$, $\xi$ a primitive $n$th root of unity, $n \ge 3$ odd.
Since the eigenvalues are distinct, the eigenvectors
${\textsl{\footnotesize L}}_0, {\textsl{\footnotesize L}}_1, \ldots, {\textsl{\footnotesize L}}_m$, ${\textsl{\footnotesize R}}_0, {\textsl{\footnotesize R}}_1,\ldots, {\textsl{\footnotesize R}}_m$ must be nonzero scalar multiples of
the ones coming from Proposition \ref{P: eigenrels}. Suppose for $1 \le i \le m$,
${\textsl{\footnotesize R}}_i = \gamma_i \tilde{\textsl{\footnotesize R}}_i$, and
${\textsl{\footnotesize R}}_i' = \delta_i \tilde{\textsl{\footnotesize R}}_i' + \varepsilon_i \tilde{\textsl{\footnotesize R}}_i$,
where $\gamma_i$ and $\delta_i$ are nonzero. Then the
relation $\mathsf{K} {\textsl{\footnotesize R}}_i' = \lambda_i {\textsl{\footnotesize R}}_i' + {\textsl{\footnotesize R}}_i$, which holds by
construction of these vectors in Section \ref{quant}, can be used to show
$\delta_i = \gamma_i$, so
${\textsl{\footnotesize R}}_i' = \gamma_i \tilde{\textsl{\footnotesize R}}_i' + \varepsilon_i \tilde{\textsl{\footnotesize R}}_i$. Similar results apply for the left eigenvectors. It follows from the relations in \eqref{eq:evecrels} that there exist nonzero scalars $d_i$ and $d_i'$ for $1 \le i \le m$ such that
\begin{equation}\label{eq:dis} {\textsl{\footnotesize L}}_i {\textsl{\footnotesize R}}_i' = {\textsl{\footnotesize L}}_i' {\textsl{\footnotesize R}}_i = d_i \quad \text{and} \quad {\textsl{\footnotesize L}}_i' {\textsl{\footnotesize R}}_i' = d_i'. \end{equation}
Now fix a starting state $x$ and consider $\mathsf{K}^\ell(x,y)$ as a function of $y$. Since $\{{\textsl{\footnotesize L}}_i, {\textsl{\footnotesize L}}_i'
\mid 1 \le i \le m\}\cup \{\pi\}$ is a basis of $\mathbb R^n$, there are scalars $a_0, a_i, a_i', 1 \le i \le m$ such that
\begin{equation} \label{eq:Kleq} \mathsf{K}^\ell(x,y) = a_0 \pi(y) +a_1 {\textsl{\footnotesize L}}_1(y) + a_1'{\textsl{\footnotesize L}}_1'(y) + \cdots +
a_m {\textsl{\footnotesize L}}_m(y) + a_m'{\textsl{\footnotesize L}}_m'(y). \end{equation}
Multiply both sides of \eqref{eq:Kleq} by ${\textsl{\footnotesize R}}_0$ and sum over $y$ to show that $a_0 = 1$. Now multiplying both sides of \eqref{eq:Kleq} by
${\textsl{\footnotesize R}}_j(y)$ and summing gives
\begin{equation}\label{eq:ajprime} \sum_y \mathsf{K}^\ell(x,y) {\textsl{\footnotesize R}}_j(y) = \lambda_j^\ell {\textsl{\footnotesize R}}_j(x) = a_j' d_j, \quad \text{that is,} \quad a_j' = \frac{\lambda_j^\ell {\textsl{\footnotesize R}}_j(x)}{d_j}.\end{equation}
Similarly, multiplying both sides of \eqref{eq:Kleq} by ${\textsl{\footnotesize R}}_j'(y)$ and summing shows that
$$\lambda_j^\ell {\textsl{\footnotesize R}}_j'(x) + \ell \lambda_j^{\ell-1} {\textsl{\footnotesize R}}_j(x) = a_j' d_j' + a_j d_j.$$
Consequently,
\begin{equation}\label{eq:aj} a_j = \frac{\lambda_j^\ell}{d_j}\left( {\textsl{\footnotesize R}}_j'(x) + \frac{\ell {\textsl{\footnotesize R}}_j(x)}{\lambda_j}- {\textsl{\footnotesize R}}_j(x)\frac{d_j'}{d_j}\right).\end{equation}
In the setting of Section \ref{quant}, with the Markov chain arising from tensoring with $\mathsf{V}_1$ for $\mathfrak{u}_\xi(\mathfrak{sl}_2)$, we have $x = 0$,
and from Corollary \ref{C:Kvs}, ${\textsl{\footnotesize R}}_j'(0) = 0, {\textsl{\footnotesize R}}_j(0) = 2i \sin\left(\frac{2\pi j}{n}\right)$, and $\lambda_j = \cos\left(\frac{2\pi j}{n}\right).$
Thus, \eqref{eq:Kleq} holds with $a_0 = 1$,
\begin{equation}\label{eq:ajs} a_j' = \frac{\lambda_j^\ell {\textsl{\footnotesize R}}_j(0)}{d_j} \quad \text{and} \quad
a_j = \frac{\lambda_j^\ell {\textsl{\footnotesize R}}_j(0)}{d_j}\left(\frac{\ell}{\lambda_j} - \frac{d_j'}{d_j}\right). \end{equation}
Expressions and bounds for $d_j, d_j'$ are determined in Lemma \ref{L:dj} and Proposition \ref{P:dj'} in Section \ref{quantc}.
\section{Appendix II. \, Background on modular representation theory}\label{append2}
Introductions to the ordinary (complex) representation theory of finite groups can be found in (\cite{Is}, \cite{JL}, \cite{Se}). A {\it modular} representation of a finite group $\mathsf{G}$ is a representation
(group homomorphism) $\varrho: \mathsf{G} \to \mathsf{GL}_n(\mathbb{k})$, where $\mathbb{k}$ is a field of prime characteristic $p$ dividing $|\mathsf{G}|$. For simplicity, we shall assume that $\mathbb k$ is algebraically closed. Some treatments of modular representation theory can be found in (\cite{Al}, \cite{Nav}, \cite{Webb}), and we summarize here some basic results and examples.

The modular theory is very different from the ordinary theory: for example, if $\mathsf{G}$ is the cyclic group $\mathsf{Z}_p = \langle x\rangle$ of order $p$, the two-dimensional representation $\varrho: \mathsf{G} \to \mathsf{GL}_2(\mathbb{k})$ sending
\[
x \to \begin{pmatrix}1&1 \\ 0&1 \end{pmatrix}
\]
has a one-dimensional invariant subspace (a $\mathsf{G}$-submodule) that has no invariant complement, but over $\mathbb{C}$ it decomposes into the direct sum of two one-dimensional submodules. A representation is {\it irreducible} if it has no nontrivial submodules, and is {\it indecomposable} if it has no nontrivial direct sum decomposition into invariant subspaces. A second difference with the theory over $\mathbb{C}$: for most groups (even for $\mathsf{Z}_2\times \mathsf{Z}_2\times \mathsf{Z}_2$) the indecomposable modular representations are unknown and seemingly unclassifiable.
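The $\mathsf{Z}_p$ example above can be made computational. The following sketch (ours, for $p=5$) enumerates the lines through the origin in $\mathbb{F}_p^2$ and confirms that exactly one is invariant, so the invariant line has no invariant complement:

```python
# Illustration (ours) of the example above: over k = F_p, the
# representation of Z_p sending x to [[1,1],[0,1]] has the invariant
# line spanned by (1,0) but no invariant complement, so it is
# indecomposable yet not irreducible.
p = 5

def act(v):
    """Action of the generator x = [[1,1],[0,1]] on a vector over F_p."""
    return ((v[0] + v[1]) % p, v[1] % p)

def span(v):
    """All scalar multiples of v over F_p."""
    return {(k * v[0] % p, k * v[1] % p) for k in range(p)}

# every line through the origin in F_p^2, each given by a spanning vector
lines = [(1, 0)] + [(b, 1) for b in range(p)]
invariant = [v for v in lines if act(v) in span(v)]
assert invariant == [(1, 0)]    # exactly one invariant line
print("indecomposable but not irreducible over F_%d" % p)
```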
A representation $\varrho: \mathsf{G} \to \mathsf{GL}_n(\mathbb{k})$ is {\it projective} if the associated module for the group algebra $\mathbb{k}\mathsf{G}$ is projective (i.e., a direct summand of a
free $\mathbb{k}\mathsf{G}$-module $(\mathbb{k}\mathsf{G})^m$ for some $m$). There is a bijective correspondence between the projective indecomposable and the irreducible $\mathbb{k}\mathsf{G}$-modules: in this, the projective indecomposable module $\mathsf{P}$ corresponds to the irreducible module $\mathsf{V}_\mathsf{P} = \mathsf{P}/{\mathsf{rad}(\mathsf{P})}$ (see \cite[p.31]{Al}), where ${\mathsf{rad}(\mathsf{P})}$ denotes the
radical of $\mathsf{P}$ (the intersection of all the maximal submodules); we call $\mathsf{P}$ the {\it projective cover} of $\mathsf{V}_\mathsf{P}$. For the group $\mathsf{G} = \mathsf{SL}_2(p)$, with $\mathbb{k}$ of characteristic $p$, the irreducible
$\mathbb{k}\mathsf{G}$-modules and their projective covers were discussed in Section \ref{3b}; likewise for $\mathsf{SL}_2(p^2)$, $\mathsf{SL}_2(2^n)$ and $\mathsf{SL}_3(p)$ in Sections \ref{4b}, \ref{2nreps} and \ref{6a}, respectively.
A conjugacy class $\mathsf{C}$ of $\mathsf{G}$ is said to be $p$-{\it regular} if its elements are of order coprime to $p$. There is a (non-explicit) bijective correspondence between the $p$-regular classes of $\mathsf{G}$ and the irreducible $\mathbb{k}\mathsf{G}$-modules (see \cite[Thm. 2, p.14]{Al}). Each $\mathbb{k}\mathsf{G}$-module $\mathsf{V}$ has a {\it Brauer character}, a complex function defined on the $p$-regular classes as follows. Let $\mathsf{R}$ denote the ring of algebraic integers in $\mathbb{C}$, and let $\mathsf{M}$ be a maximal ideal of $\mathsf{R}$ containing $p\mathsf{R}$. Then $\mathbb{k} = \mathsf{R}/\mathsf{M}$ is an algebraically closed field of characteristic $p$. Let $*:\mathsf{R} \to \mathbb{k}$ be the canonical map, and let
\[
\mathsf{U} = \{\xi \in \mathbb{C} \mid \xi^m=1 \hbox{ for some }m \hbox{ coprime to }p\},
\]
the set of $p'$-roots of unity in $\mathbb{C}$. It turns out (see \cite[p.17]{Nav}) that the restriction of $*$ to $\mathsf{U}$ defines an isomorphism $\mathsf{U} \to \mathbb{k}^*$ of multiplicative groups. Now if $g \in \mathsf{G}$ is a $p$-regular element, the eigenvalues of $g$ on $\mathsf{V}$ lie in $\mathbb{k}^*$, and hence are of the form $\xi_1^*,\ldots, \xi_n^*$ for uniquely determined elements $\xi_i \in \mathsf{U}$. Define the Brauer character $\chi$ of $\mathsf{V}$ by
\[
\chi(g) = \xi_1+\cdots + \xi_n.
\]
The Brauer characters of the irreducible $\mathbb{k}\mathsf{G}$-modules and their projective covers satisfy two orthogonality relations (see (\ref{row}) and (\ref{col})), which are used
in the proof of Proposition \ref{basicone}.
The above facts cover all the general theory of modular representations that we need. As for examples, many have been given in the text -- the $p$-modular irreducible modules and their projective covers are described for the groups $\mathsf{SL}_2(p)$, $\mathsf{SL}_2(p^2)$, $\mathsf{SL}_2(2^n)$ and $\mathsf{SL}_3(p)$ in Sections \ref{basics1}-\ref{sl3psec}.
\section{Missing proofs of Subsection~\ref{sec:HBY-analysis}}
\label{sec:proof-of-alg1}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:full-case-comp1}:}}
The revenue of Algorithm~\ref{algorithm:hybrid} in this case is $ ALG_1(\vec{v}) = b - (1-a) \left[ q_{2,e}(1) + q_{2,f}(1) \right],$ which is decreasing in $q_{2,e}(1) + q_{2,f}(1)$.
Note that due to the fixed threshold rule, we already have an upper bound on $q_{2,f}(1)$, i.e., $q_{2,f}(1) \leq \theta b$.
As a result, using Lemma~\ref{lem:HYB:full-case1}, $ALG_1(\vec{v}) \geq b - (1-a)(\theta b + p(b-n_1)^+ +\Delta)$.
Note that $OPT(\vec{v}) \leq b- (1- a) \left(b-n_1\right)^+$, and thus
\begin{align*} \frac{ALG_1(\vec{v})}{OPT(\vec{v})} \geq & \frac{b - (1-a)(\theta b + p(b-n_1)^+ + \Delta)}{b- (1- a) \left(b-n_1\right)^+}
\\ \geq & \frac{b - (1-a)(\theta b + (p + \theta) (b-n_1)^+ + \Delta)}{b- (1- a) \left(b-n_1\right)^+} &(\theta \geq 0)
\\ = &p + \frac{1-p}{2-a}- \frac{(1-a)\Delta}{OPT(\vec{v})}. &(1-(1-a)\theta = p+(1-p)/(2-a))
\end{align*}
The rest follows from the simple inequality $OPT(\vec{v}) \geq ab$ due to $q_1(1) + q_{2,e}(1) +q_{2,f}(1) =b$.
\end{proof}
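The two parenthetical annotations used in this proof (and $p+\theta = p + \frac{1-p}{2-a}$ in the next proof) are consistent with the single choice $\theta = \frac{1-p}{2-a}$. The following sketch (an illustration of ours, with this value of $\theta$ taken as an assumption inferred from those annotations) verifies both identities exactly in rational arithmetic:

```python
from fractions import Fraction
import random

# Check (ours) of the threshold identities, assuming theta = (1-p)/(2-a):
#   1 - (1-a)*theta == p + (1-p)/(2-a)   and   p + theta == p + (1-p)/(2-a).
random.seed(1)
for _ in range(100):
    a = Fraction(random.randint(0, 99), 100)      # a in [0, 1)
    p = Fraction(random.randint(0, 100), 100)     # p in [0, 1]
    theta = (1 - p) / (2 - a)
    target = p + (1 - p) / (2 - a)
    assert 1 - (1 - a) * theta == target
    assert p + theta == target
print("threshold identities verified exactly")
```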
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:full-comp2}:}}
We consider three cases of Lemma~\ref{lem:HYB:full-case2} separately.
\begin{enumerate}[(a)]
\item $ q_{2,e}(1) +q_{2,f}(1) = n_2$:
\\ Algorithm~\ref{algorithm:hybrid} accepts all customers and hence achieves the optimal revenue, i.e., $\frac{ALG_1(\vec{v})}{OPT(\vec{v})} = 1$.
\item $q_{2,f}(1) = {\lfloor \theta b\rfloor} $ and $n_1 > bp - 3 \Delta$:
\\ $ ALG_1(\vec{v}) \geq n_1 + a \theta b $ and $OPT(\vec{v}) \leq ab+n_1(1-a)$. Therefore,
$$ \frac{ALG_1(\vec{v})}{OPT(\vec{v})} \geq \frac{n_1+ a \theta b}{ab+n_1(1-a)}, $$
which is increasing in $n_1$, so the ratio is minimized at $n_1=bp - 3 \Delta$, which is a special case of the last case, which we analyze next.
\item $q_{2,f}(1) =\lfloor \theta b\rfloor$, $n_1 \leq bp - 3 \Delta $, and $q_{2,e}(1) \geq \left(p(n_1+n_2) -n_1 - 5 \Delta \right)^+$:
First, we remark that following the discussion in the proof of Lemma~\ref{lem:HYB:full-case2}, we assume, without loss of generality, $n_1+n_2\leq b$. Note that by construction, the alternative adversarial instance has the same optimum offline solution, i.e., $OPT(\vec{v}) = OPT(\vec{v}_A)$.
\begin{align*} ALG_1(\vec{v}) \geq & n_1+ a(p(n_1+n_2)-n_1 -5\Delta+ \theta b) \\
\geq & n_1+ a(p(n_1+n_2)-n_1 -5\Delta+ \theta (n_1+n_2)) &(b \geq n_1+n_2)
\\
= & n_1(1-a+pa+\theta a) + n_2(p+\theta)a -5\Delta a \\
\geq & n_1((1-a)(p+\theta)+a(p+\theta )) + n_2(p+\theta)a -5\Delta a &(p+\theta \leq 1) \\
= &(p+\theta)(n_1+ n_2 a) -5\Delta a \geq (p+\theta)OPT(\vec{v}) -5\Delta a \\
= &\left( p + \frac{1-p}{2-a}\right)OPT(\vec{v}) - 5\Delta a. &(p+\theta = p + \frac{1-p}{2-a})
\end{align*}
Since $OPT(\vec{v}) \geq q_{2,f}(1)a = \theta b a$,
$$ \frac{ALG_1(\vec{v})}{OPT(\vec{v})} \geq p + \frac{1-p}{2-a} - \frac{5 \Delta a}{\theta b a} = p + \frac{1-p}{2-a} - \frac{5 \Delta }{\theta b }.$$
\end{enumerate}
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{lem:HYB:full-case3}:}}
Clearly, either condition (a) holds or $q_1(1)=n_1$.
Below we consider the cases where $q_1(1)=n_1$.
If $q_{2,f}(1)<\lfloor \theta b \rfloor$, then we do not reject any type-$2$ customer, and thus condition (b) holds.
The interesting case is when $q_{2,f}(1)= \lfloor \theta b \rfloor$. We prove that condition (c) will hold.
First note that in this case, we know $n_2 \geq \theta b$.
As a result, \eqref{epsilon-condition-n_2-big} implies that we can use the concentration result of \eqref{inequality:good-approximation-app--o^R_2}.
Following the discussion in the proof of Lemma~\ref{lem:HYB:full-case2}, we assume, without loss of generality, $n_1+n_2\leq b$.
Further, if we find a time $\hat \lambda$ for which we have
\begin{align}
\label{eq:app:intermed2}
o_1(\lambda)+o^{\mathcal{S}}_2(\lambda)-o^{\mathcal{S}}_2(\hat \lambda) \leq \lfloor \lambda pb \rfloor \quad \quad \text{ for all }\lambda \geq \hat \lambda,
\end{align}
then, using an induction similar to the one in Lemma~\ref{lem:HYB:full-case2}, we can show
\begin{align}
\label{ineq:app:case2:a2}
q_{2,e}(\lambda) \geq o_2^{\mathcal{S}} (\lambda) - o_2^{\mathcal{S}} (\hat \lambda) \quad \quad \text{ for all }\lambda \geq \hat \lambda.
\end{align}
Here we find a sufficient condition on $\hat\lambda$ for Condition~\eqref{eq:app:intermed2} to hold.
\begin{align*}
o_1(\lambda)+o^{\mathcal{S}}_2(\lambda)-o^{\mathcal{S}}_2(\hat \lambda) & < \frac{k}{p^2} \log n +\lambda pn_2 +\Delta - (\hat\lambda pn_2 - \Delta ) &(o_1(\lambda) \leq n_1 < \frac{k}{p^2} \log n,\eqref{inequality:good-approximation-app--o^R_2})\\
& = \frac{k}{p^2} \log n +(\lambda - \hat\lambda)pn_2 + 2 \Delta \\
& \leq \frac{k}{p^2} \log n +(\lambda - \hat\lambda)pb + 2 \Delta &(n_1+n_2 \leq b).
\end{align*}
As a result, Condition~\eqref{eq:app:intermed2} holds if
$$ \frac{k}{p^2} \log n +(\lambda - \hat\lambda)pb + 2 \Delta \leq \lambda p b,$$
which holds when we define
\begin{align*}
\hat\lambda \triangleq \frac{\frac{k}{p^2} \log n+2\Delta}{pb}.
\end{align*}
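As an informal numerical illustration (the quantities $K$ and $\Delta$ below are arbitrary rational stand-ins for $\frac{k}{p^2}\log n$ and $\Delta$, not the constants of the lemma), this choice of $\hat\lambda$ indeed makes the sufficient condition hold for every $\lambda \geq \hat\lambda$:

```python
from fractions import Fraction

# Stand-in rational values; K plays the role of (k/p^2)*log n.
K = Fraction(50)
Delta = Fraction(30)
p, b = Fraction(1, 2), Fraction(1000)

lam_hat = (K + 2 * Delta) / (p * b)   # the definition of lambda-hat above

# For every lambda >= lam_hat, the sufficient condition
#   K + (lambda - lam_hat)*p*b + 2*Delta <= lambda*p*b
# holds (in fact with equality, by the choice of lam_hat).
for lam in (lam_hat, lam_hat + Fraction(1, 10), Fraction(1)):
    assert K + (lam - lam_hat) * p * b + 2 * Delta <= lam * p * b
```

By construction $\hat\lambda\, pb = K + 2\Delta$, so the inequality is tight at every $\lambda$, which is why this particular $\hat\lambda$ is the natural choice.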
If $\hat\lambda \leq 1$, then
\begin{align*}
q_{2,e}(1) \geq & o_2^{\mathcal{S}}(1) - o_2^{\mathcal{S}}(\hat\lambda) &(\text{Inequality}~\eqref{ineq:app:case2:a2})\\
\geq & pn_2 - \Delta - (\hat\lambda pn_2 + \Delta ) &(\eqref{epsilon-condition-n_2-big}\text{ and }\eqref{inequality:good-approximation-app--o^R_2}) \\
\geq & pn_2 - (\frac{k}{p^2} \log n+2\Delta) - 2 \Delta &(\hat\lambda \leq \frac{\frac{k}{p^2} \log n+2\Delta}{pn_2} ) \\
= & pn_2 - \frac{k}{p^2} \log n - 4 \Delta .\end{align*}
If $\hat\lambda >1$, then
\begin{align*}
pn_2 - \frac{k}{p^2} \log n - 4 \Delta
= & pn_2 - \left(\frac{k}{p^2} \log n + 2\Delta\right) - 2 \Delta \\
< & pn_2 - pb - 2 \Delta < p(n_2-b) \leq 0 &(\hat\lambda >1 \text{ and } n_2\leq b) \\
\leq & q_{2,e}(1).
\end{align*}
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:full-comp3}:}}
Following the discussion in the proof of Lemma~\ref{lem:HYB:full-case2}, we assume, without loss of generality, $n_1+n_2\leq b$.
We consider three cases in Lemma~\ref{lem:HYB:full-case3} separately.
\noindent If case (a) in Lemma~\ref{lem:HYB:full-case3} happens, then $ALG_1(\vec{v})+n_1 \geq OPT(\vec{v})$ and $OPT(\vec{v}) \geq ab$.
As a result,
$$ \frac{ALG_1(\vec{v})}{OPT(\vec{v})} \geq 1-\frac{n_1}{OPT(\vec{v})} \geq 1- \frac{\frac{k}{p^2} \log n}{ab} \geq p+\frac{1-p}{2-a}- \frac{\frac{k}{p^2} \log n}{ab} .$$
\noindent If case (b) in Lemma~\ref{lem:HYB:full-case3} happens, then $ALG_1(\vec{v})=OPT(\vec{v})$, and we are done.
\noindent If case (c) happens, then
\begin{align*}
ALG_1(\vec{v}) \geq n_1 + \left(pn_2 - \frac{k}{p^2} \log n - 4 \Delta + \theta b \right)a.
\end{align*}
Because $n_1 \geq \left( p+\frac{1-p}{2-a} \right) n_1 $, $p n_2 +\theta b \geq (p+\theta)n_2 = \left( p+\frac{1-p}{2-a}\right)n_2$, and $OPT(\vec{v}) = n_1 + a n_2 \geq a \theta b$, we have
\begin{align}
\frac{ALG_1(\vec{v})}{OPT(\vec{v})} \geq p+\frac{1-p}{2-a}
-\frac{a\left(\frac{k}{p^2} \log n + 4 \Delta \right)}{{OPT(\vec{v})}} \geq p+\frac{1-p}{2-a} - \frac{\frac{k}{p^2} \log n + 4 \Delta}{\theta b} .\label{condition:HYA-epsilon-3}
\end{align}
\end{proof}
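The two steps used in the display above, namely $ALG_1(\vec{v}) \geq (p+\theta)OPT(\vec{v}) - a\left(\frac{k}{p^2}\log n + 4\Delta\right)$ and $OPT(\vec{v}) \geq a\theta b$, can be spot-checked with exact rational stand-in values (all numbers below are illustrative and hypothetical):

```python
from fractions import Fraction

a, p = Fraction(1, 2), Fraction(1, 2)
theta = (1 - p) / (2 - a)
b = Fraction(1000)
n1, n2 = Fraction(40), Fraction(700)       # satisfies n1 + n2 <= b
K, Delta = Fraction(50), Fraction(30)      # stand-ins for (k/p^2)*log n and Delta

alg_lb = n1 + (p * n2 - K - 4 * Delta + theta * b) * a   # lower bound on ALG_1
opt = n1 + a * n2                                         # OPT when n1 + n2 <= b

# Step 1: ALG_1 lower bound dominates (p+theta)*OPT minus the error term.
assert alg_lb >= (p + theta) * opt - a * (K + 4 * Delta)
# Step 2: OPT is at least a*theta*b, which converts the error term.
assert opt >= a * theta * b
# Combined: the competitive-ratio bound of the display.
assert alg_lb / opt >= p + theta - (K + 4 * Delta) / (theta * b)
```

The first assertion relies only on $p+\theta \leq 1$ and $n_2 \leq b$, mirroring the inequalities used in the proof.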
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:error-terms}:}}
It is easy to check that (a) $ \frac{(1-a)\Delta}{ab} =O\left(\frac{1}{a(1-p)p} \sqrt{\frac{\log n}{b}}\right)$ and (b) $ \frac{5 \Delta }{\theta b }=O\left(\frac{1}{a(1-p)p} \sqrt{\frac{\log n}{b}}\right)$. To prove (c) $\frac{\frac{k}{p^2} \log n}{ab}=O\left(\frac{1}{a(1-p)p} \sqrt{\frac{\log n}{b}}\right)$, we first note that \eqref{ineq:Alg1-non-trivial-case} and
\begin{align}
\bar\epsilon \leq 1 \label{condition:ALG_11:k'<=1}
\end{align}
imply $\log n \leq a^2 p^2 b$, and thus $\log n = \sqrt{\log n} \sqrt{\log n} \leq ap\sqrt{b\log n}$.
As a result,
\begin{align*}
\frac{\frac{k}{p^2} \log n }{a b} \leq &
\frac{\frac{k}{p} \sqrt{b\log n} }{b} &(\log n\leq ap\sqrt{b\log n} ) \\
\leq & \frac{k}{a(1-p)p}\sqrt{\frac {\log n}{b}} &(0<p<1 \text{ and } a<1) \\
= & O\left(\frac{1}{a(1-p)p}\sqrt{\frac {\log n}{b}}\right).
\end{align*}
Similarly, to prove (d) $\frac{\frac{k}{p^2} \log n + 4 \Delta}{\theta b} =O\left(\frac{1}{a(1-p)p} \sqrt{\frac{\log n}{b}}\right)$, we first note that \eqref{ineq:Alg1-non-trivial-case} and \eqref{condition:ALG_11:k'<=1}
imply $\log n \leq bp^2$, and thus $\log n = \sqrt{\log n} \sqrt{\log n} \leq p\sqrt{b\log n}$.
As a result,
\begin{align*}
\frac{\frac{k}{p^2} \log n + 4 \Delta}{\theta b} \leq &
\frac{\frac{k}{p} \sqrt{b\log n} + 4 \alpha \sqrt{b\log n}}{\theta b} &(\log n\leq p\sqrt{b\log n} \text{ and }\Delta = \alpha \sqrt{b\log n}) \\
\leq & \frac{k+4\alpha}{pa}\sqrt{\frac{ \log n}{b}} &(p<1 \text{ and }\theta > a) \\
\leq & (k+4\alpha)\left(\frac{1}{a(1-p)p} \
\sqrt{\frac {\log n}{b}}\right) \\ = & O\left(\frac{1}{a(1-p)p}\sqrt{\frac {\log n}{b}}\right).
\end{align*}
\end{proof}
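As an informal numerical complement to the proof, the four error terms can be evaluated for sample parameters; the constants $k$ and $\alpha$ below are hypothetical placeholders (not the exact constants of Lemma~\ref{lemma:needed-centrality-result-for-m=2}), and $C$ is a crude, non-optimized constant:

```python
import math

# Illustrative sanity check of the four error terms; k and alpha are
# placeholders for the constants of the concentration lemma (hypothetical
# values k = 16, alpha = 1), and a, p, n, b are sample parameters.
k, alpha = 16, 1
a, p = 0.5, 0.5
n, b = 10**6, 10**5

theta = (1 - p) / (2 - a)
Delta = alpha * math.sqrt(b * math.log(n))
eps = math.sqrt(math.log(n) / b) / (a * (1 - p) * p)

terms = [
    (1 - a) * Delta / (a * b),                             # term (a)
    5 * Delta / (theta * b),                               # term (b)
    (k / p**2) * math.log(n) / (a * b),                    # term (c)
    ((k / p**2) * math.log(n) + 4 * Delta) / (theta * b),  # term (d)
]
C = k + 4 * alpha + 5  # a crude, non-optimized constant
assert all(t <= C * eps for t in terms)
```

With these parameters each term is two orders of magnitude below $C\epsilon$, consistent with all four being $O\left(\frac{1}{a(1-p)p}\sqrt{\frac{\log n}{b}}\right)$.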
\subsection{Competitive Analysis}\label{sec:HBY-analysis}
In this subsection, we analyze the competitive ratio of Algorithm~\ref{algorithm:hybrid}.
Our main result is the following theorem:
\begin{theorem}\label{thm:hybrid}
For $p\in (0,1)$, the competitive ratio of Algorithm~\ref{algorithm:hybrid} is at least $p+\frac{1-p}{2-a}- O\left(\frac{1}{a(1-p)p} \sqrt{\frac{\log n}{b}}\right)$ in the partially predictable model.
\end{theorem}
\noindent Before proceeding to the proof of the above theorem, we make the following remarks:
\begin{remark}
\label{rem:alg1:3}
Our competitive analysis of Algorithm~\ref{algorithm:hybrid} is tight (up to an $O\left(\sqrt{\frac{\log n}{b}}\right)$ term). In particular, for the following instance, Algorithm~\ref{algorithm:hybrid} can attain only a $p + \frac{1-p}{2-a}$ fraction of the optimum offline solution: Suppose $b = n$ and all customers are of type-$2$. The revenue of the optimum offline algorithm is $ab$.
On the other hand, if we employ Algorithm~\ref{algorithm:hybrid}, at the end we will have $q_1=0$, $q_{2,e}\leq pb$ and $q_{2,f}\leq \theta b$. This results in a competitive ratio of at most $p+\theta = p+\frac{1-p}{2-a}.$
\end{remark}
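The counting in the remark can be reproduced with a small deterministic simulation. The acceptance rules below are a simplified sketch tailored to this instance (all customers are type-$2$, so $o_1(\lambda)=0$ throughout), not the full specification of Algorithm~\ref{algorithm:hybrid}:

```python
import math
from fractions import Fraction

# Simplified sketch of the algorithm on the tight instance of the remark:
# b = n and every customer is type-2, so o_1(lambda) = 0 throughout.
# Evolving rule: accept while q2e < floor(lambda * p * b);
# fixed rule:    accept while q2f < floor(theta * b).
a, p = Fraction(1, 2), Fraction(1, 2)
n = b = 10_000
theta = (1 - p) / (2 - a)

q2e = q2f = 0
for t in range(1, n + 1):
    lam = Fraction(t, n)
    if q2e < math.floor(lam * p * b):
        q2e += 1
    elif q2f < math.floor(theta * b):
        q2f += 1
    # otherwise the customer is rejected

ratio = (a * (q2e + q2f)) / (a * b)   # ALG / OPT, with OPT = a*b
assert ratio <= p + theta             # ratio cannot exceed p + (1-p)/(2-a)
```

For $a=p=1/2$ the bound $p+\theta$ equals $5/6$, and the simulated ratio indeed stays just below it.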
\begin{remark}
\label{rem:alg1:2}
In Subsection~\ref{sec:upper-bounds}, we prove that no online algorithm can have a competitive ratio larger than $p+\frac{1-p}{2-a}+ o(1)$ when $b = o\left(\sqrt{n}\right)$. On the other hand, Theorem~\ref{thm:hybrid} indicates that Algorithm~\ref{algorithm:hybrid} achieves a competitive ratio of $p+\frac{1-p}{2-a}-o(1)$ when $b = \omega (\log n) $. Combining the two results implies that for fixed $a$ and $p$, Algorithm~\ref{algorithm:hybrid} achieves the best possible competitive ratio (up to an $o(1)$ term) in the regime where conditions $b=\omega (\log n)$ and $b= o(\sqrt{n})$ hold simultaneously.
\end{remark}
\begin{remark}
\label{rem:alg1:1}
Note that even though $p+\frac{1-p}{2-a}$ is the convex combination of the competitive ratios of~\cite{Ball2009} and~\cite{Agrawal2009b}, it cannot be achieved by simply randomizing between these two algorithms. Suppose we flip a biased coin; with probability $p$, we follow the algorithm of~\cite{Agrawal2009b} (or any other algorithm designed for a random order model, such as~\cite{Kesselheim}), and with probability $(1-p)$ we follow the fixed threshold algorithm of~\cite{Ball2009}.
In Subsection~\ref{sec:bad-instance-for-other-models} we show that, for a certain class of instances, such a randomized algorithm does not generate a $p+\frac{1-p}{2-a}$ fraction of the optimum offline solution.
\end{remark}
\begin{proof}{\textbf{Proof of Theorem~\ref{thm:hybrid}:}}
We start the proof by making the following observation:
Theorem~\ref{thm:hybrid} is nontrivial only if $\sqrt{\frac{\log n}{b}}$ is small enough that the approximation term $O(\cdot)$ is negligible. Therefore, without loss of generality, we can restrict attention to the case where $\sqrt{\frac{\log n}{b}}$ is small. In particular,
recalling that we defined the constant $\bar\epsilon = 1/24$ in Lemma~\ref{lemma:needed-centrality-result-for-m=2}, if $\frac{1}{a(1-p)p}\sqrt{\frac {\log n}{b}} \geq \bar\epsilon$,
then $O\left(\frac{1}{a(1-p)p}\sqrt{\frac {\log n}{b}}\right)$ becomes $O(1)$ and Theorem~\ref{thm:hybrid} becomes trivial.
Therefore, without loss of generality, we assume $\frac{1}{a(1-p)p}\sqrt{\frac {\log n}{b}} < \bar\epsilon$, or equivalently,
\begin{align}
b > \frac{1}{\bar\epsilon^2}\frac{\log n}{a^2(1-p)^2p^2}.\label{ineq:Alg1-non-trivial-case}
\end{align}
We denote the random revenue generated by Algorithm~\ref{algorithm:hybrid} by $ALG_1(\vec{V})$. To analyze $\E{ALG_1(\vec{V})}$, we condition on the event $\mathcal{E}$. Thus we have:
\begin{align*}
\frac{\E{ALG_1(\vec{V})}}{OPT(\vec{v}_I)} \geq \frac{\E{ALG_1(\vec{V})|\mathcal{E}}\prob{\mathcal{E}}}{OPT(\vec{v}_I)}.
\end{align*}
Define $\epsilon \triangleq \frac{1}{a(1-p)p}\sqrt{\frac {\log n}{b}}$. For $b$ that satisfies condition \eqref{ineq:Alg1-non-trivial-case}, and assuming that $n \geq 3$, we have $\frac{1}{n} \leq \epsilon \leq \bar\epsilon$.
Therefore, we can apply Lemma~\ref{lemma:needed-centrality-result-for-m=2} to get:
\begin{align*}
\frac{\E{ALG_1(\vec{V})}}{OPT(\vec{v}_I)} \geq \frac{\E{ALG_1(\vec{V})|\mathcal{E}}\prob{\mathcal{E}}}{OPT(\vec{v}_I)} \geq \frac{\E{ALG_1(\vec{V})|\mathcal{E}}}{OPT(\vec{v}_I)} \left(1 - \epsilon \right).
\end{align*}
This will allow us to focus on the realizations that belong to event $\mathcal{E}$.
In the main part of the proof, we show that, for any realization $\vec{v}$ belonging to event $\mathcal{E}$,
\begin{align*}
\frac{{ALG_1(\vec{v})}}{OPT(\vec{v}_I)} \geq p+\frac{1-p}{2-a} - O(\epsilon).
\end{align*}
Fixing a realization $\vec{v}$ that belongs to event $\mathcal{E}$, we define $q_1(\lambda)$, $q_{2,e}(\lambda)$, and $q_{2,f}(\lambda)$ to be the values of the counters $q_1$, $q_{2,e}$, and $q_{2,f}$ right after the algorithm determines whether to accept the customer arriving at time $\lambda$. Further, we define $\Delta \triangleq \alpha \sqrt{b \log n}$ (the constant $\alpha$ is defined in Lemma~\ref{lemma:needed-centrality-result-for-m=2}). To analyze the competitive ratio, we consider three cases separately.
\medskip
\noindent { \bf Case (i): $n_1 \geq \frac{k}{p^2} \log n$, and Algorithm~\ref{algorithm:hybrid} exhausts the inventory.}
\medskip
Note that when $n_1 \geq \frac{k}{p^2} \log n$, we can apply the concentration result \eqref{inequality:good-approximation-o_1} from Lemma~\ref{lemma:needed-centrality-result-for-m=2}.
When Algorithm~\ref{algorithm:hybrid} exhausts the inventory, it is possible that the algorithm accepts \emph{too many} type-$2$ customers, which results in rejecting type-$1$ customers and losing revenue. We control for this loss by establishing the following upper bound on the number of type-$2$ customers accepted by the evolving threshold.\footnote{Note that we already have an upper bound on the number of type-$2$ customers accepted by the fixed threshold: $q_{2,f}(1) \leq \theta b$.}
In particular, we have the following lemma:
\begin{lemma}
\label{lem:HYB:full-case1}
Under event $\mathcal{E}$, if $n_1 \geq \frac{k}{p^2} \log n$, then
\begin{align*}
q_{2,e}(1) \leq p(b-n_1)^+ + \Delta.
\end{align*}
\end{lemma}
\begin{proof}{\textbf{Proof:}}
We assume, without loss of generality, that $n_1 \leq b$.
Otherwise, we construct a modified adversarial instance, denoted by $\vec{v}_{I,M}$, as follows:
keep an arbitrary subset of type-$1$ customers of size $b$ in $\vec{v}_I$ (before the random permutation), and remove the remaining type-$1$ customers (e.g., set their revenue to be $0$).
For the same realization of the stochastic group and random permutation, we claim that at any time $\lambda \in \{1/n, \ldots, 1\}$, the number of type-$2$ customers accepted through the evolving threshold rule in the original instance is not larger than that in the modified one.
This holds because $o_1(\lambda, \vec{v}) \geq o_{1}(\lambda, \vec{v}_{M})$, where the second argument is added to $o_1(\cdot, \cdot)$ to indicate the corresponding instance.
Note that because the algorithm accepts all type-$1$ customers, this implies $q_1(\lambda, \vec{v}) \geq q_{1}(\lambda, \vec{v}_{M})$, which proves our claim (i.e., $q_{2,e}(\lambda, \vec{v}) \leq q_{2,e}(\lambda, \vec{v}_{M})$). Thus, without loss of generality, we assume $n_1 \leq b$.
Further, note that because of condition \eqref{ineq:Alg1-non-trivial-case}, we have $n_{1}(\vec{v}_{M}) = b \geq \frac{k}{p^2} \log n$.\footnote{This follows from Condition \eqref{ineq:Alg1-non-trivial-case} and the fact that $\frac{1}{a^2(1-p)^2} > 1$ and, by definition (given in Lemma~\ref{lemma:needed-centrality-result-for-m=2}), $\frac{1}{\bar\epsilon^2} \geq k$, which together imply $\frac{1}{\bar\epsilon^2}\frac{1}{a^2(1-p)^2} \geq k$.} Thus we are still in Case (i) for the
modified instance.
If no type-$2$ customer is accepted by the evolving threshold, then $q_{2,e}(1)=0$ and the proof is complete.
Otherwise, let $\bar{\lambda} \leq 1$ be the last time that a type-$2$ customer is accepted by the evolving rule.
Then we have
\begin{align*}
q_{2,e}(1) = & q_{2,e}(\bar{\lambda}) \leq \bar{\lambda} pb - o_1(\bar{\lambda}) &(\text{Evolving threshold rule}) \\
\leq & \bar{\lambda} pb - (\bar \lambda p n_1 + (1-p)\eta_1(\bar\lambda) -\Delta) &(\eqref{inequality:good-approximation-o_1}) \\
\leq & p(b-n_1) + \Delta . &(\eta_1(\bar\lambda) \geq 0 \text{, } n_1\leq b \text{, and } \bar\lambda \leq 1)
\end{align*}
The reason for each inequality appears on the same line. We remark that in the second inequality, we crucially use the concentration result of Lemma~\ref{lemma:needed-centrality-result-for-m=2}.
\end{proof}
\noindent Using Lemma~\ref{lem:HYB:full-case1}, we prove, in Appendix~\ref{sec:proof-of-alg1}, the following lemma, which gives a lower bound on the competitive ratio for Case (i):
\begin{lemma}
\label{claim:full-case-comp1}
Under event $\mathcal{E}$, if $n_1 \geq \frac{k}{p^2} \log n$ and $q_1(1) + q_{2,e}(1) +q_{2,f}(1) =b$, then
$\frac{ALG_1(\vec{v})}{OPT(\vec{v})} \geq p + \frac{1-p}{2-a} - \frac{(1-a)\Delta}{ab}$.
\end{lemma}
\medskip
\noindent { \bf Case (ii): $n_1 \geq \frac{k}{p^2} \log n$, and Algorithm~\ref{algorithm:hybrid} does not exhaust the inventory.}
\medskip
In this case, all type-$1$ customers are accepted.
Therefore, the ratio between $ALG_1(\vec{v})$ and $OPT(\vec{v})$ can be expressed as:
$$
\frac{ALG_1(\vec{v})}{OPT(\vec{v})} = \frac{n_1 + a \left[q_{2,e}(1) +q_{2,f}(1)\right]}{n_1 + a \min \{n_2,(b-n_1)\}}.
$$
The only ``mistake'' that the algorithm may make is to reject too many type-$2$ customers. The following lemma establishes a lower bound on the number of accepted type-$2$ customers:
\begin{lemma}
\label{lem:HYB:full-case2}
Under event $\mathcal{E}$, if $n_1 \geq \frac{k}{p^2} \log n$ and $q_1(1) + q_{2,e}(1) +q_{2,f}(1) <b$, then one of the following conditions holds:
\begin{enumerate}[(a)]
\item $ q_{2,e}(1) +q_{2,f}(1) = n_2$,
\item $q_{2,f}(1) = \lfloor \theta b\rfloor$ and $n_1 > bp - 3 \Delta $, or
\item $q_{2,f}(1) =\lfloor \theta b\rfloor$, $n_1 \leq bp - 3 \Delta $, and $q_{2,e}(1) \geq \left(p(n_1+n_2) -n_1 - 5 \Delta \right)^+$.
\end{enumerate}
\end{lemma}
\begin{proof}{\textbf{Proof:}}
First note that $q_{2,f}(1) < \lfloor \theta b \rfloor$ means that Algorithm~\ref{algorithm:hybrid} never rejects a type-$2$ customer.
This implies that $ q_{2,e}(1) +q_{2,f}(1) = n_2$, i.e., condition (a) holds.
Now suppose $q_{2,f}(1) = \lfloor \theta b \rfloor $.
If $n_1 > bp - 3\Delta $, then condition (b) holds.
The most interesting case is when $q_{2,f}(1) = \lfloor \theta b \rfloor $ and $n_1 \leq bp - 3\Delta$.
In the following, we show that in this case, condition (c) will hold.
In this case, without loss of generality, we can assume $n_1+n_2 \leq b$.
Otherwise, we construct an alternative adversarial instance, denoted by $\vec{v}_{I,A}$, as follows:
keep an arbitrary subset of type-$2$ customers of size $b-n_1$ in $\vec{v}_I$ (before the random permutation), and remove the remaining type-$2$ customers (e.g., set their revenue to be $0$).
With the same realization of the stochastic group and random permutation, we claim that:
\begin{align}\label{eq:induc:1}
q_{2,e}(\lambda, \vec{v}) \geq q_{2,e}(\lambda, \vec{v}_A), ~~ \lambda \in \{0, 1/n, \ldots, 1\}.
\end{align}
To show \eqref{eq:induc:1}, we use induction.
The base case, corresponding to $\lambda = 0$, is trivial.
Suppose \eqref{eq:induc:1} holds for $\lambda- 1/n$. We show that it holds for $\lambda$ as well.
At time $\lambda$, if $q_{2,e}(\lambda, \vec{v}_A) = q_{2,e}(\lambda- 1/n, \vec{v}_A)$, then \eqref{eq:induc:1} holds because $q_{2,e}(\lambda, {\vec{v}}) \geq q_{2,e}(\lambda- 1/n, {\vec{v}})$.
Otherwise, $q_{2,e}(\lambda, \vec{v}_A) = q_{2,e}(\lambda- 1/n, \vec{v}_A)+1$.
This implies that a type-$2$ customer arrives at time $\lambda$ in $\vec{v}_A$, and thus also in $\vec{v}$.
If $q_{2,e}(\lambda, \vec{v}) = q_{2,e}(\lambda- 1/n, \vec{v}) +1$, then \eqref{eq:induc:1} again holds.
Otherwise, under customer arrival sequence $\vec{v}$, we do not accept the type-$2$ customer at time $\lambda$ by the evolving threshold rule, which means that $o_1(\lambda, \vec{v}) + q_{2,e}(\lambda, \vec{v}) = {\lfloor\lambda pb \rfloor}.$
Because $o_1(\lambda,{\vec{v}}_A) + q_{2,e}(\lambda, {\vec{v}}_A) \leq {\lfloor\lambda pb \rfloor}$ and $o_1(\lambda,{\vec{v}}) = o_1(\lambda,{\vec{v}}_A)$, we can conclude that \eqref{eq:induc:1} holds in this last case as well. This concludes the induction.
Thus, without loss of generality, we assume $n_1+n_2 \leq b$.
To prove that condition (c) holds when $q_{2,f}(1) =\lfloor \theta b\rfloor$ and $n_1 \leq bp - 3 \Delta$, we make two important observations: (i)
In this case, the number of type-$2$ customers is large enough to apply the concentration result of~\eqref{inequality:good-approximation-app--o^R_2}. In particular, we have
\begin{align}
\label{epsilon-condition-n_2-big}
n_2\geq \theta b \geq \frac{ k \log n}{p^2},
\end{align}
where the last inequality holds because of \eqref{ineq:Alg1-non-trivial-case} and the definitions of $\theta =\frac{1-p}{2-a}$ and $k$ (defined in Lemma~\ref{lemma:needed-centrality-result-for-m=2}).
(ii) The number of type-$1$ customers is so small that, after a certain time, the evolving threshold accepts a sufficient number of type-$2$ customers to ensure that condition (c) holds.
In particular, define $$ \bar\lambda \triangleq \frac{1}{n} \left\lceil \frac{n(n_1(1-p)+ 3\Delta)}{p(b-n_1)}\right\rceil.$$ Note that $\bar \lambda \leq 1$ when $n_1 \leq bp-3\Delta$.
For any $\lambda \geq \bar\lambda$, we have:
\begin{align*}
o_1(\lambda)+o^{\mathcal{S}}_2(\lambda)-o^{\mathcal{S}}_2(\bar \lambda) & \leq \lambda p n_1+(1-p) \eta_1 (\lambda) + \Delta +\lambda pn_2 +\Delta - (\bar\lambda pn_2 - \Delta ) &(\eqref{inequality:good-approximation-o_1},\eqref{inequality:good-approximation-app--o^R_2})\\
&\leq \lambda p n_1+(1-p) n_1 +(\lambda - \bar\lambda)pn_2 + 3 \Delta &(\eta_1(\lambda)\leq n_1)\\
& = \bar \lambda p n_1+(1-p) n_1 +(\lambda - \bar\lambda)p (n_1+n_2) + 3 \Delta \\
& \leq \bar \lambda p n_1+(1-p) n_1 +(\lambda - \bar\lambda)p b + 3 \Delta &(n_1+n_2 \leq b) \\
& \leq \lambda pb. &(\text{definition of $\bar\lambda$})
\end{align*}
\noindent Note that because $o_1(\lambda)+o^{\mathcal{S}}_2(\lambda)-o^{\mathcal{S}}_2(\bar \lambda)$ is an integer, the above inequality also implies
\begin{align}
\label{ineq:case2:a3}
o_1(\lambda)+o^{\mathcal{S}}_2(\lambda)-o^{\mathcal{S}}_2(\bar \lambda) \leq \lfloor\lambda pb \rfloor \quad \quad \text{ for all }\lambda \geq \bar \lambda.
\end{align}
Further, the above inequality implies that for $\lambda \geq \bar\lambda$, there is a gap between $o_1(\lambda)$ and the evolving threshold $ \lfloor\lambda pb \rfloor$, which in turn implies that the evolving threshold will accept type-$2$ customers.
Next, for $\lambda \geq \bar\lambda$, we establish a lower bound on the number of type-$2$ customers that the evolving threshold accepts.
In particular, we show that
\begin{align}
\label{ineq:case2:a2}
q_{2,e}(\lambda) \geq o_2^{\mathcal{S}} (\lambda) - o_2^{\mathcal{S}} (\bar \lambda) \quad \quad \text{ for all }\lambda \geq \bar \lambda.
\end{align}
We show \eqref{ineq:case2:a2} by induction.
The base case $\lambda = \bar{ \lambda}$ is trivial.
Suppose \eqref{ineq:case2:a2} holds for $\lambda-1/n \geq \bar{ \lambda}$.
We show that it also holds for $\lambda$:
If the arriving customer is not a type-$2$ customer belonging to the stochastic group, then $o_2^{\mathcal{S}} ({\lambda}) = o_2^{\mathcal{S}} (\lambda -1/n)$; but $q_{2,e}(\lambda) \geq q_{2,e}(\lambda-1/n)$, and thus \eqref{ineq:case2:a2} holds.
Otherwise, we have $o_2^{\mathcal{S}} ({\lambda}) = o_2^{\mathcal{S}} ({\lambda-1/n}) + 1$. Now if this customer is accepted by the evolving threshold rule, then both sides of \eqref{ineq:case2:a2} increase by one, and thus inequality \eqref{ineq:case2:a2} still holds.
Otherwise, if the customer is not accepted, then we have reached the threshold. Therefore,
\begin{align}
\label{eq:intermed1}
q_{2,e}(\lambda)= \lfloor \lambda p b\rfloor - o_1(\lambda).
\end{align}
Now we utilize the gap between $\lfloor \lambda p b\rfloor$ and $o_1(\lambda)$ established above in~\eqref{ineq:case2:a3}:
combining \eqref{eq:intermed1} and \eqref{ineq:case2:a3} proves that \eqref{ineq:case2:a2} holds in this case as well. This completes the induction, and thus the proof of \eqref{ineq:case2:a2}.
We complete the proof of the lemma by using \eqref{ineq:case2:a2} with $\lambda =1$ to obtain the following lower bound:
\begin{align*}
q_{2,e}(1) \geq & o_2^{\mathcal{S}} (1) - o_2^{\mathcal{S}} (\bar \lambda) &(\eqref{ineq:case2:a2}) \\
\geq & pn_2 - \Delta - (\bar\lambda pn_2 + \Delta) &(\eqref{inequality:good-approximation-app--o^R_2}) \\
\geq & pn_2 - (n_1(1-p)+ 3\Delta) - 2\Delta &(b-n_1 \geq n_2) \\
= & p(n_1+n_2) -n_1- 5\Delta.
\end{align*}
\if false
In order to apply Inequalities~\eqref{inequality:good-approximation-o_2} and~\eqref{inequality:good-approximation-app--o^R_2}, we need $n_2 \geq \frac{k \log n}{p^2}$.
Because $n_2\geq \theta b$, $n_2 \geq \frac{k \log n}{p^2}$ holds if
\begin{align}
\theta b \geq \frac{ k \log n}{p^2},\label{epsilon-condition-n_2-big}
\end{align}
which is given by \eqref{ineq:Alg1-non-trivial-case} and $\vm{k'}\leq 1/\sqrt{k}$.
We first show that, for any $\bar\lambda$ that satisfies the following condition:
\begin{align}
o_1(\lambda)+o^{\vmn{\mathcal{S}}}_2(\lambda)-o^{\vmn{\mathcal{S}}}_2(\bar \lambda) \leq \lambda pb \quad \quad \text{ for all }\lambda \geq \bar \lambda,\label{condition-for-bar-lambda}
\end{align}
the following inequalities hold:
\begin{align}
\label{ineq:case2:a2}
q_{2,e}(\lambda) \geq o_2^{\vmn{\mathcal{S}}} (\lambda) - o_2^{\vmn{\mathcal{S}}} (\bar \lambda) \quad \quad \text{ for all }\lambda \geq \bar \lambda.
\end{align}
We prove that \eqref{condition-for-bar-lambda} implies \eqref{ineq:case2:a2} by induction on $\lambda$.
The base case $\lambda = \bar{ \lambda}$ is trivial. At time $\lambda > \bar{ \lambda}$, if the arriving customer is not a \st{class}\vmn{type}-$2$ belonging to the {{\st{predictable}\vmn{stochastic}}} group, then $o_2^{\vmn{\mathcal{S}}} ({\lambda}) = o_2^{\vmn{\mathcal{S}}} (\lambda -1/n)$ but $q_{2,e}(\lambda) \geq q_{2,e}(\lambda-1/n)$, so we are done. Else, the customer is a \st{class}\vmn{type}-$2$ {{\st{predictable}\vmn{stochastic}}}, then $o_2^{\vmn{\mathcal{S}}} ({\lambda}) = o_2^{\vmn{\mathcal{S}}} ({\lambda-1/n}) + 1$. Now if this customer is accepted, then both sides of \eqref{ineq:case2:a2} are increased by one, so we are done.
Otherwise, the customers is not accepted, which implies we have reached the threshold, and therefore $o_1(\lambda)+q_{2,e}(\lambda)=\lambda p b,$ which, combined with~\eqref{condition-for-bar-lambda}, implies~\eqref{ineq:case2:a2}.
Here we find a sufficient condition on $\bar\lambda$ for Condition~\eqref{condition-for-bar-lambda} to hold.
\begin{align*}
o_1(\lambda)+o^{\vmn{\mathcal{S}}}_2(\lambda)-o^{\vmn{\mathcal{S}}}_2(\bar \lambda) & \leq \lambda p n_1+(1-p) \eta_1 (\lambda) + \Delta +\lambda pn_2 +\Delta - (\bar\lambda pn_2 - \Delta ) &(\eqref{inequality:good-approximation-o_1},\eqref{inequality:good-approximation-app--o^R_2})\\
&\leq \lambda p n_1+(1-p) n_1 +(\lambda - \bar\lambda)pn_2 + 3 \Delta &(\eta_1(\lambda)\leq n_1)\\
& = \bar \lambda p n_1+(1-p) n_1 +(\lambda - \bar\lambda)p (n_1+n_2) + 3 \Delta \\
& \leq \bar \lambda p n_1+(1-p) n_1 +(\lambda - \bar\lambda)p b + 3 \Delta. &(n_1+n_2 \leq b)
\end{align*}
As a result, Condition~\eqref{condition-for-bar-lambda} holds if
$ \bar \lambda p n_1 + (1-p) n_1 + 3\Delta \leq \bar \lambda p b.$
Thus, defining
$$ \bar\lambda \triangleq \frac{n_1(1-p)+ 3\Delta}{p(b-n_1)}$$
(note that $\bar \lambda \leq 1$ when $n_1 \leq bp-3\Delta$) and using \eqref{ineq:case2:a2} with $\lambda =1$, we have the bound:
\begin{align*}
q_{2,e}(1) \geq & o_2^{\mathcal{S}} (1) - o_2^{\mathcal{S}} (\bar \lambda) &(\eqref{ineq:case2:a2}) \\
\geq & pn_2 - \Delta - (\bar\lambda pn_2 + \Delta) &(\eqref{inequality:good-approximation-app--o^R_2}) \\
\geq & pn_2 - (n_1(1-p)+ 3\Delta) - 2\Delta &(b-n_1 \geq n_2) \\
= & p(n_1+n_2) -n_1- 5\Delta.
\end{align*}
\fi
\end{proof}
Using Lemma~\ref{lem:HYB:full-case2}, we prove, in Appendix~\ref{sec:proof-of-alg1}, the following lemma, which gives a lower bound on the competitive ratio for Case (ii):
\begin{lemma}
\label{claim:full-comp2}
Under event $\mathcal{E}$, if $n_1 \geq \frac{k}{p^2} \log n$ and $q_1(1) + q_{2,e}(1) +q_{2,f}(1) <b$, then
$$\frac{ALG_1(\vec{v})}{OPT(\vec{v})} \geq p + \frac{1-p}{2-a} - \frac{5 \Delta }{\theta b }.$$
\end{lemma}
\medskip
\noindent{ \bf Case (iii): $n_1 < \frac{k}{p^2} \log n$.}
\medskip
The competitive ratio analysis for Case (iii) is fairly similar to that for Case (ii). It follows from the next two lemmas. The proofs are deferred to Appendix~\ref{sec:proof-of-alg1}.
\begin{lemma}
\label{lem:HYB:full-case3}
Under event $\mathcal{E}$, if $n_1 < \frac{k}{p^2} \log n$, then one of the following conditions holds:
\begin{enumerate}[(a)]
\item $ q_1(1) + q_{2,e}(1) +q_{2,f}(1) = b$,
\item $q_1(1)=n_1$ and $ q_{2,e}(1) +q_{2,f}(1) = n_2$, or
\item $q_1(1)=n_1$, $q_{2,f}(1) =\lfloor \theta b\rfloor$ and $q_{2,e}(1) \geq pn_2 - \frac{k}{p^2} \log n - 4 \Delta$.
\end{enumerate}
\end{lemma}
Using Lemma~\ref{lem:HYB:full-case3}, the following lemma (proven in Appendix~\ref{sec:proof-of-alg1}) gives a lower bound on the competitive ratio for Case (iii):
\begin{lemma}
\label{claim:full-comp3}
Under event $\mathcal{E}$, if $n_1 < \frac{k}{p^2} \log n$, then $$\frac{ALG_1(\vec{v})}{OPT(\vec{v})} = \frac{n_1 + a \left[q_{2,e}(1) +q_{2,f}(1)\right]}{n_1 + a \min \{n_2,(b-n_1)\}} \geq \min \left\{ p+\frac{1-p}{2-a}- \frac{\frac{k}{p^2} \log n}{ab} , p+\frac{1-p}{2-a} - \frac{\frac{k}{p^2} \log n + 4 \Delta}{\theta b}\right\}.$$
\end{lemma}
Using Lemmas~\ref{claim:full-case-comp1},~\ref{claim:full-comp2}, and~\ref{claim:full-comp3}, we have lower bounds on the competitive ratio of Algorithm~\ref{algorithm:hybrid} for all possible cases. We complete the proof of the theorem with the following lemma (proven in Appendix~\ref{sec:proof-of-alg1}), which ensures that the error terms in Lemmas~\ref{claim:full-case-comp1},~\ref{claim:full-comp2}, and~\ref{claim:full-comp3} are $O(\epsilon)$.
\begin{lemma}
\label{claim:error-terms}
The error terms in Lemmas~\ref{claim:full-case-comp1},~\ref{claim:full-comp2}, and~\ref{claim:full-comp3} are $O(\epsilon)$, i.e., we have: (a)~ $ \frac{(1-a)\Delta}{ab} =O(\epsilon)$, (b)~ $\frac{5 \Delta }{\theta b }=O(\epsilon)$, (c)~ $\frac{\frac{k}{p^2} \log n}{ab}=O(\epsilon)$, and
(d)~ $\frac{\frac{k}{p^2} \log n + 4 \Delta}{\theta b} =O(\epsilon)$.
\end{lemma}
\end{proof}
\section{A Non-Adaptive Algorithm}
\label{sec:alg1}
In this section, we present and analyze our first online algorithm for the resource allocation problem and the demand model described in Section~\ref{sec:prem}.
First, in Section~\ref{sec:alg1-alg}, we describe the algorithm.
Then, in Section~\ref{sec:HBY-analysis}, we present the analysis of its competitive ratio.
\subsection{The Algorithm}\label{sec:alg1-alg}
Our first algorithm is a non-adaptive online algorithm that uses predetermined dynamic thresholds to accept or reject customers.
This algorithm combines ideas from the primal algorithm of~\cite{Kesselheim} and the threshold algorithm of~\cite{Ball2009} to generate maximal revenue from both the stochastic and adversarial components of the demand.
In particular, our non-adaptive algorithm makes use of the fact that customers from the stochastic group are uniformly spread over the entire horizon.
Therefore, at least a fraction $p$ of the inventory should be allocated at a roughly constant rate.
To this end, we define an \emph{evolving threshold} that works as follows: at any time $\lambda$, accept a type-$2$ customer if the total number of customers accepted by this rule does not exceed ${\lfloor\lambda pb \rfloor}$.
However, the arrival pattern of the other $1-p$ fraction can take an arbitrary form. In particular, if the adversary places many type-$2$ customers at the very beginning of the time horizon but none toward the end, then we may reject too many type-$2$ customers early on.
To prevent this loss, we keep another quota for type-$2$ customers rejected by the evolving threshold. We reject such a customer only if the number of type-$2$ customers accepted under this quota so far has reached the \emph{fixed} threshold of $\lfloor\theta b\rfloor$, where $\theta \triangleq \frac{1-p}{2-a}$. When $p=0$, this is the same threshold as in~\cite{Ball2009}.
The formal definition of our algorithm is presented in Algorithm~\ref{algorithm:hybrid}.
Note that $q_1$, $q_{2,e}$, and $q_{2,f}$ denote, respectively, the number of accepted type-$1$ customers, the number of type-$2$ customers accepted by the evolving threshold, and the number of type-$2$ customers accepted by the fixed threshold.
\begin{algorithm}[H]
\begin{enumerate}
\item Initialize $q_1, q_{2,e}, q_{2,f} \leftarrow 0$, and define $\theta \triangleq \frac{1-p}{2-a}$.
\item For each time $\lambda = 1/n, 2/n, \dots, 1$, accept customer $i=\lambda n$, arriving at time $\lambda$, if there is remaining inventory and one of the following conditions holds:
\begin{enumerate}
\item $v_i = 1$; update $q_1\leftarrow q_1+1$.
\item {\bf Evolving threshold rule:} $v_i = a$ and $q_1+q_{2,e}< {\lfloor\lambda pb \rfloor}$; update $q_{2,e} \leftarrow q_{2,e} +1$.
\item {\bf Fixed threshold rule:} $v_i = a$ and $q_{2,f} < {\lfloor \theta b\rfloor}$; update $q_{2,f} \leftarrow q_{2,f} +1$.
\end{enumerate}
We prioritize the evolving threshold rule if both of the last two conditions are satisfied.
\end{enumerate}
\caption{ Online Non-adaptive Algorithm ($ALG_1$)}\label{algorithm:hybrid}
\end{algorithm}
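To make the threshold bookkeeping concrete, the following is a minimal Python sketch of Algorithm~\ref{algorithm:hybrid} (our own illustration, not part of the formal analysis); it assumes the input is simply the realized sequence of customer values $v_i \in \{1, a\}$ and returns the revenue $q_1(1) + a\left[q_{2,e}(1)+q_{2,f}(1)\right]$.

```python
from math import floor

def alg1(values, b, p, a):
    """Sketch of ALG_1: values is the arrival sequence of v_i in {1, a},
    b the inventory, p the stochastic fraction, a the type-2 value."""
    n = len(values)
    theta = (1 - p) / (2 - a)        # fixed-threshold fraction
    q1 = q2e = q2f = 0               # counters q_1, q_{2,e}, q_{2,f}
    for i, v in enumerate(values, start=1):
        lam = i / n                  # current time lambda = i/n
        if q1 + q2e + q2f >= b:      # no remaining inventory
            continue
        if v == 1:                           # always accept type-1
            q1 += 1
        elif q1 + q2e < floor(lam * p * b):  # evolving threshold rule
            q2e += 1
        elif q2f < floor(theta * b):         # fixed threshold rule
            q2f += 1
    return q1 + a * (q2e + q2f)
```

For instance, with $p=0$ the evolving threshold never fires and at most $\lfloor\theta b\rfloor$ type-$2$ customers are accepted, matching the threshold policy of~\cite{Ball2009}.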
\subsection{Competitive Analysis}\label{subsec:adapt:com}
In this section, we analyze the competitive ratio of Algorithm~\ref{algorithm:adaptive-threshold} and prove the following theorem:
\begin{theorem}\label{thm:adaptive-threshold}
For $p\in(0,1)$, let $c^*$ be the optimal objective value of (\ref{MP1}). For any $c \leq c^*$ such that $c<1$, $ALG_{2,c}$ is $c - O\left(\frac{1}{(1-c)^2ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}}\right)$ competitive in the partially predictable model.
\end{theorem}
The above theorem implies that if $c^* <1$, then $ALG_{2,c^*}$ is $c^* - O\left(\frac{1}{(1-c^*)^2ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}}\right)$ competitive. However, the same does not hold
when $c^* = 1$. For this special case,
we have the following corollary of Theorem~\ref{thm:adaptive-threshold}:
\begin{corollary}
\label{cor:cstar1}
When $c^*=1$, for $c = 1- \sqrt[3]{\frac{1}{ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}}}$ the competitive ratio of $ALG_{2,c}$ is $1-O\left( \sqrt[3]{\frac{1}{ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}}}\right)$.
\end{corollary}
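To see where the particular choice of $c$ comes from, write $\epsilon_0 \triangleq \frac{1}{ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}}$ (a shorthand we introduce here only for illustration). Setting $1-c = \epsilon_0^{1/3}$ in the guarantee of Theorem~\ref{thm:adaptive-threshold} balances the two loss terms:
\begin{align*}
c - O\!\left(\frac{\epsilon_0}{(1-c)^2}\right) = 1 - \epsilon_0^{1/3} - O\!\left(\frac{\epsilon_0}{\epsilon_0^{2/3}}\right) = 1 - O\!\left(\epsilon_0^{1/3}\right),
\end{align*}
which is the rate stated in the corollary.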
\begin{remark}
Theorem~\ref{thm:adaptive-threshold} combined with Proposition~\ref{prop:values-math-program} shows that in the asymptotic regime (where $n$ and $b$ both grow), if the scaling factor $\sqrt{\frac{n^2 \log n}{b^3}}$ (which appears in the error term of the competitive ratio) is vanishing (i.e., of order $o(1)$), then our adaptive algorithm outperforms our non-adaptive one. For instance, the aforementioned condition holds if $b = \kappa n$, where $0< \kappa \leq 1$ is a constant.
\end{remark}
\begin{proof}{\textbf{Proof of Theorem~\ref{thm:adaptive-threshold}:}}
Similar to the proof of Theorem~\ref{thm:hybrid}, we start by observing that Theorem~\ref{thm:adaptive-threshold} is nontrivial only if $\sqrt{\frac{n^2 \log n}{b^3}}$ is small enough that the approximation term $O(\cdot)$ is negligible. Therefore, without loss of generality, we can restrict attention to the case where $\sqrt{\frac{n^2 \log n}{b^3}}$ is small. In particular,
if $\frac{1}{(1-c)^2ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}} \geq \bar\epsilon$, then $O\left(\frac{1}{(1-c)^2ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}}\right)$ becomes $O(1)$ and Theorem~\ref{thm:adaptive-threshold} becomes trivial (recall that the constant $\bar\epsilon = 1/24$ is defined in Lemma~\ref{lemma:needed-centrality-result-for-m=2}).
Therefore, without loss of generality, we assume $\frac{1}{(1-c)^2ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}} < \bar\epsilon$, or equivalently,
\begin{align}
b ^{\frac{3}{2}}> \frac{1}{\bar\epsilon} \frac{n\sqrt{\log n}}{(1-c)^2ap^{3/2}}.\label{ineq:ADP-non-trivial-case}
\end{align}
We remark that we impose the same condition on $b$ in Lemma~\ref{prop:full-Ubounds}.
We denote the random revenue generated by Algorithm~\ref{algorithm:adaptive-threshold} by $ALG_{2,c}(\vec{V})$. Similar to the proof of Theorem~\ref{thm:hybrid}, we define
an appropriate $\epsilon$ that allows us to focus on the realizations that belong to event $\mathcal{E}$. In particular, let
$\epsilon = \frac{1}{(1-c)^2ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}}$. For $b$ that satisfies Condition~\eqref{ineq:ADP-non-trivial-case}, and assuming
that $n \geq 3$, we have $\frac{1}{n} \leq \epsilon \leq \bar\epsilon$. Therefore, we can apply Lemma~\ref{lemma:needed-centrality-result-for-m=2} to get:
\begin{align*}
\frac{\E{ALG_{2,c}(\vec{V})}}{OPT(\vec{v}_I)} \geq \frac{\E{ALG_{2,c}(\vec{V})|\mathcal{E}}\prob{\mathcal{E}}}{OPT(\vec{v}_I)} \geq \frac{\E{ALG_{2,c}(\vec{V})|\mathcal{E}}}{OPT(\vec{v}_I)} \left(1 - \epsilon \right).
\end{align*}
\noindent In the main part of the proof, we show that for any realization $\vec{v}$ belonging to event $\mathcal{E}$,
\begin{align*}
\frac{{ALG_{2,c}(\vec{v})}}{OPT(\vec{v}_I)} \geq c - O(\epsilon).
\end{align*}
\noindent To analyze the competitive ratio, we consider three cases separately.
\medskip
\noindent { \bf Case (i): $n_1 \geq \frac{k}{p^2} \log n$, and Algorithm~\ref{algorithm:adaptive-threshold} exhausts the inventory.}
\medskip
When $n_1 \geq \frac{k}{p^2} \log n$, we can apply \eqref{inequality:good-approximation-o_1} from Lemma~\ref{lemma:needed-centrality-result-for-m=2} and Lemma~\ref{prop:full-Ubounds}. Because Algorithm~\ref{algorithm:adaptive-threshold} exhausts the inventory, we know that $n_1 + n_2 \geq b$. Then either (a) $n_1 + n_2 -\frac{2\Delta}{\delta p} \leq b$ or (b) $n_1 + n_2 -\frac{2\Delta}{\delta p} > b$. If (a) happens, then (according to Lemma~\ref{prop:full-Ubounds}) we may have $u_{1,2} (\lambda) < b$, which may result in accepting a type-$2$ customer through the second condition that we should have rejected. However, in this case, we also have a tight upper bound on the optimal offline solution. As shown in the proof of Lemma~\ref{claim:q1+q2=b-ADP-ratio}, which analyzes the competitive ratios of cases (a) and (b) separately, such a bound allows us to establish the desired lower bound on the competitive ratio. Case (b) is the more interesting one, in which type-$2$ customers are accepted through the third condition of Algorithm~\ref{algorithm:adaptive-threshold}. It
is possible that the algorithm accepts \emph{too many} type-$2$ customers through this condition, resulting in rejecting type-$1$ customers and thus in revenue loss. In the following lemma, we control this loss by establishing an upper bound on the number of accepted type-$2$ customers. The proof of the lemma, which uses ideas similar to those in Lemma~\ref{lem:HYB:full-case1}, is deferred to Appendix~\ref{sec:proof-value-MP1}.
\begin{lemma}\label{lemma:upper-bound-q2-ADP-case-1}
Under event $\mathcal{E}$, if $n_1\geq \frac{k}{p^2} \log n$, then one of the following conditions holds:
\begin{enumerate}[(a)]
\item $n_1+n_2 - \frac{2\Delta}{\delta p} \leq b $, or
\item $n_1+n_2 - \frac{2\Delta}{\delta p} > b $ and $q_2(1) \leq \frac{1-c}{1-a} b + c \left(b - n_1 \right)^+ + c \frac{2\Delta}{\delta p} +1.$
\end{enumerate}
\end{lemma}
Using Lemma~\ref{lemma:upper-bound-q2-ADP-case-1} and the discussion preceding it, we prove, in Appendix~\ref{sec:proof-value-MP1}, the following lemma, which gives a lower bound on the competitive ratio for Case (i):
\begin{lemma} \label{claim:q1+q2=b-ADP-ratio}
Under event $\mathcal{E}$, if $n_1\geq \frac{k}{p^2} \log n$ and $q_1(1)+ q_2(1)=b$, then
$$\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} \geq c - \frac{{3}\Delta}{ab\delta p}.$$
\end{lemma}
\medskip
\noindent { \bf Case (ii): $n_1 \geq \frac{k}{p^2} \log n$, and Algorithm~\ref{algorithm:adaptive-threshold} does not exhaust the inventory.}
\medskip
First, note that in this case, $OPT(\vec{v}) = n_1 + a \min \{b-n_1,n_2\}$.
Also, in this case, we accept all type-$1$ customers; therefore, $q_1(1)=n_1$. To lower-bound the competitive ratio, we only need to show that we do not reject too many type-$2$ customers, i.e., that $q_2(1)$ is large enough.
Note that if condition \eqref{eq:Ccomp2} holds for all $\lambda\in \{1/n, 2/n, \ldots, 1\}$, then all type-$2$ customers are accepted, and we have $q_2(1) = n_2$. This implies that $ALG_{2,c}({\vec{v}}) = OPT(\vec{v})$.
The more interesting case is when there exists at least one time step at which condition \eqref{eq:Ccomp2} is violated. Let $\l$ be the last time at which we reject a type-$2$ customer. This means that at time $\l$, we have:
\noindent\begin{tabularx}{\textwidth}{>{\hsize=0.7\hsize}X>{\hsize=1.3\hsize}X}
\begin{equation}
u_{1,2}(\l) \geq b \label{ineq:u2>=b},\end{equation} &
\begin{equation}
q_2(\l) \geq \frac{1-c}{1-a} b + c \left(b - u_1(\l) \right)^+.\label{eq:Ccomp5} \end{equation}
\end{tabularx}
\noindent This also provides the following lower bound on the number of accepted type-$2$ customers:
\begin{align}
q_2(1) = q_2(\l) + \left[n_2 - o_2(\l)\right] \geq \frac{1-c}{1-a} b + c \left(b - u_1(\l) \right)^+ + \left[n_2 - o_2(\l)\right].
\end{align}
Therefore, when $q_1(1)+q_2(1)<b$,
\begin{align}
\label{ineq:lower-bound-ratio-ADP}
\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})}
\geq \frac{n_1 + a\left(\frac{1-c}{1-a} b + c \left(b - u_1(\l) \right)^+ + \left[n_2 - o_2(\l)\right]\right) }{n_1 + a \min \{b-n_1,n_2\}}.
\end{align}
For a fixed $c$, if the right-hand side of \eqref{ineq:lower-bound-ratio-ADP} is at least $c$ for all possible instances, then $ALG_{2,c}$ is $c$-competitive.
However, if $c$ is too large, then there will be instances for which the right-hand side of \eqref{ineq:lower-bound-ratio-ADP} is less than $c$.
We identify a superset of these instances via all combinations of $(\l, n_1, n_2, \eta_1(\l), \eta_2(\l))$ that satisfy certain constraints ensuring that they correspond to valid instances.
As a reminder, $\eta_j(\l)$ represents the number of type-$j$ customers by time $\l$ in the initial sequence (determined by the adversary, i.e., $\vec{v}_I$).
As we describe these constraints below, it becomes clear that (1) any instance of the problem satisfies all of them, and (2) they correspond to the feasible region of the mathematical program in (\ref{MP1}).
We start with the straightforward constraints: for every instance, $n_1 + n_2 \leq n$. Also, $\eta_1(\l) \leq n_1$ and $\eta_2(\l) \leq n_2$. Further, in the initial customer sequence $\vec{v}_I$, at time $\l$ we cannot have more than $\l n$ customers; thus, $\eta_1(\l) + \eta_2(\l) \leq \l n$. Similarly, after time $\l$, we cannot have more than $(1 - \l) n$ customers, and therefore $n_1 + n_2 - \left[\eta_1(\l) + \eta_2(\l)\right] \leq (1-\l) n$. By the definition of $\l$, we have $\l \leq 1$. We also add the constraint $n_1 \leq b$, which always holds in the case $q_1(1)+q_2(1)<b$. Note that these are Constraints~\eqref{constraint:x<=1}-\eqref{constraint:n1'+n2'big} in (\ref{MP1}), where, with a slight abuse of notation, we write $\eta_j$ for $\eta_j(\l)$.
For a moment, suppose $o_j(\l) =\tilde o_j(\l)$. First, we remind the reader that $\tilde o_j(\l) = (1-p)\eta_j(\l)+p \l n_j$ is the deterministic approximation of $o_j(\l)$ that we introduced in Section~\ref{sec:prem}; it is also restated at the bottom of (\ref{MP1}). Further, note that this assumption serves only to explain the idea behind constructing (\ref{MP1}); later in the proof, we address the difference between
$\tilde o_j(\l)$ and $ o_j(\l)$.
In this case, we have:
\begin{align}
\label{eq:const:b}
\tilde u_{1,2}(\l) \triangleq \min \left\{ \frac{\tilde o_1(\l)+\tilde o_2(\l)}{\l p }, \frac{\tilde o_1(\l)+\tilde o_2(\l) + (1- \l ) (1-p)n}{1-p+\l p} \right\} = u_{1,2}(\l)\geq b,
\end{align}
where the last inequality is the same as inequality \eqref{ineq:u2>=b}.
Further, note that rejecting a customer at time $\l$ implies that
$\l \geq \frac{\phi b}{n}= \delta$, and thus, by definition, $u_{1,2}(\l) = \min \left\{ \frac{o_1(\l)+o_2(\l)}{\l p}, \frac{o_1(\l) +o_2(\l) + (1-\l) (1-p) n}{1-p+\l p} \right\}$.\footnote{Note that when $\l < \delta$, Algorithm~\ref{algorithm:adaptive-threshold} never rejects a customer, because $q_2(\l) \leq \l n < \delta n = \phi b$.}
Note that Inequality \eqref{eq:const:b} is Constraint \eqref{constraint:u_2>=b}, where in (\ref{MP1}), again with a slight abuse of notation, we write $\tilde u_{1,2}$ for $\tilde u_{1,2}(\l)$ and $\tilde o_j$ for $\tilde o_j(\l)$.
Further, the most interesting constraint, Constraint \eqref{constraint:not-c-competitive},
comes from condition \eqref{ineq:lower-bound-ratio-ADP}.
By rearranging terms, we can show that the right-hand side of \eqref{ineq:lower-bound-ratio-ADP} being smaller than or equal to $c$ is equivalent to:
\begin{align}
c \geq \frac{a(n_2- o_2(\l)+\frac{b}{1-a})+n_1}{a\min\{n_1+n_2, b\}+(1-a)n_1+\frac{a^2b}{1-a}+a \min\{ u_1(\l), b\}}, \label{eq:Ccomp6}
\end{align}
which is Constraint \eqref{constraint:not-c-competitive} after replacing $o_2(\l)$ with $\tilde o_2$ and $u_1(\l)$ with $\tilde u_1$.
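For completeness, here is a sketch of that rearrangement (our own expansion; all symbols are as above). Clearing the denominator of \eqref{ineq:lower-bound-ratio-ADP} and collecting the terms involving $c$ gives
\begin{align*}
n_1 + \frac{ab}{1-a} + a\left[n_2 - o_2(\l)\right] \leq c\left(n_1 + a \min \{b-n_1,n_2\} + \frac{ab}{1-a} - a\left(b - u_1(\l)\right)^+\right),
\end{align*}
and the denominator of \eqref{eq:Ccomp6} then follows from the identities $a\min\{b-n_1,n_2\} = a\min\{n_1+n_2,b\} - an_1$ and $\frac{ab}{1-a} - a\left(b - u_1(\l)\right)^+ = \frac{a^2b}{1-a} + a\min\{u_1(\l),b\}$.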
Overall, the above conditions define the feasible region of the mathematical program (\ref{MP1}).
By minimizing $c$, we find the threshold for making (\ref{MP1}) infeasible: let $c^*$ be the optimal value of (\ref{MP1}); for any $c < c^*$, (\ref{MP1}) is infeasible, and the only constraint that $(\l , n_1, n_2, \eta_1(\l), \eta_2(\l), c)$ can violate is~\eqref{constraint:not-c-competitive} (same as \eqref{eq:Ccomp6}). This implies that $ALG_{2,c}$ is $c$-competitive.
We now return to the issue that $\tilde o_j(\l)$ and $o_j(\l)$ are not equal.
Due to this difference, (1) Constraint~\eqref{constraint:u_2>=b} might be violated (even though \eqref{ineq:u2>=b} is satisfied), and (2) violating Constraint~\eqref{constraint:not-c-competitive} does not imply violating \eqref{eq:Ccomp6}.
To address these issues, first, in Lemma~\ref{lemma:feasible-sol}, we give a slightly modified tuple that satisfies Constraints~\eqref{constraint:u_2>=b}-\eqref{constraint:n1'+n2'big}; then, in Lemma~\ref{claim:ADP-non-exahust}, we prove that for any $c\leq c^*$, if Constraint~\eqref{constraint:not-c-competitive} is violated, then $\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} \geq c- \frac{4\Delta n}{\phi^2 b^2 p}$.
The proofs of both lemmas are deferred to Appendix~\ref{sec:proof-value-MP1}; they amount to applying the concentration results of Lemma~\ref{lemma:needed-centrality-result-for-m=2} and carefully analyzing the error terms.
These two lemmas complete the analysis of the competitive ratio in Case (ii).
\begin{lemma} \label{lemma:feasible-sol}
Under event $\mathcal{E}$, if $n_1\geq \frac{k}{p^2} \log n$ and $q_1(1)+q_2(1) < b$, then the tuple $(\l', n_1', n_2', \eta_1', \eta_2', c')\triangleq (\l, n_1, n_2 + \xi, \eta_1(\l), \eta_2(\l) + \bar \xi, c)$ satisfies Constraints~\eqref{constraint:u_2>=b}-\eqref{constraint:n1'+n2'big}, where
\begin{align*}
\xi &\triangleq \begin{cases} 0 &\text{ if }n_1+n_2 \geq b ,\\
\min \left\{ n - (n_1 + n_2), \frac{\Delta n}{\phi b p} \right\} &\text{ if }n_1+n_2< b;
\end{cases}\\
\bar \xi &\triangleq \begin{cases} 0 &\text{ if }n_1+n_2 \geq b, \\
\min \left\{ \xi, \l n - (\eta_1 (\l)+ \eta_2(\l)) \right\} &\text{ if }n_1+n_2< b,
\end{cases}
\end{align*}
and where $\Delta = \alpha \sqrt{b \log n}$, $\phi = \frac{1-c}{1-a}$, and $\l$ is the last time at which we reject a type-$2$ customer.
\end{lemma}
\begin{lemma} \label{claim:ADP-non-exahust}
Under event $\mathcal{E}$, if $n_1\geq \frac{k}{p^2} \log n$ and $q_1(1)+q_2(1) < b$, then $$\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} \geq c- \frac{4\Delta n}{\phi^2 b^2 p}.$$
\end{lemma}
\medskip
\noindent{ \bf Case (iii): $n_1 < \frac{k}{p^2} \log n$.}
\medskip
The competitive ratio analysis for this case uses ideas similar to those in the previous two cases, and it follows from the next two lemmas. The proofs are deferred to Appendix~\ref{sec:proof-value-MP1}.
\begin{lemma}\label{lemma:n1-small-ADP-full}
Under event $\mathcal{E}$, if $n_1 < \frac{k}{p^2} \log n$, then one of the following three conditions holds:
\begin{enumerate}[(a)]
\item $q_1(1)+q_2(1)=b$;
\item $q_1(1)=n_1$ and $q_2(1)=n_2$; or
\item $q_1(1)=n_1$ and $q_2(1) \geq cb$.
\end{enumerate}
\end{lemma}
Using Lemma~\ref{lemma:n1-small-ADP-full}, in the following lemma, we establish a lower bound on the competitive ratio for Case (iii):
\begin{lemma}
Under event $\mathcal{E}$, if $n_1 < \frac{k}{p^2} \log n$, then
$\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} \geq c$. \label{claim:small-n1}
\end{lemma}
Having established Lemmas~\ref{claim:q1+q2=b-ADP-ratio},~\ref{claim:ADP-non-exahust}, and~\ref{claim:small-n1}, we have lower bounds on the competitive ratio of Algorithm~\ref{algorithm:adaptive-threshold} for all possible cases.
We complete the proof of the theorem with the following lemma (proven in
Appendix~\ref{sec:proof-value-MP1}), which ensures that the error terms in Lemmas~\ref{claim:q1+q2=b-ADP-ratio} and~\ref{claim:ADP-non-exahust} are $O(\epsilon)$.
\begin{lemma}
\label{claim:check-error-terms}
The error terms in Lemmas~\ref{claim:q1+q2=b-ADP-ratio} and~\ref{claim:ADP-non-exahust} are $O(\epsilon)$, i.e.,
\vmn{(a)} $\frac{{3}\Delta}{ab\delta p} =O\left( \epsilon \right)$ and \vmn{(b)} $ \frac{4\Delta n}{\phi^2 b^2 p} = O\left( \epsilon \right)$.
\end{lemma}
\end{proof}
\section{Missing proofs of Section~\ref{sec:alg2}}
\label{sec:proof-value-MP1}
\vmn{Before proceeding with the proofs, we state and prove an auxiliary lemma that establishes an upper bound on $n_1$ and $n_1 + n_2$ using the deterministic approximation functions $\tilde o_j(\cdot)$.}
\vmn{
\begin{lemma}
\label{prop:Ubounds}
For $\lambda \in \{1/n, 2/n, \ldots, 1\}$, we have:
\if flase
If for some $\lambda$ and $\vec{v}$, the estimation~\eqref{eq:estimate} holds with equality for both $o_1(\lambda)$ and $o_2(\lambda)$, then
\fi
\begin{subequations}
\begin{align} & n_1 \leq \min \left \{ \frac{\tilde o_1(\lambda)}{\lambda p}, \frac{ \tilde o_1(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\} \text{, and } \label{eq:u_1}
\\ & n_1 + n_2 \leq \min \left \{ \frac{\tilde o_1(\lambda)+\tilde o_2(\lambda)}{\lambda p}, \frac{ \tilde o_1(\lambda)+\tilde o_2(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\}. \label{eq:u_2}
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
The proof essentially follows from the definition of the deterministic approximation functions. Here we just prove \eqref{eq:u_1}.
First note that $\eta_1(\lambda) \geq 0$, thus:
\begin{align*}
\tilde o_1(\lambda) = (1-p)\eta_1(\lambda)+p \lambda n_1 \geq p \lambda n_1 ~~\Rightarrow~~ n_1 \leq \frac{\tilde o_1(\lambda)}{\lambda p}.
\end{align*}
Second note that $n_1 - \eta_1(\lambda) \leq (1-\lambda) n$, therefore,
\begin{align*}
\tilde o_1(\lambda) = (1-p)\eta_1(\lambda)+p \lambda n_1 \geq (1-p)(n_1 -(1-\lambda) n) + p \lambda n_1 ~~\Rightarrow~~ n_1 \leq \frac{ \tilde o_1(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p}.
\end{align*}
Which completes the proof of \eqref{eq:u_1}. Proof of \eqref{eq:u_2} follows similar steps.
\end{proof}
}
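As an aside, the two upper bounds of Lemma~\ref{prop:Ubounds} are easy to sanity-check numerically. The sketch below is illustrative only (not part of the manuscript; all grid values are arbitrary): it sweeps feasible pairs $(n_1, \eta_1)$, i.e., $0 \leq \eta_1 \leq n_1$ and $n_1 - \eta_1 \leq (1-\lambda)n$, and confirms both bounds in \eqref{eq:u_1}.

```python
# Sanity check of Lemma prop:Ubounds, inequality (eq:u_1), on an arbitrary grid.
# Uses \tilde o_1(lambda) = (1-p)*eta_1 + p*lambda*n_1 from the proof above.
def o1_tilde(p, lam, n1, eta1):
    return (1 - p) * eta1 + p * lam * n1

def bounds_hold(n, p, lam, n1, eta1):
    o1 = o1_tilde(p, lam, n1, eta1)
    b1 = o1 / (lam * p)                                       # bound from eta_1 >= 0
    b2 = (o1 + (1 - lam) * (1 - p) * n) / (1 - p + lam * p)   # bound from n_1 - eta_1 <= (1-lam)*n
    return n1 <= min(b1, b2) + 1e-9

# Sweep feasible (n_1, eta_1): 0 <= eta_1 <= n_1 and n_1 - eta_1 <= (1-lam)*n.
checks = []
n, p = 100, 0.3
for k in range(1, 11):
    lam = k / 10
    for n1 in range(0, n + 1, 10):
        for eta1 in range(max(0, n1 - int((1 - lam) * n)), n1 + 1, 5):
            checks.append(bounds_hold(n, p, lam, n1, eta1))
print(all(checks))
```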
\begin{proof}{\textbf{Proof of Lemma~\ref{prop:full-Ubounds}:}}
Since Inequalities~\eqref{eq:full-u_1} and~\eqref{eq:full-u_2} are similar, we only present the proof of Inequality~\eqref{eq:full-u_1}.
When $\lambda < \delta$, $u_1(\lambda) \triangleq b$, and thus~\eqref{eq:full-u_1} trivially holds.
The more interesting case is when $\lambda \geq \delta$.
Without loss of generality, we assume $n_1 \leq b+\frac{2 \Delta}{\delta p}$.
\vmn{Otherwise, similar to the proof of Lemma~\ref{lem:HYB:full-case1}, we construct a modified adversarial instance with only $b+\frac{2 \Delta}{\delta p}$ type-$1$ customers and argue that, for the same realization of the {{\st{predictable}\vmn{stochastic}}} group and random permutation, $u_1(\lambda)$ is lower bounded by the one corresponding to the modified instance.}
Note that we can apply Inequality~\eqref{inequality:good-approximation-o_1} \vmn{to this modified instance, because}\st{to those} $ b+\frac{2 \Delta}{\delta p} \geq \frac{k}{p^2} \log n$ \vmn{under the condition imposed on $b$}.
\noindent Note that $\delta =\frac{\phi b}{n}=\frac{(1-c)b}{(1-a)n} \geq \frac{(1-c)b}{n}$. This, \vmn{together with the condition imposed on $b$ and the inequality}
\begin{align} \vmn{\bar\epsilon}\leq \frac{3}{2\alpha}\label{ineq:ADP-k'}
\end{align}
\vmn{which holds by the definition of the constants $\alpha$ and $\bar\epsilon$ given in Lemma~\ref{lemma:needed-centrality-result-for-m=2}, }gives \vmn{us}
\begin{align}
\frac{\Delta}{\delta p}\leq \frac{3}{2}b.\label{epsilon:ADP-small-Delta}
\end{align}
Therefore, $n_1 \leq b+\frac{2 \Delta}{\delta p} \leq 4b$, and thus Inequality~\eqref{inequality:good-approximation-o_1} implies $o_1(\lambda) \geq \tilde o_1(\lambda) - \alpha \sqrt{n_1 \log n} \geq \tilde o_1(\lambda) - \alpha \sqrt{4 b \log n} = \tilde o_1(\lambda) - 2\Delta$.
As a result,
\begin{align*}
u_1(\lambda) \triangleq & \min \left \{ \frac{o_1(\lambda)}{\lambda p}, \frac{ o_1(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\} \\
\geq & \min \left \{ \frac{\tilde o_1(\lambda) - 2\Delta}{\lambda p}, \frac{ \tilde o_1(\lambda) - 2\Delta + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\} &(o_1(\lambda) \geq \tilde o_1(\lambda) - 2\Delta ) \\
\geq & n_1 - \max \left\{ \frac{2\Delta}{\lambda p} , \frac{2\Delta}{1-p+\lambda p}\right\} = n_1-\frac{2\Delta}{\lambda p} &(\text{Lemma~\ref{prop:Ubounds}}) \\
\geq & n_1 - \frac{2\Delta}{\delta p}. &(\lambda \geq \delta)
\end{align*}
\end{proof}
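The maximum taken in the penultimate line of the display above can also be checked directly: since $(1-p+\lambda p) - \lambda p = 1-p \geq 0$, the term $\frac{2\Delta}{\lambda p}$ dominates. A throwaway numerical check (arbitrary grid, not part of the manuscript):

```python
# Check that max{2D/(lam*p), 2D/(1-p+lam*p)} = 2D/(lam*p) for lam, p in (0, 1],
# because 1-p+lam*p >= lam*p.
D = 1.0
ok = True
for i in range(1, 11):
    for j in range(1, 11):
        lam, p = i / 10, j / 10
        ok &= abs(max(2*D/(lam*p), 2*D/(1-p+lam*p)) - 2*D/(lam*p)) < 1e-12
print(ok)
```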
\begin{proof}{\textbf{Proof of Proposition~\ref{prop:values-math-program}:}}
\textbf{Case $b < n$:}
To prove this case, we relax some of the constraints and only keep Constraints~(\ref{constraint:not-c-competitive}), (\ref{constraint:x<=1}), (\ref{constraint:n1'<=n1}), and (\ref{constraint:n2'<=n2}) and show that for every point in this superset of the feasibility region, $c\geq p+ \frac{1-p}{2-a}$.
We first notice that, for fixed $n_1$ and $n_2$, the right hand side of Constraint~(\ref{constraint:not-c-competitive}) is non-increasing in each of $\tilde u_1$ and $\tilde o_2$, and hence non-increasing in each of $\eta_1$ and $\eta_2$.
By using Constraints~(\ref{constraint:n1'<=n1}) and~(\ref{constraint:n2'<=n2}), we obtain the upper bounds $\tilde o_1 \leq (1-p+\vmn{\l} p)n_1$ and $\tilde o_2 \leq (1-p+\vmn{\l} p) n_2$.
With these upper bounds and the fact $\tilde u_1 \leq \frac{\tilde o_1}{\vmn{\l} p } $, Constraint~(\ref{constraint:not-c-competitive}) gives:
\begin{align*}
c \geq \frac{a(n_2- (1-p+p\vmn{\l})n_2 +\frac{b}{1-a})+n_1}{a\min\{n_1+n_2, b\}+(1-a)n_1+\frac{a^2b}{1-a}+a \min \left\{ \frac{(1-p+p\vmn{\l})n_1}{\vmn{\l} p } , b \right\}},
\end{align*}
which, after rearranging terms, leads to
\begin{align}
c \geq \frac{a n_2 p (1-\vmn{\l}) +\frac{ab}{1-a}+n_1}{a\min\{n_2, b-n_1\}+n_1+\frac{a^2b}{1-a}+a \min \left\{ \left( \frac{1-p}{\vmn{\l} p }+1 \right) n_1, b \right\}}. \label{inequality:raw-form}
\end{align}
We focus on lower bounding the right hand side of~(\ref{inequality:raw-form}).
When $n_2\geq b-n_1$, the right hand side of~(\ref{inequality:raw-form}) is non-decreasing in $n_2$ because the denominator remains the same while the numerator is non-decreasing when $n_2$ increases (due to Constraint~(\ref{constraint:x<=1})).
Therefore, for the sake of obtaining a lower bound, we can assume, without loss of generality,
\begin{align}
n_2 \leq b-n_1. \label{assumption:n_1+n_2<=b}
\end{align}
With~(\ref{assumption:n_1+n_2<=b}), the right hand side of~(\ref{inequality:raw-form}) can be written as
\begin{subequations}
\begin{align}
&\vmn{f_1(\vmn{\l}) \triangleq }\frac{a n_2 p (1-\vmn{\l}) +\frac{ab}{1-a}+n_1}{an_2+n_1+\frac{a^2b}{1-a}+a b} & \text{ if }\vmn{\l} \leq \frac{(1-p)n_1}{p(b-n_1)},\label{to-prove:small-lambda}
\\ &\vmn{f_2(\vmn{\l}) \triangleq } \frac{a n_2 p (1-\vmn{\l}) +\frac{ab}{1-a}+n_1}{an_2+n_1+\frac{a^2b}{1-a}+a\left( \frac{1-p}{\vmn{\l} p }+1 \right) n_1} &\text{ if }\vmn{\l} > \frac{(1-p)n_1}{p(b-n_1)}.\label{to-prove:big-lambda}
\end{align}
\end{subequations}
We prove \vmn{
\begin{align}
& f_1(\vmn{\l}) \geq p+\frac{1-p}{2-a}, ~~~\text{ if } \vmn{\l} \leq \frac{(1-p)n_1}{p(b-n_1)}\label{eq:inter1}\\
& f_2(\vmn{\l}) \geq p+\frac{1-p}{2-a}, ~~~\text{ if } \vmn{\l} > \frac{(1-p)n_1}{p(b-n_1)}\label{eq:inter2}
\end{align}}
\vmn{separately. We start with the former:}
Because
\vmn{$f_1(\vmn{\l})$} is non-increasing in $\vmn{\l}$, we only need to consider the case $\vmn{\l} = \frac{(1-p)n_1}{p(b-n_1)}$; \vmn{
$f_1(\vmn{\l})$ at $\vmn{\l} = \frac{(1-p)n_1}{p(b-n_1)}$ can be rearranged as}
\begin{align*}
\vmn{f_1\left(\frac{(1-p)n_1}{p(b-n_1)}\right) =} 1- \frac{an_2 \left(1-p+\frac{(1-p)n_1}{p(b-n_1)} p\right)}{an_2+n_1+\frac{ab}{1-a}},
\end{align*}
which, for fixed $n_1$ and $n_2$, is non-decreasing in $b$.
Therefore, according to~(\ref{assumption:n_1+n_2<=b}), we only need to consider the case $b=n_1+n_2$ (in the degenerate case $n_1=n_2=0$, \st{the above quantity}\vmn{$f_1\left(\frac{(1-p)n_1}{p(b-n_1)}\right)$} is $1$, which is greater than $p+\frac{1-p}{2-a}$, so we can assume, without loss of generality, $n_1+n_2>0$), in which case, \st{the above quantity equals to}\vmn{we have:}
\begin{align*}
\vmn{f_1\left(\frac{(1-p)n_1}{p(b-n_1)}\right) =} 1- \frac{a (1-p) (n_1+n_2)}{an_2+n_1+\frac{a(n_1+n_2)}{1-a}}
\geq 1- \frac{a (1-p) (n_1+n_2)}{an_2+an_1+\frac{a(n_1+n_2)}{1-a}}=1-\frac{1-p}{1+\frac{1}{1-a}}=p+\frac{1-p}{2-a},
\end{align*}
\vmn{which completes the proof of \eqref{eq:inter1}.}
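The closing identity $1-\frac{1-p}{1+\frac{1}{1-a}}=p+\frac{1-p}{2-a}$ can be double-checked numerically; the sketch below is illustrative only (grid values arbitrary, not part of the manuscript):

```python
# Verify 1 - (1-p)/(1 + 1/(1-a)) == p + (1-p)/(2-a) on an arbitrary grid
# with a in (0, 1) and p in [0, 1).
def lhs(a, p):
    return 1 - (1 - p) / (1 + 1 / (1 - a))

def rhs(a, p):
    return p + (1 - p) / (2 - a)

vals = [(a / 10, p / 10) for a in range(1, 10) for p in range(0, 10)]
print(all(abs(lhs(a, p) - rhs(a, p)) < 1e-12 for a, p in vals))
```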
\vmn{Next we prove \eqref{eq:inter2}.}
Due to Constraint~(\ref{constraint:x<=1}), \vmn{i.e., $\l \leq 1$, }case~(\ref{to-prove:big-lambda}) can only occur when $\frac{(1-p)n_1}{p(b-n_1)}<1$, or equivalently,
\begin{align}
b > \frac{n_1}{p}. \label{lower-bound-b}
\end{align}
Proving \eqref{eq:inter2}
is trickier because both the numerator and denominator decrease in $\vmn{\l}$.
To address this issue,
\vmn{we first remark that by the definition of $f_2(\l)$, inequality $f_2(\l) \geq p+\frac{1-p}{2-a}$ is equivalent to}
\begin{align*}
a n_2 p (1-\vmn{\l}) +\frac{ab}{1-a}+n_1 \geq \left( p+\frac{1-p}{2-a} \right) \left(an_2+n_1+\frac{a^2b}{1-a}+a\left( \frac{1-p}{\vmn{\l} p }+1 \right) n_1\right),
\end{align*}
which is \vmn{in turn} equivalent to
\begin{align}
a n_2 p +\frac{ab}{1-a}+n_1 \geq \left( p+\frac{1-p}{2-a} \right) \left(an_2+n_1+\frac{a^2b}{1-a}+a\left( \frac{1-p}{\vmn{\l} p }+1 \right) n_1\right) + an_2p\vmn{\l}. \label{to-prove:function-of-lambda}
\end{align}
\vmn{Now} the left hand side of \st{statement}\vmn{Inequality}~(\ref{to-prove:function-of-lambda}) is not a function of $\vmn{\l}$ while the right hand side of~(\ref{to-prove:function-of-lambda}) is a function of $\vmn{\l}$ of the form
\begin{align} x\vmn{\l}+\frac{y}{\vmn{\l}}+z \label{function-of-lambda}
\end{align} where $x,y,z$ are non-negative \st{constants}.
Clearly, the second derivative of~(\ref{function-of-lambda}) (\st{over}\vmn{with respect to} $\vmn{\l}$), $\frac{2y}{\vmn{\l}^3}$, is non-negative for $\vmn{\l} \in \left[\frac{(1-p)n_1}{p(b-n_1)}, 1\right]$.
As a result, (\ref{function-of-lambda}) is convex and is maximized at the extreme values of $\vmn{\l}$, which in our case are $\vmn{\l} = \frac{(1-p)n_1}{p(b-n_1)}$ and $\vmn{\l}=1$.
Therefore, we only need to prove \st{statement}\vmn{Inequality}~(\ref{to-prove:function-of-lambda}) at these two extreme values of $\vmn{\l}$.
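The endpoint argument above is just convexity of $x\l+\frac{y}{\l}+z$ for non-negative $x,y,z$; a quick numerical illustration (arbitrary test values, not part of the manuscript):

```python
# Check that f(lam) = x*lam + y/lam + z with x, y, z >= 0 is maximized at an
# endpoint of any interval [lo, hi] with 0 < lo <= hi.
def f(lam, x, y, z):
    return x * lam + y / lam + z

def endpoint_max(x, y, z, lo, hi, steps=1000):
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    grid_max = max(f(l, x, y, z) for l in grid)
    return grid_max <= max(f(lo, x, y, z), f(hi, x, y, z)) + 1e-9

print(all(endpoint_max(x, y, z, 0.1, 1.0)
          for x in (0, 1, 3) for y in (0, 2, 5) for z in (0, 4)))
```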
The former case, $\vmn{\l} = \frac{(1-p)n_1}{p(b-n_1)}$, is covered in
\vmn{\eqref{eq:inter1}}.
Thus, we only need to prove~(\ref{to-prove:function-of-lambda}) for $\vmn{\l} = 1$.
When $\vmn{\l} =1$, (\ref{to-prove:function-of-lambda}) can \st{easily} be rearranged as
\begin{align}
\label{eq:chain1}
\frac{ab}{1-a}+n_1 - \left( p+\frac{1-p}{2-a} \right) \left(an_2+n_1+\frac{a^2b}{1-a}+\frac{an_1}{p }\right) \geq 0.
\end{align}
Because the left hand side of Inequality \vmn{\eqref{eq:chain1}} is a decreasing function of $n_2$, using~(\ref{assumption:n_1+n_2<=b}), we only need to prove that the inequality holds when $n_2=b-n_1$, which is equivalent to
\begin{align}
\label{eq:chain2}
\frac{ab}{1-a}+n_1 - \left( p+\frac{1-p}{2-a} \right) \left(a(b-n_1)+n_1+\frac{a^2b}{1-a}+\frac{an_1}{p }\right) \geq 0.
\end{align}
By separating terms associated with $n_1$ and $b$, and using $$1-\left( p+\frac{1-p}{2-a} \right)=\frac{(1-p)(1-a)}{2-a},$$
\st{above }Inequality \vmn{\eqref{eq:chain2}} is equivalent to
\begin{align}
\label{eq:chain3}
\frac{a(1-p)}{2-a}b + \frac{(1-p)(1-a)}{2-a}n_1 - \left( p+\frac{1-p}{2-a} \right) \left(\frac{a(1-p)}{p }\right)n_1 \geq 0.
\end{align}
Using the lower bound on $b$ given by~(\ref{lower-bound-b}), \vmn{i.e., $b > \frac{n_1}{p}$,} and then dividing both sides by $(1-p)n_1$ (in the degenerate case where $(1-p)n_1=0$, both sides are $0$ and we are done), Inequality \vmn{\eqref{eq:chain3}} is implied by
\begin{align*}
\frac{a}{(2-a)p} + \frac{1-a}{2-a} - \left( p+\frac{1-p}{2-a} \right) \left(\frac{a}{p }\right)\geq 0,
\end{align*}
\vmn{which holds because after canceling the two terms involving $1/p$, it} is equivalent to
\begin{align*}
\frac{(1-a)^2}{2-a} \geq 0.
\end{align*}
\vmn{Therefore, we have proved Inequality \vmn{\eqref{eq:chain1}}, and thus \eqref{eq:inter2}. This completes the proof of the proposition for the case $b < n$.}
\st{so we are done.}
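For completeness, the algebra behind the last two displays can be verified numerically: the left-hand side of the final inequality equals $\frac{(1-a)^2}{2-a}$ exactly. A throwaway check on an arbitrary grid (not part of the manuscript):

```python
# Verify a/((2-a)p) + (1-a)/(2-a) - (p + (1-p)/(2-a))*(a/p) == (1-a)^2/(2-a),
# so nonnegativity of the left-hand side is immediate.
def lhs(a, p):
    return a / ((2 - a) * p) + (1 - a) / (2 - a) - (p + (1 - p) / (2 - a)) * (a / p)

def rhs(a):
    return (1 - a) ** 2 / (2 - a)

vals = [(a / 10, p / 10) for a in range(1, 10) for p in range(1, 10)]
print(all(abs(lhs(a, p) - rhs(a)) < 1e-12 for a, p in vals))
```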
\textbf{Case $b=n$:}
\st{Now let us prove the $b=n$ case.}
To prove this case, we relax some of the constraints and only keep Constraints~(\ref{constraint:not-c-competitive}),~(\ref{constraint:x<=1}),~(\ref{constraint:eta_1+eta_2<=lambdan}) and~(\ref{constraint:n1+n2<=n}) and show that for every point in this superset of the feasibility region, $c\geq 1$.
According to Constraint~(\ref{constraint:not-c-competitive}) and using $a\min\{n_1+n_2, b\}+(1-a)n_1 = a\min\{n_2, b-n_1\}+n_1 $, it suffices to prove
\begin{align}
\label{ineq:inter:1}
\frac{a(n_2-\tilde o_2+\frac{b}{1-a})+n_1}{a\min\{n_2, b-n_1\}+n_1+\frac{a^2b}{1-a}+a \tilde u_1} \geq 1,
\end{align}
\vmn{or equivalently,}
\begin{align}
\label{ineq:inter:2}
{a(n_2-\tilde o_2+\frac{b}{1-a})+n_1} \geq {a\min\{n_2, b-n_1\}+n_1+\frac{a^2b}{1-a}+a \tilde u_1}.
\end{align}
\st{Multiplying both sides by the denominator of the left hand side, }Using $\min\{n_2, b-n_1\}\leq n_2$, subtracting $an_2+n_1$ from both sides \vmn{of \eqref{ineq:inter:2}}, and dividing both sides by $a$, \st{the above }Inequality \vmn{\eqref{ineq:inter:2}} is implied by
\begin{align*}
-\tilde o_2+\frac{b}{1-a} \geq \frac{ab}{1-a}+ \tilde u_1.
\end{align*}
Subtracting $\frac{ab}{1-a}$ on both sides and using $\tilde u_1 \leq \frac{\tilde o_1 + (1-\vmn{\l} ) (1-p)n}{(1-p+\vmn{\l} p)}$, the above inequality is implied by
\begin{align*}
b -\tilde o_2 \geq \frac{\tilde o_1 + (1-\vmn{\l} ) (1-p)n}{(1-p+\vmn{\l} p)}.
\end{align*}
Multiplying both sides by $1-p+\vmn{\l} p$ and using $b=n$ (which is the assumption of this case), the above inequality is equivalent to
\begin{align*}
\vmn{\l} n- (1-p+\vmn{\l} p) \tilde o_2 \geq \tilde o_1.
\end{align*}
Due to Constraint~(\ref{constraint:x<=1}), $1-p+\vmn{\l} p \leq 1$, and thus the inequality above is implied by
\begin{align*}
\vmn{\l} n \geq \tilde o_2 + \tilde o_1, \end{align*}
or equivalently,
\begin{align*}
\vmn{\l} n \geq (1-p)\eta_2+p n_2 \vmn{\l} + (1-p)\eta_1+p n_1 \vmn{\l}.
\end{align*}
\vmn{The above inequality} follows straightforwardly from Constraints~(\ref{constraint:eta_1+eta_2<=lambdan}) and~(\ref{constraint:n1+n2<=n}). \vmn{This completes our proof of {\eqref{ineq:inter:2}}, and consequently that of the proposition in the case $b = n$.}
\end{proof}
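The very last step of the $b=n$ case reduces to $(1-p)(\eta_1+\eta_2)+p\lambda(n_1+n_2)\leq \lambda n$ whenever $\eta_1+\eta_2\leq \lambda n$ and $n_1+n_2\leq n$, which can be sanity-checked numerically (arbitrary feasible grid, not part of the manuscript):

```python
# Check (1-p)*(eta1+eta2) + p*lam*(n1+n2) <= lam*n under eta1+eta2 <= lam*n
# and n1+n2 <= n, sweeping the binding case eta1 + eta2 = floor(lam*n).
def final_step(n, p, lam, n1, n2, eta1, eta2):
    return (1 - p) * (eta1 + eta2) + p * lam * (n1 + n2) <= lam * n + 1e-9

n, checks = 50, []
for k in range(1, 11):
    lam = k / 10
    for p in (0.2, 0.7):
        for n1 in range(0, n + 1, 10):
            for n2 in range(0, n - n1 + 1, 10):
                cap = int(lam * n)
                for eta1 in range(0, cap + 1, 5):
                    eta2 = cap - eta1   # saturate eta1 + eta2 (worst case)
                    checks.append(final_step(n, p, lam, n1, n2, eta1, eta2))
print(all(checks))
```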
\begin{proof}{\textbf{Proof of Lemma~\ref{lemma:upper-bound-q2-ADP-case-1}:}}
The only interesting case is \vmn{case (b), i.e., }when $n_1+n_2 > b+ \frac{2\Delta}{\delta p}$.
If $q_2(1)=0$, then we are done.
Otherwise, let $\bar\lambda$ be the last time we accept a \st{class}\vmn{type}-$2$ customer.
By Lemma~\ref{prop:full-Ubounds}, $u_{1,2}(\bar\lambda) \geq \min \left\{ b, n_1 + n_2 -\frac{2\Delta}{\delta p}\right\} = b$.
Therefore, according to the definition of $\bar\lambda$, Condition~\eqref{eq:Ccomp2} must be satisfied.
Thus,
\begin{align*}
q_2(1) = & q_2 ( \bar \lambda) \\ \leq & \frac{1-c}{1-a} b + c \left(b - u_1( \bar \lambda) \right)^+ {+1} &(\text{Condition~\eqref{eq:Ccomp2}}) \\
\leq & \frac{1-c}{1-a} b + c \left(b - \min \left\{ b, n_1-\frac{2\Delta}{\delta p}\right\} \right)^+ {+1} &(\text{Lemma~\ref{prop:full-Ubounds}}) \\
\leq & \frac{1-c}{1-a} b + c \left(b - n_1 \right)^+ + c \frac{2\Delta}{\delta p} {+1}.
\end{align*}
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:q1+q2=b-ADP-ratio}:}}
We consider the two cases of Lemma~\ref{lemma:upper-bound-q2-ADP-case-1} separately.
For case \vmn{(a)}, $n_1+n_2 \leq b + \frac{2\Delta}{\delta p}$, we note that
\begin{align*}
OPT(\vec{v}) \leq & n_1+ n_2 a \\
\leq & \left( b+ \frac{2\Delta}{\delta p} -n_2 \right) + n_2 a &(n_1+n_2 \leq b+ \frac{2\Delta}{\delta p}) \\
\leq & ALG_{2,c} ({\vec{v}})+ \frac{2\Delta}{\delta p} .&(ALG_{2,c} ({\vec{v}})\geq (b-n_2)+an_2)
\end{align*}
Therefore,
\begin{align*}
\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})}\geq & \frac{ALG_{2,c}({\vec{v}})}{ ALG_{2,c}({\vec{v}}) + \frac{2\Delta}{\delta p}} \geq\frac{ALG_{2,c}({\vec{v}}) - \frac{2\Delta}{\delta p}}{ ALG_{2,c} ({\vec{v}})} &((ALG_{2,c} ({\vec{v}}))^2 \geq (ALG_{2,c}({\vec{v}}) )^2 - \left(\frac{2\Delta}{\delta p}\right)^2) \\
= & 1- \frac{\frac{2\Delta}{\delta p}}{ALG_{2,c}({\vec{v}})} \geq 1 -\frac{2\Delta}{ab\delta p} \geq c- \frac{2\Delta}{ab\delta p}. &(ALG_{2,c} ({\vec{v}})\geq ab)
\end{align*}
For case \vmn{(b)}, $n_1+n_2 > b + \frac{2\Delta}{\delta p}$ and $q_2(1) \leq \frac{1-c}{1-a} b + c \left(b - n_1 \right)^+ + c \frac{2\Delta}{\delta p} {+1}$, we have
\begin{align*}
\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} = & \frac{b - (1-a)q_2(1)}{OPT(\vec{v})} &(q_1(1)+q_2(1)=b)\\
\geq & \frac{b - (1-a)\left( \frac{1-c}{1-a} b + c \left(b - n_1 \right)^+ + c \frac{2\Delta}{\delta p} {+1}\right) }{OPT(\vec{v})} &(\vmn{q_2(1) \leq \frac{1-c}{1-a} b + c \left(b - n_1 \right)^+ + c \frac{2\Delta}{\delta p} {+1}}) \\
= & \frac{c(b-(1-a)(b-n_1)^+)}{OPT(\vec{v})} - \frac{(1-a)c\frac{2\Delta}{\delta p}}{OPT(\vec{v}) } {- \frac{1-a}{OPT(\vec{v})} }\\
\geq & c - \frac{(1-a)c\frac{2\Delta}{\delta p}}{OPT(\vec{v}) } {- \frac{1-a}{OPT(\vec{v})} } &(OPT(\vec{v}) \leq b-(1-a)(b-n_1)^+ ) \\
\geq & c-\frac{2(1-a)c\Delta}{ab\delta p} {- \frac{1-a}{ab} } &(OPT(\vec{v}) \geq ab) \\
\geq & c - \frac{{3}\Delta}{ab\delta p}. &(1-a <1, c \leq 1, \delta \leq 1, p<1)
\end{align*}
\end{proof}
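The elementary step $\frac{X}{X+d}\geq\frac{X-d}{X}$ used in case (a) (with $X=ALG_{2,c}(\vec{v})$ and $d=\frac{2\Delta}{\delta p}$) holds for any $X,d>0$, since the difference is $\frac{d^2}{X(X+d)}\geq 0$. A quick check with arbitrary values (not part of the manuscript):

```python
# Check X/(X+d) >= (X-d)/X for X, d > 0; the gap equals d^2/(X*(X+d)).
def step_holds(X, d):
    return X / (X + d) >= (X - d) / X - 1e-12

print(all(step_holds(X, d) for X in (0.5, 1, 10, 100) for d in (0.1, 1, 5)))
```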
\begin{proof}{\textbf{Proof of Lemma~\ref{lemma:feasible-sol}:}}
Let us define $\tilde o_1', \tilde o_2', \tilde u_1'$ and $\tilde u_{1,2}'$ to be the \vmn{corresponding functions defined for} \st{values of $\tilde o_1, \tilde o_2, \tilde u_1$ and $\tilde u_{1,2}$ corresponding to }the modified tuple $(\vmn{\l}', n_1', n_2', \eta_1', \eta_2', c')$.
It is easy to check that $(\vmn{\l}', n_1', n_2', \eta_1', \eta_2', c')$ satisfies Constraints~\eqref{constraint:x<=1}-\eqref{constraint:n1'+n2'big}.
The interesting part is to show that it satisfies Constraint~\eqref{constraint:u_2>=b}.
When $n_1+n_2\geq b$, we can prove it directly from Lemma~\ref{prop:Ubounds} (since $\tilde u_{1,2}'\geq n_1'+n_2' = n_1+n_2\geq b$).
\vmn{Next we focus on }\st{Below we consider }the case $n_1+n_2<b$\st{. For the case $n_1+n_2<b$}, \vmn{and} we prove $\tilde u_{1,2}' \geq b$ by showing $\tilde u_{1,2}' \geq u_{1,2}(\vmn{\l})$; \vmn{note that $u_{1,2}(\vmn{\l}) \geq b$ by Inequality \eqref{ineq:u2>=b}}.
Recall that we reject a customer at time $\vmn{\l}$ and that the threshold of rejecting a customer is at least $\phi b$; \vmn{thus} we have $\vmn{\l} n \geq o_2(\vmn{\l}) \geq \phi b $.
This gives \begin{align}
\vmn{\l} \geq \frac{\phi b}{n}= \delta. \label{ineq:big-bar-lambda}
\end{align}
\vmn{Note that by definition for $\vmn{\l} \geq \delta$, we have $u_{1,2}(\vmn{\l}) = \min \left\{ \frac{o_1(\vmn{\l})+o_2(\vmn{\l})}{\vmn{\l} p}, \frac{o_1(\vmn{\l}) +o_2(\vmn{\l}) + (1-\vmn{\l}) (1-p) n}{1-p+\vmn{\l} p} \right\}$, which is a non-decreasing function of $o_1(\vmn{\l})+o_2(\vmn{\l})$.}
Thus, $\tilde u_{1,2}' \geq u_{1,2}(\vmn{\l}) $ is implied by $\tilde o_1' + \tilde o_2 ' \geq o_1(\vmn{\l})+o_2(\vmn{\l})$.
We prove this by considering two cases based on the value of $\xi$. \st{For the first case,}\vmn{Case (1),} $\xi = n-(n_1+n_2)$: we have $n_1'+n_2'=n$\vmn{, i.e., there is no time period without a customer.} \vmn{Thus} $\eta_1'+\eta_2' = \vmn{\l} n$, and $\tilde o_1' + \tilde o_2' = \vmn{\l} n \geq o_1(\vmn{\l})+o_2(\vmn{\l})$.
\st{For the second case,}\vmn{Case (2),} $\xi = \frac{\Delta n }{\phi b p}$: we have
\begin{align*}
\tilde o_1' + \tilde o_2' = & \vmn{\l}' p n_1' + (1-p ) \eta_1' +\vmn{\l}' p n_2' + (1-p ) \eta_2' \\
\geq & \vmn{\l} p n_1 + (1-p ) \eta_1 +\vmn{\l} p (n_2 + \frac{\Delta n }{\phi b p}) + (1-p ) \eta_2 &( \xi = \frac{\Delta n }{\phi b p}, \bar \xi \geq 0)\\
= & \tilde o_1(\vmn{\l}) + \tilde o_2(\vmn{\l})+\frac{\Delta n }{\phi b }\vmn{\l}
\geq \tilde o_1(\vmn{\l}) + \tilde o_2(\vmn{\l}) +\Delta &(\eqref{ineq:big-bar-lambda}) \\
\geq & o_1(\vmn{\l})+o_2(\vmn{\l}). &(\eqref{inequality:good-approximation-o_1+o_2})
\end{align*}
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:ADP-non-exahust}:}}
We first show that, for all $c\leq c^*$, Constraint~\eqref{constraint:not-c-competitive} (same as \eqref{eq:Ccomp6}) is either violated or holds with equality.
First, we note that, for every real number $x$, the tuple
$(\vmn{\l}', n_1', n_2', \eta_1', \eta_2', x)$ satisfies Constraints~\eqref{constraint:u_2>=b}-\eqref{constraint:n1'+n2'big} because those constraints do not involve the last element of the tuple.
For all $x<c^*$, $(\vmn{\l}', n_1', n_2', \eta_1', \eta_2', x)$ is not in the feasible set of (\ref{MP1}), and hence Constraint~\eqref{constraint:not-c-competitive} is violated.
Taking the limit $x\to c^*$, Constraint~\eqref{constraint:not-c-competitive} is either violated or holds with equality.
This means, for $ALG_{2,c}$ (with any $ c \leq c^*$),
\begin{align}
c = c' \leq \frac{a(n_2' - \tilde o_2'+\frac{b}{1-a})+n_1'}{a\min\{n_1'+n_2' , b\}+(1-a)n_1'+\frac{a^2b}{1-a}+a \min\{ \tilde u_1', b\}}.\label{ineq:good-ratio}
\end{align}
After rearranging terms, \eqref{ineq:good-ratio} is equivalent to
\begin{align}
\label{ineq:key-ADP-full}\frac{n_1' + a\left(\frac{1-c}{1-a} b + c \left(b - \tilde u_1' \right)^+ + \left[n_2' - \tilde o_2'\right]\right) }{n_1' + a \min \{b-n_1',n_2 '\}} \geq c. \qquad (n_1 \geq \frac{k}{p^2} \log n)
\end{align}
\noindent Repeating \eqref{ineq:lower-bound-ratio-ADP}, recall that we have:
\begin{align*}
\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})}
\geq \frac{n_1 + a\left(\frac{1-c}{1-a} b + c \left(b - u_1(\lambda) \right)^+ + \left[n_2 - o_2(\lambda)\right]\right) }{n_1 + a \min \{b-n_1,n_2\}}.
\end{align*}
We want to compare the right-hand side of~\eqref{ineq:lower-bound-ratio-ADP} with the left-hand side of~\eqref{ineq:key-ADP-full}.
First, we compare $\left(b - \tilde u_1' \right)^+$ with $\left(b - u_1(\lambda) \right)^+$ and show
\begin{align}
\left(b - \tilde u_1' \right)^+ \leq \left(b - u_1(\lambda) \right)^+ + \xi.
\label{ineq:u_1-and-tilde-u_1}
\end{align}
Recall that we do not exhaust the inventory, and thus $n_1 < b$. Further, $\Delta = \alpha \sqrt{b \log n}$; thus we have $\Delta \geq \alpha \sqrt{n_1 \log n}$.
According to \eqref{inequality:good-approximation-o_1}, $\tilde o_1 ' = \tilde o_1(\lambda) \geq o_1(\lambda)- \Delta$.
Combining this and using an argument similar to the proof of Lemma~\ref{prop:full-Ubounds},
\begin{align*}
\tilde u_1' \triangleq & \min \left \{ \frac{\tilde o_1'}{\lambda' p}, \frac{ \tilde o_1' + (1-\lambda') (1-p) n}{1-p+\lambda' p} \right\}\\
= &
\min \left \{ \frac{\tilde o_1(\lambda)}{\lambda p}, \frac{ \tilde o_1(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\} \\
\geq & \min \left \{ \frac{ o_1(\lambda) - \Delta}{\lambda p}, \frac{ o_1(\lambda) - \Delta + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\} &(\tilde o_1 ' \geq o_1(\lambda)- \Delta) \\
\geq & u_1(\lambda)- \max \left\{ \frac{\Delta}{\lambda p} , \frac{\Delta}{1-p+\lambda p}\right\} = u_1(\lambda)-\frac{\Delta}{\lambda p} \\
\geq & u_1(\lambda) - \frac{\Delta n}{\phi b p}. &(\eqref{ineq:big-bar-lambda})
\end{align*}
Note that, by the definition of $\xi$, the above inequality implies Inequality~\eqref{ineq:u_1-and-tilde-u_1}.
Next, we compare $\tilde o_2'$ with $o_2(\lambda)$ and show that
\begin{align}
\tilde o_2' \geq o_2(\lambda) -2\Delta. \label{ineq:o2'>o2(lambda)}
\end{align}
In order to prove \eqref{ineq:o2'>o2(lambda)},
we first show that we can assume, without loss of generality, that $\phi b < n_2 \leq b+ \frac{2\Delta}{\delta p}$.
To see this, we note that when $q_1(1)+q_2(1) < b$, $q_1(1)=n_1$.
Therefore, the only ``mistake'' that the algorithm may make is to reject too many type-$2$ customers.
When $n_2 \leq \phi b$, we never reject a type-$2$ customer, and so $q_2(1)=n_2$ and $ALG_{2,c}({\vec{v}}) = OPT(\vec{v})$.
For proving the upper bound on $n_2$, i.e., $n_2 \leq b+ \frac{2\Delta}{\delta p}$, we first note that, clearly, if $n_2 > b+ \frac{2\Delta}{\delta p}$, decreasing $n_2$ to $b+ \frac{2\Delta}{\delta p}$ (while fixing $n_1$) does not modify the optimal revenue $OPT(\vec{v})$.
Using Lemma~\ref{prop:full-Ubounds}, we know that, when $n_2 \geq b+ \frac{2\Delta}{\delta p}$, $u_{1,2}(\lambda) \geq \min \left\{ b, n_1 + n_2 -\frac{2\Delta}{\delta p}\right\} = b$.
Therefore, we accept a type-$2$ customer arriving at time $\lambda$ only if the number of type-$2$ customers accepted so far has not reached the dynamic threshold (i.e., the third rule in Algorithm~\ref{algorithm:adaptive-threshold}), which depends only on $o_1(\lambda)$ but not on $o_2(\lambda)$.
Given all the above, similar to the proof of Lemma~\ref{lem:HYB:full-case2}, we can construct an alternative adversarial instance in which we reduce the number of type-$2$
customers to $b+ \frac{2\Delta}{\delta p}$, and show that, for the same realization of the stochastic group and random permutation,
the number of accepted type-$2$ customers in the alternative instance serves as a lower bound on its counterpart in the original instance.
Next, we show that the condition $n_2 \geq \frac{ k \log n}{p^2}$ is satisfied, which implies that we can apply the concentration result~\eqref{inequality:good-approximation-o_2} from Lemma~\ref{lemma:needed-centrality-result-for-m=2}. Because $n_2>\phi b$, it suffices to show:
\begin{align}
\phi b \geq \frac{k}{p^2} \log n. \label{epsilon-phi-b-condition}
\end{align}
Inequality \eqref{ineq:ADP-non-trivial-case} and
\begin{align}\bar\epsilon\leq \frac{1}{\sqrt{k}} \label{ineq:k'-condition-ADP}
\end{align}
which holds by the definition of the constants $k$ and $\bar\epsilon$ given in Lemma~\ref{lemma:needed-centrality-result-for-m=2}, imply
$$\sqrt{b} = \frac{b^{\frac{3}{2}}}{b} \geq \frac{b^{\frac{3}{2}}}{n} > \frac{1}{\bar\epsilon}\frac{\sqrt{\log n}}{(1-c)^2a^2p^{3/2}}\geq \sqrt{k}\frac{\sqrt{\log n}}{p\sqrt{1-c}}.$$
This, together with $\phi =\frac{1-c}{1-a} \geq 1-c$, proves \eqref{epsilon-phi-b-condition}. Thus, we can apply~\eqref{inequality:good-approximation-o_2}.
Further, note that \eqref{epsilon:ADP-small-Delta} implies that $n_2\leq b+ \frac{2\Delta}{\delta p} \leq 4b$. Finally, note that by definition $\xi\geq 0$ and $\xi' \geq 0$. Putting all these together, we have:
\begin{align*}
\tilde o_2' \geq \tilde o_2(\lambda) \geq o_2(\lambda) - \alpha \sqrt{n_2 \log n} \geq o_2(\lambda) - \alpha \sqrt{4b \log n} = o_2(\lambda) -2\Delta.
\end{align*}
This proves \eqref{ineq:o2'>o2(lambda)}. Having proved \eqref{ineq:u_1-and-tilde-u_1} and \eqref{ineq:o2'>o2(lambda)}, we complete the proof as follows:
\begin{align*}
c \leq & \frac{n_1' + a\left(\frac{1-c}{1-a} b + c \left(b - \tilde u_1' \right)^+ + \left[n_2' - \tilde o_2'\right]\right) }{n_1' + a \min \{b-n_1',n_2 '\}} &(\eqref{ineq:key-ADP-full}) \\
\leq & \frac{n_1 + a\left(\frac{1-c}{1-a} b + c \left[ \left(b - u_1(\lambda) \right)^+ + \frac{\Delta n}{\phi b p} \right] + \left[n_2 + \xi - o_2(\lambda)+2\Delta\right]\right) }{n_1 + a \min \{b-n_1,n_2 \}} &(\eqref{ineq:u_1-and-tilde-u_1}, \eqref{ineq:o2'>o2(lambda)}, n_2' = n_2+\xi \geq n_2) \\
\leq & \frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} + \frac{a \left( c\frac{\Delta n}{\phi b p} + \xi + 2\Delta \right)}{OPT(\vec{v})} &(\eqref{ineq:lower-bound-ratio-ADP})\\
\leq & \frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} + \frac{4a\Delta n}{a\phi^2 b^2 p} =\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} + \frac{4\Delta n}{\phi^2 b^2 p} &(n_2 > \phi b, \Delta \leq \frac{\Delta n}{\phi b p} , \xi \leq \frac{\Delta n}{\phi b p}, OPT \geq a \phi b).
\end{align*}
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{lemma:n1-small-ADP-full}:}}
Note that if we are not in case (a), i.e., $q_1(1)+q_2(1)<b$, then $q_1(1)=n_1$.
Now either $q_2(1) = n_2$, i.e., we are in case (b), or $q_2(1) < n_2$.
Therefore, what remains is to show that if $q_1(1)+q_2(1)<b$ and $q_2(1)<n_2$, then $q_2(1) \geq cb$, i.e., we are in case (c).
Let $\bar \lambda$ be the last time that a customer is rejected.
Then, similar to earlier discussion, Inequality~\eqref{ineq:big-bar-lambda} is satisfied.
Therefore,
\begin{align}
u_1(\bar\lambda) = &\min \left \{ \frac{o_1(\bar \lambda)}{\bar \lambda p}, \frac{ o_1(\bar \lambda) + (1-\bar \lambda) (1-p) n}{1-p+\bar \lambda p} \right\} &(\bar \lambda \geq \delta)\nonumber \\
\leq & \frac{o_1(\bar \lambda)}{\bar \lambda p} \nonumber \\
\leq & \frac{n_1 n}{\phi b p} & (\eqref{ineq:big-bar-lambda}\text{ and }o_1(\bar\lambda)\leq n_1) \nonumber \\
< & \frac{n \frac{k}{p^2} \log n }{\phi b p}=\frac{kn \log n}{\phi b p^3}. &(n_1< \frac{k}{p^2} \log n) \nonumber
\end{align}
As a result, we have
\begin{align*}
q_2(1) \geq \phi b + c(b-u_1(\bar\lambda))^+ > (\phi + c) b- c\frac{kn \log n}{\phi b p^3}.
\end{align*}
In order to complete the proof, it suffices to show that
\begin{align}
\label{ineq:lemma:14:1}
q_2(1) > (\phi + c) b- c\frac{kn \log n}{\phi b p^3} \geq cb.
\end{align}
The last inequality in \eqref{ineq:lemma:14:1} holds if $ b^2 \geq c \frac{kn \log n}{\phi ^2 p^3}$. Thus, in the following, we show $ b^2 \geq c \frac{kn \log n}{\phi ^2 p^3}$:
Using $\phi = \frac{1-c}{1-a} \geq 1-c$, Inequality \eqref{ineq:ADP-non-trivial-case}, and
\begin{align}\bar\epsilon\leq \frac{1}{\sqrt[4]{k}} \label{ineq:k'-condition-ADP<=k4},
\end{align}
which holds by the definitions of the constants $k$ and $\bar\epsilon$ given in Lemma~\ref{lemma:needed-centrality-result-for-m=2}, we have $$b^2 = \frac{\left( b^{\frac{3}{2}}\right)^2 }{b} \geq \frac{\left( b^{\frac{3}{2}}\right)^2 }{n} > \frac{1}{\bar\epsilon^4}\frac{n\log n}{(1-c)^4a^2p^3}\geq c \frac{kn \log n}{\phi ^2 p^3}.$$
This proves $ b^2 \geq c \frac{kn \log n}{\phi ^2 p^3}$, and thus $q_2(1) \geq cb$, which completes the proof of the lemma.
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:small-n1}:}}
We consider each case of Lemma~\ref{lemma:n1-small-ADP-full} separately.
For case (a), $q_1(1)+q_2(1)=b$: since $n_1 < \frac{k}{p^2} \log n$, it is easy to see that
\begin{align*}
\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} \geq \frac{ab}{ab+\frac{k}{p^2} \log n} \geq \frac{ab - \frac{k}{p^2} \log n}{ab} = 1- \frac{k \log n}{ab p^2},
\end{align*}
which is at least $c$ if $b \geq \frac{k \log n}{a(1-c) p^2}$.
Inequality \eqref{ineq:ADP-non-trivial-case} and \eqref{ineq:k'-condition-ADP} imply
$$\sqrt{b} = \frac{b^{\frac{3}{2}}}{b} \geq \frac{b^{\frac{3}{2}}}{n} > \frac{1}{\bar\epsilon}\frac{\sqrt{\log n}}{(1-c)^2a^2p^{3/2}}\geq \sqrt{k}\frac{\sqrt{\log n}}{p\sqrt{a(1-c)}},$$
and thus $b \geq \frac{k \log n}{a(1-c) p^2}$; therefore we have $\frac{ALG_{2,c}({\vec{v}})}{OPT(\vec{v})} \geq c$.
In cases (b) and (c), $q_2(1) \geq \min \{n_2, cb\}$; thus we have
$$ ALG_{2,c}({\vec{v}}) \geq n_1+c(\min \{n_2, b\})a \geq c (n_1+\min \{n_2, b\}a) \geq c OPT(\vec{v}).$$
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:check-error-terms}:}}
Both claims follow directly from the definitions.
\end{proof}
\begin{proof}{\textbf{Proof of Corollary~\ref{cor:cstar1}:}}
Theorem~\ref{thm:adaptive-threshold} with $c=1- \sqrt[3]{\frac{1}{ap^{3/2}}\sqrt{\frac{n^2 \log n}{b^3}}}$ proves the corollary.
\end{proof}
\section{The Adaptive Algorithm}
\label{sec:alg2}
In the design of Algorithm~\ref{algorithm:hybrid}, we used the observation that in the partially predictable model, the demand has a stochastic component that is uniformly spread over the entire horizon. This observation motivated us to define the evolving threshold rule. We remark that in Algorithm~\ref{algorithm:hybrid} neither the evolving threshold rule nor the fixed threshold rule adapts to the observed data, which makes Algorithm~\ref{algorithm:hybrid} a non-adaptive algorithm.
As noted in Remark~\ref{rem:alg1:1}, when the initial inventory $b$ is small compared to the horizon $n$, the competitive ratio of Algorithm~\ref{algorithm:hybrid}, $p + \frac{1-p}{2-a}$, is in fact the best possible, and it can be achieved with our non-adaptive algorithm. Therefore, in this regime, adapting to the data, i.e., setting thresholds based on the observed data, would not improve the performance.
More precisely, when $b = o(\sqrt{n})$, the inventory is so small compared to the time horizon that there may not be enough time to effectively adapt to the observed data. The adversary can mislead us into allocating all the inventory before we can observe a sufficient portion of the data.
However, as $b$ becomes larger, we have more of a chance to observe and adapt to the data before allocating a significant part of the inventory.
Indeed, in this section we design an adaptive algorithm that achieves a better competitive ratio for large enough $b$ (relative to $n$).
In Section~\ref{sec:alg2-alg}, we first present the ideas behind our adaptive algorithm along with its formal description.
Then, in Section~\ref{subsec:adapt:com}, we analyze the competitive ratio of our algorithm.
\subsection{The Algorithm}\label{sec:alg2-alg}
In this section, we describe our adaptive algorithm, denoted by $ALG_{2,c}$, which takes $c\in [0,1]$ as a parameter.
For a certain range of $c$, we show that $ALG_{2,c}$ attains a competitive ratio of $c$ (up to an error term); however, if $c$ becomes too large (for example, if $c=1$),
then $ALG_{2,c}$ no longer guarantees a $c$ fraction of the optimum offline solution.
We call this algorithm adaptive because it makes decisions based on the sequence of arrivals it has observed so far. In particular, this algorithm repeatedly computes upper bounds on the total number of type-$1$/-$2$ customers based on the observed data and uses these upper bounds to decide whether or not to accept an arriving type-$2$ customer.
Before proceeding with the algorithm, we first introduce two functions $u_1(\lambda)$ and $u_{1,2}(\lambda)$ that will prove useful in constructing the aforementioned upper bounds.
In particular we define:
\begin{align*}
u_1 (\lambda) & \triangleq \begin{cases}
b &\text{ if }\lambda<\delta \text{ (not enough data observed)}.\\
\min \left\{ \frac{o_1(\lambda)}{\lambda p}, \frac{o_1(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\}& \text{ if }\lambda \geq \delta.
\end{cases} \\
u_{1,2} (\lambda) & \triangleq \begin{cases}
b &\text{ if }\lambda<\delta \text{ (not enough data observed)}.\\
\min \left\{ \frac{o_1(\lambda)+o_2(\lambda)}{\lambda p}, \frac{o_1(\lambda) +o_2(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\}& \text{ if }\lambda \geq \delta,
\end{cases}
\end{align*}
where $\delta \triangleq \frac{(1-c)b}{(1-a)n}$. Note that $u_1(\lambda)$ and $u_{1,2}(\lambda)$ are functions of the observed data $o_1(\lambda)$ and $o_2(\lambda)$.
In the following lemma, we show how $u_1(\lambda)$ and $u_{1,2}(\lambda)$ provide upper bounds on $n_1$ and $n_1+n_2$
when the realized sequence $\vec{v}$ belongs to the event $\mathcal{E}$ and
the number of type-$1$ customers, as well as the initial inventory $b$, is large enough (as specified in the lemma's statement).
Recall that we defined $\Delta$ to be $\alpha \sqrt{b \log n}$, where the constant $\alpha$ is defined in Lemma~\ref{lemma:needed-centrality-result-for-m=2}.
\begin{lemma}
\label{prop:full-Ubounds}
Under event $\mathcal{E}$, suppose $n_1\geq \frac{k}{p^2} \log n$ and $b > \left({\frac{1}{\bar\epsilon} \frac{n\sqrt{\log n}}{(1-c)^2ap^{3/2}}}\right)^\frac{2}{3}$, where the
constants $k$ and $\bar\epsilon$ are defined in Lemma~\ref{lemma:needed-centrality-result-for-m=2}.
Then for all $\lambda \in \{1/n, 2/n, \dots, 1\}$,
\begin{subequations}
\begin{align} & u_1(\lambda) \geq \min \left\{ b, n_1-\frac{2\Delta}{\delta p}\right\} \text{, and }\label{eq:full-u_1}
\\ & u_{1,2}(\lambda) \geq \min \left\{ b, n_1 + n_2 -\frac{2\Delta}{\delta p}\right\}. \label{eq:full-u_2}
\end{align}
\end{subequations}
\end{lemma}
\noindent{Lemma~\ref{prop:full-Ubounds} is proven in Appendix~\ref{sec:proof-value-MP1}.}
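To make the estimators concrete, the following Python sketch computes $u_1(\lambda)$ and $u_{1,2}(\lambda)$ from the observed counts (an illustrative sketch only; the function name and arguments are ours, and the observed counts $o_1(\lambda)$, $o_2(\lambda)$ are assumed to be given):

```python
def u_bound(o, lam, n, p, b, delta):
    # Generic estimator: with o = o1 it returns u_1(lam); with o = o1 + o2
    # it returns u_{1,2}(lam).  Before time delta we output the trivial
    # bound b, since not enough data has been observed.
    if lam < delta:
        return b
    return min(o / (lam * p),
               (o + (1 - lam) * (1 - p) * n) / (1 - p + lam * p))
```

For instance, with $n=100$, $p=0.5$, $b=40$, $\delta=0.1$, and $o_1(0.5)=10$, the first term $o_1(\lambda)/(\lambda p)=40$ is the binding one; note also that the estimator is non-decreasing in the observed count, as used in the proof above.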
Having defined $u_1(\lambda)$ and $u_{1,2}(\lambda)$, we now describe how the adaptive algorithm determines whether to accept an arriving type-$2$ customer when there is remaining inventory.
In the following, $q_j(\lambda)$, $j=1,2$, represents the number of type-$j$ customers accepted by the algorithm up to time $\lambda$ (for a particular realization $\vec{v}$).
Suppose the customer arriving at time $\lambda$ is of type-$2$. If $u_{1,2}(\lambda) < b$, then we accept the customer, because \eqref{eq:full-u_2} implies that the total number of type-$1$ and type-$2$ customers will not exceed $b$ (neglecting the error term), and thus we will have extra inventory at the end.
On the other hand, if $u_{1,2}(\lambda) \geq b$, we may want to reject this customer to reserve inventory for a future type-$1$ customer.
The decision of whether to accept the customer is based on the following two observations:
\begin{observation} If $u_1(\lambda) \geq n_1$, then \label{obs:upper-bound-OPT}
$$OPT(\vec{v}) \leq \min\{n_1, b\} + a (b-n_1)^+ = (1-a) \min\{n_1, b\}+ab \leq \min \{ u_1(\lambda) , b\}(1-a)+ab.$$
\end{observation}
\begin{observation} \label{observation:maximum-revenue}
If we accept the current type-$2$ customer, then the maximum revenue we can get is
$\left( b-\left( {q}_2\left(\lambda -1/n \right)+1\right)\right) + a \left( {q}_2\left(\lambda -1/n \right) +1 \right).$
\end{observation}
To have a competitive ratio of at least $c$, Observations~\ref{obs:upper-bound-OPT} and~\ref{observation:maximum-revenue} motivate us to accept the type-$2$ customer only if \begin{align} \frac{\left( b-\left( {q}_2\left(\lambda -1/n\right)+1\right)\right) + a \left( {q}_2\left(\lambda - 1/n \right) +1 \right) } {\min \{ u_1(\lambda ) , b\}(1-a)+ab}\geq c. \label{inequality:cond-accept-class-2}
\end{align}
After rearranging terms, we get the following threshold for accepting the type-$2$ customer:
\begin{align} \label{eq:Ccomp}
q_2(\lambda - 1/n ) +1 \leq \frac{1-c}{1-a} b + c \left(b - u_1(\lambda) \right)^+.
\end{align}
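For completeness, the rearrangement can be spelled out as follows (using $\min\{u_1(\lambda),b\} = b - \left(b-u_1(\lambda)\right)^+$ and writing $q_2$ for $q_2(\lambda-1/n)$):
\begin{align*}
\big(b-(q_2+1)\big) + a(q_2+1) \geq c\big[\min\{u_1(\lambda),b\}(1-a)+ab\big]
&\iff (1-a)(q_2+1) \leq b-cab-c(1-a)\min\{u_1(\lambda),b\}\\
&\iff q_2+1 \leq \frac{b(1-ca)}{1-a}-c\min\{u_1(\lambda),b\}\\
&\iff q_2+1 \leq \frac{1-c}{1-a} b + c \left(b - u_1(\lambda) \right)^+.
\end{align*}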
Thus, when $u_{1,2}(\lambda) \geq b$, we use Condition~\eqref{eq:Ccomp} to accept/reject a type-$2$ customer.
For notational convenience, we define $\phi \triangleq \frac{1-c}{1-a}$.
We point out that the right-hand side of~\eqref{eq:Ccomp} may not be an integer; thus, in our algorithm, we use a slightly modified version of it, defined as follows:
\begin{align} \label{eq:Ccomp2}
q_2(\lambda - 1/n ) \leq \lfloor \frac{1-c}{1-a} b + c \left(b - u_1(\lambda) \right)^+ \rfloor.
\end{align}
Note that, by the definition of the threshold given in \eqref{eq:Ccomp2}, we always accept the first $\lfloor \phi b \rfloor$ type-$2$ customers.
The formal definition of our algorithm is presented in Algorithm~\ref{algorithm:adaptive-threshold}.
In Algorithm~\ref{algorithm:adaptive-threshold}, $q_j$ represents the counter for the number of accepted type-$j$ customers so far.
\begin{algorithm}[H]\begin{enumerate}
\item Initialize $q_1, q_2 \leftarrow 0$, and define $\phi \triangleq \frac{1-c}{1-a}$, and $\delta \triangleq \frac{\phi b}{n}$.
\item Repeat for time $\lambda = 1/n, 2/n, \dots, 1$:
\begin{enumerate}
\item Calculate functions $u_1 (\lambda)$ and $u_{1,2} (\lambda)$ (to construct upper bounds for $n_1$ and $n_1+n_2$):
\begin{align*}
u_1 (\lambda) & \triangleq \begin{cases}
b &\text{ if }\lambda<\delta \text{ (not enough data observed)}.\\
\min \left\{ \frac{o_1(\lambda)}{\lambda p}, \frac{o_1(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\}& \text{ if }\lambda \geq \delta.
\end{cases} \\
u_{1,2} (\lambda) & \triangleq \begin{cases}
b &\text{ if }\lambda<\delta \text{ (not enough data observed)}.\\
\min \left\{ \frac{o_1(\lambda)+o_2(\lambda)}{\lambda p}, \frac{o_1(\lambda) +o_2(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\}& \text{ if }\lambda \geq \delta .
\end{cases}
\end{align*}
\item Accept customer $i=\lambda n$ arriving at time $\lambda$ if there is remaining inventory and one of the following conditions holds:
\begin{enumerate}
\item $v_i = 1$; update $q_1\leftarrow q_1+1$.
\item $v_i = a$ and $u_{1,2}(\lambda) < b$; update $q_2 \leftarrow q_2 +1$.
\item $v_i = a$ and $q_2 \leq \lfloor \phi b + c \left(b - u_1(\lambda) \right)^+ \rfloor$; update $q_2 \leftarrow q_2 +1$.
\end{enumerate}
We prioritize the second condition if both the second and the third ones hold.
\end{enumerate}
\end{enumerate}
\caption{Online Adaptive Algorithm ($ALG_{2,c}$)}\label{algorithm:adaptive-threshold}
\end{algorithm}
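As a concrete illustration, the accept/reject loop of Algorithm~\ref{algorithm:adaptive-threshold} can be sketched in Python (a minimal sketch of ours, not an implementation used in the paper; the arrival sequence is passed as a list of customer values in $\{1, a\}$, and helper and variable names are ours):

```python
import math

def u_bound(o, lam, n, p, b, delta):
    # u_1(lam) when o = o1; u_{1,2}(lam) when o = o1 + o2.
    if lam < delta:
        return b
    return min(o / (lam * p),
               (o + (1 - lam) * (1 - p) * n) / (1 - p + lam * p))

def alg2(values, a, b, c, p):
    """Run ALG_{2,c}; values[i] is 1.0 (type-1) or a (type-2). Returns revenue."""
    n = len(values)
    phi = (1 - c) / (1 - a)
    delta = phi * b / n
    q1 = q2 = o1 = o2 = 0
    for i, v in enumerate(values):
        lam = (i + 1) / n
        if v == 1.0:
            o1 += 1          # observed type-1 arrivals up to time lam
        else:
            o2 += 1          # observed type-2 arrivals up to time lam
        if q1 + q2 >= b:
            continue         # inventory exhausted
        if v == 1.0:
            q1 += 1          # rule (i): always accept a type-1 customer
        elif u_bound(o1 + o2, lam, n, p, b, delta) < b:
            q2 += 1          # rule (ii): everyone fits, so accept
        elif q2 <= math.floor(phi * b
                              + c * max(b - u_bound(o1, lam, n, p, b, delta), 0)):
            q2 += 1          # rule (iii): still below the adaptive threshold
    return q1 + a * q2
```

Rule (ii) is checked before rule (iii), matching the stated priority. For example, on an all-type-1 sequence the sketch accepts the first $b$ customers, and on an all-type-2 sequence it accepts up to the threshold or the inventory, whichever binds first.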
Before we analyze the algorithm, we highlight two key properties of the threshold $\lfloor \phi b + c \left(b - u_1(\lambda) \right)^+ \rfloor$: (i) The threshold is decreasing in $u_1(\lambda)$; the smaller $u_1(\lambda)$ is, the less inventory we reserve for future type-$1$ customers. (ii) The threshold is decreasing in $c$ as well (the right-hand side of \eqref{eq:Ccomp2} can be expressed as $\lfloor \frac{1}{1-a} b -c (\frac{b}{1-a} - \left(b - u_1(\lambda) \right)^+)\rfloor$). When $c$ is too large, we may reject too many type-$2$ customers, which in turn hurts the revenue in a certain class of instances.
Said another way, Inequality~\eqref{eq:Ccomp2} only gives a ``necessary'' condition for achieving $c$-competitiveness.
We identify the sufficient condition for $c$-competitiveness by solving the {\em factor-revealing} mathematical program presented in (\ref{MP1}). We will explain the construction of this program in the analysis of the competitive ratio (in Section~\ref{subsec:adapt:com}). At a high level, we construct the feasible region such that it contains any valid instance that can violate $c$-competitiveness; by minimizing over $c$, we find the smallest value of $c$ for which the feasible region is nonempty.
\bigskip
\begin{mdframed}
\begin{subequations}
\begin{equation}\underset{(\lambda , n_1, n_2, \eta_1, \eta_2, c)}{\text{Minimize}} c \tag{MP1} \label{MP1}
\end{equation}
subject to
\begin{equation}
c \geq \frac{a(n_2-\tilde o_2+\frac{b}{1-a})+n_1}{a\min\{n_1+n_2, b\}+(1-a)n_1+\frac{a^2b}{1-a}+a \min\{\tilde u_1, b\}}\label{constraint:not-c-competitive}
\end{equation}
\begin{equation}
\tilde u_{1,2} \geq b \label{constraint:u_2>=b}
\end{equation}
\vspace{-20pt}
\\
\vspace{-35pt}
\noindent\begin{tabularx}{\textwidth}{@{}>{\hsize=1\hsize}X>{\hsize=1.2\hsize}X>{\hsize=1\hsize}X>{\hsize=1\hsize}X@{}}
\begin{equation}
\lambda \leq 1 \label{constraint:x<=1} \end{equation} &
\begin{equation}
\eta_1+\eta_2 \leq \lambda n \label{constraint:eta_1+eta_2<=lambdan} \end{equation} &
\begin{equation}
\eta_1 \leq n_1 \label{constraint:n1'<=n1} \end{equation} &
\begin{equation}
\eta_2 \leq n_2 \label{constraint:n2'<=n2} \end{equation}
\end{tabularx}
\vspace{-20pt}
\noindent\begin{tabularx}{\textwidth}{>{\hsize=0.7\hsize}X>{\hsize=1\hsize}X>{\hsize=1.5\hsize}X}
\begin{equation}
n_1 \leq b \label{constraint:n1<=b} \end{equation} &
\begin{equation}
n_1+n_2 \leq n \label{constraint:n1+n2<=n}
\end{equation} &
\begin{equation}
n_1+n_2 \leq \eta_1+\eta_2+(1-\lambda)n\label{constraint:n1'+n2'big} \end{equation}
\end{tabularx}
\end{subequations}
where $\tilde o_1 \triangleq (1-p)\eta_1+p n_1 \lambda $, $\tilde o_2 \triangleq (1-p)\eta_2+p n_2 \lambda$, $\tilde u_1 \triangleq \min \left\{ \frac{\tilde o_1}{\lambda p }, \frac{\tilde o_1 + (1-\lambda ) (1-p)n}{(1-p+\lambda p)} \right\} $, and $\tilde u_{1,2} \triangleq \min \left\{ \frac{\tilde o_1+\tilde o_2}{\lambda p }, \frac{\tilde o_1+\tilde o_2 + (1-\lambda ) (1-p)n}{(1-p+\lambda p)} \right\} $.
\end{mdframed}
\bigskip
Before we analyze Algorithm~\ref{algorithm:adaptive-threshold}, we also evaluate the solution of (\ref{MP1}).
Denote the optimal objective value of (\ref{MP1}) by $c^*$. As will be stated in Theorem~\ref{thm:adaptive-threshold}, $ALG_{2,{c^*}}$ achieves a competitive ratio of $c^*$ (minus an error term).
First, we solve (\ref{MP1}) numerically for the regime where $b= \kappa n$ (where $0< \kappa \leq 1$ is a constant), and show that if $b/n > 0.5$, then Algorithm~\ref{algorithm:adaptive-threshold} achieves a better competitive ratio than Algorithm~\ref{algorithm:hybrid}.
\begin{figure}[h]
\centering
\noindent\begin{tabularx}{\textwidth}{@{}>{\hsize=1\hsize}X>{\hsize=1\hsize}X@{}}
\psscalebox{0.5}{ \centering
\input{ratio50}
}
&\psscalebox{0.5}{ \centering
\input{ratio70}
}
\\ \centering
$a=0.50$
& \centering
$a=0.70$
\end{tabularx}
\caption{Solution of (\ref{MP1}), $c^*$, vs.\ $p$ for $a=0.50$ and $0.70$}
\label{fig:adaptive_threshold_vs_hybrid}
\end{figure}
In Figure~\ref{fig:adaptive_threshold_vs_hybrid}, we fix $a=0.5,0.7$, and plot $c^*$ for $p = 0.05, 0.1,\dots, 0.95$ for three cases of $b/n=0.9$, $0.7$, and $0.5$.
Figure~\ref{fig:adaptive_threshold_vs_hybrid} leads us to the following observation:
the competitive ratio of $ALG_{2,{c^*}}$ is at least that of $ALG_{1}$, and it is significantly larger when (i) $p$ is small and (ii) $b/n$ is large.
This observation highlights the power of adapting to the data, even though it contains an adversarial component: consider $a=0.7$, $b = 0.7 n$, and $p = 0.2$; this means that $80 \%$ of the demand belongs to the adversarial group. Our adaptive algorithm guarantees $10 \%$ more revenue than the non-adaptive algorithm does.
In addition, we note that as the initial inventory $b$ becomes larger (for a fixed time horizon $n$), the adversary's power naturally declines.
Thus, one would expect a ``smart'' algorithm to achieve a higher competitive ratio.
Our adaptive algorithm indeed attains a higher competitive ratio as the initial inventory increases. In contrast, the competitive ratio of our non-adaptive algorithm remains the same.
We conclude our study of (\ref{MP1}) by establishing a lower bound on its optimal solution. The following proposition states that $c^*$ is at least $p+\frac{1-p}{2-a}$, which is the competitive ratio of Algorithm~\ref{algorithm:hybrid} (ignoring the error term).
\begin{proposition}
\label{prop:values-math-program}
For any $b\leq n$, we have: $c^* \geq p+\frac{1-p}{2-a}$. Further, if $b=n$, then $c^* =1$.
\end{proposition}
\section{Proof of Lemma~\ref{lemma:needed-centrality-result-for-m=2}}\label{sec:proof-of-lemma-m=2}
\noindent The proof of Lemma~\ref{lemma:needed-centrality-result-for-m=2} is based on the following lemma:
\begin{lemma}\label{lemma:low-tail-o1}
Define constants $\alpha_{\ref{lemma:low-tail-o1}} \triangleq 5 + \sqrt{6}$, $\bar\epsilon_{\ref{lemma:low-tail-o1}} \triangleq 1/24$, and $k_{\ref{lemma:low-tail-o1}} \triangleq 4$.
If $\epsilon' \in (0, \bar\epsilon_{\ref{lemma:low-tail-o1}}]$ and $n_1 > \frac{k_{\ref{lemma:low-tail-o1}}}{p^2} \log \left( \frac{1}{\epsilon' } \right)$, then for any
$\lambda\in \{ 1/n, 2/n, \dots , n/n\}$, we have:
$$
\prob{\left| O_1(\lambda)-\tilde o_1 (\lambda) \right| \geq \alpha_{\ref{lemma:low-tail-o1}} \sqrt{n_1 \log \left( \frac{1}{\epsilon'}\right)}} \leq \epsilon'.
$$
\end{lemma}
To prove Lemma~\ref{lemma:low-tail-o1}, we use two existing concentration bounds for random variables obtained from sampling with and without replacement.
Before proceeding to the proof, we state these concentration bounds. Using them, Lemma~\ref{lemma:low-tail-o1} is proven through
a series of auxiliary corollaries (of the concentration results) and lemmas whose proofs are deferred to Section~\ref{subsec:aux}.
First, we use a well-known variant of the classical Chernoff bound (\citet{chernoff1952measure}) regarding the concentration of binomial random variables, as given in \cite{mcdiarmid1998concentration}:
\begin{theorem}[\cite{mcdiarmid1998concentration}]\label{thm:binomial-bound}
Let $0<p<1$, let $X_1, X_2, \dots, X_n$ be independent binary random variables, with $\prob{X_k=1}=p$ and $\prob{X_k=0}=1-p$ for each $k$, and let $S_n=\sum_{k=1}^n X_k$.
Then for any $t\geq 0$,
$$ \prob{|S_n-np| \geq nt} \leq 2e^{-2nt^2}.$$
\end{theorem}
When applying this theorem in our proof, we find it more insightful and convenient to use the following form of the above concentration result:
\begin{corollary} \label{coro:binomial}
For any $k \geq 0$, define constants $\alpha_{\ref{coro:binomial},k} \triangleq 1$ and $\bar\epsilon_{\ref{coro:binomial},k} \triangleq k/2$.
For $\epsilon \in (0, \bar\epsilon_{\ref{coro:binomial},k})$, under the same setting as in Theorem~\ref{thm:binomial-bound}, we have:
$$ \prob{ |S_n-np| \geq \alpha_{\ref{coro:binomial},k} \sqrt{n \log \left( \frac{1}{\epsilon} \right) } } \leq k \epsilon. $$
\end{corollary}
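As a quick numerical sanity check of Theorem~\ref{thm:binomial-bound} (an illustration with hypothetical parameters, not part of the proof), one can compare a seeded Monte-Carlo estimate of the tail probability with the bound $2e^{-2nt^2}$:

```python
import math
import random

def chernoff_check(n, p, t, trials, seed=0):
    # Estimate P(|S_n - n p| >= n t) by simulation and return it together
    # with the Chernoff bound 2 * exp(-2 n t^2).
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        s = sum(rng.random() < p for _ in range(n))  # S_n ~ Binomial(n, p)
        if abs(s - n * p) >= n * t:
            exceed += 1
    return exceed / trials, 2 * math.exp(-2 * n * t * t)
```

For example, with $n=200$, $p=0.5$, $t=0.1$ the bound is $2e^{-4}\approx 0.037$, comfortably above the empirical tail frequency.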
Second, we use a concentration result, given by \citet{hush2005concentration}, for random variables drawn from the hypergeometric distribution. Recall that the hypergeometric distribution is the analogue of the binomial distribution when sampling is performed without replacement. It is defined precisely in the following theorem.
\begin{theorem}[\cite{hush2005concentration}]\label{thm:hypergeometric-bound}
Let $K \sim \text{Hyper}(n_1, n, m)$ denote the hypergeometric random variable describing the process of counting how many defectives are selected when $n_1$ items are randomly selected without replacement from a population of $n$ items of which $m$ are defective. Let $\gamma \geq 2$.
Then,
$$\prob{K-\E{K}>\gamma } < e^{-2\alpha_{n_1,n,m}(\gamma^2-1)}$$
and
$$\prob{K-\E{K}<-\gamma } < e^{-2\alpha_{n_1,n,m}(\gamma^2-1)},$$
where
$$\alpha_{n_1,n,m}=\max \left\{\frac{1}{n_1+1}+\frac{1}{n-n_1+1}, \frac{1}{m+1}+\frac{1}{n-m+1} \right\}. $$
\end{theorem}
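The upper-tail bound of this theorem can likewise be checked by simulating sampling without replacement; the parameters below are illustrative assumptions, not values from the paper.

```python
import math
import random

# Monte Carlo sanity check of the upper-tail bound above for
# K ~ Hyper(n1, n, m): P(K - E[K] > gamma) < exp(-2 alpha (gamma^2 - 1)).
random.seed(1)
n, n1, m = 100, 30, 40        # population, sample size, number of defectives
gamma = 4.0                   # the theorem requires gamma >= 2
trials = 20000

mean = n1 * m / n             # E[K] = n1 * m / n
alpha = max(1 / (n1 + 1) + 1 / (n - n1 + 1),
            1 / (m + 1) + 1 / (n - m + 1))
bound = math.exp(-2 * alpha * (gamma ** 2 - 1))

population = [1] * m + [0] * (n - m)  # 1 marks a defective item
exceed = sum(
    1 for _ in range(trials)
    if sum(random.sample(population, n1)) - mean > gamma
)
empirical = exceed / trials
assert empirical < bound
```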
As with the binomial case, we find it easier to use the following form of the above concentration result:
\begin{corollary} \label{coro:hyper}
\vmn{For any $k \geq 0$, define constants $\alpha_{\ref{coro:hyper},k} \triangleq 2$, $\bar\epsilon_{\ref{coro:hyper},k} \triangleq k/2$, and $\underline m_{\ref{coro:hyper},k} \triangleq \max \left\{ \left( \log \frac{1}{\bar \epsilon_{\ref{coro:hyper},k}}\right)^{-1} , 1\right\}$.
For $\epsilon \in (0, \bar\epsilon_{\ref{coro:hyper},k})$ and $m \geq \underline m_{\ref{coro:hyper},k}$, under the same setting as in Theorem~\ref{thm:hypergeometric-bound}, we have:}
$$ \prob{ |K - \E{K}| \geq \alpha_{\ref{coro:hyper},k} \sqrt{m \log \left( \frac{1}{\epsilon} \right) } } \leq k \epsilon. $$
\end{corollary}
\vmn{\textbf{Proof Sketch of Lemma~\ref{lemma:low-tail-o1}:}}
Before proceeding to the proof of Lemma~\ref{lemma:low-tail-o1}, we explain the idea of the proof by going back to the example of Figure~\ref{figure:example} from Section~\ref{sec:prem}.
Let us consider $\lambda = 5/8$.
In the following, we count the number of customers in the {{\st{predictable}\vmn{stochastic}}} group and the {\st{unpredictable}\vmn{adversarial}} group in $O_1(\lambda)$ separately.
We begin by counting the number of \st{class}\vmn{type}-$1$ customers in the {{\st{predictable}\vmn{stochastic}}} group that arrive no later than time $5/8$ in $\vec{V}$ in Figure~\ref{figure:example}.
Among the five customers arriving by time $5/8$, two of them are in the {{\st{predictable}\vmn{stochastic}}} group: customers at positions $2$ and $5$.
We aim to count the number of \st{class}\vmn{type}-$1$ customers in these two positions.
There are a total of four customers in the {{\st{predictable}\vmn{stochastic}}} group (there are four black nodes in the middle row).
Note that only one of them is \st{class}\vmn{type}-$1$ ($\vmn{v_{I,5}} = 1$).
Now we take two samples without replacement from the four customers to fill the two positions ($2$ and $5$).
Thus, given the realization of the {{\st{predictable}\vmn{stochastic}}} group (the middle row), the number of \st{class}\vmn{type}-$1$ customers in these two positions follows a hypergeometric distribution with parameters $(2,4,1)$ (which, as defined in Theorem~\ref{thm:hypergeometric-bound}, corresponds to taking two samples without replacement from four customers among which one is \st{class}\vmn{type}-$1$).
In the particular realization of Figure~\ref{figure:example}, the \st{class}\vmn{type}-$1$ customer in the {{\st{predictable}\vmn{stochastic}}} group is placed in position $2$.
Now we count the number of \st{class}\vmn{type}-$1$ customers in the {\st{unpredictable}\vmn{adversarial}} group that arrive no later than time $5/8$ in $\vec{V}$ in Figure~\ref{figure:example}.
In the adversarial sequence $\vmn{\vec{v}_I}$, there are three \st{class}\vmn{type}-$1$ customers (at positions $1$, $3$, and $5$).
Each of these three customers belongs to the {\st{unpredictable}\vmn{adversarial}} group independently of the others with probability $(1-p)$, and hence the number of \st{class}\vmn{type}-$1$ customers in the {\st{unpredictable}\vmn{adversarial}} group that arrive no later than time $5/8$ in $\vec{V}$ follows the binomial distribution $\text{Bin}(3, 1-p)$.
In the particular realization of Figure~\ref{figure:example}, among the three \st{class}\vmn{type}-$1$ customers arriving no later than time $5/8$ in $\vmn{\vec{v}_I}$, two of them are in the {\st{unpredictable}\vmn{adversarial}} group: the customers at positions $1$ and $3$.
Therefore, the number of \st{class}\vmn{type}-$1$ customers in the {\st{unpredictable}\vmn{adversarial}} group that arrive no later than time $5/8$ in the particular realization $\vec{v}$ is two.
In the proof of Lemma~\ref{lemma:low-tail-o1}, we use the method described in the above example to count the number of customers in $O_1(\lambda)$.
For counting the number of customers in the {{\st{predictable}\vmn{stochastic}}} group in $O_1(\lambda)$:
(i) First we count the number of positions before time $\lambda $ that belong to the {{\st{predictable}\vmn{stochastic}}} group.
Call this number $Z$.
(ii) Next we count the number of \st{class}\vmn{type}-$1$ customers in the {{\st{predictable}\vmn{stochastic}}} group. Call the total number of customers in the {{\st{predictable}\vmn{stochastic}}} group $R$ and the number of \st{class}\vmn{type}-$1$ customers in the {{\st{predictable}\vmn{stochastic}}} group $R_1$.
(iii) We compute the number of \st{class}\vmn{type}-$1$ customers in the {{\st{predictable}\vmn{stochastic}}} group that fill one of these $Z$ positions.
Call this number $Z_1$.
As mentioned above, this is equivalent to taking $Z$ samples without replacement from $R$ customers among which $R_1$ are \st{class}\vmn{type}-$1$.
The number $Z_1$ is the number of customers in the {{\st{predictable}\vmn{stochastic}}} group in $O_1(\lambda)$.
Counting the customers in the {\st{unpredictable}\vmn{adversarial}} group in $O_1(\lambda)$ is relatively simple. Call this number $\zeta_1$.
Finally, we obtain $O_1(\lambda)$ with the equation $O_1(\lambda) = Z_1+\zeta_1$.
In summary, the random variables have the following distributions:
\begin{itemize}
\item $R \sim \text{Bin}(n, p)$,
\item $R_1 \sim \text{Bin}(n_1, p)$,
\item $Z \sim \text{Bin}(\lambda n, p)$,
\item $Z_1 \sim \text{Hyper}(Z, R, R_1)$\footnote{Note that $Z$, $R$, and $R_1$ are not necessarily independent.},
\item $\zeta_1 \sim \text{Bin}(\eta_1(\lambda), 1-p)$.
\end{itemize}
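The hypergeometric step in this decomposition can be illustrated by simulating the worked example above ($Z=2$ positions filled from $R=4$ stochastic customers, of which $R_1=1$ is type-$1$); conditionally on these values, $Z_1 \sim \text{Hyper}(2,4,1)$ with mean $ZR_1/R = 1/2$. The trial count is an arbitrary choice.

```python
import random

# Simulating the hypergeometric step of the example above: Z = 2 positions
# are filled by sampling without replacement from the R = 4 customers of
# the stochastic group, of which R1 = 1 is type-1, so Z1 ~ Hyper(2, 4, 1)
# and E[Z1 | R, R1, Z] = Z * R1 / R = 1/2.
random.seed(2)
Z, R, R1 = 2, 4, 1
trials = 40000

group = [1] * R1 + [0] * (R - R1)  # 1 marks a type-1 customer
mean_Z1 = sum(sum(random.sample(group, Z)) for _ in range(trials)) / trials

assert abs(mean_Z1 - Z * R1 / R) < 0.02  # empirical mean close to 1/2
```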
The proof of Lemma~\ref{lemma:low-tail-o1} includes \vmn{establishing concentration results for $Z_1$ and $\zeta_1$ through a series of auxiliary lemmas; Lemmas~\ref{claim:BinomialBounds},~\ref{claim:expectationofZ1}, and~\ref{claim:concentraionZ1} are concerned with the former random variable, and Lemma~\ref{claim:adversary} is concerned with the latter one. The proofs of these lemmas are deferred to Section~\ref{subsec:aux}.}
The first lemma focuses on analyzing $R$, $R_1$, and $Z$.
In particular, we use Corollary~\ref{coro:binomial} along with the union bound to show the following:
\begin{lemma}
\label{claim:BinomialBounds}
\vmn{Define constants $\bar\epsilon_{\ref{claim:BinomialBounds}} \triangleq 1/24$ and $\alpha_{\ref{claim:BinomialBounds}} \triangleq 1$. For $ \epsilon \in (0, \bar\epsilon_{\ref{claim:BinomialBounds}}]$,}
with probability at least $1 - \epsilon/4$, all the following three events happen:
\begin{subequations}
\begin{align}
R & \in \left( np - \alpha_{\ref{claim:BinomialBounds}} \sqrt{ n \log \left( \frac{1}{\epsilon} \right)}, np + \alpha_{\ref{claim:BinomialBounds}} \sqrt{ n \log \left( \frac{1}{\epsilon} \right)} \right)\label{event:R},\\
R_1& \in \left( n_1p - \alpha_{\ref{claim:BinomialBounds}} \sqrt{ n_1 \log \left( \frac{1}{\epsilon} \right)}, n_1p + \alpha_{\ref{claim:BinomialBounds}} \sqrt{ n_1 \log \left( \frac{1}{\epsilon} \right)} \right)\text{, and }\label{event:R1+bar-R1}\\
Z & \in \left(\lambda np - \alpha_{\ref{claim:BinomialBounds}} \sqrt{ \lambda n \log \left( \frac{1}{\epsilon} \right)}, \lambda np + \alpha_{\ref{claim:BinomialBounds}} \sqrt{ \lambda n \log \left( \frac{1}{\epsilon} \right)} \right). \label{event:R1+R-1}
\end{align}
\end{subequations}
\end{lemma}
\noindent We note that conditioned on $R$, $R_1$, and $Z$, the expected value of $Z_1$ is $\frac{ZR_1}{R}$. Thus we have:
\begin{align}
\E{Z_1} = \E{\E{Z_1|R,R_1,Z}} = \E{\frac{ZR_1}{R}}. \label{def:EZ1}
\end{align}
The last expectation is a non-linear function of the three random variables $R$, $R_1$, and $Z$. Instead of computing the expectation directly, we
use the concentration bounds of \eqref{event:R} - \eqref{event:R1+R-1} to show the following lemma:
\begin{lemma}
\label{claim:expectationofZ1}
\vmn{Define constants $\alpha_{\ref{claim:expectationofZ1}} \triangleq 4$, $k_{\ref{claim:expectationofZ1}} \triangleq 4$. }
Conditioned on the events \eqref{event:R}- \eqref{event:R1+R-1},
and when $n_1 > \frac{k_{\ref{claim:expectationofZ1}}}{p^2}\log \left(\frac{1}{\epsilon}\right)$, we have:
\begin{align}
\frac{Z R_1 }{R} \in \left( \lambda pn_1 - \alpha_{\ref{claim:expectationofZ1}} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right)}, \lambda pn_1 + \alpha_{\ref{claim:expectationofZ1}} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right) }\right).\label{inequality:fractional-thing-to-prove}
\end{align}
\end{lemma}
\noindent Lemmas~\ref{claim:BinomialBounds} and~\ref{claim:expectationofZ1} together imply that, \vmn{for $ \epsilon \in (0, \bar\epsilon_{\ref{claim:BinomialBounds}}]$}:
\begin{align}
\prob{ \left| \frac{R_1 Z}{R} - pn_1 \lambda\right| \geq \alpha_{\ref{claim:expectationofZ1}} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right)} } \leq \frac{\epsilon}{4}.
\label{ineq:expectationError}
\end{align}
Having \eqref{def:EZ1} and Lemma~\ref{claim:expectationofZ1}, we are ready to establish a concentration result for $Z_1$.
We partition the sample space of ($R, R_1, Z$) into two events as follows: the event where \eqref{event:R}-\eqref{event:R1+R-1} hold, denoted by $\mathcal{E}$; the complement event, denoted by $\mathcal{E}^c$.\footnote{We note that this event is only locally defined within this appendix, and it is not the same as the one defined in Definition~\ref{def:event}.}
Note that Lemma~\ref{claim:BinomialBounds} implies that $\prob{\mathcal{E}^c} \leq \frac{\epsilon}{4}$. Using the law of total probability, we have: for any $\tilde{ \alpha} > 0$,
\begin{align}
& \prob{ \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| \geq \tilde{\alpha} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right)} }
\nonumber \\ & = \prob{\mathcal{E}^c}\prob{ \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| \geq \tilde{\alpha} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right)} \Big|~\mathcal{E}^c}
\nonumber \\ & + \prob{\mathcal{E}} \prob{ \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| \geq \tilde{\alpha} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right)} \Big|~\mathcal{E}}
\nonumber \\ & \leq \frac{\epsilon}{4} \cdot 1 +1\cdot \prob{ \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| \geq \tilde{\alpha} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right)} \Big|~\mathcal{E}}.
\label{ineq:totalProb}
\end{align}
\noindent Using Corollary~\ref{coro:hyper} and the definition of events \eqref{event:R}-\eqref{event:R1+R-1}, we show the following lemma:
\begin{lemma}
\label{claim:concentraionZ1}
\vmn{Define constants $\bar\epsilon_{\ref{claim:concentraionZ1}} \triangleq 1/24$, $\alpha_{\ref{claim:concentraionZ1}} \triangleq \sqrt{6}$, and $k_{\ref{claim:concentraionZ1}} \triangleq 4$. For $\epsilon \in (0, \bar\epsilon_{\ref{claim:concentraionZ1}}]$, }
if $n_1 > \frac{k_{\ref{claim:concentraionZ1}}}{p^2}\log \left(\frac{1}{\epsilon}\right)$, and $(R,R_1,Z) \in \mathcal{E}$, we have:
\begin{align}
\prob{ \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| \geq \alpha_{\ref{claim:concentraionZ1}} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right)} \Big|~R,R_1,Z} \leq \frac{\epsilon}{4}.
\end{align}
\end{lemma}
\noindent Plugging Lemma~\ref{claim:concentraionZ1} into \eqref{ineq:totalProb} and setting $\tilde{\alpha} = \alpha_{\ref{claim:concentraionZ1}} $, we get:
\begin{align}
\prob{ \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| \geq \alpha_{\ref{claim:concentraionZ1}} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right)} }\leq \frac{\epsilon}{2}.
\label{ineq:totalProb2}
\end{align}
Finally, we have the following lemma regarding $\zeta_1$:
\begin{lemma}
\label{claim:adversary}
\vmn{Define constants $\bar\epsilon_{\ref{claim:adversary}} \triangleq 1/8$ and $\alpha_{\ref{claim:adversary}} \triangleq 1$. For
$\epsilon \in (0, \bar\epsilon_{\ref{claim:adversary}}]$, we have:}
\begin{align}
\prob{ \left| \zeta_1 - (1-p) \eta_1(\lambda)\right| \geq \alpha_{\ref{claim:adversary}} \sqrt{n_1\log\left( \frac{1}{\epsilon} \right)} } \leq \frac{\epsilon}{4}.
\label{ineq:adversary}
\end{align}
\end{lemma}
\noindent{With the lemmas above, we are ready to prove Lemma~\ref{lemma:low-tail-o1}: }
\begin{proof}{\textbf{Proof of Lemma~\ref{lemma:low-tail-o1}:}}
First, \vmn{we note that we set the constants in Lemma~\ref{lemma:low-tail-o1} and Lemmas~\ref{claim:BinomialBounds}-\ref{claim:adversary} such that we get} $\bar\epsilon_{\ref{lemma:low-tail-o1}} = \min\{\bar\epsilon_{\ref{claim:BinomialBounds}}, \bar\epsilon_{\ref{claim:concentraionZ1}}, \bar\epsilon_{\ref{claim:adversary}}\}$, $ k_{\ref{lemma:low-tail-o1}} = \max \{ k_{\ref{claim:concentraionZ1}},k_{\ref{claim:expectationofZ1}}\}$ and $\alpha_{\ref{lemma:low-tail-o1}} = \alpha_{\ref{claim:expectationofZ1}}+ \alpha_{\ref{claim:concentraionZ1}}+\alpha_{\ref{claim:adversary}}$.
Now we can apply the union bound on \eqref{ineq:totalProb2}, \eqref{ineq:expectationError}, and \eqref{ineq:adversary} and obtain: when $0<\epsilon' \leq \bar\epsilon_{\ref{lemma:low-tail-o1}} $ and $n_1 > \frac{k_{\ref{lemma:low-tail-o1}}}{p^2} \log \left( \frac{1}{\epsilon'} \right)$, with probability at least $1-\epsilon'$,
\begin{align*}
& \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| < \alpha_{\ref{claim:concentraionZ1}} \sqrt{n_1\log\left( \frac{1}{\epsilon'} \right)}, \\
& \left| \frac{R_1 Z}{R} - pn_1 \lambda\right| < \alpha_{\ref{claim:expectationofZ1}} \sqrt{n_1\log\left( \frac{1}{\epsilon'} \right)}\text{, and} \\
& \left| \zeta_1 - (1-p) \eta_1(\lambda)\right| < \alpha_{\ref{claim:adversary}} \sqrt{n_1\log\left( \frac{1}{\epsilon'} \right)}.
\end{align*}
We have $O_1(\lambda) = Z_1 +\zeta_1$, and thus, by the triangle inequality (note that $\E{Z_1 | R,R_1,Z} = \frac{R_1 Z}{R}$),
$$ \left| O_1(\lambda) - \tilde o_1 (\lambda) \right| \leq \left| Z_1 - \E{Z_1 | R,R_1,Z} \right|+ \left| \frac{R_1 Z}{R} - pn_1 \lambda\right| + \left| \zeta_1 - (1-p) \eta_1(\lambda) \right|, $$
which, according to the three inequalities above and the definition $\alpha_{\ref{lemma:low-tail-o1}} = \alpha_{\ref{claim:expectationofZ1}}+ \alpha_{\ref{claim:concentraionZ1}}+\alpha_{\ref{claim:adversary}}$, is smaller than $\alpha_{\ref{lemma:low-tail-o1}} \sqrt{n_1\log\left( \frac{1}{\epsilon'} \right)}$.
\end{proof}
\noindent With Lemma~\ref{lemma:low-tail-o1}, we are ready to prove Lemma~\ref{lemma:needed-centrality-result-for-m=2}:
\begin{proof}{\textbf{Proof of Lemma~\ref{lemma:needed-centrality-result-for-m=2}:}}
The proof consists of two steps:
First, we note that the \vmn{concentration} results similar to Lemma~\ref{lemma:low-tail-o1} can be \st{applied to}\vmn{obtained for} the other three random variables $O_1(\lambda)+ O_2(\lambda)$, $O_2(\lambda)$, and $O_2^{\vmn{\mathcal{S}}}(\lambda)$.
When applied to $O_2^{\vmn{\mathcal{S}}}(\lambda)$, the only modification is that we do not need to consider Lemma~\ref{claim:adversary} and~\eqref{ineq:adversary}.
Second, we apply the union bound on the probability that at least one of the $4n$ events in \eqref{inequality:good-approximation-o_1}-\eqref{inequality:good-approximation-app--o^R_2} is violated (note that $\lambda$ takes $n$ different non-zero values: when $\lambda=0$, $|O_1(\lambda) - \tilde o_1(\lambda)|=|0-0| =0$ and the same holds for the other three random variables), and choose the appropriate constants.
To prove this lemma, \vmn{we first note that we set the constants in Lemmas~\ref{lemma:needed-centrality-result-for-m=2} and~\ref{lemma:low-tail-o1} such that we get} $\alpha = 2\alpha_{\ref{lemma:low-tail-o1}}$, $\bar\epsilon = \bar\epsilon_{\ref{lemma:low-tail-o1}}$, and $k = 4 k_{\ref{lemma:low-tail-o1}}$.
Using Lemma~\ref{lemma:low-tail-o1} with $\epsilon' = \frac{\epsilon}{4n}$, the lemma holds because all of the following three statements are true:
\begin{subequations}
\begin{align}
\text{If }\epsilon \leq \bar\epsilon & \text{, then } \epsilon ' \leq \bar\epsilon_{\ref{lemma:low-tail-o1}}. \label{cond-epsilon-bar}\\
\text{If }n_1 > \frac{k}{p^2}\log n & \text{, then } n_1 > \frac{k_{\ref{lemma:low-tail-o1}}}{p^2}\log \left( \frac{1}{\epsilon'}\right). \label{cond-k}\\
\text{If } \left| O_1(\lambda)-\tilde o_1 (\lambda) \right| \geq \alpha \sqrt{n_1 \log n} &\text{, then } \left| O_1(\lambda)-\tilde o_1 (\lambda) \right| \geq \alpha_{\ref{lemma:low-tail-o1}} \sqrt{n_1 \log \left( \frac{1}{\epsilon'}\right)}.\label{cond-alpha}
\end{align}
\end{subequations}
Because $\epsilon'<\epsilon$, and $\bar\epsilon = \bar\epsilon_{\ref{lemma:low-tail-o1}}$, \eqref{cond-epsilon-bar} holds.
Before proceeding to the other two conditions, we first note that
$\log \left( \frac{1}{\epsilon'}\right) = \log \left( \frac{4n}{\epsilon}\right) \leq \log \left( \frac{n^3}{\epsilon}\right) \leq \log \left( n^4\right) = 4 \log n$, where we use $n\geq 2$ in the first inequality and $\epsilon \geq \frac{1}{n}$ in the second inequality.
Therefore, \eqref{cond-k} and~\eqref{cond-alpha} hold because $k = 4 k_{\ref{lemma:low-tail-o1}}$, and $\alpha = 2\alpha_{\ref{lemma:low-tail-o1}}$.
\end{proof}
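The logarithm bound used in the last step of this proof can be verified numerically; the loop below simply checks $\log(4n/\epsilon) \leq 4\log n$ over a few values of $n \geq 2$ and $\epsilon \geq 1/n$ (with equality at $n=2$, $\epsilon=1/2$).

```python
import math

# Numeric check of the bound log(4n/eps) <= 4 log(n) used in the proof
# above, valid for n >= 2 and eps >= 1/n.
for n in (2, 5, 10, 100, 10 ** 6):
    for eps in (1 / n, 0.5):
        if eps < 1:
            assert math.log(4 * n / eps) <= 4 * math.log(n) + 1e-9
```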
\subsection{\vmn{Further Remark on Deterministic Approximations}}
\label{subsec:remark}
\begin{remark}\label{remark:deterministic-approx-vs-expected-value}
In Lemma~\ref{lemma:needed-centrality-result-for-m=2}, we use the deterministic value $\tilde o_j(\lambda)$ rather than $\E{O_j(\lambda)}$ to estimate $O_j(\lambda)$ because $\tilde o_j(\lambda)$ is a very simple function of $n_j$ and $\eta_j(\lambda)$.
Here we provide an example to show that $\tilde o_j(\lambda)$ and $\E{O_j(\lambda)}$ are not necessarily the same, which explains why we do not write the term $\tilde o_j(\lambda)$ as $\E{O_j(\lambda)}$.
Let us consider an example where $n=2$ and $\vmn{\vec{v}_I}=(v_{I,1}, v_{I,2})=(1,0)$ at $\lambda =1/2$.
First we compute $\E{O_1(1/2)}$.
Because $O_1(1/2)$ involves only one customer (the one in the first position),
$$\E{O_1(1/2)}=\prob{V_1=1}.$$
We can then use the law of total probability to express the probability as
\begin{align*}
\prob{V_1=1} = & \prob{V_1=1|1\notin {\vmn{\mathcal{S}}}}\prob{1\notin {\vmn{\mathcal{S}}}} \\ + & \prob{V_1=1|1\in {\vmn{\mathcal{S}}},2\notin {\vmn{\mathcal{S}}}}\prob{1\in {\vmn{\mathcal{S}}},2\notin {\vmn{\mathcal{S}}}} \\ + & \prob{V_1=1| 1, 2\in {\vmn{\mathcal{S}}}}\prob{1,2 \in {\vmn{\mathcal{S}}}}.
\end{align*}
Following the definitions,
\begin{align*}
\E{O_1(1/2)} = \prob{V_1=1} = 1 \cdot (1-p) + 1 \cdot p(1-p) + & \frac{1}{2} \cdot p^2 = 1-\frac{p^2}{2}.
\end{align*}
On the other hand,
$$\tilde o_1(1/2) = (1-p)\,\eta_1\!\left(\tfrac{1}{2}\right)+p\cdot\tfrac{1}{2}\cdot n_1= 1-\frac{p}{2}.$$
Therefore, for all $p\in(0,1)$, $\E{O_1(1/2)} \neq \tilde o_1(1/2)$.
\end{remark}
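The two quantities in this remark can be reproduced by exact enumeration; the sketch below assumes the model described above, in which each of the two customers joins the stochastic group $\mathcal{S}$ independently with probability $p$ and the members of $\mathcal{S}$ are then uniformly permuted among the stochastic positions.

```python
import itertools

# Exact enumeration of the two-customer example in the remark above
# (n = 2, v_I = (1, 0), lambda = 1/2): customers in the stochastic
# subset S are uniformly reshuffled among their positions.
def expected_O1(p):
    v = (1, 0)
    total = 0.0
    for S in ((), (0,), (1,), (0, 1)):           # possible stochastic subsets
        prob_S = 1.0
        for i in range(2):
            prob_S *= p if i in S else (1 - p)
        perms = list(itertools.permutations(S))  # uniform reshuffle of S
        for perm in perms:
            arrangement = list(v)
            for pos, cust in zip(S, perm):
                arrangement[pos] = v[cust]
            total += prob_S / len(perms) * arrangement[0]
    return total

p = 0.4
assert abs(expected_O1(p) - (1 - p ** 2 / 2)) < 1e-9  # E[O_1(1/2)] = 1 - p^2/2
tilde_o1 = (1 - p) * 1 + p * 0.5 * 1                  # tilde o_1(1/2) = 1 - p/2
assert expected_O1(p) != tilde_o1                     # the two differ for p in (0,1)
```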
\subsection{\vmn{Proof of Auxiliary Corollaries and Lemmas}}
\label{subsec:aux}
\begin{proof}{\textbf{Proof of Corollary~\ref{coro:binomial}:}}
In order to apply Theorem~\ref{thm:binomial-bound}, we define $t$ such that
$2e^{-2nt^2} \leq k\epsilon$, which corresponds to
\begin{align*}
t \geq \sqrt{\frac{1}{2n} \log \frac{2}{ k\epsilon}}.
\end{align*}
By setting $t = \sqrt{\frac{1}{2n} \log \frac{2}{ k\epsilon}}$, it remains to show that we can find $\alpha_{\ref{coro:binomial},k}$ and $\bar \epsilon_{\ref{coro:binomial},k}$ such that when $0<\epsilon<\bar\epsilon_{\ref{coro:binomial},k}$,
\begin{align*}
n \sqrt{\frac{1}{2n} \log \frac{2}{ k\epsilon}} \leq \alpha_{\ref{coro:binomial},k} \sqrt{n \log \left( \frac{1}{\epsilon} \right) }.
\end{align*}
This can be achieved by setting $\bar \epsilon_{\ref{coro:binomial},k} =k/2 $ and $\alpha_{\ref{coro:binomial},k} =1$:
\begin{align*}
\epsilon \leq k/2 \quad \Rightarrow \quad \frac{2}{k \epsilon} \leq \frac{1}{\epsilon^2} \quad \Rightarrow \quad \log \frac{2}{ k\epsilon} \leq 2 \log \frac{1}{\epsilon} \quad \Rightarrow \quad n \sqrt{\frac{1}{2n} \log \frac{2}{ k\epsilon}} \leq \alpha_{\ref{coro:binomial},k} \sqrt{n \log \left( \frac{1}{\epsilon} \right) }.
\end{align*}
\end{proof}
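The implication chain closing this proof can be checked numerically: for $\epsilon \leq k/2$ we have $\log\frac{2}{k\epsilon} \leq 2\log\frac{1}{\epsilon}$, with equality at $\epsilon = k/2$. The values of $k$ tested below are arbitrary.

```python
import math

# Numeric check of the implication above: for 0 < eps <= k/2,
# log(2/(k*eps)) <= 2*log(1/eps), which is what allows alpha = 1.
for k in (0.5, 1.0, 2.0):
    for frac in (1.0, 0.5, 0.1, 0.01):
        eps = frac * k / 2  # eps ranges over (0, k/2]
        assert math.log(2 / (k * eps)) <= 2 * math.log(1 / eps) + 1e-9
```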
\begin{proof}{\textbf{Proof of Corollary~\ref{coro:hyper}:}}
According to Theorem~\ref{thm:hypergeometric-bound}, when $\gamma \geq 2$,
\begin{align*}
\prob{\left| K-\E{K} \right| > \gamma } < 2 e^{-2\alpha_{n_1,n,m}(\gamma^2-1)}.
\end{align*}
We first derive an upper bound on the right-hand-side probability above when $\gamma$ and $m$ are large enough.
When $m\geq 1$,
\begin{align*}
\alpha_{n_1,n,m}=\max \left\{\frac{1}{n_1+1}+\frac{1}{n-n_1+1}, \frac{1}{m+1}+\frac{1}{n-m+1} \right\} \geq \frac{1}{m+1} \geq \frac{1}{2m}.
\end{align*}
Further, when $\gamma \geq 2$, we have: $\gamma^2 -1 \geq \gamma ^2 /2$. Putting these two together,
\begin{align*}
2 e^{-2\alpha_{n_1,n,m}(\gamma^2-1)} \leq 2 e^{-\frac{1}{m} \frac{\gamma^2}{2}}.
\end{align*}
Therefore, if $\alpha_{\ref{coro:hyper},k} \sqrt{m \log \left( \frac{1}{\epsilon} \right) } \geq 2$ and $m\geq 1$,
\begin{align*}
\prob{ |K - \E{K}| \geq \alpha_{\ref{coro:hyper},k} \sqrt{m \log \left( \frac{1}{\epsilon} \right) } } < 2 \exp \left( - \frac{1}{m} \frac{\alpha_{\ref{coro:hyper},k}^2 m \log \left( \frac{1}{\epsilon} \right)}{2} \right)
= 2 \epsilon^{\alpha_{\ref{coro:hyper},k}^2/2}.
\end{align*}
Thus, it is sufficient to have $\alpha_{\ref{coro:hyper},k} \sqrt{m \log \left( \frac{1}{\epsilon} \right) } \geq 2$, $m\geq 1$, and
\begin{align*}
2\epsilon^{\alpha_{\ref{coro:hyper},k}^2/2} \leq k\epsilon.
\end{align*}
The last condition holds by setting $\alpha_{\ref{coro:hyper},k} = 2$ and $\bar \epsilon_{\ref{coro:hyper},k} = k/2$ ($ \epsilon \leq \bar \epsilon_{\ref{coro:hyper},k} = k/2 \Rightarrow \epsilon^2 \leq k \epsilon/2$).
The first two conditions hold by defining $\underline m_{\ref{coro:hyper},k} \triangleq \max \left\{ \left( \log \frac{1}{\bar \epsilon_{\ref{coro:hyper},k}}\right)^{-1} , 1\right\}$.
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:BinomialBounds}:}}
Let $k = 1/12$. Corollary~\ref{coro:binomial} then implies that there exist $\bar\epsilon_{\ref{coro:binomial},k}$ and $\alpha_{\ref{coro:binomial},k}$ such that when $0<\epsilon \leq \bar\epsilon_{\ref{coro:binomial},k}$,
\begin{align}
& 1 - \prob{R \in \left( np - \alpha_{\ref{coro:binomial},k} \sqrt{ n \log \left( \frac{1}{\epsilon} \right)}, np + \alpha_{\ref{coro:binomial},k} \sqrt{ n \log \left( \frac{1}{\epsilon} \right)} \right)} \nonumber \\ & =
\prob{|R - np| \geq \alpha_{\ref{coro:binomial},k} \sqrt{ n \log \left( \frac{1}{\epsilon} \right)}} \leq \epsilon/12.
\end{align}
Defining $\bar\epsilon_{\ref{claim:BinomialBounds}} \triangleq \bar\epsilon_{\ref{coro:binomial},k}$ and $\alpha_{\ref{claim:BinomialBounds}} \triangleq \alpha_{\ref{coro:binomial},k}$, repeating the same for $R_1$ and $Z$, and applying the union bound imply the statement.
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:expectationofZ1}:}}
First we define the constant $\alpha_{\ref{claim:expectationofZ1}}\triangleq 3 \alpha_{\ref{claim:BinomialBounds}} + 2\alpha_{\ref{claim:BinomialBounds}}^2 \sqrt{1/k_{\ref{claim:expectationofZ1}}}$ where $k_{\ref{claim:expectationofZ1}} \triangleq 4 \alpha_{\ref{claim:BinomialBounds}}^2$.
The reason for these definitions becomes clear over the course of the proof.
We prove the lower bound first.
Because $0 < \frac{Z}{R} \leq 1$, the ratio does not increase when the same positive number is subtracted from both the numerator and the denominator, provided the denominator remains positive after the subtraction. In particular, we subtract $\alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)}$ from both.
Therefore,
\begin{align}
\frac{Z}{R} > \frac{Z - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)} }{R - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)}}. \label{ineq:ratio}
\end{align}
Note that $R - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)} > 0$, because under event \eqref{event:R}, we have:
\begin{align*}
R - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)} \geq np - 2 \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)}.
\end{align*}
Therefore,
\begin{align}
n > \frac{4 \alpha_{\ref{claim:BinomialBounds}}^2}{p^2} \log \left( \frac{1}{\epsilon} \right) \Rightarrow R - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)} > 0 \label{ineq:conditionN1}.
\end{align}
The premise of \eqref{ineq:conditionN1} (its first inequality) holds because $n \geq n_1$ and, by the assumption of the lemma, $n_1 > \frac{k_{\ref{claim:expectationofZ1}}}{p^2} \log \left( \frac{1}{\epsilon} \right) = \frac{4\alpha_{\ref{claim:BinomialBounds}}^2}{p^2} \log \left( \frac{1}{\epsilon} \right)$, where we use the fact that we defined $k_{\ref{claim:expectationofZ1}} = 4 \alpha_{\ref{claim:BinomialBounds}}^2$.
Going back to \eqref{ineq:ratio}, under the events \eqref{event:R} and \eqref{event:R1+R-1}, we have:
\begin{align}
\frac{Z}{R} & > \frac{Z - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)} }{R - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)}} \nonumber \\
& > \frac{\lambda n p - \alpha_{\ref{claim:BinomialBounds}} \sqrt{\lambda n \log \left( \frac{1}{\epsilon} \right) } - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)} }{np + \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)} - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)}}
\geq \lambda - \frac{2\alpha_{\ref{claim:BinomialBounds}} \sqrt{n \log \left( \frac{1}{\epsilon} \right)}}{np}. \label{ineq:ZtoR}
\end{align}
Combining \eqref{ineq:ZtoR} with the lower bound on $R_1$ under event \eqref{event:R1+bar-R1}, we get:
\begin{align*}
& \frac{Z R_1}{R} > \left( \lambda - \frac{2\alpha_{\ref{claim:BinomialBounds}}}{p} \sqrt{\frac{ \log \left( \frac{1}{\epsilon} \right)}{n}} \right)
\left( n_1p - \alpha_{\ref{claim:BinomialBounds}} \sqrt{ n_1 \log \left( \frac{1}{\epsilon} \right)} \right)
\\
= &\lambda n_1 p -2\alpha_{\ref{claim:BinomialBounds}} n_1 \sqrt{\frac{ \log \left( \frac{1}{\epsilon} \right)}{n}} - \lambda \alpha_{\ref{claim:BinomialBounds}} \sqrt{ n_1 \log \left( \frac{1}{\epsilon} \right)}
+ 2\frac{\alpha_{\ref{claim:BinomialBounds}}^2}{p} \sqrt{\frac{n_1}{n}} \log \left( \frac{1}{\epsilon} \right).
\end{align*}
Because $n_1\leq n$, we have $n_1 \sqrt{\frac{1}{n}} \leq \sqrt{n_1}$.
By definition, $\alpha_{\ref{claim:expectationofZ1}} \geq 3 \alpha_{\ref{claim:BinomialBounds}} \geq (2+\lambda ) \alpha_{\ref{claim:BinomialBounds}}$, and therefore, the right-hand side of the above inequality is at least
$$\lambda n_1 p - \alpha_{\ref{claim:expectationofZ1}} \sqrt{ n_1 \log \left( \frac{1}{\epsilon} \right)}. $$
Hence we complete the proof for the lower bound part of Inequality~(\ref{inequality:fractional-thing-to-prove}).
For the upper bound, we use the same argument as for the lower bound and obtain:
\begin{align}
& \frac{Z R_1 }{R} < \left( \lambda + \frac{2\alpha_{\ref{claim:BinomialBounds}}}{p} \sqrt{\frac{ \log \left( \frac{1}{\epsilon} \right)}{n}} \right)
\left( n_1p + \alpha_{\ref{claim:BinomialBounds}} \sqrt{ n_1 \log \left( \frac{1}{\epsilon} \right)} \right)
\nonumber \\
= &\lambda n_1 p +2\alpha_{\ref{claim:BinomialBounds}} n_1 \sqrt{\frac{ \log \left( \frac{1}{\epsilon} \right)}{n}} + \lambda \alpha_{\ref{claim:BinomialBounds}} \sqrt{ n_1 \log \left( \frac{1}{\epsilon} \right)}
+ 2\frac{\alpha_{\ref{claim:BinomialBounds}}^2}{p} \sqrt{\frac{n_1}{n}} \log \left( \frac{1}{\epsilon} \right) \nonumber\\
\leq & \lambda n_1 p +2\alpha_{\ref{claim:BinomialBounds}} \sqrt{n_1 \log \left( \frac{1}{\epsilon} \right)} + \alpha_{\ref{claim:BinomialBounds}} \sqrt{ n_1 \log \left( \frac{1}{\epsilon} \right)}
+ 2\frac{\alpha_{\ref{claim:BinomialBounds}}^2}{p} \sqrt{n_1 \log \left( \frac{1}{\epsilon} \right)} \sqrt{\frac{ \log \left( \frac{1}{\epsilon} \right)}{n}} \nonumber \\
= & \lambda n_1 p + \left(3 \alpha_{\ref{claim:BinomialBounds}} + 2\alpha_{\ref{claim:BinomialBounds}}^2 \frac{1}{p}\sqrt{\frac{\log\left(\frac{1}{\epsilon}\right)}{n}} \right) \sqrt{n_1 \log\left(\frac{1}{\epsilon}\right)}, \label{ineq:lower_bound}
\end{align}
where the inequality in the third line comes from the fact that $\sqrt{\frac{n_1}{n}} \leq 1$ and $\lambda \leq 1$.
Combining the definition $\alpha_{\ref{claim:expectationofZ1}}= 3 \alpha_{\ref{claim:BinomialBounds}} + 2\alpha_{\ref{claim:BinomialBounds}}^2 \sqrt{1/k_{\ref{claim:expectationofZ1}}}$ and~\eqref{ineq:lower_bound}, it is sufficient to have
$$ \frac{1}{p}\sqrt{\frac{\log\left(\frac{1}{\epsilon}\right)}{n}}$$
upper bounded by the constant $\sqrt{1/k_{\ref{claim:expectationofZ1}}}$.
By the assumption of this lemma, $n_1 \geq \frac{k_{\ref{claim:expectationofZ1}}}{p^2} \log \left( \frac{1}{\epsilon} \right)$, and thus
$ \frac{1}{p}\sqrt{\frac{\log\left(\frac{1}{\epsilon}\right)}{n_1}} \leq \sqrt{1/k_{\ref{claim:expectationofZ1}}}$.
Further, because $n\geq n_1$, we have: $ \frac{1}{p}\sqrt{\frac{\log\left(\frac{1}{\epsilon}\right)}{n}} \leq \frac{1}{p}\sqrt{\frac{\log\left(\frac{1}{\epsilon}\right)}{n_1}} \leq \sqrt{1/k_{\ref{claim:expectationofZ1}}}$.
Thus we have:
$$ 3 \alpha_{\ref{claim:BinomialBounds}} + 2\alpha_{\ref{claim:BinomialBounds}}^2 \frac{1}{p}\sqrt{\frac{\log\left(\frac{1}{\epsilon}\right)}{n}} \leq 3\alpha_{\ref{claim:BinomialBounds}}+ 2\alpha_{\ref{claim:BinomialBounds}}^2 \sqrt{1/k_{\ref{claim:expectationofZ1}}} \leq \alpha_{\ref{claim:expectationofZ1}}.$$
Putting this back in \eqref{ineq:lower_bound} completes the proof of the lemma.
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:concentraionZ1}:}}
Applying Corollary~\ref{coro:hyper} with $k=1/4$, there exist positive real numbers $\alpha_{\ref{claim:concentraionZ1}}'\triangleq \alpha_{\ref{coro:hyper},k}$, $\bar\epsilon_{\ref{coro:hyper},k}$, and $\underline m \triangleq \underline m_{\ref{coro:hyper},k}$ such that when $0< \epsilon \leq \bar\epsilon_{\ref{coro:hyper},k}$ and $R_1 \geq \underline m$,
\begin{align*}
\prob{ \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| \geq \alpha_{\ref{claim:concentraionZ1}}'\sqrt{ R_1 \log \left( \frac{1}{\epsilon} \right) }} \leq \frac{ \epsilon}{4}.
\end{align*}
Now, we define $\bar\epsilon_{\ref{claim:concentraionZ1}} \triangleq \min \{ \bar\epsilon_{\ref{coro:hyper},k} , \bar\epsilon_{\ref{claim:BinomialBounds}} \}$, $k_{\ref{claim:concentraionZ1}}\triangleq \max{\left\{ 4 \alpha_{\ref{claim:BinomialBounds}}^2, \frac{2 \underline{m}}{\log \left( \frac{1}{\bar\epsilon_{\ref{claim:concentraionZ1}}}\right)}\right\}}$ and $\alpha_{\ref{claim:concentraionZ1}} \triangleq \alpha_{\ref{claim:concentraionZ1}}' \sqrt{\frac{3}{2}}$.
First we check the condition $R_1 \geq \underline m$:
Because $n_1 > \frac{k_{\ref{claim:concentraionZ1}}}{p^2}\log\left( \frac{1}{\epsilon} \right) \geq \frac{4 \alpha_{\ref{claim:BinomialBounds}}^2}{p^2}\log\left( \frac{1}{\epsilon} \right) $,
\begin{align}
\alpha_{\ref{claim:BinomialBounds}} \sqrt{n_1 \log \left( \frac{1}{\epsilon} \right) } \leq \frac{n_1p}{2}. \label{ineq:n_1p-big-enough}
\end{align}
Therefore, under event \eqref{event:R1+bar-R1}, we have:
$$ R_1 > n_1p - \alpha_{\ref{claim:BinomialBounds}} \sqrt{n_1 \log \left( \frac{1}{\epsilon} \right) } \geq \frac{1}{2} n_1p \geq \underline m,$$
where the first inequality follows from the definition of event \eqref{event:R1+bar-R1}, the second uses Inequality~\eqref{ineq:n_1p-big-enough}, and the last uses $k_{\ref{claim:concentraionZ1}} \geq \frac{2 \underline{m}}{\log \left( \frac{1}{\bar\epsilon_{\ref{claim:concentraionZ1}}}\right)}$ (which gives $n_1 \geq \frac{k_{\ref{claim:concentraionZ1}}}{p^2}\log \left( \frac{1}{\epsilon}\right) \geq \frac{2 \underline {m}}{p^2}\geq\frac{2 \underline {m}}{p}$).
Next we show that because we defined $\alpha_{\ref{claim:concentraionZ1}} = \alpha_{\ref{claim:concentraionZ1}}' \sqrt{\frac{3}{2}}$, we have:
\begin{align}
\alpha_{\ref{claim:concentraionZ1}}'\sqrt{ R_1 \log \left( \frac{1}{\epsilon} \right) } \leq \alpha_{\ref{claim:concentraionZ1}} \sqrt{n_1 \log \left( \frac{1}{\epsilon} \right) }. \label{ineq:alpha}
\end{align}
This again follows from the definition of $k_{\ref{claim:concentraionZ1}}$, event \eqref{event:R1+bar-R1}, and Inequality~\eqref{ineq:n_1p-big-enough}:
$$ R_1 < n_1p + \alpha_{\ref{claim:BinomialBounds}} \sqrt{n_1 \log \left( \frac{1}{\epsilon} \right) } \leq \frac{3}{2} n_1p \leq \frac{3}{2} n_1.$$
Inequality \eqref{ineq:alpha} implies that:
\begin{align*}
&\prob{ \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| \geq \alpha_{\ref{claim:concentraionZ1}} \sqrt{n_1 \log \left( \frac{1}{\epsilon} \right) }} \\ &\leq \prob{ \left| Z_1 - \E{Z_1 | R,R_1,Z} \right| \geq \alpha_{\ref{claim:concentraionZ1}}'\sqrt{ R_1 \log \left( \frac{1}{\epsilon} \right) }} \leq \frac{ \epsilon}{4},
\end{align*}
which completes the proof of the lemma.
\end{proof}
\begin{proof}{\textbf{Proof of Lemma~\ref{claim:adversary}:}}
Recall that $\zeta_1$ follows the binomial distribution $\text{Bin}(\eta_1(\lambda), 1-p)$.
Hence, the lemma follows straightforwardly from Corollary~\ref{coro:binomial}.
\end{proof}
\section{Conclusion}\label{sec:conclusion}
\section{Missing proofs of Section~\ref{sec:secretary}}\label{sec:proof-b=1}
\if false
\begin{proof}[Proof of Lemma~\ref{lemma:stopping_time}]
Let $X$ denote the number of customers from the stochastic group among the first $\frac{2\zeta n}{p}$ of all customers.
Then, $X$ is drawn from the binomial distribution $Bin\left( \frac{2\zeta n}{p}, p\right)$.
Note that $T\geq \frac{2\zeta n}{p}$ implies $X\leq \zeta n$.
Thus, $\prob{T\geq \frac{2\zeta n}{p}} \leq \prob{X\leq \zeta n}$.
Chernoff bound gives, for any $\delta \in (0,1)$ and $\mu = \mathbb{E}(X)$, $ \mathbb{P}(X < (1-\delta)\mu) < e^{-\delta^2\mu/2}.$
Therefore, setting $\delta = 1/2$ and $\mu=\zeta n$,
\begin{align*}
\prob{T\geq \frac{2\zeta n}{p}} \leq \prob{X\leq \zeta n} \leq e^{-\frac{1}{4}\zeta n} \leq \epsilon,
\end{align*}
where the last inequality follows from our assumption $n\geq \frac{4}{\zeta}\log \frac{1}{\epsilon}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:one-time-learning}]
For proving this theorem,
we first review the Hoeffding--Bernstein inequality for sampling without replacement, which first appeared in~\cite{VanderVaart1996}.
We quote from~\cite{Agrawal2009b}:
\begin{theorem}\label{thm:HB-Inequality}
Let $u_1, u_2, \dots, u_r$ be random samples without replacement from the real numbers $c_1, c_2, \dots, c_R$. Then for every $t>0$,
$$ \prob{ \left\vert \sum_{i=1}^r u_i - r \bar{c} \right\vert \geq t } \leq 2 \exp \left( - \frac{t^2}{2r\sigma_R^2 + t \Delta_R}\right)$$
where $\Delta_R = \max_i c_i-\min_i c_i$, $\bar{c}=\frac{1}{R}\sum_{i=1}^R c_i$, and $\sigma_R^2 = \frac{1}{R}\sum_{i=1}^R (c_i-\bar{c})^2$.
\end{theorem}
When applying Theorem~\ref{thm:HB-Inequality} to our proof, the variables take values in $\{0,1\}$, and we find it more convenient to use the following corollary:
\begin{corollary}\label{cor:HB-ineq}
In the setting of Theorem~\ref{thm:HB-Inequality}, if $c_1,c_2,\dots, c_R$ take values in $\{0,1\}$, then for every $t>0$,
$$ \prob{ \left\vert \sum_{i=1}^r u_i - r \bar{c} \right\vert \geq t } \leq 2 \exp \left( - \frac{t^2}{2r \bar c + t }\right).$$
\end{corollary}
\begin{proof}
First we note that $\Delta_R \leq 1$. Therefore, it is sufficient to prove $\sigma_R^2 \leq \bar c$, which is given by
\begin{align*}
\sigma_R^2 = & \frac{1}{R}\sum_{i=1}^R (c_i-\bar{c})^2 = \frac{1}{R}\sum_{i=1}^R (c_i^2-2c_i\bar{c}+\bar{c}^2) \\
= & \frac{1}{R}\left[\sum_{i=1}^R (c_i^2)-2\sum_{i=1}^R(c_i)\bar{c}+R\bar{c}^2\right] \\
= & \frac{1}{R} \left[\sum_{i=1}^R (c_i)-2R\bar{c}^2+R\bar{c}^2 \right] &(c_i^2 = c_i, \sum_{i=1}^R(c_i)= R\bar c ) \\
\leq & \frac{1}{R} \sum_{i=1}^R (c_i) = \bar c.
\end{align*}
\end{proof}
For the subsequent proof, we introduce the following notation:
$$x_i(s^*) \triangleq \begin{cases} 0 \text{ if }v_i' \leq s^*, \\ 1\text{ if }v_i' > s^*. \end{cases} $$
According to the definition of $x_i(s^*)$, the algorithm accepts customer $i$ (where $i$ is the order of the customer before the random permutation) when either $\sigma_{{\vmn{\mathcal{S}}}}(i) \leq T$ or $ x_i(s^*)=1$ .
As a consequence, $T + \sum_{i=1}^n x_i(s^*)$ is an upper bound on the overall number of customers that would have been selected if we did not have the inventory constraint.
In addition, if $T + \sum_{i=1}^n x_i(s^*)\leq b$, then $\sum_{i=1}^n v'_ix_i(s^*)$ is a lower bound on the total revenue of the algorithm (recall that we accept all the first $T$ arrived customers).
The above two observations motivate us to prove the following two lemmas.
The first lemma shows that with high probability, the one-time-learning algorithm does not attempt to sell products to too many customers:
\begin{lemma}\label{lemma:inventory_constraint_hold_high_probabiliy}
If $\epsilon \leq 1$, $ \frac{2\zeta n}{p} \leq \frac{b}{2} $ and
$ \frac{b}{6} \geq \frac{1}{\epsilon^2 \zeta}\log \left( \frac{n+1}{\epsilon}\right)$, then $$\prob{\sum_{i=1}^n x_i(s^*) > \left(b-\frac{2\zeta n}{p}\right)|T\leq n}\leq \epsilon.$$
\end{lemma}
\begin{proof}
We first note that all values of $s$ collectively give a total of $n+1$ distinct sequences of $(x_i(s))_{i=1}^n$.
This is because the number of zeros in the sequence $(x_i(s))_{i=1}^n$ is between $0$ and $n$, and when we fix the number of zeros in a sequence to be $m$, those zeros must correspond to the $m$ $i$'s with the lowest values of $v_i'$.
Therefore, it is sufficient to show that for each particular realization $(\hat{x}_i)_{i=1}^n$ satisfying \begin{align}\sum_{i=1}^n \hat{x}_i > b-\frac{2\zeta n}{p}, \label{ineq:exaust-inventory} \end{align} $
\prob{ (x_i(s^*))_{i=1}^n = (\hat{x}_i)_{i=1}^n } \leq \frac{\epsilon}{n+1}$.
We first make a connection between $x_i(s^*)$ and the optimal solution of~\ref{Primal} and~\ref{Dual}, denoted by $(\{x^* _i, y^* _i\}_{i\in S_{\vmn{\mathcal{S}}}}, s^*)$.
For all $i$ in the {{\st{predictable}\vmn{stochastic}}} group ($i \in {\vmn{\mathcal{S}}}$), if $x_{\sigma_{\vmn{\mathcal{S}}}^{-1}(i)}(s^*)=1$, then $v_i = v_{\sigma_{\vmn{\mathcal{S}}}^{-1}(i)}'> s^*$ and thus $v_i + y_i^* > s^*$.
Due to complementary slackness, $x_i^*=1$.
Therefore, for all $i \in {\vmn{\mathcal{S}}}$,
\begin{align*}x_{\sigma_{\vmn{\mathcal{S}}}^{-1}(i)}(s^*) \leq x_i^*.
\end{align*}
Thus, when $(x_i(s^*))_{i=1}^n = (\hat{x}_i)_{i=1}^n$,
$$ \sum_{i\in S_{\vmn{\mathcal{S}}}} \hat{x}_{\sigma_{\vmn{\mathcal{S}}}^{-1}(i)} = \sum_{i\in S_{\vmn{\mathcal{S}}}} x_{\sigma_{\vmn{\mathcal{S}}}^{-1}(i)}(s^*) \leq \sum_{i\in S_{\vmn{\mathcal{S}}}} x_i^* \leq (1-\epsilon) \zeta \left(b-\frac{2\zeta n}{p}\right) . $$
As a result, \begin{align*}
& \prob{ (x_i(s^*))_{i=1}^n = (\hat{x}_i)_{i=1}^n |T \leq n} \\
\leq & \prob{ \sum_{i\in S_{\vmn{\mathcal{S}}}} \hat{x}_{\sigma_{\vmn{\mathcal{S}}}^{-1}(i)} \leq (1-\epsilon) \zeta \left(b-\frac{2\zeta n}{p}\right) |T \leq n} \\
\leq & \prob{ \sum_{i\in S_{\vmn{\mathcal{S}}}} \hat{x}_{\sigma_{\vmn{\mathcal{S}}}^{-1}(i)} \leq (1-\epsilon) \zeta \sum_{i=1}^n \hat{x}_i |T \leq n} &(\eqref{ineq:exaust-inventory})\\
\leq & \prob{ \left| \sum_{i\in S_{\vmn{\mathcal{S}}}} \hat{x}_{\sigma_{\vmn{\mathcal{S}}}^{-1}(i)} - \zeta \sum_{i=1}^n \hat{x}_i \right| \geq \epsilon \zeta \sum_{i=1}^n \hat{x}_i |T \leq n}
\end{align*}
Conditioned on $T \leq n$, at the end of the algorithm, $S_{{\vmn{\mathcal{S}}}}$ is a random set of size $\zeta n$ (whose indices are $\{\sigma_{\vmn{\mathcal{S}}}^{-1}(i)| i \in S_{{\vmn{\mathcal{S}}}}\}$ before the random permutation) from $\{ 1,2,\dots , n\}$.
Thus, we can use Corollary~\ref{cor:HB-ineq} to obtain the following upper bound on the probability presented in the above expression.
\begin{align*}
& 2\exp\left( - \frac{\left[\epsilon\zeta \sum_{i=1}^n \hat{x}_i \right]^2}{2\zeta n \frac{\sum_{i=1}^n \hat{x}_i }{n}+ \epsilon\zeta \sum_{i=1}^n \hat{x}_i }\right)
=2 \exp\left(\frac{ - \epsilon^2 \zeta \sum_{i=1}^n \hat{x}_i}{2+\epsilon}\right)
\\
\leq & 2 \exp\left(\frac{ - \epsilon^2 \zeta \left(b-\frac{2\zeta n}{p}\right)}{2+\epsilon}\right) \leq \exp\left(\frac{ - \epsilon^2 \zeta \left(b-\frac{2\zeta n}{p}\right)}{3}\right) \leq \frac{\epsilon}{n+1},
\end{align*}
where the first inequality holds due to \eqref{ineq:exaust-inventory} and last inequality holds because $ \frac{2\zeta n}{p} \leq \frac{b}{2} $ and
$ \frac{b}{6} \geq \frac{1}{\epsilon^2 \zeta}\log \left( \frac{n+1}{\epsilon}\right).$
\end{proof}
The following lemma shows that $\sum_{i=1}^n x_i(s^*)v_i'$ is close to $OPT$:
\begin{lemma}
\label{lemma:high_objective_value}
If $ \frac{2\zeta n}{p} \leq \frac{b}{2} $ and $b \geq \frac{12 \log\left(\frac{n}{\epsilon}\right)}{\epsilon^3}$,
then $$\prob{\sum_{i=1}^n x_i(s^*)v_i' < (1-3\epsilon)\left( 1-\frac{2\zeta n}{pb} \right) OPT |T \leq n } \leq \epsilon.$$
\end{lemma}
\begin{proof}
Conditioned on $T \leq n$, $s^*$ is well defined and, at the end of the algorithm, $S_{\mathcal{S}}$ is a random set of size $\zeta n$ (whose indices are $\{\sigma_{\mathcal{S}}^{-1}(i)| i \in S_{\mathcal{S}}\}$ before the random permutation) from $\{ 1,2,\dots , n\}$.
Recall that $OPT$ is the optimal offline value to the original problem.
Denote $OPT'$ the optimal offline value when the inventory is $b' \triangleq \left(b-\frac{2\zeta n}{p}\right)$ rather than $b$ and the binary decision variables are relaxed as continuous $[0,1]$ variables.
Then, by multiplying the offline optimal solution by the same factor of $\left( 1-\frac{2\zeta n}{pb} \right)$, we have, $OPT' \geq (1-\frac{2\zeta n}{pb}) OPT$.
Consider Lemma $3$ of \cite{Agrawal2009b} with $m=1$ and $b'=\left(b-\frac{2\zeta n}{p}\right) $.
The condition for the lemma holds because $b'=\left(b-\frac{2\zeta n}{p}\right) \geq \frac{b}{2} \geq \frac{6m \log\left(\frac{n}{\epsilon}\right)}{\epsilon^3}.$
Therefore, with probability at least $1-\epsilon$,
$$\sum_{i=1}^n x_i(s^*)v_i' \geq (1-3\epsilon) OPT' \geq (1-3\epsilon)\left( 1-\frac{2\zeta n}{pb} \right) OPT.$$
\end{proof}
With Lemmas~\ref{lemma:stopping_time},~\ref{lemma:inventory_constraint_hold_high_probabiliy}, and~\ref{lemma:high_objective_value}, we can complete the proof of the theorem as follows:
First, we show that we can apply Lemmas~\ref{lemma:stopping_time},~\ref{lemma:inventory_constraint_hold_high_probabiliy}, and~\ref{lemma:high_objective_value}.
To apply Lemma~\ref{lemma:stopping_time}, we need
$n\geq \frac{4}{\zeta}\log \frac{1}{\epsilon}$, which is equivalent to $b \geq \frac{8}{p \epsilon}\log \left( \frac{1}{\epsilon}\right)$, which is implied by our assumption $b^2 \geq \frac{12n}{p \epsilon ^3} \log \left( \frac{n+1}{\epsilon}\right)$.
To apply Lemma~\ref{lemma:inventory_constraint_hold_high_probabiliy}, we need
$ \frac{2\zeta n}{p} \leq \frac{b}{2} $ and $ \frac{b}{6} \geq \frac{1}{\epsilon^2 \zeta}\log \left( \frac{n+1}{\epsilon}\right) $.
The first condition holds because $ \frac{2\zeta n}{p} =\epsilon b < \frac{p}{2} b \leq \frac{b}{2} $ (using $\epsilon < \frac{p}{2}$ and $p\leq 1$).
The second condition is equivalent to our assumption $b^2 \geq \frac{12n}{p \epsilon ^3} \log \left( \frac{n+1}{\epsilon}\right)$.
To apply Lemma~\ref{lemma:high_objective_value}, we need $b \geq \frac{12 \log\left(\frac{n}{\epsilon}\right)}{\epsilon^3}$, which is implied by our assumption $b^2 \geq \frac{12n}{p \epsilon ^3} \log \left( \frac{n+1}{\epsilon}\right)$.
Applying the union bound to Lemmas~\ref{lemma:stopping_time},~\ref{lemma:inventory_constraint_hold_high_probabiliy}, and~\ref{lemma:high_objective_value}, we have with probability at least $1-3\epsilon$, $T<\frac{2\zeta n}{p}(<n)$ (and thus $s^*$ is computed), $\sum_{i=1}^n x_i(s^*)\leq b-\frac{2\zeta n}{p} < b-T$ (and thus we do not run out of inventory), and $\sum_{i=1}^n x_i(s^*)v_i' \geq (1-3\epsilon)\left( 1-\frac{2\zeta n}{pb} \right) OPT$.
This implies that with probability at least $1-3\epsilon$, the total revenue achieved by the online algorithm is at least $(1-3\epsilon)\left( 1-\frac{2\zeta n}{pb} \right) OPT$.
Therefore, the competitive ratio is at least $(1-3\epsilon)^2\left( 1-\frac{2\zeta n}{pb} \right) = (1-3\epsilon)^2\left( 1-\epsilon \right) \geq 1- 7 \epsilon$, which completes the proof.
\end{proof}
\fi
\begin{proof}{\textbf{Proof of Theorem~\ref{thm:b=1}:}}
For each integer $k \geq 2$, we denote by $\mathcal{F}_k$ the event in which all of the following conditions hold:
\begin{enumerate}
\item All of the $k$ highest-revenue customers are in the stochastic group.
\item The $k^{\text{th}}$-highest-revenue customer arrives in the observation period and the $k-1$ customers with the highest revenue do not.
\item The highest-revenue customer arrives first among the $k-1$ customers with the highest revenue.
\end{enumerate}
Clearly, for any $k\geq 2$, $\mathcal{F}_k$ is a success event, and these events are mutually exclusive for different values of $k$.
Furthermore, we note that, conditioned on being in the stochastic group, the probability that a customer arrives before time $\gamma$ approaches $\gamma$ as $n\to \infty$.
This can be shown by using a concentration result for random permutations similar to the one used in the proof of \eqref{inequality:good-approximation-app--o^R_2}.
Further, for two customers $l$ and $\tilde{l}$, conditioned on being in the stochastic group, the events that customer $l$ arrives before time $\gamma$ and customer $\tilde{l}$ arrives after time $\gamma$ are asymptotically independent.
Thus, we can write the probability of event $\vmn{\mathcal{F}_k}$ as:
\begin{align*}
\prob{\vmn{\mathcal{F}_k}}= p^k \gamma (1-\gamma)^{k-1}\frac{1}{k-1} + o(1),
\end{align*}
where $o(1)$ represents a small (positive or negative) real number approaching zero as $n\to \infty$.
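The leading term above is simply the product of the asymptotic probabilities of the conditions defining $\vmn{\mathcal{F}_k}$:
\begin{align*}
\prob{\vmn{\mathcal{F}_k}}= \underbrace{p^k}_{\substack{\text{top $k$ customers in}\\ \text{the stochastic group}}} \cdot \underbrace{\gamma}_{\substack{\text{$k^{\text{th}}$-highest arrives in}\\ \text{the observation period}}} \cdot \underbrace{(1-\gamma)^{k-1}}_{\substack{\text{top $k-1$ arrive}\\ \text{after time $\gamma$}}} \cdot \underbrace{\frac{1}{k-1}}_{\substack{\text{highest arrives first}\\ \text{among the top $k-1$}}} + o(1).
\end{align*}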
As a result, for any fixed $m$, we have
\begin{align*}
\prob{\text{success}}\geq &\sum_{k=2}^n\prob{\vmn{\mathcal{F}_k}} \geq \sum_{k=2}^m\prob{\vmn{\mathcal{F}_k}} \\
\geq & \sum_{k=2}^m \left( p^k \gamma (1-\gamma)^{k-1}\frac{1}{k-1} - |o(1)| \right)\\
\geq & \sum_{k=2}^m \left(p^k \gamma (1-\gamma)^{k-1}\frac{1}{k-1}\right) - m|o(1)|,
\end{align*}
which approaches $\sum_{k=2}^m \left(p^k \gamma (1-\gamma)^{k-1}\frac{1}{k-1}\right)$, for fixed $m$, as $n \to \infty$.
Since the above inequality holds for all $m$, we have
\begin{align}
\label{eq:LB}
\prob{\text{success}} \geq \lim_{n\to \infty }\sum_{k=2}^n \prob{\vmn{\mathcal{F}_k}} \geq \lim_{m \to \infty}\sum_{k=2}^m p^k \gamma (1-\gamma)^{k-1}\frac{1}{k-1}=\gamma p \log \frac{1}{\gamma p + 1-p}.
\end{align}
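The last equality in \eqref{eq:LB} follows from the Taylor series $\sum_{j=1}^\infty \frac{x^j}{j} = \log \frac{1}{1-x}$ for $|x|<1$, applied with $x = p(1-\gamma)$ after substituting $j=k-1$:
\begin{align*}
\sum_{k=2}^\infty p^k \gamma (1-\gamma)^{k-1}\frac{1}{k-1} = \gamma p \sum_{j=1}^{\infty} \frac{\left( p(1-\gamma) \right)^j}{j} = \gamma p \log \frac{1}{1-p(1-\gamma)} = \gamma p \log \frac{1}{\gamma p + 1-p}.
\end{align*}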
\if false
For simplicity, in the remaining proofs related to this algorithm, we skip the above argument, and simply write the probabity that a customer (given that it is in the {{\st{predictable}\vmn{stochastic}}} group) arrives between time $\lambda_1$ and $\lambda_2(>\lambda_1)$ to be $\lambda_2-\lambda_1$.
\fi
Next, we show that the lower bound given in \eqref{eq:LB} is tight by presenting an instance for which $\text{OSA}_\gamma$ achieves a success probability of at most $\gamma p \log \frac{1}{\gamma p + 1-p}$.
Consider the following adversarial instance.
The highest-revenue customer is the first customer in $\vec{v}_I$ (as a reminder, the subscript $I$ indicates that this is the initial sequence determined by the adversary).
For each $k=2, 3, \dots, (1-\gamma)n+1$, the $k^{\text{th}}$-highest-revenue customer is the $(\gamma n + k-1)^{\text{th}}$ customer in $\vec{v}_I$.
Other customers arbitrarily fill the remaining positions in $\vec{v}_I$.
For each positive integer $1\leq l\leq (1-\gamma)n$, we denote by $\mathcal{H}_l$ the event in which all of the $l$ highest-revenue customers are in the stochastic group but the $(l+1)^{\text{th}}$-highest is not.
Similarly, we denote by $\mathcal{H}_{(1-\gamma)n+1}$ the event in which all of the $((1-\gamma)n+1)$ highest-revenue customers are in the stochastic group; finally, we denote by $\mathcal{H}_0$ the event in which the highest-revenue customer is in the adversarial group.
Clearly, $\{ \mathcal{H}_l \}_{l=0}^{(1-\gamma)n+1}$ is a partition of the sample space.
In addition, for all $l$, $\prob{\mathcal{H}_l} \leq p^l$.
Conditioned on $\mathcal{H}_0$, the highest-revenue customer arrives during the observation period, and thus the algorithm has a success probability of $0$.
Conditioned on $\mathcal{H}_1$, the algorithm either accepts the customer with the second-highest revenue or does not accept any customer (if the highest-revenue customer arrives before time $\gamma$), and hence again has a success probability of $0$.
Further, for any $2\leq l \leq (1-\gamma)n$, conditioned on $\mathcal{H}_l$, to have a success, either one of the events $\mathcal{F}_2, \mathcal{F}_3 , \dots, \mathcal{F}_l$ occurs or the highest-revenue customer arrives in one of the positions $(\gamma n+1)$ through $(\gamma n+l-1)$; note that, conditioned on $\mathcal{H}_l$, the latter has a probability of $\frac{l-1}{n}$. As a result, the total success probability is at most
\begin{align*}
& \sum_{l=2}^{(1-\gamma ) n} \prob{\mathcal{H}_l}\left[ \prob{ \bigcup_{m=2}^l\mathcal{F}_m | \mathcal{H}_l} +\frac{l-1}{n} \right] + \prob{\mathcal{H}_{(1-\gamma ) n+1}} \\
\leq & \sum_{l=2}^{(1-\gamma ) n} \sum_{m=2}^{l} \prob{\mathcal{F}_m \cap \mathcal{H}_l} + \sum_{l=2}^{(1-\gamma ) n} p^l \frac{l-1}{n} + p^{(1-\gamma ) n+1} &( \prob{\mathcal{H}_l} \leq p^l) \\
\leq & \sum_{m=2}^{\infty} \prob{\mathcal{F}_m} + \sum_{l=2}^{\infty}p^l \frac{l-1}{n} + p^{(1-\gamma ) n+1}\\
= &\sum_{l=2}^{\infty} \prob{\mathcal{F}_l} + \frac{p^2}{(1-p)^2 n}+ p^{(1-\gamma ) n+1},
\end{align*}
which converges to $\sum_{l=2}^{\infty} \prob{\vmn{\mathcal{F}}_l}$ as $n$ approaches infinity.
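For completeness, the geometric-series term in the display above is evaluated using the identity $\sum_{j=0}^\infty (j+1)x^j = \frac{1}{(1-x)^2}$ for $|x|<1$:
\begin{align*}
\sum_{l=2}^{\infty}p^l \frac{l-1}{n} = \frac{p^2}{n}\sum_{j=0}^{\infty} (j+1)\, p^{j} = \frac{p^2}{(1-p)^2\, n}.
\end{align*}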
\end{proof}
\begin{proof}{\textbf{Proof of Proposition~\ref{thm:randomized-b=1}:}}
The key observation in the proof of the proposition is that when the position of the second-highest-revenue customer in $\vec{v}_I$ is before $\gamma_2 n$, $OSA_{\gamma_2}$ has a success probability greater than $s_2$; otherwise, $OSA_{\gamma_1}$ has a success probability greater than $s_1$.
To formalize this idea, we introduce two lemmas.
\begin{lemma}\label{lemma:random-OSA-good-gamma-2}
If the second-highest-revenue customer is among the first $\gamma_2 n$ customers in $\vec{v}_I$, then $OSA_{\gamma_2}$ has a success probability of at least $s_2+p(1-p)(1-\gamma_2)$, as $n\to \infty$.
\end{lemma}
\begin{proof}{\textbf{Proof:}}
Note that the events $\{\mathcal{F}_k\}_{k=2}^\infty$ defined in the proof of Theorem~\ref{thm:b=1} collectively give an asymptotic success probability of $s_2$.
We identify another disjoint event which also results in a success.
In particular, we define the event $\mathcal{\bar F}$ that satisfies the following conditions:
\begin{enumerate}
\item The highest-revenue customer is in the stochastic group and arrives after time $\gamma_2$.
\item The second-highest-revenue customer is in the adversarial group.
\end{enumerate}
Note that $\mathcal{\bar F}$ is a success event that is disjoint from the events $\{\mathcal{F}_k\}_{k=2}^\infty$.
Therefore, $\mathcal{\bar F}$ gives an additional success probability of $p(1-p)(1-\gamma_2) + o(1)$.
\end{proof}
\begin{lemma}\label{lemma:random-OSA-good-gamma-1}
If the second-highest-revenue customer is not among the first $\gamma_2 n$ customers in $\vec{v}_I$, then $OSA_{\gamma_1}$ has a success probability of at least $s_1+(1-p)\frac{\gamma_2-\gamma_1}{1-\gamma_1}s_1$, as $n\to \infty$.
\end{lemma}
\begin{proof}{\textbf{Proof:}}
Similar to the previous lemma, we first note that the events $\{\mathcal{F}_k\}_{k=2}^\infty$ introduced in the proof of Theorem~\ref{thm:b=1} collectively give an asymptotic success probability of $s_1$. We identify another set of disjoint events that also result in a success.
In particular, for each positive integer $k\geq 2$, we define the event $\mathcal{\widehat{F}}_k$ that satisfies all of the following conditions:
\begin{enumerate}
\item Among the $k$ highest-revenue customers, all except the second-highest-revenue customer are in the stochastic group and arrive after time $\gamma_1$.
\item The second-highest-revenue customer is in the adversarial group.
\item The $(k+1)^{\text{th}}$-highest-revenue customer is in the stochastic group and arrives before time $\gamma_1$.
\item The highest-revenue customer arrives no later than time $\gamma_2$ and arrives first among the $k$ highest-revenue customers (except for the second-highest-revenue customer).
\end{enumerate}
Clearly, for any $k\geq 2$, $\mathcal{\widehat{F}}_k$ is a success event.
In addition, these events are mutually exclusive for different values of $k$.
Furthermore, $\{\mathcal{\widehat{F}}_k\}_{k=2}^\infty$ does not overlap with $\{ \mathcal{F}_k \}_{k \geq 2}$.
Note that the probability that the highest-revenue customer arrives between times $\gamma_1$ and $\gamma_2$ and arrives first among the $k$ highest-revenue customers (except for the second-highest-revenue customer) is at least $\frac{\gamma_2 - \gamma_1}{k-1}+o(1)$.
Therefore, $\prob{\mathcal{\widehat{F}}_k}\geq p^k (1-p) \gamma_1 (1-\gamma_1)^{k-2}(\gamma_2 - \gamma_1)\frac{1}{k-1} + o(1)$.
As a result, the asymptotic success probability is at least
\begin{align*}
s_1 + \sum_{k=2}^\infty \prob{\mathcal{\widehat{F}}_k}
\geq & s_1+\sum_{k=2}^\infty p^k (1-p) \gamma_1 (1-\gamma_1)^{k-2}(\gamma_2 - \gamma_1)\frac{1}{k-1}
\\ = & s_1 + (1-p)\frac{\gamma_2-\gamma_1}{1-\gamma_1}s_1 .\end{align*}
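The last equality holds because, after factoring out $(1-p)\frac{\gamma_2-\gamma_1}{1-\gamma_1}$, the remaining series is exactly the one evaluated in the proof of Theorem~\ref{thm:b=1} with $\gamma = \gamma_1$:
\begin{align*}
\sum_{k=2}^\infty p^k \gamma_1 (1-\gamma_1)^{k-1}\frac{1}{k-1} = \gamma_1 p \log \frac{1}{\gamma_1 p + 1-p} = s_1.
\end{align*}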
\end{proof}
With Lemmas~\ref{lemma:random-OSA-good-gamma-2} and~\ref{lemma:random-OSA-good-gamma-1}, we complete the proof of the proposition as follows:
Recall that for any position of the second-highest-revenue customer in $\vec{v}_I$, $OSA_{\gamma_1}$ has a success probability of at least $s_1$ and $OSA_{\gamma_2}$ has a success probability of at least $s_2$.
As a result, using Lemma~\ref{lemma:random-OSA-good-gamma-2}, if the second-highest-revenue customer is among the first $\gamma_2 n$ customers in $\vec{v}_I$, then we have a success probability of at least $qs_1 + (1-q)(s_2+p(1-p)(1-\gamma_2))$.
Similarly, using Lemma~\ref{lemma:random-OSA-good-gamma-1}, if the second-highest-revenue customer is not among the first $\gamma_2 n$ customers in $\vec{v}_I$, then we have a success probability of at least $q(s_1+(1-p)\frac{\gamma_2-\gamma_1}{1-\gamma_1}s_1)+(1-q)s_2$.
As a result, for any adversarial problem instance $\vec{v}_I$, the asymptotic success probability is at least
\begin{align*} & \min \left \{ qs_1 + (1-q)(s_2+p(1-p)(1-\gamma_2)) , q(s_1+(1-p)\frac{\gamma_2-\gamma_1}{1-\gamma_1}s_1)+(1-q)s_2 \right \} \\
= & q s_1 +(1-q) s_2 +\min \left\{ (1-q)p(1-p)(1-\gamma_2), q(1-p)\frac{\gamma_2-\gamma_1}{1-\gamma_1}s_1 \right\} ,
\end{align*}
which completes the proof.
\end{proof}
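We remark, as an illustration, that a natural choice of $q$ is the one that equates the two terms inside the minimum above; since the factor $(1-p)$ cancels, this gives
\begin{align*}
q = \frac{p(1-\gamma_2)}{p(1-\gamma_2) + \frac{\gamma_2-\gamma_1}{1-\gamma_1}\, s_1},
\end{align*}
so that the additive gain over $q s_1 + (1-q) s_2$ is the same in both cases.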
\section{Problem Extensions}\label{sec:extension}
In this section, we study two extensions of the problem.
In both of the extensions, we relax the constraint that $v_i' \in \{0,a,1\}$ and let $v_i'$ take any positive values.
In Section~\ref{sec:discerning-online-algorithm}, we study the hypothetical situation where the algorithms can observe whether or not a customer is from the stochastic group.
To distinguish them from the original setting, we call such algorithms \emph{discerning} online algorithms.
In Section~\ref{sec:secretary}, we study the classical online secretary problem (where $b=1$ and $n\to \infty$) under our arrival model.
Since $b=1$ and the values can take any positive numbers, the objective is to maximize the probability of successfully selecting the highest-valued customer.
\subsection{Discerning Online Algorithm}\label{sec:discerning-online-algorithm}
In this section we assume that different customers take different values.
To ensure that this happens, we can add a small and independent random disturbance to the customer values.
When online algorithms can distinguish whether a customer is from the stochastic group or not, we can learn the pattern from the first few samples of customers in the stochastic group, and derive an online algorithm with a competitive ratio of $1-O(\epsilon)$ for some $\epsilon >0$.
Our one-time learning algorithm builds upon the ideas of \cite{Agrawal2009b}. We solve a scaled version of the allocation linear program after observing ``enough'' customers from the stochastic group, and we use the dual prices from the linear program to make online decisions after this initial period.
However, we modify both the initial phase and also the linear program of \cite{Agrawal2009b} to account for the adversarial component of the demand.
In particular, unlike \cite{Agrawal2009b}, we accept all of the customers in the learning phase. In the next paragraph, we show that this is indeed necessary when $p<1$.
Let us consider the example where $v_1'=b^2$ and $v_i'<1$ for all $i\geq 2$.
To achieve at least a $1/b$ fraction of the optimal revenue, an online algorithm must accept the customer with value $v_1'$.
However, if this customer is in the adversarial group, which happens with probability $1-p$, then the one-time-learning algorithm of \cite{Agrawal2009b} rejects the customer with value $v_1=v_1'$.
Thus the competitive ratio of the classical one-time-learning is at most $p + \frac{1-p}{b}$, which is clearly not near-optimal when $b\geq 2$.
Note that if we use a learning period of $\epsilon $ (same as \cite{Agrawal2009b}), then we might exhaust all the inventory in this initial phase (if $b \leq \epsilon n$).
Therefore we use a different criterion in defining our learning period.
In particular, letting $\zeta \triangleq \frac{p b \epsilon}{2n}$, our learning period continues until we observe $\zeta n$ customers from the stochastic group.
Note that our learning period is not deterministic.
The following lemma (proven in Appendix~\ref{sec:proof-b=1}) gives an upper bound on the number of customers arriving during the learning period, denoted by $T$.
(By convention, we define $T=\infty$ if the learning period never ends.)
\begin{lemma}\label{lemma:stopping_time}
If $n\geq \frac{4}{\zeta}\log \frac{1}{\epsilon}$, then, $$ \prob{T\geq \frac{2\zeta n}{p}} \leq \epsilon.$$
\end{lemma}
At time $T$, we solve a scaled offline linear program (\ref{Primal}) and its dual (\ref{Dual}) to estimate the dual price.
In the following, we use $S_{\mathcal{S}}$ to denote the set of customers from the stochastic group that we have observed in the learning period.
In~\ref{Primal}, we first reduce the inventory to $b - \frac{2\zeta n}{p}$ to compensate for the customers we have accepted during the learning period (note that by Lemma~\ref{lemma:stopping_time}, $\prob{T\geq \frac{2\zeta n}{p}} \leq \epsilon$).
Then, we scale the inventory by $\zeta$ (because $|S_{{\vmn{\mathcal{S}}}}|=\zeta n$).
Finally, we scale the inventory by a factor of $(1-\epsilon)$ to ensure that, with high probability, the threshold learned from the sample selects at most $b - \frac{2\zeta n}{p}$ future customers.
After this initial phase, we use $s^*$ (the optimal solution of variable $s$ of~\ref{Dual}) to decide whether to accept or reject a type-$2$ customer.
The formal definition of our algorithm is given in Algorithm~\ref{algorithm:one-time-learing}.
\begin{mdframed}
\begin{align}
\underset{x_i \in [0,1] \text{ for all } i\in S_{\mathcal{S}}} {\text{Maximize}}&&& \sum_{i\in S_{\mathcal{S}}} v_ix_i \tag{Primal} \label{Primal} \\
\text{subject to}&&& \sum_{i\in S_{\mathcal{S}}} x_i \leq (1-\epsilon) \zeta \left(b - \frac{2\zeta n}{p}\right) .\nonumber \\
\hline \nonumber \\
\underset{s\geq 0; y_i \geq 0 \text{ for all } i\in S_{\mathcal{S}}} {\text{Minimize}}&&& (1-\epsilon) \zeta \left( b - \frac{2\zeta n}{p} \right) s+\sum_{i\in S_{\mathcal{S}}} y_i \tag{Dual} \label{Dual} \\
\text{subject to}&&&\nonumber \\
\text{for all }i \in S_{\mathcal{S}} &&& s+y_i \geq v_i. \nonumber
\end{align}
\end{mdframed}
\begin{algorithm}[H]
\begin{enumerate}
\item Initialize $S_{\mathcal{S}} \leftarrow \emptyset$ and define $\zeta \triangleq \frac{p b \epsilon}{2n}$.
\item Repeat for time $\lambda = 1/n,2/n,\dots, 1$: accept the customer $i(=\lambda n)$ arriving at time $\lambda$ if there is remaining inventory and one of the following holds:
\begin{enumerate}
\item \textbf{Before learning:} $|S_{\mathcal{S}}|< \zeta n$. \\If customer $i$ is from the stochastic group, then update $S_{\mathcal{S}} \leftarrow S_{\mathcal{S}} \cup \{ i \}$; if, after updating, $|S_{\mathcal{S}}| \geq \zeta n$, then find the optimal solution $s^*$ of variable $s$ of~\ref{Dual}.
\item \textbf{After learning:} $|S_{\mathcal{S}}| \geq \zeta n$ and $v_i > s^*$.
\end{enumerate}
\end{enumerate}
\caption{The One-Time Learning Algorithm. (OTL)} \label{algorithm:one-time-learing}
\end{algorithm}
We show that the one-time-learning algorithm achieves the following near-optimal results:
\begin{theorem} \label{theorem:one-time-learning}
If $\epsilon < \frac{p}{2}$ and $b^2 \geq \frac{12n}{p \epsilon ^3} \log \left( \frac{n+1}{\epsilon}\right)$, then the one-time learning algorithm is $(1-O(\epsilon))$-competitive in the partially predictable model.
\end{theorem}
Roughly speaking (ignoring the $\log \frac{1}{\epsilon}$ terms and some mild additional assumptions), the one-time-learning algorithm is near optimal in the regime where $b=\omega\left( \sqrt{{n\log n}}\right)$.
On the other hand, Proposition~\ref{thm:impossibility} also applies to discerning online algorithms.
Therefore, it is not possible to achieve near-optimality when $b = o\left( \sqrt{{n}}\right)$.
Thus we have achieved near-optimality in a fairly wide range of $b$ (with respect to $n$) where it is still possible.
The proof of this theorem uses ideas similar to those in the proofs of~\cite{Agrawal2009b}. We defer the proof to Appendix~\ref{sec:proof-b=1}.
\fi
\section{The Secretary Problem \vmn{under Partially Predictable Demand}}
\label{sec:secretary}
In this section, we study the online secretary problem under our new arrival model.
In our setting, the secretary problem corresponds to having one unit of inventory, i.e., $b = 1$, and $n$ customers, where $v_{I,j} \in \mathbb{R}^{+}$ for $1\leq j \leq n$; i.e., we relax the assumption that there are only two types. The objective is to maximize the probability of selecting the highest-revenue customer in the asymptotic regime, where $n\to \infty$.
In the classical setting, the arrival sequence is assumed to be a uniformly random permutation of the $n$ customers, which corresponds to the extreme case of $p = 1$ under our partially predictable model.
In this setting, it is well known that the best-possible online algorithm is the following {\em deterministic} algorithm \citep{lindley1961dynamic,dynkin1963optimum,ferguson1989solved,freeman1983secretary}:
Observe the first $\left\lfloor \gamma n \right\rfloor$ customers, where $\gamma = \frac{1}{e}$; then accept the next one that has the highest revenue so far (if any). The success probability of this algorithm approaches $\frac{1}{e} \approx 0.37$ as $n\to \infty$.
\vmn{We generalize the classical setting by studying the problem under our demand model}. \vmn{First, we analyze the success probability of a similar class of algorithms for any $p \in (0,1]$. Next, we show that under our demand model where $p <1$---i.e., in the presence of an \vmn{adversarial} component---this class of algorithms is not necessarily the best possible.}
For any $\gamma \in (0,1)$, we define the Observation-Selection Algorithm ($\text{OSA}_\gamma$), which works similarly to the classical algorithm described above.
The formal definition of the algorithm is presented in Algorithm~\ref{algorithm:observation-selection}.
\begin{algorithm}[H]
\begin{enumerate}
\item Initialize $v_{\max} \leftarrow 0$.
\item \textbf{Observation period:} Repeat for customer $i = 1,2,\dots, \lfloor \gamma n \rfloor$: reject customer $i$ and update $v_{\max} \leftarrow \max\{v_{\max} ,v_i\}$.
\item \textbf{Selection period:} Repeat for customer $i = \lfloor \gamma n \rfloor+ 1, \lfloor \gamma n \rfloor+ 2\dots, n$:
\begin{itemize}
\item If $v_i \geq v_{\max}$, then select customer $i$ and stop the algorithm.
\item Otherwise, reject customer $i$.
\end{itemize}
\end{enumerate}
\caption{Observation-Selection Algorithm ($\text{OSA}_\gamma$, $\gamma \in (0,1)$)}
\label{algorithm:observation-selection}
\end{algorithm}
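To make the algorithm concrete, the following sketch simulates $\text{OSA}_\gamma$ under one plausible discretization of our arrival model (the sampler is our illustrative construction, not the formal model definition, and all function names are ours): each customer independently joins the stochastic group with probability $p$; stochastic customers then occupy a uniformly random set of positions, in uniformly random relative order, while the remaining customers keep the adversarial order.

```python
import math
import random

def sample_arrival_order(values, p, rng):
    # Each customer joins the stochastic group independently w.p. p;
    # stochastic customers land in uniformly random slots (in random
    # relative order); the rest keep the adversarial order of `values`.
    n = len(values)
    stoch = [i for i in range(n) if rng.random() < p]
    stoch_set = set(stoch)
    adv = iter([i for i in range(n) if i not in stoch_set])
    rng.shuffle(stoch)
    slots = set(rng.sample(range(n), len(stoch)))
    stoch_iter = iter(stoch)
    return [values[next(stoch_iter)] if j in slots else values[next(adv)]
            for j in range(n)]

def osa_success_rate(values, p, gamma, trials, seed=0):
    # Empirical probability that OSA_gamma selects the highest value.
    rng = random.Random(seed)
    best = max(values)
    cutoff = math.floor(gamma * len(values))
    wins = 0
    for _ in range(trials):
        seq = sample_arrival_order(values, p, rng)
        vmax = max(seq[:cutoff], default=-float("inf"))
        pick = next((v for v in seq[cutoff:] if v >= vmax), None)
        wins += (pick == best)
    return wins / trials
```

For $p=1$ the sampler produces a uniform random permutation, so with $\gamma = 1/e$ the empirical success rate should hover near the classical $1/e \approx 0.37$; for $p=0$ with an increasing adversarial order, the algorithm never succeeds.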
\noindent{In Appendix~\ref{sec:proof-b=1}, we analyze the success probability of Algorithm~\ref{algorithm:observation-selection} and prove the following theorem:}
\begin{theorem}\label{thm:b=1}
Under the partially predictable model, in the limit $n \rightarrow \infty$, the success probability of $\text{OSA}_\gamma$ approaches $\gamma p \log \frac{1}{\gamma p + 1-p}$.
\end{theorem}
By optimizing over $\gamma$, we obtain the following corollary:
\begin{corollary}\label{cor:best-obs-length}
Let $\gamma^* \in(0,1)$ be the unique solution to $$ \log (\gamma^* p + 1-p) + \frac{\gamma^* p}{\gamma^* p + 1-p}=0;$$
then, $\text{OSA}_{\gamma^*}$ achieves the highest success probability among $\text{OSA}_\gamma$ for all $\gamma \in (0,1)$.
\end{corollary}
\begin{table}[h]
{
\begin{center}
\begin{tabular}[h]{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$p$ & $0.1$& $0.2$ & $0.3$&$0.4$&$0.5$&$0.6$&$0.7$&$0.8$&$0.9$&$1$\\
\hline
$\gamma^*$ & $0.4935$ & $0.4863$ & $0.4784$ & $0.4696$ & $0.4597$ & $ 0.4482$ & $ 0.4348$ & $ 0.4184$ & $ 0.3975$ & $0.3679$ \\
\hline
$\text{OSA}_{\gamma^*}$ & $0.0026$ & $ 0.0105$ & $0.0244$ & $0.0448$ & $ 0.0724$ & $ 0.1081 $ & $ 0.1533 $ & $ 0.2095 $ & $ 0.2796 $ & $ 0.3679$ \\
\hline
\end{tabular}
\end{center}
}
\caption{The optimal length of the observation period, $\gamma^*$, and the success probability of $\text{OSA}_{\gamma^*}$ vs.\ $p$.}
\label{table:b=1}
\end{table}
Table~\ref{table:b=1} presents the optimal length of the observation period, $\gamma^*$, and the success probability of $\text{OSA}_{\gamma^*}$ for different values of $p$.
We observe that as the size of the stochastic component increases, i.e., as $p$ increases, the length of the observation period decreases, whereas the success probability increases.
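The entries of Table~\ref{table:b=1} can be reproduced numerically: $\gamma^*$ solves the first-order condition in Corollary~\ref{cor:best-obs-length} (whose left-hand side is strictly increasing in $\gamma$, so bisection suffices), and the success probability is the expression from Theorem~\ref{thm:b=1}. A minimal sketch (helper names are ours):

```python
import math

def success_prob(gamma, p):
    # Asymptotic success probability of OSA_gamma (Theorem thm:b=1).
    return gamma * p * math.log(1.0 / (gamma * p + 1.0 - p))

def gamma_star(p, tol=1e-12):
    # Bisection for the root of log(g*p + 1-p) + g*p/(g*p + 1-p) = 0
    # (Corollary cor:best-obs-length); the left-hand side is strictly
    # increasing in g on (0, 1), negative near 0 and positive at 1.
    f = lambda g: math.log(g * p + 1.0 - p) + g * p / (g * p + 1.0 - p)
    lo, hi = 1e-12, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For $p=1$ this recovers the classical $\gamma^* = 1/e \approx 0.3679$ with success probability $1/e$, matching the last column of the table.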
Next, in the following proposition, we establish a lower bound on the success probability when we randomize over the length of the observation period ($\gamma$); further, we present an example showing that such randomization increases the success probability for $p<1$. This illustrates the benefit of employing randomized algorithms in the presence of an adversarial component in the arrival sequence.
\begin{proposition}\label{thm:randomized-b=1}
Under the partially predictable model, for any $0 < \gamma_1 < \gamma_2 < 1$ and $0<q<1$, the randomized algorithm that runs $\text{OSA}_{\gamma_1}$ with probability $q$ and $\text{OSA}_{\gamma_2}$ with probability $1-q$ has an asymptotic success probability of at least $$ q s_1 +(1-q) s_2 +\min \left\{ (1-q)p(1-p)(1-\gamma_2), q(1-p)\frac{\gamma_2-\gamma_1}{1-\gamma_1}s_1 \right\}, $$
where for $i=1,2$, $s_i$ denotes the success probability of $\text{OSA}_{\gamma_i}$.
\end{proposition}
The proposition is proven in Appendix~\ref{sec:proof-b=1}. As an example, suppose $p=0.5$; randomizing over $\gamma_1=0.427$ and $\gamma_2=0.69$ with $q=0.824$ results in a success probability of at least $0.083$ (by the bound of Proposition~\ref{thm:randomized-b=1}).
On the other hand, the success probability of the best possible $\text{OSA}_\gamma$ with a deterministic observation period, given in Theorem~\ref{thm:b=1} and Corollary~\ref{cor:best-obs-length}, is $0.072$.
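The numbers in this example follow directly from plugging the expression of Theorem~\ref{thm:b=1} into the bound of Proposition~\ref{thm:randomized-b=1}; a short sanity-check sketch (helper names are ours):

```python
import math

def osa_success(gamma, p):
    # Asymptotic success probability of OSA_gamma (Theorem thm:b=1).
    return gamma * p * math.log(1.0 / (gamma * p + 1.0 - p))

def randomized_bound(gamma1, gamma2, q, p):
    # Lower bound of Proposition thm:randomized-b=1 for the algorithm
    # that runs OSA_{gamma1} w.p. q and OSA_{gamma2} w.p. 1 - q.
    s1, s2 = osa_success(gamma1, p), osa_success(gamma2, p)
    extra = min((1 - q) * p * (1 - p) * (1 - gamma2),
                q * (1 - p) * (gamma2 - gamma1) / (1 - gamma1) * s1)
    return q * s1 + (1 - q) * s2 + extra
```

With $p=0.5$, $\gamma_1=0.427$, $\gamma_2=0.69$, and $q=0.824$, the bound evaluates to about $0.083$, exceeding the $0.072$ achieved by the best deterministic observation period.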
\subsection{Comparison with Existing Algorithms}\label{sec:bad-instance-for-other-models}
In this section, we show that, under our demand arrival model, there exists a class of instances for which our algorithms achieve higher revenue than algorithms designed for either the worst-case \citep{Ball2009} or the random-order model \citep{devanur2009adwords,Agrawal2009b}, which respectively correspond to $p = 0$ and $p=1$ in our model.
To this end, we consider the instance ${\vec{v}_{I}}$ where $$v_{I,j}=\begin{cases}a&\text{ for }1 \leq j \leq b, \\ 0 &\text{ for } j > b .\end{cases}$$
\begin{table}[h]
{
\footnotesize
\centering
\begin{tabular}[h]{|c|c|c|c|c|}
\hline
Algorithm & Worst-Case & Random-Order & Algorithm~\ref{algorithm:hybrid} & Algorithm~\ref{algorithm:adaptive-threshold} \\
& {\tiny{(\cite{Ball2009})}} & {\tiny{(Idea of~\cite{Agrawal2009b})}} & (Non-Adaptive Algorithm) & (Adaptive Algorithm)\\
\hline
Ratio & $\frac{1}{2-a}$ & at most $p+\frac{b}{n}(1-p)$ & $p+\frac{1-p}{2-a}-O\left(\frac{1}{a(1-p)p}\sqrt{\frac{\log n}{b}}\right)$ & $1$ \\
\hline
\end{tabular}
}
\caption{Ratio between the expected revenue of different algorithms and that of the optimum offline solution.}
\label{table:compare-algorithms}
\end{table}
Table~\ref{table:compare-algorithms} presents the ratio between the expected revenue of different online algorithms and that of the optimum offline solution.
In the following, we explain how we compute these bounds.
Before doing so, we discuss the implications of this example.
This instance class shows that, for any $p \in (0,1)$, when $b = \omega (\sqrt{\log n})$ and $b = o(n)$, the ratio for both of our algorithms is better than that of existing algorithms.
Further, note that the ratio for Algorithm~\ref{algorithm:hybrid} is in fact its competitive ratio; thus, the same ratio holds for any other instance as well.
This implies that the competitive ratio of our non-adaptive algorithm is higher than those of \cite{Ball2009} and~\cite{Agrawal2009b} under the partially predictable model.
Also note that, for the same instance, randomizing between the algorithm of \cite{Ball2009} (with probability $1 - p$) and that of \cite{Agrawal2009b} (with probability $p$) leads to a ratio of $\frac{1-p}{2-a} + p^2 + o(1)$, which is not the convex combination of the competitive ratios of these two algorithms (as also pointed out in Remark~\ref{rem:alg1:1}).
Next, we calculate the ratios listed in Table~\ref{table:compare-algorithms}. The offline solution is $OPT({\vec{v}_{I}})=ab$.
The algorithm of \cite{Ball2009}, proposed for the adversarial model, has a fixed threshold of $\frac{1}{2-a}b$ for accepting type-$2$ customers, and hence accepts $\frac{1}{2-a}b$ type-$2$ customers.
Next, we compute the ratio for algorithms designed for the random-order model (e.g., \cite{devanur2009adwords,Agrawal2009b}, and \cite{Kesselheim}).
For the sake of brevity, we present an analysis based on the main idea of these papers, which is to allocate inventory at a roughly uniform rate over the entire horizon.
In particular, these algorithms accept roughly $\lambda b$ customers by any time $\lambda\in[0,1]$.
As a result, for this instance, they accept at most $b^2/n$ type-$2$ customers up to time $\lambda=b/n$.
Under our model, in the arriving instance ${\vec{v}}$, approximately $(1-b/n)bp$ type-$2$ customers arrive after time $b/n$.
Therefore, these algorithms can accept at most $b^2/n + (1-b/n)bp$ type-$2$ customers, which corresponds to a ratio of at most $ p + \frac{b}{n}(1-p)$.
Note that $p + \frac{b}{n}(1-p) < p+\frac{1-p}{2-a}$ for any $b < \frac{n}{2-a}$.
Our Algorithm~\ref{algorithm:hybrid} achieves a ratio of at least its competitive ratio, as given in Theorem~\ref{thm:hybrid}, and this ratio is tight for this instance (up to an additive error term of $O\left(\frac{1}{a(1-p)p}\sqrt{\frac{\log n}{b}}\right)$).
For Algorithm~\ref{algorithm:adaptive-threshold}, let $c \in (0,1)$ be an arbitrary constant. We show that $ALG_{2,c}$ achieves a ratio of $1$ because the third condition in Algorithm~\ref{algorithm:adaptive-threshold}, i.e., the dynamic threshold, is never violated. To see this, we compute the threshold as follows:
\begin{align*}
\lfloor \phi b + c \left(b - u_1(\lambda) \right)^+ \rfloor =
\begin{cases}
\lfloor \phi b \rfloor & \lambda < \delta = \frac{\phi b}{n}\\
\lfloor \phi b + c b \rfloor & \lambda \geq \delta
\end{cases}
\end{align*}
where we use the fact that $u_1(\lambda) = b$ for $\lambda < \delta$, and $u_1(\lambda) = 0$ for $\lambda \geq \delta$.
In both cases, we have $\lfloor \phi b + c \left(b - u_1(\lambda) \right)^+ \rfloor > \lambda$, which implies that the algorithm never rejects a type-$2$ customer because $o_2(\lambda) \leq \lambda < \lfloor \phi b + c \left(b - u_1(\lambda) \right)^+ \rfloor$.
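The comparison in Table~\ref{table:compare-algorithms} can be sanity-checked numerically on this instance. The sketch below encodes only the leading terms of the ratios (ignoring the additive error of Algorithm~\ref{algorithm:hybrid} and lower-order terms); function names are ours:

```python
def worst_case_ratio(a):
    # Fixed-threshold policy of Ball & Queyranne on this instance.
    return 1.0 / (2.0 - a)

def random_order_ratio(p, b, n):
    # Upper bound for rate-matching (random-order) algorithms here.
    return p + (b / n) * (1.0 - p)

def hybrid_ratio(p, a):
    # Leading term of the non-adaptive algorithm's competitive ratio
    # on this instance; the adaptive algorithm achieves ratio 1.
    return p + (1.0 - p) / (2.0 - a)
```

For, say, $a=0.5$, $p=0.5$, $b=10^3$, and $n=10^6$, the worst-case and random-order ratios are about $0.667$ and $0.5005$, while the leading term for the non-adaptive algorithm is about $0.833$; the random-order bound only catches up when $b$ grows linearly in $n$, consistent with the condition $b < \frac{n}{2-a}$ above.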
\section{Introduction}
\label{sec:intro}
E-commerce platforms host markets for perishable resources in various industry sectors ranging from airlines to hotels to internet advertising.
In these markets, demand realizes sequentially, and the firms need to make online (irrevocable) decisions regarding how (and at what price) to allocate resources to arriving demand without precise knowledge of future demand.
The success of any online allocation algorithm depends crucially on a firm's ability to predict future demand.
If demand can be predicted, then under some conditions on the amount of available resources, making online decisions incurs little loss (as shown in \cite{Agrawal2009b}, among others).
However, in many markets, demand cannot be perfectly predicted due to unpredictable components such as traffic spikes and strategy changes by competitors.
The emergence of sharing-economy platforms such as Airbnb, which can scale supply at negligible cost and on short notice \citep{AirBnb}, has significantly added to unpredictable variability in demand, even for products that are not new (e.g., existing hotels).
In such cases, firms can take a worst-case approach and assume that demand is controlled by an imaginary adversary and thus is unpredictable. Such an approach, however, usually results in online policies that are too conservative (as studied in \cite{Ball2009} and others).
Instead, firms may wish to employ online policies based on models that assume future demand can be partially predicted, avoiding being too conservative while not relying on fully accurate predictions.
This paper aims to investigate to what extent the above goal is achievable. We propose a new demand model, called {\em partially predictable}, that contains both adversarial (thus unpredictable) and stochastic (predictable) components.
We design novel algorithms to demonstrate that, even though demand is assumed to include an unpredictable component, firms can make use of the limited information that the data reveals and improve upon the completely conservative approach.
We study a basic online allocation problem of a single resource with an arbitrary capacity to a sequence of customers, each of whom belongs to one of two types.
Each customer demands one unit of the resource. If the resource is allocated, the firm earns a type-dependent revenue: type-$1$ and type-$2$ customers generate revenue of $1$ and $a <1$, respectively.
Our demand model takes a parameter $0 < p < 1$ and works as follows.
An unknown number of customers of each type will be revealed to the firm in an unknown order; both the number and the order of customers are assumed to be controlled by an imaginary adversary. However, a fraction $p$ of randomly chosen customers does {\em not follow} this prescribed order and instead arrives at uniformly random times. This group of customers represents the stochastic component of the demand that is mixed with the adversarial element.
Although we cannot identify which customers belong to the stochastic group, we can still {\em partially} predict future demand, because this group is almost uniformly spread over the time horizon. Therefore, the parameter $p$ determines the level of predictability of demand.
From a practical point of view, our demand model requires {\em no forecast} of the number of customers of each type prior to arrival; instead, it assumes a rather mild ``regularity'' in the arrival pattern: a fraction $p$ of customers of each type is spread throughout the time horizon. We motivate this through a simple example. Suppose an airline launches a new flight route for which it has no demand forecast. However, using historical data on customer booking behavior, the airline knows that there is heterogeneity in booking behavior, namely, the time at which customers request a booking varies across customers of each type.
Such heterogeneity results in the gradual arrival of a portion of customers of each type.
For example, \cite{Adv_Booking} illustrates a significant disparity in the advance booking behavior of business travelers based on their age, gender, and travel frequency. Therefore, the airline can reasonably assume that demand from business travelers (who correspond to type-$1$ in our model) is, to some degree, spread over the sale horizon.
From a theoretical point of view, our demand model aims to address the limitations of the two main approaches taken in the literature thus far: (1) adversarial models and (2) stochastic ones.\footnote{A few papers consider arrival models outside these two categories. We carefully review them and compare them with our model in Section~\ref{sec:review}.} Under the adversarial modeling approach, the sequence of arrivals is assumed to be completely unpredictable. The online algorithms developed for these models aim to perform well in the worst-case scenario, often resulting in very conservative bounds (see \cite{Ball2009} for the single-resource revenue management problem and \cite{Mehta2007a} and \cite{buchbinder2009design} for online allocation problems in internet advertising).
On the other hand, the stochastic modeling approach assumes that demand follows an unknown distribution \citep{kleinberg2005multiple,devanur2009adwords,Agrawal2009b}.\footnote{In fact, these papers assume a more general model, the random-order model, which we discuss in Section~\ref{sec:review}.} In this case, we can predict future demand after observing a small fraction of it. For example, after observing the first $10\%$ of the demand, if we observe that $15\%$ of customers are of type-$1$, we can predict that roughly $15\%$ of the remaining customers are also of type-$1$. The limitation of such an approach is that it cannot model variability across time. In some cases, real data does not confirm the stochastic structure presumed in these models, as shown in \citet{WangTraffic} and \citet{Shamsi}. In fact, as discussed in \citet{Mirrokni2012} and \citet{esfandiari2015online}, large online markets (such as internet advertising systems) often use modified versions of these algorithms to make them less reliant on accurate demand prediction. Our model provides a middle ground between the aforementioned approaches by assuming that the arrival sequence contains both an adversarial component and a stochastic one.
For the above problem, we design two online algorithms (a non-adaptive and an adaptive one\footnote{We call an algorithm ``adaptive'' if it makes decisions based on the sequence of arrivals it has observed so far.}) that perform well in the partially predictable model.
We use the metric of competitive ratio, which is commonly used to evaluate the performance of online algorithms. The competitive ratio is the worst-case ratio of the revenue of the online scheme to that of a clairvoyant solution (see Definition~\ref{def:comp}).
The competitive ratio of our algorithms is parameterized by $p$, and for both algorithms the ratio increases with $p$: {\em as the relative size of the stochastic component grows, the loss due to making online decisions decreases.} We further show that using an adaptive algorithm is particularly beneficial when the capacity scales linearly with the maximum number of customers. Our algorithms are easily adjustable with respect to the parameter $p$. Therefore, if a firm wishes to use different levels of predictability for different products, it can use the same algorithm with different parameters $p$.
In designing our algorithms, we keep track of the number of accepted customers of each type, and we decide whether to accept an arriving type-$2$ customer by comparing the number of already accepted type-$2$ customers with optimally designed {\em dynamic} thresholds.\footnote{We always accept a type-$1$ customer if there is remaining inventory.}
Our non-adaptive algorithm strikes a balance between ``smoothly'' allocating the inventory over time (by not accepting many type-$2$ customers toward the beginning) and not protecting too much inventory for potential late-arriving type-$1$ customers (see Algorithm~\ref{algorithm:hybrid} and Theorem~\ref{thm:hybrid}).
Our adaptive algorithm frequently recomputes upper bounds on the number of future customers of each type based on observed data and uses these upper bounds to ensure that we protect enough inventory for future type-$1$ customers. We show that such an adaptive policy significantly improves the performance guarantee when the initial inventory is large relative to the maximum number of customers (see Algorithm~\ref{algorithm:adaptive-threshold} and Theorem~\ref{thm:adaptive-threshold}). Both algorithms could reject a type-$2$ customer early on but accept another type-$2$ customer later. This is consistent with practice: for example, in online airline booking systems, lower fare classes can open up after previously being closed out \citep{CheapoAir}.
From a methodological standpoint, the analysis of the competitive ratio of our algorithms presents many new technical challenges, arising from the fact that our arrival model contains both an adversarial and a stochastic component. Our analysis crucially relies on a concentration result that we establish for our arrival model (see Lemma~\ref{lemma:needed-centrality-result-for-m=2}), as well as fairly intricate case analyses for both algorithms. Further, to prove the lower bound on the competitive ratio of our adaptive algorithm, we construct a novel {\em factor-revealing} nonlinear mathematical program (see~\ref{MP1} and Section~\ref{subsec:adapt:com}).
The two extreme cases of our model, where all or none of the customers belong to the adversarial group (i.e., $p = 0$ and $p=1$), reduce to the adversarial and stochastic modeling approaches that have mainly been studied in the literature thus far (for instance, \cite{Ball2009} study the former model and \cite{Agrawal2009b} study the latter).
Our algorithms recover the known performance guarantees for these two extreme cases. For the regime in between (i.e., when $0 < p < 1$), we show that our algorithms achieve competitive ratios better than what can be achieved by any of the algorithms designed for these extreme cases (or even any combination of them). This highlights the need to design new algorithms when departing from traditional arrival models.
We also study the classic secretary problem under our partially predictable arrival model.
The secretary problem, a stopping-time problem, corresponds to the setting in which we initially have one unit of inventory; each customer is of a different type, and we wish to maximize the probability of allocating the inventory to the type generating the highest revenue. We show that, unlike in the classic setting (which corresponds to $p =1$ in our model), the celebrated stopping rule based on a deterministic observation period is no longer optimal. Due to the presence of the adversarial component, randomizing over the length of the observation period may result in improvement (see Algorithm~\ref{algorithm:observation-selection}, Theorem~\ref{thm:b=1}, and Proposition~\ref{thm:randomized-b=1}).
We conclude this section by highlighting our motivations and contributions. For many applications, demand arrival processes are inherently prone to contain unpredictable components that even advanced information technologies cannot mitigate. An allocation policy whose design is based on stochastic modeling cannot incorporate the presence of such unpredictable components.
At the same time, taking a worst-case adversarial approach usually leads to allocation policies that are too conservative. We introduce the {\em first arrival model} that contains {\em both adversarial (thus unpredictable) and stochastic (predictable)} components. Through novel algorithm design, we show that (1) we can take advantage of even limited available information (due to the presence of the stochastic component) to improve a firm's revenue when compared to algorithms that take a worst-case approach, and that (2) there is an unavoidable loss due to the presence of an adversarial component, which emphasizes {\em the value of stochastic information and predictability} in online resource allocation.
The rest of the paper is organized as follows. In Section~\ref{sec:review}, we review the related literature and highlight the differences between the current paper and previous work.
In Section~\ref{sec:prem}, we formally introduce our demand arrival model and our performance metric, and prove a consequential concentration result for the arrival process.
Sections~\ref{sec:alg1} and~\ref{sec:alg2} are dedicated to description and analysis of our two algorithms.
In Section~\ref{sec:model:discussion}, we present upper bounds on the performance of any online algorithm, and we compare the performance of our algorithms with that of existing ones.
Section~\ref{sec:secretary} studies the secretary problem under our new arrival model.
In Section~\ref{sec:conclusion}, we conclude by outlining several directions for future research.
For the sake of brevity, we include proofs of only selected results in the main text. Detailed proofs of the remaining statements are deferred to clearly marked appendices.
\section{Literature Review }
\label{sec:review}
Online allocation problems have broad applications in revenue management, internet advertising, and the scheduling of appointments (through web applications) in health care, to name a few. Thus, they have been studied in various forms in operations research and management science, as well as in computer science.
As discussed in the introduction, the approach taken in modeling the arrival process is the first consequential step in studying these problems. Therefore, in this literature review, we categorize related streams of research by modeling approach rather than by the particular problem formulation and application.
First, we note that the single-leg revenue management (RM) problem and its generalizations have been extensively studied using frameworks other than online resource allocation problems \vmn{and competitive analysis}. Earlier papers assumed {\em low-before-high} models (where all low-fare demand realizes before high-fare demand) with known demand distributions \citep{belobaba1987survey,belobaba1989or,brumelle1993airline,littlewood2005special}
or assumed the arrival process is known, and formulated the problem as a Markov decision problem \citep{lee1993model,lautenbacher1999underlying}. We refer the reader to \cite{talluri2006theory} for a comprehensive review of RM literature.
Further, many recent papers in revenue management study dynamic pricing when the underlying price-sensitive demand process is unknown; see, for example, the seminal work of \cite{besbes2009dynamic} and \cite{araman2009dynamic}. For the sake of brevity, we do not review these streams of work.
\vspace{2mm}
{\bf Adversarial models:}
\cite{Ball2009} studied the single-leg revenue management problem under an adversarial model and showed that in the two-fare case the optimal competitive ratio is $\frac{1}{2-a}$ where $a <1$ is the ratio of two fares. As discussed in the introduction, our model reduces to that of \cite{Ball2009} for $p=0$. In this special case, our non-adaptive algorithm reduces to the threshold policy of \cite{Ball2009} and recovers the same performance guarantee. However, when $0 < p < 1$, we show that for a certain class
of instances our algorithms perform better than that of \cite{Ball2009} (see Subsection~\ref{sec:bad-instance-for-other-models}), indicating the need for new algorithms for our new arrival model.
Several papers studied the adwords problem under the adversarial model \citep{Mehta2007a,buchbinder2009design}. This problem concerns allocating ad impressions to budget-constrained advertisers. As mentioned in \cite{Mehta2007a}, even though the optimal competitive ratio under an adversarial model is $1 - 1/e$, one would expect to do better when statistical information is available. Later, \cite{Mirrokni2012} showed that it is impossible to design an algorithm with a near-optimal competitive ratio under both adversarial and random arrival models. Such an impossibility result affirms the need for new modeling approaches to serve as a middle ground between these two models. Our paper takes a step in this direction.
\vspace{2mm}
{\bf Stationary stochastic models:}
A general form of these models is the \emph{random order model}, which assumes that the sequence of arrivals is a random permutation of an arbitrary sequence \citep{kleinberg2005multiple,devanur2009adwords,Agrawal2009b}.
In such a model, after observing a small fraction of the input, one can predict the pattern of future demand. This intuition is used to develop primal- and dual-based online algorithms that achieve near-optimal revenue, under appropriate conditions on the relative amount of available resources to allocate. These algorithms rely heavily on learning from observed data, either once \citep{devanur2009adwords} or repeatedly \citep{kleinberg2005multiple,Agrawal2009b,Kesselheim}. As discussed in the introduction, arrival patterns can exhibit high variability across time, limiting the performance of these algorithms in practice \citep{Mirrokni2012,esfandiari2015online}.
We note that assuming i.i.d. arrivals with known or unknown distributions also falls into this category of modeling approaches. Several revenue management papers provided asymptotic analysis of linear programming-based (LP-based) approaches for such settings; see \cite{Talluri:1998, Cooper2002} and \cite{Jasin2015}.
Our model reduces to a special case of the model of \cite{Agrawal2009b} for $p = 1$, and like their algorithm, ours also achieves near-optimal revenue when $p=1$. However, when $0 < p < 1$, we show, in Subsection~\ref{sec:bad-instance-for-other-models}, that for a certain class of instances our algorithms perform better than that of \cite{Agrawal2009b}.
\vspace{2mm}
{\bf Nonstationary stochastic models:}
Motivated by advanced service reservation and scheduling, \cite{Van-Ahn2} and \cite{Van-Ahn1} studied online allocation problems where demand arrival follows a {\em known} nonhomogeneous Poisson process. For such settings, they developed online algorithms with constant competitive ratios. Further, \cite{Ciocan2012} considered another interesting setting where the (unknown) arrival process belongs to a broad class of stochastic processes. They proved a constant factor guarantee for the case where arrival rates are uniform.
Our modeling strategy differs from both approaches by assuming that a $(1-p)$ fraction of the input is adversarial. Even for the stochastic component, we assume no prior knowledge of the distribution. However, we limit the adversary's power by assuming that these two components are {\em mixed}. Also, we note that the aforementioned papers studied more general allocation problems in settings like network revenue management.
\vspace{2mm}
{\bf Other models:}
Several earlier papers also acknowledged and addressed the limitation of both the adversarial and random order (or stochastic) models using various approaches.
\citet{Mahdian2007} and \citet{Mirrokni2012} considered allocation problems where the demand can \emph{either} be perfectly estimated or adversarial.
They designed and analyzed algorithms that have good performance guarantees in both worst-case and average-case scenarios.
Unlike these works, our demand model contains \emph{both} stochastic and adversarial components at the same time, and we design algorithms that take advantage of partial predictability.
Another approach to address unpredictable patterns in demand is to use robust stochastic optimization \citep{BanTal, Bertsimas_RobustLP}. These papers
aim to optimize allocations when the demand belongs to a class of distributions (or uncertainty set).
This approach limits the adversary's power by restricting the class of demand distributions. Here, we take a different approach: we do not limit the class of distributions that the adversary can choose from; instead, we assume that a fraction $p$ of the demand will not follow the adversary.
\citet{Lan2008} also took a robust approach, studying the single-leg multi-fare-class revenue management problem in a setting where the only prior knowledge about demand consists of lower and upper bounds on the number of customers from each fare class.
They used these fixed bounds to develop optimal static policies in the form of nested booking limits, and also showed that dynamically adjusting these policies can improve the competitive ratio.
Unlike their work, we do not assume prior knowledge of lower and upper bounds on the number of customers from each class.
Instead, in our model, we learn the bounds as the sequence unfolds.
\citet{Shamsi} used a real data set from display advertising at AOL/Advertising.com to show that arrival patterns do not satisfy the crucial property implied by assuming a random order model for demand. In particular, they showed that the dual prices of the offline allocation problem at different times can vary significantly.
They used a risk minimization framework to devise allocation rules that outperform existing algorithms when applied to AOL data. Even though the results are practically promising, the paper provides no performance guarantee, nor does it offer insights on how to model traffic in practice.
Further, \citet{esfandiari2015online} considered a hybrid arrival model where the input comprises known stochastic i.i.d.\ demand and an unknown number of arrivals that are chosen by an adversary (motivated by traffic spikes).
They do not assume any knowledge of the traffic spikes, but the performance guarantee of their algorithm is parameterized by $\lambda$, roughly the fraction of the revenue in the optimal solution that is obtained from the stochastic (predictable) part of the demand.
Parameter $\lambda$ plays a similar role as parameter $p$ in our model, in that it controls the adversary's power.
However, the underlying arrival processes in these two models differ considerably and cannot be directly compared.
In particular, we do not assume any prior knowledge of the stochastic component; instead, we partially predict it.
However, we do assume that the adversary determines only the initial order of arrivals (i.e., before knowing which customer will eventually follow its order).
Our work is also closely related to the literature on the secretary problem. In the original formulation of the problem, $n$ secretaries with unique values arrive in uniformly random order; the goal is to maximize the probability of hiring the most valuable secretary.
The optimal solution to this problem is an observation-selection policy: observe the first $n/e$ secretaries, then select the first one whose value exceeds that of the best of the previously observed secretaries
\citep{lindley1961dynamic,dynkin1963optimum,freeman1983secretary,ferguson1989solved}. Recently, \citet{kesselheim2015secretary} relaxed the assumption of uniformly random order and analyzed the performance of the above policy under certain classes of nonuniform distributions over permutations. Here, we study the secretary problem under our new arrival model (i.e., only a $p$ fraction of secretaries arrive in uniformly random order) and show that a deterministic observation period is not optimal.
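For the classical setting (the $p=1$ case of our model), the success probability of observing $k$ secretaries and then hiring the first one beating all of them admits the standard closed form $P(k) = \frac{k}{n}\sum_{j=k+1}^{n} \frac{1}{j-1}$. The sketch below verifies numerically that the maximizer is near $n/e$; variable names are ours.

```python
# Exact success probability of the classical observe-then-select policy
# under a uniformly random order: observe the first k secretaries, then
# hire the first one who beats all seen so far.
# Standard formula: P(k) = (k/n) * sum_{j=k+1}^{n} 1/(j-1), with
# P(0) = 1/n (hire the first secretary unconditionally).

def success_probability(k, n):
    if k == 0:
        return 1.0 / n
    return (k / n) * sum(1.0 / (j - 1) for j in range(k + 1, n + 1))

n = 100
best_k = max(range(n), key=lambda k: success_probability(k, n))
print(best_k)  # close to n/e, i.e., roughly 37 for n = 100
```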
\section{Missing proofs of Section~\ref{sec:model:discussion}}
\label{sec:app:model:discussion}
\begin{proof}{\textbf{Proof of Proposition~\ref{thm:impossibility}:}}
We prove that the competitive ratio of any online algorithm is at most $p+\frac{1-p}{2-a}+3\left(\frac{pb^2}{n}\right)$.
Note that when $\frac{pb^2}{n}>1/2$, the quantity $p+\frac{1-p}{2-a}+3\left(\frac{pb^2}{n}\right)$ is greater than $1$, and hence the upper bound trivially holds.
Thus, in the following, we assume, without loss of generality, that $\frac{pb^2}{n}\leq 1/2$.
We consider two adversarial instances ${\vec{v}_{I}}$ and ${\vec{w}_{I}}$ defined as
\begin{equation*}
v_{I,j} = \begin{cases}
a, \qquad & 1 \leq j \leq b, \\
0, \qquad & b < j \leq 2b, \\
0, \qquad & j > 2b.
\end{cases} ~~~~~
w_{I,j} = \begin{cases}
a, \qquad & 1 \leq j \leq b, \\
1, \qquad & b < j \leq 2b, \\
0, \qquad & j > 2b.
\end{cases}
\end{equation*}
Let $\mathcal U$ denote the event in which,
in the arrival sequence, none of the first $b$ arrivals belongs to positions $[b+1, 2b]$ in the {\em initial} customer sequence, i.e.,
for all $i \in [b]$, we have $i \notin {\mathcal{S}}$ or $\sigma^{-1}_{\mathcal{S}}(i) \notin [b+1,2b]$. Here we use the following definition: for $x, y \in \mathbb{N}$ with $x < y$, $[x,y] \triangleq \{x, x+1, \ldots,y\}$; further, $[y] \triangleq [1,y]$.
Note that under event $\mathcal U$, no online algorithm can distinguish whether the initial sequence is ${\vec{v}_{I}}$ or ${\vec{w}_{I}}$ up to time $b/n$.
We first compute the probability of event $\mathcal U$ as follows:
\begin{align}
\prob{\mathcal U} &= \prob{\text{for all } i \in [b]: i \notin {\mathcal{S}} \text{ or } \sigma^{-1}_{\mathcal{S}}(i) \notin [b+1,2b] } \nonumber \\
&\geq 1- \sum_{i \in [b]} \prob{ i \in {\mathcal{S}} \text{ and } \sigma_{\mathcal{S}}^{-1}(i) \in [b+1, 2b] } &(\text{Union bound}) \nonumber \\
&\geq 1-\frac{pb^2}{n},\label{eq:event:U}
\end{align}
where the last inequality holds because of the following inequality (which we prove next): for all $i\neq j$,
\begin{align}\label{observation:small-prob-go-to-other-location}
\prob{i \in {\mathcal{S}} \text{ and } \sigma_{{\mathcal{S}}}^{-1}(i)=j}\leq \frac{p}{n}.
\end{align}
To prove \eqref{observation:small-prob-go-to-other-location}, we first note that for any $i$, we have $p = \prob{i \in {\mathcal{S}}} = \sum_{j=1}^n \prob{i \in {\mathcal{S}} \text{ and } \sigma_{{\mathcal{S}}}^{-1}(i)=j}$.
Second, denoting by $R$ the random variable corresponding to the size of the stochastic group, we have $\prob{ \sigma_{{\mathcal{S}}}^{-1}(i)=i | i \in {\mathcal{S}} , R}=\frac{1}{R} \geq \frac{1}{n}$, and thus $\prob{ \sigma_{{\mathcal{S}}}^{-1}(i)=i | i \in {\mathcal{S}}} \geq \frac{1}{n}$.
Therefore,
\begin{align*}
& \sum_{j\neq i} \prob{i \in {\mathcal{S}} \text{ and } \sigma_{{\mathcal{S}}}^{-1}(i)=j} = p - \prob{i \in {\mathcal{S}} \text{ and } \sigma_{{\mathcal{S}}}^{-1}(i)=i} \\
= & p - \prob{ \sigma_{{\mathcal{S}}}^{-1}(i)=i | i \in {\mathcal{S}}}\prob{ i \in {\mathcal{S}}} \leq p-\frac{p}{n}=\frac{(n-1)p}{n}.
\end{align*}
By symmetry, for each $j \neq i$, $\prob{i \in {\mathcal{S}} \text{ and } \sigma_{{\mathcal{S}}}^{-1}(i)=j}\leq \frac{p}{n}$, which proves \eqref{observation:small-prob-go-to-other-location}.
This completes our proof of inequality \eqref{eq:event:U}.
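Inequality \eqref{observation:small-prob-go-to-other-location} can also be checked by brute force on small instances. The sketch below assumes one natural reading of the arrival model, namely that each of the $n$ customers joins the stochastic group ${\mathcal{S}}$ independently with probability $p$ and that, given $i \in {\mathcal{S}}$, the position $\sigma_{{\mathcal{S}}}^{-1}(i)$ is uniform over ${\mathcal{S}}$; the function name is ours.

```python
# Brute-force check of the inequality
#   P(i in S and sigma_S^{-1}(i) = j) <= p/n   for all j != i,
# assuming each of the n customers joins S independently with
# probability p, and sigma_S^{-1}(i) is uniform over S given i in S
# (so the event also requires j in S).
from itertools import combinations

def prob_i_maps_to_j(i, j, n, p):
    """Exact P(i in S and sigma_S^{-1}(i) = j) for j != i."""
    total = 0.0
    for r in range(2, n + 1):  # S must contain both i and j
        for S in combinations(range(n), r):
            if i in S and j in S:
                total += p**r * (1 - p)**(n - r) / r
    return total

n, p = 6, 0.7
for i in range(n):
    for j in range(n):
        if i != j:
            assert prob_i_maps_to_j(i, j, n, p) <= p / n + 1e-12
print("verified for n =", n, "and p =", p)
```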
Under event $\mathcal{U}$, in both problem instances, the revenue of each customer accepted up to time $b/n$ is $a$.
Conditioned on event $\mathcal{U}$, we denote by $q_2$ the expected number of accepted type-$2$ customers up to time $b/n$ under either problem instance (recall that under event $\mathcal{U}$, up to time $b/n$, the online algorithm cannot distinguish the two).
We now proceed to find an upper bound on the expected revenue of any online algorithm under the two problem instances. We start with ${\vec{w}_{I}}$:
\begin{align*}
\E{ALG({\vec{W}})}
&\leq \E{ALG(\vec{W}) \,\middle|\, \mathcal{U} } \prob{\mathcal{U}} + OPT({\vec{w}_{I}}) \left(1- \prob{\mathcal{U}} \right) \\
&\leq q_2 a + (b-q_2) + \frac{pb^2}{n} OPT({\vec{w}_{I}}).
\end{align*}
Next, we establish an upper bound on the expected revenue under ${\vec{v}_{I}}$ by
proving an upper bound on the number of type-$2$ customers that arrive after time $b/n$, conditioned on the event $\mathcal{U}$:
\begin{align*}
\E{\left|\left\{ i \geq b+1 \,\middle|\, V_i = a \right\} \right| \,\middle|\, \mathcal{U}} &=\sum_{i=b+1}^n \prob{ V_i = a \,\middle|\,\mathcal{U}} \\
& \leq \frac{\sum_{i=b+1}^n \prob{ i \in {\mathcal{S}} \text{ and } \sigma_{\mathcal{S}}^{-1} (i) \in [b] }}{\prob{\mathcal{U}}} \\
&\leq \frac{(n-b) b\frac{p}{n}}{1-\frac{pb^2}{n}} &(\text{Inequalities}~\eqref{eq:event:U},~\eqref{observation:small-prob-go-to-other-location}) \\
& \leq \frac{pb}{1-\frac{pb^2}{n}}
\leq pb \left( 1 + 2\left(\frac{pb^2}{n}\right) \right),
\end{align*}
where we use $\frac{pb^2}{n} \leq 1/2$ in the last inequality. Note that $\E{ ALG({\vec{V}}) \,\middle|\,\mathcal{U}} \leq q_2 a + a \E{\left|\left\{ i \geq b+1 \,\middle|\, V_i = a \right\} \right| \,\middle|\, \mathcal{U}}$. As a result,
\begin{align*}
\E {ALG({\vec{V}}) }
&\leq \E{ ALG({\vec{V}}) \,\middle|\,\mathcal{U}} \prob{\mathcal{U}} + OPT(\vec{v}_{I}) \left( 1 - \prob{\mathcal{U}} \right) \\
&\leq q_2 a + \left(1 + 2 \left( \frac{pb^2}{n} \right)\right) pba + \left( \frac{pb^2}{n} \right) OPT(\vec{v}_{I}) \\
&\leq q_2 a + pba + 3\left( \frac{pb^2}{n} \right) OPT(\vec{v}_{I}). &(OPT(\vec{v}_{I}) = ba \geq pba)
\end{align*}
Thus, the competitive ratio is at most
\begin{align*}
\min \left\{ \frac{\E{ALG({\vec{V}})}}{OPT({\vec{v}_{I}})}, \frac{\E{ALG({\vec{W}})}}{OPT(\vec{w}_{I})} \right\}
&\leq \min \left\{ \frac{q_2}{b} +p+ 3 \left( \frac{pb^2}{n} \right), \frac{q_2}{b} a + \left( 1-\frac{q_2}{b}+ \frac{pb^2}{n} \right) \right\} \nonumber \\
&\leq \min \left\{ \frac{q_2}{b} +p, \frac{q_2}{b} a + \left( 1-\frac{q_2}{b} \right) \right\} + 3 \left( \frac{pb^2}{n} \right) \nonumber \\
&\leq p + \frac{1-p}{2-a} + 3 \left( \frac{pb^2}{n} \right),
\end{align*}
where the last inequality holds because the function $g(q_2) \triangleq \min \left\{ \frac{q_2}{b} +p, \frac{q_2}{b} a + \left( 1-\frac{q_2}{b} \right) \right\}$, defined on $q_2 \in [0,b]$, achieves its maximum at $q^*_2 = \frac{1-p}{2-a} b$, and $g(q^*_2) \leq p + \frac{1-p}{2-a}$.
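This last maximization step can be verified numerically. Normalizing $q \triangleq q_2/b$, the sketch below checks on a grid that $g$ is maximized at $q^* = \frac{1-p}{2-a}$ with value $p+\frac{1-p}{2-a}$; the parameter values are illustrative.

```python
# Numeric check of the final step: with q = q_2 / b, the function
#   g(q) = min(q + p, q*a + (1 - q)),  q in [0, 1],
# attains its maximum at q* = (1 - p)/(2 - a),
# where g(q*) = p + (1 - p)/(2 - a).

def g(q, p, a):
    return min(q + p, q * a + (1 - q))

p, a = 0.3, 0.5            # illustrative parameter values
q_star = (1 - p) / (2 - a)
bound = p + (1 - p) / (2 - a)
grid_max = max(g(i / 10000.0, p, a) for i in range(10001))
print(q_star, bound, grid_max)
```

Since the first branch of $g$ is increasing and the second is decreasing in $q$, the minimum of the two is maximized where they intersect, which is exactly $q^*$.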
\end{proof}
\if false
\section{Asymptotic results based on approximate analysis}
\label{sec:app:approx}
\subsection{Upper bounds in Asymptotic Case}\label{sec:asymptotic-ub}
In this section, we assume $b,n \rightarrow \infty$ while $b/n = \kappa$ where $\kappa$ is a positive constant.
We have the following conjecture for this asymptotic case:
\begin{conjecture}\label{conj:ata-tightness}
Under the partially predictable model, when $ b= \kappa n$ where $\kappa$ is a positive constant and $n \rightarrow \infty$, no online algorithm, deterministic or randomized, can achieve a competitive ratio better than $c^*$, the optimal objective value of (\ref{MP1}).
\end{conjecture}
If true, this conjecture would imply the asymptotic tightness of the adaptive algorithm.
While rigorous proof of this conjecture is beyond the scope of this article, we show how to prove it under two approximations.
The first approximation allows partial customers to arrive in continuous time:
\begin{approximation}[Continuous Instance]\label{assumption:continuous instance}
A valid problem instance ${\vec{v}_I}$ is specified by two non-decreasing functions $\eta_1, \eta_2:[0,1]\to [0,n]$ that represent the number of type-$1$ (type-$2$) customers arriving up to time $\lambda \in [0,1]$. These functions satisfy:
$\eta_1(0)=\eta_2(0)$, and for any $\lambda\in[0,1]$,
$\frac{\partial \left[\eta_1(\lambda)+\eta_2(\lambda)\right]}{\partial \lambda n}\leq 1.$
We again denote $n_1 = \eta_1(1)$ and $n_2 = \eta_2(1)$ to be the total number of type-$1$ and type-$2$ customers, respectively.
\end{approximation}
The second approximation ignores the error terms and assumes that \eqref{eq:estimate} holds with equality:
\begin{approximation}[Deterministic Observation]\label{assumption:determ-observ}
For any instance ${\vec{v}}$ and every $\lambda$, we have $$o_1(\lambda) =\lambda p n_1 + (1-p)\eta_1(\lambda)\text{ and }o_2(\lambda) =\lambda p n_2 + (1-p)\eta_2(\lambda).$$
\end{approximation}
Focusing on asymptotic instances allows us to drastically decrease the variances of random events, and make the two approximations almost exact. Note that the feasibility of a competitive ratio $c$ in (MP1) only depends on the ratio $b/n$, and hence it does not change by scaling the problem.
While rigorous proof of Conjecture~\ref{conj:ata-tightness} is beyond the scope of this thesis, we manage to prove the following proposition with these two approximations:
\begin{proposition}\label{prop:ata-tight}
Under Approximations~\ref{assumption:continuous instance} and~\ref{assumption:determ-observ}, no online algorithm, deterministic or randomized, can achieve a competitive ratio better than $c^*$.
\end{proposition}
\begin{proof}
Based on the optimal solution of (\ref{MP1}), $(\lambda^*, n_1^*, n_2^*, \eta_1^*, \eta_2^*, c^*)$, we construct two adversarial instances ${\vec{v}_I}$ and $\vec{\widehat{v}'}$ that are indistinguishable by any online algorithm up to time $\lambda^*$.
Using Approximation~\ref{assumption:continuous instance}, we define instance ${\vec{v}_I}$ by functions $\eta_1$ and $\eta_2$, and $\vec{\widehat{v}'}$ by functions $\widehat{\eta_1}$ and $\widehat{\eta_2}$.
We define the first instance ${\vec{v}_I}$ by setting, for $j=1,2$, $\eta_j(\lambda^*)= \eta_j^*$ and $\eta_j(1)= n_j^*$, and apply linear interpolation for other values of $\lambda$. Formally,
\begin{align*}
\eta_j(\lambda) \triangleq \begin{cases}
\frac{\lambda}{\lambda^*}\eta_j^* &\text{ for } 0 \leq \lambda \leq \lambda^*. \\
\frac{1-\lambda}{1-\lambda^*}\eta_j^* + \frac{\lambda-\lambda^*}{1-\lambda^*} n_j &\text{ for }\lambda^* < \lambda \leq 1. \\
\end{cases}
\end{align*}
We construct the second instance $\vec{\widehat{v}'}$ with the following properties: (i) the number of observed type-$j$ customers up to time $\lambda^*$ is the same as in instance ${\vec{v}_I}$, i.e., for all $\lambda \leq \lambda^*$, $\widehat{o}_j(\lambda) = {o}_j(\lambda)$;
(ii) instance $\vec{\widehat{v}'}$ has as many type-$1$ customers as possible after time $\lambda^*$.
Recall that at time $\lambda^*$, from an online algorithm's perspective, we have the following upper bounds on $\widehat{n_1}$ and $\widehat{n_1}+\widehat{n_2}$:
\begin{align*}
\widehat{n_1} \leq u_1^* \triangleq &\min \left\{ \frac{\lambda^* p n_1^* + (1-p)\eta_1^* }{\lambda^* p}, \frac{\lambda^* p n_1^* + (1-p)\eta_1^* + p (1-p)(1-\lambda^*)n}{1-p+\lambda^* p} \right\}, \\
\widehat{n_1}+\widehat{n_2} \leq u_{1,2}^* \triangleq & \min \left\{ \frac{\lambda^* p (n_1^*+n_2^*) + (1-p)(\eta_1^* + \eta_2^*) }{\lambda^* p}, \right. \\ & \left. \frac{\lambda^* p (n_1^*+n_2^*) + (1-p)(\eta_1^* + \eta_2^*) + p (1-p)(1-\lambda^*)n}{1-p+\lambda^* p} \right\}.
\end{align*}
Note that the second terms of the minima ensure that $u_1^* \leq n$ and $u_{1,2}^* \leq n$.
Thus, we can define $\vec{\widehat{v}'}$ to have $\widehat{n_1} = u_1^*$,
$\widehat{n_2} = u_{1,2}^* - u_1^*$.
To put as many type-$1$ customers as possible after $\lambda^*$, we set:
\begin{align*}
\widehat{\eta_1}(\lambda^*) = \widehat{\eta_1} &\triangleq \max \left\{ 0, \widehat{n_1} - (1-\lambda^*)n \right\}, \\
\widehat{\eta_2}(\lambda^*) = \widehat{\eta_2} &\triangleq \max \left\{ 0, \widehat{n_2} - (1-\lambda^*) n - \widehat{\eta_1} \right\}.
\end{align*}
Now we define instance $\vec{\widehat{v}'}$ by linear interpolation between time $0$ and $\lambda^*$ and between $\lambda^*$ and $1$:
\begin{align*}
\widehat{\eta_j}(\lambda) \triangleq \begin{cases}
\frac{\lambda}{\lambda^*}\widehat{\eta_j} &\text{ for } 0 \leq \lambda \leq \lambda^*. \\
\frac{1-\lambda}{1-\lambda^*}\widehat{\eta_j} + \frac{\lambda-\lambda^*}{1-\lambda^*} \widehat{n_j} &\text{ for }\lambda^* < \lambda \leq 1. \\
\end{cases}
\end{align*}
It is easy to check that both ${\vec{v}_I}$ and $\vec{\widehat{v}'}$ are valid continuous instances under
Approximation~\ref{assumption:continuous instance}.
Now we consider the arriving instances ${\vec{v}}$ and $\widehat{{\vec{v}}}$ generated from adversarial instances ${\vec{v}_I}$ and $\vec{\widehat{v}'}$, respectively.
We can show that, under Approximation~\ref{assumption:determ-observ}, online algorithms cannot distinguish between ${\vec{v}}$ and $\widehat{{\vec{v}}}$ up to time $\lambda^*$.
Formally, for all $\lambda \in[0, \lambda^*]$ and $j=1,2$, we have $o_j(\lambda) = \widehat{o_j}(\lambda)$.
To see this, we note both $o_j(\lambda)$ and $\widehat{o_j}(\lambda)$ are linear in $\lambda \in [0, \lambda^*]$. Thus it is enough to show that
$o_j(\lambda^*) = \widehat{o_j}(\lambda^*)$, which is easy to verify.
Now, because ${\vec{v}}$ and $\vec{\widehat{{v}}}$ are indistinguishable until time $\lambda^*$, we know that any online algorithm must accept the same number of customers in both cases.
In particular, the expected number of accepted type-$2$ customers by time $\lambda^*$, which we denote by $q_2$, must be the same for both instances.
The competitive ratio of any online algorithm is at most
\begin{align*}
\min \left\{ \frac{\E{ALG({\vec{V}})}}{OPT({\vec{v}_I})} ,\frac{\E{ALG(\vec{\widehat{{V}}})}}{OPT(\vec{\widehat{v}'})}\right\}.
\end{align*}
If $q_2 \leq (\frac{1-ac^*}{1-a} )b-c^* \min \{ u_1^* , b\}$, then we have not accepted enough type-$2$ customers to guarantee $c^*$-competitiveness in ${\vec{v}_I}$:
\begin{align*}
\frac{\E{ALG({\vec{V}})}}{OPT({\vec{v}_I})} < & \frac{n_1^* + a (n_2^* - o_2(\lambda^*) + (\frac{1-ac^*}{1-a} )b-c^* \min \{ u_1^* , b\})}{(1-a)n_1^* + a \min \left\{ b, n_1^*+ n_2^* \right\}} &(Constraint~\eqref{constraint:n1<=b}) \\
\leq &c^*. &(Constraint~\eqref{constraint:not-c-competitive})
\end{align*}
On the other hand, if $q_2 > (\frac{1-ac^*}{1-a} )b-c^* \min \{ u_1^* , b\}$, then we will not have enough inventory for type-$1$ customers arriving after $\lambda^*$:
\begin{align*}
\frac{\E{ALG(\vec{\widehat{{V}}})}}{OPT(\vec{\widehat{v}'})} \leq & \frac{(b -q_2) + a q_2}{ ab + (1-a)\min\{b, u_1^*\}} &(\widehat{n_1}+ \widehat{n_2} = u_{1,2}^*\text{ and Constraint}~\eqref{constraint:u_2>=b}) \\
< & \frac{b-(1-a) \left( (\frac{1-ac^*}{1-a} )b-c^* \min \{ u_1^* , b\}\right)}{ ab + (1-a)\min\{b, u_1^*\}} \\
= &c^*.
\end{align*}
This completes the proof.
\end{proof}
\subsection{Robustness}\label{sec:robust}
In this section, we show that our adaptive algorithm is robust against a slightly off estimation of the parameter $p$.
For notational convenience, we define $ALG_{2,c,p}$ to be $ALG_{2,c}$ with parameter $p$, and we study the performance of $ALG_{2,c,p}$ when the true parameter is $\hat{p}$.
For the sake of brevity, we only consider robustness under Approximations~\ref{assumption:continuous instance} and~\ref{assumption:determ-observ}.
Under these approximations, the algorithms are modified so that for all $\lambda$,
\begin{align*}
u_1 (\lambda) & \triangleq
\min \left\{ \frac{o_1(\lambda)}{\lambda p}, \frac{o_1(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\}, \\
u_{1,2} (\lambda) & \triangleq
\min \left\{ \frac{o_1(\lambda)+o_2(\lambda)}{\lambda p}, \frac{o_1(\lambda) +o_2(\lambda) + (1-\lambda) (1-p) n}{1-p+\lambda p} \right\}.
\end{align*}
In this section, we use $c^*( p )$ to denote the solution of (\ref{MP1}) under parameter $p$.
In the next proposition (proved later in this section), we show that $c^*( p )$ is continuous and non-decreasing in $p$:
\begin{proposition}\label{claim:small-c(p)-(p-hat)}
For all $ 0 < \hat{p}\leq p <1 $,
$$ 0 \leq c^*(p) - c^*(\hat{p}) \leq \frac{4(1-a)}{a}\log\left(\frac{p(1-\hat{p})}{\hat{p}(1-p)}\right) .$$
\end{proposition}
Note that the proposition above is a property of (\ref{MP1}) but not a property of the algorithm.
Furthermore, we have the following proposition (proved later in this section) showing that our adaptive algorithm is robust to a small error in estimating the degree of randomness (in the proof we also give the competitive ratio for the case $p \geq \hat{p} + \delta_{\hat{p}}$):
\begin{proposition}\label{prop:robust}
Under Approximations~\ref{assumption:continuous instance}, and~\ref{assumption:determ-observ},
if the true probability in the partially predictable model is $\hat{p}$, then,
\begin{enumerate}
\item if $p < \hat{p}$, then $ALG_{2,c,p}$ has a competitive ratio of at least $c$ for all $c\leq c^*(p)$.
\item if $\hat{p}<p<\hat{p} + \delta_{\hat{p}}$, where $\delta_{\hat{p}}$ is a small positive constant, then $ALG_{2,c,p}$ has a competitive ratio of at least
$$ \begin{cases}
c^*(\hat{p}) - \frac{4(1-a)}{1-c} \log\left(\frac{p(1-\hat{p})}{\hat{p}(1-p)}\right) &,\text{ if }c^*(\hat{p})<c \leq c^*(p),\\
c\left[ 1- \frac{(1-a)(p - \hat{p})}{p} \right] &,\text{ if }c \leq c^*(\hat{p}).\end{cases}$$
\end{enumerate}
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{claim:small-c(p)-(p-hat)}]
The proof relies on the following lemma:
\begin{lemma}\label{lemma:small:derivative-c*}
For any $p\in (0,1)$,
$$ \frac{{\rm d} c^*(p)}{{\rm d} p} \leq \frac{4(1-a)}{ap(1-p)}.$$
\end{lemma}
\begin{proof}
For any fixed $\bar p \in (0,1)$, we discuss what happens to $c^*(p)$ when we increase the value of $p$ from $\bar p$ to $\bar p + {\rm d} p$ where ${\rm d} p$ is a small positive number approaching $0$.
We start with an optimal solution $(\lambda^*, n_1^*, n_2^*, \eta_1^*, \eta_2^*, c^*)$ of $\ref{MP1}(\bar p)$.
We first prove that the optimal solution satisfies the following conditions:
\begin{subequations}
\begin{align}
c^* &= f(n_2^*, \eta_2^*, \tilde u_1^*, \bar p) \label{define-f}
\end{align}
where $f(n_2, \eta_2, \tilde u_1, p) \triangleq \frac{a\left((1-\lambda^* p) n_2 - (1-p)\eta_2 + \frac{b}{1-a} \right) + n_1^*}{a\min\{n_1^*+n_2, b\}+(1-a)n_1^*+\frac{a^2b}{1-a}+a \min\{\tilde u_1, b\}}$.
\begin{align}
\eta_1^*+\eta_2 ^*&= \min\{n_1^*+n_2^*, \lambda^* n\}. \label{eta1+eta2-big} & \\
n_1^*+n_2^* &\leq b. & \label{n1+n2=b}
\end{align}
\end{subequations}
Condition~\eqref{define-f} is obtained by setting \eqref{constraint:not-c-competitive} to an equality and expressing $\tilde o_2$ as $(1-p)\eta_2 - p\lambda n_2$.
We can derive Condition~\eqref{eta1+eta2-big} because $f( n_2, \eta_2, \tilde u_1, p)$ is decreasing in both $\eta_2$ and $\eta_1$ (this holds because $f$ is non-increasing in $ \tilde u_1$ and $ \tilde u_1$ is increasing in $\eta_1$).
Also note that \eqref{constraint:eta_1+eta_2<=lambdan}-\eqref{constraint:n2'<=n2} gives $\eta_1+\eta_2 \leq \min\{n_1+n_2, \lambda n\}$.
While fixing $\lambda, n_1$, and $n_2$, $\min\{n_1+n_2, \lambda n\}$ is the maximum value of $\eta_1 +\eta_2$.
Thus, in order to minimize $f$, Condition~\eqref{eta1+eta2-big} holds.
Now let us prove Condition~\eqref{n1+n2=b}.
Assume, on the contrary, that $n_1^*+n_2^*>b$. We note that decreasing $n_2$ does not change the value of the denominator of the function $f$.
On the other hand, the numerator of the function $f$ is decreasing either when $n_2$ and $\eta_2$ decrease by the same amount or when $n_2$ decreases by itself.
Therefore, replacing $(n_2^*, \eta_2^*)$ with $ (b-n_1^*, \max\{\eta_2^*+b-n_1^*-n_2^*, 0\})$ gives a smaller value of $f$.
Further, it is easy to verify that replacing $(n_2^*, \eta_2^*)$ with $ (b-n_1^*, \max\{\eta_2^*+b-n_1^*-n_2^*, 0\})$ still gives a feasible solution.
Thus, Condition~\eqref{n1+n2=b} holds.
In the following, our goal is to find feasible solutions of~\ref{MP1}($\bar p+{\rm d}p$) for all small enough positive numbers ${\rm d}p$.
We first note that the tuple $(\lambda^*, n_1^*, n_2^*, \eta_1^*, \eta_2^*, c^*)$ satisfies Constraints~\eqref{constraint:x<=1} through~\eqref{constraint:n1'+n2'big} in~\ref{MP1}$(\bar p+{\rm d}p)$.
However, Constraint~\eqref{constraint:u_2>=b} is not necessarily satisfied.
Therefore, we may need to define an alternative tuple with increased values of $n_1, n_2, \eta_1, \eta_2$.
In what follows, we construct a feasible tuple by keeping $n_1=n_1^*$ and $\eta_1=\eta_1^*$, and increasing the values of $n_2$, $\eta_2$, $c$.
More precisely, we define
\begin{subequations}
\begin{align}
n_2(p) & \triangleq n_2^* +\frac{2b}{\bar p(1-\bar p)}(p-\bar p)\text{, }
\label{def:n_2p}\\
\eta_2 (p) & \triangleq \eta_2^* + \min\{n_2(p)-n_2^*, \lambda^* n- \eta_1^*-\eta_2 ^*\}\text{, and }\label{def:eta_2p}\\
c(p)& \triangleq c^* + \frac{4(1-a)}{a\bar p(1-\bar p)}(p-\bar p).\label{def:cp}
\end{align}
\end{subequations}
Note that at $p=\bar p$, $n_2(\bar p)= n_2^*$, $\eta_2(\bar p)= \eta_2^*$, and $c(\bar p) =c^*$.
Next, we show that if ${\rm d}p$ is a small enough positive number, then the tuple $(\lambda^*, n_1^*, n_2(\bar p+{\rm d}p), \eta_1^*, \eta_2(\bar p+{\rm d}p), c(\bar p+{\rm d}p))$ is in the feasible set of~\ref{MP1}$(\bar p +{\rm d}p)$.
First, we note that $(\lambda^*, n_1^*, n_2(\bar p+{\rm d}p), \eta_1^*, \eta_2(\bar p+{\rm d}p), c(\bar p+{\rm d}p))$ still satisfies Constraints~\eqref{constraint:x<=1}-\eqref{constraint:n1'+n2'big} in $\ref{MP1}(p')$ due to \eqref{eta1+eta2-big} and \eqref{n1+n2=b}.
In what follows, we show that it satisfies Constraints~\eqref{constraint:u_2>=b} and~\eqref{constraint:not-c-competitive} separately.
\noindent{\textbf{Constraints~\eqref{constraint:not-c-competitive}}}
First, we show that for small enough positive ${\rm d}p$, $(\lambda^*, n_1^*, n_2(\bar p+{\rm d}p), \eta_1^*, \eta_2(\bar p+{\rm d}p), c(\bar p+{\rm d}p))$ satisfies Constraint~\eqref{constraint:not-c-competitive} in $\ref{MP1}(\bar p+{\rm d}p)$.
Due to ~\eqref{define-f}, it suffices to show that when evaluated at $p=\bar p$,
\begin{align}
\frac{{\rm d}c(p)}{{\rm d} p} & > \frac{{\rm d} f(n_2(p), \eta_2(p), \tilde u_1(p), p)}{{\rm d} p}, \label{ineq:c'-c*dp>=df/dp}
\end{align}
where we note that $\tilde u_1$ is a function of $p$ even though $(n_1,\eta_1, \lambda) $ are fixed at $(n_1^*,\eta_1^*, \lambda^*)$.
For simplicity, we denote $ \tilde u_1^* \triangleq \tilde u_1(\bar p)$.
The right-hand side of \eqref{ineq:c'-c*dp>=df/dp} is the total derivative with respect to $p$, which can be expressed as
\begin{align}
& \frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial n_2} \frac{{\rm d}n_2(p)}{{\rm d}p} + \frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial \eta_2} \frac{{\rm d}\eta_2(p)}{{\rm d}p} +\frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial \tilde u_1} \frac{{\rm d} \tilde u_1(p)}{{\rm d}p} \nonumber
\\ & + \frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial p}, \label{equa:total-diff}
\end{align}
where the partial derivatives are evaluated at $(n_2, \eta_2, \tilde u_1,p) = ( n_2^* ,\eta_2^* , \tilde u_1^*, \bar p )$ and the total derivatives are evaluated at $p=\bar p$ throughout the proof.
We can further derive an upper bound for each term in \eqref{equa:total-diff}.
First, using \eqref{n1+n2=b} and the fact that $a\min\{n_1^*+n_2', b\}+(1-a)n_1^* \geq a n_2 ^*+ n_1 ^*$, we have
\begin{align}
\frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial n_2} &= \frac{1}{{\rm d}n_2}\left( \frac{a\left((1-\lambda^* \bar p) (n_2^*+{\rm d}n_2) - (1-\bar p)\eta_2^* + \frac{b}{1-a} \right) + n_1^*}{a (n_2^*+{\rm d}n_2) + n_1^*+\frac{a^2b}{1-a}+a \min\{\tilde u_1^*, b\}} \right. \nonumber \\ & - \left. \frac{a\left((1-\lambda^* \bar p) n_2^* - (1-\bar p)\eta_2^* + \frac{b}{1-a} \right) + n_1^*}{a n_2^* + n_1^*+\frac{a^2b}{1-a}+a \min\{\tilde u_1^*, b\}} \right) \nonumber\\
\leq & \frac{1}{{\rm d}n_2}\left( \frac{a\left((1-\lambda^* \bar p) (n_2^*+{\rm d}n_2) - (1-\bar p)\eta_2^* + \frac{b}{1-a} \right) + n_1^*}{a n_2^* + n_1^*+\frac{a^2b}{1-a}+a \min\{\tilde u_1^*, b\}} \right. \nonumber \\ & - \left. \frac{a\left((1-\lambda^* \bar p) n_2^* - (1-\bar p)\eta_2^* + \frac{b}{1-a} \right) + n_1^*}{a n_2^* + n_1^*+\frac{a^2b}{1-a}+a \min\{\tilde u_1^*, b\}} \right) &(\text{see below}) \label{ineq:n_2_increasing}\\
= & \frac{a (1-\lambda^* \bar p )}{a n_2 ^*+ n_1^*+\frac{a^2b}{1-a}+a \min\{\tilde u_1^*, b\}}
\leq \frac{a}{\frac{a^2b}{1-a}} = \frac{1-a}{ab}. \nonumber
\end{align}
To derive \eqref{ineq:n_2_increasing}, we use the fact that when ${\rm d}p>0$, $n_2(\bar p + {\rm d}p)>n_2^*$, and thus ${\rm d}n_2 > 0$.
Using the above inequality and \eqref{def:n_2p},
\begin{align}
\frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial n_2} \frac{{\rm d}n_2(p)}{{\rm d}p} \leq \frac{1-a}{ab} \frac{2b}{\bar p(1-\bar p)} = \frac{2(1-a)}{a\bar p(1-\bar p)}.\label{inequ:partial-n2}
\end{align}
For the term regarding the partial derivative of $\eta_2$, we have
\begin{align*}
\frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial \eta_2}
& = \frac{\partial}{\partial\eta_2} \frac{a\left((1- \lambda^* \bar p) n_2^* - (1-\bar p)\eta_2 + \frac{b}{1-a} \right) + n_1^*}{a n_2^* + n_1^*+\frac{a^2b}{1-a}+a \tilde u_1^*} &(\eqref{n1+n2=b}) \nonumber \\
& = - \frac{a(1-\bar p)}{a n_2^* + n_1^*+\frac{a^2b}{1-a}+a \tilde u_1^*} \leq 0.
\end{align*}
Further, from \eqref{def:eta_2p}, $\frac{{\rm d}\eta_2(p)}{{\rm d}p} \geq 0$, and thus
\begin{align} \frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial \eta_2} \frac{{\rm d}\eta_2(p)}{{\rm d}p} \leq 0. \label{ineq:partial-eta-2}
\end{align}
Now let us consider the term regarding the partial derivative of $\tilde u_1$.
We consider two cases $\tilde u_1 ^* > b$ and $\tilde u_1 ^* \leq b$ separately.
For the case $\tilde u_1 ^* > b$, since $\tilde u_1$ is continuous in $p$, for small enough ${\rm d} p$ we have $\tilde u_1 (\bar p +{\rm d} p)>b$, and thus
\begin{align*}
\frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial \tilde u_1} = 0.
\end{align*}
Therefore, in this case,
\begin{align}
\frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial \tilde u_1} \frac{{\rm d} \tilde u_1(p)}{{\rm d}p} =0 .\label{ineq-partial-u1-trivial-0}
\end{align}
For the other case, $\tilde u_1 ^* \leq b$, we have
\begin{align*}
\frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial \tilde u_1}
& = \frac{\partial}{\partial \tilde u_1} \frac{a\left((1- \lambda^* \bar p) n_2 - (1-\bar p)\eta_2 + \frac{b}{1-a} \right) + n_1^*}{a n_2 + n_1^*+\frac{a^2b}{1-a}+a \tilde u_1} &(\eqref{n1+n2=b}) \nonumber \\
& = - a \frac{a\left((1- \lambda^* \bar p) n_2^* - (1-\bar p)\eta_2^* + \frac{b}{1-a} \right) + n_1^*}{(a n_2^* + n_1^*+\frac{a^2b}{1-a}+a \tilde u_1^*)^2} \nonumber \\
& = - \frac{ac^*}{a n_2^* + n_1^*+\frac{a^2b}{1-a}+a \tilde u_1^*} &(\eqref{define-f})
\\
& \geq -\frac{ac^*}{\frac{a^2b}{1-a}} = - \frac{(1-a)c^*}{ab}.
\end{align*}
In order to bound $\frac{{\rm d}\tilde u_1(p)}{{\rm d}p}$ (under the case $\tilde u_1^* \leq b$), we distinguish two cases based on the definition of $\tilde u_1 ^*$ in~\ref{MP1}($\bar p$).
Note that $\frac{(1-\bar p)\eta_1^* + \lambda^* \bar p n_1^*}{\lambda^* \bar p} \leq \frac{(1-\bar p)\eta_1 ^*+ \lambda ^* \bar p n_1 ^* +(1-\bar p)(1-\lambda^*)n}{1-\bar p+\lambda^* \bar p}$ is equivalent to $\bar p(\lambda^*(1-\lambda^*)n-\lambda^* n_1^* +\eta_1^*) \geq \eta_1^*$.
The first case is $\tilde u_1^*=\frac{(1-\bar p)\eta_1^* + \lambda^* \bar p n_1^*}{\lambda ^* \bar p} \leq \frac{(1-\bar p)\eta_1^* + \lambda^* \bar p n_1^* +(1-\bar p)(1-\lambda^*)n}{1-\bar p+\lambda^* \bar p}$, which happens when $\bar p(\lambda^*(1-\lambda^*)n-\lambda^* n_1^* +\eta_1^*) \geq \eta_1^*$.
Since $(\bar p+{\rm d}p)(\lambda^*(1-\lambda^*)n-\lambda^* n_1^* +\eta_1^*)\geq \bar p(\lambda^*(1-\lambda^*)n-\lambda^* n_1^* +\eta_1^*) \geq \eta_1^* $, we have $\tilde u_1(\bar p+{\rm d}p) = \frac{(1-(\bar p+{\rm d}p))\eta_1^* + \lambda^* (\bar p+{\rm d}p) n_1^*}{\lambda ^* (\bar p+{\rm d}p)}$.
Furthermore, $ \frac{(1-\bar p)\eta_1^* + \lambda^* \bar p n_1^*}{\lambda ^* \bar p} = \tilde u_1^* \leq b$ implies $ \eta_1^* \leq \frac{\lambda^* \bar p b}{1-\bar p}$.
Combining the above equality and inequality, we obtain
\begin{align}
\frac{{\rm d}\tilde u_1(p)}{{\rm d}p} = \frac{{\rm d}}{{\rm d}p}\frac{(1-p)\eta_1^* + \lambda^* p n_1^*}{\lambda^* p}
= -\frac{ \eta_1^*}{\lambda^* \bar p^2} \geq - \frac{b}{\bar p(1-\bar p)}. \label{ineq:partial-u1-dp-case1}
\end{align}
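The last inequality in \eqref{ineq:partial-u1-dp-case1} can be verified directly by substituting the bound $\eta_1^* \leq \frac{\lambda^* \bar p b}{1-\bar p}$ derived above:
\begin{align*}
-\frac{ \eta_1^*}{\lambda^* \bar p^2} \geq -\frac{1}{\lambda^* \bar p^2} \cdot \frac{\lambda^* \bar p b}{1-\bar p} = - \frac{b}{\bar p(1-\bar p)}.
\end{align*}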
The other case is $\tilde u_1^* = \frac{(1-\bar p)\eta_1^* + \lambda^* \bar p n_1^* +(1-\bar p)(1-\lambda^*)n}{1-\bar p+\lambda^* \bar p}< \frac{(1-\bar p)\eta_1^* + \lambda^* \bar p n_1^*}{\lambda ^* \bar p} $, which happens when $\bar p(\lambda^*(1-\lambda^*)n-\lambda^* n_1^* +\eta_1^*) < \eta_1^*$.
For small enough and positive ${\rm d}p$, $(\bar p +{\rm d}p)(\lambda^*(1-\lambda^*)n-\lambda^* n_1^* +\eta_1^*) < \eta_1^*$, and thus $\tilde u_1(\bar p+{\rm d}p) = \frac{(1-(\bar p+{\rm d}p))\eta_1^* + \lambda^* (\bar p+{\rm d}p) n_1^* +(1-(\bar p+{\rm d}p))(1-\lambda^*)n}{1-\bar p+\lambda^* (\bar p+{\rm d}p)}$.
In addition, rearranging terms in the definition of the case, we have $-(1-\lambda^*) \lambda^* n > -\frac{(1-\bar p)}{\bar p}\eta_1^* - \frac{\lambda^* \bar p}{\bar p} n_1^*$.
As a result,
\begin{align}
\frac{{\rm d}\tilde u_1(p)}{{\rm d}p} & = \frac{{\rm d}}{{\rm d}p}\frac{(1-p)\eta_1^* + \lambda^* p n_1^* +(1-p)(1-\lambda^*)n}{1-p+\lambda^* p} \nonumber \\
& = \frac{{\rm d}}{{\rm d}p} \frac{\lambda^* p (n_1^*-\eta_1^*)-\lambda^* n}{1-p+\lambda^* p} \nonumber\\
& = \frac{-(1-\lambda^*) \lambda^* n + \lambda^*(n_1^*-\eta_1^*)}{(1-\bar p+\lambda^* \bar p )^2} \label{ineq:partial-u1-dp-case2}\\
& > \frac{-\frac{(1-\bar p)}{\bar p}\eta_1^* - \frac{\lambda^* \bar p}{\bar p} n_1^*+ \lambda^*(n_1^*-\eta_1^*)}{(1-\bar p+\lambda^* \bar p )^2} \nonumber
\\ & = - \frac{\eta_1^*}{\bar p(1-\bar p+\lambda^* \bar p) } > - \frac{b}{\bar p(1-\bar p)} .&
(\eqref{constraint:n1'<=n1} \text{ and }\eqref{n1+n2=b}) \nonumber
\end{align}
Overall, if $\tilde u_1^* \leq b$, in either case,
\begin{align}
\frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial \tilde u_1} \frac{{\rm d}\tilde u_1(p)}{{\rm d}p} \leq \frac{(1-a)c^*}{ab}\frac{b}{\bar p(1-\bar p)} = \frac{(1-a)c^*}{a\bar p(1-\bar p)}. \label{inequa:partial:u1-overall}
\end{align}
Since this bound is weaker than the bound of zero in~\eqref{ineq-partial-u1-trivial-0}, the above inequality holds for general $\tilde u_1^*$.
For the term regarding the partial derivative with respect to $p$, we have
\begin{align}
\frac{\partial f(n_2, \eta_2, \tilde u_1, p)}{\partial p}
& = \frac{\partial}{\partial p} \frac{a\left((1- \lambda^* p) n_2 - (1-p)\eta_2 + \frac{b}{1-a} \right) + n_1^*}{a n_2 + n_1^*+\frac{a^2b}{1-a}+a \min\{\tilde u_1, b\}} &(\eqref{n1+n2=b}) \nonumber \\
& = \frac{a\left(\eta_2^* - \lambda^* n_2 ^*\right) }{a n_2^* + n_1^* +\frac{a^2b}{1-a}+a \min\{\tilde u_1^*, b\}} \nonumber \\
& \leq \frac{ab}{\frac{a^2b}{1-a}} =\frac{1-a}{a}. &(\eqref{constraint:n2'<=n2} \text{ and }\eqref{n1+n2=b}) \label{inequ:parial_p}
\end{align}
Applying~\eqref{inequ:partial-n2},~\eqref{ineq:partial-eta-2},~\eqref{inequa:partial:u1-overall}, and~\eqref{inequ:parial_p} to~\eqref{equa:total-diff}, we obtain
\begin{align*}
\eqref{equa:total-diff} & \leq \frac{2(1-a)}{a\bar p(1-\bar p)} + \frac{(1-a)c^*}{a\bar p(1-\bar p)} + \frac{1-a}{a} < \frac{4(1-a)}{a\bar p(1-\bar p)} = \frac{{\rm d}c(p)}{{\rm d}p},\end{align*}
which implies \eqref{ineq:c'-c*dp>=df/dp} and thus completes the proof that $(\lambda^*, n_1^*, n_2(\bar p+{\rm d}p), \eta_1^*, \eta_2(\bar p+{\rm d}p), c(\bar p+{\rm d}p))$ satisfies Constraint~\eqref{constraint:not-c-competitive}.
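For completeness, the strict inequality in the last display can be verified term by term: assuming $c^* \leq 1$ (as holds for a competitive ratio) and using $\bar p(1-\bar p) \leq \frac{1}{4}$,
\begin{align*}
\frac{2(1-a)}{a\bar p(1-\bar p)} + \frac{(1-a)c^*}{a\bar p(1-\bar p)} + \frac{1-a}{a} = \frac{(1-a)\left(2+c^*+\bar p(1-\bar p)\right)}{a\bar p(1-\bar p)} < \frac{4(1-a)}{a\bar p(1-\bar p)},
\end{align*}
since $2+c^*+\bar p(1-\bar p) \leq 2+1+\frac{1}{4} < 4$.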
\noindent{\textbf{Constraint~\eqref{constraint:u_2>=b}}}
Now, we show that for small enough positive ${\rm d}p$, $(\lambda^*, n_1^*, n_2(\bar p+{\rm d}p), \eta_1^*, \eta_2(\bar p+{\rm d}p), c(\bar p+{\rm d}p))$ satisfies Constraint~\eqref{constraint:u_2>=b} in $\ref{MP1}(\bar p+{\rm d}p)$.
For notational convenience, we view $\tilde u_{1,2}$ as a function of $p$ and denote $\tilde u_{1,2}^* \triangleq \tilde u_{1,2}(\bar p)$ as the value of $\tilde u_{1,2}$ corresponding to the original solution in $\ref{MP1}(\bar p)$.
If $\tilde u_{1,2}^* > b$, then because $\tilde u_{1,2}(p)$ is continuous in $p$, for small enough and positive ${\rm d}p$, the modified tuple still gives $\tilde u_{1,2}(\bar p+{\rm d}p) \geq b$ and thus satisfies Constraint~\eqref{constraint:u_2>=b} in $\ref{MP1}(\bar p+{\rm d}p)$.
Due to \eqref{constraint:u_2>=b} in $\ref{MP1}(\bar p)$, $\tilde u_{1,2}^* \geq b$, and hence the case $\tilde u_{1,2}^* < b$ does not exist.
In the following, we consider the remaining case where $\tilde u_{1,2}^* = b$.
Similar to how we distinguished two cases of $\tilde u_1^*$ when proving that Constraint~\eqref{constraint:not-c-competitive} is satisfied, we consider different values of $\tilde u_{1,2}^* $ separately.
However, the proof here is trickier and we need to consider three cases separately, based on whether $\frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda ^* \bar p (n_1^*+n_2^*)}{\lambda^* \bar p}$ is smaller than, greater than, or equal to $\frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda^* \bar p (n_1^*+n_2^*) + (1-\bar p)(1-\lambda^*) n}{1- \bar p + \lambda ^* \bar p}$.
The first case is $\frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda ^* \bar p (n_1^*+n_2^*)}{\lambda^* \bar p} <\frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda^* \bar p (n_1^*+n_2^*) + (1-\bar p)(1-\lambda^*) n}{1- \bar p + \lambda ^* \bar p}$.
In this case, we have $\tilde u_{1,2}^* = \frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda ^* \bar p (n_1^*+n_2^*)}{\lambda^* \bar p}$.
Furthermore, when ${\rm d}p$ is a small enough positive number, $n_2(\bar p+{\rm d}p), \eta_2(\bar p+{\rm d}p),$ and $\bar p+{\rm d}p$ are close enough to the respective values of $n_2^*, \eta_2^*,$ and $\bar p$, and thus the corresponding value of $\tilde u_{1,2}$ still takes the same form.
Therefore,
\begin{align*}
& \frac{{\rm d}\tilde u_{1,2}(p)}{{\rm d}p} \\
= &\frac{\partial}{\partial n_2}\left( \frac{(1-p)(\eta_1^*+\eta_2) + \lambda^* p (n_1^*+n_2)}{\lambda^* p} \right) \frac{{\rm d}n_2(p)}{{\rm d}p} \\ &
+ \frac{\partial}{\partial \eta_2}\left( \frac{(1-p)(\eta_1^*+\eta_2) + \lambda^* p (n_1^*+n_2)}{\lambda^* p} \right) \frac{{\rm d}\eta_2(p)}{{\rm d}p} \\
& + \frac{\partial}{\partial p} \frac{(1-p)(\eta_1^*+\eta_2) + \lambda^* p (n_1^*+n_2)}{\lambda^* p} \\
\geq & \frac{{\rm d}n_2(p)}{{\rm d}p} + \frac{1-\bar p}{\lambda^* \bar p}\frac{{\rm d}\eta_2(p)}{{\rm d}p} - \frac{b}{\bar p(1-\bar p)} &(\eqref{ineq:partial-u1-dp-case1}) \\
\geq & \frac{2b}{\bar p(1-\bar p)} - \frac{b}{\bar p(1-\bar p)} = \frac{b}{\bar p(1-\bar p)}>0,&(\eqref{def:n_2p}\text{ and }\eqref{def:eta_2p})
\end{align*}
where the partial derivatives are evaluated at $( n_2, \eta_2, p) = ( n_2^*, \eta_2^* ,\bar p)$ throughout the proof.
As a result, when ${\rm d}p$ is a small enough positive number, $\tilde u_{1,2}(\bar p+{\rm d}p) \geq \tilde u_{1,2}(\bar p) \geq b$, and hence Constraint~\eqref{constraint:u_2>=b} is satisfied in $\ref{MP1}(\bar p+{\rm d}p)$.
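The bound $-\frac{b}{\bar p(1-\bar p)}$ invoked via \eqref{ineq:partial-u1-dp-case1} above can be verified in the same way as before: the partial derivative with respect to $p$ equals
\begin{align*}
\left.\frac{\partial}{\partial p} \frac{(1-p)(\eta_1^*+\eta_2^*) + \lambda^* p (n_1^*+n_2^*)}{\lambda^* p}\right|_{p = \bar p} = -\frac{\eta_1^*+\eta_2^*}{\lambda^* \bar p^2} \geq - \frac{b}{\bar p(1-\bar p)},
\end{align*}
where the last inequality uses $\eta_1^*+\eta_2^* \leq \frac{\lambda^* \bar p b}{1-\bar p}$, which follows from $\tilde u_{1,2}^* = b$ in this case.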
The second case is $\frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda ^* \bar p (n_1^*+n_2^*)}{\lambda^* \bar p} > \frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda^* \bar p (n_1^*+n_2^*) + (1-\bar p)(1-\lambda^*) n}{1- \bar p + \lambda ^* \bar p}$.
In this case, we have $\tilde u_{1,2}^* = \frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda^* \bar p (n_1^*+n_2^*) + (1-\bar p)(1-\lambda^*) n }{1- \bar p + \lambda ^* \bar p}.$
Similar to the first case, when ${\rm d}p$ is a small enough positive number, $n_2(\bar p+{\rm d}p), \eta_2(\bar p+{\rm d}p),$ and $\bar p+{\rm d}p$ are close enough to the respective values of $n_2^*, \eta_2^*,$ and $\bar p$, and thus the corresponding value of $\tilde u_{1,2}$ still takes the same form.
Therefore,
\begin{align*}
& \frac{{\rm d}\tilde u_{1,2}(p)}{{\rm d}p} \\
= &\frac{\partial}{\partial n_2}\left( \frac{(1-p)(\eta_1^*+\eta_2) + \lambda^* p (n_1^*+n_2) + (1-p)(1-\lambda^*) n }{1- p + \lambda^* p} \right) \frac{{\rm d}n_2(p)}{{\rm d}p} \\
& + \frac{\partial}{\partial \eta_2}\left( \frac{(1-p)(\eta_1^*+\eta_2) + \lambda^* p (n_1^*+n_2) + (1-p)(1-\lambda^*) n }{1- p + \lambda^* p} \right) \frac{{\rm d}\eta_2(p)}{{\rm d}p} \\
& + \frac{\partial}{\partial p} \frac{(1-p)(\eta_1^*+\eta_2) + \lambda^* p (n_1^*+n_2) + (1-p)(1-\lambda^*) n }{1- p + \lambda^* p}\\
= & \frac{ \lambda^* \bar p}{1-\bar p+\lambda ^* \bar p}\frac{{\rm d}n_2(p)}{{\rm d}p} + \frac{ 1-\bar p}{1-\bar p+\lambda ^* \bar p}\frac{{\rm d}\eta_2(p)}{{\rm d}p} \\& + \frac{-(1-\lambda^*) \lambda^* n + \lambda^*(n_1^*+n_2^*-\eta_1^*-\eta_2^*)}{(1-\bar p+\lambda^* \bar p )^2} &(\text{similar to }\eqref{ineq:partial-u1-dp-case2})
\\ \geq & \frac{ \lambda^* \bar p}{1-\bar p+\lambda ^*\bar p}\frac{{\rm d}n_2(p)}{{\rm d}p} + \frac{-(1-\lambda^*) \lambda^* n + \lambda^*(n_1^*+n_2^*-\eta_1^*-\eta_2^*)}{(1-\bar p+\lambda^* \bar p )^2},
\end{align*}
which is greater than $0$ when
\begin{align}
\frac{{\rm d}n_2(p)}{{\rm d}p}>
\frac{(1-\lambda^*) n - (n_1^*+n_2^*-\eta_1^*-\eta_2^*)}{(1-\bar p+\lambda^* \bar p )\bar p}. \label{ineq:condition-to-finish-proof}
\end{align}
Therefore, it is sufficient to prove \eqref{ineq:condition-to-finish-proof}.
By the assumption of this case, we know
\begin{align*}
b = \tilde u_{1,2}^* = \frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda^* \bar p (n_1^*+n_2^*) + (1-\bar p)(1-\lambda^*) n }{1- \bar p + \lambda ^* \bar p} \geq \frac{(1-\bar p)(1-\lambda^*) n }{1- \bar p + \lambda ^* \bar p}.
\end{align*}
By using Constraints~\eqref{constraint:n1'<=n1},~\eqref{constraint:n2'<=n2} and the inequality above, we know
\begin{align*}
\frac{(1-\lambda^*) n - (n_1^*+n_2^*-\eta_1^*-\eta_2^*)}{(1-\bar p+\lambda^* \bar p )\bar p} \leq \frac{(1-\lambda^*) n}{(1-\bar p+\lambda^* \bar p )\bar p} \leq \frac{b}{(1-\bar p)\bar p} < \frac{2b}{(1-\bar p)\bar p} = \frac{{\rm d}n_2(p)}{{\rm d}p},
\end{align*}
which proves \eqref{ineq:condition-to-finish-proof} and hence concludes the proof of this case.
Finally, the third case is $\frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda ^* \bar p (n_1^*+n_2^*)}{\lambda^* \bar p} = \frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda^* \bar p (n_1^*+n_2^*) + (1-\bar p)(1-\lambda^*) n}{1- \bar p + \lambda ^* \bar p}$.
In this case, we have $\tilde u_{1,2}^* = \frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda ^* \bar p (n_1^*+n_2^*)}{\lambda^* \bar p} = \frac{(1-\bar p)(\eta_1^*+\eta_2^*) + \lambda^* \bar p (n_1^*+n_2^*) + (1-\bar p)(1-\lambda^*) n}{1- \bar p + \lambda ^* \bar p}.$
In this case, we claim (and will show shortly) that either for all small enough ${\rm d}p$,
\begin{align}\tilde u_{1,2}(\bar p +{\rm d}p) = \frac{(1-(\bar p +{\rm d}p))(\eta_1^*+\eta_2(\bar p +{\rm d}p)) + \lambda ^* (\bar p +{\rm d}p) (n_1^*+n_2(\bar p +{\rm d}p))}{\lambda^* (\bar p +{\rm d}p)},\label{u12p+dp-first-case}
\end{align}
or for all small enough ${\rm d}p$,
\begin{align} &\tilde u_{1,2}(\bar p +{\rm d}p) \nonumber \\=& \frac{(1-(\bar p +{\rm d}p))(\eta_1^*+\eta_2(\bar p +{\rm d}p)) + \lambda^* (\bar p +{\rm d}p)(n_1^*+n_2(\bar p +{\rm d}p)) + (1-(\bar p +{\rm d}p))(1-\lambda^*) n}{1- (\bar p +{\rm d}p) + \lambda ^* (\bar p +{\rm d}p)}.\label{u12p+dp-second-case}
\end{align}
Hence, this case reduces to either the first or the second case, which we have just proved, and the proof is complete.
In what follows, we show either \eqref{u12p+dp-first-case} or \eqref{u12p+dp-second-case} holds.
To see this, we first note that, for all $p\in(0,1)$, $\frac{(1-p)(\eta_1^*+\eta_2(p)) + \lambda ^* p (n_1^*+n_2(p))}{\lambda^* p} \leq \frac{(1-p)(\eta_1^*+\eta_2(p)) + \lambda^* p (n_1^*+n_2(p)) + (1- p)(1-\lambda^*) n}{1- p + \lambda ^* p}$ is equivalent to $p (\lambda^*(1-\lambda^*)n-\lambda^* ( n_1^* + n_2(p)) +\eta_1^*+\eta_2(p)) -(\eta_1^* + \eta_2(p)) \geq 0$.
Since the left hand side of this inequality is a quadratic function of $p$ that vanishes at $p=\bar p$ (by the defining equality of this case), there must exist some positive number $\delta>0$ such that the quadratic is either non-negative for all $p\in(\bar p, \bar p+\delta)$ or non-positive for all $p\in(\bar p, \bar p+\delta)$, which concludes the proof that the tuple $(\lambda^*, n_1^*, n_2(\bar p+{\rm d}p), \eta_1^*, \eta_2(\bar p+{\rm d}p), c(\bar p+{\rm d}p))$ is in the feasible set of~\ref{MP1}$(\bar p +{\rm d}p)$.
Since the tuple $(\lambda^*, n_1^*, n_2(\bar p+{\rm d}p), \eta_1^*, \eta_2(\bar p+{\rm d}p), c(\bar p+{\rm d}p))$ is in the feasible set of~\ref{MP1}$(\bar p +{\rm d}p)$, when evaluating the derivative of $c^*(p)$ at $p=\bar p$,
\begin{align*}
\frac{{\rm d} c^*(p)}{{\rm d} p} = &
\frac{ c^*(\bar p+{\rm d}p) -c^*(\bar p) }{{\rm d} p} \leq \frac{ c(\bar p+{\rm d}p) -c(\bar p) }{{\rm d} p} &(c^*(\bar p+{\rm d}p) \leq c(\bar p+{\rm d}p) \text{ and }c^*(\bar p)=c(\bar p) ) \\
= & \frac{ c^*+\frac{4(1-a)}{a\bar p(1-\bar p)}{\rm d}p -c^* }{{\rm d} p} &(\eqref{def:cp}) \\
= & \frac{4(1-a)}{a\bar p(1-\bar p)},
\end{align*}
which completes the proof.
\end{proof}
With Lemma~\ref{lemma:small:derivative-c*}, we are ready to prove the proposition:
First, we prove the lower bound, $c^*(p) - c^*(\hat{p}) \geq 0 $.
We start with an optimal solution $(\lambda^*, n_1^*, n_2^*, \eta_1^*, \eta_2^*, c^*)$ of $\ref{MP1}(p)$.
Consider the following tuple $(\lambda^*, n_1^*, n_2^*, \eta_1', \eta_2', c^*)$ where
$\eta_1' = \frac {n_1^*\lambda^* (p-\hat{p} )+\eta_1^*(1-p)}{1- \hat{p} }$ and $\eta_2' = \frac {n_2^*\lambda^* (p-\hat{p} )+\eta_2^*(1-p)}{1- \hat{p} }.$
It is easy to verify that $(\lambda^*, n_1^*, n_2^*, \eta_1', \eta_2', c^*)$ satisfies Constraints~\eqref{constraint:x<=1}-\eqref{constraint:n1'+n2'big} in $\ref{MP1}(\hat{p})$.
Below we prove that $(\lambda^*, n_1^*, n_2^*, \eta_1', \eta_2', c^*)$ also satisfies Constraints~\eqref{constraint:not-c-competitive}-\eqref{constraint:u_2>=b} in $\ref{MP1}(\hat{p})$.
To prove this, we denote by $\tilde o_{1}^*$, $\tilde o_{2}^*$, $\tilde u_{1}^*$, $\tilde u_{1,2}^*$ the values of $\tilde o_{1}$, $\tilde o_{2}$, $\tilde u_{1}$, $\tilde u_{1,2}$ corresponding to the original (optimal) solution in $\ref{MP1}(p)$, and by $\tilde o_{1}'$, $\tilde o_{2}'$, $\tilde u_{1}'$, $\tilde u_{1,2}'$ the values corresponding to the modified solution (the tuple $(\lambda^*, n_1^*, n_2^*, \eta_1', \eta_2', c^*)$) in $\ref{MP1}(\hat{p})$.
By the definition of $\eta_1'$ and $\eta_2'$, we have $\tilde o_1' = \tilde o_1^*$ and $\tilde o_2' = \tilde o_2^*$.
This, combined with the observation that the right hand side of Constraint~\eqref{constraint:not-c-competitive} is non-increasing in $\tilde u_1$ and the left hand side of Constraint~\eqref{constraint:u_2>=b} is increasing in $\tilde u_{1,2}$, indicates that if $\tilde u_1' \geq \tilde u_1^*$ and $\tilde u_{1,2}' \geq \tilde u_{1,2}^*$, then $(\lambda^*, n_1^*, n_2^*, \eta_1', \eta_2', c^*)$ satisfies Constraints~\eqref{constraint:not-c-competitive}-\eqref{constraint:u_2>=b} in $\ref{MP1}(\hat{p})$.
Thus it suffices to prove $\tilde u_1' \geq \tilde u_1^*$ and $\tilde u_{1,2}' \geq \tilde u_{1,2}^*$.
Using the definition of $\tilde u_{1}$ in~\ref{MP1}, for proving $\tilde u_1' \geq \tilde u_1^*$, it is sufficient to show
$$ \frac{\tilde o_1' }{ \lambda^* \hat{p} } \geq \frac{\tilde o_1^*}{\lambda^* p }, $$
and
$$ \frac{\tilde o_1' +(1- \hat{p}) (1-\lambda^*)n}{1- \hat{p}+ \lambda^*\hat{p} } \geq \frac{\tilde o_1^* +(1- p ) (1-\lambda^*)n}{1- p + \lambda^* p } . $$
The first inequality is due to $\tilde o_1' = \tilde o_1^*$ and $p \geq \hat{p}$, and the second is implied by
\begin{align*}
\frac{\tilde o_1' +(1- \hat{p}) (1-\lambda^*)n}{1- \hat{p}+ \lambda^*\hat{p} } - \frac{\tilde o_1^* +(1- p ) (1-\lambda^*)n}{1- p + \lambda^* p } = \frac{(\lambda^* n- \tilde o_1^*)( p - \hat{p} )(1-\lambda^*)}{(1- \hat{p}+ \lambda^* \hat{p} )(1- p + \lambda^* p )} \geq 0.\end{align*}
Similarly, we have $\tilde u_{1,2}' \geq \tilde u_{1,2}^*$, which completes the proof for the lower bound.
The upper bound, $c^*(p) - c^*(\hat{p}) \leq \frac{4(1-a)}{a}\log\left(\frac{p(1-\hat{p})}{\hat{p}(1-p)}\right)$, relies on Lemma~\ref{lemma:small:derivative-c*}:
\begin{align*}
& c^*(p)-c^*(\hat{p}) = \int_{t=\hat{p}}^{p} {\rm d} c^*(t) \\
\leq & \int_{t=\hat{p}}^{p} \frac{4(1-a)}{at(1-t)}{\rm d} t & (\text{Lemma~\ref{lemma:small:derivative-c*}})\\
= & \left[\frac{4(1-a)}{a}\left(\log p - \log (1-p)\right)\right] - \left[\frac{4(1-a)}{a}\left(\log \hat{p} - \log (1-\hat{p})\right)\right] \\
= & \frac{4(1-a)}{a}\log\left(\frac{p(1-\hat{p})}{\hat{p}(1-p)}\right),
\end{align*}
which completes the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:robust}]
Proposition~\ref{prop:robust} consists of two cases based on whether we underestimate the problem parameter ($p < \hat{p}$) or overestimate it ($p > \hat{p}$).
For proving Proposition~\ref{prop:robust}, we first introduce two lemmas, each of which deals with one of the two cases.
When we underestimate the true parameter ($ p < \hat{p}$), we have the following lemma:
\begin{lemma}\label{thm:adaptive-threshold-p-hat}
Under Approximations~\ref{assumption:continuous instance} and~\ref{assumption:determ-observ},
if the true probability in the customer arrival model is $\hat{p}$ and $p < \hat{p}$, then $ALG_{2,c,p}$ has a competitive ratio of at least $c$ for all $c\leq c^*(p)$.
\end{lemma}
\begin{proof}
For any adversarial problem instance ${\vec{v}}'$ given by functions $\eta_1$ and $\eta_2$ in Approximation~\ref{assumption:continuous instance}, we define the following auxiliary problem instance $\bar {\vec{v}}'$ given by functions $\bar \eta_1$ and $\bar \eta_2$, where for all $\lambda\in [0,1]$, $$\bar\eta_1(\lambda) \triangleq \frac {n_1\lambda (\hat{p}- p )+\eta_1(\lambda)(1-\hat{p})}{1- p }\text{ and }\bar \eta_2(\lambda) \triangleq \frac {n_2\lambda (\hat{p}- p )+\eta_2(\lambda)(1-\hat{p})}{1- p }.$$
The validity of the functions $\bar \eta_1(\lambda)$ and $\bar \eta_2(\lambda)$ is easy to verify: each of them is a convex combination of the two obviously valid instances $(n_1\lambda, n_2\lambda)$ and $(\eta_1(\lambda), \eta_2(\lambda))$ with weights $\frac{\hat{p}- p }{1- p }$ and $ \frac{1-\hat{p}}{1- p }$, and the constraints defining valid instances are linear.
Note that for all $\lambda\in[0,1]$, $$n_1\lambda \hat{p}+ \eta_1(\lambda)(1-\hat{p}) = n_1\lambda p + \bar \eta_1(\lambda)(1- p )\text{ and }n_2\lambda \hat{p}+ \eta_2(\lambda)(1-\hat{p}) = n_2\lambda p + \bar \eta_2(\lambda)(1- p ).$$
This means that the values of $ \{ o_1(\lambda) , o_2(\lambda)\}_{\lambda \in [0,1]}$ given by ${\vec{v}}'$ with the true problem parameter $\hat{p}$ are the same as those given by $\bar {\vec{v}}'$ with problem parameter $p$.
As a result, $ALG_{2,c,p}$ makes the same decision for the above two scenarios.
The rest follows from the fact that the ratio between $ALG_{2,c,p}$ and $OPT$ is at least $c$ against $\bar {\vec{v}}'$ with problem parameter $p$ and that $OPT({\vec{v}}')=OPT(\bar {\vec{v}}')$ (due to $\eta_1(1)=\bar\eta_1(1)$ and $\eta_2(1)=\bar\eta_2(1)$).
\end{proof}
When we overestimate the true parameter ($ p >\hat{p}$), we have the following lemma:
\begin{lemma}\label{thm:optimistic-error}
Under Approximations~\ref{assumption:continuous instance} and~\ref{assumption:determ-observ},
if the true probability in the customer arrival model is $\hat{p}$ and $p > \hat{p}$, then for all $c\leq c^*(p)$, $ALG_{2,c,p}$ has a competitive ratio of at least $$ \begin{cases}
\min\left\{1-\frac{(1-a)(p-\hat{p})}{a \hat{p}+(1-a)(p-\hat{p})}, c\left( 1- \frac{(1-a)(p - \hat{p})}{p} \right), c^*(\hat{p}) - \frac{4(1-a)}{1-c} \log\left(\frac{p(1-\hat{p})}{\hat{p}(1-p)}\right) \right\} &,\text{ if }c>c^*(\hat{p}),\\
\min\left\{1-\frac{(1-a)(p-\hat{p})}{a \hat{p}+(1-a)(p-\hat{p})}, c\left( 1- \frac{(1-a)(p - \hat{p})}{p} \right) \right\} &,\text{ if }c \leq c^*(\hat{p}).\end{cases}$$
\end{lemma}
For proving Lemma~\ref{thm:optimistic-error}, for all $\lambda\in[0,1]$, we denote $u_1(\lambda, p)$ ($u_{1,2}(\lambda, p)$, respectively) to be the upper bound on $n_1$ ($n_1+n_2$, respectively) we compute assuming the probability is $p$ at time $\lambda$ (while we still observe the $o_1(\lambda)$ and $o_2(\lambda)$ given by the model with the true probability parameter $\hat{p}$).
We need a series of auxiliary lemmas. The first indicates that when $p$ is not too far away from $\hat{p}$, the upper bounds we compute are not too far from the ones we would obtain using the real problem parameter $\hat{p}$:
\begin{lemma}\label{claim:not-bad-u1-u2}
If $p \geq \hat{p}$, then
\begin{align*}
u_1(\lambda, p ) \leq u_1(\lambda, \hat{p} ) \leq \frac{ p }{\hat{p}}u_1(\lambda, p )& & \text{ and } & &u_{1,2}(\lambda, p ) \leq u_{1,2}(\lambda, \hat{p} ) \leq \frac{ p }{\hat{p}}u_{1,2}(\lambda, p ).
\end{align*}
\end{lemma}
\begin{proof}
The two statements are essentially the same, so it suffices to prove the inequalities regarding $u_1$.
We first prove the first inequality, $u_1(\lambda, p ) \leq u_1(\lambda, \hat{p} )$.
Using Definition~\ref{eq:u_1}, it is sufficient to show
$$ \frac{o_1(\lambda) }{ \lambda\hat{p} } \geq \frac{o_1(\lambda)}{\lambda p }, $$
and
$$ \frac{o_1(\lambda) +(1- \hat{p}) (1-\lambda)n}{1- \hat{p}+ \lambda\hat{p} } \geq \frac{o_1(\lambda) +(1- p ) (1-\lambda)n}{1- p + \lambda p }. $$
The first inequality is trivial, and the second is implied by
\begin{align*}
\frac{o_1(\lambda) +(1- \hat{p}) (1-\lambda)n}{1- \hat{p}+ \lambda\hat{p} } - \frac{o_1(\lambda) +(1- p ) (1-\lambda)n}{1- p + \lambda p } = \frac{(\lambda n- o_1(\lambda))( p -\hat{p})(1-\lambda)}{(1- \hat{p}+ \lambda\hat{p} )(1- p + \lambda p )} \geq 0.\end{align*}
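The identity above can be checked by putting the two fractions over the common denominator; the resulting numerator is
\begin{align*}
& o_1(\lambda)\left[(1- p + \lambda p)-(1- \hat{p}+ \lambda\hat{p})\right] + (1-\lambda)n\left[(1- \hat{p})(1- p + \lambda p)-(1- p)(1- \hat{p}+ \lambda\hat{p})\right] \\
= \; & o_1(\lambda)(\hat{p}- p)(1-\lambda) + (1-\lambda)n \lambda ( p-\hat{p}) = (\lambda n- o_1(\lambda))( p -\hat{p})(1-\lambda).
\end{align*}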
Let us prove $u_1(\lambda, \hat{p} ) \leq \frac{ p }{\hat{p}}u_1(\lambda, p )$ now.
To do this, we consider the following two cases separately:
$u_1(\lambda, p )= \frac{o_1(\lambda)}{\lambda p }$, and $u_1(\lambda, p ) = \frac{o_1(\lambda) +(1- p ) (1-\lambda)n}{1- p + \lambda p }.$
The first case is easy: we have
$$ u_1(\lambda, \hat{p}) \leq \frac{o_1(\lambda)}{\lambda \hat{p} } = \frac{p}{\hat{p}} \frac{o_1(\lambda)}{\lambda p } = \frac{p}{ \hat{p} } u_1(\lambda, p ) .$$
The second case implies $\frac{o_1(\lambda)}{\lambda p } \geq\frac{o_1(\lambda) +(1- p ) (1-\lambda) n}{1- p + \lambda p },$
which is equivalent to
\begin{align}
o_1(\lambda) \geq (1-\lambda)\lambda p n. \label{ineq:o1-big-typer-2-ub}
\end{align}
Using \eqref{ineq:o1-big-typer-2-ub} to bound $o_1(\lambda)$ from below in $\frac{o_1(\lambda) +(1- p ) (1-\lambda) n }{1- p + \lambda p }$ gives
\begin{align}
u_1(\lambda, p ) = \frac{o_1(\lambda) +(1- p ) (1-\lambda) n }{1- p + \lambda p }\geq (1-\lambda) n. \label{ineq:lower-bound-u-1-case-2}
\end{align}
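In more detail, substituting the lower bound \eqref{ineq:o1-big-typer-2-ub} for $o_1(\lambda)$ in the numerator yields
\begin{align*}
\frac{o_1(\lambda) +(1- p ) (1-\lambda) n }{1- p + \lambda p } \geq \frac{(1-\lambda)\lambda p n +(1- p ) (1-\lambda) n }{1- p + \lambda p } = \frac{(1-\lambda) n \left(\lambda p + 1 - p\right)}{1- p + \lambda p } = (1-\lambda) n.
\end{align*}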
Using \eqref{ineq:o1-big-typer-2-ub} and \eqref{ineq:lower-bound-u-1-case-2},
\begin{align*}
u_1(\lambda, \hat{p})-u_1(\lambda , p ) & \leq \frac{o_1(\lambda) +(1- \hat{p}) (1-\lambda)n}{1- \hat{p}+ \lambda\hat{p} } - \frac{o_1(\lambda) +(1- p ) (1-\lambda)n}{1- p + \lambda p } \\
& = \frac{(\lambda n- o_1(\lambda))( p -\hat{p})(1-\lambda)}{(1- \hat{p}+ \lambda\hat{p} )(1- p + \lambda p )}\\
& \leq \frac{\lambda n (1- p + \lambda p )( p -\hat{p})(1-\lambda)}{(1- \hat{p}+ \lambda\hat{p} )(1- p + \lambda p )} &(\text{using }~\eqref{ineq:o1-big-typer-2-ub})\\
& = \frac{\lambda n ( p -\hat{p})(1-\lambda)}{1- \hat{p}+ \lambda\hat{p} } \\
& \leq \frac{\lambda ( p -\hat{p})}{1- \hat{p}+ \lambda\hat{p} } u_1(\lambda, p) &(\text{using }~\eqref{ineq:lower-bound-u-1-case-2})
\\
& \leq ( p -\hat{p}) u_1(\lambda, p). &( 1- \hat{p}+ \lambda\hat{p} \geq (1- \hat{p}) \lambda+ \lambda\hat{p} = \lambda)
\end{align*}
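The step invoking \eqref{ineq:o1-big-typer-2-ub} in the display above bounds $\lambda n - o_1(\lambda)$ as
\begin{align*}
\lambda n- o_1(\lambda) \leq \lambda n - (1-\lambda)\lambda p n = \lambda n \left(1- p + \lambda p \right),
\end{align*}
which is exactly the factor appearing in the numerator.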
Rearranging terms, we get
\begin{align*}
u_1(\lambda, \hat{p}) & \leq ( 1+ p -\hat{p}) u_1(\lambda, p) \leq \left( 1+\frac{p-\hat{p}}{\hat{p}}\right) u_1(\lambda , p)=\frac{p}{\hat{p}} u_1(\lambda, p),\end{align*}
and hence we are done.
\end{proof}
We break down the rest of the proof of Lemma~\ref{thm:optimistic-error} into two cases based on whether $ALG_{2,c,p}$ exhausts the capacity or not, and each of the following lemmas is useful for one of the two cases.
We first consider the case where $ALG_{2,c,p}$ exhausts the capacity, i.e., $q_1(1)+q_2(1)=b$.
We consider the last type-$2$ customer accepted by $ALG_{2,c, p }$.
(If $ALG_{2,c, p }$ does not accept any type-$2$ customer, then it has a total value of $b$, which is optimal, so the case is trivial.)
Say the customer arrives at time $ \lambda $.
Since $ALG_{2,c,p}$ accepts a customer at time $\lambda$, one of the following conditions must hold:
$u_{1,2}(\lambda, p ) \leq b$, or
\begin{align} \frac{ b- {q}_2\left(\lambda \right) + {q}_2\left(\lambda \right) a} {\min \{ u_1(\lambda, p ) , b\}(1-a)+ab} \geq c. \label{ineq:condition-at-least-c}
\end{align}
For each of the two cases, we have one of the following two lemmas.
\begin{lemma}\label{lemma:p>phat-u12<=b} Under the setting of Lemma~\ref{thm:optimistic-error}, if $q_1(1)+q_2(1)=b$ and $u_{1,2}(\lambda, p ) \leq b$, then
$\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq 1-\frac{(1-a)(p-\hat{p})}{a \hat{p}+(1-a)(p-\hat{p})}.
$
\end{lemma}
\begin{proof}
From Lemma~\ref{claim:not-bad-u1-u2} and using the fact that $u_{1,2}(\lambda, \hat{p} )$ is an upper bound of $n_1+n_2$ (given by Lemma~\ref{prop:Ubounds}), we have
\begin{align}
n_1+n_2 \leq u_{1,2}(\lambda, \hat{p} ) \leq \frac{ p }{\hat{p}} u_{1,2}(\lambda, p ) \leq \frac{ p }{\hat{p}}b.\label{inequality:n1+n2<=p/phatb}
\end{align}
As a result,
\begin{align}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq & \frac{a \min\{b, n_2\}+b-\min\{b, n_2\}}{\min\{b, n_1\}+(b-\min\{b, n_1\}) a} = \frac{b-(1-a)\min\{b, n_2\}}{ab + (1-a) \min\{b, n_1\}}. \label{ineq:a_ratio}
\end{align}
For a general $p/\hat{p}$ ratio, the above ratio is at least $a$, with equality when $\min\{b, n_1\} = \min\{b, n_2\} = b$.
Therefore, when $p/\hat{p} \geq 2$,
\begin{align*}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq a \geq \frac{a \hat{p}}{ \hat{p}+(1-a)(p-2\hat{p})} = 1-\frac{(1-a)(p-\hat{p})}{a \hat{p}+(1-a)(p-\hat{p})}.
\end{align*}
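The equality in the display above holds because
\begin{align*}
\hat{p}+(1-a)(p-2\hat{p}) = (2a-1)\hat{p} + (1-a) p = a \hat{p}+(1-a)(p-\hat{p}),
\end{align*}
and the first inequality uses $p/\hat{p} \geq 2$: $a \geq \frac{a \hat{p}}{a \hat{p}+(1-a)(p-\hat{p})}$ is equivalent to $(1-a)(p-\hat{p}) \geq (1-a)\hat{p}$, i.e., $p \geq 2\hat{p}$.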
When $p/\hat{p}<2$, we can derive a better bound using~\eqref{inequality:n1+n2<=p/phatb}.
We have
\begin{align*}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq & \frac{b-(1-a)\min\{b, n_2\}}{ab + (1-a) \min\{b, n_1\}} &(\eqref{ineq:a_ratio}) \\
\geq & \frac{b-(1-a) \min\{b, n_2\} }{ab + (1-a) n_1}. &(n_1 \geq \min\{b, n_1\})
\end{align*}
Combining the above inequality with the fact that when $0<x<y$, for any $\delta$ with $0 \leq \delta < x$, $(x-\delta) / (y-\delta) \leq x/y$, we have either
\begin{align}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq
\frac{b-(1-a) \min\{b, n_2\} }{ab + (1-a) n_1} \geq 1,\label{ineq:adp/opt>=1}
\end{align}
or
\begin{align}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq &
\frac{b-(1-a) \min\{b, n_2\} }{ab + (1-a) n_1} \geq \frac{b-(1-a) \min\{b, n_2\}- (1-a)(b-\min\{b, n_2\})}{ab + (1-a) n_1-(1-a)(b-\min\{b, n_2\})} \nonumber
\\ = & \frac{b-(1-a)b}{ab + (1-a) ( n_1 + \min\{b, n_2\} -b)} .\label{ineq:adp/opt<1}
\end{align}
When \eqref{ineq:adp/opt>=1} holds,
$$\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq 1
\geq 1-\frac{(1-a)(p-\hat{p})}{a \hat{p}+(1-a)(p-\hat{p})} ,$$
and we are done.
When \eqref{ineq:adp/opt<1} holds,
\begin{align*}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq & \frac{b-(1-a)b}{ab + (1-a) ( n_1 + \min\{b, n_2\} -b)} & \\
\geq & \frac{b-(1-a)b}{ab + (1-a) ( n_1 + n_2 -b)} &(\min\{b, n_2\}\leq n_2)\\
\geq & \frac{b-(1-a)b}{ab + (1-a) \left(\frac{p}{\hat{p}}-1\right)b} &(\eqref{inequality:n1+n2<=p/phatb}) \\
= & \frac{a \hat{p}}{a \hat{p}+(1-a)(p-\hat{p})} = 1-\frac{(1-a)(p-\hat{p})}{a \hat{p}+(1-a)(p-\hat{p})},
\end{align*}
which completes the proof.
\end{proof}
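The algebra in the final steps of the proof above is easy to misread, so the following Python snippet (a numerical sanity check for the reader, not part of the formal argument) confirms the closed-form identity used in both cases and, for $p \geq 2\hat{p}$, that the constant $a$ dominates the stated guarantee.

```python
# Numeric sanity check of the algebra above:
#   a*ph / (a*ph + (1-a)*(p-ph))  ==  1 - (1-a)*(p-ph) / (a*ph + (1-a)*(p-ph))
# and, when p >= 2*ph,  a >= a*ph / (ph + (1-a)*(p-2*ph)), which equals the same bound.
def bound(a, p, ph):
    """Right-hand side of the lemma's guarantee."""
    return 1 - (1 - a) * (p - ph) / (a * ph + (1 - a) * (p - ph))

for a in (0.1, 0.5, 0.9):
    for ph in (0.1, 0.3):
        for p in (ph, 1.5 * ph, 2 * ph, 3 * ph):
            lhs = a * ph / (a * ph + (1 - a) * (p - ph))
            assert abs(lhs - bound(a, p, ph)) < 1e-12
            if p >= 2 * ph:
                # the denominators ph + (1-a)(p-2ph) and a*ph + (1-a)(p-ph) coincide
                assert abs(a * ph / (ph + (1 - a) * (p - 2 * ph)) - bound(a, p, ph)) < 1e-12
                assert a >= bound(a, p, ph) - 1e-12
```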
\begin{lemma} \label{lemma:inequality:q1+q2=b_ratio>=c}
Under the setting of Lemma~\ref{thm:optimistic-error}, if $q_1(1)+q_2(1)=b$ and~\eqref{ineq:condition-at-least-c} holds, then
$
\frac{ALG_{2,c, p }}{OPT} \geq c\left( 1- \frac{(1-a)(p - \hat{p})}{p} \right).
$
\end{lemma}
\begin{proof}From Lemma~\ref{claim:not-bad-u1-u2}, we have $n_1 \leq u_1(\lambda , \hat{p}) \leq \frac{ p }{\hat{p}} u_1 (\lambda , p)$, and thus
$OPT \leq \min \{ n_1 , b\}(1-a)+ab \leq \min \{ \frac{ p }{\hat{p}} u_1(\lambda, p ) , b\}(1-a)+ab $.
Combining this with $ALG_{2,c, p}({\vec{v}})= b- {q}_2\left(\lambda \right) + {q}_2\left(\lambda \right) a$, we obtain
\begin{align}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq & \frac{ b- {q}_2\left(\lambda \right) + {q}_2\left(\lambda \right) a}{\min\{ \frac{ p }{\hat{p}} u_1(\lambda, p ) , b\}(1-a)+ab } \nonumber \\
\geq & c \frac{\min\{ u_1(\lambda, p ) , b\}(1-a)+ab }{ \min\{ \frac{ p }{\hat{p}} u_1(\lambda, p ) , b\}(1-a)+ab } &(\eqref{ineq:condition-at-least-c}) \nonumber \\ = & cf(u_1(\lambda, p )),\label{ineq:adp/opt>=a_function}
\end{align}
where $f(x)\triangleq \frac{\min\{x , b\}(1-a)+ab }{ \min\{ \frac{ p }{\hat{p}} x, b\}(1-a)+ab }. $
We next show that $f(x)$ is minimized when $x= \frac{\hat{p}}{p}b$.
When $x \geq \frac{\hat{p}}{p}b$, $f(x) = \frac{\min\{x , b\}(1-a)+ab }{ b(1-a)+ab },$ and hence $f(x)$ is non-decreasing in $x$.
On the other hand, when $x \leq \frac{\hat{p}}{p}b$, $f(x) = \frac{x(1-a)+ab }{ \frac{ p }{\hat{p}} x(1-a)+ab },$ and hence $f(x)$ is non-increasing in $x$. Combining the above two cases, $f(x)$ is minimized at $x=\frac{\hat{p}}{p}b$.
As a result, \eqref{ineq:adp/opt>=a_function} gives
\begin{align*}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq cf(u_1(\lambda, p )) \geq c f(\frac{\hat{p}}{p}b)
\geq & c \frac{\frac{\hat{p}}{p}b(1-a)+ab}{b(1-a)+ab} = c\left( 1- \frac{(1-a)(p - \hat{p})}{p} \right).
\end{align*}
\end{proof}
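The shape argument for $f$ in the proof above can also be checked numerically; the sketch below (an illustration with arbitrary parameter values, not part of the proof) confirms that $f$ attains its minimum at $x=\frac{\hat{p}}{p}b$ with value $1-\frac{(1-a)(p-\hat{p})}{p}$.

```python
# Grid check that f(x) = (min(x,b)(1-a)+ab) / (min((p/ph)x, b)(1-a)+ab)
# is minimized at x = (ph/p)*b, with minimum value 1 - (1-a)(p-ph)/p.
a, p, ph, b = 0.4, 0.6, 0.3, 100.0

def f(x):
    return (min(x, b) * (1 - a) + a * b) / (min((p / ph) * x, b) * (1 - a) + a * b)

xs = [b * i / 1000 for i in range(1001)]
x_star = min(xs, key=f)                      # numeric argmin over [0, b]
assert abs(x_star - (ph / p) * b) < 1e-9     # = 50 for these parameters
assert abs(f(x_star) - (1 - (1 - a) * (p - ph) / p)) < 1e-12
```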
Now we consider the case where $ALG_{2,c,p}$ does not exhaust the capacity, i.e., $q_1(1)+q_2(1)<b$.
We discuss the cases $c\leq c^*(\hat{p})$ and $c^*(\hat{p}) < c \leq c^*(p)$ separately in the following two lemmas.
\begin{lemma}\label{claim:optimistic-p-not-exhausting-small-c}
Under the setting of Lemma~\ref{thm:optimistic-error}, if $q_1(1)+q_2(1)<b$ and $c\leq c^*(\hat{p})$, then
$\frac{ALG_{2,c, p }}{OPT}\geq c.$
\end{lemma}
\begin{proof}
According to Lemma~\ref{claim:not-bad-u1-u2}, $u_1(\lambda, p ) \leq u_1(\lambda, \hat{p} )$ and $u_{1,2}(\lambda, p ) \leq u_{1,2}(\lambda, \hat{p} )$.
As a result, when a \st{class}\vmn{type}-$2$ customer arrives, if $ALG_{2,c, \hat{p} }$ accepts the customer, then $ALG_{2,c, p }$ either accepts the customer as well or has already accepted at least as many \st{class}\vmn{type}-$2$ customers as $ALG_{2,c, \hat{p} }$ has.
Consequently, $ALG_{2,c, p }$ accepts at least as many \st{class}\vmn{type}-$2$ customers as $ALG_{2,c, \hat{p} }$ does.
Furthermore, since $q_1(1)+q_2(1)<b$, $ALG_{2,c, p }$ accepts all \st{class}\vmn{type}-$1$ customers.
Therefore, $ALG_{2,c, p } \geq ALG_{2,c, \hat{p} }$.
Finally, $\frac{ALG_{2,c, p }}{OPT} \geq \frac{ALG_{2,c, \hat{p} }}{OPT} \geq c$.
\end{proof}
\begin{lemma}\label{lemma:optimistic-p-not-exhausting-large-c}
Under the setting of Lemma~\ref{thm:optimistic-error}, if $q_1(1)+q_2(1)<b$ and $c^*(\hat{p}) < c \leq c^*(p)$, then
$\frac{ALG_{2,c, p }({\vec{v}})}{OPT}\geq c^*(\hat{p}) - \frac{4(1-a)}{1-c} \log\left(\frac{p(1-\hat{p})}{\hat{p}(1-p)}\right).$
\end{lemma}
\begin{proof}
Let $\lambda$ be the last time we reject a \st{class}\vmn{type}-$2$ customer.
According to
\eqref{ineq:lower-bound-ratio-ADP},
\begin{align}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT}
\geq & \frac{n_1 + a\left(\frac{1-c}{1-a} b + c \left(b - u_1(\lambda, p) \right)^+ + \left[n_2 - o_2(\lambda)\right]\right) }{n_1 + a \min \{b-n_1,n_2\}} \nonumber \\
\geq & \frac{a\left(\frac{b}{1-a}+n_2-o_2(\lambda)\right) - c\left( \frac{a^2}{1-a}b+ a \min\{u_1(\lambda, \hat{p}), b\}\right) + n_1 }{a\min\{n_1+n_2,b\}+(1-a)n_1} .&(\text{Lemma~\ref{claim:not-bad-u1-u2}}) \label{lower-bound-ratio}
\end{align}
Let us consider the tuple $(\lambda , n_1, n_2, \eta_1( \lambda), \eta_2(\lambda), c)$ in~\ref{MP1}$(\hat{p})$.
Recall that under Approximation~\ref{assumption:determ-observ}, $o_1(\lambda)=\tilde o_1$, $o_2(\lambda)=\tilde o_2$, and $u_1(\lambda, \hat{p})=\tilde u_1$.
Due to Lemma~\ref{claim:not-bad-u1-u2},
$$u_{1,2}(\lambda, \hat{p}) \geq u_{1,2}(\lambda, p) \geq b.$$
Thus, $(\lambda , n_1, n_2, \eta_1( \lambda), \eta_2(\lambda), c)$ satisfies Constraints~\eqref{constraint:u_2>=b}--\eqref{constraint:n1'+n2'big} in~\ref{MP1}$(\hat{p})$.
Following the argument deriving \eqref{ineq:good-ratio} and \eqref{ineq:key-ADP-full} regarding~\ref{MP1}$(\hat{p})$, the optimal objective value $c^*(\hat{p})$ satisfies
\begin{align} c^*(\hat{p}) \leq \frac{a\left(\frac{b}{1-a}+n_2-\tilde o_2\right) - c^*(\hat{p})\left( \frac{a^2}{1-a}b+ a\min\{\tilde u_1, b\}\right) + n_1 }{a\min\{n_1+n_2,b\}+(1-a)n_1}.\label{inequality:mp1-p-hat}
\end{align}
Therefore,
\begin{align}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq & \frac{a\left(\frac{b}{1-a}+n_2-\tilde o_2\right) - c\left( \frac{a^2}{1-a}b+ a \min\{\tilde u_1, b\}\right) + n_1 }{a\min\{n_1+n_2,b\}+(1-a)n_1} &(\eqref{lower-bound-ratio} ) \nonumber \\
= & \frac{a\left(\frac{b}{1-a}+n_2-\tilde o_2\right) - c^*(\hat{p})\left( \frac{a^2}{1-a}b+ a \min\{\tilde u_1, b\}\right) + n_1 }{a\min\{n_1+n_2,b\}+(1-a)n_1} \nonumber \\
- & \frac{a (c - c^*(\hat{p}))\left( \frac{a^2}{1-a}b+ a \min\{\tilde u_1, b\}\right) }{a\min\{n_1+n_2,b\}+(1-a)n_1}\nonumber \\
\geq & c^*(\hat{p}) - \frac{a (c^*(p) - c^*(\hat{p}))\left( \frac{a^2}{1-a}b+ a \min\{\tilde u_1, b\}\right) }{a\min\{n_1+n_2,b\}+(1-a)n_1}. &(\eqref{inequality:mp1-p-hat}\text{ and }c \leq c^*(p)) \label{ineq:74.1}
\end{align}
Recall that we always accept at least $\phi b = \frac{1-c}{1-a}b $ \st{class}\vmn{type}-$2$ customers.
Thus, in this case ($q_1(1)+q_2(1)<b$), if $n_2 < \frac{1-c}{1-a}b$, then $ALG_{2,c, p }$ accepts all arriving customers, and hence the ratio is $1$.
Thus, without loss of generality, we assume $n_2 \geq \frac{1-c}{1-a}b$.
As a result,
\begin{align*}
\frac{ALG_{2,c, p }({\vec{v}})}{OPT} \geq & c^*(\hat{p}) - \frac{a (c^*(p) - c^*(\hat{p}))\left( \frac{a}{1-a}b\right) }{a\frac{1-c}{1-a}b} &(\eqref{ineq:74.1})
\\ = & c^*(\hat{p}) - (c^*(p) - c^*(\hat{p})) \frac{a}{1-c} \\
\geq & c^*(\hat{p}) - \frac{4(1-a)}{1-c} \log\left(\frac{p(1-\hat{p})}{\hat{p}(1-p)}\right), &(\text{Proposition~\ref{claim:small-c(p)-(p-hat)}})
\end{align*}
which completes the proof.
\end{proof}
With Lemmas~\ref{lemma:p>phat-u12<=b},~\ref{lemma:inequality:q1+q2=b_ratio>=c},~\ref{claim:optimistic-p-not-exhausting-small-c} and~\ref{lemma:optimistic-p-not-exhausting-large-c}, we are ready to complete the proof of Lemma~\ref{thm:optimistic-error}:
The lemma
follows directly from Lemmas~\ref{lemma:p>phat-u12<=b},~\ref{lemma:inequality:q1+q2=b_ratio>=c},~\ref{claim:optimistic-p-not-exhausting-small-c} and~\ref{lemma:optimistic-p-not-exhausting-large-c}, together with $c\left( 1- \frac{(1-a)(p - \hat{p})}{p} \right)< c $.
With Lemmas~\ref{thm:adaptive-threshold-p-hat} and~\ref{thm:optimistic-error}, we are ready to complete the proof of Proposition~\ref{prop:robust}:
Lemma~\ref{thm:adaptive-threshold-p-hat} proves the case where $p<\hat{p}$.
For the case where $\hat{p}<p<\hat{p} + \delta_{\hat{p}}$, we compare the terms in each of the minimizations in Lemma~\ref{thm:optimistic-error}.
Clearly, there exists a sufficiently small $\delta_{\hat{p}}>0$ such that when $0<p-\hat{p}<\delta_{\hat{p}}$, the competitive ratio is at least
$$ \begin{cases}
c^*(\hat{p}) - \frac{4(1-a)}{1-c} \log\left(\frac{p(1-\hat{p})}{\hat{p}(1-p)}\right) &,\text{ if }c>c^*(\hat{p}),\\
c\left( 1- \frac{(1-a)(p - \hat{p})}{p} \right) &,\text{ if }c \leq c^*(\hat{p}).\end{cases}$$
Hence the proof of Proposition~\ref{prop:robust} is completed.
\end{proof}
\fi
\section{Discussion of the Model}\label{sec:model:discussion}
In this section, we\st{derive some fundamental properties of} \vmn{further study the performance of online algorithms in} our demand model.
\vmn{First, }in Section~\ref{sec:upper-bounds}, we present \vmn{an} upper bound on the competitive ratio achievable by any online algorithm \vmn{under our demand model when the initial inventory $b$ is small---more precisely, $b = o\left(\sqrt{{n}}\right)$.}
\vmn{Next,} in Section~\ref{sec:bad-instance-for-other-models}, we \vmn{highlight the need for our new online algorithms by }presenting a problem instance for which our algorithms outperform existing ones in \st{the}\vmn{our} partially \st{learnable}\vmn{predictable} model.\st{ for which our algorithms outperform the existing ones.}
\subsection{Upper Bounds}\label{sec:upper-bounds}
In this section, we present \vmn{an} upper bound on the competitive ratio of any online algorithm when $b = o(\sqrt{n})$.
We start with a warm-up example that illustrates a fundamental limit of any online algorithm in \vmn{the} partially \st{learnable}\vmn{predictable} model.
Figure~\ref{figure:hard:example} shows two instances with $n = 8$ \st{arriving customers}.
The bottom row shows the sequence that the online algorithm will see; \vmn{as a reminder,}\st{note that} we represent the nodes of the {{\st{predictable}\vmn{stochastic}}} group as filled (even though the online algorithm cannot distinguish between the two groups of customers).
Suppose $b=4$;
in the instance presented on the left, the \vmn{optimum} offline solution rejects all \st{class}\vmn{type}-$2$ customers, and in the instance on the right, it accepts all of them.
Now, by time $\lambda = \vmn{b/n = }4/8$, online algorithms cannot distinguish between these two instances, and hence cannot perform as well as the optimal offline algorithm on {\em both} of these instances.
Similar to this example, \vmn{in the following proposition}, we \st{prove}\vmn{establish} the upper bound by \st{presenting}\vmn{constructing} two problem instances that are ``difficult'' for online algorithms to distinguish between up to\st{a certain} time \vmn{$\frac{b}{n}$}, and show that the trade-off between accepting too many or too few \st{class}\vmn{type}-$2$ customers limits the competitive ratio of any online algorithm.
\st{First we focus on the case $b = o\left(\sqrt{{n}}\right)$, and show:}
\begin{figure}
\centering
\psscalebox{0.75}{
\begin{tabular}{| l | l | l | }
\input{many1}& \quad &\input{few1}
\end{tabular}}
\caption{Two problem instances \vmn{between which} online algorithms cannot distinguish at time\st{$\frac{4}{8}$} \vmn{$\frac{b}{n}$, where $b = 4$ and $n = 8$}.} \label{figure:hard:example}
\end{figure}
\begin{proposition}\label{thm:impossibility}
Under the partially \st{learnable}\vmn{predictable arrival} model\vmn{, and for any $p \in (0,1)$}, no online algorithm, deterministic or randomized, can achieve a competitive ratio better than $\frac{1-p}{2-a} + p + O\left( \frac{pb^2}{n} \right)$.
Therefore, when $b = o\left(\sqrt{{n}}\right)$, no online algorithm can achieve a competitive ratio better than $\frac{1-p}{2-a} + p + o\left(1\right)$.
\end{proposition}
\noindent{The details of the proof are deferred to Appendix~\ref{sec:app:model:discussion}. As explained above, the main idea of the proof is to \st{come up with}\vmn{construct} two instances that are almost indistinguishable \vmn{up to time $\frac{b}{n}$} to any online algorithm. In the proof we show that the following two instances ${\vec{v}_{I}}$ and ${\vec{w}_I}$\st{will} serve our purpose: }
\begin{equation*}
v_{I,j} = \begin{cases}
a, \qquad & 1 \leq j \leq b, \\
0, \qquad & b < j \leq 2b, \\
0, \qquad & j > 2b.
\end{cases} ~~~~~
w_{I,j} = \begin{cases}
a, \qquad & 1 \leq j \leq b, \\
1, \qquad & b < j \leq 2b, \\
0, \qquad & j > 2b.
\end{cases}
\end{equation*}
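To see the trade-off concretely, consider the $p \to 0$ limit, where the bound of Proposition~\ref{thm:impossibility} reduces to $\frac{1}{2-a}$. The following Python sketch (purely illustrative, with arbitrary $a$ and $b$) computes the worst-case ratio of a policy that accepts $k$ of the $b$ early type-$2$ customers on the two instances above, and verifies that the best choice of $k$ balances the two instances at roughly $\frac{1}{2-a}$.

```python
# The two instances above: v_I has only b type-2 customers (OPT = a*b),
# while w_I follows them with b type-1 customers (OPT = b).  An online
# algorithm accepting k of the early type-2 customers hedges between them.
a, b = 0.5, 100

def opt(n1, n2):
    """Offline optimum, Eq. (eq:OPT), for n1 type-1 and n2 type-2 customers."""
    return min(b, n1) + a * min(n2, max(b - n1, 0))

opt_v, opt_w = opt(0, b), opt(b, b)
assert (opt_v, opt_w) == (a * b, b)

def worst_ratio(k):
    # on v_I the algorithm earns k*a; on w_I it earns k*a plus b-k type-1 fares
    return min(k * a / opt_v, (k * a + (b - k)) / opt_w)

best = max(range(b + 1), key=worst_ratio)
# the balancing point k = b/(2-a) yields worst-case ratio 1/(2-a)
assert abs(worst_ratio(best) - 1 / (2 - a)) < 1 / b
```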
\input{extra-app}
\if false
For the regime where $b = \kappa n$, and $n \rightarrow \infty$, we are able to identify problem instances that provide us evidence that the following conjecture holds.
\begin{conjecture}\label{conj:ata-tightness}
Under the partially \st{learnable}\vmn{predictable} model, when $ b= \kappa n$ where $\kappa$ is a positive constant and $n \rightarrow \infty$, no online algorithm, deterministic or randomized, can achieve a competitive ratio better than $c^*$, the optimal objective value of (\ref{MP1}).
\end{conjecture}
In Appendix~\ref{sec:app:approx} we show how to prove the above conjecture under two approximations. However, the rigorous proof requires carefully analyzing all the approximation error terms, and it is beyond the scope of the current manuscript.
\subsection{Offline Estimation of $p$ and Robustness to Errors}\label{sec:unknown-p}
In this section, we discuss what Algorithm~\ref{algorithm:adaptive-threshold} can do if it does not know the level of learnability $p$.
We first show how we may estimate $p$ if there are multiple runs of the same process with the same $p$ (but different instances, i.e., different ${\vmn{\vec{v}_I}}$'s). Further, we discuss how robust Algorithm~\ref{algorithm:adaptive-threshold} is with respect to over-/under-estimating $p$.
We start with offline estimation of $p$.
Our estimation is based on the fact that the inequalities in
Lemma~\ref{lemma:needed-centrality-result-for-m=2} hold with high probability.
After observing the sequence, we know $n_j$ and $o_j(\lambda)$ for $0 \leq \lambda \leq 1$ and $j = 1,2$.
However, we do not observe the arrival order set by the adversary, and thus we do not know the functions $\eta_1(\lambda)$ and $\eta_2(\lambda)$.
We find the values of $p\in(0,1)$ for which Inequalities \eqref{inequality:good-approximation-o_1}, \eqref{inequality:good-approximation-o_1+o_2}, and \eqref{inequality:good-approximation-o_2} hold for at least one pair of functions $\eta_j(\lambda)$, $j = 1,2$.
We estimate the level of learnability by the maximum such $p$.
This optimization problem is described in (\ref{MP2}).
\begin{mdframed}
\footnotesize
\begin{subequations}
\begin{equation}\underset{(\eta_1(0), \eta_1(1/n)\dots \eta_1(1),\eta_2(0), \eta_2(1/n)\dots \eta_2(1), p) \in [0,1]^{2n+3}} {\text{Maximize}} p \tag{MP2} \label{MP2}
\end{equation}
subject to
\begin{align*}
|\tilde o_1(\lambda) - o_1(\lambda)| &\leq \alpha \sqrt{n_1 \log n} & \text{ for all }\lambda = \frac{1}{n},\dots, 1 \text{ (if }n_1\geq \frac{k}{p^2}\log n), \\
|\tilde o_1(\lambda)
+\tilde o_2(\lambda) - (o_1(\lambda)+o_2(\lambda))| &\leq \alpha \sqrt{(n_1+n_2) \log n} & \text{ for all }\lambda = \frac{1}{n},\dots, 1 \text{ (if }n_1\geq \frac{k}{p^2}\log n), \\
|\tilde o_2(\lambda) - o_2(\lambda)| &\leq \alpha \sqrt{n_2 \log n} & \text{ for all }\lambda = \frac{1}{n},\dots, 1 \text{ (if }n_2\geq \frac{k}{p^2}\log n),\\
\eta_1\left(\lambda-\frac{1}{n}\right) \leq \eta_1(\lambda);&\eta_2\left(\lambda-\frac{1}{n}\right)\leq \eta_2(\lambda)& \text{ for all }\lambda = \frac{1}{n},\dots, 1, \\
\eta_1(\lambda)+\eta_2(\lambda) \leq&\eta_1\left(\lambda-\frac{1}{n}\right)+\eta_2\left(\lambda-\frac{1}{n}\right)+1& \text{ for all }\lambda = \frac{1}{n},\dots, 1,
\end{align*}
where $ \eta_1(1) = n_1 $, $ \eta_2(1) = n_2$, $\eta_1(0) = 0$, $\eta_2(0) = 0$, $\tilde o_1(\lambda) \triangleq (1-p)\eta_1(\lambda)+p n_1 \lambda $, and $\tilde o_2 (\lambda)\triangleq (1-p)\eta_2(\lambda)+p n_2 \lambda$.
\end{subequations}
\end{mdframed}
In (\ref{MP2}), the first three sets of constraints correspond to Inequalities~\eqref{inequality:good-approximation-o_1},~\eqref{inequality:good-approximation-o_1+o_2}, and~\eqref{inequality:good-approximation-o_2} and the last two sets ensure that $\eta_j(\cdot )$ corresponds to a valid problem instance: the functions are non-decreasing, and in one time step at most one customer arrives.
Note that the above estimation is only useful if we have access to many past runs of the process. In fact, if we only have access to one offline instance, then the adversary can mislead us into estimating $p$ to be $1$ by
giving us a randomly ordered sequence. However, if we repeat the estimation and always take the minimum estimated $p$, then in the long run the adversary may not benefit from misleading us into overestimating $p$.
For the regime where $b = \kappa n$ and $n \rightarrow \infty$, under two approximations, we are able to show that Algorithm~\ref{algorithm:adaptive-threshold} is robust against a slightly off estimate of the parameter $p$. In Subsection~\ref{sec:robust} we present this approximate analysis and the corresponding results. However, the rigorous statement and proof require carefully analyzing all the approximation error terms, which is beyond the scope of the current manuscript.
\fi
\section{Model and Preliminaries}
\label{sec:prem}
A firm is endowed with $b$ (identical) units of a product to sell over $n \geq 3$ periods, where $n \geq b$. In each period, at most one customer arrives demanding one unit of the product; customers belong to two\st{classes} \vmn{types} depending on\st{their willingness-to-pay: class-1 and class-2 customers are willing to pay $\$ 1$ and $ \$ a$ respectively, where $0 < a <1$} \vmn{the revenue they generate. Type-1 and type-2 customers generate revenue of $1$ and $0 < a <1$, respectively}. Upon the arrival of a customer, the firm observes the\st{class} \vmn{type} of the customer and must make an irrevocable decision to accept this customer and allocate one unit, or to reject this customer. If a firm accepts a\st{class} \vmn{type}-1 (\st{class}\vmn{type}-2) customer, it will earn $\$ 1$ ($ \$ a$).
Our goal is to devise online allocation algorithms that maximize the firm's revenue. We evaluate the performance of an algorithm by comparing it to the \st{optimal}\vmn{optimum} offline solution
(i.e., the clairvoyant solution).
Before proceeding with the model, we introduce a few notations and briefly discuss the structure of the problem.
We represent each customer by the value of\st{her requested fare class} \vmn{revenue she generates if accepted}, and the sequence of arrival by $\vec{v}=(v_1,v_2,\dots, v_n)$, where $v_i \in \{0,a,1\}$; $v_i =0$ implies that no customer arrives at period $i$.
We denote the number of\st{class} \vmn{type}-$1$ (\st{class}\vmn{type}-$2$) customers in the entire sequence as $n_1$ ($n_2$).
Note that the \st{optimal}\vmn{optimum} offline solution that we denote by $OPT(\vec{v})$ has the following simple structure: accept all the\st{class} \vmn{type}-$1$ customers, and if $n_1<b$, then accept $\min \{n_2, b-n_1\}$\st{class} \vmn{type}-2 customers. Therefore,
\begin{align}
\label{eq:OPT}
OPT(\vec{v}) = \min\{b, n_1\} + a \min \{n_2, (b-n_1)^{+}\},
\end{align}
where $(x)^+ \triangleq \max \{ x, 0\}$, \vmn{and we use the symbol ``$\triangleq$'' for definitions.}
At each period, a reasonable online algorithm will accept an arriving\st{class} \vmn{type}-$1$ customer if there is inventory left. Thus the main challenge for an online algorithm is to decide whether to accept/reject an arriving\st{class} \vmn{type}-$2$ customer facing the following natural trade-off: accepting a\st{class} \vmn{type}-$2$ customer may result in rejecting a potential future\st{class} \vmn{type}-$1$ customer due to limited inventory; on the other hand, rejecting a\st{class} \vmn{type}-$2$ customer may lead to having unused inventory at the end. Therefore, any good online algorithm needs to strike a balance between accepting \emph{too few} and \emph{too many}\st{class} \vmn{type}-$2$ customers. We denote by $ALG(\vec{v})$ the revenue obtained by an online algorithm.
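The closed form~\eqref{eq:OPT} corresponds to a simple greedy computation; the following Python sketch (illustrative only, with hypothetical values) implements it directly.

```python
# Offline optimum of Eq. (eq:OPT): serve all type-1 customers first,
# then fill any leftover inventory with type-2 customers at fare a.
def opt_revenue(v, b, a):
    n1 = sum(1 for x in v if x == 1)   # type-1 customers in the sequence
    n2 = sum(1 for x in v if x == a)   # type-2 customers in the sequence
    return min(b, n1) + a * min(n2, max(b - n1, 0))

a = 0.5
v = (1, a, 0, a, 1, a, 0, 1)              # n1 = 3, n2 = 3, two empty periods
assert opt_revenue(v, b=4, a=a) == 3 + a  # one leftover unit goes to type-2
assert opt_revenue(v, b=2, a=a) == 2      # type-1 alone exhausts inventory
```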
\vmn{Next we} introduce \st{a}\vmn{our} \emph{partially \st{learnable}\vmn{predictable}} \vmn{demand arrival model}\st{ which} \vmn{that} works as follows.
The adversary determines an initial sequence which we denote by $\vmn{\vec{v}_I}=(v_{I,1},v_{I,2},\dots, v_{I,n})$, where $v_{I,j} \in \{0,a,1\}$, for $1 \leq j \leq n$.
However, a subset of customers will not follow this order. We call this subset the {\emph {{\st{predictable}\vmn{stochastic}}} group}, which we denote by ${{\vmn{\mathcal{S}}}}$.
Each customer joins the {{\st{predictable}\vmn{stochastic}}} group independently and with the same probability $p$.
Other customers are in the {\emph{\st{unpredictable}\vmn{adversarial}} group} denoted by $\vmn{\mathcal{A}}$.
Customers in the {{\st{predictable}\vmn{stochastic}}} group are permuted uniformly at random among themselves. Formally, a permutation $\sigma_{{{\vmn{\mathcal{S}}}}}:{{\vmn{\mathcal{S}}}}\rightarrow {{\vmn{\mathcal{S}}}}$ is chosen uniformly at random and determines the order of arrivals among the {{\st{predictable}\vmn{stochastic}}} group.
In the resulting overall arriving sequence, the {\st{unpredictable}\vmn{adversarial}} group follows the adversarial sequence according to $\vmn{\vec{v}_I}$, and those in the {{\st{predictable}\vmn{stochastic}}} group follow the random order given by $\sigma_{{{\vmn{\mathcal{S}}}}}$.
Given $\vmn{\vec{v}_I}$, we denote the {\em random} customer arrival sequence by $\vec{V}=(V_1,V_2,\dots , V_n)$,
and the {\em realization} of it by $\vec{v}=(v_1,v_2,\dots , v_n)$.
The example presented in Figure~\ref{figure:example} illustrates the arrival process.
The top row (gray nodes) shows the initial sequence ($\vmn{\vec{v}_I}$).
The middle row shows which customers belong to the {{\st{predictable}\vmn{stochastic}}} group (the black nodes) and which belong to the {\st{unpredictable}\vmn{adversarial}} group (the white ones).
The bottom row shows both the permutation $\sigma_{{{\vmn{\mathcal{S}}}}}$ and the actual arrival sequence.
In this example, ${{\vmn{\mathcal{S}}}}=\{2, 5, 6, 8\}$, and $\sigma_{{{\vmn{\mathcal{S}}}}}(2)=6, \sigma_{{{\vmn{\mathcal{S}}}}}(5)=2, \sigma_{{{\vmn{\mathcal{S}}}}}(6)=5$, and $\sigma_{{{\vmn{\mathcal{S}}}}}(8)=8$.
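The two-step randomization just described (independent membership in the stochastic group, then a uniform permutation among its members) can be simulated in a few lines; the Python sketch below illustrates the model and is not one of our algorithms.

```python
# Simulate one draw of the partially predictable arrival process:
# each customer joins the stochastic group S independently w.p. p,
# and a uniform permutation sigma_S rearranges the members of S
# among the time slots that S occupies in the initial sequence v_I.
import random

def sample_arrivals(v_I, p, rng=random):
    n = len(v_I)
    S = [j for j in range(n) if rng.random() < p]   # stochastic group
    perm = S[:]
    rng.shuffle(perm)                               # sigma_S, uniform over S
    v = list(v_I)
    for slot, cust in zip(S, perm):
        v[slot] = v_I[cust]                         # customer cust arrives in slot
    return v

random.seed(0)
a = 0.5
v_I = [a, 1, 0, a, 1, 0, 1, a]
v = sample_arrivals(v_I, p=0.5)
# the multiset of customers is preserved; only their order may change
assert sorted(v) == sorted(v_I)
```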
\begin{figure}
\centering
\psscalebox{0.95}{
\input{example}
}
\caption{Illustration of the customer arrival model} \label{figure:example}
\end{figure}
Note that the extreme cases $p = 0$ and $p=1$ correspond to the adversarial and random order models that have been studied before (e.g., \cite{Ball2009} and \cite{Agrawal2009b}, respectively). Hereafter, we assume that $0 < p < 1$. For a given $p \in (0,1)$, at any time over the horizon, we can use the number of past observed\st{class} \vmn{type}-$1$ (\st{class}\vmn{type}-$2$) customers to obtain \st{``rough'' estimate}\vmn{bounds} on the number of customers of each\st{class} \vmn{type} to be expected over the rest of the horizon.
This idea is formalized later in Subsection~\ref{subsec:concentration} along with further analysis of our model.
Having described the arrival process, we now define the competitive ratio of an online algorithm under the proposed partially\st{learnable} \vmn{predictable} model as follows:
\begin{definition}An online algorithm is $c$-competitive in the proposed partially\st{learnable} \vmn{predictable} model if for any adversarial instance $\vmn{\vec{v}_I}$, $$ \E{ALG(\vec{V})} \geq c OPT(\vmn{\vec{v}_I}),$$
where the expectation is taken over which customers belong to the {{\st{predictable}\vmn{stochastic}}} group (i.e., subset ${\vmn{\mathcal{S}}}$), the choice of the random permutation $\sigma_{{{\vmn{\mathcal{S}}}}}$, and any possibly randomized decisions of the online algorithm.
\label{def:comp}
\end{definition}
Note that $OPT(\vec{V})=OPT(\vmn{\vec{v}_I})$
and thus, in the above definition, $ \E{ALG(\vec{V})} \geq c OPT(\vmn{\vec{v}_I})$ is equivalent to $ \E{ALG(\vec{V})} \geq c \E{OPT(\vec{V})}$.
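The identity $OPT(\vec{V})=OPT(\vmn{\vec{v}_I})$ holds because the optimum depends only on the counts $n_1$ and $n_2$, which the shuffling of the stochastic group leaves unchanged; the short simulation below (illustrative only) confirms this on a sample instance.

```python
# OPT is invariant under the randomization of the arrival model,
# since it is a function of the counts n_1 and n_2 alone.
import random

a, b, p = 0.5, 3, 0.7
v_I = [1, a, a, 0, 1, a, 1, 0]

def opt_val(v):
    n1, n2 = v.count(1), v.count(a)
    return min(b, n1) + a * min(n2, max(b - n1, 0))

random.seed(1)
for _ in range(200):
    S = [j for j in range(len(v_I)) if random.random() < p]
    perm = S[:]
    random.shuffle(perm)
    v = list(v_I)
    for slot, cust in zip(S, perm):
        v[slot] = v_I[cust]
    assert opt_val(v) == opt_val(v_I)   # holds for every realization
```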
In Sections~\ref{sec:alg1} and~\ref{sec:alg2}, we present two online algorithms that perform well in the proposed partially\st{learnable} \vmn{predictable} model for various ranges of $b$ and $n$. Before introducing our online algorithms, in the following subsections we introduce a series of notations used throughout the paper and state a consequential concentration result that will allow us to partially\st{learn} \vmn{predict} future demand using past observed data.
\subsection{Notational Conventions}
\label{subsec:notation}
Throughout the paper, we use uppercase letters for random variables and lowercase ones for realizations. We have already used this convention in defining $\vec{V}$ vs. $\vec{v}$.
We normalize the time horizon to $1$, and represent time steps by $\lambda = 1/n, 2/n, \dots, 1$.
First, we introduce notations related to the random customer arrival sequence $\vec{V}$.
At any time step $\lambda$, for $j=1,2$, the number of\st{class} \vmn{type}-$j$ customers \emph{to be observed} by the online algorithms up to time $\lambda$ is denoted by $O_j(\lambda)$.
Further, we denote by $O_j^{\vmn{\mathcal{S}}}(\lambda)$ the number of\st{class} \vmn{type}-$j$ customers in the {{\st{predictable}\vmn{stochastic}}} group that arrive up to time $\lambda$ in $\vec{V}$.
Note that the online algorithm cannot distinguish between customers in the {{\st{predictable}\vmn{stochastic}}} group and customers in the {\st{unpredictable}\vmn{adversarial}} group. Therefore, the online algorithm does {\em not} observe the realizations of $O_j^{\vmn{\mathcal{S}}}(\lambda)$.
We denote realizations of $O_j(\lambda)$ and $O_j^{\vmn{\mathcal{S}}}(\lambda)$ by $o_j(\lambda)$ and $o_j^{\vmn{\mathcal{S}}}(\lambda)$, respectively.
Next, we introduce notations related to the initial adversarial sequence $\vmn{\vec{v}_I}$.
As discussed earlier, we denote the total number of\st{class} \vmn{type}-$j$ customers in $\vmn{\vec{v}_I}$ by $n_j$.
In addition, given the sequence $\vmn{\vec{v}_I}$, we denote the total number of\st{class} \vmn{type}-$j$ customers among the first $\lambda n$ customers by $\eta_j(\lambda)$.
Note that both $n_j$ and $\eta_j(\lambda)$ are deterministic.
Also, we define $\tilde o_j(\lambda) \triangleq (1-p) \eta_j(\lambda) + p\lambda n_j$ and $\tilde o^{\vmn{\mathcal{S}}}_j(\lambda) \triangleq p\lambda n_j$ which will serve as deterministic approximations for $O_j(\lambda)$ and $O^{\vmn{\mathcal{S}}}_j(\lambda)$, respectively (see Lemma~\ref{lemma:needed-centrality-result-for-m=2} and the subsequent discussion for motivation of this definition).
Here we return to the example
in Figure~\ref{figure:example} and review the notations.
Suppose $\lambda = 5/8$ and $p = 0.5$; in this example, looking at the bottom row that shows \vmn{the} sequence $\vec{v}$, we have: $o_1(5/8) = 3$, $o^{\vmn{\mathcal{S}}}_1(5/8)=1$, which are realizations of
random variables $O_1(5/8)$ and $O^{\vmn{\mathcal{S}}}_1(5/8)$, respectively. Looking at the top row that shows sequence $\vmn{\vec{v}_I}$, we have: $n_1=4$, $\eta_1(5/8) = 3$, $\tilde o_1(5/8) = 0.5\times 3 + 0.5\times 4 \times (5/8)$ = 2.75, and $\tilde o^{\vmn{\mathcal{S}}}_1(5/8) = 0.5\times 4 \times (5/8)=1.25$ that are all deterministic quantities.
Similarly, for\st{class} \vmn{type}-$2$ customers, $o_2(5/8) = 2$, $o^{\vmn{\mathcal{S}}}_2(5/8) = 1$, $n_2=2$, $\eta_2(5/8) = 1$, $\tilde o_2(5/8) = 0.5\times 1 + 0.5\times 2 \times (5/8)=1.125$, and $\tilde o^{\vmn{\mathcal{S}}}_2(5/8) = 0.5\times 2 \times (5/8)=0.625$.
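The numbers in this example follow mechanically from the definitions; the snippet below recomputes them (a plain arithmetic check, nothing more).

```python
# Recompute the deterministic approximations for the example of
# Figure "example" at lambda = 5/8 with p = 0.5:
#   tilde_o_j   = (1 - p) * eta_j(lambda) + p * lambda * n_j
#   tilde_o_S_j = p * lambda * n_j
p, lam = 0.5, 5 / 8
n1, eta1 = 4, 3    # type-1: total count and prefix count in v_I
n2, eta2 = 2, 1    # type-2: total count and prefix count in v_I

assert (1 - p) * eta1 + p * lam * n1 == 2.75
assert p * lam * n1 == 1.25
assert (1 - p) * eta2 + p * lam * n2 == 1.125
assert p * lam * n2 == 0.625
```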
For convenience of reference, in Table~\ref{table:notations-partial} we present a summary of the defined notations.
\begin{table}[htbp]
\centering
\caption{Notations}
\begin{tabular}{| c | l |}
\hline
$\vmn{\vec{v}_I}$& $\vmn{\vec{v}_I}=(v_{I,1}, v_{I,2}, \dots, v_{I,n})$,\st{adversarial } initial customer sequence \\\hline
${\vmn{\mathcal{S}}}$& subset of customers in the {{\st{predictable}\vmn{stochastic}}} group \\\hline
$\vmn{\mathcal{A}}$& subset of customers in the {\st{unpredictable}\vmn{adversarial}} group \\\hline
$\vec{V}$& $\vec{V}=(V_1,V_2, \dots, V_n)$, random customer arrival sequence \\\hline
$\vec{v}$& $\vec{v}=(v_1,v_2, \dots, v_n)$, a realization of $\vec{V}$ (what online algorithm actually observes)\\\hline
$n_j$ & number of\st{class} \vmn{type}-$j$, $j=1,2$, customers in $\vmn{\vec{v}_I}$ (which is the same as in $\vec{V}$)\\ \hline
$\lambda$ & normalized time: $\lambda =1/n, \dots, 1$ \\ \hline
$O_j(\lambda)$ & random number of\st{class} \vmn{type}-$j$ customers arriving up to time $\lambda$ \\ \hline
$o_j(\lambda)$ & a realization of $O_j(\lambda)$\\ \hline
$O^{\vmn{\mathcal{S}}}_j(\lambda)$ & random number of\st{class} \vmn{type}-$j$ customers in ${{\vmn{\mathcal{S}}}}$ arriving up to time $\lambda$ \\ \hline
$o^{\vmn{\mathcal{S}}}_j(\lambda)$ & a realization of $O^{\vmn{\mathcal{S}}}_j(\lambda)$ \\ \hline
$\eta_j(\lambda)$ & number of\st{class} \vmn{type}-$j$, $j=1,2$, customers among the first $\lambda n$ ones in $\vmn{\vec{v}_I}$ \\ \hline
$\tilde o_j(\lambda)$ & $(1-p)\eta_j(\lambda)+p \lambda n_j$ (a deterministic approximation of $O_j(\lambda)$) \\ \hline
$\tilde o^{\vmn{\mathcal{S}}}_j(\lambda)$ & $p \lambda n_j$ (a deterministic approximation of $O^{\vmn{\mathcal{S}}}_j(\lambda)$) \\
\hline
\end{tabular}\label{table:notations-partial}
\end{table}
Finally, to avoid carrying cumbersome expressions in the statement of our results for second-order quantities (e.g., bounds on approximation errors), we use the following approximation notations.
\begin{definition}
\label{def:bigO}
Suppose $f,g:{\mathcal{X}}\rightarrow {\mathbb{R}}$ are two functions defined on set $\mathcal{X}$. We use the notation $f = O(g)$ if there exists a constant $k$ such that $f(x) < k g(x)$ for all $x \in \mathcal{X}$.
\end{definition}
\begin{definition}
\label{def:smallO}
Suppose $f,g: {\mathbb{N}}\rightarrow {\mathbb{R}}$ are two functions defined on natural numbers. We use the notation $f = o(g)$ if $\lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} = 0$, and the notation $f = \omega(g)$ if $\lim_{n \rightarrow \infty} \lvert{\frac{f(n)}{g(n)}}\rvert = \infty$.
\end{definition}
\subsection{Estimating Future Demand}
\label{subsec:concentration}
At time $\lambda < 1$, upon observing $o_j(\lambda)$, $j=1,2$ (but not $n_j$ and $\eta_j(\lambda)$), we wish to estimate future demand, or equivalently the total demand $n_j$.
To make such an estimation, we establish the following concentration result:
\begin{lemma}\label{lemma:needed-centrality-result-for-m=2}
\vmn{Define constants $\alpha \triangleq 10 + 2 \sqrt{6}$, $\bar\epsilon \triangleq 1/24$, and $k \triangleq 16$.
For any $\epsilon \in [\frac{1}{n}, \bar\epsilon]$, with probability at least $1 - \epsilon$, all the following statements hold:
}
\begin{itemize}
\item If $n_1 \geq \frac{k}{p^2} \log n$, then for all $\lambda \in \{0,1/n, 2/n, \dots, 1\}$,
\begin{subequations}
\begin{align} & \left| O_1(\lambda)-\tilde o_1 (\lambda) \right| < \alpha \sqrt{n_1 \log n} \text{, and } \label{inequality:good-approximation-o_1}
\\ & \left| O_1(\lambda)+O_2(\lambda) - (\tilde o_1 (\lambda) +\tilde o_2(\lambda))\right| < \alpha \sqrt{ (n_1+n_2) \log n} \label{inequality:good-approximation-o_1+o_2}
\end{align}
\end{subequations}
\item If $n_2 \geq \frac{k}{p^2} \log n$, then for all $\lambda \in \{ 0,1/n, 2/n, \dots, 1\}$,
\begin{subequations}
\begin{align} & \left| O_2(\lambda)-\tilde o_2 (\lambda) \right| < \alpha \sqrt{n_2 \log n} \text{, and } \label{inequality:good-approximation-o_2}
\\ & \left| O^{\vmn{\mathcal{S}}}_2(\lambda)-\tilde o^{\vmn{\mathcal{S}}}_2 (\lambda) \right| < \alpha \sqrt{n_2 \log n}. \label{inequality:good-approximation-app--o^R_2}
\end{align}
\end{subequations}
\end{itemize}
\end{lemma}
The lemma is proved in Appendix~\ref{sec:proof-of-lemma-m=2}. Given that there are two layers of randomization (selection of subset ${\vmn{\mathcal{S}}}$ and the random permutation), proving the above concentration results requires a fairly delicate analysis that builds upon several existing concentration bounds. Because proving concentration results is not the main focus of our work, we will not outline the proof in the main text, and refer the interested reader to Appendix~\ref{sec:proof-of-lemma-m=2}.\footnote{\vmn{We present the values of the constants, defined in the statement of the lemma, only to clarify that they exist and do not depend on $n$; however, they are not optimized.}}
Here we focus on the following two questions: (i) what is our motivation for using deterministic approximations $\tilde o_j(\lambda)$ and $\tilde o^{\vmn{\mathcal{S}}}_j(\lambda)$? and (ii) how do such approximations help us to estimate $n_j$?
To answer the first question, let us count the number of type-$j$ customers in $O_j(\lambda)$ that belong to the stochastic and adversarial groups separately.
We start with the stochastic group.
Roughly, a total of $p n_j$ type-$j$ customers belong to the stochastic group, and a $\lambda$ fraction of them arrive by time $\lambda$, because these customers are spread almost uniformly over the entire time horizon.
As a result, approximately $p \lambda n_j$ type-$j$ customers from ${\vmn{\mathcal{S}}}$ arrive up to time $\lambda$.
Now we move on to the adversarial group: there are a total of $\eta_j(\lambda)$ type-$j$ customers among the first $\lambda n$ customers in $\vmn{\vec{v}_I}$.
Since each of them belongs to the adversarial group with probability $1-p$, the total number of type-$j$ customers from the adversarial group arriving up to time $\lambda$ is approximately $(1-p)\eta_j(\lambda)$. Combining these two approximate counting arguments gives us:
\begin{align}
\label{eq:estimate}
O_j(\lambda) \approx (1-p) \eta_j(\lambda) + p\lambda n_j = \tilde o_j(\lambda) .
\end{align}
A similar argument shows that $O_j^{\vmn{\mathcal{S}}}(\lambda)\approx p\lambda n_j = \tilde o^{\vmn{\mathcal{S}}}_j(\lambda)$. Lemma~\ref{lemma:needed-centrality-result-for-m=2} confirms that these approximations hold with high probability; it also provides upper bounds on the corresponding approximation errors. Further, we note that $\tilde o_j(\lambda) \neq \E{O_j(\lambda)}$, as shown in Appendix~\ref{subsec:remark}.
However, the difference between the two is very small and vanishes as $n$ grows. Given that $\tilde o_j(\lambda)$ provides a very intuitive deterministic
approximation for the random variable $O_j(\lambda)$ and admits a simple closed-form expression, we use it instead of $\E{O_j(\lambda)}$.
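As a quick sanity check on the approximation $O_j(\lambda)\approx \tilde o_j(\lambda)$, the counting argument above can be probed with a small Monte Carlo simulation. This is a simplification of the model (stochastic-group customers receive i.i.d.\ uniform arrival times rather than a joint random permutation), and all variable names are ours:

```python
import random

def simulate_o1(n, n1, p, lam, seed=0):
    """One draw of the observed demand O_1(lam) in a simplified arrival model:
    the adversary front-loads all n1 type-1 customers into slots 0..n1-1;
    each customer independently joins the stochastic group with prob. p and
    is then assigned a uniform arrival time on [0, 1]; otherwise it keeps
    its adversarial slot time i/n."""
    rng = random.Random(seed)
    count = 0
    for i in range(n1):
        t = rng.random() if rng.random() < p else i / n
        if t <= lam:
            count += 1
    return count

n, n1, p, lam = 20000, 8000, 0.5, 0.25
eta1 = min(int(lam * n), n1)              # type-1 arrivals among first lam*n adversarial slots
tilde_o1 = (1 - p) * eta1 + p * lam * n1  # deterministic approximation (here 3500)
print(simulate_o1(n, n1, p, lam), tilde_o1)
```

With these (hypothetical) parameters the draws concentrate around $\tilde o_1(\lambda) = 3500$, with fluctuations on the order of $\sqrt{n_1}$, in line with the lemma.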
Now let us answer the second question. There are simple relations between $n_j$ and $\eta_j(\lambda)$, such as $n_j \geq \eta_j(\lambda)$ and $\eta_j(\lambda) + (1-\lambda)n \geq n_j$.\footnote{The first inequality follows from the definition. The second also follows from the definition, together with the observation that the number of type-$j$ customers arriving between $\lambda$ and $1$ cannot exceed the number of remaining time steps, i.e., $(1-\lambda)n$.} Combining these with our deterministic approximations allows us to compute upper bounds on the total number of customers, as established in Lemma~\ref{prop:full-Ubounds}.
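To illustrate how these relations combine (ignoring the $O(\sqrt{n \log n})$ concentration error, and without claiming this is the exact bound of Lemma~\ref{prop:full-Ubounds}): substituting $\eta_j(\lambda) \geq n_j - (1-\lambda)n$ into $o_j(\lambda) \approx (1-p)\eta_j(\lambda) + p\lambda n_j$ and solving for $n_j$ gives a simple upper bound:

```python
def upper_bound_nj(o_j, n, p, lam):
    """Illustrative upper bound on the total type-j demand n_j, obtained by
    substituting eta_j(lam) >= n_j - (1 - lam) * n into
    o_j ~ (1 - p) * eta_j(lam) + p * lam * n_j and solving for n_j
    (the concentration error is ignored)."""
    return (o_j + (1 - p) * (1 - lam) * n) / (1 - p + p * lam)

# hypothetical numbers: n_j = 8000 customers, of which eta_j = 5000 appear
# in the first quarter of the adversarial sequence
n, p, lam = 20000, 0.5, 0.25
o_j = (1 - p) * 5000 + p * lam * 8000     # = tilde o_j = 3500
print(upper_bound_nj(o_j, n, p, lam))     # 17600.0, indeed an upper bound on 8000
```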
Finally, based on Lemma~\ref{lemma:needed-centrality-result-for-m=2}, we partition the sample space of arrival sequences into two subsets, $\mathcal{E}$ and its complement $\bar{\mathcal{E}}$, and define event $\mathcal{E}$ as follows:
\begin{definition}
\label{def:event}
Event $\mathcal{E}$ occurs if the realized arrival sequence $\vec{v}$ satisfies all the conditions of Lemma~\ref{lemma:needed-centrality-result-for-m=2}, i.e.,
\begin{itemize}
\item If $n_1 \geq \frac{k}{p^2} \log n$, then for all $\lambda \in \{0,1/n, 2/n, \dots, 1\}$, $$ \left| o_1(\lambda)-\tilde o_1 (\lambda) \right| < \alpha \sqrt{n_1 \log n} \quad \quad \text{and} \quad \quad \left| o_1(\lambda)+o_2(\lambda) - (\tilde o_1 (\lambda) +\tilde o_2(\lambda))\right| < \alpha \sqrt{ (n_1+n_2) \log n}, $$
\item If $n_2 \geq \frac{k}{p^2} \log n$, then for all $\lambda \in \{ 0,1/n, 2/n, \dots, 1\}$, $$ \left| o_2(\lambda)-\tilde o_2 (\lambda) \right| < \alpha \sqrt{n_2 \log n} \quad \quad \text{and} \quad \quad \left| o^{\vmn{\mathcal{S}}}_2(\lambda)-\tilde o^{\vmn{\mathcal{S}}}_2 (\lambda) \right| < \alpha \sqrt{n_2 \log n}.$$
\end{itemize}
\end{definition}
Lemma~\ref{lemma:needed-centrality-result-for-m=2} confirms that event $\mathcal{E}$ occurs {\em with high probability}. In all our analyses, we use the above definition to focus on the event that the deterministic approximations (i.e., $\tilde o_j (\lambda)$) are in fact ``very close'' to the observed sequence. This greatly simplifies the analysis and its presentation.
1411.7175
\section{Introduction}
Because they can be viewed as large atoms, colloids
have been used to perform detailed explorations of equilibrium
materials. Experimental studies of colloidal systems
have largely contributed to the development of
accurate statistical mechanics descriptions of
classical states of matter~\cite{Pusey86,Hansen2005}.
Recently, `active colloidal matter'
has emerged as a novel class of colloidal systems, which is currently
under intense scrutiny. Active colloids evolve
far from equilibrium because they are self-propelled, motorized or
motile objects, for which active forces compete with
interparticle interactions and thermal fluctuations.
This broad class of systems encompasses bacterial colonies,
epithelial tissues and specifically engineered, or `synthetic',
colloidal systems~\cite{Ball2013,Ramaswamy2010,Poon2013}.
Progress in particle synthesis and tracking capabilities is such that
frontier research in this field
has shifted from single particle studies to the understanding of
physical properties of bulk active materials. Such
large assemblies raise a number of basic
physical challenges, from elucidation of two-body
interactions to emerging many-body physics. In an effort to understand
the various phases of matter emerging in active colloids, we study
how self-propulsion affects the colloidal equation of state, and provide
a microscopic interpretation of our results.
Experimentally, active colloidal systems have been
mostly characterized so far through detailed single particle
studies~\cite{Paxton2005,Howse2007} in the dilute gas limit,
where particles only interact with the solvent. In this regime,
a mapping from nonequilibrium active systems to
equilibrium passive ones was established in the presence
of a gravitational field, where activity renormalizes the value of
the effective temperature~\cite{Tailleur2009,Palacci2010}.
Theoretical analysis shows that this experimental result
is not trivial, as the existence of a mapping to an effective
`hot' ideal gas might break down for different
geometries~\cite{Tailleur2009,Szamel2014}
and also depends sensitively on the detailed modelling of
self-propulsion~\cite{Szamel2014,Encul}.
Much less is understood at finite densities
when particle interactions and many-body effects
cannot be neglected. Several new phases of active matter
have been observed experimentally in synthetic
self-propelled colloids~\cite{Teurkauff2012,Palacci2013,Buttionni20013},
from active cluster phases at relatively low density to
gel-like solids and phase separated systems at larger density,
suggesting that active colloids tend to aggregate when
self-propulsion is increased. These observations have
triggered a number of theoretical studies~\cite{Ball2013b},
which have suggested several physical mechanisms for
particle aggregation. In particular, particles may aggregate in regions
of large density, because self-propulsion is hindered
by steric effects~\cite{Tailleur2008,fily2012,redner2013,Levis2014}.
Clustering is also observed when more complex
couplings between motility, density, and reactant
diffusion~\cite{Mognetti2013,stark2014,Encul}
are taken into account. Motility-induced adhesion
can be strong enough to induce a nonequilibrium phase separation
between a dense fluid and a dilute gas, as observed
in some recent studies, but complete phase separation does not
always occur in simulations~\cite{Levis2014} or
experiments~\cite{Teurkauff2012,Palacci2013} on active systems.
Here, we characterize the phase behaviour and equation
of state of a system of active colloids
and of a model of self-propelled hard disks, extending
to nonequilibrium suspensions the classical sedimentation studies
pioneered by Jean Perrin~\cite{perrin}.
While theory and simulations have recently started to explore the equation
of state of simple models of active particles~\cite{valeriani,manning,brady},
there is so far
no corresponding experimental investigation of the pressure-density
relation in active particle systems. We find that
activity modifies the equation of state in a way
which can be described by the introduction of both an effective
temperature for the dilute system, and of an effective adhesive
interaction at finite density. From the known equation of state
for adhesive disks at equilibrium, we extract an effective
adhesion between active particles and find a unique scaling law
relating activity to self-propulsion in both
experiments and simulations.
\section{Sedimentation experiments in
nonequilibrium active suspensions}
Sedimentation is a simple yet powerful tool to study
colloidal suspensions, because it allows a continuous exploration of
the phase behaviour of the system without fine-tuning of the
volume fraction~\cite{perrin,piazzareview}. This technique
has been used to explore phase diagrams and equation of state
for several types of
suspensions~\cite{Biben92,Biben93,piazza1,Chaikin96,piazza,durian}.
Sedimentation allows us to extend previous work on active colloids performed
in the dilute regime~\cite{Palacci2010} and at
low density~\cite{Teurkauff2012}, to a much broader range
of densities.
Experimentally, we study colloidal particles which are self-propelled through
phoretic effects. We use gold Janus
microspheres with one half covered with platinum. When immersed
in a hydrogen peroxide bath the colloids convert chemical energy
into active motion. The average radius
obtained from scanning electron microscopy
measurements (SEM) is $R=1.1\pm 0.1~\mu$m, but image analysis
at large density indicates an effective radius between 1.34 and
1.44~$\mu$m, decreasing slightly with activity. In the following,
we assume that $R$ is constant, with the value given by the SEM measurements.
Due to gravity, the Janus colloids (mass density $\rho \sim 11$~g.cm$^{-3}$)
form a bidimensional monolayer at the bottom of the observation chamber.
We tilt the system
by a small angle $\theta \sim 10^{-3}$~rad along the $z$-direction
as shown in the inset of Fig.~\ref{sedim}a, so that the gravity
field felt by the particles is reduced to $g \sin \theta$.
To study the effect of activity on the sedimentation, we measure the density
profiles in the $z$-direction for various mass concentrations in
H$_2$O$_2$, from 0 to $10^{-2}$~w/w. This controls the level of particle
self-propulsion, which can be quantified by the translational
P\'eclet number $P_e$, defined as: $P_e=\frac{{Rv}}{{D_0}}$,
where $D_0$ is the diffusion coefficient of the
colloids in the absence of activity, and $v$ is the average velocity of
free microswimmers (see Methods).
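For orientation, the order of magnitude of $D_0$ and of the swim speeds implied by these P\'eclet numbers can be sketched with a Stokes-Einstein estimate. This assumes the solution viscosity is close to that of water; the paper instead obtains $D_0$ and $v$ from the Methods:

```python
import math

# Stokes-Einstein estimate of D_0, and the swim speed v = Pe * D_0 / R
# implied by a given Peclet number (illustrative values only)
kB, T0 = 1.380649e-23, 300.0   # Boltzmann constant (J/K), bath temperature (K)
eta_w = 1.0e-3                 # viscosity, Pa*s (assumed close to water)
R = 1.1e-6                     # particle radius, m (SEM value from the text)

D0 = kB * T0 / (6 * math.pi * eta_w * R)   # ~2e-13 m^2/s
for Pe in (8, 14, 20):
    v = Pe * D0 / R                        # swim speed in m/s
    print(Pe, v * 1e6)                     # a few micrometers per second
```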
Notice that the present active particle system was
previously studied only at a single density~\cite{Teurkauff2012}, and
the experimental setup to study its sedimentation profiles
differs from earlier experimental studies~\cite{Palacci2010}.
\begin{figure}
\includegraphics[width=8.5cm]{figure1.pdf}
\caption{{Imaging sedimentation profiles (color online).}
Snapshots of (a) phoretic gold-platinum Janus colloids and (b)
self-propelled hard disks under gravity for activity increasing
from left to right.
(a) Four different activities are shown,
corresponding to P\'eclet numbers $P_e = 0$ (passive case),
8, 14, and 20 (from left to right). Inset shows a
schematic view of the experimental setup. The scale bar is 20~$\mu$m.
(b) Simulation snapshots for persistence times
$\tau=0$ (passive disks), 1, 10, 100 (from left to right).}
\label{sedim}
\end{figure}
We also conduct simulations of sedimentation
profiles within a simple model of self-propelled
hard disks~\cite{Berthier2013,Levis2014}, see Fig.~\ref{sedim}b.
The model is simpler than the experiment in two key respects. Firstly, it
uses a purely hard core repulsion between particles. This is
a reasonable choice, because the pair interaction between the Janus colloids
has not been characterized in detail, and this avoids
the introduction of multiple control parameters. Secondly,
the phoretic mechanism behind self-propulsion is not simulated,
and particle activity is implemented directly into Monte-Carlo equations of
motion~\cite{Levis2014}, which generalize the algorithm
used for Brownian hard disks to introduce self-propulsion.
In the model, the activity is controlled by a single
parameter, the persistence time $\tau$ (equivalent to a rotational
P\'eclet number) determining the
crossover time between ballistic and diffusive regimes in the dilute
limit~\cite{Levis2014}. The intensity of the gravity force $G$
controls the sedimentation process (see Methods).
The phase behaviour and dynamics of the model without gravity have been
carefully studied before~\cite{Berthier2013,Levis2014},
but the equation of state has never been analysed.
In Fig.~\ref{sedim}, we show the sedimentation images for both the
experimental system and the numerical model, for four different
activity levels. The relevance of sedimentation studies
is immediately clear, as the system continuously
evolves from a very dilute suspension at the top, to very dense
configurations at the bottom. Therefore, a single experiment
explores at once a broad range of densities for a given level of activity.
Despite its simplicity, the numerical model reproduces
qualitatively the various regimes observed for Janus colloids.
For both systems, the passive case reveals equilibrium configurations from a
dilute fluid to a dense homogeneous amorphous phase, while active systems
exhibit much richer structures. The dilute gas spreads
over larger altitudes, finite size clusters are observed
at moderate densities, gel-like configurations are found at larger densities,
and dense, arrested phases exist at the bottom of the cell, as revealed
by these images. These various phases have been
carefully analysed in the numerical model~\cite{Berthier2013,Levis2014},
but only the cluster phase was studied experimentally
before~\cite{Teurkauff2012}. In the following we shall record
and analyse quantitatively the equation of state of both systems over
a range of densities where dynamics is not arrested and steady state
conditions can be achieved.
\section{Sedimentation profiles: dilute
limit and effective temperature}
\begin{figure*}
\includegraphics[width=17cm]{figure2.pdf}
\caption{{Experimental density profiles (color online)}.
(a) Density profiles for various activities, arbitrarily
shifted horizontally so that $z_0$ corresponds to $\phi = 0.2$.
Inset: evolution of ${T}_{\textrm{eff}}$ as a function of the P\'eclet
number, together with the theoretical expectation,
$T_{\rm eff}/T_0 = 1 + \frac{2}{9} Pe^2$.
(b) Zoom on the dilute phase $\phi \ll 1$ using a semi-log scale,
with the associated exponential fit,
from which we extract the value of the effective temperature.}
\label{profile}
\end{figure*}
Our basic physical observable is the density profile $\phi(z)$
measured along the direction of the gravity from the images
shown in Fig.~\ref{sedim} (see Methods).
In Fig.~\ref{profile}a we show the density profiles obtained
experimentally for different activities, which confirms the
large range of densities explored in each experiment.
We focus in Fig.~\ref{profile}b on the dilute phase
at small density (large $z$) and observe that for $\phi \lesssim 0.05$
the density evolves exponentially with the position
with a decay rate evolving continuously with the activity. We have
obtained similar exponential profiles in the simulations.
The experimental results suggest that in the dilute limit,
the system behaves as an ideal gas with an activity-dependent
effective temperature, ${T}_{\textrm{eff}}$, such that
$\phi(z) \sim \exp \left[ - {m g z \sin \theta}/(k_B
T_{\rm eff}) \right]$ where $m$ is the mass of a colloid
and $k_B$ the Boltzmann constant.
From the linear fits shown in Fig.~\ref{profile}b, we extract the value
of $k_B T_{\textrm{eff}} / (m g \sin \theta)$ for various activities.
These measurements are more precise for $P_e$ above 10,
as the dilute phase extends over larger distances;
$k_B T_0/(m g \sin \theta)$ is then evaluated from
the evolution of $k_B T_{\textrm{eff}} / (m g \sin \theta)$ with $P_e$.
Overall, our results in this regime agree
with earlier work on dilute suspensions~\cite{Palacci2010},
and the theoretical expression, $T_{\rm eff} / T_0 = 1 + \frac{2}{9}{Pe}^2$
is recovered, see inset of Fig.~\ref{profile}a.
Quantitatively, we observe that $T_{\rm eff}$ increases
from the thermal bath temperature $T_{\rm eff} \approx T_0 \approx 300$~K
for passive colloids (from which we determine the tilt angle
$\theta \approx (8 \pm 2)\times 10^{-3}$~rad) to a maximum value of
about $T_{\rm eff} \approx 3\times 10^{4}$~K for the most active system.
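As a consistency check, the quoted temperatures follow directly from the expression $T_{\rm eff}/T_0 = 1 + \frac{2}{9} P_e^2$; a minimal sketch (function names are ours):

```python
def t_eff_ratio(pe):
    """Effective-to-bath temperature ratio, T_eff/T0 = 1 + (2/9) * Pe**2."""
    return 1.0 + (2.0 / 9.0) * pe ** 2

T0 = 300.0                      # thermal bath temperature (K), as quoted
for pe in (0, 8, 14, 20):       # Peclet numbers of the snapshots in Fig. 1a
    print(pe, round(t_eff_ratio(pe) * T0))
```

For $P_e = 20$ this gives $T_{\rm eff} \approx 2.7\times 10^{4}$~K, consistent with the quoted maximum of about $3\times 10^{4}$~K.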
In the simulations, we find similarly that $\phi(z)$
decays exponentially with $z$. We have verified numerically that
the dependence on the gravity field $G$ is given by
$\phi(z) \sim \exp [ - z G / (k_B T_{\rm eff}) ]$, which
defines an effective temperature.
Exponential decay is obeyed over about 4 decades for
$\phi \in [10^{-6}, 10^{-2}]$. The lower bound stems from
statistical accuracy, but the upper bound emerges because profiles
become non-exponential when density is large enough for many-body
interactions to play a role. Numerically,
$T_{\rm eff}$ is directly proportional to the persistence
time of the self-propulsion, $T_{\rm eff} \propto \tau$.
Therefore, self-propelled hard disks under gravity also behave
at low density, $\phi \ll 10^{-2}$, as an ideal gas with an
effective temperature
different from the bath temperature, $T_{\rm eff} > T_0$.
To reinforce this view, we performed additional simulations in the
absence of gravity where we measured in the dilute limit $\phi \to 0$ both the
self-diffusion coefficient $D_s$ (from the long-time limit of the mean-squared
displacement) and the mobility $\mu$ (measured from the long-time limit of the
response to a constant force in the linear response regime).
The diffusion constant scales
with the persistence time, $D_s \propto \tau$, as expected~\cite{Levis2014},
while the mobility is found to be independent of $\tau$, so that
$D_s / \mu \propto \tau$.
Our simulations indicate that the equality
\begin{equation}
k_B T_{\rm eff} = {D_s}/{\mu}
\label{einstein}
\end{equation}
holds quantitatively, within our statistical accuracy.
Equation (\ref{einstein}) states
that the same effective temperature controls the sedimentation profiles
and the (effective) Einstein relation between diffusion and mobility.
Hence, it is justified to describe self-propelled particles
as an `effective ideal gas'. These conclusions are far from trivial, as they
do not hold for all types of active particles
(run-and-tumble bacteria being a relevant
counterexample~\cite{Tailleur2009}), while the mapping to an ideal gas
may break down in more complex geometries even for
self-propelled particles~\cite{Szamel2014}.
Finally, note that $T_{\rm eff}$ is a single particle quantity,
which is conceptually distinct from the (collective) effective
temperature emerging in dense glassy regimes~\cite{BK2013}.
\section{Nonequilibrium equation of state
in active suspensions}
\begin{figure*}
\includegraphics[width=17cm]{figure3.pdf}
\caption{{Equation of state and compressibility factor (color online)}.
Equation of state $P/(k_B T_0)$ versus
linear density $\phi$ for experiment (a) and simulations (b).
In (a) the inset is a zoom on the dilute phase, where lines
represent the best linear fit from which we recover $T_{\rm eff}$.
(c) Compressibility factor versus $\phi$ for the experiment. Black points:
passive case; black dashed curve: empirical equation of state for hard disks.
Colored dashed lines are obtained from Eq.~(\ref{viriel}).
(d) Compressibility factor versus $\phi$ for the simulations, with
dashed lines obtained from the Percus-Yevick closure for the two-dimensional
Baxter model.
Experiments: $T_{\textrm{eff}}/T_0= 1$, 5, 15, 34, 47, 62, 87 (bottom to top).
Simulations: $\tau = 0, 1, 3, 10, 30, 100, 300, 1000$ (bottom to top).}
\label{pressionrho}
\end{figure*}
Provided we measure density profiles in steady state conditions,
and using the sole assumption of mechanical equilibrium~\cite{Biben92},
we can convert the measured profiles
into a pressure measurement,
$P(z) = \Pi(z)-\Pi_{0}$, where $\Pi_{0}$ is the pressure at the top of
the observation cell and $ \Pi(z)= \frac{m g \sin \theta}{\pi R^2 }
\int_{z}^{L} dz' {\phi}(z')$. We can then represent the parametric evolution of
$P(z)$ with $\phi(z)$ by varying $z$, which gives direct access
to the nonequilibrium equation of state $P(\phi)$.
To our knowledge, there is no previous experimental report of such
quantities, which have only very recently been discussed
in theoretical work~\cite{valeriani,manning,brady}.
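A minimal numerical sketch of this inversion (with illustrative parameter values, not the experimental ones) shows that an ideal exponential profile indeed maps onto the effective ideal-gas law $P = \rho k_B T_{\rm eff}$:

```python
import numpy as np

# Illustrative parameters (arbitrary units, not the experimental values)
kB_Teff = 2.0          # k_B * T_eff
f_g = 0.5              # gravitational force per particle, m * g * sin(theta)
area = 1.0             # particle area, pi * R**2
ell = kB_Teff / f_g    # sedimentation length of the dilute gas

z = np.linspace(0.0, 40.0 * ell, 20001)
phi = 0.01 * np.exp(-z / ell)          # dilute exponential density profile

# Pi(z) = (f_g / area) * integral_z^L phi(z') dz'   (mechanical equilibrium)
dz = z[1] - z[0]
P = (f_g / area) * np.flip(np.cumsum(np.flip(phi))) * dz

rho = phi / area                       # number density from area fraction
Z = P / (rho * kB_Teff)                # compressibility factor vs. T_eff
print(Z[0], Z[10000])                  # close to 1, as for an ideal gas
```

At finite density the measured $Z(\phi)$ departs from this ideal value, which is precisely what the analysis below exploits.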
In Figs.~\ref{pressionrho}a,b
we present the outcome of this analysis, and show
the evolution of the normalised osmotic pressure $P/({k}_{B}{T}_0)$
for various activities in experiments and simulations.
In Figs.~\ref{pressionrho}c,d we replot the same set of data
to examine how the compressibility factor
$Z = {P} / (\rho {k}_{B} {T}_{0})$, with $\rho$ the number density,
depends on density $\phi$. This
second representation offers a finer perspective
on the nonequilibrium equation of state, as the ideal gas
behaviour at low $\phi$ is scaled out.
In the dilute regime, we recover the `effective'
ideal gas behavior, $P = \rho k_B T_{\rm eff}$, which directly follows
from integration of the exponential profiles of
Fig.~\ref{profile}. This linear dependence of $P$ with $\phi$
is more carefully examined in Figs.~\ref{pressionrho}c,d,
as it translates into a finite value for $Z(\phi \to 0)$, namely
$Z(\phi) \to T_{\rm eff} / T_0$, as observed. The data in
Figs.~\ref{pressionrho}c,d thus provide a simple, direct measurement
of the effective temperature in active suspensions.
More interesting is the behaviour at finite density,
which has not been explored experimentally before. Both
experimental and numerical data indicate that the functional form of
the equation of state changes continuously
as the activity increases, in a way that
cannot be accounted for solely by the introduction of the effective
temperature. This is not surprising, as the images in Fig.~\ref{sedim}
show that the structure of the system at finite density changes
dramatically with activity.
To confirm this, we first analyse the data for passive systems.
In that case, the equation of state can be well
described by an empirical equation of state for hard
disks~\cite{Santos1995,note}. The agreement with
hard disks is not guaranteed in experiments, because it
is not obvious that (passive) Janus colloids interact solely through
hard-core repulsion, but this seems to be a good approximation.
In the simulation, Fig.~\ref{pressionrho}d (black curves),
we show that the data for passive disks agree very well with the
equilibrium Percus-Yevick equation of state. This is expected, because
the system becomes for $\tau=0$ an equilibrium fluid of hard disks.
When activity increases, experiments and simulations
can no longer be described by the equilibrium hard disk equation of state.
Qualitatively, the compressibility
factor $Z(\phi)$ increases more weakly with $\phi$ for moderately
active particles than for the passive system, and we observe that
$Z(\phi)$ even decreases with $\phi$ for
more active systems at low $\phi$, so that $Z(\phi)$ actually becomes
a non-monotonic function of $\phi$ for large
activity~\cite{valeriani,manning,brady}.
In equilibrium systems, such a behaviour represents
a direct signature of adhesive interactions~\cite{piazza}.
This suggests that the strong clustering observed in
Fig.~\ref{sedim} in active particle systems directly impacts
the equation of state, which takes a form reminiscent of
equilibrium colloidal systems with attractive interactions.
\section{Determination of effective adhesion
induced by self-propulsion}
The idea that self-propulsion provides a mechanism for
inducing effective attractive forces in purely repulsive
systems of active particles has recently
emerged~\cite{Buttionni20013,Tailleur2008,brady,epl2013}.
Because we have direct access to the equation of state
in our systems of active particles, we are in a unique position to
study the similarity between the equation of state of nonequilibrium
active particles and equilibrium adhesive disks. If successful,
we can then quantify the strength of the effective adhesion
between particles which is induced by the self-propulsion mechanism.
To this end, we compare the equation of state
obtained for active particles to the equilibrium equation
of state of a system of adhesive particles. We have analysed
the equation of state of the Baxter model of
adhesive particles in two dimensions~\cite{baxter}.
This is a square-well potential with a hard-core repulsion
for $r < \sigma$, and a short-range attraction of range $\sigma+\delta$
and depth
\begin{equation}
V(\sigma < r < \sigma+\delta) = - k_B T \ln \left(
\frac{\sigma+\delta}{4 \delta} A \right),
\end{equation}
where $A$ is a nondimensional number quantifying the adhesion strength,
and $V(r > \sigma+\delta)=0$.
The model is defined such that the adhesive limit can be taken
smoothly: $\delta \to 0$ and $V(r=\sigma^+) \to -\infty$ while the second
virial coefficient remains finite and is uniquely controlled by
the dimensionless
adhesion parameter $A$~\cite{baxter,noro}. The purely repulsive
hard disk limit is recovered for $A \to 0$.
For this model, the first two nontrivial virial
coefficients are known analytically~\cite{Post1986}:
\begin{equation}
Z=\frac{{P}}{\rho {k}_{B} {T}}=1+b_1\phi+b_2\phi^2+{\cal O}(\phi^3),
\label{viriel}
\end{equation}
with $b_1=2-A$, $b_2=\frac{25}{8}-\frac{25}{8}A+\frac{4}{3}A^2-0.122A^3$.
When $A$ increases, the compressibility factor $Z$ in Eq.~(\ref{viriel})
changes from the monotonic hard disk behaviour to a non-monotonic
density dependence for $A>2$ as the initial slope given by $b_1$
then becomes negative.
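The sign change of the initial slope can be checked directly from the quoted coefficients; a short sketch (the function name is ours):

```python
def Z_baxter(phi, A):
    """Truncated virial series for 2D adhesive (Baxter) disks:
    Z = 1 + b1*phi + b2*phi**2, with b1 = 2 - A and
    b2 = 25/8 - (25/8)*A + (4/3)*A**2 - 0.122*A**3."""
    b1 = 2.0 - A
    b2 = 25.0 / 8.0 - 25.0 / 8.0 * A + 4.0 / 3.0 * A ** 2 - 0.122 * A ** 3
    return 1.0 + b1 * phi + b2 * phi ** 2

# initial slope b1 changes sign at A = 2: hard-disk-like growth of Z
# for A < 2, adhesion-dominated initial decrease for A > 2
for A in (0.0, 1.0, 2.0, 3.0):
    print(A, Z_baxter(0.1, A))
```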
The above expression provides a reasonable
description of the experimental data, provided we carefully
adjust $A$ for each activity. This
provides a direct estimate of the `effective adhesion'
induced by the self-propulsion, which thus quantifies
the nonequilibrium clustering of self-propelled particles
using a concept drawn from equilibrium physics. In practice,
we fit the experimental compressibility factor at intermediate densities
($0.04<\phi<0.4$) with the expression $Z = \alpha \left( 1+b_1\phi+b_2\phi^2
+{\cal O}(\phi^3) \right)$, adjusting both the effective adhesion $A$ and
a prefactor $\alpha$, which accounts for the effective temperature.
Within error bars, we obtain $\alpha \approx T_{\rm eff}/T_0$,
confirming the robustness of the analysis.
We have also verified that the uncertainty on
the determination of $\phi$ (due to the uncertainty in
particle diameter) has a negligible impact on the measured $A$ values.
As shown in Fig.~\ref{ATeff}, we find that the
effective adhesion $A$ increases with self-propulsion.
\begin{figure}
\includegraphics[width=8.5cm]{figure4.pdf}
\caption{{Quantifying motility-induced adhesion (color online)}.
Evolution of adhesive parameter $A$ obtained from the equation of
state with the effective temperature obtained in the dilute limit
for both experiments (circles) and simulations (squares).
The lines correspond to the scaling law $A \sim \sqrt{T_{\rm eff}/T_0}$,
with slightly different prefactors for simulations and experiments.}
\label{ATeff}
\end{figure}
We carry out a similar analysis for the compressibility factors
obtained numerically for self-propelled hard disks. Because the
numerical data are statistically accurate over a broad density range,
we solve the full Percus-Yevick closure relation to obtain the equation
of state of the bidimensional Baxter model,
and adjust $A$ to obtain the best agreement between
self-propelled and adhesive disks, as shown in Fig.~\ref{pressionrho}d.
As observed in the experiments, we again obtain an adhesion
$A$ that increases with self-propulsion, see Fig.~\ref{ATeff}.
To analyse the evolution of the effective adhesion with activity,
it is useful to represent $A$ as a function of $T_{\rm eff}/T_0$.
This representation is convenient because it only involves
dimensionless quantities, and $T_{\rm eff}/T_0$
efficiently quantifies self-propulsion with no reference to
its detailed physical origin ($T_{\rm eff}$ is
proportional to the diffusion constant of noninteracting particles).
Remarkably, we observe a similar scaling relation
for both experiments and simulations between
adhesion and effective temperature, namely $A \sim \sqrt{T_{\rm eff}/T_0} $.
\section{Physical discussion}
The scaling law relating $A$ and $T_{\rm eff}$ suggests that
self-propelled disks at finite density do not simply
behave as a `hot' suspension of hard disks, but rather as a
hot suspension of adhesive disks with a simple
relation between adhesion and effective temperature.
A plausible argument for the emergence of such a scaling
is to compare the
contact duration for active particles, controlled
by the rotational P\'eclet number (or persistence time),
to a bonding time in the context of the Baxter model,
$t_{\rm bond} \sim \exp(-V/k_BT) \sim A$. This simple
argument suggests, however, an incorrect proportionality
between $A$ and $T_{\rm eff}$. This demonstrates that the scaling
of the active adhesion follows from a more complex interplay
between bonding time and structural evolution induced by the self-propulsion.
To rationalize the observed scaling,
we have compared quantitatively the structure
obtained with self-propelled disks at finite density to
computer simulations of the equilibrium Baxter model of adhesive
disks~\cite{Post1986}. In particular, we find that
the degree of clustering (quantified by the average cluster size)
is the same in both models when $A$ and $\sqrt{T_{\rm eff}}$ increase
proportionally, which gives further support to the
results found in Fig.~\ref{ATeff}. In previous work,
it was shown that the average cluster size increases
as $\sqrt{T_{\rm eff}}$ in our numerical model~\cite{Levis2014}.
This $\sqrt{T_{\rm eff}}$ scaling law is actually captured by a kinetic model
of reversible aggregation~\cite{Levis2014}, which assumes
that clusters result from a competition between particle
aggregation (when a self-propelled particle collides with an existing cluster)
and escape from the cluster surface (when the direction of the
self-propulsion for a surface particle becomes oriented towards the exterior).
While this physical model is exactly in the spirit of the above
scaling argument based on a bonding timescale, the kinetic model
is able to capture the many-body physics
of particle exchanges between dynamic clusters responsible for the observed
connection between adhesion and activity.
The results in Fig.~\ref{ATeff} indicate that, while simulations and
experiments obey the same scaling,
the experimental adhesive parameter is systematically larger.
This suggests that a second mechanism could be at work in
experiments. Following Refs.~\cite{Teurkauff2012,stark2014}
we may attribute it to the chemical (phoretic) interactions between colloids.
Since the fuel powering phoretic motion is depleted around the colloids,
this may induce a diffusio-phoretic
attraction between colloids.
From the force balance between colloids, this interaction can be quantified
in terms of an attraction energy which is
directly proportional to the chemical consumption rate, and therefore
to the self-propelled velocity~\cite{Teurkauff2012}.
Altogether, this suggests an attraction energy
scaling like $A \sim \mathrm{Pe} \sim \sqrt{T_{\rm eff}}$, as observed.
Our experiments and simulations therefore point to an
additive combination of kinetic and phoretic contributions to the
experimentally observed effective attraction.
Our approach provides a quantitative way to account for
the clustering commonly observed in self-propelled particle
systems as it suggests that active particles effectively
behave as an equilibrium system of hot adhesive particles,
at least in the regime explored experimentally in this work.
In the experiments, we
notice that the effective adhesion $A$ seems to saturate with the
effective temperature, see Fig.~\ref{ATeff}. This could be linked
to the fact that in our experiments we do not observe
a macroscopic phase separation. In adhesive spheres, phase separation
occurs for larger adhesion values, $A > 10$~\cite{baxter},
while clustering is observed~\cite{Post1986}
for adhesion values corresponding to our experiments.
Finally, it is remarkable that
despite the complex mechanisms responsible for
self-propulsion at the micro-scale, the experimental equation of state
is very well reproduced in a very simple model of self-propelled
hard disks.
This suggests that Janus active colloids represent an interesting
experimental model system to study
the statistical physics of `active colloidal matter'.
\section*{ACKNOWLEDGMENTS}
F. Ginot and I. Theurkauff contributed equally to this work.
We thank T. Biben, H. L\"owen, and G. Szamel
for fruitful discussions, B. Ab\'ecassis for the
synthesis of the gold colloids and INL for access to its clean room facility,
A. Ikeda for his help in the numerical integration of the Percus-Yevick
equation.
The research leading to these results has received funding
from the European Research Council under the European Union's Seventh
Framework Programme (FP7/2007-2013) / ERC Grant agreement No 306845.\\
\section{Introduction}
\label{sec:intro}
The s-process is responsible for creating about half of the elements
heavier than iron that are observed in the solar system\
\citep{SNE08}. This process involves the slow capture of neutrons
(slower than the average $\beta$-decay rate of unstable nuclei) onto
seed material, hence nucleosynthesis follows the nuclear valley of
stability. By considering the solar system abundances of s-only nuclei
(that is, nuclei that can only be produced in the s-process) it can be
shown that there are two key components of the s-process: the ``main''
component and the ``weak'' component\ \citep{KAP11}. The main
component produces s-nuclei with masses of $A>90$, while the weak
component enriches the s-nuclei abundances at $A \lesssim 90$.
The main component of the s-process arises from neutron captures
during He-burning in $M \leq 4 M_{\odot}$ Asymptotic Giant Branch
(AGB) stars (a detailed discussion of nuclear burning in AGB stars can
be found in Refs.~\cite{AGBBook} and \cite{HER05}). In low mass (0.8 to 4\
$M_{\odot}$) AGB stars of solar metallicity, most neutrons are released
through the $^{13}$C($\alpha$,n)$^{16}$O reaction during the
inter-pulse period, while the \an reaction produces an additional
burst of neutrons during thermal pulses. This burst of neutrons
affects mainly the branchings in the s-process path. In
intermediate-mass AGB stars ($M>4M_{\odot}$), where the temperatures
are expected to be higher, the \an reaction is thought to be the main
source of neutrons and could explain the enhancement of rubidium seen
in some metal poor AGB stars
\citep{PIG05,GAR06,LUG08,GAR09,KAR12}.
In addition to s-process elements, the \an and \ag rates influence the
relative production of \nuc{25}{Mg} and \nuc{26}{Mg}, whose abundance
ratios can be measured to high precision in circumstellar
(``presolar'') dust grains. Magnesium is also one of the few elements
for which the isotopic ratios (\nuc{25}{Mg}/\nuc{24}{Mg} and
\nuc{26}{Mg}/\nuc{24}{Mg}) can be derived from stellar spectra (for
example, Refs.~\cite{YON03a,YON03b}). However, \textcite{KAR06} showed
that with their estimated \an and \ag reaction rate uncertainties, the
relative abundances of \nuc{25}{Mg} and \nuc{26}{Mg} predicted by
their stellar models can vary by up to 60\%.
The weak component of the s-process arises from nuclear burning in
massive stars. The core temperature in these stars (typically with $M
\gtrsim 11M_{\odot}$) becomes high enough during He-burning for the
\an reaction to produce a high flux of neutrons shortly before the
helium fuel is exhausted. Any remaining \nuc{22}{Ne} releases a second
flux of neutrons during convective carbon shell burning. The s-process
yield in these stars is therefore sensitive to the temperature at
which the \an reaction starts to produce an appreciable flux of
neutrons. \textcite{THE07} showed that the s-process during the core
He-burning stage in massive stars depends strongly on both the \Nepa
and the \reaction{16}{O}{n}{$\gamma$}{17}{O} reaction rates. They also
found that not only are the overall uncertainties in the rates
important, but also the temperature dependence of the
rates.
The \Nepa reactions also affect nucleosynthesis in other
astrophysical environments. During type II supernova explosions, two
$\gamma$-ray emitting radionuclides, \nuc{26}{Al} and \nuc{60}{Fe} are
ejected,
and their abundance ratio provides a sensitive constraint on stellar
models \cite[][and references therein]{WOO07}. The species
\nuc{60}{Fe} is mainly produced in massive stars by neutron captures
during convective shell carbon burning\ \citep[e.g.,][]{PIG10}. Its
abundance, therefore, depends strongly on the \Nepa rates. The \Nepa
rates also play a role in type Ia supernovae. Throughout the
``simmering'' stage, roughly 1000 years prior to the explosion,
\textcite{PIR09} suggested that neutrons released by the \an reaction
affect the carbon abundance, thus altering the amount of \nuc{56}{Ni}
produced (i.e., the peak luminosity) in the explosion. \textcite{TIM03}
also found that during the explosion, neutronisation by the \an
reaction affects the electron mole fraction, $Y_e$, thus influencing
the nature of the explosion.
In this work we will evaluate new reaction rates for
\mbox{\Nepa}. Compared to previous results\ \citep{ANG99,JAE01,KAR06}
our new rates are significantly improved because (i) we incorporate
all the recently obtained data on resonance fluorescence absorption,
$\alpha$-particle transfer etc., and (ii) we employ a sophisticated
(Monte Carlo) method to estimate the rates and associated
uncertainties.
We have recently presented new \Nepa rates in Ref.\ \cite{ILI10a}, but did
not give a detailed account of their calculation. Since the latter
results were published, we found, and could account for, a number of
inconsistencies in data previously reported in the literature. In
addition, new data from Ref.\ \cite{DEB10} became available, which have been
included in the present work. Thus the rates presented here supersede
our earlier results\ \citep{ILI10a}.
The paper is organised as follows: in Sec.\ \ref{sec:formalism} the
Monte Carlo method used to calculate reaction rates is discussed in
detail. This method is described fully
elsewhere\ \citep{LON10} but will be summarised to show its
applicability to the specific cases of the \Nepa reactions. The \Nepa
rate calculations and comparisons with the literature will
be presented in Sec.\ \ref{sec:rates}. The reaction rates will then be
used to present new nucleosynthesis yields along with their
uncertainties in Sec.~\ref{sec:results}. Conclusions will be
presented in Sec.\ \ref{sec:conclusions}.
\section{Reaction Rate Formalism}
\label{sec:formalism}
\subsection{Thermonuclear Reaction Rates}
\label{sec:rates-formalism}
The reaction rate per particle pair in a plasma of temperature, $T$,
is given by
\begin{equation}
\label{eq:rates-reacrate}
\langle\sigma v\rangle = \sqrt{\frac{8}{\pi \mu}}
\frac{1}{(kT)^{3/2}}\int_{0}^{\infty}E\sigma(E)e^{-E/kT} dE
\end{equation}
where $\mu$ is the reduced mass of the reacting particles,
$\mu=M_{0}M_{1}/(M_0+M_1)$; $M_i$ denotes the masses of the
particles; $k$ is the Boltzmann constant; $E$ is the centre-of-mass
energy of the reacting particles; and $\sigma(E)$ is the reaction
cross section at energy, $E$.
The strategy for determining reaction rates from Eq.\
(\ref{eq:rates-reacrate}) depends on the nature of the cross section.
In many cases the cross section can be separated into non-resonant and
resonant parts. Reactions such as \Nepa proceed through the compound
nucleus \nuc{26}{Mg} at relatively high excitation energy ($Q_{\alpha
\gamma}=10614.787 (33)$\ keV\ \citep{AME03}) and are frequently
dominated by resonant capture. The non-resonant part of the cross
section will, therefore, be neglected in the following discussion. The
reader is referred to Refs.\ \cite{ILIBook} and \cite{LON10} for more
details.
The resonant part of the cross-section can be represented in one of
two ways: (i) by narrow resonances, whose partial widths can be
assumed to be approximately constant over the resonance width, and
(ii) by wide resonances, for which the
resonant cross section must be integrated numerically to account for
the energy dependence of the partial widths involved. The reaction
rates per particle pair for single, isolated narrow and wide
resonances, respectively, are given by
\begin{equation}
\label{eq:rates-narrowrate}
\langle\sigma v\rangle = \left(\frac{2\pi}{\mu
kT}\right)^{3/2}\hbar^2 \omega\gamma e^{-E_r/kT}
\end{equation}
and
\begin{equation}
\label{eq:rates-rr-resonance}
\langle\sigma v\rangle =
\frac{\sqrt{2\pi}\hbar^2}{(\mu kT)^{3/2}}\omega \int_{0}^{\infty}
\frac{\Gamma_a(E)\Gamma_b(E)}{(E-E_r)^2 + \Gamma(E)^2/4} e^{-E/kT} dE
\end{equation}
where the resonance strength, $\omega \gamma$ is defined by
\begin{equation}
\label{eq:rates-ResStrength}
\omega \gamma = \omega \Gamma_a \Gamma_b/\Gamma,
\end{equation}
$E_r$ is the resonance energy; $\Gamma_a(E)$, $\Gamma_b(E)$, and
$\Gamma(E)$ are the energy-dependent entrance channel (particle)
partial width, exit channel partial width, and total width,
respectively; and $\omega$, the statistical spin factor, is defined
by $\omega = (2J+1)/(2J_0+1)(2J_1+1)$, where $J$ and $J_i$ are the
resonance and particle spins, respectively. The particle partial
width, $\Gamma_c$, can be written as the product of an
energy-independent reduced width, $\gamma_c^2$, and an
energy-dependent penetration factor, $P_c(E)$, as
\begin{equation}
\label{eq:rates-GammaDefinition}
\Gamma_{c} = 2 P_c(E) \gamma_c^2.
\end{equation}
For the present case of $^{22}$Ne$+\alpha$, the entrance channel
($\alpha$-particle) reduced width, $\gamma^2_{\alpha}$, is related to
the $\alpha$-particle spectroscopic factor, $S_{\alpha}$, by
\begin{align}
\label{eq:S-reduced-width-relation1}
\gamma^2_{\alpha} &= \frac{\hbar^2}{\mu a^2} \theta^2_{\alpha} \\
\label{eq:S-reduced-width-relation2}
&= \frac{\hbar^2}{2 \mu a} S_{\alpha}\phi^2(a)
\end{align}
where $\phi(a)$ is the single-particle radial wave function at the
channel radius, $a$~\citep[see, for example,][]{ILI97,BEL07}. The
constant $\hbar^2/(\mu a^2)$ is the Wigner Limit (in the notation of
Lane and Thomas~\cite{LAN58}). It can be regarded as an upper limit,
according to the sum rules in the dispersion theory of nuclear
reactions, i.e., $\theta^2_{\alpha} \leq 1$. Note that it is
frequently assumed that $\theta^2=S$, which must be regarded as a
crude approximation only. For example, in the case of $^{17}$O levels
it was shown in Ref.~\cite{KEE03b} that $S_{\alpha}$ exceeds
$\theta^2_{\alpha}$ by at most a factor of 2. We will return to these
issues in Sec. \ref{sec:spectro-factors}.
The above relationships are useful since they allow for an estimation
of the important $\alpha$-particle partial widths from spectroscopic
factors obtained in $\alpha$-particle transfer reactions, as will be
discussed later. It is important to note that the value of
$S_{\alpha}$ depends on the parameters of the nuclear potentials
assumed in the transfer data analysis. Similarly, the value of
$\gamma^2_{\alpha}$ depends on the channel radius. However, if,
throughout the analysis, consistent values of these parameters (such
as $a$) are used, their impact on the value of $\Gamma_{\alpha}$ will
be strongly reduced.
\subsection{Monte Carlo Reaction Rates}
\label{sec:rates-montecarlo}
The equations outlined in Sec.\ \ref{sec:rates-formalism} provide the
tools for calculating thermonuclear reaction rates given available
estimates for the cross section parameters ($E_r$, $\omega\gamma$,
etc.). A problem arises, however, when statistically rigorous
uncertainties of the reaction rates are desired. What is usually
presented in the literature are recommended rates, together with upper
and lower ``limits'', but the reported values are not derived from a
suitable probability density function. Therefore, the reported values
have no rigorous statistical meaning. An attempt to construct a
method for analytical uncertainty propagation of reaction rates was
made by \textcite{THO99}. However, their method is applicable only in
special cases, when the uncertainties in resonance parameters are
relatively small. \textcite{THO99} were also not able to treat the
uncertainty propagation for reaction rates that need to be integrated
or for rates that include upper limits on some parameters. For these
reasons, a Monte Carlo method is used in the present study to
calculate statistically meaningful reaction rates.
The general strategy of Monte Carlo uncertainty\footnote{Throughout
this work, care is taken to refer to the terms \textit{uncertainty}
and \textit{error} correctly. The term \textit{error} refers to a
quantity that is believed to be incorrect, whereas
\textit{uncertainty} refers to the spread estimate of a parameter.}
propagation is as follows: (i) randomly sample from the probability
density distribution of each input parameter; (ii) calculate the
reaction rates for each randomly sampled parameter set on a grid of
temperatures (using the same set at each temperature); (iii) repeat
steps (i)-(ii) many times (on the order of 5000). Steps (i)--(iii)
will result in a distribution at each temperature grid point that can
be interpreted as the probability density function of the reaction
rate. Extraction of uncertainties from this distribution will be
discussed later. While input parameter sampling is being performed,
care must be taken to consider correlations in parameters. For
example, particle partial widths depend on the penetration factor,
which is an energy dependent quantity. The individual energy samples
must, therefore, be propagated consistently through resonance energy
and partial width estimation in order to fully account for the
correlation of these quantities. The code \texttt{RatesMC}\
\citep{LON10} was used to perform the Monte Carlo sampling and to
analyse the probability densities of the total reaction rates.
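A minimal sketch of steps (i)--(iii), assuming hypothetical resonance parameters and the numerical narrow-resonance rate form (this is illustrative only, not the \texttt{RatesMC} code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs for two narrow resonances; none of these are
# evaluated values.  Energies in MeV (Gaussian), strengths in MeV
# (lognormal with a common factor uncertainty).
e_r  = np.array([0.60, 0.83])      # resonance energies
de_r = np.array([0.010, 0.005])    # 1-sigma energy uncertainties
wg   = np.array([1e-10, 4e-8])     # resonance strengths (median values)
f_wg = 1.4                         # factor uncertainty -> sigma = ln(1.4)

mu_amu = 22.0 * 4.0 / 26.0
t9_grid = np.array([0.2, 0.3, 0.5])
n_samples = 5000

rates = np.empty((n_samples, t9_grid.size))
for i in range(n_samples):
    # (i) sample each input parameter from its probability density ...
    e_s  = rng.normal(e_r, de_r)
    wg_s = wg * rng.lognormal(0.0, np.log(f_wg), size=wg.size)
    # (ii) ... and evaluate the total rate on the temperature grid,
    # using the SAME parameter set at every temperature (correlations).
    for j, t9 in enumerate(t9_grid):
        rates[i, j] = np.sum(1.5399e11 / (mu_amu * t9) ** 1.5
                             * wg_s * np.exp(-11.605 * e_s / t9))

# (iii) percentiles of the resulting distribution at each temperature
# give the low / recommended (median) / high rates.
low, med, high = np.percentile(rates, [16, 50, 84], axis=0)
```

Because the same energy sample enters the Boltzmann factor at every grid temperature, the correlation between resonance energy and rate is propagated automatically.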
In order to apply Monte Carlo sampling to calculate reaction rate
uncertainties, sampling distributions must be chosen for each input
parameter. Once a reaction rate (output) distribution has been
computed, an appropriate mathematical description must be found to
present the result in a convenient manner. Statistical distributions
important for reaction rate calculations are described in detail in
Refs.\ \cite{LON10} and \cite{LON10T}, and are summarised briefly
below.
Uncertainties of resonance energies are determined by the \textit{sum}
of different contributions. In this case, the central limit theorem of
statistics predicts that resonance energies are Gaussian distributed.
Note that there is a finite probability of calculating a negative
resonance energy and that this choice of probability density naturally
accounts for the inclusion of sub-threshold resonances in the above
formalism.
A resonance strength or a partial width, on the other hand, is
experimentally derived from the \textit{product} of measured input
quantities (e.g., count rates, stopping powers, detection
efficiencies, etc.). In such a case the central limit theorem predicts
that resonance strengths or partial widths are lognormally
distributed.
The lognormal probability density for a resonance strength or a
partial width is given by
\begin{equation}
\label{eq:rates-lognormal}
f(x) = \frac{1}{\sigma \sqrt{2 \pi}} \frac{1}{x} e^{-(\ln x -
\mu)^2/(2 \sigma^2)}
\end{equation}
with the lognormal parameters $\mu$ and $\sigma$ representing the mean
and standard deviation of $\ln{x}$. These quantities are related to
the expectation value, $E[x]$, and variance, $V[x]$, by
\begin{equation}
\label{eq:rates-lognormpars}
\mu = \ln(E[x]) - \frac{1}{2}\ln\left(1+\frac{V[x]}{E[x]^2}\right),
\quad \sigma = \sqrt{\ln\left(1+\frac{V[x]}{E[x]^2}\right)}
\end{equation}
The quantities $\ln(E[x])$ and $\sqrt{V[x]}$ can be associated with
the central value and uncertainty, respectively, that are commonly
reported. Note that a lognormal distribution is only defined for
positive values of $x$. This feature is crucial because it removes the
finite probability of sampling unphysical, negative values when
Gaussian uncertainties are used. This is especially true for partial
width measurements, which frequently have uncertainties in the 20-50\%
range. Note, also, that a 50\% Gaussian uncertainty results in a 3\%
probability of the partial width having a value below zero.
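The conversion of a reported central value and uncertainty into lognormal parameters, and the contrast with Gaussian sampling, can be sketched as follows (illustrative numbers only):

```python
import numpy as np

def lognormal_pars(mean, sd):
    """Lognormal mu, sigma from the expectation value E[x] and sqrt(V[x]),
    following the relations quoted in the text."""
    s2 = np.log(1.0 + (sd / mean) ** 2)
    return np.log(mean) - 0.5 * s2, np.sqrt(s2)

rng = np.random.default_rng(0)
mean, sd = 1.0, 0.5                      # a 50% relative uncertainty
mu, sigma = lognormal_pars(mean, sd)

x_logn  = rng.lognormal(mu, sigma, 200000)   # strictly positive samples
x_gauss = rng.normal(mean, sd, 200000)       # can wander below zero

# Fraction of unphysical (negative) Gaussian samples: ~2.3% for a
# 2-sigma offset, of the order quoted in the text.
frac_negative = (x_gauss < 0).mean()
```

By construction the lognormal samples reproduce the reported expectation value while never taking unphysical negative values.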
The important problem of estimating reaction rates when only upper
limits of resonance strengths or partial widths are available will now
be discussed. The standard practice in nuclear
astrophysics~\citep[see, for example,][]{ANG99,ILI01} is to adopt 10\%
resonance strength upper limit values for the calculation of the
\textit{recommended} total rates. ``Lower limits'' or ``upper
limits'' of rates are then derived by completely excluding or by
adopting the full upper limit, respectively, for all resonance
strengths. This procedure is questionable for two reasons. First,
without further knowledge, it is implicitly assumed that the
probability density for the resonance strength is a uniform
distribution extending from zero to the upper limit value. The
implication is that the \textit{mean value} of the resonance strength
amounts to half of its upper limit value. This conclusion contradicts
fundamental nuclear physics, as will be explained below. Second, the
derived ``upper limit'' and ``lower limit'' on the total reaction rate
are usually interpreted as sharp boundaries. This conclusion is also
unphysical, as will be explained below.
The strength of a resonance depends on particle partial widths, which
can be expressed in terms of reduced widths, $\gamma^2$, or,
alternatively, spectroscopic factors, $S$ (see
section~\ref{sec:rates-formalism}).
These quantities depend on the overlap between the incoming channel
($a+A$) and the compound nucleus final state, which in turn depends on
a nuclear matrix element.
If the nuclear matrix element has contributions from many different
parts of configuration space, and if the signs of these contributions
are random, then the central limit theorem predicts that the
probability density of the transition amplitude will tend toward a
Gaussian distribution centred at zero. The probability density of the
reduced width, representing the \textit{square} of the amplitude, is
then given by a chi-squared distribution with one degree of
freedom. These arguments were first presented by \textcite{POR56} and
this probability density is also known as the Porter-Thomas
distribution. For a particle channel it can be written as
\begin{equation}
\label{eq:PT}
f(\theta^2) = \frac{c}{\sqrt{\theta^2}}e^{-\theta^2/(2 \hat{\theta}^2)}
\end{equation}
where $c$ is a normalisation constant, $\theta^2$ is the dimensionless
reduced width, and $\hat{\theta}^2$ is the local mean value of the
dimensionless reduced width. The distribution implies that the reduced
width for a given nucleus and set of quantum numbers varies by several
orders of magnitude, with a higher probability the smaller the value
of the reduced width. The Porter-Thomas distribution emerges naturally
from the Gaussian orthogonal ensemble of random matrix theory and is
well established experimentally (see Ref.\ \cite{WEI09} for a recent
review)\footnote{Recently, a high precision study of neutron partial
widths in platinum ($A=192, 194$) by \textcite{KOE10} and a
re-analysis of the Nuclear Data Ensemble ($A=64 - 238$) in Ref.\
\cite{KOE11} have claimed that the data are not well described by a
$\chi^2$ distribution with one degree of freedom ($\nu=1$, i.e., a
Porter-Thomas distribution). They find, depending on the data set
under consideration, values between $\nu=0.5$ and $\nu=1.2$. These
new results are controversial and more studies are needed before the
issue can be settled. It is not clear at present if this controversy
has any implications for the compound nucleus \nuc{26}{Mg}.}.
The above discussion provides a physically sound method for randomly
sampling reduced widths (or spectroscopic factors) if only an upper
limit value is available. Furthermore, in the present work we
assume a sharp truncation of the Porter-Thomas distribution at the
upper limit value for the dimensionless reduced width,
$\theta^2_{ul}$, that is, we randomly sample over the probability
density
\begin{equation}
\label{eq:rates-portthom}
f(\theta) =\left\{\begin{array}{ll}
\dfrac{c}{\sqrt{\theta^2}}e^{-\theta^2/(2
\hat{\theta}^2)} &\mbox{ if }\theta^2 \leq \theta_{ul}^2\\
0 &\mbox{ if }\theta^2>\theta_{ul}^2 \end{array}\right.
\end{equation}
Once dimensionless reduced widths are obtained from sampling according
to equation~(\ref{eq:rates-portthom}), samples of particle partial
widths can be found from equation~(\ref{eq:rates-GammaDefinition}).
Subsequently, samples of resonance strengths can be determined from
equation~(\ref{eq:rates-ResStrength}).
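The sampling of an upper limit can be sketched as follows, assuming a simple rejection scheme for the truncated Porter-Thomas density of equation~(\ref{eq:rates-portthom}) (illustrative, not the \texttt{RatesMC} implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_theta2(theta2_mean, theta2_ul, n):
    """Sample dimensionless reduced widths from a Porter-Thomas density
    (chi-squared with one degree of freedom, local mean theta2_mean),
    truncated sharply at the upper limit theta2_ul, by rejection."""
    out = np.empty(0)
    while out.size < n:
        # A chi^2(1) variate is the square of a standard normal deviate.
        cand = theta2_mean * rng.standard_normal(4 * n) ** 2
        out = np.concatenate([out, cand[cand <= theta2_ul]])
    return out[:n]

# 0.010 is the mean value adopted in the text; the upper limit is made up.
theta2 = sample_theta2(0.010, 0.04, 100000)

# Small reduced widths are far more probable than large ones:
frac_small = (theta2 < 0.010).mean()
```

Each $\theta^2$ sample would then be converted to a partial width via $\Gamma_c = 2P_c(E)\gamma_c^2$ and on to a resonance strength, as described above.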
In order to utilise equation~(\ref{eq:rates-portthom}) for Monte Carlo
sampling of $\alpha$-particle partial widths, the mean value of the
dimensionless reduced width, $\hat{\theta}_{\alpha}^2$, must be
known. To this end we considered 360 $\alpha$-particle reduced widths
in the A=20-40 mass region \citep[see][and references
therein]{DRA94}. The distribution is shown in
figure~\ref{fig:SpecFacts} as a black histogram. Binning and fitting
the data to equation~(\ref{eq:PT}) (solid line) results in a best-fit
value of $\hat{\theta}_{\alpha}^2=0.010$, which we adopt in the
present work. It is important to recall the above arguments: the
distribution of reduced widths for a given nucleus, given orbital
angular momentum, given channel spin, etc., is expected to follow a
Porter-Thomas distribution. However, because of the relatively small
sample size of 360 values, we were compelled to fit the entire set by
disregarding differences in nuclear mass number and orbital angular
momentum. For this reason, our derived mean value of 0.010 must be
regarded as preliminary. More reliable estimates of
$\hat{\theta}_{\alpha}^2$ have to await the analysis of a
significantly larger data set of $\alpha$-particle reduced widths when
it becomes available in the future.
From the arguments presented above it should also be clear that the
Porter-Thomas distribution is not expected to represent the reduced
width of all nuclear levels, particularly if the amplitude is
dominated by a few large contributions of configuration space. The
most important example of the latter situation is $\alpha$-cluster
states, which are expected to have relatively large reduced
widths. Indeed, the large reduced width values in
figure~\ref{fig:SpecFacts} that are not described by the Porter-Thomas
distribution (solid line) originate most likely from $\alpha$-cluster
states. Clearly, the nuclear structure of a level in question must be
considered carefully. For this reason, results from $\alpha$-particle
transfer studies are very important. It can be argued that these
measurements populate preferentially $\alpha$-cluster states, with
large reduced widths (or spectroscopic factors), while levels not
populated in $\alpha$-transfer have small reduced widths and,
therefore, are more likely statistical in nature (i.e., described by a
Porter-Thomas distribution). This issue will become important in later
sections.
Once a random sampling of all input parameters has been performed, an
ensemble of reaction rates is obtained. From its probability density
one can extract descriptive statistics (mean, median, variance
etc.). For the recommended reaction rate, we adopt the median value.
The median is a useful statistic because exactly half of the
calculated rates lie above this value and half below. Note that we do
not use the mean value because it is strongly affected by outliers in
the reaction rate distribution. The low and high reaction rates are
obtained by assuming a 68\% coverage probability. There are several
methods for obtaining these coverage probabilities, such as finding
the coverage that minimises the range of the uncertainties, or one
that is centred on the median. In the present work, the
$16^{\text{th}}$ to 84$^{\text{th}}$ percentiles of the cumulative
reaction rate distribution are used. We emphasise an important point
regarding reaction rate uncertainties: contrary to previous work, our
``low'' and ``high'' rates do not represent sharp boundaries (i.e., a
probability density of zero outside the boundaries). As with any other
continuous probability density function, these values depend on the
assumed coverage probability, i.e., assuming a larger coverage will
result in a larger uncertainty of the total reaction rate (this is
further illustrated in Figs.~\ref{fig:Uncerts_ag}
and~\ref{fig:Uncerts_an}). The important point here is that the Monte
Carlo sampling results in ``low'' and ``high'' rates for which the
coverage probability can be quantified precisely.
Although a low, high and median rate are useful quantities, they do
not necessarily contain all the information on the rate probability
density. For application of a reaction rate to nucleosynthesis
calculations, therefore, it is useful to approximate the rate
probability density by a simple analytical approximation. It was shown
in \cite{LON10} that in most (but not all) cases the reaction rate
probability density is well approximated by a lognormal distribution
(equation~\ref{eq:rates-lognormal}). The lognormal parameters $\mu$
and $\sigma$ can be found from the sampled total rates at each
temperature according to
\begin{equation}
\label{eq:rates-lognormPars}
\mu = E[\ln(y)],\qquad \sigma^2 = V[\ln(y)]
\end{equation}
where $E[\ln(y)]$ and $V[\ln(y)]$ denote the expectation value and
variance of the natural logarithm of the total rate, $y$,
respectively. A useful measure of the applicability of a lognormal
approximation to the actual sampled distribution is provided by the
Anderson-Darling statistic\footnote{The Anderson-Darling
statistic~\citep{AND54} is more useful than a $\chi^2$ statistic
because it does not require binning of the data. The latter usually
results in a loss of information.}, which is calculated from
\begin{equation}
\label{eq:rates-AD}
t_{AD} = -n - \sum_{i=1}^n\frac{2i-1}{n}\left(\ln F(y_i) +
\ln\left[1-F(y_{n+1-i})\right]\right)
\end{equation}
where $n$ is the number of samples, $y_i$ are the sampled reaction
rates at a given temperature (arranged in ascending order), and $F$ is
the cumulative distribution of a standard normal function (i.e., a
Gaussian centred at zero). An A-D value greater than unity indicates a
deviation from a lognormal distribution. However, it was found by
\textcite{LON10} that the rate distribution does not \emph{visibly}
deviate from lognormal until A-D exceeds $t_{AD} \approx 30$. The A-D
statistic is presented in Tabs.~\ref{tab:agRate} and~\ref{tab:anRate}
along with the reaction rates at each temperature in order to provide
a reference to the reader.
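A direct transcription of equation~(\ref{eq:rates-AD}), applied to standardised $\ln(y_i)$ samples, might look as follows (a sketch; the sample sets here are synthetic):

```python
import numpy as np
from math import erf, sqrt

def anderson_darling(samples):
    """A-D statistic of the log-samples against a standard normal,
    after standardising with the sample mean and standard deviation."""
    y = np.sort(np.log(samples))                  # ascending order
    z = (y - y.mean()) / y.std(ddof=1)
    # F: cumulative distribution of a standard normal
    F = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in z]))
    n = z.size
    i = np.arange(1, n + 1)
    return -n - np.sum((2 * i - 1) / n
                       * (np.log(F) + np.log(1.0 - F[::-1])))

rng = np.random.default_rng(3)
# Lognormal samples: ln(y) is normal, so t_AD should be small.
t_lognormal = anderson_darling(rng.lognormal(0.0, 0.3, 2000))
# Strongly skewed samples: ln(y) is far from normal, so t_AD is large.
t_skewed = anderson_darling(rng.exponential(1.0, 2000))
```

A small value signals that the lognormal approximation to the rate distribution is adequate; a large value flags temperatures where it is not.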
\subsection{Extrapolation of Experimental Reaction Rates to Higher Temperatures}
\label{sec-2_4}
Experimental rates usually need to be extrapolated to high
temperatures with the aid of theoretical models because resonances are
only measured up to some finite energy,
$E_{\text{max}}^{\text{exp}}$. If the effective stellar burning energy
window\ \citep{NEW08} extends above this energy, the rate calculated
using the procedure outlined above will become inaccurate. Statistical
nuclear reaction models must, therefore, be used to extrapolate the
experimental rates beyond this temperature. The method used here is
described in detail in Ref.\ \cite{NEW08}. It uses the following strategy:
(i) an {\textit{effective thermonuclear energy range}} (ETER) is
defined using the 8$^{\text{th}}$, 50$^{\text{th}}$, and
92$^{\text{nd}}$ percentiles of the cumulative distribution of
fractional reaction rates (i.e., the relative contribution of single
resonances at temperature $T$ divided by the total reaction rate at
$T$); (ii) the temperature, $T_{\text{match}}$, beyond which the total
rate must be extrapolated is estimated from
\begin{equation}
\label{eq:HFMatch} E(T_{\text{match}}) + \Delta
E(T_{\text{match}}) = E^{\text{exp}}_{\text{max}}
\end{equation}
where $\Delta E(T_{\text{match}})$ is the width of the
ETER calculated from the 8$^{\text{th}}$ and 92$^{\text{nd}}$ rate
percentiles.
We adopt the Hauser-Feshbach rates of Ref.\ \cite{RAU00} for
temperatures beyond $T_{\text{match}}$, normalised to the experimental
rate at $T_{\text{match}}$.
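The matching step can be sketched as follows, with purely hypothetical rate values; normalising the Hauser-Feshbach rate at $T_{\text{match}}$ forces the composite rate to be continuous there:

```python
import numpy as np

# Hypothetical rates: an "experimental" rate valid only up to T_match and
# a Hauser-Feshbach (HF) rate on the full grid.  Numbers are illustrative.
t9       = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
rate_exp = np.array([2.0e-8, 6.0e-3, 1.5e1, np.nan, np.nan])
rate_hf  = np.array([1.5e-8, 5.0e-3, 1.0e1, 2.0e2, 4.0e3])

i_match = 2                          # index of T_match on the grid
norm = rate_exp[i_match] / rate_hf[i_match]

# Below T_match keep the experimental rate; above it, use the HF rate
# rescaled so the two agree exactly at T_match.
rate = np.where(t9 <= t9[i_match], rate_exp, norm * rate_hf)
```

In practice $T_{\text{match}}$ is not chosen by hand but follows from the ETER criterion of equation~(\ref{eq:HFMatch}).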
\section{The \Nepa Reactions}
\label{sec:rates}
\subsection{General Aspects}
\label{sec:general-aspects}
The \an ($Q_{\alpha n}=-478.296 (89)$\ keV) and \ag ($Q_{\alpha
\gamma}=10614.787 (33)$\ keV) reactions are both important in
s-process neutron production. While the \an reaction produces
neutrons, the \ag reaction also influences the neutron flux by
directly competing for available $\alpha$-particles. The rates of both
reactions will therefore be presented here. The centre-of-mass energy
region of interest to the s-process amounts to $E_{cm}=600 \pm 300$~keV,
corresponding to excitation energies of $E_{x}=10900 - 11500$~keV
in the \nuc{26}{Mg} compound nucleus. Note that only states of
``natural'' parity (i.e., $0^+, 1^-, 2^+$, etc.) can be populated via
\Nepa (because both target and projectile have spin-parities of
$0^+$).
Since the early 1980s, several direct measurements of both reactions
have been performed close to the energy region of interest\
\citep{WOL89,HAR91,DRO91,DRO93,GIE93,JAE01}. All of these
measurements, with the exception of Ref.\ \cite{GIE93}, were made
using gas targets at the Institut f\"{u}r Strahlenphysik in Stuttgart,
Germany\ \citep[e.g.,][]{HAM98}. The lowest energy resonance measured
in those works is located at $E_{r}^{\text{lab}} \approx 830$\ keV,
near the high energy end of the astrophysically important region. The
structure of the \nuc{26}{Mg} compound nucleus near the
$\alpha$-particle and neutron thresholds has been investigated
previously via neutron capture \citep{SIN74,WEI76}, scattering
\citep{FAG75,MOS76,TAM03}, photoexcitation \citep{BER84,CRA89,SCH09},
transfer \citep{GLA86,YAS90,GIE93,UGA07}, and photoneutron
measurements \citep{BER69}. In particular, the latter study observed
the strong population of a \nuc{26}{Mg} level near \Ex{11150}, with
presumed quantum numbers of J$^{\pi}=1^-$, corresponding to an
expected low-energy resonance at E$_{cm}=450$~keV. It was believed to
have been observed by \textcite{DRO91} and\ \textcite{HAR91} at
\Erlab{630}, but the presumed signal was later shown to be caused by
background from the $^{11}$B($\alpha$,n)$^{14}$N
reaction. Nevertheless, the anticipated contribution from this
low-energy resonance has strongly influenced all past estimates of the
\Nepa reaction rates. For example, it was shown by \textcite{THE00}
that it has a strong impact on s-process nucleosynthesis in massive
stars. However, recent \ensuremath{^{26}}Mg\ensuremath{(\Vec{\gamma},\gamma)^{26}}Mg studies by~\textcite{LON09} demonstrated
unambiguously that this particular level has unnatural parity
(\Jpi{1}{+}) and, therefore, cannot be populated via $\alpha$-particle
capture on \nuc{22}{Ne}.
Studies that provide new experimental information relevant to
$^{22}$Ne$+\alpha$, obtained after the NACRE compilation was
published~\citep{ANG99}, are summarised in
table~\ref{tab:data-since-NACRE}. The goal of the following discussion
is to consider all the available experimental information for states
in \nuc{26}{Mg} of interest to s-process nucleosynthesis and to assign
these levels to corresponding resonances in both \ag and
$^{22}$Ne($\alpha$,n)$^{25}$Mg. This allows for an estimation of the
partial and total resonance widths, resulting in more accurate \Nepa
reaction rates. A number of levels in \nuc{26}{Mg} near the
$\alpha$-particle and neutron thresholds have unknown spin-parities
and partial widths. These levels have been disregarded in all previous
reaction rate estimates. Since it is not known at present whether any
of these are natural-parity states and, therefore, may be populated in
$^{22}$Ne$+\alpha$, they cannot be easily included in a Monte Carlo
reaction rate analysis. Thus, our strategy is as follows:
we will first derive \Nepa Monte Carlo rates by excluding these levels
of unknown spin-parities. Subsequently, we will investigate their
impact on the total reaction rates {\textit{under the extreme
assumption that all of these levels possess natural parity}}. As
will be seen below, future measurements of these states are highly
desirable. Throughout the following discussion, energies are presented
in the centre of mass frame unless otherwise stated.
\begin{table*}
\centering
\begin{tabular}{cc|l}
\hline \hline
Reference & Reaction Studied & Comments \\ \hline
\textcite{JAE01} & \an & Resonances between \Erlab{570} and \Erlab{1450} \\
\textcite{KOE02} & $^{\text{nat}}$Mg(n,$\gamma$) & $E_{x}$, $J^{\pi}$, $\Gamma_{\gamma}$, $\Gamma_{n}$ for states corresponding to \Erlab{570} to \Erlab{1000} \\
\textcite{UGA07} & \reaction{22}{Ne}{$^{6}$Li}{d}{26}{Mg} & $J^{\pi}$ and $S_{\alpha}$ for two states below neutron threshold \\
\textcite{LON09} & \reaction{26}{Mg}{$\Vec{\gamma}$}{$\gamma'$}{26}{Mg} & $E_{x}$ and $J^{\pi}$ for four resonances corresponding to \Erlab{38} to \Erlab{636} \\
\textcite{DEB10} & \reaction{26}{Mg}{$\Vec{\gamma}$}{$\gamma'$}{26}{Mg} & $\Gamma_{\gamma}$ for four resonances corresponding to \Erlab{38} to \Erlab{636} \\
\hline \hline
\end{tabular}
\caption{\label{tab:data-since-NACRE}New information relevant to the \Nepa reaction rates that has
become available since the NACRE compilation was published \citep{ANG99}.}
\end{table*}
\subsection{Resonance Strengths}
\label{sec:resonance-strengths}
Directly measured resonance strengths in the \ag reaction in the
energy range of E$_r^{\text{lab}}=830 - 2040$~keV are adopted from
Ref.\ \cite{WOL89}. Direct measurements of resonances in the \an
reaction at energies of E$_r^{\text{lab}}=830 - 2040$~keV are reported
in Refs.\ \cite{WOL89,HAR91,DRO93,GIE93,JAE01}. Note, however, that
the \an resonance strengths from the different measurements disagree
by up to a factor of 5 (i.e., a deviation well outside the quoted
uncertainties). Clearly, adopting a simple weighted average value
would not account for the unknown systematic bias present in the
data. To alleviate this problem, we adopt the method of Ref.\
\cite{WIE11}, previously applied to account for unknown systematic
uncertainties in neutron-lifetime measurements. This method follows a
similar procedure for characterising unknown systematic uncertainties
as that presented in Ref.\ \cite{DES04}. It assumes that all the
reported strength values of a given resonance have the same, unknown,
systematic error, $\sigma_u$, which can be summed in quadrature with
the reported uncertainties. Hence for each reported uncertainty of
data set $i$, $\sigma_i$, an inflated uncertainty, $\sigma_i'$, is
obtained via
\begin{equation}
\label{eq:inflated-uncertainty}
\sigma_i' = \sqrt{\sigma_u^2 + \sigma_i^2}.
\end{equation}
From the inflated uncertainties, the weighted average of the resonance
strengths, $\omega \gamma_i$, is obtained in the usual manner,
\begin{equation}
\label{eq:weightedaverage}
\overline{\omega \gamma} = \frac{\sum_i \omega
\gamma_i/\sigma^{'2}_i}{\sum_i 1/\sigma^{'2}_i} \qquad
\overline{\sigma} = \sqrt{\frac{1}{\sum_i 1/\sigma^{'2}_i}}
\end{equation}
The unknown value of $\sigma_u$ is adjusted numerically until the
reduced chi-squared,
\begin{equation}
\label{eq:chi2}
\frac{\chi^2}{\nu} = \frac{1}{n-1}\sum_i{\frac{\left(\omega \gamma_i - \overline{\omega
\gamma}\right)^2}{\sigma^{'2}_i}},
\end{equation}
becomes equal to unity, where $\nu = n-1$ is the number of degrees of
freedom and $n$ the number of measurements.
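Equations~(\ref{eq:inflated-uncertainty})--(\ref{eq:chi2}) translate directly into a short numerical routine. A sketch in Python (the strength values in the usage line are invented for illustration, not measured data):

```python
import numpy as np

def inflated_weighted_average(values, errors):
    """Weighted average assuming a common unknown systematic uncertainty,
    sigma_u, added in quadrature to each reported error and adjusted until
    the reduced chi-squared equals unity. Returns (mean, uncertainty, sigma_u)."""
    x = np.asarray(values, dtype=float)
    s = np.asarray(errors, dtype=float)

    def stats(sigma_u):
        w = 1.0 / (sigma_u**2 + s**2)          # weights from inflated variances
        mean = np.sum(w * x) / np.sum(w)
        err = np.sqrt(1.0 / np.sum(w))
        chi2nu = np.sum(w * (x - mean)**2) / (len(x) - 1)
        return mean, err, chi2nu

    mean, err, chi2nu = stats(0.0)
    if chi2nu <= 1.0:                           # data already consistent
        return mean, err, 0.0

    lo, hi = 0.0, 10.0 * np.max(np.abs(x))      # chi2/nu falls as sigma_u grows
    for _ in range(200):                        # bisect for chi2/nu = 1
        mid = 0.5 * (lo + hi)
        if stats(mid)[2] > 1.0:
            lo = mid
        else:
            hi = mid
    mean, err, _ = stats(hi)
    return mean, err, hi

# Invented example: discrepant strengths (units of 1e-4 eV) with small errors
mean, err, sigma_u = inflated_weighted_average([1.0, 1.5, 1.2, 1.6],
                                               [0.05, 0.10, 0.08, 0.10])
```

Since $\chi^2/\nu$ decreases monotonically as $\sigma_u$ grows, a simple bisection suffices.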
Application of this method has two consequences compared to
calculating the weighted average of the reported resonance strength
values: (i) the uncertainty of the resonance strength,
$\overline{\sigma}$, will be larger, reflecting the fact that the
systematic shift in the data is of unknown nature; and (ii) strength
values with small reported uncertainties will carry less
weight. Consider as an example the lowest-lying observed resonance in
the \an reaction, located at \Erlab{831}. The measurements reported in
Refs.\ \cite{HAR91,DRO93,GIE93,JAE01} yield for the resonance strength
a (standard) weighted average of $\omega \gamma = 1.2 (1) \times
10^{-4}$~eV, with $\chi^2/\nu = 2.9$, indicating poor agreement
between the individual measurements. On the other hand, the inflated
weighted average value is $\omega \gamma = 1.4 (3) \times
10^{-4}$~eV. We applied the inflated weighted average method to all
resonances in the energy region E$_r^{\text{lab}}=830 -
1495$~keV. Above this energy range, we used the (standard) weighted
average because the different data sets are in considerably better
agreement.
From the measured \an and \ag strengths of a given resonance, the
neutron and $\alpha$-particle partial widths can be found if the
$\gamma$-ray partial width can be estimated. This information allows
for integrating the resonance cross section numerically, according to
equation~(\ref{eq:rates-rr-resonance}), which is more reliable than
adopting the narrow resonance approximation,
equation~(\ref{eq:rates-narrowrate}).
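For orientation, the narrow-resonance approximation of equation~(\ref{eq:rates-narrowrate}) reduces to the conventional analytic sum over resonances, $N_A\langle\sigma v\rangle = 1.5399\times 10^{11}\,(\mu T_9)^{-3/2}\sum_i \omega\gamma_i\,e^{-11.605\,E_i/T_9}$ (see, e.g., Iliadis, \textit{Nuclear Physics of Stars}). A sketch in Python, assuming that standard formula; the usage line adopts the \Erlab{831} resonance ($E_{\text{cm}}\approx 0.703$~MeV) with the strength quoted in this section:

```python
import numpy as np

def narrow_resonance_rate(T9, resonances, mu=3.385):
    """N_A<sigma*v> (cm^3 mol^-1 s^-1) in the narrow-resonance approximation.
    resonances: list of (E_r_cm [MeV], omega*gamma [MeV]) pairs.
    mu: reduced mass in amu (~3.385 for 22Ne + alpha)."""
    T9 = float(T9)
    return sum(1.5399e11 / (mu * T9)**1.5 * wg * np.exp(-11.605 * Er / T9)
               for Er, wg in resonances)

# E_r^lab = 831 keV -> E_cm ~ 0.703 MeV; omega*gamma = 1.4e-4 eV = 1.4e-10 MeV
rate = narrow_resonance_rate(0.3, [(0.703, 1.4e-10)])
```

The exponential sensitivity to $E_r/T_9$ is why low-energy resonances dominate the rate uncertainty at s-process temperatures.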
Because of Coulomb barrier penetrability arguments, the neutron width
is expected to dominate the total width of the resonances important
for s-process nucleosynthesis (i.e., $\Gamma_n \approx \Gamma$). Thus,
in most (but not all) cases, the neutron width substantially exceeds
the $\alpha$-particle width of a given state, and we can use the
following approximations to determine the $\alpha$-particle width
from measured resonance strengths. For the \ag reaction, the
$\alpha$-particle partial width can be found by assuming a reasonable
average value for the \ensuremath{\gamma}-ray\ partial width of $\Gamma_{\gamma} \approx
3$\ eV\ \citep{KOE02}. We investigated the effect of this
choice on the reaction rates and the exact average value of
$\Gamma_{\gamma}$ was found to be relatively unimportant. The
$\alpha$-particle partial width can then be found (for $\Gamma_n
\approx \Gamma$) from
\begin{equation}
\omega \gamma_{\alpha\gamma} = \omega \frac{\Gamma_{\alpha}
\Gamma_{\gamma}}{\Gamma_{\alpha}+\Gamma_{\gamma}+\Gamma_{n}}, \qquad
\Gamma_{\alpha}= \frac{\omega
\gamma_{\alpha\gamma}}{\omega}\,\frac{\Gamma}{3~\text{eV}}.
\end{equation}
For the \an reaction, the $\alpha$-particle partial width can be
calculated from:
\begin{equation}
\omega \gamma_{{\alpha}n} =
\omega \frac{\Gamma_{\alpha}
\Gamma_{n}}{\Gamma_{\alpha}+\Gamma_{\gamma}+\Gamma_{n}}, \qquad
\Gamma_{\alpha}= \frac{\omega \gamma_{{\alpha}n}}{\omega} \label{eqn:approxGa}
\end{equation}
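These approximations can be made concrete in a few lines of Python. The usage example adopts the \Erlab{1434} (\Jpi{2}{+}) resonance values quoted in the tables below; since both $^{22}$Ne and the $\alpha$ particle have spin $0^+$, the statistical factor reduces to $\omega = 2J+1$:

```python
def omega_factor(J, j_t=0.0, j_p=0.0):
    """Spin statistical factor (2J+1)/((2j_t+1)(2j_p+1)); both 22Ne and
    the alpha particle have spin 0, so omega = 2J+1 here."""
    return (2.0*J + 1.0) / ((2.0*j_t + 1.0) * (2.0*j_p + 1.0))

def gamma_alpha_from_an(wg_an, J):
    """(alpha,n): omega*gamma ~ omega*Gamma_alpha when Gamma_n ~ Gamma."""
    return wg_an / omega_factor(J)

def gamma_alpha_from_ag(wg_ag, Gamma, J, Gamma_gamma=3.0):
    """(alpha,gamma): Gamma_alpha = (omega*gamma/omega)*(Gamma/Gamma_gamma),
    with the average Gamma_gamma ~ 3 eV adopted in the text."""
    return (wg_ag / omega_factor(J)) * (Gamma / Gamma_gamma)

# E_r^lab = 1434 keV, J^pi = 2+: omega*gamma_(alpha,n) = 8.5e-1 eV gives
# Gamma_alpha = 0.85/5 = 0.17 eV, consistent with the tabulated 1.7e-1 eV
print(gamma_alpha_from_an(0.85, 2))
```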
\subsection{Spectroscopic Factors}
\label{sec:spectro-factors}
Alpha-particle spectroscopic factors for levels near the
$\alpha$-particle and neutron thresholds in \nuc{26}{Mg} have been
obtained from \reaction{22}{Ne}{$^6$Li}{d}{26}{Mg} transfer studies by
Refs.\ \cite{GIE93} and \cite{UGA07}. The spectroscopic factors
derived from the ($^6$Li,d) transfer data are important because they
allow for an estimate of the $\alpha$-particle partial width,
$\Gamma_{\alpha}$, of \Nepa resonances via
equations~(\ref{eq:S-reduced-width-relation1}),
(\ref{eq:S-reduced-width-relation2}),
and~(\ref{eq:rates-GammaDefinition}).
Numerous studies have shown that $\alpha$-transfer measurements are
very useful for measuring {\textit{relative}} spectroscopic factors,
but are not sufficiently accurate for predicting {\textit{absolute}}
values. For this reason, the measured spectroscopic factors are
frequently scaled relative to resonances with well-known partial
widths (note that this is an approximation equivalent to assuming that
$\theta^2=S$ in Sec.\ \ref{sec:formalism}). For example,
\textcite{GIE93} scaled their spectroscopic factors relative to the
\Erlab{831} (\Jpi{2}{+}) resonance in
\reaction{22}{Ne}{$\alpha$}{n}{25}{Mg}. Our best value for the
($\alpha$,n) resonance strength is $\omega \gamma_{\alpha n} = 1.4 (3)
\times 10^{-4}$~eV (see Tab.~\ref{tab:an_direct}). Since for this
low-energy resonance it can be safely assumed that $\Gamma \approx
\Gamma_n$, a spectroscopic factor of $S_{\alpha}^{(\alpha,n)}=0.98$ is
obtained from equations~(\ref{eq:rates-ResStrength})
and~(\ref{eq:rates-GammaDefinition}). Surprisingly, this value is a
factor of 27 larger than the transfer value extracted by Ref.\
\cite{GIE93}, $S_{\alpha}^{(^6\text{Li},d)} = 0.037$. Renormalisation
of all measured ($^6$Li,d) spectroscopic factors to the ($\alpha$,n)
spectroscopic factor of the \Erlab{831} resonance results in the
values shown in green in Fig.~\ref{fig:SpecFacts}. It is certainly
remarkable that all levels observed by Ref.\ \cite{GIE93} should have
dimensionless reduced $\alpha$-particle widths far larger in value
than the Porter-Thomas prediction. Additionally, several of these
levels exhibit dimensionless reduced widths near or exceeding the
Wigner limit, even if one accounts for the difference between
$S_{\alpha}$ and $\theta^2_{\alpha}$ (Sec.~\ref{sec:rates-formalism}).
\begin{figure*}
\begin{center}
\includegraphics[width=0.6\textwidth]{fig1}
\caption[Spectroscopic Factors]{\label{fig:SpecFacts}(Colour
online) Dimensionless reduced $\alpha$-particle widths of
unbound states from Ref.\ \cite{DRA94}, and references therein
\citep[see also,][]{LON10}. Also plotted is the Porter-Thomas
distribution that best fits these data at small values. It is
apparent from the figure that states with large
$\alpha$-particle spectroscopic factors are not represented by
the Porter-Thomas distribution. These levels most likely have an
$\alpha$-particle cluster structure and would be populated
preferentially in transfer measurements, such as the
($^{6}$Li,d) measurements of Refs.\ \cite{GIE93} and
\cite{UGA07}. Also shown in green and blue are the normalised
spectroscopic factors measured by Ref.\ \cite{GIE93}. The values
normalised using the \Erlab{1434} resonance are shown in blue,
while those normalised to the \Erlab{831} resonance are shown in
green. Clearly, the normalisations are vastly different, and the
spectroscopic factors obtained using the \Erlab{831} resonance
as a normalisation reference appear to be too high, as shown in
the figure inset, which displays the same information but on an
expanded scale. See text for more detail.}
\end{center}
\end{figure*}
However, there is no compelling reason why the \Erlab{831} resonance
should be singled out for the normalisation procedure, other than it
being the lowest-lying observed resonance.
For example, one may consider another well-known resonance, located at
\Erlab{1434} (\Jpi{2}{+}). From its measured \an resonance strength,
an $\alpha$-particle spectroscopic factor of $S_{\alpha}^{(\alpha,n)}
= 0.27$ is obtained. The $\alpha$-particle transfer value for the
corresponding level, measured by Ref.\ \cite{GIE93}, amounts to
$S_{\alpha}^{(^6\text{Li},d)} = 0.11$. These two values differ by a
factor of 2.5, and thus are much closer in agreement than the results
for the \Erlab{831} resonance. Normalisation of all measured relative
($^6$Li,d) spectroscopic factors to the ($\alpha$,n) spectroscopic
factor of the \Erlab{1434} resonance results in the values shown in
blue in Fig.~\ref{fig:SpecFacts}. It is evident that these normalised
values are in far better agreement with the Porter-Thomas distribution
than the results obtained when scaling spectroscopic factors relative
to the \Erlab{831} resonance. In addition, by using the \Erlab{1434}
resonance normalisation, all of the resulting dimensionless reduced
widths now have values less than the Wigner limit, making them more
plausible. Since we feel it is more reasonable to scale the
($^6$Li,d) spectroscopic factors using the \Erlab{1434} resonance
instead of the \Erlab{831} resonance, we adopt the reduced widths
shown in blue in Fig.~\ref{fig:SpecFacts} for calculating the \Nepa
rates. Note that the {\textit{relative}} spectroscopic factors
obtained by Ref.\ \cite{GIE93} have been used at face value by Refs.\
\cite{KOE02,KAR06} in their reaction rate calculations. Clearly, this
issue needs to be resolved in future work.
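The tension between the two normalisation choices can be made explicit from the numbers quoted above:

```python
# Spectroscopic factors quoted in the text:
# (alpha,n)-derived values vs (6Li,d) transfer values of Ref. GIE93
S_an = {"831 keV": 0.98, "1434 keV": 0.27}
S_li = {"831 keV": 0.037, "1434 keV": 0.11}

# Normalisation factor implied by each choice of reference resonance
for res in S_an:
    print(res, S_an[res] / S_li[res])  # ~26.5 for 831 keV, ~2.5 for 1434 keV
```

The order-of-magnitude difference between the two scale factors is exactly the discrepancy visible in the green and blue points of Fig.~\ref{fig:SpecFacts}.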
\section{Information on Specific \nuc{26}{Mg} Levels}
\label{sec:levels}
\Exb{10693} (\Erlabb{92}; \mbox{\Jpib{4}{+}}). An excited state near
this energy has been observed by \textcite{GLA86} at E$_{x}$=10695(2)\
keV, \textcite{GIE93} at E$_{x}$=10694(20)\ keV, and \textcite{MOS76}
at E$_{x}$=10689(3)\ keV. A weighted average of these excitation
energies is used in the present work. The \Jpi{4}{+} assignment was
made by considering the decay scheme of this state, as observed by
Ref.\ \cite{GLA86}, together with the expectation that the state most
likely has natural parity. The $\alpha$-particle spectroscopic factor for this state from
Ref.\ \cite{GIE93}, after normalisation, is $S_{\alpha}=0.059$.
\Exb{10806} (\Erlabb{226}; \mbox{\Jpib{1}{-}}). This state was seen
previously in $^{22}$Ne($^{6}$Li,d)$^{26}$Mg measurements by
\textcite{UGA07} at E$_{x}$=10808 (20)\ keV, and in
$^{25}$Mg(n$_{t}$,$\gamma$)$^{26}$Mg measurements (thermal neutron
capture) by \textcite{WAL92} at E$_{x}$=10805.9 (4)\ keV. A recent
experiment assigned a spin-parity of \Jpi{1}{-}\ \citep{LON09}. The
adopted excitation energy is the weighted average of these
results. The $\alpha$-particle spectroscopic factor from Ref.\
\cite{UGA07}, after normalisation, amounts to $S_{\alpha}=0.048$.
\Exb{10943} (\Erlabb{388}; $\mathbf{J^{\pi}=(5^{-}-7^{-})}$). An
excited state at this energy has been observed by
\textcite{GLA86}. The observed decay scheme restricts the quantum
numbers, using the dipole-or-E2 rule of Ref.\ \cite{END90}, to
$J^{\pi}=(5^{\pm}-7^{-})$. A $^{22}$Ne($^{6}$Li,d)$^{26}$Mg transfer
measurement populated a state at E$_{x}$=10953 (25)\ keV, but did not
determine its quantum numbers beyond reporting that it most likely
has natural parity\ \citep{UGA07}. The combined quantum number
assignment is therefore $J^{\pi}=(5^{-}-7^{-})$. This state is treated
here as part of a doublet with the \Ex{10949} state. Note that in the
reaction rate calculations of \textcite{KAR06}, this state was
incorrectly assigned spin-parity values of $J^{\pi}=2^+, 3^-$.
\Exb{10949} (\Erlabb{395}; \mbox{\Jpib{1}{-}}). This state has been
observed previously in $^{26}$Mg(p,p$'$)$^{26}$Mg measurements by
\textcite{MOS76} at E$_{x} = 10950 (3)$\ keV. The (p,p$'$)
measurements suggest a $J^{\pi}=1^{-}$ assignment, which agrees with
the \ensuremath{^{26}}Mg\ensuremath{(\Vec{\gamma},\gamma)^{26}}Mg result of \textcite{LON09}. It is unclear whether
\textcite{UGA07} observed this state or the one at E$_{x}$=10943\
keV. Therefore, in the present analysis, the normalised spectroscopic
factor of $S=7 \times 10^{-3}$ reported in Ref.\ \cite{UGA07} is
treated as an upper limit for both states at \Ex{10943} and
\Ex{10949}.
\Exb{11112} (\Erlabb{587}; \mbox{\Jpib{2}{+}}). The state is located
above the neutron threshold. It has been observed previously in a
\reaction{25}{Mg}{n}{$\gamma$}{26}{Mg} experiment\ \citep{WEI76,KOE02}
and was assigned a spin-parity of \Jpi{2}{+}.
\Exb{11154} (\Erlabb{637}; \mbox{\Jpib{1}{+}}).
A state at this energy has been observed by \textcite{FAG75},
\textcite{WEI76}, \textcite{CRA89}, \textcite{YAS90},
\textcite{KOE02}, \textcite{TAM03}, and
\textcite{SCH09}. Additionally, an excited state at this energy was
strongly populated by the photoneutron experiment of \textcite{BER69},
who predicted a \Jpi{1}{-} assignment. As a result of this prediction,
several studies have searched for a resonance corresponding to this
energy~\citep{HAR91,DRO91,DRO93,GIE93,JAE01,UGA07}. Of these studies,
a presumed resonance was reported by Refs.\ \cite{HAR91,DRO91}, but
later proven to be caused by beam induced background
\citep{DRO93}. Recently, however, a \ensuremath{^{26}}Mg\ensuremath{(\Vec{\gamma},\gamma)^{26}}Mg experiment \citep{LON09}
showed unambiguously that this state has spin-parity
\Jpi{1}{+} and, therefore, cannot contribute to the \Nepa reaction
rates. A more detailed discussion of this state is presented in
section~\ref{sec:general-aspects}.
\textbf{E$\mathbf{_{x}}$=11163--11326\ keV
(E$\mathbf{_{r}^{\textbf{lab}}=648-840}$~keV)}. Excitation energies
were taken as weighted averages of \textcite{MOS76}, \textcite{GLA86},
and \textcite{KOE02}. Quantum numbers, neutron and $\gamma$-ray
partial widths were all adopted from Ref.\ \cite{KOE02}. Since no
$\alpha$-particle partial widths have been measured for these states,
upper limits were derived either from the data presented by Ref.\
\cite{WOL89}, or adopted from the maximum theoretically allowed
values, depending on which was smaller.
\Exb{11318} (\Erlabb{831}; \mbox{\Jpib{2}{+}}). \textcite{KOE02}
argued that this state cannot correspond to both the resonance
observed by \textcite{JAE01} at E$_{r}^{\text{lab}}$=832 (2)\ keV in \an and by
\textcite{WOL89} at E$_{r}^{lab}=828 (5)$\ keV in
\reaction{22}{Ne}{$\alpha$}{$\gamma$}{26}{Mg}, because the implied
value of $\Gamma_{\gamma}=76$~eV would be far larger than the average
$\gamma$-ray partial width ($\Gamma_{\gamma}=3$~eV) in this energy
range. However, this conclusion is questionable considering the large
uncertainty, $\Gamma_{\gamma}=76 (53)$~eV, when the $\gamma$-ray
partial width is derived from the measured values of $\omega
\gamma_{\alpha \gamma}$, $\omega \gamma_{\alpha n}$, and
$\Gamma$. Clearly, the deviation from the average in this energy range
amounts to only $1.4\sigma$.
Since it cannot be decided at present if the ($\alpha$,n) and
($\alpha,\gamma$) resonances correspond to the same \nuc{26}{Mg} level
or not, the partial widths cannot be derived unambiguously from the
measured resonance strengths and total width. Thus we assumed that the
($\alpha$,n) and ($\alpha,\gamma$) resonances are ``narrow'', i.e., we
employed equation~(\ref{eq:rates-narrowrate}) instead of
equation~(\ref{eq:rates-rr-resonance}) in our rate calculations. The
strength reported by Ref.\ \cite{WOL89} is used for the \ag resonance, while
the inflated weighted average (see
section~\ref{sec:resonance-strengths}) is adopted for the \an
resonance, resulting in a strength of $\omega \gamma_{(\alpha,n)} =
1.4 (3) \times 10^{-4}$~eV.
\textbf{E$\mathbf{_{x}}$=11328--11425\ keV
(E$\mathbf{_{r}^{\textbf{lab}}=843-957}$~keV)}. Excitation energies,
quantum numbers, neutron, and \ensuremath{\gamma}-ray\ widths for these levels are
adopted from Refs.\ \cite{WEI76}, and \cite{KOE02}. No
$\alpha$-particle widths have been measured for these states, and thus
upper limits have been adopted from either the data presented by
Refs.\ \cite{WOL89} and \cite{JAE01}, or from the maximum
theoretically allowed values, depending on which was smaller.
\textbf{E$\mathbf{_{x}>11441}$\ keV
(E$\mathbf{_{r}^{\textbf{lab}}>976}$~keV)}. Resonances corresponding
to excited states above \Ex{11441} have been measured directly\
\citep{WOL89,HAR91,DRO91,DRO93,GIE93,JAE01}. In order to take the
widths of wide resonances into account, the neutron and \ensuremath{\gamma}-ray\ partial
widths (and quantum numbers) measured by Refs.\ \cite{WEI76} and
\cite{KOE02} have been used when available. The inflated weighted
average method (see section~\ref{sec:resonance-strengths}) is used to
combine the different \an strengths for resonances below \Erlab{1434},
while standard weighted averages are used above this energy. Since the
\ag resonances measured in Ref.\ \cite{WOL89} cannot be assigned
unambiguously to corresponding \an resonances, all of these resonances
were treated as independent and narrow. The quantum numbers of \an
resonances located above \Erlab{1530} are adopted from Ref.\
\cite{WOL89} when not available otherwise.
\section{Reaction Rates for \Nepa}
\label{sec:rates-results}
The resonance properties used to calculate the rates for both the \ag
and the \an reactions are presented in
Tabs. \ref{tab:ag_direct}~--~\ref{tab:an_upperlims}. For more detailed
information on level properties, see Ref.\ \cite{LON10T}. Separate
tables are used to list resonances with measured partial widths and
those which possess only an upper limit for the $\alpha$-particle
width but have known neutron and $\gamma$-ray widths.
\begin{table*}
\centering
\begin{tabular}{cccr@{$\times$}l|r@{$\times$}lcr@{$\times$}lr@{$\times$}l|c}
\hline \hline & & & \multicolumn{2}{c|}{ } & \multicolumn{7}{c|}{Partial Widths (eV)} & \\
E$_{x}$ (keV) & \multicolumn{1}{c}{E$_{r}^{\textrm{lab}}$ (keV)} & J$^{\pi}$ $^{c}$ & \multicolumn{2}{c|}{$\omega\gamma$ (eV)} & \multicolumn{2}{c}{$\Gamma_{\alpha}$} & \multicolumn{1}{c}{$\Gamma_{\gamma}$$^{a}$} & \multicolumn{2}{c}{$\Gamma_{n}$} & \multicolumn{2}{c|}{$\Gamma$} & Int \\ \hline
10693 & 93 (2) & $4^+$ & \multicolumn{2}{c|}{- - - -} & $3.5 (18)$ & $10^{-46}$ & 3.0 (15) & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{3.0 (15)} & \\
11315 & 828 (5) & $2^{+}$ & $3.6 (4)$ & $10^{-5}$ & \multicolumn{2}{c}{ - - - - } & - - - - & \multicolumn{2}{c}{ - - - - } & \multicolumn{2}{c|}{ - - - - } & \\
11441 & 976.39 (23) & $4^{+}$ & \multicolumn{2}{c|}{- - - -} & $4.3 (11)$ & $10^{-6}$ $^{b}$ & 3.0 (15) & 1.47 (8) & $10^3$ & 1.47 (8) & $10^3$ & \checkmark \\
11465 & 1005.23 (25) & $5^{-}$ & \multicolumn{2}{c|}{- - - -} & $5.0 (15)$ & $10^{-6}$ $^{b}$ & 3.0 (15) & 6.55 (9) & $10^3$ & 6.55 (9) & $10^3$ & \checkmark \\
11508 & 1055.9 (11) & $1^{-}$ & \multicolumn{2}{c|}{- - - -} & $1.2 (2)$ & $10^{-4}$ $^{b}$ & 3.0 (15) & 1.27 (25) & $10^{4}$ & 1.27 (25) & $10^{4}$ & \checkmark \\
11526 & 1075.5 (18) & $1^{-}$ & \multicolumn{2}{c|}{- - - -} & $4.3 (11)$ & $10^{-4}$ $^{b}$ & 3.0 (15) & 1.8 (9) & $10^{3}$ & 1.8 (9) & $10^{3}$ & \checkmark \\
11630 & 1202.3 (17) & $1^{-}$ & \multicolumn{2}{c|}{- - - -} & $2.4 (5)$ & $10^{-3}$ $^{b}$ & 3.0 (15) & 1.35 (17) & $10^{4}$ & 1.35 (17) & $10^{4}$ & \checkmark \\
11748 & 1345 (7) & $1^{-}$ & \multicolumn{2}{c|}{- - - -} & $2.0 (3)$ & $10^{-2}$ $^{b}$ & 3.0 (15) & 6.4 (9) & $10^{4}$ & 6.4 (9) & $10^{4}$ & \checkmark \\
11787 & 1386 (3) & $1^{-}$ & \multicolumn{2}{c|}{- - - -} & $8 (3)$ & $10^{-3}$ $^{b}$ & 3.0 (15) & 2.45 (24) & $10^{4}$ & 2.45 (24) & $10^{4}$ & \checkmark \\
11828 & 1433.7 (12) & $2^{+}$ & $2.5 (3)$ & $10^{-4}$ & $1.8 (10) $ & $10^{-1}$ & 3.0 (15) & 1.10 (25) & $10^3$ & 1.10 (25) & $10^3$ & \checkmark \\
11895 & 1513 (5) & $1^{-}$ & $2.0 (2)$ & $10^{-3}$ & \multicolumn{2}{c}{ - - - - } & - - - - & \multicolumn{2}{c}{ - - - - } & \multicolumn{2}{c|}{$< 3000$} & \\
11912 & 1533 (3) & $1^{-},2^+$ & $3.4 (4)$ & $10^{-3}$ & $1.9 (8) $ & $10^{+0}$ & 3.0 (15) & 5 (2) & $10^{3}$ & 5 (2) & $10^{3}$ & \checkmark \\
11953 & 1582 (3) & $2^+,3^{-},4^+$ & $3.4 (4)$ & $10^{-3}$ & $3.2 (17) $ & $10^{-1}$ & 3.0 (15) & 2 (1) & $10^{3}$ & 2 (1) & $10^{3}$ & \checkmark \\
12051 & 1698 (3) & $2^+,3^{-}$ & $6.0 (7)$ & $10^{-3}$ & $1.1 (3) $ & $10^{-1}$ & 3.0 (15) & 4 (1) & $10^{3}$ & 4 (1) & $10^{3}$ & \checkmark \\
12140 & 1802 (3) & $1^{-}$ & $1.0 (2)$ & $10^{-3}$ & $1.7 (5) $ & $10^{+0}$ & 3.0 (15) & 15 (2) & $10^{3}$ & 15 (2) & $10^{3}$ & \checkmark \\
12184 & 1855 (8) & ($0^{+}$) & $1.1 (2)$ & $10^{-3}$ & $1.21 (29) $ & $10^{+1}$ & 3.0 (15) & 33 (5) & $10^{3}$ & 33 (5) & $10^{3}$ & \checkmark \\
12273 & 1960 (8) & ($0^{+}$) & $8.9 (1)$ & $10^{-3}$ & $2.2 (4) $ & $10^{+2}$ & 3.0 (15) & 73 (9) & $10^{3}$ & 73 (9) & $10^{3}$ & \checkmark \\
12343 & 2043 (5) & $0^{+}$ & $5.4 (7)$ & $10^{-2}$ & $6.3 (12) $ & $10^{+2}$ & 3.0 (15) & 35 (5) & $10^{3}$ & 35 (5) & $10^{3}$ & \checkmark \\
\hline \hline
\end{tabular}
\footnotetext{Average value from Ref.\ \cite{KOE02}}
\footnotetext{From $^{22}$Ne($\alpha$,n)$^{25}$Mg
measurements (see equation~(\ref{eqn:approxGa}))}
\footnotetext{Detailed discussion on quantum number
assignments can be found in
section~\ref{sec:levels}.}
\caption{\label{tab:ag_direct}Resonances of \ag with known $\alpha$-particle
partial widths or resonance strengths. Total widths are from
Ref.\ \cite{WOL89} for resonances above \Erlab{1533}. For lower-lying
resonances, total widths are adopted from Ref.\ \cite{JAE01} and
Ref.\ \cite{KOE02}. Ambiguous spin-parities (i.e., those not based on
strong arguments) are placed in parentheses, according to the
guidelines in Ref.\ \cite{END90}. The last column, labelled ``Int'',
indicates those resonances for which sufficient information is
available in order to integrate their reaction rate contribution
numerically, according to
equation~(\ref{eq:rates-rr-resonance}).}
\end{table*}
\begin{table*}[!ht]
\begin{center}
\begin{tabular}{cccr@{$\times$}l|cr@{$\times$}lccc|c}
\hline \hline
& & & \multicolumn{2}{c|}{ } & \multicolumn{6}{c|}{Partial Widths (eV)} & \\
E$_{x}$ (keV) & \multicolumn{1}{c}{$E_{r}^{\text{lab}}$ (keV)} & J$^{\pi}$ & \multicolumn{2}{c|}{$\omega\gamma_{\mathrm{UL}}$ (eV )} & S$_{\alpha,\mathrm{UL}}$ &\multicolumn{2}{c}{$\Gamma_{\alpha,\mathrm{UL}}$} & \multicolumn{1}{c}{$\Gamma_{\gamma}$} & \multicolumn{1}{c}{$\Gamma_{n}$} & \multicolumn{1}{c|}{$\Gamma$} & Int \\ \hline
10806 & 225.9 (5) & $1^-$ & \multicolumn{2}{c|}{- - - -} & 4.8$\times 10^{-2}$ & $3.2$ & $10^{-23}$ & $0.72 (18)$ & - - - - & $0.72 (18)$ & \\
10943 & 388 (2) & $(5^- - 7^-)$ & \multicolumn{2}{c|}{- - - -} & 7$\times 10^{-3}$ & $1.5$ & $10^{-19}$ & $3.0 (15)$ & - - - - & $3.0 (15)$ & \\
10949 & 395.15 (18) & $1^-$ & \multicolumn{2}{c|}{- - - -} & 7$\times 10^{-3}$ & $2.9$ & $10^{-15}$ & $1.9 (3)$ & - - - - & $1.9 (3)$ & \\
11112 & 587.90 (10) & $2^{+}$ & $3.7$ & $10^{-08}$ & 1.00 & $7.7$ & $10^{-09}$ & $1.73 (3)$ & $2578 (240)$ & $2580 (240) $ & \checkmark \\
11163 & 647.93 (11) & $2^{+}$ & $4.3$ & $10^{-07}$ & 1.00 & $8.7$ & $10^{-08}$ & $4.56 (29) $ & $4640 (100) $ & $4650 (100)$ & \checkmark \\
11171 & 657.53 (19) & $(2^{+})$ & $6.2$ & $10^{-07}$ & 1.00 & $1.3$ & $10^{-07}$ & $3.0 (15)$ & $1.44 (16) $ & $4.4 (15) $ & \\
11183 & 671.70 (21) & $(1^{-})$ & $1.0$ & $10^{-06}$ & 1.00 & $2.1$ & $10^{-07}$ & $3.0 (15)$ & $0.54 (9) $ & $3.5 (15) $ & \\
11243 & 742.81 (12) & $2^{(-)}$ & $4.7$ & $10^{-06}$ & 0.44 & $9.5$ & $10^{-07}$ & $7.4 (6) $ & $4510 (110) $ & $4520 (110)$ & \checkmark \\
11274 & 779.32 (14) & $(2)^{+}$ & $4.9$ & $10^{-06}$ & 0.15 & $1.0$ & $10^{-06}$ & $3.2 (4) $ & $540 (50) $ & $540 (50) $ & \checkmark \\
11280 & 786.17 (13) & $4^{(-)}$ & $8.2$ & $10^{-07}$ & 1.00 & $9.2$ & $10^{-08}$ & $0.59 (24) $ & $1510 (30) $ & $1510 (30) $ & \checkmark \\
11286 & 792.90 (15) & $1^{-}$ & $5.0$ & $10^{-06}$ & 0.05 & $1.7$ & $10^{-06}$ & $0.8 (5) $ & $1260 (100) $ & $1260 (100)$ & \checkmark \\
11286 & 793.83 (14) & $(2^{+})$ & $5.0$ & $10^{-06}$ & 0.11 & $1.0$ & $10^{-06}$ & $4.3 (6) $ & $12.8 (6) $ & $17.1 (60) $ & \checkmark \\
11289 & 797.10 (29) & $(2^{-})$ & $5.1$ & $10^{-06}$ & 0.10 & $1.0$ & $10^{-06}$ & $3.0 (15)$ & $1.5 (5) $ & $4.5 (16) $ & \\
11296 & 805.19 (16) & $(3^{-})$ & $5.1$ & $10^{-06}$ & 0.39 & $7.4$ & $10^{-07}$ & $3.3 (7) $ & $8060 (120) $ & $8060 (120)$ & \checkmark \\
11311 & 822.6 (4) & $(1^{-})$ & $5.2$ & $10^{-06}$ & 0.02 & $1.8$ & $10^{-06}$ & $3.0 (15)$ & $1.1 (4) $ & $4.1 (16) $ & \\
11326 & 840.8 (6) & $(1^{-})$ & $5.4$ & $10^{-06}$ & 0.01 & $1.8$ & $10^{-06}$ & $3.0 (15)$ & $0.6 (3) $ & $3.6 (15) $ & \\
11328 & 843.24 (17) & $1^{-}$ & $5.4$ & $10^{-06}$ & 0.01 & $1.8$ & $10^{-06}$ & $3.6 (5) $ & $420 (90) $ & $420 (90) $ & \checkmark \\
11329 & 844.4 (6) & $(1^{-})$ & $5.4$ & $10^{-06}$ & 0.01 & $1.8$ & $10^{-06}$ & $3.0 (15)$ & $2.8 (10) $ & $5.8 (18) $ & \\
11337 & 853.6 (7) & $(1^{-})$ & $5.4$ & $10^{-06}$ & 0.01 & $1.8$ & $10^{-06}$ & $3.0 (15)$ & $1.4 (6) $ & $4.4 (18) $ & \\
11344 & 861.86 (18) & $(2^{+})$ & $5.5$ & $10^{-06}$ & 0.02 & $1.1$ & $10^{-06}$ & $1.18 (27) $ & $150 (40) $ & $150 (40) $ & \checkmark \\
11345 & 862.91 (19) & $4^{(-)}$ & $5.5$ & $10^{-06}$ & 0.87 & $6.2$ & $10^{-07}$ & $1.8 (4) $ & $4130 (190) $ & $4130 (190)$ & \checkmark \\
11393 & 919.34 (19) & $5^{(+)}$ & $1.6$ & $10^{-06}$ & 1.00 & $1.5$ & $10^{-07}$ & $3.0 (15)$ & $290 (19) $ & $290 (19) $ & \checkmark \\
\hline \hline
\end{tabular}
\caption{\label{tab:ag_upperlims}Properties of Unobserved
Resonances in \ag. For these resonances, only upper limits of
the resonance strength and/or the $\alpha$-particle
spectroscopic factor are available at present. The $\gamma$-ray
and neutron partial widths are taken from the R-matrix fit of
Ref.\ \cite{KOE02}. Quantum numbers for states below \Ex{11163}
are discussed in Sec.\ \ref{sec:levels}. All other quantum
numbers are adopted from Ref.\ \cite{KOE02}. When a range of
quantum numbers is allowed, the upper limit of the
$\alpha$-particle width is calculated assuming the lowest
possible orbital angular momentum transfer. The upper limit
$\alpha$-particle spectroscopic factors adopted ($S_{\alpha,
\mathrm{UL}}$) are also listed for completeness.}
\end{center}
\end{table*}
\begin{table*}[!ht]
\begin{center}
{ \addtolength{\tabcolsep}{2pt}
\addtolength{\extrarowheight}{2pt}
\begin{tabular}{cccr@{$\times$}l|r@{$\times$}lcr@{$\times$}lr@{$\times$}l|c}
\hline \hline
& & & \multicolumn{2}{c|}{ } & \multicolumn{7}{c|}{Partial Widths (eV)} & \\
E$_{x}$ (keV) & \multicolumn{1}{c}{E$_{r}^{\textrm{lab}}$ (keV)} & J$^{\pi}$ $^{a}$ & \multicolumn{2}{c|}{$\omega\gamma$ (eV)} & \multicolumn{2}{c}{$\Gamma_{\alpha}$$^{b}$} & \multicolumn{1}{c}{$\Gamma_{\gamma}$$^{c}$} & \multicolumn{2}{c}{$\Gamma_{n}$$^{d}$} & \multicolumn{2}{c|}{$\Gamma$} & Int \\ \hline
11318 & 830.8 (13) & $2^{+}$ & $1.4 (3) $ & $10^{-4}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & 2.5 (17) & $10^2$ & \\
11441 & 976.39 (23) & $4^{+}$ & $3.9 (10) $ & $10^{-5}$ & $ 4.3 (11) $ & $ 10^{-6}$ & 3.0 (15) & 1.47 (8) & $10^{3}$ & 1.47 (8) & $10^{3}$ & \checkmark \\
11465 & 1005.23 (25) & $5^{-}$ & $5.5 (17) $ & $10^{-5}$ & $ 5.0 (15) $ & $ 10^{-6}$ & 3.0 (15) & 6.55 (9) & $10^{3}$ & 6.55 (9) & $10^{3}$ & \checkmark \\
11508 & 1055.9 (11) & $1^{-}$ & $3.5 (6) $ & $10^{-4}$ & $ 1.17 (20) $ & $ 10^{-4}$ & 3.0 (15) & 1.27 (25) & $10^{4}$ & 1.27 (25) & $10^{4}$ & \checkmark \\
11525 & 1075.5 (18) & $1^{-}$ & $1.3 (3) $ & $10^{-3}$ & $ 4.3 (11) $ & $ 10^{-4}$ & 3.0 (15) & 1.8 (9) & $10^{3}$ & 1.8 (9) & $10^{3}$ & \checkmark \\
11632 & 1202.3 (17) & $1^{-}$ & $7.1 (15) $ & $10^{-3}$ & $ 2.4 (5) $ & $ 10^{-3}$ & 3.0 (15) & 1.35 (17) & $10^{4}$ & 1.35 (17) & $10^{4}$ & \checkmark \\
11752 & 1345 (7) & $1^{-}$ & $5.9 (8) $ & $10^{-2}$ & $ 2.0 (3) $ & $ 10^{-2}$ & 3.0 (15) & 6.4 (9) & $10^{4}$ & 6.4 (9) & $10^{4}$ & \checkmark \\
11788 & 1386 (3) & $1^{-}$ & $2.5 (9) $ & $10^{-2}$ & $ 8 (2) $ & $ 10^{-3}$ & 3.0 (15) & 2.45 (24) & $10^{4}$ & 2.45 (24) & $10^{4}$ & \checkmark \\
11828 & 1433.7 (12) & $2^{+}$ & $8.5 (14) $ & $10^{-1}$ & $ 1.7 (3)$ & $ 10^{-1}$ & 3.0 (15) & 1.10 (25) & $10^{3}$ & 1.10 (25) & $10^{3}$ & \checkmark \\
11863 & 1475 (3) & $1^{-}$ & $5 (3) $ & $10^{-2}$ & $ 1.5 (10) $ & $ 10^{-2}$ & 3.0 (15) & 2.45 (34) & $10^{4}$ & 2.45 (34) & $10^{4}$ & \checkmark \\
11880 & 1495 (3) & $1^{-}$ & $1.9 (19) $ & $10^{-1}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
11890 & 1507.9 (16) & $1^{-}$ & $4.1 (4) $ & $10^{-1}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
11910 & 1530.9 (15) & $\mathbf{1^{-}},2^{+}$ & $1.40 (10) $ & $10^{+0}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
11951 & 1579.4 (15) & $2^{+},\mathbf{3^{-}},4^{+}$ & $1.60 (13) $ & $10^{+0}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
12050 & 1696.7 (15) & $\mathbf{2^{+}},3^{-}$ & $4.7 (3) $ & $10^{+0}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
12111 & 1768.2 (18) & $1^{-}$ & $7.1 (6) $ & $10^{-1}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
12141 & 1803.5 (15) & $1^{-}$ & $2.4 (2) $ & $10^{+0}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
12184 & 1855 (6) & ($0^{+}$) & $9.0 (11) $ & $10^{-1}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
12270 & 1956 (6) & ($0^{+}$) & $2.1 (2) $ & $10^{+1}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
12345 & 2044.8 (18) & $0^{+}$ & $1.57 (10) $ & $10^{+2}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
12435 & 2152 (10) & $1^{-}$ & $2.8 (7) $ & $10^{+1}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
12551 & 2289 (15) & $1^{-}$ & $1.2 (5) $ & $10^{+2}$ & \multicolumn{2}{c}{- - - -} & - - - - & \multicolumn{2}{c}{- - - -} & \multicolumn{2}{c|}{- - - -} & \\
\hline \hline
\end{tabular}
}
\footnotetext{A detailed discussion of quantum number
assignment can be found in the text.}
\footnotetext{Calculated using equation~(\ref{eqn:approxGa}).}
\footnotetext{Average value from Ref.\ \cite{KOE02}.}
\footnotetext{Assuming $\Gamma$ is dominated by $\Gamma_{n}$
(see section~\ref{sec:general-aspects}).}
\caption{\label{tab:an_direct}Resonances in \an with known $\alpha$-particle
partial widths or resonance strengths. When a range of quantum
numbers is present, the one used for calculating the reaction
rates is presented in bold.}
\end{center}
\end{table*}
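The approximation behind footnote $b$ of the table above can be sketched
numerically. Assuming, as stated in the footnotes, that the total width is
dominated by the neutron width ($\Gamma \approx \Gamma_{n} \gg
\Gamma_{\alpha}, \Gamma_{\gamma}$), the ($\alpha$,n) resonance strength
reduces to $\omega\gamma \approx \omega\,\Gamma_{\alpha}$, with statistical
factor $\omega = 2J+1$ for the spin-0 $\alpha$+$^{22}$Ne entrance channel.
The function name below is purely illustrative:

```python
# Minimal sketch (not the authors' code) of the approximation
# omega*gamma = omega * Gamma_alpha * Gamma_n / Gamma ~ omega * Gamma_alpha,
# valid when Gamma ~ Gamma_n dominates. For alpha + 22Ne (both 0+),
# the statistical factor is omega = 2J + 1.

def gamma_alpha_from_strength(omega_gamma_eV, J):
    """Approximate alpha-particle partial width (eV) from an
    (alpha,n) resonance strength, assuming Gamma ~ Gamma_n."""
    omega = 2 * J + 1
    return omega_gamma_eV / omega

# Example: the E_x = 11441 keV (J = 4) resonance with
# omega*gamma = 3.9e-5 eV gives Gamma_alpha ~ 4.3e-6 eV,
# consistent with the tabulated value.
print(gamma_alpha_from_strength(3.9e-5, 4))
```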
\begin{table*}[!ht]
\begin{center}
{
\begin{tabular}{cccr@{$\times$}l|cr@{$\times$}lccc|c}
\hline \hline
& & & \multicolumn{2}{c|}{ } & \multicolumn{6}{c|}{Partial Widths (eV)} & \\
E$_{x}$ (keV) & \multicolumn{1}{c}{E$_{r}^{\textrm{lab}}$ (keV)} & J$^{\pi}$ & \multicolumn{2}{c|}{$\omega\gamma_{\mathrm{UL}}$ (eV)} & S$_{\alpha}$ & \multicolumn{2}{c}{$\Gamma_{\alpha,\mathrm{UL}}$} & \multicolumn{1}{c}{$\Gamma_{n}$} & \multicolumn{1}{c}{$\Gamma_{\gamma}$} & \multicolumn{1}{c|}{$\Gamma$} & Int \\ \hline
11112 & 587.90 (10) & $2^{+}$ & 5.8 & $10^{-8}$ & 1.00 & 7.7 & $10^{-9}$ & $2580 (240)$ & $1.73 (3)$ & $2580 (240) $ & \checkmark \\
11163 & 647.93 (11) & $2^{+}$ & 1.9 & $10^{-7}$ & 0.44 & 3.8 & $10^{-8}$ & $4640 (100) $ & $4.56 (29) $ & $4650 (100)$ & \checkmark \\
11171 & 657.53 (19) & $(2^{+})$ & 7.5 & $10^{-8}$ & 0.12 & 1.5 & $10^{-8}$ & $1.44 (16) $ & $3.0 (15)$ & $4.4 (15) $ & \\
11183 & 671.70 (21) & $(1^{-})$ & 7.7 & $10^{-5}$ & 1.00 & 2.1 & $10^{-7}$ & $0.54 (9) $ & $3.0 (15) $ & $3.5 (15) $ & \\
11243 & 742.81 (12) & $2^{(-)}$ & 1.2 & $10^{-7}$ & 0.01 & 2.4 & $10^{-8}$ & $4510 (110) $ & $7.4 (6) $ & $4520 (110)$ & \checkmark \\
11274 & 779.33 (14) & $(2)^{+}$ & 1.1 & $10^{-7}$ & 3.5$\times 10^{-3}$ & 2.2 & $10^{-8}$ & $540 (50) $ & $3.2 (4) $ & $540 (50) $ & \checkmark \\
11280 & 786.17 (13) & $4^{(-)}$ & 1.3 & $10^{-7}$ & 0.16 & 1.4 & $10^{-8}$ & $1510 (30) $ & $0.59 (24) $ & $1510 (30) $ & \checkmark \\
11286 & 792.90 (15) & $1^{-}$ & 7.7 & $10^{-8}$ & 7.3$\times 10^{-4}$ & 2.6 & $10^{-8}$ & $1260 (100) $ & $0.8 (5) $ & $1260 (100)$ & \checkmark \\
11286 & 793.83 (14) & $(2^{+})$ & 7.7 & $10^{-8}$ & 1.6$\times 10^{-3}$ & 1.5 & $10^{-8}$ & $13 (6) $ & $4.3 (6) $ & $17 (6) $ & \checkmark \\
11289 & 797.10 (29) & $(2^{-})$ & 7.7 & $10^{-8}$ & 1.5$\times 10^{-3}$ & 1.5 & $10^{-8}$ & $1.5 (5) $ & $3.0 (15)$ & $4.5 (16) $ & \\
11296 & 805.19 (16) & $(3^{-})$ & 1.0 & $10^{-7}$ & 7.7$\times 10^{-3}$ & 1.4 & $10^{-8}$ & $8060 (120) $ & $3.3 (7) $ & $8060 (120)$ & \checkmark \\
11311 & 822.6 (4) & $(1^{-})$ & 1.6 & $10^{-8}$ & 7.5$\times 10^{-5}$ & 5.8 & $10^{-9}$ & $1.1 (4) $ & $3.0 (15)$ & $4.1 (16) $ & \\
11326 & 840.8 (6) & $(1^{-})$ & 1.2 & $10^{-7}$ & 3.6$\times 10^{-4}$ & 4.5 & $10^{-8}$ & $0.6 (3) $ & $3.0 (15)$ & $3.6 (15) $ & \\
11328 & 843.24 (17) & $1^{-}$ & 5.0 & $10^{-7}$ & 1.3$\times 10^{-3}$ & 1.7 & $10^{-7}$ & $420 (90) $ & $3.6 (5) $ & $430 (90) $ & \checkmark \\
11329 & 844.4 (6) & $(1^{-})$ & 1.2 & $10^{-7}$ & 3.3$\times 10^{-4}$ & 4.5 & $10^{-8}$ & $2.8 (10) $ & $3.0 (15)$ & $5.8 (18) $ & \\
11337 & 853.6 (7) & $(1^{-})$ & 1.3 & $10^{-7}$ & 2.7$\times 10^{-4}$ & 4.6 & $10^{-8}$ & $1.4 (6) $ & $3.0 (15)$ & $4.4 (18) $ & \\
11344 & 861.86 (18) & $(2^{+})$ & 2.0 & $10^{-7}$ & 7.2$\times 10^{-4}$ & 4.0 & $10^{-8}$ & $150 (40) $ & $1.18 (27) $ & $150 (40) $ & \checkmark \\
11345 & 862.91 (19) & $4^{(-)}$ & 4.2 & $10^{-8}$ & 7.2$\times 10^{-4}$ & 5.1 & $10^{-9}$ & $4130 (190) $ & $1.8 (4) $ & $4130 (190)$ & \checkmark \\
11393 & 919.34 (19) & $5^{(+)}$ & 3.7 & $10^{-8}$ & 2.4$\times 10^{-2}$ & 3.7 & $10^{-9}$ & $290 (19) $ & $3.0 (15)$ & $293 (19) $ & \checkmark \\
\hline \hline
\end{tabular}
}
\caption{\label{tab:an_upperlims}Properties of Unobserved
Resonances in \an. For these resonances, only upper
limits of the resonance strength and/or the $\alpha$-particle
spectroscopic factor can be derived. Quantum numbers,
$\gamma$-ray and neutron partial widths are taken from the
R-matrix fit of Ref.\ \cite{KOE02}. Resonance energies represent
a weighted average of values adopted from Refs.\
\cite{MOS76,WEI76,GLA86,KOE02}.}
\end{center}
\end{table*}
The matching temperature, $T_{\text{match}}$, beyond which the rates
are estimated by normalising Hauser-Feshbach predictions to
experimental rates (see Sec.\ \ref{sec-2_4}), amounts to $T =
1.33$~GK for both the \ag and \an reactions, i.e., well above the
temperatures relevant for the s-process during He burning ($T = 0.01 -
0.3$~GK).
Monte Carlo reaction rates for the \ag and \an reactions are presented
in Tabs. \ref{tab:agRate} and\ \ref{tab:anRate}, respectively. The
median, low, and high rates are shown alongside the lognormal
parameters and the Anderson-Darling statistic described in Sec.\
\ref{sec:rates-montecarlo}.
The Monte Carlo reaction rate probability density functions are
displayed in Figs.~\ref{fig:Panel_ag} and~\ref{fig:Panel_an} as red
histograms. The solid black lines indicate the lognormal
approximation, calculated with the lognormal parameters, $\mu$ and
$\sigma$, listed in the last two columns of Tabs.~\ref{tab:agRate}
and~\ref{tab:anRate}.
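The connection between the tabulated percentiles and the lognormal
parameters can be made explicit. For a lognormal distribution, the 16th,
50th, and 84th percentiles are $e^{\mu-\sigma}$, $e^{\mu}$, and
$e^{\mu+\sigma}$, so $\mu$ and $\sigma$ follow directly from the low,
median, and high rates. The sketch below (an illustration, not the
authors' code) recovers the tabulated parameters from one table row:

```python
import math

# Minimal sketch: recover lognormal parameters from the 16th, 50th,
# and 84th percentiles (the "low", "median", and "high" rates), and
# evaluate the lognormal density used for the solid curves of the
# rate-distribution figures.

def lognormal_params(low, median, high):
    mu = math.log(median)
    sigma = 0.5 * (math.log(high) - math.log(low))
    return mu, sigma

def lognormal_pdf(x, mu, sigma):
    """Lognormal probability density f(x)."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

# (alpha,gamma) rate at T = 0.3 GK from the rate table:
mu, sigma = lognormal_params(9.32e-12, 1.13e-11, 1.38e-11)
print(mu, sigma)  # close to the tabulated -2.520e+01 and 1.96e-01
```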
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{fig2}
\caption[\ag rate distributions]{\label{fig:Panel_ag}(Colour
online) Reaction rate
probability densities for the
$^{22}$Ne($\alpha$,$\gamma$)$^{26}$Mg reaction at various
stellar temperatures.
In each panel, the red histogram represents the Monte Carlo
results, while the solid line shows the lognormal
approximation. Note that the solid line is {\textit{not}} a fit
to the histogram, but was calculated from the lognormal
parameters $\mu$ and $\sigma$ (table~\ref{tab:agRate}), which in
turn were determined from equation~(\ref{eq:rates-lognormPars}).
It is apparent that the lognormal approximation to the reaction
rates holds in the temperature range of the s-process (near 0.3\
GK).}
\end{center}
\end{figure*}
\begin{table*}
\begin{center}
\begin{tabular}{l||ccc|cc}
\hline \hline
T (GK) & Low rate & Median rate & High rate & lognormal $\mu$ & lognormal $\sigma$ \\ \hline
0.010 & 1.05$\times$10$^{-77}$ & 2.14$\times$10$^{-77}$ & 4.52$\times$10$^{-77}$ & -1.765$\times$10$^{+02}$ & 7.42$\times$10$^{-01}$ \\
0.011 & 3.99$\times$10$^{-74}$ & 7.28$\times$10$^{-74}$ & 1.34$\times$10$^{-73}$ & -1.684$\times$10$^{+02}$ & 6.15$\times$10$^{-01}$ \\
0.012 & 3.69$\times$10$^{-71}$ & 6.34$\times$10$^{-71}$ & 1.07$\times$10$^{-70}$ & -1.617$\times$10$^{+02}$ & 5.34$\times$10$^{-01}$ \\
0.013 & 1.15$\times$10$^{-68}$ & 1.90$\times$10$^{-68}$ & 3.09$\times$10$^{-68}$ & -1.559$\times$10$^{+02}$ & 4.92$\times$10$^{-01}$ \\
0.014 & 1.55$\times$10$^{-66}$ & 2.52$\times$10$^{-66}$ & 4.04$\times$10$^{-66}$ & -1.511$\times$10$^{+02}$ & 4.80$\times$10$^{-01}$ \\
0.015 & 1.06$\times$10$^{-64}$ & 1.73$\times$10$^{-64}$ & 2.79$\times$10$^{-64}$ & -1.468$\times$10$^{+02}$ & 4.90$\times$10$^{-01}$ \\
0.016 & 4.11$\times$10$^{-63}$ & 6.96$\times$10$^{-63}$ & 1.14$\times$10$^{-62}$ & -1.431$\times$10$^{+02}$ & 5.13$\times$10$^{-01}$ \\
0.018 & 1.80$\times$10$^{-60}$ & 3.26$\times$10$^{-60}$ & 5.63$\times$10$^{-60}$ & -1.370$\times$10$^{+02}$ & 5.75$\times$10$^{-01}$ \\
0.020 & 2.24$\times$10$^{-58}$ & 4.34$\times$10$^{-58}$ & 8.04$\times$10$^{-58}$ & -1.321$\times$10$^{+02}$ & 6.43$\times$10$^{-01}$ \\
0.025 & 1.54$\times$10$^{-54}$ & 3.14$\times$10$^{-54}$ & 6.30$\times$10$^{-54}$ & -1.232$\times$10$^{+02}$ & 7.13$\times$10$^{-01}$ \\
0.030 & 2.82$\times$10$^{-50}$ & 3.35$\times$10$^{-49}$ & 1.30$\times$10$^{-48}$ & -1.121$\times$10$^{+02}$ & 1.87$\times$10$^{+00}$ \\
0.040 & 1.81$\times$10$^{-42}$ & 2.31$\times$10$^{-41}$ & 8.91$\times$10$^{-41}$ & -9.413$\times$10$^{+01}$ & 2.14$\times$10$^{+00}$ \\
0.050 & 8.51$\times$10$^{-38}$ & 1.08$\times$10$^{-36}$ & 4.17$\times$10$^{-36}$ & -8.338$\times$10$^{+01}$ & 2.15$\times$10$^{+00}$ \\
0.060 & 1.05$\times$10$^{-34}$ & 1.34$\times$10$^{-33}$ & 5.14$\times$10$^{-33}$ & -7.624$\times$10$^{+01}$ & 2.08$\times$10$^{+00}$ \\
0.070 & 1.95$\times$10$^{-32}$ & 2.12$\times$10$^{-31}$ & 8.04$\times$10$^{-31}$ & -7.104$\times$10$^{+01}$ & 1.79$\times$10$^{+00}$ \\
0.080 & 2.76$\times$10$^{-30}$ & 1.14$\times$10$^{-29}$ & 3.67$\times$10$^{-29}$ & -6.679$\times$10$^{+01}$ & 1.33$\times$10$^{+00}$ \\
0.090 & 1.76$\times$10$^{-28}$ & 6.30$\times$10$^{-28}$ & 1.35$\times$10$^{-27}$ & -6.289$\times$10$^{+01}$ & 1.15$\times$10$^{+00}$ \\
0.100 & 4.79$\times$10$^{-27}$ & 2.28$\times$10$^{-26}$ & 6.55$\times$10$^{-26}$ & -5.931$\times$10$^{+01}$ & 1.35$\times$10$^{+00}$ \\
0.110 & 8.17$\times$10$^{-26}$ & 5.95$\times$10$^{-25}$ & 1.86$\times$10$^{-24}$ & -5.616$\times$10$^{+01}$ & 1.55$\times$10$^{+00}$ \\
0.120 & 1.11$\times$10$^{-24}$ & 9.63$\times$10$^{-24}$ & 3.07$\times$10$^{-23}$ & -5.343$\times$10$^{+01}$ & 1.64$\times$10$^{+00}$ \\
0.130 & 1.23$\times$10$^{-23}$ & 1.03$\times$10$^{-22}$ & 3.28$\times$10$^{-22}$ & -5.102$\times$10$^{+01}$ & 1.57$\times$10$^{+00}$ \\
0.140 & 1.38$\times$10$^{-22}$ & 8.23$\times$10$^{-22}$ & 2.50$\times$10$^{-21}$ & -4.883$\times$10$^{+01}$ & 1.36$\times$10$^{+00}$ \\
0.150 & 1.53$\times$10$^{-21}$ & 5.57$\times$10$^{-21}$ & 1.51$\times$10$^{-20}$ & -4.679$\times$10$^{+01}$ & 1.10$\times$10$^{+00}$ \\
0.160 & 1.41$\times$10$^{-20}$ & 3.79$\times$10$^{-20}$ & 8.10$\times$10$^{-20}$ & -4.484$\times$10$^{+01}$ & 8.63$\times$10$^{-01}$ \\
0.180 & 8.05$\times$10$^{-19}$ & 1.54$\times$10$^{-18}$ & 2.84$\times$10$^{-18}$ & -4.102$\times$10$^{+01}$ & 6.29$\times$10$^{-01}$ \\
0.200 & 3.41$\times$10$^{-17}$ & 5.43$\times$10$^{-17}$ & 9.60$\times$10$^{-17}$ & -3.740$\times$10$^{+01}$ & 5.19$\times$10$^{-01}$ \\
0.250 & 5.88$\times$10$^{-14}$ & 7.56$\times$10$^{-14}$ & 1.00$\times$10$^{-13}$ & -3.019$\times$10$^{+01}$ & 2.78$\times$10$^{-01}$ \\
0.300 & 9.32$\times$10$^{-12}$ & 1.13$\times$10$^{-11}$ & 1.38$\times$10$^{-11}$ & -2.520$\times$10$^{+01}$ & 1.96$\times$10$^{-01}$ \\
0.350 & 3.46$\times$10$^{-10}$ & 4.08$\times$10$^{-10}$ & 4.86$\times$10$^{-10}$ & -2.162$\times$10$^{+01}$ & 1.69$\times$10$^{-01}$ \\
0.400 & 5.11$\times$10$^{-09}$ & 5.95$\times$10$^{-09}$ & 6.98$\times$10$^{-09}$ & -1.894$\times$10$^{+01}$ & 1.56$\times$10$^{-01}$ \\
0.450 & 4.09$\times$10$^{-08}$ & 4.72$\times$10$^{-08}$ & 5.50$\times$10$^{-08}$ & -1.686$\times$10$^{+01}$ & 1.47$\times$10$^{-01}$ \\
0.500 & 2.13$\times$10$^{-07}$ & 2.44$\times$10$^{-07}$ & 2.82$\times$10$^{-07}$ & -1.522$\times$10$^{+01}$ & 1.41$\times$10$^{-01}$ \\
0.600 & 2.47$\times$10$^{-06}$ & 2.79$\times$10$^{-06}$ & 3.20$\times$10$^{-06}$ & -1.278$\times$10$^{+01}$ & 1.32$\times$10$^{-01}$ \\
0.700 & 1.39$\times$10$^{-05}$ & 1.57$\times$10$^{-05}$ & 1.78$\times$10$^{-05}$ & -1.106$\times$10$^{+01}$ & 1.25$\times$10$^{-01}$ \\
0.800 & 5.15$\times$10$^{-05}$ & 5.77$\times$10$^{-05}$ & 6.51$\times$10$^{-05}$ & -9.758$\times$10$^{+00}$ & 1.18$\times$10$^{-01}$ \\
0.900 & 1.48$\times$10$^{-04}$ & 1.66$\times$10$^{-04}$ & 1.88$\times$10$^{-04}$ & -8.701$\times$10$^{+00}$ & 1.19$\times$10$^{-01}$ \\
1.000 & 3.65$\times$10$^{-04}$ & 4.11$\times$10$^{-04}$ & 4.73$\times$10$^{-04}$ & -7.788$\times$10$^{+00}$ & 1.35$\times$10$^{-01}$ \\
1.250 & 2.33$\times$10$^{-03}$ & 2.77$\times$10$^{-03}$ & 3.43$\times$10$^{-03}$ & -5.867$\times$10$^{+00}$ & 2.02$\times$10$^{-01}$ \\
1.500 & (1.45$\times$10$^{-02}$) & (1.79$\times$10$^{-02}$) & (2.21$\times$10$^{-02}$) & (-4.024$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
1.750 & (7.64$\times$10$^{-02}$) & (9.45$\times$10$^{-02}$) & (1.17$\times$10$^{-01}$) & (-2.360$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
2.000 & (3.00$\times$10$^{-01}$) & (3.70$\times$10$^{-01}$) & (4.58$\times$10$^{-01}$) & (-9.932$\times$10$^{-01}$) & (2.12$\times$10$^{-01}$) \\
2.500 & (2.55$\times$10$^{+00}$) & (3.15$\times$10$^{+00}$) & (3.89$\times$10$^{+00}$) & (1.147$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
3.000 & (1.24$\times$10$^{+01}$) & (1.53$\times$10$^{+01}$) & (1.89$\times$10$^{+01}$) & (2.729$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
3.500 & (4.18$\times$10$^{+01}$) & (5.17$\times$10$^{+01}$) & (6.39$\times$10$^{+01}$) & (3.945$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
4.000 & (1.10$\times$10$^{+02}$) & (1.36$\times$10$^{+02}$) & (1.68$\times$10$^{+02}$) & (4.913$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
5.000 & (4.71$\times$10$^{+02}$) & (5.82$\times$10$^{+02}$) & (7.19$\times$10$^{+02}$) & (6.366$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
6.000 & (1.33$\times$10$^{+03}$) & (1.64$\times$10$^{+03}$) & (2.03$\times$10$^{+03}$) & (7.405$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
7.000 & (2.91$\times$10$^{+03}$) & (3.59$\times$10$^{+03}$) & (4.44$\times$10$^{+03}$) & (8.186$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
8.000 & (5.35$\times$10$^{+03}$) & (6.62$\times$10$^{+03}$) & (8.18$\times$10$^{+03}$) & (8.798$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
9.000 & (8.68$\times$10$^{+03}$) & (1.07$\times$10$^{+04}$) & (1.33$\times$10$^{+04}$) & (9.281$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
10.000& (1.30$\times$10$^{+04}$) & (1.60$\times$10$^{+04}$) & (1.98$\times$10$^{+04}$) & (9.681$\times$10$^{+00}$) & (2.12$\times$10$^{-01}$) \\
\hline \hline
\end{tabular}
\caption{\label{tab:agRate} Monte Carlo reaction rates for the
$^{22}$Ne($\alpha$,$\gamma$)$^{26}$Mg reaction. Shown are the
low, median, and high rates, corresponding to the 16th, 50th,
and 84th percentiles of the Monte Carlo probability density
distributions. Also shown are the parameters ($\mu$ and
$\sigma$) of the lognormal approximation to the actual Monte
Carlo probability density. See Ref.\ \cite{LON10} for
details. The rate values shown in parentheses indicate the
temperatures ($T>T_{\text{match}}=1.33$~GK) for which
Hauser-Feshbach rates, normalised to experimental results, are
adopted (see section~\ref{sec-2_4}).}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\begin{tabular}{l||ccc|cc}
\hline \hline
T (GK) & Low rate & Median rate & High rate & lognormal $\mu$ & lognormal $\sigma$ \\ \hline
0.010 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.011 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.012 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.013 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.014 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.015 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.016 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.018 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.020 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.025 & 0.0 & 0.0 & 0.0 & - - - - & - - - - \\
0.030 & 5.12$\times$10$^{-88}$ & 5.08$\times$10$^{-87}$ & 2.25$\times$10$^{-86}$ & -1.991$\times$10$^{+02}$ & 1.90$\times$10$^{+00}$ \\
0.040 & 1.46$\times$10$^{-67}$ & 1.49$\times$10$^{-66}$ & 6.64$\times$10$^{-66}$ & -1.519$\times$10$^{+02}$ & 1.94$\times$10$^{+00}$ \\
0.050 & 2.99$\times$10$^{-55}$ & 3.05$\times$10$^{-54}$ & 1.36$\times$10$^{-53}$ & -1.236$\times$10$^{+02}$ & 1.95$\times$10$^{+00}$ \\
0.060 & 4.92$\times$10$^{-47}$ & 4.87$\times$10$^{-46}$ & 2.17$\times$10$^{-45}$ & -1.047$\times$10$^{+02}$ & 1.92$\times$10$^{+00}$ \\
0.070 & 3.70$\times$10$^{-41}$ & 3.48$\times$10$^{-40}$ & 1.55$\times$10$^{-39}$ & -9.117$\times$10$^{+01}$ & 1.84$\times$10$^{+00}$ \\
0.080 & 1.03$\times$10$^{-36}$ & 8.44$\times$10$^{-36}$ & 3.73$\times$10$^{-35}$ & -8.101$\times$10$^{+01}$ & 1.74$\times$10$^{+00}$ \\
0.090 & 3.23$\times$10$^{-33}$ & 2.19$\times$10$^{-32}$ & 9.43$\times$10$^{-32}$ & -7.309$\times$10$^{+01}$ & 1.62$\times$10$^{+00}$ \\
0.100 & 2.17$\times$10$^{-30}$ & 1.20$\times$10$^{-29}$ & 4.92$\times$10$^{-29}$ & -6.673$\times$10$^{+01}$ & 1.50$\times$10$^{+00}$ \\
0.110 & 4.65$\times$10$^{-28}$ & 2.12$\times$10$^{-27}$ & 8.22$\times$10$^{-27}$ & -6.151$\times$10$^{+01}$ & 1.39$\times$10$^{+00}$ \\
0.120 & 4.24$\times$10$^{-26}$ & 1.62$\times$10$^{-25}$ & 5.82$\times$10$^{-25}$ & -5.714$\times$10$^{+01}$ & 1.29$\times$10$^{+00}$ \\
0.130 & 1.94$\times$10$^{-24}$ & 6.61$\times$10$^{-24}$ & 2.14$\times$10$^{-23}$ & -5.342$\times$10$^{+01}$ & 1.19$\times$10$^{+00}$ \\
0.140 & 5.27$\times$10$^{-23}$ & 1.64$\times$10$^{-22}$ & 4.81$\times$10$^{-22}$ & -5.020$\times$10$^{+01}$ & 1.08$\times$10$^{+00}$ \\
0.150 & 9.94$\times$10$^{-22}$ & 2.74$\times$10$^{-21}$ & 7.18$\times$10$^{-21}$ & -4.737$\times$10$^{+01}$ & 9.62$\times$10$^{-01}$ \\
0.160 & 1.43$\times$10$^{-20}$ & 3.39$\times$10$^{-20}$ & 7.89$\times$10$^{-20}$ & -4.484$\times$10$^{+01}$ & 8.29$\times$10$^{-01}$ \\
0.180 & 1.61$\times$10$^{-18}$ & 2.74$\times$10$^{-18}$ & 5.01$\times$10$^{-18}$ & -4.040$\times$10$^{+01}$ & 5.53$\times$10$^{-01}$ \\
0.200 & 9.14$\times$10$^{-17}$ & 1.24$\times$10$^{-16}$ & 1.79$\times$10$^{-16}$ & -3.660$\times$10$^{+01}$ & 3.43$\times$10$^{-01}$ \\
0.250 & 1.68$\times$10$^{-13}$ & 2.06$\times$10$^{-13}$ & 2.53$\times$10$^{-13}$ & -2.921$\times$10$^{+01}$ & 2.06$\times$10$^{-01}$ \\
0.300 & 2.74$\times$10$^{-11}$ & 3.36$\times$10$^{-11}$ & 4.15$\times$10$^{-11}$ & -2.411$\times$10$^{+01}$ & 2.06$\times$10$^{-01}$ \\
0.350 & 1.05$\times$10$^{-09}$ & 1.29$\times$10$^{-09}$ & 1.59$\times$10$^{-09}$ & -2.046$\times$10$^{+01}$ & 2.05$\times$10$^{-01}$ \\
0.400 & 1.64$\times$10$^{-08}$ & 2.00$\times$10$^{-08}$ & 2.45$\times$10$^{-08}$ & -1.773$\times$10$^{+01}$ & 1.99$\times$10$^{-01}$ \\
0.450 & 1.42$\times$10$^{-07}$ & 1.71$\times$10$^{-07}$ & 2.07$\times$10$^{-07}$ & -1.558$\times$10$^{+01}$ & 1.88$\times$10$^{-01}$ \\
0.500 & 8.51$\times$10$^{-07}$ & 1.00$\times$10$^{-06}$ & 1.19$\times$10$^{-06}$ & -1.381$\times$10$^{+01}$ & 1.68$\times$10$^{-01}$ \\
0.600 & 1.74$\times$10$^{-05}$ & 1.92$\times$10$^{-05}$ & 2.15$\times$10$^{-05}$ & -1.085$\times$10$^{+01}$ & 1.07$\times$10$^{-01}$ \\
0.700 & 2.36$\times$10$^{-04}$ & 2.51$\times$10$^{-04}$ & 2.69$\times$10$^{-04}$ & -8.287$\times$10$^{+00}$ & 6.70$\times$10$^{-02}$ \\
0.800 & 2.15$\times$10$^{-03}$ & 2.27$\times$10$^{-03}$ & 2.42$\times$10$^{-03}$ & -6.084$\times$10$^{+00}$ & 5.79$\times$10$^{-02}$ \\
0.900 & 1.36$\times$10$^{-02}$ & 1.43$\times$10$^{-02}$ & 1.51$\times$10$^{-02}$ & -4.246$\times$10$^{+00}$ & 5.33$\times$10$^{-02}$ \\
1.000 & 6.34$\times$10$^{-02}$ & 6.64$\times$10$^{-02}$ & 6.98$\times$10$^{-02}$ & -2.711$\times$10$^{+00}$ & 4.82$\times$10$^{-02}$ \\
1.250 & 1.18$\times$10$^{+00}$ & 1.22$\times$10$^{+00}$ & 1.27$\times$10$^{+00}$ & 1.998$\times$10$^{-01}$ & 3.88$\times$10$^{-02}$ \\
1.500 & (1.09$\times$10$^{+01}$) & (1.14$\times$10$^{+01}$) & (1.18$\times$10$^{+01}$) & (2.431$\times$10$^{+00}$) & (3.89$\times$10$^{-02}$) \\
1.750 & (6.79$\times$10$^{+01}$) & (7.06$\times$10$^{+01}$) & (7.34$\times$10$^{+01}$) & (4.257$\times$10$^{+00}$) & (3.89$\times$10$^{-02}$) \\
2.000 & (2.92$\times$10$^{+02}$) & (3.04$\times$10$^{+02}$) & (3.16$\times$10$^{+02}$) & (5.717$\times$10$^{+00}$) & (3.89$\times$10$^{-02}$) \\
2.500 & (2.74$\times$10$^{+03}$) & (2.85$\times$10$^{+03}$) & (2.96$\times$10$^{+03}$) & (7.953$\times$10$^{+00}$) & (3.89$\times$10$^{-02}$) \\
3.000 & (1.41$\times$10$^{+04}$) & (1.46$\times$10$^{+04}$) & (1.52$\times$10$^{+04}$) & (9.590$\times$10$^{+00}$) & (3.89$\times$10$^{-02}$) \\
3.500 & (4.96$\times$10$^{+04}$) & (5.16$\times$10$^{+04}$) & (5.37$\times$10$^{+04}$) & (1.085$\times$10$^{+01}$) & (3.89$\times$10$^{-02}$) \\
4.000 & (1.36$\times$10$^{+05}$) & (1.41$\times$10$^{+05}$) & (1.47$\times$10$^{+05}$) & (1.186$\times$10$^{+01}$) & (3.89$\times$10$^{-02}$) \\
5.000 & (6.10$\times$10$^{+05}$) & (6.34$\times$10$^{+05}$) & (6.59$\times$10$^{+05}$) & (1.336$\times$10$^{+01}$) & (3.89$\times$10$^{-02}$) \\
6.000 & (1.80$\times$10$^{+06}$) & (1.88$\times$10$^{+06}$) & (1.95$\times$10$^{+06}$) & (1.444$\times$10$^{+01}$) & (3.89$\times$10$^{-02}$) \\
7.000 & (4.07$\times$10$^{+06}$) & (4.23$\times$10$^{+06}$) & (4.40$\times$10$^{+06}$) & (1.526$\times$10$^{+01}$) & (3.89$\times$10$^{-02}$) \\
8.000 & (7.70$\times$10$^{+06}$) & (8.01$\times$10$^{+06}$) & (8.32$\times$10$^{+06}$) & (1.590$\times$10$^{+01}$) & (3.89$\times$10$^{-02}$) \\
9.000 & (1.28$\times$10$^{+07}$) & (1.33$\times$10$^{+07}$) & (1.39$\times$10$^{+07}$) & (1.640$\times$10$^{+01}$) & (3.89$\times$10$^{-02}$) \\
10.000&(1.97$\times$10$^{+07}$) & (2.04$\times$10$^{+07}$) & (2.12$\times$10$^{+07}$) & (1.683$\times$10$^{+01}$) & (3.89$\times$10$^{-02}$) \\
\hline \hline
\end{tabular}
\caption{\label{tab:anRate} Monte Carlo reaction rates for the
$^{22}$Ne($\alpha$,n)$^{25}$Mg reaction. Shown are the low,
median, and high rates, corresponding to the 16th, 50th, and
84th percentiles of the Monte Carlo probability density
distributions. Also shown are the parameters ($\mu$ and
$\sigma$) of the lognormal approximation to the actual Monte
Carlo probability density. See Ref.\ \cite{LON10} for
details. The rate values shown in parentheses indicate the
temperatures ($T>T_{\text{match}}=1.33$~GK) for which
Hauser-Feshbach rates, normalised to experimental results, are
adopted (see section~\ref{sec-2_4}).}
\end{center}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\textwidth]{fig3}
\caption[\an rate distributions]{\label{fig:Panel_an}(Colour
online) Reaction rate probability densities for the
$^{22}$Ne($\alpha$,n)$^{25}$Mg reaction. See caption to
Fig.~\ref{fig:Panel_ag}.}
\end{center}
\end{figure*}
In order to emphasise that our low and high rates, obtained for a
coverage probability of 68\% (see section~\ref{sec:rates-montecarlo}),
do {\textit{not}} represent sharp boundaries, we show the
($\alpha$,$\gamma$) and ($\alpha$,n) reaction rates, normalised to the
respective recommended (median) values, as colour contours in
Figs.~\ref{fig:Uncerts_ag} and~\ref{fig:Uncerts_an}. The thick
and thin solid lines represent coverage probabilities of 68\% and
95\%, respectively. The three dashed lines show the previously
reported rates (\textcite{ANG99} for
\reaction{22}{Ne}{$\alpha$}{$\gamma$}{26}{Mg} and \textcite{JAE01} for
\reaction{22}{Ne}{$\alpha$}{n}{25}{Mg}), normalised to our recommended
rate. Our calculations of the relative resonance contributions to the
total ($\alpha$,$\gamma$) and ($\alpha$,n) reaction rates show that,
at the temperatures most relevant to the s-process, resonances at
and below the \Erlab{831} resonance are the most important. Future
experimental efforts should therefore concentrate on studying
resonances in the excitation energy region near the neutron threshold.
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig4}
\caption[\ag comparison with NACRE]{\label{fig:Uncerts_ag}(Colour
online) The uncertainty bands for the
$^{22}$Ne($\alpha$,$\gamma$)$^{26}$Mg reaction. The
uncertainties are the result of upper limit resonance
contributions and of resonance strength uncertainties. The
colour-densities represent the present reaction rate probability
densities normalised to our recommended rate. The thick and thin
black lines represent the 68\% and 95\% uncertainties,
respectively. The dashed blue lines represent the literature
rates from Ref.\ \cite{ANG99}, with the thick and thin lines
denoting the recommended rate and rate limits, respectively,
normalised to our recommended rate. Values below unity (dotted
line) indicate that the rates are lower than the present
recommended rate. The relevant temperatures for helium- and
carbon shell-burning have been added as red bars with the labels
``He'' and ``C'', respectively.}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig5}
\caption[\an comparison with NACRE]{\label{fig:Uncerts_an}(Colour
online) The probability densities for the
$^{22}$Ne($\alpha$,n)$^{25}$Mg reaction in comparison to those
presented by Ref.\ \cite{JAE01}. See caption of Fig.\
\ref{fig:Uncerts_ag} for an explanation.}
\end{center}
\end{figure*}
For the \ag reaction, the present rates deviate significantly, by
factors of 2--100, from the results of Ref.\ \cite{ANG99}.
The differences are caused by: (i) a different treatment of partial
widths; in Ref.\ \cite{ANG99} the rates were found from numerical
integration by assuming upper limit values ($\Gamma=4-10$~keV) for the
total widths, whereas in the present work total widths have been
adopted from measured values; (ii) our improved treatment of upper
limits for reduced $\alpha$-particle widths (i.e., sampling over a
Porter-Thomas distribution; see section~\ref{sec:rates-montecarlo});
and (iii) new nuclear data that have become available since 1999
(see Tab.~\ref{tab:data-since-NACRE}). The combined effect of these
improvements results in a factor of 5 reduction in reaction rate
uncertainties in the He-burning temperature region.
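The Porter-Thomas treatment of upper limits mentioned in point (ii) can be
sketched as follows (a minimal illustration, not the authors'
implementation; the mean reduced width and the upper limit below are
arbitrary example values). Dimensionless reduced $\alpha$-particle widths
are assumed to follow a chi-squared distribution with one degree of
freedom, and samples exceeding the experimental upper limit are discarded:

```python
import numpy as np

# Minimal sketch of Porter-Thomas sampling for an upper-limit resonance:
# reduced widths theta^2 follow <theta^2> * chi2(dof=1), truncated at
# the experimentally determined upper limit.

rng = np.random.default_rng(0)

def sample_reduced_widths(mean_theta_sq, upper_limit, n=100000):
    """Draw Porter-Thomas samples of theta^2, keeping only those
    below the experimental upper limit."""
    samples = mean_theta_sq * rng.chisquare(1, size=n)
    return samples[samples < upper_limit]

# Example with assumed <theta^2> = 0.01 and upper limit 0.1:
theta_sq = sample_reduced_widths(mean_theta_sq=0.01, upper_limit=0.1)
print(theta_sq.mean())  # mean of the accepted (truncated) samples
```

Each accepted sample would then be converted to an $\alpha$-particle
partial width and propagated through the Monte Carlo rate calculation.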
As already noted in section~\ref{sec:general-aspects}, a number of
excited states near the $\alpha$-particle and neutron thresholds in
\nuc{26}{Mg} have been observed by additional inelastic proton
scattering experiments~\citep{MOS76,CRA89,TAM03}. However,
their spins and parities have not been determined. In particular, it
is not known at present if these levels possess natural parity and
thereby may be populated in the \Nepa reactions. In order to
investigate the maximum impact of these states with unknown $J^{\pi}$
values on the \ag reaction rate, we
performed a test by assuming that all of these levels possess natural
parity and by adopting upper limit $\alpha$-particle spectroscopic
factors from the data of Refs.\ \cite{GIE93} and \cite{UGA07}. The
results show that these states can increase the \ag reaction rate by
up to a factor of 30 at temperatures between $T=0.1$ and $0.2$~GK.
For the \an reaction there is better agreement between previous and
new rates. The present rates are slightly higher (up to a factor of 2)
than those calculated by Ref.\ \cite{JAE01}. The two main reasons for
the difference are: (i) we used inflated weighted averages of the
reported resonance strengths from different measurements (see
section~\ref{sec:resonance-strengths}); and (ii) we excluded the
contribution of a presumed \Erlab{630} resonance, because the level at
\Ex{11154} has been shown to possess unnatural parity~\citep{LON09}.
We would like to emphasise that the observed ($\alpha$,n) and
($\alpha$,$\gamma$) resonances near \Erlab{830} introduce another
systematic uncertainty that we have not accounted for. Recall that we
treated these two resonances as independent and narrow
(section~\ref{sec:levels}). On the other hand, if they correspond to
the same level in \nuc{26}{Mg}, the partial widths could be derived
from the measured resonance strengths. In that case, the resonance
turns out to be relatively broad, resulting in a significant
contribution of the resonance tail to the total reaction rate. Tests
show that the resulting reaction rates near $T \approx 0.3$~GK could
increase by roughly a factor of 5.
The ratio of \an to \ag reaction rates is shown in
figure~\ref{fig:RateRatio}. Note that these rates are not independent
since, for example, the same values of the $\alpha$-particle partial
widths enter both rate calculations when an $(\alpha,\gamma)$
resonance and an $(\alpha,n)$ resonance correspond to the same
\nuc{26}{Mg} level. Thus the uncertainties shown in
Fig.~\ref{fig:RateRatio} are
somewhat overestimated. Nevertheless, it is instructive to compare the
present ratios, shown in black, to those from previous work
\citep{ANG99,JAE01}, displayed in red. It can be seen that the present
ratio is significantly larger than previous results and, consequently,
we predict that more neutrons will be produced per captured
$\alpha$-particle.
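The construction of the ratio band described in the figure caption is
simple arithmetic and can be sketched directly (values are read from the
rate tables at $T=0.3$~GK; this is an illustration, not the authors'
code):

```python
# Sketch of the rate-ratio uncertainty band: the recommended ratio
# divides the two median rates, while the band edges pair the high
# (alpha,n) rate with the low (alpha,gamma) rate, and vice versa.
# Rates (cm^3 mol^-1 s^-1 units implied) at T = 0.3 GK:

an_low, an_med, an_high = 2.74e-11, 3.36e-11, 4.15e-11   # 22Ne(a,n)25Mg
ag_low, ag_med, ag_high = 9.32e-12, 1.13e-11, 1.38e-11   # 22Ne(a,g)26Mg

ratio_rec = an_med / ag_med     # recommended ratio
ratio_high = an_high / ag_low   # upper edge of the band
ratio_low = an_low / ag_high    # lower edge of the band

print(ratio_low, ratio_rec, ratio_high)  # roughly 2.0, 3.0, 4.5
```

A ratio above unity at this temperature means that, per captured
$\alpha$-particle, neutron emission is favoured over $\gamma$-ray
emission.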
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{fig6}
\caption{(Colour online) Uncertainty bands of the reaction rate
\textit{ratio}, $N_{A}\langle \sigma v
\rangle_{(\alpha,n)}/N_{A}\langle \sigma v
\rangle_{(\alpha,\gamma)}$. The solid (black) lines represent
the present reaction rate ratio, while the dashed (blue) lines
represent the ratio of rates from Ref.\ \cite{JAE01} for
\reaction{22}{Ne}{$\alpha$}{n}{25}{Mg}, and Ref.\ \cite{ANG99},
for \reaction{22}{Ne}{$\alpha$}{$\gamma$}{26}{Mg}. The
recommended ratio (the center line in each set) was calculated
by dividing the recommended \an reaction rate by that of the
\ag reaction at each temperature. To obtain the uncertainty
bands for the rate ratio, the high rate for \ag was divided by
the low rate for \reaction{22}{Ne}{$\alpha$}{n}{25}{Mg}, and
vice versa. Values greater than unity indicate that more
neutrons (and \nuc{25}{Mg}) are produced than $\gamma$-rays
(and \nuc{26}{Mg}) per $\alpha$-particle capture. The
temperatures relevant in helium- and carbon shell-burning are
represented by red bars and are marked with ``He'' and ``C'',
respectively.}
\label{fig:RateRatio}
\end{figure*}
\section{Astrophysical Implications}
\label{sec:results}
\subsection{Models}
\label{sec:num-method}
In order to explore how the current \Nepa reaction rates affect
s-process nucleosynthesis, two kinds of calculations are presented
here. The first compares final abundance yields from post-processing
models upon changing the \textit{recommended} \Nepa reaction rates
from previously published results to those presented in this
paper.
The second calculation estimates the variations in s-process
nucleosynthesis arising from \textit{uncertainties} in the present
\Nepa reaction rates. These can then be compared with abundance
variations arising from the literature rates.
In order to take the uncertainties into account, three sets of
calculations were performed: (i) recommended rates for both \an and
\ag reactions, (ii) low \an rate and high \ag rate, and (iii) high \an
rate and low \ag rate. Although in reality the \ag and \an rates will
be correlated, it is difficult to account for these correlations since
the \ag rate includes resonances below the neutron threshold and since
some \nuc{26}{Mg} levels contribute more to one reaction channel than
the other. For these reasons, we have chosen in the present study to
explore conservatively the impact of the largest reaction rate
variations. These nucleosynthesis calculations are performed
separately for massive stars and AGB stars.
\subsubsection{Massive Star Models}
A single zone temperature-density profile has been used to study the
effects of the \Nepa reaction rates on nucleosynthesis during the core
helium burning stage in massive stars.
The temperature-density profile and initial abundances used in the
present study are for a $25M_{\odot}$ star and have been used
previously in Refs.\ \cite{THE00,ILIBook}. The most abundant isotopes
at the onset of helium burning are (in mass fractions, $X$): $^4$He
($X_{\alpha}=0.982$), $^{14}$N ($X_{^{14}\text{N}}=0.0122$), $^{20}$Ne
($X_{^{20}\text{Ne}}=0.0016$), and $^{60}$Fe
($X_{^{60}\text{Fe}}=0.00117$). During most of the core helium burning
phase, the temperature and density ($T \approx 100 - 250$~MK and $\rho
\approx 1000 - 2000$~g$/$cm$^3$, respectively) are not high enough for the
\an neutron source to produce a significant number of
neutrons. However, towards the end of this phase the temperatures
become high enough for efficient neutron production. The exact time at
which the \an reaction starts to occur not only affects the number of
neutrons produced during core helium burning, but also the amount of
\nuc{22}{Ne} remaining that can be processed later during the carbon
shell burning phase. Although not studied here, the s-process is also
expected to be active during shell carbon burning.
The nucleosynthesis study was performed with a 583 nucleus s-process
network that extends up to molybdenum. Reaction rates (other than the
\Nepa rates) were adopted from the \texttt{starlib} library
\citep{ILI11}. The \texttt{starlib} library incorporates a compilation
of recently evaluated experimental Monte Carlo reaction rates in
tabular format on a grid of 60 temperatures from 1~MK to
10~GK. Tabulated are the temperature, the reaction rate, and the
factor uncertainty, which is closely related to the lognormal
parameter, $\sigma$, in Ref.\ \cite{LON10}.
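Low and high rates can be recovered from the tabulated median rate and factor uncertainty. A minimal sketch, assuming the lognormal convention of Ref.\ \cite{LON10}, in which the factor uncertainty is $\mathrm{f.u.} = e^{\sigma}$ and the 68\% coverage interval is obtained by dividing and multiplying the median rate by it:

```python
import numpy as np

def rate_limits(rate_rec, factor_uncertainty):
    """Low/high rates from a tabulated median rate and factor uncertainty.

    Assumes the lognormal convention: f.u. = exp(sigma), so the 68%
    coverage interval is [median / f.u., median * f.u.].
    """
    rate_rec = np.asarray(rate_rec, dtype=float)
    fu = np.asarray(factor_uncertainty, dtype=float)
    return rate_rec / fu, rate_rec * fu

def sigma_from_fu(factor_uncertainty):
    """Lognormal spread parameter sigma implied by a factor uncertainty."""
    return np.log(np.asarray(factor_uncertainty, dtype=float))
```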
\subsubsection{AGB Star Models}
The AGB nucleosynthesis tests are performed on a 5.5$M_{\odot}$,
$Z=0.0001$ model star, detailed in Ref.\ \cite{KAR10}. This model was
chosen because it experiences many thermal pulses (77 in total) during
the AGB phase, 69 of which reach peak temperatures of 0.30~GK or
higher (with 50 thermal pulses reaching 0.35~GK).
One complication arises from disentangling the effects of the \Nepa
rates and those of proton-capture nucleosynthesis at the base of the
convective envelope (hot bottom burning, HBB). In our model, the base
of the envelope reaches peak temperatures of 98 MK, easily hot enough
for activation of the NeNa and MgAl proton-burning chains. The main
results were reductions in the envelope \nuc{24}{Mg} and \nuc{25}{Mg}
abundances, and increases in \nuc{26}{Mg}, \nuc{26}{Al}, and
\nuc{27}{Al}. This means that the He-intershell preceding a pulse
contains a non-solar Mg isotopic composition that is enriched in
\nuc{26}{Mg}.
The post-processing nucleosynthesis code used for the AGB star models
has previously been described in detail in Ref.\ \cite{KAR10}. As
input, this code needs variables from the stellar evolution code, such
as temperature, density, and convective boundaries, as functions of
time and mass fraction. The code then
traces the abundance changes as a function of mass and time using a
nuclear network containing 172 species (from neutrons to sulphur, and
then from iron to molybdenum) and assuming time-dependent diffusive
mixing for all convective zones \citep{CAN93}. Although this network
does not contain species of the main s-process above A $\approx 100$,
their production is estimated by the inclusion of an extra isotope
(the ``g particle'', counting neutron captures beyond our
network). The reaction rates used in the nuclear network are mostly
taken from the JINA \texttt{reaclib} database \citep{CYB10}, with the
exception of the \Nepa rates adopted from the present work. Some
modifications were made to the JINA \texttt{reaclib} library including
the removal of the \nuc{96}{Zr} decay rate (since this is an
essentially stable isotope with a half-life of $t_{1/2} \ge 10^{19}$
years), and the inclusion of the ground and isomeric states in
\nuc{85}{Kr}. This is done because 50\% of the neutron flux from $n +$
\nuc{84}{Kr} proceeds to the ground state of \nuc{85}{Kr} ($t_{1/2} =
3934.4$ days) and the other 50\% goes to the isomeric state ($\tau =
4.480$ hours). The inclusion of both \nuc{85}{Kr} states is essential
for Rb abundance predictions in AGB nucleosynthesis models \citep[see
discussion in][]{GAR09,LUG11}.
\subsection{Results}
The effects of our new rates on nucleosynthesis, compared with those
obtained using the literature rates, are shown in Fig.\
\ref{fig:AbundanceChange-both}. The improvements in abundance
predictions for the two stellar environments are shown in Fig.\
\ref{fig:Uncertainties-both}. The most up-to-date previously published
rates for the \ag and \an reactions are from Refs.\ \cite{ANG99} and
\cite{JAE01}, respectively. The effects are markedly different for the
two s-process environments, hence they will be discussed separately in
the following.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{fig7}
\caption{(Colour online) Ratio of final abundances resulting from
the new recommended \ag and \an rates to those obtained from the
old recommended rates. Points above unity (red line) represent a
net increase in abundance. (a) At the end of core He-burning in a
$25M_{\odot}$ star, the most significant abundances affected by
the new rates are those of \nuc{26}{Mg} and the p-nuclei
\nuc{74}{Se}, \nuc{78}{Kr}, and \nuc{84}{Sr}. (b) For AGB stars,
higher mass nuclei are produced in larger quantities, as evidenced
by the `g' particle that monitors neutron captures above
molybdenum.}
\label{fig:AbundanceChange-both}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{fig8}
\caption{(Colour online) Comparison of abundance variations versus
mass number arising from \Nepa rate uncertainties presented in the
literature (Ref.\ \cite{ANG99} for the \ag reaction rate and Ref.\
\cite{JAE01} for the \an reaction rate) and from the current \Nepa
rate uncertainties. Abundance changes based on the present and
previous rate uncertainties are shown as thick black bars and thin
red bars, respectively, for (a) massive stars, and (b) AGB stars.}
\label{fig:Uncertainties-both}
\end{figure*}
\subsubsection{Massive Stars}
The recommended \an reaction rate has not changed significantly in the
present analysis. Consequently, we do not expect the final
\nuc{25}{Mg} abundance to change. The final \nuc{26}{Mg} abundance, on
the other hand, changes significantly by roughly a factor of
three. The abundance changes in nuclei heavier than iron are smaller,
with the largest abundance increases occurring near \nuc{64}{Ni}. The
increased destruction of isotopes already present in the star is also
apparent for the p-nuclides \nuc{74}{Se}, \nuc{78}{Kr}, \nuc{84}{Sr},
and \nuc{93}{Nb}. These results indicate that with the reduced \ag
rate, more neutrons are produced per \Nepa reaction. Rather than
extending the reach of the weak s-process component (i.e., synthesis
of more massive nuclei), this flux increase affects branchings in the
s-process path close to the iron peak. A wider range of intermediate
mass nuclei are therefore produced. Fig.\
\ref{fig:AbundanceChange-both} also illustrates that the \Nepa rates
not only affect the abundances of traditional s-process nuclides, but
also the abundances of nuclei below the iron peak that act as
poisons. An example is \nuc{25}{Mg}, which produces \nuc{26}{Mg} through
the \reaction{25}{Mg}{n}{$\gamma$}{26}{Mg} reaction. With a higher flux
of available neutrons, this neutron poison reaction occurs more
frequently, effectively lessening the impact of the increased neutron
flux on s-process nucleosynthesis.
Uncertainties in s-process nucleosynthesis in massive stars arising
from uncertainties in the \Nepa reaction rates are shown in Fig.\
\ref{fig:Uncertainties-both}, where the thin (red) bars show
uncertainties arising from the old rates, and thicker (black) bars
show those from the new rates. In particular, large reductions are
noticeable for \nuc{26}{Mg}, where the current yield uncertainty
amounts to around 50\% in contrast to the previous factor of
5. Uncertainties in weak s-process nucleosynthesis have also undergone
significant improvements, especially for species that can only be
destroyed, but not created, by neutron captures. An example of this is
the nucleus \nuc{58}{Ni} whose yield uncertainty has been reduced from
a factor of five to just 50\%. It is important to note
here that, although the Monte-Carlo reaction rates do take into
account systematic uncertainties, it is difficult to account for
ambiguities in the data, for example, the open question of whether or
not the \Erlab{830} resonance is a doublet. Clearly, more measurements
are needed.
\subsubsection{AGB Stars}
Nucleosynthesis yields from our low metallicity AGB star models show a
very different pattern to those of the massive star study. For AGB
stars, the effect on lighter elements is reduced in comparison to
massive stars, with higher mass s-process elements revealing the
largest changes. This weighting toward higher mass s-process elements
is caused by our choice of using a low metallicity model. At low
metallicity, the neutron/Fe seed ratio is much higher meaning that
there is a higher production of higher atomic mass nuclei (e.g., see
discussion in Refs.~\cite{BUS01,KAR12}). Nuclei towards the upper end
of our network are produced up to a factor of 2 more than before, with
the `g' particle representing nuclei beyond our network capturing over
70\% more neutrons. In low metallicity AGB stars, therefore, the \Nepa
reactions can be expected to produce more high-mass s-process
elements, while leaving the low-mass s-process below $A \approx 80$
largely unaffected.
Uncertainties in s-process nucleosynthesis have been, as in massive
stars, dramatically improved with our new rates. The previous
abundance uncertainties were approximately a factor of 10, while the
present uncertainties amount to less than a factor of 2. The present
uncertainties in the rates affect the lower masses from A$\approx$25
to A$\approx$35 more than the s-process abundances. The ratio of
\nuc{26}{Mg} to \nuc{25}{Mg} is still uncertain by
approximately 20\%, whereas it was previously around 80\% (note that
Ref.\ \cite{KAR06} found \nuc{26}{Mg}/\nuc{25}{Mg} ratio uncertainties
of 60\%). Rubidium and zirconium isotopes have undergone yield
uncertainty improvements by a factor of about two. For the s-nuclide
\nuc{96}{Mo}, the uncertainty has been reduced from a factor of 4 to
a factor of 2 with our present results.
The new \Nepa reaction rates presented here should also be tested with
low-mass AGB star models ($M \le 3 M_{\odot}$). In lower mass AGB
stars, while the \reaction{13}{C}{$\alpha$}{n}{16}{O} reaction is the
main neutron source active between thermal pulses, activation of the
\Nepa reactions during a convective thermal pulse can have a
significant effect on branchings in the s-process path.
\section{Conclusions}
\label{sec:conclusions}
Both the \an and the \ag reactions influence the neutron flux
available to the s-process in massive stars and AGB
stars. Uncertainties in the rates, therefore, lead to large
uncertainties in s-process nucleosynthesis. In this paper, we have
estimated greatly improved \Nepa reaction rates, based on newly
available experimental information published since the works of Refs.\
\cite{JAE01} and \cite{ANG99}, and by applying a sophisticated rate
computational method\ \citep{LON10}. Subsequently, we explored the
astrophysical consequences for massive stars and for AGB stars.
In massive stars, simple one zone models of core helium-burning were
utilised to determine the influence of the new rates on the weak
component of the s-process. The most important result of our study is
a significant reduction of nucleosynthesis uncertainties. The yield
uncertainty has been reduced by a factor of 5 to 10 across
the s-process mass region considered here (A $< 100$). For example,
the yields of key isotopes, \nuc{26}{Mg} and \nuc{70}{Zn}, have
uncertainty reduction factors of about 5 and 10, respectively. When
comparing abundances obtained from our new recommended rates with
those derived from previous recommended rates, the final yield of
\nuc{26}{Mg} is found to have been reduced by roughly a factor of
three, while s-process isotopes were affected only
marginally. However, s-process nucleosynthesis is more concentrated
around the iron peak when using the new reaction rates. This relative
insensitivity to changes in neutron flux is partially caused by
captures on the neutron poisons \nuc{12}{C}, \nuc{16}{O}, and
\nuc{25}{Mg}, which are present in large quantities.
In our AGB star models, the final abundance uncertainties have also
been improved significantly with the new rates, with reductions by up
to an order of magnitude. The key rubidium and zirconium isotopes, for
example, have undergone yield uncertainty improvements of roughly a
factor of two. We have also found that s-process nucleosynthesis is
more active when including the new \Nepa reaction rates. While only
small changes are found in the low-mass s-process path ($A < 80$), at
higher masses production increases by up to a factor of 2. This is
especially evident by counting the number of captures at the end of
our network, yielding an increase of over 70\%. Further calculations
should be performed to study the effect of our new rates on lower mass
AGB stars, while paying special attention to their effects on
branchings in the s-process path.
The Monte-Carlo method used in the present study to calculate the
\Nepa reaction rates has the distinct advantage of calculating the
uncertainties in a robust and statistically meaningful manner. Although
our rates include some of the systematic uncertainties in the nuclear
data, there are still open questions regarding the resonance
properties that could affect the rates. Clearly, the remaining
ambiguities in the nuclear data for the \Nepa reaction rates need to
be resolved. The discrepancies discussed here, by \textcite{KOE02},
and by \textcite{KAR06}, make it difficult to assign some \nuc{26}{Mg}
levels to \Nepa resonances. Furthermore, the \Erlab{831} resonance
should be re-measured with high precision. More information should
also be gathered on the structure of \nuc{26}{Mg} levels near the
$\alpha$-particle and neutron thresholds. Indirect methods such as
particle transfer measurements are useful here, since the Coulomb
barrier inhibits direct measurements.
\section{Acknowledgements}
This work was supported in part by the US Department of Energy under
grant DE-FG02-97ER41041 and the National Science Foundation under
award number AST-1008355. This work was also partially supported by
the Spanish grant AYA2010-15685 and the ESF EUROCORES Program
EuroGENESIS through the MICINN grant EUI2009-04167. AIK thanks Maria
Lugaro and Joelene Buntain for help with setting up the
nucleosynthesis code that reads in tables. AIK is grateful for the
support of the NCI National Facility at the ANU. RL would like to
thank James deBoer for the in-depth discussions about properties of
\nuc{26}{Mg} levels and the \Nepa reactions.
\section{Introduction}
Learning effective data representations is crucial to the success of any machine learning model. Recent years have seen a surge in algorithms for unsupervised representation learning that leverage the vast amounts of unlabeled data~\cite{chen2020simple,gidaris2018unsupervised,lee2017unsupervised,zhang2019aet,zhan2020online}. In such algorithms, an auxiliary learning objective is typically designed to produce generalizable representations that capture some higher-order properties of the data. The assumption is that such properties could potentially be useful in (supervised) downstream tasks, which may have fewer annotated training samples. For example, in~\cite{noroozi2016unsupervised,santa2018visual}, the pre-text task is to solve patch jigsaw puzzles, so that the representations learned could potentially capture the natural semantic structure of images. Other popular auxiliary objectives include video frame prediction~\cite{oord2018representation}, image coloring~\cite{zhang2016colorful}, and deep clustering~\cite{caron2018deep}, to name a few.
Among the auxiliary objectives typically used for representation learning, one that has gained significant momentum recently is that of contrastive learning, which is a variant of the standard noise-contrastive estimation (NCE)~\cite{gutmann2010noise} procedure. In NCE, the goal is to learn data distributions by classifying the unlabeled data against random noise. However, recently developed contrastive learning methods learn representations by designing objectives that capture data invariances. Specifically, instead of using random noise as in NCE, these methods transform data samples to \emph{sets of samples}, each set consisting of transformed variants of a sample, and the auxiliary task is to classify one set (positives) against the rest (negatives). Surprisingly, even by using simple data transformations, such as color jittering, image cropping, or rotations, these methods are able to learn superior and generalizable representations, sometimes even outperforming supervised learning algorithms in downstream tasks (e.g., CMC~\cite{tian2019contrastive}, MoCo~\cite{chen2020improved,he2020momentum}, SimCLR~\cite{chen2020simple}, and BYOL~\cite{grill2020bootstrap}).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{AAAI2022/figures/teaser_v7.pdf}
\caption{An illustration of our \emph{Max-Margin Contrastive Learning} framework. For every positive example, we compute a weighted subset of (hard) negatives via computing a discriminative hyperplane by solving an SVM objective. This hyperplane is then used in learning to maximize the similarity between the representations of the positives and minimize the similarity between the representations of the positives against the negatives. The negatives in the figure are actual ones selected by our scheme for the respective positive.}
\vspace*{-0.5cm}
\label{fig: teaser_fig}
\end{figure}
Typically, contrastive learning methods use the NCE-loss
for the learning objective, which is usually a logistic
classifier separating the positives from the negatives. However, as is often found in NCE algorithms, the negatives should be \emph{close} in distribution to the positives for the learned representations to be useful -- a criterion that often demands a large number of negatives in practice (e.g., 16K in SimCLR~\cite{chen2020simple}). Further, standard contrastive learning approaches make the implicit assumption that the positives and negatives belong to distinct classes in the downstream task~\cite{arora2019theoretical}. This requirement is hard to enforce in an unsupervised training regime and defying this assumption may hurt the downstream performance due to beneficial discriminative cues being ignored.
In this paper, we explore alternative formulations for contrastive learning beyond the standard logistic classifier. Rather than contrasting the positive samples against all the negatives in a batch, our key insight is to design an objective that: (i) selects a suitable subset of negatives to be contrasted against, and (ii) provides a means to relax the effect of false negatives on the learned representations. Fig.~\ref{fig: teaser_fig} presents an overview of the idea. A natural objective in this regard is the classical support vector machine (SVM), which produces a discriminative hyperplane with the maximum margin separating the positives from the negatives. Inspired by SVMs, we propose a novel objective, \emph{max-margin contrastive learning} (MMCL), to learn data representations that maximize the SVM decision margin. MMCL brings in several benefits to representation learning. For example, the \emph{kernel trick} allows for the use of rich non-linear embeddings that could capture desirable data similarities. Further, the decision margin is directly related to the support vectors, which form a weighted data subset. The ability to use slack variables within the SVM formulation allows for a natural control of the influence of false negatives on the representation learning setup.
A straightforward use of the MMCL objective could be practically challenging. This is because SVMs involve a constrained quadratic optimization problem, solving which exactly could dramatically increase the training time when used within standard deep learning models. To this end, inspired by coordinate descent algorithms, we propose a novel reformulation of the SVM objective using the assumptions typically used in contrastive learning setups. Specifically, we propose to use a single positive data sample to train the SVM against the negatives -- a situation for which efficient approximate solutions can be obtained for the discriminative hyperplane. Once the hyperplane is obtained, we propose to use it for representation learning. Thus, we formulate an objective that uses this learned hyperplane to maximize the classification margin between the remaining positives and the negatives. To demonstrate the empirical benefits of our approach to unsupervised learning, we replace the logistic classifier from prior contrastive learning algorithms with the proposed MMCL objectives. We present experiments on standard benchmark datasets; our results reveal that using our max-margin objective leads to {faster convergence}, needs {far fewer negatives} than prior approaches, and produces representations that are {better generalizable} to several downstream tasks, including transfer learning for many-shot recognition, few-shot recognition, and surface normal estimation.
Below, we summarize the key contributions of this work:
\begin{itemize}
\item We propose a novel contrastive learning formulation using SVMs, dubbed \emph{max-margin contrastive learning}.
\item We present a novel simplification of the SVM objective using the problem setup commonly used in contrastive learning -- this simplification allows deriving efficient approximations for the decision hyperplane.
\item We explore two approximate solvers for the SVM hyperplane: (i) using projected gradient descent and (ii) closed-form using truncated least squares.
\item We present experiments on standard computer vision datasets such as ImageNet-1k, ImageNet-100, STL-10, CIFAR-100, and UCF101, demonstrating superior performances against state of the art, while requiring only smaller negative batches. Further, on a wide variety of transfer learning tasks, our pre-trained model shows better generalizability than competing approaches.
\end{itemize}
\section{Related Works}
While the key ideas in contrastive learning are classical \cite{becker1992self,gutmann2010noise,hadsell2006dimensionality}, it has recently become very popular due to its applications in self-supervised learning. Arguably, objectives based on contrastive learning have outperformed several hand-designed pre-text tasks \cite{doersch2015unsupervised,gidaris2018unsupervised,larsson2016learning,noroozi2016unsupervised,zhang2016colorful}.
Apart from visual representation learning, the idea of contrastive learning is quickly proliferating into several other subdomains in machine learning, including video understanding~\cite{NEURIPS2020_3def184a}, graph representation learning \cite{you2020graph,sun2020infograph}, natural language processing \cite{logeswaran2018an}, and learning audio representations \cite{saeed2020contrastive}.
In contrastive predictive coding~\cite{oord2018representation}, which is one of the first works to apply contrastive learning for self-supervised learning, the noise-contrastive loss was re-targeted for representation learning via the pre-text task of future prediction in sequences. It is often empirically seen that the quality of the negatives to be contrasted against has a strong influence on the effectiveness of the representation learned. To this end, for visual representation learning tasks, SimCLR \cite{chen2020simple,chen2020big} proposed a framework that uses a bank of augmentations to generate positives and negatives. As the number of negatives plays a crucial role in NCE, many approaches also make use of a memory bank \cite{chen2020improved,he2020momentum,misra2020self,zhuang2019local} to enable efficient bookkeeping of the large batches of negatives. Other contrastive learning objectives include: clustering \cite{caron2018deep,caron2020unsupervised,PCL}, predicting the representations of augmented views \cite{grill2020bootstrap}, and learning invariances~\cite{tian2019contrastive,Xiao2020WhatSN}. The lack of access to class labels in contrastive learning can lead to incorrect learning; e.g., due to false negatives. Recent works have attempted to tackle this issue via avoiding \emph{sampling bias} \cite{chuang2020debiased} and adjusting the contrastive loss for the impact of false negatives \cite{robinson2021contrastive,huynh2020boosting,kalantidis2020hard,Iscen2018MiningOM}. In comparison to these methods that make adjustments to the NCE loss, we propose an alternative way to view contrastive learning through the lens of max-margin methods using support vector machines; allowing for an amalgamation of the rich literature of SVMs with modern deep unsupervised representation learning approaches.
A key idea in our setup is to view the support vectors as hard negatives for contrastive learning via maximizing the decision margin. Conceptually, this idea is reminiscent of hard-negative mining used in classical supervised learning setups, such as deformable parts models~\cite{felzenszwalb2009object}, triplet-based losses~\cite{schroff2015facenet}, and stochastic negative mining approaches~\cite{reddi2019stochastic}. However, different from these methods, we explore self-supervised losses in this paper, which require novel reformulations of max-margin objectives for making the setup computationally tractable. Our proposed approximations to MMCL result in a one-point-against-all SVM classifier, which is similar to exemplar-SVMs~\cite{malisiewicz2011ensemble}; however, rather than learning a bank of classifiers for specific tasks, our objective is to learn embeddings that are generalizable and useful for other tasks.
\section{Preliminaries}
In this section, we review our notation and visit the principles of contrastive learning, support vector machines, and their potential connections, that will set the stage for presenting our approach. We use lower-case for single entities (such as $x$), and upper-case (e.g., ${X}$) for matrices (synonymous with a collection of entities). We use lower-case bold-font (e.g., $\bm{z}$) for vectors. For a function, say $f$, defined on vectors, we sometimes overload it as $f(X)$, by which we mean applying $f$ to each entity in $X$.
\subsection{Contrastive Learning}
Suppose $\mathcal{D}=\set{x_i}_{i=1}^N$ is a given unlabeled dataset, where each $x_i\in\reals{d}$. Let $\mathcal{T}\!:\reals{d}\to\reals{d}$ denote a random cascade of data transformation maps (e.g., random image crops and rotations).
Standard contrastive learning methods use $\mathcal{T}$ to augment $\mathcal{D}$, thereby producing sets of data points $\mathcal{D}'=\set{{X}_1,{X}_2,\cdots, {X}_N}$, where each ${X}$ is a (potentially infinite) set of transformed data samples obtained via randomly applying $\mathcal{T}$ on each $x$, i.e., ${X}=\set{\mathcal{T}(x)}$. The task of representation learning then amounts to minimizing an objective that maximizes the similarity between points from within a set against data points from other sets -- essentially learning the data manifold in some representation space, with the hope that such representations are useful in subsequent tasks.
Suppose $\bm{f}_\theta\!:\reals{d}\to\reals{d'}$ denotes a function mapping a data point $x$ to its representation, i.e., $\bm{f}_\theta(x)$. Then, inspired by noise-contrastive estimation~\cite{gutmann2010noise}, contrastive learning methods learn the function $\bm{f}_\theta$ via minimizing the empirical logistic loss (with respect to $\theta$):
\begin{equation}
-\!\!\sum\limits_{{X}\in B}\!\!\log\frac{g(\bm{f}_\theta(x), \bm{f}_\theta(x^+))}{g(\bm{f}_\theta(x),\bm{f}_\theta(x^+))+ \sum_{x^{-}\in B\backslash{X}}g(\bm{f}_\theta(x),\bm{f}_\theta(x^{-}))},
\label{eq:1}
\end{equation}
over batches $B\subset\mathcal{D}'$ with positives $\set{x,x^+}\subset{X}\in B$, negatives $x^{-}\in{X}'$, where ${X}'\in B\backslash{X}$, and using a suitable similarity function $g$ (e.g., a learnable projection-head followed by an exponentiated-cosine distance as in SimCLR~\cite{chen2020simple}).
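For a single anchor, the loss in~\eqref{eq:1} can be sketched in a few lines. A minimal NumPy illustration (our naming, not the authors' implementation), taking $g$ to be an exponentiated cosine with temperature $\tau$, as in SimCLR-like setups:

```python
import numpy as np

def nce_loss(z, z_pos, Z_neg, tau=0.5):
    """Logistic (InfoNCE-style) contrastive loss for one anchor.

    z:     (d,)   representation f_theta(x) of the anchor
    z_pos: (d,)   representation of the positive view x+
    Z_neg: (n, d) representations of the negatives x-
    Uses g(u, v) = exp(cos(u, v) / tau).
    """
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    pos = np.exp(cos(z, z_pos) / tau)
    neg = sum(np.exp(cos(z, zn) / tau) for zn in Z_neg)
    return -np.log(pos / (pos + neg))
```

The loss shrinks as the anchor aligns with its positive and moves away from the negatives, which is the behavior the batch-level objective averages over.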
As alluded to earlier, the contrastive learning loss in~\eqref{eq:1} poses several challenges from a representation learning perspective. For example, in the absence of any form of supervision, this learning objective needs to derive the training signals from the (thus far learned) representations of the negative pairs, which could be very noisy; thereby requiring very large negative batches. However, having such large batches increases the chances of \emph{class collisions}, i.e., positives and negatives belonging to the same class in a subsequent downstream task; such collisions have been shown to be detrimental~\cite{arora2019theoretical}. Unlike approaches that attempt to circumvent this issue, such as~\cite{huynh2020boosting,robinson2021contrastive,chuang2020debiased}, we seek to explore alternative contrastive learning objectives that are less sensitive to the issues discussed above, using formulations that maximize the discriminative margin between the positives and the negatives.
Note that instead of the InfoNCE loss in~\eqref{eq:1} for contrasting the positives against the negatives, an alternative is the hinge loss~\cite{arora2019theoretical,chen2020simple}, which minimizes (with respect to $\theta$):
\begin{equation}
\sum_{x, x^+, x^{-}} \hinge{\ t - \dist(\bm{f}_\theta(x), \bm{f}_\theta(x^+)) + \dist(\bm{f}_\theta(x), \bm{f}_\theta(x^{-}))},\notag
\end{equation}
where $\hinge{\ .\ }=\max(0,\ .)$ denotes the hinge loss and $t$ is a margin hyperparameter that must be tuned manually. Our proposed scheme avoids the need for this hyperparameter as the margin is an objective of the optimization.
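If $\dist$ is read as a similarity score (e.g., a cosine), the hinge objective above can be sketched as follows; a minimal illustration with names of our choosing, not the authors' implementation:

```python
import numpy as np

def hinge_contrastive_loss(z, z_pos, Z_neg, t=1.0):
    """Hinge-style contrastive loss with a manually tuned margin t.

    Similarity is the inner product of L2-normalized features; the loss
    vanishes once every negative similarity sits at least t below the
    positive similarity, illustrating why t must be tuned by hand.
    """
    def sim(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    s_pos = sim(z, z_pos)
    return sum(max(0.0, t - s_pos + sim(z, zn)) for zn in Z_neg)
```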
\begin{figure}
\centering
\includegraphics[width=8.5cm,trim={0.5cm 0cm 0cm 0cm},clip]{AAAI2022/figures/neurips_mmcl_crop.pdf}
\caption{An illustration of our MMCL approach. Given a positive point ($+$) and a set of negatives ${Y}^{-}$, MMCL learns the parameters $\theta$ of a backbone network $f_\theta$ via extracting features $z^+$ and $Z^-$ using a view $x^+$ of the positive $+$ and the negatives $Y^-$, respectively. These features are then used in an SVM with an RKHS kernel $\mathcal{K}$ to find a decision hyperplane parameterized by $\alpha_x$ and $\bm{\alpha}_Y$. Next, MMCL uses the remaining positive views $z$, maximizing the similarity between $z$ and $z^+$ while minimizing the similarity between $z$ and $Z^-$, thereby achieving contrastiveness. The ensuing MMCL loss is then backpropagated through the pipeline, thereby learning $\theta$, which is the goal.}
\label{fig:representative_figure}
\end{figure}
\subsection{Support Vector Machines}
Given two sets ${X}^+$ and ${X}^-$ with labels $y_x=1$ if $x\in{X}^+$ and $y_x=-1$ otherwise, the soft-margin SVM solves the objective:
\begin{align}
&\min_{\bm{w}, b, \xi\geq 0} \frac{1}{2}\enorm{\bm{w}}^2 + C\sum_{x}\xi_x \notag\\
\text{ s.t. } y_x&(\bm{w}^\top x + b) \geq 1-\xi_x,\ \forall x\in{X}^+ \cup {X}^-,
\label{eq:psvm}
\end{align}
where $\bm{w}$ denotes the discriminative hyperplane separating the two classes, $b$ is a bias, and $\xi_x$ is a per-data-point non-negative slack with a penalty $C$ that balances between misclassification of \emph{hard} points and maximizing the decision margin. It is well-known that $1/\enorm{\bm{w}}$ captures the margin between the positives and the negatives, and thus the objective in~\eqref{eq:psvm} attempts to find the hyperplane $\bm{w}$ that maximizes this margin. The Lagrangian dual of~\eqref{eq:psvm} is given by:
\begin{align}
\min_{0\leq \bm{\alpha}\leq C, \bm{\alpha}^\top\bm{y}=0} \tfrac{1}{2}\bm{\alpha}^{\top}\bm{K}({X}^+,{X}^-)\bm{\alpha} - \bm{\alpha}^{\top}\bm{1},
\label{eq:dsvm}
\end{align}
where $\bm{K}\in\spd{|{X}^+\cup{X}^-|}$ denotes a symmetric positive definite kernel matrix, the $ij$-th element of which is given by $\bm{K}_{ij}= y_{x_i}y_{x_j}\mathcal{K}(x_i, x_j)$ for some suitable RKHS kernel $\mathcal{K}$ and $x_i,x_j\in{X}^+\cup{X}^-$. As the formulations in~\eqref{eq:psvm} and~\eqref{eq:dsvm} are convex, a solution $\bm{\alpha}$ to~\eqref{eq:dsvm} provides the exact decision hyperplane for~\eqref{eq:psvm} and is given by:
\begin{equation}
\bm{w}(.) = \sum_{x\in{X}^+\cup{X}^-}\!\!\!\!\!\! \alpha_{x}y_{x}\mathcal{K}(x, .).
\label{eq:hyperplane}
\end{equation}
As the bias term $b$ in~\eqref{eq:psvm} is not essential for the details to follow, we will not need the exact form of this term and will use $\bm{w}(.)$ to refer to the decision hyperplane.
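As a concrete illustration of the dual in~\eqref{eq:dsvm} and the hyperplane in~\eqref{eq:hyperplane}, the following numpy sketch solves a toy problem by projected gradient. For brevity we drop the bias $b$ and its equality constraint $\bm{\alpha}^\top\bm{y}=0$ (a simplifying assumption, matching the bias-free treatment above), so only the box constraint is enforced; the linear kernel and all names are illustrative:

```python
import numpy as np

def dual_svm(X, y, C=10.0, iters=2000, lr=0.01):
    """Projected-gradient solver for the (bias-free) dual SVM.

    Minimizes 0.5*a^T K a - a^T 1 over 0 <= a <= C, where
    K_ij = y_i y_j k(x_i, x_j) with a linear kernel k.
    """
    K = (y[:, None] * y[None, :]) * (X @ X.T)   # signed kernel matrix
    alpha = np.zeros(len(X))
    for _ in range(iters):
        grad = K @ alpha - 1.0
        alpha = np.clip(alpha - lr * grad, 0.0, C)  # project onto the box
    return alpha

def decision(X, y, alpha, x_new):
    """w(x_new) from the dual solution, bias omitted."""
    return (alpha * y) @ (X @ x_new)
```

On linearly separable data, the sign of `decision` recovers the class, and the nonzero entries of `alpha` mark the support vectors.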
\section{Proposed Method}
In this section, we connect the approaches described above, deriving our MMCL formulation. An overview of our approach is illustrated in Figure~\ref{fig:representative_figure}.
\subsection{Contrastive Learning Meets SVMs}
The advantages of SVM listed in the last section may seem worthwhile from a contrastive representation learning perspective, and suggest directly using SVM instead of the logistic classifier in~\eqref{eq:1}. Formally, using a soft-constraint variant of~\eqref{eq:psvm} with a margin $t$, the optimization problem in~\eqref{eq:1} can be re-written as:
\begin{align}
\label{eq:2}
&\min\limits_\theta\sum\limits_{B \subset D'}\sum\limits_{X \in B} \min\limits_{\bm{w}_X}\Bigl(\tfrac12\enorm{\bm{w}_X}^2 +
\hinge{t - \ip{\bm{w}_X}{\bm{f}_\theta(X)}} + \notag\\
&\qquad\qquad\qquad\sum\limits_{X^- \in B\backslash X} \hinge{t + \ip{\bm{w}_X}{\bm{f}_\theta(X^-)}}\Bigr),
\end{align}
where $X$ and $X^-$ denote the sets of positives and negatives, respectively, and $\bm{w}_X$ captures a max-margin hyperplane separating them.\footnote{By $\bm{f}_\theta(\Lambda)$ we mean applying $\bm{f}_\theta$ to each item in the set $\Lambda$.} The inner optimization over \emph{each} $\bm{w}_X$ is what translates into training an SVM. We augment this inner optimization problem in two ways: (i) by including slack variables to model a soft margin (as in~\eqref{eq:psvm}), which results in a hyperparameter $C$; and (ii) by permitting an additional nonlinear feature map $\phi$ so that we may use $\phi(\bm{f}_\theta)$ (as in~\eqref{eq:dsvm}) in~\eqref{eq:2}. Using these changes, a contrastive learning formulation via maximizing the SVM classification margin may be derived (by rewriting~\eqref{eq:2}) as:
\begin{align}
&\min_{\theta}\ {\cal L}(\theta) := \sum_{B\subset\mathcal{D}'}\sum_{X\in B} {\bm{\alpha}_X^*}^{\top}\!\bm{K}\left(\bm{f}_\theta({X}), \bm{f}_\theta(B\backslash{X})\right)\bm{\alpha}_X^*,\label{eq:5}\\
&\text{s.t.}\quad\label{eq:6}\bm{\alpha}_X^* = \argmin\limits_{\substack{0\leq \bm{\alpha}\leq C,\\\bm{\alpha}^\top\bm{y}=0}}\ \tfrac12\bm{\alpha}^{\top}\bm{K}(\bm{f}_\theta({X}), \bm{f}_\theta(B\backslash{X}))\bm{\alpha}-\bm{\alpha}^{\top}\bm{1},
\end{align}
where $\bm{K}({Z}^{+}, {Z}^{-})=\left[\begin{array}{cc}\mathcal{K}({Z}^{+},{Z}^{+}) & -\mathcal{K}({Z}^{+}, {Z}^{-})\\ -\mathcal{K}({Z}^{-},{Z}^{+}) & \mathcal{K}({Z}^{-}, {Z}^{-})\end{array}\right]$
is a kernel matrix induced by the RKHS kernel $\mathcal{K}(\bm{z},\bm{z}')=\ip{\phi(\bm{z})}{\phi(\bm{z}')}$. While SVMs have been widely studied in the machine learning literature~\cite{smola1998learning,cortes1995support}, our idea of linking the fields of SVMs and Contrastive Learning has not been explored before.
In~\eqref{eq:6}, we use the so-far trained $\bm{f}_\theta$ to produce $\bm{\alpha}_X^*$ defining the decision margin, which is then used in~\eqref{eq:5} to update $\theta$ while striving to maximize the margin; doing so pushes the support vectors from the positive and negative classes away from each other. Unfortunately, despite its intuitive simplicity, the formulation in \eqref{eq:5}-\eqref{eq:6} is impractical to use directly. Indeed, it is a challenging bilevel optimization problem~\cite{gould2016differentiating,amos2017optnet,wang2018video}, and using an iterative SVM solver for the lower problem~\eqref{eq:6} within a deep learning framework can incur a significant slowdown.
\textbf{Remarks.} There are several aspects of the SVM solution that are beneficial from a contrastive learning perspective: (i) the dual solution $\bm{\alpha}$ is usually sparse\footnote{When $\mathcal{K}$ is chosen appropriately.}, and its active dimensions identify the data points that are the support vectors defining the decision margin; (ii) the slack regularization controls the misclassification rate and allows tuning the performance against class collisions, similar to~\cite{chuang2020debiased}; (iii) the entries of $\bm{\alpha}_{X}$ equal $C$ for misclassified points, which are perhaps hard or false negatives, and thus our formulation allows identifying these points and mitigating their effects; and (iv) the kernel function puts rich RKHS similarities at our disposal, allowing the use of, for example, novel structures within the learned representations (e.g., trees, graphs, etc.).
\subsection{Max-Margin Contrastive Learning}
\label{sec:mmcl}
The primary method for solving~\eqref{eq:5} is stochastic gradient descent (SGD), which computes stochastic gradients over the batches $B \subset \mathcal{D}'$ via backpropagation while iteratively updating $\theta$. However, as has been previously observed for bilevel optimization~\citep{amos2017optnet,gould2016differentiating}, even obtaining a single stochastic gradient requires solving the lower problem~\eqref{eq:6} exactly, which is impractical. Our key idea to overcome this challenge is to introduce a ``\emph{sample splitting}'' trick inspired by coordinate descent, which helps to reduce the computational burden. Subsequently, we make additional approximations that lead to our final training procedure.
Without loss of generality, assume that $X$ consists of the pair $(x, x^+)$; the same idea applies if we permit multiple such positive pairs in $X$. Instead of solving~\eqref{eq:6} using all the ``coordinates'', we \emph{split} the pair $(x, x^+)$ into two parts: (i) $x^+$, which is used to perform coordinate descent on~\eqref{eq:6}; and (ii) $x$, which is used to perform the SGD step for~\eqref{eq:5}. This splitting aligns well with contrastive learning, where one often uses only a pair of positives that must be contrasted against the negatives.
The following proposition states how we perform part (i) of our split to estimate $\bm{\alpha}_X^*$, which we will henceforth denote as $\bm{\alpha}_{x}$ to indicate its dependence on the split sample.
\begin{prop}
\label{prop:npt}
Let $(x^+, Y^-)$ be a tuple consisting of a positive point $x^+\in\reals{d}$ and a set of $n$ negative points $Y^-\in\reals{d\times n}$. Further, let $\bm{z}^+=\bm{f}_\theta(x^+)$ and ${\bm{Z}}^{-}=\bm{f}_\theta(Y^-)$. Suppose $k_{xx},\bm{k}_{xY},$ and $\bm{K}_{YY}$ denote $\mathcal{K}(\bm{z}^+,\bm{z}^+)$, $\mathcal{K}(\bm{z}^+, {\bm{Z}}^{-})$, and $\mathcal{K}({\bm{Z}}^{-}, {\bm{Z}}^{-})$, respectively. Consider the SVM decision function for a new point $\bm{z}$ given by
\begin{equation}
\label{eq:7}
\bm{w}(\bm{z}) = \bm{\alpha}_x^{\top}\left(\mathcal{K}(\bm{z}^+, \bm{z})\bm{1} - \mathcal{K}({\bm{Z}}^{-}, \bm{z})\right).
\end{equation}
Let $\bm{\Delta}=\bm{1}\bm{1}^{\!\top} + \bm{K}_{YY} - \bm{k}_{xY}\bm{1}^{\!\top} - \bm{1}\bm{k}_{xY}^{\top}$, and let $P_{[0,C]}$ denote projection onto the interval $[0,C]$. By suitably selecting $\bm{\alpha}_x$ in~\eqref{eq:7} we then obtain the following approximate max-margin solutions:
\begin{enumerate}[(i)]
\setlength{\itemsep}{0pt}
\item (block) coordinate minimization: $\bm{\alpha}_{x}^{\text{cm}}=\argmin_{0\le \bm{\alpha} \le C} g(\bm{\alpha}) := \tfrac{1}{2}\bm{\alpha}^{\top}\bm{\Delta}\bm{\alpha} -2\bm{\alpha}^{\top}\bm{1}$,
\item m-step projected gradient ($\PGDMMCL$): $\bm{\alpha}_{x}^{\text{pg}} := \bm{\alpha}_{m}= P_{[0,C]}(\bm{\alpha}_{m-1}-\eta(\bm{\Delta}\bm{\alpha}_{m-1}-2\bm{1}))$, for some initial guess $\bm{\alpha}_0\in [0,C]^n$ and step size $\eta>0$, and
\item greedy truncated least-squares ($\INVMMCL$): $\bm{\alpha}_{x}^{\text{ls}}=P_{[0,C]}(2\inv{\bm{\Delta}}\bm{1})$.
\end{enumerate}
The various solutions satisfy $g(2\bm{\Delta}^{-1}\bm{1}) \le g(\bm{\alpha}_{x}^{\text{cm}}) \le \min\{g(\bm{\alpha}_{x}^{\text{pg}}), g(\bm{\alpha}_{x}^{\text{ls}})\}$. Moreover, $g(\bm{\alpha}_{x}^{\text{pg}})-g(\bm{\alpha}_{x}^{\text{cm}})=O\left(\exp\left(-m\frac{\lambda_{\min}(\bm{\Delta})}{\lambda_{\max}(\bm{\Delta})}\right)\left(g(\bm{\alpha}_0)-g(\bm{\alpha}_{x}^{\text{cm}})\right)\right)$.
\end{prop}
\begin{proof}
Choice (i) is obvious. To obtain (ii) and (iii), consider the following dual SVM formulation:
\begin{equation*}
\min_{0\leq\bm{\alpha}_{{Y}}\leq C} \frac12 \left[\!\!\begin{array}{c}\alpha_x\\\bm{\alpha}_{{Y}}\end{array}\!\!\right]^{\top} \left[\!\!\begin{array}{cc}k_{xx} &-\bm{k}_{xY}^{\top}\\-\bm{k}_{xY}&\bm{K}_{YY}\end{array}\!\!\right]\left[\!\!\begin{array}{c}\alpha_x\\\bm{\alpha}_{{Y}}\end{array}\!\!\right] - \left[\alpha_x + \bm{\alpha}_{{Y}}^{\top}\bm{1}\right],
\end{equation*}
\noindent where $\alpha_x=\bm{\alpha}_{{Y}}^\top\bm{1}$. Substituting for $\alpha_x$ (and using $k_{xx}=1$, which holds for normalized kernels such as the RBF), we obtain\footnote{Note that $\alpha_x$ is the scalar Lagrangian dual associated with the data point $\bm{z}$, while $\bm{\alpha}_{x}$ is the vector of all dual variables associated with the batch $B$ when considering $x$ as the positive.}:
\begin{align*}
\min_{0\leq\bm{\alpha}_{{Y}}\leq C} g(\bm{\alpha}_{{Y}}) = \tfrac{1}{2}\bm{\alpha}_{{Y}}^{\top}\bm{\Delta}\bm{\alpha}_{{Y}} - 2\bm{\alpha}_{{Y}}^{\top}\bm{1}.
\end{align*}
Setting $\nabla g(\bm{\alpha}_{{Y}})=0$, we obtain the unconstrained least-squares solution $2\bm{\Delta}^{-1}\bm{1}$, which we greedily truncate to lie in the interval $[0,C]$ to obtain (iii). Solution (ii) runs $m$ iterations of projected gradient descent, which enjoys a linear convergence rate and thus rapidly approaches the optimal solution $\bm{\alpha}_{x}^{\text{cm}}$ at the well-known rate governed by the condition number $\lambda_{\max}(\bm{\Delta})/\lambda_{\min}(\bm{\Delta})$.
\end{proof}
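The two practical solvers in Prop.~\ref{prop:npt} are simple enough to state in a few lines of numpy. The sketch below is illustrative: the eigenvalue-based step size and the optional $\beta$-regularization in the least-squares variant are our additions (the latter following the practical considerations discussed later), not part of the proposition itself:

```python
import numpy as np

def build_delta(k_xY, K_YY):
    """Delta = 1 1^T + K_YY - k_xY 1^T - 1 k_xY^T from Prop. 1."""
    one = np.ones_like(k_xY)
    return np.outer(one, one) + K_YY - np.outer(k_xY, one) - np.outer(one, k_xY)

def alpha_pgd(Delta, C, m=200, eta=None):
    """m-step projected gradient (the PGD variant)."""
    if eta is None:  # step size from the top eigenvalue (our choice)
        eta = 1.0 / max(np.linalg.eigvalsh(Delta).max(), 1e-8)
    alpha = np.zeros(len(Delta))
    for _ in range(m):
        alpha = np.clip(alpha - eta * (Delta @ alpha - 2.0), 0.0, C)
    return alpha

def alpha_inv(Delta, C, beta=0.0):
    """Greedy truncated least-squares (the INV variant)."""
    n = len(Delta)
    sol = 2.0 * np.linalg.solve(Delta + beta * np.eye(n), np.ones(n))
    return np.clip(sol, 0.0, C)
```

Both routines return an approximate $\bm{\alpha}_x$ that can be plugged into the decision function~\eqref{eq:7}.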
Using Prop.~\ref{prop:npt}, we can reformulate the contrastive learning objective in~\eqref{eq:5} as maximizing the margin in classifying the other part of the split, namely the positive point $x$, correctly against the negatives. Here, we introduce an additional simplification by rewriting the margin in terms of the separation between $x$ and $Y^-$, using the decision hyperplane~\eqref{eq:7}. Let $\bm{\alpha}_x$ denote the solution obtained from Proposition~\ref{prop:npt} using the positive point $x$. Then, we rewrite~\eqref{eq:5} into our proposed \emph{max-margin contrastive learning} objective as:
\begin{equation}
\min_{\theta}\hspace{-7pt}\sum_{\substack{(x,x^+)\sim B\in\mathcal{D}'\\ Y^-=B\backslash(x,x^+)}}\hspace{-18pt}\bm{\alpha}_{x}^{\top} \big[\mathcal{K}\left(\bm{f}_\theta(Y^-),\bm{f}_\theta(x)\right) -
\bm{1}\mathcal{K}\left(\bm{f}_\theta(x^+), \bm{f}_\theta(x)\right)\big].
\label{eq:mmcl}
\end{equation}
When optimizing for $\theta$, \eqref{eq:mmcl} seeks a representation map $\bm{f}_\theta$ that increases the similarity between the positives $(x, x^+)$ and the dissimilarity between $x$ and all the points in $Y^-$, achieving a similar effect as the standard contrastive learning objective in~\eqref{eq:1}, but with the advantages of choosing kernels, selecting the support vectors that matter to the decision margin, and finding points that are perhaps hard negatives (those at the upper bound of the box constraints), all in one formulation. Note that using the exact solver (i) in Prop.~\ref{prop:npt} turned out to be prohibitively expensive in standard contrastive learning pipelines, and thus we do not use that variant in our experiments. In Algorithm~\ref{alg:mmcl}, we provide pseudocode highlighting the key steps of our approach.
\begin{algorithm}[!t]
\caption{\label{alg:mmcl} Pseudocode for MMCL}
\begin{algorithmic}
\STATE \textbf{Input:} Dataset $\mathcal{D}$, batch size $N$, encoder $\bm{f}_\theta$, slack- \\penalty $C$, kernel $\mathcal{K}$, augmentation map $\mathcal{T}$
\FOR{minibatch $B = \{\bm x_k\}_{k=1}^N\sim\mathcal{D}$}
\STATE $\texttt{loss} := 0$
\STATE \textbf{for} $k = 1, \ldots, N$ \textbf{do}
\STATE $~~~~$draw $t_1 \!\sim\! \mathcal{T}$, $t_2 \!\sim\! \mathcal{T}$
\STATE $~~~~$\textcolor{gray}{\# get embeddings for positives and negatives}
\STATE $~~~~$$\bm{z}^+ = \bm{f}_\theta(t_1(\bm x_k)), \bm{z} = \bm{f}_\theta(t_2(\bm x_k))$
\STATE $~~~~$${\bm{Z}}^{-} = \bm{f}_\theta(t_1(B \setminus \bm x_k)\cup t_2(B \setminus \bm x_k))$
\STATE $~~~~$\textcolor{gray}{\# calculate kernel similarities}
\STATE $~~~~$$\bm{k}_{xY} = \mathcal{K}(\bm{z}^+, {\bm{Z}}^{-})$, $\bm{K}_{YY} = \mathcal{K}({\bm{Z}}^{-}, {\bm{Z}}^{-})$
\STATE $~~~~$\textcolor{gray}{\# Solve SVM}
\STATE $~~~~$$\bm{\Delta}=\bm{1}\bm{1}^{\!\top} + \bm{K}_{YY} - \bm{k}_{xY}\bm{1}^{\!\top} - \bm{1}\bm{k}_{xY}^{\top}$
\STATE $~~~~$$\bm{\alpha}_{x} = $ \texttt{svm\_solver}($\bm{\Delta}$, $C$)~~\textcolor{gray} {\# using PGD or INV}
\STATE $~~~~$\textcolor{gray}{\# calculate the loss}
\STATE $~~~~$\texttt{loss} += $ \bm{\alpha}_x^{\top}\left(\mathcal{K}({\bm{Z}}^{-}, \bm{z}) - \mathcal{K}(\bm{z}^+, \bm{z})\bm{1}\right)$
\STATE \textbf{end for}
\STATE update the model to minimize the loss
\ENDFOR
\end{algorithmic}
\end{algorithm}
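For one positive pair, the body of Algorithm~\ref{alg:mmcl} amounts to the following numpy sketch. This is illustrative only: we use the RBF kernel and the PGD solver, precomputed embeddings stand in for $\bm{f}_\theta$, and in the real pipeline the loss is backpropagated through the kernel terms but not through $\bm{\alpha}$:

```python
import numpy as np

def rbf(A, B, sigma2=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma2))

def mmcl_loss(z, z_pos, Z_neg, C=100.0, m=300):
    """MMCL loss for one anchor z, positive view z_pos, negatives Z_neg."""
    k_xY = rbf(z_pos[None, :], Z_neg).ravel()   # K(z+, Z-)
    K_YY = rbf(Z_neg, Z_neg)                    # K(Z-, Z-)
    one = np.ones_like(k_xY)
    Delta = np.outer(one, one) + K_YY - np.outer(k_xY, one) - np.outer(one, k_xY)
    # solve the SVM subproblem by projected gradient (stop-gradient in practice)
    eta = 1.0 / max(np.linalg.eigvalsh(Delta).max(), 1e-8)
    alpha = np.zeros(len(Z_neg))
    for _ in range(m):
        alpha = np.clip(alpha - eta * (Delta @ alpha - 2.0), 0.0, C)
    # loss = alpha^T (K(Z-, z) - K(z+, z) 1): push negatives away, pull positive in
    return float(alpha @ (rbf(Z_neg, z[None, :]).ravel()
                          - rbf(z_pos[None, :], z[None, :]).ravel() * one))
```

In training, this scalar is accumulated over the batch and minimized with respect to $\theta$; well-separated embeddings yield a negative loss.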
\section{Experiments and Results}
\begin{table*}[h]
\resizebox{\linewidth}{!}{
\begin{tabular}{l|cccccccc|cc|c}
\toprule
& \multicolumn{8}{c}{Many-Shot classification} & \multicolumn{2}{c}{Few-Shot classification} & Surface Normal Est. \\
\textit{Method} & Aircraft & Caltech101 & Cars & CIFAR10 & CIFAR100 & DTD & Flowers & Food & CropDiseases & EuroSat & NYUv2 (Angular error) \\
\midrule
Supervised & 83.5 & 91.01 & 82.61 & 96.39 & 82.91 & 73.30 & 95.50 & 84.60 & 93.09 $\pm$ 0.43 & 88.36 $\pm$ 0.44 & 27.91 \\
\midrule
InsDis~\cite{wu2018unsupervised} & 73.38 & 72.04 & 61.56 & 93.32 & 68.26 & 63.99 & 89.51 & 76.78 & 91.95 $\pm$ 0.44 & 86.52 $\pm$ 0.51 & 27.35 \\
MoCo~\cite{he2020momentum} & 75.61 & 74.95 & 65.02 & 93.89 & 71.52 & 65.37 & 89.45 & 77.28 & 92.04 $\pm$ 0.43 & 86.55 $\pm$ 0.51 & 28.63 \\
PCL-v1~\cite{PCL} & 74.97 & \textbf{87.62} & 73.24 & \textbf{96.35} & 79.62 & 70 & 90.83 & 78.3 & 80.74 $\pm$ 0.57 & 75.19 $\pm$ 0.67 & 33.58\\
PCL-v2~\cite{PCL} & 79.37 & 88.04 & 71.68 & \textbf{96.5} & 80.26 & 71.76 & 92.95 & 80.34 &92.58 $\pm$ 0.44 & 87.94 $\pm$ 0.4 & 28.67 \\
MoCo-v2~\cite{chen2020improved}$^\dagger$ & 82.46 & 82.31 & 85.1 & 96.06 & 72.99 & 69.41 & \textbf{95.62} & 77.19 & 90.01 $\pm$ 0.48 & 88.06 $\pm$ 0.4 & \textbf{24.49} \\
MoCHI~\cite{kalantidis2020hard}$^\dagger$ & 83.03 & 84.45 & 85.49 & 95.68 & 77.07 & 70.85 & 94.8 & 78.9 & 91.93 $\pm$ 0.46 & \textbf{88.26} $\pm$ \textbf{0.4} & 31.75 \\
\midrule
Ours & \textbf{85.38} & \textbf{87.82} & \textbf{89.23} & \textbf{96.24} & \textbf{82.09} & \textbf{73.51} & \textbf{95.24} & \textbf{82.39} & \textbf{93.1} $\pm$ \textbf{0.45} & \textbf{88.75} $\pm$ \textbf{0.4} & \textbf{24.69}\\
\bottomrule
\end{tabular}}
\caption{Transfer learning results. We transfer an ImageNet-pre-trained model (using MMCL) on a range of downstream tasks and datasets. We compare with models pre-trained using a similar batch size and epochs. Results on competing approaches are taken from~\cite{Ericsson2021HowTransfer}. $^\dagger$Models evaluated using publicly available checkpoints.}
\label{tab: transfer_multishot}
\end{table*}
In this section, we systematically study the various components in MMCL, as well as compare performances of MMCL-learned representations for their quality via linear evaluation, and their generalizability on transfer learning tasks.
\noindent \textbf{Visual Representation Learning Experiments.} We base our experimental setup on the popular SimCLR~\cite{chen2020simple} baseline, which is widely used, especially to evaluate the effectiveness of the ``learning loss'' against other factors in a self-supervised algorithm (e.g., data augmentations, use of queues, multiple crops).
We use a ResNet50 backbone, followed by a two-layer MLP as the projection-head, followed by unit normalization. We pretrain our models on ImageNet-1K~\cite{deng2009imagenet} using the LARS optimizer~\cite{you2017large} with an initial learning rate of 1.2 for 100 epochs. We also present results on ImageNet-100~\cite{tian2019contrastive} (a subset of ImageNet-1K) and on smaller datasets such as STL-10~\cite{coates2011analysis} and CIFAR-100~\cite{krizhevsky2009learning}, especially for our ablation studies. We pre-train on ImageNet-100 for 200 epochs in our studies, while pre-training for 400 epochs on the smaller datasets. We use the Adam optimizer with a learning rate of 1e-3 as in~\cite{chuang2020debiased,robinson2021contrastive}. Unless otherwise stated, we use a batch-size of 256 for all ImageNet-1K, CIFAR-100, and STL-10 experiments and 128 for ImageNet-100 experiments. In addition, we also present results on video representation learning using an S3D backbone~\cite{Xie2018RethinkingSF} on the UCF-101~\cite{soomro2012ucf101} dataset, pre-trained using MMCL for 300 epochs.
\\
\noindent\textbf{Hyperparameters:}
We mainly use the RBF kernel for the SVM. For CIFAR-100 experiments, we start with a kernel bandwidth $\sigma^2=0.02$ and increase it by a factor of 10 at 75 and 125 epochs. For STL-10 experiments, we use a kernel bandwidth $\sigma^2=1$. We used $\sigma^2=5$ for ImageNet experiments.
We set the SVM slack regularization $C$ to 100. For the projected gradient descent optimizer for MMCL, we use a maximum of 1000 steps. Additional details are provided in the Appendix.
\noindent\textbf{Practical Considerations:}
Here, we note a few important but subtle technicalities that need to be addressed when implementing MMCL. Specifically, we found that backpropagating gradients through $\bm{\alpha}_Y$ is prohibitively expensive when using PGD iterations, while for the least-squares variant such gradients were found to be detrimental. This is perhaps unsurprising, since $\bm{\alpha}_Y$ involves the term $\inv{\Delta}$. To improve the decision margin, the optimization should drive $\Delta$ toward the identity matrix, i.e., push its off-diagonal elements to zero. However, since $\bm{\alpha}_Y$ uses $\inv{\Delta}$, one could also maximize the margin by making $\Delta$ ill-conditioned, i.e., by pushing the off-diagonal elements toward one. Such a tug-of-war between the gradients can destabilize training. Thus, we found that avoiding any backpropagation through $\bm{\alpha}$ is essential for MMCL to learn useful representations. We also found that a small regularization $\Delta+\beta I$ ($\beta=0.1$) is necessary for learning to begin: initially, the representations can be nearly zero, and thus the kernel matrix may be poorly conditioned.
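The need for this regularization is easy to see: if the representations collapse toward zero, every RBF similarity approaches one, so $\bm{K}_{YY}\approx\bm{1}\bm{1}^{\!\top}$ and $\bm{k}_{xY}\approx\bm{1}$, driving $\Delta$ toward the zero matrix and making the SVM subproblem degenerate. A one-line numpy sketch (function name is ours):

```python
import numpy as np

def regularized_delta(Delta, beta=0.1):
    """Delta + beta*I keeps the SVM subproblem well conditioned even
    when collapsed representations drive Delta toward the zero matrix."""
    return Delta + beta * np.eye(len(Delta))
```

With $\Delta\approx\bm{0}$, the regularized matrix is simply $\beta I$, which is perfectly conditioned and keeps the least-squares solve finite.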
\subsection{Experiments on Transfer Learning}
Recently, models pretrained using various self-supervised learning approaches have shown impressive performance when transferred to various downstream tasks. In this section, we evaluate MMCL-ImageNet pretrained models on such downstream tasks. For these experiments, we follow the experimental protocol provided in \cite{Ericsson2021HowTransfer}. We evaluate the models in the fine-tuning setting and use the benchmarking scripts provided in~\cite{Ericsson2021HowTransfer} without any modifications.
First, we transfer the MMCL-pretrained backbone model to a collection of many-shot classification datasets used in~\cite{Ericsson2021HowTransfer}, namely FGVC Aircraft, Caltech-101, Stanford Cars, CIFAR-10, CIFAR-100, DTD, Oxford Flowers, and Food-101. The setup uses the pretrained model as the initial checkpoint, attaches a task-specific head to the backbone, and fine-tunes the entire network for the downstream task. These datasets vary widely in content and texture compared to ImageNet images. Further, the benchmark datasets exhibit significant diversity in the number of training images (2K--50K) and the number of classes (10--196). For a fair comparison, we only include results for models trained for a comparable number of epochs and batch sizes. For few-shot experiments, we follow the setup described in~\cite{Ericsson2021HowTransfer} for the Cross-Domain Few-Shot Learning (CD-FSL) benchmark~\cite{guo2020broader}. We evaluate on the Crop-Diseases~\cite{mohanty2016using} and EuroSAT~\cite{helber2019eurosat} datasets for 5-way 20-shot transfer. Finally, we evaluate our model on the dense prediction task of surface normal estimation on NYUv2~\cite{silberman2012indoor} and report the median angular error.
In Table~\ref{tab: transfer_multishot}, we provide results on the transfer learning experiments. We see that MMCL consistently outperforms the competing self-supervised learning approaches on a wide variety of transfer tasks and across all datasets. Further, MMCL also outperforms the supervised counterpart on several datasets. These results show that MMCL learns high-quality generalizable features.
\subsection{Experiments on Linear Evaluation}
For these experiments, we freeze the weights of the backbone (ResNet-50) and attach a linear layer as in~\cite{chen2020simple}, which is trained for 100 epochs using the class labels available with the dataset. Tables~\ref{tab: imagenet_100} and \ref{tab: imagenet_1k} show our results. The MMCL-pretrained model outperforms SimCLR by 6.3\% on ImageNet-1K using the same number of negatives. We also compare with recent memory queue-based methods such as MoCo-v2~\cite{chen2020improved} and MoCHI~\cite{kalantidis2020hard}, demonstrating competitive performance while using far fewer negatives (510 vs. 65536). We also establish a new state of the art on ImageNet-100, outperforming MoCHI by 1.7\% using only 510 negatives (0.008x) and without a memory bank.
\begin{table}[!htb]
\centering
\begin{tabular}{lccc}
\toprule
Variant & Negative Source & Negatives & top-1 \\
\midrule
MoCo & Memory Queue & 16000 & 75.9 \\
CMC & Memory Queue &16000 & 75.7 \\
MoCo-v2 & Memory Queue & 16000 & 78.0 \\
MoCHI & Memory Queue & 16000 & 79.0 \\
\midrule
Ours & Batch & 254 & \textbf{80.7} \\
\bottomrule
\end{tabular}
\caption{ImageNet-100 linear evaluation.
\label{tab: imagenet_100}
}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{lccc}
\toprule
Variant & Negative Source & Negatives & top-1 \\
\midrule
SimCLR & Batch & 254 & 57.5 \\
SimCLR & Batch & 510 & 60.62 \\
\midrule
MoCo-v2 & Memory Queue & 65536 & 63.6 \\
MoCHI & Memory Queue & 65536 & \textbf{63.9} \\
\midrule
Ours & Batch & 510 & \textbf{63.8} \\
\bottomrule
\end{tabular}
\caption{ImageNet-1K linear evaluation.
\label{tab: imagenet_1k}}
\end{table}
\begin{table*}[h]
\vskip 0.15in
\begin{center}
\begin{tabular}{lccccr}
\toprule
Method & DD & MUTAG & REDDIT-BIN & REDDIT-M5K & IMDB-BIN \\
\midrule
GraphCL & $\mathbf{78.62 \pm 0.40}$ & $86.80 \pm 1.34$ & $89.53 \pm 0.84$ & $55.99 \pm 0.28$ & $71.14 \pm 0.44$ \\
\midrule
Ours & $\mathbf{78.74 \pm 0.30}$ & $\mathbf{88.42 \pm 1.33}$ & $\mathbf{90.41 \pm 0.60}$ & $\mathbf{56.18 \pm 0.29}$ & $\mathbf{71.62 \pm 0.28}$ \\
\bottomrule
\end{tabular}
\end{center}
\caption{Comparison with GraphCL. We compare graph representation learning on five graph benchmark datasets. The compared numbers are obtained from the original paper~\cite{you2020graph}.}
\label{tab: graph}
\vskip -0.1in
\end{table*}
\begin{figure*}[!h]
\centering
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{AAAI2022/figures/effect_of_C.png}
\caption{Effect of slack penalty $C$.}
\label{fig: main_sigma_ablation}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{AAAI2022/figures/timing_imagenet_v2.png}
\caption{Computations (ImageNet).}
\label{fig: compute_time}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{AAAI2022/figures/convergence_no_nn_stl10.png}
\caption{Convergence (STL-10).}
\label{fig:convergence}
\end{minipage}
\end{figure*}
\subsection{Experiments on Graph Representation Learning}
Recall that our MMCL formulation works by modifying the contrastive learning loss function; as a result, our approach is generically applicable to a variety of tasks. In this section, we evaluate our approach on learning graph representations using contrastive learning. We experiment with five common graph benchmark datasets: MUTAG~\cite{Kriege2012SubgraphMK}, a dataset of mutagenic compounds; DD~\cite{yanardag2015deep}, a dataset of biochemical molecules; and REDDIT-BINARY, REDDIT-M5K, and IMDB-BINARY~\cite{yanardag2015deep}, which are social network datasets. Our experiments use GraphCL~\cite{you2020graph}, a contrastive learning framework derived from SimCLR that incorporates a projection head and graph augmentations.
For these experiments, we follow the training and evaluation protocols described in \cite{you2020graph}. Specifically, we use standard ten-fold cross-validation with an SVM and report the average performances and their standard deviations. We use the Adam optimizer for training these models. Table~\ref{tab: graph} shows the results of using MMCL instead of the NCE loss. We see that MMCL is comparable to or better than GraphCL on these datasets. On MUTAG, we obtain an absolute improvement of 1.62\% over GraphCL. These results demonstrate the effectiveness of our approach in learning better representations. Given that the only change from GraphCL is the underlying objective, the results also show that our approach is general and can easily replace NCE-based losses.
\begin{table}[!htb]
\centering
\begin{tabular}{lcccc}
\toprule
Loss Variant & Modality & Negatives & top-1 & R@1\\
\midrule
MoCo-v2 & RGB & 2048 & 46.8 & 33.1\\
Ours & RGB & 254 & \textbf{52.45} & \textbf{45.6} \\
\midrule
MoCo-v2 & Flow & 2048 & 66.8 & 45.2\\
Ours& Flow & 254 & \textbf{68.01} & \textbf{50.94} \\
\bottomrule
\end{tabular}
\caption{Video self-supervised learning on UCF-101 dataset.}
\label{tab: videossl}
\end{table}
\subsection{Experiments on Video Action Recognition}
For this experiment, we use the S3D backbone~\cite{Xie2018RethinkingSF} pre-trained using MMCL on RGB and optical flow images from the UCF-101 dataset. We pre-train the network for 300 epochs, followed by 100 epochs of linear evaluation on the task of action recognition. We report the standard 10-crop test accuracy on split-1, as well as nearest-neighbor retrieval. As seen in Table~\ref{tab: videossl}, MMCL outperforms the baseline by 5.65\% on RGB and 1.21\% on flow in linear evaluation, and by 12.5\% and 5.74\%, respectively, on Retrieval@1, demonstrating the generalizability of our approach to the video domain.
\subsection{Ablation studies and Analyses}
For some of the ablation experiments, we use the smaller datasets, STL-10 and CIFAR-100, and report the readout accuracy calculated using $k$-NN with $k=200$ at 200 epochs, besides standard evaluations.
\\
\noindent\textbf{Choice of Kernels:} Unlike the traditional NCE objective, our approach naturally allows the use of kernels to better capture the similarity between data points. In Table~\ref{tab: kernels}, we compare the readout accuracy on CIFAR-100 and STL-10 for several popular kernels. As is clear from the table, the RBF kernel performs best on both datasets. The best kernel hyperparameters $\sigma,\gamma$ were found empirically. We use the RBF kernel in our subsequent experiments.
\begin{table}[]
\centering
\begin{tabular}{lrc}
\toprule
Kernel ($K(x,y)$) & CIFAR-100 & STL-10\\
\midrule
Linear: $x^\top y$ & 41.43\% & 74.82\%\\
Tanh: $\tanh(-\gamma x^\top y+\eta)$ & 54.53\% & 80.5\% \\
RBF: $\exp(-\frac{\|x-y\|^2}{2\sigma^2})$ & \textbf{55.35}\% & \textbf{81.33}\% \\
\bottomrule
\end{tabular}
\caption{Effect of kernel choice.}
\label{tab: kernels}
\end{table}
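For reference, the three kernels compared in Table~\ref{tab: kernels} can be written as follows (a numpy sketch; the hyperparameter defaults are illustrative, not the tuned values reported above):

```python
import numpy as np

def linear_kernel(x, y):
    return x @ y

def tanh_kernel(x, y, gamma=1.0, eta=0.0):
    # note the negated slope, as in the table's definition
    return np.tanh(-gamma * (x @ y) + eta)

def rbf_kernel(x, y, sigma2=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma2))
```

Any of these can be dropped into the $\mathcal{K}$ slot of the MMCL objective without other changes to the pipeline.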
\noindent\textbf{Effect of slack:} A key benefit of our MMCL formulation is the ability to use a slack penalty that can control the impact of false or hard negatives. To evaluate this effect, we vary the slack penalty $C$ from 0.01 (i.e., a low penalty for misclassification) to $C=\infty$. The readout accuracies in Figure~\ref{fig: main_sigma_ablation} show that $C$ plays a key role in achieving good performance. For example, with $C=0.01$, the performance is consistently low on both datasets, perhaps because the hard negatives are under-weighted. We also find that using a large $C$ may not always be beneficial.
\\
\noindent\textbf{Effect of batch size:}
We use the STL-10 dataset for this experiment and train all models for 400 epochs, reporting linear evaluation results. From Table~\ref{tab: small_datasets_sota}, we see that our model consistently outperforms the SimCLR baseline and the other approaches. Indeed, MMCL is about 1--3\% better than HCL, which reweights the hard negatives. We also find that MMCL with a batch size of only 128 comes close to the performance of HCL~\cite{robinson2021contrastive} trained with a batch size of 256, suggesting that the proposed selection of hard negatives via support vectors is beneficial.
\\
\begin{table}[]
\centering
\begin{tabular}{lccr}
\toprule
STL-10 & 64 & 128 & 256 \\
\midrule
SimCLR~\cite{chen2020simple} & 74.21 & 77.18 & 80.15 \\
DCL~\cite{chuang2020debiased} & 76.72 & 81.09 & 84.26 \\
HCL~\cite{robinson2021contrastive} & 80.39 & 83.98 & 87.44 \\
\midrule
Ours & 80.11 & \textbf{86.8} & \textbf{88.3} \\
\bottomrule
\end{tabular}
\caption{Accuracy (in \%) against the batch size on STL-10.}
\label{tab: small_datasets_sota}
\end{table}
\noindent\textbf{Computational time against batch size:}
In Figure~\ref{fig: compute_time}, we show the time taken per iteration by the MMCL variants against that of prior methods, such as SimCLR, for increasing batch sizes. The computational cost of our inner optimization for finding the support vectors is directly related to the batch size. These experiments are run on ImageNet-1K with each RTX 3090 GPU holding 64 images. We see that the time taken by MMCL is comparable to that of SimCLR.
\\
\noindent\textbf{Performances between MMCL Variants:}
In Table~\ref{tab: mmcl_variants_compare}, we compare the performance of the MMCL variants PGD and INV. Both variants outperform SimCLR, while showing similar performance to each other. In Figure~\ref{fig:convergence}, we plot convergence curves (readout accuracy) against training epochs. The plots clearly show that our variants converge to superior performance more rapidly than the baseline.
\\
\noindent\textbf{Visualization of Support Vectors:}
Next, we qualitatively analyze whether the support vectors found by MMCL are semantically meaningful. To this end, we use an MMCL model pre-trained on the STL-10 dataset. We take a batch of examples as input to the model, choose one example from the batch as the positive and the rest as negatives, and solve the MMCL objective to find $\alpha$, where $\alpha=0$ corresponds to non-support vectors, $\alpha=C$ to misclassified points, and $\alpha\in(0,C)$ to on-margin support vectors. In Figure~\ref{fig : sv_vis}, we show the positive point alongside a set of samples from the batch and their respective $\alpha$ values. The figure clearly shows that object instances from a similar class get a high $\alpha$, suggesting that they lie on or inside the margin and contribute to the loss, while batch samples that are irrelevant or easy negatives are not support vectors and do not contribute to the loss. For example, in Figure~\ref{fig : sv_vis}, the \emph{yellow bird} is an easy negative for a \emph{white truck} query image, and our approach does not include that \emph{bird} in the support set.
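To make this procedure concrete, the following is a minimal NumPy sketch of the analysis: it builds an RBF kernel, runs a simple projected-gradient solver for a soft-margin SVM dual, and partitions the batch by the resulting $\alpha$ values. This is an illustrative sketch, not the paper's implementation; the equality constraint of the standard dual is omitted for brevity, and all names are hypothetical.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise RBF kernel K[i, j] = exp(-||x_i - x_j||^2 / sigma^2).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / sigma**2)

def svm_dual_pgd(K, y, C=10.0, lr=1e-3, steps=500):
    # Maximize the soft-margin SVM dual
    #   sum_i a_i - 0.5 * sum_ij a_i a_j y_i y_j K_ij
    # subject to the box constraint 0 <= a_i <= C.
    # (The equality constraint sum_i a_i y_i = 0 is dropped for brevity.)
    n = K.shape[0]
    a = np.random.uniform(0.0, min(C, 1.0), size=n)
    Q = (y[:, None] * y[None, :]) * K
    for _ in range(steps):
        grad = 1.0 - Q @ a                   # gradient of the dual objective
        a = np.clip(a + lr * grad, 0.0, C)   # ascent step + box projection
    return a

def partition_by_alpha(a, C, tol=1e-6):
    # a_i == 0     -> non-support vector (easy negative, no loss contribution)
    # 0 < a_i < C  -> on-margin support vector
    # a_i == C     -> inside the margin / misclassified (capped contribution)
    non_sv = np.where(a <= tol)[0]
    on_margin = np.where((a > tol) & (a < C - tol))[0]
    capped = np.where(a >= C - tol)[0]
    return non_sv, on_margin, capped
```

In the visualization above, the images with $\alpha$ in the `on_margin` or `capped` sets would be drawn with blue boxes (support vectors), and those in `non_sv` with red boxes.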
\begin{figure*}[ht]
\centering
\includegraphics[trim=0 96 0 0,clip,width=0.8\textwidth]{AAAI2022/figures/supple/sv_viz/new/grid_78.jpg}\\% first figure itself=
\vspace{0.2cm}
\includegraphics[trim=0 96 0 0,clip,width=0.8\textwidth]{AAAI2022/figures/supple/sv_viz/new/grid_0.jpg}\\
\vspace{0.2cm}
\includegraphics[trim=0 96 0 0,clip,width=0.8\textwidth]{AAAI2022/figures/supple/sv_viz/new/grid_65.jpg}\\
\vspace{0.2cm}
\caption{Visualizing support vectors: We visualize a query image (green box), corresponding support vectors (blue boxes) and non-support vectors (red boxes). We see that the support vectors are plausible hard negatives while in most cases the non-support vectors are easy negatives. The $\alpha$ corresponding to the various negatives is shown at the bottom left of each image.}
\label{fig : sv_vis}
\end{figure*}
\noindent\textbf{Longer Training:}
Our focus in the above experiments has been on improving convergence and negative utilization with limited training. However, we see competitive performance with longer training as well. Using a batch size of 256 (510 negatives), our model reaches 66.5\% at 200 epochs and 69.9\% at 400 epochs, compared to 62\% and 64.5\%, respectively, for SimCLR~\cite{chen2020simple} with the same number of negatives. Remarkably, our models pre-trained for 100 epochs transfer better than PCL-v2's~\cite{li2020prototypical} 200-epoch models on most transfer learning tasks (see Table~\ref{tab: transfer_multishot}).
\\
\begin{figure}[]
\centering
\begin{minipage}{0.33\textwidth}
\resizebox{\linewidth}{!}{
\begin{tabular}{lccr}
\toprule
Variant & CIFAR-100 & STL-10 \\
\midrule
SimCLR & 66 & 80.15 \\
\midrule
$\PGDMMCL$ & 68.0 & 88.03 \\
$\INVMMCL$ & \textbf{68.81} & \textbf{88.3} \\
\bottomrule
\end{tabular}
}
\captionof{table}{Performance of MMCL variants.}
\label{tab: mmcl_variants_compare}
\end{minipage}
\vspace*{-0.5cm}
\end{figure}
\vspace*{-0.5cm}
\section{Conclusions}
In this paper, we proposed a new contrastive learning framework, dubbed Max-Margin Contrastive Learning, which learns powerful deep representations for self-supervised learning by maximizing the decision margin separating data pseudo-labeled as positives and negatives. Our approach draws motivation from classical support vector machines, modeling the selection of useful negatives through support vectors. We obtain consistent improvements over baselines on a variety of downstream tasks.
\section{Acknowledgements}
AS thanks Ketul Shah, Aniket Roy, Shlok Mishra, and Susmija Reddy for their feedback. AS and RC are supported by an ONR MURI grant N00014-20-1-2787. SS acknowledges support from NSF-TRIPODS+X:RES (1839258) and from NSF-BIGDATA (1741341).
\section{Appendix}
\section{Additional Transfer Learning Experiments}
In this section, we report results from several additional experiments analyzing the generalizability of MMCL-learned representations. Specifically, we used SUN397~\cite{xiao2010sun}, Oxford-IIIT Pets~\cite{parkhi2012cats}, and VOC2007~\cite{everingham2010pascal} for many-shot classification, ISIC~\cite{tschandl2018ham10000,codella2019skin} and ChestX~\cite{wang2017chestx} for few-shot classification, and VOC2007~\cite{everingham2010pascal} for object detection. We follow the standard benchmarking protocol of \cite{Ericsson2021HowTransfer} for all evaluations. We report top-1 accuracy on the Food, CIFAR-10, CIFAR-100, SUN397, Stanford Cars, and DTD datasets; mAP on Aircraft, Pets, Caltech101, and Flowers; and the 11-point mAP metric on Pascal VOC 2007. For few-shot transfer experiments, we report the average accuracy along with a 95\% confidence interval. We report AP for object detection on VOC and median angular error for surface normal estimation on NYUv2. Please refer to~\cite{Ericsson2021HowTransfer} for additional details. For a fair comparison, we only compare against models pre-trained for a comparable number of epochs (up to 200) with a batch size of 256. To enable comparison with approaches such as MoCo-v2 and MOCHI, we download their official pre-trained models (for a batch size of 256) and follow the same benchmarking process to report the numbers.\footnote{Note that comparing against large-batch-size models (as high as 4096) and longer-pre-trained models (as many as 1000 epochs) is not fair, since they require as many as 16 high-performance GPUs \cite{caron2020unsupervised} and very long pre-training times. For comparison, on our compute setup, training a 256-batch-size model for 100 epochs takes upwards of 3.5 days.} All results are reported in Table~\ref{tab: transfer_multishot_supple}. We see that our approach consistently outperforms the prior approaches on most tasks and datasets, which shows the high quality of the representations learned using MMCL.
\begin{table*}[ht]
\resizebox{\linewidth}{!}{
\begin{tabular}{l|ccc|cc|c}
\toprule
& \multicolumn{3}{c}{Many-Shot classification} & \multicolumn{2}{c}{Few-Shot classification} & \multicolumn{1}{c}{Object Det.} \\
\textit{Method} & Pets & SUN397 & VOC2007 & ISIC & ChestX & VOC (AP) \\
\midrule
Supervised & 92.42 & 63.56 & 84.76 & 48.79 $\pm$ 0.53 & 29.26 $\pm$ 0.44 & 53.26 \\
\midrule
InsDis~\cite{wu2018unsupervised} & 76.22 & 51.84 & 71.90 & 52.19 $\pm$ 0.53 & 29.13 $\pm$ 0.44 & 48.82 \\
MoCo~\cite{he2020momentum} & 76.96 & 53.35 & 74.61 & \textbf{53.79} $\pm$ \textbf{0.54} & \textbf{30.0} $\pm$ \textbf{0.43} & 50.51 \\
PCL-v1~\cite{PCL} & 86.98 & 58.40 & 82.08 & 38.01 $\pm$ 0.44 & 25.54 $\pm$ 0.43 & \textbf{53.93} \\
PCL-v2~\cite{PCL} & 85.39 & 58.82 & 82.20 & 44.4 $\pm$ 0.52 & 28.28 $\pm$ 0.42 & \textbf{53.92} \\
MoCo-v2~\cite{chen2020improved}$^\dagger$ & 87.32 & 56.22 & 79.34 & 49.70 $\pm$ 0.51 & 29.48 $\pm$ 0.45 & 50.67 \\
MoCHI~\cite{kalantidis2020hard}$^\dagger$ & 87.4 & 57.74 & 79.73 & 49.0 $\pm$ 0.53 & 28.4 $\pm$ 0.4 & 52.61 \\
\midrule
Ours & \textbf{87.81} & \textbf{62.78} & \textbf{82.83} & 48.79 $\pm$ 0.53 & 29.26 $\pm$ 0.44 & 50.73 \\
\bottomrule
\end{tabular}}
\caption{Transfer learning results. We transfer an ImageNet-pretrained model (using MMCL) to a range of downstream tasks and datasets. We compare with models pre-trained with a similar batch size and number of epochs. Results for competing approaches are taken from~\cite{Ericsson2021HowTransfer}. $^\dagger$Models evaluated using publicly available checkpoints.}
\label{tab: transfer_multishot_supple}
\end{table*}
\section{Additional Ablation Studies}
In this section, we include additional analyses and experiments on the MMCL loss.
\noindent\textbf{Effect of Slack Penalty $C$:} In the main paper, we explored the effect of the slack penalty $C$ on performance. Here, we augment that study with an analysis of the model's behavior under a changing $C$. Specifically, note that at the start of training the neural network weights are randomly initialized, so the generated contrastive features may be arbitrary, and imposing a misclassification penalty is then unlikely to be useful. To empirically test this conjecture, we train a model from scratch for 25 epochs using a slack $C=\infty$ (i.e., no misclassification is allowed) and then reduce $C$ to a fixed value for the next 75 epochs. In Table~\ref{tab: reducing_C}, we report the results of this analysis on the CIFAR-100 and STL-10 datasets, where the first column shows the fixed value of $C$ used. The table shows some benefit from this approach; on STL-10, for example, $C=10$ shows a slight gain over $C=\infty$ (i.e., not using the proposed schedule). Overall, however, the scheme does not reduce performance dramatically unless we under-penalize misclassifications (e.g., $C=0.1$).
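This warmup scheme can be written as a simple schedule. The helper below is hypothetical; only the epoch counts and the idea of a hard-margin warmup followed by a fixed finite $C$ come from the experiment above.

```python
def slack_schedule(epoch, warmup_epochs=25, c_after=10.0):
    # Hard-margin warmup: C = inf (no misclassification allowed) for the
    # first `warmup_epochs` epochs, then a fixed finite slack penalty.
    return float("inf") if epoch < warmup_epochs else c_after
```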
\begin{table}[]
\centering
\begin{tabular}{lcr}
\toprule
& CIFAR-100 & STL-10 \\
\midrule
C=0.1 & 44.41 & 73.82 \\
C=1 & 53.08 & 78.86 \\
C=10 & 55.1 & 81.25 \\
C=50 & 54.95 & 79.63 \\
C=$\infty$ & 55.1 & 79.42 \\
\bottomrule
\end{tabular}
\caption{Reducing C over epochs.}
\label{tab: reducing_C}
\end{table}
\noindent\textbf{False Negative Correction:}
In the main paper, we argue that the use of slack allows us to limit the effect of false negatives on training by capping the maximum contribution they can make to the loss. An alternative idea is to nullify all points that lie \emph{inside} the margin (i.e., points that are potentially close to positives and thus may be false negatives). This amounts to the operation \texttt{$\alpha$[$\alpha$ == C] = 0} after finding the optimal dual values $\alpha$. For this experiment, we use a single $C$ for the entire training run (unlike the above experiment). The results are shown in Table~\ref{tab: C_FN_correction}. The table shows that FN correction has a significant impact on performance when using a small $C$, while the impact is insignificant for larger $C$. This is perhaps not unexpected: a small $C$ leads to a large number of misclassified points, which, if removed from training, lead to suboptimal contrastive learning, especially in the initial epochs. With larger values of $C$, removing some points does not affect performance much. Using the same value of $C$ for all training epochs is perhaps sub-optimal, and a schedule for this parameter may be needed; we plan to explore this idea in the future.
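A sketch of this correction step is shown below, assuming NumPy arrays and a small numerical tolerance in place of the exact equality test (the function name is illustrative):

```python
import numpy as np

def nullify_capped_duals(alpha, C, tol=1e-6):
    # Implements alpha[alpha == C] = 0 with a numerical tolerance:
    # duals that hit the upper box bound correspond to points inside the
    # margin, which are treated here as potential false negatives.
    alpha = alpha.copy()
    alpha[alpha >= C - tol] = 0.0
    return alpha
```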
\begin{table}[]
\centering
\begin{tabular}{lcr}
\toprule
& Readout Acc. \\
\midrule
C=1 & 61.12 \\
C=2 & 69 \\
C=10 & 80.05 \\
C=50 & 79.91 \\
No Correction & 79.63 \\
\bottomrule
\end{tabular}
\caption{Using false negative correction for STL-10.}
\label{tab: C_FN_correction}
\end{table}
\noindent\textbf{Influence of RBF Kernel Bandwidth $\sigma$:}
In this experiment we vary the RBF kernel bandwidth on CIFAR-100 and STL-10. From Figure~\ref{fig: main_gamma_ablation}, we see that performance can be influenced by the choice of $\sigma$; however, $\sigma=1$ performs reasonably well on both datasets.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{AAAI2022/figures/effect_of_gamma.png}
\caption{Effect of kernel bandwidth $1/\sigma^2$.}
\label{fig: main_gamma_ablation}
\end{figure}
\noindent\textbf{Effect of Batch Size:}
In the main paper, we showed the effect of batch size on STL-10 (Table~\ref{tab: small_datasets_sota}). In Table~\ref{tab: batchsize_cifar100}, we augment that with a similar study on CIFAR-100. The results on CIFAR-100 are similar, albeit with improvements that are not as pronounced as on STL-10, perhaps because it is a smaller dataset. Further, the trend in the tables shows that, similar to other methods, the performance of MMCL increases consistently with batch size, while showing diminishing returns as the batch size grows larger (e.g., from 256 to 512). We still consistently outperform the SimCLR baseline.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{AAAI2022/figures/convergence_no_nn_cifar100.png}
\caption{Convergence (CIFAR-100).}
\label{fig:convergence_CIFAR100}
\end{figure}
\begin{table}[]
\centering
\begin{tabular}{lcccr}
\toprule
CIFAR-100 & 64 & 128 & 256 & 512 \\
\midrule
SimCLR & 60.8 & 65 & 66 & 66.32 \\
DCL & 63.2 & 66.2 & 67.5 & 67.71 \\
HCL & 66.46 & 68.37 & \textbf{69} & 70.31\\
\midrule
$\MMCL$ & 65.82 & \textbf{68.75} & 68.81 & \textbf{70.7}\\
\bottomrule
\end{tabular}
\caption{Accuracy (in \%) against the batch size on CIFAR-100.}
\label{tab: batchsize_cifar100}
\end{table}
\noindent\textbf{Convergence:}
In Figure~\ref{fig:convergence} of the main paper, we show a convergence analysis of training on STL-10. In Figure~\ref{fig:convergence_CIFAR100}, we add a similar analysis for CIFAR-100. As on STL-10, we see that the MMCL variants converge to superior performance more rapidly than the baseline.
\noindent\textbf{Kernels with SimCLR:}
The loss used in vanilla SimCLR (and subsequently in other contrastive learning approaches) is technically equivalent to using an RBF kernel, and one could in principle choose other similarity kernels, e.g., tanh. To quantitatively evaluate this intuition and the impact of such a choice, we used a tanh kernel in SimCLR and experimented with 5 different bandwidths. The results (Table~\ref{tab: kernel_simclr}) suggest that, while some improvements to SimCLR are possible by selecting other kernel similarities, our proposed formulation demonstrates significantly better performance. This is because our kernel SVM formulation produces a classification margin in an RKHS in which the support set is automatically sparse and weighted, which to the best of our knowledge is difficult to argue for in the SimCLR setup.
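As an illustration, a tanh similarity kernel over L2-normalized embeddings can be computed as follows. This is a hedged sketch: the offset term $c$ and the exact placement of the bandwidth $\gamma$ are assumptions, not necessarily the formulation used in the experiment.

```python
import numpy as np

def tanh_kernel(Z, gamma=1.0, c=0.0):
    # Tanh ("sigmoid") kernel on L2-normalized embeddings:
    #   K[i, j] = tanh(gamma * <z_i, z_j> + c)
    # gamma plays the role of the bandwidth swept in the experiment;
    # the offset c is an assumed extra parameter.
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return np.tanh(gamma * (Z @ Z.T) + c)
```

Such a kernel matrix could then replace the exponentiated cosine similarities inside the contrastive loss.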
\noindent\textbf{Using an Exact SVM Solver:}
Our initial approach was to use an exact SVM solver (via Qpth~\cite{amos2017optnet}) to estimate the max-margin hyperplane; however, we found that our iterations became considerably slower (0.52 vs.\ 3.07 for batch size 256).
\noindent\textbf{Intuition for Slack Penalty $C$:}
Note that in contrast to standard SVMs, where the penalty $C$ trades off misclassification error against the margin, in our setup it plays an even bigger role because the feature representation itself is evolving. Specifically, when using a larger $C$, the $\alpha$ weights on the misclassified points equal $C$; since our contrastive objective uses representations of data points linearly weighted by $\alpha$, the backbone neural network is updated via gradients to minimize this loss, thereby pushing these misclassified points out of the margin. Thus, in subsequent iterations, the focus of learning shifts to increasing the margin, as the number of misclassified points becomes smaller. However, $C$ must also account for an estimate of false negatives, so a larger value of $C$ may not always be appropriate. We found that an empirical evaluation of collisions is quite difficult, especially since the network weights evolve over the epochs and collisions vanish as the contrastive loss drops. In Table~\ref{tab: C_FN_correction}, we attempted to define heuristics that explicitly correct for false negatives (FN); our results show that correcting for the FNs can yield minor benefits (e.g., at $C=50$). However, for larger $C$, the network weights are perhaps updated quickly enough to fix the FN misclassification error in the early epochs that FNs are absent in subsequent epochs.
\begin{table}[]
\centering
\begin{tabular}{l|c}
\toprule
Method & Readout @ 200 \\
\midrule
SimCLR & 48.2 \\
SimCLR tanh $\gamma=1$ & 50.5 \\
SimCLR tanh $\gamma=0.5$ & 49.1 \\
SimCLR tanh $\gamma=0.1$ & 49.9 \\
SimCLR tanh $\gamma=2$ & 50 \\
SimCLR tanh $\gamma=10$ & 49.9 \\
MMCL & \textbf{55} \\
\bottomrule
\end{tabular}
\caption{Use of kernels with SimCLR. While the use of kernels does lead to an improvement over the SimCLR loss, our proposed formulation demonstrates significantly better performance.}
\label{tab: kernel_simclr}
\end{table}
\section{Other Details}
\noindent\textbf{PGD Solver:} We use a Nesterov-accelerated PGD solver for faster convergence. The value of $\alpha$ is randomly initialized before the PGD iterations at each pre-training step. We use a step size of 0.001 and, optionally, a step size of $\frac{1}{\|\Delta\|_2}$.
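A minimal sketch of such a solver is given below. It is illustrative only: the momentum value, initialization range, and fixed iteration count are assumptions, not the paper's settings.

```python
import numpy as np

def nesterov_pgd_box(grad_fn, n, C, lr=1e-3, steps=500, momentum=0.9, seed=0):
    # Nesterov-accelerated projected gradient *ascent* under the box
    # constraint 0 <= alpha <= C. `grad_fn` returns the gradient of the
    # (concave) dual objective. alpha is randomly re-initialized on every
    # call, mirroring the re-initialization before each pre-training step.
    rng = np.random.RandomState(seed)
    alpha = rng.uniform(0.0, min(1.0, C), size=n)
    v = np.zeros(n)
    for _ in range(steps):
        lookahead = np.clip(alpha + momentum * v, 0.0, C)  # Nesterov lookahead
        v = momentum * v + lr * grad_fn(lookahead)          # ascent direction
        alpha = np.clip(alpha + v, 0.0, C)                  # box projection
    return alpha
```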
\noindent\textbf{ImageNet Experiments:} We use an initial learning rate of 1.2 and, following SimCLR, a warmup and annealing scheme. Our best model uses a $\sigma^2$ of 5. To keep the hyperparameter search tractable, we measure top-1 validation accuracy after pre-training for 10 epochs and fine-tuning for 1 epoch. Based on experiments with ImageNet-100, we found the PGD variant to work slightly better than the INV variant (80.7\% vs.\ 80.5\%), so we use the PGD variant for all ImageNet experiments.
\noindent\textbf{CIFAR-100 and STL-10 Experiments:} We choose hyperparameters using the validation accuracy at 200 epochs, following the protocol described in the main paper (readout accuracy).
\noindent\textbf{UCF-101 Experiments:} We use a publicly available MoCo-v2 implementation for self-supervised video learning~\cite{NEURIPS2020_3def184a}. All hyperparameters are the same except for a $\sigma^2$ of 0.5, $C=1$, and a learning rate of 1e-4. We use the PGD variant for the experiments on UCF-101.
\subsection{Limitations and Failure Cases}
The proposed approach has some minor overheads. While we did not see any significant difference in overall training times (as shown in Figure~\ref{fig: compute_time}), the inner optimization can be expensive, at least in the initial epochs of training. For example, when the model is randomly initialized, the features produced are arbitrary, so the PGD iterations in MMCL need longer cycles to converge. However, this trend holds only for the first few epochs, and training speed improves over time. One could also use a different, better solver for box-constrained optimization that converges much faster (for example,~\cite{kiSrDh12} or OSQP~\cite{osqp-gpu}).
\subsection{Potential Negative Impact}
Our paper makes a fundamental contribution to the field of self-supervised and contrastive learning. We do not perceive any obvious negative social impacts of this work. In fact, self-supervision naturally helps to avoid certain biases that can emerge from labeling biases. That said, training self-supervised learning methods on already biased datasets could be harmful and potentially lead to the biases being inherited by the models during the fine-tuning stage.
% arXiv:2211.06253
\section{Introduction}
A solution of a geometric flow, such as the Ricci flow or mean curvature flow, is called ancient if it exists on a time interval of the form $(-\infty,T]$. Rescaling shows that such solutions model the flow in regions of high curvature. Therefore, determining the possible shapes of ancient solutions is a central problem in the study of singularities.
In recent years there has been significant progress towards the classification of ancient solutions to the Ricci flow and mean curvature flow under natural geometric conditions. For example, all 3-dimensional ancient solutions of Ricci flow which are nonnegatively curved and $\kappa$-noncollapsed have been classified, in \cite{Perelman, Brendle_13, Brendle-Huisken-Sinestrari, Brendle_20, Angenent-Daskalopoulos-Sesum_22, Brendle-Daskalopoulos-Sesum}. For mean curvature flow, ancient solutions in $\mathbb{R}^3$ which are convex and interior noncollapsed have been classified in \cite{Huisken-Sinestrari, Haslhofer-Hershkovits, Haslhofer, Brendle-Choi, Angenent-Daskalopoulos-Sesum_19, Angenent-Daskalopoulos-Sesum_20}. These results have been generalised and expanded upon in various directions; further references are given below.
In \cite{Lynch}, the first-named author classified ancient solutions of mean curvature flow (and a natural class of fully nonlinear flows) which are convex and have Type~I curvature growth --- all such solutions are homothetically shrinking cylinders. The key step was to establish that such a solution, if it is noncompact, splits a Euclidean factor. In this paper we prove analogous results for the Ricci flow.
Let $(M, g(t))$, $t \in (-\infty, T]$, be an $n$-dimensional solution of Ricci flow which is complete, nonflat, satisfies the curvature bound
\[\sup_{t\in(-\infty,T]} \sup_M |\Rm| < \infty,\]
and is $\kappa$-noncollapsed.\footnote{That is, $\vol(B_{g(t)}(x,r)) \geq \kappa r^n$ whenever $|\Rm| \leq r^{-2}$ in $B_{g(t)}(x,r)$.} We then call $(M,g(t))$ an ancient $\kappa$-solution. All of our results concern ancient $\kappa$-solutions which are either weakly PIC2 or have nonnegative sectional curvature. An ancient $\kappa$-solution is Type~I if there is a constant $C > 0$ such that
\[|\Rm| \leq \frac{C}{T -t}\]
for all $t < T$, and is Type~II otherwise.
A Riemannian manifold $(M,g)$ of dimension $n \geq 4$ is said to have nonnegative isotropic curvature if
\[\Rm_{1313}+\Rm_{1414}+\Rm_{2323}+\Rm_{2424}-2\Rm_{1234} \geq 0\]
for all orthonormal four-frames $\{e_1,e_2,e_3,e_4\}$. If the inequality is strict, $(M,g)$ is said to have positive isotropic curvature. For short, such spaces are said to be weakly PIC, and strictly PIC, respectively. If $\mathbb{R}^2\times(M,g)$ has nonnegative (positive) isotropic curvature, then $(M,g)$ is said to be weakly (strictly) PIC2. We note that a space with nonnegative curvature operator is necessarily weakly PIC2, and a weakly PIC2 manifold has nonnegative sectional curvature (see \cite[p.~100]{Brendle_10}). The PIC condition was introduced in \cite{Micallef-Moore}, and was studied in conjunction with the Ricci flow on 4-manifolds by Hamilton \cite{Hamilton_PIC}. It is preserved by the flow in all dimensions \cite{Nguyen, Brendle-Schoen_strict}, and played an essential role in the proof of the differentiable sphere theorem \cite{Brendle-Schoen_strict, Brendle-Schoen_weak}.
We establish the following dimension-reduction result.
\begin{theorem}\label{main}
Let $(M,g(t))$, $t \in (-\infty, T]$, be a simply connected, noncompact Type I ancient $\kappa$-solution with nonnegative sectional curvature. There is an integer $1 \leq m \leq n-2$ such that, for every $t \in (-\infty, T]$, $(M,g(t))$ is isometric to the product of $\mathbb{R}^m$ with a compact Riemannian manifold.
\end{theorem}
We note the following direct consequence of Theorem~\ref{main} (which, at least for ancient $\kappa$-solutions, answers Question 8 on p. 390 of \cite{Chow-Lu-Ni}).
\begin{corollary}\label{Type II}
If $(M,g(t))$, $t \in (-\infty, T]$, is a noncompact ancient $\kappa$-solution with positive sectional curvature, then it is Type~II, i.e.,
\[\limsup_{t \to -\infty} \sup_M \, (-t) |\Rm| = \infty.\]
\end{corollary}
If $(M,g(t))$ is as in Corollary~\ref{Type II}, then Hamilton's rescaling procedure \cite[Section~16]{Hamilton_Singularities} yields a sequence of flows $(M, \lambda_k g(\lambda_k^{-1} t), p_k)$ which converge smoothly (in the pointed Cheeger--Gromov sense) to an eternal solution with globally bounded scalar curvature, and which attains its maximum scalar curvature at a point at $t = 0$. If in addition $(M,g(t))$ is PIC2 and the eternal limit has positive Ricci curvature, then it is a steady soliton by Brendle's Harnack inequality \cite{Brendle_Harnack}. Examples of $\kappa$-noncollapsed steady solitons which are PIC2 include the Bryant soliton and the examples constructed in \cite{Lai}. Every known example of a noncompact, PIC2, Type~II ancient $\kappa$-solution is a steady soliton.
A further consequence of Theorem~\ref{main} is the following classification result for Type~I ancient $\kappa$-solutions which are weakly PIC2. This was previously established in \cite{Li}, but we are able to give a more succinct proof.
\begin{theorem}\label{einstein}
Let $(M,g(t))$, $t \in (-\infty, T]$, be a simply connected Type~I ancient $\kappa$-solution which is weakly PIC2. Then $(M,g(t))$ is isometric to $\mathbb{R}^m\times(\tilde M, \tilde g(t))$, where $0 \leq m \leq n-2$, and $(\tilde M, \tilde g(t))$ is a compact locally symmetric space with positive Ricci curvature.
\end{theorem}
The compact factor $(\tilde M, \tilde g(t))$ in Theorem~\ref{einstein} need not be a shrinking soliton, but does split as a product of Einstein manifolds.
Refinements of Theorem~\ref{einstein} have been proven under stronger hypotheses. A nonnegatively curved Type~I ancient Ricci flow of dimension 2 has constant curvature by \cite{Daskalopoulos-Hamilton-Sesum}. A 3-dimensional Type~I ancient $\kappa$-solution with nonnegative sectional curvature has universal cover isometric to $\mathbb{S}^3$ if it is compact \cite{Perelman}, or $\mathbb{R}\times\mathbb{S}^2$ if it is noncompact (see \cite{Hallgren}, \cite{Zhang} and \cite{Brendle_20}). In higher dimensions, a compact Type~I ancient $\kappa$-solution which is strictly PIC2 has constant curvature by \cite{Brendle-Huisken-Sinestrari} (see also \cite{Ni}). A noncompact Type~I ancient $\kappa$-solution which is
uniformly PIC and weakly PIC2 has universal cover isometric to $\mathbb{R}\times\mathbb{S}^{n-1}$ (see \cite{Naff_19} and \cite{Brendle-Naff}). Enders--M\"{u}ller--Topping showed that, for a general complete solution of Ricci flow, blow-ups at a Type~I singularity are shrinking solitons \cite{Enders-Mueller-Topping}.
Let us remark on the hypotheses of Theorem~\ref{einstein}. Bakas, Ni and Kong constructed examples of compact ancient solutions of the Ricci flow on certain spheres which are Type~I but fail to be locally symmetric \cite{Bakas-Ni-Kong}. Among them are solutions which have positive curvature operator but are not $\kappa$-noncollapsed, and solutions that are $\kappa$-noncollapsed but only have positive sectional curvature. This means that Theorem~\ref{einstein} fails if we remove the $\kappa$-noncollapsing assumption, or if we replace weakly PIC2 by nonnegative sectional curvature.
As an illustration we note that in dimension 4, Theorem~\ref{einstein} amounts to the following statement.
\begin{corollary}
A simply connected, weakly PIC2, Type~I ancient $\kappa$-solution of dimension 4 is isometric (up to parabolic rescaling) to one of the following:
\begin{itemize}
\item The product of two self-similarly shrinking round two-spheres.
\item The self-similarly shrinking solution generated by
\[\mathbb{S}^4, \;\; \mathbb{CP}^2, \;\; \mathbb{R}\times\mathbb{S}^3, \;\; \text{or} \;\; \mathbb{R}^2\times\mathbb{S}^2.\]
\end{itemize}
\end{corollary}
\subsection{Further related work} All convex ancient solutions of curve-shortening flow have been classified \cite{Daskalopoulos-Hamilton-Sesum_curve, BLT_curve}. Ancient solutions of mean curvature flow which are convex, interior noncollapsed and uniformly two-convex were classified in all dimensions in \cite{Brendle-Choi_higherdimensions, Angenent-Daskalopoulos-Sesum_20}. Recently, there has been progress towards a classification of all interior noncollapsed convex ancient solutions of mean curvature flow in $\mathbb{R}^4$ \cite{Zhu, Choi-Haslhofer-Hershkovits, Du-Haslhofer_a, Du-Haslhofer_b, CDDHS}. If the noncollapsing assumption is dropped, many more examples of convex ancient solutions arise \cite{BLT_collapse}. Important structural results for such solutions were established in \cite{Wang, Brendle-Naff_noncollapsing, BLL}, in addition to \cite{BLT_collapse}.
All positively curved ancient solutions of Ricci flow of dimension 2 have been classified \cite{Daskalopoulos-Hamilton-Sesum, Chu, Daskalopoulos-Sesum}. Ancient solutions which are $\kappa$-noncollapsed, uniformly PIC, and strictly PIC2 were classified in \cite{Brendle-Naff, Brendle-Daskalopoulos-Naff-Sesum}.
Some of the most fundamental results concerning ancient solutions of the Ricci flow were established by Perelman \cite{Perelman}, and then put to use in his construction of a flow with surgeries. The surgery construction for mean curvature flow carried out in \cite{Haslhofer-Kleiner} also makes extensive use of ancient solutions (but they play less of a role in other versions of the construction \cite{Huisken-Sinestrari_surgery, Brendle-Huisken}).
\subsection{Outline} In Section~\ref{splitting} we prove Theorem~\ref{main}. This is achieved by extracting an asymptotic shrinker at $t = -\infty$ (obtained by parabolic rescalings), and an asymptotic cone (obtained by scaling down the distance function at a fixed time), and then relating these two different limits. The asymptotic shrinker is simply connected and has nonnegative sectional curvature, so it splits a Euclidean factor by work of Munteanu and Wang. Using the Type~I property, we show that the asymptotic cone is isometric to said Euclidean factor. An application of Toponogov's triangle comparison and splitting theorems then gives Theorem~\ref{main}.
In Section~\ref{einstein section} we prove Theorem~\ref{einstein}. In light of Theorem~\ref{main}, it suffices to show that a compact Type~I ancient $\kappa$-solution is locally symmetric and has positive Ricci curvature. This follows from a standard blow-down argument and convergence results for PIC2 metrics under the Ricci flow.
\subsection{Acknowledgements} We would like to express thanks to S. Brendle, G. Huisken, K. Naff and M. Wink for helpful conversations relating to this work. We also thank Y. Li for his correspondence, which led us to correct an error in the statement of Theorem~\ref{main}.
\section{Dimension reduction for noncompact Type~I solutions}\label{splitting}
In this section we establish Theorem~\ref{main}. A key tool in the proof is the following result of Naber \cite[Theorem~3.1]{Naber}, which generalised earlier work of Perelman \cite{Perelman}.
\begin{lemma}\label{shrinker}
Let $(M,g(t))$, $t \in (-\infty, T]$, be a Type~I ancient $\kappa$-solution. Fix $p \in M$ and let $\lambda_k \to 0$ be a sequence of scales. The pointed rescaled flows $(M, \lambda_kg(\lambda_k^{-1}t), p)$, $t \in (-\infty, \lambda_kT]$, subconverge smoothly\footnote{Here we mean smooth convergence in the pointed Cheeger--Gromov sense --- see \cite{Hamilton_Singularities} for the precise definition.} to a pointed gradient shrinking soliton $(\bar M, \bar g(t), \bar p)$, $t \in (-\infty, 0)$.
\end{lemma}
We refer to any shrinking soliton which arises by the rescaling procedure described in Lemma~\ref{shrinker} as an asymptotic shrinker for $(M,g(t))$. The notation $(\bar M, \bar g(t))$ will always refer to an asymptotic shrinker. Note that if $M$ is compact, then $M$ and $\bar M$ are diffeomorphic. This follows from Hamilton's distance distortion estimate (see Lemma~\ref{distance distortion} below).
\begin{remark} The existence of an asymptotic shrinker requires that the rescaled flows $(M, \lambda_kg(\lambda_k^{-1}t), p)$ admit a uniform lower bound for the injectivity radius at $p$. This follows from the $\kappa$-noncollapsing assumption. The proofs of Theorem~\ref{main} and Theorem~\ref{einstein} only use $\kappa$-noncollapsing in this way, so it could be replaced by any other condition which leaves Lemma~\ref{shrinker} valid.
\end{remark}
We also make use of the following result: a complete manifold of nonnegative sectional curvature which has an asymptotic cone isometric to a Euclidean space must itself split off a Euclidean factor of the same dimension. This seems to be well-known, but for completeness we provide a proof based on the Toponogov splitting theorem.
We recall that for a complete Riemannian manifold $(M,g)$ with nonnegative sectional curvature, given a sequence $\lambda_k \to 0$, the sequence of metric spaces $(M, \lambda_k d_g, p)$ subconverges in the pointed Gromov--Hausdorff sense. Any limiting metric space obtained from $(M,g)$ in this way is called an asymptotic cone for $(M,g)$ at $p$.
\begin{lemma}\label{cone split}
Let $(M, g)$ be a complete Riemannian $n$-manifold of nonnegative sectional curvature and suppose that it has an asymptotic cone which is isometric to $\mathbb{R}^m$. The space $(M, g)$ then splits isometrically as $\mathbb{R}^m \times \tilde M^{n-m}$, where $\tilde M^{n-m}$ is compact.
\end{lemma}
Before proceeding with the proof, we recall Toponogov's angle comparison theorem (see e.g. \cite[Theorem 10.3.1]{Burago-Burago-Ivanov}), which will play a key role.
\begin{theorem}[Toponogov]
Consider a complete Riemannian manifold $(M,g)$ which has nonnegative sectional curvature. Let $\rho:[0,S] \to (M,g)$ and $\eta:[0,T] \to (M,g)$ be length minimising unit-speed geodesics such that $\rho(0) = \eta(0)$. The function
\begin{equation}\label{angle}
(s,t) \mapsto \arccos\bigg(\frac{s^2 + t^2 - d_g(\rho(s),\eta(t))^2}{2st}\bigg)
\end{equation}
is nonincreasing in $s$ for any fixed $t \in [0,T]$.
\end{theorem}
The quantity in \eqref{angle} converges to the angle between $\rho'(0)$ and $\eta'(0)$ as $(s,t) \to 0$.
\begin{proof}[Proof of Lemma~\ref{cone split}]
Fix some $p\in M$ and a sequence of positive numbers $\lambda_k \to 0$, and construct an asymptotic cone by extracting a subsequential pointed Gromov--Hausdorff limit of $(M,\lambda_k d,p)$, where $d$ is the Riemannian distance function on $(M,g)$. We assume that the cone is isometric to $\mathbb{R}^m$.
Let $B$ denote the ball of radius 2 around the origin in $\mathbb{R}^m$. By the Gromov--Hausdorff convergence, there are maps $\phi_k : B \to M$ such that $\phi_k(0) = p$ and
\[\lim_{k\to \infty} \big| d_{\mathbb{R}^m}(x,y) - \lambda_k d(\phi_k(x), \phi_k(y))\big| = 0\]
for all $x, y \in B$. In particular, for the points
\[x_k := \phi_k((1,0,\dots,0)), \qquad y_k := \phi_k((-1,0,\dots,0)),\]
we have
\begin{equation}\label{cone split asymptotics}
\lim_{k \to \infty}\lambda_k d(p, x_k) = 1, \qquad \lim_{k \to \infty}\lambda_k d(p, y_k) = 1, \qquad \lim_{k \to \infty} \lambda_k d(x_k, y_k) = 2 .
\end{equation}
Since both $x_k$ and $y_k$ escape to infinity in $(M,g)$ as $k \to \infty$, we can construct geodesic rays $\gamma_x$ and $\gamma_y$ emanating from $p$ as the limit of geodesic segments from $p$ to $x_k$ and $y_k$, respectively. If we denote by $\alpha_k \in [0,\pi]$ the angle of the geodesic triangle $(p, x_k, y_k)$ at $p$, Toponogov's angle comparison theorem implies that $\alpha_k \geq \theta_k$, where
\[\theta_k := \arccos \frac{d(p, x_k)^2 + d(p, y_k)^2 - d(x_k, y_k)^2}{2d(p,x_k)d(p, y_k)}\]
is the corresponding Euclidean comparison angle. Inserting \eqref{cone split asymptotics}, we obtain
\[\lim_{k\to \infty} \theta_k = \lim_{k\to\infty} \arccos\bigg( \frac{d(p, x_k)^2 + d(p, y_k)^2 - d(x_k, y_k)^2}{2d(p,x_k)d(p, y_k)} \bigg) = \pi,\]
and hence
\[\lim_{k \to \infty} \alpha_k = \pi.\]
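Here the first limit can be computed directly: multiplying the numerator and denominator in the definition of $\theta_k$ by $\lambda_k^2$ and inserting \eqref{cone split asymptotics} gives
\[
\cos\theta_k = \frac{(\lambda_k d(p, x_k))^2 + (\lambda_k d(p, y_k))^2 - (\lambda_k d(x_k, y_k))^2}{2(\lambda_k d(p,x_k))(\lambda_k d(p, y_k))} \longrightarrow \frac{1 + 1 - 4}{2} = -1,
\]
and $\arccos(-1) = \pi$.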
We conclude that the angle formed by the rays $\gamma_x$ and $\gamma_y$ at $p$ is $\pi$, and hence the concatenation $\gamma := -\gamma_x \frown \gamma_y : \mathbb{R} \to M$ is a smooth geodesic (the one-sided tangent vectors at $p$ agree, so this follows from uniqueness of geodesics, i.e., Picard--Lindel\"{o}f).
We now claim that every subinterval of $\gamma$ is length-minimizing between its endpoints. To see this, consider a new pair of sequences $\Bar{x}_k \in \gamma_x$ and $\Bar{y}_k \in \gamma_y$, defined uniquely by the requirement that
\[d(p, x_k)=d(p, \bar x_k), \qquad d(p, y_k)=d(p, \bar y_k).
\]
We appeal to the angle comparison theorem again to control $d(x_k,\bar x_k)$ and $d(y_k,\bar y_k)$. From the definition of $\gamma_x$ and $\gamma_y$, the angles at $p$ formed by the geodesic triangles $(p, x_k ,\bar x_k)$ and $(p, y_k, \bar y_k)$ converge to zero, so by angle comparison we have
\[ 0 \geq \lim_{k\to\infty} \arccos \bigg(1 - \frac{\lambda_k^2d(\bar x_k, x_k)^2}{2\lambda_k^2 d(p,x_k)d(p,\bar x_k)}\bigg),\]
and hence \eqref{cone split asymptotics} implies
\[\lim_{k \to \infty}\lambda_k d(\bar x_k, x_k) = 0.\]
Similarly,
\[\lim_{k \to \infty}\lambda_k d(\bar y_k, y_k) = 0.\]
We now multiply the inequality
\[d(x_k, y_k) - d(x_k, \bar x_k) - d(\bar y_k, y_k) \leq d(\bar x_k, \bar y_k) \leq d(\bar x_k, x_k) + d(x_k, y_k) + d(y_k, \bar y_k),
\]
by $\lambda_k$ and take the limit to obtain
\[\lim_{k \to \infty} \lambda_k d(\bar x_k, \bar y_k) = 2.
\]
We thus conclude that $\bar \theta_k \to \pi$, where $\bar \theta_k$ is the Euclidean comparison angle
\[\bar \theta_k = \arccos\bigg(\frac{d(p, \bar x_k)^2 + d(p, \bar y_k)^2 - d(\bar x_k, \bar y_k)^2}{2d(p,\bar x_k)d(p, \bar y_k)}\bigg).
\]
By angle monotonicity, we conclude that $\bar \theta_k = \pi$ for all $k$, and hence
\[d(\bar x_k, \bar y_k) = d(\bar x_k, p) + d(p, \bar y_k)\]
for all $k$. It follows that $\gamma$ is minimizing on every subinterval.
We now apply Toponogov's splitting theorem to conclude that $(M,g)$ is isometric to $\mathbb{R} \times (\tilde M^{n-1}, \tilde g)$, where $(\tilde M^{n-1}, \tilde g)$ is a complete Riemannian $(n-1)$-manifold with nonnegative sectional curvature. The space $(\tilde M^{n-1}, \tilde g)$ has an asymptotic cone isometric to $\mathbb{R}^{m-1}$, so we can repeat the above argument, and continue iteratively until we have exhibited $(M,g)$ as $\mathbb{R}^m \times (\tilde M^{n-m}, \tilde g)$, where $(\tilde M^{n-m}, \tilde g)$ is a Riemannian $(n-m)$-manifold whose asymptotic cone is a single point. It follows that $\tilde M^{n-m}$ is compact, for otherwise it would contain a ray, in which case all of its asymptotic cones would also contain a ray.
\end{proof}
We recall that one can compare the Riemannian distance at different times along the Ricci flow using Hamilton's distance distortion estimate. For a Type~I ancient solution with nonnegative Ricci curvature we have the following.
\begin{lemma}\label{TypeIdistance}
Let $(M,g(t))$, $t \in (-\infty, 0]$, be an ancient solution of the Ricci flow with nonnegative Ricci curvature and which satisfies the Type~I condition $|\Rm| \leq \bar C(-t)^{-1}$, $t < 0$. There is a constant $C=C(n, \bar C)$ such that
\[0 \leq d_{g(s)}(x, y) - d_{g(t)}(x, y) \leq C(\sqrt{-s} - \sqrt{-t})
\]
for every $x, y \in M$ and $-\infty < s < t \leq 0$.
\end{lemma}
\begin{proof}
Since $\Ric \geq 0$, we have
\[\frac{d}{dt} d_{g(t)}(x,y) \leq 0.\]
This gives the first inequality.
Set $\sigma = (\sup_M |\Rm|(t))^{-1/2}$ in part (a) of \cite[Lemma~17.4]{Hamilton_Singularities}, and combine the result with \cite[Lemma~17.3]{Hamilton_Singularities} to obtain
\[\frac{d}{dt} d_{g(t)}(x,y) \geq -\frac{C}{\sqrt{-t}}.\]
After integrating, this gives the second inequality.
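Explicitly,
\[
d_{g(t)}(x,y) - d_{g(s)}(x,y) \geq -\int_s^t \frac{C}{\sqrt{-u}}\,du = -2C\big(\sqrt{-s} - \sqrt{-t}\big),
\]
which is the claimed bound after enlarging the constant $C$ by a factor of $2$.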
\end{proof}
Before carrying out the proof of Theorem~\ref{main}, we note the following consequence of Lemma~\ref{TypeIdistance}.
\begin{lemma}\label{simply connected}
Let $(M,g(t))$, $t \in (-\infty,T]$, be a Type~I ancient $\kappa$-solution with nonnegative sectional curvature. Let $(\bar M, \bar g(t))$, $t \in (-\infty,0)$, be an asymptotic shrinker for $(M,g(t))$. If $M$ is simply connected, then $\bar M$ is simply connected.
\end{lemma}
\begin{proof}
Suppose $M$ is simply connected. Let $\Gamma \subset \bar M$ be the image of a continuous map from $S^1$ into $\bar M$. We claim that $\Gamma$ can be contracted to a point in $\bar M$ via a continuous homotopy.
By definition, there is a point $p \in M$, a point $\bar p \in \bar M$, and a sequence of scales $\lambda_k \to 0$ such that $(M, \lambda_k g(\lambda_k^{-1}t), p)$ converges to $(\bar M, \bar g(t), \bar p)$ in the pointed Cheeger--Gromov sense. Let us write $g_k(t) := \lambda_k g(\lambda_k^{-1} t)$. In particular, there are maps $\Phi_k : B_{\bar g(-1)}(\bar p, k) \to M$ with the following properties:
\begin{itemize}
\item Each $\Phi_k$ is a diffeomorphism onto its image, and $\Phi_k(\bar p) = p$.
\item The pulled back metrics $\Phi_k^* g_k(-1)$ converge locally smoothly to $\bar g(-1)$ as $k \to \infty$.
\end{itemize}
For each $k$, we define $\Gamma_k := \Phi_k(\Gamma)$. We may choose $R > 0$ so that
\[\Gamma \subset B_{\bar g(-1)}(\bar p, R-1).\]
It follows that there is an integer $m$ such that
\begin{equation}
\Gamma_k \subset B_{g_k(-1)}(p, R)
\end{equation}
for all $k \geq m$. Also, making $m$ larger if necessary, we can ensure that
\[B_{g_k(-1)}(p, R) \subset \Phi_k\Big(B_{\bar g(-1)}(\bar p, R+1)\Big) \subset B_{g_m(-1)}(p,R+2)\]
for all $k \geq m$. We thus have
\[\Gamma_k \subset B_{g_m(-1)}(p,R+2)\]
for all $k \geq m$. It will be important that the right-hand side is independent of $k$.
By the soul theorem \cite{Cheeger-Gromoll_soul}, there is a compact totally convex submanifold $\Sigma$ of $(M, g_m(-1))$ such that $M$ is diffeomorphic to the normal bundle of $\Sigma$. Let us denote the normal bundle of $\Sigma$ by $N$, and identify $N$ with $M$. Observe that, by scaling in the fiber, we obtain a deformation retraction of $N$ onto $\Sigma$. Since $M = N$ is assumed to be simply connected, this implies that $\Sigma$ is simply connected.
Let $N'$ be the subbundle of $N$ consisting of normal vectors whose length with respect to $g_m(-1)$ is at most $E$. If $E$ is chosen large enough, then
\[\Gamma_k \subset B_{g_m(-1)}(p,R+2) \subset N',\]
for all $k \geq m$. Therefore, we can contract $\Gamma_k$ to a point in $N'$ via a continuous homotopy, by first retracting onto $\Sigma$, and then using that $\Sigma$ is simply connected to contract to a point in $\Sigma$. Choosing $R'>R+2$ large enough so that
\[N' \subset B_{g_m(-1)}(p,R'),\]
we ensure that each $\Gamma_k$ can be contracted to a point in $B_{g_m(-1)}(p,R')$.
We have
\[B_{g_m(-1)}(p, R') = B_{g(-\lambda_m^{-1})}(p, R' / \sqrt{\lambda_m}).\]
By the Type~I distance distortion estimate (Lemma~\ref{TypeIdistance}),
\[B_{g(-\lambda_m^{-1})}(p, R' / \sqrt{\lambda_m}) \subset B_{g(t)}(p, R' / \sqrt{\lambda_m} + C\sqrt{-t})\]
for all $t \leq -\lambda_m^{-1}$, so for $\lambda_k \leq \lambda_m$ we have
\[B_{g(-\lambda_m^{-1})}(p, R' / \sqrt{\lambda_m}) \subset B_{g(-\lambda_k^{-1})}(p, R'/\sqrt{\lambda_m} + C/\sqrt{\lambda_k}).\]
It follows that
\[B_{g(-\lambda_m^{-1})}(p, R'/\sqrt{\lambda_m}) \subset B_{g_k(-1)}(p, R'\sqrt{\lambda_k}/\sqrt{\lambda_m} + C),\]
and hence
\[B_{g_m(-1)}(p, R') \subset B_{g_k(-1)}(p, R' + C)\]
whenever $\lambda_k \leq \lambda_m$. In particular, for all sufficiently large $k$, each of the loops $\Gamma_k$ can be contracted to a point in $B_{g_k(-1)}(p, R' + C)$. But for sufficiently large $k$ the image of $B_{\bar g(-1)}(\bar p, k)$ under $\Phi_k$ contains $B_{g_k(-1)}(p, R' + C)$. Since $\Phi_k$ is a diffeomorphism onto its image, for sufficiently large $k$, we can take any homotopy of $\Gamma_k$ to a point in $B_{g_m(-1)}(p, R')$ and pull back by $\Phi_k$. This yields a homotopy of $\Gamma$ to a point in $\bar M$. We conclude that $\bar M$ is simply connected.
\end{proof}
We are now ready to prove Theorem~\ref{main}.
\begin{proof}[Proof of Theorem~\ref{main}]
For simplicity, we assume $T = 0$. This can be arranged by shifting the time variable, and so does not constitute a loss of generality.
Fix $p \in M$ and a sequence of positive numbers $\lambda_k \to 0$. Let $g_k(t):=\lambda_k g(\lambda_k^{-1}t)$. Appealing to Lemma~\ref{shrinker}, after passing to a subsequence if necessary, we can extract an asymptotic shrinker $(\bar M, \bar g(t), \bar p)$, $t\in(-\infty,0)$, as a limit of the sequence $(M, g_k(t), p)$.
For each $R>0$ and $t < 0$ we have
\[\lim_{k \to \infty} d_{GH}\Big(B_{g_k(t)}(p,R), B_{\bar g(t)}(\bar p, R)\Big) = 0
\]
where $d_{GH}$ stands for the pointed Gromov--Hausdorff distance. Here, and for the remainder of the proof, all balls are closed.
After passing to a further subsequence if necessary, we may assume the sequence $(M,\sqrt{\lambda_k}d_{g(-1)}, p)$ converges in the pointed Gromov--Hausdorff sense to an asymptotic cone for $(M,g(-1))$, which we denote $(\hat M, \hat d, \hat p)$. For each $R>0$ we write $\hat B(R)$ for the closed metric ball of radius $R$ about $\hat p$ in $\hat M$. We then have
\[\lim_{k \to \infty} d_{GH}\Big(B_{\lambda_k g(-1)}(p,R), \hat B(R)\Big)= 0\]
for every $R>0$.
Appealing to Lemma~\ref{simply connected}, we conclude that the asymptotic shrinker $(\bar M, \bar g(t))$ is simply connected. Since $(\bar M, \bar g(t))$ is also noncompact and has nonnegative sectional curvature, by a result of Munteanu--Wang \cite[Corollary 3]{Munteanu-Wang}, it splits off a Euclidean factor $\mathbb{R}^m$, where $m \in \{1,\dots,n\}$. We will use this fact to establish that $\hat M$ is isometric to $\mathbb{R}^m$.
Since $d_{g_k(t)} = \sqrt{\lambda_k}d_{g(\lambda_k^{-1}t)}$, we can use Lemma~\ref{TypeIdistance} to compare $B_{\lambda_k g(-1)}(p,R)$ and $B_{g_k(t)}(p, R)$, as follows. For $x$ and $y$ in $M$ we have
\begin{align}\label{distance distortion}
|d_{g_k(t)}(x, y) - \sqrt{\lambda_k}\,d_{g(-1)}(x,y)| &= \sqrt{\lambda_k}\,|d_{g(\lambda_k^{-1}t)}(x, y) - d_{g(-1)}(x,y)| \notag \\
& \leq \sqrt{\lambda_k}\,C\,(\sqrt{-\lambda_k^{-1}t} + 1) \notag \\
&\leq C(\sqrt{-t} + \sqrt{\lambda_k}).
\end{align}
For any fixed $R>0$ and $t < 0$, Lemma~\ref{TypeIdistance} also tells us that
\[B_{g_k(t)}(p, R) \subset B_{\lambda_k g(-1)}(p, R)\]
whenever $k$ is so large that $\lambda_k^{-1} t < -1$. On the other hand \eqref{distance distortion} implies
\[B_{\lambda_k g(-1)}(p,R) \subset B_{g_k(t)}(p, R + C(\sqrt{-t} + \sqrt{\lambda_k})).\]
Combining these two facts we obtain
\[d_{GH}\Big(B_{\lambda_k g(-1)}(p,R) , B_{g_k(t)}(p, R)\Big) \leq C(\sqrt{-t} + \sqrt{\lambda_k}).\]
We now apply the triangle inequality to estimate
\begin{align*}
d_{GH}\Big(&B_{g_k(t)}(p, R), \hat B(R)\Big)\\
&\leq d_{GH}\Big(B_{g_k(t)}(p, R), B_{\lambda_kg(-1)}(p,R)\Big) + d_{GH}\Big(B_{\lambda_kg(-1)}(p,R), \hat B(R)\Big) \\
&\leq C(\sqrt{-t} + \sqrt{\lambda_k}) + d_{GH}\Big(B_{\lambda_kg(-1)}(p,R), \hat B(R)\Big).
\end{align*}
Combining this with
\begin{align*}
d_{GH}\Big(&B_{\bar g(t)}(\bar p, R), \hat B(R)\Big)\\
&\leq d_{GH}\Big(B_{\bar g(t)}(\bar p, R), B_{g_k(t)}(p, R)\Big) + d_{GH}\Big(B_{g_k(t)}(p, R), \hat B(R)\Big),
\end{align*}
and taking the limit as $k \to \infty$, we obtain
\[d_{GH}\Big(B_{\bar g(t)}(\bar p, R), \hat B(R)\Big) \leq C\sqrt{-t}\]
for every $R>0$ and $t < 0$. It follows that, for a sequence $t_j \to 0^+$, the pointed Gromov--Hausdorff limit of $(\bar M, \bar g(-t_j), \bar p)$ is $(\hat M, \hat d, \hat p)$. Recalling that $(\bar M, \bar g(t))$ splits as the product of $\mathbb{R}^m$ with a compact factor, we see that $(\hat M, \hat d)$ is isometric to $\mathbb{R}^m$.
We now appeal to Lemma~\ref{cone split}. This shows that $(M, g(t))$ is isometric to the product of $\mathbb{R}^m$ with a compact Riemannian manifold at each time. The cases $m = n$ and $m = n - 1$ are ruled out by the assumption that $(M,g(t))$ is nonflat.
\end{proof}
\section{Type~I solutions are shrinking solitons}\label{einstein section}
In this section we prove Theorem~\ref{einstein}. Although this result was previously established by Li in \cite{Li}, using Theorem~\ref{main}, we are able to give a shorter proof. Let $(M,g(t))$, $t \in (-\infty,T]$, be a simply connected Type~I ancient $\kappa$-solution which is weakly PIC2. By Theorem~\ref{main}, we know that if $(M,g(t))$ is noncompact, then it splits as the product of a Euclidean factor with a compact solution. We claim that the compact factor is locally symmetric and has positive Ricci curvature. In fact, we prove the following stronger statement.
\begin{proposition}\label{compact einstein}
Let $(M,g(t))$, $t \in (-\infty, T]$, be a compact Type~I ancient $\kappa$-solution which is weakly PIC2. Then $(M,g(t))$ is a locally symmetric space, and its universal cover splits as a product of compact Einstein manifolds with positive Ricci curvature.
\end{proposition}
We first establish local symmetry. This is achieved by a standard argument based on the Berger holonomy theorem. After reducing to the case of a simply connected irreducible solution, one can argue that $(M,g(t))$ is either locally symmetric or else has an asymptotic shrinker isometric to $\mathbb{S}^n$ or $\mathbb{CP}^{n/2}$. In the latter cases, the solution itself is isometric to $\mathbb{S}^n$ or $\mathbb{CP}^{n/2}$, respectively. For the sake of completeness we provide a detailed account of these arguments (versions of which have appeared in \cite{Ni, Deng-Zhu, Li}).
We will make use of the following lemma.
\begin{lemma}\label{sphere}
If $(M,g)$ is a symmetric space which is homeomorphic to $\mathbb{S}^n$ then, up to scaling, $(M,g)$ is isometric to $\mathbb{S}^n$.
\end{lemma}
\begin{proof}
By a result of Kostant \cite{Kostant}, since $M$ is a topological sphere, the holonomy group of $(M,g)$ is $SO(n)$. Since the curvature tensor of $(M,g)$ is parallel, this immediately implies that $(M,g)$ has constant curvature.
\end{proof}
\begin{proposition}\label{compact symmetry}
Let $(M,g(t))$, $t \in (-\infty, T]$, be a compact Type~I ancient $\kappa$-solution which is weakly PIC2. Then $(M,g(t))$ is a locally symmetric space.
\end{proposition}
\begin{proof}
We need only consider the case that $(M,g(t))$ is simply connected. If not, we pass to the universal cover. This may be noncompact, but it splits as the product of a Euclidean space with a compact factor by Theorem~\ref{main}, in which case it suffices to show that the compact factor is locally symmetric.
In dimensions 2 and 3, $(M,g(t))$ has constant curvature by \cite{Daskalopoulos-Hamilton-Sesum} and \cite{Perelman}, so let us assume $n \geq 4$. We may also assume without loss of generality that $(M, g(t))$ is irreducible. Indeed, there exists a time $\bar t$ such that $(M, g(t))$ is isometric to a fixed number of irreducible factors for all $t \leq \bar t$. But if $(M,g(t))$ is locally symmetric for $t \leq \bar t$, then this remains true up to $t = T$.
Consider a sequence of times $t_k \to -\infty$. For each $k$ we apply the Berger holonomy theorem (see e.g. \cite[Corollary 10.92]{Besse}). This tells us that we are in one of the following situations, possibly after passing to a subsequence in $k$.
\emph{Case 1.} $(M, g(t_k))$ is a locally symmetric space for all $k$, and hence $(M, g(t))$ is locally symmetric for all $t \in (-\infty, T]$.
\emph{Case 2.} The holonomy group of $(M, g(t_k))$ is not $SO(n)$ or $U(n/2)$. In this case $(M, g(t_k))$ is Einstein, and is therefore locally symmetric by work of Brendle \cite{Brendle_Einstein}.
\emph{Case 3.} The holonomy group of $(M, g(t_k))$ is $SO(n)$. In this case, by the strong maximum principle proven in \cite[Proposition~10]{Brendle-Schoen_weak}, we conclude that $(M, g(t_k))$ is strictly PIC2, and hence is diffeomorphic to a sphere by \cite{Brendle-Schoen_strict}. At this point we can apply \cite[Theorem 16]{Brendle-Huisken-Sinestrari} to conclude that $(M, g(t))$ has constant curvature for all $t \in (-\infty, T]$. Note that the argument in \cite{Brendle-Huisken-Sinestrari} assumes $M$ is even-dimensional, but only in order to obtain a lower bound for the injectivity radius. In our case, such a bound follows from the $\kappa$-noncollapsing.
Another way to argue is as follows. Let $(\bar M, \bar g)$ be an asymptotic shrinker for $(M, g(t))$. Once again, by the Berger theorem and \cite[Proposition 10]{Brendle-Schoen_weak}, $(\bar M, \bar g)$ is either strictly PIC2, or else is a symmetric space. In both cases $(\bar M, \bar g)$ has constant curvature, by \cite{Brendle-Schoen_strict} and Lemma~\ref{sphere}, respectively. It follows that, after rescaling, $(M,g(t_k))$ has constant curvature in the limit as $k \to \infty$, and hence $(M,g(t))$ has constant curvature for all $t \in (-\infty, T]$ by the pinching construction in \cite{Boehm-Wilking} (in fact, in this last step Huisken's pinching estimate suffices \cite{Huisken}).
\emph{Case 4.} We have $n = 2m$ and the holonomy group of $(M, g(t_k))$ is $U(m)$. That is, $(M, g(t_k))$ is a K\"{a}hler manifold. In this case we can conclude that $(M,g(t))$ is isometric to $\mathbb{CP}^m$ (we note that an argument similar to the following appeared in \cite{Deng-Zhu}). Indeed, since $(M, g(t_k))$ is weakly PIC2, and therefore has nonnegative sectional curvature, a result of Mok \cite{Mok} implies it is either biholomorphic to $\mathbb{CP}^m$, or else is isometric to a Hermitian symmetric space. In the latter case we are done, so suppose $(M, g(t_k))$ is biholomorphic to $\mathbb{CP}^m$.
Let $(\bar M, \bar g)$ be an asymptotic shrinker for $(M,g(t))$. We know that $(\bar M, \bar g)$ is K\"{a}hler and is diffeomorphic to $\mathbb{CP}^m$, but such a space is biholomorphic to $\mathbb{CP}^m$ by work of Hirzebruch--Kodaira \cite{Hirzebruch-Kodaira} and Yau \cite{Yau}. Since the Futaki invariant of $\mathbb{CP}^m$ vanishes, we have that $(\bar M, \bar g)$ is a K\"{a}hler--Ricci soliton with vanishing Futaki invariant, and hence $(\bar M, \bar g)$ is K\"{a}hler--Einstein (see e.g. \cite[p. 125]{Cao}). Using the uniqueness theorem for K\"{a}hler--Einstein metrics due to Bando and Mabuchi \cite{Bando-Mabuchi}, we conclude that $(\bar M, \bar g)$ is biholomorphically isometric to $\mathbb{CP}^m$. Therefore, after rescaling, we have that $(M, g(t_k))$ converges smoothly to $\mathbb{CP}^m$ as $k \to \infty$. By the convergence result of Chen--Tian \cite{Chen-Tian}, $(M,g(t))$ can be extended to a maximal solution which converges (up to rescaling) to $\mathbb{CP}^m$ forward in time. By Perelman's monotonicity formula for the reduced volume \cite{Perelman}, it follows that the reduced volume of $(M,g(t))$ is constant (equal to that of the shrinking soliton generated by $\mathbb{CP}^m$ --- we refer to \cite[Theorem~2.1]{Naber} for a detailed proof). Therefore, $(M,g(t))$ is a shrinking soliton, and hence is isometric to $\mathbb{CP}^m$.
In every case, we have found that $(M,g(t))$ is a locally symmetric space.
\end{proof}
We continue with the proof of Proposition~\ref{compact einstein}. It remains to prove that the universal cover of $(M,g(t))$ is isometric to a product of Einstein manifolds with positive Ricci curvature. This follows from some basic facts about symmetric spaces. Recall that a complete locally symmetric Riemannian manifold which is also simply connected is a symmetric space (see e.g. \cite[Theorem~10.3.2]{Petersen}). Such a space splits as a product of simply connected irreducible factors, each of which is automatically Einstein (see e.g. \cite[p. 386]{Petersen}). A simply connected irreducible symmetric space is either of compact type, Euclidean type, or noncompact type. Spaces of compact type have nonnegative curvature operator and positive Ricci curvature, spaces of Euclidean type are flat, and spaces of noncompact type have nonpositive curvature operator and negative Ricci curvature. For Cartan's classification of symmetric spaces, see e.g. \cite[Chapter~X]{Helgason}.
\begin{proof}[Proof of Proposition~\ref{compact einstein}]
Let $(\bar M, \bar g)$ be an asymptotic shrinker for $(M,g(t))$. We know that $(\bar M, \bar g)$ is compact and locally symmetric (because of Proposition~\ref{compact symmetry}), so the shrinking soliton equation
\[\bar \Ric + \bar \nabla^2 f = \lambda \bar g\]
implies that $f$ is constant, and hence $(\bar M, \bar g)$ is Einstein. Since $(M,g(t))$ is nonflat, so is $(\bar M, \bar g)$ (this follows from Perelman's monotonicity formula for the reduced volume --- see \cite[Theorem~2.1]{Naber}), and hence $\lambda > 0$. Using the Bonnet--Myers theorem, we conclude that the universal cover of $\bar M$ is compact. But $M$ and $\bar M$ are diffeomorphic, so the universal cover of $M$ is compact.
Since $(M,g(t))$ is locally symmetric, we conclude that its universal cover $(\tilde M, \tilde g(t))$ is a compact symmetric space. There is a time $\bar t \leq T$ such that
\begin{equation}\label{compact split}
(\tilde M, \tilde g(t)) \cong (M_1, g_1(t)) \times \dots \times (M_k, g_k(t))
\end{equation}
for all $t \leq \bar t$, where each of the factors $(M_i,g_i(t))$ is irreducible. Each of these has nonnegative sectional curvature, and none can be flat (by the $\kappa$-noncollapsing assumption). Therefore, each factor is a symmetric space of compact type, and is hence Einstein with positive Ricci curvature. By uniqueness of compact solutions, each $(M_i, g_i(t))$ extends smoothly to $t = T$, and \eqref{compact split} persists for all $t \leq T$.
\end{proof}
\iffalse
\begin{lemma}\label{einstein lemma}
Let $(M,g(t))$, $t \in (-\infty, T]$, be a compact ancient solution of Ricci flow such that $(M,g(t))$ is Einstein for each $t \in (-\infty,T]$. We then have
\[g(t) = \bigg(1+2\frac{R(T)}{n}(T-t)\bigg)g(T).\]
\end{lemma}
\begin{proof}
For each $t \in (-\infty, T]$, since $(M,g(t))$ is Einstein, its scalar curvature $R$ is constant and we have
\[\Ric = \frac{R}{n}g.\]
It follows that
\[\frac{\partial R}{\partial t} = \Delta R + 2|\Ric|^2 = \frac{2}{n}R^2,\]
so for any pair of times $t$ and $s$,
\[R(t)^{-1} - R(s)^{-1} = \frac{2}{n}(s - t).\]
From this we obtain
\[\frac{\partial g}{\partial t} = -2\Ric = -2\bigg(\frac{n}{R(T)}+ 2(T-t)\bigg)^{-1} g,\]
which gives the claim upon integrating.
\end{proof}
\fi
\bibliographystyle{abbrv}
\section{Introduction}
Mobility in urban environments is becoming a major issue on the global scale~\cite{LevyBuonocoreEtAl2010}.
The main reasons are an increasing population with higher mobility demands and a slowly adapting infrastructure~\cite{CIA:2018}, resulting in serious congestion problems.
In addition, the usage of public transit is dropping, whilst mobility-on-demand operators such as Uber and Lyft are increasing their operation on urban roads, further increasing congestion~\cite{Berger2018,Molla2018,Siddiqui2018}.
For instance, the yearly cost of congestion in the US has doubled between 2007 and 2013~\cite{Schrank2007,TuttleCowles2014}, and in Manhattan cars are traveling about 15\% slower compared to five years ago~\cite{Hu2017}.
Space limitations and a largely fixed infrastructure make congestion an issue difficult to address in urban environments.
While existing public transportation systems need to be extended to ease congestion, it is important to adopt technological innovations improving the efficiency of urban transit.
The advent of cyber-physical technologies such as autonomous driving and wireless communications will enable the deployment of \gls{abk:amod} systems, i.e., fleets of self-driving cars providing on-demand mobility in a one-way vehicle-sharing fashion (see Fig.~\ref{fig:AMoD}). Specifically, such a system is designed to carry passengers from their origins to their destinations, potentially in an intermodal fashion (i.e., utilizing several modes of transportation), and to assign empty vehicles to new requests.
The main advantage of \gls{abk:amod} systems is that they can be controlled by a \emph{central} operator simultaneously computing routes for customer-carrying vehicles and \emph{rebalancing} routes for empty vehicles, thus enabling a \emph{system-optimal} operation of this transportation system. This way, \gls{abk:amod} systems could replace current taxi and ride-hailing services and reduce the global cost of travel~\cite{SpieserTreleavenEtAl2014}.
In contrast to conventional navigation providers, which compute the fastest route while passively treating congestion as exogenous, \gls{abk:amod} systems controlled by a central operator make it possible to account for the \emph{endogenous} impact of the single vehicles' routes on road traffic and travel time, and can thus be operated in a congestion-aware fashion.
\begin{figure}[t]
\centering\includegraphics[width=\columnwidth]{fig/AMoD.pdf}\vspace{-5pt}
\caption{The AMoD network. The white circles represent intersections and the black arrows denote road links. The dotted arrows represent pick-up and drop-off locations for single customers.}
\label{fig:AMoD}\vspace{-15pt}
\end{figure}
{\em Statement of contributions:}
We introduce a computationally efficient approach for congestion-aware \textsc{AMoD}\xspace routing to minimize the system cost---the total cost of executing the routing scheme over all the vehicles in the system. To the best of our knowledge, this is the first method that accounts for the full volume-delay function relating travel time to the amount of traffic.
Moreover, we demonstrate that our approach is faster by at least one order of magnitude than previous work (see Section~\ref{sec:related}) for congestion-aware \textsc{AMoD}\xspace, while being more accurate in terms of congestion estimation.
On the algorithmic side, we develop a reduction which transforms the \textsc{AMoD}\xspace routing problem into a Traffic Assignment Problem (\textsc{TAP}\xspace), where the latter does not involve rebalancing of empty vehicles. We then prove mathematically that an optimal solution for the latter \textsc{TAP}\xspace instance yields a solution to our original \textsc{AMoD}\xspace problem with the following properties: (i) The majority (e.g., 99\% in our experiments) of rebalancing demands are fulfilled and (ii) the system cost of the solution is upper bounded by the system optimum where 100\% of the rebalancing demands are fulfilled. (We note that, in practice, the unfulfilled rebalancing demand, being just a small fraction -- say, $<1\%$ -- can be addressed via post-processing heuristic strategies with minimal impact on cost.)
Such a reformulation of the \textsc{AMoD}\xspace problem allows us to leverage state-of-the-art techniques for \textsc{TAP}\xspace, that can efficiently compute a congestion-aware system optimum. In particular, we employ the classic Frank-Wolfe algorithm~\cite{FrankWolfe56, Patriksson15}, which is paired with modern shortest-path techniques, such as contraction hierarchies~\cite{GeisbergerETAL12} (both of which are implemented in recent open-source libraries~\cite{BuchETAL18,DibbeltETAL16}). This allows us to compute in a few seconds (on a commodity laptop) \textsc{AMoD}\xspace routing schemes for a realistic test case over Manhattan, New York, consisting of $\num{156000}$ passenger travel requests.
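To illustrate the algorithmic core, the following is a minimal, self-contained sketch of the Frank-Wolfe iteration for system-optimum traffic assignment on a toy network of two parallel links between a single origin-destination pair. All numbers and function names are ours, not the paper's; the actual implementation operates on full road networks and performs the direction-finding step with contraction-hierarchy shortest-path queries.

```python
def bpr(flow, t0, cap, alpha=0.15, beta=4.0):
    # BPR volume-delay function: travel time as a function of link flow.
    return t0 * (1.0 + alpha * (flow / cap) ** beta)

def marginal_cost(flow, t0, cap, alpha=0.15, beta=4.0):
    # d/dx [x * t(x)] = t(x) + x * t'(x): using this marginal link cost
    # makes Frank-Wolfe converge to the system optimum rather than to
    # the user equilibrium.
    return bpr(flow, t0, cap, alpha, beta) \
        + flow * t0 * alpha * beta * (flow / cap) ** (beta - 1) / cap

def frank_wolfe(demand, links, iters=200):
    # links: list of (t0, cap) pairs for the parallel o-d links.
    flows = [demand / len(links)] * len(links)  # feasible starting point
    for k in range(iters):
        costs = [marginal_cost(f, t0, cap)
                 for f, (t0, cap) in zip(flows, links)]
        # Direction finding: an all-or-nothing assignment, i.e. a
        # shortest-path computation, which on this toy network amounts
        # to picking the cheapest link.
        best = min(range(len(links)), key=lambda i: costs[i])
        target = [demand if i == best else 0.0 for i in range(len(links))]
        gamma = 2.0 / (k + 2.0)  # standard diminishing step size
        flows = [(1 - gamma) * f + gamma * t for f, t in zip(flows, target)]
    return flows
```

For BPR costs the marginal cost depends on the flow only through the ratio of flow to capacity, so on two links with equal free-flow times the system optimum equalizes the flow-to-capacity ratios, and the iteration above converges to that split.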
\emph{Organization}: The remainder of this paper is structured as follows.
In Section~\ref{sec:related} we provide a review of related work.
In Section~\ref{sec:preliminaries} we formally define the instances of \textsc{TAP}\xspace and \textsc{AMoD}\xspace we are concerned with in this work. There we also discuss the assumptions of our model and possible limitations. In Section~\ref{sec:convex} we provide a description of the Frank-Wolfe algorithm for \textsc{TAP}\xspace. Our main theoretical contribution is given in Section~\ref{sec:alg_amod}, where we describe our approach for \textsc{AMoD}\xspace by casting it into \textsc{TAP}\xspace, and develop its mathematical properties. In Section~\ref{sec:experiments} we demonstrate the power of our approach and test its scalability on realistic inputs. We conclude the paper with a discussion and future work in Section~\ref{sec:future}.
\section{Related Work}\label{sec:related}
There exist several approaches to study \gls{abk:amod} systems, spanning from simulation models~\cite{HoerlRuchEtAl2018, LevinKockelmanEtAl2017,MaciejewskiBischoffEtAl2017} and queuing-theoretical models~\cite{ZhangPavone2016,IglesiasRossiEtAl2016} to network-flow models~\cite{PavoneSmithEtAl2012,RossiZhangEtAl2017,SpieserTreleavenEtAl2014}.
On the algorithmic side, the \textit{control} of \gls{abk:amod} systems has been mostly based on network flow models employed in a receding-horizon fashion~\cite{IglesiasRossiEtAl2018,TsaoIglesiasEtAl2018,TsaoMilojevicEtAl2019}, and thresholded approximations of congestion effects~\cite{RossiZhangEtAl2017}, also accounting for the interaction with public transit~\cite{SalazarRossiEtAl2018} and the power-grid~\cite{RossiIglesiasEtAl2018}. In such a framework, cars can travel through a road at free-flow speed until a fixed capacity of the road is reached. At that point, no more cars can traverse it.
Such models result in optimization problems solvable with off-the-shelf linear-programming solvers---making them well suited for \emph{control} purposes---but they lack accuracy when accounting for congestion phenomena, which are usually described with \emph{volume-delay functions} that relate traffic flow to travel time. In particular, the \gls{abk:bpr} developed the most commonly used volume-delay function~\cite{BPR1964}, which has been applied to problems ranging from dynamic estimation of congestion~\cite{RivasInmaculadaEtAl2016} to route planning in agent-based models~\cite{AslamLimEtAl2012, ManleyChengEtAl2014}.
Against this backdrop, a piecewise-affine approximation of the \gls{abk:bpr} function is presented in~\cite{SalazarTsaoEtAl2019} and combined with convex relaxation techniques to devise a congestion-aware routing scheme for \gls{abk:amod} systems resulting in a quadratic program. Nevertheless, in large urban environments with several thousand transportation requests such approaches usually lead to computational times of the order of minutes, possibly rendering them less suitable for real-time control purposes.
Mathematically, \textsc{AMoD}\xspace can be viewed as an extension of \textsc{TAP}\xspace, where the latter ignores the cost and impact of rebalancing empty vehicles. Historically, \textsc{TAP}\xspace was introduced to model and quantify the impact of independent route choices of human drivers on themselves and on the system as a whole (see~\cite{Patriksson15, Sheffi85}). Algorithmic approaches for \textsc{TAP}\xspace typically assume a \emph{static} setting in which travel patterns do not change with time, allowing the problem to be cast as a multi-commodity minimum-cost flow~\cite{Ahuja93}, which can then be formulated as a \emph{convex programming} problem. One of the most popular tools for convex programming in the context of \textsc{TAP}\xspace is the Frank-Wolfe algorithm~\cite{FrankWolfe56}. What makes this algorithm particularly suitable for solving \textsc{TAP}\xspace is that its direction-finding step corresponds to multiple shortest-path
queries (we expand on this point in Section~\ref{sec:convex}). Recent advances in pathfinding tailored to transportation networks~\cite{BastETAL2016}, including contraction hierarchies~\cite{DibbeltETAL16, GeisbergerETAL12}, have made the Frank-Wolfe approach remarkably powerful. In particular, a recent work~\cite{BuchETAL18} introduced a number of improvements for pathfinding and combined them with the Frank-Wolfe method. Notably, the authors present experimental results for \textsc{TAP}\xspace in which their approach computes, within seconds, a routing scheme for up to~3 million requests over the network of a large metropolitan area. Nevertheless, such an approach is not directly applicable to \textsc{AMoD}\xspace problems, as it would not account for the rebalancing of empty vehicles.
Finally, we mention that \textsc{AMoD}\xspace is closely related to \emph{Multi-Robot Motion Planning} (MRMP), which consists of computing collision-free paths for a group of physical robots, moving them from their initial positions to their destinations. MRMP has been studied in discrete~\cite{SharonETAL15,Yu18,YuLaValle16} and continuous~\cite{CapETAL15,ShomeETAL2019,SoloveyETAL15} domains. The unlabeled variant of MRMP~\cite{AdlerETAL15,SolHal16,TurpinETAL14b,TurpinETAL14a,YuLaValle12}, which also involves target assignment, is reminiscent of the rebalancing of empty vehicles in \textsc{AMoD}\xspace, as such vehicles do not have a priori assigned destinations.
\section{Supplementary material (AMoD with loss as TAP)}
\subsection{AMoD with customer loss}
We introduce the constant $c_l$, which corresponds to the cost of leaving one unit of demand unserved. For every class of users $m\in M$ we introduce the variable $\ell_m$ with the constraint
\begin{align}\label{eq:loss:bound}
0\leq \ell_m \leq \lambda_m,\quad \forall m\in M.
\end{align}
We then introduce the following constraint, which replaces constraint~(\ref{eq:tap:flow}),
\begin{align}
\sum_{j\in V_i^+}x_{ijm}-\sum_{j\in V_i^-}x_{jim}&=\lambda_{im}-\ell_{im},\quad \forall i\in V, m\in M, \label{eq:loss:flow}
\end{align}
\[\textup{where }\ell_{im}\coloneqq\begin{cases} \ell_m,\quad & \text{if }o_m=i,\\ -\ell_m,\quad & \text{if }d_m=i,\\ 0,\quad & \text{otherwise}.\end{cases}\]
We also introduce an alternative to constraint~(\ref{eq:amod:flow})
\begin{align}\label{eq:loss:rebalance}
\sum_{j\in V_i^+}x_{ijr}-\sum_{j\in V_i^-}x_{jir}&=r_i^l,\quad \forall i\in V,
\end{align}
where \[r_i^l\coloneqq \sum_{m\in M}\left(\mathds{1}\{d_m=i\}-\mathds{1}\{o_m=i\}\right)\lambda_m.\]
\begin{definition}
The \emph{AMoD with loss problem} (\textsc{AMoD+L}\xspace) consists of minimizing the expression $F(\hat{\bm{x}})+G(\bm{\ell})$,
where \[G(\bm{\ell})=c_l\cdot \sum_{m\in M}\ell_m,\]
subject to constraints~(\ref{eq:tap:positive}),(\ref{eq:amod:positive}),(\ref{eq:loss:bound}),(\ref{eq:loss:flow}),(\ref{eq:loss:rebalance}).
\end{definition}
The purpose of this section is to demonstrate the generality of our approach: the transformation from \textsc{AMoD}\xspace to \textsc{TAP}\xspace developed in the main text extends to richer settings, such as the one considered here, in which customer requests may be dropped at a cost.
Similarly to the previous section, we show that \textsc{AMoD+L}\xspace can be viewed as a special instance of \textsc{TAP}\xspace. However, in this section the transformation is slightly more involved and requires modifying the \textsc{FrankWolfe}\xspace algorithm.
Recall that in addition to the rebalancing of user requests, which occurs in \textsc{AMoD}\xspace, we also need to allow some of the user requests to remain unfulfilled. In particular, $\ell_m$ denotes the number of unfulfilled requests for $m\in M$. In that case, we need to send only $\ell_m$ rebalancers to $o_m$. To achieve this more complex logic we modify $G$ into a larger graph $G''$. Additionally, we couple the task of a vehicle driving a passenger and then rebalancing to another origin node, and treat it as a single action of one vehicle. This way it is easier to determine whether losing a specific customer is worthwhile, as this cost encompasses both the cost of driving a customer and the cost of rebalancing to the origin of another. This will become clear shortly.
\subsection{The construction}
Define
\[O\coloneqq\bigcup_{m\in M}\{o_m\},\quad D\coloneqq \bigcup_{m\in M}\{d_m\}\]
to be the sets of all origin and destination vertices, respectively. We introduce the graph $G''=(V'',E'')$ as follows:
\[V''=V\cup \{n,n'\}\cup\{i'|i\in D\},\]
\[E''=\{(n,n')\}\cup E_{\text{in}}\cup E_{\text{out}}\cup E_{\text{between}} \cup E_{\text{end}}\cup E_{\text{self}},\]
where
\begin{align*}
E_{\text{in}}&=\{(i,j)|(i,j)\in E, i\not\in D\},\\
E_{\text{out}}&=\{(i',j)|(i,j)\in E, i\in D\},\\
E_{\text{between}}&=\{(i,i')|i\in D\},\\
E_{\text{end}}&=\{(i,n)|i\in D\},\\
E_{\text{self}}&=\{(i',i)|i\in D, i\in O\}.\\
\end{align*}
The purpose of $E_{\text{self}}$ is to allow a vehicle that completes a trip at a destination $i$ which is also an origin to rebalance in place: through the path $(i',i),(i,n)$ it can serve a subsequent request departing from $i$ without traversing the rest of the network.
We also assign costs $c'_{ij}$ to the edges in the following manner ($c_{ij}$ denotes the original cost with respect to $G$, where relevant):
for $(i,j)\in E_{\text{in}}\cup E_{\text{out}}$, $c'_{ij}(x_{ij})=c_{ij}(x_{ij})$, i.e., the original cost; for $(i,j)\in E_{\text{between}}\cup E_{\text{self}}$, $c'_{ij}(x_{ij})=\varepsilon\cdot x_{ij}$, where $\varepsilon$ is a small constant (to be discussed below); for $(n,n')$, $c'_{nn'}(x_{nn'})=c_l\cdot x_{nn'}$;
for $(i,n)\in E_{\text{end}}$, $c'_{in}(x_{in})=\textsc{bpr}\xspace(x_{in},\kappa_{in},L)$, where $L$ is defined as in the previous section and $\kappa_{in}=\sum_{m\in M}\mathds{1}\{d_m=i\}\lambda_m$.
This requires some explanation. In addition to the dummy vertex $n$, which serves as the destination for vehicles that execute passenger trips and then rebalance, we add the vertex $n'$, which serves as the destination for vehicles that remain idle. We add a copy $i'$ for every $i\in D$ (we assume that $i'$ is a new index that was not present in $V$). We add the edge $(n,n')$ as well as the following edge sets: every edge $(i,j)\in E$ that does not originate in a destination vertex of $G$ is added to $E_{\text{in}}$; every edge $(i,j)\in E$ that originates in a destination vertex is transformed into $(i',j)\in E_{\text{out}}$; an edge $(i,i')$ is drawn between every $i\in D$ and its copy $i'$ in $E_{\text{between}}$; and, similarly to the previous section, an edge $(i,n)$ is added from every destination vertex $i$ in $E_{\text{end}}$.
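The construction of $G''$ can be sketched in a few lines of Python (an illustration only; the function name and the tuple-based encoding of the copies $i'$ are our own assumptions):

```python
def build_gpp(V, E, D, O, n="n", n_prime="n'"):
    # Vertex copies i' for destinations; tuples keep them distinct from V.
    copies = {i: ("copy", i) for i in D}
    V_pp = set(V) | {n, n_prime} | set(copies.values())
    E_pp = {(n, n_prime)}
    E_pp |= {(i, j) for (i, j) in E if i not in D}      # E_in
    E_pp |= {(copies[i], j) for (i, j) in E if i in D}  # E_out
    E_pp |= {(i, copies[i]) for i in D}                 # E_between
    E_pp |= {(i, n) for i in D}                         # E_end
    E_pp |= {(copies[i], i) for i in D if i in O}       # E_self
    return V_pp, E_pp
```

Note that edges leaving a destination vertex $i$ depart from the copy $i'$, so the only way to reach the dummy edge $(i,n)$ is through $i$ itself.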
\subsection{Modified \textsc{AllOrNothing}\xspace assignment}
The purpose of this construction is as follows: a vehicle that begins its journey at $o_m$, for some $m\in M$, can be assigned one of two actions: (i) it can remain \emph{idle}, in which case it traverses the edges $(o_m,n),(n,n')$, or (ii) \emph{execute} the task, which includes traversing the shortest path from $o_m$ to $d'_m$ and then rebalancing by traversing the shortest path from $d'_m$ to $n$. This logic is described in the modified \textsc{AllOrNothing}\xspace assignment below (Algorithm~\ref{alg:aonl}), which is executed by \textsc{FrankWolfe}\xspace for the \textsc{AMoD+L}\xspace problem.
To ensure that every destination vertex $i\in D$ receives the correct number of rebalancers, we assign to the edge $(i,n)$ a capacity equal to the total number of arriving vehicles, idle or executing, similarly to the case of \textsc{AMoD}\xspace.
We also route the flow of idle vehicles through the edge $(i,n)$, and the cost $c_l\cdot x_{nn'}$ on the edge $(n,n')$ penalizes idle vehicles. The final ingredient of duplicating destination vertices $i\in D$ into $i'$ guarantees that a request for computing a shortest path from a destination $i$ to $n$ will not use the edge $(i,n)$. Indeed, the execution task traverses a shortest path from $o_m$ to $d'_m$, where the latter vertex is not connected directly to $n$. The purpose of the $\varepsilon$-cost edges is to keep these auxiliary connections essentially free to traverse while remaining strictly positive, so that shortest-path queries do not take gratuitous zero-cost detours through the duplicated vertices.
\begin{algorithm}\label{alg:aonl}
\caption{\textsc{AllOrNothing-loss}\xspace$(G'',\nabla\bar{F}_E(\bm{x}^k),OD)$}
\begin{algorithmic}[1]
\For {$m\in M$}
\State{$\bm{y}_m\gets \textsc{ShortestPath}\xspace(G'',\nabla\bar{F}_E(\bm{x}^k),o_m,d'_m)$}
\State{$\bm{y}'_m\gets \textsc{ShortestPath}\xspace(G'',\nabla\bar{F}_E(\bm{x}^k),d'_m,n)$}
\State{$\bm{y}^{\text{exe}}_m=\bm{y}_m+\bm{y}'_m$}
\State{$c_{\text{exe}}=\nabla\bar{F}_E(\bm{x}^k)^T \bm{y}^{\text{exe}}_m$}
\State{$\bm{y}^{\text{idle}}_m= \bm{y}(o_m,n)+\bm{y}(n,n')$}
\State{$c_{\text{idle}}=\nabla\bar{F}_E(\bm{x}^k)^T \bm{y}^{\text{idle}}_m$}
\If {$c_{\text{idle}}<c_{\text{exe}}$}
\State{$\bm{y}^k_m =\bm{y}^{\text{idle}}_m$}
\Else
\State {$\bm{y}^k_m =\bm{y}^{\text{exe}}_m$}
\EndIf
\EndFor
\State \Return $\bm{y}^{k} = \sum_{m\in M}\lambda_m \bm{y}^k_m$
\end{algorithmic}
\end{algorithm}
We now review Algorithm~\ref{alg:aonl}. $\bm{y}_m,\bm{y}'_m$ represent the shortest paths from $o_m$ to $d'_m$ and from $d'_m$ to $n$, respectively (lines~2,~3). The concatenation of the two paths is denoted by $\bm{y}^{\text{exe}}_m$ (line~4), and the cost of the entire execution is denoted by $c_{\text{exe}}$ (line~5). In line~6 we compute $\bm{y}_m^\text{idle}$, which represents an idle rebalancer. The notation $\bm{y}(i,j)$ represents assigning $x_{ij}=1$, and $x_{i'j'}=0$ whenever $i'\neq i\vee j'\neq j$. Its cost is denoted by $c_\text{idle}$ (line~7). The minimal-cost action is chosen in lines~8--11.
Observe that this modified \textsc{AllOrNothing}\xspace routine ensures that for every request $m$ the total number of vehicles executing or idling over request $(o_m,d_m)$ is exactly $\lambda_m$. Furthermore, this property is maintained from one iteration of \textsc{FrankWolfe}\xspace to the next, due to the averaging performed in line~5 of Algorithm~\ref{alg:fw}.
\section{Preliminaries}\label{sec:preliminaries}
In this section we provide a formal definition of \textsc{TAP}\xspace and \textsc{AMoD}\xspace, as our work will exploit the tight mathematical relation between these two problems.
The basic ingredient in our study is the road network, which is modeled as a directed graph $G=(V,E)$: each vertex $v\in V$ represents either a physical road intersection or a point of interest on the road, and each edge $(i,j)\in E$ represents a road segment connecting two vertices $i,j\in V$.
To model the travel times along the network, every edge $(i,j)\in E$ is associated with a cost function $c_{ij}:\mathbb{R}_+\rightarrow \mathbb{R}_+$, which represents the travel time along the edge as a function of the flow (i.e., traffic) along the edge, denoted by $x_{ij}\geq 0$. In order to accurately capture the cost, every edge $(i,j)$ has two additional attributes: the capacity of the edge $\kappa_{ij}>0$, which can be viewed as the amount of flow beyond which the travel time increases rapidly, and the free-flow travel time $\phi_{ij}>0$, i.e., the \emph{nominal} travel time when $x_{ij}= 0$. We mention that those attributes are standard when modeling traffic (see, e.g.,~\cite{Patriksson15}).
The time-invariant nature of this model captures the \emph{average} value of the flows for a certain time period.
To compute $c_{ij}$ we use the \textsc{bpr}\xspace function~\cite{BPR1964}, which is often employed in practice. It is noteworthy that our approach presented below can be modified to work with other functions such as the modified Davidson cost~\cite{Akcelik78}. Specifically, we define the cost function as
\begin{align*}
c_{ij}(x_{ij})=\textsc{bpr}\xspace(x_{ij},\kappa_{ij},\phi_{ij}) \coloneqq \phi_{ij}\cdot \left(1+\alpha\cdot \left(\frac{x_{ij}}{\kappa_{ij}}\right)^{\beta}\right),
\end{align*}
where typically $\alpha=0.15$ and $\beta=4$.
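As a concrete illustration, the \textsc{bpr}\xspace cost can be sketched in a few lines of Python (the parameter values are the typical ones stated above):

```python
def bpr(x, kappa, phi, alpha=0.15, beta=4):
    """Travel time on an edge carrying flow x, with capacity kappa
    and free-flow travel time phi."""
    return phi * (1 + alpha * (x / kappa) ** beta)
```

At $x=\kappa$ the travel time exceeds the free-flow time by the factor $1+\alpha$, and beyond capacity it grows with the fourth power of the flow.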
Travel demand is represented by passenger requests $OD=\{(\lambda_m,o_m,d_m)\}_{m=1}^M$,
where \mbox{$\lambda_m>0$} represents the amount of customers willing to travel from the origin node $o_m\in V$ to the destination node $d_m\in V$ per time unit.
\subsection{Traffic Assignment}
Here we provide a mathematical formulation of traffic assignment. We denote by $x_{ijm}\in \mathbb{R}_+$ the flow induced by request $m\in M$ on edge $(i,j)\in E$. We introduce the following constraint, which ensures that the amount of flow associated with each request is conserved as flow enters and leaves a given vertex, and that the flow corresponding to request $m\in M$ leaving $o_m$ and entering $d_m$ matches the demand $\lambda_m$:
\begin{align}
\sum_{j\in V_i^+}x_{ijm}-\sum_{j\in V_i^-}x_{jim}&=\lambda_{im},\quad \forall i\in V, m\in M, \label{eq:tap:flow}
\end{align}
\[\textup{where }\lambda_{im}\coloneqq\begin{cases} \lambda_m,\quad & \text{if }o_m=i,\\ -\lambda_m,\quad & \text{if }d_m=i,\\ 0,\quad & \text{otherwise},\end{cases}\]
and $V_i^+\coloneqq\left\{j\middle|(i,j)\in E\right\},V_i^-\coloneqq \left\{j\middle|(j,i)\in E\right\}$ denote the successors and predecessors of $i\in V$, i.e., the heads of edges leaving $i$ and the tails of edges entering it, respectively. We also impose non-negative flows as
\begin{align}
x_{ijm}& \geq 0, \quad\forall (i,j)\in E, m\in M. \label{eq:tap:positive}
\end{align}
The objective of \textsc{TAP}\xspace is specified in the following definition. Informally, the goal is to minimize the total travel time experienced by the users in the system, that is, the sum of the travel times of all individual requests.
\begin{definition}\label{def:tap}
The \emph{traffic-assignment problem} (\textsc{TAP}\xspace) consists of minimizing the expression
\begin{equation}
F_E(\bm{x})=\sum_{(i,j)\in E}x_{ij} c_{ij}(x_{ij}),\quad \textup{subject to~(\ref{eq:tap:flow}), (\ref{eq:tap:positive})},
\end{equation}
where $\bm{x}\coloneqq \left\{x_{ij}=\sum_{m\in M}x_{ijm}|(i,j)\in E\right\}$.
\end{definition}
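As a small illustration of Definition~\ref{def:tap}, the objective can be evaluated from per-request flows as follows (a Python sketch; the data layout and function names are our own assumptions):

```python
ALPHA, BETA = 0.15, 4  # typical BPR parameters

def bpr(x, kappa, phi):
    # BPR travel time on an edge carrying flow x
    return phi * (1 + ALPHA * (x / kappa) ** BETA)

def tap_objective(per_request_flows, kappa, phi):
    # aggregate x_ij = sum_m x_ijm, then F_E(x) = sum_ij x_ij * c_ij(x_ij)
    total = {}
    for flows_m in per_request_flows:
        for e, x in flows_m.items():
            total[e] = total.get(e, 0.0) + x
    return sum(x * bpr(x, kappa[e], phi[e]) for e, x in total.items())
```

The aggregation step mirrors the definition of $\bm{x}$ as the per-edge sum of the request flows $x_{ijm}$.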
\subsection{Autonomous Mobility-on-Demand}
In an \gls{abk:amod} system, the formulation of \textsc{TAP}\xspace captures only partially the cost of operating the full system. In particular, vehicles need to perform two types of tasks: (i) \emph{occupied} vehicles drive passengers from their origins to their destinations; (ii) after dropping passengers off at their destination, \emph{empty} vehicles need to drive to the next origin nodes, where passengers will be picked up. Indeed, the formulation of \textsc{TAP}\xspace above only captures the cost associated with (i), but not (ii). Another crucial difference between \textsc{TAP}\xspace and \textsc{AMoD}\xspace, which makes the latter significantly more challenging, is the fact that the travel destinations of empty vehicles are not given a priori and should be computed by the algorithm.
Thus, we extend the model to include also rebalancing empty vehicles and define $x_{ijr}$ as the rebalancing flow of empty vehicles over $(i,j)\in E$.
We force empty vehicles to be rebalanced from destination nodes to origin nodes as
\begin{align}
\sum_{j\in V_i^+}x_{ijr}-\sum_{j\in V_i^-}x_{jir}&=r_i,\quad \forall i\in V,\label{eq:amod:flow}
\end{align}
\[\textup{for }r_i\coloneqq \sum_{m\in M}\left(\mathds{1}\{d_m=i\}-\mathds{1}\{o_m=i\}\right)\lambda_m, \] where $\mathds{1}\{\cdot\}$ is a boolean indicator function. Observe that nodes with more arriving than departing passengers do not require rebalancing. We use $R\coloneqq \sum_{i\in V}\mathds{1}\{r_i>0\}r_i$ to denote the total number of rebalancing requests and enforce non-negativity of rebalancing flows as
\begin{align}
x_{ijr} &\geq 0,\quad \forall(i,j)\in E.\label{eq:amod:positive}
\end{align}
\begin{definition}
The \emph{autonomous-mobility-on-demand problem} (\textsc{AMoD}\xspace) consists of minimizing the expression $F_E(\hat{\bm{x}})$, subject to (\ref{eq:tap:flow}), (\ref{eq:tap:positive}), (\ref{eq:amod:flow}), (\ref{eq:amod:positive}), where
$\hat{\bm{x}}\coloneqq \left\{\hat{x}_{ij}\coloneqq x_{ij}+x_{ijr}\right\}_{(i,j)\in E}$.
\end{definition}
\subsection{Discussion}
A few comments are in order.
First, we make the assumption that mobility requests do not change in time. This assumption is justified in cities where transportation requests change slowly with respect to the average travel time~\cite{Neuburger1971}.
Second, the model describes vehicle routes as fractional flows and it does not account for the stochastic nature of the trip requests and exogenous traffic. Given the mesoscopic perspective of our study, such an approximation is in order.
Moreover, given the computational effectiveness of the approach, our algorithm is readily implementable in real-time in a receding horizon fashion, whereby randomized sampling algorithms can be adopted to compute integer-valued solutions with near-optimality guarantees~\cite{Rossi2018}.
Third, we assume exogenous traffic to follow habitual routes and neglect the impact of our decisions on the traffic base load, leaving the inclusion of reactive flow patterns to future work.
Fourth, we model the impact of road traffic on travel time with the \gls{abk:bpr} function~\cite{BPR1964}, which is well established and, although it does not account for microscopic traffic phenomena such as traffic lights, serves the purpose of route-planning at the mesoscopic level.
Finally, we constrain the capacity of the vehicles to a single customer, which is in line with current trends, and leave the extension to ride-sharing to future research~\cite{CapAlonso18,TsaoMilojevicEtAl2019}.
\section{Convex Optimization for \textsc{TAP}\xspace}\label{sec:convex}
In this section we describe the Frank-Wolfe method for convex optimization, which will later be used for solving \textsc{AMoD}\xspace. First, we have the following statement concerning the convexity of \textsc{TAP}\xspace.
\begin{claim}[(Convexity)]
\textsc{TAP}\xspace (Definition~\ref{def:tap}) is a convex problem. \label{claim:convex}
\end{claim}
\begin{proof}
Given a specific edge $(i,j)\in E$, observe that the derivative of the expression $x_{ij}c_{ij}(x_{ij})$ is strictly increasing, which implies that it is convex. As the expression $F_E(\bm{x})$ consists of a sum of convex functions, it is convex as well.
\end{proof}
\subsection{The Frank-Wolfe Algorithm}
Due to the convexity of the problems introduced, we can leverage convex optimization to solve \textsc{TAP}\xspace, and consequently \textsc{AMoD}\xspace, as we will see later on. Specifically, we use the Frank-Wolfe algorithm (\textsc{FrankWolfe}\xspace), which is well suited to our setting and has achieved impressive practical results for large-scale instances of \textsc{TAP}\xspace in a recent work~\cite{BuchETAL18}.
Before introducing \textsc{FrankWolfe}\xspace, it should be noted that
it is typically employed to minimize the \emph{user-equilibrium} cost function captured by
\begin{align}
\bar{F}_E(\bm{x})=\sum_{(i,j)\in E}\int_0^{x_{ij}}\bar{c}_{ij}(s)\,\mathrm{d}s,\label{eq:ue}
\end{align}
for some $\bar{c}_{ij}:\mathbb{R}_+\rightarrow \mathbb{R}_+$, whereas we are interested in computing the \emph{system optimum} corresponding to the minimum of $F_E(\bm{x})=\sum_{(i,j)\in E}x_{ij}c_{ij}(x_{ij})$, using $c_{ij}$ as defined in the previous section. However, we can force the user equilibrium reached by selfish agents to coincide with the system optimum by using the \emph{marginal costs} \[\bar{c}_{ij}(x_{ij})=\frac{\mathrm{d}}{\mathrm{d}x_{ij}}\left(x_{ij}c_{ij}(x_{ij})\right)=c_{ij}(x_{ij})+x_{ij}c'_{ij}(x_{ij}),\]
which quantify the sensitivity of the total cost with respect to small changes in the flows.
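For the \textsc{bpr}\xspace cost defined above, this marginal cost admits a simple closed form, namely another \textsc{bpr}\xspace-type function in which $\alpha$ is replaced by $\alpha(\beta+1)$:
\begin{align*}
\bar{c}_{ij}(x_{ij})=\frac{\mathrm{d}}{\mathrm{d}x_{ij}}\left[x_{ij}\,\phi_{ij}\left(1+\alpha\left(\frac{x_{ij}}{\kappa_{ij}}\right)^{\beta}\right)\right]=\phi_{ij}\left(1+\alpha(\beta+1)\left(\frac{x_{ij}}{\kappa_{ij}}\right)^{\beta}\right).
\end{align*}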
Specifically, to compute the system optimum, we only need to apply \textsc{FrankWolfe}\xspace to minimize $\bar{F}_E(\bm{x})$ (Equation~\ref{eq:ue}). (See more information on this transformation in~\cite{Patriksson15}.)
The algorithm below will be presented with respect to $\bar{F}_E$.
The following pseudo-code (Algorithm~\ref{alg:fw}) presents a simplified version of \textsc{FrankWolfe}\xspace, which is based on~\cite[Chapter 4.1]{Patriksson15}. The algorithm begins with an initial solution $\bm{x}^0$, which satisfies~(\ref{eq:tap:flow}), (\ref{eq:tap:positive}). To obtain $\bm{x}^0$, one can, for instance, assign each request $(\lambda_m,o_m,d_m)$ to the shortest route over the traffic-free graph $G$, while ignoring the flows of the other users.
\begin{algorithm}
\caption{\textsc{FrankWolfe}\xspace$(\bar{F}_E,G,OD)$}
\begin{algorithmic}[1]
\State{$\bm{x}^0\gets \text{feasible solution for \textsc{TAP}\xspace}$; $k\gets 0$}
\While {stopping criterion not reached}
\State{$\bm{y}^k\gets \mathop{\mathrm{argmin}}_{\bm{y}} \bar{F}_E(\bm{x}^k)+\nabla \bar{F}_E(\bm{x}^k)^T(\bm{y}-\bm{x}^k)$, s.t. $\bm{y}^k$ satisfies (\ref{eq:tap:flow}), (\ref{eq:tap:positive})}
\State{$\alpha_k\gets \mathop{\mathrm{argmin}}_{\alpha\in [0,1]}\bar{F}_E(\bm{x}^k+\alpha(\bm{y}^k-\bm{x}^k))$}
\State{$\bm{x}^{k+1}\gets \bm{x}^k+\alpha_k(\bm{y}^k-\bm{x}^k)$; $k\gets k+1$}
\EndWhile
\State \Return $\bm{x}^{k}$
\end{algorithmic} \label{alg:fw}
\end{algorithm}
In each iteration $k$ of the algorithm, the following steps are performed. In line~3 a value of $\bm{y}^k$ is obtained which minimizes the expression $\bar{F}_E(\bm{x}^k)+\nabla \bar{F}_E(\bm{x}^k)^T(\bm{y}^k-\bm{x}^k)$ while satisfying~(\ref{eq:tap:flow}), (\ref{eq:tap:positive}). It should be noted that this corresponds to solving a linear program with respect to $\bm{y}^k$, since $\bm{x}^k$ is already known and one works with the gradient of $\bar{F}_E$ rather than the function itself. We will say a few more words about this computation below. In line~4 a scalar $\alpha_k\in [0,1]$ is found such that $\bar{F}_E\left(\bm{x}^k+\alpha_k(\bm{y}^k-\bm{x}^k)\right)$ is minimized, which corresponds to solving a single-variable optimization problem and can be done efficiently. At the end of the iteration, in line~5, the solution is updated to be a linear interpolation between $\bm{x}^k$ and $\bm{y}^k$. The last value of $\bm{x}^k$ computed before the stopping criterion is reached is returned in the end. Due to the convexity of the problem, it is guaranteed that as \mbox{$k\rightarrow \infty$}, $\bm{x}^k$ converges to the optimal solution of \textsc{TAP}\xspace.
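To make the procedure concrete, the following self-contained Python sketch runs \textsc{FrankWolfe}\xspace on a toy instance with two parallel routes and a single request (all numbers are illustrative assumptions; the direction-finding step reduces to picking the route with the smaller marginal cost, and the line search is a simple ternary search):

```python
# Toy system-optimum TAP with two parallel routes and one request.
PHI = [1.0, 2.0]       # free-flow travel times phi_i (assumed)
KAPPA = [5.0, 10.0]    # capacities kappa_i (assumed)
ALPHA, BETA = 0.15, 4  # BPR parameters
DEMAND = 10.0          # request intensity lambda (assumed)

def cost(i, x):
    # BPR travel time on route i carrying flow x
    return PHI[i] * (1 + ALPHA * (x / KAPPA[i]) ** BETA)

def marginal(i, x):
    # marginal cost d/dx [x * cost(i, x)]
    return PHI[i] * (1 + ALPHA * (BETA + 1) * (x / KAPPA[i]) ** BETA)

def objective(x):
    # F_E(x); it equals the integral of the marginal costs, so minimizing
    # it is the user equilibrium with respect to the marginal costs
    return sum(xi * cost(i, xi) for i, xi in enumerate(x))

def frank_wolfe(iters=50):
    x = [DEMAND, 0.0]  # initial all-or-nothing assignment
    for _ in range(iters):
        # direction finding: all demand on the route of least marginal cost
        best = min(range(2), key=lambda i: marginal(i, x[i]))
        y = [DEMAND if i == best else 0.0 for i in range(2)]
        # exact line search over [0, 1] via ternary search
        f = lambda a: objective([xi + a * (yi - xi) for xi, yi in zip(x, y)])
        lo, hi = 0.0, 1.0
        for _ in range(200):
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if f(m1) < f(m2):
                hi = m2
            else:
                lo = m1
        a = (lo + hi) / 2
        x = [xi + a * (yi - xi) for xi, yi in zip(x, y)]
    return x
```

At convergence the two marginal costs equalize, which is exactly the first-order condition for the system optimum on this instance.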
\subsection{All-or-nothing Assignment}
What makes \textsc{FrankWolfe}\xspace particularly suitable for solving \textsc{TAP}\xspace is the special structure of the task of computing $\bm{y}^k$ which minimizes $\bar{F}_E(\bm{x}^k)+\nabla \bar{F}_E(\bm{x}^k)^T(\bm{y}^k-\bm{x}^k)$. First, observe that it is equivalent to minimizing the expression $\nabla\bar{F}_E(\bm{x}^k)^T\bm{y}^k$. Next, notice that for any $(i,j)\in E$ it holds that $\frac{\partial}{\partial x_{ij}}\bar{F}_E(\bm{x}^k)=\bar{c}_{ij}(x^k_{ij})$, where $x^k_{ij}$ is the value corresponding to $(i,j)$ of $\bm{x}^k$. That is, every variable $y^k_{ij}$ is multiplied by $\bar{c}_{ij}(x^k_{ij})$. Thus, minimizing the expression $\nabla\bar{F}_E(\bm{x}^k)^T\bm{y}^k$ while satisfying~\eqref{eq:tap:flow}, \eqref{eq:tap:positive} is equivalent to independently assigning the shortest route for every request $(\lambda_m,o_m,d_m)$, over the graph $G$, where the cost of traversing the edge $(i,j)$ is independent of the traffic passing through it, and is equal to $(\nabla\bar{F}_E(\bm{x}^k))_{ij}$.
This operation is known as All-or-Nothing assignment, due to the fact that each request is assigned to one specific route. Its pseudocode is given below (Algorithm~\ref{alg:aon}). The \textsc{ShortestPath}\xspace routine returns a vector $\bm{y}^k_m$ such that, for every $(i,j)\in E$ found on the shortest path from $o_m$ to $d_m$ on $G$ weighted by $\nabla\bar{F}_E(\bm{x}^k)$, $\bm{y}^k_{m,ij}=1$, and $\bm{y}^k_{m,ij}=0$ otherwise.
\begin{algorithm}
\caption{\textsc{AllOrNothing}\xspace$(G,\nabla\bar{F}_E(\bm{x}^k),OD)$}
\begin{algorithmic}[1]
\For {$m\in M$}
\State{$\bm{y}^k_m\gets \textsc{ShortestPath}\xspace(G,\nabla\bar{F}_E(\bm{x}^k),o_m,d_m)$}
\EndFor
\State \Return $\bm{y}^{k}\coloneqq \sum_{m\in M}\lambda_m \bm{y}^k_m$
\end{algorithmic}\label{alg:aon}
\end{algorithm}
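A minimal Python sketch of this routine, with textbook Dijkstra standing in for \textsc{ShortestPath}\xspace (the graph layout and function names are our own illustration):

```python
import heapq

def shortest_path(adj, weights, s, t):
    # Dijkstra over a digraph; adj maps vertex -> list of successors,
    # weights maps edge (u, v) -> nonnegative cost (here, gradient entries).
    dist, prev = {s: 0.0}, {}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        if u == t:
            break
        for v in adj.get(u, []):
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, v = [], t
    while v != s:  # reconstruct the edge sequence
        path.append((prev[v], v))
        v = prev[v]
    return path[::-1]

def all_or_nothing(adj, weights, requests):
    # assign each request's full intensity to its current shortest path
    flow = {e: 0.0 for e in weights}
    for lam, o, d in requests:
        for e in shortest_path(adj, weights, o, d):
            flow[e] += lam
    return flow
```

In the full method, `weights` would hold the entries of $\nabla\bar{F}_E(\bm{x}^k)$, which are fixed during the direction-finding step.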
\section{AMoD as TAP}\label{sec:alg_amod}
In this section we establish an equivalence between \textsc{TAP}\xspace and \textsc{AMoD}\xspace. In particular, we show that a given \textsc{AMoD}\xspace problem can be transformed into a \textsc{TAP}\xspace, such that a solution to the latter, which is obtained by \textsc{FrankWolfe}\xspace, yields a solution to the former.
The crucial difference between the two problems is that in \textsc{TAP}\xspace every vehicle has a specific origin and destination vertex, whereas in \textsc{AMoD}\xspace this is not the case. In particular, while in \textsc{AMoD}\xspace empty rebalancers originate in specific destination vertices of user requests, the destinations of these rebalancers can in theory be any of the origin vertices. However, we show that this gap can be bridged by supplementing the original graph $G$ with an additional ``dummy'' vertex, and connecting to it edges emanating from all the vertices that need to be rebalanced. We then set the costs of the edges to guarantee an almost complete rebalancing, i.e., only a small fraction of the rebalancing requests will not be fulfilled. In the remainder of this section we provide a detailed description of the approach and proceed to analyze its theoretical guarantees.
\subsection{The construction}
We formally describe the structure of this new graph $G'=(V',E')$, where $V'=V\cup \{n\}$, and
\[E'=E\cup \left\{(i,n)\middle| i\in V\textup{ and }r_i<0\right\}.\]
Recall that $r_i<0$ indicates that there are fewer user requests arriving at $i\in V$ than departing from it, which implies that rebalancers should be sent to this vertex. The vertex $n\not\in V$ is new and serves as a dummy target vertex for all the rebalancers. See the example in Figure~\ref{fig:construction}.
To ensure that a sufficient number of rebalancers will arrive at each vertex that needs to be rebalanced, we assign to every edge $(i,n)$
the cost $c_{in}(x_{in})=\textsc{bpr}\xspace(x_{in},\kappa_{in},\phi_{in})$, where $\kappa_{in}=-r_i$, and $\phi_{in}=L$,
where $L$ is a large constant whose value will be determined later on.
The final ingredient in transforming \textsc{AMoD}\xspace into \textsc{TAP}\xspace is providing excess vehicles with specific origins and destinations. Given the original set of requests $OD$, for every $i\in V$ such that $r_i>0$ we add the request $(r_i,i,n)$, where $r_i$ is its intensity. This yields the extended request set~$OD'$.
\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{fig/construction.pdf}\vspace{-5pt}
\caption{A simple example of the construction. The graph~$G$ consists of the vertices $1$ to $5$ and the (black) edges between them. The demand for~$G$ is $OD=\{(2,1,2),(1,2,4),(1,3,4),(2,4,1),(2,4,2)\}$, where each triplet denotes the intensity, origin, and destination, respectively. Red vertices indicate a shortage of incoming vehicles (e.g., four passengers depart from vertex $4$, but only two vehicles arrive), green indicates excess (e.g., four vehicles terminate in vertex $2$ but only one passenger departs), and black represents vertices with met demand (e.g., vertex $1$, where two vehicles terminate and two passengers depart).
Consequently, the graph~$G'$ additionally contains the dummy vertex $6$, and the edges $(3,6),(4,6)$ drawn in blue, originating from the vertices of $G$ with shortage. Accordingly, the capacity is set to $\kappa_{3,6}=1,\kappa_{4,6}=2$. $OD'$ extends $OD$ with the request $(3,2,6)$. Observe that its intensity corresponds to the total shortage in vertices $3,4$.}
\label{fig:construction}
\vspace{-15pt}
\end{figure}
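The construction can be sketched in a few lines of Python, reproducing the example of Figure~\ref{fig:construction} (the function name and data layout are our own assumptions):

```python
# Vertices 1..5, dummy vertex n = 6, OD as in the figure caption.
OD = [(2, 1, 2), (1, 2, 4), (1, 3, 4), (2, 4, 1), (2, 4, 2)]
V = [1, 2, 3, 4, 5]
N = 6  # dummy vertex n

def extend(V, OD, n):
    # r_i = (vehicles arriving at i) - (passengers departing from i)
    r = {i: 0 for i in V}
    for lam, o, d in OD:
        r[d] += lam
        r[o] -= lam
    # shortage vertices (r_i < 0) get a dummy edge (i, n) of capacity -r_i
    dummy_capacities = {(i, n): -ri for i, ri in r.items() if ri < 0}
    # excess vertices (r_i > 0) generate the extra requests (r_i, i, n)
    extra_requests = [(ri, i, n) for i, ri in r.items() if ri > 0]
    return dummy_capacities, extra_requests
```

Running `extend(V, OD, N)` yields the capacities $\kappa_{3,6}=1$, $\kappa_{4,6}=2$ and the extra request $(3,2,6)$, matching the figure.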
As each free or occupied vehicle now has a specific destination, we can think of the \textsc{AMoD}\xspace problem as a new \textsc{TAP}\xspace problem over the graph $G'$ and the extended set of requests $OD'$. As rebalancers no longer need to be considered separately from the users, we may redefine $x_{ij}$ to be the total flow along an edge $(i,j)\in E'$, including users and rebalancers. Denote $\bar{E}\coloneqq E'\setminus E$. The cost function for the corresponding \textsc{TAP}\xspace is $F_{E'}(\bm{x})=F_{E}(\bm{x})+F_{\bar{E}}(\bm{x})$, where $F_{\bar{E}}(\bm{x})=\sum_{(i,n)\in \bar{E}}x_{in}c_{in}(x_{in})$.
We show in the remainder of this section that, after choosing~$L$ appropriately, $\bm{x}^*$, which minimizes the system optimum $F_{E'}(\bm{x}^*)$ under the constraints (\ref{eq:tap:flow}), (\ref{eq:tap:positive}) with respect to $G',OD'$, represents a high-quality solution to the \textsc{AMoD}\xspace problem, in which the majority of rebalancing requests are fulfilled.
It is worth clarifying that the cost of the obtained flow is represented by $F_{E}(\bm{x})$.
\subsection{Analysis}\label{sec:theory}
First, we note that the new objective function $F_{E'}$ remains convex owing to the fact that for every new dummy edge its cost function is monotone and increasing with respect to its flow (see Claim~\ref{claim:convex}).
Given a vector assignment $\bm{x}$ for \textsc{TAP}\xspace over $G'$, it will be useful to split it into variables $\bm{x}_E$ corresponding to the edges $E$, and variables $\bm{x}_{\bar{E}}$ corresponding to $\bar{E}$.
The motivation for setting the specific capacity value $\kappa_{in}$ to edge $(i,n)\in \bar{E}$ is given in the following lemma. Recall that $R\coloneqq \sum_{i\in V}\mathds{1}\{r_i>0\}r_i$.
\begin{lemma}[(Optimal assignment for dummy edges)]\label{lem:opt}
Let $\bm{x}^*$ minimize $F_{\bar{E}}(\bm{x}^*)$, under the constraint that $\sum_{(i,n)\in \bar{E}}x^*_{in}=R$. Then $\bm{x}^*_{\bar{E}}=\bm{\kappa}$, where $\bm{\kappa}=\left\{\kappa_{in}\middle| (i,n)\in \bar{E}\right\}$.
\end{lemma}
\begin{proof}
Note that $F_{\bar{E}}(\bm{x}_{\bar{E}})$ is strictly convex, and the constraint $\sum_{x_{in}\in\bm{x}_{\bar{E}}} x_{in}=R$ defines a nonempty feasible set. Thus, the problem has a unique minimum. To find it, we shall use Lagrange multipliers.
Let $\ell\coloneqq|\bar{E}|$, let $g(\bm{x})\coloneqq \sum_{j=1}^{\ell} x_j$, and define the Lagrangian \mbox{$\mathcal{L}(\bm{x},\lambda)\coloneqq F_{\bar{E}}(\bm{x})-\lambda(g(\bm{x})-R)$}.
For any $x_j\in \bm{x}$,
\begin{align*}
\frac{\partial}{\partial x_j} \mathcal{L} = L\left(1+0.75\left(\frac{x_j}{\kappa_j}\right)^4\right)-\lambda, \text{ and } \frac{\partial}{\partial \lambda} \mathcal{L} =R-g(\bm{x}).
\end{align*}The condition $\frac{\partial}{\partial x_j} \mathcal{L}=0$ yields \mbox{$x_j = \kappa_j\left(\frac{\lambda/L-1}{0.75}\right)^{1/4}$},
which is then plugged into $\frac{\partial}{\partial \lambda}\mathcal{L}=0$, where we use the fact that $\sum_{j=1}^\ell \kappa_j =R$, to yield
$\lambda = 1.75L$. Substituting $\lambda$ back gives $x_j=\kappa_j$, which concludes the proof.
\end{proof}
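As a sanity check, the lemma can be verified numerically on a small hypothetical instance (the capacities and the value of $L$ below are arbitrary, not from the paper): sampling feasible assignments around $\bm{\kappa}$ confirms that none improves upon $F_{\bar{E}}(\bm{\kappa})$.

```python
import random

# Hypothetical toy instance: three dummy edges with capacities kappa summing
# to R, and the BPR-style dummy cost c_j(x) = L * (1 + 0.15 * (x/kappa_j)^4).
L_CONST = 1.0
kappa = [2.0, 3.0, 5.0]
R = sum(kappa)  # total rebalancing demand

def F_dummy(x):
    """F_Ebar(x) = sum_j x_j * c_j(x_j), the total cost on the dummy edges."""
    return sum(xj * L_CONST * (1.0 + 0.15 * (xj / kj) ** 4)
               for xj, kj in zip(x, kappa))

best = F_dummy(kappa)  # the optimum predicted by the lemma: x = kappa

random.seed(0)
for _ in range(1000):
    # Random feasible competitor: perturb kappa, then restore sum(x) = R.
    x = [max(0.0, kj + random.uniform(-1.0, 1.0)) for kj in kappa]
    s = sum(x)
    x = [xi * R / s for xi in x]
    assert F_dummy(x) >= best - 1e-9, "x = kappa should be the minimizer"

print(round(best, 2))  # equals 1.15 * L * R for the BPR dummy cost
```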
We arrive at the main theoretical contribution of the paper, which states that $L$ can be tuned to obtain a solution where the fraction of unfulfilled rebalancing requests $\delta>0$ is as small as desired. Notice that for a given solution $\bm{x}$, the expression $\frac{\|\bm{x}_{\bar{E}}-\bm{\kappa}\|_1}{2R}$ represents the fraction of unfulfilled requests.
\begin{theorem}[(Bounded fraction of unfulfilled requests)] \label{thm:amod}
Let $\bm{x}^*\coloneqq\mathop{\mathrm{argmin}}_{\bm{x}} F_{E'}(\bm{x})$ subject to~(\ref{eq:tap:flow}), (\ref{eq:tap:positive}).
For every $\delta\in (0,1]$ there exists $L_\delta\in (0,\infty)$ such that if $L>L_\delta$ then $\frac{\|\bm{x}^*_{\bar{E}}-\bm{\kappa}\|_1}{2R}\leq \delta$.
\end{theorem}
\begin{proof}
Let $\bm{x}^0$ be an assignment satisfying~(\ref{eq:tap:flow}), (\ref{eq:tap:positive}) such that $\bm{x}^0_{\bar{E}}=\bm{\kappa}$, which minimizes the expression $F_{E'}(\bm{x}^0)$. That is, $\bm{x}^0$ minimizes $F_{E'}$ over all $\bm{x}$ which fully satisfy the rebalancing constraints. (Observe that such $\bm{x}^0$ represents the optimal solution of the original \textsc{AMoD}\xspace problem.)
If $\bm{x}^0$ turns out to yield the minimum of $F_{E'}(\bm{x})$ without conditioning on $\bm{x}^0_{\bar{E}}=\bm{\kappa}$ then the result follows. Thus, we assume otherwise.
Fix $\delta\in (0,1]$ and let $\bm{x}^\delta$ represent any assignment, satisfying~(\ref{eq:tap:flow}), (\ref{eq:tap:positive}), and for which it holds that $\frac{\|\bm{x}^\delta_{\bar{E}}-\bm{\kappa}\|_1}{2R}>\delta$.
Our goal is to find a value of $L_\delta$ such that for any $L>L_\delta$ it holds that
$F_{E'}(\bm{x}^\delta)>F_{E'}(\bm{x}^0)$. This implies that with such $L$, any solution returned by \textsc{FrankWolfe}\xspace is guaranteed to have at most a $\delta$-fraction of unfulfilled requests. This is equivalent to requiring that \[F_{\bar{E}}(\bm{x}^\delta)-F_{\bar{E}}(\bm{x}^0)>F_E(\bm{x}^0)-F_E(\bm{x}^\delta).\]
Notice that by Lemma~\ref{lem:opt}, it holds that $F_{\bar{E}}(\bm{x}^\delta)-F_{\bar{E}}(\bm{x}^0)>0$.
Recovering a precise upper bound for the expression $F_E(\bm{x}^0)-F_E(\bm{x}^\delta)$, which depends on $\delta$, is quite difficult. We therefore resort to a crude (over-)estimation of it, which is the value $F_E(\bm{x}^0)$. Thus, we wish to find $L$ such that $F_{\bar{E}}(\bm{x}^\delta)-F_{\bar{E}}(\bm{x}^0)>F_E(\bm{x}^0)$.
Define $\Delta_{in}:=x_{in}-\kappa_{in}$, for every $(i,n)\in \bar{E}$, where \mbox{$x_{in}\in \bm{x}^\delta_{\bar{E}}$}. For every such $x_{in}$ the following holds:
{\small
\begin{align*}\allowdisplaybreaks
c_{in}(& x_{in}) = L\left(1+0.15 \left(\frac{x_{in}}{\kappa_{in}}\right)^{4}\right) \\ & \geq L \left(1+0.15 \left(1+\frac{4\Delta_{in}}{\kappa_{in}}\right)\right) = L \left(1.15+\frac{0.6\Delta_{in}}{\kappa_{in}}\right),
\end{align*}
}where the inequality follows from Bernoulli's inequality. Also, note that $c_{in}(\kappa_{in})=L\cdot 1.15$. Thus,
{\small
\begin{align*}\allowdisplaybreaks
F&_{\bar{E}}(\bm{x}^\delta)=\sum_{(i,n)}x_{in} c_{in}(x_{in}) \geq \sum_{(i,n)}(\kappa_{in}+\Delta_{in}) L \left(1.15+\frac{0.6\Delta_{in}}{\kappa_{in}}\right) \\ & = \sum_{(i,n)}L\left(1.15\cdot \kappa_{in} + 1.75\cdot \Delta_{in}+\frac{0.6\Delta_{in}^2}{\kappa_{in}}\right) \\ & = \sum_{(i,n)}1.15 L \kappa_{in} +\sum_{(i,n)}1.75L\Delta_{in}+ \sum_{(i,n)}L\frac{0.6\Delta_{in}^2}{\kappa_{in}}\\ & = F_{\bar{E}}(\bm{x}^0) +0+ \sum_{(i,n)}L\frac{0.6\Delta_{in}^2}{\kappa_{in}} \geq F_{\bar{E}}(\bm{x}^0)+\frac{0.6L}{R}\sum_{(i,n)}\Delta^2_{in}\\
&\geq F_{\bar{E}}(\bm{x}^0)+\frac{0.6L}{R} \cdot \ell^{-1}\left(\sum_{(i,n)}|\Delta_{in}|\right)^2 \\ & = F_{\bar{E}}(\bm{x}^0)+\frac{0.6L}{R\ell}\left\|\bm{x}^\delta_{\bar{E}}-\bm{x}^0_{\bar{E}}\right\|_1^2 \geq F_{\bar{E}}(\bm{x}^0) +2.4RL\ell^{-1} \delta^2,
\end{align*}}where the second to last inequality is due to Cauchy-Schwarz, and $\ell$ is the number of dummy edges.
Thus, we have just shown that for any $\bm{x}^\delta$ it holds that $F_{\bar{E}}(\bm{x}^\delta)-F_{\bar{E}}(\bm{x}^0)\geq 2.4RL\ell^{-1}\delta^2$. To conclude, setting \mbox{$L_\delta=\frac{F_E(\bm{x}^0)}{2.4R\ell^{-1}\delta^2}$}, it follows for any $L>L_\delta$ that $F_{E'}(\bm{x}^\delta)>F_{E'}(\bm{x}^0)$, which implies that any value $\bm{x}^*$ satisfying the constraints~(\ref{eq:tap:flow}), (\ref{eq:tap:positive}) and minimizing $F_{E'}(\bm{x})$ also guarantees that $\frac{\|\bm{x}^*_{\bar{E}}-\bm{\kappa}\|_1}{2R}\leq \delta$.
\end{proof}
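The chain of inequalities in the proof can also be checked numerically. The following sketch (with arbitrary capacities and $L$; the names are ours) samples feasible deviations from $\bm{\kappa}$ and verifies the resulting lower bound $F_{\bar{E}}(\bm{x}^\delta)-F_{\bar{E}}(\bm{x}^0)\geq 2.4RL\ell^{-1}\delta^2$.

```python
import random

# Hypothetical instance: dummy-edge capacities kappa, total demand R, and
# the BPR dummy cost c(x) = L * (1 + 0.15 * (x/kappa)^4) used in the proof.
L_CONST = 10.0
kappa = [1.0, 2.0, 3.0, 4.0]
R = sum(kappa)
ell = len(kappa)  # number of dummy edges

def F_dummy(x):
    return sum(xj * L_CONST * (1.0 + 0.15 * (xj / kj) ** 4)
               for xj, kj in zip(x, kappa))

base = F_dummy(kappa)  # F_Ebar(x^0), with x^0 restricted to kappa

random.seed(1)
for _ in range(1000):
    # Zero-sum perturbation Delta, so that sum(x) remains R.
    d = [random.uniform(-0.5, 0.5) for _ in kappa]
    m = sum(d) / ell
    x = [kj + (di - m) for kj, di in zip(kappa, d)]
    if min(x) < 0.0:
        continue  # keep only feasible (non-negative) assignments
    delta = sum(abs(xi - kj) for xi, kj in zip(x, kappa)) / (2.0 * R)
    bound = 2.4 * R * L_CONST * delta ** 2 / ell
    assert F_dummy(x) - base >= bound - 1e-9
print("lower bound verified")
```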
We will consider the practical aspects of computing a proper $L$ in Section~\ref{sec:experiments}. The next corollary is the final piece of the puzzle. It proves that when using a proper $L$, not only is $\delta$ bounded, but the value $F_E(\bm{x}^*)$ is also at most $F_E(\bm{x}^0)$.
\begin{corollary}[(Bounded cost of routing)]\label{cor:amod}
Fix $\delta\in (0,1]$ and let $L\in (0,\infty)$ such that $L>L_{\delta}$.
Then (i) $\frac{\|\bm{x}^*_{\bar{E}}-\bm{\kappa}\|_1}{2R}\leq \delta$, and (ii) $F_{E}(\bm{x}^*)< F_E(\bm{x}^0)$,
where $\bm{x}^*=\mathop{\mathrm{argmin}}_{\bm{x}} F_{E'}(\bm{x})$ under constraints~(\ref{eq:tap:flow}), (\ref{eq:tap:positive}).
\end{corollary}
\begin{proof}
Let $\bm{x}^*,\bm{x}^0$ be as defined in the previous proof. It is clear that $\bm{x}^*$ satisfies (i). Now, observe that by definition $F_{E'}(\bm{x}^*)< F_{E'}(\bm{x}^0)$, and by Lemma~\ref{lem:opt}, $F_{\bar{E}}(\bm{x}^0)<F_{\bar{E}}(\bm{x}^*)$. Then the following derivation proves (ii):
\begin{equation*}
F_{E}(\bm{x}^*)+F_{\bar{E}}(\bm{x}^*) < F_{E}(\bm{x}^0)+F_{\bar{E}}(\bm{x}^0) < F_{E}(\bm{x}^0)+F_{\bar{E}}(\bm{x}^*). \qedhere
\end{equation*}
\end{proof}
\section{Experimental results}\label{sec:experiments}
In this section, we use experimental results to demonstrate the power of our approach for \textsc{AMoD}\xspace routing via a reduction to \textsc{TAP}\xspace (Section~\ref{sec:alg_amod}) on a real-world case study. In the first set of experiments in Section~\ref{sec:results} we validate experimentally the theory developed in Section~\ref{sec:theory}. In summary, we observe that the approach yields near-optimal solutions within a few seconds, where most (more than 99\%) of the rebalancing requests are fulfilled when $L$ is properly tuned. We then test the scalability of the approach on scenarios involving as many as 600k user requests, where we observe that running times and convergence rates scale only linearly with the size of the input. In the final set of experiments we demonstrate the benefit of the approach over previous methods.
\subsection{Implementation Details}
All results were obtained using a commodity laptop equipped with 2.80GHz $\times$ 4 core i7-7600U CPU, and 16GB of RAM, running 64bit Ubuntu 18.04 OS. The C\raise.08ex\hbox{\tt ++}\xspace implementation of the \textsc{FrankWolfe}\xspace algorithm is adapted (with merely minor changes) from the \texttt{routing-framework}, which was developed for~\cite{BuchETAL18}. For shortest-path computation in the \textsc{AllOrNothing}\xspace routine, we use \emph{contraction hierarchies}~\cite{GeisbergerETAL12}, which are embedded in the \texttt{routing-framework}, and are in turn based upon the code in the \texttt{RoutingKit} (see~\cite{DibbeltETAL16}). Running times reported below are for a 4-core parallelization.
In our experiments we observed that in some situations, the first few iterations of \textsc{FrankWolfe}\xspace route most of the rebalancers to a select set of dummy edges, so that the actual flow is far larger than the capacity. This leads to numeric overflow with the standard C\raise.08ex\hbox{\tt ++}\xspace \texttt{double} type, causing the program to fail. We mention that this phenomenon was not observed for the modified Davidson cost function~\cite{Akcelik78}, linearized at $95\%$ of the capacity, which has a gentler gradient. To alleviate the problem when working with \textsc{bpr}\xspace, one can resort to using \texttt{long double} or linearize the value of the function after a certain threshold is reached. We chose the latter, linearizing at $500\%$ of the capacity.
We emphasize that this does not affect the final outcome of the algorithm.
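The linearization can be sketched as follows (a minimal illustration in our own notation, not the actual \texttt{routing-framework} code): beyond $500\%$ of the capacity, the \textsc{bpr}\xspace function is replaced by its tangent line, which preserves continuity and differentiability while avoiding the overflow-prone fourth powers.

```python
U_MAX = 5.0  # linearize at 500% of capacity

def bpr_linearized(x, kappa, t0=1.0):
    """BPR cost t0 * (1 + 0.15 * (x/kappa)^4), extended linearly past U_MAX."""
    u = x / kappa
    if u <= U_MAX:
        return t0 * (1.0 + 0.15 * u ** 4)
    value = t0 * (1.0 + 0.15 * U_MAX ** 4)  # cost at the threshold
    slope = t0 * 0.6 * U_MAX ** 3           # derivative of 0.15 * u^4 at U_MAX
    return value + slope * (u - U_MAX)

# Continuity at the threshold, and linear (overflow-safe) growth beyond it.
assert abs(bpr_linearized(5.0, 1.0) - bpr_linearized(5.0 + 1e-9, 1.0)) < 1e-6
print(bpr_linearized(5.0, 1.0), bpr_linearized(10.0, 1.0))
```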
Finally, we mention that we experimented with a few cost functions (including linear, and modified Davidson) for the dummy edges, until we settled on \textsc{bpr}\xspace, which yields the best convergence rates.
\subsection{Data}
Similarly to~\cite{SalazarTsaoEtAl2019}, our experiments are conducted over Manhattan in New York City, where the OD-pairs are inferred from taxi rides that occurred during the morning peak hour on March 1st, 2012. As in~\cite{SalazarTsaoEtAl2019}, we assume central control of all ride-hailing vehicles in Manhattan, and accordingly scale the taxi-ride requests by a factor of $6$.
The total number of real user OD-pairs in our experiments is thus $6\times \num{25960}$ (unless stated otherwise).
In order to take into consideration the fact that autonomous vehicles need to share the road with private vehicles, which should increase the overall cost of travelling along edges, we introduce a parameter of exogenous traffic (as was done in~\cite{SalazarTsaoEtAl2019}). In particular, for a non-dummy edge $(i,j)\in E$, with a flow $x_{ij}$, we assign the cost $c_{ij}\left(x_{ij}+x^{e}_{ij}\right)$, where $x^{e}_{ij}$ denotes the exogenous flow. For simplicity, we set this value so that the ratio of $x^{e}_{ij}$ to the capacity $\kappa_{ij}$ of the edge $(i,j)$ is the same over all edges. That is, we choose a value $\gamma_{\text{exo}}\geq 0$ and set $x^e_{ij}/\kappa_{ij}=\gamma_{\text{exo}}$. Unless stated otherwise, $\gamma_{\text{exo}}=0.8$, which approximates the rush-hour scenario mentioned above. The underlying road-map $G=(V,E)$ was extracted from OpenStreetMap~\cite{HakWeb08}, where $|V|=\num{1352},|E|=\num{3338}$.
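As a small illustration of this model (with made-up numbers), the cost charged to the \textsc{AMoD}\xspace flow on an edge already includes the congestion caused by the background traffic $x^e_{ij}=\gamma_{\text{exo}}\kappa_{ij}$:

```python
def bpr(x, kappa, t0=1.0):
    """Standard BPR travel time: t0 * (1 + 0.15 * (x/kappa)^4)."""
    return t0 * (1.0 + 0.15 * (x / kappa) ** 4)

def edge_cost(x, kappa, gamma_exo=0.8, t0=1.0):
    """Cost seen by AMoD flow x on an edge with exogenous flow gamma_exo * kappa."""
    return bpr(x + gamma_exo * kappa, kappa, t0)

# With gamma_exo = 0.8, even an unused edge pays a congestion surcharge,
# and AMoD flow filling the remaining 20% brings the edge to capacity.
print(round(edge_cost(0.0, 100.0), 4), round(edge_cost(20.0, 100.0), 4))
```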
\subsection{Results}\label{sec:results}
Before proceeding to the experiments we mention that in some results we compare the outcome of our algorithm with an optimal value, denoted by $\text{OPT}$. To obtain it, we run the algorithm, with a corresponding set of parameters, for $\num{10000}$ iterations. This provides a good approximation of the real optimum. E.g., for the final iteration of the algorithm, when $L=96$ minutes, the relative difference in the real and dummy cost is $\num{1.64}\cdot 10^{-7}$ and $\num{2.91}\cdot 10^{-7}$, respectively. The terms real and dummy costs correspond to $F_E$ and $F_{\bar{E}}$, respectively.
\vspace{5pt}
\noindent \textbf{Validation of the theory.} Our first set of experiments is designed to validate the convergence of the approach, and the theory presented in~Section~\ref{sec:alg_amod}. In summary, we observe that already with a small $L$, a solution is achieved in which a large majority of the rebalancing requests are fulfilled. Moreover, the system cost for the rebalanced system is also very close to the optimal value. Importantly, this is achieved within only $\num{100}$ iterations of \textsc{FrankWolfe}\xspace, corresponding to around $15$ seconds.
In order to test how the value of $L$ affects the fraction of unfulfilled requests $\delta$, as stated in Theorem~\ref{thm:amod}, and the real and dummy costs of the solution, i.e., $C_r\coloneqq F_E(\bm{x}),C_d\coloneqq F_{\bar{E}}(\bm{x})$, respectively, we experiment with values of $L$ in the range of $\num{3}$ to $\num{192}$ minutes (see Figure~\ref{fig:basic}).
In terms of the fraction of unfulfilled requests $\delta$, as Theorem~\ref{thm:amod} states, increasing $L$ reduces this value. For instance, after 100 iterations, $L=3$ yields $\delta=0.109$, whereas $L=48$ already yields $\delta=0.009$. It should be noted that a small value of $\delta$ is reached only when $C_d$ is noticeably larger than $C_r$ (see middle and bottom plots in Figure~\ref{fig:basic} for comparison).
This implies that our estimation for $L$ suggested in Theorem~\ref{thm:amod} is quite conservative. From a practical point of view, we recommend iterating over $L$ using binary search until a desired value of $\delta$ is achieved.
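A minimal sketch of this tuning loop is given below. The function \texttt{solve\_tap} is a placeholder of ours for running \textsc{FrankWolfe}\xspace on $G'$ with a given $L$ and reading off $\delta$; here it is mocked by an arbitrary decreasing function of $L$, since $\delta$ decreases as $L$ grows.

```python
def solve_tap(L):
    # Placeholder for ~100 FrankWolfe iterations on G' returning delta;
    # mocked here by a decreasing function of L for illustration only.
    return 0.3 / (1.0 + L)

def tune_L(target_delta, lo=0.0, hi=1024.0, iters=40):
    """Binary search for (approximately) the smallest L with delta <= target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if solve_tap(mid) <= target_delta:
            hi = mid  # feasible: a smaller L might still suffice
        else:
            lo = mid  # infeasible: L must grow
    return hi

L_star = tune_L(0.01)
assert solve_tap(L_star) <= 0.01
print(round(L_star, 3))  # the mock reaches delta <= 0.01 exactly at L = 29
```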
We now discuss the relation between the magnitude of $L$ and $C_r$. We observe that $C_r$ increases with $L$. This follows from the fact that a smaller $\delta$ requires rebalancers to increase the total length of their trips in order to accommodate more requests. We also observe a possible drawback of setting $L$ needlessly large: since a smaller $\delta$ requires more iterations to obtain, it takes longer to settle on the correct value of $C_r$.
Lastly, observe that already after approximately 50 iterations, all three values reach a plateau, i.e., increasing the number of iterations only slightly changes the corresponding value. Also note that values of OPT, computed for $L=96$, are very close to corresponding values for the same $L$ after only $100$ iterations.
This indicates that a small number of iterations suffices to reach a near-optimal value, be it $\delta$, $C_r$, or $C_d$.
\begin{figure
\centering
\includegraphics[width=0.9\columnwidth,trim={0.5cm 1.5cm 1.5cm 2.5cm},clip]{fig/exp_basic.pdf}\vspace{-5pt}
\caption{Validation of theoretical results. For all the plots, OPT represents the corresponding optimal value for $L=96$. [TOP] A plot of $\delta$. The left number in the legend represents $L$, whereas the value in the brackets denotes $\delta$ after iteration $100$. For instance, OPT yields $\delta=0.004$, whereas for the same $L$ after $100$ iterations we have that $\delta=0.007$.
[CENTER] A plot of $C_r$. The left number in the legend represents $L$, whereas the value in brackets denotes $(C_r-\text{OPT})/\text{OPT}$ after iteration $100$. E.g., $C_r$ for $L=96$ is only $\num{1.7}\%$ longer than OPT.
[BOTTOM] A plot for $C_d$. }
\label{fig:basic}
\vspace{-15pt}
\end{figure}
\vspace{5pt}
\noindent \textbf{Scalability.} In this set of experiments we demonstrate the scalability of the approach by increasing the total number of OD requests. We use the set of $6\times\num{25960}$ OD pairs, which was utilized in the above experiments, as a basis, and then multiply this set by $1,2,3$ and $4$. The last case includes $\num{623040}$ travel requests. In an attempt to make the four settings similar in terms of the total traffic on the road, we pair each setting with exogenous traffic of $\gamma_\text{exo}\in \{0.8,0.6,0.4,0.2\}$, respectively.
Each scenario was executed for $1000$ iterations. For the first two plots we chose to show results only for the first $100$ iterations, as the change is very minor from this point on. We also omit a plot for the dummy cost as we found it uninformative. The results are presented in Figure~\ref{fig:scale}.
The rate of convergence and the final result for $\delta,C_r$ behave similarly for the four settings. However, we note that we expect to see a more significant difference for a more realistic data set, as a bigger data set would have more diverse OD pairs. Nevertheless, we emphasize that our four settings do not yield similar solutions with respect to flow distribution, as the scale of flow affects the cost. In terms of the running time, we mention that it scales proportionally to the number of OD pairs (e.g., x$1$ is roughly 4 times quicker than x$4$), and the rate of change with respect to the number of iterations is roughly constant.
\begin{figure
\centering
\includegraphics[width=0.9\columnwidth,trim={0.5cm 1.5cm 1.5cm 2.5cm},clip]{fig/exp_scale.pdf}\vspace{-5pt}
\caption{Plots for scalability experiments. [TOP] A plot of $\delta$, i.e., fraction of unfulfilled rebalancing requests. The left notation in the legend ($i$x) indicates the used number of copies of the original OD set, whereas the value in brackets denotes $\delta$ after iteration $100$.
[CENTER] A plot of $C_r$. The value in brackets denotes the ratio of deviation
for $C_r$ between the iterations $100$ and $1000$.
[BOTTOM] A plot of total running time. The value in brackets denotes the running time after iteration $100$.}
\label{fig:scale}
\vspace{-15pt}
\end{figure}\vspace{5pt}
\noindent \textbf{Comparison with previous work.} Here we demonstrate the benefits of computing a solution to \textsc{AMoD}\xspace using the precise formulation of the cost function \textsc{bpr}\xspace, as opposed to a piecewise-affine approximation of \textsc{bpr}\xspace, or a congestion-unaware solution. The piecewise-affine approximation approach was suggested in~\cite{SalazarTsaoEtAl2019}, where the \textsc{bpr}\xspace function is approximated by two affine functions (see more details in Section~\ref{sec:related}). In terms of computation time, our approach yields the results in around 15 seconds, whereas the reported running times of~\cite{SalazarTsaoEtAl2019} are below 4 minutes for similar hardware. We wish to point out that~\cite{SalazarTsaoEtAl2019} do not minimize travel time for rebalancing vehicles, and also include walking times in their analysis. The congestion-unaware approach, which was utilized in earlier works (see, e.g., \cite{PavoneSmithEtAl2011}), generates a solution without considering the effect of congestion of the routed vehicles on the overall cost.
We implemented all three types of solution using our framework, by replacing the precise formulation of the \textsc{bpr}\xspace function, where relevant. We wish to clarify that the cost function used on the dummy edges remains \textsc{bpr}\xspace, to guarantee rebalancing (see Section~\ref{sec:theory}), as this does not affect the real solution cost. The computation was done with $L=96$. The same applies to OPT which was computed using the full \textsc{bpr}\xspace.
We ran the three approaches for varying values of \mbox{$\gamma_{\text{exo}}\in [0,2]$} (see plots of the comparison in Figure~\ref{fig:exo}). As was observed in previous studies, congestion-unaware planning underestimates the real travel cost, which results in plans that divert traffic to overly congested routes. The deviation from OPT increases with exogenous traffic. Already for $\gamma_{\text{exo}}=0.8$, the total cost is around $1.3$ times OPT. Approximate \textsc{bpr}\xspace is much more accurate in this respect. However, it either under- or overestimates the real cost, which yields plans where vehicles are rerouted from low-cost routes to more congested routes. Approximate \textsc{bpr}\xspace yields plans whose deviation from OPT is twice as high as that of the precise \textsc{bpr}\xspace when $\gamma_{\text{exo}}\in [0, 1.1]$; it coincides with our solution for $\gamma_{\text{exo}}=1.2$; for larger values of $\gamma_{\text{exo}}$ the deviation becomes more noticeable. For instance, when $\gamma_{\text{exo}}=1.3$ it yields a solution of around~$1.1$ times OPT. In contrast, our approach yields an accurate estimation of OPT for the entire range of~$\gamma_{\text{exo}}$.
\begin{figure
\centering
\includegraphics[width=0.9\columnwidth,trim={0.5cm 16cm 1.4cm 2.5cm},clip]{fig/exp_exo.pdf}\vspace{-5pt}
\caption{Plots for comparison experiments. For each of the three types of cost functions we plot the ratio between the obtained cost $C_r$ and the cost for OPT, as a function $\gamma_{\text{exo}}$ ($x$-axis). The corresponding value for $\gamma_{\text{exo}}=0.8$ is given in the brackets.}
\label{fig:exo}
\vspace{-15pt}
\end{figure}
\section{Discussion and Future Work}\label{sec:future}
This paper presented a computationally-efficient framework to compute the system-optimal operation of a fleet of self-driving cars providing on-demand mobility in a congestion-aware fashion.
To the best of our knowledge, this is the first scheme providing high-quality routing solutions for large-scale \gls{abk:amod} fleets within seconds, thus enabling real-time implementations for operational control through receding horizon optimization (to account for new information that is revealed over time) and randomized sampling (to recover integer flows with near-optimality guarantees \cite{Rossi2018}).
Our approach consists of reducing our problem to a \textsc{TAP}\xspace instance, such that an optimal solution for the latter yields an optimal solution for \textsc{AMoD}\xspace. This allows us to leverage modern and highly effective approaches for \textsc{TAP}\xspace.
We showcased the benefits of our approach in a real-world case-study. The results showed our method outperforming state-of-the-art approaches by at least one order of magnitude in terms of computational time, whilst improving the system performance by up to 20\%.
This work opens the field for several extensions, and leaves a few open questions.
On the side of theory, our immediate goal is to analyze the convergence rate of our approach. There is an abundance of recent results on the theoretical properties of \textsc{FrankWolfe}\xspace (see, e.g.,~\cite{LacosteJaggi15,PenaRodriguez16}), which regained popularity in recent years due to applications in machine learning\footnote{See \url{https://sites.google.com/site/frankwolfegreedytutorial/}.}. However, it is not clear whether these results are applicable to our setting, and what their implications are for the convergence of the real cost $F_E$, rather than $F_{E'}$. We also plan to investigate approaches to obtain a more informative estimation of the constant $L$ (see Theorem~\ref{thm:amod}).
On the implementation side, we mention that the performance can be further improved by using customizable contraction hierarchies~\cite{DibbeltETAL16} for pathfinding, and bundling identical OD pairs. We also point out the fact that the \textsc{AllOrNothing}\xspace routine is embarrassingly parallel, and a significant speedup can be gained from using a multi-core machine.
Finally, on the application side, we aim at exploiting the high computational efficiency of the presented approach by implementing it in real-time using receding horizon schemes.
To this end, it might be necessary to employ a time-expansion of the road-graph and account for stochastic effects, as it was done in~\cite{IglesiasRossiEtAl2016,TsaoIglesiasEtAl2018}.
In addition, it is of interest to extend this framework to capture the interaction with public transit~\cite{SalazarRossiEtAl2018} and the power grid~\cite{RossiIglesiasEtAl2018}, and account for the interaction of self-driving vehicles with the urban infrastructure.
\section{Related work}\label{sec:related}
The \gls{abk:bpr} developed the most widely used volume-delay function~\cite{BPR1964}, of which several variations exist~\cite{Spiess1990}.
It has been used in research works ranging from dynamic estimation of congestion~\cite{RivasInmaculadaEtAl2016} to route planning in agent-based models~\cite{AslamLimEtAl2012, ManleyChengEtAl2014}, and including simulations of route planning for \gls{abk:amod} in~\cite{TreiberHenneckeEtAl2000,FagnantKockelman2014, LevinKockelmanEtAl2017}.
On the algorithmic side, the \textit{control} of \gls{abk:amod} systems has been mostly based on network flow models and thresholded approximations of the \gls{abk:bpr} function~\cite{RossiZhangEtAl2017}, also accounting for the interaction with public transit~\cite{SalazarRossiEtAl2018} and the power-grid~\cite{RossiIglesiasEtAl2018}. In such a framework, cars can travel through a road at free-flow speed until a fixed capacity of the road is reached. At that point, no more cars can traverse it. Such models provide a conservative approach to model congestion for control purposes, with the advantage that they can be used to formulate optimization problems solvable with off-the-shelf linear programming algorithms.
To avoid oversimplifying congestion phenomena, a piecewise-affine approximation of the \gls{abk:bpr} function is presented in~\cite{SalazarTsaoEtAl2019} and combined with convex relaxation techniques to devise a congestion-aware routing scheme resulting in a quadratic program. Nevertheless, in large urban environments with several thousand transportation requests such approaches usually lead to computational times on the order of minutes, possibly making them not completely suitable for real-time control purposes.
Mathematically, \textsc{AMoD}\xspace can be viewed as a special case of \textsc{TAP}\xspace in which the cost and impact of rebalancing empty vehicles is ignored. Historically, \textsc{TAP}\xspace was introduced to model and quantify the impact of independent route choices of human drivers on themselves, and the system as a whole (see~\cite{Patriksson15, Sheffi85}). Algorithmic approaches for \textsc{TAP}\xspace typically assume a \emph{static} setting in which travel patterns do not change with time, which allows casting the problem as a multi-commodity minimum-cost flow~\cite{Ahuja93}, which can then be viewed as a \emph{convex programming} problem. One of the most popular tools for convex programming in the context of \textsc{TAP}\xspace is the Frank-Wolfe algorithm~\cite{FrankWolfe56}. What makes this algorithm particularly suitable for solving \textsc{TAP}\xspace is that its direction-finding step corresponds to multiple shortest-path\footnote{The shortest-path problem corresponds to finding the shortest path between two given vertices of a given graph. It is important to note that in the context of the Frank-Wolfe algorithm, the multitude of paths should be computed independently of each other, disregarding the effect each path has on the others.} queries (we expand on this point in Section~\ref{sec:convex}). Recent advances in path finding tailored for transportation networks~\cite{BastETAL2016}, including contraction hierarchies~\cite{GeisbergerETAL12,DibbeltETAL16}, have made the Frank-Wolfe approach even more successful. A recent paper~\cite{BuchETAL18} introduces a number of improvements for path-finding, and combines those with the Frank-Wolfe method. Notably, the authors present experimental results for \textsc{TAP}\xspace, where their approach computes within seconds a routing scheme for up to 3 million requests over a network of a large metropolitan area.
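To illustrate the direction-finding step, the following self-contained sketch of ours runs Frank-Wolfe for a system-optimal assignment on a toy network of two parallel edges with \textsc{bpr}\xspace costs (all numbers are made up); the all-or-nothing step plays the role of the shortest-path queries.

```python
D = 10.0                 # total demand between the single OD pair
t0 = [1.0, 2.0]          # free-flow travel times of the two parallel edges
kappa = [5.0, 5.0]       # edge capacities

def marginal(e, x):
    # Gradient of x * t0 * (1 + 0.15 (x/k)^4):  t0 * (1 + 0.75 (x/k)^4).
    return t0[e] * (1.0 + 0.75 * (x / kappa[e]) ** 4)

x = [D, 0.0]  # initial all-or-nothing assignment
for k in range(5000):
    # Direction finding: all-or-nothing assignment w.r.t. marginal costs
    # (in a real network, one shortest-path query per OD pair).
    e = 0 if marginal(0, x[0]) <= marginal(1, x[1]) else 1
    y = [D if i == e else 0.0 for i in range(2)]
    gamma = 2.0 / (k + 2.0)  # standard diminishing step size
    x = [(1.0 - gamma) * xi + gamma * yi for xi, yi in zip(x, y)]

# At the system optimum, the marginal costs of both used edges are equal.
assert abs(marginal(0, x[0]) - marginal(1, x[1])) < 0.05
print([round(xi, 1) for xi in x])
```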
Finally, we mention that \textsc{AMoD}\xspace is closely related to \emph{Multi-Robot Motion Planning} (MRMP), which consists of computing collision-free paths for a group of physical robots, moving them from an initial set of positions to their destinations. MRMP has been studied in discrete~\cite{YuLaValle16,SharonETAL15} and continuous~\cite{SoloveyETAL15,ShomeETAL2019} domains.
\section{Introduction}
\input{chapters/introduction.tex}
\input{chapters/preliminaries.tex}
\input{chapters/convex_optimization.tex}
\input{chapters/amod.tex}
\input{chapters/experiments.tex}
\input{chapters/conclusion.tex}
\ifdoubleblind
\else
\section*{Acknowledgments}
We would like to thank Valentin Buchhold for advising on the {\tt routing-framework}.
We thank Dr.\ Ilse New and Michal Kleinbort for proofreading this paper and providing us with their comments and advice. Our special thanks are also extended to our labmates Ramón Darío Iglesias, Matthew Tsao, and Jannik Zgraggen for the fruitful discussions.
The second author would like to express his gratitude to Dr.\ Chris Onder for his support.
This research was supported by the National Science Foundation under CAREER Award CMMI-1454737 and the Toyota Research Institute (TRI). The first author is also supported by the Fulbright Scholars Program. This article solely reflects the opinions and conclusions of its authors and not NSF, TRI, Fulbright, or any other entity.
\fi
\bibliographystyle{plainnat}
\section{Introduction}
Symmetric tensor gauge theories \cite{Gu:2006vw,Xu:2006,Pankov:2007,Xu2008,Gu:2009jh,rasmussen,Slagle:2018kqf,Du:2021pbc} are non-relativistic field theories, which have been studied extensively in recent years due to their association with fracton models \cite{Pretko:2016kxt,Pretko:2016lgv,Pretko:2018jbi}. (See \cite{Nandkishore:2018sel,Pretko:2020cko,Grosvenor:2021hkn} for a review on this subject.) The simplest theory in this class, commonly known as the ``rank-2 scalar charge theory'' \cite{Pretko:2016kxt,Pretko:2016lgv}, involves $U(1)$ gauge fields $(A_\tau,A_{ij})$ with gauge symmetry
\ie\label{scalarchargetheory-gaugesym}
A_\tau \sim A_\tau + \partial_\tau \alpha~, \quad A_{ij} \sim A_{ij} + \partial_i \partial_j \alpha~.
\fe
Here, $\alpha$ is the gauge parameter, and $A_{ij}$ is symmetric in the spatial indices $i,j$, i.e., $A_{ij} = A_{ji}$.\footnote{A variant of the above theory has only off-diagonal components of the gauge field $A_{ij}$, i.e., $A_{ij}=0$ for $i =j$ \cite{Xu2008,Slagle:2017wrc,Bulmash:2018lid,Ma:2018nhd,You:2018zhj,You:2019cvs,You:2019bvu,
Radicevic:2019vyb,paper1,paper2,paper3,Dubinkin:2020kxo}. Its properties and dynamics are quite different than the theory we will study here.}
Most of the discussion in this note will be in Euclidean signature spacetime, and we will denote the Euclidean time direction by $\tau$. In the few places where we rotate to Lorentzian signature, we denote the Lorentzian time by $t$. We will often abuse the terminology and use the phrase ``time-like'' to mean ``along the Euclidean time direction.''
When this gauge theory is coupled to a matter theory, the gauge field $(A_\tau,A_{ij})$ couples to the Noether current $(J_\tau,J^{ij})$. The gauge symmetry \eqref{scalarchargetheory-gaugesym} implies that the Noether current $(J_\tau,J^{ij})$ must satisfy a dipole current conservation equation
\ie\label{Noecn}
\partial_\tau J_\tau = \partial_i \partial_j J^{ij}~, \qquad J^{ij} = J^{ji}.
\fe
This global symmetry and the current conservation have been studied in \cite{Griffin:2013dfa,Griffin:2014bta,Pretko:2016lgv,Pretko:2018jbi,Gromov:2018nbv,Seiberg:2019vrp,Shenoy:2019wng,
Gromov:2020rtl,Gromov:2020yoc,Du:2021pbc,Stahl:2021sgi}. A matter theory containing the Noether current $(J_\tau,J^{ij})$ has a \emph{dipole} global symmetry generated by the conserved scalar and dipole charges\footnote{The variant of the tensor gauge theory with only off-diagonal terms is coupled to a matter theory whose Noether current $(J_\tau,J^{ij})$ satisfies \eqref{Noecn}, but it has only off-diagonal components, i.e., $J^{ij}=0$ for $i=j$. As for the gauge theory, this matter theory is quite different than the theory with diagonal elements in $J^{ij}$. In particular, its Noether current leads to a \emph{subsystem} global symmetry generated by the charges
\ie
Q_i(x^i) = \int_{\text{fixed}~x^i} J_\tau~,
\fe
where the integral is over the subspace with fixed $x^i$. This symmetry is significantly larger than the dipole symmetry \eqref{dipolechargem}. Examples of such theories were analyzed in \cite{PhysRevB.66.054526,You:2018zhj,You:2019bvu,paper1,paper2}.}
\ie\label{dipolechargem}
Q = \int_\text{space} J_\tau~, \qquad Q^i = \int_\text{space} x^i J_\tau~.
\fe
As we will discuss below, such global symmetries should be handled with care. The factor of $x^i$ in the charge is not well defined on a compact space, and even on $\mathbb R^d$ it grows at infinity, so the action of $Q^i$ might take us out of the allowed space of fields.
If the matter theory is invariant under spatial translations, there is also a conserved momentum operator $P_i$. Together, they satisfy
\ie\label{PQcommu}
[P_j,Q^i] = -i\delta^i_j Q~.
\fe
This mixing of the global symmetry with translations will be important below.
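One way to see \eqref{PQcommu} is to note that under a spatial translation $x^i \to x^i + a^i$, the dipole charge \eqref{dipolechargem} shifts by a multiple of the total charge,
\ie
Q^i = \int_\text{space} x^i J_\tau ~\to~ \int_\text{space} (x^i + a^i)\, J_\tau = Q^i + a^i Q~,
\fe
so translations cannot commute with $Q^i$; expanding this transformation to first order in $a^j$ reproduces the commutator \eqref{PQcommu}.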
A typical matter theory in $d$ spatial dimensions with the conservation equation \eqref{Noecn} is the Lifshitz theory (see \cite{Chen:2009ka}, and references therein) with the action
\ie\label{intro-dipphi-action1t}
S = \oint d\tau d^dx~ \left[ \frac{\mu_0}{2} (\partial_\tau \phi)^2 + \frac{1}{2\mu} \left(\sum_i\partial_i^2 \phi\right)^2 \right]~.
\fe
In this case, the conserved current \eqref{Noecn} is\footnote{Here we follow the conventions in \cite{Gorantla:2021svj} where when we analytically continue to Lorentzian signature, $J_\tau$ does not get another factor of $i$ due to its subscript.\label{conventionsf}}
\ie\label{dipphi-momcuri}
&J_\tau = i\mu_0 \partial_\tau \phi~,\qquad J^{ij} = \frac{i}{\mu} \partial^i\partial^j \phi~,
\\
&\partial_\tau J_\tau = \partial_i\partial_j J^{ij}~
\fe
and the conserved charges are the scalar and dipole charges \eqref{dipolechargem} implementing
\ie\label{phishifi}
\phi \to \phi +c + c_{i} x^i~,
\fe
with constants $c$ and $c_i$. As we commented after \eqref{dipolechargem}, such a transformation is subtle. If we work in compact space, it is not well defined. And if we work on $\mathbb R^d$, then this shift changes the behavior of $\phi$ at infinity and takes us out of the allowed space of fields.\footnote{Actually, in this case, the equation of motion $\mu_0\partial_\tau^2 \phi ={1\over \mu} \partial_i\partial^i \partial_j\partial^j\phi$ suggests that there are additional conserved charges -- multipole charges, e.g., $Q^{ij}=\int_\text{space} x^ix^j J_\tau$ and $Q^{ijk}=\int_\text{space} x^ix^jx^k J_\tau$, implementing the transformations $\phi \to \phi + c_{ij} x^ix^j$ and $\phi \to \phi + c_{ijk} x^ix^jx^k$, respectively. These transformations are even more singular than the shift \eqref{phishifi} and might not even leave the action \eqref{intro-dipphi-action1t} invariant.}
In most papers on the Lifshitz theory, the scalar field $\phi$ is noncompact.
Instead, following the discussion of the 2+1d case in \cite{Henley1997,
Moessner2001,Vishwanath:2004,Fradkin:2004,Ardonne:2003wa,2005,2018PhRvB..98l5105M,Yuan:2019geh,Lake:2022ico}, we will be interested in the case where the scalar is compact, i.e., $\phi \sim \phi + 2\pi$.
This compactness will have important consequences below. Among other things, the theory \eqref{intro-dipphi-action1t} has more global symmetries in addition to the dipole symmetry generated by the current \eqref{dipphi-momcuri}.
We can also go in the reverse direction. Given a matter theory containing the Noether current $(J_\tau,J^{ij})$ satisfying \eqref{Noecn}, we can gauge the dipole global symmetry by coupling the current to the gauge field $(A_\tau,A_{ij})$ as
\ie
iA_\tau J_\tau + iA_{ij} J^{ij}~.
\fe
We can also add kinetic terms for the gauge fields, such as $E_{ij}^2$ and $B_{[ij]k}^2$, where
\ie
&E_{ij} = \partial_\tau A_{ij} - \partial_i \partial_j A_\tau~,\\
&B_{[ij]k} = \partial_i A_{jk} - \partial_j A_{ik}
\fe
are the electric and magnetic fields. Then, we can study the pure gauge theory of $(A_\tau,A_{ij})$ without matter.
There are some important questions and subtleties in both the matter and gauge theories mentioned above:
\begin{itemize}
\item It is common to analyze a theory in finite volume by placing it on a compact space, such as a flat spatial torus with periodic boundary conditions. However, if we place the matter theory with a dipole global symmetry on a compact space, the dipole charge $Q^i$ is not well defined even if the scalar charge $Q$ vanishes \cite{Seiberg:2019vrp}. See also the comment after \eqref{dipolechargem}.
\item The pure gauge theory of $(A_\tau,A_{ij})$ famously has fracton defects, i.e., defects that describe world-lines of immobile particles, or fractons. The immobility of fractons is usually attributed to conservation of scalar and dipole charges \cite{Pretko:2016kxt}. However, in a gauge theory, the notion of ``conservation of charge'' does not make sense in compact space because the global symmetry generated by that charge is gauged.\footnote{When the theory is placed on a space with a boundary, the notion of gauge charge depends on the boundary conditions, and in a noncompact space, we can discuss the total gauge charge measured at infinity.} Here, ``charge'' refers to both scalar and dipole charges.
\item What is the geometric setup for such tensor gauge theories? What are the allowed gauge transformations and transition functions? What are the nontrivial bundles and how are they characterized? What replaces the notion of holonomies?
\end{itemize}
The goal of this paper is to address these subtleties, and make the statement of immobility of fractons in theories with dipole global symmetries more precise. Following \cite{Seiberg:2019vrp,paper1,paper2,paper3,Gorantla:2020xap,Gorantla:2020jpy,Rudelius:2020kta,Gorantla:2021svj,
Gorantla:2021bda,Burnell:2021reh}, we will focus on the global symmetries and their consequences and then we will study the corresponding gauge theory.
For simplicity, let us consider the 1+1d continuum theory described by the action
\ie\label{intro-dipphi-action1}
S = \oint d\tau dx~ \left[ \frac{\mu_0}{2} (\partial_\tau \phi)^2 + \frac{1}{2\mu} (\partial_x^2 \phi)^2 \right]~,
\fe
where $\phi \sim \phi + 2\pi$ is a compact scalar, and $\mu_0$ and $\mu$ are coupling constants with mass dimensions $0$ and $2$. Due to the mass dimension and periodicity of $\phi$, the local operators $e^{i\phi}$ and $\partial_x \phi$ exist in this continuum theory. The fact that $e^{i\partial_x \phi}$ does not exist, because $\partial_x \phi$ has mass dimension $1$, will have important consequences below.
The theory has a dipole global symmetry that shifts $\phi$ as
\ie
\phi(\tau,x) \rightarrow \phi(\tau,x) + c + c_x x~.
\fe
We will comment on the global properties of $c$ and $c_x$ momentarily.
This is the simplest scalar field theory with a dipole global symmetry; more general theories with multipole global symmetries have been discussed extensively in \cite{Griffin:2013dfa,Griffin:2014bta,Pretko:2018jbi,Gromov:2018nbv,Shenoy:2019wng,Gromov:2020rtl,Gromov:2020yoc,Stahl:2021sgi}.
We now turn to the global aspects of the above dipole global symmetry.
The parameter $c\sim c+2\pi$ is a circle-valued constant, which generates an ordinary $U(1)$ symmetry.
Following the standard terminology in string theory, we will refer to this symmetry as the $U(1)$ momentum global symmetry.\footnote{Here by ``momentum" we mean the momentum in the target space, rather than on the worldsheet. In the condensed matter literature, this symmetry is referred to as the ``particle number symmetry."}
On the other hand, $c_x$ is a real constant with mass dimension $1$, which generates a momentum dipole symmetry.
On noncompact space, the symmetry group of the momentum dipole symmetry is the noncompact group of real numbers $\mathbb{R}$ (rather than the compact group $U(1)$).
If we place the theory on a spatial circle of length $\ell_x$, the shift $c_x x$ is not well defined unless $c_x \in \frac{2\pi}{\ell_x} \mathbb Z$.
So, on a compact space, the symmetry group generated by $c_x$ is actually the discrete group of integers $\mathbb Z$.
The action \eqref{intro-dipphi-action1} is also invariant under spatial translations. Denote the charge of the $U(1)$ momentum symmetry by $Q$, the generator of the $\mathbb Z$ momentum dipole symmetry by $U$, and the generator of spatial translations by $P$. They satisfy
\ie
\, [P,U] = \frac{2\pi}{\ell_x} QU~.
\fe
It differs from \eqref{PQcommu} because on a compact space the dipole symmetry is $\mathbb Z$ rather than $\mathbb R$.
Interestingly, the continuum theory \eqref{intro-dipphi-action1} has an infinite ground state degeneracy.\footnote{A similar phenomenon has been noted in the 2+1d quantum dimer model at the Rokhsar-Kivelson point \cite{Rokhsar1988} and in the 2+1d quantum Lifshitz model \cite{Henley1997,Fradkin:2004}.} This can be understood as a result of the symmetries of the model as follows.
In addition to the $U(1)$ momentum and the $\mathbb{Z}$ momentum dipole symmetries discussed above, the continuum theory \eqref{intro-dipphi-action1} has a $U(1)$ winding dipole global symmetry. We will discuss it in detail below.
Denoting the $U(1)$ charge of the winding dipole symmetry by $\tilde{\mathcal Q}$, we have
\ie
\, [\tilde{\mathcal{Q}},U] = U ~,
\fe
or, in terms of the more general group elements $U_m=U^m$ and $e^{i\theta \tilde{\mathcal{Q}}}$,
\ie
U_m e^{i\theta \tilde{\mathcal{Q}}} = e^{-im\theta}e^{i\theta \tilde{\mathcal{Q}}}U_m~.
\fe
This lack of commutativity between the group elements of the momentum and winding dipole symmetries means that the Hilbert space realizes this symmetry projectively. And as a result, the ground state is infinitely degenerate. More abstractly, this can be described as an 't Hooft anomaly between these symmetries.
In Section \ref{sec:dipphi}, we will analyze the theory \eqref{intro-dipphi-action1} in more detail. In order to regularize the infinite ground state degeneracy, we will formulate it on a finite Euclidean lattice. And then, in order to preserve the symmetries of the continuum theory, we will study its modified Villain version following \cite{Sulejmanpasic:2019ytl,Gorantla:2021svj}.
On a lattice with $L_x$ sites, the modified Villain model has $L_x$ ground states. This degeneracy becomes infinite in the continuum limit.
Curiously, there are at least three natural continuum limits of this lattice model. They have the same action \eqref{intro-dipphi-action1}, but differ in the identifications on the scalar field.
It turns out that this lattice model is the same as the modified Villain version of the 2+1d $\phi$-theory\footnote{The continuum 2+1d $\phi$-theory is described by the action
\ie\label{2+1contphi}
S=\oint d\tau dx dy\left[\frac{\mu_0}{2}(\partial_\tau\phi)^2+\frac{1}{2\mu}(\partial_x\partial_y\phi)^2\right]~.
\fe} of \cite{Gorantla:2021svj} on a slanted spatial 2-torus (as in \cite{Rudelius:2020kta}) with identifications
\ie
(\hat x, \hat y) \sim (\hat x+L_x,\hat y) \sim (\hat x+1, \hat y-1)~,
\fe
where the integers $(\hat x,\hat y)$ label the sites of the spatial lattice,\footnote{Note that $\hat x $ and $\hat y$ are not unit vectors, but integers labeling the sites.} and $L_x$ is the number of sites in the $x$ direction.
This equivalence between the 1+1d theory and the 2+1d theory on a slanted torus exists even for the $U(1)$ and $\mathbb Z_N$ dipole gauge theories described below. The agreement between the analyses of the 1+1d theory here and the 2+1d theory in \cite{Rudelius:2020kta} provides an interesting perspective and a good check on these two independent discussions.
In Section \ref{sec:dipA}, we will study the pure gauge theory of the $U(1)$ dipole gauge fields $(A_\tau,A_{xx})$ that couple to the $U(1)$ dipole global symmetry of the 1+1d dipole $\phi$-theory. It is the 1+1d version of the pure gauge theory of $(A_\tau,A_{ij})$ mentioned around \eqref{scalarchargetheory-gaugesym}. The gauge theory has defects that describe world-lines of fractons. Which defects should be considered and their properties depend on the second and third subtleties above.
A crucial element in the analysis of gauge theories is their electric and magnetic global symmetries \cite{Gaiotto:2014kfa}. The electric global symmetries are associated with shifts of the gauge fields that leave the action invariant, but are not gauge transformations.
In a pure gauge theory like ours, the system does not have charged dynamical fields and the objects charged under the global symmetry are various line operators and defects. In a relativistic system, people often abuse the terminology and do not distinguish between line operators, which act at a given time, and line defects, which are supported on a time-like line.\footnote{As we said above, our discussion will be mostly in Euclidean spacetime. Then, when we say that the defect is supported on a time-like line, we mean that it is supported on a line along the Euclidean time direction.} The latter represent the world-lines of probe massive particles.
However, in our case, which is not relativistic, the distinction between these two notions is important. We refer to symmetries that act on operators as ordinary or \emph{space-like global symmetries}, and to symmetries that act on defects, but not on operators, as \emph{time-like global symmetries}. See Appendix \ref{app:timelike} for a more detailed discussion of time-like global symmetries.
Let us return to the $U(1)$ dipole gauge theory. It was originally argued in \cite{Pretko:2016kxt,Pretko:2016lgv,Pretko:2018jbi} that when it is coupled to matter fields, the matter fields are immobile. We will study the theory without dynamical matter fields. Instead, the theory has defects
\ie\label{fractondefi}
\exp\left( i \oint d\tau~A_\tau(\tau,x) \right)~,
\fe
which represent the world-lines of probe charged particles. As in the discussion of a different theory in \cite{paper1,paper2,paper3}, it is easy to see, using the gauge transformation laws of the gauge field \eqref{scalarchargetheory-gaugesym}, that these defects are immobile -- they are fractons.
Below, we will derive the immobility of the defect \eqref{fractondefi} as a consequence of a global symmetry. The theory
has a time-like global symmetry that acts on this fracton defect as
\ie\label{timelike-dip-sym}
\exp\left( i \oint d\tau~A_\tau(\tau,x) \right) \rightarrow \exp\left(ic_\tau + 2\pi i m \frac{x}{\ell_x}\right) \exp\left( i \oint d\tau~A_\tau(\tau,x) \right)~,
\fe
where $c_\tau \sim c_\tau + 2\pi$ is circle-valued, and $m$ is an integer. Fracton defects at different positions carry different time-like dipole charges, so they cannot be deformed to each other without violating the time-like global symmetry.
This explains the restricted mobility of fractons using global symmetry, rather than gauge symmetry. It also gives a more precise explanation of the intuitive ``dipole moment conservation'' discussed in \cite{Pretko:2016kxt,Pretko:2016lgv,Pretko:2018jbi}.
Curiously, the operator $\oint dx~ A_{xx}$, which is a line observable acting at a fixed time, does not need exponentiation for gauge invariance. This will be discussed in detail below.
We will see that there are other consistent continuum tensor gauge theories with the same Lagrangian, but with different global properties of the fields where the fractons \eqref{fractondefi} are absent.
In Section \ref{sec:dipZN}, we will study the 1+1d $\mathbb Z_N$ dipole gauge theory. Following \cite{Gorantla:2021svj}, we will consider a BF version of the theory on a Euclidean lattice. It has a ground state degeneracy of $N \gcd(N,L_x)$, where $L_x$ is the number of sites in the $x$ direction. This is a consequence of the mixed 't Hooft anomaly in the space-like symmetry of the model.
Surprisingly, unlike the $U(1)$ theory, the $\mathbb Z_N$ dipole gauge theory has no fractons on the lattice. (This was pointed out in a closely related model in \cite{Bulmash:2018lid,Ma:2018nhd}.) First, a particle can hop by $N$ sites. In addition, on a finite lattice with $L_x$ sites, a particle can move around the whole space a number of times and end up hopping by $\gcd(N,L_x)$ sites. Once again, the relaxed restriction on the mobility is explained by the time-like global symmetry of the model.
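The Bezout-type counting behind these hopping rules can be checked numerically. The following sketch (our own illustrative script, not from any reference) enumerates the net displacements $aN + bL_x$ obtained by combining hops of $N$ sites with full windings of $L_x$ sites, and confirms that the smallest positive one is $\gcd(N,L_x)$:

```python
from math import gcd

def reachable_hops(N, L_x, search=30):
    """Net displacements a*N + b*L_x reachable by combining hops of N sites
    with full windings of L_x sites (a, b range over small integers)."""
    return {a * N + b * L_x
            for a in range(-search, search + 1)
            for b in range(-search, search + 1)}

# Bezout: the set {a*N + b*L_x} is exactly gcd(N, L_x) * Z,
# so the smallest positive net hop is gcd(N, L_x).
N, L_x = 6, 9
hops = reachable_hops(N, L_x)
min_hop = min(h for h in hops if h > 0)
print(min_hop)                  # -> 3
print(min_hop == gcd(N, L_x))   # -> True
```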
To summarize, the ground state degeneracy of these models follows from their space-like global symmetries and the restricted mobility of their defects is controlled by their time-like global symmetries.
Observe that both the ground state degeneracy and the mobility of defects depend on the number theoretic properties of $L_x$. Consequently, this theory does not have a smooth $L_x\to \infty$ continuum limit, which is a manifestation of the UV/IR mixing of such theories \cite{paper1,Gorantla:2021bda}. Related phenomena have also been observed in various models, for example, \cite{Haah:2011drr,Yoshida:2013sqa,Meng,Manoj:2020wwy,Rudelius:2020kta}. Our example is presumably the simplest setup exhibiting this phenomenon.
In Appendix \ref{app:timelike}, we will introduce and explain the notion of time-like global symmetry in various well-known theories. In an ordinary gauge theory, it is part of the one-form global symmetry. In exotic gauge theories with subsystem symmetries, including models containing fractons, it explains the restricted mobility of fractons, lineons, etc. In particular, using time-like global symmetries, we will show that the mobility of fractons and lineons in the X-cube model, which is na\"ively a local property, can depend sensitively on the global geometry of the lattice. In all these examples, the time-like symmetry is a consequence of the Gauss law in the presence of defects. However, gauge fields are not essential for the existence of time-like symmetry, as we will demonstrate in the case of the 2+1d compact boson.
\section{1+1d dipole $\phi$-theory}\label{sec:dipphi}
In this section, we will study a compact scalar field theory in 1+1d with dipole global symmetries. We will place the modified Villain version of this theory on a finite Euclidean space-time lattice with $L_x$ sites in the $x$-direction, and $L_\tau$ sites in the $\tau$-direction, and impose periodic boundary conditions. This will lead us to explore various different continuum limits.
\subsection{First look at the continuum theory -- compact Lifshitz theory}\label{sec:dipphi-contfirst}
Consider the continuum action
\ie\label{dipphi-action1}
S = \oint d\tau dx~ \left[ \frac{\mu_0}{2} (\partial_\tau \phi)^2 + \frac{1}{2\mu} (\partial_x^2 \phi)^2 \right]~,
\fe
where $\phi$ is a dimensionless compact scalar with the identification $\phi(\tau,x) \sim \phi(\tau,x) +2\pi$.
We will refer to this theory as the 1+1d dipole $\phi$-theory.
The action \eqref{dipphi-action1} is the same as that of a 1+1d version of Lifshitz scalar field theory (see \cite{Chen:2009ka}, and references therein). However, it differs from the conventional Lifshitz theory in that our scalar field is compact. Hence the term ``compact Lifshitz theory."
It is also similar to that of a 1+1d version of the 2+1d quantum Lifshitz model \cite{Henley1997,
Moessner2001,Vishwanath:2004,Fradkin:2004,Ardonne:2003wa,2005,2018PhRvB..98l5105M,Yuan:2019geh,Lake:2022ico}, which has a compact scalar field. The relation to Lifshitz theory will be reviewed in Section \ref{sec:dipphi-Lifshitz}.
Let us place this continuum system on a circle of length $\ell_x$ and analyze its global symmetries:
\begin{itemize}
\item
A $U(1)$ momentum symmetry shifts $\phi \rightarrow \phi + c$, where $c$ is a real constant. The periodicity of $\phi$ makes $c$ circle-valued. The Noether current is\footnote{Recall our conventions, as discussed in footnote \ref{conventionsf}. They guarantee that in Lorentzian signature the charge operator is hermitian.}
\ie\label{dipphi-momcur}
&J_\tau = i\mu_0 \partial_\tau \phi~,\qquad J_{xx} = \frac{i}{\mu} \partial_x^2 \phi~,
\\
&\partial_\tau J_\tau = \partial_x^2 J_{xx}~.
\fe
The conserved charge is
\ie\label{dipphi-momcharge}
Q = \oint dx ~ J_\tau~.
\fe
The operators charged under this symmetry are $e^{in\phi}$ with integer $n$.
\item
A $U(1)$ winding dipole symmetry has Noether current\footnote{We refer to this $U(1)$ symmetry as a dipole symmetry for reasons that will become clear in Section \ref{Villain}.}
\ie\label{dipphi-windcur}
&\tilde{\mathcal J}_\tau = -\frac{1}{2\pi} \partial_x \phi~,\qquad \tilde{\mathcal J}_x = -\frac{1}{2\pi} \partial_\tau \phi~,
\\
& \partial_\tau \tilde{\mathcal J}_\tau = \partial_x \tilde{\mathcal J}_x~.
\fe
The conserved charge is
\ie\label{dipphi-windcharge}
\tilde{\mathcal Q} = \oint dx~ \tilde{\mathcal J}_\tau~.
\fe
A configuration that carries a nontrivial $U(1)$ winding dipole charge $p\in\mathbb Z$ is
\ie
\phi(\tau,x) = -2\pi p \frac{x}{\ell_x}~.
\fe
Since this charge is quantized, this symmetry is $U(1)$, rather than $\mathbb R$.
\item
A $\mathbb Z$ momentum dipole symmetry acts on $\phi$ as
\ie\label{dipphi-momdip}
U_m:\quad \phi(\tau,x) \rightarrow \phi(\tau,x) + 2\pi m \frac{x}{\ell_x}~,
\fe
for some integer $m$. Here we use $U_m$ to denote the corresponding symmetry operator. This symmetry acts on the local operator $\partial_x \phi$ inhomogeneously. Importantly, $e^{i\partial_x \phi}$ is not a well-defined local operator in this continuum theory because $\partial_x \phi$ has mass-dimension $1$. Note that because of the compact space and the compact target space, this dipole symmetry action does not suffer from the subtlety mentioned after \eqref{dipolechargem}.
\end{itemize}
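As a consistency check of the normalization of the winding dipole charge, the configuration $\phi = -2\pi p\, x/\ell_x$ quoted above indeed carries charge $p$:

```latex
\tilde{\mathcal Q} = \oint dx~ \tilde{\mathcal J}_\tau
 = -\frac{1}{2\pi} \oint dx~ \partial_x \phi
 = -\frac{1}{2\pi} \oint dx~ \left( -\frac{2\pi p}{\ell_x} \right)
 = p~.
```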
The $\mathbb Z$ momentum dipole symmetry does not commute with the $U(1)$ winding dipole symmetry
\ie
U_m e^{i\theta \tilde{\mathcal{Q}}} = e^{-im\theta}e^{i\theta \tilde{\mathcal{Q}}}U_m~.
\fe
The minimal representation of this algebra is infinite dimensional with states $|p\rangle$, $p\in\mathbb{Z}$
\ie
&\tilde{\mathcal{Q}} |p\rangle = p|p\rangle~,
\\
&U_m |p\rangle = |p+m\rangle~.
\fe
It implies that all energy levels, in particular the lowest energy level, are infinitely degenerate and the states transform in a projective representation of the two symmetries. This signals a mixed 't Hooft anomaly.
It is easy to see that the theory is not invariant under any scale transformation. For example, under the scale transformation of the Lifshitz theory with a noncompact $\phi$,
\ie\label{xtauscal}
x\rightarrow \lambda x ~ , \qquad \tau \rightarrow \lambda^2\tau~,
\fe
the couplings scale as
\ie
\mu_0 \rightarrow \frac{\mu_0}{\lambda}~,
\quad \mu\rightarrow \lambda \mu~.
\fe
Alternatively, we can keep the coupling constants unchanged, but scale $\phi$. This has the effect of changing the periodicity of $\phi$ from $2\pi$ to $2\pi/\sqrt{\lambda}$. Either way, we see that the dipole $\phi$-theory \eqref{dipphi-action1} is not scale invariant.
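The coupling scalings quoted above follow from substituting the rescaled coordinates directly into the action \eqref{dipphi-action1}. Schematically, using $d\tau\, dx \to \lambda^3\, d\tau\, dx$, $\partial_\tau \to \lambda^{-2}\partial_\tau$, and $\partial_x^2 \to \lambda^{-2}\partial_x^2$:

```latex
S \to \oint d\tau dx~ \lambda^3 \left[ \frac{\mu_0}{2} \lambda^{-4} (\partial_\tau \phi)^2
   + \frac{1}{2\mu} \lambda^{-4} (\partial_x^2 \phi)^2 \right]
   = \oint d\tau dx~ \left[ \frac{\mu_0/\lambda}{2} (\partial_\tau \phi)^2
   + \frac{1}{2(\lambda\mu)} (\partial_x^2 \phi)^2 \right]~,
```

so the action retains its form only if $\mu_0 \to \mu_0/\lambda$ and $\mu \to \lambda\mu$, as stated.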
Since the continuum theory is very singular, we would like to regularize it on a lattice, while preserving its global symmetries. Below we will discuss the modified Villain model of \eqref{dipphi-action1}, which provides an unambiguous regularization of the continuum theory.
\subsection{Modified Villain formulation}\label{Villain}
In the Villain form, the continuum theory is associated with the lattice action\footnote{In Section \ref{sec:dipphi-robustness}, we will comment on the relation between them.}
\ie\label{dipphi-Vill-action}
S = \frac{\beta_0}{2} \sum_{\tau\text{-link}} (\Delta_\tau \phi - 2\pi n_\tau)^2 + \frac{\beta}{2} \sum_\text{site} (\Delta_x^2 \phi - 2\pi n_{xx})^2 ~,
\fe
where $\Delta_x^2 f(\hat x) = f(\hat x+1) + f(\hat x-1) - 2f(\hat x)$. Here, $\phi$ is a real-valued scalar field and $(n_\tau, n_{xx})$ are integer-valued gauge fields with gauge symmetry
\ie\label{dipphi-modVill-gaugesym}
&\phi \sim \phi + 2\pi k~,
\\
&n_\tau \sim n_\tau + \Delta_\tau k~,
\\
&n_{xx} \sim n_{xx} + \Delta_x^2 k~,
\fe
where $k$ are integer gauge parameters. This $\mathbb Z$ gauge symmetry effectively makes $\phi$ compact.
We can further deform the action \eqref{dipphi-Vill-action} to the modified Villain version:
\ie\label{dipphi-modVill-action}
S = \frac{\beta_0}{2} \sum_{\tau\text{-link}} (\Delta_\tau \phi - 2\pi n_\tau)^2 + \frac{\beta}{2} \sum_\text{site} (\Delta_x^2 \phi - 2\pi n_{xx})^2 + i\sum_{\tau\text{-link}}\tilde \phi (\Delta_\tau n_{xx} - \Delta_x^2 n_\tau)~,
\fe
where $\tilde \phi$ is a Lagrange multiplier that makes the integer gauge fields $(n_\tau, n_{xx})$ flat. It has a gauge symmetry
\ie\label{dipphi-modVill-gaugesym2}
\tilde \phi \sim \tilde \phi + 2\pi \tilde k~,
\fe
where $\tilde k$ are integer gauge parameters. Physically, the above deformation suppresses all topological excitations, or vortices.
In the rest of this subsection, we will analyze this modified Villain model \eqref{dipphi-modVill-action} following similar steps in \cite{Gorantla:2021svj}.
\subsubsection{Relation to 2+1d $\phi$-theory}\label{sec:dipphi-compactify}
First, we will provide an alternative interpretation of the 1+1d action \eqref{dipphi-modVill-action}.
We will show that it arises from the modified Villain version of the 2+1d $\phi$-theory\footnote{The continuum limit of this modified Villain lattice model is the 2+1d $\phi$-theory of \cite{PhysRevB.66.054526,paper1} with Lagrangian \eqref{2+1contphi}. See also \cite{Tay_2011,You:2019cvs,You:2019bvu,Karch:2020yuy,You:2021tmm,Gorantla:2021bda,Burnell:2021reh,Distler:2021qzc,Distler:2021bop} for related discussions.} \cite{Gorantla:2021svj}
\ie\label{2+1dphi}
S = \frac{\beta_0}{2} \sum_{\tau\text{-link}} (\Delta_\tau \phi - 2\pi n_\tau)^2 + \frac{\beta}{2} \sum_{xy\text{-plaq}} (\Delta_x\Delta_y \phi - 2\pi n_{xy})^2 + i\sum_\text{cube} \phi^{xy} (\Delta_\tau n_{xy} - \Delta_x\Delta_y n_\tau)
\fe
on a special torus.
This relation can be viewed roughly as a dimensional reduction, but we emphasize that this is an exact equivalence with no approximation involved.
We place this 2+1d lattice model \eqref{2+1dphi} on a slanted spatial torus with identifications
\ie\label{slanted}
(\hat x, \hat y) \sim (\hat x+L_x,\hat y) \sim (\hat x+1, \hat y-1)~.
\fe
On this slanted torus, we have the following relations:
\ie
&\Delta_y \phi(\tau, \hat x, \hat y) = \Delta_x \phi(\tau, \hat x, \hat y) \,,\\
&\Delta_x \Delta_y \phi(\tau, \hat x,\hat y) = \Delta_x^2 \phi(\tau, \hat x+1,\hat y)~.
\fe
In \cite{Rudelius:2020kta}, the continuum 2+1d $\phi$-theory was studied on a torus with more general complex structure.
Next, we treat the $y$-direction as the compactified direction, and view the resulting system as 1+1 dimensional.
More specifically, we can always use the identification \eqref{slanted} to bring any field to $\hat y=0$.
We thus replace the fields of the 2+1d model by the fields in 1+1d:
\ie
&\phi(\tau,\hat x, 0) = \phi(\tau, \hat x)\,,&&\qquad \Delta_x \Delta_y \phi(\tau, \hat x-1,0) = \Delta_x^2 \phi(\tau, \hat x)\,,\\
&n_\tau (\tau, \hat x,0)= n_\tau (\tau ,\hat x)\,,&&\qquad n_{xy}(\tau, \hat x-1, 0) = n_{xx}(\tau, \hat x) \,,\\
& \phi^{xy}(\tau, \hat x-1,0)= \tilde \phi(\tau, \hat x)\,.
\fe
Under this replacement, the 2+1d action \eqref{2+1dphi} on the slanted torus \eqref{slanted} is exactly equivalent to the 1+1d model \eqref{dipphi-modVill-action}.
From this exact equivalence, all the analysis in the rest of this section follows from the 2+1d $\phi$-theory on a slanted torus \cite{Rudelius:2020kta}.
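The statement that every field can be brought to the $\hat y=0$ slice is elementary but worth making explicit. Here is a minimal sketch (the function name is ours) of the representative map implied by the identifications \eqref{slanted}:

```python
def representative(x, y, L_x):
    """Map a site (x, y) of the slanted torus with identifications
    (x, y) ~ (x + L_x, y) ~ (x + 1, y - 1) to its unique representative
    on the y = 0 slice with 0 <= x < L_x."""
    # Applying (x, y) ~ (x + 1, y - 1) repeatedly gives (x, y) ~ (x + y, 0);
    # then (x, y) ~ (x + L_x, y) reduces x mod L_x.
    return ((x + y) % L_x, 0)

L_x = 5
# The two defining identifications land on the same representative:
assert representative(2, 3, L_x) == representative(2 + L_x, 3, L_x)
assert representative(2, 3, L_x) == representative(3, 2, L_x)
print(representative(2, 3, L_x))   # -> (0, 0)
```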
\subsubsection{Global symmetry}\label{sec:villainsymmetry}
The global symmetry of the modified Villain model \eqref{dipphi-modVill-action} includes:
\begin{itemize}
\item
The $U(1)$ momentum symmetry acts as $\phi \rightarrow \phi + c$, where $c$ is a real constant. The Noether current is
\ie\label{dipphi-modVill-momcur}
&J_\tau = i\beta_0 (\Delta_\tau \phi - 2\pi n_\tau)~,\qquad J_{xx} = i\beta (\Delta_x^2 \phi - 2\pi n_{xx})~,
\\
&\Delta_\tau J_\tau = \Delta_x^2 J_{xx}~,
\fe
which follows from the equation of motion of $\phi$. The charge is
\ie\label{U1momentumVillain}
Q = \sum_{\tau\text{-link: fixed }\hat \tau} J_\tau~,
\fe
and the charged operators are $e^{i n\phi}$ with charge $n\in \mathbb Z$. The symmetry transformations with $c$ and with $c+2\pi$ are related by a gauge transformation. Therefore, this symmetry group is $U(1)$ rather than $\mathbb R$.
\item
The $\mathbb Z_{L_x}$ momentum dipole symmetry acts as
\ie\label{dipphi-modVill-momdip}
&\phi \rightarrow \phi + 2\pi m \frac{\hat x}{L_x}~, &&\quad\text{for}\quad 0 \le \hat x < L_x~,
\\
&n_{xx} \rightarrow n_{xx} + m \left( \delta_{\hat x,0} - \delta_{\hat x,L_x-1} \right)~,
\fe
where $m=0,1,\ldots, L_x-1$, and $\delta_{\hat x,\hat x_0}$ is the Kronecker delta function. It is a $\mathbb Z_{L_x}$ rather than a $\mathbb Z$ symmetry, because the shift corresponding to $m \in L_x \mathbb Z$ is a gauge transformation \eqref{dipphi-modVill-gaugesym}. The symmetry operator is\footnote{The current $J_\tau$ is imaginary in Euclidean signature. Following the comment in footnote \ref{conventionsf}, it is hermitian in Lorentzian signature and consequently, $U_m$ is unitary.}
\ie\label{dipphi-modVill-momdip-symop}
U_m = \exp\left( \frac{2\pi i m}{L_x} \sum_{\tau\text{-link: fixed }\hat \tau} \hat x J_\tau - i m \left[ \tilde \phi(\hat \tau, 0) - \tilde \phi(\hat \tau,L_x-1) \right] \right)~.
\fe
Here the sum is restricted to the fundamental domain $0\leq \hat x<L_x$. It can be understood in a simple way: the first and second terms shift $\phi$ and $n_{xx}$, respectively, as in \eqref{dipphi-modVill-momdip}. The charged operators are $e^{i\phi}$ and $e^{i p \Delta_x \phi}$ with $p \in \mathbb Z$. The operator $e^{i p \Delta_x \phi}$ has $\mathbb{Z}_{L_x}$ charge $p$ mod $L_x$. See below for how the symmetry acts on these charged operators.
\item
There is a $U(1)$ winding symmetry that shifts $\tilde \phi \rightarrow \tilde \phi + \tilde c$, where $\tilde c$ is a circle-valued real constant. The Noether current is
\ie\label{dipphi-modVill-windcur}
&\tilde J_\tau = \frac{1}{2\pi} (\Delta_x^2 \phi - 2\pi n_{xx})~, \qquad \tilde J_{xx} = \frac{1}{2\pi} (\Delta_\tau \phi - 2\pi n_\tau)~,
\\
&\Delta_\tau \tilde J_\tau = \Delta_x^2 \tilde J_{xx}~,
\fe
which follows from the equation of motion of $\tilde \phi$. The charge is
\ie\label{dipphi-modVill-windcharge}
\tilde Q = \sum_{\text{site: fixed }\hat \tau} \tilde J_\tau = - \sum_{\text{site: fixed }\hat \tau} n_{xx}~.
\fe
The charged operators are $e^{i\tilde n \tilde \phi}$ with charge $\tilde n \in \mathbb Z$.
\item
Finally, there is a $\mathbb Z_{L_x}$ winding dipole symmetry that shifts
\ie\label{dipphi-modVill-winddip}
\tilde \phi \rightarrow \tilde \phi + 2\pi \tilde m \frac{\hat x}{L_x}~,\quad \text{for}\quad 0 \le \hat x < L_x~,
\fe
where $\tilde m \in \mathbb Z$. It is a $\mathbb Z_{L_x}$ rather than a $\mathbb Z$ symmetry because the shift corresponding to $\tilde m \in L_x \mathbb Z$ is a gauge transformation \eqref{dipphi-modVill-gaugesym}. The symmetry operator is
\ie\label{dipphi-modVill-winddipo}
\tilde U_{\tilde m} = \exp\left(-\frac{2\pi i \tilde m}{L_x} \sum_{\text{site: fixed }\hat \tau} \hat x n_{xx} \right)~.
\fe
The charged operators are $e^{i\tilde \phi}$ and $e^{i \tilde p \Delta_x \tilde \phi}$ with $\tilde p \in \mathbb Z$. The operator $e^{i \tilde p \Delta_x \tilde \phi}$ has $\mathbb{Z}_{L_x}$ charge $\tilde p$ mod $L_x$. See below for how the symmetry acts on these charged operators.
\end{itemize}
The operator $e^{in\phi}$ ($e^{i\tilde n\tilde \phi}$), which is charged under the $U(1)$ momentum (winding) symmetry does not transform simply under the $\mathbb Z_{L_x}$ momentum (winding) dipole symmetry. The reason is that the $\mathbb Z_{L_x}$ spatial translation symmetry does not commute with the dipole symmetries. Let $T$ be the generator of the lattice translation $\hat x \rightarrow \hat x + 1$. Then,
\ie
&TU_mT^{-1} = e^{\frac{2\pi i}{L_x} mQ} U_m~,
\\
&T\tilde U_{\tilde m}T^{-1} = e^{\frac{2\pi i}{L_x} \tilde m \tilde Q} \tilde U_{\tilde m}~.
\fe
Note that this lack of commutativity is not a central extension of the group generated by the dipole symmetries and the translations, and it does not signal an anomaly.
On the other hand, the non-commutativity of the two $\mathbb Z_{L_x}$ dipole symmetries,
\ie\label{noncommute}
U_m \tilde U_{\tilde m} = e^{-\frac{2\pi i}{L_x} m \tilde m}\tilde U_{\tilde m} U_m~,
\fe
does signal a mixed 't Hooft anomaly between them. (This follows from using \eqref{dipphi-modVill-momdip} and \eqref{dipphi-modVill-winddip} in the operators \eqref{dipphi-modVill-winddipo} and \eqref{dipphi-modVill-momdip-symop}.) As a result, every energy level is $L_x$-fold degenerate. In particular, there is a large ground state degeneracy, which depends on the number of lattice sites $L_x$. As in \cite{Gorantla:2021bda}, this dependence of the degeneracy on the number of sites is a manifestation of UV/IR mixing.
As discussed in Section \ref{sec:dipphi-compactify}, the 1+1d modified Villain model \eqref{dipphi-modVill-action} is equivalent to the 2+1d $\phi$-theory on a slanted torus \eqref{slanted}.
Indeed, the algebra \eqref{noncommute} arises from the projective $\mathbb{Z}_M\times \mathbb{Z}_M$ symmetry discussed in Section 3 of \cite{Rudelius:2020kta}, with the parameter $M$ of that reference equal to $L_x$ for the torus \eqref{slanted}.
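The projective algebra \eqref{noncommute} can be realized explicitly by $L_x \times L_x$ clock and shift matrices, which also makes the $L_x$-fold degeneracy manifest: every irreducible representation of this algebra has dimension a multiple of $L_x$. A minimal numerical sketch (our own script, with illustrative variable names):

```python
import numpy as np

def clock_shift(L):
    """Return L x L clock and shift matrices, a minimal projective
    representation of the two Z_L dipole symmetries."""
    w = np.exp(2j * np.pi / L)
    clock = np.diag(w ** np.arange(L))      # tilde U_1: diagonal phases
    shift = np.roll(np.eye(L), 1, axis=0)   # U_1: cyclic shift of basis states
    return clock, shift

L_x = 7
C, S = clock_shift(L_x)
U, Ut = S, C   # U_1 and tilde U_1 for m = tilde m = 1
# Check the projective relation U tilde-U = e^{-2 pi i / L_x} tilde-U U:
print(np.allclose(U @ Ut, np.exp(-2j * np.pi / L_x) * (Ut @ U)))   # -> True
```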
\subsubsection{Self-duality}\label{sec:villainduality}
Using Poisson resummation over the integers $(n_\tau,n_{xx})$, one finds that the modified Villain model \eqref{dipphi-modVill-action} is self-dual under $\phi \leftrightarrow \tilde \phi$ and $\beta_0 \leftrightarrow \frac{1}{(2\pi)^2\beta}$. The dual action is
\ie\label{dipphi-modVill-dualaction}
S = \frac{1}{2(2\pi)^2\beta} \sum_\text{site} (\Delta_\tau \tilde \phi - 2\pi \tilde n_\tau)^2 + \frac{1}{2(2\pi)^2\beta_0} \sum_{\tau\text{-link}} (\Delta_x^2 \tilde \phi - 2\pi \tilde n_{xx})^2 - i\sum_\text{site} \phi (\Delta_\tau \tilde n_{xx} - \Delta_x^2 \tilde n_\tau)~,
\fe
where $(\tilde n_\tau,\tilde n_{xx})$ are integer gauge fields that make $\tilde \phi$ compact. Under the gauge symmetry \eqref{dipphi-modVill-gaugesym2}, they transform as
\ie
\tilde n_\tau \sim \tilde n_\tau + \Delta_\tau \tilde k~, \qquad \tilde n_{xx} \sim \tilde n_{xx} + \Delta_x^2 \tilde k~.
\fe
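The key step in this duality is the resummation of a periodic Gaussian, applied term by term to the integers $(n_\tau,n_{xx})$. Schematically, up to an overall $\beta$-dependent constant,
\ie
\sum_{n\in\mathbb Z} e^{-\frac{\beta}{2}\left(X-2\pi n\right)^2} \;\propto\; \sum_{\tilde n\in\mathbb Z} e^{-\frac{1}{2\beta}\tilde n^2 + i\tilde n X}~,
\fe
and the dual weight $e^{-\tilde n^2/2\beta}$ matches the coefficient $\frac{1}{2(2\pi)^2\beta}(2\pi\tilde n)^2$ in \eqref{dipphi-modVill-dualaction}. The original field $\phi$ survives in the dual action as the Lagrange multiplier enforcing the flatness of $(\tilde n_\tau,\tilde n_{xx})$.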
\subsubsection{Gauge fixing the integers}\label{sec:dipphi-modVill-gaugefix}
Following the same procedure as in \cite{Gorantla:2021svj}, after integrating out $\tilde \phi$ and gauge fixing the integer gauge fields, the action \eqref{dipphi-modVill-action} can be written in terms of a new field $\bar \phi$ as
\ie\label{dipphi-modVill-gaugefixaction}
S = \frac{\beta_0}{2} \sum_{\tau\text{-link}} (\Delta_\tau \bar \phi)^2 + \frac{\beta}{2} \sum_\text{site} (\Delta_x^2 \bar \phi)^2~.
\fe
The new field is defined as
\ie
&\bar \phi(0,0) = \phi(0,0)~,\qquad \Delta_x \bar\phi(0,-1) = \Delta_x \phi(0,-1)~,
\\
&\Delta_x^2 \bar \phi = \Delta_x^2 \phi - 2\pi n_{xx}~,
\\
&\Delta_\tau \bar \phi = \Delta_\tau \phi - 2\pi n_\tau~.
\fe
The integer gauge fields are gauge fixed to zero except that
\ie
&n_\tau(L_\tau-1,\hat x)=-\bar n_\tau~,
\\
&n_{xx}(0,0)=-\bar n_{xx}-\bar p_{xx}~,
\\
&n_{xx}(0,L_x-1)=\bar p_{xx}~,
\fe
where $\bar n_\tau,\bar n_{xx},\bar p_{xx}\in\mathbb{Z}$.
The residual gauge symmetry acts on $\bar\phi$ as
\ie\label{dipphi-modVill-gaugesym-barphi}
\bar \phi(\hat\tau,\hat x)\sim \bar\phi(\hat\tau,\hat x) + 2\pi k_0 + 2\pi k_x \hat x~,\quad k_0,k_x\in\mathbb{Z}~.
\fe
The remaining gauge parameters $k_0,k_x$ are constant on the lattice.
Unlike $\phi$, the field $\bar \phi$ need not be single-valued. Instead, it can wind with the boundary condition
\ie
&\bar \phi(\hat \tau+L_\tau, \hat x)=\bar \phi(\hat \tau, \hat x)+2\pi \bar n_\tau~,
\\
&\bar \phi(\hat \tau, \hat x+L_x)=\bar \phi(\hat \tau, \hat x)+2\pi \bar n_{xx} \hat x +2\pi \bar p_{xx}~.
\fe
Because of the gauge symmetry \eqref{dipphi-modVill-gaugesym-barphi}, $\bar p_{xx}\sim\bar p_{xx}+L_x$. One configuration that winds in the $x$-direction is
\ie\label{dipphi-modVill-windconfig}
\bar \phi(\hat \tau, \hat x) = 2\pi \bar n_{xx} \frac{\hat x(\hat x- L_x)}{2L_x} +2\pi \bar p_{xx} \frac{\hat x}{L_x}~.
\fe
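As a quick check, this configuration satisfies the twisted boundary condition in the $x$-direction:
\ie
\bar\phi(\hat\tau,\hat x+L_x) - \bar\phi(\hat\tau,\hat x) = 2\pi\bar n_{xx}\,\frac{(\hat x+L_x)\hat x - \hat x(\hat x-L_x)}{2L_x} + 2\pi\bar p_{xx} = 2\pi\bar n_{xx}\hat x + 2\pi\bar p_{xx}~.
\fe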
\subsubsection{Spectrum}\label{spectrum}
We will now determine the spectrum of the theory. We will work with a continuous Lorentzian time, denoted by $t$, while keeping the space discrete. We do this by introducing a lattice spacing $a_\tau$ in the $\tau$-direction, taking the limit $a_\tau\rightarrow 0$, while keeping $\beta_0'=\beta_0 a_\tau$ and $\beta'=\beta/a_\tau$ fixed, and then Wick rotating from Euclidean time to Lorentzian time.
The spectrum of the modified Villain model \eqref{dipphi-modVill-action} includes plane waves with nonzero spatial momentum and states charged under the $U(1)$ momentum and winding symmetries. The dispersion relation for the plane waves is
\ie
\omega_{n_x} = 4\sqrt{\frac{\beta'}{\beta_0'}} \sin^2\left( \frac{\pi n_x}{L_x} \right)~.
\fe
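This dispersion relation follows from expanding in Fourier modes (a standard computation, sketched here). On a mode $e^{\frac{2\pi i n_x\hat x}{L_x}}$, the second difference operator acts, up to a phase depending on the placement of the differences, with eigenvalue of magnitude
\ie
\left|\left(e^{\frac{2\pi i n_x}{L_x}}-1\right)^2\right| = 4\sin^2\left(\frac{\pi n_x}{L_x}\right)~,
\fe
so each mode is a harmonic oscillator with ``mass'' $\beta_0'$ and ``spring constant'' $\beta'\left(4\sin^2\frac{\pi n_x}{L_x}\right)^2$, whose frequency is the $\omega_{n_x}$ above.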
The winding configuration \eqref{dipphi-modVill-windconfig} has the minimal energy among states with winding charges $\bar n_{xx}$ and $\bar p_{xx}$:
\ie
H = \frac{\beta'}{2 } \sum_{\text{site: fixed }\tau} (\Delta_x^2 \bar\phi)^2 = \frac{(2\pi)^2\beta'}{2 L_x} \bar n_{xx}^2~.
\fe
Note that the energy does not depend on $\bar p_{xx}$. This is related to the fact that the two $\mathbb Z_{L_x}$ dipole symmetries have a mixed 't Hooft anomaly resulting in a degeneracy in the spectrum.
Similarly, the minimal energy of a state with $U(1)$ momentum charge $n$ is
\ie
H = \frac{n^2}{2 \beta_0' L_x}~.
\fe
For fixed lattice parameters (recall that we have taken $a_\tau\to 0$ and rotated to Lorentzian signature), the energies of the three kinds of states scale with $L_x$ as
\ie\label{fixedbetas}
E_\text{wave} \sim \frac{1}{L_x^2}~,\qquad E_\text{mom} \sim \frac{1}{L_x}~,\qquad E_\text{wind} \sim \frac{1}{L_x}~,
\fe
i.e., for large $L_x$, the states charged under the $U(1)$ momentum or winding symmetry are parametrically heavier than the plane waves.
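In more detail, the scalings in \eqref{fixedbetas} follow from the explicit expressions above. At fixed $\beta_0'$ and $\beta'$,
\ie
E_\text{wave} = 4\sqrt{\frac{\beta'}{\beta_0'}}\sin^2\left(\frac{\pi n_x}{L_x}\right) \approx \frac{4\pi^2 n_x^2}{L_x^2}\sqrt{\frac{\beta'}{\beta_0'}}~,\qquad E_\text{mom} = \frac{n^2}{2\beta_0' L_x}~,\qquad E_\text{wind} = \frac{(2\pi)^2\beta'}{2L_x}\,\bar n_{xx}^2~,
\fe
where the approximation for $E_\text{wave}$ holds for $n_x\ll L_x$.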
Finally, recall that each state appears $L_x$ times forming a projective representation of the two ${\mathbb Z}_{L_x}$ momentum and winding symmetries.
The above degeneracy is lifted if we impose only the momentum symmetries.
Indeed, the deformation of the modified Villain model \eqref{dipphi-modVill-action} by the winding dipole operator $e^{i\Delta_x \tilde \phi}$ breaks the $\mathbb Z_{L_x}$ winding dipole symmetry explicitly and lifts the ground state degeneracy.
\subsection{Continuum limits}\label{sec:dipphi-cont}
Now that we understand the modified Villain model \eqref{dipphi-modVill-action}, we can explore its continuum limit. Surprisingly, there are three possible continuum limits. All of them have the same continuum Lagrangian, but their fields have different properties. Consequently, the three different continuum theories have different global symmetries (see Table \ref{tbl:lat-cont-sym}) and different spectra (see Table \ref{tbl:lat-cont-energies}). One of these theories corresponds to the continuum theory \eqref{dipphi-action1}.
In all these limits, we introduce the spatial lattice spacing $a$, and take the limit $a,a_\tau \rightarrow 0$ and $L_x,L_\tau \rightarrow \infty$ such that $\ell_x = a L_x$ and $\ell_\tau = a_\tau L_\tau$ are fixed.
Before analyzing the system in detail, let us discuss the limit of the algebra of dipole symmetries \eqref{noncommute}
\ie\label{noncommutea}
U_m \tilde U_{\tilde m} = e^{-\frac{2\pi i}{L_x} m \tilde m}\tilde U_{\tilde m} U_m~.
\fe
As we take $L_x\to \infty$, we can focus on different elements of this algebra to find different limits. Here are some options.
\begin{itemize}
\item We can focus on $U_m$ and $\tilde U_{\tilde m}$ with finite $m $ and $\tilde m$. In this limit, these operators lead to two commuting copies of $\mathbb Z$.
\item We can focus on $U_m$ with finite $m$ and $\tilde U_{\tilde m}$ with $\tilde m \to \infty$ and finite ${\tilde m\over L_x}\to \tilde r$. (Clearly, $\tilde r$ is circle-valued.) In this limit, the operators $U_m$ lead to $\mathbb Z$, the operators $\tilde U_{\tilde m}\to \tilde U_{\tilde r}$ lead to $U(1)$ and they do not commute
\ie
U_m \tilde U_{\tilde r} = e^{-2\pi i m \tilde r}\tilde U_{\tilde r} U_m~.
\fe
\item We can exchange $U\leftrightarrow \tilde U$ in the previous limit.
\item We can focus on $U_m$ and $\tilde U_{\tilde m}$ with $m, \tilde m \to \infty$, with fixed $m \over \sqrt{L_x}$, $\tilde m\over \sqrt{L_x}$. In this limit, we can write $U_m\to \exp(i{m\over \sqrt{L_x}}{\cal U})$ and $\tilde U_{\tilde m} \to \exp(i{\tilde m\over \sqrt{L_x}}\tilde {\cal U})$. $\cal U$ and $\tilde {\cal U}$ generate two copies of $\mathbb R$, which do not commute
\ie
\, [{\cal U},\tilde {\cal U}]=2\pi i~.
\fe
\end{itemize}
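As a consistency check of the last option, the phase in \eqref{noncommutea} matches the Baker-Campbell-Hausdorff formula for this Heisenberg algebra: with $U_m = e^{\frac{im}{\sqrt{L_x}}{\cal U}}$ and $\tilde U_{\tilde m} = e^{\frac{i\tilde m}{\sqrt{L_x}}\tilde{\cal U}}$,
\ie
U_m \tilde U_{\tilde m} = e^{\left[\frac{im}{\sqrt{L_x}}{\cal U},\,\frac{i\tilde m}{\sqrt{L_x}}\tilde{\cal U}\right]}\,\tilde U_{\tilde m} U_m = e^{-\frac{2\pi i}{L_x}m\tilde m}\,\tilde U_{\tilde m} U_m~,
\fe
since the commutator $[{\cal U},\tilde{\cal U}]=2\pi i$ is central.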
Below we will see these algebras (except the first one) in various limits of the lattice system.
\subsubsection{1+1d dipole $\phi$-theory} \label{sec:dipphi-cont2}
To obtain the continuum dipole $\phi$-theory \eqref{dipphi-action1}, we scale the lattice coupling constants with $a,a_\tau$ as
\ie\label{dipphi-scaling}
\beta_0 = \frac{\mu_0 a}{a_\tau}~, \qquad \beta = \frac{a_\tau}{\mu a^3}~,
\fe
where $\mu_0$ and $\mu$ are fixed continuum coupling constants with mass dimensions $0$ and $2$ respectively.
In this continuum limit, the global symmetries of the modified Villain model reduce to the ones discussed in Section \ref{sec:dipphi-contfirst}. This is the second option in the list following \eqref{noncommutea}. See Table \ref{tbl:lat-cont-sym} for the relation between the global symmetries in this continuum limit and on the lattice. In particular, the $U(1)$ winding symmetry of the modified Villain lattice model does not act in the continuum theory. Since $\partial_x \phi$ is a well-defined operator, the $U(1)$ winding charge associated with \eqref{dipphi-modVill-windcur} vanishes\footnote{We see here an interesting analogy with the $\phi$-theories with subsystem symmetry in 2+1d. There, the momentum and winding subsystem symmetry currents exist in the continuum limit. But the continuum theory has no charged finite energy states \cite{paper1,Gorantla:2021svj,Gorantla:2021bda}.\label{analowss}}
\ie
\tilde Q = \frac{1}{2\pi}\oint dx~ \partial_x^2\phi = 0.
\fe
Relatedly, the lattice operator $e^{i\Delta_x \phi}$ becomes neutral under the momentum dipole symmetry. It does not lead to exponential operators, but to operators of the form $\partial_x \phi$, which transforms under the $\mathbb Z$ momentum dipole symmetry inhomogeneously. In contrast, $e^{i\phi}$ is a well-defined local operator charged under the $U(1)$ momentum symmetry. See Table \ref{tbl:lat-cont-chargedop} for a comparison of charged operators on the lattice and in the continuum theory.
\renewcommand{\arraystretch}{1.5}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Lattice & Continuum dipole $\phi$-theory & Continuum dipole $\hat \phi$-theory
\tabularnewline
\hline
$U(1)$ momentum & $U(1)$ momentum & does not act
\tabularnewline
\hline
$U(1)$ winding & does not act & does not act
\tabularnewline
\hline
$\mathbb Z_{L_x}$ momentum dipole & $\mathbb Z$ momentum dipole & $\mathbb R$ momentum dipole
\tabularnewline
\hline
$\mathbb Z_{L_x}$ winding dipole & $ U(1)$ winding dipole & $\mathbb R$ winding dipole
\tabularnewline
\hline
\end{tabular}
\caption{Relation between the global symmetries on the lattice and in various continuum limits. The global symmetries of the continuum dipole $\Phi$-theory are the same as in the second column after swapping ``momentum'' and ``winding.''}\label{tbl:lat-cont-sym}
\end{center}
\end{table}
\renewcommand{\arraystretch}{1.6}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Theory & $E_\text{wave}$ & $E_\text{mom}$ & $E_\text{wind}$
\tabularnewline
\hline
Modified Villain model & $\frac{1}{L_x^2}$ & $\frac{1}{L_x}$ & $\frac{1}{L_x}$
\tabularnewline
\hline
Continuum dipole $\phi$-theory & $\frac{1}{\ell_x^2}$ & $\frac{1}{\ell_x}$ & $\frac{1}{a^2\ell_x}$
\tabularnewline
\hline
Continuum dipole $\hat \phi$-theory & $\frac{1}{\ell_x^2}$ & $\frac{1}{a\ell_x}$ & $\frac{1}{a\ell_x}$
\tabularnewline
\hline
\end{tabular}
\caption{Energies of the three kinds of states on the lattice and in various continuum limits. The energies of the continuum dipole $\Phi$-theory are the same as in the third row after swapping ``momentum'' and ``winding.'' The fact that the energy of the winding states of the $\phi$-theory diverges in the continuum limit is compatible with the lack of local winding operators in this theory (Table \ref{tbl:lat-cont-chargedop}). A similar comment applies to the momentum and winding states in the $\hat\phi$-theory.}\label{tbl:lat-cont-energies}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Symmetry & Lattice & $\phi$-theory & $\Phi$-theory & $\hat \phi$-theory
\tabularnewline
\hline
Momentum & $e^{i\phi}$ & $e^{i\phi}$ & -- & --
\tabularnewline
\hline
Winding & $e^{i\tilde \phi}$ & -- & $e^{i\tilde \phi}$ & --
\tabularnewline
\hline
Momentum dipole & $e^{i\Delta_x\phi}$ & $\partial_x\phi$ & $e^{i\partial_x \Phi}$ & $\partial_x \hat \phi$
\tabularnewline
\hline
Winding dipole & $e^{i\Delta_x\tilde \phi}$ & $e^{i\partial_x \tilde \Phi}$ & $\partial_x \tilde \phi$ & $\partial_x \check \phi$
\tabularnewline
\hline
\end{tabular}
\caption{Some local operators that transform under various symmetries on the lattice and in the three continuum limits. The exponentiated operators shown here transform linearly under their respective symmetry transformations, whereas the others transform inhomogeneously. Here, $\phi$ and $\tilde \phi$ have mass dimension $0$, $\Phi$ and $\tilde \Phi$ have mass dimension $-1$, and $\hat \phi$ and $\check\phi$ have mass dimension $-\frac12$. Note that the exponent is always dimensionless, and its coefficient is always an integer. Consequently, no continuum theory has local operators of the form $e^{i\phi}$ and $e^{i\partial_x \phi}$ at the same time. The continuum fields $\Phi$ and $\tilde \phi$ are dual to each other as discussed in Section \ref{sec:dipphi-cont3}. Similarly, the continuum fields $\phi$ and $\tilde \Phi$ are dual to each other. The continuum fields $\hat \phi$ and $\check \phi$ are dual to each other, as discussed in Section \ref{sec:dipphi-cont1}.}\label{tbl:lat-cont-chargedop}
\end{center}
\end{table}
As mentioned around \eqref{xtauscal}, the dipole $\phi$-theory is not scale invariant under any scaling of $x$ and $\tau$ because the periodicity of $\phi$ is not preserved under this scaling.
The energies of the three kinds of states in this limit are (see Table \ref{tbl:lat-cont-energies})
\ie\label{momeli}
E_\text{wave} \sim \frac{1}{\sqrt{\mu_0 \mu}}\frac{1}{\ell_x^2}~,\qquad E_\text{mom} \sim \frac{1}{\mu_0\ell_x}~,\qquad E_\text{wind} \sim \frac{1}{\mu a^2 \ell_x}~.
\fe
We see that the plane waves and momentum states have finite energy, but the winding states are infinitely heavy. This is consistent with the fact that the $U(1)$ winding symmetry of the lattice model does not exist in this continuum limit. Here we see that the dipole $\phi$-theory is not self-dual. We will study the dual theory in Section \ref{sec:dipphi-cont3}.
\subsubsection{1+1d dipole $\Phi$-theory}\label{sec:dipphi-cont3}
We consider a different continuum limit of \eqref{dipphi-modVill-action} by scaling
\ie\label{dipphi-dual-scaling}
\beta_0 = \frac{M_0 a^3}{a_\tau}~, \qquad \beta = \frac{a_\tau}{M a}~,
\fe
where $M_0$ and $M$ are fixed continuum coupling constants with mass dimensions $2$ and $0$, respectively.
At the same time, we define the continuum field $\Phi$ as
\ie
\Phi \equiv a\bar\phi\,.
\fe
Recall that $\bar\phi$ is the gauge-fixed version of $\phi$ in the modified Villain lattice model.
The action of this continuum limit is
\ie\label{dipphi-dualaction}
S = \oint d\tau dx~ \left[ \frac{M_0}{2} (\partial_\tau \Phi)^2 + \frac{1}{2M} (\partial_x^2 \Phi)^2 \right]~.
\fe
This action is very similar to that of the dipole $\phi$-theory \eqref{dipphi-action1}, but $\Phi$ has a different mass dimension of $-1$, and a different identification
\ie\label{tildecx}
\Phi(\tau,x) \sim \Phi(\tau,x) + c + 2\pi x~,
\fe
where $ c$ is an arbitrary constant. We will refer to this theory as the 1+1d dipole $\Phi$-theory.
Using the standard duality transformation in the continuum, we find that the $\Phi$-theory is dual to the 1+1d dipole $\phi$-theory \eqref{dipphi-action1}:
\ie
S = \oint d\tau dx~ \left[ \frac{\tilde \mu_0}{2} (\partial_\tau \tilde \phi)^2 + \frac{1}{2\tilde \mu} (\partial_x^2 \tilde \phi)^2 \right]~,
\fe
with the following identification of the couplings:
\ie
&M_0 = {\tilde \mu \over (2\pi)^2}\,,~~~M = (2\pi)^2 \tilde \mu_0\,.
\fe
Here, $\tilde \phi$ has mass-dimension $0$. It is subject to the identification $\tilde\phi\sim\tilde\phi+2\pi$ and has the same global properties as $\phi$ of Section \ref{sec:dipphi-cont2}. The currents from the two dual descriptions are mapped to each other as follows:
\ie\label{dipphi-duality}
iM_0 \partial_\tau \Phi = \frac{1}{2\pi} \partial_x^2 \tilde \phi ~, \qquad \frac{i}{M} \partial_x^2 \Phi = \frac{1}{2\pi} \partial_\tau \tilde \phi~.
\fe
This is the continuum version of the duality in the modified Villain lattice model discussed in Section \ref{sec:villainduality}.
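As a consistency check (schematic, in Euclidean signature), the map \eqref{dipphi-duality} reproduces the equation of motion of $\tilde\phi$ with exactly the couplings quoted above. Taking $\partial_\tau$ of the second equation in \eqref{dipphi-duality} and using the first,
\ie
\frac{1}{2\pi}\partial_\tau^2\tilde\phi = \frac{i}{M}\,\partial_x^2\partial_\tau\Phi = \frac{i}{M}\,\partial_x^2\left(\frac{1}{2\pi i M_0}\,\partial_x^2\tilde\phi\right) = \frac{1}{2\pi M M_0}\,\partial_x^4\tilde\phi~,
\fe
which agrees with the equation of motion $\tilde\mu_0\partial_\tau^2\tilde\phi = \frac{1}{\tilde\mu}\partial_x^4\tilde\phi$ precisely when $MM_0=\tilde\mu_0\tilde\mu$, as implied by $M_0 = \frac{\tilde\mu}{(2\pi)^2}$ and $M=(2\pi)^2\tilde\mu_0$.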
We now discuss the global symmetries in this continuum limit.
This theory corresponds to the third option in the list following \eqref{noncommutea}.
\begin{itemize}
\item
Since the constant shift of the continuum field $\Phi$ is part of the gauge symmetry \eqref{tildecx}, the $U(1)$ momentum charge \eqref{U1momentumVillain} vanishes:
\ie
Q = i M_0 \oint dx~ \partial_\tau\Phi = 0\,.
\fe
We conclude that the $U(1)$ momentum symmetry on the lattice does not act in the continuum $\Phi$-theory. This is consistent with the fact that the would-be charged local operator $e^{i\Phi}$ does not exist in the continuum theory because $\Phi$ has mass dimension $-1$ and has the gauge symmetry \eqref{tildecx}. The analogy mentioned in footnote \ref{analowss} is applicable also here.
\item The momentum dipole symmetry \eqref{dipphi-modVill-momdip} shifts $\Phi\rightarrow \Phi+\theta x$ with $\theta\sim\theta+2\pi$.
The charged operator on the lattice is $e^{i\Delta_x \phi}$, which becomes a non-trivial local operator $e^{i \partial_x \Phi}$ in the continuum. The $\mathbb{Z}_{L_x}$ momentum dipole symmetry becomes a $U(1)$ symmetry in the continuum.
\item The symmetry group of the winding symmetry \eqref{dipphi-modVill-windcharge} is still $U(1)$ in the continuum $\Phi$-theory. The $U(1)$ winding charge $\tilde Q$ is
\ie
\tilde Q = {1\over 2\pi} \oint dx \,\partial_x^2\Phi\,.
\fe
The minimally charged configuration is
\ie
\Phi = \frac{2\pi x (x-\ell_x)}{2\ell_x}\,.
\fe
The charged local operator is $e^{i\tilde \phi}$, where $\tilde \phi$ is the dimensionless dual field of $\Phi$.
\item The $\mathbb{Z}_{L_x}$ winding dipole symmetry operator \eqref{dipphi-modVill-winddipo} on the lattice becomes a $\mathbb{Z}$ symmetry in the continuum $\Phi$-theory. The symmetry operator in the continuum is
\ie\label{dipphi-dipolesymop}
\tilde U_{\tilde m} = \exp\left[ -\frac{i\tilde m}{\ell_x} \left(\int_{x_0}^{x_0+\ell_x} dx~\partial_x \Phi - 2\pi x_0\tilde Q\right) \right]~,\qquad \text{for}\qquad \tilde m\in \mathbb Z~.
\fe
Although $\partial_x \Phi$ is not a well-defined operator, the symmetry operator $\tilde U_{\tilde m}$ is well defined. Moreover, it is independent of $x_0$. A configuration that carries a nontrivial $\mathbb Z$ dipole charge is $\Phi = \theta x$ (where we identify $\theta\sim\theta +2\pi$). The charged local operator is $\partial_x \tilde \phi$, which realizes the $\mathbb Z$ winding dipole symmetry inhomogeneously.
\end{itemize}
See Table \ref{tbl:lat-cont-sym} for the relation between the global symmetries in this continuum limit and on the lattice.
As in the discussion of the $\phi$-theory in Section \ref{sec:dipphi-cont2}, because of the identification \eqref{tildecx}, this theory is also not scale invariant under any scaling of $x$ and $\tau$.
The energies of the three kinds of states in this limit are
\ie
E_\text{wave} \sim \frac{1}{\sqrt{M_0M}}\frac{1}{\ell_x^2}~,\qquad E_\text{mom} \sim \frac{1}{M_0a^2 \ell_x}~,\qquad E_\text{wind} \sim \frac{1}{M \ell_x}~.
\fe
\subsubsection{1+1d dipole $\hat \phi$-theory}\label{sec:dipphi-cont1}
We can also study the low-energy limit with fixed lattice couplings \eqref{fixedbetas} and focus on the lightest states, the plane waves, ignoring the momentum and winding states. (In addition, each state appears an infinite number of times because of the momentum and winding dipole symmetries.) This leads to a self-dual spectrum.
We scale the lattice coupling constants as
\ie
\beta_0 = \frac{\hat \mu_0 a^2}{a_\tau}~,\qquad \beta = \frac{a_\tau}{\hat \mu a^2}~,
\fe
where $\hat \mu_0$ and $\hat \mu$ are fixed continuum coupling constants with mass dimension $1$.\footnote{We will soon see that this theory is scale invariant, so fixing the lattice coupling constants $\beta_0,\beta$ is equivalent to fixing the continuum coupling constants $\hat \mu_0,\hat \mu$.} We also define a new continuum field,
\ie
\hat \phi = \sqrt{a} \,\bar\phi\,,
\fe
with mass dimension $-\frac12$. Then the action \eqref{dipphi-modVill-gaugefixaction} becomes
\ie\label{dipphi-action2}
S = \oint d\tau dx~ \left[ \frac{\hat \mu_0}{2} (\partial_\tau \hat \phi)^2 + \frac{1}{2\hat \mu} (\partial_x^2 \hat \phi)^2 \right]~.
\fe
The field $\hat \phi$ has gauge symmetry
\ie\label{dipphi-gaugesym2}
\hat \phi(\tau,x) \sim \hat \phi(\tau,x) + \hat c~,
\fe
where $\hat c$ is a real constant. In other words, the zero mode of $\hat \phi$ is removed. This means that $\partial_x \hat \phi$ is a well-defined operator. We will refer to this theory as the 1+1d dipole $\hat \phi$-theory.
The dipole $\hat \phi$-theory is self-dual with $\hat \mu_0 \leftrightarrow \frac{\hat \mu}{(2\pi)^2}$. The field $\hat \phi$ and its dual field $\check\phi$ are related by the duality map
\ie\label{dipphi-duality2}
i\hat \mu_0 \partial_\tau \hat \phi = \frac{1}{2\pi} \partial_x^2 \check \phi ~, \qquad \frac{i}{\hat \mu} \partial_x^2 \hat \phi = \frac{1}{2\pi} \partial_\tau \check \phi~.
\fe
$\check \phi$ has the same gauge symmetry \eqref{dipphi-gaugesym2} as $\hat \phi$.
Let us study the fate of various global symmetries of the modified Villain model in this continuum limit.
This theory corresponds to the fourth option in the list following \eqref{noncommutea}.
\begin{itemize}
\item
The $U(1)$ momentum symmetry $\hat\phi\rightarrow \hat\phi+\hat c$ does not act in the dipole $\hat \phi$-theory because it is part of the gauge symmetry \eqref{dipphi-gaugesym2}.
\item
The $U(1)$ winding symmetry does not act in the dipole $\hat \phi$-theory because $\partial_x\hat\phi$ is a well-defined operator and the winding charge vanishes,
\ie
\tilde Q = \frac{1}{2\pi}\oint dx~ \partial_x^2\hat\phi = 0.
\fe
This is consistent with the self-duality of $\hat \phi$-theory.
\item
The $\mathbb Z_{L_x}$ momentum dipole symmetry becomes an $\mathbb R$ momentum dipole symmetry, which acts as
\ie
\hat \phi(\tau,x) \rightarrow \hat \phi(\tau,x) + c \frac{x}{\ell_x}~,
\fe
where $c$ is a real constant. This action seems inconsistent with the periodic boundary conditions in space. However, because of the gauge symmetry \eqref{dipphi-gaugesym2} it maps $\hat\phi$ between different twisted sectors of the same theory and therefore it is an allowed transformation.
\item
Similarly, the $\mathbb Z_{L_x}$ winding dipole symmetry becomes an $\mathbb R$ winding dipole symmetry. A nontrivial charged configuration with charge $q \in \mathbb R$ is
\ie
\hat \phi(\tau,x) = -q \frac{x}{\ell_x}~.
\fe
Again, because of the gauge symmetry \eqref{dipphi-gaugesym2}, this is a valid configuration.
\end{itemize}
The self-duality exchanges the two $\mathbb R$ dipole symmetries. Moreover, they do not commute with each other, resulting in infinite ground state degeneracy.
See Table \ref{tbl:lat-cont-sym} for the relation between the global symmetries in this continuum limit and on the lattice.
Under the scale transformation, $x\rightarrow \lambda x$, $\tau \rightarrow \lambda^2\tau$, we can scale the field $\hat \phi$ as
\ie
\hat \phi \rightarrow \sqrt{\lambda} \hat \phi~,
\fe
which leaves the action \eqref{dipphi-action2} invariant. It does not change the identification \eqref{dipphi-gaugesym2}. Therefore, the dipole $\hat \phi$-theory is scale invariant.
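Explicitly, under $x\rightarrow\lambda x$, $\tau\rightarrow\lambda^2\tau$, $\hat\phi\rightarrow\sqrt{\lambda}\,\hat\phi$, each ingredient of \eqref{dipphi-action2} scales as
\ie
d\tau\,dx \rightarrow \lambda^3\, d\tau\,dx~,\qquad (\partial_\tau\hat\phi)^2 \rightarrow \lambda^{-4}\cdot\lambda\,(\partial_\tau\hat\phi)^2~,\qquad (\partial_x^2\hat\phi)^2 \rightarrow \lambda^{-4}\cdot\lambda\,(\partial_x^2\hat\phi)^2~,
\fe
so the factor of $\lambda^3$ from the measure cancels the factor of $\lambda^{-3}$ from each term in the Lagrangian.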
The energies of the three kinds of states in this limit are
\ie
E_\text{wave} \sim \frac{1}{\sqrt{\hat \mu_0 \hat \mu}}\frac{1}{\ell_x^2}~,\qquad E_\text{mom} \sim \frac{1}{\hat \mu_0 a \ell_x}~,\qquad E_\text{wind} \sim \frac{1}{\hat \mu a \ell_x}~.
\fe
We see that the plane waves have finite energy, but the momentum and winding states are infinitely heavy. This is consistent with the facts that the $U(1)$ momentum and winding symmetries of the lattice model do not act in this continuum limit, and the dipole $\hat \phi$-theory is scale invariant.
Again, the analogy mentioned in footnote \ref{analowss} is applicable also here. In fact, here the analogy is even better because there are no finite energy states charged under either the momentum or the winding symmetry.
\subsection{More comments}\label{sec:dipphi-morecomments}
\subsubsection{Local operators in different continuum limits}
Let us compare the local operators that transform under the global symmetries in the modified Villain lattice model and its three continuum limits.
The modified Villain lattice model \eqref{dipphi-modVill-action} has two dimensionless compact scalar fields, $\phi$ and $\tilde \phi$.
The local operators include $e^{i\phi}, e^{i\Delta_x \phi}, e^{i\tilde\phi}, e^{i\Delta_x\tilde \phi}$, which are charged under the four global symmetries discussed in Section \ref{sec:villainsymmetry}. $e^{i\phi}$ is charged under the two momentum symmetries, while $e^{i\Delta_x \phi}$ is invariant under the $U(1)$ momentum symmetry, but transforms under the ${\mathbb Z}_{L_x}$ momentum dipole symmetry. Similarly, $e^{i\tilde\phi}$ is charged under the two winding symmetries, while $e^{i\Delta_x \tilde \phi}$ is invariant under the $U(1)$ winding symmetry, but transforms under the ${\mathbb Z}_{L_x}$ dipole winding symmetry.
In the continuum $\phi$-theory, we have the local operators $e^{i\phi}$ and $\partial_x \phi$, but not $e^{i\partial_x \phi}$ because $\phi$ has mass dimension 0.\footnote{Note that $\partial_x \phi$ is invariant under the $U(1)$ momentum symmetry, but is not invariant under the $\mathbb Z$ momentum dipole symmetry. It transforms inhomogeneously under it.}
On the other hand, in the continuum $\Phi$-theory, we have the local operator $e^{i\partial_x \Phi}$, but not $e^{i\Phi}$ because $\Phi$ has mass dimension $-1$. Finally, in the continuum $\hat \phi$-theory, we have the local operator $\partial_x \hat \phi$, but not $e^{i\hat \phi}$ because $\hat\phi$ has mass dimension $-\frac12$.
There are other local operators that cannot be written in terms of the fundamental fields in the Lagrangian, but rather in terms of their dual fields.
The dual fields of $\phi$, $\Phi$, and $\hat \phi$ are $\tilde \Phi$, $\tilde \phi$, and $\check \phi$, respectively. The dual fields have mass dimensions $-1$, $0$, and $-\frac12$, and the same identifications as $\Phi$, $\phi$, and $\hat \phi$, respectively.
See Sections \ref{sec:dipphi-cont3} and \ref{sec:dipphi-cont1} for the duality transformations.
In terms of the dual field, the continuum $\phi$-theory has an additional local operator $e^{i\partial_x \tilde \Phi}$, but not $e^{i\tilde \Phi}$. On the other hand, the continuum $\Phi$-theory has additional local operators $e^{i\tilde \phi}$, and $\partial_x \tilde \phi$, but not $e^{i\partial_x \tilde \phi}$. Finally, the continuum $\hat \phi$-theory has additional local operator $\partial_x \check \phi$, but not $e^{i\check \phi}$.
Importantly, none of the three continuum theories has local operators of the form $e^{i\phi}$ and $e^{i\partial_x\phi}$ at the same time. These operators are summarized in Table \ref{tbl:lat-cont-chargedop}.
\subsubsection{Robustness} \label{sec:dipphi-robustness}
The Lifshitz theory \eqref{intro-dipphi-action1t}, and specifically its 1+1d version \eqref{dipphi-action1}, is natural in the high energy physics sense. The absence of potential terms and two-derivative terms for $\phi$ is natural because such terms violate a global symmetry -- the two momentum symmetries. Furthermore, this continuum theory also has the winding symmetries, and therefore it is natural to set the coefficients of all the winding-violating operators to zero.
However, this theory might not be robust. If we start at short distances with a UV theory without some of these symmetries, some level of fine tuning might be needed in order to end up at long distances with this continuum theory. (See \cite{paper1}, for a review of naturalness vs. robustness in high energy physics and in condensed matter physics.)
Let us study a concrete example.
Consider a lattice action with $U(1)$ variables $e^{i\varphi}$ at each site and the action
\ie\label{dipphi-lat-actionc}
-{\beta_0}\sum_{\tau\text{-link}} \cos (\Delta_\tau \varphi) -{\beta} \sum_\text{site} \cos(\Delta_x^2 \varphi)~.
\fe
For $\beta_0,\beta \gg 1$, it is similar to the Villain theory \eqref{dipphi-Vill-action}
\ie\label{dipphi-Vill-actionc}
\frac{\beta_0}{2} \sum_{\tau\text{-link}} (\Delta_\tau \phi - 2\pi n_\tau)^2 + \frac{\beta}{2} \sum_\text{site} (\Delta_x^2 \phi - 2\pi n_{xx})^2 ~.
\fe
These two theories preserve the $U(1)$ momentum and $\mathbb Z_{L_x}$ momentum dipole symmetries, but they do not have the winding symmetries of the modified Villain action \eqref{dipphi-modVill-action}
\ie\label{dipphi-modVill-actionc}
\frac{\beta_0}{2} \sum_{\tau\text{-link}} (\Delta_\tau \phi - 2\pi n_\tau)^2 + \frac{\beta}{2} \sum_\text{site} (\Delta_x^2 \phi - 2\pi n_{xx})^2 + i\sum_{\tau\text{-link}}\tilde \phi (\Delta_\tau n_{xx} - \Delta_x^2 n_\tau)~,
\fe
or the continuum theory.
Following \cite{Gorantla:2021svj}, we can explore the relation between the theory \eqref{dipphi-lat-actionc} (or \eqref{dipphi-Vill-actionc}) and \eqref{dipphi-modVill-actionc} by perturbing the latter by the winding dipole violating operator
\ie\label{winding-violating}
\cos(\Delta_x\tilde \phi)~.
\fe
Starting with \eqref{dipphi-modVill-actionc}, we flow in the IR to the continuum theory. Then, we deform it by \eqref{winding-violating} to check whether the IR behavior changes.
If this deformation is irrelevant, then the modified Villain theory \eqref{dipphi-modVill-actionc} is robust and the continuum theory captures the long distance behavior of \eqref{dipphi-lat-actionc} and \eqref{dipphi-Vill-actionc}. If, however, it is relevant, then the modified Villain theory \eqref{dipphi-modVill-actionc} is not robust and the lattice actions \eqref{dipphi-lat-actionc} and \eqref{dipphi-Vill-actionc} do not flow in the IR to the theory described by the continuum model.
In our case, it is easy to see that the deformation \eqref{winding-violating} is relevant. The operator \eqref{winding-violating} carries dipole winding charge and therefore when it acts on a state with a given dipole charge, it changes this charge. In particular, it acts nontrivially in the space of ground states. As a result, if we deform the modified Villain model by this operator, the ground state degeneracy is removed.\footnote{Note that the discussion of the spectrum in Section \ref{spectrum} and, in particular, the ground state degeneracy of states charged under the two dipole symmetries is unlike the situation in \cite{paper1,Gorantla:2021bda}, where the charged states are heavier than the plane waves. Consequently, the theory discussed in \cite{paper1,Gorantla:2021bda} is robust.}
We could reach the same conclusion if we deformed the action by $\cos(\Delta_x\phi)$ instead of \eqref{winding-violating}. This would violate the momentum symmetries, but preserve the winding symmetries.
A closely related question is whether the infinite volume limit of our system exhibits spontaneous symmetry breaking (see \cite{Stahl:2021sgi,Lake:2022ico} for a recent discussion). Na\"ively, the answer is yes. We have an infinite number of ground states carrying various charges under the dipole symmetries and as we take the volume to infinity, the Hilbert space of the theory could split into separate superselection sectors and lead to spontaneous symmetry breaking. However, because of the singular nature of these states, we do not have a coherent picture of this phenomenon. It would be nice to understand this issue better.
\subsubsection{Relation to Lifshitz theory}\label{sec:dipphi-Lifshitz}
The theory \eqref{dipphi-action1} can be viewed as the 1+1d version of the Lifshitz theory
\ie
S = \oint d\tau d^dx~ \left[ \frac{\mu_0}{2} (\partial_\tau \phi)^2 + \frac{1}{2\mu} \left(\sum_i\partial_i^2 \phi\right)^2 \right]~.
\fe
In most of the literature, the scalar field $\phi$ is taken to be noncompact and the theory has a Lifshitz scale symmetry
\ie\label{eq:lif_scale}
\tau\rightarrow \lambda^2\tau~,\quad x^i\rightarrow \lambda x^i~,\quad \phi\rightarrow \lambda^{\frac{2-d}{2}}\phi~.
\fe
In this section, we have considered various versions of the 1+1d Lifshitz theory with different identifications of $\phi$. Typically, imposing an identification on $\phi$ breaks the Lifshitz scale symmetry. For example, the identifications $\phi\sim\phi+2\pi$ in Section \ref{sec:dipphi-cont2} and $\Phi\sim\Phi+ c+2\pi x$ in Section \ref{sec:dipphi-cont3} make the theory incompatible with the Lifshitz scale symmetry.
The situation in the $\hat \phi$-theory in Section \ref{sec:dipphi-cont1} is different. Here, the identification $\hat\phi\sim \hat\phi+\hat c$ removes the zero mode of the field and it is compatible with the scale symmetry. Indeed, it describes the low-energy limit of the 1+1d modified Villain lattice model \eqref{dipphi-modVill-action} with fixed coupling.
In 2+1d, the Lifshitz scale transformation does not act on the scalar field $\phi$, so it is natural to consider a compact version of the Lifshitz theory with identification $\phi\sim\phi+2\pi$ \cite{Henley1997, Moessner2001, Vishwanath:2004, Fradkin:2004, Ardonne:2003wa,2005,2018PhRvB..98l5105M,Yuan:2019geh,Lake:2022ico}. Such a theory arises naturally in the study of quantum dimer models \cite{Rokhsar1988,Henley1997,Fradkin:2004} and the dipolar Bose-Hubbard model \cite{Lake:2022ico}. Most of our discussions about 1+1d compact Lifshitz theories, including their global symmetries and infinite ground state degeneracy, are applicable in 2+1d. In particular, the infinite ground state degeneracy due to different winding sectors has been noticed in the quantum dimer model \cite{Rokhsar1988} and its effective description in terms of the compact Lifshitz theory in \cite{Henley1997,Fradkin:2004}.
Unlike the 1+1d theory, the winding symmetry of the 2+1d theory is actually robust. In 2+1d, the states charged under the winding dipole symmetry are extended in space. They are not created by point-like operators, but by line operators. Consequently, the theory is robust under adding operators violating this winding symmetry. This is similar to the fact that the standard 2+1d $U(1)$ gauge theory is not robust under deformations breaking its magnetic symmetry, which is the famous Polyakov mechanism, while the similar 3+1d theory is robust.
\section{1+1d $U(1)$ dipole gauge theory}\label{sec:dipA}
\subsection{First look at the continuum theory}\label{sec:dipAcontinuum}
We can gauge the momentum global symmetries of the dipole $\phi$-theory by coupling it to the gauge fields $(A_\tau, A_{xx})$ of mass dimensions $1$ and $2$, respectively. The gauge symmetry is
\ie\label{dipA-cont2-gaugesym}
A_\tau \sim A_\tau + \partial_\tau \alpha~, \qquad A_{xx} \sim A_{xx} + \partial_x^2 \alpha~,
\fe
where $\alpha$ is the gauge parameter with mass dimension $0$. The global properties of $\alpha$ are the same as those of $\phi$ in Section \ref{sec:dipphi-cont2}.
The continuum action of the pure gauge theory is
\ie\label{dipA-conti-action}
S = \oint d\tau dx~ \frac{1}{2g^2} E_{xx}^2 ~,
\fe
where
\ie
E_{xx} = \partial_\tau A_{xx} - \partial_x^2 A_\tau
\fe
is the electric field with mass dimension $3$.
Here, $g$ is a fixed continuum coupling of mass dimension 2.
We will refer to this continuum action as the 1+1d $U(1)$ dipole $A$-theory.
Below we will discuss some unusual subtleties of this continuum theory.
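As a quick consistency check of these dimension assignments (a simple count, using $[d\tau]=[dx]=-1$), the action is dimensionless:
\ie
[E_{xx}] = [\partial_\tau A_{xx}] = [\partial_x^2 A_\tau] = 3~,\qquad \left[\frac{1}{2g^2}\,E_{xx}^2\right] = 2\cdot 3 - 2\cdot 2 = 2~,
\fe
which is compensated by the dimension $-2$ of the measure $d\tau\, dx$.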
On a Euclidean torus, we can consider the following large gauge transformation
\ie
\alpha = \frac{2\pi n_\tau \tau}{\ell_\tau}+\frac{2\pi n_x x}{\ell_x}~,\quad n_\tau, n_x\in\mathbb{Z}~.
\fe
It shifts the gauge fields by
\ie\label{eq:large_gauge}
(A_\tau,A_{xx})\rightarrow \left(A_\tau+\frac{2\pi n_\tau}{\ell_\tau},A_{xx}\right)~.
\fe
Note that the gauge transformation associated with $n_x$ acts trivially on the gauge fields.
The theory has gauge invariant line defects
\ie\label{eq:defect}
\exp\left( in \oint d\tau~ A_\tau(\tau,x) \right)~,
\fe
with the integer $n$ quantized by the large gauge transformation \eqref{eq:large_gauge}. They represent world-lines of fractons.
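Explicitly, under the large gauge transformation \eqref{eq:large_gauge},
\ie
\oint d\tau~ A_\tau ~\rightarrow~ \oint d\tau~ A_\tau + 2\pi n_\tau~,
\fe
so the defect \eqref{eq:defect} is single-valued if and only if $n\in\mathbb{Z}$.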
We also have another observable
\ie\label{dipA-dipdef}
\oint_{\mathcal C} \left[d\tau~ \partial_x A_\tau(\tau,x) + dx~ A_{xx}(\tau,x) \right] ~,
\fe
where $\mathcal C$ is a closed curve in the spacetime.
When $\mathcal C$ is purely space-like, \eqref{dipA-dipdef} simplifies to a gauge invariant operator
\ie\label{eq:operator}
\oint dx\, A_{xx} ~
\fe
at a fixed time. Both the general observable \eqref{dipA-dipdef} and its special case, the operator \eqref{eq:operator}, are gauge invariant, including under the large gauge transformation \eqref{eq:large_gauge}, and they do not need to be exponentiated. In fact, they have mass dimension $+1$, and therefore it makes no sense to exponentiate them.
Instead, the integrated version of \eqref{dipA-dipdef}
\ie\label{dipA-dipdefecti}
\oint_{\mathcal C} \left[d\tau~ (A_\tau(\tau,x+x_0) - A_\tau(\tau,x)) + dx~ \int_x^{x+x_0}dx'~ A_{xx}(\tau,x') \right] ~,
\fe
with fixed $x_0$ is dimensionless and can be exponentiated to the defect
\ie\label{dipA-dipdefectid}
\exp\left(ir\oint_{\mathcal C} \left[d\tau~ (A_\tau(\tau,x+x_0) - A_\tau(\tau,x)) + dx~ \int_x^{x+x_0}dx'~ A_{xx}(\tau,x') \right] \right)~,
\fe
with any real $r$. In the special case where $r$ is an integer, this can be interpreted as a dipole of fractons \eqref{eq:defect} with opposite charges $\pm r$ separated by $x_0$. More generally, for real $r$, it is a dipole of fractionally charged sources.
It is interesting that while the fracton defect \eqref{eq:defect} is not mobile, the dipole defect \eqref{dipA-dipdefectid} is mobile. Below, in Section \ref{sec:dipA-cont}, we will discuss this fact in more detail.
We see that the line defects and the line operators are very different. The defects \eqref{eq:defect} are exponentials with quantized coefficients, while the observables \eqref{dipA-dipdef} and their special cases, the operators \eqref{eq:operator}, do not have to be exponentiated. To understand this better, we will regularize the continuum theory using a Villain lattice model.
\subsection{Villain formulation}\label{sec:dipA-modVill}
The Villain version of the continuum $U(1)$ dipole gauge theory is described by the lattice action
\ie\label{dipA-modVill-action}
S = \frac{\Gamma}{2} \sum_{\tau\text{-link}} (\Delta_\tau \mathcal A_{xx}-\Delta_x^2 \mathcal A_\tau - 2\pi n_{\tau xx})^2 = \frac{\Gamma}{2} \sum_{\tau\text{-link}} \mathcal E_{xx}^2~,
\fe
where $\Gamma$ is a coupling constant, $n_{\tau xx}$ is an integer-valued gauge field and $\mathcal A_\tau,\mathcal A_{xx}$ are real-valued gauge fields. Here, the electric field,
\ie\label{calEdef}
\mathcal E_{xx} = \Delta_\tau \mathcal A_{xx} - \Delta_x^2 \mathcal A_\tau - 2\pi n_{\tau xx}~,
\fe
is the only gauge invariant field strength under the gauge symmetry
\ie\label{dipA-modVill-gaugesym}
&\mathcal A_\tau \sim \mathcal A_\tau + \Delta_\tau \alpha + 2\pi k_\tau~,
\\
&\mathcal A_{xx} \sim \mathcal A_{xx} + \Delta_x^2 \alpha + 2\pi k_{xx}~,
\\
&n_{\tau xx} \sim n_{\tau xx} + \Delta_\tau k_{xx} - \Delta_x^2 k_\tau~,
\\
&\alpha\in {\mathbb R}~,
\\
&k_\tau,k_{xx}\in {\mathbb Z}~.
\fe
The gauge parameters have their own gauge symmetries
\ie
&\alpha\sim\alpha+c+\frac{2\pi m\hat x}{L_x}+2\pi k~,
\\
&k_{\tau}\sim k_{\tau} -\Delta_\tau k~,
\\
&k_{xx}\sim k_{xx}-\Delta_x^2 k-m(\delta_{\hat x,0}-\delta_{\hat x,L_x-1})~,
\\
&c\in {\mathbb R}~,\\
&m,k\in {\mathbb Z}~,
\fe
with $c$ and $m$ constants on the lattice.
The gauge configurations have a quantized $\mathbb{Z}$-valued electric flux
\ie\label{eleflui}
\frac{1}{2\pi}\sum_{\tau\text{-link}} \mathcal E_{xx} = - \sum_{\tau\text{-link}} n_{\tau xx}\in \mathbb{Z}~,
\fe
and a quantized $\mathbb{Z}_{L_x}$-valued electric dipole flux
\ie\label{ZLxefi}
-\sum_{\tau\text{-link}} \hat x n_{\tau xx}\text{ mod } L_x~.
\fe
Both \eqref{eleflui} and \eqref{ZLxefi} are gauge invariant.
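Let us verify this. Under the gauge transformation \eqref{dipA-modVill-gaugesym}, the shift of the flux \eqref{eleflui} is
\ie
\sum_{\tau\text{-link}} \left( \Delta_\tau k_{xx} - \Delta_x^2 k_\tau \right) = 0~,
\fe
because both terms telescope on the periodic lattice. For the dipole flux \eqref{ZLxefi}, the term $\sum \hat x\, \Delta_\tau k_{xx}$ telescopes in $\hat \tau$, while summing $\sum \hat x\, \Delta_x^2 k_\tau$ by parts twice leaves an integer multiple of $L_x$ coming from the discontinuity of the coordinate $\hat x$ across $\hat x = L_x-1$. This is why \eqref{ZLxefi} is well-defined only modulo $L_x$.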
In contrast to the dipole $\phi$-theory, the $U(1)$ dipole gauge theory has no ``vortices.'' Relatedly, there is no gauge invariant field strength of the integer gauge field $n_{\tau xx}$. So we do not modify the Villain action \eqref{dipA-modVill-action}.
We can add a theta-term to the action \eqref{dipA-modVill-action}:
\ie
\frac{i\theta}{2\pi} \sum_{\tau\text{-link}} \mathcal E_{xx} = - i\theta \sum_{\tau\text{-link}} n_{\tau xx}~,
\fe
where $\theta \sim \theta + 2\pi$.\footnote{We could also add a discrete theta-term associated with the $\mathbb{Z}_{L_x}$-valued dipole flux \eqref{ZLxefi}. However, the dipole flux is not invariant under the {time-like $\mathbb Z_{L_x}$ dipole symmetry} (see Section \ref{sec:dipA-modVill-sym}). Therefore, adding a nontrivial discrete theta-term makes the partition function vanish.}
The full action is
\ie\label{dipA-modVill-fullaction}
S = \frac{\Gamma}{2} \sum_{\tau\text{-link}} \mathcal E_{xx}^2 + \frac{i\theta}{2\pi} \sum_{\tau\text{-link}} \mathcal E_{xx}~.
\fe
Note that we could not add such a $\theta$-term to the continuum action \eqref{dipA-conti-action} since the electric field $E_{xx}$ has mass dimension $+3$. We will discuss this further below.
The Villain model \eqref{dipA-modVill-fullaction} has gauge invariant operators
\ie\label{dipA-modVill-operator}
\exp\left( in\sum_{\text{site: fixed }\hat \tau} \mathcal A_{xx} \right)~,\quad n\in\mathbb{Z}~.
\fe
Unlike the operators \eqref{eq:operator} in the continuum, these lattice operators are gauge invariant only after exponentiation, because of the integer gauge symmetry generated by $k_{xx}$.
The model also has defects that describe immobile particles, i.e., fractons:
\ie\label{dipA-modVill-fracton}
\exp\left( in \sum_{\tau\text{-link: fixed }\hat x} \mathcal A_\tau \right)~,\quad n\in\mathbb{Z}~.
\fe
These are the lattice counterparts of the continuum defects \eqref{eq:defect}.
A dipole can move as long as its separation is fixed. This is described by
\ie\label{dipA-modVill-dipdefect}
\exp\left( in\sum_{\tau x\text{-plaq: }\hat \tau < \hat \tau_0} \Delta_x \mathcal A_\tau(\hat \tau,\hat x_1) + i n\sum_{\text{site: }\hat x_1 < \hat x \le \hat x_2} \mathcal A_{xx}(\hat \tau_0,\hat x) + in\sum_{\tau x\text{-plaq: }\hat \tau \ge \hat \tau_0} \Delta_x \mathcal A_\tau(\hat \tau,\hat x_2) \right)~.
\fe
The coefficients of these defects are quantized because of the integer gauge symmetry of $(k_\tau,k_{xx})$ in \eqref{dipA-modVill-gaugesym}.
Given the gauge invariant defects \eqref{dipA-modVill-fracton} and the gauge invariant field strength \eqref{calEdef} we can write additional gauge invariant defects
\ie\label{dipA-modVill-dipdefect-frac}
&\exp\left( in \sum_{\tau\text{-link}} \Delta_x \mathcal A_\tau(\hat \tau,\hat x+\hat x_0) \right)\exp\left( \frac{in}{\hat x_0} \sum_{\tau\text{-link: }\hat x < \hat x' < \hat x + \hat x_0} (\hat x' - \hat x)\mathcal E_{ xx}(\hat \tau,\hat x') \right)
\\
&=\exp\left( \frac{in}{\hat x_0} \sum_{\tau\text{-link}} [\mathcal A_\tau(\hat \tau,\hat x + \hat x_0) - \mathcal A_\tau(\hat \tau,\hat x)] - \frac{2\pi in}{\hat x_0}\sum_{\tau\text{-link: }\hat x < \hat x' < \hat x + \hat x_0} (\hat x' - \hat x) n_{\tau xx}(\hat \tau,\hat x') \right)~,
\fe
for any $\hat x_0$, where $n\in \mathbb Z$.
These defects can be interpreted as the worldlines of a dipole of fractional charges $\pm {n\over \hat x_0}$ at $\hat x$ and at $\hat x +\hat x_0$. Surprisingly, these dipoles are mobile as long as their separation is fixed:
\ie\label{movingfd}
&\exp\left( \frac{in}{\hat x_0} \sum_{\tau\text{-link: }\hat \tau < \hat \tau_0} [\mathcal A_\tau(\hat \tau,\hat x_1 + \hat x_0) - \mathcal A_\tau(\hat \tau,\hat x_1)] - \frac{2\pi in}{\hat x_0} \sum_{\tau\text{-link: }\hat \tau < \hat \tau_0 \atop \hat x_1 < \hat x' < \hat x_1 + \hat x_0} (\hat x' - \hat x_1) n_{\tau xx}(\hat \tau,\hat x') \right)
\\
&\times \exp\left( \frac{in}{\hat x_0} \sum_{\hat x_1 < \hat x \le \hat x_2} ~~\sum_{\text{site: }\hat x \le \hat x' < \hat x + \hat x_0}\mathcal A_{xx}(\hat \tau_0,\hat x') \right)
\\
&\times \exp\left( \frac{in}{\hat x_0} \sum_{\tau\text{-link: }\hat \tau \ge \hat \tau_0} [\mathcal A_\tau(\hat \tau,\hat x_2 + \hat x_0) - \mathcal A_\tau(\hat \tau,\hat x_2)] - \frac{2\pi in}{\hat x_0} \sum_{\tau\text{-link: }\hat \tau \ge \hat \tau_0 \atop \hat x_2 < \hat x' < \hat x_2 + \hat x_0} (\hat x' - \hat x_2) n_{\tau xx}(\hat \tau,\hat x') \right)~.
\fe
\subsubsection{Relation to the 2+1d $U(1)$ tensor gauge theory}\label{sec:dipA-compactify}
Similar to the discussion in Section \ref{sec:dipphi-compactify}, here we will relate the 1+1d model \eqref{dipA-modVill-action} to a 2+1d model.
In \cite{Gorantla:2021svj}, the Villain version of the 2+1d $U(1)$ tensor gauge theory of \cite{paper1} was studied:
\ie\label{2+1dU1}
S = \frac{\Gamma}{2} \sum_{\tau\text{-link}} \mathcal E_{xy}^2 + \frac{i\theta}{2\pi} \sum_{\tau\text{-link}} \mathcal E_{xy}~.
\fe
Here $\mathcal E_{xy} = \Delta_\tau \mathcal A_{xy}-\Delta_x \Delta_y \mathcal A_\tau - 2\pi n_{\tau xy}$ is the gauge-invariant electric field, and $\mathcal A_\tau,\mathcal A_{xy}$ are real-valued gauge fields and $n_{\tau x y}$ is the Villain integer gauge field.
The gauge transformations are
\ie
&\mathcal A_\tau \sim \mathcal A_\tau + \Delta_\tau \alpha + 2\pi k_\tau~,
\\
&\mathcal A_{xy} \sim \mathcal A_{xy} + \Delta_x\Delta_y \alpha + 2\pi k_{xy}~,
\\
&n_{\tau xy} \sim n_{\tau xy} + \Delta_\tau k_{xy} - \Delta_x\Delta_y k_\tau~,
\fe
where $\alpha$ is a real-valued gauge parameter and $k_\tau,k_{xy}$ are integer-valued gauge parameters.
We refer the reader to \cite{Gorantla:2021svj} for more details of this 2+1d lattice model.
We will now place this 2+1d model on the slanted torus \eqref{slanted}.
Following an identical discussion in Section \ref{sec:dipphi-compactify}, we find the exact equivalence between the 2+1d model \eqref{2+1dU1} and the 1+1d model \eqref{dipA-modVill-action} under the identification
\ie
&\mathcal A_{xx}(\hat \tau,\hat x) = \mathcal A_{xy}(\hat \tau,\hat x-1,0)~, \qquad \mathcal A_\tau(\hat \tau,\hat x) = \mathcal A_\tau(\hat \tau,\hat x,0)~,
\\
&n_{\tau xx}(\hat \tau,\hat x) = n_{\tau xy}(\hat \tau,\hat x-1,0)~.
\fe
Due to this equivalence, the analysis in the rest of this subsection follows from the discussion of the 2+1d $U(1)$ tensor gauge theory on a slanted torus in \cite{Rudelius:2020kta}.
\subsubsection{Global symmetry} \label{sec:dipA-modVill-sym}
In the Villain model \eqref{dipA-modVill-fullaction}, the global electric symmetry acts as
\ie\label{eteranU}
\mathcal A_\tau \rightarrow \mathcal A_\tau + \Lambda_\tau ~, \qquad \mathcal A_{xx} \rightarrow \mathcal A_{xx} + \Lambda_{xx}~, \qquad n_{\tau xx} \rightarrow n_{\tau xx} + m_{\tau xx}~,
\fe
where $(\Lambda_\tau,\Lambda_{xx};m_{\tau xx})$ is a flat gauge field, i.e.,
\ie
\Delta_\tau \Lambda_{xx} - \Delta_x^2 \Lambda_\tau - 2\pi m_{\tau xx} = 0~.
\fe
The Noether current is\footnote{Recall our conventions, as discussed in footnote \ref{conventionsf}.}
\ie
&\mathcal J_\tau^{xx} = i\Gamma \mathcal E_{xx} - \frac{\theta}{2\pi}~,
\\
&\Delta_\tau \mathcal J_\tau^{xx} = 0~,\qquad \Delta_x^2 \mathcal J_\tau^{xx} = 0~,
\fe
where the equations in the second line are the conservation equation and the Gauss law, respectively.
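These two equations are simply the equations of motion of \eqref{dipA-modVill-fullaction}: since the $\theta$-term is linear in the gauge fields, its variation telescopes to zero on the torus, and varying $\mathcal A_{xx}$ and $\mathcal A_\tau$ gives
\ie
\Delta_\tau \left( \Gamma \mathcal E_{xx} \right) = 0~,\qquad \Delta_x^2 \left( \Gamma \mathcal E_{xx} \right) = 0~,
\fe
which are the conservation equation and the Gauss law for $\mathcal J_\tau^{xx}$ (the constant $-\frac{\theta}{2\pi}$ in the current is annihilated by both $\Delta_\tau$ and $\Delta_x^2$).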
Using the freedom in $\alpha$ and $(k_\tau,k_{xx})$, we can set
\ie\label{dipA-modVill-elecsym}
&\Lambda_\tau = \frac{c_\tau}{L_\tau} + 2\pi m {\hat x \over L_x} \delta_{\hat \tau,0}~, \qquad 0\le \hat x< L_x~,
\\
&\Lambda_{xx} = \frac{c_{xx}}{L_x}~,
\\
&m_{\tau xx} = - m \left( \delta_{\hat x, 0} - \delta_{\hat x, L_x-1} \right)\delta_{\hat \tau,0}~,
\fe
with $m=0,1,\ldots,L_x-1$, and circle-valued $c_\tau\sim c_\tau +2\pi$ and $c_{xx}\sim c_{xx}+2\pi$. This will ultimately lead to \eqref{timelike-dip-sym}. As we will discuss below, the parameters $c_{xx}$ and $(c_\tau, m)$ generate space-like and time-like global symmetries, respectively. In the rest of this sub-subsection, we will discuss the space-like symmetry, and leave the time-like symmetry to the next one.
In terms of a Hilbert space interpretation, the transformation associated with $c_{xx}$ is a standard symmetry transformation, acting on states and operators such as \eqref{dipA-modVill-operator}. Since $c_{xx}$ is circle-valued, it is related to a $U(1)$ space-like symmetry. This is to be contrasted with the $\mathbb{R}$ space-like symmetry in the continuum theory discussed in Section \ref{sec:dipAcontinuum}.
The charge of the symmetry is
\ie\label{dipA-modVill-charge}
Q^{xx}(\hat x) = \mathcal J_\tau^{xx}~.
\fe
Using the Gauss law and the fact that it should be single-valued, $Q^{xx}(\hat x)=\bar Q^{xx}$ is an integer constant, independent of $\hat x$.
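Explicitly, the general solution of the Gauss law $\Delta_x^2 Q^{xx} = 0$ is linear in $\hat x$,
\ie
Q^{xx}(\hat x) = q_1 \hat x + q_0~,
\fe
and single-valuedness on the spatial circle $\hat x \sim \hat x + L_x$ forces $q_1 = 0$, so $Q^{xx}(\hat x) = q_0$. The integrality of this constant follows from the fact that the parameter $c_{xx}$ is circle-valued: the symmetry operator $e^{ic_{xx} Q^{xx}}$ must be invariant under $c_{xx}\rightarrow c_{xx}+2\pi$.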
\subsubsection{Restricted mobility of defects}
How should we interpret the symmetries associated with $c_\tau$ and $m$ in \eqref{dipA-modVill-elecsym}?
The circle-valued parameter $c_\tau$ does not correspond to a standard symmetry. It does not act on states or operators. Instead, it acts on defects, such as \eqref{dipA-modVill-fracton}, so it is a $U(1)$ time-like symmetry. The symmetry operator of this $U(1)$ time-like symmetry is the bilocal operator
\ie\label{dipA-modVill-U1timelikesymop}
U_{c_\tau}(\hat\tau;\hat x_1,\hat x_2)=\exp\left(ic_\tau [\Delta_x \mathcal J^{xx}_\tau(\hat \tau,\hat x_2) - \Delta_x \mathcal J^{xx}_\tau(\hat \tau,\hat x_1)]\right)~.
\fe
Because of the Gauss law and the conservation equation it is invariant under deformations of $\hat x_1$, $\hat x_2$ and $\hat \tau$ as long as they do not cross any defect.
In particular, when there is no defect, the $U(1)$ time-like symmetry operator is trivial because of the Gauss law:
\ie\label{Uisatri}
U_{c_\tau}(\hat \tau;\hat x_1,\hat x_2) = \exp \left( ic_\tau \sum_{\tau\text{-link: }\hat x_1 < \hat x \le \hat x_2} \Delta_x^2 \mathcal J^{xx}_\tau(\hat \tau,\hat x) \right) = 1~.
\fe
However, it is nontrivial in the presence of defects, because defects modify the Gauss law. The action of this time-like symmetry on defects is
\ie\label{dipA-modVill-timelikesymaction}
U_{c_\tau}(\hat \tau;\hat x_1,\hat x_2) \exp\left(in \sum_{\tau\text{-link: fixed }\hat x} \mathcal A_\tau \right) = e^{in c_\tau}\exp\left(in \sum_{\tau\text{-link: fixed }\hat x} \mathcal A_\tau \right)~,\qquad \hat x_1<\hat x<\hat x_2~.
\fe
The action is trivial if $\hat x$ is not in between $\hat x_1$ and $\hat x_2$. See Figure \ref{fig:dipA-timelikesymaction}.
\begin{figure}[t]
\begin{center}
\raisebox{-1\height}{\includegraphics[scale=0.35]{axes1d.pdf}}~~~
\raisebox{-0.5\height}{\includegraphics[scale=0.25]{dipole.pdf}}~~~~~~~$\longrightarrow ~~~e^{i n c_\tau}$~~
\raisebox{-0.5\height}{\includegraphics[scale=0.25]{defect.pdf}}
\caption{The Euclidean configuration for the action \eqref{dipA-modVill-timelikesymaction} of the $U(1)$ time-like symmetry operator (red dots) with a circle-valued parameter $c_\tau$ on the fracton defect (blue line) of charge $n$.}\label{fig:dipA-timelikesymaction}
\end{center}
\end{figure}
This $U(1)$ time-like symmetry leads to a selection rule stating that amplitudes like
\ie\label{defectco}
\left\langle \prod_i e^{iq_i\sum_{\hat \tau} \mathcal A_\tau (\hat \tau, \hat x_i)}\right\rangle
\fe
are nonzero only when
\ie
\sum_i q_i =0~.
\fe
In infinite volume, we can send one of these defects to infinity and then the sum of the charges $q_i$ of the remaining defects can be nonzero. In that case, this selection rule becomes the statement of total charge conservation. The discussion here, using the time-like symmetry, gives a precise meaning to this charge conservation in compact space.
The $\mathbb{Z}_{L_x}$-valued parameter $m$ in \eqref{dipA-modVill-elecsym} also does not act on states and operators. Instead, it acts on the defects \eqref{dipA-modVill-fracton} and \eqref{dipA-modVill-dipdefect}, so it generates a $\mathbb{Z}_{L_x}$ time-like symmetry. The symmetry operator is the bilocal operator
\ie\label{dipA-modVill-ZLtimelikesymop}
\mathbf U_m(\hat \tau; \hat x_1,\hat x_2)&=
\exp\left( \frac{2\pi i m}{L_x} \left[\hat x_2 \Delta_x \mathcal J^{xx}_\tau(\hat\tau,\hat x_2 - 1) - \mathcal J^{xx}_\tau(\hat\tau,\hat x_2) \right] \right)
\\
&\quad \times \exp\left(- \frac{2\pi i m}{L_x} \left[\hat x_1 \Delta_x \mathcal J^{xx}_\tau(\hat\tau,\hat x_1 - 1) - \mathcal J^{xx}_\tau(\hat\tau,\hat x_1) \right] \right)~.
\fe
The exponent of $\mathbf U_m(\hat \tau; \hat x_1,\hat x_2)$ in \eqref{dipA-modVill-ZLtimelikesymop} is not well-defined because of the identification $\hat x \sim \hat x + L_x$. In contrast, $\mathbf U_m(\hat \tau; \hat x_1,\hat x_2)$ itself is well-defined because, under this identification, it changes by $\exp[2\pi i m \sum_{\tau\text{-link: }\hat x_1 \le \hat x < \hat x_2} \Delta_x^2 \mathcal J^{xx}_\tau(\hat \tau,\hat x)]$ which is trivial because $\Delta_x^2 \mathcal J^{xx}_\tau$ is an integer even in the presence of defects.
This $\mathbb{Z}_{L_x}$ time-like symmetry becomes the $\mathbb{Z}$ time-like symmetry of the continuum theory in Section \ref{sec:dipAcontinuum}.
Because of the Gauss law and the conservation equation, the operator $\mathbf U_m(\hat \tau; \hat x_1,\hat x_2)$ is invariant under deformations of $\hat x_1$, $\hat x_2$ and $\hat \tau$ as long as they do not cross any defects. In particular, when there is no defect, the $\mathbb Z_{L_x}$ time-like symmetry operator is trivial because of the Gauss law:
\ie
\mathbf U_m(\hat \tau;\hat x_1,\hat x_2) = \exp \left( \frac{2\pi i m}{L_x} \sum_{\tau\text{-link: }\hat x_1 \le \hat x < \hat x_2} \hat x\Delta_x^2 \mathcal J^{xx}_\tau(\hat \tau,\hat x) \right) = 1~.
\fe
However, it is nontrivial in the presence of defects, because defects modify the Gauss law. The action of this time-like symmetry on defects is
\ie\label{dipA-modVill-timelikedipsymaction}
\mathbf U_m(\hat \tau;\hat x_1,\hat x_2) \exp\left(in \sum_{\tau\text{-link: fixed }\hat x} \mathcal A_\tau \right) = e^{\frac{2\pi i n m \hat x}{L_x}}\exp\left(in \sum_{\tau\text{-link: fixed }\hat x} \mathcal A_\tau \right)~,\qquad \hat x_1<\hat x<\hat x_2~.
\fe
The action is trivial if $\hat x$ is not in between $\hat x_1$ and $\hat x_2$.
The dipole defects \eqref{dipA-modVill-dipdefect} and \eqref{dipA-modVill-dipdefect-frac} (or \eqref{movingfd}) carry charge $n$ under the $\mathbb Z_{L_x}$ time-like dipole symmetry. This is obvious in \eqref{dipA-modVill-dipdefect} and in the first line of \eqref{dipA-modVill-dipdefect-frac}. The second line of \eqref{dipA-modVill-dipdefect-frac} can be interpreted as smearing this dipole charge over the interval $(\hat x,\hat x+\hat x_0)$.
As with the $U(1)$ time-like symmetry, the $\mathbb Z_{L_x}$ time-like symmetry leads to a selection rule on correlation functions of defects. In particular, \eqref{defectco} is nonzero only when
\ie\label{dipA-ZLxselection}
\sum_i \hat x_i q_i =0\mod L_x~.
\fe
The selection rule \eqref{dipA-ZLxselection} implies that two fractons carry the same $\mathbb{Z}_{L_x}$ time-like symmetry charges only if their positions differ by a multiple of $L_x$, i.e., if they are at the same point in space.
This implies that a single fracton cannot move by itself.
This explains the restricted mobility of the fracton defect in terms of global symmetries.
Again, in infinite volume, we can send some of these defects to infinity, and then the sum of the dipoles $\hat x_i q_i$ of the remaining defects can be nonzero.\footnote{Note that when we do that and the remaining defects in the interior of the space have nonzero $U(1)$ charge, their dipole moment depends on the origin of the coordinate.} In that case, this selection rule becomes the statement of total dipole charge conservation.
As with ordinary charge conservation, our discussion using the time-like symmetry gives a precise way to formulate the notion of conserved dipole charges in compact space.
\subsubsection{Gauge fixing the integers}\label{sec:dipA-modVill-gaugefix}
Following the same procedure as in \cite{Gorantla:2021svj}, after gauge fixing the integer gauge fields, the action \eqref{dipA-modVill-fullaction} can be written in terms of a new gauge field $(\bar{\mathcal A}_\tau, \bar{\mathcal A}_{xx})$ as
\ie\label{dipA-modVill-action-gaugefix}
S = \frac{\Gamma}{2} \sum_{\tau\text{-link}} \mathcal E_{xx}^2 + \frac{i\theta}{2\pi} \sum_{\tau\text{-link}} \mathcal E_{xx}~,
\fe
where $\mathcal E_{xx} = \Delta_\tau \bar{\mathcal A}_{xx} - \Delta_x^2 \bar{\mathcal A}_\tau$ is the electric field. The new gauge field $(\bar{\mathcal A}_\tau, \bar{\mathcal A}_{xx})$ is defined as
\ie\label{dipA-modVill-newfield}
\Delta_\tau \bar{\mathcal A}_{xx} - \Delta_x^2 \bar{\mathcal A}_\tau = \Delta_\tau \mathcal A_{xx} - \Delta_x^2 \mathcal A_\tau - 2\pi n_{\tau xx}~.
\fe
It has the gauge symmetry
\ie\label{eq:U(1)_barA_gauge_symmetry}
&\bar{\mathcal A}_\tau \sim \bar{\mathcal A}_\tau +\Delta_\tau \bar\alpha~,
\\
&\bar{\mathcal A}_{xx} \sim \bar{\mathcal A}_{xx} +\Delta_x^2 \bar\alpha~.
\fe
More generally, $\bar\alpha$ may not be single-valued, in which case the above corresponds to a change of trivialization.
Unlike $(\mathcal A_\tau,\mathcal A_{xx})$, the new gauge fields $(\bar{\mathcal A}_\tau,\bar{\mathcal A}_{xx})$ need not be single-valued. Instead, they can have transition functions. Around the $\tau$-cycle, we have
\ie
&\bar{\mathcal A}_\tau(\hat\tau+L_\tau,\hat x)-\bar{\mathcal A}_\tau(\hat\tau,\hat x)=\Delta_\tau\bar\gamma_T(\hat\tau,\hat x)~,
\\
&\bar{\mathcal A}_{xx}(\hat\tau+L_\tau,\hat x)-\bar{\mathcal A}_{xx}(\hat\tau,\hat x)=\Delta_x^2\bar\gamma_T(\hat\tau,\hat x)~.
\fe
Around the $x$-cycle, we have
\ie
&\bar{\mathcal A}_\tau(\hat\tau,\hat x+L_x)-\bar{\mathcal A}_\tau(\hat\tau,\hat x)=\Delta_\tau\bar\gamma_X(\hat\tau,\hat x)~,
\\
&\bar{\mathcal A}_{xx}(\hat\tau,\hat x+L_x)-\bar{\mathcal A}_{xx}(\hat\tau,\hat x)=\Delta_x^2\bar\gamma_X(\hat\tau,\hat x)~.
\fe
These transition functions are subject to the cocycle condition
\ie\label{eq:cocycle_condtion}
\bar\gamma_T(\hat\tau,\hat x+L_x)-\bar\gamma_T(\hat\tau,\hat x)-\bar\gamma_X(\hat\tau+L_\tau,\hat x)+\bar\gamma_X(\hat\tau,\hat x)=2\pi n\hat x + 2\pi p~, \quad n,p\in\mathbb{Z}~.
\fe
They transform under the gauge transformation \eqref{eq:U(1)_barA_gauge_symmetry} as
\ie
&\bar\gamma_T(\hat \tau,\hat x) \sim \bar\gamma_T(\hat \tau,\hat x) + \bar{\alpha}(\hat \tau +L_\tau,\hat x)-\bar{\alpha}(\hat \tau ,\hat x)~,
\\
&\bar\gamma_X(\hat \tau,\hat x) \sim \bar\gamma_X(\hat \tau,\hat x) + \bar{\alpha}(\hat \tau,\hat x+L_x)-\bar{\alpha}(\hat \tau ,\hat x)~.
\fe
In addition, they are subject to the same identifications \eqref{dipphi-modVill-gaugesym-barphi} as $\bar \phi$, which imply that $p\sim p+L_x$. The cocycle condition \eqref{eq:cocycle_condtion} is invariant under both the gauge transformation and the identifications.
$(\bar{\mathcal A}_\tau,\bar{\mathcal A}_{xx})$ can have nontrivial electric fluxes. For example, the configuration
\ie
\bar{\mathcal A}_\tau(\hat \tau,\hat x) = 0~,\qquad \bar{\mathcal A}_{xx}(\hat \tau, \hat x) = 2\pi n \frac{\hat \tau}{L_\tau L_x}~,
\fe
has a transition function
\ie
\bar \gamma_T(\hat \tau,\hat x) = 2\pi n \frac{\hat x(\hat x-L_x)}{2L_x}~,
\fe
in the $\tau$-direction. It gives rise to a nontrivial $\mathbb{Z}$-valued electric flux:
\ie
\frac{1}{2\pi}\sum_{\tau\text{-link}} \mathcal E_{xx} =\frac{1}{2\pi}\Delta_x\big[\bar\gamma_T(\hat\tau,\hat x+L_x)-\bar\gamma_T(\hat\tau,\hat x)-\bar\gamma_X(\hat\tau+L_\tau,\hat x)+\bar\gamma_X(\hat\tau,\hat x)\big] = n \in \mathbb Z~.
\fe
In terms of the original integer gauge fields, it is $-\sum_{\tau\text{-link}} n_{\tau x x}$ \eqref{eleflui}.
There is also another $\mathbb{Z}_{L_x}$-valued dipole electric flux. Consider the configuration
\ie
\bar{\mathcal A}_\tau(\hat \tau,\hat x) = 2\pi p \frac{\hat x}{L_xL_\tau}~,\qquad \bar{\mathcal A}_{xx}(\hat \tau, \hat x) = 0~.
\fe
It has a transition function
\ie
\bar \gamma_X(\hat \tau,\hat x) = 2\pi p \frac{\hat \tau}{L_\tau}~,
\fe
in the $x$-direction. This configuration carries a nontrivial $\mathbb Z_{L_x}$ dipole flux
\ie
-\frac{1}{2\pi}\big[\bar\gamma_T(\hat\tau, L_x)-\bar\gamma_T(\hat\tau,0)-\bar\gamma_X(\hat\tau+L_\tau,0)+\bar\gamma_X(\hat\tau,0) \big]= p \text{ mod } L_x~.
\fe
In terms of the original integer gauge fields, it is $-\sum_{\tau\text{-link}} \hat x n_{\tau x x}$ mod $L_x$.
\subsubsection{Spectrum}
We will now determine the spectrum of the theory. We will work with a continuous Lorentzian time, denoted by $t$, while keeping the space discrete. We do this by introducing a lattice spacing $a_\tau$ in the $\tau$-direction, taking the limit $a_\tau\rightarrow 0$ while keeping $\Gamma' = \Gamma a_\tau$ fixed, and then Wick rotating from Euclidean time to Lorentzian time. We pick the temporal gauge $\bar{\mathcal A}_0 =0$, and the Gauss law tells us that
\ie
\Delta_x^2 \mathcal E_{xx}(t,\hat x)=0~.
\fe
It is solved by
\ie
\mathcal E_{xx}(t,\hat x)=\check{\mathcal E}_x(t) \hat x+\check{\mathcal E}_{xx}(t)~.
\fe
Since $\mathcal E_{xx}$ is single-valued, $\check{\mathcal E}_x$ has to vanish. Up to a time-independent gauge transformation, the solution is
\ie
\bar{\mathcal A}_{xx} =\frac{1}{L_x} f(t)~,
\fe
where $f(t)$ has periodicity $f(t)\sim f(t)+2\pi$.\footnote{This follows from the identification $e^{if} = \exp(i\sum_{\text{site: fixed }\hat \tau} \mathcal A_{xx})$ (see \eqref{dipA-modVill-operator}).}
The effective Lorentzian action is
\ie
S = \oint dt\left[\frac{\Gamma'}{2L_x} \dot f(t)^2 - \frac{\theta}{2\pi} \dot f(t)\right]~.
\fe
Let $\Pi$ be the conjugate momentum of $f(t)$. The periodicity of $f(t)$ implies that $\Pi$ is an integer. The Hamiltonian is
\ie
H =\frac{L_x}{2\Gamma'} \left(\Pi+\frac{\theta}{2\pi}\right)^2~.
\fe
This theory is reminiscent of the ordinary $U(1)$ gauge theory in 1+1d. It has no local degrees of freedom. All the gauge invariant information is summarized in the holonomy \eqref{dipA-modVill-operator}, and its dynamics is that of a quantum mechanical rotor.
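For example, the $\theta$-dependence of the spectrum can be read off directly from this Hamiltonian:
\ie
E_\Pi = \frac{L_x}{2\Gamma'}\left(\Pi+\frac{\theta}{2\pi}\right)^2~,\qquad \Pi\in\mathbb{Z}~,
\fe
so for $-\pi<\theta<\pi$, the unique ground state has $\Pi=0$ with energy $\frac{L_x\theta^2}{8\pi^2\Gamma'}$, while at $\theta=\pi$, the states $\Pi=0$ and $\Pi=-1$ are degenerate, just as in the ordinary 1+1d $U(1)$ gauge theory.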
\subsection{Continuum limit}\label{sec:dipA-cont}
\renewcommand{\arraystretch}{1.5}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
&Lattice gauge theory& Dipole $A$-theory & Dipole $\tilde A$-theory
\tabularnewline
\hline
Gauge parameter & $\phi$ of Section \ref{Villain} & $\phi$ of Section \ref{sec:dipphi-cont2} & $\Phi$ of Section \ref{sec:dipphi-cont3}
\tabularnewline
\hline
Space-like & \multirow{2}{*}{$U(1)$} & \multirow{2}{*}{$\mathbb R$} & \multirow{2}{*}{$U(1)$}
\tabularnewline
symmetry & $$ & $$ & $$
\tabularnewline
\hline
Time-like & $U(1)$ & $U(1)$ & --
\tabularnewline
\cline{2-4}
symmetry & $\mathbb Z_{L_x}$ dipole & $\mathbb Z$ dipole & $U(1)$ dipole
\tabularnewline
\hline
\multirow{2}{*}{Fluxes} & $\mathbb{Z}$-valued flux & -- & $\mathbb{Z}$-valued flux
\tabularnewline
\cline{2-4}
& $\mathbb Z_{L_x}$-valued dipole flux
& $\mathbb{Z}$-valued dipole flux
& circle-valued dipole flux
\tabularnewline
\hline
Basic defect & $\exp\left( i\sum_{\tau\text{-link: fixed }\hat x} \mathcal A_\tau \right)$
& $\exp\left(i\oint d\tau~ A_\tau \right)$ & not present
\tabularnewline
\hline
Basic operator& $\exp\left( i\sum_{\text{site: fixed }\hat \tau} \mathcal A_{xx} \right)$
& $\oint dx~A_{xx}$ & $\exp\left(\oint dx~\tilde A_{xx}\right)$
\tabularnewline
\hline
\end{tabular}
\caption{Relation between symmetries and fluxes of the lattice theory of Section \ref{sec:dipA-modVill}, and its continuum limits in Section \ref{sec:dipA-cont}. All these symmetries are electric symmetries. There is another continuum theory, the $\hat A$-theory, whose gauge parameter has the same global properties as $\hat \phi$ of Section \ref{sec:dipphi-cont1}. All of its global symmetries are noncompact. We discuss it briefly in Section \ref{sec:dipA-cont1}.
}\label{tbl:dipA-lat-cont}
\end{center}
\end{table}
Below, we will consider three continuum limits. They have similar Lagrangians, but they are different in various global aspects, such as their global symmetries and fluxes, summarized in Table \ref{tbl:dipA-lat-cont}. The gauge parameters of these continuum gauge theories have the same global properties as the continuum scalar fields in Section \ref{sec:dipphi-cont}. One of these theories reproduces the continuum theory in Section \ref{sec:dipAcontinuum}.
In all these limits, we introduce the spatial and temporal lattice spacings $a,a_\tau$, and take the limit $a,a_\tau \rightarrow 0$ and $L_x,L_\tau \rightarrow \infty$ such that $\ell_x = a L_x$ and $\ell_\tau = a_\tau L_\tau$ are fixed.
\subsubsection{1+1d $U(1)$ dipole $A$-theory}\label{sec:dipA-cont2}
Following Section \ref{sec:dipphi-cont2}, we scale the lattice coupling constant as
\ie
\Gamma = \frac{1}{g^2 a_\tau a^3}~,
\fe
where $g$ is a fixed continuum coupling constant with mass dimension $2$. We define new continuum gauge fields,
\ie\label{dipA-cont1fields}
A_\tau = a_\tau^{-1} \bar{\mathcal A}_\tau~,\qquad A_{xx} = a^{-2} \bar{\mathcal A}_{xx}~,
\fe
with mass dimensions $1$ and $2$ respectively. Recall that $(\bar{\mathcal A}_\tau,\bar{\mathcal A}_{xx})$ are the gauge-fixed versions of the lattice gauge fields $(\mathcal A_\tau,\mathcal A_{xx})$. Then the theory reduces to the continuum theory discussed in Section \ref{sec:dipAcontinuum}, and it reproduces the defects and the operators discussed there. Recall that there is no theta-term in Section \ref{sec:dipAcontinuum}.
\begin{center}
\emph{Defects and operators}
\end{center}
Let us substitute \eqref{dipA-cont1fields} in the defects \eqref{dipA-modVill-fracton} and \eqref{dipA-modVill-dipdefect}, and take their continuum limit to find the defects in this continuum theory.
There are immobile fractons:
\ie\label{dipA-cont2-fractondefect}
\exp\left( in \oint d\tau~ A_\tau(\tau,x) \right)~,
\fe
where $n$ is quantized by the large gauge transformation $\alpha(\tau,x) = 2\pi \frac{\tau}{\ell_\tau}$.
We also have the observables
\ie\label{dipA-cont2-dipdefectd}
\oint_{\mathcal C} \left[d\tau~ \partial_x A_\tau(\tau,x) + dx~ A_{xx}(\tau,x) \right] ~,
\fe
where $\mathcal C$ is a closed curve in the spacetime. Note that \eqref{dipA-cont2-dipdefectd} does not have to be exponentiated.\footnote{Indeed, after using the limit \eqref{dipA-cont1fields} (and dropping the bar) in \eqref{dipA-modVill-dipdefect} with fixed lattice points $\hat x_1$ and $\hat x_2$, the coefficient in the exponent vanishes in the limit $a\rightarrow 0$, so we can expand it to find the continuum observable \eqref{dipA-cont2-dipdefectd}.} The integrated version of \eqref{dipA-cont2-dipdefectd} can be exponentiated
\ie\label{dipA-cont2-dipdefect}
\exp\left(ir \oint_{\mathcal C} \left[d\tau~ (A_\tau(\tau,x+x_0) - A_\tau(\tau,x)) + dx~ \int_x^{x+x_0}dx'~ A_{xx}(\tau,x') \right]\right) ~,
\fe
with any real $r$. When $r$ is quantized, \eqref{dipA-cont2-dipdefect} represents a defect of a dipole of probe particles with charges $\pm r$ separated by a fixed amount $x_0$. It is the continuum limit of \eqref{dipA-modVill-dipdefect-frac} with $x_0 = a\hat x_0$ and $r = na/x_0$ fixed, where $n$ and $\hat x_0$ are scaled appropriately.
Finally, when $\mathcal C$ is purely space-like, \eqref{dipA-cont2-dipdefectd} is a gauge invariant operator.
\begin{center}
\emph{Global symmetry}
\end{center}
Interestingly, unlike in the lattice theory, in this continuum theory the quantization conditions for the line defects and for the line operators are very different. Below, we discuss the space-like and time-like symmetries that act on them.
There is an $\mathbb R$ space-like symmetry $A_{xx}(\tau,x) \rightarrow A_{xx}(\tau,x) + c_{xx}$. Its charge is found by taking the continuum limit of the charge \eqref{dipA-modVill-charge} on the lattice:
\ie
\frac{i}{g^2} E_{xx}~.
\fe
It is independent of $x$. The scaling to the continuum limit turned the quantized $U(1)$ charge on the lattice into an $\mathbb R$ charge. This is the analog of the one-form global symmetry charge of the standard 1+1d $U(1)$ gauge theory. But unlike in that case, here this charge is not quantized.
There is a $U(1)$ time-like symmetry $A_\tau(\tau,x) \rightarrow A_\tau(\tau,x) + \frac{c_\tau}{\ell_\tau}$, where $c_\tau \sim c_\tau + 2\pi$. Its symmetry operator is the continuum version of the operator \eqref{dipA-modVill-U1timelikesymop} on the lattice:
\ie\label{Uonetls}
U_{c_\tau}(\tau;x_1,x_2)=\exp\left(-\frac{c_\tau}{g^2} [\partial_x E_{xx}(\tau,x_2) - \partial_x E_{xx}(\tau,x_1)]\right)~.
\fe
This is the analog of the $U(1)$ time-like symmetry of the 1+1d $U(1)$ gauge theory.
Similar to its lattice counterpart \eqref{dipA-modVill-U1timelikesymop}, it is invariant under deformations of $ x_1$, $ x_2$ and $\tau$ as long as they do not cross any defect, because of the Gauss law and the conservation equation. In particular, when there is no defect, the $U(1)$ time-like symmetry operator is trivial.
More explicitly, in the presence of the defect \eqref{dipA-cont2-fractondefect},
\ie
\exp\left( in \oint d\tau~ A_\tau(\tau,x_0) \right)~,
\fe
the equation of motion of $A_\tau$ (i.e., Gauss law) leads to
\ie
{1\over g^2}\partial_x^2 E_{xx}=-in \delta(x-x_0)
\fe
and therefore, for $x_1<x_0<x_2$, the value of \eqref{Uonetls} is $e^{inc_\tau}$.
There is also a $\mathbb Z$ dipole time-like symmetry $A_\tau \rightarrow A_\tau + 2\pi m\frac{x}{\ell_x\ell_\tau}$, where $m$ is an integer. Its symmetry operator is found by taking the continuum limit of the operator \eqref{dipA-modVill-ZLtimelikesymop} on the lattice:
\ie
\mathbf U_m(\tau;x_1,x_2) &= \exp \left(-\frac{2\pi m }{g^2\ell_x} [x_2\partial_x E_{xx}(\tau,x_2) - E_{xx}(\tau,x_2)] \right)
\\
&\quad \times \exp \left(\frac{2\pi m }{g^2\ell_x} [x_1\partial_x E_{xx}(\tau,x_1) - E_{xx}(\tau,x_1)] \right)~.
\fe
It is well-defined under $x \rightarrow x+\ell_x$ because it is shifted by $\exp \left( -\frac{2\pi m}{g^2} \int_{x_1}^{x_2}dx~ \partial_x^2 E_{xx}(\tau,x) \right)$, which is trivial because $\frac{i}{g^2} \int_{x_1}^{x_2}dx~ \partial_x^2 E_{xx}(\tau,x)$ is an integer even in the presence of defects.
Similar to its lattice counterpart \eqref{dipA-modVill-ZLtimelikesymop}, it is invariant under deformations of $x_1$, $x_2$ and $\tau$ as long as they do not cross any defect, because of the Gauss law and the conservation equation. In particular, when there is no defect, this dipole time-like symmetry operator is trivial.
This symmetry was ${\mathbb Z}_{L_x}$ on the lattice, and became $\mathbb Z$ in the continuum limit. Note that this symmetry does not have an analog in the ordinary 1+1d $U(1)$ gauge theory.
\begin{center}
\emph{Fluxes}
\end{center}
In this continuum limit, there are no configurations with nontrivial electric flux, and therefore there is no $\theta$-term in the action. However, there are configurations with nontrivial dipole flux:
\ie
&A_\tau(\tau,x) = 2\pi p \frac{x}{\ell_x\ell_\tau}~,\qquad &&A_{xx}(\tau,x) = 0~,
\\
&\gamma_X(\tau,x) = 2\pi p \frac{\tau}{\ell_\tau}~,\qquad &&\gamma_T(\tau,x) = 0~, \qquad p\in \mathbb Z~.
\fe
Unlike on the lattice, the dipole flux in this continuum limit becomes $\mathbb{Z}$-valued:
\ie
-\frac{1}{2\pi}\left[ \gamma_T(0,\ell_x) - \gamma_T(0,0) - \gamma_X(\ell_\tau,0) + \gamma_X(0,0)\right] = p\in\mathbb{Z}~.
\fe
\begin{center}
\emph{Spectrum}
\end{center}
The spectrum consists of states charged under the $\mathbb R$ global symmetry. The energy of the state carrying charge $q\in \mathbb R$ is
\ie
E \sim \frac{g^2 \ell_x}{2}q^2~,
\fe
which is finite in the continuum limit $a\rightarrow 0$. Since $q$ is not quantized, the spectrum is continuous.
\subsubsection{1+1d $U(1)$ dipole $\tilde A$-theory}\label{sec:dipA-cont3}
Following Section \ref{sec:dipphi-cont3}, we scale the lattice coupling constant as
\ie
\Gamma = \frac{1}{\tilde g^2 a_\tau a}~,
\fe
where $\tilde g$ is a fixed continuum coupling constant with mass dimension $1$. We define new continuum gauge fields,
\ie\label{dipA-cont2fields}
\tilde A_\tau = a_\tau^{-1} a \bar{\mathcal A}_\tau~,\qquad \tilde A_{xx} = a^{-1} \bar{\mathcal A}_{xx}~,
\fe
with mass dimensions $0$ and $1$ respectively. Then the action \eqref{dipA-modVill-action-gaugefix} becomes
\ie
S = \oint d\tau dx~ \left[ \frac{1}{2\tilde g^2} \tilde E_{xx}^2 + \frac{i\theta}{2\pi} \tilde E_{xx} \right]~,
\fe
where $\tilde E_{xx} = \partial_\tau \tilde A_{xx} - \partial_x^2 \tilde A_\tau$ is the electric field with mass dimension $2$. The gauge symmetry is
\ie
\tilde A_\tau \sim \tilde A_\tau + \partial_\tau \tilde \alpha~, \qquad \tilde A_{xx} \sim \tilde A_{xx} + \partial_x^2 \tilde \alpha~,
\fe
where $\tilde \alpha$ is the gauge parameter with mass dimension $-1$, which has its own gauge symmetry $\tilde \alpha \sim \tilde \alpha + \tilde c + 2\pi x$, where $\tilde c$ is a real constant. The global properties of $\tilde \alpha$ are the same as those of $\Phi$ of Section \ref{sec:dipphi-cont3}.
\begin{center}
\emph{Defects and operators}
\end{center}
Let us substitute \eqref{dipA-cont2fields} in the defects \eqref{dipA-modVill-fracton} and \eqref{dipA-modVill-dipdefect}, and take their continuum limit to find the defects in this continuum theory.
There are no fracton defects because, after substituting \eqref{dipA-cont2fields} in the defect \eqref{dipA-modVill-fracton}, the coefficient diverges in the limit $a\rightarrow 0$, unless $n=0$. We can also see this in the continuum: the would-be defect ``$\oint d\tau~\tilde A_\tau$'' is not invariant under the large gauge transformation $\tilde \alpha = \frac{\tilde c \tau}{\ell_\tau}$, where $\tilde c$ is a real constant. Related to that, $\oint d\tau~\tilde A_\tau$ cannot be exponentiated because it is dimensionful.
However, there are mobile dipole defects:
\ie\label{dipA-cont3-defect}
\exp\left( i n \oint_{\mathcal C} \left[ d\tau~ \partial_x \tilde A_\tau(\tau,x) + dx~ \tilde A_{xx}(\tau,x) \right] \right)~,
\fe
where $n$ is quantized.\footnote{When $\mathcal C$ is space-like, this can be seen by the large gauge transformation $\tilde \alpha(\tau,x) = 2\pi \frac{x(x-\ell_x)}{2\ell_x}$. Such a gauge transformation has its own transition functions consistent with its periodicities, $\tilde \alpha \sim \tilde \alpha + \tilde c + 2\pi x$. \label{globlsymgau}} When $\mathcal C$ is purely space-like, it is a gauge invariant operator with a quantized coefficient, whose lattice counterpart is \eqref{dipA-modVill-operator}.
There are also gauge invariant mobile dipole defects derived from \eqref{dipA-modVill-dipdefect-frac}
\ie\label{dipA-cont3-defect2}
\exp\left( \frac{in}{x_0} \oint_{\mathcal C} \left[ d\tau~ (\tilde A_\tau(\tau,x+x_0) - \tilde A_\tau(\tau,x)) + dx \int_x^{x+x_0}dx'~\tilde A_{xx}(\tau,x') \right] \right)~,
\fe
where $x_0$ is the (fixed) separation of the dipole, and $n$ is quantized.
\begin{center}
\emph{Global symmetry}
\end{center}
There is a $U(1)$ global symmetry with charge
\ie
Q^{xx} = \frac{i}{\tilde g^2} \tilde E_{xx} - \frac{\theta}{2\pi}~.
\fe
It acts on the gauge fields as
\ie
\tilde A_{xx} \rightarrow \tilde A_{xx} + \frac{\tilde c_{xx}}{\ell_x}~,
\fe
where $\tilde c_{xx}$ is circle-valued, i.e., $\tilde c_{xx} \sim \tilde c_{xx} + 2\pi$. The charged operator is \eqref{dipA-cont3-defect} with $\mathcal C$ purely space-like, and its charge is $n$, which is quantized due to the large gauge transformation in footnote \ref{globlsymgau}. Unlike the continuum theory of Section \ref{sec:dipA-cont2}, here the space-like $U(1)$ global symmetry of the lattice theory remains $U(1)$ in the continuum.
The $U(1)$ time-like symmetry of the lattice theory is absent in this continuum limit because the shift
\ie
\tilde A_\tau(\tau,x) \rightarrow \tilde A_\tau(\tau,x) + \frac{\tilde c_\tau}{\ell_\tau}~,
\fe
is a gauge transformation with gauge parameter $\tilde \alpha = \tilde c_\tau \frac{\tau}{\ell_\tau}$ for any $\tilde c_\tau \in \mathbb R$. So the would-be time-like symmetry operator $-\frac{1}{\tilde g^2} [\partial_x \tilde E_{xx}(\tau,x_2)-\partial_x \tilde E_{xx}(\tau,x_1)]$ is trivial.
This is consistent with the fact that there are no fracton defects.
Finally, the ${\mathbb Z}_{L_x}$ dipole time-like symmetry \eqref{dipA-modVill-ZLtimelikesymop} of the lattice theory becomes a $U(1)$ dipole time-like symmetry with symmetry operator
\ie
\mathbf U_{\rho}(\tau;x_1,x_2) &= \exp\left(-\frac{\rho}{\tilde g^2} [x_2 \partial_x \tilde E_{xx}(\tau,x_2) - \tilde E_{xx}(\tau,x_2)]\right)
\\
&\quad \times \exp\left(\frac{\rho}{\tilde g^2} [x_1 \partial_x \tilde E_{xx}(\tau,x_1) - \tilde E_{xx}(\tau,x_1)]\right)~,
\fe
where $\rho$ is a real parameter with $\rho \sim \rho + 2\pi$.
The exponent is well-defined under $x \rightarrow x+\ell_x$ because it is shifted by $-\frac{1}{\tilde g^2} [\partial_x \tilde E_{xx}(\tau,x_2)-\partial_x \tilde E_{xx}(\tau,x_1)]$, which is trivial.
Similar to its lattice counterpart \eqref{dipA-modVill-ZLtimelikesymop}, it is invariant under deformations of $x_1,x_2$, and $\tau$ as long as they do not cross any defects, because of the Gauss law and the conservation equation.
In particular, it is trivial in the absence of any defect insertions.
It acts on the gauge fields, up to gauge transformations, as
\ie
\tilde A_\tau(\tau,x) \rightarrow \tilde A_\tau(\tau,x) + \frac{\rho x}{\ell_\tau}~.
\fe
The charged defects are \eqref{dipA-cont3-defect} and \eqref{dipA-cont3-defect2} with $\mathcal C$ that wraps around the $\tau$-direction once. Both have charge $n$.
\begin{center}
\emph{Fluxes}
\end{center}
There are configurations, such as
\ie
&\tilde A_\tau(\tau,x) = 0~,\qquad && \tilde A_{xx}(\tau,x) = 2\pi n \frac{\tau}{\ell_x \ell_\tau}~,
\\
&\tilde \gamma_X(\tau,x) = 0~,\qquad && \tilde \gamma_T(\tau,x) = 2\pi n \frac{x(x-\ell_x)}{2\ell_x}~,\qquad n\in\mathbb Z~,
\fe
that realize a nontrivial $\mathbb{Z}$-valued electric flux
\ie
\frac{1}{2\pi}\oint d\tau dx~ \tilde E_{xx} = n~.
\fe
This flux allows a nontrivial $\theta$-term.
There are also configurations that realize a nontrivial dipole flux, such as
\ie
&\tilde A_\tau(\tau,x) = \vartheta \frac{x}{\ell_\tau} ~,\qquad &&\tilde A_{xx}(\tau,x) = 0~,
\\
&\tilde \gamma_X(\tau,x) = \vartheta \ell_x\frac{\tau}{\ell_\tau}~,\qquad &&\tilde \gamma_T(\tau,x) = 0~.
\fe
Importantly, $\vartheta$ and $\vartheta+2\pi$ are related by a change of trivialization with $\tilde\alpha = 2\pi x\frac{\tau}{\ell_\tau}$ and the identification $\tilde\gamma_T(\tau,x)\sim \tilde\gamma_T(\tau,x)+2\pi x$.
Thus, they should be identified and the parameter $\vartheta$ is circle-valued, i.e., $\vartheta\sim \vartheta+2\pi$.
The dipole flux of this configuration is
\ie
-\frac{1}{\ell_x}\left[ \tilde \gamma_T(0,\ell_x) - \tilde \gamma_T(0,0) - \tilde \gamma_X(\ell_\tau,0) + \tilde \gamma_X(0,0)\right] = \vartheta\text{ mod } 2\pi~.
\fe
Unlike on the lattice, the dipole flux is circle-valued in this continuum limit.
\begin{center}
\emph{Spectrum}
\end{center}
The spectrum consists of states charged under the $U(1)$ global symmetry. The energy of the state with charge $n\in\mathbb Z$ is
\ie
E \sim \frac{\tilde g^2 \ell_x}{2}\left(n+\frac{\theta}{2\pi}\right)^2~,
\fe
which is finite in the continuum limit $a\rightarrow 0$. Since $n$ is quantized, the spectrum is discrete.
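As a side illustration (ours, not part of the original analysis), the $\theta$-dependence of this discrete spectrum $E \sim \frac{\tilde g^2\ell_x}{2}(n+\frac{\theta}{2\pi})^2$ is easy to check numerically: shifting $\theta \rightarrow \theta + 2\pi$ is compensated by relabeling $n \rightarrow n-1$, and at $\theta = \pi$ the charges $n=0$ and $n=-1$ are degenerate.

```python
import math

def energy(n, theta, gsq_lx=1.0):
    """Energy of the charge-n state, E ~ (g~^2 lx / 2) (n + theta/2pi)^2."""
    return 0.5 * gsq_lx * (n + theta / (2 * math.pi)) ** 2

def ground_state_charge(theta, nmax=10):
    """Integer charge n minimizing the energy at a given theta."""
    return min(range(-nmax, nmax + 1), key=lambda n: energy(n, theta))

# theta -> theta + 2pi is compensated by n -> n - 1:
assert ground_state_charge(0.0) == 0
assert ground_state_charge(2 * math.pi) == -1
# at theta = pi the charges n = 0 and n = -1 are degenerate:
assert abs(energy(0, math.pi) - energy(-1, math.pi)) < 1e-12
```

The overall scale `gsq_lx` (standing in for $\tilde g^2\ell_x$) is a placeholder and does not affect which charge minimizes the energy.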
\subsubsection{1+1d dipole $\hat A$-theory}\label{sec:dipA-cont1}
Following Section \ref{sec:dipphi-cont1}, we scale the lattice coupling constant as
\ie
\Gamma = \frac{1}{\hat g^2 a_\tau a^2}~,
\fe
where $\hat g$ is a fixed continuum coupling constant with mass dimension $\frac32$. We define new continuum gauge fields, $\hat A_\tau = a_\tau^{-1} a^{\frac12} \bar{\mathcal A}_\tau$, and $\hat A_{xx} = a^{-\frac32} \bar{\mathcal A}_{xx}$, with mass dimensions $\frac12$ and $\frac32$, respectively. Then the action \eqref{dipA-modVill-action-gaugefix} becomes
\ie
S = \oint d\tau dx~ \frac{1}{2\hat g^2} \hat E_{xx}^2 ~,
\fe
where $\hat E_{xx} = \partial_\tau \hat A_{xx} - \partial_x^2 \hat A_\tau$ is the electric field with mass dimension $\frac52$. There is no $\theta$-term in this limit. The gauge symmetry is
\ie
\hat A_\tau \sim \hat A_\tau + \partial_\tau \hat\alpha~, \qquad \hat A_{xx} \sim \hat A_{xx} + \partial_x^2 \hat\alpha~,
\fe
where $\hat \alpha$ is the gauge parameter with mass dimension $-\frac12$. The global properties of $\hat \alpha$ are the same as those of $\hat \phi$ of Section \ref{sec:dipphi-cont1}.
There are no fracton defects but there are mobile dipole defects and line operators with real coefficients. There is no $U(1)$ time-like symmetry, whereas the electric symmetry and dipole time-like symmetry are noncompact.
\subsection{More comments}
We can study another continuum limit, in which we take the gauge coupling $g$ in the $A$-theory to zero. We take $g=\epsilon g'\to 0$ with fixed $g'$. This can be absorbed by rescaling $A_\tau = \epsilon A'_\tau$ and $A_{xx}= \epsilon A'_{xx}$. Correspondingly, we should also rescale the gauge parameter $\alpha'= {1\over \epsilon} \alpha$. This has the effect of decompactifying the underlying gauge group from $U(1)$ to $\mathbb R$, so there are no identifications in the space of gauge parameters $\alpha'$. In this case, the fracton charges are not quantized. In fact, the observable $\oint d\tau~A'_\tau$ does not need to be exponentiated to define a defect. Similarly, the operators $\oint dx~A'_{xx}$ do not need to be exponentiated, and all the global symmetries are noncompact.
In ordinary classical gauge theory with gauge algebra $\mathfrak u(1)$, the gauge group can be $U(1)$ or $\mathbb R$. Here, we see that there are more options. All of them arise from the same underlying lattice theory with $U(1)$ gauge symmetry (or as in the Villain formulation, $\mathbb R$ with another $\mathbb Z$ gauge field), but arise in different continuum limits.
Just as the ordinary $U(1) $ and $\mathbb R$ gauge theories differ in their fluxes, operators, defects, and global symmetries, the same is true in the various different continuum theories here.
We have not discussed the higher dimensional versions of this theory. One difference from the 1+1d case we discussed here is that in order to preserve the magnetic symmetries, the lattice Villain model should be modified. We expect that the subtleties we discussed here will still be present in the higher dimensional theory.
\section{1+1d $\mathbb Z_N$ dipole gauge theory}\label{sec:dipZN}
In this section, we will study the $\mathbb Z_N$ lattice dipole gauge theory, and its BF version. Surprisingly, while the $U(1)$ dipole gauge theory has immobile fracton defects, a particle in the $\mathbb{Z}_N$ theory in noncompact space can hop by $N$ sites on its own, and is, therefore, not fully immobile. As we will see, on a lattice with $L_x$ sites and periodic boundary conditions, the particle can hop by even smaller steps -- steps of $\gcd(N,L_x)$ sites.
\subsection{1+1d $\mathbb Z_N$ lattice dipole gauge theory}
The $\mathbb Z_N$ lattice dipole gauge theory is defined by the action
\ie\label{dipZN-action}
S = -\Gamma \sum_{\tau\text{-link}} \cos\left[ \frac{2\pi}{N} (\Delta_\tau m_{xx} - \Delta_x^2 m_\tau) \right]~,
\fe
where $\Gamma$ is the gauge coupling constant.
The integer fields $m_\tau$ and $m_{xx}$ are placed on the $\tau$-links and the sites, respectively.
The gauge symmetry is
\ie\label{dipZN-gaugesym}
m_\tau \sim m_\tau + \Delta_\tau k + N k_\tau~,\qquad m_{xx} \sim m_{xx} + \Delta_x^2 k + N k_{xx}~,
\fe
where $k,k_\tau,k_{xx}$ are integer gauge parameters. It has an electric global symmetry that shifts
\ie\label{ZNelecg}
m_\tau \rightarrow m_\tau + p_\tau~,\qquad m_{xx} \rightarrow m_{xx} + p_{xx}~,
\fe
where $(p_\tau,p_{xx})$ is a flat $\mathbb Z_N$ gauge field, i.e.,
\ie
\Delta_\tau p_{xx} - \Delta_x^2 p_\tau = 0 \mod N~.
\fe
Using the gauge freedom of $k$, we can set $p_\tau = 0$ at $\hat \tau \ne 0$. The flatness condition then implies that
\ie
&\Delta_\tau p_{xx}(\hat \tau,\hat x) = 0 \mod N~,
\\
&\Delta_x^2 p_\tau(0,\hat x) = 0 \mod N~.
\fe
Using the residual (time-independent) gauge freedom in $k$, we can set
\ie\label{eq:electric_ZN}
&p_\tau(\hat \tau,\hat x) = \left( \bar p_\tau + \bar r_\tau \frac{N}{\gcd(N,L_x)} \hat x \right) \delta_{\hat \tau,0}~,\qquad 0\le \hat x < L_x~,
\\
&p_{xx}(\hat \tau,\hat x) = (\bar p_{xx} + \bar r_{xx})\delta_{\hat x,0} - \bar r_{xx}\delta_{\hat x,L_x-1}~,
\fe
where $\bar p_\tau,\bar p_{xx}$ are integers modulo $N$, whereas $\bar r_\tau,\bar r_{xx}$ are integers modulo $\gcd(N,L_x)$.
Similar to the electric global symmetries in the dipole $U(1)$ gauge theory in the previous section, the parameters $\bar p_{xx}$ and $\bar r_{xx}$ are associated with $\mathbb Z_N$ and $\mathbb Z_{\gcd(N,L_x)}$ space-like global symmetries, respectively. On the other hand, the parameters $\bar p_\tau$ and $\bar r_\tau$ correspond to $\mathbb Z_N$ and $\mathbb Z_{\gcd(N,L_x)}$ time-like symmetries, respectively.
\subsection{An integer BF lattice model}\label{sec:dipZN-modVill}
For $\Gamma \gg 1$, the partition function is dominated by configurations satisfying $\Delta_\tau m_{xx} - \Delta_x^2 m_\tau = 0 \mod N$. Therefore, we can replace the action \eqref{dipZN-action} by the BF-type action
\ie\label{dipZN-intBFaction}
S = \frac{2\pi i}{N} \sum_{\tau\text{-link}} \tilde m (\Delta_\tau m_{xx} - \Delta_x^2 m_\tau)~,
\fe
where $\tilde m$ is an integer Lagrange multiplier field. In addition to \eqref{dipZN-gaugesym}, there is another gauge symmetry that makes $\tilde m$ effectively $\mathbb{Z}_N$-valued:
\ie\label{dipZN-periodicity}
\tilde m \sim \tilde m + N\tilde k~,
\fe
where $\tilde k$ is an integer gauge parameter.
The BF-type action \eqref{dipZN-intBFaction} is similar to the topological $\mathbb Z_N$ ordinary lattice gauge theory action in \cite{Dijkgraaf:1989pz,Kapustin:2014gua}. Following steps similar to those in Appendix C.2 of \cite{Gorantla:2021svj}, the action \eqref{dipZN-action} and the effective action \eqref{dipZN-intBFaction} can be related to a number of other actions of the Villain form and of a modified Villain form.
There is a related lattice spin model given by the action
\ie\label{dipZN-clock-action}
S = -\tilde \Gamma_0 \sum_{\tau\text{-link}} \cos\left( \frac{2\pi}{N}\Delta_\tau \tilde m \right) - \tilde \Gamma \sum_{\text{site}} \cos\left( \frac{2\pi}{N}\Delta_x^2 \tilde m \right)~,
\fe
where $\tilde m$ is an integer field at each site with identification \eqref{dipZN-periodicity}, and $\tilde \Gamma_0,\tilde \Gamma$ are coupling constants. It is natural to refer to it as the dipole $\mathbb Z_N$ clock model. For $\tilde \Gamma_0,\tilde \Gamma \gg 1$, the partition function is dominated by configurations satisfying $\Delta_\tau \tilde m = \Delta_x^2 \tilde m = 0 \mod N$, so we can replace the action \eqref{dipZN-clock-action} by the BF action \eqref{dipZN-intBFaction}. Now, $m_\tau$ and $m_{xx}$ are interpreted as integer Lagrange multiplier fields.
\subsubsection{Relation to 2+1d $\mathbb Z_N$ tensor gauge theory}\label{sec:dipZN-compactify}
As in Sections \ref{sec:dipphi-compactify} and \ref{sec:dipA-compactify}, there is an exact equivalence between the action \eqref{dipZN-intBFaction} and the integer $BF$-action of the 2+1d $\mathbb Z_N$ tensor gauge theory \cite{Gorantla:2021svj}
\ie
S = \frac{2\pi i}{N} \sum_\text{cube} \tilde m^{xy} (\Delta_\tau m_{xy} - \Delta_x \Delta_y m_\tau)~,
\fe
on a slanted spatial torus with identifications \eqref{slanted}.\footnote{There is a similar relation between the original 1+1d model \eqref{dipZN-action}, and the 2+1d $\mathbb Z_N$ tensor gauge theory with action
\ie
S = -\Gamma \sum_\text{cube} \cos \left[ \frac{2\pi}{N} (\Delta_\tau m_{xy} - \Delta_x \Delta_y m_\tau) \right]~,
\fe
on a slanted spatial torus with identifications \eqref{slanted}.}
Here $\tilde m^{xy}, m_{xy}, m_\tau$ are the $\mathbb{Z}_N$-valued fields of the 2+1d model.
The equivalence follows from
\ie
\Delta_y m_\tau(\hat x, \hat y) = \Delta_x m_\tau(\hat x, \hat y) \implies \Delta_x \Delta_y m_\tau(\hat x,\hat y) = \Delta_x^2 m_\tau(\hat x+1,\hat y)~.
\fe
The remaining fields are related as $m_{xx}(\hat x) = m_{xy}(\hat x-1,0)$, and $\tilde m(\hat x) = \tilde m^{xy}(\hat x-1, 0)$.
Due to this equivalence, all the analysis in the rest of this section follows from the 2+1d $\mathbb Z_N$ tensor gauge theory on this slanted torus \cite{Rudelius:2020kta}.
\subsubsection{Global symmetry}
The electric space-like global symmetries of the original model \eqref{dipZN-action} are also present in the BF model \eqref{dipZN-intBFaction}. They are generated by the operator $e^{\frac{2\pi i}{N} \tilde m}$. More specifically, the $\mathbb{Z}_N$ electric symmetry associated with $\bar p_{xx}$ in \eqref{eq:electric_ZN} is generated by $e^{\frac{2\pi i }{N} \bar p_{xx}\tilde m(\hat x=0)}$, and the $\mathbb{Z}_{\gcd(N,L_x)}$ electric dipole symmetry in \eqref{eq:electric_ZN} associated with $\bar r_{xx}$ is generated by $e^{-\frac{2\pi i}{N}\bar r_{xx} \Delta_x\tilde m(\hat x=L_x-1)}$.
In fact, there are additional space-like symmetries in the BF model that are not present in the original model \eqref{dipZN-action}.\footnote{This is a common property of BF models and modified Villain versions of various systems, and it is one of the motivations to introduce them \cite{Gorantla:2021svj}. The original systems or their Villain versions have various symmetries, like momentum symmetries and electric symmetries. The BF models and modified Villain versions of these theories, when they exist, have additional symmetries like winding symmetries and magnetic symmetries. (The gauge theory in Section \ref{sec:dipA-modVill} does not have a modification of its Villain version, and therefore all its symmetries are visible already in the Villain theory.)} It has magnetic symmetries that shift $\tilde m$ by
\ie
\tilde m(\hat \tau,\hat x) \rightarrow \tilde m(\hat \tau,\hat x)+ \tilde p + \tilde r \frac{N}{\gcd(N,L_x)} \hat x~, \qquad 0\le \hat x<L_x~,
\fe
where $\tilde p$ and $\tilde r$ are integers modulo $N$ and $\gcd(N,L_x)$, respectively.
These magnetic symmetries are manifest in the dipole $\mathbb Z_N$ clock model \eqref{dipZN-clock-action}, while the electric symmetries are not present there.
The $\mathbb{Z}_N$ magnetic symmetry associated with $\tilde p$ is implemented by $W^{\tilde p}$, with the generator
\ie
W = \exp \left( \frac{2\pi i }{N}\sum_{\text{site: fixed }\hat \tau} m_{xx} \right)~.
\fe
The $\mathbb{Z}_{\gcd(N,L_x)}$ magnetic symmetry associated with $\tilde r$ is implemented by $\mathbf W^{\tilde r}$, with the generator
\ie
\mathbf W = \exp \left( \frac{2\pi i }{\gcd(N,L_x)}\sum_{\text{site: fixed }\hat \tau} \hat x m_{xx} \right)~.
\fe
\subsubsection{Ground state degeneracy}\label{sec:dipZN-gsd}
We will now count the number of ground states in the BF lattice model. In this case, it is equivalent to counting the number of solutions to the ``equations of motion.'' (The quotation marks are there because, strictly speaking, there is no equation of motion for integer-valued fields.) Summing over $m_\tau$ and $m_{xx}$ gives
\ie\label{eq:eqofmotion_tildem}
\Delta_x^2 \tilde m = 0 \mod N~, \qquad \Delta_\tau \tilde m = 0 \mod N~.
\fe
The most general solution to these equations is
\ie
\tilde m(\hat \tau,\hat x) = \tilde p + \tilde r \frac{N}{\gcd(N,L_x)} \hat x~, \qquad 0\le \hat x<L_x~,
\fe
where $\tilde p$ and $\tilde r$ are integers modulo $N$ and $\gcd(N,L_x)$, respectively. Therefore, the number of solutions is
\ie\label{dipZN-GSD}
N \gcd(N,L_x)~.
\fe
As discussed in Section \ref{sec:dipZN-compactify}, this ground state degeneracy can also be computed from the 2+1d $\mathbb{Z}_N$ tensor gauge theory of \cite{Gorantla:2021svj} on the slanted torus \eqref{slanted}. Indeed, the ground state degeneracy \eqref{dipZN-GSD} agrees with (4.8) of \cite{Rudelius:2020kta} (with $L_{x}^{\text{eff}}=L_{y}^{\text{eff}}=1$ and $M=L_x$).
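The counting \eqref{dipZN-GSD} can also be checked by brute force on small lattices. The following sketch (our own illustration, not part of the original analysis) enumerates time-independent $\mathbb Z_N$ configurations $\tilde m(\hat x)$ on a circle of $L_x$ sites satisfying $\Delta_x^2 \tilde m = 0 \bmod N$; the condition $\Delta_\tau \tilde m = 0 \bmod N$ is handled by restricting to time-independent fields.

```python
from itertools import product
from math import gcd

def count_solutions(N, Lx):
    """Count Z_N-valued fields m(x) on a circle of Lx sites obeying
    the discrete second-difference constraint D_x^2 m = 0 mod N."""
    count = 0
    for m in product(range(N), repeat=Lx):
        if all((m[(x + 1) % Lx] - 2 * m[x] + m[(x - 1) % Lx]) % N == 0
               for x in range(Lx)):
            count += 1
    return count

# The count matches N * gcd(N, Lx) for several small examples:
for N, Lx in [(2, 4), (2, 5), (3, 5), (3, 6), (4, 6)]:
    assert count_solutions(N, Lx) == N * gcd(N, Lx)
```

Every solution is of the linear form $\tilde m(\hat x) = \tilde p + s\hat x$ with $sL_x = 0 \bmod N$, which is what produces the factor $\gcd(N,L_x)$.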
Another way to count the ground states is to use the algebra generated by the electric and magnetic space-like symmetry operators
\ie
&e^{\frac{2\pi i}{N} \tilde m(\hat x)} W = e^{-2\pi i/N}W e^{\frac{2\pi i}{N} \tilde m(\hat x)}~,
\\
&e^{\frac{2\pi i}{N} \tilde m(\hat x)} \mathbf{W} = e^{-2\pi i \hat x/\text{gcd}(N,L_x)}\mathbf{W} e^{\frac{2\pi i}{N} \tilde m(\hat x)}~.
\fe
Using \eqref{eq:eqofmotion_tildem}, the $L_x$ operators $e^{\frac{2\pi i}{N}\tilde m(\hat x)}$ can be generated by two operators $e^{\frac{2\pi i}{N}\tilde m(0)}$ and $e^{\frac{2\pi i}{N}\Delta_x \tilde m(0)}$.
We then find that the minimal representation of the algebra has dimension $N \gcd(N,L_x)$:
\ie
&e^{\frac{2\pi i}{N} \tilde m(\hat x)}|\tilde p,\tilde r\rangle=e^{\frac{2\pi i}{N}\left(\tilde p +\tilde r \frac{N}{\gcd(N,L_x)}\hat x\right)}|\tilde p,\tilde r\rangle~,
\\
&W|\tilde p,\tilde r\rangle=|\tilde p-1,\tilde r\rangle~,
\\
&\mathbf W|\tilde p,\tilde r\rangle=|\tilde p,\tilde r-1\rangle~.
\fe
This reproduces the same ground state degeneracy \eqref{dipZN-GSD}. To conclude, the space-like global symmetries lead to the ground state degeneracy $N \gcd(N,L_x)$.
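As a cross-check (ours, under the stated eigenvalues), the representation above can be verified in a few lines: on the basis $|\tilde p,\tilde r\rangle$, the operator $e^{\frac{2\pi i}{N}\tilde m(\hat x)}$ acts by the phase $e^{\frac{2\pi i}{N}(\tilde p + \tilde r\frac{N}{\gcd(N,L_x)}\hat x)}$, and the shifts by $W$ and $\mathbf W$ then reproduce the commutation phases $e^{-2\pi i/N}$ and $e^{-2\pi i\hat x/\gcd(N,L_x)}$.

```python
import cmath
from math import gcd

def check_representation(N, Lx):
    """Verify the clock-and-shift algebra on the N*gcd(N,Lx) basis states |p, r>."""
    g = gcd(N, Lx)
    w = lambda z: cmath.exp(2j * cmath.pi * z)          # e^{2 pi i z}
    phase = lambda p, r, x: w((p + r * (N // g) * x) / N)  # eigenvalue on |p, r>
    for p in range(N):
        for r in range(g):
            for x in range(Lx):
                # W shifts p -> p - 1, producing the phase e^{-2 pi i / N}
                assert abs(phase(p - 1, r, x) - w(-1 / N) * phase(p, r, x)) < 1e-9
                # bold-W shifts r -> r - 1, producing e^{-2 pi i x / gcd(N, Lx)}
                assert abs(phase(p, r - 1, x) - w(-x / g) * phase(p, r, x)) < 1e-9
    return N * g  # dimension of this representation

assert check_representation(4, 6) == 8
assert check_representation(3, 5) == 3
```

The returned dimension is the ground state degeneracy $N\gcd(N,L_x)$ of \eqref{dipZN-GSD}.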
Observe that the ground state degeneracy (GSD) is always between $N$ and $N^2$, but its value depends sensitively on the number of lattice sites $L_x$.
One consequence is that the GSD does not have a good $L_x\rightarrow \infty$ limit. For example, if we take $L_x=sN+1$ with $s\to \infty$, then for every finite but large $L_x$ the GSD is $N$. In the other extreme, we can take $L_x=sN$ with $s\to \infty$ to find that the GSD is $N^2$. This sensitivity to how we take the limit is a manifestation of UV/IR mixing.
Since all these degenerate ground states can be distinguished by the local operators $e^{2\pi i \tilde m /N}$, the large ground state degeneracy \eqref{dipZN-GSD} is not robust and will be lifted when the system is perturbed by these local operators. This would be the case if we start with the lattice theory \eqref{dipZN-action}, which does not have the magnetic symmetries. On the other hand, since the magnetic symmetries are natural in the dipole $\mathbb Z_N$ clock model \eqref{dipZN-clock-action}, the ground state degeneracy is robust in this model.
\subsubsection{Defects and their restricted mobility}\label{sec:dipZN-def-mob}
The theory has a defect
\ie\label{dipZN-particle}
W_\tau(\hat x) = \exp\left( \frac{2\pi i}{N} \sum_{\tau\text{-link: fixed }\hat x} m_\tau \right)~,
\fe
that represents the world-line of a static particle. On a noncompact space, the particle can hop by $N$ sites. This is described by the defect
\ie
\exp\left[\frac{2\pi i}{N} \left(\sum_{\tau \text{-link: }\hat \tau < 0} m_\tau(\hat \tau,0) + \sum_{\text{site: }0 < \hat x < N} \hat x m_{xx}(0,\hat x) + \sum_{\tau \text{-link: }\hat \tau \ge 0} m_\tau(\hat \tau,N) \right) \right]~.
\fe
More generally, the particle can hop by any multiple of $N$ sites. When space is a circle with $L_x$ sites, i.e., $\hat x\sim \hat x+L_x$, the particle can hop by $\gcd(N,L_x)$ sites.
Let us clarify this fact. For finite $L_x \gg N$, the hopping by $N$ sites is simple. There is also a more complicated possibility of moving several times around the space always in jumps of $N$ sites (on the covering space) and ending up only $\gcd(N,L_x)$ sites away from the starting point in real space.
How should we interpret it for infinite $L_x$?
The simple hopping by $N$ sites is clearly possible. And if we take a continuum limit, it is a microscopic hop and we conclude that the fracton is completely mobile. In this case, there is no need to consider the more complicated process involving going around the space.
Alternatively, we can think of the limit $L_x\to \infty$ without taking a continuum limit. Then, for any finite but large $L_x$, the more complicated jumps that circle around the space are also possible. The possible hops on the lattice then depend on how we take $L_x\to \infty$. For example, if we take $L_x=sN+1$ with $s\to \infty$, then for every finite but large $L_x$ the particle is completely mobile. In the other extreme, for $L_x=sN$ with $s\to \infty$, the particle can hop only in steps of $N$ sites.
There is, however, an important difference between the simple hop by $N$ sites and the shorter hop by $\gcd(N,L_x)$ sites. The former is implemented by an operator stretching over the $N$ sites, while the latter is implemented by a more complicated operator affecting degrees of freedom all over the lattice. This difference becomes more significant as $L_x$ gets larger.
The fact that the number of sites the particle can hop on the lattice depends on the long distance geometry of the lattice is another manifestation of UV/IR mixing.
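The mobility pattern above can be made concrete with a small reachability computation (our own sketch): starting from site $0$ and allowing only hops of $\pm N$ on a circle of $L_x$ sites, the reachable sites are exactly the multiples of $\gcd(N,L_x)$.

```python
from math import gcd

def reachable_sites(N, Lx):
    """Sites reachable from 0 by repeated hops of +/- N on a circle of Lx sites."""
    seen, frontier = {0}, [0]
    while frontier:
        x = frontier.pop()
        for y in ((x + N) % Lx, (x - N) % Lx):
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return sorted(seen)

assert reachable_sites(4, 6) == [0, 2, 4]        # gcd(4, 6) = 2
assert reachable_sites(3, 7) == list(range(7))   # gcd(3, 7) = 1: fully mobile
assert reachable_sites(3, 12) == [0, 3, 6, 9]    # Lx = sN: hops of N only
```

This is just the elementary fact that the subgroup of $\mathbb Z_{L_x}$ generated by $N$ consists of the multiples of $\gcd(N,L_x)$; the physics input is which hops the defects allow.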
We conclude that in the $\mathbb{Z}_N$ dipole gauge theory, the particle represented by the defect \eqref{dipZN-particle} is not completely immobile. This is to be contrasted with the defect \eqref{dipA-modVill-fracton} in the $U(1)$ dipole gauge theory, which cannot be deformed and hence represents an immobile fracton. This difference in the particle mobility between the $U(1)$ and $\mathbb{Z}_N$ theories has already been discussed in the 3+1d tensor gauge theories in \cite{Bulmash:2018lid,Ma:2018nhd}.
Having discussed the defect for a single particle, we will now consider that for a dipole of particles with opposite charges. A dipole of particles can move by an arbitrary number of sites as long as their separation is fixed. It is described by the defect
\ie
\exp\left[\frac{2\pi i}{N} \left(\sum_{\tau x\text{-plaq: }\hat \tau < 0} \Delta_x m_\tau(\hat \tau,\hat x_1) + \sum_{\text{site: }\hat x_1 < \hat x \le \hat x_2} m_{xx}(0,\hat x) + \sum_{\tau x\text{-plaq: }\hat \tau \ge 0} \Delta_x m_\tau(\hat \tau,\hat x_2) \right) \right]~.
\fe
The restricted mobility of the particle can be understood as a result of the time-like symmetries that act on the defects.
Let us discuss the time-like symmetries.
The BF lattice model \eqref{dipZN-intBFaction} has the same time-like symmetries \eqref{eq:electric_ZN} as the $\mathbb{Z}_N$ lattice dipole gauge theory \eqref{dipZN-action}. The $\mathbb Z_N$ electric time-like symmetry associated with $\bar p_\tau$ is generated by the $\mathbb Z_N$ operator
\ie
U(\hat \tau;\hat x_1,\hat x_2) = \exp \left(-\frac{2\pi i}{N} [\Delta_x \tilde m(\hat\tau, \hat x_2) - \Delta_x \tilde m(\hat\tau, \hat x_1)] \right)~.
\fe
The $\mathbb Z_{\gcd(N,L_x)}$ electric dipole time-like symmetry associated with $\bar r_\tau$ is generated by the $\mathbb Z_{\gcd(N,L_x)}$ operator
\ie
\mathbf U(\hat \tau;\hat x_1,\hat x_2) &= \exp \left( -\frac{2\pi i}{\gcd(N,L_x)}[ \hat x_2 \Delta_x \tilde m(\hat \tau, \hat x_2-1)- \tilde m(\hat\tau, \hat x_2) ] \right)
\\
&\quad \times \exp \left( \frac{2\pi i}{\gcd(N,L_x)}[ \hat x_1 \Delta_x \tilde m(\hat \tau, \hat x_1-1)- \tilde m(\hat\tau, \hat x_1) ] \right)~.
\fe
Both $U(\hat \tau;\hat x_1,\hat x_2)$ and $\mathbf U(\hat \tau;\hat x_1,\hat x_2)$ are invariant under deformations of $\hat x_1$, $\hat x_2$, and $\hat \tau$, because of \eqref{eq:eqofmotion_tildem}, as long as they do not cross any defect.
In particular, they are trivial operators in the absence of any defect insertions.
These time-like symmetries act on the defect $W_\tau(\hat x)$ as
\ie
&U(\hat \tau;\hat x_1,\hat x_2) W_\tau(\hat x) = e^{\frac{2\pi i}{N}}W_\tau(\hat x)~,\quad \text{if}\quad\hat x_1<\hat x<\hat x_2\,,
\\
&\mathbf U(\hat \tau;\hat x_1,\hat x_2) W_\tau(\hat x) = e^{\frac{2\pi i}{\gcd(N,L_x)} \hat x}W_\tau(\hat x)~,\quad \text{if}\quad\hat x_1<\hat x<\hat x_2\,.
\fe
The action is trivial if $\hat x$ is not between $\hat x_1$ and $\hat x_2$, which follows from the Gauss law.
The defects $W_\tau(\hat x)$ and $W_\tau(\hat x+\gcd(N,L_x))$ carry the same $\mathbb Z_N$ and $\mathbb Z_{\gcd(N,L_x)}$ time-like charges. Therefore, the time-like global symmetry explains the relaxed mobility of the particles, which can hop by $\gcd(N,L_x)$ sites. This is to be compared with the ground state degeneracy of $N\gcd(N,L_x)$, which is a consequence of space-like global symmetry.
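The statement that defects separated by $\gcd(N,L_x)$ sites carry the same time-like charges can be checked directly from the charges read off above, $e^{2\pi i/N}$ (independent of $\hat x$) and $e^{2\pi i \hat x/\gcd(N,L_x)}$. A minimal numerical check (an illustrative sketch; the function name is ours):

```python
from math import gcd
import cmath

def dipole_charge(x, N, Lx):
    """Z_{gcd(N,Lx)} time-like charge of W_tau(x): exp(2*pi*i*x / gcd(N,Lx))."""
    g = gcd(N, Lx)
    return cmath.exp(2j * cmath.pi * x / g)

N, Lx = 4, 6  # gcd(N, Lx) = 2
g = gcd(N, Lx)
for x in range(Lx):
    # W_tau(x) and W_tau(x + gcd(N,Lx)) carry identical dipole charges,
    # so the symmetry allows the particle to hop by gcd(N,Lx) sites.
    assert abs(dipole_charge(x, N, Lx) - dipole_charge(x + g, N, Lx)) < 1e-12
```

The $\mathbb Z_N$ charge is manifestly unchanged under any hop, so the dipole charge is the only constraint, and it permits hops by $\gcd(N,L_x)$ sites.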
\section{Conclusions}
We studied the continuum Lifshitz theory of a compact scalar $\phi$ with action \eqref{dipphi-action1} on a spatial circle in 1+1d. It has momentum and winding dipole global symmetries with a mixed 't Hooft anomaly between them. This leads to an infinite degeneracy in the spectrum.
We also studied a continuum $U(1)$ dipole gauge theory of $(A_\tau,A_{xx})$, which is the 1+1d version of the symmetric tensor gauge theory of \cite{Gu:2006vw,Xu:2006,Pankov:2007,Xu2008,Gu:2009jh,rasmussen,Pretko:2016lgv,Slagle:2018kqf,Du:2021pbc,Pretko:2016kxt}. This theory has defects, $\exp(i\oint d\tau ~A_\tau)$, which describe the world-lines of immobile particles -- fractons. Curiously, the line operator $\oint dx~A_{xx}$ is gauge invariant even without being exponentiated.
To understand the subtleties of the continuum field theories of the scalar and the tensor gauge field, we studied their lattice models. We used the modified Villain model for the scalar theory and a Villain version for the tensor gauge theory. The lattice models are unambiguous and exhibit most of the global symmetries of the continuum theories. Surprisingly, for each lattice model, there are several continuum limits one can take. These continuum limits are described by the same action, but they differ in various global aspects including the field identifications. Correspondingly, the global symmetries of the continuum models and their observables (defects and operators) are different.
We discussed three continuum limits of the scalar field theory. Two of them, the $\phi$- and $\Phi$-theories, are dual to each other, while the third, the $\hat \phi$-theory, is self-dual. The latter is also scale-invariant under the Lifshitz scale transformation $x\rightarrow \lambda x$ and $\tau\rightarrow \lambda^2 \tau$. Their global symmetries are summarized in Table \ref{tbl:lat-cont-sym}.
While the modified Villain lattice model has the local operators $e^{i\phi}$ and $e^{i\Delta_x\phi}$, none of the three continuum theories has both of these operators at the same time: the $\phi$-theory has $e^{i\phi}$ but no $e^{i\partial_x\phi}$, the $\Phi$-theory has $e^{i\partial_x\Phi}$ but no $e^{i\Phi}$, and the $\hat\phi$-theory has neither $e^{i\hat\phi}$ nor $e^{i\partial_x\hat\phi}$. This observation follows from the mass dimensions of these continuum fields.\footnote{More precisely, when we say that the operator $e^{i\partial_x\phi}$ is absent in the continuum theory, we mean the following. The lattice operator $e^{i\Delta_x\phi}$ flows to a nontrivial operator in the continuum limit. At leading order it flows to the identity operator. (In this sense it is trivial in the continuum.) However, $e^{i\Delta_x\phi}-1$ flows, up to wave function renormalization, to $\partial_x\phi$, which is nontrivial. And at higher orders we find additional operators. The fact that $e^{i\Delta_x\phi}$ transforms under the lattice momentum dipole symmetry, leads to the fact that in the continuum, $\partial_x \phi$ also transforms (albeit inhomogeneously) under this symmetry.}
We summarized the local operators on the lattice and in the continuum in Table \ref{tbl:lat-cont-chargedop}.
We also discussed three continuum limits of the $U(1)$ dipole gauge theory. These are the pure gauge theories associated with the momentum symmetries of the $\phi$-, $\Phi$- and $\hat \phi$-theories. As already mentioned above, the $A$-theory has quantized fracton defects, but the line operators are not quantized. In the $\tilde A$-theory, there are no fracton defects and the line operators are exponentiated with a quantized coefficient. In the $\hat A$-theory, there are no fracton defects and the line operators are not quantized. The global symmetries and fluxes in these theories are summarized in Table \ref{tbl:dipA-lat-cont}.
Finally, we discussed the BF lattice version of the $\mathbb Z_N$ dipole gauge theory in 1+1d. Unlike the $U(1)$ theory, there are no fractons in the $\mathbb Z_N$ theory. The would-be fractons can hop by $N$ sites on the lattice. In fact, if the lattice has $L_x$ sites with periodic boundary conditions, the would-be fracton can hop by even smaller steps, of $\gcd(N,L_x)$ sites. Surprisingly, the amount by which the particle can hop locally depends on the total number of sites of the lattice. Consequently, the $L_x\to \infty$ limit is not well defined. This subtlety, which is related to phenomena observed in \cite{Haah:2011drr,Yoshida:2013sqa,Meng,Manoj:2020wwy,Rudelius:2020kta}, reflects the UV/IR mixing in these theories, as emphasized in \cite{Gorantla:2021bda}.
One of the tools we used was the notion of a time-like global symmetry. This is a global symmetry that acts trivially on the operators and the Hilbert space without defects, but it acts nontrivially on defects extended in the time direction.
The time-like global symmetry leads to selection rules and constrains the shapes and locations of line defects.
When these defects represent the world-line of particles, the time-like global symmetry explains their mobility restrictions as a result of a global symmetry, rather than a gauge symmetry.
This discussion based on the time-like global symmetry makes precise the intuition about conservation of ``gauge charges."
Specifically, in the $U(1)$ dipole gauge theory, the time-like global symmetry completely restricts the mobility of the fracton defects, while in the $\mathbb Z_N$ theory, it explains the relaxed mobility of the would-be fractons.
As we said above, we can summarize the role of the global symmetries in these exotic systems as follows. {\it The space-like global symmetries lead to the peculiar ground state degeneracy and the time-like global symmetries lead to the unusual restricted mobility of the defects.}
Throughout the paper, we focused on models in 1+1d, but we expect many of these subtleties to be present in the higher dimensional versions of these theories. For instance, the origin of the infinite degeneracy in our 1+1d compact Lifshitz theory is almost the same as the degeneracy in the 2+1d quantum Lifshitz theory \cite{Henley1997,Fradkin:2004}.
Similarly, the absence of the fracton defects in the $\mathbb{Z}_N$ symmetric tensor gauge theory (unlike its $U(1)$ counterpart) was also observed in the higher-dimensional models of \cite{Bulmash:2018lid,Ma:2018nhd}.
\section*{Acknowledgements}
We thank X.\ Chen, M.\ Hermele, E.\ Lake, Y.\ Oz, W.\ Shirley, and T.\ Senthil for helpful discussions.
PG was supported by the Physics Department of Princeton University.
HTL was supported in part by a Croucher fellowship from the Croucher Foundation, the Packard Foundation and the Center for Theoretical Physics at MIT.
The work of NS was supported in part by DOE grant DE$-$SC0009988.
NS was also supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, NS).
The authors of this paper were ordered alphabetically.
Opinions and conclusions expressed here are those of the authors and do not necessarily reflect the views of funding agencies.